[Ffmpeg-devel-irc] ffmpeg-devel.log.20160827
burek
burek021 at gmail.com
Sun Aug 28 03:05:03 EEST 2016
[00:19:59 CEST] <nevcairiel> It's still useful if you just know which ones fail
[00:20:02 CEST] <nevcairiel> It's not many
[00:22:33 CEST] <nevcairiel> I have considered somehow hooking up a fate station that tests this, but my fate box is virtual so no GPU
[00:33:58 CEST] <durandal_1707> will add copy mode to metadata filter
[00:35:46 CEST] <durandal_1707> hmm, does anyone get infinite loop with: ffmpeg -h full?
[00:36:08 CEST] <nevcairiel> That usually means two things share the same avclass
[02:05:15 CEST] <cone-688> ffmpeg 03Michael Niedermayer 07master:c75273310cf1: avformat/utils: End probing if the expected codec surpasses AVPROBE_SCORE_STREAM_RETRY
[07:47:16 CEST] <DSM___> michaelni: issue seems to be fixed
[13:12:43 CEST] <cone-248> ffmpeg 03Vittorio Giovara 07master:6648da359114: vf_colorspace: Check av_frame_copy_props() return value
[13:12:43 CEST] <cone-248> ffmpeg 03Vittorio Giovara 07master:69abf4f93cb6: vf_colorspace: Add support for full range yuv
[13:25:28 CEST] <BtbN> Great. Nvidia put the NVENC SDK/Header behind a registration-wall.
[13:25:33 CEST] <BtbN> Now I want to bundle it again.
[13:27:15 CEST] <BtbN> But there now is VP9 decoding in there.
[13:29:13 CEST] <JEEB> welp
[13:34:08 CEST] <BtbN> And they have to acknowledge my access to the SDK. In a manual process.
[13:39:08 CEST] <JEEB> that's basically a big middle finger kind of thing
[13:39:25 CEST] <JEEB> since they freed it up for a while and now it's back to the old way
[13:39:34 CEST] <BtbN> The header is still MIT licensed
[13:40:03 CEST] <BtbN> Otherwise I'd have to reject the two patches on the ML...
[13:41:07 CEST] <BtbN> https://developer.nvidia.com/nvidia-video-codec-sdk <- they even advertise FFmpeg
[13:48:10 CEST] <cone-248> ffmpeg 03Paul B Mahol 07master:b2c6a11fb604: avfilter/vf_atadenoise: add planes option
[13:50:27 CEST] <BtbN> michaelni, https://patchwork.ffmpeg.org/patch/270/ it doesn't exactly handle those kinds of attached patches well.
[14:49:36 CEST] <RiCON> BtbN: if you only need nvencodeapi.h you can also get it here: https://github.com/jb-alvarado/media-autobuild_suite/blob/gh-pages/extras/nvEncodeAPI.h
[14:50:04 CEST] <BtbN> The registration was accepted already
[14:50:13 CEST] <BtbN> took them "only" an hour.
[14:50:41 CEST] <BtbN> Preparing a patch to bundle the header with ffmpeg.
[16:14:40 CEST] <cone-248> ffmpeg 03Paul B Mahol 07master:f242d74d170e: avfilter/vf_convolution: add >8 bit depth support
[16:22:26 CEST] <cone-248> ffmpeg 03James Almer 07master:dc7e5adbc086: avformat/utils: fix a codecpar non use
[17:42:57 CEST] <atomnuker> can someone take a look at the MLP encoder?
[17:43:10 CEST] <atomnuker> the patch's been on the ML for a few days now
[18:22:34 CEST] <BtbN> If I have a filter formula, that applies on RGB colors, can I just apply it unmodified to YUV, and it will work?
[18:22:51 CEST] <nevcairiel> that depends entirely on what it does
[18:23:16 CEST] <BtbN> remove color-spill for greenscreens and stuff
[18:34:25 CEST] <BtbN> resR = clamp(R - keyR + (R+G+B)/3, 0.0, 1.0)*intensity + R*(1-intensity)
[19:43:15 CEST] <kierank> is there an idiom for rounding down to the nearest multiple
[19:43:24 CEST] <kierank> i.e. the opposite of (x+n-1)/n
[19:43:37 CEST] <kierank> oh bah
[19:43:45 CEST] <kierank> (x / n) * n
[19:49:31 CEST] <DHE> I have a minipatch I'd like feedback on: https://github.com/DeHackEd/FFmpeg/commit/fad431cf451a
[19:50:23 CEST] <DHE> The issue I have is that at 29.97fps, most reasonable hls_time values like 6 seconds will result in 6.006 second durations for the segment files, which the old code would round up to 7s
[19:50:42 CEST] <DHE> and I have a specific player that has... uhh... issues when the error is relatively large
[19:51:11 CEST] <JEEB> DHE: don't comment out code but rather remove it or add extra checks. also use spaces instead of tabs
[19:51:27 CEST] <DHE> I mean from a purely conceptual standpoint. I know commenting out the old code isn't right.
[19:51:51 CEST] <JEEB> also what did the time variable contain and when is it initialized or not initialized?
[19:52:06 CEST] <DHE> it's set with -hls_time X and defaults to 2
[19:53:45 CEST] <DHE> the original problem (vendor-specific possibly) is that it assumes that the lengths of the segments are exactly what the target_duration is set to. when it's 6.006 and the file contains '7' the internal clock drifts pretty badly
[19:54:05 CEST] <DHE> and according to the HLS spec it must be a whole integer. :/
[19:54:35 CEST] <JEEB> if you set it to six and your actual segment length is 6.006 that is actually out of spec I think though?
[19:54:38 CEST] <JEEB> https://tools.ietf.org/html/draft-pantos-http-live-streaming-19#section-4.3.3.1
[19:55:14 CEST] <DHE> interesting, I must have misread that...
[19:55:48 CEST] <DHE> well, damned if I do, damned if I don't...
[19:57:00 CEST] <JEEB> and yeah, that means your thing you're poking is completely bonkers :P congratulations. I've had my own share of crappy vendors with other formats
[19:58:05 CEST] <JEEB> but yeah, if you try to do something that's against the spec then most likely it isn't going to get merged
[19:58:26 CEST] <DHE> -this one's funny. if you set a large hls_time for a live stream you can basically get pause-live-TV without needing storage on the player. but it's not using the timestamps to measure times, it just assumes every segment is the target duration length.
[20:07:11 CEST] <DHE> bug report submitted. the spec wins
[20:07:17 CEST] <DHE> to the player vendor I mean
[20:07:27 CEST] <JEEB> hopefully it gets fixed
[20:07:56 CEST] <JEEB> in my case sometimes the vendor would point at a completely different part of the spec (unrelated) or would say it would get fixed in the next model
[20:08:48 CEST] <DHE> they've given me test firmwares for a couple of bugs I found. that's reassuring, but releases are infrequent... :/
[20:40:37 CEST] <michaelni> BtbN, i think attaching multiple patches per mail isn't supported in patchwork. If there's something that can be done about this, someone please tell me, but IIUC raz correctly, the way projects generally handle this is to reject such patches and ask for a clean resubmission. Multiple patches per mail are also a bit annoying to deal with even without patchwork
[21:12:12 CEST] Action: durandal_1707 wonders why is there dither in owdenoise
[21:13:56 CEST] <durandal_1707> michaelni: I'm more interested in generic frame multithreading in lavfi
[21:14:51 CEST] <DHE> as opposed to slice threading?
[21:18:12 CEST] <durandal_1707> yes, unless there are clear benefits to doing it the other way around
[21:18:45 CEST] <michaelni> <michaelni> durandal_1707, a filter could implement frame multithreading without any changes to the avfilter core. It could be done through the core with multiple filter instances filtering several frames at once, there are probably more options and i wonder if this wasn't discussed before on the ML
[21:19:01 CEST] <michaelni> i think you didn't see ^^ as you timed out after
[21:24:13 CEST] <DHE> depends on the filters. for some this could work (eg: scale) but some require processing frames in order (eg: deinterlacers)
[21:27:23 CEST] <michaelni> DHE: filters like deinterlacers would need to synchronize the different instances so none accesses something from a previous frame that hasn't been calculated yet. It's the same as with P frames in video decoders, but slice multithreading might be easier
[21:30:59 CEST] <DHE> in a "thinking out loud" way I'm wondering if there's some advantage to a work queue system for filters and/or decoders. I have some big ffmpeg jobs that end up creating hundreds of threads while running. I do have a lot of cores, but not THAT many. Could this help limit the number of threads needed?
[21:31:19 CEST] <DHE> process ulimits do count threads, so that's a pretty big hit on systems where limits have not been configured.
[21:37:00 CEST] <Timothy_Gu> BtbN: in case you are interested, I put all the MIT-licensed headers of the nvidia video codec sdk here: https://github.com/TimothyGu/nvidia-video-codec-sdk
[21:37:22 CEST] <BtbN> there's more than one?
[21:38:00 CEST] <BtbN> cuvid... interesting.
[21:38:14 CEST] <BtbN> With that it could be made non-nonfree and bundled as well
[21:38:33 CEST] <BtbN> wait, it needs dynlink_cuda.h as well
[21:38:35 CEST] <BtbN> so, nevermind that
[21:38:51 CEST] <Timothy_Gu> yeah. dynlink_cuda is nonfree
[21:39:28 CEST] <Timothy_Gu> For a diff between 6.0 and 7.0 see https://github.com/TimothyGu/nvidia-video-codec-sdk/compare/v6.0...v7.0?w=1
[21:42:43 CEST] <DHE> hardware vp9 encoding is in the newest chips?
[21:42:54 CEST] <BtbN> It's in Maxwell and Pascal
[21:43:15 CEST] <BtbN> oh, _en_coding
[21:43:20 CEST] <BtbN> I don't think that's in there at all
[21:43:25 CEST] <BtbN> https://developer.nvidia.com/nvidia-video-codec-sdk <- has a feature matrix
[21:43:44 CEST] <RiCON> vp9 is only on pascal, iirc
[21:43:51 CEST] <durandal_1707> DHE: you got so many threads with lavfi or lavc?
[21:43:59 CEST] <BtbN> RiCON, it's also marked on GM20x
[21:44:22 CEST] <DHE> durandal_1707: both really. but lavfi seems to be the worst offender. x264 needs threads and I have multiple instances
[21:44:24 CEST] <RiCON> oh, so 950/960
[21:44:35 CEST] <BtbN> All the 900 cards
[21:44:52 CEST] <RiCON> right, hevc is the one that's only in those two
[21:45:06 CEST] <BtbN> HEVC is exception ²
[21:45:13 CEST] <Timothy_Gu> lol nvidia even lists ffmpeg as "Video Codec SDK In Action"
[21:45:23 CEST] <BtbN> "Only GM206 in the Maxwell generation of GPUs supports hardware accelerated HEVC decoding"
[21:45:27 CEST] <durandal_1707> DHE: what filters do you use at the same time?
[21:45:49 CEST] <DHE> durandal_1707: yadif, split, and a scale on each split output
[21:46:16 CEST] <RiCON> Timothy_Gu: they also use the nvresize patch
[21:46:17 CEST] <DHE> durandal_1707: one thing of note is that this machine has 80 threads total, so there's a fairly strong bias to big numbers on everything
[21:46:31 CEST] <durandal_1707> well, only yadif creates threads
[21:46:54 CEST] <BtbN> RiCON, nope. They just link to ffmpeg org.
[21:47:04 CEST] <BtbN> All the functionality of that nvresize stuff is in ffmpeg by now.
[21:47:19 CEST] <RiCON> it is?
[21:47:26 CEST] <RiCON> i remember it being refused
[21:47:34 CEST] <BtbN> And properly reimplemented
[21:47:44 CEST] <Timothy_Gu> vf_scale_npp?
[21:47:52 CEST] <BtbN> yes, and cuviddec, and CUDA frames
[21:48:00 CEST] <BtbN> you can have a full decode, scale, encode chain with CUDA now.
[21:48:32 CEST] <Timothy_Gu> oh cool
[21:51:43 CEST] <durandal_1707> michaelni: why is number of slice threads global per graph, instead of per filter?
[21:52:20 CEST] <durandal_1707> this is bad design
[21:52:56 CEST] <DHE> this would require a central thread queue. which I don't think exists right now
[21:53:29 CEST] <durandal_1707> what?
[21:53:54 CEST] <durandal_1707> no, it can be controlled per filter
[21:53:59 CEST] <DHE> is there? I don't recall seeing one.
[21:54:26 CEST] <durandal_1707> because I have an idea
[21:54:42 CEST] <DHE> oh, sure, I go looking closer and find it in 15 seconds...
[23:14:34 CEST] <durandal_1707> michaelni: added possibility for finer control of the number of threads which will be used
[23:47:53 CEST] <durandal_1707> jamrial: so what did you try that does not work with transcode.c?
[23:50:46 CEST] <jamrial> durandal_1707: try converting any mpeg2 or h264 file in and to avi/mp4. when I tested, the resulting files had stutter
[23:52:09 CEST] <jamrial> pre-patch h264 seems to fail with a "broken ffmpeg presets" error from libx264, though
[23:52:44 CEST] <jamrial> probably because of avcodec_copy_context(), which is removed with this patch
[23:55:19 CEST] <durandal_1707> who didn't fill the evaluation?
[00:00:00 CEST] --- Sun Aug 28 2016