burek021 at gmail.com
Thu Jun 9 02:05:03 CEST 2016
[01:33:42 CEST] <cone-832> ffmpeg 03James Almer 07master:49b024663501: avformat/matroskadec: force 48kHz sample rate when rescaling Opus inital padding
[01:51:54 CEST] <philipl> BtbN: sorry, pushed to where? I don't see it on your cuvid branch
[02:07:56 CEST] <jya> is there a way to get ffprobe to ignore and not attempt to determine the codec of a stream, and simply display the pts/dts of an mp4 file?
[02:23:49 CEST] <philipl> BtbN: Oh, I see. rebased.
[10:21:22 CEST] <BtbN> andrey_turkin_, also, I'm not sure how easily frame output can be delayed with the classic hwaccels. And without that cuvid performs horribly.
[10:22:58 CEST] <nevcairiel> you cant use cuvid with a standard hwaccel, as those also require letting the base decoder perform the reordering
[10:23:16 CEST] <nevcairiel> unless you can trick cuvid into doing that
[10:27:24 CEST] <andrey_turkin> I see
[10:30:24 CEST] <andrey_turkin> turns out nvenc doesn't reorder SEI data. So my A53 patch doesn't do that much good there
[11:15:30 CEST] <cone-397> ffmpeg 03Muhammad Faiz 07master:1e69ac9246be: avfilter/avf_showcqt: cqt_calc optimization on x86
[11:30:19 CEST] <BtbN> nevcairiel, it's the cuvid Parser that does the re-ordering.
[11:30:30 CEST] <BtbN> The pure cuvid decoder should work fine as a classic hwaccel
[14:18:57 CEST] <kierank> who are the mxf people to talk to
[14:19:02 CEST] <kierank> I have a meeting with smpte director next week
[14:19:08 CEST] <kierank> I need things to complain about
[14:24:21 CEST] <durandal_1707> complain about mxf format or implementation in ffmpeg?
[14:25:23 CEST] <durandal_1707> mxf people disappeared, I did some useful changes to the demuxer long ago
[14:36:56 CEST] <Compn> kierank : complain that ffmpeg is used in every device and computer on the planet and we still dont have access to smpte docs? :P
[14:37:05 CEST] <Compn> or did that change now?
[14:40:37 CEST] <BtbN> andrey_turkin, nevcairiel. Hm, I'm almost tempted to try implementing cuvid as classic hwaccel now. Would have some nice perks, like cc support.
[14:40:50 CEST] <andrey_turkin> that's why I asked )
[14:40:53 CEST] <nevcairiel> and with a ffmpeg hwaccel module same advantages
[14:41:16 CEST] <nevcairiel> filling the structs is probably not hard, can just copy what the others do
[14:41:25 CEST] <BtbN> thinking more about lav* api users here.
[14:41:39 CEST] <BtbN> But the hwaccel could still support initializing CUDA itself if no external context is supplied.
[14:41:56 CEST] <BtbN> It's way less of a mess than it is for vaapi/vdpau/dxva.
[14:41:57 CEST] <andrey_turkin> I tried to look at vdpau implementation and boy it looks scary
[14:41:59 CEST] <nevcairiel> i think there is a precedent for that in one other hwaccel
[14:42:08 CEST] <nevcairiel> re: self-init
[14:42:17 CEST] <BtbN> For vdpau/vaapi/dxva you need a lot of external help to init the context
[14:42:21 CEST] <BtbN> for cuda it's just one function call
[14:43:13 CEST] <BtbN> The problem is, I can't really implement anything other than h264 and vc1 that way, due to lack of compatible hardware.
[14:43:23 CEST] <nevcairiel> how so?
[14:43:25 CEST] <BtbN> I really wish nvidia would release those GT930...
[14:43:29 CEST] <nevcairiel> oh hardware
[14:43:40 CEST] <BtbN> oh, forgot that word, lol
[14:44:03 CEST] <BtbN> Would definitely take a 40€ Maxwell Gen 2 card
[14:44:06 CEST] <Compn> just h264 and vc1 are good enough
[14:44:15 CEST] <BtbN> At least hevc would also be nice
[14:44:19 CEST] <BtbN> And vp9
[14:44:20 CEST] <Compn> dont have to support mpeg4 asp, mpeg2, jpeg
[14:44:20 CEST] <Compn> oh
[14:44:23 CEST] <nevcairiel> they dont have much interest in extremely low end cards that can barely compete with iGPUs, there's just no money to be made there
[14:44:43 CEST] <BtbN> They announced those cards quite a while ago
[14:44:46 CEST] <BtbN> but never released them
[14:45:09 CEST] <nevcairiel> i never saw anything official
[14:45:17 CEST] <nevcairiel> only rumors
[14:45:26 CEST] <BtbN> The latest gt card you can get is Kepler based
[14:45:42 CEST] <BtbN> And even that is kinda complicated, because it has the same name as 2 fermi based ones
[14:45:58 CEST] <BtbN> Well, I will have Pascal hardware in december...
[14:46:29 CEST] <BtbN> And maybe one of those new NUCs, if they support HEVC10
[14:50:21 CEST] <BtbN> Is it possible to delay frame output with the classic hwaccels?
[14:50:33 CEST] <BtbN> I guess I can just queue them internally and start sending them once the queue is filled?
[15:13:15 CEST] <nevcairiel> dont think so
[15:13:23 CEST] <nevcairiel> dxva on nvidia would also benefit from that
[15:13:31 CEST] <nevcairiel> but it would need to be solved in ffmpeg.c somewhere
[15:13:55 CEST] <nevcairiel> (dxva and cuvid really behave the same way, 2 frame delay or something is enough to reach full speed)
[15:14:08 CEST] <nevcairiel> vdpau probably too even
[15:14:37 CEST] <BtbN> The problem with cuvid is, it can't be solved externally
[15:14:39 CEST] <nevcairiel> a proper hwaccel can do no such things internally, it just gets told to decode one frame and return exactly that frame
[15:14:47 CEST] <BtbN> It has to be queued internally
[15:14:50 CEST] <nevcairiel> if it does anything else, its screwed
[15:15:15 CEST] <BtbN> Because there is no way i can return the mapped cuvid frame
[15:15:24 CEST] <BtbN> I have to cuMemcpy it
[15:16:00 CEST] <nevcairiel> why cant you return a pointer to that internal format, and have the client code deal with whatever is required
[15:16:19 CEST] <BtbN> Because I have to unmap it at some point.
[15:16:30 CEST] <BtbN> There can only be a very small number of mapped frames at the same time.
[15:16:55 CEST] <nevcairiel> dont map it then, have the client code also do that
[15:17:14 CEST] <BtbN> What should I return then? There's nothing else.
[15:17:51 CEST] <BtbN> Also, for mapping the frame, you need the cuvid decoder instance
[15:18:34 CEST] <nevcairiel> exposing that through hwaccel_context wouldnt be impossible
[15:18:58 CEST] <BtbN> It would still be horrible, needing a way to lock those "frames", which are nothing more than an integer index for the map call.
[15:19:04 CEST] <nevcairiel> in any case, a hwaccel can do nothing of that sort
[15:19:17 CEST] <nevcairiel> it needs to decode and return the same frame immediately
[15:19:46 CEST] <nevcairiel> so the only option is to delay the mapping to the user-code somewhere
[15:20:00 CEST] <nevcairiel> maybe with some API to do that
[15:20:13 CEST] <BtbN> It would also add a cuvid dependency to everything consuming those cuda frames, upending the entire existing codebase using CUDA frames
[15:20:38 CEST] <BtbN> So in the end, it would introduce a new CUVID pix fmt, and a filter to convert CUVID->CUDA
[15:21:02 CEST] <nevcairiel> whats a cuda frame? just a device ptr?
[15:21:05 CEST] <BtbN> yes
[15:24:17 CEST] <nevcairiel> yeah well that is kinda screwing you, you cant really output a cuda frame without crippling the decoding performance severely (even more so than without the delay, since you would even by-pass re-ordering)
[15:25:22 CEST] <BtbN> So the current implementation it is.
[15:26:04 CEST] <BtbN> It's not that it's bad. But stuff like CC parsing is close to impossible.
[15:26:19 CEST] <nevcairiel> yeah because you need to re-order that
[15:26:20 CEST] <BtbN> Unless it's somehow possible to invoke that parser individually.
[15:26:24 CEST] <nevcairiel> that would be really complex
[15:27:59 CEST] <BtbN> Didn't even think about the reordering problem.
[15:28:03 CEST] <BtbN> Yes, that makes it even worse.
[15:28:15 CEST] <nevcairiel> parsing the SEI itself wouldnt be that bad
[15:28:28 CEST] <nevcairiel> but its all out of whack
[15:28:29 CEST] <nevcairiel> :D
[15:29:21 CEST] <nevcairiel> there are reasons why i tell people to no longer use cuvid or qsv in my own project, it just lacks all these features that avcodec+hwaccel gives them, with no downsides in my case
[15:30:00 CEST] <BtbN> What else is there that doesn't work that way, except for CC?
[15:30:50 CEST] <nevcairiel> i would have to double check, but it wasnt the only thing
[15:31:36 CEST] <nevcairiel> if you dont need it for zero-copy or something, barely any reason to not use dxva2/vdpau/vaapi
[15:31:52 CEST] <BtbN> Well, you need X for vdpau
[15:31:56 CEST] <andrey_turkin> yep
[15:32:22 CEST] <nevcairiel> does it work with dummy X servers?
[15:32:23 CEST] <BtbN> Which hopefully changes at some point, with EGL/Wayland stuff appearing. But i guess you need a running Wayland-Thing then...
[15:32:38 CEST] <BtbN> It needs an X server with the nvidia driver running.
[15:35:10 CEST] <BtbN> Well, but to summarize this: CUVID as a hwaccel would be severely slowed down, because there's no way to delay it. And the current implementation doesn't benefit from the ffmpeg h264 parser.
[15:36:43 CEST] <BtbN> I'd choose the cuvidParser solution.
[15:37:44 CEST] <nevcairiel> yeah screw a few extra features if the speed is dragged down
[15:38:16 CEST] <BtbN> If someone needs those features, he can always go the h264 sw -> hwupload_cuda way
[15:38:33 CEST] <BtbN> or potential vdpau/cuda interop
[15:39:39 CEST] <cone-559> ffmpeg 03Michael Niedermayer 07master:f883f0b0bd0d: avcodec/h264: Put context_count check back
[15:39:39 CEST] <cone-559> ffmpeg 03Mark Thompson 07master:9d8664dd848e: MAINTAINERS: Add myself as maintainer for VAAPI encoders
[15:39:39 CEST] <BtbN> Especially as nvenc apparently can't easily "encode" CC?
[15:42:34 CEST] <BtbN> Also, I just noticed, I'm using WAY too much delay
[15:42:47 CEST] <BtbN> currently delaying by 8 frames, instead of 4.
[15:42:55 CEST] <nevcairiel> in my experience, you only need like 2
[15:43:00 CEST] <nevcairiel> but 4 is probably fine
[15:43:06 CEST] <BtbN> 2-4 is recommended
[15:43:09 CEST] <BtbN> So i went with 4
[15:43:21 CEST] <BtbN> but missed that both my code and the cuParser are delaying it now.
[15:43:23 CEST] <BtbN> So it's doubled
[16:59:40 CEST] <vade> Hi - AVFilter question - pre-emptive apologies for asking in here: I'm attempting to use the loudnorm filter via the AVFilter API. I have a normal filter chain set up with a buffer source, sink, and a filter or two in between. When I add the loudnorm filter (along with a log callback to fetch the output), I see the filter set up, but my buffer source always gets EAGAIN - removing the loudnorm filter works perfectly and samples get
[16:59:41 CEST] <vade> passed through. Since loudnorm is an analysis filter, is there a slightly different-from-standard programmatic filter setup for it, unlike a transcode pipeline?
[18:47:52 CEST] <haasn> A recent ffmpeg change (earlier today) introduced some assembly that fails building in older versions of yasm (1.1.0)
[18:47:57 CEST] <haasn> This is causing travis builds of mpv to fail
[18:48:09 CEST] <haasn> Would it be possible to disable this asm when too old versions of yasm are detected?
[18:49:32 CEST] <JEEB> I guess what generally has been done with that kind of stuff is that you just raise the minimum yasm version
[18:49:51 CEST] <JEEB> and you build with --disable-yasm
[18:50:03 CEST] <JEEB> (or get a newer yasm into your PATH)
[18:56:06 CEST] <Compn> haasn : what change / code error ?
[18:56:22 CEST] <Compn> ignore if you already brought it up on -devel
[18:56:44 CEST] <haasn> Compn: https://travis-ci.org/mpv-player/mpv/jobs/136098741 libavfilter/x86/avf_showcqt.asm:202: error: invalid combination of opcode and operands
[19:01:52 CEST] <haasn> I'm guessing 1e69ac9246be8c9a1bf595e7fe949df5bc541c55 is the faulty commit
[19:24:09 CEST] <cone-559> ffmpeg 03James Almer 07master:99b899483e10: avutil/x86util: move haddps sse emulation from showcqt
[19:24:10 CEST] <cone-559> ffmpeg 03James Almer 07master:82dbfccaf00b: x86/aacdec: use HADDPS macro
[20:20:33 CEST] <cone-559> ffmpeg 03Muhammad Faiz 07master:a096d3ec4782: avfilter/avf_showcqt: set range on fps/rate/r option
[21:06:08 CEST] <cone-559> ffmpeg 03Muhammad Faiz 07master:2991d935203e: avfilter/src_movie: call open_stream after guess_channel_layout
[21:56:51 CEST] <BtbN> Hm, so is the current cuvid patch set ok to push? Did some minor tweaks locally to fix the double-delay.
[22:00:15 CEST] <andrey_turkin_> looks ok to me
[22:00:43 CEST] <andrey_turkin_> I don't think there were any objections from nevcairiel too, right?
[22:01:15 CEST] <BtbN> The frames buffer unref should also be fine, can't really think of anything it could possibly break.
[22:02:12 CEST] <cone-559> ffmpeg 03Rostislav Pehlivanov 07master:a04ae469e748: aacsbr: reduce element type mismatch warning severity
[22:02:25 CEST] <andrey_turkin_> yeah. I figured out ist->hw_frames_ctx is (mentally) part of hwaccel state and not ist; so every hwaccel manages it as it pleases
[22:03:12 CEST] <andrey_turkin_> too bad "internal" things like that don't enjoy same good documentation as external API
[22:03:30 CEST] <BtbN> I don't think any of the hwaccels unrefs it, so unrefing it on cleanup seems correct
[22:03:37 CEST] <andrey_turkin_> vaapi does
[22:03:41 CEST] <BtbN> it also does that for the global hwdevice
[22:04:11 CEST] <BtbN> Oh, indeed
[22:04:36 CEST] <BtbN> In that case the patch is wrong, hm
[22:04:45 CEST] <BtbN> It wouldn't break anything though
[22:04:46 CEST] <andrey_turkin_> it doesn't do anything bad, I think
[22:23:23 CEST] <jkqxz> Please do change the behaviour of vaapi if you feel the referencing would be more consistent a different way. (Not having anything else to test is unhelpful when trying to decide how some notionally-generic API should work.)
[22:26:04 CEST] <BtbN> It's just a question of who owns that pointer.
[22:28:26 CEST] <BtbN> And "the hwaccel owns it" is perfectly fine.
[22:35:04 CEST] <kylophone> leave
[00:00:00 CEST] --- Thu Jun 9 2016