[Ffmpeg-devel-irc] ffmpeg-devel.log.20180116
burek
burek021 at gmail.com
Wed Jan 17 03:05:04 EET 2018
[00:00:13 CET] <atomnuker> hm, that looks alright
[00:00:14 CET] <jkqxz> I do think you should get an AMD and/or an Nvidia for testing this, because discrete GPUs really do have significantly different properties wrt where memory is and how it can be used.
[00:00:29 CET] <atomnuker> yeah, I'll get one when I move soon
[00:00:42 CET] <atomnuker> what command line do you use?
[00:00:55 CET] <atomnuker> I can't get -init_hw_device to work because hwupload needs a reference
[00:01:03 CET] <atomnuker> so I use -vulkan_device <>
[00:01:58 CET] <jkqxz> See above. You need -filter_hw_device.
[00:02:40 CET] <jkqxz> Having thought about that stuff, I think that rather than introducing -foo_device for everything, it would be better to make it so that if there is exactly one device, it's assumed to be the filter device.
[00:02:52 CET] <jkqxz> (And kill -vaapi_device.)
[00:03:11 CET] <atomnuker> what's with the p11/cl prefix?
[00:03:55 CET] <atomnuker> are they custom so you can choose?
[00:04:16 CET] <jkqxz> Yeah, completely arbitrary.
[00:04:59 CET] <jkqxz> I gave them names so I could easily switch by just changing the -filter_hw_device.
[00:05:44 CET] <atomnuker> yeah, that works perfectly, I'll remove the hack in ffmpeg_opt
[00:09:27 CET] <atomnuker> nope, getting rid of the pix_fmt != 0 assumption in transfer formats (pushed that to the repo) doesn't fix 420
[00:10:38 CET] <atomnuker> my mistake, nevermind
[00:10:50 CET] <atomnuker> had 1 more assumption in vkfmt_from_pixfmt
[00:11:58 CET] <jkqxz> I think that's a vote for wm4's AV_PIX_FMT_YUV420P != 0 change :)
[00:15:22 CET] <atomnuker> yep, I'd vote for that now :)
[00:15:56 CET] <nevcairiel> one bad assumption doesn't replace all the others
[00:16:06 CET] <nevcairiel> plenty of places check for < 0, I'm sure
[00:16:17 CET] <nevcairiel> unless you can prove you fixed them all, no dice =p
[00:16:56 CET] <nevcairiel> why any new code would even check against a literal value instead of the symbolic name is just beyond me, however
[00:16:59 CET] <nevcairiel> so you deserve any pain you get
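(For context, the pitfall being discussed looks roughly like this; the helper names here are made up for illustration, this is not the actual hwcontext code:)

```c
#include <libavutil/pixfmt.h>

/* Hypothetical helpers: AV_PIX_FMT_YUV420P happens to be 0, so treating a
 * pixel format as a boolean silently misclassifies it. */
static int is_format_set_buggy(enum AVPixelFormat fmt)
{
    return fmt != 0;                 /* wrong: 0 is AV_PIX_FMT_YUV420P, a valid format */
}

static int is_format_set(enum AVPixelFormat fmt)
{
    return fmt != AV_PIX_FMT_NONE;   /* correct: "unset" is -1, not 0 */
}
```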
[00:18:33 CET] <atomnuker> I find it nice that radv advertises sane alignment values; nvidia just says 1 is optimal for everything, so the vulkan code in mpv doesn't try to align anything beyond what's absolutely required
[00:19:22 CET] <atomnuker> and even then, what's absolutely required is what's required on nvidia, so rgba pixfmts with intel are still broken
[00:19:51 CET] <nevcairiel> why would nvidias values affect intel
[00:21:18 CET] <atomnuker> I guess it's because nvidia doesn't have alignment requirements whilst intel does?
[00:21:51 CET] <nevcairiel> from your first comment I assume there is an API you can ask about those, so again, why would nvidia cause intel to break?
[00:22:07 CET] <nevcairiel> unless your design is faulty and you ask the wrong device
[00:24:14 CET] <nevcairiel> or if it's trying to share frames between those two GPUs, it should ask both for their requirements and determine the smallest alignment needed to work on both
[00:26:37 CET] <atomnuker> no idea why mpv's vulkan is broken on intel; I tried to debug it a few months ago and I did indeed see the image had an incorrect stride, and the code there correctly asserted that something was wrong
[00:26:51 CET] <atomnuker> mpv doesn't ask for requirements like I said
[00:27:06 CET] <atomnuker> sorry, optimal buffer alignments, not requirements
[00:27:32 CET] <atomnuker> there's a requirement for mapping memory which is always 4k on any gpu I've seen
[00:27:50 CET] <jkqxz> That's probably just the system page size.
[00:28:02 CET] <atomnuker> yep, that makes sense
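(A rough sketch of querying the per-device alignment limits instead of hard-coding nvidia-like values of 1; the helper names are illustrative, this is neither mpv's nor the hwcontext's actual code:)

```c
#include <vulkan/vulkan.h>

/* Round x up to a multiple of align (works for any non-zero alignment). */
static VkDeviceSize align_up(VkDeviceSize x, VkDeviceSize align)
{
    return ((x + align - 1) / align) * align;
}

/* Pad a row pitch to what the device reports as optimal for buffer-image
 * copies; this is 1 on some drivers but larger on others (e.g. Intel).
 * The 4k mapping requirement mentioned above corresponds to
 * props.limits.minMemoryMapAlignment, i.e. typically the system page size. */
static VkDeviceSize padded_row_pitch(VkPhysicalDevice phys_dev, VkDeviceSize row_bytes)
{
    VkPhysicalDeviceProperties props;
    vkGetPhysicalDeviceProperties(phys_dev, &props);
    return align_up(row_bytes, props.limits.optimalBufferCopyRowPitchAlignment);
}
```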
[00:44:33 CET] <SortaCore> I'm struggling to get metadata written into an MP4
[00:44:48 CET] <SortaCore> even changing the encoder tag, which is written anyway, fails
[00:46:23 CET] <SortaCore> is this by design, or should any metadata be written?
[00:50:58 CET] <atomnuker> just set key=value using av_dict in avformatcontext->metadata
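(A minimal sketch of what atomnuker describes; the key names are just examples, and whether a key survives depends on the muxer:)

```c
#include <libavformat/avformat.h>
#include <libavutil/dict.h>

/* Attach container-level metadata before avformat_write_header().
 * The mov/mp4 muxer only writes keys it knows how to map (see movenc.c),
 * so unknown keys are silently dropped. */
static int set_output_metadata(AVFormatContext *oc)
{
    int ret;

    if ((ret = av_dict_set(&oc->metadata, "title",  "Example title",  0)) < 0)
        return ret;
    if ((ret = av_dict_set(&oc->metadata, "artist", "Example artist", 0)) < 0)
        return ret;

    return 0; /* then call avformat_write_header(oc, NULL) as usual */
}
```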
[03:48:08 CET] <SortaCore> atomnuker: I do, but it's not written
[03:48:20 CET] <SortaCore> the file only contains encoder Lavf and that's it
[03:48:51 CET] <atomnuker> guess either the keys you're setting aren't supported or can't be mapped to anything mp4 supports
[04:37:28 CET] <atomnuker> jkqxz: right, found out why it crashed, I did "flags = VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;" on every image when that's not the case
[04:37:38 CET] <atomnuker> not sure what I was on when I wrote that
[04:38:15 CET] <atomnuker> probably a leftover from where I didn't keep track of flags
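(For reference, the usual way to pick a memory type per image instead of hard-coding DEVICE_LOCAL, as a sketch rather than the actual hwcontext code:)

```c
#include <vulkan/vulkan.h>

/* Pick a memory type index that is allowed by the image's VkMemoryRequirements
 * (type_bits) and has the requested property flags; returns -1 if none fits. */
static int find_mem_type(VkPhysicalDevice phys_dev, uint32_t type_bits,
                         VkMemoryPropertyFlags req_flags)
{
    VkPhysicalDeviceMemoryProperties mprops;
    uint32_t i;

    vkGetPhysicalDeviceMemoryProperties(phys_dev, &mprops);

    for (i = 0; i < mprops.memoryTypeCount; i++) {
        if ((type_bits & (1u << i)) &&
            (mprops.memoryTypes[i].propertyFlags & req_flags) == req_flags)
            return (int)i;
    }
    return -1;
}
```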
[05:50:57 CET] <atomnuker> jkqxz: I'm wondering how many utilities I should provide in lavu's hwcontext and how much to leave to API users?
[05:51:34 CET] <atomnuker> should I provide buffer allocation functions despite the hwcontext not doing any VkBuffer allocations and only dealing with images?
[05:51:36 CET] <wm4> I'd expect that it provides the basic primitives like frame allocation and upload/download
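(Those primitives already exist generically in lavu's hwcontext API; a rough sketch of the caller side, with the function name made up:)

```c
#include <libavutil/error.h>
#include <libavutil/frame.h>
#include <libavutil/hwcontext.h>

/* Allocate a frame from a hwframes context and upload a software frame into
 * it, i.e. the allocation + upload primitives wm4 mentions. */
static int upload_frame(AVBufferRef *hwframes_ref, AVFrame *sw_frame, AVFrame **out)
{
    AVFrame *hw_frame = av_frame_alloc();
    int ret;

    if (!hw_frame)
        return AVERROR(ENOMEM);

    if ((ret = av_hwframe_get_buffer(hwframes_ref, hw_frame, 0)) < 0 ||
        (ret = av_hwframe_transfer_data(hw_frame, sw_frame, 0)) < 0) {
        av_frame_free(&hw_frame);
        return ret;
    }

    *out = hw_frame;
    return 0;
}
```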
[05:52:10 CET] <atomnuker> well yeah, that's a given, just wondering about other things that could be useful
[05:52:32 CET] <atomnuker> I think maybe I shouldn't have buffer alloc functions there but in lavfi's vulkan utils file
[05:53:49 CET] <atomnuker> because using the same device flags for memory as for images might make buffer access suboptimal (since things like vertex buffers need the fastest local memory, even if it's slow to upload to)
[05:54:26 CET] <atomnuker> I'd better leave it to lavfi/API users, the hwcontext should just do what's best for images only
[05:59:03 CET] <wm4> so that other code would be private to ffmpeg?
[06:01:34 CET] <atomnuker> bah, I'll keep the buffer allocation code inside lavu, it's barely a hundred lines and saves users the annoyance of searching through memory types for flags
[06:02:33 CET] <atomnuker> can't think of anything else that should be in lavu, pipeline things really need to be separate from memory allocation stuff
[06:37:46 CET] <SortaCore> I can't even set the encoder one, it's overwritten
[06:38:02 CET] <SortaCore> is there a list of supported mp4 metadata keys then?
[09:22:44 CET] <mistym> I'm seeing a muxer I'm writing crashing during cleanup - anyone able to help out?
[09:22:50 CET] <mistym> (My code is probably pretty bad, lol)
[09:25:46 CET] <mistym> Backtrace shows that after `av_write_trailer`, it went into `av_opt_free`, and then crashed here in `av_opt_next`: https://github.com/FFmpeg/FFmpeg/blob/master/libavutil/opt.c#L51
[09:26:05 CET] <mistym> (Sorry for linking to a mirror, git.ffmpeg.org blob view seems to not be working right now)
[09:26:28 CET] <mistym> My muxer doesn't have any options, so not sure why it's taken this path
[09:27:49 CET] <mistym> This is my muxer: https://github.com/mistydemeo/FFmpeg/blob/add_segafilm_muxer/libavformat/segafilmenc.c
[09:35:34 CET] <wm4> I think if you set priv_class, the first member in your priv struct must be AVClass
[09:37:30 CET] <mistym> Oh yeah, that was it! Thanks!
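(For reference, the layout wm4 is referring to, roughly; the names are illustrative:)

```c
#include <libavutil/log.h>
#include <libavutil/opt.h>
#include <libavutil/version.h>

/* If AVOutputFormat.priv_class is set, av_opt_free() is run on the private
 * context during cleanup and dereferences its first member as an AVClass
 * pointer, so the AVClass pointer must come first even with no options. */
typedef struct MyMuxContext {
    const AVClass *class;   /* must be the first field */
    /* ...muxer state... */
} MyMuxContext;

static const AVClass my_mux_class = {
    .class_name = "my muxer",
    .item_name  = av_default_item_name,
    .version    = LIBAVUTIL_VERSION_INT,
};
```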
[09:50:26 CET] <mistym> I'm reminded, too - I think I asked about this before, but...
[09:51:49 CET] <Shiz> lots of details that are not really obvious
[09:51:52 CET] <Shiz> happens to the best
[09:52:00 CET] <mistym> This is a format where I need to buffer the video and audio in advance.
[09:52:36 CET] <mistym> This is because the header contains a table with info on every sample in the file, so I have to have knowledge of that before the header gets written.
[09:53:07 CET] <mistym> Currently, attempting to buffer more than a moderate number of audio packets returns a "Too many packets buffered" error related to max_muxing_queue_size. Not sure the best way for me to handle this.
[09:53:13 CET] <nevcairiel> you can't buffer all the audio or video, it's not something you should ever even consider
[09:53:20 CET] <Shiz> I'll gladly admit ignorance on the lavf API, but wouldn't it be better to write the header last?
[09:53:24 CET] <Shiz> after all the other packets are written
[09:53:42 CET] <Shiz> buffering everything seems like a very bad thing performance and memory-wise
[09:53:52 CET] <nevcairiel> instead, look at how mp4 works, it has the same requirements, it writes sample data into the header
[09:55:20 CET] <JEEB> mov/mp4 is movenc.c in lavf
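(Roughly the mp4-style pattern being suggested, as a sketch: keep only lightweight per-sample metadata while packet data streams straight out, then emit the sample table in the trailer, or seek back and fill in a reserved header if the output is seekable. SampleInfo and the context layout are made up for illustration, not the segafilm code:)

```c
#include <libavformat/avformat.h>
#include <libavutil/mem.h>

typedef struct SampleInfo {
    int64_t pos, pts;
    int     size, stream_index;
} SampleInfo;

typedef struct IndexedMuxContext {
    const AVClass *class;
    SampleInfo    *samples;
    int            nb_samples;
} IndexedMuxContext;

static int indexed_write_packet(AVFormatContext *s, AVPacket *pkt)
{
    IndexedMuxContext *ctx = s->priv_data;
    SampleInfo *tmp = av_realloc_array(ctx->samples, ctx->nb_samples + 1,
                                       sizeof(*ctx->samples));
    if (!tmp)
        return AVERROR(ENOMEM);
    ctx->samples = tmp;

    /* Record only what the header's table needs... */
    ctx->samples[ctx->nb_samples++] = (SampleInfo){
        .pos          = avio_tell(s->pb),
        .pts          = pkt->pts,
        .size         = pkt->size,
        .stream_index = pkt->stream_index,
    };

    /* ...and let the packet data go straight to the output. */
    avio_write(s->pb, pkt->data, pkt->size);
    return 0;
}
```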
[09:59:07 CET] <JEEB> mistym: btw I did some initial testing of the ATRAC3+ stuff and it seems like the parser needs to be able to resync
[09:59:13 CET] <JEEB> (like after a seek)
[09:59:37 CET] <JEEB> also I noticed an error popping up about something not being set but I was still getting audio; I should check that out too when I have time.
[12:04:42 CET] <jkqxz> atomnuker: I think you should avoid adding external functions to the code in lavu unless they really need internal stuff, because the API/ABI gets fixed for a long time.
[12:04:52 CET] <jkqxz> So, probably keep the buffer allocation stuff private in lavfi.
[12:07:44 CET] <jkqxz> (And think again if you write anything in lavc, but even then it isn't very large so duplicating a bit of object code may well be preferable to adding external symbols.)
[12:58:47 CET] <cone-943> ffmpeg 03wm4 07master:6512ff72f9cc: avformat: deprecate another ffserver API leftover
[12:58:48 CET] <cone-943> ffmpeg 03wm4 07master:631c56a8e46d: avformat: make avformat_network_init() explicitly optional
[14:24:52 CET] <jkqxz> wm4: Define an AVD3D11HWConfig for the user to indicate what they want to do with the frames? (That is why that argument exists, after all.)
[14:26:56 CET] <wm4> you _could_ do that
[14:27:10 CET] <wm4> I'm not sure whether it's really worth it to abstract this
[14:28:22 CET] <wm4> if the API user knows it has to be usable with shaders or decoders or whatever, he can use the D3D11 API directly to determine support, and the constraints API is mostly about knowing what ffmpeg's code supports
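(For reference, the constraints API under discussion from the caller's side; a rough sketch, with the function name made up:)

```c
#include <stdio.h>
#include <libavutil/hwcontext.h>
#include <libavutil/pixdesc.h>

/* Query what ffmpeg's hwcontext reports for a device. The second argument
 * (NULL here) is the hwconfig hook jkqxz mentions for passing usage-specific
 * information such as a D3D11-specific config struct. */
static void print_constraints(AVBufferRef *device_ref)
{
    const enum AVPixelFormat *p;
    AVHWFramesConstraints *cst =
        av_hwdevice_get_hwframe_constraints(device_ref, NULL);

    if (!cst)
        return;

    for (p = cst->valid_sw_formats; p && *p != AV_PIX_FMT_NONE; p++)
        printf("sw format: %s\n", av_get_pix_fmt_name(*p));

    printf("size range: %dx%d .. %dx%d\n",
           cst->min_width, cst->min_height, cst->max_width, cst->max_height);

    av_hwframe_constraints_free(&cst);
}
```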
[15:29:03 CET] <wm4> jkqxz: anyway, can you ack that patch? or am I maybe the maintainer of it and I can just push? I have no idea
[15:35:54 CET] <atomnuker> the hwcontext_d3d11va constraints patch? code-wise it looks fine to me, but don't take my word for it - I thought pix_fmt of 0 was invalid until yesterday
[16:28:03 CET] <nevcairiel> why is it that you can't do thread priorities on Linux without root or messing with limits.conf? sure smells funky to me
[16:30:19 CET] <DHE> thread priorities are just renice on individual threads. so the same rules apply
[16:31:59 CET] <BtbN> you should be able to de-prioritize as user though?
[16:32:10 CET] <nevcairiel> of the three consumer OSes, it's the only one that lacks this ability, and it's really quite annoying if you can't prioritize, say, audio rendering over other background tasks
[16:32:15 CET] <DHE> de-prioritize, yes. for example x264 does this to its lookahead threads
[16:32:45 CET] <BtbN> so you basically de-prioritize everything else to achieve that
[16:32:53 CET] <nevcairiel> I don't think you actually can
[16:32:55 CET] <wm4> I never had the need for that anyway
[16:33:16 CET] <BtbN> I have seen processes, including ffmpeg, have different nicenesses on their threads in htop
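(That is possible because on Linux nice values are effectively per-thread; a thread can lower its own priority without privileges, e.g. as in this sketch:)

```c
#include <sys/resource.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Lower the priority of the calling thread only. On Linux, passing a thread ID
 * to setpriority(PRIO_PROCESS, ...) affects just that thread; raising the
 * priority back above the default would still require privileges. */
static void deprioritize_current_thread(void)
{
    setpriority(PRIO_PROCESS, (id_t)syscall(SYS_gettid), 10);
}
```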
[16:34:00 CET] <nevcairiel> in default Linux, the "normal" schedulers like SCHED_OTHER (the default) don't have any scheduling priorities, you need one of the real-time schedulers for that - which require super-user
[16:35:55 CET] <nevcairiel> you could potentially put some threads on SCHED_IDLE to de-prioritize them, but that's not the granularity I'm looking for, since that basically puts them on hold if the system is busy with any normal tasks
[16:36:20 CET] <DHE> on Linux there's another option - control groups. administrative action is required to set them up, and either the app or whatever launches the app must be aware of it, but there's some flexibility there
[16:37:15 CET] <nevcairiel> with admin interaction I can also edit limits.conf to allow my user to use SCHED_RR or SCHED_FIFO
[16:37:57 CET] <DHE> SCHED_RR and SCHED_FIFO will monopolize the CPU though. these have hard priority over regular SCHED_OTHER processes
[16:38:21 CET] <nevcairiel> and that's fine if I have an audio renderer that absolutely needs to feed the device when I want it to
[16:38:24 CET] <DHE> the objective is to raise priority above other apps as a non-root user, right?
[16:38:35 CET] <DHE> where's SCHED_ISO when you need it...
[16:39:09 CET] <nevcairiel> basically, we're having issues with audio drop-outs unless we use a real-time priority to keep our audio output thread properly in charge
[16:39:18 CET] <nevcairiel> only on super low-power devices mind you, but still
[16:39:20 CET] <nevcairiel> the change helps
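(What nevcairiel describes boils down to something like the sketch below; it only succeeds for a normal user if limits.conf grants an rtprio limit, or the process has CAP_SYS_NICE/root, which is the complaint above. The function name is made up, this is not LAV* code:)

```c
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>

/* Try to move the calling audio thread onto a real-time scheduler so it
 * preempts normal SCHED_OTHER work; without the rtprio limit this fails
 * with EPERM for an unprivileged user. */
static void make_audio_thread_realtime(void)
{
    struct sched_param sp = { .sched_priority = sched_get_priority_min(SCHED_RR) + 1 };
    int ret = pthread_setschedparam(pthread_self(), SCHED_RR, &sp);
    if (ret != 0)
        fprintf(stderr, "no real-time priority: %s\n", strerror(ret));
}
```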
[16:40:38 CET] <DHE> https://github.com/torvalds/linux/blob/master/include/uapi/linux/sched.h#L40 This was supposed to be the solution. SCHED_ISO would grant a high immediate priority to a process that sleeps often, but its priority drops rapidly if it abuses this
[16:40:54 CET] <DHE> making it good for apps that require immediate attention when woken up, but don't require much CPU time overall
[16:41:14 CET] <nevcairiel> that sounds like a lot of bookkeeping in the scheduler
[16:41:50 CET] <nevcairiel> why is it the kernel's job to control my app like that? I'm the developer, I should be in charge
[16:42:03 CET] <DHE> I imagine the implementation would be like a process with a nice value that goes towards -20 when sleeping and goes towards +20 while running
[16:42:20 CET] <DHE> because SCHED_ISO is available to normal users while not allowing a single app to monopolize the CPU
[16:42:51 CET] <nevcairiel> I would still contend that it's not the kernel's job to care about that
[16:43:03 CET] <nevcairiel> it should give me the power to (ab)use
[16:46:49 CET] <atomnuker> nevcairiel: there's a dbus service to alter process priorities, pulseaudio uses it
[16:47:39 CET] <nevcairiel> ... and that right there is what's wrong with Linux and dbus
[16:48:01 CET] <nevcairiel> also I want thread priorities, otherwise my background thread doing whatever processing is still going to shit on audio output
[16:48:51 CET] <atomnuker> well, threads have their own ID (TID) separate from the main process's PID, so you can target them individually
[16:49:15 CET] <atomnuker> rtkit, that's what it's called
[16:49:18 CET] <wm4> what kind of audio output are you using that anything can shit on it
[16:50:19 CET] <nevcairiel> it's just ALSA output, but if you're on a Pi or something and doing R128 audio analysis in the background, it can glitch out the audio
[17:07:46 CET] <philipl> BtbN: were you able to put any time into the deinterlacer over the weekend?
[17:41:54 CET] <cone-943> ffmpeg 03wm4 07master:27b9f82e2c5d: hwcontext_d3d11va: implement av_hwdevice_get_hwframe_constraints()
[22:03:20 CET] <cone-557> ffmpeg 03Jun Zhao 07master:a919ab853efc: lavc/snow_dwt: add struct MpegEncContext to fix headers check.
[23:58:35 CET] <cone-557> ffmpeg 03Zhong Li 07master:e23190269fb6: lavu/qsv: add log message for libmfx version
[23:58:36 CET] <cone-557> ffmpeg 03Zhong Li 07master:1efbbfedcaf4: examples/qsvdec: do not set the deprecated field refcounted_frames
[23:58:37 CET] <cone-557> ffmpeg 03Mark Thompson 07master:d204b7ff610c: Merge commit 'e23190269fb6e8217d080918893641ba3e0e3556'
[23:58:38 CET] <cone-557> ffmpeg 03Mark Thompson 07master:725ae0e2d022: Merge commit '1efbbfedcaf4a3cecab980273ad809ba3ade2f74'
[00:00:00 CET] --- Wed Jan 17 2018