[Ffmpeg-devel-irc] ffmpeg-devel.log.20160527

burek burek021 at gmail.com
Sat May 28 02:05:03 CEST 2016


[00:17:10 CEST] <llogan> durandal_1707: i mentioned that trip as soon as i found out about it.
[00:44:17 CEST] <cone-813> ffmpeg 03Gregor Riepl 07master:d970f7ba3124: ffserver: fixed deallocation bug in build_feed_streams
[03:19:34 CEST] <AndrewMock> Are there plans to support DTS' embedded downmix coefficients? (Not just the generic ac3 spec ones)
[03:36:40 CEST] <jamrial_> AndrewMock: afaik that's currently supported
[03:38:33 CEST] <AndrewMock> oh dang
[03:38:41 CEST] <AndrewMock> -ac 2 does that or?
[03:59:06 CEST] <jamrial_> AndrewMock: no, -request_channel_layout as input file option
[03:59:25 CEST] <AndrewMock> nice THANK YOU
[04:00:41 CEST] <jamrial_> any 6.1 or 7.1 stream will have a 5.1 downmix, but they rarely have a stereo downmix
[04:01:10 CEST] <jamrial_> if there's no stereo downmix, it will instead output the 5.1 one
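A minimal sketch of the same request from the C API, assuming the 2016-era AVCodecContext.request_channel_layout field (stream parameters from the demuxer are omitted):

    #include "libavcodec/avcodec.h"
    #include "libavutil/channel_layout.h"

    /* Open a DTS decoder that prefers the embedded stereo downmix;
     * equivalent to passing -request_channel_layout as an input option. */
    static AVCodecContext *open_dts_stereo_downmix(void)
    {
        AVCodec *dec = avcodec_find_decoder(AV_CODEC_ID_DTS);
        AVCodecContext *ctx = avcodec_alloc_context3(dec);
        if (!ctx)
            return NULL;
        ctx->request_channel_layout = AV_CH_LAYOUT_STEREO;
        if (avcodec_open2(ctx, dec, NULL) < 0) {
            avcodec_free_context(&ctx);
            return NULL;
        }
        return ctx; /* falls back to the 5.1 downmix if no stereo one exists */
    }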
[08:45:25 CEST] <phucnguyenv> hello
[11:04:50 CEST] <BtbN> andrey_turkin, are you sure that CUDA abstraction is worth it? Enabling cuda makes ffmpeg non-free and non-redistributable anyway, so you build it on the system with CUDA and very likely the nvidia driver installed.
[11:10:42 CEST] <andrey_turkin> It has a use in my situation (where we have some machines with GPUs and some without, and we'd really like to keep the same build for both). I guess there are others with the same needs
[11:11:25 CEST] <andrey_turkin> Regarding non-free and non-redistributable - I am not sure why it has to stay that way
[11:11:43 CEST] <andrey_turkin> given that nvenc is no longer considered non-free
[11:12:06 CEST] <nevcairiel> nvenc headers are licensed as MIT now
[11:12:12 CEST] <nevcairiel> cuda headers are still proprietary
[11:12:37 CEST] <andrey_turkin> right, but nvenc found a way around that, right (just "reimplement" required bits in ffmpeg)
[11:13:17 CEST] <andrey_turkin> the whole ffmpeg API usage just needs 2-3 more definitions to be compilable without the CUDA headers
[11:30:53 CEST] <BtbN> if you can get rid of the cuda.h dependency entirely, then the non-free could be dropped, yes.
[11:31:20 CEST] <BtbN> Oh, what I also noticed: I don't think you need to check for LoadLibrary/dlopen in the enabled cuda part in configure.
[11:31:25 CEST] <nevcairiel> that sounds like it might turn out rather ugly though, as contrary to nvenc, the cuda stuff is used in various files
[11:31:57 CEST] <andrey_turkin> well the definitions go into cuda_api.h and everything else uses it
[11:32:22 CEST] <nevcairiel> i would probably find that not really good to have in ffmpeg
[11:32:36 CEST] <BtbN> just add $ldl to extralibs, like nvenc does.
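For illustration, a minimal sketch of the dlopen() approach nvenc takes, assuming only the documented cuInit() entry point; the CUresult typedef here is a stand-in, not a copy of the proprietary header:

    #include <dlfcn.h>

    typedef int CUresult;                        /* stand-in for the real enum */
    typedef CUresult (*cuInit_fn)(unsigned int);

    /* Resolve the driver entry point at runtime; there is no link-time
     * libcuda dependency, so the same binary also runs on GPU-less hosts. */
    static cuInit_fn load_cu_init(void)
    {
        void *lib = dlopen("libcuda.so.1", RTLD_NOW | RTLD_GLOBAL);
        if (!lib)
            return NULL; /* no NVIDIA driver installed on this machine */
        return (cuInit_fn)dlsym(lib, "cuInit");
    }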
[11:32:52 CEST] <nevcairiel> we shouldnt litter our code with re-implementations of proprietary headers
[11:33:20 CEST] <nevcairiel> if some definitions are hidden in a single C file that uses them fine, but an extra header seems rather ugly
[11:34:37 CEST] <nevcairiel> as such, i really dont like having something like cuda_api.c/h, which in turn is even public API, since its in avutil and used from avfilter and avcodec
[11:35:17 CEST] <andrey_turkin> not public but cross-library
[11:35:30 CEST] <nevcairiel> anything thats cross-library is practically public
[11:37:17 CEST] <nevcairiel> even if its avpriv, its part of the ABI, and as such prone to the same ABI/API change limitations as any public API
[11:37:25 CEST] <andrey_turkin> Either we have to make ffmpeg builds with cuda support non-free and non-redistributable and require a 1+ GB download in order to build that support in; or we need to add something similar to cuda_api.h; or we need to place the same code all over the place
[11:37:27 CEST] <nevcairiel> so one would have to have a really convincing argument to add that
[11:38:39 CEST] <andrey_turkin> I'd be happy to hear any ideas how to achieve same result more cleanly
[11:38:45 CEST] <andrey_turkin> I don't have any
[11:38:50 CEST] <nevcairiel> maybe we just shouldn't
[11:39:03 CEST] <nevcairiel> there are plenty of external libraries that impose some sort of limitations
[11:39:09 CEST] <nevcairiel> we can't re-implement the headers for all of them
[11:42:01 CEST] <BtbN> I'm against the patch as long as it doesn't drop any external dependencies on cuda.h, as it's entirely pointless otherwise.
[11:42:06 CEST] <andrey_turkin> I can't really argue cuda is a big deal until ffmpeg can do something useful with it without libnpp.
[11:42:40 CEST] <BtbN> And even then I think it's questionable if re-implementing larger parts of cuda.h in ffmpeg is desirable.
[11:43:02 CEST] <andrey_turkin> I was going to send another patch to do just that once I got some feedback on the first one
[11:44:01 CEST] <BtbN> keeping CUDA non-free seems like the right thing in the first place. I'm not even sure the CUDA runtime still counts as a system library for the GPL.
[11:44:27 CEST] <andrey_turkin> basically the only missing type (beyond those defined in nvenc.c) is CUDA_MEMCPY2D
[11:45:01 CEST] <andrey_turkin> any plans to bring CUDA runtime into play? For now it is just driver API
[11:45:18 CEST] <nevcairiel> well for now, but you say yourself cuda is not very helpful in ffmpeg, and once someone starts implementing extra features, you'll start to re-implement a lot more of CUDA API
[11:47:05 CEST] <nevcairiel> and for license purposes, you are not allowed to look at cuda.h and copy its contents
[11:47:49 CEST] <andrey_turkin> you are allowed to look in the documentation and reimplement it
[11:48:09 CEST] <andrey_turkin> mingw, reactos and wine guys all manage
[11:48:46 CEST] <BtbN> Doesn't that depend on the license the documentation is released under?
[11:49:33 CEST] <nevcairiel> those are dedicated projects for doing such things - if you want to re-implement an OSS header in a separate project feel free, i'm just saying it's not something I feel is suitable for ffmpeg
[11:50:59 CEST] <andrey_turkin> ok then; if this looks like a bad idea to you guys so be it
[11:53:57 CEST] <BtbN> andrey_turkin, regarding your local situation, the CUDA SDK contains a shim library that has all the CUDA exports, just copy that around internally alongside ffmpeg.
[11:55:00 CEST] <BtbN> For me those are in /opt/cuda/lib64/stubs
[11:55:35 CEST] <andrey_turkin> I'll just keep the patch in place for my needs )
[11:55:50 CEST] <andrey_turkin> I wonder how come NVidia relicensed nvEncodeAPI.h?
[11:56:04 CEST] <nevcairiel> because j-b asked
[11:56:28 CEST] <andrey_turkin> if they can do the same for dynlink_cuda_cuda.h it would be wonderful
[11:56:28 CEST] <nevcairiel> maybe they'll do the same with cuda at a later point
[11:58:11 CEST] <BtbN> I don't have that file in my CUDA SDK?
[11:58:16 CEST] <andrey_turkin> it would be even better if they made better interoperability between opencl and nvenc/nvcuvid
[11:58:24 CEST] <andrey_turkin> it's in nvenc SDK
[11:58:27 CEST] <BtbN> that's not going to happen.
[11:58:35 CEST] <j-b> andrey_turkin: because I can be an annoying person.
[11:58:41 CEST] <nevcairiel> nvcuvid is unsupported at this point anyway
[11:58:52 CEST] <nevcairiel> they didnt even bother to update it for 10-bit decoding
[11:59:13 CEST] <BtbN> Kind of a shame, as there is no other comparable decode api
[11:59:50 CEST] <BtbN> But Nvidia is never going to care more for OpenCL than barely acknowledging that it exists.
[12:00:10 CEST] <andrey_turkin> looks that way
[12:01:42 CEST] <BtbN> Until very recently the OpenCL stuff in ffmpeg didn't work on nvidia, because they didn't support OpenCL 1.1
[12:02:22 CEST] <andrey_turkin> you mean 1.2? I think 1.1 has been supported for a while now
[12:02:23 CEST] <nevcairiel> nvidia didnt support 1.2 for a long time, but 1.1 should've been fine
[12:02:39 CEST] <BtbN> Oh, yeah, whatever ffmpeg needs. Might have been 1.2.
[12:02:59 CEST] <BtbN> I hope that with EGL/Wayland and stuff there will be some way to initialize vdpau without X.
[12:03:06 CEST] <BtbN> preferably on a headless machine
[12:03:34 CEST] <BtbN> Or a refresh of CUVID
[13:33:53 CEST] <andrey_turkin> wow that's a big patchset
[13:49:36 CEST] <omerjerk> Hi everyone.
[13:49:45 CEST] <omerjerk> I have a question regarding the SoftFloat api
[13:50:09 CEST] <omerjerk> I want to use this av_cmp_sf function  - https://www.ffmpeg.org/doxygen/2.2/softfloat_8h_source.html#l00095
[13:50:43 CEST] <omerjerk> I want to compare whether my SoftFloat is equal to 0.f or not
[13:50:54 CEST] <omerjerk> Is it even possible with this ?
[13:51:12 CEST] <omerjerk> Else I'll have to think of some other logic in my code.
[13:58:07 CEST] <omerjerk> Also, what's the proper way to get the sign bit in SoftFloat ?
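For reference, a minimal sketch of both checks, assuming the mant/exp layout of the internal libavutil/softfloat.h and normalized values:

    #include "libavutil/softfloat.h"

    /* A normalized SoftFloat is zero iff its mantissa is zero,
     * so no av_cmp_sf() call is needed for this test. */
    static int sf_is_zero(SoftFloat a)
    {
        return a.mant == 0;
    }

    /* The mantissa is a signed two's-complement value, so the sign
     * of the number is simply the sign of the mantissa. */
    static int sf_is_negative(SoftFloat a)
    {
        return a.mant < 0;
    }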
[14:32:09 CEST] <cone-681> ffmpeg 03Anton Khirnov 07master:44d16df41387: h264_parser: eliminate H264SliceContext usage
[14:32:10 CEST] <cone-681> ffmpeg 03Hendrik Leppkes 07master:2dc954e0bd54: Merge commit '44d16df413878588659dd8901bba016b5a869fd1'
[16:32:48 CEST] <cone-681> ffmpeg 03Michael Niedermayer 07master:281caece46c4: avfilter/avfiltergraph: Clear graph pointers in ff_filter_graph_remove_filter()
[16:44:15 CEST] <vade> hello. Is the newer codecpar API, using the send/receive + frame/packet API, stable for use? I've successfully ported/implemented a decoder - but my encoder is failing. I am using what was the latest git commit on github - since I'm wanting hwaccel encode for OS X via the h264_videotoolbox encoder.
[16:45:47 CEST] <rkern> Does it give you an error message?
[16:46:56 CEST] <vade> hi rkern - no, I suspect the issue is on my end. Inspecting packets I receive from the encoder - they appear to have data (ie - non-null values) and the only errors I get are stating that the encoder needs more frames - which obviously stops once it has enough data to begin to vend output packets to me
[16:46:56 CEST] <ubitux> michaelni: "no i dont know if someone uses it with 1000 filters" i remember someone doing thousands of drawbox instances or something (one per frame)
[16:48:01 CEST] <vade> rkern: I've read some migration documentation, and per its suggestion I create my own AVCodecContext, which I configure by setting values on my created output video stream's codecpar, and then initialize the codec context via avcodec_parameters_to_context
[16:48:29 CEST] <vade> I used the old pattern that was warned against, which was using the stream->codec directly. But that did result in output frames.
[16:48:37 CEST] <michaelni> ubitux, interesting
[16:48:56 CEST] <vade> (to be clear, that working encoder output was via the old API) 
[16:49:49 CEST] <vade> I imagine I am somehow not linking either my CodecContext or my stream to my output format - but I can't seem to deduce where or why. Is there anything else one needs other than manually making a codec context and ensuring it matches your created stream's codecpar settings?
[16:49:50 CEST] <ubitux> michaelni: well, it was pure madness but there is a ticket somewhere
[16:50:16 CEST] <ubitux> michaelni: http://trac.ffmpeg.org/ticket/5222
[16:52:44 CEST] <rkern> vade: when I set up the AVCodecContext first, and copy it to the AVStream.codecpar with avcodec_parameters_from_context(), it works.
[16:54:25 CEST] <vade> rkern: are there any samples using the new API ? 
[16:55:42 CEST] <durandal_1707> ubitux: something michaelni changed?
[16:56:52 CEST] <ubitux> durandal_1707: i was replying to one of his remarks mentioning that probably no one is doing a filtergraph with 1k filters
[16:57:16 CEST] <ubitux> in one of the recent patchset on the ml
[16:57:19 CEST] <durandal_1707> He is wrong
[16:57:43 CEST] <vade> Oh rkern - interesting. So you init your codec context, set it up, and then migrate the codec context's settings to your output stream's codec params? To be clear?
[17:00:23 CEST] <rkern> vade: right, set the parameters such as width and height, open the codec, then copy it to codecpar.
[17:00:38 CEST] <rkern> for encoders anyway
[17:02:32 CEST] <vade> thanks rkern - ill let you know shortly. very much appreciate that input.
[17:14:33 CEST] <vade> rkern: interesting - libx264 works, but it seems like the video toolbox encoder doesnt
[17:14:42 CEST] <vade> h264_videotoolbox
[17:15:29 CEST] <vade> your note appears to have been the key though, thank you!
[17:15:57 CEST] <rkern> ok, so you were setting the values in codecpar first?
[17:16:15 CEST] <vade> exactly. 
[17:16:30 CEST] <rkern> I'll try it out
[17:16:34 CEST] <vade> I was setting up my stream first, and then configuring my AVCodecContext via the parameters-to-context call
[17:16:48 CEST] <vade> thank you again for your input - I really appreciate it :) 
[17:17:18 CEST] <vade> also as per the h264_videotoolbox encoder, ive yet to try any private codec options, so I might need to do additional configuration on it
[17:18:47 CEST] <vade> also apologies if I drop out - internet here is shoddy.
[17:22:01 CEST] <vade> rkern: ah, I noticed a subtlety
[17:22:22 CEST] <vade> I also need to have called avcodec_open2 prior to avcodec_parameters_from_context
[17:22:35 CEST] <vade> otherwise my stream is unhappy
[17:44:14 CEST] <rkern> vade: copying from the context to codecpar is best. The encoder may set certain values when it's opened (such as has_b_frames).
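A rough sketch of that order (hypothetical values, error paths abbreviated; oc is an already-allocated output AVFormatContext):

    #include "libavformat/avformat.h"

    /* Configure and open the encoder first, then copy the resulting
     * parameters - including values the encoder fills in at open time,
     * such as has_b_frames - into the stream's codecpar. */
    static int add_video_stream(AVFormatContext *oc, AVCodecContext **out_enc)
    {
        AVCodec *codec = avcodec_find_encoder_by_name("libx264");
        AVCodecContext *enc;
        AVStream *st;

        if (!codec)
            return AVERROR_ENCODER_NOT_FOUND;
        enc = avcodec_alloc_context3(codec);
        if (!enc)
            return AVERROR(ENOMEM);

        enc->width     = 1280;
        enc->height    = 720;
        enc->pix_fmt   = AV_PIX_FMT_YUV420P;
        enc->time_base = (AVRational){1, 25};

        if (avcodec_open2(enc, codec, NULL) < 0) {
            avcodec_free_context(&enc);
            return AVERROR_UNKNOWN;
        }

        st = avformat_new_stream(oc, NULL);
        if (!st) {
            avcodec_free_context(&enc);
            return AVERROR(ENOMEM);
        }
        st->time_base = enc->time_base;
        avcodec_parameters_from_context(st->codecpar, enc);

        *out_enc = enc;
        return 0;
    }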
[17:45:43 CEST] <vade> ah yea. Makes sense
[17:46:31 CEST] <vade> I've configured my contexts and opened them prior to calling avcodec_parameters_from_context and all seems well. I still can't get h264_videotoolbox to output however
[18:00:17 CEST] <rkern> vade: are you using OS X or iOS?
[18:00:27 CEST] <vade> rkern: OS X 
[18:01:58 CEST] <cone-681> ffmpeg 03Håvard Espeland 07master:9c43703620a8: avcodec/proresdec2: Add support for grayscale videos
[18:04:00 CEST] <vade> rkern: I think I see the issue
[18:05:19 CEST] <vade> the pixel format list on the AVCodec* for x264 contains AV_PIX_FMT_YUV420P, whereas when I request videotoolbox I get AV_PIX_FMT_VIDEOTOOLBOX as the only entry. Is that something to be concerned about?
[18:06:48 CEST] <rkern> You can use AV_PIX_FMT_VIDEOTOOLBOX, AV_PIX_FMT_NV12, or AV_PIX_FMT_YUV420P
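A small sketch of picking a format the encoder actually advertises, assuming the AV_PIX_FMT_NONE-terminated AVCodec.pix_fmts list (NULL if the encoder places no restriction):

    #include "libavcodec/avcodec.h"

    /* Default to the first format the encoder advertises, or a
     * caller-supplied fallback if it advertises none. */
    static enum AVPixelFormat pick_pix_fmt(const AVCodec *codec,
                                           enum AVPixelFormat fallback)
    {
        return codec->pix_fmts ? codec->pix_fmts[0] : fallback;
    }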
[18:06:56 CEST] <vade> oh. I lied. Sorry. Yea. I just saw that
[18:07:03 CEST] <vade> sorry im playing catch-up. Apologies. 
[18:10:20 CEST] <vade> do you have h264_videotoolbox running on OS X rkern ?
[18:11:16 CEST] <rkern> I haven't tried it with the new avcodec_send_frame()/avcodec_receive_packet() API
[18:12:23 CEST] <vade> how would I go about submitting a bug report on this? I feel like if x264 is working in my current code, but h264_videotoolbox isn't, it might be something in h264_videotoolbox? is that fair?
[18:15:22 CEST] <rkern> vade: you can submit a bug at trac.ffmpeg.org. Please include a minimal code sample if possible.
[18:16:45 CEST] <nevcairiel> sounds to me like you are better suited asking in a user help context first; it sounds like you are not quite sure if you are using everything correctly either way
[18:16:53 CEST] <vade> Thanks for all the help rkern - one last q - via the old API to have h264_videotoolbox working, did you have to submit any particular flags to the encoder or private options?
[18:17:00 CEST] <vade> nevcairiel: this is indeed true :) 
[18:17:01 CEST] <nevcairiel> please note that this channel is for development of ffmpeg, and not for help using it
[18:17:41 CEST] <vade> Ah, apologies. I rarely got in-depth API help on #ffmpeg; I was unclear if this was dev *using* the API, or dev of FFmpeg and its APIs only.
[18:20:19 CEST] <nevcairiel> for the new decode API, most people have probably not switched to using it yet
[18:20:43 CEST] <vade> yea I've not seen any real examples. How new is it? I'm fairly new to FFmpeg dev overall
[18:20:51 CEST] <nevcairiel> but I don't think it should cause anything to bug out that worked with the old one
[18:20:59 CEST] <nevcairiel> couple months
[18:21:38 CEST] <nevcairiel> one of these days I should switch to using it myself ;)
[18:21:40 CEST] <vade> Oh, im 99% sure its on my end, ha :)
[18:22:34 CEST] <vade> I'd be up for possibly submitting some examples of using it - it's actually fairly clean once you deduce how to set it up. The new API makes a lot of sense, decoupling sending packets / receiving frames.
[18:23:06 CEST] <nevcairiel> indeed, its pretty simple
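The basic encode loop with the decoupled API looks roughly like this (muxing elided); passing frame == NULL once at the end flushes delayed packets:

    #include "libavcodec/avcodec.h"

    /* Feed one frame to the encoder and drain whatever packets are ready. */
    static int encode(AVCodecContext *enc, AVFrame *frame, AVPacket *pkt)
    {
        int ret = avcodec_send_frame(enc, frame);
        if (ret < 0)
            return ret;

        for (;;) {
            ret = avcodec_receive_packet(enc, pkt);
            if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
                return 0;   /* needs more input, or fully flushed */
            if (ret < 0)
                return ret; /* real error */
            /* ... hand pkt to the muxer here ... */
            av_packet_unref(pkt);
        }
    }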
[18:25:48 CEST] <vade> ok, one last q - do I need any flag other than setting up a codec via avcodec_find_encoder_by_name("h264_videotoolbox") to enable hw accel with the new API?
[18:28:35 CEST] <rkern> No, hw is the only option unless you set a flag to allow a software encode (it will still use hw if available). I've jumped in #ffmpeg if you have any other questions. I'm looking at OS X with the new API now, but iOS works fine with it.
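A sketch of setting such a flag through the private-options mechanism, before avcodec_open2(); the option name "allow_sw" is an assumption here, not confirmed in the log:

    #include "libavutil/opt.h"
    #include "libavcodec/avcodec.h"

    /* Permit a software fallback on the not-yet-opened encoder context;
     * "allow_sw" is the assumed h264_videotoolbox private option name. */
    static void allow_software_encode(AVCodecContext *enc)
    {
        av_opt_set_int(enc->priv_data, "allow_sw", 1, 0);
    }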
[18:28:58 CEST] <vade> thanks so much. Apologies for being off topic. I very much appreciate all of your input.
[18:31:39 CEST] <Compn> wonder how long steam has been distributing libavcodec/libavformat/libswscale ;P
[18:31:56 CEST] <Compn> really are on every computer ever :P
[18:37:59 CEST] <RiCON> at least since in-home streaming
[19:48:37 CEST] <esdwdftty> I don't know whether you (the developers) know about this or not. I came across it, so I'm sharing the information. I have an AMD CPU with at most AVX1. https://software.intel.com/en-us/articles/avoiding-avx-sse-transition-penalties
[20:02:35 CEST] <nevcairiel> esdwdftty: we avoid these problems by explicitly zeroing the upper part of the registers when transitioning
[20:03:00 CEST] <nevcairiel> (ie. method 3 they mention)
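For illustration, a minimal C sketch of that "method 3" using the intrinsic form of VZEROUPPER (hypothetical function, compile with AVX enabled, e.g. -mavx):

    #include <immintrin.h>

    /* Dirty the upper halves of the YMM registers with 256-bit AVX work,
     * then zero them before any legacy (non-VEX) SSE code runs, avoiding
     * the AVX->SSE state-transition penalty. */
    void avx_work_then_sse(float *dst, const float *src)
    {
        __m256 v = _mm256_loadu_ps(src);
        v = _mm256_add_ps(v, v);
        _mm256_storeu_ps(dst, v);
        _mm256_zeroupper(); /* method 3: clear the upper 128 bits */
        /* ... legacy SSE code can follow without a penalty ... */
    }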
[20:05:13 CEST] <esdwdftty> ok
[20:26:42 CEST] <rcombs> nevcairiel: is that ar issue fixed in the libc or in binutils?
[20:26:56 CEST] <nevcairiel> in mingw's mkstemp
[20:26:59 CEST] <nevcairiel> so i guess libc
[20:27:29 CEST] <rcombs> apparently they downgraded binutils to 2.25 a couple months back
[20:27:46 CEST] <rcombs> and I needed 2.26 for my application
[20:28:11 CEST] <rcombs> so I guess we'll have to build binutils 2.26 against the new libc
[20:30:30 CEST] <nevcairiel> what does 2.26 offer?
[20:33:17 CEST] <BBB> nevcairiel: I thought it was a combination of 1/2 and 3
[20:33:23 CEST] <BBB> nevcairiel: between functions, its 3
[20:33:34 CEST] <BBB> nevcairiel: within functions, if avx, everything is vex, else everything is non-vex
[20:33:39 CEST] <BBB> which is 1/2
[20:33:39 CEST] <nevcairiel> i suppose we use VEX encoding when appropriate, yes
[20:36:00 CEST] <BBB> kierank: is that a protest-IWNA?
[20:36:06 CEST] <kierank> no
[20:36:33 CEST] <kierank> it is an IRL stuff iwna
[20:36:51 CEST] <BBB> alrighty
[21:01:08 CEST] <somebody_useless> Hi guys, how do you record sample input video? I need to record some sample input clips and supply them to my open trac ticket, but the two recommended methods do not work.
[21:01:09 CEST] <somebody_useless> mplayer --dumpstream produces garbage video that's barely visible and is missing most of the picture data [using an mpegts input stream].
[21:01:09 CEST] <somebody_useless> aviocat won't compile due to missing C library files (in the libavutil folder).
[21:01:09 CEST] <somebody_useless> Thanks!
[21:10:16 CEST] <Compn> whats the input somebody_useless ?
[21:10:21 CEST] <Compn> i mean , http? dvb ? 
[21:10:41 CEST] <Compn> hardware /dev/stream ?
[22:06:08 CEST] <somebody_useless> compn, sorry! Thanks for responding. It's mpegts
[22:09:30 CEST] <somebody_useless> compn, we're receiving the mpegts stream via multicast from our receivers and transcoding the mpeg2ts to mpeg4ts for WAN transport (to our customers). I need a way to record the multicast mpegts video footage; however, I have not yet been able to do so. I can record the raw packet data using python, but this is likely not acceptable (as were my ffmpeg and cvlc recordings)
[22:09:58 CEST] <somebody_useless> I recognize that this is not the general support chat, but I really want to help your team find the problem :)
[22:10:59 CEST] <somebody_useless> [I'm interested in the entire input stream and not just the video.]
[22:17:16 CEST] <kierank> multicat
[22:50:54 CEST] <somebody_useless> kierank! THANKS!!!!!!!
[00:00:00 CEST] --- Sat May 28 2016

