[Ffmpeg-devel-irc] ffmpeg-devel.log.20161007
burek
burek021 at gmail.com
Sat Oct 8 03:05:03 EEST 2016
[01:31:41 CEST] <cone-988> ffmpeg 03Rodger Combs 07master:1f7d5860525a: ffmpeg: don't reconfigure terminal if we're not taking input from stdin
[01:31:42 CEST] <cone-988> ffmpeg 03Rodger Combs 07master:021286720248: tests: add -nostdin flag when calling ffmpeg
[02:48:58 CEST] <crelloc> Hey everyone, i am working on writing a selftest to improve code coverage...When I add my own selftest program do I need to modify a Makefile to get my added selftest to compile??
[04:51:48 CEST] <philipl> BtbN: Doesn't work. They absolutely don't want you to access the raw backing memory for a GL texture. That's why you can't get a pointer directly from the mapped resource either.
[10:35:16 CEST] <BtbN> philipl, so you need to cuMemCpy anyway, don't you? Or what does using an Array in ffmpeg improve?
[12:45:44 CEST] <cone-928> ffmpeg 03Hendrik Leppkes 07master:04a357726378: ffmpeg: remove unused and errorneous AVFrame timestamp check
[12:48:42 CEST] <wm4> nevcairiel: so the plan is to follow Libav's change to use AVFrame.pts, right?
[12:48:48 CEST] <nevcairiel> yes
[12:48:54 CEST] <nevcairiel> i didnt see anyone clearly objecting
[12:48:54 CEST] <wm4> and pkt_pts will be deprecated?
[12:48:58 CEST] <nevcairiel> yes
[12:49:02 CEST] <wm4> but not pkt_dts?
[12:49:14 CEST] <nevcairiel> we dont have a replacement field for that, so no, the other pkt_* fields remain
[12:49:27 CEST] <nevcairiel> its only consolidating pkt_pts,pts into only pts
[12:50:09 CEST] <wm4> yeah
[13:04:40 CEST] <BtbN> A dts doesn't really make sense for non-packets anyway
[13:04:56 CEST] <BtbN> And I can't think of an immediate use of the former pkt's dts when dealing with a decoded frame.
[13:05:47 CEST] <nevcairiel> that info isnt changing either way
[13:06:28 CEST] <wm4> BtbN: unfortunately, the dts is often useful in broken situations
[13:06:50 CEST] <cone-928> ffmpeg 03Anton Khirnov 07master:32c8359093d1: lavc: export the timestamps when decoding in AVFrame.pts
[13:06:51 CEST] <cone-928> ffmpeg 03Hendrik Leppkes 07master:3f9137c57d23: Merge commit '32c8359093d1ff4f45ed19518b449b3ac3769d27'
[13:06:54 CEST] <wm4> such as avi timestamps
[13:06:55 CEST] <BtbN> Also, I just noticed that vf_hwupload_cuda can probably be removed.
[13:07:02 CEST] <wm4> which live on in vfw-muxed mkv
[13:07:11 CEST] <BtbN> Updated it to the new API, and there is nothing CUDA specific left in there.
[13:07:22 CEST] <BtbN> So the generic hwupload should work
[13:07:49 CEST] <BtbN> hm, or not. the generic one is not that generic
[13:08:19 CEST] <wm4> wut
[13:08:28 CEST] <wm4> the generic one was designed to replace that, wasn't it
[13:09:00 CEST] <wm4> so, can I assume maintainership of mp3dec.c?
[13:09:42 CEST] <BtbN> wm4, it was, but it expects an externally supplied hwdevice_ctx
[13:09:53 CEST] <BtbN> Which the cuda one just creates itself
[13:09:56 CEST] <wm4> that's a given
[13:10:02 CEST] <wm4> how does that even work?
[13:10:13 CEST] <wm4> can cuda surfaces be passed to different cuda contexts?
[13:11:01 CEST] <BtbN> no
[13:11:11 CEST] <BtbN> but hwupload is the beginning of the chain
[13:11:18 CEST] <BtbN> that's where the cuda context initially comes from
[13:11:32 CEST] <BtbN> everything after that uses it
[13:12:31 CEST] <wm4> I'm not quite sure where you'd create the initial context in e.g. vaapi full-hw transcoding
[13:12:37 CEST] <wm4> but shouldn't it be similar?
[13:13:00 CEST] <jkqxz> Magically pulling a cuda context out of nowhere is not nice, though it makes ffmpeg (the utility) a bit simpler.
[13:13:32 CEST] <BtbN> wm4, in ffmpeg.c
[13:13:40 CEST] <BtbN> that's what ffmpeg_vaapi.c and ffmpeg_cuvid.c do
[13:13:54 CEST] <jkqxz> If you set the global hw_device_ctx somewhere there then the generic hwupload works (because every filter gets given that device).
[13:14:09 CEST] <BtbN> optionally cuvid.c can do it, if it doesn't get an external context, it just creates one
[13:14:34 CEST] <BtbN> jkqxz, yes, but by now we can just create the context in the generic upload
[13:14:39 CEST] <BtbN> if there is not a global one
[13:14:46 CEST] <jkqxz> This is horrible, though, because you can't use multiple devices. There was some thought in libav about how to make this better, but we haven't come up with a nice answer.
[13:15:09 CEST] <BtbN> of course you can?
[13:15:14 CEST] <jkqxz> Something like allowing multiple devices to be created on the command line and then parse the filter graph to put them in the right place in the filter chain, but it's all ugly.
[13:15:35 CEST] <jkqxz> You can't use multiple devices in the ffmpeg utility. It's fine if you just use lavfi.
[13:15:35 CEST] <BtbN> There is no point in having more than one global device anyway
[13:16:01 CEST] <BtbN> Per chain, that is.
[13:16:16 CEST] <BtbN> You need to use the same one, otherwise the later filters/encoders can't access the frames.
[13:16:29 CEST] <jkqxz> No. My vaapi <-> opencl interop stuff requires two devices because derived devices can't work for that case.
[13:16:30 CEST] <BtbN> https://gist.github.com/71e79155759c3c2274b8261c8972119e that's the changes I made to the cuda hwupload. It's essentially generic now.
[13:17:00 CEST] <cone-928> ffmpeg 03Anton Khirnov 07master:beb62dac6296: Use AVFrame.pts instead of deprecated pkt_pts.
[13:17:01 CEST] <cone-928> ffmpeg 03Hendrik Leppkes 07master:6f74e3cde614: Merge commit 'beb62dac629603eb074a44c44389c230b5caac7c'
[13:17:46 CEST] <jkqxz> Ideally derived devices would work, and then you only need to supply the first one, but branching cases can still be hard.
[13:18:08 CEST] <BtbN> For CUDA, the CUdeviceptr is only valid for the context it was created in
[13:18:13 CEST] <BtbN> so there is no way to use more than one context
[13:18:24 CEST] <cone-928> ffmpeg 03Martin Storsjö 07master:dc7501e524dc: checkasm: Issue emms after benchmarking functions
[13:18:26 CEST] <cone-928> ffmpeg 03Hendrik Leppkes 07master:6fc74934de1f: Merge commit 'dc7501e524dc3270335749302c7aa449973625f3'
[13:18:36 CEST] <BtbN> You can of course have independent chains on multiple devices. Which is already perfectly possible.
[13:18:45 CEST] <BtbN> the hwupload_cuda has a parameter for the device to run on.
[13:19:15 CEST] <cone-928> ffmpeg 03Martin Storsjö 07master:8c3c7b892003: dxva2_h264: Remove an unused variable
[13:19:16 CEST] <cone-928> ffmpeg 03Hendrik Leppkes 07master:40b2878ad3bb: Merge commit '8c3c7b8920033d61c7aa15a4465b759c84e5958f'
[13:19:28 CEST] <jkqxz> Yes. And that's why hwupload_cuda magically creating devices is bad, because they aren't interoperable. If you have two input streams and you upload both of them, it doesn't work because they are not the same device.
[13:19:41 CEST] <cone-928> ffmpeg 03Mark Thompson 07master:11b8030309ee: vaapi_encode: Fix fallback when input does not match any format
[13:19:41 CEST] <cone-928> ffmpeg 03Hendrik Leppkes 07master:5e872d908368: Merge commit '11b8030309ee93d79b3a6cd4b83bf00757db1598'
[13:20:11 CEST] <jkqxz> *not the same context. (Device is unhelpfully overloaded there.)
[13:20:12 CEST] <cone-928> ffmpeg 03Mark Thompson 07master:fe498ef5144d: hwcontext_vaapi: Return all formats for constraints without config
[13:20:13 CEST] <cone-928> ffmpeg 03Luca Barbato 07master:4dbfcd07570a: librtmp: Avoid an infiniloop setting connection arguments
[13:20:14 CEST] <cone-928> ffmpeg 03Hendrik Leppkes 07master:e8487d71be4d: Merge commit 'fe498ef5144d3712b887f44a0c5e654add99ead7'
[13:20:15 CEST] <cone-928> ffmpeg 03Hendrik Leppkes 07master:8dd0e3d50f84: Merge commit '4dbfcd07570a9e45e9597561023adb6da26f27f6'
[13:20:27 CEST] <BtbN> There is no sane way to achieve that though
[13:20:51 CEST] <jkqxz> ? Create the device outside lavfi.
[13:20:55 CEST] <BtbN> implicit context creation saves a lot of headaches.
[13:21:43 CEST] <wm4> if the format changes, lavfi will be reinitialized
[13:21:47 CEST] <wm4> then your context would die
[13:22:05 CEST] <BtbN> it will create a new one, but the old one will still exist
[13:22:09 CEST] <cone-928> ffmpeg 03Diego Biurrun 07master:3c84eaae9da0: h264: Eliminate unused but set variable
[13:22:10 CEST] <cone-928> ffmpeg 03Hendrik Leppkes 07master:da76175d6812: Merge commit '3c84eaae9da0dc450ae99c65bb6b9865e3ba7fad'
[13:22:11 CEST] <BtbN> until it's not in use anymore
[13:22:54 CEST] <BtbN> every frame has a ref to the hwframes_ctx, which has a ref to the hwdevice_ctx
[13:23:05 CEST] <BtbN> so the device won't be freed until the last frame has been freed
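The lifetime chain BtbN describes (frame holds a ref to the hwframes ctx, which holds a ref to the hwdevice ctx) can be sketched with a generic refcount. This is a minimal illustration with hypothetical names, not the actual AVBufferRef-based implementation in libavutil:

```c
#include <stdlib.h>

/* Hypothetical stand-ins for AVHWDeviceContext / AVHWFramesContext /
 * AVFrame; real FFmpeg code gets the same effect via AVBufferRef. */
typedef struct Device { int refs; } Device;
typedef struct FramesCtx { int refs; Device *dev; } FramesCtx;
typedef struct Frame { FramesCtx *fc; } Frame;

static Device *device_ref(Device *d) { d->refs++; return d; }
static void device_unref(Device *d) { if (--d->refs == 0) free(d); }

static FramesCtx *frames_ctx_new(Device *d) {
    FramesCtx *fc = malloc(sizeof(*fc));
    fc->refs = 1;
    fc->dev = device_ref(d);          /* frames ctx keeps the device alive */
    return fc;
}
static FramesCtx *frames_ctx_ref(FramesCtx *fc) { fc->refs++; return fc; }
static void frames_ctx_unref(FramesCtx *fc) {
    if (--fc->refs == 0) { device_unref(fc->dev); free(fc); }
}

static Frame frame_alloc(FramesCtx *fc) {
    Frame f = { frames_ctx_ref(fc) }; /* frame keeps the frames ctx alive */
    return f;
}
static void frame_free(Frame *f) { frames_ctx_unref(f->fc); f->fc = NULL; }
```

Even after the filter graph drops its own references, the device is only released once the last frame referencing it is freed, which is why implicit context creation doesn't leak or invalidate in-flight frames on reinit.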
[13:23:20 CEST] <cone-928> ffmpeg 03Diego Biurrun 07master:eedbeb4c2737: msmpeg4: Remove some broken, commented-out cruft
[13:23:21 CEST] <cone-928> ffmpeg 03Hendrik Leppkes 07master:5114c62902d3: Merge commit 'eedbeb4c2737f28844157fae4bd87ed42a61bb1d'
[13:23:55 CEST] <BtbN> Hm, what's the ffmpeg-way to create a (temporary) string from a number?
[13:24:00 CEST] <cone-928> ffmpeg 03Diego Biurrun 07master:4f98bb7b6d03: msmpeg4: Remove commented-out debug logging code
[13:24:01 CEST] <cone-928> ffmpeg 03Martin Storsjö 07master:31aa5335c390: libopenh264enc: Fix inconsistent whitespace
[13:24:02 CEST] <cone-928> ffmpeg 03Hendrik Leppkes 07master:2335e189fb7b: Merge commit '4f98bb7b6d0323d9ecc3bebd6e24d46a3a374bad'
[13:24:03 CEST] <cone-928> ffmpeg 03Hendrik Leppkes 07master:edb4c4451195: Merge commit '31aa5335c390c83a6c3ea955b155067c36c4a2c4'
[13:24:21 CEST] <nevcairiel> av_d2str?
[13:26:35 CEST] <nevcairiel> itoa etc are probably frowned upon since they use system malloc
[13:26:52 CEST] <wm4> wut
[13:27:09 CEST] <nevcairiel> oh wait they take a buffer
[13:27:14 CEST] <wm4> isn't strtod/ll the proper call
[13:27:31 CEST] <BtbN> that's the other way around
[13:27:33 CEST] <ubitux> afaict its first and only use was in 12ad66712
[13:27:41 CEST] <ubitux> which was removed in 82f19afe
[13:27:49 CEST] <ubitux> maybe
[13:27:57 CEST] <ubitux> i may have skipped steps
[13:27:59 CEST] <nevcairiel> of av_d2str?
[13:28:02 CEST] <ubitux> yeah
[13:28:03 CEST] <nevcairiel> yeah its unused now
[13:28:16 CEST] <nevcairiel> can always use av_asprintf to print it into a string =p
[13:28:24 CEST] <ubitux> yeah, no we can :)
[13:28:57 CEST] <ubitux> now*
[13:29:00 CEST] <cone-928> ffmpeg 03Martin Storsjö 07master:0c9c4004ed57: omx: Don't return > 0 from omx_encode_frame
[13:29:02 CEST] <cone-928> ffmpeg 03Hendrik Leppkes 07master:85146dfc23d3: Merge commit '0c9c4004ed57de210b4d83c7b39bbfb00b86b9af'
[13:29:04 CEST] <ubitux> but it was added in 2011
[13:29:10 CEST] <ubitux> d2str is from 2009
[13:29:21 CEST] <ubitux> the technology wasn't here yet
[13:29:24 CEST] <nevcairiel> heh
[13:29:37 CEST] <BtbN> I remember there being some macro that does it on the stack, specifically to be used as a parameter for a function
[13:30:12 CEST] <ubitux> that's av_ts2str
[13:30:15 CEST] <ubitux> and av_ts2timestr
[13:30:28 CEST] <cone-928> ffmpeg 03Anton Khirnov 07master:5b63b15663d3: lavfi: set the link hwframes context before configuring the dst input
[13:30:29 CEST] <cone-928> ffmpeg 03Hendrik Leppkes 07master:adfcf16f76de: Merge commit '5b63b15663d31f50ce45d980b904a68795ad3f7a'
[13:30:39 CEST] <nevcairiel> so many different things
[13:31:02 CEST] <nevcairiel> all commits merged until the next avconv set, oh well
[13:31:44 CEST] <ubitux> only 416 commits left :3
[13:31:48 CEST] <ubitux> "only" :(
[13:32:18 CEST] <ubitux> sorry 403
[13:33:25 CEST] <nevcairiel> the current avconv set is at least interesting, its the late-init set that fixes the dependency on accurate avformat info, and instead lets the decoders provide more
[13:33:52 CEST] <BtbN> #define tmp_itoa(i) itoa((i), (char[64]){0}, 10)
[13:33:54 CEST] <BtbN> something like that.
[13:41:23 CEST] <BtbN> I'll just use snprintf.
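The macro pattern BtbN is remembering (av_ts2str and av_ts2timestr work this way) relies on a C99 compound literal as the scratch buffer: the buffer lives until the end of the enclosing block, so the resulting string can be passed straight as a function argument. A minimal sketch with hypothetical names, not an actual lavu API:

```c
#include <stdio.h>

/* Print an int into a caller-provided buffer and return it, so the
 * macro below can chain the result into a function call. */
static inline char *int_make_string(char *buf, int val) {
    snprintf(buf, 21, "%d", val);
    return buf;
}

/* The compound literal (char[21]){0} is stack storage that stays valid
 * until the end of the enclosing block -- same trick as av_ts2str(). */
#define tmp_itoa(i) int_make_string((char[21]){0}, (i))
```

Usage: `some_function(tmp_itoa(fd));` with no explicit temporary and no heap allocation.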
[14:01:40 CEST] <BtbN> Almost done with this huge set of patches...
[14:45:30 CEST] <cone-928> ffmpeg 03Anton Khirnov 07master:1c169782cae6: avconv: explicitly postpone writing the header until all streams are initialized
[14:45:31 CEST] <cone-928> ffmpeg 03Hendrik Leppkes 07master:82c4d57553d4: Merge commit '1c169782cae6c5c430ff62e7d7272dc9d0e8d527'
[14:47:06 CEST] <nevcairiel> this one was easy, the next ones seem harder:d
[14:57:40 CEST] <cone-928> ffmpeg 03Michael Niedermayer 07master:572f16e10041: avformat/matroskaenc: Fix () error
[15:31:49 CEST] <michaelni> nevcairiel, "./ffmpeg -y -metadata a=b test.ffmeta" broke (results in empty file)
[15:33:24 CEST] <nevcairiel> whats the point of that command working
[15:33:30 CEST] <nevcairiel> it doesnt process any media data
[15:34:35 CEST] <nevcairiel> should probably check AVFMT_NOSTREAMS somewhere though
[15:37:51 CEST] <michaelni> nevcairiel, you can also use "./ffmpeg -i matrixbench_mpeg2.mpg test.ffmeta" that broke too
[15:38:20 CEST] <nevcairiel> a format with no media data is just weird, but should be easily fixed
[15:46:18 CEST] <nevcairiel> michaelni: patch on ml
[15:52:38 CEST] <jamrial> nevcairiel: see my df2ae8f noop'd merge. it's needed before merging one of the upcoming avconv commits (i think 50722b4 but not sure)
[15:52:45 CEST] <jamrial> one of the things we check for framerate is a filtergraph that wouldn't be available where the code would be moved to
[15:53:22 CEST] <nevcairiel> those upcoming ones need careful thinking either way
[15:53:25 CEST] <nevcairiel> move a lot of stuff
[15:53:50 CEST] <jamrial> yeah
[15:54:15 CEST] <nevcairiel> i'll probably call it quits for today, going out in the evening and need to eat before
[16:01:08 CEST] <michaelni> nevcairiel, patch fixes the issue, thanks a lot
[16:07:44 CEST] <philipl> BtbN: you do one copy from the cuvid output buffer to the output frame. If ffmpeg supports arrays, mpv can define an external buffer pool for the hwcontext that is made up of gl textures.
[16:07:55 CEST] <cone-928> ffmpeg 03Hendrik Leppkes 07master:ab7e83efed9c: ffmpeg: explicitly write headers for files with no streams
[16:08:19 CEST] <philipl> so you do the one copy in cuvid and then off you go.
[16:09:17 CEST] <philipl> gl textures are mapped as arrays by interop and you can never get a memory pointer so you use the array mode of memcpy2d
[16:09:23 CEST] <nevcairiel> if your entire goal is direct rendering, why not use vdpau
[16:09:56 CEST] <philipl> because it doesnt support the new hardware decoding capabilities.
[16:10:14 CEST] <nevcairiel> neither does cuvid =p
[16:10:37 CEST] <philipl> You get more from cuvid than vdpau
[16:10:55 CEST] <philipl> and i have more confidence in cuvid catching up than vdpau.
[16:11:02 CEST] <nevcairiel> i wouldnt
[16:11:05 CEST] <nevcairiel> cuvid hasnt changed for years
[16:11:13 CEST] <philipl> and let's be clear.
[16:11:18 CEST] <cone-928> ffmpeg 03Michael Niedermayer 07master:72061177f383: ffmpeg: Fix bitstream typo
[16:11:19 CEST] <nevcairiel> except the occasional repackage into a new zip
[16:11:59 CEST] <philipl> i'm not saying this array mode thing is essential by any means. I'm just saying it can be done.
[16:12:10 CEST] <philipl> the vp9 support is new
[16:12:16 CEST] <nevcairiel> i tried to report various issues in cuvid last year and even had some insider try to get me proper contacts, but they were just not interested
[16:12:40 CEST] <philipl> oh well. vdpau had one engineer who left.
[16:12:51 CEST] <philipl> cant get worse than that.
[16:37:07 CEST] <BtbN> hm, is it possible to have internal functions in libavutil, to be used by the other libraries?
[16:37:17 CEST] <BtbN> So, just ff_ functions with a symbol, but no public header?
[16:38:08 CEST] <ubitux> we avoid them, but avpriv_*
[16:38:44 CEST] <BtbN> Can't really think of a better way. Can't really make them public, as that would involve installing the cuda dynload headers.
[16:39:04 CEST] <BtbN> Could make the whole thing header-only, but that seems kind of a mess
[17:16:00 CEST] <nevcairiel> its really frowned upon to export specific stuff from avutil, if you can find a way without that, that would be much better
[17:16:59 CEST] <BtbN> can't think of one. As libavutil itself, libavfilter and libavcodec need that stuff. The other solution would be to install the dynload_cuvid/nvenc/cuda.
[17:17:06 CEST] <nevcairiel> because anything thats exported, marked public or not, is public ABI
[17:17:51 CEST] <nevcairiel> why cant we just rely on headers installed by the SDK, like we do with any other external component
[17:17:56 CEST] <nevcairiel> this hackery seems to be getting out of hand
[17:18:50 CEST] <BtbN> Those headers are not part of the SDK
[17:19:21 CEST] <nevcairiel> anyway, exporting implementation specific symbols from avutil needs a really strong case
[17:19:46 CEST] <nevcairiel> we have fought against this before and in the end we got an abstraction in hwcontext instead of various specific crap
[17:20:10 CEST] <nevcairiel> so in short: please don't
[17:20:11 CEST] <nevcairiel> :)
[17:20:20 CEST] <BtbN> Like I said, I could just put all the load-functions as macros in the header. But I'm not sure if that's better.
[17:20:34 CEST] <nevcairiel> stuff works in git master today, doesnt it
[17:20:34 CEST] <nevcairiel> :D
[17:21:13 CEST] <BtbN> It works while requiring a hard link against nvidia libraries and non-free headers.
[17:21:26 CEST] <nevcairiel> thats on nvidia, not us
[17:21:47 CEST] <BtbN> nvidia provides the headers to dynload the libraries. Just using them.
[17:22:07 CEST] <nevcairiel> and hard linking is what we do with all sorts of other hardware libs, too
[17:22:17 CEST] <nevcairiel> feel free to submit to the ML, but expect the same arguments again
[17:23:12 CEST] <nevcairiel> avutil is not a dumping ground for things convenient to be shared between the libraries, its meant to make sense on its own
[17:24:15 CEST] <philipl> A fully inline header does avoid that argument.
[17:24:31 CEST] <BtbN> But it's an ugly hack.
[17:24:41 CEST] <BtbN> including the full loader into absolutely everything
[17:39:46 CEST] <philipl> Put each loader in a separate header ;-)
[17:42:28 CEST] <jamrial> libavcuda
[17:42:41 CEST] <philipl> yes. that was my next joke.
[17:42:51 CEST] <philipl> followed by libavcore.
[18:05:52 CEST] <jkqxz> An nvidia shim library which contains the headers and does the dynamic loading, but is itself linked directly, doesn't sound like a terrible idea. There is already precedent for that sort of approach with the mfx_dispatch stuff.
[18:07:05 CEST] <jkqxz> I don't know whether the licensing terms would actually permit distribution of such a thing, though.
[18:09:01 CEST] <BtbN> I just took everything that was in ffmpeg already and put it in a header.
[18:09:05 CEST] <BtbN> So I don't see an issue
[18:15:34 CEST] <iive> BtbN: so this header is needed only during compilation and should not be installed with the rest of ffmpeg headers?
[18:16:01 CEST] <BtbN> iive, yes.
[18:16:25 CEST] <BtbN> I'll probably just put it to the other dynload headers in compat/cuda, and make it static inline functions inside of the header
[18:16:43 CEST] <iive> BtbN: then you can place it whereever you like.
[18:16:54 CEST] <iive> that actually sounds like reasonable place.
[18:17:14 CEST] <BtbN> the problem is not the header, but the actual code loading the libraries
[18:23:11 CEST] <jkqxz> Which is why an external shim package which wraps all that up (headers and loading), and is then used just like any normal library, might be a nicer option. All of the code inside ffmpeg would just include the normal headers and call normal functions declared in those headers, not caring about this problem.
[18:24:01 CEST] <BtbN> would be kind of against the point of making it more easily accessible though
[18:24:12 CEST] <jkqxz> And that has use beyond ffmpeg, too. Internal stuff in libavutil might make ffmpeg a bit easier, but anyone using the libraries would still have to implement everything again.
[18:24:41 CEST] <BtbN> Having it as header-only in ffmpeg does that as well.
[18:25:23 CEST] <iive> compat is a directory in ffmpeg root, not in libavutil.
[19:45:19 CEST] <BtbN> philipl, https://gist.github.com/25b8466223cd204f4b84c442631dd941 it's not even _that_ bad.
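The header-only loader being discussed boils down to: dlopen the vendor library on first use, resolve each entry point into a cached function pointer, and forward calls through static inline wrappers. A rough sketch of the pattern, using libm's cos as a stand-in for the nvidia entry points (names and the single-symbol structure here are hypothetical, not the actual compat/cuda code, which also handles Windows via LoadLibrary and checks errors properly):

```c
#include <dlfcn.h>

typedef double (*cos_fn)(double);

/* One static inline wrapper per entry point: open the library on first
 * use, cache the resolved pointer, then forward the call.  Keeping this
 * in a header avoids exporting any loader symbols from libavutil. */
static inline double dyn_cos(double x) {
    static cos_fn fn;
    if (!fn) {
        void *handle = dlopen("libm.so.6", RTLD_NOW);
        if (handle)
            fn = (cos_fn)dlsym(handle, "cos");
    }
    return fn ? fn(x) : 0.0;
}
```

Because everything is static inline, each library that includes the header gets its own copy of the loader state, which is the duplication BtbN calls ugly, but nothing leaks into the public ABI.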
[19:45:52 CEST] <philipl> BtbN: Indeed. Truly the hero we need.
[21:50:17 CEST] <cone-609> ffmpeg 03James Almer 07master:c45ba265fcbb: avformat/matroskaenc: fix Tags master on seekable output if there are tags after the last stream duration
[21:54:15 CEST] <BtbN> philipl, https://github.com/BtbN/FFmpeg/commit/32b5bda43fccc4cb8a7eb51f6de4df793c8ba73f that works. Now just have to incorporate it into the previous 20 commits.
[22:15:49 CEST] <Dresk|Dev> (re-paste from #ffmpeg) So help me out here, we realize getting hardware decoding of H.264 is a difficult thing in ffmpeg due to all the platform differences, but AVHWAccel, what does that actually do for ffmpeg?
[22:17:25 CEST] <Chloe> Dresk|Dev: wrong channel.
[22:17:33 CEST] <Dresk|Dev> Blast!
[22:20:53 CEST] <Dresk|Dev> Chloe: What's the deal with DXVA2 practically having no hardware decoding support? I'm actually super surprised my Titan X Pascal cannot hardware decode via ffmpeg, and decoding 4K videos is actually very taxing
[22:20:58 CEST] <philipl> BtbN: w00t. Looks great.
[22:21:25 CEST] <BtbN> Getting it into these commits might be kinda hard though. Some of them don't make sense with this anymore. Might have to re-apply them all manually
[22:21:42 CEST] <philipl> Yeah, but you probably want to squash some of these anyway right?
[22:22:12 CEST] <BtbN> philipl, not really, except for the last one.
[22:22:28 CEST] <Chloe> Dresk|Dev: no idea. I've never worked with hardware accel (as I dont have the hardware for it).
[00:00:00 CEST] --- Sat Oct 8 2016