[Ffmpeg-devel-irc] ffmpeg-devel.log.20170213
burek
burek021 at gmail.com
Tue Feb 14 03:05:03 EET 2017
[00:35:56 CET] <cone-040> ffmpeg 03Bela Bodecs 07master:2b9f92fcc548: avformat/hlsenc: fix stream level metadata handling
[01:33:38 CET] <jarkko> if there are several tickets about this shouldnt it be closed then
[01:33:39 CET] <jarkko> https://trac.ffmpeg.org/ticket/3035
[02:34:50 CET] <atomnuker> I wasn't expecting a mountain of emails on ffmpeg-devel on a sunday
[03:39:58 CET] <wm4> I'm wondering how you're supposed to cleanly switch quality levels with HLS
[03:40:17 CET] <wm4> wouldn't this require an API extension
[03:40:47 CET] <wm4> (you want to switch to different quality on segment boundaries, how it's supposed to work with adaptive streaming protocols)
[03:50:28 CET] <rcombs> doesn't it involve setting AVStream::discard
[03:51:22 CET] <wm4> yeah, but you'd set it in the middle of a segment
[03:51:39 CET] <wm4> which would not get you smooth switching
[03:52:07 CET] <wm4> I mean, you'd probably set it in the middle of a segment (the API user can't know where the boundaries are)
[04:31:58 CET] <rcombs> it might only switch when it hits a segment boundary? I'unno
[04:35:47 CET] <philipl> Yeah - I think if you observe other players, they switch on the segment boundary.
[04:36:12 CET] <rcombs> referring to lavf/hls.c in particular
[04:36:57 CET] <philipl> I mean, conceptually, with hls, the segment is considered an atomic unit; you'd never change anything in the middle, and the segments are small enough that latency is reasonable.
[04:51:58 CET] <wm4> the API user needs to set discard on the new AVStream and unset it on the old one in the "right" moment - when is that moment?
[07:53:49 CET] <wm4> michaelni: feel free to run it against your own tests https://github.com/wm4/FFmpeg/commits/filter-merge
[07:54:18 CET] <wm4> <jkqxz> The bogus_video.mp4 one seems to be some sort of filtering problem. Nothing comes out of the audio filtergraph until you've pushed a lot of frames into it, hence the absence of an audio stream in the output file when you truncate the video. <- I don't know either
[07:54:26 CET] <nevcairiel> and i thought the recent avfilter reworks were supposed to avoid the need for push filtering
[07:54:40 CET] <wm4> jkqxz: it was fixed by adding AV_BUFFERSRC_FLAG_PUSH
[07:54:49 CET] <wm4> really
[07:54:57 CET] <wm4> well it buffers audio data like crazy anyway
[07:55:04 CET] <nevcairiel> but why?
[07:55:07 CET] <wm4> and I have no idea why flushing doesn't push them out either
[08:00:01 CET] <wm4> I didn't know we had a coreimage filter
[08:07:33 CET] <wm4> might be better to make it work without AV_BUFFERSRC_FLAG_PUSH
[08:07:54 CET] <wm4> what's bad about "pushing" anyway?
[08:11:51 CET] <wm4> oh I guess it's because avfilter_graph_request_oldest somehow does the work
[08:12:03 CET] <wm4> whatever it does
[08:13:44 CET] <wm4> lol
[08:13:48 CET] <wm4> it uses pts values
[08:14:11 CET] <wm4> sure that will help if the consumer has huge or disconnected pts values on the same graph
[08:14:25 CET] Action: wm4 wonders if libavfilter will ever have sane scheduling
[08:26:55 CET] <cone-146> ffmpeg 03wm4 07master:e3af49b14bf3: AVFrame: add an opaque_ref field
[08:26:55 CET] <cone-146> ffmpeg 03wm4 07master:50708f4aa40c: hwcontext_dxva2: support D3D9Ex
[08:59:17 CET] <wm4> ffmpeg.c accesses AVStream.cur_dts, which is an internal field
[09:01:57 CET] <wm4> it's only set in deprecated code in libavformat too, so awesome
[09:23:12 CET] <nevcairiel> wm4: even if pts cause insane buffering, a flush should probably get it out of there
[09:23:26 CET] <wm4> probably
[09:23:35 CET] <wm4> but it doesn't buffer in master
[09:28:24 CET] <wm4> oh... master uses AV_BUFFERSRC_FLAG_PUSH
[09:28:31 CET] <wm4> I accidentally removed it in my commit
[09:28:47 CET] <wm4> <nevcairiel> and i thought the recent avfilter reworks were supposed to avoid the need for push filtering
[09:28:50 CET] <wm4> apparently not?
[09:29:18 CET] <nevcairiel> thats what nicolas said on the ml, but i suppose there are some special cases left
[09:29:25 CET] <nevcairiel> but if master uses push, easily fixed, i guess
[09:30:55 CET] <wm4> sure
[09:31:28 CET] <wm4> and apparently flushing doesn't "push" either
[09:31:43 CET] <wm4> (makes no sense to me but whatever, libavfilter data flow is just a broken mess)
[09:32:19 CET] <nevcairiel> I'm just happy you didnt run into the dataflow issue with audio frame sizes that I faced back when I first tried
[09:32:22 CET] <nevcairiel> which was a real blocker
[09:32:52 CET] <wm4> I felt blocked enough anyway
[09:38:25 CET] <wm4> nevcairiel, michaelni, BtbN: so can I push?
[09:42:52 CET] <nevcairiel> should probably give another chance to test the fixed version
[09:43:16 CET] <wm4> let me know when you've done that (you = whoever is interested)
[09:43:54 CET] <wm4> current state is at https://github.com/wm4/FFmpeg/commits/filter-merge
[10:36:45 CET] <ubitux> > Starting in Firefox 51, you can play MP4 files using the FLAC codec
[10:36:49 CET] <ubitux> mp4 with flac? wut
[10:38:00 CET] <nevcairiel> ubitux: http://git.xiph.org/?p=flac.git;a=blob;f=doc/isoflac.txt;h=574df9f54a779fca2e62c726703fc7be199d4c05;hb=HEAD
[10:39:31 CET] <ubitux> > draft
[10:39:33 CET] <ubitux> right
[10:39:52 CET] <nevcairiel> these things never leave draft status for reasons =p
[10:41:18 CET] <nevcairiel> probably too much red tape to get ISO to actually certify them
[10:43:57 CET] <wm4> can we remove the old vdpau API yet
[10:44:14 CET] <nevcairiel> next bump i think
[10:44:29 CET] <wm4> I mean the implementation
[10:44:48 CET] <wm4> (like Libav has it)
[10:45:03 CET] <nevcairiel> should just do it with the bump to avoid more drama
[10:45:43 CET] <nevcairiel> also, personally I consider removing the functionality of an API equally breaking as removing the API itself, whats the point of keeping some non-functional shell =p
[10:46:15 CET] <nevcairiel> but libav likes to do that for reasons
[10:46:17 CET] <wm4> not breaking builds
[10:46:26 CET] <nevcairiel> rather break functionality?
[10:46:28 CET] <nevcairiel> thats even worse
[10:46:29 CET] <nevcairiel> :D
[11:02:05 CET] <wm4> for functionality that nobody uses? not really
[11:16:38 CET] <michaelni> wm4, ill test filter-merge, but i need a bit of time
[11:19:26 CET] <mateo`> *
[11:33:31 CET] <BtbN> wm4, fine with me. Breaking master for nvenc might raise interest in finding out what the hell is going on.
[11:34:02 CET] <cone-146> ffmpeg 03Timo Rothenpieler 07master:8a3fea14ae94: avcodec/nvenc: set frame buffer format for mapped frames
[11:37:39 CET] <wm4> why the fuck is AVStream.codecpar not a public field?
[11:58:42 CET] <durandal_1707> wm4: is it in libav?
[12:00:32 CET] <wm4> durandal_1707: of course
[12:00:50 CET] <wm4> see my patch
[12:01:05 CET] <wm4> which makes codecpar public again (according to doxygen)
[12:39:59 CET] <StefanT29_> hi everyone!
[12:40:10 CET] <StefanT29_> can you please tell me if ffmpeg applied for gsoc 2017 and where can i find the ideas for this year's gsoc?
[12:41:15 CET] <J_Darnley> I think they want to. I don't know if they've been accepted. I also thought that the deadline for applications was later this month.
[12:41:47 CET] <J_Darnley> The projects are probably on the trac wiki again
[12:42:21 CET] <StefanT29_> J_Darnley: hi, thanks! the deadline was on 9th February
[12:44:50 CET] <StefanT29_> J_Darnley: i assume this is the wiki page https://trac.ffmpeg.org/wiki/SponsoringPrograms/GSoC/2017
[12:45:38 CET] <michaelni> wm4: one that fails with the branch is for example: ./ffmpeg -i ~/videos/matrixbench_mpeg2.mpg -vf scale=80x60 small.mpg && ./ffmpeg -i small.mpg -vframes 3 -metadata compilation="1" blah.m4a
[12:57:28 CET] <wm4> michaelni: what's wrong/failing, small.mpg or blah.m4a?
[12:57:50 CET] <michaelni> wm4, the 2nd file
[12:58:03 CET] <wm4> can you upload small.mpg?
[13:00:09 CET] <michaelni> ill have to recreate it, i overwrote it when checking if older versions worked
[13:00:53 CET] <michaelni> or i can upload what i have, it should repro too i think
[13:05:16 CET] <michaelni> wm4: (if this doesnt result in a empty output ill try to recreate the exact small.mpg) http://www.ffmpeg.org/~michael/small.mpg
[13:13:19 CET] <wm4> hm I guess I can reproduce
[13:14:02 CET] <StefanT29_> BBB_: hi! i just read the ideas from https://trac.ffmpeg.org/wiki/SponsoringPrograms/GSoC/2017 and i found your project very interesting (VP9 decoder improvements). Can you give me more details? (where should i go to find more about VP9 - i read just the wikipedia page of VP9 - and what documentation to read in order to start working on a task from the "Expected results" list)
[13:14:21 CET] <wm4> maybe some eof handling issue
[13:27:35 CET] <wm4> michaelni: I wish you had taken this much care when e.g. merging the mp4 edit list changes
[13:33:06 CET] <wm4> I wonder how can it mux 2 packets of 29 bytes, consisting of 0 frames and 0 samples
[13:37:05 CET] <BBB> StefanT29_: what type of tasks are you into? simd optimizations (speedups using vector math)? or threading optimizations? or new features? or somewhat more algorithmic work?
[13:37:35 CET] <BBB> StefanT29_: https://blogs.gnome.org/rbultje/2016/12/13/overview-of-the-vp9-video-codec/ is a reasonable read-up of vp9 technologies
[13:38:06 CET] <BBB> StefanT29_: https://forum.doom9.org/showthread.php?t=168947 is similar
[13:51:59 CET] <StefanT29_> BBB: for start, i can try something easier, just to familiarize myself with the sources and figure out how to approach the Qualification task
[13:53:31 CET] <StefanT29_> BBB: i had a class this year (i'm in my third year at my university) of "parallel and distributed algorithms", so it would be awesome to use that knowledge in a real project, rather than individual homeworks
[13:58:52 CET] <BBB> distributed isnt very useful
[13:59:45 CET] <BBB> threading is a form of parallelism, so you can start with that as a qual task
[14:01:28 CET] <BBB> (tile threading)
[14:32:19 CET] <BBB> michaelni: please dont review the cinepak decoding patch
[14:32:38 CET] <BBB> michaelni: weve discussed that extensively. you may have missed it, but theres pretty broad agreement we dont want that patch in that form
[14:32:51 CET] <BBB> michaelni: please dont give the guy false hope by re-starting the review process
[14:33:34 CET] <BBB> michaelni: the claim "faster decoding" is a lie; he means its faster to output that format than the original decoder + swscale from native format to that modified pix_fmt; decoding speed itself is not changed
[14:33:56 CET] <BBB> michaelni: and no, we dont want non-native output in libavcodec decoders, thats an endless process with no clear advantages
[14:38:43 CET] <BBB> StefanT29_: so to get started on tile threading, look at libavcodec/vp9.c, specifically vp9_decode_frame()
[14:38:57 CET] <BBB> StefanT29_: in there, search for // main tile decode loop
[14:39:23 CET] <BBB> (it starts about 40 lines below that comment)
[14:40:24 CET] <BBB> StefanT29_: the code should be pretty straightforward, but read that loop and try to understand that a tile is basically a subdivision of a frame, e.g. you can have 2 or 4 tile rows or tile cols, and each tile col (but not tile row) can be decoded independently in its own thread
[14:40:37 CET] <BBB> StefanT29_: the qual task is to implement that, using avctx->execute2()
[14:40:47 CET] <BBB> StefanT29_: see vp8.c for examples of how to use execute2
[14:41:42 CET] <BBB> StefanT29_: youll get into trouble with the loopfilter, so youll probably want to figure out a way to signal tile progress to a separate (or the main) thread so loopfilter can be done at the frame (not tile) level and that doesnt break
[14:42:00 CET] <BBB> StefanT29_: and youll have tons of questions while doing it, so just ask
[14:45:42 CET] <michaelni> BBB, please dont call things a lie (implying someone is lying)
[14:46:22 CET] <BBB> I have no idea how to respond to that other than to say what else am I supposed to do ?!?
[14:46:49 CET] <BBB> I can see an implicit interest in the patch (I had that too) because it makes stuff faster. thats a very reasonable thing to be interested in
[14:47:05 CET] <BBB> but we have discussed the patch, theres significant opposition to it
[14:47:39 CET] <BBB> I tend to think its not a good thing to review patches for stylistic issues if theres fundamental issues that probably require the patch to be rewritten (or thrown out)
[14:48:48 CET] <michaelni> there is opposition, but why i dont understand; no one opposing is affected by this code, and the author of the patch might very well be the only one interested in maintaining cinepak
[14:49:21 CET] <nevcairiel> there is opposition because we dont want ffmpeg to become the dumping ground of peoples personal pet features
[14:49:31 CET] <nevcairiel> (not more then it already is, anyway)
[14:49:54 CET] <BBB> maintaining isnt the same as trash can
[14:49:56 CET] <michaelni> so rejecting it totally and aggressively might result in no one maintaining cinepak, accepting it might result in one contributor more
[14:50:17 CET] <nevcairiel> his entire attitude has shown we can do fine without him
[14:50:21 CET] <BBB> I appreciate him contributing, but I dont think this particular contribution is a good idea
[14:50:28 CET] <BBB> and his attitude is indeed problematic
[14:50:33 CET] <BBB> but maybe that can change
[14:51:02 CET] <nevcairiel> and honestly, a 25 year old codec like cinepak doesnt need a maintainer, it just functions
[14:52:29 CET] <nevcairiel> in any case, there are several good technical arguments we all made repeatedly on why we think its a bad idea to muddle pixfmt conversions into a decoder
[14:52:39 CET] <michaelni> if tolerance of other peoples views is such that code others maintain gets rejected, that could put a hard limit on community size
[14:53:00 CET] <michaelni> more people more views more rejects
[14:53:22 CET] <BBB> this is all very philosophical michaelni
[14:53:34 CET] <BBB> michaelni: did you agree with the technical arguments we made against the patch?
[14:53:41 CET] <BBB> michaelni: or did you disagree? or no opinion?
[14:53:44 CET] <michaelni> some yes some no
[14:53:51 CET] <BBB> can you elaborate?
[14:54:09 CET] <michaelni> the environment var was bad and he removd it
[14:54:19 CET] <michaelni> you suggested an AVOption IIRC and the code now uses one
[14:54:23 CET] <kierank> The random RGB scaling is bad
[14:54:25 CET] <BBB> no I didn't
[14:54:33 CET] <nevcairiel> we suggested get_format
[14:54:34 CET] <BBB> I said for general cases, decoding options might use AVOption
[14:54:44 CET] <BBB> but in this case, it should use get_format() because thats the API designated for this
[14:54:45 CET] <michaelni> it uses get_format() too
[14:54:51 CET] <BBB> the avoption should go
[14:54:57 CET] <j-b> I thought the official policy for libavcodec was to output the native pix_fmt ?
[14:55:07 CET] <BBB> j-b: yup, thats my main argument against it
[14:55:18 CET] <BBB> j-b: although the native format is really YCoCg which we dont really support
[14:55:23 CET] <BBB> j-b: so it outputs rgb24 (dont ask)
[14:55:25 CET] <j-b> Cinepak is gray, palettized and YUV, no?
[14:55:34 CET] <BBB> ycocg, not yuv
[14:55:53 CET] <nevcairiel> its basically de-correlated rgb
[14:56:05 CET] <nevcairiel> when cinepak was designed, it was a rgb codec
[14:56:22 CET] <michaelni> i am pretty sure cinepak as in the binary codec is rgb
[14:56:24 CET] <BBB> ycocg is subsampled like yuv and has brightness decorrelation like y in yuv, but the chroma is basically a quick hack to allow seamless conversion to/from rgb
[14:56:57 CET] <BBB> michaelni: the format is ycocg... the native decoder may convert internally similar to how we do (that would make sense, since the palette version is in rgb also)
[14:57:13 CET] <BBB> plus ycocg-to-rgb conversion is trivial (basically a butterfly plus two shifts)
[14:57:20 CET] <BBB> ignoring the subsampling for a second
[14:57:31 CET] <nevcairiel> does cinepak even have subsampling
[14:57:40 CET] <BBB> ycocg has subsampling, yes
[14:57:45 CET] <j-b> I thought it did,yes
[14:57:48 CET] <BBB> co/cg is at half the resolution of y
[14:57:54 CET] <BBB> see decode_codebook
[14:58:00 CET] <BBB> like 420 yuv
[14:58:13 CET] <BBB> anyway
[14:58:18 CET] <BBB> getting very technical very quickly here
[14:58:24 CET] <BBB> back to the patch
[14:58:43 CET] <BBB> michaelni: please lets accept the opinion of some other devs that that patch isnt right for us
[14:59:07 CET] <kierank> I am getting on a plane but I object to this patch wholeheartedly
[14:59:11 CET] <kierank> as it breaks modularity
[14:59:13 CET] <BBB> michaelni: I agree speedy decoding+colorspace conversion has a place somewhere... but merging that together and calling it a decoder is (IMHO) not something where we should want to go, especially for game codecs
[14:59:22 CET] <BBB> michaelni: imagine what people would do to h264 if this became acceptable
[14:59:31 CET] <nevcairiel> its not technically a game codec by origin
[14:59:34 CET] <BBB> michaelni: I can imagine 16bit output for all (including 8) bitdepths
[14:59:36 CET] <BBB> because why not
[14:59:42 CET] <BBB> michaelni: I dont want to maintain that
[14:59:48 CET] <BBB> michaelni: decoders are hard enough as they are already
[15:00:36 CET] <BBB> kierank: yeah, modularity is a beatiful word for it
[15:01:15 CET] <nevcairiel> thats my main concern as well, its not strictly about cinepak, i couldn't care less what it does, but it sets a precedent of adding non-native processing into a decoder, so what's stopping that guy or other people from breaching more modularity barriers, and in a couple years all our popular decoders have crazy hacks for some use-cases some person once wanted for his proprietary setup somewhere
[15:01:29 CET] <kierank> draw_horiz_band is what should be doing this
[15:01:29 CET] <kierank> really
[15:02:04 CET] <michaelni> BBB, theres no question about the majority opinion here, i see that, i dont dispute that. I do dispute that this is the right choice to grow the community and i disagree that this is technically the right choice
[15:02:21 CET] <kierank> "grow the community"
[15:02:24 CET] <kierank> ...
[15:02:29 CET] <nevcairiel> you cant grow the community by pissing the existing community off, though
[15:02:40 CET] <kierank> ^ that
[15:02:50 CET] <michaelni> nevcairiel, i agree, this is bad
[15:03:37 CET] <michaelni> i wish there was a middle ground everyone would be happy with, not a patch author vs core team decision
[15:03:44 CET] <BBB> michaelni: I disagree that this is technically the right choice, you mean you disagree that his patch is technically wrong and you think its an ok approach? or you agree that his patch is technically wrong and dont like the approach either?
[15:03:47 CET] <nevcairiel> and we don't want to accept questionable patches just to "please" some random guy that doesn't seem very likeable in the first place
[15:03:50 CET] <kierank> we have it, it's called draw_horiz_band
[15:04:16 CET] <nevcairiel> kierank: but he couldnt use it in his proprietary application that he cant modify!
[15:04:31 CET] <nevcairiel> (or apparently, several even!)
[15:05:28 CET] <michaelni> BBB, what the patch does is move the colorspace conversion from per frame (after the decoder) to decoder init, aka from every frame to just once at init. This idea is quite old for VQ codecs and it was possibly the main reason why we have get_format(), i am not sure, its long ago
[15:06:07 CET] <BBB> its not really init, its in decode_codebook
[15:06:12 CET] <BBB> codebooks are per-frame things also
[15:06:15 CET] <BBB> theyre just not per-pixel
[15:06:31 CET] <BBB> but yes it reduces complexity somewhat at that level, at least w.r.t. colorspace conversion
[15:06:41 CET] <BBB> I dont think anyone is arguing against that fact
[15:06:50 CET] <BBB> the question is whether thats really worth it
[15:06:59 CET] <michaelni> ... but everyone hates the patch
[15:07:02 CET] <BBB> as j-b said, theres the general rule that a decoder should output its native format
[15:07:24 CET] <BBB> so get_format() vs. output native format is basically somewhat of a clash of two rules and we need to find a balance between these two
[15:07:43 CET] <BBB> do you want 16bit output for all bitdepths in the h264 decoder?
[15:07:48 CET] <kierank> I want uyvy in h264
[15:07:51 CET] <kierank> uyvy 10-bit
[15:07:55 CET] <kierank> that would solve my use-case a lot
[15:08:05 CET] <BBB> sure, why not
[15:08:12 CET] <wm4> I want tiled bit-packed 10 bit nv12 for performance
[15:08:12 CET] <kierank> but i realise it's a stupid idea
[15:08:14 CET] <BBB> so, michaelni... thats the balance that were trying to find :)
[15:08:27 CET] <kierank> oh i want chroma upsampling in h264 420 as well
[15:08:32 CET] <kierank> can we do that in the decoder
[15:08:34 CET] <kierank> i want to do it inplace
[15:08:45 CET] <BBB> michaelni: the job of a maintainer of the projet is a lot harder than the job of a patch contributor, beause were responsible for the whole project, not just that one feature in that one decoder
[15:08:51 CET] <BBB> darn it my c is broken
[15:08:52 CET] <kierank> proprietary software does this but it's bad design
[15:09:01 CET] <kierank> we have a responsibility as an OSS project to encourage good design
[15:09:11 CET] <kierank> I've REd decoders with a single function
[15:09:15 CET] <kierank> where *everything* is inlined
[15:09:23 CET] <kierank> asm, colourspace conversion, c++ templated etc
[15:09:23 CET] <BBB> was it vp8.c? :D
[15:09:34 CET] <BBB> vp8.c is literally a single decoding function
[15:09:35 CET] <BBB> not a joke
[15:09:39 CET] <BBB> (in the binary)
[15:09:47 CET] <kierank> yeah but not inlined asm, and templated pixel format outputs
[15:09:50 CET] <BBB> true
[15:24:08 CET] <BBB> J_Darnley: omg mbaff
[15:29:00 CET] <J_Darnley> yes
[16:46:20 CET] <Chloe> 14:03:37 <michaelni> i wish there was a middle ground everyone would be happy with, not a patch author vs core team decision
[16:46:36 CET] <Chloe> Do you not think this *is* the middle ground?
[16:50:51 CET] <michaelni> Chloe, when i did the micro review i was hoping that the last patch was acceptable to people; in fact i did the review as a kind of ping to see if people still dislike it, and the reaction on irc sounded to me like there are still major objections
[17:03:37 CET] <Chloe> michaelni: Ok that sounds reasonable.
[17:35:55 CET] <kierank> BBB: we are so evil
[17:36:46 CET] <durandal_1707> so i have found container that stores multiple video frames in single packet
[17:37:29 CET] <atana> michaelni, ping
[17:37:34 CET] <atomnuker> durandal_1707: can a parser split them up
[17:37:51 CET] <durandal_1707> atomnuker: h264 fails
[17:37:52 CET] <michaelni> atana, pong
[17:38:05 CET] <atana> michaelni, there is million song dataset which could be used for evaluation of music information retrieval systems
[17:38:13 CET] <atana> http://labrosa.ee.columbia.edu/millionsong/
[17:38:29 CET] <atana> also please check mail
[17:39:28 CET] <nevcairiel> durandal_1707: if you instruct it to perform full parsing h264 should be able to handle this, assuming there is annexb start codes or mp4 length codes in the appropriate format and the necessary extradata
[17:39:48 CET] <atana> also there are MIR dataset for music information retrieval competitions http://colinraffel.com/wiki/mir_datasets not sure which will be good of our purpose
[17:40:46 CET] <michaelni> atana, any public dataset should do
[17:41:32 CET] <atana> also how much more time do you expect is needed to finish the project successfully ? any estimate?
[17:41:59 CET] <atana> million song dataset is freely available
[17:42:41 CET] <durandal_1707> nevcairiel: if i use raw h264 demuxer it returns all frames. otherwise it returns just some
[17:42:43 CET] <atana> I suppose it has only audio features and metadata for songs. let me check
[17:43:25 CET] <michaelni> atana, if you work on it full time the time remaining may be enough, otherwise i dont think its possible
[17:43:45 CET] <atana> ok.
[17:45:37 CET] <atana> btw, the dataset does not include any audio, only the derived features but it's mentioned that sample audio can be fetched from services like 7digital, using code which they provide.
[17:46:32 CET] <atana> michaelni, code link https://github.com/tbertinmahieux/MSongsDB/tree/master/Tasks_Demos/Preview7digital
[17:48:35 CET] <Chloe> what's the preferred environment for ffmpeg dev on windows?
[17:49:12 CET] <nevcairiel> easiest is likely getting msys2
[17:56:23 CET] <michaelni> atana, also we dont need millions of songs, something like 100 is plenty for testing at this point, having a huge number would be too slow also
[17:57:19 CET] <atana> michaelni, then may be we should use one form this list http://colinraffel.com/wiki/mir_datasets
[17:58:09 CET] <atana> does our use come under non-commercial or not?
[18:06:33 CET] <michaelni> atana, have you found any dataset that can be freely downloaded ?
[18:07:33 CET] <atana> still searching.. I have gone through the list but some of them are not freely available and some doesn't provide audio.
[18:11:01 CET] <atana> michaelni, http://www.music-ir.org/mirex/wiki/2014:Audio_Fingerprinting under the Queryset section they provide 1062 clips in wav format, is it something of interest to us?
[18:24:11 CET] <michaelni> atana, i guess we can use that for now
[18:25:31 CET] <atana> it's a noisy data
[18:30:08 CET] <michaelni> atana, are the originals available anywhere to be downloaded ?
[18:30:27 CET] <michaelni> atana, but really it doesnt matter so much, we can use this for now
[18:32:16 CET] <atana> I was looking for some other dataset. Ok we will go with this then.
[18:33:04 CET] <cone-855> ffmpeg 03Alex Converse 07master:3f1a38c9194d: aac_latm: Allow unaligned AudioSpecificConfig
[18:33:05 CET] <cone-855> ffmpeg 03Alex Converse 07master:20ea8bf9390b: aac_latm: Copy whole AudioSpecificConfig when it is sized.
[18:33:06 CET] <cone-855> ffmpeg 03Alex Converse 07master:3bb24fc344f0: aacsbr: Associate SBR data with AAC elements on init
[18:33:07 CET] <cone-855> ffmpeg 03Alex Converse 07master:1fce67d6403a: aac_latm: Align inband PCE to the start of the payload
[18:33:46 CET] <atana> downloading it.
[18:34:45 CET] <atana> michaelni, it has around 1000 clips, should we use all or some sampling
[18:35:32 CET] <michaelni> atana, i think we should use a subset unless using all is fast, it would be bad if a single test run takes half an hour or longer
[18:36:13 CET] <michaelni> atana, also i found this: http://opihi.cs.uvic.ca/sound/genres.tar.gz
[18:36:26 CET] <michaelni> i am not sure but maybe these are the non noisy originals
[18:36:37 CET] <michaelni> still downloading so it might be anything
[18:37:23 CET] <atana> size is around 1 gb will take some time
[18:39:12 CET] <atana> michaelni, this one is set of classical music. 330 musical recordings
[18:39:16 CET] <atana> https://homes.cs.washington.edu/~thickstn/musicnet.html
[18:47:33 CET] <michaelni> atana, the http://opihi.cs.uvic.ca/sound/genres.tar.gz file seems to be the clear non noisy data
[18:48:04 CET] <michaelni> so with the 2 we have some good samples to test
[18:48:35 CET] <michaelni> atana, hows the implementation ? did you change anything since saturday ?
[18:51:28 CET] <atana> I am still downloading the data.
[18:51:45 CET] <atana> I haven't made any change, not sure what to change
[18:52:32 CET] <atana> the plan of making consecutive pairs doesn't sound good, as discussed last time
[18:53:53 CET] <atana> Is there anything more to change for now(before evaluation)?
[18:54:01 CET] <michaelni> atana, using 4 consecutive maxima frequency values should work, pairs wont i think
[18:55:12 CET] <michaelni> atana, i think the count > 3 i suggested on saturday is bad, it needs to be higher, it loses too many points
[18:55:32 CET] <atana> wouldn't the time difference be zero?
[18:55:49 CET] <michaelni> time diff would be 0 yes
[18:56:16 CET] <michaelni> atana, also how do you want to evaluate it ?
[18:56:36 CET] <michaelni> we have no way to store or lookup the fingerprints yet
[18:56:48 CET] <atana> I still didn't get it? shouldn't the anchor point be paired with other points too (points from other fft window) to generate hash?
[18:57:36 CET] <atana> michaelni, can we use elastic search for storing or looking up or do you have something else in mind
[18:58:15 CET] <michaelni> what the paper describes is to find 2D maxima and to pair them giving 2 frequencies and a time diff, but the implementation is just 2D so a pair is too weak
[18:58:26 CET] <michaelni> "implementation is just 1D"
[19:01:59 CET] <michaelni> atana, i was thinking of a very simple file based database, simply put N bits of the fingerprint in the filename and store the rest of each such fingerprint and song info in the file
[19:02:47 CET] <michaelni> i am not sure what elastic search is but wikipedia says java, and a java dependency would not be ok for the final code
[19:03:53 CET] <atana> we have some fft windows (let's say 5) with their time information. If we look at the constellation map (freq-time axis) and divide the columns into strips (each of which can be thought of as an fft window) then we have points and time info. Isn't this the info conveyed in 2D ?
[19:06:38 CET] <michaelni> atana, there are several problems, the first is that there is no implementation of finding the maxima along the time axis
[19:07:06 CET] <michaelni> the 2nd problem is that the correlation of a window and the next is quite high
[19:07:57 CET] <michaelni> also there are too many points if the maxima are just taken along one axis and then paired
[19:08:06 CET] <atana> In the paper I guess they said they choose stable points/peaks taking amplitude into account
[19:10:41 CET] <atana> 'The peaks in each time-frequency locality are also chosen according amplitude, with the justification that the highest amplitude peaks are most likely to survive the distortions listed above.'
[19:12:15 CET] <michaelni> the paper works with local maxima in 2D, if you want to implement that we can do that, but that moves us back to square 1
[19:16:09 CET] <atana> could you explain taking consecutive pairs for hashing again? would it also be from different fft windows or just the same? if it's from the same, then every time the time diff would be 0.
[19:17:26 CET] <michaelni> i would suggest to try to work with 1 fft window, and take 4 consecutive local 1D maxima, that is their frequencies
[19:17:47 CET] <michaelni> because that is very close to what you implemented
[19:18:21 CET] <jarkko> ffmpeg doesnt compile anymore, it did yesterday (git)
[19:18:41 CET] <jarkko> common.mak:60: recipe for target 'libavcodec/allcodecs.o' failed
[19:19:02 CET] <jarkko> libavformat/movenc.c: In function mov_flush_fragment:
[19:19:12 CET] <jarkko> libavformat/movenc.c:4540:12: warning: assuming signed overflow does not occur when assuming that (X - c) > X is always false [-Wstrict-overflow]
[19:19:13 CET] <jarkko> static int mov_flush_fragment(AVFormatContext *s, int force)
[19:19:32 CET] <jarkko> gcc (Ubuntu 5.4.0-6ubuntu1~16.04.4) 5.4.0 20160609
[19:19:34 CET] <JEEB> that's a warning
[19:19:44 CET] <jarkko> it fails to compile
[19:19:56 CET] <jarkko> make: *** Waiting for unfinished jobs....
[19:20:09 CET] <JEEB> just pastebin the whole thing :P
[19:20:13 CET] <JEEB> and link here
[19:20:25 CET] <JEEB> also #ffmpeg, not here
[19:21:09 CET] <jarkko> you are sure about ffmpeg?
[19:21:11 CET] <jarkko> and not here?
[19:21:12 CET] <jarkko> http://pastebin.com/cxREzLjj
[19:21:17 CET] <JEEB> yes
[19:24:07 CET] <atana> michaelni, I will do it. Also can you give an example of the file database you explained. I am not clear about the point "store the rest of each such fingerprint and song info in the file"
[19:29:03 CET] <michaelni> atana, for example if you have frequencies like 112:234:567:888 then the filename could be file112.data
[19:29:38 CET] <atana> ok that's anchor point identification
[19:30:08 CET] <michaelni> and the file would contain 112,234,567,888,1001 songname, artist, and location in time where the match is
[19:30:25 CET] <michaelni> and all that for every case that starts with 112
[19:30:48 CET] <michaelni> just a very simple way to store and lookup data
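The filename scheme michaelni describes (frequencies 112:234:567:888 map to file112.data) could look like this trivial sketch; `fingerprint_filename` is a hypothetical helper, not code from the repo:

```c
#include <stdio.h>
#include <stddef.h>

/* Derive the database filename from the first peak frequency, so every
 * fingerprint starting with e.g. 112 lands in file112.data.  Lookup is
 * then just: build the name, open the file, scan its entries. */
static void fingerprint_filename(char *buf, size_t size, int first_freq)
{
    snprintf(buf, size, "file%d.data", first_freq);
}
```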
[19:31:02 CET] <atana> so one file for unique anchor point
[19:31:27 CET] <atomnuker> later on can we have like an index which maps that identification to filenames?
[19:31:52 CET] <atana> what about time info?
[19:32:29 CET] <michaelni> atana, with each entry in the file the time should be stored too, thats the number of the fft window used, counted from the start of the song
[19:32:54 CET] <michaelni> atana, a list of songs/artists that maps to song ids that then are used can be added later yes
[19:33:57 CET] <michaelni> also we will need to put more than the first frequency at some point (like one in the file and one in a directory) to make this efficient when there are many songs
[19:34:21 CET] <michaelni> but for now lets just get it working so we have start point we can test and continue from
[19:35:31 CET] <michaelni> s/atana/atomnuker/ in, "atana, a list of songs/artists that maps to song ids that then are used can be added later yes"
[19:37:26 CET] <atana> ok
[19:40:46 CET] <atomnuker> nice
[19:41:39 CET] <jarkko> ?
[19:43:14 CET] <J_Darnley> What's this you're working on? Some sort audio fingerprinting?
[19:44:55 CET] <jarkko> is it possible to code an audio/video codec so that it notices the same patterns in audio/video and uses the same buffer over and over again? never heard that audio would work like this
[20:05:42 CET] <Chloe> jarkko: reconfigure
[20:06:12 CET] <peloverde> michaelni: Can you please send me the sample from 79a98294da6cd85f8c86b34764c5e0c43b09eea3? We've relaxed the added constraint several times and I'd like to make sure we aren't rebreaking things
[20:17:00 CET] <michaelni> atana, for the "file database", we will also need to add an AVOption to the peekpoint2 filter so the user can specify whether she wants to add the current song into the database or wants to look it up
[20:18:27 CET] <atana> yes.
[20:26:28 CET] <atana> michaelni, invalid points checking is problematic while creating consecutive pairs
[20:27:41 CET] <atana> is there any efficient way to get rid of those invalid points? should I copy all the valid points into a new array or the same cpt array
[20:28:03 CET] <rcombs> jarkko: there's always https://en.wikipedia.org/wiki/Kernel_same-page_merging I guess
[20:28:05 CET] <atana> that way valid points will be in sequential order
[20:30:23 CET] <jarkko> are there lots of ways x265 can be improved?
[20:30:36 CET] <JEEB> jarkko: plenty
[20:30:48 CET] <tmm1> rcombs: figured out the seek issue, bad cleanup of the linked list in the flush callback
[20:30:49 CET] <JEEB> x265 is still young and full of stuff that's borked
[20:30:57 CET] <JEEB> jarkko: but this is not really the correct channel
[20:31:01 CET] <JEEB> there's #x265 for it :P
[20:31:03 CET] <JEEB> and its mailing list
[20:32:22 CET] <michaelni> atana, you can do a simple pass over the array and copy the valid points only, you could copy to a new array or the same array
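The pass michaelni suggests is a simple stable compaction; a sketch under the assumption that invalid points are marked with frequency -1 (names hypothetical):

```c
#include <stddef.h>

/* In-place compaction: keep only valid points (frequency != -1),
 * preserving their order.  mark_index starts at 0 and no extra checks
 * on it are needed, matching the simplification discussed below.
 * Returns the new number of valid points. */
static size_t compact_points(int *freq, size_t n)
{
    size_t mark_index = 0;
    for (size_t i = 0; i < n; i++)
        if (freq[i] != -1)
            freq[mark_index++] = freq[i];
    return mark_index;
}
```

After this pass the valid points are in sequential order, so consecutive pairing works on them directly.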
[20:43:01 CET] <rcombs> tmm1: ah, nice
[20:51:40 CET] <tmm1> i spent some more time on the vt hwaccel and its a disaster in there
[20:57:22 CET] <Fenrirthviti> jarkko: go fix the licensing as a starting point.
[21:05:14 CET] <BBB> atomnuker: is the opus encoder marked as experimental?
[21:10:06 CET] <durandal_1707> atomnuker: what about multichannel support?
[21:12:25 CET] <durandal_1707> michaelni: swscale dithering apparently outputs out of range pixels
[21:14:10 CET] <atomnuker> BBB: yes
[21:15:41 CET] <atomnuker> durandal_1707: only mono/stereo supported for now, though each CeltFrame contains 1 coupled pair so adding more channels is a matter of figuring out where to code another CeltFrame
[21:17:20 CET] <BBB> atomnuker: ok, cool, no worries then
[21:21:29 CET] <jkqxz> wm4: Is there any more I can do towards filter-merge? I saw michaelni posted another difference earlier today, has that one been investigated yet?
[21:22:29 CET] <michaelni> jkqxz, if theres a new version for me to test, someone tell me
[21:26:49 CET] <jkqxz> michaelni: Do you find any more differences/failures which I could investigate? From the above it looks like wm4 was working on the one you posted ~8:30 ago, but I don't see any conclusion.
[21:52:14 CET] <atana> michaelni, I have added consecutive points hash concept and updated the repo. could you please check and let me know if it's ok
[22:01:24 CET] <michaelni> jkqxz, i saw more, some may be duplicates, also some differences, one for example ./ffmpeg -i ~/tickets/1292/advanced_prediction.avi test.avi (file output is different, didnt investigate why/what)
[22:06:16 CET] <michaelni> atana, it looks a bit more complex than needed
[22:06:53 CET] <atana> michaelni, what's different?
[22:08:09 CET] <michaelni> atana, you dont need the checks on mark_index i think
[22:09:16 CET] <michaelni> just the if (frequency != -1) {copy, increase mark_index }
[22:09:49 CET] <michaelni> nothing wrong with how you wrote it but simpler is better as its less confusing
[22:11:47 CET] <michaelni> atana, mark_index would of course have to start at 0 without the checks
[22:12:28 CET] <atana> ok will do it. anything else with the hash part?
[22:13:39 CET] <michaelni> atana, at least for me with the mp3 I am trying, the count > 3 removes too much, so the fingerprints are containing "repeated" frequencies from the next fft window
[22:14:34 CET] <atana> yes you mentioned it earlier. It should be high
[22:14:57 CET] <michaelni> yes, it seems so, 3 is too aggressively removing points
[22:15:37 CET] <atomnuker> when are we going to get news if we made it into gsoc this year?
[22:16:21 CET] <atana> michaelni, what high value should be used for now?
[22:17:16 CET] <michaelni> atana, a very random guess is 30 but we will surely tune this more once we have some test working with the audio samples
[22:17:50 CET] <michaelni> atomnuker, feb 29 see https://summerofcode.withgoogle.com/how-it-works/#timeline
[22:18:07 CET] <atana> sure, so for now I will put 30 and will make update
[22:18:26 CET] <michaelni> atana, ok, next we need the file db & avoption
[22:18:44 CET] <michaelni> atana, for storing data in the file, store it binary not text
[22:19:42 CET] <atana> ok
[22:21:35 CET] <michaelni> atana, for binary read/write in uint8_t array see #include "bytestream.h" and functions like bytestream_get_byte() bytestream_get_le16()
[22:21:37 CET] <atana> so content should be f1:f2:f3:f4:t2-t1:t3-t2:t4-t3:t1 (start point):songid in binary format?
[22:22:36 CET] <michaelni> atana, i think it would be best if we store the whole fft window for each entry, that is up to 8 frequencies, that way we have more to compare
[22:23:22 CET] <michaelni> that is f1, f2, f3, f4, f5, ..., f8. (start point):songid in binary format
[22:23:52 CET] <atana> timediff too? how many?
[22:24:42 CET] <michaelni> atana, i think timediff isnt useful with the current implementation
[22:26:17 CET] <michaelni> we can take just fingerprints where timediff is 0
[22:26:26 CET] <atana> so, say given a hash, how will the search be performed? what are the steps? first would be to look for the file with that starting id
[22:26:47 CET] <michaelni> yes
[22:27:05 CET] <michaelni> then look at the entries (there can be more than one)
[22:27:35 CET] <michaelni> compare current fft window peaks with it find number of matches
[22:28:12 CET] <atana> match should be in order right (f1, f2, ..f8) all should match?
[22:28:43 CET] <michaelni> atana, yes, this possibly would be slightly easier if -1 entries are left in the array
[22:30:35 CET] <michaelni> when a well matching entry is found the time of entry vs time of input and the songid would then be "stored"
[22:32:03 CET] <michaelni> stored as in array[matching_entry.time - song_to_lookup.time] += "songid" <-- that is not C code, just to explain the idea
[22:32:51 CET] <michaelni> and then the entry in array[] that has a songid occur most often is the final match, i think thats also what the paper suggested
[22:33:34 CET] <michaelni> but we can do just the file thing first and just accept the first matching entry
[22:33:51 CET] <michaelni> probably better to do it in 2 steps as its a bit complex
[22:34:34 CET] <atana> yes.
[22:35:12 CET] <BtbN> philipl, i think I see where cuvid.c leaks an active cuda context. If the cuvid_output_frame runs into the EOF or EAGAIN case, it just returns without ever popping it.
[22:35:41 CET] <philipl> BtbN: I'll take a look.
[22:35:45 CET] <michaelni> atana, for 2nd step i think plain array with qsort() will be easiest (requiring no complex data structs), ill explain it once the first part is done
[22:35:52 CET] <philipl> BtbN: Did you try Miroslav's patch?
[22:35:56 CET] <cone-388> ffmpeg 03Lou Logan 07master:fb32c561c382: doc/protocols: add option usage description
[22:36:09 CET] <BtbN> philipl, yes, but it's the wrong approach. We don't want a globally pushed cuda context.
[22:36:17 CET] <BtbN> I'll push/pop around the encode function instead.
[22:36:30 CET] <atana> sure. It's better to do things step wise
[22:36:32 CET] <philipl> Ok. I'm glad he was able to help.
[22:36:55 CET] <philipl> And yes, you should push/pop selectively like we do everywhere else.
[22:38:22 CET] <philipl> BtbN: where is this problem in output_frame? EOF/EAGAIN set the ret code and then continue into the error: section and the pop happens.
[22:39:01 CET] <BtbN> ah, right.
[22:39:02 CET] <BtbN> Hm
[22:39:13 CET] <BtbN> Then... where is the cuda context coming from that makes it work right now.
[22:41:03 CET] <BtbN> I'd really like C++ style Lock-Guards for this...
[22:41:30 CET] <philipl> So, if we go back to ordering, having nvenc init first may have some implicit effect on this.
[22:41:46 CET] <philipl> Some undefined cuda resource behaviour that ends up making it work
[22:41:59 CET] <BtbN> well, it fails on encodepicture
[22:42:07 CET] <BtbN> so there has to be a cuda context bound, somehow
[22:42:17 CET] <philipl> Doesn't nvenc init require a cuda context?
[22:42:25 CET] <BtbN> Yes, but it's immediately popped
[22:42:37 CET] <BtbN> and never pushed
[22:42:48 CET] <BtbN> there is not a single call to PushCurrent in nvenc.c
[22:42:53 CET] <jkqxz> michaelni: Ok, that one is probably an improvement. The stream itself is identical, but because of the later muxer start there is more metadata available - the new file additionally contains a vprp header with the aspect ratio in it.
[22:42:57 CET] <jkqxz> michaelni: Another?
[22:43:17 CET] <BtbN> and if it's coming via hw_frames_ctx, it's even never created, but the external one is used
[22:43:47 CET] <philipl> BtbN: Yeah, so it looks obvious in retrospect that you'd need to push the context.
[22:43:56 CET] <philipl> I think it's simply dumb luck that it ever worked.
[22:43:58 CET] <BtbN> Yes, it makes sense to push it.
[22:44:05 CET] <BtbN> And I'm wondering _why_ it ever worked.
[22:44:14 CET] <BtbN> So we have to be leaking a context somewhere
[22:44:30 CET] <philipl> In the old ffmpeg.c, the context still came from hw_frames_ctx?
[22:44:34 CET] <BtbN> Which is consistent with people reporting issues with using nvenc/cuvid multiple times in the same API application
[22:44:44 CET] <BtbN> the context always comes from there
[22:44:52 CET] <BtbN> nothing about that changed yet
[22:44:56 CET] <philipl> K
[22:45:21 CET] <philipl> I wonder if we could add push/pop to the coverity tooling, so it can detect leaks.
[22:45:27 CET] <philipl> The indirection might make that hard.
[22:47:15 CET] <philipl> The documentation for modelling is so bad, I have no idea if it's possible.
[22:48:46 CET] <atana> michaelni, what do GetByteContext, PutByteContext do? is Get* for reading and Put* for writing to a file?
[22:53:49 CET] <michaelni> atana, Get is for reading from an array, Put is for writing into an array, the array then would need to be read/written to a file. But maybe using avio is easier as it can write / read to the file directly
[22:54:33 CET] <cone-388> ffmpeg 03Paul B Mahol 07master:72864547f91f: avfilter/vf_lut: do not always explicitly clip pixels
[22:54:34 CET] <cone-388> ffmpeg 03Paul B Mahol 07master:aa234698e92f: avfilter/vf_lut: make it possible to clip pixel values that are out of valid range
[22:54:53 CET] <atana> avio?
[22:54:59 CET] <michaelni> atana, see tools/aviocat.c for how to use avio, if it makes sense to you we can use it
[22:55:07 CET] <BtbN> philipl, hm, seems like something static analysis might be able to catch though. Every PushContext call has to have a matching Pop one
[22:56:04 CET] <michaelni> atana, with avio you can use avio_wl16() and avio_rl16() and similar functions to read / write directly, may be easier
[22:57:41 CET] <michaelni> atana, does tools/aviocat.c make sense or should i explain ?
[22:58:09 CET] <atana> I am going through it
[22:58:24 CET] <michaelni> ok, no hurry
[23:02:34 CET] <BtbN> philipl, https://github.com/BtbN/FFmpeg/commit/ba7563b0c4829560fb80b2da7e4b3213da04c20e
[23:08:17 CET] <atana> michaelni, avio_open2(&input, input_url, AVIO_FLAG_READ, NULL, NULL) is input_url a file ?
[23:08:52 CET] <JEEB> a file or a protocol url
[23:09:03 CET] <JEEB> whatever AVProtocols can read
[23:09:07 CET] <JEEB> (or write)
[23:11:58 CET] <Chloe> I dont understand what 'subframe' means exactly. Looking at decoders which use it doesn't really help. But from what I can tell it doesn't look like it would be useful for subtitles
[23:13:54 CET] <durandal_1707> multiple frames in single packet
[23:16:04 CET] <Chloe> Right, this won't help me I think. Since it's not sequential data, or multiple interleaved. It's exactly like different streams.
[23:16:05 CET] <Chloe> This sucks
[23:21:37 CET] <BBB> j-b: alive?
[23:21:42 CET] <j-b> BBB: yes
[23:21:55 CET] <j-b> well, I think I am
[23:23:42 CET] <cone-388> ffmpeg 03Mark Thompson 07master:c1a5fca06f75: lavc: Add device context field to AVCodecContext
[23:28:50 CET] <michaelni> atana, i think we cannot use avio, we need to append to files to add new entries and avio seems not able to do that
[23:30:16 CET] <atana> yes append is needed to add more entries.
[23:30:57 CET] <atana> does bytestream do that?
[23:31:40 CET] <michaelni> atana, avpriv_open + bytestream + write() + close()
[23:32:27 CET] <michaelni> atana, avpriv_open is just like normal C open()
[23:33:17 CET] <atana> ok
[23:35:38 CET] <michaelni> atana, probably easiest if you create a fixed length array like uint8_t entry[16 + 1234] and then write into that with bytestream or bytestream2 and then avpriv_open, write(entry), close
[23:37:25 CET] <michaelni> atana, bytestream2_init_writer() + bytestream2_put_le16() + ... to write into the array
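Roughly what the entry packing could look like. This sketch uses plain stdio and hand-rolled little-endian writers as a stand-in for FFmpeg's bytestream2_init_writer()/bytestream2_put_le16() and avpriv_open()/write()/close(); only the record layout (8 x 2-byte frequencies + 4-byte songid + 2-byte time = 22 bytes, per the size discussion below) is taken from the conversation:

```c
#include <stdint.h>
#include <stdio.h>

#define ENTRY_SIZE (16 + 4 + 2)  /* 8 le16 freqs + le32 songid + le16 time */

static void put_le16(uint8_t *p, uint16_t v) { p[0] = v & 0xff; p[1] = v >> 8; }
static void put_le32(uint8_t *p, uint32_t v) { put_le16(p, v & 0xffff); put_le16(p + 2, v >> 16); }

/* Serialize one fingerprint entry into a fixed-size binary record. */
static void pack_entry(uint8_t entry[ENTRY_SIZE], const uint16_t freq[8],
                       uint32_t songid, uint16_t time)
{
    int i;
    for (i = 0; i < 8; i++)
        put_le16(entry + 2 * i, freq[i]);
    put_le32(entry + 16, songid);
    put_le16(entry + 20, time);
}

/* Append one record to the per-frequency database file; opening in
 * append mode keeps existing entries, which is why plain open/write
 * was preferred over avio here. */
static int append_entry(const char *path, const uint8_t entry[ENTRY_SIZE])
{
    FILE *f = fopen(path, "ab");
    size_t n;
    if (!f)
        return -1;
    n = fwrite(entry, 1, ENTRY_SIZE, f);
    fclose(f);
    return n == ENTRY_SIZE ? 0 : -1;
}
```

Reading entries back is the mirror image: read ENTRY_SIZE bytes at a time and decode with the matching get_le16/get_le32.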
[23:39:48 CET] <atana> ok I will do it
[23:40:24 CET] <michaelni> sadly a bit more complex than avio
[23:44:09 CET] <atana> why size 16+1234 ?
[23:50:56 CET] <michaelni> atana, 16 is for 2 byte per each of the 8 maxima for the frequency
[23:51:09 CET] <michaelni> atana, 1234 is bad :)
[23:52:04 CET] <michaelni> 1234 should be the size needed to store songid, time, and anything else
[23:52:48 CET] <michaelni> if you store songid as string its longer than if you store song id as integer
[23:53:32 CET] <michaelni> atana, songid as integer will fit in 4 byte (32bit) and time as number of fft windows should be fine with 2 byte (16bit)
[23:54:03 CET] <michaelni> so maybe it would be 16 + 4 + 2, depends on what we need to store per entry
[00:00:00 CET] --- Tue Feb 14 2017