[Ffmpeg-devel-irc] ffmpeg-devel.log.20130819

burek burek021 at gmail.com
Tue Aug 20 02:05:02 CEST 2013


[00:28] <cone-119> ffmpeg.git 03Carl Eugen Hoyos 07master:037af63b3373: Fix frame width and height for some targa_y216 samples.
[04:56] <BBB> ubitux: ah you're done?
[04:57] <BBB> cool
[04:58] <BBB> ubitux: did you confirm that the keyframe of the vp90-2-00-quantizer-23.webm (etc.) files now all decode correctly (i.e. output matches what vpxdec produces)?
[05:10] <BBB> ubitux: tested, looks correct to me, nice!
[05:11] <BBB> ubitux: pushed
[05:22] <BBB> ubitux: ok, things to do... mc isn't yet complete (I'm currently working on that, but happy to pick up something else if you want to do this), loopfilter for inter frames but that should probably wait for mc to be done so we can actually test it (it basically just involves handling the skip flag)
[05:22] <BBB> ubitux: resolution switching (a little like svc), bw adaptivity for inter frames (basically mirror the keyframe code and use that for interframe-specific probabilities)
[05:23] <BBB> ubitux: start writing simd, add 16-pixel switchable loopfilter dsp functions for 8wd and 4wd mixes (currently we do 2 calls if that happens, one 8px 4wd and one 8wd 8px, but we should be able to merge that into one 16px variable wd (since 8wd is a superset of 4wd) for faster (sse2) simd; the same logic could also be used to create 32px (avx2) versions
[05:23] <BBB> ubitux: fate tests :-p
[05:25] <BBB> ubitux: oh, something other simd'y, write idct_add_block VP9DSPContext functions that basically are like itxfm_add, but dct_dct only, and do multiple at a time, for inter frames
[05:26] <BBB> ubitux: the idea is to do 2 or even 4 4x4 idct adds in a single loop
[05:26] <BBB> since idct4 only requires 4 word registers, so mmx; using sse2, we can do 2 at a time; using avx2, possibly 4
[05:26] <BBB> I don't know exactly what that would look like, but maybe a fun experiment
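A minimal sketch of the batched idct_add idea BBB describes above. The names are hypothetical and the actual inverse transform is replaced by a plain clipped residual add, so only the multiple-4x4-blocks-per-loop structure (what wider SSE2/AVX2 registers would exploit) is illustrated:

```c
#include <stdint.h>
#include <stddef.h>

static uint8_t clip_u8(int v)
{
    return v < 0 ? 0 : v > 255 ? 255 : (uint8_t)v;
}

/* Process n_blocks horizontally adjacent 4x4 blocks in one loop,
 * instead of one call per block.  'residual' holds the blocks one
 * after another (16 coefficients each).  The transform itself is
 * elided; here we only add the residual to dst with 8-bit clipping. */
static void idct4x4_add_batch(uint8_t *dst, ptrdiff_t stride,
                              const int16_t *residual, int n_blocks)
{
    for (int y = 0; y < 4; y++)
        for (int b = 0; b < n_blocks; b++)
            for (int x = 0; x < 4; x++) {
                int r = residual[b * 16 + y * 4 + x];
                dst[y * stride + b * 4 + x] =
                    clip_u8(dst[y * stride + b * 4 + x] + r);
            }
}
```

With SSE2 one row of two blocks (8 pixels) fits one register; with AVX2, four blocks.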
[09:13] <ubitux> BBB: i'm curious how mc works, what could i possibly do in that area? otherwise i'd like to try writing some simd, for whatever part, eventually one i haven't touched yet
[09:26] <BBB> I can commit what I have done so far, it's basically Y >=8x8; sub8x8 and uv are small pieces of extra wrapper code that still need doing; also, inverse transform for inter block coding isn't yet done
[09:26] <BBB> or better yet, I can show you a patch so you know how far it is
[09:26] <BBB> sec
[09:28] <BBB> ubitux: http://pastebin.com/KLxdPLHJ
[09:31] <BBB> better yet, http://pastebin.com/Un4sVc3V (minus one bug)
[09:31] <BBB> probably still horribly broken
[09:34] <BBB> ubitux: if you want, take that and finish it :)
[09:34] <BBB> ubitux: then I'll go ... and ... maybe take a vacation
[09:35] <ubitux> mmh; can you commit a version that builds where i can gradually fill the gaps?
[09:35] <BBB> does this build?
[09:35] <ubitux> no idea
[09:35] <BBB> should build
[09:35] <BBB> it basically printfs a few places where stuff is missing, but it mostly works, I think
[09:35] <BBB> it also doesn't crash
[09:36] <ubitux> great
[09:36] <BBB> shall I commit it to my tree?
[09:36] <ubitux> yeah i guess
[09:37] <BBB> oh, emu_edge handling is missing also, do you know how that works?
[09:37] <ubitux> ok so i've no idea what i should start with; i guess the missing bits of Y>=7x7 and sub8x8 in inter_reconn()?
[09:38] <BBB>     if (b->bl == BL_8X8 && b->bp != PARTITION_NONE) {
[09:38] <BBB>         printf("Sub8x8 inter prediction not yet implemented\n");
[09:38] <BBB> and
[09:38] <BBB>     if (!b->skip) {
[09:38] <BBB>         // y itxfm_add
[09:38] <BBB>         printf("Inter itxfm add loop not yet implemented\n");
[09:38] <BBB> start with these pieces, so the Y plane reconstructs correctly
[09:38] <ubitux> BBB: we had a talk about that in h264 a while ago but i forgot about it; i can still read my irc logs though
[09:38] <ubitux> ok
[09:38] <BBB> it's basically a motion vector that requires out of frame data
[09:38] <BBB> then we reconstruct a temp buffer that extends the edges
[09:39] <BBB> and use that for MC, instead of the reference frame
[09:39] <BBB> h264, vp8, etc. all use it
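The emu_edge behaviour BBB describes can be sketched roughly as below (this is not the real ff_emulated_edge_mc signature): out-of-frame coordinates are clamped so the nearest edge pixel is replicated into a temporary buffer, and MC then reads from that buffer instead of the reference frame:

```c
#include <stdint.h>
#include <stddef.h>

/* Copy a block_w x block_h block whose top-left source position
 * (src_x, src_y) may lie partially or fully outside the frame,
 * replicating edge pixels for the out-of-frame parts. */
static void emu_edge_copy(uint8_t *dst, ptrdiff_t dst_stride,
                          const uint8_t *src, ptrdiff_t src_stride,
                          int block_w, int block_h,
                          int src_x, int src_y, /* may be negative */
                          int frame_w, int frame_h)
{
    for (int y = 0; y < block_h; y++) {
        int cy = src_y + y;
        cy = cy < 0 ? 0 : cy >= frame_h ? frame_h - 1 : cy;
        for (int x = 0; x < block_w; x++) {
            int cx = src_x + x;
            cx = cx < 0 ? 0 : cx >= frame_w ? frame_w - 1 : cx;
            dst[y * dst_stride + x] = src[cy * src_stride + cx];
        }
    }
}
```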
[09:39] <ubitux> ok
[09:40] <BBB> after that, there's also the same, but for uv; it's a separate function so we can take advantage of both u and v planes being identical, so we don't need to calculate some stuff twice
[09:40] <BBB> vp8 does it like that also
[09:42] <ubitux> "u and v planes being identical"  huh?
[09:42] <BBB> (in case you're wondering, the sub8x8 code would basically be somewhat of a mirror of the branchy code in vp8.c's inter_predict(), basically branching between bp == PARTITION_V/H/SPLIT, and then doing two 8x4 predictions, two 4x8 predictions, or four 4x4 predictions; the UV handling needs no special casing here)
[09:42] <BBB> uv planes identical, it uses the same mv at the same subsampling etc.
[09:43] <BBB> the only thing changing is the dst/src pointers
[09:43] <ubitux> ok
[09:43] <BBB> see vp8.c vp8_mc_chroma()
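The sub8x8 branching BBB outlines (two 8x4, two 4x8, or four 4x4 predictions depending on the partition) could look roughly like this; the struct and function names are illustrative, only the enum values mirror the quoted patch:

```c
enum Partition { PARTITION_NONE, PARTITION_H, PARTITION_V, PARTITION_SPLIT };

typedef struct { int x, y, w, h; } SubBlock;

/* Fill 'sb' with the luma sub-block geometry for an 8x8 block and
 * return how many predictions to run: two 8x4 for H, two 4x8 for V,
 * four 4x4 for SPLIT, one 8x8 otherwise. */
static int sub8x8_layout(enum Partition bp, SubBlock sb[4])
{
    switch (bp) {
    case PARTITION_H:
        sb[0] = (SubBlock){0, 0, 8, 4};
        sb[1] = (SubBlock){0, 4, 8, 4};
        return 2;
    case PARTITION_V:
        sb[0] = (SubBlock){0, 0, 4, 8};
        sb[1] = (SubBlock){4, 0, 4, 8};
        return 2;
    case PARTITION_SPLIT:
        sb[0] = (SubBlock){0, 0, 4, 4};
        sb[1] = (SubBlock){4, 0, 4, 4};
        sb[2] = (SubBlock){0, 4, 4, 4};
        sb[3] = (SubBlock){4, 4, 4, 4};
        return 4;
    default:
        sb[0] = (SubBlock){0, 0, 8, 8};
        return 1;
    }
}
```

The caller would loop over the returned sub-blocks and run MC on each; per the discussion, UV needs no such special casing.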
[09:43] <ubitux> thx, i guess i have more than enough info to start working on this; i'll start with sub8x8
[09:43] <BBB> cool, I'll do bw adapt
[09:43] <BBB> and commit this patch to my tree
[09:44] <BBB> pushed
[09:44] <ubitux> thx :)
[11:23] <cone-717> ffmpeg.git 03Reimar Döffinger 07master:9a27acae9e6b: ogg: Fix potential infinite discard loop
[11:23] <cone-717> ffmpeg.git 03Michael Niedermayer 07master:54e718d014e0: Merge remote-tracking branch 'qatar/master'
[11:48] <cone-717> ffmpeg.git 03Paul B Mahol 07master:daede1e3fa75: matroskaenc: remove unneeded wavpack tag
[12:12] <pippin> michaelni: http://pippin.gimp.org/a_dither/0001-swscale-make-bayer-not-error-diffusion-be-the-specia.txt
[12:45] <cone-717> ffmpeg.git 03Stephen Hutchinson 07master:76d8d2388120: Revert "doc/RELEASE_NOTES: add a note about AVISynth"
[12:56] <michaelni> pippin, your patch breaks fate (make fate)
[12:58] <pippin> no wonder, I left an evil fprintf in there
[12:59] <pippin> or no, that is with my additional patch on top - wait a min
[13:13] <cone-717> ffmpeg.git 03Stephen Hutchinson 07release/2.0:8d9568b4a1a2: avisynth: Support video input from AviSynth 2.5 properly.
[13:13] <cone-717> ffmpeg.git 03Stephen Hutchinson 07release/2.0:423b87d62176: Revert "doc/RELEASE_NOTES: add a note about AVISynth"
[13:18] <pippin> michaelni: it is line 1206 in utils.c which is offending; though I do not see why
[13:20] <pippin> michaelni: maybe it has to do with "AUTO" being a new and default member of the dither enum?
[13:21] <pippin> and that if the error diffusion _flag_ has not been set, the enum value is still auto?
[13:22] <michaelni> its auto probably
[13:43] <pippin> michaelni: if you also apply http://pippin.gimp.org/a_dither/ffmpeg-swscale-always-resolve-auto-enum.txt fate passes
[13:55] <michaelni> i'm not sure it's a problem, but then dither will say bayer even in some cases where no dither is actually applied
[13:58] <pippin> what is the reason for this full color interpolation distinction between the code-paths?
[13:59] <pippin> michaelni: making it be else c->dither = SWS_DITHER_NONE; makes fate fail in the same way
[14:00] <pippin> michaelni: thus the test in fate (I do not know what the parameters are) seems to expect auto to default to bayer (at least for GIF)
[14:05] <pippin> for my other project(s); I anticipate eventually replacing 'a dither' with some better variation of a green/blue noise LUT (unless a-dither continues improving incrementally)
[14:05] <michaelni> IIRC its not implemented for the other codepath
[14:05] <pippin> and not really provide any ability to tweak the dither
[14:07] <pippin> (for GIMP displaying high bitdepth things on an 8bpc / 10 / 12bpc display - the pattern shouldn't be discernable by the user - thus numerical correctness/color reproduction should be all that matters)
[14:08] <michaelni> a dither / bayer could be replaced by a LUT, i suspect they won't beat error diffusion for still images though
[14:08] <pippin> and bayer is not implemented on the error-diffusion code path, but implementing bayer there should be easy
[14:09] <pippin> michaelni: 'a dither' will not, but some of the computationally costly to construct blue/green noise masks can come close to error diffusion
[14:09] <pippin> error diffusion also has problems like the dimple along the top of: http://pippin.gimp.org/a_dither/error-diffusion.png
[14:10] <pippin> which the threshold mask based methods do not have
[14:10] <michaelni> i know, ED is pretty "primitive"
[14:10] <michaelni> there are better (more complex) dithers that dont suffer from such artifacts
[14:10] <pippin> to circumvent that artifact in ED, people often do serpentine order (alternate directions for scanlines)
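The serpentine-order trick pippin mentions can be sketched with a deliberately simplified one-tap error diffusion (real Floyd-Steinberg spreads the error over four neighbours and carries it across rows; this sketch keeps only the horizontal carry, which is enough to show the alternating scan direction):

```c
#include <stdint.h>

/* Quantize an 8-bit grayscale image to black/white, pushing the
 * quantization error to the next pixel in scan order.  Even rows are
 * scanned left-to-right, odd rows right-to-left ("serpentine"), which
 * breaks up the directional artifacts of plain raster order. */
static void serpentine_dither_bw(uint8_t *img, int w, int h)
{
    for (int y = 0; y < h; y++) {
        int ltr = !(y & 1);
        int err = 0;
        for (int i = 0; i < w; i++) {
            int x = ltr ? i : w - 1 - i;
            int v = img[y * w + x] + err;
            int q = v < 128 ? 0 : 255;
            err = v - q;           /* carry the error to the next pixel */
            img[y * w + x] = (uint8_t)q;
        }
    }
}
```

On a flat mid-grey patch this produces an even on/off alternation rather than the worm-like runs a single fixed direction tends to create.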
[14:12] <michaelni> about high bitdepth visibility, there's an unfortunate problem: some (maybe most?) TFT screens aren't true 24bit but use dither themselves, so viewing bayer dithered images on a bayer dithering display can look suboptimal
[14:12] <pippin> I've got one of those
[14:13] <pippin> it alternates two different buffers temporally
[14:13] <pippin> if I blink, I can see a checkerboard for some colors
[14:13] <pippin> another problem is people that do full screen color management by tweaking the gamma LUTs of the gpu...
[14:14] <pippin> causing unexpected additional quantization leading to banding for optimal dithering methods
[14:14] <pippin> this is a scenario where more stochastic methods degrade better
[14:14] <durandal11707> add all mentioned here: http://en.wikipedia.org/wiki/Dither
[14:15] <pippin> the void and cluster there uses a way too small mask
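For contrast with error diffusion, the threshold-mask ("ordered") approach discussed above: each pixel is compared against a position-dependent threshold tiled from a matrix, so there is no error state and no scan-order artifact. The table below is the standard textbook 4x4 Bayer matrix, not swscale's actual dither tables:

```c
#include <stdint.h>

static const uint8_t bayer4x4[4][4] = {
    {  0,  8,  2, 10 },
    { 12,  4, 14,  6 },
    {  3, 11,  1,  9 },
    { 15,  7, 13,  5 },
};

/* Dither one 8-bit pixel to black/white using the tiled matrix.
 * The 0..15 matrix entry is scaled to a 0..255 threshold
 * (step 16, centered with +8). */
static uint8_t ordered_dither_bw(uint8_t v, int x, int y)
{
    int t = bayer4x4[y & 3][x & 3] * 16 + 8;
    return v >= t ? 255 : 0;
}
```

Being stateless per pixel, this parallelizes trivially, which is part of why the bayer path in swscale is attractive for speed.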
[14:35] <iKriz> hi guys i'm using the ffmpeg library in my app only its hard to find documentation about setting up a RTSP stream using the library any examples anywhere? i have the h263ES frames in memory so i dont want to use the exe files
[14:45] <durandal_1707> i'm porting eq2 right now, but the filter will be named eq
[14:57] <ubitux> cool :)
[15:05] <iKriz> can anyone point me in the right direction?
[15:06] <iKriz> i've searched the world's wild web but it's all wilderness
[15:07] <iKriz> with other goals in mind and such
[18:52] <cone-268> ffmpeg.git 03Michael Niedermayer 07master:8c50ea2251b5: swscale: set dither to a specific value for rgb/bgr8 output
[18:52] <cone-268> ffmpeg.git 03Michael Niedermayer 07master:23b3141261b7: swscale: improve dither checks
[20:00] <durandal_1707> hmm why should i do saturation in filter, hue does that already
[20:01] <durandal_1707> also why not split it into contrast,gamma,brightness filters
[20:02] <ubitux> doesn't sound like a bad idea
[20:02] <ubitux> but wouldn't that be 2x or 3x slower if you cumulate them?
[20:32] <durandal_1707> indeed
[20:32] <ubitux> durandal_1707: what about updating hue and add more parameters, and alias the filter name to something more meaningful?
[20:35] <durandal_1707> hue does not use lut at all
[20:36] <ubitux> it could, if you had a "constant" path (expr is const debate again i guess)
[20:36] <ubitux> no?
[20:39] <ubitux> basically just like vf overlay or vignette, using a "eval" option for now
[20:39] <durandal_1707> well it could use lut table, if my understanding of code is correct
[20:40] <durandal_1707> it would also be much faster
[21:03] <durandal_1707> indeed, converting to lut doesn't break fate
[21:03] <durandal_1707> review fail
[21:04] <ubitux> huh?
[21:04] <ubitux> hue fate test is a varying one iirc
[21:04] <ubitux> are you recomputing the lut for each frame?
[21:05] <durandal_1707> only if lut actually changes
[21:06] <durandal_1707> doesn't matter, because unless you only care for 256x256(or smaller) images its faster
[21:08] <durandal_1707> actually i'm wrong
[21:08] <durandal_1707> its 16*16
[21:13] <durandal_1707> sent patch, feel free to flame^Wbenchmark
[21:19] <durandal_1707> and for gamma,..etc one wants fancy expressions too?
[21:20] <ubitux> i guess one might want it at least for brightness
[21:20] <ubitux> (to make an advanced fade effect)
[21:20] <ubitux> so why not the other parameters?
[21:24] <durandal_1707> yes, i just need to make use of this inline assembly
[22:11] <durandal_1707> huh someone f* libdvdnav, they like to rebase too much
[22:17] <michaelni> ubitux, do you have a comment about: 0818  0:38 Matthew Heaney  (1.3K) 
[22:17] <michaelni> if not ill apply the patch this is about (if it works and looks reasonable)
[22:18] <ubitux> my opinion is still the same; to me it's ok (from the theoretical PoV, i don't remember the technical details, but i'm not a mkv maintainer anyway)
[22:29] <michaelni> ubitux, ok will apply then
[22:30] <durandal_1707> damn, that hue coverage is flawed
[22:36] <cone-268> ffmpeg.git 03Matthew Heaney 07master:818ebe930fa4: avformat/matroskadec: add WebVTT support
[22:46] <ubitux> j45: if you pick some commits from ffmpeg for libav, please use the correct authorship for the commit, don't use "Author:" in the commit description
[22:47] <durandal_1707> ubitux: it's already said on ml
[22:47] <j45> ubitux: yes, Luca instructed me in the correct procedure
[22:48] <ubitux> thx
[22:48] <j45> np
[22:53] <ubitux> j45: do you know if your QT patch is fixing https://ffmpeg.org/trac/ffmpeg/ticket/1845 ?
[22:57] <j45> ubitux: there's a good chance
[22:57] <ubitux> cool
[22:57] <ubitux> would be nice to test
[22:59] <durandal_1707> ubitux: well i can't see how i could add gamma/contrast/brightness without changing the current hue/saturation code
[22:59] <ubitux> it was just a suggestion, i dunno
[23:02] <ubitux> j45: why doing so much effort btw? :)
[23:02] <ubitux> you're likely being requested to rewrite that code anyway
[23:04] <j45> we want to use avformat for muxing in HandBrake.  Need to fix a few things to bring avformat mov and mkv muxers up to par with our current implementations.
[23:04] <ubitux> isn't the feature already present in libavformat? ;)
[23:06] <durandal_1707> people should rewrite things every 5 years
[23:06] <j45> you mean in ffmpeg's version of libavformat?  yes faststart is already there. 
[23:07] <ubitux> then you don't need to do anything ;)
[23:07] <j45> except rewrite a gob of handbrake code to change libraries... again... no thanks.
[23:07] <ubitux> it's fully compatible
[23:08] <ubitux> you should not need to rewrite anything
[23:08] <j45> key word being *should*
[23:08] <ubitux> a test of lib switch should be easier than rewriting the faststart feature
[23:09] <ubitux> that would actually be useful for us to know
[23:10] <j45> you're welcome to try it ;) 
[23:10] <ubitux> i don't personally use handbrake, it would take me time
[23:28] <durandal_1707> well if i just add contrast, brightness and gamma, should it be ok to remove mp filters?
[23:30] <ubitux> i think the compromise didn't change
[23:30] <ubitux> same features, same speed → ok to drop
[23:31] <durandal_1707> what features would saturation from eq2 represent?
[23:32] <durandal_1707> also this is lut only so speed is much less relevant
[00:00] --- Tue Aug 20 2013

