[Ffmpeg-devel-irc] ffmpeg-devel.log.20131230
burek
burek021 at gmail.com
Tue Dec 31 02:05:03 CET 2013
[00:11] <BBB> smarter: shall i help?
[00:51] <cone-251> ffmpeg.git 03James Almer 07release/2.1:b962157ce36f: matroskadec: Fix bug when parsing realaudio codec parameters
[01:03] <cone-251> ffmpeg.git 03Michael Niedermayer 07master:100a54da5264: avcodec/lagarith: disable lag_decode_zero_run_line() and ask for a sample
[01:03] <cone-251> ffmpeg.git 03Michael Niedermayer 07master:3410122c687b: avcodec/lagarith: fix src/src_size for esc_count < 8
[01:03] <cone-251> ffmpeg.git 03Michael Niedermayer 07master:e80aa47abf74: avcodec/lagarith: fix init_get_bits() size in lag_decode_arith_plane()
[01:33] <BBB> how times have changed
[01:34] <BBB> I asked first during the msvc work whether we could kill compound literals
[01:34] <BBB> the flamewar that that started is still in my mind, and not in a positive way
[01:34] <BBB> well I guess it's good that sentiments change
[01:38] <wm4> is there any advantage in making the standard timebase independent of the ABI? (that's the only effect such a change could have)
[01:38] <wm4> are you planning to change the timebase every 2 months or so?
[01:38] <wm4> or randomize it at program start?
[01:41] <BBB> I'm not planning anything
[01:41] <wm4> that goes to Daemon404 I guess
[01:44] <Compn> lol randomize at program start
[01:45] <wm4> yeah, that was some sarcasm, but I seriously wonder why it can't be a constant
[01:46] <Compn> just make a patch -constanttb
[01:46] <Compn> it will be applied
[01:46] <Compn> so let it be written, so let it be committed
[02:25] <cone-251> ffmpeg.git 03Carl Eugen Hoyos 07master:b4c89c90ffc7: Allow hiding the banner.
[02:25] <cone-251> ffmpeg.git 03Carl Eugen Hoyos 07master:de40905f55cd: Fix condition for transparency warning in xsub encoder.
[02:25] <cone-251> ffmpeg.git 03Michael Niedermayer 07master:4aa9c9150820: Merge remote-tracking branch 'cehoyos/master'
[04:33] <cone-251> ffmpeg.git 03James Almer 07master:d890db5f537b: oggdec: add support for VP8 demuxing
[10:53] <ubitux> i have a weird bug i'm not sure how to solve
[10:53] <ubitux> ./ffmpeg -i ~/samples/big_buck_bunny_1080p_h264.mov -vf scale=80:44,tile=10x1 -frames:v 1 -y out.jpg this works
[10:54] <ubitux> ./ffmpeg -i ~/samples/big_buck_bunny_1080p_h264.mov -vf fps=0.10,scale=80:44,tile=10x1 -frames:v 1 -y out.jpg this doesn't
[10:54] <ubitux> [mjpeg @ 0x216eba0] bitrate tolerance too small for bitrate
[10:54] <ubitux> of course the mpeg encoder doesn't need a huge bitrate
[10:54] <nevcairiel> i guess it somehow factors the fps into there
[10:54] <ubitux> the problem is, that sanity check is based on the unfiltered input
[10:55] <ubitux> afaict
[10:55] <saste> when you set the bitrate, how does the encoder know the framerate?
[10:56] <ubitux> i have absolutely no idea
[10:56] <ubitux> anyway i worked around the problem using a constant scale (-q:v 0)
[10:56] <ubitux> but i'd like to know if there is something fixable here
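For context, the check that fires here lives in the mpegvideo encoder init. A paraphrased sketch (names and the exact comparison are illustrative, not the actual FFmpeg source): the tolerance must cover at least one frame's worth of bits, and with fps=0.10 a frame lasts 10 seconds, so that amount explodes.

```c
/* Paraphrased sketch (not the exact FFmpeg source) of the sanity check
 * in libavcodec's mpegvideo encoder init: the bitrate tolerance must
 * cover at least one frame's worth of bits.  With -vf fps=0.10 the
 * output time base becomes 10 seconds per frame, so one frame's worth
 * of bits is bit_rate * 10 and a typical tolerance fails the check. */
static int tolerance_ok(long long bit_rate, long long tolerance,
                        int tb_num, int tb_den)
{
    double frame_duration = (double)tb_num / tb_den; /* seconds per frame */
    return tolerance >= bit_rate * frame_duration;   /* 0 triggers the error */
}
```

This also shows why `-q:v 0` sidesteps the issue: constant-quality mode doesn't go through the bitrate/tolerance path at all.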
[11:04] <ubitux> saste: why "or"?
[11:04] <ubitux> it's a "and"
[11:05] <ubitux> it pauses the playback if necessary, and then it steps to the next video frame
[11:07] <saste> if you just press "s", it will pause AND step to the next frame?
[11:08] <saste> or it will only pause?
[11:08] <ubitux> it will pause AND step to the next frame
[11:09] <ubitux> basically the step to next frame is only meaningful if the video is paused
[11:09] <ubitux> so it pauses the video, and then does a one step frame
[11:09] <ubitux> the code is pretty straightforward about this
[11:09] <ubitux> there is even a comment about that
[11:10] <saste> i'm fine with "and" if that's what the implementation does
[11:10] <ubitux> mmh, comment is about sth else, sorry
[11:12] <ubitux> in the ffplay logic, it actually unpauses it if necessary, steps one frame, then pauses
[11:12] <ubitux> which is equivalent to, and more user-friendly than, saying "pause if necessary, and then step one frame"
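The behaviour being described can be sketched like this (state returned instead of mutated for brevity; the real ffplay function operates on a VideoState and calls stream_toggle_pause()):

```c
/* Sketch of the ffplay stepping behaviour discussed above: unpause if
 * needed, flag a single-frame step, and the decode loop re-pauses
 * after showing exactly one frame.  Types are illustrative. */
typedef struct Player { int paused; int step; } Player;

static Player step_to_next_frame(Player p)
{
    if (p.paused)
        p.paused = 0; /* unpause if necessary */
    p.step = 1;       /* decode loop pauses again after one frame */
    return p;
}
```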
[11:34] <cone-510> ffmpeg.git 03Luca Barbato 07master:9a4c10e3af01: lavu: Move preprocessor macros in a separate file
[11:34] <cone-510> ffmpeg.git 03Michael Niedermayer 07master:8ccc58bb7d72: Merge remote-tracking branch 'qatar/master'
[11:44] <ubitux> saste: we don't have a -pix_fmts, -formats, etc output with the writers right?
[11:46] <saste> ubitux, no
[11:46] <ubitux> too bad :(
[12:00] <ubitux> oh, got my rpi, i should be able to test your stuff clever
[12:42] <saste> ubitux, why do you need them?
[12:42] <saste> the output is already "more or less" parsable
[12:43] <saste> adding an option for that would be from trivial to easy
[12:43] <ubitux> accessing the number of components
[12:43] <ubitux> to check if the pix fmt is alpha or not
[12:44] <ubitux> yes it's more or less parsable, but if i could do a pix_fmts = json.loads(exec('ffprobe -of json -pix_fmts')) that would be even better :)
[12:44] <ubitux> s/is alpha/has alpha/
[12:48] <cone-510> ffmpeg.git 03Reimar Döffinger 07master:b74eead27b47: compat: provide va_copy for old gcc versions.
[12:48] <cone-510> ffmpeg.git 03Reimar Döffinger 07master:311f61e1b4d5: configure: check that pthreads is compatible with compiler.
[12:48] <cone-510> ffmpeg.git 03Reimar Döffinger 07master:3cc0f335fe14: af_aresample: remove only use of array compound literals with non-const initializers in FFmpeg.
[13:15] <ubitux> Daemon404: i'm pretty sure some other places assume 1000000 as timebase
[13:15] <ubitux> so changing it will likely break various things
[13:15] <ubitux> i don't think the macro is here "to be changed at some point in the future" but more as a semantic constant
[13:16] <ubitux> (try to grep 1000000 in ffplay for instance)
[13:17] <ubitux> also maybe stuff like libavformat/tta.c: if(samplerate <= 0 || samplerate > 1000000){
[13:37] <cone-510> ffmpeg.git 03Nicolas George 07master:bcfcb8b8524d: lavc/ffwavesynth: fix dependency sizeof(AVFrame).
[13:37] <cone-510> ffmpeg.git 03Nicolas George 07master:a55692a96099: ffprobe: check av_frame_alloc() failure.
[13:37] <cone-510> ffmpeg.git 03Nicolas George 07master:38004051b53d: lavc/utils: check av_frame_alloc() failure.
[13:37] <cone-510> ffmpeg.git 03Nicolas George 07master:a91394f4de63: lavc/diracdec: check av_frame_alloc() failure.
[13:37] <cone-510> ffmpeg.git 03Nicolas George 07master:97af2faaba70: lavc/libopenjpegenc: check av_frame_alloc() failure.
[13:37] <cone-510> ffmpeg.git 03Nicolas George 07master:19a2d101acc0: lavc/mjpegenc: check av_frame_alloc() failure.
[13:37] <cone-510> ffmpeg.git 03Nicolas George 07master:2ebaadf35c93: lavc/mjpegenc: use proper error codes.
[13:37] <cone-510> ffmpeg.git 03Michael Niedermayer 07master:905bac2cd30d: Merge remote-tracking branch 'cigaes/master'
[14:13] <cone-510> ffmpeg.git 03Michael Niedermayer 07master:6f1b2967712e: avcodec/lagarith: reenable buggy lag_decode_zero_run_line()
[14:13] <cone-510> ffmpeg.git 03Michael Niedermayer 07master:afd1245433e9: avcodec/lagarith: use init_get_bits8()
[14:40] <j-b> so much fun on lagarith, as I see :)
[16:26] <cone-510> ffmpeg.git 03Michael Niedermayer 07master:61d43a265176: avcodec/lagarith: check and propagate return value from init_get_bits8()
[17:04] <Daemon404> [07:15] <@ubitux> Daemon404: i'm pretty sure some other places assume 1000000 as timebase <-- then it's wrong.
[17:05] <ubitux> sure, but that's something you would have to deal with if you want to change it
[17:05] <ubitux> changing this constant doesn't look trivial at all
[17:06] <Daemon404> yes, but that's also beyond the scope here
[17:08] <ubitux> note that AV_TIME_BASE is used in a lot of apps
[17:08] Action: Daemon404 needs to go to an eye exam now... herp
[17:08] <Daemon404> [11:08] <@ubitux> note that AV_TIME_BASE is used in a lot of apps <-- you could say this about *any* API bit
[17:08] <Daemon404> it's not a useful argument
[17:09] Action: Daemon404 really does go now
[17:09] <iive> it means that changing it would break the abi
[17:09] <ubitux> well, i think the constant has its usages
[17:09] <ubitux> i would just make the getter return the constant
[17:09] <ubitux> for people who want to be ABI compatible in a potential future
[17:10] <ubitux> and keep the macro for ppl who don't bother
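What ubitux proposes could look roughly like the following (hypothetical names, not an actual FFmpeg API at the time of this log): a getter returning the constant for ABI-conscious callers and C++ code, with the macro kept alongside.

```c
#define MY_TIME_BASE 1000000 /* stand-in for AV_TIME_BASE */

typedef struct MyRational { int num, den; } MyRational;

/* Hypothetical helpers: callers who want ABI safety (or who can't use
 * the C99 compound-literal macro, e.g. C++) call the functions; the
 * macro remains for everyone who doesn't care. */
static int my_get_time_base(void)
{
    return MY_TIME_BASE;
}

static MyRational my_get_time_base_q(void)
{
    MyRational q = { 1, MY_TIME_BASE };
    return q;
}
```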
[17:10] <Daemon404> [11:09] <@iive> it means that changing it would break the abi <-- again this is said about any bump change
[17:10] <Daemon404> which happen quite often
[17:10] <iive> thanks to the crap pouring from libav.
[17:10] <ubitux> i think having the getter and the constant is nice
[17:10] <Daemon404> lots of the stuff they've done is actually useful (ref counting)
[17:10] <Daemon404> (and lots is not)
[17:11] <wm4> can someone explain to me why the constant can't be part of the ABI, and if not, why you don't just use custom timebases everywhere (stored in extra fields)
[17:12] <ubitux> you mean having it as an extern int/int64?
[17:12] <iive> constant is exported by .h headers and compiled directly into application code.
[17:13] <iive> to be honest, I have no idea why there is a timebase at all, ffmpeg has had rational number support for quite a long time.
[17:13] <Daemon404> [11:12] <@ubitux> you mean having it as an extern int/int64? <-- this gave certain compilers issues (whatsup windows?)
[17:13] <Daemon404> there was a big push to remove them all a while back
[17:13] <ubitux> don't we have av_export for this?
[17:13] <nevcairiel> data exports are evil
[17:13] <ubitux> sure
[17:14] <Daemon404> .. i really must go now or ill be late
[17:14] Action: Daemon404 out
[17:14] <ubitux> i'm just asking if that's what wm4 was asking
[17:14] <wm4> no
[17:15] <wm4> AFAIK AVFormatContext.duration is one of the fields which implicitly use AV_TIME_BASE
[17:15] <wm4> so why not just add AVFormatContext.duration_timebase?
[17:15] <ubitux> iive ?
[17:16] <ubitux> afaik formats don't have a global timebase, they're using AV_TIME_BASE
[17:16] <iive> AVRational ?
[17:16] <ubitux> which is needed for seeking typically
[17:16] <ubitux> and there are probably some other usages
[17:18] <nevcairiel> wm4: but why not simply define a constant timebase for all fields that aren't derived from a file timebase
[17:19] <nevcairiel> you know, like today
[17:19] <wm4> I'm not the one who wants to change everything for no reason :)
[17:19] <wm4> I assume there's a problem with the current timebase
[17:19] <wm4> which is why it apparently (?) has to change
[17:19] <ubitux> i think the main reason is inline AVRational
[17:20] <ubitux> in the _Q version
[17:20] <ubitux> and for consistency AV_TIME_BASE is changed as well
[17:20] <nevcairiel> the problem is the inline C99 feature in an installed header
[17:20] <wm4> you don't have to use the C99 feature
[17:21] <nevcairiel> i know, and i never did
[17:21] Action: wm4 is searching for the problem
[17:21] <wm4> ...has it been found yet?
[17:21] <ubitux> dunno
[17:21] <ubitux> Daemon404 certainly has a reason for wanting to change everything ;)
[17:21] <nevcairiel> someone thought that was it, having a C99 feature in a public header
[17:21] <iive> look, if I understand the problem correctly, e.g. the duration field is supposed to be in seconds. because it usually is a fraction of a second, you need to use fixed or floating point to get a reasonable representation.
[17:22] <iive> at the moment it is a fixed point with a pre-defined constant time_base.
[17:22] <iive> i guess you don't want to use floats, for various reasons.
[17:23] <iive> the other solution is to use a rational number, where the numerator is the current value and the denominator just holds the time_base
[17:23] <nevcairiel> both are even worse api breaks
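iive's rational-number suggestion could be sketched like this (illustrative types, not a proposed FFmpeg struct): the time base travels with the field instead of being baked into the ABI.

```c
#include <stdint.h>

/* Sketch of storing a timed value together with its own time base,
 * so consumers never need a global AV_TIME_BASE-style constant. */
typedef struct TimedValue { int64_t num; int64_t den; } TimedValue;

static double timed_to_seconds(TimedValue t)
{
    return (double)t.num / t.den;
}
```

The downside nevcairiel points out stands: retrofitting this onto existing int64_t fields is itself a large API break.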
[17:24] <ubitux> we should just add the 2 function helpers and keep the macro as well
[17:24] <ubitux> that way c++ apps & friends can access it without trouble
[17:24] <ubitux> and we are not annoying our users and changing the whole codebase
[17:25] <ubitux> ...for no apparent reason except "making it sane/consistent/pretty/whatever"
[17:25] <iive> sure, this is why there should be a good reason to break the api in the first place, and an idea of the cost/benefit ratio.
[17:25] <iive> you want to turn the define into function call?
[17:26] <ubitux> and btw, haters gonna hate but... it's gonna add a bunch of function calls! (since it's exported i don't think it will be inlined)
[17:26] <ubitux> well, Derek wants, I don't
[17:26] <iive> why does he want it?
[17:27] <iive> what problem does he solve?
[17:27] <ubitux> 17:20:28 <+nevcairiel> the problem is the inline C99 feature in an installed header
[17:27] <iive> or rather tries to solve.
[17:27] <ubitux> which is already the case anyway (av_ts2str, av_err2str, probably more)
[17:28] <nevcairiel> dont tell him, or he'll "fix" those too
[17:28] <ubitux> he can't
[17:28] <ubitux> there is no other way
[17:28] <nevcairiel> he can make em private
[17:28] <ubitux> no, users are already using it
[17:29] <ubitux> so i will put a veto on it
[17:29] <nevcairiel> any api change will screw users, it's just a question of how much
[17:29] <ubitux> there is no need for an api change
[17:30] <ubitux> we just need to provide one or more function helpers for apps not supporting the macro
[17:30] <nevcairiel> tell that to the patch authors. :p
[17:35] <wm4> also, even in C++, you can construct AV_TIME_BASE_Q trivially
[17:35] <ubitux> yes.
[17:35] <wm4> you can assign fields to structs, right?
[17:36] <ubitux> yes, you can put that function in your c++ app
[17:36] <wm4> AVRational r; r.num=1; r.den=AV_TIME_BASE;
[17:36] <wm4> whoo!
[17:48] <nevcairiel> in recent c++ compilers you can even do it in one line
[17:48] <nevcairiel> not sure the old macro works in C++11
[17:48] <ubitux> oh, dvb sub patch.
[17:50] <wm4> it should have been "#define AV_TIME_BASE_Q {1, AV_TIME_BASE}"
[17:51] <ubitux> you can't inline it in a function call without the cast
[17:52] <wm4> C99 users could have added this themselves
[17:52] <wm4> and it'd work as initializer in C90/C++
[17:53] <ubitux> you save 2 chars with your method
[17:53] <ubitux> macro is basically void
[17:53] <ubitux> no purpose at all
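The two macro flavours being argued about, side by side under hypothetical names: the braces-only form works as an initializer even in C90 and C++, but cannot be passed directly to a function; the compound-literal form can, but requires C99.

```c
typedef struct Rat { int num, den; } Rat;

#define TB        1000000
#define TB_Q_INIT { 1, TB }        /* initializer only: Rat q = TB_Q_INIT; works in C90/C++ */
#define TB_Q      ((Rat){ 1, TB }) /* C99 compound literal: usable as an expression */

static double rat2d(Rat r)
{
    return r.num / (double)r.den;
}
```

So `rat2d(TB_Q)` compiles, while `rat2d(TB_Q_INIT)` would not; that expression-position use is the "2 chars" of cast the macro buys.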
[18:19] <clever> ubitux: when using eosd, are there any utils to merge nearby elements?
[18:20] <wm4> clever: no
[18:21] <wm4> these come from libass, and libass returns a surface for each glyph
[18:21] <wm4> actually, at least 2 per glyph if the text has a border
[18:21] <clever> i guess i'll have to do some merging myself then
[18:22] <clever> dispmanx has a software limit of 128 elements
[18:22] <wm4> you should branch them to the gpu
[18:22] <wm4> oh
[18:22] <wm4> s/branch/batch
[18:22] <clever> the api does allow batching, but it's hard-limited to 128 elements at a time
[18:23] <clever> so i need to create a few regions, like the top and bottom line of text
[18:23] <clever> and merge the glyphs into a single line of text
[18:23] <clever> then create 2 or 3 surfaces for those
[18:23] <clever> that, or just do full 720p frames and kill performance
[18:24] <wm4> most images can't be combined, because every image has a per-surface color/alpha
[18:24] <clever> palette based stuff?
[18:24] <clever> i havent fully looked at exactly what its giving me yet
[18:26] <clever> hmmm, bitmap is a 1 bit per pixel alpha map, color is a rgb value
[18:26] <clever> so i would need to bulk up things with the same color into a single layer
[18:27] <clever> and figure out if dispmanx can handle 1 bit per pixel alpha with solid color
[18:27] <clever> wm4: one more question, if a glyph is reused many times, will they all have the same mp_eosd_image ?
[18:27] <clever> will the opaque pointer survive between glyph reuse?
[18:28] <wm4> I don't know that part of mplayer (my mplayer fork has relatively different internals here), but no
[18:29] <clever> was thinking along the lines of drawing each glyph to a rgba resource and then reusing those
[18:31] <clever> as for palette support, i've seen another ticket, and the hardware can only handle a single palette at a time
[18:32] <clever> so i would likely need to merge all the colors into a single palette, and create an 8bpp image
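A hypothetical sketch of the palette-merging step clever describes (not dispmanx API code): collect each glyph's fill colour into one shared palette and hand back its 8bpp index.

```c
#include <stdint.h>

/* Fold per-glyph colours into one palette for an 8bpp image: look the
 * colour up, append if new, return its index.  Real code would cap the
 * palette at 256 entries and pick nearest matches on overflow. */
static uint32_t palette[256];
static int palette_count;

static int palette_index(uint32_t rgb)
{
    for (int i = 0; i < palette_count; i++)
        if (palette[i] == rgb)
            return i;
    palette[palette_count] = rgb;
    return palette_count++;
}
```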
[18:37] <j-b> m
[18:37] <j-b> oops, sorry
[19:16] <clever> size:18x19 pos:567x428
[19:16] <clever> wm4: looks like most of the glyphs are of a reasonable size
[19:17] <wm4> clever: depends entirely on the input file
[19:17] <wm4> but on insane stuff, little rpi will probably lock up forever anyway
[19:18] <clever> its already locking up solid, not even ssh responds
[19:18] <clever> and i'm not even trying to handle the glyph image!
[20:00] <clever> wm4: each mp_eosd_image contains a *next pointer to the next image, so it's impossible to reuse a single instance in a frame
[20:00] <clever> so thats half of my question
[20:00] <wm4> no, it doesn't do that
[20:01] <wm4> clever: this is what libass outputs: http://repo.or.cz/w/libass.git/blob/HEAD:/libass/ass.h#l37
[20:01] <wm4> the image data (->bitmap) can be reused, but ASS_Image obviously not
[20:01] <wm4> an application has to re-render stuff on every frame
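The list wm4 links to is consumed like this; field names below follow the public ass.h header, but the Image type here is local to the example, and a real consumer would blend each bitmap into the frame instead of just counting.

```c
#include <stdint.h>
#include <stddef.h>

/* Minimal sketch of walking the image list ass_render_frame() returns.
 * Each node carries one alpha bitmap plus a single fill colour, which
 * is why per-glyph surfaces can't simply be merged. */
typedef struct Image {
    int w, h, stride;
    unsigned char *bitmap; /* per-pixel alpha buffer, stride * h bytes */
    uint32_t color;        /* fill colour for this whole surface */
    int dst_x, dst_y;      /* position on the video frame */
    struct Image *next;    /* singly linked list, rebuilt every frame */
} Image;

static int count_images(const Image *img)
{
    int n = 0;
    for (; img; img = img->next)
        n++;
    return n;
}
```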
[20:02] <clever> i was trying to see if i could cache the converted bitmap, and then reuse it
[20:02] <wm4> ass_render_frame() merely can signal to the application that the frame didn't change visually (even if the ASS_Images changed), mplayer maybe uses that
[20:02] <wm4> no you can't
[20:02] <clever> yeah, i do see a changed flag in eosd
[20:02] <clever> vdpau will reuse the entire surface list unchanged
[20:03] <clever> just need to think out how i'm going to draw 100 glyphs into 2 or 3 chunks
[20:05] <wm4> it's a hard problem
[20:06] <wm4> at least if you don't want to do cpu rasterization
[20:06] <wm4> or rather, cpu blending
[20:12] <clever> wm4: as long as no glyphs overlap, i should be able to do hardware blending
[20:13] <wm4> many glyphs will overlap
[20:13] <clever> it allows full rgba layers and blending between all 128 layers
[20:13] <clever> i can handle glyph overlap cheaply by just creating 2 hardware layers
[20:13] <clever> and letting the hw blend the overlap
[20:13] <clever> the hard part is knowing which hw layer to put each glyph into
[20:14] <clever> and how big to make each hw layer
[21:24] <wm4> is there a primer for yasm+x86inc?
[21:24] <nevcairiel> there is this: https://wiki.videolan.org/X264_asm_intro/
[21:24] <nevcairiel> and the comments in x86inc
[21:25] <wm4> thanks
[22:08] <llogan> kierank: wow. your license violation report actually got response (but i didn't check compliance for their updated stuff).
[22:28] <Compn> on that dshow plugin ?
[22:29] <Compn> on their bug tracker or ours ?
[22:32] <ubitux> wm4: gonna write some asm?
[22:33] <wm4> ubitux: someone wants for libass
[22:33] <ubitux> oh, ok.
[22:33] <wm4> xy-vsfilter has pages upon pages of intrinsics
[22:34] <Daemon404> wm4: inline asm too iirc?
[22:34] <Daemon404> also some of the mmx code is slower than C++
[22:34] <Daemon404> it's very great
[22:34] <wm4> hm, possibly, but I don't remember
[22:34] <Daemon404> i really like when people write naive SIMD intrinsics
[22:34] <wm4> but xy-vsfilter is literally "TOO SLOW? HERE IS MORE INTRINSICS"
[22:34] <Daemon404> its so easy to be slower than compiler-generated code
[22:34] <Daemon404> [16:34] <+wm4> but xy-vsfilter is literally "TOO SLOW? HERE IS MORE INTRINSICS" <-- gonna sound bad but... china quarity
[22:36] <iive> aren't intrinsics compiler generated code?
[22:36] <Compn> if you guys keep insulting chinese devels, you might never see them join the project :P
[22:37] <Daemon404> iive: well i mean generated from C/C++
[22:37] <nevcairiel> i dunno what people have against intrinsics
[22:37] <Daemon404> nevcairiel: i dont have anything against them.
[22:37] <Daemon404> i am against the horrible way they tend to be written
[22:37] <Daemon404> by certain people
[22:39] <Daemon404> nevcairiel: well also modern compilers do OK at generating SSE/SSE2 fp code
[22:39] <Daemon404> without any help
[22:39] <Daemon404> keyword: modern
[22:40] <nevcairiel> if someone uses intrinsics just to write single-data fp code, they deserve to be yelled at
[22:40] <nevcairiel> its for multiple-data code where its useful, read SIMD
[22:40] <Daemon404> that sounds exactly like something xy would do
[22:40] <Daemon404> and also no C path.
[22:41] <iive> nevcairiel: intrinsics are SIMD code in C
[22:41] <nevcairiel> i know what they are, but that doesnt mean you use them with the MD in mind :p
[22:41] <iive> but compilers are quite unstable about the results they generate, even when the translation seems quite straightforward.
[22:42] <nevcairiel> nonsense, recent compilers process them rather well
[22:42] <iive> last time i tried it was one year ago... so 4.7 or 4.8 ...
[22:51] <iive> ffmpeg has its altivec (ppc) acceleration written as intrinsics
[22:51] <wm4> anyway, when that guy tried intrinsics, he noticed that there weren't many SSE instructions in the generated code
[22:51] <wm4> and the compiler (apparently) did many stupid things
[22:52] <iive> you need to specify sse capable cpu and autogeneration of sse instructions.
[22:53] <wm4> well he wasn't that dumb
[22:53] <iive> this also means you need more makefile tricks if you want things like autodetection and conditional execution.
[22:53] <wm4> i.e. he did make sure of that
[22:53] <nevcairiel> have to be careful not to look at the generated code in a debug build, compilers like to add a copy back into memory after every intrinsic call then for better variable inspection
[22:55] <iive> wm4: sure, as I said, compilers are known to do stupid things.
[22:56] <nevcairiel> i never had issues, and i frequently inspected the generated code while testing, so i blame you or your compilers, mine works :D
[22:57] <wm4> so does msvc generate better code from intrinsics than gcc?
[22:57] <nevcairiel> no idea
[22:57] <nevcairiel> msvc generates proper code at least
[22:58] <nevcairiel> basically mapped 1:1 to the corresponding instruction
[23:00] <nevcairiel> but since microsoft has been working to phase out inline asm (like by not allowing it for 64-bit builds at all), they may have put more time into it
[23:13] <ubitux> BBB: any reason in the dsp init not to make the loop_filter functions depend on each other?
[23:13] <ubitux> i mean, calling dsp->loop_filter_... instead
[23:13] <ubitux> that way typically in x86 if you add a small one it gets automatically used in all the dependency
[23:14] <ubitux> ah i guess dsp is not actually accessible
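The idea ubitux floats can be sketched as follows, with illustrative names (not the actual vp9 dsp API): build the wide filter on top of whatever narrow filter the dsp table holds, so installing a SIMD 8-wide version speeds up the 16-wide one for free. The catch he notes — the function doesn't receive the dsp table — is why this sketch resorts to a file-scope pointer.

```c
#include <stdint.h>
#include <stddef.h>

typedef struct DSP {
    void (*loop_filter_8)(uint8_t *dst, ptrdiff_t stride);
    void (*loop_filter_16)(uint8_t *dst, ptrdiff_t stride);
} DSP;

static int filter8_calls; /* instrumentation for the example only */

static void loop_filter_8_c(uint8_t *dst, ptrdiff_t stride)
{
    (void)dst; (void)stride;
    filter8_calls++; /* a real version would filter 8 pixels here */
}

static DSP the_dsp; /* stand-in for a dsp table the function can't otherwise reach */

static void loop_filter_16_from_8(uint8_t *dst, ptrdiff_t stride)
{
    /* two narrow calls through the table, so a SIMD loop_filter_8
     * installed later is picked up automatically */
    the_dsp.loop_filter_8(dst, stride);
    the_dsp.loop_filter_8(dst + 8, stride);
}

static void dsp_init(void)
{
    the_dsp.loop_filter_8  = loop_filter_8_c;
    the_dsp.loop_filter_16 = loop_filter_16_from_8;
}
```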
[23:41] <kierank> llogan: i told him
[00:00] --- Tue Dec 31 2013