[Ffmpeg-devel-irc] ffmpeg-devel.log.20150419
burek
burek021 at gmail.com
Mon Apr 20 02:05:02 CEST 2015
[01:27:02 CEST] <cone-200> ffmpeg 03Andreas Cadhalpun 07master:faf9fe2c224e: alsdec: validate time diff index
[02:38:42 CEST] <cone-200> ffmpeg 03Lou Logan 07master:d1a892209803: cmdutils: indent protocols listing
[03:19:55 CEST] <cone-200> ffmpeg 03James Almer 07master:a40cee03a3be: avutil: remove pointless bmi1 define
[04:33:45 CEST] <cone-200> ffmpeg 03Andreas Cadhalpun 07release/2.5:96c1421627aa: alsdec: ensure channel reordering is reversible
[04:33:46 CEST] <cone-200> ffmpeg 03Michael Niedermayer 07release/2.5:5be683d687b6: avcodec/alsdec: Use av_mallocz_array() for chan_data to ensure the arrays never contain random data
[04:33:47 CEST] <cone-200> ffmpeg 03Andreas Cadhalpun 07release/2.5:faac8e43315d: alsdec: validate time diff index
[08:51:48 CEST] <jamrial> rcombs: wanna be among the first to know when the ticket reaches 400? :p
[08:55:14 CEST] <rcombs> jamrial: that or when something actually happens
[10:37:02 CEST] <enigma> is this the right channel if i have trouble using libav ? or is this strictly for ffmpeg development itself
[10:38:04 CEST] <enigma> I am having trouble muxing an H264 bitstream and ADTS AAC streams together into FLV. If I pack the H264 bitstream (source is a hardware encoder) into FLV, it plays fine with proper duration and bitrate. But when I mux audio along, the bitrate and duration go all wrong, and on playback I only hear the audio. The video gets stuck on the first frame. Here is my muxing code: http://pastebin.com/C6bYqtsA
[10:38:56 CEST] <enigma> ok sorry, just read the description.
[10:39:03 CEST] <enigma> I'll stick to #ffmpeg
[12:57:57 CEST] <cone-149> ffmpeg 03James Almer 07n2.5.6:HEAD: avutil: remove pointless bmi1 define
[13:36:25 CEST] <cone-149> ffmpeg 03Michael Niedermayer 07master:3668701f9600: avformat/http: Return an error in case of prematurely ending data
[14:49:36 CEST] <cone-149> ffmpeg 03Michael Niedermayer 07master:22c0585a00d4: avformat/http: Fix 2 typos
[15:41:32 CEST] <cone-149> ffmpeg 03Michael Niedermayer 07master:9a0f60a0f89a: avcodec/mpeg4videodec: Use check_marker()
[16:45:18 CEST] <cone-149> ffmpeg 03Thomas Guillem 07release/2.4:3e1c9da38b84: matroskadec: fix crash when parsing invalid mkv
[16:45:19 CEST] <cone-149> ffmpeg 03Michael Niedermayer 07release/2.4:0b71cedfe844: Merge commit '3e1c9da38b849ce2982b516004370081fdd89ed0' into release/2.4
[17:00:54 CEST] <cone-149> ffmpeg 03Michael Niedermayer 07master:0cab0931dcf2: avformat/matroskadec: remove now duplicate doctype check
[17:02:29 CEST] <BtbN> It's quite hard to find documentation on chroma keying. No idea if i'm missing something extremely obvious about it, or if all algorithms are proprietary and hidden.
[17:06:57 CEST] <iive> what is chroma keying?
[17:14:42 CEST] <ubitux> wm4: i don't remember what test i asked for...
[17:15:20 CEST] <ubitux> wm4: about the mpv/gif (wtf are your users doing), you can trim with... trim filter in the filtergraph, i suppose
[17:15:52 CEST] <ubitux> (it will send an EOF when it reaches the end bound)
[17:16:28 CEST] <ubitux> for the starting point, you can use it but it will unfortunately not "seek" (just drop frames until it reaches the timestamp)
[17:17:06 CEST] <ubitux> for the --start we indeed have a -ss for input, and one for output
[17:17:13 CEST] <ubitux> so unless you have something equivalent..
[17:18:32 CEST] <wm4> nothing like this was ever necessary (until now)
[17:18:44 CEST] <wm4> and that user is developing a mpv frontend
[17:18:58 CEST] <wm4> so I suppose using mpv for this was an obvious thought
[17:19:03 CEST] <ubitux> i don't understand why your seek isn't exclusively on the input actually
[17:20:17 CEST] <wm4> so you accurately seek on the output
[17:20:27 CEST] <wm4> think about frame doubling deinterlacing and such
[17:21:09 CEST] <BtbN> iive, basically, the thing that does greenscreens.
[17:42:21 CEST] <Compn> isnt the vf framestep for that
[17:42:32 CEST] <Compn> skip 1500 frames, then start output
[17:42:40 CEST] <Compn> it would be nice if it had that feature
[17:42:47 CEST] <Compn> instead of outputting 1 frame every 1500 frames.
[17:44:32 CEST] <rcombs> Compn: use the select filter
[18:13:04 CEST] <lglinskih> fate-md5: libavutil/md5-test$(EXESUF)
[18:13:09 CEST] <lglinskih> fate-md5: CMD = run libavutil/md5-test
[18:13:30 CEST] <lglinskih> we don't need EXESUF in the second line?
[18:23:07 CEST] <jamrial> no, you can run windows executables without it
[18:23:41 CEST] <jamrial> but you need to specify it to Make when building them
[18:24:35 CEST] <wm4> yeah, on windows $(EXESUF) would expand to .exe
[18:25:00 CEST] <lglinskih> and what about other OS?
[18:25:10 CEST] <wm4> other systems don't have one
[18:25:19 CEST] <wm4> it'll expand to an empty string, so it'd do nothing
[18:26:17 CEST] <kirelagin> I bet there is an OS that has special extension for executables and wont execute them without extension ;)
[18:27:33 CEST] <kirelagin> But what matters is that ffmpeg probably isnt aiming to support them
[18:31:03 CEST] <lglinskih> is it fine if I just add $(EXESUF)?=)
[18:33:00 CEST] <iive> if binaries are not installed, you can make it to .elf :)
[18:34:44 CEST] <jamrial> lglinskih: there's no need to add it for any "CMD =" line
[20:00:03 CEST] <BtbN> RGB to YUV conversion is confusing, there are multiple formulas on the net, which all claim to be exact by definition oO
[20:04:12 CEST] <BBB> BtbN: it depends on what type of yuv
[20:04:29 CEST] <BBB> BtbN: 601/709/smpte240 all have slightly different conversion formulae
[20:04:33 CEST] <BtbN> Whatever ffmpeg has in AV_PIX_FMT_YUV(A)420P
[20:04:33 CEST] <JEEBsv> in general the algo is the same, but then you have coefficients
[20:04:48 CEST] <BBB> BtbN: that depends on other descriptors in avcodeccontext
[20:05:00 CEST] <BBB> BtbN: plus the rgb it is converted into is not identical
[20:05:23 CEST] <BBB> BtbN: this is a really hairy area, you may not care enough to want to know, its like chasing your own tail for a year
[20:05:24 CEST] <BtbN> I'm in a filter, and i get an input color as user parameter, which i need as yuv
[20:05:36 CEST] <wm4> it's in AVFrame now too
[20:05:48 CEST] <wm4> AVFrame.colorspace etc.
[20:06:08 CEST] <BBB> look at AVColorSpace and related enums in libavutil/pixfmt.h
[20:06:09 CEST] <wm4> but in general, if you don't work with YUV but RGB, just make the input RGB?
[20:06:13 CEST] <BBB> they are used in AVCodecContext
[20:06:28 CEST] <BtbN> chromakey is yuv
[20:06:28 CEST] <BBB> wm4: oh cool
[20:06:58 CEST] <BtbN> The only place where i get in touch with RGB is the user parameter
[20:07:05 CEST] <BtbN> AV_OPT_TYPE_COLOR gives me rgba
[20:07:20 CEST] <wm4> maybe it would be nice if there were some utility code to return the correct matrix for a given conversion, instead of making it secret knowledge of libswscale and various standards
[20:07:24 CEST] <wm4> ah
[20:07:36 CEST] <wm4> for chomakey, wouldn't a YUV parameter make more sense?
[20:07:43 CEST] <BBB> Id just call it yuv then, yes
[20:07:44 CEST] <wm4> even if you take user convenience into account
[20:07:52 CEST] <BBB> opengl calls its buffers rgba, but its undefined what is in each channel
[20:07:56 CEST] <BBB> so people happily put yuv in it
[20:08:06 CEST] <BBB> (r=y,g=u,b=v)
[20:08:15 CEST] <BtbN> The problem with that is, that AV_OPT_TYPE_COLOR accepts strings like "green" as input
[20:08:20 CEST] <BBB> (or r=each per plane)
[20:08:21 CEST] <BtbN> which would end up as everything but green
[20:08:43 CEST] <BBB> if you want to convert in your filter, youll need the user to specify the colorspace
[20:08:48 CEST] <BBB> (AVColorspace)
[20:08:58 CEST] <BBB> and then do the conversion according to the correct coefficients
[20:09:03 CEST] <BBB> (or take it from the input yuv stream)
[20:10:29 CEST] <BtbN> Oh, there are RGB_TO_Y/U/V defines in libavutil/colorspace.h
[20:10:58 CEST] <wm4> last time I tried to use them I ended up using something else
[20:11:11 CEST] <wm4> because trying to understand these weird macros took way more effort than NIHing it
[20:11:24 CEST] <wm4> (they are absolutely horrible)
[20:12:11 CEST] <wm4> actually maybe the TO macros are ok
[20:12:24 CEST] <wm4> I tried the macros above that
[20:12:43 CEST] <wm4> which implicitly reference variables you have to declare before the macros etc.
[20:12:46 CEST] <BtbN> That filter will be horribly slow though... No way this will work in realtime.
[20:12:51 CEST] <BBB> I dont actually know which standard these macros follow
[20:13:07 CEST] <BBB> why, you need the conversion more than once per frame?
[20:13:16 CEST] <BtbN> No, the entire filter.
[20:13:18 CEST] <BBB> I mean, if the user input is rgb and you need that in yuv, its just one variable right?
[20:13:22 CEST] <BBB> so you convert it in init
[20:13:25 CEST] <BtbN> The conversion is only needed once at init
[20:13:31 CEST] <BBB> and use the converted value in your speed critical code
[20:13:33 CEST] <BBB> right
[20:13:38 CEST] <BBB> so why is it slow?
[20:13:39 CEST] <BtbN> But the entire chromakey algorithm
[20:13:45 CEST] <BBB> well thats your problem :-p
[20:13:53 CEST] <BBB> and easily simd'ed
[20:14:22 CEST] <BtbN> The algorithm i found uses floats.
[20:14:24 CEST] <BBB> (I foresee a combination of pcmpgtb)
[20:14:29 CEST] <BBB> so write it in its
[20:14:39 CEST] <BBB> ints*
[20:14:47 CEST] <BtbN> I don't think that'd work out properly.
[20:14:50 CEST] <kierank> why not
[20:14:53 CEST] <BBB> why
[20:14:58 CEST] <kierank> the coefficients are fixed point
[20:15:05 CEST] <BBB> float is just int with an autoshifter
[20:15:16 CEST] <BBB> (sort of)
[20:21:43 CEST] <BtbN> yep, it's doing 4 fps...
[20:22:25 CEST] <BtbN> Converting these algorithms to integer math is absolutely necessary, but i have no idea how the color distance calculation would work then.
[20:24:06 CEST] <Daemon404> why are you manually doing this
[20:24:21 CEST] <BtbN> hm?
[20:24:28 CEST] <Daemon404> swscale is a thing
[20:24:38 CEST] <BBB> Daemon404: chromakey distance != swscale
[20:24:38 CEST] <BtbN> swscale to convert an algorithm?
[20:24:53 CEST] <Daemon404> ffs i misread
[20:25:10 CEST] <Daemon404> re: chroma key, do you have plans for proper blending around edges
[20:25:13 CEST] <BBB> I assume distance is expressed as 1.2*(ypicture - ykey) + 3.7*(upicture - ukey) + 0.2*(vpicture - vkey)?
[20:25:13 CEST] <Daemon404> i.e. feathering
[20:25:24 CEST] <BBB> (where I made up the coeficients)
[20:25:25 CEST] <BBB> ?
[20:25:35 CEST] <BBB> and then some abs in there
[20:25:47 CEST] <BBB> and if distance < threshold alpha=0 else alpha=1?
[20:25:58 CEST] <BBB> (or something roughly along these lines)
[20:26:41 CEST] <BtbN> It's basically just sqrt(diff_u² + diff_v²), with diff_u=u-key_u
[20:26:56 CEST] <BtbN> y is ignored
[20:26:59 CEST] <BtbN> for the keying
[20:26:59 CEST] <BBB> diff_u is int
[20:27:01 CEST] <BBB> ok
[20:27:04 CEST] <BBB> diff_v is also int
[20:27:06 CEST] <BBB> sqrt is int
[20:27:11 CEST] <BBB> (or can be int)
[20:27:22 CEST] <BBB> sq is obv int
[20:27:32 CEST] <BBB> so I dont see the issue
[20:27:42 CEST] <BBB> theres a lot of sqrt int algos out there
[20:28:07 CEST] <BBB> theyre not necessarily faster than float in scalar, but since theyre easier to vectorize and their precision is tunable, they tend to be faster in practice
[20:28:08 CEST] <BtbN> The values are in the range 0-255, so an integer sqrt wouldn't be precise enough.
[20:28:19 CEST] <BBB> why
[20:28:33 CEST] <BBB> you can express the result as a fixed-point value
[20:28:38 CEST] <BBB> 0x100 = 1.0
[20:28:40 CEST] <BBB> 0x200 = 2.0
[20:28:43 CEST] <BBB> 0x180 = 1.5
[20:28:44 CEST] <BBB> etc.
[20:28:49 CEST] <BtbN> So basically just multiply everything up...
[20:28:50 CEST] <BBB> it can be as precise as you like
[20:28:56 CEST] <BBB> thats fixed point integer math, yes
[20:29:08 CEST] <BtbN> Very annoying. I want this to work with floats first.
[20:29:16 CEST] <BBB> lol ok:)
[20:29:39 CEST] <BtbN> Cause at the moment it's keying everything, so something is still wrong.
[20:29:45 CEST] <BBB> right
[20:29:46 CEST] <BBB> ok
[20:30:03 CEST] <BBB> debug it first, when youre ready for integer feel free to ask any questions here
[20:30:09 CEST] <BBB> a lot of us have experience in that area :)
[20:30:42 CEST] <BtbN> I also have no idea whether the algorithm itself is ideal. There isn't much about the topic, except how to do it in various commercial software packages.
[21:24:06 CEST] <nalply> I am new here. AVInputFormat.get_device_list() is not implemented for dshow and avfoundation. I would like to rewrite avdevice so that I can list devices programmatically (i.e. not by executing `ffmpeg -list_devices true -f dshow`). What do you think? Please also see https://trac.ffmpeg.org/ticket/4486.
[21:37:48 CEST] <nevcairiel> if ffmpeg.c can list the devices, then there must be a API to do it programmatically
[21:54:50 CEST] <nalply> Sorry, no. Please read line 685 of libavdevice/avfoundation.m: https://ffmpeg.org/doxygen/trunk/avfoundation_8m_source.html#l00685
[21:56:15 CEST] <nalply> It's in the function avf_read_header(). It's invoked when the input format starts reading from the device.
[21:57:22 CEST] <nalply> It checks whether the option list_device is set then it does not invoke avdevice_list_devices() but immediately queries AVFoundation and puts the information to av_log() calls.
[22:00:48 CEST] <nalply> Similarly for dshow: https://ffmpeg.org/doxygen/trunk/dshow_8c_source.html#l01013
[22:14:42 CEST] <BtbN> Hm, it does _something_, but it keys the wrong color.
[22:17:48 CEST] <nalply> Huh?
[22:30:46 CEST] <BBB> nalply: Im fine with that being an explicit api
[22:30:59 CEST] <BBB> BtbN: well thats a bug in your code :-p
[22:33:50 CEST] <nalply> Ah, thanks. Did you write about ffvp9 last year?
[23:45:25 CEST] <cone-149> ffmpeg 03Michael Niedermayer 07master:93db2708d3b0: ffmpeg: Fix null pointer dereference in do_video_out()
[00:00:00 CEST] --- Mon Apr 20 2015