[Ffmpeg-devel-irc] ffmpeg.log.20161014

burek burek021 at gmail.com
Sat Oct 15 03:05:01 EEST 2016


[01:11:23 CEST] <Sashmo> can anyone tell me why VLC can play an AC3 audio-only file, but ffplay starts playing the first few seconds, then craps out and turns into a digital noise nightmare?
[01:15:45 CEST] <Sashmo> I've tried all the decoders.....
[01:37:51 CEST] <iive> sounds like a bug.
[01:38:02 CEST] <iive> can you file an issue and upload a sample?
[01:38:11 CEST] <iive> sorry i got to go.
[01:40:32 CEST] <Sashmo> sure
[01:40:35 CEST] <Sashmo> will do
[02:01:14 CEST] <i_o> hei
[02:02:07 CEST] <i_o>  I am freaking out, what's up with the spectral graph when playing audio??
[02:02:23 CEST] <i_o> I didn't set any option for it.
[06:08:04 CEST] <Kiicki> Should I keep the framerate constant or variable when converting movies?
[07:25:20 CEST] <Spring> interesting guide, https://en.wikibooks.org/wiki/FFMPEG_An_Intermediate_Guide
[07:26:17 CEST] <Spring> guess it was intended to be kind of supplementary to the ffmpeg wiki/docs
[12:00:42 CEST] <CorvusCorax> Hi. I have a problem. I am trying to encode video from a raw image buffer. The buffer is BGR (24bit) 2040x1086 (from a Basler camera). I memcpy it into a frame (at frame->data[0]) created with alloc_picture with the correct width/height. The output video has the correct width and height but is distorted. Every line seems to be missing 8 pixels, so the next one ends up getting shifted. The resolution of the destination video (or images, if exported as PNG)
[12:00:42 CEST] <CorvusCorax> is correct though
[12:01:18 CEST] <CorvusCorax> if I create an opencv image directly from the source buffer and display or save it, it has the same dimensions, but no missing pixels/shift
[12:04:37 CEST] <CorvusCorax> sorry, with alloc_picture I meant the wrapper from the muxer example in the src code, that's using av_frame_alloc() internally
[12:37:31 CEST] <CorvusCorax> nvm, I found the cause.  The allocated frame had a stride longer than the line, so by just copying the buffer with memcpy I ended up putting pixels into the padding-bytes at the end of the line
[12:37:41 CEST] <CorvusCorax> so I need to copy line by line :(
[12:44:25 CEST] <kerio> CorvusCorax: :(
[12:50:05 CEST] <BtbN> CorvusCorax, that's what av_image_copy_plane is for.
[12:53:29 CEST] <CorvusCorax> BtbN  would that be actually faster than a loop over all lines with a memcpy() of each line?
[12:54:04 CEST] <BtbN> that's pretty much what it does internally, but why re-invent something that's already implemented for exactly that purpose?
[12:56:41 CEST] <CorvusCorax> true. but while we are at it - could I skip the copying and somehow use the existing buffer to be treated as a frame?
[12:58:14 CEST] <CorvusCorax> something like "av_frame_change_buffer(frame,uint8_t*buffer,size_t stride)" ?
[13:05:08 CEST] <CorvusCorax> I need to somehow push out 2 megapixel @90 fps on a single core on board a UAV, so even such a small optimization would actually matter ;)
[13:14:52 CEST] <BtbN> CorvusCorax, just set the data pointers accordingly.
[13:15:04 CEST] <BtbN> and set the correct linesize
[13:15:08 CEST] <CorvusCorax> works. forcing linesize[0] and data[0] to match linesize and address of the buffer does the job
[13:15:13 CEST] <CorvusCorax> just did that
[13:15:29 CEST] <BtbN> Just make sure you unset it again before freeing the frame
[13:15:54 CEST] <CorvusCorax> now I wonder, the example says, some codecs keep copies of frames internally:/* when we pass a frame to the encoder, it may keep a reference to it
[13:15:54 CEST] <CorvusCorax>      * internally;
[13:15:54 CEST] <CorvusCorax>      * make sure we do not overwrite it here
[13:15:54 CEST] <CorvusCorax>      */
[13:15:54 CEST] <CorvusCorax>     ret = av_frame_make_writable(pict)
[13:16:37 CEST] <CorvusCorax> keeping that make_writable call would be sufficient to make sure that won't cause issues?
[13:17:05 CEST] <CorvusCorax> I think none of the codecs I can use for my usecase would do that anyway, just curious
[13:21:58 CEST] <BtbN> anything that needs the frame will get itself a copy.
[13:44:07 CEST] Action: kepstin notes that the way the 'crop' video filter in ffmpeg works is it just adjusts the data pointers, stride, etc. no copying involved.
[13:49:54 CEST] <CorvusCorax> BtbN is there a way to tell avio_open to open a file in O_SYNC mode and disable write caching for the OS ?
[13:50:37 CEST] <BtbN> kepstin, it should still make the frame writable first.
[13:56:43 CEST] <kepstin> I assume it does yeah
[15:14:40 CEST] <lasser> hello folks, it's me again, the guy with the annoying YT live stream and his very old machine...
[15:15:19 CEST] <kerio> sup
[15:15:51 CEST] <lasser> I managed to go down to less than 20% load.
[15:16:10 CEST] <lasser> The problem was the capturing of the audio device:
[15:16:32 CEST] <lasser> -f alsa -i hw:0,0
[15:17:02 CEST] <lasser> after some searching it turns out that I can convince ALSA to send less data:
[15:17:14 CEST] <lasser> -f alsa -ar 11025 -i hw:0,0
[15:17:19 CEST] <kerio> oh GOD
[15:18:43 CEST] <kerio> anyway try setting 44100 explicitly?
[15:19:00 CEST] <kerio> maybe it was defaulting to something stupid like 96k and then resampling it
[15:19:21 CEST] <furq> it defaults to 48k afaik
[15:19:21 CEST] <lasser> it defaults to 48kHz
[15:19:26 CEST] <lasser> :-)
[15:19:37 CEST] <furq> it defaults to 48k
[15:20:12 CEST] <furq> so is there a problem or did you just want to tell us
[15:21:10 CEST] <lasser> I wanted to tell you, but there's something new
[15:23:07 CEST] <lasser> during the live stream the input still image might be altered. Is there a chance that ffmpeg picks up the altered one without restarting?
[15:24:53 CEST] <furq> you could probably do it with the movie source and sendcmd, but you'd need to have all the images in advance
[15:25:32 CEST] <furq> assuming you don't know the timestamps that the image will change at
[15:27:40 CEST] <lasser> ok, good to know. So I have to stop and restart the streaming after altering the image. I can live with that.
[15:28:00 CEST] <furq> i guess you could use x11grab
[15:36:25 CEST] <lasser> as I understand x11grab grabs a portion of the screen, right? -i 0,0+100,100
[15:37:13 CEST] <BtbN> doesn't it expect the display in -i?
[15:38:01 CEST] <lasser> youre right: -i :0,0+100,100
[15:38:50 CEST] <lasser> so at first the image has to stay in one place as long as ffmpeg runs. But also no other window should cover it. Or is it possible to grab a specific window?
[15:39:18 CEST] <lasser> or grab a specific workspace?
[15:39:50 CEST] <furq> you can create a dummy device
[16:07:41 CEST] <cakkal> hi guys can you help me at this little problem =) https://stackoverflow.com/questions/40043319/random-ts-filenames-in-the-m3u8-file
[16:11:39 CEST] <lasser> furq: ?
[17:24:18 CEST] <markvandenborre> I have this idea of monitoring an audio stream by taking screenshots
[17:24:32 CEST] <markvandenborre> then also "screenshotting" the audio volume level
[17:24:52 CEST] <markvandenborre> into an overlay for that screenshot
[17:25:05 CEST] <markvandenborre> is there an easy way to get a number out of ffmpeg
[17:25:16 CEST] <markvandenborre> to represent audio level?
[17:25:44 CEST] <markvandenborre> so not a full blown equaliser
[17:25:53 CEST] <markvandenborre> just a simple number
[17:27:56 CEST] <Mavrik> Not sure why ffmpeg would handle that.
[17:28:06 CEST] <Mavrik> You'll have to read that value out of your OS sound system.
[17:29:39 CEST] <markvandenborre> Mavrik: there's some volumedetect stuff
[17:29:49 CEST] <markvandenborre> in ffmpeg, but not entirely sure what I could do with it
[17:29:51 CEST] <emilsp> hello, how do I initialize a decoder and a filter to decode and filter a raw mjpeg file ? the file has no headers, but I know the framerate and the resolution
[17:30:53 CEST] <kerio> but jpegs have a header :<
[17:30:53 CEST] <Mavrik> markvandenborre, it's the wrong tool for the job.
[17:30:55 CEST] <markvandenborre> basically trying to get _some_ feedback to the person monitoring the stream
[17:31:01 CEST] <markvandenborre> about audio levels
[17:31:36 CEST] <markvandenborre> this is on a headless machine that is exposing the images grabbed through a simple web server
[17:31:55 CEST] <markvandenborre> the main interesting thing is "do we have any audio at all?"
[17:32:07 CEST] <emilsp> kerio, well, sortakinda, but this is just a stream of them, there is no 'container'
[17:32:25 CEST] <kerio> so... a mjpeg
[17:32:26 CEST] <kerio> :D
[17:32:31 CEST] <emilsp> okidoki
[17:32:59 CEST] <kepstin> markvandenborre: ffmpeg has a couple ways of drawing graphs from audio analysis filters, see the examples on the 'drawgraph' filter and the ebur128's "video" option.
[17:33:10 CEST] <kepstin> markvandenborre: those could be output to an image or video stream of some sort
[17:33:17 CEST] <emilsp> well, I'm just extrapolating due to my lack of knowledge - when I pipe the stream directly from the source to vlc, it plays, when I pipe it to a file and then open the file from vlc, it just exits immediately because "lol 0fps, gl m8"
[17:33:18 CEST] <markvandenborre> kepstin: great, thank you
[17:33:40 CEST] <emilsp> so I was wondering how I would initialize a decoder and a filter
[17:35:00 CEST] <kepstin> emilsp: you probably want to use the mjpeg demuxer (format), which handles parsing the stream into frames, and has an avoption to set framerate
[17:36:51 CEST] <hellos> hello
[17:38:16 CEST] <emilsp> kepstin, why would I want to demux them ? I receive no audio, and the frames are received sequentially
[17:39:03 CEST] <kepstin> emilsp: the demuxer handles parsing the stream and turning it into packets with individual frames to hand to the decoder
[17:39:18 CEST] <kepstin> unless you want to do that yourself, you should just use the demuxer that does it for you :)
[17:39:24 CEST] <hellos> need help
[17:39:25 CEST] <emilsp> hmm
[17:41:34 CEST] <hellos> :)
[17:43:34 CEST] <hellos> i get error
[17:46:51 CEST] <hellos> hy
[18:10:35 CEST] <emilsp> hmm, so the doxygen docs are the best I can get, right ?
[18:43:46 CEST] <furq> markvandenborre: https://ffmpeg.org/ffmpeg-filters.html#showvolume
[18:48:04 CEST] <furq> -filter_complex "[0:a]showvolume[vol];[0:v][vol]overlay[v]" -map [v] -map 0:a
[18:48:07 CEST] <furq> http://i.imgur.com/J6rE8Jy.jpg
[18:48:49 CEST] <markvandenborre> furq: thx
[18:50:11 CEST] <furq> i guess this is pretty new because some of the options don't work here
[18:50:30 CEST] <furq> in a build from about two weeks ago
[18:50:48 CEST] <markvandenborre> I see
[18:50:54 CEST] <markvandenborre> will have a look into it
[18:53:55 CEST] <markvandenborre> furq: first mentioned in 2.8 release notes...
[18:55:22 CEST] <furq> oh fun
[18:55:31 CEST] <furq> showvolume=o=1 works, but showvolume=o=vertical doesn't
[19:42:42 CEST] <CorvusCorax> yaaaay :)  I just managed a 2 megapixel 60 fps realtime lossless encoding, using 2 SSDs in a raid array and huffyuv
[19:43:17 CEST] <CorvusCorax> it only blocks two of my cores, so I should still be able to do the on-board computer vision stuff I had planned
[20:07:49 CEST] <feliwir> hey, whats the best way to capture the desktop with the ffmpeg library? And then send it via stream on another device
[20:11:50 CEST] <kerio> depends on the OS
[20:14:33 CEST] <zyclonicz> Will VP9 actually make the quality better on the webms?
[20:17:52 CEST] <feliwir> kerio, cross platform preferably
[20:24:53 CEST] <CorvusCorax> If I use my own code, similarly to doc/examples/muxer.c  what would be the correct way to tell ffmpeg to run on X threads - equivalent to the ffmpeg -threads command line option?
[20:27:10 CEST] <CorvusCorax> feliwir: on linux ffmpeg -formats lists "x11grab" which is a demuxer only that takes frames from the running X server
[20:28:03 CEST] <CorvusCorax> that is in theory multi-platform, but it would only run on machines that have an X server running for their GUI, so usually linux (but not the more recent wayland setups) and older Unixes, but not mac os x (unless you run an X server on top of its native GUI) or windows
[20:35:40 CEST] <feliwir> CorvusCorax, i want to use the library not the commandline tool
[20:36:42 CEST] <CorvusCorax> feliwir: then you could use a platform independent solution to get the screen content in an image buffer and encode that, bypassing the ffmpeg supported demuxers
[20:37:12 CEST] <CorvusCorax> I don't know if there is a universal platform independent "grab my desktop" solution available though
[20:37:22 CEST] <CorvusCorax> maybe opencv has one, or Qt ?
[20:39:16 CEST] <CorvusCorax> ffmpeg can do it, using existing demuxers but its different demuxers for each platform. on linux/x11 you can use the x11grab demuxer. the windows one is called screen-capture-recorder or something
[22:24:19 CEST] <CorvusCorax>  I get corrupted video data when encoding with huffyuv and more than one thread (-threads parameter) with libavcodec
[22:24:43 CEST] <CorvusCorax> this is a fast moving hand encoding with 1 thread: https://postimg.org/image/o4vytiam9/
[22:24:58 CEST] <CorvusCorax> and the same thing with multiple threads:    https://postimg.org/image/wyi9wv875/
[22:25:19 CEST] <CorvusCorax> the raw data from the camera is fine but somehow ffmpeg mixes data from different frames in the same destination frame
[22:25:29 CEST] <CorvusCorax> libavcodec rather, I'm using the library
[22:25:43 CEST] <CorvusCorax> I used the code from doc/examples/muxer to encode
[22:26:14 CEST] <furq> does it happen with the ffmpeg cli
[22:27:00 CEST] <CorvusCorax> as in encoded with a single thread and then re-encoding?
[22:27:46 CEST] <furq> i guess
[22:29:53 CEST] <CorvusCorax> im trying that right now
[22:31:05 CEST] <CorvusCorax> hmm looks like the command-line tool encodes correctly
[22:31:12 CEST] <CorvusCorax> but i ran it on a different machine
[22:31:54 CEST] <CorvusCorax> would it be necessary to allocate separate frames for consecutive input frames?
[22:32:15 CEST] <CorvusCorax> the doc/examples muxer only allocates one frame used for all frames
[22:32:42 CEST] <furq> that sounds plausible
[22:32:47 CEST] <furq> i've never touched multithreading in the api
[22:42:10 CEST] <CorvusCorax> furq: that fixed it
[22:42:25 CEST] <CorvusCorax> this is still a bug, as that code was straight from the example
[22:42:47 CEST] <CorvusCorax> also example yields: libavhelper.c:342:5: warning: avcodec_encode_video2 is deprecated (declared at /usr/local/include/libavcodec/avcodec.h:5321) [-Wdeprecated-declarations]
[22:42:47 CEST] <CorvusCorax>      ret = avcodec_encode_video2(c, &pkt, frame, &got_packet);
[22:43:51 CEST] <CorvusCorax> would be nice if the deprecated warning would tell you what to use instead.  or you know, if you could just check the example in doc/examples *sic* *scnr*
[22:44:51 CEST] <CorvusCorax> I just allocated 20 frames which are cycled through instead of one, quick and dirty hack, but does the trick
[22:45:04 CEST] <CorvusCorax> (as long as it's not run on a machine with more than 20 cores I guess ;) )
[22:49:21 CEST] <CorvusCorax> whoo yeah, this is fun. 2040*1086*rgb24*90fps huffyuv realtime encoding :)
[22:51:23 CEST] <furq> ffvhuff should be faster if you can use it
[22:52:10 CEST] <furq> it should compress better as well
[22:53:46 CEST] <kerio> what about lossless x264
[22:55:12 CEST] <furq> that's much slower and also really slow to decode
[23:00:42 CEST] <furq> actually i just checked and it's much better than i remember
[23:00:55 CEST] <furq> it's still about half as fast as ffvhuff though
[23:01:36 CEST] <CorvusCorax> ffvhuff is also lossless, right?
[23:01:48 CEST] <kerio> furq: how's the bitrate tho
[23:02:52 CEST] <furq> CorvusCorax: yes
[23:03:07 CEST] <furq> kerio: better, but if huffyuv is in the picture then bitrate is obviously no concern
[23:03:59 CEST] <kerio> also does it work in realtime
[23:04:07 CEST] <furq> yeah
[23:04:25 CEST] <furq> i just tested with 1080p30 and got 150fps
[23:04:29 CEST] <furq> ffvhuff was over 200 though
[23:04:54 CEST] <kerio> what about huffyuv?
[23:06:38 CEST] <CorvusCorax> if I understood that right ffvhuff is an improvement on huffyuv, reaching slightly better fps and size
[23:06:48 CEST] <CorvusCorax> while being overall similar
[23:07:33 CEST] <CorvusCorax> gotta run, cya another time :)
[23:09:10 CEST] <kerio> what about utvideo
[23:11:14 CEST] <kerio> woah utvideo is way faster than ffv1
[23:12:46 CEST] <kerio> furq: lossless h264 is faster than ffv1 here :s
[00:00:00 CEST] --- Sat Oct 15 2016


More information about the Ffmpeg-devel-irc mailing list