[Ffmpeg-devel-irc] ffmpeg.log.20170507

burek burek021 at gmail.com
Mon May 8 03:05:01 EEST 2017

[00:00:15 CEST] <grublet> the hood on my oven didnt vent properly
[00:00:20 CEST] <furq> i had a job interview on one of those days as well
[00:00:28 CEST] <grublet> so anytime i cooked the house would fill with smoke or steam
[00:00:31 CEST] <furq> i had to shower by filling a bunch of 2-litre coke bottles with water from the kettle
[00:00:36 CEST] <furq> that was fun
[00:00:44 CEST] <grublet> ive taken plenty of sink showers
[00:00:55 CEST] <grublet> when i left my place i slept in my car and showered at the gym
[00:01:15 CEST] <furq> well this was in london, so we were paying £900 pcm for the privilege of getting fucked around like this
[00:01:35 CEST] <grublet> my rent at that shitty complex was $170 a month
[00:01:47 CEST] <furq> yeah i pay less than a third of that for my current place
[00:01:50 CEST] <grublet> now i pay $450 but that includes everyting
[00:02:11 CEST] <grublet> and i live in the gay part of the city, so that's pretty good deal
[00:02:36 CEST] <furq> also at that place in london, the guy downstairs complained that there was a leak coming through from our flat into his bathroom
[00:02:47 CEST] <grublet> i know one gay homeless dude who squats in a place in old louisville with like 4 other gay dudes or w/e
[00:02:55 CEST] <furq> so the landlord came round, determined that it was nothing to do with him, but actually that we were taking such vigorous baths that the water was spilling over the edge of the bath
[00:03:03 CEST] <grublet> furq: my shower would link on the old lady beneath me
[00:03:06 CEST] <furq> even though both of us only ever used the shower
[00:03:15 CEST] <kinkinkijkin> we pay $200 a month plus utilities to live in a crappy-but-maintained old single-floor house in a nice neighbourhood in the middle of all of the bad neighbourhoods
[00:03:16 CEST] <grublet> leak*
[00:03:41 CEST] <furq> so instead of resealing the bath, he went outside, literally looked around in the alley round the back of our house, found a discarded upvc window frame, cut one edge off it, and screwed it to the side of the bath
[00:03:57 CEST] <furq> he actually thought this would work
[00:04:16 CEST] <furq> obviously he was back round a week later to reseal the fucking bath
[00:04:21 CEST] <furq> but he left the windowframe there
[00:04:29 CEST] <grublet> landlords; fuck em
[00:04:29 CEST] <furq> it's probably still there now, baffling the new occupant
[00:04:43 CEST] <furq> that was one of the most mental things i've ever seen anyone do
[00:04:52 CEST] <kinkinkijkin> I know a landlord on espernet who is actually a pretty good landlord
[00:04:54 CEST] <grublet> you havent seen much then
[00:05:06 CEST] <kinkinkijkin> gives his occupants free gigabit via ethernet lol
[00:05:16 CEST] <grublet> kinkinkijkin: furq are you from the uk?
[00:05:27 CEST] <furq> he'd travelled halfway across london to come to the flat, brought his mate with him, and then went to that much effort to avoid paying for a tube of caulk
[00:05:28 CEST] <kinkinkijkin> I'm in canada
[00:05:34 CEST] <grublet> same thing
[00:05:34 CEST] <kinkinkijkin> but this landlord I know is from the US
[00:05:36 CEST] <grublet> just colder
[00:05:40 CEST] <furq> well, "effort"
[00:05:52 CEST] <furq> and yeah i am
[00:05:52 CEST] <kinkinkijkin> lolno, canada is like the US if it were good
[00:06:02 CEST] <kinkinkijkin> the UK is the US of europe
[00:06:16 CEST] <grublet> canada is like the us if we were a territory
[00:07:15 CEST] <kinkinkijkin> oh yeah by the way, I don't recommend using VAAPI encoding on AMD GPUs right now unless you're testing; outputs garbage
[00:07:32 CEST] <kinkinkijkin> and runs at .1x speed for me
[00:07:39 CEST] <grublet> Canada is only good for 2 things. One is Trailer Park Boys, and the other is the Montreal death metal scene
[00:07:51 CEST] <furq> fuck off lahey
[00:08:09 CEST] <kinkinkijkin> let's not get to insulting each others' countries even though I did it first
[00:08:15 CEST] <grublet> Yeah a canadian friend turned me onto that show
[00:08:19 CEST] <kinkinkijkin> ignore my hypocrisy
[00:08:55 CEST] <furq> put a fuckin shirt on randy
[00:09:25 CEST] <grublet> If i were a character from that show, it would probably be randy, just because im fat and gay
[00:10:12 CEST] <furq> you seem more like a cory to me
[00:10:17 CEST] <kinkinkijkin> I am not a character from that show, and I will not pretend to share similarities with any character of that show
[00:10:25 CEST] <furq> i'm obviously cyrus
[00:10:34 CEST] <grublet> i look like julian
[00:10:58 CEST] <grublet> my canadian friend is basically an irl ricky
[00:15:15 CEST] <grublet> furq: https://s-media-cache-ak0.pinimg.com/736x/af/a6/53/afa653f749cb888f9cad2a34d34fd60d.jpg
[00:17:58 CEST] <grublet> i like trailer park boys because it characters and situations could actually exist in real life
[00:28:32 CEST] <kinkinkijkin> heh, x264 is 5x faster at minimum
[00:34:19 CEST] <grublet> kinkinkijkin: 5x faster than what?
[00:34:34 CEST] <kinkinkijkin> h264_vaapi on a 7850
[00:34:41 CEST] <kinkinkijkin> running radeon/mesa
[00:39:47 CEST] <grublet> my gpu is pretty old now
[00:39:53 CEST] <grublet> gtx 670
[01:06:47 CEST] <pomegranate> Hi, I am a newbie and need some guidance.  I am trying to write a code that receives a live input stream (rtmp or hls for example), decode a frame every x seconds, and pass the image as an array to another program. Where do you recommend to start?
[01:07:27 CEST] <JEEB> '34
[02:07:21 CEST] <DHE> pomegranate: there's API examples in doc/examples for you to start with. do note that if you only pick a single frame to send then you must either single out keyframes or decode all frames in memory for selection
[03:04:43 CEST] <kinkinkijkin> I have a dilemma: radeon+mesa affords me higher framerates in games and more gpu transcode options
[03:05:14 CEST] <kinkinkijkin> but amdgpu+mesa lets me use svp and has better gpu transcode performace and allows me to use vulkan
[03:07:37 CEST] <c3r1c3-Win> Sounds like a dilemma for AMD to solve. ;-)
[03:12:21 CEST] <kinkinkijkin> oh and then amdgpu pro gives better vulkan and OCL performance but limits my system in ways I'm reluctant to allow
[04:43:07 CEST] <atomnuker> supposing I have an AVPacket of a single jpeg image and I wanted to mux that as a cover image to a flac stream
[04:44:34 CEST] <atomnuker> how would I do that using the lavf API? I'm already writing the flac stream, but can't find any info on just single images
[05:04:37 CEST] <james999> well now i'm gonna listen to a video about #Brexit
[05:04:41 CEST] <james999> this should be interesting
[10:01:11 CEST] <crow> i need to do bisect for an bug which i opened, working release is 3.2.4 and 3.3 is not, but how to find these commit in master? I was checking https://git.ffmpeg.org/gitweb/ffmpeg.git/shortlog and https://git.ffmpeg.org/gitweb/ffmpeg.git/tags but for example i dont get it what is commit from 3.2.4 which is good
[10:13:15 CEST] <furq> crow: https://github.com/FFmpeg/FFmpeg/tags
[10:14:12 CEST] <furq> you can just use the tag name with git bisect though
[10:19:41 CEST] <crow> furq thank you for both, well first time doing bisect so thank you for the tipp
[12:52:52 CEST] <faLUCE> iive: see? nobody cared about the usage example :-)
[13:55:20 CEST] <hiihiii> hello
[13:55:50 CEST] <hiihiii> scale up then double frame rate? or do reverse
[13:56:06 CEST] <hiihiii> double frame rate then scale up?
[13:56:48 CEST] <hiihiii> "scale=480:-1,setsar=1:1,fps=60" vs "fps=30,scale=480:-1,setsar=1:1"
[13:57:03 CEST] <hiihiii> sorry "scale=480:-1,setsar=1:1,fps=60" vs "fps=60,scale=480:-1,setsar=1:1"
[13:59:43 CEST] <hiihiii> I think 1st way is more efficient if ffmpeg scales the frame then outputs it two times in a row
[14:00:02 CEST] <hiihiii> as opposed to double the frame then do scaling for both of them
[14:03:27 CEST] <furq> yeah, scale first
[14:04:20 CEST] <hiihiii> okay
[14:07:47 CEST] <hiihiii> can you explain the tblend filter in terms of what it does to frames
[14:08:34 CEST] <hiihiii> a 60fps : A B C D E F G H
[14:09:00 CEST] <furq> it'll output A+B C+D E+F G+H at 30fps
[14:10:02 CEST] <hiihiii> huh? I run "tblend=all_mode=average" on it and it became 60fps : A+B B+C C+D D+E E+F G+H
[14:10:27 CEST] <hiihiii> well not sure
[14:10:31 CEST] <furq> oh
[14:10:35 CEST] <furq> i'm probably wrong
[14:10:36 CEST] <hiihiii> all I know is that it stayed 60fps
[14:11:58 CEST] <hiihiii> so it must have blended the frames to A+B B+C C+D D+E E+F G+H. I can't think of any other way
[14:12:20 CEST] <hiihiii> I had to put "tblend=all_mode=average,fps=30" to get a 30fps
[14:12:31 CEST] <hiihiii> A+B C+D E+F G+H
[15:40:44 CEST] <TheWild> hello
[15:42:08 CEST] <TheWild> can ffmpeg be used to cut MP3 files (and possibly other types) at frame boundaries?
[15:42:46 CEST] <c_14> if you use -c copy it'll cut at frame boundaries by default
[15:49:47 CEST] <TheWild> This worked like a charm! Thank you very much.
[15:50:44 CEST] <TheWild> I regret I didn't know about this software few years ago.
[16:03:39 CEST] <faLUCE> hey, do you know how I can feed av_read_frame with:  char* muxed_packet?
[16:04:36 CEST] <faLUCE> (so I can demux muxed_packet)
[16:07:03 CEST] <mosb3rg> hey guys, does anyone have any reading material or reference material on a way to implement the hls cookie system used in ffmpeg to simply load the feed directly into a browser connecting directly, i have been digging around everywhere i just cannot find any documentation on how its done.
[16:58:57 CEST] <main> Hey. Can someone tell me how to compile the ffplay. I have tdm-gcc and VS 2013. I tried to do everything according to the instructions. But I get a lot of mistakes.
[17:00:10 CEST] <JEEB> ffplay needs SDL2 IIRC
[17:01:25 CEST] <tdr> yes it does
[17:02:13 CEST] <main> i have sdl2, msys, yasm. do as told here https://trac.ffmpeg.org/wiki/CompilationGuide/MSVC
[17:02:39 CEST] <JEEB> then time to check config.log
[17:03:28 CEST] <JEEB> also I hope you're on the latest update of vs2013
[17:18:47 CEST] <crow> i am trying to compile an commit but i needed to remove few stuff from configure and now it wont finish compiling https://defuse.ca/b/FbCRnoT5 <- i am doing bisect and i am almost there : "Bisecting: 1 revision left to test after this (roughly 1 step)" .. if i skip it or mark bad as i cant compile would it change a lot on in final result?
[17:24:34 CEST] <nyuszika7h> okay I can't seem to reproduce the issue of getting 4000 kbps for 720x400 with CRF 18 so far, maybe that was in an earlier encode where I wasn't doing deinterlacing/scaling, I'll wait and see with the rest of the episodes
[17:38:19 CEST] <nyuszika7h> heh, exactly 1x encoding speed
[18:10:09 CEST] <faLUCE> Any idea of how I can demux a char* array? do I have to link a custom I/O (avio_ctx) to an AVFormatContext and then call avio_write(avio_ctx, array, size) ?
[18:40:54 CEST] <JEEB> faLUCE: yes if it's not a way that lavf generally reads things then you will have to implement the callbacks for read read (and possibly others)
[18:44:12 CEST] <main> VS2013. Horror, like a pervert, to compile files.
[18:44:13 CEST] <main> It was necessary to replace in all files the expression "static inline" to "statics __inline"
[18:44:15 CEST] <main> Had to comment a few lines. Example cmdutils.c "PRINT_LIB_INFO(avresample, AVRESAMPLE, flags, level);" avresample not found.
[18:44:16 CEST] <main> It was surprisingly assembled. But it works badly. Sound does not play for network stream
[18:44:41 CEST] <JEEB> uhh
[18:44:49 CEST] <JEEB> I'm pretty sure a new enough FFmpeg has that inline stuff
[18:45:01 CEST] <JEEB> since MSVC2013/2015/2017 work out of the box
[18:45:26 CEST] <JEEB> ctrl+F microsoft on http://fate.ffmpeg.org/
[18:45:33 CEST] <JEEB> those are the automated tests run on master
[18:49:24 CEST] <main> thank You.  I did not know about this page
[18:49:51 CEST] <main> failed test for VC2013
[18:53:04 CEST] <JEEB> yea, but as far as I know no need for your change as it should be done by the configure script
[18:53:16 CEST] <JEEB> or I think it might even be just under an ifdef
[18:53:46 CEST] <JEEB> what I'm aiming at is that the change that you were doing is there, otherwise I would've not been able to build FFmpeg with MSVC either :P
[18:53:56 CEST] <JEEB> (I have built it with 2013,2015 and 2017)
[18:55:52 CEST] <main> I will be very grateful if you will help me . Send me please a project file "*.sln" or "*.vcxproj", so that it works out of the box. My task is only ffplay.
[18:58:16 CEST] <main> i used the source code "ffmpeg-20170503-a75ef15-win32-dev"
[19:02:28 CEST] <JEEB> no, you use the build system to get the compiles
[19:02:37 CEST] <JEEB> heck, I even linked you the FATE configurations
[19:03:42 CEST] <main> In an ideal I want to change a little ffplay to build as "dll" and to make some interface outside, but  it is impossible to build really even "exe" ))
[19:04:04 CEST] <JEEB> no, ffplay is a proof of concept thing
[19:04:11 CEST] <JEEB> if you want a player use libvlc or libmpv as a base
[19:09:33 CEST] <main> I need  the ffmpeg, i could write my own player, but it's easier to take a ready-made implementation. Ffplay for this fits.
[19:10:08 CEST] <JEEB> I'm pretty much sure it doesn't
[19:10:34 CEST] <JEEB> libvlc is LGPL and most likely a better fit if you actually need a player
[19:10:49 CEST] <JEEB> it also uses FFmpeg in the background
[19:10:55 CEST] <JEEB> and is more tested than ffplay
[19:11:03 CEST] <JEEB> (and is less of a PoC than ffplay)
[19:24:23 CEST] <crow> can someone these compiling these two skiped commits in this bug report https://trac.ffmpeg.org/ticket/6364#comment:3 because of that a first bad commit is not found. and I am not sure why it does not get compiled
[19:26:35 CEST] <BtbN> just disable-doc?
[19:51:09 CEST] <crow> ok let me check
[20:16:01 CEST] <faLUCE> [18:40] <JEEB> faLUCE: yes if it's not a way that lavf generally reads things then you will have to implement the callbacks for read read (and possibly others)  <--- sorry for my late answer, so, I do I have to call avio_write(), then av_read_frame() with a custom read callback (avio_alloc_context(avio_ctx_buffer, avio_ctx_buffer_size, 0, &bd, &read_packet, NULL, NULL)  ) ?
[20:23:49 CEST] <kinkinkijkin> anybody know an alternative to SVP for frame interpolation? I'm okay with reencoding beforehand
[20:25:31 CEST] <durandal_1707> kinkinkijkin: vapoursynth and mvtools
[20:27:50 CEST] <kinkinkijkin> do I have to write anything myself to use this?
[20:27:55 CEST] <kinkinkijkin> it looks like I do
[20:30:16 CEST] <faLUCE> I'm seeing that avio_reading uses av_file_map() in order to provide to the aviocontext the muxed data:   https://ffmpeg.org/doxygen/3.2/avio_reading_8c-example.html#a14 ... What should I use if I don't have a file but char* arrays?
[20:31:23 CEST] <JEEB> faLUCE: this is old code but this is all I did with IStreams :P https://github.com/jeeb/matroska_thumbnails/blob/master/src/matroska_thumbnailer.cpp#L131..L145
[20:31:42 CEST] <JEEB> and I didn't have to call anything extra as the callbacks got used
[20:32:00 CEST] <JEEB> https://github.com/jeeb/matroska_thumbnails/blob/master/src/istream_wrapper.c#L14
[20:32:07 CEST] <JEEB> this is the functions themselves :P
[20:32:24 CEST] <JEEB> I mean, it's not harder than that
[20:34:55 CEST] <faLUCE> thanks JEEB, this  was what I did, more or less. But I don't see avio_write() ... how do you provide arrays to the muxer?
[20:35:05 CEST] <JEEB> I didn't do any muxing
[20:35:12 CEST] <faLUCE> demuxer
[20:35:14 CEST] <JEEB> so if you are doing muxing you provide the writer callback
[20:35:21 CEST] <faLUCE> I'm doing demuxing
[20:35:32 CEST] <JEEB> then just reading stuff is all you need, and if you can seeking :P
[20:35:42 CEST] <JEEB> I don't see where the writing comes to play
[20:36:49 CEST] <faLUCE> but the read callback is called after av_read_frame() .... I need to know how to feed the demuxer with the char* arrays of muxed data
[20:37:04 CEST] <faLUCE> I tried avio_write() but I don't know if it is the right method
[20:37:07 CEST] <JEEB> what
[20:37:18 CEST] <JEEB> the callback will get called when the thing wants data from you
[20:37:23 CEST] <JEEB> I don't get what's the problem there
[20:38:11 CEST] <faLUCE> JEEB: now I see. I thought that the read callback was called after av_read_frame
[20:38:32 CEST] <faLUCE> ok, thanks, let's try
[20:41:49 CEST] <faLUCE> more precisely: it's called after av_read_frame(), and inside it  I have to fill the buffer with custom data
[20:41:51 CEST] <faLUCE> right?
[20:41:58 CEST] <kinkinkijkin> durandal_1707, do I have to write anything myself for using mvtools? if so, how much writing will I have to do?
[20:43:11 CEST] <durandal_1707> kinkinkijkin: download vapoursynth and mvtools and write script
[20:43:30 CEST] <furq> https://kaangenc.me/mpv/
[20:43:34 CEST] <furq> kinkinkijkin: there's a sample script there
[20:44:23 CEST] <JEEB> faLUCE: if av_read_frame was the demuxing function then yes, within it if it will require more data it will ask you for it through that callback
[20:45:36 CEST] <kinkinkijkin> thanks furq
[20:52:52 CEST] <crow> BtbN seems it does not work, as if i disable that link ffmpeg will not be made in packages
[20:53:06 CEST] <crow> BtbN make: *** No rule to make target 'ffmpeg.1'. Stop.
[20:53:30 CEST] <BtbN> probably in the middle of some broken merge then, an can be safely ignored
[20:53:41 CEST] <kinkinkijkin> furq this implies you can't use vapoursynth with hwdec, is this so?
[20:55:04 CEST] <crow> BtbN this is PKGBUILD (archlinux build file) https://defuse.ca/b/IQOtEhbq
[20:56:03 CEST] <BtbN> so?
[20:58:59 CEST] <crow> do you see anything suspect not to build ffmpeg binary file?
[20:59:32 CEST] <crow> as with line 32 commented out it builds but no /usr/bin/ffmpeg
[21:00:32 CEST] <BtbN> when building those specific commits?
[21:00:39 CEST] <crow> exactly
[21:00:53 CEST] <BtbN> they are probably libav merges, and thus it'll build avconv instead.
[21:03:37 CEST] <kinkinkijkin> furq, contrary to the implications of this post, this does in fact work with hwdec
[21:16:52 CEST] <kinkinkijkin> hmmm, mvtools' framerate upscaling is less consistent than SVP but also has far less artifacts
[21:17:53 CEST] <crow> BtbN that was it, it build avconv . now my bisecting was done, and this seems as bad commit http://git.ffmpeg.org/gitweb/ffmpeg.git/commit/744801989099df26e90b00062c645969c5347533 for https://trac.ffmpeg.org/ticket/6364
[23:17:32 CEST] <faLUCE> JEEB: I'm seeing that avformat_open_input() calls the read callback and wants a number of muxed bytes to buffer. Is there a way to know this number before calling  avformat_open_input() ?
[23:18:10 CEST] <faLUCE> (In this way I can buffer enough bytes before calling it)
[23:18:21 CEST] <JEEB> not really
[23:19:28 CEST] <faLUCE> damn...
[23:21:09 CEST] <JEEB> generally if you really run out of bytes you can't do much more than block (or try to tell you don't have the bytes yet by returning less)
[23:22:59 CEST] <faLUCE> JEEB: if I return less than bufsize, then do I have to call avformaT_open_input() again?
[23:23:18 CEST] <faLUCE> (when I have enough bytes)
[23:24:00 CEST] <JEEB> depends. it might just be going to retry the reading by itself.
[23:24:18 CEST] <JEEB> or it just will try to guess your input format without actually reading any bytes
[23:24:35 CEST] <JEEB> if you know your demuxer before hand it'd probably a be a good idea to define that :P
[23:24:59 CEST] <JEEB> and then after it picks the demuxer you probably can start just reading data and it will ask you for more buffer each time :P
[23:26:14 CEST] <faLUCE> [23:24] <JEEB> or it just will try to guess your input format without actually reading any bytes <--- I can do that, but do I have to call avformat_open_input() in this case too?
[23:27:07 CEST] <faLUCE> sorry, wrong question
[23:30:07 CEST] <faLUCE> the correct question is:  I call avformaT_open_input(), and I see, inside the read callback, that it needs more bytes than what I actually have. How can I retry that, without blocking??
[23:30:28 CEST] <JEEB> IO tends to be blocking
[23:31:08 CEST] <faLUCE> I know, but I'm trying to avoid blocking
[23:31:49 CEST] <JEEB> you can only do that to a limit without threads for the IO and trying to pre-empt
[23:31:57 CEST] <JEEB> like, you get a successful return from the open
[23:32:05 CEST] <JEEB> (by not returning any data)
[23:32:16 CEST] <JEEB> (or by having X amount of bytes returned)
[23:32:37 CEST] <JEEB> and then of course the next stuff would be to call the read() function which can need X or Y amount of bytes
[23:32:45 CEST] <JEEB> depending on the format and the size of coded samples etc
[23:33:12 CEST] <faLUCE> JEEB: yes, I suspected that. So, do I have to return less bytes from the callback, so that avformaT_open_input() returns unsucessful, and then I know that I have to call it again?
[23:33:48 CEST] <JEEB> only if it doesn't succeed, I mean most likely it will pick some demuxer (although you can define the correct demuxer in various ways)
[23:34:01 CEST] <JEEB> after which it's just reading that you need to do
[23:34:56 CEST] <faLUCE> what I wonder is: if I return less bytes from the callback, does avformaT_open_input() returns unsucessful, so I can re-call it?
[23:35:55 CEST] <JEEB> I don't know, because you might want to open a demuxer without feeding any data to it yet :P
[23:36:16 CEST] <JEEB> so in theory as long as the opening of the demuxer succeeds you should be able to move to the reading phase instead
[23:36:21 CEST] <JEEB> but what do I know :P
[23:37:01 CEST] <faLUCE> do you mean that another solution would be opening a demuxer with correct infos, so it doesn't need to buffer while avformat_open_input() ?
[23:37:29 CEST] <JEEB> if you know your input format for the demuxer before hand that would make sense yes
[23:37:38 CEST] <JEEB> that you tell that it will be MPEG-TS, for example :P
[23:37:57 CEST] <faLUCE> I don't think so... I suspect this is true for av_find_stream_info()
[23:38:06 CEST] <faLUCE> but not for avformat_open_input()
[23:39:37 CEST] <faLUCE> I think that avformat_open_input() is only some buffering stuff, but not probing stuff
[23:39:45 CEST] <faLUCE> but I can be wrong
[23:43:55 CEST] <JEEB> https://ffmpeg.org/doxygen/trunk/structAVFormatContext.html#a78efc5a53c21c8d81197445207ac4374
[23:44:10 CEST] <JEEB> the input format gets set by avformat_open_input
[23:49:35 CEST] <faLUCE> right. I could try to open the demuxer by manually filling iformat = MPEGTS, but I suspect that it wants other fields to be set... (streams with codecs)
[23:49:47 CEST] <faLUCE> JEEB
[23:50:36 CEST] <JEEB> well mpeg-ts brings you streams on runtime
[23:50:44 CEST] <JEEB> as you keep reading packets
[00:00:00 CEST] --- Mon May  8 2017

More information about the Ffmpeg-devel-irc mailing list