[Ffmpeg-devel-irc] ffmpeg.log.20190312

burek burek021 at gmail.com
Wed Mar 13 03:05:02 EET 2019


[00:04:06 CET] <AuroraAvenue> systemd0wn, Its too complicated (I'm really an XP user :(  ) ...
[00:04:41 CET] <AuroraAvenue> If the two webm files are in the same directory can't you just give me the file command line reference, for ease?
[00:05:40 CET] <systemd0wn> AuroraAvenue: Try this StackOverflow link then https://stackoverflow.com/a/11175851/2171182 they have commands listed.
[03:18:11 CET] <ossifrage> What is the proper way to do what avformat_find_stream_info() does, but with the frame data from memory? I have the correct NALUs in a buffer, but I'd rather have libavformat parse it than do it myself.
[03:32:16 CET] <ossifrage> It looks like I found what to do in libavformat/dashdec.c
[04:14:00 CET] <damdai> is there a website where i can upload a video where video quality is not downgraded
[04:27:44 CET] <another> probably any website which does just "files" and not "video"
[04:37:09 CET] <dustsquid> vimeo if you follow their guideline
[04:37:43 CET] <dustsquid> https://vimeo.com/help/compression
[04:40:32 CET] <furq> i'm pretty sure vimeo reencodes everything
[04:42:08 CET] <dustsquid> they host very high quality videos and have built their entire reputation on that
[04:42:26 CET] <dustsquid> assuming you encode to their standards I don't see any reason why they would re-encode
[04:42:48 CET] <dustsquid> they certainly re-encode to support other resolutions
[04:43:09 CET] <dustsquid> but your source should be fine
[04:44:48 CET] <furq> https://vimeo.com/blog/post/untangling-the-knotty-myths-of-video-compression
[04:44:49 CET] <furq> #5
[04:46:41 CET] <dustsquid> I am wrong and you are absolutely correct
[04:46:48 CET] <dustsquid> apparently I have been believing the myth
[05:50:36 CET] <damdai> why does ffplay take so much CPU (50%) playing back 2k video, when vlc takes only 5%?
[06:10:00 CET] Last message repeated 1 time(s).
[06:11:05 CET] <mozzarella> I don't know
[06:27:28 CET] <mikemocha> anyone know where to obtain a Linux 64 bit static ffmpeg built with nvenc support?
[06:27:51 CET] <damdai> what is nvenc ?
[06:29:33 CET] <mikemocha> https://trac.ffmpeg.org/wiki/HWAccelIntro
[07:44:02 CET] <ossifrage> It seems like avformat_open_input()/avformat_find_stream_info() don't honour AVFormatContext.probesize, it kept reading until it had consumed everything in my avio buffer.
[07:46:35 CET] <ossifrage> Hmm, it seems to be some sort of interaction between the probesize, and the buffer_size passed to avio_alloc_context
[07:47:50 CET] <ossifrage> When probesize == buffer_size, it keeps reading until the buffer is exhausted
[09:39:57 CET] <damdai> i have a 10 minute video and from  6 minute point to 8 minute point, i want to mute the audio: is this possible to do with ffmpeg?
[09:42:36 CET] <JEEB> pretty sure you can do that with libavfilter, check the filters part of ffmpeg-all.html
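A minimal sketch of the libavfilter route JEEB points at, using the volume filter's timeline support (filenames are placeholders; 6:00-8:00 is 360-480 seconds):

```shell
# Mute the audio only between t=360s and t=480s; the video stream is
# copied untouched, only the audio is re-encoded.
ffmpeg -i input.mp4 \
  -af "volume=enable='between(t,360,480)':volume=0" \
  -c:v copy output.mp4
```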
[10:48:01 CET] <AhirPK> hi, can anyone please tell me what did i do wrong here?
[10:48:08 CET] <AhirPK> filter graph: https://pastebin.com/dVhx1uTJ
[10:48:34 CET] <AhirPK> [afifo @ 0x9cd880] Format change is not supported
[10:48:34 CET] <AhirPK> Error while feeding the filtergraph
[10:48:40 CET] <AhirPK> i am getting this error
[11:01:43 CET] <CyberShadow> AhirPK: Mostly guessing but is your audio input in FLTP?
[11:02:17 CET] <CyberShadow> Posting the command / code that resulted in said graph might be helpful
[11:03:58 CET] <AhirPK> CyberShadow: input file https://pastebin.com/V8dG7GXH
[11:04:23 CET] <AhirPK> code: https://www.dropbox.com/s/eojzuw78s7zemzt/main.c?dl=0
[11:04:50 CET] <AhirPK> output https://i.stack.imgur.com/JR30E.png
[11:05:42 CET] <AhirPK> and here is detailed explanation of my problem in stack overflow https://stackoverflow.com/questions/55115335/getting-error-in-ffmpeg-complex-filter-init
[11:06:48 CET] <CyberShadow> I am not an ffmpeg developer but what I would do is change where the error is produced in avfilter.c to print the given and expected formats
[11:09:22 CET] <JEEB> you can dump the config with an avfilter command after you've configured the chain
[11:10:07 CET] <JEEB> generally I recommend deciding before hand what you want as the output formats, and waiting for the initial AVFrame from the audio decoder to set the input parameters
[11:10:54 CET] <JEEB> I know stuff like stereo <-> 5.1 should work as far as switching goes
[11:11:03 CET] <JEEB> (although this depends on the input filter maybe :P)
[11:11:33 CET] <JEEB> (the actual resampling filter does support reconfig as far as I can tell)
[11:17:37 CET] <AhirPK> JEEB: yes correct I can dump filter chain
[11:18:12 CET] <AhirPK> and i thought showvolume filter works only on fltp format
[11:18:16 CET] <AhirPK> is it true?
[11:19:19 CET] <CyberShadow> Looks like it: https://github.com/FFMpeg/FFmpeg/blob/master/libavfilter/avf_showvolume.c#L121
[11:20:58 CET] <AhirPK> and i have set all output parameters as per requirements and inputs are dynamic
[11:21:58 CET] <AhirPK> CyberShadow: correct, thats why I have added fixed sample format in aformat filter
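The fix being discussed, pinning the sample format with aformat ahead of showvolume so the graph never sees a mid-stream format change, might look like this on the command line (input/output names are placeholders):

```shell
# aformat forces fltp before showvolume, which only accepts that format;
# showvolume's output is a video stream, auto-mapped to the output file.
ffmpeg -i input.mp3 \
  -filter_complex "aformat=sample_fmts=fltp,showvolume" \
  -an output.mp4
```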
[13:52:35 CET] <Momentum> Hi, anyone knows if there's a way to record i3 workspaces?
[13:52:45 CET] <Momentum> or like a specific workspace
[13:53:36 CET] <justinasvd> Morning!
[13:53:51 CET] <JEEB> mornin'
[13:56:50 CET] <justinasvd> I am having a problem with av_read_frame. Basically, it hangs indefinitely when it stops receiving frames from "pipe:". I tried to implement a timeout via the fmtctx interrupt_callback, but it doesn't help.
[13:57:40 CET] <justinasvd> Any other approaches? How to implement forced timeout for av_read_frame?
[13:58:41 CET] <BtbN> av_read_frame just uses whatever avio you gave the underlying format. So you'll probably have to write your own.
[13:58:46 CET] <JEEB> I know file followed the rw_timeout thing but not sure if pipe does that
[13:58:48 CET] <BtbN> The default one is blocking.
[13:58:58 CET] <JEEB> and yes if you need custom AVIO I recommend implementing the callbacks
[13:59:08 CET] <JEEB> I think we might even have an example of that under doc/examples
[14:00:05 CET] <justinasvd> Would be greatly appreciated.
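A sketch of the custom-AVIO approach BtbN and JEEB describe: a read callback that polls the pipe with a deadline instead of blocking forever. The 4096-byte buffer and 5-second timeout are illustrative choices, and error handling is trimmed:

```c
#include <errno.h>
#include <poll.h>
#include <stdint.h>
#include <unistd.h>
#include <libavformat/avformat.h>

/* Read callback: wait up to 5000 ms for data on the pipe fd, then read.
 * Returning an AVERROR here makes av_read_frame() fail instead of hang. */
static int read_with_timeout(void *opaque, uint8_t *buf, int buf_size)
{
    int fd = *(int *)opaque;
    struct pollfd p = { .fd = fd, .events = POLLIN };

    int ret = poll(&p, 1, 5000);
    if (ret == 0)
        return AVERROR(ETIMEDOUT);   /* deadline hit, no data */
    if (ret < 0)
        return AVERROR(errno);

    ret = read(fd, buf, buf_size);
    if (ret == 0)
        return AVERROR_EOF;
    return ret < 0 ? AVERROR(errno) : ret;
}

/* Wire the callback into an AVFormatContext before avformat_open_input(). */
static AVFormatContext *open_pipe_input(int *fd)
{
    unsigned char *buf = av_malloc(4096);
    AVIOContext *avio = avio_alloc_context(buf, 4096, 0 /* read-only */,
                                           fd, read_with_timeout, NULL, NULL);
    AVFormatContext *ctx = avformat_alloc_context();
    ctx->pb = avio;
    if (avformat_open_input(&ctx, NULL, NULL, NULL) < 0)
        return NULL;
    return ctx;
}
```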
[14:06:21 CET] <ossifrage> Finally getting closer to having mp4 streaming from this camera...
[14:06:59 CET] <BtbN> mp4 isn't exactly streamable
[14:07:10 CET] <ossifrage> BtbN, fmp4
[14:07:30 CET] <BtbN> I'd rather have a stream with... really anything else
[14:07:59 CET] <ossifrage> I'm using dash for the rate adaption stuff
[14:08:25 CET] <BtbN> Can't dash do .ts?
[14:08:46 CET] <ossifrage> The overhead of a transport stream kills any benefits
[14:09:28 CET] <ossifrage> I may end up doing rtsp if I can't get the dash stuff to work well enough
[14:10:08 CET] <ossifrage> Especially if I can't get the start latency down below 1s
[14:10:56 CET] <ossifrage> I was seeing about 400-500ms sensor to display latency with IBBP h.26[45]
[14:11:16 CET] <BtbN> I think that's pretty much impossible with those fragmented formats due to the way they work
[14:11:31 CET] <BtbN> Twitch is doing some really hacky proprietary HLS extensions to achieve ~1 second latency
[14:12:08 CET] <ossifrage> I wonder what the breakdown of the latency is for them?
[14:12:23 CET] <ossifrage> I was getting 500ms streaming over ssh
[14:12:27 CET] <BtbN> They basically tell the player to play segments that don't exist yet
[14:12:33 CET] <BtbN> And it just so happens to work out
[14:14:28 CET] <ossifrage> I was hoping to be able to hide the startup latency with a lower resolution stream (with smaller GOPs and no B) but that will require the player to cooperate
[14:14:30 CET] <Momentum> anyone knows how to record a specific i3 workspace?
[14:14:44 CET] <Momentum> or if that is an option
[14:16:08 CET] <BtbN> I don't think startup latency depends on the resolution or bitrate.
[14:16:18 CET] <ossifrage> BtbN, does that twitch 1s include transcoding or are they able to retransmit what the user sends?
[14:16:26 CET] <BtbN> It's the source stream
[14:16:32 CET] <BtbN> The transcodes have a couple seconds more delay
[14:16:49 CET] <ossifrage> BtbN, the lower resolution saves bits so I can afford to have extra streams
[14:17:33 CET] <ossifrage> The h.265 4k stream has long gops, the 1080p h.264 stream has shorter gops, but the 720p (or less) has short gops and no B
[14:17:47 CET] <ossifrage> So you have more entry points into the 720 stream vs the higher res streams
[14:18:04 CET] <BtbN> If you have a fragmented stream, each segment is an entry point anyway?
[14:18:40 CET] <BtbN> So whatever is downloaded, can be played immediately either way. It will just be more or less behind.
[14:19:27 CET] <BtbN> You will most likely never get a close to realtime stream with DASH/HLS. It's just not what they are designed for.
[14:20:07 CET] <ossifrage> I think it is one of those things I need to see how it works first
[14:20:36 CET] <ossifrage> If you want low latency you can watch bloody m-jpeg :-)
[14:21:01 CET] <ossifrage> I was sorta amazed how well chrome was playing back 45mbps motion jpeg
[14:21:28 CET] <ossifrage> I pointed the camera at a metronome and I didn't see any stutter
[14:22:05 CET] <ossifrage> I remember doing that same test some years back and it was jitter central
[14:22:20 CET] <BtbN> Why would it be? mjpeg isn't very hard on the CPU or anything
[14:22:58 CET] <ossifrage> I think the problem at one point was that the jpeg decoder sucked
[14:23:30 CET] <ossifrage> chrome still has an annoying bug that they won't output a frame until they receive the next one which really pisses me off
[14:24:04 CET] <ossifrage> (firefox doesn't have that problem, but it doesn't seem to do nearly as good of a job doing the stream/decode/playback)
[14:25:07 CET] <ossifrage> I still need to figure out how to dial down the TCP window for a go http connection so it drops earlier
[14:26:49 CET] <ossifrage> I'd assume the firefox playback problems are that their nvidia GPU support on linux sucks, whereas chrome is gpu heavy
[14:27:07 CET] <BtbN> Both Firefox and Chrome do not support hardware acceleration on Linux.
[14:27:26 CET] <BtbN> They gave up due to too many driver issues
[14:27:54 CET] <ossifrage> The last I checked chrome makes heavy use of the GPU, I'm not sure if it bothers with the hardware decoders though
[14:29:16 CET] <ossifrage> Chrome has: #enable-webrtc-h264-with-openh264-ffmpeg
[14:29:27 CET] <BtbN> That's an encoder.
[14:30:14 CET] <ossifrage> "When enabled, an H.264 software video encoder/decoder pair is included. If a hardware encoder/decoder is also available it may be used instead of this encoder/decoder.  Mac, Windows, Linux, Chrome OS"
[14:31:04 CET] <BtbN> They both definitely do not support any hwaccels on Linux
[14:31:17 CET] <BtbN> Too many APIs, too many driver issues.
[14:31:26 CET] <ossifrage> Hardware Protected Video Decode: Hardware accelerated
[14:31:34 CET] <BtbN> I highly doubt that
[14:31:46 CET] <ossifrage> That is what chrome://gpu/ says
[14:31:53 CET] <BtbN> Specially as there are patched versions of Chromium to force-enable VAAPI and the like.
[14:32:12 CET] <BtbN> Hardware Protected Video Decode is probably DRM stuff, not hwaccel.
[14:33:14 CET] <BtbN> "Video Decode: Hardware accelerated" is the flag you want
[14:33:20 CET] <BtbN> and it's definitely not a thing on Linux
[14:33:44 CET] <BtbN> And according to this, the option even straight up lies on Linux: https://www.omgubuntu.co.uk/2018/10/hardware-acceleration-chrome-linux
[14:33:51 CET] <BtbN> no idea how reliable of a source that is
[14:36:01 CET] <ossifrage> Ouch... Well the software decoders work surprisingly well on this very old machine
[15:54:30 CET] <flowgunso> is ffmpeg supposed to return exit codes other than 0 ?
[15:55:03 CET] <flowgunso> im trying to convert a file, expecting it to fail. it fails but the exit code stays at 0.
[15:55:17 CET] <JEEB> yes it will
[15:55:25 CET] <JEEB> as in, return nonzero
[15:55:44 CET] <flowgunso> well it does return 0. always.
[15:55:55 CET] <flowgunso> even on failures
[15:56:07 CET] <JEEB> ok, then whatever happened was not considered failure by ffmpeg.c
[15:56:52 CET] <flowgunso> JEEB, give me a mn, i'll pastebin
[16:02:53 CET] <flowgunso> JEEB, pastebin.com/YvPPFDn8
[16:03:05 CET] <JEEB> merci
[16:03:24 CET] <JEEB> also i wish my terminal client would pick that up as a link :p
[16:03:54 CET] <JEEB> will check when i get home
[16:04:16 CET] <flowgunso> merci, c'est pas pressé :)
[17:12:38 CET] <ossifrage> To just multiplex h26[45] elementary streams to fmp4, is it necessary to pass avformat_new_stream() an AVCodec?
[17:13:16 CET] <ossifrage> To do that I needed to call avcodec_find_encoder() which requires linking against libavcodec, which I'd like to avoid
[17:13:42 CET] <JEEB> I'm still waiting for my dev VM to boot up, but I'm pretty sure not. you just need to set the AVCodecID and other parameters for the stream's codecpar
[17:14:28 CET] <ossifrage> that seemed to work, but for small values of work
[17:15:32 CET] <JEEB> it's simpler with an already available thing since there's helpers that set everything from a decoder or encoder context, but it should work 100% the same as long as you set all the required data
[17:15:44 CET] <DHE> avformat doesn't get directly involved with the codec operation anymore. just fill in the AVStream->codecpar fields ahead of avformat_write_header()
[17:16:18 CET] <JEEB> required data being the fields and buffers in the stream's codecpar :P
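What DHE and JEEB describe, a stream created with no AVCodec and its codecpar filled by hand, might be sketched like this. The dimensions, timebase, output name and movflags are illustrative, and the extradata (avcC parameter sets) must come from the actual encoder:

```c
#include <libavformat/avformat.h>
#include <libavutil/dict.h>

/* Minimal fMP4 muxer setup without linking libavcodec. */
static AVFormatContext *open_fmp4_muxer(void)
{
    AVFormatContext *oc = NULL;
    if (avformat_alloc_output_context2(&oc, NULL, "mp4", "out.mp4") < 0)
        return NULL;

    AVStream *st = avformat_new_stream(oc, NULL);  /* no AVCodec needed */
    st->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
    st->codecpar->codec_id   = AV_CODEC_ID_H264;
    st->codecpar->width      = 1920;               /* placeholder values */
    st->codecpar->height     = 1080;
    st->time_base            = (AVRational){1, 90000};
    /* st->codecpar->extradata / extradata_size: avcC (SPS/PPS) buffer */

    AVDictionary *opts = NULL;
    av_dict_set(&opts, "movflags", "frag_keyframe+empty_moov", 0);
    if (avio_open(&oc->pb, "out.mp4", AVIO_FLAG_WRITE) < 0 ||
        avformat_write_header(oc, &opts) < 0) {
        av_dict_free(&opts);
        return NULL;
    }
    av_dict_free(&opts);
    /* then av_interleaved_write_frame() per packet, av_write_trailer() */
    return oc;
}
```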
[17:17:25 CET] <ossifrage> Once I deal with the buildroot issues I'm only going to build libavformat (well at least only install it)
[17:17:48 CET] <JEEB> do note that depending on requirements you might need BSFs and I think those are in avcodec
[17:17:55 CET] <JEEB> f.ex. if your encoder's output is Annex B
[17:18:01 CET] <JEEB> and you need AVCc
[17:18:11 CET] <JEEB> although I don't remember if that way was automated within movenc
[17:18:46 CET] <DHE> annex b is more for mpegts I thought...
[17:18:49 CET] <JEEB> yes
[17:18:57 CET] <JEEB> I just didn't remember which way the bsf was
[17:19:03 CET] <JEEB> and which way there was auto-conversion within movenc
[17:19:10 CET] <JEEB> or if there was any
[17:19:37 CET] <JEEB> ok so the BSF was the other way yes
[17:19:41 CET] <JEEB> from AVCc to annexb
[17:19:49 CET] <ossifrage> It is now linking with just avutil/avformat
[17:20:12 CET] <JEEB> yea, there is ff_hevc_annexb2mp4_buf
[17:20:14 CET] <JEEB> in movenc
[17:21:50 CET] <ossifrage> I'm encoding h265, but I was having performance problems playing 4k back so I switched to h264 1080p until I can get more stuff working
[17:22:27 CET] <ossifrage> The encoder on the hi3519a really isn't that bad, so much better than the last time I worked with a hisilicon part
[17:33:45 CET] <JEEB> ossifrage: also it just bumped into my mind that the movenc test set is actually an API client doing something similar to what you're doing :P
[17:34:05 CET] <JEEB> it uses pre-made pseudo-valid AVC buffers to mux stuff into fragmented isobmff
[17:34:41 CET] <JEEB> libavformat/tests/movenc.c
[17:46:10 CET] <ossifrage> JEEB, thanks, I'll take a look at it as soon as I can figure out why my buildroot ffmpeg is not happy
[18:17:23 CET] <USian_nogoodnick> i'm using ffplay to display an rtsp stream, but it shows ffplay output in original terminal and shows stream in a new terminal window. how to make it display the stream in the original terminal window (replacing command output) instead?
[18:18:27 CET] <Boobuigi> USian_nogoodnick:  libcaca is one way
[18:19:28 CET] <Boobuigi> ... that depends on what you mean by stream, though.  You're talking about a video, right?
[18:19:46 CET] <USian_nogoodnick> yes, rtsp video stream from ip cam
[18:20:41 CET] <Boobuigi> Then libcaca--though not many people are bad-ass enough to actually desire video output in their terminals.
[18:21:45 CET] <USian_nogoodnick> lol. well right now i'm using ffplay in 4 terminal windows as my nvr gui, basically. :)
[18:22:28 CET] <USian_nogoodnick> i'll check libcaca out, thanks
[18:31:27 CET] <USian_nogoodnick> i'm wondering if i'm misunderstanding what is happening when i run the ffplay command. what kind of window is ffplay using to show the video? is that a gtk window when one is using gnome? i thought it was another gnome terminal window.
[18:31:50 CET] <Boobuigi> Oh boy.
[18:32:47 CET] <USian_nogoodnick> ffplay docs just say sdl and ffmpeg.
[18:32:48 CET] <furq> USian_nogoodnick: it's whatever sdl uses
[18:33:01 CET] <furq> i think it's just a straight x11 window
[18:33:08 CET] <USian_nogoodnick> i see thanks
[18:36:09 CET] <faLUCE> Hello. What is the "app" field of a jpeg image? I have issues when decoding it
[18:39:00 CET] <faLUCE> it seems something associated to "motion jpeg", but not to a "jpeg" image
[18:40:59 CET] <USian_nogoodnick> my ultimate goal is to make my own gui for my ip cams, but right now i'm just trying to determine the easiest way to get 4 rtsp streams (ip cam video) in a grid in one desktop gui window. is ffmpeg best for the capturing part in this scenario or gstreamer?
[18:57:46 CET] <mozzarella> I don't like gstreamer
[18:57:50 CET] <faLUCE> USian_nogoodnick: do you have to write your code or do you want to use ffmpeg as a standalone app?
[18:59:36 CET] <flowgunso> JEEB, have you taken a look at pastebin.com/YvPPFDn8 ?
[19:00:58 CET] <JEEB> flowgunso: ok, so I think that hits the logic of "I tried to read this file as much as I could" -> zero exit code
[19:01:03 CET] <JEEB> there's a parameter to try and make that more strict
[19:04:15 CET] <mozzarella> guys
[19:04:29 CET] <mozzarella> how can I speed up a video 240 times using ffmpeg
[19:04:44 CET] <flowgunso> -loglevel trace or something?
[19:05:46 CET] <USian_nogoodnick> mozzarella: out of curiosity: what don't you like about it?
[19:07:05 CET] <USian_nogoodnick> faLUCE: at first, it would be nice to just be able to display 4 cam streams in one window using ffmpeg, if that's possible
[19:07:23 CET] <Hello71> USian_nogoodnick: there is, but it would be easier to use a tiling window manager
[19:08:29 CET] <mozzarella> USian_nogoodnick: there are some of my porn videos I've never been able to watch using gstreamer, I tried to solve the issue on my own and it's a pain in the ass to troubleshoot, never had a problem with ffmpeg
[19:08:52 CET] <mozzarella> tells me I don't have the right codecs even though I do
[19:08:54 CET] <mozzarella> ¯\_(ツ)_/¯
[19:09:24 CET] <mozzarella> seems to be a demuxing problem but no one understands what's going on
[19:09:34 CET] <USian_nogoodnick> mozzarella: i see, thanks
[19:10:39 CET] <USian_nogoodnick> Hello71: right now i'm using 4 ffplay windows and that's not really a big problem; it's just that you have to have 4 terminal tabs open too. i suspect it would be the same with a tiling window manager?
[19:10:50 CET] <Hello71> >terminal tabs
[19:10:52 CET] <Hello71> wat
[19:11:18 CET] <mozzarella> you can detach them from the terminal
[19:11:32 CET] <mozzarella> and you'll get your prompt back so you can launch new ones in the same tab
[19:12:07 CET] <USian_nogoodnick> mozzarella: this is the info i was after in the beginning. how do you do that?
[19:12:29 CET] <mozzarella> with zsh I do
[19:12:32 CET] <mozzarella> ffmpeg &!
[19:13:04 CET] <mozzarella> there's also something called nohup
[19:13:29 CET] <USian_nogoodnick> great, thanks
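A small sketch of the detaching trick mozzarella describes, for shells other than zsh (the RTSP URL is a placeholder):

```shell
# zsh: `ffplay rtsp://cam1/stream &!` backgrounds and disowns in one step.
# In other shells, nohup + & keeps the player alive and frees the prompt.
nohup ffplay rtsp://cam1/stream >/dev/null 2>&1 &
```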
[19:20:09 CET] <flowgunso> JEEB, -xerror works, thank you for your help !
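A minimal illustration of the -xerror behaviour flowgunso confirms (the input name is a placeholder):

```shell
# Without -xerror, best-effort decoding of a damaged file can still exit 0;
# with it, the first decode error aborts the run with a nonzero status.
ffmpeg -xerror -i damaged.ts -f null - || echo "ffmpeg failed with status $?"
```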
[19:22:26 CET] <furq> USian_nogoodnick: you could potentially use xstack to tile the inputs together, but it's maybe not advisable if these are network streams
[19:22:57 CET] <furq> if one stream drops out the whole thing will die, and any kind of packet loss or timestamp discontinuity will cause issues
[19:23:46 CET] <furq> also you'd need to do this with ffmpeg and pipe it into whatever player because ffplay doesn't support multiple inputs
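Illustrative only, with the caveats furq gives about network streams (camera URLs are placeholders): a 2x2 xstack grid piped into ffplay, since ffplay itself takes a single input.

```shell
# Tile four RTSP feeds into one 2x2 grid and hand the result to ffplay
# over a pipe; mpegts is used as the intermediate container.
ffmpeg \
  -i rtsp://cam1/stream -i rtsp://cam2/stream \
  -i rtsp://cam3/stream -i rtsp://cam4/stream \
  -filter_complex \
    "[0:v][1:v][2:v][3:v]xstack=inputs=4:layout=0_0|w0_0|0_h0|w0_h0[grid]" \
  -map "[grid]" -c:v mpeg2video -q:v 5 -f mpegts - | ffplay -
```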
[19:24:17 CET] <USian_nogoodnick> furq: interesting
[19:53:48 CET] <ossifrage> Ugg, everything is cross connected, even though it only linked against -llibavutil -llibavformat, it actually had a huge number of libs
[19:54:34 CET] <ossifrage> ldd on my desktop counts 102 .so for a simple test program (not sure on the arm box, no ldd)
[20:05:49 CET] <ossifrage> I finally tried running it and: "[NULL @ 0x23240] Requested output format 'mp4' is not a suitable output format"
[20:25:42 CET] <TheAMM> Is the size in the status line "size=    6926kB time=00:04:50.51 bitrate= 195.3kbits/s speed=17.8x" kilobytes (1000) or kibibytes (1024)?
[20:25:54 CET] <mozzarella> how can I speed up a video 240 times using ffmpeg
[20:31:12 CET] <DHE> TheAMM: ffmpeg is careful about its use of bits vs bytes. Capital B means bytes. also k = 1000 exactly
[20:31:26 CET] <DHE> mozzarella: like, super-high fps?
[20:31:32 CET] <TheAMM> Thanks, just making sure
[20:34:00 CET] <kepstin> mozzarella: in general, you use the setpts filter to adjust the frame timestamps to change the video speed, then use the fps filter to set your output framerate (dropping frames if needed)
[20:34:44 CET] <kepstin> (some minor quirks in there depending on what timebase you're using and which particular frames you want to keep)
[20:38:18 CET] <mozzarella> DHE: I don't mind having it play at the original framerate
[20:39:43 CET] <mozzarella> they are sleep recordings I'm trying to speed up... so a video of 8 hours should play in 2 minutes
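kepstin's setpts-then-fps recipe applied to this case might look like the following (filenames are placeholders; 240x compresses 8 h to 2 min):

```shell
# Divide every timestamp by 240, then let the fps filter drop frames to a
# normal output rate; -an discards the audio track.
ffmpeg -i sleep.mkv -vf "setpts=PTS/240,fps=30" -an timelapse.mkv
```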
[21:11:01 CET] <Boobuigi> My n4.1.1 installation gets GIF animation timings wrong.  Please look:  https://www.dropbox.com/s/21ormag6m6obzzi/FFmpeg%20GIF%20Timing%20Issues.mkv
[21:11:30 CET] <Boobuigi> Here is the GIF used in the demonstration video:  https://www.dropbox.com/s/glj72e9xq5zgo23/realTime.gif
[21:12:25 CET] <Boobuigi> Can anyone confirm/deny that the same thing happens with their FFmpeg installation?
[21:14:12 CET] <Boobuigi> The frames are only 0.01 seconds long.  This seems to be related.
[21:18:04 CET] <durandal_1707> Boobuigi: what is problem?
[21:19:32 CET] <Boobuigi> durandal_1707:  The timings are off.
[21:19:46 CET] <Boobuigi> Each frame should be 0.01 seconds long, but ffmpeg stretches them out much longer.
[21:22:08 CET] <durandal_1707> Boobuigi: look at gif demuxer options
[21:22:26 CET] <Boobuigi> durandal_1707:  You mean fps, right?
[21:22:28 CET] <durandal_1707> too low duration is extended by default
[21:22:40 CET] <durandal_1707> Boobuigi: nope, min frame duration
[21:22:59 CET] <Boobuigi> Ah!
[21:23:05 CET] <durandal_1707> ffmpeg -h demuxer=gif
[21:23:29 CET] <Boobuigi> The default is 2, hehe.  Thanks for steering me in the right direction.
[21:31:41 CET] <Boobuigi> durandal_1707:  That did the trick!  Thank you.
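The fix durandal_1707 steered towards, assuming the gif demuxer's min_delay option (in hundredths of a second, default 2), would look something like:

```shell
# min_delay 1 lets the GIF's real 0.01 s frame delays through instead of
# clamping them to the 0.02 s default minimum.
ffmpeg -min_delay 1 -i realTime.gif output.mkv
```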
[23:29:06 CET] <cryptopsy> test
[23:30:15 CET] <cryptopsy> how do i get ffmpeg to only output the files being created when breaking up a video into a series of frames (one image every 30s for example) ?
[23:31:52 CET] <kepstin> cryptopsy: it might have a debug log if you turn the log level up high enough, but I wouldn't rely on parsing that
[23:32:10 CET] <cryptopsy> the info is in the regular output but i want to get rid of the other stuff
[23:32:54 CET] <cryptopsy> basically the names of the files being created when doing %01d.png
[23:33:17 CET] <kepstin> you could also consider using an OS file notification api (e.g. inotify on linux) to notice when new files are written, rather than getting it from ffmpeg
[23:33:55 CET] <cryptopsy> how do i do that?
[23:34:11 CET] <cryptopsy> it would have to run in parallel as the ffmpeg command
[23:34:36 CET] <kepstin> well, i don't know what you need this list for
[23:34:48 CET] <cryptopsy> i want to know how far along the ffmpeg command is running
[23:34:55 CET] <cryptopsy> so it has to be real-time
[23:35:21 CET] <kepstin> then you'd need something running in parallel anyways?
[23:35:33 CET] <cryptopsy> i think your approach is wrong
[23:36:05 CET] <kepstin> if you were making a tool to consume the images that ffmpeg was outputting, my idea might make sense.
[23:36:32 CET] <kepstin> but if you just want progress... iirc there was a way to enable machine-readable stats output? wouldn't have the filenames, but it should have a frame counter
[23:37:30 CET] <kepstin> the separate file output isn't a common use case, so there's no real special handling for that.
[23:37:59 CET] <cryptopsy> i think i can pipe with &| to while IFS= read -r l; do
[23:44:33 CET] <cryptopsy> how can i output just %{frame_num} ?
[23:46:30 CET] <kepstin> cryptopsy: no way as far as I know. no matter what, you're probably gonna have to parse some of the ffmpeg output.
[23:46:50 CET] <cryptopsy> thats fine but i would like to get a line for each frame
[23:47:43 CET] <cryptopsy> maybe with progress pipe:1
[23:47:48 CET] <cryptopsy> -progress pipe:1
[23:48:09 CET] <kepstin> the progress output is rate limited, updated about once a second
[23:48:15 CET] <kepstin> which is fine for general progress output
[23:48:41 CET] <kepstin> why do you need to know about each frame?
[23:48:46 CET] <cryptopsy> just the number
[23:48:56 CET] <cryptopsy> as an indicator of how far along the command is going
[23:49:17 CET] <kepstin> then the -progress output should be fine, I think the current frame number is something it includes
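A parser for the -progress output kepstin recommends: it is newline-separated key=value pairs, so a small shell function can pull out just the frame counter. How you feed it (shown in the comment) is an example invocation:

```shell
# Usage sketch:
#   ffmpeg -i in.mkv -vf fps=1/30 -progress pipe:1 -nostats out_%03d.png \
#     2>/dev/null | parse_progress
parse_progress() {
  # Each progress line is key=value; print only the frame counter.
  while IFS='=' read -r key value; do
    [ "$key" = "frame" ] && printf 'frame: %s\n' "$value"
  done
}
```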
[23:51:05 CET] <cryptopsy> what if i want better resolution than 1 second?
[23:54:53 CET] <kepstin> use libavcodec and friends directly as a library.
[23:55:48 CET] <kepstin> you could also try enabling the human-readable stats output, which updates more often, but note that its format is not stable between ffmpeg releases, so it's hard to reliably parse.
[23:57:33 CET] <cryptopsy> when i pipe to grep it outputs line by line but when i pipe to sed it waits for the command to finish before outputting all lines
[23:59:59 CET] <kepstin> the stats output has lines separated with \r (carriage return) not \n (line feed), your application has to handle that correctly
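One way to handle the \r-separated stats line before line-oriented tools see it; the sample stats text here is fabricated for illustration:

```shell
# tr turns the carriage returns into newlines, so grep/sed can stream
# the stats updates one per line instead of seeing one giant "line".
printf 'frame=1 fps=0\rframe=2 fps=25\r' | tr '\r' '\n' | grep -c '^frame='
# → 2
```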
[00:00:00 CET] --- Wed Mar 13 2019

