[Ffmpeg-devel-irc] ffmpeg.log.20190619

burek burek021 at gmail.com
Thu Jun 20 03:05:02 EEST 2019


[00:01:07 CEST] <furq> FlipFlops2001: it'll reduce the decoding speed
[00:01:20 CEST] <furq> which is going to be a tiny fragment of the encoding speed but you might as well
[00:01:29 CEST] <furq> also i'm not sure why you wouldn't just use libx265 from ffmpeg directly
[00:31:07 CEST] <FlipFlops2001> @furq: I agree with the speed, but doesn't it allow x265 more processor time for encoding? My binary ver of x265 is 3.1_RC1+3 compiled with MSVC using Profile-Guided Optimization (PGO). Don't you think this would be faster, while having all the advantages and options of the new release candidate?
[00:31:48 CEST] <FlipFlops2001> As opposed to using libx265?
[02:57:23 CEST] <prowell> I have a video where the time_base in ((AVFormatContext*)formatContext)->streams[0]->time_base and (AVCodecContext*)context->time_base when decoding that stream don't match.  Do they mean different things or is there some specific problem with the video that would cause this?   The stream has a 90kHz rate while the codec context shows a 60Hz rate.
[03:06:16 CEST] <nine_milli> say something balrog
[03:06:25 CEST] <nine_milli> everywhere i go youre there
[03:06:35 CEST] <balrog> Why
[03:08:23 CEST] <nine_milli> you a street fighter fan?
[05:06:28 CEST] <YellowOnion> how do I preserve the audio format during filtering? for some reason it's converting pcm_f32le to pcm_s16le
[05:06:54 CEST] <furq> pastebin the full command and output
[05:09:40 CEST] <YellowOnion> oh the plot thickens: https://gist.github.com/YellowOnion/7b486a30e5bbb1fe348cac795b14e7f5
[05:15:08 CEST] <furq> oh right
[05:15:12 CEST] <furq> that has nothing to do with filtering
[05:15:18 CEST] <furq> the default codec for wav is pcm_s16le
[05:15:24 CEST] <furq> so -c:v pcm_s32le
[05:15:28 CEST] <furq> or f32 rather
[05:15:35 CEST] <furq> and -c:a rather
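Putting furq's self-corrections together, the fix is to override the wav muxer's default audio codec with `-c:a pcm_f32le`. A minimal sketch, only printed rather than executed because the input name and filter chain here are placeholders, not YellowOnion's actual command:

```shell
# The wav muxer defaults to pcm_s16le on output, so 32-bit float has to be
# requested explicitly with -c:a (not -c:v, which selects the video codec).
cmd='ffmpeg -i input.wav -af aresample=48000 -c:a pcm_f32le output.wav'
echo "$cmd"
```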
[07:22:56 CEST] <grosso> change settings on-the-fly (while streaming rtmp): is it supported?
[07:23:52 CEST] <grosso> I mean, while receiving rtmp stream, using the libraries
[07:25:46 CEST] <grosso> suppose I'm receiving flv with h264/aac.. then the sender end changes video resolution (width and height)
[07:26:22 CEST] <grosso> sometimes it works, sometimes it doesn't... when it works, I don't understand how
[07:27:41 CEST] <grosso> same thing with ffplay, in fact
[08:16:45 CEST] <mfwitten> How can I cut one stream down to the length of another? 'trim=end_frame=430' cuts a video shorter, but then how do I get ffmpeg to calculate where exactly to cut the associated audio stream?
[08:18:41 CEST] <olavx200> How can i have the input and output files be the same and make ffmpeg overwrite the output file.
[08:19:01 CEST] <mfwitten> olavx200: -y
[08:19:42 CEST] <olavx200> ffmpeg -ss 5 -y -i "$FILE" "$FILE"
[08:20:03 CEST] <olavx200> error: FFmpeg cannot edit existing files in-place.
[08:20:10 CEST] <mfwitten> oh
[08:20:51 CEST] <olavx200> i know i could copy "$FILE" to "$FILE"tmp and then do ffmpeg blablabla -i "$FILE" "$FILE"tmp but is there a more clean way to do this
[08:22:01 CEST] <mfwitten> olavx200: What exactly is your purpose?
[08:22:29 CEST] <mfwitten> olavx200: You just want to trim the file with as little calculation as possible?
[08:23:06 CEST] <olavx200> yeah
[08:23:25 CEST] <olavx200> i want to trim the intro music from a tutorial series i downloaded
[08:25:04 CEST] <mfwitten> olavx200: Alas, you'd probably be fighting your file system, too, so I recommend dealing with a copy rather than in-place
[08:25:49 CEST] <olavx200> alright.
[08:26:29 CEST] <olavx200> the videos are small so i thought maybe it would be possible to load them into ram or something like that.
[08:27:19 CEST] <mfwitten> olavx200: Well, if your temporary directory is RAM-based, then you could just copy the file to it
[08:27:27 CEST] <olavx200> true
[08:27:32 CEST] <Lyphe0> mfwitten: use the -shortest flag
[08:28:01 CEST] <olavx200> anyway thanks for advice
[08:28:27 CEST] <mfwitten> Lyphe0: Thanks for the idea, but that's an encoding/muxing option. I'm trying to build a complex filter graph where the streams are cut to size in the middle of filtering
[08:28:39 CEST] <mfwitten> Lyphe0: So, encoding/muxing hasn't happened yet.
[08:29:24 CEST] <mfwitten> Lyphe0: In other words, the `-shortest' option works if I allow myself to use an intermediate file to store the streams, but I don't want (or maybe can't) do that.
[08:31:21 CEST] <mfwitten> Lyphe0: Maybe I could pipe the streams to another instance of ffmpeg, though....
[09:23:35 CEST] <mfwitten> Lyphe0: I did something like the following and it worked: ffmpeg ... -shortest -c:a pcm_s32le -c:v rawvideo -f nut pipe:1 | ffmpeg -i pipe:0 ...
[09:24:58 CEST] <mfwitten> Lyphe0: In my opinion, that sort of functionality should be available in the filter graph; maybe `concat' should be able to specify that each segment duration is determined by the shortest stream, for example.
[09:44:31 CEST] <trashPanda_> Hello, I have a question regarding directing udp ffmpeg output to a particular NIC on windows 10.  I used to use the format udp://[ip]:[port]/[NIC] but recently that has stopped working.  Has anyone else noticed this?
[09:48:00 CEST] <trashPanda_> For some reason it appears to just be using my windows routing table when I use this format, instead of going to the NIC I specify
[09:55:57 CEST] <JEEB> not sure if that was ever properly supported?
[09:56:12 CEST] <JEEB> and the URL options are really brittle
[09:56:18 CEST] <JEEB> I would recommend utilizing the AVOptions
[09:56:30 CEST] <JEEB> which through ffmpeg.c are just options
[09:57:18 CEST] <trashPanda_> Could you explain that to me a little?  Why is it brittle/not fully supported?  And which AVOption are you talking about, a dictionary key/value?
[10:06:34 CEST] <DHE> in source code you use the AVDictionary. on the cli tool, unknown "-key value" parameter pairs will be put into dictionaries to let whatever accepts them gobble them up
[10:09:33 CEST] <trashPanda_> DHE, Thank you for that, I want to be able to do it in both places. In the CLI there are udp protocol options that I can set with ?localaddr=[NIC] and that works, but I need to be able to append pkt_size as well.  The documentation says to use & to append the two, like ?localaddr=[NIC]&pkt_size=[size] but that isn't accepting the second param
[10:12:01 CEST] <DHE> well there is the problem of a unix shell interpreting the & character
[10:12:14 CEST] <trashPanda_> Im in windows cmd prompt
[10:12:17 CEST] <DHE> but yes you can use -pkt_size 1316 -localaddr 192.168.0.3 udp://....
[10:12:39 CEST] <trashPanda_> interesting, ok I'll try that
[10:17:33 CEST] <trashPanda_> that worked well, thank you
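For reference, here are the two equivalent forms side by side: the query-string form from the documentation and the flag form DHE suggested. The pkt_size and localaddr values are the ones quoted in the discussion; the input file and the multicast address/port are placeholders, so the commands are only printed, not run:

```shell
# Form 1: protocol options as URL query parameters (needs quoting so the
# shell does not interpret the '&').
url_form='ffmpeg -i input.ts -c copy -f mpegts "udp://235.1.1.1:9000?pkt_size=1316&localaddr=192.168.0.3"'
# Form 2: the same protocol options as plain CLI flags, which JEEB/DHE
# describe as less brittle than the URL syntax.
flag_form='ffmpeg -i input.ts -c copy -f mpegts -pkt_size 1316 -localaddr 192.168.0.3 udp://235.1.1.1:9000'
printf '%s\n%s\n' "$url_form" "$flag_form"
```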
[10:17:52 CEST] <trashPanda_> DHE, do you know where you pass the AVDictionary in for the source code?
[10:22:35 CEST] <DHE> most of the *_open() or other initialization functions take it
[15:12:27 CEST] <trashPanda_> Does anyone know where the output option "pkt_size" should be set on an output stream, I'm using the api.  Ive tried setting the option on the output avctx with av_opt_set and in a dictionary thats passed to avformat_write_header but neither is working
[16:25:54 CEST] <pk08> hi
[16:26:32 CEST] <pk08> is there any way to use ffmpeg's filter_complex in ffplay
[16:26:57 CEST] <pk08> i want to get volumebars over video so i am doing this by this command: https://pastebin.com/wdXYDNdH
[16:27:47 CEST] <pk08> so if there is any way to play directly using ffplay without transcoding by ffmpeg, then it will be very helpful
[16:27:49 CEST] <pk08> thank you
[16:43:05 CEST] <DHE> pk08: There's a little trick to video and audio filtering. they're still complex filters, with a default input label of [in] and a default output label of [out]. so you can just do -vf "[in]shenanigans;...;[d1][d2]overlay[out]"
[16:44:32 CEST] <pk08> DHE: ffplay -f lavfi -i "udp://@235.1.1.118:374" -vf "[a]showvolume=w=w:h=w/12:o=1:f=0.50:r=25[vol0];[v][vol0]overlay=shortest=1:eval=0[out]"
[16:44:35 CEST] <DHE> eg: play video at low resolution with an ugly green tint: ffplay VID_20161011_091524.mp4  -vf "[in]scale=320:240[scaled];color=c=green@0.5[red];[scaled][red]overlay[out]"
[16:44:47 CEST] <pk08> using this command, i am getting  No such filter: 'udp://'
[16:45:41 CEST] <DHE> ah.... right, you need both audio and video. with -f lavfi it means that rather than a file you instead specify a full filter_complex string
[16:46:17 CEST] <DHE> which can work with the filter "movie=..." but that gets messy quick, and I don't know if this would work with audio output as well
[16:46:32 CEST] <DHE> TO THE MANUAL!
[16:46:44 CEST] <pk08> i tried movie too!
[16:46:49 CEST] <pk08> didnt get any luck
[16:47:03 CEST] <DHE> well your colon will cause problems
[16:47:21 CEST] <pk08> and i am not able to find more examples using ffplay
[16:48:33 CEST] <pk08> DHE: its udp multicast so there will be colon :(
[16:48:59 CEST] <DHE> just thinking into my keyboard: ffplay -f lavfi "movie=udp\\://236.1.1.118\\:374:s=dv+da[v][a];[a]showvolume=w=w:h=w/12:o=1:f=0.50:r=25[vol0];[v][vol0]overlay=shortest=1:eval=0[out]"
[16:49:44 CEST] <DHE> maybe leave out the [out] at the end...
[16:50:29 CEST] <cehoyos> This looks strange:":s=dv+da"
[16:50:54 CEST] <DHE> http://ffmpeg.org/ffmpeg-filters.html#movie
[16:51:06 CEST] <DHE> otherwise the movie filter only outputs the video
[16:51:25 CEST] <cehoyos> Thank you, just found it!
[16:51:43 CEST] <cehoyos> lgtm
[17:01:35 CEST] <pk08> ffplay -f lavfi "movie='udp\\://236.1.1.118\\:374':s=dv+da[v][a];[a]showvolume=w=w:h=w/12:o=1:f=0.50:r=25[vol0];[v][vol0]overlay=shortest=1:eval=0"
[17:02:07 CEST] <pk08> with this command, i am not getting any error but not getting output too
[17:03:22 CEST] <DHE> well I seem to have typo'd your multicast address
[17:03:26 CEST] <DHE> that might be related
[17:03:51 CEST] <pk08> no, i tried with different address
[17:04:00 CEST] <pk08> address isnt the problem
[17:05:08 CEST] <DHE> you're also both quoting and escaping the backslash....
[17:06:36 CEST] <pk08> if i dont quote
[17:06:37 CEST] <pk08> Parsed_movie_0 @ 0x7f5b68001000] Failed to avformat_open_input 'udp'
[17:06:38 CEST] <pk08> [lavfi @ 0x7f5b68009260] Error initializing filter 'movie' with args 'udp://239.1.1.1//:1234:s=dv+da'
[17:06:38 CEST] <pk08> movie=udp\://239.1.1.1//:1234:s=dv+da[v][a];[v]scale=480x270[vout];[a]showvolume=w=480:h=20:o=1:f=0.50:r=25[vol0];[vout][vol0]overlay=shortest=1:eval=0: No such file or directory
[17:06:47 CEST] <pk08> i am getting this error
[17:21:46 CEST] <seanrdev> Hello. I'm just wondering if there is a way to limit the amount of RAM used with pulling rtsp streams using ffmpeg.
[17:24:57 CEST] <ksk> I am not an experienced ffmpeg user, but I do not know about something like this
[17:25:13 CEST] <ksk> seanrdev: might be cgroups etc can help you? put the ffmpeg into a memory-limited cgroup for example
[17:26:00 CEST] <DHE> but if ffmpeg runs out of memory I would just assume it stops running
[17:29:05 CEST] <ksk> totally possible, yes.
[17:34:27 CEST] <trashPanda_> DHE, did you happen to see my question earlier about setting the options in C?  I've tried throwing the pkt_size option into a few places and none of them have worked.
[17:35:04 CEST] <DHE> i did
[17:36:39 CEST] <trashPanda_> any chance you know where the place to send it in would be? I'm honestly not sure where else would make sense to try and send it in other than the write_header or avctx
[17:36:55 CEST] <DHE> I don't
[17:37:06 CEST] <trashPanda_> Alright, thanks for your help
[18:43:14 CEST] <aykroyd> is there a way to make ffplay listen to keyboard commands with the -nodisp switch enabled? i'm using it to listen to live HLS streams and want to quit without CTRL+C
[18:43:52 CEST] <aykroyd> sample call: `ffplay -volume 9 -nodisp https://17963.live.streamtheworld.com/KBZTHD2AAC/HLS/playlist.m3u8`
[18:45:30 CEST] <trashPanda_> DHE, just an fyi I figured it out.  You can append it like the documentation says with the api, like udp://235.255.0.1:9000?pkt_size=1316&localaddr=192.168.1.12 and pass that whole string into avformat_alloc_output_context2
[18:46:06 CEST] <trashPanda_> so if you ever wanted to know or someone else asks, thanks for the help earlier : )
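trashPanda_'s working API form can be sketched as a tiny URL builder; the helper name and the concrete address, port, and interface below are just the illustrative values from the log, and the resulting string is what gets passed to avformat_alloc_output_context2():

```shell
# Build a udp output URL with both protocol options appended as query
# parameters, in the form that worked for trashPanda_.
make_udp_url() {
    # $1 = destination address, $2 = port, $3 = pkt_size, $4 = local NIC address
    printf 'udp://%s:%s?pkt_size=%s&localaddr=%s' "$1" "$2" "$3" "$4"
}
make_udp_url 235.255.0.1 9000 1316 192.168.1.12
# prints: udp://235.255.0.1:9000?pkt_size=1316&localaddr=192.168.1.12
```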
[18:55:34 CEST] <welcius> Hello, I have been googling for a while how to live transcode an http stream and output it to another http stream but no luck so far. Any idea of how can I do this?
[19:06:45 CEST] <friendofafriend> welcius: ffmpeg -i <input> -c:a copy -c:v copy -f type <output> ?
[19:07:33 CEST] <welcius> friendofafriend: output can be for example http://127.0.0.1:1234 ?
[19:08:00 CEST] <friendofafriend> Sure, if you've got a server accepting connections from ffmpeg as a source.
[19:08:20 CEST] <welcius> how can I do that? sorry its out of scope
[19:08:44 CEST] <friendofafriend> You need a server that will do it.
[19:10:32 CEST] <saml_> what codec and container are you using?
[19:11:32 CEST] <saml_> if you're just copying, why do you run ffmpeg at all? just use input http:// ?
[19:13:13 CEST] <friendofafriend> ffserver used to be the way to go for that, might try that mkvserver_mk2 thing.  https://github.com/klaxa/mkvserver_mk2
[19:15:30 CEST] <saml_> This project is the result of years of thinking, trying and finally succeeding.
[19:15:58 CEST] <friendofafriend> He didn't do it over coffee one morning.
[19:45:55 CEST] <ChocolateArmpits> Anyone experienced keyframe timestamps given by ffprobe not really helping with keyframe accurate trimming for mp4 files? It somehow works with input ts files though.
[20:25:01 CEST] <lavaflow> dumb question, but for a given codec, is `(vertical-resolution * horizontal-resolution) / average-bitrate` a reasonable metric by which to compare two different encodings of the same content?
[20:25:50 CEST] <furq> if you divide by fps as well then that already is a metric (bits per pixel)
[20:26:50 CEST] <furq> it's a good metric if you think the encoder used similar settings for both encodes
[20:26:51 CEST] <lavaflow> ah right, fps too.   does ffmpeg already calculate bits per pixel?  or do I just calculate it myself?
[20:26:58 CEST] <furq> mediainfo will show it
[20:27:03 CEST] <lavaflow> nice, thanks
[20:29:25 CEST] <furq> bpp is bitrate / (w*h*fps)
[20:29:31 CEST] <furq> so not quite the same but the same sort of idea
[20:29:42 CEST] <lavaflow> ah yeah, same principle
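furq's formula is easy to compute by hand; a small sketch of the arithmetic (the example numbers are hypothetical, not from the discussion):

```shell
# bits per pixel = bitrate / (width * height * fps), per furq above
bpp() {
    awk -v br="$1" -v w="$2" -v h="$3" -v fps="$4" \
        'BEGIN { printf "%.4f", br / (w * h * fps) }'
}
bpp 5000000 1920 1080 30   # a hypothetical 5 Mbit/s 1080p30 encode
# prints: 0.0804
```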
[21:33:00 CEST] <mfwitten> I have 2 audio streams, each sampled at 48000 Hz; when I use either one by itself (and therefore throw out the other one), or even when I pass one through `amix=inputs=1' (which basically just passes it through), then everything is OK with using that one audio stream. However, if I attempt to mix the two streams into one stream (using `amix'), ffmpeg complains about bad DTS values: "Non-monotonous DTS in
[21:33:06 CEST] <mfwitten> output stream 0:1; previous: 371000, current: -442721857768786551; changing to 371001..." Any ideas? Thanks!
[21:34:15 CEST] <mfwitten> Each stream has 0 as its initial PTS (as set by `asetpts=PTS-STARTPTS', which is used after `atrim' for each stream).
[21:36:43 CEST] <mfwitten> I get a bunch of those DTS errors.
[21:45:48 CEST] <mfwitten> The "output stream 0:1" does indeed name the audio stream of the output file.
[21:59:17 CEST] <mfwitten> Interestingly, the problem goes away when I save just the mixed audio rather than using it as part of `concat'. Maybe ffmpeg doesn't handle a concat segment properly when the video stream is shorter than the audio.
[22:18:08 CEST] <aykroyd> any ffplay experts? i'm wondering if i'm missing a config/setting or if i've got a bug tied with the -nodisp flag
[22:18:53 CEST] <saml_> what's better vp9 or av1?
[22:22:53 CEST] <kepstin> saml_: "it depends"
[22:23:11 CEST] <mfwitten> saml_: av1, but the encoder is currently too slow to be practical. VP9 is great, but there are a lot of devices that don't play it. Then again, Windows doesn't play H.264 out-of-the-box either. In truth, nothing works; A/V is a giant hack.
[22:24:47 CEST] <mfwitten> kepstin: Still a major problem: https://trac.ffmpeg.org/ticket/7716
[22:30:16 CEST] <saml_> every device plays av1 natively right?
[22:32:49 CEST] <mfwitten> saml_: No
[22:57:08 CEST] <mfwitten> Well, now ffmpeg is giving me a segfault just because I swapped the order of concat segments. You know... this tool has promise, and I've done some interesting things with it, but it's just too buggy. If anyone cares to hear my advice, it is this: re-write ffmpeg with clean-room principles, from the ground up, with correctness and consistency as a major goal.
[22:57:16 CEST] <mfwitten> So long, and thanks for all the fish!
[23:00:53 CEST] <furq> good advice
[23:01:15 CEST] <c_14> did anyone ask him what version he was running?
[23:01:20 CEST] <c_14> if it's segfaulting it's probably not new enough
[23:01:41 CEST] <durandal_1707> no one cares for fake users
[23:01:57 CEST] <furq> i love telling people working on a project that was started in 2000 that "this tool has promise"
[23:02:25 CEST] <furq> maybe some day some major corporations will have their entire backend based on this promising upstart tool
[23:03:35 CEST] <durandal_1707> furq: what is your point?
[23:08:28 CEST] <durandal_1707> trolls are not welcome here
[00:00:00 CEST] --- Thu Jun 20 2019

