[Ffmpeg-devel-irc] ffmpeg.log.20190111
burek
burek021 at gmail.com
Sat Jan 12 03:05:01 EET 2019
[08:03:54 CET] <kingsley> Is it reasonable for the .webm format to need 469 MiB for 15 minutes of 1920x1080 resolution video?
[08:05:38 CET] <JEEB> there is no specific magical thing that can tell you anything about that
[08:06:22 CET] <JEEB> that's generally why I tell people to use some sort of variable bit rate rate control (like f.ex. crf in x264), and run that on like 2-5 minutes of a clip that contains the kind of content you'll be encoding
[08:07:11 CET] <JEEB> and then you can just iterate over rate control values etc
[08:07:14 CET] <JEEB> :P
[08:07:35 CET] <JEEB> to find the parameters that give you maximum compression for the quality you want
[08:07:44 CET] <JEEB> and then you can start using those or similar parameters on your actual clips
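A minimal sketch of that iteration loop, assuming libx264 and a hypothetical input sample.mkv (for a webm target you would swap in libvpx-vp9 with -b:v 0 and its own -crf scale):
ffmpeg -ss 300 -t 120 -i sample.mkv -c:v libx264 -crf 23 -an crf23.mp4
ffmpeg -ss 300 -t 120 -i sample.mkv -c:v libx264 -crf 20 -an crf20.mp4
Compare the file sizes and visual quality of the test clips, then keep the highest CRF value that still looks acceptable.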
[14:25:42 CET] <Jaex> Could I get a hostmask sharex/staff/bfnt.xenthys for Xenthys?
[14:26:52 CET] <Jaex> oh my lol
[14:26:55 CET] <Jaex> i thought i was in #freenode
[14:27:08 CET] <Jaex> :facepalm:
[14:29:33 CET] Action: Xenthys pats Jaex - it's ok buddy
[15:18:13 CET] <atbd> hi, i'm trying to make a transcoding program with the ffmpeg API. I've managed to transcode both video and audio, but not to keep them in sync with each other. Can somebody help me or provide example(s) of synchronization?
[15:22:32 CET] <Mavrik> Well, you need to keep proper timestamps and write things next to each other on the output.
[15:22:37 CET] <Mavrik> The default transcoding example should do that properly.
[15:42:28 CET] <atbd> i keep the timestamps generated by the filtergraph on my frames (av_buffersink_get_frame) and then convert them to the stream timebase for the packets. Audio and video then each seem right, but they're not synchronized
[15:43:51 CET] <Mavrik> If they're not synchronized then the timestamps aren't correct
[15:43:56 CET] <Mavrik> Or you're not interleaving packets correctly.
[15:44:14 CET] <Mavrik> (e.g. your audio packets are so late that player can't compensate with buffer)
[15:54:35 CET] <atbd> I think it comes from the timestamps from the video filtergraph: they begin at 0, whereas the audio timestamps begin at a far greater value
[15:57:25 CET] <Mavrik> Did you set timebases and whatnot correctly on filtergraph as well?
[16:12:31 CET] <atbd> yes all filtergraph inputs/outputs have correct timebase
[16:14:36 CET] <kepstin> the filtergraph stuff doesn't change the timestamps to start at 0 on its own, it'll only do that if there's a filter in the filter graph that's modifying timestamps.
[16:15:12 CET] <kepstin> and if you are modifying timestamps in a video filter, you have to also modify the audio to keep stuff in sync :/
[16:24:59 CET] <atbd> my only video filter is a rescale, i'm looking for a pts reset elsewhere which causes my problem
[17:08:48 CET] <atbd> thanks for your help, i found my mistake with the video timestamps :)
[18:27:38 CET] <tommy``> guys, is this good for burning in .ass subtitles? ffmpeg -i file.mp4 -vf "ass=subtitles.ass" file_encoded.mp4
[18:51:15 CET] <scriptease> test
[18:51:17 CET] <scriptease> ahh
[18:51:19 CET] <scriptease> ^^
[18:53:40 CET] <scriptease> ffmpeg -hwaccel cuvid -i "input 1080p x264.mkv" -c:v hevc_nvenc -preset slow -rc vbr_hq -b:v 6M -maxrate:v 10M "output 1080p x265.mkv" ....returns errors:
[18:53:43 CET] <scriptease> [h264 @ 000002aa9bd92c80] decoder->cvdl->cuvidCreateDecoder(&decoder->decoder, params) failed -> CUDA_ERROR_INVALID_VALUE: invalid argument
[18:53:53 CET] <scriptease> [h264 @ 000002aa9bd92c80] Failed setup for format cuda: hwaccel initialisation returned error.
[18:55:00 CET] <scriptease> using the latest ffmpeg for windows (10/x64) on a geforce 960
[18:55:12 CET] <scriptease> what is wrong here?
[18:55:14 CET] <scriptease> ^^
[19:16:26 CET] <kepstin> does 960 even have a hevc encoder?
[19:16:30 CET] Action: kepstin looks that up
[19:16:42 CET] <scriptease> should work yes
[19:18:11 CET] <kepstin> hmm, yeah, that's second gen Maxwell, should work with 4:2:0
[19:18:33 CET] <scriptease> so this string here is correct?
[19:18:34 CET] <scriptease> ffmpeg -hwaccel cuvid -i "input 1080p x264.mkv" -c:v hevc_nvenc -preset slow -rc vbr_hq -b:v 6M -maxrate:v 10M "output 1080p x265.mkv"
[19:18:38 CET] <kepstin> interesting, that's an error setting up the decoder, not encoder?
[19:18:51 CET] <kepstin> scriptease: please share (via pastebin or similar) the complete ffmpeg output
[19:18:59 CET] <scriptease> k
[19:19:00 CET] <scriptease> 1 mom
[19:22:40 CET] <scriptease> https://pastebin.com/iXRdCmAk
[19:24:48 CET] <kepstin> wait, so the command is actually working?
[19:25:10 CET] <scriptease> it works but it seems to be a "fallback" to software encoder
[19:25:21 CET] <kepstin> no, there's no fallback in the encoder
[19:25:26 CET] <kepstin> it's falling back to software decoder
[19:25:36 CET] <scriptease> yep
[19:26:05 CET] <kepstin> no idea why the decoder isn't initializing, it should be fine with a 4:2:0 high stream like that :/
[19:26:20 CET] <scriptease> look at line 112/113
[19:27:03 CET] <kepstin> yes, all that says is "the nvidia driver didn't let us set up the decoder"
[19:27:15 CET] <scriptease> but why decoder?
[19:27:22 CET] <scriptease> i want to ENcode?!
[19:27:33 CET] <kepstin> the encoder works fine
[19:27:58 CET] <scriptease> hmm
[19:28:00 CET] <scriptease> ok
[19:28:03 CET] <kepstin> the set of options you've provided attempts to do hardware decoding then hardware encoding
[19:28:06 CET] <scriptease> how do you see that?
[19:28:22 CET] <kepstin> the hardware decoder setup is failing, and falls back to software. the hardware encoder works.
[19:28:36 CET] <scriptease> but that isn't a problem at all?!
[19:29:09 CET] <kepstin> well, i'd expect the hardware decoder to work fine on that video, so it's a bit confusing.
[19:29:22 CET] <scriptease> i was also a bit confused because i couldn't find a string for the "cuvid" hwaccel in the ffmpeg documentation
[19:29:23 CET] <scriptease> https://ffmpeg.org/ffmpeg.html
[19:31:10 CET] <scriptease> http://prntscr.com/m5yo2z
[19:31:25 CET] <scriptease> there is no information about nvenc (cuda)
[19:31:52 CET] <kepstin> probably the docs didn't get updated :/ Does it show up when you run `ffmpeg -hwaccels` ?
[19:32:44 CET] <scriptease> yes it does
[19:32:45 CET] <scriptease> http://prntscr.com/m5yoqo
[19:32:48 CET] <scriptease> :)
[19:33:04 CET] <kepstin> https://trac.ffmpeg.org/wiki/HWAccelIntro#NVDECCUVID is the docs for hw decoding/encoding
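For reference, one way to make the whole hardware path explicit is to name the cuvid decoder as well as the nvenc encoder; a sketch using scriptease's filenames (whether it sidesteps the CUDA_ERROR_INVALID_VALUE above is a separate question):
ffmpeg -hwaccel cuvid -c:v h264_cuvid -i "input 1080p x264.mkv" -c:v hevc_nvenc -preset slow -rc vbr_hq -b:v 6M -maxrate:v 10M "output 1080p x265.mkv"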
[19:33:22 CET] <scriptease> ah ok
[19:33:22 CET] <scriptease> thx
[20:05:39 CET] <tommy``> guys do you know if ffmpeg could have problems with long font names during encoding?
[20:05:53 CET] <tommy``> i have an .ass file with this style: Style: msg_finale,5aeab475c0663b98150ecfa3bffece2,75,&H00FFFFFF,&H000000FF,&H00000000,&H00000000,-1,0,0,0,100,100,0,0,1,1,1,2,10,10,10,1
[20:07:19 CET] <tommy``> and that's the only font i can't see in the video
[20:39:32 CET] <ldm> Hi, I've got a question, can ffmpeg convert from 60fps to 30fps and blend frames? the idea is to add motion blur to make a film shot at 60fps feel more "cinematic"
[20:49:11 CET] <kepstin> it can do a simple blend between frames, but the result doesn't really look much like motion blur :/
[20:51:21 CET] <kepstin> "-vf tblend=average,framestep=2" will do it if you want to try.
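A fuller sketch of that, assuming a 60 fps source named in60.mp4: tblend with all_mode=average blends each pair of frames, and framestep=2 then keeps every second frame, halving the rate to 30 fps.
ffmpeg -i in60.mp4 -vf "tblend=all_mode=average,framestep=2" -c:a copy out30.mp4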
[21:28:52 CET] <ABLomas> hello, need some advice - i'm putting 4 videos from dshow devices into one screen, like this:
[21:28:53 CET] <ABLomas> https://pastebin.com/Y9nd0wky
[21:30:01 CET] <ABLomas> looks OK, but they desync badly, by ~4 seconds each, probably because ffmpeg opens and buffers the first source, then the second, then the third, then the last, and with buffering this "sequential opening" adds delay between them
[21:31:13 CET] <ABLomas> how can i force them to be in sync (for example, no video should reach filter_complex before the others)? I can't remove rtbufsize because then it does not work properly at all
[21:44:24 CET] <kepstin> the ffmpeg command line tool isn't designed for this use case. I've heard that some people have hacked this sort of thing into working by having each input handled by a separate ffmpeg command, piping the output to an additional ffmpeg that does the combining.
[21:44:48 CET] <kepstin> probably better would be to write an app that uses ffmpeg libraries and does input in threads
[22:00:03 CET] <ABLomas> :/
[22:01:01 CET] <ABLomas> i also found similar suggestions on the web, but no real success stories. I just want to put 4 low-res videos side by side (2x2 arrangement); this sounds like a trivial task for a PC, but that buffering...
[22:17:01 CET] <kepstin> it's not that hard of a problem, tbh, it's just not a use case the ffmpeg cli tool was designed for.
[22:17:33 CET] <kepstin> that tool is a batch file processing tool that's been extended over the years to mostly work with some types of realtime processing.
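A rough, untested sketch of the shape of that multi-process hack, shown with two v4l2 devices on Linux and FIFOs purely to illustrate (a dshow/Windows setup would need named pipes, and the inputs are assumed to have matching sizes; the 2x2 case extends the same way with xstack):
mkfifo cam0 cam1
ffmpeg -y -f v4l2 -i /dev/video0 -c:v rawvideo -f nut cam0 &
ffmpeg -y -f v4l2 -i /dev/video1 -c:v rawvideo -f nut cam1 &
ffmpeg -f nut -i cam0 -f nut -i cam1 -filter_complex "[0:v][1:v]hstack[v]" -map "[v]" -c:v libx264 out.mkv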
[22:21:32 CET] <Gambit-> There we go.
[22:21:50 CET] <Gambit-> Hi folks - I'm trying to take several sub selections from a video and concatenate them together
[22:22:11 CET] <Gambit-> The Filtergraph I have right now doesn't seem to be working right - it puts a bunch of dead time in the middle.
[22:22:35 CET] <Gambit-> ffmpeg -i ../media/in.mp4 -filter_complex "[0]trim=end_frame=20:start_frame=10[s0];[0]trim=end_frame=1050:start_frame=1000[s1];[s0][s1]concat=n=2[s2]" -map [s2] out.mp4
[22:23:05 CET] <Gambit-> I feel like that should produce a video that's 60 frames long - frames 10-20 followed by frames 1000-1050
[22:23:42 CET] <Gambit-> Suggestions?
[22:32:26 CET] <furq> Gambit-: trim=xyz,setpts=PTS-STARTPTS
[22:32:33 CET] <furq> for both trims
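Applied to the command above (video only), that would be roughly the following; setpts=PTS-STARTPTS shifts each trimmed segment back to start at 0, so concat doesn't insert dead time between them:
ffmpeg -i ../media/in.mp4 -filter_complex "[0:v]trim=start_frame=10:end_frame=20,setpts=PTS-STARTPTS[s0];[0:v]trim=start_frame=1000:end_frame=1050,setpts=PTS-STARTPTS[s1];[s0][s1]concat=n=2[s2]" -map "[s2]" out.mp4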
[22:37:43 CET] <Gambit-> furq, do I need to do both setpts and asetpts?
[22:40:03 CET] <Gambit-> hm it looks like if I don't do anything with the audio, it will concat the video tracks but completely drop the audio ones.
[22:40:08 CET] <Gambit-> That's... frustrating.
[22:49:04 CET] <kepstin> Gambit-: if you use a -map option, ffmpeg disables its default mapping and only includes the specified tracks
[22:49:51 CET] <kepstin> Gambit-: adding a "-map 0:a" or similar will include the audio again, although it won't be trimmed of course because you're only filtering the video right now.
[22:54:21 CET] <Gambit-> huh
[22:54:57 CET] <Gambit-> kepstin, what's the minimum I should be looking at? If I remove the map would I get the audio from the two segments by using just [0]trim+setpts?
[22:55:22 CET] <kepstin> you need to separately trim the audio using the atrim filter
[22:55:37 CET] <kepstin> (and then include it in the inputs to the concat filter, which will need the a=1 option added)
[22:56:47 CET] <Gambit-> okay, so I do need the full trim+setpts atrim+asetpts set
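A sketch of that full set, using time-based trims so the audio and video cuts line up (the times are illustrative, not the frame numbers from before):
ffmpeg -i ../media/in.mp4 -filter_complex "[0:v]trim=start=1:end=2,setpts=PTS-STARTPTS[v0];[0:a]atrim=start=1:end=2,asetpts=PTS-STARTPTS[a0];[0:v]trim=start=40:end=42,setpts=PTS-STARTPTS[v1];[0:a]atrim=start=40:end=42,asetpts=PTS-STARTPTS[a1];[v0][a0][v1][a1]concat=n=2:v=1:a=1[outv][outa]" -map "[outv]" -map "[outa]" out.mp4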
[22:58:42 CET] <Gambit-> Having all of those trim settings just felt expensive but I guess if it's compiled properly under the hood...
[22:59:00 CET] <Gambit-> is there a library version of FFmpeg that avoids any potential command-line-length limitations?
[22:59:59 CET] <Gambit-> I'd like to, say, extract a thousand small samples from a large video file and have it output to a stream (it's okay if it's sporadic, as long as it's fast), and stacking it all up from the command line sounds like a recipe for failure.
[23:00:37 CET] <kepstin> ffmpeg is built on a set of libraries that can do all the same thing. But for your use case, you probably want to use the "-filter_complex_script" option that lets you load a filter spec from a file.
[23:00:55 CET] <Gambit-> Oh that's probably sufficient, yea
[23:02:55 CET] <Gambit-> thanks kepstin, that'll probably get me at least to the next hurdle :)
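For example, with the filter spec saved to a file named cuts.txt (hypothetical name), the invocation shrinks to:
ffmpeg -i ../media/in.mp4 -filter_complex_script cuts.txt -map "[outv]" -map "[outa]" out.mp4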
[23:06:55 CET] <Gambit-> hm it takes a long time to seek (I assume) to some parts of the file.
[23:07:12 CET] <kepstin> trim filter doesn't seek, it discards decoded frames
[23:07:21 CET] <Gambit-> ahhh
[23:07:35 CET] <kepstin> if you want to seek, you have to use a separate input (or a "movie" filter in the filter chain) with the -ss input option
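The separate-input route would look something like this sketch, where each -ss/-t pair is an input option so ffmpeg seeks in the demuxer before decoding (and, since -ss on an input resets timestamps to 0, no setpts is needed before concat):
ffmpeg -ss 10 -t 5 -i ../media/in.mp4 -ss 100 -t 5 -i ../media/in.mp4 -filter_complex "[0:v][0:a][1:v][1:a]concat=n=2:v=1:a=1[v][a]" -map "[v]" -map "[a]" out.mp4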
[23:08:50 CET] <Gambit-> ... should I just be using several movie filters instead of trim/atrim?
[23:09:48 CET] <Gambit-> or maybe I need to do a movie+trim+atrim for each segment?
[23:11:18 CET] <scriptease> kepstin...thx for helping so much btw ;-)
[23:11:21 CET] <kepstin> it depends. if you have small segments close together, a single input might be less overhead than multiple inputs
[23:11:32 CET] <kepstin> if there's bigger gaps, you'll probably want seeking
[23:14:05 CET] <Gambit-> would the general structure be movie+trim+atrim[1], movie+trim+atrim[2], ... [1][2]...concat=n=N?
[23:14:27 CET] <Gambit-> or rather [1a] and [1v] for each of those
[23:16:07 CET] <kepstin> you can name the filter pads anything you like, just do something you'll be able to understand again later. Note that integers are used to refer to ffmpeg inputs, so it's best to avoid them for pads you name.
[23:16:25 CET] <kepstin> i like using stuff like 'v1', 'a1' personally
[23:16:28 CET] <Gambit-> sure, sure - but that's the general structure I should be looking at?
[23:17:03 CET] <kepstin> a movie filter can read audio+video at the same time, synced, so you probably want something like this:
[23:17:06 CET] <scriptease> haha this is funny
[23:17:15 CET] <scriptease> didnt know that existed
[23:17:15 CET] <scriptease> https://explainshell.com/
[23:18:04 CET] <Gambit-> scriptease: neat!
[23:18:48 CET] <kepstin> movie=filename=blah.mp4:sp=12:s=dv+da[v0][a0];[v0]trim,setpts[v0trim];[a0]atrim,asetpts[a0trim];[v0trim][a0trim]...concat=v=1:a=1:n=N
[23:19:10 CET] <Gambit-> why dv+da and not v+a?
[23:19:16 CET] <kingsley> JEEB: Thank you for taking the time to type in your detailed thoughts on the size of .webm files.
[23:19:36 CET] <kepstin> Gambit-: because that's what you specify to tell the movie filter to select the default video and audio tracks.
[23:19:42 CET] <Gambit-> ah
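Filled in with concrete, purely illustrative values for two 10-second segments (e.g. as the contents of a -filter_complex_script file, wrapped here for readability), that structure might look like:
movie=filename=in.mp4:sp=5:s=dv+da[v1][a1];
[v1]trim=end=15,setpts=PTS-STARTPTS[v1t];
[a1]atrim=end=15,asetpts=PTS-STARTPTS[a1t];
movie=filename=in.mp4:sp=1000:s=dv+da[v2][a2];
[v2]trim=end=1010,setpts=PTS-STARTPTS[v2t];
[a2]atrim=end=1010,asetpts=PTS-STARTPTS[a2t];
[v1t][a1t][v2t][a2t]concat=n=2:v=1:a=1[outv][outa]
Note the movie filter keeps the source timestamps after seeking (as kepstin points out below), so the trim end values are absolute times in the source, not offsets from the seek point.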
[23:19:44 CET] <kingsley> Your knowledge and generosity are both fine qualities.
[23:20:36 CET] <kingsley> JEEB: I followed your advice. I experimented with render settings in kdenlive.
[23:21:16 CET] <Gambit-> huh interesting that didn't work and then I changed the names to exactly match yours and it did work
[23:21:24 CET] <Gambit-> I was using mv and ma as the outputs of movie, and it didn't like that.
[23:22:12 CET] <kingsley> JEEB: The default setting of kdenlive's video quality slider rendered 5 seconds of 1920x1090 into 4.3 megabytes.
[23:23:17 CET] <kingsley> JEEB: Setting kdenlive's video quality slider to 45 rendered 5 seconds of 1920x1090 into 3.2 megabytes.
[23:23:48 CET] <kingsley> JEEB: Setting it to 15 rendered 5 seconds of 1920x1090 into 4.9 megabytes.
[23:23:55 CET] <Gambit-> kepstin: I get a slightly alarming message: [out_0_1 @ 0x7ff028e1c3c0] 100 buffers queued in out_0_1, something may be wrong.
[23:24:28 CET] <kingsley> JEEB: I think all 3 looked OK, and am not surprised by the differences in file sizes
[23:24:53 CET] <kepstin> Gambit-: i'd have to see the full filter chain. that usually means you have some issue with routing between filters such that you're writing to a concat input that's not ready to read yet, for example
[23:25:29 CET] <Gambit-> full filter chain is https://pastebin.com/D3NYiter
[23:26:07 CET] <kepstin> Gambit-: your trim and atrim on the first input don't have the same start time
[23:26:19 CET] <Gambit-> hm since there's no setpts attached to the movie= filter, that probably means the timestamps are still correct?
[23:26:39 CET] <Gambit-> whups thanks for the spot
[23:26:45 CET] <Gambit-> yeah, that disappeared the warning
[23:27:18 CET] <kepstin> Gambit-: if you're using the seek_point option to the movie filter, you probably don't need the start option on the trim (since the seek will set the start)
[23:27:26 CET] <kepstin> that said, i don't think it'll hurt anything :)
[23:27:40 CET] <Gambit-> *nods*
[23:27:47 CET] <Gambit-> because the timestamp remains coherent all the way through
[23:27:53 CET] <kepstin> the movie filter keeps the timestamps when seeking - so if you use seek_point=5 on it, the timestamp of the first frame will be 5 seconds
[23:27:56 CET] <Gambit-> So I'd be starting from the "current position"
[23:28:00 CET] <Gambit-> gotcha
[23:28:09 CET] <kepstin> note that this is different from doing -ss 5 -i foo.mp4 on the cli
[23:28:21 CET] <Gambit-> how so?
[23:28:22 CET] <kepstin> ffmpeg by default resets timestamps to start at 0 after seeking when using -ss and -i
[23:28:32 CET] <Gambit-> ah
[23:28:36 CET] <Gambit-> just from a timestamp perspective, right
[23:28:47 CET] <Gambit-> Is there a more performant way of doing this I should be looking at?
[23:28:52 CET] <kepstin> so it'll start at 5 seconds, but the timestamp of the frame at 5 seconds will be 0
[23:29:27 CET] <kepstin> there's no real better way without using a proper editing tool or a custom built thing with ffmpeg libraries
[23:29:41 CET] <Gambit-> okay, then this is good enough for govt work, so to speak
[23:29:53 CET] <kepstin> (ffmpeg libraries can seek within a loaded file - so you could seek to 5s, read 10s, seek to 1050s, read 10s for example)
[23:29:57 CET] <Gambit-> thanks for the tips, there were some definitely arcane pieces here.
[23:30:00 CET] <Gambit-> Yeah
[23:30:11 CET] <Gambit-> That's "future", when I eventually Need It.
[23:30:41 CET] <Gambit-> is there a way to tell -f to specify the current filetype?
[23:30:48 CET] <Gambit-> I'm outputting this to stdout and it gets confused.
[23:31:07 CET] <kepstin> yeah, you can use -f as an output option to specify the container to use
[23:31:16 CET] <kepstin> (note that you can't write mp4 to stdout)
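A couple of container choices that can be written to a non-seekable stdout (filter/map options dropped for brevity, and some_consumer is a hypothetical stand-in for whatever reads the stream; which container fits depends on what the receiving side expects):
ffmpeg -i in.mp4 -f matroska - | some_consumer
ffmpeg -i in.mp4 -movflags frag_keyframe+empty_moov -f mp4 - | some_consumer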
[23:31:53 CET] <Gambit-> I'm planning on taking the output and sending it to a remote client over http using HLS+video.js, but I'd rather not transcode it to something until I need to.
[23:32:04 CET] <kepstin> also, your command right now only outputs video, you'll need something like ...concat=n=2:v=1:a=1[outv][outa]" -map [outv] -map [outa] ...
[23:32:08 CET] <Gambit-> So do I need to probe the file for the right format to use first and then specify that in a subsequent invocation?
[23:32:24 CET] <Gambit-> huh it plays audio when I drop it in the player...
[23:32:35 CET] <kepstin> hmm.
[23:32:41 CET] <kepstin> oh, right, filter complex is weird
[23:32:56 CET] <Gambit-> FFmpeg.* is weird.
[23:32:56 CET] <kepstin> unattached output pads are automatically included in the output file
[23:33:09 CET] <Gambit-> that is odd.
[23:33:17 CET] <Gambit-> should I do the map's to be more coherent?
[23:33:18 CET] <kepstin> so the audio output from concat doesn't have a named pad - that means it's just passed on to the output file.
[23:33:32 CET] <kepstin> (if you use a named pad, you have to use -map or it's sent nowhere iirc)
[23:33:48 CET] <kepstin> that might be an error, actually
[23:34:47 CET] <Gambit-> [outv][outa]" -map [outv] -map [outa] out2.mp4 -- did not produce a fully formed file
[23:34:54 CET] <Gambit-> it had the first video segment but not the second one
[23:35:22 CET] <kepstin> that doesn't make sense, there must be another issue somewhere in there :/
[23:36:07 CET] <Gambit-> *shrugs* It does work "as expected" with just [out]" -map [out] out2.mp4
[23:36:29 CET] <Gambit-> any tip on specifying the right parameter to -f to "do no unnecessary work" when outputting to stdout?
[23:39:47 CET] <Gambit-> ffprobe -v error -show_entries stream=codec_name -select_streams v:0 ../media/volleyball.mp4 -- this kicks out h264 but I'm not sure if that's enough to specify... will test.
[23:41:18 CET] <Gambit-> huh no that's not quite enough
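To get the container rather than the codec, probing the format section is probably closer to what -f wants, e.g.:
ffprobe -v error -show_entries format=format_name -of default=noprint_wrappers=1:nokey=1 ../media/volleyball.mp4
Note this prints the demuxer name (for an mp4, something like mov,mp4,m4a,3gp,3g2,mj2), which isn't always a valid muxer name to feed straight back to -f, so some mapping by hand is still needed.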
[00:00:00 CET] --- Sat Jan 12 2019