[Ffmpeg-devel-irc] ffmpeg.log.20190208

burek burek021 at gmail.com
Sat Feb 9 03:05:02 EET 2019


[01:09:02 CET] <Aerroon> is it possible to put text onto a video without re-rendering? i'm guessing it's not, right?
[01:09:56 CET] <JEEB> nope
[01:10:06 CET] <JEEB> I mean, if you mean into the actual video stream
[01:10:19 CET] <JEEB> so that it isn't a separate subtitle track/stream
[01:10:37 CET] <Aerroon> that's what i figured
[01:11:02 CET] <Aerroon> i was just thinking that a lot of simple editing stuff people want to do shouldn't really require you to render the resulting video
[01:11:23 CET] <Aerroon> in simple editing things you mostly just want to cut and transition from clip to clip and sometimes text of some kind
[01:11:45 CET] <Aerroon> in those cases probably something like 90% of the video would be untouched, but it still gets rendered again
[01:14:38 CET] <JEEB> partial re-encoding is a thing but it isn't something many tools do out of the box (but enable if you utilize the APIs and make your own tool for it)
[01:15:24 CET] <JEEB> because you basically just look at the initialization parameters of the source stream, and then try to set up your encoder to match those parameters
[01:15:39 CET] <JEEB> after that you figure out where your editing actually touches the video surface
[01:15:59 CET] <JEEB> and then re-encode only parts that are touched, otherwise pass the original video packets through :P
[01:19:09 CET] <ossifrage> Aerroon, it might be possible to use some of the rarely used codec toolbox features to get an overlay layer, but it would be very non-trivial
[01:19:35 CET] <Aerroon> oh nah i'm not interested in doing it, i was just wondering if it's possible
[01:19:56 CET] <Aerroon> but yeah, i was thinking that partial reencoding like JEEB mentioned
[01:20:04 CET] <Aerroon> i wonder if that's ever going to be a thing with editing software
[01:21:02 CET] <ossifrage> For example, you might be able to use the SVC stuff in H.264 to get an overlay, but it would be a hack
[01:21:47 CET] <ossifrage> The partial re-encoding tricks only really work if you can constrain the initial encode
[11:09:01 CET] <perseiver> Has anyone had success setting packetization-mode=0 for H.264 video using FFmpeg? Currently, if a video is converted using ffmpeg (vcodec libx264), the resulting stream comes with packetization-mode=1 in the SDP
[13:12:27 CET] <hedgehog90> Hi, I'm trying to use the palettegen with a segment of video. I want it to be frame accurate so I use -ss after -i, but then I get the following error:
[13:12:32 CET] <hedgehog90>     Output file is empty, nothing was encoded (check -ss / -t / -frames parameters if used)
[13:12:37 CET] <hedgehog90> If I put -ss before -i (inaccurate times) then it works.
[13:12:42 CET] <hedgehog90> Works: ffmpeg -ss 5 -t 5 -i in.mkv -vf "scale=320:-1:flags=lanczos" -y out.mkv
[13:12:46 CET] <hedgehog90> Fails: ffmpeg -i in.mkv -ss 5 -t 5 -vf "scale=320:-1:flags=lanczos" -y out.mkv
[13:13:38 CET] <hedgehog90> ^ oops, forgot to add palettegen after scale filter in both examples
[13:13:48 CET] <pink_mist> I thought -i specifies the input file
[13:14:06 CET] <pink_mist> not anything to do with "inaccurate times"
[13:14:41 CET] <hedgehog90> right, but if you put -ss before the -i then it is not frame accurate
[13:16:11 CET] <hedgehog90> any ideas?
[13:18:30 CET] <ariyasu> i think your error is the -t 5 after i
[13:19:08 CET] <ariyasu> also unless it's raw video where every frame is an iframe
[13:19:12 CET] <hedgehog90> I've tried either, makes no diff. -t is frame accurate, the video is 5 seconds exactly.
[13:19:14 CET] <ariyasu> it will never be frame accurate
[13:21:12 CET] <furq> -ss should be frame accurate if you're not stream copying
[13:21:15 CET] <furq> before -i
[13:22:21 CET] <hedgehog90> Oh really? It's hard for me to tell with a 16x16 palette if I've got the right frames.
[13:22:41 CET] <furq> just run the command without palettegen
[13:23:06 CET] <TheAMM> -ss before -i has had fast frame-accurate seeks by default for years (AFAIK)
[13:23:12 CET] <hedgehog90> so -ss before -i is only inaccurate when copying video stream?
[13:23:15 CET] <furq> you might want to put -t after -i, i forget how -t before -i works with -accurate_seek
[13:23:24 CET] <furq> but yeah just check if it's right
[13:23:55 CET] <hedgehog90> Nice one. Thanks furq
[13:24:09 CET] <furq> -ss before -i seeks to the nearest keyframe before and then decodes and discards everything up to the timestamp you gave it
[13:24:20 CET] <furq> obviously that doesn't work if you're stream copying because segments have to start on a keyframe
[13:24:25 CET] <hedgehog90> Got it.
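(For illustration, the difference furq describes, with filenames assumed: the first command decodes and discards everything up to the 5 s mark, while the second, stream-copying command has to start at the keyframe before it.)

    ffmpeg -ss 5 -t 5 -i in.mkv -c:v libx264 -c:a aac cut_accurate.mkv
    ffmpeg -ss 5 -t 5 -i in.mkv -c copy cut_keyframe.mkv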
[13:28:35 CET] <hedgehog90> Also, while I'm here, with filter_complex is it possible to palettegen and paletteuse in one filter chain? I tried it and it appeared no palette was being used.
[13:38:50 CET] <furq> it is possible but it might use a lot of memory
[13:41:03 CET] <hedgehog90> ok, would you recommend doing it in a separate process then?
[13:41:11 CET] <furq> as a general rule yes
[13:41:16 CET] <hedgehog90> ok
[13:42:44 CET] <furq> ffmpeg -i foo.mkv -lavfi "[0:v]select=between(t\,5\,10),palettegen[pal];[0:v][pal]paletteuse" bar.gif
[13:42:47 CET] <furq> something like that
[13:43:52 CET] <hedgehog90> you're doing it in a single chain / process there
[13:44:05 CET] <furq> yeah that's the one that might use a lot of memory
[13:44:10 CET] <furq> it shouldn't be too bad if you're only selecting five seconds
[13:44:58 CET] <furq> actually nvm that buffers like 4.5GB here on a 720p clip
[13:45:00 CET] <furq> so maybe not
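(The separate-pass approach furq recommends would look roughly like this, assuming the 5 s segment and filenames from the earlier commands; the palette is generated in a first run and applied in a second.)

    ffmpeg -ss 5 -t 5 -i foo.mkv -vf "scale=320:-1:flags=lanczos,palettegen" -y palette.png
    ffmpeg -ss 5 -t 5 -i foo.mkv -i palette.png -lavfi "scale=320:-1:flags=lanczos[x];[x][1:v]paletteuse" -y out.gif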
[14:38:27 CET] <Elirips> Hello. I have a set of images, which I would like to use as input for ffmpeg and then output them again as images. The input shall be read at 1 fps and output at 5 fps. I tried with 'ffmpeg.exe -r 1 -loop 1 -i E:\Var\dummy_%02d.jpg -r 1 -an -s 352x288 -q:v 2 -f image2pipe -update 1 -y \\.\pipe\Cam01.jpg' but it seems the first '-r 1' gets ignored?
[14:39:22 CET] <Elirips> (sorry, the second '-r 1' should be '-r 5', but it does not matter for the problem)
[14:39:46 CET] <Elirips> I also tried with something like -r 0.1 for the first fps parameter (for the input), but it does not change anything
[14:44:23 CET] <Elirips> also using '-framerate' instead of '-r' before the input does not help
[14:52:11 CET] <Elirips> hm, if outputting to avi, everything works as expected: 'ffmpeg.exe -f image2 -framerate 1 -i E:\Var\dummy_%02d.jpg -s 352x288 -y "c:\test.avi"'
[15:27:55 CET] <Elirips> now I would like to take that avi, loop it forever, read it with 1 fps, and output that to image2pipe with 5 fps... impossible?
[15:28:36 CET] <c_14> try -vsync vfr as an output option
[15:29:08 CET] <c_14> and stream_loop instead of loop
[15:33:53 CET] <Elirips> c_14: thanks I'll give that a try
[15:34:35 CET] <Elirips> any idea why reading single images with '-f image2' and outputting them using '-f image2pipe' seems to ignore all '-framerate' settings?
[15:40:38 CET] <Elirips> stream_loop works fine for the avi input, but it is still outputting the input avi at an enormous speed
[15:41:05 CET] <Elirips> trying with 'ffmpeg.exe -vsync 1 -stream_loop -1 -i E:\test.avi -f image2pipe -y \\.\pipe\Cam01.jpg'
[15:41:57 CET] <Elirips> trying with 'ffmpeg.exe -r 1 -vsync 1 -stream_loop -1 -i E:\test.avi -f image2pipe -y \\.\pipe\Cam01.jpg'
[15:42:06 CET] <Elirips> it's played at a speed of ~460x :D
[16:01:56 CET] <c_14> try -re
[16:02:01 CET] <c_14> that might be what you want
[16:02:37 CET] <mfwitten> Elirips: ffmpeg -y -f image2 -framerate 1 -loop 1 -i E:\Var\dummy_%02d.jpg -r 5 -f image2 -update 1  \\.\pipe\Cam01.jpg
[16:03:56 CET] <mfwitten> Elirips: The thing is, what are you really expecting on output?
[16:04:06 CET] <mfwitten> Elirips: What is consuming those images?
[16:04:36 CET] <Elirips> i am consuming those images from within a c++ app written by me :/
[16:04:51 CET] <Elirips> and normally, I have a bunch of rtsp streams as inputs
[16:05:15 CET] <Elirips> but with my test-environment here, I can only start about ~ 10 streams at once, then the single camera I have crashes
[16:05:24 CET] <Elirips> so I was trying to just feed it with single images
[16:05:43 CET] <mfwitten> Elirips: OK. Well, ffmpeg will read in your frames, and you're telling it that those frames occur at 1 frame per second, and then you're telling ffmpeg to re-generate those frames such that they occur at 5 frames per second, and then you are writing out those generated frames to a pipe
[16:06:09 CET] <mfwitten> Whatever is consuming those frames would be responsible for interpreting their presentation time (i.e., the frame rate)
[16:06:19 CET] <Elirips> mfwitten: yes, so I would expect ffmpeg to give me one frame at my pipe every 200ms
[16:06:21 CET] <mfwitten> Elirips: You just want to scale the images, etc. right?
[16:06:48 CET] <Elirips> more or less, yes
[16:07:08 CET] <BtbN> Elirips, ffmpeg will go as fast as it can. The fps value only governs the timestamps, not the actual speed it processes at.
[16:07:20 CET] <BtbN> Like c_14 said, to artificially limit that, you want -re.
[16:07:23 CET] <mfwitten> Elirips: ffmpeg is not a player.
[16:07:27 CET] <mfwitten> Elirips: That's the issue
[16:07:27 CET] <Elirips> thank you, so there is a fundamental misunderstanding on my side :)
[16:07:51 CET] <Elirips> I'll give that a try thanks everyone for the hints
[16:08:07 CET] <BtbN> Keep in mind -re is an ffmpeg.c hack. It's not available via the API.
[16:09:04 CET] <Elirips> yep, reading the doc about -re sounds exactly like what I need
[16:11:09 CET] <mfwitten> Elirips: ffmpeg -y -re -f image2 -framerate 5 -loop 1 -i E:\Var\dummy_%02d.jpg -f image2 -update 1 \\.\pipe\Cam01.jpg
[16:15:21 CET] <Elirips> awesome :)
[16:15:39 CET] <mfwitten> Elirips: What is your final command line?
[16:17:25 CET] <Elirips> ffmpeg.exe -y -re -f image2 -framerate 1 -loop 1 -i E:\dummy_%02d.jpg -f image2pipe -r 5 -s 352x288 -q:v 2 -y \\.\pipe\Cam01.jpg
[16:18:41 CET] <mfwitten> Elirips: To me, that would read it in at 1 FPS and then as quickly as possible produce 5 FPS (generate duplicate images) and send them off to your app as fast as possible
[16:18:51 CET] <mfwitten> Elirips: So, you're probably doing a lot more processing than necessary
[16:19:17 CET] <Elirips> right right
[16:19:26 CET] <Elirips> if I look at the timings on my side
[16:19:30 CET] <Elirips> things arrive very quickly
[16:19:35 CET] <Elirips> then nothing for a long time
[16:19:39 CET] <Elirips> and again 5 frames :)
[16:19:51 CET] <Elirips> but I'm getting the idea now
[16:20:05 CET] <Elirips> I'll just multiply my input images *5
[16:20:16 CET] <Elirips> then do '-re -framerate 5'
[16:20:22 CET] <Elirips> and I think then I should get what I want
[16:20:57 CET] <mfwitten> Elirips: No need to multiply by 5
[16:21:43 CET] <Elirips> hm, but how then: I would like to get the first image 5 times, with a wait of ~200ms in between
[16:22:16 CET] <mfwitten> Elirips: oh
[16:22:44 CET] <Elirips> it's for testing only, so multiplying the input images will be fine
[16:22:55 CET] <Elirips> I need to simulate an input-device with 5 fps
[16:23:15 CET] <Elirips> but where the image changes only every 1 second
[16:23:19 CET] <Elirips> (don't ask if that makes sense or not)
[16:23:51 CET] <mfwitten> Elirips: All right. Good luck!
[16:24:06 CET] <Elirips> mfwitten: thanks a lot (also all others who helped)!
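(A sketch of the plan Elirips describes, reusing the paths and sizes from the earlier commands: each source image appears five times in the sequence, and -re paces the 5 fps input in real time, so a frame should hit the pipe roughly every 200 ms.)

    ffmpeg.exe -re -f image2 -framerate 5 -loop 1 -i E:\Var\dummy_%02d.jpg -s 352x288 -q:v 2 -f image2pipe -update 1 -y \\.\pipe\Cam01.jpg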
[18:08:28 CET] <hedgehog90> Is there a way to delay/re-time a subtitles filter (reading an srt) in ffmpeg?
[18:09:06 CET] <hedgehog90> I'm hardcoding some subs and i want to delay them/make them appear sooner
[18:09:15 CET] <kepstin> hedgehog90: using the subtitles filter? No. What you can do is re-time the *video* that it's being rendered over.
[18:09:37 CET] <hedgehog90> Ok, but I want the base video to stay the same
[18:09:39 CET] <kepstin> so if you want the subtitles to render earlier, use e.g. the -itsoffset input option to make the video be later
[18:10:17 CET] <hedgehog90> -itsoffset delays the video?
[18:11:13 CET] <kepstin> the -isoffset input option applies a timestamp offset to all of the streams in an input file
[18:11:18 CET] <kepstin> -itsoffset
[18:12:00 CET] <hedgehog90> I have to input the subtitles using filter_complex; it's a very complex filter chain in which subtitles are just one element, so I'm not sure -itsoffset will work
[18:12:02 CET] <kepstin> it doesn't actually change the contents in any way, it just applies an offset to the timestamp numbers. And then the subtitles filter uses those timestamp numbers to figure out how to sync up the subtitles.
[18:12:46 CET] <kepstin> ah, if you're doing multiple things then that would make it complicated, yeah. You might have to use a setpts filter to apply an offset before the subtitles filter, then another setpts filter afterwards to undo the offset.
[18:13:41 CET] <hedgehog90> ah, setpts is what I was looking for I think
[18:13:44 CET] <hedgehog90> thanks
[18:14:49 CET] <kepstin> (as an aside, an option in the subtitles filter to set a timestamp offset would be pretty handy...)
[18:17:22 CET] <hedgehog90> It would be nice :)
[18:37:26 CET] <furq> you might just want to use another tool to retime the srt in advance
[18:37:35 CET] <furq> any decent subtitle editor will do it
[18:44:11 CET] <hedgehog90> furq I'd like to avoid that if possible
[18:44:37 CET] <hedgehog90> btw I tried using setpts and couldnt get it to work.
[18:44:57 CET] <JEEB> filters don't work with subtitles because subtitles are the only things still not using AVFrames
[18:45:04 CET] <JEEB> they have AVSubtitles
[18:45:25 CET] <hedgehog90> even text subtitles?
[18:45:32 CET] <furq> the ugly double input itsoffset trick should work iirc
[18:45:38 CET] <hedgehog90> ie, not image based
[18:45:48 CET] <furq> -i foo.mp4 -itsoffset 123 -i foo.mp4 -map 0:v -map 0:a -map 1:s
[18:46:45 CET] <furq> also it's only text subtitles, image-based subtitles are treated as a video stream iirc
[18:47:10 CET] <furq> idk if they're handled specially but you can definitely use filters on them
[18:47:18 CET] <kepstin> hedgehog90: you need to apply the setpts filter on the video stream that the subtitles will be rendered over, and offset in the opposite direction
[18:47:30 CET] <JEEB> there's sub2video for picture based subs
[18:47:38 CET] <JEEB> which is a very horrible hack
[18:47:55 CET] <JEEB> I've tried to improve it at some point but ended up having it go barely realtime with overlay :P
[18:48:36 CET] <hedgehog90> ok, I think I've worked out a solution
[18:48:51 CET] <kepstin> hedgehog90: e.g. to make the subtitles show up 10s later, do setpts=PTS-10/TB,subtitles=...,setpts=PTS+10/TB
[18:48:56 CET] <JEEB> also itsoffset often doesn't work at all without copyts
[18:49:02 CET] <JEEB> because ffmpeg.c loves to look at the initial offset
[18:49:05 CET] <JEEB> and zero that out
[18:49:22 CET] <hedgehog90> kepstin oh that's neat
[18:49:24 CET] <JEEB> and/or -vsync passthrough
[18:49:43 CET] <furq> yeah that setpts hack looks nicer than itsoffset
[18:51:44 CET] <hedgehog90> I think you got the - and + back to front kepstin, flipping them worked
[18:51:57 CET] <hedgehog90> thanks!
[18:52:38 CET] <kepstin> yeah, that works in either direction
[18:53:35 CET] <hedgehog90> works perfectly
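(Put together, the working command would look roughly like this, with filenames and a 10 s shift assumed; swap the signs to shift the subtitles the other way, as discussed above.)

    ffmpeg -i in.mkv -vf "setpts=PTS-10/TB,subtitles=subs.srt,setpts=PTS+10/TB" -c:a copy out.mkv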
[19:29:33 CET] <lays147> Hello guys, I would like to have some guidance today. I have this old script that runs on a probably very old ffmpeg (don't have access to the machine to check the version) and a new script that runs on ffmpeg 2.8.15
[19:29:34 CET] <lays147> https://pastebin.com/0nLx9fxj
[19:29:57 CET] <lays147> the problem is that apparently the audio channel generated by the aac codec is not compatible with our old system
[19:30:55 CET] <DHE> he_v2 is most likely the issue. try again with the default profile. though you'll probably want to raise the audio bitrate as well
[19:30:57 CET] <lays147> and then I compiled ffmpeg with the lib libfdk_aac and I am trying to generate a video/audio that works with the new specs
[19:32:36 CET] <lays147> but changing the acodec from aac to libfdk_aac and running the new script, I get an error that the option preset isn't recognized.
[19:33:49 CET] <lays147> so can anyone help me change the old script to work on new versions of ffmpeg? Or create a script that works? If I use libfdk_aac and just remove the options that aren't recognized, I can get a video, but it freezes on almost every frame
[19:50:30 CET] <furq> lays147: did you build the new ffmpeg with libx264
[19:52:50 CET] <lays147> furq: the ffmpeg that I built myself is based on snapshot N-93094-g7f8bfbe, following this: https://stackoverflow.com/questions/18746359/compile-ffmpeg-with-libfdk-aac
[19:53:11 CET] <lays147> that option is disabled by default when compiling by hand?
[19:53:36 CET] <furq> any option starting with --enable-lib is disabled by default
[19:54:45 CET] <DHE> so you'll need at a minimum: ./configure --enable-libx264 --enable-gpl --enable-libfdk-aac --enable-nonfree
[19:54:55 CET] <DHE> though chances are this will be enough for you
[19:55:11 CET] <lays147> DHE: thanks, just checked out and the libx264 is missing
[19:56:38 CET] <furq> i take it you're not building ffmpeg 2.8
[19:57:37 CET] <lays147> furq: yeah, I got it from the src... I think that I will get the src of 2.8 to not have future issues
[19:57:54 CET] <furq> you should really build a newer version unless something's blocking you
[19:58:06 CET] <furq> 2.8 is pretty old now
[19:58:20 CET] <furq> if you're trying to build the same version your distro has then there's really no point doing that
[19:58:41 CET] <furq> just build with --enable-static and keep your ffmpeg separate from the system one
[19:59:19 CET] <lays147> furq: ok
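(A rough sketch of the whole thing once the build has both libraries, following DHE's suggestion to drop the he_v2 profile and raise the audio bitrate; the input/output names and the 128k figure are placeholders, not lays147's actual script.)

    ./configure --enable-gpl --enable-nonfree --enable-libx264 --enable-libfdk-aac --enable-static
    ffmpeg -i in.mp4 -c:v libx264 -preset medium -c:a libfdk_aac -b:a 128k out.mp4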
[21:45:11 CET] <KungFuJesus> you guys need to mangle names on this function or there's a symbol conflict: https://github.com/FFmpeg/FFmpeg/blob/master/libavcodec/cuda_check.c
[21:45:45 CET] <JEEB> post it either on the trac issue tracker, or the ffmpeg-devel mailing list I guess?
[21:46:03 CET] <KungFuJesus> libavutil and libavcodec both are exported ff_cuda_check
[21:46:17 CET] <JEEB> that sounds broken indeed
[21:46:19 CET] <KungFuJesus> s/exported/exporting/g
[21:47:44 CET] <JEEB> or I guess if you can catch anyone handling the cuda related code you might also have luck on the developer IRC channel. this is strictly for user support and mostly lacks developers :P
[21:47:48 CET] <JEEB> (except masochists like me)
[21:49:27 CET] <KungFuJesus> Ok, will try.  Not sure it's worthy of a whole bug report, somebody will find it pretty quickly trying to build from master
[21:49:49 CET] <KungFuJesus> if they are building libavutil + libavcodec (one uses the static archive from the other)
[22:06:54 CET] <BtbN> KungFuJesus, it's been like that for quite a while now, and it works great for me
[22:07:11 CET] <BtbN> the identical symbol exported by multiple libs isn't usually an issue
[22:07:24 CET] <BtbN> (Also, the file you linked has no functions)
[22:08:41 CET] <BtbN> That function probably should not be exported in the first place though
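(One way to confirm which libraries export the symbol, assuming shared builds in the source tree:)

    nm -D --defined-only libavutil/libavutil.so libavcodec/libavcodec.so | grep ff_cuda_check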
[22:32:46 CET] <Skandalist> Hello. I have an audio stream with length 1330 sec and a video stream with length 1594 sec. Is it possible to change the video tempo to make it equal to the audio? (As the opposite of making the audio tempo faster in Audacity)
[22:32:49 CET] <Skandalist> &
[22:32:50 CET] <Skandalist> ?
[22:33:30 CET] <Skandalist> I've found this in man: "With -map you can select from which stream the timestamps should be taken. You can leave either video or audio unchanged and sync the remaining stream(s) to the unchanged one."
[22:34:42 CET] <KungFuJesus> I'm on gcc 7 if that matters
[22:35:11 CET] <KungFuJesus> it seems that at least for my build, libavcodec tries to use code from the static archive in libavutil
[22:35:14 CET] <Skandalist> and tried "-map 0:0,1:0" from advanced options
[22:35:29 CET] <Skandalist> with no luck
[22:35:43 CET] <KungFuJesus> perhaps it's because my distro moved to forcing PIE on gcc by default and relocations are an issue?
[22:40:35 CET] <BtbN> Especially on a static build, the linker should just eliminate the duplicate symbol
[22:40:45 CET] <BtbN> make sure you are doing a clean build
[22:41:11 CET] <BtbN> Though I agree that it should be fixed
[23:06:09 CET] <wfbarksdale> hey folks, i'm wondering if there is any downside to using a flicks timebase (1 / 705600000) when writing an mp4 with ffmpeg?
[23:12:21 CET] <JEEB> wfbarksdale: your PTS values can get awfully high quickly mostly
[23:16:52 CET] <wfbarksdale> hmm, indeed
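(From the CLI, the mov/mp4 muxer's -video_track_timescale option sets the track timescale, so a flicks timebase could be tried with something like the following; filenames assumed.)

    ffmpeg -i in.mp4 -c copy -video_track_timescale 705600000 out.mp4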
[00:00:00 CET] --- Sat Feb  9 2019

