[Ffmpeg-devel-irc] ffmpeg.log.20190306

burek burek021 at gmail.com
Thu Mar 7 03:05:02 EET 2019


[00:01:40 CET] <omani> hi guys. I have a really hard time to figure out how to merge two videos as such that the second video starts 5 seconds before the first video ends. (trying to append an outro video)
[00:02:21 CET] <omani> the second video has music in it. so right before the first one ends (someone is speaking until the end) the second video's music should start.
[00:02:26 CET] <omani> is this even possible with ffmpeg?
[00:03:20 CET] <omani> I guess anything is possible with ffmpeg.
[00:03:31 CET] <c_14> use the trim filter to split the first and second video/audio tracks into 2 parts each, the part where only they play and the 5 second part where they overlap
[00:03:52 CET] <c_14> blend the 5second video tracks, amix the 5second audio tracks, use the concat filter to put the parts into the right order
[00:12:42 CET] <omani> c_14: ok thanks
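c_14's trim/blend/amix/concat recipe, sketched as runnable commands. This is a hedged sketch, not the definitive answer: testsrc2/sine-generated clips stand in for the real videos, the filenames are made up, and the linear crossfade expression in `blend` is one possible choice.

```shell
# Generate two 10-second test clips with audio (stand-ins for the real inputs).
ffmpeg -y -f lavfi -i testsrc2=duration=10:size=320x240:rate=25 \
       -f lavfi -i sine=frequency=440:duration=10 \
       -c:v mpeg4 -c:a aac -shortest main.mp4
ffmpeg -y -f lavfi -i testsrc2=duration=10:size=320x240:rate=25 \
       -f lavfi -i sine=frequency=880:duration=10 \
       -c:v mpeg4 -c:a aac -shortest outro.mp4

# Split each stream into "plays alone" and "5-second overlap" parts,
# blend/amix the overlapping parts, then concat the three segments.
ffmpeg -y -i main.mp4 -i outro.mp4 -filter_complex "
[0:v]trim=0:5,setpts=PTS-STARTPTS[v0a];
[0:v]trim=5:10,setpts=PTS-STARTPTS[v0b];
[1:v]trim=0:5,setpts=PTS-STARTPTS[v1a];
[1:v]trim=5:10,setpts=PTS-STARTPTS[v1b];
[v0b][v1a]blend=all_expr='A*(1-T/5)+B*(T/5)'[vmix];
[0:a]atrim=0:5,asetpts=PTS-STARTPTS[a0a];
[0:a]atrim=5:10,asetpts=PTS-STARTPTS[a0b];
[1:a]atrim=0:5,asetpts=PTS-STARTPTS[a1a];
[1:a]atrim=5:10,asetpts=PTS-STARTPTS[a1b];
[a0b][a1a]amix=inputs=2[amix];
[v0a][vmix][v1b]concat=n=3:v=1:a=0[vout];
[a0a][amix][a1b]concat=n=3:v=0:a=1[aout]
" -map "[vout]" -map "[aout]" -c:v mpeg4 -c:a aac merged.mp4
```

The result is 15 seconds long: 5s of video 1 alone, 5s of overlap, 5s of video 2 alone.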
[01:18:35 CET] <jennx> heya
[01:18:52 CET] <systemd0wn> Question: I have a m3u8 (~26 seconds) that I've generated with ffmpeg (single TS output) and would like to use that as an input and mp4 as an output. However, when I do so I end up with a ~2 second file and logs show "root atom offset 0x1c0278: partial file" and "Non-monotonous DTS in output stream 0:1; previous: 95782, current: 45565; changing to 95783. This may result in incorrect timestamps in the
[01:18:58 CET] <systemd0wn> output file." I'm happy to share more information like commands used, sample files, etc. I'm curious why ffmpeg exits 0 after turning a 26 second file into a 2 second file. I'm curious what I did to the original file or playlist to cause these time-related log entries.
[01:19:10 CET] <jennx> are CPU x264 hardware encoders used automatically with ffmpeg or do i have to supply a switch?
[01:19:53 CET] <brimestone> JEEB - I've had it.. I'm setting up a Z840 with 2 nVidia Tesla cards :) running ubuntu.. let's see what kind of speed I'll get with these.
[01:21:06 CET] <kepstin> jennx: if you want a specific encoder, you have to manually specify it.
[01:21:33 CET] <kepstin> jennx: note that libx264 is a specific software encoder
[01:22:51 CET] <systemd0wn> jennx: https://trac.ffmpeg.org/wiki/HWAccelIntro may also be helpful
[01:25:07 CET] <jennx> ta
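kepstin's point in practice: ffmpeg never switches to a hardware encoder on its own; you list what your build has and name one explicitly with -c:v. A sketch assuming a build configured with --enable-libx264 (the test clip and filename are mine):

```shell
# List the H.264 encoders this build knows about. Names vary by build:
# libx264 is software; h264_nvenc, h264_vaapi, h264_qsv are hardware.
ffmpeg -hide_banner -encoders | grep 264

# To use a specific encoder, name it with -c:v. This encodes a
# generated test clip with the libx264 software encoder.
ffmpeg -y -f lavfi -i testsrc2=duration=1:size=320x240:rate=25 -c:v libx264 out.mp4
```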
[01:36:57 CET] <kepstin> huh, fun, the hevc_vaapi encoder on my RX 560 only works in constant-qp mode. If i try setting a bitrate instead, I get garbage.
[01:37:13 CET] <kepstin> (it's garbage at the requested bitrate, so that's something at least)
[01:37:32 CET] <kepstin> tbh, I'm kinda surprised it works at all, the h264 encoder doesn't work :)
[02:42:05 CET] <Sauraus> Hi there
[02:42:59 CET] <Sauraus> I was wondering if anyone has tried capturing Twitch live streams on an EC2 instance in AWS and if they ran into any weird connectivity issues.
[02:43:50 CET] <Sauraus> My command line and subsequent errors can be found in this gist https://gist.github.com/Sauraus/1848aefefa5c27c134d6d3ad22e534c6
[02:45:03 CET] <Sauraus> When I run that command on my local workstation it runs flawlessly for hours on end, but on a c5m.xlarge EC2 instance on AWS the live video feed capture is very jerky and unreliable
[10:41:07 CET] <th3_v0ice> Am I understanding the code correctly that if I set AVFMT_TS_NONSTRICT, this will omit the check(cur_dts < prev_dts) and will send a packet to the output?
[11:21:55 CET] <slingamn> i'm having trouble finding a guide on extracting subtitles from a DVD's .vob files
[11:22:15 CET] <slingamn> https://stackoverflow.com/questions/19200790/converting-dvd-image-with-subtitles-to-mkv-using-avconv this is the closest thing i've seen
[11:25:44 CET] <slingamn> this is the closest i've gotten:
[11:25:49 CET] <slingamn> ffmpeg -fflags +genpts -analyzeduration 1000000k -probesize 1000000k -i /mnt/usbstick/VIDEO_TS/VTS_01_1.VOB -c:s copy -map 0:3 /home/shivaram/holding/holding/mymovie.sub
[11:26:07 CET] <slingamn> which produces:
[11:26:08 CET] <slingamn> [microdvd @ 0x560a19b18ec0] Exactly one MicroDVD stream is needed.
[11:26:52 CET] <furq> there's no vobsub muxer
[11:27:09 CET] <furq> you can extract the subtitles into mkv or something
[11:27:29 CET] <slingamn> that would work
[11:27:56 CET] <furq> bear in mind the palette will probably be broken because that's stored in the corresponding ifo
[11:28:51 CET] <slingamn> i get "[matroska @ 0x55f527354f60] Only audio, video, and subtitles are supported for Matroska.5.64x"
[11:29:11 CET] <furq> what ffmpeg version
[11:29:16 CET] <slingamn> hmm, it might be mad about this: "Stream #0:2: Data: dvd_nav_packet"
[11:29:26 CET] <slingamn> 3.4.4-0ubuntu0.18.04.1
[11:30:04 CET] <furq> weird
[11:30:08 CET] <furq> that works here on 3.4.2 and 4.1
[11:30:17 CET] <slingamn> ah so this worked:
[11:30:18 CET] <slingamn> ffmpeg -fflags +genpts -analyzeduration 1000000k -probesize 1000000k -i /mnt/usbstick/VIDEO_TS/VTS_01_2.VOB -c:s copy -map 0:3 ./out.mkv
[11:31:02 CET] <slingamn> mkvinfo shows S_VOBSUB data successfully extracted in the outfile
[11:31:15 CET] <furq> anyway if the palette is broken then try -ifo_palette VTS_01_0.IFO and remove -c:s copy
[11:31:40 CET] <furq> or rather change it to -c:s dvdsub
[11:32:33 CET] <slingamn> what file extension should i use with -c:s dvdsub?
[11:32:49 CET] <furq> still mkv
[11:33:42 CET] <furq> it's just decoding and reencoding to the same format
[11:33:48 CET] <slingamn> ah
[11:33:57 CET] <furq> ifo_palette is a dvdsub decoder option for boring technical reasons, so it doesn't work with stream copy
[11:34:05 CET] <furq> but it should be lossless anyway
[11:34:27 CET] <slingamn> so the dvd is split into 3 vob files, and my ultimate goal is to create a single detached subfile
[11:35:43 CET] <furq> you probably want to use an actual tool that will spit out a contiguous dvd title then
[11:36:00 CET] <furq> sometimes you'll get lucky and the split vob set is exactly one title, but usually there's some other stuff on there
[11:36:15 CET] <furq> normally on *nix i use tccat (part of transcode) and pipe it to ffmpeg
[11:36:36 CET] <furq> if you're sure it's exactly one title then you can literally just cat the vobs together
[11:36:59 CET] <slingamn> interesting
[11:37:19 CET] <slingamn> `-i -` is stdin?
[11:39:11 CET] <JEEB> yes
[11:39:29 CET] <JEEB> "-" is stdin in many media apps
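furq's cat-the-pieces-together approach, sketched with generated files. This uses MPEG-TS stand-ins rather than real VOBs (VOBs are MPEG program streams, but the byte-level concatenation idea is the same); all filenames are mine.

```shell
# Make two small MPEG-TS segments to stand in for the split VOB pieces.
ffmpeg -y -f lavfi -i testsrc2=duration=1:size=320x240:rate=25 -c:v mpeg2video part1.ts
ffmpeg -y -f lavfi -i testsrc2=duration=1:size=320x240:rate=25 -c:v mpeg2video part2.ts

# cat the pieces together and feed them to ffmpeg on stdin via "-i -".
cat part1.ts part2.ts | ffmpeg -y -i - -c copy joined.ts
```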
[11:54:07 CET] <slingamn> oh this is neat
[11:54:12 CET] <slingamn> i was able to do it with subtitleripper
[11:54:29 CET] <slingamn> cat /mnt/usbstick/VIDEO_TS/VTS_01_*.VOB | tcextract -x ps1 -t vob -a 0x20 > mysub.ps1
[11:54:40 CET] <slingamn> subtitle2vobsub -i /mnt/usbstick/VIDEO_TS/VTS_01_0.IFO -p ./mysub.ps1 -o mymovie
[16:11:33 CET] <pax_rhos> hello, how to record audio not from mic, but from output using ffmpeg?
[16:16:39 CET] <pax_rhos> I use pulseaudio
[16:30:42 CET] <relaxed> pax_rhos: https://ffmpeg.org/ffmpeg-devices.html#pulse
[16:52:30 CET] <pax_rhos> I've read that
[16:53:59 CET] <relaxed> you're trying to capture something playing through pulseaudio?
[16:54:10 CET] <pax_rhos> yes
[16:54:14 CET] <pax_rhos> skype video chat
[16:55:16 CET] <relaxed> while using skype, run "pactl list sources" and maybe it will list its name or something obvious
[16:56:33 CET] <relaxed> then use that name instead of "default"
[16:56:42 CET] <pax_rhos> oh, I thought I could address streams by their number
[16:56:49 CET] <pax_rhos> turns out I should in fact use name
[16:56:54 CET] <pax_rhos> even though it's damn long
[16:58:47 CET] <pink_mist> copy+paste?
[17:37:11 CET] <brimestone> Hey guys, help me understand this.. I'm doing this: "ffmpeg -i <ProRes444 4k>.mov -f null -" and all I get is "speed=0.337x" - which means if I add -vf to it - it's just going to make it slower..   how can I make the decoding faster?
[17:38:07 CET] <furq> the prores decoder apparently supports threading
[17:38:16 CET] <furq> so try adding -threads 4 before -i
[17:38:22 CET] <furq> (or whatever number you want)
[17:38:24 CET] <brimestone> Testing now..
[17:40:29 CET] <brimestone> oh wow, it works..
[17:40:42 CET] <brimestone> I'm getting 1.4x now
[17:43:13 CET] <brimestone> Awesome..  thanks..
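furq's suggestion as a self-contained benchmark. The native prores encoder generates a stand-in clip (real 4480x3096 yuv444 footage will of course be much slower); the filename and thread count are mine.

```shell
# Generate a short ProRes clip to decode.
ffmpeg -y -f lavfi -i testsrc2=duration=2:size=1280x720:rate=25 -c:v prores bench.mov

# Decode-only benchmark: -threads placed before -i applies to the
# decoder, and "-f null -" discards the output without encoding.
ffmpeg -y -threads 4 -i bench.mov -f null -
```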
[17:43:44 CET] <brimestone> But then if I add this "-vf scale=-1:720" it shoots it down back to 0.6x
[17:45:06 CET] <Mavrik> Are you sure there's no encoding going on?
[17:45:13 CET] <Mavrik> Also how slow is the machine that it gets killed by scaling? O.o
[17:47:29 CET] <DHE> you should see a line like: "Stream #0:0 -> #0:0 (prores (native) -> wrapped_avframe (native))" which indicates it's outputting to wrapped_avframe (which will be effectively not reencoding). an uncompressed format would also be fine.
[17:47:51 CET] <brimestone> Its this https://www8.hp.com/us/en/workstations/z840.html
[17:48:13 CET] <DHE> there's also the more detailed output description section but it's not so copy/paste friendly in IRC
[17:48:23 CET] <brimestone> With nvidia Quadro K4200
[17:48:46 CET] <DHE> GPU means nothing in this config
[17:49:19 CET] <brimestone> DHE checking the avframes now
[17:49:51 CET] <brimestone> Stream #0:0 -> #0:0 (prores (native) -> wrapped_avframe (native))
[17:50:24 CET] <brimestone> and this is 0:0 - Video: prores (ap4h / 0x68347061), yuv444p10le(bt709, progressive), 4480x3096, 1815020 kb/s, SAR 1:1 DAR 560:387, 24 fps, 24 tbr, 24 tbn, 24 tbc (default)
[17:50:40 CET] <DHE> that's the input... that's higher than 4k
[17:50:50 CET] <brimestone> :)
[17:50:58 CET] <DHE> and yuv444...
[17:51:03 CET] Action: DHE wipes the drool
[17:51:30 CET] <Mavrik> Yeah, you might be having unrealistic expectations about performance then :P
[17:51:58 CET] <Mavrik> Isn't there a nvidia scaling filter somewhere?
[17:52:41 CET] <brimestone> I ran the same task - which is -vf scale=-1:720:lut3d="ARRI_EE_LogC_R709.cube" - on DaVinci Resolve and man! it was doing it at around 2.4x
[17:52:42 CET] <furq> does nvdec support prores at all
[17:52:49 CET] <furq> much less 4:4:4 at >4k
[17:53:14 CET] <DHE> no, which means you'll have to move the frames in/out of the GPU which might be another possible slowdown point
[17:53:21 CET] <DHE> (not that I know what it's capable of)
[17:53:36 CET] <furq> the support matrix on nvidia.com doesn't mention it
[17:53:41 CET] <Mavrik> the idea was more like if you could do hwupload,scale,hwdownload
[17:53:46 CET] <furq> but also it's on nvidia.com so i don't believe what it has to say
[17:53:49 CET] <Mavrik> But yeah, resolution support is questionable
[17:55:08 CET] <brimestone> When Resolve does it, it uses all the cores on the CPU and all the cores on the GPU.. which got it to do ~2.4x
[17:57:34 CET] <DHE> they may be running a custom application which does even more multi-threading. I don't think the scale filter is multi-threaded
[17:57:49 CET] <brimestone> Got it..
[17:57:58 CET] <DHE> (that's what I'm doing)
[17:58:10 CET] <furq> maybe try zscale if your build has it
[17:58:23 CET] <brimestone> zscale?
[17:58:32 CET] <furq> it's a different scaling filter
[17:58:40 CET] <furq> it might have a faster path for this, idk
[17:59:21 CET] <brimestone> testing..
[18:01:13 CET] <brimestone> Neither my Linux nor my macOS build has zscale. Is it supposed to be faster?
[18:01:40 CET] <furq> like i said, idk
[18:01:45 CET] <furq> in my experience it's sometimes faster
[18:02:03 CET] <brimestone> thanks.. this has got me further than before..
[18:02:03 CET] <furq> apparently there is some private apple api for prores hwdec
[18:02:08 CET] <furq> so maybe resolve is using that
[18:02:27 CET] <brimestone> I'm also investigating that now.. AVFoundation + VideoToolBox
[18:03:04 CET] <furq> no idea if ffmpeg can use that avi
[18:03:05 CET] <furq> api
[18:03:12 CET] <furq> i would assume not but i have no idea how avfoundation works
[18:32:33 CET] <angular_mike_> Is it possible with ffmpeg to pack individual opus frames on input into an ogg container file on output without decoding to pcm?
[18:34:53 CET] <DHE> yeah, just set the codec to "copy"
[18:53:15 CET] <angular_mike_> DHE: how do I input them tho?
[19:39:03 CET] <kepstin> angular_mike_: how are they framed? ffmpeg has to know how to read that framing mechanism
[19:39:25 CET] <kepstin> opus itself doesn't have a raw format, since the packets rely on having length signalled through the container/framing
[19:44:05 CET] <angular_mike_> kepstin: raw frames intercepted from discord API
[19:44:14 CET] <angular_mike_> bytes
[19:45:13 CET] <kepstin> angular_mike_: to feed them to the ffmpeg cli tool, you have to put the opus packets into a bytestream with framing that indicates their length. in other words, you're gonna have to mux them into a container in order to input them to the ffmpeg cli.
[19:45:24 CET] <JEEB> sounds like you might as well use the lavf API :P
[19:45:31 CET] <JEEB> if you are already intercepting packets
[19:45:36 CET] <kepstin> if you're using ffmpeg libraries, you could just put the opus data into AVPacket structures, yeah
[19:45:46 CET] <angular_mike_> kepstin: here is how they are received https://github.com/b1naryth1ef/disco/blob/master/disco/voice/opus.py
[19:48:46 CET] <angular_mike_> kepstin: my bad, i think it's actually this: https://github.com/b1naryth1ef/disco/blob/305f1800c17062f962674ba1cfae695172961ea5/disco/voice/udp.py#L165-L305
[19:49:16 CET] <angular_mike_> kepstin: I'm trying to minimize unnecessary transcoding
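kepstin's point demonstrated: once Opus packets sit inside any container with proper framing, `-c copy` moves them to another container with no decode/encode round trip. A sketch assuming a build with libopus (the filenames are mine):

```shell
# Make an Opus-in-Ogg file to work with.
ffmpeg -y -f lavfi -i sine=frequency=440:duration=1 -c:a libopus packets.ogg

# Remux without touching the audio: the Opus packets are copied
# into the new container as-is, no transcoding.
ffmpeg -y -i packets.ogg -c copy remuxed.ogg
```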
[20:55:00 CET] <GuiToris> hello, ffmpeg values can't be animated, can they?
[20:56:08 CET] <kepstin> GuiToris: some filters support evaluating expressions in their parameters every frame, effectively allowing you to animate them
[20:56:32 CET] <kepstin> this is typically mentioned in the filter docs.
[20:58:01 CET] <GuiToris> kepstin, it seems perspective filter has such option
[20:58:06 CET] <GuiToris> how does it work?
[20:58:31 CET] <GuiToris> I see two options eval init or eval frame
[20:59:37 CET] <GuiToris> https://ffmpeg.org/ffmpeg-filters.html#perspective
[20:59:43 CET] <GuiToris> I don't see anything like that
[21:00:10 CET] <kepstin> the rotate filter has some good examples: https://www.ffmpeg.org/ffmpeg-filters.html#Examples-92 that make  your video spin or oscillate.
[21:00:33 CET] <kepstin> other filters usually work similarly, but you have to check the docs *on each filter* to find out what variable names are available
[21:01:34 CET] <kepstin> (note that in the rotate filter examples, you have to replace `T` and `A` with numbers to run them)
[21:01:34 CET] <GuiToris> wow, it doesn't look easy, but I'll give it a shot
[21:02:04 CET] <kepstin> yeah, any animation you do has to be expressed in terms of time
[21:02:37 CET] <kepstin> it's not "animate from here to there", it's "calculate how far between here and there it is at time t"
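kepstin's "calculate it at time t" idea, in the style of the rotate filter examples. The expression is re-evaluated on every frame, with t being the frame time in seconds; the test source, duration, and spin rate are mine.

```shell
# The angle 2*PI*t/5 grows with time, so the video completes one
# full rotation every 5 seconds - animation driven purely by t.
ffmpeg -y -f lavfi -i testsrc2=duration=2:size=320x240:rate=25 \
       -vf "rotate=2*PI*t/5" spin.mp4
```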
[21:03:07 CET] <GuiToris> I have the right values, and when I tried them with ffmpeg it looked okay; then I tried the very same numbers in after effects but the output was way different
[21:03:13 CET] <GuiToris> do you happen to know the reason?
[21:03:33 CET] <kepstin> GuiToris: i have no idea what numbers or effect you're talking about, so that question is meaningless
[21:03:59 CET] <GuiToris> I'll show you and it will make much more sense
[21:04:56 CET] <kepstin> as far as it goes in general, you should read the ffmpeg docs, read the after effects docs, find out what the meaning of the parameters are, and translate them if it makes sense to do so.
[21:06:53 CET] <kepstin> if this is about the perspective filter in particular, note that ffmpeg uses sense=source by default, which is basically an inverse transform - you specify the locations of the corners in the source video, and it pulls those out to the corners of the destination video
[21:07:09 CET] <kepstin> it's possible that AE goes in the other direction.
[21:08:06 CET] <GuiToris> wait, what? inverse transform?
[21:08:30 CET] <GuiToris> how can I calculate the other values?
[21:08:35 CET] <GuiToris> this must be the problem
[21:08:54 CET] <GuiToris> with aftereffects I have no other options but the basic xy coordinates
[21:09:39 CET] <GuiToris> I saw this source and destination options but I don't understand them
[21:10:23 CET] <kepstin> i have no idea how to convert the values. I know they can be converted, but I don't know the formulas. If you know some linear algebra it shouldn't be hard to work out.
[21:10:48 CET] <GuiToris> oh yeah, changing it to destination I get the very same output which is not what I want
[21:29:25 CET] <GuiToris> I can't figure it out, if anyone know how to convert values between 'source' and 'destination' please let me know
[21:35:31 CET] <GuiToris> kepstin, I assume you don't know but I have found another tool inside After Effects which has an 'unstretch' option, and I'm not 100% sure but it looks okay, do you think this is the source/destination switch?
[21:36:06 CET] <kepstin> no idea, i've never used AR
[21:36:08 CET] <kepstin> AE*
[21:37:18 CET] <GuiToris> I'll compare them but I think that's what I wanted, thank you for your time and help kepstin
[21:37:52 CET] <durandal_1707> options are just x/y coordinates
[21:39:41 CET] <GuiToris> that's what I also thought
[21:40:36 CET] <kepstin> yeah, but the x/y coordinates have to be different to get the same result depending on whether the forwards or reverse transform (sense=source, sense=destination) is selected
[21:41:32 CET] <kepstin> the "forwards" perspective transform is normally you take the existing corners of the image and then pull them to the specified coordinates (sense=destination)
[21:41:58 CET] <kepstin> ffmpeg defaults to a reverse transform, where you specify x/y coordinates of spots within the image frame, and then they get pulled to the corners of the image.
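The two senses kepstin describes, side by side. The same four coordinate pairs (chosen by me for a generated test clip) produce different outputs: sense=source (the default) pulls the given points out to the corners, sense=destination pushes the corners in to the given points.

```shell
# Reverse transform (default): the listed points in the source
# frame are mapped to the corners of the output.
ffmpeg -y -f lavfi -i testsrc2=duration=1:size=320x240:rate=25 \
       -vf "perspective=x0=20:y0=20:x1=300:y1=10:x2=10:y2=230:x3=310:y3=220:sense=source" src_sense.mp4

# Forward transform: the corners of the source are mapped to the
# listed points in the output.
ffmpeg -y -f lavfi -i testsrc2=duration=1:size=320x240:rate=25 \
       -vf "perspective=x0=20:y0=20:x1=300:y1=10:x2=10:y2=230:x3=310:y3=220:sense=destination" dst_sense.mp4
```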
[21:52:40 CET] <GuiToris> the original image: https://ptpb.pw/nhz9 , code: ffmpeg -i original.jpg -vf perspective=-32:73:1895:28:5:1125:1927:1060 ffmpeg.jpg   output: https://ptpb.pw/kOxZ after effects: https://ptpb.pw/YuBl  then I found this: https://ptpb.pw/W5D9
[21:52:49 CET] <GuiToris> this looks really similar
[21:53:20 CET] <GuiToris> I think that button is the source/destination switch in aftereffects
[21:53:47 CET] <GuiToris> I haven't rendered the image so I could compare them but it's much better than before
[21:54:04 CET] <GuiToris> I don't know why I needed a separate effect for this
[23:07:14 CET] <angular_mike_> `ffmpeg -i in.pcm  -f s16le -ar 48k -ac 2 out.ogg` gives me `in.pcm: Invalid data found when processing input` while `play -t raw -r 48k -e signed -b 16 -c 2 in.pcm` plays audio
[23:07:18 CET] <angular_mike_> what's wrong?
[23:08:56 CET] <durandal_1707> angular_mike_: wrong order of options
[23:09:08 CET] <angular_mike_> wdym?
[23:09:31 CET] <kepstin> angular_mike_: your options are in the wrong order
[23:09:38 CET] <angular_mike_> what's the right order?
[23:09:39 CET] <durandal_1707> angular_mike_: move -i in.pcm  after -ac 2
[23:09:44 CET] <JEEB> input options before input
[23:09:47 CET] <angular_mike_> huh
[23:09:49 CET] <angular_mike_> weird
[23:09:50 CET] <JEEB> output options after input and before output
[23:10:25 CET] <angular_mike_> what goes after output then?
[23:10:34 CET] <kepstin> `ffmpeg -f s16le -sample_rate 48000 -channels 2 -i in.pcm out.ogg`
[23:10:47 CET] <JEEB> angular_mike_: you can put another output after the first etc
[23:10:50 CET] <kepstin> anything after the output filename will be applied to the next output filename
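JEEB's ordering rule end to end, with a generated input. Raw s16le data has no header, so the -f/-ar/-ac options describing it must come before their -i; the same flags after -i would describe the output instead. This sketch writes a .wav rather than the .ogg from the chat so it doesn't depend on a Vorbis/Opus encoder being present; the filenames are mine.

```shell
# Generate one second of raw signed 16-bit little-endian stereo PCM.
ffmpeg -y -f lavfi -i sine=frequency=440:duration=1 \
       -f s16le -ar 48000 -ac 2 in.pcm

# Input options before -i describe the headerless raw data;
# anything between -i and the output filename applies to the output.
ffmpeg -y -f s16le -ar 48000 -ac 2 -i in.pcm out.wav
```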
[00:00:00 CET] --- Thu Mar  7 2019


More information about the Ffmpeg-devel-irc mailing list