[Ffmpeg-devel-irc] ffmpeg.log.20180126

burek burek021 at gmail.com
Sat Jan 27 03:05:01 EET 2018


[00:11:29 CET] <zerodefect> So I see there is a 'av_buffersrc_parameters_set(..)' method. Does what it says on the tin.
[00:11:33 CET] <zerodefect> :)
[00:12:08 CET] <zerodefect> It's times like this where I wish I was more familiar with the API
[00:13:35 CET] <zerodefect> JEEB: It looks like I'm at that point where you once were.
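For reference, a minimal sketch of how av_buffersrc_parameters_set() is typically used (the function/field names are from the public buffersrc API; the helper name and values are illustrative only): allocate the parameter struct, fill in what is known about the frames to be fed in, and apply it to the buffer source before the graph is configured.

    #include <libavfilter/buffersrc.h>
    #include <libavutil/mem.h>

    /* Sketch: pass known stream properties to a "buffer" source filter.
     * Error handling trimmed; the field values are placeholders. */
    static int set_src_params(AVFilterContext *buffersrc_ctx, int w, int h,
                              enum AVPixelFormat fmt, AVRational time_base)
    {
        AVBufferSrcParameters *par = av_buffersrc_parameters_alloc();
        int ret;
        if (!par)
            return AVERROR(ENOMEM);
        par->width     = w;
        par->height    = h;
        par->format    = fmt;
        par->time_base = time_base;
        ret = av_buffersrc_parameters_set(buffersrc_ctx, par);
        av_freep(&par);
        return ret;
    }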
[02:45:23 CET] <lindylex> I am having trouble slicing this video.  This is the info about the video :  https://pastebin.com/sR5xrAMU   This is my command : ffmpeg -i l3.mov -vcodec copy -acodec copy -ss 00:00:05.000 -t 00:00:11.000 l4.mov   This is the file :  http://mo-de.net/d/l3.mov
[02:47:06 CET] <lindylex> I meant this is my command :  ffmpeg -i l3.mov -vcodec copy -acodec copy -ss 00:00:05.000 -to 00:00:11.000 l4.mov
[02:49:14 CET] <fella> with *codec copy you can only cut at full frame boundaries - if that's your question
[02:50:33 CET] <lindylex> I am asking why it does not cut at the seconds I want it to.  I think it is the video file.  It works for other video files.
[02:52:01 CET] <fella> you have full frames/images ( also called I-Frames ) and other frames which are derived from those full frames
[02:52:19 CET] <fella> they occur only every 10sec or so
[02:52:38 CET] <fella> and with -vcodec copy you can only cut at those boundaries
[02:53:41 CET] <lindylex> How do I fix this?
[02:54:36 CET] <fella> change '-vcodec copy' to '-vcodec libx264 -preset fastest' and '-acodec copy' to '-an'
[02:54:45 CET] <fella> ^^ for testing - to see if it works
[02:56:48 CET] <lindylex> This ffmpeg -i l3.mov -ss 00:00:05.000 -to 00:00:11.000 -vcodec libx264 -preset fastest -an  l4.mov  Gives me this error :  Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height
[02:57:25 CET] <lindylex> I changed it to this ffmpeg -i l3.mov -ss 00:00:05.000 -to 00:00:11.000 -vcodec libx264 -preset ultrafast -an  l4.mov
[02:57:34 CET] <lindylex> One sec let me test.
[02:58:08 CET] <fella> -t EN:DT:IM.e, not -to !
[02:59:26 CET] <lindylex> -to works better because I do not have to do any calculations.
[02:59:32 CET] <lindylex> Thanks it worked.
[02:59:56 CET] <fella> kk
[03:10:56 CET] <jasom> I'm trying to IVTC some video; -vf pullup seems to drop 2x as many frames as expected (it reliably produces ~12000/1001 FPS output on telecined video)
[03:26:07 CET] <jasom> ah, nevermind my test video was already deinterlaced; on actual TC content it seems to work.
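For reference, typical inverse-telecine invocations look something like the lines below (filenames are placeholders; the pullup docs recommend pinning the output rate afterwards, and fieldmatch+decimate is the more modern alternative):

    ffmpeg -i telecined.ts -vf pullup,fps=24000/1001 -c:v libx264 -crf 18 -c:a copy out.mkv
    ffmpeg -i telecined.ts -vf fieldmatch,yadif=deint=interlaced,decimate -c:v libx264 -crf 18 -c:a copy out.mkv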
[03:32:08 CET] <kazuma_> had 3 crashes today with zeranoe's nightly build
[03:32:25 CET] <kazuma_> before that, i don't think i've ever had an ffmpeg crash
[03:57:24 CET] <mkdir_> Hello
[03:59:18 CET] <mkdir_> stevenliu, a question?
[04:01:49 CET] <stevenliu> What's happen?
[04:02:43 CET] <mkdir_> I'm trying to convert my ogg to .wav, but I am getting a strange threshold cutoff
[04:02:50 CET] <mkdir_> viewing the spectrogram it becomes apparent
[04:06:29 CET] <stevenliu> how can i reproduce it?
[04:08:17 CET] <mkdir_> ffmpeg -i myOgg.ogg myOgg.wav
[04:08:34 CET] <stevenliu> I don't have ogg file :(
[04:08:39 CET] <mkdir_> I have a python script to plot the wav as a spectrogram
[04:08:51 CET] <mkdir_> http://shtooka.net/search.php?str=cat&lang=
[04:08:52 CET] <mkdir_> a cat
[04:09:44 CET] <stevenliu> Can you give me a wget link?
[04:10:08 CET] <mkdir_> what is a wget link?
[04:10:50 CET] <mkdir_> ooh
[04:10:51 CET] <mkdir_> one moment
[04:10:54 CET] <mkdir_> I will do that
[04:10:56 CET] <mkdir_> my guy
[04:11:57 CET] <mkdir_> http://packs.shtooka.net/eng-balm-judith/ogg/eng-a35d9c64.ogg
[04:46:52 CET] <mkdir_> stevenliu, still there buddy?
[04:47:12 CET] <stevenliu> I Cannot reproduce it
[04:48:06 CET] <stevenliu> http://bbs.chinaffmpeg.com/out.wav
[04:48:09 CET] <stevenliu> try this please
[04:48:31 CET] <stevenliu> ffmpeg -i eng-a35d9c64.ogg out.wav
[04:48:38 CET] <stevenliu> ffmpeg version N-89672-g41e51fbcd9 Copyright (c) 2000-2018 the FFmpeg developers
[04:51:19 CET] <mkdir_> out.wav has the cutoff
[04:53:15 CET] <mkdir_> Maybe I should try vorbis...
[04:53:33 CET] <mkdir_> you may need my script to see the cutoff
[04:53:38 CET] <mkdir_> are you looking at the spectrogram?
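If comparing spectrograms without the external script, ffmpeg can render one itself; a possible invocation (image size is arbitrary):

    ffmpeg -i out.wav -lavfi showspectrumpic=s=1024x512 spectrum.png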
[07:36:00 CET] <himawari> I am splitting video files using the ffmpeg stream segmenter.
[07:36:01 CET] <himawari> Some of the segments won't play. When I use ffprobe on the offending video files, I noticed the start time is after the duration. Could the incorrect start time be the cause of the problem?
[07:36:46 CET] <himawari> my segmenting command: ffmpeg -i movie.mp4 -an -map 0 -segment_time 8 -f segment out.mp4
[07:38:47 CET] <himawari> actually out_%03d.mp4
[07:39:01 CET] <himawari> there seem to be no warnings or errors when splitting
[07:40:41 CET] <Johnjay> any idea how much louder I can make this mp3 file?
[07:40:54 CET] <Johnjay> i'm at 5x volume. not sure what typical pc speakers can tolerate
[07:41:19 CET] <Diag> ...?
[07:41:29 CET] <Diag> its not the speakers
[07:42:02 CET] <Diag> pcm is only 16 bit usually so you literally get +-32768
[07:42:08 CET] <Diag> you cant go above that
[07:42:10 CET] <himawari> Audio files have a limit to how loud they can get. Anything past that limit will just clip
[07:42:19 CET] <Johnjay> ok. but i don't want to damage the speakers on my pc when i play it back
[07:42:23 CET] <Johnjay> ok
[07:42:29 CET] <Diag> you cant.......
[07:42:34 CET] <himawari> If you want to make it sound louder you can use a limiter to boost the volume while keeping it from clipping
[07:42:56 CET] <Johnjay> i'm just using the volume filter on ffmpeg right now
[07:43:15 CET] <Johnjay> how would I use a limiter in ffmpeg?
[07:44:14 CET] <Diag> Johnjay: himawari is only partially right with a limiter, dynamic range is shit in audio nowadays anyways
[07:44:20 CET] <Diag> youll likely not gain much from a limiter
[07:45:29 CET] <Johnjay> well i know very little about audio engineering anyway
[07:45:36 CET] <Johnjay> so half of the ffmpeg filters are greek to me
[07:45:38 CET] <himawari> Johnjay: check this out https://superuser.com/questions/323119/how-can-i-normalize-audio-using-ffmpeg
[07:45:38 CET] <Diag> ok well basically
[07:45:54 CET] <Johnjay> thanks mr. sunflower san
[07:45:58 CET] <Diag> a limiter will just try to keep the amplitude the same
[07:46:10 CET] <Diag> So it won't duck and it won't clip
[07:51:33 CET] <furq> !filter volumedetect @Johnjay
[07:51:33 CET] <nfobot> Johnjay: http://ffmpeg.org/ffmpeg-filters.html#volumedetect
[07:51:50 CET] <furq> run that and then amplify it by max_volume
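Spelled out, furq's suggestion would look roughly like this (the 7 dB figure is only an example of whatever volumedetect reports as max_volume; filenames and the MP3 quality setting are placeholders):

    ffmpeg -i input.mp3 -af volumedetect -f null -
    # suppose it reports "max_volume: -7.0 dB", then:
    ffmpeg -i input.mp3 -af volume=7dB -c:a libmp3lame -q:a 2 output.mp3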
[08:02:02 CET] <himawari> I figured out the answer to my own question! Adding the option '-reset_timestamps 1' fixed it
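Combined with the command from earlier, that would be roughly (untested here):

    ffmpeg -i movie.mp4 -an -map 0 -f segment -segment_time 8 -reset_timestamps 1 out_%03d.mp4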
[08:03:12 CET] <Johnjay> oh nice
[08:03:13 CET] <Johnjay> thanks
[08:15:14 CET] <m712> hi, is there a way I can shift the subtitles of a video when using the subtitles filter? I'm trying to cut a segment from a video file with the subtitles burned. ffmpeg -ss 00:16:05.00 -to 00:16:45.00 -i my_vid.mkv -vf subtitles=my_vid.mkv -c:v libx264 -b:v 600K -c:a libmp3lame -b:a 128K out.mp4
[08:15:49 CET] <m712> this starts the subtitles from the beginning instead of from the segment i want
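A workaround sometimes used (an assumption here, not confirmed in this log) is a two-pass approach: cut the segment with stream copy first, so the embedded subtitle timestamps shift along with the video, then burn them in from the cut file; note the stream-copied cut is only keyframe-accurate:

    ffmpeg -ss 00:16:05 -i my_vid.mkv -t 00:00:40 -c copy segment.mkv
    ffmpeg -i segment.mkv -vf subtitles=segment.mkv -c:v libx264 -b:v 600K -c:a libmp3lame -b:a 128K out.mp4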
[08:45:22 CET] <FishPencil> Does x265/x264 CRF mean constant quality on an external scale, or is it related to the input stream? Basically, is there any CRF value (other than 0), that would result in no visual quality loss after 1000s of reencodes?
[08:59:57 CET] <kerio> i doubt there's an idempotent h264 encoder out there
[09:04:41 CET] <FishPencil> So they're all just relative quality from the source
[09:11:47 CET] <kerio> both crf and qp try to achieve constant quality, using two different metrics
[09:12:17 CET] <kerio> keyword being try
[09:14:48 CET] <FishPencil> I guess a broader question would be: if I want to retain a video with minimal to no quality loss (but don't want to use FFV1), what crf value for x265 is good? I need to filter the video, so I can't just copy it
[09:16:08 CET] <FishPencil> What would be considered "safe", like -crf 18 was for x264 I believe
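As a point of reference only (not a guarantee of transparency), a high-quality libx265 encode of filtered video might be shaped like the line below; the CRF scales differ between x264 (default 23) and x265 (default 28), so the value is a placeholder to tune by eye, and <your filters> stands in for whatever filtering is needed:

    ffmpeg -i in.mkv -vf <your filters> -c:v libx265 -preset slow -crf 18 -c:a copy out.mkv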
[10:33:43 CET] <dradakovic> I have a question regarding the -re option. I am converting live online radios into mpegts. In order for the radios to work fine i specifically have to have the -re option, else they are choppy.
[10:34:13 CET] <dradakovic> Is there a better alternative to -re as some radios still get choppy after few hours and i need to reconnect to them again
[10:34:34 CET] <dradakovic> my pastebin for 1 radio https://pastebin.com/L3CjYSf1
[10:34:50 CET] <dradakovic> When i reconnect to the radio, the choppiness is gone
[10:36:43 CET] <dradakovic> I have about 30 radios and this thing happens on lets say 5 or 6 of the radios
[10:37:16 CET] <dradakovic> If i remove the -re option, the choppiness is present on all the radios as it causes the packets to flow too fast to my device
[10:39:25 CET] <dradakovic> Maybe this will be a better pastebin https://pastebin.com/XbU3Ve8c
[11:00:52 CET] <dradakovic> Or better question, how do i make sure that radio outputs always at the 1x speed?
[11:01:46 CET] <JEEB> you can limit the udp and mpegts outputs, read up on the documentation with regards to their options
[11:02:14 CET] <JEEB> and yes, -re is a hack although usually it fails the other way (you get a timestamp jump to 8h in the future and your ffmpeg.c is going to be sitting there :P)
[11:05:48 CET] <dradakovic> Would you have any recommendation options on what to try? I have been playing with various options for months now and this code is the best i could come up with
[11:07:25 CET] <dradakovic> I mean it seems like i need -re. But it also looks like it stops working for specific radios after some time and they act as if no -re were present.
[11:16:52 CET] <dradakovic> Or what would be the first thing you would change, for udp packets not to cause overflow?
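Following JEEB's pointer, a paced MPEG-TS-over-UDP output might look roughly like this (address, rates and sizes are placeholders; the bitrate/burst_bits/pkt_size options are documented under the udp protocol and -muxrate under the mpegts muxer):

    ffmpeg -re -i <radio url> -c:a copy -f mpegts -muxrate 400000 "udp://192.0.2.1:1234?pkt_size=1316&bitrate=400000&burst_bits=10496"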
[11:41:17 CET] <XoXFaby> why/how does rendering 100mb of frames into a 11mb video file end up using 1.5gb of RAM?
[11:42:11 CET] <pmjdebruijn> presumably a bunch of frames need to be in memory uncompressed ?
[11:42:15 CET] Action: pmjdebruijn is just guessing
[11:58:27 CET] <DHE> I'm guessing libx264. look-ahead buffers a lot of frames and 1920x1080 in decompressed format is pretty big
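If that guess is right, the look-ahead depth (and x264 thread count) can be capped at some quality/speed cost; a sketch with arbitrary values and a made-up input name:

    ffmpeg -i frames_%04d.png -c:v libx264 -preset veryfast -x264-params rc-lookahead=10:threads=2 out.mp4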
[12:08:20 CET] <fdsfds> when I save an rtsp stream to a 264 file i get the error malloc of size ... failed, why?
[12:08:33 CET] <fdsfds> ffmpeg -i rtsp://...... test.264
[12:11:48 CET] <DHE> if the source is already h264, you should use -c copy
[12:20:33 CET] <fdsfds> DHE: even when my source is mjpeg i get this error
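DHE's suggestion as a full command might look like this (URL is a placeholder; -rtsp_transport tcp is optional but often avoids UDP packet loss). For an MJPEG source the video would have to be re-encoded instead of copied, since a raw .264 output can only hold H.264:

    ffmpeg -rtsp_transport tcp -i rtsp://camera.example/stream -c:v copy -an test.264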
[12:42:12 CET] <steinchen> hi all
[12:42:43 CET] <steinchen> i want to build a very simple streaming solution to view all videos in a specific folder.. but i have no idea where to start
[12:43:00 CET] <steinchen> server is running lamp and ffserver etc
[12:55:30 CET] <BtbN> ffserver is dead, don't use it.
[13:08:07 CET] <fdsfds> BtbN: which server is recommended that works with ffmpeg?
[13:08:31 CET] <fdsfds> and can stream via most protocols? (mjpeg/rtsp (tcp and udp))
[13:31:14 CET] <DHE> fdsfds: nginx-rtmp is what we're recommending users consider
[13:51:22 CET] <fdsfds> DHE: nginx-rtmp working with ffmpeg? can i do ffmpeg -i (some input) and send it directly to nginx-rtmp ? like i did with ffserver?
[13:51:37 CET] <furq> yes
[13:51:57 CET] <fdsfds> which protocols are supported?
[13:52:08 CET] <fdsfds> mjpeg/rtsp (tcp and udp)
[13:52:25 CET] <DHE> I don't think it cares about the codec
[13:53:18 CET] <fdsfds> that's not a codec, those are streaming protocols
[13:55:02 CET] <DHE> mjpeg is a codec
[13:55:52 CET] <fdsfds> it's a codec + streaming protocol
[13:58:44 CET] <fdsfds> i don't see that nginx-rtmp can stream with rtsp udp and rtsp tcp, too bad, ffserver could do it ...
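For reference, ingesting into an nginx-rtmp "application" from ffmpeg generally looks like the line below (server and stream names are placeholders). RTMP/FLV only carries a limited codec set, so an MJPEG or RTSP source is typically read by ffmpeg and re-encoded to H.264/AAC on the way in:

    ffmpeg -i rtsp://camera.example/stream -c:v libx264 -preset veryfast -c:a aac -f flv rtmp://localhost/live/mystream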
[14:14:44 CET] <XoXFaby> DHE: any suggestions for helping it not use as much RAM?
[14:15:08 CET] <XoXFaby> I need to render 30 to 250, depending on how long I want my clips to be, on a Raspberry Pi
[14:29:13 CET] <steinchen> mmh is what i'm planning possible? to stream all videos from a specific folder? and also if new files are added or removed..
[15:05:05 CET] <buzzing> Hey everyone. I would like to merge several video files with the same codec into one. But I would like x seconds of black between them and a y frames/seconds long audio fade at the beginning and end of each video. How do I do this with ffmpeg? :-)
[16:10:28 CET] <kepstin> buzzing: by writing fairly long and complicated filter chains. You'll probably want a script that'll generate temp files with the fade-in/out applied, then generate some segments of silent/black video, then concatenate everything together.
[16:11:32 CET] <kepstin> note that there'll be some scripting involved in particular to do the audio fade-out, since none of the filters support anything like "y seconds before the end of the video" - so you have to pre-calculate the timing.
[16:36:57 CET] <buzzing> Thanks kepstin. I'll look into it :-)
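A rough sketch of the kind of script kepstin describes, assuming all inputs share resolution/framerate and the fade times have been pre-calculated (every name, size and duration below is made up):

    # fade each clip's audio in/out while re-encoding
    ffmpeg -i a.mp4 -af "afade=t=in:d=2,afade=t=out:st=58:d=2" -c:v libx264 -crf 18 -c:a aac a_faded.mp4
    # generate x seconds of black with silent audio, matching the clips' parameters
    ffmpeg -f lavfi -i color=black:s=1280x720:r=25:d=3 -f lavfi -i anullsrc=r=48000:cl=stereo -t 3 \
           -c:v libx264 -crf 18 -c:a aac gap.mp4
    # concatenate with the concat demuxer (codecs/params must match across all pieces)
    printf "file 'a_faded.mp4'\nfile 'gap.mp4'\nfile 'b_faded.mp4'\n" > list.txt
    ffmpeg -f concat -safe 0 -i list.txt -c copy merged.mp4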
[16:40:59 CET] <Daglimioux> Hey there. I'm doing a compilation with this command: ffmpeg -i video1.webm -i video2.webm -f lavfi -i color=s=640x360:color=black:d=1 -i video3.webm -i audio1.opus -i audio2.opus -i audio2.opus -filter_complex "amix=inputs=3:dropout_transition=0; [0:v] scale=640x360, setdar=0 [r1c1]; [1:v] scale=640x360, setdar=0 [r1c2]; [3:v] scale=640x360, setdar=0 [r2c2]; [r1c1][r1c2] hstack=2 [r1]; [2][r2c2] hstack=2 [r2]; [r1][r2] vstack=2" 
[16:41:01 CET] <Daglimioux> libx264 -b:v 300k -crf 20 -threads 2 -f mp4 -preset ultrafast -y output.mp4
[16:42:42 CET] <Daglimioux> but my problem is that one of those videos (video2.webm) is missing a "frame" at 00:00:26 and produces an output that freezes all videos at second 26. Is there a way to fix that?
[16:48:34 CET] <kepstin> hmm. the framesync code in hstack/vstack should allow it to keep showing frames from other videos.
[16:48:43 CET] <kepstin> that's really odd.
[16:49:49 CET] <ddubya> is avcodec_descriptor_get the replacement for AVCodecContext->codec_name ?
[16:50:58 CET] <ddubya> or is avcodec_get_name better
[16:52:38 CET] <storrgie> I've got a decklink device in linux that I'm trying to record video/audio from... the example in the ffmpeg documentation has `-format_code Hi50`... is there a way to list out format_code so I can see what options I'd like to use?
[16:56:07 CET] <Daglimioux> kepstin: Yes, that's really odd. The other frames from other videos, even from the same video, are not showing up. The first video duration is 00:01:33, while the others are 00:01:32 and I get an output file of 00:01:33, but at second 00:00:26, where the frames are missing in video2, all videos are frozen
[16:56:39 CET] <Daglimioux> kepstin: I have the latest version of ffmpeg (I updated it last monday)
[17:00:13 CET] <storrgie> actually I was able to get a list of format_code.. however I'm getting `Unrecognized option 'format_code'.` now... and my ffmpeg does have `--enable-decklink` in its configuration
[17:13:03 CET] <kepstin> storrgie: that's an input option, so it should be placed before the -i for the decklink input
[17:13:16 CET] <kepstin> I think
[17:14:53 CET] <storrgie> I'm wondering if I've compiled ffmpeg properly
[17:15:11 CET] <storrgie> there is a cryptic note in the documentation about decklink: "To enable this input device, you need the Blackmagic DeckLink SDK and you need to configure with the appropriate --extra-cflags and --extra-ldflags"
[17:15:21 CET] <ddubya> is it necessary to hard-code what codecs are compatible with what containers or is there a way to query it
[17:15:26 CET] <storrgie> I'm not sure what I should have --extra-cflags and --extra-ldflags set to
[17:15:59 CET] <kepstin> ddubya: there is not a working query right now (there's an api for it, but it's not wired up in most containers, so it just returns "maybe" for almost every codec)
[17:16:14 CET] <ddubya> ok
[17:16:26 CET] <kepstin> ddubya: on the other hand, if you try to mux a particular codec into a container, it'll either work or produce an error
[17:16:40 CET] <kepstin> so... try, and if it doesn't work then it's not compatible :)
[17:16:50 CET] <ddubya> sure, but I don't want to present a UI with options that don't work
[17:17:21 CET] <kepstin> sure, but you also don't want to present a ui that won't let you do something that does work because it was added more recently than the ui was built.
[17:17:29 CET] <kepstin> depends on how advanced your users are i guess
[17:17:34 CET] <ddubya> yeah its a tradeoff
[17:18:09 CET] <ddubya> given this app has a lot of users I'll take the former and save myself some bug reports
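The "maybe" query kepstin mentions is avformat_query_codec(); a minimal sketch of calling it (muxer looked up by name, compliance level left at normal; the helper name is made up):

    #include <libavformat/avformat.h>

    /* Returns 1 if the muxer says yes, 0 if no, and a negative value when the
     * muxer has no query callback -- the "maybe" case discussed above. */
    static int can_mux(const char *muxer_name, enum AVCodecID codec_id)
    {
        AVOutputFormat *ofmt = av_guess_format(muxer_name, NULL, NULL);
        if (!ofmt)
            return AVERROR_MUXER_NOT_FOUND;
        return avformat_query_codec(ofmt, codec_id, FF_COMPLIANCE_NORMAL);
    }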
[17:18:41 CET] <storrgie> kepstin, even though I have `--enable-decklink` in my configuration... it appears ffmpeg doesn't recognize that critical option of `format_code`: https://gist.githubusercontent.com/storrgie/92b43fbaf8a69fe0b338ab4c608ae631/raw/37f35fb7dabe672f5bf1d7cbdc2e7555dacdb19c/gistfile1.txt
[17:19:28 CET] <kepstin> storrgie: what does the output of "ffmpeg -h demuxer=decklink" look like?
[17:21:54 CET] <storrgie> kepstin, https://gist.githubusercontent.com/storrgie/de82baeae1e61369469edc42eebbf063/raw/552bc9bda0868fc089dfd78f7cec38870796b2c9/gistfile1.txt
[17:22:32 CET] <storrgie> kepstin, I was going off the discussion here: http://www.ffmpeg-archive.org/DeckLink-cannot-enable-video-input-td4679883.html and the documentation in ffmpeg where both say you need to have format_code specified
[17:23:09 CET] <kepstin> storrgie: well, that decklink demuxer simply does not have a format_code option, so hmm.
[17:23:35 CET] <ddubya> anyone know if the old and new codec_cap flags after rename have the same values?
[17:24:22 CET] <storrgie> kepstin, when trying to record from the interface I just get [decklink @ 0x11b7a00] Cannot enable video input, DeckLink Duo (1): Input/output error
[17:30:09 CET] <kepstin> storrgie: I think your problem is that your ffmpeg is just too old
[17:30:20 CET] <kepstin> storrgie: that's the 2.8 from ubuntu, you should get something newer
[17:30:54 CET] <kepstin> if you compiled & installed a new ffmpeg, you need to make sure it's in your PATH so you actually run it
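Once a build that actually has the decklink options is first in PATH, the modes can be listed and then selected along these lines (device name copied from the log; Hi50 is just the documented example code and the output codecs are placeholders):

    ffmpeg -f decklink -list_formats 1 -i 'DeckLink Duo (1)'
    ffmpeg -f decklink -format_code Hi50 -i 'DeckLink Duo (1)' -c:v libx264 -crf 18 -c:a aac capture.mp4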
[17:41:39 CET] <Johnjay> is there a way to compare decibels of sound
[17:41:50 CET] <Johnjay> with the -4 and -10db levels that ffmpeg reports in an mp3 file?
[17:42:06 CET] <Johnjay> this wiki article is saying 40db is the volume of a whisper
[17:44:52 CET] <kepstin> Johnjay: not sure what values you're talking about ffmpeg report. Is this replaygain stuff?
[17:45:05 CET] <Johnjay> volumedetect filter
[17:45:17 CET] <Johnjay> it reports mean and max db level with max PCM volume as a reference
[17:45:21 CET] <furq> Johnjay: https://en.wikipedia.org/wiki/DBFS
[17:45:22 CET] <kepstin> ok, volumedetect just returns volumes in dbFS, yeah
[17:45:43 CET] <kepstin> so all that says is "this file is 4dB quieter than the max that can be represented in digital audio"
[17:46:12 CET] <kepstin> it doesn't have any direct mapping to a reference, since the output level depends on the volume you have set on your sound system
[17:46:38 CET] <Johnjay> weird it says that -6dbFS is 50% of max volume but 10^(-6/10) is .25
[17:46:41 CET] <Johnjay> not .5
[17:47:12 CET] <Johnjay> wouldn't -3dbFS be 50%?
[17:48:15 CET] <kepstin> a change of ±6dB is a change in signal levels (voltage) of about ±50%
[17:49:53 CET] <furq> it's 20 * log10(value, max)
[17:50:17 CET] <Johnjay> wikipedia is saying you're taking the square root
[17:50:23 CET] <Johnjay> and distinguishes power ratio from amplitude ratio
[17:51:06 CET] <kepstin> Johnjay: note that we're talking about a change of 6 db, so you have to use a +6 in the power, and that it's a 20 rather than 10, yeah
[17:51:12 CET] <furq> value / max rather
[17:51:32 CET] <Johnjay> wiki is saying it's "useful" to consider the square of the field strength... ok then
[17:51:42 CET] <kepstin> 10^(6/20) = 1.99, so an increase of 6dB multiplies signal values by 1.99 (approximately doubles them)
[17:51:51 CET] <therage3> it's interesting how doing these things properly requires knowledge of electrical/electronics engineering
[17:51:55 CET] <kepstin> for -6dB, you divide by 1.99
[17:52:15 CET] Action: kepstin rounded poorly, but you get the idea
[17:52:27 CET] <furq> but yeah since you were asking earlier about damaging speakers if you amplify too much
[17:52:34 CET] <furq> if you overamplify you'll just clip to 0dB
[17:52:59 CET] <kepstin> well, a pcm signal clipped at full scale can actually be louder than 0dBFS :/
[17:55:11 CET] <kepstin> but as for damaging speakers, you have to look at (analogue?) power meters tracking voltages going to the speakers and see if they're in specs, I guess? Maybe you can work back through your amp and dac settings to figure out what digital signal level corresponds to what output voltage.
[17:55:45 CET] <Johnjay> right
[17:55:56 CET] <Johnjay> but now i'm trying to figure out how to get a sound sample to be a whisper, i.e. 40db
[17:56:10 CET] <furq> turn your volume down
[17:56:15 CET] <Johnjay> lol ok
[17:56:15 CET] <kepstin> simple - play any sound sample, and adjust the volume on the amp until it's a whisper
[17:56:20 CET] <furq> ^5
[17:57:38 CET] <kepstin> Johnjay: this is a calibration thing. you need to find out what digital signal level corresponds to what dB output from your system.
[17:57:47 CET] <kepstin> and once you have that, then you just use relative numbers.
[17:58:55 CET] <kepstin> (and this obviously also depends on listener's distance to the speakers, too)
[18:00:51 CET] <furq> dB is a relative scale, and dBFS and dB SPL aren't relative to the same thing
[18:01:55 CET] <furq> dBFS can tell you whether something will sound like a whisper relative to some other sound in the same file, but not absolutely
[18:02:02 CET] <furq> and obviously it still depends on your output level
[18:03:12 CET] <kepstin> keep in mind that many SPL meters can use a human-perception weighted scale (humans have different response at different frequencies), and you might have to do similar compensation on the digital side, e.g. by using the ebu r128 LUFS values rather than dBFS.
[18:34:00 CET] <andrew-shulgin> Hello! Is there a function to get timestr from timeval (av_parse_time vice-versa)?
[18:34:29 CET] <Johnjay> how do i overlay two mp3 files together?
[18:35:07 CET] <durandal_170> Johnjay: amix filter?
[18:35:48 CET] <Johnjay> right
[18:36:10 CET] <Johnjay> that should work
[18:36:13 CET] <Johnjay> i'm trying to remix this track of jungle sounds and don't know what i'm doing
[18:37:38 CET] <Johnjay> specifically i want to mix in some sounds, but then normalize the audio so it's increasing and then decreasing
[18:37:42 CET] <Johnjay> kind of like a bell curve
[18:37:46 CET] <Johnjay> right now i'm using afade=in and afade=out
[18:38:23 CET] <Johnjay> the doc says afade accepts parameters like exponential and sine
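Put together, the mix-then-shape idea might look like this (filenames, durations and the qsin curve are all placeholders; afade's "curve" option is where the exponential/sine shapes live):

    ffmpeg -i jungle.mp3 -i extra_sounds.mp3 -filter_complex \
      "amix=inputs=2:duration=first,afade=t=in:d=10:curve=qsin,afade=t=out:st=50:d=10:curve=qsin" \
      -c:a libmp3lame -q:a 2 mixed.mp3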
[19:02:58 CET] <kikobyte> BtbN, I was investigating a segmentation fault happening during the nvenc h264 encoding (remember, the one which segfaulted on nvEncUnmapInputResource). What I found is, a particular surface gets actually unmapped twice. Long story short, I decided to check here https://github.com/FFmpeg/FFmpeg/blob/release/3.3/libavcodec/nvenc.c#L1452 if the incoming frame is already mapped (that's what caused double unmap)
[19:03:48 CET] <kikobyte> BtbN, What happened is, the input picture was duplicated because of the default -vsync 1 behavior
[19:07:30 CET] <BtbN> But only in combination with re-negoatiation I guess?
[19:07:34 CET] <BtbN> And does it still happen on master?
[19:08:35 CET] <kikobyte> BtbN, haven't checked on master, but I don't think I saw any protection from this case there as well
[19:09:12 CET] <kikobyte> Re-negotiation here doesn't really seem to matter, but because of the changing resolution, the next frame comes with a little delay in PTS, which makes the vsync logic duplicate the previous frame
[19:09:46 CET] <kikobyte> Since the encoder on the far end of the whole pipeline (not even in this process) cannot reinitialize instantly
[19:10:37 CET] <kikobyte> Basically, it would be good to put a check in nvenc_register_frame for the frame being actually in the internal encoder queues at the moment
[19:10:43 CET] <BtbN> I wonder if this should be plain not supported, or somehow make nvenc aware of the same frame being input twice
[19:10:58 CET] <BtbN> well, doing such a check would be a huge effort for every single frame
[19:11:10 CET] <kikobyte> At least a diagnostic message would be good
[19:11:31 CET] <BtbN> If you figure it out to give a message, you might as well handle it...
[19:11:59 CET] <kepstin> this case would be two refcounted frames sharing data, right?
[19:12:03 CET] <kikobyte> BtbN, if (ctx->registered_frames[i].mapped) __builtin_trap(); is what gave me the clue
[19:12:03 CET] <BtbN> But just noticing that you are getting the same frame twice is not straight forward
[19:12:26 CET] <kikobyte> Just a O(1) lookup per frame
[19:12:37 CET] <BtbN> The registered_frame is internal to nvenc
[19:12:51 CET] <BtbN> if you get a new frame from external, you have to iterate and compare with all others
[19:13:04 CET] <kikobyte> kepstin, in this case nvenc_register_frames overwrites the existing surface
[19:13:44 CET] <kikobyte> registered_frames[...].mapped only gets changed when the frame is either registered or pushed while processing the output surface
[19:13:48 CET] <kepstin> wouldn't a simple fix be to just have the encoder call av_frame_make_writable() on every frame it gets so that none share memory?
[19:14:00 CET] <BtbN> you cannot make writeable a hardware frame.
[19:14:13 CET] <kepstin> oh, hardware frames, right :/
[19:14:21 CET] <kikobyte> I'm referring to the NvencSurface structure
[19:14:33 CET] <BtbN> That's internal to nvenc
[19:14:38 CET] <BtbN> and assigned for every input frame
[19:14:49 CET] <kikobyte> It's a fixed-size array defined in nvenc.c
[19:14:50 CET] <BtbN> so if the same frame is input twice, it also ends up with two internal handles
[19:15:08 CET] <BtbN> so?
[19:15:09 CET] <kikobyte> used for mapping incoming frames (register + map) for gpu-only pipeline
[19:15:19 CET] <BtbN> yes, exactly
[19:15:25 CET] <BtbN> one mapping is made for every frame it gets
[19:15:38 CET] <BtbN> so if it gets the same frame twice, it will be mapped twice, and eventually explode
[19:16:08 CET] <BtbN> So it would need logic to detect frames referencing the same data. And then ref-counting logic on the mappings
[19:16:15 CET] <BtbN> Which is a bit much for a simple fix
[19:16:32 CET] <kikobyte> yeah, but when a new frame arrives, the nvenc_register_frame function goes through the fixed-size array of existing mappings and has the logic of quickly picking the previously registered instance if the cuda pointer matches
[19:17:05 CET] <BtbN> And what do you do then? You'll still end up unmapping it twice.
[19:17:20 CET] <kikobyte> essentially what I'm saying is, in the very same structure which keeps the cuda pointer, there's an indication of whether this mapping is currently active (read: not pushed out of the encoder)
[19:17:59 CET] <BtbN> It's an array of mappings, so you cannot just add another for the frame with same data
[19:18:13 CET] <kikobyte> nothing to add...
[19:18:24 CET] <kikobyte> just check .mapping
[19:18:45 CET] <kikobyte> if (ctx->registered_frames[i].ptr == (CUdeviceptr)frame->data[0]) { return i; }
[19:18:51 CET] <BtbN> You will eventually unmap it
[19:18:54 CET] <BtbN> and it will explode then
[19:18:57 CET] <BtbN> not on originally mapping it
[19:19:05 CET] <kikobyte> if (ctx->registered_frames[i].ptr == (CUdeviceptr)frame->data[0]) { assert(!ctx->registered_frames[i].mapped) return i; }
[19:19:07 CET] <kepstin> you'd have to refcount it then?
[19:19:15 CET] <BtbN> you'd have to come up with a full refcounting, yes
[19:20:11 CET] <kikobyte> guys, that's weird. You have a structure, which remembers which frames are already registered, and it also remembers whether they're mapped.
[19:20:25 CET] <BtbN> It crashes on _unmap_
[19:20:31 CET] <kikobyte> You use that structure to check if the incoming buffer needs registering through nvenc or you can re-use that reg resource
[19:20:39 CET] <BtbN> And there is no way for it to know if the frame is in the queue again at that point
[19:20:44 CET] <kikobyte> why not check if you already mapped it? because that's exactly "gates of mordor"
[19:20:59 CET] <kikobyte> the frame is in the queue if and only if .mapped is 1
[19:21:10 CET] <BtbN> It already does that. Which is why it explodes when it tries to use and unmap it a second time
[19:21:24 CET] <kikobyte> it crashes, and you can prevent that crash
[19:21:39 CET] <kikobyte> by rejecting even to accept that frame
[19:22:04 CET] <BtbN> you don't seem to follow. It needs to actually count how often the frame comes in, and if it can already unmap it, or needs to wait
[19:22:14 CET] <BtbN> so mapped would have to be a counter instead of just a flag
[19:23:07 CET] <kepstin> basically - when it gets to the unmap step, how is it supposed to know whether or not there's another later frame that's using the same mapping?
[19:23:23 CET] <kikobyte> what I'm proposing is to detect and gracefuly exit once that kind of situation happens
[19:23:36 CET] <kikobyte> you cannot refcount that mapping because, afaik, the requirements for the nvenc api are that you cannot even write to the buffer which you have mapped
[19:23:42 CET] <BtbN> that wouldn't really be an improvement. It will still fail the entire encode.
[19:23:56 CET] <BtbN> you're not writing to the frame
[19:24:15 CET] <kikobyte> I thought that a diagnostic message is better than a random crash
[19:24:16 CET] <BtbN> it's just used twice
[19:25:25 CET] <kikobyte> supposedly, if you've got the frame you mapped (and haven't unmapped yet) again, it might mean that it's potentially modified. I can be wrong here, if the previous element in the pipeline doesn't re-use the buffer based on the refcount
[19:25:38 CET] <kikobyte> so won't argue here
[19:26:08 CET] <BtbN> If you modify a frame that you gave to an encoder you're violating API
[19:26:18 CET] <kikobyte> exactly what i'm saying
[19:26:27 CET] <BtbN> So that's not a case that needs to be worried about.
[19:26:28 CET] <kikobyte> in this particular case, the frame isn't being modified, just being re-sent
[19:26:54 CET] <BtbN> I'm not even sure if that kind of usage is valid
[19:27:12 CET] <BtbN> But pretty much nothing ever actually modifies a frame
[19:27:16 CET] <kikobyte> good to know
[19:27:29 CET] <BtbN> You only modify a frame if the refcount is 1
[19:27:57 CET] <kikobyte> anyways, I just felt obliged to share my findings. To put it short - sending the same frame twice in rapid succession (as with -vsync 1 frame duplication) spuriously crashes the encoder
[19:40:40 CET] <BtbN> I wonder if this also happens if you put a fps filter in the chain to double the framerate
[19:40:58 CET] <kikobyte> BtbN, my assumption - yes
[19:41:23 CET] <BtbN> depends on how the filter duplicates frames
[19:41:27 CET] <BtbN> It might be doing a full copy
[19:41:30 CET] <kikobyte> Sorry man, I'm heading home
[19:42:46 CET] <BtbN> nope, high fps works just fine
[20:23:59 CET] <bbert> I have some files that were muxed with a custom application built with libav.  I would like to remux them using ffmpeg.  The original container is matroska, with dnxhd video and pcm_s24le audio.  The issue is that the custom application doesn't seem to properly handle setting the frame rate, or perhaps timestamps.  I *can* verify that all video frames and audio samples are present, but the duration of the file is not properly detected by ffmpeg.  If I
[20:23:59 CET] <bbert> force the proper frame rate (using -r on the input), video is remuxed perfectly, and everything lines up with my separately recorded audio, which was recorded with AD converters locked to the same master clock as the video devices.  However, the remuxed file ends prematurely and the remaining frames are not included in the output file.  Playing the source file in ffplay has a similar result, where the final frames which are beyond the calculated
[20:24:00 CET] <bbert> duration are omitted.  However, when playing the source file in VLC, those frames are played.  Is there a way to make ffmpeg include those frames?  I can't really take the time to try many different methods.  These are quite large files (~1TB) and remuxing takes around an hour.
[20:33:55 CET] <kepstin> bbert: ffmpeg should be including all of the frames. It's possible that the *audio* is too short - if the audio is shorter than the video, most players will stop playback when the audio track ends.
[20:34:32 CET] <kepstin> or.. does this video file have no audio track? is it video only?
[20:36:57 CET] <bbert> kepstin: The mismatch in framerate vs audio rate (proper 48k) does, in fact, cause the audio to end earlier than the video when the video is not forced to the proper frame rate.  However, the behavior persists when using -an
[20:38:05 CET] <kepstin> bbert: it would be helpful to see a pastebin of the ffmpeg command you're running and its complete output.
[20:39:13 CET] <bbert> ffmpeg -r 30000/1001 -i input.mkv -c:a copy -c:v copy output.mxf
[20:39:30 CET] <bbert> Or the same w/ -an and no -c:a
[20:40:00 CET] Action: bbert reads more... "complete output...."
[20:41:16 CET] <bbert> how about the complete output, but for only the first few seconds of remuxing?  The full run will take a long time, and produce copious output....
[20:44:01 CET] <ddubya> avcodec_get_name segfaults.... could it be because the codec isn't open? All it is passed is a codec id
[20:44:34 CET] <ddubya> this is what I'm using as replacement for avcodeccontext->codec_name
[20:45:34 CET] <saml> til color space
[20:46:03 CET] <saml> how does -filter_complex format=pix_fmts=rgb24  work given yuv420p?  would it lose color?
[20:46:24 CET] <saml> or yuv420p (12bpp)  to rgb24 (24bpp)  is lossless?
[20:46:31 CET] <bbert> kepstin: Here's a paste from one of the smaller files I've been using for testing.   https://pastebin.com/uLVrzjJx
[20:53:40 CET] <ddubya> is AVCodec->name going to be deprecated as AVCodecContext->codec_name has been?
[21:00:22 CET] <kepstin> bbert: hmm. kinda odd. nothing obvious there except that those dts errors will probably cause some issues later. It looks like you exited that early with ctrl-c?
[21:02:55 CET] <bbert> kepstin: yes, ctrl-c.  running the full file would have taken ~20 min
[21:07:12 CET] <ddubya> hmm I don't see where anyone is calling av_register_all ... doh
[21:08:14 CET] <bbert> kepstin: interesting... doing the same, but to .mov does not result in the same dts problems... I'm going to do a full remux to .mov and see what happens...
[21:15:02 CET] <wandering_segfau> Hello, is there a way to force ffmpeg to copy a private data PID in a transport stream? -copy_unknown doesn't seem to do anything.
[21:19:31 CET] <ddubya> anyone know why avcodec_get_name always segfaults, tested on version 2.2.9 and git master same
[21:22:26 CET] <ddubya> nvm I'm an idiot
[21:22:26 CET] <BtbN> because you never called register_all?
[21:22:46 CET] <ddubya> its a blasted function pointer due to this dynamic loader they've got
[22:14:01 CET] <ddubya> I upgraded to ffmpeg 3.5 (git master) and avcodec_open2 always returns -22. Any idea what to look for?
[22:14:16 CET] <ddubya> doesn't seem to matter what the codec is
[22:14:24 CET] <JEEB> did you register_all() before?
[22:14:34 CET] <ddubya> yes
[22:14:54 CET] <ddubya> they're audio codecs with s16 sample format
[22:15:03 CET] <ddubya> works fine with v57
[22:17:01 CET] <ddubya> here's the codec setup, https://goo.gl/euT4iL
[22:17:13 CET] <ddubya> maybe something stands out as off
[22:28:45 CET] <BtbN> Did you ever alloc the context?
[22:32:27 CET] <ddubya> it was set to point to the stream->codec after allocating the stream
[22:34:27 CET] <ddubya> then filled with various parameters, then avcodec_open2
[22:35:08 CET] <ddubya> something here is now illegal since v57 and I've got no idea
[22:41:43 CET] <alexpigment> ddubya: https://stackoverflow.com/questions/26205017/ffmpeg-avcodec-open2-returns-22-if-i-change-my-speaker-configuration
[22:41:47 CET] <alexpigment> is it possible that's related?
[22:44:43 CET] <ddubya> maybe
[22:44:54 CET] <ddubya> not setting channel_layout at all here, but setting channels
[22:45:04 CET] <ddubya> channels is always 1 or 2
[22:46:26 CET] <alexpigment> gotcha
[22:46:34 CET] <alexpigment> just figured i'd throw that out in there in case it helped
[22:46:45 CET] <alexpigment> i have no idea about the actual problem, but that came up in a google search
[22:46:49 CET] <ddubya> thanks, I can use all the help I can get
[22:46:57 CET] <BtbN> Look at the log output
[22:46:57 CET] <ddubya> would a debug build of ffmpeg help you think
[22:47:05 CET] <BtbN> it will most likely tell you what it dislikes
[22:47:41 CET] <ddubya> ok, I don't see any logging at the moment, maybe they disabled it
[22:54:47 CET] <ddubya> "aac unsupported channel layout". I'm a PLEB. Thanks for the logging suggestion
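The usual fix for that particular complaint is to fill in a layout before avcodec_open2(); a one-line sketch, assuming a context named enc_ctx (hypothetical) with ->channels already set:

    /* from libavutil/channel_layout.h: derive a default (mono/stereo) layout */
    enc_ctx->channel_layout = av_get_default_channel_layout(enc_ctx->channels);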
[22:57:43 CET] <lazorshade> Is there a way to cut out multiple short clips from a longer video in one go? Something like "ffmpeg -i inputfile.mp4 -ss 5:05 -t 10 -ss 6:30 -t 10 out.mp4". I know I could go one by one but seeking through the files takes a long time if the timestamp is at the 1h mark.
[22:58:16 CET] <BtbN> put -ss before -i
[22:59:05 CET] <lazorshade> But then the video doesn't cut as exactly as when I put it after -i.
[22:59:06 CET] <alexpigment> BtbN: isn't that going to be inaccurate though?
[22:59:10 CET] <alexpigment> yeah
[22:59:43 CET] <alexpigment> it's fairly accurate for me, but I always encode with a gop the same as the frame rate
[22:59:54 CET] <alexpigment> and i realize most people don't do that
[23:00:15 CET] <BtbN> gop the same as frame rate is pretty bad quality wise
[23:00:39 CET] <alexpigment> you're assuming too much about my bitrate/crf
[23:01:04 CET] <alexpigment> hard drives are cheap. cheap enough to not take shortcuts when encoding
[23:01:34 CET] <BtbN> sounds more like you are looking for a proper lossless intermediate format
[23:01:48 CET] <alexpigment> me?
[23:02:07 CET] <lazorshade> Yeah, I mean it's not super inaccurate, but usually there's about 0.5 to 1 second of a frozen frame in the clip while the audio is already playing when i put -ss before -i.
[23:02:24 CET] <lazorshade> And I'm trying to avoid that.
[23:03:20 CET] <alexpigment> lazorshade: one of the suggestions here mentions using the trim filter: https://superuser.com/questions/681885/how-can-i-remove-multiple-segments-from-a-video-using-ffmpeg
[23:03:28 CET] <ChocolateArmpits> lazorshade, you can check for the closest keyframe using ffprobe and then cut on the related timestamp
[23:03:49 CET] <alexpigment> i'm not sure if that is much quicker than doing each one individually, but it's worth a try
[23:04:16 CET] <lazorshade> alexpigment, ChocolateArmpits: Thank you, I'll look into that!
[23:04:50 CET] <ChocolateArmpits> it would probably be smart to script it, because there's quite some data to parse manually each time
[23:05:52 CET] <lazorshade> Yeah, I'd be working with a list of timestamps that I'll run through a script anyway, so that should work. Thanks.
[23:07:30 CET] <BtbN> lazorshade, if you're re-encoding anyway, you might as well just transcode it to utvideo or something, and then cut that quick
[23:08:52 CET] <alexpigment> the one downside to that method is that you then have to really actively think about and deal with situations where you'd run out of hard drive space
[23:09:11 CET] <alexpigment> (if the source video resolution and length is highly variable)
[23:09:41 CET] <lazorshade> I'm not familiar with utvideo, does it make seeking through the video faster?
[23:10:11 CET] <lazorshade> Disk space isn't really an issue, the source files are deleted once the short clips are cut out.
[23:14:43 CET] <alexpigment> lazorshade: it should make it faster I would think. you'd have to run through some tests to see if it's overall faster, since you're adding another encode and decode step to the process
[23:15:38 CET] <lazorshade> Okay, thank you.
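A rough sketch of the scripted approach discussed above (file names, encoder settings and the timestamp-list format are made up; -ss before -i seeks fast, and re-encoding keeps the cut accurate):

    #!/bin/sh
    # timestamps.txt holds "start duration" pairs, one clip per line
    i=0
    while read start dur; do
        ffmpeg -ss "$start" -i input.mp4 -t "$dur" \
               -c:v libx264 -preset veryfast -crf 20 -c:a copy "clip_$i.mp4"
        i=$((i + 1))
    done < timestamps.txt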
[23:18:17 CET] <islanti> hmm i need to convert an aax file to multiple mp4s for each chapter
[23:20:52 CET] <islanti> is there a command line solution to do that?
[23:22:50 CET] <alexpigment> islanti: https://github.com/r15ch13/audible-converter ?
[23:27:36 CET] <islanti> looks like that keeps chapters intact in the mp4 file, but i need separate mp4 files for each chapter
[23:29:14 CET] <saml> what is output format for .mp4  if I were to output to - ?
[23:29:20 CET] <saml> -f mp4 -  won't work
[23:29:22 CET] <islanti> hmm unless my ipod gen7 supports chapter markers, then i'm ok with that. i will have to check..
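One possible route, untested here: read the chapter list with ffprobe and then cut each chapter with stream copy (the activation_bytes value needed for DRM'd .aax files, the filename and the timestamps are all placeholders):

    ffprobe -activation_bytes XXXXXXXX -i book.aax -print_format json -show_chapters
    ffmpeg  -activation_bytes XXXXXXXX -i book.aax -ss <chapter_start> -to <chapter_end> -vn -c copy chapter_01.mp4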
[23:34:03 CET] <saml> looks like mp4 is special and needs to write full size in the header and can't be streamed to -
[23:34:32 CET] <saml> it's just so weird -filter_complex psnr   gives very different number if i'm comparing .mp4 and .avi
[23:35:55 CET] <alexpigment> saml: same exact codec?
[00:00:00 CET] --- Sat Jan 27 2018


