[Ffmpeg-devel-irc] ffmpeg.log.20180414
burek
burek021 at gmail.com
Sun Apr 15 03:05:01 EEST 2018
[00:00:14 CEST] <hiihiii> hi, what's the command or option to convert an input to constant frame rate mp4 file
[00:00:38 CEST] <hiihiii> or rather in libx264 codec
[00:00:44 CEST] <spicypixel> -framerate flag?
[00:00:46 CEST] <spicypixel> or -r
[00:01:03 CEST] <hiihiii> as output option?
[00:01:58 CEST] <hiihiii> how about the fps filter?
[00:02:12 CEST] <hiihiii> does that achieve the same thing
[00:05:15 CEST] <hiihiii> ok thank you, it does have the same effect
[00:07:01 CEST] <kepstin> i normally recommend the fps filter rather than -r output option, it has better (and more configurable) rounding behaviour
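A minimal sketch of both approaches discussed above; the filenames and the 25 fps target are placeholders:
    ffmpeg -i input.mkv -vf fps=25 -c:v libx264 -c:a copy output.mp4
    ffmpeg -i input.mkv -r 25 -c:v libx264 -c:a copy output.mp4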
[00:10:59 CEST] <Anonrate> Okay, this is either a bug... or I messed something up severely, because now ffmpeg won't find anything via pkg-config. Now it won't find libass. I've checked whether it's there as well, and it is.
[00:16:38 CEST] <DHE> do you need to set PKG_CONFIG_PATH ?
[00:17:18 CEST] <Anonrate> Right now it's not set, but pkg-config still finds all the packages if I check myself via pkg-config --validate <pkgname> --print-errors
[00:19:02 CEST] <Anonrate> Even setting it I still get the same error.
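A minimal sketch of pointing FFmpeg's configure at a custom pkg-config location, assuming libass was installed under /usr/local; the prefix is a placeholder:
    export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig:$PKG_CONFIG_PATH
    pkg-config --exists --print-errors libass && echo "libass found"
    ./configure --enable-libass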
[00:52:09 CEST] <spicypixel> I'm surprised I can't find a script to take a bunch of premade DASH fragments and make a manifest for them
[00:52:32 CEST] <JEEB> just make output file called something dot mpd?
[00:52:43 CEST] <JEEB> since FFmpeg has a DASH "muxer" for quite some time now
[00:52:59 CEST] <spicypixel> yeah but I can't feed it a filelist?
[00:53:09 CEST] <spicypixel> I don't have a source mp4, I have 10,000 DASH fragments
[00:54:13 CEST] <spicypixel> is there a syntax for inputting the files and making a .mpd file I've missed?
[00:54:52 CEST] <JEEB> you could try the concat demuxer or so and wonder if it works, and try to generate a playlist with that
[00:55:10 CEST] <JEEB> or concat protocol, no idea which is better at your use case, if any
[00:55:49 CEST] <spicypixel> I tried concat and it didn't work reliably, some folders worked some didn't
[00:56:46 CEST] <JEEB> then I'd almost recommend you make a fragmented mp4 HLS playlist or something, that's at least simpler to template. I think.
[00:57:37 CEST] <spicypixel> even if the source files identify as DASH?
[00:57:40 CEST] <spicypixel> that's fine?
[00:57:53 CEST] <JEEB> pretty sure the demuxer won't care as long as it can open the files
[00:58:00 CEST] <JEEB> DASH is fragmented mp4
[00:58:11 CEST] <spicypixel> okay will try
[00:58:29 CEST] <JEEB> but you'll have to make a python script or something that goes through the segments and puts their file names into a playlist, so you end up with a single coherent playlist. and then if the audio fragments are separate you'll have to make one for those too, plus the "master" playlist which references both and puts the audio into an audio group
[00:58:35 CEST] <JEEB> so that the audio gets played together with the video
[00:58:49 CEST] <JEEB> (also a simpler way would be to get the manifest from the streaming service)
[00:59:04 CEST] <JEEB> in whatever format they produce it, and convert that into DASH or HLS or whatever
[01:00:37 CEST] <spicypixel> hopefully a little simpler than that
[01:00:43 CEST] <spicypixel> audio is inside the mp4 fragments
[01:01:35 CEST] <spicypixel> https://pastebin.com/raw/2jj627rG
[01:01:47 CEST] <spicypixel> all of them are 240 frames, 8 seconds, 30fps
[01:01:57 CEST] <JEEB> ok, then you can just generate a single HLS m3u8 playlist
[01:02:01 CEST] <JEEB> a variant one
[01:02:11 CEST] <JEEB> https://tools.ietf.org/html/rfc8216
[01:03:22 CEST] <spicypixel> mhmmm
[01:03:29 CEST] <JEEB> see https://tools.ietf.org/html/rfc8216#section-8.1 and https://tools.ietf.org/html/rfc8216#section-3.3
[01:03:40 CEST] <JEEB> first one is an example of a media (variant) playlist
[01:03:54 CEST] <JEEB> the second gives the specifics on which protocol version you need to declare for fragmented mp4 to be valid
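A hand-written media playlist for fragmented mp4 segments along those lines might look roughly like this; the segment names, durations, and the separate init segment are placeholders, and EXT-X-MAP needs a declared protocol version of at least 6:
    #EXTM3U
    #EXT-X-VERSION:7
    #EXT-X-INDEPENDENT-SEGMENTS
    #EXT-X-TARGETDURATION:8
    #EXT-X-MEDIA-SEQUENCE:0
    #EXT-X-MAP:URI="init.mp4"
    #EXTINF:8.000,
    segment00001.mp4
    #EXTINF:8.000,
    segment00002.mp4
    #EXT-X-ENDLIST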
[01:04:34 CEST] <spicypixel> okay that's handy
[01:05:38 CEST] <spicypixel> essentially unifi video spits out thousands of these DASH fragments, but their web ui only lets you download 30min segments from the source, presumably with on-the-fly processing. I have all the recorded files, so hopefully this works: slap a webserver over the saved location and start playback on them
[01:07:47 CEST] <giaco> could you please explain to me how to preserve audio quality in RTP streaming? I have an input file that gets completely ruined at the RTP receiver
[01:11:30 CEST] <spicypixel> I'm wondering if it's easier, albeit time consuming, to just concat the binaries into a blob and pipe it into ffmpeg with the hls/dash muxer output and segment size set
[01:11:40 CEST] <spicypixel> obviously double IO overhead
[01:11:56 CEST] <spicypixel> guess it depends if ffmpeg can read it
[01:12:01 CEST] <JEEB> yes
[01:12:10 CEST] <JEEB> you can test with cat + `ffmpeg -i -`
[01:12:22 CEST] <JEEB> the "-" meaning, "stdin, please"
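A minimal sketch of that stdin test, assuming the fragments glob in playback order and that any separate init segment comes first in the concatenation:
    cat *.mp4 | ffmpeg -i - -c copy joined.mp4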
[01:12:24 CEST] <spicypixel> windows/powershell sadly but I get the gist
[01:12:40 CEST] <JEEB> pretty sure there are cat binaries for windows
[01:12:46 CEST] <spicypixel> yeah I will grab them
[01:12:53 CEST] <JEEB> and while powershell buffers everything, cmd.exe doesn't
[01:12:56 CEST] <JEEB> (in pipes)
[01:13:17 CEST] <spicypixel> though not sure how ffmpeg will handle the input here, what with it being an object pipe not STDIN?
[01:13:41 CEST] <JEEB> it works since even powershell implements the piping thing
[01:13:47 CEST] <JEEB> but powershell is stupid and buffers the whole thing
[01:13:50 CEST] <JEEB> which will kill your RAM
[01:13:55 CEST] <JEEB> so just use cmd.exe
[01:13:55 CEST] <spicypixel> ah man it's a 60GB file.
[01:13:57 CEST] <spicypixel> hah
[01:13:59 CEST] <spicypixel> okay
[01:14:12 CEST] <JEEB> (I think there's some parameter to have powershell not buffer the thing, but I'm a lazy git)
[01:15:16 CEST] <JEEB> oh, windows type.exe does something similar to cat?
[01:15:23 CEST] <JEEB> unless it inserts endlines, but you'll notice
[01:15:39 CEST] <spicypixel> copy /b "in theory" does binary concat
[01:15:42 CEST] <spicypixel> in cmd
[01:15:46 CEST] <spicypixel> I'm trying it now
[01:15:51 CEST] <spicypixel> but it's going to a file not pipe
[01:16:07 CEST] <spicypixel> intermediate file solves one problem atm, one problem at a time
[01:16:17 CEST] <spicypixel> can worry about unbuffered piping after if this works
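A rough cmd.exe version with the intermediate file, then letting ffmpeg cut it back into HLS; paths, the 8-second segment length, and the fmp4 segment type are placeholders and assume a reasonably recent ffmpeg:
    copy /b *.mp4 ..\combined.mp4
    ffmpeg -i ..\combined.mp4 -c copy -f hls -hls_time 8 -hls_list_size 0 -hls_segment_type fmp4 playback.m3u8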
[01:16:44 CEST] <spicypixel> https://stackoverflow.com/questions/27440768/powershell-piping-causes-explosive-memory-usage seems to be related
[01:17:17 CEST] <spicypixel> glad I noticed the ^ escaping in this example or I'd be lost
[01:18:51 CEST] <spicypixel> looks like git bash comes with up to date coreutils binaries for windows so yay
[01:20:56 CEST] <spicypixel> I tried using the windows linux subsystem but its weakness is IO
[01:21:07 CEST] <spicypixel> so useless for concating 10,000 files per directory
[01:47:23 CEST] <spicypixel> ah well sleep
[04:34:29 CEST] <darkdrgn2k> hi all
[04:34:51 CEST] <darkdrgn2k> my ffmpeg seems to freeze 6 seconds into transcoding a live stream from my Hauppauge HD PVR
[04:35:10 CEST] <darkdrgn2k> cat /dev/video1 | ffmpeg -err_detect ignore_err -i - -hls_time 30 -qscale 0 -acodec mp3 TEST2.m3u8
[04:35:25 CEST] <darkdrgn2k> i tried many permutations; cat /dev/video1 | seems to be the most stable, yet there are still issues
[05:30:18 CEST] <boblamont> What does the "1" do after -strftime in this command: ffmpeg -i http://admin:Stupidpassword1@10.12.10.40/Streaming/channels/1/picture -vframes 1 -f image2 -strftime 1 "%Y-%m-%d_%H-%M-%S_doorbell.jpg"?
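For reference, the 1 simply switches the option on: with the image2 muxer, -strftime 1 makes ffmpeg expand strftime() date/time sequences in the output filename, so the snapshot gets named with the current time. A minimal sketch with a placeholder camera URL:
    ffmpeg -i http://camera.example/snapshot.jpg -vframes 1 -f image2 -strftime 1 "%Y-%m-%d_%H-%M-%S_doorbell.jpg"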
[06:31:31 CEST] <nuomi> hi Channel, I have a problem with this command, https://pastebin.com/z1SNDrdM, which prints out text_w and text_h of the drawtext filter to the console. But I don't understand how this 'print' works; I can't find anything about it in the official manual. How can I save the output of 'print' for later use?
[06:31:39 CEST] <nuomi> Thanks in advance.
[06:41:29 CEST] <boblamont> nuomi: first, please understand that I have absolutely no idea what I'm talking about. What happens in the terminal when you use the command?
[06:41:45 CEST] <furq> nuomi: http://ffmpeg.org/ffmpeg-utils.html#Expression-Evaluation
[06:44:36 CEST] <nuomi> terminal has the output like this https://pastebin.com/HdwqGbar
[06:45:11 CEST] <nuomi> "272.000000 " is the actual output of 'print'
[06:46:42 CEST] <nuomi> furq: thank you, it seems the right place to dig
[06:48:16 CEST] <furq> if you just want to capture those values and nothing else then i guess you could run ffmpeg -v error and use print(x, 16)
[06:48:23 CEST] <furq> that should just log the output of print
[06:52:40 CEST] <nuomi> [NULL @ 0x7fef5500d800] Unable to find a suitable output format for '16'
[06:52:41 CEST] <nuomi> 16: Invalid argument
[06:52:50 CEST] <furq> you might need to escape the comma
[06:55:45 CEST] <nuomi> furq: I've escaped the comma, but still: https://pastebin.com/zYAtAJti
[06:56:13 CEST] <furq> quote the argument to -vf
[07:00:11 CEST] <nuomi> furq: still an error, https://pastebin.com/DXGQUahV
[07:01:12 CEST] <nuomi> furq: or is there a simple, straightforward way to get text_h?
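Putting furq's two hints together, something along these lines should emit just the value; the font path and input are placeholders, and 16 is the numeric AV_LOG_ERROR level, so the print still shows under -v error:
    ffmpeg -v error -i input.mp4 -frames:v 1 -vf "drawtext=fontfile=/path/to/font.ttf:text='test':x=0:y=print(text_h\,16)" -f null -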
[07:53:29 CEST] <vtorri> hello
[07:53:58 CEST] <vtorri> what is the best lossless codec available in ffmpeg ?
[07:54:38 CEST] <vtorri> best in the sense of compression ratio, whatever the speed, with fast decompression
[08:06:08 CEST] <[E]sc> i'm trying to strip an mp3 of the metadata, specifically the album image. however, when i do this, it keeps reverting back to the original image after i edit the metadata like the name and artist. is there a way to make that stop? i'm using ffmpeg 3.4.1. The command i'm using is: ffmpeg -i tagged.mp3 -vn -codec:a copy -map_metadata -1 out.mp3
[08:27:17 CEST] <Foaly> [E]sc what do you mean by "edit the metadata"?
[08:28:10 CEST] <Foaly> you only gave the command you use to remove the metadata
[08:29:08 CEST] <ariyasu> it's possible whatever media player you are using to play the file is going out and grabbing the album art and so on
[08:34:07 CEST] <[E]sc> Foaly, oh, then how can i remove the album image? every time i edit the metadata it reverts back to the original image for some reason.
[08:34:39 CEST] <Foaly> how do you edit the metadata?
[08:35:05 CEST] <[E]sc> Foaly, using vlc afterwards to add back the artist, song and album info.
[08:35:36 CEST] <Foaly> well, then that's a vlc problem
[08:36:14 CEST] <Foaly> you can instead set the metadata via the commandline or another program
[08:37:28 CEST] <[E]sc> Foaly, thanks, i'll try that and then see if it still reverts back to the original album art.
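One way to do that in a single ffmpeg run, stripping the art and re-adding the tags on the command line; the tag values are placeholders:
    ffmpeg -i tagged.mp3 -vn -c:a copy -map_metadata -1 -metadata title="Song" -metadata artist="Artist" -metadata album="Album" out.mp3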
[10:31:51 CEST] <keglevich> hello all...one simple question...is it possible to encode to mp4 (using the libx264 codec) and get the final result as CBR? I tried many possible combinations but mediainfo always says "variable overall video bitrate"... is it possible to somehow do real CBR encoding with ffmpeg?
[10:33:36 CEST] <keglevich> another thing...if I have a 50p source and I'd like to get a 25i end result, what parameters should I use? I tried with "tff=1", but the result is some sort of "mbaff, top field first" instead of "interlaced, top field first"...this probably isn't the same?
[10:46:11 CEST] <furq> keglevich: -b:v 1234k -maxrate 1234k -x264-params "nal-hrd=cbr"
[10:47:35 CEST] <keglevich> furq: oh, that's something I didn't try yet...thank you
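Spelled out as a full command, with a VBV buffer size added, which the HRD model also needs; the rates and filenames are placeholders:
    ffmpeg -i input.mp4 -c:v libx264 -b:v 4000k -maxrate 4000k -bufsize 8000k -x264-params "nal-hrd=cbr" -c:a copy output.mp4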
[11:50:24 CEST] <roxlu> When I have a couple of h264 frames like I,P,P,P,P,P,P,I, can I remove some P frames so I get: I,P,P,P,I ?
[11:52:47 CEST] <Mavrik> By reencoding, yes? :)
[11:54:16 CEST] <roxlu> Mavrik: not w/o reencoding?
[11:55:19 CEST] <Mavrik> How would that work? :)
[11:55:28 CEST] <Mavrik> If those P frames reference previous P frames?
[11:56:43 CEST] <roxlu> Ah hmm I thought P frames only referred to I frames
[11:57:29 CEST] <Mavrik> Nope.
[11:57:40 CEST] <Mavrik> You have max ref frames, which is the maximum distance at which a frame can reference another
[11:57:47 CEST] <Mavrik> And you have GOP size which can be significantly larger
[11:58:16 CEST] <roxlu> Ok thanks then this approach won't work indeed.
[11:58:33 CEST] <roxlu> I'm looking for a way to synchronise the recording of streams
[11:59:16 CEST] <roxlu> I receive 4 RTP streams and I want to start recording them to file at the same time (the RTP timestamps can't be trusted to be in sync)
[11:59:18 CEST] <Mavrik> E.g. high profile 4.1 is limited to 4 reference frames
[11:59:31 CEST] <Mavrik> While it's entirely possible to have a huge GOP of > 100
[11:59:44 CEST] <Mavrik> Also, referring back only to I frames would hurt encoding efficiency significantly
[12:01:00 CEST] <roxlu> Ok. Do you maybe know about another way to synchronise recordings of streams?
[12:01:35 CEST] <Mavrik> Well, make an I-frame only stream :)
[12:01:45 CEST] <Mavrik> If you're on a fast connection to RTP source
[12:02:45 CEST] <roxlu> I can't change the input streams :#
[12:03:33 CEST] <roxlu> I could keep track of the relative time difference between the streams. Maybe it's possible to "inject" a new keyframe
[12:03:42 CEST] <roxlu> at the right time
[12:06:12 CEST] <Mavrik> Hmm, you'd still have the same issue
[12:06:21 CEST] <Mavrik> Since P frames will reference something
[12:07:16 CEST] <roxlu> yeah though I could only reencode one gop
[12:07:32 CEST] <roxlu> and adjust the number of frames/offset where I start between frames
[12:07:45 CEST] <Mavrik> That could work yeah
[12:07:45 CEST] <furq> i take it you can't just encode vfr
[12:08:53 CEST] <roxlu> furq: if possible I prefer not indeed
[12:36:24 CEST] <spicypixel> genuinely feel like I'm missing something here; it seems like no one else is trying to build an hls playlist from existing mp4 fragments
[13:17:27 CEST] <furq> spicypixel: http://vpaste.net/xNj2C
[13:17:31 CEST] <furq> procrastination is a wonderful thing
[13:40:19 CEST] <spicypixel> ha
[13:40:34 CEST] <spicypixel> I just wrote a powershell script to do the same before I saw that furq!
[13:43:03 CEST] <spicypixel> lines up roughly with what I've done though I had to add an EXT-X-MAP: directive, and EXT-X-INDEPENDENT-SEGMENTS
[13:43:06 CEST] <spicypixel> otherwise the same
[13:43:48 CEST] <spicypixel> on the upside I'm learning Lua too this afternoon by the looks of it:D
[14:58:10 CEST] <DHE> AV1 status: 12.5 days, 5430 frames processed (3m37s of this video)
[14:58:51 CEST] <iive> it might be faster to rewrite the encoder first.
[15:00:04 CEST] <DHE> that is one of the jokes I've been keeping on standby
[15:00:09 CEST] <DHE> of course, if my computer crashes it's game over.
[15:02:05 CEST] <iive> next time you might run it in a vm and snapshot it every hour or so.
[15:03:00 CEST] <iive> of course, it might crash reliably and reproducibly at 99%
[15:04:43 CEST] <furq> lol i was just thinking about that a minute ago
[15:05:00 CEST] <jkqxz> Also the format has probably changed since you started, so it won't be compatible with current libaom.
[15:05:01 CEST] <furq> i was going to say "i can't believe you're sticking with it" but i guess it's only using one core anyway
[15:07:04 CEST] <DHE> yeah... I kinda want to see how it fares on the video. I can only assume any changes will be for the better... right?
[15:07:27 CEST] <furq> obviously as soon as it completes you'll realise you undercropped by 1px and you need to start again
[15:07:40 CEST] <JEEB> well they've got two encoders already going and at least one more is coming up
[15:07:47 CEST] <DHE> making an H264 vs H265 vs AV1 video collection. H264 finished a bit under realtime. H265 took 12 hours. AV1 I should survive to see. :)
[15:07:50 CEST] <JEEB> so AV1 is looking quite positive
[15:07:59 CEST] <furq> JEEB: what's the third one for
[15:08:17 CEST] <JEEB> I would guess a version of Eve for AV1
[15:09:07 CEST] <furq> is rav1e just an academic exercise then
[15:10:11 CEST] <JEEB> in a way I'd call libaom at this point more academic
[15:10:22 CEST] <JEEB> since it's the one where optimizations weren't going to happen just yet
[15:10:51 CEST] <furq> i thought optimising aom would be the main focus once the bitstream was frozen
[15:10:52 CEST] <JEEB> while rav1e generally is just made to verify a) how well encoders can be made in rust and b) that the spec matches what is meant
[15:11:03 CEST] <JEEB> yes, libaom probably will get work on that
[15:11:51 CEST] <JEEB> anyways, we'll see how things go
[15:13:07 CEST] <furq> it would definitely kick ass if the awesome new free to use codec is only usable if you pay for a commercial encoder
[15:14:53 CEST] <JEEB> on the topic of AV1, I found it hilarious how harmonics is trying to discredit a format before it's even finished.
[15:15:15 CEST] <furq> maybe that's why aom put that press release out
[15:15:17 CEST] <furq> the old bait and switch
[16:34:13 CEST] <Mavrik> Hmm, how much of existing x264/x265 SIMD code can be reused for AV1?
[16:40:36 CEST] <DHE> AV1 already includes a lot of SIMD instructions. but SIMD is not an automatic performance winner
[16:40:55 CEST] <Mavrik> Well it's not automatic, but it does make quite a difference in a lot of cases.
[16:41:17 CEST] <DHE> SIMD buys a number of percentage points. we need a few orders of magnitude
[16:41:28 CEST] <Mavrik> So what's the biggest bottleneck?
[16:42:01 CEST] <DHE> according to perf, ff_deblock_h_chroma_8_mmxext and ff_deblock_h_luma_mbaff_8_sse2
[16:42:18 CEST] <DHE> wait.. that might be the video I'm playing
[16:42:44 CEST] <DHE> write_frame_header_obu, aom_highbd_sad64x64x4d_sse2, aom_sad128x128x4d_sse2
[16:42:45 CEST] <DHE> that makes more sense
[16:45:25 CEST] <furq> https://xvc.io/
[16:45:27 CEST] <furq> wtf is this
[16:48:11 CEST] <kepstin> as far as I can tell, it's an encoder that'll steadily get worse each time someone claims an encoding tool it implements violates a patent.
[16:49:17 CEST] <kerio> lmao
[16:49:38 CEST] <kepstin> from their concept page: "[targeting] device with an ability to remotely receive and install software updates. [...] in order to remove individual tools that have been determined to not be covered by the xvc license."
[16:52:26 CEST] <furq> lol
[16:53:24 CEST] <furq> check out my new codec, it's called, uh...HEVD
[16:55:15 CEST] <furq> calling on DHE to do a two-week encode with xvc and watch the video quality steadily get worse as features are removed from the encoder
[17:01:41 CEST] <DHE> sorry, my CPU is busy for the next 6 months
[17:03:52 CEST] <kerio> how are they going to remove features if you run it without internet tho
[17:03:56 CEST] <kerio> i bet it has always-on DRM
[17:19:18 CEST] <ChocolateArmpits> DHE, are you using cpu-used 0 ?
[17:19:37 CEST] <furq> he's using whatever the defaults are
[17:22:10 CEST] <ChocolateArmpits> vp8 and vp9 default is 1 so I guess it's that. Not max gains
[17:23:33 CEST] <FishPencil> Does FFmpeg allocate memory for a single frame, and fill it with the data of the following frames? Or is the frame deallocated and then reallocated for each frame?
[17:45:54 CEST] <DHE> ChocolateArmpits: ffmpeg_g -i input.mkv -strict -2 -map 0:v -map 0:a -c:v libaom-av1 -c:a copy -b:v 1M -maxrate:v 1M -bufsize:v 2M -threads:v 8 output.mkv
[17:46:32 CEST] <DHE> but I'm only getting 1 thread actually used
[17:49:22 CEST] <ChocolateArmpits> DHE, I think an alternative way to up thread usage was suggested in the doom9 thread, though for the standalone libaom-built encoder
[17:49:44 CEST] <ChocolateArmpits> did you think of segmenting the video ?
[17:51:29 CEST] <DHE> indirectly. I did end up making a ~45 second clip for all 3 codecs for my own demonstration focusing on the "difficult to encode" bits
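For what it's worth, the ffmpeg libaom wrapper does expose a speed setting and tiling, which is usually what gets more than one thread busy; a sketch of the command above with those added, assuming a build new enough to have -row-mt and -tiles:
    ffmpeg -i input.mkv -strict experimental -c:v libaom-av1 -b:v 1M -maxrate 1M -bufsize 2M -cpu-used 4 -row-mt 1 -tiles 2x2 -threads 8 -c:a copy output.mkv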
[17:55:07 CEST] <ChocolateArmpits> About two months ago I ran a simple test with a 10 second SD source; encoding took about 25 hours with cpu-used 0. At 100kbps av1 beat placebo x264. Compared to x264, av1 lost more detail, but the picture was considerably more structurally sound: parts weren't flying away from where they were meant to stay.
[17:57:17 CEST] <DHE> yeah that was my 45 second test as well... 1 megabit at 1080p for x264, x265 and aom-av1
[17:57:47 CEST] <DHE> x264 just got destroyed, x265 looks pretty good and av1 was choppy due to high decoding requirements but was possibly better than x265 by a bit...
[17:59:57 CEST] <ChocolateArmpits> DHE, you should decode to a lossless format and then playback that
[18:00:30 CEST] <JEEB> yea, low bit rate scenarios are where the new formats shine the most since most psychovisual optimizations f off in usefulness
[18:01:50 CEST] <JEEB> aq-mode 2 and disabling psy-rd (tune ssim'ish) probably lead to better results with x264
[18:08:04 CEST] <DHE> well constrained 1 megabit 1080p is just cruel at x264
[18:11:36 CEST] <JEEB> sure, but you might still get a bit better result with aq-mode 2 and no psy-rd. and since it's the fastest to encode it also shouldn't be too hard to try out
[18:50:30 CEST] <FurretUber> I have found the point where the microphone audio gets out of sync with the internal audio: it's after messages like: [matroska @ 0x55f48932cb20] Non-monotonous DTS in output stream 0:0; previous: 54271, current: 54246; changing to 54271. This may result in incorrect timestamps in the output file.
[18:52:38 CEST] <FurretUber> The log is a bit big (3,3 GB), so I am unable to upload it to pastebin
[18:53:00 CEST] <JEEB> that just means that the input timestamps at some point of time went backwards
[18:53:27 CEST] <JEEB> 54246 comes before 54271,
[18:54:52 CEST] <FurretUber> Yes, this happens when an application starts to play sound (opening a media player, for example). Then the microphone sound goes forward, while the internal sound is still in sync
[18:55:39 CEST] <FurretUber> As if the internal sound locked up and was then corrected, but the microphone is overcorrected
[18:56:59 CEST] <JEEB> time to use -debug_ts if I recall correctly
[18:57:07 CEST] <JEEB> that will print out all timestamps at different points of time
[18:57:22 CEST] <JEEB> also -v verbose is generally helpful
[18:58:20 CEST] <FurretUber> The command I used was (the accented characters, like á, came through mangled in the paste):
[18:58:20 CEST] <FurretUber> ffmpeg -report -vaapi_device /dev/dri/renderD128 -hwaccel vaapi -hwaccel_output_format vaapi -r 30 -thread_queue_size 2048 -f openal -sample_rate 96000 -i "Monitor of Áudio interno Estéreo analógico" -thread_queue_size 2048 -f openal -sample_rate 96000 -i "Áudio interno Estéreo analógico" -filter_complex "amix=inputs=2" -r 30 -thread_queue_size 2048 -f x11grab -probesize 32 -s 1366x768 -i :0.0+0,0 -muxpreload 1 -acodec libopus -application lowdelay -vf "format=nv12,hwupload,scale_vaapi=w=640:h=360" -muxpreload 1 -vcodec vp8_vaapi -threads 4 -r 30 -f webm -flags +global_header gravando000.webm
[18:59:14 CEST] <FurretUber> There is a strange problem using -framerate: it produces many messages like "Past duration 0.xxxx too large" and then it drops frames, so I have to use -r
[18:59:46 CEST] <JEEB> if you want to minimize the effect of ffmpeg.c's vsync logic you can try -vsync passthrough -copyts
[18:59:51 CEST] <JEEB> it's still not zero, mind you
[19:00:13 CEST] <JEEB> but yea, -debug_ts (I think it was like that and not debugts) would give you a line in various parts of ffmpeg.c on the timestamps
[19:00:33 CEST] <JEEB> although I remember it had applied something already to the logged value so it's not 100% perfect, but gives you some data to debug off of
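A trimmed-down version of that debugging run could look roughly like this; the device names and output file are placeholders taken from the command above, and -copyts with -vsync passthrough keeps ffmpeg.c from rewriting the timestamps being inspected:
    ffmpeg -v verbose -debug_ts -f openal -i "Monitor of Áudio interno Estéreo analógico" -f openal -i "Áudio interno Estéreo analógico" -copyts -vsync passthrough -filter_complex "amix=inputs=2" -c:a libopus debug.webm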
[19:03:32 CEST] <FurretUber> How do I make the video have two separate audio inputs? Right now I'm using -filter_complex amix=inputs=2, and maybe this filter is causing the issue
[19:03:55 CEST] <FurretUber> Two separate audio outputs*
[19:10:06 CEST] <furq> get rid of amix?
[19:10:17 CEST] <furq> it's not really clear what you want to do
[19:11:45 CEST] <FurretUber> I'm trying to have audio from two different inputs in the output file
[19:12:33 CEST] <furq> if you want them as two separate streams then just get rid of amix
[19:13:25 CEST] <FurretUber> Without the amix, it records only one audio input in the output file
[19:13:37 CEST] <furq> add -map 0 -map 1 -map 2
[19:15:28 CEST] <furq> on which note, it'd be nice to have -map -1 or something to map all streams from all inputs
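Concretely, for the capture above that means dropping amix and mapping each input explicitly; a sketch with placeholder device names and a plain software video encoder. Note that most players only play one audio track at a time, usually the first or the default one:
    ffmpeg -f openal -i "monitor device" -f openal -i "microphone device" -f x11grab -i :0.0 -map 0:a -map 1:a -map 2:v -c:a libopus -c:v libvpx out.webm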
[19:19:19 CEST] <FurretUber> While it added both streams to the output, only one of them is playing (tested with mpv, VLC and ffplay), which is not the intended result
[19:49:19 CEST] <FurretUber> From this page: https://trac.ffmpeg.org/wiki/AudioChannelManipulation
[19:49:19 CEST] <FurretUber> I have tried to use "[0:a][1:a]amerge=inputs=2,pan=stereo|c0<c0+c2|c1<c1+c3[aout]" -map "[aout]" instead of amix=inputs=2, but it has the same issue: when that message appears the microphone sound gets out of sync, as if it was overcorrected, and plays before the video and the internal audio
[21:04:06 CEST] <Romano> What are the settings that need to match between files for concatenating with the concat demuxer?
[21:04:22 CEST] <BtbN> pretty much all of them
[21:04:49 CEST] <Romano> In the documentation they only talk about codec and timescale
[21:05:25 CEST] <Romano> Is there a way to make a video file have the same settings as another one?
[21:05:47 CEST] <Romano> Or will I need to do it for each setting?
[21:05:59 CEST] <BtbN> transcode it to the same codec with exact same parameters
[21:06:12 CEST] <BtbN> iirc the extradata has to match
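For reference, the concat demuxer side of it is just a text file list plus a stream copy; the filenames are placeholders:
    # list.txt
    file 'part1.mp4'
    file 'part2.mp4'
    
    ffmpeg -f concat -safe 0 -i list.txt -c copy joined.mp4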
[21:07:53 CEST] <furq> so i typed "york" into rhymezone.com and one of the exact matches it returns is "fourcc"
[21:07:58 CEST] <furq> i trust that's how you've all been pronouncing it
[21:12:30 CEST] <DHE> four-see-see ?
[21:13:06 CEST] <furq> yeah i thought that too considering it should be styled FourCC
[21:13:13 CEST] <furq> but a higher authority has spoken
[21:15:49 CEST] <Romano> How can I check the settings of a video so I can transcode the others to match?
[21:16:17 CEST] <Romano> And is there a single command to transcode a video file with the same parameters?
[21:16:26 CEST] <Romano> I'm using mp4 format and H264 codec
[21:16:50 CEST] <Romano> I'm no expert at H264 but I know there are different profiles and configurations
[21:18:02 CEST] <Romano> Is it sufficient to only match codecs, or will I need to re-encode with the same profile?
[21:22:27 CEST] <furq> you presumably want all the settings the decoder cares about to match exactly
[21:22:44 CEST] <furq> so obviously dimensions, framerate, pixel format, ref frames, b-frames
[21:22:54 CEST] <furq> there's probably some other stuff as well
[21:26:29 CEST] <furq> if the source is x264 then mediainfo will show you the exact encoding settings used
[21:26:39 CEST] <furq> otherwise just try and match profile, level, refs etc
[21:26:42 CEST] <Romano> Is there some documentation for the settings to match?
[21:26:53 CEST] <furq> not that i've ever been able to find
[21:27:26 CEST] <Romano> Ok, I guess I will try all of the above and see if the concatenation is successful
[21:27:52 CEST] <Romano> Oh, and what about bitrate?
[21:28:00 CEST] <BtbN> Shouldn't matter
[21:28:03 CEST] <furq> ^
[21:28:04 CEST] <BtbN> If the extradata matches, you're good
[21:28:12 CEST] <BtbN> ffprobe prints it somewhere
[21:28:16 CEST] <BtbN> it's a bunch of hex
[21:28:49 CEST] <Romano> ffprobe does print the information I need
[21:29:03 CEST] <BtbN> it prints the extradata
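One way to eyeball it, assuming your ffprobe has -show_data (which adds the extradata hex dump to the -show_streams output); the diff will also flag harmless differences like duration, so focus on the extradata lines:
    ffprobe -v error -show_streams -show_data part1.mp4 > a.txt
    ffprobe -v error -show_streams -show_data part2.mp4 > b.txt
    diff a.txt b.txt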
[21:29:11 CEST] <Romano> One more thing. What about number of streams?
[22:20:34 CEST] <FurretUber> Here is a sample video with the audio sync issue: https://mega.nz/#!4gRjmIaD!_2lQJeHO5FB76lxH_XvARePgEqGO-N9GW6IZCCDBCFI
[22:20:35 CEST] <FurretUber> Please note that different media players play the video differently (which is already a bad sign). Some of them have the internal audio delayed, while others have the microphone audio ahead, while others have both audio streams ahead and others have both audio streams delayed.
[22:25:53 CEST] <FurretUber> https://pastebin.com/df6XfZj4
[00:00:00 CEST] --- Sun Apr 15 2018