[Ffmpeg-devel-irc] ffmpeg.log.20200112

burek burek at teamnet.rs
Mon Jan 13 03:05:01 EET 2020


[12:16:49 CET] <Mathijs> I'm streaming live video (BlackMagic Decklink input) of which I'd like to replace the audio with another live audio stream (windows soundcard input). I need to keep both streams in sync. Currently the audio drifts out of sync after a while. Can ffmpeg correct for the slight difference in speeds of both streams?
[13:50:44 CET] <TheWild> hello
[13:51:45 CET] <TheWild> the audio in the movie is damaged starting from around 2:00 and neither ffmpeg nor mpv wants to play it. I suspect the whole audio stream is contained within the file. Is there a way to fix the audio?
[13:58:25 CET] <TheWild> (hmm... but it works on website?)
[13:58:25 CET] <TheWild> Okay, I think I found out the cause. The website served a damaged stream to fool youtube-dl.
[15:18:03 CET] <funnybunny2> join #ffmpeg-devel
[15:18:20 CET] <funnybunny2> Hey
[15:20:02 CET] <funnybunny2> I can't include avformat.h and compile with gcc -ansi. Is ffmpeg intended to only support C99? I want to keep my source code C89. What should I do to isolate ffmpeg?
[15:21:38 CET] <funnybunny2> Here are the errors I get: https://pastebin.com/dbQfauTv
[15:22:00 CET] <BtbN> FFmpeg is C99, indeed.
[15:22:58 CET] <funnybunny2> It doesn't seem like there are too many errors so far, but maybe if I use more functions there will be more. I wonder how hard it would be to just make the headers C89 compatible
[15:23:57 CET] <funnybunny2> I can't think of a way to apply C89 checking to my own code while also including ffmpeg headers
[15:25:19 CET] <funnybunny2> I guess I'll just deal with it for now
[15:25:26 CET] <funnybunny2> Hope it's not a problem on Windows
[15:25:55 CET] <funnybunny2> Last time the MSVC compiler didn't fully support C99
[15:52:25 CET] <utack> is there a way to use benchmark to determine the time it takes to decode each frame? for example to determine if even the most complex frame in an entire stream could be decoded in under 16ms?
[16:16:48 CET] <DHE> utack: you can try an impromptu decode run: ffmpeg -i input.mp4 -c:v rawvideo -c:a pcm_s16le -f null /dev/null
[16:16:52 CET] <DHE> I think that's right
[16:29:05 CET] <vlt> Hello. How can I prepend a 5 second black clip to a video file without having to recode it? Is there a way to concat two clips if I manage to match their properties somehow?
[16:46:45 CET] <utack> DHE: thank you. But how would I see if one frame can't be decoded quickly enough?
[16:46:50 CET] <utack> it just tells me the average fps
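For per-frame numbers there is also ffmpeg's -benchmark_all flag, which logs the time spent in each individual decode/encode call; a sketch (in.mp4 is a placeholder input, and the command is shown as a string rather than run):

```shell
# -benchmark_all prints a "bench:" line per decode/encode call, so a
# single slow (complex) frame stands out instead of vanishing into the
# average fps reported at the end. in.mp4 is a hypothetical input file.
cmd='ffmpeg -benchmark_all -i in.mp4 -f null -'
echo "$cmd"
```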
[16:48:03 CET] <DHE> vlt: the concat demuxer should suffice, but you need to make sure you get the parameters exactly right. h264 for example is sensitive to the profile (baseline, main, high)
[16:48:13 CET] <DHE> (not to be confused with the concat filter)
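The concat-demuxer approach DHE describes might look like this (filenames are placeholders; the black clip has to be generated with the same codec, profile, resolution, framerate and timebase as the main clip, and the command is shown as a string rather than run):

```shell
# List file for the concat demuxer (not the concat filter).
# black5s.mp4 and main.mp4 are hypothetical filenames.
cat > list.txt <<'EOF'
file 'black5s.mp4'
file 'main.mp4'
EOF
# Stream copy, so nothing is re-encoded:
cmd='ffmpeg -f concat -safe 0 -i list.txt -c copy joined.mp4'
echo "$cmd"
```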
[17:23:52 CET] <yang> I am trying to create video DVD
[17:23:55 CET] <yang> http://paste.debian.net/plainh/5f314534
[17:23:58 CET] <yang> got some errors
[17:24:28 CET] <yang> ffmpeg -i Contaboscale5.avi -target pal-dvd movie.mpg
[17:25:04 CET] <yang> looking at https://www.savvyadmin.com/tag/growisofs/
[17:25:12 CET] <yang> couldn't find any recent DVD video manual
[17:32:27 CET] <cehoyos> yang: Please provide the command line you tested together with the complete, uncut console output - the errors are unexpected afair
[17:43:41 CET] <yang> cehoyos: https://ufile.io/kih2sjg2
[17:52:34 CET] <cehoyos> Is your input an actual video?
[17:52:45 CET] <cehoyos> I ask because I can reproduce the issue with input from /dev/random
[17:53:32 CET] <cehoyos> Try the output option "-trellis 1"
[18:02:46 CET] <yang> it's an actual video
[18:03:45 CET] <yang> same buffer overflows with ffmpeg -i Contaboscale5.avi -target pal-dvd -trellis 1 movie.mpg
[18:04:30 CET] <cehoyos> It may help if you change the bitrate with -b:v, you could also test with a very old FFmpeg version to check for a regression. (gtg)
[18:17:00 CET] <yang> ffmpeg -i /home/yang/Matroska/Contaboscale5.avi -i /home/yang/Matroska/Contaboscale5.avi -map 1:0 -map 0:1 -y -target pal-dvd -sn -g 12 -bf 2 -strict 1 -ac 2 -aspect 1.3333333333333333 -s 720x576 -trellis 1 -mbd 2 -b:a 224k -b:v 5000k /root/movie/movies/movie_0.mpg
[18:17:08 CET] <yang> this one gives buffer errors as well
[19:26:52 CET] <mr_lou> So I need to convert a bunch of WAVs into AMRs... but the amr_nb codec is apparently disabled in my ffmpeg. How do I enable it? Do I need to compile ffmpeg myself in order to do that?
[19:27:44 CET] <pink_mist> probably
[19:27:51 CET] <pink_mist> never heard of AMR before
[19:30:17 CET] <mr_lou> It's a limited audio format. I know it from back in the day before iPhone and Android. Cellphone games back then used AMR for streamed audio, typically sound effects, and MIDI for music.
[19:35:03 CET] <mr_lou> And that's what I need it for. Porting a game to the old phones.
[19:36:25 CET] <BtbN> There simply does not appear to be an encoder for that format. Only a decoder.
[19:36:55 CET] <mr_lou> Oh....  but when I google for "ffmpeg wav to amr", there are plenty of results showing how it's done.
[19:37:01 CET] <mr_lou> Except I don't have the "amr_nb" encoder.
[19:37:14 CET] <mr_lou> "Automatic encoder selection failed for output stream #0:0. Default encoder for format amr (codec amr_nb) is probably disabled. Please choose an encoder manually."
[19:38:36 CET] <BtbN> libopencore-amr appears to have encoding capabilities
[19:39:01 CET] <BtbN> But you will most likely need to build it and ffmpeg yourself
[19:42:59 CET] <mr_lou> :-/
[19:43:42 CET] <mr_lou> Ubuntu does have libopencore...  I could try installing that and see...
[19:43:54 CET] <BtbN> That's not how that works
[19:44:03 CET] <BtbN> but at least you won't have to build it yourself then. Only ffmpeg.
[19:44:05 CET] <mr_lou> Why not? It says it's a shared library.
[19:44:23 CET] <BtbN> Installing it won't magically modify ffmpeg to have it built in.
[19:45:03 CET] <mr_lou> Well I don't know enough. Thought maybe ffmpeg looked for external libs too.
[19:45:06 CET] <mr_lou> I guess not.
[19:46:54 CET] <BtbN> It does. At compile time.
[19:48:47 CET] <mr_lou> :->
[19:50:15 CET] <mr_lou> I have a love/hate relationship with that. I hate when apps need libs that I don't have (anymore) because the app is now looking for an older version. So I always prefer when everything is compiled together - except now, of course, when I need AMR and have never succeeded in compiling any C stuff.
[19:58:33 CET] <pink_mist> mr_lou: compiling ffmpeg from latest git is the only really supported ffmpeg in this channel ... most people will also support a compile from the latest release, but not everyone will .. if you're getting your ffmpeg from your OS, it's the OS's job to support you
[19:59:05 CET] <mr_lou> .....job?
[19:59:22 CET] <pink_mist> what else would you call it?
[19:59:24 CET] <pink_mist> pastime?
[19:59:54 CET] <mr_lou> Hobby I guess.
[20:00:15 CET] <pink_mist> does your OS get hobbies?
[20:00:27 CET] <mr_lou> O_o
[20:01:38 CET] <mr_lou> I think you may be taking things a bit too seriously, mate. And I don't, so....  cheers!
[20:35:56 CET] <cehoyos> pink_mist: Did you try a precompiled static binary from the FFmpeg download page? I would expect that to support amr encoding. If not, compilation should be trivial on Ubuntu.
[20:44:09 CET] <pink_mist> cehoyos: I'm not the one that had problems with AMR
[20:54:15 CET] <cehoyos> sorry..
[21:25:03 CET] <funnybunny2> https://ffmpeg.org/doxygen/4.0/group__lavu__sampfmts.html#gaf9a51ca15301871723577c730b5865c5 "The data described by the sample format is always in native-endian order." Native to what?
[21:26:07 CET] <durandal_1707> to your machine/OS
[21:26:32 CET] <funnybunny2> Surely ffmpeg doesn't detect my CPU architecture and convert endianness of samples in audio files
[21:26:50 CET] <durandal_1707> you are deeply confused
[21:27:06 CET] <funnybunny2> Why would it do that instead of just returning the samples as they are in the file?
[21:27:18 CET] <durandal_1707> a file can be any endianness
[21:27:37 CET] <funnybunny2> Yeah, but don't specific audio formats specify an endianness?
[21:27:49 CET] <durandal_1707> yes, they do
[21:27:59 CET] <funnybunny2> So why wouldn't ffmpeg leave that untouched?
[21:28:12 CET] <cehoyos> Because that would mean twice as many audio formats for no gain
[21:28:18 CET] <cehoyos> (in nearly all cases)
[21:28:33 CET] <funnybunny2> I don't understand
[21:28:56 CET] <cehoyos> Instead of five (?) internal audio formats, you would need twice as many
[21:29:17 CET] <funnybunny2> Why? I don't get it
[21:29:34 CET] <cehoyos> Instead of only s16, we would have s16le and s16be
[21:29:55 CET] <cehoyos> This would definitely be technically possible, but why?
[21:30:16 CET] <funnybunny2> durandal_1707 is right. I am deeply confused
[21:30:18 CET] <velix> What's the "state of the art" switch for 50fps to 25fps? I've found "-vf setpts=PTS*2 -r 25 -af atempo=0.5" on the web?
[21:30:59 CET] <cehoyos> There is also a fps filter, both variants have advantages and disadvantages
[21:31:10 CET] <cehoyos> But what you provided should work fine
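The two variants can be sketched side by side (filenames are placeholders; the commands are shown as strings rather than run):

```shell
# Variant 1 (as asked): double the video timestamps and halve the audio
# tempo, so a 50 fps clip becomes 25 fps with doubled duration.
slow='ffmpeg -i in50.mp4 -vf setpts=PTS*2 -r 25 -af atempo=0.5 out25.mp4'
# Variant 2: the fps filter drops every other frame, keeping the
# original duration, so the audio can simply be stream-copied.
drop='ffmpeg -i in50.mp4 -vf fps=25 -c:a copy out25.mp4'
printf '%s\n%s\n' "$slow" "$drop"
```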
[21:31:26 CET] <funnybunny2> OK, so .wav files for example store PCM samples in little-endian. Suppose my machine were big-endian. ffmpeg reads the .wav file and returns samples. Why would it convert them to big-endian instead of just returning them as-is?
[21:32:01 CET] <cehoyos> Because we would need twice as many audio formats (and twice as many conversion functions)
[21:32:06 CET] <velix> cehoyos: I hate all those FPS and Audio hz problems... The Big Bang Theory sounds totally different on 44.1 KHz PAL than 48 KHz on DVD and other than 48 KHz on NTSC DVD :(
[21:32:15 CET] <velix> And it sounds different than TV
[21:32:17 CET] <cehoyos> And since wav files take no cpu, the gain would be negligible
[21:32:37 CET] <funnybunny2> Why do you need another audio format?
[21:32:37 CET] <cehoyos> velix: If you hear the difference (I don't), I suspect you are able to fix it...
[21:32:39 CET] <cehoyos> With FFmpeg
[21:32:47 CET] <cehoyos> Is this "Who's on First"?
[21:32:52 CET] <funnybunny2> Are you talking about two .wav formats?
[21:33:11 CET] <velix> cehoyos: Yeah, not with ffmpeg I guess. I was working for a professional music studio 15 years ago. We had expensive software for that.
[21:33:13 CET] <cehoyos> No, are you?
[21:33:16 CET] <velix> Also expensive hardware :)
[21:33:24 CET] <velix> DAT vs. CDA always was a problem.
[21:33:28 CET] <funnybunny2> I don't see why it matters what endianness the samples are stored in in the file.
[21:33:49 CET] <cehoyos> I don't think this is a useful argument, but since I don't hear the difference you will have to find somebody else to convince you it makes no sense;-)
[21:34:04 CET] <cehoyos> funnybunny2: You claimed that afair
[21:34:14 CET] <cehoyos> We claim it makes no difference...
[21:34:18 CET] <funnybunny2> I just want to hand the samples off to ALSA. Why do I need to check what my CPU native endianness is? It shouldn't depend on that
[21:34:26 CET] <velix> cehoyos: Yeah, I'm an audiophile and a musician. I can detect those pitch shifts :(
[21:34:31 CET] <funnybunny2> Obviously it's little-endian though...
[21:34:43 CET] <durandal_1707> funnybunny2: ALSA also uses native, so nothing needs to be done
[21:34:51 CET] <cehoyos> There are filters that fix that, I find it unlikely that they are not high-quality
[21:34:59 CET] <funnybunny2> ALSA takes whatever you want
[21:35:05 CET] <funnybunny2> https://www.alsa-project.org/alsa-doc/alsa-lib/group___p_c_m.html#gaa14b7f26877a812acbb39811364177f8
[21:35:23 CET] <furq> velix: if your ffmpeg was built with librubberband then i guess that should be higher quality than atempo
[21:35:41 CET] <velix> furq: thanks! I could recompile it maybe. Thanks for this idea.
[21:35:41 CET] <cehoyos> Note that the internal format is not meant to be provided to alsa; there are audio "encoders" that produce what (for example) alsa wants.
[21:35:55 CET] <durandal_1707> furq: nope, it does different things with a different algorithm
[21:35:56 CET] <cehoyos> If you don't want to use these "encoders", you have to check endianness, yes
[21:36:06 CET] <cehoyos> (seems trivial)
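The endianness check called trivial above might look like this in C (a sketch; the SND_PCM_FORMAT names are the ALSA format constants, picked here only to show the mapping from FFmpeg's native-endian AV_SAMPLE_FMT_S16):

```c
#include <stdint.h>

/* Returns 1 on a little-endian machine, 0 on a big-endian one. */
static int is_little_endian(void)
{
    uint16_t probe = 1;
    return *(const uint8_t *)&probe == 1; /* is the low byte stored first? */
}

/* AV_SAMPLE_FMT_S16 is native-endian, so the matching ALSA format
   name depends on the machine this code runs on. */
static const char *alsa_s16_format_name(void)
{
    return is_little_endian() ? "SND_PCM_FORMAT_S16_LE"
                              : "SND_PCM_FORMAT_S16_BE";
}
```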
[21:36:26 CET] <funnybunny2> What do you mean "encoders"?
[21:36:44 CET] <cehoyos> There is a pcm_s16le encoder, and alsa accepts pcm_s16le.
[21:36:45 CET] <funnybunny2> I am using ffmpeg to decode the audio into PCM
[21:37:05 CET] <cehoyos> The internal format is not (necessarily) pcm, this is more a coincidence
[21:37:11 CET] <funnybunny2> Then my intent was to just look at the codec sample format and specify that directly to ALSA
[21:37:38 CET] <cehoyos> This is possible (because the sample format is very similar to pcm), but this is not how FFmpeg works in general
[21:37:44 CET] <funnybunny2> Oh
[21:38:02 CET] <funnybunny2> So I am always supposed to explicitly convert samples to PCM with ffmpeg?
[21:38:09 CET] <cehoyos> In general, you have something like: protocol -> demuxer -> decoder -> filter -> encoder -> output
[21:38:41 CET] <cehoyos> You don't have to if you know what you are doing, but you cannot ask "why is there only native s16, I want s16 to be little endian on big endian hardware"
[21:38:41 CET] <furq> if you decode it then you should encode it
[21:38:58 CET] <velix> cehoyos: sorry to crash your discussion: I had problems with the decoder in the past, because it added padding, which threw the audio out of sync
[21:39:14 CET] <cehoyos> .. because there is a pcm_s16le encoder that you can use if you want to be sure to have pcm_s16le
[21:39:18 CET] <velix> But again: normal people might not be able to detect this :)
[21:39:40 CET] <cehoyos> velix: A decoder cannot add padding
[21:40:04 CET] <velix> cehoyos: LAME did. I've reported it and it got fixed.
[21:40:08 CET] <velix> It's a few years ago
[21:40:11 CET] <furq> lame is an encoder
[21:40:13 CET] <furq> in spite of the name
[21:40:17 CET] <velix> furq: lame also does decoding.
[21:40:23 CET] <cehoyos> But not in FFmpeg
[21:40:31 CET] <velix> cehoyos: Yeah and that was the problem.
[21:40:42 CET] <velix> Lame encoding and lame decoding = everything was fine
[21:40:51 CET] <velix> lame encoding and ffmpeg decoding = problem
[21:40:56 CET] <funnybunny2> Sounds like, thanks to the fact that ffmpeg returns samples in the machine's native endianness, I potentially have to do an LE-to-LE conversion which does absolutely nothing, or otherwise add a check for my CPU endianness
[21:41:00 CET] <velix> But it got fixed years ago
[21:41:17 CET] <durandal_1707> funnybunny2: no need to use pcm encoders
[21:41:25 CET] <cehoyos> funnybunny2: If the conversion does absolutely nothing, there is no problem, no?
[21:41:32 CET] <funnybunny2> cehoyos: Yeah, a waste of time
[21:41:40 CET] <durandal_1707> funnybunny2: just if they are planar/packed formats
[21:41:49 CET] <cehoyos> No, a conversion that does absolutely nothing, is free and does not waste time
[21:41:58 CET] <velix> I've spent lots of time with the LAME community when working in the studio. We needed to create perfect MP3s for online stores (before they did it on their own).
[21:42:23 CET] <funnybunny2> cehoyos: Well I can't say that. I don't know what the ffmpeg encoding functions do. Maybe they still loop through the whole buffer
[21:42:36 CET] <cehoyos> But I can say it.
[21:43:02 CET] <cehoyos> You, otoh, could simply benchmark it to prove me wrong (which will be difficult)
[21:43:09 CET] <cehoyos> in this case, I mean;-)
[21:43:16 CET] <durandal_1707> funnybunny2: you only need to worry about planar sample formats, because alsa does not support those
[21:43:43 CET] <funnybunny2> Wait...
[21:43:54 CET] <funnybunny2> Are you saying there would need to be more formats like more enums
[21:43:57 CET] <funnybunny2> In ffmpeg
[21:44:06 CET] <funnybunny2> Because that's exactly what I think there should be!
[21:44:25 CET] <funnybunny2> Just like https://www.alsa-project.org/alsa-doc/alsa-lib/group___p_c_m.html#gaa14b7f26877a812acbb39811364177f8
[21:44:29 CET] <funnybunny2> https://ffmpeg.org/doxygen/4.0/group__lavu__sampfmts.html#gaf9a51ca15301871723577c730b5865c5
[21:45:28 CET] <cehoyos> As Paul explained, there are no planar formats in alsa: You can either take care yourself or use the pcm_s16le encoder (as said above)
[21:45:47 CET] <funnybunny2> I know there are no planar formats
[21:45:55 CET] <funnybunny2> I can easily convert ffmpeg samples to interleaved
[21:46:05 CET] <funnybunny2> Has nothing to do with endianness
[21:46:33 CET] <cehoyos> I am curious: How do you "easily" convert from planar to interleaved?
[21:46:33 CET] <funnybunny2> Wait
[21:46:44 CET] <funnybunny2> Are we talking bit endianness or byte endianness
[21:47:05 CET] <durandal_1707> just use alsa endian formats
[21:47:49 CET] <funnybunny2> So
[21:48:37 CET] <funnybunny2> For planar to interleaved, you just loop through all the samples taking one sample from each channel and output in that order
[21:48:51 CET] <funnybunny2> I have some code
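The loop just described, in a naive C form (a sketch assuming 16-bit samples; as noted below, FFmpeg's own converters are heavily optimized and will be much faster than this):

```c
#include <stdint.h>
#include <stddef.h>

/* Copy planar audio (one buffer per channel) into a single
   interleaved buffer: L0 R0 L1 R1 ... for stereo. */
static void interleave_s16(int16_t *out, int16_t *const planes[],
                           int channels, size_t nb_samples)
{
    for (size_t i = 0; i < nb_samples; i++)        /* each sample index */
        for (int ch = 0; ch < channels; ch++)      /* each channel      */
            *out++ = planes[ch][i];
}
```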
[21:49:16 CET] <cehoyos> Do you do that in C? The reason I ask is that above you were concerned about speed and FFmpeg contains heavily optimized functions to convert between planar and interleaved...
[21:49:45 CET] <durandal_1707> or just use swresample, it does all these conversions for you automagically
[21:50:23 CET] <funnybunny2> Yes, in C
[21:50:39 CET] <cehoyos> That will unavoidably take a lot of time
[21:50:51 CET] <cehoyos> (That can be avoided when not using C)
[21:51:02 CET] <velix> furq: it's built against librubberband now. How can I activate it? :D
[21:51:17 CET] <furq> https://ffmpeg.org/ffmpeg-filters.html#rubberband
[21:51:26 CET] <velix> furq: -filter:a rubberband=tempo=
[21:51:29 CET] <velix> furq: thx
[21:52:25 CET] <funnybunny2> Something like this https://pastebin.com/NBDpZe69
[21:52:45 CET] <cehoyos> Yes, this is very slow
[21:52:52 CET] <nicolas17> why do that yourself in C when ffmpeg already has asm-optimized functions to do the same?
[21:53:04 CET] <funnybunny2> I didn't know it did
[21:53:18 CET] <nicolas17> cehoyos said "FFmpeg contains heavily optimized functions..."
[21:53:28 CET] <funnybunny2> Yeah, I wrote this code before he said that
[21:53:29 CET] <funnybunny2> lol
[21:54:23 CET] <funnybunny2> So what are these functions?
[21:54:44 CET] <cehoyos> Use the resample filter or libavresample directly
[21:54:55 CET] <cehoyos> (You cannot use the optimized functions directly)
[21:55:01 CET] <durandal_1707> LIBSWRESAMPLE
[21:55:06 CET] <funnybunny2> I followed this tutorial https://steemit.com/programming/@targodan/decoding-audio-files-with-ffmpeg and I guess the author didn't know about them
[21:55:08 CET] <durandal_1707> avresample is deprecated
[21:55:21 CET] <funnybunny2> I dunno
[21:55:34 CET] <funnybunny2> I think his goal was only to graph waveforms or something since he output everything in floating point
[21:55:46 CET] <durandal_1707> stop using random clueless blog posts about ffmpeg found on the internet
[21:55:56 CET] <cehoyos> Sorry, the filter name is aresample and the library name is libswresample;-))
[21:56:09 CET] <funnybunny2> Well ffmpeg has terrible examples/documentation
[21:56:55 CET] <funnybunny2> I have this https://ffmpeg.org/doxygen/trunk/decode__audio_8c_source.html
[21:57:27 CET] <funnybunny2> But it didn't show how to handle multiple file formats
[21:59:20 CET] <velix> Okay, that doesn't work. The audio is funny now... I think, I have to do more reading.
[21:59:40 CET] <cehoyos> funnybunny2: Did you already see doc/examples?
[21:59:59 CET] <funnybunny2> cehoyos: I saw the one I linked
[22:00:15 CET] <funnybunny2> These? https://ffmpeg.org/doxygen/trunk/dir_687bdf86e8e626c2168c3a2d1c125116.html
[22:00:56 CET] <furq> https://ffmpeg.org/doxygen/trunk/demuxing_decoding_8c-example.html
[22:01:07 CET] <furq> this opens arbitrary input formats/codecs
[22:01:51 CET] <funnybunny2> Yeah, but I don't want to do video
[22:01:58 CET] <funnybunny2> I'm only interested in audio
[22:02:40 CET] <furq> well that part should be the same either way
[22:02:42 CET] <furq> https://ffmpeg.org/doxygen/trunk/transcode_aac_8c-example.html
[22:02:46 CET] <funnybunny2> I essentially want to start by writing aplay
[22:02:51 CET] <furq> this should cover more or less everything you want though
[22:03:25 CET] <funnybunny2> Does aplay use ffmpeg?
[22:04:24 CET] <durandal_1707> nope
[22:04:53 CET] <funnybunny2> Wow, maybe I don't need ffmpeg. I wonder what formats alsa supports
[22:05:49 CET] <durandal_1707> voc, wav, raw or au
[22:05:59 CET] <funnybunny2> Why is it playing my mp3 files...
[22:06:23 CET] <funnybunny2> Oh nevermind
[22:06:26 CET] <funnybunny2> I typed the wrong thing
[22:09:13 CET] <funnybunny2> So what I'm trying to write to get started is just an audio-only ffplay
[22:09:50 CET] <durandal_1707> why? there are already players doing it
[22:10:00 CET] <funnybunny2> That's not my end goal
[22:11:49 CET] <funnybunny2> Oh, it looks like ffplay is using SDL
[22:11:50 CET] <funnybunny2> bleh
[22:12:07 CET] <funnybunny2> See there is no complete example with just ffmpeg and alsa
[22:12:29 CET] <funnybunny2> Which is probably why I'm fumbling around
[22:12:56 CET] <funnybunny2> I've written aplay. I just need to connect it to the ffmpeg decoding
[22:14:23 CET] <funnybunny2> I don't need this demuxing example or this transcoding example
[22:14:35 CET] <funnybunny2> I'm just decoding to PCM
[22:15:29 CET] <nicolas17> if you have an audio file you need to demux it and then decode the resulting packets
[22:16:15 CET] <funnybunny2> By demux, you mean just getting the first audio stream?
[22:16:27 CET] <funnybunny2> That is what https://steemit.com/programming/@targodan/decoding-audio-files-with-ffmpeg does
[22:17:42 CET] <nicolas17> yes
[22:17:46 CET] <cehoyos> If you know exactly what you need, why are you asking here? We believe you have to learn audio filtering with FFmpeg which is explained in doc/examples.
[22:18:12 CET] <nicolas17> decoders expect audio packets, not all the bytes in a file
[22:19:01 CET] <funnybunny2> I guess I will also use the resample library so that I am always passing the same sample format to ALSA
[22:19:09 CET] <funnybunny2> Using this example https://ffmpeg.org/doxygen/trunk/resampling__audio_8c_source.html
[22:19:34 CET] <furq> i'm pretty sure transcoding_aac.c does everything you asked for except outputting to alsa
[22:19:44 CET] <furq> except it encodes to aac instead of pcm
[22:20:00 CET] <furq> which isn't hard to change
[22:20:08 CET] <funnybunny2> OK
[22:20:12 CET] <funnybunny2> I'll look at it
[22:22:47 CET] <funnybunny2> Why does ALSA support so many input sample formats?
[22:23:08 CET] <funnybunny2> If everyone is suggesting I just convert to one specific PCM format to pass to ALSA
[22:24:19 CET] <pink_mist> isn't there an alsa-related channel that might have that answer?
[22:25:22 CET] <funnybunny2> Yeah, good idea
[22:33:10 CET] <velix> Interesting. This works: 1. demux video (a.h264) and audio (b.aac) and remux them: ffmpeg -i a.h264 -r 25 -i b.aac -c copy final.mp4
[22:36:13 CET] <funnybunny2> If converting to a specific PCM format with ffmpeg, what format should I use?
[22:36:30 CET] <funnybunny2> Not sure why it matters
[22:39:29 CET] <velix> Wow. This also works: "-filter:v fps=fps=25"
[22:45:03 CET] <velix> Is it possible to just drop half the frames (50 down to 25)?
[22:45:07 CET] <velix> instead of recoding?
[22:45:07 CET] <cehoyos> velix: Didn't I write that somewhere above?
[22:45:18 CET] <velix> cehoyos: No, really not.
[22:45:37 CET] <velix> I came in with "-vf setpts=PTS*2 -r 25 -af atempo=0.5" - but it stutters the audio
[22:45:44 CET] <cehoyos> funnybunny2: alsa contains internal conversions for these "many input formats", most of them either slow or low-quality or both; s16le should be a safe bet.
[22:45:59 CET] <cehoyos> There is also a fps filter, both variants have advantages and disadvantages
[22:46:13 CET] <cehoyos> ... is what I wrote before
[22:46:52 CET] <funnybunny2> cehoyos: What does fps stand for?
[22:47:08 CET] <velix> frames per second
[22:47:10 CET] <cehoyos> For a filter that I suggested to velix an hour ago and that I now found
[22:47:13 CET] <nicolas17> the fps thing wasn't for you :P
[22:47:16 CET] <nicolas17> velix: to skip frames without re-encoding you would need the source video to use an intra-only codec
[22:47:17 CET] <cehoyos> he
[22:47:24 CET] <funnybunny2> Oh
[22:47:28 CET] <velix> nicolas17: intra?
[22:47:34 CET] <nicolas17> where each frame is a full image
[22:47:47 CET] <nicolas17> with no compression/prediction between one frame and the next
[22:47:59 CET] <velix> oh okay.
[22:47:59 CET] <cehoyos> You can skip frames with -r and -vf fps, in very specific cases, you can throw away frames without re-encoding (but I believe this is not what you asked for)
[22:48:20 CET] <velix> cehoyos: Actually, I never dealt with 50 fps video before.
[22:48:35 CET] <nicolas17> with common video codecs like MPEG, you can't just throw away every other frame without re-encoding, because compressed frames depend on each other
[22:48:45 CET] <velix> nicolas17: makes sense!
[22:49:03 CET] <velix> Damn... it's for a friend, whose stupid Smart TV cannot play back 50 fps.
[22:49:13 CET] <velix> I've got a $20 RPi, which can do anything.
[22:49:17 CET] <velix> I hate smart TVs
[22:49:31 CET] <funnybunny2> cehoyos: You wrote protocol -> demuxer -> decoder -> filter -> encoder -> output. What is the filter?
[22:49:37 CET] <velix> I'll tell him to get a FireTV stick with Kodi...
[22:49:44 CET] <cehoyos> aresample
[22:50:00 CET] <funnybunny2> Isn't that also the encoder?
[22:50:06 CET] <cehoyos> You can grep for it in doc/examples
[22:50:18 CET] <cehoyos> No, there is not encoder called "aresample"
[22:50:58 CET] <funnybunny2> I think just file -> audio stream -> native samples -> s16le -> alsa
[22:51:09 CET] <cehoyos> Yes
[22:51:12 CET] <funnybunny2> OK
[22:51:26 CET] <cehoyos> To get from "native samples" to the pcm_s16le encoder, you need aresample
[22:51:49 CET] <cehoyos> If you don't want to use the pcm_s16le encoder, you need aresample to get from "native samples" to s16
[22:52:00 CET] <funnybunny2> I thought the resampler encodes native samples to s16le
[22:52:28 CET] <cehoyos> No, as I tried to explain before, it would have to do that if we had native and non-native endian audio formats
[22:52:29 CET] <funnybunny2> Oh, I forgot that ffmpeg doesn't have BE and LE
[22:52:43 CET] <cehoyos> But since we only have one native audio format, this is not necessary.
[22:52:57 CET] <funnybunny2> So then the encoder is just the dummy thing I add on at the end in case my machine is stupidly BE
[22:53:06 CET] <funnybunny2> Which it never will be
[22:53:44 CET] <cehoyos> No: If you choose to use the encoder, you always use it
[22:53:45 CET] <funnybunny2> For future proofing I guess
[22:54:23 CET] <funnybunny2> I understand I always use it. I just meant that the only reason to use it is for a hypothetical BE machine
[22:54:33 CET] <cehoyos> But since the alsa hardware will expect native endian audio, I guess you don't absolutely have to use the encoder.
[22:54:47 CET] <funnybunny2> ALSA accepts many different formats
[22:54:56 CET] <cehoyos> Please see above
[22:55:10 CET] <funnybunny2> About the slowness?
[22:55:32 CET] <funnybunny2> Like, it converts everything to native endian at the end?
[22:55:36 CET] <cehoyos> You have to choose something that is supported by your hardware. You cannot know but s16 is the most likely choice
[22:56:01 CET] <cehoyos> If you are unlucky, it will resample everything to 44100 or 48k
[22:56:15 CET] <funnybunny2> But ALSA will just convert if I pass the wrong thing, right?
[22:56:30 CET] <funnybunny2> I understand I want to avoid that though
[00:00:00 CET] --- Mon Jan 13 2020

