[Ffmpeg-devel-irc] ffmpeg.log.20190603

burek burek021 at gmail.com
Tue Jun 4 03:05:03 EEST 2019


[06:48:09 CEST] <xochilpili> hello everyone
[06:48:53 CEST] <xochilpili> I'm doing some capture from my screen; the first capture works fine, but when I start a new one I get: Input/output error. I have checked the $DISPLAY env and it has not changed
[06:49:16 CEST] <xochilpili> im using this: ffmpeg -video_size 1920x1080 -framerate 25 -f x11grab -i :0.0 -f pulse -ac 2 -i output.mkv
[06:49:38 CEST] <xochilpili> i also used: ffmpeg -video_size 1920x1080 -framerate 25 -f x11grab -i $DISPLAY -f pulse -ac 2 -i output.mkv << same error
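As written, both commands hand output.mkv to the second -i, so the pulse input has no device and no output file is given. A commonly used shape of this capture command is below; the pulse source name "default" is an assumption, real source names can be listed with "pactl list short sources". This only fixes the command structure and does not by itself explain the Input/output error on the second run:

  ffmpeg -video_size 1920x1080 -framerate 25 -f x11grab -i :0.0 -f pulse -ac 2 -i default output.mkv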
[11:13:26 CEST] <frappu> I just downloaded ffmpeg and I'm trying to convert a uhd bluray to a 1080p mkv but I don't know how to set the resolution and bitrate, select the correct audio stream, and convert it to aac or ac3
[11:16:07 CEST] <JEEB> see ffmpeg-all.html under ffmpeg.org and ctrl+f for zscale/tone mapping. for bit rate you usually don't set it unless you need a specific file size (instead some constant rate factor thing is what you'd use). for selection of streams, ctrl+F in that document "-map"
[11:16:29 CEST] <frappu> thanks ;)
[11:16:30 CEST] <JEEB> you want to (possibly crop), tone map, scale down
[11:19:41 CEST] <durandal_1707> so now are all uhd blurays in wrong colorspace?
[11:20:13 CEST] <JEEB> I'd say most are HDR
[11:20:25 CEST] <JEEB> of course if it's not then tone mapping isn't needed :P
[11:20:59 CEST] <JEEB> technically I think the spec lets you do BT.709 but those sweet marketing dollars need to be used on something ;)
[11:21:24 CEST] <JEEB> and of course I made the assumption that the person wants to have a normal SDR HD encode
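A sketch of the HDR-to-SDR conversion JEEB points at, using the zscale and tonemap filters plus a scale down to 1080p. The specific parameters (npl=100, the hable curve, crf 20) are common illustrative choices, not values given in the discussion:

  ffmpeg -i input.mkv -vf "zscale=t=linear:npl=100,format=gbrpf32le,zscale=p=bt709,tonemap=tonemap=hable:desat=0,zscale=t=bt709:m=bt709:r=tv,format=yuv420p,scale=-2:1080" -c:v libx264 -crf 20 -c:a copy output.mkv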
[11:34:16 CEST] <an3k> What's the latest compatible version of nVIDIA CUDA? They started to split the npp libs at least with 9.2 so current code doesn't find nppi
[11:36:41 CEST] <JEEB> you should attempt to poke the nvidia stuff related maintainers I guess?
[11:44:04 CEST] <frappu> is this part correct to select the 4th audio stream and convert it to ac3 160k? -map -codec:a:4 ac3 160k
[11:44:14 CEST] <an3k> CUDA SDK 8.0 is the latest that still ships with nppi.lib
[11:44:31 CEST] <JEEB> frappu: 4th so beginning from 0 would be 3
[11:44:35 CEST] <an3k> Afaik external headers/libs aren't included anymore so one has to download them manually
[11:44:37 CEST] <JEEB> and -map 0:a:3
[11:44:46 CEST] <JEEB> the codec part is on the selected streams
[11:45:04 CEST] <JEEB> so if you only have a single audio track selected you can just use -c:a ac3
[11:45:06 CEST] <frappu> sorry, audio stream is the fifth, so 4 is correct
[11:45:33 CEST] <JEEB> if you have multiple then -c:a:N, where N is which of the selected streams of that type it is (counting the maps)
[11:45:44 CEST] <JEEB> because you apply encoding options on the selected streams
[11:45:59 CEST] <JEEB> and for the audio bit rate it'd be -b:a or -b:a:N 160k
[11:48:02 CEST] <frappu> -map 0:a:3 , 0 is for the video stream?
[11:48:23 CEST] <JEEB> input
[11:48:37 CEST] <JEEB> so if you have more than one input defined then that'd go upwards from 0
[11:48:45 CEST] <JEEB> in your case you probably have just one
[11:48:47 CEST] <JEEB> so 0
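Putting JEEB's pieces together for a single input where the wanted audio is the fifth audio stream; the stream indices, crf value and file names here are illustrative:

  ffmpeg -i input.mkv -map 0:v:0 -map 0:a:3 -c:v libx264 -crf 20 -c:a ac3 -b:a 160k output.mkv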
[14:45:27 CEST] <lain98> a
[14:48:11 CEST] <lain98> is there a way to know at runtime, what container formats are supported by a compression format ?
[15:09:59 CEST] <DHE> no. it would be the other way around, and even then there isn't always a good listing. muxers only list their preferred/default codecs which are hence guaranteed to be available
[15:37:53 CEST] <an3k> I put everything in mkv. If it contains Dolby Vision, only mp4 can handle that correctly
[16:54:26 CEST] <Gunstick> hi. I have a file with 5 mono channels. 1st 2 are muted. I want to line-graph the 3 others separately and mix them to a single audio stream. Current command line is: ffmpeg -t 20 -i You_Bore.wav -filter_complex "[0:a]showwaves=s=1280x720:split_channels=y:mode=line,format=yuv420p[v]" -map "[v]" -map 0:a -c:v libx264  You_Bore.mkv # source file is here: http://gkess.homeip.net/~georges/You_Bore.wav
[16:58:59 CEST] <durandal_1707> Gunstick: use channelsplit filter
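One possible way to follow that suggestion, assuming the WAV is a single 5-channel stream with a 5.0 layout and that the muted channels are the first two (both assumptions): the three remaining channels are drawn separately by showwaves and also downmixed to mono for the audio track:

  ffmpeg -t 20 -i You_Bore.wav -filter_complex "[0:a]channelsplit=channel_layout=5.0[c0][c1][c2][c3][c4];[c0]anullsink;[c1]anullsink;[c2][c3][c4]amerge=inputs=3,asplit[viz][mix];[viz]showwaves=s=1280x720:split_channels=1:mode=line,format=yuv420p[v];[mix]pan=mono|c0=c0+c1+c2[a]" -map "[v]" -map "[a]" -c:v libx264 You_Bore.mkv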
[16:59:45 CEST] <lain98> what if i want to know the containers supported by a specific compression method
[17:00:05 CEST] <lain98> before runtime
[17:00:05 CEST] <furq> then you look it up online
[17:00:10 CEST] <lain98> lol uh yeah
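On the command line, each muxer's help text is about as close as it gets: it lists the muxer's default video and audio codecs and its common extensions, and "ffmpeg -formats" lists the available muxers and demuxers. For example:

  ffmpeg -h muxer=matroska
  ffmpeg -h muxer=mp4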
[17:00:55 CEST] <lain98> ok, i got one more question
[17:02:06 CEST] <lain98> i was putting some text in a video file using the drawtext filter; when i play the video it's there, but when i use a third party library to decode the video, the text is missing
[17:02:36 CEST] <lain98> is the text added using drawtext really burned into the stream or is it something else ?
[17:02:47 CEST] <lain98> it could be a stupid question, im new to video
[17:03:08 CEST] <furq> it's burned into the stream
[17:03:21 CEST] <lain98> but then why, when i decode it, is it missing?
[17:03:42 CEST] <furq> i have no idea
[17:03:46 CEST] <furq> are you sure you're decoding the right video
[17:03:53 CEST] <lain98> i think so yeah
[17:04:14 CEST] <furq> that's the best answer i can offer you
[17:04:38 CEST] <lain98> so, it's not like audio and video are different streams and the text is another one?
[17:04:53 CEST] <furq> you can have separate text streams but that's not what drawtext does
[17:04:58 CEST] <lain98> okay
[17:05:09 CEST] <lain98> must be that im doing something funny
[17:05:14 CEST] <lain98> thanks
[17:07:51 CEST] <lain98> could it be that the text is on a different video stream with transparent background, and they are overlayed ?
[17:52:44 CEST] <kepstin> lain98: no, drawtext draws the text directly into the video frame. it's single-input, single-output. so, unless you have a weird filter chain where you've explicitly done drawtext over an extra transparent video stream then that's not the case.
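For reference, a minimal burn-in of the kind being discussed; depending on the build, drawtext may need an explicit fontfile= if fontconfig is not compiled in:

  ffmpeg -i input.mp4 -vf "drawtext=text='hello':x=10:y=10:fontsize=24:fontcolor=white" -c:a copy output.mp4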
[18:01:38 CEST] <Soni> what's a good compressor if I want to have a 480p file, a 720p-480p file, a 1080p-720p file, etc?
[18:01:53 CEST] <Soni> so that you need the smaller file to get to the bigger file
[18:03:42 CEST] <DHE> I'm confused. can you rephrase that?
[19:18:18 CEST] <kepstin> sounds like Soni wants something like h264 svc
[19:19:18 CEST] <kepstin> Soni: it's not worth the effort in general, and it's not well supported by encoders or decoders.
[19:19:31 CEST] <Soni> and can I use that in a browser?
[19:19:42 CEST] <kepstin> Soni: no, you can't use it in a browser
[19:19:44 CEST] <Soni> or in a torrent?
[19:19:57 CEST] <Soni> (or in a torrent in a browser?)
[19:20:19 CEST] <Soni> (peertube ftw)
[19:20:40 CEST] <kepstin> Soni: just encode separate streams for each resolution and download only the one you want.
[19:21:48 CEST] <Soni> kepstin: but then the lower bitrates will be poorly seeded
[19:25:06 CEST] <kepstin> hmm, it looks like in theory browsers should play back vp9 multi-resolution encodes, but I have no idea how you'd actually make one
[19:26:07 CEST] <kepstin> (and to do a torrent-like distribution, you'd have to use variable block size so you can request only the desired packets, and use mse to actually play it)
[19:26:48 CEST] <Soni> or you could use the good old client-side javascript
[19:27:07 CEST] <Soni> just split it into multiple torrents and add some metadata for recombining
[19:28:14 CEST] <Soni> you don't want the bigger resolutions, no problem, just torrent the base torrent. you want the bigger resolutions then torrent multiple torrents.
[19:28:34 CEST] <Soni> or maybe use multiple files in the same torrent or something.
[19:55:41 CEST] <Necrosand> what is the best video/audio container if you want the fastest seeking?
[19:59:28 CEST] <DHE> mp4 should do you fine, but especially for network use you should build it properly
[20:34:41 CEST] <hashworks> Hi! I have a bunch of files I need to encode, however I only want specific streams. I could map them directly, but there are different quantities and orderings of streams. I know I can drop/map specific languages with -map -m:language:ger, but does that work with formats as well?
[20:40:02 CEST] <kepstin> Necrosand: one thing that affects seeking speed is the distance between keyframes - if keyframes are far apart, then you'll have a bit of lag as you seek from having to decode and drop the frames since the previous keyframe.
[20:40:03 CEST] <Necrosand> e
[20:40:53 CEST] <kepstin> (of course, this is a trade-off - shorter keyframe interval means higher bitrate video for the same quality)
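A sketch combining both points: a fixed, fairly short keyframe interval (2 seconds at 25 fps here, an arbitrary choice) and +faststart so the mp4 index is written at the front for network playback:

  ffmpeg -i input.mp4 -c:v libx264 -crf 20 -g 50 -keyint_min 50 -sc_threshold 0 -c:a copy -movflags +faststart output.mp4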
[22:42:08 CEST] <Necrosand> i had about 10 different android phones: i cannot do recording on all of them, but one phone allowed me to
[22:42:08 CEST] <Necrosand> what did that one phone have that allowed me to do that, when the 9 others didn't?
[22:42:08 CEST] <Necrosand> BIG REWARD
[23:48:03 CEST] <Necrosand> ffmpeg -f dshow -i audio="Microphone" output.mp3  worked
[23:48:03 CEST] <Necrosand> ffmpeg -f dshow -i audio="desktop audio" output.mp3  also worked
[23:48:03 CEST] <Necrosand> ffmpeg -f dshow -i audio="desktop audio" -f dshow -i audio="Microphone" output.mp3 does NOT work
[23:49:17 CEST] <pink_mist> I don't believe mp3 supports multiple audio streams
[23:50:38 CEST] <Necrosand> then what format does
[23:50:57 CEST] <Necrosand> pink_mist  does ffmpeg support multiple audio streams though?
[23:51:33 CEST] <pink_mist> you perhaps want to add a filter that takes two input audio streams and merges them
[23:51:45 CEST] <pink_mist> ffmpeg supports it just fine
[23:52:03 CEST] <pink_mist> and .mka should support multiple audio streams ... probably loads of other formats too
[23:52:12 CEST] <pink_mist> but is it really multiple audio streams you want?
[23:52:20 CEST] <pink_mist> like I said, you probably want a filter that merges them instead
[23:52:51 CEST] <Necrosand> mka is just container
[23:53:06 CEST] <Necrosand> do you mean aac ?
[23:55:00 CEST] <Necrosand> ffmpeg -f dshow -i audio="desktop audio" -f dshow -i audio="Microphone" output.mka does NOT work, and it created vorbis
[23:55:12 CEST] <pink_mist> it's the container that matters for multiple audio streams
[23:55:18 CEST] <pink_mist> in what way did it not work?
[23:55:27 CEST] <Necrosand> it just records "desktop audio"
[23:55:29 CEST] <Necrosand> not "mic"
[23:55:50 CEST] <pink_mist> are you sure? it probably did both, but in two different audio streams
[23:55:51 CEST] <pink_mist> like I said
[23:56:03 CEST] <Necrosand> output.mka or output.mp3
[23:56:07 CEST] <pink_mist> ...
[23:56:19 CEST] <Necrosand> i don't hear my mic
[23:56:30 CEST] <pink_mist> switch to the other audio stream in the .mka then
[23:57:53 CEST] <Necrosand> pink_mist is that even a correctly formatted command?
[23:57:59 CEST] <pink_mist> no clue
[23:58:00 CEST] <Necrosand> ffmpeg -f dshow -i audio="desktop audio" -f dshow -i audio="Microphone" output.mkv
[23:58:04 CEST] <Necrosand> ffmpeg -f dshow -i audio="desktop audio" -f dshow -i audio="Microphone" output.mka
[23:58:06 CEST] <pink_mist> I'm not a ffmpeg guru
[23:58:15 CEST] <pink_mist> I just know mp3 can't handle multiple streams
[23:58:17 CEST] <pink_mist> but mka can
[23:58:18 CEST] <cehoyos>  -map 0 -map 1 missing
[23:58:18 CEST] <Necrosand> [14:48] <Necrosand> ffmpeg -f dshow -i audio="Microphone" output.mp3  worked
[23:58:18 CEST] <Necrosand> [14:48] <Necrosand> ffmpeg -f dshow -i audio="desktop audio" output.mp3  also worked
[23:59:02 CEST] <Necrosand> ffmpeg -map 0 -f dshow -i audio="desktop audio" -f dshow -i audio="Microphone" output.mka
[23:59:07 CEST] <pink_mist> cehoyos: like I told him before though, he probably wants a filter that merges them into a single stream
[23:59:13 CEST] <pink_mist> but he refuses to listen to me
[23:59:17 CEST] <cehoyos> He did not answer that he needs such a filter
[23:59:30 CEST] <cehoyos> If he wants multiple streams, he has to use the map option
[23:59:31 CEST] <pink_mist> he's just flailing around
[23:59:37 CEST] <Necrosand> pink_mist sorry i am not understanding
[23:59:46 CEST] <Necrosand> i just want one output.mka file
[23:59:47 CEST] <cehoyos> It would probably make more sense if he tried the option that I suggested instead of a different one;-)
[23:59:48 CEST] <pink_mist> Necrosand: do you want to listen to BOTH STREAMS AT THE SAME TIME?
[23:59:51 CEST] <Necrosand> i just want  one  output.mka file
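For the record, the two shapes being suggested, with the device names as Necrosand used them: cehoyos' mapping of both inputs as separate streams into an .mka, and pink_mist's merging of the two inputs into one stream (amix is one such filter), which also works for mp3:

  ffmpeg -f dshow -i audio="desktop audio" -f dshow -i audio="Microphone" -map 0 -map 1 output.mka
  ffmpeg -f dshow -i audio="desktop audio" -f dshow -i audio="Microphone" -filter_complex amix=inputs=2 output.mp3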
[00:00:00 CEST] --- Tue Jun  4 2019

