[Ffmpeg-devel-irc] ffmpeg.log.20190215

burek burek021 at gmail.com
Sat Feb 16 03:05:01 EET 2019


[01:09:31 CET] <pmacdiggity> I'm having a hard time getting clarity on the current state, is x265 the only way to produce 10-bit HEVC that's compatible with iOS? This seems to require yuv420p10le(tv, bt2020nc/bt2020/smpte2084), which means no hardware acceleration like hevc_nvenc?
[01:28:58 CET] <iive> pmacdiggity, no idea about your question, just want to point out that for the encoded bitstream it doesn't matter if the input format is yuv420p10le or yuv420p10be
[01:30:39 CET] <pmacdiggity> I mean on the output pix_fmt, hevc_nvenc says no dice, and subs in p010le, but only yuv420p10le seems to play back correctly on iOS
[01:32:10 CET] <pmacdiggity> or at least the yuv420p10le is the only discernible difference I can find with x265 vs hevc_nvenc
[01:37:13 CET] <pmacdiggity> and actually, it seems to be the bt2020nc/bt2020/smpte2084 part in particular that's important vs yuv420p10le(tv, progressive) not playing correctly
[01:37:14 CET] <iive> ffmpeg swscale filter should be able to convert yuv420p10 to yuv420
[01:37:33 CET] <pmacdiggity> yeah, but I want 10bit output
[01:38:13 CET] <iive> sorry, isn't the problem that the output is 10bit?
[01:39:02 CET] <pmacdiggity> no, it's that it's 10bit, but it's washed out on playback
[01:39:02 CET] <furq> p010le is 10-bit
[01:39:15 CET] <furq> it's the 10-bit nv12 iirc
[01:39:18 CET] <pmacdiggity> yes, but it seems that's not the particular flavor that Apple likes
[01:39:25 CET] <furq> 4:2:0 with interleaved chroma planes
[01:40:39 CET] <pmacdiggity> it plays back like a 10-bit HEVC rendered at 8-bit without correct tone mapping
[01:41:01 CET] <furq> do both files have the same primaries etc
[01:41:21 CET] <pmacdiggity> not sure what you mean as primaries?
[01:42:23 CET] <furq> the "bt2020nc/bt2020/smpte2084" bit of the thing you pasted
[01:46:41 CET] <pmacdiggity> ah, not on the output, but I don't seem to be able to output bt2020nc/bt2020/smpte2084 with hevc_nvenc; it says not supported and outputs with p010le. Looking at the outputs with ffprobe, the correct file (from x265) is yuv420p10le(tv, bt2020nc/bt2020/smpte2084), and the incorrect file (from hevc_nvenc, which reported p010le) is yuv420p10le(tv, progressive)
[01:48:11 CET] <furq> yeah i have no experience with nvenc but nv12/p010le are just internal formats afaik
[01:50:13 CET] <furq> try adding -colorspace bt2020nc -color_primaries bt2020 -color_trc smpte2084
[01:54:07 CET] <iive> yuv420 uses 3 planes, one each for Y, U and V. nv12 uses 2 planes: the first is Y, the second is interleaved UV
[01:58:03 CET] <pmacdiggity> ok, I'll give that a shot, thanks!
[02:14:51 CET] <pmacdiggity> success! thank you!
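The fix above can be sketched as one complete command (input/output names are hypothetical, and exact option support depends on your ffmpeg build and GPU):

```shell
# Hypothetical sketch: 10-bit HEVC via hevc_nvenc, explicitly tagging the
# HDR10 color metadata so players apply the right tone mapping.
# -tag:v hvc1 is an extra assumption, often needed for Apple players.
ffmpeg -i input.mov \
  -c:v hevc_nvenc -profile:v main10 -pix_fmt p010le \
  -colorspace bt2020nc -color_primaries bt2020 -color_trc smpte2084 \
  -tag:v hvc1 -c:a copy output.mp4
```

Verify the result with ffprobe; it should now report bt2020nc/bt2020/smpte2084 on the output stream.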
[04:06:51 CET] <BullHorn> hello!
[04:07:00 CET] <BullHorn> how can i convert a variable bitrate video to constant bitrate?
[04:30:38 CET] <shdown> Hello, I'm trying to use the ffmpeg API to decode, encode, and then stream a video via UDP.
[04:30:40 CET] <shdown> The problem is that it generates packets with duration=0, and, even if I manually set duration to, say, 1, mpv still refuses to play anything but the first frame it receives.
[04:30:46 CET] <shdown> Here is the code: https://gist.githubusercontent.com/shdown/0b88bd67b5a99516dba237ca1ec63842/raw/cc3093a0cec6fd3d15fa1599f9ab9936046980aa/stream.c
[04:31:11 CET] <shdown> (ffmpeg.pastebin.com does not seem to exist.)
[09:00:11 CET] <th3_v0ice> shdown: Packet duration can be left at 0, but if you need to set it yourself it can't be 1; for video it needs to be 1/framerate, and for audio nb_samples/sample_rate, both expressed in the stream time base. But generating it like this will give you trouble trying to sync the audio and video streams. One more thing that can happen is that your audio and video are so far out of sync that your player is simply waiting for packets.
[09:00:11 CET] <th3_v0ice> Try to print timestamps that go to the muxer.
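The rule of thumb above can be sanity-checked with a bit of arithmetic; the 25 fps frame rate and 1/90000 MPEG-TS time base below are illustrative assumptions:

```shell
# One video frame's duration in stream time-base ticks is
# (1/framerate) / time_base. With tb = 1/90000 and 25 fps that is
# 90000 / 25 ticks per frame.
awk 'BEGIN { framerate = 25; tb_den = 90000; print tb_den / framerate }'
```

This prints 3600, which is the per-frame packet duration you would see in a typical 25 fps MPEG-TS stream.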
[10:17:28 CET] <forgon> What is the best way to record only a specific window, known from its PID?
[10:40:38 CET] <relaxed> forgon: I don't know if it's possible using the PID, but here's a one-liner that will turn your mouse pointer into a +; then click on the window you want to record:  x=($(xwininfo|awk '/Wid/||/Hei/||/Cor/{gsub(/+/,",");print $2}') );ffmpeg -video_size "${x[0]}"x"${x[1]}" -framerate 60 -f x11grab -i :0.0"${x[2]/,/+}" -t 30 -c:v rawvideo -pix_fmt yuv420p -y output.mkv
[10:42:21 CET] <forgon> relaxed: Thanks.
[13:35:24 CET] <devp> Hello. How can I capture an RTP stream with all its tracks? At the moment I can only capture one audio track; how could I capture all of them?
[13:42:28 CET] <th3_v0ice> devp: I think you need to use -map and map all streams from input to output. https://trac.ffmpeg.org/wiki/Map
[13:45:02 CET] <devp> th3_v0ice, and how can I do that if I don't know exactly how many streams it has?
[13:49:26 CET] <devp> ffmpeg -i input -map 0 -c copy output.mp4 # copies all video and audio channels from input one to output, not just one video
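`-map 0` already selects every stream no matter how many there are; if you do want to enumerate them first, ffprobe can list them (the input name is a placeholder):

```shell
# List index, type and codec of every stream without knowing the count
# in advance.
ffprobe -v error -show_entries stream=index,codec_type,codec_name \
  -of csv=p=0 input.mp4
```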
[15:09:49 CET] <sybariten> hey
[15:11:08 CET] <sybariten> (pretty sure there are a number of discussions about this online, but to my surprise my google-fu wasn't strong enough... keywords?)    I'm guessing it's quite doable to define only absolute values (start time and end time) when cutting with ffmpeg, instead of using a duration. But maybe this is a bit too complicated to be worth it?
[15:13:27 CET] <TheAMM> I'm not sure what you're asking
[15:13:57 CET] <TheAMM> You should be able to use -ss <start> -i file -to <end> to grab a section of the file?
[15:14:29 CET] <DHE> keep in mind that the "-to" syntax is interpreted more as a duration than an actual end time
[15:14:55 CET] <TheAMM> "Stop writing the output or reading the input at position."
[15:15:12 CET] <TheAMM> "-ss start -to end -i file" should work properly then?
[15:15:25 CET] <TheAMM> Or does that still work as a duration
[15:16:41 CET] <DHE> I believe on the input that will work as you want
[15:16:51 CET] <TheAMM> It does, tested it
[15:16:58 CET] <DHE> it's a matter of telling it to stop at the indicated INPUT timestamp, or OUTPUT timestamp. those are two different things
[15:17:16 CET] <TheAMM> Well aware
[15:17:20 CET] <TheAMM> Hopefully this helps sybariten
[15:28:01 CET] <devp> there's some way to take a screenshot in grayscale colors ?
[15:32:45 CET] <TheAMM> ffmpeg -ss 00:00:05 -i file.mkv -vf format=gray -frames:v 1 out.png
[15:33:18 CET] <sybariten> heh, I don't really get what you are saying.  :)
[15:33:38 CET] <sybariten> the syntax is -t, no? And it's a duration in hh:mm:ss
[15:34:12 CET] <TheAMM> -t is duration, yes
[15:36:12 CET] <sybariten> ok, so there is another option to use absolute times?
[15:36:31 CET] <TheAMM> <TheAMM> "-ss start -to end -i file"
[15:37:28 CET] <sybariten> oh! This must be a bit more recent then. All the googling I did only talked about -t
[15:37:31 CET] <sybariten> thanks
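TheAMM's input-side cut can be written out in full (times and filenames are illustrative, and -to as an input option needs a reasonably recent ffmpeg):

```shell
# Cut from an absolute start time to an absolute end time.
# With -ss and -to both placed before -i they apply to input timestamps;
# with -c copy the cut points snap to the nearest keyframes.
ffmpeg -ss 00:01:00 -to 00:02:30 -i input.mp4 -c copy cut.mp4
```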
[17:33:40 CET] <th3_v0ice> devp: I am not exactly sure what can be done in that particular case. If you do not expect the stream to change, hardcode the values; otherwise write a simple bash/batch script to do the work for you and select the needed streams.
[20:08:39 CET] <TheWild> hello
[20:09:19 CET] <TheWild> I want to concatenate a few mp4 files into one, without re-encoding. I did "ffmpeg -i 'concat:1.mp4|2.mp4|3.mp4' -c copy output.mp4", but the output only contains the first file. What am I doing wrong?
[20:10:07 CET] <durandal_1707> TheWild: use concat demuxer not protocol
[20:12:17 CET] <TheWild> ok, now I used "ffmpeg -f concat -i filelist.txt -c copy output.mp4"
[20:12:34 CET] <TheWild> but I remember I used concat protocol at least once in the past and it worked
[20:14:28 CET] <durandal_1707> TheWild: it probably worked for .ts files, but it cannot work for mp4
[20:14:47 CET] <TheWild> mmmm, okay. Thanks durandal_1707
[20:14:58 CET] <TheWild> weird that ffmpeg didn't even print a warning
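The concat-demuxer workflow above looks like this in full (filenames taken from the earlier protocol attempt; -safe 0 is only required for paths the demuxer considers unsafe):

```shell
# Build the list file the concat demuxer reads, then copy the streams
# without re-encoding. All inputs must share codecs and parameters.
cat > filelist.txt <<'EOF'
file '1.mp4'
file '2.mp4'
file '3.mp4'
EOF
ffmpeg -f concat -safe 0 -i filelist.txt -c copy output.mp4
```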
[20:19:15 CET] <TheWild> 1037 MB     (546 MB if you don't count commercials ;))
[20:53:43 CET] <TheWild> how can I extract *raw* subtitle/teletext data?
[20:54:53 CET] <TheWild> this doesn't work: "ffmpeg -i 20190215.mts -scodec copy teletext.dat"
[22:46:35 CET] <mfolivas> guys, many of my customers are asking me to add a frame or watermark on their videos.  Can I do that with ffmpeg?
[22:47:58 CET] <furq> !filter overlay @mfolivas
[22:47:58 CET] <nfobot> mfolivas: http://ffmpeg.org/ffmpeg-filters.html#overlay-1
[22:48:07 CET] <mfolivas> oh nice
[22:48:42 CET] <mfolivas> thank you guys
[23:12:30 CET] <lindylex> If my video is being sped up using this : setpts=0.1*PTS  What is the formula to get the audio speedup value?
[23:13:44 CET] <furq> lindylex: the inverse
[23:13:48 CET] <furq> so 10
[23:14:00 CET] <lindylex> But it can only go up to 2
[23:14:14 CET] <furq> it goes up to 100 in new versions
[23:14:17 CET] <furq> https://ffmpeg.org/ffmpeg-filters.html#atempo
[23:14:36 CET] <lindylex> Reading now.
[23:14:43 CET] <lindylex> Thanks
[23:14:57 CET] <furq> looks like that works in 4.1 so it didn't change super recently
[23:15:03 CET] <furq> not sure exactly when though
[23:16:01 CET] <lindylex> Thanks, I found old documentation that said I would have to string multiple atempo filters together to get it done.
[23:16:17 CET] <furq> if you have an old version then i guess it'd be 2,2,2,1.25
[23:17:11 CET] <lindylex> This is the version I am using ffmpeg version 4.1-1
[23:17:23 CET] <furq> 10 should work fine then
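For older builds capped at 2.0 per atempo instance, the chained factors furq suggested multiply together, which a quick check confirms:

```shell
# The chain atempo=2,atempo=2,atempo=2,atempo=1.25 gives an overall
# speed factor equal to the product of its stages.
awk 'BEGIN { print 2 * 2 * 2 * 1.25 }'
```

This prints 10, matching the single atempo=10 available in ffmpeg 4.1 and later.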
[23:19:13 CET] <lindylex> I just became aware of setting the time at which an effect starts to apply. I need to look into the syntax some more. I want to add it to the python script I wrote to make ffmpeg commands shorter.
[23:21:12 CET] <lindylex> What is the formula? Can I see what the formula looks like?
[23:21:28 CET] <lindylex> If i have this setpts=0.05*PTS[v]
[23:21:46 CET] <lindylex> What is the formula to get the value for audio?
[23:22:00 CET] <lindylex> is .1*100?
[23:22:35 CET] <kepstin> lindylex: not sure what you're trying to do here.
[23:22:51 CET] <kepstin> lindylex: are you doing slow motion video, then also trying to slow down the audio to match?
[23:22:59 CET] <lindylex> What value would I use for the audio?
[23:23:08 CET] <lindylex> Speed up the video and audio.
[23:23:17 CET] <kepstin> if so, you'll need to use a different filter on the audio, which would take different arguments
[23:23:34 CET] <lindylex> I know; this is what I did: ffmpeg -i c116_sp.mov -filter_complex "[0:v]setpts=0.05*PTS[v];[0:a]atempo=10.0[a]" -map "[v]" -map "[a]" c116_Done.mkv
[23:23:36 CET] <kepstin> if you're using e.g. the rubberband filter, it takes a "speed" option, I think?
[23:23:44 CET] <lindylex> I want to speed it up even more.
[23:24:13 CET] <lindylex> I have no idea what this is.  That filter is new to me.
[23:24:57 CET] <kepstin> if you do setpts=0.5*PTS on the video, then the video is twice as fast, so you'd use 1/0.5 = 2 as the speed setting on the audio filter, like atempo=2
[23:24:58 CET] <furq> lindylex: you could just use setpts=PTS/20 and atempo=20
[23:26:09 CET] <lindylex> kepstin it is .05 not .5
[23:26:44 CET] <lindylex> furq : That is interesting.  This sets the video to match the audio speed
[23:26:51 CET] <lindylex> That is interesting.
[23:26:53 CET] <kepstin> lindylex: I told you how to do the math to make them match.
[23:27:20 CET] <lindylex> kepstin: sorry thanks I get it now.
[23:29:04 CET] <lindylex> Thanks both of you.  That was extremely helpful.
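Putting kepstin's and furq's advice together, the corrected version of lindylex's earlier command pairs the 0.05 setpts multiplier with its inverse on the audio (filenames taken from that message):

```shell
# Speed video up 20x and keep the audio in sync:
# atempo factor = 1 / setpts multiplier = 1 / 0.05 = 20
# (atempo values above 2 need ffmpeg 4.1 or later).
ffmpeg -i c116_sp.mov \
  -filter_complex "[0:v]setpts=0.05*PTS[v];[0:a]atempo=20[a]" \
  -map "[v]" -map "[a]" c116_Done.mkv
```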
[00:00:00 CET] --- Sat Feb 16 2019

