[Ffmpeg-devel-irc] ffmpeg.log.20191120

burek burek at teamnet.rs
Thu Nov 21 03:05:02 EET 2019


[01:19:51 CET] <jaevanko> Hello, I am trying to use ffmpeg to copy blu-ray discs and it's doing that job quite well. However, I can't find a way to include the chapters and language tags from the disc in my output. Is there a way to do this?
[03:14:29 CET] <gp> Is there a trick to passing -strict experimental through the hls segmenter? https://dpaste.de/jJJ6
[03:15:14 CET] <JEEB> welcome to meta (de)muxers
[03:15:32 CET] <JEEB> it's possible, I think the dash one might already pass that value down
[03:15:38 CET] <JEEB> but yes, the author needs to remember to do it :P
[03:15:58 CET] <gp> Should I file a ticket?
[03:16:42 CET] <JEEB> libavformat/dashenc.c:        ctx->strict_std_compliance = s->strict_std_compliance;
[03:16:48 CET] <JEEB> so the dash thing does it, hls one doesn't
[03:17:04 CET] <gp> I seek out trouble =)
[03:17:07 CET] <JEEB> although I'm still confuzzled why *both* let you get HLS and DASH
[03:17:33 CET] <JEEB> and thus you have two muxers that seemingly do similar, but still different, things
[03:18:07 CET] <JEEB> I think those should be just put into a single code module, and then they should just export both hls and dash from there as synonyms more or less :P
[03:18:25 CET] <JEEB> that way if something is broken or fixed, it gets done in both
[03:19:18 CET] <gp> can't argue with that
[03:20:31 CET] <DHE> <gp> I seek out trouble =)     # that does not begin to describe what some people (like me) get into
[03:21:14 CET] <gp> DHE: haha the struggle is real
[03:44:53 CET] <gp> oops forgot to mention - opened ticket at https://trac.ffmpeg.org/ticket/8388
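For context, the hypothetical hlsenc fix would mirror the dashenc line JEEB quoted almost verbatim, copying the user's compliance level into the inner segment muxer context when it is created (the exact context variable names in hlsenc.c are assumptions here):

    libavformat/hlsenc.c:        oc->strict_std_compliance = s->strict_std_compliance;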
[04:11:49 CET] <rfer> Does anyone know of a way for ffmpeg or ffplay to error check an encoded H.264 video stream as though the decoder was limited to a specific level or profile?
[04:12:22 CET] <rfer> other than checking the field in the SPS
[05:37:57 CET] <gp> rfer: You could use ffprobe - It lists the level and profile with json output
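For reference, the ffprobe invocation gp describes looks something like this (the filename is a placeholder); it reports the profile and level as declared in the SPS:

    ffprobe -v error -select_streams v:0 -show_entries stream=profile,level -of json input.mp4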
[05:38:13 CET] <rfer> that just reads the SPS, no?
[05:38:32 CET] <rfer> i want to see if specific frames are not respecting what the stream stated in its SPS
[05:39:16 CET] <gp> rfer: ah - not sure how to help - you can get a detail frame print out too but I don't think it will give you what you're asking
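The detailed per-frame printout gp mentions is along these lines; it dumps pict_type, sizes and timing per frame, though not reference-frame usage:

    ffprobe -v error -select_streams v:0 -show_frames input.mp4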
[05:54:31 CET] <rfer> it might indirectly
[05:54:47 CET] <rfer> alternatively, is there a way to list out how many reference frames the current frame refers to?
[05:55:06 CET] <cehoyos> rfer: FFmpeg is not a stream analyzer
[05:55:22 CET] <cehoyos> Stream analyzers exist, they are apparently very expensive
[05:55:54 CET] <cehoyos> The information about the number of reference frames is likely trivial to output, patch probably welcome
[05:56:15 CET] <cehoyos> I am not sure if it is as easy to check for (complete) level compliance
[06:01:28 CET] <rfer> aw damn, was hoping that something might've already existed that I missed in the flag docs
[06:03:11 CET] <rfer> in that case i'll probably just end up waiting for a patch from a teammate tomorrow lol, was trying to work around a blocker and test something. thanks for confirming cehoyos
[06:05:42 CET] <cehoyos> Hardware developers told me once that they do not guarantee real-time encoding for a given level. If this is correct (I cannot know), and given the fact that levels have no relevance for software decoders, I doubt the whole concept
[06:06:09 CET] <cehoyos> And especially checking for exact compliance
[08:04:19 CET] <BeerLover> When to use -map and when to use -var_stream_map?
[08:04:56 CET] <BeerLover> ffmpeg -i song.mp3 -b:a:0 320k -b:a:1 128k -b:a:2 64k -b:a:3 32k -c:a aac -map 0:a:0 -map 0:a:0 -map 0:a:0 -map 0:a:0 -f hls -var_stream_map "a:0 a:1 a:2 a:3" -master_pl_name master.m3u8 -hls_time 10 -hls_segment_filename 'pk/%v/segment%05d.ts' -hls_list_size 0 pk/%v/index.m3u8
[08:05:22 CET] <BeerLover> I want to create 4 bitrate hls playlists and a master.m3u8
[08:18:41 CET] <jemius> What's the best way to extract a single photo from a video stream? Just taking a single frame results in terrible quality
[08:20:26 CET] <th3_v0ice> What is the command you used to extract that single frame?
[08:26:30 CET] <jemius> ffmpeg -i video.avi -vf "scale=320:240,fps=25" frames/c01_%04d.jpeg
[08:30:38 CET] <th3_v0ice> Do you need it scaled down? Or why are you using the -vf scale filter? A picture at that resolution can't look that good.
[08:32:43 CET] <jemius> Was just an experiment. The resolution is only 560x480 anyways
[08:33:24 CET] <jemius> the thing is that video codecs get away with poorer images than a standalone jpeg, because the eye can't see as much detail when things move. I want to have a sharp picture anyway
[08:40:16 CET] <th3_v0ice> Well, you are correct about the details and the eye. But whatever the decoder gave you is the picture it reconstructed from the available data and, to my knowledge, it can't be better than that.
[08:41:12 CET] <jemius> but let's say the camera filmed a statue for 5 seconds without moving. This means I have a lot of frames. Can't I use them to, for example, reduce noise and increase sharpness?
[08:44:37 CET] <pink_mist> I'm sure it's possible to come up with an algorithm for that, sure ... but is there one implemented by regular video codecs?
[08:45:30 CET] <pink_mist> and would it waste huge amounts of cpu time calculating such an algorithm without even being told to
[08:46:00 CET] <th3_v0ice> Photoshop seems to be able to do that https://www.slrlounge.com/take-high-quality-still-image-video-ease-photoshop/
[08:46:50 CET] <pink_mist> yeah, the correct tool for this is something other than ffmpeg
[08:48:41 CET] <jemius> hmm :(
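For what it's worth, ffmpeg's tmix filter can approximate the frame-averaging idea jemius describes, assuming a truly static shot; the frame numbers below are placeholders. This averages frames 100-129 into a single still:

    ffmpeg -i video.avi -vf "trim=start_frame=100:end_frame=130,tmix=frames=30,select='eq(n,29)'" -frames:v 1 averaged.png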
[10:08:30 CET] <squ> exposure does that
[10:08:42 CET] <squ> increase it for still object
[11:59:51 CET] <kab0m> hi all
[12:00:01 CET] <kab0m> I'm trying to wrap my head around one thing and I hope you guys can help me with it... I have to export ALL tags from one audio file (flac) and import them into another audio file (m4a) in my bash script. Does anyone know how this can be done? tia
[15:38:01 CET] <Fyr> guys, why does $ffmpeg -h encoder=flac not show compression_level?
[15:38:10 CET] <Fyr> is it only my version?
[15:41:45 CET] <JEEB> Fyr: looking at master the internal flac encoder does not have such avoption
[15:41:48 CET] <JEEB> https://git.videolan.org/gitweb.cgi/ffmpeg.git/?p=ffmpeg.git;a=blob;f=libavcodec/flacenc.c;h=170c3caf4844f46d7d45ecf22244d4fa1be7ad05;hb=HEAD#l1464
[15:42:31 CET] <Fyr> JEEB, why is it absent?
[15:42:49 CET] <JEEB> was it ever there?
[15:43:33 CET] <Fyr> JEEB, the online FFMPEG help contains such a parameter.
[15:45:50 CET] <JEEB> ok, it's a global lavc context property
[15:45:56 CET] <JEEB> not an avoption
[15:46:30 CET] <JEEB> https://git.videolan.org/gitweb.cgi/ffmpeg.git/?p=ffmpeg.git;a=blob;f=libavcodec/flacenc.c;h=170c3caf4844f46d7d45ecf22244d4fa1be7ad05;hb=HEAD#l297
[15:46:36 CET] <Fyr> I see
[15:48:25 CET] <JEEB> unfortunately, since all global properties are always available to all encoders, you can't really list in the -h encoder output what it actually takes in
[15:49:02 CET] <JEEB> did not btw know we had such a lavc context property :D
[15:49:11 CET] <JEEB> the more i know
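Since compression_level is a generic codec-context option rather than a flac-private one, it can still be set the usual way even though -h encoder=flac doesn't list it:

    ffmpeg -i in.wav -c:a flac -compression_level 8 out.flac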
[16:33:07 CET] <matto312> Hello! I need some direction. I'm new to video work, but am an experienced developer.
[16:33:08 CET] <matto312> From FFMPEG, I need constant HLS output, but sometimes my input may drop so I need to be able to replace inputs on the fly (maintaining constant HLS output.)
[16:33:08 CET] <matto312> I know this might not be able to be done in ffmpeg. Does anyone have suggestions on how to approach this requirement?
[16:34:31 CET] <JEEB> matto312: your API client (be it ffmpeg.c or otherwise) would have to a) have a concept of time and b) be able to generate a secondary input
[16:35:03 CET] <JEEB> libavfilter does have a filter that can switch from one input to another, *but* that lacks any understanding of time or how to switch back
[16:35:46 CET] <JEEB> on the other hand, something like upipe has nice realtime support for back-up sources (be it generated or otherwise), but it is geared towards broadcast and thus you would have to write the HLS output in it
[16:39:06 CET] <matto312> Thx, upipe sounds interesting
[16:40:00 CET] <bencoh> I wrote a hls backend back when I worked for the company that started the upipe project, by the way
[16:40:14 CET] <SpiritHorse> bencoh: neat
[16:40:21 CET] <bencoh> upipe was definitely comfortable for that
[16:41:54 CET] <bencoh> (what was really neat was seeing iOS tablets keep a/v properly synced after days of playback with our streams, while streams produced by industry-standard encoders desynced in a noticeable fashion after a few hours)
[16:42:52 CET] <bencoh> matto312: if you're going the upipe way, you should have a look at the upipe-videocont pipe
[16:43:42 CET] <bencoh> it basically does what you're looking for (combined with upipe_video_blank, and/or other video sources)
[16:43:47 CET] <bencoh> (same for audio by the way)
[16:43:50 CET] <matto312> Thx bencoh was just looking at upipe and it may be too complex for me
[16:44:21 CET] <bencoh> aww, that's sad then (but I know what you mean ... :)
[16:44:51 CET] <JEEB> yes, you have to write your own thing for it. ffmpeg.c is actually something you can get scarily far with, but then you start hitting the layers of meta-muxers and I/O
[16:45:12 CET] <JEEB> or the fact that FFmpeg by itself as a framework has no concept of real time
[16:45:18 CET] Action: bencoh nods
[16:46:13 CET] <JEEB> which is why FFmpeg's libraries cannot "buffer for X seconds" for example. you can do that in your API client, but the core libs have no concept of time :P
[16:47:48 CET] <JEEB> and yea, ffmpeg.c is a general thing that tries to do a lot of stuff on a general level. that is then why it as an API client doesn't attempt to use time etc
[16:48:05 CET] <JEEB> it would actually make sense to have a separate API client which is for live
[16:48:16 CET] <JEEB> since the realities of live input(s) are quite different
[16:48:48 CET] <JEEB> unfortunately I don't have unlimited free time and energy :D
[16:49:02 CET] <bencoh> :)
[16:49:42 CET] <matto312> JEEB: stupid question, but what do you mean by "API client"
[16:49:51 CET] <JEEB> matto312: FFmpeg itself is a set of libraries
[16:50:01 CET] <JEEB> ffmpeg.c which most people know as the `ffmpeg` command is just one API client
[16:50:11 CET] <JEEB> just the same way someone can make an API client for their own use case
[16:50:18 CET] <matto312> JEEB: understood, thx
[16:50:19 CET] <JEEB> there's plenty of stuff that ffmpeg.c does not handle.
[16:50:34 CET] <JEEB> like, the API lets you figure out when the stream configuration of an MPEG-TS input changes
[16:50:44 CET] <JEEB> ffmpeg.c has no code for that (yet)
[16:50:58 CET] <bencoh> well, regarding that, in upipe time management is mostly handled by the event loop (say, libev, or any user-supplied event loop, assuming it's glued to the framework).
[16:51:05 CET] <JEEB> partially because that would mean that you'd have to add the concept of "which program are we following right now"
[16:51:24 CET] <bencoh> upipe then mostly deals with metadata (timestamps) to keep track of time, apart from source/sink pipes that fire timers (making use of the eventloop again)
[16:51:42 CET] <JEEB> bencoh: yea, so I guess you can note that some input hasn't gotten stuff in, say, 250ms
[16:51:51 CET] <bencoh> exactly
[16:51:55 CET] <JEEB> and then if that happens, you can switch to backup until you start getting stuff from primary
[16:52:31 CET] <bencoh> in matto312's case (and mine back then), upipe_blank_source outputs a blank frame (or a picture, whatever) every 40ms
[16:52:43 CET] <JEEB> yup
[16:52:58 CET] <bencoh> upipe_videocont has a finite number of content inputs, and a 'default' input (which is connected to blank_source output)
[16:53:14 CET] <bencoh> the default input acts as a clock source, ie a trigger to output a frame
[16:53:35 CET] <matto312> JEEB: for someone with experience using upipe, how long do you estimate it would take to set up?
[16:53:36 CET] <bencoh> assuming the current content input queue is empty, it will output the latest "default input" frame
[16:54:15 CET] <JEEB> matto312: I haven't been able to test it yet because of priorities and availability of free time :)
[16:54:32 CET] <JEEB> so I only have a slight idea of what upipe can provide
[16:55:39 CET] <matto312> nw, doubtful there is enough time on this project for me to learn upipe
[16:56:51 CET] <JEEB> do you have your own FFmpeg API client already or are you just (ab)using ffmpeg.c so far?
[16:59:57 CET] <matto312> I was just pulled onto this project; right now the setup goes: live RTMP input => FFmpeg (running on EC2) to transcode into a multi-variant HLS stream => send to AWS MediaPackage + AWS CloudFront ... the RTMP input occasionally will drop and may resume
[17:01:44 CET] <matto312> I'm stuck trying to find a reliable way to handle input loss
[17:03:00 CET] <JEEB> since you didn't reply I will guess you're just (ab)using ffmpeg.c
[17:03:23 CET] <JEEB> you will either have to come up with a local thing that provides a constant source that switches between your RTMP and a back-up thing
[17:03:32 CET] <JEEB> or you code handling of actual real time into ffmpeg.c or the like
[17:03:56 CET] <JEEB> as in, "if you didn't receive stuff from this input in f.ex. 250ms, switch to the back-up one which generates a stable stream"
[17:03:58 CET] <matto312> I think so, installed latest ffmpeg package on ubuntu
[17:04:19 CET] <BtbN> problem with rtmp is, it can be bursty
[17:05:13 CET] <bencoh> you'll need to buffer / smooth it out then
[17:05:34 CET] <bencoh> ie add a player-like component in between (that outputs at playback rate)
[17:05:42 CET] <matto312> We are using RTMP, but I think there is also HLS input that could be used
[17:06:06 CET] <BtbN> that's bursty by design
[17:06:10 CET] <bencoh> :)
[17:06:46 CET] <matto312> :) I'm still learning the domain of video work
[17:07:13 CET] <JEEB> but yea, there's literally two alternatives: you build handling of switching of realtime inputs into ffmpeg.c, or you write a separate thing in front of ffmpeg.c, which handles providing a smooth thing to ffmpeg.c that never ends
[17:07:19 CET] <Mavrik> Restreaming HLS with HLS just sounds like a recipe for insane amounts of lag :D
[17:07:20 CET] <BtbN> It's generally not really sensibly possible to do what you intend to do, since it's close to impossible to determine whether the stream dropped or is just lagging.
[17:07:38 CET] <matto312> Mavrik: agreed
[17:07:38 CET] <Mavrik> BtbN: unless you have some kind of realtime guarantees
[17:07:46 CET] <BtbN> With rtmp or hls?
[17:07:52 CET] <Mavrik> With whatever input
[17:08:01 CET] <Mavrik> If you're doing live streaming and haven't seen a packet in 500ms
[17:08:03 CET] <BtbN> rtmp and hls are very much not realtime
[17:08:08 CET] <Mavrik> You can assume that your players will drop that anyway :P
[17:08:26 CET] <BtbN> 500ms can be all within the encoders re-ordering delay though
[17:08:28 CET] <Mavrik> As protocols no. But it depends on what your source is.
[17:08:44 CET] <bencoh> separating it from ffmpeg probably means transcoding twice, though
[17:08:49 CET] <Mavrik> If you're restreaming a live TV channel, not seeing data packets for a while will have the same effect no matter the inherent burstiness of your input.
[17:09:16 CET] <BtbN> depends on the size of your local buffer
[17:10:31 CET] <JEEB> bencoh: not really. I'd probably do raw video/audio over NUT or something if I really had to have the stuff separate (like localhost UDP multicast or whatever)
[17:11:17 CET] <bencoh> I always considered using nut as an intermediate format for that kind of use, but never went for it in the end :)
[17:11:47 CET] <JEEB> although I still consider it a bit more eww than handling it in your API client
[17:12:02 CET] <JEEB> but raw v/a with NUT is probably the least bad alternative
[17:12:03 CET] <bencoh> definitely
[17:12:19 CET] <bencoh> (definitely eww :D)
[17:12:38 CET] <bencoh> (but yeah, less eww than real transcoding)
[17:14:27 CET] <JEEB> yea, doing it locally means you don't have to do transcoding on that layer
[17:14:37 CET] <JEEB> since no network involved
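A rough sketch of that localhost hand-off; the addresses, codecs and HLS options are placeholder assumptions, and the input-switching logic feeding the front process is the part that would still have to be written:

    # front process: produce a constant raw A/V feed over NUT
    ffmpeg -i rtmp://example.com/live/stream -c:v rawvideo -c:a pcm_s16le -f nut udp://127.0.0.1:1234
    # back process: consume the local feed and do the real encode
    ffmpeg -f nut -i udp://127.0.0.1:1234 -c:v libx264 -c:a aac -f hls out/index.m3u8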
[17:17:13 CET] <BeerLover> When to use -map and when to use -var_stream_map? ffmpeg -i song.mp3 -b:a:0 320k -b:a:1 128k -b:a:2 64k -b:a:3 32k -c:a aac -map 0:a:0 -map 0:a:0 -map 0:a:0 -map 0:a:0 -f hls -var_stream_map "a:0 a:1 a:2 a:3" -master_pl_name master.m3u8 -hls_time 10 -hls_segment_filename 'pk/%v/segment%05d.ts' -hls_list_size 0 pk/%v/index.m3u8
[17:18:27 CET] <JEEB> BeerLover: var_stream_map is an AVOption exposed by various output modules
[17:18:36 CET] <JEEB> in this case the HLS meta-muxer
[17:18:52 CET] <JEEB> map is ffmpeg.c's option that maps inputs for an output
[17:19:20 CET] <BeerLover> I want to create 4 bitrate hls playlists and a master.m3u8
[17:19:21 CET] <JEEB> easiest to explain is to check the output of `ffmpeg -h muxer=hls`
[17:19:41 CET] <JEEB> then you can see that var_stream_map is a HLS module option, not an ffmpeg.c option
[17:19:58 CET] <JEEB> so that is the difference between the two things :P
[17:20:43 CET] <JEEB> BeerLover: the stream numbers you set in var_stream_map are from those that you have mapped to the HLS muxer output
[17:21:57 CET] <BeerLover> JEEB do i need the -re option? In the examples, they use -re everywhere.
[17:22:08 CET] <JEEB> -re is an ffmpeg.c option that simulates real-time
[17:22:22 CET] <JEEB> as in, it makes ffmpeg.c sleep the difference of timestamps of input packets
[17:22:32 CET] <JEEB> otherwise ffmpeg.c will go as fast as your input goes
[17:22:48 CET] <JEEB> it is also error-prone because timestamps do not always end up going forwards at the point where -re reads them :P
[17:29:20 CET] <kepstin> if your input is already realtime, the -re option can make ffmpeg run slower than your input, causing buffering or desync issues, or even connection dropping in some cases.
[17:29:52 CET] <BeerLover> for local files, examples are using -re
[17:31:30 CET] <kepstin> the -re option is specifically to support streaming local files (or other vod type inputs) as if they were live/realtime inputs, yeah.
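A typical use, pacing a local file into a live ingest as if it were a realtime source (the URL is a placeholder):

    ffmpeg -re -i input.mp4 -c copy -f flv rtmp://live.example.com/app/streamkey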
[17:43:36 CET] <rockyh> hello!
[17:44:30 CET] <rockyh> Yesterday we talked about conversions from smpte2084 to bt709. I now have a doubt about another issue: given a video with these features
[17:44:39 CET] <rockyh> Video: hevc (Main 10), yuv420p10le(tv, bt2020nc/bt2020/smpte2084), 3840x2160 [SAR 1:1 DAR 16:9], 23.98 fps, 23.98 tbr, 1k tbn, 23.98 tbc (default)
[17:44:50 CET] <BeerLover> kepstin i want to convert mp3 to hls and store them in S3. Then clients will pull from there
[17:45:15 CET] <rockyh> how is it possible to lower its resolution to 1920x1080 while preserving *all* the other video features? (yuv420p10le(tv, bt2020nc/bt2020/smpte2084))
[17:45:33 CET] <DHE> but are you streaming them to users in realtime, or for their convenience to listen whenever from start to finish?
[17:46:32 CET] <JEEB> rockyh: use scale or zscale to just scale it down, and it shouldn't do anything else to the video. then make sure the encoder you're utilizing can flag those things (trc, primaries etc)
[17:47:33 CET] <rockyh> JEEB: if I use just `-vf scale=1920:1080', colors are altered, as if ffmpeg picked some defaults which are different from yuv420p10le(tv, bt2020nc/bt2020/smpte2084)
[17:47:53 CET] <BeerLover> JEEB they can listen anytime
[17:47:56 CET] <JEEB> please post -v verbose full uncut terminal output and link it here
[17:49:00 CET] <JEEB> ok, x264 seems to pass avctx->color_trc as is to vui.i_transfer in libx264, libx265 has some checks but seems to support it too?
[17:49:10 CET] Action: JEEB tests by encoding a bit of his godzilla sample
[17:51:59 CET] <furq> i've always needed to explicitly set -colorspace -color_trc -color_primaries with ffmpeg and x264
[17:52:24 CET] <JEEB> I've had it work, but unfortunately in various cases it currently seems to require that
[17:52:45 CET] <JEEB> because it doesn't initialize the encoder with the correct things (when it gets the first AVFrame that would have the correct metadata)
[17:52:57 CET] <furq> this was just for downscaling a 1080p bluray, so nothing fancy
[17:53:10 CET] <JEEB> this was the same thing
[17:53:15 CET] <JEEB> essentially
[17:53:17 CET] <furq> fun
[17:53:39 CET] <rockyh> it took a little bit: https://pastebin.com/kDE0mxQ3
[17:54:17 CET] <JEEB> ok, seems like either lavfi drops the information, or the encoder gets init too early
[17:56:34 CET] <JEEB> http://up-cat.net/p/e7b18b8c
[17:56:38 CET] <JEEB> let's see if this looks correct
[17:56:57 CET] <JEEB> it indeed did not set the correct metadata for libx264 if I didn't set those
[17:57:36 CET] <rockyh> in my case, the original video is x265 (and I would like to keep this also in the output if it's possible)
[17:57:57 CET] <JEEB> yea, I just had libx264 around to test in this build :P
[17:58:15 CET] <rockyh> ok :)
[17:58:23 CET] <JEEB> ok, seems correct enough, so FFmpeg didn't mangle it too badly
[17:58:47 CET] <JEEB> the encoder probably gets initialized too early, and thus there is no data to fill the lavc context with
[17:58:59 CET] <JEEB> in theory you could reconfigure it with the first AVFrame received
[17:59:10 CET] <JEEB> but I haven't had too much luck trying to get that to work with libx264's reconfigure :D
[17:59:21 CET] <JEEB> in theory it should work because nothing has been output yet
[17:59:26 CET] <JEEB> since nothing has been fed to the encoder yet
[18:00:02 CET] <JEEB> vlc seems to try initializing the encoder in init() (and then breaks it down), and then actually initializes the encoder as the first frame is received
[18:02:53 CET] <rockyh> uhm, I'm not sure I understood. Is this an ffmpeg problem, or does it generate a correct file that the player then "misunderstands"?
[18:03:55 CET] <JEEB> rockyh: if you look at my example, the actual video content is correct even if the -color_* options were removed, but the H.264 stream would not have the correct color metadata, and thus players show it as standard BT.709/gamma
[18:05:16 CET] <JEEB> ok, tested without the filter chain as well, and even then it doesn't properly propagate
[18:05:42 CET] <JEEB> so ffmpeg.c sets the encoder values at a point where it doesn't yet know the colorspace etc of the input
[18:06:04 CET] <furq> rockyh: if you already encoded the whole thing then you can set those flags with -bsf hevc_metadata
[18:06:10 CET] <JEEB> yes
[18:06:27 CET] <furq> https://www.ffmpeg.org/ffmpeg-bitstream-filters.html#hevc_005fmetadata
[18:06:33 CET] <JEEB> in other words, it's a metadata problem in the video stream, not actual content
[18:07:43 CET] <rockyh> oh, ok, got it and this is already a little relief
[18:08:45 CET] <rockyh> BTW using the explicit settings in your paste, `-color_trc smpte2084 -colorspace bt2020_ncl -color_primaries bt2020', the output video is correctly played
[18:09:05 CET] <JEEB> yes, because it's now properly flagged in the stream :)
[18:09:40 CET] <JEEB> I really think we should support more encoder configuration on the first fed AVFrame :/
[18:09:49 CET] <JEEB> since they contain the metadata as well
[18:10:11 CET] <furq> yeah that'd be nice
[18:10:17 CET] <rockyh> furq: so IIUC I can set the flags also after the encoding with just `-vf scale=1920:1080', using the linked page and `-bsf'
[18:10:29 CET] <furq> yeah
[18:10:52 CET] <furq> that filter doesn't support constants though so afaik you need to pull the values from the hevc spec
[18:10:55 CET] <JEEB> I looked into this at some point, and it seems like the simplest answer is to just wait with the actual initialization until the first AVFrame
[18:10:56 CET] <furq> at least i needed to do that with h264
[18:10:58 CET] <rockyh> wonderful!
[18:11:30 CET] <JEEB> I guess lavc context values would be preferred if set to something else than undefined, and otherwise configure it according to the first AVFrame
[18:11:42 CET] <furq> https://www.itu.int/rec/dologin_pub.asp?lang=e&id=T-REC-H.265-201906-I!!PDF-E&type=items
[18:12:03 CET] <JEEB> or I get x264/x265 reconfig working
[18:12:38 CET] <JEEB> (which I did try once, and I didn't get the darn thing to budge since I think I didn't change something enough - and yes, I was patching x264 as well to try and make sure that would cause a reconfig)
[18:13:14 CET] <JEEB> alternatively, only init the lavc context when the frame is to be passed to it
[18:13:59 CET] <furq> -i foo.mkv -map 0 -c copy -bsf:v hevc_metadata=colour_primaries=9:transfer_characteristics=16:matrix_coefficients=9 bar.mkv
[18:14:02 CET] <furq> i think that should be right
[18:15:38 CET] <rockyh> thanks :), I would never have been able to find them in the ITU document (even if I downloaded it!)
[18:16:32 CET] <furq> the bsf docs do at least mention the section of the spec to look at
[18:16:39 CET] <furq> so that's nice of them
[18:18:13 CET] <rockyh> furq: actually it generated this error in my x265 file https://pastebin.com/FZNgHfeD
[18:18:30 CET] <furq> oh
[18:18:38 CET] <furq> yeah obviously you need h264_metadata for h264
[18:20:26 CET] <rockyh> but my file is h265
[18:20:56 CET] <furq> the pastebin from a minute ago is outputting h264
[18:21:03 CET] <furq> the default encoder for mkv is x264 if you don't set one
[18:27:20 CET] <rockyh> oh, ok, I got confused during the encoding. I had used `ffmpeg -i input.mkv -vf scale=1920:1080 -c:a copy output.mkv' without specifying x265, so output.mkv was encoded with x264
[18:28:37 CET] <rockyh> instead, with `ffmpeg -i ctest.mkv -c:v libx265 -vf scale=1920:1080 -c:a copy output.mkv', I can then use `ffmpeg -i output.mkv -map 0 -c copy -bsf:v hevc_metadata=colour_primaries=9:transfer_characteristics=16:matrix_coefficients=9 output2.mkv' to correct metadata
[18:29:18 CET] <rockyh> thank you so much to both of you! JEEB furq (again, now it's all much clearer, even for a newbie like me!)
[18:31:55 CET] <furq> rockyh: if you're encoding anyway then just set it up front with -colorspace etc
[18:32:01 CET] <furq> there's no need for a second pass
[18:32:14 CET] <JEEB> the bsf is just to fix past mistakes
[18:32:17 CET] <furq> right
[18:33:29 CET] <rockyh> yes, of course, got it
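Putting the thread together, the single-pass version with the metadata set up front might look like this, reusing the values from JEEB's paste above:

    ffmpeg -i input.mkv -vf scale=1920:1080 -c:v libx265 -color_primaries bt2020 -color_trc smpte2084 -colorspace bt2020_ncl -c:a copy output.mkv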
[20:04:36 CET] <jemius> is there a shortcut to replace a video's audio stream? My way of first storing without audio and then muxing is a bit slow
[20:05:15 CET] <JEEB> two inputs, map what you need from each
[20:05:18 CET] <kepstin> jemius: something like "ffmpeg -i original.mp4 -i newaudio.mp4 -map 0:v -map 1:a -c copy output.mp4"
[20:05:36 CET] <jemius> kepstin, I have the audio as .flac
[20:05:46 CET] <kepstin> add additional maps if needed to preserve subtitle tracks or multiple audio or whatever, change encoder options if needed.
[20:06:25 CET] <kepstin> right, then you'll probably need to set -c:a to something that works in your output container, if you don't want to leave it as flac.
[20:14:46 CET] <jemius> mhm, mapping isn't exactly intuitive
[20:19:59 CET] <kepstin> it's pretty straightforwards? the option is "-map" followed by a stream specifier. Every stream that matches the stream specifier will be included in the output. If you specify -map multiple times, they all apply, in order.
[20:20:28 CET] <kepstin> so my command says "grab the video tracks from input 0", then "grab the audio tracks from input 1"
[20:24:01 CET] <jemius> and what would happen if the file contains more than one audio stream?
[20:24:43 CET] <kepstin> read the stream specifiers doc for detail, but my description already says what happens. Note that I pluralized "tracks".
[20:25:14 CET] <kepstin> you can use more specific specifiers if you want to grab one particular track.
[20:25:45 CET] <kepstin> https://www.ffmpeg.org/ffmpeg.html#Stream-selection is the doc in question
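Combining that with jemius's flac input, a fully specific mapping might look like this (re-encoding the flac to aac is an assumption, made because mp4 is the output container):

    ffmpeg -i original.mp4 -i newaudio.flac -map 0:v:0 -map 1:a:0 -c:v copy -c:a aac output.mp4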
[20:42:41 CET] <jemius> awkward, if I mux the files the audio is early by ~0.5 s
[20:55:19 CET] <friendofafriend> I'm getting decode_slice_header error"
[20:56:53 CET] <friendofafriend> Drat, sorry.  I'm getting "decode_slice_header error", "non-existing PPS 0 referenced", "decode_slice_header error", and "no frame!" errors when trying to play video streaming from ffmpeg to ffplay.  Why do these error messages occur, and why can ffplay play the resulting stream but not VLC or mplayer?
[20:58:25 CET] <JEEB> friendofafriend: that means that the stream you're feeding it hasn't yet given enough information to the decoder to produce an image
[20:58:38 CET] <JEEB> usually meaning that there hasn't been initialization data yet
[20:58:53 CET] <JEEB> PPS/SPS are those things that contain initialization data for the decoder
[20:59:03 CET] <JEEB> so if you don't have those yet, the decoder is not going to budge
[20:59:19 CET] <JEEB> alternatively, the stream is broken/corrupted
[21:00:10 CET] <JEEB> if these warnings/errors happen at the beginning and then go away it just means that you started from a position where there was not initialization data at first
[21:00:19 CET] <JEEB> and then you have to wait for the stream to contain that for the next time
[21:02:09 CET] <friendofafriend> Thank you, JEEB!  Those messages eventually go away and ffplay doesn't show them again, but other players just seem to hang when trying to load the stream.  I can even copy the stream to a file, and ffplay will play it and other players don't.
[21:02:56 CET] <JEEB> so yes, you probably started reading the stream not from the very beginning
[21:03:21 CET] <JEEB> and the initialization data is first at the start, and later at following random access points
[21:03:41 CET] <JEEB> modern video encoders default to like 250 frames max duration between random access points for compression
[21:03:53 CET] <JEEB> so that can take very easily seconds
[21:04:32 CET] <friendofafriend> Indeed, it does take quite a while to start. Longer than the -g option would suggest.
[21:04:46 CET] <friendofafriend> The command I'm using is something like this:  ffmpeg -i rtsp://1.2.3.4:5000/example -vsync 1 -content_type video/mp4 -c:v libx264 -g 5 -map 0 -f h264 -preset medium -profile:v main -x264opts "keyint=5:min-keyint=5:no-scenecut" -tune zerolatency icecast://source:password@localhost:8000/test.mp4
[21:05:04 CET] <johnjay> JEEB: not sure if this is ffmpeg but is there a way to convert a file to .SRT?
[21:05:27 CET] <JEEB> johnjay: if it is text subtitles and readable by FFmpeg, yes?
[21:05:28 CET] <johnjay> the file is like TIMECODE\nSubtitle\nTIMECODE\nSubtitle but this program wont' accept it
[21:06:10 CET] <JEEB> ffmpeg -v verbose -i BLAH out.srt
[21:06:18 CET] <JEEB> if BLAH is readable by FFmpeg
[21:08:15 CET] <johnjay> well i didn't think ffmpeg read subtitle files
[21:08:22 CET] <johnjay> but in any event it says no stream
[21:09:07 CET] <johnjay> when i try on a modified srt file i made it says Invalid Data found when processing input
[21:10:04 CET] <johnjay> ah i see. it does take the original unmodified srt file
[22:45:57 CET] <johnjay> well a quick perl script solves that problem
[22:48:49 CET] <cehoyos> johnjay: More than a dozen different subtitle file formats are supported by FFmpeg
[22:49:34 CET] <johnjay> cehoyos: maybe an existing format matches it, i'm not sure
[22:49:57 CET] <johnjay> this file is just like 00:10\n hello there\n 00:15\n how are you?\n
[22:50:12 CET] <johnjay> but the player i'm on only accepts srt
[22:50:37 CET] <cehoyos> Please provide a sample file if FFmpeg does not support the subtitle format.
[22:51:09 CET] <johnjay> you mean like a literal file?
[22:51:18 CET] <johnjay> this is the format youtube does all downloaded subs in
[22:51:22 CET] <johnjay> so they are all like that
[22:51:58 CET] <johnjay> again i might be ignorant and maybe there's a subtitle format that matches that
[22:52:05 CET] <cehoyos> If the format is not supported by FFmpeg, please provide a sample file
[22:53:52 CET] <johnjay> ok...
[22:53:58 CET] <johnjay> for example this is the file given from youtube for this video
[22:54:00 CET] <johnjay> https://paste.ubuntu.com/p/gyD9Y4ctfK/
[22:54:06 CET] <johnjay> https://www.youtube.com/watch?v=Pnhxz0learg
[22:54:31 CET] <furq> how did you download it
[22:54:36 CET] <johnjay> you don't
[22:54:44 CET] <johnjay> you click on the ... and choose Open Annotation
[22:54:44 CET] <furq> how did you get it then
[22:54:49 CET] <johnjay> then you have to highlight the text yourself
[22:54:57 CET] <johnjay> er Open Transcript
[22:55:13 CET] <furq> what ...
[22:55:15 CET] <cehoyos> Which software supports this format?
[22:55:18 CET] <johnjay> on the lower right of the video
[22:55:25 CET] <johnjay> by the Share and Save buttons
[22:55:44 CET] <furq> oh right
[22:55:53 CET] <furq> yeah that's not a real subtitle format
[22:56:04 CET] <johnjay> i don't know. i think youtube confusingly demands you upload subs in .srt format
[22:56:04 CET] <johnjay> so i don't know why they don't provide .srt
[22:56:24 CET] <johnjay> is it similar to any existing formats?
[22:56:27 CET] <johnjay> i thought maybe webvtt
[22:56:31 CET] <furq> use youtube-dl --write-sub for real CCs or --write-auto-sub for the autogenerated ones
[22:57:41 CET] <johnjay> furq: i'm trying it on that video. it says it's writing vtt?
[22:57:48 CET] <furq> yeah that makes sense
[22:58:18 CET] <furq> also --skip-download if you just want the subs
[22:58:27 CET] <cehoyos> The format is trivial and could be supported...
[22:58:43 CET] <furq> it's not a real format though
[22:58:54 CET] <furq> he just copied and pasted it off the video page
[22:59:18 CET] <cehoyos> It looks like a format to me...
[22:59:32 CET] <cehoyos> I just downloaded it myself
[22:59:46 CET] <furq> if you mean the one youtube-dl gives you then that's webvtt
[22:59:49 CET] <furq> that's already supported
[22:59:52 CET] <johnjay> well youtube does that for all videos
[23:00:05 CET] <johnjay> so idk if that makes it a "format" but it is common
[23:00:09 CET] <cehoyos> No, the one that is provided by youtube without using third-party tools;-)
[23:00:11 CET] <johnjay> i download transcripts for language learning
[23:00:18 CET] <nicolas17> copying formatted text from a webpage and pasting it as plaintext could give different results (whitespace) depending on the browser
[23:00:22 CET] <furq> right
[23:00:35 CET] <johnjay> i'm using chromium
[23:00:40 CET] <cehoyos> We ignore whitespace for subtitles
[23:00:43 CET] <furq> if there's a way to actually have it send you a plaintext subs file then i'd consider that a format
[23:00:57 CET] <johnjay> well it just displays it in a window on the right i think
[23:01:37 CET] <johnjay> my usecase is i'm learning a foreign language so i run the transcript through a translator
[23:01:50 CET] <johnjay> then use an extension to show the english subs next to the foreign ones
[23:02:00 CET] <johnjay> but the extension only takes srt oddly
[23:02:09 CET] <nicolas17> convert from webvtt
[23:02:44 CET] <furq> yeah ffmpeg will do vtt to srt no problem
[23:02:52 CET] <furq> apparently --sub-format srt will work for some videos but not on this one
[23:03:03 CET] <furq> maybe that's an auto captions thing, idk
[23:03:12 CET] <johnjay> well now i can do that since furq showed me how. thanks
[23:03:15 CET] <johnjay> !
[23:03:24 CET] <johnjay> yes i use auto captions
[23:03:33 CET] <johnjay> they aren't perfect but are suitable for the purpose
[23:03:51 CET] <johnjay> anyhow i already wrote a perl script to do this so either way i'm set
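For the record, the youtube-dl + ffmpeg route discussed above boils down to two commands; the exact .vtt filename depends on the video title:

    youtube-dl --skip-download --write-auto-sub "https://www.youtube.com/watch?v=Pnhxz0learg"
    ffmpeg -i "Video Title-Pnhxz0learg.en.vtt" subs.srt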
[23:37:30 CET] <GuiToris> hello, a 500 MiB, 22-minute, 1080p x265 video is rather big, isn't it?
[23:43:38 CET] <furq> depends what's in it
[23:43:44 CET] <furq> it's not that big though
[23:46:34 CET] <GuiToris> furq, I used -preset veryslow and -crf 29
[23:46:51 CET] <GuiToris> the source is a png sequence
[23:47:06 CET] <GuiToris> also -pix_fmt yuv420p
[23:47:19 CET] <GuiToris> how could I make it even smaller?
[23:47:27 CET] <GuiToris> (without sacrificing the quality)
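At a fixed preset, CRF is the main size lever for x265, and raising it does trade away some quality; a sketch, with the input pattern and framerate as assumptions:

    ffmpeg -framerate 24 -i frame%04d.png -c:v libx265 -preset veryslow -crf 31 -pix_fmt yuv420p smaller.mkv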
[23:51:05 CET] <GuiToris> I'm not sure but I think video files are smaller when they are created with kdenlive (melt maybe)
[23:57:11 CET] <Atlenohen> in before youtube adds yuv444
[23:57:53 CET] <Atlenohen> *whistle* ...
[23:58:47 CET] <Atlenohen> But, in all seriousness, if there were something in some industry that would make 4:2:0 just not work, maybe they could put enough pressure on youtube for that option
[23:59:02 CET] <Atlenohen> heck 422 would be a start
[23:59:18 CET] <GuiToris> I'm not going to upload this to yt, if that's what you meant
[23:59:45 CET] <Atlenohen> Nah, I'm just commenting generally on my frustration with youtube's 4:2:0
[23:59:54 CET] <GuiToris> oh okay
[00:00:00 CET] --- Thu Nov 21 2019

