[Ffmpeg-devel-irc] ffmpeg.log.20190526
burek
burek021 at gmail.com
Mon May 27 03:05:02 EEST 2019
[00:14:01 CEST] <frubsen> hey, anyone here know much about ffmpeg and decklink?
[08:41:44 CEST] <Trunk_> should i use CBR/VBR/ABR/CRF
[08:43:49 CEST] <upgreydd> Trunk_: what's the question?
[08:46:41 CEST] <Trunk_> should i use CBR/VBR/ABR/CRF
[08:46:45 CEST] <Trunk_> that's my question
[08:49:50 CEST] <upgreydd> Trunk_: it depends on the use case and what you're trying to achieve
[08:51:02 CEST] <Trunk_> recording an online chess game
[08:51:14 CEST] <Trunk_> using 15fps
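For an offline recording like this, CRF (constant quality) is usually the simplest of the four; CBR/ABR mainly matter when streaming under bitrate constraints. A sketch of what that might look like with libx264, assuming hypothetical filenames and a 15 fps source:

    # CRF mode: quality stays constant, bitrate varies with content
    ffmpeg -i chess_game.mkv -c:v libx264 -preset slow -crf 23 -c:a copy chess_game_out.mp4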
[11:42:35 CEST] <upgreydd> log2_max_frame_num_minus4 < how to set this?
[12:03:31 CEST] <upgreydd> is there an option to export all settings from h264 file (br, h264 specific settings etc.) and use with another file?
[12:04:02 CEST] <TheAMM> I don't think so
[12:04:55 CEST] <TheAMM> AFAIK you can use mediainfo to export the x264 settings and use them
[12:05:22 CEST] <TheAMM> I don't think ffmpeg/ffprobe has any way to show those
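The MediaInfo field in question is Encoded_Library_Settings; a sketch of querying it from the command line (option syntax assumed from MediaInfo's template mechanism, worth double-checking):

    mediainfo --Inform="Video;%Encoded_Library_Settings%" input.mp4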
[12:06:10 CEST] <upgreydd> TheAMM: thanks. do you maybe know how to set some custom settings in libx264 like log2_max_frame_num_minus4 or bit_rate_scale ?
[12:06:47 CEST] <TheAMM> Dunno about that option, but -x264-params is a thing
[12:07:08 CEST] <furq> that'll only work if it's x264 and it has the SEI intact
[12:07:14 CEST] <furq> and also some options contain colons which break with x264-params
[12:07:28 CEST] <TheAMM> https://ffmpeg.org/ffmpeg-codecs.html#libx264_002c-libx264rgb, at the bottom
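A sketch of what passing encoder-internal options through -x264-params looks like (the parameter names here are common x264 options chosen for illustration, not the ones upgreydd asked about):

    ffmpeg -i in.mp4 -c:v libx264 -x264-params "keyint=120:min-keyint=12:ref=4" out.mp4

Note furq's caveat above: the list itself is colon-separated, so option values that contain colons break this syntax.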
[12:24:21 CEST] <Lyberta> hi, I have HDR video (rec.2020 10bit) and want to extract frames from it while keeping them in HDR. it looks like my only options are OpenEXR and maybe JPEG 2000. can ffmpeg do that? I've tried JPEG 2000 export and it produced very dark colors
[12:38:26 CEST] <JEEB> Lyberta: what is your definition of "extracting frames"?
[12:39:18 CEST] <JEEB> because you could just write out raw YCbCr into Y4M or so. it will lose the colorspace metadata, but it will give you the exact decoded YCbCr image
[12:39:34 CEST] <JEEB> also do note, rec 2020 is only the colorspace; the actual HDR part is the transfer function
[12:39:43 CEST] <JEEB> which is usually one of the things defined in rec 2100
[12:39:46 CEST] <JEEB> either HLG or PQ
[12:46:12 CEST] <snap1> which will create a smaller file: a constant all-black 720p screen or a constant all-white 720p screen?
[12:50:35 CEST] <Lyberta> JEEB, VLC says ST2084
[12:50:43 CEST] <JEEB> that is PQ
[12:51:03 CEST] <JEEB> ffprobe -v verbose FILE would also tell it most likely
[12:51:57 CEST] <Lyberta> JEEB, Stream #0:0: Video: hevc (Main 10), 1 reference frame, yuv420p10le(tv, bt2020nc/bt2020/smpte2084), 3840x2160 [SAR 1:1 DAR 16:9], 23.98 fps, 23.98 tbr, 1k tbn, 23.98 tbc (default)
[12:52:03 CEST] <JEEB> yup
[12:52:23 CEST] <JEEB> so TV range, BT.2020 non-constant luminance, bt.2020 and SMPTE ST.2084 transfer function
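Those same properties can also be queried explicitly; a sketch using ffprobe's per-stream entries (field names as in recent ffprobe versions):

    ffprobe -v error -select_streams v:0 \
        -show_entries stream=pix_fmt,color_range,color_space,color_transfer,color_primaries \
        input.mkv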
[12:52:40 CEST] <Lyberta> JEEB, so I want frames as individual raster image files
[12:53:19 CEST] <JEEB> at this point the question becomes what do you want to open them with?
[12:53:30 CEST] <JEEB> that would define what you want to utilize as your output
[12:53:56 CEST] <JEEB> and what are your requirements for what is the definition of "extracting frames"
[12:54:10 CEST] <JEEB> as in, do you want just the decoded YCbCr? or do you want RGB with PQ? or what
[12:55:32 CEST] <Lyberta> JEEB, can image viewers open YCbCr? I mean for LDR I could just export to PNG and be done with it, but here I don't want to lose data
[12:56:27 CEST] <JEEB> well, JPEG when decoded is YCbCr in 99% of all cases, so on some level they should. now the problem is what your image viewers will actually open
[12:56:52 CEST] <Lyberta> JEEB, say, GIMP
[12:56:59 CEST] <JEEB> I have no idea, sorry
[12:57:06 CEST] <furq> tiff will store raw yuv
[12:57:12 CEST] <JEEB> you will have to ask GIMP people regarding what to do with your PQ
[12:57:31 CEST] <JEEB> because it all depends on if your thing that you will be opening stuff with will support PQ/BT.2020
[12:57:53 CEST] <JEEB> and that then defines if you want YCbCr or converted to RGB or whatever
[12:58:01 CEST] <JEEB> (both still in PQ)
[12:58:23 CEST] <JEEB> if none of your image things support PQ then you will have to figure out what colorspaces for HDR they actually do support
[12:58:37 CEST] <JEEB> s/colorspaces/transfer functions/
[12:59:25 CEST] <JEEB> (and colorspaces in general)
[12:59:33 CEST] <Lyberta> JEEB, ok, so what format can store YCbCr + PQ?
[12:59:59 CEST] <JEEB> anything that lets you feed raw YCbCr into it. metadata is a separate piece of a mess of course
[13:00:05 CEST] <JEEB> but that way your data should be unchanged
[13:00:18 CEST] <JEEB> raw YCbCr data, Y4M, and as furq noted TIFF seems to support raw YCbCr?
[13:00:33 CEST] <furq> maybe not 10-bit
[13:00:49 CEST] <furq> lossless webp should work for 8-bit as well
[13:01:01 CEST] <JEEB> well we're dealing with PQ content so 8bit isn't gonna fly
[13:01:03 CEST] <furq> but support for that is pretty patchy
[13:01:27 CEST] <JEEB> Lyberta: so yes, your output from FFmpeg will 100% be defined by what your stuff can support as input
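A sketch of the raw-YCbCr route for a single frame (seek point and filenames hypothetical). Y4M keeps the decoded 10-bit samples bit-exact, but as noted above it carries no colorspace/transfer metadata, so that has to be tracked separately; ffmpeg may also require -strict -1 before it will write 10-bit Y4M:

    ffmpeg -ss 00:01:23 -i input.mkv -frames:v 1 -f yuv4mpegpipe -strict -1 frame.y4m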
[13:01:33 CEST] <Lyberta> ffmpeg refused to encode into OpenEXR :( and GIMP supports opening it
[13:01:50 CEST] <JEEB> what's the colorspace in OpenEXR?
[13:02:17 CEST] <JEEB> IIRC it was RGB (or XYZ), so some sort of conversion has to take place
[13:04:11 CEST] <Lyberta> JEEB, it looks like it allows specifying reference colors manually in the file
[13:06:52 CEST] <JEEB> seems like we only have a decoder for openexr
[13:07:27 CEST] <JEEB> so you can *decode* OpenEXR images from... mp4
[13:07:31 CEST] <JEEB> (or mov)
[13:07:41 CEST] <JEEB> but not write them :P
[13:07:47 CEST] <furq> so apparently the tiff encoder doesn't do high bit depth yuv but it is technically supported
[13:07:52 CEST] <furq> although you need to pad it to 16-bit
[13:08:13 CEST] <furq> if ffmpeg is out then i guess openexr is easier anyway
[13:10:17 CEST] <JEEB> somehow at this point it feels as if the way to go is exporting one frame as raw YCbCr from FFmpeg (keeping in mind its colorspace and transfer function), and checking whether anything takes that in with the colorspace/trc metadata and is able to write something that you need :P
[13:10:38 CEST] <JEEB> although I'd really, really recommend looking up what things are actually supported properly by the applications you plan on using
[13:11:03 CEST] <JEEB> because it sucks hard when you go through a workflow and seemingly have something that should contain the data you require
[13:11:10 CEST] <Lyberta> JEEB, well, it has to be free software so.... GIMP seems to be the swiss army knife
[13:11:11 CEST] <JEEB> just to find that the support on the reading side sucks
[13:11:31 CEST] <JEEB> I recommend talking about it with the GIMP community then
[13:11:47 CEST] <JEEB> that you have some BT.2020/PQ YCbCr content that you would like to open up
[13:12:01 CEST] <JEEB> and what is the best way to open that up in GIMP
[13:12:27 CEST] <JEEB> Lyberta: btw what is the idea behind this export?
[13:12:39 CEST] <JEEB> I'm just wondering that if you are just going to be viewing it, you will be tone mapping anyways
[13:12:46 CEST] <JEEB> (on the screen)
[13:13:11 CEST] <upgreydd> JEEB: found another DoorIN sample with correct h264 headers and found differences in log2_max_frame_num_minus4, log2_max_pic_order_cnt_lsb_minus4 and other options of that kind. They have a lot more than I do. Is there an option to set them?
[13:13:52 CEST] <JEEB> upgreydd: those sound like really internal values in the stream
[13:14:03 CEST] <JEEB> as in, you won't find encoder options strictly setting those
[13:14:16 CEST] <JEEB> I recommend you read up on their definition and see whether actually trying to mimic them makes sense
[13:14:44 CEST] <JEEB> then if it seems like those are indeed your thing, then check x264's source code regarding setting those flags
[13:14:53 CEST] <JEEB> what affects them etc
[13:15:14 CEST] <JEEB> I think that's the least bad way of getting it done in case that actually makes sense as you read those fields' definition
[13:15:29 CEST] <JEEB> anyways, dropping out to switch servers
[13:17:56 CEST] <Lyberta> JEEB, well, video players have good colors so I assume that file contains enough info to convert to LDR automatically
[13:18:21 CEST] <JEEB> yes the HEVC file has enough metadata that players that support it work
[13:18:29 CEST] <JEEB> such as mpv etc
[13:18:34 CEST] <JEEB> and it's not LDR, it's SDR :P
[13:18:56 CEST] <JEEB> Lyberta: so basically it depends on the actual use case you have for this stuff and why you want it to be in an image viewer specifically
[13:20:18 CEST] <Lyberta> JEEB, well, people market "HDR" displays so I assume I want to keep all those 10 bits and the transfer function in case I get one in the future
[13:21:02 CEST] <JEEB> ok, so at that point you play or view the original file with a player or viewer that supports that?
[13:21:19 CEST] <Lyberta> JEEB, yes
[13:21:31 CEST] <JEEB> ok, then I'm wondering what the whole export thing was about?
[13:21:52 CEST] <Lyberta> JEEB, to get rid of video file and keep only frames I like
[13:22:34 CEST] <JEEB> then it sounds like writing singular HEVC images in 10bit lossless mode is the least bad alternative?
[13:22:46 CEST] <JEEB> that way you keep the metadata and don't need to do any colorspace conversions
[13:24:12 CEST] <Lyberta> JEEB, so what container can I use?
[13:24:35 CEST] <JEEB> anything sane? mp4 probably is most compatible, and matroska right after that if you like that stuff
[13:25:05 CEST] <JEEB> although if you're doing lossless at that point compatibility kind of becomes less relevant since not many hw decoders support lossless anyways :P
[13:25:14 CEST] <JEEB> in the best case the frames you like are random access points
[13:25:23 CEST] <JEEB> and you can just -c copy those :P
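A sketch of that lossless-HEVC-per-frame idea with libx265 (seek point and filenames hypothetical; whether all the VUI color tags survive into the output is worth verifying with ffprobe):

    ffmpeg -ss 00:01:23 -i input.mkv -frames:v 1 \
        -c:v libx265 -x265-params lossless=1 -pix_fmt yuv420p10le \
        -color_primaries bt2020 -color_trc smpte2084 -colorspace bt2020nc \
        frame.mp4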
[13:41:07 CEST] <Lyberta> JEEB, if I export to PNG all colors are wrong, what can be the cause?
[13:41:22 CEST] <JEEB> the default conversion path is not optimal at all
[13:41:45 CEST] <JEEB> you will have to utilize zscale and one of the tone mapping filters for that
[13:41:59 CEST] <JEEB> I would recommend taking screenshots with a recent enough version of mpv or so
[13:42:07 CEST] <JEEB> (preferably current git master)
[13:42:12 CEST] <Lyberta> JEEB, can't I just say "use whatever is in the file"?
[13:42:19 CEST] <JEEB> no
[13:42:23 CEST] <Lyberta> why?
[13:42:39 CEST] <JEEB> since PNG is Xbit sRGB in SDR generally
[13:42:47 CEST] <JEEB> so there has to be a conversion to that
[13:42:56 CEST] <JEEB> not only colorspace conversions but also tone mapping
[13:43:02 CEST] <JEEB> FFmpeg unfortunately doesn't do that by default
[13:43:27 CEST] <JEEB> it can do it, but it's sub-optimal because the tonemap filter is based on an older version of mpv/libplacebo functionality
[13:43:44 CEST] <JEEB> then the intel opencl filter is more up-to-date but that stuff has changed again in mpv/libplacebo
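For reference, the zscale + tonemap route mentioned above usually looks something like this commonly circulated chain (requires an FFmpeg build with zimg; the hable curve and npl=100 are typical values, not authoritative ones):

    ffmpeg -i input.mkv -frames:v 1 -vf "zscale=t=linear:npl=100,format=gbrpf32le,zscale=p=bt709,tonemap=hable,zscale=t=bt709:m=bt709:r=tv,format=yuv420p" out.png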
[13:44:23 CEST] <Lyberta> JEEB, ufffff
[13:44:42 CEST] <Lyberta> ok, VLC also has wrong colors, only mpv has proper ones
[13:44:55 CEST] <JEEB> yes, vlc 4 will have more libplacebo integration
[13:45:52 CEST] <JEEB> screenshot-format=png and screenshot-tag-colorspace=yes in your mpv config file should give you a nice result.
[13:46:03 CEST] <JEEB> also if your GPU can take it, profile=gpu-hq
[13:46:05 CEST] <upgreydd> JEEB: found problem :D I need H264 annex-b :D
[13:46:17 CEST] <JEEB> upgreydd: how the hell were you outputting AVCc H.264 :P
[13:46:30 CEST] <JEEB> I think both samples were annex b no?
[13:46:34 CEST] <JEEB> the ones you linked
[13:47:06 CEST] <gvth> Hello folks; I have cut the end off a video which makes the background music stop abruptly. I would like to provide the audience a smooth experience by decreasing the volume starting at 'end minus two seconds' to the actual end gradually from 100% to 0%. Is ffmpeg capable of accomplishing that? Thanks in advance for any straightforward suggestion :)
[13:47:51 CEST] <upgreydd> JEEB: my h264 without the custom header starts with 00 00 00 01 67; theirs has 00 00 00 01 09 10 00 00 00 01 67
[13:48:30 CEST] <JEEB> upgreydd: both three and four byte start codes are Annex B
[13:48:42 CEST] <JEEB> I don't remember how AUD worked but is it that?
[13:48:52 CEST] <JEEB> (AUD helps parsers if I recall correctly)
[13:49:37 CEST] <JEEB> you can set x264 f.ex. to use access unit delimiters
[13:49:39 CEST] <upgreydd> JEEB: I was thinking 00 00 00 01 09 10 is custom but this comes from h264
[13:50:16 CEST] <JEEB> gvth: there's a filter for that but I'm pretty sure it's geared for non-live :)
[13:50:18 CEST] <upgreydd> JEEB: I'm not sure what's that, searching how to activate it :D
[13:50:30 CEST] <JEEB> I think the API option is aud=1 or something
[13:51:06 CEST] <JEEB> yup
[13:51:14 CEST] <JEEB> -x264-params "aud=1" or so
[13:51:22 CEST] <JEEB> that should have it start writing AUDs
[13:51:48 CEST] <JEEB> note: check the spec if that stuff you find is AUDs
[13:51:54 CEST] <JEEB> I have no idea what AUDs look like :P
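For reference, the bytes pasted earlier do parse as an AUD: in H.264, nal_unit_type 9 is the access unit delimiter, so after a four-byte start code it shows up as 0x09:

    00 00 00 01 09 10
    ^^^^^^^^^^^          start code
                ^^       NAL header: nal_ref_idc=0, nal_unit_type=9 (AUD)
                   ^^    AUD payload: primary_pic_type plus the RBSP stop bit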
[13:52:03 CEST] <durandal_1707> gvth: afade filter; you need to set the start of the fade manually, it can't be relative to the end of the stream
[13:52:20 CEST] <JEEB> yea
[13:52:38 CEST] <JEEB> durandal_1707: so I think that wouldn't work nicely with a live stream which is what I think gvth is doing?
[13:53:03 CEST] <gvth> JEEB, durandal_1707: Could you please provide an actual sample command or a weblink?
[13:53:37 CEST] <gvth> JEEB, durandal_1707: Using a search engine, I cannot find my issue being addressed
[13:54:00 CEST] <durandal_1707> gvth: search ffmpeg afade filter
[13:57:10 CEST] <upgreydd> JEEB: aud=1 is correct :D
[13:57:11 CEST] <durandal_1707> also look at: https://github.com/guillaumekh/ffmpeg-afade-cheatsheet
[13:58:25 CEST] <JEEB> upgreydd: ok, so it was AUDs?
[13:58:58 CEST] <durandal_1707> gvth: http://ffmpeg.org/ffmpeg-filters.html#afade-1
[14:00:27 CEST] <durandal_1707> gvth: i could implement commands for afade, so you could do more advanced stuff
[14:00:28 CEST] <gvth> durandal_1707: thanks, your last link addressed my issue
[14:01:24 CEST] <upgreydd> JEEB: it's still not playing, but that can be related to a wrong header now xD
[20:02:23 CEST] <Cracki> so... why does this page not link to actual documentation of libavcodec, only this kind of "book cover"? https://www.ffmpeg.org/libavcodec.html
[20:02:51 CEST] <Cracki> I'm trying to get into libav* but mere lists of classes and function signatures aren't gonna cut it
[20:06:57 CEST] <Cracki> tbh these pages are pointless. _now_ I discovered I want "api" docs. that needs to be linked right there, or those pages can go. I really see no point in them existing.
[20:11:10 CEST] <Cracki> also, where would I go to learn "big concepts" such as how AVCodec and AVCodecContext relate to each other, or how time bases between codec context, stream, container, ... relate, or how offsets between streams are treated, or how long (duration) the last frame of a video stream is presented
[20:12:01 CEST] <JEEB> the last frame part is a really container specific thing. in some containers you have a duration, in many you don't
[20:12:17 CEST] <JEEB> and yea, for the API docs site:ffmpeg.org doxygen trunk KEYWORD
[20:12:23 CEST] <JEEB> is probably going to be the best way
[20:12:27 CEST] <Cracki> I'm ok with answers, I'd be happy with knowing WHERE these things are written down
[20:12:49 CEST] <Cracki> I'm looking for a written guide, not for a API reference
[20:14:09 CEST] <Cracki> something that explains the rules of the game, not an engineering drawing of the chess pieces with metric dimensions
[20:14:13 CEST] <JEEB> i don't think there's a step-by-step guide, you can take a look at the examples under doc/examples in the source tree. although beware of the "decoding" only or "encoding" only examples since they seem to take weird short-cuts like reading the file without lavf or outputting the encoded buffers etc
[20:14:34 CEST] <JEEB> the transcoding example or transcode_aac examples are probably some of the better ones
[20:14:53 CEST] <Cracki> I'm interested in encoding. too much stuff out there only treats decoding.
[20:15:03 CEST] <JEEB> transcoding is the whole chain
[20:15:28 CEST] <JEEB> and f.ex. the encoding-only stuff often leaves things out to "simplify" the example, which I'm not sure is always in the best interest of the example
[20:15:31 CEST] <Cracki> tbh, I treat these examples as "someone wrote this a while ago, NO GUARANTEE that it's what you're supposed to do"
[20:15:41 CEST] <JEEB> if we provide helpers that make some things simpler
[20:15:56 CEST] <Cracki> helpers are great, if they're explained
[20:16:15 CEST] <JEEB> anyways, the transcoding example IIRC doesn't utilize the new dec/enc APIs yet. the transcode_aac one I think is up-to-date on that
[20:16:18 CEST] <Cracki> oh well, guess this is one of those days where I have to suppress my genocidal urges
[20:17:04 CEST] <Cracki> what is the "dec/enc" api, is that documented already, what am I supposed to use
[20:17:18 CEST] <Cracki> how am I supposed to decide what to choose
[20:17:31 CEST] <JEEB> https://www.ffmpeg.org/doxygen/trunk/group__lavc__encdec.html
[20:17:37 CEST] <JEEB> yes, it even has a general how-to guide :P
[20:17:40 CEST] <Cracki> do the examples indicate which ones are "up to date" and which ones I should avoid?
[20:17:57 CEST] <Cracki> how would I have discovered these facts on my own?
[20:18:13 CEST] <Cracki> >Set up and open the AVCodecContext as usual.
[20:18:19 CEST] <Cracki> so this assumes I know things already
[20:19:00 CEST] <Cracki> the very first paragraph I expect to read on this "old/new" stuff is who and WHY this was deemed necessary
[20:19:25 CEST] <Cracki> the second paragraph should explain the differences without assuming that the reader knows either the old or the new way already
[20:19:52 CEST] <JEEB> jesus fuck
[20:19:55 CEST] <Cracki> same
[20:20:13 CEST] <Cracki> ffmpeg is in the position that there's no alternative
[20:20:26 CEST] <JEEB> the APIs are not that fucking bad.
[20:20:32 CEST] <Cracki> that's not what I said
[20:20:35 CEST] <JEEB> it might not have perfect documentation, but for fuck's sake
[20:20:51 CEST] <JEEB> and I am not being paid to sit here and listen to you ramble about how you hate it
[20:20:59 CEST] <Cracki> I said I need certain documentation, which I haven't found yet
[20:21:41 CEST] <Cracki> such as, where is the general idea of time bases explained, and how it touches various classes in these libraries
[20:21:41 CEST] <JEEB> anyways, the general thing is: an lavf context for input reading, from which you get AVPackets; an lavc context for each stream you want to decode, at which point you start getting AVFrames from the decoder; then you do filtering in lavfi if you need to; then you open an encoder for the stream(s) you need; and finally you have an output AVFormatContext to which you stick the AVPackets received from the encoder
[20:21:53 CEST] <JEEB> see the transcoding example for example
[20:22:04 CEST] <JEEB> I am a shitty coder yet I could make a simple piece of shit with the API
[20:22:15 CEST] <Cracki> you're familiar with it
[20:22:20 CEST] <durandal11707> Cracki: please contact phone support
[20:22:30 CEST] <JEEB> I fucking wasn't
[20:22:37 CEST] <JEEB> i might have poked the internals
[20:22:42 CEST] <JEEB> but not the external APIs
[20:22:46 CEST] <JEEB> fucking hell
[20:22:50 CEST] <Cracki> I am sorry for this interaction.
[20:22:57 CEST] <Cracki> and what does that word even mean
[20:23:11 CEST] <JEEB> anyways, see the transcoding example for example
[20:23:15 CEST] <JEEB> it will open an input avformat context
[20:23:26 CEST] <JEEB> then use the fabulously misnamed av_read_frame
[20:23:31 CEST] <JEEB> which gives you AVPackets
[20:23:48 CEST] <JEEB> each packet will have an AVStream index marked on it
[20:24:04 CEST] <JEEB> you can use the avformatcontext's list of AVStreams to figure out what stream that is
[20:24:29 CEST] <JEEB> then you can create an AVCodecContext from the parameters of that input stream for decoding
[20:24:48 CEST] <JEEB> then you use the feed/receive API as mentioned to get AVFrames out of AVPackets
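A minimal sketch of that feed/receive decode loop in C (error handling trimmed; fmt, dec and video_index are assumed to have been set up as described above):

    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>

    AVPacket *pkt = av_packet_alloc();
    AVFrame *frame = av_frame_alloc();

    while (av_read_frame(fmt, pkt) >= 0) {          /* demux: yields AVPackets */
        if (pkt->stream_index == video_index) {     /* match against AVStreams */
            avcodec_send_packet(dec, pkt);          /* feed...                 */
            while (avcodec_receive_frame(dec, frame) >= 0) {
                /* ...and receive: decoded AVFrames come out here */
                av_frame_unref(frame);
            }
        }
        av_packet_unref(pkt);
    }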
[20:25:17 CEST] <Cracki> this project has a code of conduct...
[20:25:32 CEST] <JEEB> then there's filtering if you need to change some parameters
[20:25:47 CEST] <JEEB> then you search for an encoder you want f.ex. by its name
[20:25:50 CEST] <Cracki> durandal11707, I'm sure you were joking because that's not at all what I was asking for
[20:27:35 CEST] <Cracki> thank you JEEB, but you really don't need to. I understand now that there's no guide for this.
[20:27:36 CEST] <JEEB> https://www.ffmpeg.org/doxygen/trunk/group__lavc__encoding.html
[20:27:55 CEST] <JEEB> there's avcodec_find_encoder_by_name f.ex.
[20:28:12 CEST] <durandal11707> Cracki: then what do you need? an "FFmpeg API guide for dummies" book?
[20:28:27 CEST] <Cracki> durandal11707, why so deprecating?
[20:28:44 CEST] <JEEB> we haven't done major API changes in a few years now so please drop that meme
[20:28:50 CEST] <Cracki> yes, maybe that's what I'm looking for, something that explains the concepts, not the apis
[20:28:58 CEST] <JEEB> even the "new" encoding/decoding APIs are now two+ years old
[20:29:04 CEST] <pink_mist> Cracki: probably because you come in here asking to be spoonfed without showing that you've actually done any work to figure things out on your own
[20:29:22 CEST] <JEEB> Cracki: anyways please take a look at f.ex. the transcoding examples and ask here
[20:29:36 CEST] <Cracki> pink_mist, why the hate
[20:29:48 CEST] <Cracki> am I touching a sore spot somehow?
[20:29:49 CEST] <pink_mist> and you claim that I'm hating you now ..?
[20:30:05 CEST] <pink_mist> I was explaining that your attitude was pretty shitty
[20:30:16 CEST] <pink_mist> if you want to make that into me hating you, you're mistaken
[20:30:18 CEST] <JEEB> we are used to people asking "I tried x, y, z but I'm not sure if this is good" and pasting code
[20:30:29 CEST] <JEEB> and then we have a discussion
[20:30:35 CEST] <Cracki> I was explaining that I see an opportunity to improve the documentation
[20:30:45 CEST] <Cracki> and you people come at me with aggression and insults
[20:30:58 CEST] <Cracki> I'm not here to cause trouble, believe me
[20:31:01 CEST] <pink_mist> Cracki: if you're willing to put in the work to improve the documentation it will be appreciated, I'm sure
[20:31:04 CEST] <JEEB> ok. then there was some misreading of things if that is what you meant
[20:31:14 CEST] <JEEB> because it really seemed like you were just herping a derp
[20:31:23 CEST] <Cracki> I see that this project has a code of conduct, yet I get this kind of response from several people
[20:31:41 CEST] <Cracki> I am very sorry that you have bad experiences. don't take it out on me.
[20:32:01 CEST] <JEEB> but really, please take a look at the basic transcoding examples and/or explain your use case
[20:32:05 CEST] <Cracki> I am also very sorry that I have to be the one to try to stay cool
[20:32:12 CEST] <Cracki> I AM looking at that example
[20:32:34 CEST] <JEEB> because by knowing your use case it'd be simpler to grab which parts you need
[20:32:37 CEST] <pink_mist> what "this kind of response"? I'm being perfectly civil to you, yet you keep complaining that I must hate you? please stop doing that.
[20:34:48 CEST] <Cracki> my concrete use case? write a video, qtrle codec, where the picture changes at specific times.
[20:34:53 CEST] <Cracki> I have got that working.
[20:35:17 CEST] <Cracki> but I have no idea if I set the right values in the right objects
[20:35:33 CEST] <durandal11707> the explanation of what time_base is isn't in the documentation IIRC
[20:35:53 CEST] <JEEB> ok, so no decoding etc? just getting raw video frames and feeding to an AVCodecContext you have created and then feeding that to the muxer?
[20:35:54 CEST] <Cracki> on top of that, it's in python, with "pyav", the documentation of which consistently refers to libav* to know the details because it's a more or less thin wrapper
[20:36:37 CEST] <JEEB> the problem with FFmpeg's APIs is that you can do so many things with it :P it's either you make a thin wrapper or you really limit yourself to a specific use case
[20:36:42 CEST] <JEEB> both are valid approaches
[20:37:09 CEST] <Cracki> I don't know what its "av.open" does but I have a qtrle stream, on which I set a time_base (probably wrong), and I have a codec_context for that stream, on which I can set gop_size=1 (for intra), and I have avframes on which i can set time_base and pts
[20:37:37 CEST] <JEEB> Cracki: so you have your AVFrame with the raw video. it has a PTS on some time base. AVFrames by themselves (unfortunately) don't have time bases
[20:38:08 CEST] <Cracki> but from setting and reading these fields, I noticed that it computes least common multiples (or gcd), so that's ok, but I'd like to know what I do that's right and what's just coincidentally not causing it to fail
[20:38:10 CEST] <JEEB> so when you are taking some data into the FFmpeg's APIs for encoding, you just need to keep a view on what the time base is for your AVFrames
[20:38:37 CEST] <JEEB> then when you create your AVCodecContext you set a time base to it
[20:38:42 CEST] <Cracki> good, avframes don't have timebases, that's valuable info
[20:39:04 CEST] <JEEB> before you feed the AVFrame to the encoder, you have to make sure that it is in the time base of the AVCodecContext
[20:39:11 CEST] <Cracki> so on what object do I set the timebase? codec context? stream? container?
[20:39:19 CEST] <Cracki> I realize containers might have time bases too
[20:39:34 CEST] <JEEB> yes, on avformat level every *AVStream* has a time base
[20:39:43 CEST] <JEEB> for example audio might have it 1/48000
[20:39:49 CEST] <Cracki> "in the timebase" meaning? that avcodeccontext has a timebase, and the frame's pts is in the context's time base?
[20:39:50 CEST] <JEEB> and video might have 1001/24000
[20:40:26 CEST] <JEEB> Cracki: yes, when you feed an AVFrame to an encoder you use one of the built-in functions to move the value from whatever the previous time base was to the encoder's
[20:40:56 CEST] <Cracki> does the stream take the timebase of its codeccontext, or do both objects need that set explicitly?
[20:40:59 CEST] <JEEB> in your case you have raw data you're feeding from some input so just make sure your input AVFrames have pts on the time base set into your encoder AVCodecContext
[20:41:10 CEST] <JEEB> avformat time bases are 100% unrelated
[20:41:36 CEST] <JEEB> also avformat depending on the output container can change the time base to something that isn't what you asked for during init
[20:41:39 CEST] <JEEB> for example
[20:41:44 CEST] <JEEB> you might want to output FLV or MPEG-TS
[20:41:49 CEST] <JEEB> both of these have hard-coded time bases
[20:42:02 CEST] <JEEB> so even if you set a time base there, it will get set to 1/1000 or 1/90000 respectively
[20:42:16 CEST] <JEEB> thankfully, that value is available to read in both structures after initialization
[20:42:21 CEST] <JEEB> so you don't have to special case it
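In code, that "read it back after init" pattern might look like this (a sketch; ofmt is an assumed, already-created output AVFormatContext):

    AVStream *st = avformat_new_stream(ofmt, NULL);
    st->time_base = (AVRational){1, 1000};   /* a request, not a guarantee    */
    avformat_write_header(ofmt, NULL);       /* muxer may override it here... */
    /* ...so from this point on, read st->time_base for the actual value */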
[20:43:03 CEST] <Cracki> so... I use the context's timebase, and leave the stream's timebase alone?
[20:43:08 CEST] <JEEB> when you init streams in an avformatcontext for muxing, you set a time base, and then when you are making sure that the pts/dts of the AVPacket are on the AVStream's time base, you don't assume what it is but instead read it from the output AVStream
[20:43:20 CEST] <JEEB> no, the time base will be in the stream
[20:43:32 CEST] <JEEB> after init it will get set to the hard-coded value or otherwise to the one you defined
[20:43:41 CEST] <JEEB> so the logic is the same in both cases
[20:44:00 CEST] <Cracki> I think I want to define a 1/1000 timebase because my desired presentation timestamps are at best millisecond precise
[20:44:23 CEST] <JEEB> sure, and then if the container is somehow limited it will get set to something else if needed
[20:44:35 CEST] <Cracki> I'm ok with a finer timebase, I just need to know what it'll be so I can compute pts right
[20:44:51 CEST] <JEEB> yes, it is in the AVStream and there's even a function for the rescaling
[20:45:04 CEST] <JEEB> I don't remember what it was - it takes an AVPacket
[20:45:19 CEST] <Cracki> I have seen these translation functions. I'll have to check how this python wrapper library does it
[20:45:21 CEST] <JEEB> and you give it the encoder context's time base and the AVStream's time base as params
[20:45:34 CEST] <JEEB> so basically you have your input data on some time base
[20:45:45 CEST] <JEEB> then you initialize the encoder lavc context with some time base
[20:46:00 CEST] <Cracki> so, just asking if I understood it... it translates a frame's pts, from context timebase, to stream timebase?
[20:46:01 CEST] <JEEB> at that point you rescale the pts of the AVFrame to the lavc context's time base
[20:46:16 CEST] <JEEB> Cracki: in case of the AVPacket function it scales all time related things
[20:46:20 CEST] <JEEB> pts, dts, duration
[20:46:24 CEST] <Cracki> good
[20:46:38 CEST] <JEEB> basically when encoding you set two time bases
[20:46:45 CEST] <JEEB> the encoder and the streams'
[20:47:01 CEST] <JEEB> (well there can be more than two of them if you encode multiple streams but you get the idea hopefully)
[20:47:11 CEST] <JEEB> but two places for each stream you encode & mux :P
[20:47:30 CEST] <Cracki> encoder/stream (almost) always come in pairs?
[20:47:38 CEST] <Cracki> uh codec context
[20:47:51 CEST] <JEEB> well if your idea is to encode something and then mux it, you will have those two
[20:47:56 CEST] <Cracki> ic
[20:48:15 CEST] <JEEB> then if you need filtering the filter chain input and output has time bases as well, but in your case it sounds like you're not filtering
[20:48:23 CEST] <Cracki> aye
[20:48:34 CEST] <JEEB> so you have some data, and some timestamp for that data
[20:48:52 CEST] <JEEB> what you make sure is you feed the lavc encoder context the AVFrames with values on the time base that you set the encoder to
[20:49:13 CEST] <Cracki> encoder = avcodeccontext?
[20:49:20 CEST] <JEEB> yes, that's why I keep noting lavc
[20:49:23 CEST] <Cracki> k
[20:49:30 CEST] <JEEB> l(ib)avc(odec)
[20:49:35 CEST] <JEEB> as opposed to lavf for libavformat
[20:49:42 CEST] <JEEB> or lavfi for filtering
[20:49:46 CEST] <JEEB> (libavfilter)
[20:50:05 CEST] <JEEB> and then after you receive AVPackets from the encoder lavc context, you just rescale the AVPackets' values to the output stream's time base
[20:50:11 CEST] <JEEB> and you should be golden
[20:50:26 CEST] <Cracki> aha! I rescale the packets' time fields
[20:50:35 CEST] <JEEB> yes, there's a function for that
[20:50:47 CEST] <Cracki> thanks for this explanation.
[20:50:54 CEST] <JEEB> so you rescale from the lavc context time base to the lavf context's AVStream's time base
[20:51:15 CEST] <Cracki> wasn't sure (on) what the function precisely rescales
[20:51:31 CEST] <JEEB> I think it's pts,dts,duration
[20:51:37 CEST] <JEEB> since those relate to time
[20:52:43 CEST] <JEEB> basically the AVPacket taking rescale function is supposed to handle all time related values for you between time bases so you don't have to care
[20:52:51 CEST] <JEEB> anyways, I need to finish cooking :P
[20:53:00 CEST] <JEEB> it kind of got paused when I started chatting here
[20:53:31 CEST] <durandal11707> JEEB forgets real life events while chatting on IRC
[20:53:33 CEST] <Cracki> k lemme put it in my own words. I create a codec context with a time base. avframes get pts and such in that timebase. I create a stream for that codec context. the stream might get a different timebase. the codec context encodes an avframe into packets, and the packets initially have codec context timebase. I need to translate these times into *stream* timebase using that helper function.
[20:53:49 CEST] <Cracki> bon appetit
[20:54:29 CEST] <JEEB> yup
[20:54:33 CEST] <JEEB> also this was the function https://svn.ffmpeg.org/doxygen/trunk/group__lavc__packet.html#gae5c86e4d93f6e7aa62ef2c60763ea67e
[20:54:38 CEST] <durandal11707> aren't you using the ffmpeg api via python? how does that even work?
[20:54:38 CEST] <JEEB> av_packet_rescale_ts
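A sketch of the encode-side half of this in C (enc, st, ofmt, frame and pkt are assumed to be set up already; frame->pts must already be on enc->time_base):

    avcodec_send_frame(enc, frame);
    while (avcodec_receive_packet(enc, pkt) >= 0) {
        /* rescale pts/dts/duration from the encoder's time base
         * to the output AVStream's time base */
        av_packet_rescale_ts(pkt, enc->time_base, st->time_base);
        pkt->stream_index = st->index;
        av_interleaved_write_frame(ofmt, pkt);
    }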
[20:55:26 CEST] <JEEB> Cracki: basically if you were filtering or decoding before, your AVFrames would be in the time base of either the filter chain output or the decoder lavc context. in your case your input is generated by you
[20:55:40 CEST] <Cracki> appears to be an OOP wrapper around it. libav* are classes already so it's not that much of a stretch. https://github.com/mikeboers/PyAV
[20:55:42 CEST] <JEEB> so you should know exactly the times of your input AVFrames when you generate them
[20:56:09 CEST] <Cracki> that is correct
[20:56:19 CEST] <JEEB> your line just sounded like newly created AVFrames by default would be in the lavc context's time base. they will not be.
[20:56:32 CEST] <JEEB> since AVFrames by themselves just lack any time base and the timestamps are by default NOPTS
[20:56:35 CEST] <JEEB> or so
[20:56:51 CEST] <JEEB> so you just make sure your timestamps are on the correct time base you set for the lavc encoder context
[20:57:04 CEST] <JEEB> before feeding that created data into the encoder
[20:57:09 CEST] <JEEB> I hope that's clear enough
[20:57:51 CEST] <Cracki> ah! so I could just create avframes with no time information at all, pass that through the codec context, and set pts and such (in stream timebase) on the resulting packets?
[20:57:57 CEST] <JEEB> no
[20:57:59 CEST] <JEEB> please don't do that
[20:58:02 CEST] <Cracki> heh good
[20:58:07 CEST] <JEEB> ok, so my explanation was shit
[20:58:36 CEST] <Cracki> I understand avframes have no timebase, so their time values are interpreted depending on what uses them
[20:58:42 CEST] <JEEB> yes
[20:59:08 CEST] <JEEB> and you are generating the input, so you just make sure that the PTS of your AVFrames is on the lavc encoder context's time base
[20:59:15 CEST] <Cracki> so what comes out of a codec context is AVPackets, which do have a timebase, and it'll be the timebase of the codec context
[20:59:28 CEST] <JEEB> no, the AVPackets don't have a time base either
[20:59:34 CEST] <Cracki> ah good
[20:59:39 CEST] <Cracki> it's starting to make sense
[20:59:49 CEST] <durandal11707> water is boiling, kitchen is overheating already
[20:59:51 CEST] <JEEB> basically their timestamp etc values depend on what created them
[21:00:08 CEST] <JEEB> so since you set time base on the encoder, the created AVPackets will be on that time base
[21:00:10 CEST] <Cracki> can I assume that the packets for a frame get the same PTS, but DTS will be determined by codec/context?
[21:00:55 CEST] <JEEB> for audio you have encoder delay so the packets can start with a negative timestamp
[21:01:14 CEST] <JEEB> so you have like 1024 or whatever samples of encoder initialization stuff first
[21:01:27 CEST] <JEEB> which does decode into stuff, but should be marked as pre-zero
[21:01:35 CEST] <JEEB> so the PTS is not always exactly the same thing
[21:01:38 CEST] <Cracki> and not played back? interesting
[21:01:54 CEST] <JEEB> it is often played back, which is where the negative timestamps also come in handy
[21:01:58 CEST] <Cracki> ic
[21:01:59 CEST] <JEEB> because it keeps the A/V sync
[21:02:11 CEST] <JEEB> since the zero point is the start of the actual encoded audio as opposed to encoder delay
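As a concrete example of that delay: AAC typically has 1024 samples of priming, so at 48 kHz the first packet starts at

    pts = -1024 / 48000 s ~= -21.3 ms    (i.e. -1024 on a 1/48000 time base)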
[21:08:47 CEST] <Cracki> some automagic picks up on my first frame not having pts 0 and uses the first frame's pts as the stream's "start", even though I might want the first frame to be presented at 17 seconds... how does that happen, and is it even *possible* to have the video black/blank at the beginning and see the first frame only after a while?
[21:09:54 CEST] <Cracki> avstream::start_time appears to determine this. I'll try that.
[21:10:27 CEST] <Cracki> oh, docs warn me against setting anything else there
[21:12:35 CEST] <Cracki> it's probably best to create an explicit first frame...
[21:54:55 CEST] <A4L> why are movies usually stored in avi formats when mp4 is more storage efficient (movies on TPB etc...)?
[21:57:15 CEST] <der_richter> avi and mp4 are containers and those don't directly relate to storage efficiency/quality etc
[21:57:35 CEST] <der_richter> it's rather the formats/codecs used for the streams packed into those containers
[21:59:40 CEST] <A4L> so if I just `ffmpeg input.avi output.mp4` it should not make any filesize difference, and I should instead look into H.264 codec settings if I want to make compression improvements?
[22:00:01 CEST] <furq> you should probably just be better at piracy
[22:00:04 CEST] <furq> nobody uses avi any more
[22:00:24 CEST] <der_richter> there might be a small difference, which is negligible
[22:00:32 CEST] <Ariyasu> lol
[22:00:52 CEST] <Ariyasu> ren *.avi *.mp4
[22:01:03 CEST] <der_richter> if it's about video, yeah look into a better codec, codec settings or encoder
[22:01:52 CEST] <Cracki> and use -c copy to be sure it's copying the streams, not transcoding them
[22:02:24 CEST] <A4L> yeah, thanks, you are right, I got a smaller filesize just because CRF was 23 by default... :facepalm: yeah, I have read the H.264 docs and I think I get it now. see ya
[22:03:06 CEST] <A4L> Cracki: if I use -c copy I can't modify crf, preset and tune options, right?
[22:03:16 CEST] <Cracki> that's the point, it copies the streams
[22:04:37 CEST] <A4L> yeah, but I am trying to compress the video, so I will use -crf 23 and -preset slow and -acodec copy (only audio) and not -c copy (a&v), right?
[22:04:47 CEST] <Cracki> right
[22:04:55 CEST] <A4L> k tnx
[22:05:50 CEST] <Ariyasu> -vcodec libx264
[22:05:55 CEST] <Ariyasu> don't forget that bit
[22:06:11 CEST] <Ariyasu> you might want to use a higher crf value also
[22:06:27 CEST] <Ariyasu> like 21ish i guess it depends on type of content
[22:06:44 CEST] <Ariyasu> if it's scripted you might want to use like 17 or 18
[22:11:29 CEST] <A4L> ok
[22:14:33 CEST] <A4L> yeah, i will use 18, it should be "visually lossless"...
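Putting the pieces from this exchange together, the command being sketched would be roughly (filenames hypothetical):

    ffmpeg -i input.avi -c:v libx264 -preset slow -crf 18 -c:a copy output.mp4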
[22:15:56 CEST] <upgreydd> JEEB: still fighting xD Still online?
[22:28:47 CEST] <A4L> conclusion: after running it for 5 movies at veryslow, 18, film, libx264, I figured out that my films are already very optimised and compressed and further compression would be useless. so yeah, TPB-ish movies are already at best quality/filesize...
[22:31:24 CEST] <Ariyasu> no
[22:31:44 CEST] <Ariyasu> the best quality == bd/dvd source mpeg2
[22:31:57 CEST] <Ariyasu> best filesize = debatable
[22:32:28 CEST] <upgreydd> OK, looking for an option to get the x264-params from an existing file because I need to encode a second one that's very similar. Is there some analyzer available? For example, all the headers looked the same in both files but one had the option "aud=1" and it was noticeable only in hex :/ any advice please?
[22:35:21 CEST] <frubsen> hey, anyone here know much about ffmpeg and decklink?
[23:57:05 CEST] <aldenp> hi, so I'm getting a bunch of ALSA `underrun occured` errors when playing back with ffplay
[00:00:00 CEST] --- Mon May 27 2019