[Ffmpeg-devel-irc] ffmpeg-devel.log.20180225
burek
burek021 at gmail.com
Mon Feb 26 03:05:03 EET 2018
[00:05:23 CET] <jkqxz> atomnuker: Works for me? (Or at least, the output looks very similar to the input.)
[00:59:10 CET] <gagandeep> guys i am reading a tutorial on how to write a video player in 1k lines so as to understand the workings of the ffmpeg libraries but i can't properly include files from the library
[01:01:05 CET] <JEEB> this is for #ffmpeg
[01:01:13 CET] <JEEB> that is for usage of the libraries
[01:01:18 CET] <JEEB> this is for development OF the libraries
[01:02:59 CET] <gagandeep> yeah but i have to see how the libraries work with the external video containers, or is there another way of understanding, as i have just begun learning video and audio compression
[01:03:30 CET] <JEEB> if you want help with the usage of FFmpeg's libraries, let's move this discussion to #ffmpeg
[01:03:39 CET] <gagandeep> k
[01:41:06 CET] <BtbN> philipl, I'm at an event right now so kinda hard to do stuff, but just link it again, I will look at it eventually.
[01:50:16 CET] <philipl> BtbN: https://github.com/philipl/ffmpeg/tree/yuv444p10
[02:00:06 CET] <jamrial> i don't think this will be well received
[02:00:14 CET] <jamrial> the new pixfmt, i mean
[02:03:15 CET] <philipl> It was not badly received last time.
[02:03:31 CET] <philipl> The alternative is to not map it to anything and just ignore it.
[02:03:48 CET] <philipl> but the current situation is broken
[02:04:09 CET] <atomnuker> jkqxz: odd, seems like it works on decoding a video but not on frames imported from drm
[02:04:25 CET] <atomnuker> (though yeah, I can't see much of a change in output)
[02:04:44 CET] <jkqxz> Maybe it only works on Y-tiled surfaces.
[02:05:45 CET] <jkqxz> Can you make a Y-tiled surface in whatever your external thing is? (gbm_surface_create_with_modifiers?)
[02:07:33 CET] <atomnuker> dunno what Y-tiling is, I was just curious whether denoise_vaapi would work if I randomly put it in when encoding from the screen
[02:08:29 CET] <BtbN> philipl, it's definitely fine with me, but I don't think I'm the one to convince about that pix_fmt. I'm in favor of adding it though, as it clearly has real world usage.
[02:08:56 CET] <jkqxz> philipl: I think it's a reasonable answer. But, what is going to create it? Are you going to add swscale support as well?
[02:09:00 CET] <BtbN> the nvenc part is ok, once the pix_fmt is in
[02:09:41 CET] <jkqxz> atomnuker: It will likely work if you pass it through a do-nothing scale_vaapi instance first.
[02:09:52 CET] <atomnuker> I did
[02:12:25 CET] <jkqxz> Works for me in that case. ('hwmap=derive_device=vaapi,denoise_vaapi,scale_vaapi=format=nv12' gives black, 'hwmap=derive_device=vaapi,scale_vaapi=format=nv12,denoise_vaapi' works.)
[02:12:40 CET] <jkqxz> Well, "works". Doesn't obviously do anything, but the image is right.
[02:12:59 CET] <jkqxz> (With kmsgrab, which is I assume what you're using there.)
[02:22:18 CET] <atomnuker> yep, works, seems like adding :format=yuv420p or :format=nv12 fixes it
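For reference, one hedged sketch of the working filter order described above as a full command line (assuming kmsgrab capture from the default DRM device and a VAAPI H.264 encode; the device path, encoder and output name are placeholders, not what atomnuker actually ran):

    ffmpeg -device /dev/dri/card0 -f kmsgrab -i - \
        -vf 'hwmap=derive_device=vaapi,scale_vaapi=format=nv12,denoise_vaapi' \
        -c:v h264_vaapi output.mp4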
[02:44:09 CET] <philipl> jkqxz: I would prefer to lazily evaluate the swscale part - I think the set of scenarios where you'd actually use this is very small - (hevc main 10 4:4:4...).
[02:44:31 CET] <philipl> But if people insisted on swscale support to add the pix_fmt, I guess i'd do it.
[03:20:41 CET] <philipl> annoyingly the generic conversion logic in swscale doesn't handle the bit shifting, even though the shift is a pixdesc property
[06:35:10 CET] <philipl> BtbN: Things I learned today: 1) swscale_unscaled's generic path doesn't know about the 'shift' field of the pix_desc and so does the wrong thing for this case. I roughly know how to fix that. 2) The scaled path fails for different reasons that I haven't nailed down yet. 3) swscale doesn't know how to dither to 10bits so one of the motivations of this exercise goes away - it's going to truncate to
[06:35:16 CET] <philipl> 10bits regardless of anything else.
[10:49:35 CET] <nevcairiel> philipl: adding a new format that in its commit message says "this is basically useless, so don't worry about it" feels like you might as well not add it =p
[16:51:44 CET] <philipl> nevcairiel: tis true.
[16:52:58 CET] <philipl> BtbN: So, another thought. Given that I now know we don't have dithering to 10 bits, and that you wouldn't want swscale in a hardware transcode pipeline, should we just add P016 to the set of supported formats for nvenc? and leave 444p16 there? It avoids the incorrect format preference problem and then everything should work as well as it can.
[18:17:33 CET] <atomnuker> would anyone like to help in adding support for meson?
[18:17:35 CET] <atomnuker> https://github.com/atomnuker/FFmpeg/tree/exp_meson
[18:17:44 CET] <JEEB> ooh
[18:17:47 CET] <atomnuker> it can currently compile a subset of libavutil
[18:17:49 CET] <JEEB> someone *actually* made a PoC?
[18:17:57 CET] <JEEB> I was wondering about that, but FFmpeg is lolhueg
[18:18:20 CET] <atomnuker> welp so is mesa but they got meson support
[18:18:30 CET] <atomnuker> the pain starts with libavcodec though
[18:18:35 CET] <JEEB> :D
[18:18:35 CET] <atomnuker> libavutil is small
[18:18:42 CET] <JEEB> yea, libavutil is the first step thing
[18:18:52 CET] <philipl> BtbN: whelp - even that isn't a cheap fix as we don't know how to transform to P016 today.
[18:18:56 CET] <JEEB> all those deps and options and things being able to be enabled
[18:19:08 CET] <JEEB> or disabled
[18:19:27 CET] <atomnuker> jamrial / nevcairiel: any of you interested in having meson?
[18:20:21 CET] <atomnuker> currently the only thing stopping me from compiling the entire lavu is bprint.c
[18:20:56 CET] <atomnuker> for some reason it thinks strftime isn't defined, but time.h is included right there and it's pretty well defined there
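(Not the contents of the linked branch, just a rough sketch of what a meson.build covering a libavutil subset might look like; the source list and options are hypothetical placeholders:)

    # hypothetical top-level meson.build
    project('ffmpeg', 'c')

    # a tiny, made-up subset of libavutil sources; the real list is far longer
    # and most of it is gated on configure-style options
    libavutil_sources = files(
        'libavutil/avstring.c',
        'libavutil/bprint.c',
        'libavutil/mem.c',
    )

    libavutil = static_library('avutil', libavutil_sources,
        include_directories: include_directories('.'))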
[18:37:35 CET] <durandal_1707> atomnuker: so you picked another project which you're not gonna complete?
[18:42:51 CET] <atomnuker> durandal_1707: have you seen the monstrosity that is hwcontext_vulkan and vulkan_common?
[18:43:35 CET] <atomnuker> its not easy to think of a way to abstract all the verbosity
[18:44:03 CET] <durandal_1707> just post it, finish later
[18:45:04 CET] <atomnuker> I'll post it once I have a sharpen compute shader working, and for that I need to add some more abstraction for a compute pipeline in vulkan_common and rename the chromaticaberration filter
[18:45:33 CET] <atomnuker> we should have a lavfi template for filters
[21:57:09 CET] <jkqxz> jamrial: Have you seen <http://ffmpeg.org/pipermail/ffmpeg-user/2018-February/038989.html>?
[21:58:28 CET] <jkqxz> I'm kindof ambivalent about fixing it. The behaviour of the encoder there is technically invalid in that it didn't create extradata, but there isn't any way of doing it (because the implementation is incomplete and doesn't support getting the headers), and it does provide the headers inline so it works for most use-cases.
[21:59:10 CET] <jkqxz> Still, it is kindof a regression, even if the files it previously made were pretty dubious.
[22:09:09 CET] <jamrial> jkqxz: wouldn't the correct fix here be making h264_vaapi encoder fill and propagate extradata instead if AV_CODEC_FLAG_GLOBAL_HEADER is set?
[22:10:49 CET] <jkqxz> It does on Intel. AMD doesn't support packed headers, though, so you can't.
[22:11:01 CET] <jkqxz> (That's why it's a problem only on AMD.)
[22:19:33 CET] <nevcairiel> it would generate a technically invalid mp4 file, so failing is not wrong?
[22:20:52 CET] <nevcairiel> although a more specific error might be nice
[22:21:06 CET] <jamrial> yeah, both mp4 and matroska files muxed like this would be invalid, even if playable by lax players/parsers since the headers would be available inband
[22:21:15 CET] <jkqxz> Indeed. You can write a raw stream and remux it later.
[22:21:19 CET] <jamrial> agree with the need for an error. there's none right now
[22:21:34 CET] <BtbN> can't you put in a bsf to extract the extradata?
[22:21:45 CET] <jamrial> possibly
[22:22:23 CET] <jamrial> extract_extradata, but it sends the extracted headers as packet side data, and i don't know if the muxers actually bother to look at that for h264 right now
[22:23:03 CET] <jkqxz> IIRC muxer creation is still before you have the first packet anyway?
[22:23:49 CET] <jamrial> yes, but both matroskaenc and movenc support getting extradata from the first packet and rewrite the output file header if needed
[22:23:56 CET] <jamrial> depending on codec
[22:25:00 CET] <jamrial> at least in matroska, with flac the extradata has a fixed size, so seek, rewrite header, seek back
[22:25:27 CET] <jamrial> with aac that's not the case, so we reserve the max amount of bytes extradata might need
[22:25:54 CET] <nevcairiel> aac max size isnt that big, for h264 the size could vary widely
[22:25:59 CET] <jamrial> we could do the same for h264, but i don't think cluttering muxers because one encoder doesn't behave correctly is a good idea
[22:28:39 CET] <jamrial> also, max sps is 32 and max pps is 256, so fuck that. the amount of reserved space would be huge
[22:36:32 CET] <jkqxz> You currently get a "some packed headers are not available" warning on AMD, but it's not very clear that that will break muxing to some formats.
[22:37:09 CET] <atomnuker> can you mux to nut?
[22:38:48 CET] <jkqxz> Seems to work currently.
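A sketch of the remux route jkqxz mentions above, assuming an AMD VAAPI encode that can't produce global headers (device path, filter chain and file names are placeholders): write a raw Annex B stream first, then remux it so the muxer can pick the SPS/PPS up from the stream:

    ffmpeg -vaapi_device /dev/dri/renderD128 -i input.mkv \
        -vf 'format=nv12,hwupload' -c:v h264_vaapi -f h264 out.h264
    ffmpeg -i out.h264 -c copy out.mp4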
[23:01:31 CET] <atomnuker> fuck, libavutil/time.h gets included instead of the system time.h even when a c file uses #include <time.h>, if the directory is added with -I
[23:04:30 CET] <klaxa> i just saw someone apply for the dicom project, i'd like to do that as well, should i just also apply? :I
[23:04:30 CET] <BtbN> why do we even have a header whose name collides with common libc headers?
[23:04:42 CET] <klaxa> ah, gsoc
[23:04:59 CET] <nevcairiel> BtbN: because usually thats not a problem? :p
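The clash atomnuker ran into presumably looks like this (a hypothetical illustration, assuming the meson include path points at the libavutil directory itself): -I directories are searched before the system include directories, so the project's own header wins.

    /* cc -Ilibavutil -c somefile.c   (hypothetical compile line) */
    #include <time.h>   /* resolves to libavutil/time.h, which declares no strftime() */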
[00:00:00 CET] --- Mon Feb 26 2018