[Ffmpeg-devel-irc] ffmpeg-devel.log.20190305
burek
burek021 at gmail.com
Wed Mar 6 03:05:04 EET 2019
[00:00:02 CET] <jkqxz> lrusak_: Sure. Use the external allocation route for hwcontext, and make your decoder support the hw_frames_ctx interface.
[00:02:20 CET] <jkqxz> You can kindof do it with VAAPI now, but it's a pain to make objects with the right constraints (tiling, alignment etc.) to actually decode to.
[00:37:48 CET] <lrusak_> jkqxz: by external allocation route do you mean using av_buffer_pool_init2 to allow specifying our own allocation function?
[00:43:10 CET] <vel0city> lrusak_: o/ whatchu working on? :)
[00:43:48 CET] <jkqxz> Yes. Though if you never allocate directly from it then it's not actually necessary to have an allocation function - for a decoder that means supplying a get_buffer2 callback which will return one of your frames.
[00:46:08 CET] <lrusak_> ah ok, thank you. I knew about the get_buffer2 callback but wasn't sure if that was the best method or not. I guess it allows us to allocate our own buffers in our own buffer pool and provide them to ffmpeg directly
[00:46:57 CET] <lrusak_> vel0city: zero-copy SW decoding :) or something like that to improve our SW decoding performance
[00:48:23 CET] <jkqxz> If you're SW decoding then you don't necessarily need to consider the hwcontext stuff at all. If your code can supply the software-mapped buffers then you can give them to get_buffer2 like any other frame.
[00:48:57 CET] <jkqxz> (Though it may be useful to have it anyway if you're using it in other cases, and it can do the mapping for non-weird cases.)
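The get_buffer2 route jkqxz describes could look roughly like this. A minimal sketch only: `my_pool_acquire`/`my_pool_release` and the opaque pool pointer are hypothetical stand-ins for the caller's own allocator, and a real callback would also need to honour the decoder's alignment and padding requirements.

```c
#include <libavcodec/avcodec.h>
#include <libavutil/imgutils.h>

static void my_buffer_free(void *opaque, uint8_t *data)
{
    my_pool_release(opaque, data); /* hypothetical: hand buffer back to our pool */
}

static int my_get_buffer2(AVCodecContext *avctx, AVFrame *frame, int flags)
{
    int size = av_image_get_buffer_size(frame->format, frame->width,
                                        frame->height, 32);
    /* hypothetical: grab a pre-allocated, suitably aligned buffer */
    uint8_t *data = my_pool_acquire(avctx->opaque, size);
    if (!data) /* pool exhausted: fall back to the default allocator */
        return avcodec_default_get_buffer2(avctx, frame, flags);

    /* wrap our buffer so refcounting releases it back to the pool */
    frame->buf[0] = av_buffer_create(data, size, my_buffer_free,
                                     avctx->opaque, 0);
    if (!frame->buf[0])
        return AVERROR(ENOMEM);

    return av_image_fill_arrays(frame->data, frame->linesize, data,
                                frame->format, frame->width,
                                frame->height, 32);
}

/* installed before avcodec_open2() with:
 *     avctx->get_buffer2 = my_get_buffer2;
 *     avctx->opaque      = my_pool;          */
```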
[00:49:01 CET] <vel0city> lrusak_: neat
[00:51:26 CET] <lrusak_> jkqxz: gotcha, thanks for the pointers. I'll give that a try and come back if I get stuck.
[00:53:39 CET] <amaim_> Hello there, I'm trying to get a grip on the existing working code to be able to do the "Qualification Task", but I was wondering how I can know which files/headers/structures may come in handy from such huge libraries? I'm sure the question is trivial but I'm an extreme noob and new to open source, thanks for your time.
[00:55:47 CET] <jkqxz> amaim_: It very much depends what areas you are interested in.
[00:56:34 CET] <amaim_> I'm currently checking the HEIF SUPPORT project, I need to write a trivial (!) heif muxer as the Qualification Task
[00:56:53 CET] <jkqxz> The libraries are quite modular, so a place to start would be to look at an example of the type of component you are interested in (e.g. a decoder) and understand how that works.
[00:57:24 CET] <amaim_> I've been trying to understand for example the bmp related code since I've some experience with it, but the amount of custom structures and functions used is huge :D
[00:57:38 CET] <amaim_> and each time I go to one of the included files I find more included files
[00:58:31 CET] <amaim_> Sorry, I'm sure the amount of noobness in my questions is irritating but I'm totally new to this :D
[00:59:21 CET] <JEEB> so the mess that is HEIF is ISOBMFF-based
[00:59:30 CET] <JEEB> ISOBMFF muxing is handled by libavformat/movenc.c
[00:59:48 CET] <jkqxz> I'm not quite sure what that task means by a "trivial (!) heif muxer". Maybe one which takes a single HEVC intra frame of a fixed size and puts it into a file, with most of it hard-coded?
[01:00:36 CET] <jkqxz> Though the standards in this area are quite complex, so there is a lot of boilerplate to get through there.
[01:01:19 CET] <amaim_> @JEEB Thanks for the information.
[01:02:09 CET] <amaim_> @jkqxz thanks, I don't know either, I think I will have to speak to the mentor to know exactly, thanks anyway
[01:04:16 CET] <JEEB> I hope you have seen the comments in the initial patch for reading HEIF which was never merged
[01:04:44 CET] <JEEB> although of course in theory muxers are indeed simpler, since they only need to support what they consider necessary
[01:05:16 CET] <JEEB> granted, with HEIF nothing tends to be too simple :P
[01:05:18 CET] <JEEB> https://ffmpeg.org/pipermail/ffmpeg-devel/2017-August/215003.html
[01:05:28 CET] <JEEB> this is the initial HEIF patch set for reading, and the rant
[01:05:43 CET] <JEEB> amaim_: also I hope you have the HEIF specification
[01:06:03 CET] <amaim_> Well as a total beginner, should I abandon this? you all seem to say it's complicated xD
[01:06:21 CET] <amaim_> Should I pick another format? or ?
[01:07:04 CET] <JEEB> I don't mean you should abandon it, I'm just noting that it will not be a walk in the park
[01:08:00 CET] <JEEB> anyways, for specifications it seems like HEIF spec is available for free
[01:08:02 CET] <JEEB> https://standards.iso.org/ittf/PubliclyAvailableStandards/index.html
[01:08:14 CET] <JEEB> grab 14496-12 and 23008-12
[01:08:28 CET] <JEEB> 14496-12 is ISOBMFF (which colloquially gets called "mp4")
[01:08:40 CET] <JEEB> which is based on the Apple MOV format originally
[01:08:51 CET] <JEEB> 23008-12 then is HEIF, which is based on ISOBMFF
[01:10:26 CET] <amaim_> Currently reading the HEIF initial patch and the guy seems mad about the format xD
[01:10:35 CET] <amaim_> Thank you so much JEEB! really
[01:10:43 CET] <JEEB> yes, since it seems to be such an overcomplicated mess
[01:10:48 CET] <amaim_> What about DICOM? would you consider it easier?
[01:10:56 CET] <JEEB> I don't even know what that is :P
[01:11:21 CET] <amaim_> Oh heheh
[01:11:24 CET] <amaim_> Alright great
[01:12:45 CET] <JEEB> also I would rather look at the HEIF spec to see how complex a basic single image would be
[01:13:22 CET] <JEEB> movenc should already handle putting HEVC (the video format) into MP4
[01:13:58 CET] <JEEB> so you figure out what sort of MP4 based monstrosity HEIF is
[01:14:13 CET] <JEEB> and see how complex it feels like getting a basic thing flying
[01:14:30 CET] <JEEB> (and then you'd have to verify it against applications that take HEIF in, such as some Apple stuff)
[01:16:17 CET] <amaim_> Alright noted, I'll try it and hopefully I'll succeed
[01:16:18 CET] <JEEB> also I recommend you figure out what muxers are in FFmpeg world if you are going to poke them
[01:16:40 CET] <amaim_> Yes I've been trying to get a general info about that
[01:16:55 CET] <amaim_> Are there good educative examples you would recommend?
[01:17:56 CET] <JEEB> not sure how educative or good, but you can see the basics in movenc.c for example. it has X different muxers utilizing the same core.
[01:18:02 CET] <JEEB> scroll to the very end of the file
[01:18:11 CET] <JEEB> you will see definitions of AVOutputFormats
[01:18:26 CET] <JEEB> that is the base structure that defines an output format
[01:18:56 CET] <JEEB> then you have function pointers in it
[01:19:05 CET] <JEEB> like init, write header/packet
[01:19:11 CET] <JEEB> write trailer
[01:19:18 CET] <JEEB> and finally deinit
[01:19:28 CET] <JEEB> which of these a format implements depends on the format
[01:20:11 CET] <JEEB> but basically then the API user will initialize a muxer, set up streams etc, write the header, and then write one or more packets
[01:20:51 CET] <JEEB> then after the API user is done with muxing, the trailer is written and finally the context can be closed :)
[01:21:01 CET] <JEEB> you can see how those function pointers match up with these things
[01:21:08 CET] <JEEB> the framework basically abstracts that
[01:22:18 CET] <JEEB> amaim_: for a much simpler example that only has "write me a header" and "write me a packet" there is libavformat/yuv4mpegenc.c (see how it doesn't have an initialization thing, footer thing, or deinit thing)
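For reference, the skeleton JEEB is describing boils down to something like this. This is a hypothetical "toy" muxer sketched after the shape of yuv4mpegenc.c (only write_header and write_packet wired up); a real muxer lives inside libavformat, uses its internal headers, and is registered in allformats.c.

```c
#include "avformat.h"  /* libavformat-internal header */

/* Called once after streams are set up: emit the file header. */
static int toy_write_header(AVFormatContext *s)
{
    avio_printf(s->pb, "TOYFORMAT W%d H%d\n",
                s->streams[0]->codecpar->width,
                s->streams[0]->codecpar->height);
    return 0;
}

/* Called once per packet: append the payload to the output. */
static int toy_write_packet(AVFormatContext *s, AVPacket *pkt)
{
    avio_write(s->pb, pkt->data, pkt->size);
    return 0;
}

/* The AVOutputFormat definition at the end of the file, as with the
 * ones at the bottom of movenc.c; unneeded callbacks (init, deinit,
 * write_trailer) are simply left unset. */
AVOutputFormat ff_toy_muxer = {
    .name         = "toy",
    .long_name    = "toy example muxer",
    .extensions   = "toy",
    .video_codec  = AV_CODEC_ID_RAWVIDEO,
    .write_header = toy_write_header,
    .write_packet = toy_write_packet,
};
```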
[01:23:34 CET] <amaim_> Thank you for the amazing details, I'll study every line you have said thoroughly, If I succeed know that you are the one who gave me the first push, thank you!
[01:24:22 CET] <JEEB> but yes, for HEIF you will either have to grow movenc.c, or you make a separate file which then utilizes helpers from movenc.c
[01:24:52 CET] <JEEB> but something like yuv4mpegenc might give you a moment to grasp how the interface works
[01:25:00 CET] <JEEB> anyways, night'o
[01:25:09 CET] <amaim_> I'll look at it, great!
[01:25:12 CET] <amaim_> Good night :D
[01:41:44 CET] <diamondman> I have found myself trying to draw text/shapes to non standard displays with non square pixels such as https://github.com/diamondman/att26a or yarn in a scarf. If I just use standard rasterization, text looks stretched on these bizarre mediums. I know this is not explicitly video related, and therefore not the right place to ask, but since video deals with these types of transformations, I was hoping someone may have
[01:41:44 CET] <diamondman> a pointer on where I should start. I would greatly appreciate any suggestions.
[01:48:54 CET] <J_Darnley> Drawing them how? Yourself or with some ffmpeg features?
[01:50:34 CET] <J_Darnley> https://raw.githubusercontent.com/diamondman/att26a/master/docs/images/AT%26T_26A_FACE.jpg
[01:50:39 CET] <J_Darnley> I will guess yourself
[01:51:46 CET] <J_Darnley> There are some terms you can search for: aspect ratio, anamorphic
[01:52:33 CET] <diamondman> J_Darnley: I guess I meant generally... like somehow taking pixel data from any output that renders to square pixels (like ffmpeg or freetype) and getting them to look right in these other forms. It is very likely I simply don't know what I don't know here yet.
[01:52:53 CET] <diamondman> J_Darnley: Thanks, I will look up those terms :)
[01:53:39 CET] <J_Darnley> You probably need to define the shape of your pixels and apply the reverse transform for that.
[01:54:06 CET] <J_Darnley> basic example, pixel aspect ratio of 2:1
[01:54:22 CET] <J_Darnley> each pixel is twice as wide as it is tall
[01:54:56 CET] <J_Darnley> so that would widen the image by a factor of 2
[01:55:36 CET] <J_Darnley> So to get the right shape you need to reduce width to half
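J_Darnley's example reduces to a one-line correction: to draw square-pixel content on a display whose pixels are par_num:par_den (e.g. 2:1, twice as wide as tall), pre-scale the width by the inverse ratio so the result looks undistorted. `corrected_width` is a hypothetical helper name for illustration, not an FFmpeg API.

```c
/* Width to render at so that par_num:par_den pixels display the image
 * with its intended proportions. */
static int corrected_width(int src_width, int par_num, int par_den)
{
    /* 2:1 pixels double the displayed width, so halve the source width. */
    return src_width * par_den / par_num;
}

/* e.g. 640 square-pixel columns drawn on 2:1 pixels need only
 * corrected_width(640, 2, 1) == 320 columns. */
```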
[03:11:01 CET] <diamondman> J_Darnley: Thanks for input. I will see what I can do with that :)
[12:58:13 CET] <j-b> Yo
[18:57:31 CET] <kurosu> jamrial: for anything win32, do you use the mingw32 shell, or mingw64 + cross-compiler ?
[18:57:57 CET] <kurosu> I no longer have the will to build gcc & co myself to be multilib, but that used to be my preferred way
[18:59:20 CET] <kurosu> actually, the next question is dav1d-oriented, so...
[19:01:26 CET] <nevcairiel> for myself, technically i "cross-compile" for 64-bit windows, as in the win64 toolchain is prefixed, while my win32 toolchain is not prefixed; both use mingw-w64 of course. Judging from the terms you use, that's probably from the toolchains that msys2 packages for you?
[19:01:45 CET] <jamrial> kurosu: i use the msys2's mingw32 shell (msys2_shell.cmd -mingw32)
[19:02:05 CET] <kurosu> ok, thanks, that was what I started to do
[19:02:13 CET] <jamrial> and the mingw32 packages for gcc, binutils and such
[19:02:18 CET] <kurosu> I just find it cumbersome to have basically a duplicate system
[19:02:37 CET] <kurosu> while I just do validating using mingw32, no actual dev or benchmarking
[19:02:53 CET] <jamrial> for nasm and such the msys package can be shared between the two shells
[19:04:27 CET] <jamrial> nevcairiel: yeah, he's setting up msys2 and its toolchains are cleanly split
[19:05:35 CET] <nevcairiel> i never liked their setup, i like being able to use both from the same shell, even if i have to prefix my 64-bit tools
[19:06:05 CET] <nevcairiel> although I always was too lazy to build a proper multi-lib compiler
[19:06:13 CET] <nevcairiel> hence, prefixing =p
[20:17:17 CET] <durandal_1707> lol, doom9 shares cracks
[21:00:53 CET] <durandal_1707> why nobody wants to improve my patch/work on scalarproduct_float? that is very sad and devastating to know
[21:07:30 CET] <durandal_1707> everybody is on #dav1d
[21:09:20 CET] <jamrial> durandal_1707: we can't really add avx for it until the next bump, to at least increase the required alignment
[21:09:26 CET] <jamrial> unless we make it branchy
[21:10:03 CET] <jamrial> checking both alignment and element count
[21:10:38 CET] <durandal_1707> alignment is no longer relevant
[21:10:49 CET] <durandal_1707> we should just remove that restriction
[21:10:56 CET] <nevcairiel> didn't someone tell me that avx can deal with unaligned input anyway
[21:12:43 CET] <jamrial> yes, except on instructions that specify alignment, like movdqa/movaps
[21:39:23 CET] <BBB> aligned is still preferred
[21:45:47 CET] <kurosu> there's minimal (or at least way way smaller) penalty to unaligned data nowadays
[21:46:53 CET] <durandal_1707> what about adding scalarproduct2_float?
[21:49:45 CET] <kurosu> nevcairiel: missed your comment re mingw64, but yes
[21:50:24 CET] <kurosu> I don't care like you might do about distributable packages, so I just needed a compiler, not a load of various libs
[21:51:10 CET] <kurosu> in old times, I was just adding -m32 to cflags (ie on the configure command line), and was done with this part
[21:51:48 CET] <kurosu> now, I probably have half a gig of additional build chain & assorted tools
[21:52:15 CET] <kurosu> not the end of the world, even on a ssd
[21:57:24 CET] <nevcairiel> Luckily the number of libraries etc are pretty low
[21:57:33 CET] <nevcairiel> basically zlib and some pthreads variant
[22:20:16 CET] <atomnuker> kurosu: somewhat, sometimes its hardly measurable
[22:20:56 CET] <atomnuker> but nowadays timing of movu == mova for aligned data
[23:05:43 CET] <lynne> hi, is there any way to disable v4l2_m2m? --disable-everything --disable-autodetect --disable-hwaccels still leaves it enabled
[23:07:37 CET] <kierank> jamrial: iirc there is avx for some functions and they just fail if you use your own allocator
[23:07:49 CET] <kierank> So it was always a problem
[23:07:59 CET] <kierank> I saw this like 5 years ago
[00:00:00 CET] --- Wed Mar 6 2019