[Ffmpeg-devel-irc] ffmpeg-devel.log.20180217
burek
burek021 at gmail.com
Sun Feb 18 03:05:03 EET 2018
[00:19:39 CET] <SortaCore> k no patch then
[00:24:25 CET] <atomnuker> SortaCore: git format-patch -1 to export your last commit as a patch
[00:24:37 CET] <atomnuker> -N to export last N commits as patches
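The export workflow atomnuker describes can be sketched as follows (paths and counts are illustrative):

```shell
# Export the last commit as a numbered patch file (0001-*.patch)
git format-patch -1

# Export the last 3 commits as separate numbered patch files
git format-patch -3

# Write the patch files to a specific directory instead of the CWD
git format-patch -1 -o /tmp/patches
```

The resulting files are mbox-formatted and can be sent with `git send-email` or applied with `git am`.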
[00:27:13 CET] <JEEB> grmbl
[00:27:49 CET] <JEEB> I should probably send my patch for pkg-config'ification of the primary zlib check out
[00:28:01 CET] <SortaCore> danke atom
[00:28:15 CET] <JEEB> because I'm kind of getting tired of configure not finding my zlib from my cross-compilation prefix
[09:18:10 CET] <amit2020cs> Hi everyone, I am new to open source
[11:16:33 CET] <Guest2640> Hi actually I wanted to make a 2:1 filter in ffmpeg. I went through the writing filter documentation but I am a bit confused. For taking 2 inputs do I just need to add another element to AVFilterPad inputs array?
[11:21:24 CET] <durandal_170> Guest2640: see sidechaincompress filter or blend filter source code
[11:22:23 CET] <Guest2640> Thanks durandal_170
[12:31:10 CET] <Guest2640> How is a frame represented here? I understood width, height, linesize and padding, but what about colour images? Is there an inlink->depth or something like that?
[12:33:31 CET] <jkqxz> inlink->format is the format of the frames you will get. Set the formats you can handle in your query_formats function.
[12:41:16 CET] <Guest2640> Thanks got it. Can you tell me where can I find documentation for different types of formats I could use?
[12:41:17 CET] <jkqxz> libavutil/pixfmt.h
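To make the layout jkqxz points at concrete: a planar frame carries one data pointer and one stride per plane, and for a 4:2:0 format the chroma planes are half the width and height of the luma plane. The struct below is a simplified stand-in for AVFrame, not the real definition:

```c
#include <stdint.h>
#include <stdlib.h>

/* Simplified stand-in for AVFrame (not the real definition): one
 * buffer pointer and one stride per plane. */
typedef struct Frame {
    uint8_t *data[3];   /* plane pointers: Y, U, V */
    int linesize[3];    /* bytes per row per plane (may exceed width) */
    int width, height;
} Frame;

/* Allocate a YUV420P-style frame: chroma planes are w/2 x h/2. */
static Frame *alloc_yuv420p(int w, int h)
{
    Frame *f = calloc(1, sizeof(*f));
    f->width  = w;
    f->height = h;
    f->linesize[0] = w;       /* luma plane, full resolution */
    f->linesize[1] = w / 2;   /* chroma planes, half resolution */
    f->linesize[2] = w / 2;
    f->data[0] = malloc((size_t)f->linesize[0] * h);
    f->data[1] = malloc((size_t)f->linesize[1] * (h / 2));
    f->data[2] = malloc((size_t)f->linesize[2] * (h / 2));
    return f;
}
```

In a real filter, the strides come from the frame itself and are commonly padded past the visible width, so rows must always be walked via linesize.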
[16:07:40 CET] <cone-534> ffmpeg 03Michael Niedermayer 07master:ab6f571ef719: avutil/common: Fix integer overflow in av_clip_uint8_c() and av_clip_uint16_c()
[16:07:40 CET] <cone-534> ffmpeg 03Michael Niedermayer 07master:dd8351b1184b: avcodec/exr: Check remaining bits in last get code loop
[16:08:00 CET] <Nakul> Hi, can anyone help me
[16:08:26 CET] <Nakul> For getting started into GSoc?
[16:18:12 CET] <jkqxz> Nakul: What have you done so far? What areas are you interested in?
[16:29:39 CET] <Nakul> I am doing a B.Tech, in my 2nd semester. I know C, a little bit of git, and I am learning asm
[16:30:23 CET] <Nakul> I like coding and doing new stuff
[16:31:00 CET] Last message repeated 1 time(s).
[16:31:13 CET] <Nakul> @jkqxz
[16:34:20 CET] <jkqxz> Nakul: And what are you interested in with ffmpeg in particular? Do you use it at all?
[17:28:49 CET] <Guest2640> I am still stuck at taking 2 inputs. What I did was add one more element to the AVFilterPad inputs array, kept config_props and filter_frame of both inputs the same, and tried to just add the 2 frames pixel-wise (for testing) in the filter_frame function. The error that I am getting is that a simple filtergraph is expected to have exactly 1 input and 1 output, however it had >1 input or output. Please help.
[17:33:10 CET] <atomnuker> jkqxz: what does vaQueryVendorString() return for AMD devices?
[17:34:30 CET] <durandal_170> Guest2640: use -filter_complex
[17:35:13 CET] <durandal_170> and you need 2 inputs for 2 input pads
[17:37:47 CET] <Guest2640> is -filter_complex a command line flag?
[17:38:01 CET] <durandal_170> yes
[17:41:30 CET] <durandal_170> instead of -vf
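Putting durandal_170's advice together, a two-input invocation looks like the following, using the existing blend filter as a stand-in for the new one (file names are placeholders):

```shell
# Each -i supplies one input; -filter_complex replaces -vf for graphs
# with more than one input or output.  [0:v] and [1:v] label the video
# streams of the first and second input, which feed the two input pads.
ffmpeg -i first.mp4 -i second.mp4 \
       -filter_complex "[0:v][1:v]blend=all_mode=average" \
       output.mp4
```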
[17:42:52 CET] <Guest2640> sorry for silly doubt but where should I put the flag exactly in the command?
[17:43:00 CET] <Guest2640> ooh thanks
[17:43:19 CET] <Guest2640> Do I have to change makefile for this?
[17:45:25 CET] <durandal_170> Guest2640: if you are adding new filter, yes
[17:50:10 CET] <Guest2640> Okay I tried it but now I am getting error Cannot find matching stream for unlabeled input pad 1 on filter parsed_foobar_0
[17:51:33 CET] <durandal_170> Guest2640: have you specified 2 inputs?
[17:51:46 CET] <Guest2640> sorry, got it, I forgot to put -i between the two inputs
[17:51:53 CET] <jkqxz> atomnuker: Similar detail to Intel: "Mesa Gallium driver 18.1.0-devel for AMD Radeon (TM) RX 460 Graphics (POLARIS11 / DRM 3.23.0 / 4.15.2, LLVM 4.0.1)".
[17:53:02 CET] <jkqxz> (The device details come from the same place as e.g. the OpenGL renderer string.)
[17:56:33 CET] <atomnuker> I wish va had a way to get a properly formatted device string after initialization
[17:57:05 CET] <jkqxz> What do you mena?
[17:57:08 CET] <jkqxz> *mean
[18:00:03 CET] <atomnuker> there's no way to know what device is used from a VADisplay, and you can't go the drm route and get a drm fd from a VADisplay to get a device ID
[18:00:47 CET] <jkqxz> Add it to libva?
[18:01:52 CET] <jkqxz> The string wouldn't help there anyway, because the path isn't immutable.
[18:03:07 CET] <atomnuker> btw which repo provides the amd va driver? is it separate like intel-vaapi-driver?
[18:03:39 CET] <jkqxz> No, it's in the Mesa tree: <https://cgit.freedesktop.org/mesa/mesa/tree/src/gallium/state_trackers/va>.
[18:04:18 CET] <jkqxz> (Sharing the same backend stuff as VDPAU and OpenMAX.)
[18:08:48 CET] <Guest2640> Thanks durandal_170, now I am able to take 2 inputs. Now basically I have set both inputs' filter_frame to one function whose parameters are AVFilterLink and AVFrame. How do I access the data of two different frames? For one input I just did in->data to get the frame's data. Will AVFrame now be an array of 2 frames, so that I can simply take both frames by index?
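The pixel-wise combination Guest2640 is testing boils down, for a single 8-bit plane, to walking rows via each buffer's own stride rather than assuming tight packing. A minimal standalone sketch (not FFmpeg code; the frames arrive separately, one per input pad, not as an array):

```c
#include <stdint.h>

/* Add two 8-bit planes pixel-wise with saturation, honoring each
 * buffer's stride.  dst may alias a. */
static void add_planes(uint8_t *dst, int dst_stride,
                       const uint8_t *a, int a_stride,
                       const uint8_t *b, int b_stride,
                       int w, int h)
{
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            int v = a[x] + b[x];
            dst[x] = v > 255 ? 255 : v;   /* clip to 8 bits */
        }
        a   += a_stride;   /* advance one row in each buffer */
        b   += b_stride;
        dst += dst_stride;
    }
}
```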
[18:13:28 CET] <kepstin> Guest2640: hmm, if you're writing a filter that takes multiple inputs, you'll probably want to look into using the "framesync" framework to handle timestamp differences, which means that you'll be using the activate callback rather than filter_frame.
[18:16:03 CET] <Guest2640> Thanks kepstin where can I find documentation for framesync framework?
[18:16:09 CET] <kepstin> Guest2640: probably, oh, vf_mix.c is one of the simplest filters that uses that (although it has dynamic inputs rather than static)
[18:16:31 CET] <kepstin> Guest2640: and read framesync.h in the libavfilter directory
[18:16:38 CET] <Guest2640> Thanks!
[18:17:52 CET] <kepstin> Guest2640: note that for filter with 2 fixed inputs, there's some helpers to make it easier to use.
[18:22:04 CET] <Guest2640> Can you tell me bout them?
[18:28:02 CET] <kepstin> they're shown in the framesync.h file, and you can see them used in some filters (do a git grep for 'dualinput' in the libavfilter directory)
[19:48:09 CET] <adi_> quit
[19:59:40 CET] <atomnuker> jkqxz: nice, I can import vaapi surfaces into vulkan now by exporting them as drm
[20:00:01 CET] <atomnuker> seems like something is broken though since I get drm_format 0x20203852
[20:00:34 CET] <atomnuker> if I export them when their format is yuv420p or nv12 (doesn't change)
[20:01:05 CET] <atomnuker> it doesn't correspond to any fourcc code so I'm wondering if that's some sort of undocumented DRM_FORMAT_UNSUPPORTED flag?
[20:01:21 CET] <jkqxz> Yes, it does. The first plane is R8 in both of those.
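jkqxz's reading can be checked mechanically: a DRM fourcc stores four ASCII characters little-endian, lowest byte first, so 0x20203852 decodes to 'R' '8' ' ' ' ', i.e. a single-channel 8-bit format. A small helper:

```c
#include <stdint.h>

/* Decode a DRM fourcc into its four ASCII characters
 * (little-endian: the lowest byte is the first character). */
static void fourcc_str(uint32_t fourcc, char out[5])
{
    for (int i = 0; i < 4; i++)
        out[i] = (char)((fourcc >> (8 * i)) & 0xff);
    out[4] = '\0';
}
```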
[20:02:00 CET] <atomnuker> why don't I get DRM_FORMAT_NV12 though?
[20:02:28 CET] <atomnuker> oh maybe its because I thought my libva supports these new formats
[20:02:42 CET] <atomnuker> because I didn't want to implement multi-object importing
[20:02:55 CET] <jkqxz> Does Vulkan support NV12 natively?
[20:02:58 CET] <atomnuker> yes
[20:03:29 CET] <jkqxz> Then pass VA_EXPORT_SURFACE_COMPOSED_LAYERS to vaExportSurfaceHandle().
[20:03:42 CET] <jkqxz> And you'll get a single NV12 layer.
[20:04:21 CET] <jkqxz> (Note: does not work on AMD. The planes are in different objects, so only the separate form works.)
[20:05:47 CET] <jkqxz> If the Vulkan import allows multiple-object forms then I recommend just taking the R8/RG88 and combining them at import time.
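The export call jkqxz is suggesting looks roughly like this; `dpy` and `surface` are assumed to be an already-initialized VADisplay and a decoded VASurfaceID, and this is a sketch of the libva API, not tested code:

```c
#include <va/va.h>
#include <va/va_drmcommon.h>

/* Ask libva for a single composed layer (e.g. one NV12 layer) rather
 * than per-plane R8/RG88 layers.  Fails on drivers where the planes
 * live in separate objects, such as Mesa's AMD driver. */
static int export_composed(VADisplay dpy, VASurfaceID surface,
                           VADRMPRIMESurfaceDescriptor *desc)
{
    VAStatus vas = vaExportSurfaceHandle(dpy, surface,
            VA_SURFACE_ATTRIB_MEM_TYPE_DRM_PRIME_2,
            VA_EXPORT_SURFACE_READ_ONLY |
            VA_EXPORT_SURFACE_COMPOSED_LAYERS,
            desc);
    return vas == VA_STATUS_SUCCESS ? 0 : -1;
}
```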
[20:12:06 CET] <atomnuker> well, yuv420p works, nv12 doesn't (flickers for 1 frame after dozens, otherwise just green)
[20:12:36 CET] <atomnuker> yeah, I can import multiple planes separately by using disjoint images
[20:13:12 CET] <atomnuker> was kinda hoping I could avoid that but I'll need to if I want to map host memory as vulkan images to use the transfer queue
[20:13:57 CET] <jkqxz> Since there are no constraints at all on how host memory is arranged?
[20:15:40 CET] <atomnuker> for the host memory mapping? yes, since there's no guarantee that (data[0] + width*height*bps) == data[1]
[20:17:08 CET] <jkqxz> And pitch need not be greater than width*bps.
[20:18:03 CET] <atomnuker> hmm, haven't thought about how that would work with non-tight strides
[20:19:08 CET] <jkqxz> Is negative pitch allowed? (From vflip, say.)
[20:22:36 CET] <atomnuker> I don't think so
[20:22:44 CET] <atomnuker> also non-tight pitches won't be a problem
[20:22:57 CET] <atomnuker> since the images which are backed by host memory are temporary
[20:23:11 CET] <atomnuker> I'll just define them as width=stride
[20:23:33 CET] <atomnuker> and when copying I'll copy them to the actual tiled image
[20:24:52 CET] <atomnuker> (which will be allocated with correct width and height so the copy will crop the input image)
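The contiguity question atomnuker raises (does plane 1 begin exactly where plane 0's rows end, so the whole image could be mapped as one region?) can be written as a simple pointer check:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* True if plane `next` begins exactly at the end of plane `cur`,
 * i.e. the two planes are tightly packed in one allocation. */
static bool planes_contiguous(const uint8_t *cur, int stride, int height,
                              const uint8_t *next)
{
    return cur + (ptrdiff_t)stride * height == next;
}
```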
[20:32:04 CET] <atomnuker> jkqxz: btw what's the difference between a drm object and a drm layer?
[20:33:33 CET] <jkqxz> An object is just a single allocated thing. Layer is a concept we invented here to express the different ways of putting multiple-plane surfaces into objects.
[20:43:19 CET] <atomnuker> jkqxz: I'm always getting 1 object regardless of what flags I export with or what format I'm exporting from
[20:43:36 CET] <atomnuker> its the number of layers that changes
[20:44:08 CET] <jkqxz> Yes. There is only one object on Intel.
[21:02:36 CET] <atomnuker> what about on AMD?
[21:02:58 CET] <atomnuker> would I get multiple objects if I use SEPARATE or a single object if I use COMPOSED?
[21:05:36 CET] <jkqxz> You get multiple objects if you use SEPARATE and you get an error if you use COMPOSED.
[21:08:35 CET] <atomnuker> derp
[21:53:53 CET] <LiquidAcid> hello, trying to debug some segfault in vgmstream (which is using ffmpeg), i'm getting a null codecpar in AVStream. which is strange, since the code calls avformat_find_stream_info() prior to this
[21:54:06 CET] <LiquidAcid> according to the API codecpar should be set after this
[21:55:22 CET] <JEEB> it really depends on if probing can figure things out
[21:56:08 CET] <LiquidAcid> JEEB, so it can happen that nb_streams != 0, but the streams don't have codecpar set?
[21:57:30 CET] <JEEB> I can't say much without seeing code
[21:58:05 CET] <LiquidAcid> JEEB, https://github.com/kode54/vgmstream/blob/master/src/coding/ffmpeg_decoder.c#L406
[21:58:22 CET] <LiquidAcid> that's the line where it crashes, codecpar is null at this point
[21:58:44 CET] <JEEB> ok, so you are calling av_register_all at least
[21:58:48 CET] <JEEB> that one I wanted to check just in case
[21:58:49 CET] <LiquidAcid> line 396: avformat_find_stream_info() is called
[21:59:38 CET] <JEEB> does ffprobe say anything interesting about the file you're trying to load, since it's using basically the same APIs?
[22:00:08 CET] <LiquidAcid> JEEB, "Invalid data found when processing input"
[22:01:24 CET] <JEEB> wonder if with the custom AVIO it can't figure out what the input is?
[22:01:27 CET] <JEEB> try calling av_dump_format
[22:01:32 CET] <JEEB> and see what it outputs
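Put together, the probing sequence JEEB is walking through looks roughly like this; a hedged sketch of the libavformat API, with every return value checked and codecpar verified before use, since probing can succeed while individual streams stay unidentified:

```c
#include <libavformat/avformat.h>

static int open_and_probe(const char *path)
{
    AVFormatContext *ctx = NULL;
    int ret = avformat_open_input(&ctx, path, NULL, NULL);
    if (ret < 0)
        return ret;                      /* e.g. "Invalid data found" */

    ret = avformat_find_stream_info(ctx, NULL);
    if (ret < 0)
        goto fail;

    av_dump_format(ctx, 0, path, 0);     /* log what probing found */

    for (unsigned i = 0; i < ctx->nb_streams; i++)
        if (!ctx->streams[i]->codecpar)  /* don't trust it blindly */
            av_log(ctx, AV_LOG_WARNING, "stream %u has no codecpar\n", i);

    avformat_close_input(&ctx);
    return 0;
fail:
    avformat_close_input(&ctx);
    return ret;
}
```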
[22:03:45 CET] <LiquidAcid> JEEB, does not print anything
[22:04:13 CET] <JEEB> interesting
[22:05:02 CET] <JEEB> what sort of stuff is your input?
[22:05:44 CET] <LiquidAcid> JEEB, has extension .nop, it's some proprietary opus container used on the Switch
[22:06:29 CET] <JEEB> so what is libavformat supposed to take in?
[22:07:10 CET] <LiquidAcid> JEEB, no idea, keep in mind that this is not my code
[22:08:45 CET] <JEEB> basically setting a file name with an extension that makes sense and/or forcing the format that the input's supposed to be would help if one would know what you're trying to open :P
[22:09:18 CET] <LiquidAcid> i don't think that's how vgmstream is working
[22:09:45 CET] <JEEB> well, yes. currently the name is set to "" and forced format to nullptr
[22:09:59 CET] <JEEB> anyways, this is API usage and I just noticed we're on -devel
[22:10:13 CET] <JEEB> API usage stuff is supposed to be discussed on the non-devel channel
[22:11:57 CET] <LiquidAcid> k
[00:00:00 CET] --- Sun Feb 18 2018