[Ffmpeg-devel-irc] ffmpeg-devel.log.20180119
burek
burek021 at gmail.com
Sat Jan 20 03:05:03 EET 2018
[01:00:19 CET] <jamrial> BBB: https://github.com/jamrial/FFmpeg/commit/e13b3615a69d3b2f104566fdda5862958a71bb86
[02:02:05 CET] <mypopydev> @jkqxz
[02:02:08 CET] <mypopydev> hi
[02:02:12 CET] <rcombs> jkqxz: wanna LGTM this? http://ffmpeg.org/pipermail/ffmpeg-devel/2018-January/223978.html
[02:05:19 CET] <mypopydev> @jkqxz if we put VAAPIVPPContext as the first element of FooVAAPIContext, how can we get at the VAAPIVPPContext, bypassing FooVAAPIContext, from the AVFilterContext *avctx?
[02:07:45 CET] <jkqxz> mypopydev: It's avctx->priv as well.
[02:08:54 CET] <jkqxz> (A pointer to a structure is also a pointer to the first element.)
[02:09:14 CET] <mypopydev> @jkqxz Then use (VAAPIVPPContext *) to do the cast?
[02:10:54 CET] <jkqxz> It's a void*, so you don't need to cast. Just use VAAPIVPPContext *ctx = avctx->priv; inside vaapi_vpp.c and FooVAAPIContext *ctx = avctx->priv; in the individual filters.
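A minimal sketch of the layout jkqxz describes (FooVAAPIContext is the placeholder name from the discussion; the fields and functions are illustrative, not the real lavfi code):

```c
#include <stdio.h>

/* Shared state used by the common vaapi_vpp code. */
typedef struct VAAPIVPPContext {
    int output_width;
    int output_height;
} VAAPIVPPContext;

/* Per-filter context; the shared context MUST be the first member,
 * so a pointer to FooVAAPIContext is also a valid pointer to
 * VAAPIVPPContext (C guarantees no padding before the first member). */
typedef struct FooVAAPIContext {
    VAAPIVPPContext vpp_ctx;
    int foo_option;
} FooVAAPIContext;

static void common_code(void *priv)
{
    /* The common code only sees the shared part; no cast is needed
     * because priv is a void*. */
    VAAPIVPPContext *ctx = priv;
    printf("%dx%d\n", ctx->output_width, ctx->output_height);
}

int main(void)
{
    FooVAAPIContext foo = { .vpp_ctx = { 1920, 1080 }, .foo_option = 1 };
    /* In lavfi this pointer would be avctx->priv. */
    common_code(&foo);
    return 0;
}
```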
[02:11:41 CET] <jkqxz> rcombs: Seems fine? I assume that multiplication does the right thing...
[02:11:49 CET] <rcombs> jkqxz: also, a concept I've been thinking about: moving the generic code from vf_scale that decides on the output res (the eval and such) to lavfi/scale_generic.c
[02:12:07 CET] <rcombs> and the relevant AVOption definitions to a macro in lavfi/scale_generic.h
[02:12:14 CET] <rcombs> similar to tls.c/h
[02:12:26 CET] <rcombs> and then having vf_scale_vaapi and vf_scale both use that
[02:13:07 CET] <jkqxz> There is already scale.c which does that for '', vaapi, cuda and npp.
[02:13:55 CET] <jkqxz> Well, the options could be moved too.
[02:15:48 CET] <rcombs> oh so there is
[02:15:50 CET] <rcombs> when did that show up
[02:16:10 CET] <jkqxz> A year ago? :P
[02:16:11 CET] <rcombs> february, wew
[02:16:27 CET] <rcombs> thanks tmm1, implementing my ideas a year before I have them
[02:17:28 CET] <rcombs> so I guess I just want to refactor the aspect ratio handling into that
[02:18:03 CET] <rcombs> and yeah that multiplication is from vf_scale and seems to work fine here so
[02:18:41 CET] <jkqxz> I just tried it and replied.
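A rough sketch of the option-sharing macro pattern rcombs describes, mirroring how libavformat's tls.c/tls.h share options between backends. All names here (scale_generic.h, SCALE_COMMON_OPTS, ScaleFooContext) are hypothetical; the existing shared code is lavfi's scale.c, which does not currently share the option table:

```c
/* scale_generic.h (hypothetical): shared option entries that each
 * scale filter pastes into its own AVOption table. OFFSET()/FLAGS
 * are defined by the including file against its own context. */
#include <stddef.h>
#include "libavutil/opt.h"

#define SCALE_COMMON_OPTS                                                  \
    { "w", "output width expression",  OFFSET(w_expr), AV_OPT_TYPE_STRING, \
        { .str = "iw" }, .flags = FLAGS },                                 \
    { "h", "output height expression", OFFSET(h_expr), AV_OPT_TYPE_STRING, \
        { .str = "ih" }, .flags = FLAGS },

/* In a filter, e.g. vf_scale_foo.c (hypothetical): */
typedef struct ScaleFooContext {
    const AVClass *class;
    char *w_expr, *h_expr;
} ScaleFooContext;

#define OFFSET(x) offsetof(ScaleFooContext, x)
#define FLAGS (AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_FILTERING_PARAM)

static const AVOption scale_foo_options[] = {
    SCALE_COMMON_OPTS
    { NULL }
};
```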
[02:18:48 CET] <mypopydev> @jkqxz the other thing: now we want to support multiple reference frames for the h264/h265 vaapi encoders, and I have sent an early version as an RFC. Do you have any comments or suggestions?
[02:22:26 CET] <mypopydev> Currently the vaapi 264/265 encoders only support a max of 2 reference frames (1 forward reference frame / 1 backward reference frame)
[02:24:21 CET] <jkqxz> Have you got any figures for how much that last-few-frames method helps? As I noted before, I'm sceptical that you will achieve anything without actually looking at the frames to try to work out which ones will be useful as references.
[02:24:36 CET] <jkqxz> Also, I don't think it's reasonable to triplicate a lot of the setup code like that.
[02:25:09 CET] <cone-095> ffmpeg 03Rodger Combs 07master:381a4820c64b: lavfi/vf_scale_vaapi: set output SAR
[02:25:51 CET] <mypopydev> Yes, so I think we need a common DPB management API for multiple reference frames
[02:29:42 CET] <mypopydev> @jkqxz for the i965 driver, I think multiple reference frames don't gain much, but for media-driver (iHD) we get a better compression ratio in my test results
[02:31:21 CET] <jkqxz> Even when the frames offered to it as references are just the most recent few frames of the stream? The problem I have with it is that those frames don't look useful for anything unless your stream has done something quite strange like a very short cut away and back.
[02:37:22 CET] <mypopydev> @jkqxz at least not worse in this case :)
[02:38:10 CET] <jkqxz> It will be slower, and the requirements on the decoder will be greater.
[02:39:29 CET] <mypopydev> For the HW encoder, in my test results there's maybe a 5%-15% fps reduction when enabling multiple reference frames
[02:40:02 CET] <mypopydev> And I think gst-vaapi/MediaSDK both support this feature :)
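A very rough sketch of what the common DPB helper mypopydev proposes might look like; every name here is hypothetical (the discussion only establishes a sliding "last few frames" policy and that the setup code shouldn't be triplicated across the encoders):

```c
#include <stdint.h>

/* Hypothetical shared reference pool for the vaapi encoders: keep
 * the N most recent reconstructed frames as reference candidates,
 * instead of the hardcoded 1-forward/1-backward pair. Assumes
 * max_refs <= MAX_DPB_SIZE. */
#define MAX_DPB_SIZE 16

typedef struct RefFrame {
    int64_t display_order;
    /* the real code would hold a VASurfaceID, picture params, etc. */
} RefFrame;

typedef struct EncodeDPB {
    RefFrame frames[MAX_DPB_SIZE];
    int      nb_frames;   /* currently stored */
    int      max_refs;    /* user/driver limit */
} EncodeDPB;

/* Slide the newest reconstructed frame in, evicting the oldest once
 * the pool is full (the "last few frames" policy from the RFC). */
static void dpb_add(EncodeDPB *dpb, RefFrame frame)
{
    if (dpb->nb_frames == dpb->max_refs) {
        for (int i = 1; i < dpb->nb_frames; i++)
            dpb->frames[i - 1] = dpb->frames[i];
        dpb->nb_frames--;
    }
    dpb->frames[dpb->nb_frames++] = frame;
}
```

jkqxz's objection applies to exactly this policy: unless the encoder inspects the candidates, the most recent frames are rarely better references than the immediately previous one.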
[04:35:44 CET] <BBB> jamrial: yeah that looks reasonable
[04:40:31 CET] <jamrial> BBB: should i send it to the ml, or just push?
[04:40:46 CET] <BBB> I'd send to ml just because that's what we normally do
[04:41:14 CET] <BBB> the order of operations is also a little weird, you seem to do things depending on colorspace, whereas the specific vp9 definition seems to do it based on profile (and then colorspace)
[04:41:23 CET] <BBB> I don't think it makes an actual difference, both work fine
[04:41:37 CET] <BBB> I just noticed it's different
[04:41:49 CET] <BBB> (it's an ordering/indentation difference, not a functional difference afaict)
[04:41:57 CET] <jamrial> I basically copied it from vp9.c
[04:42:38 CET] <BBB> it's fine :)
[04:42:49 CET] <BBB> I'm gonna sleep in a little bit, but feel free to push if you think it's finished
[04:43:05 CET] <BBB> I'd normally send to ml and let it sit for 24hrs but I have no strong opinions on this sort of stuff
[04:43:45 CET] <jamrial> i did however have to set avctx->profile after validating it was a vp9 frame (with VP9_SYNCCODE) since otherwise most samples would show up as profile 0
[04:44:30 CET] <BBB> yes, that wouldn't be good
[04:44:31 CET] <jamrial> fate samples i mean. apparently because of the last packet
[04:44:34 CET] <BBB> I don't think we have rgb samples btw
[04:45:50 CET] <jamrial> if they are available as part of the conformance suite we could add a couple
[04:46:11 CET] <BBB> I don't think they are :(
[04:46:23 CET] <BBB> it's one of these features that was never really used
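A hedged sketch of the two points in this exchange: the VP9 uncompressed header carries the profile bits before the colorspace (the profile-first order BBB mentions), and, as jamrial notes, avctx->profile should only be committed once the sync code has validated the frame. The bit layout follows the VP9 spec and the VP9_SYNCCODE constant from vp9.c; the surrounding function and its placement are illustrative, not jamrial's actual patch:

```c
#include "libavcodec/avcodec.h"
#include "libavcodec/get_bits.h"

#define VP9_SYNCCODE 0x498342

/* Illustrative only: parse just enough of the VP9 uncompressed header
 * to know the profile, but don't commit it to avctx until the sync
 * code has proven this is really a VP9 keyframe; otherwise a bogus
 * trailing packet can clobber the reported profile. */
static int parse_profile(AVCodecContext *avctx, GetBitContext *gb)
{
    int profile, keyframe;

    if (get_bits(gb, 2) != 0x2) /* frame marker */
        return AVERROR_INVALIDDATA;
    profile  = get_bits1(gb);
    profile |= get_bits1(gb) << 1;
    if (profile == 3)
        skip_bits1(gb); /* reserved bit */

    if (get_bits1(gb)) /* show_existing_frame: nothing more to parse */
        return 0;
    keyframe = !get_bits1(gb); /* frame_type 0 means keyframe */
    skip_bits1(gb); /* show_frame */
    skip_bits1(gb); /* error_resilient_mode */

    if (keyframe) {
        if (get_bits(gb, 24) != VP9_SYNCCODE)
            return AVERROR_INVALIDDATA;
        avctx->profile = profile; /* only now is it trustworthy */
        /* colorspace details follow here, and for profiles 1/3 the
         * SRGB colorspace selects RGB 4:4:4 output */
    }
    return 0;
}
```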
[07:27:03 CET] <Madsfoto> durandal_1707 and sfan5> I can happily report that changing the `#define FF_BUFQUEUE_SIZE 129` to something larger (I chose 1000) and adding atadenoise=0a=0:0b=0:1a=0:1b=0:2a=0:2b=0:s=599 gives perfect results. Thank you very much for your assistance!
[10:03:42 CET] <cone-459> ffmpeg 03Karthick Jeyapal 07master:0afa171f25bc: avformat/hlsenc: Add CODECS attribute to master playlist
[11:39:44 CET] <Madsfoto> durandal_1707 and sfan5> And it is considerably faster than Imagemagick: In IM it took 6 seconds per averaged image, in ffmpeg it takes 2.3 seconds.
[13:28:08 CET] <Chloe> why are static mutexes all macro'd to pthread_* but then we have ff_thread_ stuff?
[13:30:06 CET] <wm4> Chloe: what do you mean?
[13:32:00 CET] <Chloe> wm4: you said I should use ff_thread_once instead of pthread_once, but the os/2 compat layer is for pthread_mutex instead of ff_mutex
[13:32:24 CET] <Chloe> I guess my question is: what's the point of the ff_thread/ff_mutex stuff, can't we just use pthread_
[13:32:47 CET] <Chloe> and if it's single threaded, without threads, then just define pthread_ stuff to whatever we're doing for ff_ stuff
[13:33:11 CET] <Chloe> also does http://sprunge.us/HfEU look adequate for a static mutex lock (though I guess I'll have to use ff_mutex)
[13:34:12 CET] <wm4> Chloe: the only reason is because we allow building with threads, in which case we don't define pthread_* for some reason, but the ff_thread ones
[13:34:34 CET] <BtbN> you mean without threads?
[13:34:36 CET] <wm4> also yes I guess
[13:34:41 CET] <wm4> BtbN: yeah
[13:36:26 CET] <Chloe> wm4: yeah but can't we just define pthread_* instead
[13:37:17 CET] <wm4> *shrug*
[13:37:43 CET] <wm4> I think there were some concerns about overriding standard names
[13:38:03 CET] <wm4> which isn't a problem on win32/os2 because pthread is not a standard thing there
[13:38:15 CET] <wm4> but then some ASSERT_LEVEL crap redefines pthread functions anyway
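The sprunge paste Chloe links has not been preserved, so this is only a guess at the shape of the static-mutex pattern under discussion, using the AVMutex/AVOnce wrappers from libavutil/thread.h (which map to pthread_* when threads are enabled and compile away in a threadless build, which is the reason they exist instead of raw pthread_* calls):

```c
#include "libavutil/thread.h"

/* A static mutex guarding one-time setup, via the ff_ wrappers. */
static AVMutex init_mutex = AV_MUTEX_INITIALIZER;
static int initialized;

static void some_global_init(void)
{
    ff_mutex_lock(&init_mutex);
    if (!initialized) {
        /* one-time setup goes here */
        initialized = 1;
    }
    ff_mutex_unlock(&init_mutex);
}

/* The same thing with ff_thread_once, which is what wm4 suggested: */
static AVOnce init_once = AV_ONCE_INIT;

static void do_init(void)
{
    /* one-time setup goes here */
}

static void some_global_init2(void)
{
    ff_thread_once(&init_once, do_init);
}
```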
[16:15:37 CET] <thardin> is it just me or does dashdec seem particularly awful?
[16:15:55 CET] <atomnuker> blame dash
[16:16:41 CET] <thardin> I'm sure the format is awful too, just all these string manipulations one doesn't really have much business doing in C
[16:17:09 CET] <thardin> or at least not doing oneself
[20:07:18 CET] <durandal_1707> atomnuker: did you manage to finish something yet?
[20:43:01 CET] <atomnuker> no, started writing the vulkan thing and decided to once again do something no one else had done before, by using the ycbcr formats
[20:43:56 CET] <atomnuker> it works now, but I still need to write the filter's command buffer code and I want to abstract it properly so other filters may use it and it'll be able to deal with multiple sources
[20:44:12 CET] <atomnuker> (oh and the buffer description code)
[20:44:48 CET] <atomnuker> I'll probably work on atrac9 this weekend though, I'd like to decode at least the simplest form of coefficients
[20:45:54 CET] <atomnuker> and on vulkan throughout the week, I'm taking some time off to go to barcelona and nice
[21:27:33 CET] <atomnuker> jkqxz: is there a limit to how many vaapi instances the hardware can handle? I tried 40 at once and it still worked with barely 30% gpu usage
[21:33:41 CET] <jkqxz> No. It's like any other GPU-using process, so you can have arbitrarily many just as you can with OpenGL. (There isn't any special state.)
[21:47:27 CET] <atomnuker> wow, impressive
[21:47:57 CET] <nevcairiel> once you run out of resources like memory, it slows down to a crawl tho
[21:50:03 CET] <atomnuker> it's better than just having completely dedicated fixed decoders which can only handle a few streams at a time, despite each block having enough throughput for more
[21:50:50 CET] <nevcairiel> those are basically that, except that it's not a fixed stream limit, just macroblock throughput
[21:52:31 CET] <atomnuker> I thought they handled things like quantization/transforms/prediction separately but that works too
[23:09:53 CET] <BtbN> philipl, no, I haven't. And most likely won't this weekend as well, as my flatmate is preparing and holding his birthday party.
[23:10:04 CET] <BtbN> So if you want to jump on it in the meantime, feel free
[23:28:41 CET] <cone-840> ffmpeg 03Nikolas Bowe 07master:e07649e618ca: avformat/matroskadec: Fix float-cast-overflow undefined behavior in matroska_parse_tracks()
[23:32:46 CET] <cone-840> ffmpeg 03Yogender Gupta 07master:07a96b6251f0: avcodec/cuviddec: set key frame for decoded frames
[00:00:00 CET] --- Sat Jan 20 2018