[Ffmpeg-devel-irc] ffmpeg-devel.log.20180516

burek burek021 at gmail.com
Thu May 17 03:05:03 EEST 2018


[00:04:23 CEST] <atomnuker> BBB: nowhere, I'm losing motivation to get it past Mt. Lv_Map, and without it it's nowhere
[00:04:28 CEST] <atomnuker> all these damn contexts
[00:13:58 CEST] <TD-Linux> you mean "optimization opportunities" :)
[00:45:34 CEST] <cone-174> ffmpeg 03Zhao Zhili 07master:84d4af4ea8aa: examples/filtering_video: drop an always true condition
[00:45:35 CEST] <cone-174> ffmpeg 03Zhao Zhili 07master:c0a845f9481d: examples/filtering_video: add missing headers
[00:45:36 CEST] <cone-174> ffmpeg 03Zhao Zhili 07master:f24b2e64b0f3: examples/filtering_video: fix memory leak
[00:45:37 CEST] <cone-174> ffmpeg 03Zhao Zhili 07master:d0b5952325c9: doc/examples: add missing ignore files
[00:45:38 CEST] <cone-174> ffmpeg 03Michael Niedermayer 07master:64f59a21b39b: avcodec: Disable new iterate API for ossfuzz
[02:08:19 CEST] <BBB> atomnuker: hmk...
[03:08:21 CEST] <atomnuker> ffs, when can we use the new api in decoders?
[03:08:54 CEST] <atomnuker> that new libopus fec patch could do things correctly if it could output multiple frames per packet
[03:09:30 CEST] <atomnuker> as well as atrac9 and a whole bunch of voice codecs, and who knows how many other stuff in lavc
[03:11:36 CEST] <atomnuker> the old API can't use the new API in AVCodec to emulate old behaviour of 1 packet in -> N frames out, can it?
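(For context, a minimal caller-side sketch of the send/receive decode API being discussed, avcodec_send_packet()/avcodec_receive_frame(), which can already hand back several frames for a single packet; error handling is trimmed.)

    #include <libavcodec/avcodec.h>

    /* Feed one packet and drain every frame the decoder produces from it.
     * With the send/receive API a single packet may yield multiple frames,
     * which the old one-call-one-frame entry points could not express. */
    static int decode_packet(AVCodecContext *avctx, const AVPacket *pkt, AVFrame *frame)
    {
        int ret = avcodec_send_packet(avctx, pkt);
        if (ret < 0)
            return ret;

        while ((ret = avcodec_receive_frame(avctx, frame)) >= 0) {
            /* ... consume frame ... */
            av_frame_unref(frame);
        }
        return (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) ? 0 : ret;
    }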
[03:54:39 CEST] <tmm1> is there any way to access AVFormatContext from an AVStream?
[04:02:19 CEST] <jamrial> tmm1: don't think so
[04:02:42 CEST] <jamrial> if this is an internal function, just change it to also take an AVFormatContext
[04:04:18 CEST] <tmm1> i need it from wrap_timestamp(), but one of the callers is av_add_index_entry() which is public
[04:05:26 CEST] <tmm1> it seems i would need to add AVFormatContext pointer into AVStreamInternal
[04:24:16 CEST] <jamrial> tmm1: av_add_index_entry() could always just call wrap_timestamp() with NULL as argument for the new avformatcontext param, assuming the changes you'll make don't make it an obligatory param
[04:25:50 CEST] <tmm1> i'm storing a new index (for timestamp discontinuities) in the format context, so i need access to it
[04:26:50 CEST] <tmm1> adding to AVStreamInternal worked for now, can revisit and maybe the index can be moved elsewhere for easier access
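(Roughly the shape of that change, as a sketch; the field name and placement are illustrative, not the actual patch.)

    /* libavformat/internal.h (sketch): give each stream a back-pointer to its
     * owning context so internal helpers can reach demuxer-level state without
     * touching public signatures.  "fmtctx" is an illustrative name. */
    struct AVStreamInternal {
        /* ... existing fields ... */
        AVFormatContext *fmtctx;   /* set when the stream is created */
    };

    /* libavformat/utils.c (sketch): wrap_timestamp() can now consult state
     * stored on the format context, e.g. a discontinuity index. */
    static int64_t wrap_timestamp(const AVStream *st, int64_t timestamp)
    {
        AVFormatContext *s = st->internal->fmtctx;
        /* ... existing wrapping logic, plus lookups in the new index on s ... */
        return timestamp;
    }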
[08:59:52 CEST] <JEEB> atomnuker: i thought that was all handled by some wrapper? unless we're talking about more internal things?
[09:49:09 CEST] <atomnuker> wm4: ping? you implemented the wrapper, right?
[11:19:46 CEST] <nevcairiel> it's audio, can't you just concat the packets into one buffer?
[12:50:03 CEST] <cone-432> ffmpeg 03Paul B Mahol 07master:4e816b549142: avfilter: add aderivative and aintegral filter
[14:32:05 CEST] <eric-> Hey all, does anyone know why some of the DNxHD bitrates listed in libavcodec/dnxhddata.c do not match the VC3 standard? For example the 1259 profile array lists 63,84,100,100, but the standard lists 80,85,100
[14:34:30 CEST] <cone-432> ffmpeg 03Jun Zhao 07master:48c5ac8b0f66: lavc/h2645_parse: log more HEVC NAL type.
[14:34:30 CEST] <cone-432> ffmpeg 03Jun Zhao 07master:7582a907e40e: lavc/h2645_parse: rename the nal_unit_name to hevc_nal_unit_name.
[14:34:32 CEST] <cone-432> ffmpeg 03Jun Zhao 07master:b7cd2ab22e21: lavc/h2645_parse: add h264_nal_unit_name for h264 NAL type.
[15:26:30 CEST] <gagandeep> kierank: there was a difference coding flag in the codec state that decides which subband will be difference coded
[15:26:58 CEST] <kierank> ok let me know if you have a patch
[15:27:07 CEST] <wm4> atomnuker: no, elenril did
[15:27:18 CEST] <wm4> or rather, rewrote it into its current form
[15:30:19 CEST] <kierank> eric-: guessing here but probably what works in the real world
[16:15:05 CEST] <eric-> kierank: possibly, but the array numbers don't make sense, as the 63mbit number implies an approx frame rate of 19fps
[16:45:50 CEST] <gagandeep> kierank: interlacing fixed
[16:46:05 CEST] <gagandeep> subband 8 was only difference coded, not quantized
[16:46:23 CEST] <gagandeep> so dequantization is removed for those bands with difference coding
[16:47:49 CEST] <kierank> gagandeep: nice, send patch to ffmpeg-devel
[16:48:52 CEST] <gagandeep> kierank: i will, after properly formatting it and updating the codec states in the header
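(The fix described above, as an illustrative fragment; names are hypothetical, not the actual cfhd.c code.)

    #include <stdint.h>

    /* Dequantize a subband only if it was not difference coded; per the fix,
     * difference-coded bands (e.g. subband 8 here) carry unquantized values. */
    static void dequant_band(int16_t *coeffs, int count, int quant, int difference_coded)
    {
        if (difference_coded)
            return;
        for (int i = 0; i < count; i++)
            coeffs[i] *= quant;
    }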
[16:51:45 CEST] <ubitux> durandal_1707: give me time to read & understand the paper
[17:40:37 CEST] <durandal_1707> ubitux: all serious implementations i checked use local max patch weight (except handbrake, which uses a global custom one), but if you can do better than that it would be great (per the paper, local Stein improves small details)
[17:42:33 CEST] <ubitux> ok that's good to know
[17:43:06 CEST] <ubitux> maybe we should introduce different modes but i guess the max version will be good enough
[17:43:15 CEST] <ubitux> i just want to read it before, if you can wait
[17:43:54 CEST] <ubitux> i'm still uncomfortable about the strength vs amount thing, i don't think it's a good idea to have both, but maybe there is no official way around it
[19:21:00 CEST] <cone-432> ffmpeg 03Aman Gupta 07master:673d8cfd5188: avformat/hls: fix seeking around EVENT playlist after media sequence changes
[20:33:45 CEST] <durandal_1707> can LDLT Cholesky decomposition be parallelized?
[20:44:17 CEST] <atomnuker> daddesio^^
[20:44:48 CEST] <daddesio> I've never checked...
[20:48:51 CEST] <daddesio> I've really never implemented any parallel matrix algos before.
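(For reference, the standard LDL^T recurrences make the dependency structure clear: column j needs D_k and L_{jk} from every earlier column, a serial chain over j, while the entries within a column are independent of one another, so blocked or right-looking variants can parallelize over i.)

    D_j = A_{jj} - \sum_{k=1}^{j-1} L_{jk}^2 D_k,
    \qquad
    L_{ij} = \frac{1}{D_j}\left( A_{ij} - \sum_{k=1}^{j-1} L_{ik} L_{jk} D_k \right), \quad i > j.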
[22:04:42 CEST] <akravchenko188> hi guys. is there way to print filter graph?
[22:05:22 CEST] <akravchenko188> I am trying to figure out which filters are added automatically
[22:08:55 CEST] <durandal_1707> akravchenko188: -dumpgraph ? 
[22:11:22 CEST] <akravchenko188> Option dumpgraph not found.
[22:11:33 CEST] <JEEB> yea, not sure if the cli had that
[22:11:42 CEST] <JEEB> there was an API function to do an ASCII dump of the filter chain
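(That function is presumably avfilter_graph_dump(); a minimal usage sketch, assuming "graph" is an already-configured AVFilterGraph.)

    #include <stdio.h>
    #include <libavfilter/avfilter.h>
    #include <libavutil/mem.h>

    /* Print an ASCII representation of a configured filter graph. */
    static void dump_graph(AVFilterGraph *graph)
    {
        char *dump = avfilter_graph_dump(graph, NULL);
        if (dump) {
            fprintf(stderr, "%s\n", dump);
            av_free(dump);
        }
    }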
[22:22:22 CEST] <akravchenko188> oh, I found the issue. the hw decoder transfers data to system memory by default. I thought a filter like vf_hwdownload does this and tried to find who adds it :)
[22:29:44 CEST] <atomnuker> yes, you need to use -hwaccel_output_format if you don't want to download them (or you want a different dl format)
[22:34:47 CEST] <akravchenko188> thanks, I was just curious why it transfers to system memory by default.
[22:35:00 CEST] <akravchenko188> it is an additional operation
[22:35:44 CEST] <nevcairiel> it does that by default because it requires careful setup of the entire chain to support zero-copy hardware processing
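(A usage example of keeping decoded frames in GPU memory with -hwaccel_output_format; VAAPI shown, and the device path, filter, and encoder are assumptions that depend on the setup.)

    ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 \
           -hwaccel_output_format vaapi -i input.mkv \
           -vf scale_vaapi=w=1280:h=720 -c:v h264_vaapi output.mkv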
[22:38:25 CEST] <wm4> amazon? nice
[22:39:29 CEST] <akravchenko188> ok,I see,  thanks
[22:39:47 CEST] <wm4> why did FPGAs on PCs never take off
[22:40:04 CEST] <kierank> mostly useless
[22:45:30 CEST] <atomnuker> expensive
[22:45:46 CEST] <atomnuker> those with enough gates to be useful cost quite a lot
[22:46:21 CEST] <nevcairiel> yeah its really only useful for enterprise use due to the cost
[22:47:25 CEST] <akravchenko188> is there any negotiation mechanism for formats when building the graph/pipeline?
[22:47:37 CEST] <nevcairiel> sort of, but its pretty rigid
[22:48:10 CEST] <rcombs> what are FPGAs even useful for that you would be doing on a PC anyway
[22:48:20 CEST] <klaxa> ok, after reading a lot i'm finally picking up some pace with the config file stuff, atomnuker, should i change this structure? a "downside" of this format i found is that you cannot have a stream called "bind_address" or "port", i'm not sure how important that is? https://gist.github.com/klaxa/30080590752ba738bc23b381314d5e52
[22:48:21 CEST] <wm4> nevcairiel: i.e., no
[22:48:55 CEST] <wm4> rcombs: DSP stuff?
[22:49:05 CEST] <rcombs> like what?
[22:49:35 CEST] <wm4> such as video decoding
[22:49:59 CEST] <cone-432> ffmpeg 03Marton Balint 07master:50d6b7bd830e: avcodec/xwddec: fix palette alpha
[22:50:01 CEST] <jkqxz> Being able to do all the video transforms in one cycle would be nice.
[22:50:15 CEST] <atomnuker> klaxa: weird, does the socket namespace somehow get initialized and used
[22:50:25 CEST] <rcombs> might as well just sell an ASIC for that
[22:50:39 CEST] <rcombs> lower-power too, I'd imagine
[22:50:42 CEST] <rcombs> (oh hey they do)
[22:50:44 CEST] <wm4> now ASICs are expensive
[22:50:45 CEST] <klaxa> ah, no i mean just from the format, those keys are used for, well the address and the port, so you can't reuse those keys for stream-names
[22:51:01 CEST] <klaxa> a stream is just a named object in a server
[22:51:05 CEST] <wm4> unless you design them with great effort and then sell them at very high volumes
[22:51:12 CEST] <atomnuker> klaxa: its not important at all then
[22:51:19 CEST] <klaxa> ok
[22:51:33 CEST] <rcombs> which they do
[22:51:38 CEST] <jkqxz> If Intel get their act together with Altera and make some sort of closely-coupled FPGA thing in their processors then maybe there is significant use there for video.
[22:51:45 CEST] <atomnuker> or they cut corners (which they do, in the case of vp9)
[22:51:59 CEST] <jkqxz> (Like, special magic instructions which invoke a load of FPGA logic.)
[22:52:21 CEST] <jkqxz> But anything further apart than that probably isn't going to be very useful, because the fixed elements of it mean you might as well make ASICs instead.
[22:52:52 CEST] <rcombs> plus if it's on FPGA then you don't get to sell new chips every time there's a new codec
[22:52:21 CEST] <atomnuker> did you know that decoding/encoding/transform instructions are how GPUs decode video?
[22:53:13 CEST] <atomnuker> apparently you can use them from shaders too, if the drivers exposed them
[22:53:21 CEST] <atomnuker> nvidia has an extension to do that
[22:54:02 CEST] <wm4> which one
[22:54:36 CEST] <kierank> atomnuker: there are closed shaders for vp9 in quicksync
[22:55:02 CEST] <nevcairiel> those are for the crappy hybrid mode tho
[22:56:14 CEST] <rcombs> there are closed shaders for every supported codec, even without hybrid mode
[22:58:40 CEST] <atomnuker> wm4: NV_gpu_program5
[22:58:58 CEST] <jkqxz> VP9 on Nvidia is just Google's IP block dumped in.  Are other codecs more integrated, or is that shader logic actually entirely separate from the codec hardware?
[22:59:27 CEST] <nevcairiel> nvidia generally doesn't use shaders for decoding at all
[23:00:07 CEST] <wm4> jkqxz: is that goog IP closed?
[23:02:01 CEST] <jkqxz> Yes, but it's available for no charge if you sign up to put it in a product.  <https://www.webmproject.org/hardware/vp9/>
[23:02:29 CEST] <cone-432> ffmpeg 03Marton Balint 07release/4.0:61fed89ad425: avcodec/xwddec: fix palette alpha
[23:04:47 CEST] <nevcairiel> jkqxz: since Pascal, NVIDIA supports 8192x8192 VP9 decode, but the google block does not seem to support that (while in Maxwell, NVIDIA's limit matched the google limit), so either they extended it or replaced it, or there is a newer one not mentioned there
[23:12:29 CEST] <jkqxz> I would be unsurprised if there were a better version.
[23:16:12 CEST] <atomnuker> my mobile maxwell doesn't even support vp9 or vp8
[23:30:17 CEST] <jkqxz> Hmm, you can now (or soon?) buy a Cannonlake with AVX-512.
[23:30:45 CEST] <nevcairiel> well low power mobile CPUs anyway
[23:30:53 CEST] <nevcairiel> if you want a dual core with avx512=p
[23:31:07 CEST] <jkqxz> Stuck inside a shitty laptop.
[23:31:27 CEST] <jkqxz> So not really usable for testing anyway.
[23:33:55 CEST] <jkqxz> nevcairiel:  Are you happy with how that test now makes a separate fate-hw target?
[23:37:09 CEST] <nevcairiel> that was my suggestion, so it's fine
[23:38:05 CEST] <nevcairiel> (if it works like that)
[23:39:36 CEST] <jkqxz> Yeah.
[23:41:49 CEST] <jamrial> jkqxz: does it really have avx512?
[23:41:57 CEST] <jamrial> this i3 on ark.intel doesn't mention it
[23:42:20 CEST] <jamrial> oh, they updated the entry
[23:42:22 CEST] <jamrial> now it does
[23:42:45 CEST] <jamrial> yesterday it didn't. copy paste placeholder info?
[23:42:54 CEST] <nevcairiel> ARK is often inaccurate
[23:43:12 CEST] <nevcairiel> because it's not managed by tech people directly
[23:43:58 CEST] <jamrial> but yeah, low-power mobile, so i can't see avx512 being useful (or able to run decently) at all
[23:48:37 CEST] <jkqxz> Well, you can probably test the new instructions...  But you'll get nuked by the power and thermal limits if you actually try to benchmark anything.
[23:51:16 CEST] <jamrial> https://aomedia.googlesource.com/aom/+/21f4825469610deb8a22fe2c3adcc694381e699d :D
[00:00:00 CEST] --- Thu May 17 2018

