[Ffmpeg-devel-irc] ffmpeg-devel.log.20180307

burek burek021 at gmail.com
Thu Mar 8 03:05:04 EET 2018


[00:46:55 CET] <cone-854> ffmpeg 03Mark Thompson 07master:56912555bc19: h264_metadata: Actually fail when sei_user_data option is invalid
[01:06:38 CET] <cone-854> ffmpeg 03Stefan _ 07master:7c39305a1748: libavformat/tls_libtls: pass numeric hostnames to tls_connect_cbs()
[01:06:43 CET] <JEEB> sfan5: ^
[01:11:53 CET] <cone-854> ffmpeg 03Masaki Tanaka 07master:8b0a9f79c897: mpegvideo_parser: fix indentation of an if statement
[02:28:45 CET] <tmm1> so where did the external headers end up?
[02:28:54 CET] <tmm1> do i need them for cuda/nv on windows?
[02:29:28 CET] <tmm1> ah i see the nv-codec-headers repo
[03:01:18 CET] <Compn> is that ati guy happy yet ?
[03:23:11 CET] <ZhongLi> 4
[03:23:28 CET] <ZhongLi> `md:  `` 
[03:42:31 CET] <cone-854> ffmpeg 03James Almer 07master:a43e9cdd442b: avformat/isom: don't free extradata before calling ff_get_extradata()
[05:02:47 CET] <gagandeep_> guys what is coded_width and coded_height?
[05:08:11 CET] <FishPencil> Does anyone know of a reference algorithm to create a mirrored border around a frame? I'm writing one myself and wondering if it's been done before or if there's a better way
[05:22:18 CET] <Compn> FishPencil : you mean for cell phone fake borders ?
[05:22:24 CET] <Compn> probably i dont know of any though
[05:23:31 CET] <FishPencil> Compn: It's a border for denoising 
[05:25:31 CET] <Compn> de noising?
[05:25:35 CET] <Compn> or de shaking ?
[05:25:52 CET] <FishPencil> Compn: de noising
[05:27:20 CET] <Compn> maybe someone else here knows
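The mirrored border FishPencil describes is usually done with whole-sample reflection (edge pixels are not repeated), which is the common convention for padding before denoising. A minimal sketch for a single grayscale plane, assuming `pad <= min(w, h) - 1` so the reflected index stays inside the source:

```c
#include <stdint.h>

/* Mirror-pad a w x h grayscale plane into a (w + 2*pad) x (h + 2*pad)
 * destination, reflecting samples across the frame edges without
 * repeating the edge sample itself (index -1 maps to 1, w maps to w-2).
 * Illustrative only; requires pad <= min(w, h) - 1. */
static void mirror_pad(uint8_t *dst, const uint8_t *src,
                       int w, int h, int pad)
{
    int dw = w + 2 * pad;
    for (int y = -pad; y < h + pad; y++) {
        int sy = y < 0 ? -y : (y >= h ? 2 * h - 2 - y : y);
        for (int x = -pad; x < w + pad; x++) {
            int sx = x < 0 ? -x : (x >= w ? 2 * w - 2 - x : x);
            dst[(y + pad) * dw + (x + pad)] = src[sy * w + sx];
        }
    }
}
```

The same reflection rule is applied per plane; for chroma planes the pad is scaled by the subsampling factor.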
[07:20:46 CET] <gagandeep_> guys what does the function bytestream2_get_be16 do?
[07:21:05 CET] <gagandeep_> i can't find its definition in the documentation
[07:31:40 CET] <gagandeep_> anybody
[07:51:18 CET] <gagandeep_> nevermind, i have now understood gb, buffer, avpacket and their interconnection
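For reference, `bytestream2_get_be16()` lives in libavcodec/bytestream.h: it reads a big-endian 16-bit value from a `GetByteContext` and advances the read cursor, returning 0 if fewer than two bytes remain. A simplified, self-contained model of its behavior (the real version is generated by macros and uses `AV_RB16`):

```c
#include <stdint.h>

/* Simplified model of libavcodec's GetByteContext and
 * bytestream2_get_be16(): read a big-endian 16-bit value and advance
 * the cursor. The real implementation in libavcodec/bytestream.h also
 * bounds-checks against buffer_end and returns 0 on underflow, as
 * modeled here. */
typedef struct {
    const uint8_t *buffer, *buffer_end;
} GetByteContext;

static unsigned get_be16(GetByteContext *gb)
{
    if (gb->buffer_end - gb->buffer < 2)
        return 0;
    unsigned v = (gb->buffer[0] << 8) | gb->buffer[1];
    gb->buffer += 2;
    return v;
}
```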
[08:13:21 CET] <`md> ZhongLi: yes?
[11:42:53 CET] <peak3d> hi here, I'm currently doing some research on rockchip 3399 device / v4l2-m2m decoding of h.264 / vp8/9
[11:43:44 CET] <peak3d> The device driver is not capable of bitstreaming h.264 frames, instead it wants to be fed with sps / pps / slice parts
[11:44:33 CET] <peak3d> before I start investigating how this could be done in ffmpeg, I wanted to make sure that no one else is already working on it
[11:46:53 CET] <Compn> peak3d : ask on mailing list , not all devs are on irc
[11:46:58 CET] <jkqxz> V4L2 M2M is a mess, and I don't think the Rockchip driver is tested with it at all.  On the other hand, Rockchip do provide their own decoder library (MPP) and that does work properly.
[11:47:33 CET] <ldts> jkqxz: what do you mean by it being a mess?
[11:48:14 CET] <jkqxz> Every driver has different quirks.
[11:48:16 CET] <ldts> h264/vp8/vp9 should work as long as the corresponding kernel driver works
[11:48:21 CET] <ldts> ack
[11:48:25 CET] <ldts> fully agree
[11:49:58 CET] <peak3d> I do not agree with the implementation in mpp, thats the reason I try to find alternative solutions
[11:50:25 CET] <jkqxz> "do not agree"?
[11:50:28 CET] <peak3d> we see high CPU load because of e.g. improperly synced threads
[11:51:01 CET] <ldts> why not change the kernel driver to follow some sort of standard ie, accepting the bit stream (then you could use ffmpeg as is)
[11:51:35 CET] <ldts> not sure about pushing the extra complexity to ffmpeg is the best solution
[11:52:05 CET] <peak3d> From what I have read on the sunxi mailing list, bitstream parsing for complex codecs should not be part of the kernel driver
[11:53:09 CET] <peak3d> If you say mpp is the way to go with rk, I'm fine with that and will try to support mpp
[11:53:32 CET] <peak3d> But it is again something special / same mess as amcodec
[11:54:22 CET] <peak3d> For me the V4L2 approach makes some sense to bundle all these devices together
[11:55:01 CET] <peak3d> @ldts what is the standard in kernel? Bitstream parsing yes or no?
[11:55:10 CET] <ldts> um ok. maybe the rockchip ffmpeg "quirk" for v4l2_m2m can be done nicely
[11:55:41 CET] <ldts> the boards I have used do accept the bitstream
[11:56:46 CET] <peak3d> yes, most probably AML will go this way too
[11:56:57 CET] <peak3d> because they have all this already in kernel
[11:58:58 CET] <ldts> right. still if you can come up with a non-intrusive solution for your parsing then feel free to post. by non-intrusive I mean something that doesnt obscure the current implementation.
[12:01:20 CET] <ldts> out of curiosity, does gstreamer support your platform?
[12:01:39 CET] <peak3d> I'm not really ffmpeg code experienced, can you estimate how much work it would be to extract the codec parts?
[12:01:56 CET] <peak3d> yes, gstreamer support seems to be implemented
[12:02:14 CET] <peak3d> but not sure what way they go (mpp / others)
[12:02:55 CET] <peak3d> @ldts ffmpeg already provides all bitstream stuff, is this correct?
[12:03:50 CET] <ldts> @peak3d yes
[12:04:34 CET] <peak3d> ok, I'll play around with that, thx @ldts
[12:04:37 CET] <ldts> I dont know how to estimate without checking what the v4l2 driver consumes
[12:04:45 CET] <ldts> (the rockchip driver)
[12:05:45 CET] <ldts> do you know if gstreamer use mpp or do they implement the parsing?
[12:06:02 CET] <peak3d> I'll look first into gstreamer now
[12:11:08 CET] <wm4> would be nice if v4l could provide a hwaccel like API
[12:11:25 CET] <wm4> you know, like everything else that's proven to work well, like dxva2 and vaapi
[12:12:05 CET] <wm4> (so that would mean no full bistream, just slices)
[12:12:44 CET] <jkqxz> It's tuned to completely opaque whole-stream codecs.
[12:13:49 CET] <wm4> (how do they feed vp9 anyway, you need to frame the packets)
[12:17:31 CET] <jkqxz> I imagine that decoders will accept both frames and superframes.
[12:17:49 CET] <peak3d> @+wm4 this is (from my understanding) whats supported: https://github.com/peak3d/linux/blob/odroidn1-4.4.y/drivers/media/platform/rockchip-vpu/rk3399_vdec_hw.c#L27-L63
[12:19:27 CET] <JEEB> wm4: yes, I would have really liked ARM to have standardized on something that desktop also had. I mean, the overhead wouldn't be too big from just having a proper API :D
[12:19:54 CET] <JEEB> I mean, we have mediacodec on crappy embedded devices
[12:20:02 CET] <JEEB> if that isn't an abstraction I don't know what is
[12:21:18 CET] <wm4> yeah, slice decoders would be much simpler and thus cheaper, more reliable, etc.
[12:21:33 CET] <wm4> maybe vendors actually want full bit stream decoders because that's "faster"?
[12:23:05 CET] <jkqxz> That link above is implementing a slice decoder.
[12:24:29 CET] <wm4> huh?
[12:24:30 CET] <jkqxz> You can see there are v4l2_ctrl_h264_sps and similar structures coming from userspace.
[12:24:39 CET] <jkqxz> I can't find where those are defined, though.
[12:25:08 CET] <wm4> the file that was linked doesn't tell me much
[12:25:17 CET] <jkqxz> They aren't in the mainline kernel.
[12:25:27 CET] <jkqxz> Other files in the same directory.
[12:26:05 CET] <peak3d> @+wm4 https://github.com/peak3d/linux/blob/odroidn1-4.4.y/drivers/media/platform/rockchip-vpu/rockchip_vpu_dec.c#L86-L161
[12:26:25 CET] <wm4> oh
[12:26:31 CET] <jkqxz> So that's not going to work in the current v4l2-m2m driver at all.  It needs a hwaccel instead.
[12:26:50 CET] <wm4> maybe it's that mysterious stateless v4l API?
[12:26:53 CET] <jkqxz> I wonder whether any other vendors will implement this.
[12:28:56 CET] <jkqxz> There we go: <https://github.com/peak3d/linux/blob/8550019a27127e4ebf9770f74da712d2ae498459/include/uapi/linux/v4l2-controls.h#L1016>.
[12:30:25 CET] <peak3d> Regarding bitstream parsing in kernel: http://linux-sunxi.org/VE_Planning#No_parsing_in_kernel
[12:30:51 CET] <wm4> " CHROMIUM: v4l: Add H264 low-level decoder API compound controls. "
[12:32:14 CET] <wm4> peak3d: parsing in libv4l? gawd
[12:33:28 CET] <wm4> well maybe they'll accidentally arrive at the only good API design (essentially dxva2)
[12:33:34 CET] <JEEB> :D
[12:33:52 CET] <JEEB> do note that IIRC MS also had to go through a few iterations, but right now it looks like a pretty good end result
[12:34:11 CET] <wm4> yeah
[12:36:51 CET] <peak3d> @+wm4 I hope this is only an idea :-)
[12:37:31 CET] <nevcairiel> DXVA1 was horrible, but not because of the way you passed the bitstream, that basically didnt change, just the API in general
[12:38:11 CET] <JEEB> yea
[12:38:27 CET] <nevcairiel> not sure if there were any precursors to that before
[12:38:50 CET] <nevcairiel> also back in DXVA1 days you still had all those partial accelerators which only did MoComp or IDCT
[12:38:54 CET] <nevcairiel> they also discontinued those
[12:39:09 CET] <wm4> so like xvmc
[12:39:29 CET] <nevcairiel> technically you can still use those partial things through DXVA2
[12:39:33 CET] <nevcairiel> but noone does that
[12:39:50 CET] <nevcairiel> and for any modern codecs they also didnt specify that interface anymore
[12:43:12 CET] <nevcairiel> back when i originally started working on this stuff, in 2010 or so, people did occasionally ask for partial acceleration support, but thats a long time ago now
[12:43:20 CET] <nevcairiel> noone runs that hardware anymore =p
[12:44:18 CET] <peak3d> https://cs.chromium.org/chromium/src/media/gpu/v4l2/v4l2_slice_video_decode_accelerator.cc?sq=package:chromium <-- seems to be one implementation of it
[12:44:19 CET] <JEEB> yea
[12:45:22 CET] <peak3d> @jkqxz so hwaccel and NOT modifying the current decoder, yes?
[12:50:33 CET] <jkqxz> Yeah.  Possibly some of the frame output code could be shared with the current decoder, but everything on the input side is different.
[12:52:53 CET] <peak3d> ok
[13:31:23 CET] <durandal_1707> I wrote ultimate filter, you should worship me now!
[13:36:10 CET] <jkqxz> Dr. Meter will fix all your measurement-related ills?
[13:38:49 CET] <durandal_1707> yes, with it you can find how badly your audio is brickwalled; bad news is it can only diagnose it, not cure it
[13:56:28 CET] <cone-766> ffmpeg 03Stefan _ 07master:5ab0ecf2830f: avcodec/mediacodec_wrapper: fix false positives in swdec blacklist
[14:42:28 CET] <atomnuker> so this is what being sony means
[14:42:30 CET] <atomnuker> https://pars.ee/temp/atrac9_win.svg
[15:43:20 CET] <durandal_1707> atomnuker: whats that?
[15:43:49 CET] <atomnuker> one lobe of the window atrac9 uses
[15:44:11 CET] <atomnuker> not sure what they're aiming for, they seem to be boosting leakage sidelobes
[15:44:50 CET] <durandal_1707> no, thats for perfect audio reproduction...
[15:59:57 CET] <durandal_1707> atomnuker: don't complain, because atrac10 may appear
[16:00:14 CET] <atomnuker> nah, opus exists
[16:01:28 CET] <durandal_1707> noo, now they will rip opus into next atrac
[16:02:55 CET] <jdarnley> Steal the bitstream, change a few constants to make it incompatible, release as atrac10?
[16:02:57 CET] <atomnuker> I don't think even they can make opus any weirder than it is already
[16:03:42 CET] <atomnuker> there's barely any conventional bitstream elements (e.g. binary flags or integers) in opus
[16:22:50 CET] <jamrial> jkqxz: https://pastebin.com/raw/7c72z3RH
[16:23:21 CET] <jamrial> using memset to handle the array of fragment units seems to be really inefficient
[16:23:46 CET] <jamrial> can't you use something like lavu's tree.h, or a linked list?
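The trade-off jamrial is pointing at: inserting into the middle of an array-backed fragment means shifting every later unit with memmove, while a linked list inserts in O(1) once the predecessor is known. A minimal sketch of the linked-list alternative (names are illustrative, not CBS's actual structures):

```c
#include <stdlib.h>

/* Hypothetical unit node: insertion after a known predecessor is O(1),
 * versus O(n) memmove for an array-backed fragment. Passing NULL as
 * prev creates a standalone head node. */
typedef struct Unit {
    int type;
    struct Unit *next;
} Unit;

static Unit *insert_after(Unit *prev, int type)
{
    Unit *u = malloc(sizeof(*u));
    if (!u)
        return NULL;
    u->type = type;
    if (prev) {
        u->next = prev->next;
        prev->next = u;
    } else {
        u->next = NULL;
    }
    return u;
}
```

The cost moves to traversal: random access by index becomes O(n), which is why an array can still win when units are mostly appended rather than inserted.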
[17:45:40 CET] <nevcairiel> rcombs: ImageMagick supports HEIF now using libde265 for decoding, so you could peek there to see how they dealt with the untiling mess (libde265 has no such support, so it must be in ImageMagick)
[18:08:39 CET] <JEEB> well imagemagick isn't trying to be a decoder-level thing
[18:08:55 CET] <JEEB> so it isn't going to have the same issues as to where the hell one has to stitch the things together
[18:09:04 CET] <JEEB> but yea, checking it I guess makes sense?
[18:10:19 CET] <nevcairiel> shit like that doesnt really fit any sane abstracted design ever
[18:10:33 CET] <nevcairiel> because its like container-level information that needs to be applied at the very end of the chain
[18:11:49 CET] <JEEB> yes
[18:11:54 CET] <JEEB> so with FFmpeg it would be the API client
[18:12:00 CET] <atomnuker> doesn't heif contain multiple versions of the image?
[18:12:05 CET] <JEEB> and then you have the problem of how the flying numbfsck you export it
[18:12:26 CET] <nevcairiel> not only that, the tiles probably also need to be decoded separately
[18:12:29 CET] <nevcairiel> its a huge mess
[18:12:44 CET] <nevcairiel> the only sane way to fit that into ffmpeg would be doing it in the decoders
[18:12:51 CET] <nevcairiel> but thats terrible
[18:13:14 CET] <nevcairiel> (or possibly using a lot of custom logic and get_buffer2 callbacks)
[18:13:53 CET] <nevcairiel> that might actually work, you can allocate a big frame buffer and just decode the tiles into that, but still needs all the glue somewhere
[18:14:38 CET] <nevcairiel> why would you ever decide to tile such a format, what were they on
[18:15:33 CET] <JEEB> probably something like being able to decode and scale into smaller size the image in parts which would possibly let one use the hwdec for the job even if the full image is hell huge?
[18:15:40 CET] <JEEB> not that I know what the Nokia people smoked
[18:15:48 CET] <JEEB> (must've been good stuff)
[18:17:54 CET] <nevcairiel> i suppose that reason makes some sort of sense, an existing hevc decoder may have resolution limits and you avoid this
[18:18:19 CET] <atomnuker> I don't get the issue, couldn't you do this with a parser?
[18:18:33 CET] <atomnuker> each tile does contain where it's meant to go, right?
[18:18:40 CET] <JEEB> are bitstreams somehow stitchable?
[18:18:42 CET] <nevcairiel> it better
[18:18:55 CET] <JEEB> or do you mean you could output the parts and then the info on where to re-assemble?
[18:19:00 CET] <wm4> nevcairiel: what about alignment? I guess it works if the tiles are not cropped and the decoder doesn't write out of bounds
[18:19:28 CET] <nevcairiel> i dont know any in-depth details about their tiles, if its unaligned tiles that might be an issue
[18:19:38 CET] <nevcairiel> can always resort to manual combination
[18:19:59 CET] <nevcairiel> writing a small standalone app that does this is probably not even that hard
[18:20:03 CET] <wm4> does HEIF have stuff like progressive decoding? (you know, like jpg and png can)
[18:20:18 CET] <nevcairiel> but shoe-horning it into something like ffmpeg or avformat/avcodec as-is, thats the mess
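The "big frame buffer" idea nevcairiel describes reduces to a per-tile blit: decode each tile independently and copy it into a full-size canvas at the grid position given by the container metadata. A hedged sketch for one grayscale plane, assuming uncropped tiles that divide the canvas evenly (names are illustrative, not a real HEIF demuxer API):

```c
#include <stdint.h>
#include <string.h>

/* Copy one independently decoded tile (tw x th) into a full-size
 * canvas at grid position (tile_col, tile_row). In real HEIF the
 * placement comes from container-level metadata; alignment/cropping
 * of edge tiles is ignored here for simplicity. */
static void blit_tile(uint8_t *canvas, int canvas_w,
                      const uint8_t *tile, int tw, int th,
                      int tile_col, int tile_row)
{
    int dst_x = tile_col * tw, dst_y = tile_row * th;
    for (int y = 0; y < th; y++)
        memcpy(canvas + (dst_y + y) * canvas_w + dst_x,
               tile + y * tw, tw);
}
```

This is essentially what a custom get_buffer2 callback would let a decoder do in place (write each tile straight into the shared canvas), at the price of the glue logic living outside avcodec.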
[18:22:57 CET] <atomnuker> why? can't we have a bsf to make all the tiles like regular hevc tiles?
[18:23:13 CET] <nevcairiel> seems doubtful that would work
[18:24:01 CET] <nevcairiel> wm4: it has an embedded thumbnail, does that count
[18:30:59 CET] <wm4> hm no
[18:31:27 CET] <nevcairiel> not sure if it has anything else, or relies on the codec to have something
[18:32:04 CET] <nevcairiel> progressive is unfortunately a bit overloaded to make searching hard
[18:35:39 CET] <atomnuker> chances are there'll be a similar thing for av1's still picture spec
[18:35:51 CET] <atomnuker> since pretty much the same people are doing it
[18:37:14 CET] <nevcairiel> good that i dont really care about supporting image formats like that
[18:42:54 CET] <jamrial> atomnuker: i'd expect webp to be updated to support the av1 bitstream instead
[18:43:18 CET] <nevcairiel> who knows, they might steal stupid features like that
[18:47:12 CET] <atomnuker> welp, as far as I know google are planning on adding real support for 4 planes rather than the webp way
[18:55:05 CET] <jkqxz> jamrial:  Did that stream have a large number of slices?
[18:55:44 CET] <jamrial> jkqxz: i'm not actually sure. it's a 4k 250mbps sample
[18:55:48 CET] <jamrial> jellyfish-250-mbps-4k-uhd-h264
[18:57:10 CET] <jkqxz> Hmm.  Was that profile showing only memmove() in the unit stuff, or could it be memmove() elsewhere?
[18:58:13 CET] <jkqxz> Because the unit structures really aren't very large.
[18:58:20 CET] <jamrial> at least in ffmpeg, the only memmove i could find that would be triggered was in cbs
[18:58:27 CET] <jamrial> the sample was in matroska, so i remuxed into raw annexb to run perf since matroskadec also has a few memmove calls to buffer packets
[19:35:45 CET] <jamrial> jkqxz: nevermind, seems it's not the memmove from cbs
[19:35:51 CET] <jamrial> i can't get perf to show me a callchain, so i can't say what's calling it
[19:42:30 CET] <wm4> wasn't it -g or so
[19:43:22 CET] <jamrial> let me try that
[19:48:44 CET] <jamrial> didn't help
[19:49:15 CET] <wm4> probably have to pass it both for record and report
[19:49:47 CET] <jamrial> i did, the result wasn't useful
[19:50:26 CET] <jamrial> does realloc call memmove internally in libc?
[19:51:19 CET] <wm4> hm I don't know when it'd make sense to call memmove instead of memcpy in that situation...
[20:05:14 CET] <atomnuker> jkqxz: is there no opencl->vaapi interop?
[20:09:57 CET] <jkqxz> On Intel it's not doable in any way that would actually be useful.
[20:10:26 CET] <jkqxz> Primarily because of the single-object constraint.
[20:11:14 CET] <jkqxz> It might work sensibly on AMD, but I don't have a usable setup for that.
[20:12:15 CET] <atomnuker> if there was, would it be theoretically possible to do e.g. -hwaccel vaapi -i <stuff> -vf hwmap=derive_device=opencl,unsharp_opencl,hwmap=derive_device=vaapi?
[20:12:50 CET] <atomnuker> or is only 1 mapping doable currently?
[20:13:02 CET] <atomnuker> (I know you can't specify filter_device twice)
[20:13:04 CET] <jkqxz> The original version of the OpenCL hwcontext from August 2016 (see <https://lists.libav.org/pipermail/libav-devel/2016-August/078945.html>) did allow it, because it allocated every frame as a single buffer and then made subbuffers to which it applied image2d_from_buffer to make images.
[20:13:33 CET] <jkqxz> But that was kindof painful for use with anything else.
[20:13:57 CET] <jkqxz> You can do the derive both ways with reverse mapping in the second hwmap.
[20:14:53 CET] <jkqxz> -vf 'hwmap=derive_device=opencl,unsharp_opencl,hwmap=derive_device=vaapi:reverse=1'
[20:15:21 CET] <wm4> intuitive
[20:15:39 CET] <jkqxz> Very.  Someone should write some documentation for this stuff.
[20:15:50 CET] <atomnuker> the reverse param basically makes the opencl hwcontext do the exporting instead of vaapi doing importing, right?
[20:16:12 CET] <jkqxz> s/ex/im/, then yes.
[20:17:05 CET] <atomnuker> well, whatever, as long as its not a lavfi limitation, since with vulkan I think I can export an fd and import that with vaapi
[20:17:44 CET] <jkqxz> A single object containing both planes in the way the Intel VAAPI driver expects?
[20:18:26 CET] <atomnuker> yep
[20:18:33 CET] <jkqxz> That's good.
[20:18:50 CET] <jkqxz> The reverse mapping is pretty horrible.
[20:19:01 CET] <atomnuker> I didn't cheat and split every plane into separate vkimages, I use the ycbcr images
[20:19:27 CET] <wm4> so, can you sample ycbcr from ycbcr images?
[20:20:17 CET] <jkqxz> And what happens when you write to a single pixel on them?
[20:21:55 CET] <atomnuker> wm4: yep
[20:21:58 CET] <atomnuker> oh, nope
[20:22:15 CET] <atomnuker> well, yeah
[20:22:31 CET] <atomnuker> you create an imageview out of every plane separately
[20:22:56 CET] <atomnuker> and you process them as if they're VK_FORMAT_R8/16
[20:23:35 CET] <atomnuker> jkqxz: you can't write to them unless you treat them separately
[20:25:15 CET] <jkqxz> Right, good.
[20:25:54 CET] <atomnuker> if you sample from them you get either 444 (for an identity sampler) or rgb (for a converting sampler)
[20:26:46 CET] <atomnuker> though I tried to do the latter last week and it didn't work, I think the drivers haven't been tested
[20:27:13 CET] <wm4> atomnuker: oh that's at least helpful, though not sure why it exists as abstraction then
[20:27:16 CET] <jkqxz> How does the filtering work for the chroma sample location?  That seems like a fun thing in the driver.
[20:27:25 CET] <wm4> reminds me of the weird d3d11 nv12 image format
[20:30:09 CET] <atomnuker> jkqxz: you specify them
[20:30:46 CET] <atomnuker> https://www.khronos.org/registry/vulkan/specs/1.0-extensions/html/vkspec.html#VkSamplerYcbcrConversionCreateInfoKHR
[20:33:34 CET] <cone-766> ffmpeg 03Aman Gupta 07master:23c91abe4f6a: avcodec/aacdec: log configuration change details
[20:37:16 CET] <atomnuker> as for how the driver deals with them? it writes shaders for you and as far as I understand runs them before the textures are accessed
[20:38:12 CET] <cone-766> ffmpeg 03Matt Wolenetz 07master:b59b59944693: lavc/vorbisdec: Allow avcodec_open2 to call .close
[20:38:13 CET] <cone-766> ffmpeg 03Michael Niedermayer 07master:5735a390a69d: avformat/internal: Document the freeing behavior of ff_alloc_extradata()
[20:38:14 CET] <cone-766> ffmpeg 03Michael Niedermayer 07master:c87bf5b6d0cd: avfilter/vf_*_vaapi: Add missing AV_OPT_FLAG_FILTERING_PARAM
[20:38:15 CET] <cone-766> ffmpeg 03Michael Niedermayer 07master:367929bed9de: avformat/mov: Fix integer overflow in mov_get_stsc_samples()
[20:38:16 CET] <cone-766> ffmpeg 03Michael Niedermayer 07master:3934aa495d78: libavformat/oggparsevorbis: Fix memleak on multiple headers
[20:38:17 CET] <cone-766> ffmpeg 03Michael Niedermayer 07master:da069e9c68ec: avformat/oggdec: Fix metadata memleak on multiple headers
[20:38:18 CET] <cone-766> ffmpeg 03Michael Niedermayer 07master:1b1362e408cd: avformat/utils: Fix integer overflow of fps_first/last_dts
[21:39:33 CET] <cone-766> ffmpeg 03Paul B Mahol 07master:ea0963181a2c: avfilter/af_alimiter: check if buffer_size is valid
[22:47:18 CET] <cone-766> ffmpeg 03Aurelien Jacobs 07master:0b4ad86959cd: crc: add AV_CRC_8_SBC as a 8 bits CRC with polynomial 0x1D
[22:47:19 CET] <cone-766> ffmpeg 03Aurelien Jacobs 07master:443988719802: sbc: implement SBC decoder (low-complexity subband codec)
[22:47:20 CET] <cone-766> ffmpeg 03Aurelien Jacobs 07master:2505ebc632d4: sbc: add parser for SBC
[22:47:21 CET] <cone-766> ffmpeg 03Aurelien Jacobs 07master:2e08de08159d: sbc: add raw demuxer for SBC
[22:47:22 CET] <cone-766> ffmpeg 03Aurelien Jacobs 07master:ff4600d95471: sbc: implement SBC encoder (low-complexity subband codec)
[22:47:23 CET] <cone-766> ffmpeg 03Aurelien Jacobs 07master:88508a87a557: sbc: add raw muxer for SBC
[22:47:24 CET] <cone-766> ffmpeg 03Aurelien Jacobs 07master:f1e490b1aded: sbcenc: add MMX optimizations
[22:47:25 CET] <cone-766> ffmpeg 03Aurelien Jacobs 07master:f677718bc87a: sbcenc: add armv6 and neon asm optimizations
[22:47:26 CET] <cone-766> ffmpeg 03Aurelien Jacobs 07master:840f6eb77aed: Changelog: list the new SBC codec
[00:00:00 CET] --- Thu Mar  8 2018


More information about the Ffmpeg-devel-irc mailing list