[Ffmpeg-devel-irc] ffmpeg-devel.log.20120311

burek burek021 at gmail.com
Mon Mar 12 02:05:04 CET 2012


[00:51] <CIA-17> ffmpeg: 03Carl Eugen Hoyos 07master * r6cb89df845 10ffmpeg/configure: 
[00:51] <CIA-17> ffmpeg: Add missing requirements to libavdevice.pc if lavfi is enabled.
[00:51] <CIA-17> ffmpeg: Fixes ticket #1050.
[01:27] <CIA-17> ffmpeg: 03Ronald S. Bultje 07master * ra928ed3751 10ffmpeg/libavcodec/x86/vp8dsp.asm: vp8: convert mbedge loopfilter x86 assembly to use named arguments.
[01:27] <CIA-17> ffmpeg: 03Ronald S. Bultje 07master * rbee330e300 10ffmpeg/libavcodec/x86/vp8dsp.asm: vp8: convert inner loopfilter x86 assembly to use named arguments.
[01:27] <CIA-17> ffmpeg: 03Ronald S. Bultje 07master * r5518827816 10ffmpeg/libavcodec/xxan.c: 
[01:27] <CIA-17> ffmpeg: xxan: convert to bytestream2 API.
[01:27] <CIA-17> ffmpeg: Protects against overreads.
[01:27] <CIA-17> ffmpeg: Found-by: Mateusz "j00ru" Jurczyk and Gynvael Coldwind
[01:27] <CIA-17> ffmpeg: CC: libav-stable at libav.org
[01:27] <CIA-17> ffmpeg: 03Ronald S. Bultje 07master * rf77bfa8376 10ffmpeg/libavcodec/xxan.c: 
[01:27] <CIA-17> ffmpeg: xxan: protect against chroma LUT overreads.
[01:27] <CIA-17> ffmpeg: Found-by: Mateusz "j00ru" Jurczyk and Gynvael Coldwind
[01:27] <CIA-17> ffmpeg: CC: libav-stable at libav.org
[01:27] <CIA-17> ffmpeg: 03Ronald S. Bultje 07master * r71af42bd96 10ffmpeg/libavcodec/xxan.c: 
[01:27] <CIA-17> ffmpeg: xxan: reindent xan_unpack_luma().
[01:27] <CIA-17> ffmpeg: It used 3-space indent instead of 4-space indent.
[01:27] <CIA-17> ffmpeg: 03Ronald S. Bultje 07master * r442c3a8cb1 10ffmpeg/libavcodec/ (cook.c cookdata.h): 
[01:27] <CIA-17> ffmpeg: cook: expand dither_tab[], and make sure indexes into it don't overflow.
[01:28] <CIA-17> ffmpeg:  cook: expand dither_tab[], and make sure indexes into it don't overflow.
[01:28] <CIA-17> ffmpeg:  xxan: reindent xan_unpack_luma().
[01:28] <CIA-17> ffmpeg:  xxan: protect against chroma LUT overreads.
[01:28] <CIA-17> ffmpeg:  xxan: convert to bytestream2 API.
[01:28] <CIA-17> ffmpeg: 03Ronald S. Bultje 07master * rf1279e286b 10ffmpeg/libavcodec/xxan.c: 
[01:28] <CIA-17> ffmpeg: xxan: don't read before start of buffer in av_memcpy_backptr().
[01:28] <CIA-17> ffmpeg: Found-by: Mateusz "j00ru" Jurczyk and Gynvael Coldwind
[01:28] <CIA-17> ffmpeg: CC: libav-stable at libav.org
[02:36] <michaelni> darn topic was too long :/
[02:37] <kierank> surely ffmpeg should get accepted in gsoc first...
[02:37] <kierank> =p
[02:37] <michaelni> well, if the page is empty then we probably wont be accepted ...
[02:37] <michaelni> or would you if you were google ? ;)
[02:40] <michaelni> btw, kierank as you are already here ...
[02:40] <michaelni> do you want to mentor something .... assuming ffmpeg is accepted ?
[02:41] <kierank> don't think i have time this summer
[02:41] <michaelni> :(
[03:22] <iive> michaelni: why ffmpeg would not be accepted in gsoc?
[03:24] <michaelni> iive, because the project page is shit and has just 2 projects on it with mentors
[03:24] <michaelni> btw, do you want to mentor something ?
[03:26] <michaelni> shit is maybe not the right word, the page isnt shit but too few mentors have added their projects
[03:27] <Compn> yes its shitty :)
[03:27] <Compn> not many people working on it
[03:30] <iive> i have nothing in mind that I could mentor. And I probably won't be able to navigate even in my own code with all the "cleanups".
[03:31] <Compn> all the proposals i think of are way too complicated for students
[03:31] <Compn> and the libav ones are too
[03:31] <Compn> theres no way you are going to get a wavelet decoder (h265) out of a student
[03:32] <Compn> jpeg2000 failed a few times, multiple students
[03:32] <Compn> multiple devels too
[03:32] <Compn> still incomplete
[03:32] <Compn> but works for most stuff :)
[03:32] Action: Daemon404 sees dts-hd appeared yet again
[03:32] <Compn> is it supposed to be there Daemon404 ?
[03:32] <Daemon404> its been there many times
[03:32] <Daemon404> not once has it been done
[03:32] <Compn> oh
[03:32] <Compn> well there is some spec now
[03:33] <Compn> isnt there ?
[03:33] <Compn> thats the tipping point ? 
[03:33] <Daemon404> lol maybe
[03:33] <Daemon404> dts isnt known for their nice and lovely specs
[03:33] <Compn> proposals without specs fail because reverse engineering takes a long time
[03:33] <Compn> true
[03:33] <iive> i also got the impression that it got committed. 
[03:33] <Daemon404> i think kostya said their specs outright lie
[03:37] <iive> n8 ppl
[03:38] <Compn> all specs lie a little bit :)
[03:47] <michaelni> Daemon404, do you want to mentor something ?
[03:47] <Daemon404> lol considering im a student myself
[03:47] <Daemon404> i doubt im qualified
[03:47] <Daemon404> :P
[03:47] <michaelni> sure you are
[03:47] <michaelni> you could also be a student of course
[03:47] <michaelni> you get paid more as a student
[03:47] <Daemon404> im doing my internship all summer
[03:47] <Daemon404> i dont think GSoC fits with that
[03:48] <Daemon404> mind you, i aim to get paid to work on ffmpeg.
[03:48] <michaelni> get paid twice, its better :)
[03:48] <michaelni> just skip sleeping
[03:48] <Daemon404> im certain there will be some sort of rules against it
[03:48] <Daemon404> either uni or employer
[03:48] <Daemon404> lol
[03:49] <Daemon404> i remember someone else having this problem with gsoc...
[03:51] <Compn> Daemon404 : anyone can be mentor, as long as he/she knows the code or can mentor the code of course 
[03:52] <Compn> also you can be a backup mentor, if you can only give a small amount of time
[03:52] <Compn> just in case
[03:52] <Compn> its good to have backup mentors
[03:52] <Daemon404> i can help out
[03:52] <Daemon404> full mentor probably not
[03:58] Action: Daemon404 has high hopes that j-b's emails will come through for him too :D
[04:02] <Daemon404> hmmm
[04:03] <Daemon404> [rm @ 01419460] Unsupported stream type 00000185
[04:03] <Daemon404> it sees a cook track, rv40 track
[04:03] <Daemon404> and that
[04:03] <sj_underwater> daemon404: http://ffmpeg.org/trac/ffmpeg/wiki/MacOSXCompilationGuide
[04:04] <Daemon404> sj_underwater, michaelni's the man w/ the plan
[04:04] <Daemon404> not i
[04:04] <sj_underwater> k
[04:05] <sj_underwater> just a skeleton now, planning to fill it out soon
[04:11] <funman> pkg-config & glibc ? Oo
[04:11] <sj_underwater> ya, neither comes w/ OS X
[04:11] <funman> i mean why do you need glibc
[04:11] <sj_underwater> glibc for pkg-config
[04:12] <sj_underwater> easy 2 forget on linux :D
[04:12] <funman> i'm pretty sure it's glib, and not glibc
[04:12] <sj_underwater> sorry, typo
[04:12] <sj_underwater> link is to glib tho
[04:20] <funman> k
[04:21] <funman> version 0.23 works fine and doesnt depend on glib btw
[04:21] <funman> i think it has a static copy of it
[04:21] <sj_underwater> i could specify 0.23, but the newest is 0.26 i think
[05:18] <sj_underwater> ok, i finished the glib section
[05:27] <Daemon404> why not just use macports?
[05:27] <sj_underwater> im not saying anyone has to follow these directions, but isnt information a good idea?
[05:27] <sj_underwater> and i hate macports
[05:28] <Daemon404> im not so sure a guide on how to compile dependencies belongs there...
[05:28] <sj_underwater> glib specifically?
[05:28] <sj_underwater> or the codecs
[05:29] <Daemon404> list em, but i dont think you need to explain how to configure and install deps
[05:29] <sj_underwater> except that the ones mentioned have specific issues
[05:29] <Daemon404> (and configure && make install is a good way to litter your system with cruft)
[05:29] <sj_underwater> otherwise i'd say "compile normally"
[05:30] <sj_underwater> true, i chose the make sections myself
[05:30] <sj_underwater> esp docs
[05:37] <sj_underwater> another question: is it possible to make the freetype test more flexible?
[05:37] <sj_underwater> it only checks with pkg-config and there's no .pc file on macs, tho it's installed
[05:54] <sj_underwater> nevermind, found a solution
[09:03] <CIA-17> ffmpeg: 03Matthieu Bouron 07master * rad029c24a6 10ffmpeg/ (doc/general.texi libavformat/mxfdec.c): 
[09:03] <CIA-17> ffmpeg: mxfdec: add timecode to metadata
[09:03] <CIA-17> ffmpeg: Signed-off-by: Michael Niedermayer <michaelni at gmx.at>
[10:44] <pasteeater> Daemon404: what do you suggest other than configure, make, install? skip the install?
[10:48] <pasteeater> and dependency compile instructions are good so they don't come to #ffmpeg asking why their non --enable-static x264 isn't working with ffmpeg.
[16:41] <Compn> j-b : i mailed maxim. work on g2m4 continues slowly... :)
[18:26] <CIA-17> ffmpeg: 03Thilo Borgmann 07master * rdaeffccd98 10ffmpeg/libavcodec/alsdec.c: 
[18:26] <CIA-17> ffmpeg: alsdec: pretty print for another log message
[18:26] <CIA-17> ffmpeg: Signed-off-by: Michael Niedermayer <michaelni at gmx.at>
[18:26] <CIA-17> ffmpeg: 03Thilo Borgmann 07master * r599881b028 10ffmpeg/libavcodec/alsdec.c: 
[18:26] <CIA-17> ffmpeg: alsdec: Fix out of ltp_gain_values read.
[18:26] <CIA-17> ffmpeg: Found-by: Mateusz "j00ru" Jurczyk and Gynvael Coldwind
[18:26] <CIA-17> ffmpeg: Signed-off-by: Michael Niedermayer <michaelni at gmx.at>
[18:26] <CIA-17> ffmpeg: 03Joseph Artsimovich 07master * r84b9b4aa18 10ffmpeg/libavformat/ (mxf.h mxfdec.c): 
[18:26] <CIA-17> ffmpeg: Fix frame height vs field height confusion in MXF decoding.
[18:26] <CIA-17> ffmpeg: Reviewed-by: Tomas Härdin <tomas.hardin at codemill.se>
[18:26] <CIA-17> ffmpeg: Reviewed-by: Baptiste Coudurier <baptiste.coudurier at gmail.com>
[18:26] <CIA-17> ffmpeg: Signed-off-by: Michael Niedermayer <michaelni at gmx.at>
[18:26] <CIA-17> ffmpeg: 03Baptiste Coudurier 07master * rf49cb8e669 10ffmpeg/libavfilter/vf_crop.c: vf_crop: keepaspect support
[18:26] <CIA-17> ffmpeg: 03Paul B Mahol 07master * r4ed0d182e2 10ffmpeg/tests/ (fate/demux.mak ref/fate/cdxl-demux): 
[18:27] <CIA-17> ffmpeg: Signed-off-by: Michael Niedermayer <michaelni at gmx.at>
[18:27] <CIA-17> ffmpeg: 03Stefano Sabatini 07master * r4272dc3ec5 10ffmpeg/doc/filters.texi: 
[18:27] <CIA-17> ffmpeg: doc: add vf_crop keepaspect documentation
[18:27] <CIA-17> ffmpeg: Signed-off-by: Michael Niedermayer <michaelni at gmx.at>
[21:03] <kz1> opencl ? no need for it or just in the too hard basket?
[21:06] <Compn> how many pages is the spec? :P
[21:06] <Compn> ehe
[21:07] <kz1> just that my fusion chip has two cpus and 160 gpus but ffmpeg only uses the 2 cpus
[21:12] <kz1> Utilising all those gpus could make a very big difference for some operations which could make realtime possible...
[21:12] <Compn> opencl is a good idea.
[21:12] <Daemon404> opencl has been on gsoc forever
[21:12] <Daemon404> ive never seen it done...
[21:13] <kz1> so what's the hold up then? just a time vs effort vs priorities vs money issue or something technical
[21:15] <durandal_1707> only one word: motivation
[21:15] <kz1> :-)
[21:15] <beastd> probably dedication of one developer/contributor who really wants to get it done. if that is available  technical issues may also pop up :)
[21:16] <kz1> what motivates people round here? Is it the root of all evil or something more altruistic?
[21:18] <Daemon404> well
[21:19] <Daemon404> for me its a hobby
[21:19] <Daemon404> (for now... a man can dream!)
[21:19] <kz1> I think most of us are in that boat at the moment.
[21:19] <Compn> maybe its quite hard to parallelize stuff using opencl
[21:20] <Daemon404> also
[21:20] <Daemon404> anything not slice based isnt super-parallelizable
[21:20] <Compn> there is talk of CUDA too
[21:21] <Daemon404> personally
[21:21] <Daemon404> i couldnt care less about opencl or cuda for ffmpeg
[21:21] <Daemon404> but thats me
[21:21] <Daemon404> :P
[21:21] <kz1> Yeah, I saw some mentions of the slice issue in an old forum post from 2009. 
[21:21] <kz1> I'm working on a project which attempts to take things to the realtime level so it makes a big difference for me...
[21:22] <Compn> everything is from 2009 basically
[21:22] <Compn> it looks like ati dropped opencl and then ran for the hills :P
[21:22] <Compn> no code to help oss-world
[21:22] <kz1> So what's the general consensus for CUDA vs opencl?
[21:23] <kz1> I think AMD is coming round again with their stated direction for HSA
[21:23] <JEEB> very similar, but CUDA might have some specific stuff and closer integration/whatever. you only have one alternative anyways if you want to support both major hardware makers
[21:23] <Compn> the general consensus is 'patches welcome'
[21:24] <Compn> developers are a lazy lot
[21:24] <Compn> especially when they are paid to do other things
[21:24] <Compn> ;)
[21:24] <kz1> I'm also looking at this from a best practice POV
[21:24] <Daemon404> and opencl isnt very interesting
[21:24] <Daemon404> to many
[21:24] <Daemon404> lots of work
[21:24] <Daemon404> little benefit
[21:24] <kz1> is it preferable to have CUDA over opencl?
[21:24] <Daemon404> s/opencl/gpu stuff/
[21:24] <kierank> to do what
[21:24] <Compn> kz1 : you might want to ask #x264 people
[21:24] <Compn> they've looked at cuda / opencl i think
[21:25] <Daemon404> theres someone working on it, Compn 
[21:25] <Daemon404> as in working patches
[21:25] <Compn> well there you go ;)
[21:25] <JEEB> but the thing is
[21:25] <JEEB> IIRC it basically has a company working on it
[21:25] <JEEB> so tl;dr
[21:25] <Daemon404> this reminds me
[21:25] <JEEB> you really need time for it, and dedication of one sort or another
[21:26] <Daemon404> i read more into that avxsynth
[21:26] <Daemon404> i am convinced that the people funding it
[21:26] <Daemon404> and working on it
[21:26] <Daemon404> are fucking retarded
[21:26] <kz1> That's the kind of feedback I want to avoid.
[21:26] <kz1> :-)
[21:27] <Daemon404> kz1, that was unrelated to you btw
[21:27] <Daemon404> just a random /rant
[21:27] <Compn> most of the hardware accel stuff is contributed by companies
[21:27] <kz1> I mean for my contributions when they happen, not personally
[21:27] <Compn> like vdpau was from nvidia 
[21:28] <Compn> they swooped in with support forum and patches to ffmpeg and mplayer 
[21:28] <Daemon404> kz1, you wont get much of that if you contribute
[21:28] <Daemon404> if any at all
[21:28] <Daemon404> avxsynth is... special.... and unrelated to ffmpeg
[21:28] <kz1> :-)
[21:29] <kz1> So you think that it would be best coming directly from AMD or nVidia in the end?
[21:29] Action: Daemon404 is not in any place to know
[21:29] <kz1> so does the x264 stuff get picked up by ffmpeg or is it isolated to that codec?
[21:30] <JEEB> it'll definitely be relatively libx264-specific code
[21:32] <JEEB> anyways, depending on what exactly you want to do on the GPU you'll just have to begin with R&D of if it even is feasible to do that specific task on the GPU
[21:32] <kz1> so what kind of functionality would benefit the most from accessing the GPU's?
[21:33] <JEEB> something that can be multithreaded like hell
[21:33] <kz1> I'm also trying to put together a brief for presenting to AMD directly.
[21:33] <JEEB> float mathematics are often faster than integer
[21:33] <kz1> And I assume ffmpeg makes pretty extensive use of floats?
[21:34] <Daemon404> only where absolutely necessary
[21:34] <JEEB> and something that can be designed in the way where the inherent RAM<->VRAM transfer lag does not end up making it less useful
[21:34] <kz1> One issue that was raised on the forums was that it was pointless because of memory management. But with the fusion chips that problem is solved.
[21:35] <kz1> because they are both on the same die and use the system memory
[21:35] <kz1> not so useful for pci cards
[21:39] <kz1> so in that regard do you have any suggestions for parts of ffmpeg which would get a big boost from being able to access the GPU's on the fusion chipsets?
[21:39] <kz1> Forget about pci-e cards for now.
[21:43] <kz1> I suppose that is the major difference between 2009 and today. Back then the fusion chips hadn't been released.
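To make the criteria above concrete (work that splits across thousands of threads, cheap float math, and a RAM<->VRAM copy that has to be amortized), here is a minimal OpenCL sketch of a per-pixel gain filter on a 1080p luma plane. It is purely illustrative and not FFmpeg code: the kernel and every name in it are made up, and error handling is omitted. A filter like this launches roughly two million independent work-items, one per pixel, which is exactly the shape of job a GPU handles well; anything with serial dependencies between blocks (entropy decoding, non-slice-based filtering) does not decompose this way.

    /* Hedged sketch: a per-pixel gain filter in OpenCL, the kind of
     * "embarrassingly parallel" job that maps well onto a GPU.
     * Illustration only -- not FFmpeg code; error handling omitted. */
    #include <stdlib.h>
    #include <string.h>
    #ifdef __APPLE__
    #include <OpenCL/opencl.h>
    #else
    #include <CL/cl.h>
    #endif

    static const char *src_cl =
        "__kernel void gain8(__global uchar *dst, __global const uchar *src,\n"
        "                    int w, int h, float gain) {\n"
        "    int x = get_global_id(0), y = get_global_id(1);\n"
        "    if (x < w && y < h) {\n"
        "        float v = src[y * w + x] * gain; /* float math is cheap here */\n"
        "        dst[y * w + x] = convert_uchar_sat(v);\n"
        "    }\n"
        "}\n";

    int main(void)
    {
        const int w = 1920, h = 1080;       /* one 1080p luma plane */
        size_t n = (size_t)w * h;
        unsigned char *in  = malloc(n);
        unsigned char *out = malloc(n);
        memset(in, 128, n);                 /* stand-in for a decoded frame */

        cl_int err;
        cl_platform_id plat;
        cl_device_id dev;
        clGetPlatformIDs(1, &plat, NULL);
        clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
        cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, &err);
        cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, &err);

        cl_program prog = clCreateProgramWithSource(ctx, 1, &src_cl, NULL, &err);
        clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
        cl_kernel k = clCreateKernel(prog, "gain8", &err);

        /* RAM -> VRAM: this copy (and the read back below) is the transfer
         * lag that has to be amortized before the GPU run is a win at all. */
        cl_mem d_in  = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                      n, in, &err);
        cl_mem d_out = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, n, NULL, &err);

        float gain = 1.2f;
        clSetKernelArg(k, 0, sizeof(d_out), &d_out);
        clSetKernelArg(k, 1, sizeof(d_in),  &d_in);
        clSetKernelArg(k, 2, sizeof(int),   &w);
        clSetKernelArg(k, 3, sizeof(int),   &h);
        clSetKernelArg(k, 4, sizeof(float), &gain);

        /* ~2 million independent work-items, one per pixel */
        size_t global[2] = { (size_t)w, (size_t)h };
        clEnqueueNDRangeKernel(q, k, 2, NULL, global, NULL, 0, NULL, NULL);

        /* VRAM -> RAM */
        clEnqueueReadBuffer(q, d_out, CL_TRUE, 0, n, out, 0, NULL, NULL);

        clReleaseMemObject(d_in);
        clReleaseMemObject(d_out);
        clReleaseKernel(k);
        clReleaseProgram(prog);
        clReleaseCommandQueue(q);
        clReleaseContext(ctx);
        free(in);
        free(out);
        return 0;
    }

Note how much host boilerplate surrounds a three-line kernel, and that the two buffer transfers are the lag mentioned above; on a Fusion-style APU sharing system memory that copy cost largely disappears, which is the point being made about the 2009-era objections.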
[21:44] <kz1> For example transcoding. Would that benefit from massive parallelization?
[21:45] <kierank> no
[21:45] <kz1> My gut says it would but I'm not sure on the technical implementation
[21:45] <kierank> we already have gpu decoding
[21:45] <kierank> amd have already sponsored gpu encoding
[21:45] <kierank> to port x264's lookahead thread on the gpu
[21:45] <kz1> ok. 
[21:46] <kierank> but this is a very very small part of encoding
[21:47] <kz1> so is it a case of parallelization not being useful, or that ffmpeg is designed around the principle that it is not required?
[21:50] <cbsrobot> kz1: maybe take a look at http://cuj2k.sourceforge.net/index.html for what step they use the gpu for encoding
[21:51] <kz1> :cbsrobot  thanks
[21:53] <kz1> They are getting very fast results
[21:54] <cbsrobot> but as I understand it they do a hybrid threading/gpu encoding
[21:55] <kz1> they claim to be *one* of the fastest jpeg2000 encoders
[21:55] <kz1> does that mean there is another way to do it too?
[21:56] <cbsrobot> kakadu also claims to be one of the fastest
[21:56] <cbsrobot> but it's closed source
[21:56] <kz1> ok, Well I'm only interested in opensource :-)
[21:57] <kz1> so, IIUC there is gpu decoding and h264 gpu encoding already integrated?
[21:59] <cbsrobot> not sure h264 gpu encoding is already integrated
[22:02] <kz1> So, are there any other areas where the GPU's will be useful or is it pretty much a done deal with GPU decoding/encoding?
[22:08] <Daemon404> md5 cracking
[22:08] <kz1> :-)
[22:12] <kz1> Found a couple more threads. The consensus seems to be that the effort required does not provide enough ROI.
[22:13] <kz1> Is that because the end result is not fast enough or just because it requires so much effort to make it happen that it requires someone to sponsor it to make it worth anyone's while? 
[22:13] <kz1> cos it looks like it will have to be replicated across multiple platforms.
[22:14] <Daemon404> also that gpu stuff just isnt that applicable
[22:14] <kz1> Which really requires salary to make it worth anyone's time
[22:14] <Daemon404> to many decoding/encoding tasks
[22:15] <cbsrobot> kz1: see http://www.imagineparallel.com/
[22:15] <cbsrobot> the link at the top
[22:16] <kz1> cbsrobot: thx
[22:16] <cbsrobot> and send her an email and ask her to join this channel - looks like she'd be glad to help.
[22:16] <kz1> :-)
[22:23] <Compn> gpu were good for bitcoins
[22:23] <Compn> when bitcoins were easy to hash and were worth more :p
[22:26] <kz1> just found this thread:  http://thread.gmane.org/gmane.comp.video.ffmpeg.devel/62614
[22:29] <kz1> Basically there is a lot of work that could be done but GPU's are not going to be useful for every problem. So it comes down to motivation for developers who want to contribute.
[22:29] <kz1> I'll ask again on the list and see if I can map out some kind of road map.
[22:39] <kierank> for the last 4 years people have said they had the ability to add gpu support to x264
[22:39] <kierank> they all disappeared
[22:39] <kz1> It's been a hard 4 years.
[22:40] <ohsix> intel has builtin encoders and decoders now :> the encoder can do 40MB/s high profile stuff
[22:41] <JEEBsv> last I looked the intel stuff wasn't really anything special :P
[22:42] <ohsix> it's not, but it works
[22:42] <JEEBsv> and it also is a complete black box so you wouldn't be able to use the things you thought could be effective in it either
[22:42] <ohsix> the encoder is, the decoder(s) are a bunch of different blocks
[22:43] <kierank> feel free to explain that...
[22:43] <kierank> and feel free to give us access
[22:45] <ohsix> what do you mean?
[22:46] <kierank> everybody always says that the gpu has components that x264/ffmpeg can use but nobody actually explains how to access them
[22:46] <ohsix> vaapi?
[22:47] <ohsix> this is all really old news
[22:47] <ohsix> the encoder block is in snb+
[22:47] <kierank> wrong
[22:47] <kierank> you can't access encoder components
[22:47] <kierank> you put pictures in and you get frames out
[22:47] <ohsix> what are you talking about? i've used them already
[22:48] <ohsix> well if you have a different definition you should have said something
[22:48] <ohsix> you can use the blocks the decoder use if you write your own driver stuff, but why would you bother
[22:50] <ohsix> it's documented, and you can take vaapi as a sample if not something you'd modify to do what you want; there are a bunch of controls for it, if not all exposed already to vaapi
[22:53] <ohsix> http://intellinuxgraphics.org/documentation.html and http://intellinuxgraphics.org/documentation/SNB/IHD_OS_Vol2_Part2.pdf in particular for snb+ stuff with the fancy encoders
[22:58] <kierank> again, this is for using certain parts of the decoder
[22:58] <kierank> which is pointless if we can get the gpu to decode the whole thing
[22:58] <ohsix> you could always have the gpu do all of it
[22:59] <ohsix> there's just not a lot of reason to
[22:59] <kierank> so basically you've showed nothing
[22:59] <j-b> Compn: cool
[22:59] <ohsix> that's the encoder ...
[22:59] <ohsix> you dislike intel or something? heh
[23:01] <Daemon404> as someone who interned at intel once
[23:01] <Daemon404> i can say
[23:01] <Daemon404> their video shit is shit
[23:01] <Daemon404> and mostly undocumented
[23:02] <ohsix> there's documentation right there, and vaapi is 100% open source, what are your expectations?
[23:02] <Daemon404> in this context im referring to their chip-encoder stuff
[23:02] <Daemon404> on SB
[23:03] <ohsix> that's right there
[23:04] <ohsix> onlive ;]
[23:04] <Daemon404> also lol @ libva
[23:05] <JEEBsv> the intel hardware encoder component could've been more interesting if it wasn't a completely black box with a few switches
[23:05] <JEEBsv> alas, that's what it is
[23:05] <ohsix> is it
[23:05] <JEEBsv> (unless you pay hefty money for it)
[23:05] <kierank> all that document i think is so you can use parts which are common to decode/encode
[23:06] <ohsix> heh, i'm getting the distinct impression nobody has even bothered to look, and they have some personal reason to never expect otherwise
[23:06] <JEEBsv> no, in regards with the encoding thing people have actually looked and the black box encoder is just mostly useless
[23:06] <ohsix> people, they're good
[23:07] <ohsix> some person whose judgement you trust, good enough for me
[23:07] <JEEBsv> there was some intel guy around who talked about integrating some parts of the black box into x264, but that guy quietly went into oblivion :P
[23:08] <JEEBsv> after that we had another intel guy come around handbrake to offer the black box as-is into it, he was politely'ishly told "no, thank you"
[23:08] <kierank> so yeah i can't actually see anything new in that document
[23:08] <kierank> it just says here are some decoding functions that may or may not be useful in encoding
[23:08] <ohsix> you see what's in DevSNB+
[23:09] <ohsix> if you are looking for decode bits you're looking in the wrong place
[23:09] <ohsix> i cited that one in particular because it described the encoding components new in snb
[23:09] <ohsix> but you can see there are like 25 other documents there
[23:09] <JEEBsv> also, I had someone test the intel encoder on an i5 around phoronix back in the day, and the results were that it matched x264's superfast speed and x264's ultrafast was 2x faster
[23:10] <ohsix> back in the day eh, so like a month or two ago?
[23:10] <JEEBsv> sandy bridge came with its encoder component way before that, the testing app used quant-based encoding at qp 26 IIRC
[23:11] <kierank> http://intellinuxgraphics.org/documentation/SNB/IHD_OS_Vol1_Part4.pdf is about the encoder but there aren't any blocks
[23:11] <JEEBsv> anyways, long story short: the intel encoder component could be interesting if and only if various parts of it could be used
[23:11] <JEEBsv> instead of just having a black box to be dealt with
[23:12] <ohsix> ok
[23:12] <Daemon404> [18:04] < ohsix> onlive ;]
[23:12] <Daemon404> er dont they use x264
[23:12] <JEEBsv> no
[23:12] <Daemon404> i mean didnt D_S work there
[23:12] <JEEBsv> gaikai does
[23:12] <JEEBsv> gaikai is where D_S works
[23:12] <Daemon404> oh
[23:12] <Daemon404> right.
[23:12] <ohsix> so you mean using the decoder bits for other things than what vaapi already uses for mpeg2 or mpeg4
[23:13] <JEEBsv> unfortunately it seems that intel has chosen the path of either not giving such access to anyone, or giving the access only to people with lots of NDAs and money
[23:13] <Daemon404> the concept of onlive is shitty anyway
[23:13] Action: Daemon404 is happy with his local games
[23:13] <JEEBsv> the decoder stuff is what it is, and can be separately used of course
[23:13] <JEEBsv> for decoding
[23:13] <ohsix> why hold back, or they are killing babies
[23:14] <Daemon404> ohsix, i think i saw the dead baby room when i was there
[23:14] <ohsix> you get access to the bitstream acceleration and a bunch of stuff, but i'd have to make effort to be more specific; so blat
[23:14] <JEEBsv> ohsix: everyone already knows that you can do hardware decoding with X APIs
[23:15] <ohsix> vaapi isn't tied to x
[23:15] <JEEBsv> ...
[23:16] <ohsix> :]
[23:16] <JEEBsv> that X was not a reference to X.org
[23:16] <JEEBsv> just so that you know
[23:16] <ohsix> it's open source, don't just consume it; vaapi is malleable
[23:16] <JEEBsv> ...
[23:16] <JEEBsv> I hear my comment flying way over your head
[23:17] <ohsix> ok
[23:17] <ohsix> you're suggesting something that is untenable and now i'm not too surprised at your earlier comments
[23:17] <JEEBsv> Anyways, let's put it this way. The _decoding_ part of it is not really interesting right now.
[23:18] <JEEBsv> be it VDPAU or whatever
[23:18] <ohsix> i only call them decoding elements because that's what they're known as, and as a distinction with what's in snb+
[23:18] <ohsix> they're not just decode elements
[23:19] <ohsix> but ok, nevermind
[23:20] <JEEBsv> so you're telling me that people should be going through Intel's documentation and implementation details to find functions that could be used for encoding from their decoding space?
[23:20] <JEEBsv> pardon my french, but that really doesn't sound like making much sense
[23:21] <ohsix> i'm saying if vaapi doesn't currently fit your use case, change it to something that does; there's a large gulf between the api intel publishes for accessing it and what it can actually do
[23:22] <ohsix> if you don't want to support your use case i don't know who will ... :]
[23:23] <ohsix> gwenole and the other person(s?) working on it are pretty easy to contact
[23:24] <JEEBsv> Which is exactly what I was saying. And I don't have a use case, I just think that if the black box'd encoder component wasn't black box'd -- now that could be VERY muchos useful for things
[23:24] <JEEBsv> unfortunately Intel chose another path :P
[23:24] <ohsix> ah then i don't disagree
[23:24] <ohsix> but exposing all the blocks isn't useful for someone, and you still have the documentation; it's just a different level of access
[23:25] <ohsix> you could conceivably expose the blocks as a graph of elements in vaapi instead of encapsulating a few of them that are logically useful from a consumer perspective
[23:26] <JEEBsv> you are saying that the blocks of the encoder component are actually _available_ ? That fights with every PDF I read from Intel back when SB was newer :P
[23:26] <ohsix> but then again, that's exactly what you have already, but you would be working with the documentation; and i presumed you would be augmenting vaapi for uniform access to the features you wanted to expose, not creating something new
[23:26] <JEEBsv> as far as I know the encoder is only available as a place to stick data in, some switches and a place that gives you an H.264 bitstream
[23:26] <ohsix> see my distinction earlier about referencing decoder elements, and that it would take effort to point something out in particular as useful
[23:27] <JEEBsv> ...
[23:27] <ohsix> you're not too far off, at least as it's used by vaapi
[23:27] <JEEBsv> Not only VAAPI
[23:27] <ohsix> right
[23:27] <JEEBsv> every software-level API I've seen does exactly that and I don't believe Intel lets any normal mortal touch that encoding ASIC any more
[23:28] <ohsix> the public documentation and code is somewhat organized to compartmentalize the IP as well
[23:29] <ohsix> it might be a forensic exercise for someone who doesn't already work at intel, but there are things the code does that you can't track in any documentation, ifunowatimean
[23:29] <JEEBsv> I wonder why you couldn't just say "Yes, you can't access the encoder ASIC in a more thorough manner" and instead dive into "B-but you can look at these random APIs and things intel left us! Y-you might be able to use it for something!"
[23:30] <ohsix> anyways,  unless i get bored and have a concrete example to offer up, i'm done; we pretty much agree at the moment
[23:30] <ohsix> that wasn't the intention
[23:30] <ohsix> you wouldn't access the asic directly anyways, unless you are writing a vaapi workalike, and vaapi already exists so that is pretty silly
[23:31] <ohsix> you'd essentially be writing a driver, for something else to use
[23:31] <ohsix> it's not like the old mga devices where you just get exclusive access and open it and send some ioctls
[23:32] <ohsix> you need to share resources with say, glx; and have access arbitrated, you are working with only one component of many that are working together
[23:33] Action: JEEBsv sighs and feels really herp derp because of the actual point being missed
[23:33] <ohsix> i get your point, i was elaborating on reality
[23:33] <ohsix> how would this widget you're writing cooperate with other clients using vaapi or the gpu
[23:33] <ohsix> it's a semantic difference
[23:35] <ohsix> most things vaapi do are exclusive from the other gpu tasks except from very specific, simple interactions; if you were writing an alternate driver you would be in a space having to cooperate with vaapi
[23:35] <JEEBsv> You do understand that what I was talking about was the blackbox'ification of the encoder component, right?
[23:35] <JEEBsv> not about anything else
[23:36] <ohsix> you seemed to be conflating how it was accessed with what it actually does, and assuming that relationship is fixed forever
[23:37] <JEEBsv> also, I am basing the assumption that you can in theory access the encoder ASIC/FPGA/whatever in a better way on the fact that there is a thread made by an Intel employee who later vanished that was talking about knacking some components of the now-blackbox-only encoder thingy into x264
[23:38] <ohsix> what was his name?
[23:38] <JEEBsv> I don't remember, but his nickname was his name IIRC
[23:39] <ohsix> isn't mentioning x264 aside from the intel employee thing kind of a red herring, you mentioned earlier that you would want to reuse all the components that you could, and those cover more than h264
[23:40] <JEEBsv> I have no idea what you're talking about
[23:40] <ohsix> not important, it just seemed to undermine your argument from earlier
[23:42] <ohsix> if i get some time to read the documentation again i'll do some citations, i don't see that happening soon though :]
[23:45] <ohsix> you could ask gwenole what's easily doable pretty easy tho
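For reference on the encoder-access argument above, the sketch below shows roughly the granularity a client sees through the public libva API: it can ask the driver which entrypoints exist for a profile, and on Sandy Bridge what comes back is essentially a whole decode pipeline (VAEntrypointVLD) and, if the driver enables it, a whole encode pipeline (VAEntrypointEncSlice), with the internal blocks hidden behind them. This is only an illustrative sketch against libva's documented X11 entry functions, not FFmpeg code, error handling is minimal, and whether an encode entrypoint shows up at all depends entirely on Intel's driver.

    /* Hedged sketch: enumerating what a VA-API driver exposes for H.264.
     * Typically this is just VAEntrypointVLD (whole-bitstream decode) and,
     * on SNB+ with the Intel driver, VAEntrypointEncSlice (pictures in,
     * coded slices out) -- i.e. whole pipelines, not individual blocks.
     * Illustration only, not FFmpeg code. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <X11/Xlib.h>
    #include <va/va.h>
    #include <va/va_x11.h>

    int main(void)
    {
        Display *x11 = XOpenDisplay(NULL);
        if (!x11)
            return 1;

        VADisplay va = vaGetDisplay(x11);
        int major, minor;
        if (vaInitialize(va, &major, &minor) != VA_STATUS_SUCCESS)
            return 1;

        int max = vaMaxNumEntrypoints(va);
        VAEntrypoint *eps = malloc(max * sizeof(*eps));
        int num = 0;
        vaQueryConfigEntrypoints(va, VAProfileH264High, eps, &num);

        for (int i = 0; i < num; i++) {
            if (eps[i] == VAEntrypointVLD)
                printf("H.264 High: full decode entrypoint\n");
            else if (eps[i] == VAEntrypointEncSlice)
                printf("H.264 High: encode entrypoint (black-box encoder)\n");
            else
                printf("H.264 High: entrypoint %d\n", eps[i]);
        }

        free(eps);
        vaTerminate(va);
        XCloseDisplay(x11);
        return 0;
    }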
[00:00] --- Mon Mar 12 2012


More information about the Ffmpeg-devel-irc mailing list