[Ffmpeg-devel-irc] ffmpeg-devel.log.20150102
burek
burek021 at gmail.com
Sat Jan 3 02:05:03 CET 2015
[00:34] <kierank> Daemon404: have you used hopper?
[00:34] <kierank> it's quite nice
[01:00] <Daemon404> kierank, hopper?
[01:00] <kierank> hopper disassembler
[01:00] <Daemon404> oh that
[01:00] <Daemon404> i tried it
[01:00] <Daemon404> for 32-bit, it's not quite as nice as hexrays
[01:01] <Daemon404> x64 is obviously better.
[01:01] <kierank> it has a decompiler as well
[01:01] <kierank> but to pseudocode
[01:01] <Daemon404> yes
[01:01] <Daemon404> hexrays has some nicer features, but is 32bit only
[01:02] <Daemon404> (there supposedly also exists an arm version)
[01:02] <Daemon404> i guess if youre trying to Be Legal (TM) then hopper is actually, you know, purchasable
[01:03] <wm4> how much does this hexrays thing cost?
[01:03] <Daemon404> doesnt matter
[01:03] <Daemon404> they dont sell to you unless you are notable
[01:03] <wm4> lolwut
[01:03] <wm4> seems like a bad sales strategy
[01:04] <Daemon404> not really
[01:04] <Daemon404> theyre undispitedly the best, and every single firm uses them
[01:04] <Daemon404> undesputedly*
[01:04] <compn> they are e-l335 !
[01:04] <Daemon404> eh, screw spelling.
[01:04] <compn> er el33t
[01:45] <BBB> isnt hexrays the decompiler?
[01:45] <BBB> you dont need hexrays, IDA is good enough
[03:25] <michaelni> cousin_luigi, thx
[03:33] <cone-515> ffmpeg.git Michael Niedermayer master:aeb36fd2072e: avcodec/mpeg12dec: Move user data debug code out of unrelated if (buf_size > 29)
[03:33] <cone-515> ffmpeg.git Michael Niedermayer master:1010b36d8672: avcodec/mpeg12dec: Recalculate SAR unconditionally
[03:33] <cone-515> ffmpeg.git Michael Niedermayer master:75cc57f73f9a: avcodec/mpeg12dec: Check actual aspect ratio instead of aspect_ratio_info
[08:54] <anshul_mahe> michaelni: I have tried to address all your comments on the closed caption decoder patch; is something still missing?
[09:38] <nevcairiel> someone should delete #4219, its some disguised spam again
[10:04] <anshul_mahe> closed it as invalid
[10:06] <ubitux> it should be deleted
[10:06] <ubitux> btw it's kind of interesting
[10:06] <ubitux> it looks like a bot that tried to pick this: http://trac.sagemath.org/ticket/3101
[10:06] <ubitux> and it seems to have attempted the same thing in several other places: https://josm.openstreetmap.de/ticket/10915 https://www.mail-archive.com/tor-bugs@lists.torproject.org/msg67946.html
[10:48] <anshul_mahe> ubitux: I dont know how to delete, please do the needful if you know how to delete
[10:48] <ubitux> i don't have the rights
[10:48] <ubitux> afaik
[11:47] <cone-375> ffmpeg.git Clément Bœsch master:55faf56c7256: avformat/mov: move edit list heuristics into mov_build_index()
[11:47] <cone-375> ffmpeg.git Clément Bœsch master:11201bbf7fc9: avformat/mov: reindent after previous commit
[11:48] <ubitux> akira4: i'll try to reply to your mail tonight :)
[11:48] <akira4> okay ! thanks :)
[14:49] <datenwolf> Hi, I'm trying to understand the ffmpeg build system. Specifically, I'd like to use libswscale (and just libswscale) in a small project of mine and, for that, transplant a copy of the libswscale sources into my project's source tree (the rationale being that I don't want to make people install a full ffmpeg source build just to build a program which grabs webcam images and processes them (and may have to perform YUV->RGB conversion for that)).
[14:50] <datenwolf> So essentially what I actually want to understand is how to build the libswscale sources without using the ffmpeg build system.
[14:50] <wm4> libswscale is pretty bad for yuv->rgb conversions
[14:50] <datenwolf> wm4: and the other way round? Because that will happen as well.
[14:50] <wm4> but what's hard to understand? it's handwritten, a bunch of shell scripts and makefiles
[14:50] <wm4> maybe even worse
[14:51] <datenwolf> wm4: Well, for example there are identically named source files in libswscale/ and libswscale/x86, so I wonder: are those complementary, or do they replace each other?
[14:52] <wm4> also, you can disable almost everything when building ffmpeg (though I'm not sure if you can actually disable everything except libswscale)
[14:52] <datenwolf> wm4: Excuse me? Worse for colorspace transformation? Given that you have to convert color spaces for about every lossy video codec, and that swscale is what ffmpeg uses for this, that statement surprises me.
[14:52] <wm4> creating your own build system for libswscale sounds like maintenance nightmare
[14:52] <JEEBsv> it is, and you really don't want to do it
[14:52] <datenwolf> wm4: Well, actually I'm using CMake
[14:53] <wm4> libswscale just has been dragged along from its beginnings... it was never particularly good
[14:53] <wm4> oh, I'm sorry
[14:53] <datenwolf> JEEBsv: So, a) why is it then still in ffmpeg, and b) any suggestions for a different library? I'd really prefer not to write that stuff myself.
[14:54] <wm4> Libav wanted to rewrite libswscale for ages, but nothing came of it
[14:54] <JEEBsv> libz?
[14:54] <wm4> yeah, zimg might be an alternative
[14:54] <JEEBsv> not as optimized, but seems to be getting used @ vapoursynth
[14:54] <wm4> datenwolf: a) because it's always been used in ffmpeg, and is not easy to replace
[14:55] <datenwolf> So when you say "swscale is bad" what exactly do you mean by it?
[14:55] <wm4> maybe libyuv is also viable?
[14:55] <wm4> slow and low quality
[14:55] <JEEBsv> you usually want only one of those at most
[14:56] <JEEBsv> https://github.com/sekrit-twc/zimg
[14:56] <JEEBsv> this is the one that seems to have most buzz around it atm
[14:56] <datenwolf> JEEBsv: Well, essentially the problem boils down to: Webcam may not support the colorspace I need (I420 in particular), so whatever comes out of it, must be converted first (and fast).
[14:56] <wm4> this is probably more for scaling and end-conversion
[14:56] <datenwolf> So anything that gets the job done is fine to me
[14:56] <wm4> not so much about handling the fucked up shit webcams can output
[14:57] <wm4> like potentially packed formats etc.
[14:57] <JEEBsv> seems to be planar YCbCr so far according to his words :D
[14:57] <wm4> well, some webcams even output jpeg and stuff
[14:57] <JEEBsv> yes
[14:57] <JEEBsv> but in that case you don't only need swscale :P
[14:57] <datenwolf> wm4: Actually the jpeg case is what I'm mostly dealing with, and I got that covered already.
[14:58] <JEEBsv> you need avcodec and friends, or whatever you choose
[14:58] <wm4> I'm just saying yuv conversion alone is not all of it
[14:58] <JEEBsv> of course not, but that's what he seems to be wanting
[14:58] <JEEBsv> scaling and colorspace conversion
[14:58] <JEEBsv> :P
[14:58] <datenwolf> But JPEG is quite permissive about the colorspaces that can be shoved through it.
[14:59] <datenwolf> I'm using libuvc to access the webcams (deliberately bypassing any system drivers for webcam access here).
[14:59] <JEEBsv> anyways, zimg can be quick with newer CPUs, swscale has some optimizations for various older CPUs
[14:59] <JEEBsv> but you have to remember that swscale isn't exactly doing it all the best way
[14:59] <cone-375> ffmpeg.git Michael Niedermayer master:4c7a1ccb366c: Changelog: Add cropdetect non 8bpp
[14:59] <JEEBsv> since it's meant for quick n' dirty things
[15:00] <datenwolf> JEEBsv: actually quick and dirty sounds nice to me :)
[15:00] <wm4> it originates from mplayer, where they wanted to scale and convert 4:2:0 in realtime on pentiums, or something
[15:00] <wm4> (I wonder if that ever actually worked...)
[15:00] <JEEBsv> maybe with the resolutions in use back then...
[15:00] <JEEBsv> well, I'm just saying that you might be able to use both
[15:00] <wm4> datenwolf: also, libswscale depends on libavutil
[15:01] <JEEBsv> also if you are going to use either, you will want to just make your cmake build system look for the libraries, or you use git submodules and use the ffmpeg/zimg build system from cmake
[15:01] <datenwolf> The main reason why I don't want to use the full ffmpeg stack for the whole processing (which it actually could be used for) is that I never managed to get its processing chain down to only a single frame of latency.
[15:02] <JEEBsv> that's possible but requires poking at the buffering etc
[15:02] <JEEBsv> never done it fully myself but I've seen people do it :P
[15:02] <datenwolf> Yes, and I'd like to avoid that. ffmpeg is kind of a swiss army knife, but what I need is more like a specialized surgical scalpel right now.
[15:02] <JEEBsv> input buffering, filtering and then making sure the encoder plays along as well
[15:04] <wm4> yeah, ffmpeg has the surgical precision of a shovel
[15:04] <wm4> (just wanted to say that)
[15:05] <wm4> anyway, libswscale is so disappointing, people have written their own naive conversions in C, and were then surprised that it was faster than libswscale
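The kind of hand-rolled conversion wm4 alludes to really is short; a naive fixed-point full-range BT.601 YUV420P->RGB24 in plain C (a sketch, not tuned; the coefficients are the standard ones scaled by 2^16) can look like:

```c
#include <stdint.h>

static uint8_t clamp_u8(int v) { return v < 0 ? 0 : v > 255 ? 255 : (uint8_t)v; }

/* Naive full-range BT.601 YUV420P -> packed RGB24. w and h must be even.
 * Chroma is point-sampled from the 2x2-subsampled U/V planes. */
static void yuv420p_to_rgb24_naive(int w, int h,
                                   const uint8_t *y, const uint8_t *u,
                                   const uint8_t *v, uint8_t *rgb)
{
    for (int j = 0; j < h; j++) {
        for (int i = 0; i < w; i++) {
            int Y  = y[j * w + i];
            int Cb = u[(j / 2) * (w / 2) + i / 2] - 128;
            int Cr = v[(j / 2) * (w / 2) + i / 2] - 128;
            uint8_t *p = rgb + 3 * (j * w + i);
            p[0] = clamp_u8(Y + ((91881 * Cr) >> 16));              /* R */
            p[1] = clamp_u8(Y - ((22554 * Cb + 46802 * Cr) >> 16)); /* G */
            p[2] = clamp_u8(Y + ((116130 * Cb) >> 16));             /* B */
        }
    }
}
```

Whether something like this actually beats libswscale is a claim from the chat, not verified here; it does show how little code the plain-C path needs.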
[15:06] <anshul_mahe> wm4: if you have some benchmarks backing what you're saying, why don't you make swscale faster
[15:07] <wm4> because I'm neither an optimization nor a colorspace conversion expert, nor am I courageous enough to hack libswscale code
[15:08] <wm4> and I think it's important not to promise the wrong things
[15:08] <wm4> also, libavscale will liberate us all
[15:09] <datenwolf> Maybe I should quickly explain what I'm trying to do: take two webcams in a stereoscopic configuration, mount them on an Oculus Rift. Fetch their images with as little latency as possible, encode/lossy-compress the video stream in an error-resilient fashion, add the positional tracking data, send it over UDP (I don't care if packets are lost, just get it there fast). On the receiving side, decompress into OpenGL textures and integrate them into cube maps, then view these through another Oculus Rift on the other end (using the viewing Rift's tracking data to control the view onto the cubemaps). And do that in both directions.
[15:09] <datenwolf> Some of the code to do this is already in place (the whole cubemap integration and viewing for example).
[15:10] <datenwolf> I originally wanted to use x264 in zerolatency mode, but then had the problem that apparently there's no "standalone" lightweight decoder for h.264 available. So my second choice fell on libvpx.
[15:11] <JEEBsv> well, depending on your feature set used that cisco thing might be useful for you
[15:11] <JEEBsv> although it probably doesn't support stuff like 4:4:4 chroma for example
[15:12] <JEEBsv> that said, libvpx doesn't support it either in any sane configuration
[15:12] <wm4> anyway, ffmpeg can be stripped down a lot
[15:12] <datenwolf> libvpx expects an I420 format. I already got that part working, but only from a file with the images preconverted.
[15:13] <wm4> while distros force you to use their version (or something), it doesn't matter if you redistribute your program as a blob
[15:13] <datenwolf> Or YV12, which is kind of the same as I420 but with the chroma planes swapped.
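Since YV12 differs from I420 only in plane order, no per-pixel work is needed to go between them; for a contiguous buffer, a minimal in-place sketch is just a chroma-plane swap (if the planes are handled as separate pointers, swapping the two pointers suffices and even the copy goes away):

```c
#include <stdint.h>

/* Convert a contiguous I420 buffer (Y plane, then U, then V) to YV12
 * (Y plane, then V, then U) in place by swapping the two chroma planes.
 * w and h must be even. */
static void i420_to_yv12_inplace(uint8_t *buf, int w, int h)
{
    uint8_t *u = buf + w * h;            /* first chroma plane  */
    uint8_t *v = u + (w / 2) * (h / 2);  /* second chroma plane */
    for (int i = 0; i < (w / 2) * (h / 2); i++) {
        uint8_t tmp = u[i];
        u[i] = v[i];
        v[i] = tmp;
    }
}
```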
[15:13] <wm4> and then I believe you could reduce ffmpeg to having a h264 decoder and libswscale, or so
[15:13] <datenwolf> Well, actually I'd prefer not to distribute a blob, but a repository which kind of builds "stand alone".
[15:15] <datenwolf> Maybe that's just habit, but it comes with the line of work I earn my money in, where we have to produce reproducible builds and have tight control over every piece of code that goes into the final program.
[15:18] <datenwolf> libyuv looks like exactly what I need.
[15:18] <datenwolf> I still have to take a glimpse at zimg, but so far it looks far better suited to me than swscale.
[15:18] <datenwolf> Thanks!
[15:20] <datenwolf> Okay, which zimg is zimg? If I type "zimg library" into Google I get a number of (orthogonal) results.
[15:21] <JEEBsv> the sekrit-twc one on github
[15:21] <datenwolf> JEEBsv: got it.
[15:23] <datenwolf> Okay, both libyuv and zimg are implemented in C++; not that I'd complain, but I'd prefer pure C (again habitual, due to the headaches I have to deal with at work).
[15:24] <JEEBsv> the API for zimg should be C
[15:24] <JEEBsv> so as such it shouldn't be a problem
[15:24] <datenwolf> Yes it's C.
[15:25] <datenwolf> Well, for a stand-alone project it doesn't matter. But the linkage problems involved when multiple C++ libraries built by different compilers collide, ugh...
[15:26] <datenwolf> Take one library that makes liberal use of G++ extension features and take one library that uses CUDA and try to use them in a Windows build.
[15:26] <datenwolf> In Windows CUDA only works with MSVC++
[15:27] <wm4> C++ being "more portable" is part of why everyone is using it
[15:27] <datenwolf> define "more portable"...
[15:27] <wm4> though that's mostly MS actively sabotaging C, and the C standard committee being a bunch of old turdheads
[15:28] <cone-375> ffmpeg.git Supraja Meedinti master:c6bb651bce2b: libavutil: Added Camellia symmetric block cipher
[15:28] <wm4> datenwolf: these days you can have threads, atomics, and a number of things that work on all platforms in the core language
[15:29] <wm4> while in C you must NIH your portable mechanisms
[15:29] <datenwolf> wm4: C11 defines most of these as well.
[15:29] <datenwolf> (or rather all of them, actually)
[15:29] <wm4> except it's completely useless
[15:29] <wm4> because for example MSVC doesn't implement C11
[15:30] <wm4> or even if they do, threads are most likely a toy implementation that doesn't interact well with "foreign" native threads
[15:30] <wm4> just like "fopen" is completely useless in msvc
[15:30] <datenwolf> wm4: Don't remind me (O_BINARY / O_TEXT...)
[15:30] <wm4> the problem with fopen is that it doesn't do utf-8
[15:31] <datenwolf> wm4: You mean in filenames?
[15:31] <wm4> yes
[15:32] <datenwolf> Well, that's actually not a fopen problem, but a problem deep down in the Windows API and the assumptions made by legacy programs, which Microsoft doesn't want to break by introducing proper UTF-8 support in filenames. The filesystems deal with it just fine.
[15:32] <wm4> the main issue is that MS doesn't care
[15:34] <datenwolf> A number of "shortcomings" of the Windows API are really Microsoft not wanting to break legacy code. And fopen they consider a legacy API. They don't care? Maybe, but it's also part negligence in the original implementation and part lock-in due to broken programs.
[15:34] <datenwolf> Anyway, thanks for your help and suggestions.
[15:34] <datenwolf> both libraries look fine, and I'll try each of them.
[15:35] <wm4> since on Windows each program has its own (independent) stdlib anyway, there's no reason why MS couldn't provide a fixed, modern replacement
[15:36] <datenwolf> wm4: Well, every version of MSVC comes with a new MSVCRTxxx.dll (and a bunch of statically linked and debug build counterparts).
[15:37] <datenwolf> So it could be integrated in that quite effortlessly.
[15:37] <wm4> I also blame mingw
[15:37] <datenwolf> And the API to access UTF-8 filenames does exist in the Win32 API.
[15:37] <wm4> and the terrible way they use an ancient msvcrt, and their own replacements for some functionality
[15:37] <wm4> what where
[15:37] <datenwolf> You mean, because MinGW uses the system default MSVCRT?
[15:38] <wm4> yes
[15:39] <datenwolf> That's probably to avoid licensing issues regarding the redistributable.
[15:40] <datenwolf> but yeah, shitty situation. However the MSVCRT gets updated with every version of Windows as well. It retains the interface used up until VisualStudio-6 for compatibility (VS6 linked against the system MSVCRT as well).
[15:40] <datenwolf> But it _gets_ updated.
[15:42] <datenwolf> After all, the CRT is what drivers and services are supposed to link against.
[16:51] <kierank> datenwolf: what's wrong with just disabling the parts of the ffmpeg build you don't need
[16:52] <datenwolf> kierank: Then I have to include the whole ffmpeg sources in my source tree, OR have people perform a tailored ffmpeg build themselves.
[16:53] <datenwolf> kierank: Also I'm not sure just disabling the unneeded parts and using libavcodec would give me the required low latencies without hacking on the libavcodec sources themselves.
[16:53] <kierank> then you are wrong
[16:53] <kierank> just because ffmpeg.c is not low latency
[16:54] <kierank> doesn't mean libavcodec is suddenly high latency
[16:55] <datenwolf> Well, a few years ago I had another project (without the latency requirements) where I had to fetch images from Webcams as well. And because I was lazy I just used libavcodec for that. But I never got the camera latency down as much as I managed with this project by using libuvc and accessing them directly.
[16:56] <datenwolf> I made a small libavcodec wrapper for that then (terribly outdated, ffmpeg's API moved along some way) https://github.com/datenwolf/aveasy
[16:59] <kierank> what kind of latency are you looking for?
[17:15] <iive> imho, most of the latency is in the output muxers. libx264 could buffer a dozen frames, but it also has nice options to disable that and lower latency further.
[17:16] <nevcairiel> libx264 has a low-latency option as well if that is desired
[17:16] <nevcairiel> the decoders dont buffer frames needlessly in my experience, you just need to turn off frame-threading if latency is a concern
[17:17] <nevcairiel> but a webcam is unlikely to spit out h264 anyway, rather some intra codec like mjpg, or even uncompressed
[17:17] <nevcairiel> (or at the very least only i/p frames, not b frames)
[17:17] <nevcairiel> (so no delay)
[17:31] <datenwolf> iive: Well, in that case I mentioned we just grabbed the images from the Webcam. No encoding whatsoever.
[17:32] <datenwolf> We were using libavcodec then to have some flexibility regarding the end system setup.
[17:32] <iive> datenwolf: -c copy doesn't do any encoding either, but it does demuxing and muxing.
[17:32] <datenwolf> Could have been a USB camera or an IEEE 1394 one
[17:33] <datenwolf> iive: Well, look at my aveasy code; it's just what we used, and we had between 10 and 15 frames of latency.
[17:33] <datenwolf> Not a big deal for that project then, but a huge roadblock if it's to be used on a VR system.
[17:34] <datenwolf> iive: That's why I've decided to bypass as much as I could using libuvc to get the camera images.
[17:34] <datenwolf> And the result speaks for itself.
[17:34] <anshul_mahe> is there any filter which merges 2 programs into 1
[17:35] <iive> datenwolf: link?
[17:35] <anshul_mahe> https://github.com/datenwolf/aveasy
[17:35] <datenwolf> iive: that would be the "old" stuff.
[17:36] <nevcairiel> it's hard to judge where the delay is being created, v4l2 might add delay
[17:36] <datenwolf> The low latency stuff is still in a rather unordered form. It needs a proper repository; I will set up a public Git repo as soon as it's done (that was the goal from the beginning).
[17:37] <datenwolf> nevcairiel: Indeed that may be the case. That's why I now use libuvc, to get around anything that might introduce latency.
[17:38] <datenwolf> The drawback is that it's no longer truly "plug-and-play", because it requires raw USB access, and at least on Linux you must add a few udev rules to set the /dev/bus/usb/* nodes to rw for permitted users/groups.
[17:41] <iive> i thought that v4l2 provides monotonic timestamps, so you should be well aware of the delay of the frames when you get them.
[17:41] <iive> the code looks like simple wrapper, and it even uses swscale.
[17:43] <datenwolf> iive: Yes, this is old code. I no longer use/maintain it.
[17:44] <datenwolf> The timestamps are only good for knowing the latency. But if the frames are "stuck" in kernel-side buffers, you don't get them until you get them, and there's nothing you can do about that. Which means you can't get less latency than the kernel allows you.
[17:44] <iive> true
[17:45] <datenwolf> iive: And yes, it uses swscale. That's why I was asking, because I already know how to use it. But back then ffmpeg was just installed on the system, and I didn't have to care that other people had to build it.
[17:46] <anshul_mahe> datenwolf: you can try xenomai patch for kernel, and make the latency accurate
[17:48] <datenwolf> anshul_mahe: Why should I bother? libuvc gives me extremely low latency (less than a frame in fact, measured it using a flashing LED).
[17:49] <datenwolf> And installing custom kernels is even more inconvenient than just chmod 660 on /dev/usb/... or installing a udev rule, this would be kind of counterproductive.
[17:51] <datenwolf> Apparently the Webcams I use use rolling shutter (no big surprise) and feed the scanlines to USB as they come in.
[17:52] <iive> so, libuvc access the usb camera directly, without bothering with the kernel?
[17:52] <anshul_mahe> no library can access kernel directly
[17:53] <datenwolf> iive: it goes directly to the USB using the kernel interfaces.
[17:53] <datenwolf> In Linux you have /dev/usb/... in Windows you have WinUSB.
[17:53] <iive> like libusb
[17:53] <datenwolf> iive: In fact libuvc uses libusb
[17:55] <iive> i would expect it to :)
[17:55] <anshul_mahe> libuvc seems to interface directly with the uvc webcam rather than using v4l2, just a guess
[17:56] <datenwolf> anshul_mahe: libuvc does implement a UVC driver
[17:56] <datenwolf> It's not using any OS drivers other than direct access to USB.
[17:57] <datenwolf> anshul_mahe: UVC is simple enough
[17:57] <anshul_mahe> yes I have started looking at it
[17:57] <anshul_mahe> I am looking at https://github.com/ktossell/libuvc, this is the one
[17:58] <datenwolf> Of course the latency also depends on the Webcam, how it controls the shutter, internal buffers, etc. etc.
[18:04] <anshul_mahe> datenwolf: thanks, it looks good, I will surely try to add libuvc support in ffmpeg, but maybe some months later
[18:04] <datenwolf> anshul_mahe: Take your time :)
[21:23] <cone-375> ffmpeg.git Nicolas George master:55763b6f5eda: lavd/lavfi: allow to extract subcc.
[22:34] <cone-375> ffmpeg.git Giorgio Vazzana master:88d19d240aee: avutil/camellia: fix documentation for av_camellia_crypt()
[22:34] <cone-375> ffmpeg.git Giorgio Vazzana master:8e38b1539e7d: avutil/camellia: make LR128() more robust
[22:35] <cone-375> ffmpeg.git Giorgio Vazzana master:fbb792f90fce: avutil/camellia: use K[2] instead of *K in generate_round_keys()
[22:35] <cone-375> ffmpeg.git Giorgio Vazzana master:a3ab87427c53: avutil/camellia: cosmetic fixes
[00:00] --- Sat Jan 3 2015