[Ffmpeg-devel-irc] ffmpeg.log.20130618

burek burek021 at gmail.com
Wed Jun 19 02:05:01 CEST 2013


[03:05] <rcombs> can the new swscale dither feature be used to avoid the banding one currently gets when encoding a frame from an H.264 High 10 video as an rgb24 PNG?
[03:06] <lacrymology> how can I make sure libmlt uses my ffmpeg installation (manual make install into /usr/local) and not my distro's preferred libav for libavformat? and can someone point me as to how to find out why autoconf says I've got no libdv support (amongst others)?
[03:06] <rcombs> (or generally just to avoid banding when working with High 10 video?)
[03:06] <lacrymology> and is there support for gpu-accelerated h264 encoding?
[03:08] <rcombs> lacrymology: those are both <SOMETHING>_PATH issues
[03:10] <rcombs> lacrymology: it sounds like libdv is installed but not in a _PATH (LD_LIBRARY_PATH?), and /usr/local/lib needs to come before <wherever the system libav is {/usr/lib?}> in a _PATH (again, LD_LIBRARY?)
[03:13] <lacrymology> rcombs: ok, it looks like that's OK, but while making melted, I got this: /usr/bin/ld: /usr/local/lib/libavformat.a(allformats.o): relocation R_X86_64_32 against `ff_a64_muxer' can not be used when making a shared object; recompile with -fPIC  any clues?
[03:14] <rcombs> lacrymology: and this is where my expertise ends. Maybe try recompiling libavformat with -fPIC (not that I know what that is)
[03:14] <rcombs> ?
[03:15] <lacrymology> rcombs: =) yes, I got that much. But I'm pretty sure it's something I need to tell autoconf
[03:15] <rcombs> haven't the foggiest, sorry
[03:16] <rcombs> either wait around for someone who knows more than me or try googling a bit
[03:16] <lacrymology> rcombs: thanks. Any clues about the hw accel thing, by any chance?
[03:19] <lacrymology> I think it's --enable-shared
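For reference, the usual way past that relocation error is to rebuild FFmpeg with position-independent code so its libraries can be linked into a shared object; a minimal sketch, assuming the same /usr/local prefix (exact extra flags depend on the build):

    ./configure --prefix=/usr/local --enable-gpl --enable-pic --enable-shared
    make && sudo make install

--enable-shared produces .so libraries outright, while --enable-pic makes even the static .a archives safe to link into shared objects such as melted's plugins.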
[03:19] <rcombs> lacrymology: H.264 encoding in ffmpeg generally involves use of x264. There's an experimental x264 patch that offloads the lookahead function to the GPU, but it's buggy as hell
[03:20] <rcombs> lacrymology: so for the moment, I'm going to say "not yet"
[03:21] <rcombs> the OpenCL lookahead function apparently does give significant speed improvements without losing quality, but it's buggy and I'm not sure if you could use it with ffmpeg even if you compiled libx264 with a patch yourself
[03:22] <lacrymology> damnit
[03:22] <lacrymology> I've got a pretty-much-simultaneous multi-format streaming requirement I don't know how I'll support
[03:23] <lacrymology> I'll probably end up using the cheapest codec around, and to hell with compression
[03:23] <lacrymology> it's PROBABLY good enough
[03:25] <rcombs> lacrymology: libx264 in ffmpeg can be very fast
[03:25] <rcombs> lacrymology: it'll just be either large or low-quality
[03:26] <rcombs> well, if the machine's moderately powerful you can do >1x speed encoding with rather good quality and reasonable bitrates
[03:26] <rcombs> Plex pulls it off marvelously, and it's all ffmpeg/libx264 in the backend
[03:28] <lacrymology> rcombs: I've got no clue as to how to do that. I've been given this task, and I don't know shit about codecs, transports, and such. Is libx264 the default h264 codec in ffmpeg, or do you think I need to do something special for it to use it?
[03:28] <lacrymology> answered that already
[03:29] <rcombs> lacrymology: there is no such thing as a "default <x> codec" in ffmpeg
[03:29] <rcombs> lacrymology: you don't tell ffmpeg "make this H.264", you tell it "encode this with libx264"
[03:30] <lacrymology> ok.. so I guess I meant "it's enabled by default"?
[03:30] <rcombs> lacrymology: i.e. the command includes -vcodec libx264
[03:30] <lacrymology> ah
[03:30] <lacrymology> thanks
[03:30] <lacrymology> by the way ./configure --enable-gpl --enable-libx264 does the trick
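A minimal end-to-end example of what rcombs is describing, assuming a local file named input.mp4 and a build configured with --enable-gpl --enable-libx264:

    ffmpeg -i input.mp4 -vcodec libx264 -preset veryfast -crf 23 -acodec copy output.mp4

-preset trades encoding speed against compression efficiency, and -crf selects constant-quality rate control (lower means higher quality; 23 is the default).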
[03:30] <rcombs> good
[03:31] <rcombs> mine: ./configure --enable-gpl --enable-version3 --enable-nonfree --enable-postproc --enable-libass --enable-libcelt --enable-libfaac --enable-libfdk-aac --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-openssl --enable-libopus --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvo-aacenc --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxvid --prefix
[03:31] <rcombs> =/usr/local --enable-static
[03:31] <lacrymology> the problem is that in the end, I don't know, because I won't be using ffmpeg directly, I'm making open source TV broadcasting software, using melted
[03:31] <rcombs> oh, multiple lines :|
[03:32] <lacrymology> rcombs: do you think I can do --enable-static and --enable-shared at the same time? do I get both lib sets?
[03:33] <lacrymology> what does openssl provide?!
[03:34] <rcombs> HTTPS, iirc
[03:34] <lacrymology> of course
[03:34] <lacrymology> how did you get your configure data? can you retrieve it, or do you have it saved somewhere?
[03:35] <rcombs> ffmpeg spits it out every time you run it unless you make it very non-verbose with CLI flags
[03:35] <rcombs> along with version number and the compiler it was built with
[03:35] <rcombs> and build timestamp
[03:35] <rcombs> "built on Jun 12 2013 19:26:29 with Apple LLVM version 5.0 (clang-500.1.58) (based on LLVM 3.3svn)"
[03:35] <rcombs> "ffmpeg version N-53981-g8aea2f0 Copyright (c) 2000-2013 the FFmpeg developers"
[03:35] <rcombs> etc…
[03:36] <lacrymology> tru dat
[03:37] <rcombs> good thing too, or I'd never have the same config across 2 builds :3
[03:38] <rcombs> well, no, I would
[03:38] <rcombs> but across 2 uses of ./configure
[03:40] <lacrymology> so, as I was saying, the problem is that in the end, I don't know, because I won't be using ffmpeg directly, I'm making open source TV broadcasting software, using melted, which uses ffmpeg itself. I think I can tell it which codecs to use, but I'm not sure
[03:43] Action: rcombs does not know melted at all
[03:45] <lacrymology> libmlt is a multimedia framework with TV broadcasting in mind. Does some neat stuff, like playing, transcoding, playlisting, streaming, and that kind of stuff
[03:45] <rcombs> I might have to look into that
[03:45] <lacrymology> melt is a player / transcoder / thingie, melted is a server-client player
[03:45] <lacrymology> http://www.mltframework.org/
[03:46] <rcombs> I've been considering writing a video playback server for a cable TV station that runs on RasPi
[03:46] <rcombs> it'd basically be fucktons of wrappers on omxplayer, really
[03:46] <lacrymology> CasparCG is an open source playout for windows built on top of libMLT.
[03:47] <lacrymology> we're making something very similar, but to run under *nix
[03:47] <rcombs> neat!
[03:47] <rcombs> will it run on RasPi?
[03:47] <lacrymology> https://github.com/inaes-tic/mbc-playout/
[03:47] <lacrymology> what's RasPi?
[03:47] <rcombs> Raspberry Pi
[03:47] <lacrymology> ah
[03:47] <lacrymology> we're making it run on a BlackMagic
[03:48] <lacrymology> I don't know what protocol RasPi handles
[03:48] <lacrymology> I think it's the same?
[03:49] <rcombs> RasPi is a computer, and Blackmagic is a company
[03:50] <lacrymology> yes, I see now
[03:50] <rcombs> anyone know about using the swscale dither for reducing 10bit->8bit banding?
[03:50] Action: rcombs has no idea how to use it
[03:50] <lacrymology> http://www.blackmagicdesign.com/products/decklink/ this is the card we're using
[03:50] <rcombs> OK, Decklink
[03:50] Action: rcombs is somewhat familiar
[03:51] <rcombs> playing video out from a RasPi basically involves shooting video at /dev/fb0
[03:51] <lacrymology> I guess it depends on how well it encodes
[03:51] <rcombs> s/en/de/?
[03:52] <lacrymology> encodes / decodes / whatever, I don't know
[03:52] <rcombs> decode is video stream -> frames
[03:52] <rcombs> and the RasPi has an onboard H.264 encoder and decoder
[03:52] <lacrymology> rcombs: if the RasPi just gets frames and it works, then yeah, sure
[03:52] <lacrymology> yes, we're using it for something else, I think
[03:53] <rcombs> lacrymology: the tricky bit is that you'd have to write code to use the onboard hwdec, as the CPU isn't fast enough for it
[03:53] <rcombs> also it's ARM, and that shouldn't matter, but…
[03:55] <lacrymology> it looks like mltframework doesn't come with a RasPi consumer
[03:55] <lacrymology> you'd need to write that
[03:56] <lacrymology> it comes with a decklink consumer
[03:56] <rcombs> which would mostly involve tearing code out of OMXPlayer and stuffing it into MLTFramework
[03:57] <rcombs> OpenELEC and PleXBMC could also be good to examine
[03:57] <lacrymology> comes with avformat, blipflash, decklink, gtk2, jack, libdv, rtaudio, sdi, sdl, and an XML builder =)
[03:57] <lacrymology> BUT
[03:57] <rcombs> the great thing about RasPi is that it costs $35 and runs on 700mA of microUSB power and is the size of a credit card
[03:57] <lacrymology> if you build an mvcp driver for OMXPlayer, you can still use our software =)
[03:58] <lacrymology> yes, I think we're using one of those to do something else
[03:58] <lacrymology> an automatic camera switch
[03:59] <rcombs> you can use them for all sorts of things
[04:00] <rcombs> receiving a stream from the web and playing out over HDMI or composite… controlling a USB or serial device over a network… I'm using one right now for music on hold playback at an optometry practice
[04:00] <rcombs> (serial devices via a USB<->Serial adapter)
[04:02] <lacrymology> I think I need to combine 3x 1080p camera inputs into a single fullHD webstream
[04:02] <lacrymology> hdmi / composite is not.. professional enough
[04:03] <lacrymology> or so I've been told
[04:04] <rcombs> composite is still used in plenty of pro stuff
[04:04] <rcombs> but yeah, it sucks
[04:04] <rcombs> and it shouldn't be
[04:05] <rcombs> pro gear usually uses SDI in lieu of HDMI, but you can do 1080p on either, and you can convert between the two using a $200-ish device Blackmagic sells
[04:06] <rcombs> SDI has a more expensive enc/decoder, but can go up to 4K resolution uncompressed with SDI-3G (3Gbps) and does much longer distances (huzzah for coax!)
[04:10] <lacrymology> any clues about libavdevice? shouldn't it be built by ffmpeg?
[04:11] <rcombs> --enable-avdevice? I'unno
[04:12] <lacrymology> yeah, but it's supposed to be there by default
[04:12] <rcombs> that it is
[04:12] <rcombs> I'unno, are shared libs not being built?
[04:14] <lacrymology> hm
[04:14] <lacrymology> the file is there
[04:14] <lacrymology> there's a problem in ldconfig, then
[04:16] <lacrymology> ah, I just had to RUN ldconfig
[05:01] <lacrymology> any easy way of checking if libx264 is working?
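One quick check, assuming a reasonably recent build (older releases list encoders under -codecs rather than -encoders), is to confirm the encoder is registered and then run a short test encode against the built-in test source:

    ffmpeg -encoders 2>/dev/null | grep libx264
    ffmpeg -f lavfi -i testsrc=duration=5:size=1280x720:rate=25 -vcodec libx264 -y x264-test.mp4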
[05:05] <tlhiv_laptop> vulture: around?
[05:06] <vulture> sup
[05:07] <tlhiv_laptop> hi ... i was able to make the source code changes that xlinkz0 suggested before, but i was wondering if you had any idea how i could do something similar to ffmpeg (instead of ffplay)
[05:08] <vulture> which part?
[05:08] <tlhiv_laptop> i would like to have the space bar (while encoding) output the current A/V time to a file
[05:08] <vulture> not sure about that one
[05:08] <tlhiv_laptop> he had me place http://codepad.org/OdCTj0Do in the ffplay.c
[05:09] <vulture> dont have the source handy to look atm
[05:09] <tlhiv_laptop> oh ;)
[05:17] <arobin> Hello all. How does one properly set pts for synchronizing an Audio at 48000Hz and Video at 60Hz. I've read through the example code, and haven't found any information on pts. (muxing.c encodes a file; I'm looking for encoding rtp streams.)
[05:37] <arobin> Hello all. How does one properly synchronize an Audio Stream at 48000Hz and a Video Stream at 60Hz. I've read through the example code, and haven't found any information on pts.
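For what it's worth, a minimal sketch of the usual PTS bookkeeping with the libav* API; the stream, frame, and counter names here are illustrative rather than taken from the examples:

    #include <libavutil/mathematics.h>

    /* Video: one tick per frame in a 1/60 time base, rescaled to the stream's time base. */
    video_frame->pts = av_rescale_q(video_frame_count++,
                                    (AVRational){1, 60},
                                    video_stream->time_base);

    /* Audio: advance by the number of samples in each frame, in a 1/48000 time base. */
    audio_frame->pts = av_rescale_q(audio_samples_sent,
                                    (AVRational){1, 48000},
                                    audio_stream->time_base);
    audio_samples_sent += audio_frame->nb_samples;

As long as both streams derive their timestamps from the same clock and rescale them with av_rescale_q, the muxer or RTP sender can interleave them correctly.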
[05:43] <tyn> Is there some generic quality setting for audio that doesn't need to be specified as a bitrate, and will work with wav?
[06:06] <Plorkyeran> lossless formats don't have quality settings
[06:34] <hi117> yes they do...
[06:34] <hi117> sample rate can be considered quality
[08:38] <zap0> no.
[09:20] <natrixnatrix89> can anyone recommend a streaming server to use with ffmpeg to restream ip cameras?
[09:21] <natrixnatrix89> So far trying ffserver is a headache, because I can't get libx264 working with it..
[09:22] <natrixnatrix89> has anyone had experience with mistserver or darwin or flumotion? or something else
[09:22] <natrixnatrix89> I need it for a server with no GUI
[09:23] <natrixnatrix89> And wowza is just too expensive..
[09:33] <klemax> Hello.
[09:33] <hi117> hello
[09:33] <natrixnatrix89> hello
[09:34] <klemax> When I run "ffmpeg 2>&1 | head -n", it gives ffmpeg: error while loading shared libraries: libavutil.so.49: cannot open shared object file: No such file or directory
[09:34] <klemax> i tried yum whatprovides */libavutil.so.49, i could not find any library to fix it.
[09:34] <klemax> what am i missing exactly?
[09:35] <Mavrik> you have a shared ffmpeg build that links against version 49 of libavutil
[09:35] <Mavrik> and obviously something happened to your libavutil.so.49 :)
[09:35] <relaxed> fedora doesn't provide ffmpeg, so which repo are you using?
[09:36] <klemax> relaxed, i used source code, not rpm.
[09:36] <relaxed> you probably need to add /usr/local/lib to your ld.so.conf
[09:37] <klemax> ah yeah
[09:37] <klemax> nice pointer.
[09:39] <klemax> thanks relaxed
[09:41] <klemax> relaxed: do i need to recompile ffmpeg after ldconfig -v?
[09:42] <relaxed> no
[09:43] <klemax> okay.
[09:44] <relaxed> sudo ldconfig
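For completeness, the usual sequence after installing a self-built FFmpeg into /usr/local is roughly the following (the ffmpeg-local.conf filename is just an example):

    echo /usr/local/lib | sudo tee /etc/ld.so.conf.d/ffmpeg-local.conf
    sudo ldconfig

No recompilation is needed; ldconfig only refreshes the dynamic linker's cache.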
[10:51] <Edoardo> Is there a way to display the webcam live stream with ffmpeg?
[10:55] <relaxed> you could use ffplay
[10:56] <Edoardo> How?
[10:57] <relaxed> ffplay <stream url>
[10:58] <Edoardo> ffplay -f dshow -i video="WebCam"
[10:59] <Edoardo> This should work
[10:59] <t4nk176> Hi guys, I'm trying to decode a mmsh stream - the decoder is returning a stream of about 1 second of video followed by 1 second of audio. Is there a way to tell ffmpeg to buffer and re-order the packets so i get a continuous interleaved stream in my application instead?
[11:01] <Edoardo> Yeah it works, thanks!
[11:04] <t4nk176> I've tried using data->CodecContext->flags|=AVFMT_FLAG_SORT_DTS but it seems to have no effect whatsoever
[11:07] <Mavrik> t4nk176, most decoders don't listen to that flag
[11:07] <Mavrik> t4nk176, just make a minheap sorted by DTS and buffer two seconds of it
[11:13] <t4nk176> DTS is decode timestamp right - so wouldn't that give me the same order?
[11:14] <Mavrik> for most non-broken streams it shouldn't :)
[11:24] <t4nk176> ok thanks Mavrik, looks like this is going to be way harder than i was hoping for :(
[11:25] <Mavrik> uh
[11:25] <Mavrik> just how hard is it to create a simple heap of packets sorted by DTS? O.o
[11:25] <Mavrik> it's like CS 101 :P
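A minimal sketch of the buffer Mavrik is describing, in plain C against libavcodec's AVPacket; the PacketHeap type and helper names are illustrative, not part of any libav* API:

    #include <libavcodec/avcodec.h>

    typedef struct PacketHeap {
        AVPacket *pkt[512];   /* room for a couple of seconds of packets */
        int       n;
    } PacketHeap;

    /* Push a packet, keeping the smallest DTS at index 0 (sift-up). */
    static void heap_push(PacketHeap *h, AVPacket *p)
    {
        int i = h->n++;
        h->pkt[i] = p;
        while (i > 0 && h->pkt[(i - 1) / 2]->dts > h->pkt[i]->dts) {
            AVPacket *tmp = h->pkt[i];
            h->pkt[i] = h->pkt[(i - 1) / 2];
            h->pkt[(i - 1) / 2] = tmp;
            i = (i - 1) / 2;
        }
    }

    /* Pop the packet with the smallest DTS (sift-down). */
    static AVPacket *heap_pop(PacketHeap *h)
    {
        AVPacket *top = h->pkt[0];
        h->pkt[0] = h->pkt[--h->n];
        for (int i = 0; ; ) {
            int l = 2 * i + 1, r = 2 * i + 2, s = i;
            if (l < h->n && h->pkt[l]->dts < h->pkt[s]->dts) s = l;
            if (r < h->n && h->pkt[r]->dts < h->pkt[s]->dts) s = r;
            if (s == i) break;
            AVPacket *tmp = h->pkt[i]; h->pkt[i] = h->pkt[s]; h->pkt[s] = tmp;
            i = s;
        }
        return top;
    }

The read loop pushes every demuxed packet and only starts popping once roughly two seconds' worth of DTS is queued, which evens out the one-second audio/video bursts.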
[11:28] <t4nk176> well i'm using a mix of c#, C++ and c and marshalling between all three whilst doing realtime video processing, audio bitrate/format conversion and transcoding and recording to H264 in multiple threads - so whilst creating a heap of packets would be easy in plain c it just ain't gonna work with what i'm doing
[11:29] <Mavrik> well, you can kindly ask people not to create non-interleaved files then :)
[11:29] <Mavrik> btw, ffmpeg can do interleaving when writing output though
[11:31] <t4nk176> i was hoping i could tell ffmpeg to worry about resequencing the packets instead - the writing part is fine its the decoding i'm struggling with. I've taken a look at ffplay.c - i was hoping some of the guts of the horror that is PTS/DTS syncing would have made it into a core library
[11:36] <Mavrik> that would probably be due to ffmpeg being meant for transcoding, not playing content
[11:36] <Mavrik> no need to do PTS/DTS sync there.
[11:37] <Mavrik> have you tried other frameworks like ffdshow?
[11:38] <t4nk176> no i'll take a look - thanks
[16:04] <yellabs-r2> hi there
[16:05] <yellabs-r2> i would like to improve an old video, which goes from light to dark to light in brightness, is there a way to equalize this?
[16:08] <braincracker> image-by-image applying the desired curve maybe
[16:09] <braincracker> demux, modify images, mux
[16:09] <yellabs-r2> i see, would be a lot of work indeed
[16:10] <braincracker> if the error is a repeating waveform, then you can automate it
[16:10] <braincracker> like one side of the reader head is "weaker"
[16:20] <yellabs-r2> ok thanks, trying it with avidemux first ..
[16:20] <yellabs-r2> :P
[18:13] <burek> is this the correct way to request the h264 encoded stream from h264 enabled web cameras (with their own encoder): ./ffmpeg -f v4l2 -input_format h264 -s 640x360 -r 30 -i /dev/video0 -c copy out.flv
[18:29] <MithrilTuxedo> Can you not just use `ffprobe -show_streams /dev/video0` to figure out which one you want and then use ffmpeg's -map option?
[18:30] <burek> who?
[18:32] <lacrymology> I need to stream multicast preferably in rawvideo, but I can't seem to make it work
[18:59] <tlhiv_work> i am trying to convert a JPG image to a video using ffmpeg, but the encoding seems to be making a gray color completely (or virtually completely) white
[18:59] <tlhiv_work> here is the command that i am using
[19:00] <tlhiv_work> ffmpeg -loop_input -i foo.jpg -vcodec libx264 -b 10000k -vbsf h264_mp4toannexb -t 10 -y foo.ts
[19:00] <tlhiv_work> here is the JPG --> http://www.tlhiv.org/tmp/foo.jpg
[19:01] <burek> https://ffmpeg.org/trac/ffmpeg/wiki/Create%20a%20video%20slideshow%20from%20images
[19:02] <tlhiv_work> i don't have any trouble creating the video ... the trouble seems to be what it's doing to a fairly light color gray in that it is making it virtually white
[19:02] <burek> what is your ffmpeg version
[19:02] <burek> can you use the pastebin
[19:02] <tlhiv_work> 10.0.7
[19:02] <burek> 10?
[19:03] <tlhiv_work> 0.10.7
[19:03] <tlhiv_work> sorry
[19:03] <burek> the latest is 1.2
[19:03] <tlhiv_work> is there a setting that is causing the color mapping to be like this?
[19:03] <burek> btw, try using -crf instead of -b, when working with libx264
[19:04] <burek> well, you have multiple issues
[19:04] <burek> first your ffmpeg is probably old
[19:04] <burek> and that issue might even be fixed if it's a bug at all
[19:05] <tlhiv_work> do you have time to download that JPG that i posted and run that command and show me the results?
[19:05] <burek> just a sec
[19:05] <tlhiv_work> thank you
[19:07] <burek> www.gusari.org/uploads/foo.ts
[19:08] <tlhiv_work> same result
[19:08] <tlhiv_work> do you see the gray background in the title of the JPG
[19:08] <tlhiv_work> it is vanishing to white during this conversion
[19:09] <ubitux> your player is broken
[19:09] <ubitux> it's fine here
[19:09] <tlhiv_work> hmmm
[19:09] <tlhiv_work> wow ... you are correct :-/
[19:09] <ubitux> i confirm that mplayer has some trouble
[19:10] <ubitux> though, mpv, ffplay and mplayer2 works fine
[19:10] <tlhiv_work> as does mplayer2
[19:10] <tlhiv_work> both seem to make that gray vanish
[19:10] <ubitux> mplayer2 works fine
[19:10] <tlhiv_work> ffplay shows it fine
[19:11] <tlhiv_work> mplayer2 doesn't work fine here for me
[19:11] <ubitux> i believe it's a yuvJ issue
[19:11] <ubitux> (colorspace range)
[19:11] <ubitux> (16-248 instead of 0-255 or something)
[19:11] <tlhiv_work> is there a way to "fix" that on encoding
[19:11] <ubitux> try to force a different -pix_fmt
[19:11] <ubitux> like maybe yuv420p
[19:12] <tlhiv_work> suggestions?
[19:12] <tlhiv_work> ah
[19:12] <tlhiv_work> let me try
[19:12] <tlhiv_work> before my -vcodec libx264
[19:12] <ubitux> wherever you want as output option
[19:12] <tlhiv_work> beautiful ... that works
[19:12] <tlhiv_work> thank you very much
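For reference, the working command presumably ends up along these lines (this keeps tlhiv_work's old 0.10-era -loop_input syntax; current builds spell it -loop 1, and -crf replaces the fixed bitrate as burek suggested):

    ffmpeg -loop_input -i foo.jpg -vcodec libx264 -pix_fmt yuv420p -crf 23 -t 10 -y foo.ts

Forcing -pix_fmt yuv420p drops the full-range yuvJ pixel format the JPEG decoder produces, which appears to be what confused mplayer.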
[19:14] <tlhiv_work> now to figure out when i overlay two videos (one which has sound and the other does not) why my overlay does not have sound :-/
[19:14] <ubitux> map the audio
[19:20] <tlhiv_work> i am using
[19:20] <tlhiv_work> ffmpeg -i 2_6_42-wide.mp4 -vf "movie=2_6_42-instruction.mp4 [i]; [in][i] overlay=960:0" 2_6_42-wide_overlay.mp4
[19:20] <tlhiv_work> 2_6_42-instruction.mp4 <--- this is the video that has audio in it
[19:20] <tlhiv_work> 2_6_42-wide.mp4 <--- this does not have audio
[19:23] <burek> https://ffmpeg.org/trac/ffmpeg/wiki/Create%20a%20mosaic%20out%20of%20several%20input%20videos
[19:23] <burek> you can map 0:a
[19:27] <tlhiv_work> i am having trouble mapping it because i'm not using the "top layer" video as a -i option and thus i don't know how to access the audio channel from that top layer video
[19:29] <burek> well, i would try making both inputs with -i
[19:29] <burek> ffmpeg -i 2_6_42-wide.mp4 -i 2_6_42-instruction.mp4 -map 1:a -vf "..." 2_6_42-wide_overlay.mp4
[19:30] <tlhiv_work> but i don't know enough about the -vf to make my filter ;)
[19:30] <burek> :)
[19:30] <tlhiv_work> i found examples online and have "tweaked" them for my needs ;)
[19:34] <tlhiv_work> this works ... but something seems wrong about it to me -->
[19:34] <tlhiv_work> ffmpeg -i 2_6_42-wide.mp4 -i 2_6_42-instruction.mp4 -vf "movie=2_6_42-instruction.mp4 [i]; [in][i] overlay=960:0" 2_6_42-wide_overlay.mp4
[19:34] <tlhiv_work> the part i don't understand is [in][i]
[19:34] <tlhiv_work> what is [in]
[19:39] <burek> take a look at the link i gave you before
[19:39] <burek> you'll understand better
[19:39] <burek> you even have images :)
[19:41] <tlhiv_work> btw ... here is my result --> http://www.tlhiv.org/tmp/foo.mp4
[19:41] <burek> basicly you would have: ffmpeg -i 2_6_42-wide.mp4 -i 2_6_42-instruction.mp4 -vf "[0:v] setpts=PTS-STARTPTS [first]; [1:v] setpts=PTS-STARTPTS [second];[first][second] overlay=960:0" 2_6_42-wide_overlay.mp4
[19:41] <burek> adding -map 1:a would be
[19:41] <burek> ffmpeg -i 2_6_42-wide.mp4 -i 2_6_42-instruction.mp4 -map 1:a -vf "[0:v] setpts=PTS-STARTPTS [first]; [1:v] setpts=PTS-STARTPTS [second];[first][second] overlay=960:0" 2_6_42-wide_overlay.mp4
[19:42] <burek> which should give you the audio you are looking for
[19:45] <tlhiv_work> i had to modify the source code of ffmpeg and recompile to have FFMPEG append the current time (in the video) to a text file upon me pressing the space bar while it is encoding ... it allowed me to have one single explanation video, with "markers" in a text file to tell me how to "split" the result, or really the times that each JPG would remain on screen
[19:46] <tlhiv_work> i'm getting an error with your suggestion -->
[19:46] <tlhiv_work> ffmpeg -i 2_6_42-wide.mp4 -i 2_6_42-instruction.mp4 -vf "[0:v] setpts=PTS-STARTPTS [first]; [1:v] setpts=PTS-STARTPTS [second];[first][second] overlay=960:0" -y 2_6_42-wide_overlay.mp4
[19:46] <tlhiv_work> Output pad "default" for the filter "src" of type "buffer" not connected to any destination
[19:46] <tlhiv_work> Error opening filters!
[19:47] <burek> your ffmpeg is old i guess
[19:47] <burek> ffmpeg -i 2_6_42-wide.mp4 -i 2_6_42-instruction.mp4 -map 1:a -vf "[0:v] setpts=PTS-STARTPTS [first]; [1:v] setpts=PTS-STARTPTS [second];[first][second] overlay=960:0 [out]" 2_6_42-wide_overlay.mp4
[19:47] <burek> maybe that will help
[19:50] <tlhiv_work> no video there :-(
[19:50] <tlhiv_work> all i hear is audio
[19:52] <burek> <tlhiv_work> 0.10.7 <- ?
[19:52] <tlhiv_work> :-)
[19:57] <luc4_mac> Hi! Is it possible to deinterlace a video file without transcoding it? I suppose it is not, right?
[20:08] <Mavrik> luc4_mac, nop :0
[20:15] <luc4_mac> Mavrik: ok, thanks :-)
[20:57] <klemax> relaxed: when i run ldconfig -v, it gives "libavutil.so.50 -> libavutil.so.50.15.1"
[20:57] <klemax> but the error says: ffmpeg: error while loading shared libraries: libavutil.so.49: cannot open shared object file: No such file or directory
[21:01] <lacrymology> is there some kind of complete-ish tutorial? I'm completely at a loss as to what are my options here, and what are the parameters
[21:02] <JEEB> you can do so many things with ffmpeg that a single tutorial would not be enough
[21:02] <JEEB> it's good to note what you're trying to do first
[21:02] <JEEB> and then start from that
[21:04] <lacrymology> JEEB: what I want to do is to stream to multicast with the biggest possible quality, with the lowest possible temporal overhead. Raw video would be best, but I don't seem to be able to pull it off
[21:04] <lacrymology> JEEB: encoding overhead. Ignoring decoding costs
[21:09] <lacrymology> JEEB: right now, for example, `ffmpeg -re -i test/video.mp4 -vcodec libx264 -f mpegts udp://224.224.224.224:1234` casts all right, but x264 has quite some latency. >3s
[21:10] <JEEB> that most probably has something to do with the player as well, but in general if you really really want to minimize latency with libx264 you can use -tune zerolatency
[21:11] <JEEB> in that case the rate control algorithm used is CRF (constant rate factor), default is 23. If that looks fine then you can keep that as-is. But you will probably also want to use -maxrate and -bufsize to set a maximum average rate and the buffer over which that gets calculated
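Put together, a low-latency variant of the earlier command might look something like this (the bitrate numbers are only placeholders):

    ffmpeg -re -i test/video.mp4 -vcodec libx264 -tune zerolatency -preset veryfast -maxrate 3M -bufsize 1M -f mpegts udp://224.224.224.224:1234

-tune zerolatency disables B-frames and lookahead inside x264, and the small -bufsize keeps the rate control from building up a long VBV buffer.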
[21:11] <lacrymology> changing to -vcodec rawvideo loses the video stream
[21:11] <lacrymology> at least VLC can't read it
[21:16] <lacrymol1gy> damn, what was the last that got in?
[21:16] <Guest_88|2> hola
[21:16] <lacrymology> hi
[21:16] <Guest_88|2> :/
[21:17] <JEEB> <lacrymology> at least VLC can't read it
[21:19] <lacrymology> ok
[21:19] <lacrymology> >> and ffmpeg -i udp.. says about the video stream: Stream #0:0[0x100]: Unknown: none ([6][0][0][0] / 0x0006)
[21:20] <Guest_88|2> why does GSpot tell me that my video made with -target vcd is VBR?
[21:21] <JEEB> lacrymology, if this is local kind of stuff and you just want to push some raw video through, I guess using nut or something instead of mpegts would be an option
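A sketch of what that could look like on a LAN, assuming the same multicast address as above (uncompressed 1080p is on the order of a gigabit per second, so this only makes sense on a dedicated network):

    ffmpeg -re -i test/video.mp4 -an -c:v rawvideo -pix_fmt yuv420p -f nut udp://224.224.224.224:1234
    ffplay -f nut udp://224.224.224.224:1234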
[21:24] <Guest_88|3> why
[21:25] <Guest_88|3> and VLC content bitrate shows bitrates between 1260-1390
[21:25] <Guest_88|3> that is CBR ?
[21:27] <Guest_88|3> slackers, get to work!
[21:28] <lacrymology> 16:23 < lacrymology> JEEB: again no video. Output says Stream #0:0 -> #0:0  (h264 -> rawvideo)
[21:28] <lacrymology> 16:23 < lacrymology> but input says udp://224.224.224.224:1234: Invalid data  found when processing input
[21:28] <lacrymology> 16:24 < lacrymology> if I declare -f nut in input.. it just.. never finds it
[21:28] <lacrymology> 16:25 < lacrymology> also, it's 83mbps XD
[21:28] <JEEB> yes, raw video is raw video :P
[21:28] <JEEB> also what... H.264 to rawvideo?
[21:28] <JEEB> what on earth are you trying to do?
[21:31] <lacrymology> JEEB: forget about that, that's because I'm using a file for input. I'm testing what will be the end of the pipeline. Right now I'm pushing to either sdl or decklink and it works, but I need to do some downsampling and sending over the network as a preview, as well, and in the near future I'll probably want to webcast it, so the idea was to multicast raw video, and then have different clients that send stuff to the different places
[21:31] <vulture-> I'm going to wager that sending rawvideo over a network has higher latency than encoding rawvideo->h264->network and then converting back to rawvideo
[21:33] <lacrymology> vulture-: I can assume a dedicated Gb network, and h264 has about 3 seconds latency. I need to be within 10 frames, tops. mp4 seems to have less latency, but I haven't been able to find how to stop quality loss
[21:33] <vulture-> why 3 seconds?
[21:33] <vulture-> h264 latency is very low if you do sliced multithreading
[21:33] <vulture-> vlc likely buffers AT LEAST 3 seconds of video
[21:33] <lacrymology> vulture-: it takes over a second to even START casting
[21:33] <JEEB> he hasn't tried setting any low latency stuff for libx264 :P
[21:33] <vulture-> thats probably the overhead of everything else
[21:33] <lacrymology> yes, probably
[21:34] <JEEB> <JEEB> that most probably has something to do with the player as well, but in general if you really really want to minimize latency with libx264 you can use -tune zerolatency
[21:34] <JEEB> <JEEB> in that case the rate control algorithm used is CRF (constant rate factor), default is 23. If that looks fine then you can keep that as-is. But you will probably also want to use -maxrate and -bufsize to set a maximum average rate and the buffer over which that gets calculated
[21:34] <lacrymology> yeah, streaming doesn't start until around second 2.5
[21:35] <JEEB> (and just to note, I didn't meant that you get CRF as the default rate control algorithm with -tune zerolatency, just wanted to note it since you weren't setting any)
[21:36] <JEEB> lacrymology, just fyi -- libx264 is one of the top low-latency encoders, so that's not the problem. You're either using it wrong, or libavcodec is using it wrong, or the player you're using to check is doing stuff for too long.
[21:37] <vulture-> if not the top :D
[21:37] <lacrymology> JEEB: as far as I know, libx264 does some forward-scanning, I don't know how that can be no-latency
[21:37] <lacrymology> any ways of measuring latency?
[21:37] <JEEB> that depends completely on the settings you're using
[21:38] <vulture-> you can do I-frame only
[21:38] <lacrymology> yeah, I heard that
[21:38] <JEEB> basically --tune zerolatency on x264cli, or most probably -tune zerolatency via ffmpeg, makes x264 set stuff that will lower latency in itself
[21:38] <vulture-> when I used decklink, the actual kick-off to start the video capture was as long as several seconds.. are you talking about that? or after it starts transmitting? in any case, it's probably that you're using vlc, which buffers before displaying... because if it misses a packet, it will stop/stutter and nobody likes that, especially for audio
[21:39] <vulture-> maybe vlc has some unbuffered mode idk
[21:39] <JEEB> lacrymology, in any case libx264 itself is being used in very low latency things, not zero-latency of course, that's impossible
[21:39] <JEEB> but within milliseconds
[21:39] <vulture-> when I stream internet radio I need 5 seconds of buffer, and that's just for audio!
[21:39] <JEEB> remember Gaikai? guess what that used :P
[21:40] <JEEB> so libx264 not being low-latency is utter and complete bullshit
[21:40] <lacrymology> vulture-: web stream will go out with latency, of course, but that'll happen in that pipeline
[21:40] <lacrymology> ok, ok
[21:40] <JEEB> it's all up to how you use it
[21:40] <vulture-> that's the pipeline you're testing is it not?
[21:40] <vulture-> vlc included
[21:41] <JEEB> most people of course straight up use libx264 if they need really low latency, not libx264 via libavcodec (which is how it ends up being with ffmpeg)
[21:41] <JEEB> but I have no idea how good/bad atm ffmpeg would be for low-latency
[21:42] <JEEB> also don't make me copy and paste those two lines I've noted for the third time :P
[21:42] <vulture-> from what people have told me, ffmpeg disables the sliced multithreading in x264 by default
[21:42] <lacrymology> vulture-: I understand vlc has its own latency, but I'm using the ffmpeg output to measure how long it takes to start streaming
[21:42] <vulture-> so it's not as low latency as x264 suggests
[21:42] <vulture-> but presumably you can change that
[21:42] <lacrymology> ok, let me explain the whole thing
[21:42] <JEEB> vulture-, that depends on the settings. libx264 itself isn't in low latency mode by default
[21:42] <JEEB> -tune zerolatency should set the tuning
[21:42] <vulture-> lacrymology: ok well what are the timestamps of each event that you're noticing?
[21:43] <JEEB> which turns certain knobs around to enable low latency in theory
[21:43] <vulture-> JEEB: not sure, just from friends that told me ffmpeg's defaults don't multithread it properly
[21:43] <vulture-> I've not done it myself
[21:43] <lacrymology> vulture-: I'm pretty much measuring by eye, thus far. But let me explain
[21:43] <JEEB> bullshit, multithreading is just fine. But yes, just like libx264 itself, the defaults aren't for low latency
[21:43] <vulture-> I did notice that that option was off in my compile as well
[21:43] <JEEB> wat
[21:43] <lacrymology> I'm making TV broadcasting software
[21:44] <JEEB> there's a lot of hearsay and herp derp going around here as far as I can see :P
[21:44] <JEEB> and I don't think you can even disable slice-based threading in libx264, unless you really fucking try aka "poke the source code"
[21:45] <JEEB> the default is frame-based, though, since that is better
[21:45] <JEEB> but
[21:45] <JEEB> --tune (or -tune with ffmpeg) zerolatency
[21:45] <vulture-> this was probably 2 or 3 years ago, but they were trustworthy people!
[21:45] <lacrymology> JEEB: as I said, I'm seeing the ffmpeg output, and it starts reading, and doesn't start streaming for the first 2.5 secs, at least (without zerolatency)
[21:45] <JEEB> yeah, sure they were...
[21:45] <vulture-> they said ffmpeg could do it if you changed some compile option
[21:45] <lacrymology> anyways
[21:45] <vulture-> anyways
[21:45] <JEEB> lacrymology, yes so you are herping a derp about NOT LOW LATENCY SETTINGS
[21:45] <JEEB> geez
[21:46] <lacrymology> ok, ok, I got that =)
[21:46] <vulture-> lacrymology: can you actually print out some quantitative results? print out the timestamps for the events you care about or something
[21:46] <vulture-> does "starts reading" mean you're opening a video device? or reading 200MB/sec rawvideo from disk?
[21:46] <JEEB> I hope you understand why libx264 would opt to have the more quality-giving settings by default instead of limiting a lot of stuff in order to give low latency :P
[21:49] <lacrymology> the thing is, the transmitter is melted, which is built on top of libmlt, which has consumers. A consumer can be sdl, decklink, or ffmpeg/libav (avformat, they call it), amongst others. I need to send preview video, as well as the decklink output, and it needs to be as in-sync as possible. I can make melted throw it out through more than one consumer at a time, but someone had the idea of just sending it out to multicast, and then have several clien
[21:50] <JEEB> vulture-, ffmpeg's libx264 usage was sane'ified around spring 2011 btw, before that you had to set options manually. x264 got a lot of the low-latency stuff by early 2010 http://x264dev.multimedia.cx/archives/249
[21:51] <JEEB> also as a fun fact: nvidia actually copied libx264's low-latency API for their hardware encoder
[21:51] <vulture-> yeah that would be after I was discussing this with my pals :P
[21:52] <JEEB> spring 2011 was when the -preset and related settings became usable from ffmpeg
[21:52] <JEEB> before that you just had to mimic them via the command line :)
[21:54] <vulture-> lacrymology: I've made low latency video streaming using both decklink and ffmpeg in live hospital procedure rooms, so I have at least some experience :)
[21:54] <vulture-> unfortunately, it's not possible as far as we could tell to get low enough latency for their real time requirements ;/
[21:54] <vulture-> but you can still do like 50ms latency no problem
[21:54] <JEEB> uhh
[21:55] <JEEB> I've heard of way better figures
[21:55] <JEEB> with libx264
[21:55] <vulture-> this is the whole pipeline
[21:55] <JEEB> not via ffmpeg/libavcodec of course
[21:55] <vulture-> capture->processing->output
[21:55] <JEEB> hmm
[21:55] <teratorn> vulture-: huh cool re: decklink streaming
[21:56] <teratorn> vulture-: is that through your work/company?
[21:56] <vulture-> actually we dont output back through the decklink card
[21:56] <vulture-> we output to the windows desktop which we pipe back out through vga out :D
[21:56] <vulture-> outputting back through decklink made it bluescreen :(
[21:56] <vulture-> even using their sample apps
[21:56] <teratorn> hahaha.
[21:56] <JEEB> fun
[21:56] <teratorn> fuckign binary drivers
[21:57] Action: teratorn has been playing with BM cards
[21:57] <vulture-> it took like a half dozen versions just to make it so ctrl+c'ing their sample capture apps didn't bluescreen either haha
[21:57] <vulture-> *versions of their drivers
[21:57] <teratorn> I like how their sample capture app captures all null bytes for the audio unless you choose exactly the right video display mode
[21:57] <vulture-> and yeah past job/school
[21:58] <vulture-> I think their example dshow capture format selection is the best example I've seen though, way better and more compatible with other capture cards than the dshow sdk examples
[21:59] <teratorn> meh; we're on Linux :)
[21:59] <teratorn> unlike virtually every other AV company
[21:59] <vulture-> haha
[21:59] <vulture-> advantages/disadvantages
[22:00] <teratorn> yeah
[22:00] <vulture-> I wrote a custom scheduling algorithm for this
[22:00] <vulture-> it ended up being slower because windows won't let you implement a custom scheduler ;(
[22:00] <teratorn> heh
[22:00] <vulture-> I had to write a virtualized user mode scheduler on top of windows, lots of overhead
[22:00] <vulture-> linux just... lets you do it ;(
[22:01] <teratorn> you basically have to write a rootkit to do anything on windows
[22:02] <vulture-> well, I'll take the win32api any day... it's just random quirky things like this that I wish I could do more
[22:03] <teratorn> maaan, win32 api? COM? please no
[22:03] <vulture-> COM no
[22:03] <teratorn> heh
[22:03] <vulture-> win32api itself is fine :D
[22:03] <teratorn> *A() *W() heh
[22:03] <vulture-> any time I have to use COM? write a .c wrapper around it, then use the .c
[22:04] <vulture-> pff A/W are fine
[22:04] <vulture-> T is stupid though :D
[22:05] <lacrymology> damn, I'm trying to use the multi-consumer and it won't work
[22:05] <vulture-> which one is that
[22:06] <lacrymology> vulture-: it's a.. multiplug consumer for melted. You write a small yaml file and tell it which consumers you want the output to go to. I set it up to go to both SDL and multicast, but it won't work
[22:06] <vulture-> no idea about that
[22:06] <lacrymology> I know
[22:10] <lacrymology> ok, I got it working.. now how do I really check latency?
[22:11] <lacrymology> I mean, I'm seeing 1.5secs between the two players, but that might be player buffering, so..
[22:12] <vulture-> according to my vlc the buffer is set to 300ms, but it takes several seconds usually to load up a video stream so I'm not sure I believe that, or maybe it also has an internal buffer, or dynamically changes it
[22:12] <vulture-> I would just print out the timestamps at each point in your process
[22:12] <vulture-> though if you're not programmatically receiving the data it might be hard to tell vlc to do that :P
[22:15] <Mista_D> Any way to detect a pixelated video with a filter(s)/tool(s)?
[22:16] <lacrymology> vulture-: what about playback through ffmpeg itself? does that even exist?
[22:16] <vulture-> ffmpeg might be able to read from your stream, ffplay does actual playback
[22:16] <vulture-> no idea if ffplay could read from a stream
[22:17] <vulture-> I haven't found a single ffmpeg example that shows how to safely handle streaming data ;(
[22:19] <lacrymology> I'm getting about 2 and some secs delay. But i've got no clue whether that's playback decoding or buffering, or encoding
[22:19] <vulture-> Mista_D: talking about the blocky artifacting from low bitrate jpeg/mpeg? maybe a 2D high pass filter and see if there's high values on block boundaries?
[22:21] <Mista_D> vulture- : cool idea, let me test...
[22:21] <lacrymology> ok, with --fflags nobuffer it gets down to <1s
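For reference, the same flag on the playback side looks like this, using the multicast stream from earlier:

    ffplay -fflags nobuffer -f mpegts udp://224.224.224.224:1234

-fflags nobuffer skips the demuxer's initial input buffering, which trims start-up latency.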
[22:21] <vulture-> yay
[22:22] <vulture-> are you using that -tune zerolatency thing?
[22:31] <lacrymology> vulture-: yes, yes I am
[22:32] <lacrymology> vulture-: also I'm using melted's multiconsumer which is buggy as hell, it breaks when you pause or stop playback, so I still sort of don't know how to measure delays
[22:32] <vulture-> <1s seems pretty reasonable tbh, if you have a complex pipeline
[22:37] <lacrymology> vulture-: I'm getting 8f
[23:00] <Keshl> This'll be fun: 1920x1080 @ 120 fps. What could possibly go wrong? =D?~!
[23:04] <ctolman> Trying to build FFMPEG using MSVC & get the "c99wrap cl is unable to create an executable file" error. The last line of the config.log is "Command line error D8021 : invalid numeric argument '/FiC:/Users/ctolman/AppData/Local/Temp/ffconf.cbIJyIpb.o_preprocessed.c' C compiler test failed." Any ideas on what I need to do?
[00:00] --- Wed Jun 19 2013

