[Ffmpeg-devel-irc] ffmpeg.log.20160505

burek burek021 at gmail.com
Fri May 6 02:05:01 CEST 2016


[00:05:50 CEST] <ajsharp> anyone here have experience with overlay? having an issue where the video stops if the main video is shorter than the pip: http://ffmpeg.gusari.org/viewtopic.php?f=11&t=2830&p=8517#p8517
[00:06:17 CEST] <bruber> ajsharp: check the man pages for ffmpeg-filters
[00:06:27 CEST] <bruber> ajsharp:  there's an option that addresses that
[00:06:38 CEST] <ajsharp> yea, i've already tried it to no avail :(
[00:06:51 CEST] <ajsharp> you're probably talking about "shortest"
[00:07:07 CEST] <bruber> ajsharp: eof_action
[00:07:13 CEST] <ajsharp> yea, tried that too
[00:07:38 CEST] <ajsharp> "The action to take when EOF is encountered on the secondary input; it accepts one of the following values:"
[00:07:44 CEST] <ajsharp> my problem is on the main input
[00:07:49 CEST] <ajsharp> when the main input is shorter than the secondary
[00:08:02 CEST] <ajsharp> i want the main to freeze and the secondary to continue
[00:08:40 CEST] <bruber> ahh. just swap your main and secondary inputs... you'll need to pad your overlay to the size of the main, which will slow down rendering, and you'll need to alpha mask the overlay where the pip is
[00:08:41 CEST] <ajsharp> maybe i need to detect which input is longer and always assign the main input to the longer one?
[00:09:30 CEST] <ajsharp> i guess i can just use file size as a proxy for which should be the main and which should be secondary?
[00:10:04 CEST] <bruber> I'm assuming you're using bash, or similar?
[00:10:30 CEST] <ajsharp> yea
[00:11:20 CEST] <ajsharp> why? is there some metadata i can read off the first few lines of the file or something?
[00:12:27 CEST] <bruber> ffprobe with a -show_streams and whatever option gets you the best output, pipe it to sed to get the times, pipe it to bc or similar to pick the longest, and keep track of it all w/ bash vars
[00:12:41 CEST] <ajsharp> right
[00:12:59 CEST] <ajsharp> is there a particular reason i wouldn't just use file size as a proxy if i want to keep it a lil simpler?
[00:12:59 CEST] <bruber> or write up a quick-n-dirty python/ruby/etc... script to do the same
[00:13:02 CEST] <ajsharp> (and faster)
[00:13:23 CEST] <bruber> compression ratio will probably differ, unless all of your sources are CBR
[00:13:38 CEST] <ajsharp> sources are uniform from ios devices
[00:14:06 CEST] <bruber> uniform codec, probably h.264.  differing content will cause varying bitrate
[00:14:18 CEST] <ajsharp> hmm
[00:14:18 CEST] <ajsharp> got it
[00:14:45 CEST] <bruber> ffprobe will report actual length (the actual length that's in the metadata... you can make it count frames, but it's slow)
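bruber's ffprobe suggestion can be sketched as a quick-n-dirty Python script, as an alternative to the bash/sed/bc pipeline. This is an illustrative sketch: file names are placeholders and the probe step assumes ffprobe is on PATH.

```python
#!/usr/bin/env python3
# Sketch: use ffprobe's reported container duration to decide which file
# becomes the main overlay input (the longer one).
import subprocess

def probe_duration(path):
    """Return the container duration in seconds, as reported by ffprobe."""
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-show_entries", "format=duration",
         "-of", "default=noprint_wrappers=1:nokey=1", path],
        capture_output=True, text=True, check=True)
    return float(out.stdout.strip())

def pick_main(paths, durations=None):
    """Pick the longest file as the main input.

    Probes each file unless a {path: seconds} mapping is supplied."""
    if durations is None:
        durations = {p: probe_duration(p) for p in paths}
    return max(paths, key=lambda p: durations[p])
```

File size is a poor proxy here, as bruber notes: with variable-bitrate h.264, a shorter but busier clip can easily be the bigger file.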
[00:15:09 CEST] <ajsharp> so, why is the pad and alpha necessary?
[00:15:30 CEST] <bruber> also, watch out for variable frame rate... don't know about ios devices, but some mobile/embedded video sources ***really*** want to use variable fps for efficiency
[00:15:40 CEST] <ajsharp> currently just doing this: [1:v]scale=iw/4:-1[ovrl]
[00:16:03 CEST] <ajsharp> @bruber will that crash the encoding? or do i need to normalize the frame rate?
[00:16:19 CEST] <ajsharp> b/c you're right, some devices record at different frame rates, or variable in some cases i believe
[00:19:21 CEST] <bruber> so, you're picking the longer video, and making it the primary input to overlay.  then, overlay can repeat the last frame of or end the secondary when it runs out of time...
[00:20:16 CEST] <ajsharp> right yea, the eof_action
[00:20:42 CEST] <bruber> ... if it happens that you want the longer video "on top" of the shorter, you'll need to do scale=iw/4:-1, then pad it up to the size of the secondary video (with the appropriate offsets)...
[00:22:01 CEST] <bruber> then, because it's primary to overlay, the secondary video, which is the original dimension, needs to have a 0.0 alpha hole in it where the primary will show through
[00:22:17 CEST] <ajsharp> right
[00:22:39 CEST] <ajsharp> sucks there isn't another way to say which video you want "on top"
[00:23:12 CEST] <bruber> on the fly, you can generate a greyscale image, all white with a black square in the place of the PIP, then feed it into the alphamerge filter
[00:25:54 CEST] <ajsharp> do i need to do that, or is this possible just with pad?
[00:26:21 CEST] <ajsharp> oh that's the alpha mask you were talking about
[00:26:25 CEST] <bruber> don't think it'll work out the way you want w/o alpha
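An untested sketch of the pad + alphamerge approach bruber describes: the longer video is scaled down to PIP size and padded up to the full frame, while the shorter, full-size video gets a transparent hole punched in it so the PIP shows through. The pad dimensions/offsets (1280x720 at 32,32) and `mask.png` are placeholder assumptions.

```shell
# mask.png: all white, with a black square where the PIP sits
FILTER='[0:v]scale=iw/4:-1,pad=1280:720:32:32[pip];
[1:v][2:v]alphamerge[holed];
[pip][holed]overlay=eof_action=pass[out]'
# ffmpeg -i long.mp4 -i short.mp4 -loop 1 -i mask.png \
#   -filter_complex "$FILTER" -map '[out]' -shortest out.mp4
```

With the longer file as input 0, `eof_action=pass` lets the main keep running after the shorter, holed layer ends.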
[00:26:32 CEST] <kepstin> hmm. maybe I should make a filter that just repeats the last frame of a video at constant fps. I've needed to do that a few times :/
[00:26:53 CEST] <ajsharp> @kepstin yep
[00:27:14 CEST] <pfelt> afternoon all. i'm playing with a decklink quad card where i'm trying to crop the video into two halves and output one half to each of two outputs.  i'm seeing performance issues: one cpu of 40 (hyperthreaded) is nearly maxed and the rest are idle.  is there a way to thread it such that i can spread the load without trying to use named pipes?
[00:36:47 CEST] <pfelt> or conversely, i do have to -pix_fmt uyvy422 on both output streams.  if i remove that option on one of the two it seems to be able to keep up.  is there any good way to do that uyvy convert once and not on both output streams?
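An untested sketch of doing the crop and a single packed-format conversion in one ffmpeg process: `format=uyvy422` is applied once before the split, so the conversion isn't repeated per output. The source URL and decklink device names are placeholders.

```shell
FILTER='[0:v]format=uyvy422,split=2[a][b];
[a]crop=iw/2:ih:0:0[left];
[b]crop=iw/2:ih:iw/2:0[right]'
# ffmpeg -i udp://239.0.0.1:1234 -filter_complex "$FILTER" \
#   -map '[left]'  -f decklink 'DeckLink Quad (1)' \
#   -map '[right]' -f decklink 'DeckLink Quad (2)'
```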
[00:37:11 CEST] <bruber> Speaking of overlays and multiple inputs... I need to take two videos, time stretch one by a known number of frames, and overlay them...  say, it's 20 frames over the course of a 50000 frames... I should be able to do something like setpts=PTS+20*N/50000 ... right?  But, when I then use that as an input to overlay, I don't get the drift I'm looking for...
[00:39:15 CEST] <bruber> pfelt:  the pixel format isn't a conversion.  that's the data format being delivered by the decklink.  it's also really fast to convert to a planar format, which is probably happening in your filter graph anyway.   it's almost certainly not the source of the performance problem
[00:39:50 CEST] <bruber> pfelt:  what resolution and bit depth are you using w/ the decklink?
[00:39:51 CEST] <pfelt> bruber: it's just odd that i can keep up fine up till i add that to my outputs
[00:40:10 CEST] <pfelt> 1920x1080.  i believe uyvy422 is 8bit ?
[00:40:26 CEST] <bruber> pfelt: ahh... sorry.. misread.  decklink is the output.
[00:40:34 CEST] <pfelt> yep :(
[00:41:00 CEST] <pfelt> ideally i could thread output a frame to each decklink, but synchronization might be a problem
[00:41:26 CEST] <bruber> still... it's fast to go from planar to packed... just try it rendering out to a nullsink to see the performance difference.
[00:41:47 CEST] <pfelt> i think i'm able to get to about 27fps, where i need 30.  so i'm only slightly behind
[00:41:56 CEST] <pfelt> ok.  i can try that.
[00:41:59 CEST] <bruber> 30, or 29.97?
[00:42:08 CEST] <pfelt> 29.97.  /me is lazy
[00:42:13 CEST] <pfelt> and i know it matters here
[00:42:15 CEST] <pfelt> :D
[00:42:24 CEST] <bruber> ok... make sure you're not trying to do 30... use 30000/1001
[00:42:45 CEST] <bruber> or 29.97002997
[00:43:06 CEST] <bruber> otherwise, the NTSC gods will smite you
[00:43:30 CEST] <pfelt>     Stream #3:0: Video: wrapped_avframe, uyvy422, 1920x1080 [SAR 1:1 DAR 16:9], q=2-31, 200 kb/s, 29.97 fps, 29.97 tbn
[00:43:47 CEST] <pfelt> that's interesting though.  so you were right.  when going to /dev/null i'm getting 32fps
[00:44:14 CEST] <pfelt> though my proc seems to be about equally utilized
[00:44:28 CEST] <pfelt> so maybe this is really an issue with getting frames out to the decklink
[00:45:00 CEST] <bruber> I don't know anything about the queue structure of the decklink output driver... I do know that the hardware outputs don't share a phase-locked reference clock
[00:45:21 CEST] <bruber> so, your fear of sync problems is valid
[00:45:29 CEST] <bruber> s/is/may be/
[00:46:25 CEST] <bruber> I deal with this on a daily basis, but from multiple decklink/intensity devices (separate PCIe cards on the same backplane)
[00:46:54 CEST] <bruber> and on the input side... I don't use them for output, but ingest...
[00:47:35 CEST] <bruber> the only way to deal with it there, and stay sane, is to align all of the sources to a common reference clock
[00:48:29 CEST] <bruber> even frame-sync devices are just cheats, and the jitter they introduce starts to matter with certain types of content
[00:54:06 CEST] <bruber> I should have said, the hardware *inputs* don't share a phase-locked clock... They're seen as separate cards at the kernel level, and can even be set to different formats & frame rates...  I don't actually know about the outputs
[00:55:13 CEST] <bruber> ... and I don't actually use ffmpeg to interface with them...  just libavformat/libavcodec to do my heavy lifting...
[01:03:21 CEST] <bruber> pfelt:  what's your source?
[01:12:09 CEST] <rsully> I'm using the compile guide for ubuntu on the site and I added webp to it, but when I run ffmpeg it tells me it can't find libwebp.so.6 - is there something special I need to do to make it statically linked?
[01:18:49 CEST] <pfelt> bruber: source is a multicast stream.  i've even tried scaling that way up to 3840x2160 so that i don't have to scale the outputs
[01:19:10 CEST] <pfelt> bruber: so did you write your ingest code?  we've tried, with no luck, to use ffmpeg for ingest over sdi
[01:19:18 CEST] <pfelt> (these are actually the new dual quad cards)
[01:27:18 CEST] <pfelt> and the more outputs i turn on, the slower the video apparently plays. it really feels like i'm playing at about 95% of actual speed
[01:28:51 CEST] <llogan> rsully: why are you using webp? i've never heard of anyone using that.
[01:51:55 CEST] <rsully> llogan I mean the point of having a box with ffmpeg is to have something I can throw literally anything at - if webp is supported why wouldn't I compile with it?
[01:53:42 CEST] <c_14> rsully: well, you need to compile libwebp statically
[01:54:26 CEST] <llogan> ffmpeg has a native webp decoder
[01:56:43 CEST] <llogan> rsully: i assume you're going to try to attempt to enable every conceivable external library then?
[01:57:23 CEST] <rsully> llogan not all - but a good amount. compiles fine, so far I've only gotten to libwebp runtime error - not sure if the rest work
[01:58:16 CEST] <rsully> using this configure: http://pastie.org/private/yksf3sb6okxj4ls4o2zya
[01:58:51 CEST] <rsully> based off of my build on my Mac from homebrew
[01:59:19 CEST] <llogan> "--enable-hardcoded-tables"
[02:01:06 CEST] <llogan> you don't need --enable-librtmp. the native implementation should be just fine.
[02:01:20 CEST] <llogan> --enable-pthreads is autodetected so you don't need that
[02:02:32 CEST] <rsully> any pro/con for hardcoded tables?
[02:02:36 CEST] <rsully> I'll drop the other 2
[02:04:43 CEST] <llogan> see note in https://trac.ffmpeg.org/wiki/CompilationGuide
[02:04:46 CEST] <rsully> and I compiled libwebp with --enable-static
[02:05:12 CEST] <rsully> kk so not a big deal
[03:03:13 CEST] <rsully> llogan any other ideas? still can't get this to link statically
[03:05:43 CEST] <llogan> rsully: you can just skip stupid webpee
[03:06:06 CEST] <rsully> libwebp*
[03:06:15 CEST] <rsully> does ffmpeg come with good support for encode/decode of that?
[03:06:25 CEST] <rsully> not that this should be hard to fix anyways
[03:06:54 CEST] <llogan> it comes with a native weburine decoder
[03:10:33 CEST] <furq> rsully: you can try adding -Wl,-Bstatic to extra-ldflags to force everything to be static linked
[03:11:23 CEST] <rsully> I am trying "--extra-cflags=--static --extra-libs=-static" now, and having to compile some dependencies from scratch since the OS packages do not ship the static files
[03:11:41 CEST] <furq> i don't think either of those are things
[03:11:50 CEST] <rsully> well it certainly broke my ./configure step
[03:11:53 CEST] <furq> -static or -Wl,-Bstatic would go in extra-ldflags
[03:12:11 CEST] <rsully> I got those 2 flags from https://ffmpeg.org/pipermail/ffmpeg-user/2011-August/002112.html
[03:12:15 CEST] <c_14> there's --pkg-config-flags=--static or something
[03:12:28 CEST] <rsully> I already had that one
[03:12:38 CEST] <furq> that wouldn't make a difference
[03:12:48 CEST] <furq> that just tells pkg-config to also spit out the Libs.private section
[03:17:51 CEST] <rsully> I'll try without libwebp for the moment just to see if it actually works, or if there is another problem lurking behind this corner
[03:18:59 CEST] <relaxed> you need --extra-ldflags="-L$HOME/ffmpeg_build/lib -static"
[03:19:51 CEST] <rsully> ah ok, I will try that next
[03:19:59 CEST] <furq> they should both do the same thing here
[03:20:15 CEST] <relaxed> both of what?
[03:20:23 CEST] <furq> -static and -Wl,-Bstatic
[03:20:45 CEST] <relaxed> I don't use the latter
[03:20:53 CEST] <relaxed> and --pkg-config-flags="--static" is required
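Putting relaxed's and furq's suggestions together, a static-link configure invocation might look roughly like this; the `$HOME/ffmpeg_build` prefix and the particular `--enable-*` libraries are assumptions borrowed from the compile guide, not a verified recipe.

```shell
PKG_CONFIG_PATH="$HOME/ffmpeg_build/lib/pkgconfig" ./configure \
  --prefix="$HOME/ffmpeg_build" \
  --pkg-config-flags="--static" \
  --extra-cflags="-I$HOME/ffmpeg_build/include" \
  --extra-ldflags="-L$HOME/ffmpeg_build/lib -static" \
  --enable-gpl --enable-libass --enable-libwebp
```

As the rest of this conversation shows, `-static` then requires a `.a` for every enabled library (and its dependencies, e.g. libfribidi for libass), which apt's `-dev` packages don't always ship.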
[03:21:04 CEST] <furq> yeah it is but it won't solve this problem
[03:22:00 CEST] <furq> out of interest, do you static link a libc into your binaries
[03:22:10 CEST] <furq> i assume you have to
[03:22:18 CEST] <relaxed> I do, glibc
[03:22:21 CEST] <rsully> ok so without libwebp, ffmpeg does run without lib errors
[03:22:24 CEST] <furq> oh
[03:22:27 CEST] <furq> i didn't think that would work properly
[03:22:38 CEST] <rsully> I will add libwebp again now and try with the above flags
[03:22:46 CEST] <furq> not that i've ever needed to, but i read it on the almighty stack overflow
[03:23:14 CEST] <rsully> relaxed is there a benefit to linking glibc static?
[03:23:22 CEST] <furq> portability
[03:23:49 CEST] <furq> i'm not sure how much difference it makes on OSX, although it wouldn't be glibc there
[03:24:16 CEST] <rsully> I'm compiling for ubuntu
[03:24:31 CEST] <furq> oh
[03:24:35 CEST] <furq> why did i think you were on OSX
[03:24:44 CEST] <relaxed> I think he is
[03:24:46 CEST] <rsully> I mentioned homebrew, because that is where I copied my configure flags from
[03:24:50 CEST] <furq> ah
[03:25:13 CEST] <furq> well yeah static linking glibc is only of value if you want to use the binaries on any linux machine
[03:25:22 CEST] <rsully> so adding -static to extra-ldflags gives me "libass not found using pkg-config"
[03:25:40 CEST] <furq> did you do --pkg-config-flags="--static"
[03:25:41 CEST] <relaxed> rsully: now you're getting somewhere!
[03:25:45 CEST] <rsully> now I did compile libass in ~/ffmpeg_build
[03:25:50 CEST] <furq> or --pkg-config="pkg-config --static" works too
[03:25:51 CEST] <rsully> furq yes I've had that this whole time
[03:25:56 CEST] <relaxed> look at config.log
[03:25:58 CEST] <furq> ^
[03:26:09 CEST] <rsully>  /usr/bin/ld: cannot find -lfribidi
[03:26:16 CEST] <furq> fun
[03:26:27 CEST] <relaxed> down the rabbit hole you go
[03:26:36 CEST] <rsully> libfribidi-dev is installed from apt
[03:26:47 CEST] <furq> did it install a static lib
[03:26:52 CEST] <rsully> mm how can i check
[03:27:06 CEST] <furq> https://packages.debian.org/stretch/amd64/libfribidi-dev/filelist
[03:27:08 CEST] <furq> apparently not
[03:27:09 CEST] <rsully> (though I'm guessing not)
[03:27:27 CEST] <furq> find /usr/lib -name libfribidi*.a
[03:27:31 CEST] <furq> if you want to make absolutely sure
[03:27:40 CEST] <rsully> not found
[03:27:44 CEST] <rsully> (i did /)
[03:28:00 CEST] <furq> it'll always be /usr/lib if you installed from apt
[03:28:08 CEST] <rsully> so next step, compile that?
[03:28:13 CEST] <furq> looks like it
[03:28:23 CEST] <furq> or file a debian bug, since that should definitely be in the dev package
[03:28:28 CEST] <furq> and then wait six months
[03:28:31 CEST] <rsully> hehe
[03:28:36 CEST] <furq> or even longer, actually, since you're on ubuntu
[03:29:16 CEST] <relaxed> you'll need to compile static libs of the dependencies for each external lib you --enable-* for ffmpeg
[03:29:27 CEST] <furq> well most of them should be available in apt
[03:29:33 CEST] <relaxed> but don't worry, it's really fun :/
[03:30:16 CEST] <furq> i have a makefile (and ~40 sub-makefiles) which does all this, but it's only for mingw
[03:30:23 CEST] <furq> i should probably genericise it, if that's even a word
[03:30:32 CEST] <rsully> it would sure help people like me ;D
[03:31:33 CEST] <rsully> I am documenting all my commands though
[03:32:22 CEST] <furq> it wouldn't be that hard, i just lack a decent way of testing the builds
[03:32:43 CEST] <furq> it's already enough of a pain doing it for mingw without worrying about other platforms
[03:33:51 CEST] <rsully> I'm getting some weird error now
[03:34:58 CEST] <rsully> tail of config.log: http://pastie.org/private/61uev6m1dvymq12fmzq
[03:37:45 CEST] <furq> you probably need to recompile libzmq with libsodium support
[03:37:54 CEST] <rsully> libzmq came from apt
[03:38:04 CEST] <furq> yeah i checked and the deb package doesn't depend on libsodium
[03:38:07 CEST] <furq> but those are libsodium symbols
[03:38:27 CEST] <furq> alternatively you could just not include zeromq because i have no idea what it's even for
[03:38:32 CEST] <llogan> are you ever going to use the zmq filter?
[03:38:42 CEST] <furq> i've never heard of anyone using it
[03:38:48 CEST] <furq> in ffmpeg, that is
[03:39:10 CEST] <rsully> I mean I'd like to compile with all available options that I *could* potentially use just for sake of having a single kitchen sink binary
[03:40:00 CEST] <llogan> honestly, that sounds like a waste of time
[03:40:01 CEST] <furq> that's nice but i can almost guarantee you'll never use zmq
[03:40:12 CEST] <rsully> alright I'll comment out that option for now
[03:40:12 CEST] <furq> or most of the other libs, for that matter, but if they compile then who cares
[03:40:40 CEST] <llogan> you can always re-compile later. and if you're using ffmpeg often then you should re-compile occasionally to keep up to date
[03:40:45 CEST] <rsully> yep
[03:41:23 CEST] <furq> out of interest, what's missing from relaxed's builds that you need
[03:41:36 CEST] <rsully> I haven't seen his builds
[03:41:43 CEST] <furq> http://johnvansickle.com/ffmpeg/
[03:41:59 CEST] <rsully> first thing that stands out is libfdk
[03:42:06 CEST] <furq> ah
[03:45:33 CEST] <rsully> removing libzmq makes configure succeed, but make fails near end
[03:45:33 CEST] <rsully> http://pastie.org/private/bafbwegpfipa46ieqoqgq
[03:53:19 CEST] <rsully> gotta go for the night, I'll be back tomorrow to continue this 'fun'. thanks for the help
[04:37:22 CEST] <pfelt> are there any good ways to synchronize streams coming out of multiple ffmpeg commands?
[04:53:28 CEST] <pfelt> i suppose i need to provide more information for that question.  i'm outputting to 4 decklink cards from 4 different ffmpeg processes.  i need to synchronize them somehow
[05:13:11 CEST] <zarathushtra> #0:0 (h264 (native) -> h264 (libx264)) -> trying to resize a .mkv... but receiving "Error while opening encoder for output stream #0:0"
[05:16:05 CEST] <zarathushtra> also putting some subs
[05:18:32 CEST] <zarathushtra> i think it worked
[05:18:34 CEST] <zarathushtra> lol
[06:39:46 CEST] <Tynach> Hi, I'm having troubles with the last frame of variable framerate files. Everything works great except the very last frame, which has the wrong duration in almost all cases. I'm using '-vsync vfr -copytb 1 -copyts -r 20'. The '-r 20' at the end is needed or it pushes timestamps to the nearest half second or so. I'm working with animated .gif files.
[06:40:53 CEST] <Tynach> I'm wanting to encode the .gif files into .mp4 files, with the H.264 codec.
[06:47:26 CEST] <Tynach> If there's no way to fix the last frame's duration, is there at least a way for me to send the last frame twice, the second time having a pts that's 1/fps (fps being 20 in this case, so 0.05) seconds less than the duration of the input file?
[08:58:32 CEST] <fling> Which preset to use for a noisy webcam?
[09:04:55 CEST] <operator> ffmpeg says: Stream #0:1: Audio: none, 48000 Hz, 2 channels, 2304 kb/s
[09:05:33 CEST] <operator> and cannot play video. How can I tell ffmpeg which codec to use?
[09:34:51 CEST] <EugenA> how to force ffmpeg to play specific audio codec? any help is welcome :-)
[09:50:39 CEST] <durandal_1707> EugenA: you mean decode? -c:v codec_name
[09:51:29 CEST] <durandal_1707> -c:a pcm_s24le
[09:51:42 CEST] <durandal_1707> for audio
[10:15:33 CEST] <EugenA> durandal_1707: http://pastebin.com/awg4DPwm
[10:16:43 CEST] <EugenA> durandal_1707: ffmpeg cannot recognize audio codec, but it should be pcm_s24le
[10:17:26 CEST] <EugenA> so what can I do? file is corrupted? file is created by blackmagic intensity pro 4k hardware
[10:18:23 CEST] <EugenA> it worked before.. other files are normally convertible by ffmpeg
[10:25:51 CEST] <durandal_1707> EugenA: put -c:a pcm_s24le in front of your ffmpeg input
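What "in front of your ffmpeg input" means concretely: input (decoder) options must precede the `-i` they apply to. File names here are placeholders, and forcing the decoder only helps if the stream really is 24-bit little-endian PCM.

```shell
# Force the audio decoder for the input, then stream-copy on the way out:
ffmpeg -c:a pcm_s24le -i capture.mov -c copy out.mov
```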
[14:05:13 CEST] <Prelude2004c> hey guys..  good morning. can anyone tell me why i am getting the errors at the bottom
[14:05:14 CEST] <Prelude2004c> http://pastebin.com/raw/kkX1CESg
[14:07:27 CEST] <J_Darnley> You keep pressing c
[14:07:40 CEST] <jkqxz> Whatever you are sending to stdin is wrong.
[14:08:09 CEST] <topcat> would anyone know if I should be able to run ffmpeg in MS' new cut down nanoServer? I run the latest static x64 exe --help and nothing is returned.
[14:10:23 CEST] <J_Darnley> It being Microsoft I bet some error message has popped up on some display you can't see.
[14:10:54 CEST] <Prelude2004c> hey .. sorry
[14:10:57 CEST] <Prelude2004c> http://pastebin.com/raw/j7Fzx7bP
[14:11:01 CEST] <topcat> J_Darnley: lol.
[14:11:06 CEST] <Prelude2004c> that is the correct pastebin..
[14:11:43 CEST] <Prelude2004c> i am very stuck.. tried to use vdpau to decode but it keeps crashing the nvidia card.. using the library built by nvidia does not crash it but now i have other errors :( hoping to get some help on it
[14:12:00 CEST] <J_Darnley> WTF is all that?
[14:12:22 CEST] <Prelude2004c> J_Darnley ? you talking to me ?
[14:12:34 CEST] <J_Darnley> Yes.
[14:13:05 CEST] <Prelude2004c> btw, the idea is this... i have a list of input files.. i want to loop out into fifo.. because i want the second thing to listen and segment. but i don't want the segmenter to close.. i want it to sit there and listen to the input
[14:13:46 CEST] <Prelude2004c> as the input will be in a loop.. so it will just keep shuffling the input files..sending them to nvtranscoder to transcode .. and then back out to segment
[14:14:47 CEST] <Prelude2004c> so excuse anything silly you see there with stdin.. i've been playing with settings and now i don't know which are correct anymore :)
[14:15:48 CEST] <Prelude2004c> i have been really struggling with this for a few days.. tried vdpau too with no success.. i am using nvtranscoder successfully with a UDP source input ( live stream ) but.. .now the issue is when i am trying to serve up static files :(
[14:18:15 CEST] <Prelude2004c> any suggestions ?
[15:18:25 CEST] <bostonmacosx> Hello there
[16:08:53 CEST] <IneedHELP> Good day, I need hopefully simple help.
[16:08:53 CEST] <pgorley> From what I've read, if I want to implement hardware acceleration, I have to implement it separately for each codec and for each platform/API I want to support. Is this correct?
[16:10:21 CEST] <jkqxz> pgorley:  Pretty much yes.  The platform-specific elements of everything are just fatally different, however much people try to write "generic" APIs for it.
[16:10:35 CEST] <pgorley> Ok, thanks
[16:10:52 CEST] <IneedHELP> I have a Canon XF100. it gives a video in several .MXF files.  can I use this software to make it into one more popular format such as MP4?
[16:13:44 CEST] <relaxed> IneedHELP: yes, see https://trac.ffmpeg.org/wiki/Encode/H.264
[16:15:05 CEST] <relaxed> IneedHELP: and https://trac.ffmpeg.org/wiki/Concatenate
[16:25:54 CEST] <Fyr> how to set language for audio track?
[16:35:49 CEST] <Prelude2004c> hey, anyone know what this means " Timestamps are unset in a packet for stream 0. This is deprecated and will stop working in the future. Fix your code to set the timestamps properly "
[16:35:58 CEST] <Prelude2004c> getting lots of errors when converting a video
[16:36:17 CEST] <Fyr> is your output file zero-sized?
[16:36:36 CEST] <Prelude2004c> no, there is data
[16:36:41 CEST] <Fyr> cool
[16:36:42 CEST] <Prelude2004c> the source is doing this.. [mpegts @ 0x3393b20] Application provided invalid, non monotonically increasing dts to muxer in stream 0: 3952800 >= -9223372036854775807
[16:37:20 CEST] <Prelude2004c> still struggling to have this stuff working :( .. been at it for days.. nobody can help me so far
[16:42:36 CEST] <Prelude2004c> also tons of these too
[16:42:37 CEST] <Prelude2004c> [mpegts @ 0x2156b20] pts (230400) < dts (234000) in stream 0
[16:42:37 CEST] <Prelude2004c> [mpegts @ 0x2156b20] pts (237600) < dts (241200) in stream 0
[16:42:37 CEST] <Prelude2004c> [mpegts @ 0x2156b20] pts (244800) < dts (248400) in stream 0
[16:46:16 CEST] <c_14> Fyr: -metadata:s:a:0 lang=
[16:46:34 CEST] <Fyr> ok
[16:46:43 CEST] <Fyr> how to embed chapters?
[16:47:18 CEST] <c_14> do you have a chapter file or does the source have chapters?
[16:47:38 CEST] <Fyr> chapter file
[16:50:08 CEST] <c_14> Hmm, not sure if/how ffmpeg supports that. Either try adding it as an input and using -map_chapters or you'll have to use something like mkvtoolnix or mp4box
[16:50:50 CEST] <Jepp> Is there a way of encoding a video such that B and P frames will not contain I-macroblocks (and only B and P macroblocks respectively)?
[16:51:20 CEST] <kepstin> Jepp: with x264?
[16:51:45 CEST] <Jepp> kepstin: with anything
[16:51:51 CEST] <kepstin> maybe, but you'd probably have to use their api directly rather than go through ffmpeg, and it would reduce coding efficiency
[16:52:44 CEST] <kepstin> that doesn't seem like something that would actually be useful to do - why do you want it?
[16:53:00 CEST] <Jepp> using mpegflow for motion estimation
[16:53:21 CEST] <Jepp> i'm extracting motion vectors from B and P frames
[16:53:33 CEST] <Jepp> and whenever there's an I macroblock in one of those, it just gives no information
[16:55:02 CEST] <kepstin> right, but you have to remember that motion vector coding in video is a way of improving coding efficiency - it doesn't follow actual motion, it just points you to blocks that are sufficiently similar that there's a coding efficiency gain if you use one as a base
[16:55:44 CEST] <kepstin> if you want motion vectors - and for all frames even, including i-frames, why not just run only a motion vector search over the video?
[16:56:14 CEST] <Jepp> what do you mean by "motion vector search"? do you mean something like optical flow?
[16:57:08 CEST] <iive> it is usually referred as Motion Estimation
[16:59:15 CEST] <Jepp> well, the thing is that optical flow just takes too much time, and i can lose a little accuracy for getting an efficiency gain
[16:59:52 CEST] <Jepp> extracting motion vectors using mpegflow gives me just that, but it's too inaccurate because of the I macroblocks, I believe
[17:00:42 CEST] <kepstin> hmm, you could probably speed it up by using the estimator extracted from a video codec to find candidate motion vectors rather than run a full optical flow analyzer (codecs have a lot of heuristics to make it fast, limit search area)
[17:01:37 CEST] <kepstin> would be faster than doing a full video encode anyways
[17:02:09 CEST] <Jepp> by feeding the flow i get from the codec motion vectors as an initial guess to an optical flow algorithm?
[17:02:46 CEST] <Jepp> encoding is done server-side for me, so i don't care about processing time there too much...
[17:02:47 CEST] <Fyr> c_14, http://pastebin.com/wRdZGDdu
[17:02:48 CEST] <Fyr> this command sets English to every audio channel.
[17:03:14 CEST] <kepstin> you'd run the codec's motion estimator standalone to replace the step you're currently doing of encoding the video then decoding with mpegflow to get the motion vectors
[17:05:08 CEST] <Jepp> so i could get motion vectors for the entire frame without doing the encoding - i see.
[17:05:14 CEST] <Jepp> (if that's what you mean)
[17:05:43 CEST] <kepstin> yeah. and you'd get them for all frames and all blocks, rather than only predicted blocks on predicted frames
[17:07:04 CEST] <Jepp> sounds great! i can use their api for that, right?
[17:11:33 CEST] <kepstin> not sure, I don't think most codecs expose that as api since it's an internal implementation detail. You *might* be able to use the mpeg or snow one from ffmpeg? But i dunno if that's public or internal api.
[17:13:36 CEST] <Jepp> hmm, i'm kind of lost here. could you give me a hint on how to find how to do that? i'm not that familiar with codecs and ffmpeg :( what should i google?
[17:16:15 CEST] <kepstin> hmm. looking at the vf_mcdeint.c code might be interesting, that uses the snow encoder in ffmpeg in a motion estimation only mode. Other than that, you'd probably end up copy/pasting some motion estimation from a codec into your application :/
[17:16:38 CEST] <Jepp> sounds like a lot of fun :)
[17:16:47 CEST] <Jepp> thanks a lot, i'll look into that!
[17:20:11 CEST] <Workflow> Hi Guys, trying to take snapshots from a mp4 file using fps, but starting at a specific start time using -ss, but it seems just to be dropping everything, -t works in terms of setting a duration, but it starts from 00:00:00
[17:20:14 CEST] <Workflow> Any ideas?
[17:21:59 CEST] <Fyr> Workflow, what are you trying to do?
[17:22:55 CEST] <Workflow> Fyr, I have a recording which is 2 hours long and i'm trying to take snapshots from the video every 2 minutes in the quickest way possible
[17:23:38 CEST] <Workflow> So I thought of splitting up into multiple processes to start at specific times in the video for a number of minutes
[17:24:33 CEST] <Fyr> Workflow, http://trac.ffmpeg.org/wiki/Create%20a%20thumbnail%20image%20every%20X%20seconds%20of%20the%20video
[17:24:35 CEST] <Workflow> Unless there is a quicker way to take snapshots from a mp4 every 2 minutes?
[17:27:47 CEST] <__jack__> Workflow: all your process will have to read and decode the movie, won't be efficient
[17:29:27 CEST] <Workflow> I see, understood. So just lobbing loads of CPU at the one process is the best way to go?
[17:37:03 CEST] <__jack__> yep
[17:39:44 CEST] <paperManu> hi, I have updated some code so that it compiles with ffmpeg 3.x (following the deprecation warnings and errors), but I get a very high cpu load when reading a video while compiled against ffmpeg 3.0.2
[17:40:00 CEST] <paperManu> is there a guide or something to move from ffmpeg 2.8 to 3.x ?
[18:48:15 CEST] <llogan> paperManu: how can we duplicate this issue?
[18:48:48 CEST] <zarathushtra> im reencoding a .mkv into a .mp4... its working the way I want but i need to resize it at the same time... im using "ffmpeg -i $1.mkv -c:a mp3 -vf subtitles=$1.srt:charenc=CP1252 got-s${T}e$1.mp4", i need something like -vf screen=-1:480... how would that go?
[18:49:21 CEST] <kepstin> zarathushtra: you can put multiple filters into the -vf list separated by commas
[18:49:34 CEST] <zarathushtra> a-ha
[18:49:52 CEST] <zarathushtra> -vf subtitles=whatever.srt,screen=xxxx
[18:49:57 CEST] <zarathushtra> this way?
[18:50:28 CEST] <llogan> scale=-2:480,subtitles=foo.srt
[18:50:30 CEST] <Fyr> isn't it rendering subtitles into the video?
[18:50:50 CEST] <zarathushtra> scale! yea
[18:50:55 CEST] <zarathushtra> whats the difference with -2 and -1?
[18:51:07 CEST] <zarathushtra> Fyr: the subs are ok, but i want to resize the video too :)
[18:51:14 CEST] <llogan> -2 is like -1, but ensures the values is divisible by 2
[18:51:33 CEST] <JEEB> because we needed even more of non-guessable values \o/
[18:51:36 CEST] <llogan> which is needed when encoding with libx264 with 4:2:0 chroma subsampling
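A rough model of why `-2` matters (this mimics, but is not guaranteed to match exactly, ffmpeg's internal rounding): scaling to a fixed height while preserving aspect can give an odd width, which 4:2:0 chroma subsampling can't represent, so the width gets bumped to the nearest even value.

```python
def even_width(iw, ih, target_h):
    """Width for scale=-2:target_h: aspect-preserving, forced even."""
    w = round(iw * target_h / ih)
    return w if w % 2 == 0 else w + 1
```

For a 1920x1080 source scaled to height 480, the true aspect width is 853.33, which `-1` style rounding would leave odd; `-2` yields 854 and libx264 stops complaining.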
[18:51:45 CEST] <zarathushtra> oooh cool
[18:52:00 CEST] <zarathushtra> i lost many hours trying to figure out those kind of errors lol
[18:52:20 CEST] <zarathushtra> thanks for pointing the right direction! its streaming now... lets see the result in a few minutes, but it should work now! thanksss :)
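[Editor's note: a minimal sketch of the combined command discussed above. Filenames are placeholders; scale is placed before subtitles so the subtitles are rendered at the output resolution, and -2 auto-computes a width that stays divisible by 2, as libx264 needs with 4:2:0 chroma subsampling.]

```shell
# build the filter chain first so it is easy to inspect;
# scale=-2:480 resizes to 480p with an even, auto-computed width
vf="scale=-2:480,subtitles=input.srt:charenc=CP1252"

# run the conversion only if the input is actually present
if [ -f input.mkv ]; then
  ffmpeg -i input.mkv -c:a mp3 -vf "$vf" output.mp4
fi
```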
[18:52:52 CEST] <llogan> it's possible you can simply stream copy the audio instead of re-encoding it
[18:53:28 CEST] <zarathushtra> i know, but i dont think my TV accepts aac :(
[18:58:32 CEST] <t4nk247> Hi guys, I was compiling ffmpeg for android. but I am facing this error  arm-linux-androideabi/bin/ld: error: cannot find -lfdk-aac how can I solve it?
[18:59:00 CEST] <Mavrik> you need to compile FDK-AAC for Android if you want to use it
[18:59:11 CEST] <t4nk247> And it finally terminates with collect2: error: ld returned 1 exit status ERROR: libfdk_aac not found
[18:59:50 CEST] <t4nk247> I have compiled fdk-aac and the static library can be found at /usr/local/lib
[19:00:09 CEST] <Mavrik> Did you compile it for Android with NDK compilers?
[19:00:19 CEST] <Mavrik> For armeabi like the ffmpeg binary?
[19:01:21 CEST] <t4nk247> No. I just compiled it with --enable-shared --enable-static flags
[19:05:14 CEST] <Mavrik> Well your ARM phone won't run x86 code :)
[19:06:23 CEST] <llogan> we need someone to provide ARM ffmpeg binaries
[19:07:52 CEST] <JEEB> llogan: but not with something that is non-distributable :P
[19:08:02 CEST] <JEEB> (fdk-aac is such)
[19:08:17 CEST] <llogan> i wasn't suggesting that
[19:08:18 CEST] <Mavrik> llogan, for which flavor? :P
[19:08:26 CEST] <Mavrik> armeabi ffmpeg is beyond useless :)
[19:08:28 CEST] <JEEB> aand yeah, ARM is a mess like that
[19:08:57 CEST] <llogan> Mavrik: whichever one someone gives to me
[19:09:18 CEST] <relaxed> I plan to have static arm builds soon.
[19:09:18 CEST] <Mavrik> hm?
[19:09:34 CEST] <llogan> good news, everyone.
[19:09:58 CEST] <JEEB> anyways, I build FFmpeg for my mpv-android builds and it doesn't really get simpler than that. of course if you need something like libvorbis, libopus or libx264 then you'll have to poke that, too :P
[19:10:09 CEST] <Mavrik> yp
[19:10:32 CEST] <Mavrik> It's still annoying because there's very perceptible performance difference between armeabi, armeabi-v7a + NEON and arm64-v8a builds.
[19:10:44 CEST] <Mavrik> And that doesn't even include all those silly people trying to use RPis for video processing.
[19:11:25 CEST] <JEEB> I loved how the Rpi1's CPU wasn't fast enough for realtime AAC
[19:11:45 CEST] <JEEB> "I can encode this video with the HWenc just fine, but muh audiooo... :<"
[19:11:52 CEST] <Mavrik> :P
[19:12:11 CEST] <Mavrik> I remember using it for Kodi and it couldn't handle DTS decode.
[19:12:19 CEST] <JEEB> exactly
[19:29:47 CEST] <t4nk247> This was the command that I used to compile fdk_aac: export CC=arm-linux-androideabi-gcc; function build_fdk_aac { ./configure --enable-static --enable-shared; make; make install; make distclean; ldconfig; }; build_fdk_aac
[19:30:06 CEST] <t4nk247> but I am still getting ERROR: libfdk_aac not found
[19:31:03 CEST] <EugenA> is it possible to write and read a video file at the same time (for broadcast)?
[19:37:05 CEST] <furq> writing and reading video files is more or less what ffmpeg does
[19:38:35 CEST] <t4nk247> This was the command that I used to compile fdk_aac: export CC=arm-linux-androideabi-gcc; function build_fdk_aac { ./configure --enable-static --enable-shared; make; make install; make distclean; ldconfig; }; build_fdk_aac  but I am still getting ERROR: libfdk_aac not found
[19:38:42 CEST] <t4nk247> Where am I mistaken?
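[Editor's note: the problem Mavrik points at is that exporting CC alone does not cross-compile; configure still targets the build machine, producing an x86 library the ARM linker cannot use. A hypothetical sketch assuming a standalone NDK toolchain on the PATH; the triplet and prefix path are assumptions to adjust for your setup.]

```shell
# the key fix is --host: it tells configure to cross-compile for ARM
host=arm-linux-androideabi
prefix="$HOME/android-prefix"

# run only if the cross toolchain and a configure script are present
if command -v "${host}-gcc" >/dev/null 2>&1 && [ -x ./configure ]; then
  ./configure --host="$host" --prefix="$prefix" \
              --enable-static --disable-shared
  make && make install
fi

# then point ffmpeg's configure at the same prefix, e.g.:
#   --enable-libfdk-aac \
#   --extra-cflags="-I$prefix/include" --extra-ldflags="-L$prefix/lib"
```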
[20:01:03 CEST] <zarathushtra> please, how can I pass a -c:v OPTION that would guarantee to use "MPEG-4 SP" or ASP?
[20:02:41 CEST] <llogan> -c:v mpeg4
[20:03:29 CEST] <llogan> then check with: ffprobe -loglevel error -show_entries stream=profile input.mp4
[20:04:01 CEST] <zarathushtra> tnx mate
[21:22:46 CEST] <qvac> can ffmpeg automatically set a bitrate, if the input is like 128, on a file i want to convert?
[21:23:15 CEST] <Fyr> qvac, it depends on the encoder.
[21:23:34 CEST] <Fyr> usually, FFmpeg sets quality (i.e. VBR).
[21:23:50 CEST] <qvac> i gonna convert to m4a
[21:25:10 CEST] <qvac> Fyr:
[21:25:48 CEST] <Fyr> qvac, don't worry, it will automatically set bitrate.
[21:26:28 CEST] <Fyr> 128 kbps
[21:26:28 CEST] <qvac> Fyr: i mean.. if i convert a 128 bitrate to m4a, it would be 128 right? :)
[21:26:34 CEST] <Fyr> yes
[21:26:41 CEST] <Fyr> http://ffmpeg.org/ffmpeg-codecs.html#aac
[21:26:55 CEST] <Fyr> the documentation concurs.
[21:26:56 CEST] <qvac>  ffmpeg -i "input.webm" -c:a aac output.m4a
[21:26:57 CEST] <Fyr> (=
[21:27:03 CEST] <Fyr> right
[21:30:09 CEST] <qvac> Fyr: a quick question: how do i raise the volume of the audio in ffmpeg?
[21:31:00 CEST] <Fyr> ffmpeg -i input.mp4 -vn -filter:a volume=2dB -f mp4 output_audio.m4a
[21:31:57 CEST] <Fyr> you might want to use the -vn option, which disables video in the output.
[21:32:36 CEST] <Fyr> some audio players get confused if an m4a file contains video information, even an empty video stream.
[21:33:24 CEST] <qvac> ok
[21:34:16 CEST] <devster31> Hi, I managed to cross-compile ffmpeg with some libraries, I didn't set --enable-shared but ffmpeg complains about missing shared libraries, how can I debug?
[21:34:58 CEST] <qvac> is it possible to raise the volume at the same time as i convert?
[21:35:08 CEST] <Fyr> yes
[21:35:38 CEST] <qvac> but what if it isn't mp4?
[21:36:09 CEST] <Fyr> qvac, you should try and if you encounter an error, report it here.
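[Editor's note: the two steps can indeed be combined. A hedged sketch assuming the webm input from qvac's earlier command; the 3dB gain is an arbitrary example value.]

```shell
# convert to m4a and raise the volume in one pass
af="volume=3dB"

if [ -f input.webm ]; then
  ffmpeg -i input.webm -vn -c:a aac -filter:a "$af" output.m4a
fi
```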
[21:36:24 CEST] <BtbN> devster31, that only causes the libav* libs to be static or shared
[21:36:43 CEST] <BtbN> you have to supply all the dependencies as static libs if you want a full static build.
[21:39:24 CEST] <devster31> BtbN: ok, so if both shared and static libraries are present, the shared ones are chosen?
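[Editor's note: a sketch of the configure flags commonly used for a fully static build, per BtbN's point that every dependency must itself be available as a static lib. Whether -static works depends on your toolchain and libc; treat this as an assumption to verify, not a recipe.]

```shell
# ask pkg-config for static link info and request static linking overall
pkgflags="--static"

if [ -x ./configure ]; then
  ./configure --pkg-config-flags="$pkgflags" --extra-ldflags="-static"
fi
```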
[23:13:07 CEST] <qvac> is this the same thing what Fyr show me?
[23:13:10 CEST] <qvac> https://trac.ffmpeg.org/wiki/How%20to%20change%20audio%20volume%20up-down%20with%20FFmpeg
[23:28:31 CEST] <furq> 20:26:28 ( qvac) Fyr: i mean.. if i convert a 128 bitrate to m4a, it would be 128 right? :)
[23:28:41 CEST] <furq> just to be clear, it'll only be 128 because that's the default bitrate for aac
[23:28:53 CEST] <furq> the input file's bitrate isn't taken into account
[23:29:05 CEST] <furq> because it wouldn't make any sense
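[Editor's note: to pin the bitrate explicitly rather than rely on the encoder default furq describes, something like the following, with placeholder filenames.]

```shell
# 128k happens to be the native aac encoder's default;
# setting -b:a makes the choice explicit
abr="128k"

if [ -f input.webm ]; then
  ffmpeg -i input.webm -vn -c:a aac -b:a "$abr" output.m4a
fi
```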
[23:59:40 CEST] <vade> hello. I'm successfully using libswscale to convert YUV AVFrames to RGB AVFrames. I'm trying to do the inverse now, and running into some hiccups. is it safe to allocate a frame with av_frame_alloc(), mark its pixel format and full raster width and height (rather than planar half size for UV), and let av_frame_get_buffer() do the work of setting the appropriate linesize, etc?
[00:00:00 CEST] --- Fri May  6 2016