[Ffmpeg-devel-irc] ffmpeg.log.20130405

burek burek021 at gmail.com
Sat Apr 6 02:05:01 CEST 2013


[00:00] <rager> but honestly... you don't really need it - it's just ffprobe -show_streams [file]
[00:04] <mark4o> rager: did you try -vf setsar=1 to set it to a normal value?
[00:05] <rager> I in fact did
[00:05] <relaxed> what happens when you try to encode it?
[00:06] <rager> http://hastebin.com/lusatorefa.vhdl
[00:07] <rager> same error whether I'm taking a screenshot or telling it to reencode
[00:07] <relaxed> could you stick a sample up somewhere?
[00:07] <rager> a sample?
[00:07] <rager> you mean, the video in question?
[00:08] <rager> http://alan.appredeem.com/wtf.mov
[00:08] <rager> it's encoded using some part of Apple's iOS video API's
[00:08] <rager> by some guy who's doing some sort of consulting coding for my work... I'm just dealing with this really weird probably malformed video file his stuff is generating
[00:09] <rager> at this point, I'm just genuinely curious as to why the video might work differently in different players and what I should tell the guy to make him do things correctly
[00:09] <relaxed> I'm unable to resolve that url
[00:09] <i_s> i'm having a ton of problems with quicktime generated .movs as well
[00:10] <rager> oh, sorry
[00:10] <rager> http://alan.appredeemdev.com/wtf.mov
[00:11] <rager> what's interesting... is that it plays upside down in everything except quicktime, I think
[00:11] <rager> it's upside-down on vlc and gnome mplayer as well as android mxplayer
[00:17] <relaxed> in ffplay and avplay as well
[00:17] <relaxed> I've never seen negative SAR/DAR
[00:19] <rager> yeah, same
[00:19] <rager> and I can't find anybody on the internet that has much to say about it
[00:20] <relaxed> you should probably yell at your programmer
[00:21] <rager> oh, I sent him an email explaining that he's sending malformed videos that are useless to us
[00:21] <rager> like... my boss sounded like he wanted to just kinda let it slide if it'd work
[00:21] <rager> but I reminded him that the guy is being paid to deliver a product... and that product had better not be needlessly shitty
[00:23] <rager> ffplay/avplay are the same thing?
[00:23] <relaxed> ha, did this person even playback the video?
[00:23] <relaxed> yes
[00:23] <rager> it plays back fine on quicktime
[00:23] <rager> which is obnoxious
[00:23] <relaxed> apple supports their own broken shit
[00:24] <rager> sofa king weird, though
[00:54] <rager> yeah... it was some weird transformations he was trying to do
[00:54] <rager> he found the most novel way I've ever seen to horizontally flip a video
[02:16] <burek> does ffmpeg support things like: ffmpeg -vcodec h264_vdpau -i bla.mp4 ...
[02:17] <relaxed> nope
[02:17] <burek> short and concise :) thanks :)
[02:19] <shadowing> does anyone have experience recording from a Blackmagic Intensity Pro on Windows?
[02:21] <burek> btw, relaxed, what's the purpose of that vcodec anyway?
[02:21] <relaxed> before the input it forces a decoder
[02:22] <burek> yes
[02:22] <iive> that codec is used by other programs.
[02:22] <relaxed> oh, sorry :)
[02:22] <burek> vdpau is supposedly used to force gpu decoding if i understood things correctly
[02:22] <iive> there are some plans to implement a full decoding cycle...
[02:22] <burek> iive, so it can't be used directly with ffmpeg tools?
[02:23] <burek> tool*
[02:23] <iive> not at the moment, nor in the near future.
[02:23] <burek> i see, that makes sense
[02:23] <burek> thanks both of you again :)
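A minimal sketch of what "forcing a decoder before the input" looks like with a decoder the ffmpeg tool can actually use (unlike h264_vdpau, per the discussion above); filenames and output settings are hypothetical:

    ffmpeg -vcodec h264 -i bla.mp4 -vcodec libx264 out.mkv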
[02:24] <iive> vdpau allows the decoded image to be moved to system memory, so in theory it could be used as usual decoder.
[02:24] <relaxed> hardware decoding via vdpau will not be as resilient as software decoding.
[02:24] <relaxed> but, of course, it will be faster when it works :)
[02:25] <iive> most important, it would not use cpu, that could be used for other tasks.
[02:25] <llogan> vdpau doesn't even use the gpu
[02:25] <iive> hum?
[02:26] <burek> "This VDPAU API allows video programs to offload portions of the video decoding process and video post-processing to the GPU video-hardware."
[02:26] <llogan> i think it uses some other onboard acceleration chip that isn't a part of the main gpu
[02:26] <iive> i know ati have special video engine that handles it.
[02:27] <burek> that would also be so cool to implement
[02:27] <burek> since, i believe, most modern graphic cards have the support for such thing
[02:27] <llogan> maybe i'm wrong. i can't remember now.
[02:29] <iive> well, no matter how it is implemented in the video card, the data is processed by the gpu chip and in video card memory.
[02:38] <iive> burek: without trying it myself, I don't see tinterlace mentioned in the ffmpeg output, most likely it has something to do with the -vf option position.
[02:38] <iive> i should probably try it myself...
[02:48] <iive> burek: yeh, tell him to move -vf right before -vcodec, after all -i <inputs>
[03:23] <burek> oh thanks iive :)
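A sketch of the placement iive describes, using the tinterlace filter from the question and hypothetical filenames; -vf is given after all -i inputs, as an output option next to the video codec:

    ffmpeg -i input.mp4 -vf tinterlace -vcodec libx264 output.mp4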
[09:02] <zap0> using  -qscale  with  an output .mp4  (defaults to libx264) doesn't seem to accept the -qscale parameter.  what do i do instead ?
[09:07] <zap0> found a guide.  nevermind.
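zap0 doesn't say which guide was found; a common approach (an assumption here, not stated in the log) is CRF rate control instead of -qscale when encoding with libx264:

    ffmpeg -i input.avi -c:v libx264 -crf 23 output.mp4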
[14:35] <chocis> hi everyone, recently started playing with ffserver, because I want to create versatile streaming server, but I can't really understand if it is possible to stream RTMP protocol. I understand that it is possible to receive using RTMP, but can I send my webcam stream using RTMP?
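The question goes unanswered in this log; as a rough sketch only, publishing a Linux webcam over RTMP with ffmpeg itself might look like this (server URL and encoder settings are hypothetical, and this does not involve ffserver):

    ffmpeg -f v4l2 -i /dev/video0 -c:v libx264 -preset veryfast -pix_fmt yuv420p -f flv rtmp://example.com/live/stream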
[15:09] <Zeeflo> crap
[15:09] <Zeeflo> i forgot the -vf command for resolution to example 1280x720
[15:09] <Zeeflo> can someone refresh my memory?
[15:11] <Zeeflo> is it scale?
[15:12] <Zeeflo> yup
[15:12] <Zeeflo> thanks
[15:12] <johto> no problem
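The filter Zeeflo was recalling, as a minimal sketch with hypothetical filenames:

    ffmpeg -i input.mp4 -vf scale=1280:720 output.mp4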
[17:50] <durandal_1707> ungureanuVlad: what version does ffmpeg output when it starts?
[17:50] <durandal_1707> you can pastebin it
[17:50] <durandal_1707> or a simple term like Release X.Y would be fine
[17:51] <ungureanuVlad> @durandal_1707 they do not have the latest one in their repo, ffmpeg version 0.8.5-6:0.8.5-1+rpi1,
[17:51] <durandal_1707> that is ooold
[17:52] <durandal_1707> so getting support for that is going to be a little harder
[17:52] <ungureanuVlad> @durandal_1707 on OS X I tried with ffmpeg version 1.2 and I get the same warning
[17:52] <durandal_1707> just a warning? or something worse, like an error and no output?
[17:53] <ungureanuVlad> @durandal_1707 just warning http://pastebin.com/1MPBtQXE
[17:54] <ungureanuVlad> the output image at 1st sight looks really ok, no errors in it.
[17:57] <durandal_1707> good, so what is actual problem/question?
[17:58] <ungureanuVlad> i wanted to know the source of the warning, maybe i can overcome it somehow
[17:58] <durandal_1707> well if it is streaming in real time there is nothing to fix
[17:59] <ungureanuVlad> yes, it is real time..
[17:59] <durandal_1707> it will start to give frames once the first key frame arrives
[18:05] <Jordan_> first what is the difference in -vf scale, and -s?
[18:05] <Jordan_> Second is ffmpeg going to integrate intel media sdk?
[18:06] <markit> hi, GNU/Linux, if I record from a webcam, even at 10fps, 640x480 the audio is delayed by 1 second. Any clue? like $ ffmpeg -f alsa -ac 1 -ar 22050  -i hw:2,0  -acodec libvorbis  -f v4l2 -s 640x480 -r 10 -i /dev/video0 -vcodec libvpx -b:v 1000k  test.webm
[18:06] <markit> ffmpeg 1.0.6-dmo2, debian sid 64 bit
[18:06] <markit> btw, same problem with vlc
[18:07] <durandal_1707> perhaps audio encoder is too slow?
[18:07] <durandal_1707> for your hw
[18:08] <Jordan_> http://www.anandtech.com/show/6864/handbrake-to-get-quicksync-support
[18:08] <Jordan_> wondering if ffmpeg is planning on that as well
[18:09] <durandal_1707> only if there is an open bug for it on the bug tracker; then it is in some kind of plan
[18:11] <Jordan_> what is the diff in -vf scale and -s, it appears they are not the same, i'm getting different sizes and encode times, -vf seems to be faster
[18:12] <Jordan_> actually disregard that, it's too close to call
[18:13] <Jordan_> but functionally what is the difference
[18:13] <markit> durandal_1707: do you suggest a faster one, just to experiment?
[18:15] <markit> btw, if I simply "play" (not record) the webcam with vlc, the image also lags, while with kamerka it is "in real time"
[18:15] <durandal_1707> markit: raw/pcm ones are fastest but take a lot of disk space
[18:15] <mark4o> Jordan_: scale filter allows greater flexibility where the scaling occurs with respect to other filters, in case you have multiple filters
[18:15] <markit> durandal_1707: like (I'm not an expert) -acodec raw ?
[18:15] <Jordan_> i only have the scale one, so does that mean there is no difference, under the hood are they using the same scale?
[18:16] <durandal_1707> -acodec pcm_*
[18:16] <Jordan_> mark4o, ^
[18:16] <durandal_1707> markit: pcm_s16le for example
[18:16] <durandal_1707> markit: or flac/alac
[18:18] <Jordan_> well i guess the article i mentioned is about how the new Haswell processor has a new video quality engine and Intel open-sourced the component
[18:18] <mark4o> Jordan_: should be no difference with one filter when used as output option; -s can also be used as an input option with some demuxers
[18:19] <Jordan_> k
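A sketch of the two forms being compared, with hypothetical filenames; as output options with a single scaling step they should behave the same, while -s can also appear before -i for some demuxers:

    ffmpeg -i input.mts -vf scale=560:308 output.mp4
    ffmpeg -i input.mts -s 560x308 output.mp4
    ffmpeg -f v4l2 -s 640x480 -i /dev/video0 output.mkv    # -s as an input option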
[18:19] <Jordan_> does anyone know about the intel media sdk, is that going to speed up transcode times?
[18:22] <markit> durandal_1707: mmm with test.webm file I see Stream mapping: Stream #1:0 -> #0:0 (rawvideo -> libvpx) and Stream #0:0 -> #0:1 (pcm_s16le -> libvorbis) even with ffmpeg -f alsa -ac 1 -ar 22050  -i hw:2,0  -acodec pcm_s16le -f v4l2 ...
[18:22] <markit> with .avi I have Stream #0:0 -> #0:1 (pcm_s16le -> libmp3lame)
[18:24] <Jordan_> well -vf and -s are not exactly the same, i get  4,522KB with -s, and 4,563KB with -vf scale
[18:24] <msmithng> burek: nice write up on the concat function
[18:25] <durandal_1707> markit: i see -f v4l2 twice, also use pastebin to show full output (may get help faster)
[18:28] <markit> durandal_1707: http://www.pastebin.ca/2350445 thanks :)
[18:31] <durandal_1707> markit: you put the pcm line at the right end and not in the middle
[18:31] <durandal_1707> eg before test.webm
[18:31] <durandal_1707> and after (or before) libvpx codec
[18:32] <durandal_1707> because default audio for webm is vorbis
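A sketch of the corrected placement, based on markit's command from earlier in the log: the audio codec is given as an output option near the video codec, and the container is switched to avi since webm expects vorbis audio:

    ffmpeg -f alsa -ac 1 -ar 22050 -i hw:2,0 -f v4l2 -s 640x480 -r 10 -i /dev/video0 -vcodec libvpx -b:v 1000k -acodec pcm_s16le test.avi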
[18:32] <mark4o> Jordan_: I get identical size.  What is your exact cmdline with scale filter that gives different size?
[18:33] <markit> durandal_1707: I see, let's try thanks (I thought was to be put after audio related parameters...)
[18:34] <Jordan_> mark4o, -i "in.MTS" -c:v libx264 -pix_fmt yuv420p -tune film -f mp4 -r 15 -g 75 -vf "scale=trunc(308*dar/hsub)*hsub:308" -crf 23 -maxrate 350000 -bufsize 700000 -preset slow  -strict experimental -c:a aac -ac 1 -ab 48k  "out.mp4"
[18:35] <Jordan_> mark4o, then use -s 560x308 in place of -vf
[18:35] <markit> durandal_1707: had to use .avi. Same delay (but correctly Stream #0:0 -> #0:1 (pcm_s16le -> pcm_s16le))
[18:37] <durandal_1707> then it's from libvpx (is all the cpu being used to encode the video?)
[18:39] <durandal_1707> or alsa/whatever issue
[18:41] <mark4o> Jordan_: -r is another filter so may be a minor difference due to the order the filters are run, or possibly your expression does not equal 560
[18:42] <Jordan_> mark4o, nope it does, i just changed the -vf to make sure of that -vf scale=560:308
[18:42] <Jordan_> well difference is small, but i want to know what is going on should i prefer one over the other?
[18:43] <Jordan_> so if i move the -r you think that would change it?
[18:43] <mark4o> Jordan_: well if you want to use an expression then use the scale filter
[18:43] <mark4o> I don't think the order of -r changes the order within the filter chain
[18:43] <Jordan_> i don't have to because i have to do a probe anyway
[18:44] <Jordan_> well what order do they get run in the two examples?
[18:45] <Jordan_> you think they might be using a different scaling interpolation?
[18:46] <mark4o> Jordan_: You could try replacing -r with fps filter and try different orders to check for a match.  Could also depend on the point within the filter chain where it converts to pix_fmt yuv420p if your input is something different.  You could use format=yuv420p filter if you want to try to control that.  Shouldn't really make much difference though.
[18:46] <markit> durandal_1707: so I should try "raw" video also? how? (sorry to beg you but I'm lost in the number of possible parameters)
[18:48] <Jordan_> so mark4o, is the order of these things going to change anything in quality?
[18:49] <mark4o> Jordan_: for those particular filters probably not, but for other filters yes
[18:49] <Jordan_> so the filters i'm using are fps, scale, and pixel format
[18:52] <Jordan_> well, I tried moving fps in front of and behind scale, i'm not getting any matches, they are all coming out different
[18:52] <mark4o> Jordan_: the relative order of those particular filters shouldn't make much difference, although it might make a slight difference in speed if you run the filters on the smaller picture (i.e. put scale first if you are making it smaller or last if making it bigger)
[18:52] <Jordan_> putting fps last made it go slower
[18:52] <Jordan_> i think
[18:52] <mark4o> Jordan_: I wouldn't worry about it though the difference should be minimal
[18:55] <mark4o> Jordan_: if you are reducing the fps then put that first so that there will be fewer frames to put through the other filters
[18:55] <Jordan_> that's what i thought
[18:56] <mark4o> Jordan_: unless you are reducing the size even more than you are reducing the fps, then put that first
[18:57] <mark4o> Jordan_: but really, shouldn't matter much
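A sketch of the ordering advice for Jordan_'s case (reducing both fps and size): drop frames first so fewer of them reach the later filters, scale next, and convert the pixel format last; filenames and encoder settings are hypothetical:

    ffmpeg -i input.mts -vf "fps=15,scale=560:308,format=yuv420p" -c:v libx264 -crf 23 output.mp4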
[18:59] <Jordan_> you got a link where it discusses the ordering of filters, i'd like to know the optimal way to do things
[18:59] <mark4o> Jordan_: actually depending on your sizes and your input pix_fmt it may matter when you convert to yuv420p since the chroma is subsampled, so there will be a difference if you subsample the larger frame or the smaller frame
[19:00] <Jordan_> is -tune film a filter
[19:01] <mark4o> no, -tune is an option on the encoder so it will apply after all filtering
[19:01] <Jordan_> ok when should i apply format?
[19:02] <mark4o> depends, what is your input size and input pix_fmt?
[19:03] <Jordan_> does ffmpeg do any of this determination or does it just do it the same every time
[19:03] <Jordan_> say if i didn't specify a filter chain
[19:05] <mark4o> Jordan_: it should do it the same every time, but you can use -vf to control it precisely if needed
[19:07] <Jordan_> well if there is a good way to determine order then I'd like to figure that out
[19:08] <Jordan_> i would think ffmpeg would try to be smart about it
[19:09] <mark4o> Jordan_: if you are converting from a higher quality pix_fmt, and the other filters in the chain support the higher quality pix_fmt, then convert to 420 at the end; if you use a filter that doesn't support that pix_fmt then it will automatically convert the pix_fmt before that filter
[19:14] <Jordan_> what's the command that tells you how long it is running
[19:15] <mark4o> ?
[19:16] <Jordan_> gives you statistics at the end
[19:17] <mark4o> should get that by default
[19:18] <Jordan_> it is missing the processor time
[19:18] <mark4o> time ffmpeg ...
[19:22] <mark4o> Jordan_: I guess there is also a -benchmark option; never used that
[19:26] <Jordan_> yea that's the one
[19:28] <Jordan_> what is maxrss
[19:29] <mark4o> real memory usage (max resident set size)
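A sketch combining the two measurements mentioned (the command itself is hypothetical): the shell's time reports wall-clock and CPU time for the whole run, while -benchmark makes ffmpeg print its own utime and maxrss at the end:

    time ffmpeg -benchmark -i input.mts -c:v libx264 output.mp4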
[19:33] <Jordan_> i'm going to try every combo of those 3 filters
[19:38] <Jordan_> is ffmpeg smart enough to take the processor into account?
[19:47] <Jordan_> i mean will it enable and disable processor instructions
[19:48] <Jordan_> mark4o, well this could be a problem i'm getting different sizes on the same command
[19:53] <Jordan_> mark4o, among the 6 combos of fps,scale,format they all look about the same quality as far as i can tell, and about +/- 3 seconds of processor time.
[19:54] <Jordan_> the range in size is 4,535 to 4,553, but I'm getting different sizes on the same command, why is that?
[19:56] <mark4o> Jordan_: probably the placement of the format filter would have the most effect, because it is subsampled chroma
[19:57] <Jordan_> i'm scaling down size and fps
[19:57] <Jordan_> where should i place format
[19:58] <mark4o> what is your input pix_fmt
[19:58] <Jordan_> yuv420p what if it were different
[19:59] <Jordan_> maybe I should not even apply the yuv filter if they are the same
[20:02] <mark4o> if you're not changing pix_fmt then it shouldn't matter (unless one of the filters doesn't accept that pix_fmt and ffmpeg inserts an automatic conversion), if the source was 444 then convert to 420 after all filters (assuming the filters you are using all work with 444).  I think -v verbose will show what format it is using at each stage and whether there are automatically inserted filters.
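A sketch of inspecting the chain with -v verbose as suggested, using a null output so nothing is written to disk (settings hypothetical); the verbose log shows the pixel format at each stage and any automatically inserted conversion filters:

    ffmpeg -v verbose -i input.mts -vf "fps=15,scale=560:308" -pix_fmt yuv420p -f null -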
[20:04] <Jordan_> well the only filters i'm using are fps and scale, i don't think it matters what the pixel format is?
[20:08] <mark4o> if you were converting, say, 411 to 420 then it would matter, or 444 to 420 and enlarging, or something. 444 to 420 and shrinking would not make much difference, but the placement of the format change would make a slight difference in which pixels were subsampled in the scaling.
[20:12] <Jordan_> so if i don't specify a filter chain what is the order
[20:15] <mark4o> Jordan_: I think it will put scale filter last if you use -s
[20:15] <mark4o> that's what the docs say
[20:16] <Jordan_> does the fps filter just throw out frames when it down converts
[20:16] <mark4o> yes
[20:16] <Jordan_> what if it isn't a multiple of the source fps
[20:17] <Jordan_> like 29.96->15 i could see it having to do more work
[20:17] <Jordan_> than if it were 30->15
[20:18] <mark4o> 29.96 -> 15 fps would usually throw out 1 frame for every 2 but occasionally throw out 0
[20:18] <Jordan_> i've read the chroma subsampling on wikipedia several times its just not getting through to me
[20:18] <mark4o> if your input is 420 and your output is also 420 then it doesn't matter
[20:21] <Jordan_> just curious now, how is it possible to get all the colors with just two channels UV
[20:23] <mark4o> 8 bits for U x 8 bits for V = 65k. x 8 bits Y = 16 million including different shades
[20:25] <Jordan_> so its really 65k colors, with different shades
[20:25] <mark4o> 65k different chroma values, "color" usually includes shades
[20:37] <Jordan_> 4:2:0 the 2nd row is a copy of 1st?
[20:38] <mark4o> for 4:2:0, there are half as many rows of chroma (U and V) as there are Y rows
[20:39] <mark4o> and half as many columns
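A worked sketch of the sample counts and value ranges being discussed, for an 8-bit 4:2:0 frame of width W and height H:

    Y samples:                W x H
    U samples:                (W/2) x (H/2)
    V samples:                (W/2) x (H/2)
    distinct (U,V) pairs:     2^8 x 2^8 = 65,536 (the "65k" above)
    distinct (Y,U,V) triples: 2^8 x 2^8 x 2^8 = 16,777,216 (the "16 million")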
[20:51] <Jordan_> i may just take the pix_fmt out
[20:51] <Costin> hi there
[20:52] <Costin> need some help with rtmp to rtmp transcoding
[20:53] <Costin> i'm able to take an input stream, and send it to an output stream... but it always waits for the input stream to end first, before it starts doing the output
[20:53] <Costin> and i AM using "-re"
[20:53] <Costin> can anyone offer any help?
[21:08] <Costin> ok, will do
[21:08] <Costin> let me clean it up a bit
[21:22] <Costin> ok guys, here it is: http://pastebin.com/XjNpXwaY
[21:22] <Costin> seems very simple to me
[21:22] <Costin> but it just doesn't *stream*.  It waits until the source stream ends, and then pushes the output
[21:22] <ubitux> "and the COMPLETE console output."
[21:22] <Costin> sorry, will do
[21:24] <Costin> how's this: http://pastebin.com/w6UcBHds
[21:27] <Costin> i'm basically trying to transcode from h264 to sorenson spark/h263
[21:31] <Costin> anyone looking at it?
[21:33] <characterlimit> is it possible to feed a video into ffmpeg with stdin?
[21:33] <characterlimit> also... is it possible to have it stream the output to stdout?
[21:33] <ubitux> characterlimit: yes, yes
[21:34] <characterlimit> know of any good tutorials to show me how I should go about it?
[21:37] <Costin> hey guys
[21:37] <Costin> got booted off
[21:37] <Costin> still have that pastebin?
[21:38] <ubitux> characterlimit: use '-'
[21:38] Action: characterlimit throws a - at ffmpeg
[21:39] <ubitux> ffmpeg <your-input-settings> -i - <your-output-settings> -
[21:39] <characterlimit> oh
[21:39] <characterlimit> interesting
[21:39] <characterlimit> I'm trying to script something to work w/ffmpeg w/o using temp files
[21:39] <ubitux> Costin: no idea for your problem
[21:39] <characterlimit> thanks
[21:40] <Costin> :(
[21:40] <llogan> Costin: i'm assuming it works as expected with a normal, file output?
[21:40] <Costin> is there a flag to FORCE output to begin immediately?
[21:40] <Costin> llogan, yes, I'm able to save it... but only when the input stream has ended
[21:40] <Costin> same behavior basically
[21:41] <characterlimit> ubitux: does this mean that ffmpeg will let me pipe input into it without any extra flags?
[21:41] <characterlimit> I saw some documentation about some pipes stuff that's out of date but couldn't find anything current
[21:44] <ubitux> depends on your input
[21:44] <ubitux> if it's raw data you'll likely need to tell ffmpeg the size, pixel format, etc
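A sketch of both points, with hypothetical URLs and a made-up producer command: '-' stands for stdin/stdout, and raw frames piped in need their size, pixel format, and rate declared up front:

    curl -s http://example.com/input.ts | ffmpeg -i - -c:v libx264 -f matroska - > out.mkv
    some_producer | ffmpeg -f rawvideo -pix_fmt yuv420p -s 1280x720 -r 25 -i - -c:v libx264 out.mkv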
[21:46] <Costin> ubitux, talking to me?
[21:46] <ubitux> nope
[21:46] <ubitux> sorry
[21:46] <Costin> np
[21:46] <characterlimit> hm
[21:46] <characterlimit> it'll be the file just catted through it
[21:46] <characterlimit> well, sorta
[21:47] <characterlimit> ok, I'll say what i'm specifically doing
[21:47] <characterlimit> I'm scripting php to transform videos, and I'd like to stream the video data straight from whatever server I grab it from into ffmpeg
[21:48] <characterlimit> without having to use a temp file that requires making assumptions about the system that I'm running on
[21:49] <characterlimit> ie I don't want to put a video file in /tmp/ in the interim
[21:51] <Costin> ok, some progress
[21:51] <Costin> it seems to START after some time
[21:51] <Costin> is there any way to tell FFMpeg to start encoding the live input stream immediately?
[22:01] <characterlimit> ok... use - as the input
[22:01] <characterlimit> and - as the target file
[22:01] <characterlimit> I understand, now
[22:13] <mark4o> Costin: I'm not very familiar with rtmp but have you tried -rtmp_buffer 0 (or some number smaller than the default of 3000) as an output option?  no idea whether that will do what you want
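Purely as an illustration of where such a protocol option would sit (mark4o is explicitly unsure it helps); the URLs are hypothetical and the codecs follow Costin's h264-to-Sorenson-Spark description:

    ffmpeg -re -i rtmp://source/live/in -c:v flv -c:a libmp3lame -ar 44100 -rtmp_buffer 0 -f flv rtmp://dest/live/out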
[22:25] <memand> Hey guys, is ffmpeg coded in c or c++ ?
[22:25] <zamabe> Hello. I'm having a bit of trouble getting the results I want. I have a video that I'd like to strip the video off of, keeping only audio, but the output file has no audio in it. $ ffmpeg -i file1.webm -acodec copy -vn file2.vob
[22:26] <brontosaurusrex> vob?
[22:27] <brontosaurusrex> won't webm have ogg as audio?
[22:27] <zamabe> vorbis
[22:27] <brontosaurusrex> try ogg
[22:27] <zamabe>     Stream #0:0: Audio: vorbis, 44100 Hz, stereo (default)
[22:27] <zamabe> Stream mapping:
[22:27] <zamabe>   Stream #0:1 -> #0:0 (copy)
[22:28] <zamabe> ogg works :o
[22:28] <zamabe> however it sounds like shit.
[22:28] <viric> vorbis is a codec, not a container
[22:28] <zamabe> ah.
[22:29] <brontosaurusrex> zamabe, it should sound exactly as before in webm
[22:29] <zamabe> yeah, that's what I'm expecting
[22:29] Action: zamabe tries copypasta
[22:30] <zamabe> :D it worked
[22:30] <zamabe> thanks guyz
[22:30] <zamabe> guys even
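The working form of zamabe's command as resolved above: the vorbis audio is stream-copied into an Ogg container instead of a VOB:

    ffmpeg -i file1.webm -vn -acodec copy file2.ogg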
[22:34] <llogan> memand: C
[22:42] <Nitsuga> I have this originally progressive video that is encoded interlaced, but the fields are out of phase. So every interlaced frame has odd lines from one of the original frames and even lines from the next. Is there a way to fix this?
[22:49] <Nitsuga> Or maybe, is there a way to drop a single interlaced field, not a frame, from a video?
[23:01] <memand> llogan: Thx :)
[23:22] <brontosaurusrex> Nitsuga, yes, just use the dumbest deinterlacer found in dumbest mode
[23:22] <brontosaurusrex> not sure about the syntax though
[23:24] <brontosaurusrex> another way would be to just use fast scaler to scale the vertical to 50% and then back to 100%
[23:25] <brontosaurusrex> but before that you may want to use: a. field dominance swap and see if that does something
[23:26] <brontosaurusrex> b. yadif
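Rough sketches of suggestions (a) and (b), with hypothetical filenames; whether either actually fixes the out-of-phase fields depends on the source:

    ffmpeg -i input.mkv -vf fieldorder=bff output_a.mkv    # (a) swap field dominance
    ffmpeg -i input.mkv -vf yadif output_b.mkv             # (b) deinterlace with yadif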
[00:00] --- Sat Apr  6 2013

