[Ffmpeg-devel-irc] ffmpeg.log.20130929

burek burek021 at gmail.com
Mon Sep 30 02:05:01 CEST 2013


[01:24] <smjd> are you sure FLV doesn't support that?
[01:26] <smjd> or is there also a codec called FLV?
[01:28] <sacarasc> There is a codec called FLV.
[01:31] <smjd> is it Sorenson Spark?
[06:48] <elBradford> Does anyone have experience cross compiling ffmpeg with librtmp?
[06:50] <elBradford> Everything seems to be going well, however I get "undefined reference" errors in librtmp.o
[06:50] <elBradford> Then it stops going well, obviously.
[07:37] <taqattack> is there a command to reduce latency / control packet size in ffmpeg for streaming?
[07:45] <EvanDotPro> i am pushing a tcp h264 stream from ffmpeg to a socket server and parsing the NAL units... if i stream it over 127.0.0.1, the NAL units are ~20k-30k bytes each, and the video decodes fine on the other end, but if i try to go over my router / modem where the MTU size is affected, the NAL units are tiny, like 1400-1500 bytes, and the video doesn't seem to want to decode.
[07:47] <EvanDotPro> piping to netcat and then over 127.0.0.1 seems to cause the problem too, except that the NAL units seem to be 8191 bytes in that case
[08:26] <skyroveRR> Hello, I'd like to display metadata for an audio/video file using ffmpeg, I'm using slackware, I tried http://stackoverflow.com/questions/9464617/retrieving-and-saving-media-metadata-using-ffmpeg but haven't been successful yet, the metadata.txt file is empty.
[08:28] <skyroveRR> I'd like to also edit and view the title, comments and other metadata on a particular media file, so what's the command for that?
[08:41] <skyroveRR> Hello, I'd like to display metadata for an audio/video file using ffmpeg, I'm using slackware, I tried http://stackoverflow.com/questions/9464617/retrieving-and-saving-media-metadata-using-ffmpeg but haven't been successful yet, the metadata.txt file is empty, and I also want to edit the metadata itself, any ideas?
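[A sketch for the question above, with hypothetical filenames. The usual commands are `ffmpeg -i input.mp4 -f ffmetadata metadata.txt` to dump global tags, and `ffmpeg -i input.mp4 -c copy -metadata title="New Title" output.mp4` to edit them without re-encoding; an empty metadata.txt typically just means the input had no global tags. The ffmetadata text format the dump produces (and that `-i metadata.txt -map_metadata 1` reads back) looks like this:]

```shell
# Generate a sample ffmetadata file to show the format: a ";FFMETADATA1"
# header followed by plain key=value lines.
cat > metadata.txt <<'EOF'
;FFMETADATA1
title=Example Title
comment=Example comment
EOF
grep '^title=' metadata.txt      # prints: title=Example Title
```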
[09:28] <elBradford> Has anyone compiled ffmpeg with librtmp for Android? Looking to pay for help at this point, kind of desperate.
[10:22] <BlackBishop> relaxed, burek, sacarasc: mhmm .. same .. tried -preset ultrafast, -vcodec flv | libx264 .. none get past 13 fps :| weird ..
[10:27] <BlackBishop> Mhmm, I do have a raspberry pi that is said to have an encoder for mkv/wmv not just decoder ..
[10:31] <BlackBishop> actually .. vc1/mpeg2 .. still, good enough ..
[11:11] <kms_> is there any option for realtime encoding on a low-power CPU with libtheora?
[11:21] <IamTrying> http://paste.fedoraproject.org/42905/38044632/ - why is it giving "could not decode stream"?
[12:10] <luc4> Hi! I wrote this function: http://paste.kde.org/pb44c1418/. It implements a pixel format conversion from yuyv to argb manually or using ffmpeg. What I noticed, and surprised me a little is that the manual conversion is considerably faster than the ffmpeg algorithm. This is pretty weird. Any idea why? Maybe I'm using ffmpeg wrong? Maybe there is a faster algorithm than SWS_FAST_BILINEAR?
[13:17] <luc4> Anyone using ffmpeg for pixel format conversion?
[13:18] <zap0> luc4.  do you really care?    or you have an actual question?
[13:19] <JEEB> zap0, he has
[13:19] <JEEB> see an hour before
[13:19] <luc4> I already asked some hours ago, just trying again to see if someone new connected: Hi! I wrote this function: http://paste.kde.org/pb44c1418/. It implements a pixel format conversion from yuyv to argb manually or using ffmpeg. What I noticed, and surprised me a little is that the manual conversion is considerably faster than the ffmpeg algorithm. This is pretty weird. Any idea why? Maybe I'm using ffmpeg wrong? Maybe there is a faster algorithm than
[13:19] <luc4> SWS_FAST_BILINEAR?
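[For context on the thread above: this is the per-pixel arithmetic a manual YUYV-to-RGB conversion typically performs (a sketch using BT.601 full-range coefficients as an assumption; luc4's actual code may differ). swscale does the same work plus range handling and dithering, which is one reason a tight hand-written loop can come out faster.]

```shell
# One YUYV macropixel (Y0 U Y1 V) yields two RGB pixels sharing the
# same chroma. Integer approximation of the BT.601 full-range math,
# applied here to a mid-gray macropixel:
y0=128; u=128; y1=128; v=128
r=$(( y0 + 1402 * (v - 128) / 1000 ))
g=$(( y0 - (344 * (u - 128) + 714 * (v - 128)) / 1000 ))
b=$(( y0 + 1772 * (u - 128) / 1000 ))
echo "$r $g $b"    # 128 128 128 (gray stays gray)
```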
[13:19] <JEEB> also you should most probably ask about this on #ffmpeg-devel, although there could be various reasons
[13:19] <JEEB> you could check the code in swscale
[13:19] <JEEB> for this conversion path
[13:20] <zap0> how is it ffmpeg being slower makes it wrong?       isn't wrong a measure of the correctness of pixel calculations?
[13:20] <JEEB> He is asking if it's possible he's using ffmpeg wrong
[13:20] <JEEB> not that ffmpeg is wrong
[13:20] <JEEB> not to mention that the compilation options generally don't tend to do too much optimization because that can "get in the way of hand-written assembly optimizations" so if you compile your own code with various optimizations enabled it can end up being faster
[13:21] <luc4> Both seems to be correct from what I see.
[13:21] <JEEB> I've not looked at that code path myself
[13:21] <zap0> ffmpeg would be doing per pixel math.   not some lookup table
[13:22] <luc4> I can look at the code, but it is unlikely I'll get why it is slower in reasonable time.
[13:23] <JEEB> hmm, a look-up table
[13:23] <vlt> Hello. I ffmpeg'd an AC3 5.1 stream to wav and got a six channel file. Do you know which channel is which?
[13:23] <JEEB> anyways, there can be various reasons
[13:23] <JEEB> you're not getting any non-aligned data warnings, right?
[13:24] <zap0> vlt, there is no OFFICIAL standard ordering.  but various configs are "mostly" consistent.
[13:25] <luc4> JEEB: I got a warning related to software conversion being run. But I'm not getting it at the moment.
[13:25] <zap0> vlt, 6 channels can mean:  channels of which there are 6.    or it could mean  5.1    or it could mean  quad+2-sides.
[13:27] <zap0> vlt, if ffmpeg was able to detect the 6 channels as meaning "5.1" then it will consistently use its 5.1 layout.
[13:27] <vlt> zap0: This is what `file` says: ATSC A/52 aka AC-3 aka Dolby Digital stream, 48 kHz, complete main (CM) 3 front/2 rear, LFE on, 640 kbit/s reserved Dolby Surround mode
[13:28] <zap0> vlt, when playing it back, you will likely have to manually tell the playback device the meaning of this 6 channel file is "5.1"   and in 90% of situations it should then work correctly.
[13:29] <zap0> vlt, the original file or the WAV file you converted?
[13:29] <vlt> zap0: orig
[13:29] <zap0> ok, so ffmpeg should comprehend that as meaning 5.1  (3f,2r+LFE)
[13:29] <zap0> it will then use 5.1 layout.
[13:30] <zap0> but WAV doesn't store the meaning.  so the playback device has to be told!
[13:30] <zap0> the playback device is only going to see a WAV that has 6 channels.
[13:31] <zap0> *TYPICALLY* due to market forces... many playback devices seeing a 6 channel file will presume 5.1 orientation
[13:31] <vlt> zap0: I converted it to that 6 ch wav to be able to edit the audio (in ardour). And now I don't know which channel is which.
[13:31] <zap0> vlt, i *i know*  sucks, don't it ?!? ;)
[13:32] <vlt> zap0: ?
[13:32] <zap0> get one of the  5.1 sample AAC files with human talk on each channel.    convert it, open it in your app.  then listen back, you'll hear what is what.
[13:33] <vlt> zap0: Good idea. Thank you.
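[A labeling sketch for the exchange above, not from the log itself: ffmpeg writes multichannel WAV in its native channel order, so a 6-channel file decoded from AC-3 5.1 carries FL FR FC LFE and then the surround pair (whether those last two are "side" or "back" surround depends on the decoded layout, but their position in the file is the same).]

```shell
# ffmpeg's channel order for a 5.1 decode: front pair, center, LFE,
# then the surround pair.
i=0
for ch in FL FR FC LFE SL SR; do
    echo "channel $i: $ch"
    i=$((i + 1))
done
```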
[13:34] <luc4> JEEB: trying in the devel channel. Thanks for your advice.
[14:46] <funyun> hi. is there any way to add an image to a movie with ffmpeg? so there's like a thumbnail when it's in itunes?
[14:48] <sacarasc> Does iTunes not just take a frame of the movie to use as a thumbnail?
[14:49] <funyun> sacarasc: itunes/other apps show a thumbnail if someone has added one. if not it just shows a frame. i'm just wondering if i can add one with ffmpeg
[14:49] <sacarasc> I do not know much about meta data adding with ffmpeg...
[14:49] <sacarasc> You probably could with MP4box, though.
[14:50] <funyun> sacarasc: okay, and i should look under meta data?
[14:50] <sacarasc> I would assume so. It would be like album art...
[14:51] <funyun> sacarasc: thanks
[16:58] <burek> !pastebin BlackBishop
[18:32] <zap0> if i generate some  16bit PCM audio, will it be bigendian or little endian?
[19:40] <DorjePy> what is the easiest, accurate way learn filters
[19:46] <klaxa> read sourcecode? :x
[19:51] <BlackBishop> burek: ?
[19:52] <durandal_1707> klaxa: why?
[19:52] <klaxa> durandal_1707: it's probably the most accurate way to learn how filters work
[19:52] <klaxa> unless that's not what he wanted and we misunderstood each other
[19:54] <durandal_1707> klaxa: what he wanted?
[19:54] <klaxa> <DorjePy> what is the easiest, accurate way learn filters
[19:54] <durandal_1707> that could mean how to learn to use filters
[19:55] <klaxa> the question is asked rather vaguely
[19:55] <durandal_1707> why would i waste time writing filter documentation
[20:18] <defaultro> is it possible to crop specific pixel on top and bottom, different size?
[20:22] <ztane> hi y'all, I am trying to segment an h264+aac flv stream into individual h264+aac mp4 files, but only the first file works right;
[20:23] <ztane> in the past there used to be reset_timestamps option, but it does not seem to be there anymore
[20:27] <defaultro> this isn't working, -croptop 88 -cropbottom 88
[20:27] <defaultro> ok
[20:29] <defaultro> http://pastebin.com/M2RH2KC3
[20:30] <defaultro> I have 2376x1584 video and I want to remove 187 pixels on top and 62 pixels at the bottom
[20:32] <durandal_1707> defaultro: have you read last few lines?
[20:32] <defaultro> yeah, use crop filter instead
[20:32] <defaultro> isn't cropbottom the cropfilter?
[20:33] <defaultro> found this, i hope it works - http://ubuntuforums.org/archive/index.php/t-1690664.html
[20:35] <defaultro> it works but I am not sure if it's going to remove the pixels properly
[20:35] <defaultro> :( it didn't work
[20:35] <defaultro> i mean, cropping didn't work
[20:36] <durandal_1707> defaultro: read documentation of crop filter
[20:36] <defaultro> -vf crop=2376:1584:187:62
[20:36] <defaultro> that didn't do a thing
[20:38] <ztane> is it possible to segment a h264/aac video with just vcodec copy, acodec copy?
[20:38] <ztane> I still get ever increasing space in the beginning of each file, alas I have to segment the video using raspberry pi, so I cannot re-encode it
[20:39] <durandal_1707> defaultro: you set out width and height to same value as input, and expect output to be different ....
[20:39] <defaultro> i'm understanding it now :)
[20:39] <defaultro> it's working, just need to calculate now
[20:39] <defaultro> the first values are the new output dimensions
[20:40] <defaultro> "Width of the output video. It defaults to iw. This expression is evaluated only once during the filter configuration." This should have been better written as:
[20:40] <defaultro> Width of the new output video. It defaults to iw. This expression is evaluated only once during the filter configuration.
[20:41] <ztane> ah works, I didn't have reset_timestamps 1 :">
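[For reference, a stream-copy segmenting command along the lines ztane describes (filenames hypothetical): the segment muxer splits without re-encoding, and `-reset_timestamps 1` restarts each segment at t=0, which is exactly what fixes the "ever increasing space" at the start of every file after the first.]

```shell
# Build the command as a string (not executed here, since it needs a
# real input file):
cmd="ffmpeg -i input.flv -c copy -f segment -segment_time 10 -reset_timestamps 1 out%03d.mp4"
echo "$cmd"
```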
[20:44] <defaultro> looks like the new crop filter is not as flexible as croptop and cropbottom. Now I can't specify a specific height of pixels I want to remove on top and bottom.
[20:45] <defaultro> you know what, maybe it did work, 2536:1334:187:62 :)
[20:50] <durandal_1707> defaultro: really, you can use expressions to leave filter to calculate it for you
[20:52] <defaultro> oh
[20:52] <defaultro> that is cool
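[A sketch of the geometry being worked out above: `crop=w:h:x:y` takes the OUTPUT size plus the top-left corner of the kept region, so removing 187 px from the top and 62 px from the bottom of 2376x1584 keeps the full width and a computed height of 1335 (not 1334).]

```shell
# Compute the crop filter string from the amounts to remove:
in_w=2376; in_h=1584; top=187; bottom=62
out_h=$((in_h - top - bottom))            # 1335
vf="crop=${in_w}:${out_h}:0:${top}"
echo "$vf"                                # crop=2376:1335:0:187
# Equivalent using crop's built-in expressions (iw/ih = input size),
# per durandal_1707's remark:
#   ffmpeg -i in.mp4 -vf "crop=iw:ih-187-62:0:187" out.mp4
```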
[20:53] <defaultro> btw, when I use -qp 0, the video on youtube sometimes stops
[21:13] <defaultro> durandal_1707: just finished reencoding it with -qp 0 and I'm currently uploading to youtube. I'm uploading a 2376x1334 video as compared to my other upload which is 1920x1080. Here is the old one - http://www.youtube.com/watch?v=NSJBYi8RdHs
[21:14] <defaultro> brb, going to the gym :)
[22:28] <EvanDotPro> can anyone point me to the place where the x264 NAL unit size is determined based on the MTU or something?
[22:37] <EvanDotPro> when i set the output to -f h264 tcp://127.0.0.1:1234 it makes the NAL unit size like 30k bytes on average.
[22:38] <EvanDotPro> but when i make the output stdout and pipe to netcat it's like 8k bytes, then if i set it to another IP on my LAN or a remote server, it seems to adjust to the MTU and make them 1440 bytes
[22:44] <EvanDotPro> also, i'm a little confused as to if slice-max-size is related to the NAL unit size or not
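[On the slice-max-size question: in x264 that option caps the encoded size of each slice NAL in bytes (it exists mainly for MTU-constrained transports like RTP), so when it is in effect NAL sizes will track the packet size. A hedged sketch of passing it through ffmpeg's libx264 wrapper; option spelling is x264's, filenames and the 1200-byte cap are assumptions:]

```shell
# Cap each slice NAL at ~1200 bytes so a slice fits in one packet
# (command built as a string, not executed here):
cmd="ffmpeg -i input.mp4 -c:v libx264 -x264opts slice-max-size=1200 -f h264 tcp://127.0.0.1:1234"
echo "$cmd"
```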
[00:00] --- Mon Sep 30 2013

