[Ffmpeg-devel-irc] ffmpeg.log.20130821
burek
burek021 at gmail.com
Thu Aug 22 02:05:01 CEST 2013
[00:04] <axorb> jtriley: you probably want the second video stream, as it's probably a continuation of the first
[00:04] <axorb> but if you can't get it to work, then that's good enough
[00:04] <axorb> although I'd suggest -map 0:v:0 and -map 0:a:0 instead
[00:04] <axorb> or -map 0:a
[06:00] <buhman> I was attempting to give -acodec pcm_s16le -ac 2 -f s16le output to directly to an alsa handle with hw params SND_PCM_ACCESS_RW_INTERLEAVED SND_PCM_FORMAT_S16_LE, 2 channel
[06:01] <buhman> so, firstly, -ac 1 output with a single-channel playback handle sounds perfectly fine
[06:01] <buhman> but once I try to playback two-channel audio, things sound completely garbled
[06:02] <buhman> with -f s16le, how are the two channels physically written?
[06:03] <buhman> or, better yet, how can I specify the interleave format I want?
[08:58] <shur> when i specify the custom as, configure does not end..
[08:58] <shur> if i specify only cc, cxx, nm, ar and ld, configure ends but am not sure it would be calling a correct as
[08:58] <shur> this is cross compiling to android
[08:59] <JEEB> if all those tools have a common cross-prefix you could set that
[09:02] <shur> i thought about, and looked for --tolchain, that looking into the configure, wouldnt seem to be, but now am finding a --cross-prefix, maybe is that which i need to use, is that the point?
[09:02] <shur> am cross compile newcomer btw
[09:05] <shur> wow it ended now!
[09:05] <shur> thnks
[09:05] <shur> no pkg-config but let us hope i dont need it
[09:43] <shur> woa! it seems to have worked!
[11:24] <shur> but it seems am having prablems to get the static items and not the dynamic
[11:25] <shur> both unspecifying static and dynamic and also doing --disable-shared --enable-static
[11:52] <XwZ> hi, I have a question about how to close a rtsp stream, I'm using this code : http://pastebin.com/7UNz9zEN and even if i use avcodec_close(vCodecCtxp); and avformat_close_input(&pFormatCtx); my handle on my camera is not released, that mean after few tries I have to reboot my camera to free the handle otherwise i can't connect to it
[11:52] <XwZ> what am i don't release ?
[12:03] <shur> is it the git HEAD static build working?
[12:18] <iKriz> guys i've got NVENC encoded frames and i want to use ffmpeg to stream them over the network
[12:18] <iKriz> can i just put the encoded frame into an AVPacket?
[14:09] <iKriz> anyone online?
[14:12] <shur> why are there both branch/release/2.0 and tags/n2.0? isnt taht confusing? good one is tags/n2.0 isntit?
[14:21] <xlinkz0> good one is git clone --depth 1 git://source.ffmpeg.org/ffmpeg
[14:22] <CentRookie> hi all!
[14:22] <CentRookie> I have a little question
[14:23] <CentRookie> saw I have a mp4 with x264 vid and vorbis audio codec stream and would like to convert it to mp4 with libAAC audio stream, is it possible without video re-encoding?
[14:23] <CentRookie> it's clear that auto stream has to be fully re-encoded
[14:24] <xlinkz0> use -c:v copy
[14:28] <CentRookie> but will that produce a streamable mp4?
[14:28] <shur> why are there both branch/release/2.0 and tags/n2.0? isnt taht confusing? good one is tags/n2.0 isntit? <-- puristic question, not a particular problem
[14:28] <sacarasc> No. You'll need -movflags faststart for that, CentRookie.
[14:29] <sacarasc> (As well as.)
[14:29] <CentRookie> ok
[14:29] <CentRookie> if thats all, then its pretty straight forward
[14:31] <CentRookie> by the way is there a difference between nero aac and faac lib, in terms of quality?
[14:35] <sacarasc> Nero is one of the better free (as in no money) AAC encoders. faac isn't.
[14:38] <CentRookie> i see, hm, makes me want to search for nero aac for linux
[14:39] <sacarasc> You get it in the same zip file as the windows ones.
[14:39] <CentRookie> oh
[14:39] <CentRookie> do i need to compile it?
[14:39] <sacarasc> No.
[14:44] <CentRookie> im sorry to ask, but im not sure where to put the neroaac files pathwise, my ffmpeg is in usr/ffmpeg/ , when i check faac it seems to be in a lot of paths
[14:44] <CentRookie> i have centos6
[14:44] <sacarasc> IIRC, ffmpeg can't use NeroAacEnc, so it doesn't really matter.
[14:44] <CentRookie> oh, how about mencoder?
[14:45] <sacarasc> Pretty sure that can't either.
[14:45] <siki> hey
[14:45] <siki> why is the option '-rtsp_transport' missing in ffplay documentation?
[14:45] <siki> it was important for me
[14:46] <CentRookie> i see, neroaac is closed source
[14:46] <iKriz> hey
[14:47] <iKriz> guys i've got NVENC encoded frames and i want to use ffmpeg to stream them over the network
[14:47] <iKriz> anyone got any examples of rtsp streaming from library?
[14:48] <iKriz> not using ffmpeg.exe :)
[14:49] <CentRookie> hm so there are no alternatives for mobile and streamable files than to use faac ?
[14:49] <CentRookie> not speaking of mp3 of course
[14:51] <CentRookie> sarcastic, there seem to be ways to make ffmpeg work with neroaacenc on windows
[14:51] <CentRookie> using -f wave - | neroAacEnc
[14:51] <CentRookie> *wav
[14:53] <CentRookie> does somebody know where i should put the audio codec lib file in ffmpeg?
[14:53] <CentRookie> so that it is loaded when i use it as lib
[14:55] <sacarasc> I do not understand what you mean.
[14:56] <CentRookie> like you said, i can pipe the audio to neroEnc
[14:56] <sacarasc> It has nothing to do with ffmpeg, though. You can put it wherever you want, if you call the full path or put in your PATH and just use the executable name.
[14:57] <CentRookie> ok, but still it would be most basic if both files are in the same folder of /ffmpeg, right ?
[14:58] <sacarasc> I'd put it in ~/bin myself and have that in the PATH.
[15:13] <CentRookie> ok, tried that
[15:13] <CentRookie> sigh, not so easy at all :D
[15:14] <sacarasc> What are you streaming to?
[15:14] <CentRookie> to browser and mobile devices
[15:14] <CentRookie> just want to make sure that it s mobile ready
[15:14] <CentRookie> so im careful about compatibility
[15:15] <CentRookie> opera for example supports mp4 streaming, but not skipping, while firefox natively supports mp4
[15:15] <CentRookie> opera requires a plugin for that
[15:25] <CentRookie> since there are some real encoding cracks here
[15:26] <CentRookie> Would it be ok to discuss encoding parameters?
[15:27] <borgebjo> Does anyone here have experience with using ffmpeg in applications? Would like to stream video from a server to a client, both written by myself. The server gets video frames from a camera with variable frame rate. Low latency is required. Any best practices?
[15:29] <CentRookie> what output quality and resolution are you looking for?
[15:30] <Mavrik> CentRookie, currently the best available AAC encoder for ffmpeg is the libfdk_aac one
[15:30] <Mavrik> internal is probably better than libfaac as well
[15:31] <CentRookie> thanks, mavrik, gonna look into that
[15:31] <borgebjo> CentRookie, 752x480, good quality
[15:31] <CentRookie> check out this: http://blog.mmacklin.com/2013/06/11/real-time-video-capture-with-ffmpeg/
[15:31] <borgebjo> thanks
[15:32] <CentRookie> a lot of real time podcast application probably uses ffmpeg or some sort for real time streaming, if they support 264
[16:11] <borgebjo> Got a problem with mmacklins example.
[16:12] <borgebjo> $ ffmpeg -r 60 -f rawvideo -pix_fmt rgba -s 1280x720 -i - -threads 0 -preset fa
[16:12] <borgebjo> st -y -crf 21 -vf vflip output.mp4
[16:12] <borgebjo> ffmpeg version N-55644-g68b63a3 Copyright (c) 2000-2013 the FFmpeg developers
[16:12] <borgebjo> built on Aug 19 2013 20:27:12 with gcc 4.7.3 (GCC)
[16:12] <borgebjo> configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libcaca --enable-libfreetype --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libr
[16:12] <borgebjo> tmp --enable-libschroedinger --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxavs --enable-libxvid --enable-zlib
[16:12] <borgebjo> libavutil 52. 42.100 / 52. 42.100
[16:14] <borgebjo> oops.
[16:15] <Mavrik> pastebin ;)
[16:17] <CentRookie> geez
[16:17] <CentRookie> i give up
[16:18] <CentRookie> neroaacenc is too restrictive
[16:18] <CentRookie> would need to do 3 steps encoding to get 1 single video file
[16:18] <sacarasc> Or... Script it!
[16:19] <CentRookie> extract audio and encode as wav, encode with nero to aac, merge with video stream
[16:19] <CentRookie> and i bet somewhere among those steps there will be asynch audio
[16:19] <CentRookie> like when the input file has varialbe bitrates
[16:19] <borgebjo> love11ven JOIN
[16:19] <borgebjo> > love11ven JOIN
[16:19] <borgebjo> > like this: http://pastebin.com/cpuZdtCg
[16:20] <CentRookie> even scripting it would be too slow, as you would need to write out the audio stream first, instead of caching it in memory
[16:20] <CentRookie> when you got 5000 video files, that would be 15-20000 step encoding steps
[16:20] <CentRookie> probably double or tripple the amount of read and write
[16:21] <CentRookie> just for slighty better audio quality?
[16:21] <sacarasc> ffmpeg -i blah.mkv -vn -f wav - | neroaacenc -o cheese.mp4 && ffmpeg -i blah.mkv -i cheese.mp4 -map 0.0 -map 1.0 -movflags faststart final.mp4
[16:21] <CentRookie> in theory it should work
[16:21] <CentRookie> but doesnt
[16:21] <CentRookie> for example if the video file uses vorbis, ogg, somehow you cant pipe it as wav
[16:22] <CentRookie> but i can pipe it as ogg
[16:22] <CentRookie> but neroaac only accepts wav
[16:22] <CentRookie> so i need to write out the file
[16:22] <CentRookie> anyway, faac is not so bad
[16:23] <CentRookie> XD
[16:23] <CentRookie> until somebody shows me a better solution
[16:23] <sacarasc> mkfifo temp.wav && ffmpeg -i blah.mkv -vn -f wav temp.wav && neroaacenc temp.wav -o cheese.mp4 && ffmpeg -i blah.mkv -i cheese.mp4 -map 0.0 -map 1.0 -movflags faststart final.mp4 && rm temp.wav
[16:23] <CentRookie> what does mkfifo do
[16:24] <sacarasc> It makes a named pipe.
[16:24] <CentRookie> i see
[16:24] <CentRookie> clever
[16:24] <CentRookie> but && is basically still the same, 3 step encoding problem
[16:25] <sacarasc> Remove the first &, just leave it with 1 there, and I think it should do the two audio steps at the same time. \o/
[16:25] <CentRookie> hm
[16:26] <sacarasc> But, I have no linux box with which to test at the moment.
[16:26] <CentRookie> i think the 4th command should contain -i temp.wav
[16:26] <CentRookie> not cheese.mp4
[16:26] <CentRookie> or you are inputting 2 video streams
[16:27] <sacarasc> The MP4 in this case is the output from nero.
[16:27] <sacarasc> Not a video.
[16:27] <CentRookie> ah, then it should be m4a
[16:27] <CentRookie> nero only outputs audio
[16:27] <sacarasc> Not really.
[16:27] <CentRookie> it doesnt recognize mp4
[16:28] <CentRookie> but i get it
[16:28] <CentRookie> could work that way
[16:28] <CentRookie> would still be read mkv, extract to wav, read wav, write m4a, read mkv, read m4a, write mp4
[16:29] <CentRookie> compared to read mkv write mp4
[16:30] <sacarasc> The nero docs say it can write to mp4. Considering m4a is just mp4 with a different name, why wouldn't it? :D
[16:30] <CentRookie> hit me XD
[16:30] <CentRookie> it still doesnt
[16:30] <CentRookie> it says mp4 output not recognized
[16:31] <CentRookie> might also be just my noobish skills
[16:32] <CentRookie> but yeah, i abandoned it
[16:32] <CentRookie> go kill yourself, neroaacenc
[16:32] <CentRookie> i thought it was opensource
[16:32] <CentRookie> shouldnt they make it better?
[16:32] <sacarasc> You could use the lib-fdkaac encoder, which you can use with ffmpeg, though you might have to compile yourself or use a static build.
[16:33] <CentRookie> hm
[16:34] <CentRookie> how is the compatibility to older systems?
[16:35] <CentRookie> do you need some special aac codec to enjoy the higher quality?
[16:35] <CentRookie> as viewer
[16:35] <sacarasc> No, it should all create normal AAC, it's just different encoders are better than others.
[16:35] <CentRookie> i see
[16:36] <CentRookie> it supports the new he profiles
[16:36] <CentRookie> but those should be backward compatible right?
[16:36] Action: sacarasc shrugs.
[16:36] <sacarasc> I don't do much with aac.
[16:36] <CentRookie> i see
[16:36] <CentRookie> do you fix async video and audio from time to time?
[16:36] <CentRookie> im having trouble with itsoffset
[16:37] <CentRookie> no matter where i put that variable and what time parameter, it still is async like the original file
[16:37] <CentRookie> tried all differnet mapping permutations lol
[16:37] <CentRookie> in hope it would fix it
[16:37] <CentRookie> does itsoffset work with c:v copy and c:a copy?
[16:38] <sacarasc> It should.
[16:38] <CentRookie> then i really dont know what im doing wrong
[16:38] <CentRookie> it feels like a curse
[16:39] <CentRookie> whenever you think you solved one problem, another problem pops up
[16:39] <CentRookie> ffmpeg -y -i Pacific.Rim.2013.R6.HDCAM.mp4 -itsoffset 00:10:10.000 -i Pacific.Rim.2013.R6.HDCAM.mp4 -vcodec copy -acodec copy -map 0:1 -map 1:0 pacific-sound-delayed-10s.mp4
[16:39] <CentRookie> ignore the obvious filename related issue
[16:40] <CentRookie> still gives me the same audio delayed audio
[16:40] <CentRookie> the original mkv on the other hand is in sync
[16:41] <CentRookie> goal was to dely video for 10 min, so that audio is 10 min faster
[16:41] <CentRookie> but for some reason, ffmpeg ignores the itsoffset command
[16:41] <CentRookie> putting the audio and video to the original timing
[16:42] <Mavrik> CentRookie, the new HE profiles aren't backward compatible
[16:42] <CentRookie> new ones, are the HE 2.0 ?
[16:42] <Mavrik> e.g. if you try to play HE-AACv2 audio on a non-compatible player all the low frequencies are missing
[16:42] <CentRookie> argh
[16:42] <CentRookie> i thought that much
[16:43] <CentRookie> how about HE-AAC
[16:43] <Mavrik> but you need to tell fdk_aac explicitly you want a HE-AACv2 profile
[16:43] <Mavrik> HE-AAC and HE-AACv2 are mostly the same with exception of SBR
[16:43] <Mavrik> (which means HE-AACv2 mono is HE-AAC)
[16:43] <CentRookie> so HE-AACv1 would also create missing low frequencies?
[16:43] <Mavrik> CentRookie, HE-AACv2 is widely supported though, so I wouldn't worry much about it
[16:43] <Mavrik> yeah
[16:44] <Mavrik> CentRookie, or just encode into low profile and you're done
[16:44] <CentRookie> well, it is good to know, will take a note of that
[16:44] <Mavrik> there's no point in using the HE profiles for anything above about 64kbps anyway
[16:44] <CentRookie> i havent found a way to install fdk_aac yet on centos
[16:44] <CentRookie> not sure if i have to compile it totally
[16:44] <CentRookie> and then also recompile ffmpeg for that
[16:45] <Mavrik> CentRookie, yes and yes
[16:45] <Mavrik> fdk_aac has a different license and compiled ffmpeg can't be distributed if it's build with fdk_aac support
[16:46] <CentRookie> i see
[16:54] <CentRookie> Mavrik, I only found fdk for android
[16:54] <CentRookie> and a git source code
[16:54] <CentRookie> but the git source code doenst seem to play well with centos
[16:55] <CentRookie> bug in configure script
[16:55] <CentRookie> the android on sourgeforge is the right one?
[16:55] <CentRookie> cuz it says amr
[17:26] <CentRookie> you were right, mavrik, fdk is better than faac
[17:26] <CentRookie> at low bitrates that is
[17:30] <CentRookie> brilliant! sound is much better
[17:34] <CentRookie> what the heck
[17:35] <CentRookie> could it be that fdk doesnt support multicore?
[17:37] <GoaLitiuM> too bad it's not GPL-compatible
[17:37] <CentRookie> does it mean it is not multithreading compatible?
[17:37] <GoaLitiuM> no
[17:38] <CentRookie> it means you dont know
[17:38] <CentRookie> well at least the version i downloaded isnt
[17:38] <CentRookie> sigh
[17:38] <CentRookie> all the work for nothing
[17:43] <WhiteNight> Hi, can I use libffmpeg to rtp-stream the output of x264_encoder_encode()?
[17:45] <durandal_1707> unfortuantely ffmpeg is not creator of libffmpeg
[17:56] <CentRookie> do you guys know of a fast way to hardcode subtitles XD
[17:56] <CentRookie> i guess cant use c:v copy on that one
[18:00] <durandal_1707> CentRookie: it is explained in wiki
[18:05] <CentRookie> drandal, i know
[18:05] <CentRookie> i know how to hardcode subtitles
[18:05] <CentRookie> i was asking if there is a way to cheat or faster way to hardcode it
[18:06] <CentRookie> but i guess since it has to be overlayed for each frame
[18:06] <CentRookie> full re-encoding is a must
[18:15] <CentRookie> works
[18:15] <CentRookie> pew
[18:16] <Mista_D> is there any video filter that can measure edges on macroblock boundries? Or can be used to measure percieved video quality?
[18:17] <durandal_1707> what percieved video quality?
[18:19] <Mista_D> durandal_1707: measure macroblock pixelation/exessive blurness etc...
[18:20] <durandal_1707> using single video?
[18:21] <xlinkz0> CentRookie: when speed is an issue consider using x264 profiles like superfast
[18:22] <CentRookie> actually best value is always optimization
[18:22] <CentRookie> want as much quality and speed per computing time as possible
[18:23] <xlinkz0> i found that for the tests i've conducted i can not see the quality drop in superfast profiles
[18:23] <CentRookie> depends on bitrate
[18:23] <CentRookie> im working at ultra low bitrate area
[18:24] <CentRookie> 1hour ~ 120mb
[18:26] <xlinkz0> then why are you complaining about re-encoding? it must work blazingly fast at normal presets
[18:26] <CentRookie> well it is because of some raw footage
[18:26] <CentRookie> they were recorded with vorbis audio and later added with external subs
[18:27] <CentRookie> going through all of them takes a lot of time
[18:27] <CentRookie> so was just wondering if there is a way to fast overlay subs
[18:27] <CentRookie> but i dont think there is
[18:27] <CentRookie> so im re-encoding them fully
[18:58] <Samus_Aran> I have a sequence of images frame.0200.png to frame.0400.png, how can I refer to these filenames for encoding video?
[18:59] <Samus_Aran> -i "frame.%04d.png" doesn't work: No such file or directory
[19:00] <Samus_Aran> wildcards don't seem to work
[19:01] <sacarasc> cat frame.02* frame 03* frame.04* | ffmpeg -f image2pipe -
[19:02] <Samus_Aran> ffmpeg can't accept them as filenames?
[19:03] <sacarasc> for the %04d method, you'd need to start at 0000, and ffmpeg doesn't support globbing.
[19:03] <durandal_1707> globbing is supported, again read documentation
[19:03] <Samus_Aran> I found documentation saying how to use globbing, but when I tried it, it said it was an unknown option
[19:04] <durandal_1707> because you do not use ffmpeg
[19:04] <durandal_1707> or use extremly old version
[19:04] <Samus_Aran> hm?
[19:04] <sacarasc> It wasn't when I last used it. :D
[19:05] <Samus_Aran> ffmpeg version 0.8.6-6:0.8.6-0ubuntu0.12.10.1, built on Apr 2 2013 17:02:16 with gcc 4.7.2
[19:05] <durandal_1707> that is not ffmpeg
[19:05] <Samus_Aran> what is it?
[19:05] <durandal_1707> fake ffmpeg
[19:05] <Samus_Aran> and what does that mean?
[19:06] <durandal_1707> read output of "ffmpeg -h"
[19:06] <sacarasc> Samus_Aran: On Ubuntu and other distros, you're getting avconv from the libav project rather than ffmpeg from the ffmpeg project.
[19:07] <Samus_Aran> "Hyper fast Audio and Video encoder"
[19:07] <durandal_1707> not that ...
[19:08] Action: Samus_Aran gives durandal_1707 the Award For Excellence In Achieving Vagueness
[19:09] <durandal_1707> you cleary lack skills big time
[19:09] <Samus_Aran> durandal_1707: you told me to read the help, whatever for? it doesn't even say what app it is
[19:10] <durandal_1707> it says, you just need to read with understanding
[19:10] <durandal_1707> for examples is in that output FFmpeg project ever mentioned
[19:11] <durandal_1707> if you want help for avconv and ffmpeg from Libav go to #libav
[19:18] <Samus_Aran> sacarasc: thank you. haven't used ffmpeg recently, didn't know it was forked.
[19:19] <durandal_1707> Mista_D: if you didn't know there is psnr filter, it needs 2 videos
[22:12] <Leoneof> is there a way to get the properties (Presets) of Video.mp4 file, and then apply this preset to convert other videos?
[22:14] <cbreak> encoding properties of a video stream are not stored in the container
[22:15] <cbreak> they aren't even usually stored in the video stream (although x264 used to do that a few years ago)
[22:17] <Leoneof> oh you're here
[22:17] <Leoneof> hello
[22:19] <Leoneof> ok.
[22:25] <cbreak> I am in a lot of places :)
[22:26] <cbreak> if you use x264, it can show you the encoding options if you just run strings on the stream
[00:00] --- Thu Aug 22 2013
More information about the Ffmpeg-devel-irc
mailing list