[Ffmpeg-devel-irc] ffmpeg.log.20130108

burek burek021 at gmail.com
Wed Jan 9 02:05:01 CET 2013


[02:48] <pietro10> Hi. I'm looking for a way to batch trim leading and trailing silence from a bunch of wav files I have here. I tried using sox, but it is cutting out some details, for instance the trailing t sound on voice samples that end with words like "great". I see there is a silencedetect filter but I'm not entirely sure how to use it. Can ffmpeg do this? Thanks.
[04:34] <perise__> who
[04:34] <perise__> hello
[04:35] <perise__> who
[04:37] <sacarasc> Hi.
[04:39] <llogan> ok!!
[08:37] <pietro10> hm the voice silences are pure silence
[08:37] <pietro10> I could just write a tool to do this
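The silencedetect approach mentioned above can be sketched in two passes; the filenames, noise threshold, and timestamps below are placeholder assumptions, and an ffmpeg build that includes the silencedetect filter is required. The first pass prints silence_start/silence_end timestamps on stderr; a second pass then cuts to the non-silent span. The trim command is assembled as a string here so the sketch can be inspected without actually running ffmpeg:

```shell
# Pass 1: report silence intervals; "-f null -" decodes and discards
# the output, so only the silencedetect log lines matter.
detect='ffmpeg -i in.wav -af silencedetect=noise=-50dB:d=0.2 -f null -'

# Pass 2: cut from the first non-silent timestamp for the non-silent
# duration. build_trim only assembles the command line.
build_trim() {
    # $1 input, $2 start, $3 duration, $4 output
    printf 'ffmpeg -ss %s -t %s -i %s %s' "$2" "$3" "$1" "$4"
}
build_trim in.wav 0.42 3.10 out.wav   # placeholder timestamps from pass 1
```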
[09:11] <boomrx> http://pastebin.com/27VaCAxY
[09:57] <flo`> hi
[09:57] <flo`> i want to use ffmpeg in order to grab a http video stream, add a watermark to it and stream it again
[09:58] <flo`> but i'm failing to even grab the stream :/ what is wrong with ffmpeg -i http://myserver:1234/stream ...?
[09:58] <flo`> i'm always getting a "no such file or directory" on the http:// url
[10:18] <divVerent> is there an official http-accessible ffmpeg git repo?
[10:18] <divVerent> $ setproxy git ls-remote http://source.ffmpeg.org/ffmpeg
[10:18] <divVerent> fatal: http://source.ffmpeg.org/ffmpeg/info/refs?service=git-upload-pack not found: did you run git update-server-info on the server?
[10:18] <divVerent> does not work
[10:20] <divVerent> like, is the github mirror always kept current automatically?
[10:20] <divVerent> or is http://git.videolan.org/git/ffmpeg.git always current
[10:21] <divVerent> it LOOKS to me like source.ffmpeg.org == git.videolan.org, and thus the http url with the git.videolan.org hostname SHOULD be ok, can anyone confirm this officially? I want to refer to it from a PKGBUILD because it annoys me to have to use git:// when behind a web proxy
[10:22] <divVerent> because that simply doesn't work through a web proxy
[10:22] <JEEB> I don't see the repo changing from the videolan any time soon
[10:22] <divVerent> of course
[10:22] <divVerent> what would be BEST
[10:22] <divVerent> would be supporting the http backend on source.ffmpeg.org too
[10:23] <divVerent> by copying the same setup to source.ffmpeg.org's vhost, or MAYBE using a redirect (not sure if git http allows redirects), http://source.ffmpeg.org/git/ffmpeg.git could be made to work
[10:24] <divVerent> AH, I see the issue
[10:24] <divVerent> it's INTENDED to work, but the rewrite rule is broken :P
[10:24] <divVerent> wget -O- 'http://source.ffmpeg.org/git/ffmpeg.git/info/refs?service=git-upload-pack'
[10:24] <divVerent> Location: http://git.videolan.org/?p=ffmpeg.gitservice=git-upload-pack&service=git-upload-pack [following]
[10:25] <JEEB> lol
[10:25] <divVerent> wonder why it inserts the same thing twice
[10:27] <ubitux> flo`: quote it properly? (maybe an issue with your shell)
[10:28] <ubitux> divVerent: yes the configuration of the server is kinda weird, source.ffmpeg.org redirects strangely to git.videolan.org, you can't really rely on it :(
[10:30] <divVerent> ubitux: I wonder how much work it would be to fix it
[10:30] <divVerent> for that one would probably first need to know what relies on the existing redirect logic
[10:30] <ubitux> i don't have access to the server
[10:30] <divVerent> is there a bug tracker for that kind of issue?
[10:31] <ubitux> i think the trac is fine
[10:31] <ubitux> or well
[10:31] <ubitux> just ask ffmpeg-devel
[10:32] <divVerent> I really wonder why on this tracker I always lose my password
[10:34] <t4nk926> hi
[10:35] <t4nk926> i have a little problem with ffmpeg when i want to generate a teaser from an MP4, my audio is not in sync with my video
[10:36] <t4nk926> have you got an idea for solving my issue ?
[10:39] <t4nk926> http://pastebin.com/2VevM5Fm
[10:40] <t4nk926> this is my command and my output
[11:05] <_dr> how can i get a list of available pix_fmts that go with a certain codec?
[11:20] <ubitux> ./ffmpeg -help encoder=libx264 |& grep pixel → Supported pixel formats: yuv420p yuvj420p yuv422p yuv444p
[11:29] <Olive6767> Hi there :)
[11:30] <Olive6767> I'm compiling ffmpeg on OSX 10.8, but I would like the obtained binary to be both compatible with OSX 10.8 and 10.7, how should I do so?
[11:39] <Olive6767> nobody?
[12:27] <_dr> ubitux: thanks
[12:31] <_dr> mh, now i have raw images, which i convert to tiff, which i encode using ffv1 with -pix_fmt gray16le. then i extract the frames to .tiff (gray16le), which i then convert back to raw images... but they don't match the original raw images. any idea where the problem could be?
[12:31] <_dr> i made sure the problem isn't caused by imagemagick's convert, which i use to convert raw images to tiffs, because raw (reference) to tiff back to raw equals raw (reference)
[12:32] <_dr> i believed the problem lay with yuv conversion, which is why i tried a codec (ffv1) which supports gray16le
[12:32] <_dr> still no luck
[12:34] <_dr> here's the commands i use http://pastebin.com/LsrNJyik
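One way to check a pipeline like _dr's for losslessness: hash the decoded frames on both ends with ffmpeg's framemd5 muxer and compare the listings; if every per-frame MD5 matches, the encode/decode round trip was bit-exact. Filenames below are placeholders, and the commands are assembled as strings so the sketch is inspectable without ffmpeg installed:

```shell
# Per-frame MD5 of the source frames and of the ffv1-encoded copy;
# identical listings mean the round trip is lossless.
src_cmd='ffmpeg -i source.tiff -f framemd5 source.md5'
enc_cmd='ffmpeg -i encoded.avi -f framemd5 encoded.md5'
cmp_cmd='diff source.md5 encoded.md5'
printf '%s\n%s\n%s\n' "$src_cmd" "$enc_cmd" "$cmp_cmd"
```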
[13:53] <cbsrobot> Olive6767: what happens if you try it on 10.7 and 10.8 ?
[13:53] <cbsrobot> any errors ?
[13:53] <Olive6767> cbsrobot: only tried on 10.8, it works
[13:53] <Olive6767> but I would like to be sure it will on 10.7
[13:54] <cbsrobot> I can try on 10.7
[13:54] <Olive6767> that would be great
[15:03] <mpfundstein> what would you guys use as an intermediate format ? i dont want to use raw data or huff as its waaay too big. can't make a decision
[15:03] <mpfundstein> thought about mpeg2 15bps
[15:03] <mpfundstein> Mbps
[15:05] <sacarasc> Lossless h264 maybe?
[15:06] <mpfundstein> i will check thanks
[15:35] <mpfundstein> ill actually settle with ts
[15:35] <mpfundstein> thx
[16:03] <crashev> hello, is it possible to double a mono channel into "stereo"? I have a mono recording, I would like to hear the same thing on both channels
[16:03] <Mavrik> yeah, just set channel number to 2
[16:03] <Mavrik> and it'll do duplication
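As Mavrik suggests, requesting two output channels makes ffmpeg duplicate the mono channel onto both sides; filenames here are placeholders, and the command is shown as a string for inspection:

```shell
# -ac 2 sets the output channel count; with a mono input the single
# channel is copied to both left and right.
cmd='ffmpeg -i mono.wav -ac 2 stereo.wav'
echo "$cmd"
```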
[17:00] <serp_> hi, I want to programmatically iterate over all still images in a video. is ffmpeg the right tool for this?
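For the command-line side of serp_'s question, ffmpeg's image2 muxer can dump every decoded frame to numbered image files that a program can then walk; the filename and pattern below are placeholders (the libavformat/libavcodec API is the alternative for doing it fully in-process):

```shell
# Each decoded frame becomes frame000001.png, frame000002.png, ...
cmd='ffmpeg -i input.mp4 frame%06d.png'
echo "$cmd"
```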
[17:09] <MadTBone> I have footage that was accidentally recorded as rawvideo uyvy422 from a component (YPbPr) input.  The actual video source was s-video.  The resulting files contain a proper luma channel in the "y" samples, a proper chroma channel in the place of the "u" samples, and garbage in the place of the "v" samples.  Is there a way I can get ffmpeg to ignore the "u" samples?
[18:24] <Mista_D> Can I have FFmpeg output "fps= " value only during encoding?
[18:26] <sacarasc> When else would it output it?
[18:27] <Mista_D> sacarasc: can I see only the fps value?
[18:27] <sacarasc> Oh, you only want to see it.
[18:28] <Mista_D> yes.
[18:28] <sacarasc> You could hack on ffmpeg to only output that, I suppose. I don't know if there is another way.
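A hedged alternative to patching ffmpeg, assuming a build recent enough to have the -progress option: emit machine-readable key=value progress records and filter for the fps field. Filenames are placeholders; the pipeline is shown as a string for inspection:

```shell
# -nostats silences the usual status line; "-progress -" writes
# key=value records (frame=, fps=, bitrate=, ...) to stdout, which
# grep then narrows to the fps lines.
cmd="ffmpeg -nostats -progress - -i in.mp4 out.mp4 | grep '^fps='"
echo "$cmd"
```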
[20:06] <brad_c6> I have successfully used the api-example.c to decode an audio file, but I would like to save attributes about the audio so as to allow me to encode it into some other format later. Is there something in libavcodec that could give me similar function to ffprobe? Thank You
[20:11] <Mavrik> brad_c6, if you call av_dump_format on the input format context you'll get the same output as ffprobe on the console
[20:11] <Mavrik> brad_c6, also all that data is stored in the AVFormatContext structure
[20:12] <brad_c6> Mavrik: ok I'll try that, thx
[20:18] <pietro10_> what's the proper way to produce ogg vorbis with avconv? -c libvorbis just produces ogg-flac again
[20:20] <pietro10_> ah -acodec
[20:21] <pietro10_> ...does file just assume all ogg files are ogg-flac?
[20:23] <pietro10_> no because I get file is corrupt every time
[20:23] <beastd> pietro10_: you can always use "ffmpeg -i created.file" to find out what you just created.
[20:23] <pietro10_> it just tells me flac
[20:23] <pietro10_> here is the command line I used:
[20:23] <pietro10_> { for i in *; do echo avconv -i $i $i.ogg -acodec libvorbis '&&'; done; echo true; } | bash
[20:23] <pietro10_> where all of * are .wav files
[20:26] <beastd> pietro10_: we do not support the avconv tool from the ffmpeg fork in this channel. but usually you write the options that associate to an output file before that output file, not after
[20:26] <beastd> also what should '&&' accomplish?
[20:27] <pietro10_> oh those are forks?
[20:28] <pietro10_> then the debian/ubuntu/whatever people added the "ffmpeg is deprecated" message themselves :|
[20:28] <pietro10_> as for && it will stop on the first file that fails
[20:28] <beastd> pietro10_: the deprecated message is misinformation spread by the fork
[20:29] <brad_c6> Indeed
[20:29] <pietro10_> good to know!
[20:29] <pietro10_> what is the point of the fork then
[20:29] <beastd> in e.g. debian you get the fork somehow and it installs replacements or whatever for ffmpeg. if the fork's software finds out it is called as ffmpeg it prints the deprecated message
[20:30] <pietro10_> I see
[20:31] <beastd> Still i think you got some things wrong in your bash commandline
[20:32] <pietro10_> right now it's program -i $i -acodec libvorbis $i.ogg
[20:32] <beastd> the && being quoted as '&&' is certainly wrong
[20:32] <pietro10_> the point of this is just to test to make sure the ogg conversion doesn't result in loss of quality, since the .wav files are recordings of things that were passed through a filter to make them sound like long-distance radio transmissions
[20:33] <beastd> pietro10_: could you produce correct output file now? one with a vorbis autio stream?
[20:33] <beastd> *audio
[20:34] <pietro10_> yep
[20:34] <pietro10_> thanks
[20:34] <pietro10_> so now I'm curious what the difference between ffmpeg and avconv is
[20:35] <ubitux> pietro10_: ffmpeg is the conversion tool from the FFmpeg project, avconv is the conversion tool from the Libav project
[20:35] <ubitux> it's essentially the same, except that ffmpeg has more stuff
[20:38] <pietro10_> right, I read that; I was thinking more about the 'stuff' part =P
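A corrected form of the batch conversion loop from earlier in the thread: the codec option placed before the output file, variables quoted, and the .wav suffix replaced rather than appended; the loop stops at the first failing file. The filenames are whatever *.wav matches in the current directory:

```shell
# ${i%.wav} strips the .wav suffix, so foo.wav becomes foo.ogg
for i in *.wav; do
    ffmpeg -i "$i" -acodec libvorbis "${i%.wav}.ogg" || break
done
```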
[20:46] <brad_c6> I got av_dump_format to work (thx again) from what I got out of demuxing.c. Is there a way though to read the data from a std::vector<char> or char[] instead? (I loaded the file into memory as it was extracted from a MPQ archive, in hopes to then transcode it to a different format)
[20:46] <pietro10_> it looks like no one has made a yes/no comparison of the original and its fork
[20:47] <Mavrik> brad_c6, instead of trying to parse that
[20:47] <pietro10_> maybe when I have a lot of time to kill I will
[20:47] <Mavrik> brad_c6, why aren't you reading fields in your format and codec contexts?
[20:49] <brad_c6> Mavrik:?
[20:49] <Mavrik> what?
[20:50] <brad_c6> Mavrik: I am not sure what you mean?
[20:50] <Mavrik> did you decode the audio?
[20:50] <brad_c6> It does, but I have to set attributes about the audio manually
[20:50] <Mavrik> ?
[20:51] <Mavrik> if you decoded the audio
[20:51] <Mavrik> you had to get an AVFormatContext, which describes your input container and AVCodecContext which describes your input codec
[20:51] <Mavrik> av_dump_format takes those structures and creates that readable output
[20:51] <Mavrik> if you need parameters of the input go check those
[21:01] <brad_c6> I am looking at avformat_open_input which specifies a filename, is there a way to give it a char[] or vector instead of a filename, be it via an option or otherwise? (I think that is the question I should be asking)
[21:41] <beastd> brad_c6: maybe read this http://ffmpeg.org/doxygen/trunk/group__lavf__decoding.html#details
[21:44] <brad_c6> maybe something with avio?
[21:46] <brad_c6> like avio_read
[21:47] <beastd> brad_c6: if i am not mixing things up you can use avio_alloc_context() and then preallocate an AVFormatContext with avformat_alloc_context(), initialize its pb field and pass it to avformat_open_input()
[21:47] <brad_c6> Your guess is as good as mine. I'll give it a go
[21:49] <beastd> brad_c6: ok. you should be able to find the api docs for the functions i listed above. please tell if it worked out. i never tried it myself but i am quite sure the use case you described is supported by lavf.
[21:57] <pietro10_> thanks again
[21:58] <tclavier> hello
[21:59] <brad_c6> beastd: Can I set the opaque pointer to NULL? (I am not sure what the opaque does) it says "An opaque pointer to user-specific data."
[21:59] <brad_c6> tclavier:hello
[22:00] <beastd> brad_c6: if you mean for avio_alloc_context. then yes
[22:00] <brad_c6> beastd:yep
[22:00] <tclavier> With a .mov file with DNxHD video
[22:00] <tclavier> [dnxhd @ 0x16c6060] unsupported cid 1244
[22:00] <tclavier> bug or bad video file ?
[22:01] <tclavier> video file was produced by Apple :-(
[22:02] <luc4> Hi! I would like to extract a stream from a container. I therefore looked at the code demuxing.c from the sample codes and dumped the AVPacket after av_read_frame like this: http://pastebin.com/DafVdvaC. Is this correct?
[22:02] <beastd> brad_c6: usual callback design. in bigger apps you mostly would not want your callback to directly access some global data. so opaque provides you with a way to supply that data, which will in turn be passed to your callbacks when they are called.
[22:04] <tclavier> ok
[22:06] <tclavier> http://pastie.org/5650369
[22:09] <tclavier> and http://pastie.org/5650393 if i say Yes
[22:14] <emerica_> DNxHD-TR not supported is all le goog seems to bring up
[22:15] <tclavier> compression id 1244 is DNxHD-TR ... ok
[22:15] <tclavier> Do you know where we can read the specification ?
[22:16] <brad_c6> beastd:I must be making a stupid mistake in this method call. Any ideas? http://pastebin.com/SbXprYeC
[22:17] <brad_c6> beastd:I am getting no such method (it is included though)
[22:22] <brad_c6> beastd:fixed it, had to cast (unsigned char *)
[22:24] <brad_c6> beastd: That did it! Thank you so much!
[22:24] <beastd> brad_c6: i am glad it works for you.
[22:25] <beastd> brad_c6: but note the buffer should actually be allocated with av_malloc for correct alignment. maybe you can adapt your code to allocate the buffer with e.g. av_malloc
[22:25] <brad_c6> beastd: I think when I get all the FFMPEG related stuff done with my app, I'm gonna write a big tutorial/use cases
[22:26] <beastd> brad_c6: contributions to doc/ and doc/examples are welcome too
[22:28] <beastd> brad_c6: but we usually won't accept C++ code for our codebase and for examples. anyway if you put up something on your own, you can submit the address to ffmpeg-devel and we might link to it from our homepage.
[22:30] <brad_c6> beastd: ok, I will do that as I love what ffmpeg does and I think having more documentation would make the project much more accessible to developers. I would turn them into C code examples for sure. (The parts of my project are basically C anyway, besides the std::vector.)
[22:40] <cbsrobot> tclavier: please file a bug report and upload a sample
[22:41] <llogan> that sample would be huge
[22:42] <llogan> tclavier: is it just one file or many?
[22:42] <llogan> did you test ffmpeg from git head?
[22:42] <cbsrobot> DNxHD TR (Thin Raster) - never heard of it
[22:42] <llogan> might give FFmbc a try too
[22:43] <llogan> cbsrobot: same here
[22:44] <tclavier> just one ... it's the first file, and i can't build a new, shorter one
[22:44] <tclavier> i can make a dd :-D
[22:45] <beastd> brad_c6: maybe you can use avio dynbuf for replacing vector: http://ffmpeg.org/doxygen/trunk/avio_8h.html#adb5259ad07633518173eaa47fe6575e2
[22:46] <cbsrobot> tclavier: that would be great
[22:46] <tclavier> i'm not using the latest git commit (build failed) but the latest build available on deb-multimedia
[22:46] <cbsrobot> although if you can upload the original to datafilehost, or a private website too
[22:46] <brad_c6> beastd:I guess I could make a specific ReadFile function in my MPQArchive Class
[22:47] <tclavier> on a friend's computer with QuickTime and the DNxHD codec it works well
[22:48] <beastd> brad_c6: it is not exactly comparable but you can write your data into the dyn buf and then when you close it you get the pointer to the resulting buffer (the resizing of the buffer is handled internally).
[22:48] <brad_c6> beastd:nice
[22:48] <beastd> brad_c6: i mean you get the resulting buffer and its length on close
[22:49] <tclavier> if you whant to test you can dl ftp://backup.azae.net/MOBILITE.mov
[22:49] <brad_c6> beastd: I will use that for the audio/video files ( I have been using std::vector<char> as a general file container for many different libraries)
[22:50] <luc4> Hi! Anyone who can suggest some sample code to extract streams?
[22:51] <llogan> 2.8 G. too big for me.
[22:51] <cbsrobot> tclavier: I try to create a smaller sample
[22:52] <tclavier> how to know the min dd size to use ?
[22:53] <beastd> brad_c6: yeah ok. as the underlying buffers of avio dyn buf are managed with av_malloc & friends it would solve your problem with avio_alloc_context  buffer requirements.
[22:54] <llogan> that's a 14.9 MB/s bitrate
[22:56] <brad_c6> beastd: I'll implement avio dyn buf for performance reasons; for the moment avio_alloc_context is working great
[22:57] <tclavier> llogan ... i requested a "very good quality"
[22:57] <tclavier> but i can't read it :-(
[22:57] <llogan> tclavier: i don't know. 10 seconds should be good enough i guess. that would be ~1227770 kilobits
[22:58] <llogan> ~150 MB
[22:58] <tclavier> ok
[22:58] <llogan> i'm not sure if it will work with dd. i don't use mov or dnxhd often
[22:59] <llogan> so test your output with ffmpeg first. it might give a different error
[22:59] <llogan> your dd output i mean
[22:59] <tclavier> :-D
[23:00] <brad_c6> beastd: (feel dumb asking) Is there a function I call on AVFormatContext to query channels, sample format, etc? thank you for all the help :D
[23:04] <tclavier> strange ... with a 2G file i get : Invalid data found when processing input
[23:05] <tclavier> same pb with 150MB file :-(
[23:05] <Mavrik> brad_c6, those are properties of a stream, not format
[23:05] <Mavrik> brad_c6, so look for AVCodecContext
[23:05] <beastd> brad_c6: i meant you to use avio dyn buf e.g. for filling the buffer (it is write only) and feed the returned buffer to avio_alloc_context for putting in AVFormatContext before avformat_open_input .
[23:05] <Mavrik> brad_c6, it's right here in the documentation: http://ffmpeg.org/doxygen/trunk/structAVCodecContext.html
[23:05] <brad_c6> Mavrik:sorry about that
[23:06] <llogan> tclavier: maybe there is an "atom" at the end of the file that is missing in the cut samples
[23:06] <brad_c6> beastd:oh, ok now I understand will implement that
[23:07] <tclavier> ha yes ... just before, ffmpeg says : moov atom not found
[23:10] <tclavier> in libavcodec/dnxhddec.c, cid is constructed with cid = AV_RB32(buf + 0x28);
[23:11] <tclavier> do you know where the '0x28' constant comes from ?
[23:37] <ElMarikon> cheers!
[23:38] <ElMarikon> does anyone know what I can do against the message "Warning: data is not aligned! This can lead to a speedloss", when I'm scaling and padding my video?
[23:38] <ElMarikon> how can I align my data?
[23:41] <ElMarikon> it looks like it has to do with mmx and sse2... more than that I could not understand...
[23:44] <ElMarikon> noone?
[23:47] <luc4> I don't know that warning but sometimes buffers have to be aligned in memory.
[23:57] <ElMarikon> oookay... could u quickly explain how i do that?
[23:57] <luc4> Anyone who knows what is the difference between side_data and data element in AVPacket?
[23:58] <luc4> ElMarikon: if that is your issue you need to allocate your buffers with a specific alignment in memory, typically 16 bytes for SSE.
[23:58] <ElMarikon> luc4: sorry, i still don't get it:-(
[00:00] --- Wed Jan  9 2013

