[Ffmpeg-devel-irc] ffmpeg.log.20121119

burek burek021 at gmail.com
Tue Nov 20 02:05:01 CET 2012


[00:03] <frecel> you on klaxa?
[00:16] <knoopx> is it possible to transcode a video using only 25% of the original video frames?
[00:17] <frecel> im using ffmpeg to stream to twitch.tv. is it possible to stream audio from both my microphone and game sounds?
[00:19] <dbarrett> knoopx: as in the first 25% or the same duration just a quarter of the framerate?
[00:19] <llogan> frecel: this may help https://ffmpeg.org/trac/ffmpeg/wiki/Capturing%20audio%20with%20FFmpeg%20and%20ALSA
[00:20] <knoopx> dbarrett: 25% of frames and duration
[00:20] <frecel> knoopx, you want to make some sort of time lapse?
[00:21] <knoopx> frecel: not really but they will look alike, what I want is to generate preview videos of full length videos
[00:22] <knoopx> ie: 1 frame every minute, it needs to be fast to transcode
[00:23] <knoopx> is there a way to force frameskipping?
[00:26] <dbarrett> This might help you: http://debuggable.com/posts/FFMPEG_multiple_thumbnails:4aded79c-6744-4bc1-b30e-59bccbdd56cb
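What knoopx describes can be sketched with ffmpeg's fps and select filters; `input.mp4` and the output names below are placeholders, not taken from the log:

```shell
# Keep one frame per minute of video and let the fps filter drop the
# rest; -an drops audio, which a silent preview does not need.
ffmpeg -i input.mp4 -vf fps=1/60 -an preview.mp4

# Alternatively, keep every 4th frame (25% of the frames) and re-stamp
# them consecutively with setpts so the duration also shrinks to 25%:
ffmpeg -i input.mp4 -vf "select='not(mod(n,4))',setpts=N/FRAME_RATE/TB" \
       -an preview_25pct.mp4
```

Since only the selected frames are encoded, this is also fast to transcode.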
[00:31] <frecel> can i use the -i flag twice?
[00:50] <fredm> I have a question about FFmpeg development. Is this the right channel?
[00:59] <llogan> fredm: the development channel is #ffmpeg-devel
[01:00] <JEEBsv> if it's ffmpeg itself naturally
[01:03] <fredm> Just to confirm - Is the qscale property of MpegEncContext the MQuant value?
[01:10] <juanmabc> mmm, what's the deal with some audio streams not getting total size/length in seconds?
[01:13] <juanmabc> i determine stream size by stream duration, but some do not have stream duration
[01:13] <juanmabc> is there some "size" on the struct i missed?
[04:41] <rpcesar> I am writing a web service with the intention of creating a "slideshow" I can upload to youtube. I am very concerned about patents on the encoding algorithms, and was wondering if there are any supported formats that are devoid of patents, and what any drawbacks would be in using them
[06:01] <shifter1> Anyone here used libopus before?
[06:01] <shifter1> or know how to force VBR ?
[06:01] <shifter1> using -aq doesn't seem to change anything
[06:02] <klaxa> frecel: heh, now i am
[06:04] <frecel> klaxa: I did a super long stream and nothing crashed the entire time I had it going :D
[06:04] <klaxa> nice
[06:05] <shifter1> are there any other options than -aq to get vbr audio?
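For shifter1's question: assuming a build with libopus enabled, the encoder's private `vbr` option (rather than `-aq`) is what controls this; the file names are placeholders:

```shell
# The libopus wrapper exposes a 'vbr' option (off|on|constrained);
# -b:a sets the target bitrate that the VBR mode varies around.
ffmpeg -i input.wav -c:a libopus -b:a 96k -vbr on output.opus
```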
[06:07] <frecel> klaxa: I have one more question though. This is how I select my audio source
[06:07] <frecel>  -f alsa -ac 2 -i default
[06:07] <frecel> but I cant seem to be able to change it to anything other than my microphone
[06:07] <klaxa> are you using pulse?
[06:07] <frecel> yes
[06:07] <shifter1> alsactl ?
[06:07] <frecel> actually I used to have it set as  -f alsa -ac 2 -i pulse
[06:08] <klaxa> if you open up the pulseaudio volume control you can set the default recording device in the "Input Devices" section and select the recording device for a running sink (i.e. after you started ffmpeg) in the "Recording" section
[06:10] <klaxa> if you want to record both, your microphone and your in-game audio it gets more complicated...
[06:10] <klaxa> i can't really find where ffmpeg can actually mix two audio streams
[06:11] <frecel> do you think its possible to use some other software to mix audio, feed it to ffmpeg and then stream it?
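ffmpeg itself can do the mixing, which also answers frecel's earlier question about using `-i` twice: builds from around this era onward include the amix filter. The device and output names below are assumptions, not from the log:

```shell
# Capture two sources (placeholder device names) and mix them into a
# single stream with the amix filter, written here to a WAV file.
ffmpeg -f alsa -ac 2 -i default \
       -f pulse -i mic_source \
       -filter_complex "[0:a][1:a]amix=inputs=2[a]" \
       -map "[a]" mixed.wav
```

The same `-filter_complex`/`-map` pair works unchanged when the output is an RTMP stream instead of a file.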
[06:13] <klaxa> that would add overhead on the cpu and increase latency... trust me ;)
[06:13] <hurfadurf> could do ecasound through jack if you really wanted to do it in an external program
[06:13] <klaxa> you could either use sox to accomplish that, or use pulseaudio with two loopback-modules and a null-sink
[06:13] <klaxa> the pulseaudio solution heavily increases latency
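The null-sink setup klaxa mentions looks roughly like this; the sink and source names are made up, and the game's playback stream has to be pointed at the new sink by hand (e.g. in pavucontrol):

```shell
# Create a dummy sink to act as a mixing bus.
pactl load-module module-null-sink sink_name=mixbus
# Loop the microphone into it (replace the source name with your mic).
pactl load-module module-loopback source=alsa_input.your_mic sink=mixbus
# Move the game's audio onto 'mixbus' in pavucontrol, then record the
# sink's monitor source with ffmpeg:
ffmpeg -f pulse -i mixbus.monitor -ac 2 out.wav
```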
[06:14] <hurfadurf> the jack solution doesn't
[06:14] <hurfadurf> you could probably pipe the output of ecasound into ffmpeg though maybe? dunno if it'll do that.
[06:15] <frecel> sox would do that without too much delay?
[06:15] <klaxa> not sure
[06:15] <klaxa> thanks to pulse i think everything is a bit laggy :X
[06:15] <frecel> I wonder if vlc could mix audio streams
[06:15] <hurfadurf> ecasound will do this and it's commandline
[06:16] <klaxa> frecel: i guess what hurfadurf is suggesting might work
[06:16] <klaxa> although i never heard of it
[06:17] <frecel> I wonder how I feed an audio stream from ecasound to ffmpeg
[06:17] <hurfadurf> it would be interesting to try just piping raw samples
[06:18] <hurfadurf> theoretically that would work but that's just a little crazy
[06:18] <hurfadurf> this is the sort of thing jack was designed to do, but if you're using pulseaudio then they won't play nice
[06:19] <frecel> Jack is my archenemy
[06:19] <frecel> I never got that thing installed properly :D
[06:19] <hurfadurf> haha
[06:19] <hurfadurf> so you're the opposite of me
[06:19] <hurfadurf> pulseaudio can die a slow death as far as i'm concerned
[06:23] <klaxa> pulseaudio actually has nice abstraction layers
[06:23] <klaxa> but it's too inefficient
[06:23] <klaxa> it's easy to use, easy to understand, but it just takes too much cpu
[06:23] <hurfadurf> yeah i don't doubt it
[06:23] <hurfadurf> i've used pulseaudio a few times
[06:23] <hurfadurf> the cpu usage was absurd
[06:23] <hurfadurf> the JACK dudes are at least up front about it
[06:24] <klaxa> although i think on my system that might be because i have real time equalizing running
[06:24] <hurfadurf> "jack is zero-latency, but at the expense of some CPU"
[06:24] <klaxa> which makes all the shit sound good
[06:24] <klaxa> pulse creates quite the latency
[06:24] <klaxa> especially if you use some modules
[06:24] <klaxa> feels like the pipe-buffers are way too large
[06:24] <hurfadurf> i don't understand why we didn't just extend jack a little bit and standardize on that
[06:25] <klaxa> to reduce potential stuttering
[06:25] <klaxa> because lennart poettering happened? i don't fucking know
[06:26] <hurfadurf> the jury's still out on systemd, but pulseaudio needs to go the way of HAL
[06:48] <Zumu> hi
[06:48] <Zumu> I see "av_interleaved_write_frame(): Invalid argument" :(
[07:45] <Zumu> upgraded to ffmpeg-0.11.2 and seems like it works
[08:04] <arai1> I'm starting to investigate the viability of using ffmpeg and related libraries for the back-end of a master controller scheduling feeds from hardware, pre-recorded, and rtsp sources, and producing feeds to rtsp destinations, and hardware (for cable head-ends).  Am I completely barking up the wrong tree?
[12:37] <fatpony> is there any way to extract the dts core from a dts hd-ma stream using ffmpeg?
[12:41] <durandal_1707> fatpony: you mean core and not decoded core?
[12:49] <fatpony> durandal_1707: i'm not sure i understand what you mean by "decoded core", i want to get the dts stream from the dts hd-ma track
[13:03] <durandal_1707> fatpony: bitstream or decoded stream?
[13:14] <durandal_1707> anyway in the first case one would need to write a bitstream filter
[13:24] <abuko> http://pastebin.com/UR7E9egZ
[13:27] <klaxa> abuko: did you try using mkvmerge?
[13:27] <abuko> if its a binary tool then i don't think that i would be able to use it on android
[13:27] <klaxa> oh wait...
[13:27] <klaxa> derp, yeah
[13:28] <klaxa> hmm... tough question...
[13:30] <abuko> is it possible to have two h264 streams in mp4 file one after the other?
[13:30] <abuko> with different codec settings obviously
[13:31] <JEEBsv> you can have it correctly'ishly with multiple metadata decoder whatchamacallits and all, but almost nothing supports that :D mp4box and possibly concatenating raw H.264 streams might give you something that some things might play, but that would be completely nonstandard and all that.
[13:34] <abuko> Ok. so different question. As it works with annexb byte stream format then is it possible to encode video (using libav and h264) with annex b format?
[13:34] <abuko> Or do i have to encode it and then change its byte stream format
[13:42] <JEEBsv> abuko: it "works", but it doesn't really work. At least spec-wise afaik
[13:42] <JEEBsv> and depending on the decoder
[13:42] <JEEBsv> and you should be able to output annex b just fine if you need it
[13:43] <JEEBsv> just set -f h264 and then output f.ex. dot-264 or dot-h264 -- whatever you like
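JEEBsv's suggestion as concrete commands; the file names are placeholders:

```shell
# Re-encode to a raw Annex B H.264 elementary stream; -f h264 selects
# the raw muxer, so NAL units are start-code-delimited, not mp4-style.
ffmpeg -i input.mp4 -c:v libx264 -an -f h264 output.264

# Or remux an existing H.264 track without re-encoding; the
# h264_mp4toannexb bitstream filter converts mp4-style NALs to Annex B:
ffmpeg -i input.mp4 -c:v copy -an -bsf:v h264_mp4toannexb -f h264 output.264
```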
[13:44] <abuko> yeah that's true for binary, but how can i achieve that in code with libav
[13:45] <JEEBsv> set the format with libavformat and output that?
[14:54] <fatpony> durandal_1707: so i have to write a bitstream filter that extracts the core from the dts hd-ma stream?
[14:57] <durandal_1707> fatpony: rough idea (i have yet to write my first bitstream filter), but imo it should be doable
[14:57] <fatpony> sounds a bit hardcore
[14:57] <fatpony> i have no idea what the dts hd ma specs looks like
[14:58] <retard> only for the hardcore uk ravers
[14:58] <durandal_1707> fatpony: create ticket and cross fingers :)
[14:58] <fatpony> yeah i'll do that
[14:59] <fatpony> in the meantime, is there an existing tool to do that on linux?
[14:59] <durandal_1707> fatpony: and there is a spec available on the web and a parser in lavc, those two are all you should need to implement it
[15:01] <fatpony> so libavcodec already has a parser for dts hdma?
[15:03] <durandal_1707> yes
[15:03] <durandal_1707> hdma is just extension for dts
[15:03] <fatpony> yeah i know that
[15:04] <fatpony> i didn't know there was already some code in lavc to parse it
[15:04] <fatpony> so yeah, i guess it wouldn't be that hard to write the filter, i'd just have to understand how to implement one... that's the part that scares me
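For what it's worth, ffmpeg releases from a few years after this log gained exactly the filter fatpony is asking for, named dca_core; it did not exist at the time of the conversation. The file names below are placeholders:

```shell
# Extract the DTS core from a DTS-HD MA track without re-encoding
# (requires a build whose -bsfs list includes dca_core):
ffmpeg -i movie.mkv -map 0:a:0 -c:a copy -bsf:a dca_core core.dts
```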
[15:05] <dbarrett> Hi Guys, is there anything out for ffmpeg that can do this: http://stackoverflow.com/questions/13455255/chromakey-flv-transparency or how hard would it be to write a filter for that? (New to ffmpeg)
[15:06] <fatpony> durandal_1707: i assume that's the code i have to look at http://git.videolan.org/?p=ffmpeg.git;a=blob;f=libavcodec/dca_parser.c;h=266520f36c0fa71d64d74125079ddeaa530e09cc;hb=HEAD
[15:08] <durandal_1707> fatpony: yes, it describes how the bitstream looks, but you need to look at the decoder too, to find out the relation between extensions and core
[18:10] <StFS> Hi. Stupid question but where does the avcodec stuff come from? I'm having dependency problems with it in Ubuntu and I don't know where I should file a bug report
[18:10] <StFS> ugh... nevermind
[18:10] <StFS> found it
[22:47] <Kapsel> i'd like to do some analysis/quality assurance on video files that have been encoded with applications such as ffmpeg, i.e. if a frame from a framestack was unable to get read correctly. so, if there's big variations in colors etc., i'd like some sort of notification. any suggestions on how this could be accomplished?
[22:48] <JEEB> SSIM comparison or something
[22:51] <llogan> --enable-basement-of-orphan-children
[22:52] <llogan> oops. that requires --enable-gpl.
[22:55] <JEEB> llogan, well truth be told stuff like he noted should in some ways be possible to find out, with SSIM (and PSNR, but PSNR loves blurring, which might or might not be wished for) metrics. Of course it's far from perfect and you would have to have the source around :)
[22:55] <Kapsel> https://github.com/jterrace/pyssim seems interesting, thank you JEEB.
[22:56] <JEEB> that said, libx264 f.ex. can output the whole SSIM of the encode compared to the input frames, of course that would not account for frames that couldn't be read at first but could be read later...
[22:56] <JEEB> no idea how you could end up with that tho
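Kapsel's check can be scripted with the ssim (or psnr) filter found in later ffmpeg builds; the file names here are placeholders, and the source must be available for comparison, as JEEB notes:

```shell
# Compare an encode against its source frame by frame and log per-frame
# SSIM scores; sudden drops in the log flag damaged or misread frames.
ffmpeg -i encoded.mp4 -i source.mp4 \
       -lavfi "[0:v][1:v]ssim=stats_file=ssim.log" -f null -
```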
[22:59] <Kapsel> i did some fairly cool "scene change detector" using x264 and the stats file about a year ago
[00:00] --- Tue Nov 20 2012


More information about the Ffmpeg-devel-irc mailing list