[Ffmpeg-devel-irc] ffmpeg.log.20131218
burek
burek021 at gmail.com
Thu Dec 19 02:05:01 CET 2013
[00:20] <Overflip> anyone?
[00:21] <bencc> what video formats are lossless?
[00:21] <bencc> webm? h.264?
[00:22] <Overflip> I only have libx264.a
[00:22] <Overflip> not libx264.so
[00:22] <Overflip> hmm
[00:22] <Overflip> sorry I'm not the greatest with linux
[00:24] <Vandalite> one second, was waiting for that.
[00:25] <Vandalite> http://pastebin.com/pjiA209j
[00:25] <Vandalite> shows both versions of the same command, run as different users on the same server.
[00:25] <llogan> bencc: http://superuser.com/a/347434/110524
[00:26] Action: llogan failed to paste URL correctly...
[00:26] <llogan> ...i mean your url, Vandalite...anyway i'll look at it
[00:28] <llogan> Vandalite: unrelated, but "using cpu capabilities: none!" is not good
[00:28] <Vandalite> that's intentional
[00:28] <llogan> why?
[00:29] <Vandalite> The server doesn't have YASM
[00:29] <llogan> install it
[00:29] <Vandalite> can't. not my authority to do so. That's not the problem here though
[00:29] <llogan> one step at a time
[00:30] <Vandalite> you'll note the 'root' version gets past that without problems.
[00:30] <Overflip> llogan if you could give me a hand after that would be great =)
[00:30] <fonso1> hey there, I have installed ffmpeg on an Amazon EC2 instance following a CentOS tutorial, but now if I run the command: whereis ffmpeg it returns nothing, although running ffmpeg itself works. Thanks
[00:30] <llogan> Vandalite: why are you not using the presets? there is no reason to attempt to use a million encoding options
[00:31] <Vandalite> it is using a preset... (ipod320)
[00:31] <llogan> that's not a real preset
[00:31] <llogan> https://trac.ffmpeg.org/wiki/x264EncodingGuide
[00:31] <llogan> anyway, start simple. does it work with: ffmpeg -i input -vcodec libx264 -an output.mp4
[00:31] <Vandalite> but actually to answer your question, except for the '-y -v verbose' at the end, this is exactly the output the script for this user generated.
[00:31] <llogan> the script is bad
[00:32] <Vandalite> well let's see if the simple one works
[00:34] <Vandalite> same problem.
[00:34] <Vandalite> as the user, failure.
[00:34] <Vandalite> as root: success.
[00:35] <Vandalite> i even tried putting 'strace' on it to see if it was running into filesystem errors.
[00:35] <Vandalite> ... nope, except for a few attempts at locating the ffpreset file (which it eventually found)
[00:36] <Vandalite> want me to update the pastebin with the new outputs?
[00:37] <llogan> ok
[00:38] <Overflip> I'm so lost why I'm getting ERROR: libx264 not found
[00:39] <Overflip> I just installed it from source
[00:39] <Overflip> why cant it find it?
[00:39] <llogan> Overflip: see "man whereis": whereis has a hard-coded path, so it may not always find what you're looking for.
[00:39] <Vandalite> llogan: http://pastebin.com/60N4gT3r
[00:40] <Overflip> whereis libx264 gives me libx264: /usr/local/lib/libx264.a
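llogan's caveat about whereis can be demonstrated without ffmpeg at all: whereis searches a fixed directory list, while `command -v` resolves through PATH (a minimal sketch, using `sh` as the lookup target):

```shell
# "whereis" consults a hard-coded set of directories, so a binary in an
# unusual prefix (e.g. a user-local install) may not be reported.
# "command -v" resolves the name through PATH, the way the shell would.
command -v sh                     # prints the path the shell would run
whereis sh 2>/dev/null || true    # may miss binaries outside its fixed list
```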
[00:41] <llogan> Overflip: --extra-libs="-ldl"
[00:41] <Vandalite> whups, user-version of the output didn't have '-v verbose'
[00:42] <Overflip> Thank you llogan
[00:42] <Overflip> works
[00:42] <Overflip> what does --extra-libs="-ldl" mean?
[00:42] <llogan> Vandalite: options don't go after the last output. global options are the first options
[00:42] <Vandalite> llogan: heh, that's funny, cause it totally takes them at the end too.
[00:42] <Vandalite> -v debug works fine there... and so does -y
[00:44] <llogan> options may work fine, but you should follow the documentation so things will work as expected
[00:45] <Vandalite> same output putting -v and -y at the beginning didn't change the result.
[00:45] <llogan> i'm just pointing it out
[00:46] <llogan> does the verbose output show anything useful?
[00:48] <Vandalite> It added exactly one line... doesn't seem to be anything but info about the x264 encoder settings: "[graph 0 input from stream 0:0 @ 0x8c37e0] w:1920 h:1080 pixfmt:yuv420p tb:1/2997 fr:2997/100 sar:0/1 sws_param:flags=2"
[00:49] <llogan> i don't know why it's not working for you. you could just download a recent static binary and use it instead
[00:50] <llogan> http://ffmpeg.org/download.html#LinuxBuilds
[00:50] <Vandalite> the odd thing is, in verbose mode, the libx264 parts of the output have two lines that do not appear when running as the user.
[00:51] <rager> howdy
[00:51] <Vandalite> right after the 'using cpu capabilities: none' line are two more lines that *do not* show up on user mode.
[00:52] <rager> I'm working on a java wrapper for ffmpeg, and I've got this last error while building: "error: invalid application of 'sizeof' to incomplete type 'AVDictionary'"
[00:52] <fonso1> Hey, ffmpeg is telling me: permission denied. I set 777 on the dir and all the files where ffmpeg is installed...
[00:53] <llogan> Vandalite: are you encoding for iOS?
[00:54] <Vandalite> well, the profile originally was baseline 1.3, but that changed when we started using your 'simplified' ffmpeg command
[00:54] <Vandalite> now it says profile high level 4.0
[00:54] <Vandalite> but the point is, that 'profile' line doesn't show up at all when run as the user.
[00:54] <llogan> the simple command was to rule out that any options or whatever were causing the issue
[00:55] <llogan> i was going to give you a non-ancient command but i don't know why you're encoding in the first place
[00:56] <Vandalite> the original invocation I had was how clipshare was configured.
[00:56] <llogan> so that means it's right?
[00:56] <llogan> this is #ffmpeg.
[00:57] <Vandalite> i ruled out clipshare as the cause when I isolated this as a problem running ffmpeg as the user, and not root.
[00:57] <Vandalite> i'm trying to figure out what the difference is. every file accessed by ffmpeg during the conversion is accessible by both root, and the user.
[00:58] <llogan> did you compile it?
[00:58] <Vandalite> yes, this is compiled from source.
[00:58] <rager> are you invoking ffmpeg with just "ffmpeg" or the full path to the binary?
[00:58] <Vandalite> I even tried updating x264 with the latest stable snapshot, no help.
[00:58] <Vandalite> full path.
[00:58] <llogan> https://trac.ffmpeg.org/wiki/CentosCompilationGuide
[00:59] <rager> oh.. ffmpeg on centos? that's not fun.
[01:02] <Samus_Aran> is there any way to add silence to a video that doesn't produce corrupted audio packets?
[01:03] <Samus_Aran> I'm trying with the -f lavfi -i aevalsrc=0 method, but if I try to use the resulting file with ffmpeg, it has garbage audio (mostly white noise)
[01:03] <Samus_Aran> such as re-encoding it
[01:05] <llogan> i can't duplicate the issue
[01:05] <Samus_Aran> I'm looking for a way to add audio to a video created with images that has proper headers
[01:05] <Samus_Aran> so I can then use the concat filter to put it with other video clips
[01:05] <Samus_Aran> but if I try to do anything at all with the resulting file, it turns the proper silence into white noise
[01:05] <llogan> you didn't provide enough information to duplicate the issue. using aevalsrc=0 works for me.
[01:06] <Samus_Aran> it plays fine in mplayer or whatever, silence
[01:06] <Samus_Aran> but ffmpeg can't re-encode it
[01:06] <llogan> so show your commands and the console outputs so we can try to duplicate the issue
[01:07] <llogan> you've been here enough to know this
[01:07] <rager> Samus_Aran: you could just remove the audio track?
[01:07] <rager> passthru the video stream and don't use the audio
[01:08] <Samus_Aran> rager: some of the clips have audio, some don't. I thought when using concat it needed to be the same on all of them?
[01:08] <Samus_Aran> llogan: I don't have working commands, I've been trying to work out the commands for hours
[01:08] <llogan> i meant for you to show your commands that *are not* working
[01:08] <rager> sorry, didn't read enough
[01:09] <llogan> i don't understand how you got white noise from aevalsrc
[01:09] <llogan> *aevalsrc=0
[01:09] <Samus_Aran> llogan: I didn't. I got white noise by re-encoding the clip that was produced using aevalsrc
[01:09] <rager> llogan: I've got some issues with some code I'm compiling against the libs: http://hastebin.com/biboruhahe.coffee
[01:09] <llogan> Samus_Aran: so show these commands.
[01:09] <llogan> jesus
[01:10] <rager> I'm not really exactly sure what the error message means, but it seems like I'm misusing AVDictionary
[01:10] <llogan> rager: sorry, i don't know.
[01:10] <rager> no worries
[01:11] <rager> I wish I knew C properly
[01:11] <llogan> is jjmpeg something you're making?
[01:11] <llogan> i mean is it something you wrote
[01:12] <rager> naw, it's a project I found that wraps ffmpeg in java
[01:12] <rager> I'm trying to make it work with ffmpeg 2.x rather than 1.x
[01:13] <rager> the original author seems to have gone to lengths to avoid using AVDictionary, though, and I'm not really sure how I should go about this
[01:13] <llogan> we don't support 3rd party stuff here
[01:13] <rager> I'm not trying to get support for 3rd-party stuff
[01:13] <rager> I'm trying to be the 3rd-party stuff.
[01:14] <rager> AVDictionary is not 3rd-party.
[01:17] <llogan> obviously i was referring to jjmpeg
[01:23] <Samus_Aran> llogan: http://pastebin.com/raw.php?i=2dNrwH07
[01:24] <Samus_Aran> I had to use -frames:v 356 because -shortest was non-functional
[01:24] <Samus_Aran> but I may have done something wrong. anyhow, it was 356 frames so that worked.
[01:24] <llogan> thanks. can you also include: ffmpeg -i out.mkv -i Public_Domain_Dedication_With_Audio.mkv
[01:25] <Zeranoe> Is there any FFmpeg command that strips the last few frames off of the end of a video?
[01:25] <llogan> Samus_Aran: although i see that the silent audio is pcm_f64le and the other is pcm_s16le
[01:28] <llogan> oh...i read that wrong.
[01:29] <Samus_Aran> http://pastebin.com/raw.php?i=ddPEhr8Y
[01:29] <Samus_Aran> it's at the end
[01:30] <Samus_Aran> Zeranoe: there are various methods for this. if you know the number of frames, you can use -frames:v with a re-encode or stream copy.
[01:30] <Samus_Aran> Zeranoe: stream copying is not exact on most codecs, though, due to keyframes or whatever. so you may need to re-encode to get exact
[01:31] <llogan> Samus_Aran: there is a difference in audio rate though: 48000 vs 44100
[01:32] <llogan> oh, and one is stereo and one is mono
[01:32] <Samus_Aran> shouldn't that automatically be handled? I mean, I am re-encoding to pcm_s16le in that example
[01:32] <Zeranoe> Samus_Aran: How do I get the number of frames? Do I have to do a pipe with encoding to get it like this example http://stackoverflow.com/questions/2017843/fetch-frame-count-with-ffmpeg or do you know another way?
[01:32] <Samus_Aran> but I will enforce it to the correct rates and try again
[01:35] <llogan> and -shortest works for me for the aevalsrc command
[01:35] <Samus_Aran> Zeranoe: I'm not sure what the proper way is, personally I would just dump tiny images to get the file count. you can probe the number of frames, but this isn't always accurate
[01:36] <Zeranoe> Samus_Aran: On Windows here... Hopefully cmd/batch can provide what I need. Thanks anyway, I'll figure it out
[01:38] <Samus_Aran> Zeranoe: I was just suggesting an ffmpeg command, nothing specific to Linux
[01:38] <Zeranoe> Samus_Aran: Well, you would still need to count the # of files if you wanted to automate it
[01:38] <Samus_Aran> it spits out numbered frames
[01:38] <Samus_Aran> so just look at the last file
[01:39] <Samus_Aran> i.e. ffmpeg -i foo.mp4 -vf scale=160:-1 -an "frame-%05d.bmp"
[01:41] <Zeranoe> Samus_Aran: That would work. But you still need to find the last file number
[01:41] <Vandalite> not sure if this is gonna do any good, but reconfiguring ffmpeg and x264 to use yasm.
[01:41] <Zeranoe> then extract the number, and feed it as frames
[01:43] <Samus_Aran> Zeranoe: sorry, I didn't get that you were trying to script this
[01:44] <Samus_Aran> it's always removing a fixed number of frames?
[01:44] <Samus_Aran> if you dump all the files, in a Batch script you can do a for loop on them with a variable counting up by +1
[01:44] <Zeranoe> Samus_Aran: I just want to strip the last few frames from the end. It makes sense I need a frame count, and on Bash I would have no issues scripting. On Windows it's not as fun
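The last-file lookup described above can be sketched in POSIX shell (the frame-%05d.bmp naming and the directory argument are assumptions matching the earlier dump command; on Windows the same idea needs Batch, as in Zeranoe's later paste):

```shell
# Given a directory of frames dumped as frame-%05d.bmp, print the highest
# frame number, i.e. the count to feed back to -frames:v.
last_frame() {
    # zero-padded names sort correctly with a plain lexical sort
    last=$(printf '%s\n' "$1"/frame-*.bmp | sort | tail -n 1)
    n=${last##*/frame-}                        # drop dir and "frame-" prefix
    n=${n%.bmp}                                # drop the extension
    printf '%s\n' "$n" | sed 's/^0*\(.\)/\1/'  # strip leading zeros
}
```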
[01:46] <Vandalite> llogan: the compile guide you linked me to was for a local non-system version of ffmpeg. I need to fix the *system* one.
[01:47] <llogan> then move the binary to wherever your PATH points to
[01:47] <Samus_Aran> llogan: do I need to use filters to convert the mono to stereo, or does aevalsrc have some way to specify that?
[01:47] <Vandalite> the build i'm running right now already writes to the system version
[01:47] <llogan> or change the prefix to wherever you want
[01:48] <Vandalite> i'm just updating it to include yasm
[01:48] <llogan> Samus_Aran: aevalsrc="0|0"
[01:49] <Samus_Aran> I just tried -ac 2 and it seems to have worked, though I don't know what it did
[01:49] <Samus_Aran> llogan: thanks
[01:49] <llogan> -ac works too. or you could use amerge+pan
[01:50] <Samus_Aran> does -ac 2 on a mono file duplicate that channel to the others?
[01:50] <llogan> not amerge+pan..i was thinking 4 channels to 2
[01:51] <Samus_Aran> *to the other
[01:51] <llogan> i don't know
[01:54] <Samus_Aran> is there some way to choose the order of the streams in an output file? some of my files have audio first, some video first, which makes mkvmerge a huge pain
[01:56] <llogan> Samus_Aran: switch your -maps so video is listed first
[01:56] <llogan> video, then audio, then subtitles
[01:57] <Vandalite> no luck. using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2, still fails with no error.
[01:59] <Samus_Aran> llogan: I am still getting white noise when the formats match
[01:59] <Samus_Aran> with the PCM errors
[01:59] <llogan> can you provide the files?
[02:00] <llogan> the inpus
[02:00] <llogan> *inputs
[02:00] <Samus_Aran> what does the default in this mean:
[02:00] <Samus_Aran> Stream #0:0: Audio: pcm_s16le, 48000 Hz, stereo, s16, 1536 kb/s (default)
[02:00] <Samus_Aran> the other audio stream doesn't have that
[02:01] <Samus_Aran> with the ffmpeg -i one -i two
[02:01] <rager> found my issue
[02:02] <rager> in libavutil/struct.h, AVDictionary does not have its implementation shown
[02:02] <rager> so I had to just add one to my project so that sizeof(AVDictionary) would work
[02:02] <Samus_Aran> llogan: can I PM you the links?
[02:02] <llogan> sure
[02:06] <Samus_Aran> just a moment, need to go poke at Apache.
[02:08] <Zeranoe> Samus_Aran: My solution: http://pastebin.com/hFhcRw8i
[02:10] <Samus_Aran> Zeranoe: nice. Batch is a lot uglier than Bash. :p
[02:10] <Zeranoe> Samus_Aran: Tell me about it
[02:14] <Vandalite> llogan: want to hear something odd? If I change the encoder from libx264 to another codec (I picked libxvid) it works.... even in user mode
[02:19] <llogan> Samus_Aran: i ran out of time to look at it now. i have to leave, but you can try asking on ffmpeg-user mailing list
[02:20] <Samus_Aran> llogan: okay, thanks anyhow.
[02:27] <Samus_Aran> llogan: I still don't have it working with ffmpeg concat filter, but I do have it working with mkvmerge now.
[02:27] <Samus_Aran> yay.
[02:28] <Samus_Aran> that's good enough for me. I just need a finished product that works. :)
[03:18] <Zeranoe> Does ffmpeg ever print multiple lines of "frame= " when encoding? If so, how could I reproduce that
[03:41] <DeadSix27> it actually does print multiple
[03:41] <DeadSix27> wait, zeranoe?
[03:41] <DeadSix27> just a coincidence?
[03:47] <relaxed> Zeranoe: for i in {01..10}; do printf "%s\r" "frame $i";sleep 1;done
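relaxed's one-liner shows the mechanism: ffmpeg rewrites a single status line using carriage returns, so multiple `frame=` lines usually only appear when the log is captured. A captured log can be split back into one line per update (a minimal sketch; the sample text is invented):

```shell
# Simulate a status line rewritten in place with \r, then recover each
# update as its own line by translating carriage returns to newlines.
progress=$(printf 'frame=   10\rframe=   20\rframe=   30')
printf '%s\n' "$progress" | tr '\r' '\n'
```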
[04:51] <Samus_Aran> where on the command line is -shortest supposed to appear?
[04:52] <Samus_Aran> I can't get it to work. I have a 13.00 second clip of images I am encoding to AVC, generating silence with aevalsrc. it stops at 39 minutes or more
[04:52] <Samus_Aran> using -frames:v doesn't help, the audio keeps going
[05:49] <relaxed> Samus_Aran: after the input. show me your command
[05:54] <Samus_Aran> ffmpeg -r 25 -i .rvs_temp-23737/frame-%03d.tiff -f lavfi -i aevalsrc=0|0 -frames:v 324 -shortest -c:a pcm_s16le -ar 48000 -ac 2 -strict experimental -codec:v libx264 -pix_fmt yuv420p -preset veryfast -crf 18 -sn Public_Domain_Dedication.mkv
[05:55] <Samus_Aran> this produces a 40 minute video
[05:55] <Samus_Aran> but the images stop after 324 frames
[05:57] <Samus_Aran> presumably the audio keeps playing for the rest, not sure, it's silent, heh
[05:57] <relaxed> use either -frames:v or -shortest
[05:59] <Samus_Aran> I'm pretty sure I've tried with both, with neither, and with each one by itself, but I'll try again
[06:00] <Samus_Aran> with only -shortest it is still running away
[06:00] <Samus_Aran> 19 minutes encoded so far
[06:01] <relaxed> it should stop when the frames run out.
[06:02] <Samus_Aran> frame= 322 fps=2.5 q=23.0 size= 2978kB time=00:38:26.69 bitrate= 10.6k
[06:02] <Samus_Aran> it is stuck there
[06:02] <Samus_Aran> hasn't increased the time for a couple minutes
[06:02] <relaxed> why aevalsrc=0|0 and not aevalsrc=0 ?
[06:03] <Samus_Aran> llogan suggested using that for stereo. I also tried -ac which worked, forgot to remove one of them
[06:03] <Samus_Aran> not sure if they are identical or not
[06:04] <Samus_Aran> Duration: 00:38:26.77, start: 0.000000, bitrate: 1540 kb/s
[06:04] <Samus_Aran> final file
[06:05] <Samus_Aran> there are 324 TIFF images on the input. that output was stuck at 38 minutes with 322 frames
[06:05] <Samus_Aran> it never got to the final frames
[06:11] <Samus_Aran> my only guess is that converting the audio rate is altering the syncing with the video frames.
[06:12] <Samus_Aran> I'm trying to generate 48KHz/16bit/stereo
[06:12] <Samus_Aran> I guess I will try generating the silence with sox and muxing it
[06:15] <relaxed> you just need silence?
[06:15] <Samus_Aran> I tried removing the -ac and -ar, but it still ran away
[06:16] <Samus_Aran> I'm trying to add an empty audio channel to the files I generate from images
[06:16] <relaxed> tried using /dev/zero for the audio?
[06:18] <Samus_Aran> I'd prefer to not use something Linux-specific, as it's part of a set of scripts
[06:18] <Samus_Aran> sox is fine, if it works remuxing
[06:18] <Samus_Aran> I've had nothing but problems for weeks with FFmpeg.
[06:18] <relaxed> scripts for windows?
[06:19] <Samus_Aran> I was thinking other Unix systems, but Windows as well, sure. Bash and common tools are avail.
[06:19] <Samus_Aran> I'd be surprised if sox wasn't available for Win
[06:20] <Samus_Aran> also, the file produced from 324 frames at 25 FPS is 13.04 seconds, according to ffprobe
[06:20] <Samus_Aran> but the input of the frames says 12.96
[06:20] <Samus_Aran> 2 frames behind.
[06:21] <Samus_Aran> not sure what that's about, but random frame shifts seem to be standard with all ffmpeg cutting commands
[06:21] <Samus_Aran> including the frame-accurate ones
[06:25] <relaxed> if you know how many frames there are then you're done
[06:25] <Samus_Aran> relaxed: it doesn't stop
[06:25] <Samus_Aran> or if you're talking about the other, the time of the video is longer than the actual frames allow for
[06:25] <relaxed> does it stop without -i aevalsrc=0|0 ?
[06:26] <Samus_Aran> without audio, it stops fine
[06:26] <Samus_Aran> other than being 2 frames longer on time
[06:26] <relaxed> you've tried with git master?
[06:26] <Samus_Aran> I'm using git from when I started messing around, a couple weeks ago I think
[06:27] <Samus_Aran> ah, it's more than that, a couple months behind
[06:27] <relaxed> update to latest and see if it's fixed. if not, file a bug report
[06:32] <Samus_Aran> seems to be working fine using sox to generate silence and remuxing.
[06:33] <relaxed> still needs to be fixed
[06:48] <relaxed> Samus_Aran: did you quote aevalsrc=0|0?
[06:48] <relaxed> it works fine here
[06:48] <relaxed> -i 'aevalsrc=0|0'
[07:09] <Samus_Aran> relaxed: aevalsrc=0|0 was quoted, and it worked fine
[07:09] <Samus_Aran> the problem was it not stopping output for 40+ minutes
[07:10] <Samus_Aran> 0|0 without -ac 2 did stereo
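The quoting question matters because an unquoted `|` is shell pipeline syntax, not part of the filter argument; a sketch of the difference, using printf as a stand-in for the ffmpeg invocation:

```shell
# Unquoted: the shell splits the argument at '|', so the program only sees
# "aevalsrc=0" and the rest becomes a (nonexistent) command named "0".
sh -c 'printf "%s\n" aevalsrc=0|0' 2>/dev/null || true

# Quoted: the whole filter string arrives as one argument.
sh -c 'printf "%s\n" "aevalsrc=0|0"'
```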
[07:20] <relaxed> it stop here using ffmpeg version N-45667-ge10fccf
[07:20] <relaxed> stops*
[07:27] <lucaswang> hi, i have a question. what is the difference between AVDISCARD_BIDIR and AVDISCARD_NONREF?
[07:27] <relaxed> Samus_Aran: does this stop? ffmpeg -f rawvideo -pix_fmt yuv420p -framerate 15 -video_size 320x240 -i /dev/zero -f lavfi -i 'aevalsrc=0|0' -map 0 -map 1 -frames:v 200 -c:a:1 pcm_s16le -ar 48000 -ac 2 -y output.avi
[07:30] <Samus_Aran> relaxed: will try, moment.
[07:32] <Samus_Aran> frame= 200 fps=112 q=2.0 Lsize= 261kB time=00:00:13.33
[07:32] <Samus_Aran> stops quickly
[07:32] <Samus_Aran> Duration: 00:00:13.40, start: 0.000000, bitrate: 159 kb/s
[07:33] <relaxed> odd... change the encoding part to match your workflow
[07:35] <Samus_Aran> I'm trying to generate an .mkv with silent audio and avc from images.
[07:35] <Samus_Aran> but when I used -frames:v it didn't help because it wasn't getting to the end frames
[07:37] <Samus_Aran> my next issue is that the .mkv file I create from the images doesn't return a duration with ffprobe
[07:37] <Samus_Aran> so I have no way to automatically generate the silence with sox.
[07:37] <relaxed> you have -sn in your command
[07:37] <Samus_Aran> it says N/A in the output
[07:37] <relaxed> pastebin your command and all output
[07:37] <Samus_Aran> to remove subtitles
[07:44] <Samus_Aran> I just assembled the paste, then I noticed that the issue is only in ffprobe and not in ffmpeg itself
[07:44] <Samus_Aran> ffprobe sees the duration as N/A, ffmpeg sees it as 13.04
[07:45] <Samus_Aran> I was using ffprobe to grab the duration. works on most files.
[07:45] <Samus_Aran> I guess I'll switch to parsing ffmpeg output for the duration.
[07:48] <Samus_Aran> correction: ffprobe video.mkv also returns 13.04, but when I use: ffprobe -sexagesimal -show_streams -print_format compact, it changes to N/A on this file generated by ffmpeg.
[07:49] <Samus_Aran> works fine on my other random video files I have laying around
[14:27] <bratao1> Help ! I encoded in ffmpeg a raw opus audio
[14:28] <bratao1> Help ! I encoded raw opus audio in ffmpeg (ran avcodec_encode_audio2 and saved the packets directly). It does work with other formats, but I can't get anything to read this file. How can I read this file back in ffmpeg?
[15:09] <grkblood13> is there a way to transcode with libspeex without wrapping it in an ogg container?
[15:09] <grkblood13> I want the speex output without a container
[15:12] <Samus_Aran> grkblood13: either specify the output format, or use a file extension for the format
[15:13] <grkblood13> well, speex isn't listed in -formats and I'm not outputting to a file. I'm piping stdout via pipe:1 to another process.
[15:14] <Samus_Aran> what's wrong with having a header?
[15:16] <grkblood13> Well, it's additional overhead and the wrapper+header just use more bandwidth. Right now I'm stripping that stuff out so I wanted to see if I could avoid having to do that.
[15:18] <grkblood13> If I can avoid cutting out the audio data from the container with additional code I would like to do that
[15:19] <Mavrik> grkblood13, it seems that currently ffmpeg just doesn't support speex output without muxing it into something
[15:20] <grkblood13> real quick, when people say muxing what do they mean? I'm not fluent in the audio lingo.
[15:21] <Compn> grkblood : you know avi, mov, mp4, mkv right ?
[15:21] <grkblood13> and thanks Mavrik, I guess I'll continue cutting out the audio data myself.
[15:21] <Compn> those 'containers' can contain 'codecs'
[15:21] <grkblood13> right, i got that
[15:21] <Compn> you can output speex with -f rawaudio probably
[15:21] <Compn> if you want raw speex
[15:21] <Compn> or -f raw something , better check manual
[15:22] <Compn> so muxing is when you put a codec into a container
[15:22] <grkblood13> yea, I grepped specifically for "raw" and found nothing about speex
[15:22] <Compn> remuxing is moving a codec from one container (mp4) to another (mkv) etc
[15:22] <Compn> its a generic raw muxer
[15:22] <Compn> so it supports all codecs...
[15:23] <grkblood13> -f rawaudio fails for me
[15:23] <Compn> whats your command line
[15:23] <grkblood13> with speex atleast
[15:23] <grkblood13> also, rawaudio is not listed in my ffmpeg's list of supported formats. rawvideo is, though.
[15:23] <Compn> ffmpeg -formats for a list
[15:24] <grkblood13> ya, thats what I did
[15:24] <Samus_Aran> can you just do -codec:a copy -vn file.speex ?
[15:24] <Samus_Aran> grkblood13: and which raw ones do you have listed?
[15:24] <grkblood13> Samus_Aran: its pretty lengthy. you want me to dump it in here?
[15:25] <grkblood13> https://gist.github.com/grkblood13/72122a9fd40b2bff6680
[15:26] <grkblood13> ^ supported raw formats
[15:26] <Compn> yeah dont see rawaudio
[15:26] <Compn> weird
[15:26] <Samus_Aran> on mine I see rawvideo, but not rawaudio
[15:26] <Compn> could use mplayer -dumpaudio to dump it
[15:27] <Compn> after its in .ogm or whatever
[15:27] <Compn> :)
[15:27] <grkblood13> I don't want to have to bring along mplayer. That's worse than what I'm doing right now in my opinion.
[15:28] <grkblood13> It's not difficult to cut out the speex data from the ogg container. I just don't want to do it if I don't have to as it seems very unnecessary.
[15:28] <Samus_Aran> grkblood13: did you try the copy line I suggested? might want to try .raw in case ffmpeg handles that.
[15:29] <grkblood13> gotta do a man on -vn first. I've never used that
[15:29] <eckesicle> Hi I'm returning today with the same problem as yesterday. avprobe_input_format2 returns null and avformat_open_file returns -1. Here's the relevant gist: https://gist.github.com/cfeckardt/0fc7f5e0c2e4cafd144c
[15:30] <grkblood13> -codec:a is the same as -acodec correct?
[15:33] <grkblood13> if I you copy I'll no longer be encoding with speex
[15:33] <grkblood13> s/you/use
[15:41] <Samus_Aran> <grkblood13> -codec:a is the same as -acodec correct? << yes
[15:42] <Samus_Aran> -vn means no video
[15:42] <Samus_Aran> -an is no audio, -sn for no subtitles
[15:43] <Samus_Aran> grkblood13: sorry, didn't realise you were encoding the speex with ffmpeg, thought it was just passing through your pipe
[16:32] <eckesicle> Okay so I have pinpointed my problem
[16:34] <eckesicle> avformat_open_input returns AVERROR_INVALIDDATA (in av_probe_input_buffer, line 299 in libavformat/utils.c). I'm trying to load an mp4 file. What could be the problem?
[17:14] <k-s> is there an easy (or recommended) way to use an OpenGL FB as source for encoding? We have a compositor in EFL/Enlightenment and we'd like to generate screencasts from its output
[17:15] <k-s> we could have the x11 compositor to output in sw (rgb-pixels in RAM) but it would be much slower
[19:02] <Samus_Aran> night
[19:23] <pyBlob> Hello there!
[19:24] <pyBlob> I want to use ffmpeg and java to capture images from my webcam, this worked nice by using the system-exec-stuff to call ffmpeg from java, but there are some performance drawbacks
[19:24] <pyBlob> do you know any good ffmpeg bindings for java?
[19:25] <pyBlob> currently I've found jjmpeg and xuggle ... and both should work for my problem
[20:20] <lkiesow> pyBlob: What about just starting a separate process (execute the ffmpeg binary passing an appropriate commandline)
[20:21] <pyBlob> that's what I've already done
[20:21] <pyBlob> and it seems that this is the only thing that really works xD
[20:22] <lkiesow> And what is the performance drawback with that?
[20:22] <lkiesow> I'm an Opencast Matterhorn developer and that is exactly what we do, too :)
[20:25] <stieno> HI, I have an mp4 container containing 2 h264 video tracks. I'll want to stream those two streams at the same time via rtp to 2 different clients. A requirement is that they are played in sync at the client side. How can this be achieved? I tried with tee to split into 2 different rtp streams, but this failed.
[20:31] <stieno> please see http://pastebin.com/GWVtbXij for the error message I get.
[21:02] <pyBlob> ... I give up, using ffmpeg libraries with java on win64 doesn't seem to work, so I'll continue to just execute the ffmpeg binary
[21:04] <stieno> nobody can help me how to sync two video rtp streams on different clients?
[21:09] <pyBlob> is this a stereo-video-stream?
[21:09] <pyBlob> "[rtp @ 0x285b380] Only one stream supported in the RTP muxer"
[21:10] <stieno> no, actually I cut a video into 2 pieces, and want to display each piece on a different client. But they need to be in sync of course.
[21:12] <pyBlob> ok ... as you see, there are 2 streams in one mp4-file, and as the error-messages says, you can't send that to the clients using the RTP muxer
[21:12] <pyBlob> but you could try merging both streams into one bigger video: e.g. side-by-side
[21:13] <pyBlob> so there is only one stream with size 1024x576 instead of 2 streams
[21:14] <stieno> yes, the initial video file was one stream (1024x576). But I've split this into two pieces to send to 2 different clients. I do not want on the client side to decode the full resolution (1024x576), but only half of it (512x576).
[21:15] <pyBlob> or do you want to show piece 1 on client 1, and piece 2 on client 2?
[21:15] <stieno> yes, exactly
[21:16] <pyBlob> you used map, so you're halfway there ^^
[21:17] <pyBlob> try: ffmpeg -re -i output.mp4 -f rtp -map 0:1 -c copy rtp://localhost:5554 -f rtp -map 0:0 rtp://localhost:5556
[21:17] <pyBlob> oops, there's another -c copy missing for the second output
[21:18] <stieno> great! sending out the stream is working now! Are they sent out in sync?
[21:19] <pyBlob> they *should* be sent out in sync
[21:19] <stieno> Of do I need to modify timestamps in the rtp stream?
[21:20] <stieno> Ok. At the receiving side, is it obliged to use the sdp file, or is there another method not using the sdp file?
[21:21] <pyBlob> don't know about that :/
[21:32] <pyBlob> I'm using ffplay to view a video, why does the sound get muted when the window isn't focused?
[21:36] <stieno> pyBlob, On a single pc it seems indeed to be in sync. I'll try later on 2 different pc's to see if they are still in sync. Thanks for the great help!
[21:42] <pzich> hmm, I'm trying to create a video (to play in HTML5) that has sections to loop until the user interacts to go to the next section. Is there support in MP4 for an index of seek positions, to speed up jumping to these predefined times? and is it possible to feed these into ffmpeg?
[21:47] <pyBlob> you might want to split the video into separate files
[21:49] <pzich> pyBlob: hmm, we were hoping this would cut down on the number of network requests and inherent HTTP overhead
[21:50] <pyBlob> when using videos, the HTTP overhead shouldn't be that big compared to file size
[21:50] <pzich> yeah, there will just be a lot of videos, and there's more latency caused by making a request for each than having one, but it's probably still the better way to go
[21:51] <pyBlob> have you considered making use of browser caching and preloading?
[21:52] <pyBlob> another solution would be to use javascript to detect and set the looping position
[21:52] <pyBlob> (while the video is playing)
[21:53] <pzich> well right now I have a if (video.currentTime >= loopEnd) video.currentTime = loopBegin
[21:53] <pzich> I'm just trying to optimize seeking back to the beginning if possible
[21:56] <pyBlob> perhaps this helps too: http://stackoverflow.com/questions/8616855/how-to-output-fragmented-mp4-with-ffmpeg
[22:03] <pzich> hmm, interesting, I'll see if fragments would help with seeking
[22:07] <Jellicent> Hey guys, I'm aiming to stream (twitch.tv) but I'm having a little sound problem. I'm using alsa and the sound is awfully low. Has anything like this been reported? Couldn't find anything like this.
[22:09] <Jellicent> By the way, I'm not talking about ingame sound and music for example. Microphone works fine.
[22:59] <bencc> is there a difference in client-side (browser) performance between rendering one big video which shows several small videos and rendering separate small videos?
[23:00] <rager> number of threads involved for playing 1 video vs playing 4 videos is probably slightly different
[23:04] <bencc> what about 25 small videos vs 1 big video?
[23:04] <SLAYBoz> does ffmpeg automatically convert all interlaced video to progressive?
[23:38] <pyBlob> I want ffmpeg to output its data to a named pipe on windows, but it says the file doesn't exist:
[23:38] <pyBlob> http://pastebin.com/D1N6uHww
[00:00] --- Thu Dec 19 2013