[Ffmpeg-devel-irc] ffmpeg.log.20160323

burek burek021 at gmail.com
Thu Mar 24 02:05:01 CET 2016


[00:06:02 CET] <nownot> I have an avi file and when I try to convert I get Format avi detected only with low score of 1, misdetection possible! test.avi: Invalid data found when processing input .... thoughts on how to fix this?
[00:10:24 CET] <J_Darnley> What's wrong with that?
[00:10:38 CET] <J_Darnley> You said it was an avi file and ffmpeg thinks it is an avi file.
[00:11:10 CET] <nownot> J_Darnley : it won't convert, I get that error when doing ffmpeg -i test.avi output.mp4
[00:11:23 CET] <livingBEEF> error: No output pad can be associated to link label 'v1'.
[00:11:40 CET] <livingBEEF> Is there any way to get something more informative?
[00:12:26 CET] <nownot> J_Darnley : https://gist.github.com/akatreyt/5fcba513707064090903
[00:13:44 CET] <J_Darnley> Try increasing -probesize, start with 20M and see if it helps
[00:15:29 CET] <livingBEEF> http://sprunge.us/IiAZ
[00:15:42 CET] <nownot> J_Darnley : didn't work :/
[00:16:09 CET] <livingBEEF> that's the command
[00:18:09 CET] <J_Darnley> nownot: can anything play your avi file?
[00:18:32 CET] <J_Darnley> livingBEEF: are you sure?  there is no "v1" there
[00:18:42 CET] <nownot> J_Darnley : no. was hoping that ffmpeg would be able to get the data out :/
[00:19:26 CET] <J_Darnley> If the file has been damaged in some way you might be out of luck.
[00:19:45 CET] <J_Darnley> Where did it come from?
[00:19:54 CET] <livingBEEF> Damn. I'm an idiot. I edited just the debug output
[00:20:01 CET] <livingBEEF> (it's a bash script)
[00:20:44 CET] <nownot> meh, thanks for the help
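The probesize suggestion from this thread, written out as a full command (a sketch; -analyzeduration is a related option added here as an assumption, it was not mentioned in the log):

```shell
# Raise the probe buffers so ffmpeg has more data for format detection.
# 20M is the starting value suggested above; filenames are the asker's.
cmd="ffmpeg -probesize 20M -analyzeduration 20M -i test.avi output.mp4"
echo "$cmd"
```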
[00:36:50 CET] <livingBEEF> I'm trying to cut off the last frame, but I can't find anything...
[00:36:53 CET] <livingBEEF> any ideas?
[00:37:26 CET] <livingBEEF> trim only accepts absolute numbers, and it has no variables as far as I can tell
[00:47:16 CET] <llogan> why do you want to do that?
[00:47:57 CET] <livingBEEF> Timing.
[00:48:18 CET] <livingBEEF> I'll probably just cut the frame from the start...
[00:48:42 CET] <livingBEEF> works well enough
[00:48:55 CET] <llogan> you could try counting the frames with ffprobe then using -frames:v in ffmpeg. probably won't work but worth a try.
[00:49:35 CET] <llogan> ffprobe -v error -count_frames -select_streams v:0 -show_entries stream=nb_read_frames -of default=nokey=1:noprint_wrappers=1 input.foo
[00:49:52 CET] <livingBEEF> The problem with that is that it's in the middle of filter_complex....
[00:50:22 CET] <llogan> that would have been good to know beforehand
[00:50:44 CET] <llogan> one reason why we are so adamant about asking for the ffmpeg command and the complete console output
[00:50:57 CET] <llogan> i was lazy, and so ended up wasting time.
[00:51:01 CET] <livingBEEF> Yeah.
[00:51:05 CET] <livingBEEF> Sorry about that
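The two-step idea from this thread, sketched as a script (the ffprobe call is the one pasted above; input.foo is a placeholder, and the hardcoded frame count stands in for the probe result):

```shell
# Step 1, shown as a comment so this stays runnable without media files:
#   nframes=$(ffprobe -v error -count_frames -select_streams v:0 \
#     -show_entries stream=nb_read_frames \
#     -of default=nokey=1:noprint_wrappers=1 input.foo)
nframes=300                 # stand-in for the ffprobe result
keep=$((nframes - 1))       # encode everything except the last frame
echo "ffmpeg -i input.foo -frames:v $keep output.foo"
```

As noted in the thread, this only works when the whole job is a single input/output; it does not compose with a filter_complex mid-graph.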
[00:56:57 CET] <FiloSottile> I think I am stepping into a bug. I can't use the loop filter in a filtergraph
[00:57:00 CET] <FiloSottile> http://pastebin.com/Sf30pamh
[00:57:15 CET] <FiloSottile> [AVFilterGraph @ 0x7febdc000940] No such filter: 'loop'
[00:57:43 CET] <FiloSottile> while, well, it's supposed to exist: https://ffmpeg.org/ffmpeg-filters.html#loop_002c-aloop
[00:58:22 CET] <FiloSottile> (I know I can workaround this with -loop, but I want the loop to be applied AFTER the scale filter, or performance will be abysmal)
[00:58:34 CET] <llogan> FiloSottile: your ffmpeg is too old. the filter is newer than the 3.0 branch.
[00:59:07 CET] <llogan> general users are recommended to use a build from git master instead of releases which are mainly for distributors
[00:59:24 CET] <FiloSottile> llogan: ah, that makes sense
[00:59:47 CET] <FiloSottile> but 3.0 is the latest I'll find in any package, correct?
[01:00:04 CET] <llogan> what do you mean?
[01:00:42 CET] <FiloSottile> llogan: I meant that 3.0 is the latest "cut" version, right? I'm ok with using master, just checking
[01:01:00 CET] <llogan> 3.0 is the latest release branch
[01:01:17 CET] <llogan> you can download a build from http://www.evermeet.cx/ffmpeg/
[01:01:28 CET] <FiloSottile> so, before the introduction of the loop filter, how was one supposed to do a still background?
[01:01:43 CET] <llogan> ...or use the --HEAD option in homebrew (I think. not an OS X user)
[01:01:58 CET] <FiloSottile> (yeah, running --HEAD right now...)
[01:02:32 CET] <llogan> you use the -loop image file demuxer input option
[01:02:48 CET] <llogan> -loop 1 -i foo.jpg
[01:03:08 CET] <llogan> since you're recompiling consider using libfdk-aac instead of libfaac
[01:03:26 CET] <llogan> and remove --enable-hardcoded-tables
[01:03:46 CET] <FiloSottile> yeah, I got -loop working, but it causes the poor scale filter to take most of the CPU
[01:05:10 CET] <FiloSottile> llogan: thanks for the suggestions, maybe you could check out https://github.com/Homebrew/homebrew/blob/master/Library/Formula/ffmpeg.rb and if you have improvements submit them there?
[01:05:28 CET] <FiloSottile> it would improve a LOT of users' experience
[01:05:37 CET] <FiloSottile> for example, --enable-hardcoded-tables is hardcoded
[01:07:51 CET] <FiloSottile> (for fdk-aac instead I'm offered the option)
[01:08:50 CET] <FiloSottile> why would I want to disable hardcoded-tables, just out of curiosity?
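For reference, the two looping approaches discussed above as sketches. The first needs a build newer than the 3.0 branch, and the loop filter options shown are a guess at a sensible invocation, not taken from the log:

```shell
# loop filter after scale: the image is scaled once, then repeated,
# which avoids re-scaling the full-size image for every output frame
cmd1="ffmpeg -i bg.png -vf scale=1280:720,loop=loop=-1:size=1 out.mp4"
# demuxer-level -loop: works on any build, but feeds the scale filter
# the full-size image on every frame (the performance problem above)
cmd2="ffmpeg -loop 1 -i bg.png -vf scale=1280:720 out.mp4"
echo "$cmd1"
echo "$cmd2"
```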
[01:28:35 CET] <petecout_> Is there a way to specify which channel metadata gets encoded on? I can't find any documentation on that and Flash has an audio only channel for ID3 metadata
[02:37:33 CET] <newbe-user> \help
[02:38:57 CET] <newbe-user> Hi
[02:41:47 CET] <J_Darnley> What help are you looking for?
[03:22:57 CET] <mydarkside> hello! can someone please help me? I'm trying to capture the screen + webcam + microphone + system audio. I can do ALL that; the only problem I have is that the webcam video is too far ahead... any ideas how to match the timestamps, or any other solution? thanks
[03:58:54 CET] <newbe-user> Can I ask for help here?
[04:02:46 CET] <newbe-user> this is my question -> http://stackoverflow.com/questions/36168994/how-can-i-combine-two-mp4-files
[07:03:36 CET] <KuZon> hello
[07:03:43 CET] <KuZon> i need some help regarding ffmpeg
[07:04:26 CET] <KuZon> i want to access each frame of a hevc video and do some calculations on it
[07:04:51 CET] <KuZon> i've been stuck on it for 2 days, please help me
[07:06:14 CET] <relaxed> what have you tried?
[07:09:46 CET] <KuZon> ffmpeg -i input -c:v libx265 -preset medium -x265-params crf=28 -strict experimental -r 50/1 output.mp4
[07:09:59 CET] <KuZon> i used this to extract frames of an 50fps video
[07:10:22 CET] <KuZon> but the frame output is corrupted somehow
[07:10:39 CET] <TD-Linux> um what, that's reencoding, probably to h.264
[07:10:58 CET] <TD-Linux> maybe you want something like a PNG sequence?
[07:11:08 CET] <relaxed> -c:v libx265
[07:11:26 CET] <furq> KuZon: it sounds like you want -c copy -frames:v 1
[07:14:12 CET] <relaxed> KuZon: describe exactly what you want
[07:15:13 CET] <KuZon> sorry, i pasted the wrong command
[07:15:33 CET] <KuZon> ffmpeg -i input -c:v libx265 -preset medium -x265-params crf=28 -strict experimental -r 50/1 filename%03d.jpg
[07:15:56 CET] <furq> yeah that's completely broken
[07:16:06 CET] <furq> get rid of everything between input and filename%03d.jpg
[07:16:17 CET] <KuZon> i want to extract frames of a hevc video and calculate psnr value of two consecutive frames
[07:16:25 CET] <furq> that also isn't extracting frames, that'll reencode to jpeg
[07:16:35 CET] <furq> if you want to calculate psnr difference then i guess png will work
[07:16:51 CET] <furq> there's probably a better way though
[07:18:04 CET] <KuZon> tell me other way
[07:18:16 CET] <furq> if i knew it i wouldn't have said probably
[07:18:31 CET] <KuZon> when i remove libx265 will it re encode it to x265 and give the result
[07:18:37 CET] <KuZon> x264*
[07:18:43 CET] <furq> ??
[07:18:50 CET] <KuZon> when i remove libx265 will it re encode it to x264 and give the result
[07:18:54 CET] <furq> why would it do that
[07:19:55 CET] <KuZon> because when i use ffmpeg on an x265 video without enabling libx265, say for changing framerate... it reencodes into x264 and gives the result
[07:20:13 CET] <furq> it does that if the output format is mp4 because that's the default codec for mp4 if you don't specify
[07:20:22 CET] <furq> it obviously doesn't do that if the output format is png
[07:23:19 CET] <KuZon> ohh
[07:24:00 CET] <furq> if you want to copy the input codec then use -c:v copy
[07:24:34 CET] <furq> s/codec/stream/
[07:32:27 CET] <KuZon> thank you for the help
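What the thread converged on, plus a sketch of the consecutive-frame PSNR idea: feed the same file twice, offset the second copy by one frame, and compare with the psnr filter. The filtergraph is a guess at one way to do it, not something confirmed in the log:

```shell
# plain frame extraction: no encoder options needed, just an image pattern
echo "ffmpeg -i input.mp4 frame%03d.png"
# PSNR between each frame and the next: trim one frame off the front of
# a second copy of the stream, realign timestamps, then compare
graph="[1:v]trim=start_frame=1,setpts=PTS-STARTPTS[b];[0:v][b]psnr=stats_file=psnr.log"
echo "ffmpeg -i input.mp4 -i input.mp4 -filter_complex \"$graph\" -f null -"
```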
[09:32:37 CET] <t4nk360> when I concatenate two files like this: "concat:file1.ts|file2.ts", file2.ts has no audio anymore (file1 never had any). is there a way to prevent this?
[09:33:40 CET] <t4nk360> when I instead use "concat:file2.ts|file1.ts" the audio of file2 is still there. file1 never had audio
[09:50:46 CET] <t4nk360> it states my audio as an mp2
[09:56:35 CET] <wizonesolutions> Hi  I've been trying the patch at https://trac.ffmpeg.org/ticket/4437#comment:21. What I am seeing is that things are better when recording and ffmpeg is the only thing recording, but if I try to record the screen when another application is also using the microphone, then the audio comes out poorly. Is it some sort of contention issue? It's Chrome that
[09:56:36 CET] <wizonesolutions> is also using the microphone over WebRTC.
[13:06:39 CET] <flux> well, if ffmpeg is recording only audio it cannot know if some other app is doing video capture :), so it certainly sounds like a cpu/scheduling issue
[13:08:13 CET] <flux> wizonesolutions, perhaps increasing ffmpeg's priority (how to do it depends on the platform) or adjusting recording buffer sizes (depends on used capture device) would help
[13:09:35 CET] <wizonesolutions> flux: I'm on OS X. Both applications are recording audio on the microphone. What happens is that the audio then comes out sped up and generally choppy on ffmpeg's side.
[13:09:59 CET] <flux> "sped up" sounds like an ffmpeg issue
[13:11:06 CET] <flux> (I suppose it could be an OS X issue as well.)
[13:16:32 CET] <DHE> both apps slicing up the audio samples and splitting them between them?
[13:16:44 CET] <DHE> like two apps trying to read from the same pipe
[13:17:21 CET] <wizonesolutions> DHE: well, Chrome using the mic over WebRTC and then trying to record that on the screen + the microphone audio with ffmpeg, basically
[13:17:41 CET] <wizonesolutions> for archival purposes basically of what's happening on the call
[13:49:21 CET] <t4nk242> when I concatenate two files like this: "concat:file1.ts|file2.ts", file2.ts has no audio anymore (file1 never had any). is there a way to prevent this? when I instead use "concat:file2.ts|file1.ts" the audio of file2 is still there. file1 never had audio
[14:16:17 CET] <Azrael_-> hi, i try this: ./ffmpeg -re -threads 3 -i ../programme/test.mp4 -c:v libx264 -b:v 300k -c:a libfdk_aac -ar 44100 -ac 1  -f mp4 rtmp://localhost:1935/myapp  but always get the message: Could not write header for output file #0 (incorrect codec parameters ?): Invalid argument    i switched from "-f flv" to "-f mp4" according to some documentation.
[14:16:24 CET] <Azrael_-> why do i get the error?
[14:16:39 CET] <flux> wizonesolutions, can you verify that it works with some other audio capture program pair?
[14:16:49 CET] <furq> Azrael_-: use flv for rtmp
[14:17:12 CET] <furq> also which documentation told you to use mp4 because it's wrong
[14:18:42 CET] <Azrael_-> furq: the mp4-documentation was some generic one telling me i should use mp4 container over flv for rtmp, so i tried to accomplish this with ffmpeg. found the "-f mp4"-switch here: https://trac.ffmpeg.org/wiki/Encode/H.264
[14:19:17 CET] <furq> well yeah rtmp only supports flv
[14:19:34 CET] <Azrael_-> did you ever encounter stuttering with the live-stream? currently my video stream and sound stop every few seconds for 1-2sec and then resume
[14:19:47 CET] <Azrael_-> ok, good to know the other doc wasn't right
[14:20:27 CET] <furq> is the encoding going at the same speed as the input file
[14:21:53 CET] <Azrael_-> i started it with "-re" and it is running with 0.996x
[14:22:10 CET] <furq> no idea then
[14:22:50 CET] <Azrael_-> so many components working together could be the problem, and i have no clue how to dig into it further
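Azrael_-'s command with the one fix pointed out above, -f mp4 replaced by -f flv (everything else kept as posted; rtmp only carries flv):

```shell
cmd="ffmpeg -re -threads 3 -i ../programme/test.mp4 -c:v libx264 -b:v 300k -c:a libfdk_aac -ar 44100 -ac 1 -f flv rtmp://localhost:1935/myapp"
echo "$cmd"
```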
[14:39:03 CET] <nadavrub> I am trying to build ffmpeg using MinGW64: configure --enable-shared --enable-libx264 --enable-libfaac --enable-nonfree --enable-gpl --enable-zlib --enable-memalign-hack --disable-static --extra-ldflags="-L/usr/local/lib" --extra-cflags="-I/usr/local/include", the resulting "avcodec-57.def" includes libfaac exports rather than the expected av_*.* exports, might this be a problem with the latest ffmpeg at github?
[14:39:10 CET] <nadavrub> EXPORTS     faacEncClose     faacEncEncode     faacEncGetCurrentConfiguration     faacEncGetDecoderSpecificInfo     faacEncGetVersion     faacEncOpen     faacEncSetConfiguration
[14:41:40 CET] <atomnuker> why would you want faac? it's been literally the worst aac encoder for years now
[14:44:44 CET] <nadavrub> @atomnuker -> What is the alternative for AAC encoding ?
[14:46:03 CET] <atomnuker> the ffmpeg native aac encoder
[14:46:26 CET] <nadavrub> Should I use any specific ./configure switch for that ?
[14:46:51 CET] <atomnuker> nope, it's already built-in
[14:48:52 CET] <nadavrub> Ok, TnX, will try
[14:56:18 CET] <nadavrub> Yup, ditching libfaac makes ffmpeg compile,
[14:56:24 CET] <nadavrub> TnX
[15:01:18 CET] <ac_slater> hey guys. I've had a problem for a while and it's been annoying me. When I stream v4l via udp, the stream gradually (but quickly) falls behind. Meaning, there is a 0-20 second delay over time. I'm using ffplay/vlc/mpv as the players. I've ruled out ffmpeg's udp as the issue by replacing it with netcat. Clues? Is my encoding too slow? ffmpeg is reporting > 1.00x encoding speed
[15:02:24 CET] <ac_slater> my command line looks a little something like `ffmpeg -r 15 -f video4linux2 -video_size 640x480 -framerate 15 -i /dev/video0 -f mpegts udp://127.0.0.1?pkt_size=752`
[15:11:06 CET] <t4nk242> maybe you could use the -re command
[15:16:18 CET] <ac_slater> t4nk242: good point. I'll try that again really quick. I assume you mean on input
[15:17:20 CET] <ac_slater> t4nk242: when I do that, I get "Past Duration 0.xxxxx too large" ... I think it's from the mpegts muxer
[15:17:49 CET] <ac_slater> I should mention that the encoder is a custom h264 encoder, I forgot to include it in the command line.
[15:18:10 CET] <ac_slater> but, all of these same things happen with x264.
[15:18:30 CET] <ac_slater> t4nk242: (-re didnt help the latency)
[15:19:00 CET] <t4nk242> ok, sorry then, I don't know, I am still a beginner ^^
[15:19:16 CET] <ac_slater> thanks anyways
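One thing not tried in this thread: trimming buffering on the player side. These ffplay options exist for exactly that; whether they cure this particular drift is untested, and the url is a placeholder:

```shell
cmd="ffplay -fflags nobuffer -flags low_delay udp://127.0.0.1:5000"
echo "$cmd"
```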
[15:50:59 CET] <t4nk242> when I concatenate two files like this: "concat:file1.ts|file2.ts", file2.ts has no audio anymore (file1 never had any). is there a way to prevent this? when I instead use "concat:file2.ts|file1.ts" the audio of file2 is still there. file1 never had audio
[15:53:15 CET] <J_Darnley> Oh actually, I might understand
[15:53:37 CET] <J_Darnley> Increase probesize until ffmpeg sees the audio in the second file.
[16:05:38 CET] <t4nk242> I'll try J_Darnley , ty :)   , anyways, here is the pastebin : http://pastebin.com/L0kC5zRj
[16:06:58 CET] <t4nk242> how to increase the probesize ?
[16:08:34 CET] <relaxed> t4nk242: if that doesn't work just add a silent audio stream to the first file that matches the second.
[16:14:14 CET] <t4nk242> @relaxed  I considered this, but it does not seem optimal to me ^^
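The silent-track fallback suggested above, sketched: synthesize silence for file1.ts so both inputs carry audio through the concat. The sample rate and layout are guesses and should match file2's real stream (mp2, per the probe output mentioned earlier):

```shell
# anullsrc generates silence; -shortest stops it at the video's length
cmd="ffmpeg -i file1.ts -f lavfi -i anullsrc=r=44100:cl=stereo -shortest -c:v copy -c:a mp2 file1_silent.ts"
echo "$cmd"
```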
[16:16:06 CET] <t4nk242> how to whisper ?
[16:21:01 CET] <gnome1> hello all, I've been playing a bit with video transcoding lately, mostly trying to get video files smaller with libx264. out of curiosity, I tried to compare with HandBrakeCLI and the closest settings I got still give me a ~180 MiB bigger file, I wonder if this means I overlooked some setting I should have used... does someone know which ffmpeg options would correspond to some of the handbrake builtin
[16:21:07 CET] <gnome1> profiles?
[16:22:51 CET] <t4nk242> probesize increase didn't help :/
[16:27:32 CET] <durandal_170> by how much was it increased?
[16:27:49 CET] <wizonesolutions> flux: you mean e.g. record with QuickTime at the same time as Chrome? ok
[16:29:45 CET] <shincodex> why does ext4 not default to nodelalloc
[16:30:57 CET] <t4nk242> I tried it with 100, 1000 and 10000
[16:36:57 CET] <J_Darnley> Wow.  One hundred bytes.  One thousand bytes.  Ten thousand bytes.
[16:37:07 CET] <J_Darnley> I think the default is five million bytes.
[16:39:52 CET] <petecout_> Is it possible to injected Timed ID3 metadata into an HLS output?
[16:40:12 CET] <petecout_> Using ffmpeg
[16:40:15 CET] <Filarius> how to pass the same video to ffmpeg several times, but split by time, like 4 parts of the same length, without actually splitting the original?
[16:41:00 CET] <J_Darnley> what?
[16:41:17 CET] <petecout_> ^
[16:41:27 CET] <J_Darnley> Perhaps you should describe what you are trying to achieve
[16:41:38 CET] <J_Darnley> Then we can tell you how to do it.
[16:42:16 CET] <Filarius> me or petecount_, or both ?
[16:42:22 CET] <J_Darnley> you.
[16:42:31 CET] <J_Darnley> I remember petecout_ from the other day
[16:42:41 CET] <J_Darnley> I still don't know how to solve his problem.
[16:44:48 CET] <Filarius> okay, I have many small videos of the same resolution. I'm looking for a simple way to place them all on one video, grid-like. Like a 3x3 grid playing videos at the same time, with grid cells switching to a new video when the previous one ends.
[16:46:06 CET] <Filarius> I already know how to make grid-video, but from separate files from start to end
[16:47:32 CET] <Filarius> so I thought, is it possible to give ffmpeg the same file several times, but tell it to take from 0 to 1/4 of the total time from the first input, from 1/4 to 2/4 from the second input
[16:47:36 CET] <Filarius> and etc
[16:48:13 CET] <Filarius> and concatenate all the small files into one long one
[16:48:18 CET] <Filarius> before this
[16:48:52 CET] <J_Darnley> Sounds overly ambitious for the horror that is ffmpeg's filtering
[16:50:28 CET] <wizonesolutions> flux: one interesting thing. FFmpeg is often OK for roughly 10-15 seconds before it starts getting particularly noticeable. It isn't consistent.
[16:50:37 CET] <wizonesolutions> flux: btw, yeah, QuickTime Player can record just fine.
[16:50:48 CET] <wizonesolutions> no distortion, audio and video are in sync
[16:50:55 CET] <Filarius> so, is it possible to tell ffmpeg to take only 1/4 of the total time from some input, by setting start and end times not in seconds, but relative to the total duration
[16:51:57 CET] <Filarius> like 0.2 or 0.5
[16:52:05 CET] <J_Darnley> no
[16:52:25 CET] <J_Darnley> you can only instruct ffmpeg to seek based on absolute time.
[16:54:49 CET] <Filarius> okay
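Since only absolute times work, the fraction can be computed outside ffmpeg: probe the duration once, then convert fractions to seconds for -ss/-t. A sketch (the hardcoded duration stands in for the ffprobe result; filenames are placeholders):

```shell
# in practice:
#   dur=$(ffprobe -v error -show_entries format=duration \
#     -of default=nokey=1:noprint_wrappers=1 input.mp4)
dur=100.0
start=$(awk -v d="$dur" 'BEGIN{printf "%.3f", d*0.25}')   # 2nd quarter starts at 1/4
len=$(awk -v d="$dur" 'BEGIN{printf "%.3f", d*0.25}')     # each part is 1/4 long
echo "ffmpeg -ss $start -t $len -i input.mp4 part2.mp4"
```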
[16:58:07 CET] <petecout_> J_Darnley: Lol no one knows the answer to my problem. It's something that is just starting to pick up speed.
[17:30:06 CET] <PlanC> is it possible to reduce the bitrate of an MP3 without having to re-encode it?
[17:30:43 CET] <JEEB> no
[17:31:02 CET] <PlanC> I have a few hundred MP3s from my lectures and some have bizarre bitrates, but it takes such a long time to reduce them to something reasonable like 128 kbps
[17:31:11 CET] <PlanC> JEEB: that's a shame
[17:31:23 CET] <gnome1> is there some way to specify width:height but keep the aspect ratio? (that is, specify maximum frame width and maximum frame height)?
[17:32:32 CET] <JEEB> PlanC: not sure how you'd expect to be able to do anything but cutting them without recompression :)
[17:32:43 CET] <PlanC> JEEB: yeah, you're right
[17:32:58 CET] <PlanC> do you know of any tricks that'll speed the process up at least?
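No trick avoids the re-encode, but the files are independent of each other, so running several encodes in parallel cuts the wall-clock time. A sketch of one way to do it (echoed rather than executed; paths and the job count are placeholders, and "small/" must already exist):

```shell
cmd='ls *.mp3 | xargs -P 4 -I {} ffmpeg -i {} -c:a libmp3lame -b:a 128k small/{}'
echo "$cmd"
```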
[17:33:20 CET] <gnome1> or maybe I should just specify the width (w:-1), should work unless it's not landscape
[17:34:14 CET] <Demp> ffmpeg hangs after about 3000 frames when using this command: "ffmpeg -i "http://62.90.90.56/iba_vod/_definst_/smil:iba-xDsEA1-sRRo.smil/playlist.m3u8?ttl=1458833551&cdn_token=fb7a0d9f49f055fe265f3bdd4ba7c616" -c copy -bsf:a aac_adtstoasc bla.mkv". any ideas?
[17:34:28 CET] <PlanC> gnome1: why not just get the width and height with ffprobe first
[17:34:32 CET] <PlanC> calculate the aspect ratio
[17:34:48 CET] <J_Darnley> Let ffmpeg do that for you.
[17:34:49 CET] <PlanC> calculate the new, smaller width and height based on the aspect ratio?
[17:35:10 CET] <J_Darnley> Read the other examples of scale expressions
[17:35:33 CET] <gnome1> PlanC: I was looking for a simpler command line that'd not have to be tweaked for every single file
[17:36:24 CET] <PlanC> gnome1: I guess that's handy if you're entering it straight into the shell
[17:36:39 CET] <gnome1> how else, that's how I have always used ffmpeg
[17:37:07 CET] <PlanC> BASH, Python, PHP, etc
[17:37:30 CET] <gnome1> bash's a shell...
[17:38:26 CET] <PlanC> no kidding
[17:38:38 CET] <PlanC> what I'm trying to say is that you could make a bash script
[17:38:55 CET] <PlanC> but there seems to be a feature for this so you might not need it
[17:38:56 CET] <PlanC> https://ffmpeg.org/ffmpeg-filters.html#Options-1
[17:39:08 CET] <J_Darnley> Or you could read the fucking manual!
[17:39:16 CET] <J_Darnley> There it is ^
[17:41:40 CET] <PlanC> works in this case, but sometimes the whole process doesn't fit in one line
[17:42:12 CET] <gnome1> I'm not trying to go that far; what I'm looking for exactly is to use ffmpeg directly. and I did mention -1. nevermind that I asked.
[17:42:49 CET] <J_Darnley> If -1 works for you then what's the problem?
[17:43:55 CET] <PlanC> yeah, with -1 and force_original_aspect_ratio what else is there to ask for?
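The two answers from this thread as concrete scale invocations (bounds are example values; -2 instead of -1 rounds the computed axis to an even number, which some encoders require):

```shell
# -1 / -2 keeps aspect by computing one axis from the other
echo 'ffmpeg -i input.mp4 -vf scale=1280:-2 out.mp4'
# force_original_aspect_ratio fits inside both bounds (pad afterwards
# if exact output dimensions are required)
cmd='ffmpeg -i input.mp4 -vf "scale=1280:720:force_original_aspect_ratio=decrease" out.mp4'
echo "$cmd"
```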
[17:51:24 CET] <magento_rocks> can i add metadata to an audio file with ffmpeg?
[17:52:13 CET] <magento_rocks> i have a .flac file that i converted to .wav, then converted to .m4a (ALAC) but the metadata is lost in the conversion process, i am trying to add it back to the .m4a file
[17:52:35 CET] <J_Darnley> -metadata
[17:52:57 CET] <J_Darnley> or if you still have the original file, be all fancy and use -map_metadata
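The -map_metadata route mentioned above, sketched: pull the tags from the original flac into the already-converted m4a with a pure stream copy, no re-encode (filenames are placeholders):

```shell
# input 0 supplies the streams, input 1 supplies the metadata
cmd="ffmpeg -i song.m4a -i original.flac -map 0 -map_metadata 1 -c copy song_tagged.m4a"
echo "$cmd"
```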
[17:53:47 CET] <Timeless_> Is it possible to restream a live MP4v2 stream to another format with FFmpeg?
[17:54:12 CET] <Timeless_> Currently when I try to restream it waits for the stream to end and then converts the file.
[17:54:13 CET] <J_Darnley> probably
[17:54:28 CET] <Timeless_> I would like to do it on the fly
[17:55:51 CET] <Timeless_> ffmpeg -i http://addres:port/test.mp4 -f m4v -r 2 -vcodec mpeg4 test.mp4
[17:56:16 CET] <Timeless_> This waits for the input stream to end and then converts the file.
[17:56:27 CET] <Timeless_> But I like to have it on the fly. Am I missing something here?
[17:56:36 CET] <J_Darnley> Yes.
[17:56:51 CET] <J_Darnley> You cannot use an mp4 file until it is finished.
[17:56:59 CET] <magento_rocks> thanks J_Darnley ill try that
[17:57:15 CET] <Timeless_> So there is no way of live streaming it?
[17:57:25 CET] <Timeless_> sorry, "live input"
[17:57:51 CET] <Timeless_> Or do you mean that my output cannot be mp4?
[17:57:51 CET] <gnome1> if it cannot be used until it is finished, how is it suitable for streaming? it isn't?
[17:58:09 CET] <gnome1> oh, that's plain http.
[17:58:50 CET] <Timeless_> The problem is that my input can only be MP4v2 because the source is proprietary
[18:01:34 CET] <JEEB> Timeless_: if whatever is generating your input can use the "fragments" feature then you will be able to start quicker, but without it the index will be at the end
[18:01:40 CET] <JEEB> and without an index the file can't be read. at all
[18:02:11 CET] <kepstin> well, it's not really the index, it's just that there's some codec init stuff stuck in with the index
[18:02:19 CET] <kepstin> so it can't write that until it has the index ready :/
[18:03:01 CET] <JEEB> yes, although the result is the same - no index, not usable
[18:04:12 CET] <Timeless_> I have 3 options: separate frames (aka jpeg), Payload, and Raw
[18:04:28 CET] <Timeless_> the last 2 are detected as m4v by vlc and ffmpeg
[18:04:40 CET] <Timeless_> but this could be just a default
[18:05:20 CET] <Timeless_> But if I append the MOOV ATOM data to the m4v stream, would it then be possible to use it as live input?
[18:05:45 CET] <JEEB> m4v is probably just some raw video stuff, that would be readable right away
[18:06:02 CET] <JEEB> wouldn't have audio of course, but not sure if you have that anyways
[18:06:06 CET] <JEEB> also wouldn't have timing information
[18:06:07 CET] <kepstin> 'm4v' is typically just an alternate extension for 'mp4', and is the normally the same format
[18:06:30 CET] <JEEB> "detected by ffmpeg as 'm4v'" usually means MPEG-4 Part 2 raw bitstream :P
[18:06:48 CET] <Timeless_> JEEB: yeah thats correct.
[18:06:56 CET] <JEEB>  DE m4v             raw MPEG-4 video
[18:07:05 CET] <kepstin> hmm, I'm getting that confused, then :/
[18:07:18 CET] <JEEB> the extension is used by apple for ISOBMFF with video though
[18:07:33 CET] <kepstin> oh, m4a is just mp4 without a video stream, i thought they did the same thing with video
[18:07:38 CET] <Timeless_> I thought that m4v is mp4 (without audio)
[18:07:53 CET] <JEEB> Apple uses the *file* *extensions* m4a and m4v
[18:08:08 CET] <JEEB> which as I said are ISOBMFF with either only audio or video (and possibly audio)
[18:08:49 CET] <JEEB> ISOBMFF = ISO Base Media File Format aka what you usually call "mp4" :P
[18:08:56 CET] <Timeless_> But because of the "index" (meta data?) it is not possible to use it as live input?
[18:09:13 CET] <JEEB> the raw bit stream you can as-is, since it's just a raw bit stream
[18:09:32 CET] <JEEB> the container'ized thing requires the index with the init stuff etc
[18:09:43 CET] <JEEB> unless it can do fragments, which I guess it doesn't :P
[18:09:53 CET] <Timeless_> No it cannot do fragments :(
[18:10:09 CET] <Timeless_> Normally this source is used to write video to a file
[18:10:22 CET] <Timeless_> which I now pipe to a http stream
[18:10:27 CET] <kepstin> Timeless_: if correctly made, an mp4 (iso) file can have the 'moov' moved to the start of the file so it can be used as a live input. The issue is that you can't use an mp4 as a live *output*. (i.e. you can't read one as it's being written)
[18:11:05 CET] <Timeless_> sorry. it seems that my question is unclear
[18:11:08 CET] <Timeless_> what I meant is:
[18:11:41 CET] <Timeless_> m4v (mp4 - raw) as input ---> whatever codec (container) is capable of live streaming as output
[18:12:07 CET] <JEEB> yes, you just lose the timestamps and any audio if there is any
[18:12:11 CET] <JEEB> since that's not in the raw bit stream
[18:12:23 CET] <JEEB> but yes, raw bit stream is supposed to be readable and decode'able A=>B
[18:13:05 CET] <Timeless_> ok great
[18:13:10 CET] <kepstin> Timeless_: you have a couple of options there for live streamable output; e.g. mkv is streamable, you could use mpeg-ts, you could use fragmented output like dash or hls.
[18:13:27 CET] <JEEB> kepstin: he has a closed thing that outputs either raw MPEG-4 Part 2 or MPEG-4 Part 2 in mp4
[18:13:30 CET] <JEEB> or MJPEG
[18:13:46 CET] <Timeless_> yeah thats correct JEEB
[18:13:50 CET] <JEEB> and he is asking if any of those is something he can push through HTTP to achieve live streaming :P
[18:14:25 CET] <Timeless_> the output is not my problem right now. that isn't difficult :)
[18:14:48 CET] <kepstin> hmm, so i'm looking at the wrong end of the problem, then?
[18:14:54 CET] <Timeless_> JEEB: so if the input is indeed RAW I only have to set some option during the CLI action and that should make it work?
[18:15:08 CET] <Timeless_> kepstin: yup :)
[18:15:27 CET] <Timeless_> JEEB: so options like framesize bitrate fps ?
[18:17:49 CET] <JEEB> Timeless_: the raw bit stream should have that info, it's not raw video as such
[18:17:53 CET] <JEEB> it's raw compressed bit stream
[18:18:06 CET] <JEEB> so it should have the init info at every IDR picture, hopefully
[18:18:21 CET] <JEEB> if not, you're fucked unless you start from the beginning of the stream :P
[18:24:21 CET] <Timeless_> JEEB: ok great. I will give that a try.
[18:24:38 CET] <Timeless_> Currently I first start the pipe --> http and then ffmpeg
[18:24:50 CET] <Timeless_> so maybe I'm missing the initial init info
[18:25:07 CET] <Timeless_> I have to go now. thanks for your help people :)
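For anyone who does control the mp4 writer (which Timeless_'s closed source isn't), the two standard ways around the trailing index discussed above, sketched:

```shell
# rewrite a finished file with the moov up front, so readers can start
# before downloading the whole thing
echo "ffmpeg -i in.mp4 -c copy -movflags +faststart out.mp4"
# write fragments as you go; the output is readable while still growing
cmd="ffmpeg -i in.mp4 -c copy -movflags frag_keyframe+empty_moov out.mp4"
echo "$cmd"
```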
[19:07:38 CET] Action: Keshl facedesks a few thousand times. "So, I have an image sequence starting with, say, "Frame_0001.bmp", except it's 25,216 frames long. The program that made these files kept a consistent numbering scheme up until frame 9,999, and then added an extra digit to get to 10,000. Is there any "easy" way to handle throwing this at ffmpeg?"
[19:22:45 CET] <gnome1> wonderful.
[19:32:15 CET] <Keshl> Whoo! Got it! Easier than I thought!
[19:32:59 CET] <Keshl> ffmpeg -i "Frame_%04d.bmp" -i "Frame_1%04d.bmp" -i "Frame_2%04d.bmp" -i "Sound.wav" -vcodec (Etc)
[19:34:50 CET] <Keshl> Oh, duh. Don't forget -r 144 (Or whichever your framerate's at.)
[19:38:37 CET] <gnome1> if that's a one-time problem and the machine has "rename" installed, it may be worth running "rename Frame_ Frame_0 Frame_????.bmp". ffmpeg won't be the only tool to have trouble with that kind of numbering
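An alternative to renaming that leaves the files untouched: generate a concat-demuxer list in the right numeric order and feed that to ffmpeg. A sketch using the names and counts from the log; whether -r interacts cleanly with the concat demuxer here is untested:

```shell
# frames 1..9999 are zero-padded to four digits, 10000..25216 are not;
# printf reuses its format string for each argument from seq
printf "file 'Frame_%04d.bmp'\n" $(seq 1 9999)      > frames.txt
printf "file 'Frame_%d.bmp'\n"   $(seq 10000 25216) >> frames.txt
echo "ffmpeg -r 144 -f concat -i frames.txt -i Sound.wav -vcodec libx264 out.mkv"
```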
[20:47:45 CET] <bubbely> question: im trying to write an app to stream my Spotify to my symbian nokia 808 phone. is something like that possible with ffmpeg
[20:48:10 CET] <bubbely> maybe a webserver with the current song im listening to being streamed and parsed by ffmpeg on my phone with a qt app i write
[20:48:36 CET] <J_Darnley> ffmpeg is a tool for turning one sort of format into another.
[20:49:00 CET] <J_Darnley> if you don't need to do that it is probably overkill
[20:50:54 CET] <bubbely> the webpage says it can "stream"
[20:50:55 CET] <bubbely> what's that about?
[20:51:18 CET] <J_Darnley> It can output a continuous stream
[20:51:31 CET] <bubbely> what can i use to listen to it
[20:51:41 CET] <J_Darnley> that can be watched/listened to live
[20:51:52 CET] <bubbely> J_Darnley: is there an example of how to set up the server and a client ?
[20:52:21 CET] <bubbely> well.. how does someone listen live ?
[20:52:23 CET] <J_Darnley> there must be hundreds of examples of streaming using ffmpeg
[20:52:59 CET] <J_Darnley> Not that I know any
[20:53:07 CET] <J_Darnley> I think streaming is a cancer
[20:53:48 CET] <bubbely> hey, J_Darnley, how does one listen to the stream?
[20:54:00 CET] <J_Darnley> open it in any good media player
[20:54:04 CET] <gnome1> I'd say there's one use for streaming, and that's live content. For everything else, streaming is security through obscurity or a misunderstanding of how things work.
[20:54:07 CET] <J_Darnley> (I assume)
[20:54:32 CET] <bubbely> I need a Qt example of someone streaming from a FFMPEG server
[20:54:36 CET] <bubbely> any idea gnome1 ?
[20:54:45 CET] <J_Darnley> I don't even think spotify will let you easily get music out of it anyway.
[20:54:52 CET] <gnome1> no, I don't have experience with Qt
[20:55:31 CET] <andrey_utkin> bubbely: elaborate plz
[20:55:59 CET] <bubbely> andrey_utkin: I want to create an FFMPEG server streaming my PC playback device
[20:56:11 CET] <bubbely> andrey_utkin: and i want to listen to it from a Qt app on my symbian device
[20:56:23 CET] <bubbely> makes sense?
[20:57:29 CET] <andrey_utkin> Streaming your pc input device you mean?
[20:57:56 CET] <bubbely> streaming my pc playback, the music playing on my computer
[20:59:19 CET] <bubbely> andrey_utkin: it seems i need to find a way to play .asf files on my phone
[20:59:36 CET] <kepstin> bubbely: you might be better off just getting the 'libspotify' sdk and building a native spotify player for your phone.
[21:00:10 CET] <andrey_utkin> Getting at the data that goes out to your earphones is a separate task; I'd look at pulseaudio, jack or alsa stuff
[21:00:22 CET] <bubbely> kepstin: I had that idea
[21:00:22 CET] <gnome1> there's probably some alsa setting to do that
[21:00:29 CET] <bubbely> kepstin: but i figured streaming would be faster
[21:00:38 CET] <kepstin> ffmpeg can use the pulseaudio playback device as a capture source, that's not hard to do
[21:00:41 CET] <gnome1> I recall seeing examples of config files where the "pcm" output was available (by software) as an input for recording
[21:00:46 CET] <gnome1> well, for capturing
[21:00:53 CET] <gnome1> some cards have this on hardware, some don't
[21:01:10 CET] <kepstin> if you're using alsa without pulseaudio, you'd need either a card with a 'what you hear' hardware recording device, or use the 'aloop' module to make a virtual soundcard.
[21:01:16 CET] <kepstin> pulseaudio makes it easier :)
[21:01:27 CET] <gnome1> yeah, I believe it's aloop
[21:01:42 CET] <bubbely> kepstin: do u have a Qt example of playing a .asf stream
[21:02:39 CET] <kepstin> bubbely: not a qt example. you'd want to check if there's any os apis for it (although I doubt it for asf), or you could always use ffmpeg to decode it...
[21:03:05 CET] <bubbely> kepstin: qt example of any form of ffmpeg ?
[21:03:24 CET] <andrey_utkin> bubbely: see above, the guys figured out you can get raw audio; personally I have no idea what ASF is
[21:03:46 CET] <J_Darnley> Shit from microsoft
[21:04:08 CET] <andrey_utkin> bubbely: the ffmpeg API is just C, so there's not much sense in asking for Qt-specific examples
[21:04:22 CET] <bubbely> andrey_utkin: Ok
[21:04:23 CET] <kepstin> they have apis for getting the audio if you have an api key, i don't think it's actually that hard.
[21:04:33 CET] <bubbely> kepstin: spotify ?
[21:04:39 CET] <kepstin> they used to even have a native symbian app for spotify, but that's gone now
[21:04:49 CET] <andrey_utkin> It is like jquery for math in js, not needed :-)
[21:04:54 CET] <kepstin> https://news.spotify.com/us/2009/11/23/spotify-for-nokia-and-more/ is just a bunch of dead links :)
[21:07:08 CET] <andrey_utkin> bubbely: you could serve at least a single connection over the RTSP or HTTP protocol from the ffmpeg command line. Should be enough to start streaming to your phone
[21:07:29 CET] <andrey_utkin> Is vlc available on your phone?
[21:08:02 CET] <andrey_utkin> Or any rtsp or http player
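A minimal sketch of what andrey_utkin and kepstin suggest, assuming PulseAudio; the monitor source name below is only an example (list yours with `pactl list short sources`), and the HTTP `listen` option needs ffmpeg 3.0 or newer:

```shell
# Capture what the PC is currently playing (the PulseAudio "monitor"
# source) and serve it as an MP3 stream over HTTP to a single client.
# Open http://<pc-ip>:8090/ in any player that handles HTTP MP3.
ffmpeg -f pulse -i alsa_output.pci-0000_00_1b.0.analog-stereo.monitor \
       -c:a libmp3lame -b:a 192k \
       -listen 1 -f mp3 http://0.0.0.0:8090/
```

The capture half is device-specific, so only the encode path is easy to verify offline.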
[21:10:41 CET] <bubbely> i have DLNA play
[22:09:51 CET] <AlexQ> Why did ffmpeg encode 16-bit WAV to 24-bit FLAC when remuxing to MKV? At least that's what mediainfo says. I have playback issues.
[22:10:09 CET] <AlexQ> VLC says it's 32 bit FLAC xD
[22:12:33 CET] <AlexQ> That's really strange. How do I force bit depth?
[22:12:40 CET] <J_Darnley> If you told it to copy then it copied.
[22:15:01 CET] <AlexQ> I did exactly the same thing on the other file before with no problems. Maybe that's because the original DTS track from the video-source MKV was 24-bit? Somehow ffmpeg got confused?
[22:15:53 CET] <AlexQ> BTW. There is that sofalizer plugin - is it available in official ffmpeg build or not? And how do I decide which profile file fits me best, I'd need to listen to all of them?
[22:22:43 CET] <kepstin> AlexQ: the output audio bit depth that ffmpeg uses is automatically selected from the input and filters used, so in some cases it might do a higher depth flac depending on the input format. You can use the -sample_fmt option to force a specific output format (it might cause an extra conversion filter to be added)
[22:23:16 CET] <AlexQ> kepstin: Might that have been caused by the "volume" filter?
[22:24:07 CET] <kepstin> could be; I think the 'volume' filter only works in floating point, so you get 32bit float by default
[22:24:27 CET] <kepstin> (and it picks the flac format that loses the fewest bits of precision)
[22:30:24 CET] <JEEB> the audio/video formats IIRC are only decided by a list for a container ;P
[22:30:57 CET] <JEEB> in other words, you really don't want to not define a "codec" for your streams
[22:31:18 CET] <JEEB> if you don't define it and don't set a global copy - you get what you asked for, which is russian roulette
[22:31:47 CET] <AlexQ> JEEB: I defined codec as "FLAC" but with no sample rate. Fair enough, that's because of the filter
[22:32:25 CET] <JEEB> kepstin was just commenting as if the sample format would affect the "codec" decision
[22:32:35 CET] <JEEB> which as far as I know it doesn't
[22:32:38 CET] <AlexQ> So how should I use volume and FLAC output codec so that I achieve 16-bit FLAC? My input is going to be a 32-bit WAV
[22:33:15 CET] <JEEB> use a resample filter at the end of your filter chain
[22:33:31 CET] <Azrael_-> hi
[22:33:38 CET] <AlexQ> and the sample_fmt option would be riskier than that?
[22:33:59 CET] <Azrael_-> does anybody use ffmpeg to stream to wowza and can tell me how to send the proper source/password for the rtmp-publishing?
[22:34:02 CET] <JEEB> that most probably does the resampling logic behind the curtain
[22:34:22 CET] <JEEB> as in, "if input isn't this, add resample filter to the end of the filter chain"
[22:36:39 CET] <AlexQ> JEEB: Though to make sure the resampling happens on the end of the filter chain it would be best to use resample filter manually then, yeah?
[22:37:06 CET] <JEEB> well it would happen in the end anyways
[22:37:19 CET] <JEEB> so in this case it most likely doesn't matter
[22:40:59 CET] <AlexQ> JEEB: What's the difference between s16 and s16p? Both are 16-bit
[22:41:20 CET] <JEEB> how the data is handled
[22:41:25 CET] <J_Darnley> not-planar and planar
[22:41:41 CET] <JEEB> p is planar, as in first you get the samples for one channel and then the other and so forth
[22:42:05 CET] <JEEB> the other is packed, which means sample/sample/sample from each of the channels one after another
[22:43:27 CET] <AlexQ> So which should I choose for a regular FLAC stream?
[22:43:49 CET] <J_Darnley> it doesn't matter, but the flac encoder only accepts packed
[22:45:29 CET] <AlexQ> So -sample_fmt s16
[22:46:26 CET] <J_Darnley> yes
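Putting the advice above together, a sketch (filenames hypothetical): the volume filter runs in floating point, so forcing `-sample_fmt s16` makes ffmpeg convert back to 16-bit before the FLAC encoder:

```shell
# 32-bit float WAV in, 16-bit FLAC in MKV out.
# Without -sample_fmt s16 the encoder may pick a higher bit depth
# to preserve the float output of the volume filter.
ffmpeg -i input.wav -af "volume=-3dB" -c:a flac -sample_fmt s16 output.mkv
```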
[22:47:50 CET] <AlexQ> When I do: ffmpeg -i pcm_f32le.wav -af "volumedetect" -f null /dev/null I do get Stream mapping: Stream #0:0 -> #0:0 (pcm_f32le (native) -> pcm_s16le (native))
[22:48:06 CET] <AlexQ> But I do hope that the analysis happens before the conversion to s16le, doesn't it?
[22:48:26 CET] <J_Darnley> almost certainly
[22:48:51 CET] <J_Darnley> you can add -loglevel verbose or debug and it should show you where the resample is added
[22:49:46 CET] <AlexQ> What about that sofalizer filter? It isn't available in standard ffmpeg distributions?
[22:50:45 CET] <J_Darnley> I don't know
[22:50:51 CET] <J_Darnley> check how it was built
[22:51:03 CET] <durandal_1707> you need netcdf
[22:52:35 CET] <durandal_1707> it's for downmixing surround to binaural stereo
[22:54:40 CET] <AlexQ> So which one is better, durandal_1707?
[22:56:02 CET] <durandal_1707> AlexQ: which is another?
[22:56:14 CET] <AlexQ> sofalizer
[22:56:46 CET] <durandal_1707> and another?
[22:57:18 CET] <AlexQ> what is that NetCDF? Is it a filter or what?
[22:57:23 CET] <AlexQ> There is also e.g. Dolby Headphone
[22:57:44 CET] <AlexQ> there is some filter from VLC, though I don't know what that is
[22:58:06 CET] <durandal_1707> NetCDF is a library you need in order to have the sofalizer filter
[22:58:41 CET] <AlexQ> you can use Dolby Headphone with Foobar
[22:59:14 CET] <AlexQ> to convert files
[22:59:38 CET] <durandal_1707> with headphone?
[22:59:57 CET] <AlexQ> ?
[23:00:58 CET] <durandal_1707> how you can convert files with foobar and Dolby headphone?
[23:01:14 CET] <AlexQ> there is a Dolby Headphone DSP plugin for Foobar
[23:01:19 CET] <AlexQ> you need to have the DLL
[23:01:36 CET] <AlexQ> and Foobar supports audio conversion with DSP chains
[23:01:47 CET] <durandal_1707> ah
[23:02:32 CET] <durandal_1707> for sofalizer you can pick various files to suit your taste
[23:02:54 CET] <AlexQ> though technologies using HRTF for in-ear monitors are probably better I guess? Yes, I think that's cool
[23:02:58 CET] <gnome1> when scaling down video, does the method matter that much, should the default be good enough?
[23:03:39 CET] <AlexQ> Okay, I've got to go. Thanks and bye!
[23:04:03 CET] <AlexQ> durandal_1707: So basically I need to get that library, enable sofalizer in ffmpeg build flags and then recompile ffmpeg?
[23:04:09 CET] <AlexQ> to play around with sofalizer?
[23:04:24 CET] <durandal_1707> yes
[23:04:42 CET] <J_Darnley> gnome1: if you use bicubic or better it makes little difference.
[23:04:57 CET] <AlexQ> and I should be able to check these presets with ffplay, without the need to encode to files?
[23:06:01 CET] <durandal_1707> you just need ffplay to use an audio filter, no need to encode; it should be pretty fast
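For reference, a sofalizer invocation sketch; the SOFA file path is a placeholder, and this requires an ffplay build configured with `--enable-netcdf`:

```shell
# Downmix a surround file to binaural stereo for headphones while playing.
# subject_003.sofa stands in for whichever SOFA profile you want to try.
ffplay -af "sofalizer=sofa=/path/to/subject_003.sofa" surround.mkv
```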
[23:06:25 CET] <gnome1> J_Darnley: bicubic is the default, right?
[23:06:33 CET] <J_Darnley> no idea
[23:06:39 CET] <AlexQ> Or even better, is there any GUI player that allows the playback of surround files and supports sofalizer output filters? To check which profile provides best effect. I should try these hrtf_b profiles I guess?
[23:06:52 CET] <J_Darnley> I might expect bilinear given the age of swscale
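To take the guesswork out of the default, the scaling algorithm can be chosen explicitly via scale's `flags` option (filenames hypothetical):

```shell
# Downscale with Lanczos; swap lanczos for bicubic or bilinear to compare.
ffmpeg -i input.mp4 -vf "scale=1280:720:flags=lanczos" -c:a copy output.mp4
```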
[23:07:39 CET] <durandal_1707> AlexQ: there are many sofa files, each with pros and cons
[23:08:21 CET] <durandal_1707> There's no GUI afaik, only patch for vlc
[23:13:22 CET] <AlexQ> durandal_1707: Well, for the best effect I would need to have a HRTF for my own body I believe
[23:14:17 CET] <durandal_1707> there are pretty good generic ones
[23:15:04 CET] <AlexQ> do these profiles/sofalizer itself use LFE as well?
[23:15:32 CET] <AlexQ> Dolby Headphone ignores it, which is stupid as good headphones are able to transfer these freqs, or at least part of them
[23:19:20 CET] <AlexQ> Egrrrh, that problem had nothing to do with the FLAC being 24-bit or whatever. It is just a strange sync issue
[23:19:23 CET] <durandal_1707> yes, lfe is passed as mono
[23:19:59 CET] <AlexQ> the video starts with no audio
[23:20:25 CET] <AlexQ> then the video freezes, audio begins to play up to the moment when it syncs with the video, and the video starts to play as well
[23:20:37 CET] <AlexQ> and then after a minute or so the audio is gone. What's up?
[23:21:28 CET] <durandal_1707> sample?
[23:21:38 CET] <AlexQ> Where should I upload?
[23:22:39 CET] <gnome1> if this weren't #ffmpeg, I'd guess you used mencoder somewhere
[23:23:15 CET] Action: gnome1 was never able to get a video with decent a-v sync in mencoder
[23:23:33 CET] <AlexQ> The original MKV file has chapters or whatever. Maybe they cause the issue. How can I ignore chapters when remuxing?
[23:26:55 CET] <i10k> Hey everyone. I've stumbled upon a little problem with video made with ffmpeg and I hope you can help.
[23:26:58 CET] <durandal_1707> ordered chapters?
[23:27:02 CET] <i10k> I have two flv files. One of them has just the video track, the other one has both audio and video. As a result I'd like to have a file that has the video from the former and audio from the latter. But there's a caveat.
[23:27:05 CET] <i10k> Video is 55 seconds shorter than the audio, so I tried using -itsoffset to make the video start after first 55 seconds.
[23:27:09 CET] <i10k> But when I open the resulting file in VLC I only hear audio with no video and it throws an error "VLC does not support the audio or video format "undf""
[23:27:13 CET] <i10k> Here's the command I'm using http://pastebin.com/f7apAJDh
[23:27:17 CET] <i10k> While I was playing with arguments I noticed that the file becomes unreadable by players when -itsoffset is bigger than 5 seconds. Am I doing something wrong? Thanks
[23:27:53 CET] <BtbN> players probably just probe the first 5 seconds, and if they don't find a video stream then, they decide that it's audio-only
[23:29:26 CET] <AlexQ> i10k: You could try -ss to seek one of the streams instead of the -itsoffset maybe? I used it once to resync a/v
[23:29:31 CET] <AlexQ> without transcoding
[23:29:32 CET] <i10k> I thought that ffmpeg stretches the video track by filling the "skipped" part with the first frame when using -itsoffset.
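A sketch of the approach i10k describes (filenames hypothetical): `-itsoffset` delays the input that follows it, and explicit `-map` avoids accidentally picking the unwanted video track from the second file:

```shell
# Delay the video-only input by 55 s; take audio from the other file.
# Per BtbN's point above, players that only probe the first few seconds
# may then decide the file is audio-only, so test in several players.
ffmpeg -itsoffset 55 -i video_only.flv -i audio_and_video.flv \
       -map 0:v:0 -map 1:a:0 -c copy output.mkv
```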
[23:30:35 CET] <AlexQ> Maybe, dunno
[23:31:15 CET] <AlexQ> durandal_1707: Umm, just some chapters or whatever. Can I ignore them or sth? Really wanted to have properly downmixed audio to play on my mobile device using headphones
[23:32:02 CET] <durandal_1707> dunno, without sample I can't tell
[23:32:39 CET] <AlexQ> So where should I upload one?
[23:32:58 CET] <AlexQ> I tried null decode and there is a lot of errors with -loglevel debug
[23:33:20 CET] <AlexQ> I mean, there is something about DTS, while the DTS stream is gone and replaced with the FLAC stream
[23:33:40 CET] <AlexQ> but that only appears with -loglevel debug
[23:36:20 CET] <durandal_1707> try remuxing it with codec copy
[23:36:49 CET] <AlexQ> You mean, remuxing it again? Without any audio filters this time?
[23:37:27 CET] <AlexQ> Maybe I'll try to remux it with mkvmerge?
[23:39:04 CET] <AlexQ> extracted first 3 mins of FLAC and it played well
[23:40:02 CET] <AlexQ> if I do extract h264 and FLAC and then mux these two, all other info should be gone I presume? Chapters etc., even sync info
[23:42:40 CET] <durandal_1707> does sync issues happen only when using filters?
[23:43:26 CET] <AlexQ> Um, haven't tried without
[23:44:07 CET] <AlexQ> I'm extracting FLAC and h.264 now...
[23:48:40 CET] <AlexQ> mkvmerge told me my h264 file has an unknown type. To what file type should I extract it then?
[23:49:36 CET] <durandal_1707> can't you use ffmpeg to remux?
[23:49:37 CET] <AlexQ> I'll check if ffmpeg will be able to mux these two..
[23:50:01 CET] <AlexQ> just wanted to try something else to increase the chances of everything going well
[23:50:43 CET] <AlexQ> do I need to use map when I use vid-only file first and audio-only file as second when muxing using ffmpeg?
[23:52:41 CET] <llogan> AlexQ: probably not
[23:52:45 CET] <AlexQ> Does the .h264 file contain all the info neccessary to play/mux it? Maybe I chose the wrong output format
[23:52:52 CET] <AlexQ> Couldn't remux
[23:53:23 CET] <durandal_1707> directly from input format
[23:55:30 CET] <AlexQ> durandal_1707: You mean that I should remux directly from input mkv?
[23:57:49 CET] <AlexQ> http://paste.ubuntu.com/15483500/
[23:58:10 CET] <AlexQ> Guess I can't mux .h264 without specifying the parameters manually or sth?
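Rather than round-tripping through a raw .h264 file (an elementary stream carries no container timing), it's usually simpler to remux straight from the source MKV; this sketch also drops the chapters AlexQ suspected were involved (filename hypothetical):

```shell
# Copy only the first video and audio streams; -map_chapters -1
# drops chapter metadata from the output.
ffmpeg -i source.mkv -map 0:v:0 -map 0:a:0 -map_chapters -1 -c copy remuxed.mkv
```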
[00:00:00 CET] --- Thu Mar 24 2016


