[Ffmpeg-devel-irc] ffmpeg.log.20160218

burek burek021 at gmail.com
Fri Feb 19 02:05:01 CET 2016


[00:07:55 CET] <CoJaBo> Is there a faster way to speed up a video than this? https://trac.ffmpeg.org/wiki/How%20to%20speed%20up%20/%20slow%20down%20a%20video
[00:11:08 CET] <EmleyMoor> DHE/kepstin: Thanks, between you I got it.
[00:23:09 CET] <CoJaBo> I think what I want to do is like, copy ONLY keyframes or something; is that possible?
[00:38:25 CET] <CoJaBo> ..ok, is there a problem with the above method and "-i concat:" ?
[00:43:58 CET] <Wader8> how do you mean faster way to speed up ?
[00:44:43 CET] <CoJaBo> Wader8: It takes about 10 seconds per frame
[00:45:24 CET] <CoJaBo> A more significant problem is that it ignores the concat: input, and ONLY reads the very first file
[00:51:27 CET] <CoJaBo> Is there a way to fix that?
[00:51:44 CET] <pzich> what is the exact concat command you're running?
[00:51:46 CET] <pzich> better yet...
[00:52:13 CET] <pzich> and are your input files the same resolution/framerate/codec etc.?
[00:52:50 CET] <CoJaBo> They're all the same
[00:52:52 CET] <CoJaBo> "setpts=(STARTPTS+PTS)/60"
[00:53:10 CET] <CoJaBo> And I'm using -i concat:file1.mp4|file2.mp4|...
[00:53:17 CET] <CoJaBo> The output stops after file1
[00:55:26 CET] <pzich> Have you tried using a concat file, like https://trac.ffmpeg.org/wiki/Concatenate#demuxer ?
[00:56:03 CET] <furq> yeah you can't use the concat protocol for mp4
[00:56:09 CET] <furq> use the demuxer
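The demuxer route furq suggests takes a text file listing the inputs; a minimal sketch (the file names here are placeholders, not the actual ones from the discussion):

```shell
# Write the list the concat demuxer reads: one "file '...'" line per input.
cat > list.txt <<'EOF'
file 'file1.mp4'
file 'file2.mp4'
EOF
grep -c '^file' list.txt   # prints 2
# The demuxer is then invoked as (requires ffmpeg >= 1.1):
#   ffmpeg -f concat -i list.txt -c copy joined.mp4
```

With `-c copy` nothing is re-encoded; on newer builds, `-safe 0` may be needed if the list uses absolute paths.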
[00:56:41 CET] <CoJaBo> "Unknown input format: 'concat'"
[00:57:02 CET] <furq> which ffmpeg version
[00:57:10 CET] <furq> i'm guessing it predates 1.1
[00:57:35 CET] <CoJaBo> The fricking ubuntu version avconv version 9.18-6:9.18-0ubuntu0.14.04.1, Copyright (c) 2000-2014 the Libav developers
[00:57:49 CET] <furq> that's not even ffmpeg
[00:59:17 CET] <furq> upgrade ubuntu or install ffmpeg from http://johnvansickle.com/ffmpeg/
[00:59:32 CET] <furq> or remux all your input files to a format which does work with the protocol, like ts
[01:13:12 CET] <CoJaBo> furq: So, will -f concat work for sure if I can install the nonbuntufied ffmpeg?
[01:14:21 CET] <pzich> definitely should, if it's the right version and installed correctly
[01:15:12 CET] <CoJaBo> ..is there a build I can install from a site that isn't plain http?
[01:17:16 CET] <pzich> you could install it for real instead of downloading a static build
[01:18:41 CET] <CoJaBo> Ubuntu package manager doesn't actually have it tho
[01:20:01 CET] <pzich> yup, will have to find a solution for that. Google is good at that: https://www.google.com/search?q=ubuntu+install+ffmpeg
[01:21:31 CET] <CoJaBo> pzich: The PPA also broke everything
[01:21:44 CET] <CoJaBo> PPAs in ubuntu are.. not very reliable :/
[01:21:50 CET] <furq> yeah short of upgrading ubuntu or compiling it yourself, that's your only option
[01:21:59 CET] <pzich> so pick your poison ;)
[01:22:13 CET] <furq> i would just upgrade ubuntu but i appreciate that some people sadly have reasons to use an LTS release
[01:23:33 CET] <furq> obviously PPAs are also technically an option but i excluded those because why would you want to ruin ubuntu even more
[01:24:52 CET] <CoJaBo> lol
[01:25:18 CET] <furq> i guess installing debian in a chroot is also an option
[01:25:33 CET] <pzich> oh jeez
[01:25:34 CET] <furq> or like i said just remuxing your input files to ts
[01:25:53 CET] <furq> assuming you only need to do this once that's going to be easiest
[01:27:52 CET] <CoJaBo> I'm going to want to make these regularly; also the files are a few tens of GB, each
[01:28:44 CET] <CoJaBo> My last attempt at installing a PPA managed to toast even the /home directory >_>
[01:29:22 CET] <lufi> hi, do the rhel packages provided in the rpmfusion repo include qt-faststart already? or should I follow the tutorials I see online (manually compiling it in the tools dir)
[01:33:42 CET] <CoJaBo> furq: ..ok, so I managed to get a recent version running on a different machine, but using -f concat just appears to... freeze
[01:33:51 CET] <CoJaBo> [mov,mp4,m4a,3gp,3g2,mj2 @ 0x4bc78e0] Auto-inserting h264_mp4toannexb bitstream filter
[01:38:09 CET] <CoJaBo> ..ok, maybe it is going, but it's kinda slow
[01:38:47 CET] <CoJaBo> it just got to frame #3
[01:40:04 CET] <furq> pastebin the command
[01:40:22 CET] <CoJaBo> pzich: Ok, so it will no doubt be hours till that finishes; but assuming it works, is there a way to speed up doing a timelapse video? E.g., something like making it decode only keyframes?
[01:41:08 CET] <furq> no seriously pastebin the command
[01:41:12 CET] <furq> there's no way it should be that slow
[01:42:50 CET] <CoJaBo> ffmpeg -f concat -i ~/tmp.txt -an -r ntsc -filter:v "setpts=PTS/60" -codec:v libx264 -preset placebo -qp 0 ~/output5.mkv
[01:43:08 CET] <furq> why are you transcoding
[01:43:39 CET] <CoJaBo> ..I don't think it can speed up without doing so, can it?
[01:43:59 CET] <furq> you can change framerate without reencoding
[01:44:24 CET] <CoJaBo> I need to actually drop frames tho; lots of them
[01:45:09 CET] <furq> oh
[01:45:10 CET] <CoJaBo> I'm speeding it up by 240×
[01:45:22 CET] <furq> well if you're reencoding then you can select only keyframes with -vf select
[01:45:59 CET] <furq> -vf select=eq(pict_type\,I)
[01:46:15 CET] <furq> also there is literally no reason to use -preset placebo
[01:46:16 CET] <furq> hence the name
[01:47:56 CET] <CoJaBo> I figured I might as well get that extra 0.007% compression if decompression is taking 99% of the time anyway lol
[01:48:41 CET] <furq> placebo is something like an order of magnitude slower than veryslow for insignificant gains
[01:50:11 CET] <CoJaBo> That's actually still a tiny fraction of the overall time lol; I actually changed it to veryfast when adding the -vf, in case that does go tons faster...
[01:50:37 CET] <furq> i would have thought it would be much faster but i've not done much x264 lossless
[01:50:43 CET] <CoJaBo> I think it's reading the entire input file at startup tho, because it's been stalled completely for a minute or 2 now
[01:51:12 CET] <CoJaBo> I'm doing lossless mostly because I'm not sure the output will be fast enough (I may want to speed it up further)
[01:56:23 CET] <CoJaBo> ..ok, i screwed something up
[02:05:39 CET] <CoJaBo> ..ok, got it right that time, but it's only about 2-4× faster; filter command is  -filter:v "select=eq(pict_type\,I),setpts=PTS/60"
[02:06:18 CET] <CoJaBo> at least i hope its right <_<
[02:18:09 CET] <CoJaBo> well, it didn't crash this time. With any luck, it'll be done by March
[02:21:07 CET] <CoJaBo> furq: Seems to be going slowly, but surely; huge thanks =D
[02:21:25 CET] <CoJaBo> (unless there's any way to make it even faster lol)
[04:15:36 CET] <needmorespeed> What does this value mean? AVCodecContext.thread_type=3
[04:15:56 CET] <needmorespeed> avcodec.h says FF_THREAD_FRAME=1 and FF_THREAD_SLICE=2
[06:09:40 CET] <Abbott> what command would i use to change name001.bmp, name002.bmp,...,name089.bmp into a 30fps video
[06:10:19 CET] <Abbott> I tried ffmpeg `ffmpeg name%d.bmp -vcodec mpeg4 output.mp4
[06:10:54 CET] <furq> -i name%03d.bmp
[06:11:01 CET] <furq> also don't use mpeg4, use libx264
[06:13:29 CET] <klaxa> i think the default is 25 fps, so use ffmpeg -r 30 name%d.bmp [...]
[06:13:37 CET] <klaxa> *+ -i
[06:13:52 CET] <klaxa> so ffmpeg -r 30 -i name%d.bmp -c:v libx264 out.mkv
[06:23:15 CET] <Abbott> that did it. I also muxed in audio with another -i flag and a -c:a flag. I am noticing that the video plays back slower than the original video I extracted the frames from (most likely because the original video has a variable framerate). I tried pumping up the framerate, but that doesn't seem to have an effect on how quickly the frames pass. Is there something I'm not doing right?
[06:24:01 CET] <Abbott> This is the command I have: ffmpeg -i D:\Users\Abbott\Desktop\extractedmp4\name%03d.bmp -i D:\Users\Abbott\Desktop\snap.flac -c:v libx264 -r 60 -pix_fmt yuv420p -c:a aac -strict experimental out.mp4
[06:24:24 CET] <Abbott> i tried pumping it up to 60 but no change
[06:25:27 CET] <Abbott> oh wait adding -framerate 40 at the beginning did it
[06:25:34 CET] <Abbott> sorry I'm probably really frustrating to deal with lol
[06:35:11 CET] <klaxa> heh, it's ok, you found your mistake on your own :)
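To pin down the two rate options in play in this exchange, a sketch using Abbott's frame names (no test input here, so this is illustrative only):

```shell
# -framerate before -i sets how fast the image sequence is read (input side);
# -r after the input would set the output frame rate instead. For a straight
# 30 fps video from the numbered frames:
ffmpeg -framerate 30 -i name%03d.bmp -c:v libx264 -pix_fmt yuv420p out.mp4
```

klaxa's `-r 30` before `-i` works as an input rate for image sequences too; the distinction matters once the input is a real video stream.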
[10:02:22 CET] <cowai> Hi, is there anyone here that can answer this. Can ffmpeg apply one filter to all outputs but also have individual filters (like scale) on each output without using several commands/pipes ?
[10:13:57 CET] <durandal_170> cowai: the ffmpeg tool cannot do that afaik, but it can be done by programming your own tool
[10:14:19 CET] <cowai> Thanks
[10:14:45 CET] <cowai> I want to deinterlace with mcdeint first and then scale each output
[10:15:05 CET] <cowai> doing a pipe is okay, but I want to have as little delay as possible.
[10:27:09 CET] <durandal_170> cowai: actually it can be done
[10:27:47 CET] <cowai> I would appreciate it if you could provide me with an example.
[10:28:02 CET] <durandal_170> use split filter, and than scale on each output than use -map
[10:58:07 CET] <cowai> durandal_170: Would it be possible for you to check this out and make the needed adjustments?
[10:58:07 CET] <cowai> ffmpeg -i - -c:v rawvideo -filter:v "yadif=3:0,mcdeint=1:0,framestep=2" -f nut -|\
[10:58:07 CET] <cowai> ffmpeg -i - -c:a aac -b:a 128k -c:v libx264 -b:v 1M -filter:v "scale=640x480" -f flv rtmp://127.0.0.1/level1 -c:a aac  -b:a 64k -c:v libx264 -b:v 1M -filter:v "scale=320x240" -f flv rtmp://127.0.0.1/level2
[10:58:46 CET] <cowai> piped into the first command is a rawvideo nut output from bmdtools
[11:06:27 CET] <durandal_1707> cowai: ffmpeg -i - -c:v rawvideo -lavfi "yadif=3:0,mcdeint=1:0,framestep=2,split=2[a][b];[a]scale=640x480[A];[b]scale=320x240[B]" -map '[A]' -f flv rtmp://.. -map '[B]' -f flv rtmp://..
[11:07:29 CET] <cowai> Thanks !
[12:06:10 CET] <kudjomensah> Hello
[12:08:29 CET] <durandal_170> hello
[12:08:50 CET] <kudjomensah> Need some help compiling ffmpeg on ubuntu
[12:09:18 CET] <durandal_170> use pastebin or similar
[12:09:34 CET] <durandal_170> what's problem?
[12:12:07 CET] <kudjomensah> followed all the steps without a problem
[12:12:19 CET] <kudjomensah> till I got to libx265
[12:15:32 CET] <kudjomensah> I get "error:  size_t does not name a type
[12:15:32 CET] <kudjomensah>      size_t framesize;
[12:15:55 CET] <durandal_1707> what gcc version?
[12:17:05 CET] <kudjomensah> 4:4.8.2
[12:25:39 CET] <durandal_170> kudjomensah: pastebin console output
[12:51:04 CET] <cowai> is there any comparable alternative to mcdeint in ffmpeg?
[12:51:29 CET] <cowai> with the same quality but faste
[12:51:30 CET] <cowai> r
[12:54:39 CET] <durandal_1707> cowai: there is nnedi, but its slow too
[12:55:20 CET] <durandal_1707> and there is soon bwdif to combine yadif and w3fdif
[12:55:57 CET] <cowai> interesting
[12:56:42 CET] <pavel_> Hello, how can I launch an rtsp stream that listens for incoming connections?
[12:56:50 CET] <pavel_> I try avformat_alloc_output_context2(&m_avOutputContext, NULL, "rtsp", "rtsp://127.0.0.1:30010/live.sdp");
[12:57:13 CET] <pavel_> maybe need some flag?
[13:27:02 CET] <cowai> I have problems with plain yadif where important things like mouths/teeth and eyelids do not look good. yadif + mcdeint makes it much better, but with an i7 I can barely keep up.
[13:27:02 CET] <cowai> durandal_1707, in my test w3fdif was much much better quality than yadif
[13:27:09 CET] <cowai> is that normal?
[13:33:31 CET] <durandal_1707> cowai: there are cases where w3fdif has artefacts
[13:33:43 CET] <durandal_1707> high motion stuff
[13:33:56 CET] <durandal_1707> and static pixels
[13:34:29 CET] <cowai> would I see it more in high motion scenes?
[13:34:37 CET] <durandal_1707> bwdif should help here, i will push it soon to upstream
[13:35:17 CET] <durandal_1707> cowai: where motion is really high, like moving trees just in front of you
[13:36:05 CET] <cowai> our channel is mainly static talk shows though.
[13:36:22 CET] <cowai> But thanks for the heads up, I will test with a high motion scene and check
[13:36:39 CET] <cowai> I have yadif in production now, should I just wait for bwdiff?
[13:36:44 CET] <cowai> *bwdif
[13:37:14 CET] <cowai> how will the performance be with bwdif ?
[13:37:15 CET] <durandal_1707> currently bwdif is 2 times slower than yadif because lack of SIMD
[13:38:52 CET] <cowai> so about the same as w3fdif?
[13:42:57 CET] <durandal_1707> w3fdif have SIMD
[13:43:36 CET] <durandal_1707> and is actually faster than yadif=1
[13:44:10 CET] <J_Darnley> Is that double rate though?
[13:44:45 CET] <durandal_1707> w3fdif doesn't have a send-frame mode
[13:45:12 CET] <durandal_1707> so yes, double rate
[13:46:50 CET] <cowai> Do I need to care about double rate in my case durandal_1707?
[13:47:00 CET] <cowai> as long as I set the output rate?
[13:47:18 CET] <durandal_1707> actually bwdif is not 2x slower than yadif, with broadcast sample i have it is 3.53 vs 4.94 realtime
[13:48:04 CET] <cowai> nice
[13:48:36 CET] <durandal_1707> cowai: if you are only interested in yadif=0 mode, then w3fdif is still slower because it does more calculations
[13:49:53 CET] <durandal_1707> bwdif has both modes, but its default is bwdif=1, compared to yadif whose default is yadif=0
[13:50:56 CET] <durandal_1707> so fetch latest ffmpeg, and compare bwdif=0 with yadif=0
[13:51:25 CET] <cowai> my input is 25fps, but what I need to output is 30fps. Could double rate actually help to make the motion more smooth, or would it be the same?
[13:52:33 CET] <durandal_1707> fps filter just drop/duplicate frames, so it depends...
[13:53:18 CET] <durandal_1707> i guess in high motion scenes it could theoretically help
[13:55:00 CET] <durandal_1707> cowai: what ffmpeg version you are using?
[13:56:27 CET] <cowai> "7:3.0.0+git~trusty " from ppa:mc3man/trusty-media
[13:57:02 CET] <cowai> I will try to build latest
[13:57:04 CET] <durandal_1707> that version should have w3fdif simd
[14:38:45 CET] <kudjomensah> http://pastebin.com/fdsNHXsK
[14:39:57 CET] <J_Darnley> > error when building x265
[14:40:05 CET] <J_Darnley> What do you want us to do about it?
[14:42:54 CET] <kudjomensah> sorry it took me so long
[14:43:52 CET] <kudjomensah> want to know if I have to fix it before I continue
[14:44:31 CET] <J_Darnley> You need to fix the problem before x265 will compile, yes
[14:44:31 CET] <kudjomensah> Or I can ignore it
[14:44:56 CET] <J_Darnley> Are you ultimately trying to build ffmpeg?
[14:45:04 CET] <kudjomensah> yes
[14:45:20 CET] <J_Darnley> x265 is an optional library you can enable
[14:45:30 CET] <J_Darnley> if you don't need it, don't enable it.
[14:46:01 CET] <kudjomensah> ok
[14:48:12 CET] <kudjomensah> will let you know how it goes
[14:48:17 CET] <kudjomensah> Thanks
[14:53:59 CET] <oldcode> is it possible for swr_alloc_set_opts to fail if the input and output formats are the same?
[14:58:08 CET] <oldcode> nevermind, the problem might be something else
[14:59:20 CET] <oldcode> yeah, it was an int64 thing
[15:05:38 CET] <cowai> I will try to build latest
[15:05:48 CET] <cowai> oops
[15:05:55 CET] <cowai> alt-tab fail
[15:23:41 CET] <cowai> alright I have the latest git now. durandal_1707
[15:23:50 CET] <cowai> Are there any options I should feed bwdif?
[15:24:03 CET] <cowai> my source is top field first
[15:29:10 CET] <durandal_1707> cowai: only bwdif=0 if you want same mode as yadif default
[15:29:39 CET] <cowai> what if I want to set the field order?
[15:35:51 CET] <durandal_1707> cowai: ffmpeg -h filter=bwdif
[15:36:07 CET] <durandal_1707> bwdif=parity=tff/bff/auto
[15:36:07 CET] <cowai> ah :)
[15:37:22 CET] <cowai> yeah, thanks
[15:37:24 CET] <cowai> :)
[15:37:39 CET] <cowai> when exactly do I want parity to be field?
[15:37:54 CET] <cowai> is when I want to make 25i -> 50p ?
[15:40:45 CET] <durandal_1707> cowai: yes
[15:41:22 CET] <cowai> btw, bwdif looks very nice
[15:41:37 CET] <durandal_1707> i mean the mode option, send_field; the parity option is something else
[15:41:40 CET] <cowai> and more "stable" overall in my test clips
[15:42:07 CET] <cowai> ok
[15:45:46 CET] <cowai> ah yeah, parity is field order, right.
[15:45:57 CET] <cowai> I meant mode.
[17:11:02 CET] <stu0> Hi all, I have a question about using pipes for input sources to ffmpeg. I'm writing encoded (h264 and aac) streams from an android app and writing to a pipe to which a CLI (arm) ffmpeg is listening, but ffmpeg is stalling on the first input.  I've successfully streamed to youtube with this approach when i used a null audio src and encoded it to aac within ffmpeg, but with two pipes it doesn't seem happy.  Is this approach possible in 
[17:11:09 CET] <stu0> if this is allowed... : http://stackoverflow.com/questions/35469445/how-can-i-run-command-line-ffmpeg-and-accept-multiple-pipes-video-and-audio-wi
[17:12:13 CET] <stu0> My current theory is maybe I need to uncouple the different streams and start writing the video without audio until ffmpeg processes and starts looking for audio
[17:42:29 CET] <mrph> is there an ffmpeg lib or a thirdparty lib which could be coupled with ffmpeg that would allow me to embed annotations into a video output? By annotations I mean lines and shapes drawn by a user as the video plays as well as pause and scrub annotations.
[17:43:10 CET] <J_Darnley> Use subtitles?
[17:43:58 CET] <mrph> those are just text though, I need to animate a line being drawn for example
[17:44:08 CET] <J_Darnley> use ass subs then
[17:44:24 CET] <J_Darnley> that'll draw various shapes
[17:45:47 CET] <mrph> oh really? that would be awesome, do you have a link you can point me to with examples?
[17:46:01 CET] <J_Darnley> No, not really
[17:46:17 CET] <J_Darnley> Aegisub might have a decent tutorial
[17:46:20 CET] <faLUCE> Hello, how can I create a video with animatedpicture.gif + audio.mp3 ? I tried :   ffmpeg -i animatedpicture.gif -f image2 -i audio.mp3 out.avi,  but the resulting video is not viewable. I use ffmpeg 2.4.3
[17:46:56 CET] <faLUCE> J_Darnley: ?????
[17:46:59 CET] <mrph> alright. I'll see what I can find. what about simulating the user scrubbing.
[17:47:09 CET] <J_Darnley> What is "not viewable"?
[17:47:13 CET] <faLUCE> J_Darnley: I just pasted the exact command
[17:47:14 CET] <J_Darnley> What player?
[17:47:17 CET] <faLUCE> J_Darnley: vlc
[17:47:24 CET] <J_Darnley> But nothing of the output
[17:47:51 CET] <faLUCE> J_Darnley: the output of what?
[17:48:05 CET] <J_Darnley> THE TEXT THAT FFMPEG PRINTS TO YOUR TERMINAL
[17:48:32 CET] <bencoh> ouch
[17:48:49 CET] <J_Darnley> mrph: what does "user scrubbing" mean?
[17:48:52 CET] <drv> you shouldn't need to do '-f image2'
[17:49:37 CET] <faLUCE> J_Darnley: http://pastie.org/10727631
[17:50:15 CET] <J_Darnley> ha ha drv is right.
[17:50:28 CET] <J_Darnley> you are telling ffmpeg that audio.mp3 is an image file
[17:50:50 CET] <J_Darnley> as for why vlc can't play that video stream, I don't know.  blame vlc
[17:51:05 CET] <mrph> J_Darnley: We have an app that listens for user interactions with a video and then saves that info so that we can replay the user's actions. So if a user is scrubbing (doing frame-by-frame navigation) we want to record that and then generate a video which replays those actions.
[17:51:41 CET] <faLUCE> drv: J_Darnley: same result if I omit it. I can't hear audio and the image is too big (it is not resized according to the screen size)
[17:52:11 CET] <J_Darnley> Of course you can't hear audio.  There is no audio in the file.
[17:52:41 CET] <J_Darnley> And a player controls how a video is shown.
[17:52:44 CET] <drv> what does the ffmpeg output look like without -f image2?
[17:53:01 CET] <drv> you could copy the mp3 audio directly into the avi so it doesn't get re-encoded, although that shouldn't make a difference
[17:53:04 CET] <faLUCE> how can I add audio, J_Darnley? I used this command:  ffmpeg -i animatedpicture.gif -f image2 -i audio.mp3 out.avi
[17:53:23 CET] <J_Darnley> Try reading what we say.
[17:53:39 CET] <J_Darnley> [Thu 17:50] <J_Darnley> you are telling ffmpeg that audio.mp3 is an image file
[17:53:44 CET] <J_Darnley> [Thu 17:52] <drv> what does the ffmpeg output look like without -f image2?
[17:53:46 CET] <faLUCE> J_Darnley: even if I remove -f image the result is the same
[17:53:51 CET] <faLUCE> J_Darnley: as said before
[17:54:06 CET] <J_Darnley> What?  That ffmpeg thinks the mp3 file is another image?
[17:54:22 CET] <faLUCE> J_Darnley: I don't know.
[17:54:34 CET] <faLUCE> J_Darnley: I just don't hear audio
[17:54:41 CET] <J_Darnley> mrph: save edit lists or something? that doesn't sound like a feature of any video format
[17:55:26 CET] <J_Darnley> Some do support arbitrary data/metadata so you could abuse that
[17:58:29 CET] <faLUCE> any idea?
[18:02:57 CET] <mrph> J_Darnley: say you watch a video of an athlete doing some particular movement. You then scrub around the video and comment on different parts of their technique. You then want to share that with them on youtube or something, but first a video of the scrubbing you did needs to be recreated. that's what I'm trying to produce... hope that made more sense
[18:03:15 CET] <J_Darnley> A little
[18:03:28 CET] <J_Darnley> I must say I know of no existing solution for that
[18:04:00 CET] <J_Darnley> I mean obviously editors will store that somehow
[18:06:14 CET] <J_Darnley> I guess they use a project file rather than trying to stick it into a video file as metadata
[18:06:41 CET] <mrph> yeah I mean right now we do it via the player and a json file which has cues to move the playhead this way and that way but obviously that won't do much in the way of making that content shareable
[18:08:09 CET] <J_Darnley> You could modify ffmpeg to read the json and follow the instructions in there to render the final file
[18:08:39 CET] <J_Darnley> or have it store the json somewhere
[18:08:50 CET] <J_Darnley> or extract the command and store those somewhere
[18:09:00 CET] <mrph> essentially build a custom lib ?
[18:09:03 CET] <J_Darnley> Yes.
[18:09:18 CET] <J_Darnley> If no solution exists you will have to write it yourself
[18:09:25 CET] <J_Darnley> But definitely ask others
[18:09:53 CET] <J_Darnley> I don't try to follow all the horrors people do with video these days.
[18:10:09 CET] <mrph> haha this me asking others
[18:10:35 CET] <J_Darnley> I would suggest also emailing the ffmpeg-users list
[18:11:11 CET] <J_Darnley> Be clear and ask if anyone knows of an exsting solution.
[18:11:33 CET] <ChocolateArmpits> That sounds like it could be better done using screen capture
[18:11:42 CET] <J_Darnley> It probably won't be available through ffmpeg
[18:11:58 CET] <J_Darnley> but people might know of other software that does it
[18:12:07 CET] <J_Darnley> Then all you have to do is adapt that to your needs.
[18:12:13 CET] <techtopia> may i ask why you want to spy on your users mrph :p
[18:12:20 CET] <mrph> mmm but scaling wouldn't work well with screen capture I don't think
[18:12:33 CET] <mrph> we're not spying
[18:12:43 CET] <mrph> checkout upmygame.com
[18:12:55 CET] <mrph> #shameless
[18:13:11 CET] <furq> i'm so glad that this has a picture of an actual sportsman
[18:13:23 CET] <furq> for a minute there i was worried that this would be for ESPORTS
[18:13:33 CET] <techtopia> looks nice :)
[18:13:54 CET] <furq> but yeah that sounds like something that should be done on the player side
[18:14:38 CET] <furq> oh never mind you already said that
[18:15:28 CET] <mrph> thanks
[18:15:39 CET] <mrph> yep. already doing it via the player
[18:19:01 CET] <mrph> thanks J_Darnley. Appreciate the help
[18:31:35 CET] <theFam> oh boy isn't this slow
[18:32:40 CET] <theFam> my ffmpeg converting has been going for 2 hours and so far 28% aka 43 seconds were finished
[18:35:33 CET] <durandal_1707> theFam: what command?
[18:47:38 CET] <theFam> durandal:
[18:48:51 CET] <theFam> ffmpeg -i Comp\ 1.mp4 -c:v libx264 -preset veryslow -refs 3 -threads 2 -c:a copy Tristam\ -\ Once\ Again.mp4
[18:49:19 CET] <ChocolateArmpits> theFam: what is the video format ?
[18:49:30 CET] <Mavrik> huh
[18:49:35 CET] <Mavrik> Are you running that on a potato?
[18:49:38 CET] <theFam> ChocolateArmpits: input is 1080p
[18:49:47 CET] <furq> -threads 2?
[18:49:54 CET] <theFam> h264 too
[18:50:13 CET] <Mavrik> Aren't you the guy that's running ffmpeg without ASM or was that someone else? O.o
[18:50:19 CET] <theFam> ignore the first 2 lines
[18:50:20 CET] <furq> no that's someone else
[18:50:24 CET] <theFam> r/qutebrowser.py", line 153 in main
[18:50:26 CET] <theFam>   File "/usr/bin/qutebrowser", line 9 in <module>
[18:50:26 CET] <furq> i think this might be the guy with the atom
[18:50:27 CET] <theFam> frame= 1533 fps=0.3 q=29.0 size=    7281kB time=00:00:51.09 bitrate=1167.4kbits/s
[18:50:44 CET] <theFam> fps=0.3
[18:50:46 CET] <theFam> ;_;
[18:50:58 CET] <furq> if it's a dualcore atom then you should be using -threads 3, although that won't make a huge difference
[18:51:01 CET] <ChocolateArmpits> theFam: change your preset to at least superfast and set the bitrate
[18:51:17 CET] <furq> or just don't use -threads at all and let it autodetect
[18:51:33 CET] <theFam> it's too laaate
[18:51:37 CET] <ChocolateArmpits> there is no real point for anything below medium when rendering hd video
[18:51:41 CET] <ChocolateArmpits> You'll save time
[18:51:48 CET] <theFam> i can't risk giving up 51 seconds of progress
[18:51:48 CET] <ChocolateArmpits> or is it just that short?
[18:52:04 CET] <theFam> it's about 3 mins
[18:52:26 CET] <theFam> it'll probably render when I am asleep
[18:52:33 CET] <theFam> this is an arm tablet btw
[18:52:40 CET] <theFam> I have no other device ;_;
[18:54:22 CET] <Mavrik> Those have HW encoders for a reason.
[18:54:33 CET] <Mavrik> But yeah, that progress sounds about right.
[18:54:50 CET] <furq> i don't use veryslow on an i7
[18:55:00 CET] <furq> using it on an arm tablet sounds like a fun day out
[19:06:01 CET] <Pajeet> it's not fun
[19:06:06 CET] <Pajeet> it's never been fun
[19:15:36 CET] <furq> no but you'll have plenty of time to go somewhere nice while you wait for it to finish
[19:16:05 CET] <Pajeet> haha
[19:16:14 CET] <Pajeet> yay 01:04:00
[20:28:43 CET] <faLUCE> Hello, I created a video from a mp3 file and an animated gif, with this command:  ffmpeg -i framm.wav -i anim.gif -acodec mp2 output.avi. Unfortunately, the animated gif is delayed (the second frame doesn't start at its real value, but is delayed by about 5 seconds) ... where can the problem be? I use ffmpeg 2.4.3
[20:41:00 CET] <pzich> faLUCE: might be worth trying a newer version? this was opened against 2.3.5 (14 months ago) and closed 11 months ago, not sure if that means 2.4.3 has the fix: https://trac.ffmpeg.org/ticket/4235
[20:43:15 CET] <pzich> the example looks like it's working correctly in 2.7.2 for me
[20:45:34 CET] <faLUCE> pzich: is there an alternative command for linux? I don't want to install a new ffmpeg from scratch by compiling
[20:48:41 CET] <jtdesigns01> Are there any optimizations I could do for this command to keep file size down without losing quality and still use mpeg2 video?
[20:48:41 CET] <jtdesigns01> ffmpeg -i 20.mkv -s 320x240 -vcodec mpeg2video -b 1000k -ab 128k -ac 2 -ar 44100 -acodec mp3 "F:20.mpg"
[20:49:29 CET] <DHE> 2-pass mode might help
[20:49:47 CET] <jtdesigns01> how do I do that?
[20:51:38 CET] <DHE> run ffmpeg twice, once with `-pass 1` and once with `-pass 2`
[20:53:08 CET] <jtdesigns01> hmm. ok
[20:53:13 CET] <jtdesigns01> will try that
[20:56:14 CET] <jtdesigns01> which stream should I run it on? video?
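DHE's two-pass suggestion, spelled out against jtdesigns01's own command (a sketch, not tested against that file; the -pass flags apply to the video encoder, and audio stays single-pass):

```shell
# Pass 1: analysis only; discard the output and skip audio.
ffmpeg -y -i 20.mkv -s 320x240 -vcodec mpeg2video -b:v 1000k \
       -pass 1 -an -f null /dev/null
# Pass 2: the real encode, using the stats file written by pass 1.
ffmpeg -i 20.mkv -s 320x240 -vcodec mpeg2video -b:v 1000k \
       -pass 2 -acodec mp3 -ab 128k -ac 2 -ar 44100 "F:20.mpg"
```

Note that two-pass does not shrink the file below the -b:v budget; it only distributes those bits better across the video.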
[22:59:21 CET] <durka42> hi, I'm looking to extract the timestamp of each frame of a video as accurately as possible
[22:59:31 CET] <durka42> I found the -debug_ts option but I don't know which fields I should be looking at
[23:07:28 CET] <kepstin> durka42: you probably want to use ffprobe with the -show_frames option
[23:07:57 CET] <kepstin> then depending on what you are looking for, maybe the pkt_pts or pkt_pts_time fields are appropriate
[23:08:31 CET] <durka42> kepstin: oh nice
[23:08:38 CET] <durka42> kepstin: yeah so is there docs somewhere on what those field names stand for?
[23:08:59 CET] <durka42> best_effort_timestamp_time also sounds promising :)
[23:09:00 CET] <kepstin> durka42: to interpret the pts values, you'll also want "show_streams" so you can get the time_base.
[23:09:14 CET] <durka42> the units of pts are seconds, right?
[23:09:19 CET] <kepstin> pts is 'presentation time stamp', i.e. basically the time at which the frame should be shown
[23:09:38 CET] <kepstin> pts is an integer, you multiply by the time_base fraction to get seconds.
[23:09:52 CET] <kepstin> the pkt_pts_time field is premultiplied and is in seconds, I think? maybe ms
[23:10:04 CET] <kepstin> looks like seconds
[23:10:06 CET] <durka42> oh so it looks like show_frames already looked at the time_base and multiplied for me
[23:10:15 CET] <styler2go> Heya everyone. I am trying to make some live transcoding in a rtmp server. My current commands knocks out after some seconds with the error av_interleaved_write_frame(): Operation not permitted
[23:10:15 CET] <durka42> yeah it matches up with what I would expect given the length of the video
[23:10:24 CET] <durka42> and what is "best effort"? is it smoothed or something?
[23:11:17 CET] <kepstin> durka42: it's equal to pts for most formats, but if the container doesn't store proper timestamps it can be basically made up by ffmpeg, iirc.
[23:11:21 CET] <kepstin> don't  know exactly how it works
[23:11:27 CET] <styler2go> it works if i use -f flv but if i use -f h264 it dies after a few ticks
[23:12:00 CET] <durka42> kepstin: I see
[23:12:04 CET] <durka42> kepstin: this is H.264
[23:13:53 CET] <durka42> kepstin: thanks for the tips!
[23:13:55 CET] <kepstin> durka42: I think it depends more on the container/stream than the codec
[23:14:03 CET] <durka42> MP4
[23:15:09 CET] <kepstin> i think in general, for most uses, you probably want to use pts.
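A compact ffprobe sketch of what kepstin describes (field names as discussed above; `input.mp4` is a placeholder, and the exact output shape can vary by ffprobe version):

```shell
# Per-frame presentation timestamps, already multiplied into seconds:
ffprobe -v error -select_streams v:0 \
        -show_entries frame=pkt_pts_time -of csv=p=0 input.mp4
# Raw integer pts needs the stream time_base to convert to seconds:
ffprobe -v error -select_streams v:0 \
        -show_entries stream=time_base -of csv=p=0 input.mp4
```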
[23:16:33 CET] <styler2go> does -re set ffmpeg into realtime?
[23:17:13 CET] <kepstin> -re causes ffmpeg to wait (sleep basically) after decoding each frame until the time when the next frame should be shown
[23:17:28 CET] <styler2go> sounds like realtime :P
[23:17:41 CET] <kepstin> so slows down input, particularly for stuff like reading from files, to approximately realtime
[23:17:57 CET] <kepstin> obviously it can't speed up output if you have an encoder slower than realtime ;)
[23:18:01 CET] <styler2go> does this work with -f h264?
[23:18:57 CET] <furq> use -f flv
[23:19:05 CET] <styler2go> when i use -f h264 it has around 64 fps, when i use -f flv i have around 34 (live is 30)
[23:19:10 CET] <kepstin> styler2go: as input or output?
[23:19:14 CET] <styler2go> output
[23:19:35 CET] <kepstin> are you streaming from a local file to rtmp?
[23:19:50 CET] <styler2go> i am trying to set up my rtmp server
[23:19:57 CET] <styler2go> to live transcode to lower quality
[23:20:34 CET] <kepstin> Then you don't want -re - you want it to output frames as fast as it receives them to the output.
[23:20:39 CET] <furq> i doubt that -re will work for raw h264 and i don't think rtmp even supports it
[23:20:42 CET] <styler2go> http://pastie.org/10728012 that's my current full command like
[23:21:19 CET] <furq> -f h264 definitely won't work if you want audio
[23:21:30 CET] <furq> also -b:v doesn't do anything if you're copying streams
[23:21:34 CET] <kepstin> styler2go: also, you're using -vcodec copy, so it's not actually transcoding
[23:21:34 CET] <furq> nor does -s
[23:21:41 CET] <styler2go> humm >.<
[23:21:51 CET] <styler2go> I love ffmpeg but it's so hard to use :(
[23:21:57 CET] <styler2go> (for me)
[23:22:12 CET] <furq> and yeah i would have thought -re would be useless there
[23:22:22 CET] <styler2go> what would i need to do if i want to have it at 2000 kbit/s instead of 4000 which is the live input
[23:22:27 CET] <furq> you need to transcode it
[23:22:33 CET] <kepstin> since it's already reading from a realtime rtmp stream, adding -re would only slow it down and desync it
[23:22:42 CET] <furq> replace -vcodec copy with -c:v libx264
[23:22:50 CET] <styler2go> oh and i also want to go down form 60fps to 30fps :o
[23:22:54 CET] <styler2go> is that even possible?
[23:23:10 CET] <furq> -vf fps=fps=30
[23:23:32 CET] <styler2go> like that? ffmpeg -i rtmp://localhost:1935/live -b:v 2000 -c:v libx264 -vf fps=fps=30 -acodec copy -s 1280x720 -f flv rtmp://localhost:1935/twitch
[23:24:00 CET] <furq> sure
[23:24:30 CET] <furq> except -b:v 2000k
[23:25:07 CET] <furq> you might also want to tweak x264's -preset setting depending on how much cpu usage you can/want to use
[23:25:34 CET] <styler2go> it's already at 60% cpu usage with current settings
[23:25:36 CET] <kepstin> yeah, 2000 bits per second is  probably not enough for 720p video ;)
[23:26:08 CET] <styler2go> i should lower resolution, ture
[23:26:41 CET] <furq> read what he said again
[23:26:48 CET] <styler2go> woups
[23:26:54 CET] <styler2go> sorry i am sleepy already :D
[23:27:07 CET] <furq> 2000kbps is probably a bit low but it depends what you're streaming
[23:27:38 CET] <styler2go> Well, i can't stream at more than that because of twitch, that's why i want to use my own rtmp server to stream at better quality somewhere else
[23:27:44 CET] <styler2go> but also reach out to twitch platform
[23:29:08 CET] <styler2go> thanks a lot people. u saved my evening :)
[23:44:41 CET] <podman> anyone have experience using FFMPEG with the GPU instances on AWS? Is there anything special that needs to be done to take advantage of the hardware encoders?
[23:49:36 CET] <kepstin> podman: start with https://trac.ffmpeg.org/wiki/HWAccelIntro#NVENC
[23:50:06 CET] <kepstin> podman: probably need a custom ffmpeg build with nvenc enabled, and you'll need to switch some parameters to use the encoder
[23:52:07 CET] <podman> kepstin: thanks
[23:53:10 CET] <wintershade> hey guys! a quick question. is it possible to define a target filesize, rather than bitrate? using libvpx as a video codec and libfdk_aac as audio when I need.
[23:53:26 CET] <kepstin> wintershade: target filesize and target bitrate are equivalent
[23:53:32 CET] <kepstin> filesize = bitrate * length
[23:54:01 CET] <wintershade> yes, but sometimes I don't know the length until I open the video, and it's not too convenient for a batch
[23:54:31 CET] <wintershade> unless there's a formula which I can insert into the command line...
[23:54:39 CET] <kepstin> if you're scripting it, you could use something like ffprobe to read the video length before starting the encode
[23:56:28 CET] <wintershade> ok, is there an easy way to get the duration in seconds?
[23:56:55 CET] <wintershade> like ffprobe -someswitch <input video>, which yields <some integer>?
[23:56:59 CET] <kepstin> by using ffprobe and parsing it from the output (ffprobe is designed to have machine-readable output)
[23:57:42 CET] <wintershade> kepstin: I think I'm a bit lost here...
[23:59:33 CET] <kepstin> if you run 'ffprobe -show_format file.whatever', it'll print a line like duration=1234.5678 on stdout which is the duration in seconds. A script can read the output, look for that, and use it to calculate the bitrate to use in the encode.
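kepstin's suggestion as a small script fragment: read the duration with ffprobe, then derive the video bitrate from the target size, since filesize = bitrate * length. The size and audio numbers below are made-up placeholders, and the ffprobe call is shown commented out so the arithmetic stands alone:

```shell
# Normally the duration would come from:
#   duration=$(ffprobe -v error -show_entries format=duration -of csv=p=0 in.webm)
# ffprobe prints a fractional value; truncate it (e.g. ${duration%.*})
# before using it in shell integer arithmetic.
duration=120      # seconds (placeholder instead of a real ffprobe call)
size_mb=100       # target file size in MB
audio_kbps=128    # audio bitrate budget
# 1 MB = 8192 kbit; subtract the audio budget to get the video bitrate.
video_kbps=$(( size_mb * 8192 / duration - audio_kbps ))
echo "$video_kbps"   # prints 6698
```

The result would then be fed to the encoder as `-b:v ${video_kbps}k` (plus `-maxrate`/`-bufsize` if the size limit must be strict).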
[00:00:00 CET] --- Fri Feb 19 2016

