[Ffmpeg-devel-irc] ffmpeg.log.20130429

burek burek021 at gmail.com
Tue Apr 30 02:05:01 CEST 2013


[05:14] <Keshl> Let's say I'm absolutely crazy and wanna play 1080p at 120 FPS. What do you guys recommend I use as a codec, owo? Even on a system with a processor running at 4 GHz and an overclocked 670 GTX, it's not able to keep up.
[05:32] <retard> i'm not really sure how helpful this is, but maybe you could try looking at how 1080p60 3d video gets decoded and use some tricks from that?
[05:34] <retard> considering how modern 3d tvs with active shutter glasses work i'd assume that would be a viable starting point anyway
[05:35] <Mavrik> that's not 60 fps though :
[05:35] <Mavrik> Keshl, since you're really going beyond the design parameters of decoders
[05:35] <Mavrik> plug in a profiler and see where the bottleneck is
[05:36] <Keshl> retard: Good idea, but the thing is, my laptop plays it fine.
[05:36] <Keshl> in 3D I mean, at 120 FPS.
[05:36] <Keshl> But that's because it's decoding from two video files at once.
[05:36] <Keshl> Thus can use two cores.
[05:36] <Keshl> When I'm doing a 2D video at 120, it's limited to a single core, and I'm pretty sure that's my issue xwx
[05:37] <Keshl> Mavrik: How do I do that? D:
[05:37] <Mavrik> nevermind...
[05:38] <retard> i don't know much about 3d outside a certain nintendo console, but couldn't you do the same thing and split the video into two 1080p60 files
[05:38] <retard> (virtual boy 4 lyfe)
[05:38] <Keshl> No, owo.
[05:38] <drv> just as a quick hack, decoding a 1080p h.264 video and throwing away the output with command-line ffmpeg, i can get ~130 fps on my crappy old 3 GHz core2duo
[05:39] <drv> so it should be doable...
[05:39] <Keshl> Not with the current state of any decoders out there.
[05:39] <retard> surely a setup capable of playing back 1080p60 3d video would be able to do what you want as long as you simply drop the glasses?
[05:40] <Keshl> No.
[05:41] <retard> oh
[05:41] <Keshl> When you're playing 3D video, you do one of two things: Either you have a single video file where each frame is twice as wide as the screen, and the left half is sent to your left eye and the right half to your right (in which case you have the same issue I have now and it won't work), or you have two separate video files and each gets sent to an individual eye, but since they're separate files you can decode both in parallel (which
[05:41] <Keshl> is what I do), but this can't be done on strictly 2D 120 FPS video.
[05:41] <Mavrik> most 1080p60 3d videos are 60fps anyway with 30fps "real" framerate
[05:41] <Mavrik> or they're 120fps and interlaced
[05:41] <Keshl> ... Oh wait hm.
[05:41] <Mavrik> to lower bandwidth requirements
[05:42] <Mavrik> other option is that you have a normal 1080p60 video that's anamorphically squished side-by-side
[05:42] <Keshl> It's hackish, but you might've actually had a point: if I just make both videos, but tell ffmpeg to skip every other frame on one and do the same on the other with an offset of 1, I should be able to play it in 3D mode just without glasses and essentially have 120 FPS with two threads..
[05:42] <retard> yeah, that was my reasoning
[05:42] <Keshl> Could work, owo
[05:43] <Keshl> Now if only there was a way to do that without starting 3D mode (or requiring it to exist on everyone's systems..)
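
A sketch of the frame-splitting idea above, assuming a hypothetical 120 FPS source called input120.mp4 and an ffmpeg with the select filter; each command keeps every other frame, the second offset by one frame (audio left out for simplicity):

    # even frames (0, 2, 4, ...) become the "left eye" 60 FPS stream
    ffmpeg -i input120.mp4 -an -vf "select='not(mod(n\,2))'" -vsync vfr left60.mp4
    # odd frames (1, 3, 5, ...) become the "right eye" 60 FPS stream
    ffmpeg -i input120.mp4 -an -vf "select='mod(n\,2)'" -vsync vfr right60.mp4

Each output then carries half of the frames, and played side by side in a 3D-style player they interleave back into the full 120 FPS.
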
[05:47] <retard> i don't have any display hardware that accepts more than 60hz anyway
[05:48] <retard> a friend of mine who works in my country's national broadcasting company told me about some completely amazing 200 Hz display technology he saw demoed
[05:49] <zap0> no CRTs?
[05:49] <retard> not anymore
[05:49] <retard> i was a holdout, but now i pay for my own electricity
[05:49] <zap0> :)
[05:50] <retard> :S
[05:51] <zap0> i'm building some 1000fps display hardware.
[05:52] <zap0> it's for high speed photography stuff.
[06:04] <Keshl> Shiny, owo
[06:18] <retard> what manner of creature is supposed to be looking at this display
[06:19] <Keshl> Humans, owo.
[06:19] <Keshl> See, 24 FPS is the bare minimum for most human brains to switch from "this is a series of pictures" to "this is motion" mode.
[06:19] <Keshl> Though at 24 FPS you can still see it's fake.
[06:20] <Keshl> In reality, you see much faster.
[06:20] <Keshl> When you're going over 100 FPS, it's not so much that you're seeing every frame, but that you're producing legit motion blur.
[06:20] <retard> okay, but are there humans who can tell the difference between 500 and 1000 fps
[06:20] <Keshl> It usually comes off better than blur added to 60 FPS footage.
[06:20] <Keshl> Yes, of course there are.
[06:20] <Keshl> Everyone can.
[06:20] <retard> i doubt it
[06:20] <Keshl> Like I said, at anything over 100 you're trying to make blur.
[06:21] <Keshl> You won't see every frame exactly as it was intended to be seen, but rather you'll see real blur, which is the goal.
[06:24] <retard> "Showscans research indicates that an average of 66.7 frames per second is the upper limit of what the human eye can perceive, and higher frame rates have no further effect, except in reducing flicker."
[06:25] <Keshl> Studies show those studies are wrong.
[06:25] <Keshl> The difference between 60 and 120 FPS is very pronounced to me.
[06:25] <Keshl> To the point that looking at 60 FPS actually bothers me because the lag's so noticeable, and I only use it when watching 3D video. >w>
[06:25] <retard> yeah, and from what i hear the difference between 100 and 200fps is very noticeable
[06:25] <retard> but
[06:26] <retard> i very much doubt humans can tell the difference past some limit not too far from 200
[06:26] <retard> being able to tell the difference between 60 and 120 and 500 and 1000 are very different things
[06:26] <Keshl> Well, we'll just have to wait and see when I get 240 Hz TVs near me <w<
[06:28] <retard> i'll be holding my breath starting now
[06:28] <Keshl> owo...
[06:28] Action: Keshl pokes retard while he turns blue.
[06:35] <Mavrik> retard, I think eye movement recognition is about 70-80fps depending on the person
[06:35] <Mavrik> if I remember my theory correctly :)
[06:36] <retard> 1000 fps seems like severe overkill on the display side anyway
[08:24] <zap0> turn on an LED for 1/1000th of a second and then tell me if you can see it blink or not.
[10:34] <Hans_Henrik> yeah i could, but im not sure about the timing accuracy
[10:34] <Hans_Henrik> (using my fingers)
[14:18] <xlinkz0> can i cut from a specific timestamp?
[14:18] <retard> yes
[14:18] <xlinkz0> in stream->timebase units
[14:18] <retard> i don't know what that means
[14:19] <xlinkz0> i know i can seek and cut from timestamps like hh:mm:ss
[14:19] <xlinkz0> but i have the timestamp where 90000 means one second
[14:20] <xlinkz0> so cut from 0 to 90000 means cut from start to the first second
[14:20] <retard> i use http://pastebin.com/1u96HKuz to cut at keyframes without audio desynch
[14:20] <xlinkz0> can i do that?
[14:20] <xlinkz0> i don't have audio
[14:20] <retard> the point isn't the audio
[14:20] <retard> but that you can specify fractional times
[14:23] <xlinkz0> i tried ffmpeg -i 2.mp4 -ss 226844 -c copy out.mp4
[14:25] <retard> that will seek to 226844 seconds into the file
[14:25] <retard> (or actually the closest keyframe)
[14:58] <xlinkz0> retard: thanks i got what you were saying, basically you can seek to 2.12345 seconds
[14:59] <retard> xlinkz0: yeah, but note that with using -c copy you will be restricted to cutting at keyframes anyway
[14:59] <xlinkz0> isn't that determined by the position of ss?
[15:00] <xlinkz0> after the -i option it's accurate, before is to keyframe only?
[15:00] <xlinkz0> at least that's what i get from http://ffmpeg.org/trac/ffmpeg/wiki/Seeking%20with%20FFmpeg
[15:02] <retard> not cutting at keyframes while using -c copy doesn't make any sense
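
To tie the two halves of the answer together: with a 90 kHz stream timebase, dividing the tick count by 90000 gives seconds, and -ss accepts fractional seconds directly. A sketch using the values from the earlier command:

    # 226844 ticks / 90000 ticks per second = ~2.520 s
    ffmpeg -ss 2.520 -i 2.mp4 -c copy out.mp4
    # with -c copy the cut still snaps to the nearest preceding keyframe
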
[15:03] <xlinkz0> i'll revise that later, right now i'm trying to concatenate files
[15:03] <xlinkz0> and it's going horribly :(
[15:03] <xlinkz0> i have a file of length 20s and another of length 14s
[15:04] <xlinkz0> i do ffmpeg -y -f concat -i cat.txt -c copy out.mp4
[15:04] <xlinkz0> cat.txt : http://codepad.org/cxZTiw26
[15:05] <retard> same codec and parameters?
[15:05] <xlinkz0> and i get this : http://codepad.org/ERPEN3z6
[15:05] <xlinkz0> yes, same codec same container
[15:09] <retard> give the recipe with intermediate .ts files a try
[15:09] <xlinkz0> i don't know what those are
[15:10] <retard> http://ffmpeg.org/trac/ffmpeg/wiki/How%20to%20concatenate%20%28join%2C%20merge%29%20media%20files
[15:12] <xlinkz0> can't it automatically get the stream duration? :\
[15:13] <xlinkz0> seems such a trivial and ordinary thing
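
For reference, the intermediate-.ts recipe from the wiki page linked above boils down to something like this (a sketch; 2.mp4 is the file used earlier, 1.mp4 is a hypothetical second input):

    ffmpeg -i 1.mp4 -c copy -bsf:v h264_mp4toannexb part1.ts
    ffmpeg -i 2.mp4 -c copy -bsf:v h264_mp4toannexb part2.ts
    # join the transport streams, then rewrap as mp4
    # (drop -bsf:a aac_adtstoasc if the inputs have no audio)
    ffmpeg -i "concat:part1.ts|part2.ts" -c copy -bsf:a aac_adtstoasc out.mp4
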
[15:16] <lentferj> I have a problem here regarding aac in mpeg containers
[15:16] <lentferj> it seems I can only put an aac audio stream into an mpegts (-f mpegts) container
[15:17] <lentferj> if I use anything else that, from my understanding, should give me mpeg-2 PS (dvd, vob),
[15:17] <xlinkz0> isn't Video: h264 (High) (avc1 / 0x31637661) raw video?
[15:17] <lentferj> the audio stream is not correctly detected by ffprobe (or any player)
[15:20] <lentferj> as far as I understood, aac should be ok in "any" mpeg-2 container... not just mpeg-ts
[15:21] <lentferj> problem is that mpeg-ts gives me so much overhead that using aac over mp3 becomes pointless
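
A pair of commands illustrating the difference lentferj is running into (a sketch; the input name is hypothetical, and older builds may need -strict experimental for the native aac encoder):

    # AAC inside an MPEG transport stream: well supported
    ffmpeg -i input.mkv -c:v mpeg2video -c:a aac -f mpegts out.ts
    # MPEG-2 program stream (DVD/VOB style): mp2 or ac3 audio is the safe choice
    ffmpeg -i input.mkv -c:v mpeg2video -c:a mp2 -f vob out.vob
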
[17:44] <Diogo> hi, is it possible to loop an input file using ffmpeg?
[17:44] <Diogo> ffmpeg -i filename.mp4 ..... (loop option)
[17:44] <sacarasc> I think you can only do it for images, but there is -loop_input
[17:46] <Diogo> i need to loop a video file to a rtmp server..
[17:46] <Diogo> 24hours
[17:46] <Diogo> what is the best way to do that?
[17:51] <klaxa> Diogo: the documentation states that one should use -loop <number of loops> (http://ffmpeg.org/ffmpeg.html#Advanced-options, look for -loop_input); however, that part is barely documented, i'm doing some tests with it right now
[17:54] <klaxa> -loop does not work as i expected
[17:58] <Diogo> using images it works, but when i pass an mp4 it's not working..
[17:58] <Diogo> Option loop not found.
[17:58] <Diogo> command: ffmpeg -loop 1 -i rod.mp4  -f flv  rtmp://SERVER
[17:59] <klaxa> yes, if specified after the input file it is more or less ignored
[17:59] <klaxa> if specified before the input file it's not recognized at all
[18:01] <klaxa> it looks like it's included in ffplay though :V
[18:11] <klaxa> i'm sorry, to me it appears rather impossible at the moment, i'm quite disappointed
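
One workaround with builds of that era is simply to restart the stream copy whenever the file ends — a sketch, reusing rod.mp4 and the placeholder server URL from the command above:

    # -re paces reading at the native frame rate, as an rtmp server expects
    while true; do
        ffmpeg -re -i rod.mp4 -c copy -f flv rtmp://SERVER
    done

For a gapless loop, a concat list that repeats the same file many times can be fed to -f concat instead, as in the concatenation example earlier in the log.
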
[19:04] <Mista-D> any way to burn libass subtitles for english and french in one pass?
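
Nobody answered in the channel, but one approach is to chain two libass-backed subtitles filters so both tracks get rendered in a single encode (a sketch; the file names are hypothetical):

    ffmpeg -i movie.mkv -vf "subtitles=english.ass,subtitles=french.ass" -c:a copy out.mkv

Unless the two .ass files use different styles or positions, the rendered lines will overlap on screen.
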
[19:08] <RoboJ1M> I have a problem remuxing AVIs with packed B-frames, is there any way to unpack them with avconv?
[19:08] <RoboJ1M> Details here: http://askubuntu.com/questions/287461/using-avconv-when-remuxing-to-mkv-is-there-a-way-to-fix-packed-avi-input-files
[19:10] <sacarasc> RoboJ1M: avconv's channel is #libav
[19:12] <RoboJ1M___> OK, thanks. So why does the command ffmpeg say it's deprecated and it's now avconv?
[19:12] <sacarasc> Because you're using a fork of ffmpeg, and not ffmpeg itself.
[19:13] <RoboJ1M___> Oh right, so it's the distro that's deprecated ffmpeg for the fork, called avconv
[19:13] <Hans_Henrik> why did they deprecate ffmpeg? o.0
[19:14] <iive> not even that, the members of the fork deprecated the other project's binary in their own fork. just to make it a more confusing fork.
[19:14] <Hans_Henrik> it's not like ffmpeg development is going slowly, badly, halting, or anything like that, is it?
[19:15] <smj> Has anyone else been screencapturing, and noticing about a second of delay in audio? Has anyone found a solution for it?
[19:15] <iive> when they forked, the debian maintainer (siretart) switched to libav on his own. Then they (libav) added this message to confuse users on purpose and refuse to change it.
[19:15] <sacarasc> Hans_Henrik: The ffmpeg executable supplied by libav (which is a fork of ffmpeg) is deprecated in favour of their avconv executable.
[19:16] <iive> so it is kind of intentionally misleading.
[19:16] <RoboJ1M> Thanks fflogger.
[19:17] <RoboJ1M> I wonder if the ffmpeg on my pc is the real ffmpeg or a wrapper around avconv with a warning?
[19:17] <Mavrik> RoboJ1M, it's old ffmpeg with a warning
[19:17] <Mavrik> RoboJ1M, probably obsolete as hell
[19:17] <Mavrik> grab a static build
[19:18] <Hans_Henrik> couldn't they for god's sake call it something else at least? ffmpeg-av or something?
[19:18] <RoboJ1M> ffmpeg version 0.8.6-4
[19:18] <Mavrik> no because they wanted to push all people to avconv
[19:18] <Mavrik> RoboJ1M, current is 1.2 :)
[19:19] <Mavrik> RoboJ1M, http://dl.dropboxusercontent.com/u/24633983/ffmpeg/index.html
[19:19] <Mavrik> grab this, use it :)
[19:19] <Mavrik> no install needed, just unpack and run
[19:20] <RoboJ1M> Got it, I'll shove it in a home folder and give it a go
[19:20] <RoboJ1M> I'll link it to /usr/bin/soundpicturemakedifferenter
[19:24] <smj> Is no one using ffmpeg for recording the screen?
[19:26] <Hans_Henrik> smj, idk, but it's supported on X servers at least
[19:26] <Hans_Henrik> hmm nvm, idk any details
[19:26] <RoboJ1M> OK. Well, I grabbed one of the 32-bit builds. Is there any way to unpack B frames when remuxing avi to mkv with ffmpeg? :)
[19:26] <RoboJ1M> I'll shove the details on pastebin
[19:30] <RoboJ1M> http://pastebin.com/Dvb68cgY
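
For the record, later ffmpeg releases added an mpeg4_unpack_bframes bitstream filter that unpacks DivX-style packed B-frames during a stream copy; it is not in the 0.8/1.2-era builds discussed here, so treat this as a version-dependent sketch with hypothetical file names:

    ffmpeg -i input.avi -codec copy -bsf:v mpeg4_unpack_bframes output.mkv
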
[19:47] <Fjorgynn> why do I get fps 5.7? :o
[20:27] <xlinkz0> Fjorgynn: nighttime rtsp?
[21:19] <An_Ony_Moose> would this be the place to ask about compiling libx264? If not, please do tell me where to go instead.
[21:19] <An_Ony_Moose> How can I make libx264 compile with OpenCL support? I've installed OpenCL dev libraries but ./configure still says opencl: no
[21:20] <smj> fuck everything, I've spent at least two days trying to get audio and screen capture to synchronize, with no results
[21:20] <smj> and my monitor is has screen tearing when every single vsync is on
[21:21] <smj> and I'm making typos
[21:24] <smj> An_Ony_Moose, have you tried #x264?
[21:26] <An_Ony_Moose> ooh that exists, ok I assumed it wouldn't for some reason. Thanks smj
[21:27] <smj> you're welcome
[21:29] <smj> I'd like to blame PulseAudio for this, but I haven't been able to get monitor input from ALSA
[21:32] <smj> I prefer something that works and has documentation piled up on the Internet over an abstraction layer that does some things half-assedly
[21:33] <klaxa> smj: https://bbs.archlinux.org/viewtopic.php?id=147852
[21:33] <klaxa> post #4
[21:34] <klaxa> i tend to fix it in post-production though, simply syncing audio and video by hand
[21:35] <smj> thanks, I'll take a look at this
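
The linked Arch thread boils down to capturing X11 video and the PulseAudio default source in one command, roughly like this (a sketch; the geometry, display, and device names are assumptions):

    ffmpeg -f x11grab -video_size 1920x1080 -framerate 30 -i :0.0 \
           -f pulse -i default \
           -c:v libx264 -preset ultrafast -c:a pcm_s16le capture.mkv
    # if the audio still lags, an -itsoffset placed before one of the -i options
    # shifts that input's timestamps to compensate
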
[21:40] <An_Ony_Moose> whee, building an unredistributable copy of ffmpeg x)
[21:56] <iive> smj: is your video card ati and are you using fglrx?
[21:57] <smj> no, NVIDIA and the prop driver
[21:57] <iive> hum...
[22:24] <An_Ony_Moose> what are the practical differences between fdk-aac and faac?
[22:24] <JEEB> faac is one of the two reference implementations, and fdk-aac is the fraunhofer encoder
[22:25] <JEEB> and reference be reference
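
In ffmpeg terms these are the libfaac and libfdk_aac encoders; fdk-aac generally produces better quality at the same bitrate. A minimal sketch (input name hypothetical, and each requires an ffmpeg built against the respective library):

    ffmpeg -i input.wav -c:a libfdk_aac -b:a 128k out-fdk.m4a
    ffmpeg -i input.wav -c:a libfaac    -b:a 128k out-faac.m4a

Building with fdk-aac needs --enable-libfdk-aac together with --enable-nonfree, which is why such a build cannot be redistributed.
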
[22:30] <smj> klaxa, it doesn't work, where should I look for errors?
[22:30] <klaxa> no idea actually, i'm not an alsa guy, i use pulseaudio too
[22:31] <klaxa> you could see if alsamixer shows another virtual soundcard though
[22:51] <blez> someone usinnnnnnnnnnnnnnnnnnnnnnnnn
[22:51] <blez> pf, stupid keyboard
[22:51] <blez> someone using dshow?
[23:36] <Mavrik> gah
[23:36] <Mavrik> compiling ffmpeg for ARM is pissoff annoying
[23:38] <klaxa> i gave up trying to compile mpd for arm
[23:38] <klaxa> just not worth the hassle :|
[23:39] <Mavrik> I managed to do it without NEON support
[23:39] <Mavrik> but with NEON support the code generation just craps out
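
For reference, a bare-bones ARM cross-compile configure with NEON enabled looks something like this — a sketch assuming an arm-linux-gnueabihf toolchain on the PATH; sysroot, cpu, and extra flags are build-specific:

    ./configure \
        --enable-cross-compile \
        --cross-prefix=arm-linux-gnueabihf- \
        --arch=arm --cpu=cortex-a9 --target-os=linux \
        --enable-neon
    make -j4
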
[00:00] --- Tue Apr 30 2013

