[Ffmpeg-devel-irc] ffmpeg.log.20160126
burek
burek021 at gmail.com
Wed Jan 27 02:05:01 CET 2016
[00:01:11 CET] <dorp_> Is there a better alternative to gdigrab? Or I might as well test a different machine? (I'm still uncertain whether the results are to be expected)
[00:01:29 CET] <ricardo_> Hello, I have a video separated into two files: data (.dat) and an index (.dix). Using totem I can open the .dat and see the video, but with other players I can't. How do I open these files with ffmpeg?
[00:02:16 CET] <J_Darnley> dorp_: you could see if dshow has a capture device available on your system
[00:02:34 CET] <J_Darnley> otherwise consider a "better" capture tool
[00:02:46 CET] <J_Darnley> perhaps fraps or obs
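A quick way to check which dshow capture devices are present, per the ffmpeg dshow documentation (the output depends on the installed drivers):

    ffmpeg -list_devices true -f dshow -i dummy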
[00:03:14 CET] <kepstin> yeah, i added the gdigrab filter because it was simple to code, dumb, and works on any windows version
[00:03:33 CET] <kepstin> a better capture driver that e.g. grabs the compositor front buffer can get a lot better performance
[00:03:33 CET] <J_Darnley> Oh, it's your feature, is it?
[00:03:44 CET] <kepstin> not really, i just got it upstream and cleaned up the code a bit :/
[00:03:55 CET] <J_Darnley> ah
[00:04:35 CET] <dorp_> J_Darnley: Thanks for the suggestions, I'm on it
[00:04:47 CET] <Mavrik> I guess interfacing with modern stuff like nVidia / AMD video encoder directly would probably net better results.
[00:04:57 CET] <Mavrik> E.g. the APIs Steam etc. use to stream gameplay.
[00:05:54 CET] <kepstin> in theory, the fancy on-card encoders on nvidia, amd, intel can feed data directly from the video ram into the encoder at native framerate :/
[00:06:11 CET] <Mavrik> Exactly.
[00:06:36 CET] <Mavrik> You get freshly encoded H.264 directly from the card without having to transport the raw display anywhere.
[00:06:43 CET] <Mavrik> (If you have the HW of course.)
[00:06:50 CET] <dorp_> Are there existing tools for utilizing that? Or I'm getting excited for nothing? :)
[00:07:17 CET] <Mavrik> dorp_, nVidia GeForce Experience is the nVidia tool I know of
[00:07:19 CET] <Mavrik> no idea about AMD
[00:07:34 CET] <Mavrik> You need a decently modern GPU tho.
[00:08:33 CET] <furq> i think raptr can do it for amd and intel
[00:12:11 CET] <YamakasY> yo, anyone know the right parameters to keep the sound synced when going from flv to mp4?
[00:17:07 CET] <J_Darnley> Why do you think that needs a special option?
[00:17:23 CET] <J_Darnley> Did you post the command and full log yet?
[00:17:41 CET] <dorp_> J_Darnley: Thanks for the suggestion, it seems that obs with 60fps can capture all the frames
[00:18:45 CET] <YamakasY> J_Darnley: erm
[00:22:19 CET] <YamakasY> nah testing out
[00:22:34 CET] <YamakasY> I'm going to install some extra ffmpeg servers, that is for sure
[00:25:15 CET] <YamakasY> ok this seems to be best for now -vcodec mpeg4 -async 1 -acodec libvo_aacenc
[00:26:31 CET] <J_Darnley> If you have a recent ffmpeg please use the internal encoder.
[00:26:33 CET] <YamakasY> mhh can be better
[00:26:35 CET] <kepstin> YamakasY: the vo_aacenc encoder isn't very good, I'd recommend using the internal encoder (-acodec aac) instead
[00:26:57 CET] <YamakasY> kepstin: yeah that should be better, but it seems I need to specify it
[00:27:16 CET] <J_Darnley> Also, you were complaining about the video quality. I'm not surprised since you're using mpeg4
[00:27:27 CET] <YamakasY> mhh still experimental ?
[00:27:31 CET] <J_Darnley> At the default of 200k
[00:27:38 CET] <J_Darnley> Not anymore
[00:27:39 CET] <YamakasY> nah it's not that bad
[00:28:19 CET] <YamakasY> no I have a bitrate of 255
[00:28:21 CET] <YamakasY> oops
[00:28:22 CET] <YamakasY> 355
[00:30:55 CET] <YamakasY> mhh I need much better, fps is 30 default
[00:30:59 CET] <YamakasY> of the flv
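A minimal sketch of the suggested encoder swap; filenames are placeholders, and very old builds may still need '-strict experimental' for the native aac encoder:

    ffmpeg -i input.flv -vcodec mpeg4 -async 1 -acodec aac output.mp4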
[00:33:21 CET] <dorp_> Is there a filter for ffmpeg to discard frames which are the same, essentially keeping the unique frames in a variable-framerate sort of way?
[00:34:57 CET] <YamakasY> oh the windows 10 player is crappy it seems
[00:35:04 CET] <J_Darnley> I think you can use one of the telecine filters for that
[00:48:55 CET] <kepstin> dorp_: the decimate filter can do that, yeah
[00:50:15 CET] <dorp_> kepstin: I actually just tried it, it seems to me that it's not very thorough, I can still skip through lots of duplicate frames
[00:50:24 CET] <dorp_> Unless I'm using it wrong
[00:50:45 CET] <YamakasY> is 30 fps at 640x480 really an advantage? back in the day I saw some PCs having a hard time with it when recording with flash
[00:51:15 CET] <dorp_> ./ffmpeg -i -c:v ... -vf mpdecimate out.mp4 ... seems correct?
[00:51:35 CET] <furq> YamakasY: why are you using mpeg4 instead of libx264
[00:52:36 CET] <YamakasY> furq: dunno, I come from flv's
[00:52:46 CET] <YamakasY> furq: I need to play it on mobile devices
[00:52:52 CET] <furq> so use libx264
[00:53:07 CET] <TD-Linux> ffmpeg's mpeg4 encoder is unlikely to produce something playable on a mobile device
[00:53:29 CET] <YamakasY> but what would be my extension with that lib ?
[00:53:30 CET] <TD-Linux> (it's an ASP encoder right?)
[00:53:36 CET] <furq> mp4
[00:53:42 CET] <furq> and i think it's an SP encoder
[00:53:42 CET] <YamakasY> ok
[00:54:05 CET] <YamakasY> so as acodec I need to set libx264?
[00:54:09 CET] <furq> vcodec
[00:54:16 CET] <furq> or c:v if you're not from the past
[00:54:27 CET] <YamakasY> I have vcodec in my command
[00:54:47 CET] <YamakasY> furq: I like the 90's
[00:55:02 CET] <furq> that explains why you're using nellymoser
[00:55:03 CET] <TD-Linux> I do not miss 90's video encodes one bit
[00:55:24 CET] <TD-Linux> it looks like his input is nellymoser, not output
[00:55:36 CET] <furq> yes but it would have spoiled the rhythm of the joke to make that clear
[00:55:45 CET] <YamakasY> so then it would be ffmpeg -i bla.flv c:v output.mp4 ?
[00:55:52 CET] <TD-Linux> opus in mp4 soon (tm)
[00:55:54 CET] <furq> -c:v libx264
[00:56:02 CET] <YamakasY> nothing more ?
[00:56:22 CET] <furq> that depends on whether you want to do anything more
[00:56:37 CET] <YamakasY> furq: what is advised ?
[00:57:03 CET] <YamakasY> I record my flv's at high 264 but at a fps of 25 atm
[00:57:15 CET] <YamakasY> 30 can be done but was not always nice with buffering I thought
[00:57:40 CET] <furq> use the framerate of the input
[00:58:41 CET] <YamakasY> yes yes
[00:58:51 CET] <YamakasY> that for sure and i need async
[00:59:30 CET] <YamakasY> heh, these ffmpeg servers are sure gonna be there
[01:00:51 CET] <YamakasY> ok,, perfect quality, but async doesn't sync that good
[01:02:49 CET] <YamakasY> furq: good advice
[01:03:02 CET] <YamakasY> I knew about the framerate, I never raise it
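Putting the advice above together, a hedged example of the libx264 route; filenames are placeholders, and the CRF/preset shown are just the libx264 defaults written out:

    ffmpeg -i bla.flv -c:v libx264 -crf 23 -preset medium -c:a aac bla.mp4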
[01:11:36 CET] <themisfit610> hey all, looking to identify options for the e-ac3 encoder in ffmpeg
[01:11:44 CET] <themisfit610> is there documentation of this?
[01:14:59 CET] <J_Darnley> Is there one?
[01:15:34 CET] <J_Darnley> So there is
[01:15:58 CET] <J_Darnley> Then: -h encoder=eac3
[01:16:11 CET] <J_Darnley> And other global options
[01:17:29 CET] <themisfit610> many thanks!
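The option listing plus a minimal encode sketch; the bitrate and filenames are illustrative assumptions, not values from the channel:

    ffmpeg -h encoder=eac3
    ffmpeg -i input.wav -c:a eac3 -b:a 384k output.eac3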
[01:18:54 CET] <themisfit610> another question
[01:19:34 CET] <themisfit610> best practices for ac3 / ec3 encoding include things like lowpass filtering the LFE
[01:19:45 CET] <themisfit610> will the encoder in ffmpeg do this automatically or do I need to handle that upstream?
[01:25:18 CET] <J_Darnley> I have no idea about that
[01:30:03 CET] <maduro> hi all, is it possible to generate DASH manifests with both webm and mp4 content using ffmpeg?
[04:48:38 CET] <waressearcher2> can you imagine living on a planet 5 times bigger than earth, all that gravity
[06:15:40 CET] <johnnny22-afk> can i really just cat all the .ts files from an m3u8 playlist into a single .ts file without adjustments ?
[06:26:45 CET] <c_14> usually, yes
[06:28:53 CET] <johnnny22-afk> nice to hear, i'll give it a test ;)
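A rough sketch of that approach, assuming the playlist's segments are named seg0.ts, seg1.ts, ... (the optional second step just rewraps the joined stream without re-encoding):

    cat seg0.ts seg1.ts seg2.ts > joined.ts
    ffmpeg -i joined.ts -c copy joined.mp4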
[07:52:43 CET] <digidog> hm, any chance to get vpse installed on Linux ? :$
[07:53:23 CET] <furq> that depends on what vpse is
[07:54:00 CET] <digidog> furq: vapoursynth-editor
[07:54:45 CET] <digidog> furq: it seems that the packages are missing something for compiling, if I understood correctly :/ https://bitbucket.org/mystery_keeper/vapoursynth-editor/issues/7/modification-for-compiling-under-clang-os
[07:56:25 CET] <furq> that says clang and osx
[07:57:13 CET] <digidog> furq: the last comment
[07:57:37 CET] <furq> that says clang on linux
[07:59:07 CET] <furq> https://bitbucket.org/mystery_keeper/vapoursynth-editor/issues/4/building-instruction
[07:59:42 CET] <digidog> furq: sorry, not much experience in compiling ... so you think it should compile fine with gcc, or how do I start to compile it since there is no instruction and I am on ...
[07:59:48 CET] <digidog> furq: ah ...
[07:59:59 CET] <digidog> furq: awesome. that was the link I was looking for!
[08:00:11 CET] <digidog> furq: thanks a million!
[08:01:53 CET] <digidog> furq: qmake ... *facepalm* sure ... since it is Qt5
[08:01:57 CET] <digidog> thank you !
[08:02:02 CET] <digidog> furq++
[08:03:00 CET] Last message repeated 1 time(s).
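For reference, a minimal build sketch for a Qt5/qmake project such as vapoursynth-editor; the exact directory containing the .pro file may differ, see the build-instruction issue linked above:

    cd vapoursynth-editor
    qmake
    make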
[09:07:26 CET] <nindustries> Hi, with Skylake bringing HEVC hardware decoding, is there any acceleration in ffmpeg possible?
[09:19:14 CET] <fritsch> on linux?
[09:19:15 CET] <fritsch> no
[09:19:21 CET] <fritsch> not for 10 bit
[09:19:27 CET] <fritsch> but HEVC 8 bit is already there in ffmpeg
[09:19:41 CET] <fritsch> libva / libva-intel-driver >= 1.6.1
[09:20:05 CET] <fritsch> nindustries:
[09:31:57 CET] <nindustries> But there is on windows? fritsch
[09:32:06 CET] <nindustries> Reason is i'm picking out a new server
[09:42:57 CET] <furq> nindustries: apparently ffmpeg does hevc decoding with dxva2
[09:43:10 CET] <fritsch> it has for windows / linux
[09:43:14 CET] <nindustries> hmm
[09:43:14 CET] <fritsch> but only for 8 bit both
[09:43:22 CET] <nindustries> And why only 8-bit?
[09:43:40 CET] <fritsch> skl has no 10 bit decoding unit
[09:43:44 CET] <fritsch> it is done "gpu accelerated"
[09:43:50 CET] <fritsch> much too slow for 4k video anyways
[09:44:13 CET] <fritsch> but, the reason is: for linux 10 bit hwaccel work is missing and for windows hendrik did not yet PR his bits
[09:44:27 CET] <fritsch> lavfilters already can do it on windows
[09:45:29 CET] <nindustries> skl ?
[09:45:39 CET] <nindustries> oh, skylake
[09:46:19 CET] <fritsch> yeah don't buy it
[09:46:21 CET] <fritsch> wait for kaby lake
[09:46:33 CET] <nindustries> why, may I ask?
[09:46:41 CET] <fritsch> huh?
[09:46:50 CET] <fritsch> cause it cannot play hevc 10 bit content
[09:46:56 CET] <fritsch> without the cpu going to 400%
[09:46:56 CET] <nindustries> second half of 2017..
[09:47:12 CET] <fritsch> the future live tv / uhd bluray is 4k @ 10 bit
[09:47:22 CET] <nindustries> yah
[09:47:23 CET] <fritsch> i would not buy an expensive skylake just to add an nvidia card later
[09:47:24 CET] <fritsch> to play that
[09:47:33 CET] <nindustries> this is going to be the server btw
[09:47:40 CET] <nindustries> so I was thinking about converting things
[09:47:50 CET] <nindustries> ah right.. that's ENcoding..
[09:48:03 CET] <fritsch> encoding won't work either
[09:50:38 CET] <nindustries> yeah
[09:50:50 CET] <nindustries> so I suppose only reason for skylake is DDR4 memory
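A hedged way to check whether hardware-assisted HEVC (8-bit) decoding works on Windows via dxva2, as discussed above; the input name is a placeholder and '-f null -' simply discards the decoded frames, so this only exercises decoding:

    ffmpeg -hwaccel dxva2 -i input_hevc_8bit.mkv -f null -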
[09:54:16 CET] <YamakasY> morning
[11:59:06 CET] <YamakasY> anyone doing an ffmpeg exec in php and checking for an OK in an if ?
[13:41:31 CET] <dorp_> If it helps anybody else- it seems that there's a difference between screen capturing 'desktop' with -f gdigrab .. and capturing a specific window. When I have a window set to 1920:1080, playing a video at 25fps ... and then use ffmpeg to capture said window at 60fps, it seems that all the frames are captured. (at least within the scope of 8 seconds, 200 frames)
[13:41:43 CET] <dorp_> (using 'desktop' would result with missing frames)
[13:44:03 CET] <dorp_> Maybe it's worth adding that to the documentation?
[14:15:18 CET] <luc4> Hello! I'm using ffmpeg libraries in my code to open containers. I'm running a sort of unit test that simply opens and closes the same file. After approx 6500 times, I get this crash: http://pastebin.com/fuFkCfxj. I'm inclined to think this is my fault, I only found old reports, but has anyone experienced something similar by any chance? I'm using ffmpeg 2.7.2.
[14:17:04 CET] <J_Darnley> Not that I know of.
[14:18:10 CET] <J_Darnley> I do wonder why you are using 2.7.2 when 2.7.5 is available (not to mention the 2.8 releases)
[14:19:17 CET] <J_Darnley> If you want to report a bug then you should compile with debug symbols and post a full trace in the report.
[14:19:28 CET] <J_Darnley> You should test the latest git head first though.
[14:20:37 CET] <luc4> J_Darnley: unfortunately I spent one day trying to use 2.8, but without success.
[14:20:47 CET] <luc4> J_Darnley: I can try 2.7.5 though
[14:24:00 CET] <durandal_1707> luc4: what's wrong with 2.8?
[14:27:02 CET] <luc4> durandal_1707: totally unknown, just switching from 2.7.2 to 2.8 was breaking my code, but I don't remember the details. I was helped here but no one could find anything. I was told that code written for 2.7.2 should actually be compatible with 2.8, but my code was simply not working. After a day I stopped trying.
[14:36:30 CET] <durandal_1707> luc4: does your code have source code?
[14:36:57 CET] <luc4> durandal_1707: you mean if it is open source?
[14:37:13 CET] <luc4> durandal_1707: it is open yes
[14:37:32 CET] <durandal_1707> where it is?
[14:37:46 CET] <luc4> durandal_1707: however I was trying on 2.8.0, maybe 2.8.1 is different
[14:38:54 CET] <J_Darnley> Why would you not use the newest version of each branch?
[14:39:00 CET] <luc4> durandal_1707: this is the portion using ffmpeg to open: https://github.com/carlonluca/pi/blob/master/piomxtextures_src/omxplayer_lib/OMXReader.cpp
[14:39:07 CET] <J_Darnley> Isn't this why people spend time maintaining them?
[14:39:43 CET] <luc4> J_Darnley: because it takes much time to rebuild for an arm platform, and I do not do it unless there is a real reason to do it.
[14:40:18 CET] <luc4> J_Darnley: last time I tried to update to 2.8 it took 1 day, and it was wasted :-)
[14:40:52 CET] <luc4> J_Darnley: however if I find out this crash is really only caused by ffmpeg and not by me I'll have to update
[14:40:57 CET] <J_Darnley> ARM clearly sucks so much
[14:41:53 CET] Action: J_Darnley wonders how long it takes to build on an RPi
[14:42:21 CET] <luc4> J_Darnley: crossbuilding takes 5/10 minutes approx
[14:44:01 CET] <J_Darnley> Curses I don't have the source on it.
[14:46:47 CET] <hook> hey all
[14:47:00 CET] <hook> trying to pipe a gource video to ffmpeg with this; gource --seconds-per-day 0.3 --auto-skip-seconds 6 --file-idle-time 500 --multi-sampling -1024x768 --hide filenames,dirnames --stop-at-end --disable-progress --output-ppm-stream - | ffmpeg -an -threads 4 -y -vb 4000000 -r 60 -f image2pipe -vcodec ppm -i - -vcodec libx264 ./gource.mp4
[14:47:03 CET] <waressearcher2> hook: hello and a warm welcome
[14:47:12 CET] <hook> thanks waressearcher2
[14:47:26 CET] <hook> but it seems that the video is cropping to the top left?
[14:47:40 CET] <hook> zoomed in
[14:48:12 CET] <J_Darnley> What does ffmpeg say about it?
[14:48:13 CET] <waressearcher2> "-vb 4000000", is that correct?
[14:49:11 CET] <hook> probably not, I guess I should be figuring this value out
[14:49:50 CET] <hook> I see this in the output; Stream #0.0: Video: libx264, yuv420p, 1024x768, q=-1--1, 60 tbn, 60 tbc
[14:50:45 CET] <J_Darnley> Why don't you stop guessing and post the whole log?
[14:52:08 CET] <hook> ffmpeg version 0.8.17-4:0.8.17-0ubuntu0.12.04.1, Copyright (c) 2000-2014 the Libav developers
[14:52:08 CET] <hook> built on Mar 16 2015 13:26:50 with gcc 4.6.3
[14:52:08 CET] <hook> The ffmpeg program is only provided for script compatibility and will be removed
[14:52:09 CET] <hook> in a future release. It has been deprecated in the Libav project to allow for
[14:52:09 CET] <hook> incompatible command line syntax improvements in its replacement called avconv
[14:52:09 CET] <hook> (see Changelog for details). Please use avconv instead.
[14:52:09 CET] <hook> [image2pipe @ 0x778c40] Estimating duration from bitrate, this may be inaccurate
[14:52:10 CET] <hook> Input #0, image2pipe, from 'pipe:':
[14:52:10 CET] <hook> Duration: N/A, bitrate: N/A
[14:52:10 CET] <hook> Stream #0.0: Video: ppm, rgb24, 1024x768, 60 fps, 60 tbr, 60 tbn, 60 tbc
[14:52:11 CET] <hook> Incompatible pixel format 'rgb24' for codec 'libx264', auto-selecting format 'yuv420p'
[14:52:11 CET] <hook> [buffer @ 0x779380] w:1024 h:768 pixfmt:rgb24
[14:52:56 CET] <J_Darnley> OMFG
[14:53:00 CET] <hook> apologies for the dump, forgotten my irc skills
[14:53:10 CET] <J_Darnley> 1 - That's not ffmpeg
[14:53:13 CET] <J_Darnley> 2 - That's so old
[14:53:20 CET] <J_Darnley> 3 - That's not the whole log.
[14:53:24 CET] <hook> ah okay, good
[14:53:35 CET] <hook> trust me to trust an apt-get
[14:53:53 CET] <hook> if I fire it through avconv I get the same issue
[14:54:16 CET] <J_Darnley> I assume your version of avconv is equally as old
[14:54:30 CET] <hook> let's see
[14:55:19 CET] <hook> avconv version 0.8.17-4:0.8.17-0ubuntu0.12.04.1,
[14:56:06 CET] <hook> does anyone mind me dumping logs here, or is there a preferred way?
[14:56:16 CET] <J_Darnley> pastebin or similar!
[14:57:10 CET] <hook> ok ;)
[15:07:49 CET] <dorp_> Is there anybody here with access to the documentation? Who should I contact for making a suggestion for a note about gdigrab?
[15:08:19 CET] <J_Darnley> Anyone with the source can do that
[15:09:15 CET] <J_Darnley> If you have a specific change in mind and don't want to bother with that then put the text you want somewhere and I will look at putting it in.
[15:10:30 CET] <dorp_> J_Darnley: It's just a simple note concerning what I've experienced: that capturing 'desktop' and capturing a specific window entity perform differently, even if both happen to be the same resolution/canvas
[15:11:22 CET] <J_Darnley> I'm afraid I don't understand what you mean.
[15:11:39 CET] <dorp_> Capturing a 25fps 1920:1080 with 'desktop' would have missing frames, whereas capturing the same 25fps 1920:1080 by a specific window/title would have none.
[15:12:00 CET] <J_Darnley> Perhaps if you post the two command lines I could test myself.
[15:13:16 CET] <dorp_> J_Darnley: As basic as that: ./ffmpeg -f gdigrab -framerate 60 -i desktop -c:v libx264 -qp 0 -preset ultrafast -y out.mp4
[15:14:01 CET] <dorp_> J_Darnley: Given a monitor with 1920:1080 60hz, playing and capturing a 1920:1080 video with 25fps, would result with missing frames
[15:14:16 CET] <J_Darnley> And what about the "slow" command?
[15:15:00 CET] <dorp_> J_Darnley: This is the command that would result with missing frames. The one that didn't- was about replacing: -i desktop, with: -i title="explicit"
[15:15:04 CET] <jkqxz> Huh, that's a "fun" answer to the question. They really don't seem like they should be different from the code.
[15:15:21 CET] <dorp_> jkqxz: Hey there, what do you mean?
[15:17:56 CET] <dorp_> jkqxz: BTW, thanks a lot for the sequence of numbers suggestion, made my testing a lot easier and clearer to evaluate comparisons
[15:18:01 CET] <jkqxz> Desktop vs. window in gdigrab is essentially the same code, except a bit of the initialisation setting it up. The difference must be somewhere on the Windows side.
[15:18:18 CET] <bencoh> dorp_: what about cpu use?
[15:18:31 CET] <J_Darnley> Windows is known for its exceptionally good code(!)
[15:19:25 CET] <dorp_> bencoh: My cases didn't involve cpu concerns, and the case I've shown ^ is about two identical cases. Both having the same resolution, source fps, capture fps, same codec, and same drive
[15:20:45 CET] <dorp_> bencoh: The only difference was whether the selection was -i desktop, or -i title= ... so if others happen to stumble on issues, it's worth knowing that if you don't actually wish to capture your desktop activity and only want to capture something within a window canvas, it may perform better
[15:21:47 CET] <bencoh> dorp_: I was only asking if you noticed any difference in cpu usage between the two (be it for ffmpeg or the rest of the system), but okay.
[15:23:03 CET] <dorp_> bencoh: I didn't notice anything significant, so I'm not sure how to measure/compare a subtle difference. I just wanted to emphasize that this is not a cpu bottleneck concern
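For reference, the two gdigrab invocations being compared differ only in the input specifier; the window title below is a placeholder for whatever window is being captured:

    ffmpeg -f gdigrab -framerate 60 -i desktop -c:v libx264 -qp 0 -preset ultrafast -y out_desktop.mp4
    ffmpeg -f gdigrab -framerate 60 -i title="Some Window" -c:v libx264 -qp 0 -preset ultrafast -y out_window.mp4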
[15:32:41 CET] <eau4x6> I'm using the concat demuxer to join two parts of the same video (no reencoding). I get an annoying gap when the first video ends. The last frame of the first video is displayed for 7 seconds along with audio from the second. I tried monkeying around with vsync/async, copyts, copytb options but it didn't help :)
[15:34:43 CET] <eau4x6> the video format is h.264 and i use mkv container. any tips?
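For context, the concat demuxer workflow being described looks roughly like this (filenames are placeholders; list.txt uses the demuxer's 'file' directive):

    # list.txt
    file 'part1.mkv'
    file 'part2.mkv'

    ffmpeg -f concat -i list.txt -c copy joined.mkv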
[15:36:22 CET] <hook> think I'm going to give up on this gource/ffmpeg thing for today, let it settle and pick it up tomorrow
[16:28:18 CET] <digidog> any vapoursynth friends here? need to add PATHONPATH to vapoursynth since it doesnt work in bashrc or profile. any hints ?
[16:36:08 CET] <jkqxz> You've misspelled PYTHONPATH, perhaps?
[16:41:13 CET] <digidog> jkqxz: jesus! yes ... sry. *squints*
[16:41:50 CET] <durandal_1707> do you see letters well?
[16:42:06 CET] <shincodex> I love gutting your make file
[16:42:51 CET] <digidog> jkqxz: vspipe -v doesn't work - I always need to type PYTHONPATH=/usr/local/lib/python3.5/site-packages vspipe -v, that's why I'm trying to place this somewhere useful
[16:43:30 CET] <digidog> durandal_1707: define *well* after 26 hours awake ...
[16:44:54 CET] <digidog> shincodex: that doesn't even make any sense for my spare english ...
[16:46:09 CET] <shincodex> I have a spare tire in my trunk
[16:49:37 CET] <digidog> shincodex: well ... drop it off ;) ... btw: we can also talk in German, French or Italian if you want ? :)
[17:04:05 CET] <digidog> Myrsloik: can I set up PYTHONPATH somewhere in vapoursynth/*? Placing it in the ~/.bashrc or ~/.profile doesn't have any effect. (the usual way to add paths to environment variables for terminal commands under Linux/Debian)
[17:10:38 CET] <dorp_> Yesterday I've asked if it's possible to screen capture at a high framerate, and have the export consists only of the unique frames. It seems that I've achieved just that with: -vf mpdecimate -vsync 0 ... just using 'mpdecimate' would seem to discard the duplicates from the source, but introduce 'dups' to fill the gaps. So -vsync 0 is significant
[17:16:52 CET] <jkqxz> digidog: "export PYTHONPATH=..." in ~/.bashrc really should work if you're using bash. Does "set" output show the right PYTHONPATH in the environment immediately before you run the program?
[17:29:29 CET] <digidog> jkqxz++
[17:30:48 CET] <digidog> jkqxz: you saved my ass. thanks. made another check and it turned out that my echo PATH >> bashrc was missing *export*
[17:30:56 CET] <digidog> *facepalm*
[17:31:06 CET] <digidog> jkqxz: thank you.
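A minimal sketch of the fix that was found, using the site-packages path from the command above (the path is system-specific):

    echo 'export PYTHONPATH=/usr/local/lib/python3.5/site-packages' >> ~/.bashrc
    source ~/.bashrc
    vspipe -v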
[17:37:54 CET] <jkqxz> :)
[18:25:18 CET] <salviadud> Let's say I want to disable x86 flags on an encoding
[18:26:01 CET] <salviadud> would I go like this: ffmpeg -cpuflags -mmx-sse-avx-cmov ?
[18:26:23 CET] <salviadud> My question is how do I disable multiple x86 flags on a single command
[18:27:01 CET] <J_Darnley> If you want to disable them all: -cpuflags 0
[18:27:25 CET] <J_Darnley> (I wonder why though)
[18:27:53 CET] <J_Darnley> As for disabling some I think you might be able to use what you posted
[18:28:18 CET] <salviadud> Thank you J_Darnley
[18:29:58 CET] <J_Darnley> Yeah, see cpuflags here http://ffmpeg.org/ffmpeg.html
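For reference, the forms discussed above: '-cpuflags 0' clears everything, while individual flags can be subtracted with a '-' prefix as in the attempt above. Filenames are placeholders, and note jkqxz's caveat below that an external encoder like libx264 does its own CPU detection, so this only affects the libav* internals:

    ffmpeg -cpuflags 0 -i input.mkv -c:v mpeg4 output.mp4          # clear all flags
    ffmpeg -cpuflags -sse-avx -i input.mkv -c:v mpeg4 output.mp4   # subtract specific flags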
[18:30:34 CET] <salviadud> You got any favorite distro for ffmpeg ?
[18:31:01 CET] <J_Darnley> ?
[18:31:10 CET] <J_Darnley> Do you mean operating system?
[18:31:54 CET] <salviadud> Yeah
[18:31:55 CET] <jkqxz> I think the cpuflags only handles internal use in libav* - if you use libx264 for encoding it won't be respected.
[18:32:02 CET] <J_Darnley> Windows.
[18:32:23 CET] <salviadud> jkqxz, how do I encode in h264 without libx264 then?
[18:32:28 CET] <drv> why do you want to restrict cpu flags? have you found a bug in one of the optimized routines?
[18:32:31 CET] <J_Darnley> You don't
[18:33:01 CET] <salviadud> Then I would have to compile h264 without those flags
[18:33:35 CET] <salviadud> cpu flags don't allow me to decompress a binary file hidden within a video
[18:35:06 CET] <salviadud> libx264 seems to only have sse
[18:35:17 CET] <salviadud> There is no bug
[18:35:20 CET] <salviadud> so, don't worry
[18:35:32 CET] <salviadud> ffmpeg works fine
[18:35:37 CET] <J_Darnley> I think you're seeing things.
[18:36:15 CET] <J_Darnley> Anyway, you can control the features used in libx264. See the asm option.
[18:37:20 CET] <salviadud> But then, I would have to compile ffmpeg with --disable-asm
[18:37:26 CET] <J_Darnley> No
[18:37:35 CET] <J_Darnley> See x264's asm option
[18:38:25 CET] <J_Darnley> And I don't mean a compile time option.
[18:39:46 CET] <salviadud> I can't seem to find that option.
[18:39:56 CET] <salviadud> on google at least
[18:40:08 CET] <salviadud> most of the stuff that comes out is how to compile libx264
[18:40:20 CET] <bencoh> x264 --fullhelp|grep asm
[18:44:23 CET] <salviadud> I don't have x264 as a binary
[18:44:32 CET] <salviadud> I can get full help on ffmpeg
[18:45:47 CET] <salviadud> I only got the library
[18:52:01 CET] <jkqxz> Something like "ffmpeg ... -x264opts asm=sse8,avx7 ..." should do it. It doesn't have a negative form, though, so you need to build up the whole set yourself.
[18:52:55 CET] <salviadud> I would only use the most basic of flags
[18:53:09 CET] <salviadud> to achieve my lab test
[19:04:07 CET] <kbarry> I'm very new to FFMPEG. Google doesnt offer any read answers, so came here looking for help with what to search for (what is it called)
[19:04:35 CET] <grublet> kbarry: what are you trying to find the name of?
[19:04:38 CET] <grublet> describe it
[19:04:50 CET] <kbarry> I'd like to take a audio track (an MP3), and replace the audio, but keep all else the same, metadata, track length, etc.
[19:05:06 CET] <kbarry> I don't know what that might be called.
[19:05:25 CET] <kbarry> not entirely sure it's a common procedure, but it seems that it should be fairly straightforward,
[19:06:06 CET] <kbarry> Was thinking I might have to generate a sample of [length of origin] from my "new audio source", then "slip/insert" that in as the only audio track.
[19:06:07 CET] <grublet> kbarry: you're looking to copy metadata. someone else will need to tell you what specifically to do though, but googling 'copy metadata' may give better results
[19:06:24 CET] <grublet> oh, keep track length the same? that idk
[19:06:29 CET] <Mavrik> kbarry, that's a very interesting usecase
[19:06:38 CET] <Mavrik> What are you trying to do? :)
[19:07:38 CET] <kbarry> Prank my Mentor.
[19:07:39 CET] <c_14> kbarry: ffmpeg -i metadata.mp3 -i audio.mp3 -map_metadata 0 -map 1:a out.mp3
[19:07:40 CET] <c_14> should do it
[19:07:43 CET] <kbarry> (to be honest)
[19:08:01 CET] <c_14> track length will depend on the length of audio.mp3 though
[19:08:09 CET] <kbarry> c_14: Would that keep the track length the same?
[19:08:14 CET] <kbarry> Right,
[19:08:14 CET] <c_14> (you can use -t duration as an output option to trim)
[19:08:29 CET] <Mavrik> kbarry, ah, we can help with that :P
[19:08:43 CET] <Mavrik> Yeah, what c_14 said.
[19:08:52 CET] <kbarry> What if the duration of my "replacement" track is, say, 30 seconds, and the original was 5 minutes?
[19:08:55 CET] <Mavrik> Or reencode audio and use silence audio filter.
[19:09:02 CET] <Mavrik> That will keep everything the same just the track is silent.
[19:09:07 CET] <kbarry> will it just have 30 seconds of the "new" then silence?
[19:09:36 CET] <kbarry> So, we're going to clone-backup his music library,
[19:09:42 CET] <c_14> kbarry: you also have to add -map_metadata:s:0 0:s:0
[19:10:11 CET] <YamakasY> what is the right term for converting an flv to an mp4 with ffmpeg ?
[19:10:13 CET] <kbarry> but then we are going to replace all his music with a song (techno/kids song called "i'm a jellybean", guess what the only lyric is)
[19:10:39 CET] <c_14> kbarry: you can use -stream_loop -1 as an input option to infinitely loop (and then with -t as an output option to cut off after x seconds/minutes)
[19:10:46 CET] <Mavrik> YamakasY, remuxing if you don't want to touch video and audio inside
[19:10:49 CET] <kbarry> But want to keep "everything else" the same, so the metadata, album art (i know its probably not part of the track)
[19:10:56 CET] <Mavrik> ffmpeg -i bla.flv -codec copy bla.mp4
[19:10:59 CET] <YamakasY> Mavrik: ok, no transcoding ?
[19:12:53 CET] <kbarry> c_14: OK, going to take down all you said, and read the docs, so I can learn.
[19:13:51 CET] <kbarry> c_14: in -map_metadata:s:0 0:s:0 what is the 0:s:0
[19:14:00 CET] <kbarry> (you didn't have that in the original suggested command)
[19:15:05 CET] <c_14> You want both, the one with the s will copy the stream metadata, the other one will copy the global metadata
[19:15:07 CET] <YamakasY> Mavrik: ?
[19:15:23 CET] <c_14> 0:s:0 being the first stream of the first file
[19:15:48 CET] <kbarry> 0:s = index0 STREAM
[19:16:00 CET] <kbarry> :0 index0 stream
[19:16:31 CET] <Mavrik> YamakasY, what?
[19:18:33 CET] <YamakasY> Mavrik: no transcoding ?
[19:24:31 CET] <kbarry> When using a command like -stream_loop, is it possible for me to apply this to a specific stream/input ?
[19:24:52 CET] <kbarry> ie, Currently i'd have 2 streams, and I really only want to repeat one.
[19:25:16 CET] <kbarry> In my use-case I'll be using a -t, so it won't "really" matter, but for a broader understanding, I'd like to know
[19:25:21 CET] <c_14> just put it directly before the -i you want it to affect
[19:25:35 CET] <c_14> Most ffmpeg options affect the file they are placed in front of.
[19:26:36 CET] <kbarry> OK,
[19:27:13 CET] <kbarry> so "technically" I could have -stream_loop 2 -i file.mp3 -stream_loop -1 -i secondaudio.mp3
[19:27:39 CET] <c_14> yes
[19:27:43 CET] <c_14> practically as well
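Pulling c_14's pieces together, a hedged combined sketch; filenames are placeholders and the -t value should match the length of the original track:

    ffmpeg -i original.mp3 -stream_loop -1 -i replacement.mp3 \
        -map 1:a -map_metadata 0 -map_metadata:s:0 0:s:0 \
        -t 300 out.mp3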
[19:27:44 CET] <digidog> hm, qtgmc @ vapoursynth has switched from Yadif to nnedi3, does this affect the overall qtgmc quality ? any experiences ?
[19:29:19 CET] <kbarry> Thanks for the help.
[19:29:43 CET] Action: kbarry enjoying ffmpeg more every time he gets to use it
[20:27:02 CET] <durandal_1707> digidog: nnedi3 gets pixels out of nothing, using trained neurons
[20:28:57 CET] <durandal_1707> digidog: yadif uses other fields from prev, current and next frame
[22:26:42 CET] <Chagall1> would it be faster to extract text subs and retime them (automatically) compared to slow seek all the way?
[22:26:56 CET] <Chagall1> (if you have to seek anyway)
[22:27:48 CET] <c_14> Well, you're going to have to read the entire file either way. And if you extract them you'll have to read it twice (assuming you want the subs put back in again later)
[22:36:13 CET] <c_14> Chagall1: did you get my message?
[22:38:26 CET] <Chagall1> no, didn't see it
[22:38:31 CET] <c_14> Well, you're going to have to read the entire file either way. And if you extract them you'll have to read it twice (assuming you want the subs put back in again later)
[22:42:36 CET] <Chagall1> yeah but I can fast seek for the encode
[22:43:15 CET] <Chagall1> if I can get the subtitles faster somehow
[22:43:49 CET] <c_14> Are you adjusting the timings for the entire file or just for a part of it?
[22:44:27 CET] <Chagall1> whatever part corresponds to the video i am re-encoding
[22:44:54 CET] <c_14> I'm not sure I understand what you're trying to do.
[22:45:17 CET] <Chagall1> i would be seeking, like i said, so it would be a part of it
[22:47:39 CET] <c_14> You should be able to fast seek in either case, no?
[22:48:23 CET] <Chagall1> if you fast seek with text subs they just get displayed from the beginning
[22:49:02 CET] <Chagall1> (with subtitles filter)
[22:49:43 CET] <Chagall1> but if i can just extract them with ffmpeg using fast seek then i can use them as input for the actual encode and lose almost no time, not sure if that would work though
[22:50:39 CET] <c_14> Ah, your question is which would be faster ffmpeg -i video -ss <time> -t <duration> -vf subtitles=video out or extracting the subs, retiming them and then using ffmpeg -ss <time> -t <duration> -i video -vf subtitles=subs out, right?
[22:50:43 CET] <c_14> In that case extracting would be faster.
[22:51:18 CET] <c_14> s/would/should/
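A rough, heavily hedged sketch of the extract-then-burn approach: with -ss placed before -i both passes are fast-seeked and their timestamps start near zero, which gives the automatic retiming discussed above, though exact behaviour can vary with version and container (times and filenames are placeholders):

    ffmpeg -ss 00:10:00 -t 00:02:00 -i video.mkv -map 0:s:0 part.ass
    ffmpeg -ss 00:10:00 -t 00:02:00 -i video.mkv -vf subtitles=part.ass -c:v libx264 part.mp4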
[22:52:53 CET] <Nitori> so, if ffmpeg tells me an x264/x265 video stream has the pix fmt yuv422p10le, that does imply 10-bit depth, correct?
[22:52:57 CET] <Chagall1> ok, thanks
[22:53:10 CET] <c_14> yes
[22:53:57 CET] <Nitori> okay, and any idea what the "(tv)" in "yuv422p10le(tv)" means?
[22:54:07 CET] <Nitori> x265
[22:54:10 CET] <c_14> It means it's not full-range
[22:54:33 CET] <c_14> It was either that or the bt level
[22:54:58 CET] <c_14> ie bt.709 or bt.601 etc
[22:54:58 CET] <Nitori> ok..
[23:06:36 CET] <c_14> It's the color range
[23:06:42 CET] <c_14> tv = not full-range
[23:17:13 CET] <Nitori> could have something to do with me using crf 0, or preset "fast"? or just simply that the video only consists of black and white?
[23:21:08 CET] <J_Darnley> crf 0 is just the quality level (which is not lossless at 10-bit)
[23:21:46 CET] <Nitori> didn't know it wasn't lossless
[23:21:53 CET] <J_Darnley> 422 refers to how the chroma is subsampled
[23:22:12 CET] <Nitori> those are fancy words. Still have much learning to do
[23:22:29 CET] <J_Darnley> Anything else you're not sure about?
[23:23:08 CET] <J_Darnley> p means planar
[23:23:13 CET] <J_Darnley> 10 does mean 10-bit
[23:23:20 CET] <J_Darnley> le means little endian
[23:23:33 CET] <Nitori> figured those last two :-)
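One hedged way to inspect those fields directly (the color_range entry assumes a reasonably recent ffprobe; the filename is a placeholder):

    ffprobe -v error -select_streams v:0 -show_entries stream=pix_fmt,color_range -of default=noprint_wrappers=1 input.mkv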
[23:56:53 CET] <salviadud> Just so you guys know, to disable asm on the x264 library, you need a recompile
[23:57:09 CET] <salviadud> that way, when ffmpeg calls for it, it's disabled
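A minimal sketch of that rebuild; the extra configure flag beyond --disable-asm is an assumption, and ffmpeg then has to be rebuilt or relinked against the resulting libx264:

    cd x264
    ./configure --disable-asm --enable-static
    make && make install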
[00:00:00 CET] --- Wed Jan 27 2016