burek021 at gmail.com
Tue Aug 7 03:05:02 EEST 2018
[00:14:35 CEST] <DHE> bencoh: turns out leak sanitizer works out just fine if you recompile EVERYTHING (including x264, etc) with it... work required but worth it.
[05:48:55 CEST] <ayohmang> i'm compiling ffmpeg on solaris 11.3 sparc. some ancient solaris commands such as grep, sed and awk need to be replaced with their gnu counterparts in order to compile the source code successfully. just rename sed in /usr/sbin to sed.orig and reference gnu sed in /usr/local/bin.. after a while i can't reboot the solaris vm. i'm running on a solaris 11.3 vm.
[08:32:47 CEST] <botik101> hello - i am using ffmpeg to extract many frames from video. what is the maximum number of frames i can extract using the select filter? I noticed that it will most likely depend on the maximum length of the ffmpeg command line?
[11:54:13 CEST] Action: pi- (REPOST) Seeking ffmpeg consultant https://paste.pound-python.org/show/eni5jLABXdJ1756nnzKk/
[12:01:16 CEST] <TheAMM> What are you offering, pi-?
[12:05:25 CEST] <pi-> TheAMM: I invite any interested party to offer a consultancy rate. My bad, I failed to put that in the post.
[12:12:01 CEST] <King_DuckZ> hello, when I send frames to ffmpeg, do they have to be in the right order even if they have a timestamp? I'm writing a multithreaded program and I'm not sure if I should make sure frames are sent in the right order or not
[12:12:19 CEST] <King_DuckZ> the timestamp would be correct obviously
[12:18:19 CEST] <DHE> frames going into an encoder need to be in ascending PTS order. packets going into a muxer need to be in ascending DTS order (if DTS is unset, DTS == PTS).
[12:18:53 CEST] <DHE> you should hold some kind of lock while performing any operation on a single AVXxxxContext object if said object may be used from multiple threads
[12:50:00 CEST] <King_DuckZ> DHE: ok, I'll make sure they are output in the right order from my side then, thanks!
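DHE's two rules above — frames must reach the encoder in ascending PTS order, and any shared context needs a lock — can be sketched as a small reorder buffer. Everything below (the `pts_queue` type, its capacity, the `frame_t` payload) is hypothetical illustration, not libav API; only the ordering and locking discipline comes from the discussion.

```c
#include <pthread.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical reorder buffer: worker threads may finish frames out of
 * order, but the single thread that calls avcodec_send_frame() must see
 * them in ascending PTS. The mutex follows DHE's advice to hold a lock
 * around any object touched from multiple threads. */
typedef struct {
    int64_t pts; /* real code would carry an AVFrame* alongside */
} frame_t;

typedef struct {
    frame_t         buf[64]; /* arbitrary small capacity; no overflow check in this sketch */
    size_t          n;
    pthread_mutex_t lock;
} pts_queue;

void ptsq_init(pts_queue *q) {
    q->n = 0;
    pthread_mutex_init(&q->lock, NULL);
}

/* Insert keeping the buffer sorted by PTS (insertion sort is fine at this size). */
void ptsq_push(pts_queue *q, frame_t f) {
    pthread_mutex_lock(&q->lock);
    size_t i = q->n++;
    while (i > 0 && q->buf[i - 1].pts > f.pts) {
        q->buf[i] = q->buf[i - 1];
        i--;
    }
    q->buf[i] = f;
    pthread_mutex_unlock(&q->lock);
}

/* Pop the lowest-PTS frame; returns 0 if the queue is empty. */
int ptsq_pop(pts_queue *q, frame_t *out) {
    pthread_mutex_lock(&q->lock);
    int ok = q->n > 0;
    if (ok) {
        *out = q->buf[0];
        q->n--;
        memmove(q->buf, q->buf + 1, q->n * sizeof(frame_t));
    }
    pthread_mutex_unlock(&q->lock);
    return ok;
}
```

The submitting thread would pop only once it knows no earlier-PTS frame can still arrive (e.g. after all workers handling earlier input have finished).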
[16:43:55 CEST] <SortaCore> hey folks
[16:44:17 CEST] <SortaCore> how do I make a lossless ogg (presumably with flac inside)?
[16:44:29 CEST] <SortaCore> or should I just create .flac directly?
[16:44:50 CEST] <furq> you should just create flac directly
[16:45:01 CEST] <furq> ogg flac isn't well supported and flac uses vorbis comments for tagging anyway
[16:45:08 CEST] <furq> so unless you want to attach other streams there's no point
[16:45:20 CEST] <SortaCore> ok, danke
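furq's suggestion as a command line (`input.wav` is a placeholder for whatever lossless source you have):

```shell
# Encode straight to a .flac file instead of wrapping FLAC in Ogg;
# tags end up as Vorbis comments either way, settable via -metadata.
ffmpeg -i input.wav -c:a flac -compression_level 8 output.flac
```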
[16:50:45 CEST] <barhom> https://trac.ffmpeg.org/ticket/7346 < this issue is killing me, atbd did you find any solution to your issue?
[17:00:57 CEST] <atbd> barhom: yes, my issue was caused by old code which did not take rollover into account, so it had started to filter everything once the rollover happened
[17:01:30 CEST] <atbd> ffmpeg is not the culprit
[17:05:01 CEST] <atbd> barhom: have you looked at the ffmpeg code to see in which cases those messages appear?
[17:08:58 CEST] <atbd> it comes from "rfast -g 50 -keyint_min 100 -sc_threshold 0 \
[17:08:58 CEST] <atbd> -b:v 900k -maxrate 900k -bufsize 2000k -c:a libfdk_aac -b:a 64k \
[17:09:10 CEST] <atbd> oops sorry
[17:10:07 CEST] <atbd> it comes from "avfilter_graph_request_oldest" in libavfilter/avfiltergraph.c:1396
[18:34:20 CEST] <IES> I am trying to use ffmpeg to stream a camera to youtube live. Every time I try to do anything but -c:v copy I get extremely high CPU times. For example, simply changing to -c:v libx264 instantly gives me 200% CPU time. This is on a Ubuntu Linux VMWare host. I am using ffmpeg version N-90813-g4ac0ff8 Copyright (c) 2000-2018 the FFmpeg developers. Any ideas or suggestions please?
[18:35:24 CEST] <BtbN> Encoding video does use a lot of CPU. So nothing unusual there.
[18:38:54 CEST] <IES> But that much? I need to add some filters for time, date and weather. It uses so much CPU that it basically crashes the server after a period of time.
[18:39:18 CEST] <IES> There has to be something that can be done. FFMPEG would be useless if it did this for everyone.
[18:42:41 CEST] <bruce-> do you use the ultrafast x264 preset?
[18:42:56 CEST] <IES> Yes if I use anything else I can't even get 30FPS.
[18:43:51 CEST] <IES> Does FFMPEG rely on graphics card, memory, CPU? Memory and CPU are abundant but as a vmware host graphics is not great.
[18:45:59 CEST] <IES> Or is FFMPEG just not a good choice for me to use? Something better?
[18:47:32 CEST] <c_14> in most cases it won't use the GPU. Mainly CPU and a bit of memory
[18:47:40 CEST] <c_14> and 200% for x264 is on the low side
[18:47:49 CEST] <c_14> though it depends on your video resolution
[18:48:00 CEST] <furq> it shouldn't be using that much with a live source
[18:48:10 CEST] <c_14> Oh, input's live
[18:48:30 CEST] <BtbN> depends on the CPU
[18:48:45 CEST] <BtbN> if this is some ARM or crappy Atom CPU
[18:48:59 CEST] <BtbN> Or a 4K feed or something
[18:49:26 CEST] <BtbN> and really, 200%, depending on how many cores there are, is really not a lot for x264
[18:53:51 CEST] <IES> Really? How do people get this to work consistently? I am trying to run a 24/7/365 camera and if it is overtaxing the CPU 24/7 that kind of makes no sense. How does EVERYONE else in the world run a 24/7/365 stream consistently without issues? This is running on an ESXi server that has plenty of CPU and Memory. I can throw 16 cores at it and 64 or even 128 GB of ram but upping either the
[18:53:51 CEST] <IES> CPU or Memory makes 0 difference unless there are some flags I need to set as well to utilize the extra cores or memory.
[18:55:04 CEST] <IES> In this day and age this really seems like it should be easy to accomplish. But again, maybe ffmpeg is not the right tool for this job?
[18:56:58 CEST] <furq> IES: pastebin the command and output
[18:57:18 CEST] <furq> and also i guess /proc/cpuinfo wouldn't hurt (or just summarise it)
[19:07:11 CEST] <DHE> often -preset:v fast (or faster, or veryfast) can make it go better.
[19:07:17 CEST] <DHE> well, faster anyway.
[19:08:12 CEST] <DHE> other people do GPU or other hardware offload. any mid-to-high end nvidia GPU in the last 4 years will have hardware H264 encoding offload if you run the nvidia binary drivers.
[19:08:27 CEST] <DHE> (other options exist, but I don't have the hardware)
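As a sketch of DHE's two suggestions, assuming a 1080p live RTSP camera (the URLs and bitrates here are placeholders, not from the discussion):

```shell
# CPU encode with a faster preset: less compression efficiency, much less CPU.
ffmpeg -i rtsp://camera/stream -c:v libx264 -preset veryfast \
       -b:v 3000k -maxrate 3000k -bufsize 6000k \
       -c:a aac -b:a 128k -f flv rtmp://a.rtmp.youtube.com/live2/STREAM_KEY

# Hardware offload on a recent NVIDIA GPU (binary drivers required); note that
# GPU encoding rarely works inside a VMware guest without device passthrough.
ffmpeg -i rtsp://camera/stream -c:v h264_nvenc \
       -b:v 3000k -maxrate 3000k -bufsize 6000k \
       -c:a aac -b:a 128k -f flv rtmp://a.rtmp.youtube.com/live2/STREAM_KEY
```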
[19:13:06 CEST] <IES> https://pastebin.com/G3LmHUF9
[19:14:49 CEST] <DHE> source resolution?
[19:15:19 CEST] <DHE> 22 megabit video is impressive...
[19:19:21 CEST] <furq> if this is 4k then there's probably not much you can do
[19:19:24 CEST] <furq> other than rescale it
[19:19:45 CEST] <furq> also -crf will override -b:v
[19:25:05 CEST] <BtbN> IES, 200% cpu usage is far from over taxing.
[19:25:46 CEST] <BtbN> The event-stream I ran recently was using over 1000% CPU consistently. Also, that's a terrible way to measure CPU usage.
[19:26:12 CEST] <BtbN> It's meaningless, especially when you don't know the total number of cores and threads.
[19:27:09 CEST] <BtbN> 384k for aac is also insane
[19:27:17 CEST] <BtbN> 128k is fine, unless this is 5.1 or something
[19:28:18 CEST] <BtbN> Also, that CPU of yours, E5-2680 v3, is a 12 core 24 thread CPU. 200% is absolutely nothing on that. It means it's using 2 cores.
[19:28:43 CEST] <BtbN> you should probably even limit it to 6 or 8 threads, more is mostly pointless
[19:29:04 CEST] <BtbN> Given that it's a Xeon E5, you probably even have two of them.
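BtbN's two adjustments — cap the x264 thread count and drop the AAC bitrate to a sane stereo rate — translate to just two flags (the rest of the command stays whatever you already use; `OUTPUT` is a placeholder):

```shell
# Limit x264 to 8 threads and use 128k stereo AAC instead of 384k.
ffmpeg -i rtsp://camera/stream \
       -c:v libx264 -preset veryfast -threads 8 \
       -c:a aac -b:a 128k \
       -f flv rtmp://OUTPUT
```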
[19:33:34 CEST] <ayohmang> im compiling on solaris 11.3, which is a little bit of a pain. i installed gnu grep, diff, sed and awk because the old solaris equivalents of those commands aren't supported. make and make install work fine, but after a reboot my vm won't boot anymore.
[19:35:21 CEST] <ayohmang> solaris 11.3 sparc on oracle vm solaris zone
[21:20:05 CEST] <pi-> If I wish to encode as say .mp4, do I have a choice of available codecs? In which case, how can I choose one that scales well to multicore?
[21:22:14 CEST] <pi-> Also, is there a CPU efficient way to prepend (say 1s) black to a video?
[21:29:27 CEST] <DHE> pi-: it varies by container. mp4 supports a number of codecs, though I imagine h264 will be your preferred video codec
[22:10:17 CEST] <leif> When compiling ffmpeg with the lame mp3 encoder, would it make sense to use a version of lame compiled with --enable-nasm or not? (Or does it not really make much of a difference.)
[22:10:45 CEST] <leif> (I'm having a hard time figuring out what lame and/or ffmpeg do with nasm, so I'm not really sure what the effects are.)
[22:12:09 CEST] <DHE> assembly implementations of algorithms, when done right, are faster than C
[22:12:28 CEST] <DHE> plus you can do things more easily like make use of MMX, SSE, 3dNow, AVX, and whatever other instructions exist
[22:12:58 CEST] <DHE> it's usually better to enable it if available. but once ffmpeg or Lame are built, nasm doesn't really matter anymore.
[22:13:21 CEST] <teratorn> DHE: not necessarily true, and C libraries /help/ you write parallel code, not the reverse
[22:13:37 CEST] <teratorn> at least ones designed to not get in the way of doing so
[22:14:10 CEST] <teratorn> but you're right sometimes it's just easier to use the instruction set
[22:15:39 CEST] <Cracki> I wonder what use a standalone assembler is when you can do inline assembly in C and C++
[22:16:05 CEST] <leif> DHE and teratorn That makes sense.
[22:16:24 CEST] <leif> So basically it's just an optional assembler you can use. That makes sense, thanks. :)
[22:16:36 CEST] <pi-> Would h264 be suitable for preserving > 16kHz audio content?
[22:16:38 CEST] <furq> you should absolutely use it for ffmpeg itself
[22:16:45 CEST] <furq> it'll actually throw a big warning at you if you don't
[22:16:52 CEST] <furq> but i don't think it makes much difference for lame
[22:17:25 CEST] <pi-> DHE: "it varies by container" <-- what do you mean by 'container'?
[22:17:44 CEST] <DHE> Cracki: it's often easier to get upgraded versions of nasm than upgraded binutils when new CPU instructions come out
[22:17:50 CEST] <Cracki> hm
[22:18:46 CEST] <DHE> also i've seen inline asm wreak havoc with some compiler optimizations like LTO. (I do hope they've fixed that)
[22:19:07 CEST] <DHE> pi-: the actual file format. mp4 is the container, and it contains H264 video and, oh, AAC audio or something
[22:21:50 CEST] <pi-> ah, tx
[22:23:54 CEST] <pi-> I don't actually need to mess with the video stream, apart from prepending black for k1 seconds. So I suppose I should stick with whatever file format the src video is in.
[22:24:18 CEST] <Cracki> prepending black picture or silence for some containers means just setting an offset
[22:24:27 CEST] <pi-> * and freezing the last frame for k2 seconds
[22:28:32 CEST] <pi-> Cracki: source videos are guaranteed to be an AVI or MP4
[22:29:22 CEST] <pi-> Is there an easy fix for prepending black and freezing the last frame?
[22:33:19 CEST] <TheAMM> pi-: fyi, h264 is a video codec and irrelevant to audio
[22:33:36 CEST] <TheAMM> As you've found out, AAC can handle over 16khz audio
[22:34:13 CEST] <TheAMM> But I don't know your max range, so it's not really all that useful to say "it can do above X"
[22:36:26 CEST] <TheAMM> You can (ab)use the overlay video filter to hold the last frame (https://video.stackexchange.com/questions/10825/how-to-hold-the-last-frame-when-using-ffmpeg/10833#10833 for example)
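For what it's worth, FFmpeg releases newer than this discussion (4.2 and later) grew a tpad filter that handles both ends of pi-'s request in one pass. This sketch assumes a stereo source and k1=1s of black, k2=2s of held last frame:

```shell
# Prepend 1s of black, clone (freeze) the last frame for 2s,
# and delay/pad the audio to match the new video length.
ffmpeg -i in.mp4 \
       -vf "tpad=start_duration=1:start_mode=add:color=black:stop_duration=2:stop_mode=clone" \
       -af "adelay=1000|1000,apad=pad_dur=2" \
       out.mp4
```

On the 2018 builds in this log, the overlay trick from TheAMM's link plus concatenating a generated black clip is the equivalent route.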
[23:48:17 CEST] <ArsenArsen> does anyone have an implementation of a clean reencode and remux (any format to any other format, with all the conversions done to the recommended codecs of the target format, let's say webm (vp8) to mp4 (h264)) in C or C++?
[23:51:20 CEST] <Mavrik> ffmpeg.c would be the right thing ? :P
[23:52:38 CEST] <ArsenArsen> na I need to programmatically call it without subprocessing (since I want to do more with the mid result, e.g. show it live as it is transcoded)
[23:52:51 CEST] <ArsenArsen> which also means that i need to do it really fast so it can happen in real time
[23:58:01 CEST] <Mavrik> Sure, but my point is - it's the implementation of transcoding that supports all of that ;)
[23:58:09 CEST] <Mavrik> (might wanna look into examples/ directory first tho.)
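The heart of what Mavrik points at — ffmpeg.c, or doc/examples/transcoding.c in the source tree — boils down to the libavcodec send/receive loop. A pseudocode sketch (all setup, error handling, filtering and end-of-stream flushing omitted), with the spot where ArsenArsen could tap the intermediate result marked:

```c
/* Pseudocode: demux -> decode -> (inspect/display) -> encode -> mux */
while (av_read_frame(in_fmt_ctx, pkt) >= 0) {
    avcodec_send_packet(dec_ctx, pkt);                 /* compressed packet in */
    while (avcodec_receive_frame(dec_ctx, frame) == 0) {
        /* raw frame out: this is the "mid result" you can show live */
        avcodec_send_frame(enc_ctx, frame);            /* raw frame in */
        while (avcodec_receive_packet(enc_ctx, out_pkt) == 0) {
            av_packet_rescale_ts(out_pkt, enc_ctx->time_base,
                                 out_stream->time_base);
            av_interleaved_write_frame(out_fmt_ctx, out_pkt); /* mux */
        }
    }
    av_packet_unref(pkt);
}
```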
[00:00:00 CEST] --- Tue Aug 7 2018