[Ffmpeg-devel-irc] ffmpeg.log.20171020

burek burek021 at gmail.com
Sat Oct 21 03:05:01 EEST 2017


[02:24:13 CEST] <chrisjunkie> @rcombs Chris A has mentioned you might be able to help me with a weird TS issue I'm having? :-)
[02:24:32 CEST] <rcombs> possibly; if not, this is the right channel
[02:24:51 CEST] <chrisjunkie> Perfect thank you. I'll try and describe as clearly as I can what I'm trying to do
[02:26:57 CEST] <chrisjunkie> I have a "live" HLS playlist, that has up to 360 segments in the playlist, all at 10 seconds each. What I'm trying to do is regenerate, from a certain starting point in the playlist, new, lower bitrate and framerate segments and clean the input up a bit. What I'm seeing is that the seek isn't accurate for the HLS TS segments, meaning my transcoded segments are out by anywhere from 10-15 seconds from where they should be
[02:27:32 CEST] <chrisjunkie> I've compiled ffmpeg from the 3.4 branch
[02:28:03 CEST] <rcombs> generally you'll want to be on master for support here (though I don't think anything relevant has changed since 3.4)
[02:28:40 CEST] <rcombs> you'll probably also want to post a sample
[02:31:07 CEST] <chrisjunkie> Yeah sorry I did compile from master but it was literally just as the 3.4 branch was being sliced which is why I mentioned it
[02:31:33 CEST] <chrisjunkie> https://pastebin.com/TjjHSwA5
[02:31:52 CEST] <chrisjunkie> I get this message when FFMPEG tries to seek: segments_original/stream.m3u8: could not seek to position 3811.628
[02:32:54 CEST] <chrisjunkie> And then the starting timestamp of the first segment of my output is 3823.666667
[02:33:11 CEST] <chrisjunkie> 12 and a bit seconds out
[02:35:39 CEST] <chrisjunkie> I think that this ticket is relevant https://trac.ffmpeg.org/ticket/5093#comment:10
[02:39:38 CEST] <Johnjay_> test 1 2 3 test 1 2 3
[02:39:43 CEST] <Johnjay_> can you hear me can you hear me over roger over
[02:41:03 CEST] <chrisjunkie> Yes
[02:41:11 CEST] <Johnjay> YES thank you
[02:41:53 CEST] <tdr> i can only read the text, turn your mic up
[02:42:45 CEST] <Johnjay> haha
[02:45:14 CEST] <tdr> more ... its still really faint
[03:13:50 CEST] <chrisjunkie> Hmmm I might be getting somewhere - ffmpeg doesn't use `hls_read_seek` for sliding window playlists
[04:11:46 CEST] <maxofs2d> Hello! I'm using relaxed's static builds ( https://johnvansickle.com/ffmpeg/ ) on a Raspberry Pi 3, on Raspbian Jessie.
[04:11:53 CEST] <maxofs2d> I'm doing RTMP transcoding of an input RTMP stream with it. The armel build works fine, but quite slowly.
[04:12:00 CEST] <maxofs2d> I'm trying to use the armhf build since that's what enables hardware floating point, which, to my understanding, should dramatically speed up x264.
[04:12:05 CEST] <maxofs2d> However, any attempt at doing anything with the armhf build throws up a bus error after it prints the input's stats. Here's a screenshot of the output. https://i.imgur.com/P3a8A51.png
[04:12:24 CEST] <maxofs2d> I could barely find anything about "bus errors" besides one ticket in the ffmpeg tracker from 8 months ago, and I don't believe it's relevant.
[04:14:03 CEST] <maxofs2d> The bus error happens even by just stripping down the output to merely audio (which is what you see in my screenshot)
[04:15:16 CEST] <maxofs2d> I would appreciate any pointers (no pun intended) you may have if this rings a bell with any of you :P
[04:42:26 CEST] <chrisjunkie> So it turns out that the HLS demuxer doesn't seek accurately if it's a live playlist
[04:42:45 CEST] <chrisjunkie> So I've added a fix around that, so the output is visually the same
[04:43:15 CEST] <chrisjunkie> BUT, what I'm seeing is that the segment muxer times are out by 10 seconds, which is my segment length
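[Editor's note: the behaviour described above is consistent with segment-granular seeking. If a demuxer can only resolve a seek to the boundary of the segment containing the target, the result is off by up to one segment duration. A toy sketch of that resolution step follows; it is not ffmpeg's actual hls_read_seek code, and the 400-segment playlist is a made-up extension of the numbers quoted above.]

```python
# Toy model of segment-boundary seeking: return the index and start time of
# the segment that contains the requested timestamp.  Any seek resolved this
# way can land up to one segment duration before the requested position.

def seek_to_segment_start(target, segment_durations):
    """Return (segment_index, segment_start_time) for a seek target."""
    start = 0.0
    for i, dur in enumerate(segment_durations):
        if start + dur > target:
            return i, start
        start += dur
    # Target beyond the playlist: clamp to the last segment.
    return len(segment_durations) - 1, start - segment_durations[-1]

# A playlist of 10 s segments, seeking to the position from the log above.
idx, start = seek_to_segment_start(3811.628, [10.0] * 400)
# -> segment 381, starting at 3810.0, ~1.6 s before the requested 3811.628
```

This only explains an error bounded by the segment length; the 12-second offset reported above is larger, which fits chrisjunkie's later finding that live playlists were not being seeked accurately at all.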
[07:51:49 CEST] <IOException> Hi all! Somebody here?
[07:52:46 CEST] <IOException> How do I stretch/scale showspectrum vertically?
[08:29:36 CEST] <kepstin> IOException: the vertical scaling is based on the sample rate of the input audio
[08:30:02 CEST] <kepstin> so resample to adjust it.
[08:30:12 CEST] <IOException> kepstin I would like to multiply it by some variable to make output more "friendly"
[08:30:31 CEST] <IOException> currently because of low freq 80% of height is "black"
[08:32:04 CEST] <IOException> kepstin sample video > https://pasteboard.co/GPLsRx1.png
[08:32:59 CEST] <kepstin> the high freq, you mean? what's the sample rate of the audio?
[08:33:06 CEST] <IOException> I will appreciate workaround based on showcqt, but I don't know how to remove "spikes" axis from output
[08:33:36 CEST] <IOException> kepstin it is low freq [10-2400 Hz]
[08:34:18 CEST] <IOException> I would like to fill half of frame with "orange" dots. Currently there are "blue" space
[08:35:25 CEST] <kepstin> IOException: the vertical scale of the showspectrum filter is from 0 (bottom) to 1/2 of sample rate (top). To change the scale, resample the audio before putting it into the filter.
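[Editor's note: kepstin's rule can be turned into arithmetic. The display spans 0 to sample_rate/2, so content reaching up to f_max occupies the fraction f_max/(sample_rate/2) of the height, and resampling to 2*f_max makes it fill the frame. The 2400 Hz figure comes from the conversation; the 48 kHz source rate below is an assumption.]

```python
def height_fraction(f_max_hz, sample_rate_hz):
    """Fraction of showspectrum's height used by content up to f_max_hz
    (the vertical axis spans 0 .. sample_rate/2)."""
    return f_max_hz / (sample_rate_hz / 2)

def target_rate(f_max_hz):
    """Sample rate at which f_max_hz sits at the very top of the display."""
    return 2 * f_max_hz

height_fraction(2400, 48000)  # -> 0.1: only 10% of the frame is used
target_rate(2400)             # -> 4800: resample to 4800 Hz to fill it
```

With hypothetical file names, that suggests a filter chain along the lines of `aresample=4800,showspectrum=...` before encoding.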
[08:36:05 CEST] <IOException> :) can ffmpeg resample audio?
[08:36:41 CEST] <IOException> Just googled it. Seems it can
[08:36:42 CEST] <kepstin> of course. https://ffmpeg.org/ffmpeg-filters.html#aresample-1
[08:39:44 CEST] <IOException> Original command: https://gist.github.com/shmutalov/fa28d06919a12517f5f90a4cff5f8ceb
[09:13:36 CEST] <IOException> How to scale video without changing resolution?
[09:14:31 CEST] <Nacht> That's an odd question.
[09:14:58 CEST] <Nacht> Scaling implies making it larger or smaller, thus changing the resolution
[09:15:15 CEST] <IOException> say, my video is 640x480, I want to scale all frames, but don't change original resolution
[09:15:25 CEST] <IOException> with croping
[09:15:38 CEST] <Nacht> So you want to crop it, but keep the original resolution
[09:15:44 CEST] <IOException> yes
[09:16:52 CEST] <Nacht> I reckon with a filter, cropping first, then scaling it afterwards
[09:17:36 CEST] <IOException> Nacht can you help me with filtering without scale?
[09:18:35 CEST] <IOException> Nacht > https://pasteboard.co/GPLsRx1.png I would like to scale spectrum to remove empty space
[09:18:53 CEST] <IOException> empty blue space
[09:26:01 CEST] <Nacht> IOException: So it should be something like:  -vf "crop=<your input>, scale=640:480,setdar=4:3"
[09:27:02 CEST] <IOException> Nacht kepstin aresample did the trick. But spectrum decreased in length
[09:27:36 CEST] <IOException> https://pasteboard.co/GPLPwEw.png
[09:27:43 CEST] <Nacht> judging from your image, it should be -vf "crop=640:240:0:0, scale=640:480,setdar=4:3"
[09:28:05 CEST] <Nacht> judging from your image, it should be -vf "crop=640:240:0:0, scale=320:480,setdar=4:3"
[09:28:14 CEST] <Nacht> slight brainfart there
[09:28:32 CEST] <Nacht> argh
[09:28:37 CEST] <Nacht> Right, its way too early
[09:28:51 CEST] <Nacht> -vf "crop=320:480:0:0, scale=640:480,setdar=4:3"
[09:29:25 CEST] <Nacht> it will crop a 320x480 image from the top left corner, and stretch it to 640x480 with an aspect ratio of 4:3
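[Editor's note: the geometry behind that last command: cropping 320x480 out of 640x480 and scaling back to 640x480 stretches each pixel 2x horizontally and leaves the vertical axis untouched, while setdar=4:3 keeps players displaying the result at 4:3. A quick check (the helper name is just for illustration):]

```python
def stretch_factors(crop_wh, out_wh):
    """Per-axis stretch when a crop of size crop_wh (width, height)
    is scaled to out_wh (width, height)."""
    (cw, ch), (ow, oh) = crop_wh, out_wh
    return ow / cw, oh / ch

# crop=320:480 then scale=640:480 doubles each pixel horizontally only
stretch_factors((320, 480), (640, 480))  # -> (2.0, 1.0)
```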
[09:30:12 CEST] <IOException> Nacht cropping will decrease quality
[09:30:33 CEST] <IOException> so I am not using it
[09:33:13 CEST] <Nacht> Scaling will decrease quality, not cropping
[09:38:01 CEST] <blap> can you crop without reencoding?
[09:39:39 CEST] <Nacht> Whenever you change the actual pixels, you need re-encoding
[09:41:16 CEST] <IOException> Result video: https://drive.google.com/file/d/0B2AoR3-Umz1Pd0pVb3F4a05RclE/view?usp=sharing
[09:42:08 CEST] <IOException> now I need to stretch the spectrum in length
[09:43:51 CEST] <blap> THen cropping will decrease quality.
[09:45:27 CEST] <IOException> blap I am not familiar with ffmpeg... :'( All I want is to map an audio source to a video spectrum, where the spectrum will fill the full frame at the end of the video
[09:45:49 CEST] <IOException> if I crop it then scale it again I will lose pixels
[09:46:10 CEST] <blap> what, with an fft?
[09:46:16 CEST] <IOException> what is it?
[09:46:22 CEST] <blap> fast fourier transform
[09:46:32 CEST] <blap> or you want an oscilloscope display?
[09:47:11 CEST] <IOException> I want an oscilloscope display
[09:47:40 CEST] <IOException> I think I can do reverse algo (as Nacht and kepstin wrote)
[09:48:00 CEST] <IOException> render video with big resolution and crop it
[09:50:36 CEST] <blap> ah i didn't know you could do that
[09:51:19 CEST] <IOException> my brain will explode :)
[09:51:36 CEST] <IOException> it works, but it shows spectrum very small
[10:19:57 CEST] <IOException> Nacht scaling after the crop doesn't work
[10:20:33 CEST] <IOException> oh I forgot setsar
[10:20:37 CEST] <IOException> *setdar
[11:20:51 CEST] <luc4> Hello! I see that ffplay is using direct3d by default. Any idea how to disable it and use only the CPU?
[11:44:40 CEST] <IOException> another problem. How to sync video duration with audio
[12:30:34 CEST] <IOException> how to use -vsync parameter? Documentation is not clear
[12:32:58 CEST] <jojva> Hi. When encoding an H264 stream, should the SPS be present before each PPS?
[13:25:43 CEST] <IOException> kepstin Nacht blap there is result: https://www.youtube.com/watch?v=oaZWR61HVe0
[14:54:51 CEST] <charly> Hello, I'm using libavformat/libavcodec on a xeon E5-1660 v4 (8 cores, 16 threads) to decode high quality HEVC/H264 TS streams with high bitrate (>100Mbps) in real-time.
[14:55:09 CEST] <charly> It works well but I need to set the thread number at 100 to be real-time, with the warning "Application has requested 100 threads. Using a thread count greater than 16 is not recommended". The problem is that the latency is huge and memory consumption huge too with this number of threads.
[14:55:19 CEST] <charly> How can I use a reasonable amount of threads (16? 32?) to decrease the latency/memory while staying real-time?
[14:56:42 CEST] <BtbN> Using as many threads as the machine has threads is the default.
[14:57:22 CEST] <BtbN> And going beyond that doesn't have benefits usually
[14:57:26 CEST] <IOException> using 100 threads (while your processor gives you 16) is meaningless
[14:57:43 CEST] <IOException> also each thread will consume memory + context switch cost
[14:57:57 CEST] <charly> yes but i have about 43fps when i use default threads config
[14:58:41 CEST] <charly> ~ 52fps with 50 threads, and ~ 63fps with 100 threads
[15:01:39 CEST] <charly> i know this is not logical and this is why i ask how to fix this
[15:03:18 CEST] <BtbN> Depending on what you're encoding and with what settings, 8 cores just aren't enough for HEVC
[15:06:16 CEST] <charly> i don't know how exactly it was encoded, but result is: hevc (Main 10) ([36][0][0][0] / 0x0024), yuv420p10le(tv, bt2020nc/bt2020/arib-std-b67), 3840x2160 [SAR 1:1 DAR 16:9], 59.94 fps
[15:07:16 CEST] <charly> so it's an HEVC 10bits HLG stream at 105Mbps
[15:09:20 CEST] <charly> 8 cores (16 threads) seems to be enough because it works with 100 decoding threads
[15:11:05 CEST] <BtbN> probably because it degrades in quality enough.
[15:12:40 CEST] <charly> i didn't notice any quality reduction but maybe
[15:37:22 CEST] <JEEB> charly: the HEVC decoder is not given much optimization love in SIMD which should be able to help you. mostly because nobody had a use case / requirement. if you have one you could think about sponsoring some HEVC SIMD work
[15:37:32 CEST] <JEEB> (or doing it yourself of course if you like to get your hands dirty)
[15:38:41 CEST] <DHE> sadly the CPU scheduler for threads doesn't aggressively migrate threads between CPUs. if you have a CPU idle and another CPU with 2 jobs trying to run on it, the idle CPU may remain idle for an extended period before the system really does anything about it
[15:39:00 CEST] <DHE> bringing the number of threads way up helps hide this in the noise of running threads
[15:40:57 CEST] <charly> JEEB, I understand that the decoder is not fully optimized and performance can probably be better, but my problem is that the number of threads must be huge to decode my stream so that the performance is sufficient for my application
[15:42:00 CEST] <JEEB> yes, you're working around the thing not being fast enough by spawning enough buffer (and latency)
[15:42:17 CEST] <charly> DHE, any tips to optimize the core distribution ?
[15:42:31 CEST] <JEEB> numactl lets you control on which core(s) an app can go
[15:42:38 CEST] <JEEB> but not sure how much that will help you
[15:42:54 CEST] <JEEB> s/cores/NUMA nodes/
[15:43:00 CEST] <charly> I already use it, also with nice
[15:43:41 CEST] <JEEB> yup
[15:43:41 CEST] <charly> but if i understand well, the problem is inside the hevc decoder, not the application itself
[15:44:13 CEST] <JEEB> yes, the HEVC decoder is just not fast enough. when you create umpteen frame threads you create enough buffer for any speed fluctuations to possibly be non-problematic
[15:44:23 CEST] <DHE> an E5-1660 isn't going to be multicore, so no numa involved
[15:44:26 CEST] <DHE> err, multisocket
[15:44:36 CEST] <JEEB> because with frame threads each thread is decoding its own image
[15:44:49 CEST] <JEEB> (and that's why you have obscene memory usage)
[15:45:04 CEST] <DHE> quick question. are all memory channels populated?
[15:45:27 CEST] <DHE> eg: if there are 3 off-coloured RAM slots, you have a memory stick in all 3 of them
[15:46:33 CEST] <JEEB> not sure if that would help too much even if the memory bandwidth was working fully :P
[15:46:37 CEST] <charly> i'm using 4 channel slots with 8 available
[15:47:16 CEST] <charly> 4*4GB at 2400MHz
[15:47:17 CEST] <JEEB> basically what I see is that some frames take longer to decode than others, and unless you have a very large buffer it isn't averaging out fast enough
[15:47:25 CEST] <DHE> JEEB: on a personal note, I once had a server built and the manufacturer was a fscking moron. only 1 stick in the quad-channel controller.
[15:47:32 CEST] <JEEB> sure
[15:47:47 CEST] <JEEB> this is just a case of "HEVC is slow, and yes you can work around by using frame threading as a buffer"
[15:48:18 CEST] <DHE> put in a 2nd stick for testing, +30% performance boost
[15:48:23 CEST] <JEEB> so yes, while from threading perspective 100 threads just ain't making sense
[15:48:33 CEST] <JEEB> when you have a not-always-fast-enough decoder
[15:48:44 CEST] <JEEB> the added latency buffer due to frame threading helps average things out
[15:48:55 CEST] <JEEB> so that the larger intra frames which take longer etc don't drop the average
[15:49:13 CEST] <JEEB> at least that's how I see this
[15:50:12 CEST] <JEEB> so the two ways to work around this without a huge amount of threads is to have a separate buffer of images to be fed to the user, which of course also takes memory
[15:50:25 CEST] <JEEB> or just optimizing the HEVC decoder
[15:50:28 CEST] <DHE> wonder if you could use an external app like mbuffer or pv to buffer the source material...
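[Editor's note: JEEB's point, that a deep frame-thread pipeline acts as a latency buffer hiding a decoder that isn't quite fast enough, can be illustrated with a toy model: decode frames one after another, start playback only once N frames are buffered, and count frames that finish after their display deadline. All numbers are made up (16 ms P-frames, 100 ms intra frames at 59.94 fps), and the model ignores actual parallel decoding; it only shows how buffering absorbs the deficit over a finite window at the cost of latency and memory.]

```python
def underruns(decode_times, fps, buffer_frames):
    """Toy playback model: frames are decoded sequentially and playback at
    `fps` starts once `buffer_frames` frames are buffered.  Returns how many
    frames finish decoding after their display deadline."""
    finishes = []
    t = 0.0
    for d in decode_times:
        t += d
        finishes.append(t)
    if buffer_frames > len(finishes):
        return 0
    start = finishes[buffer_frames - 1]      # playback begins here
    late = 0
    for i, f in enumerate(finishes):
        deadline = start + i / fps
        if f > deadline + 1e-9:
            late += 1
    return late

# 600 frames at 59.94 fps: every 60th frame is a slow intra frame.
times = [0.100 if i % 60 == 0 else 0.016 for i in range(600)]

underruns(times, 59.94, 2)     # small buffer: frames miss their deadlines
underruns(times, 59.94, 100)   # deep buffer: no misses over this window
```

A deeper buffer does not make the decoder faster; it just delays the start enough that the slow frames are averaged out over the clip, which matches the "latency and memory for real-time" trade-off described above.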
[15:51:57 CEST] <JEEB> charly: anyways congratulations on being the first person around here that cares about HEVC content that makes the lavc decoder derp :P
[15:52:07 CEST] <charly> i already use the first workaround, but the 16G of memory is a little bit tight
[15:52:24 CEST] <JEEB> charly: btw if you want to test out some patches that would require rewriting to get into upstream
[15:52:31 CEST] <JEEB> but have been noted as helping people
[15:52:50 CEST] <JEEB> http://git.1f0.de/gitweb?p=ffmpeg.git;a=shortlog;js=1
[15:52:59 CEST] <JEEB> the two OpenHEVC related ports
[15:53:07 CEST] <JEEB> and the x86/hevc things after that
[15:53:18 CEST] <JEEB> that might speed it up a bit
[15:53:30 CEST] <JEEB> (they are intrinsics which upstream FFmpeg doesn't accept)
[15:55:11 CEST] <charly> so http://git.1f0.de/gitweb?p=ffmpeg.git;a=commit;h=61b83379ef8c19117427cfe3ca2c14c4612dcc2f;js=1 and http://git.1f0.de/gitweb?p=ffmpeg.git;a=commit;h=cbcae719f8d03273b9c3e481c5f0e2f543c39682;js=1 ?
[15:55:32 CEST] <JEEB> yes
[15:55:49 CEST] <JEEB> and then there's two x86/hevc ones
[15:56:04 CEST] <JEEB> the DECLARE_ALIGNED and correctly mark as SSE4 ones
[15:56:33 CEST] <JEEB> you're not building with MSVC most likely so the last hevc SIMD related thing isn't required I think (which also disables one of the optimization paths)
[15:57:01 CEST] <charly> ok
[16:09:24 CEST] <Johnjay> are you guys talking about mingw again?
[16:09:46 CEST] <JEEB> no
[16:10:01 CEST] <Johnjay> damn I thought for sure when you said he wasn't building with MSVC
[16:24:34 CEST] <Johnjay> ee
[16:24:48 CEST] <Johnjay> *ffmpeg I managed to compile with mingw
[16:24:58 CEST] <Johnjay> which i still find very cool
[17:00:57 CEST] <chamath> Hi Guys, I have a problem with zooming in to a video. I need to zoom in to a video, but not as a zoom animation. Just keep the zoom level constant during the video play time. This is the ffmpeg command I'm trying.
[17:01:00 CEST] <chamath> https://pastebin.com/6sUDSnCJ
[17:01:36 CEST] <chamath> But it increases the output video duration to twice the input video duration.
[17:02:07 CEST] <chamath> Any idea about that? Is there any other way that I can get this done?
[17:02:13 CEST] <Johnjay> wow. that's a single command?
[17:02:58 CEST] <chamath> yea
[17:04:42 CEST] <c_14> try getting rid of the setpts?
[17:05:20 CEST] <c_14> or if you think it's the zoompan filter, just crop,scale
[17:51:28 CEST] <ricci> if i encode a file multiple times (from the same input), the output's hash is different every time. why?
[17:52:38 CEST] <BtbN> Because the metadata includes the date and other stuff that changes.
[17:53:13 CEST] <ricci> how do i get creation time with ffprobe?
[17:53:25 CEST] <BtbN> ffprobe myfile.something
[17:55:15 CEST] <furq> ricci: you can use the hash muxer if you just want to compare the hashes of the actual streams
[17:55:22 CEST] <furq> !muxer hash @ricci
[17:55:22 CEST] <nfobot> ricci: http://ffmpeg.org/ffmpeg-formats.html#hash-1
[17:56:18 CEST] <ricci> BtbN: ffprobe doesn't show creation time, just duration and other metadata
[17:59:06 CEST] <furq> ricci: -show_format
[18:01:13 CEST] <ricci> creation time is not listed
[18:01:54 CEST] <ricci> encoder is libopus, format is ogg
[18:02:10 CEST] <DHE> there's also a bitexact option that strips out much of that metadata and produces something more consistent between runs or even versions of ffmpeg
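[Editor's note: to make the point above concrete: two files whose stream payloads are byte-identical still hash differently if the container carries changing metadata such as a timestamp, which is why the hash muxer (hashing only the stream data) or the bitexact flags give reproducible results. A toy illustration with made-up byte strings, not real container structures:]

```python
import hashlib

# Identical "stream" payload wrapped with different (timestamped) metadata.
payload = b"opus-stream-bytes"                   # stands in for the audio data
file_a = b"creation_time=2017-10-20;" + payload
file_b = b"creation_time=2017-10-21;" + payload

whole_a = hashlib.sha256(file_a).hexdigest()     # whole-file hashes differ
whole_b = hashlib.sha256(file_b).hexdigest()
stream_only = hashlib.sha256(payload).hexdigest()  # stream-only hash is stable
```

This is what comparing `ffmpeg -i in -f hash -` outputs does for the actual streams, as furq suggests above.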
[18:05:46 CEST] <ricci> i don't know how to get creation time, but my main question is answered. i just wanted to know if audio streams are different. thanks to all
[18:24:21 CEST] <lagzilla> is it possible to bring 2 audio streams in as mono on the fly like ffmpeg -i foo.mp3 -ac 1 -i bar.mp3 -ac 1 -filter_complex "[0:a][1:a]amerge=inputs=2[aout]" -map "[aout]" output.mp3
[18:30:56 CEST] <JEEB> lagzilla: "on the fly"?
[18:32:12 CEST] <lagzilla> yeah so the source audio stream is stereo, i'd like to bring it into ffmpeg as mono
[18:32:43 CEST] <JEEB> I still don't understand the "on the fly" part? how dynamic does the solution need to be?
[18:33:26 CEST] <JEEB> or do you just need to separate the left/right of one input and similarly for the second input and get that somehow?
[18:33:34 CEST] <JEEB> please explain what exactly it is that you want to do
[18:35:15 CEST] <lagzilla> So I need to have audiofile1.mp3 play only on the right side, then audiofile2.mp3 play on the left side
[18:37:15 CEST] <JEEB> yes
[18:37:29 CEST] <JEEB> you can select left from the other and right from the other
[18:37:37 CEST] <JEEB> it's all in https://www.ffmpeg.org/ffmpeg-all.html
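[Editor's note: one possible filtergraph for what lagzilla describes, sketched here under the assumption that both inputs are stereo; the pad labels like [a0] are arbitrary. Each input is downmixed to mono with pan, then the two mono streams are joined into one stereo stream with file 1 on the right channel and file 2 on the left:]

```python
def left_right_graph():
    """Build a -filter_complex string: downmix each of two stereo inputs
    to mono, then join them with input 0 on FR and input 1 on FL."""
    return (
        "[0:a]pan=mono|c0=0.5*c0+0.5*c1[a0];"
        "[1:a]pan=mono|c0=0.5*c0+0.5*c1[a1];"
        "[a0][a1]join=inputs=2:channel_layout=stereo:map=0.0-FR|1.0-FL[aout]"
    )
```

It would be used roughly as `ffmpeg -i foo.mp3 -i bar.mp3 -filter_complex "<graph>" -map "[aout]" output.mp3`, per the pan and join filter documentation linked above.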
[23:37:26 CEST] <AlRaquish> I asked something similar yesterday, is it possible to make iterative/recursive filters? I am trying to delay certain pixels by some frames depending on their position. Is it possible to achieve that without using too much bash? My script is working, but I assume it would be faster if it was pure ffmpeg.
[23:41:04 CEST] <durandal_1707> AlRaquish: hmm, how you do it?
[23:41:32 CEST] <AlRaquish> https://pastebin.com/MSGNcw63 here is the script, which is rather ugly
[23:41:46 CEST] <AlRaquish> I specifically want to delay columns/ rows so it's not too hard
[23:42:01 CEST] <AlRaquish> I split it into n videos cropped and delayed, and then stack them back
[23:43:11 CEST] <AlRaquish> I know that I don't have to split first and then merge. I could split one column, delay it, and then stack it to the output video. My way is faster though because it's generally dealing with small and similar-sized videos.
[23:46:28 CEST] <AlRaquish> durandal_1707: I am wondering if I can construct some kind of complex filter that would do the same job, or that I could run repeatedly on the video to get more subdivisions. Maybe I could construct a huge filter pipeline as a string in bash and then give it to ffmpeg?
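[Editor's note: AlRaquish's last idea, building one big filtergraph string in a script and handing it to a single ffmpeg invocation, is feasible for the column-delay case without intermediate files, using split, crop, tpad and hstack. A sketch, assuming a constant-frame-rate input and a width evenly divisible by the strip count; all names are hypothetical:]

```python
def column_delay_graph(width, height, n, frames_per_strip):
    """Build a -filter_complex string that splits a video into `n` vertical
    strips, delays strip i by i*frames_per_strip frames (tpad cloning the
    first frame), and stacks the strips back together."""
    strip_w = width // n
    parts = ["[0:v]split=%d" % n + "".join(f"[s{i}]" for i in range(n)) + ";"]
    for i in range(n):
        parts.append(
            f"[s{i}]crop={strip_w}:{height}:{i * strip_w}:0,"
            f"tpad=start={i * frames_per_strip}:start_mode=clone[c{i}];"
        )
    parts.append("".join(f"[c{i}]" for i in range(n)) + f"hstack=inputs={n}[out]")
    return "".join(parts)

# e.g. a 640x480 video in 4 strips, each delayed 5 frames more than the last
graph = column_delay_graph(640, 480, 4, 5)
```

This still can't express true per-pixel recursion inside one filtergraph, which is the limitation durandal_1707 points out below; it only covers the column/row case.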
[23:50:25 CEST] <durandal_1707> if movie filter supported seeking you could do that
[23:50:56 CEST] <durandal_1707> but otherwise its not currently possible
[23:51:51 CEST] <AlRaquish> Hmm yeah, I wonder if an advanced filter scripting language could be made for ffmpeg
[23:52:23 CEST] <AlRaquish> Is there generally need for such a thing?
[23:54:03 CEST] <durandal_1707> it is....
[23:55:17 CEST] <AlRaquish> Yeah, it is already very functional, but maybe being able to do everything from inside ffmpeg would make it possible for scripts to be cross-platform
[23:55:57 CEST] <durandal_1707> they are already
[23:56:14 CEST] <durandal_1707> so not sure what you mean
[23:56:27 CEST] <AlRaquish> Well, in my case I had to use bash to get the final result
[23:57:06 CEST] <AlRaquish> If there was some way to implement loops/recursion in ffmpeg filters, I don't think there would be any need for outside scripting
[00:00:00 CEST] --- Sat Oct 21 2017