[Ffmpeg-devel-irc] ffmpeg.log.20190917

burek burek at teamnet.rs
Wed Sep 18 03:05:03 EEST 2019


[01:22:42 CEST] <realies> any way to accelerate the generation of the showwaves filter?
[03:48:55 CEST] <karanveersingh> Hi all, yesterday I was struggling with live streaming of 4K videos
[03:49:39 CEST] <karanveersingh> the system configuration is 2 sockets, 88 cores, with 8 GB of RAM
[03:51:21 CEST] <karanveersingh> below is the first command I was trying; the issue is that live streaming runs perfectly with 2 simultaneous 4K streams, but when the 3rd stream starts, everything goes bad and the fps starts dropping
[03:51:35 CEST] <karanveersingh> ffmpeg -re -i Stranger09.mkv -c:v libx264 -b:v 50M -preset ultrafast -tune zerolatency -b:a 128k -s 4096x2160 -bufsize 5M -x264opts keyint=500 -g 60 -pix_fmt yuv420p -f flv rtmp://194.167.137.11/live-test/Strange09_4k
[03:52:28 CEST] <karanveersingh> Some people recommended that I omit/edit the parameters mentioned below, but it's still not working
[03:52:43 CEST] <karanveersingh> Remove -tune zerolatency
[03:53:13 CEST] <karanveersingh> keyint=500 and -g 60 do the same thing, so don't use them together
[03:54:01 CEST] <karanveersingh> need to limit threads by adding -threads x
[03:55:39 CEST] <karanveersingh> add an audio codec: -c:a aac
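The pieces of advice quoted above combine into a revised command along these lines (a sketch, not a verified fix: the thread count is a placeholder, the scaling option is dropped on the assumption the source is already 4K, and the file name, bitrates, and RTMP URL are copied from the original command):

```shell
# Revised command: -tune zerolatency removed, the redundant
# -x264opts keyint=500 dropped in favour of a single -g,
# encoder threads capped (8 is a placeholder value),
# and an explicit AAC audio codec added.
ffmpeg -re -i Stranger09.mkv \
  -c:v libx264 -preset ultrafast -b:v 50M -bufsize 5M \
  -g 60 -pix_fmt yuv420p -threads 8 \
  -c:a aac -b:a 128k \
  -f flv rtmp://194.167.137.11/live-test/Strange09_4k
```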
[03:56:32 CEST] <another> how's your cpu usage?
[03:56:38 CEST] <karanveersingh> very high
[03:57:59 CEST] <another> hmm.. any reason why you're scaling?
[03:59:08 CEST] <karanveersingh> yes, I need to see how many videos I can run on a specific system with a specific drive
[03:59:40 CEST] <karanveersingh> after 2 videos, nothing works well
[04:00:31 CEST] <another> well, are all of your cores at 100% with 2 streams?
[04:00:57 CEST] <karanveersingh> no, it's 50%
[04:01:10 CEST] <another> average over all cores?
[04:02:48 CEST] <another> how's your memory usage?
[04:03:11 CEST] <another> 8GB seems rather low
[04:03:55 CEST] <karanveersingh> CPU load average is 524.00 ~ 900.00
[04:04:04 CEST] <karanveersingh> memory usage is all high
[04:04:18 CEST] <karanveersingh> I created swap larger than memory
[04:04:24 CEST] <karanveersingh> Ram + 2
[04:04:40 CEST] <karanveersingh> * on the disk
[04:05:18 CEST] <another> ssd or hdd?
[04:05:37 CEST] <another> how hard are you swapping?
[04:08:43 CEST] <karanveersingh> nvme QLC ssd
[04:08:59 CEST] <karanveersingh> much worse performance than TLC
[04:09:50 CEST] <karanveersingh> On TLC I could run IO for up to 15 videos, and then saw a drop in IO and an increase in IO wait time
[04:10:47 CEST] <karanveersingh> here on QLC it's all broken after 7 videos; now I can see no activity on the drive, and the ffmpeg command has not failed yet
[04:11:35 CEST] <karanveersingh> the end result I need is to run the max number of videos with no frame drops
[04:11:54 CEST] <karanveersingh> right now I am able to reach up to 2 videos with no frame drops
[04:12:42 CEST] <karanveersingh> The 15 video streams I ran were just to see SSD IO activity
[04:15:32 CEST] <another> are all your cores maxed with 3 streams? how is your memory?
[04:16:40 CEST] <karanveersingh> no, cores are not maxed out at 3, and memory is still there, like 50%
[04:23:04 CEST] <another> hmm
[04:23:48 CEST] <another> network?
[04:24:13 CEST] <karanveersingh> single node, so streaming and transcoding are all on it
[04:24:28 CEST] <another> no i meant uplink
[04:24:32 CEST] <DHE> I think he meant network storage of some sort
[04:24:42 CEST] <another> that too
[04:29:31 CEST] <karanveersingh> no network storage; I am not sending any packets out of the node
[04:29:49 CEST] <karanveersingh> all operations are carried out on a single node
[04:30:05 CEST] <karanveersingh> so the network does not come into play
[04:31:42 CEST] <another> are you streaming to localhost?
[04:31:49 CEST] <karanveersingh> yes
[04:32:42 CEST] <qbmonkey> So I'd like to use ffmpeg for any xvid encoding I do. I have gotten ffmpeg on par with mencoder quality, but outputs are 10-20% larger. Here are the options I use in both. https://pastebin.com/QKFFmxhT
[04:33:16 CEST] <qbmonkey> I think I've exhausted my options.
[04:34:32 CEST] <another> you have a special need for xvid?
[04:34:59 CEST] <qbmonkey> yes
[04:35:18 CEST] <qbmonkey> It plays well on ARM SoCs with no video accel
[04:35:18 CEST] <pink_mist> eww, why in the world would you use xvid in 2019? are you going to play it back on 486 CPUs?
[04:35:32 CEST] <pink_mist> oh, guess so
[04:35:33 CEST] <qbmonkey> or near that, when you have no video accel
[04:36:19 CEST] <qbmonkey> That's actually the response I expected.
[04:37:54 CEST] <another> i'm afraid i can't help you
[04:38:33 CEST] <another> karanveersingh: are you reading all the input from the same disk?
[04:38:57 CEST] <qbmonkey> I imagine that this is a known difference between ffmpeg and mencoder, but no one is really invested in it.
[04:45:51 CEST] <kepstin> the obvious difference between those two command lines is that the ffmpeg one has audio and the mencoder one doesn't
[04:45:57 CEST] <kepstin> that would account for the size difference
[04:58:43 CEST] <qbmonkey> I forgot to omit that. When testing, both are without audio
[04:59:15 CEST] <qbmonkey> mencoder does not support the audio codec I use in the mkv container.
[04:59:55 CEST] <qbmonkey> But that is why the example has audio with ffmpeg.
[05:00:02 CEST] <qbmonkey> The difference is not the audio.
[05:05:11 CEST] <qbmonkey> I thought the issue was in vhq or bvhq. Can't remember which one (it's been a while), but I think ffmpeg didn't have an option for it.
[05:05:46 CEST] <qbmonkey> Or it had a hard-coded default. Or I just didn't know what I was doing.
[05:13:57 CEST] <qbmonkey> It may also have been that the strength for one or both of those settings was not supported in ffmpeg.
[05:16:07 CEST] <qbmonkey> Adjusting the setting was supported, but not all values were available in ffmpeg.
[05:17:29 CEST] <qbmonkey> It was probably either a -me* option or the -mbd (ffmpeg).
[05:18:08 CEST] <qbmonkey> I believe vhq/bvhq are the related mencoder options.
[05:21:20 CEST] <qbmonkey> Otherwise, the actual xvid code should be pretty much the same.
[05:22:10 CEST] <qbmonkey> It seemed like there was just no way I could find to input the same options. Pretty close, though.
[11:36:37 CEST] <auri_> Hello! Is there any way to make the DASH muxer start writing segments from a specific number?
[11:38:03 CEST] <auri_> I saw a similar question on SO a while back, no real answers there, thought I should ask here.
[11:43:28 CEST] <BtbN> https://git.videolan.org/?p=ffmpeg.git;a=blob;f=libavformat/dashenc.c;h=a462876c13a191e484b91334d9d159bebe261028;hb=HEAD#l457 does not look like it
[11:46:10 CEST] <auri_> huh
[11:46:41 CEST] <auri_> the Plex "fork" of the project (which they distribute because of the license) has an opt called "-skip_to_segment"
[11:46:57 CEST] <auri_> is there any reason why this option is not part of upstream, technical considerations at least?
[11:47:06 CEST] <durandal_1707> nobody sent patch
[11:47:09 CEST] <cehoyos> Yes, definitely
[11:47:28 CEST] <auri_> oh, well that makes more sense
[11:49:33 CEST] <BtbN> I don't really see why you would need that though?
[11:49:44 CEST] <auri_> Ah, let me explain
[11:49:48 CEST] <BtbN> The names of the segments seem pretty arbitrary to me
[11:51:08 CEST] <auri_> I'm building an application feature that is similar to Plex and I thought I could use ffmpeg's dash muxer to create the segments (as ffmpeg is already extensively used in the codebase)
[11:51:27 CEST] <auri_> one of the considerations is that users are allowed to seek past currently encoded segments
[11:51:41 CEST] <auri_> in which case we stop the encoder and start a new one
[11:52:03 CEST] <auri_> the issue is that to share an output directory, we'd have to change the starting segment number
[11:52:25 CEST] <auri_> otherwise we'd have to create a new temporary directory for each time a user uses the seeking functionality
[11:52:36 CEST] <auri_> and then figure out how to serve them appropriately
[11:53:54 CEST] <auri_> kind of a bad explanation on my part but I hope it makes enough sense
[11:59:22 CEST] <BtbN> you could also just change the name of the segments a little bit each time
[12:00:36 CEST] <BtbN> But if you want to end up with one big playlist with no need to re-encode old segments, that might be a bit annoying indeed. But do you calculate the proper index of each segment, so that gaps while seeking result in the correct gap in the indices?
[12:01:02 CEST] <auri_> Yep
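The per-segment index bookkeeping being confirmed here can be sketched as simple arithmetic (the segment duration and seek position below are hypothetical values, not taken from the conversation):

```shell
# With fixed-length segments, the segment index to resume at after a
# seek is floor(seek_seconds / segment_seconds); encoding new segments
# starting from that index leaves the correct gap in the numbering.
seek=137      # hypothetical seek position, in seconds
segdur=4      # hypothetical nominal segment duration, in seconds
start_segment=$((seek / segdur))
echo "$start_segment"   # integer division gives 34
```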
[12:02:10 CEST] <auri_> -skip_to_segment would greatly reduce the amount of logic necessary on our end
[12:02:45 CEST] <auri_> I guess I should extract the patch and send it in the development channel
[12:02:59 CEST] <auri_> not too comfortable with ffmpeg code to submit it myself, though someone else might be interested
[12:03:15 CEST] <JEEB> unfortunately that then 100% depends on people's interest level :)
[12:03:41 CEST] <auri_> indeed
[12:03:44 CEST] <JEEB> (also you'd have to figure out the copyright, which could also just be "Plex" <blah at plex.tld>)
[12:03:56 CEST] <auri_> yeah, that I also plan on doing
[12:04:10 CEST] <auri_> though to be fair it's very simple and not difficult to just
[12:04:16 CEST] <auri_> white room reimplement it
[12:04:22 CEST] <auri_> to avoid copyright issues
[12:05:04 CEST] <BtbN> The big issue with those kinds of patches usually is that they disregard a lot of corner cases and break a bunch of other stuff.
[12:05:28 CEST] <auri_> yep, that's my biggest concern for this
[12:05:50 CEST] <auri_> which is why I'm too afraid to submit it myself, lol
[12:05:56 CEST] <BtbN> I'm also maintaining a small collection of special case HLS segment muxer patches that fit my use case, but just plain break a bunch of others and are horrible code.
[12:06:09 CEST] <BtbN> Just grab the patch and send it really
[12:06:13 CEST] <BtbN> People can then discuss
[12:08:03 CEST] <JEEB> I was surprised that the segment muxer had HLS stuff
[12:08:14 CEST] <JEEB> even though HLS muxer does pretty much the same
[12:08:41 CEST] <JEEB> and then some guy wanting to add secondary streams to webvtt because it didn't work otherwise with segment muxer's HLS output...
[12:09:06 CEST] <JEEB> while the HLS muxer does support webvtt, even though it doesn't support it in the master playlist generation code
[12:21:12 CEST] <auri_> yeah, it seems that Plex provided no licensing notices whatsoever
[12:21:23 CEST] <auri_> so the copyright is somewhere up in the air
[12:32:35 CEST] <BtbN> Not really. If you patch (L)GPL software, your patch is (L)GPL.
[12:32:50 CEST] <JEEB> yea
[12:33:20 CEST] <JEEB> (you can *also* license your patches under another license, but to follow LGPL you have to publish the sources of LGPL software under that license)
[12:47:14 CEST] <auri_> Oh, I completely forgot that ffmpeg is licensed under the LGPL
[12:47:28 CEST] <auri_> this makes it easier to submit the patch, I guess
[13:14:16 CEST] <JEEB> hmm, does anyone remember if -map_metadata works with stream identifiers based on stream IDs instead of indices?
[13:15:21 CEST] <JEEB> like -map_metadata:s:a:0 '0:s:#1337' (map the metadata of PID 1337 from input to output audio stream 0)
[13:15:34 CEST] <JEEB> I would guess not since 0:s: is IIRC index based
[13:27:18 CEST] <forgon> Following instructions from https://trac.ffmpeg.org/wiki/Capture/ALSA and executing `ffmpeg -y -f alsa -i hw:Loopback,1,0 -c:a flac /tmp/test.wav` fails with "Input/output error" when starting the application whose sound should be recorded: http://ix.io/1Vx0
[14:37:10 CEST] <Radiator> Hi all, it appears that the example https://ffmpeg.org/doxygen/trunk/muxing_8c-example.html is no longer valid, as the function avcodec_encode_video2 is deprecated. The documentation redirects to the function avcodec_send_frame, but I find it a little odd, as it doesn't take any AVPacket. Since with avcodec_encode_video2 we
[14:37:10 CEST] <Radiator> had to call av_interleaved_write_frame to send the packet, do we still have to call that function when using avcodec_send_frame, and if so, how do we retrieve the packet?
[14:38:25 CEST] <BtbN> There's a corresponding function to recv it.
[14:39:56 CEST] <DHE> av_[interleaved]_write_frame is poorly named as you give it an AVPacket. similar for av_read_frame
[14:41:52 CEST] <Radiator> DHE Do you advise using a different function to send the packet? Or directly sending the AVFrame?
[14:42:28 CEST] <DHE> if you're encoding, you send frames and receive packets. avcodec_send_frame and avcodec_receive_packet
[14:42:43 CEST] <Radiator> BtbN are you talking about avcodec_receive_frame?
[14:43:03 CEST] <BtbN> no
[14:43:09 CEST] <BtbN> you want a packet, don't you?
[14:43:23 CEST] <Radiator> BtbN Yup
[14:43:30 CEST] <DHE> then do what I said
[14:45:37 CEST] <Radiator> DHE Ok, I see now: I "send the frame", which somehow builds a packet that I receive using avcodec_receive_packet. Then I write the packet to the AVFormatContext using av[_interleaved]_write_frame?
[14:46:01 CEST] <DHE> basically yes
[14:46:15 CEST] <Radiator> Great !
[14:46:19 CEST] <Radiator> Thanks :)
[14:46:20 CEST] <DHE> there's some little housekeeping things to do. like if your output file contains both audio and video, you'll need to set the stream index in the packet
[14:47:27 CEST] <Radiator> Yeah I already did that part, as well as handling the frame rate and paying attention to the pts
[16:35:12 CEST] <Radiator> what do I have to free after using av_image_alloc() on a frame? Do I have to use av_frame_unref? av_frame_free() doesn't seem to free everything, sadly
[16:37:03 CEST] <Radiator> Nevermind, I just found my answer. av_freep must be called on the pointers allocated
[21:59:34 CEST] <forgon> When trying to record the sound from an application by running `ffmpeg -f alsa -i default -c:a flac foo.mkv`, I notice that my recording is less loud than what I hear. What could be the cause?
[22:02:57 CEST] <BtbN> Your audio chain is not at 100% everywhere, so each time you capture, it'll lose whatever percentage you are below full volume
[22:03:45 CEST] <forgon> I guess I'll copy-paste from the tutorial at https://trac.ffmpeg.org/wiki/Capture/ALSA :|
[22:04:35 CEST] <BtbN> Also, isn't -i default your microphone?
[22:07:20 CEST] <forgon> BtbN: That could be an explanation. Afaik I have 2 cards: One is called PCH and described as "Analog", the other one is called Loopback.
[22:07:43 CEST] <BtbN> Then you already did setup a loopback device
[22:07:57 CEST] <forgon> BtbN: And that's the one I should always use, right?
[22:08:09 CEST] <BtbN> Depends purely on your hardware
[22:08:15 CEST] <BtbN> I never had a Loopback device appear on its own
[22:08:20 CEST] <BtbN> Also, if you want to actually capture individual applications, you will need pulseaudio.
[22:09:29 CEST] <forgon> BtbN: Has been noted.
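For reference, capturing desktop audio via PulseAudio usually means recording from a "monitor" source rather than ALSA's default device (which is typically the microphone). A sketch; the source name below is hypothetical, so pick the ".monitor" entry that pactl actually prints on your machine:

```shell
# List available PulseAudio sources; monitor sources carry the
# audio being played back rather than microphone input.
pactl list short sources

# Record from the monitor source (hypothetical device name):
ffmpeg -f pulse -i alsa_output.pci-0000_00_1f.3.analog-stereo.monitor \
    -c:a flac capture.flac
```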
[23:11:33 CEST] <classsic> Hi, I get this error " Application provided invalid, non monotonically increasing dts to muxer in stream 0: 604 >= 604"
[23:11:59 CEST] <classsic> is there a way to fix it?
[23:12:33 CEST] <bashquest> hello people
[23:12:37 CEST] <bashquest> ffmpeg -f concat -safe 0 -i <(for cut in *00*; do echo file "${cut@Q}"; done) -c copy out.mp
[23:13:06 CEST] <bashquest> the problem is the pipe, <(...); I want to inline the list of media files to be concatenated.
[23:13:20 CEST] <bashquest> when i look at the docs, ...
[23:13:31 CEST] <bashquest> they show me an example with <(...) ...
[23:14:11 CEST] <bashquest> ffmpeg answers with "impossible to open /dev/fd/mypipe"
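One workaround (a sketch, not the only fix): write the concat list to a regular file instead of a process substitution, since ffmpeg cannot always reopen /dev/fd paths. The *00* glob is taken from the command above; the out.mp4 output name is assumed; note that this simple printf does not escape single quotes inside file names, which the concat demuxer's quoting rules would require.

```shell
# Build the concat list in a real file rather than a pipe.
list=concat.txt
: > "$list"
for cut in *00*; do
    printf "file '%s'\n" "$cut" >> "$list"
done

# Same concat invocation, now reading the on-disk list
# (guarded so the list-building part works without ffmpeg installed).
if command -v ffmpeg >/dev/null 2>&1; then
    ffmpeg -f concat -safe 0 -i "$list" -c copy out.mp4
fi
```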
[23:15:26 CEST] <cehoyos> classsic: Please paste the command line you tested together with the complete, uncut console output to a webpage of your choice and post the link here.
[00:00:00 CEST] --- Wed Sep 18 2019

