[Ffmpeg-devel-irc] ffmpeg.log.20191003

burek burek at teamnet.rs
Fri Oct 4 03:05:03 EEST 2019


[06:57:03 CEST] <lain98> i'm trying to generate vp9 profile 1 videos with this command: ffmpeg -f lavfi -i color=c=blue:s=1280x720:d=3:r=60 -c:v libvpx-vp9  -profile:v 1 -vf "format=pix_fmts=yuv420p, drawtext=fontsize=64: fontcolor=white: font=monospace: x=(w-text_w)/2: y=(h-text_h)/2: r=60: text='%{frame_num}'"
[06:57:24 CEST] <lain98> it says 420 format requires profile 0 or 2.
[06:58:22 CEST] <lain98> oh okay i just figured it out. wikipedia has wrong information
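For reference: libvpx-vp9 profiles 0 and 2 are the 4:2:0 profiles, while profiles 1 and 3 take 4:2:2/4:4:4 input, so yuv420p really does need profile 0 or 2. A minimal sketch of a command that satisfies profile 1, assuming 8-bit 4:2:2 output is acceptable, with the drawtext options trimmed to the essentials and a placeholder output name:

    ffmpeg -f lavfi -i color=c=blue:s=1280x720:d=3:r=60 \
        -vf "format=yuv422p,drawtext=fontsize=64:fontcolor=white:x=(w-text_w)/2:y=(h-text_h)/2:text='%{frame_num}'" \
        -c:v libvpx-vp9 -profile:v 1 profile1.webm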
[06:58:46 CEST] <Gigabitten> *Where* is this channel publicly logged?
[07:03:35 CEST] <furq> Gigabitten: https://lists.ffmpeg.org/pipermail/ffmpeg-devel-irc/2019-September/date.html
[07:04:52 CEST] <Gigabitten> found it myself just before you put that there but thank you very much
[10:33:41 CEST] <karanveersingh> Need help with GPU transcoding
[10:34:05 CEST] <karanveersingh> I am using a Tesla T4 GPU and below is the command
[10:35:11 CEST] <karanveersingh> ffmpeg -hwaccel_device 1 -hwaccel cuvid -re -i Stranger02.mkv -c:v h264_nvenc -preset fast -b:v 10M -bufsize 20M -x264opts keyint=500 -pix_fmt yuv420p -f flv rtmp://194.167.137.11/live-test/Strange02_4k
[10:36:10 CEST] <karanveersingh> It's a 4K video that I am using; I'm trying to transcode multiple 4K streams
[10:36:39 CEST] <karanveersingh> Issue is I am only able to run 7 streams with no frame loss
[10:37:12 CEST] <karanveersingh> As soon as I start an 8th video, I start seeing frame loss
[10:37:42 CEST] <karanveersingh> can anybody help me with correct parameters
[10:37:58 CEST] <karanveersingh> I can also see some CPU utilization, is this expected?
[10:39:16 CEST] <karanveersingh> How many maximum live streams can be run with GPU ?
[10:42:34 CEST] <karanveersingh> any help guys ?
[11:10:34 CEST] <ponyrider> karanveersingh: maybe check here if you havent already? https://trac.ffmpeg.org/wiki/StreamingGuide
[11:17:28 CEST] <karanveersingh> ponyrider, already checked; it talks more about CPU codecs like libx264, I need help with h264_nvenc
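Two things stand out in the command above: -x264opts is a libx264 private option, so keyint=500 is not used by h264_nvenc (the generic -g 500 does the same job), and the decode still runs wherever the frames live, so keeping the whole pipeline in GPU memory usually lowers the CPU load. A hedged sketch along those lines, reusing the input/output names from above, with audio simply copied (assuming the source audio is FLV-compatible):

    ffmpeg -hwaccel cuda -hwaccel_output_format cuda -hwaccel_device 1 -re -i Stranger02.mkv \
        -c:v h264_nvenc -preset fast -g 500 -b:v 10M -bufsize 20M \
        -c:a copy -f flv rtmp://194.167.137.11/live-test/Strange02_4k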
[14:11:46 CEST] <mlok> Hello, is a Python subprocess call a good way of using ffmpeg from Python?
[14:12:16 CEST] <mlok> or would it be preferred to use FFMPEG bindings in Python?
[14:14:42 CEST] <c_14> shelling out to ffmpeg is probably easiest if that works for you
[14:15:37 CEST] <pink_mist> yeah, use the easiest solution that actually works for your problem
[14:15:45 CEST] <mlok> c_14: what would be a better way of doing this?
[14:16:29 CEST] <mlok> pink_mist: hmm, would creating a daemon or service for the sub process call script work better?
[14:16:35 CEST] <c_14> depends on your problem, if shelling out to the binary works for you it's the easiest way to go
[14:16:52 CEST] <c_14> And the easiest one to get help for if/when you need it
[14:16:52 CEST] <mlok> c_14: I need the script to be stable and not crash
[14:17:17 CEST] <mlok> c_14: e.g. I need to do a large transcoding queue and it cannot crash
[14:18:57 CEST] <c_14> I'm not sure I've had issues with ffmpeg crashing, the binary may error out if your input is corrupt or something, but that's up to you to catch the errors in your script
[14:21:08 CEST] <mlok> c_14: I've had it freezing when transcoding an RTMP stream to HLS for instance
[14:21:17 CEST] <mlok> c_14: when I scripted it in bash
[17:38:44 CEST] <phenoxis> Hello.. I'm trying to stream fragmented MP4 from ffmpeg into a webpage's <video> tag using MSE. Clients may join mid-stream and should be able to pick up from the next fragment. To enable this, I'm saving the initialization segment (ftyp+moov) and sending it to all clients when they join. To a new client, the order of boxes they receive is as follows:
[17:38:44 CEST] <phenoxis> ftyp->moov->moof->mdat->moof->mdat... However, the video does not play. `chrome://media-internals` reports the following: `Append: stream parsing failed. Data size=56424 append_window_start=0 append_window_end=inf`
[17:45:09 CEST] <phenoxis> Is there something specific I need to do to get this to work?
[17:45:29 CEST] <phenoxis> My command line at the moment is `-probesize 32 -stream_loop -1 -i sintel_trailer.mp4 -c:v libx264 -preset ultrafast -tune zerolatency -profile:v high -level 13 -g 10 -pix_fmt yuv420p -vsync 2 -c:a aac -ab 64k -strict -2 -f mp4 -profile:v high -movflags +frag_keyframe+empty_moov+omit_tfhd_offset+default_base_moof -reset_timestamps 1 -g 52
[17:45:30 CEST] <phenoxis> -frag_duration 100000 -`
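A couple of those options look out of place: as far as I can tell -reset_timestamps belongs to the segment/HLS muxers (the mp4 muxer does not read it), and -profile:v and -g are each given twice, so one of each pair is redundant. A trimmed-down sketch of the same recipe as it is commonly used for MSE, keeping only options the mp4 muxer and libx264 actually consume:

    ffmpeg -stream_loop -1 -i sintel_trailer.mp4 \
        -c:v libx264 -preset ultrafast -tune zerolatency -profile:v high -g 52 -pix_fmt yuv420p \
        -c:a aac -b:a 64k \
        -f mp4 -movflags +frag_keyframe+empty_moov+omit_tfhd_offset+default_base_moof \
        -frag_duration 100000 -

With frag_keyframe every fragment begins on a keyframe, so a client that joins late can start appending at any fragment boundary after the init segment.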
[17:53:17 CEST] <brimestone> Hey guys, how can I loop thru the content of AVDictionary? Using av_dict_get requires me to know the key ahead of time...
[17:55:46 CEST] <brimestone> Ahh, if I give it a blank key it will give back every entry..
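The blank-key trick is indeed the documented way to walk an AVDictionary: pass an empty key with AV_DICT_IGNORE_SUFFIX and feed the previously returned entry back in. A minimal C sketch:

    #include <stdio.h>
    #include <libavutil/dict.h>

    /* Print every key=value pair in a dictionary, e.g. fmt_ctx->metadata. */
    static void dump_dict(const AVDictionary *d)
    {
        AVDictionaryEntry *e = NULL;
        /* An empty key plus AV_DICT_IGNORE_SUFFIX matches every entry;
           passing the previous entry as 'prev' continues the walk. */
        while ((e = av_dict_get(d, "", e, AV_DICT_IGNORE_SUFFIX)))
            printf("%s=%s\n", e->key, e->value);
    }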
[18:19:26 CEST] <brimestone> is there any quick way to get FPS value using AVFormatContext?
[18:23:09 CEST] <cehoyos> Typically, there is no such thing;-(
[18:23:29 CEST] <brimestone> some video fps are hidden
[18:23:42 CEST] <cehoyos> But for some samples, avg_frame_rate is set
[18:24:01 CEST] <cehoyos> They are not "hidden", the term is not defined for many formats.
[18:24:09 CEST] <brimestone> yeah..
[18:24:21 CEST] <cehoyos> The time_base is always an upper limit for the framerate
[18:24:34 CEST] <brimestone> This particular format I'm working on, it's stored in the time_base
[18:24:35 CEST] <cehoyos> but it is 1000 for asf, and also fixed for other formats
[18:24:49 CEST] <brimestone> asf?
[18:25:00 CEST] <cehoyos> Used to be very common...
[18:25:07 CEST] <cehoyos> but it is also fixed for mpegts
[18:25:35 CEST] <cehoyos> https://en.wikipedia.org/wiki/Advanced_Systems_Format
[18:25:47 CEST] <brimestone> Btw, I'm super stoked that I'm finally understanding how to use libav now..
[18:26:00 CEST] <brimestone> Took me years of just dipping my toes into it
[18:27:54 CEST] <kepstin> mkv/webm is technically not fixed, but it does have to be a power of 10 and most muxers use 1000
[18:28:47 CEST] <brimestone> I see
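In code, the usual compromise is av_guess_frame_rate(), which combines avg_frame_rate, r_frame_rate and the codec-level rate into a best guess and still comes back empty when the container simply does not store one. A small sketch:

    #include <libavformat/avformat.h>
    #include <libavutil/rational.h>

    /* Best-effort FPS for one stream; 0.0 means the format does not say. */
    static double stream_fps(AVFormatContext *fmt, int stream_index)
    {
        AVStream   *st = fmt->streams[stream_index];
        AVRational  r  = av_guess_frame_rate(fmt, st, NULL);
        return r.num ? av_q2d(r) : 0.0;
    }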
[18:42:18 CEST] <brimestone> If I want to create an audio level meter for a player application.. can I use libav for that?
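It can, in two ways: run the decoded audio through libavfilter (the astats or ebur128 filters already measure levels), or compute the level directly from each decoded AVFrame. A rough sketch of the latter, assuming planar float samples (AV_SAMPLE_FMT_FLTP), which is what most decoders hand back; other sample formats would need converting with libswresample first:

    #include <math.h>
    #include <libavutil/frame.h>

    /* RMS level of one decoded audio frame in dBFS (0 dB = full scale). */
    static double frame_rms_db(const AVFrame *f)
    {
        double sum = 0.0;
        long   n   = 0;
        for (int ch = 0; ch < f->channels; ch++) {
            const float *s = (const float *)f->extended_data[ch];
            for (int i = 0; i < f->nb_samples; i++, n++)
                sum += (double)s[i] * s[i];
        }
        return n ? 10.0 * log10(sum / n + 1e-12) : -120.0;
    }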
[20:54:40 CEST] <jpb> hi - question on ffplay - how can i get rid of the wide black borders on the left and right of the display window?
[20:55:16 CEST] <jpb> i'm using overlays, and the border is much too wide...
[21:04:33 CEST] <kepstin> jpb: ffplay just shows the video as it is - if there's borders, you've either resized the window to be wider than the video (in which case... don't do that) or the video itself has black borders
[21:12:25 CEST] <jpb> hmmm...  ok, so this is a mosaic (similar to the wiki example, but 3x3 instead of 2x2).  i'm using ffmpeg to create the mosaic, but piping it to ffplay to display.  https://pastebin.com/jjDLF4fZ
[21:14:51 CEST] <jpb> xwininfo shows the resulting display is 1285x725.  i'm perplexed :-(
[21:15:50 CEST] <durandal_1707> jpb: your fault using overlay for mosaic
[21:17:26 CEST] <jpb> is there a better way?
[21:18:44 CEST] <durandal_1707> yes
[21:39:38 CEST] <kepstin> jpb: the problem is that you've resized the window to be wider than the video. don't do that.
[21:39:50 CEST] <kepstin> jpb: remove the -x and -y options from ffplay, those tell it to resize the window
[21:44:14 CEST] <jpb> looks like i also want to look at the xstack filter.  might be closer to what i want.
[21:45:35 CEST] <jpb> kepstin:  yep, there's a mistake right there.  i was so focused on the overlay bits i forgot to look at the ffplay command line.
[21:45:45 CEST] <jpb> good catch
[22:54:41 CEST] <jpb> looks like xstack is the way to go.
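For the record, a 3x3 grid with xstack, assuming nine equally sized placeholder inputs in1.mp4 .. in9.mp4 and piping to ffplay as in the paste:

    ffmpeg -i in1.mp4 -i in2.mp4 -i in3.mp4 -i in4.mp4 -i in5.mp4 \
           -i in6.mp4 -i in7.mp4 -i in8.mp4 -i in9.mp4 -an -filter_complex \
        "xstack=inputs=9:layout=0_0|w0_0|w0+w1_0|0_h0|w0_h0|w0+w1_h0|0_h0+h1|w0_h0+h1|w0+w1_h0+h1" \
        -f nut - | ffplay -

Each layout entry is an x_y pair, and w0/h0 etc. refer to the width/height of the numbered input, so unequal tiles also work as long as the positions are written out accordingly.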
[22:55:56 CEST] <TheSashmo> Has anyone seen a bug in the HEVC decoding library in ffmpeg? Using a transport stream analyzer I can see the ES info reported as 1920x1080i, but the ffmpeg HEVC decoder shows 1920x540, which would be half the resolution... in fact even my stream analyzer decodes the video as 540 on the screen and it looks squashed. Seems like a progressive/interlaced bug?
[22:57:31 CEST] <TheSashmo> https://www.dropbox.com/s/69ueovnt6arjgi7/Screenshot%202019-10-03%2016.33.43.png?dl=0
[22:58:53 CEST] <TheSashmo> https://www.dropbox.com/s/td1gnhgrhlrf4in/Screenshot%202019-10-03%2016.33.14.png?dl=0
[22:59:14 CEST] <kepstin> TheSashmo: hevc doesn't really do interlaced, the ffmpeg decoder decodes each field as a separate half-height frame
[22:59:29 CEST] <kepstin> you're free to combine them back together into a combed frame with a filter
[22:59:33 CEST] <TheSashmo> yeah but the video looks squished
[22:59:42 CEST] <TheSashmo> yeah I could do that
[23:00:05 CEST] <kepstin> better yet, just don't use interlaced hevc ;)
[23:00:33 CEST] <TheSashmo> I don't have that choice
[23:00:37 CEST] <TheSashmo> I'm not encoding it
[23:00:41 CEST] <TheSashmo> it's coming to me like that
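The filter kepstin is referring to would be something like weave, which pairs successive field-frames back into full-height interlaced frames; a deinterlacer can then follow if progressive output is wanted. A sketch with placeholder file names, assuming top field first:

    ffmpeg -i input.ts -vf "weave=first_field=top,yadif" -c:v libx264 output.mkv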
[23:51:37 CEST] <brimestone> Is there a comprehensive book, on how to use ffmpeg and all its awesomeness ?
[00:00:00 CEST] --- Fri Oct  4 2019

