[Ffmpeg-devel-irc] ffmpeg.log.20190211

burek burek021 at gmail.com
Tue Feb 12 03:05:03 EET 2019


[01:05:02 CET] <keegans> I'm getting an "Incorrect number of samples in encryption info" error, but when I ffprobe -v trace the file it shows the correct number of samples and their info
[01:05:12 CET] <keegans> Is there a way to prevent ffmpeg from exiting or displaying this error?
[01:30:57 CET] <JEEB> keegans: sorry for not being able to check things too much, I've been reviewing stuff in two projects and also fixing stuff that other people have found in my code :)
[02:18:14 CET] <keegans> JEEB: yeah no worries, just trying to figure out why ffmpeg is refusing to read this mp4 file
[02:18:31 CET] <keegans> i got it to encode the mp4 as pcm and then play it
[02:18:51 CET] <keegans> but now i have to do some things to the packets before i decode them, but ffmpeg is throwing errors about issues with the sample count in this file
[02:19:08 CET] <keegans> yet when i ffprobe -v trace it, it shows all the correct sample offsets and sizes
[02:19:10 CET] <keegans> so idk wtf
[02:19:35 CET] <cryptodechange> Still getting interlaced frames with fieldmatch,yadif,decimate
[02:19:53 CET] <cryptodechange> converting from telecine DVD (PPPII mostly)
[02:20:34 CET] <cryptodechange> Getting a lot of 'Frame #X at X is still interlaced' messages
[02:23:15 CET] <keegans> https://github.com/FFmpeg/FFmpeg/blob/7f8bfbee36638f3bcacea8a6af5eece8878833ed/libavformat/mov.c#L6617
[02:26:10 CET] <keegans> i wonder if i just `nop` that out
[03:23:36 CET] <keegans> be back later
[04:16:40 CET] <killown> is it possible to create a blank video that displays a logo in the center?
[04:17:00 CET] <killown> I have this ffmpeg -t 7200 -s 640x480 -f rawvideo -pix_fmt rgb24 -r 25 -i /dev/zero empty.mpeg
[04:17:15 CET] <killown> but I want to display a logo
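One way to do this, sketched below: generate the blank background with the lavfi color source and composite the image with the overlay filter. logo.png, the duration and the output name are placeholders.

    # blank 640x480 background at 25 fps for 7200 s, with logo.png composited in the center
    ffmpeg -f lavfi -i color=c=black:s=640x480:r=25:d=7200 -i logo.png \
           -filter_complex "overlay=(W-w)/2:(H-h)/2" -c:v libx264 empty.mp4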
[10:20:14 CET] <friendofafriend> I'm encoding a video with libx264, from a single image, at 30fps.  I can't change the framerate.  Is there some way to make the stream use less bandwidth, as the scene never changes?
[13:48:50 CET] <merethan> friendofafriend, my guess is that the scene does change. Aren't you encoding a noisy image?
[15:31:50 CET] <Fyr> guys, when copying video into mp4, FFMPEG adds:
[15:31:50 CET] <Fyr> Stream #0:2(eng): Data: bin_data (text / 0x74786574)
[15:31:56 CET] <Fyr> what is that?
[15:32:29 CET] <Fyr> I used -sn option, the input file has only a video and an audio stream.
[15:34:05 CET] <JEEB> could be chapters if your input had that, since that is also output as text
[15:34:15 CET] <Fyr> wow, it has chapters.
[15:34:27 CET] <JEEB> (or well, one of the ways of putting chapters into MP4)
[15:34:40 CET] <JEEB> (there was the MOV way and the Nero we-did-it-our-way way)
[17:24:31 CET] <kepstin> cryptodechange: the ones where you get that '...is still interlaced' message are the ones that got fixed; it's the ones where it didn't detect that which are the problem :)
[17:25:09 CET] <kepstin> cryptodechange: sounds like you have input with a lot of bad cuts or mismatched overlay/crossfades, etc.?
[17:51:17 CET] <kepstin> cryptodechange: hmm, that said, yadif with no other options should be deinterlacing all frames
[17:51:27 CET] <kepstin> cryptodechange: are you actually seeing interlaced frames in the result?
[17:52:49 CET] <kepstin> cryptodechange: that "... is still interlaced" message is output from the fieldmatch filter, it's basically saying "I couldn't find any fields that matched close enough, all possible matches left some combing artifacts", the yadif filter should be removing the combing
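For reference, a sketch of the usual soft-IVTC chain along the lines of the fieldmatch documentation, which hands only the frames fieldmatch could not reconstruct to yadif and then drops the duplicate frames (field order and filenames depend on the source):

    # inverse telecine for a 3:2-pulldown DVD source; adjust order=tff/bff to match the input
    ffmpeg -i input.vob -vf "fieldmatch=order=tff:combmatch=full,yadif=deint=interlaced,decimate" \
           -c:v libx264 -c:a copy out.mkv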
[20:53:02 CET] <friendofafriend> merethan: A noisy image?  I don't follow, it's just one image.
[20:53:55 CET] <merethan> Image as in, one and just one frame only?
[21:00:34 CET] <doyouhas> ello squad fam
[21:00:40 CET] <doyouhas> anyone fucking around here?
[21:05:45 CET] <friendofafriend> merethan: One and just one frame only.
[21:06:07 CET] <merethan> Mm weird
[21:06:30 CET] <merethan> My best guess is that it has something to do with the keyframe rate.
[21:07:31 CET] <merethan> Every so many frames there's a key frame, that kind of serves as a rebuild or reset of the image. Every frame after the keyframe only stores the differences from the previous one.
[21:08:32 CET] <merethan> Since your video feed is static, there ain't any differences from the keyframe to the next frame. But the rate at which keyframes are inserted is, I reckon, a configuration option.
[21:08:41 CET] <friendofafriend> I'm already using the libx264 "stillimage" tune.  I saw an option to use "-x264-params keyint=".
[21:09:41 CET] <friendofafriend> But I'm not really sure if that interferes with -g .
[21:10:00 CET] <furq> it makes no difference with a still image
[21:10:23 CET] <furq> keyint and -g are the same thing with x264 anyway but even changing scenecut or min-keyint will have no effect
[21:23:05 CET] <friendofafriend> Ah.  I thought about making the framerate really low, but some devices I test low framerate streams on seem awfully unhappy with it.
[21:25:25 CET] <furq> if this is for youtube then the lowest framerate they transcode to is 6
[21:25:41 CET] <furq> it doesn't matter so much what you give them but i always just use 6
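Putting the thread's advice together, a sketch of a low-bandwidth encode of a single looped image (input.png, the rate cap and the duration are placeholders; as furq notes, keyint tweaks barely matter here, the rate control does the work):

    # loop one image at 30 fps; the stillimage tune plus a hard rate cap keeps the stream tiny
    ffmpeg -loop 1 -framerate 30 -i input.png -c:v libx264 -tune stillimage \
           -b:v 100k -maxrate 100k -bufsize 200k -pix_fmt yuv420p -t 3600 out.mp4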
[21:29:11 CET] <wfbarksdale> I'm working on some code for remuxing an mp4 using ffmpeg; when i ffprobe the output file i see the warning `AVC: nal size 0`, everything else looks normal... anyone know what I might be missing? presumably I am not setting something properly on the output context or on the video stream?
[22:23:39 CET] <wfbarksdale> Still working on this: very confused about what I am doing wrong. I've correctly copied the `extradata` over from the input file, but ffprobe on the remuxed output still shows the `AVC: nal size 0` warning. Anyone know what I might be missing?
[22:32:03 CET] <doyouhas> why the hell are there no good ffmpeg wrappers in python
[22:32:44 CET] <doyouhas> i can run the exact command in a terminal but when i run with ffmpy i get exception :(
[22:40:43 CET] <w1kl4s> just run subprocess
[22:41:01 CET] <w1kl4s> if you don't have ffmpeg in path then you are doing something wrong anyway
[22:41:29 CET] <furq> if you know the command you want to run then why do you need a wrapper that abstracts it for you
[22:41:37 CET] <furq> that just means you have to learn two things instead of one
[22:41:50 CET] <furq> and then there are two things that might break instead of one
[22:41:54 CET] <furq> and so on
[22:42:18 CET] <w1kl4s> i mean pretty much everything that relies on ffmpeg requires it to be in path
[22:42:23 CET] <w1kl4s> and just calls a subprocess
[22:42:34 CET] <furq> that's apparently what ffmpy does
[22:42:42 CET] <w1kl4s> lol
[22:42:46 CET] <w1kl4s> then why even use it
[22:42:53 CET] <w1kl4s> what's the actual reason to
[22:43:06 CET] <furq> i guess it makes it look more like python?
[22:43:09 CET] <furq> why you'd want that i don't know
[23:10:41 CET] <MapMan> Hi! I'm having problems recording at 60 fps even though my hardware is capable of such recordings. Can someone take a look at how I'm running ffmpeg and tell me what's wrong? https://gist.github.com/Mapiarz/63165867470d19818694da68fc702905
[23:11:20 CET] <MapMan> By default, the dshow input device caps at 30; I can edit the registry and make the input any fps my pc supports, e.g. 100 fps
[23:11:38 CET] <MapMan> ffmpeg decides to duplicate frames instead of recording at the framerate I tell it to, e.g. 60 fps
[23:11:53 CET] <kepstin> MapMan: no idea, it should be mostly up to the performance of whatever external application is doing the screen capture
[23:12:31 CET] <JEEB> MapMan: is it the vsync logic or which part doing derps at you?
[23:12:40 CET] <kepstin> MapMan: note that you might need to use the -framerate input option
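A sketch of what that suggestion looks like on the command line. The device name is a placeholder; `ffmpeg -list_devices true -f dshow -i dummy` shows what is actually available on the machine.

    # ask the dshow device for 60 fps explicitly instead of relying on its default caps
    ffmpeg -f dshow -framerate 60 -i video="screen-capture-recorder" -c:v libx264 -r 60 out.mkv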
[23:13:17 CET] <MapMan> https://gist.github.com/Mapiarz/e90b303e5cc019bd055a3b06873cb0f4
[23:13:21 CET] <MapMan> Heres -v trace log
[23:13:46 CET] <JEEB> ok, so your input is at least perceived as 60Hz
[23:13:46 CET] <JEEB> and yea, that *** X dup
[23:14:43 CET] <JEEB> is the vsync code in ffmpeg.c
[23:14:53 CET] <MapMan> my cpu/gpu usage is really low, I recorded lossless with other apps (like obs, which uses ffmpeg anyway...)
[23:15:18 CET] <MapMan> And I did try lowering the resolution, using 'medium' profile and so on
[23:15:24 CET] <JEEB> (also, in your mind, separate FFmpeg the framework from ffmpeg.c, which is an API client with its own logic)
[23:15:38 CET] <MapMan> I see
[23:15:50 CET] <JEEB> check if the timestamps logged there make sense
[23:15:55 CET] <JEEB> the dshow ones I guess
[23:15:56 CET] <MapMan> I did not build ffmpeg myself, I used a binary from ffmpeg website
[23:16:00 CET] <kepstin> yeah, so it looks like your input isn't actually providing frames at 60hz, and ffmpeg is duplicating frames to keep the output vfr
[23:16:07 CET] <JEEB> *cfr
[23:16:12 CET] <kepstin> yeah, that
[23:16:18 CET] <kepstin> they're like right beside each other ;)
[23:16:30 CET] <JEEB> thanks for checking the timestamps
[23:16:33 CET] <JEEB> so that's incorrect then
[23:17:14 CET] <MapMan> okay, so what you're saying is ffmpeg thinks (or is fed) data at 30 fps, not the 60 fps I want?
[23:17:26 CET] <JEEB> the dshow module is feeding timestamps at 30Hz
[23:17:41 CET] <JEEB> so most likely your hack isn't 100% working or so? or there's a boog in the dshow reader
[23:18:01 CET] <JEEB> because while your dshow input advertises itself as 60Hz
[23:18:02 CET] <kepstin> MapMan: no. The dshow device says it's gonna be providing input at 60hz, and then is providing input slower than that (2-3x longer between frames than expected)
[23:18:08 CET] <JEEB> yup
[23:18:22 CET] <JEEB> because the timestamps of the packets actually matter, not the advertised frame rate
[23:18:38 CET] <kepstin> so you're probably just hitting performance limits with the dshow input device implementation :/
[23:18:58 CET] <kepstin> consider using OBS, which has a higher performance (desktop duplication api) implementation of full screen capture.
[23:19:21 CET] <kepstin> (i have no idea how your dshow device works)
[23:19:29 CET] <MapMan> (me neither)
[23:20:00 CET] <JEEB> the desktop duplication API looks neat, but you really need to be non-blocking for it to work
[23:20:05 CET] <JEEB> not going to fly in lavd most likely
[23:20:10 CET] <MapMan> what about gdigrab? I tried it before trying dshow but it was slooooow. Like, at half my native res, with medium profile I was hitting 15-20 fps.
[23:20:32 CET] <JEEB> depending on your OS the perf of that can be something between bad and better
[23:20:42 CET] <MapMan> win 10
[23:20:47 CET] <JEEB> GDI is an old thing and in various versions of Windows they removed GDI hw accel etc
[23:20:51 CET] <kepstin> gdigrab depends on how well your video driver implements legacy apis, yeah
[23:21:00 CET] <kepstin> it's better in win10 than win7, actually :/
[23:21:01 CET] <doyouhas> it is in the path
[23:21:24 CET] <kepstin> (gdigrab is actually pretty fast in wine on X11, fwiw)
[23:21:36 CET] <JEEB> xD
[23:21:45 CET] <furq> what is this, opengl on amd
[23:22:03 CET] <MapMan> Ok, for sanity, I'm gonna reboot, brb
[23:22:13 CET] <kepstin> I was developing gdigrab in wine, and was surprised when i tested it on win7 intel drivers and it was ridiculously slow
[23:22:59 CET] <JEEB> I wonder if for a nice experience for screen capture you have to basically spawn another thread with a growing buffer from which the frames are then fed when requested by lavf/lavd
[23:23:08 CET] <JEEB> and then if possible provide them as d3d11 textures
[23:23:36 CET] <JEEB> but yea, OBS has it already pretty optimized IIRC :P
[23:23:47 CET] <MapMan> back
[23:24:00 CET] <MapMan> Let me try a few things again after reboot
[23:24:10 CET] <MapMan> and then I'll tell you all about the walls I've been hitting in OBS :P
[23:24:32 CET] <JEEB> note: I have only used OBS once, to test nvidia lossless encoding
[23:24:44 CET] <MapMan> lossless encoding is exactly what I want to do
[23:24:54 CET] <kepstin> important in obs to use the full screen capture so you get dd api, i think the window capture uses a different path (maybe even gdi?)
[23:25:05 CET] <MapMan> I had mixed results and ultimately decided to try just ffmpeg
[23:25:18 CET] <JEEB> also I would love to basically hack RGB encoding into nvidia's encoding since it supports 4:4:4 already
[23:25:26 CET] <JEEB> you would just have to use the bit stream filter to rewrite the headers :P
[23:25:43 CET] <MapMan> kepstin: possible! I had similar slow-as-hell experience in some cases in OBS, I was thinking they could be using gdi
[23:25:48 CET] <JEEB> "no this is not BT.601 4:4:4, it's totally BGRP"
[23:26:41 CET] <kepstin> last I looked, the screencapture methods in browser webrtc screenshare in chrome and firefox are also using gdi
[23:27:14 CET] <TheAMM> JEEB: >non-blocking
[23:27:20 CET] <TheAMM> Doesn't gdigrab block?
[23:27:38 CET] <kepstin> gdigrab blocks, yes; it does frame timing by sleeping in the input device
[23:27:51 CET] <TheAMM> Yep, I've looked at it real close
[23:27:59 CET] <kepstin> (said timing code was basically copied from x11grab)
[23:28:12 CET] <JEEB> TheAMM: the idea was not to do it the usual way IFF such a module would be made :P
[23:28:22 CET] <kepstin> with desktop duplication, rather than polling, don't you get updates pushed to you?
[23:28:24 CET] <TheAMM> I'm still open for suggestions regarding the output duplication device api
[23:28:27 CET] <JEEB> kepstin: yes
[23:28:33 CET] <TheAMM> Hmh?
[23:28:38 CET] <TheAMM> No, you have to ask for updates
[23:28:41 CET] <JEEB> oh
[23:28:43 CET] <JEEB> m'kay
[23:28:45 CET] <TheAMM> And then are given dirty rects, if any
[23:28:56 CET] <TheAMM> Mouse position, cursor icons, etc
[23:28:58 CET] <JEEB> virtualdub's blog sounded a bit different but if you've actually looked at the API you know better :)
[23:29:06 CET] <TheAMM> I've implemented it
[23:29:18 CET] <JEEB> cool
[23:29:18 CET] <kepstin> so you basically go and 60 times a second ask "what are the changes since the last time i checked?" ?
[23:29:21 CET] <MapMan> Could my problem be caused by the fact that I'm running a game in fullscreen windowed mode? Shouldn't matter, right? I'm recording the entire screen anyway (with all the windows and stuff on top)
[23:29:26 CET] <TheAMM> I've been trying to come up with a better name than mondup
[23:29:35 CET] <TheAMM> Because I started with that without thinking about a name
[23:29:44 CET] <kepstin> I like 'windowsddapigrabusethisnotgdi'
[23:29:53 CET] <JEEB> lol
[23:29:53 CET] <TheAMM> And I've been considering using rtbufsize, but 1080p RGBA gets real fat fast
[23:30:08 CET] <TheAMM> The name should be short and sweet but also descriptive
[23:30:13 CET] <TheAMM> out_dup is too vague, though
[23:30:53 CET] <kepstin> i'd just call it 'ddapigrab'
[23:31:33 CET] <TheAMM> I guess
[23:31:41 CET] <kepstin> all of the existing screen grab drivers are named <name of api, shortened>grab
[23:31:43 CET] <kepstin> so :/
[23:32:08 CET] <TheAMM> Works for me, then
[23:32:19 CET] <JEEB> ddapi(_)rec nîn
[23:32:20 CET] <MapMan> Okay, everything is same as before reboot. Looks like I'm back to square 1 with ffmpeg and gdigrab/dshow
[23:32:24 CET] <TheAMM> I'll go over it and see if there isn't anything horribly awkward
[23:32:36 CET] <TheAMM> Then get to experience the m a i l i n g l i s t
[23:32:41 CET] <JEEB> xD
[23:32:48 CET] <kepstin> always fun.
[23:32:54 CET] <durandal_1707> lies.
[23:33:09 CET] <kepstin> that reminds me, I should probably help review that gdigrab highdpi patch that's hanging out there
[23:33:14 CET] <kepstin> or was hanging out there last I looked
[23:33:22 CET] <JEEB> yea, it's still there I think
[23:35:18 CET] <MapMan> JEEB: how was your experience with OBS and lossless? I wanted to use nvenc (h264 or hevc?) to record lossless at 60hz
[23:35:52 CET] <kepstin> TheAMM: did you get it working so it produced hardware surfaces that could be sent directly to e.g. nvenc?
[23:35:56 CET] <MapMan> I was reaching 60fps no problem but the color profile was poor (yuv420 or something?)
[23:36:09 CET] <TheAMM> kepstin: no
[23:36:12 CET] <MapMan> when switching to RGB I was getting 10fps for a change
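For the nvenc-lossless part, a sketch of the kind of command involved. Preset and flag names vary across ffmpeg builds and NVENC SDK versions, and gdigrab is shown only as the simplest capture source; the 4:4:4 support JEEB mentions above corresponds to -pix_fmt yuv444p here, which keeps far more color detail than yuv420p without paying the full-RGB cost.

    # lossless NVENC at 60 fps with 4:4:4 chroma instead of 4:2:0
    ffmpeg -f gdigrab -framerate 60 -i desktop -c:v h264_nvenc -preset lossless -pix_fmt yuv444p out.mkv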
[23:36:18 CET] <TheAMM> If you have any examples on how to do that, I can look into it
[23:36:21 CET] Action: kepstin still finds it hilarious that the easiest way to get gdigrab to work was to have it make bmp images, which were then decoded by the bmp decoder
[23:36:36 CET] <TheAMM> But I only thought about that a week or so ago
[23:36:52 CET] <kepstin> TheAMM: i've never worked with that stuff before, so i can't really help :/
[23:36:57 CET] <TheAMM> Ye
[23:37:11 CET] <TheAMM> It's "fast enough" for now, imo
[23:37:28 CET] <TheAMM> My FX wasn't able to do 1080 at 60, but nvenc was
[23:37:35 CET] <kepstin> in theory gdigrab should work when you're running in 256color paletted mode, but I never actually tested that :)
[23:37:40 CET] <TheAMM> Problem with nvenc is that it takes a moment to warm up, and ffmpeg probes the first frame
[23:38:02 CET] <TheAMM> So the first frame is stuck for a fraction of a second
[23:38:05 CET] <kepstin> right, the general issue with ffmpeg.c being singlethreaded
[23:38:10 CET] <TheAMM> As there's no internal buffer (threading)
[23:38:27 CET] <TheAMM> Which is why I considered the buffer but it's way too fat for too little gain
[00:00:00 CET] --- Tue Feb 12 2019