[Ffmpeg-devel-irc] ffmpeg.log.20190425
burek
burek021 at gmail.com
Fri Apr 26 03:05:02 EEST 2019
[00:01:33 CEST] <another> no idea what premiere supports, but from the table on wikipedia it seems that not a lot of encoders support high444pred
[00:02:13 CEST] <GuiToris> well actually I checked the reencoded file and it's High 4:4:4 Predictive at L4.2 , CABAC / 4 Ref Frames
[00:03:04 CEST] <GuiToris> and it works perfectly
[00:03:30 CEST] <GuiToris> should I really reencode all the clips?
[00:04:28 CEST] <another> you reencoded with ffmpeg -i lossless.mp4 new.mp4 ?
[00:04:42 CEST] <another> and it's high444pred?
[00:04:54 CEST] <GuiToris> yes
[00:05:05 CEST] <GuiToris> I didn't use a single option
[00:06:56 CEST] <another> o.O
[00:07:28 CEST] <GuiToris> I'll post the full output
[00:08:17 CEST] <GuiToris> https://bpaste.net/raw/12e6eb65d596
[00:14:37 CEST] <another> without options libx264 should default to crf 23
[00:16:50 CEST] <GuiToris> hurray, I have internet connection again
[00:16:59 CEST] <GuiToris> another, did you get my links?
[00:17:08 CEST] <GuiToris> I'm not sure if I could send them
[00:17:20 CEST] <GuiToris> my internet just disconnected
[00:18:16 CEST] <GuiToris> they are both High 4:4:4 Predictive
[00:18:25 CEST] <GuiToris> I don't know what's wrong with Premiere
[00:19:06 CEST] <GuiToris> and what options should I use in case I have to reencode all the clips?
[00:19:28 CEST] <another> maybe try to limit the level?
[00:20:05 CEST] <another> but that's just a wild stab in the dark
[00:21:30 CEST] <GuiToris> another, -c:v libx264 -preset veryslow -crf 0 -level:v 4.2
[00:21:37 CEST] <GuiToris> is this what you meant?
[00:23:10 CEST] <another> yep
[00:23:34 CEST] <GuiToris> I'll give it a shot, thank you another
[00:24:36 CEST] <GuiToris> I'm leaving now because my connection is really poor and it's disconnecting all the time
[00:24:42 CEST] <GuiToris> see you later!
[00:25:19 CEST] <another> laters
[03:49:08 CEST] <Numline1> So I just got home from Endgame
[03:49:13 CEST] <Numline1> I'm not gonna spoil anything
[03:49:27 CEST] <Numline1> but it might be the best movie in like 20 years
[03:52:46 CEST] <Numline1> Also I meant to post this elsewhere and I'm an idiot lol
[10:39:10 CEST] <irwiss> hey i've a video that appears like so https://i.imgur.com/rtaO1Ou.png (planar YUV420) even though other videos seem to decode fine; on the other hand vlc can play that stream so it's not completely broken and most likely i'm doing something wrong. it looks a lot like banding but i'm not sure, did i mess up an alignment/stride or is it some other issue? i'm baffled why it repeats 5 times; the width of the video is 320px, the green part on the second line is garbage in memory (i think) so did i mess up the alignment by 64 bytes or so?
[12:49:58 CEST] <irwiss> yep that's what it was and by about that much, didn't realize the strides could be up to ~20% of a frame's size
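
A minimal sketch of the stride handling irwiss describes, in Go (the plane/packed/width/height/stride names are illustrative, not from irwiss's actual code): each row of the decoded plane occupies stride bytes, of which only width are real pixels, so the plane has to be copied row by row rather than treated as width*height contiguous bytes.

    // copyPlane copies a padded (strided) plane into a tightly packed buffer.
    // stride is the decoder's bytes-per-row and may be well above width; the
    // excess is alignment padding (the "garbage in memory" above).
    func copyPlane(plane []byte, width, height, stride int) []byte {
        packed := make([]byte, width*height)
        for row := 0; row < height; row++ {
            copy(packed[row*width:(row+1)*width], plane[row*stride:row*stride+width])
        }
        return packed
    }
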
[13:27:59 CEST] <Numline1> irwiss I'm about to do some yuv420 magic and I feel your pain
[13:28:11 CEST] <Numline1> I have nothing constructive to help you with
[13:58:03 CEST] <irwiss> Numline1: it seems i would stumble into the same issue with the simplest rgb interleaved raster, i didn't realize how large those strides can be; directx memory views also have their own stride, fun stuff
[13:59:31 CEST] <Numline1> irwiss that sounds rough, I'm glad I'm not doing exactly the same. Tl;dr I'm trying to convert y4m into a series of images in memory, it took me a while to even figure out how to split it correctly
[13:59:59 CEST] <Numline1> the y4m header kept messing with me, but I eventually found the correct yuv 2 rgb formula. I'm trying to implement it now, I hope I don't get that noise you got
[14:00:13 CEST] <JEEB> "correct YCbCr to RGB formula"
[14:00:16 CEST] <JEEB> oh you youngling
[14:00:24 CEST] <JEEB> to think there is a single formula that is correct 8)
[14:00:57 CEST] <Numline1> JEEB well I really hope this one'll work. It's something I found deep in the Golang core :)
[14:01:06 CEST] <Numline1> this file right here - https://golang.org/src/image/color/ycbcr.go
[14:01:25 CEST] <JEEB> ok
[14:01:45 CEST] <JEEB> doesn't mention BT.709 vs BT.601 or BT.2020
[14:01:57 CEST] <Numline1> who's that
[14:02:05 CEST] <Numline1> sounds like terminator models
[14:02:21 CEST] <JEEB> also doesn't mention full/limited range YCbCr
[14:02:43 CEST] <JEEB> since most YCbCr that comes out of video is limited range, as in the usable range is 16-235 for luma / 16-240 for chroma
[14:02:47 CEST] <JEEB> instead of 0-255
[14:02:58 CEST] <JEEB> JPEG YCbCr by default is "full range" which is 0-255
[14:03:05 CEST] <Mavrik> Is scaling from 16-235 -> 0-255 linear? :)
[14:03:26 CEST] <Numline1> does that depend on pixel format?
[14:03:31 CEST] <JEEB> no
[14:03:39 CEST] <JEEB> it's a separate piece of metadata
[14:03:43 CEST] <JEEB> full range vs limited range
[14:03:53 CEST] <JEEB> for video you can expect limited range
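
To make JEEB's point concrete, here is a sketch of a limited-range BT.601 conversion in Go; the function is illustrative, not the standard library's (which, as noted below, uses full-range JFIF values):

    // clamp rounds and saturates an intermediate result into 0-255.
    func clamp(v float64) uint8 {
        if v < 0 {
            return 0
        }
        if v > 255 {
            return 255
        }
        return uint8(v + 0.5)
    }

    // ycbcrToRGB601Limited converts one limited-range BT.601 pixel: luma
    // occupies 16-235 and chroma 16-240, so both are expanded to full range
    // before the matrix coefficients are applied.
    func ycbcrToRGB601Limited(y, cb, cr uint8) (r, g, b uint8) {
        yf := (float64(y) - 16) * 255 / 219
        cbf := (float64(cb) - 128) * 255 / 224
        crf := (float64(cr) - 128) * 255 / 224
        return clamp(yf + 1.402*crf),
            clamp(yf - 0.344136*cbf - 0.714136*crf),
            clamp(yf + 1.772*cbf)
    }

A full-range (JPEG-style) conversion skips the 255/219 and 255/224 expansion, and BT.709 material needs different matrix coefficients entirely, which is exactly the ambiguity JEEB is pointing at.
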
[14:04:03 CEST] <Numline1> well fuck me sideways
[14:04:05 CEST] <Numline1> I hope this works
[14:04:15 CEST] <Numline1> JEEB you've planted the seed of uncertainty
[14:04:20 CEST] <JEEB> mwahahaha
[14:04:30 CEST] <JEEB> anyways, that code with a quick look looks like it took values from JFIF
[14:04:40 CEST] <JEEB> so it's highly likely that it's meant for JPEG like stuff
[14:04:56 CEST] <Numline1> I mean it's image library, so I'd think so
[14:05:14 CEST] <Numline1> fuck I still need to figure out a reasonable way to parse the y4m header
[14:05:16 CEST] <JEEB> Numline1: I would just utilize something like zimg to get YCbCr into RGB
[14:05:29 CEST] <JEEB> or even output RGB out of FFmpeg first of all :P
[14:05:42 CEST] <JEEB> unless you need the raw data that the decoder is pushing out
[14:06:09 CEST] <Numline1> JEEB can I though? I'm basically outputting multiple images into a pipe, so guys from this channel recommended y4m
[14:06:12 CEST] <irwiss> anyone knows what determines the output formats of, say, a dx11 hw decoder? i've been staring at https://ffmpeg.org/doxygen/4.1/hwcontext__d3d11va_8c_source.html#l00082 and nearby lines 142/143 trying to figure out what's going on there, e.g. is there a way to get a nicer texture layout (non-planar) than nv12, or are hw decoders hardwired/coded into those formats? or a writeup on how various sw_format_* stuff interacts with hw_format* stuff?
[14:06:20 CEST] <Numline1> I doubt I can output multiple images into rgb format
[14:06:23 CEST] <Numline1> in a single run
[14:06:47 CEST] <JEEB> Numline1: if y4m output takes in rgb pix_fmts I don't see any reason why not
[14:06:52 CEST] <JEEB> it's just an identifier after all
[14:07:23 CEST] <JEEB> irwiss: hw decoder interfaces pretty much give you NV12 or P010 nowadays
[14:07:39 CEST] <JEEB> (or other P01X for >10bit)
[14:08:12 CEST] <Numline1> JEEB _ ERROR: yuv4mpeg can only handle yuv444p, yuv422p, yuv420p, yuv411p and gray8 pixel formats
[14:08:18 CEST] <JEEB> ok
[14:08:25 CEST] <Numline1> so that's probably a nope :)
[14:08:46 CEST] <Numline1> JEEB I mean, unless there's a better format to output my frames... I was thinkign GIF maybe?
[14:08:46 CEST] <JEEB> technically nothing stops you from outputting it, but I guess that's not 100% valid because then other tools wouldn't be able to read it
[14:08:49 CEST] <JEEB> although
[14:09:12 CEST] <JEEB> Numline1: umm
[14:09:18 CEST] <JEEB> current yuv4mpegenc has quite a bit more
[14:09:21 CEST] <JEEB> of pix_fmts supported
[14:09:28 CEST] <irwiss> ah, guess i'll have to trigger a shader or something then, couldnt figure out if it's hardwired or nobody bothered to support non-planar ones :) thanks
[14:09:39 CEST] <JEEB> although no full rgb ones
[14:09:57 CEST] <Numline1> oh, it also says And using 'strict -1' also yuv444p9, yuv422p9, yuv420p9, yuv444p10, yuv422p10, yuv420p10, yuv444p12, yuv422p12, yuv420p12, yuv444p14, yuv422p14, yuv420p14, yuv444p16, yuv422p16, yuv420p16, gray9, gray10, gray12 and gray16 pixel formats. Use -pix_fmt to select one.
[14:10:01 CEST] <Numline1> I haven't noticed that
[14:10:06 CEST] <Numline1> it's 4.1.1
[14:10:08 CEST] <Numline1> btw
[14:10:28 CEST] <JEEB> ok, so it checks that in init() while I was reading write_header probably :)
[14:10:41 CEST] <JEEB> also I hate those numeric strict values :P
[14:10:49 CEST] <JEEB> they have textual names goddamnit xD
[14:11:04 CEST] <Numline1> JEEB I mean, to be fair, I don't mind the conversion to RGB, even though it adds a bit of complexity :) I just need to figure out the "how"
[14:11:09 CEST] <Numline1> most examples only do that in matlab
[14:11:17 CEST] <JEEB> zimg
[14:11:26 CEST] <Mavrik> swscale? /hides
[14:11:26 CEST] <JEEB> that's my goto thing for CPU based conversions and scaling
[14:11:34 CEST] <Numline1> yeah I was also trying to avoid C bindings in Go
[14:11:52 CEST] <JEEB> uh-huh
[14:12:05 CEST] <Numline1> plus, this shouldn't really be hard to implement. All I need are 3 values from y4m header and then slice the remainder of the file
[14:12:33 CEST] <JEEB> anyways, swscale (generally) and zimg do these things correctly, while all the random matlab etc answers probably not :P
[14:12:33 CEST] <Numline1> what I'm trying to figure out right now is how to strip away the header and then start slicing
[14:12:46 CEST] <JEEB> also at this point it sounds like you might as well have written a short API client
[14:12:48 CEST] <Numline1> matlab actually seems to have some video libs as well, for whatever reason
[14:12:51 CEST] <JEEB> that outputs the raw decoded images
[14:12:59 CEST] <JEEB> in a manner that you want to parse
[14:13:06 CEST] <JEEB> if you really really wanted to keep the C separate from golang
[14:13:07 CEST] <JEEB> :P
[14:13:26 CEST] <Numline1> this poor thing is a bit monolithic :)
[14:13:37 CEST] <Numline1> It needs to do the ffmpeg part internally, since it's deployed to Google App Engine and more instances = more money
[14:13:55 CEST] <Numline1> I also shouldn't write files
[14:14:11 CEST] <JEEB> I haven't mentioned either so I don't know why you mention it
[14:14:16 CEST] <Numline1> Basically I fell into this rabbit hole of "I'm trying to do this properly" instead of actually finishing 2 days ago with something that actually works
[14:14:43 CEST] <JEEB> well right now, instead of knowingly picking correctly working components, you're trying to cobble something together yourself :P
[14:14:57 CEST] <JEEB> I might be sounding a bit bad, but that's how it looks right now
[14:14:58 CEST] <Numline1> JEEB yeah, I know, I added that as a lovely bonus to explain why I can't just let ffmpeg output a bunch of jpeg files
[14:15:06 CEST] <JEEB> eh
[14:15:11 CEST] <JEEB> why do you bring jpeg here
[14:15:14 CEST] <JEEB> I didn't mention it
[14:15:31 CEST] <JEEB> so far your problem is according to chat a) decode videos b) output timestamp+raw RGB c) do *something* with it
[14:15:38 CEST] <Numline1> JEEB I know, I'm trying to say letting ffmpeg output a bunch of jpeg files into a folder and then loading them would essentially be the same :)
[14:16:00 CEST] <Numline1> the input is an mp4 or whatever file, yuv is something I willingly created on output in stdout
[14:16:28 CEST] <Numline1> I'll end up creating jpeg files anyway, so our image recognition system can deal with it later
[14:16:42 CEST] <Numline1> I just wanted to do all that in memory during runtime of my program
[14:16:49 CEST] <Numline1> /rant
[14:16:59 CEST] <JEEB> yes, I just see it as an extra complication what you're noting about not wanting to do X,Y,Z
[14:17:08 CEST] <JEEB> like using lavc + zimg to get the RGB planes
[14:17:18 CEST] <JEEB> either via sub-process that's custom built or otherwise
[14:17:18 CEST] <JEEB> :P
[14:17:26 CEST] <JEEB> anyways, have fun
[14:18:38 CEST] <Tazmain> Hi all, I am trying to convert a RTP stream that was saved as raw (it was opus) to wav, and I keep getting invalid pixel format https://bpaste.net/show/0dc18782d7aa
[14:18:40 CEST] <Numline1> JEEB well, thanks :) My issue with that is it adds extra dependencies like imagemagick and stuff
[14:19:05 CEST] <Tazmain> does the -dn flag not work ?
[14:19:09 CEST] <Numline1> I would use that for something more complex, after all, I used ffmpeg instead of splitting videos manually, zimg just doesn't seem necessary at this time
[14:19:35 CEST] <JEEB> uhh
[14:19:40 CEST] <JEEB> where did imagemagick come in from? :D
[14:19:49 CEST] <JEEB> https://github.com/sekrit-twc/zimg
[14:19:50 CEST] <JEEB> is zimg
[14:19:59 CEST] <JEEB> does scaling and colorspace conversions correctly
[14:20:06 CEST] <JEEB> and is usable through FFmpeg's libavfilter
[14:20:07 CEST] <JEEB> :P
[14:20:11 CEST] <JEEB> (through the zscale filter)
[14:20:19 CEST] <JEEB> while the swscale library is usable through the scale filter
[14:20:19 CEST] <another> Tazmain: is the raw data opus?
[14:20:25 CEST] <Tazmain> another, yes
[14:20:37 CEST] <another> then put the -c:a libopus before the input
[14:20:39 CEST] <Tazmain> `ffmpeg -i Saved\ RTP\ Audio.raw -vn -c:a libopus sa.ogg` is my command
[14:20:45 CEST] <Tazmain> before the input ?
[14:21:06 CEST] <TikityTik> can anyone explain bufsize to me?
[14:21:06 CEST] <another> ffmpeg -c:a libopus -i $input -c copy out.opus
[14:21:19 CEST] <Tazmain> -c copy ?
[14:21:22 CEST] <Tazmain> is that -c:a ?
[14:21:29 CEST] <Tazmain> oh nvm
[14:22:16 CEST] <Tazmain> still getting errors on that https://bpaste.net/show/ef8fd36fc876
[14:26:14 CEST] <another> looks like it's not just raw opus frames
[14:27:03 CEST] <TikityTik> Tazmain: what are you trying to do, what is your command line?
[14:27:11 CEST] <Tazmain> well I tried saving as unsynchronized forward stream, stream synchronized audio
[14:27:13 CEST] <Tazmain> next is file
[14:28:28 CEST] <Tazmain> TikityTik, `ffmpeg -c:a libopus -i file.raw -vn sa.ogg` or `ffmpeg -c:a libopus -i file.raw -c copy out.opus` both fail . So I have a wireshark trace of a SIP call using opus RTP. I want to get the audio from the RTP
[14:28:52 CEST] <Tazmain> now no guide online matches the current wireshark option, and there is very little on working with opus
[14:29:01 CEST] <Tazmain> so I am basically stuck and at the point of giving up
[14:29:03 CEST] <TikityTik> Tazmain: why aren't you using ffmpeg -i file.raw -vn -c:a libopus sa.ogg?
[14:29:46 CEST] <Tazmain> Output file #0 does not contain any stream
[14:29:49 CEST] <another> Tazmain: where did you get this "raw" file?
[14:29:52 CEST] <Tazmain> for that command
[14:30:07 CEST] <Tazmain> another, saving from wireshark rtp analysis, I mentioned above
[14:30:35 CEST] <TikityTik> Tazmain: can you upload the file?
[14:30:40 CEST] <Tazmain> TikityTik, sure
[14:31:19 CEST] <Tazmain> TikityTik, https://send.firefox.com/download/7407997eec598148/#uC6T8ytZk6q1vTqYD9-iTQ
[14:31:34 CEST] <Tazmain> there are 4 raw files, as I tried different option in wireshark to save as
[14:33:05 CEST] <another> hmm.. just a wild stab in the dark: ffmpeg -f rtp -c:a libopus -i $input -c copy out.opus
[14:35:12 CEST] <Tazmain> wait , if that can take rtp then I need to dump the rtp
[14:36:30 CEST] <Tazmain> still all fails
[14:36:31 CEST] <Tazmain> sigh
[14:39:39 CEST] <TikityTik> Tazmain: what encoding was the RTP stream using?
[14:42:04 CEST] <Tazmain> opus
[14:42:17 CEST] <Tazmain> even shows that in wireshark
[14:45:28 CEST] <Numline1> I wonder, why does the YCbCrToRGB func in Go accept 3 int8 parameters
[14:45:38 CEST] <Numline1> Shouldn't that be like a ton of bytes instead?
[14:49:30 CEST] <Mavrik> Hopefully it's not operating on per-pixel basis :D
[14:49:33 CEST] <Mavrik> Or it'll be slow as heck
[14:50:16 CEST] <JEEB> it probably does
[14:53:14 CEST] <Numline1> Tazmain I honestly think it does lol
[14:54:33 CEST] <furq> Numline1: sounds like it's expecting you to have packed yuv444
[14:55:29 CEST] <Numline1> furq it's this thing here, I'm currently trying to check the math whether it's something that could work - https://golang.org/src/image/color/ycbcr.go
[14:55:53 CEST] <Numline1> although looking at the namespace under "color" it may be something different
[14:56:30 CEST] <Numline1> it says it's used to convert Y'CbCr triple and RGB triple, among some other stuff
[14:57:34 CEST] <furq> annoyingly it would be easier to just do the rgb conversion in ffmpeg, but y4m doesn't support rgb
[14:58:33 CEST] <JEEB> furq: I'd just patch it locally if I needed RGB over some simple container
[14:58:35 CEST] <Numline1> Yeah. However it seems I might be able to just read the frame from y4m file and then pass it to something in Go's image package
[14:58:45 CEST] <Numline1> this one is sadly a bit low level. I need to do some googling
[14:58:56 CEST] <JEEB> I mean, you just add a new identifier for RGB and let it pass the check
[14:59:11 CEST] <furq> JEEB: it's weird that hasn't been done already considering there are already noncompliant pixel formats with -strict -1
[14:59:17 CEST] <JEEB> yea
[14:59:17 CEST] <furq> i guess it would conflict with the name though
[14:59:21 CEST] <JEEB> I Was going to mention that :P
[15:02:01 CEST] <Numline1> Okay I think I can create an Image struct in Go with Model called "YCbCrModel"
[15:02:06 CEST] <Numline1> from the frame I read
[15:02:11 CEST] <Numline1> and then convert it to Jpeg
[15:02:18 CEST] <Numline1> holy shit, if this works...
[15:02:54 CEST] <JEEB> always compare your output against something known "OK" stuff
[15:03:11 CEST] <JEEB> like f.ex. mpv with the gpu renderer, opengl/d3d11 output
[15:03:24 CEST] <JEEB> or if you have a simple way of using zimg with vapoursynth or so
[15:03:42 CEST] <Tazmain> Numline1, heh ?
[15:04:40 CEST] <Numline1> Tazmain basically I thought I'd have to read pixel by pixel and convert into RGB
[15:04:46 CEST] <Numline1> I may be able to load an entire frame
[15:05:00 CEST] <Numline1> and hope Go actually has its ways to convert it into jpeg
[15:05:03 CEST] <furq> jpeg is yuv though so i don't see how that helps
[15:05:08 CEST] <furq> also it's lossy which isn't great
[15:05:20 CEST] <furq> unless you're planning to then convert it into png
[15:05:20 CEST] <Numline1> furq the end file needs to be uploaded online and then processed
[15:05:36 CEST] <Numline1> I mean I'm okay with png as well, anything more "standard"
[15:05:49 CEST] <Numline1> it just needs to have some fancy headers so image recognition can work with it
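
A sketch of the plan Numline1 is describing, assuming w, h and one raw 4:2:0 frame slice from the y4m stream (illustrative code; note that Go's image/jpeg treats the planes as full-range JFIF data, so limited-range video will come out slightly washed out, per JEEB's earlier warning):

    import (
        "bytes"
        "image"
        "image/jpeg"
    )

    // frameToJPEG wraps one raw 4:2:0 frame (w*h luma bytes followed by two
    // (w/2)*(h/2) chroma planes) in an image.YCbCr, without copying, and
    // encodes it as JPEG.
    func frameToJPEG(frame []byte, w, h int) ([]byte, error) {
        img := &image.YCbCr{
            Y:              frame[:w*h],
            Cb:             frame[w*h : w*h+w*h/4],
            Cr:             frame[w*h+w*h/4:],
            YStride:        w,
            CStride:        w / 2,
            SubsampleRatio: image.YCbCrSubsampleRatio420,
            Rect:           image.Rect(0, 0, w, h),
        }
        var buf bytes.Buffer
        if err := jpeg.Encode(&buf, img, &jpeg.Options{Quality: 90}); err != nil {
            return nil, err
        }
        return buf.Bytes(), nil
    }
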
[15:43:53 CEST] <TikityTik> can anyone explain bufsize to me? and why setting it too low ruins the quality?
[15:44:16 CEST] <TikityTik> and why is there no option for bufsize to be set by seconds instead?
[15:53:57 CEST] <DHE> keyframes require a large amount of data compared to other frames. then in VBV mode the maximum size of any frame will be bitrate/framerate + whatever is available in the buffer
[15:54:18 CEST] <DHE> if the buffer is too small, a keyframe will look bad when forced to be about the same size as other inter/between frames
[16:09:50 CEST] <TikityTik> DHE: keyframes are in bufsize?
[16:11:50 CEST] <TikityTik> i don't really understand the structure of a video
[16:12:04 CEST] <Numline1> So folks :) I had some progress with my Y4M file :)
[16:12:11 CEST] <Numline1> The header is : YUV4MPEG2 W1280 H720 F30:1 Ip A1:1 C420mpeg2 XYSCSS=420MPEG2
[16:12:19 CEST] <Numline1> As far as I can tell, there are two frames in the video
[16:12:36 CEST] <Numline1> the amount of bytes read for one frame should be 1382400
[16:12:47 CEST] <Numline1> However, the file size, without the header, seems to be 1382400 * 4
[16:12:55 CEST] <Numline1> before the EOF has been reached
[16:13:03 CEST] <Numline1> Any thoughts on why that might be?
[16:17:40 CEST] <DHE> TikityTik: a keyframe is basically an entirely self-contained picture. inter frames are based on previous frame(s) so their size requirements are much lower
[16:18:26 CEST] <TikityTik> DHE: so what's the min keyframe size?
[16:18:49 CEST] <TikityTik> are there multiple keyframes per second?
[16:19:08 CEST] <DHE> there isn't really a minimum. assuming h264, there's a quality parameter which goes from 0 (lossless) to 51 (unreadable smudge)
[16:19:55 CEST] <TikityTik> how do i determine then what's the smallest bufsize i can do?
[16:20:26 CEST] <DHE> you can do 0. you just get a horrible image to go with it.
[16:20:30 CEST] <faLUCE> TikityTik: why do you want to determine it?
[16:21:01 CEST] <TikityTik> smaller bufsize means more changing bitrate right? so it would be better for constant quality encoding no?
[16:26:35 CEST] <DHE> the average remains constant. bigger buffers allow for more variance in the short term to deal with times that need it
[16:32:16 CEST] <TikityTik> i don't understand
[16:32:31 CEST] <TikityTik> wouldn't the crf be calculated by the bufsize?
[16:32:54 CEST] <TikityTik> the bitrate needed for the quality
[16:33:31 CEST] <TikityTik> which means that smaller buffers give more variance?
[16:35:03 CEST] <bigpod> Hello, my question is how to compile ffmpeg on ubuntu so it can utilise nvenc and can be used by OBS. i followed the guide and beforehand got the nv-codec-headers, but obs doesn't detect it
[16:36:20 CEST] <BtbN> You'll need to re-compile the system packages, or build a custom deb of them
[16:36:33 CEST] <DHE> "ffmpeg -h encoder=nvenc" should produce information about what options it supports, if ffmpeg was built correctly
[16:36:36 CEST] <BtbN> Or build a static version and build OBS forcing it to use those
[16:36:47 CEST] <DHE> also you need the binary nvidia driver from nvidia themselves for this to work
[16:36:57 CEST] <BtbN> manually "make install"ing a version of FFmpeg into your system is a recipe for disaster.
[16:38:48 CEST] <bigpod> and how to do those sort of stuff
[16:39:35 CEST] <BtbN> Use the apt/dpkg magic to get the exact sources of your system packages, build them with the exact same parameters, except you also enable ffnvcodec, and then install those debs you end up with
[16:39:36 CEST] <bigpod> so it would work
[16:39:42 CEST] <BtbN> and repeat on every update
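
Roughly the workflow BtbN means, sketched for Ubuntu (illustrative; exact package names and the configure flags in debian/rules vary by release, and the nv-codec-headers must be installed first so FFmpeg's configure can pick up nvenc):

    # requires deb-src lines in sources.list
    apt-get source ffmpeg
    sudo apt-get build-dep ffmpeg
    cd ffmpeg-*/
    # edit debian/rules to add the nvenc/ffnvcodec configure options
    # alongside the distribution's existing flags
    dpkg-buildpackage -b -uc -us
    sudo dpkg -i ../*.deb
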
[16:40:57 CEST] <bigpod> i just need ffmpeg with nvenc which actually works, but OBS doesn't detect it (because it's installed at user level as far as i understand)
[16:44:48 CEST] <bigpod> so i dont need every piece of software to use nvenc its just a recording and transcode machine
[16:45:01 CEST] <BtbN> obs does not use the ffmpeg cli tool
[16:45:19 CEST] <BtbN> It needs the libraries it uses to support it
[16:45:23 CEST] <bigpod> what it uses
[16:46:43 CEST] <bigpod> because what i get on the OBS side of things is i need to compile FFMPEG in such a way that it's installed at system level and not user level
[16:47:21 CEST] <BtbN> Which is exactly what I just explained...
[16:47:34 CEST] <BtbN> That or a local build of OBS with static ffmpeg with ffnvcodec support
[16:48:37 CEST] <bigpod> and how to do that
[16:48:49 CEST] <BtbN> <BtbN> Use the apt/dpkg magic to get the exact sources of your system packages, build them with the exact same parameters, except you also enable ffnvcodec, and then install those debs you end up with
[16:49:18 CEST] <BtbN> There is no copy&paste tutorial for that stuff
[16:53:03 CEST] <bigpod> and how to do that
[16:57:29 CEST] <TikityTik> DHE: don't smaller buffers in common sense mean more variance? I don't understand
[16:59:58 CEST] <DHE> each frame is allotted up to bitrate/framerate + remaining_buffer_size bytes. if the buffer is small, the second term approaches 0 and you get an actual consistent bitrate
[17:00:12 CEST] <DHE> (if the frame uses less than bitrate/framerate bytes, the excess goes into the buffer if capacity permits)
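
To put illustrative numbers (not from this discussion) on DHE's formula: at -b:v 2000k and 25 fps, each frame's base budget is 2000/25 = 80 kbit. With -bufsize 1000k, a keyframe can spend up to 80 + 1000 = 1080 kbit when the buffer is full; with a tiny bufsize, every frame is capped near 80 kbit, which is why keyframes degrade first.
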
[17:03:31 CEST] <TikityTik> so how do i know what bufsize to use if i want to do constant quality?
[17:03:49 CEST] <TikityTik> for libvpx for example
[17:07:47 CEST] <kepstin> if you want constant quality, then you shouldn't be using a buffer limit at all
[17:08:13 CEST] <kepstin> the purpose of the buffer stuff is to limit the bitrate usage, causing quality to be reduced in complex scenes that would otherwise use too much bitrate
[17:08:19 CEST] <TikityTik> i notice for libvpx that if i want to use cbr mixed with crf then you need bufsize
[17:09:08 CEST] <TikityTik> thanks for letting me know though, i didn't notice crf actually works without bufsize
[17:09:10 CEST] <DHE> constant quality and constant bitrate are mutually at odds. this is more constant quality with constrained bitrate
[17:09:18 CEST] <kepstin> with libvpx, using crf in combination with a bitrate causes the crf to be used as a maximum quality value (i.e. it will constrain the max quality)
[17:09:47 CEST] <kepstin> so if there's a simple scene such that if it used the full bitrate, the quality will be over the specified crf value, then it'll limit it and use less bitrate
[17:10:32 CEST] <TikityTik> ah so it's not maintain crf and use the bitrate as a max?
[17:11:13 CEST] <kepstin> nope, it's the other way around
[17:11:23 CEST] <TikityTik> is it not possible to do it in libvpx?
[17:11:27 CEST] <kepstin> (with libvpx)
[17:11:33 CEST] <kepstin> note that x264 works very differently
[17:13:08 CEST] <TikityTik> is it not possible to maintain crf and have constrained bitrate with libvpx?
[17:15:39 CEST] <kepstin> if you set a crf value such that with most content the crf limit would be below the specified bitrate limit, then that's effectively what you'll get
[17:24:08 CEST] <TikityTik> i see
[17:24:11 CEST] <TikityTik> thanks
[17:25:12 CEST] <kepstin> in other words - if you set the quality low enough that it wouldn't use all the available bitrate, then the bitrate limits won't have an effect.
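
For libvpx-vp9 that constrained-quality combination looks roughly like this (illustrative values):

    ffmpeg -i in.mp4 -c:v libvpx-vp9 -crf 32 -b:v 2M out.webm

Here -crf 32 caps the quality on simple content and -b:v 2M caps the bitrate on complex content, matching kepstin's description above.
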
[17:48:56 CEST] <Numline1> Guys, I'm getting very close with my YUV handling :)
[17:49:27 CEST] <Numline1> kepstin I think we spoke yesterday about how y4m output works. You've mentioned that the frame size is width x height x 1.5 for yuv420
[17:49:42 CEST] <Numline1> However I've noticed there's a "FRAME \n" thingie before each frame
[17:49:50 CEST] <furq> Numline1: if you need to convert to jpeg anyway then you might as well just have ffmpeg output mjpeg
[17:50:27 CEST] <Numline1> furq I'm actually considering it at this point. I managed to split the Y4M into separate frames, however I can't seem to open them
[17:50:36 CEST] <Numline1> My header was 61 bytes
[17:50:47 CEST] <furq> mjpeg is more or less just a bunch of concatenated jpegs
[17:50:54 CEST] <furq> so just read from 0xFFD8 to 0xFFD9
[17:50:55 CEST] <Numline1> When I removed the FRAME\n part, I eliminated another 12 bytes (for two frames)
[17:51:00 CEST] <furq> inclusive
[17:51:17 CEST] <Numline1> furq what are those characters anyway?
[17:51:25 CEST] <furq> jpeg SOI and EOI markers
[17:51:33 CEST] <Numline1> ohh
[17:52:14 CEST] <Numline1> Well my hex editor can't see them for some reason, 0 results
[17:53:38 CEST] <Numline1> furq would you mind if I sent you a frame (yuv) to see whether it's correctly split?
[17:53:53 CEST] <Numline1> I don't want to waste your time if you're busy
[17:55:20 CEST] <furq> if you just save it as .yuv then mpv should be able to display it
[17:55:53 CEST] <Numline1> furq both ffmpeg and mpv say [ffmpeg] IMGUTILS: Picture size 0x0 is invalid
[17:56:04 CEST] <Numline1> I assume I saved an incorrect portion of the frame
[17:56:22 CEST] <furq> no you just need to give ffmpeg the video size
[17:56:24 CEST] <Numline1> the y4m file is playable okay
[17:56:31 CEST] <furq> -video_size 123x456 -i foo.yuv
[17:57:59 CEST] <Numline1> furq https://numshare.s3-eu-west-2.amazonaws.com/Screen-Shot-2019-04-25-17-57-55-1556207875.jpg
[17:58:10 CEST] <Numline1> the ffmpeg part worked, but I still suspect it's broken :(
[17:58:33 CEST] <Numline1> furq just to be sure - the "FRAME \n" part should be omitted from the final yuv when doing y4m to yuv?
[17:59:36 CEST] <Numline1> Again, I did the math and basically width * height * 1.5 = <some_number>. The some_number + size of the header + amount_of_frames * 6 === file size
[18:00:39 CEST] <Numline1> 6 is bytesize of FRAME\n
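
A rough Go sketch of that accounting as a reader, hard-coding a 1280x720 4:2:0 stream for brevity (a real parser would take W, H and C from the header line):

    package main

    import (
        "bufio"
        "io"
        "os"
    )

    func main() {
        r := bufio.NewReader(os.Stdin)
        // stream header, e.g. "YUV4MPEG2 W1280 H720 F30:1 Ip A1:1 C420mpeg2 ...\n"
        if _, err := r.ReadString('\n'); err != nil {
            panic(err)
        }
        const w, h = 1280, 720
        frame := make([]byte, w*h*3/2) // Y plane plus quarter-size Cb and Cr planes
        for {
            // per-frame marker: "FRAME", optional parameters, then '\n'
            if _, err := r.ReadString('\n'); err != nil {
                break // EOF
            }
            if _, err := io.ReadFull(r, frame); err != nil {
                break
            }
            // frame now holds one raw planar 4:2:0 picture
        }
    }
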
[18:01:51 CEST] <furq> the actual frame data excludes the headers, yeah
[18:01:56 CEST] <furq> https://clbin.com/ZyTNU
[18:01:58 CEST] <furq> but also just use this
[18:04:13 CEST] <furq> it's probably buggy but piping 8K mjpeg from ffmpeg worked
[18:04:27 CEST] <Numline1> furq thank you very much for that (especially since it's in Go)
[18:04:33 CEST] <Numline1> furq did you write that or find that?
[18:04:41 CEST] <furq> i wrote it
[18:04:59 CEST] <furq> it was a good excuse to actually use a bufio.Scanner
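
The idea furq describes, sketched as a bufio.Scanner split function (this is not his actual paste, just an illustration; it leans on JPEG byte-stuffing, which keeps 0xFFD9 from appearing inside entropy-coded data):

    package main

    import (
        "bufio"
        "bytes"
        "os"
    )

    var soi = []byte{0xFF, 0xD8} // start of image
    var eoi = []byte{0xFF, 0xD9} // end of image

    // splitJPEG emits one complete JPEG per token, markers included.
    func splitJPEG(data []byte, atEOF bool) (int, []byte, error) {
        start := bytes.Index(data, soi)
        if start < 0 {
            if atEOF {
                return len(data), nil, nil // discard trailing junk
            }
            return 0, nil, nil // need more data
        }
        end := bytes.Index(data[start:], eoi)
        if end < 0 {
            if atEOF {
                return len(data), nil, nil // incomplete final frame
            }
            return start, nil, nil // skip to SOI, wait for EOI
        }
        end += start + len(eoi)
        return end, data[start:end], nil
    }

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 1<<20), 1<<22) // allow frames up to 4 MiB
        sc.Split(splitJPEG)
        for sc.Scan() {
            _ = sc.Bytes() // one JPEG frame; decode or write it out here
        }
    }
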
[18:05:29 CEST] <Numline1> furq that's very cool of you. What do you think was wrong here? https://privatebin.net/?052cc1e5a8d2e804#/PDcbmE9TZPZR2a4fYz6X5WalmkbjRIi11gPA4F2H3c=
[18:05:38 CEST] <Numline1> We took very different aproaches I think
[18:06:00 CEST] <Numline1> Scanner gave me some headaches lol :) I had to figure out when the file pointer was moving and when not :)
[18:07:41 CEST] <Numline1> I was thinking about using scanner split functions, I was just thinking about newlines (someone in here advised I shouldn't use it as frame delimiter, since there can be the same character somewhere in the frame)
[18:14:07 CEST] <furq> yeah there's no need for a scanner there really
[18:14:38 CEST] <furq> i can't see anything obviously wrong with that code but i'd have to actually test it and see what it's doing
[18:16:07 CEST] <Numline1> furq don't bother with that, I like your solution better tbh. I'm just trying to figure out how you came up with 1<<20 and 1<<22
[18:16:13 CEST] <Numline1> is that just random size?
[18:16:15 CEST] <furq> yeah
[18:16:17 CEST] <furq> 1MB and 4MB
[18:16:34 CEST] <Numline1> furq neat. Could I possibly use that width x height * 1.5 formula for that?
[18:16:52 CEST] <furq> the second argument to Buffer has to be larger or else it won't ever reallocate the buffer if it runs out of space
[18:17:00 CEST] <Numline1> instead of 1<<20 and the same for the second one, but times the amount of frames
[18:17:04 CEST] <furq> and uh
[18:17:21 CEST] <furq> it's mjpeg so the frames are already compressed
[18:17:46 CEST] <Numline1> damn, really? I always thought it's raw uncompressed data
[18:17:52 CEST] <Numline1> that explains some problems
[18:18:05 CEST] <Hello71> have you heard of "jpeg"
[18:21:41 CEST] <Numline1> fair enough
[18:27:26 CEST] <kubast2> Hey, how can I overlay transparently more than 2 videos?
[18:28:57 CEST] <kubast2> or
[18:29:10 CEST] <cfoch> Hello
[18:29:40 CEST] <cfoch> I am reversing an audio file with this command: ffmpeg -y -i "$input_file" -af silenceremove=1:0:$ratio "$input_file"
[18:29:54 CEST] <cfoch> but I note that for some reason the result file has less duration than the original one
[18:29:58 CEST] <cfoch> any idea why?
[18:30:15 CEST] <furq> did you also notice that it's not reversed
[18:30:27 CEST] <cfoch> sorry
[18:30:30 CEST] <cfoch> the wrong command
[18:30:38 CEST] <cfoch> ffmpeg -y -i "$input_file" -af "areverse" "$input_file"
[18:30:41 CEST] <cfoch> this is the command I use
[18:31:11 CEST] <furq> don't use the same output filename as input filename
[18:31:31 CEST] <kubast2> yep
[18:31:40 CEST] <furq> ffmpeg -i in out && mv out in
[18:32:00 CEST] <cfoch> I see
[18:33:53 CEST] <durandal_1707> cfoch: ffmpeg version is?
[18:34:27 CEST] <cfoch> ffmpeg version 4.0.3 Copyright (c) 2000-2018 the FFmpeg developers
[18:34:44 CEST] <cfoch> in the new version using the same output file as the input file is allowed?
[18:35:38 CEST] <durandal_1707> nope
[18:36:23 CEST] <durandal_1707> anyway use latest ffmpeg version
[19:47:36 CEST] <ChocolateArmpits> Using drawgrid, is it possible to have the lines be spaced non-linearly? It seems the width or the height parameter simply takes the resultant expression value and uses that to plot lines at exact distances, so I have to use multiple filters to get what I want
[19:48:26 CEST] <durandal_1707> nope, and why you need something different?
[19:49:04 CEST] <ChocolateArmpits> What do you mean different? I don't want evenly spaced lines that's all
[19:49:27 CEST] <durandal_1707> use geq filter
[19:51:52 CEST] <pzich> wow, geq sounds pretty awesome
[19:52:05 CEST] <ChocolateArmpits> okay that seems more like it
[19:52:22 CEST] <ChocolateArmpits> hopefully it's not slow, seems to have slice threading
[19:55:44 CEST] <ChocolateArmpits> I'm actually plotting volume bars for vertically displayed showvolume output of 16 channel audio input. The default volume measurement provided by showvolume is quite bad so I'm not using it
[19:56:22 CEST] <ChocolateArmpits> drawtext lists volume level at each predefined bar position
[19:59:18 CEST] <durandal_1707> ChocolateArmpits: how can that be true? What is missing in that filter?
[19:59:52 CEST] <ChocolateArmpits> durandal_1707, showvolume?
[20:00:22 CEST] <durandal_1707> yes
[20:00:59 CEST] <ChocolateArmpits> Well if the orientation is set to vertical then the volume indication is also drawn vertically, which makes it hard to read
[20:01:52 CEST] <durandal_1707> anything else?
[20:02:56 CEST] <ChocolateArmpits> The volume number updates with the frequency of the framerate; even at 8 fps the updates are too often, but lowering it makes the volume bar update too sluggishly
[20:03:25 CEST] <ChocolateArmpits> And the size of the volume indication is somewhat small and hard to read at higher dimensions
[20:04:03 CEST] <ChocolateArmpits> So overall I plan to have it disabled, especially because a few horizontal lines indicating volume level across 16 channels are just easier to view
[20:05:14 CEST] <ChocolateArmpits> I mean I already have it working with 8 drawgrid filters, but I want a lighter filter command, so as you suggest, geq may help
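
Along the lines durandal_1707 suggests, a single geq invocation can place horizontal lines at arbitrary rows; the row numbers below are illustrative:

    geq=lum='if(eq(Y,100)+eq(Y,180)+eq(Y,220),255,lum(X,Y))':cb='cb(X,Y)':cr='cr(X,Y)'

Each eq(Y,row) term forces that row's luma to full brightness while the chroma planes pass through untouched, so the spacing follows whatever expression you write rather than drawgrid's fixed step.
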
[20:34:48 CEST] <de-facto> Hey guys, I want to combine four "sprite" videos (a spinning object) along the four edges of a black canvas (combined output video). Each "sprite" needs rotation e.g. 0 deg, 90 deg, 180 deg and 270 deg
[20:35:38 CEST] <de-facto> Is something like this scriptable with ffmpeg? the source "sprites" will be VP8 webm from either chrome or firefox
[20:36:13 CEST] <__Shepherd> are the sprite vids still
[20:36:19 CEST] <pink_mist> I know it's doable with ffmpeg, yes ... I haven't the faintest clue how
[20:36:50 CEST] <__Shepherd> oh never mind
[20:36:55 CEST] <de-facto> the sprites are rotations from a 3D object rendered with threejs (full rotations along axes)
[20:37:25 CEST] <de-facto> maybe -filter_complex something?
[20:37:31 CEST] <__Shepherd> let me get this straight
[20:37:34 CEST] <pink_mist> yeah, probably
[20:38:29 CEST] <__Shepherd> your sprite vids display rotating objects, or the vid itself is yet to be made rotating?
[20:39:49 CEST] <de-facto> I have a 3D model (glb) which is rendered in the browser with threejs. This animation shows it rotating around specific axes and i capture this with ccapture.js to a webm VP8 video as "sprite". It shows the object rotating.
[20:42:43 CEST] <de-facto> now i want to put this "sprite" four times along the edges of a big black canvas. Each "sprite" video rotated by 90 degrees: one normal orientation, second rotated 90 deg, third upside down and forth rotated 270 deg
[20:43:02 CEST] <de-facto> video (2d) rotation that is
[20:44:16 CEST] <__Shepherd> hang there a sec
[20:47:51 CEST] <de-facto> correction: it's not upside down, it's all rotations 0, 90, 180, 270 deg
[20:49:03 CEST] <de-facto> it is going to be a "fake" hologram displayed by a monitor and its reflection is shown by a glass pyramid below. preferably it would need to show the four sprites at different timecodes
[20:50:37 CEST] <de-facto> e.g. first from start, second 1/4 playtime, third 1/2 playtime, fourth 3/4 playtime to show correct "object" rotations
[20:53:57 CEST] <de-facto> also the "sprites" would need to be looped for this i guess (until first sprite got one complete playtime)
[20:57:13 CEST] <de-facto> output from the big black "canvas" should be an mp4 playable by a raspberry pi (e.g. 1600x1200 or such)
[21:08:51 CEST] <__Shepherd> what's the size of your canvas?
[21:09:02 CEST] <__Shepherd> and fps?
[21:09:31 CEST] <__Shepherd> and duration?
[21:10:30 CEST] <__Shepherd> guessing you want your duration to be a multiple of your sprite vid duration so it can loop w/ no problem
[21:37:05 CEST] <de-facto> __Shepherd, first capture "sprite" is a 30s WebM in VGA (640x480) in VP8 codec 60fps, the canvas size is 1600x1200 in 60fps
[21:39:00 CEST] <de-facto> " Stream #0:0: Video: vp8, yuv420p(progressive), 640x480, SAR 1:1 DAR 4:3, 60 fps, 60 tbr, 1k tbn, 1k tbc (default)"
[21:40:41 CEST] <de-facto> i hope this fits in the video canvas, but i can adjust the sprite size prior to creating it (html canvas)
[21:42:17 CEST] <__Shepherd> 640 x2 is 1280 but your canvas is only 1200 in height
[21:42:57 CEST] <__Shepherd> some of your sprite video will be outside
[21:45:29 CEST] <de-facto> the sprite will be with its bottom edge along each outer edge of the big canvas (maybe with a margin, have to adjust the reflection probably), so its 480x2 = 960 < 1200 and 480x2 = 960 < 1600
[21:46:35 CEST] <__Shepherd> but your sprite vid is 640x480
[21:47:14 CEST] <de-facto> if it helps, this is my first capture of the sprite: https://uploadfiles.io/foi1hf1r
[21:47:22 CEST] <__Shepherd> nah it will fit
[21:49:35 CEST] <de-facto> yeah 640 is the width along each outer edge of the canvas, only the sprite height is perpendicular to these edges
[21:59:01 CEST] <__Shepherd> Uploaded file: https://uploads.kiwiirc.com/files/8ec67d8f3efdf889ff7dc74bef960fc9/rotation.flv
[21:59:21 CEST] <__Shepherd> is that the effect you need de-facto
[22:03:17 CEST] <de-facto> yeah kinda, but all four at once
[22:04:01 CEST] <__Shepherd> all four to be visible at the same time?
[22:06:38 CEST] <de-facto> yes with some seek offsets, like 0s, 1/4 playtime, 1/2 playtime, 3/4 playtime like this: https://s16.directupload.net/images/190425/xfrmtxcg.png
[22:11:24 CEST] <__Shepherd> you want the bottom video1 to appear starting from 0s to 7.5s, then the video2 from 7.5s to 15s, video3 from 15s to 22.5s, finally video4 from 22.5s to 30s then have that looped
[22:11:45 CEST] <__Shepherd> I'm not really sure I understand what you want to achieve
[22:14:12 CEST] <Classsic> hi
[22:14:28 CEST] <Classsic> somebody know how fix this "Timestamps are unset in a packet for stream 0. This is deprecated and will stop working in the future. Fix your code to set the timestamps properly"
[22:14:55 CEST] <__Shepherd> de-facto or maybe you just want them to rotate in fast succession with a predetermined interval between all of them
[22:15:01 CEST] <JEEB> means that the AVPackets passed in the framework don't have timestamps
[22:15:08 CEST] <de-facto> i want the target video to make a full loop (30s). It will be displayed on a 1600x1200 monitor with a raspi: its reflection will be on a glass pyramid (4 triangles), so the object appears as rotating in the middle of the glass pyramid: each sprite would have to loop for 30 seconds but start with offsets: 0s, 7.5s, 15s, 22.5s and loop for full 30 s
[22:16:32 CEST] <__Shepherd> got it
[22:17:24 CEST] <de-facto> e.g. sprite0 from 0->30s, sprite1 7.5s->7.5s (loop), sprite2 15s->15s (loop), sprite3 22.5s->22.5s (loop)
[22:17:44 CEST] <de-facto> a bit difficult to explain, but its for correct object rotation angles for each reflection
[22:17:50 CEST] <Classsic> ok, how can I fix this? this is rtsp ----> rtp
[22:18:00 CEST] <__Shepherd> that's actually much simpler
[22:24:42 CEST] <de-facto> i probably would have to adjust it on the glass pyramid itself when it is working (e.g. symmetric distances from the center), but thats fine tuning...
[22:25:47 CEST] <de-facto> either using a sprite aspect=1 or maybe even with black=transparent or such, but thats first order corrections
[22:29:54 CEST] <de-facto> e.g. "project" the glass pyramid ground plane on the monitor (which is above facing downwards) and have the objects centers each one quater distance from the projection edges or such
[22:46:42 CEST] <__Shepherd> Uploaded file: https://uploads.kiwiirc.com/files/fc01a195ee9597ccf2c35e25cf39eb95/rotation.flv
[22:51:34 CEST] <de-facto> that looks almost perfect, how did you do this?
[22:57:26 CEST] <__Shepherd> Uploaded file: https://uploads.kiwiirc.com/files/40c92e4515ad0cda5e7a6e238efee62c/ffmpegcmd.txt
[22:59:12 CEST] <de-facto> whoa thats a lot of advanced ffmpeg fu :)
[23:00:03 CEST] <de-facto> need some time to understand this, but it looks really well :))
[23:01:14 CEST] <furq> the pi supports high at 4.1
[23:01:32 CEST] <furq> so probably just get rid of profile and level
[23:03:01 CEST] <furq> de-facto: http://vpaste.net/Z42lO
[23:03:05 CEST] <furq> it's a lot easier to understand with line breaks
[23:03:36 CEST] <de-facto> right now its a raspi2 with openelec i think
[23:03:47 CEST] <furq> every pi has the same hwdec chip iirc
[23:04:01 CEST] <furq> but if not then even the worst one does high at 4.1
[23:04:30 CEST] <furq> it's been a long time since devices could only do baseline
[23:05:01 CEST] <furq> also you probably want to remove -preset ultrafast if that wasn't obvious
[23:05:24 CEST] <another> btw: is it possible to just set a max profile level for x264?
[23:05:29 CEST] <de-facto> yeah mainly its the geometric and time layout that i could not do myself (ffmpeg noob here)
[23:05:35 CEST] <furq> another: sort of
[23:05:46 CEST] <furq> actually wait. no
[23:05:54 CEST] <furq> it's only sort of possible to set the level at all
[23:07:28 CEST] <de-facto> this is really a really cool script, thanks a lot __Shepherd
[23:08:58 CEST] <__Shepherd> sorry I left the room my computer got sluggish. tell me if that works
[23:09:26 CEST] <de-facto> it looks extremely well, im gonna need some time to understand
[23:10:04 CEST] <__Shepherd> I hope someone can optimize on that cuz the loop filter sucks so much ram damn
[23:10:12 CEST] <__Shepherd> so did it work for you?
[23:11:20 CEST] <de-facto> kinda, have to adjust the positions and understand what it does
[23:14:42 CEST] <de-facto> coordinates start in lower left corner?
[23:15:07 CEST] <furq> __Shepherd: http://vpaste.net/BgihG
[23:15:10 CEST] <furq> something like that (untested)
[23:16:10 CEST] <furq> de-facto: top left
[23:16:26 CEST] <de-facto> ok
[23:17:52 CEST] <furq> https://ffmpeg.org/ffmpeg-filters.html#overlay-1
[23:19:18 CEST] <__Shepherd> I'm sorry
[23:19:41 CEST] <__Shepherd> I left the X Y coordinates from my test run in
[23:21:50 CEST] <de-facto> its extremely helpful and even fun to use ffmpeg like this :)
[23:22:38 CEST] <__Shepherd> Uploaded file: https://uploads.kiwiirc.com/files/954d0b7cf8ebe997742b3b0d917ab8e0/ffmpegcmd.txt
[23:23:41 CEST] <de-facto> coordinates work well adjusted: http://vpaste.net/E3zBj
[23:24:31 CEST] <furq> de-facto: get rid of -preset ultrafast as well
[23:25:25 CEST] <__Shepherd> yeah change whatever setting you like depending on your liking
[23:29:22 CEST] <de-facto> this is awesome guys thanks a lot :))
[23:30:10 CEST] <__Shepherd> loop=3:1800:0 I have no explanation why I set loop's size to 1800
[23:30:31 CEST] <__Shepherd> but that's what worked
[23:30:49 CEST] <de-facto> 30s*60fps=1800 frames maybe?
[23:31:32 CEST] <__Shepherd> yes that's how I've calculated it
[23:31:55 CEST] <__Shepherd> but I don't have a complete understanding of why it worked
[23:32:32 CEST] <__Shepherd> other values just ruin the loop
[23:34:43 CEST] <__Shepherd> there's a max limit in this option so you can't loop anything seamlessly
[23:37:31 CEST] <__Shepherd> this is most likely the filter option that will cause you trouble when scripting to the raspberry
[23:38:55 CEST] <__Shepherd> but since you are in control of making the overlay inputs I guess you could work something out
[23:40:10 CEST] <de-facto> yeah i would have to feed it to the raspi tomorrow (I dont have it here right now) and adjust to the glass pyramid
[23:41:05 CEST] <de-facto> but this is much better with ffmpeg, the other guys messed around with adobe premiere for this
[23:51:02 CEST] <durandal_1707> de-facto: learn about hflip/vflip
[23:52:53 CEST] <de-facto> i also need 90° rotations though
[23:57:18 CEST] <durandal_1707> yes, but twice transpose is hflip/vflip
[23:59:01 CEST] <de-facto> yes, orientation seems to be quite well like this: http://vpaste.net/8iW06
[00:00:00 CEST] --- Fri Apr 26 2019