[Ffmpeg-devel-irc] ffmpeg.log.20190424

burek burek021 at gmail.com
Thu Apr 25 03:05:02 EEST 2019


[00:01:29 CEST] <BtbN> mkv is just fine
[00:08:36 CEST] <dablitz> can anyone continue to help me
[00:08:50 CEST] <cehoyos> Yes
[00:09:00 CEST] <dablitz> cehoyos:
[00:09:18 CEST] <cehoyos> No, sorry, I don't know much about rtp (if that is the question)
[00:09:29 CEST] <dablitz> yes that is the question
[00:11:33 CEST] <Atlenohen> calamari: OBS broadcaster specifically uses MKV as a failsafe against abrupt shutdowns or crashes; I was benchmarking, ran out of RAM, it crashed, and the video was fine
[00:11:47 CEST] <cehoyos> He has left
[00:11:52 CEST] <Atlenohen> Oh
[00:12:18 CEST] <cehoyos> But using broken mkv as "failsafe" is of course also not a useful answer;-)
[00:12:29 CEST] <cehoyos> Do you still have your configure line? It looked a little broken to me...
[00:12:36 CEST] <Atlenohen> At least in comparison to MP4
[00:13:22 CEST] <Atlenohen> You mean my ffmpeg compiling saga?
[00:13:37 CEST] <cehoyos> Iirc, you used both --disable-all and --disable-everything
[00:13:59 CEST] <cehoyos> everything is a debug option that you should not use, and it is always a subset of all
[00:14:13 CEST] <cehoyos> You should not use "--disable-everything"!
[00:14:14 CEST] <cehoyos> You should not use "--disable-everything"
[00:14:39 CEST] <Atlenohen> Oh, I did fully sort it out though; however, I'll check that right now
[00:14:40 CEST] <cehoyos> Don't remember the other issues atm, saw them in the log
[00:14:59 CEST] <cehoyos> But if you run "./ffmpeg", I can have another look
[00:16:10 CEST] <cehoyos> I meant: If you post your configure line again, I can have a look
[00:16:38 CEST] <Atlenohen> FFmpeg was building fine; the two other issues were that I had to modify the x264 header for ffmpeg to know it's getting a static x264 lib instead of a dynamic one, the X264_API_IMPORTS stuff
[00:17:06 CEST] <Atlenohen> and I had to simply include bcrypt.lib in the additional libraries of the project settings linker in VS2017, presto
[00:17:09 CEST] <elmofan> can any deshaker for ffmpeg also correct rotational shake?  or is it all translation?
[00:17:34 CEST] <Atlenohen> Sure
[00:17:58 CEST] <cehoyos> Of course, I believe we resolved bcrypt originally ("in a minute" iirc); you just added some additional options, some of them are wrong afair, one of them was "--disable-everything"
[00:18:05 CEST] <Atlenohen> no sorry, to cehoyos: sure I can, one moment
[00:18:34 CEST] <Atlenohen> I did fiddle in depth with the options too, but I don't remember
[00:19:07 CEST] <Atlenohen> cehoyos: # ./configure --toolchain=msvc --arch=x86_64 --target-os=win64 --extra-cflags=-I./dependency/x264/include --extra-cflags=-MT --extra-cflags=-GS- --extra-ldflags='-LIBPATH:./dependency/x264/lib' --prefix=./CAKE --disable-all --disable-everything --disable-network --disable-debug --enable-static --enable-avformat --enable-avcodec --enable-swscale --enable-swresample --enable-gpl --enable-protocol=file --enable-muxer='matroska,mpegts,mxf,webm,rawvideo,frame*' --enable-libx264 --enable-encoder='mpeg4,libx264*,bmp,ffv*,rawvideo'
[00:19:25 CEST] <cehoyos> Are you using msys or wsl?
[00:19:29 CEST] <Atlenohen> msys2
[00:20:04 CEST] <Atlenohen> it's more like msys2+mingw64, properly speaking.
[00:20:07 CEST] <cehoyos> Remove arch (it does not do what you think it does, if it had an effect, it would break compilation) and target
[00:20:45 CEST] <cehoyos> Remove --disable-everything (it is a subset of all, and a subset that you are not interested in)
[00:21:14 CEST] <cehoyos> Remove --disable-static (it has been the default since forever, much more than a decade)
[00:21:21 CEST] <cehoyos> Remove --enable-static (it has been the default since forever, much more than a decade)
[00:21:23 CEST] <cehoyos> (sorry)
[00:21:41 CEST] <Atlenohen> Are you sure? I did go into the configure script and hand-edited some stuff that was annoying me, but it's working now, although, who knows, maybe there are some unforeseen consequences when encoding
[00:22:00 CEST] <cehoyos> Remove --disable-network as it is a subset of --disable-all
[00:22:08 CEST] <Atlenohen> Ah if it's a subset then probably not necessary
[00:22:16 CEST] <cehoyos> Yes, I am sure that --enable-static is the default since forever and that forever is >10 years
[00:24:05 CEST] <Atlenohen> I see that there are multiple things leading to x86, like i[3-6]86*|i86pc|BePC|x86pc|x86_64|x86_32|amd64) arch="x86" - so I left it be as it probably didn't break anything
[00:24:32 CEST] <cehoyos> Again: arch breaks compilation
[00:24:53 CEST] <cehoyos> target is useless because it is set by toolchain=msvc
[00:25:16 CEST] <Atlenohen> how does it break it?
[00:26:03 CEST] <cehoyos> If your compiler actually is configured for "x86_64", arch does nothing, if your compiler happens to be configured for another target, arch breaks (or maybe: can break) compilation
[00:26:06 CEST] <cehoyos> Please remove it
[00:26:11 CEST] <cehoyos> to be on the safe side.
[00:28:27 CEST] <cehoyos> Did you benchmark "-GS-"? I ask because I believe we should make it the default (but only if it has an effect).
[00:29:07 CEST] <Atlenohen> No, I've not done that specifically, since I did more side stuff in the VS project
[00:29:32 CEST] <Atlenohen> However MSVC has win32 not win64 as the target_os_default
[00:29:56 CEST] <Atlenohen> and there is no arch
[00:30:35 CEST] <cehoyos> There is no difference between "win32" and "win64"
[00:30:49 CEST] <cehoyos> There is no arch because the option can break compilation, you should not use it
[00:30:49 CEST] <Atlenohen> yeah it's one of those duds again
[00:30:59 CEST] <cehoyos> (Unless you cross-compile which you are not)
[00:31:32 CEST] <cehoyos> If you have not benchmarked it, why do you use it?
[00:31:52 CEST] <Atlenohen> most of this stuff was in the original configure statement
[00:32:16 CEST] <cehoyos> Do you know what -GS- does?
[00:32:33 CEST] <Atlenohen> which is several years old, or so, when I wasn't around
[00:32:43 CEST] <Atlenohen> I figured out all the options yes
[00:32:51 CEST] <cehoyos> So what does it do?
[00:33:41 CEST] <Atlenohen> I had to; it was quite a run, I worked 16 hours straight for 3 days lol, finally found out I'm supposed to be using -LIBPATH: not -L
[00:33:57 CEST] <Atlenohen> Disables buffer security
[00:34:28 CEST] <cehoyos> That took me more than a minute arguably (in the meantime I also compiled for Windows but decided to use wsl, not msys), but fortunately not days;-)
[00:34:43 CEST] <cehoyos> to find out about LIBPATH
[00:35:07 CEST] <cehoyos> But I decided to install into /lib/x86-64/whatever to avoid this issue;-)
[00:36:11 CEST] <Classsic> hi
[00:36:35 CEST] <Classsic> does somebody know how to add a buffer on rtp output?
[00:38:59 CEST] <Atlenohen> the linker looks at 5 places by default in my case, without LDFLAGS; unfortunately I figured that out too late. Some lib folders in MS .NET, SDK and VC paths on the system, but the two easy ones were that it was looking at msys64/mingw64/lib/x264.lib and ffmpeg's root github folder all that time, facepalm
[00:40:15 CEST] <Atlenohen> it does not look into ffmpeg-repo/lib/x86-64/ AFAIK
[00:46:42 CEST] <Atlenohen> it would be a good idea to update configure to use LIBPATH: for Windows; it only uses it in one odd case in the pkg-config handling
[00:47:27 CEST] <Atlenohen> And there's a lot of other stuff in those 3 days, that was quite a summary I made there
[01:45:36 CEST] <Atlenohen> later
[02:10:55 CEST] <LunaLovegood> The docs about AVCodecContext::time_base say to use 1/framerate, but wouldn't 1/90000 make more sense for MPEG stuff?
[07:08:36 CEST] <elmofan> how can i auto-censor faces
[13:49:01 CEST] <Numline1> Hey guys. So, what does -ss stand for?
[13:49:08 CEST] <Numline1> I know it's for seeking, but what's the other s
[13:49:15 CEST] <Numline1> seek start?
[14:01:20 CEST] <DHE> it may just be different than -s because it's already used for scaling the image size. eg: -s 1920x1080
[14:14:04 CEST] <Numline1> DHE tl;dr ffmpeg devs are nazis
[14:18:30 CEST] <Numline1> anyway, jokes aside - I was kinda wondering, with all the protocols supported by ffmpeg
[14:18:53 CEST] <Numline1> I kinda need to process a file from Google Storage without downloading it locally
[14:19:08 CEST] <Numline1> Since my app running in app engine can't really access local storage anyway
[14:19:15 CEST] <Numline1> what protocol could be used for this?
[14:19:24 CEST] <Numline1> it seems async could be the one?
[14:22:44 CEST] <DHE> you can always write your own or shoehorn a pipeline that the ffmpeg CLI makes awkward or impossible.
[14:26:42 CEST] <Mavrik> Just read via HTTP? :)
[14:27:04 CEST] <DHE> Maybe. If you need to do something more complicated, see doc/examples/avio_reading.c to write your own IO handler for ffmpeg
[14:27:37 CEST] <Numline1> DHE Well, I suck at C so that's a nope :)
[14:27:44 CEST] <Numline1> Mavrik will it be a streamed input though?
[14:28:15 CEST] <Numline1> I'm actually going to try it right now, but I do a lot of seeking and stuff, so I'd prefer the file doesn't get downloaded in its entirety for no reason
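
A minimal Go sketch of Mavrik's "just read via HTTP" suggestion above: let ffmpeg pull the input over HTTP(S) itself and stream the result to stdout, so nothing touches local disk. The URL and the output options here are placeholders, not anything from this log.

    package main

    import (
        "io"
        "log"
        "os"
        "os/exec"
    )

    func main() {
        // ffmpeg fetches the input over HTTP(S) itself; "-" sends output to stdout.
        cmd := exec.Command("ffmpeg",
            "-i", "https://example.com/test.mp4",
            "-f", "mpegts", "-")
        out, err := cmd.StdoutPipe()
        if err != nil {
            log.Fatal(err)
        }
        if err := cmd.Start(); err != nil {
            log.Fatal(err)
        }
        // In a real app this io.Reader could be handed to an object-storage
        // client instead of being copied to stdout.
        if _, err := io.Copy(os.Stdout, out); err != nil {
            log.Fatal(err)
        }
        _ = cmd.Wait()
    }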
[16:08:02 CEST] <PixelHir> Hi, I need a bit of help
[16:08:14 CEST] <PixelHir> How can you convert mp3 to mp4? without image etc, sound only
[16:08:26 CEST] <PixelHir> Facebook's voice messages are stored in mp4 for some reason
[16:08:34 CEST] <PixelHir> and i need to convert but i don't know how
[16:12:56 CEST] <pink_mist> ffmpeg -i in.mp3 -a:c copy out.mp4?
[16:13:15 CEST] <DHE> that's -c:a
[16:13:40 CEST] <pink_mist> ah
[16:33:59 CEST] <Numline1> Folks, one more question - is it possible to pipe out from ffmpeg when creating multiple output files/images?
[16:34:32 CEST] <Numline1> I'm doing this at the end of my command - thumb%04d.jpg
[16:34:43 CEST] <Numline1> And I'm trying to process the result programmatically if possible
[16:37:28 CEST] <Numline1> ffmpeg -ss 1 -i https://example.com/test.mp4 -t 200 -vf scale=1280:-1,thumbnail=50 -vsync 0 thumb%04d.jpg
[16:37:32 CEST] <Numline1> would be the entire command :)
[16:37:34 CEST] <kepstin> Numline1: you can with some image formats (using either image2pipe or mjpeg for example), but note that ffmpeg will simply write one frame then the next, so you have to parse the images yourself afterwards.
[16:38:05 CEST] <kepstin> (you might consider a raw image format rather than e.g. jpg if you're passing it to another program - then the images are all fixed sizes and you don't have to decode it again)
[16:38:22 CEST] <pomaranc> Numline1: aren't you from slovakia by any chance?
[16:38:56 CEST] <Numline1> kepstin yeah that's what I was wondering how that behaves. One output is easy to handle, multiple outputs to stdout are a mystery to me :)
[16:39:00 CEST] <Numline1> pomaranc o/
[16:39:34 CEST] <kepstin> Numline1: it's not multiple outputs, it's just one output, a stream of bytes containing one image then the next.
[16:39:48 CEST] <Numline1> kepstin will it send EOF though?
[16:39:55 CEST] <Numline1> if so, I can easily handle that I think
[16:39:56 CEST] <kepstin> Numline1: only after the last frame
[16:40:26 CEST] <Numline1> kepstin that's a problem :D
[16:40:55 CEST] <kepstin> that's why I recommend using a raw image format - since each frame is the same number of bytes (calculated from frame size), you just read N bytes, treat as a frame, repeat.
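
As a concrete Go sketch of that read-N-bytes loop (the 1280x720 geometry and the 4-byte-per-pixel bgr0 format are assumptions for illustration; they have to match whatever ffmpeg is told to emit):

    package main

    import (
        "fmt"
        "io"
        "os"
    )

    func main() {
        const width, height = 1280, 720
        const frameSize = width * height * 4 // bgr0: 4 bytes per pixel
        buf := make([]byte, frameSize)
        frames := 0
        for {
            // Read exactly one frame's worth of bytes; EOF arrives after the last one.
            if _, err := io.ReadFull(os.Stdin, buf); err != nil {
                break
            }
            frames++
            // ... hand buf off for processing here ...
        }
        fmt.Fprintf(os.Stderr, "read %d frames\n", frames)
    }

It would be fed with something like: ffmpeg -i in.mp4 -vf scale=1280:720 -pix_fmt bgr0 -f rawvideo - | ./reader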
[16:43:24 CEST] <Numline1> kepstin well, that might work, if it's not prone to errors. What I'm actually doing is I'm processing a video in Go, trying to get multiple outputs. Since the app is going to run in app engine, I'm avoiding file writes. I can get a Reader from the stdout and pass that directly to Google Storage library
[16:43:56 CEST] <Numline1> but let's pretend it's language agnostic - doesn't ffmpeg output also some verbose crap to stdout?
[16:44:06 CEST] <kepstin> no, stderr (and you can turn that off)
[16:46:24 CEST] <Numline1> I mean, I'm getting stuff like
[16:46:25 CEST] <Numline1> ffmpeg version 4.1.1 Copyright (c) 2000-2019 the FFmpeg developers
[16:46:25 CEST] <Numline1>   built with Apple LLVM version 10.0.0 (clang-1000.11.45.5)
[16:46:36 CEST] <Numline1> judging by the exit code, it might be stdout, but I'm not arguing with you
[16:47:07 CEST] <another> that is going to stderr
[16:47:14 CEST] <Numline1> oh :) good to know
[16:47:36 CEST] <Numline1> kepstin another okay guys, anyway, I'm going to fiddle with it a bit more, thanks for the tips on handling the outputs :) much appreciated
[17:11:41 CEST] <Numline1> kepstin so um :) I was trying your advice about raw data - ffmpeg -ss 1 -i https://something.com/something.mp4 -t 200 -vf scale=1280:-1,thumbnail=50 -vsync 0 -f data - | cat -
[17:11:47 CEST] <Numline1> Output file #0 does not contain any stream
[17:11:59 CEST] <Numline1> The error kinda makes sense, but I'm not sure why the output needs to contain a stream
[17:12:50 CEST] <furq> Numline1: you want -f rawvideo
[17:13:09 CEST] <Numline1> oh shoot, yea, I just found that furq
[17:13:10 CEST] <furq> although you probably actually want something like -f yuv4mpegpipe
[17:13:10 CEST] <kepstin> Numline1: the "data" format isn't what you want here. Try something like "-pix_fmt bgr0 -f rawvideo" (change the -pix_fmt option to taste)
[17:13:19 CEST] <Numline1> I thought that doesn't make sense since the output is actually images :)
[17:13:27 CEST] <furq> video is actually images
[17:13:40 CEST] <Numline1> touche
[17:14:00 CEST] <furq> if go has an mjpeg demuxer/decoder then that's probably the easiest answer
[17:14:38 CEST] <furq> or a y4m demuxer
[17:15:06 CEST] <Numline1> well I found an encoder (images > video)
[17:15:16 CEST] <Numline1> I wonder if it has the reverse :)
[17:15:26 CEST] <furq> the problem with rawvideo is you need to hardcode the dimensions, pixel format etc in your receiving application
[17:15:29 CEST] <Numline1> I liked the idea kepstin had, to use rawvideo and just somehow separate the bytes
[17:15:43 CEST] <furq> y4m is pretty much the same thing except with a header that contains all that
[17:15:51 CEST] <furq> you could probably write a demuxer in five minutes
[17:16:02 CEST] <furq> https://wiki.multimedia.cx/index.php/YUV4MPEG2
[17:16:29 CEST] <Numline1> Okay, I'll try y4m and output that stuff into a file to see the format. I can probably easily decode that into images by just reading the file via buffer
[17:16:35 CEST] <Numline1> furq kepstin thanks again fellas
[17:16:40 CEST] <kepstin> with raw video, as long as you know the frame size and pixel format in advance, reading a frame is just "read width * height * bytes per pixel bytes into a buffer"
[17:16:45 CEST] <kepstin> and repeat :/
[17:16:54 CEST] <furq> yeah which is fine if that's always the same
[17:17:09 CEST] <furq> if it's not and you're using yuv images anyway (which you are if it's always jpeg) then y4m will ultimately be less hassle
[17:17:39 CEST] <Numline1> Yeah, that would be easy to implement; I just don't trust that rawvideo solution. It's probably just a matter of time before something somewhere breaks and suddenly there's a frame that's incorrect and it screws up all further images
[17:17:43 CEST] <kepstin> y4m (or even something like nut) lets ffmpeg pass the information about width, height, pixel format, etc. to you in the container.
[17:18:12 CEST] <Numline1> tbh I honestly wish I could just output the files somewhere and get JSON from ffmpeg on output to see where they are
[17:18:26 CEST] <furq> there is an image2pipe muxer but i don't think anyone's ever used it
[17:18:30 CEST] <kepstin> there are more moving parts involved in encoding video, stuffing it into a container, demuxing it, and decoding it again... tbh I'd normally expect that to be more fragile :)
[17:18:30 CEST] <furq> only the demuxer is documented anywhere
[17:18:31 CEST] <Numline1> or just somehow loop the ffmpeg in some weird way and get single image for each loop
[17:19:04 CEST] <Numline1> furq yeah but image2pipe (if it does what the name sounds like) probably only does one image :)
[17:19:23 CEST] <furq> well the demuxer is used like cat *.jpg | ffmpeg -f image2pipe -i - ...
[17:19:29 CEST] <furq> so i assume the muxer works the same way
[17:19:29 CEST] <kepstin> no, image2pipe puts multiple images into a pipe
[17:19:46 CEST] <furq> it's worth trying just to see what it actually outputs
[17:19:48 CEST] <Numline1> kepstin yeah, but you still have to process and split them eventually :)
[17:19:53 CEST] <kepstin> but i'm pretty sure it has the issue that it doesn't indicate frame boundaries in any way, you're required to parse the image somehow to find them
[17:20:02 CEST] <furq> yeah i'm not sure about that either
[17:20:34 CEST] <Numline1> yeah, the yuv thingie seems to be more viable in the end
[17:21:10 CEST] <kepstin> but yeah, I use the piped raw video quite a bit in production, one app I have is actually a python script that renders frames with pycairo and then sends them to ffmpeg over a pipe to be encoded.
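
A Go sketch of the same pattern kepstin describes (a sketch under assumed parameters: the size, frame rate, pixel format and output name are placeholder choices): render raw frames in the program and pipe them to ffmpeg's stdin for encoding.

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        const width, height = 640, 480
        cmd := exec.Command("ffmpeg",
            "-f", "rawvideo", "-pix_fmt", "rgb24",
            "-s", "640x480", "-r", "30",
            "-i", "-", // raw frames arrive on stdin
            "-y", "out.mp4")
        stdin, err := cmd.StdinPipe()
        if err != nil {
            log.Fatal(err)
        }
        if err := cmd.Start(); err != nil {
            log.Fatal(err)
        }
        frame := make([]byte, width*height*3) // one rgb24 frame, all black
        for i := 0; i < 90; i++ { // three seconds at 30 fps
            // ... render into frame here ...
            if _, err := stdin.Write(frame); err != nil {
                log.Fatal(err)
            }
        }
        stdin.Close()
        if err := cmd.Wait(); err != nil {
            log.Fatal(err)
        }
    }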
[17:21:54 CEST] <Numline1> kepstin I'll save that as a fallback if YUV4MPEG2 fails for some reason, I already have metadata from ffprobe anyway
[17:25:06 CEST] <Numline1> ERROR: yuv4mpeg can only handle yuv444p, yuv422p, yuv420p, yuv411p and gray8 pixel formats. And using 'strict -1' also yuv444p9, yuv422p9, yuv420p9, yuv444p10, yuv422p10, yuv420p10, yuv444p12, yuv422p12, yuv420p12, yuv444p14, yuv422p14, yuv420p14, yuv444p16, yuv422p16, yuv420p16, gray9, gray10, gray12 and gray16 pixel formats. Use -pix_fmt to select one.
[17:25:08 CEST] <Numline1> yikes
[17:25:10 CEST] <Numline1> which one do I want
[17:25:21 CEST] <furq> don't set -pix_fmt at all
[17:25:55 CEST] <Numline1> furq that's when I got the error :P
[17:26:26 CEST] <Numline1> ffmpeg -ss 1 -i https://blah.com/mp4 -t 200 -vf scale=1280:-1,thumbnail=50 -vsync 0 -f yuv4mpegpipe - | cat > test.txt
[17:26:52 CEST] <furq> what pixel format is the input video
[17:28:23 CEST] <Numline1> ooff. It can be pretty much anything in the actual app. When it comes to this specific one, I'm not sure, how can I find out?
[17:28:38 CEST] <Numline1> I ran ffprobe on it, I just dont see it there
[17:29:35 CEST] <furq> Stream #0:0: Video: h264 (High), yuv420p(progressive), 1920x1080, 30 fps, 30 tbr, 1k tbn, 60 tbc (default)
[17:29:41 CEST] <furq> it should say there
[17:29:59 CEST] <furq> like 99% of videos you'll ever encounter are yuv420p so i figured y4m was a safe bet
[17:30:18 CEST] <Numline1> furq oh :) Stream #0:1(eng): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, bt709), 1920x1080 [SAR 1:1 DAR 16:9], 2161 kb/s, 30 fps, 30 tbr, 30k tbn, 60 tbc (default)
[17:30:40 CEST] <Numline1> I ran a ffmpeg command from stackoverflow which should determine that, it said: yuv420p
[17:30:44 CEST] <another> furq: except mjpeg
[17:31:03 CEST] <furq> mjpeg is always one of the supported y4m pixel formats
[17:31:54 CEST] <another> Stream #0:0: Video: mjpeg (MJPG / 0x47504A4D), yuvj422p(pc, bt470bg/unknown/unknown), 1280x720, 18252 kb/s, 23.98 fps, 23.98 tbr, 23.98 tbn, 23.98 tbc
[17:32:08 CEST] <furq> yuvj422p isn't a real pixel format
[17:32:20 CEST] <furq> although idk how the y4m muxer deals with that
[17:32:21 CEST] <another> it's not?
[17:32:33 CEST] <furq> no it's just yuv422p with the full range flag
[17:32:41 CEST] <Numline1> well I added -pix_fmt yuv420p and it works (?)
[17:32:43 CEST] <furq> using the yuvj pixel formats has been deprecated for ages
[17:32:47 CEST] <Numline1> It started spitting out output at least
[17:33:06 CEST] <Numline1> however I can't verify it makes sense (the output), it's a bunch of gibberish at this point
[17:33:53 CEST] <another> furq: does this full range flag mean anything?
[17:34:12 CEST] <furq> it means full colour range (0-255) instead of limited range (16-235)
[17:34:31 CEST] <furq> limited range is a tv thing which is why jpeg doesn't use it
[17:34:48 CEST] <Numline1> Well, I've piped the output to a file, the header says YUV4MPEG2 W1280 H720 F30:1 Ip A1:1 C420mpeg2 XYSCSS=420MPEG2
[17:34:48 CEST] <Numline1> FRAME
[17:34:59 CEST] <Numline1> I guess that's it and it worked :) Now I just need to decode it in Go
[17:35:27 CEST] <Numline1> The first result in Google is to use ffmpeg, lol
[17:35:58 CEST] <furq> ok i checked and yuvj444p to y4m works fine
[17:36:01 CEST] <furq> so that's nice to know
[17:36:12 CEST] <furq> i have no idea why it would want you to set pix_fmt with a yuv source though
[17:36:37 CEST] <Numline1> No idea man, but since it's the default one anyway...
[17:36:55 CEST] <Numline1> Thanks again though, hopefully I'll be able to go from here
[17:37:01 CEST] <Numline1> literally, since I'm using Go
[17:37:08 CEST] <Numline1> *awkward silence*
[18:00:05 CEST] <irwiss> not seeing this mentioned anywhere so this could be a silly question, but given a naive read_frame/send_packet/receive_frame decoding loop, can frames come out of the decoder out of order when using udp transport (or maybe tcp too?)
[18:01:35 CEST] <Mavrik> You'll get broken frames first
[18:01:46 CEST] <Mavrik> Although it's not very likely :)
[18:01:56 CEST] <Mavrik> Basically the decoder needs packets in order since frames depend on each other
[18:02:17 CEST] <Mavrik> But most protocols that use UDP have some kind of timestamp counter so you can reorder packets back :)
[18:02:20 CEST] <irwiss> ah so if i need to reorder it has to happen before decoder gets them
[18:02:34 CEST] <Numline1> kepstin sorry, one more thing if you don't mind. You've mentioned it's basically width * height * bytes per pixel. How do I figure out how many bytes there are per pixel?
[18:02:48 CEST] <furq> Numline1: the pixel format
[18:03:12 CEST] <furq> 4:2:0 is 12 bits per pixel, 4:2:2 is 16, 4:4:4 is 24 etc
[18:03:19 CEST] <kepstin> Numline1: for this application, you'd request ffmpeg convert to a specific pixel format with a known bits/bytes per pixel
[18:03:37 CEST] <furq> well y4m signals that so you could just handle it in your code
[18:03:42 CEST] <irwiss> Mavrik: thanks i think this clears up my confusion
[18:03:44 CEST] <Numline1> Oh I already did that, yuv420p
[18:03:47 CEST] <Numline1> I'm a dummy :) Thanks!
[18:03:51 CEST] <Mavrik> There's a helper function somewhere
[18:03:58 CEST] <Mavrik> That gives you number of bytes per plane for pixel format
[18:04:00 CEST] <pink_mist> furq: but those are bits, not bytes ... and 12 bits is hard to fit along a byte boundary :P
[18:04:12 CEST] <furq> fortunately 4:2:0 is always mod2
[18:05:00 CEST] <Mavrik> Yeah, but YUV420P will be planar
[18:05:08 CEST] <Mavrik> So you'll have a plane that's width*height
[18:05:10 CEST] <Mavrik> and two planes that are half
[18:05:18 CEST] Action: Numline1 is confused
[18:05:41 CEST] <Numline1> "two planes that are half"
[18:05:49 CEST] <Numline1> ChanServ: NSA wants to know your location
[18:06:16 CEST] <furq> Numline1: if you're just handing the frame off to something else, it's not something you need to care about
[18:07:10 CEST] <Numline1> furq oh, I'm actually trying to split that YUV4MPEG2 input into separate frames
[18:07:19 CEST] <Mavrik> That's fine then
[18:07:24 CEST] <Mavrik> I mean, everything will be in AVFrame struct
[18:07:27 CEST] <Mavrik> And you don't need to care
[18:07:29 CEST] <Mavrik> :)
[18:07:41 CEST] <Numline1> oh is it in a separate struct? I didn't see that in the binary blob
[18:07:44 CEST] <Numline1> I've seen the header
[18:07:51 CEST] <Numline1> I thought I had to do the math myself
[18:07:54 CEST] <furq> Mavrik: this is ffmpeg.c's y4m output being read into something else
[18:08:19 CEST] <Numline1> oh, yeah, that's true. AVFrame is a ffmpeg struct
[18:08:25 CEST] <Numline1> I just have the raw output
[18:09:24 CEST] <Numline1> basically just to sum up, all I need is to split the output (input in my app) into a bunch of images. Since it's 12 bits, I'm not entirely sure I'll be able to find the number of bytes per frame
[18:09:37 CEST] <furq> Numline1: like i said, a 4:2:0 frame is always mod2
[18:09:38 CEST] <Numline1> this is all new to me, so the answer might be obvious though
[18:09:45 CEST] <furq> so it'll always be some multiple of 48 bits
[18:10:39 CEST] <Numline1> thinking emoji
[18:10:51 CEST] <furq> which is to say width * height * 1.5 will always be an integer
[18:11:03 CEST] <Mavrik> Numline1: YUV420 frames are funny because as opposed to more usual RGB images, they have one channel that's full sized (Y) and two channels that are half sized (UV)
[18:11:17 CEST] <Mavrik> so the size of each frame is width * height * 1.5 :)
[18:11:31 CEST] <kepstin> actually quarter size - U and V are both 1/2 height and 1/2 width
[18:11:36 CEST] <furq> half sized in both dimensions yeah
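
To make that arithmetic concrete with the 1280x720 y4m stream that appears later in this log: the Y plane is 1280 * 720 = 921,600 bytes, and the Cb and Cr planes are each 640 * 360 = 230,400 bytes, so one yuv420p frame is 921,600 + 2 * 230,400 = 1,382,400 bytes, i.e. exactly width * height * 1.5.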
[18:11:54 CEST] <Numline1> y u do this frames
[18:12:21 CEST] <Numline1> Mavrik I understand what you're saying, I just can't imagine how that's actually looking in the end (the frame)
[18:12:25 CEST] <Numline1> isn't it stretched?
[18:12:33 CEST] <furq> the U and V planes are chroma
[18:12:45 CEST] <furq> so you have full-size luma and then the chroma planes are resized to match on playback
[18:12:56 CEST] <furq> like i said, 99% of video uses 4:2:0
[18:13:00 CEST] <kepstin> Y is encoded at full resolution, U and V are quarter resolution because they encode information the eye is less sensitive to
[18:13:09 CEST] <furq> so you've seen it a million times and never noticed
[18:13:55 CEST] <Numline1> that's interesting and a bit funky :)
[18:14:03 CEST] <furq> unless you've seen youtube videos of screen captures or coloured text overlays and wondered why they look so bad
[18:14:04 CEST] <JEEB> yes, all players and things convert YCbCr to RGB
[18:14:05 CEST] <Numline1> I always thought it's just a series of x y images
[18:14:22 CEST] <JEEB> yes, it's three "images" per image
[18:14:27 CEST] <JEEB> Y is the "grayscale" image
[18:14:33 CEST] <JEEB> then Cb is chroma-blue
[18:14:36 CEST] <JEEB> and Cr is chroma-red
[18:14:49 CEST] <JEEB> and the chroma planes are "one sample for each 2x2 block"
[18:14:51 CEST] <JEEB> that is 4:2:0
[18:15:02 CEST] <JEEB> full resolution chroma and luma is 4:4:4
[18:15:16 CEST] <Numline1> ahhh
[18:15:23 CEST] <Numline1> well, I know two things
[18:15:25 CEST] <JEEB> so when a player plays video, it receives 4:2:0 YCbCr from decoder, then scales the chroma to full resolution
[18:15:31 CEST] <Numline1> 1) It's more complicated than I thought
[18:15:34 CEST] <JEEB> and then converts to RGB
[18:15:35 CEST] <Numline1> 2) Jesus christ why
[18:15:36 CEST] <JEEB> so it can be shown
[18:15:51 CEST] <kepstin> Numline1: why? because video is huge and this lets it be smaller
[18:15:54 CEST] <JEEB> there's a lot of details in this too
[18:16:07 CEST] <furq> the historical motivation is that the luma plane in YUV is the same thing that black-and-white tv transmissions used
[18:16:12 CEST] <JEEB> like, the chroma samples aren't actually in the middle of that 2x2 block
[18:16:15 CEST] <Mavrik> Numline1: if you want pictures: https://en.wikipedia.org/wiki/Chroma_subsampling#Sampling_systems_and_ratios :)
[18:16:20 CEST] <furq> so YUV itself is for backward compatibility, and then the small chroma planes are to save bandwidth
[18:16:24 CEST] <JEEB> the MPEG-2 Video and H.264 default is "top left" if I recall correctly
[18:16:40 CEST] <JEEB> so when you pull the chroma to full resolution, you will have to scale correctly
[18:16:41 CEST] <Numline1> Mavrik thank you, that makes it more clear :)
[18:16:42 CEST] <JEEB> \o/
[18:16:56 CEST] <furq> and yes of course we're still using a format designed for backward compatibility with the 1950s
[18:16:59 CEST] <furq> why wouldn't we be
[18:17:03 CEST] <Mavrik> And https://stackoverflow.com/questions/27822017/planar-yuv420-data-layout :)
[18:17:06 CEST] <Numline1> wow, I never knew this stuff is so complex
[18:17:26 CEST] <Numline1> btw I've noticed the output I generated actually has one frame per line
[18:17:37 CEST] <Numline1> If I'm reading that correctly
[18:18:05 CEST] <Numline1> which is a lie because I'm not, there's 3 frames on 2 lines. I thought I found a shortcut
[18:18:30 CEST] <Numline1> It seems to be delimited by something like "¬¬¬¬¬FRAME"
[18:18:36 CEST] <furq> the frame always ends with a newline
[18:18:49 CEST] <furq> but there could obviously be an 0x0A in the actual frame data
[18:19:03 CEST] <kepstin> the newline byte is valid in the data, yeah, so you can't rely on it as a delimiter
[18:19:03 CEST] <Numline1> who's that?
[18:19:11 CEST] <furq> 0x0A is \n
[18:19:17 CEST] <Numline1> oh, gotcha
[18:19:17 CEST] <kepstin> don't use a text parser for binary data :)
[18:19:27 CEST] <JEEB> thankfully the pixel format should tell you the frame sizes :P
[18:19:33 CEST] <furq> yeah it's easy to calculate the dimensions
[18:19:37 CEST] <Numline1> Hah, I just wanted to see the header before I do any decoding :)
[18:19:53 CEST] <furq> you really don't need to care about how the data is packed, you just need to know the bits per pixel and dimensions
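
A rough Go sketch of that y4m splitting logic: parse W and H from the stream header, compute the payload size (assuming a 4:2:0 format at 1.5 bytes per pixel), then alternate "FRAME" marker lines with fixed-size reads, which is why stray 0x0A bytes inside the pixel data never need to be interpreted.

    package main

    import (
        "bufio"
        "fmt"
        "io"
        "os"
        "strconv"
        "strings"
    )

    func main() {
        r := bufio.NewReader(os.Stdin)
        header, err := r.ReadString('\n') // e.g. "YUV4MPEG2 W1280 H720 F30:1 ..."
        if err != nil {
            fmt.Fprintln(os.Stderr, "no y4m header:", err)
            os.Exit(1)
        }
        var w, h int
        for _, tok := range strings.Fields(header) {
            switch {
            case strings.HasPrefix(tok, "W"):
                w, _ = strconv.Atoi(tok[1:])
            case strings.HasPrefix(tok, "H"):
                h, _ = strconv.Atoi(tok[1:])
            }
        }
        frameSize := w * h * 3 / 2 // yuv420p: 1.5 bytes per pixel
        buf := make([]byte, frameSize)
        frames := 0
        for {
            // Every frame is introduced by its own "FRAME...\n" marker line.
            if _, err := r.ReadString('\n'); err != nil {
                break
            }
            if _, err := io.ReadFull(r, buf); err != nil {
                break
            }
            frames++
        }
        fmt.Fprintf(os.Stderr, "parsed %d frames of %dx%d\n", frames, w, h)
    }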
[18:20:05 CEST] <Numline1> It's just a bit weird. I should have at least <amount_of_frames> newlines
[18:20:06 CEST] <Numline1> or more
[18:20:09 CEST] <Numline1> I have one less, which is weird
[18:20:54 CEST] <Numline1> basically ffmpeg said it parsed 3 thumbnails, the file has 4 lines (header, "FRAMES" and then two actual lines). But I guess it's just Sublime being funky, I'll try to actually parse it in code :)
[18:21:31 CEST] <JEEB> I recommend hex editors :P
[18:21:36 CEST] <JEEB> hxd on windows
[18:21:40 CEST] <JEEB> and I think I use bless on linux
[18:21:47 CEST] Action: Numline1 uses macOS
[18:21:50 CEST] <Numline1> :P
[18:21:55 CEST] <JEEB> then whatever is sane for that
[18:22:05 CEST] <JEEB> text editor usually isn't nice for this sort of stuff :P
[18:23:05 CEST] <Numline1> yeah, poor VisualStudio Code is just displaying empty space :P
[18:23:11 CEST] <Numline1> I'll try to find something
[18:23:35 CEST] <JEEB> https://github.com/ridiculousfish/HexFiend
[18:23:37 CEST] <JEEB> something like this?
[18:24:44 CEST] <Numline1> yeah I found something similar :) Basically looks like this - https://numshare.s3-eu-west-2.amazonaws.com/Screen-Shot-2019-04-24-18-24-25-1556123065.jpg
[18:25:01 CEST] <Numline1> you guys can possibly reconstruct part of my first frame now #securityleak
[18:49:40 CEST] <Numline1> btw I've just noticed, ppm might be used as well :) Is that better/worse in some way?
[18:53:11 CEST] <Numline1> although it probably just saved the latest image, so that's irrelevant :)
[19:09:47 CEST] <emsjessec> why does ffmpeg get stuck sometimes and I have to ctrl+c it?
[19:09:57 CEST] <emsjessec> ffmpeg -y -nostdin -i (input file) -filter:a loudnorm (output file) 2>&1
[19:10:17 CEST] <emsjessec> it's running from PHP's exec function and there's no input being sent to STDIN
[19:12:41 CEST] <durandal_1707> emsjessec: is it using CPU?
[19:15:47 CEST] <ChocolateArmpits> emsjessec, where did you get the binary
[19:16:48 CEST] <ChocolateArmpits> Also for loudnorm you may want to downsample to 48kHz because it always outputs an upsampled stream
[19:17:22 CEST] <ChocolateArmpits> It does that to increase measurement accuracy
[19:39:55 CEST] <emsjessec> ok
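
Applying ChocolateArmpits's suggestion of resampling back to 48 kHz after loudnorm would look something like this (an untested sketch; the file names are placeholders):

    ffmpeg -y -nostdin -i input.wav -filter:a loudnorm -ar 48000 output.wav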
[20:27:33 CEST] <DHE> is there documentation about what mpegtsraw does differently from the regular mpegts demuxer?
[21:16:18 CEST] <zerodefect> When using the C-API and setting up the x264 encoder, is it possible to somehow log the low-level settings (x264_params) after the encoder is configured?
[21:19:48 CEST] <kepstin> zerodefect: the x264 encoder itself already logs (as a text log message) its internal settings when it starts up
[21:20:08 CEST] <kepstin> other than that, the answer is "anything is possible if you code it" :/
[21:20:31 CEST] <JEEB> after you have applied the AVOptions they get eaten from the list
[21:20:53 CEST] <JEEB> so log them before you apply them to libx264's AVCodecContext
[21:21:01 CEST] <JEEB> if you want to log them
[21:21:16 CEST] <JEEB> and then you'll have to log all of the AVCodecContext parameters you are interested in
[21:24:19 CEST] <zerodefect> Ok. Nothing elegant. The reason I ask is that I'm trying to set up a CBR stream, but the bitrate is fluctuating to above 4.2 Mbit/s and the receiving HW IRD is very unhappy about the video bitrate. I've set nal-hrd/vbv-bufsize, but I clearly need to tweak something else. I was wanting to drill down into x264_params a bit more :/
[21:25:37 CEST] <JEEB> make sure it's complaining about the video stream
[21:25:44 CEST] <JEEB> also you need to set both maxrate and bufsize
[21:26:20 CEST] <JEEB> nal-hrd has to be set to either vbr or cbr for getting the HRD headers written into the video stream regarding the buffering parameters
[21:26:52 CEST] <JEEB> often when people have things receiving complaining it's more often about the muxer/transmission
[21:27:00 CEST] <JEEB> rather than actual video encoding rates
[21:27:48 CEST] <JEEB> make sure you're using muxrate with MPEG-TS if you're using that, and if my development VM would boot at least once a day proper I would be able to tell you the bit rate option for the UDP protocol :P
[21:28:05 CEST] <zerodefect> Ok. What is a sensible vbv-bufsize? Bitrate / fps?
[21:28:19 CEST] <JEEB> that 100% depends on your requirements
[21:28:43 CEST] <zerodefect> Do you know if this is outlined in any of the DVB docs?
[21:29:02 CEST] <JEEB> I think DVB specs only talk about the MPEG-TS level mux
[21:29:13 CEST] <JEEB> but feel free to check, I haven't checked too hard :P
[21:29:28 CEST] <JEEB> most encoders out there don't let you tweak this stuff too much if I understand it correctly
[21:29:51 CEST] <JEEB> usually it's (maxrate * seconds of initial buffer)
[21:29:53 CEST] <JEEB> kind of thing
[21:29:54 CEST] <zerodefect> Yes, that is what I've seen thus far. It feels a bit more like an art to find the best settings to keep these HW receivers happy.
[21:32:33 CEST] <Mavrik> There's of course also an option in x264 to stuff the stream with null packets :)
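
For reference, a command line along the lines JEEB describes might look something like this (a sketch; the rates and the UDP address are placeholders that would need tuning for the actual link):

    ffmpeg -i input -c:v libx264 -b:v 4M -maxrate 4M -bufsize 4M -x264-params nal-hrd=cbr -f mpegts -muxrate 5M udp://192.0.2.1:1234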
[23:47:03 CEST] <GuiToris> hey, I have a problem. I created some lossless video files with ffmpeg and imported them into Premiere and they look like this : http://ix.io/1H7N
[23:47:33 CEST] <GuiToris> -c:v libx264 -preset veryslow -crf 0
[23:47:39 CEST] <GuiToris> that's what I used
[23:48:28 CEST] <GuiToris> if I reencode the video with ffmpeg -i lossless.mp4 new.mp4 this color artifact disappears
[23:48:34 CEST] <GuiToris> what should I do?
[23:48:37 CEST] <furq> that link doesn't work
[23:48:54 CEST] <GuiToris> furq, weird it works here :S
[23:49:03 CEST] <GuiToris> can't you download it with wget?
[23:49:23 CEST] <GuiToris> wget 'http://ix.io/1H7N'
[23:50:29 CEST] <GuiToris> it also works here but you'll lose the extension (it's jpg)
[23:50:38 CEST] <GuiToris> could you open it?
[23:51:48 CEST] <another> not sure if premiere supports the lossless profile
[23:52:37 CEST] <GuiToris> I could edit and play back them but they are all ugly
[23:52:44 CEST] <GuiToris> isn't it just a color profile or something?
[23:53:21 CEST] <GuiToris> I would like to keep the good quality, but right now even ffmpeg -i lossless lossy is way better
[23:55:01 CEST] <GuiToris> isn't premiere the leading video editing software? that would be a shame if it didn't support lossless files
[23:57:10 CEST] <GuiToris> My format profile is 'High 4:4:4 Predictive at L5.1'
[23:57:31 CEST] <GuiToris> CABAC / 16 Ref Frames
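
If Premiere cannot decode the High 4:4:4 Predictive profile that x264's lossless mode produces (the profile GuiToris reports above), a commonly suggested workaround (untested here, and no longer mathematically lossless) is a near-lossless encode in a widely supported profile:

    ffmpeg -i lossless.mp4 -c:v libx264 -preset veryslow -crf 1 -pix_fmt yuv420p compatible.mp4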
[00:00:00 CEST] --- Thu Apr 25 2019

