[Ffmpeg-devel-irc] ffmpeg.log.20190220

burek burek021 at gmail.com
Thu Feb 21 03:05:01 EET 2019


[03:11:24 CET] <giaco> kepstin: I've tried "loudnorm=I=-18,aresample=48000" but ffmpeg crashed complaining about missing channel data, so I fixed it with "loudnorm=I=-18,aresample=48000,aformat=channel_layouts=mono". The output is audible but wrong: the speed seems wrong and some parts seem to overlap. I've also tried, just as a test, just "loudnorm=I=-18" but again the output feels wrong. Removing the filter, the stream sounds
[03:11:26 CET] <giaco> right (except volume, obviously)
[03:12:24 CET] <giaco> sorry for the messed-up English, I surely need some sleep
[03:13:06 CET] <kepstin> hmm. your input is mono, right?
[03:13:16 CET] <kepstin> i wonder if the loudnorm filter is just buggy with mono input
[03:14:16 CET] <kepstin> you can try setting the "dual_mono" option on loudnorm to "true", maybe that will help?
[03:16:55 CET] <giaco> kepstin: correct, input is mono
[03:19:12 CET] <giaco> kepstin: I don't know how to chain internal filter options. "loudnorm=I=-18,aresample=48000" > "loudnorm=I=-18,dual_mono=true,aresample=48000" ?
[03:27:17 CET] <giaco> got it
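The syntax giaco worked out, spelled out for reference: options within a single filter are joined with `:`, while successive filters in the chain are joined with `,`. A sketch of the resulting command (file names are placeholders):

```shell
# ':' separates options inside one filter; ',' chains filters together.
# input.wav / output.wav are placeholder names.
ffmpeg -i input.wav \
  -af "loudnorm=I=-18:dual_mono=true,aresample=48000,aformat=channel_layouts=mono" \
  output.wav
```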
[03:38:34 CET] <giaco> but same error "Cannot select channel layout for the link between filters Parsed_aresample_1 and format_out_0_0."
[03:38:38 CET] <zap0> hi.. anyone know what it means when a YUV image appears very pink and green
[03:39:01 CET] <giaco> I think I have to add same "aformat=channel_layouts=mono"
[03:39:08 CET] <giaco> or maybe "aformat=channel_layouts=stereo" now
[03:41:53 CET] <giaco> kepstin: no, audio chunks are overlapped even with "-af loudnorm=I=-18:dual_mono=true,aresample=48000,aformat=channel_layouts=stereo"
[03:45:57 CET] <kepstin> zap0: can't really say without more info, but some common mistakes are that you got the U and V (well, Cr and Cb) planes reversed, or you're interpreting the YUV colours as if they were RGB
[03:46:31 CET] <kepstin> giaco: what ffmpeg version?
[03:47:04 CET] <giaco> same problem even with "-af loudnorm=I=-18:dual_mono=true,aresample=48000,aformat=channel_layouts=mono" chunks are overlapped, I can hear my voice twice
[03:47:34 CET] <zap0> i have written some unit tests for my routines and they appear to be correct.    i also tried swapping the U and V channels, but i still get very green and pink images.
[03:47:36 CET] <giaco> kepstin: ffmpeg version 4.1-static
[03:47:45 CET] <zap0> it looks like this type of mistake  https://stackoverflow.com/questions/33542708/camera2-api-convert-yuv420-to-rgb-green-out/33543885
[03:48:55 CET] <kepstin> right, so you have some mistake in how you're interpreting the image buffer. what pixel format is it in?
[03:49:29 CET] <kepstin> you might want to just use libswscale to convert to rgb or whatever rather than attempting to do it yourself.
[03:49:39 CET] <zap0> YUY2 (i think)
[03:50:06 CET] <zap0> which is a 4:2:2
[03:50:56 CET] <zap0> the pixels are in the right spot, i can clearly make out the video footage.. so the geometry is not wrong.  it's the colours
[03:51:00 CET] <kepstin> fun, that's a packed format
[03:51:07 CET] <giaco> I think that the problem could be elsewhere. Maybe the transform/filter speed is not fast enough to process all the data and chunks are lost
[03:51:31 CET] <kepstin> giaco: ah, that could be if you have a realtime source. the loudnorm filter is kind of slow
[03:51:45 CET] <kepstin> giaco: try that on a local file and see if it runs at >1.0x speed
[03:52:18 CET] <giaco> kepstin: without filter I have speed= 1.1x, with filter is 0.32x! :(
[03:52:39 CET] <kepstin> giaco: wow, what kind of processor is this? :/
[03:53:42 CET] <kepstin> zap0: can you give me some more context on what you're trying to do? is the YUY2 data input? output? how is ffmpeg involved?
[03:54:01 CET] <giaco> kepstin: shitty android phone. I am testing on a low end device (min target)
[03:54:31 CET] <kepstin> ah, slow arm device. yeah, i wouldn't even bother with doing any live stuff on something like that
[03:54:46 CET] <kepstin> giaco: just give up on the volume normalization, not worth it on a device like that
[03:54:58 CET] <giaco> kepstin: I should have thought about that much earlier, but the android profiler was misleading, it shows no more than 10% cpu occupation during the transform
[03:56:16 CET] <kepstin> zap0: note that ffmpeg's name for the YUY2 pixel format is "yuyv422"
[03:56:40 CET] <giaco> kepstin: I will fall back to user-selected amplification
[03:56:55 CET] <giaco> and save the loudnorm for high-end devices
[03:57:15 CET] <zap0> might have figured something out...  back in 5
[03:57:19 CET] <giaco> no info has been lost. Thanks for being patient
[03:58:01 CET] <kepstin> giaco: honestly for devices like that, you'd really want to be doing all the transcoding work on the server you're streaming media to, rather than the device itself
[03:58:36 CET] <kepstin> but hmm. is this targeting a chromecast-on-LAN use case, or something like that?
[03:59:15 CET] <kepstin> for that type of thing, yeah, I guess just having a manually controlled volume is what you can do.
[03:59:41 CET] <zap0> kepstin: endian issue!   thanks.  i was interpreting YUYV as UYVY  fixed!
[04:00:09 CET] <kepstin> zap0: yeah, that would do it - treating the colour as lightness and vice-versa :)
[04:00:21 CET] <zap0> oops, i meant YUYV  VYUY
[04:00:32 CET] <zap0> much thanking you!
[04:01:49 CET] <zap0> now i need to learn  YV12 and V210
[04:04:03 CET] <kepstin> yv12 is easy enough, it's just yuv420p with the u and v swapped :)
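The kind of byte-order mix-up being debugged here is easy to reproduce without ffmpeg; a minimal Python sketch of the two packed 4:2:2 layouts, showing how reading YUYV data in UYVY order swaps luma and chroma:

```python
def unpack_yuyv(buf):
    """Unpack packed 4:2:2 bytes in Y0 U Y1 V order (YUY2 / ffmpeg's yuyv422).

    Returns one (Y, U, V) triple per pixel; each U/V pair is shared by
    two horizontally adjacent pixels.
    """
    pixels = []
    for i in range(0, len(buf), 4):
        y0, u, y1, v = buf[i:i + 4]
        pixels.append((y0, u, v))
        pixels.append((y1, u, v))
    return pixels


def unpack_uyvy(buf):
    """Read the same data in U Y0 V Y1 order (UYVY).

    Feeding YUYV bytes through this treats luma as chroma and vice versa,
    which gives exactly the green/pink look discussed above.
    """
    pixels = []
    for i in range(0, len(buf), 4):
        u, y0, v, y1 = buf[i:i + 4]
        pixels.append((y0, u, v))
        pixels.append((y1, u, v))
    return pixels
```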
[04:04:56 CET] <zap0> i hope it's that easy!  surely there is more to it than that
[04:16:04 CET] <zap0> https://wiki.multimedia.cx/index.php/V210     this looks to be a bit tricky.
[04:27:27 CET] <zap0> most of the YUV integer code i have is all 8bit.  V210 is 10bit.
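Going by the layout on that wiki page, v210 packs three 10-bit components into each little-endian 32-bit word (bits 0-9, 10-19, 20-29, with bits 30-31 unused); a small decoding sketch:

```python
import struct

def unpack_v210_word(word32):
    """Split one 32-bit v210 word into its three 10-bit components,
    lowest bits first (the top two bits are padding)."""
    return (word32 & 0x3FF,
            (word32 >> 10) & 0x3FF,
            (word32 >> 20) & 0x3FF)

def unpack_v210_words(data):
    """Decode a byte string of little-endian v210 words into 10-bit values."""
    out = []
    for (word,) in struct.iter_unpack('<I', data):
        out.extend(unpack_v210_word(word))
    return out
```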
[04:31:20 CET] <kepstin> ... so why aren't you just using libswscale to convert these?
[04:33:17 CET] <kepstin> or checking out something like mpv, which has gpu (opengl) versions of these conversions?
[10:17:50 CET] <TyrfingMjolnir> How can I use ffmpeg to make a new video file based on a time range within an existing file?
[12:53:04 CET] <khaleb> Hello here
[13:00:32 CET] <w1kl4s> general kenobi!
[13:02:32 CET] <khaleb> i'm a newbie
[13:03:30 CET] <khaleb> i have a problem with streaming on windows
[13:03:48 CET] <khaleb> can someone help me please?
[13:04:45 CET] <norbert> khaleb: as the topic of this chat channel states, "When asking a question be precise, detailed and patient."
[13:04:55 CET] <norbert> "a problem with streaming" is pretty vague
[13:05:34 CET] <norbert> could be a million different things, maybe you have your monitor turned off so you cannot see the stream; maybe turn your monitor on
[13:19:05 CET] <khaleb> Yes, of course, sorry.
[13:21:00 CET] <khaleb> I have a question first: is it possible to re-stream an RTMP stream to UDP output using ffmpeg?
[13:51:49 CET] <norbert> at http://wiki.webmproject.org/ffmpeg/vp9-encoding-guide is that correct, /dev/null
[13:51:58 CET] <norbert> or should that be > /dev/null, with the ">"
[13:52:10 CET] <norbert> because when I run the command, ffmpeg asks if I want to overwrite /dev/null...
[13:52:53 CET] <furq> you can ignore that warning, but you can probably just use -f null -
[13:53:08 CET] <furq> unless webm encapsulation is somehow critical for the pass file, which it probably isn't but idk
[13:53:57 CET] <furq> also that guide predates -row-mt which is the recommended multithreading method now
[13:55:02 CET] <norbert> ok
[14:02:41 CET] <norbert> how necessary/useful is the first pass for a website similar to YouTube?
[14:02:58 CET] <norbert> because that first pass takes time, I wonder if it's worth the processing time
[14:07:24 CET] <norbert> hm, actually, I see the second pass takes longer, so maybe a more pressing question is, how would I prevent the transcode queue from becoming larger and larger; if one user uploads 10 720p videos, the next user may already have to wait 60+ minutes
[14:08:07 CET] <norbert> in theory I could run ffmpeg in parallel, but it'd still only have the max CPU cores working on it
[14:09:08 CET] <norbert> and my plan is to encode each upload to 360, 480, 720 and 1080p
[14:09:35 CET] <norbert> to all switching quality (for mobile, desktop, different network speeds, etc)
[14:09:39 CET] <norbert> *allow
[14:10:43 CET] <norbert> I mean, damn, the second pass is still going, and it's only at 01:43
[14:10:58 CET] <norbert> and that's just the 720p at 15fps
[14:11:45 CET] <norbert> I guess this answers what I was wondering about why many existing video CMS'es don't transcode
[14:27:13 CET] <kepstin> youtube can do it because they have ridiculous amounts of compute power
[14:27:34 CET] <kepstin> also they usually don't have the vp9 available immediately, it takes longer to get ready than other formats
[14:28:47 CET] <norbert> when I'm running the first pass with "Constrained Quality Recommended Settings" I see that ffmpeg is using only ~125% of my CPU, is there any way to increase this?
[14:29:31 CET] <norbert> because I have more compute power :)
[14:30:25 CET] <norbert> my plan is to make the 360p available first, then work on the rest after that
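The "lowest rung first" scheduling norbert describes can be sketched with a plain priority queue (the rung list and function name here are illustrative, not from any real CMS):

```python
import heapq

# Illustrative quality ladder: every upload's 360p rendition is produced
# before any higher-resolution work starts.
RUNGS = [360, 480, 720, 1080]

def plan_jobs(uploads):
    """Order (upload, height) jobs so all 360p renditions come first;
    within a rung, earlier uploads go first."""
    heap = []
    for order, name in enumerate(uploads):
        for rung in RUNGS:
            # Sort primarily by rung height, then by upload order.
            heapq.heappush(heap, (rung, order, name))
    jobs = []
    while heap:
        rung, order, name = heapq.heappop(heap)
        jobs.append((name, rung))
    return jobs
```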
[14:46:38 CET] <norbert> hm, encoding still takes about a second for each in-video second, even if I encode to 640x360; this means if someone uploads 5 videos of an hour, the next user has to wait at least 5 hours - and that's if I only encode to 360p
[14:46:50 CET] <norbert> there has to be a way to make this much faster
[14:46:57 CET] <furq> libvpx is really slow
[14:47:17 CET] <norbert> is there a good alternative I could use for a video CMS?
[14:47:33 CET] <furq> not if you think you'd have to pay to use h264
[14:48:16 CET] <furq> vpx is generally terrible at multithreading, so youtube just splits videos into multiple segments and then distributes the encoding
[14:48:18 CET] <norbert> would paying for h264 be on a per video basis?
[14:48:58 CET] <norbert> that's interesting, splitting the video; would become a bit too technical for my okay-but-not-awesome skills
[14:49:48 CET] <furq> i've never bought an h264 license but iirc it's free if you're not making money from the videos
[14:50:09 CET] <furq> and it's also free if you're not in a jurisdiction where software patents exist, e.g. the EU
[14:50:28 CET] <furq> at least in theory
[14:50:51 CET] <DHE> which is interesting because there are ads on youtube...
[14:50:52 CET] <furq> i phrased that confusingly. the eu is a jurisdiction where software patents don't exist
[14:52:09 CET] <norbert> I'm in the EU and won't be making money from the video, so I guess h264 would be an option
[14:52:45 CET] <furq> if you don't think you need a license then x264 is generally much nicer to use
[14:53:17 CET] <norbert> ok; I think Google uses VP9 with Opus for YouTube though?
[14:53:28 CET] <furq> alongside h264 and aac, yeah
[14:53:49 CET] <DHE> for high resolution and quality, yes. vp9 is superior to h264 so there's an incentive to serve it
[14:54:05 CET] <furq> most youtube videos are only in h264
[14:54:13 CET] <furq> because most youtube videos have like 500 views and they never get upgraded
[14:55:01 CET] <furq> also vp9 is superior to h264 but libvpx is less superior to x264
[14:55:33 CET] <furq> certainly if you don't have google infra where you can just throw 100 cpus at encoding a single video at acceptable quality within 24 hours
[14:59:11 CET] <norbert> 24 minutes, I think you meant
[14:59:14 CET] <norbert> libx264 is definitely much faster (just tried)
[14:59:31 CET] <furq> nah i meant hours
[14:59:41 CET] <furq> even for fairly popular videos there's a huge lag time for the vp9 version to show up
[14:59:49 CET] <norbert> ok
[15:00:35 CET] <DHE> when I upload videos at 70+ megabits processing times get really long. google really takes every trick they can to encode quickly and uploading a video fast steals one of their tricks away from them...
[15:01:21 CET] <norbert> because they can't start processing while upload is still in progress, you mean?
[15:01:40 CET] <norbert> furq: which audio could you suggest I use with -f mp4?
[15:01:43 CET] <furq> aac
[15:01:47 CET] <norbert> ok
[15:02:00 CET] <furq> you would need to have an h264/aac fallback anyway for apple devices
[15:02:04 CET] <furq> which refuse to support anything google made
[15:02:21 CET] <furq> but everything else supports h264/aac in mp4 so you might as well just use that
[15:03:03 CET] <norbert> libx264 (-f mp4) is h264?
[15:03:11 CET] <DHE> norbert: because they start transcoding immediately. if I upload the same video at 7 megabits, they have 10x the amount of time to do transcoding during the upload which potentially buys them a lot of time to "make it available as soon as possible"
[15:03:23 CET] <norbert> DHE: right
[15:03:44 CET] <DHE> in ffmpeg you can specify which encoder to use, not just the target codec family. libx264 is a software encoder. but there's also options like quicksync, nvidia's GPU encoders, etc. in this case you're explicitly selecting software encoding
[15:10:01 CET] <norbert> I have to say, "ultrafast will save 55% at the expense of much lower quality", that much lower quality still looks good to me
[15:12:46 CET] <norbert> although I guess I can't ignore how large the files will be
[15:13:57 CET] <norbert> hm, I feel a bit like I'm reinventing the wheel; you'd think someone would have written down what I'm trying to find
[15:14:49 CET] <DHE> the presets don't directly affect file size. file size can't be predicted unless you use CBR or VBV bitrate mode
[15:15:16 CET] <DHE> in theory qp or crf mode will produce larger file sizes at faster presets, but it's not quite as simple as that either
[15:32:50 CET] <norbert> furq: how does this look? ffmpeg -i <source> -c:v libx264 -crf 28 -preset veryslow -tune fastdecode -profile:v baseline -level 3.0 -movflags +faststart -c:a aac -ac 2 -ar 44100 -ab 64k -threads 0 -profile:v high -level 4.2 -movflags +faststart -tune zerolatency -f mp4 -vf scale=640:360 output.mp4
[15:34:09 CET] <norbert> encodes at an acceptable speed, should be iPhone compatible; then if I see the queue has no more 640x360 to do I can move on to higher qualities, checking each time whether there's a 360p still to do first
[15:34:11 CET] <DHE> you have -tune and -profile:v set twice
[15:34:32 CET] <norbert> oh yeah
[15:35:16 CET] <furq> you don't need baseline for iphone compat and it'll hurt the quality a lot
[15:35:46 CET] <norbert> yes, I meant to use just:  -profile:v high -level 4.2
[15:35:47 CET] <furq> and you should absolutely not use zerolatency
[15:35:53 CET] <DHE> baseline hasn't been needed for iphone compat since, what, iphone 4?
[15:35:57 CET] <norbert> for iPad Air and iPhone 5s +newer
[15:36:36 CET] <furq> also you probably want to actually set a scaler or else it'll use bicubic
[15:36:45 CET] <furq> e.g. -vf scale=640:360:flags=lanczos
[15:37:16 CET] <norbert> ok, thanks
[15:40:23 CET] <norbert> I also had "-movflags +faststart" twice :P
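With the duplicate flags removed and the suggestions above folded in, the command might end up roughly like this (untested sketch; input/output names are placeholders, and `-b:a` is the current spelling of `-ab`):

```shell
ffmpeg -i input.mp4 \
  -c:v libx264 -crf 28 -preset veryslow -tune fastdecode \
  -profile:v high -level 4.2 \
  -vf scale=640:360:flags=lanczos \
  -c:a aac -ac 2 -ar 44100 -b:a 64k \
  -movflags +faststart -f mp4 output.mp4
```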
[15:53:55 CET] <norbert> thanks again furq, DHE; I'm happy with the speed and result that I managed to find
[15:54:11 CET] <norbert> now on to coding the rest of the website :)
[16:41:03 CET] <Ducky^> I have two video outputs at 800x600 each on Ubuntu
[16:41:10 CET] <Ducky^> I'd like to capture both of them side by side with ffmpeg
[16:41:29 CET] <Ducky^> ffmpeg -f x11grab -y -r 60 -s 1600x600 -i :0.0 -minrate 4000k -maxrate 4000k -vcodec h264 out.mkv
[16:41:41 CET] <Ducky^> this command only captures the left screen and not the right (it's blank)
[16:41:49 CET] <Ducky^> can I capture both?
[16:45:18 CET] <DHE> does the $DISPLAY variable of both screens match? if not, no that wouldn't work
[16:46:11 CET] <Ducky^> they should both be on the same display
[16:46:50 CET] <Ducky^> I configured the resolution with xrandr and display as :0 for both screens, so they must be
[16:53:19 CET] <norbert> Ducky^: maybe use kdenlive instead?
[16:53:24 CET] <DHE> right, but it's possible to have :0.0 and :0.1 on the same X server
[16:54:25 CET] <Ducky^> hmm, I had a try opening :0.1 and get -f x11grab -s "4480x1440" -r "30" -i :0.0 \
[16:54:28 CET] <Ducky^>   -vcodec libx264 -s "1280x720" -preset slow
[16:54:36 CET] <Ducky^> nope, not that
[16:54:39 CET] <Ducky^> Cannot open display :0.1, error 6.
[16:55:02 CET] <Ducky^> norbert: I was hoping for a command line way, I'm doing it with ssh
[16:57:12 CET] <relaxed> Ducky^: -s and -r are not input options, see ffmpeg-h demuxer=x11grab
[16:58:10 CET] <relaxed> Ducky^: er, ffmpeg -h demuxer=x11grab
[16:59:38 CET] <Ducky^> thanks relaxed
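What `ffmpeg -h demuxer=x11grab` points at: the grab geometry and rate are input (demuxer) options named `-video_size` and `-framerate`, and they go before `-i`. An untested sketch with the two-screen geometry from above:

```shell
# Input options must precede -i; :0.0 and the geometry are placeholders.
ffmpeg -f x11grab -framerate 30 -video_size 1600x600 -i :0.0 \
  -c:v libx264 -preset fast out.mkv
```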
[17:00:04 CET] <Ducky^> I'm having some luck using the inbuilt gnome screen recorder, it captures both screens
[17:00:06 CET] <Ducky^> so I'll use that
[17:06:05 CET] <ralfie> hey, I'm trying to overlay two videos, where the base video starts 5 seconds after the overlay video
[17:06:17 CET] <ralfie> but the overlay video freezes at frame 5s
[17:06:23 CET] <ralfie> https://pastebin.com/bjne2y1H
[17:07:11 CET] <ralfie> what i mean is, it shows the same frame from 0s to 5s (start of base video), then overlay proceeds as normal
[17:08:39 CET] <ralfie> any conventional approach to this? perhaps by adding 5 seconds of black frames to the start of the base video, if there is no other solution?
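ralfie's own idea of padding the base video's start is workable; on builds that include the `tpad` filter (an assumption here, it is a relatively recent addition), a sketch like this would generate the leading black instead of freezing the overlay:

```shell
# Pad the base video with 5 s of black so both inputs start at t=0,
# then overlay; file names are placeholders.
ffmpeg -i base.mp4 -i overlay.mp4 -filter_complex \
  "[0:v]tpad=start_duration=5:color=black[padded];[padded][1:v]overlay" \
  out.mp4
```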
[17:15:51 CET] <norbert> ralfie: if it doesn't need to be command-line/automated, you could use kdenlive
[17:16:06 CET] <norbert> overlaying videos sounds like a video editing thing
[17:17:04 CET] <ralfie> im looking for an automated solution, i'll have one base video and a lot of overlaying videos
[17:17:10 CET] <norbert> ok
[17:18:04 CET] <norbert> my notes about adding a watermark include: ffmpeg -sameq -i out.ogv -ss 0 -t 30 -s 240x160 -ab 128k -deinterlace -r 25 -vf "movie=240x160_25p.png [logo]; [in][logo] overlay=main_w-overlay_w:main_h-overlay_h [out]" output.avi
[17:18:11 CET] <ralfie> an overlaying video has 5 seconds of non-transparent content, then a chroma-keyed transition into the base video
[17:18:20 CET] <norbert> (no idea if that's useful for you though)
[17:18:29 CET] <ralfie> ooo
[17:19:42 CET] <ralfie> don't think it's useful to me, but thanks
[17:20:35 CET] <ralfie> i've tried splitting the overlay into a non-transparent and transparent part like so https://pastebin.com/j4bQhN5V
[17:21:45 CET] <ralfie> seemed to work fine video-wise, but the audio tracks would behave weirdly
[17:21:51 CET] <ralfie> couldn't stitch them back together exactly
[17:21:54 CET] <ralfie> also no idea how efficient this is
[17:33:46 CET] <ralfie> i guess i could fix that by adding the overlay as an input the 3rd time, and just using the audio from that, although that feels really hacky
[17:48:51 CET] <Water_27300935_> Hi, I installed ffmpeg on Fedora from source, but I can't find/run ffplay after the install finished.
[17:49:27 CET] <pink_mist> did you have the proper SDL libraries and their headers on your system where ffmpeg's configure could find them?
[17:49:38 CET] <Water_27300935_> ffmpeg and ffprobe work
[17:49:48 CET] <pink_mist> yes, but ffplay requires SDL
[17:50:06 CET] <Water_27300935_> Oh, I try later.
[17:50:16 CET] <Water_27300935_> :)
[19:05:26 CET] <Water_27300935_> I reinstalled ffmpeg after installing 'SDL' and 'SDL-devel', but ffplay still doesn't work (I can't find an 'ffplay' executable in the project folder)
[19:06:20 CET] <Water_27300935_> I used './configure', 'make', 'make install' to build ffmpeg; do I need some other option?
[19:07:21 CET] <DHE> is it specifically SDL version 2?
[19:08:00 CET] <Water_27300935_> Oh, installed SDL only, not SDL2.
[20:28:03 CET] <Water_27300935_> ffplay works after installing SDL2. Why isn't there some notice or error when building ffmpeg without SDL2?
[20:28:35 CET] <JEEB> did you specifically specify --enable-sdl?
[20:32:12 CET] <Water_27300935_> Oh,no.
[20:32:50 CET] <JEEB> the default in FFmpeg generally tends to be that you only get a configure failure without any options if you lack something that's considered "required" for the whole framework
[20:32:54 CET] <JEEB> otherwise then there's two lists
[20:33:05 CET] <JEEB> one where libraries that get autodetected and get enabled
[20:33:20 CET] <JEEB> and another where you have to specify the enable-xxx yourself
[20:33:29 CET] <JEEB> although I don't remember the details
[20:33:46 CET] <JEEB> now, if you set --enable-blah yourself (in various cases) it will fail hard if it can't be found
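A sketch of the rebuild once the SDL2 headers are in place (the Fedora package name is an assumption; as discussed above, sdl2 is normally autodetected, so no extra configure flag should be needed):

```shell
# Install SDL2 development headers, then reconfigure and rebuild.
sudo dnf install SDL2-devel
./configure        # check the configure summary now lists sdl2/ffplay
make && sudo make install
```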
[00:00:00 CET] --- Thu Feb 21 2019


More information about the Ffmpeg-devel-irc mailing list