[Ffmpeg-devel-irc] ffmpeg.log.20140411
burek
burek021 at gmail.com
Sat Apr 12 02:05:01 CEST 2014
[00:21] <karl_v> Hi, I am looking for a hint on how to convert a 3840x2160 YUV 4:2:0 8-bit-per-sample H.264 recording to a 1920x1080 YUV 4:4:4 10-bit-per-sample H.264 file. I have some experience with using ffmpeg, so scaling/encoding is the easy part for me, but I have not yet found the right options to convert to the right colorspace before scaling such that the binning of the pixels will actually retain maximum possible quality (combining 4 8-bit pixels of the
[00:21] <karl_v> original into 1 10-bit pixel in the output). Can somebody hint at the right options for doing so?
[00:28] <klaxa> you need to compile libx264 with 10 bit color depth
[00:28] <klaxa> that has to be enabled at compile time of libx264
[00:37] <karl_v> @klaxa: Yes, I understand that, and I did compile libx264 with 10 bit color depth. But I am wondering about two things: a) do I also need to tell ffmpeg's "configure" whether to use 8 or 10 bit when compiling ffmpeg? And b) how do I make sure that there is no unnecessary loss of information/quality when downscaling - as there is when the destination of the downscaling is also just yuv420?
[00:37] <klaxa> you don't have to tell ffmpeg to use 10-bit in libx264 if you compiled libx264 with 10 bit (as it can't encode to anything else anyway)
[00:38] <klaxa> as for the downscaling, you will obviously lose information and probably quality too, there is nothing you can do against that
[00:39] <klaxa> i don't know about colorspace conversion details
[00:40] <klaxa> i would just encode short clips and check the quality
[00:40] <klaxa> see the -t option for that
[00:43] <karl_v> Maybe I found what I was looking for: "-vf format=pix_fmts=yuv444p,scale=1920x1080" seems to do the trick. Will have to check for the resulting quality - I'll paint a test picture with thin lines of equal luminance but different colors.
[00:48] <karl_v> Hmmm... "-vf format=pix_fmts=yuv444p" does spare me losing color information, but I wonder whether ffmpeg actually computes with 10 bits per color channel internally, then. It kind of would make no sense to encode to a 10-bit-per-channel output format if the working data within ffmpeg was only 8 bits per channel, though...
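What karl_v is describing can be written as a single command. This is a sketch, assuming a 10-bit-capable libx264 build; the filenames, scaler flags, and -crf value are placeholders:

```shell
# Convert to a high-bit-depth 4:4:4 format *before* scaling, so the
# 2x2-to-1 pixel binning is computed at full precision, then hand the
# encoder a 10-bit pixel format.
ffmpeg -i input_2160p.mp4 \
       -vf "format=yuv444p16le,scale=1920:1080:flags=area,format=yuv444p10le" \
       -c:v libx264 -crf 18 output_1080p.mp4
```

Whether swscale really carries the intermediate samples at full precision here is exactly what karl_v's thin-lines test picture would verify.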
[01:38] <relevart> Hi, I'm trying to stream video from udp. But the function av_read_frame hangs every now and then, stalling the stream. please help!
[09:52] <termos> Is it better to use a filter graph for audio and video instead of using the audio_fifo directly?
[10:38] <Aiena> Can someone please tell me which PCM is best for cross compatibility ? E.g. ffmpeg supports so many PCM types and I don't know which is the most common
[10:41] <sacarasc> Aiena: 16 bit little endian, I forget how it's phrased when picking it.
[10:41] <Aiena> thank you sacarasc
[10:41] <Aiena> in that case the codec is pcm_s16le
[10:41] <Aiena> sacarasc which container is best for PCM in this case WAV ?
[10:41] <sacarasc> Yeah, that one.
[10:42] <Aiena> i mean file container e.g .mp4 etc
[10:42] <sacarasc> WAV works well, yeah. Should work on just about anything you wanna play it on from the last 20 years.
[10:42] <Aiena> Ok thanks
[10:43] <Aiena> sacarasc if the file name has spaces ffmpeg accepts quotes for the last output parameter right?
[10:43] <sacarasc> Yes.
[10:43] <Aiena> thanks
[10:44] <Aiena> ffmpeg is one of the finest pieces of software but with so much power comes more complexity.
[10:44] <Aiena> thanks for your help sacarasc
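For reference, the conversion discussed above is one command; the filenames are placeholders:

```shell
# 16-bit little-endian PCM in a WAV container: the most widely
# compatible uncompressed-audio combination.
ffmpeg -i input.mp3 -c:a pcm_s16le output.wav

# Quoting handles spaces in any argument, not only the last output:
ffmpeg -i "my input.mp3" -c:a pcm_s16le "my output.wav"
```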
[12:59] <grrk-bzzt> Hello
[13:00] <grrk-bzzt> I have a mkv{h264 + flac} file whose video stream is longer than the audio stream.
[13:00] <grrk-bzzt> I would like to drop frames with -vsync and -map but ffmpeg always ends up producing a mkv with only an audio stream in it
[13:01] <grrk-bzzt> I use the following command
[13:01] <grrk-bzzt> ffmpeg -i file.mkv -vsync -1 -c:a copy -map 0:a file_fixed.mkv
[13:02] <relaxed> grrk-bzzt: you just want them to end at the same time?
[13:02] <grrk-bzzt> I want the video stream to end at the same time as the audio, yeah
[13:03] <relaxed> to cut the video stream where the audio ends, ffmpeg -i input -map 0 -c copy -shortest output
[13:06] <grrk-bzzt> Hum
[13:08] <grrk-bzzt> relaxed, -map 0:1 or 0:a in this case?
[13:09] <grrk-bzzt> -map 0 just reproduces the same file as the original
[13:09] <grrk-bzzt> Wait no
[13:09] <grrk-bzzt> It doesn't work either
[13:09] <grrk-bzzt> It still makes mkv with no video stream
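Spelling out relaxed's suggestion: note that with stream copy, -shortest cuts on whole packets, so the video can still overshoot the audio slightly, which may be part of what grrk-bzzt is seeing. A sketch; the re-encode settings are placeholders:

```shell
# Copy both streams and stop muxing when the shorter stream (the
# audio) ends:
ffmpeg -i file.mkv -map 0 -c copy -shortest file_fixed.mkv

# If the copied video still runs long, re-encoding the video allows
# an exact cut at the audio's end:
ffmpeg -i file.mkv -map 0 -c:v libx264 -c:a copy -shortest file_fixed.mkv
```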
[13:19] <pdgendt> hi, i don't know if this is related to ffmpeg or alsa but when I play an audio file to a USB DAC, I get a lot of crackle (especially in the lower tones), I don't have it with regular headphone jack, FYI I am running on an ARM omap4 board, anyone who has an idea or could point me in the right direction?
[13:32] <pdgendt> nevermind other decoders have same issue, probably alsa thing
[15:36] <Venti> can I use the blackframe filter to remove all black frames from a video? how about white or boring frames in general?
[15:48] <dannixon> I'm trying to add an OGG audio track to an MJPG video, I have tried 'ffmpeg -i UNWRAP_MJPEG.avi -i AUDIO.ogg -vcodec copy -acodec aac -strict experimental FINAL.avi' however the audio track appears to be empty when I try to play it
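One plausible fix for dannixon's command, as an untested sketch: map one stream from each input explicitly and stop at the shorter one. Whether AAC-in-AVI plays back well is player-dependent; MP4 may be a safer container for an AAC track:

```shell
# Take the video from the first input and the audio from the second,
# re-encode only the audio, and end when the shorter stream ends.
ffmpeg -i UNWRAP_MJPEG.avi -i AUDIO.ogg \
       -map 0:v:0 -map 1:a:0 \
       -c:v copy -c:a aac -strict experimental \
       -shortest FINAL.avi
```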
[18:17] <devinheitmueller> Are there any write-ups/documentation which describe the theory of operation for ffmpeg's parallel encoding functionality? I'm trying to understand how much sharing of state there is, whether it's spreading the load based on GOPs (or whether it's splitting up encoding tasks within a particular frame) etc.
[18:19] <devinheitmueller> For example, if it's taking groups of frames that are within the same GOP and handing them off to a single CPU, then I would know that the multi-threading doesn't really help with realtime encoding.
[18:28] <clever> devinheitmueller: from what i understand, slice encoding is the main way to parallel it
[18:28] <clever> and i think its just splitting the entire frame into 2 (or 4), and then encoding it as entirely isolated streams
[18:28] <clever> so you basically have 2 entirely separate h264 video streams
[18:28] <clever> which you are playing side by side
[18:28] <clever> or one above the other
[18:28] <devinheitmueller> clever: Ah, ok. A bit simplistic, but certainly appropriate for streams which are already saved on disk.
[18:29] <clever> and it must be sliced at encode time
[18:29] <clever> once sliced, it can also use threads at decode time
[18:29] <devinheitmueller> Yeah, in my case I'm interested purely in encode, and given the source is realtime then splitting into slices doesn't really help that much.
[18:30] <clever> you would have 2 encode threads, each thread takes half of the frame, and encodes it by itself
[18:30] <devinheitmueller> I've read about other implementations which do MoCo in serial and DCT in parallel, but wasn't sure which category ffmpeg fell into.
[18:30] <clever> and then it somehow muxes the 2 h264 streams together with a special header to say its sliced
[18:31] <clever> so each thread can do a frame in half the time, since its encoding only half of the frame
[18:31] <devinheitmueller> Ah, I got it now.
[18:31] <clever> and the exact same thing at decode time
[18:32] <clever> i'm sure ffmpeg has a command line option to just do it
[18:32] <clever> the main issue is that you lose some compressibility
[18:32] <devinheitmueller> I would assume support for sliced encoding would be dependent on the codec though, eh?
[18:32] <clever> yeah
[18:32] <clever> when something moves from one slice to the other, its an entirely different h264 stream
[18:32] <clever> so it cant encode it as moving
[18:33] <devinheitmueller> Right.
[18:33] <clever> it has to encode it as a new image entering the slice
[18:33] <clever> that limitation removes the need to share data between the 2 encode threads
[18:33] <clever> making the encode process much simpler
[18:33] <clever> the threads dont care about each other
[18:33] <devinheitmueller> So a quick Google search suggests that sliced encoding is restricted to H.264/MPEG-4 (as opposed to MPEG-2). Do you concur?
[18:34] <clever> dont know if any other codecs support it
[18:34] <clever> ive only heard of it being used with h264
[18:34] <devinheitmueller> Gotcha.
[18:34] <devinheitmueller> Well, learn something new every day. :-)
[18:34] <clever> for single core decoders (or encoders), it can just do both slices on the same core, in the same amount of time as a normal single slice file
[18:34] <devinheitmueller> sure.
[18:41] <JEEB> devinheitmueller, there are two main ways of threading, picture (frame) based and slice (or part-of-picture otherwise) based
[18:42] <devinheitmueller> Gotcha
[18:42] <JEEB> the first just happens to work even with references between the pictures with current formats it seems, and the latter is OK for devices with less RAM or if you need low latency
[18:42] <JEEB> because of course picture-based threading generally tends to bring latency, while the speed-up generally is greater than slice-based
[18:43] <devinheitmueller> Ah, nice.
[18:43] <devinheitmueller> Good to know there are a couple of different schemes available.
[18:43] <JEEB> and of course you can do slice/whatever part-based stuff only when the file you're reading has that
[18:44] <JEEB> for example blu-ray requires you to have four slices per picture, but most random files you'll find on the internet will only have one per picture, since that gives you the best compression efficiency
[18:45] <devinheitmueller> Understood.
[18:50] <JEEB> with HEVC there's another way of possibly doing parallelization that most vendors seem to be enabling by default in the files, WPP (wavefront parallel processing). Basically it means that you have to decode two CTU blocks from a CTU row, and then you can decode the next CTU row. I think it costs a few % of compression eff. but the vendors and the standards body were OK with that.
[18:51] <JEEB> lavc currently doesn't support WPP multithreading for HEVC, only picture-based which works with all streams
[18:51] <devinheitmueller> HEVC is largely out of scope for me at this point, but it's certainly good to know what's coming down the pipe.
[18:51] <JEEB> well, at this point it seems like lavc will be using the picture-based threading model for that
[18:52] <JEEB> since that works in basically all cases
[18:52] <JEEB> instead of some, albeit the vendors seem to be pushing WPP rather hard so it will probably be a commonly usable way of multithreading
[18:57] <iive> <clever> when something moves from one slice to the other, its an entirely different h264 stream
[18:57] <iive> <clever> so it cant encode it as moving
[18:57] <iive> that's actually not true.
[18:58] <iive> slices still operate on same frames, output and reference.
[19:00] <iive> e.g. mpeg2 by standard has every row of macroblocks as a separate slice. Since mpeg2 uses fixed huffman tables, it doesn't really affect compression. the only difference is that it wastes 4 bytes on each slice header.
[19:11] <clever> ah
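The slice-based encoding and decoding discussed above can be requested on the command line; a sketch with illustrative values:

```shell
# Ask libx264 for 4 slices per picture (the Blu-ray requirement JEEB
# mentions), which makes slice-based threading possible at decode time:
ffmpeg -i input.mp4 -c:v libx264 -slices 4 output.mp4

# lavc decoders default to frame-based threading; slice-based
# threading can be forced for streams that carry multiple slices:
ffmpeg -thread_type slice -i input.mp4 -f null -
```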
[19:21] <adi10289> hi guys i need help
[19:22] <adi10289> i'm trying to download from an m3u8 HLS stream which has an https AES key, can someone help me with how to download from such a stream
[19:34] <adi10289> somebody pls help me out :(
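In case it helps adi10289: for standard AES-128 HLS the key URI is inside the playlist and ffmpeg fetches it by itself, so downloading is often just a remux. A sketch with a placeholder URL; key delivery that needs cookies or tokens is a different story:

```shell
# ffmpeg reads the playlist, downloads the AES-128 key over https,
# decrypts each segment, and remuxes without re-encoding:
ffmpeg -i "https://example.com/stream/playlist.m3u8" -c copy out.mp4
```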
[19:46] <blahsd> Hello! I was wondering if anybody knew the best way to make the smartblur filter more intense
[19:49] <blahsd> pretty please? :D
[19:50] <blahsd> hey aard_ are you there?
[19:52] <aard_> i guess :)
[19:52] <blahsd> there's not many people actually online I guess :P Can I bother you for a second? Maybe you can give me a quick tip :)
[19:53] <aard_> if i can help
[19:53] <blahsd> thanks a bunch mate
[19:53] <blahsd> I'm trying to get a stronger blur going on a video, but raising the chroma values doesn't get me anywhere near where I want to be
[19:54] <blahsd> do you have any idea what I could do?
[19:54] <aard_> not my expertise sorry :/
[19:54] <blahsd> that's alright no worries :) It's a very specific question haha
[20:02] <adi10289> somebody help me too :(
[20:03] <blahsd> adi10289: what are you looking for?
[20:03] <adi10289> i'm trying to download from an m3u8 HLS stream which has an https AES key
[20:04] <adi10289> but haven't got any success till now :(
[20:04] <blahsd> stop right there man that's waaaay off my league hahah
[20:04] <blahsd> I'm sorry dude
[20:07] <adi10289> ok blahsd, guess i have to switch to debian and start testing :(
[20:07] <blahsd> sorry man :(
[20:09] <blahsd> gotta go good day all!
[20:31] <dannixon> Anyone have any ideas what could be causing this: http://pastebin.com/nDAqKy69
[20:43] <c_14> You're using libav, see #libav
[21:09] <Spec-Chum> is there a way to get ffmpeg to tell you the video codec used?
[21:10] <Spec-Chum> I want something like: if (codec != x264) convertToX264(); else copyVideo();
[21:10] <Spec-Chum> in bash BTW, just horrible pseudo code above :)
[21:10] <c_14> ffprobe
[21:12] <Spec-Chum> bit drastic, isn't there an ffmpeg option that just returns "h264" string or somesuch?
[21:13] <Spec-Chum> Just doing a simple script to convert any file to stream on Chromecast: ffmpeg -i "$1" -c:a pcm_s16le -f wav - | neroAacEnc -if - -ignorelength -of "$2".mpa; ffmpeg -i "$1" -i "$2".mpa -c:v copy -c:a copy -map 0:v:0 -map 1:a:0 "$2".mp4
[21:14] <Spec-Chum> that does work, but it copies the video stream, I want to test and only convert if it's not h264
[21:14] <Spec-Chum> make sense?
[21:17] <c_14> ffprobe -show_entries stream=codec_name
[21:19] <Spec-Chum> ah, righto cheers!
[21:33] <Spec-Chum> ffprobe -v quiet "$1" -select_streams v -show_entries stream=codec_name
[21:33] <Spec-Chum> perfect, thanks c_14
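Spec-Chum's pseudocode can be fleshed out in bash. The two helper names below are made up for this sketch; only the probing command itself comes from the log:

```shell
#!/bin/sh
# Print the codec name of the first video stream, e.g. "h264".
detect_vcodec() {
    ffprobe -v quiet -select_streams v:0 \
            -show_entries stream=codec_name \
            -of default=noprint_wrappers=1:nokey=1 "$1"
}

# Map a codec name to -c:v arguments: copy h264, re-encode the rest.
pick_video_args() {
    if [ "$1" = "h264" ]; then
        echo "-c:v copy"
    else
        echo "-c:v libx264"
    fi
}

# Usage sketch:
#   vargs=$(pick_video_args "$(detect_vcodec "$1")")
#   ffmpeg -i "$1" $vargs -c:a copy "$2".mp4
```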
[22:47] <Samian> so let's say you want to edit 10 seconds of a 1 hour video. Obviously, you wouldn't want to re-encode the entire video and thus lose quality. What's the best way to only re-encode the 10 second part? What's a good program that has that capability?
[22:50] <c_14> I don't know about the best way, but what I would do is split the video into 3 parts, mess around with the part in the middle and then just concat them together again.
[22:50] <c_14> I'd look at -ss -t -to and the concat filter.
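c_14's plan as a sketch. Cut points and filenames are placeholders; the stream-copied cuts snap to keyframes, the middle part must be encoded with the same codec and parameters as the rest, and the concat demuxer is used here rather than the concat filter so the outer parts are never re-encoded:

```shell
# 1) Split off the untouched head and tail with stream copy.
ffmpeg -i full.mkv -to 00:30:00 -c copy part1.mkv
ffmpeg -ss 00:30:10 -i full.mkv -c copy part3.mkv

# 2) Re-encode only the 10-second middle piece.
ffmpeg -ss 00:30:00 -i full.mkv -t 10 -c:v libx264 -c:a copy part2.mkv

# 3) Join the three parts again without re-encoding.
printf "file '%s'\n" part1.mkv part2.mkv part3.mkv > list.txt
ffmpeg -f concat -i list.txt -c copy edited.mkv
```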
[00:00] --- Sat Apr 12 2014