[Ffmpeg-devel-irc] ffmpeg.log.20170315
burek
burek021 at gmail.com
Thu Mar 16 03:05:01 EET 2017
[00:02:15 CET] <SpeakerToMeat> If I'm reading from/to files and not a stream, a "Thread message queue blocking" doesn't mean it's gonna drop frames, only that it's gonna go slower, right?
[00:06:25 CET] <shincodex> Im back
[00:06:55 CET] <shincodex> in my frantic scramble to solve h264 being just garbage
[00:06:59 CET] <shincodex> or ffmpegs imp of it idk
[00:07:07 CET] <shincodex> I do a init packet
[00:07:29 CET] <shincodex> int error = av_read_frame(formatContext, &packet);
[00:07:38 CET] <shincodex> if all good
[00:07:49 CET] <shincodex> avcodec_decode_video2(codecContext, frameNative, &frameFinished, &packet);
[00:08:06 CET] <shincodex> LIB say you fool that deprecated
[00:08:15 CET] <shincodex> and i say the examples must be deprecated fool
[00:08:29 CET] <shincodex> attribute_deprecated
[00:08:39 CET] <shincodex> * @deprecated Use avcodec_send_packet() and avcodec_receive_frame().
[00:08:49 CET] <shincodex> I wonder if this do something with packet that i not do right
[00:08:59 CET] <shincodex> and i have bad corrupt poop frames cause
[00:09:01 CET] <shincodex> of this
[00:10:27 CET] <shincodex> Whether my avcodec decode finishes or not
[00:10:32 CET] <shincodex> at the end of the loop i av_free_packet(&packet);
[00:10:39 CET] <shincodex> for files I see they dont do this
[00:10:41 CET] <shincodex> they init once
[00:10:58 CET] <shincodex> and advance by however many bytes the decode call says it consumed
[00:11:01 CET] <shincodex> they move through the packet by this
[00:11:12 CET] <shincodex> for the next decode, but that's all great for a file
[00:11:18 CET] <shincodex> I get my h264 from rtsp
[00:11:29 CET] <shincodex> allow only video kick rest out
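For reference, a minimal sketch of the non-deprecated decode loop the warning points at (avcodec_send_packet/avcodec_receive_frame). It assumes formatContext, codecContext and a videoStreamIndex are already set up as in the snippets above; error handling is trimmed:

    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>

    AVPacket packet;
    AVFrame *frame = av_frame_alloc();

    while (av_read_frame(formatContext, &packet) >= 0) {
        if (packet.stream_index == videoStreamIndex &&
            avcodec_send_packet(codecContext, &packet) >= 0) {
            /* one packet may produce zero, one or several frames */
            while (avcodec_receive_frame(codecContext, frame) >= 0) {
                /* use frame->data / frame->pts here */
            }
        }
        av_packet_unref(&packet);   /* replaces the deprecated av_free_packet() */
    }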
[00:11:31 CET] <thebombzen> are you just rambling or do you actually have a question
[00:11:40 CET] <shincodex> Thats my question
[00:11:48 CET] <thebombzen> what is
[00:11:50 CET] <shincodex> I was directing at jeeb or furq or engineers
[00:11:56 CET] <shincodex> That understand libavcodec
[00:11:57 CET] <thebombzen> you've been talking at this channel for five minutes straight
[00:12:06 CET] <shincodex> No
[00:12:09 CET] <shincodex> I been talking all day
[00:12:11 CET] <shincodex> I come and go
[00:12:26 CET] <thebombzen> I know that but your last stream has been five minutes straight of talking
[00:12:33 CET] <thebombzen> but you still haven't actually asked a question
[00:12:39 CET] <thebombzen> you've just said stuff that didn't work
[00:13:54 CET] <shincodex> Well tell me
[00:14:07 CET] <thebombzen> you haven't actually asked a question
[00:14:45 CET] <shincodex> Why does VLC decode h264 better (less pixelation, scanline mirroring, ink splooging) than my code even though libavcodec is used in VLC?
[00:15:08 CET] <thebombzen> there's no such thing as "decode it better"
[00:15:22 CET] <thebombzen> a decoder is "dumb" in that it has to produce exactly the same output for a compliant bitstream
[00:15:32 CET] <thebombzen> it doesn't get to make any choices
[00:15:48 CET] <c_14> in many cases, yes it does
[00:15:55 CET] <c_14> a lot of codecs don't specify bit-exact decoding
[00:15:55 CET] <thebombzen> shhh
[00:16:04 CET] <thebombzen> you're overcomplicating things :P
[00:16:14 CET] <c_14> I'm guessing the difference is probably somewhere in the renderer
[00:16:18 CET] <thebombzen> if you're wondering why a decoded frame in VLC might look better, it's probably because it's doing post-processing
[00:16:25 CET] <c_14> ye
[00:16:49 CET] <thebombzen> some players like mpv (and maybe VLC) do postprocessing like deblocking or derining
[00:16:57 CET] <thebombzen> deringing* wow I can't spell
[00:17:11 CET] <nolaan> Hi guys there's a problem with the hflip and vflip video filters. They are inversed....
[00:17:28 CET] <thebombzen> what do you mean "they are inversed"
[00:17:38 CET] <shincodex> thats the thing
[00:17:54 CET] <shincodex> is post-proc responsible for taking care of artifacts
[00:17:59 CET] <shincodex> it is disabled currently
[00:18:07 CET] <thebombzen> nothing to do with libpostproc
[00:18:12 CET] <shincodex> huh
[00:18:19 CET] <shincodex> Maybe its a useless lib?
[00:18:30 CET] <thebombzen> no, that's just not the point here :P
[00:18:42 CET] <thebombzen> a video player has its own internal routines for making a video prettier before putting it on the screen
[00:18:44 CET] <shincodex> i destroyed my configure line to find the problem
[00:18:53 CET] <shincodex> Yes... well
[00:19:01 CET] <thebombzen> if you're comparing the output of a video player to the raw decoded frame
[00:19:03 CET] <thebombzen> then you're doing it wrong
[00:19:10 CET] <thebombzen> because you should not expect those to look the same
[00:19:10 CET] <c_14> nolaan: you're probably just confusing the terminology
[00:19:59 CET] <c_14> hflip flips across the horizontal axis, i.e. the axis that partitions the horizon into two equal halves
[00:20:29 CET] <shincodex> Are you saying that avcodec_decode
[00:20:35 CET] <shincodex> is a raw frame?
[00:20:55 CET] <shincodex> poor ffplay
[00:21:24 CET] <thebombzen> ffplay is not supposed to be a good player
[00:21:33 CET] <thebombzen> it's supposed to be a quick n dirty thing that uses ffmpeg
[00:21:37 CET] <shincodex> i have yet to run it
[00:21:41 CET] <shincodex> Its not quick
[00:21:44 CET] <shincodex> but it is dirtyy
[00:21:56 CET] <thebombzen> clearly you've never heard the expression quick-n-dirty
[00:21:58 CET] <shincodex> how many lines was that thing again?
[00:22:07 CET] <shincodex> No I just opened up a player with less lines
[00:22:15 CET] <shincodex> It stabbed my eyeballs in the ear
[00:22:36 CET] <shincodex> but hmmm maybe should look to see if it has post processing
[00:22:39 CET] <thebombzen> again you've probably never heard the phrase quick-n-dirty
[00:22:46 CET] <shincodex> No I never heard it
[00:22:50 CET] <shincodex> Even when you just said it now
[00:22:50 CET] <thebombzen> there you go
[00:22:52 CET] <shincodex> never heard it
[00:23:08 CET] <c_14> turn on your tts
[00:23:20 CET] <thebombzen> text-to-speech?
[00:23:25 CET] <shincodex> Yes
[00:23:34 CET] <shincodex> You must have not heard of tts before
[00:23:44 CET] <shincodex> but now you know
[00:23:45 CET] <shincodex> now....
[00:23:47 CET] <c_14> He was making a pun based on the usage of "heard" in a text chat
[00:23:48 CET] <shincodex> you know...
[00:23:57 CET] <shincodex> heard of cows?
[00:24:21 CET] <thebombzen> wow you're just misinterpreting everything today
[00:24:35 CET] <thebombzen> "quick-n-dirty" does not reference the number of lines of code
[00:26:08 CET] <shincodex> my video quick-n-dirty was less than 80
[00:26:22 CET] <shincodex> thats shy of 100 by 20
[00:26:46 CET] <shincodex> so... but besides this point
[00:27:48 CET] <thebombzen> well ffplay is 3700 lines
[00:28:01 CET] <shincodex> I really think something in h264 decoder or h264dec.c is stopping me here
[00:28:10 CET] <thebombzen> compared to mpv, which is 137000, I'd say, yea ffplay is quickndirty
[00:28:10 CET] <shincodex> a dict option?
[00:28:11 CET] <shincodex> maybe
[00:28:23 CET] <shincodex> copy paste code means nothing
[00:28:34 CET] <thebombzen> >copy paste code
[00:28:41 CET] <shincodex> you saw the configure script right
[00:28:52 CET] <shincodex> lots of lines
[00:28:57 CET] <thebombzen> mpv doesn't have a configure script
[00:28:59 CET] <shincodex> Lots...
[00:28:59 CET] <thebombzen> it uses waf
[00:29:01 CET] <shincodex> ffmpeg
[00:29:19 CET] <thebombzen> now why do you think FFmpeg's configure script is large
[00:29:22 CET] <thebombzen> think about that for a second
[00:29:33 CET] <shincodex> Oh there is plenty of platforms to think about
[00:29:33 CET] <shincodex> pcpc
[00:29:37 CET] <shincodex> ppc mips
[00:29:40 CET] <shincodex> arm
[00:29:42 CET] <shincodex> etc
[00:29:49 CET] <shincodex> in each Libwhatever
[00:29:52 CET] <shincodex> you see those folders
[00:30:02 CET] <shincodex> Then outside of these folders you have many encoders
[00:30:07 CET] <shincodex> decoders muxers demuxers etc
[00:30:24 CET] <thebombzen> I'm outa here cause this convo is a waste of my time
[00:30:33 CET] <shincodex> run away little girl
[00:30:35 CET] <shincodex> run away
[00:30:39 CET] <shincodex> deadly boss mobs
[00:31:46 CET] <shincodex> suntory whisky toki... try it
[00:31:51 CET] <shincodex> its pretty tasty
[00:32:00 CET] Action: shincodex cackles this idle chatroom to death
[00:33:29 CET] <nolaan> c_14: the video had a rotated bit sorry
[00:33:56 CET] <c_14> aah, yeah. that can be disconcerting if you don't realize
[00:38:48 CET] <shincodex> Ill be back
[00:55:42 CET] <SpeakerToMeat> SHould I be worried about this? "[jpeg2000 @ 0x55840c9511e0] End mismatch 1"
[03:02:15 CET] <Rathann> looks like there might be a bug in 3.1.x that is fixed in 3.2.x and which trips over HandBrake (SIGSEGV in decomb filter)
[03:02:46 CET] <Rathann> predictably, HandBrake upstream said "FFmpeg is unsupported, use libav."
[03:04:04 CET] <Rathann> it's 100% reproducible and happens with every file if you just open something and click start encoding (default settings)
[03:04:23 CET] <Rathann> https://github.com/HandBrake/HandBrake/issues/631
[07:17:20 CET] <matkatmusic> howdy
[07:18:03 CET] <matkatmusic> I have a question about the API. is it possible to pass in images and assign them to specific smpte frame numbers?
[07:18:50 CET] <matkatmusic> like, i want image Image003.jpg to appear at SMPTE time 1:00:00:45.23, for example. or even FrameNumber 764
[07:20:28 CET] <matkatmusic> Image004.jpg might appear at FrameNum 790, so Image003 would need to be the image used for frames 764-789
[07:21:16 CET] <matkatmusic> I'm trying to figure out if it's possible to take images spit out from my JUCE app whenever it repaints and turn 'em into a movie with ffmpeg
[09:22:39 CET] <termos> I remember seeing this new encoding/decoding API in libFFmpeg where you push and retrieve frames somehow, similar to filter graphs. Not the got_pkt integer that's being set in avcodec_encode_video2
[09:28:00 CET] <termos> seems like avcodec_encode_video2 is marked as deprecated but all the examples are still using it. Yet I remember seeing an example where it was done in a more modern way
[09:29:23 CET] <termos> aha, avcodec_send_frame/avcodec_receive_frame
[09:32:00 CET] <JEEB> yeah examples usually get updated... at some point
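For the record, the encode direction of that API pairs avcodec_send_frame() with avcodec_receive_packet() (not receive_frame). A rough sketch, assuming enc_ctx and frame are already set up and ofmt_ctx is the output muxer context; error checks trimmed:

    if (avcodec_send_frame(enc_ctx, frame) >= 0) {   /* frame == NULL flushes the encoder */
        AVPacket pkt;
        av_init_packet(&pkt);
        pkt.data = NULL;
        pkt.size = 0;
        while (avcodec_receive_packet(enc_ctx, &pkt) >= 0) {
            /* write the packet out, e.g. av_interleaved_write_frame(ofmt_ctx, &pkt); */
            av_packet_unref(&pkt);
        }
    }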
[10:43:52 CET] <feliwir> where in ffmpeg is the code to convert yuv420p to rgb?
[10:44:04 CET] <thardin> swscale?
[10:44:32 CET] <feliwir> thardin, i mean the exact code. Because my own yuv420p to rgb conversion isn't working
[10:44:46 CET] <feliwir> so i thought i can look at the ffmpeg sourcecode for that task
[10:45:30 CET] <thardin> one does not simply look into swscale
[10:45:59 CET] <thardin> or rather: it helps if you realize it's a circular buffer of circular buffers (last time I looked)
[10:49:10 CET] <feliwir> hm okay
[10:50:55 CET] <nido_> thardin: reminds me of https://upload.wikimedia.org/wikipedia/commons/5/58/Magic_Roundabout_Schild_db.jpg
[10:53:49 CET] <thardin> :]
[10:54:17 CET] <thardin> actually, I think it's a circular buffer of lines. plus some magic to set up the scaling filters
[10:55:31 CET] <thardin> but how everything is hooking up internally in sws takes a bit of time to digest. but not impossible
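If the goal is just a working yuv420p-to-RGB conversion rather than understanding swscale's internals, the public API route looks roughly like this (a sketch; "frame" is assumed to be an already-decoded yuv420p AVFrame):

    #include <libswscale/swscale.h>
    #include <libavutil/imgutils.h>

    struct SwsContext *sws = sws_getContext(frame->width, frame->height, AV_PIX_FMT_YUV420P,
                                            frame->width, frame->height, AV_PIX_FMT_RGB24,
                                            SWS_BILINEAR, NULL, NULL, NULL);
    uint8_t *rgb[4];
    int rgb_linesize[4];
    av_image_alloc(rgb, rgb_linesize, frame->width, frame->height, AV_PIX_FMT_RGB24, 1);
    sws_scale(sws, (const uint8_t * const *)frame->data, frame->linesize,
              0, frame->height, rgb, rgb_linesize);
    /* rgb[0] now holds packed RGB24, rgb_linesize[0] bytes per row */
    av_freep(&rgb[0]);
    sws_freeContext(sws);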
[11:29:41 CET] <furq> https://chromium.googlesource.com/chromium/src/+/7d2b8c45afc9c0230410011293cc2e1dbb8943a7
[11:29:44 CET] <furq> what
[11:31:11 CET] <thardin> yes, apng is a thing. firefox supports it
[11:33:51 CET] <furq> i know
[11:34:04 CET] <DHE> https://en.wikipedia.org/wiki/Multiple-image_Network_Graphics aka the MNG format...
[11:34:09 CET] <furq> chromium has spent the last nine years not supporting it
[11:34:31 CET] <furq> i wonder if this is some kind of hostage exchange for webp in firefox
[11:35:09 CET] <furq> also apng and mng are different
[11:35:18 CET] <thardin> mng is a mess iirc
[12:28:06 CET] <thebombzen> Safari supports apong too
[12:28:14 CET] <thebombzen> Chromium used to support it, because WebKit does
[12:28:24 CET] <thebombzen> but when Chromium switched from WebKit to Blink, they lost support
[12:28:30 CET] <thebombzen> apng* not aponglol
[12:28:44 CET] <thebombzen> wow I can't type
[12:45:00 CET] <furq> er
[12:45:06 CET] <furq> i don't think that's right
[12:45:21 CET] <furq> apparently webkit only added apng in late 2015
[12:45:52 CET] <furq> the blink split was back in 2013
[12:46:37 CET] <furq> nothing but firefox supported apng when i last checked a couple of years ago
[12:47:31 CET] <furq> it's good that the thing i wanted to do two years ago would finally be possible now that i don't care any more
[13:13:04 CET] <scam> good morning
[13:14:09 CET] <termos> when doing avcodec_open2(stream->codec, ...) I'm passing the deprecated "codec" into the function, is there another way to do this that's not deprecated?
[13:14:48 CET] <scam> will ffmpeg take usb webcam to rtmp url?
[13:32:50 CET] <DHE> termos: use your own AVCodecContext object stored in your own data structures instead
[13:34:37 CET] <ZeroWalker> is it possible to set the timestamp before encoding? since the encoder might buffer, writing the timestamp to the packet when it comes out will be incorrect. Was hoping one could set it on the frame and that it would be copied to the packet when retrieved
[13:38:24 CET] <DHE> umm.. what? the timestamps usually come from the input and are not changed (unless you specifically request it like with an FPS change)
[13:38:37 CET] <termos> DHE: thanks, that's what i'm doing now
[13:39:22 CET] <DHE> termos: the main reason was that the libavformat code was previously reading that 'codec' field to get information like what codec was being saved for file header reasons. They wanted to break that up.
[13:39:50 CET] <DHE> so they marked that field as deprecated and made a new field specifically for libavformat to get its codec metadata from instead. it's caused some confusion, yeah
[13:42:46 CET] <mcjack> DHE and termos: I struggle at the same point, if I don't set stream->codec on encoding avformat_write_header fails, because the codecs are all set to unknown...
[13:43:42 CET] <termos> aha I haven't made it that far yet, but that sounds annoying
[13:46:31 CET] <DHE> mcjack: avcodec_parameters_from_context(stream->codecpar, my_avcodeccontext);
[13:47:00 CET] <ZeroWalker> well it's a live capture, so i have to set the timestamps myself
[13:47:19 CET] <ZeroWalker> so can i set it as frame->pts, and it will stick to that packet when it's retrieved?
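For what it's worth, encoders do generally carry the pts you set on the input frame through to the matching output packet, so for a live capture the usual pattern is to stamp the frame before sending it. A sketch (ts_in_timebase is a placeholder for your capture time converted to the encoder's time_base):

    frame->pts = ts_in_timebase;        /* your live-capture timestamp */
    avcodec_send_frame(enc_ctx, frame);
    /* the packet returned by avcodec_receive_packet() gets its pts from that
       value; dts may differ when B-frames reorder */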
[13:48:30 CET] <mcjack> thanks DHE, I will try that
[13:50:18 CET] <mcjack> in that case I have to use my_avcodeccontext = avcodec_alloc_context3 (encoder); instead of the preallocated stream->codec
[13:50:26 CET] <mcjack> right?
[13:53:09 CET] <DHE> mcjack: looks right
[13:55:07 CET] <mcjack> thanks, I'll try
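Putting DHE's advice together, the non-deprecated setup looks roughly like this (enc is the chosen AVCodec, ofmt_ctx the output AVFormatContext; error checks omitted):

    AVCodecContext *enc_ctx = avcodec_alloc_context3(enc);
    /* ... fill enc_ctx: width, height, pix_fmt, time_base, bit_rate, ... */
    avcodec_open2(enc_ctx, enc, NULL);

    AVStream *st = avformat_new_stream(ofmt_ctx, NULL);
    avcodec_parameters_from_context(st->codecpar, enc_ctx);
    st->time_base = enc_ctx->time_base;

    avformat_write_header(ofmt_ctx, NULL);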
[13:58:43 CET] <ITR> Needing some help with making a slideshow: http://pastebin.com/1mDhmw97
[14:03:47 CET] <ITR> To append on my previous message, uploading it to youtube works fine, so I guess I'll just download it from there and use that, if everything else fails
[14:04:11 CET] <ikevin> hi
[14:05:43 CET] <ikevin> i'm trying to recode a video with multiple audio track, i want to merge audio tracks to 1, is there a way to do that?
[14:06:58 CET] <temp> i encounter a problem when building the ffmpeg source code, like: libavcodec/ffjni.c:23:10: fatal error: jni.h: No such file or directory #include <jni.h>
[14:10:25 CET] <BtbN> you probably need to do some magic to build Java stuff
[14:12:15 CET] <temp> no need now
[14:36:12 CET] <mcjack> DHE: avcodec_parameters_from_context works perfect, thanks! Now I can start fixing the data filling
[14:40:17 CET] <temp> libavcodec/libopusenc.c: In function libopus_encode_init: libavcodec/libopusenc.c:337:15: error: implicit declaration of function opus_multistream_surround_encoder_create; did you mean opus_multistream_decoder_create? [-Werror=implicit-function-declaration]  enc = opus_multistream_surround_encoder_create(
[14:40:18 CET] <temp> libavcodec/libopusenc.c:337:13: warning: assignment makes pointer from integer without a cast [-Wint-conversion] -- what's this mean
[14:43:30 CET] <Xtro_M> Hello, I am in the first year of college and I don't have experience in advanced C programming using algorithms or data structures, but I am still determined to join the GSOC program. Can I get some tips that could help me? I am open to suggestions about learning algorithms and data structures too, but since I have limited time till GSOC I am not sure whether it's possible to learn that much before the program starts.
[15:21:23 CET] <SpeakerToMeat> When working with an image progression source, how do you set the original material fps for conversion?
[15:23:08 CET] <SpeakerToMeat> ... -r it seems
[15:23:42 CET] <SpeakerToMeat> nope.
[15:24:42 CET] <SpeakerToMeat> Ok, -r for conversion, but -framerate for input
[15:25:26 CET] <SpeakerToMeat> Thanks :D
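So the pattern that came out of this, as a command line (file names here are just an example): -framerate before -i tells the image demuxer the rate of the source pictures, while -r as an output option sets the conversion rate:

    ffmpeg -framerate 24 -i frame%04d.png -r 24 -c:v libx264 out.mkv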
[15:26:34 CET] <SpeakerToMeat> I wonder what ffmpeg did with the audio in the original conversion, which erroneously took the rate as 25fps; the separate audio was set for the real fps, so 2000 units per frame. It was probably cut at the end. But it'd be interesting to see what it did
[15:27:58 CET] <temp> libavcodec/libopusenc.c: In function libopus_encode_init:
[15:27:59 CET] <temp> libavcodec/libopusenc.c:337:15: error: implicit declaration of function opus_multistream_surround_encoder_create; did you mean opus_multistream_decoder_create? [-Werror=implicit-function-declaration]
[15:27:59 CET] <temp> enc = opus_multistream_surround_encoder_create(
[15:27:59 CET] <temp> ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[15:27:59 CET] <temp> opus_multistream_decoder_create
[15:28:11 CET] <temp> who can help me on this
[15:29:04 CET] <alexpigment> did you install libopus from a repo?
[15:29:13 CET] <temp> yes
[15:29:17 CET] <alexpigment> it's probably just old
[15:29:55 CET] <temp> also try build opus code
[15:30:11 CET] <alexpigment> "also try" = "i also tried"?
[15:30:19 CET] <alexpigment> or are you telling me to do something?
[15:30:33 CET] <temp> yes i tried
[15:30:36 CET] <alexpigment> k
[15:31:03 CET] <SpeakerToMeat> did you remove the repo version of the opus lib so you don't have conflicting versions?
[15:31:57 CET] <SpeakerToMeat> Or, in case you (probably?) installed the newer libopus to /usr/local, did you point configure to the newer location for the lib?
[15:33:10 CET] <temp> i did not configure the location
[15:34:48 CET] <SpeakerToMeat> for libopus? if it's using configure it probably defaulted to /usr/local. did you just build it, or build and install it? and did you remove the distro packages?
[15:37:20 CET] <SpeakerToMeat> If you just built libopus but didn't install it, you'll have to indicate to the ffmpeg configuration script (I'm not sure which build system ffmpeg is using right now) where the lib and its include files are
[15:37:43 CET] <SpeakerToMeat> it won't find it in /home somewhere on its own
[15:38:23 CET] <SpeakerToMeat> If libopus was built and installed, but the distro package is still there, it's possible the build system is just still hooking onto the distro version and not your custom compiled one
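A hedged sketch of what that looks like when libopus was built and installed to /usr/local (ffmpeg's configure normally locates libopus through pkg-config, so the prefix below is the assumption to adjust):

    export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig:$PKG_CONFIG_PATH
    ./configure --enable-libopus
    make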
[15:43:18 CET] <c4017> When I use '-f alsa' I get the error 'Unknown input format 'alsa'', but it works with avconv. Is there a configure flag that needs to be set to enable alsa when compiling ffmpeg?
[15:43:46 CET] <temp> now, i tried removing the distro packages for opus*, and built and installed opus from source.
[15:43:58 CET] <temp> then i get error like : libavcodec/libopus.c:22:10: fatal error: opus_defines.h:
[15:45:54 CET] <SpeakerToMeat> temp: did you reconfigure the ffmpeg so it'd find the new directory opus is in?
[15:46:06 CET] <temp> no
[15:47:51 CET] <temp> i see , let me try again
[15:59:06 CET] <yariv_slidely> Hi, I am trying to blur a video after 12 seconds. But I also want the video to gradually blur in over 2 seconds, sort of a "blur-in" (like you would fade a video in by changing alpha over time). I was only able to do a simple blur from the 12-second mark on: ffmpeg -i input.mp4 -vf "boxblur=enable='gte(t,12)'" -strict -2 output.mp4 Can someone help? Thanks for the help, Yariv
[16:07:03 CET] <c4017> or is there there something other than alsa i can use to input audio?
[16:22:41 CET] <DHE> for live audio from a sound card, alsa is probably your best bet. maybe you could use pulseaudio if you're into that kind of thing.
[16:24:58 CET] <c4017> I would prefer alsa, I just dont understand why theres an error when I try to use it.
[16:26:11 CET] <furq> c4017: is alsa listed in ffmpeg -devices
[16:26:29 CET] <furq> if the alsa development libs/headers aren't installed when you build it then it won't be enabled
[16:26:33 CET] <furq> libasound2-dev on debian-alikes
[16:27:00 CET] <Sparkyish> Hi all - Looking for help with multi-channel AAC Audio encoding if anybody can point me in the right direction please. I need to encode 4 channels of full-range audio without the audio transformations and remapping that happens when using profiles like 5.1 and 7.1. These are essentially 4 mono channels that all need to be input (on the encoder) and output (on the decoder) at the same time. The audio channels are input & output via
[16:27:04 CET] <Sparkyish> If I use -channels 8 on the input, and -c:a aac -ac 4, the output for ch 1 and 2 is fine, but ch 3 and 4 are output on ch 1 and 2. If I use -c:a aac -ac 6, the 5.1 profile is applied and low frequency on ch 1, 2, 5, 6 outputs on a different channel, ch 3 is fine, but ch 4 is low frequency only. Can anybody help with encoding [the first] 4 full range channels without the 5.1 effects? Thank you!
[16:28:07 CET] <c_14> use the pan filter
[16:29:47 CET] <Sparkyish> okay I'll read up... thank you
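As a rough starting point for that (channel indices are an assumption, adjust to the real wiring): pan can build a 4-channel output straight from the first four input channels, bypassing the 5.1 downmix behaviour, e.g.:

    ffmpeg -i input.wav -af "pan=4c|c0=c0|c1=c1|c2=c2|c3=c3" -c:a aac out.m4a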
[16:29:51 CET] <c4017> furq, nice one I didnt have libasound2-dev. Time to rebuild i guess
[16:30:05 CET] <c4017> any configure flags i need?
[16:31:07 CET] <furq> no
[16:31:17 CET] <furq> just make sure alsa shows up in the configure output
[16:31:30 CET] <furq> it's enabled by default if the deps are installed
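A quick way to verify after rebuilding (the second command assumes the "default" ALSA capture device):

    ffmpeg -devices 2>/dev/null | grep alsa
    ffmpeg -f alsa -i default -t 5 test.wav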
[17:17:33 CET] <nyuszika7h> [libfdk_aac @ 0x2e4d6e0] Note, the VBR setting is unsupported and only works with some parameter combinations
[17:17:53 CET] <nyuszika7h> does this mean it didn't work with my parameters or how can I tell if it did VBR? it's encoding...
[17:21:10 CET] <furq> nyuszika7h: what params did you use
[17:21:48 CET] <nyuszika7h> ffmpeg -ss 15:00 -i in.mkv -map 0:v:0 -vf 'crop=720:432:0:72,scale=1024:432' -aspect 2.35/1 -c:v libx264 -preset veryslow -tune film -crf 18 -map 0:a:0 -c:a libfdk_aac -vbr 4 -ac 2 -t 05:00 sample.mkv -y
[17:22:50 CET] <furq> oh nvm i guess i do get that warning
[17:23:00 CET] <furq> i'm not sure what that's all about
[17:23:07 CET] <furq> it seems to work fine anyway
[17:31:11 CET] <nyuszika7h> also I've noticed that x264 seems to use q=crf+5 most of the time at least on this source
[17:51:19 CET] <acresearch> JEEB: you online?
[18:31:51 CET] <acresearch> anyone online?
[18:32:00 CET] <DHE> plenty. just ask your question.
[18:33:35 CET] <acresearch> i ran this command: ffmpeg -i in.mkv -vf scale=1920:1080 out.mkv to scale a video down from 4k to 1080p but the file size went from 23GB to 1GB !!!! did i lose quality?
[18:34:30 CET] <DHE> well clearly. you didn't specify any quality options and the defaults are not very good
[18:34:32 CET] <JEEB> you used libx264 with crf that is nonzero
[18:34:48 CET] <JEEB> he did set crf and preset
[18:34:50 CET] <acresearch> oh
[18:34:59 CET] <JEEB> just didn't post it for some reason
[18:35:07 CET] <DHE> oh... well that's misleading
[18:35:23 CET] <furq> you also made the video a quarter of its original resolution
[18:35:42 CET] <acresearch> JEEB: DHE so this would be better? ffmpeg -i in.mkv -c:v libx264 -vf scale=1920:1080 out.mkv
[18:35:50 CET] <JEEB> acresearch: basically it will always be lossy coding but the thing that matters is if it looks good enough for you
[18:36:07 CET] <JEEB> acresearch: default for libx264 is crf=we
[18:36:11 CET] <JEEB> *23
[18:36:20 CET] <JEEB> darn touchscreens
[18:36:22 CET] <acresearch> JEEB: i want to be blown away by the quality because it is a special video for me hehe
[18:36:59 CET] <JEEB> if it looks bad, lower crf. if it looks good, try a higher
[18:37:11 CET] <JEEB> that's how crf works
[18:37:14 CET] <kepstin> acresearch: then why re-encode at all?
[18:37:22 CET] <acresearch> furq: yes i quartered the resolution because my tv cannot handle more than 1080p, but i am surprised the file size went down so much
[18:37:28 CET] <JEEB> also for the tv you used level 41
[18:37:46 CET] <JEEB> which you also left out
[18:37:49 CET] <JEEB> :V
[18:38:38 CET] <acresearch> JEEB: the video we both made (with the command you advised) is 4GB and i kept it, i am just trying to understand how to replicate this on my own
[18:38:58 CET] <JEEB> acresearch: you can't do it lossless because very few hw decoders support lossless coding, and thus you adjust crf accordingly
[18:39:11 CET] <acresearch> hmmmm
[18:39:37 CET] <JEEB> crf is the closest we have to "constant quality"
[18:40:09 CET] <JEEB> higher value = compress more
[18:40:15 CET] <acresearch> JEEB: so this command is still the best option? ffmpeg -i input.mkv -c:v libx264 -preset veryfast -level 41 -crf 21 -vf scale=1920:1080 -c:a copy -sn out.mp4
[18:40:46 CET] <JEEB> start with 23, encode about 2500 frames of content and adjust
[18:41:13 CET] <JEEB> looks bad => lower, looks good => higher
[18:41:14 CET] <acresearch> JEEB: -crf 21 to 23 ?
[18:42:14 CET] <JEEB> that way you should find the highest value that still looks good to you
[18:42:37 CET] <acresearch> so you are saying to try -crf 23?
[18:43:08 CET] <JEEB> and then there's the preset which you can adjust to slower or faster
[18:43:18 CET] <kepstin> acresearch: try a few different values of crf until you find one that *you* like.
[18:43:25 CET] <JEEB> according to how much time you want to use
[18:43:46 CET] <JEEB> veryfast is pretty much the fastest useful one
[18:44:13 CET] <acresearch> so i should remove -preset veryfast?
[18:44:16 CET] <JEEB> (crf values are not exactly the same between presets, so you'd have to adjust)
[18:44:54 CET] <JEEB> I'm telling you that if you want to adjust your compression, that's how you do it
[18:45:16 CET] <acresearch> JEEB: ok 1 moment, let me write these down so i memorise them
[18:45:36 CET] <acresearch> -crf is the compression rate?quality? 23 is best?
[18:45:47 CET] <JEEB> also -level 41 you shouldn't remove, it makes sure the thing plays on your hardware plastic toy
[18:46:00 CET] <JEEB> oh for fuck's sake
[18:46:18 CET] <kepstin> acresearch: -crf is a "constant quality" setting, more or less; lower numbers look better, higher numbers give smaller files, there is no such thing as "best"
[18:46:38 CET] <acresearch> oohhhhh
[18:46:44 CET] <JEEB> 23 is the fucking default and it's high enough so that you might see artifacts depending on things
[18:47:33 CET] <JEEB> if you didn't set a preset and crf in your random 1gb test then it set crf=23, preset=medium
[18:48:05 CET] <JEEB> which unsurprisingly is fucking smaller than 21,veryfasy
[18:48:06 CET] <acresearch> JEEB: ok i get it
[18:48:21 CET] <JEEB> *veryfast
[18:48:41 CET] <acresearch> ok and for -level 41 i leave the same correct? what is level 41?
[18:49:18 CET] <furq> https://en.wikipedia.org/wiki/H.264/MPEG-4_AVC#Levels
[18:49:57 CET] <JEEB> that sets level to 4.1. as in tells the encoder to limit memory requirements for the encoded stream within that level
[18:50:18 CET] <JEEB> so that fucking plastic boxes can decode the result
[18:50:41 CET] <JEEB> (level 4.1 is the most common for HD things)
[18:51:13 CET] <acresearch> ohhhh
[18:52:34 CET] <acresearch> JEEB: ok, don't kill me: one last thing -> what is -preset veryfast?
[18:52:45 CET] <JEEB> libx264 preset
[18:52:57 CET] <JEEB> compression vs speed
[18:53:10 CET] <furq> slower presets compress better but take longer (hence the name)
[18:53:30 CET] <acresearch> i don't care about speed. if i leave it out, as furq said it will take a longer time but compress better?
[18:53:47 CET] <kepstin> acresearch: the default is "medium". If you don't care about time, then use "-preset veryslow"
[18:53:47 CET] <JEEB> -preset placebo is slowest
[18:53:53 CET] <furq> if you omit it then it'll use preset medium
[18:53:55 CET] <furq> yeah what kepstin said
[18:54:12 CET] <JEEB> veryslow is the one that is the slowest generally recommended for use
[18:54:19 CET] <furq> veryslow at 1080p will be pretty slow
[18:54:25 CET] <furq> you might even say very slow
[18:54:37 CET] <acresearch> ok i get it :-)
[18:54:42 CET] <acresearch> thanks guys :-)
[18:55:01 CET] <JEEB> well I am waiting for this bastard to soon complain about speed
[18:55:14 CET] <furq> that's what ctrl-c is for
[18:55:29 CET] <JEEB> le sigh
[18:55:46 CET] <acresearch> me? haha no i care about quality more than speed
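Putting the advice from this exchange together, one plausible starting point (then adjust -crf up or down after looking at a short test encode, as described above):

    ffmpeg -i input.mkv -vf scale=1920:1080 -c:v libx264 -preset veryslow -crf 23 -level 41 -c:a copy -sn out.mp4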
[19:44:18 CET] <c4017> So I have a webcam that outputs h264, but the bitrate is 4 Mbit/s regardless of framerate and resolution. Is there some way to decrease bitrate without having to re-encode?
[19:45:06 CET] <JEEB> no
[19:45:31 CET] <JEEB> unless it is pushing out padding, which I would guess it isn't
[19:46:45 CET] <c4017> How else could it make a 10fps 320x240 video so damn big?
[19:47:59 CET] <JEEB> just using low quantizers?
[19:48:13 CET] <JEEB> I meae/32
[19:49:44 CET] <c4017> I wonder if I can communicate with the webcam to make it output a lower bitrate...
[19:51:36 CET] <alexpigment> i'm trying to build a leaner install of ffmpeg for win64 and seeing what I can disable. does libwavpack get used all for processing normal PCM audio, either encoding or decoding?
[19:51:53 CET] <furq> no
[19:52:11 CET] <alexpigment> furq: thanks. do you know what it's used for?
[19:52:16 CET] <furq> it's used for encoding wavpack
[19:52:26 CET] <furq> https://en.wikipedia.org/wiki/WavPack
[19:52:27 CET] <alexpigment> just that? ok cool
[19:52:37 CET] <alexpigment> i saw the website and it made it seem like it did a lot more
[19:52:46 CET] <alexpigment> anyway, disabled now. thanks furq
[19:54:08 CET] <furq> apparently there's an internal wavpack encoder now so you don't even need libwavpack for that
[19:54:41 CET] <furq> i'm not really sure why anyone bothered to implement that in 2017
[20:21:23 CET] <alexpigment> do you guys know if opencore-amr is necessary for *decoding* either AMR WB or NB?
[20:21:26 CET] <alexpigment> or is it just encoding?
[20:21:48 CET] <furq> https://www.ffmpeg.org/general.html#Audio-Codecs
[20:23:02 CET] <alexpigment> yeah i already read that
[20:23:08 CET] <alexpigment> the thing i'm confused about is "FFmpeg can make use of the OpenCORE libraries for AMR-NB decoding/encoding and AMR-WB decoding."
[20:23:18 CET] <alexpigment> *can make use of* in particular
[20:23:38 CET] <alexpigment> if i just need basic decoding, do i need to enable opencore-amr?
[20:24:33 CET] <furq> doesn't look like it
[20:24:56 CET] <alexpigment> k, thanks for the info
[20:25:09 CET] <furq> i can decode amr_nb and i don't have opencore installed
[20:29:28 CET] <alexpigment> ok, thanks for confirming that
[20:46:01 CET] <JEEB> alexpigment: `ffmpeg -codecs |grep "amr"` should show you decoders and encoders for amr
[20:46:34 CET] <JEEB> I've just built FFmpeg without any specifically enabled libraries
[20:46:50 CET] <JEEB> amrnb/wb both have internal decoders available in the list
[20:47:10 CET] <JEEB> no encoders, which is for what I expect opencore be used
[20:47:18 CET] <alexpigment> JEEB: thanks for the info. i suppose it's not a bad idea to build a copy without any external libraries to test against in the future
[20:48:00 CET] <alexpigment> there are other situations i'm not sure about, like whether libtwolame is preferable to the built-in MPEG-2 encoder, so that could be good to test with
[20:48:19 CET] <JEEB> libtwolame is only mp2 audio
[20:48:49 CET] <alexpigment> jeeb: but that includes mp2 audio within a DVD formatted stream, right?
[20:48:58 CET] <JEEB> uhh
[20:49:03 CET] <JEEB> DVD Video doesn't support mp2
[20:49:13 CET] <JEEB> raw PCM, AC3 or DTS
[20:49:13 CET] <alexpigment> well, technically you might be right
[20:49:19 CET] <alexpigment> it's official in PAL countries
[20:49:26 CET] <JEEB> in *broadcast*, yes
[20:49:30 CET] <alexpigment> and unofficially supported by almost everything in the US
[20:49:39 CET] <JEEB> but for *broadcast* all receivers by now support AAC
[20:49:45 CET] <alexpigment> no, MP2 is 100% part of the PAL DVD spec
[20:49:46 CET] <JEEB> or AC3
[20:49:57 CET] <JEEB> uhh
[20:50:07 CET] <alexpigment> and MP2 is unofficially supported by ~all DVD players in NTSC regions
[20:50:13 CET] <JEEB> I'd be deeply surprised since I don't think I've ever seen a PAL DVD with mp2
[20:50:17 CET] <alexpigment> JEEB: trust me on this one
[20:50:19 CET] <JEEB> and I'm from PAL areas
[20:50:23 CET] <furq> it's part of the spec
[20:50:27 CET] <JEEB> wow
[20:50:28 CET] <JEEB> ok
[20:50:30 CET] <JEEB> TIL
[20:50:39 CET] <alexpigment> to be fair, it's not used commercially for any reason
[20:50:43 CET] <JEEB> that said, why on earth would you use mp2
[20:50:51 CET] <JEEB> AC3 and PCM are available
[20:50:52 CET] <alexpigment> to avoid ac-3 licenses and save bandwidth
[20:51:00 CET] <alexpigment> PCM is 1.5mbps of your total bandwidth
[20:51:00 CET] <furq> i think i might actually have seen it but i don't remember where
[20:51:10 CET] <JEEB> ok, ~kind of makes sense
[20:51:19 CET] <alexpigment> it's common for consumer-grade software that creates DVDs
[20:51:47 CET] <furq> i've definitely seen lpcm on video dvds
[20:51:49 CET] <alexpigment> ac-3 licensing is expensive, and it doesn't gain much over MP2 if you're just encoding 2-channel audio
[20:51:54 CET] <furq> that's probably quite a bit more common though
[20:51:58 CET] <alexpigment> LPCM is very common, yes
[20:52:06 CET] <alexpigment> especially on music DVDs
[20:52:21 CET] <JEEB> I've seen a lot of TV show DVDs from Japan with LPCM
[20:52:31 CET] <alexpigment> i own a lot of music video DVDs from various artists - probably a few hundred - and a good chunk of them use LPCM
[20:53:19 CET] <JEEB> also AC-3 patents are running out as far as I can see as dolby made AC-4 as its next-gen cash grab
[20:53:41 CET] <alexpigment> yeah, i'm kind of doing this as a safety measure tbh
[20:54:03 CET] <alexpigment> but yeah, a lot of the DVD-era patents are expiring
[20:54:06 CET] <JEEB> I'm not sure of the AC-3 licensing anyways
[20:54:08 CET] <furq> yeah i've only seen lpcm on japanese dvds
[20:54:10 CET] <alexpigment> MPEG-2 video got WAY cheaper last year
[20:54:24 CET] <JEEB> since you're not using Dolby's software to encode
[20:54:30 CET] <furq> lol what ac-4
[20:54:37 CET] <JEEB> furq: there's even samples of it out there
[20:54:40 CET] <JEEB> but no real usage
[20:54:40 CET] <alexpigment> yeah i've not heard of ac-4 either
[20:54:42 CET] <furq> oh man
[20:54:45 CET] <JEEB> because it's a meme format
[20:54:54 CET] <JEEB> similar to MPEG-H Audio
[20:55:11 CET] <furq> When tested for ATSC 3.0 the bit rates needed for the required audio score was 96 Kbit/s for stereo audio, 192 Kbit/s for 5.1 channel audio, and 288 Kbit/s for 7.1.4 channel audio.
[20:55:12 CET] <alexpigment> man, MPEG-H? lol. i think i've seen that written a total of 1 time
[20:55:14 CET] <furq> so it's aac then
[20:55:24 CET] <furq> except with new and exciting patents
[20:55:39 CET] <JEEB> alexpigment: it only has one semi-sane thing in it anyways, MPEG-H Part 2 which is HEVC
[20:55:55 CET] <alexpigment> furq: is this whole thing another way to get HEVC OTA without people actually investing money into doing it right? ;)
[20:56:16 CET] <JEEB> no, they're investing money into going the way of dodo with MPEG-H Part 1
[20:56:19 CET] <JEEB> which is MMT
[20:56:22 CET] <alexpigment> JEEB: ah, i didn't realize that was part of that
[20:56:24 CET] <JEEB> which is insane, just fyi
[20:56:40 CET] <JEEB> NHK is all mouth foaming about MMT/MMTP as well
[20:56:49 CET] <JEEB> NOM NOM IP OVER BROADCAST
[20:57:30 CET] <JEEB> at least ARIB (the Japanese broadcast standards body) made specs for both MMTP and MPEG-2 TS next-gen broadcast
[20:58:00 CET] <alexpigment> ARIB also did the HLG standard, right?
[20:58:28 CET] <JEEB> ARIB has it standardized as ARIB STD-B.67, yes
[20:58:51 CET] <JEEB> and both SMPTE ST.2084 and HLG are in BT.2100
[20:58:55 CET] <alexpigment> yeah, i've been dealing with a lot of HDR lately, so i end up typing arib-std-b67 a lot
[20:59:18 CET] <JEEB> -32
[20:59:19 CET] <alexpigment> yeah, i know bt.2100 is an overarching thing, but i don't really know the distinction
[20:59:28 CET] <JEEB> BT.2100 just contains various things for HDR
[20:59:45 CET] <JEEB> so that you don't have to go outside ITU-T to read about HLG and SMPTE ST.2084
[20:59:59 CET] <alexpigment> ahh
[21:00:26 CET] <JEEB> talking of HDR I should really get my thing ready with zimg support .-.
[21:00:56 CET] <alexpigment> what is zimg in relation to HDR?
[21:01:07 CET] <JEEB> https://github.com/sekrit-twc/zimg
[21:01:25 CET] <alexpigment> oh gotcha
[21:01:35 CET] <alexpigment> the colorspace conversion
[21:02:02 CET] <JEEB> haven't got any real sources with HDR video, kind of wished I could capture float video out of games :P
[21:02:32 CET] <alexpigment> well, i'm coming at it from the other realm, which is creating native HDR from RAW images
[21:02:46 CET] <alexpigment> rather, creating native HDR video streams
[21:03:01 CET] <JEEB> well yeah, that's why I'd like float output out of apps :P
[21:03:18 CET] <alexpigment> yeah i hear you
[21:04:15 CET] <JEEB> something like iD tech having float output with demos or so :V
[21:04:22 CET] <JEEB> since you can render demos as slowly as you want
[21:04:23 CET] <alexpigment> btw, if you need HDR video to test with, there's a free site that aggregates a lot of them
[21:04:48 CET] <alexpigment> http://demo-uhd3d.com/
[21:05:08 CET] <JEEB> yeah, I used some of those on my TV as I first got it
[21:05:32 CET] <alexpigment> i'm still very much in SDR land at home
[21:05:34 CET] <JEEB> but I'm more interested in testing with lossless sources
[21:05:54 CET] <alexpigment> well, if you have after effects, you can create simulations
[21:06:05 CET] <JEEB> btw, do you know of any actual HLG content?
[21:06:17 CET] <JEEB> I know BBC said they'd publish stuff but that was for limited amount of hardware
[21:06:19 CET] <alexpigment> pretty sure i have some demo videos
[21:07:11 CET] <alexpigment> http://demo-uhd3d.com/fiche.php?cat=uhd&id=154
[21:07:13 CET] <JEEB> oh, travelxp actually used HLG
[21:07:23 CET] <JEEB> yea, that's the one
[21:07:39 CET] <JEEB> finally a real life sample of it :P
[21:07:44 CET] <alexpigment> haha
[21:07:49 CET] <JEEB> I mean, it's all good and fun to implement it from the spec
[21:08:00 CET] <JEEB> but not being able to test is not fun
[21:08:07 CET] <alexpigment> yep
[21:08:21 CET] <alexpigment> i'm still getting my head around the specs and what's required for the various formats
[21:08:43 CET] <alexpigment> i just want to let it all shake out and one format just become the main one
[21:08:49 CET] <alexpigment> but that doesn't seem to be happening yet
[21:09:08 CET] <alexpigment> HLG is promising though, due to its graceful fallback to SDR
[21:09:34 CET] <JEEB> well the BT.2020 part is already set, and thus you are only left with two different EOTFs
[21:09:44 CET] <JEEB> either HLG or SMPTE ST.2084
[21:15:15 CET] <alexpigment> maybe i've got my info mixed up
[21:15:37 CET] <alexpigment> i thought that HDR10 and DolbyVision both looked incorrect on a TV where they weren't supported, but HLG looked correct
[21:16:12 CET] <alexpigment> "correct", i should say
[21:16:46 CET] <JEEB> well if your thing doesn't support the colorimetry specified at all then you're fscked anyways. HLG just IIRC has somewhat better chances of looking better
[21:17:00 CET] <JEEB> saying things "look correct" with hardware at this point is *really* hard
[21:17:11 CET] <JEEB> even if the thing implements colorimetry correctly
[21:17:12 CET] <alexpigment> that is a very true statement
[21:17:35 CET] <JEEB> because there is no standard on how to handle out-of-range values and tone mapping etc
[21:18:01 CET] <alexpigment> well i think the deal is that HLG's defining characteristics are largely in the metadata
[21:18:17 CET] <alexpigment> so in absence of the metadata, it's a 10-bit SDR video
[21:18:25 CET] <alexpigment> (in bt2020)
[21:19:02 CET] <JEEB> uhh, I wouldn't put it like that
[21:19:25 CET] <JEEB> but the EOTF on a certain range of values is gamma
[21:19:31 CET] <alexpigment> i'm not speaking from any sort of authority here. that's just what i remember when reading up on it
[21:20:13 CET] <JEEB> so on that range of values as long as the rest of it is supported correctly you will get possibly a better looking result than with SMPTE ST.2084
[22:16:42 CET] <JEEB> btw, if anyone here is maintaining the wiki, x11grab was just removed with xcbgrab being the more modern alternative
[22:16:51 CET] <JEEB> pls update desktop capture pages etc
[22:19:35 CET] <JEEB> (-f x11grab will still work, but it's better to use the actual name)
[22:21:25 CET] <kepstin> but... what about my sunos5 box that doesn't have libxcb? ... (I kid, I kid...)
[22:22:07 CET] <JEEB> just rejoice that *something* got removed from FFmpeg for once (although it came from a Libav merge and thus totally went in through a push instead of ML drama)
[22:28:58 CET] <alexpigment> ok, so i just finished some Windows FFMPEG builds, now i'm moving onto OS X builds. usually i just run "brew install ffmpeg" and several --with-[library] flags
[22:29:18 CET] <alexpigment> but there are no --with-nvenc or --with-qsv options
[22:29:27 CET] <alexpigment> anyone know what i need to do to build with those enabled?
[22:29:32 CET] <BtbN> both don't exist on OSX.
[22:29:47 CET] <thebombzen> how does one put an NVidia card inside of a mac
[22:29:59 CET] <JEEB> still doesn't mean that API works
[22:30:00 CET] <alexpigment> thebombzen: lots of macs come with them
[22:30:12 CET] <thebombzen> that doesn't answer my question
[22:30:16 CET] <alexpigment> anyway, so you're saying that at this point, neither nvenc or qsv will work on a mac via ffmpeg
[22:30:21 CET] <thebombzen> how does one *put* an nvidia card inside of a mac
[22:30:25 CET] <BtbN> they simply don't exist on OSX.
[22:30:34 CET] <alexpigment> k, thanks BtbN
[22:30:42 CET] <alexpigment> thebombzen: i'm not sure if you're even asking a real question here
[22:31:15 CET] <thebombzen> basically I'm asking if Apple still solders everything together and makes it so you can't customize it
[22:31:27 CET] <alexpigment> yeah, they do
[22:31:43 CET] <alexpigment> and they switch GPU vendors seemingly every year
[22:32:12 CET] <alexpigment> so if you're buying a laptop or desktop, it'll vary from year to year
[22:32:31 CET] <alexpigment> having said that, the Mac Pro *was* a modular PC up until a few years ago
[22:32:50 CET] <alexpigment> in which case, you could definitely put an Nvidia card in it the same way you would a PC
[22:33:36 CET] <alexpigment> having said all that, i think statistically most Macs with dedicated GPUs use Nvidia
[22:34:11 CET] <kepstin> hmm, they've switched back and forth a few times. a lot of the older macbook pros had amd.
[22:34:28 CET] <BtbN> Apple has the crazy habit of writing their own drivers for everything
[22:34:28 CET] <kepstin> all their recent models have been nvidia tho, i think
[22:34:37 CET] <BtbN> Even GPUs
[22:35:26 CET] <kepstin> but yeah, if Apple has exposed hardware video encoder/decoder support, it's probably just via the quicktime apis :/
[22:35:40 CET] <alexpigment> yeah, i mean we know they're using those technologies
[22:35:55 CET] <alexpigment> airplay is most certainly using QuickSync
[22:36:09 CET] <JEEB> yeah, they have videotoolbox
[22:36:13 CET] <JEEB> also QT APIs are being actively removed
[22:36:15 CET] <alexpigment> although i'm not sure if they use Nvenc / VCE on systems with dGPUs
[22:36:26 CET] <JEEB> IIRC the latest xcode removed headers
[22:37:05 CET] <kepstin> huh, is videotoolbox on the desktop macs? I was under the impression that was used on ios
[22:37:16 CET] <JEEB> I think it was a common Apple API
[22:38:15 CET] <thebombzen> wait
[22:38:24 CET] <thebombzen> writing your own driver defeats the whole point of nvidia
[22:38:30 CET] <alexpigment> haha
[22:38:34 CET] <thebombzen> that's why nvidia cards are better than ati
[22:38:38 CET] <thebombzen> cause the drivers are better
[22:38:44 CET] <alexpigment> that's not the *only* reason
[22:38:47 CET] <alexpigment> but yeah, it's a reason
[22:38:57 CET] <thebombzen> ati has faster hardware but nvidia has better real-world performance because the drivers are really really good
[22:39:00 CET] <thebombzen> or one of the reasons
[22:39:09 CET] <thebombzen> but yea it defeats one of the major points of nvidia
[22:39:33 CET] <alexpigment> i just don't think of AMD as a company that provides professional products
[22:39:38 CET] <alexpigment> i know that doesn't mean a lot
[22:39:39 CET] <alexpigment> but
[22:39:45 CET] <matkatmusic> termos: Just saw your comment after my question. "I remember seeing this new encoding/decoding API in libFFmpeg where you push and retrieve frames somehow, similar to filter graphs. Not the got_pkt integer that's being set in avcodec_encode_video2"
[22:40:08 CET] <alexpigment> it's pervasive throughout the industry. if you use professional video editing and creation software, AMD is unheard of
[22:41:01 CET] <alexpigment> then again, i think it's just that they got their foot in the door with CUDA and NVENC early enough that developers utilized those technologies
[22:41:10 CET] <kepstin> as far as video editing stuff goes, yeah, nvidia got pretty big buy-in with their proprietary 'cuda' gpgpu stuff, and nvenc seems easier to use than VCE
[22:41:14 CET] <alexpigment> AMD lagged behind on hardware encoding and computing
[22:41:31 CET] <matkatmusic> I'm wondering if that was answering my question about specifying a jpg to use AT a specific frame number FOR a specific number of frames.
[22:41:35 CET] <kepstin> of course anyone seriously doing professional video editing shouldn't be using a hardware encoder, imo :)
[22:42:06 CET] <alexpigment> kepstin: i disagree with that. perhaps for the actual video encoding of the final product, yes
[22:42:23 CET] <alexpigment> but cuda is very helpful for a lot of processing functions
[22:42:24 CET] <JEEB> after nvidia started supporting lossless coding it became much more relevant for me
[22:42:30 CET] <kepstin> i suppose if you throw a ton of bitrate at it, nvenc might not be that bad as intermediate
[22:42:44 CET] <alexpigment> and nvenc is great for quick draft stuff
[22:47:19 CET] <alexpigment> anyone have any word on VCE being integrated into FFMPEG?
[22:48:58 CET] <BtbN> Nobodys going to stop you if you do
[22:58:58 CET] <kepstin> huh, looks like there's some linux drivers for it that provide the openmax (omx) api, it might be possible to get that working with ffmpeg - who knows how much work that would need, tho...
[23:01:46 CET] <jkqxz> Like all good OpenMAX IL implementations, it has its own set of conflicting assumptions about how the API should work. (It doesn't work at all with the lavc client.)
[23:04:01 CET] <alexpigment> BtbN: nobody will stop me except my complete lack of coding ability ;)
[23:08:16 CET] <jkqxz> It kindof works with VAAPI, but not very well. Good enough for simple encoding (upload + encode), not really usable for GPU transcode or doing anything interesting.
[23:15:50 CET] <alexpigment> kepstin: just looked into a OpenMax a little bit. i didn't know too much about it until now, but it's good to know it exists (and potentially paves a path to AMD hardware encoding in FFMPEG)
[23:16:39 CET] <alexpigment> with all the 4K HEVC stuff that's going on, we're rapidly pushing ourselves into a corner in terms of processing requirements for video encoding
[23:17:16 CET] <alexpigment> so the hardware encoders are going to need to get better and more customizable
[23:17:44 CET] <alexpigment> because i don't see CPUs getting significantly faster anytime soon
[23:18:25 CET] <kepstin> for large-scale batch video processing, it's normally scaled horizontally by segmenting files and encoding them over many cpus
[23:18:43 CET] <kepstin> but that doesn't really help for the video workstation use case ;)
[23:18:47 CET] <alexpigment> yeah, but what about stuff that is done in realtime?
[23:19:54 CET] <alexpigment> the TV market is saying "this is ready for primetime", but the computer industry is saying "whoa... slow down buddy"
[23:20:53 CET] <alexpigment> i don't even know how TV networks will be able to make the conversion to 4K HEVC
[23:21:10 CET] <kepstin> half of them haven't even switched from mpeg2 to h264
[23:21:12 CET] <kepstin> so, yeah...
[23:21:50 CET] <alexpigment> well if you're talking about regional affiliates for major networks, i doubt any of them have switched over
[23:22:08 CET] <TD-Linux> by encoding really bad quality HEVC
[23:22:20 CET] <alexpigment> and yeah, it doesn't benefit the major networks to go from MPEG-2 to H.264 because they're just going to get re-encoded anyway by the cable and satellite companies
[23:22:39 CET] <kepstin> I suppose really bad quality HEVC has the potential to be better than really bad quality H264 :/
[23:22:51 CET] <alexpigment> hmmmm
[23:23:08 CET] <alexpigment> i just think H.264 is still very viable for 4K
[23:23:14 CET] <TD-Linux> the local feed at work is horrific enough that they have a long ways to before 4k even makes sense
[23:23:34 CET] <alexpigment> and the only reason HEVC exists is because of people thinking that bandwidth and storage is still at a premium
[23:23:43 CET] <TD-Linux> it is
[23:25:25 CET] <alexpigment> i don't know. i think a 20-30mbps H.264 is good enough for 4K streaming, and 60mbps+ is good enough for Blu-ray
[23:25:48 CET] <alexpigment> *4k Blu-ray
[23:26:24 CET] <alexpigment> a 30mbps H.264 broadcast standard for 4K wouldn't be terrible in my opinion
[23:26:54 CET] <alexpigment> it would probably be a hell of a lot better than the 10-12mbps MPEG-2 that my local stations put out
[23:27:32 CET] <TD-Linux> sure but where is your magical 20mbps extra coming from
[23:28:34 CET] <alexpigment> that's exactly the point i was trying to make earlier. these standards are always assuming a worst-case scenario in terms of bandwidth
[23:29:13 CET] <alexpigment> ATSC is 18mbps (and then further divided up into HD and SD streams), but it doesn't have to be
[23:29:29 CET] <alexpigment> especially considering that people don't rely as much on OTA feeds
[23:30:00 CET] <alexpigment> but DVB-T is already there in terms of their spec, right?
[23:31:35 CET] <TD-Linux> only if you're willing to throw away a lot of range
[23:32:19 CET] <alexpigment> td-linux, we already threw away most of that range when OTA switched to digital
[23:32:42 CET] <TD-Linux> come on it's a function of symbol rate
[23:33:10 CET] <alexpigment> ?
[23:33:24 CET] <TD-Linux> there is a direct tradeoff between bitrate and range
[23:33:31 CET] <alexpigment> yes, of course there is
[23:33:50 CET] <TD-Linux> you want to run at 64-QAM with highest rate?
[23:33:51 CET] <alexpigment> and what i'm saying is that we already threw away most of the range of "watchable tv" when the digital switch happened
[23:34:35 CET] <alexpigment> if you don't have a perfect digital signal, it's basically unwatchable. if you had a less-than-optimal analog signal, it was very watchable
[23:35:33 CET] <TD-Linux> might as well do a gigabit stream that goes 10 feet then :^)
[23:35:52 CET] <alexpigment> TD-Linux: 802.11ad?
[23:35:57 CET] <alexpigment> ;)
[23:36:00 CET] <TD-Linux> whoa dvb-t2 does qam-256
[23:36:05 CET] <alexpigment> yes
[23:36:18 CET] <alexpigment> DVB is way more aggressive on their bitrate standards
[23:36:35 CET] <alexpigment> ATSC is based on the assumption that most americans live far from cities
[23:50:15 CET] <shincodex> av_dict_set(&options, "rtsp_transport", "tcp", 0);
[23:50:22 CET] <shincodex> force this on RTSP when your h264 is a piece of shit
[23:50:38 CET] <shincodex> To anyone else who is getting trolled by college kids this will help you
[23:50:53 CET] <JEEB> sounds like a crappy network or connection
[23:51:00 CET] <shincodex> thats the thing
[23:51:01 CET] <JEEB> where UDP is getting fscked
[23:51:08 CET] <shincodex> I thought it was buffer size
[23:51:23 CET] <shincodex> but then i was like well turn it all off yep... shite and lo! ffmpeg log spam of problems
[23:51:40 CET] <shincodex> turn that one on and its fine... its fine...
[23:52:03 CET] <JEEB> ordering and making sure packets can come through don't help latency but can help with getting data through :P
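For context, the rtsp_transport option only takes effect if the dictionary is handed to the open call; a sketch (URL and variable names are placeholders):

    AVDictionary *options = NULL;
    av_dict_set(&options, "rtsp_transport", "tcp", 0);
    avformat_open_input(&formatContext, "rtsp://camera/stream", NULL, &options);
    av_dict_free(&options);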
[23:53:55 CET] <shincodex> I upset another guy last night
[23:54:00 CET] <shincodex> cause of my frustration
[23:54:08 CET] <shincodex> "X" works better than "Y
[23:54:09 CET] <shincodex> "
[23:54:26 CET] <shincodex> vlc is "Y" and "Y" tells me... ahhh no we no use lavc we use gpu
[23:54:34 CET] <shincodex> and i says... THAT DOESNT HELP ME FIGURE SHITE OUT
[23:54:37 CET] <shincodex> I USE GPU TOO
[23:54:42 CET] <shincodex> AND CPU!
[23:54:45 CET] <shincodex> fpu1111111
[23:54:47 CET] <shincodex> woohoo
[23:59:53 CET] <shincodex> Welcome to your doom
[00:00:00 CET] --- Thu Mar 16 2017