[Ffmpeg-devel-irc] ffmpeg.log.20180914

burek burek021 at gmail.com
Sat Sep 15 03:05:01 EEST 2018


[00:00:00 CEST] --- Fri Sep 14 2018
[07:53:45 CEST] <johnnny22> I have this weird situation with the decklink output device, where after some time (like 24h or so), the number of output frames buffered reaches 0. It starts off with about 15 to 30 frames buffered depending on the settings, and 10k to 20k audio samples in buffer, but it slowly creeps down :o ever so slightly till it reaches 0. I'm pondering if the input might be using a clock that is just
[07:53:45 CEST] <johnnny22> a tiny bit slower than the clock used on the output device. Or could it be something else?
[08:45:37 CEST] <A1den> Hi everyone
[08:45:53 CEST] <A1den> how do I set http headers using the api ?
[08:47:23 CEST] <A1den> I mean this option, -headers. I also need to set cookies
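In the C API the HTTP protocol's "headers" option (and the separate "cookies" option) can be passed as an AVDictionary to avformat_open_input(). A minimal sketch; the URL, header names, and cookie value below are placeholders:

    #include <libavformat/avformat.h>

    /* Sketch: open an HTTP(S) input with custom headers, including a cookie. */
    static int open_with_headers(AVFormatContext **ic)
    {
        AVDictionary *opts = NULL;
        int ret;
        /* "headers" takes raw, CRLF-terminated header lines. */
        av_dict_set(&opts, "headers",
                    "X-Api-Key: example\r\nCookie: session=abc123\r\n", 0);
        ret = avformat_open_input(ic, "https://example.com/stream", NULL, &opts);
        av_dict_free(&opts); /* options not consumed by the demuxer remain here */
        return ret;
    }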
[12:59:50 CEST] <remlap> :q
[15:24:45 CEST] <Hello71> how do I ask ffprobe to give me the bitrate even if it requires demuxing?
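One hedged way to get a bitrate by actually demuxing is to sum packet sizes for a stream with ffprobe and divide by the container duration; the file name and stream specifier below are placeholders:

    # total bytes of video stream 0, summed from demuxed packets
    ffprobe -v error -select_streams v:0 -show_entries packet=size -of csv=p=0 input.mkv \
        | awk '{bytes += $1} END {print bytes * 8, "bits total"}'
    # duration to divide by:
    ffprobe -v error -show_entries format=duration -of csv=p=0 input.mkv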
[17:33:23 CEST] <Bombo> hi
[17:34:30 CEST] <Bombo> i googled this but i don't understand it: -filter_complex "[0:v]reverse,fifo[r];[0:v][0:a][r] [0:a]concat=n=2:v=1:a=1 [v] [a]" -map "[v]" -map "[a]"
[17:34:56 CEST] <Bombo> it does work, it reverses the input video, and concats it
[17:35:36 CEST] <Bombo> but the audio is not reversed, so i need to add areverse somewhere, i don't see an explanation in the docs for these
[17:36:44 CEST] <Bombo> tried bruteforcing "[0:v]reverse,fifo[r];[0:v][0:a][r] [0:a]areverse,concat=n=2:v=1:a=1 [v] [a]"
[17:37:05 CEST] <Bombo> no success ;)
[17:37:06 CEST] <Bombo> ;(
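For the record, one form that should do both (an untested sketch; file names are placeholders) reverses the audio separately with areverse and feeds concat its four inputs in segment order, i.e. video 1, audio 1, video 2, audio 2:

    ffmpeg -i input.mp4 -filter_complex \
      "[0:v]reverse,fifo[rv];[0:a]areverse,afifo[ra];[0:v][0:a][rv][ra]concat=n=2:v=1:a=1[v][a]" \
      -map "[v]" -map "[a]" output.mp4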
[18:47:42 CEST] <th3_v0ice> Can someone tell me what is the integer equivalent of "Broken pipe"? I get this after a 30h-plus stream while using the API. I don't know how to find the actual int value. Printing it is an option but waiting is not :)
[18:48:48 CEST] <JEEB> libavutil/error.c:    { EERROR_TAG(EPIPE),             "Broken pipe" },
[18:48:55 CEST] <JEEB> so it's EPIPE
[18:49:22 CEST] <nicolas17> which is 32 on my Linux amd64 system
[18:50:08 CEST] <JEEB> better not hard-code the number if you can get the value from the macro :P
[18:51:10 CEST] <th3_v0ice> How did you find that it's 32? I guess it's -32, right?
[18:51:35 CEST] <JEEB> AVERROR(EPIPE)
[18:51:36 CEST] <JEEB> makes it negative
[18:51:59 CEST] <JEEB> so you can check return value against AVERROR(EPIPE)
[18:52:01 CEST] <th3_v0ice> Ah, it's in errno-base.h
[18:52:12 CEST] <th3_v0ice> Ok, I will not hard code it :)
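A minimal sketch of checking a return code against AVERROR(EPIPE) and turning it into a readable string with av_strerror(), instead of hard-coding -32:

    #include <errno.h>
    #include <libavutil/error.h>

    /* Sketch: classify a negative return code from a libav* call. */
    static void handle_write_error(int ret)
    {
        if (ret == AVERROR(EPIPE)) {
            char msg[AV_ERROR_MAX_STRING_SIZE];
            av_strerror(ret, msg, sizeof(msg)); /* -> "Broken pipe" */
            /* the remote end went away: reopen the output or give up */
        }
    }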
[18:53:32 CEST] <th3_v0ice> But where is "End of file" defined?
[18:53:56 CEST] <JEEB> AVERROR_EOF
[18:54:37 CEST] <JEEB> the documentation mentioned that for quite a while, but also did 0 == EOF a lot as well
[18:54:53 CEST] <JEEB> and late last year someone decided to move completely to 0 != EOF
[18:54:54 CEST] <JEEB> :P
[18:55:01 CEST] <JEEB> which broke a whole lot of modules
[18:55:08 CEST] <JEEB> because usually external APIs return 0 for EOF
[18:55:14 CEST] <th3_v0ice> Hahaha good move
[18:55:24 CEST] <JEEB> so basically for almost any IO module
[18:55:31 CEST] <JEEB> you need to translate from zero to AVERROR_EOF
[18:55:50 CEST] <JEEB> but yes, currently AVERROR_EOF is the EOF  thing
[18:55:56 CEST] <JEEB> for libavformat etc
[18:58:05 CEST] <th3_v0ice> Ok, thanks! :)
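Roughly how that looks in a demuxing loop (a sketch; fmt_ctx is assumed to be an already-opened AVFormatContext and error handling is trimmed). av_read_frame() returns AVERROR_EOF at the end of the input, while other negative values are real errors:

    AVPacket *pkt = av_packet_alloc();
    int ret;
    while ((ret = av_read_frame(fmt_ctx, pkt)) >= 0) {
        /* ... feed pkt to a decoder or muxer ... */
        av_packet_unref(pkt);
    }
    if (ret == AVERROR_EOF) {
        /* normal end of input: flush downstream and finish cleanly */
    } else {
        /* real error, e.g. AVERROR(EPIPE) if the connection died */
    }
    av_packet_free(&pkt);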
[21:30:25 CEST] <estan> hi everyone. anyone know of a lossy codec that is as powerful as e.g. h264 wrt quality vs size, but specifically designed for (or at least works well with) gray16le pixels?
[21:31:25 CEST] <JEEB> do you need hw decoding capability?
[21:31:37 CEST] <estan> i've looked at ffv1, which supports gray16le, but it's a lossless intra-frame codec. i'm looking for something lossy.
[21:31:37 CEST] <JEEB> also do you need the >8bit depth
[21:31:48 CEST] <estan> yes 16 bits per pixel grayscale.
[21:31:50 CEST] <JEEB> and if yes, how high
[21:31:57 CEST] <estan> hw decoding not necessary.
[21:32:00 CEST] <nicolas17> 10 bit not enough?
[21:32:09 CEST] <JEEB> because AVC implementations generally go up to 10bit, and x265 goes up to 12 bit
[21:32:32 CEST] <JEEB> I would just recommend using those in 4:4:4 or 4:0:0 mode
[21:32:44 CEST] <JEEB> I mean, you're going to do lossy anyways :P
[21:32:53 CEST] <nicolas17> I think brightness differences smaller than 2^-10 would be destroyed by the lossy compression anyway
[21:32:55 CEST] <JEEB> I would benchmark 10bit H.264 via x264 first
[21:33:04 CEST] <JEEB> in 4:4:4
[21:33:29 CEST] <estan> 10 bit is a bit on the low side, but i could explore it.. but hm, with those options, wouldn't the 16 bits i feed it just be crammed into one of the 8 bit channels?
[21:33:36 CEST] <estan> (i know very little about codecs)
[21:33:47 CEST] <nicolas17> no, it has real support for 10-bit
[21:34:01 CEST] <estan> sorry, i mean 10 bits.
[21:34:29 CEST] <JEEB> estan: yes, it would be dithered down to 10bit
[21:34:37 CEST] <JEEB> then passed to the encoder
[21:34:39 CEST] <nicolas17> yes, ffmpeg would reduce your 16 bits into 10 bits before the codec does anything
[21:34:43 CEST] <estan> alright. i might try it out anyway. thanks for the tip.
[21:35:55 CEST] <estan> i guess there's no real need for anyone to develop a lossy codec specifically suited for 16 bit grayscale.
[21:36:45 CEST] <JEEB> grayscale is handled pretty well
[21:36:48 CEST] <estan> archivists and medical i guess do 16 bit grayscale, but those are probably more interested in lossless.
[21:36:52 CEST] <JEEB> yes
[21:37:24 CEST] <JEEB> 10-12 bit is the most "pro" formats tend to support for video
[21:37:35 CEST] <estan> i will try 10 bit x264. will i need to build it from source, or do you know if x264 in e.g. ubuntu supports both 8 and 10 bit?
[21:38:17 CEST] <estan> yea. in my case i have this crazy idea of trying to compress some xray tomography of drill core (rock) using a video codec.
[21:38:34 CEST] <nicolas17> estan: turning one of the three 3D axes into time?
[21:38:41 CEST] <estan> nicolas17: yes.
[21:39:02 CEST] <JEEB> estan: if it's new enough it should have both linked in, which is something that got merged in circa december 2017
[21:39:07 CEST] <estan> we're currently compressing using Blosc_LZ4HC at level 4. but i thought it would be fun to treat it as video.
[21:39:14 CEST] <JEEB> otherwise you'll have to build libx264 and FFmpeg yourself
[21:39:25 CEST] <estan> okay. thanks for the info.
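As a hedged starting point for the benchmark JEEB suggests, assuming an ffmpeg whose libx264 was built with 10-bit support (file names and the CRF value are placeholders to tune); the format filter converts (and, per JEEB above, dithers) the gray16le frames down to 10 bits in 4:4:4 before they reach the encoder:

    ffmpeg -i slices_gray16.mkv -vf format=yuv444p10le \
        -c:v libx264 -preset slow -crf 16 out_x264_10bit_444.mkv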
[21:40:57 CEST] <estan> we have two versions of the tomography, one low res for overview/navigation along the drill core, and one high res for close inspection. 10 bit is probably OK for the low res one.
[21:41:27 CEST] <estan> our transmission detector is 12 bits, and then there's of course a bunch of losses in the tomographic reconstruction.
[21:42:55 CEST] <estan> anyway, will try your suggestions.
[21:45:45 CEST] <ChocolateArmpits> estan, jpeg2000 fits the bill
[21:46:07 CEST] <furq> j2k doesn't do gray16
[21:46:14 CEST] <ChocolateArmpits> not in ffmpeg
[21:46:17 CEST] <ChocolateArmpits> of course
[21:46:25 CEST] <nicolas17> "JPEG 2000 supports any bit depth, such as 16- and 32-bit floating point pixel images, and any color space."
[21:46:41 CEST] <furq> actually maybe it does if you use libopenjpeg
[21:47:10 CEST] <furq> vp9 and x265 will both do 12-bit yuv
[21:47:44 CEST] <furq> so if you only have 12 bits of precision anyway then those are probably better choices
[21:48:59 CEST] <estan> hm. is jpeg 2000 not just an intra-frame codec? (or does it do advanced inter-frame compression like x264/x265?).
[21:49:06 CEST] <furq> it's intra-only but it's lossy
[21:49:12 CEST] <estan> alright.
[21:49:46 CEST] <furq> whether it'd actually outperform ffv1 i don't know
[21:49:57 CEST] <estan> i was looking for something similarly powerful to x264 at creating small file sizes.. and i was looking at exploiting the fact that my "frames" (the z axis in our tomography) have few changes between them.
[21:50:02 CEST] <estan> okay.
[21:50:11 CEST] <furq> yeah i figured you wanted something with inter compression
[21:50:27 CEST] <nicolas17> what about av1? :P
[21:50:28 CEST] <estan> looks like the x265 i have here on ubuntu supports 10/12 bits. so i will have a go at that.
[21:50:36 CEST] <furq> neat
[21:50:37 CEST] <ChocolateArmpits> estan, you're expecting too much, there's no mainstream use case for such a thing
[21:51:16 CEST] <furq> x264/x265 will also do high bit depth in lossless mode iirc
[21:51:18 CEST] <furq> with inter compression
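And a sketch of the 12-bit x265 variant of the same experiment, assuming a libx265 built with main12 support (again, file names and CRF are placeholders):

    ffmpeg -i slices_gray16.mkv -vf format=yuv444p12le \
        -c:v libx265 -preset slow -crf 16 out_x265_12bit_444.mkv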
[21:51:27 CEST] <estan> ChocolateArmpits: i'm not expecting, just asking. "i guess there's no real need for anyone to develop a lossy codec specifically suited for 16 bit grayscale." <- me a little while ago :)
[21:51:42 CEST] <JEEB> estan: do note that jpeg2000 is not generally used because it's slow as molasses to decode
[21:51:54 CEST] <JEEB> it's a great way of making film theatres pay for hardware decoders
[21:51:56 CEST] <furq> unless you run a cinema
[21:51:59 CEST] <furq> yeah
[21:52:21 CEST] <estan> heh. i see. i do have a need for reasonably fast decoding.
[21:52:36 CEST] <JEEB> then AVC, definitely
[21:52:38 CEST] <estan> in any case, my idea here might be batshit crazy. just thought it would be fun to try out.
[21:53:03 CEST] <JEEB> although I guess with recent generations of hardware decoders for HEVC, most should support 10bit?
[21:53:17 CEST] <JEEB> so even if HEVC swdec is slower than it could be
[21:53:24 CEST] <JEEB> you might be able to fix it with that
[21:53:41 CEST] <JEEB> in any case, I would just expect fewer surprises from x264 than from x265, artifact-wise
[21:53:46 CEST] <furq> the 10 series does main12 now iirc
[21:53:58 CEST] <estan> okay. i'll try x264 10 bit as well.
[21:54:48 CEST] <estan> i need to support old fart geologists and their 5 year old laptops. so circa intel HD4000 GPUs and onwards maybe.
[21:55:35 CEST] <ChocolateArmpits> aka software decoding
[21:55:49 CEST] <estan> (some of them have beefy machines of course, but it needs to be usable for the majority)
[21:55:52 CEST] <estan> yea.
[21:56:33 CEST] <ChocolateArmpits> what does the extra bit depth even offer with your 16bit images?
[21:57:06 CEST] <ChocolateArmpits> if they're going to view it on conventional monitors, there's reason to doubt whether it's worth storing any of that extra depth
[21:57:33 CEST] <estan> well nothing really, we've just gone for the "better safe than sorry" approach right now. we're doing quantitative analysis on the tomography as well, so it's not just for visualization.
[21:57:44 CEST] <estan> but we don't have 16 bits of precision in the data for sure.
[21:58:15 CEST] <estan> (but we probably do have > 10 bits)
[21:58:30 CEST] <ChocolateArmpits> so the compression will be part of archiving?
[21:58:50 CEST] <JEEB> if you want to do any sort of analysis on the actual values then I would just use some heavy sort of streaming lossless compression
[21:59:09 CEST] <JEEB> xz or whatever
[21:59:19 CEST] <estan> yes. it will be the archived format. i think we're at a point where we're ready to accept saving with loss.
[21:59:26 CEST] <nicolas17> JEEB: x264 lossless still does inter-frame stuff right?
[21:59:31 CEST] <JEEB> yes
[21:59:38 CEST] <JEEB> and you can do keyint infinite
[21:59:43 CEST] <JEEB> with scenechange=0
[21:59:46 CEST] <estan> we've evaluated many compressors and settled on Blosc_LZ4HC (it's lossless).
[21:59:48 CEST] <JEEB> or was it scenecut=0
[21:59:55 CEST] <nicolas17> oh scene change, that reminds me of something
[22:00:11 CEST] <JEEB> that just makes sure x264 doesn't make any other random access points
[22:00:18 CEST] <JEEB> so you just get one long GOP together with keyint infinite
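Spelled out as a command line (a sketch: -qp 0 should put x264 into lossless mode, and the keyint/scenecut parameters give the single long GOP JEEB describes; file names are placeholders):

    ffmpeg -i slices_gray16.mkv -vf format=yuv444p10le \
        -c:v libx264 -qp 0 -x264-params keyint=infinite:scenecut=0 lossless_x264.mkv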
[22:00:29 CEST] <estan> also looked at ZFP, which is specifically for smooth floating point data, but our data was a bit too noisy for that one.
[22:01:22 CEST] <nicolas17> the 'select' video filter supports scene change detection, doing stuff like select='gt(scene\,0.4)'
[22:01:30 CEST] <estan> (just to be clear: me looking at trying out a video codec is just me playing around a bit)
[22:01:41 CEST] <estan> s/me//
[22:01:44 CEST] <JEEB> yeh
[22:01:55 CEST] <JEEB> just wanted to make sure that if you wanted to keep some data for actual analysis, I would not worsen it
[22:02:14 CEST] <JEEB> archive it with some nice archiver (not even image) format
[22:02:22 CEST] <nicolas17> but can I *get* the scene change probability for every frame, rather than just selecting frames based on whether that probability exceeds a fixed threshold?
[22:02:27 CEST] <nicolas17> is it in frame metadata or something?
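If memory serves, select's scene detection attaches its score to every frame as lavfi.scene_score frame metadata, so one hedged way to dump it per frame (placeholder file names) is to select everything and print the metadata:

    ffmpeg -i input.mp4 \
        -vf "select='gte(scene\,0)',metadata=print:file=scene_scores.txt" \
        -f null -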
[22:03:09 CEST] <estan> yea. analysis occurs during scanning with our machine, but we do re-analysis of scanned data quite often.
[22:03:33 CEST] <nicolas17> how often do you need to do re-analysis?
[22:03:33 CEST] <estan> (so yes, it would be a big step to take to start saving lossy).
[22:04:52 CEST] <estan> at the moment it happens quite a lot, since our machine is dependent on good expert knowledge from the clients (mining geologists/petrologists), and that input is not always available at the time of the initial scan, so we do re-analysis on a regular basis.
[22:05:39 CEST] <nicolas17> if *on average* you access files less than once a month, throw them into AWS S3 Infrequent Access which will cost you $0.0125/GB/mo :P
[22:06:47 CEST] <estan> yea, already storing some of it in the cloud. data management is a big issue.
[22:06:54 CEST] <BtbN> Or just don't use AWS at all and pay 0.005$ at B2.
[22:07:06 CEST] <estan> it's about 1-2 GB per scanned meter of drill core at the moment.
[22:07:46 CEST] <nicolas17> BtbN: B2 is awesome, but the advantage of AWS S3 is that you pay $0 for data retrieval if you run the analysis on AWS too
[22:08:02 CEST] <estan> (and miners drill hundreds of thousands of meters each year)
[22:08:58 CEST] <estan> we're a start-up, but have started getting paying clients, so data management is definitely going to be a challenge. at the moment we have our own SANs for it, but doing some backups to Amazon.
[22:09:12 CEST] <estan> anyway. gotta sleep. thanks for all the input.
[23:32:01 CEST] <EvanR> yay finally my avcapture bug on OSX is fixed and merged in!
[23:32:14 CEST] <EvanR> years later
[23:32:20 CEST] <EvanR> i can finally leave this channel!
[00:00:00 CEST] --- Sat Sep 15 2018

