[Ffmpeg-devel-irc] ffmpeg.log.20170704

burek burek021 at gmail.com
Wed Jul 5 03:05:01 EEST 2017


[00:26:50 CEST] <meriipu> is there a way, with x11grab or ffmpeg otherwise, to not capture application windows matching certain criteria like window name (capturing them as black rectangles instead would also be fine)? e.g. if I would like to not capture terminals or file managers, but capture everything else.
[00:59:36 CEST] <leif> Is there a guide for when I should be using av_packet_unref vs av_packet_free?
[01:00:06 CEST] <leif> I seem to have a rather large memory leak, and I kind of wonder if it's possible I should be freeing packets explicitly.
[01:00:25 CEST] <leif> (rather than just calling av_packet_unref)
[01:01:14 CEST] <DHE> leif: you should always allocate your packets with av_packet_alloc and free them with av_packet_free. the ref/unref is for making copies that you need to hold onto even though someone else might free it/take ownership of the pointer
[01:01:43 CEST] <DHE> outside of such a scenario, you should probably avoid the ref/unref functions
[01:03:27 CEST] <leif> DHE: Ah, okay, thanks.
[01:04:02 CEST] <leif> So I should never use av_malloc to allocate a packet then?
[01:04:24 CEST] <DHE> yes. I would also discourage allocation on the stack.
[01:04:43 CEST] <leif> He, he, ya, I agree. Thanks. I'll give that a go.
[01:05:25 CEST] <DHE> and even if you did use av_malloc, you should use av_mallocz to zero out all fields
[01:05:54 CEST] <leif> Yup, I agree there.
[01:06:52 CEST] <leif> (Although I was actually using a malloc that cooperated with a GC. But I was trying av_malloc to see if I could figure out if that was causing the leak.)
[01:06:58 CEST] <leif> Anyway, thanks. :)
[01:20:06 CEST] <leif> DHE: Also, are you required to call av_init_packet (or av_packet_new?) (in addition to av_packet_alloc) before passing it into av_read_frame?
[01:20:48 CEST] <leif> It looks like it doesn't matter, but if we don't, the packet becomes invalid after the next av_read_frame.
[01:23:53 CEST] <DHE> I'm suggesting you should av_packet_alloc() a new one before the next av_read_frame. you can call av_read_frame on the freshly allocated packet immediately, no init required
[01:24:31 CEST] <DHE> and av_packet_free() the packet as soon as you are done with it
[01:33:55 CEST] <leif> DHE: Cool, that seems to work, thank you.
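A minimal sketch of the per-packet pattern DHE describes, assuming fmt_ctx is an already-opened AVFormatContext (the function name and error handling are illustrative, not from the log):

    #include <libavformat/avformat.h>

    int read_all_packets(AVFormatContext *fmt_ctx)
    {
        for (;;) {
            AVPacket *pkt = av_packet_alloc();      /* freshly allocated and zeroed, no av_init_packet needed */
            if (!pkt)
                return AVERROR(ENOMEM);
            if (av_read_frame(fmt_ctx, pkt) < 0) {  /* EOF or read error */
                av_packet_free(&pkt);
                break;
            }
            /* ... use pkt here (send it to a decoder, write it out, etc.) ... */
            av_packet_free(&pkt);                   /* unrefs the data, frees the struct, sets pkt to NULL */
        }
        return 0;
    }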
[02:55:57 CEST] <Kaedenn> I'm building a timelapse video as I go. How do I append one frame to an existing 30fps video?
[02:56:11 CEST] <Kaedenn> All frames will be jpeg, same size, same color depth, etc
[02:56:55 CEST] <thebombzen> Kaedenn: it depends on the 30 fps video. If the 30 fps video is mjpeg, you might want to just have separate jpeg files. If it's a raw mjpeg stream, you can literally concatenate the files.
[02:57:13 CEST] <Kaedenn> Basically, rather than creating a folder with a thousand JPGs I want to build a video as I go
[02:57:30 CEST] <Kaedenn> Uh, I'm just picking mp4 as a container at the moment and letting ffmpeg figure all that out
[02:57:31 CEST] <thebombzen> ffmpeg supports the "mjpeg" format which is literally "jpeg images concatenated together"
[02:57:51 CEST] <grublet> glorious intraframe codecs
[02:57:57 CEST] <Kaedenn> \o/
[02:58:00 CEST] <thebombzen> to see what I mean, try running: cat first.jpg second.jpg third.jpg >total.jpg
[02:58:10 CEST] <thebombzen> and then try running: ffplay -f mjpeg -i total.jpg
[02:58:17 CEST] <Kaedenn> ...interesting.
[02:58:20 CEST] <Kaedenn> I'll do that now.
[02:58:45 CEST] <thebombzen> Ideally though, if you're going to be recompressing the video later, you might want to use PNGs instead of jpegs.
[02:59:20 CEST] <thebombzen> In this case though you'd have to use ffmpeg -f image2pipe -i concatenated_png_file
[03:00:13 CEST] <thebombzen> you can also just concatenate PNG images together, and then use "ffmpeg -f image2pipe -i concatenated_png_images_file"
[03:00:37 CEST] <thebombzen> This will assume the framerate is 25 fps, but you can use -framerate to override that. "ffmpeg -f image2pipe -framerate 1 -i images_in_one_file"
[03:00:47 CEST] <Kaedenn> The source I'm querying from responds in JPGs.
[03:01:10 CEST] <Kaedenn> Should I convert those to PNGs, cat them, use image2pipe?
[03:01:17 CEST] <furq> no
[03:01:22 CEST] <Kaedenn> Thought that was the case
[03:01:26 CEST] <furq> just use mjpeg
[03:01:39 CEST] <Kaedenn> mjpeg is the easiest solution. The result seemed perfectly fine
[03:01:42 CEST] <thebombzen> No, you should not. I recommended PNGs because if you were generating the images yourself, then no reason to compress them to jpeg and then re-compress them later.
[03:02:00 CEST] <thebombzen> But if you receive jpegs directly then there is no reason to convert them to PNGs, so don't.
[03:02:07 CEST] <Kaedenn> Aha. Yeah I'm not generating them; I'm just doing the equivalent of a wget/curl and storing them in a folder, named sequentially.
[03:02:07 CEST] <furq> obviously you'll probably want to recompress this when you're done
[03:02:09 CEST] <furq> on account of mjpeg sucks
[03:02:16 CEST] <furq> but you will have a playable video in the interim
[03:02:22 CEST] <Kaedenn> furq: I'm assuming having ffmpeg write to a mp4 does this?
[03:02:33 CEST] <furq> yes
[03:02:35 CEST] <furq> that uses x264 by default
[03:02:42 CEST] <Kaedenn> Awesome, so there's almost nothing I need to do differently
[03:03:05 CEST] <Kaedenn> There's a thundercell rolling in and so I booted up the timelapse generation script to capture it rolling in
[03:03:08 CEST] <thebombzen> MJPEG (i.e. just a sequence of jpeg images) is not efficient because it doesn't do any sort of time-based optimization. So it's bad compared to, like, a proper video codec.
[03:03:10 CEST] <furq> you could presumably have an image2pipe waiting for new frames and go to mp4 directly
[03:03:19 CEST] <furq> but then you won't have a playable video until all frames are done
[03:03:26 CEST] <thebombzen> furq: sounds like a bad idea in case of a process crash
[03:03:29 CEST] <furq> also yeah, that
[03:03:37 CEST] <Kaedenn> furq: having a playable video to monitor as I go is quite important
[03:03:41 CEST] <furq> right
[03:03:59 CEST] <furq> that's not possible if you want the final output to be mp4 without doing two passes
[03:04:15 CEST] <Kaedenn> I don't care how many passes it takes; I'm storing thousands of JPGs at the moment as it is :P
[03:04:16 CEST] <furq> so mjpeg and then a final pass is probably the best
[03:04:23 CEST] <furq> sure
[03:04:34 CEST] <thebombzen> Kaedenn: in that case, you can just concatenate the jpeg images together and use the "-f mjpeg" I told you about to view them on the fly. You can put stuff inside an mp4 once everything is fully done.
[03:04:54 CEST] <Kaedenn> Capturing a thousand images at ten second intervals is what takes the time. Everything else is effectively on-demand
[03:05:09 CEST] <thebombzen> Are you capturing 1000 images once every ten seconds?
[03:05:28 CEST] <Kaedenn> Capturing one image every ten seconds, a total of a thousand images.
[03:05:33 CEST] <thebombzen> Ah, okay.
[03:05:34 CEST] <Kaedenn> Timelapse, not high-speed
[03:05:55 CEST] <thebombzen> Oh yes. Then do as above for the realtime capture. Just concatenate the files together.
[03:06:02 CEST] <thebombzen> the JPEGs I mean
[03:06:17 CEST] <Kaedenn> I'm fairly certain I'd get yelled at if I taxed their servers for downloading a ton of images all at once from an internal camera :P
[03:06:49 CEST] <Threads> taxed as in money ?
[03:06:56 CEST] <furq> curl http://foo >> out.mjpeg
[03:06:58 CEST] <furq> should be all you need
[03:07:24 CEST] <Kaedenn> taxed as in taxing electronics as in demanding a ton of resou--that was a lightning bolt
[03:11:13 CEST] <Kaedenn> thank you all
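A rough sketch of the timelapse workflow discussed above (the camera URL, filenames and frame rate are placeholders): append each fetched JPEG to a growing raw MJPEG file, preview it at any point, and do one final re-encode when the capture is finished.

    # every capture interval, append the newest snapshot to the MJPEG stream
    curl -s http://camera.example/snapshot.jpg >> timelapse.mjpeg

    # preview the in-progress video at any time
    ffplay -f mjpeg -framerate 30 -i timelapse.mjpeg

    # final pass once capture is done: re-encode to H.264 in MP4
    ffmpeg -f mjpeg -framerate 30 -i timelapse.mjpeg -c:v libx264 -pix_fmt yuv420p timelapse.mp4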
[06:35:25 CEST] <_yashi_> hello, is it possible for ffprobe to print / decode h.264 bytestream like mediaconch?
[06:38:10 CEST] <british_scientis> what is mediaconch
[06:38:26 CEST] <_yashi_> https://mediaarea.net/MediaConch/
[06:39:54 CEST] <_yashi_> it prints like this: https://pastebin.com/p6P77QsU
[06:40:19 CEST] <british_scientis> try verbose options
[06:43:54 CEST] <_yashi_> you mean -loglevel verbose ?
[06:44:11 CEST] <british_scientis> yes
[06:45:12 CEST] <_yashi_> i don't even know how to tell ffprobe to print nal level info, -show_packets seems to print access unit but not nal
[07:51:09 CEST] <JC_Yang> I'm using libavformat to parse some h264 videos in my app. I currently use av_register_all() and the linked binary is quite large, more than 60M for a single executable. I guess if I just register the formats that I need to support, I can greatly cut the built binary size, but AFAIK the only way to enumerate the supported formats is to first call av_register_all() and then av_iformat_next(), so...
[07:51:10 CEST] <JC_Yang> ...the built binary won't be optimized anyway. How do I use av_register_input_format() to register just one or two formats?
[08:58:44 CEST] <buu> Why would ffmpeg produce a file that, when I feed it to ffmpeg again, produces: [matroska,webm @ 0x558e0df1f3c0] Could not find codec parameters for stream 0 (Video: h264, none(progressive), 1280x720): unspecified pixel format
[08:58:52 CEST] <buu> And how can I get it to produce a file it can actually understand?
[09:29:51 CEST] <buu> augh
[09:29:56 CEST] <buu> Why is my audio desynced into youtube
[10:19:33 CEST] <thebombzen> why are you asking us
[10:26:21 CEST] <buu> thebombzen: Because I created the file with ffmpeg =]
[10:27:18 CEST] <buu> Also I'm rapidly running out of guesses
[10:28:07 CEST] <buu> thebombzen: If ffmpeg produced the file, why does it say: [matroska,webm @ 0x561fd570eaa0] Could not find codec parameters for stream 0 (Video: h264, none(progressive), 1280x720): unspecified pixel format ?
[10:28:28 CEST] <buu> Is there some way to fix that?
[10:28:55 CEST] <durandal_1707> post full uncut command line output
[10:29:08 CEST] <durandal_1707> via pastebin
[10:30:57 CEST] <buu> durandal_1707: http://paste.debian.net/plain/974691
[10:33:16 CEST] <durandal_1707> with output
[10:33:20 CEST] <buu> er
[10:34:45 CEST] <buu> durandal_1707: http://paste.debian.net/plain/974693
[10:39:10 CEST] <durandal_1707> buu: first make sure it decodes/plays without issues
[10:39:17 CEST] <durandal_1707> input video
[10:40:11 CEST] <buu> durandal_1707: er, how? I mean, what am I supposed to do if it doesn't?
[10:40:50 CEST] <durandal_1707> buu: how are you supposed to transcode a file which doesn't decode at all?
[10:41:14 CEST] <durandal_1707> buu: from where is your input video?
[10:41:16 CEST] <buu> Well, it does decode
[10:41:21 CEST] <buu> durandal_1707: twitch tv in this case
[10:41:34 CEST] <buu> durandal_1707: When I upload it to youtube I get this: https://www.youtube.com/watch?v=yEjz6LqfiHE
[10:42:28 CEST] <durandal_1707> buu: but your output tells a different story
[10:42:53 CEST] <buu> durandal_1707: http://rdsw.rmz.gs/audiotest.mkv
[10:42:55 CEST] <buu> This is the actual file
[10:42:59 CEST] <durandal_1707> it can not detect pixel format of input video
[10:43:13 CEST] <buu> durandal_1707: Yes, but what am I supposed to do about it?
[10:43:24 CEST] <buu> I'm attempting to fix the audio desync
[10:43:41 CEST] <durandal_1707> I'm interested in the input file, not the output
[10:43:46 CEST] <buu> When I attempt to play it locally in vlc I get 'blank screen' for a couple of seconds but the audio plays, then the video starts and it is synced to the audio
[10:44:54 CEST] <buu> durandal_1707: Unfortunately the ultimate source file is ~20gb so not practical to upload during this conversation
[10:45:07 CEST] <buu> the sub-source is a file I 'clipped' out using ffmpeg in the first place
[10:45:31 CEST] <durandal_1707> you could trim and codec copy it
[10:46:34 CEST] <buu> durandal_1707: The very original file is created by livestreamer sending output to ffmpeg as such: ffmpeg -i pipe:0 1499147789.sgdq.mkv
[10:48:14 CEST] <durandal_1707> pastebin output of ffmpeg -i test.mkv
[10:48:31 CEST] <buu> using ffmpeg  -to 3:00 -c:a copy -c:v copy upload.mkv from the very original to http://rdsw.rmz.gs/upload.mkv
[10:48:54 CEST] <buu> durandal_1707: http://paste.debian.net/plain/974697
[10:49:29 CEST] <buu> The source: http://paste.debian.net/plain/974698
[10:49:41 CEST] <durandal_1707> and does test.mkv play in ffplay?
[10:51:34 CEST] <buu> durandal_1707: yes
[10:52:01 CEST] <buu> the audio is synced
[10:52:31 CEST] <buu> It does the same thing where I get audio for 1-2 seconds then the video starts
[10:53:06 CEST] <buu> It's like the file I produce is missing the initial video .. packet.. and ffplay/vlc figures it out and waits
[10:53:09 CEST] <buu> but youtube gets confused
[10:53:27 CEST] <durandal_1707> im out of ideas, trim first few seconds
[10:53:35 CEST] <buu> durandal_1707: of test.mkv?
[10:54:03 CEST] <buu> You mean like, -ss 00:02 ... out.mkv ?
[10:54:49 CEST] <buu> I'll note I do get the error: Too many packets buffered for output stream 0:1.
[10:54:56 CEST] <buu> Without the muxing option
[11:12:25 CEST] <buu> aughhhhhhhhhh
[11:12:30 CEST] <buu> Stream specifier ':v:0' in filtergraph description [0:v:0] [1:v:0] [1:a:0] concat=n=2:v=1:a=1 [v] [a] matches no streams.
[11:13:44 CEST] <squ> do you really need trailing :0?
[11:15:23 CEST] <buu> Apparently what I need is a file with both an audio and a video stream
[11:15:24 CEST] <buu> oops
[11:16:01 CEST] <buu> Ok, something is seriously wrong here
[11:17:08 CEST] <buu> Can you please look at this file: http://rdsw.rmz.gs/concat.mkv
[11:17:30 CEST] <buu> I used ffmpeg to generate a black video/blank audio, then used the concat filter to prepend it to my video file
[11:17:41 CEST] <buu> And the audio starts over the black screen like.. 4 seconds early?
[11:17:51 CEST] <squ> then [0:v] and [0:a]
[11:18:09 CEST] <squ> concat=n1:v=1:a=1
[11:18:14 CEST] <squ> n=1
[11:18:23 CEST] <buu> what?
[11:18:47 CEST] <buu> squ: I want 5 (or whatever) seconds of blankness then my video file to start
[11:18:54 CEST] <buu> Why does the audio play while the video doesn't?
[11:21:39 CEST] <buu> clearly ffmpeg understands that there are in fact audio and video streams
[11:24:24 CEST] <mort> hey
[11:25:33 CEST] <mort> It seems to me like libavformat has some functionality which fragments h264 into NAL units, but I can't figure out how to use it. Would anyone here happen to know anything about it?
[11:38:57 CEST] <buu> Also note this important point from that page: "If you use -ss with -c:v copy, the resulting bitstream might end up being choppy, not playable, or out of sync with the audio stream, since ffmpeg is forced to only use/split on i-frames."
[11:39:01 CEST] <buu> hmmmmmmmmmm
[11:39:38 CEST] <buu> I begin to have ideas.
[11:48:11 CEST] <squ> what ideas?
[12:09:13 CEST] <samgoody> Hi all. Am trying to compile ffmpeg, according to the (understandable and easy-to-follow) wiki https://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu.
[12:09:25 CEST] <samgoody> But am getting ERROR: x265 not found using pkg-config
[12:10:03 CEST] <samgoody> I installed libx265-dev, and seeing that wasn't enough also libx265-79
[12:10:47 CEST] <samgoody> (i.e. that just caused it to be marked as manually installed, since it was already there). But that didn't help. So how can I get the compiler to see the x265 lib and continue compiling?
[12:13:07 CEST] <samgoody> As far as I can tell, the issue is that libx265 no longer includes support for ffmpeg, and needs to be compiled specifically for ffmpeg.
[12:13:51 CEST] <samgoody> If that is correct, then the compile wiki page should be updated to reflect that. Is that correct?
[12:14:09 CEST] <kerio> samgoody: tbh i'd just compile the latest versions of everything
[12:14:27 CEST] <samgoody> That's because _you_ are competent
[12:15:04 CEST] <samgoody> But me, if I don't use apt, the libs will go out of date, and I will never get around to recompiling, and that will leave gaping security holes
[12:15:43 CEST] <samgoody> So ffmpeg needs to be recompiled - will do, no choice. But as much as I can take from apt, I would like to. BTW, is my stance an odd one?
[12:15:51 CEST] <PharaohAnubis> you can try to automate everything
[12:19:41 CEST] <samgoody> Right, of course. I'll make sure to set that up tomorrow ;)
[12:20:09 CEST] <samgoody> Tomorrow is a black hat's best friend
[12:21:37 CEST] <buu> I just compiled ffmpeg using ii  libx265-79:amd64                            1.9-3                                         amd64        H.265/HEVC video stream encoder (shared library)
[12:21:41 CEST] <buu> ii  libx265-dev:amd64                           1.9-3                                         amd64        H.265/HEVC video stream encoder (development files)
[12:21:44 CEST] <buu> on ubuntu
[12:21:47 CEST] <buu> 3.3.2
[12:22:06 CEST] <samgoody> So how would I debug why it didnt work for me?
[12:22:55 CEST] <buu> what ubuntu version
[12:22:56 CEST] <buu> ?
[12:44:23 CEST] <JC_Yang> mort: yes, you can. Read libavformat.h, though it's not very well documented; more specifically, av_read_frame() does read a NAL unit for you
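For context, a hedged note and sketch (not from the log): av_read_frame() returns whole demuxed packets - for H.264 in most containers that is an access unit rather than a single NAL unit - so if individual NAL units are really needed, one simple approach is to scan an Annex B buffer for start codes yourself, in plain C with no ffmpeg calls:

    #include <stdint.h>
    #include <stddef.h>

    /* returns a pointer to the next 00 00 01 start code at or after p, or end if none */
    static const uint8_t *find_start_code(const uint8_t *p, const uint8_t *end)
    {
        for (; p + 3 <= end; p++)
            if (p[0] == 0 && p[1] == 0 && p[2] == 1)
                return p;
        return end;
    }

    /* calls handle_nal(nal, len) for each NAL unit in buf, start codes stripped */
    static void split_nals(const uint8_t *buf, size_t size,
                           void (*handle_nal)(const uint8_t *nal, size_t len))
    {
        const uint8_t *end = buf + size;
        const uint8_t *nal = find_start_code(buf, end);
        while (nal < end) {
            nal += 3;                                   /* step past 00 00 01 */
            const uint8_t *next = find_start_code(nal, end);
            /* note: with 4-byte start codes (00 00 00 01) the leading zero byte is
             * left attached to the end of the previous unit; strip it if that matters */
            handle_nal(nal, (size_t)(next - nal));
            nal = next;
        }
    }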
[15:28:07 CEST] <JC_Yang> it seems like av_register_input_format() is not intended to be a public API at all... but it is documented as such in the header... the only way to properly register a format requires some key function pointers which are implementation details... am I wrong? how f**king messy is this project... that's why C is just a dinosaur...
[15:29:44 CEST] <JEEB> more like that part of lavf is WTF. because yes, it looks as if you can plug things during runtime while that is most definitely not true
[15:30:07 CEST] <JEEB> I mean, even if you use a "better" language if you make it public it's public and you've fscked your API
[15:30:35 CEST] <JEEB> I think someone had posted a patch that would remove it but then it got fought against or something?
[15:31:42 CEST] <furq> if you rewrote ffmpeg in rust then the mailing list would be a harmonious place of wonderful collaboration
[15:31:49 CEST] <JEEB> har har har
[15:31:54 CEST] <JEEB> Kostya is doing just that I think
[15:32:01 CEST] <JEEB> with nihav
[15:32:04 CEST] <nicolas17> lol
[15:32:33 CEST] <furq> http://24.media.tumblr.com/75e2ef1b386d1c16931c12c5f36ffad8/tumblr_mhtufc4ttn1s3c16so1_500.png
[15:32:36 CEST] <furq> can you imagine a world without C
[15:32:39 CEST] <JEEB> JC_Yang: or that thing might be a case of getting the private demuxers in lavf and registering them one by one
[15:32:42 CEST] <JC_Yang> well, yes, you're right, but look at all the other half-public structs in the API; I can't imagine using comments as fences for users is a good idea...
[15:33:15 CEST] <JEEB> it isn't, and that's why you get opaque things in many things I think
[15:33:21 CEST] <JEEB> or you just don't get references to them in the public structs
[15:33:26 CEST] <nicolas17> ffmpeg isn't going to change language anytime soon
[15:33:37 CEST] <nicolas17> so what's the point of "disagreeing" with its choice of language? :P
[15:34:31 CEST] <JC_Yang> I don't care about this ugly API now, I'd just like to know how to cut the binary size down to a reasonable range when I only need to support one or two formats
[15:34:43 CEST] <JEEB> just build with less formats?
[15:34:51 CEST] <nicolas17> JC_Yang: --disable-formats --enable-format=foo --enable-format=bar
[15:34:52 CEST] <nicolas17> in configure
[15:35:08 CEST] <JEEB> (I think enable-*= takes things comma-delimited as well)
[15:35:12 CEST] <furq> --enable-muxer
[15:35:14 CEST] <furq> not --enable-format
[15:35:17 CEST] <JEEB> and yes
[15:35:19 CEST] <JEEB> demuxer or muxer
[15:35:22 CEST] <furq> and yeah, you can do --enable-muxer=foo,bar
[15:35:35 CEST] <JC_Yang> well, thank u, will try it soon
[15:35:45 CEST] <DHE> it also accepts --enable-muxer=mp*
[15:35:55 CEST] <furq> you'll probably want demuxers, muxers and parsers
[15:36:01 CEST] <nicolas17> I'm curious how you ended up messing with the code and private APIs before finding the configure options
[15:36:03 CEST] <nicolas17> :P
[15:36:40 CEST] <middleman_> Hello, i have a problem compiling ffmpeg from source with CUDA support so that i can use nvenc.
[15:36:44 CEST] <JEEB> well, av_* is public API
[15:36:46 CEST] <DHE> libavfilter is also huge and trimming the list of filters can help immensely
[15:36:55 CEST] <furq> yeah dropping libavfilter will give the biggest benefit
[15:37:04 CEST] <furq> ideally you want --disable-everything and then just enable the stuff you want
[15:37:09 CEST] <furq> but you will almost certainly end up with a broken build
[15:37:26 CEST] <furq> so it's better to go through and disable stuff one by one before getting to that point
[15:37:35 CEST] <middleman_> Always got error "ERROR: CUVID not found"
[15:38:07 CEST] <JEEB> furq: basically knowing the exact components you need (as in, starting with the file protocol etc usually) is a good idea :D
[15:38:09 CEST] <middleman_> Could anyone advise me what i should do to specify the dependencies for the CUDA/nvenc API?
[15:38:13 CEST] <JC_Yang> I haven't tried any private APIs, just read some code and got to where I am, then gave up and complained, and now got what are probably the right solutions from you guys. Haven't tried it yet, but it sounds promising
[15:38:15 CEST] <furq> yeah that helps
[15:38:34 CEST] <JEEB> because that's the classic failure with disable-everything :D
[15:38:40 CEST] <furq> there is a lot of stuff that isn't immediately obvious
[15:38:43 CEST] <JEEB> "hey why can't I open this JPEG file any more?"
[15:38:44 CEST] <BtbN> cuvid and nvenc do not have any special dependencies.
[15:38:49 CEST] <JEEB> "I enabled the demuxer and decoder!"
[15:39:05 CEST] <JEEB> (but then you don't have the file protocol and d'uh)
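A hedged sketch of the kind of minimal configure line being described here; the mov/mp4/h264 components are only an assumed example, and the real list depends on which one or two formats are actually needed:

    ./configure \
        --disable-everything --disable-programs --disable-doc \
        --enable-protocol=file \
        --enable-demuxer=mov --enable-muxer=mp4 \
        --enable-parser=h264 --enable-decoder=h264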
[15:39:30 CEST] <middleman_> This one configuration does not work.
[15:39:31 CEST] <middleman_> ./configure --enable-cuda --enable-cuvid --enable-nvenc --enable-nonfree --enable-libnpp --extra-cflags=-I/usr/local/cuda/include --extra-ldflags=-L/usr/local/cuda/lib64
[15:39:43 CEST] <middleman_> It throws that error i noticed before
[15:39:54 CEST] <BtbN> are you using latest master?
[15:40:08 CEST] <middleman_>  Me ?
[15:40:27 CEST] <BtbN> who else?
[15:40:55 CEST] <middleman_> i am using apt souce ffmpeg git.... link
[15:41:13 CEST] <BtbN> so probably something horribly old, use latest master
[15:41:36 CEST] <nicolas17> what even is "apt souce ffmpeg git"?
[15:41:53 CEST] <middleman_> VERSION file reports 3.2.6
[15:42:53 CEST] <BtbN> yeah, as expected
[15:43:04 CEST] <middleman_> I have installed 375.66 driver , cuda-8.0 and also toolkit
[15:43:28 CEST] <BtbN> nvenc and cuvid have zero external dependencies in latest master.
[15:43:30 CEST] <BtbN> Just use that.
[15:44:20 CEST] <DHE> speaking of which, the nvenc version dependency is very high. you might need to update your nvidia driver further. not sure if 375 is enough
[15:44:28 CEST] <BtbN> 378 iirc
[15:45:34 CEST] <middleman_> so what is the way to ./configure it ?
[15:45:55 CEST] <DHE> BtbN: I was running 352 for the longest time. so disappointed... :/
[15:46:18 CEST] <BtbN> On master just a plain ./configure will give you nvenc and cuvid.
[15:46:28 CEST] <middleman_> ok, will try it
[15:46:38 CEST] <BtbN> But it won't work with your old driver version
[15:47:12 CEST] <hojuruku> x86_64-pc-linux-gnu-gcc is unable to create an executable file. At first you would think my gcc is broken, but no... gentoo's ffmpeg-9999999 multilib ebuild just broke building from git. I assure you my compiler is working and I can build other multilib ebuilds fine.
[15:47:13 CEST] <BtbN> If you insist on using the old ffmpeg version for the old driver, your configure line looks fine to me
[15:47:22 CEST] <hojuruku> any ideas?
[15:47:28 CEST] <BtbN> if the CUDA SDK is actually installed where you are telling it it is
[15:47:53 CEST] <middleman_> I see that the enabled encoders include nvenc
[15:48:01 CEST] <middleman_> looks promising
[15:48:08 CEST] <BtbN> nvenc never really had any dependencies
[15:48:11 CEST] <hojuruku> i'm not crying i've still got the recent git one installed from last month. It was just my only issue with my monthly updates.
[15:48:17 CEST] <BtbN> it's just about cuvid and the filters.
[15:48:48 CEST] <middleman_> do i need something else ?
[15:48:56 CEST] <BtbN> The CUDA SDK.
[15:48:56 CEST] <hojuruku> i may have to DIY and look at the configure.log to see what's going wrong / what the tests are
[15:49:16 CEST] <middleman_> already have cuda-8.0
[15:49:38 CEST] <BtbN> You are not specifying its path correctly then
[15:49:42 CEST] <BtbN> or it's not properly installed.
[15:51:26 CEST] <middleman_> So what --enable-* options do i need to correctly get cuda support so that nvenc works properly ?
[15:51:57 CEST] <nicolas17> there's nothing else to enable
[15:52:35 CEST] <middleman_> Then are these deprecated: --enable-cuda --enable-cuvid ?
[15:52:42 CEST] <nicolas17> those are correct
[15:52:52 CEST] <nicolas17> "You are not specifying its path correctly, or it's not properly installed" -> specify its path correctly or install it properly, you don't need to "enable" more things
[15:52:53 CEST] <BtbN> depends on the version. Recent versions also have cuda-sdk
[15:53:06 CEST] <BtbN> but it very specifically gives you an error
[15:53:14 CEST] <BtbN> check config.log to find out what exactly it's not finding.
[15:55:06 CEST] <middleman_> i see, -lnvcuvid
[15:58:33 CEST] <BtbN> usr/local also looks weird for the cuda sdk
[16:18:49 CEST] <thomedy> if i ffmpeg -i ./movie.mp4 ./frames/frame%d.jpg  how does it determine what the frame numbers will be
[16:18:53 CEST] <thomedy> is it fps
[16:18:59 CEST] <thomedy> i was anticipating more frames from my film
[16:20:26 CEST] <BtbN> it just counts them?
[16:21:13 CEST] <thomedy> okay, so then i can just use the fps anyway
[16:21:24 CEST] <thomedy> okay so that means i just had them wrong in my head
[16:21:36 CEST] <thomedy> fps 30, 30 second film 900 frames?
[16:21:45 CEST] <thomedy> reasonable/
[16:21:46 CEST] <thomedy> ?
[16:30:27 CEST] <JC_Yang> well, a question: if what I need is maybe just the muxer and demuxer, should I disable all the other modules in configure? I mean avcodec and swresample and the other possibly unused code?
[16:33:55 CEST] <nicolas17> JC_Yang: what are you doing that only needs muxer/demuxer? :o
[16:34:18 CEST] <JC_Yang> maybe also parser
[16:35:25 CEST] <JC_Yang> I don't have a very clear idea of the distinction between a muxer and a parser, any pointers?
[16:37:20 CEST] <bencoh> if I had to guess I'd say the parser allows you to extract (still encoded) frames from an encoded video bitstream, for instance
[16:37:33 CEST] <bencoh> to feed the decoder with those frames
[16:38:19 CEST] <bencoh> (but that's plain educated guess)
[17:21:18 CEST] <samgoody> Hi i reencoded audio on some videos from mp3 to aac
[17:21:46 CEST] <samgoody> the size jump was enormous. E.g. 569M -> 629M
[17:22:25 CEST] <samgoody> The original sound was encoded somewhere in the range of 50kbps, and the aac at 128kbps
[17:23:12 CEST] <samgoody> Is that size expected? Is there any reason to use higher than the original kbps size? If not, is there any way to get the original size?
[17:23:47 CEST] <nicolas17> you *asked* for a bigger filesize :P
[17:24:12 CEST] <nicolas17> if you want the same file size, use the same bitrate; no warranties of the quality being the same though
[17:24:40 CEST] <samgoody> I want the quality to be as close to the original without enlarging it more than needed.
[17:25:05 CEST] <nicolas17> quality is subjective though, especially when switching codecs
[17:25:13 CEST] <samgoody> If it is enough to use 54kbps when reencoding from 45kbps, then I don't need 128kbps.
[17:25:23 CEST] <nicolas17> you'll have to try it
[17:25:35 CEST] <samgoody> I have over 500 videos
[17:25:43 CEST] <nicolas17> well try it on one
[17:25:56 CEST] <furq> why are you reencoding the audio
[17:26:15 CEST] <nicolas17> aac might be able to have *lower* bitrate and still keep the same quality, but Who Knows
[17:26:20 CEST] <samgoody> And I am not an audiophile, so to me it all sounds the same. It is for streaming the video, using HLS which doesn't support MP3
[17:26:46 CEST] <furq> hls does support mp3
[17:28:23 CEST] <samgoody> Not this version of MP3 (mp4a.40.34). At least not for native playback in FF or Chrome while streaming.
[17:29:00 CEST] <samgoody> I want my clients happy, but increasing each file by 10% is significant
[17:29:23 CEST] <nicolas17> uh, I'm pretty sure mp4a is aac
[17:29:46 CEST] <samgoody> So if there is some way I could ffprobe the original bandwidth and add 20% and be pretty sure that users won't hear a difference, I would do that.
[17:30:45 CEST] <nicolas17> experiment with one video, and then assume the quality loss will be similar on the rest
[17:31:02 CEST] <samgoody> Ie, see that the original was encoded at 96k, add 20% == 116k. Would that work?
[17:31:18 CEST] <nicolas17> try same bitrate first
[17:31:45 CEST] <samgoody> The videos were recorded in many settings, and besides, I am not very good at sound tests. I couldn't carry a tune if it had handles
[17:32:56 CEST] <samgoody> If there is no other solution, I will encode at 128k and call it a day. But that increases bandwidth on playback as well as storage costs. So I would have preferred a skinnier approach.
[17:34:03 CEST] <samgoody> is there no proportion that is considered somewhat "safe"?
[17:34:33 CEST] <samgoody> And if so, can I rely on ffprobe to get orig bandwidth?
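A hedged sketch of the per-file approach samgoody is describing (filenames and the 64k figure are placeholders; whether a given AAC bitrate is transparent still has to be judged by listening to a sample):

    # read the source audio bitrate in bits/s (may print N/A for some containers)
    ffprobe -v error -select_streams a:0 -show_entries stream=bit_rate \
            -of default=noprint_wrappers=1:nokey=1 input.mp4

    # re-encode only the audio at roughly the original rate, copy the video untouched
    ffmpeg -i input.mp4 -c:v copy -c:a aac -b:a 64k output.mp4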
[17:40:12 CEST] <hojuruku> ffmpeg is a bit flaky when using filters with v4 intel graphics (eg haswell) vaapi, actually very flaky.
[17:40:30 CEST] <hojuruku> it works... but you can't, say, take the first 10 mins of the video etc...
[19:52:02 CEST] <tezogmix> goal: convert to mp4 with minimal artifacts/grain - source: (wmv/mpg) quality is visually low, very grainy. Does the "-tune film" or "-tune grain" option, or a crf 15/crf 18 parameter, or some other correction help? I've read that too low a crf may enhance grain artifacts more. I think -tune grain preserves the grain and -tune film will try to reduce it, but I'm not sure if either is helpful on already low quality sources.
[19:52:30 CEST] <furq> they both bias towards preserving grain
[19:52:53 CEST] <furq> and using a very low crf will preserve more grain as well
[19:52:56 CEST] <JEEB> if your source is already of visually low quality I am not sure if you want to re-encode to begin with...
[19:53:13 CEST] <furq> it's sometimes worth it if you're resizing down
[19:53:19 CEST] <furq> but if you have mpegs then that's probably not what you're doing
[19:53:20 CEST] <JEEB> because unless there's some real bit rate overusage going on, you probably won't get a compression gain
[19:53:38 CEST] <tezogmix> it was more for device format compatibility to mp4
[19:53:40 CEST] <JEEB> but then again if the source had enough bit rate it wouldn't be of bad quality
[19:54:05 CEST] <furq> you probably want to run a denoise filter anyway
[19:54:24 CEST] <JEEB> tezogmix: the gist is: pick like 2500 frames of content that your stuff will have in general
[19:54:33 CEST] <JEEB> then pick your libx264 preset
[19:54:40 CEST] <JEEB> speed-wise
[19:54:48 CEST] <JEEB> (basically check at which point it gets too slow for you)
[19:55:03 CEST] <JEEB> and then just test on that clip various CRF values until you find the highest that still looks good enough for you
[19:55:07 CEST] <JEEB> start at around 21 to 23
[19:55:07 CEST] <furq> yeah
[19:55:11 CEST] <JEEB> that's literally it
[19:55:12 CEST] <furq> experiment with tunes and filters as well
[19:55:17 CEST] <furq> you probably don't want to use any tune
[19:55:29 CEST] <tezogmix> size wasn't an issue, the source files are considerably small (under 1gb), the bitrates are also pretty low in the sub 1-2k range
[19:55:37 CEST] <furq> they're generally useless if you have a poor quality source
[19:56:23 CEST] <klaxa> just throwing this in here :P  <pengvado> Of course it doesn't look better than the source. It's --deblock, not --pixie-dust.
[19:56:40 CEST] <klaxa> pretty much applies here too from what i can tell
[19:57:07 CEST] <tezogmix> thanks all, what's the denoise parameter to try out?
[19:57:21 CEST] <furq> there are loads
[19:57:26 CEST] <furq> start with -vf hqdn3d
[19:57:35 CEST] <tezogmix> oh yeah, i see many on the official page
[19:57:39 CEST] <furq> !filter hqdn3d
[19:57:40 CEST] <nfobot> furq: http://ffmpeg.org/ffmpeg-filters.html#hqdn3d-1
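A hedged example of the kind of test encode being suggested (filenames, CRF value and clip offsets are placeholders): take a short representative clip, denoise it, and compare a few CRF values by eye.

    ffmpeg -ss 00:05:00 -t 100 -i source.wmv -vf hqdn3d \
           -c:v libx264 -preset medium -crf 21 -c:a aac test_clip.mp4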
[19:57:46 CEST] <tezogmix> preset, i've been using veryfast
[19:57:53 CEST] <nicolas17> hm
[19:58:04 CEST] <tezogmix> maybe will try fast
[19:58:12 CEST] <nicolas17> I'm supposed to av_packet_unref() for every packet I read, even if I'm reusing the AVPacket?
[19:58:57 CEST] <tezogmix> default ffmpeg crf is 20 ?
[19:59:00 CEST] <furq> 23
[19:59:03 CEST] <tezogmix> oh ok...
[19:59:33 CEST] <tezogmix> for all my hq sources, 15-18 have been working well
[19:59:57 CEST] <tezogmix> but from my testing of the same commands on lower quality, it looks a tad bit worse than the source
[20:00:26 CEST] <nicolas17> using low quality is not a way to get rid of noise present in the source :P
[20:00:31 CEST] <tezogmix> so i'll try maybe the higher crf like JEEB mentioned and furq's -vf denoise
[20:00:46 CEST] <klaxa> nicolas17: av_interleaved_write_frame() takes care of unref'ing the packet iirc but av_write_frame() does not https://ffmpeg.org/doxygen/2.8/group__lavf__encoding.html#ga37352ed2c63493c38219d935e71db6c1
[20:01:00 CEST] <nicolas17> if you have artifacts in the source video, reencoding will waste bits trying to preserve the artifacts as well as possible
[20:01:08 CEST] <nicolas17> klaxa: sorry, I'm decoding
[20:01:10 CEST] <tezogmix> ah yeah nicolas17 , i know it wouldn't be eliminated... i've just noticed it much more enhanced than i was expecting in the conversion....
[20:01:18 CEST] <klaxa> ah... hmm...
[20:02:16 CEST] <klaxa> in my program i do av_read_frame(ifmt_ctx, &pkt); [...] ret = av_write_frame(seg->fmt_ctx, &pkt); av_packet_unref(&pkt);
[20:02:20 CEST] <klaxa> so i would assume, yes
[20:02:28 CEST] <nicolas17> hmm okay
[20:02:38 CEST] <tezogmix> btw kepstin , that suggestion of scaling down source 1080p60 to 720p60 was much better than 1080p60 to 1080p30
[20:02:57 CEST] <nicolas17> I wrapped some things in C++ classes, I might make my read_frame unref the packet before overwriting it...
[20:03:26 CEST] <klaxa> when in doubt, valgrind
[20:03:46 CEST] <nicolas17> hmm nope that doesn't work ^^
[20:04:07 CEST] <nicolas17> it crashes on the first frame, I can't call av_packet_unref on a brand new AVPacket that has nothing in it ^^
[20:06:37 CEST] <nicolas17> ahh nevermind
[20:06:45 CEST] <nicolas17> I can av_packet_unref an "empty" AVPacket
[20:06:55 CEST] <nicolas17> but what I had wasn't even zeroed out, it had stack garbage
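A hedged sketch of the reuse pattern being discussed, assuming fmt_ctx is an already-opened input: a stack-allocated AVPacket needs its fields cleared once before the loop so the first av_packet_unref() is safe, plus an unref after every use.

    AVPacket pkt;
    av_init_packet(&pkt);       /* set default field values (does not touch data/size) */
    pkt.data = NULL;
    pkt.size = 0;
    while (av_read_frame(fmt_ctx, &pkt) >= 0) {
        /* ... decode or write the packet ... */
        av_packet_unref(&pkt);  /* release the buffer before the packet is reused */
    }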
[20:09:14 CEST] <SpeakerToMeat> Hi everybody.
[20:09:26 CEST] <SpeakerToMeat> Question, if I use the subtitles filter I can't use a -v copy, right?
[20:09:42 CEST] <furq> right
[20:09:56 CEST] <kepstin> SpeakerToMeat: you cannot use -c copy on any streams which are being filtered, yes
[20:10:13 CEST] <kepstin> (you could for example filter video, but copy audio)
[20:10:44 CEST] <SpeakerToMeat> Hmmm so I need to see from j2k->j2k how much of a loss there'd be
[20:10:46 CEST] <SpeakerToMeat> thanks
[20:11:24 CEST] <nicolas17> the subtitles filter is for hardsub?
[20:11:40 CEST] <kepstin> nicolas17: yes
[20:11:50 CEST] <nicolas17> yeah there's no way you're doing that without reencoding :)
[20:16:19 CEST] <SpeakerToMeat> Huh I lost an HDD....
[20:16:22 CEST] <tezogmix> thanks everyone for the help + tips!
[20:16:46 CEST] <nicolas17> SpeakerToMeat: restore from backup \o/
[20:18:18 CEST] <kepstin> hmm, "Huh I lost an HDD" seems a little low-key for someone needing to restore from backups, maybe this just requires replacing a drive in raid? :)
[20:18:58 CEST] <nicolas17> kepstin: an important KDE server has been reporting 8 "pending" sectors for like a month
[20:19:35 CEST] <SpeakerToMeat> nicolas17: it was loose in the bay
[20:20:17 CEST] <SpeakerToMeat> Time to get the tape out
[20:20:19 CEST] <nicolas17> apparently "pending" means they are bad and need to be reallocated elsewhere, but the drive didn't even manage to read the original contents
[20:20:31 CEST] <kepstin> ah, so you didn't lose an hdd, you loosened an hdd.
[20:20:41 CEST] <nicolas17> :D
[20:21:07 CEST] <furq> nicolas17: you can sometimes clear those by forcing a write to that sector
[20:21:17 CEST] <kerio> do i shill znc
[20:21:19 CEST] <SpeakerToMeat> kepstin: No the hdd loosened itself :D I discovered it by losing the /dev/sdX
[20:21:27 CEST] <kerio> this sounds like a situation where i should shill znc
[20:21:29 CEST] <furq> smartctl will sometimes report the sector
[20:21:35 CEST] <kerio> er what the fuck
[20:21:37 CEST] <kerio> zfs
[20:21:40 CEST] <furq> although it doesn't on my disk with a pending sector
[20:21:42 CEST] <kerio> i can't brain D:
[20:21:54 CEST] <nicolas17> I couldn't find the sector number, even after an "extended" self-test
[20:21:54 CEST] <kepstin> nicolas17: yeah, if you have "pending" sectors, you might not even notice until you try to read some file that happens to be there; if it's in unallocated filesystem space, then it'll fix itself if the fs decides to write something there
[20:22:02 CEST] <nicolas17> kerio: it is in fact using zfs
[20:22:20 CEST] <nicolas17> so that server could explode for all I care, as long as the shards don't damage the others :P
[20:22:24 CEST] <kepstin> but in general if you have a drive with pending sectors, you probably want to preemptively replace it.
[20:22:24 CEST] <nicolas17> er that disk*
[20:22:39 CEST] <kerio> yay
[20:22:54 CEST] <kerio> furq they're using zfs :3
[20:23:17 CEST] <kerio> hey SpeakerToMeat
[20:23:18 CEST] <kerio> use zfs
[20:23:27 CEST] <nicolas17> :P
[20:23:30 CEST] <kepstin> if you do a full filesystem scrub and it doesn't fix the pending sectors, then they're probably in unallocated space and were only found during a smart self-test or something
[20:23:33 CEST] <SpeakerToMeat> kerio: I always use zfs when possible, I'm finishing a project that uses zfs heavily
[20:23:40 CEST] <nicolas17> kerio: are you a door-to-door zfs salesman?
[20:23:56 CEST] <SpeakerToMeat> nicolas17: No, he sells ram
[20:24:01 CEST] <kerio> ayyy
[20:24:09 CEST] <kerio> RAM FOR THE RAM GOD
[20:24:11 CEST] <SpeakerToMeat> Because any self respecting ZFS user NEEDS dedup
[20:24:27 CEST] <nicolas17> not using dedup but still getting RAM problems lol
[20:24:51 CEST] <SpeakerToMeat> Huh dedup is the biggest user of ram with ZFS, my current project has only 8Gb ram total.
[20:25:27 CEST] <kerio> don't use dedup
[20:25:30 CEST] <SpeakerToMeat> On a Xeon DL380-G9 2U bitch chock full of LDDs
[20:25:34 CEST] <SpeakerToMeat> I don't use dedup
[20:25:36 CEST] <kerio> it's a bad idea
[20:25:41 CEST] <SpeakerToMeat> dedup is not accounted into the project or needs
[20:25:46 CEST] <furq> prefetch as well
[20:25:46 CEST] <kerio> unless you have VERY peculiar data
[20:25:51 CEST] <SpeakerToMeat> specially for what we're using it.
[20:26:01 CEST] <SpeakerToMeat> kerio: I have very peculiar data, but it wouldn't benefit from dedup
[20:26:07 CEST] <kerio> furq: well i mean
[20:26:13 CEST] <kerio> you might as well use the ram for *something*
[20:26:23 CEST] <SpeakerToMeat> kerio: Blu-Ray and DVD rips mostly
[20:26:29 CEST] <kerio> what's the alternative, running chrome?
[20:26:39 CEST] <nicolas17> this server has download.kde.org, files.kde.org, and a few other data-heavy thingies :P
[20:31:13 CEST] <SpeakerToMeat> kerio: Run chrome and go into newgrounds
[20:31:44 CEST] <SpeakerToMeat> nicolas17: dedup might help a little there but I wouldn't use it.... read caching though....
[20:32:35 CEST] <SpeakerToMeat> I'm trying to move a client to a newer PHP for his site, so I can enable memcached php compilation cache, he has few php which are repeatedly used a lot, and lots of free ram still
[20:32:58 CEST] <SpeakerToMeat> I'm also excited about getting him away from an older vulnerable php but, oh well
[20:33:42 CEST] <SpeakerToMeat> Do I write image sequence outputs the same way as inputs?
[20:34:09 CEST] <SpeakerToMeat> nicolas17: Are you a sysadmin for kde?
[20:34:51 CEST] <nicolas17> yes
[20:35:26 CEST] <SpeakerToMeat> nicolas17: I think with enough oomph I might be able to do something like that. And I'd probably be dead within three months from the stress
[20:35:59 CEST] <SpeakerToMeat> nicolas17: So, thank you for your service. Even if all I use from kde right now is yakuake and k3b
[20:36:18 CEST] Action: nicolas17 lives in yakuake
[20:36:35 CEST] <SpeakerToMeat> nicolas17: you do, in fact you're in yakuake right now... wanna see?
[20:38:45 CEST] <SpeakerToMeat> nicolas17: There you are, you're all shoved into yakuake right now. http://imgur.com/a/rR2a2
[20:39:08 CEST] <SpeakerToMeat> Rather you're all inside yakuake->mosh->screen->weechat now in glorious xterm-256color
[20:40:02 CEST] <SpeakerToMeat> psst. please don't tell the couchdb and mongodb guys that their buffers are neighbours
[20:40:45 CEST] <nicolas17> join #elasticsearch or something and put it in the middle :o
[20:41:04 CEST] <SpeakerToMeat> nicolas17: Buffe 21
[20:41:31 CEST] <nicolas17> :D
[20:41:53 CEST] <SpeakerToMeat> This project I'm talking about uses couchdb+elasticsearch... such a "winning" combination
[20:42:20 CEST] <SpeakerToMeat> That one alone drove me almost to drinking, especially years ago when the river changed from the shoddy plugin to the direct method
[20:43:36 CEST] <SpeakerToMeat> Ok, first "subtitling jpeg2000 movies with ffmpeg" test
[20:44:26 CEST] <kerio> jpeg2000 D:
[20:45:27 CEST] <SpeakerToMeat> kerio: DCI
[20:46:36 CEST] <kepstin> I guess cinema projectors tend not to have softsub renderers that can handle SRT or ASS, do they :/
[20:46:45 CEST] <nicolas17> lolno
[20:46:52 CEST] <kepstin> that seems like something they should fix.
[20:47:14 CEST] <SpeakerToMeat> kepstin: Actually.
[20:47:40 CEST] <SpeakerToMeat> kepstin: Both Interop and SMPTE have xml based subtitle formats, SMPTE's wrapped in MXF
[20:48:18 CEST] <SpeakerToMeat> kepstin: But, sometimes the projector has Texas Instrument's cinecanvas inside to render them, and supposedly sometimes not, and the media server/media block doesn't always have an internal renderer
[20:48:24 CEST] <nicolas17> "XML is like violence, if it doesn't solve the problem, use more"
[20:48:57 CEST] <SpeakerToMeat> kepstin: Plus, some sites haven't updated to SMPTE-able hardware/firmware, so doing subtitles in digital cinema is a minefield, and sometimes like festivals or some/many distributors prefer burned-in subs for that reason
[20:49:13 CEST] <kepstin> well, the real solution is to just master stuff to a blu-ray and play it on a ps3 in the projection booth, right? :)
[20:49:40 CEST] <nicolas17> ^
[20:49:46 CEST] <SpeakerToMeat> Also, if you're using a type 1 texas instrument dlp based projector (very old), the ram space for the font is limited to 256Kb, so you normally cut unused glyphs out of your font
[20:49:57 CEST] <SpeakerToMeat> kepstin: I have seen worse, much worse.
[20:50:57 CEST] <SpeakerToMeat> Two things, first: how the heck do I get the help for a codec? I try all ways and always fail :/
[20:51:17 CEST] <kepstin> SpeakerToMeat: for an encoder, "ffmpeg -h encoder=<name>"
[20:51:23 CEST] <SpeakerToMeat> and second, what's the right way to specify the fps for an image sequence input? not for modifying it, but for specifying what it's supposed to be
[20:51:36 CEST] <kepstin> but that'll only show encoder-specific avoptions, many codecs also use common options
[20:51:40 CEST] <SpeakerToMeat> kepstin: Huh, thanks and sorry, I tried format, codec, etc
[20:51:56 CEST] <nicolas17> SpeakerToMeat: "-framerate 30" before the -i for the images
[20:51:57 CEST] <kepstin> SpeakerToMeat: input fps? depends on the format - what do you have?
[20:52:07 CEST] <nicolas17> I assume you're using image2?
[20:52:09 CEST] <kepstin> but yes, what nicolas17 said for many formats ;)
[20:52:21 CEST] <SpeakerToMeat> image2?
[20:52:48 CEST] <nicolas17> image2 is a demuxer that... pretty much reads individual image files and feeds them unmodified to the decoder
[20:53:22 CEST] <nicolas17> I think support for -i "foo%03d.jpg" frame number formatting is in the image2 (de)muxer too
[20:53:58 CEST] <SpeakerToMeat> So I should change demuxers?
[20:54:15 CEST] <nicolas17> what are you using now?
[20:54:44 CEST] <SpeakerToMeat> nicolas17: whathever defaults for ffmpeg -i image%06d.j2c
[20:55:05 CEST] <kepstin> SpeakerToMeat: then it's using the image2 demuxer.
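A hedged minimal example of the option placement nicolas17 describes (pattern, rate and output name are placeholders): -framerate is an input option for the image2 demuxer, so it goes before the -i.

    ffmpeg -framerate 24 -i frame%04d.png -c:v libx264 out.mkv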
[20:55:29 CEST] <SpeakerToMeat> Ok
[20:55:52 CEST] <SpeakerToMeat> Hmmm for subs I guess my best bet for setting up everything as I want it is to use ASS and the ass filter (that sounds nasty)
[20:56:25 CEST] <kepstin> SpeakerToMeat: probably the subtitles filter actually, but I think they're more or less equivalent if you just have an external standalone .ass file
[20:56:53 CEST] <furq> ass is only useful if you don't have libavcodec and libavformat
[20:57:10 CEST] <kepstin> (subtitles filter can do things like read embedded fonts from mkv, and subtitle formats other than ass)
[20:57:22 CEST] <SpeakerToMeat> kepstin: I think the subtitle filter builds on top of ass, doesn't it? And if I have a metadata-clean sub format like srt I'd have to specify a lot of style options like font and color, which I can pre-set in ASS
[20:57:38 CEST] <furq> subtitles will treat ass files the same as ass
[20:57:40 CEST] <SpeakerToMeat> Oh well, I can use ass with -vf subtitles
[20:57:52 CEST] <nicolas17> as long as it doesn't treat the subtitles like ass
[20:57:59 CEST] <nicolas17> which means something else :>
[20:58:13 CEST] <SpeakerToMeat> Sometimes they deserve to be treated so
[20:58:19 CEST] <SpeakerToMeat> Not all subtitles are created equal
[20:58:40 CEST] <nicolas17> https://pbs.twimg.com/media/DD2qQLnUwAEUnh6.jpg:large
[20:59:07 CEST] <kepstin> i'm still annoyed that they went with image subtitles on bd, but I guess the alternative would have been something weird that was built in java.
[20:59:15 CEST] <furq> i would've thought it'd be easier to just use srt and set the options in the filter
[20:59:27 CEST] <furq> unless you need specific text positioning or effects or whatnot
[20:59:47 CEST] <SpeakerToMeat> nicolas17: Was gonna say "wat" but you reminded me of this: http://www.bathroomreader.com/wp-content/uploads/2010/04/ToiletComputerTerms1.jpg
[20:59:54 CEST] <furq> i guess it's easier to preview the options if they're in the subtitle file
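A hedged example of what furq is suggesting (file names and style values are made up): the subtitles filter can take a plain SRT and force ASS style fields onto it via force_style.

    ffmpeg -i input.mkv -vf "subtitles=subs.srt:force_style='FontName=DejaVu Sans,FontSize=28,PrimaryColour=&H00FFFFFF'" \
           -c:a copy output.mkv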
[21:00:42 CEST] <nicolas17> okay! how do I encode an mp4/x264
[21:01:00 CEST] <nicolas17> *with the API* :D
[21:04:45 CEST] <kerio> furq: i like ass
[21:06:29 CEST] <DHE> we know
[21:09:02 CEST] <SpeakerToMeat> Is there a list of subtitle style options somewhere?
[21:10:33 CEST] <kepstin> SpeakerToMeat: probably the aegisub docs are a good place to start?
[21:11:18 CEST] <kepstin> the closest thing to a "spec" for ass, as far as I know, is this old document: http://www.cccp-project.net/stuff/ass-specs.pdf which might be out of date compared to current renderers? not sure.
[21:12:27 CEST] <furq> http://webcache.googleusercontent.com/search?q=cache:Ynjy50wO8WYJ:moodub.free.fr/video/ass-specs.doc+&cd=5&hl=en&ct=clnk&gl=uk
[21:12:50 CEST] <furq> there's also a wiki page on libass' github repo, but it's for ass v5
[21:13:09 CEST] <JEEB> the ASS "spec" in reality ended up being VSHitler
[21:13:09 CEST] <furq> i'm pretty sure ffmpeg only accepts v4 options
[21:14:05 CEST] <furq> https://github.com/FFmpeg/FFmpeg/blob/master/libavcodec/ass.c#L45-L53
[21:14:08 CEST] <furq> that's probably better
[21:14:09 CEST] <JEEB> kepstin: ASS did get various additions that had limited usage or implementation
[21:14:25 CEST] <JEEB> but in the end ASSv4 seems to be the canonical thing + how VSHitler filters it
[21:14:43 CEST] <JEEB> libass tries to emulate VSHitler - with exceptions where it's not feasible/possible
[21:15:02 CEST] <nicolas17> okay I'll dive into the docs and see if I can manage to do encoding by myself
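A hedged, much-simplified sketch of encoding with libx264 through the libavcodec API of that era (raw Annex B output only, no mp4 muxing, error handling mostly omitted; names and numbers are illustrative, not from the log):

    #include <stdio.h>
    #include <libavcodec/avcodec.h>
    #include <libavutil/opt.h>

    int encode_demo(const char *outname)
    {
        avcodec_register_all();                         /* still needed on the 3.x releases of the time */
        AVCodec *codec = avcodec_find_encoder_by_name("libx264");
        AVCodecContext *ctx = avcodec_alloc_context3(codec);
        ctx->width     = 1280;
        ctx->height    = 720;
        ctx->time_base = (AVRational){1, 30};
        ctx->pix_fmt   = AV_PIX_FMT_YUV420P;
        av_opt_set(ctx->priv_data, "crf", "23", 0);     /* x264-specific rate control option */
        if (avcodec_open2(ctx, codec, NULL) < 0)
            return -1;

        AVFrame *frame = av_frame_alloc();
        frame->format = ctx->pix_fmt;
        frame->width  = ctx->width;
        frame->height = ctx->height;
        av_frame_get_buffer(frame, 0);

        AVPacket *pkt = av_packet_alloc();
        FILE *out = fopen(outname, "wb");

        for (int i = 0; i < 90; i++) {                  /* 3 seconds of frames */
            av_frame_make_writable(frame);              /* the encoder may still reference old buffers */
            /* ... fill frame->data[]/linesize[] with actual picture data here ... */
            frame->pts = i;
            avcodec_send_frame(ctx, frame);
            while (avcodec_receive_packet(ctx, pkt) == 0) {
                fwrite(pkt->data, 1, pkt->size, out);   /* raw H.264 bitstream */
                av_packet_unref(pkt);
            }
        }
        avcodec_send_frame(ctx, NULL);                  /* flush delayed packets */
        while (avcodec_receive_packet(ctx, pkt) == 0) {
            fwrite(pkt->data, 1, pkt->size, out);
            av_packet_unref(pkt);
        }

        fclose(out);
        av_packet_free(&pkt);
        av_frame_free(&frame);
        avcodec_free_context(&ctx);
        return 0;
    }

Putting the result in an .mp4 would additionally need a libavformat muxer (or a second pass with the ffmpeg CLI).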
[21:15:35 CEST] <JEEB> I think things like Aegisub mostly kept ASS from becoming irrelevant
[21:15:43 CEST] <kepstin> yeah, it's one of those crazy things where an internal format for some subtitle editor was extended and became a defacto-standard subtitle working format for multiple tools
[21:15:45 CEST] <JEEB> I mean, heck, there's at least one major VOD service which uses ASS
[21:15:51 CEST] <JEEB> for the subtitles
[21:33:04 CEST] <SpeakerToMeat> kerio: You're not razor1000, right?
[21:36:00 CEST] <kerio> no
[21:39:13 CEST] <SpeakerToMeat> Hmmmm how can I tell ffmpeg not to convert color space between in and out?
[21:39:49 CEST] <durandal_1707> SpeakerToMeat: post full command line output
[21:41:02 CEST] <SpeakerToMeat> the input is in xyz12 - full command line, or related output?
[21:41:10 CEST] <SpeakerToMeat> blergh
[21:41:55 CEST] <SpeakerToMeat> ffmpeg -framerate 24 -i ../subs-in/image%06d.j2c -vf subtitles=/home/lars/Desktop/Prueba-Sub.srt image%06d.j2c
[21:42:15 CEST] <SpeakerToMeat> It's trying to convert from xyz (xyz12le), I want the same output, no conversion... maybe an encoder setting
[21:48:13 CEST] <furq> did you build ffmpeg with libopenjpeg
[21:48:29 CEST] <furq> the builtin j2k encoder doesn't support xyz
[21:49:34 CEST] <SpeakerToMeat> hmm
[21:59:26 CEST] <SpeakerToMeat> ok --enable-libopenjpeg
[22:00:40 CEST] <SpeakerToMeat> It was compiled with openjpeg
[22:01:23 CEST] <furq> try with -c:v libopenjpeg
[22:02:15 CEST] <furq> you might want to check -h encoder=libopenjpeg
[22:02:25 CEST] <furq> i don't have a build with it but apparently there's some dcp compat options in there
[22:03:02 CEST] <SpeakerToMeat> furq: Ok, but first I need to find out why force_style in the subtitles filter treats PrimaryColour - which is in both the online documentation and the ASS specs - as invalid
[22:03:13 CEST] <SpeakerToMeat> furq: thanks
[22:09:07 CEST] <SpeakerToMeat> furq: It still argues that "Color conversion not implemented for xyz12le" setting the encoder is ok but how do I set the decoder?
[22:10:32 CEST] <furq> -c:v libopenjpeg before -i
[22:11:43 CEST] <kepstin> SpeakerToMeat: the issue is probably that the subtitles filter can't render onto xyz12le video, you'll probably have to convert to yuv or rgb.
[22:12:01 CEST] <kepstin> that warning is coming from in the libavfilters/drawutils code
[22:12:03 CEST] <SpeakerToMeat> ugh precedence of parameters
[22:12:13 CEST] <kepstin> which is used in the subtitles filter
[22:12:15 CEST] <furq> well you'd need it in both places
[22:12:19 CEST] <furq> but yeah kepstin is probably right
[22:12:31 CEST] <SpeakerToMeat> kepstin: Can I tell it not to convert and just use it in the colors I give it? (I've converted the colors beforehand)
[22:12:48 CEST] <SpeakerToMeat> Maybe with the ass filter
[22:13:23 CEST] <SpeakerToMeat> Actually it was the decoder whining
[22:13:35 CEST] <kepstin> SpeakerToMeat: I thought ass colors are all in 32bit BGR/BGRA, I don't think you can preconvert them?
[22:13:58 CEST] <SpeakerToMeat> hmm, I don't entirely like this; if the subs are in rgb/yuv and there's no whining, there's probably xyz->rgb->xyz going on in the background and I don't like that
[22:14:38 CEST] <SpeakerToMeat> kepstin: Not all formats support xyz, many times when you work with xyz you're just using rgb value holders to hold the xyz coordinates.
[22:15:39 CEST] <kepstin> SpeakerToMeat: anyways, the filter is going "draw this stuff using this rgba color", then the drawutils code is going "I don't know how to convert that color to the frame pixel format"
[22:15:42 CEST] <SpeakerToMeat> It's 3 values same as RGB, same possibilities of bit widths (8, 10, 12, cinema is 12 bits only)
[22:15:50 CEST] <SpeakerToMeat> hmmm
[22:16:12 CEST] <SpeakerToMeat> I wonder if it'll work if I convert the subs to alpha channeled images and composit them instead
[22:16:28 CEST] <kepstin> SpeakerToMeat: yeah, that's a possibility.
[22:17:18 CEST] <kepstin> hmm, there's no XYZ pixel formats with alpha channels in ffmpeg, tho
[22:18:05 CEST] <SpeakerToMeat> If I could just tell it "don't touch colors, no matter what you think color is, don't change colors at all" and be done with it
[22:18:27 CEST] <kepstin> I suppose you'll have gamut problems converting to either a yuv or rgb format?
[22:18:39 CEST] <kepstin> if you use a 16bit temp format the precision loss should be minimal
[22:19:11 CEST] <SpeakerToMeat> yes
[22:19:44 CEST] <SpeakerToMeat> There's formulas to convert back and forth taking gamut in account but I want to lose as little as possible so I don't want to go converting back and forth :/
[22:21:56 CEST] <kepstin> SpeakerToMeat: about the only way I can think to to get it to work would be to add XYZ12 pixel format support to the ffmpeg drawutils
[22:22:10 CEST] <kepstin> and it's a strange packed format, so that'll probably be a pain :/
[22:22:51 CEST] <kepstin> hmm, well, 36 bits (12x3), so at least it's an integer number of bytes per pixel. might not be that bad actually
[22:25:01 CEST] Action: kepstin doesn't know anything about the format, he's just looking at the pixel format list in ffmpeg headers
[22:25:20 CEST] <SpeakerToMeat> kepstin: It's definitely doing something odd.... let me show you....
[22:26:19 CEST] <kepstin> SpeakerToMeat: in the case when the color conversion is missing, it defaults to using a color filled with '128' bytes, which should be a neutral grey in both YUV and RGB.
[22:29:06 CEST] <SpeakerToMeat> kepstin: http://imgur.com/a/8Nqia command line: ffmpeg -framerate 24 -c:v libopenjpeg -i ../subs-in/image%06d.j2c -c:v libopenjpeg -profile:v cinema2k -cinema_mode 2k_24 -vf subtitles=/home/lars/Desktop/Prueba-Sub.ass image%06d.j2c
[22:30:04 CEST] <kepstin> impressive.
[22:30:38 CEST] <SpeakerToMeat> Yes, it's so pretty
[22:30:45 CEST] <SpeakerToMeat> so very 80's
[22:33:26 CEST] <SpeakerToMeat> Hmmm the input thinks the color space is yuv444p16le, it's not
[22:33:35 CEST] <SpeakerToMeat> it's xyz12le
[22:33:50 CEST] <kepstin> SpeakerToMeat: full ffmpeg output please?
[22:33:54 CEST] <SpeakerToMeat> I wonder if I can tell the decoder that, I can't use -profile:v before -i
[22:37:35 CEST] <SpeakerToMeat> Any prefered channel paste?
[22:37:56 CEST] <kepstin> just use a pastebin service, doesn't matter too much which one
[22:38:14 CEST] <kepstin> lists a few
[22:38:32 CEST] <SpeakerToMeat> Ok I have a pastebin account but some channels hate it
[22:39:33 CEST] <SpeakerToMeat> https://pastebin.com/M6kmySXC
[22:39:51 CEST] <SpeakerToMeat> It's detecting the input as yuv444p16le
[22:40:45 CEST] <kepstin> no, actually it's detecting the input as rgb48le (12bpp)
[22:41:05 CEST] <kepstin> which.. might actually be ok, if it wasn't *converting* to yuv, but just passed it through
[22:42:00 CEST] <SpeakerToMeat> Huh true I read output as input right
[22:43:08 CEST] <SpeakerToMeat> Ok setting input and output to xyz12le
[22:43:23 CEST] <kepstin> SpeakerToMeat: try throwing a "-pix_fmt rgb48le" output option on there
[22:43:29 CEST] <SpeakerToMeat> And there we have the color conversion not implemented, so it is the subtitles.
[22:43:34 CEST] <kepstin> to stop it from adding a conversion to yuv
[22:44:15 CEST] <SpeakerToMeat> So leave out "same" as in,
[22:44:38 CEST] <kepstin> SpeakerToMeat: if you do the "-pix_fmt rgb48le", please run the command with "-v verbose" so it'll tell you if it's autoinserting any scale/conversion filters
[22:46:25 CEST] <SpeakerToMeat> Ok if I set both the input and output formats to xyz12le it works well (subtitles are horrible but I'll fix that by color converting to xyz by hand the ass file), but if I throw an output format as rgb48le it minces the image
[22:47:15 CEST] <SpeakerToMeat> Andyes there's four auto inserted scalers
[22:47:28 CEST] <kepstin> hmm, you won't be able to fix the subtitle colors - if you're getting that "Color conversion not implemented" then it's ignoring the provided color and using 128,128,128
[22:48:24 CEST] <kepstin> which could of course be fixed by implementing the color conversion :)
[22:48:45 CEST] <kepstin> if the actual rendering is fine other than the color, I don't even think it would be that hard to do
[22:49:00 CEST] <nicolas17> oh ugh, he can't convert the video to RGB first because that's lossy? out of gamut?
[22:49:34 CEST] <SpeakerToMeat> Yes
[22:49:49 CEST] <kepstin> SpeakerToMeat: can you paste the full output of the "-pix_fmt xyz12le" encode with "-v verbose"? I'm just curious about how exactly it set up the filter chain.
[22:50:03 CEST] <SpeakerToMeat> I want to minimize the conversions
[22:50:04 CEST] <SpeakerToMeat> kepstin: sure
[22:50:42 CEST] <kepstin> SpeakerToMeat: the only place the code should be doing a color conversion is taking the 32bit bgra colors from the ass file and converting them to the pixel format of the actual frame.
[22:52:05 CEST] <SpeakerToMeat> kepstin: https://pastebin.com/PFzmrNHR
[22:52:23 CEST] <SpeakerToMeat> kepstin: If I only could -pix_fmt the subtitle filter too :D
[22:52:49 CEST] <kepstin> SpeakerToMeat: that paste is missing parts...?
[22:53:50 CEST] <SpeakerToMeat> Hmm, sorry, it only shows input #0 and no other chain info
[22:54:09 CEST] <mkrs> Does anyone know if the -t (duration) option uses different units now than v2.0? Seems to be seconds for me now. I'm trying to make sense of a script that ran against the older build.
[22:54:23 CEST] <furq> it's always been a duration specification
[22:54:34 CEST] <kepstin> SpeakerToMeat: can you just paste the *entire* output from the command? That output seems to be missing a fair bit
[22:55:36 CEST] <furq> 12345 or 205:45 or 3:25:45 should all work
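For reference, a minimal C sketch of how such duration strings resolve to times, using libavutil's av_parse_time() (which, to my understanding, is the parser behind the CLI's duration options); the file name and the pkg-config build line are assumptions about a typical setup:

    #include <stdio.h>
    #include <inttypes.h>
    #include <libavutil/parseutils.h>

    /* Build (assumed): gcc parse_t.c $(pkg-config --cflags --libs libavutil) -o parse_t */
    int main(void)
    {
        const char *examples[] = { "12345", "205:45", "3:25:45", "00:01:30.500" };

        for (size_t i = 0; i < sizeof(examples) / sizeof(examples[0]); i++) {
            int64_t us;
            /* last argument 1 = parse as a duration; result is in microseconds */
            if (av_parse_time(&us, examples[i], 1) < 0) {
                fprintf(stderr, "could not parse '%s'\n", examples[i]);
                continue;
            }
            printf("%-12s -> %" PRId64 " us (%.3f s)\n", examples[i], us, us / 1e6);
        }
        return 0;
    }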
[22:56:01 CEST] <SpeakerToMeat> kepstin: not much really https://pastebin.com/VzgGk23f
[22:57:19 CEST] <kepstin> SpeakerToMeat: hmm, you're right. Well - good news, that filter chain is *not* doing any extra colorspace conversions, the subtitles are being drawn directly on the xyz12 frames.
[22:57:36 CEST] <SpeakerToMeat> AH good
[22:57:49 CEST] <kepstin> so if you just implement the missing colour conversion in libavfilter/drawutils.c, it should work perfectly.
[22:58:30 CEST] <mkrs> furq, thanks.
[23:00:51 CEST] <kepstin> SpeakerToMeat: in the "ff_draw_color" function in that file
[23:06:03 CEST] <SpeakerToMeat> hmmm I wonder if I should just treat it like rgb
[23:07:57 CEST] <SpeakerToMeat> I worry that color->comp is using either u8 or u16
[23:13:50 CEST] <kepstin> SpeakerToMeat: in this case, you'd want to set the u16, since the 12 bits per component in xyz12 are stored in 16 bits
[23:14:29 CEST] <kepstin> so you'd use the rgb with depth=12 as a reference
[23:16:26 CEST] <cbsrobot> SpeakerToMeat: why do you want to hardsub j2c files ?
[23:16:59 CEST] <kepstin> cbsrobot: because cinema projectors, apparently.
[23:17:03 CEST] <cbsrobot> can't you just add interop or smpte subtitles to your DCP ?
[23:17:06 CEST] <cbsrobot> way simpler
[23:17:38 CEST] <nicolas17> we went through that already
[23:17:56 CEST] <cbsrobot> I mean if you have time to implement xyz support in drawutils - go for it
[23:18:15 CEST] <nicolas17> <SpeakerToMeat> But, sometimes the projector has Texas Instruments' cinecanvas inside to render them, and supposedly sometimes not, and the media server/media block doesn't always have an internal renderer
[23:18:17 CEST] <nicolas17> <SpeakerToMeat> Plus, some sites haven't updated to SMPTE-able hardware/firmware, so doing subtitles in digital cinema is a minefield, and sometimes festivals or some/many distributors prefer burned-in subs for that reason
[23:18:35 CEST] <kepstin> it's just a packed 2 byte per channel format, so it looks like it already works with drawutils other than the missing color conversion.
[23:19:15 CEST] <cbsrobot> I think most problems come from not sanitizing the subtitles
[23:20:00 CEST] <cbsrobot> meaning you have to convert all characters not available in your font file
[23:20:24 CEST] <SpeakerToMeat> yeah
[23:20:44 CEST] <SpeakerToMeat> cbsrobot: Sometimes hardsubs are requested
[23:21:36 CEST] <SpeakerToMeat> cbsrobot: Yes, and most of the world has moved to smpte actually, plus the isdcf has a strong push to go to smpte, but distributors and film festivals are still reticent as they've been burned in the past
[23:21:47 CEST] <cbsrobot> I mean I create multiple cpl dcps - mostly one video file, multiple language audio tracks, and then I add a few subtitles to it
[23:22:04 CEST] <cbsrobot> so I end up with one dcp for all kind of festivals
[23:22:16 CEST] <cbsrobot> never heard of a festival refusing it
[23:22:18 CEST] <SpeakerToMeat> We do too, but some people DO NOT accept those or they strongly request hardsubs
[23:22:21 CEST] <SpeakerToMeat> Ok
[23:22:43 CEST] <cbsrobot> maybe some convert it themselves to a usable format
[23:22:54 CEST] <cbsrobot> but hey - that's their problem
[23:23:52 CEST] <cbsrobot> I heard about TI v1/v2 subtitle problems and that's why dolby etc. created a subtitle overlay in software
[23:24:42 CEST] <cbsrobot> btw - I'm curious - which festival forces you to send hardsubs ?
[23:29:29 CEST] <SpeakerToMeat> cbsrobot: Most accept xml subs, but most suggest burned in.
[23:29:45 CEST] <cbsrobot> xml subs are interop
[23:30:11 CEST] <SpeakerToMeat> mxf subs in smpte are generated from xml too, and it's just the xml and ttf wrapped in an mxf
[23:30:19 CEST] <cbsrobot> yeah
[23:30:22 CEST] <SpeakerToMeat> plus the possibility of encryption
[23:30:25 CEST] <cbsrobot> that's what I meant
[23:31:03 CEST] <cbsrobot> so if you say "most accept xml subs" - interop or smpte then ?
[23:32:39 CEST] <SpeakerToMeat> You'd have to discuss it with their technical staff; the documents aren't always the most precise. For example the Berlinale (which suggests burned-in) will accept both interop and smpte, but mentions subtitles as xml
[23:33:41 CEST] <cbsrobot> send them smpte - and if they complain tell them to upgrade their hardware :)
[23:33:54 CEST] <SpeakerToMeat> Yeah, that always goes over well
[23:33:56 CEST] <cbsrobot> eh firmware I mean
[23:33:58 CEST] <SpeakerToMeat> :P
[23:34:25 CEST] <SpeakerToMeat> Well locally my country arrived at DCPs pretty late, so all our equipment is smpte ready, so for local we mostly do smpte
[23:34:31 CEST] <cbsrobot> did you ever have to send a dcp to the oscars ?
[23:34:51 CEST] <SpeakerToMeat> Nope
[23:34:57 CEST] <cbsrobot> they are the real pain!
[23:35:07 CEST] <SpeakerToMeat> But we had to send DCPs to braindead distributors.
[23:35:27 CEST] <nicolas17> speaking of braindead
[23:35:30 CEST] <cbsrobot> hehe
[23:35:46 CEST] <cbsrobot> barcos do not like interop subtitles
[23:35:48 CEST] <SpeakerToMeat> One failed the QA because there was one duplicate frame (from source) and there was some text on a cell phone screen that wasn't removed (the dcp wasn't textless).... they offered to fix the DCP, at a cost
[23:35:58 CEST] <cbsrobot> but most vendors are ok with smpte
[23:36:02 CEST] <cbsrobot> in my experience
[23:36:25 CEST] <nicolas17> if I do extern "C" { #include <libavformat/avformat.h> } KDevelop *sometimes* stops highlighting ffmpeg functions :/
[23:36:46 CEST] <cbsrobot> lol
[23:36:51 CEST] <SpeakerToMeat> cbsrobot: Odd, we've not done many interops locally, but we've not had troubles with Barcos in general. The only time we had trouble was with an old-ass Dolby DSS-220
[23:37:26 CEST] <nicolas17> I have to comment out the extern "C", reload file, and it works... and if I un-comment it still works
[23:38:02 CEST] <SpeakerToMeat> kepstin: should I be safe setting only the u16? or do I need to somehow set the u8 too?
[23:38:32 CEST] <SpeakerToMeat> cbsrobot: And btw, this is only a test to hardsub an existing j2c set rather than producing a new set from the dcdm
[23:38:33 CEST] <kepstin> SpeakerToMeat: I think it's a union, so you'd set just the one that's used.
[23:39:14 CEST] <cbsrobot> SpeakerToMeat: Well maybe I did some weird cpls then ....
[23:40:41 CEST] <cbsrobot> SpeakerToMeat: are you working on drawutils ?
[23:41:30 CEST] <SpeakerToMeat> cbsrobot: It could be... media server/block-wise, dolbys are pretty finicky about standards, doremis tend to support almost everything :D in fact we had to fix a DCP that had been done in 23.976 (on smpte)..... and a bloody doremi played it ok!
[23:41:55 CEST] <SpeakerToMeat> cbsrobot: I'm doing a quick, dirty, horrible "test this shit" hack. If it works, I might consider doing a proper xyz->rgb converter
[23:43:44 CEST] <SpeakerToMeat> cbsrobot: Btw did you see the IMS3000 finally has usb3? Ingest should take less than half your lifetime now!
[23:44:00 CEST] <kepstin> SpeakerToMeat: the quick&dirty hack is to just do color->comp[i].u16[0] = rgba[i] << 4; and put xyz colors in the rgb values in the .ass file.
[23:44:26 CEST] <SpeakerToMeat> kepstin: Yeppers that's what I am doing
[23:44:34 CEST] <SpeakerToMeat> kepstin: I based the loop off the rgb if
[23:45:09 CEST] <SpeakerToMeat> Though I didn't think about the bit shift... duh
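To make the quick hack above concrete, here is a hedged standalone sketch. The struct only mimics the relevant u8/u16 union of FFDrawColor and is not the real libavfilter type; the function name is made up, and the shift follows kepstin's suggestion of widening 8-bit subtitle values to 12 bits stored in 16:

    #include <stdint.h>

    /* Hypothetical standalone sketch of the quick hack described above: widen the
     * four 8-bit values the subtitles filter hands to ff_draw_color() into the
     * 12-bits-stored-in-16 components used for xyz12, instead of the current
     * all-128 grey fill.  A proper fix would convert the sRGB subtitle color to
     * XYZ rather than reinterpret it. */
    struct fake_draw_color {
        union {
            uint8_t  u8[4];
            uint16_t u16[4];
        } comp[4];
    };

    void xyz12_hack_set_color(struct fake_draw_color *color, const uint8_t rgba[4])
    {
        for (int i = 0; i < 4; i++)
            color->comp[i].u16[0] = (uint16_t)rgba[i] << 4;  /* 8 bit -> 12 bit, in 16 */
    }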
[23:46:06 CEST] Action: SpeakerToMeat hits his head against a wall
[23:47:28 CEST] <SpeakerToMeat> It's fun when you're trying to build the source based on the package you use.... from a backport
[23:47:35 CEST] <SpeakerToMeat> I need to get half the deps manually from the backports
[23:50:17 CEST] <furq> this probably isn't what you want to hear right now, but you should probably update to debian 9
[23:50:49 CEST] <SpeakerToMeat> furq: With how my system is set up with third party stuff, I think I'd rather jump off the balcony
[23:51:36 CEST] <cbsrobot> SpeakerToMeat: opening windows or not - that's the question
[23:53:33 CEST] <SpeakerToMeat> cbsrobot: I almost never use windows. Much less now with Resolve 14 out
[23:53:46 CEST] <SpeakerToMeat> cbsrobot: Though I have to boot into centos, resolve is buggy on debian
[23:54:03 CEST] <SpeakerToMeat> cbsrobot: The windows person is dad, he's the Premiere expert in the family
[23:54:23 CEST] <SpeakerToMeat> I'm just IT (mostly :D) he's the cinema person :D
[23:54:31 CEST] <cbsrobot> SpeakerToMeat: It was meant to be a joke about you jumping off the balcony
[23:54:47 CEST] <SpeakerToMeat> ...
[23:54:50 CEST] <SpeakerToMeat> oh
[23:54:54 CEST] Action: SpeakerToMeat sighs
[23:54:58 CEST] <cbsrobot> lol
[23:55:03 CEST] <SpeakerToMeat> Long day... I'll go with that excuse, long and tiring day
[23:55:50 CEST] <SpeakerToMeat> Ok, building my abomination of a ffmpeg now
[23:56:18 CEST] <furq> meanwhile
[23:56:20 CEST] <furq> http://i.imgur.com/K1sZ5VQ.png
[23:56:39 CEST] <SpeakerToMeat> kepstin: if I ever do a correct color conversion for drawsomething.c, should I take gamut conversion into account? I'm not sure if XYZ per se is a different gamut, only its most common use (digital cinema) is.
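As a rough illustration of where a gamut step would sit in such a conversion, a hedged per-pixel sketch follows. Every constant in it (the 2.6 decoding gamma commonly used for DCI XYZ, an approximate XYZ(D65)->linear-sRGB matrix, a naive clip to [0,1] for out-of-gamut values, and a 2.2 re-encoding gamma) is an assumption about a typical conversion path, not something specified in this log:

    #include <math.h>
    #include <stdint.h>

    /* Rough sketch of a single-pixel DCI-XYZ -> RGB conversion.  xyz[] is assumed
     * to already hold plain 12-bit code values (0..4095); how xyz12le packs them
     * into 16-bit words is left out.  A real converter (e.g. swscale's xyz12 path)
     * would use LUTs and choose primaries/transfer functions deliberately. */
    void xyz12_to_rgb12(const uint16_t xyz[3], uint16_t rgb[3])
    {
        double lin[3], out[3];

        /* 12-bit code values -> linear light (assumed 2.6 decoding gamma) */
        for (int i = 0; i < 3; i++)
            lin[i] = pow(xyz[i] / 4095.0, 2.6);

        /* XYZ -> linear sRGB (approximate standard matrix); this is where the
         * gamut question lives: X'Y'Z' can encode colors sRGB (or P3) cannot. */
        out[0] =  3.2406 * lin[0] - 1.5372 * lin[1] - 0.4986 * lin[2];
        out[1] = -0.9689 * lin[0] + 1.8758 * lin[1] + 0.0415 * lin[2];
        out[2] =  0.0557 * lin[0] - 0.2040 * lin[1] + 1.0570 * lin[2];

        /* naive gamut handling: clip, then re-encode with an assumed 2.2 gamma */
        for (int i = 0; i < 3; i++) {
            if (out[i] < 0.0) out[i] = 0.0;
            if (out[i] > 1.0) out[i] = 1.0;
            rgb[i] = (uint16_t)lround(pow(out[i], 1.0 / 2.2) * 4095.0);
        }
    }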
[00:00:00 CEST] --- Wed Jul  5 2017

