[Ffmpeg-devel-irc] ffmpeg.log.20170504

burek burek021 at gmail.com
Fri May 5 03:05:01 EEST 2017

[00:08:52 CEST] <bsat98> i can't figure out if named pipes as input to ffmpeg are or are not supported. in a few older threads i saw they weren't, and now I see some success stories, but i'm just not able to get it working
[00:09:23 CEST] <bsat98> i'm using: ffmpeg -y -i tmp.mp4 -ss "00:00:00.000" -vframes 1 t.png
[00:09:34 CEST] <bsat98> and i get an error: `stream1, offset 0x20: partial file`
[00:09:44 CEST] <bsat98> of course the command works when not using named pipe
[00:20:27 CEST] <llogan> bsat98: how do we duplicate the issue?
[00:27:57 CEST] <bsat98> llogan: `mkfifo tmp.mp4 && curl something.mp4 > tmp.mp4` and then in a separate term: `ffmpeg -y -i tmp.mp4 -ss "00:00:00.000" -vframes 1 t.png`
[00:45:40 CEST] <bsat98> actually, even if i just use cat and pipe it with -i pipe:0 i get exactly the same error
[00:54:47 CEST] <llogan> bsat98: don't use mp4 to pipe. or try relocating the moov atom before piping: ffmpeg -i in.mp4 -movflags +faststart out.mp4
[01:10:09 CEST] <bsat98> llogan: it works thanks so much
[01:12:55 CEST] <llogan> bsat98: should have been: ffmpeg -i input.mp4 -c copy -movflags +faststart out.mp4
[01:16:40 CEST] <bsat98> llogan: that would cause it to do less computation by copying the raw av?
[01:18:26 CEST] <llogan> it stream copies. avoids re-encoding.
[01:20:58 CEST] <bsat98> llogan: i'm scared because the files are already being processed by ffmpeg somewhere else, but without -c copy using the command you gave me it reduces the filesize by half :D
[01:22:04 CEST] <furq> that reencodes it using the default settings
[01:22:13 CEST] <furq> which aren't particularly great quality
[01:22:23 CEST] <bsat98> oh right
[01:22:34 CEST] <furq> also iirc ffmpeg rewrites the file twice if you remux with faststart
[01:22:39 CEST] <furq> a proper mp4 muxer will only write it once
[01:23:02 CEST] <bsat98> furq: are you aware of a better suited tool for this?
[01:23:10 CEST] <furq> if you just need to remux then use l-smash
[01:24:05 CEST] <furq> if they're short files then it's not really worth switching
[01:24:10 CEST] <bsat98> my actual problem is basically
[01:24:50 CEST] <bsat98> users upload videos, and i want to generate thumbnails *on demand* and in different resolutions.
[01:24:58 CEST] <furq> i need to remux a lot of very large files on a vps with slow io, so i really notice the difference
[01:25:15 CEST] <bsat98> so my idea was to pipe the first bit of the file to ffmpeg from a named pipe
[01:25:26 CEST] <bsat98> and then generate it like that, rather than move files around the network etc
[01:25:28 CEST] <furq> if you're processing the file with ffmpeg to begin with, then just add -movflags +faststart to that command
[01:25:36 CEST] <furq> no need to remux at all
[01:25:44 CEST] <bsat98> yes i plan to do that
[01:25:57 CEST] <bsat98> im just wondering if there's a more lightweight approach
[01:26:16 CEST] <bsat98> i have mp4 h264 files only and i just want to read up to the first keyframe from an arbitrary pipe
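A hedged sketch of the workflow bsat98 is describing, pieced together from the commands in this log. The function name, URL and filenames are placeholders, and it assumes the source MP4 was written with `-movflags +faststart` so the moov atom precedes the media data (otherwise the pipe fails with "partial file", as above):

```shell
# Sketch: generate a thumbnail on demand from an MP4 fed through a
# named pipe, without landing the whole file on disk first.
# Assumes a faststart MP4 (moov atom before mdat), so ffmpeg never
# needs to seek backwards in the pipe.
make_thumb() { # args: source-url  output-png
  fifo="$(mktemp -u).mp4"
  mkfifo "$fifo"
  curl -s "$1" > "$fifo" &        # feeder writes the download into the pipe
  ffmpeg -y -i "$fifo" -vframes 1 "$2"   # reader grabs the first frame
  rm -f "$fifo"
}
```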
[01:26:16 CEST] <furq> i didn't think piping mp4 would work at all
[01:26:40 CEST] <furq> unless it's fragmented
[01:27:03 CEST] <bsat98> it wasn't working until i used -movflags +faststart
[01:27:21 CEST] <furq> weird
[01:27:33 CEST] <bsat98> why wouldn't it work though?
[01:27:41 CEST] <bsat98> or i mean why do you assume it wouldn't work
[01:27:42 CEST] <furq> mp4 isn't streamable
[01:27:58 CEST] <bsat98> are you sure about that?
[01:28:06 CEST] <furq> how are you piping it
[01:28:19 CEST] <bsat98> named pipe from gsutil
[01:28:31 CEST] <furq> i'm not sure if i'd trust that
[01:28:46 CEST] <furq> i guess if you're always seeking from the start and the moov atom is at the start then it should work
[01:29:18 CEST] <bsat98> that's what is happening
[01:29:54 CEST] <furq> you can't pipe mp4 out of ffmpeg because it writes the moov atom in a second pass
[01:30:09 CEST] <furq> if you're not using ffmpeg then i don't really know
[01:30:20 CEST] <furq> although either way i'd probably just have the files on a network share
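A fragmented MP4 is the exception furq mentions: asking the mov/mp4 muxer for an empty moov up front and a new fragment at each keyframe avoids the second pass over the output, so it can go to a pipe. A sketch; the flag combination is from the mov/mp4 muxer options, while the function name and `-c copy` choice are illustrative:

```shell
# Sketch: write a pipeable (fragmented) MP4 from ffmpeg.
# empty_moov writes a stub moov immediately, and frag_keyframe starts
# a new moof/mdat fragment at every keyframe, so the muxer never has
# to rewrite the beginning of the output.
pipe_fmp4() { # args: input-file (output goes to stdout)
  ffmpeg -i "$1" -c copy -movflags frag_keyframe+empty_moov -f mp4 pipe:1
}
```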
[01:30:23 CEST] <bsat98> i'm using ffmpeg to transcode everything to mp4
[01:30:27 CEST] <furq> i meant for piping
[01:30:39 CEST] <bsat98> oh right
[01:31:31 CEST] <bsat98> i wonder if -movflags +faststart is going to "work" for every type i transcode
[01:31:34 CEST] <bsat98> i'll have to test :<
[01:31:38 CEST] <furq> that's an output format option
[01:31:43 CEST] <furq> if you're converting to mp4 then it'll always work
[01:31:50 CEST] <bsat98> but is it always going to be the same normalized mp4 format
[01:31:59 CEST] <furq> sure
[01:32:08 CEST] <bsat98> i assumed there might be some subtle differences depending on the input format
[01:32:08 CEST] <furq> unless you're running different commands on different inputs
[01:32:14 CEST] <bsat98> i am
[01:32:26 CEST] <furq> even then i doubt it
[01:32:40 CEST] <bsat98> everything ends up as mp4 with h264 encoding
[01:32:46 CEST] <furq> every mp4 has a moov atom, faststart just moves that to the beginning of the file
[01:32:51 CEST] <bsat98> but the path to that is different depending on the type of file that was input
[01:32:54 CEST] <furq> if there's no moov atom the mp4 won't play at all
[01:33:01 CEST] <bsat98> okay, that's good
[01:33:02 CEST] <bsat98> thanks
[01:33:13 CEST] <furq> as everyone whose camera has run out of battery in the middle of a recording will know
[01:43:52 CEST] <bsat98> furq: so i notice that chrome is able to stream the original non-moov-atom-fixed versions of my mp4s with the help of varnish, is there something i'm missing?
[01:44:38 CEST] <furq> it won't work if you disable range requests in your httpd
[01:44:43 CEST] <bsat98> using range requests and 206s
[01:44:48 CEST] <furq> right
[01:45:04 CEST] <furq> that's a relatively recent thing
[01:45:06 CEST] <bsat98> but shouldn't the mp4 need to be fully consumed
[01:45:18 CEST] <bsat98> in order to decode it properly
[01:45:21 CEST] <furq> no, it just does incrementally larger range requests from the end of the file until it finds the moov atom
[01:45:23 CEST] <bsat98> if the moov atom is at the end
[01:45:26 CEST] <bsat98> ohh
[01:45:32 CEST] <furq> if the moov atom is at the start then that isn't needed
[01:45:45 CEST] <bsat98> yeah im just saying it "works" with it being at the end
[01:45:48 CEST] <bsat98> which surprised me
[01:46:04 CEST] <furq> yeah that's worked for a while now
[01:46:10 CEST] <furq> even ffmpeg will do that now
[01:46:14 CEST] <furq> but it is a bit of a hack
[01:46:51 CEST] <bsat98> so basically chrome is trying to find the moov atom?
[01:46:57 CEST] <furq> pretty much
[01:47:01 CEST] <bsat98> that's why there's like 3-4 reqs
[01:47:03 CEST] <bsat98> makes sense
[01:47:09 CEST] <bsat98> chrome so smart
[01:47:12 CEST] <furq> that's why piping doesn't normally work with mp4, because you can't seek a pipe
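furq's point can be made concrete: an MP4 is a flat sequence of length-prefixed boxes, and a reader on a pipe sees them strictly in file order, so a trailing moov forces either seeking or the backwards range requests described above. A minimal sketch that lists the top-level atoms of a file in order (assumes plain 32-bit box sizes; no 64-bit largesize boxes):

```shell
# Sketch: print each top-level MP4 atom and its size, in file order.
# A demuxer reading from a pipe sees atoms in exactly this order,
# which is why it needs moov to appear before mdat.
list_atoms() { # args: mp4-file
  file=$1
  off=0
  total=$(wc -c < "$file")
  while [ "$off" -lt "$total" ]; do
    # box header: 4-byte big-endian size, then a 4-byte type tag
    len=$(od -An -tu1 -j "$off" -N 4 "$file" |
          awk '{print $1*16777216 + $2*65536 + $3*256 + $4}')
    tag=$(dd if="$file" bs=1 skip=$((off + 4)) count=4 2>/dev/null)
    printf '%s %s\n' "$tag" "$len"
    if [ "$len" -lt 8 ]; then break; fi   # malformed header; stop
    off=$((off + len))
  done
}
```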
[01:47:20 CEST] <bsat98> i see
[02:10:30 CEST] <james9999> this is probably a bad idea because of what was discussed earlier about doc
[02:11:00 CEST] <james9999> but would it be useful to have a bot like fflogger that could show things with commands?
[02:11:07 CEST] <james9999> like !mp4 and it displays something about moov atoms
[02:11:17 CEST] <james9999> i know some channels have bots some don't
[02:12:20 CEST] <furq> !muxer mp4
[02:12:21 CEST] <nfobot> furq: http://ffmpeg.org/ffmpeg-formats.html#mov_002c-mp4_002c-ismv
[02:12:23 CEST] <furq> you mean like that
[02:16:20 CEST] <james9999> yes exactly
[02:16:22 CEST] <james9999> XD
[09:35:10 CEST] <JC_Yang> questions about the AVIOContext api: should the read_packet callback block if no more data is currently available? I've checked the source, and it seems like returning 0 when no data is available results in an EOF conclusion. So read_packet should block when no more data is currently available (but new data is coming), is that right?
[09:35:39 CEST] <c_14> I think it returns EAGAIN?
[09:40:35 CEST] <c_14> though it looks like ffurl_read blocks
[09:47:07 CEST] <JC_Yang> the in-header documentation says nothing about this, so frustrating.
[09:56:31 CEST] <JC_Yang> the only callers are all within aviobuf.c, and the call sites indicate it really should not return 0, so the proper treatment is to block, IIUC.
[10:09:31 CEST] <JC_Yang> anyone please correct me if i'm wrong
[10:35:29 CEST] <Nacht> If I run ffprobe to probe a MPEG-TS file using H.264 coding. How do I know which of the code pages in libavcodec it uses ? I see quite a list of H.264 pages.
[10:35:32 CEST] <Nacht> Is it h264_parser.c ?
[10:36:00 CEST] <Nacht> The reason I'm asking, is that I'm trying to figure out how you're scanning for the NAL units
[10:36:29 CEST] <JC_Yang> code page?
[10:36:55 CEST] <Nacht> as in: https://github.com/FFmpeg/FFmpeg/blob/master/libavcodec/h264_parser.c
[10:39:23 CEST] <JC_Yang> if what you like to know/understand is "how does ffmpeg find the h264 NAL from a TS", I think the right place to go is libavformat
[10:40:16 CEST] <JC_Yang> it is the demuxer in action
[10:47:06 CEST] <Nacht> Cheers, I'll have a look there
[10:49:31 CEST] <crow> is there any ffmpeg bisect documentation? for a bug report i should do a bisect, i know that rls 3.2.4 works and 3.3 doesn't (coredump)
[10:59:24 CEST] <eiro> hello everyone
[10:59:45 CEST] <eiro> can a file have multiple video streams?
[11:00:00 CEST] <JC_Yang> that depends
[11:00:07 CEST] <JC_Yang> on the file format
[11:00:36 CEST] <eiro> outch ...
[11:01:00 CEST] <eiro> ok thanks
[11:01:12 CEST] <JC_Yang> e.g mkv and mpeg-ts can contain multiple video streams
[11:14:00 CEST] <Nacht> Man I really don't get this. I'm looking at the streams of NASA and they just don't make sense. Somehow FFMPEG knows how to handle them though. If I look at the NAL units they send for the first frame for example, I see NAL 9, 7, 8 and then 3 times 6. Which is weird, as you'd expect an IDR frame (5) being the first frame.
[11:22:03 CEST] <JC_Yang> you're investigating the file hex?
[11:24:23 CEST] <Nacht> Aye. I do see the IDR now. Trying to figure out why I got 3 SEI frames now
[11:24:36 CEST] <Nacht> *units
[11:26:57 CEST] <JC_Yang> I think MPEG-TS standard should be your reference
[11:28:46 CEST] <Nacht> Already got through the MPEG-TS spec. I'm trying to split the TS files, although I noticed not all encoders set the RAI when an IDR frame is present. So it's best to just analyse the H264 and look for the IDR there
[11:28:56 CEST] <jkqxz> That still isn't the end of the first access unit.  You have AUD, SPS, PPS, 3xSEI - there still isn't any frame data yet, presumably the IDR slices follow that.
[11:29:41 CEST] <Nacht> Yeah I found the IDR as well, 00 00 01 25
[11:29:48 CEST] <JC_Yang> http://www.itu.int/rec/T-REC-H.222.0   there are free docs (which are marked as superseded, but actually enough for your study)
[11:36:49 CEST] <faLUCE> Hello. With these settings: ultrafast preset, size=640x480, gopsize=5, bitrate= default bitrate, 0 bframes, I have a latency of about 200ms, for the h264 encoder. Can I reduce it more?
[11:45:10 CEST] <thebombzen_> faLUCE: not with the ffmpeg CLI
[11:45:16 CEST] <thebombzen_> you can use -tune zerolatency though
[11:45:24 CEST] <thebombzen_> that can improve it
[11:45:51 CEST] <thebombzen_> but sub-200 isn't really possible with ffmpeg's CLI because there's buffers you can't control
[11:56:55 CEST] <faLUCE> thebombzen_: yes, I just set zerolatency and FINALLY, I can receive a http-mpegts-h264 0latency stream
[11:57:17 CEST] <faLUCE> thebombzen_: I used the API
[11:57:18 CEST] <thebombzen_> it won't be zero
[11:57:25 CEST] <thebombzen_> but it'll be very low yea
[11:57:33 CEST] <thebombzen_> also I recommend superfast if you can
[11:57:44 CEST] <thebombzen_> it's a huge step up from ultrafast
[11:57:50 CEST] <faLUCE> thebombzen_: is it faster than ultrafast?
[11:57:55 CEST] <faLUCE> ok thanks, I'll try it
[11:57:57 CEST] <BtbN> the preset has no impact on the latency, as long as your CPU can handle it
[11:58:16 CEST] <thebombzen_> BtbN: well, somewhat
[11:58:31 CEST] <thebombzen_> if you have -tune zerolatency set, yes
[11:58:48 CEST] <BtbN> why would you not have that set if you want zero latency?
[11:58:54 CEST] <faLUCE> now I have only the network latency, which is about 0.1 seconds. I obtained it by avoiding avformat_find_stream_info(), and setting the stream's params manually
[11:59:21 CEST] <thebombzen_> also superfast is not faster. it's one step slower than ultrafast, but if you can handle realtime encoding it's good enough
[11:59:46 CEST] <thebombzen_> but superfast looks a lot better
[12:00:30 CEST] <thebombzen_> ultrafast is terrible visual quality
[12:00:37 CEST] <thebombzen_> I'd recommend trying superfast first
[12:02:13 CEST] <Nacht> Right, I found my error. The 3 SEI units in this stream are what break my code. Due to the 188-byte packet size of MPEG-TS, the IDR unit got pushed to the next packet.
[12:02:31 CEST] <Nacht> Back to the drawingboard
[12:28:32 CEST] <thebombzen_> BtbN: by the way, is there any planned support for hevc_nvenc in OBS's simple output mode
[12:28:37 CEST] <thebombzen_> currently it only does h264_nvenc
[12:37:35 CEST] <heftig1> what controls the input buffer size (the one that -re reads from for pacing)?
[12:51:03 CEST] <BtbN> thebombzen_, ask the OBS people. As no streaming service is going to support HEVC, I doubt it.
[12:58:14 CEST] <heftig1> how does HEVC compare to AVC at the same encoding speed?
[12:58:24 CEST] <heftig1> assuming x264 and x265
[13:01:10 CEST] <heftig1> i think it's worse, and that's why no streaming service is interested in HEVC yet (aside from the effort needed for the whole pipeline to support another codec)
[13:01:17 CEST] <BtbN> You will have trouble getting HEVC to encode at the same speed as h264
[13:03:57 CEST] <BtbN> The streaming sites are not interested because some people have trouble even decoding h264 in real time.
[13:04:14 CEST] <BtbN> And browsers show no sign of supporting HEVC playback, let alone hardware acceleration.
[13:05:31 CEST] <Mandevil> HEVC patents situation is still a mess.
[13:13:07 CEST] <tp_> netflix is using HEVC as well
[13:14:56 CEST] <Mandevil> They stream in HEVC?
[13:15:04 CEST] <tp_> yes
[13:15:12 CEST] <Mandevil> How do the clients play the streams?
[13:15:16 CEST] <Mandevil> Flash?
[13:15:54 CEST] <tp_> through their app, it's for 4K content only i think, and it must be for HDR content too
[13:16:24 CEST] <Mandevil> So they have to have Windows app, MacOS app...?
[13:18:49 CEST] <tp_> i dont think they care much for that, but they have an app for android tv
[13:19:06 CEST] <Nacht> Aren't they using VP9 ?
[13:19:09 CEST] <Nacht> https://medium.com/netflix-techblog/more-efficient-mobile-encodes-for-netflix-downloads-625d7b082909
[13:26:01 CEST] <thebombzen_> BtbN: yea but OBS is also used for gameplay recording.
[13:26:09 CEST] <thebombzen_> also I asked you cause you pushed the last commit here: https://github.com/jp9000/obs-studio/blob/master/plugins/obs-ffmpeg/obs-ffmpeg-nvenc.c
[13:26:15 CEST] <tp_> they are using a lot of codecs, probably VP9 as well
[13:27:28 CEST] <thebombzen_> I use hardware encoding to record gameplay and it'd be nice if OBS allowed me to use hevc instead of h264 because then perhaps I won't have 120 Mbps recordings
[13:38:33 CEST] <Mandevil> thebombzen_: have you tried Intel QSV?
[13:38:43 CEST] <Mandevil> thebombzen_: I hear it encodes HEVC really fast.
[13:39:22 CEST] <Mandevil> (you need Skylake or newer CPU, though)
[13:39:45 CEST] <thebombzen_> I have a 6700k and I didn't know you could do that with OBS
[13:40:04 CEST] <Mandevil> How does OBS do its encoding?
[13:40:10 CEST] <Mandevil> You can use ffmpeg to do it, right?
[13:41:21 CEST] <thebombzen_> if you use ffmpeg's advanced feature you don't get a replay buffer
[13:41:45 CEST] <thebombzen_> I could use ffmpeg's advanced for hevc_nvenc as well but then no replay buffer
[13:42:06 CEST] <Mandevil> No idea what is replay buffer.
[14:08:55 CEST] <faLUCE>  <thebombzen_> but sub-200 isn't really possible with ffmpeg's CLI because there's buffers you can't control  <--- I did not understand that. What is sub-200 ?
[14:13:54 CEST] <DHE> latency in milliseconds I assume
[14:15:21 CEST] <faLUCE> why is it not possible, with CLI, if I set zerolatency ? which are the other buffers?
[14:19:56 CEST] <faLUCE> In addition: I see that avformat_find_stream_info() adds latency, because it buffers frames in the muxer while finding this info. Is there a way to discard these frames (--> flush the buffer) after the info is obtained, so that av_read_frame is called only for the next frames?
[14:24:50 CEST] <faLUCE> do I have to use custom I/O for that, or is there a function I can easily call to flush the demuxer's buffer?
[15:05:25 CEST] <superware> I have an h264 video over UDP (no container format). FFmpeg can decode it without problem, but when I try to copy the stream to an mp4 output file with av_interleaved_write_frame, the mp4 plays back all at once without any timing. I guess this is because all packets have AV_NOPTS_VALUE for pts and dts. any ideas how to work around this?
[15:09:00 CEST] <DHE> a couple of things to try. force your own framerate with -r on the input stream
[15:13:09 CEST] <superware> I'm using the ffmpeg library directly, not through ffmpeg.exe :|
[15:16:27 CEST] <JEEB> so what is actually getting written in as timestamps?
[15:16:30 CEST] <JEEB> have you checked?
[15:16:42 CEST] <JEEB> L-SMASH's boxdumper f.ex. can do it
[15:16:47 CEST] <JEEB> `boxdumper --box file`
[15:16:56 CEST] <JEEB> (and use less or something else to scroll it)
[15:19:25 CEST] <superware> well, if the packet I'm writing using av_interleaved_write_frame has AV_NOPTS_VALUE for pts/dts, and when I play the mp4 (through ffplay, VLC, Windows Media Player) it renders all frames at once (no timing) so I guess pts/dts are actually written as AV_NOPTS_VALUE
[15:19:52 CEST] <JEEB> I don't think that's valid
[15:19:55 CEST] <superware> unfortunately I don't know what L-SMASH/boxdumper is, I'm on Windows
[15:20:17 CEST] <JEEB> L-SMASH is a project and boxdumper is an app
[15:20:57 CEST] <superware> I guess I need to generate pts/dts? but how? the input time_base is 1/90000...
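A hedged sketch of the usual workaround for superware's situation: if the frame rate is known and constant, synthesize the timestamps yourself (with the API that means filling in pkt->pts, pkt->dts and pkt->duration before av_interleaved_write_frame). The helper name below is made up; the arithmetic is just the 1/90000 timebase superware mentions:

```shell
# Sketch: pts for frame N of a constant-frame-rate stream, expressed
# in the 90 kHz MPEG timebase. Exact for rates that divide 90000
# (24, 25, 30, 60, ...); NTSC 30000/1001 rates need rational math.
pts_for_frame() { # args: frame-index fps
  echo $(( $1 * 90000 / $2 ))
}
```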
[15:26:37 CEST] <faLUCE> superware: do you use the libav library?
[15:26:44 CEST] <superware> yep
[15:27:33 CEST] <BtbN> thebombzen_, nvenc hevc is from my experience not much better than nvenc h264.
[15:27:55 CEST] <thebombzen_> I've found it's a bit better
[15:27:58 CEST] <thebombzen_> one primary advantage is it supports 10bit precision
[15:28:13 CEST] <thebombzen_> which is useful for 3d video game recordings
[15:28:18 CEST] <BtbN> on pascal, h264 should as well?
[15:28:32 CEST] <BtbN> it will probably internally downsample to 8bit though
[15:28:43 CEST] <thebombzen_> yea, and that defeats the whole point
[15:29:05 CEST] <BtbN> But games usually don't produce 10bit output anyway
[15:29:28 CEST] <thebombzen_> they don't. I'd upscale it first
[15:29:43 CEST] <BtbN> But what's the gain of it then?
[15:29:48 CEST] <thebombzen_> compression ratio
[15:30:09 CEST] <thebombzen_> same reason I'd want to use hevc in the first place
[15:30:34 CEST] <BtbN> it shouldn't be terribly hard to add it to OBS, can be treated mostly the same as h264
[15:30:48 CEST] <BtbN> 10bit might be more of a problem though
[15:30:49 CEST] <thebombzen_> I agree, but I don't know c++ or c
[15:31:32 CEST] <thebombzen_> otherwise I might try that out myself
[15:31:47 CEST] <thebombzen_> although I also don't program for Windows and I only use OBS on win
[15:32:12 CEST] <thebombzen_> but yea, upscaling 8->10 and then using dct-based lossy encoding, and then having the decoder go from 10->8.
[15:32:19 CEST] <thebombzen_> for some reason it actually improves the compression ratio
[15:32:45 CEST] <BtbN> you can just feed nvenc 8bit content, and tell it to encode 10bit i believe, not 100% sure though
[15:32:56 CEST] <thebombzen_> I don't think nvenc has an upscaler
[15:33:02 CEST] <thebombzen_> I think I have to feed it yuv420p10le
[15:33:16 CEST] <BtbN> P010 is the native and preferred format
[15:33:46 CEST] <thebombzen_> what even is p010le
[15:33:55 CEST] <BtbN> nv12 for 10bit
[15:33:57 CEST] <thebombzen_> is that like nv12 but for 10bit?
[15:33:59 CEST] <thebombzen_> oh okay ninjad
[15:34:26 CEST] <BtbN> if you go for 12 bit, things get more fun. As the pixel format nvenc wants is not supported by ffmpeg
[15:34:39 CEST] <BtbN> but a 16bit format with the 4 other bits left empty matches perfectly
[15:35:38 CEST] <thebombzen_> it supports yuv444p16le apparently
[15:35:55 CEST] <BtbN> yes, but it will do 12bit encoding
[15:35:59 CEST] <BtbN> and just ignore the 4 lsb
[15:36:13 CEST] <thebombzen_> the real issue here is more that OBS doesn't support a replay buffer outside of simple mode
[15:36:22 CEST] <thebombzen_> which is what I want
[15:36:31 CEST] <BtbN> It really can't, when it hands off to ffmpeg
[15:36:43 CEST] <BtbN> but you should be able to build one yourself
[15:36:49 CEST] <thebombzen_> hm?
[15:36:53 CEST] <BtbN> just use the segment muxer and tell it to overwrite/delete old segments
[15:37:35 CEST] <thebombzen_> ah I didn't think about that. but I'd have to do something funny
[15:37:36 CEST] <BtbN> Or am I misunderstanding what the replay buffer does? Basically hold the last X seconds in a buffer, and copies them to disk on the press of a button
[15:37:43 CEST] <thebombzen_> it does exactly that
[15:37:57 CEST] <BtbN> yeah, segment muxer like that, and some script that copies the segments on a hotkey
[15:38:14 CEST] <faLUCE> I tried this:  http://dranger.com/ffmpeg/tutorial02.c   for demuxing and playing a http h264 mpegts live stream, and I don't have latency at all, If I use ffplay, with -fflags nobuffer, I still have some (small) latency. Why ?
[15:38:23 CEST] <thebombzen_> the issue with the segment muxer's "overwrite" option is that right after the overwrite you end up with basically no data
[15:38:37 CEST] <thebombzen_> so if that's the time you choose to save, you end up sad
[15:38:50 CEST] <BtbN> you'd have to implement a delay, so on hotkey press, it waits one segment length before copying
[15:38:58 CEST] <thebombzen_> which is one minute
[15:39:01 CEST] <BtbN> and ignores the latest segments, as it's incomplete
[15:39:11 CEST] <BtbN> why would you use one minute segments?
[15:39:18 CEST] <BtbN> Just use something short, like a few seconds
[15:39:22 CEST] <thebombzen_> because that's the length of the replay buffer
[15:39:32 CEST] <thebombzen_> at least as I like it
[15:39:34 CEST] <BtbN> you use way shorter segments
[15:39:39 CEST] <BtbN> one segment = one gop
[15:39:49 CEST] <BtbN> and keep as many segments as you want in your replay buffer
[15:40:01 CEST] <BtbN> the length of all segments you keep is the length of your replay buffer
[15:40:19 CEST] <thebombzen_> that doesnt' quite mimic the functionality of the replay buffer
[15:40:25 CEST] <BtbN> it does?
[15:40:35 CEST] <thebombzen_> because the replay buffer stores the last X seconds and if I hit a hotkey it'll dump it to a .ts file named with the timestamp
[15:40:41 CEST] <thebombzen_> and if I just keep playing I can do it again
[15:40:45 CEST] <BtbN> yes
[15:40:51 CEST] <thebombzen_> whereas the segment muxer doesn't let me do that sort of differentiation
[15:40:54 CEST] <BtbN> you do the 4 seconds segments for a minute or two
[15:41:10 CEST] <BtbN> and on hotkey, have some script copy the segments except for the latest one, and combine them
[15:41:27 CEST] <BtbN> mimics the replay buffer perfectly
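BtbN's scheme sketched in shell. `-segment_time` and `-segment_wrap` are real segment-muxer options, but the segment length, window size, codec choice and filenames here are illustrative, and a real hotkey script would also need the one-segment delay BtbN mentions:

```shell
# Sketch of the rolling replay buffer: the recorder keeps a window of
# short MPEG-TS segments on disk (segment_wrap makes ffmpeg reuse the
# same N filenames), and the save step concatenates every complete
# segment, skipping the newest one, which is still being written.
record_segments() { # args: input
  ffmpeg -i "$1" -c copy -f segment \
         -segment_time 4 -segment_wrap 30 'seg%03d.ts'
}

save_replay() {
  # oldest-first by mtime (names repeat because of segment_wrap, so
  # sort by time, not name); sed '$d' drops the still-open segment
  ls -tr seg*.ts | sed '$d' | xargs cat > "replay-$(date +%s).ts"
}
```

Concatenating with plain `cat` works here because MPEG-TS, unlike MP4, is a self-synchronizing stream with no global index.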
[15:41:33 CEST] <thebombzen_> oh you're suggesting that I use something like autohotkey to run the script
[15:41:54 CEST] <BtbN> if you use mpeg-ts, combining the segments is trivial for any scripting language
[15:41:58 CEST] <thebombzen_> yea I use ts anyway
[15:42:17 CEST] <thebombzen_> I don't know how to write windows scripts is the issue
[15:42:59 CEST] <thebombzen_> I could probably learn
[15:43:06 CEST] <BtbN> download bash.exe, use bash :D
[15:43:12 CEST] <thebombzen_> I normally do
[15:43:14 CEST] <thebombzen_> it comes with git
[15:43:26 CEST] <thebombzen_> but I don't know if that works with autohotkey
[15:43:40 CEST] <BtbN> can probably do it entirely in an autohotkey script
[15:43:59 CEST] <BtbN> never used that though
[15:43:59 CEST] <thebombzen_> GOTO "I don't know how to write windows scripts"
[15:44:25 CEST] <thebombzen_> but keep in mind that your suggested solution is to 1. set up the segment muxer to have GOP-sized segments last for 1 minute total and then start overwriting (which I don't know how to do)
[15:45:03 CEST] <thebombzen_> 2. Download a third-party piece of software (e.g. AHK) that I can use to create a keybind, because the window manager doesn't support keybinds (unlike on Linux)
[15:45:17 CEST] <thebombzen_> 3. with the keybind, run a shell (or other) script that I have to write myself
[15:45:40 CEST] <thebombzen_> and all these together can allow me to use hevc instead of h264. It sounds like it's not worth it to me.
[15:46:11 CEST] <thebombzen_> it sounds much easier to just allow NVENC's HEVC encoder in simple mode. I would submit a patch for that, but I don't know C/C++
[15:47:06 CEST] <BtbN> maybe just submit a bug to them? If it's simple, they might just add it
[15:47:19 CEST] <thebombzen_> true. sounds like a plan
[17:32:04 CEST] <Guest72238> hi
[17:32:33 CEST] <Guest72238> i tried to compile ffmpeg for windows but i get "C compiler test failed."
[17:36:20 CEST] <Guest72238> anyone there??
[17:38:09 CEST] <c_14> Guest72238: check the end of config.log
[17:43:09 CEST] <Guest72238> WARNING: Unknown C compiler x86_64-w64-mingw32-gccgcc, unable to select optimal CFLAGS
[17:44:45 CEST] <Guest72238> i know what i did wrong
[17:45:42 CEST] <Guest72238> it should be --cross-prefix=x86_64-w64-mingw32- and not --cross-prefix=x86_64-w64-mingw32-gcc
[17:51:25 CEST] <cryptodechange> dang
[17:51:42 CEST] <cryptodechange> I encode something with the same settings but 1080 16:9 instead of cropped 800
[17:51:58 CEST] <cryptodechange> And the bitrate is 10mbit higher than my average encodes before it
[17:52:26 CEST] <cryptodechange> Though I was using crf 15/16, as it's higher resolution I should really up it a bit more
[17:53:30 CEST] <Guest72238> where do i put the frei0r.h
[18:01:05 CEST] <c_14> Guest72238: anywhere in your include path
[18:01:56 CEST] <Guest72238> sry but I'm pretty bad with linux
[18:02:05 CEST] <Guest72238> where is the include path
[18:02:42 CEST] <mdavis> Any of the folders in the PATH environment variable
[18:02:49 CEST] <mdavis> echo $PATH
[18:03:56 CEST] <mdavis> oh wait nvm
[18:03:56 CEST] <BtbN> PATH hat nothing to do with includes
[18:04:00 CEST] <BtbN> *s
[18:04:19 CEST] <mdavis> yeah, zoned out on that one ;)
[18:05:29 CEST] <mdavis> Guest72238: It would probably go somewhere like /usr/include or /usr/local/include
[18:07:21 CEST] <mdavis> If you installed frei0r to a non-standard location, you'd have to configure FFmpeg with something like
[18:07:44 CEST] <mdavis> --extra-cflags="-I/usr/local/whatever/include"
[18:08:37 CEST] <mdavis> I haven't tried working with frei0r before, though, I know it has some quirks
[18:09:05 CEST] <Guest72238> well i actually have not installed frei0r
[18:09:15 CEST] <Guest72238> i'm going to disable it
[19:07:31 CEST] <livingbeef> Would it be possible to have overlay pattern repeated over the whole width and height of the original video?
[19:20:47 CEST] <james9999> by the way
[19:20:59 CEST] <james9999> any idea how to stream youtube with  youtube-dl and ffmpeg on windows?
[19:21:09 CEST] <james9999> i would try piping in linux but i'm not sure it works
[19:21:47 CEST] <BtbN> you want to stream to youtube?
[19:21:53 CEST] <james9999> no from
[19:22:02 CEST] <james9999> my xbox auto selects the res to the lowest one
[19:22:08 CEST] <james9999> but i can fix that by streaming youtube to it from my pc
[19:22:16 CEST] <james9999> but as of now i can only do it for files on my hard drive
[19:22:19 CEST] <BtbN> Why not watch youtube on your PC then, when you need it anyway?
[19:22:39 CEST] <james9999> cause my pc isn't a big 50 inch screen tv with surround sound and comfy sofas. :D
[19:22:53 CEST] <BtbN> But you can connect it to a TV
[19:22:55 CEST] <furq> youtube-dl -o -
[19:23:10 CEST] <james9999> furq: that pipes to stdout?
[19:23:14 CEST] <furq> yes
[19:23:22 CEST] <BtbN> connecting your PC to the TV seems like way less of a hassle than that
[19:23:24 CEST] <james9999> ok. i'm probably screwed on windows then
[19:23:29 CEST] <james9999> even though i have youtube-dl and ffmpeg for windows
[19:23:31 CEST] <furq> piping works on windows
[19:23:42 CEST] <james9999> eh
[19:23:49 CEST] <james9999> I know io redirect like < or > works
[19:24:14 CEST] <james9999> like if I open an MSYS shell?
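furq's `-o -` suggestion wired end to end might look like the sketch below. The function name is made up, and the same pipeline works from cmd or an MSYS shell on Windows, since both support pipes:

```shell
# Sketch: fetch a YouTube video and restream/remux it without landing
# the download on disk. youtube-dl writes the stream to stdout (-o -),
# ffmpeg reads it from stdin (pipe:0) and remuxes to MPEG-TS, which
# can be played or sent onward without seeking. Depending on the
# video, a -f format selection may be needed to get a pipeable format.
restream() { # args: video-url  output (a file, or e.g. a udp:// target)
  youtube-dl -o - "$1" | ffmpeg -i pipe:0 -c copy -f mpegts "$2"
}
```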
[19:34:02 CEST] <james9999> also in this line of output is bitrate the total bitrate or just the bitrate of the video frame?
[19:34:04 CEST] <james9999> frame=  891 fps= 30 q=29.0 size=    1273kB time=00:00:29.23 bitrate= 356.9kbits/
[19:43:17 CEST] <cryptodechange> At crf=15 1920x800 I'm getting ~12mbit/sec, but 1920x1080 i'm getting 20-22mbit
[19:43:18 CEST] <cryptodechange> madness
[19:44:02 CEST] <cryptodechange> Two different sources of course
[19:46:30 CEST] <furq> yes, crf 15 is madness
[19:47:22 CEST] <cryptodechange> It's normal to scale the crf down for lower resolutions right?
[19:47:29 CEST] <furq> if by down you mean up then yes
[19:47:40 CEST] <furq> oh nvm you said lower
[19:47:46 CEST] <furq> 1080p is not "lower resolutions"
[19:48:13 CEST] <cryptodechange> So eg. if I was using crf=15 for 1920x800, crf=14 for 720p, 13 for 480p or whatnot
[19:48:37 CEST] <furq> i mean if you're going to use such stupid crf values then you might as well just remux the bluray
[19:48:58 CEST] <cryptodechange> Why? I went from 30gb to 12gb
[19:49:13 CEST] <furq> didn't you say half of that was audio
[19:49:48 CEST] <cryptodechange> One film was 12gb, one was 16gb, I think the larger one was TrueHD
[19:51:42 CEST] <cryptodechange> 1920x1080 vs. 1920x800 is ~540k more pixels, 35% more
[19:52:38 CEST] <cryptodechange> I don't think applying that to CRF would make sense (in this case, crf=20)
[19:53:29 CEST] <furq> i use 21 for 720p and up
[19:59:27 CEST] <cryptodechange> If you were encoding 480p content, would you apply the same crf?
[20:04:05 CEST] <BtbN> either you want a certain quality, or you don't
[20:06:16 CEST] <furq> i use 20 for sd stuff
[20:06:19 CEST] <furq> or 19 for stuff i care about
[20:15:51 CEST] <Dresk|Dev> I had a question about 3.3 regarding Android - does this version provide hardware decoding with the result being a texture, as opposed to an overlay?
[21:14:49 CEST] <alexpigment> does anyone know if it's possible to add chapters to an MKV when encoding it in FFMPEG, or is MKVMerge necessary for that functionality?
[21:17:04 CEST] <durandal_1707> alexpigment: there is support via api
[21:17:16 CEST] <alexpigment> so not via the command line?
[21:17:32 CEST] <durandal_1707> check if ffmpeg has an option for this
[21:17:40 CEST] <furq> you can do it with -metadata iirc
[21:18:23 CEST] <alexpigment> that's good to know. every time i search on google i get results for mkvmerge
[21:18:34 CEST] <faLUCE> I have to make a simple player for http+mpegts+h264 and AAC. I could start from demuxing_decoding.c, but I wonder if is there any wrapper lib for ffmpeg + sdl
[21:18:44 CEST] <furq> alexpigment: https://www.ffmpeg.org/ffmpeg-formats.html#Metadata-1
[21:19:03 CEST] <furq> you might be better off using mkvmerge because -metadata doesn't support any chapter file format that anything else uses
[21:19:43 CEST] <furq> actually i might have a script somewhere that converts ogm chapters to ffmetadata
[21:20:25 CEST] <alexpigment> ogm chapters are for MKV?
[21:20:28 CEST] <faLUCE> what about this?
[21:20:29 CEST] <faLUCE> https://github.com/wang-bin/QtAV
[21:20:35 CEST] <furq> alexpigment: it's just a common chapter file format
[21:21:03 CEST] <alexpigment> gotcha. i'm a bit green on chapters tbh
[21:22:41 CEST] <alexpigment> well, i'll do some testing with this metadata option to see how well-supported the resultant files are
[21:23:13 CEST] <faLUCE> I need a minimal audio/video player source code... is there any?
[21:24:17 CEST] <furq> alexpigment: it shouldn't make any difference to the player
[21:24:33 CEST] <furq> you're not actually muxing in a chapter file
[21:25:01 CEST] <alexpigment> oh, that's what you meant about metadata not supporting the chapter file formats
[21:25:08 CEST] <alexpigment> i'm going to be specifying custom chapter points
[21:25:26 CEST] <furq> if you're specifying them by hand then ffmetadata is fine
[21:25:44 CEST] <alexpigment> ok cool
[21:25:46 CEST] <furq> it's just annoying because no tool that exports chapter files supports that format
[21:26:08 CEST] <alexpigment> yeah, not a biggie for me. i'm not ripping movies from discs or anything
[21:26:31 CEST] <alexpigment> but given that i am just specifying time stamps and nothing else, is there an even simpler method than metadata?
[21:26:47 CEST] <furq> that's the only way i know of to do it with ffmpeg
[21:26:50 CEST] <alexpigment> k
[21:27:01 CEST] <alexpigment> i was hoping to avoid writing a file out for the metadata, but it's fine
[21:40:56 CEST] <alexpigment> that seemed to do the trick. thank you very much
[21:41:10 CEST] <alexpigment> (furq, i mean)
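For reference, the ffmetadata chapter workflow that worked here can be sketched as follows. The file names and chapter points are hypothetical, and the muxing invocation is left commented out since it needs a real input file:

```shell
# Write a minimal ffmetadata file with two hand-specified chapter points.
# Times are expressed in TIMEBASE units (here: milliseconds).
cat > chapters.txt <<'EOF'
;FFMETADATA1
[CHAPTER]
TIMEBASE=1/1000
START=0
END=180000
title=Chapter 1
[CHAPTER]
TIMEBASE=1/1000
START=180000
END=360000
title=Chapter 2
EOF
# Mux the chapters in without re-encoding (stream copy):
# ffmpeg -i in.mkv -i chapters.txt -map_metadata 1 -map 0 -c copy out.mkv
head -n 1 chapters.txt   # prints ";FFMETADATA1"
```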
[22:42:47 CEST] <james999> furq you said h265 isn't mature yet
[22:43:00 CEST] <james999> does that basically mean people figure out better ways to optimize it
[22:43:06 CEST] <james999> until it's as fast as libx264 is now?
[22:43:27 CEST] <james999> IOW the underlying file format stays the same, but the encoding and decoding improve
[22:43:37 CEST] <BtbN> h265 is a finalized standard, the format itself won't change
[22:44:36 CEST] <furq> 21:43:27 ( james999) IOW the underlying file format stays the same, but the encoding and decoding improve
[22:44:39 CEST] <furq> yes
[22:44:50 CEST] <furq> this is the case with all codecs
[22:45:35 CEST] <furq> the standard is finalised and then people figure out increasingly efficient ways of making something that conforms
[22:46:15 CEST] <furq> x264 is probably the most developed encoder of all time
[22:46:35 CEST] <furq> so it's no surprise that an encoder with a much less open development model for a much newer standard isn't quite there yet
[22:46:40 CEST] <furq> even if the standard has greater potential
[22:46:40 CEST] <BtbN> I haven't actually benched x264 on this box
[22:47:05 CEST] <james999> maybe that's what confused me
[22:47:19 CEST] <james999> the part where you said it has greater potential, just noone has optimized it to that level yet
[22:47:35 CEST] <BtbN> you're probably confusing h265 with x265
[22:47:39 CEST] <furq> yeah
[22:47:45 CEST] <furq> h265 is the codec, x265 is the encoder
[22:47:55 CEST] <furq> the codec is a standard, that doesn't change
[22:48:09 CEST] <james999> ok
[22:48:22 CEST] <james999> but how do you know it will eventually be faster than current codecs?
[22:48:29 CEST] <BtbN> it will never be faster
[22:48:32 CEST] <furq> i didn't say it would
[22:48:36 CEST] <furq> and yeah it probably won't
[22:48:40 CEST] <furq> h265 is much more complex
[22:48:40 CEST] <alexpigment> there should be an asterisk by "that doesn't change". it's likely that we'll see a lot of amendments and revisions
[22:48:45 CEST] <furq> it'll be faster than it currently is
[22:49:01 CEST] <BtbN> x265 is real-time capable now, which is already a huge step up
[22:49:09 CEST] <furq> it has greater potential for compression
[22:49:11 CEST] <BtbN> at least on 1080p
[22:49:15 CEST] <alexpigment> BtbN - on what hardware?
[22:49:20 CEST] <BtbN> Ryzen
[22:49:24 CEST] <furq> particularly at 1440p, 4k etc
[22:49:39 CEST] <alexpigment> 1080p on Ryzen is realtime, which means it won't be realtime on laptop hardware for years
[22:50:02 CEST] <BtbN> Don't really see a point in real time encoding h265 though
[22:50:03 CEST] <james999> by laptop do you mean desktop?
[22:50:05 CEST] <alexpigment> most laptops are still shipping with 2core/4thread CPUs :(
[22:50:14 CEST] <furq> ryzen is desktop
[22:50:27 CEST] <furq> the server zen cpus will have different branding
[22:50:36 CEST] <furq> and they also don't exist yet
[22:50:53 CEST] <BtbN> They won't have any issues with hevc in realtime though, with their 32 cores
[22:51:15 CEST] <alexpigment> james999: yeah, i meant laptop. i'm contrasting ryzen desktop (8core/16threads) with current laptop hardware (2core/4threads or sometimes 4core/8threads)
[22:51:32 CEST] <furq> is realtime x265 a big step up over nvenc
[22:51:42 CEST] <furq> i'd have thought you'd have to fuck up the quality quite a lot to get 30fps out of it
[22:52:02 CEST] <alexpigment> nvenc is actually pretty nice for H.265
[22:52:33 CEST] <alexpigment> i mean it wins easily (i can't stress this enough) on speed. and x265 isn't optimized enough at this point to really take nvenc to school on quality
[22:52:51 CEST] <alexpigment> granted, i've done a lot of benching on speed, and not as much on quality
[22:52:59 CEST] <furq> that is all true but i was under the impression that x264 still easily beats nvenc-hevc
[22:53:13 CEST] <alexpigment> that may be true, but it's also true for x264 vs x265
[22:53:33 CEST] <furq> probably at realtime, yeah
[22:53:40 CEST] <alexpigment> i don't actually know for sure how x264 compares to 1000-series nvenc though on quality
[22:53:41 CEST] <furq> but x265 can at least get compression gains
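Hedged sketches of the two realtime HEVC paths being compared here. Both assume an ffmpeg build with the relevant encoders and a real input file, so the commands are shown commented out rather than as tested invocations:

```shell
# Software x265 with speed-first settings (quality traded for realtime):
# ffmpeg -i in.mkv -c:v libx265 -preset ultrafast -crf 28 -c:a copy out_x265.mkv

# NVIDIA's hardware HEVC encoder (needs a supported GPU and --enable-nvenc):
# ffmpeg -i in.mkv -c:v hevc_nvenc -c:a copy out_nvenc.mkv
```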
[22:54:08 CEST] <furq> i also don't see the point of realtime hevc though
[22:54:35 CEST] <alexpigment> well if streaming needs to be done, you'd need to do it realtime
[22:54:46 CEST] <alexpigment> (this is theoretical, obviously)
[22:55:00 CEST] <alexpigment> and if you want to do broadcasting, you'd need to do it in realtime
[22:55:04 CEST] <furq> sure, but why would you want to stream hevc in 2017
[22:55:20 CEST] <alexpigment> because you're bandwidth-limited and you're trying to do 4k?
[22:55:49 CEST] <alexpigment> not really saying i believe people should be streaming HEVC in 2017, but without being able to encode in realtime, you'll never be able to stream in realtime
[22:55:56 CEST] <furq> that seems unlikely, but i guess it is a reason
[22:55:57 CEST] <alexpigment> or broadcast in realtime
[22:56:33 CEST] <furq> i guess it's a chicken and egg thing
[22:56:38 CEST] <alexpigment> right
[22:56:44 CEST] <furq> it has no utility right now, but if it's not possible then nobody's going to build anything that makes it useful
[22:56:56 CEST] <alexpigment> that's the point i was trying to make, although you said it more eloquently ;)
[22:57:14 CEST] <furq> that's me baby
[22:57:22 CEST] <alexpigment> ol' eloquent furq
[22:57:24 CEST] <furq> rephrasing your points and taking the praise
[22:57:36 CEST] <alexpigment> you're living the life, man
[23:21:57 CEST] <faLUCE> I need a minimal audio/video player source code with libav... is there any?
[23:27:55 CEST] <BtbN> ffplay?
[23:28:05 CEST] <faLUCE> BtbN: the code is too long
[23:28:19 CEST] <BtbN> well, it's the most minimal you've got
[23:28:50 CEST] <faLUCE> BtbN: I found this:  http://dranger.com/ffmpeg/tutorial02.c, it is good for video only
[23:29:04 CEST] <faLUCE> so I need something like that, but for audio+ video
[23:29:15 CEST] <faLUCE> ffplay is too long
[23:35:39 CEST] <iive> there must be examples in ffmpeg source... maybe in docs?
[23:36:02 CEST] <james999> there's an ffmpeg tutorial? o_0
[23:36:23 CEST] <faLUCE> iive: could not find it
[23:36:26 CEST] <BtbN> There are tons of things that call themselves tutorial..
[23:36:33 CEST] <BtbN> Most of them are outdated and/or bad
[23:37:21 CEST] <james999> yeah too bad there's not a website that tries to be a sort of hub for high quality guides and tutorials
[23:37:33 CEST] <james999> a linux documentation project, if you will. XD
[23:37:35 CEST] <iive> faLUCE: ffmpeg-src/doc/examples/
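The directory iive points to is part of the FFmpeg source tree. A hedged sketch of getting at it (the local path is hypothetical, and building the examples needs a configured tree plus the dev libraries, so the commands are commented out):

```shell
# git clone https://git.ffmpeg.org/ffmpeg.git
# ls ffmpeg/doc/examples/
#   -> decode_audio.c, demuxing_decoding.c, muxing.c, transcode_aac.c, ...
# (cd ffmpeg && ./configure && make examples)   # builds doc/examples
```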
[23:37:43 CEST] <faLUCE> iive: please
[23:38:26 CEST] <faLUCE> anyway, I posted updated examples to the mailing list and they were not accepted, even though they were judged well coded and working
[23:39:00 CEST] <faLUCE> there's a really bad attitude from the libav developers toward providing examples.
[23:39:08 CEST] <faLUCE> or even accepting them
[23:39:28 CEST] <iive> faLUCE: link to the mail thread?
[23:39:34 CEST] <james999> well faLUCE what can I tell you, sometimes the international illuminati just end up keeping you down
[23:39:50 CEST] <alexpigment> haha
[23:40:55 CEST] <faLUCE> iive: http://ffmpeg.org/pipermail/ffmpeg-devel/2017-March/209448.html
[23:41:15 CEST] <faLUCE> then I stopped contributing
[23:42:54 CEST] <iive> faLUCE: just bump the thread, do you want me to do it?
[23:43:27 CEST] <BtbN> also, ffmpeg is still not libav.
[23:44:13 CEST] <iive> :)
[23:44:22 CEST] <faLUCE> iive: what does "bump" mean? (forgive my english)
[23:45:16 CEST] <iive> faLUCE: ask about the status of the patch. e.g. michael might want to give other developers time to nitpick your code; nobody finds anything to comment on, and the patch gets forgotten.
[23:46:17 CEST] <iive> ask about the status of the patch; since nobody objected, it has silent approval and should be committed.
[23:46:25 CEST] <iive> michaelni: ^
[23:46:41 CEST] <faLUCE> iive: to be honest I'm a bit discouraged about that. So I left that stuff in limbo. some people were very polemic about it, saying that "it doesn't add useful thing" (which is clearly false: you can judge the code yourself)
[23:47:21 CEST] <faLUCE> iive: what I suggest is that you read and try the code, and if you judge it worthwhile, then bump it yourself
[23:47:37 CEST] <faLUCE> because I've not been listened to
[23:48:28 CEST] <iive> yeah, recently there have been a bunch of naysayers...
[23:48:43 CEST] <furq> where did people say it didn't add anything useful
[23:48:56 CEST] <faLUCE> furq: in the #ffmpeg-devel chat
[23:49:00 CEST] <iive> probably irc
[23:49:05 CEST] <furq> oh
[23:50:16 CEST] <faLUCE> I don't want to appear conceited, but the code seems clear and it explains how to do useful things (you can read the comments)
[23:51:13 CEST] <faLUCE> I prepared other snippets as well (for other useful things, like rtp, live grabbing etc.), but after these comments on the irc channel I dropped it all
[23:51:20 CEST] <furq> well the question is whether it adds anything over encode_audio.c
[23:51:45 CEST] <furq> and also whether it omits anything potentially useful that encode_audio.c does
[23:51:49 CEST] <faLUCE> furq: "
[23:51:51 CEST] <faLUCE> It can be adapted, with few changes, to a custom raw audio source (i.e: a live one).
[23:51:52 CEST] <faLUCE> + * It uses a custom I/O write callback (write_adts_muxed_data()) in order to show to the user
[23:51:52 CEST] <furq> because if it doesn't then it should replace it if it's better
[23:51:54 CEST] <faLUCE> + * how to access muxed packets written in memory, before they are written to the output file.
[23:51:55 CEST] <faLUCE> "
[23:52:33 CEST] <faLUCE> the code was intended for showing how to make a COMPLETE pipe from a live source
[23:53:18 CEST] <faLUCE> and it's coded in a strictly sequential form, so to have readable "blocks" in this pipe
[23:53:42 CEST] <faLUCE> instead of splitting it into unreadable functions which force the user/reader to jump from one line to another
[23:54:43 CEST] <faLUCE> in addition, AAC needs to be muxed
[23:54:53 CEST] <faLUCE> which is not included in "encode_audio.c"
[23:55:41 CEST] <faLUCE> in addition, it explains how to manage the muxed packets with a custom I/O, which is useful if you have to insert the code in your program
[23:57:22 CEST] <faLUCE> anyway, after that, I wrote my own library:  https://github.com/paolo-pr/laav
[23:59:29 CEST] <faLUCE> in addition, they preferred to leave in some terrible examples (like muxing.c, which is a complete disaster of code: just look at it. or "demuxing_decoding.c"). Now, I don't want to be polemic, but these things really discourage people from contributing
[23:59:44 CEST] <faLUCE> so I gave up on my plans.
[00:00:00 CEST] --- Fri May  5 2017

More information about the Ffmpeg-devel-irc mailing list