[Ffmpeg-devel-irc] ffmpeg.log.20190329
burek
burek021 at gmail.com
Sat Mar 30 03:05:02 EET 2019
[01:19:13 CET] <dablitz> good evening channel. I am wondering if it is possible to stream audio only with G.711 codec via RTP with ffmpeg
[03:50:01 CET] <pridkett> does ffmpeg support converting 60 fps to 30 fps ?
[07:09:06 CET] <pridkett> what does 64bit application offer that 32bit application does not offer
[12:09:10 CET] <faLUCE> dablitz: did you try the command?
[12:13:27 CET] <dablitz> faLUCE: yes I did but -a g.711 didn't work
[12:16:52 CET] <faLUCE> dablitz: paste the command
[12:27:09 CET] <dablitz> ffmpeg -i hw,1:0 -acodec g.711 -ar 11025 --f rtp rtp://fc79:85f6:a2b5:1c5a:91d5:a11b:6ca8:692f:54000
[12:34:17 CET] <faLUCE> dablitz: I see pcm_alaw and pcm_mulaw from encoders' list
[12:34:48 CET] <faLUCE> so, replace g.711 with pcm_mulaw
[13:00:42 CET] <dablitz> faLUCE: i will try that thank you
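For reference, a corrected form of the command above might look like the following (a sketch, not verified against this setup: the ALSA device and destination are illustrative placeholders; ffmpeg's G.711 encoders are pcm_mulaw/pcm_alaw, G.711 is specified at 8 kHz, and a literal IPv6 address in a URL must be bracketed):

```shell
# G.711 mu-law (PCMU) over RTP; G.711 is defined at 8 kHz mono.
# "-f alsa -i hw:1,0" and the destination address are placeholders.
ffmpeg -f alsa -i hw:1,0 -acodec pcm_mulaw -ar 8000 -ac 1 \
    -f rtp "rtp://[fc79:85f6:a2b5:1c5a:91d5:a11b:6ca8:692f]:54000"
```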
[13:00:56 CET] <JEEB> dongs: sorry I've been busy posing my waifus in a 3-D modeling thing :) but feel free to ask anyways, in case I can be of use
[15:43:40 CET] <kevinnn> Hi all, what kind of effect on performance does interlacing have on x264?
[15:46:40 CET] <pridkett> kevinnn you mean when you want to encode an interlaced video using x264?
[15:47:39 CET] <kevinnn> pridkett: does x264 offer an interlaced option? I see many references to interlacing throughout x264.h
[15:47:54 CET] <kevinnn> like will it only process half the pixels every cycle
[15:48:03 CET] <kevinnn> which should double the performance right?
[15:48:09 CET] <pridkett> i am not sure, but ffmpeg does
[15:48:13 CET] <kevinnn> and halve the size of the output
[15:48:13 CET] <pridkett> why not just use ffmpeg
[15:48:35 CET] <JEEB> kevinnn: remember field rate vs frame rate :P
[15:48:48 CET] <JEEB> fields might be half the height but then you get 2x of them
[15:48:57 CET] <JEEB> also usually x264 encodes in MBAFF mode
[15:49:02 CET] <JEEB> which means that two fields get coded together
[15:49:51 CET] <kevinnn> JEEB: so does that mean that there is no advantage to tuning interlaced in x264?
[15:50:15 CET] <JEEB> you use interlaced coding when you absolutely need to pass content through interlaced (and you need to re-encode)
[15:50:31 CET] <JEEB> otherwise zarro reason to encode interlaced :P
[15:50:45 CET] <JEEB> some features get disabled in x264 to make it an even sweeter deal
[15:51:30 CET] <kevinnn> JEEB: what do you mean by absolutely need to? I want to get a bit more performance out of x264, is that not a good use case for interlaced?
[16:04:44 CET] <kevinnn> JEEB?
[16:06:42 CET] <dongs> JEEB: well, still same shit. i can't figure out which layer TLV-NIT is at
[16:06:48 CET] <DHE> if the source input is interlaced and that's what needs to be encoded, you use interlace mode. it's easy as that
[16:06:51 CET] <dongs> i thought it was TLV type FE but im not getting anything there
[16:07:00 CET] <DHE> but yes, there are quality/performance impacts of interlaced mode
[16:07:01 CET] <dongs> i forget if your paste showed nit parsing or not
[16:07:16 CET] <DHE> (I have a build of x264 with interlacing support disabled)
[16:07:43 CET] <kevinnn> DHE: if the source is not interlaced is there any performance impacts?
[16:07:56 CET] <dongs> btw, after fixing hardware, 8K capture is flawless now
[16:08:04 CET] <dongs> nothing i have plays it though
[16:08:08 CET] <dongs> it uses 100% nvdec
[16:08:12 CET] <dongs> to play at like 15-20fps
[16:08:14 CET] <dongs> if even that
[16:08:18 CET] <kevinnn> DHE: also if the source is interlaced... does it process even lines first or odd lines first?
[16:08:43 CET] <DHE> kevinnn: x264 supports tff and bff interlace modes
[16:09:28 CET] <kevinnn> DHE: tff?
[16:09:41 CET] <DHE> the performance impact of non-interlaced modes would be pretty theoretical. all the conditions that say "if (is_interlaced) {...}" would always fail, but it's probably a one-in-a-million slowdown
[16:09:47 CET] <DHE> top field first, bottom field first
[16:10:34 CET] <DHE> I'm building a custom application. very optimized. the configure commandline is 1000 bytes by itself
[16:10:52 CET] <DHE> call it 1400
[16:11:09 CET] <kevinnn> DHE: so, enabling interlaced on a non interlaced stream shows no performance increase nor does it make the individual NAL units any smaller?
[16:11:19 CET] <kevinnn> I don't understand how that could be possible
[16:11:24 CET] <kevinnn> wow
[16:11:30 CET] <kevinnn> I'd like to see that :)
[16:11:31 CET] <DHE> oh, you mean mis-setting the parameter
[16:11:51 CET] <kevinnn> DHE: well sort of
[16:12:02 CET] <kevinnn> it should only process half the pixels right?
[16:12:11 CET] <kevinnn> every cycle
[16:12:16 CET] <DHE> I don't know enough about h264 to make the call. I'm just a user with requirements who did a lot of research specific to that
[16:12:35 CET] <kevinnn> you did research specific to interlacing?
[16:13:11 CET] <DHE> fortunately I'm just deinterlacing everything so I can build x264 with --disable-interlaced, makes x264 a bit smaller and should be a hair faster
[16:15:28 CET] <kevinnn> hmm, so if I mis-enable interlacing on my x264 config right now is there any changes I need to make to my source buffer? Any changes I need to make to the client processing the NAL units?
[16:19:34 CET] <kepstin> kevinnn: the main thing incorrectly enabling interlaced parameter on a progressive frame might do is cause the decoded video to have the chroma processed incorrectly, giving slight artifacts on edges of coloured objects. It will likely also slightly reduce the coding efficiency.
[16:21:13 CET] <kevinnn> kepstin: interesting, so there is no advantage at all to using interlaced video? Even if enabled correctly?
[16:22:33 CET] <kepstin> if your video is actually interlaced, then enabling interlaced mode on the x264 encoder will preserve the interlacing. If you don't use interlaced mode, it might introduce artifacts that blend fields
[16:22:46 CET] <kepstin> and the decoder won't know it's interlaced, so might not run a deinterlacer
[16:23:53 CET] <kepstin> an interlaced digital "frame" contains two fields from different times on alternate lines ("combed" together)
[16:24:50 CET] <kepstin> so a 60 field per second interlaced video will be stored as 30 frames per second, and each frame will contain two fields - one field on the even lines, one on the odd lines.
[16:26:55 CET] <kepstin> setting the interlaced mode on the x264 encoder just tells the encoder "the frames i'm giving you aren't a single image, they actually have two pictures combed together", and then the x264 encoder uses a less efficient encoding mode that keeps the two images from getting blended together.
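kepstin's description of two fields being "combed" onto alternate lines can be sketched in a few lines of Python (a toy illustration, not x264 code; the strings stand in for rows of pixels):

```python
# Sketch: an interlaced "frame" stores two fields, captured at
# different times, interleaved ("combed") on alternate lines.
def weave(top_field, bottom_field):
    """Interleave two half-height fields into one full-height frame."""
    assert len(top_field) == len(bottom_field)
    frame = []
    for top_line, bottom_line in zip(top_field, bottom_field):
        frame.append(top_line)     # even lines come from the top field
        frame.append(bottom_line)  # odd lines come from the bottom field
    return frame

# Two 2-line fields (e.g. from a 60-fields/s source) become one 4-line
# frame, which is why the stream is stored as 30 frames/s.
top = ["T0", "T1"]
bottom = ["B0", "B1"]
print(weave(top, bottom))  # ['T0', 'B0', 'T1', 'B1']
```

A deinterlacer has to undo exactly this combing, which is why the decoder needs to be told the frame holds two fields.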
[16:29:07 CET] <kevinnn> kepstin: thank you very much, that was exactly what I was looking for
[16:56:10 CET] <kepstin> anyways, to summarize: enabling interlaced mode makes the x264 encoder slower and makes the individual nal units bigger.
[16:56:17 CET] <iive> actually the decoder decides on its own whether to encode the image as a single picture or two fields. so in effect it would use the most efficient encoding.
[16:56:50 CET] <iive> it comes at price of slower encoding.
[16:58:15 CET] <kepstin> iive: not sure why you mention a decoder. anyways, last i checked, x264 only supported mbaff encoding which encodes both fields together into one frame.
[16:58:49 CET] <iive> i meant encoder.
[17:45:29 CET] <Muchoz> I've been wondering whether it is possible to mix tiling and non-tiling SHVC's layers. So that you could download a base layer that isn't tiled and download tiled enhancement layers. Is this possible?
[17:46:46 CET] <JEEB> not sure if many people have looked into scalable coding
[17:47:15 CET] <JEEB> I know that some paytv companies are starting to look into it as means of coding a free base layer, and then having the higher resolution extra layer paid
[17:47:22 CET] <JEEB> which might make people implement it, but I don't know
[17:47:40 CET] <JEEB> since I don't know if there's any support for SVC or SHVC in hw decoders right now
[17:47:49 CET] <JEEB> I know in software pretty much just the reference decoder exists
[17:50:10 CET] <Muchoz> I'm researching HAS for VR video and was interested in seeing whether this was just "possible" for now and whether I could generate these files already. I'm atm not really interested in the playback of it.
[17:51:38 CET] <Muchoz> So the most important aspect for me right now is streaming the data over multiple types of networks.
[17:51:56 CET] <JEEB> anyways, I guess check H.265's limitations regarding tiling, esp. regarding additional layers
[17:52:30 CET] <Muchoz> It's not something that could perhaps already be done using mp4box?
[17:53:03 CET] <JEEB> ?
[17:53:15 CET] <JEEB> that's a container thing
[17:53:33 CET] <JEEB> if it was a container level thing then no coding things would be required
[17:54:21 CET] <JEEB> with SHVC and separate transmission you'd have to figure out how different containers handle the signaling
[17:54:42 CET] <Muchoz> Handle the signaling of what?
[17:55:05 CET] <JEEB> "there's a separate layer available on track X which you can merge with your current track"
[17:55:24 CET] <JEEB> you have to signal that somehow if you are transferring it in some separate way
[17:55:55 CET] <Muchoz> Ah right
[17:56:08 CET] <JEEB> or if you handle it before demuxing and just merge things into a single track for the player then however you handle that part
[17:56:28 CET] <JEEB> (Aka "make a single track out of samples coming from this and that source")
[17:56:46 CET] <Muchoz> Okay I'm speculating now
[17:57:01 CET] <JEEB> not sure how many things have standardized such signaling so that you could have X completely separate sources
[17:57:03 CET] <Muchoz> Let's say I have a base layer that is not tiled and a base layer that IS tiled.
[17:57:20 CET] <JEEB> because in MPEG-TS your broadcast either has the full set of PIDs, or it doesn't
[17:57:33 CET] <Muchoz> Can enhancement layers be applied to either of those base layers?
[17:57:42 CET] <JEEB> and then you have a signaling thing that says "for PID 123 there's also PID 345 which has the enhancement layer"
[17:57:42 CET] <Muchoz> Those enhancement layers ARE tiled then.
[17:57:54 CET] <Muchoz> I'm currently not looking into making it actually work.
[17:58:03 CET] <JEEB> but you want separately transferred things soo... :|
[17:58:09 CET] <Muchoz> I'm looking at the feasibility of streaming this in the first place.
[17:58:28 CET] <JEEB> multiple base layers... no idea
[17:58:40 CET] <JEEB> for actual SHVC I recommend taking a dive into H.265's limitations :P
[17:58:41 CET] <Muchoz> No not multiple base layers
[17:58:55 CET] <Muchoz> https://www.utdallas.edu/~afshin/publication/360.pdf
[17:58:58 CET] <JEEB> > I have a base layer that is not tiled and a base layer that is tiled
[17:59:09 CET] <Muchoz> They're working with a base layer that is tiled
[17:59:56 CET] <Muchoz> I'm asking whether I can make 1 single base layer that is not tiled.
[18:00:26 CET] <Muchoz> But leave the enhancement layers tiled. Do these enhancement layers "depend" on how the base layer was encoded or...?
[18:01:03 CET] <Muchoz> My apologies for trying to dumb it down, I don't know _that_ much about it
[18:01:05 CET] <JEEB> read the H.265 spec? it's free :P
[18:01:26 CET] <JEEB> also the tiling in this article seems to be for the purposes of not having to transfer all of it necessarily
[18:01:36 CET] <JEEB> at least that's how it feels like
[18:01:52 CET] <JEEB> although not sure where the SHVC comes up there
[18:02:23 CET] <Muchoz> See 4.1 in there
[18:04:28 CET] <JEEB> anyways, if the spec doesn't say that all layers have to share their tiling configuration then that's it :P
[18:04:31 CET] <JEEB> please refer to the spec
[18:05:01 CET] <JEEB> and yes, it seems like they're using tiles just for the purpose of not sending all things from the enhancement layer
[18:05:41 CET] <Muchoz> Alright, thank you. I'll read the spec.
[18:06:52 CET] <JEEB> http://www.itu.int/rec/T-REC-H.265
[18:07:01 CET] <JEEB> includes the scalable extensions etc
[18:07:24 CET] <Muchoz> Ya, I had it open. It's just a bit overwhelming to see 700 pages :p
[18:07:53 CET] <JEEB> also you can always ask the development mailing list for it and see if you get a response
[18:08:02 CET] <Muchoz> The extension itself seems to cover 30 pages. I'll check there first
[18:08:05 CET] <JEEB> SHVC is really not used outside of academia
[18:08:24 CET] <Muchoz> Where is that mailing list?
[18:08:35 CET] <JEEB> jct-vc is the base HEVC mailing list
[18:08:46 CET] <JEEB> scalable coding might have a separate one, although it's part of the spec now
[18:09:16 CET] <Muchoz> Thank you, I'll figure out how to work with mailing lists as well
[18:09:28 CET] <JEEB> I think for SVC the only "real" use case I've seen so far was cisco telecalls
[18:09:34 CET] <JEEB> where for some reason they utilized SVC :)
[18:11:11 CET] <JEEB> and looking at that player they seem to have utilized some out-of-band signaling of layers or so - it's really unclear if they actually tested their setup
[18:11:48 CET] <JEEB> because they don't have link towards whatever they utilized to play the thing etc
[18:12:03 CET] <JEEB> granted I did just scroll through it rather quickly :P
[18:12:07 CET] <JEEB> including the references at the end
[18:14:33 CET] <Muchoz> JEEB, it's because the focus is on streaming it. Not playing it back.
[18:14:41 CET] <JEEB> well they didn't stream
[18:14:45 CET] <JEEB> the encoder was the reference encoder
[18:14:50 CET] <JEEB> that's not going to stream anything for you
[18:15:05 CET] Action: JEEB still has memories of one-frame-per-minute for HM
[18:15:33 CET] <JEEB> and I don't think SHVC's reference encoder is any better
[18:16:14 CET] <Muchoz> With streaming I mean downloading
[18:16:43 CET] <Muchoz> Just using HAS
[18:16:56 CET] <JEEB> also they included all sorts of funky stuff like "this will be X times faster" etc etc - which you should back up with some implementation
[18:17:05 CET] <JEEB> you can't just say it is magically faster :P
[18:17:22 CET] <Muchoz> JEEB, don't complain to me about their paper :p
[18:17:35 CET] <Muchoz> I agree completely, I would want to open source my work.
[18:17:58 CET] <Muchoz> The scalable extension section of the spec is also just filled with code and it's really just useless to me.
[18:18:11 CET] <JEEB> see profiles section
[18:18:13 CET] <Muchoz> I would have to read the entire spec to really just know what is going on.
[18:18:45 CET] <Muchoz> Alright, I'll read the profiles section more thoroughly
[18:19:09 CET] <JEEB> scalable main it seems
[18:20:33 CET] <Muchoz> It's just throwing around a lot of references to variables that I have no idea what they mean.
[22:56:42 CET] <kevinnn> is there anyone here who might know anything about live555?
[23:04:32 CET] <kevinnn> also does anyone know how can I force x264_encoder_encode to only produce 1 NAL unit each cycle
[23:26:51 CET] <BtbN> I don't think that's possible. You can just split them though.
[23:29:09 CET] <kevinnn> BtbN: what do you mean split them?
[23:29:31 CET] <BtbN> NALs are pretty easy to identify, as they start with their usual start code
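The splitting BtbN refers to can be sketched like this (a simplified Annex-B parser; real streams also carry emulation-prevention bytes and may have legitimate trailing zero bytes, which this toy version ignores):

```python
def split_nals(annexb: bytes):
    """Split an Annex-B H.264 byte stream into NAL units by start code.

    Handles both 3-byte (00 00 01) and 4-byte (00 00 00 01) start codes.
    """
    nals = []
    i = annexb.find(b"\x00\x00\x01")
    while i != -1:
        start = i + 3
        nxt = annexb.find(b"\x00\x00\x01", start)
        end = nxt if nxt != -1 else len(annexb)
        # A 4-byte start code shares the 3-byte suffix, so the previous
        # NAL's slice may pick up its leading zero; trim trailing zeros.
        nal = annexb[start:end].rstrip(b"\x00") or annexb[start:end]
        nals.append(nal)
        i = nxt
    return nals

# SPS (type 7), PPS (type 8), IDR slice (type 5) with dummy payloads.
stream = b"\x00\x00\x00\x01\x67\x42\x00\x00\x01\x68\xce\x00\x00\x01\x65\x88"
print([n[0] & 0x1F for n in split_nals(stream)])  # NAL types: [7, 8, 5]
```

The low 5 bits of the first byte after the start code give the NAL unit type, which is how you'd tell parameter sets apart from slice data.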
[23:30:07 CET] <kevinnn> BtbN: for my particular use case I'd like there only to be one NAL unit produced each call to encode
[23:30:14 CET] <BtbN> why?
[23:30:23 CET] <kevinnn> well.. it is a bit complex
[23:31:17 CET] <kevinnn> so I am creating a streaming protocol of sorts. I am noticing that the system latency goes up every time more than one NAL unit is produced
[23:31:29 CET] <kevinnn> by an encode cycle
[23:31:50 CET] <kevinnn> so I wanted to test to see if I adjust it down to 1 NAL unit per cycle if my latency will go back down
[23:32:19 CET] <BtbN> The number of NALs generated per frame depends on a lot of settings. You can't just set "1 NAL please"
[23:32:25 CET] <BtbN> And what do you mean by system latency?
[23:32:27 CET] <BtbN> Encode delay?
[23:32:44 CET] <kevinnn> no, sending, it takes longer to send more NAL units
[23:32:53 CET] <BtbN> That makes no sense
[23:33:01 CET] <kevinnn> I am using live555 rtsp to send each NAL unit
[23:33:20 CET] <kevinnn> and for some reason it takes a really long time to send more NAL units than it does fewer
[23:33:38 CET] <BtbN> Why would the transport protocol care about codec specifics like that?
[23:33:50 CET] <BtbN> I can see it care about I/P/B frame types, but NALs?
[23:34:10 CET] <kevinnn> I honestly have no idea... it is just a trend I noticed
[23:34:24 CET] <kevinnn> I wanted to see if the multiple NAL units was causing the latency
[23:41:25 CET] <BtbN> You could try disabling slice threading and force a single slice, that should reduce the amount of NALs
[23:41:37 CET] <BtbN> But I really have high doubts about that causing latency
[23:46:29 CET] <kevinnn> BtbN: x264_encoder->parameters.b_sliced_threads = 0; x264_encoder->parameters.i_slice_count = 1; x264_encoder->parameters.i_slice_count_max = 1;
[23:46:51 CET] <kevinnn> I tuned these and I am still seeing a few times where more NAL units are produced
[23:47:08 CET] <BtbN> Yes, IDR frames are bound to have multiple, nothing can be done about that.
[23:48:14 CET] <kevinnn> BtbN: hmm, ya oddly enough it seems that the latency is a bit better this time around though...
[23:48:25 CET] <kevinnn> I am going to adjust keyint_max and see what happens
[23:48:47 CET] <BtbN> That can only be a matter of single digit milliseconds really
[23:51:08 CET] <kevinnn> BtbN: perhaps it is the placebo effect
[23:51:20 CET] <kevinnn> it doesn't feel any less latenc
[23:51:24 CET] <kevinnn> latent*
[23:51:30 CET] <kevinnn> and I've tuned key int
[23:51:36 CET] <kevinnn> so it is mostly just 1 NAL unit
[23:51:40 CET] <BtbN> Are you trying to do real time gaming via streaming there?
[23:51:46 CET] <kevinnn> god what could be the issue
[23:51:49 CET] <kevinnn> gaming?
[23:51:55 CET] <kevinnn> no, remote desktop
[23:52:03 CET] <BtbN> Yeah... that'll be a pain to do
[23:52:11 CET] <kevinnn> why?
[23:52:17 CET] <BtbN> ffmpeg is not designed for ultra low latency at all
[23:52:22 CET] <BtbN> it has buffers for everything left and right
[23:52:32 CET] <kevinnn> this set up was actually working pretty well at 30 fps
[23:52:37 CET] <kevinnn> also not using ffmpeg
[23:52:50 CET] <kevinnn> it is raw c++ with x264
[23:52:57 CET] <BtbN> I assume you did put x264 in zerolatency mode already?
[23:53:04 CET] <kevinnn> yea..
[23:54:11 CET] <kevinnn> I am timing the receives on the client end and they are pretty high for some reason. I noticed that if I commented out my fps limiting code (at 60fps) then the receiving end would receive quicker for a little while, but the screen would quickly lag out
[23:55:55 CET] <BtbN> You need to make sure your player does not buffer frames, same for the encoder
[23:56:06 CET] <BtbN> if it goes slow, drop frames, don't buffer
[23:56:43 CET] <kevinnn> live555 should use UDP for transport so when it drops frames it shouldn't buffer
[23:56:54 CET] <BtbN> But your player or the encoder could
[23:57:15 CET] <BtbN> If the player can't keep up for a short moment because of system load, it needs to skip frames
[23:57:58 CET] <kevinnn> BtbN: okay let me see if I can get the player to drop some frames
[23:58:36 CET] <BtbN> Pretty much, just don't have any buffer at all. Play the latest frame, drop anything else
[23:59:14 CET] <BtbN> For that to work you either need to send only I frames, or implement logic to re-request an IDR frame if stuff was lost
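BtbN's "play the latest frame, drop anything else" policy can be sketched as a one-deep buffer (a toy illustration, not live555 or x264 API; all names here are hypothetical):

```python
import threading

class LatestFrameBuffer:
    """Holds at most one frame: a new arrival replaces the old one.

    The player always gets the newest frame; anything it could not
    keep up with is silently dropped instead of queued, so no
    playback latency accumulates.
    """
    def __init__(self):
        self._lock = threading.Lock()
        self._frame = None
        self.dropped = 0

    def push(self, frame):
        with self._lock:
            if self._frame is not None:
                self.dropped += 1  # overwrite: the stale frame is dropped
            self._frame = frame

    def pop(self):
        with self._lock:
            frame, self._frame = self._frame, None
            return frame

# The network delivers 5 frames before the player wakes up: the player
# sees only the newest one, and 4 stale frames were dropped, not queued.
buf = LatestFrameBuffer()
for n in range(5):
    buf.push(f"frame{n}")
print(buf.pop(), buf.dropped)  # frame4 4
```

Combined with the IDR re-request logic BtbN mentions, dropping instead of buffering is what keeps a remote-desktop stream from drifting behind real time.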
[00:00:00 CET] --- Sat Mar 30 2019