[Ffmpeg-devel-irc] ffmpeg.log.20180310

burek burek021 at gmail.com
Sun Mar 11 03:05:02 EET 2018


[05:17:02 CET] <fella> when I do: 'ffmpeg -i IN_A/V.mp4 -i IN_A.aac -map 0:0 -map 0:1 -map 1:0 -c copy OUT_2CH.mp4' and then 'ffmpeg -i OUT_2CH.mp4 -map 0:2 -c copy TEST.aac' checksums for TEST.aac and IN_A.aac are identical, as expected - *BUT* - when I do 'ffmpeg -i OUT_2CH.mp4 -map 0:0 -map 0:1 -c copy TEST.mp4' TEST.mp4 is different to IN_A/V.mp4!!
[05:17:13 CET] <fella> ^^ what's the reason for that?
[07:15:27 CET] <StinkerB06> HI!
[07:26:57 CET] <furq> fella: you forgot to map 0:2
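furq's point, written out as an explicit command (file names taken from the log above; note that even with all streams mapped, a remux is generally not byte-identical to the source, since the muxer rewrites container-level data):

```shell
# Map every stream from the input instead of listing them one by one,
# so the third (audio) stream is not silently dropped in the remux.
ffmpeg -i OUT_2CH.mp4 -map 0 -c copy TEST.mp4
```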
[07:34:31 CET] <ZexaronS> Hello
[07:34:55 CET] <ZexaronS> I'm wondering about audio channel mapping, does it change from codec to codec? my original is AC-3 and the conversion is to OGG
[07:35:23 CET] <ZexaronS> Just trying to find out which one is left or right, first or the third. seems like the number 2 is center
[07:35:42 CET] <ZexaronS> that's just how Vegas Pro adds this OGG file by itself, one channel per audio track
[07:35:52 CET] <furq> it is different between codecs but ffmpeg should automatically map between layouts
[07:36:39 CET] <ZexaronS> oh, but I didn't specify anything about that, does the output console by default tell more, I use verbose output all the time
[07:36:54 CET] <furq> afaik they should both be FL, FR, FC for the first three channels
[07:37:05 CET] <ZexaronS> It's 5.1 audio source and 5.1 speaker setup
[07:37:23 CET] <ZexaronS> it's modern 5.1, not legacy, which means side channels are used, not rear
[07:37:45 CET] <ZexaronS> but yeah I could plug it physically to rear if I wanted to
[07:39:25 CET] <ZexaronS> According to the waveform, the second track looks a lot different than number 1 and 3, this audio source actually has poor surround, the volume of the rears/sides is quite low, LFE only occasionally used
[07:39:40 CET] <ZexaronS> 4 5 are similar
[07:40:33 CET] <furq> apparently it's L, R, C, RL, RR, LFE
[07:40:44 CET] <ZexaronS> for AC-3 ?
[07:40:46 CET] <furq> i assume RL is SL if you're converting from 5.1 (side)
[07:40:48 CET] <furq> for vorbis
[07:41:19 CET] <furq> ac3 is also L, C, R, RL, RR, LFE
[07:41:22 CET] <furq> er, always
[07:41:36 CET] <furq> or SL/SR if it's side
[07:41:37 CET] <ZexaronS> I think i used libvorbis and only specified maximum bitrate, vorbis was experimental
[07:41:52 CET] <furq> yeah don't use the builtin vorbis encoder
[07:42:09 CET] <furq> but yeah if that's correct then channels 2 and 3 will have been swapped
[07:42:19 CET] <ZexaronS> Ah, so it transferred AC-3 mappings to vorbis, not rearranging anything, that's how it is supposed to work by default, right?
[07:42:37 CET] <furq> no it should shuffle channels to get the correct mapping for the output
[07:42:58 CET] <furq> i've never used 5.1 vorbis though so i couldn't tell you if it works
[07:45:38 CET] <ZexaronS> Oh, well, I don't have any other choice but to figure this out right now and get to the bottom of it ... trying to use MPC-HC or other programs
[07:46:15 CET] <ZexaronS> LAV Audio not sure if it'll help, but it has status which shows L R C LFE SL SR
[07:46:39 CET] <ZexaronS> But that's probably not codec/file specific
[07:47:14 CET] <ZexaronS> ffprobe says ac3 has: channel_layout=5.1(side)
[07:48:27 CET] <ZexaronS> yeah i'll have to start with the original and go from there
[07:52:09 CET] <ZexaronS> well MPC-HC has its own thing, they probably hook it up depending on codec
[07:52:28 CET] <ZexaronS> trying custom channel mappings
[07:57:20 CET] <ZexaronS> Ah, looks like MPC-HC doesn't have it correct, at least for this type of AC-3, left front is super low on volume, probably using the wrong side channel
[07:58:12 CET] <ZexaronS> same thing with the ogg
[07:58:41 CET] <ZexaronS> I'll probably have to dissect this completely get the separate channels out to see which number is what
[08:00:11 CET] <ZexaronS> yeah even if I use default MPC-HC settings still not right there, trying MPC-BE
[08:30:55 CET] <ZexaronS> OMG, now i figured out not only I had Front Left and Right mixed up at the splitter physically (on the subwoofer panel) but that the audio card is broken and outputting one of the front channels in a much lower volume
[08:32:08 CET] <ZexaronS> Well I do have custom drivers for my Xonar-D1, but this issue happened back on Win7 as well with same drivers for 3 years, it started appearing 2 years ago and I only noticed it in headphones, LMAO I THOUGHT my headphones were going bad, so I changed 3 of them omg
[08:32:25 CET] <ZexaronS> err earphones, cheap
[12:54:34 CET] <ZexaronS> trying to create a great quality downmix, first time ever, never tried before, looks like i'll have to move away from 5.1 since I'm moving soon and there's no point investing, sound card broken and 5.1 surround buzzing
[12:55:05 CET] <JEEB> the swresample/avresample resamplers should be correct (TM) by default
[12:55:15 CET] <ZexaronS> but also the length of the analog cables for surround speakers means the volume is decreased a lot, but also the positioning isn't optimal in the room to even attach speakers to
[12:55:44 CET] <ZexaronS> thirdly, the surround sources simply aren't that good, almost no sounds in the other 3 channels
[12:56:29 CET] <ZexaronS> So I'm asking if ffmpeg is good for downmixing or I should use something else?
[12:56:45 CET] <JEEB> the libraries available in FFmpeg for downmixing are correct as far as I know
[12:56:51 CET] <ZexaronS> and this would be AC-3 to OGG preferably
[12:56:57 CET] <JEEB> uhh
[12:57:07 CET] <JEEB> there's really no reason to re-encode that to be honest
[12:57:14 CET] <durandal_1707> transcoding is not good for audiophiles
[12:57:20 CET] <JEEB> it's already low bit rate and a lot of players can do the downmix properly
[12:57:22 CET] <ZexaronS> or AC-3 to AC-3, it's going to be put back into the original MKV
[12:57:46 CET] <JEEB> so you're taking something already gone through lossy compression, and going to compress it again (Even though after downmixing)
[12:57:54 CET] <JEEB> with AC-3 bit rates it makes no real sense IMHO
[12:58:00 CET] <JEEB> just have the player downmix
[12:58:12 CET] <ZexaronS> JEEB: So you think i should keep using the surround source and do the downmix live in playback?
[12:58:22 CET] <JEEB> yes
[12:58:32 CET] <ZexaronS> I wasn't going to remove the surround, just add the stereo version, but okay
[12:58:37 CET] <JEEB> oh
[12:58:54 CET] <JEEB> but yea, even in that case it tends to be a waste
[12:59:23 CET] <JEEB> unless you specifically need it for some compatibility bullshit (for which you can quickly always create that stereo AAC or something for plastic boxes - although most plastic boxes already support AC-3)
[12:59:39 CET] <JEEB> also if it's a hollywood/movie/series 5.1 mix
[12:59:49 CET] <JEEB> those generally suck for 2.0 downmixes :<
[13:00:10 CET] <JEEB> they mix it in a way that has the effects really loud
[13:00:15 CET] <ZexaronS> Well I just figured out something's broken on my Xonar D1, the right audio channel has become 3 times quieter and even increasing software to +12 db is not enough to bring it back to normal
[13:00:34 CET] <JEEB> and then stuff like people talking gets mixed really not loud
[13:00:40 CET] <durandal_1707> ZexaronS: is this for headphones listening?
[13:00:43 CET] <JEEB> could just be how they do the audio in general of course
[13:01:18 CET] <ZexaronS> No not headphones, I would just keep using the Logitech X-540 then but it's old and produces a buzz when powered in https://i.imgur.com/PesgALo.png
[13:04:59 CET] <ZexaronS> not sure if you guys saw previous posts, I was trying to figure out channel mappings for ac3 and ogg, got that solved but I thought MPC-HC and then MPC-BE and VLC would all not map channels correctly, wrong, it was all my hardware problem here
[13:05:20 CET] <ZexaronS> So then I would just select mixer to stereo in the LAV audio settings to do it
[13:05:33 CET] <ZexaronS> LAV is respectable and downmix should be okay right ?
[13:09:48 CET] <ZexaronS> from afar i can see it's a weird mix, FL and FR are music, center is all speech and sounds from things, SL and SR are super quiet and only have sounds from things excluding human speech, and LFE is used sporadically
[13:09:58 CET] <ZexaronS> it's old, this is like pre 2006
[13:10:22 CET] <ZexaronS> it's 5.1 side layout
[13:10:29 CET] <ZexaronS> same as my output
[13:11:43 CET] <JEEB> ZexaronS: LAV Audio uses one of the libraries in FFmpeg
[13:12:01 CET] <JEEB> (I mean, the whole thing is just interfaces around FFmpeg's libraries in DShow)
[13:14:22 CET] <ZexaronS> What would be worth a bit, would be creating a downmix which would combine center into FrontLeft and FrontRight, then it would combine SideLeft into FrontLeft, and the other side, then I would copy that result and overwrite the SideLeft and SideRight so they would have all the sounds and full volume
[13:14:36 CET] <ZexaronS> Leaving center and LFE alone
[13:15:06 CET] <ZexaronS> Now I always wondered, what happens to LFE when it's downmixed ?
[13:15:23 CET] <JEEB> according to the correct way of downmixing you leave LFE out
[13:16:03 CET] <JEEB> there used to be a DShow filter that left some LFE in the mix, so I had a thing in the configuration utility that would enable to do a similar thing in LAV Audio :P
[13:16:16 CET] <JEEB> which is 100% incorrect, of course
[13:16:29 CET] <ZexaronS> Yeah since a stereo does not include LFE, but is there a 2.1 downmix .. nope, i see only 4.0
[13:17:11 CET] <ZexaronS> 2.1 speakers exist normally out there, so why no support?
[13:17:21 CET] <JEEB> the FFmpeg libraries let you do 2.1 downmixes if you want
[13:17:37 CET] <ZexaronS> Audio is just so bad all across the tech industry, it's ridiculous how abused it is.
[13:17:38 CET] <JEEB> so it's a case of whether or not you export that availability
[13:17:49 CET] <JEEB> (in things like LAV Audio or whatever you're using :P)
[13:18:47 CET] <JEEB> then again, if you only have two channels actually taken to the device, then most likely system mixer will downmix that to plain stereo
[13:21:05 CET] <ZexaronS> oh okay
[13:21:35 CET] <ZexaronS> Do left and right get mixed too, ?
[13:21:44 CET] <furq> fyi -ac 3 will downmix 5.1 to 2.1
[13:21:54 CET] <furq> it's sort of dumb that -ac doesn't just take a layout name
[13:22:05 CET] <JEEB> you're supposed to use the audio filter itself
[13:22:10 CET] <JEEB> that one takes in layout names
[13:22:17 CET] <furq> i thought it was preferred to use -ac
[13:22:20 CET] <JEEB> nope
[13:22:23 CET] <ZexaronS> Or only the sides get truncated down, SL to FL, SR to FR, C to Both ?
[13:22:26 CET] <furq> well then someone should tell that to the filter docs
[13:22:30 CET] <JEEB> lol
[13:22:54 CET] <furq> https://ffmpeg.org/ffmpeg-filters.html#Mixing-examples
[13:22:57 CET] <durandal_1707> Only Carl prefers -ac
[13:23:02 CET] <furq> Note that ffmpeg integrates a default down-mix (and up-mix) system that should be preferred (see "-ac" option) unless you have very specific needs.
[13:23:11 CET] <JEEB> oh
[13:23:22 CET] <JEEB> that's a general recommendation of doing a normal downmix
[13:23:32 CET] <JEEB> which just ends up looking real weird with the ac option being mentioned
[13:23:44 CET] <JEEB> as opposed to setting the mixing coefficients yourself
[13:23:50 CET] <JEEB> that's what it is trying to say
[13:24:03 CET] <JEEB> (that's why it's under the "pan" filter examples)
[13:24:09 CET] <furq> what filter were you talking about then
[13:24:34 CET] <JEEB> whatever the filter was that inserts swresample
[13:24:36 CET] <ZexaronS> So what LAV does for Stereo Downmix is some custom parameter which may not be the same as ffmpeg default ?
[13:24:45 CET] <JEEB> ZexaronS: it should use the plain defaults
[13:24:52 CET] <JEEB> because they're correct
[13:24:55 CET] <furq> the only one the docs mention for that is aresample
[13:25:05 CET] <furq> unless i'm bad at ^F
[13:25:10 CET] <ZexaronS> Oh, but what is correct, do left and right get mixed between each other too ?
[13:25:26 CET] <JEEB> you'd have to grab the specifications from a really long thread of discussion for that
[13:25:30 CET] <JEEB> easier to just look up the code
[13:25:31 CET] <JEEB> :P
[13:25:37 CET] <ZexaronS> Oh okay, np
[13:25:40 CET] <furq> oh nvm i guess it is aresample
[13:26:12 CET] <JEEB> or aformat possibly, I'm not sure
[13:26:12 CET] <ZexaronS> Wouldn't it be cool if the Docs would be autogenerated
[13:26:24 CET] <JEEB> they are
[13:26:47 CET] <JEEB> https://www.ffmpeg.org/ffmpeg-all.html#aformat-1
[13:26:50 CET] <ZexaronS> I don't think the commands and values are dynamically linked to the source repo .. ?
[13:27:05 CET] <JEEB> no, but the site gets generated from git commits
[13:27:09 CET] <JEEB> and then there's the doxygen
[13:27:09 CET] <furq> aresample=ocl=2.1 seems to work
[13:27:14 CET] <furq> but i guess aformat does too
[13:27:25 CET] <furq> that's nice to know anyway
[13:27:41 CET] <JEEB> yea I have no idea about the preference between those, as in which is explicit and which is implicit insertion of mixing
[13:27:53 CET] <JEEB> I think I looked into this some time ago when writing some audio filtering code
[13:27:59 CET] <JEEB> but I've since completely forgotten
[13:28:02 CET] <JEEB> isn't life great?
[13:28:18 CET] <ZexaronS> Well that's good then, good docs
[13:28:51 CET] <pdl> Hey im new to ffmpeg and I got 2 questions, I would like to do a 24/7 stream music stream on youtube. I could of course run obs on my pc but I would rather stream from my vps. is there a way to do this? I have no idea how ffmpeg works but I have heard you can stream multiple video files from a list, but would it also be possible to stream one (looping) video file but get the audio from a playlist? (so the video and audio are separate).
[13:29:05 CET] <furq> i assume from the lack of default values for center_mix_level et al that it's done automatically unless you override it
[13:29:27 CET] <furq> but you know what they say about assumptions
[13:29:31 CET] <furq> they make an ass out of u and mptions
[13:29:33 CET] <JEEB> you shouldn't have to define anything other than the output layout for mixing
[13:29:40 CET] <furq> i'd hope so yeah
[13:30:00 CET] <JEEB> nah, pretty sure swresample only needs that
[13:30:08 CET] <JEEB> even if lavfi is stupid
[13:30:27 CET] <JEEB> or at least that's what I remember setting when doing the code I wrote now more than half a year ago
[13:30:30 CET] <JEEB> :V
[13:30:45 CET] <pdl> yo anyone here know an answer for my question?
[13:31:00 CET] <furq> there's no way to do that that isn't annoying
[13:31:21 CET] <pdl> dang
[13:31:23 CET] <pdl> rip
[13:31:25 CET] <JEEB> wouldn't you be able to push crap out of an mpv playlist or something?
[13:31:30 CET] <JEEB> with the encoding feature
[13:31:34 CET] <gnarface> pdl: i'm nearly certain the answer is yes
[13:31:39 CET] <furq> it'd be nice if there was an option for the concat demuxer that reloaded the concat list periodically
[13:31:40 CET] <gnarface> pdl: i'm foggy on specifics
[13:31:57 CET] <JEEB> > concat things in FFmpeg
[13:32:10 CET] <JEEB> man I still love seeing people fuck up with those because of the mess they are
[13:32:16 CET] <furq> lol
[13:32:18 CET] <gnarface> pdl: (might also require some clever bash)
[13:32:23 CET] <JEEB> someone made a protocol, another made a demuxer, another made a filter
[13:32:36 CET] <furq> maybe something like the movie source that reads from a playlist then
[13:32:36 CET] <JEEB> but yea, I wonder if mpv could be used for realtime pushing out RTMP
[13:32:38 CET] <pdl> thanks gnarface ill look into it
[13:32:48 CET] <JEEB> and then you'd have full playlist control
[13:32:48 CET] <JEEB> etc
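One hedged sketch of pdl's setup with ffmpeg alone (stream key and file names are hypothetical; note the concat list is read once at startup, so this is not the dynamically reloading playlist furq wished for):

```shell
# Loop one video forever, pull audio from a concat playlist, and push
# to an RTMP ingest; -re throttles reading to realtime for streaming.
# playlist.txt contains lines like:  file 'song1.mp3'
ffmpeg -stream_loop -1 -re -i loop.mp4 \
       -f concat -safe 0 -i playlist.txt \
       -map 0:v -map 1:a -c:v libx264 -c:a aac -shortest \
       -f flv rtmp://a.rtmp.youtube.com/live2/STREAM_KEY
```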
[13:32:58 CET] <JEEB> not sure if mpv closes the output between things in playlists
[13:33:15 CET] <JEEB> but yea, definitely possible with the API, but ffmpeg.c is a mess as usual :)
[13:33:48 CET] <pdl> im a mess too so fuck lmfao
[13:33:57 CET] <durandal_1707> ffmpeg.c is piece of art, please don't spread bad words about it here
[13:34:08 CET] <JEEB> go look at the VSYNC code
[13:34:14 CET] <furq> i should really dust off my C knowledge and write something that does this
[13:34:17 CET] <furq> people ask for it a lot
[13:34:19 CET] <JEEB> go look at the X parts that "handle timestamp discontinuities"
[13:34:26 CET] <JEEB> ffmpeg.c is a mess
[13:34:32 CET] <durandal_1707> JEEB: send a patch
[13:34:36 CET] <JEEB> ...
[13:34:40 CET] <JEEB> you think I understand that shit?
[13:34:47 CET] <JEEB> I can send patches to remove hacks that I understand
[13:34:51 CET] <JEEB> which are usually not in ffmpeg.c
[13:35:06 CET] <JEEB> but generally I bet people would be against removing hacks and their workaround parameters
[13:35:15 CET] <JEEB> because "it was possibly needed seven years ago"
[13:35:29 CET] <furq> submit it to libav then
[13:35:39 CET] <JEEB> guess if it has that hack
[13:35:58 CET] <JEEB> not that Libav is usable for me in any realistic metric
[13:36:25 CET] <furq> another successful interaction
[13:36:31 CET] <JEEB> I used to send patches there, and I still ask opinions etc from people I know regarding that stuff
[13:39:07 CET] <JEEB> furq: I think the hardest part of that would be the dynamic playlist thing
[13:40:08 CET] <JEEB> and the thing that starts inserting silence when there's no audio AVFrames available
[13:40:13 CET] <JEEB> (from the input)
[13:40:51 CET] <durandal_1707> JEEB: start by writing patches which will add comments to hacks, something like: /* XXX this is hack, fixme */
[13:40:58 CET] <ZexaronS> Handbrake has a PR in which they're possibly looking to move to FFMPEG in a year or so, after ver 1.2 ... but 1.1 isn't out yet but should be soon, how do you guys take the news?
[13:41:19 CET] <durandal_1707> fake news everywhere
[13:42:08 CET] <JEEB> durandal_1707: I think sometimes I could do a drinking game of these things I find. for example one day I found someone asking why their time base was incorrect in mp4 that they wrote
[13:42:43 CET] <durandal_1707> JEEB: what was solution to timebase problem?
[13:42:57 CET] <JEEB> let me find you the history of commits that did that issue
[13:43:03 CET] <JEEB> and you can take a glass of vodka for each
[13:43:32 CET] <JEEB> right found it
[13:43:45 CET] <JEEB> first this http://git.videolan.org/?p=ffmpeg.git;a=commit;h=b02493e47668e66757b72a7163476e590edfea3a
[13:43:51 CET] <JEEB> this is in 2012
[13:44:08 CET] <JEEB> so he hacks around API clients not setting the time base correctly by bumping it up until 10k
[13:44:24 CET] <JEEB> then someone herps a derp at it of course
[13:44:31 CET] <durandal_1707> OMG, this is awful
[13:44:53 CET] <JEEB> the result is an AVOption that sets a GLOBAL time base for ALL video tracks in output http://git.videolan.org/?p=ffmpeg.git;a=commit;h=7e570f027b9bb0b5504ed443c70ceb654930859d
[13:44:57 CET] <ZexaronS> durandal_1707: It's been added to the 1.2 milestone https://github.com/HandBrake/HandBrake/pull/1078
[13:45:02 CET] <JEEB> this in 2013
[13:45:06 CET] <JEEB> and then
[13:45:21 CET] <JEEB> elenril fixes shit in 2014 http://git.videolan.org/?p=ffmpeg.git;a=commit;h=194be1f43ea391eb986732707435176e579265aa
[13:45:28 CET] <JEEB> so most likely this hack is no longer required
[13:45:48 CET] <JEEB> and to be completely fucking honest, the original response should've been "fix your fucking time base"
[13:46:15 CET] <JEEB> so now we've got a hack, and another AVOption hack on top of it
[13:46:21 CET] <furq> how does the mailing list typically respond to patches which are just /* this fucking sucks */
[13:46:34 CET] <furq> i expect that goes down well
[13:47:00 CET] <JEEB> durandal_1707: I can send a patch for this stuff because I understand it, but then it gets all the AVOption deprecation bullshit etc
[13:47:13 CET] <JEEB> and ffmpeg.c now has an output time base parameter I think?
[13:47:22 CET] <JEEB> so this shouldn't be required any more for any API client
[13:47:46 CET] <JEEB> on the contrary, it might have hidden bugs in people's API clients
[13:48:16 CET] <durandal_1707> the stupid hack is still there....
[13:48:20 CET] <JEEB> of course it is
[13:48:28 CET] <JEEB> that is why someone like a week or two ago asked here
[13:48:35 CET] <JEEB> why he wasn't getting the time scale he expected
[13:48:49 CET] <JEEB> (which is what time base is called in MOV/ISOBMFF)
[13:49:19 CET] <durandal_1707> so what he use now?
[13:49:58 CET] <JEEB> I just told him it's a Feature (TM)
[13:50:16 CET] <durandal_1707> headdesk
[13:50:17 CET] <JEEB> and since he is still getting exact timestamps (just on an enlarged time base), he was OK with it
[13:50:44 CET] <JEEB> but I can totally see people getting confused when they ask for X and get X*1000
[13:50:46 CET] <JEEB> or something
[13:50:51 CET] <JEEB> (´4@)
[13:51:31 CET] <ZexaronS> just make the hack optional then, no ?
[13:51:54 CET] <JEEB> the hack shouldn't even be there, the option already is in ffmpeg.c to set output time base (AFAIK)
[13:52:10 CET] <JEEB> and other API clients should just set the time base they need (10k or whatever for VFR)
[13:52:37 CET] <JEEB> ZexaronS: also that's what the second commit literally did. it added an AVOption into the muxer to set the specific timescale wanted
[13:52:48 CET] <JEEB> globally. for all video tracks in the mux
[13:55:15 CET] <JEEB> durandal_1707: anyways now that I've seen that thing I can try to send out a patch for it but I'm really not sure how it will go. hopefully people will see it the way I do.
[13:56:12 CET] <JEEB> or maybe I will raise a discussion first
[14:06:45 CET] <ZexaronS> Is there something in MKV, timing info? because I used raw AVC and AC3 and tried to mux it to MP4 but it stretched the video to twice as long
[14:07:04 CET] <ZexaronS> forcing fps or vf did not help
[14:07:44 CET] <ZexaronS> Then I figured I don't even need to do it separately, just go directly MKV to MP4 with AC3, since I thought AC3 wouldn't work inside MP4
[14:08:34 CET] <JEEB> (hint: it does)
[14:08:45 CET] <JEEB> AC-3 is defined in MP4RA
[14:09:34 CET] <JEEB> durandal_1707: posted on the ML, I will see how the community reacts to it
[14:16:10 CET] <durandal_1707> JEEB: great!
[14:16:55 CET] <furq> ZexaronS: raw avc doesn't have any timing information, you need to specify the framerate when you mux it
[14:17:05 CET] <furq> i'm guessing your video was 50fps or something and it got interpreted as 25
[14:17:32 CET] <kepstin> (25fps is the default for most ffmpeg input formats if unspecified)
[14:17:37 CET] <furq> right
[14:17:54 CET] <furq> there's no need to go via raw streams though
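Since a raw stream carries no timestamps, the rate has to be handed to the demuxer, i.e. placed before the `-i` it applies to (file names hypothetical):

```shell
# -framerate applies to the following raw H.264 input; without it the
# demuxer assumes 25 fps and the muxed video comes out stretched.
ffmpeg -framerate 24000/1001 -i video.h264 -i audio.ac3 -c copy out.mp4
```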
[14:18:01 CET] <ZexaronS> Well it is 23.97, and my MPC reported 12.xx
[14:22:32 CET] <ZexaronS> furq: I tried -r 24000/1001 but didn't help
[14:22:52 CET] <JEEB> note: ffmpeg.c cannot rescale timestamps with remux
[14:23:05 CET] <JEEB> the API lets you do it, but that's not a thing in ffmpeg.c
[14:23:20 CET] <JEEB> so you will want to utilize mkvmerge(gui) or something else
[14:24:55 CET] <ZexaronS> furq: https://pastebin.com/pi0nQXhS
[14:25:27 CET] <ZexaronS> JEEB Well the point was to mux into MP4, not MKV
[14:25:40 CET] <ZexaronS> Vegas Pro can't read MKVs
[14:26:03 CET] <JEEB> yea but with remux ffmpeg.c cannot re-timestamp in that way
[14:26:15 CET] <JEEB> so then you will need to use l-smash's muxer or something
[14:26:56 CET] <ZexaronS> How does the API work different? API about what?
[14:27:20 CET] <ZexaronS> I mean, what is the API called, what it is for?
[14:27:48 CET] <JEEB> the internal APIs which are what FFmpeg consists of.
[14:27:53 CET] <JEEB> ffmpeg.c is the command line application API client
[14:28:18 CET] <ZexaronS> So the API has a feature that it's command line companion does not?
[14:29:05 CET] <JEEB> yes, basically the command line API client does not have a feature which is doable with the APIs you have on hand
[14:29:41 CET] <ZexaronS> But the limitation is ... simply missing code or what?
[14:30:01 CET] <ZexaronS> I have an API on hand? I didn't know that
[14:30:20 CET] <BtbN> ffmpeg.c is pretty much just a very commonly used proof of concept
[14:30:20 CET] <JEEB> well, what the FFmpeg libraries provide
[14:30:23 CET] <ZexaronS> The command line interface is an API too ?
[14:30:32 CET] <JEEB> BtbN: yup
[14:31:20 CET] <ZexaronS> BtbN: Why a proof of concept? Many people do serious work with it, why would it be regarded as such?
[14:31:43 CET] <ZexaronS> Or they don't?
[14:31:49 CET] <JEEB> yes I know - that is the scariest part of it
[14:31:58 CET] <JEEB> you can get *so* darn far with it in many cases
[14:32:08 CET] <JEEB> but then you start hitting issues with its design/whatever
[14:32:13 CET] <JEEB> or just lacking features
[14:32:31 CET] <BtbN> it's as generic as it gets. You just can't make one application that works for every possible case
[14:32:38 CET] <JEEB> yes
[14:33:01 CET] <ZexaronS> Sorry, but ... there's more cases, which ones ?
[14:33:03 CET] <SamanthaUK> hello everyone, first time visitor from UK here :)
[14:33:38 CET] <JEEB> dynamically added tracks is one thing. lacking various use cases like timestamp recreation (setting another frame rate for a mux)
[14:33:50 CET] <JEEB> etc etc
[14:34:22 CET] <SamanthaUK> Is is possible to use FFMPEG to take the input of a real time RTSP feed and transcode it real time to an MP4 / HTTPv5 output? I have a CCTV camera that spits out RTSP and I want to incorporate it easily into another application
[14:34:44 CET] <JEEB> whatever is HTTPv5
[14:35:02 CET] <ZexaronS> We can't change some of the fundamentals right? Like some formats codecs just can't do things without recode?
[14:35:18 CET] <SamanthaUK> JEEB, really looking for anything that a web browser will interperate natively
[14:35:51 CET] <ZexaronS> So fundamentals should be thought out, what are the things that should be possible without recode, FPS is one of them, that's a rule I would make and write code until it works, I wouldn't release a new version until that base thing works
[14:36:09 CET] <JEEB> ZexaronS: there should just be separate tools for separate use cases. period
[14:36:15 CET] <ZexaronS> Hypothetically speaking if we rewrote ffmpeg or something
[14:36:27 CET] <JEEB> that way they could be kept separate and (relatively) simple :P
[14:36:45 CET] <JEEB> SamanthaUK: so HLS HTTP output into a web server that takes in POST/DELETE
[14:37:07 CET] <JEEB> it will require hls.js on the web server as well, but it is playable in most browsers
[14:37:08 CET] <ZexaronS> I don't think this is particularly outside of anything, not sure why this is regarded as a use case, it has to do with video transcoding and muxing, same topic
[14:38:04 CET] <SamanthaUK> ok, I will do some reading on that. If it helps to understand the context, I am trying to integrate a CCTV camera in to Octoprint's camera interface
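A sketch of JEEB's HLS suggestion, writing into a directory served by any static HTTP server (camera URL and paths are hypothetical; browser playback then goes through hls.js):

```shell
# Pull RTSP over TCP, keep the video as-is, and emit a live HLS playlist
# whose old segments are deleted as new ones are written.
ffmpeg -rtsp_transport tcp -i rtsp://camera.local/stream \
       -c:v copy -c:a aac \
       -f hls -hls_time 4 -hls_list_size 6 -hls_flags delete_segments \
       /var/www/html/cam.m3u8
```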
[14:38:17 CET] <ZexaronS> Maybe we would need some kind of ffmpeg codec and format, as an example how to do things right
[14:38:50 CET] <ZexaronS> and all the other codecs would be regarded as out of standard, if they can't do fps change without recode
[14:39:17 CET] <JEEB> it's not really the codecs/formats per se that are the problem :P
[14:39:31 CET] <ZexaronS> I was just assuming that for the sake of the point
[14:39:43 CET] <JEEB> the problem is just that everyone thinks ffmpeg.c should do all the things, while separating different things into different API clients would make more sense
[14:40:10 CET] <ZexaronS> I didn't even know there's anything else other than the command line thing :/
[14:40:35 CET] <ZexaronS> or :|
[14:45:42 CET] <ZexaronS> I just think the databases need some kind of 3D visualization to give you some kind of a map to get a sense of the scale of the program and where everything is ...
[14:46:08 CET] <ZexaronS> Said this a number of times, someone's going to do it in UE4 one time and it'll become something normal
[14:49:10 CET] <furq> VRML studio
[14:54:14 CET] <ZexaronS> furq: doesn't seem like it has anything to do with what I meant, and pretty much no hits
[14:54:18 CET] <ZexaronS> vitality studio ?
[14:54:37 CET] <ZexaronS> there something from 2013 on sourceforge, but has nothing to show
[14:55:07 CET] <ZexaronS> Vivaty Studio
[15:01:00 CET] <ZexaronS> Ah VRML is some language. no my idea does not require any such language, you can do it in UE4 now, today
[15:01:06 CET] <ZexaronS> or any 3D game engine
[15:02:12 CET] <ZexaronS> or a custom presentation, just like IDA Pro shows structure of a compiled program
[15:02:26 CET] <ZexaronS> You would do the same with source code
[15:04:03 CET] <ZexaronS> i'll go to ##programming and talk more there
[15:04:29 CET] <ZexaronS> to see what they think
[19:27:39 CET] <bodqhrohro> Hi, I tried to setup ffserver but when I run `ffmpeg -i filename http://127.0.0.1:8090/feed1.ffm', it quickly recodes the part of file and exits with no errors, so I can't even connect a client in time. Is that normal?
[19:28:49 CET] <bodqhrohro> Configuration of ffserver, if needed: http://paste.debian.net/1014155/
[19:29:38 CET] <durandal_1707> ffserver have been removed from upstream, so you will most likely not get support for it here
[19:30:48 CET] <JEEB> well even before it was removed nobody really knew how to use it
[19:31:09 CET] <furq> i did
[19:31:23 CET] <furq> the correct way to use ffserver is to not use ffserver
[19:31:32 CET] <bodqhrohro> So I'll better try something else like VLC?
[19:31:40 CET] <furq> what are you trying to do
[19:32:09 CET] <DHE> for cli services we encourage nginx-rtmp, or if high latency is okay you can look at HLS or DASH with any static content HTTP server
[19:32:32 CET] <bodqhrohro> YouTube has broken RTSP support for my phone, so I'm trying to build up a proxy for it
[19:32:40 CET] <furq> oh fun
[19:32:46 CET] <furq> yeah if you specifically need rtsp then i don't know what to suggest
[19:32:51 CET] <furq> there's plenty of options for rtmp or http
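DHE's nginx-rtmp route, as a minimal sketch (the addresses and the rtmp application name "live" are assumptions about the nginx configuration):

```shell
# Re-serve a source over RTMP through a local nginx-rtmp instance;
# -re paces a file input at realtime, -c copy avoids re-encoding.
ffmpeg -re -i input.mp4 -c copy -f flv rtmp://localhost/live/stream
```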
[19:36:01 CET] <ZexaronS> Hey guys
[19:36:02 CET] <JEEB> rtsp I've had work with VLC which uses the live555 or whatever it was
[19:36:09 CET] <JEEB> so probably worth checking out
[19:36:19 CET] <JEEB> if rtsp is what you really need
[19:37:00 CET] <ZexaronS> So I tried panning and setting up my 6 channel source from before, LFE is used sporadically, but still fine, but this is some bad mix, center is all speech so it doesn't matter if it's left or right, there's no effect whether someone is behind or not
[19:37:30 CET] <ZexaronS> I wasn't meaning using a downmix for everything then, but I would do it only for this part of the video im editing in Vegas
[19:37:51 CET] <ZexaronS> It would make things easier, just doing it in ffmpeg then importing, rather than bothering in vegas
[19:38:16 CET] <ZexaronS> because I don't know how center has to be panned, I got it similar to a downmix but still sounds weird
[19:38:35 CET] <ZexaronS> if center is center, if I disable all other channels then it sounds totally unrealistic
[19:39:46 CET] <ZexaronS> So the center channel has to have output to both left and right front with default -6 db while it's -6db on the center also, total weird default behavior, had to boost it to +6 to get the center up to normal, no idea why pan works like that
[19:41:04 CET] <ZexaronS> The LAV Audio is doing something else when it's playing the 6 channel source, i replicated the thing in vegas, channel to specific speaker, and the result was totally different and unlistenable
[19:41:48 CET] <JEEB> LAV Audio without any mixing is just passing it to the windows mixer, which mixes it (either correctly or incorrectly) for the output
[19:41:54 CET] <JEEB> if you enable mixing in LAV Audio, that is swresample
[19:42:02 CET] <JEEB> or well, exactly avresample
[19:42:12 CET] <JEEB> but still, they should do the same thing as far as downmix is concerned :P
[19:45:15 CET] <ZexaronS> JEEB oh, well right now im on Win10, and I can't seem to see if it's sending to MME, DS or WASAPI ?
[19:46:10 CET] <ZexaronS> There's no controls for this windows mixer, kinda weird, can't see channel status and volumes, meh, tried to find it for another reason before
[19:46:22 CET] <JEEB> that is 100% up to your player's rendering path :P
[19:46:39 CET] <JEEB> anyways, personally I've only seen the windows mixer f up with 3.1 so far
[19:46:46 CET] <JEEB> at least in a really bad way
[19:46:56 CET] <JEEB> everything else seemed to get mixed correctly with stereo output
[19:47:28 CET] <ZexaronS> Well if the source is 5.1 and output is 5.1, what's there to mix ?
[19:48:06 CET] <JEEB> there shouldn't be anything. if you want, compare a fully open source thing for the whole chain (except for the MS mixer at the end), which would be mpv
[19:48:20 CET] <JEEB> https://mpv.srsfckn.biz/
[19:48:39 CET] <JEEB> just mpv FILE should generally pass it to the mixer
[19:49:27 CET] <JEEB> and then mpv --audio-channels=stereo would downmix to stereo (--audio-channels=help for how to define custom outputs)
[19:52:18 CET] <ZexaronS> okay ... hmm oh minimalist GUI .... here we go. let me check VLC first
[19:53:09 CET] <JEEB> the problem is that I know mpv better than VLC, and the special cases or what it uses are more known :P
[19:53:30 CET] <JEEB> mpv is: pass down as-is through WASAPI, otherwise use FFmpeg's swresample
[19:53:42 CET] <JEEB> so you know exactly what you're getting by the log
[19:54:53 CET] <JEEB> but generally speaking, what I was trying to say is that unless you know intimately how a thing was created, just use something that you know correctly downmixes to whatever you need and then be done with it.
[19:55:08 CET] <JEEB> FFmpeg without any fancy stuff should do that by default with anything using swresample
[19:59:13 CET] <ZexaronS> Well I was using Vegas Pro with DirectSound, MME doesn't even work with surround there, and there's the Xonar API cmasiopxPCIX
[19:59:41 CET] <ZexaronS> But, okay now I tested MPC-HC, MPC-BE, VLC and they do seem to have speech only correct in the center channel
[20:00:15 CET] <ZexaronS> but the volumes are totally different, speech sounds clearer, while in Vegas, I had effects sounding a lot louder, but it's all default settings
[20:01:26 CET] <ZexaronS> In Vegas you have to pan an audio track to a particular channel; by default the pan makes it so the sound also plays on the neighbouring channel with decreased volume. I disabled that to get 100% only on the intended channel
[20:02:22 CET] <ZexaronS> but that doesn't seem right. Is that not correct then? Is that what the windows mixer could be doing, this kind of panning to the right or left while still putting a bit of sound on the other side?
[20:09:16 CET] <ZexaronS> JEEB: https://i.imgur.com/BErlO7g.png
[20:09:31 CET] <ZexaronS> Alright, I tried mpv, sounds right to me
[20:09:51 CET] <ZexaronS> I don't know, maybe for the first time I heard it sounding right, I thought it was wrong
[20:10:11 CET] <ZexaronS> Maybe that, even though it of course sounds right, is not optimal in the original mix
[20:10:47 CET] <ZexaronS> my center speaker is up high on the drawer, while the fronts are down at the table
[20:13:18 CET] <JEEB> you have to check the logs from mpv, but by default for WASAPI it should have just pushed things straight into the windows mixer
[20:13:19 CET] <ZexaronS> by right, i mean, that the center doesn't bleed into left or right
[20:13:41 CET] <ZexaronS> I also had to do some pretty good listening to make sure my ears don't hear echoes
[20:14:19 CET] <ZexaronS> The center does do a pretty good job making the illusion that the speech, in this case, is coming from both left and right
[20:15:03 CET] <ZexaronS> For this few-minute sample, I'll still go ahead and play with downmixing to 2.1 with ffmpeg just to fiddle and learn something
[20:16:57 CET] <JEEB> you can listen with mpv by telling it to downmix to 2.1
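For the ffmpeg side of that 2.1 experiment, a hypothetical pan sketch (the `2.1` layout is FL+FR+LFE; the -3 dB gains and file names are assumptions, not from the chat): keep the LFE as its own channel and fold centre and surrounds into the fronts.

```shell
# Hypothetical 5.1(side) -> 2.1 fold-down; file names are placeholders and
# the command is echoed only to show its shape.
pan='pan=2.1|FL=FL+0.707*FC+0.707*SL|FR=FR+0.707*FC+0.707*SR|LFE=LFE'
echo ffmpeg -i in_5.1.mkv -af "$pan" out_2.1.ogg
```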
[20:17:07 CET] <ZexaronS> JEEB: I'm sure now that the center feels weaker in Vegas Pro, but wait, I'm using the OGG recode in Vegas, while in MPC, VLC, mpv I was playing the original MKV with the original AC-3
[20:17:16 CET] <JEEB> https://mpv.io/manual/master/#options-audio-channels
[20:17:49 CET] <JEEB> the second paragraph after the listing of options has fl-fr-lfe as an example of telling it to downmix to 2.1
[20:18:00 CET] <JEEB> and it will then get sent to the windows mixer as 2.1
[20:18:37 CET] <ZexaronS> I must say, pretty impressive tho, the verbose info in the console and the GUI overlay seem very nice
[20:19:04 CET] <JEEB> yes, it's minimal but generally says exactly what it's doing
[20:19:04 CET] <ZexaronS> I guess the hype around mpv some time ago was all justified
[20:19:19 CET] <JEEB> also you can build a GUI player or whatever on top of its library if you really want to
[20:20:17 CET] <ZexaronS> .com application, great to see some retro goodies too
[20:21:23 CET] <ZexaronS> But does mpv use MME by default ... that's what I need to figure out now
[20:21:44 CET] <JEEB> if you add verbosity with -v it will tell you exactly what it connects to through WASAPI
[20:21:53 CET] <JEEB> it uses WASAPI on windows for all audio
[20:22:08 CET] <JEEB> by default it is non-exclusive but you can also tell it to try and get exclusive control
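The exclusive-mode toggle JEEB refers to is an mpv option; as a command shape (FILE is a placeholder, option name per the mpv manual):

```shell
# Ask mpv for exclusive WASAPI access instead of the default shared
# (windows-mixer) mode; echoed here only to show the invocation shape.
echo mpv --audio-exclusive=yes FILE
```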
[20:22:30 CET] <JEEB> (I have no idea what MME is so I explained what mpv does for audio rendering :P)
[20:24:29 CET] <ZexaronS> MME is Microsoft Sound Mapper, it's the basic thing that most applications default to
[20:24:52 CET] <ZexaronS> at least pre-win10
[20:25:09 CET] <ZexaronS> But audacity and other stuff, it was all MME by default
[20:25:48 CET] <JEEB> "--msg-level=ao=trace"
[20:25:55 CET] <JEEB> this gives you the exact log for audio playback
[20:26:09 CET] <JEEB> (do note that you will want to exit quickly to still have the beginning there in your log)
[20:27:18 CET] <JEEB> for me it's like this with a stereo setup :P http://up-cat.net/p/844b6b85
[20:31:58 CET] <ZexaronS> yeah it says wasapi, good
[20:32:10 CET] <JEEB> that's the only audio renderer in mpv :P
[20:32:17 CET] <JEEB> and that part it says without extra verbosity
[20:32:27 CET] <JEEB> also the .com is a hack for having exe as a "GUI" application
[20:32:43 CET] <JEEB> the .com is a really small shell that spawns up mpv.exe and reads its debug output
[20:32:58 CET] <JEEB> (and is a command line application that writes into stdout/err)
[20:33:12 CET] <JEEB> the exe one doesn't open a cmd.exe window
[20:33:35 CET] <JEEB> it's just that when you do just "mpv" in command line, windows picks up .com before .exe
[20:33:39 CET] <JEEB> that's the twist ;)
[20:34:04 CET] <JEEB> so you can have both your command line logs, as well as you can use the exe for opening things with from explorer without a cmd.exe window popping up
[20:34:26 CET] <ZexaronS> It's a good trick then, indeed.
[20:37:22 CET] <ZexaronS> the "us" thing is nanoseconds?
[20:39:02 CET] <JEEB> no idea, feel free to ask on #mpv
[20:39:23 CET] <ZexaronS> I guess no µs symbol in command line, that should be microseconds heh, nano probably overkill
[20:39:46 CET] <furq> us is microseconds
[20:39:59 CET] <furq> oh yeah you said that
[20:40:06 CET] <furq> im helpful
[20:40:49 CET] <ZexaronS> ah no worries, but us is actually nothing
[20:41:27 CET] <ZexaronS> most likely the symbol just ... nevermind
[20:41:47 CET] <ZexaronS> I tried pasting it and it works for me, maybe the devs had a problem with printing it
[20:42:03 CET] <ZexaronS> I'll ask there, tho
[20:51:15 CET] <ZexaronS> JEEB: Maybe most players use this " Dynamic Range Compression level for AC-3 audio streams"  which I found is also an option in mpv
[20:51:41 CET] <JEEB> LAV Audio should be giving you what mpv is giving you
[20:51:48 CET] <JEEB> what happens after that has no idea of AC-3
[20:52:03 CET] <ZexaronS> That's probably causing the discrepancy when listening to raw channels in Vegas Pro
[20:52:21 CET] <JEEB> the vegas thing is actually more likely to do AC-3 "like dolby makes them do it"
[20:52:38 CET] <JEEB> as it probably uses the proprietary decoder which generally is just slapped into things :P
[20:52:57 CET] <ZexaronS> Maybe so, it's the players that sound better, mpv as well; it's Vegas that sounds wrong
[20:53:12 CET] <ZexaronS> But at least I got somewhat closer to the bottom of it
[20:54:20 CET] <ZexaronS> mpv defaults could be different from MPC-HC's LAV, or VLC's stuff .. right?
[20:54:42 CET] <JEEB> the rendering chain could be different
[20:54:49 CET] <JEEB> decoder is most likely the same in all, based on FFmpeg
[22:56:32 CET] <bodqhrohro> Gosh, now even mpv doesn't want to play YouTube streams http://paste.debian.net/1014175/
[22:57:50 CET] <bodqhrohro> Moreover, I remember that earlier they always broadcasted AAC, not AMR
[22:57:54 CET] <JEEB> umm
[22:57:56 CET] <furq> you know mpv has youtube-dl integration right
[22:58:06 CET] <JEEB> I have no idea how youtube-dl ended up with the crappy 3gp stuff
[22:58:15 CET] <JEEB> also didn't think they still had rtsp there
[22:58:27 CET] <furq> JEEB: it didn't, he specifically requested 3gp
[22:58:56 CET] <bodqhrohro> I fetched this link from my phone
[22:59:16 CET] <JEEB> ok, then it's the wrong type of rtsp for the server
[22:59:20 CET] <JEEB> mpv defaults to TCP
[22:59:26 CET] <JEEB> you probably need UDP
[22:59:28 CET] <furq> i had no idea that youtube even did rtsp
[22:59:43 CET] <JEEB> neither did I, but I guess they have a lot of legacy crap
[22:59:55 CET] <furq> idk what legacy that would even be
[23:00:00 CET] <furq> browsers have never supported that
[23:00:01 CET] <JEEB> bodqhrohro: --rtsp-transport=udp
[23:00:08 CET] <JEEB> or http
[23:00:12 CET] <JEEB> https://mpv.io/manual/master/#options-rtsp-transport
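The transports from that manual page, as command shapes (the URL is a placeholder, not a real stream):

```shell
# mpv defaults to TCP for RTSP; if the server only answers UDP (or HTTP
# tunnelling), try the other transports. Echoed only to show the shapes.
for t in udp http tcp; do
  echo mpv --rtsp-transport=$t rtsp://example.invalid/stream
done
```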
[23:00:18 CET] <furq> or you know
[23:00:21 CET] <JEEB> I didn't even know that HTTP was a valid transport for rtsp
[23:00:30 CET] <furq> just give mpv the youtube url and have it download something that doesn't suck
[23:00:55 CET] <JEEB> I guess he specifically wanted to try the rtsp?
[23:00:56 CET] <JEEB> I dunno
[23:01:06 CET] <bodqhrohro> OK, it works now, but the AMR warning is still here. Probably this is the issue my phone gets stuck on
[23:02:12 CET] <JEEB> if it worked before in mpv and the phone then more likely it's the TCP RTSP not being supported or something
[23:02:17 CET] <JEEB> but not that I know
[23:03:49 CET] <bodqhrohro> When the support broke (2-4 years ago, I don't remember exactly), I probably tested it in mplayer instead and it worked without warnings
[23:05:34 CET] <JEEB> it's quite possible that mplayer never showed those warnings or that those warnings were added then (although latter is probably less likely)
[23:05:45 CET] <JEEB> also lavf by default uses UDP
[23:05:51 CET] <JEEB> it's mpv that is setting the protocol to TCP by default
[23:06:02 CET] <JEEB> (no idea what mplayer did)
[23:33:16 CET] <bodqhrohro> I tried to set up streaming with VLC but had no luck with many codec combinations; for example, h263 shows many `Error evaluating rc_eq "(null)"' messages
[23:38:54 CET] <bodqhrohro> Sometimes they interleave with `vbv buffer overflow'. What does all of it mean?
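`vbv buffer overflow` generally means the encoder's rate control produced more bits than the decoder buffer it was told about can hold; giving the encoder an explicit bitrate and VBV buffer usually quiets it. A hedged sketch, not from the chat: the numbers and file names are illustrative guesses, and 352x288 (CIF) is one of the few frame sizes plain H.263 accepts.

```shell
# Constrain rate control so the VBV model has a real bitrate/bufsize to work
# with; echoed only to show the command shape, file names are placeholders.
echo ffmpeg -i in.mp4 -c:v h263 -b:v 400k -maxrate 400k -bufsize 800k -s 352x288 out.avi
```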
[00:00:00 CET] --- Sun Mar 11 2018



More information about the Ffmpeg-devel-irc mailing list