[Ffmpeg-devel-irc] ffmpeg.log.20191009
burek
burek at teamnet.rs
Thu Oct 10 03:05:05 EEST 2019
[00:07:00 CEST] <AlexVestin> is there a way to remove the artifacts in the high frequencies with AAC? I'm a bit confused, since it seems to be relatively noticeable using the FFmpeg binaries
[00:11:05 CEST] <AlexVestin> The difference between mp3: https://fysiklabb.s3.eu-west-3.amazonaws.com/short.mp3 and AAC: https://fysiklabb.s3.eu-west-3.amazonaws.com/short.aac
[00:12:52 CEST] <BtbN> What ffmpeg version?
[00:13:21 CEST] <AlexVestin> n4.2
[00:25:16 CEST] <AlexVestin> had a friend test the zeranoe build for windows with the same result
[01:39:29 CEST] <Retal> Guys, I forgot how to check all the available presets (like fast, ultrafast) for CUDA?
[01:44:53 CEST] <DHE> ffmpeg -h encoder=h264_nvenc # or whatever
[01:45:11 CEST] <Retal> DHE: oh thanks!
[02:35:14 CEST] <kepstin> hmm, I guess AlexVestin has just hit a file that ffmpeg's aac encoder doesn't handle well. that's 160kbps, it should be fine...
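(Aside: when the native aac encoder rings audibly in the high frequencies, the usual things to try are a higher bitrate, a lower -cutoff, or the libfdk_aac encoder where it's compiled in. A hedged sketch, file names invented:)

    ffmpeg -i input.wav -c:a libfdk_aac -b:a 160k output.m4a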
[13:43:04 CEST] <cards> Strange observation. ffprobe -colors lists "NavajoWhite == #ffdead"
[13:52:08 CEST] <alsuren> less of an ffmpeg-specific question and more of a question about h.264/http-live-streaming: If I have an HLS stream (with the index in a .m3u8 file) and I shuffle the sections around, I get a bunch of "Playlist vs segment duration mismatch" errors when I run apple's mediastreamvalidator against it, and it only plays the first few chunks when I feed it into a browser.
[13:53:09 CEST] <alsuren> is there a stream clock encoded in the h.264 chunks or something, and if there is, does anyone know of a way around it?
[13:55:05 CEST] <BtbN> Of course there are timestamps in mpeg-ts
[13:55:16 CEST] <BtbN> If you swap around segments, it will cause chaos
[13:56:02 CEST] <alsuren> ideally, I would like to be able to write a smart server that can serve a custom .m3u8 random-ish walk through a library of videos, but keep the .ts segments hosted on a cdn
[13:56:39 CEST] <BtbN> You will have to signal the discontinuity where it occurs then
[13:57:44 CEST] <alsuren> can I encode a discontinuity into each of the segments before I upload it to CDN or something?
[14:03:07 CEST] <DHE> the playlist file has something like #EXT-X-DISCONTINUITY that you can put into the sequence to indicate that a break in timestamps in the video segments at this point is expected
[14:03:16 CEST] <DHE> (check the RFC for exact specs as this is all from memory)
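(Aside: a minimal sketch of such a playlist, with invented segment names; the #EXT-X-DISCONTINUITY tag tells the player to expect a timestamp reset before the following segment:)

    #EXTM3U
    #EXT-X-VERSION:3
    #EXT-X-TARGETDURATION:10
    #EXT-X-MEDIA-SEQUENCE:0
    #EXTINF:10.0,
    movieA_seg07.ts
    #EXT-X-DISCONTINUITY
    #EXTINF:10.0,
    movieB_seg02.ts
    #EXT-X-ENDLIST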
[14:04:02 CEST] <BtbN> Not sure how well a player copes if you put that in front of every single segment though
[14:04:08 CEST] <BtbN> it'll probably still break shit
[14:04:29 CEST] <alsuren> I guess I'll find out :D
[14:05:07 CEST] <alsuren> I now have a bunch of reading to do. Thank you both for pointing me in the right direction.
[14:05:31 CEST] Action: DHE is ass deep in the RFC (for other reasons)
[14:15:55 CEST] <alsuren> works like a charm and the results are quite fun to watch. Thanks again.
[15:30:22 CEST] <Freneticks> hey I use master_pl_name for master playlist but there is no CODEC in my playlist
[15:30:27 CEST] <Freneticks> is there a way to have it ?
[15:45:55 CEST] <classsic> Hi, is there a "correct" method to check if rtsp stream stop transmit? like ffmpeg -i rtsp:// -f flv rtmp:// , but when source stop (ip camera), ffmpeg process stay like zombie process
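(Aside: one common approach here, hedged: the RTSP demuxer's stimeout option in ffmpeg 4.x (a socket I/O timeout, in microseconds) makes the process error out instead of hanging when the camera stops sending:)

    ffmpeg -stimeout 5000000 -i rtsp://camera/stream -c copy -f flv rtmp://server/live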
[17:07:55 CEST] <Fenrirthviti> re: the OBS/ffmpeg avopts stuff, the example code provided uses priv_data: https://www.ffmpeg.org/doxygen/trunk/decoding_encoding_8c-example.html - that code was written ages ago and hasn't really been touched, since not many people actually use the custom ffmpeg output mode
[17:08:27 CEST] <Fenrirthviti> We'll probably remove the priv_data if there's no reason to keep it in though.
[17:09:08 CEST] <BtbN> I haven't yet tested if setting it on the context itself will set private options as well
[17:09:25 CEST] <BtbN> It might need "try set on priv_data, if fails, try on ctx itself" logic
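(Aside: a minimal C sketch of that fallback logic, assuming an already-allocated AVCodecContext; note that a single av_opt_set call with AV_OPT_SEARCH_CHILDREN already searches child objects such as priv_data, which may make the explicit two-step unnecessary:)

    #include <libavutil/opt.h>
    #include <libavcodec/avcodec.h>

    /* Try the encoder's private options first, then fall back to the
     * generic AVCodecContext options. */
    static int set_encoder_option(AVCodecContext *ctx,
                                  const char *key, const char *value)
    {
        int ret = ctx->priv_data
            ? av_opt_set(ctx->priv_data, key, value, 0)
            : AVERROR_OPTION_NOT_FOUND;
        if (ret == AVERROR_OPTION_NOT_FOUND)
            ret = av_opt_set(ctx, key, value, AV_OPT_SEARCH_CHILDREN);
        return ret;
    }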
[17:09:43 CEST] <Fenrirthviti> Yeah, I (and the rest of the team) am not clear on exactly why we use priv_data there, but the consensus is "that's what the docs said to do" more than anything else
[17:09:58 CEST] <BtbN> Probably because the old example does, and it worked.
[17:10:03 CEST] <Fenrirthviti> indeed
[17:10:49 CEST] <BtbN> I'm not set up to build OBS, but it should be trivial to test whether it can still set stuff like x264 params with the ->priv_data removed.
[17:11:03 CEST] <Fenrirthviti> yeah we're poking at it now
[17:12:12 CEST] <Fenrirthviti> I've never been happy with how we set opts in the first place, I wish we just parsed the CLI syntax instead of using opt=value
[17:12:17 CEST] <Fenrirthviti> maybe can fix that too
[17:12:21 CEST] <BtbN> I'm also still a bit confused why OBS opted to re-implement an entire second nvenc encoder. It seems like it was just to be able to pass in d3d11 frames, which ffmpeg could have taken as input just fine as well.
[17:12:41 CEST] <BtbN> I think the CLI syntax works pretty much the way OBS already parses them?
[17:13:29 CEST] <Fenrirthviti> I mean we get people who will type like "-crf 22" in the field and that doesn't get parsed
[17:13:34 CEST] <Fenrirthviti> you have to type crf=22
[17:13:45 CEST] <kepstin> the cli has some special cases for a few options, but for the most part it just takes the key=value and passes it as avoptions to the contexts
[17:14:07 CEST] <BtbN> ffmpeg.c calls av_opt_set_from_string
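(Aside: a hedged sketch of the av_opt_set_from_string call being referred to, which parses key=value pairs much like the CLI does; the option string and separators are illustrative, and enc_ctx is assumed to be an allocated AVCodecContext:)

    static const char *const shorthand[] = { NULL };
    /* Parses "crf=22:preset=fast" into individual AVOptions on the
     * encoder's private context. */
    int ret = av_opt_set_from_string(enc_ctx->priv_data, "crf=22:preset=fast",
                                     shorthand, "=", ":");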
[17:14:49 CEST] <Fenrirthviti> then again we might have already fixed that and I'm just misremembering cause of how little ffmpeg output mode gets used
[17:15:18 CEST] <BtbN> We use ffmpeg output mode at ESA as a botched second rtmp output, to have two different audio feeds.
[17:15:48 CEST] <Fenrirthviti> yeah that's the more common use case I see
[17:15:58 CEST] <Fenrirthviti> or UDP multicast stuff for those who don't like NDI
[17:15:59 CEST] <BtbN> It's a bit annoying, since there is zero stream monitoring.
[17:16:23 CEST] <Fenrirthviti> being able to do multiple outputs is pretty high on our list of things we want to implement
[17:16:34 CEST] <Fenrirthviti> it's entirely a UI limitation, and UI is the worst.
[17:16:42 CEST] <BtbN> Yeah, same video feed, different audio feeds. Would be super nice
[17:17:10 CEST] <Fenrirthviti> we have a ton of drafts and design docs for how it could work, just need to find the time to basically rewrite large parts of the UI
[17:17:46 CEST] <BtbN> I have considered adding libobs support for ffmpeg, but that would be quite a dependency cycle.
[17:18:18 CEST] <BtbN> But especially the various capture capabilities it has would be nice to have as inputs
[17:18:27 CEST] <Fenrirthviti> we've been asked for headless support for ages
[17:18:34 CEST] <JEEB> I think someone had a WIP patch set for the newer windows capture API
[17:18:38 CEST] <JEEB> for libavdevice
[17:18:38 CEST] <Fenrirthviti> tons of use cases there
[17:18:55 CEST] <JEEB> I meant to take a look at it at some point
[17:19:04 CEST] <BtbN> JEEB, yeah, desktop duplication API. It didn't even look that bad. Wonder why it was never sent.
[17:19:25 CEST] <JEEB> probably because the person felt that they didn't have the time to "go through it and finish it up"
[17:19:28 CEST] <BtbN> But libobs has support for hooked game capture and various non-standard capture cards.
[17:20:08 CEST] <Fenrirthviti> most of that is dshow hacks, iirc
[17:20:13 CEST] <Fenrirthviti> unless you mean the decklink plugin
[17:20:26 CEST] <BtbN> The assortment of dshow hacks is quite useful though
[17:20:36 CEST] <Fenrirthviti> I believe those are all available in libdshow
[17:21:06 CEST] <Fenrirthviti> sorry, libdshowcapture https://github.com/obsproject/libdshowcapture
[17:23:11 CEST] <kepstin> hmm, no C api? I assume you link that statically with obs, so it probably doesn't have a stable abi either?
[17:23:48 CEST] <JEEB> I'm pretty sure if you start adding an external lib utilizing dshow into FFmpeg, you would then get people asking why you don't add that stuff to the dshow libavdevice
[17:24:12 CEST] <BtbN> Because it's a lot of messy code
[17:25:13 CEST] <Fenrirthviti> I'm not 100% sure how Jim treats the ABI for libdshowcapture, off the top of my head.
[17:41:35 CEST] <Freneticks> Do you know if it's possible to play HEVC on firefox/chromium ?
[17:42:12 CEST] <JEEB> Freneticks: I know some of the chromium code has some HEVC-related stuff there but not more than that. on desktop builds you're not going to get it anyways.
[17:50:15 CEST] <Rodn3y> On Android and Chromebooks you might get HEVC playback, but certainly not on desktop.
[17:51:09 CEST] <kepstin> Freneticks: hevc basically only works on safari and edge
[17:51:19 CEST] <kepstin> and on edge only on system configurations with a hardware hevc decoder
[17:52:54 CEST] <kepstin> note that the feature requests for firefox: https://bugzilla.mozilla.org/show_bug.cgi?id=1332136 and chromium: https://bugs.chromium.org/p/chromium/issues/detail?id=684382 were both closed as "wontfix"
[17:53:14 CEST] <JEEB> yea they both want to keep HEVC out of the desktop space
[17:54:02 CEST] <JEEB> not supporting hwdec is probably a stance since that's what they're doing on ARM/mobile
[17:54:12 CEST] <JEEB> but swdec is like "nope, not gonna do that"
[17:55:40 CEST] <Freneticks> yeah, too many patent problems
[17:55:51 CEST] <Freneticks> AV1 is the future
[17:56:14 CEST] Action: kepstin wonders why google is developing their own av1 decoder instead of using dav1d
[17:57:34 CEST] <JEEB> kepstin: because they can throw man-hours at a problem. and they love having control over stuff
[17:57:53 CEST] <JEEB> it mentions Android, but it was slower than even libaom on ARM when last tested, I think?
[18:08:40 CEST] <kepstin> looks like they only have one dev working on it, so maybe it's just a side project from someone who just wanted to make a video decoder
[18:11:39 CEST] <kepstin> the use of the apache 2.0 license is interesting, since it includes an explicit patent grant
[18:15:28 CEST] <Fenrirthviti> Also just to come full circle since we've been testing, the person who was trying to use maxrate/bufsize with hevc_nvenc was doomed to begin with because those options don't do anything
[18:16:07 CEST] <Fenrirthviti> tested with ffmpeg direct and in OBS, same results, the options aren't doing anything. Probably doesn't support them.
[18:16:17 CEST] <kepstin> they get passed to the nvidia apis, but who knows what they do there :/
[18:16:21 CEST] <Fenrirthviti> yeah, indeed.
[18:16:24 CEST] <kepstin> might even be different on different cards
[18:16:32 CEST] <Fenrirthviti> It's true they weren't being passed, which we've fixed now
[18:16:42 CEST] <Fenrirthviti> But they don't do anything even when passed, so, yeah
[18:17:15 CEST] <JEEB> yes, it's always fun with those OtherEncoders when you expect them to work as well as libx264 - even if the options were mapped to something in the called-upon API :/
[18:18:32 CEST] <Fenrirthviti> Ah wait, maybe it is working. It's in bits, not kilobits.
[18:18:45 CEST] <Fenrirthviti> so it was being treated as an invalid range
[18:18:57 CEST] <JEEB> yup
[18:19:03 CEST] <JEEB> historical thing in avcodec
[18:19:54 CEST] <kepstin> i think the option parser should accept unit suffixes like 'K' on that?
[18:19:59 CEST] <Rodn3y> It does yeah
[18:20:07 CEST] <JEEB> yup
[18:20:21 CEST] <JEEB> if you pass them as AVOptions instead of just setting a value into the avcodeccontext
[18:20:21 CEST] <Rodn3y> The guy complaining that it didn't work just kinda forgot a few zeroes...
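(Aside: the distinction, as a hedged C sketch; enc_ctx is assumed to be an allocated AVCodecContext. The raw fields take bits per second, while the AVOption string parser understands unit suffixes:)

    /* Direct field access: values are in bits per second. */
    enc_ctx->rc_max_rate    = 6000000;   /* 6 Mbps */
    enc_ctx->rc_buffer_size = 12000000;

    /* Via AVOptions: suffixes like K and M are parsed by libavutil. */
    av_opt_set(enc_ctx, "maxrate", "6M",  0);
    av_opt_set(enc_ctx, "bufsize", "12M", 0);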
[18:27:11 CEST] <Fenrirthviti> Rodn3y is the one on our team messing with it :P
[18:36:36 CEST] <Rodn3y> So yeah, seems like BtbN had the right idea. Without rewriting this (which really has to happen at some point...), the smallest fix is to try priv_data first, and then context if that fails.
[18:37:17 CEST] <Rodn3y> This stuff has been mostly unchanged for the past 5 years so it's due for some updates. Unfortunately other, bigger, fires keep coming up :p
[18:38:43 CEST] <BtbN> You could also do d3d11 frame input straight into ffmpeg with some work. Various filters and mappings and encoders can take it.
[18:41:51 CEST] <Rodn3y> I think Xaymar's ffmpeg encoders plugin actually does that
[18:42:34 CEST] <Fenrirthviti> It does, yeah, as of a few days ago
[18:43:50 CEST] <Bugz000> hey guys, i'm trying in vain to get a text overlay and a (dynamic) image overlay on an ffmpeg stream
[18:44:18 CEST] <Bugz000> the text updates just fine, however the image does not
[18:44:37 CEST] <Bugz000> and i can't seem to get filter_complex to work right
[18:45:15 CEST] <Bugz000> https://pastebin.com/wm3QYyz6 this shows and updates the text and shows the image but doesn't update the image
[18:45:31 CEST] <Bugz000> https://stackoverflow.com/a/49467812
[18:45:39 CEST] <Bugz000> i'm trying to follow this
[18:46:06 CEST] <Bugz000> all i've achieved so far is making myself feel stupid, so could an ffmpeg guru please put me out of my misery
[18:46:18 CEST] <Bugz000> haha
[18:47:04 CEST] <Bugz000> https://i.imgur.com/rW185Ne.png
[18:47:11 CEST] <BtbN> I don't see an image anywhere in those filters
[18:47:21 CEST] <Bugz000> oh woops
[18:47:38 CEST] <Bugz000> wrong one sec
[18:47:44 CEST] <Bugz000> i got like 50 here -facedesk-
[18:48:01 CEST] <Bugz000> https://pastebin.com/T1p1PvPV
[18:48:26 CEST] <kepstin> Bugz000: that makes no sense, if you have two -vf options, only the last one is used (the other is ignored)
[18:48:34 CEST] <kepstin> you probably need to be using -filter_complex here
[18:48:49 CEST] <Bugz000> this is for Shinobi, but i think this is more of an FFmpeg question than a Shinobi one, and i've got a feeling i need to be using the input flags
[18:49:00 CEST] <kepstin> although it might make sense to use an input for the image rather than a filter
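(Aside: a hedged sketch of that shape, with invented paths and coordinates; it assumes a build with fontconfig for drawtext. The image comes in as a second, looped image2 input, so atomically overwriting overlay.png updates the overlay, and reload=1 makes drawtext re-read its text file every frame:)

    ffmpeg -i rtsp://camera/stream \
           -loop 1 -f image2 -i overlay.png \
           -filter_complex "[0:v][1:v]overlay=10:10,drawtext=textfile=label.txt:reload=1:fontcolor=white:x=10:y=h-40[v]" \
           -map "[v]" -map 0:a? -c:v libx264 -f hls live.m3u8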
[18:49:07 CEST] <Bugz000> https://pastebin.com/aJF1DWuc
[18:49:16 CEST] <Bugz000> i'm trying filter complex here
[18:49:38 CEST] <Bugz000> https://i.imgur.com/BFaLOFq.png
[18:49:41 CEST] <kepstin> alright, that looks more like something that has a chance of working
[18:49:53 CEST] <Bugz000> throws an error though
[18:50:10 CEST] <kepstin> that error doesn't match the command you ran, so...?
[18:50:23 CEST] <Bugz000> that's what confused me
[18:50:58 CEST] <kepstin> get this working directly with the ffmpeg cli first, we can't really help you with other applications.
[18:51:16 CEST] <Bugz000> i can pull the whole ffmpeg command
[18:52:11 CEST] <Bugz000> https://pastebin.com/jf90Mei5
[18:52:20 CEST] <Bugz000> this is what the application is running total
[18:52:56 CEST] <Bugz000> oh hey, -boundary_tag shinobi
[18:53:36 CEST] <kepstin> for some reason it's adding mjpeg-related options to your command, you need to tell it not to do that
[18:53:52 CEST] <Bugz000> it's taking an RTSP stream in and pushing out as mjpeg
[18:54:04 CEST] <Bugz000> but i can change formats
[18:54:30 CEST] <kepstin> so, it's taking one rtsp in, and one image in, and outputting to hls
[18:54:50 CEST] <kepstin> and the image in is using mjpeg options but then you've overridden it to -f image2, which means the mjpeg options fail
[18:55:12 CEST] <Bugz000> ohright
[18:55:14 CEST] <kepstin> that's just a really weird command all around
[18:55:31 CEST] <Bugz000> i have literally no idea what i'm doing dude so it's a miracle i've got this far
[18:55:57 CEST] <kepstin> it's using libx264 output options, but the output encoder is set to 'pam'? :/ there's no way pam works with hls
[18:56:07 CEST] <kepstin> oh, no, there's a copy in there too
[18:56:16 CEST] <kepstin> too many conflicting options overriding each-other
[18:56:19 CEST] <Bugz000> LOL -facedesk-
[18:56:31 CEST] <Bugz000> okay okay so let's go default and see if that's "normal"
[18:56:36 CEST] <kepstin> copy doesn't work with filters, of course
[18:56:40 CEST] <Bugz000> undo everything i've done
[18:56:47 CEST] <Bugz000> well, put in storage
[18:58:02 CEST] <Bugz000> https://pastebin.com/9K4fVpHB
[18:58:18 CEST] <kepstin> that's still horribly messed up and broken
[18:58:31 CEST] <Bugz000> it is outputting as HLS now
[18:58:34 CEST] <Bugz000> not mjpeg, if that helps
[18:58:54 CEST] <kepstin> it has the -c:v output option specified *3* different times, each overriding the last
[18:59:03 CEST] <kepstin> the final one that takes effect is -c:v copy
[18:59:45 CEST] <kepstin> oh, there's actually two separate outputs in that command
[18:59:54 CEST] <kepstin> one is being encoded with x264, the other one is using copy
[19:00:42 CEST] <kepstin> i have no idea what's up - if that command line is generated by a tool, you'll have to talk to the devs of the tool to see what they're trying to do and fix it :/
[19:00:52 CEST] <Bugz000> yah i'm speaking with the dev now actually
[19:01:14 CEST] <Bugz000> what would you say is wrong with it? i know this has provisions for sending data off to a motion detection library
[19:01:15 CEST] <kepstin> adding the text overlay filter stuff you're talking about to that command isn't trivial
[19:02:06 CEST] <Bugz000> the text works fine, shows and updates
[19:02:10 CEST] <kepstin> actually, this command looks correct for sending one encoded h264 hls stream alongside reduced resolution+color mjpeg data to another hls stream
[19:02:15 CEST] <Bugz000> however the image overlay does show - getting it to update dynamically is the tricky part
[19:02:35 CEST] <Bugz000> ah thats good
[19:03:15 CEST] <kepstin> it still has some weirdness in that it's using multiple -c:v options on the second output, and has some unused parameters like -tune
[19:04:16 CEST] <kepstin> looks like it's actually a command designed for three outputs, but the second output filename is missing
[19:04:23 CEST] <kepstin> oh, wait, no, i just missed it
[19:04:27 CEST] <kepstin> it is a three output command
[19:04:31 CEST] <kepstin> weird
[19:04:49 CEST] <Bugz000> uhhh, again there are provisions to pipe this stream to other shinobi servers for multiple tiers
[19:04:54 CEST] <Bugz000> iirc
[19:05:07 CEST] <Bugz000> i suspect that could be it
[19:05:15 CEST] <kepstin> it outputs one h264 re-encoded hls stream, one greyscale pam image stream, and one pass-through copy stream
[19:06:19 CEST] <kepstin> anyways, as i mentioned, it is not trivial to add the filters you want to this command. That's because doing so requires using -filter_complex, which means that you're going to need to use -map options on each of the outputs separately to pick the right streams for the output
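(Aside: a hedged two-output sketch of the shape kepstin describes, names invented: the filtergraph's labeled output is mapped into the encoded HLS output, while the untouched input streams are mapped into a separate copy output:)

    ffmpeg -i rtsp://camera/stream -loop 1 -f image2 -i overlay.png \
           -filter_complex "[0:v][1:v]overlay=10:10[marked]" \
           -map "[marked]" -map 0:a? -c:v libx264 -f hls live.m3u8 \
           -map 0:v -map 0:a? -c copy -f mpegts backup.ts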
[19:06:40 CEST] <alsuren> @DHE @BtbN I thought you might like the toy I made to test out your advice https://shuffle.glitch.me/
[19:07:09 CEST] <Bugz000> damn
[19:07:14 CEST] <kepstin> so yeah, you're probably not going to be able to do it without modifying whatever tool is generating the command
[19:07:25 CEST] <Bugz000> i'll go poke the dev and show him
[19:07:38 CEST] <Bugz000> he's pretty active so he might get this working
[19:07:39 CEST] <Bugz000> :D
[19:09:09 CEST] <Bugz000> oh may i ask what would be the hardware overheads of outputting the two extra streams?
[19:09:39 CEST] <Bugz000> i mean, if they're not used, surely it would be beneficial to not even have them outputting, right
[19:10:04 CEST] <Bugz000> or does it only open a "socket" and only actually start crunching data if something wants it
[19:12:36 CEST] <Bugz000> https://i.imgur.com/nhTEtiT.png i only ask because a full 1080p 25fps stream from my ip camera is very low bandwidth
[19:13:09 CEST] <Bugz000> put through ffmpeg, it easily pumps out the same data at over 20 MB/s - the inflation is a little insane
[19:17:38 CEST] <kepstin> Bugz000: ffmpeg processes all outputs all the time. The second output in that command is to a pipe, so if nothing reads it, it will block ffmpeg from proceeding. The third output is doing a copy, not re-encoding, so it's fairly low overhead other than disk io.
[19:17:55 CEST] <Bugz000> ah good
[19:18:43 CEST] <kepstin> actually, it's putting that third output into memory, so it's not even disk io :)
[19:20:13 CEST] <kepstin> the first one is doing video encoding, but it's pretty unlikely you're getting anywhere near 20 mbyte/s (i'd bet it's even under 20mbit/s). And that's also to memory, not disk.
[19:24:05 CEST] <Bugz000> rofl i just tried HEVC output and my browser is crying
[19:24:28 CEST] <Bugz000> https://i.imgur.com/JZ5vR9W.png
[19:25:46 CEST] <saml> is HEVC good for battery life on mobile?
[19:27:52 CEST] <kepstin> i'd expect it to probably be fairly similar to h264 when using hardware decoding on mobile
[21:22:04 CEST] <snatcher> what's the most flat/compressed audio format (i.e. one where the same audio, converted to it from different formats, will be mostly identical)?
[21:25:13 CEST] <snatcher> also, what's the best way to perform drc in such a case? loudnorm?
[21:33:32 CEST] <kepstin> not sure what you mean with the first statement
[21:34:02 CEST] <kepstin> audio codec compression has nothing to do with dynamic range compression
[21:36:04 CEST] <kepstin> the "loudnorm" filter does do some dynamic range compression, in that it'll lower the overall loudness of loud sections of audio, and increase the loudness of quiet sections, but it doesn't operate like the dynamic range compression that mastering engineers typically use
[21:37:32 CEST] <kepstin> it's more useful for stuff like "i have a recording where people talking quietly is mixed with loud music, and i want to make it so i don't have to adjust the playback volume constantly"
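(Aside: a minimal example of that use case; the loudness targets are just illustrative values:)

    ffmpeg -i input.wav -af loudnorm=I=-16:TP=-1.5:LRA=11 output.wav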
[21:40:43 CEST] <snatcher> kepstin: i see, so better to use compand. anyway, i'm still not sure what format and bitrate/frequency (lowest?) to use to get a consistent result from different formats
[21:41:15 CEST] <kepstin> i'm not sure what you're trying to do, so i can't really help you
[21:41:56 CEST] <kepstin> if you want to make sure the output audio format makes no changes to the audio when encoding, use a lossless format like flac
[21:44:12 CEST] <kepstin> if your goal is to try to match audio for fingerprinting/comparison, then merely converting it to a common format (bit depth, sample rate, etc.) generally isn't sufficient
[21:44:49 CEST] <kepstin> (but often is a first step when you're doing feature extraction)
[21:45:55 CEST] <snatcher> kepstin: same audio in different formats/qualities - how can i check whether the content is the same? the idea is: downmix all input files to stereo, apply drc (better before or after conversion to a specific format?), convert to some flat/consistent format/quality, compare chromaprints
[21:46:15 CEST] <kepstin> snatcher: just go straight from original files -> chromaprints
[21:46:25 CEST] <kepstin> the chromaprint stuff already does all the required normalization
[21:46:38 CEST] <kepstin> https://oxygene.sk/2011/01/how-does-chromaprint-work/ is an interesting read :)
[21:47:59 CEST] <snatcher> kepstin: are you sure there's no way to improve it with additional conversions? since i get different chromaprints for the same audio in flac/opus (320)
[21:48:19 CEST] <kepstin> they will be different, yes, you need to use a threshold when comparing them
[21:48:26 CEST] <kepstin> see the blog post i linked
[21:49:13 CEST] <kepstin> note that when you're using the online acoustid service, it does serverside thresholding/comparison of the chromaprints
[21:49:34 CEST] <kepstin> in that case, you can just go by "do i get the same acoustid back?"
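(Aside: two hedged ways to produce comparable fingerprints: chromaprint's own fpcalc tool, or ffmpeg's chromaprint muxer where the build was configured with it; file names invented:)

    fpcalc input.flac
    ffmpeg -i input.flac -f chromaprint -fp_format base64 -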
[21:49:53 CEST] <durandal_1707> it will fail miserably if i areverse the audio
[21:50:37 CEST] <kepstin> durandal_1707: yes? it's not intended to handle that sort of thing...
[21:51:14 CEST] <kepstin> chromaprint/acoustid sort of came out of the musicbrainz project as a way to detect which particular recording/track and audio file was, so metadata could be looked up automatically
[21:53:53 CEST] <kepstin> it's not designed to be robust enough against noise, or to work for arbitrary bits of audio, to work as a "Shazam"-style thing, and it's not designed to work around changes people make like speed, phase, etc. for use in automatic copyright matching stuff.
[22:11:30 CEST] <snatcher> kepstin: do you mean -silence_threshold chromaprint option?
[22:13:50 CEST] <kepstin> snatcher: i'm not sure what that does, tbh.
[22:15:47 CEST] <snatcher> seems that with a value >= 21000 it outputs the same fingerprint for any(?) file
[22:22:02 CEST] <kepstin> my guess is that the purpose of that is to better match files that start/end with different amounts of silence, but don't quote me on that.
[00:00:00 CEST] --- Thu Oct 10 2019