[Ffmpeg-devel-irc] ffmpeg.log.20191029

burek burek at teamnet.rs
Wed Oct 30 03:05:02 EET 2019


[00:20:36 CET] <cards> What's a common / technical / ADA name for subtitles intended for the hearing impaired?  i.e., they contain extra information about background noises and identify the speaker's name
[00:23:09 CET] <JEEB> they're often noted as captions as opposed to subtitles. the disposition seems to be called "hearing impaired" in FFmpeg, and you can check the DVB specs regarding those if ye want
[00:24:25 CET] <cards> yeah, there seems to be a ton of nuance in technicalities of subtitles, closed captions, etc and their over-the-air transmission and disc encoding schemes.  But I do recall some 4 letter acronym at one point
[00:27:11 CET] <cards> and apparently "hearing impaired" is like calling someone a midget, and the NAD is asking people to use the word "deaf" again. :S
[00:29:39 CET] <cards> the acronym i'm looking for might be something like DHHS or DHHC for "Deaf & Hard of Hearing" Captions or Subtitles
[00:33:49 CET] <mayli> hey, how do i use ffprobe with -skip_frame nokey and get the right coded_picture_number?
[00:40:58 CET] <ironm> kepstin, I got it: ffmpeg -loop 1 -r 1/10 -i BA10-header.png -c:v libx264 -r 50 -t 3 -pix_fmt yuv420p B10-head-x264.mp4
[00:41:21 CET] <ironm> it is working with those GoPro 7 MP4 files
[00:41:50 CET] <ironm> BA10-header.png: PNG image data, 3840 x 2160, 8-bit/color RGBA, non-interlaced
[00:47:18 CET] <cards> Think I found my answer.  Assuming this is accurate adopted information.
[00:47:21 CET] <cards> > SDH subtitles are subtitles for the deaf and hard of hearing. Subtitles for the deaf and hard of hearing are a transcription - they are subtitles in the same language as the spoken dialogue, with added information for those who cannot hear the environmental sounds or lyrics.
[00:47:57 CET] <cards> With that, I'm going to use English.srt and English_SDH.srt
[00:49:25 CET] <cards> references https://www.rev.com/blog/sdh-subtitles-for-the-deaf-and-hard-of-hearing | https://blog.ai-media.tv/blog/what-is-sdh | https://www.121captions.com/2015/06/14/sdh-subtitles-deaf/
[00:53:03 CET] <Retal> Hello guys
[00:53:03 CET] <Retal> gptid/4dd123d4-a65e-11e8-a21e-0cc47a6ad8a8  FAULTED
[00:53:03 CET] <Retal> How do I determine the faulted disk's physical SN from the CLI?
[00:56:06 CET] <nicolas17> Retal: are you sure you're in the right channel?
[00:59:13 CET] <Retal> :D :D nicolas17, sorry
[01:09:52 CET] <realies> having trouble converting a wav file to raw and from raw back to wav
[01:10:58 CET] <realies> ffmpeg -f s16le -ac 1 -ar 48000 i.raw o.wav, Output file #0 does not contain any stream
[01:11:38 CET] <realies> converted to raw with ffmpeg -i z.wav -f s16le -c:a pcm_s16le i.raw
[01:11:59 CET] <nicolas17> you're giving two output files and zero input files
[01:12:04 CET] <nicolas17> you're missing the -i
[01:12:28 CET] <realies> thanks 🤦
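For the archive, the working round trip (filenames as in the chat): raw PCM has no header, so the sample format, channel count, and rate must be restated when reading the file back, and the input needs the `-i` that was missing above.

```shell
# WAV -> headerless raw PCM
ffmpeg -i z.wav -f s16le -c:a pcm_s16le i.raw
# raw PCM -> WAV: format/channels/rate must be given again, and -i was the missing bit
ffmpeg -f s16le -ac 1 -ar 48000 -i i.raw o.wav
```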
[01:13:08 CET] <Reinhilde> ideally your standard English subtitles would be SfDHI
[01:36:30 CET] <johnjay> hrm, why is ffmpeg taking 60 seconds to make a 60 minute silence file?
[01:36:44 CET] <johnjay> i specified -b:a 128k -minrate 128k -maxrate 128k to get something close to CBR
[01:36:53 CET] <johnjay> using anullsrc
[01:50:28 CET] <klaxa> 60 seconds for 60 minutes sounds not too bad
[01:52:04 CET] <furq> i'm guessing it's a lossy codec from the bitrate in which case yeah, that's to be expected
[01:53:54 CET] <DHE> still, I'd expect it to take more like 4-5 seconds on my own PC
[01:54:06 CET] <klaxa> well, no codec was given
[01:54:19 CET] <klaxa> for all we know he used -c:a flac and -b:a was overridden :P
[01:54:23 CET] <DHE> hahaha
[01:54:31 CET] <DHE> I'm guessing AAC or such
[01:54:47 CET] <furq> i'd expect flac to take a while as well
[01:55:12 CET] <klaxa> is encoding silence very much faster in general though?
[01:55:24 CET] <furq> i'm sure you could have some special fastpath for encoding long files where every sample is the same
[01:55:25 CET] <klaxa> i mean i guess there's a lot of shortcuts a codec can take
[01:55:32 CET] <DHE> depends. some codecs don't care, some do vary time by content
[01:55:33 CET] <furq> but i can't imagine anyone has bothered
[01:55:44 CET] <furq> it's not a very common use case
[01:56:41 CET] <klaxa> might be faster to generate with a short sample and a lot of concat muxes
[01:56:59 CET] <furq> yeah but could you write that command in less than 60 seconds
[01:57:30 CET] <klaxa> but think of the time savings in the long run! maybe i need a lot of silent files of specific length?
[02:07:42 CET] <DHE> then I suggest having a long file and truncating it
[02:16:27 CET] <klaxa> good idea
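A sketch of DHE's suggestion, with durations and filenames as hypothetical examples: encode one long silent file once, then cut shorter files from it with stream copy, which skips the slow lossy encode entirely.

```shell
# encode an hour of silence once (the lossy encode is the slow part)
ffmpeg -f lavfi -i anullsrc=r=44100:cl=mono -t 3600 -c:a aac -b:a 128k silence-1h.m4a
# cut a shorter file out with stream copy; nearly instant
ffmpeg -i silence-1h.m4a -t 600 -c copy silence-10m.m4a
```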
[02:48:12 CET] <realies> can you gpu accelerate the showwaves filter?
[02:51:32 CET] <realies> or reduce the fft so it goes faster?
[03:02:15 CET] <AlexApps> Hello, I've tried to loop an audio clip over an image sequence provided by cat (see below) using stream_loop, but the clip only plays once. Can anybody help me figure out why?
[03:02:16 CET] <AlexApps> cat *.png | ffmpeg -stream_loop -1 -i loop.mp3 -f image2pipe -framerate 1/5 -vcodec png -i pipe:0 -filter_complex [1]scale=force_original_aspect_ratio=decrease:h=1080:w=1920[s0];[s0]pad=color=white:h=1080:w=1920:x=-1:y=-1[s1] -map 0 -map [s1] -f mp4 -c:a aac -c:v libx264 -coder ac -fflags +shortest -pix_fmt yuv420p -r 30 video.mp4
[03:12:10 CET] <AlexApps> Anybody have any advice?
[04:43:23 CET] <KombuchaKip> Does the read_apic() function (https://ffmpeg.org/doxygen/trunk/id3v2_8c-source.html#l00441) apply to any type of media file, or is it strictly for mp3?
[04:44:17 CET] <kepstin> KombuchaKip: that's an internal method of the id3 code which is only for mp3, yeah
[04:44:45 CET] <kepstin> (well, sometimes people throw id3 tags on non-mp3 stuff, but i'm not sure if ffmpeg supports reading those)
[04:44:50 CET] <KombuchaKip> kepstin: Thank you. What do you recommend I take a look at for a general routine that can find artwork for any input?
[04:45:16 CET] <kepstin> KombuchaKip: for all formats that ffmpeg can read artwork from, it's exposed by the demuxer as a video stream
[04:45:56 CET] <kepstin> KombuchaKip: but it might be easier to use a dedicated music tagging library instead of ffmpeg, depending what you're doing
[04:45:56 CET] <KombuchaKip> kepstin: Yes, I saw that, but I wasn't sure if I should look for AVMEDIA_TYPE_AUDIO streams or ones with AV_DISPOSITION_ATTACHED_PIC disposition bit set?
[04:46:08 CET] <kepstin> well, audio streams are audio...
[04:46:14 CET] <kepstin> so it must be the other one
[04:46:15 CET] <KombuchaKip> kepstin: I need to programmatically extract album art.
[04:46:37 CET] <KombuchaKip> kepstin: Sorry, I meant AVMEDIA_TYPE_VIDEO
[04:47:02 CET] <kepstin> it'll be a video stream, as i said. I'm not sure exactly which flags are set.
[04:47:36 CET] <KombuchaKip> kepstin: Thanks.
[04:47:46 CET] <kepstin> I'd suggest you consider looking at something like taglib (c++) or mutagen (python) which are dedicated audio file tagging libraries, both of which support album art stuff in a more complete way than ffmpeg.
[04:49:25 CET] <KombuchaKip> kepstin: I'd prefer not to add another dependency to my code. I also don't need to modify any metadata. I just need to extract artwork and that's it.
[04:49:46 CET] <kepstin> in particular, i don't think ffmpeg supports multiple images well, if at all
[04:49:56 CET] <kepstin> if all you have is a single cover image it might be ok i guess.
[04:53:12 CET] <KombuchaKip> kepstin: Yeah I'm only looking to extract one. I'll look at the packet size and extract whichever one is the largest.
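For reference, from the CLI the attached picture can be pulled out with stream copy; a minimal sketch, assuming a single cover image (filenames hypothetical):

```shell
# cover art is exposed as a video stream with the attached_pic disposition
# (ffprobe -show_streams reports DISPOSITION:attached_pic=1 for it);
# map the video stream and copy the packet out unchanged
ffmpeg -i input.mp3 -map 0:v -c copy cover.jpg
```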
[07:07:27 CET] <andyandybobandy> Hello! I'm trying to convert mkvs to mp4 with copied streams (HEVC/AAC) but only one language (audio&sub), but I'm asked to specify an encoder even though I specify copy - cmd: http://ix.io/206Q output: http://ix.io/206P
[07:24:33 CET] <furq> andyandybobandy: you're not mapping the video, so output stream 0.1 is one of the subtitle streams
[07:27:22 CET] <andyandybobandy> furq: yeah I thought that might be a problem, but I also tried adding a '-map 0:v' as the first map and it just shifted the stream numbers but no different really.
[07:27:33 CET] <furq> well you'll need -map 0:v anyway
[07:27:44 CET] <furq> -c:s mov_text should work for text subtitles
[07:27:47 CET] <furq> not sure why that's not autoselected
[07:28:35 CET] <andyandybobandy> is that standard for mp4?
[07:28:44 CET] <furq> that's the only one mp4 supports afaik
[07:29:33 CET] <andyandybobandy> furq: great, thanks! working well when I specify sub codec
[07:29:50 CET] <furq> browser players tend to just ignore it, but this is hevc so that's not going to work anyway
[07:29:58 CET] <furq> it should work fine in a proper player
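Putting the thread together, a sketch of the full remux (the metadata stream specifier and mov_text are standard; the `eng` language tag and filenames are assumptions):

```shell
# copy video, plus only the English audio and subtitle streams; text subs
# become mov_text, the only subtitle codec mp4 supports
ffmpeg -i in.mkv -map 0:v -map 0:a:m:language:eng -map 0:s:m:language:eng \
       -c:v copy -c:a copy -c:s mov_text out.mp4
```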
[09:34:23 CET] <Glumetu> hello. Can you please help me? i'm trying to transform footage that is 1080 at 50 fps interlaced into 1080 at 25 fps progressive. how can i combine fields into frames?
[09:35:47 CET] <JEEB> deinterlace filter like yadif etc
[09:36:41 CET] <Glumetu> tried that, it doesn't give me 25 fps, it just deinterlaces it
[09:36:57 CET] <Glumetu> i read in the manual about the fieldmatch filter
[09:37:12 CET] <JEEB> so is your content decoded as separate fields?
[09:37:37 CET] <JEEB> also yes, fieldmatch will just stick fields together
[09:37:52 CET] <JEEB> I'm having issues deciphering what exactly you have and what you want to do with it
[09:38:12 CET] <JEEB> a) when you decode with FFmpeg, do the fields come combed together as a "frame" or separate field pictures?
[09:38:25 CET] <JEEB> b) Do you want to just deinterlace or actually field-match it?
[09:39:14 CET] <Glumetu> i've got interlaced footage, top field first, at 50 fps and i'm trying to get it to progressive at 25 fps
[09:39:35 CET] <Glumetu> and i think deinterlacing it will lose quality ..
[09:40:01 CET] <Glumetu> so option b seems more like what i need
[09:41:02 CET] <Glumetu> any tips on how i can pass the fieldmatch command to get what i need?
[09:44:00 CET] <JEEB> what do you mean with 50fps?
[09:44:09 CET] <JEEB> do you mean fields per sec or frames per sec?
[09:44:16 CET] <JEEB> do you have 100 fields per sec, or 50 fields per sec
[09:44:41 CET] <JEEB> what is the actual content you have. it is coded interlaced but is it actually interlaced (every field is different), or telecine
[09:45:33 CET] <JEEB> telecine meaning two fields making a single image, instead of every field being its own image
[09:45:41 CET] <JEEB> you should be able to check in a scene with motion
[09:48:54 CET] <Glumetu> that's a very good question, i can't give you a good answer
[09:49:02 CET] <Glumetu> mediainfo looks like this
[09:49:08 CET] <Glumetu> Frame rate mode: Constant | Frame rate: 50.000 FPS | Scan type : Interlaced | Scan type, store method: Interleaved fields | Scan order : Top Field First
[09:51:17 CET] <furq> Glumetu: https://ffmpeg.org/ffmpeg-filters.html#weave_002c-doubleweave
[09:51:27 CET] <furq> i don't necessarily recommend using it but it sounds like what you expected yadif to do
[09:52:39 CET] <furq> yadif mode 0 will output 25fps but it'll drop every other field for reasons that will become obvious if you try using weave
[09:52:54 CET] <JEEB> Glumetu: actually *look* at the fucking content
[09:53:00 CET] <JEEB> use mpv, ffplay, whatever
[09:53:06 CET] <JEEB> vapoursynth, avisynth
[09:53:15 CET] <JEEB> you cannot see what is inside from the *metadata*
[09:53:34 CET] <JEEB> metadata will only tell you how it is *coded*
[09:53:39 CET] <JEEB> not what it *contains*
[10:00:43 CET] <Glumetu> let's say it is coded interlaced because you can see jaggy edges on motion ... and that is what i'm trying to get rid of
[10:01:45 CET] <furq> so what was the problem with yadif
[10:02:12 CET] <Glumetu> the problem is that i can't get it to give me 25 fps .. but it's true it deinterlaces
[10:02:31 CET] <furq> just -vf yadif will output 25fps
[10:03:07 CET] <durandal_1707> Glumetu: what yadif gives you instead?
[10:03:08 CET] <Glumetu> i tried -filter:v yadif=0
[10:03:18 CET] <Glumetu> still gives me 50
[10:03:46 CET] <Glumetu> same as my original footage but it deinterlaces every frame
[10:04:26 CET] <durandal_1707> Glumetu: perhaps you need to add deint=all ?
[10:04:42 CET] <furq> that should be on by default
[10:05:59 CET] <Glumetu> now i could just add -r 25 to get 25 frames per second but i was looking into weaving 2 fields into one frame...
[10:06:19 CET] <furq> yeah that's not going to work
[10:06:29 CET] <Glumetu> but i think JEEB is right, maybe i don't really have fields .... i just have interlaced footage
[10:06:29 CET] <furq> weaving two fields into one frame is what your video player is doing with deinterlacing off
[10:06:36 CET] <furq> which is why it looks like shit
[10:07:09 CET] <Glumetu> and because i don't have fields i can't combine 2 fields into a frame to get to progressive
[10:07:18 CET] <durandal_1707> Glumetu: please provide short sample
[10:07:32 CET] <furq> yeah
[10:08:08 CET] <durandal_1707> Glumetu: there is weave filter to combine 2 separate frames/fields into single frame (2 fields)
[10:14:13 CET] <Glumetu> Short sample 2 s > 500 MB https://we.tl/t-BIVHnBrZS9
[10:16:45 CET] <durandal_1707> Glumetu: what is it, 16K, or do you use uncompressed video?
[10:17:31 CET] <Glumetu> qtrle (animation)
[10:17:48 CET] <durandal_1707> lol
[10:18:33 CET] <durandal_1707> that's like uncompressed video when used with noisy content
[10:25:41 CET] <durandal_1707> Glumetu: video is marked as progressive, and does not appear to be interlaced at all
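One way to "actually look at the content", as JEEB advises, is the idet filter, which classifies each decoded frame; a sketch (filename hypothetical):

```shell
# decode the whole clip and print counts of TFF/BFF/progressive/undetermined
# frames in the summary at the end
ffmpeg -i input.mov -vf idet -an -f null -
```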
[11:30:24 CET] <void09> found a video that ffmpeg produces mostly incorrect snapshots of, like this: https://i.imgur.com/DTdjNUo.png a bluray m2ts file
[11:30:55 CET] <furq> how did you take the screenshot
[11:30:59 CET] <void09> I also get those gray frames in mpv when seeking in the video
[11:32:18 CET] <void09> ffmpeg -ss [timestamp] -i fie.m2ts -y -frames:v 1 -vf 'scale=w=trunc(if(gte(sar\,1)\,iw*sar\,iw)/ohsub)*ohsub:h=trunc(if(gte(sar\,1)\,ih\,ih/sar)/ovsub)*ovsub,setsar=1' file.png
[11:33:30 CET] <furq> you probably need -ss after -i with mpegts since it has no seek index
[11:34:09 CET] <void09> ohh ;o what difference does it make, logically ?
[11:34:30 CET] <furq> well the corruption is because it's not seeking back to the previous idr frame and decoding it from there
[11:34:58 CET] <furq> -ss after -i will decode the stream up to the timestamp you asked for
[11:35:29 CET] <void09> all of it ?
[11:35:36 CET] <BtbN> yes
[11:35:41 CET] <furq> before -i it'll use the seek index to find the previous idr frame and then decode from there
[11:35:46 CET] <BtbN> That's the only way to reliably seek
[11:35:48 CET] <furq> so i assume that doesn't work well with mpegts
[11:35:58 CET] <furq> i'm not sure what it actually tries to do as an input option
[11:36:26 CET] <BtbN> you could also try to combine the two methods, to fast-forward skip the largest chunk first, and then precisely decode-seek to a keyframe
[11:36:55 CET] <BtbN> I think for mpegts it will just make a wildly inaccurate guess on the bitrate, and then jump forward based on that guess.
[11:37:16 CET] <furq> nice
[11:37:43 CET] <furq> not sure why seeking wouldn't work in mpv but i guess that's not a question for this channel
[11:37:49 CET] <void09> BtbN: I wouldn't know how to do that :\
[11:37:52 CET] <furq> it's always worked for me but i guess it's an "if you're lucky" thing
[11:38:04 CET] <void09> furq: it does, it just produces the first second or so of garbage video
[11:38:11 CET] <furq> yeah i mean i've never seen that
[11:38:22 CET] <void09> me neither, it must be something specific to this bluray disc
[11:38:24 CET] <BtbN> mpv will do the same as ffmpeg does. In that it just jumps to a random frame it guessed based on the bitrate
[11:38:29 CET] <void09> I mean i have seen it, very rarely
[11:38:32 CET] <furq> i guess it's more likely with higher bitrates
[11:38:35 CET] <BtbN> and then it will play broken output until it hits the next IDR frame
[11:39:14 CET] <furq> i tried with a few .ts files i have here and it always just found the nearest idr frame
[11:39:33 CET] <void09> vlc does the same too
[11:39:35 CET] <furq> but they're all pretty low bitrate so maybe it just fills the cache and sees if it finds one
[11:39:39 CET] <BtbN> Maybe m2ts is just especially broken
[11:40:01 CET] <void09> Monty Python the meaning of life bd
[11:40:11 CET] <BtbN> But yeah, -ss after -i should entirely avoid the issue.
[11:40:26 CET] <BtbN> And be actually precise timing, not some wild guess
[11:40:43 CET] <void09> yes but taking many screenshots at random times from a 29GB file by seeking all of it ? :\
[11:40:56 CET] <void09> oh wait, seeking is not decoding, right ?
[11:41:05 CET] <BtbN> if you put it after -i, it will
[11:41:11 CET] <furq> if you want to take many screenshots then use the select filter and do it all in one pass
[11:41:14 CET] <BtbN> like I said, try combining the two
[11:41:26 CET] <furq> or just take them with mpv
[11:41:36 CET] <BtbN> seek the largest chunk with -ss before -i, and then add the last 30 seconds or so after -i
[11:41:39 CET] <void09> furq: mpv would probably produce the same results as it uses ffmpeg
[11:41:53 CET] <furq> it doesn't take screenshots by invoking ffmpeg -ss
[11:42:01 CET] <void09> BtbN: sorry but i have no idea how to do that
[11:42:06 CET] <furq> if it has the correct frame on screen then it'll take a correct screenshot of it
[11:42:16 CET] <BtbN> I have no idea how you can have no idea how to do that oO
[11:42:23 CET] <BtbN> Literally just put one -ss before and one after -i
[11:42:32 CET] <void09> oh
[11:42:44 CET] <furq> does that actually work
[11:42:48 CET] <BtbN> it should
[11:43:17 CET] <BtbN> it will still be largely inaccurate, due to no seek index
[11:43:36 CET] <BtbN> but seeking 30 extra seconds post-decode should ensure you hit an IDR frame
[11:44:09 CET] <furq> does -ss before -i still actually get you to the right timestamp
[11:44:23 CET] <BtbN> If there is no seek index, it will put you wherever it feels like
[11:44:32 CET] <BtbN> But it will be fast at doing so
[11:44:49 CET] <void09> bd keyframe is about every 40 frames or so, so 3 seconds before should be enough
[11:44:58 CET] <furq> yeah that's the part of this where i don't get how using -ss twice would work
[11:45:18 CET] <BtbN> Well, if the goal is to just get some screenshot, and the precision is not overly important.
[11:45:34 CET] <BtbN> If you want a screenshot of a precise timestamp, there is no way around decoding the whole thing
[11:46:39 CET] <furq> anyway if you want multiple screenshots at known timestamps then -vf "select='eq(t,1:23:45)'+'eq(t,2:34:56)',scale=whatever" -vsync 0 shot%d.png
[11:46:48 CET] <furq> that will decode the whole thing but only once instead of seeking through the file n times
[11:47:38 CET] <void09> BtbN: precision is very important .. i have implemented an algorithm so every nth screenshot always has the same timestamp
[11:49:01 CET] <BtbN> yeah, -ss after -i is the only viable thing to do then
[11:49:31 CET] <void09> but this can take hours
[11:49:48 CET] <void09> so doing -ss before and after -i will not result in an accurate screenshot ?
[11:50:09 CET] <void09> it's just a stream
[11:50:15 CET] <void09> no timestamps ? :\
[11:50:46 CET] <furq> -i foo -ss 30 means decode and discard 30 seconds of the input
[11:51:00 CET] <furq> so if it seeks to the wrong place then it's just going to decode 30 seconds from that wrong place
[11:56:18 CET] <void09> right, so it seeks to a byte position, not a time position, if it's not decoding the m2ts
[11:56:49 CET] <furq> yeah that would normally be mapped in a seek index
[11:56:58 CET] <furq> so it just has to make a wild guess if there isn't one
[11:57:45 CET] <void09> blah this sucks
[11:57:55 CET] <furq> how many screenshots do you need
[11:58:01 CET] <void09> arbitrary number of them
[11:58:09 CET] <furq> oh
[11:58:11 CET] <void09> 1-10 usually, whatever the user inputs
[11:58:20 CET] <void09> I am making a little program
[11:58:37 CET] <furq> yeah i was going to say for a large enough number decode and vf select would be faster
[11:58:41 CET] <furq> but 10 probably isn't large enough
[11:58:56 CET] <furq> it'll definitely be quicker than -ss 10 times though
[11:58:58 CET] <void09> of course it will be faster than decoding the video 10 times
[11:59:38 CET] <furq> if it was like 50 then i would do that even if there was a seek index
[11:59:39 CET] <void09> but still, isn't it an ffmpeg bug that it does not correctly decode the frame ?
[11:59:58 CET] <furq> it's more a missing feature than a bug
[12:00:17 CET] <void09> i'd say it's a bug, it should not output an incorrectly decoded frame
[12:00:22 CET] <furq> it's decoding the frame correctly but you could argue it should seek backwards to find an IDR frame first
[12:00:37 CET] <void09> well, it's not decoding it correctly then :D
[12:00:50 CET] <void09> cause the frame depends on that IDR frame
[12:00:59 CET] <furq> i'll leave that semantic argument alone
[12:01:32 CET] <bencoh> :)
[12:01:36 CET] <furq> it's hard to cover all bases with this because sometimes that previous idr frame is missing or too far back to buffer
[12:02:09 CET] <void09> well with bluray it's pretty constant I think
[12:02:09 CET] <furq> and the current behaviour is technically doing what you asked for, just in an annoying way
[12:02:51 CET] <furq> sure but not all files with no seek index are from bluray
[12:03:30 CET] <furq> and with something like iptv mpegts you'd expect to start in the middle of a gop
[12:04:54 CET] <furq> the point being that if you do file a ticket it should probably be a feature request, not a bug report
[12:07:13 CET] <void09> furq: does that eq(t.. )+...  command have any drawbacks?
[12:28:38 CET] <furq> not other than it decoding the entire file
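A sketch of the combined seek BtbN describes, with hypothetical timestamps and filenames: a fast but inaccurate demuxer seek before -i covers the bulk of the file, then an accurate decode-seek after -i covers the remainder, long enough to cross an IDR frame:

```shell
# target frame at 1:00:00: coarse-seek to roughly 59:30 (fast, may land
# anywhere nearby), then decode ~30 s to reach the exact timestamp cleanly
ffmpeg -ss 00:59:30 -i file.m2ts -ss 30 -frames:v 1 shot.png
```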
[13:06:33 CET] <seni> if I'm compiling ffmpeg with the sole purpose of decoding arbitrary audio files into WAVPCM, what flags should I be setting? I've looked inside configure but there are so many options... it seems like I should probably be disabling everything except maybe 3 things?
[13:07:52 CET] <Reinhilde> you'll want to enable most audio codecs
[13:14:37 CET] <furq> at minimum you'll need every audio decoder, pcm encoder, mp3/m4a/ogg/flac/etc demuxers, wav muxer, ac3/aac parsers, file protocol
[13:14:42 CET] <furq> probably more stuff i forgot
[13:15:38 CET] <furq> check ./configure --list-decoders etc
[13:16:44 CET] <furq> also just generally don't do this unless you're shipping the binary with something else
[13:29:13 CET] <DHE> seni: when you say "sole purpose", do you mean you want to omit all other things?
[13:45:32 CET] <seni> DHE: yes. i have an app with which you can import audio files, and to process them I need them to be in WAVPCM
[13:49:30 CET] <seni> thanks guys
[13:50:12 CET] <DHE> sure, but a default build can also do that; it just takes longer to make and is like 20 megabytes large.
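A rough configure sketch along the lines furq describes; the exact component names vary between ffmpeg versions, so they should be checked against ./configure --list-decoders, --list-demuxers, etc. rather than taken from here:

```shell
# disable everything, then re-enable only what audio -> WAV decoding needs:
# audio decoders, their demuxers and parsers, the pcm_s16le encoder,
# the wav muxer, and the file protocol
./configure --disable-everything \
  --enable-decoder='aac,mp3*,flac,vorbis,opus,pcm*' \
  --enable-demuxer='aac,mp3,flac,ogg,mov,wav' \
  --enable-parser='aac,mpegaudio,flac' \
  --enable-encoder=pcm_s16le --enable-muxer=wav \
  --enable-protocol=file --disable-network --disable-doc
```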
[15:57:00 CET] <Ingvix> Any idea why 'ffmpeg -ss 74.4 -i "input.mkv" -map_chapters -1 -to 00:25:21.920 -c:a copy -c:v copy "output.mkv"' works correctly, outputting the video between the timestamps but then 'ffmpeg -ss 74.4 -i "input.mkv" -map_chapters -1 -to 00:25:21.920 -c:a copy -c:v copy "output.mkv"' outputs from -ss value to the end of video and not to -to value? The only differences I see are the timestamps. My version is
[15:57:00 CET] <Ingvix> N-86950-g1bef008 on Windows.
[15:57:24 CET] <Ingvix> whops, double paste
[15:57:41 CET] <Ingvix> Here's the second command: '
[15:57:41 CET] <Ingvix> ffmpeg -ss 1521.7 -i "input.mkv" -map_chapters -1 -to 00:46:49.610 -c:a copy -c:v copy "output2.mkv"'
[15:59:05 CET] <furq> Ingvix: those should both be broken
[15:59:16 CET] <Ingvix> oh
[15:59:21 CET] <Ingvix> how so?
[15:59:35 CET] <furq> -t/-to as an output option won't do what you expect if you use -ss as an input option
[16:00:25 CET] <furq> -ss 30 -i foo -to 40 will give you 30 to 1:10
[16:01:11 CET] <furq> -ss 30 -to 40 -i foo should work though
[16:01:17 CET] <furq> or both as output options but that'll take longer
[16:03:05 CET] <Ingvix> I see
[16:03:17 CET] <Ingvix> oh, I tried to insert -to in front of -i but it gave some error. I'll try again in case I just typed it wrong
[16:06:11 CET] <Ingvix> I had them both after -i before, but that caused the video to not show until the first keyframe in the clip, whereas putting them before -i splits before the next keyframe (or the closest one, I'm not sure)
[16:09:13 CET] <Ingvix> I'm still getting error "Option to (record or transcode stop time) cannot be applied to input url input.mkv -- you are trying to apply an input option to an output file or vice versa. Move this option before the file it belongs to. Error parsing options for input file input.mkv. Error opening input files: Invalid argument"
[16:11:01 CET] <Ingvix> I wonder if my version is too outdated or was it just misconception on your end
[16:16:02 CET] <Ingvix> -t seems to work so I guess I can use that. Just need to calculate a bit. Unless someone knows better?
[16:18:36 CET] <Ingvix> docs do imply that -to should work with input file as well. Maybe I should try to update my ffmpeg
[16:32:50 CET] <Ingvix> yup, latest stable build fixed the issue
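The working form, for the record (both -ss and -to as input options; -to before -i needs a reasonably recent ffmpeg, which is what the update fixed):

```shell
# input-side seek and stop: cuts 74.4 s .. 25:21.920 with stream copy,
# splitting at the nearest keyframes
ffmpeg -ss 74.4 -to 00:25:21.920 -i input.mkv -map_chapters -1 \
       -c:a copy -c:v copy output.mkv
```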
[19:06:27 CET] <phobosoph> hi
[19:06:44 CET] <phobosoph> still have the issue of youtube live stream stopping to work after some hours
[19:06:46 CET] <phobosoph> Failed to update header with correct duration.
[19:07:01 CET] <phobosoph> also the speed suddenly goes up from normal 1x to e.g. 19.3x !
[19:07:05 CET] <phobosoph> I am not using -re
[19:07:19 CET] <phobosoph> the crazy thing is that ffmpeg still happily streams away, not aware of issues
[19:07:35 CET] <phobosoph> but youtube stops showing the stream - and when this takes for too long youtube will finish the stream
[19:07:51 CET] <phobosoph> so the question is why the speed suddenly goes up from 1.x after many hours of correct streaming?
[19:07:55 CET] <phobosoph> wtf
[19:07:58 CET] <phobosoph> pls help
[19:08:07 CET] <phobosoph> Failed to update header with correct duration.149.2kbits/s speed=19.3x
[19:08:21 CET] <phobosoph> this was logged as I noticed the youtube stream doesn't work anymore
[19:08:29 CET] <phobosoph> I restarted ffmpeg and now the youtube stream works again
[19:08:33 CET] <phobosoph> and speed is back to 1.x
[19:09:24 CET] <phobosoph> frame=12920 fps= 30 q=-1.0 size=  323841kB time=00:07:10.47 bitrate=6162.8kbits/s speed=   1x
[19:09:30 CET] <phobosoph> when it works ^
[19:09:42 CET] <phobosoph> but after some hours speed suddenly climbs dramatically and then youtube doesn't like the data anymore
[19:10:09 CET] <phobosoph> also, why is the bitrate so low for a 19.3x speed?
[19:10:13 CET] <phobosoph> something is off
[19:10:14 CET] <phobosoph> hmm
[19:13:19 CET] <phobosoph> anyone?
[19:13:20 CET] <phobosoph> :/
[19:15:53 CET] <ncouloute> I'm trying to go by the PTS_TIME in the ffmpeg debug log but the fps filter is further adjusting the frames. Is there any way to get ffmpeg to not move the PTS_TIME around? I played around with the fps filter settings but can't get it to be frame accurate. It seems I'm losing between 1-2 frames due to rounding. It's hard to account for when it will round a frame or not. I suppose I can look at the Parsed_fps log entries but
[19:15:53 CET] <ncouloute> was hoping there was a better way.
[19:17:09 CET] <JEEB> if you don't want ffmpeg.c to touch the input demuxer timestamps, -vsync passthrough -copyts (you can then verify with -debug_ts that the timestamps aren't getting mangled within ffmpeg.c)
[19:17:44 CET] <JEEB> although I'm not fully sure what you're talking about :P
[19:20:54 CET] <ncouloute> Yeah it's a bit hard to explain within the irc line length.. but I'm converting any given file or files to 60000/1001 constant frame rate. So I suppose that won't work, since vsync passthrough will make it vfr. I couldn't really understand the output from debug_ts, but the loglevel debug output makes it seem like it is changing the fps after the concat.
[19:22:23 CET] <JEEB> debug_ts just logs the timestamps at each location
[19:22:36 CET] <JEEB> demux, decoding, filtering, encoding, muxing
[19:22:54 CET] <JEEB> and for various there's actually two
[19:23:02 CET] <JEEB> one for pushing something in, another for pulling something out
[19:23:04 CET] <JEEB> like for filtering
[19:23:05 CET] <JEEB> :P
[19:23:13 CET] <JEEB> (or decoding, or encoding)
[19:36:30 CET] <ncouloute> I guess my assumption that the fps filter would be frame accurate is just not the case. It doesn't always pick the right frame. Changing the rounding algorithm helped a bit but in other areas it made it worse. Am I right to assume that for a vfr->cfr conversion there would have to be some sort of frame inaccuracy? Since a given cfr frame position is fixed, you could have a vfr frame fall between those fixed points. I could be
[19:36:30 CET] <ncouloute> completely wrong though lol
[19:38:48 CET] <durandal_1707> ncouloute: fps filter just picks first frame encountered when duplicating frames
[19:44:27 CET] <durandal_1707> ncouloute: how you want timestamp gaps to be filled ?
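JEEB's suggestion as a command sketch (filenames hypothetical): disable ffmpeg.c's timestamp handling and log timestamps at each stage to see where they change:

```shell
# -copyts and -vsync passthrough keep the demuxer timestamps untouched;
# -debug_ts logs pts/dts at demux, decode, filter, encode and mux
ffmpeg -debug_ts -copyts -i input.mp4 -vsync passthrough -f null -
```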
[20:20:11 CET] <DanielTheKitsune> Hello, what codec is fine for low-CPU (like 1 GHz or less) environments, where bandwidth and HDD usage aren't important
[20:20:13 CET] <DanielTheKitsune> ?
[20:24:56 CET] <ritsuka> does your low-CPU have a hardware decoder?
[20:26:53 CET] <DanielTheKitsune> obviously nope
[20:27:07 CET] <DanielTheKitsune> or, even if it has, I don't want to use it
[20:28:00 CET] <DanielTheKitsune> ritsuka: actually, why does it look like every computer has a hardware video decoder of some kind?
[20:28:19 CET] <DanielTheKitsune> does every computer have a PCIe GPU?
[20:28:50 CET] <DanielTheKitsune> well, enough question ranting
[20:28:56 CET] <DanielTheKitsune> (sorry)
[20:29:03 CET] <ritsuka> because even the cheapest raspberry pi has got a hardware decoder
[20:29:16 CET] <DanielTheKitsune> yes, I own a RPi ;)
[20:29:26 CET] <DanielTheKitsune> but the low-CPU environment is actually a Pentium III
[20:29:48 CET] <DanielTheKitsune> with no GPU either, other than the built in the motherboard
[20:29:49 CET] <ritsuka> sell it and buy another rpi, it will be faster too :P
[20:29:59 CET] <DanielTheKitsune> meh, I am not allowed to sell it
[20:30:16 CET] <DanielTheKitsune> and I can't afford to buy another RPi either
[20:30:24 CET] <DanielTheKitsune> I am barely paying off the one I currently own ;)
[20:31:31 CET] <DanielTheKitsune> also, this usecase will be handy for other environments, for example, when attempting to generate as little heat as possible, or spending as little power as possible, even if there is enough power to decode regular h264/h265 @ 4K
[21:07:18 CET] <ncouloute> @durandal_1707: Sorry for late response. I seek to various spots in the file and need the frame to be same as it was in the original file. I'm almost always converting to a higher framerate. So I understand there will be duplication. Trying to track down why the fps filter needs to move the frame. Example: Read frame with in pts 52523516, out pts 52471 Writing frame with pts 52470 to pts 52470
[21:09:27 CET] <durandal_1707> ncouloute: i see, nothing obvious
[21:09:46 CET] <durandal_1707> ncouloute: please share a sample and the command you use so this can be reproduced
[21:19:17 CET] <ncouloute> Alright. I'm going to need to figure out how to cut this down since its large files from a GoPro 5 that I'm concatenating. I'll see if I can isolate the issue down to a few files
[22:23:07 CET] <Hello71> rpi zero is like $10
[22:23:36 CET] <Hello71> pretty sure that's no more than a day of work in places with internet access
[22:24:46 CET] <ncouloute> oh he's gone. Is there a certain way one should upload a sample and command? I saw there was one for bug reports but I don't know for sure if this is a bug per se. Thinking about just uploading to a random upload site and then putting the command in a pastebin. =/
[00:00:00 CET] --- Wed Oct 30 2019


More information about the Ffmpeg-devel-irc mailing list