[Ffmpeg-devel-irc] ffmpeg.log.20180124

burek burek021 at gmail.com
Thu Jan 25 03:05:01 EET 2018


[00:00:06 CET] <GyrosGeier> that's not exactly 24 hours, is it?
[00:00:14 CET] <JEEB> somewhat over 26 or so?
[00:00:25 CET] <GyrosGeier> mmh
[00:00:41 CET] <GyrosGeier> my point is, does concat correct timestamps in consecutive files?
[00:00:55 CET] <JEEB> which concat? :D
[00:01:05 CET] <alexpigment> haha
[00:01:10 CET] <JEEB> filter, demuxer, input thing?
[00:01:15 CET] <JEEB> last one was protocol
[00:01:24 CET] <GyrosGeier> hmmk
[00:01:33 CET] <JEEB> because yes, everyone implemented concatenation on their own level for their own use case
[00:01:44 CET] <alexpigment> i want to say demuxer does..
[00:02:17 CET] <alexpigment> or tries
[00:02:25 CET] <kepstin> concat filter does a reasonable job of giving you correct timestamps, assuming all the input videos start at 0
[00:02:29 CET] <JEEB> yea
[00:02:36 CET] <JEEB> the filter I think is the least insane one
[00:02:40 CET] <alexpigment> well, right
[00:02:41 CET] <JEEB> since it only deals with decoded stuff
[00:02:56 CET] <alexpigment> if you're not re-encoding though, i think demuxer is better than protocol
[00:03:12 CET] <alexpigment> if you're re-encoding, filter is always best
[00:03:25 CET] <JEEB> no idea, all of them have sounded like "they work for my use case" things. and if they work for you, congratulations
[00:03:35 CET] <alexpigment> well, they work until they don't
[00:03:40 CET] <alexpigment> and that is a certainty
[00:03:43 CET] <JEEB> yup
[00:04:02 CET] <JEEB> I just see quite often people complaining that mp4 concatenation f.ex. doesn't work
[00:04:20 CET] <JEEB> concatenation I think might have been worth it doing on the API client level
[00:04:26 CET] <alexpigment> well, it doesn't, but i think it's because people expect magic out of something that is imperfect by nature
[00:04:35 CET] <JEEB> ayup
[00:04:42 CET] <alexpigment> videos are made weirdly, and ffmpeg just kinda deals with that as best it can
[00:04:54 CET] <JEEB> no, the videos are OK.
[00:04:59 CET] <kepstin> so, anyways, time to change AV_TIME_BASE to facebook flicks? (hah)
[00:05:04 CET] <JEEB> kepstin: NO
[00:05:06 CET] <JEEB> lol
[00:05:23 CET] <JEEB> I really liked their "we're not even going to try to support the /1.001 rates" thing
[00:05:35 CET] <alexpigment> ?
[00:05:39 CET] <alexpigment> i didn't see this news
[00:05:59 CET] <JEEB> alexpigment: they pushed a huge PR stunt about finding the least common denominator between a lot of audio and video rates
[00:05:59 CET] <kepstin> JEEB: the hilarious thing is that they accidentally supported the /1.001 rates
[00:06:19 CET] <JEEB> kepstin: huh?
[00:06:30 CET] <kepstin> (i have a pull request against it to fix the docs to say that it does the ntsc rates fine)
[00:06:35 CET] <JEEB> well the number is rather large so I wouldn't be surprising
[00:06:38 CET] Action: GyrosGeier is building a CPU in VHDL that supports fractionals natively
[00:06:39 CET] <JEEB> *surprised
[00:06:49 CET] <kepstin> the number divides by 120000, so yeah
[00:07:26 CET] <JEEB> but yes, congratulations to facebook for finding a common denominator :D
[00:07:41 CET] <kepstin> they found it via brute force! they didn't even use a gcd function!
[00:08:19 CET] <JEEB> lol
[00:14:33 CET] <alexpigment> GyrosGeier: sounds cool. this is for an FPGA?
[00:16:36 CET] <GyrosGeier> CPLD, for a CNC mill controller that gets lots of PWM input signals and needs to convert them to PWM outputs
[00:16:42 CET] <alexpigment> ah
[00:16:43 CET] <alexpigment> nice
[01:30:01 CET] <x__> Can you give me a hand with ffmpeg?
[01:30:08 CET] <x__> I'm getting an error with this command
[01:30:09 CET] <x__> C:\Users\x\Videos>ffmpeg -i saída_parte_1.mp4 -i saída_parte_2.mp4 -i saída_parte_3.mp4 -i saída_parte_4.mp4 -filter_complex "[0:v:0][1:v:0][2:v:0][3:v:0]concat=n=4:v=1:[outv]" -vf scale=1920:1080 output.avi
[01:30:33 CET] <x__> Filter concat has an unconnected output
[01:30:40 CET] <x__> What is unconnected there?
[01:32:53 CET] <furq> x__: either get rid of [outv] or add -map "[outv]"
[01:33:05 CET] <furq> if you add a labelled output then you need to explicitly map it
[01:33:12 CET] <furq> (or send it to another filter)
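A minimal sketch of what furq describes, with hypothetical filenames: the labelled concat output is mapped explicitly.
    ffmpeg -i a.mp4 -i b.mp4 -filter_complex "[0:v:0][1:v:0]concat=n=2:v=1:a=0[outv]" -map "[outv]" out.mp4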
[01:33:33 CET] <x__> furq, what is a labeled output?
[01:33:43 CET] <furq> in that command, [outv]
[01:34:03 CET] <x__> yes
[01:34:59 CET] <x__> Filtergraph 'scale=1920:1080' was specified through the -vf/-af/-filter option for output stream 0:0, which is fed from a complex filtergraph.                  -vf/-af/-filter and -filter_complex cannot be used together for the same stream.
[01:35:05 CET] <x__> furq, that is what I get now
[01:35:11 CET] <furq> oh right i didn't read that far
[01:35:20 CET] <x__> What do I use now?
[01:35:35 CET] <furq> add ",scale=1920:1080" to the end of filter_complex
[01:35:45 CET] <furq> and remove -vf obv
[01:36:03 CET] <x__> furq, why?
[01:36:16 CET] <x__> Is this a parameter specific to -filter_complex?
[01:36:18 CET] <furq> like the error says, you can't have -vf and -filter_complex
[01:37:07 CET] <x__> Now I'm getting yet another error
[01:37:58 CET] <x__> furq, https://pastebin.com/UZbk10gX
[01:38:50 CET] <kepstin> x__: you have to re-arrange your complex filterchain to apply the scale to the correct video stream before doing the concat.
[01:39:07 CET] <furq> uh
[01:39:09 CET] <x__> kepstin, Oh?
[01:39:12 CET] <furq> concat,scale works fine here
[01:39:21 CET] <x__> Ok, I'm confused
[01:39:34 CET] <kepstin> the error is that there's a 720p and a 1080p video being fed into concat
[01:39:39 CET] <furq> are all your inputs the same size
[01:39:40 CET] <furq> yeah
[01:39:51 CET] <furq> nvm i see what you're saying now
[01:40:05 CET] <furq> if your inputs are different sizes then you need to scale before concat
[01:41:42 CET] <x__> furq, So I can't scale on the fly?
[01:42:09 CET] <kepstin> you can, you just have to put the scale filter earlier in the filter graph
[01:42:13 CET] <kepstin> so something like '[0:v]scale=1920:1080[blah];[blah][1:v][2:v][3:v]concat=n=4:v=1:a=0'
[01:42:18 CET] <furq> yeah what he said
[01:44:44 CET] <x__> C:\Users\x\Videos>ffmpeg -i saída_parte_1.mp4 -i saída_parte_2.mp4 -i saída_parte_3.mp4 -i saída_parte_4.mp4 -filter_complex "[0:v:0]scale=1920:1080[1:v:0]scale=1920:1080[2:v:0]scale=1920:1080[3:v:0]scale=1920:1080 concat=n=4:v=1",scale=1920:1080 output.avi
[01:45:08 CET] <x__> furq, I'm applying it to each input because each seems to be a different resolution for some reason
[01:45:38 CET] <x__> But now I get
[01:45:39 CET] <x__> [AVFilterGraph @ 000000000702c880] Unable to parse graph description substring:
[01:45:40 CET] <kepstin> x__: that's not the correct syntax, you need to do something similar to what I did in my example
[01:45:54 CET] <x__> oh
[01:45:58 CET] <x__> kepstin, what is the difference?
[01:46:06 CET] <furq> you need labelled outputs in this case
[01:46:17 CET] <x__> ok
[01:46:35 CET] <x__> furq, and where do they go? What are labeled outputs?
[01:47:04 CET] <kepstin> x__: anything in [] is a label. If it's after a filter, it's an output, if it's before a filter it's an input.
[01:47:04 CET] <furq> [0:v]scale=1920:1080[tmp0];...;[tmp0][tmp1][tmp2][tmp3]concat
[01:47:15 CET] <x__> oh
[01:47:53 CET] <kepstin> if [] is inbetween two filters, it's a syntax error :)
[01:48:17 CET] <kepstin> you use ; to separate independent parts of the filter chain
[01:48:35 CET] <furq> , connects one filter's output to the next filter's input
[01:48:53 CET] <furq> in cases where you can't do that you need explicitly labelled outputs that you can use as inputs
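A sketch of that shape with four inputs (filenames hypothetical): each input gets its own ;-separated scale chain with a labelled output, and those labels then feed concat.
    ffmpeg -i in1.mp4 -i in2.mp4 -i in3.mp4 -i in4.mp4 -filter_complex "[0:v]scale=1920:1080[v0];[1:v]scale=1920:1080[v1];[2:v]scale=1920:1080[v2];[3:v]scale=1920:1080[v3];[v0][v1][v2][v3]concat=n=4:v=1:a=0" out.mp4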
[01:48:53 CET] <x__> Like this?
[01:48:56 CET] <x__> [AVFilterGraph @ 000000000702c880] Unable to parse graph description substring:
[01:48:59 CET] <x__> Sorry
[01:49:12 CET] <x__> C:\Users\x\Videos>ffmpeg -i saída_parte_1.mp4 -i saída_parte_2.mp4 -i saída_parte_3.mp4 -i saída_parte_4.mp4 -filter_complex "[0:v]scale=1920:1080[tmp0][1:v]scale=1920:1080[tmp1][2:v]scale=1920:1080[tmp3][3:v]scale=1920:1080[tmp4] concat=n=4:v=1",scale=1920:1080 output.avi
[01:49:23 CET] <furq> you're missing the ;s
[01:49:40 CET] <furq> you're also missing the end quote
[01:49:58 CET] <kepstin> nah, the end quote's there, but there's an extra scale on the end that's not doing anything useful
[01:50:11 CET] <furq> oh right yeah
[01:50:44 CET] <x__> Like this?
[01:50:45 CET] <x__> C:\Users\x\Videos>ffmpeg -i saída_parte_1.mp4 -i saída_parte_2.mp4 -i saída_parte_3.mp4 -i saída_parte_4.mp4 -filter_complex "[0:v];scale=1920:1080[tmp0][1:v];scale=1920:1080[tmp1][2:v];scale=1920:1080[tmp2][3:v];scale=1920:1080[tmp3] concat=n=4:v=1" output.avi
[01:51:55 CET] <furq> [tmp0];[1:v]
[01:51:55 CET] <furq> etc
[01:52:19 CET] <furq> tmp0 is the output of the first scale, 1:v is the input to the second scale
[01:52:53 CET] <x__> C:\Users\x\Videos>ffmpeg -i saída_parte_1.mp4 -i saída_parte_2.mp4 -i saída_parte_3.mp4 -i saída_parte_4.mp4 -filter_complex "[0:v];scale=1920:1080[tmp0][1:v];scale=1920:1080[tmp1][2:v];scale=1920:1080[tmp2][3:v];scale=1920:1080[tmp3] concat=n=4:v=1" output.avi
[01:52:57 CET] <x__> Like this then
[01:52:59 CET] <x__> ?
[01:53:05 CET] <furq> that's the same thing you pasted before
[01:53:13 CET] <x__> crap
[01:53:55 CET] <x__> https://nopaste.linux-dev.org/?1172794
[01:54:07 CET] <furq> http://vpaste.net/JzdWV
[01:55:45 CET] <x__> furq, Now I get:
[01:56:00 CET] <x__> Cannot find a matching stream for unlabeled input pad 0 on filter Parsed_concat_4
[01:56:36 CET] <kepstin> x__: please repaste the exact command you just ran.
[01:56:42 CET] <x__> Ok
[01:57:42 CET] <x__> kepstin, https://nopaste.linux-dev.org/?1172796
[01:57:57 CET] <furq> you've not given concat any inputs
[01:58:03 CET] <kepstin> x__: that's not what furq told you to do :/
[01:58:30 CET] <x__> oh
[01:58:33 CET] <x__> there's a bit missing there
[01:58:42 CET] <x__>  [tmp0][tmp1][tmp2][tmp3]concat=n=4:v=1" -> this
[01:58:43 CET] <x__> right?
[01:58:47 CET] <furq> yeah
[01:58:55 CET] <kepstin> yes, the list of inputs for the concat filter is missing.
[01:59:21 CET] <x__> Oh, ok
[01:59:39 CET] <x__> Ok, now I just need to know how to speed up the video
[02:00:12 CET] <kepstin> does the video also have sound that you want to keep in sync when you speed it up?
[02:00:27 CET] <furq> http://vpaste.net/vCAdA
[02:00:35 CET] <furq> 0.5 will double the speed, change that as needed
[02:00:57 CET] <furq> i assume there's no audio since that's not being passed to concat, but if there is then you'll need to use atempo as well
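If there were an audio track, a sketch of keeping it in sync at 2x might look like this (input name hypothetical; atempo only accepts factors from 0.5 to 2.0 per instance, so bigger speed-ups need several chained):
    ffmpeg -i in.mp4 -filter_complex "[0:v]setpts=PTS/2[v];[0:a]atempo=2.0[a]" -map "[v]" -map "[a]" fast.mp4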
[02:01:56 CET] <x__> furq, I can also use the syntax PTS/factor, right?
[02:02:00 CET] <x__> I need this 288 times faster
[02:02:05 CET] <furq> sure
[02:02:17 CET] <x__> Awesome :)
[02:02:47 CET] <furq> you're obviously going to end up with a video that's like 8000fps
[02:02:57 CET] <x__> furq, Works for me
[02:03:00 CET] <furq> fair enough
[02:03:03 CET] <x__> furq, this video is 48 hours long
[02:03:04 CET] <x__> lol
[02:03:11 CET] <x__> I want to shorten to 10 minutes
[02:03:17 CET] <x__> It's a timelapse
[02:03:22 CET] <furq> you can add ,fps=30 to the end of that if you want a proper timelapse
[02:03:37 CET] <x__> can I use fps=60?
[02:03:39 CET] <furq> sure
[02:03:51 CET] <furq> i'm not really sure how a player will deal with an 8000fps video but the answer's probably not "well"
[02:04:48 CET] <x__> furq, So fps=60 makes sure that's properly readable, right?
[02:04:57 CET] <furq> it'll just drop most of the frames
[02:05:06 CET] <furq> so yeah
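Putting furq's pieces together for a single input, a sketch (filename hypothetical): setpts speeds the stream up 288x and fps=60 drops the surplus frames down to a playable rate.
    ffmpeg -i in.mp4 -vf "setpts=PTS/288,fps=60" -an timelapse.mp4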
[02:05:17 CET] <x__> furq, Should I do anything else here?
[02:05:25 CET] <furq> not that i can see
[02:05:43 CET] <x__> How long does this take?
[02:05:46 CET] <furq> probably a long time
[02:06:07 CET] <x__> More than a day?
[02:06:21 CET] <furq> depends on your cpu
[02:06:29 CET] <x__> Core i7 4470
[02:06:32 CET] <furq> most of the frames will be dropped before encoding so it shouldn't be that bad
[02:06:44 CET] <kepstin> hmm, but all of the frames are gonna be scaled
[02:06:47 CET] <furq> yeah
[02:06:54 CET] <kepstin> doing the speeding up before the scale might help there.
[02:07:13 CET] <furq> i was hoping to avoid making the filtergraph more complicated lol
[02:07:22 CET] <kepstin> yeah, i can see that :)
[02:07:35 CET] <x__> kepstin, so where do I put the speeding up?
[02:07:51 CET] <furq> remove setpts and fps from the end
[02:07:54 CET] <furq> and put it before each scale
[02:08:15 CET] <furq> so setpts,fps,scale
[02:08:20 CET] <furq> once for each scale
[02:09:47 CET] <x__> Like this?
[02:09:47 CET] <x__> ffmpeg -i saída_parte_1.mp4 -i saída_parte_2.mp4 -i saída_parte_3.mp4 -i saída_parte_4.mp4 -filter_complex "[0:v]scale=1920:1080[tmp0];[1:v]setpts=PTS/288,fps=60,scale=1920:1080[tmp1];[2:v]setpts=PTS/288,fps=60,scale=1920:1080[tmp2];[3:v]setpts=PTS/288,fps=60,scale=1920:1080[tmp3]; [tmp0][tmp1][tmp2][tmp3]concat=n=4:v=1" output.avi
[02:09:58 CET] <x__> OMG, it's a monster :P
[02:10:39 CET] <furq> you missed the first one
[02:11:52 CET] <x__> furq, fixed
[02:12:00 CET] <x__> I just checked the quality of the aborted file
[02:12:03 CET] <x__> it's so bad :P
[02:12:47 CET] <x__> furq, Am I doing anything wrong?
[02:13:11 CET] <kepstin> x__: you're not specifying any output codec or options, and you're using avi so the default codec is bad.
[02:13:30 CET] <x__> kepstin, What should I do?
[02:13:43 CET] <x__> I want to preserve as much of the original quality as possible
[02:13:48 CET] <kepstin> x__: I recommend switching the output to mkv or mp4, which will also switch the default codec to h264 (x264)
[02:13:57 CET] <x__> ok
[02:14:00 CET] <kepstin> x__: then you can start looking at encoder options to set quality.
[02:14:31 CET] <x__> kepstin, How do I do that?
[02:14:35 CET] <kepstin> x__: with the libx264 codec, you set quality by saying "-crf 24"
[02:14:41 CET] <kepstin> change the number lower for higher quality
[02:14:56 CET] <x__> Can I use 0?
[02:14:57 CET] <kepstin> 24 is a sort of medium default
[02:15:10 CET] <kepstin> x__: yes, but that usually means lossless and your file will be huge
[02:15:34 CET] <furq> just run it with the defaults and see if it looks good enough to you
[02:15:36 CET] <x__> It'll be a 10 min file, so it works for me
[02:15:36 CET] <kepstin> I recommend values around 18-25, you'll have to try some and see what they look like
[02:15:42 CET] <furq> if it's a bit blocky then drop to -crf 20 or so
[02:16:05 CET] <furq> 23 is the default which is reasonably good
[02:16:17 CET] <x__> ok, so -f libx264?
[02:16:29 CET] <furq> -c:v libx264 but you don't need to set that if the output file is mp4
[02:17:54 CET] <x__> And where do I put -crf 20?
[02:18:14 CET] <furq> before the output filename
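For instance, something like this (filenames hypothetical; -crf is an option of the libx264 encoder selected here):
    ffmpeg -i in.mp4 -filter_complex "[0:v]setpts=PTS/288,fps=60,scale=1920:1080[v]" -map "[v]" -c:v libx264 -crf 20 output.mp4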
[02:19:46 CET] <x__> ok
[02:21:39 CET] <x__> C:\Users\x\Videos>ffmpeg -i saída_parte_1.mp4 -i saída_parte_2.mp4 -i saída_parte_3.mp4 -i saída_parte_4.mp4 -filter_complex "[0:v]setpts=PTS/288,fps=60,scale=1920:1080[tmp0];[1:v]setpts=PTS/288,fps=60,scale=1920:1080[tmp1];[2:v]setpts=PTS/288,fps=60,scale=1920:1080[tmp2];[3:v]setpts=PTS/288,fps=60,scale=1920:1080[tmp3]; [tmp0][tmp1][tmp2][tmp3]concat=n=4:v=1" -crf 15 output.mp4
[02:21:42 CET] <x__> So this should work
[02:21:43 CET] <x__> Correct?
[02:22:20 CET] <furq> looks ok to me
[02:22:53 CET] <x__> Still blocky :/
[02:24:14 CET] <SortaCore> scale ya say
[02:24:49 CET] <furq> http://vpaste.net/Kwwe5
[02:24:53 CET] <furq> in case you were wondering how dumb filterchains can get
[02:25:19 CET] <x__> furq, Please exorcise that from my life :P
[02:25:40 CET] <x__> Demon begone
[02:25:53 CET] Last message repeated 1 time(s).
[02:25:53 CET] <SortaCore> I'll have you know I was invited here
[02:26:08 CET] <x__> SortaCore, I was talking about the complex filterchain :P
[02:26:21 CET] <SortaCore> _whoosh_
[02:26:40 CET] <furq> i'm sure other people in here have even stupider filterchains
[02:27:01 CET] <x__> furq, Even setting cbr to 0 makes the final video a bit blocky
[02:27:06 CET] <x__> Is there anything else I can do?
[02:27:12 CET] <furq> what are you watching it with
[02:27:26 CET] <x__> *crf
[02:27:38 CET] <x__> Eh
[02:27:41 CET] <x__> windows media player
[02:27:51 CET] <furq> try it with mpv or something
[02:28:10 CET] <x__> furq, still grainy
[02:28:11 CET] <SortaCore> try with BlockyVideoPlayer"
[02:28:17 CET] <x__> SortaCore, lol
[02:28:37 CET] <x__> this can't be right
[02:29:07 CET] <SortaCore> is crf suitable for all resolutions?
[02:29:14 CET] <furq> i don't understand the question
[02:29:30 CET] <relaxed> but the answer is yes
[02:29:47 CET] Action: SortaCore scribbles out 42
[02:30:31 CET] <x__> I just checked the aborted encoding from VSDC
[02:30:36 CET] <x__> It's significantly better
[02:30:48 CET] <x__> I think I'm missing out something
[02:30:54 CET] <furq> if you're trying to watch it while it's still encoding then that won't work properly with mp4
[02:31:20 CET] <x__> furq, Nope, I aborted it at 00:24
[02:31:25 CET] <x__> (24 minutes)
[02:31:31 CET] <x__> Just to check the quality
[02:31:43 CET] <SortaCore> he means with ffmpeg's encodings
[02:31:45 CET] <furq> i'm not entirely sure if that'll work
[02:31:52 CET] <furq> try it with mkv instead of mp4
[02:31:59 CET] <furq> that should actually work if it doesn't complete properly
[02:32:17 CET] <x__> mkv is also lossless, right?
[02:32:36 CET] <furq> it'll be the same as mp4
[02:32:49 CET] <x__> furq, still grainy :/
[02:33:01 CET] <furq> shrug
[02:33:05 CET] <furq> crf 0 should look identical to the input
[02:33:14 CET] <x__> furq, It isn't though
[02:33:15 CET] <SortaCore> maybe your inputs are grainy
[02:33:20 CET] <x__> SortaCore, nope, just checked
[02:33:29 CET] <x__> Maybe it's the scaling in each input
[02:33:37 CET] <SortaCore> maybe your compy can't keep up with the amount of data?
[02:33:46 CET] <furq> maybe add -sws_flags lanczos
[02:33:56 CET] <x__> SortaCore, Then VSDC should also produce crappy input
[02:34:02 CET] <furq> bicubic scaling shouldn't give blocky output though
[02:34:07 CET] <furq> it'd just be a bit soft
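As a sketch (filenames hypothetical), the lanczos scaler furq mentions can be set globally with -sws_flags or per scale filter:
    ffmpeg -i in.mp4 -vf "scale=1920:1080:flags=lanczos" out.mp4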
[02:34:47 CET] <SortaCore> x__: not if it handles it a different way
[02:35:19 CET] <x__> I just checked
[02:35:26 CET] <x__> this time it ran for 60 minutes of data
[02:35:33 CET] <x__> And the output is 36 mb
[02:35:41 CET] <x__> So this suggests there's some heavy compression going on
[02:35:59 CET] <SortaCore> lookahead?
[02:36:18 CET] <furq> that's only about 15 seconds of output isn't it
[02:36:26 CET] <x__> Less than that
[02:36:36 CET] <furq> well yeah that seems reasonable then
[02:37:01 CET] <x__> furq, not if you consider that there's some heavy degradation in image quality
[02:37:05 CET] <furq> pastebin the full command and output up until the status line
[02:37:26 CET] <x__> Ok
[02:38:18 CET] <x__> Running it now with the new flag
[02:41:00 CET] <x__> furq, https://nopaste.xyz/?4ee80afabe1b4f33#gGdZ8fBQkibpJFKvI+mb6/j2UwEoeRGRJ/feQXPtyj0=
[02:41:13 CET] <x__> Windows' prompt can't store everything, so I apologize
[02:43:04 CET] <x__> furq, Anything suspicious?
[02:45:38 CET] <Diag> x__: only the curiously strong flavor of new altoids gum
[02:51:16 CET] <x__> furq, Any clue?
[03:00:57 CET] <x__> Anyone?
[03:30:44 CET] <SortaCore> subnautica livestream time
[03:33:59 CET] <x__> furq, I noted something
[03:34:09 CET] <x__> That I was checking the wrong file :P I was always checking the avi
[03:34:21 CET] <x__> now I'm checking the mkv file I'm generating and I'm getting a blank file
[03:46:04 CET] <c3r1c3-Win> Was the RTMP server removed from FFMPEG?
[03:49:15 CET] <pagenoare> hello. Im struggling with some issue. When I add a silent audio stream to the video via: ffmpeg -y -i input.mp4 -f lavfi -i aevalsrc=0 -c:v libx264 -crf 19 -preset slow -pix_fmt yuv420p -shortest output.mp4 I found something odd. The start: value changes to 0.036281. Why does it happen?
[04:40:38 CET] <SortaCore> anyone have a libopenh264 library build guide?
[06:19:25 CET] <SortaCore> I got libopenh264 built easily enough (just make with some params), but ffmpeg's pkg-config can't find it
[06:19:43 CET] <SortaCore> I did make and make install with correct archs/os params
[06:20:02 CET] <SortaCore> and I can see the pc file in the right place
[08:09:09 CET] <TaZeR> is "winff" a legit gui for ffmpeg?
[10:26:02 CET] <Megabyte> hello
[10:26:09 CET] <Megabyte> I managed to get ffmpeg running
[10:26:24 CET] <Megabyte> but after processing 11 hours of input, it'll freeze my computer
[10:26:32 CET] <Megabyte> I tried /set affinity, but it doesn't work
[10:26:38 CET] <Megabyte> what should I do?
[10:37:08 CET] <Megabyte> bencoh, h4llo
[11:10:36 CET] <androbod> hi guys! Is there any way to see HLS tags during ffmpeg process, if we have HLS on input ?
[13:04:54 CET] <XoXFaby>  I have a problem and I think I can solve it using ffmpeg but I don't know if it's the best idea
[13:04:59 CET] <XoXFaby> So I wanted to ask for input here
[13:05:43 CET] <XoXFaby> I am trying to store frames for a timelapse, but just storing the image alone would use massive amounts of space. So I had this idea to instead continously add the frames at the end of a video file using ffmpeg
[13:06:22 CET] <XoXFaby> since that way I can take advantage of compression based on previous frames
[13:06:35 CET] <XoXFaby> which should massively cut down on size on a landscape timelapse
[13:07:23 CET] <XoXFaby> The process should basically be the same as encoding a video except instead of doing it all in one go I'm adding a frame every second or every 10 seconds
[13:07:34 CET] <XoXFaby> Can I use FFMPEG for this and is this a good idea?
[13:10:14 CET] <kikobyte> Hi BtbN, I'm currently looking into an issue with h264_nvenc, what happens is I'm getting intermittent segmentation fault in avcodec_encode_video2 -> ff_nvenc_encode_frame -> process_output_surface. Any chance you've seen that since August?
[13:10:50 CET] <BtbN> Nope, I'm not aware of any crashes in there.
[13:11:39 CET] <BtbN> Also, runtime resolution changes are not supported at all by ffmpeg.c for hardware formats.
[13:12:02 CET] <BtbN> Encoders do not support changing resolution at runtime, that's why it inserts the scaler.
[13:12:28 CET] <BtbN> The branch you linked is just old, ffmpeg.c was patched to not have that anymore, because it only masks a much deeper issue you have then.
[13:12:37 CET] <kikobyte> BtbN, indeed, when that happens, it is after the graph was re-built, yet the resolution of nvenc is still the same. Still it works for most cases
[13:12:57 CET] <BtbN> it's the same because you just so happen to have a scaler in your custom chain
[13:13:04 CET] <BtbN> ffmpeg.c has no possible way to know that
[13:13:22 CET] <BtbN> And it also causes a bunch of other issues, because the hw_frames_ctx changes, and it not exploding over that is pure luck
[13:13:42 CET] <kikobyte> BtbN, yeah, I'm inserting the scaler exactly for that purpose
[13:15:02 CET] <kikobyte> Could it be somehow related to the number of surfaces? I saw a commit which increases their count
[13:16:06 CET] <BtbN> I doubt it
[13:16:13 CET] <BtbN> Surface count was increased?
[13:16:27 CET] <BtbN> Only thing I remember is decreasing the amount of surfaces in cuvid to the minimum
[13:17:14 CET] <kikobyte> + int nb_surfaces = FFMAX(4, ctx->encode_config.frameIntervalP * 2 * 2);, was nb_surfaces = 0
[13:18:43 CET] <kikobyte> And then for rc_lookahead case it's being maxed against the old value.
[13:20:35 CET] <kikobyte> Even though the commit (https://github.com/FFmpeg/FFmpeg/commit/8de3458a07376b0a96772e586b6dba5e93432f52#diff-554d491640d8a8d765b175e1a13401bf) is indeed about reducing the number of surfaces
[13:26:36 CET] <kikobyte> BtbN, what happens when the new resolution comes through the decoder and the gpu scale filter is being re-initialized (yet still producing the same output dimensions)? Does it affect the nvenc encoder in any way? Can it be so that the encoder is being reconfigured or something like that?
[13:27:05 CET] <BtbN> In the worst case, it creates a new CUDA context, and nvenc won't be able to read the frames anymore
[13:27:21 CET] <BtbN> a crash from the driver is what I'd expect to happen then
[13:27:40 CET] <kikobyte> Who creates the context?
[13:28:05 CET] <BtbN> Ideally the external API user creates it and supplies it using a hw_device_context
[13:28:20 CET] <BtbN> but ffmpeg.c is a bit special and the filter-re-init-code is not made with hw accel in mind at all
[13:28:33 CET] <JEEB> ffmpeg.c is a bit "special" indeed
[13:29:11 CET] <kikobyte> I remember I saw a diagnostic message that a cuvid context is being re-initialized - happens when the decoder gets new sps/pps with different dimensions
[13:29:12 CET] <BtbN> if you want to be safe, put a hwdownload,format=nv12 after your scaler. That will feed nvenc software-frames and the ffmpeg_filters.c code will be able to work correctly
[13:29:20 CET] <kikobyte> through cuvid_init I guess
[13:29:40 CET] <kikobyte> I tried doing that, but I got freezes
[13:29:59 CET] <BtbN> you should also try to use nvdec instead of cuvid, it should behave more sane as it's a native hwaccel
[13:30:17 CET] <utack> XoXFaby if i were you i'd buffer something like 50 images lossless, encode them in a block, and then append them to a mkv
[13:30:26 CET] <kikobyte> ...which I presume came later than 3.3?
[13:30:35 CET] <utack> not saying this is THE solution, but it is a simple one
[13:30:38 CET] <BtbN> It's in master
[13:30:48 CET] <kikobyte> Yeah, expected that
[13:31:28 CET] <XoXFaby> utack: that sounds doable.
[13:31:33 CET] <XoXFaby> I'm doing this on a raspberry pi
[13:32:01 CET] <XoXFaby> right now I'm taking the pictures as jpegs
[13:32:09 CET] <Megabyte> Hello
[13:32:16 CET] <Megabyte> I'm having a SERIOUS problem with ffmpeg
[13:32:30 CET] <Megabyte> I can't get it to transcode very large amounts of data
[13:32:30 CET] <utack> i guess you can buffer between 20-50 as bmp XoXFaby and then use x264?
[13:32:44 CET] <Megabyte> after around 11 hours of transcoded data, it crashes my whole system with it
[13:32:46 CET] <Megabyte> can you help me?
[13:34:17 CET] <XoXFaby> hmm
[13:34:49 CET] <kikobyte> BtbN, your suggestion to put frames through the hwdownload-format slowed fps roughly from 170 to 60, kind of expected given that pci-e gets extra load...
[13:35:20 CET] <BtbN> nvidia is usually quite fast with that
[13:35:28 CET] <XoXFaby> utack: I'm also concerned with the ability to later grab different portions of the data at different "framerates"
[13:35:37 CET] <BtbN> unless you are operating at crazy loads, you shouldn't notice any impact
[13:35:50 CET] <XoXFaby> i.e. if I record a day of video like this I wanna be able to grab 10 minutes worth of frames but also 6 hours worth of frames at 30 fps
[13:36:28 CET] <kikobyte> Not as fast as is usually required. Half a year ago I was writing a full gpu pipeline in DS with an nv decoder / my own formatters / nv encoder, and keeping frames on the gpu at all times was the only acceptable solution in terms of performance
[13:36:46 CET] <XoXFaby> I assume it should be possible to just extra the frames at the times I need or similar
[13:36:54 CET] <XoXFaby> extract*
[13:37:28 CET] <BtbN> If you want to have reliable format changes at runtime, it'll be your only option. Patching ffmpeg.c to support it will be hard to absolutely impossible.
[13:38:16 CET] <kikobyte> BtbN, I'm also not that sure that the problem is on the encoder's input, because the segfault happens in process_output_surface routine
[13:38:27 CET] <JEEB> it's scary how far you can get with ffmpeg.c, and then hit a wall where you need to make your own API client :P
[13:38:42 CET] <utack> XoXFaby i am not sure how well frame exact seeking works, in the worst case you have to decode all frames from the beginning of the stream again
[13:39:06 CET] <XoXFaby> Depending on the codec/container it should only be from the last keyframe, no?
[13:39:18 CET] <BtbN> process_output_surface is a function in nvenc.c
[13:39:40 CET] <BtbN> it extracts the bitstream from nvenc
[13:39:41 CET] <utack> as i said, i am not sure about frame exact seeking
[13:39:54 CET] <kikobyte> BtbN, yes, and I think that something might be broken with ctx->output_surface_queue
[13:40:01 CET] <BtbN> unlikely
[13:40:13 CET] <BtbN> does it crash without format changes?
[13:40:37 CET] <kikobyte> Encoder in particular - not that I'm aware of
[13:40:47 CET] <BtbN> Do you have a full backtrace?
[13:40:51 CET] <kikobyte> but the format itself doesn't change
[13:41:08 CET] <XoXFaby> The idea was that by recording a frame every second, I should be able to go all the way from grabbing 30 frames from 30 seconds to make a 1s clip or skipping frames in between to make 30fps clips that cover longer times. i.e. grabbing a frame every minute to make a 1s 30fps video from 30 minutes of data
[13:41:09 CET] <BtbN> the hw_frames_ctx will likely change
[13:41:26 CET] <kikobyte> BtbN, https://pastebin.com/4veqbB1S
[13:42:34 CET] <BtbN> Why does it call from nvenc to libnvcuvid?
[13:42:59 CET] <kikobyte> If I only knew. AFAIK, that's the decoder lib
[13:43:12 CET] <BtbN> it is
[13:43:41 CET] <Megabyte> Can anyone help me with ffmpeg?
[13:44:01 CET] <BtbN> http://git.videolan.org/?p=ffmpeg.git;a=blob;f=libavcodec/nvenc.c;h=e4f6f0f92765351d46e3266697ac5425330c8dec;hb=refs/heads/release/3.3#l1668 is what is crashing. Looks like something is horribly messed up there.
[13:45:01 CET] <kikobyte> UnmapInputResource?
[13:45:14 CET] <BtbN> I can't reasonably support any release, so my only advice is to try git master. There were a lot of changes to nvenc and cuvid since 3.3, so it might be fixed. Or it might be ffmpeg.c causing corruption because it's operating in unexpected ways.
[13:46:19 CET] <BtbN> What might happen with a format change is that the new filter pipeline creates a new CUDA context and destroys the old one.
[13:46:24 CET] <kikobyte> Totally understandable, I've been making the same "use newer version first" statements for years for our SDK =)
[13:46:52 CET] <BtbN> So the context the old frames are referencing might be gone, but nvenc still has frames in its pipeline using it.
[13:46:57 CET] <BtbN> So when it tries to free them, it explodes.
[13:47:06 CET] <XoXFaby> utack: I'm pretty sure I can actually extract the video I want very easily from using ffmpeg, just with -ss -to and setpts filtering to speed up the footage
[13:47:07 CET] <kikobyte> Viable guess
[13:47:17 CET] <XoXFaby> since ffmpeg will skip frames exactly how I need it to
[13:48:05 CET] <XoXFaby> now I just need to figure out what the best video format is that I can append to like I need
[13:50:31 CET] <utack> XoXFaby but the "ss" argument decodes all frames from beginning to the point where you seek
[13:52:13 CET] <utack> appending files works very easily in a mkv file. https://mkvtoolnix.download/doc/mkvmerge.html
[13:52:35 CET] <utack> unless you wanted to figure out how to keep ffmpeg open, stalling while waiting for the next frame, making one continuous file
[13:52:52 CET] <XoXFaby> utack: ". The input will be parsed using keyframes, which is very fast"
[13:53:19 CET] <XoXFaby> sounded like it wouldn't be an issue
[13:53:41 CET] <XoXFaby> as for the frame thing, I don't care how I do it, I just want to get it done
[13:54:13 CET] <utack> where do you get them from? can you directly use a pipe to send them to ffmpeg?
[13:54:35 CET] <XoXFaby> where I'm getting the images from?
[14:00:08 CET] <utack> i have not really found anything to wait for new images...i mean it is solvable i guess, but blocks seem easier, with minimal loss of efficiency
[14:00:30 CET] <XoXFaby> so you're saying to save x frames, encode them into a block and then add them?
[14:00:45 CET] <XoXFaby> presumably append the block to an mkv file
[14:01:07 CET] <XoXFaby> how much overhead is there when saving a video file
[14:01:09 CET] <XoXFaby> i.e
[14:01:32 CET] <XoXFaby> the difference between 100 30-frame files and one 3000-frame file
[14:02:26 CET] <utack> x264 has a default of about 250 keyframe interval, so beyond that would not make much sense
[14:02:40 CET] <utack> but using half of it should barely make a difference
[14:03:04 CET] <XoXFaby> I was thinking of the difference between appending them to one video file or just saving the blocks and then combining them as needed
[14:03:36 CET] <XoXFaby> but I guess it shouldn't matter and one video file will be easier to deal with
[14:04:40 CET] <utack> yeah or just save the blocks, that is still WAY more efficient than single jpeg files
[14:05:27 CET] <XoXFaby> yeah of course
[14:06:32 CET] <XoXFaby> I was just hoping to be able to add frames one at a time so I could grab a video going right up to the latest frame at any moment
[14:07:30 CET] <XoXFaby> I could prematurely pack and append the block when a video is requested
[14:09:21 CET] <XoXFaby> You're saying I should pack blocks for 250 frames for x264?
[14:13:24 CET] <utack> that is the default keyframe interval, i'd use less if these images are large, or seeking might take a while
[14:14:50 CET] <XoXFaby> I was gonna go with 30 frame blocks so each block would be one keyframe interval
[14:14:58 CET] <utack> sounds reasonable
[14:15:03 CET] <XoXFaby> at least I hope this will work as I imagine
[14:17:59 CET] <Megabyte> XoXFaby, can you help me?
[14:27:11 CET] <kepstin> note that x264 by default doesn't have a fixed keyframe interval - the '250' value is a max
[14:27:56 CET] <kepstin> (if you really want it fixed, there's an option you can specify via -x264-params to do that)
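A sketch of what encoding one fixed-GOP chunk could look like (filenames, rate, and GOP length are hypothetical; keyint, min-keyint and scenecut are the relevant x264 parameters):
    ffmpeg -framerate 30 -i frame_%05d.jpg -c:v libx264 -x264-params "keyint=30:min-keyint=30:scenecut=0" -pix_fmt yuv420p chunk.mkv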
[14:35:54 CET] <Megabyte> I tried running this command
[14:35:55 CET] <Megabyte> C:\Users\x\Videos>C:\Software\ffmpeg-static\bin\ffmpeg.exe -i saída_parte_1.mp4 -vf scale=1920:1080 -filter:v setpts=PTS/144 -framerate 60 -cbr 0 output1.mp4
[14:36:10 CET] <Megabyte> But ffmpeg will give me a video with the sped up footage and then empty footage
[14:36:19 CET] <Megabyte> how do I trim the data to the sped up footage only?
[14:38:52 CET] <kepstin> Megabyte: is there an audio track present? If so, the length will be set by the length of the audio track
[14:39:05 CET] <Megabyte> kepstin, Oh, I see. I just removed it.
[14:39:31 CET] <kepstin> Megabyte: easiest way to remove the audio is to just use the "-an" option
[14:39:38 CET] <Megabyte> kepstin, also, it's not really scaling
[14:39:41 CET] <Megabyte> I checked the output
[14:39:45 CET] <Megabyte> and it's not 1920x1080
[14:39:59 CET] <Megabyte> I'm currently using?
[14:40:00 CET] <Megabyte> C:\Users\x\Videos>C:\Software\ffmpeg-static\bin\ffmpeg.exe -i saída_parte_1.mp4 -vf scale=1920:1080 -filter:v setpts=PTS/144 -framerate 60 -cbr 0 output1.mp4
[14:40:32 CET] <Megabyte> sec
[14:40:32 CET] <kepstin> Megabyte: you have two separate filter options: "-vf scale=1920:1080" and "-filter:v setpts=PTS/144"
[14:40:42 CET] <kepstin> Megabyte: note that -vf and -filter:v are the same thing
[14:40:51 CET] <kepstin> Megabyte: and so the second one overwrites the first
[14:41:02 CET] <Megabyte> C:\Users\x\Videos>C:\Software\ffmpeg-static\bin\ffmpeg.exe -i saída_parte_1.mp4 -vf scale=1920:1080 -filter:v setpts=PTS/144 -framerate 60 -cbr 0 output1.mp4
[14:41:09 CET] <Megabyte> uh...
[14:41:37 CET] <kepstin> Megabyte: remember from yesterday: to send the output from one filter into the input of the next filter, put a comma "," between them
[14:42:26 CET] <Megabyte> kepstin, ok, like this?
[14:42:27 CET] <Megabyte> C:\Users\x\Videos>C:\Software\ffmpeg-static\bin\ffmpeg.exe -i saída_parte_1.mp4 -vf scale=1920:1080,setpts=PTS/144 -an output1.mp4
[14:43:06 CET] <kepstin> Megabyte: exactly - although you should also set the output framerate ("-r 60")
[14:43:47 CET] <Megabyte> Ok, like this?
[14:43:47 CET] <Megabyte> C:\Software\ffmpeg-static\bin\ffmpeg.exe -i saída_parte_1.mp4 -vf scale=1920:1080,setpts=PTS/144 -an -r 60 output1.mp4
[14:44:00 CET] <kepstin> Megabyte: yes.
[14:48:39 CET] <Megabyte> kepstin, I'll probably use a script and speed up every video individually
[14:48:43 CET] <Megabyte> it's safer that way
[14:49:07 CET] <Megabyte> I tried to concatenate all my videos into four to make it easier to handle them
[14:49:23 CET] <Megabyte> but ffmpeg is making my computer crash
[14:49:52 CET] <kepstin> ffmpeg shouldn't be making your computer crash... if it is, you probably have a hardware problem or cooling problem.
[14:50:03 CET] <Megabyte> kepstin, I suspect it's a memory error
[14:50:10 CET] <Megabyte> that is, it's crashing with VERY LONG footage
[14:50:19 CET] <Megabyte> it's 48 hours of footage, after all
[14:50:45 CET] <Megabyte> It always crashes when processing around 11 hours
[14:51:08 CET] <Megabyte> I also checked with process lasso, and when my computer starts to freeze is exactly when memory usage hits 100%
[14:52:25 CET] <Megabyte> Now that I stripped the audio it's going much slower
[14:52:31 CET] <Megabyte> mmm...
[14:52:42 CET] <XoXFaby> kepstin: does that mean it accepts varying keyframe intervals?
[14:52:54 CET] <kepstin> XoXFaby: it just might be that the calculation of speed works differently without audio
[14:52:56 CET] <XoXFaby> I'm not sure how all that plays in with mkv container
[14:53:31 CET] <kepstin> XoXFaby: not sure what you mean. x264 allows and can put keyframes wherever it likes, and mkv's index doesn't care how the keyframes are spaced.
[14:53:55 CET] <rk> i am using libavformat to mux a vp8 rtp stream into a webm file. When i see the info on the webm file obtained using mkv tool, i see the duration is too high. And, inside the track info, the frame rate is considered as 1000fps, while i set the time_base with frame rate of 25 fps.
[14:53:56 CET] <XoXFaby> well I was planning to combine multiple short x264 clips in an mkv container
[14:54:07 CET] <kepstin> XoXFaby: yep, i'd expect that to work fine.
[14:54:08 CET] <XoXFaby> where basically each file would one be one keyframe "block"
[14:54:51 CET] <kepstin> XoXFaby: depending on your settings when encoding with x264, you might end up having multiple keyframes in a single encoded chunk of video. If you want fixed-length gops, there's an extra option for that.
[14:55:15 CET] <XoXFaby> do you have context on what I'm trying to do?
[14:55:17 CET] <XoXFaby> might help with understanding
[14:56:38 CET] Action: kepstin scrolls up a bit
[14:57:08 CET] <kepstin> if you want the ability to do a quickly generated time-lapse, then using a fixed gop that's the length of the frame spacing would make sense
[14:57:35 CET] <XoXFaby> right now the plan is to take 1 jpeg every second
[14:57:48 CET] <XoXFaby> and then I want to be able to make timelapses out of that on demand, so the idea was to store them as video files
[14:58:11 CET] <XoXFaby> and now I'm also thinking about how to handle it if I there is missing data, missing frames or missing video blocks
[14:58:28 CET] <kepstin> you basically need to figure out "what's the smallest step in frames that I'll want to use when making the timelapse?"
[14:58:48 CET] <XoXFaby> I mean ideally I want to be able to be second specific
[14:59:03 CET] <XoXFaby> i.e. each frame
[14:59:19 CET] <kepstin> well, each frame you just get by decoding the whole video
[14:59:43 CET] <kepstin> the interesting thing is optimizing how to skip frames when doing a time lapse
[15:00:06 CET] <XoXFaby> the problem is I can't just store the frames as they are, cause that takes way too much space, so I want to take advantage of keyframes
[15:00:11 CET] <XoXFaby> therefore storing it as video
[15:00:37 CET] <XoXFaby> which should still let me seek to any frame in between
[15:00:51 CET] <XoXFaby> which would be handled by ffmpeg when I -ss
[15:01:00 CET] <kepstin> Right. assuming you're storing it in a container with an index, yes; it'll seek to the nearest keyframe then decode to the exact frame you want
[15:01:27 CET] <XoXFaby> exactly
[15:02:02 CET] <XoXFaby> the idea now was to use mkv and either append the 1s clip every 30 seconds or keep the clips separated and combine them on demand
[15:02:11 CET] <kepstin> note that the index will only be written after the complete file is done/closed, so you'll have to figure out how to handle that
[15:02:50 CET] <kepstin> writing to X second chunks and appending them to an mkv file could work, but will be quite disk intensive
[15:03:04 CET] <XoXFaby> you mean io wise/
[15:03:35 CET] <kepstin> it might make sense to do something like write the chunks out as separate files, then combine them into e.g. a file per hour or per day at regular intervals.
[15:04:32 CET] <XoXFaby> I have an idea
[15:04:44 CET] <XoXFaby> My biggest worry now was about missing frames if I have to restart my scripts etc.
[15:04:52 CET] <XoXFaby> or move the device
[15:04:54 CET] <XoXFaby> all that
[15:05:05 CET] <XoXFaby> maybe I should just pack all the blocks together whenever a gap happens
[15:06:13 CET] <XoXFaby> hell I could even automate uploading to youtube whenever that's done
[15:06:43 CET] <XoXFaby> do the same if there is no gap after a day
[15:07:41 CET] <XoXFaby> one day of 30fps 1s footage would be 2880 clips, is that feasible to combine into one file?
[15:12:28 CET] <kepstin> XoXFaby: I don't see why not.
[15:13:13 CET] <kepstin> (if you wanted to do it with ffmpeg, you'd write a .ffconcat file to give it the list of input files, and then write to a single output with -c:v copy)
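A minimal sketch of that approach (filenames hypothetical). The list file, chunks.ffconcat:
    ffconcat version 1.0
    file chunk_0001.mkv
    file chunk_0002.mkv
and the stream copy into one output:
    ffmpeg -f concat -i chunks.ffconcat -c:v copy day.mkv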
[15:36:27 CET] <saml> good morning
[15:38:57 CET] <XoXFaby> kepstin: do you think that's how I should do it?
[15:39:17 CET] <XoXFaby> I just want to do it the best way, idk if ffmpeg or mkvmerge will be better
[15:39:38 CET] <kepstin> XoXFaby: either should work, try both and see which you like better?
[15:39:58 CET] <XoXFaby> what does ffmpeg actually do when copying like that?
[15:40:11 CET] <XoXFaby> mkvmerge presumably would just add it to the container which should be very fast
[15:40:21 CET] <kepstin> it reads the video frames from each file, then concatenates them to a single stream, then muxes them to a new file
[15:40:40 CET] <kepstin> no transcoding needed, normally runs at disk speed
[15:40:40 CET] <XoXFaby> I would assume that would take longer
[15:40:53 CET] <XoXFaby> disk in this case is an sdcard
[15:41:14 CET] <kepstin> i'd expect both to take about the same amount of time, io limited.
[15:41:45 CET] <kepstin> if you really want to avoid extra writes on the sd card, I wouldn't bother - just leave the individual chunks as separate files
[15:41:45 CET] <XoXFaby>  I guess I'll just collect an hour of clips or something and try
[15:41:53 CET] <XoXFaby> currently writing that part so I'll deal with it when I get there
[15:42:30 CET] <XoXFaby> all I care about is speed, cause I will probably need to generate the file on demand
[15:44:29 CET] <calmoo> Hi, I'm trying to mute a portion of audio in a video i'm editing, using this command:
[15:44:30 CET] <calmoo> ffmpeg -i 13125.mp4 -af volume=enable='between(t,2215,2217):volume=0 -c:v copy 13125muted.mp4
[15:44:39 CET] <calmoo> on OSX
[15:44:54 CET] <calmoo> but when I execute it, nothing happens, terminal just produces a ">" character on a newline
[15:52:46 CET] <kepstin> calmoo: you have mismatched quotes
[15:53:11 CET] <kepstin> calmoo: you have fancy unicode quotes, when they need to be plain ascii " and '
[15:53:56 CET] <kepstin> (also you have quotes inside quotes, which will cause problems with ffmpeg's filter parser
[15:53:58 CET] <kepstin> )
[15:54:47 CET] <calmoo> ok
[15:54:48 CET] <calmoo> ffmpeg -i 13125.mp4 -af "volume=enable=between(t,2215,2217):volume=0" -c:v copy 13125muted.mp4
[15:54:49 CET] <calmoo> when i do that
[15:54:57 CET] <calmoo> [volume @ 0x7fc40e700d80] [Eval @ 0x7fff5ae0b710] Missing ')' or too many args in 'between(t' [volume @ 0x7fc40e700d80] Error when evaluating the expression 'between(t' for enable [AVFilterGraph @ 0x7fc40e700960] Error initializing filter 'volume' with args 'enable=between(t'
[15:54:59 CET] <calmoo> i get that error
[15:55:51 CET] <kepstin> calmoo: the , inside the enable expression has to be escaped with \. This gets complicated because \ is also a special character inside double-quotes
[15:56:17 CET] <kepstin> calmoo: I recommend doing this: -af 'volume=enable=between(t\,2215\,2217):volume=0'
[15:56:43 CET] <kepstin> the single quotes let you use a \ inside the quotes.
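With that quoting, the whole command becomes (keeping the filenames from above); the audio is re-encoded so the filter can apply, while the video is still stream-copied:
    ffmpeg -i 13125.mp4 -af 'volume=enable=between(t\,2215\,2217):volume=0' -c:v copy 13125muted.mp4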
[15:56:59 CET] <calmoo> thank you so much
[15:57:03 CET] <calmoo> i've been trying to work this out for hours
[15:57:06 CET] <calmoo> thanks!! it works
[16:23:03 CET] <lyncher> hi. can you suggest a tool to visualize MPEG2 bitstream (GOP, I/P/B frames...)?
[17:55:17 CET] <fsphil> I've a program that needs video at a fixed resolution and format, I've been using libswscaler but realised I could probably have used AVFilter. Is there a preferred method?
[17:56:12 CET] <kepstin> fsphil: if scaling is all you need, using libswscale directly might be a bit simpler.
[17:56:41 CET] <JEEB> depends on if you need AVFrames or not
[17:56:42 CET] <fsphil> yeah just scaling. the only oddity might be handling interlaced video, my output is interlaced
[17:56:57 CET] <JEEB> AVFilter takes in AVFrames, and outputs AVFrames
[17:57:02 CET] <kepstin> fsphil: remember that the "scale" filter is just a wrapper around libswscale after all :)
[17:57:08 CET] <JEEB> yup
[17:57:09 CET] <fsphil> yeah
[17:57:16 CET] <JEEB> (at least it's a wrapper that is not your problem to write)
[17:57:26 CET] <fsphil> it was the source for the scale filter that I found when looking for an interlaced scale example
[17:57:35 CET] <fsphil> made me wonder if I should just be using that instead
[17:58:06 CET] <JEEB> personally at this point I would only utilize swscale if avfilter makes your life miserable
[17:58:13 CET] <JEEB> otherwise I see the AVFrame support as a really good thing
[17:58:27 CET] <kepstin> well, if your swscale usage is starting to get complicated, and in particular if you might want to do other stuff with frames too at some point, avfilter seems like a good idea.
[17:58:30 CET] <JEEB> (granted, avfilter can be "fun" to create the filter chains in)
[17:58:41 CET] <fsphil> at some point I'd like to provide an option for users to add their own filters, so I'll probably be adding avfilter anyway
[17:59:06 CET] <JEEB> but yea, personally for me the biggest thing is: AVFrames
[17:59:20 CET] <JEEB> because that's what you get from decoders and encoders are fed them
[18:01:05 CET] <fsphil> thanks all. I'll have a play with avfilter
[18:02:43 CET] <lyncher> I'm trying to add MPEG2_START_USER_DATA after a MPEG2_START_PICTURE.... do I need to add extra MPEG2_START_SEQUENCE_HEADER/MPEG2_START_EXTENSION before user data?
[18:23:57 CET] <storrgie> How can I tell if my ffmpeg has decklink compiled/enabled? I'm using the vanilla ffmpeg on archlinux
[18:25:05 CET] <lyncher> storrgie: run ffmpeg command and check if you have --enable-decklink in the configuration line
[18:25:05 CET] <sfan5> $ ffmpeg
[18:25:12 CET] <sfan5> look for the above
[18:25:14 CET] <sfan5> damn someone was faster
[18:25:30 CET] <lyncher> that one was easy :)
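One hedged way to check from a shell, since the configure flags are also printed by -buildconf:
    ffmpeg -hide_banner -buildconf 2>&1 | grep -i decklink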
[18:33:12 CET] <TAFB_WERK> I will pay money to anyone that can get this 4k ip camera streaming smoothly and reliably to youtube: ffmpeg -rtsp_transport tcp -i rtsp:/admin:admin1234@skyviewe.com:65314/0
[18:37:53 CET] <BtbN> It's probably more about having a beast CPU than anything else.
[18:38:07 CET] <BtbN> And pretty impressive internet upstream.
[18:38:25 CET] <TAFB_WERK> transcoding only, don't need a good cpu
[18:38:36 CET] <BtbN> ...?
[18:38:40 CET] <BtbN> yes you do
[18:38:51 CET] <TAFB_WERK> when I run that stream it uses 3% of my cpu, but ok
[18:39:00 CET] <BtbN> you're not transcoding then
[18:39:12 CET] <TAFB_WERK> -c copy
[18:39:22 CET] <TAFB_WERK> mp4 to flv
[18:39:22 CET] <BtbN> that's the exact opposite of transcoding
[18:39:34 CET] <TAFB_WERK> i see
[18:39:37 CET] <TAFB_WERK> it's not encoding
[18:39:57 CET] <BtbN> Usually those IP webcams produce some massive and unoptimized streams.
[18:40:07 CET] <BtbN> So you'll need massive internet upstream
[18:40:24 CET] <TAFB_WERK> 4mbps is the ip stream, 50mbps is my upstream connection
[18:40:45 CET] <BtbN> 4mbps for a 4K stream? I have my doubts about that.
[18:41:01 CET] <TAFB_WERK> I can set it whatever I want, for testing I have it at 4mbps, it usually runs as 12mbps
[18:41:27 CET] <BtbN> For those hw encoders I'd go for at least 25Mbps if you want anything half decent
[18:41:38 CET] <BtbN> Does it output h264 at least?
[19:01:30 CET] <TAFB_WERK> yep, h264 or h265 selectable
[19:02:02 CET] <TAFB_WERK> can't change the res down to 1080p for re-encoding it :(
[19:07:04 CET] <BtbN> I'm not even sure if YouTube live takes 4K
[19:07:58 CET] <TAFB_WERK> it does, I've been streaming to it, just super choppy, pixelation, and I have to restart ffmpeg all the time.
[19:09:18 CET] <BtbN> is the stream fine if you play it locally?
[19:09:37 CET] <BtbN> And if you stream 4K at 4mbps, pixelation is exactly what you will get
[19:09:38 CET] <TAFB_WERK> only in internet explorer (activex), if I play it with ffplay it's a complete mess, same with vlc player
[19:09:59 CET] <TAFB_WERK> that link I posted above, you can open it yourself to see
[19:11:23 CET] <BtbN> You definitely can't stream HEVC to YouTube. And for 4K h264 you'll need something like 25mbps or more
[19:11:28 CET] <TAFB_WERK> I was given this command, which streams nice and smooth, but after an hour or so ffmpeg starts using 100% of a cpu core and the stream goes down. https://pastebin.com/YNP7nmmR
[19:12:11 CET] <BtbN> Why is it split in two?
[19:12:23 CET] <TAFB_WERK> only way to make it smooth, don't ask me
[19:12:38 CET] <BtbN> that sounds more like something is wrong with your source
[19:13:18 CET] <TAFB_WERK> if I'm staring at the cpu usage and I see ffmpeg spike I right away end the task and restart ffmpeg and it's fine. So if it is a problem with the source ffmpeg needs to recover better
[19:13:40 CET] <BtbN> Depending on how broken it is, there is nothing to recover from
[19:14:09 CET] <TAFB_WERK> i tried to find a software where "if a program is using xx% cpu end it automatically" but couldn't find it, that would be a partial solution to the problem.
[19:14:39 CET] <BtbN> https://xkcd.com/1172/
[19:16:52 CET] <alexpigment> TAFB_WERK i think you just described modern antivirus software :
[19:16:55 CET] <alexpigment> :)
[19:17:16 CET] <alexpigment> norton at least warns about it - i'm sure there's an action to end on it
[19:17:30 CET] <alexpigment> anyway, dumb logic for dumb users i guess...
[19:28:21 CET] <kerio> BtbN: the youtube recommendations say 13-34mbps for 2160p30
[19:28:34 CET] <kerio> 20-51 for 2160p60
[19:28:41 CET] <BtbN> seems about right
[19:28:53 CET] <BtbN> highly depends on the content obviously, that's why it's such a huge range
[19:29:01 CET] <TAFB_WERK> 2160p30 so no problems
[19:29:29 CET] <TAFB_WERK> 4mbps looks crisp and clean because the image is mostly static
[19:51:14 CET] <Johnx_> I am testing udp stream for errors using ffmpeg, and I have noticed that at times common header connection error and decoding errors are just not present. for example: `[h264 @ 0x33c1280] SPS unavailable in decode_picture_timing [h264 @ 0x33c1280] non-existing PPS 0 referenced [h264 @ 0x33c1280] decode_slice_header error`
[19:51:34 CET] <Johnx_> I am trying to work out why this is happening
[19:52:20 CET] <Johnx_> the ffmpeg log indicates that it did process x amount of audio and y amount of video
[19:52:42 CET] <Johnx_> just why sometimes no errors are printed out
[20:13:19 CET] <kevc45> How can I embed cover art into a lossless using FFmpeg from the command line? Here's what I'm using: https://hastebin.com/ofadiyiqew.sh
[20:18:28 CET] <relaxed> kevc45: can't see your pastebin, but I don't think our flac demuxer supports cover art
[20:18:43 CET] <alexpigment> kevc45, you're trying to put cover art in a wav file
[20:18:45 CET] <alexpigment> that's not supported
[20:18:45 CET] <relaxed> er, flac muxer
[20:19:32 CET] <alexpigment> when it says "wave files have exactly one stream", it means that no other streams (or pretty much anything else) can be put into the file
[20:19:35 CET] <alexpigment> you need a different file format
[20:25:12 CET] <zerodefect> I'm looking to use the C-API to control the 'overlay' libavfilter.  It it possible to explicitly set the x,y position of the overlay on a frame-by-frame basis (similar to what is done via cmd line). I see there is an 'eval' => 'frame' mode by I don't quite understand how this works programmatically.
[20:25:22 CET] <zerodefect> *Is it possible
[20:27:04 CET] <kepstin> zerodefect: just like on the command line, you can set the parameters of the filters to strings containing expressions that will be evaluated.
[20:28:40 CET] <kepstin> zerodefect: however, the overlay filter also supports receiving commands while it's running to dynamically change parameter values.
[20:28:51 CET] <kepstin> which can be done programmatically from your code.
[20:29:55 CET] <zerodefect> kepstin: Understood. For all the filters I've used thus far, I've set the properties on initialisation when calling 'avfilter_graph_create_filter()'. I'm guessing that I need to somehow update the properties?
[20:30:20 CET] <kepstin> zerodefect: if you can just set the properties to an expression when you start and that's good enough, do that.
[20:32:11 CET] <zerodefect> kepstin: That would suffice for me for the most part, but admittedly I am looking to do some crazy positioning
[20:32:17 CET] <kepstin> if you want to do the calculations in your code instead of writing an expression that runs in the overlay filter, look into this function: https://www.ffmpeg.org/doxygen/2.3/group__lavfi.html#gaaad7850fb5fe26d35e5d371ca75b79e1
[20:33:19 CET] <kepstin> I believe that the overlay filter takes the commands "x" and "y", and the arg should be the expression that you want to assign.
[20:33:56 CET] <kepstin> (it can be just a plain number or a full expression)
[20:34:49 CET] <zerodefect> Terrific. Thanks :). I'll see how I get on.
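As a sketch of the expression route (filenames and the motion expression are hypothetical): eval=frame makes overlay re-evaluate x/y on every frame, the same strings can be set when the filter is created from the C API, and the "x"/"y" commands can later override them via avfilter_graph_send_command.
    ffmpeg -i base.mp4 -i logo.png -filter_complex "[0:v][1:v]overlay=x=(W-w)*(0.5+0.5*sin(t)):y=H-h-20:eval=frame" out.mp4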
[20:36:04 CET] <zerodefect> Out of curiosity, the way that FFmpeg can chain overlays...is that inefficient? Would it be more efficient to handle multiple overlays in a single pass?
[20:36:29 CET] <kepstin> zerodefect: ah, filters that take commands usually have the commands documented in the ffmpeg-filters doc, e.g. https://www.ffmpeg.org/ffmpeg-filters.html#overlay-1
[20:37:06 CET] <zerodefect> kepstin: Yeah, have the bookmarked.
[20:37:36 CET] <kepstin> zerodefect: if the overlays have alpha-blending I think you have to do the full calculations anyways, so it wouldn't make much difference. But for opaque overlays, maybe?
[20:39:10 CET] <zerodefect> kepstin: That makes sense
[20:40:15 CET] <kepstin> (this is why for people tiling a bunch of videos, we recommend using hstack/vstack rather than overlaying them on top of each other)
[20:41:29 CET] <nicolas17> does anyone know of a way to do real-time OCR on video to find if a specific text string is visible? it's for screen capturing so I think it should be easier than eg. a camera looking at printed text
[20:42:10 CET] <kevc45> Can I at least embed any metadata to a .wav? Here's what I've got, and it does nothing: https://hastebin.com/loretadojo.bat
[20:43:32 CET] <zerodefect> kepstin: Interesting. I wasn't familiar with those filters.
[20:43:38 CET] <kepstin> kevc45: yeah, you can put some text metadata into wav files, but very few programs know how to do anything with it. You'd really be better off using flac or something.
[20:43:50 CET] <kepstin> zerodefect: they're kinda new, relatively speaking :)
[20:46:27 CET] <zerodefect> kepstin: If you were going to try create a multiview of say 4 inputs (2x2 view)...which would be the best filter in that case?
[20:46:45 CET] <kepstin> two hstacks and a vstack or two vstacks and an hstack.
[20:47:36 CET] Action: kepstin normally uses hstacks to make the rows, and a vstack to stack the rows together.
[20:49:47 CET] <zerodefect> kepstin: Ok. That's what I thought.
[20:50:45 CET] <nicolas17> I guess I could just make bitmaps with the text I want to detect and then find the bitmap, rather than trying to do general OCR
[21:04:25 CET] <FishPencil> Can FFmpeg correct the contrast of a video? I have some media that's showing black at R: 16, G: 16, B: 16, when it should obviously be 0s
[21:05:01 CET] <kepstin> FishPencil: that sounds like an incorrect conversion between limited/full range rather than a contrast issue
[21:05:38 CET] <FishPencil> kepstin: Possibly, but it's not the player, the media is that way
[21:07:31 CET] <zerodefect> FishPencil: You could still convert it from limited up to full range.
[21:07:33 CET] <kepstin> the standard storage for video is limited range. if it's correctly flagged in the file, then the player should be correcting it when displaying on a pc screen
[21:08:03 CET] <kepstin> so it sounds like you have a file in limited range (this is normal) which is incorrectly marked as being full range.
[21:08:14 CET] <FishPencil> That sounds right
[21:08:37 CET] <FishPencil> Can FFmpeg fix that?
[21:09:16 CET] <kepstin> hmm. I'm actually not totally sure. I know it can while re-encoding (there are some filters with specific options to use there), but I don't know if you can fix it without re-encoding.
[21:10:32 CET] <FishPencil> I'll be converting it anyway, so if I could fix it during that stage that'd be fine
[21:11:29 CET] <zerodefect> FishPencil: Your other option is to keep it limited range and then ensure the file wrapper is marked limited range?
[21:11:57 CET] <FishPencil> zerodefect: I think that's what kepstin was suggesting
[21:12:23 CET] <FishPencil> I just don't know how to set that in FFmpeg
[21:13:29 CET] <zerodefect> FishPencil: This is the filter you want to use: https://www.ffmpeg.org/ffmpeg-filters.html#colorspace
[21:13:49 CET] <zerodefect> ...if you want to do a conversion. Fundamentally, there are 2 ways to skin this
[21:14:06 CET] <zerodefect> I'm not well-versed with the cmd line tool though
[21:14:46 CET] <FishPencil> I'm happy to keep it limited and tag it properly
[21:16:40 CET] <kepstin> FishPencil: I think doing -vf colorspace=irange=limited:range=limited will override the input video's range and make ffmpeg treat the video as limited range from that point on.
[21:16:58 CET] <kepstin> it won't actually make any changes to the video itself.
[21:18:07 CET] <FishPencil> lemme test
[21:23:59 CET] <FishPencil> I don't think that filter is written right
[21:25:37 CET] <kepstin> hmm. you could try using the scale filter instead. Same basic idea, -vf scale=in_range=pc:out_range=tv
[21:25:51 CET] <kepstin> er, sorry
[21:26:03 CET] <kepstin> -vf scale=in_range=tv:out_range=tv
[21:26:12 CET] <kepstin> the first one I gave will make it worse ;)
[21:28:24 CET] <FishPencil> -vf scale=in_range=pc:out_range=tv puts the color at RGB 30
[21:28:36 CET] <kepstin> like I said, that one will make it worse
[21:29:19 CET] <kepstin> hmm. but it shouldn't actually be at RGB30
[21:29:28 CET] <kepstin> that should still render as 16 on a correct player
[21:29:34 CET] <FishPencil> No change with -vf scale=in_range=tv:out_range=tv, RGB is 16s
[21:29:43 CET] <kepstin> it sounds like your player is getting it wrong
[21:30:15 CET] <FishPencil> ffplay handles it the same
[21:30:17 CET] <kepstin> FishPencil: I'd appreciate a pastebin of the full ffmpeg output from that last command.
[21:30:30 CET] <kepstin> ffplay is not a good reference player for colour stuff. Try mpv
[21:30:40 CET] <kepstin> that said, I think it gets limited/full range right.
[21:34:59 CET] <FishPencil> Taking a screenshot with MPV results in the same, 16s
[21:35:20 CET] <kepstin> FishPencil: alright. Can you get me the ffmpeg output from that command, please?
[21:35:37 CET] <kepstin> I want to check what it thinks the pixel formats and stuff are.
[21:37:26 CET] <FishPencil> kepstin: do you want a specific loglevel?
[21:37:35 CET] <kepstin> nah, just the default's fine
[21:41:19 CET] <kepstin> I don't need the full video or anything, feel free to stop it after a few seconds...
[21:42:25 CET] <FishPencil> kepstin: https://gist.githubusercontent.com/anonymous/b0d4f1d0946095b3f5b12393ff183b91/raw/90640ca656f56bbef1ac4e2dcba434e793ad8aa3/gistfile1.txt
[21:44:48 CET] <kepstin> hmm, vc1 stuff from a blu-ray, great fun. It looks like the pixel formats are all fine, ffmpeg thinks it's yuv420p tv range on both the input and output.
[21:45:58 CET] <FishPencil> perhaps it's actually encoded wrong
[21:47:01 CET] <kepstin> yeah, I'm thinking that's actually the case, since the input *is* being interpreted as tv range.
[21:48:55 CET] <kepstin> hmm, that scale option doesn't seem to actually be doing anything in my quick testing.
[21:48:57 CET] <FishPencil> So if it thinks the input is TV, does that mean the stored values are actually worse than 16?
[21:49:37 CET] <FishPencil> So it's like the TV offset applied twice or something
[21:49:45 CET] <kepstin> that would mean the value being stored in the original video is 32 when it should be 16, yes.
[21:52:54 CET] <kepstin> alright, try just using -vf scale=in_range=tv:out_range=pc
[21:53:07 CET] <kepstin> that should fix it :/
[21:53:50 CET] <kepstin> you *might* also need a ",scale=in_range=tv:out_range=tv" after that, but i'm not sure.
[21:54:14 CET] <FishPencil> the first frame is "black", so this should work right? ffmpeg -i i.m2ts -vframes 1 -vf scale=in_range=tv:out_range=pc -y o.png
[21:54:18 CET] <FishPencil> o.png is 16s
[21:54:54 CET] <kepstin> FishPencil: the conversion to png adds an extra yuv-rgb step, it would be better to encode to a video to compare
[21:55:24 CET] <FishPencil> -vf scale=in_range=tv:out_range=pc,scale=in_range=tv:out_range=tv actually is 0s
[21:56:47 CET] <FishPencil> So for my clarification, we're taking the input, expanding it to pc, then faking that it's tv? I'm not clear on what's going on
[21:58:01 CET] <kepstin> yeah, what that's doing is taking the input (32-223), running a tv-pc conversion, which gives you 16-239, then retagging it as tv range, where 16 is the correct value for black
[21:58:11 CET] <kepstin> then the remaining code and players should handle it correctly.
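Put together, the whole fix while re-encoding might look like the following; the filter chain is the one worked out above, while the codec and quality settings are placeholders rather than anything from the discussion:

    # expand the mis-levelled video to full range, re-tag it as limited, and re-encode
    ffmpeg -i in.m2ts \
      -vf "scale=in_range=tv:out_range=pc,scale=in_range=tv:out_range=tv" \
      -c:v libx264 -crf 18 -c:a copy out.mkv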
[21:59:40 CET] <FishPencil> So much for these companies knowing how to write media
[22:00:10 CET] <FishPencil> kepstin: I really appreciate the help
[22:05:39 CET] <kepstin> make sure you check the entire video tho, I wouldn't be surprised if there was e.g. a pre-credits thing with wrong black levels but then the actual movie is correct :/
[22:07:01 CET] <FishPencil> kepstin: Looks all the same, from beginning to credits. That'd be real fun though
[22:07:23 CET] <FishPencil> kepstin: Would this be noticed on bluray players/tv's?
[22:07:52 CET] <JEEB> if you do a limited YCbCr to full RGB conversion and the result is still limited RGB - yes
[22:07:55 CET] <kepstin> FishPencil: yes, but it would honestly be hard to distinguish from lcd backlight bleed on cheaper tvs.
[22:08:11 CET] <JEEB> we've also had some BDs/DVDs where people forgot to limit their YCbCr to limited range
[22:08:30 CET] <JEEB> so after actually standards compliant players did the conversion you'd lose quite a bit of stuff :D
[22:09:04 CET] <JEEB> (I know one DEEN show that was just full range, and then another which was like 16-255 for some reason)
[22:09:51 CET] <kepstin> if anyone watched this on a modern oled or hdr lcd in a dark room, they'd *definitely* notice :)
[22:12:00 CET] <JEEB> this is what someone who watched on a normal player as-is would have gotten https://kuroko.fushizen.eu/screenshots/seitokai_no_ichizon/1289_tv_601.png
[22:12:12 CET] <JEEB> properly handled: https://kuroko.fushizen.eu/screenshots/seitokai_no_ichizon/1289_pc_601.png
[22:12:16 CET] <JEEB> :D
[22:12:36 CET] <FishPencil> Is there a way to seek to the first keyframe? I want to do -ss 00:10:00 -i i.mkv -vframes 1 o.png
[22:15:54 CET] <kepstin> FishPencil: you can use -noaccurate_seek to have the seek go to the nearest keyframe before the selected seek point.
[22:16:08 CET] <kepstin> or use filters to select the first keyframe after the seekpoint.
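Sketches of both approaches, with placeholder file names; the first grabs the keyframe at or before the requested time, the second decodes from the seek point and keeps the first key frame it meets:

    # keyframe at or before 10:00
    ffmpeg -noaccurate_seek -ss 00:10:00 -i i.mkv -vframes 1 before.png

    # first keyframe at or after 10:00
    ffmpeg -ss 00:10:00 -i i.mkv -vf "select=key" -vframes 1 after.png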
[22:48:26 CET] <TAFB_WERK> I will pay money to anyone who can get this 4k IP camera streaming smoothly and reliably to YouTube without re-encoding (the CPU is too slow for that). The camera is open to the public: ffmpeg -rtsp_transport tcp -i rtsp://admin:admin1234@skyviewe.com:65314/0 . Current live stream command, which works for 15 minutes to an hour before the stream locks up: https://pastebin.com/YNP7nmmR
[22:52:47 CET] <FishPencil> TAFB_WERK: Why aren't you going straight into YouTube's rtmp? Why the pipe?
[22:55:47 CET] <FishPencil> TAFB_WERK: And I'm almost positive you'll need to re-encode
[22:58:09 CET] <TAFB_WERK> FishPencil: right to youtube works for about 30 seconds before youtube drops the stream
[22:59:10 CET] <TAFB_WERK> the link to pastebin works, but after 15 mins or an hour, ffmpeg starts using 100% of the core it's running on :(
[22:59:24 CET] <TAFB_WERK> normally it's using about 6% of the cpu
[23:03:49 CET] <TAFB_WERK> some peeps asked to see the live stream on youtube: https://www.youtube.com/c/Skyviewelectronics/live
[23:03:58 CET] <TAFB_WERK> that'll run for approx 15 mins to an hour before it goes down.
[23:04:20 CET] <TAFB_WERK> that's only 4mbps, I usually run it at 12mbps
[23:04:55 CET] <TAFB_WERK> i'm happy with the smoothness, if we can stop ffmpeg from locking up or whatever, that would be good
[23:06:44 CET] <FishPencil> TAFB_WERK: What os/hardware/ffmpeg version are you running this from
[23:07:11 CET] <TAFB_WERK> ffmpeg version N-89881-g1948b76a1b Copyright (c) 2000-2018
[23:07:21 CET] <TAFB_WERK> windows 7 ultimate x64
[23:07:46 CET] <TAFB_WERK> i5 3470 with 8gb ram
[23:09:05 CET] <TAFB_WERK> it was originally running on windows 10, but I thought I'd try windows 7 to see if it fixed the ffmpeg freezing. nope, same issue.
[23:11:15 CET] <FishPencil> TAFB_WERK: What does FFmpeg tell you when the stream dies
[23:12:02 CET] <TAFB_WERK> nothing, ffmpeg pretends everything is fine, just using 100% cpu. I'll wait till it dies and do a screen cap.
[23:14:07 CET] <FishPencil> I would rerun with ffmpeg -loglevel verbose or debug, saving the log to a file. I'd be curious to see that
[23:14:29 CET] <FishPencil> It will probably be quite large
[23:16:08 CET] <TAFB_WERK> do I have to put that in both parts of the pipe?
[23:17:03 CET] <FishPencil> why are you piping anyway
[23:18:24 CET] <FishPencil> ffmpeg -rtsp_transport tcp -allowed_media_types video -i "rtsp://admin:admin1234@192.168.0.65:554/0" -b:a 2k -c:a aac -c:v copy -f flv "rtmp://a.rtmp.youtube.com/live2/xxx-xxxx"
[23:18:24 CET] <TAFB_WERK> only way to make it smooth, super choppy without the pipe (I think because the camera has a slightly variable framerate)
[23:18:29 CET] <sfan5> if you have it installed, running ffmpeg inside gdb would be useful
[23:18:38 CET] <sfan5> so you could get a backtrace of all threads
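A sketch of that workflow, assuming gdb is installed and the ffmpeg build has debug symbols (which is what the next few lines are about); the camera and YouTube URLs are elided:

    # launch ffmpeg under gdb with the usual arguments
    gdb --args ffmpeg -rtsp_transport tcp -i "rtsp://..." -c:v copy -f flv "rtmp://..."
    (gdb) run
    # once it locks up, press Ctrl+C to break in, then dump every thread's stack:
    (gdb) thread apply all bt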
[23:18:51 CET] <FishPencil> sfan5: He needs debug symbols for that
[23:19:07 CET] <TAFB_WERK> FishPencil: you can't stream to youtube without audio, and the camera stream has no audio
[23:19:17 CET] <FishPencil> ffmpeg -rtsp_transport tcp -allowed_media_types video -r 30 -i "rtsp://admin:admin1234@192.168.0.65:554/0" -b:a 2k -c:a aac -c:v copy -f flv "rtmp://a.rtmp.youtube.com/live2/xxx-xxxx"
[23:19:35 CET] <TAFB_WERK> ok, testing.
[23:19:54 CET] <sfan5> FishPencil: then he should probably get a debug build with symbols
[23:19:59 CET] <FishPencil> you need to add the nullsrc then, and the loglevel
[23:21:12 CET] <FishPencil> TAFB_WERK: I can get you a windows FFmpeg build with debug symbols if you're comfortable using gdb
[23:21:24 CET] <TAFB_WERK> FishPencil: sure
[23:21:40 CET] <TAFB_WERK> FishPencil: that link you gave me streams for about 22 seconds then youtube drops the stream
[23:23:16 CET] <FishPencil> ffmpeg -loglevel debug -rtsp_transport tcp -allowed_media_types video -r 30 -i "rtsp://admin:admin1234@192.168.0.65:554/0" -b:a 2k -c:a aac -c:v copy -f flv "rtmp://a.rtmp.youtube.com/live2/xxx-xxxx" > out.log 2>&1
[23:23:25 CET] <FishPencil> paste out.log somewhere
[23:23:47 CET] <TAFB_WERK> i think we need to inject fake audio stream in there
[23:25:23 CET] <TAFB_WERK> ffmpeg -loglevel debug -rtsp_transport tcp -allowed_media_types video -r 30 -i "rtsp://admin:admin1234@192.168.0.65:554/0" -i anullsrc -b:a 2k -c:a aac -c:v copy -f flv "rtmp://a.rtmp.youtube.com/live2/xxx-xxxx" > out.log 2>&1
[23:25:26 CET] <TAFB_WERK> trying that, 1 sec.
[23:26:59 CET] <TAFB_WERK> nope, my anullsrc didn't work
[23:27:17 CET] <sfan5> it's -f lavfi -i anullsrc
[23:27:27 CET] <TAFB_WERK> k.
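Folding sfan5's correction into the earlier command gives something like this; the stream key, log file name, audio bitrate and anullsrc parameters are placeholders:

    # copy the camera video, generate a silent AAC track, and push both to YouTube
    ffmpeg -loglevel debug -rtsp_transport tcp -allowed_media_types video -r 30 \
      -i "rtsp://admin:admin1234@192.168.0.65:554/0" \
      -f lavfi -i anullsrc=channel_layout=stereo:sample_rate=44100 \
      -map 0:v -map 1:a -c:v copy -c:a aac -b:a 128k \
      -f flv "rtmp://a.rtmp.youtube.com/live2/xxx-xxxx" > out.log 2>&1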
[23:29:34 CET] <TAFB_WERK> stream is up https://www.youtube.com/c/Skyviewelectronics/live
[23:29:44 CET] <TAFB_WERK> choppy, pixely
[23:30:17 CET] <FishPencil> Ah, good old Vibe
[23:30:38 CET] <FishPencil> one step at a time TAFB_WERK, that out.log when it dies
[23:37:29 CET] <TAFB_WERK> I'm not sure it ever dies when I'm not running the pipe command
[23:37:56 CET] <FishPencil> just q or ctrl+c it then
[23:38:03 CET] <TAFB_WERK> k. doing it now...
[23:39:24 CET] <TAFB_WERK> https://pastebin.com/De2ZX8wa
[23:41:19 CET] <FishPencil> Applying option b:a (video bitrate (please use -b:v)) with argument 2k.
[23:41:20 CET] <FishPencil> curious
[23:42:25 CET] <TAFB_WERK> can't apply video bitrate with -c copy
[23:42:28 CET] <TAFB_WERK> weird
[23:43:09 CET] <sfan5> weird? that's expected behaviour
[23:43:34 CET] <FishPencil> Curious that it says video bitrate for b:a
[23:43:50 CET] <FishPencil> it doesn't appear to actually be doing that though
[23:43:56 CET] <sfan5> I imagine the -b option has that description
[23:44:02 CET] <sfan5> would also explain why it says to use -b:v
[23:47:57 CET] <FishPencil> I somewhat doubt that the quality is any better with the pipe
[23:48:58 CET] <TAFB_WERK> i'll fire it up without the pipe again and watch it close, then I'll fire it up with the pipe, one moment
[23:50:35 CET] <TAFB_WERK> live stream is up without pipe https://www.youtube.com/c/Skyviewelectronics/live
[23:50:40 CET] <TAFB_WERK> very choppy, comes in fits and starts
[23:53:40 CET] <FishPencil> If this is the pipe one they look the same to me
[23:55:51 CET] <FishPencil> I would say it's possible the lack of pthreads is to blame, but it doesn't sound like the cpu is overloaded
[00:00:00 CET] --- Thu Jan 25 2018

