[Ffmpeg-devel-irc] ffmpeg.log.20170809
burek
burek021 at gmail.com
Thu Aug 10 03:05:01 EEST 2017
[00:00:04 CEST] <JEEB> it should tell you the output SAR/DAR
[00:00:09 CEST] <furq> Stream #0:0: Video: wrapped_avframe, yuv420p, 720x300 [SAR 1:1 DAR 12:5],
[00:00:33 CEST] <JEEB> anyways, if you just want to force a 1:1 SAR ",setsar=sar=1" at the end of the filter chain should work (not sure if the internal sar is needed)
[00:00:35 CEST] <ultrav1olet> blurry as hell and matches my results
[00:00:39 CEST] <furq> er
[00:00:40 CEST] <furq> Stream #0:0: Video: wrapped_avframe, yuv420p, 768x320 [SAR 1:1 DAR 12:5],
[00:00:48 CEST] <BtbN> It's your player then
[00:00:56 CEST] <BtbN> There is absolutely no blur in there at all
[00:01:23 CEST] <ultrav1olet> mpv 0.8.3/MPlayer 1.3.0
[00:01:36 CEST] <JEEB> wow, that's some old stuff
[00:02:15 CEST] <BtbN> 0.26.0 here, and VLC 3.0.0
[00:02:32 CEST] <JEEB> also just make sure that whatever your checking has SAR 1:1
[00:02:37 CEST] <JEEB> you're
[00:02:37 CEST] <ultrav1olet> thank you all!
[00:02:45 CEST] <ultrav1olet> My mplayer setup is f*cked up
[00:02:51 CEST] <ultrav1olet> trying to see what's wrong here
[00:03:05 CEST] <JEEB> kind of guessed that :P
[00:03:34 CEST] <ultrav1olet> mplayer has this: vo = gl:lscale=1,cscale=1
[00:03:58 CEST] <JEEB> I mean, at the point where zscale was doing stuff "wrong" I was pretty much sure that it was your stuff
[00:03:58 CEST] <ultrav1olet> why it gets applied when I don't resize the video for f*ck's sake?
[00:04:13 CEST] <JEEB> because zimg is pretty dang good
[00:04:15 CEST] <BtbN> because it's an ancient version
[00:04:22 CEST] <JEEB> also mplayer is mplayer
[00:05:58 CEST] <furq> ultrav1olet: because it's stretching the video to correct the aspect ratio
[00:06:11 CEST] <furq> actually nvm i forgot you fixed that now
[00:06:36 CEST] <ultrav1olet> thank you all
[00:06:41 CEST] <ultrav1olet> I'm closing the bug report
[00:07:29 CEST] <JEEB> ultrav1olet: to be honest I was trying to get you to make an angry bug report @ zimg (and then getting told off by the FFmpeg-unrelated author)
[00:07:56 CEST] <JEEB> because you were so fixated on FFmpeg
[00:07:57 CEST] <JEEB> lol
[00:08:03 CEST] <flaxo> hi is there a way to grab random chapters from DVDs?
[00:08:12 CEST] <furq> not with ffmpeg
[00:08:15 CEST] <furq> tcdemux/tccat will do it
[00:08:38 CEST] <flaxo> jup that was just dumped from debian...
[00:08:38 CEST] <ultrav1olet> JEEB: great many thanks!
[00:08:50 CEST] <flaxo> so my workflow is fucked right now
[00:09:00 CEST] <furq> lol what
[00:09:08 CEST] <furq> that's fucking stupid
[00:09:22 CEST] <flaxo> transcode is not deemed cool anymore with the kids... it seems
[00:09:26 CEST] <furq> i guess you're building transcode yourself then
[00:09:38 CEST] <flaxo> yeah great
[00:09:58 CEST] <furq> there's probably something else that does it, but i couldn't tell you what
[00:10:02 CEST] <flaxo> like cdrecord.. and all the other stuff that got kicked out
[00:10:27 CEST] <flaxo> arch seems more promising each day...
[00:10:38 CEST] <furq> someone should really split tccat out into a separate package
[00:10:51 CEST] <furq> tccat and tcdemux are the only useful bits of transcode
[00:11:02 CEST] <flaxo> well
[00:11:19 CEST] <JEEB> flaxo: what was the reason for removal? usually it's related to lack of maintenance
[00:11:21 CEST] <flaxo> thats what i was using all the time :/
[00:11:31 CEST] <flaxo> dead upstream
[00:11:33 CEST] <flaxo> they tell my
[00:11:34 CEST] <furq> yeah
[00:11:35 CEST] <flaxo> me
[00:11:48 CEST] <furq> no updates since wheezy
[00:11:48 CEST] <flaxo> it was not dead it was feature-complete
[00:11:50 CEST] <furq> and probably before that
[00:11:54 CEST] <furq> that's a dumb reason though
[00:12:08 CEST] <ultrav1olet> JEEB: speaking of preserving or omitting DAR ratio - what's the best way to encode in the future? By specifying -2 as the second dimension?
[00:12:27 CEST] <ultrav1olet> I don't want my future encodes to have the wrong DAR
[00:12:30 CEST] <JEEB> ultrav1olet: that sets the height automatically to the closest mod2
[00:12:34 CEST] <furq> ultrav1olet: -2 will keep the original aspect ratio
[00:12:41 CEST] <flaxo> https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=817199
[00:12:43 CEST] <JEEB> to make sure after that that you have 1:1 SAR you then force the sar
[00:12:50 CEST] <JEEB> as I noted, ",setsar=sar=1"
[00:12:54 CEST] <JEEB> at the end
[00:13:11 CEST] <JEEB> (esp. if you're scaling to smaller sizes the aspect ratio might not be exact with the closest mod2)
[00:13:22 CEST] <JEEB> but the difference might be really small
[00:13:31 CEST] <ultrav1olet> furq: I just thought that if I properly scale manually (e.g. divide each dimension by the same number) everything should work :(
[00:13:33 CEST] <JEEB> and thus it might make sense to forcibly set sar to 1
[00:14:00 CEST] <ultrav1olet> looks like I was wrong
[00:14:05 CEST] <BtbN> ultrav1olet, that's how it works. You just miscalculated
[00:14:14 CEST] <BtbN> But using -2 is easier anyway
[00:14:23 CEST] <ultrav1olet> BtbN: I will use -2 then
[00:14:30 CEST] <BtbN> You can use it for either of the two
[00:14:31 CEST] <ultrav1olet> sounds like an easier solution
[00:14:43 CEST] <ultrav1olet> less chance for mistakes
[00:14:48 CEST] <BtbN> if you want a specific height, just put -2 for the width
[00:14:54 CEST] <ultrav1olet> will do
[00:15:21 CEST] <ultrav1olet> I guess ffmpeg's manual should mention what you've all just said
[00:15:35 CEST] <JEEB> yes
[00:15:43 CEST] <JEEB> ffmpeg-all.html
[00:15:50 CEST] <JEEB> is the thing you want to ctrl+F through
[00:15:59 CEST] <ultrav1olet> I looked at https://trac.ffmpeg.org/wiki/Scaling%20(resizing)%20with%20ffmpeg
[00:16:11 CEST] <furq> !filter scale
[00:16:11 CEST] <nfobot> furq: http://ffmpeg.org/ffmpeg-filters.html#scale-1
[00:16:13 CEST] <furq> it's in there
[00:16:22 CEST] <ultrav1olet> and it happily offers me to use "ffmpeg -i input.avi -vf scale=320:240 output.avi"
[00:16:30 CEST] <furq> If one and only one of the values is -n with n >= 1, the scale filter will use a value that maintains the aspect ratio of the input image, calculated from the other specified dimension. After that it will, however, make sure that the calculated dimension is divisible by n and adjust the value if necessary.
[00:16:31 CEST] <ultrav1olet> yeah I see
[00:17:12 CEST] <ultrav1olet> it's just that the "its most basic form" example has no mention that it might completely break things ;)
[00:17:14 CEST] <BtbN> you want -2 because of YUV420 needing multiples of two
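Putting the advice above together, a minimal sketch (the file names, codec and the 720 target height are placeholder assumptions, not from the log): give scale one explicit dimension, let -2 pick the other so the aspect ratio is preserved and stays mod2, then force SAR 1:1 as suggested:

    ffmpeg -i input.mkv -vf "scale=-2:720,setsar=sar=1" -c:v libx264 output.mkv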
[00:19:19 CEST] <ultrav1olet> Ideally I would love a scale option which allows me to specify a divisor, so that I'd be free not to specify the resulting dimensions at all, say -vf scale 3/5,3/5 or something like that
[00:20:00 CEST] <furq> you can do that
[00:20:07 CEST] <ultrav1olet> so for 1920x1080 source I'd get 1920*3/5,1080*3/5 - is that possible?
[00:20:20 CEST] <furq> -vf scale=iw*(3/5):ih*(3/5)
[00:20:32 CEST] <ultrav1olet> and it'll preserve DAR?
[00:20:40 CEST] <furq> should do
[00:20:52 CEST] <BtbN> that's highly uncommon though
[00:20:56 CEST] <BtbN> you usually want a specific height or width
[00:20:57 CEST] <furq> that's simplified because that'll end up giving you some non-mod2 sizes
[00:21:04 CEST] <furq> and also yeah you normally want a target size
[00:21:47 CEST] <ultrav1olet> I do understand that the resulting dimensions should be at least divisible by 2 and better yet 4
[00:21:57 CEST] <furq> 4/8/16 aren't really any better than 2
[00:22:07 CEST] <furq> iirc x264 encodes mod16 regardless
[00:22:09 CEST] <ultrav1olet> cause x264 works best for 4x4 and higher
[00:22:10 CEST] <BtbN> for h264, you ideally want multiples of 8. But it doesn't overly matter
[00:22:23 CEST] <BtbN> it will just add padding and crop it away on playback
[00:22:26 CEST] <furq> right
[00:22:32 CEST] <furq> is it mod8 or mod16
[00:22:45 CEST] <JEEB> more like multiples of 16 internally since macroblocks, but the padding is so effective you shouldn't care
[00:22:46 CEST] <BtbN> hm, 1080p is actually 1088p. So it might even be 16
[00:22:51 CEST] <JEEB> it is 16
[00:22:54 CEST] <JEEB> and HEVC is 64
[00:22:58 CEST] <furq> nice
[00:23:00 CEST] <JEEB> coding tree unit size
[00:23:26 CEST] <BtbN> as long as the padding isn't filled with randomness
[00:23:38 CEST] <BtbN> if it's a permanent static color, it should compress into nothingness
[00:23:52 CEST] <JEEB> the padding basically is done in a way that lets the encoder optimize the hell out of it :P
[00:23:55 CEST] <JEEB> so you don't have to care
[00:24:06 CEST] <JEEB> thus you actually care more about the playback devices' issues
[00:24:13 CEST] <JEEB> like renderers failing with mod2
[00:24:17 CEST] <JEEB> or something like that
[00:24:28 CEST] <ultrav1olet> ffmpeg -i source.mkv -vf 'scale=iw*(2/5):ih*(2/5)' -c:v utvideo result.mkv works beautifully!
[00:24:48 CEST] <BtbN> if it happens to end up with multiples of two, yes
[00:24:53 CEST] <BtbN> if not, it will explode
[00:24:59 CEST] <ultrav1olet> source: Stream #0:0: Video: h264 (High), yuv420p(progressive), 1920x800 [SAR 1:1 DAR 12:5], 23.98 fps, 23.98 tbr, 1k tbn, 47.95 tbc (default)
[00:25:04 CEST] <JEEB> with 4:2:0, yes
[00:25:07 CEST] <ultrav1olet> result: Stream #0:0: Video: utvideo (ULY0 / 0x30594C55), yuv420p, 768x320 [SAR 1:1 DAR 12:5], q=2-31, 200 kb/s, 23.98 fps, 1k tbn, 23.98 tbc (default)
[00:25:12 CEST] <furq> but yeah you'd actually want something like -vf scale=floor(iw*(3/5)/2)*2
[00:25:17 CEST] <furq> to force mod2
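As a hedged sketch of that, using the 3/5 factor from the discussion and assumed file names, applying the floor() trick to both dimensions keeps them mod2 (note that -2 only auto-rounds the one dimension it is given):

    ffmpeg -i input.mkv -vf "scale=floor(iw*(3/5)/2)*2:floor(ih*(3/5)/2)*2,setsar=1" -c:v libx264 output.mkv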
[00:25:24 CEST] <cq1> I presume that any reasonable encoder is more sophisticated than simply "padding with some value", and instead, when it does any form of rate-distortion optimization, errors in the padded region count for nothing...
[00:25:32 CEST] <JEEB> cq1: yes
[00:28:11 CEST] <ultrav1olet> One thing I've always wondered about but never actually tried: when encoding for the same bitrate target, which is better: 1) encoding the source at lower quality, or 2) slightly downscaling it (e.g. from 1080p to 720p) and using a higher quality?
[00:28:37 CEST] <furq> depends on the bitrate and target device
[00:28:41 CEST] <ultrav1olet> Of course when you're ready to sacrifice a wee bit of visual fidelity
[00:28:53 CEST] <ultrav1olet> just what I thought
[00:28:54 CEST] <furq> downscaling will make a big difference to quality/size though
[00:29:11 CEST] <ultrav1olet> exactly, so the answer is not really obvious
[00:29:15 CEST] <furq> right
[00:29:25 CEST] <furq> as usual when it comes to video, there's no right or wrong answer
[00:29:33 CEST] <kepstin> this is the sort of thing where netflix does big studies over massive video libraries to try to decide what to do :/
[00:29:39 CEST] <ultrav1olet> "trust your eyes" ))
[00:30:10 CEST] <furq> and as usual when it comes to there being no right or wrong answer, there is someone who keeps asking us what the right answer is
[00:30:21 CEST] <furq> and then JEEB starts swearing at them because they won't just encode a test sample like he keeps telling them
[00:31:12 CEST] <ultrav1olet> are there any plans to make ffmpeg compatible with avisynth filters or is that not even theoretically possible?
[00:31:12 CEST] <furq> it's been a while since we've had one of those guys tbf
[00:31:20 CEST] <furq> ultrav1olet: it can already load .avs on windows
[00:31:24 CEST] <furq> and you can pipe vapoursynth into it
[00:31:50 CEST] <furq> if you mean incorporating avisynth filters into lavfi then that isn't directly possible
[00:31:59 CEST] <furq> some have been ported though
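For the VapourSynth route furq mentions, a rough sketch of the piping (script name and encoder settings are assumptions; vspipe ships with VapourSynth):

    vspipe --y4m script.vpy - | ffmpeg -i - -c:v libx264 out.mkv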
[00:32:06 CEST] <ultrav1olet> I for one dream of having MDegrain2 in ffmpeg
[00:32:19 CEST] <ultrav1olet> probably the best denoiser I've seen in my entire life
[00:32:20 CEST] <furq> yeah it'd be nice to have qtgmc in ffmpeg
[00:32:30 CEST] <furq> but it's not that much effort to go through vs
[00:32:38 CEST] <ultrav1olet> works wonders and I have to boot into windows to use it :(
[00:32:46 CEST] <furq> it's probably been ported to vapoursynth
[00:33:36 CEST] <ultrav1olet> my workflow is to use it in windows to encode to x264 losslessly, then I encode once again in Linux
[00:33:42 CEST] <ultrav1olet> that's kinda awful
[00:33:47 CEST] <furq> that's extremely awful
[00:34:01 CEST] <JEEB> avisynth works under wine since 2004 or so
[00:34:12 CEST] <JEEB> so when I was (ab)using university workstations it was 2009
[00:34:18 CEST] <JEEB> to encode stuff in parts
[00:34:21 CEST] <ultrav1olet> I'm too stupid to use it without MeGUI :(
[00:34:51 CEST] <furq> ultrav1olet: https://github.com/dubhater/vapoursynth-mvtools
[00:34:52 CEST] <ultrav1olet> last time I tried MeGUI doesn't work under wine
[00:35:06 CEST] <furq> vs has free multithreading as well
[00:35:14 CEST] <q66> hi furq
[00:35:27 CEST] <furq> hi there little buddy
[00:35:44 CEST] <furq> how can i help you today
[00:35:50 CEST] <q66> idk
[00:35:55 CEST] <q66> i just came to idle and stick around for a while
[00:36:04 CEST] <ultrav1olet> Hopefully someone with an itch will port MDegrain to ffmpeg :-)
[00:36:08 CEST] <q66> just #mpv isn't enough multimedia madness
[00:36:31 CEST] <furq> you will be pleased to hear that nobody talks about animes in here
[00:36:40 CEST] <q66> that's nice
[00:36:48 CEST] <q66> do they talk about perl
[00:36:53 CEST] <furq> not that i've seen
[00:36:57 CEST] <q66> that's also nice
[00:37:10 CEST] <JEEB> #ffmpeg is just dumb lusers
[00:37:21 CEST] <JEEB> and random people saying they will build the next major video site
[00:37:22 CEST] <furq> ultrav1olet: vapoursynth is easy enough to use
[00:37:23 CEST] <q66> sounds fun
[00:37:28 CEST] <JEEB> or people asking how to do their porn site
[00:37:33 CEST] <furq> it's easier than rebooting into windows every time you want to encode anything
[00:38:21 CEST] <ultrav1olet> furq: looking at it, thanks
[00:40:00 CEST] <ultrav1olet> my needs are very basic actually: https://pastebin.com/raw/dn1RicLD
[00:40:15 CEST] <ultrav1olet> copied straight from mdegrain manual
[00:41:25 CEST] <q66> it's pretty much the same with vapoursynth except in python
[00:41:49 CEST] <ultrav1olet> looks like this will work as is with vapoursynth
[00:42:04 CEST] <furq> yeah just install mvtools and that's all basically the same
[00:42:23 CEST] <ultrav1olet> great, will definitely try that
[00:43:59 CEST] <q66> yeah though the convert calls are not separate funcs in vs
[00:46:28 CEST] <furq> http://vpaste.net/zIKCc
[00:46:29 CEST] <furq> something like that
[00:46:45 CEST] <q66> yeah like that
[00:47:27 CEST] <ultrav1olet> you're amazing, guys!
[00:55:55 CEST] <ultrav1olet> great many thanks!
[01:02:11 CEST] <iive> so, I see people complain that there is not enough talking about anime.
[01:02:49 CEST] <iive> maybe somebody would be kind enough to recommend me something to watch.
[01:03:20 CEST] <iive> i'd prefer it to be a recent title and to be complete.
[01:04:25 CEST] <JEEB> shouwa genroku rakugo shinjuu
[01:05:31 CEST] <iive> this is about the actors, isn't it?
[01:05:39 CEST] <furq> i would like to assure you that i was not complaining
[01:05:41 CEST] <JEEB> yes, the traditional comedy thing
[01:06:10 CEST] <JEEB> as in, about comedians in that genre
[01:06:21 CEST] <JEEB> two seasons out now, which nicely brought it to an end
[01:06:31 CEST] <ultrav1olet> it even has a trailer ... in Japanese
[01:06:37 CEST] <ultrav1olet> https://www.youtube.com/watch?v=rMWOuwFpF-g
[01:06:59 CEST] <iive> i've seen it recommended before, I'll take a look.
[01:07:08 CEST] <klaxa> i apparently have shit taste, but my favorite show last season was zero kara hajimeru mahou no sho
[01:07:31 CEST] <JEEB> klaxa: it's OK, it was one of the shows I was kind of going through
[01:07:40 CEST] Action: JEEB watches way too much crap
[01:08:10 CEST] <klaxa> i cut back on watching crap shows
[01:08:29 CEST] <klaxa> meaning i watch fewer crap shows
[01:09:09 CEST] <klaxa> i was surprised by how much i liked renai boukun even though the genre and tropes have already been used to death
[01:13:40 CEST] <iive> JEEB: there seems to be an ova too, and it is marked as a prequel, so I guess I should start with it first.
[01:14:10 CEST] <JEEB> I skipped that and I don't think I lost anything
[01:14:17 CEST] <JEEB> I have a feeling it's something that was done before, separately
[01:14:32 CEST] <iive> ok.
[01:14:45 CEST] <JEEB> because (spoiler alert) the two seasons go through pretty much the whole life of those folk
[01:15:38 CEST] <iive> it's marked as prequel, not alternative version.
[01:16:15 CEST] <JEEB> let's just say I ignored it and missed seemingly nothing
[01:16:21 CEST] <JEEB> of course I don't know what I missed but still :P
[01:16:28 CEST] <iive> :D
[01:16:59 CEST] <iive> klaxa: sometimes there are shows that are not highly praised, but that I find a lot more meaningful than the highly rated ones.
[01:17:44 CEST] <JEEB> the thing I bought the whole set of discs for was shirobako, and you can think of that at 60eur a pop and about eight discs
[01:17:56 CEST] <JEEB> one of my favourites for 2014
[01:17:58 CEST] <klaxa> >p.a. works
[01:18:01 CEST] <JEEB> inorite
[01:18:05 CEST] <klaxa> although i heard it was good for a change
[01:18:17 CEST] <JEEB> yea, I skipped a lot of PA stuff before that
[01:18:29 CEST] <klaxa> all the other p.a. works shows i dropped after a few eps (except for angel beats)
[01:18:30 CEST] <JEEB> shirobako hit me as a nice thing
[01:25:27 CEST] <CounterPillow> anime sucks
[01:31:01 CEST] <iive> CounterPillow: i wonder, what is your most hated anime :E
[01:42:01 CEST] <CounterPillow> Spongebob
[01:44:22 CEST] <iive> that's not anime
[01:44:58 CEST] <klaxa> in japan it is :x
[01:48:48 CEST] <klaxa> okay autoconf 2.13 is too old
[01:58:27 CEST] <klaxa> ah whoops, wrong buffer, lol
[03:42:29 CEST] <Harzilein> hi
[03:45:27 CEST] <Harzilein> i made a 'still frame video' like this: ffmpeg -ss 1209.041000 -i defiance303.mkv -an -c copy -vframes 1 defiance303.thumb.mkv # https://pastebin.com/C9eTrh6A
[03:48:58 CEST] <Harzilein> and i now want to make a video with that still frame looped and the audio from the complete file: ffmpeg -i defiance303.mkv -stream_loop -1 -r 2 -i defiance303.thumb.mkv -map 0:a:0 -map 1:v:0 -c copy -fflags +genpts defiance303.mp4 # https://pastebin.com/vneMrWQn
[03:52:07 CEST] <Harzilein> when i look at the output of mediainfo the video stream is only a couple of ms long, but playback seems to work fine (with mpv) ... the only problem is that when i try to seek, playback aborts with an 'end of file' message: https://pastebin.com/xTMrTFJh
[09:55:06 CEST] <yong> I want to encode frames received through a network socket. These frames are sent as individual .bmp files. Currently I'm trying it like this:
[09:55:09 CEST] <yong> ffmpeg -f bmp_pipe -framerate 10 -i tcp://localhost:4444 -vframes 20 -an -c:v libx264 -preset veryfast test.mp4
[09:55:42 CEST] <yong> but ffmpeg doesn't even create a test.mp4, and when the sender closes the socket (after sending 40 frames), all ffmpeg outputs is "tcp://localhost:4444: Unknown error"
[09:55:49 CEST] <yong> What do I have to specify to make this work?
[09:56:58 CEST] <furq> you probably want image2pipe
[10:11:00 CEST] <bencc> why aren't the releases in the branches in sync?
[10:11:22 CEST] <bencc> I see random 3.1.x, 3.2.x, 3.3.x releases
[10:11:27 CEST] <bencc> https://github.com/FFmpeg/FFmpeg/releases
[10:11:38 CEST] <bencc> I would expect all the branches to release on the same day
[10:15:52 CEST] <thebombzen> furq: that almost certainly is not the issue, assuming that they're receiving bmp images
[10:16:28 CEST] <thebombzen> png_pipe, jpeg_pipe, bmp_pipe etc. have all been split off from image2pipe. image2pipe still works but those are better
[10:16:51 CEST] <yong> furq: Actually I'm stupid, I can't even connect to ffmpeg, it doesn't matter when I close the socket, ffmpeg just times out with "tcp://localhost:4444: Unknown error" no matter what (even if I'm not doing anything) - which is kind of weird, since it should just listen on a socket which doesn't really have a timeout? Does ffmpeg not support network input like this?
[10:18:21 CEST] <yong> Or do I have to use UDP? I was going to use TCP for testing purposes so I can be sure ffmpeg at least gets data
[10:20:50 CEST] <yong> Alright I guess I have to just specify it to listen, like this:
[10:20:53 CEST] <yong> ffmpeg -f image2pipe -framerate 10 -i tcp://localhost:4444?listen=1 -vframes 20 test.mp4
[10:20:59 CEST] <yong> still can't connect to it though :D
[10:21:38 CEST] <yong> at least it doesn't timeout anymore
[10:29:46 CEST] <yong> ohhhhhhh solved it lol
[10:30:11 CEST] <yong> localhost = ipv6, you have to specify tcp://127.0.0.1 for ipv4
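Putting yong's pieces together, the receiving side would look something like this (port, frame count and encoder settings are simply the values from the log; -f bmp_pipe should also work here, as thebombzen notes):

    ffmpeg -f image2pipe -framerate 10 -i "tcp://127.0.0.1:4444?listen=1" -vframes 20 -an -c:v libx264 -preset veryfast test.mp4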
[14:04:05 CEST] <cableguy> hey team
[14:04:40 CEST] <cableguy> i converted a sample file to yuv color with this
[14:04:42 CEST] <cableguy> ffmpeg -i test.mkv -c:v rawvideo -pix_fmt yuv420p test.yuv
[14:04:51 CEST] <cableguy> the test.mkv source is 50mb
[14:04:58 CEST] <cableguy> and the output test.yuv is 2gb size
[14:05:05 CEST] <cableguy> the video is 60 seconds long
[14:05:29 CEST] <cableguy> i tried doing this to a 2h long video, i ran out of hdd space
[14:05:34 CEST] <cableguy> i dont think this is right
[14:05:50 CEST] <dystopia_> morning
[14:06:00 CEST] <furq> what part of that doesn't seem right
[14:06:18 CEST] <cableguy> why output becomes so big?
[14:06:19 CEST] <c_14> the -c:v rawvideo probably
[14:06:25 CEST] <furq> because it's rawvideo
[14:06:25 CEST] <dystopia_> raw makes every frame an i frame
[14:06:27 CEST] <dystopia_> and is massive
[14:06:31 CEST] <furq> it's uncompressed
[14:07:04 CEST] <furq> 60 seconds of 1080p30 yuv420p rawvideo is 5.5GB
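For reference, that figure follows from the raw frame size: yuv420p stores 1.5 bytes per pixel, so 1920 x 1080 x 1.5 = 3,110,400 bytes per frame; at 30 fps for 60 seconds that is about 5.6 GB (roughly 5.2 GiB).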
[14:07:23 CEST] <cableguy> my input is 60 seconds of 720x576 dvd vob sample
[14:07:39 CEST] <dystopia_> why go to raw?
[14:07:59 CEST] <dystopia_> it's mpeg2, go to whatever end format is desired
[14:08:10 CEST] <furq> use ffv1 in nut if you want a lossless intermediate format
[14:08:18 CEST] <furq> but you could just use -c copy
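A sketch of that lossless-intermediate suggestion (file names assumed); FFV1 stays bit-exact but is far smaller than rawvideo:

    ffmpeg -i test.mkv -c:v ffv1 -c:a copy intermediate.nut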
[14:09:44 CEST] <cableguy> because x265 encodes only from yuv dystopia_
[14:09:44 CEST] <dystopia_> i have a 1920x1080 video, that i want to encode to 4:3 to fit in 640x480
[14:09:56 CEST] <dystopia_> but i don't want to lose any of the picture
[14:10:03 CEST] <dystopia_> what resolution can i scale to ?
[14:10:07 CEST] <furq> cableguy: you know ffmpeg supports libx265, right
[14:10:21 CEST] <furq> dystopia_: you can't
[14:10:31 CEST] <furq> unless you want to fuck up the aspect ratio
[14:10:33 CEST] <dystopia_> furq im happy with black bars top and bottom
[14:10:34 CEST] <dystopia_> yeah
[14:10:36 CEST] <furq> oh right
[14:10:37 CEST] <dystopia_> i do :p
[14:10:44 CEST] <dystopia_> but im not sure what res will fit
[14:10:55 CEST] <cableguy> furq, do i look like someone who knows anything at all
[14:11:14 CEST] <furq> 640*360 with black bars to preserve ar
[14:11:17 CEST] <dystopia_> you can go to x265 from mpeg2 with ffmpeg cableguy
[14:11:26 CEST] <cableguy> so how is it different
[14:11:27 CEST] <dystopia_> thanks
[14:11:29 CEST] <dystopia_> will test
[14:11:30 CEST] <cableguy> from converting to yuv with ffmpeg
[14:11:33 CEST] <furq> 640 / (16/9)
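For dystopia_'s 640x480 target, a hedged sketch of what that works out to (file names assumed): scale to 640x360, then pad to 640x480 with centred black bars:

    ffmpeg -i in.mp4 -vf "scale=640:360,pad=640:480:(ow-iw)/2:(oh-ih)/2" out.mp4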
[14:11:34 CEST] <cableguy> and then encoding with x265
[14:11:57 CEST] <furq> it means you don't need to store two hours of rawvideo and kill your hard disk
[14:12:08 CEST] <cableguy> oh
[14:12:18 CEST] <cableguy> can you write an example of command line
[14:12:30 CEST] <furq> granted you don't need to do that anyway because you can pipe ffmpeg's output into x265
[14:12:44 CEST] <furq> ffmpeg -i foo.vob -c:v libx265 [options] out.mkv
[14:13:08 CEST] <furq> https://trac.ffmpeg.org/wiki/Encode/H.265
[14:14:02 CEST] <furq> or ffmpeg -i foo.vob -c:v rawvideo -f yuv4mpegpipe - | x265 [options]
[14:14:12 CEST] <furq> there's no real reason to do that though
[14:15:02 CEST] <cableguy> how do i incorporate avs script into this
[14:15:08 CEST] <cableguy> if i need to deinterlace or crop
[14:15:14 CEST] <furq> you can do both of those with ffmpeg
[14:15:17 CEST] <dystopia_> don't
[14:15:18 CEST] <dystopia_> yeah
[14:15:27 CEST] <furq> unless you're using some super fancy deinterlacing script like qtgmc
[14:15:38 CEST] <cableguy> what about yadif
[14:15:44 CEST] <furq> -vf yadif=1
[14:15:45 CEST] <cableguy> cuz its just some old dvd
[14:16:14 CEST] <cableguy> how about running something like SelectRangeEvery()
[14:16:16 CEST] <cableguy> for testing
[14:16:34 CEST] <dystopia_> -vf yadif=0:0,crop=720x576:0:0
[14:16:38 CEST] <furq> i assume that's the same as -t 60
[14:16:56 CEST] <furq> ffmpeg will just load avs scripts if you're more comfortable with that
[14:16:58 CEST] <furq> -i foo.avs
[14:17:14 CEST] <cableguy> wait so
[14:17:42 CEST] <dystopia_> crop values - crop=width_after_crop:height_after_crop:left_crop:top_crop
[14:17:46 CEST] <cableguy> how do you load avs and convert mpeg2 to yuv and encode to x265 all with ffmpeg
[14:17:58 CEST] <furq> you don't do the middle bit
[14:18:08 CEST] <furq> ffmpeg -i foo.avs -c:v libx265 [options] out.mkv
[14:18:22 CEST] <furq> assuming your avisynth.dll is in the right place
[14:18:24 CEST] <cableguy> so the avs must contain the input src that is already yuv format
[14:18:29 CEST] <furq> yes
[14:18:39 CEST] <cableguy> so you dont save any space
[14:18:46 CEST] <furq> what
[14:18:56 CEST] <cableguy> in order to use avs script
[14:19:05 CEST] <cableguy> you end up converting the mpeg2 to yuv, burning hdd space
[14:19:07 CEST] <furq> let me revise my answer: no
[14:19:19 CEST] <furq> you load the d2v or whatever in your avs and have avisynth output yuv
[14:19:23 CEST] <cableguy> so you could include that new uncompressed yuv in avs
[14:19:39 CEST] <furq> at no point in this process does rawvideo get saved to your disk
[14:20:00 CEST] <furq> honestly though if you're just cropping and running yadif, you should ditch avisynth
[14:20:20 CEST] <furq> it's an unnecessary complication when ffmpeg will do both of those things identically
[14:20:41 CEST] <cableguy> but how you prepare it for encoding
[14:20:45 CEST] <furq> you don't
[14:21:06 CEST] <cableguy> if you use megui or something it writes a avs script that sets crop values, etc its easy to follow
[14:21:48 CEST] <dystopia_> ffmpeg -i in.vob -vf yadif=0:0,crop=720:576:0:0 -c:v libx265 out.mkv
[14:21:57 CEST] <dystopia_> ^ this imo
[14:22:15 CEST] <dystopia_> you can do -t 60 to test, will save you a lot of hassle
[14:22:22 CEST] <furq> yeah basically that
[14:22:42 CEST] <furq> 13:18:08 ( furq) ffmpeg -i foo.avs -c:v libx265 [options] out.mkv
[14:22:49 CEST] <furq> this will work with the avisynth script megui produces though
[14:22:58 CEST] <furq> you don't need to create any intermediate files
[14:23:29 CEST] <cableguy> dystopia_, so what do you use to count how many pixels you need to crop e.g. for black bars
[14:24:17 CEST] <cableguy> so does including libx265 automatically solve the mpeg2 to yuv part
[14:25:06 CEST] <dystopia_> cableguy i script my stuff
[14:25:16 CEST] <dystopia_> basically i use ffmpeg to dump a frame of video
[14:25:22 CEST] <dystopia_> then open that frame in paint
[14:25:32 CEST] <dystopia_> and enable grid lines, zoom into max
[14:25:43 CEST] <dystopia_> and every square on the grid = 1 pixel
[14:25:50 CEST] <dystopia_> so i just count what needs to be cropped
[14:25:56 CEST] <dystopia_> then set it in my script
[14:27:04 CEST] <cableguy> do you also wear a fedora
[14:29:58 CEST] <dystopia_> i do not
[14:30:23 CEST] <dystopia_> %PATHC2%\ffmpeg -ss 58 -i 01.ts -r 1 %PATHC4%\%houtput%.bmp
[14:30:23 CEST] <dystopia_> mspaint %PATHC4%\%houtput%.bmp
[14:32:06 CEST] <dystopia_> https://paste.ofcode.org/3aAPytUMRDstAvFFmjGCjCd my old crop code if you want it
[14:32:23 CEST] <cableguy> ffmpeg -i foo.avs -c:v libx265 [options] out.mkv
[14:32:34 CEST] <cableguy> what do you include in [options] part
[14:32:45 CEST] <cableguy> it doesnt recognize x265 options
[14:32:58 CEST] <cableguy> e.g. --preset fast
[14:34:26 CEST] <dystopia_> it should
[14:34:31 CEST] <iive> cableguy: man ffmpeg-codecs "/" to search "libx265"
[14:35:13 CEST] <iive> it says that there are: -preset, -tune, -forced-idr and -x265-params
[14:35:37 CEST] <cableguy> ur right
[14:35:43 CEST] <cableguy> its one dash not two
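So, filling in the [options] part with those single-dash forms (the preset and CRF value here are placeholders to adjust, not recommendations from the channel):

    ffmpeg -i foo.avs -c:v libx265 -preset fast -crf 24 out.mkv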
[14:36:18 CEST] <dystopia_> any idea why this drops tons of frames :)
[14:36:23 CEST] <dystopia_> ffmpeg.exe -i "in.mp4" -sws_flags spline -s 640x360 -c:v mpeg1video -c:a mp2 -q:a 224 test.mpg
[14:36:31 CEST] <dystopia_> Past duration 0.823357 too large 8754kB time=00:01:37.78 bitrate= 733.4kbits/s dup=37 drop=32 speed=12.2x
[14:37:49 CEST] <dystopia_> drops 77 frames over 5mins of video
[14:38:25 CEST] <relaxed> dystopia_: maybe a change in framerate
[14:38:38 CEST] <dystopia_> hmm
[14:39:11 CEST] <dystopia_> yes
[14:45:54 CEST] <cableguy> but how do you know which x265 version ffmpeg is using
[14:46:01 CEST] <cableguy> how do you know its latest
[14:46:20 CEST] <cableguy> the encoding result says its x265 2.4+87
[14:46:39 CEST] <iive> it uses the one it is linking to
[14:46:58 CEST] <iive> that's why it is a lib<codec>, because it is an external library
[14:46:59 CEST] <cableguy> but latest is 2.5+6
[14:47:15 CEST] <iive> well... what version do you have installed on your system?
[14:47:18 CEST] <cableguy> how do you change where its linking
[14:47:24 CEST] <relaxed> cableguy: https://www.johnvansickle.com/ffmpeg/
[14:47:39 CEST] <iive> windows?
[14:47:46 CEST] <cableguy> ye
[14:47:48 CEST] <viric> Do you know if all formats start the streams at time 0? I have MTS files and I think they may start after 0
[14:47:51 CEST] <cableguy> i guess my ffmpeg is out of date
[14:47:52 CEST] <viric> between 0 and 1s
[14:48:21 CEST] <iive> well, then it depends what version has been linked by the one who built ffmpeg for you.
[14:48:25 CEST] <cableguy> can you update ffmpeg through terminal
[14:48:41 CEST] <cableguy> im just using some windows build available from public
[14:49:24 CEST] <cableguy> hey maybe the shared version is what i need
[14:50:30 CEST] <cableguy> no, it doesn't have any x265 exe files, although it says linking
[14:59:13 CEST] <cableguy> wait so
[14:59:24 CEST] <cableguy> where is ffmpeg pulling the x265 lib from on windows
[14:59:36 CEST] <cableguy> is it compiled into ffmpeg.exe
[14:59:40 CEST] <BtbN> wherever you told it to
[14:59:59 CEST] <cableguy> but i didn't tell it anything
[15:00:04 CEST] <BtbN> You did while building it
[15:00:24 CEST] <cableguy> i didnt build it
[15:00:26 CEST] <cableguy> its windows
[15:00:33 CEST] <cableguy> you just snatch it from here https://ffmpeg.zeranoe.com/builds/
[15:00:35 CEST] <BtbN> Then whoever built it did.
[15:00:55 CEST] <cableguy> so its inside ffmpeg.exe
[15:01:23 CEST] <cableguy> would be nice if you could point ffmpeg.exe to some external x265.exe
[15:01:46 CEST] <BtbN> it does not use an exe.
[15:51:22 CEST] <kepstin> viric: various formats can start at pts other than 0 yes, mpegts is a good example of that. Note that the ffmpeg cli tool (by default) rewrites input timestamps to start at 0.
[15:55:18 CEST] <viric> kepstin: aaah good to know. I never understood what happened with that number.
[15:55:30 CEST] <viric> kepstin: what about concatenating two MTS?
[15:55:59 CEST] <viric> kepstin: do you know any case where that number is used?
[15:56:29 CEST] <viric> By concatenating MTS, I mean two consecutive MTS that a camcorder created for a continuous shot
[15:57:01 CEST] <kepstin> viric: not enough info, it depends what the camcorder did with the timestamps
[15:58:39 CEST] <kepstin> when concatenating arbitrary mpeg-ts, there's often a timestamp discontinuity - they'll jump backwards or far forwards. I think there's some code somewhere (in ffmpeg cli?) to notice these with heuristics and make timestamps continuous.
[15:59:41 CEST] <kepstin> but on a camcorder, if they've only been split because of max file size limits? it might be a single continuous mpegts stream, so concatenating them would give you... a single continuous stream
[16:00:10 CEST] <viric> kepstin: only max size limits.
[16:00:16 CEST] <viric> ah ok.
[16:00:39 CEST] <kepstin> depends on the camcorder, I don't know what yours does.
[16:00:46 CEST] <viric> ok.
[16:00:49 CEST] <viric> How can I know? :)
[16:00:59 CEST] <viric> what to look at, with ffprobe?
[16:01:46 CEST] Action: kepstin isn't great with ffprobe, so there's probably an easier way to do this, but...
[16:02:34 CEST] <kepstin> if you look at the output of 'ffprobe -show_frames', which includes the frame timestamps, you can see if the timestamps at the start of one file follow from the timestamps at the end of another
[16:03:32 CEST] <kepstin> ffprobe shows the timestamps as they are in the file, it doesn't do any cleanup/rewriting like the ffmpeg cli tool.
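For example, something along these lines shows whether the timestamps at the end of one part line up with the start of the next (file names assumed; the exact field name may differ between ffprobe versions):

    ffprobe -v error -select_streams v:0 -show_entries frame=pkt_pts_time -of csv=p=0 part1.MTS | tail -n 5
    ffprobe -v error -select_streams v:0 -show_entries frame=pkt_pts_time -of csv=p=0 part2.MTS | head -n 5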
[16:38:01 CEST] <Bear10> anyone know why av_image_copy(frame->data, frame->linesize, data, linesize, pix_fmt, w, h); could be causing my memory to increase drastically (it's called on every frame in my custom filter)? I'm changing the data on the frame, which I understand should be freed later on
[16:38:26 CEST] <Bear10> i do av_frame_free(&in) and av_freep(&data[0]) to ensure that the originals are freed up
[16:39:59 CEST] <atomnuker> your input is refcounted and you don't unref it
[16:40:13 CEST] <Bear10> atomnuker: isn't that what the av_frame_free(&in) does?
[16:41:19 CEST] <Bear10> just trying to learn :)
[16:41:45 CEST] <atomnuker> custom filter?
[16:41:56 CEST] <Bear10> yeah
[16:41:59 CEST] <atomnuker> couldn't you just make the frame writeable?
[16:42:10 CEST] <atomnuker> and then write to it
[16:43:24 CEST] <Bear10> well i've tried with ff_load_image(frame->data, frame->linesize, ....) but had the same issue, and the w/h/pixfmt might be different since I'm reading a different image
[16:43:57 CEST] <Bear10> correction: ff_load_image (in->data, in->linesize, ...)
[16:45:27 CEST] <atomnuker> ff_ stuff is private
[16:46:35 CEST] <Bear10> how would you recommend reading a jpg from a file and putting it into the frame?
[16:47:10 CEST] <Bear10> I was pointed to https://github.com/jnicklas/ffmpeg_test/blob/master/test.c
[16:47:29 CEST] <atomnuker> its already a frame
[16:48:13 CEST] <Bear10> I understand that in is a frame, but it has a lot of information from the original frame which I want to completely replace with the information I read from a new jpg
[16:49:00 CEST] <Bear10> hence I was pointed to that link
[16:49:04 CEST] <Bear10> I have all of it working as desired apart from the memory issue
[16:50:20 CEST] <atomnuker> trace all allocations you make then, ffmpeg internals make sure to NULL any freed pointers
[16:50:52 CEST] <atomnuker> if you're working with 2 jpeg images then you'll need to make sure both avframes are unref'd
[16:51:40 CEST] <atomnuker> (if you have a filter doing frame_a + frame_b -> frame_c, otherwise for something like frame_a + frame_b -> frame_a/frame_b make sure whatever you leave behind gets unreffd somehow)
[16:55:31 CEST] <Bear10> atomnuker: hmm I thought that's what I was doing here's a pastebin https://pastebin.com/U2waqUqX of the filter_frame function
[16:55:40 CEST] <Bear10> not sure if you see me missing anything you mentioned
[16:56:11 CEST] <Bear10> I've also attempted adding av_frame_unref(...) before the av_free_frame(...)
[16:57:56 CEST] <atomnuker> oh wait, its an internal filter?
[16:58:05 CEST] <atomnuker> I thought it was external
[16:58:19 CEST] <atomnuker> Bear10: ask durandal_1707 in this case
[16:58:19 CEST] <Bear10> atomnuker: yeah it's a filter based on the writing_filters.txt
[16:59:12 CEST] <Bear10> thanks atomnuker for the help :)
[16:59:29 CEST] <Bear10> hello durandal_1707 if you get the chance could you lend me a quick hand?
[17:01:42 CEST] <atomnuker> are you planning to put the patch on the ml?
[17:03:45 CEST] <Bear10> ml?
[17:04:08 CEST] <atomnuker> mailing list so the filter gets upstreamed?
[17:04:15 CEST] <durandal_1707> Bear10: what about freeing data[1] and so on?
[17:04:33 CEST] <Bear10> durandal_1707: I tried freeing data[1] [2] and [3] and still had the same issue oddly enough
[17:04:51 CEST] <Bear10> atomnuker: if it turns out there's an actual issue and it's not me being a noob of course
[17:04:55 CEST] <Bear10> :)
[17:05:59 CEST] <durandal_1707> Bear10: are you on unix, use valgrind
[17:06:47 CEST] <Bear10> durandal_1707: the odd thing is that on mac OS X it doesn't seem to have this memory issue, but Linux Mint is where it's happening
[17:06:54 CEST] <Bear10> haven't heard of it i'll give it a look now
[17:07:14 CEST] <JEEB> valgrind is awesomesauce
[17:07:27 CEST] <JEEB> I would not be able to code as well as I do without the help of valgrind and static analyzers
[17:07:43 CEST] <Bear10> in that case installing it now :)
[17:08:47 CEST] <atomnuker> Bear10: you need to do it properly then and make the filter take 2 inputs rather than a frame and a name, look how e.g. vf_premultiply does it
[17:10:59 CEST] <Bear10> ok will have a look, it's going to be a bit to look into but i'll try and come back with some results / information
[17:11:01 CEST] <Bear10> thanks again guys :)
[17:12:48 CEST] <durandal_1707> Bear10: what does the filter actually do?
[17:13:09 CEST] <Hello_> Hi, I need help
[17:13:25 CEST] <Hello_> pls
[17:13:27 CEST] <Bear10> durandal_1707: it ignores the frame that's being input (it's in an overlay) and replaces it with a jpg of my choosing
[17:13:58 CEST] <Bear10> but I can't do it with normal cli commands or existing filters because it needs to be dynamic (changing on queue)
[17:14:10 CEST] <Bear10> like a news anchor talking about the weather a sun shows up
[17:14:55 CEST] <Bear10> when he/she talks about economic crisis a chart jpeg would show up
[17:14:55 CEST] <Bear10> etc etc
[17:15:08 CEST] <Bear10> on cue*
[17:15:26 CEST] <atomnuker> Bear10: vf_overlay already does that
[17:16:05 CEST] <durandal_1707> but overlay isnt dynamic
[17:16:21 CEST] <atomnuker> define dynamic?
[17:16:29 CEST] <atomnuker> you can change its input
[17:16:42 CEST] <atomnuker> you can even toggle it using the timeline since its supported by it
[17:17:01 CEST] <Bear10> atomnuker: does it? i tried looking everywhere and couldn't find information anywhere
[17:17:18 CEST] <Bear10> i don't mean timeline
[17:17:26 CEST] <Bear10> because an offset isn't the same as "on cue"
[17:17:36 CEST] <durandal_1707> you would need to just write a filter which would take inputs as commands and copy them to the output
[17:17:53 CEST] <atomnuker> Bear10: what offset?
[17:18:06 CEST] <durandal_1707> Hello_: help what?
[17:18:33 CEST] <Bear10> atomnuker: I've heard of -itoffset or something like that, can't find it right now
[17:18:41 CEST] <Bear10> for an overlay to appear / disappear after X seconds
[17:19:03 CEST] <atomnuker> vf_overlay does that
[17:19:09 CEST] <Bear10> that's not what I want
[17:19:11 CEST] <atomnuker> you just need to use the api
[17:19:16 CEST] <durandal_1707> overlay approach looks too complicated
[17:19:23 CEST] <atomnuker> and feed in the second source as you need
[17:19:39 CEST] <atomnuker> why do you feel you need to cram everything into the ffmpeg command line tool?
[17:20:05 CEST] <Bear10> atomnuker: I'm new to ffmpeg and the documentation available isn't the easiest in the world to understand the sourcecode :)
[17:20:09 CEST] <atomnuker> the API is there if it doesn't do what you need it to do
[17:21:07 CEST] <durandal_1707> atomnuker: he would need to create a new filtergraph for every new image
[17:21:18 CEST] <atomnuker> durandal_1707: not necessarily
[17:22:12 CEST] <durandal_1707> there is no filter which concatenates various overlays over video
[17:22:22 CEST] <atomnuker> just make vf_overlay's second input the same as the first input and do compositing on it using some toolkit
[17:22:42 CEST] <atomnuker> using the alpha channel to mask stuff out of the second input
[17:22:57 CEST] <atomnuker> *same size that is
[17:23:04 CEST] <durandal_1707> yes, that's what i'm saying for the overlay input
[17:23:51 CEST] <durandal_1707> filter would just load images dynamically and create transparent pixels when needed
[17:24:07 CEST] <Bear10> durandal_1707: that's what I'm attempting to do
[17:24:11 CEST] <atomnuker> not filter, the user himself needs to do that
[17:24:25 CEST] <atomnuker> a filter would be too inflexible
[17:24:53 CEST] <durandal_1707> well not really, there are commands for filters
[17:25:36 CEST] <atomnuker> yes, but "show this logo for 10 seconds" and then "show another for 4 seconds" is not something a filter needs to do
[17:26:32 CEST] <atomnuker> its something users need to do
[17:27:33 CEST] <Bear10> when you say "using some toolkit" could you give an example of one?
[17:27:36 CEST] <durandal_1707> but they want to use cli and not c
[17:28:01 CEST] <Bear10> or where in the code I should be looking at if not a custom filter
[17:41:19 CEST] <atomnuker> durandal_1707: the cli gets far too inflexible for something like this, not to mention you can't do frame-perfect sync
[17:41:51 CEST] <atomnuker> its not the proper way of doing things
[17:44:24 CEST] <atomnuker> depending on what you need to do you might want to run 2 filter instances in your program
[17:44:53 CEST] <atomnuker> so one filter to produce what you want to overlay and another to overlay it onto your main image
[17:46:35 CEST] <atomnuker> so to load an image you can either do it the standard way and use lavc + lavf to get a frame or vf_movie to do it for you
[17:47:09 CEST] <Bear10> well that's basically what we have isn't it? 1) we have our filter named "news" where we grab the frame, place the image etc, and then our output goes 2) into the overlay filter
[17:47:23 CEST] <atomnuker> then if you needed to draw some text somewhere you'd use the drawtext filter on the resulting frame
[17:47:33 CEST] <Bear10> I mean we have the desired output, the problem is in the memory
[17:47:34 CEST] <atomnuker> Bear10: that's not the issue
[17:47:40 CEST] <atomnuker> the issue is control
[17:47:48 CEST] <atomnuker> how do you control what shows up and when?
[17:48:18 CEST] <Hello_> aaa
[17:48:28 CEST] <atomnuker> you have no way of controlling this via the command line
[17:48:39 CEST] <Hello_> duranda1_1707
[17:48:45 CEST] <Bear10> atomnuker: we're not trying to control the image via the command line though
[17:49:06 CEST] <Bear10> but there are options and I saw there was a way to dynamically change options
[17:49:06 CEST] <atomnuker> what, you just load some random files from a directory whenever they crop up?
[17:50:04 CEST] <Bear10> currently for testing and prototype purposes we're doing it with a txt file, but I saw we could maybe use zmq
[17:50:09 CEST] <atomnuker> you can't change any options via the command line
[17:50:19 CEST] <atomnuker> after you run it, that's it, its done
[17:50:31 CEST] <atomnuker> if you need to change options you need to use the API
[17:51:04 CEST] <atomnuker> during runtime that is
[17:51:13 CEST] <Bear10> when you say API to what are you referring to?
[17:51:23 CEST] <atomnuker> the libavfilter API
[17:51:42 CEST] <Bear10> I linked you to the frame_filter code we have, we're using the API
[17:52:27 CEST] <Bear10> as mentioned we have the desired output, everything is working as we want, we're just having an issue with a memory leak with the frame and buffer with the av_image_copy
[17:52:55 CEST] <Bear10> it happens on Linux Mint but not OS X
[17:52:58 CEST] <atomnuker> fine, you can sort it out yourselves, though if you try to upstream it I'll object to having it in the codebase
[17:53:41 CEST] <atomnuker> its in no way something that should be encouraged, writing filters because you can't do things via the command line
[17:54:05 CEST] <Bear10> you have a doc that says writing_filters.txt
[17:55:01 CEST] <Bear10> you're contradicting yourself by saying "it shouldn't be done", "use the API", "don't write a filter" "you have to use CLI", "you can't change options via CLI"
[17:55:28 CEST] <atomnuker> I never said you have to use the cli, I said the opposite of that
[17:55:45 CEST] <Bear10> so I have a filter... so what's the issue?
[17:55:49 CEST] <atomnuker> it sucks
[17:55:59 CEST] <Bear10> and you have proposed nothing to fix it
[17:56:00 CEST] <atomnuker> its a hack
[17:56:18 CEST] <atomnuker> nevermind that it doesn't work, its conceptually wrong
[17:58:34 CEST] <Bear10> it does work though
[17:59:25 CEST] <Bear10> on OS X it works fine, Linux Mint it has a leak from an av_image_copy which is part of the API, I'm attempting to use av_frame_free (part of the API) and av_freep (also part of the API) but perhaps I'm missing something else
[18:00:38 CEST] <bencoh> considering you're running the very same code on both platforms, I'd try with another memory allocator on linux (say, jemalloc) just to make sure
[18:02:28 CEST] <Bear10> ok will give that a try thanks bencoh
[18:07:19 CEST] <Blubberbub_> is file metadata (artist, title, ...) for audio only included in the first frame or in every frame?
[19:06:54 CEST] <markmedes2> Fellas, I need to scan some files for errors, I'm using ffmpeg -v error -i ... Is there a way to include the timestamp of the error in the log?
[20:38:10 CEST] <pgorley> to convert an audio file to an array of samples (int16_t), do i need to decode the audio using the send_packet/receive_frame api?
[21:52:21 CEST] <viric> kepstin: thank you
[21:52:32 CEST] <viric> kepstin: in fact I want to program an audio track aligner
[21:52:36 CEST] <viric> does that exist at all?
[21:52:59 CEST] <kepstin> I'm not sure what you mean by "audio track aligner"
[21:55:00 CEST] <Blubberbub_> so you have like a similar sound at the beginning and want to mix all audio tracks together and shift them, so the signal is aligned?
[21:56:25 CEST] <Blubberbub_> for example when recording something with multiple cameras people often "clap" before doing anything, so they can synchronize everything later?
[21:59:31 CEST] <viric> kepstin: I have two recording devices for the same scene, and I want them under the same pts
[22:00:20 CEST] <viric> kepstin: they have slightly different clocks, additionally.
[22:00:52 CEST] <viric> kepstin: about 1s drift per hour
[22:00:56 CEST] <kepstin> sounds annoying to deal with, if they're on different clocks :/
[22:01:36 CEST] <viric> right.
[22:01:41 CEST] <viric> only slightly different
[22:02:12 CEST] <Blubberbub_> Do you want to do that live?
[22:02:55 CEST] <kepstin> but yeah, aligning those tracks in post really wouldn't be that hard as long as you have some identifiable synchronization points near the start and near the end of each track, so you can find what pts in each track the synch points are at
[22:03:01 CEST] <pgorley> does "ffmpeg -i <file> -f s16le -acodec pcm_s16le output.raw" extract the raw audio samples from the file?
[22:03:15 CEST] <kepstin> then just resample to fix the length, and apply an offset to align them
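A rough sketch of that idea for the second recording's audio (the tempo factor and delay are invented placeholders you would derive from your own sync points, not measured values):

    ffmpeg -i cam2.mts -vn -af "atempo=1.000278,adelay=1500|1500" -c:a pcm_s16le cam2_aligned.wav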
[22:04:03 CEST] <viric> kepstin: right. But usually the identifiable points are hard to find too. I hope I can write something that will find the correlations/offsets/atempo factors for me
[22:04:23 CEST] <Blubberbub_> there is a python script on github somewhere that does something like that
[22:04:40 CEST] <viric> Blubberbub_: can you find it?
[22:04:45 CEST] <kepstin> getting into some fun audio analysis stuff for that, yeah. Out of my area of experience :)
[22:05:20 CEST] <Blubberbub_> viric, https://github.com/bbc/audio-offset-finder
[22:05:34 CEST] <Blubberbub_> it only works on python2, though
[22:05:51 CEST] <viric> https://github.com/protyposis/AudioAlign ?
[22:06:00 CEST] <kepstin> the bbc one only appears to do the offset, not the scaling to compensate for clock drift
[22:06:10 CEST] <viric> what bbc?
[22:06:20 CEST] <viric> ah bbc. ok
[22:14:08 CEST] <viric> mh that AudioAlign would be hard to beat
[22:14:12 CEST] <viric> but it's for Windows.
[22:14:59 CEST] <ultrav1olet> I cannot seem to find any documentation on video stream framerate information. Could someone please document all the values because Googling hasn't netted anything interesting.
[22:15:14 CEST] <ultrav1olet> I'm talking about: " 25 fps, 25 tbr, 1k tbn, 50 tbc (default)"
[22:15:35 CEST] <ultrav1olet> these things are absolutely cryptic and 1k looks like 1000 which doesn't make any sense
[23:08:30 CEST] <markmedes2> Is there a way to scan a file for error and print the timestamp of the error when ffmpeg finds one?
[23:40:19 CEST] <Mista_D> can a format of the source file be passed to output, regardless of file extension?
[23:57:55 CEST] <ultrav1olet> The source video which contains 135475 frames became a video with 132725 frames. What gives?
[23:58:30 CEST] <ultrav1olet> I've just reencoded video - that's all - no fancy options/parameters/anything
[00:00:00 CEST] --- Thu Aug 10 2017