[Ffmpeg-devel-irc] ffmpeg.log.20170628
burek
burek021 at gmail.com
Thu Jun 29 03:05:01 EEST 2017
[00:02:24 CEST] <kepstin> ultrav1olet: another thing to try is to use mkvmerge (from mkvtoolnix) to append tracks from one file to the other, rather than using ffmpeg to try to make a single video stream
[00:03:11 CEST] <kepstin> something like mkvmerge -o combined.mkv part1.h265.mkv + part2.h265.mkv
[00:03:17 CEST] <ultrav1olet> I'm already trying this option
[00:03:20 CEST] <nicolas17> what should I use to do simple video transitions?
[00:03:37 CEST] <ultrav1olet> nicolas17: non linear video editor
[00:04:12 CEST] <kepstin> you can do simple-looking transitions with ffmpeg filters but actually writing the command lines? Not so simple :)
[00:04:24 CEST] <ultrav1olet> kepstin: Warning: The track number 0 from the file 'part2.h265.mkv' can probably not be appended correctly to the track number 0 from the file 'part1.h265.mkv': The codec's private data does not match (lengths: 1825 and 1822). Please make sure that the resulting file plays correctly the whole time. The author of this program will probably not give support for playback issues with the resulting file.
[00:04:47 CEST] <nicolas17> yeah I think the transitions I need are totally possible with ffmpeg video filters
[00:05:05 CEST] <kepstin> ultrav1olet: well, then I guess that file will probably show the same issue :/
[00:05:34 CEST] <nicolas17> but getting video A to play normally for N seconds, then do the transition to video B, then play video B, and sync them at the point I want... that would need a messy filtergraph string
[00:05:57 CEST] <nicolas17> oh I could use MLT, I have used it before, forgot about it...
[00:06:00 CEST] <ultrav1olet> kepstin: no dice. the output is similarly broken. seems like you cannot merge two HEVC streams with different encoding parameters. This doesn't sound right.
[00:06:31 CEST] <kepstin> ultrav1olet: well, the same's true of h264, it just seems like hevc is a little more touchy about it.
[00:06:47 CEST] <JEEB> I have a feeling that mkvmerge might just be throwing away one set of parameter sets
[00:06:56 CEST] <JEEB> and AVC concatenation and HEVC concatenation works the same way
[00:07:04 CEST] <JEEB> make sure you have all parameter sets and start on a RAP
[00:07:57 CEST] <ultrav1olet> first file encoding string: -vf fps=30 -an -c:v libx265 -preset veryslow -x265-params keyint=600:min-keyint=30:bframes=16:crf=20:no-sao=1 out.mkv
[00:08:15 CEST] <ultrav1olet> second file encoding string: -vf fps=30 -an -c:v libx265 -preset veryslow -tune grain -x265-params keyint=600:min-keyint=30:bframes=16:crf=20
[00:08:55 CEST] <ultrav1olet> as soon as MPV/ffplay reaches the second file, the output turns to utter garbage
[00:09:03 CEST] <JEEB> that means the parameter sets aren't there
[00:09:12 CEST] <JEEB> parameter sets are what initialize the decoder
[00:09:23 CEST] <JEEB> and if they're different enough that just doesn't work
[00:09:26 CEST] <ultrav1olet> in that case merging these files is not even possible
[00:09:34 CEST] <JEEB> ...
[00:09:36 CEST] <JEEB> it is
[00:09:37 CEST] <kepstin> which means that the ffmpeg concat demuxer isn't handling hevc in mkv correctly
[00:09:38 CEST] <ultrav1olet> which totally sucks
[00:10:14 CEST] <ultrav1olet> I've always thought that I frames are more or less independent
[00:10:30 CEST] <JEEB> PARAMETER SETS DANG IT
[00:10:31 CEST] <ultrav1olet> only P and B are dependent
[00:11:06 CEST] <JEEB> now the perfect way of doing that would be to create an mp4 with two sets of sample description whatever it was called, which contain the parameter sets. and then map the samples to the correct one
[00:11:07 CEST] <ultrav1olet> JEEB: should I file a bug report?
[00:11:10 CEST] <JEEB> no
[00:11:16 CEST] <JEEB> unless the thing is a bug in FFmpeg
[00:11:25 CEST] <kepstin> JEEB: there are two mkv files containing hevc encoded by ffmpeg. they both play fine. After being concatenated by ffmpeg using the concat demuxer, the file does not play back
[00:11:31 CEST] <ultrav1olet> mkvtoolnix produces the same result
[00:11:54 CEST] <JEEB> kepstin: which means that both fuck the second parameter sets somewhere in the middle of it
[00:12:08 CEST] <JEEB> which is probably why that warning happens, that if the parameter sets are different enough you will be fucked
[00:12:23 CEST] <kepstin> looks like mkvtoolnix drops it, yeah, and I guess ffmpeg does too only silently.
[00:12:33 CEST] <JEEB> also the concat shit in lavf is what it is
[00:12:52 CEST] <JEEB> I've only used the lavfi concat because that IIRC decoded the things separately and thus not being affected by bullshit
[00:13:22 CEST] <JEEB> (of course doesn't work when you want to concat already encoded streams so welp)
[00:13:35 CEST] <ultrav1olet> JEEB: can I force ffmpeg to rebuild the output for each new I frame sequence without reencoding?
[00:14:12 CEST] <kepstin> hmm. so would extracting the video tracks to elementary streams then concatenating those work, maybe? :/
[00:14:27 CEST] <nicolas17> would it work to "transcode" with -vcodec copy?
[00:14:28 CEST] <JEEB> that would only work to a limit
[00:14:38 CEST] <JEEB> as in what kepstin said
[00:14:50 CEST] <ultrav1olet> nicolas17: how?
[00:15:01 CEST] <JEEB> nicolas17: no
[00:15:17 CEST] <JEEB> nicolas17: seemingly whatever they used for concat stripped the latter parameter sets
[00:15:25 CEST] <JEEB> copying that shit doesn't help
[00:15:38 CEST] <nicolas17> so you need actual transcoding?
[00:15:47 CEST] <JEEB> no
[00:15:55 CEST] <kepstin> nicolas17: wouldn't help, it's the concat prior to the transcode that's breaking it
[00:16:09 CEST] <ultrav1olet> nope, never again - encoding with preset veryslow is very time demanding ;-)
[00:16:21 CEST] <kepstin> I mean, you could use two separate inputs and the concat filter, do a full decode-reencode, but that defeats the point of this whole thing
[00:16:48 CEST] <ultrav1olet> 0.8fps or so for 1080p source on my PC ;-)
[00:16:55 CEST] <JEEB> that's fast enough
[00:16:58 CEST] <nicolas17> I guess if you use the concat filter, just like with any filter, you don't get to use vcodec copy anymore?
[00:17:11 CEST] <JEEB> nicolas17: no shit, sherlock
[00:17:24 CEST] <ultrav1olet> My current concat is simple: ffmpeg -f concat -i list.txt -c copy out.mkv
[00:17:26 CEST] <JEEB> anyways, what you most likely need is to make every RAP have the parameter sets
[00:17:38 CEST] <JEEB> then you concatenate that
[00:17:49 CEST] <ultrav1olet> that sounds like Chinese to my ears
[00:18:10 CEST] <JEEB> that way no matter where you seek you should get the parameter sets matching the content first
[00:18:18 CEST] <JEEB> and then the rest
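For reference, the list.txt in ultrav1olet's concat-demuxer command above is just one file directive per line; a minimal sketch, assuming both parts sit in the same directory as the list:

    file 'part1.h265.mkv'
    file 'part2.h265.mkv'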
[00:18:31 CEST] <iive> JEEB: actually i was wondering if x264 could have a mode where it takes an extradata, uses it to configure itself and encode the input frame,
[00:18:45 CEST] <iive> aka something that would help editing
[00:19:11 CEST] <JEEB> the perfect way is still what I noted there up in the log regarding an mp4 with multiple pieces of initialization data and then mapping the parts of the file to the correct set
[00:19:18 CEST] <JEEB> but I'm not sure how simple that is to write
[00:19:35 CEST] <JEEB> for decoding I know libav added support for that, and FFmpeg might have merged it already
[00:19:49 CEST] <JEEB> the less perfect alternative is to just copy the parameter sets all the way
[00:19:52 CEST] <JEEB> to every RAP
[00:20:01 CEST] <JEEB> as in, the matching ones
[00:20:05 CEST] <iive> that -> multiple extradata mappings?
[00:20:18 CEST] <JEEB> iive: yea, per-sample extradata stuff in mov/mp4
[00:20:25 CEST] <JEEB> I remember koda coding that up
[00:20:41 CEST] <JEEB> but it's for decoding, so it doesn't help *creation* of such files :P
[00:21:21 CEST] <iive> i'm sure somebody would write a program that concats a lot of different .mp4 into such rap
[00:21:35 CEST] <nicolas17> what's a RAP?
[00:21:43 CEST] <JEEB> Random Access Point/Picture
[00:21:47 CEST] <kepstin> of course, in this example we have here, it's all in mkv files, not mp4...
[00:22:07 CEST] <JEEB> matroska I think also supports it but I don't think much supports it :P
[00:22:14 CEST] <JEEB> as in, multiple extradata blocks
[00:22:30 CEST] <JEEB> thus as I noted, the simplest way is to somehow duplicate the parameter sets
[00:22:37 CEST] <JEEB> so that they get applied to every RAP
[00:22:46 CEST] <JEEB> dunno if mpegts muxer does it
[00:22:58 CEST] <nicolas17> I used to know how to use MLT/melt
[00:22:59 CEST] <nicolas17> :/
[00:23:00 CEST] <JEEB> then you concat that shit and hope for the best
[00:23:22 CEST] <JEEB> if the concatenation didn't do anything extra "smart"
[00:23:22 CEST] <kepstin> JEEB: that seems like the sort of thing that could be done in a bsf in ffmpeg, maybe?
[00:23:48 CEST] <JEEB> then you should have the duplicated things still in your thing
[00:23:57 CEST] <JEEB> and then it /should/ work, bar bugs in people's players
[00:24:45 CEST] <JEEB> kepstin: I... guess?
[00:27:20 CEST] <JEEB> anyways, both AVC and HEVC being hard to reconfigure or concat is just bollocks, stuff like broadcast changing the resolution is pretty much just that
[00:27:26 CEST] <kepstin> but in the mean time, ultrav1olet, I guess you could try the good old "convert both files to mpegts, concatenate those, ???, profit" trick :/
[00:27:57 CEST] <JEEB> the problem is that the tools do tend to try and optimize, which then goes bonkers because you fed two different clips with different param sets
[00:28:15 CEST] <JEEB> kepstin: yea and trying to make sure it duplicates the parameter sets at every RAP
[00:28:22 CEST] <JEEB> otherwise seeks through the line can be "fun"
[00:28:27 CEST] <ultrav1olet> I'd love to but as far as I remember mpegts doesn't support HEVC
[00:28:39 CEST] <ultrav1olet> and it's only for AVC
[00:28:40 CEST] <JEEB> then what are these broadcast samples I have on my hands?
[00:28:47 CEST] <kepstin> ... hah, no.
[00:28:55 CEST] <JEEB> or shall I link the DVB spec?
[00:29:02 CEST] <ultrav1olet> I believe you ;-)
[00:29:07 CEST] <ultrav1olet> how that can be done?
[00:29:15 CEST] <kepstin> I would have expected mpegts to be basically the first 'container' to be able to hold hevc?
[00:29:24 CEST] <furq> -i foo.mp4 -c copy -map 0 foo.ts
[00:29:44 CEST] <furq> or mkv or whatever
[00:29:46 CEST] <JEEB> let's see if the muxer or something else has an option to duplicate the parameter sets
[00:30:08 CEST] <ultrav1olet> ffmpeg -i part1.h265.mkv -c copy -bsf h264_mp4toannexb output1.ts => Codec 'hevc' (174) is not supported by the bitstream filter 'h264_mp4toannexb'. Supported codecs are: h264 (28)
[00:30:33 CEST] <kepstin> ultrav1olet: well, yeah, that's the h264 one, and you have h265...?
[00:30:46 CEST] <JEEB> just try without manual bsfs :P
[00:30:54 CEST] <JEEB> I /think/ that got automatized lately
[00:30:55 CEST] <ultrav1olet> yep ;-) I've been talking about HEVC all along
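A minimal sketch of the mpegts round trip discussed above, assuming the two parts are part1.h265.mkv and part2.h265.mkv, and keeping the result as mpegts since (as explained further down) mkv/mp4 would again collapse the parameter sets into a single extradata block. On recent builds the annex B bitstream filter is inserted automatically when muxing to mpegts, so the explicit -bsf:v may be redundant:

    ffmpeg -i part1.h265.mkv -c copy -bsf:v hevc_mp4toannexb part1.ts
    ffmpeg -i part2.h265.mkv -c copy -bsf:v hevc_mp4toannexb part2.ts
    ffmpeg -i "concat:part1.ts|part2.ts" -c copy combined.ts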
[00:31:09 CEST] <ArsenArsen> What homebrew formula do I need to get the development headers>
[00:31:16 CEST] <ArsenArsen> ? not > >.>
[00:31:36 CEST] <JEEB> ArsenArsen: ask whomever maintains the homebrew formula?
[00:31:58 CEST] <JEEB> although to be honest you could just build a basic FFmpeg yourself with a custom prefix and utilize pkg-config to build against it :P
[00:32:09 CEST] <ArsenArsen> I expected someone to know, but I don't have a Mac
[00:32:24 CEST] <JEEB> yes, someone from /homebrew/
[00:32:59 CEST] <ultrav1olet> it worked!!!
[00:33:00 CEST] <JEEB> how downstreams (be it a lunix distro or anything else) package is not really any of upstream's concern
[00:33:36 CEST] <ArsenArsen> Good point. Thanks
[00:34:01 CEST] <JEEB> also did you check that the thing didn't just install the stuff?
[00:34:26 CEST] <JEEB> PKG_CONFIG_PATH=/path/to/install/prefix/lib/pkgconfig pkg-config --cflags libavcodec
[00:34:29 CEST] <JEEB> for example
[00:34:47 CEST] <JEEB> that path being where homebrew installed the thing
[00:34:58 CEST] <JEEB> (and /lib/pkgconfig where the pc files get installed
[00:35:39 CEST] <JEEB> if locate or find works on OS X, `locate libavcodec.pc` or `find / -iname 'libavcodec.pc'` should work
[00:35:50 CEST] <JEEB> the find probably needs root :P
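A minimal sketch of the custom-prefix route JEEB mentions, assuming a source checkout and a hypothetical $HOME/ffmpeg-prefix install location:

    ./configure --prefix="$HOME/ffmpeg-prefix" && make && make install
    PKG_CONFIG_PATH="$HOME/ffmpeg-prefix/lib/pkgconfig" pkg-config --cflags --libs libavcodec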
[00:36:08 CEST] <nicolas17> use Spotlight! :P
[00:36:47 CEST] <furq> find / ;_;
[00:36:56 CEST] <furq> what is this, stackoverflow?
[00:37:04 CEST] <JEEB> furq: well I have no fucking idea where homebrew installs its shit
[00:37:15 CEST] <furq> it's either /usr/local or possibly /home
[00:37:16 CEST] <JEEB> and I did note locate first
[00:37:25 CEST] <ArsenArsen> Hmm. I'll look for it/ Thanks
[00:37:26 CEST] <JEEB> if that works on mac
[00:37:35 CEST] <furq> i assume it does but i doubt it's installed by default
[00:37:40 CEST] <furq> and updatedb takes even longer than find /
[00:37:49 CEST] <nicolas17> I think it's installed by default
[00:37:56 CEST] <nicolas17> but there is no cronjob running updatedb regularly
[00:38:40 CEST] <furq> https://github.com/Homebrew/homebrew-core/blob/master/Formula/ffmpeg.rb
[00:39:01 CEST] <furq> the answer is "whatever your usual prefix is"
[00:39:23 CEST] <furq> https://github.com/Homebrew/homebrew-core/blob/master/Formula/ffmpeg.rb#L106-L146
[00:39:26 CEST] <furq> wow...ruby is so expressive
[00:39:51 CEST] <nicolas17> lol
[00:43:13 CEST] <JEEB> ultrav1olet: since I see you made a bug report, matroska and mp4 are formats where usually the parameter sets are in a single place only and separated from the video stream. mpegts is one where there's no "separate" part. it's a "streaming" format (as in, optimized for reading A=>B) with "just" the streams as-is.
[00:43:45 CEST] <JEEB> when you are using the dumb concat feature with matroska the thing opens the demuxer and reads the extradata off the first file and is happy. after that it proceeds to reading through the file.
[00:43:51 CEST] <JEEB> and the second file
[00:44:42 CEST] <JEEB> with mpegts there's no such stuff as it's just throwing data through
[00:45:02 CEST] <JEEB> which is why it works even with the "stupid" mode that the concat thing does
[00:45:50 CEST] <androclu`> nicolas17: just an update.. ever since i took the top off the computer and let it breathe, it seems to not segfault. funk-a-delic.
[00:46:39 CEST] <androclu`> nicolas17: i think ffmpeg just stresses the cores. still don't know why it should happen *lately*, though, when it did not for the last year or so.
[00:47:37 CEST] <androclu`> nicolas17: however, i think i am going A) to get a bigger fan for the box or set a massive room fan next to it, or B) figure out how to UN-overclock this board, or C) get a small 'fridge or freezer, and put my computer inside!
[00:47:42 CEST] <JEEB> ultrav1olet: also I do really hope that x265 is giving you something that x264 isn't :P in my tests it really wasn't in any way clear cut that it would.
[00:47:52 CEST] <JEEB> although I haven't tested recently
[00:49:26 CEST] <ultrav1olet> JEEB: yep, I thought a bug report to remember and fix is pertinent in this situation ;-)
[00:49:49 CEST] <ultrav1olet> this chat session might be easily forgotten and never taken care of ;-)
[00:50:18 CEST] <ultrav1olet> one last question: how can I cut a file into two parts _exactly_?
[00:50:43 CEST] <JEEB> I will raise a toast if anyone actually takes that use case up for grabs and implements it properly in the containers that need proper separation of parameter sets
[00:50:50 CEST] <ultrav1olet> I've just discovered that I've spent literally 6 hours encoding while the second file misses at least 5 frames :(
[00:51:15 CEST] <ultrav1olet> so -ss and -t options are not quite precise it seems
[00:51:30 CEST] <JEEB> well -ss will only be as precise as the closest RAP
[00:51:42 CEST] <JEEB> -t should be precise but you've got MPEG-TS now so all bets are off
[00:52:03 CEST] <ultrav1olet> hm
[00:52:25 CEST] <ultrav1olet> and how can I find that RAP position using -ss/-t?
[00:52:54 CEST] <ultrav1olet> sounds like an impossible task unless I can operate with frames
[00:52:57 CEST] <JEEB> well ffmpeg.c only lets you start the thing at a fucking RAP, that's what I fucking meant with what I said
[00:53:13 CEST] <JEEB> so that's why your goddamn -ss is not exact, in addition to the fact thay you've now got MPEG-TS
[00:53:31 CEST] <JEEB> which may or may not fuck your shit up because it's a format without an index
[00:54:09 CEST] <JEEB> now, if you know the bytewise location of your RAP and parameter sets before it, you can just use dd with 188 byte chunks because that's what fucking MPEG-TS is
[00:54:35 CEST] <JEEB> but of course you do not know that is so
[00:54:47 CEST] <ultrav1olet> :-)
[00:55:48 CEST] <ultrav1olet> I'm now thinking of just reinserting a few forsaken frames ;-) sounds like an easier thing to do
[01:02:44 CEST] <ultrav1olet> so how can I encode frames 8..29 from the source video? ;-)
[01:03:33 CEST] <DHE> select filter?
[01:03:34 CEST] <furq> -vf trim
[01:03:40 CEST] <furq> !filter trim
[01:03:40 CEST] <nfobot> furq: http://ffmpeg.org/ffmpeg-filters.html#trim
[01:06:55 CEST] <ultrav1olet> ffmpeg -i part12.mp4 -vf trim=8:29 doesn't seem to work - ffmpeg exits immediately
[01:07:09 CEST] <ultrav1olet> video:0kB audio:0kB subtitle:0kB other streams:0kB global headers:2kB muxing overhead: unknown
[01:10:04 CEST] <furq> that's seconds, not frames
[01:10:15 CEST] <furq> you want trim=start_frame=8:end_frame=29
[01:10:59 CEST] <furq> and probably also ,setpts=PTS-STARTPTS
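Putting that together for the frames-8-to-29 case, a minimal sketch reusing the earlier x265 settings; note that trim's end_frame is exclusive, so keeping frames 8 through 29 takes end_frame=30:

    ffmpeg -i part12.mp4 -an -vf "trim=start_frame=8:end_frame=30,setpts=PTS-STARTPTS" \
        -c:v libx265 -preset veryslow -x265-params keyint=600:min-keyint=30:bframes=16:crf=20 frames_8_29.mkv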
[01:12:09 CEST] <nicolas17> blargh
[01:12:47 CEST] <nicolas17> I tried to use MLT to do the video transition I wanted but the composite transition doesn't do antialiasing >_<
[01:13:38 CEST] <nicolas17> it moves the video by whole pixels only
[01:15:37 CEST] <ultrav1olet> thanks everyone
[01:50:31 CEST] <nicolas17> okay
[01:50:43 CEST] <nicolas17> so the reason why I couldn't get these two timelapses in sync is that the interval isn't even consistent
[01:50:49 CEST] <nicolas17> in theory it was 2fps
[01:50:58 CEST] <nicolas17> but I see many seconds with only one picture, according to EXIF
[01:51:33 CEST] <nicolas17> no idea if the framerate is even constant
[01:51:45 CEST] <nicolas17> since I don't have sub-second timestamps /o\
[02:03:20 CEST] <nicolas17> and sure enough both cameras have different speeds
[02:04:43 CEST] <JodaZ> yeh, video is such a shitshow, isn't it
[02:05:16 CEST] <nicolas17> http://sprunge.us/GMQQ this is the EXIF timestamp of the photos
[02:06:04 CEST] <nicolas17> any clever idea on how to estimate plausible milliseconds for them?
[02:58:15 CEST] <FishPencil> Is there any better explanation of the hqdn3d settings? The docs are pretty basic https://ffmpeg.org/ffmpeg-filters.html#hqdn3d-1
[03:38:39 CEST] <FishPencil> Uhhh, how is it that -preset veryslow is larger than the same input sample with the same arguments except for -preset medium with x265?
[03:39:31 CEST] <FishPencil> I thought -preset was a way to control how compressed it was (i.e. filesize)
[03:44:08 CEST] <nicolas17> FishPencil: depends on your other options
[03:44:25 CEST] <FishPencil> All other options are the same
[03:44:36 CEST] <nicolas17> if you asked for a certain bitrate, veryslow will probably give you a result with a similar (maybe bigger!) file size but better quality
[03:45:11 CEST] <FishPencil> Simply '-c:v libx265 -crf 22 -preset medium' vs '-c:v libx265 -crf 22 -preset veryslow'
[03:45:36 CEST] <nicolas17> ah crf
[03:45:39 CEST] <nicolas17> dunno then
[03:46:33 CEST] <FishPencil> Seems quite odd. It took way longer, but ended up being bigger...
[03:46:42 CEST] <FishPencil> Seems counter productive
[03:47:08 CEST] <nicolas17> crf asks for constant "quality", so with same crf and slower preset it should have been smaller...
[03:56:18 CEST] <FishPencil> Maybe the "veryslow" preset decided it actually needed more bits
[04:11:11 CEST] <furq> FishPencil: that isn't what -preset means
[04:11:27 CEST] <furq> the meaning of -crf changes depending on other encoding options
[04:11:35 CEST] <furq> so you'll normally get roughly the same size for any preset
[04:11:40 CEST] <furq> at least with x264, i assume x265 is the same
[04:11:53 CEST] <FishPencil> furq: So what's actually happening here?
[04:12:01 CEST] <furq> i said "normally"
[04:12:08 CEST] <furq> crf isn't an exact science
[04:12:21 CEST] <furq> it is perfectly normal for veryslow to come out bigger than medium
[04:12:28 CEST] <furq> it'll look better
[04:13:34 CEST] <FishPencil> furq: So it's probably just looking further ahead to see what can be done, and thus might find that the same bitrate for medium isn't enough?
[04:14:43 CEST] <furq> not really
[04:14:59 CEST] <furq> crf will try to pick the best quantiser for any given scene
[04:15:11 CEST] <furq> if you use higher-quality settings it'll pick a lower quantiser (i.e. higher quality)
[04:15:26 CEST] <furq> on the basis that it expects to compress better
[04:15:51 CEST] <furq> that's my limited understanding of it anyway
[04:15:57 CEST] <furq> i admit it is completely unintuitive
[04:23:50 CEST] <FishPencil> -b:a is for all channels right? so for 5.1 (6 channel), I'd want 384k probably for 64k per channel?
[04:27:18 CEST] <furq> yeah
[04:27:34 CEST] <furq> you probably don't need that much for aac though
[04:27:58 CEST] <FishPencil> Opus
[04:28:06 CEST] <furq> yeah 256k should probably be fine
[04:28:36 CEST] <furq> xiph recommend 128-256 for 5.1
[04:29:44 CEST] <FishPencil> Wow, 128 for 5.1? that must be some compression
[04:30:36 CEST] <furq> bear in mind that that's vbr
[04:30:44 CEST] <furq> and it's not abr so it might over/undershoot by a lot
[04:31:50 CEST] <FishPencil> Curious if FFmpeg uses surround-sound bitrate allocation that Opus provides
[04:32:01 CEST] <furq> it does if you use libopus
[04:32:05 CEST] <FishPencil> furq: Is -b:a still the best way to call it?
[04:32:15 CEST] <furq> that's the only way of using it afaik
[04:32:29 CEST] <furq> opus is a bit weird in that regard
[04:33:07 CEST] <furq> setting the bitrate to 128k will use settings that averaged out to 128kbps across their test corpus
[04:33:15 CEST] <furq> it doesn't make any attempt to actually hit 128k
[04:33:26 CEST] <furq> at least afaik
[04:34:02 CEST] <furq> there are true abr and cbr modes but they're presumably intended for streaming
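A minimal sketch of that, assuming a 5.1 source whose channel layout libopus accepts and a target of 256 kb/s total across all channels (filenames are placeholders):

    ffmpeg -i input.mkv -c:v copy -c:a libopus -b:a 256k output.mkv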
[04:35:21 CEST] <FishPencil> furq: You said earlier that x265's -preset veryslow will be better quality at the same crf. Does that mean something like: -crf 22 -preset medium = -crf 24 -preset veryslow
[04:35:34 CEST] <furq> "something like", yes
[04:35:40 CEST] <furq> i would not try to put actual numbers on that
[04:36:39 CEST] <FishPencil> In that case the file size would be smaller for -veryslow?
[05:43:40 CEST] <kepstin> I actually kinda wish opus had a 'quality' setting like vorbis, so you didn't have to calculate it yourself based on the number of channels
[05:44:23 CEST] <kepstin> didn't have to calculate the bitrate*
[08:30:06 CEST] <cowai> Hello, does anybody have any experience with the prompeg fec documented here? https://www.ffmpeg.org/ffmpeg-all.html#prompeg
[08:33:10 CEST] <cowai> I am trying to figure out if I can use that as input as well in ffmpeg
[08:33:28 CEST] <cowai> I mean, can I decode FEC as well?
[09:05:33 CEST] <nahsi> I need to make a clip from a 4h video consisting of evenly extracted parts of 50 frames. The clip should be around 6000 frames. Hope you can understand what I want. Can ffmpeg do that or do I need to use bash magic?
[09:11:02 CEST] <squ> split video by frames with ffmpeg
[09:11:11 CEST] <squ> -t, -ss, -framerate
[09:16:00 CEST] <nahsi> squ: I think it's not what I want. I need to extract 50 frames then skip some then extract more then skip etc so the total length would be around 6000 frames from every part of a video
[09:16:57 CEST] <squ> yes
[09:17:35 CEST] <nahsi> I thought -framerate is for converting the frame rate of a video?
[09:17:51 CEST] <squ> tells how many frames in a second
[10:06:00 CEST] <termos> when I'm done encoding and get an AVPacket out of avcodec_receive_packet it has duration set to 0 for video. This causes the HLS muxer to complain. Is there a reason it's set to 0 and not 1/framerate for example?
[10:06:21 CEST] <termos> something I forgot to set in the AVCodecContext?
[10:16:09 CEST] <DHE> interesting. what input format?
[10:22:20 CEST] <termos> the input is FLV from rtmp, decoded h264 (still has duration set), then it's encoded with libx264 and no duration in the AVPackets after
[10:22:50 CEST] <termos> my bad, still has duration after demuxing, no duration in the decoded AVFrame's
[10:25:10 CEST] <termos> I guess I could set the duration manually before muxing to cur_pts-prev_pts scaled into the correct time_base or something
[10:25:29 CEST] <termos> I'd like to avoid it if possible
[10:53:20 CEST] <ArsenArsen> When I finish my encoding some frames are missing from the recording, I use the send receive API, and before writing the trailer I receive all packets that are left in the codec. What could cause this issue?
[10:55:01 CEST] <ArsenArsen> There seems to be at max ~20 frames gone every time
[11:00:25 CEST] <termos> are you flushing the encoder by passing a NULL frame to avcodec_send_frame?
[11:01:57 CEST] <ArsenArsen> I am not.. Will try doing that now.
[11:10:52 CEST] <ArsenArsen> That fixed it! Thanks
[11:45:16 CEST] <termos> I ended up doing packet.duration = packet.pts - prev_pts; before every call to av_interleaved_write_frame and I see no issues with it. Does it look correct?
[11:46:22 CEST] <termos> all HLS segments are the correct length at least
[12:36:04 CEST] <darklink> hello. I have a wav file with 10 channels and a channel layout of Front left, front right, front center, LFE, back left, back right, side left, side right, top left and top right.
[12:36:17 CEST] <darklink> Now I want to convert that file into something more usable
[12:37:24 CEST] <darklink> like 7.1 or 11.1
[12:37:59 CEST] <darklink> What channel layout should I pass to ffmpeg?
[12:39:14 CEST] <thebombzen> darklink: I believe ffmpeg supports notation like "-ac 7.1"
[12:39:37 CEST] <darklink> but it can't guess the channel layout of the 10 channel input file
[12:40:00 CEST] <thebombzen> idk then, I guess you might have to use a dedicated tool
[12:40:09 CEST] <thebombzen> perhaps it can be done, but I don't know how
[12:41:10 CEST] <darklink> can you split the channels of an arbitrary audio file into individual files?
[12:41:27 CEST] <squ> yes, with -map
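One way to do the per-channel split, a sketch using -map_channel (the pan filter is an alternative), assuming the 10-channel file is input.wav:

    # one mono file per channel, channels 0..9
    for i in 0 1 2 3 4 5 6 7 8 9; do
        ffmpeg -i input.wav -map_channel 0.0.$i ch$i.wav
    done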
[13:00:18 CEST] <hollunder> Hi there. I'm looking into a bug I encountered with obs and recording a stream to a mkv file. The resulting mkv file has 1000 fps and some other issues.
[13:00:53 CEST] <hollunder> from what we could figure out at quackenet #obsproject it might be a ffmpeg issue.
[13:01:22 CEST] <hollunder> Osiris there could get correct frame rates when he set avg_frame_rate but is worried about side effects
[13:01:42 CEST] <hollunder> Is someone here familiar with this stuff?
[14:14:45 CEST] <thebombzen> hollunder: OBS has had a broken mkv muxer for a long time
[14:15:01 CEST] <thebombzen> they claim it's really ffmpeg's mkv muxer that is broken, but this isn't fully accurate
[14:15:17 CEST] <thebombzen> I really think OBS just don't call libavformat correctly
[14:22:22 CEST] <BtbN> I guess they don't properly let it finalize the output file?
[14:28:30 CEST] <BtbN> They are doing VERY weird things with their muxing
[14:28:44 CEST] <BtbN> they are calling an external binary which they made themselves, instead of muxing directly.
[14:28:53 CEST] <BtbN> I guess so OBS can safely crash, without killing mp4 files?!
[14:40:56 CEST] <thebombzen> BtbN: except it can't though
[14:41:20 CEST] <thebombzen> otherwise I'd just remux to mp4. I don't know how it works. their mkv muxer is broken as above, the mp4 muxer is unsafe because it's not streamable by default
[14:41:36 CEST] <thebombzen> I end up recordting to mpegts with OBS
[14:41:38 CEST] <BtbN> mp4 is not streamable. You can't safely live-record to it
[14:41:42 CEST] <BtbN> Just use flv
[14:41:56 CEST] <thebombzen> I say "by default" because you can do moov hacks and whatnot
[14:42:08 CEST] <thebombzen> but yea it has to be finalized or it'll be corrupt
[14:42:10 CEST] <hollunder> can't someone tell them how to do it correctly?
[14:42:20 CEST] <thebombzen> hollunder: if you do that, they'll say it's FFmpeg's problem
[14:42:33 CEST] <thebombzen> I worked around it by recording to mpegts
[14:42:59 CEST] <BtbN> The ffmpeg mkv muxer is clearly not broken. It works when used via ffmpeg.c or various other applications.
[14:43:06 CEST] <hollunder> I don't know, Osiris didn't seem entirely unreasonable
[14:43:21 CEST] <thebombzen> I've mentioned it in the past on #obsproject
[14:43:31 CEST] <hollunder> It worked for Osiris in obs when he set avg_frame_rate
[14:43:31 CEST] <BtbN> So if it works everywhere, but in OBS, it's pretty clear on which side the issue/misuse is happening.
[14:43:31 CEST] <thebombzen> and basically am told "we just use FFmpeg's muxer so it's their fault"
[14:43:33 CEST] <thebombzen> which is stupid
[14:44:01 CEST] <hollunder> he wasn't sure whether this had side effects
[14:44:04 CEST] <BtbN> Also, their muxing code is so convoluted due to that separate process it's really hard to pinpoint anything
[14:48:23 CEST] <hollunder> I record two audio streams, so I can't just use flv.
[14:48:46 CEST] <hollunder> It seems re-muxing the mkv to mp4 from obs works well enough.
[14:49:05 CEST] <BtbN> you shouldn't use either format.
[14:49:13 CEST] <BtbN> mp4 is risky to live-record, and mkv in obs seems to be broken
[14:49:24 CEST] <BtbN> That leaves you with ts as the only possible format
[14:51:07 CEST] <hollunder> thanks, I'll give that a shot
[14:51:55 CEST] <thebombzen> well, it also has mov and flv, but mov has the exact same problem as mp4
[14:52:03 CEST] <thebombzen> and flv is deprecated and there's no reason to use it over mpegts
[14:52:37 CEST] <hollunder> this looks like a quite horrible situation
[14:52:47 CEST] <BtbN> flv is deprecated?
[14:52:58 CEST] <BtbN> rtmp is still by far the most common distribution protocol, and that's flv over tcp
[14:53:03 CEST] <BtbN> flv is not going anywhere
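A minimal sketch of the record-to-mpegts-then-remux route being suggested, assuming OBS wrote recording.ts; -map 0 keeps both audio tracks during the remux:

    ffmpeg -i recording.ts -c copy -map 0 recording.mkv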
[14:56:57 CEST] <zalaare> can someone tell me how to use vp9_vaapi /encode/ and the scaler. I prefer -scale=-2:480, however even the scale_vaapi=w=-2:h=480 is not working. If I add the scale i get "Failed to end picture encode issue: 18 (invalid parameter).", when I remove it, it encodes just ducky.
[14:58:48 CEST] <zalaare> current command is `ffmpeg -i '/path/to/file' -vf 'format=nv12,scale=-2:480,hwupload' -vaapi_device '/dev/dri/renderD128' -c:v vp9_vaapi -c:a copy -y '/path/to/newfile.mkv'`
[14:59:56 CEST] <c_14> I'd assume scale_vaapi doesn't support using -2 as a width
[15:00:22 CEST] <shincodex> So.
[15:00:26 CEST] <zalaare> c_14: it certainly did not in the past. I'm trying to use the swscale instead of it (see above).
[15:00:51 CEST] <shincodex> if using rtsp then my priv_data is going to be a RTSPState
[15:01:11 CEST] <c_14> zalaare: try scale,format,hwupload ?
[15:01:12 CEST] <zalaare> to use the scale_vaapi as i understand it: -vf 'format=nv12,hwupload,scale_vaapi=w=840:h=480'
[15:01:27 CEST] <shincodex> which uses udp or tcp which tries to cast the priv_data to a tcpcontext... This can never be true so the setsock options never goes off.
[15:01:32 CEST] <zalaare> c_14 i'll try
[15:01:44 CEST] <jkqxz> -2 should work, the code to calculate that is common. (Though that change was more recent than the addition of the filter itself.)
[15:01:46 CEST] <shincodex> which is why in tcp they do url split for timeout
[15:04:35 CEST] <jkqxz> Also, scale_vaapi can usually do the format conversion, so doing it on the CPU beforehand is unnecessary. If your input is yuv420p, you can just do 'hwupload,scale_vaapi=w=840:h=480:format=nv12'.
[15:05:25 CEST] <zalaare> same result with scale before format
[15:05:57 CEST] <zalaare> if scale_vaapi now supports -2 then i can do that.
[15:06:57 CEST] <zalaare> same error
[15:07:39 CEST] <jkqxz> Oh, I didn't see the first error you said. That is odd. Can you paste the whole output somewhere and link it?
[15:08:27 CEST] <shincodex> oh....
[15:08:43 CEST] <shincodex> tcp.c does not implement a url_open2 which accepts AVOptions
[15:08:47 CEST] <shincodex> probably udp as well
[15:10:36 CEST] <zalaare> https://www.pastiebin.com/5953aac353103
[15:14:37 CEST] <zalaare> if I change 1 character it works :P vp9_vaapi -> vp8_vaapi
[15:14:38 CEST] <zalaare> hehe
[15:17:51 CEST] <jkqxz> Weird. I have no idea at all why that would be failing.
[15:20:06 CEST] <jkqxz> The driver doesn't obviously have any relevant changes since that version.
[15:20:07 CEST] <jkqxz> You could try doing the decode on the GPU as well, rather than having it on the CPU and uploaded?
[15:20:24 CEST] <c_14> zalaare: try using -16 instead of -2?
[15:21:13 CEST] <zalaare> jkqxz: decode works fine, but this is part of a script and I never know what the input will necessarily by.
[15:21:14 CEST] <zalaare> be*
[15:21:55 CEST] <zalaare> c_14 -16 works
[15:21:57 CEST] <zalaare> why?
[15:22:17 CEST] <zalaare> end up with slightly odd res
[15:22:23 CEST] <zalaare> 848x480
[15:22:28 CEST] <zalaare> but it works
[15:23:17 CEST] <zalaare> -8 works
[15:23:51 CEST] <zalaare> -4 fails. good enough to work with :) Thanks!
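For reference, a sketch along the lines of what ended up working in this exchange, with the same placeholder paths as zalaare's command; the key change is rounding the width to a multiple of 16 (the scale/format filter order didn't seem to matter here):

    ffmpeg -i '/path/to/file' -vf 'scale=-16:480,format=nv12,hwupload' -vaapi_device /dev/dri/renderD128 -c:v vp9_vaapi -c:a copy -y '/path/to/newfile.mkv'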
[15:25:37 CEST] <mcegledi> Hi there!
[15:26:06 CEST] <jkqxz> The Intel VP9 encoder doesn't like some frame sizes? I haven't seen that before, but I don't think I ever tried weird sizes. I'll have to look into that.
[15:28:37 CEST] <mcegledi> I have an installation description for a software that requires me to install the ubuntu ffmpeg package on my system. But that doesn't exist for my ubuntu 14.04. There are a lot of other *ffmpeg* packages for trusty, does anybody know which trusty packages have been replaced by "ffmpeg"?
[15:31:46 CEST] <c_14> zalaare: some codecs only like things divisible by certain numbers
[15:31:54 CEST] <c_14> most encoders should pad/scale to fit
[15:32:00 CEST] <c_14> but vaapi_vp9 probably doesn't
[15:32:36 CEST] <zalaare> stupid question, but is the -2,-4,-8,-16 basically the same as mod2, mod4, mod8, and mod16?
[15:32:52 CEST] <DHE> yes, it forces a selection that is a multiple of those numbers
[15:32:53 CEST] <c_14> no
[15:32:56 CEST] <c_14> it's round to the nearest
[15:33:06 CEST] <c_14> it forces it to be modn
[15:33:10 CEST] <c_14> it's not actually a modn
[15:33:30 CEST] <DHE> modulo arithmetic is technically different from the concept of "being a multiple of X"
[15:33:58 CEST] <c_14> yes, it forces it to be modn == 0
[15:34:54 CEST] <zalaare> so similar, but not the same is what I think I am hearing/reading.
[15:35:24 CEST] <jkqxz> It should work; seems likely to be a bug in the driver around thinking there should be alignment on input surfaces which need not have it.
[15:35:31 CEST] <asdzxc> Hey everyone. I'm back with the same question as I had before: if I run the same command twice, why wouldn't the two outputs have the same checksum?
[15:36:44 CEST] <celyr> if the same command is cat /dev/random that's actually expected
[15:36:52 CEST] <jkqxz> Timestamps or similar in metadata?
[15:37:04 CEST] <celyr> also xz on 2 different machines can have 2 different outputs
[15:37:10 CEST] <celyr> you are giving 0 info
[15:39:34 CEST] <c_14> asdzxc: ffmpeg includes timestamps in the metadata, use -fflags +bitexact
[15:41:05 CEST] <asdzxc> Let me rebuild my command. One min
[15:52:14 CEST] <asdzxc> https://pastebin.com/p93N3fjL
[15:52:29 CEST] <asdzxc> It's a bit lengthy of a command, but it'll take an input @ timecode and create multiple outputs from it
[15:52:44 CEST] <asdzxc> If I run that same command twice, my outputs have different checksums.
[15:53:11 CEST] <c_14> how are you checking the checksums?
[15:54:20 CEST] <asdzxc> Using a md5 checker built-in to FreeCommander
[15:56:06 CEST] <c_14> yeah, first try adding -fflags +bitexact, then try using the hash/framehash muxers to compare instead
[15:57:10 CEST] <asdzxc> Alright. Let me try that out. Thanks
[16:05:33 CEST] <PsyDebug> hi all! how i can reload input stream, when eof? rtsp input is down every errors
[16:07:58 CEST] <pgorley> hi, i'm looking at commit 66963d4b8d302611553e7928063c1cb2ff0efdff (avcodec: remove warning against using frame threading with hwaccels), which mentions a "libavcodec native software fallback mechanism". where can i get more details on such a mechanism (i don't mind looking through code)?
[16:09:33 CEST] <thebombzen> BtbN: flv is deprecated in that you should not use it for anything other than rtmp
[16:10:03 CEST] <thebombzen> unless you specifically need flv, there's not really any reason to use it instead of mpegts
[16:11:26 CEST] <jkqxz> pgorley: It's there automatically. When you receive the get_format() callback it will always contain a software format (whether hardware formats are available or not), and if you pick it you will get the software decoder.
[16:12:24 CEST] <asdzxc> c_14: The -fflags +bitexact didn't work. Do you see anything in my command that would cause the checksum to be different on different runs? With a simpler command, I'm able to confirm that the MD5s match, so I think there may be something with my command.
[16:13:00 CEST] <pgorley> jkqxz: so if i'm using hardware decoding, and it fails during runtime, how do i fallback to software? is there a way to do this implemented in avcodec?
[16:14:18 CEST] <c_14> asdzxc: not really, what does the framehash muxer say?
[16:14:43 CEST] <asdzxc> I'm not sure how to use that.
[16:15:20 CEST] <c_14> ffmpeg -i file -f framehash out.sha256
[16:15:29 CEST] <c_14> the hash is written to out.sha256
[16:16:21 CEST] <jkqxz> pgorley: What do you mean by "fails"? If the stream format changes to one which your hardware decoder does not support, then you can pick the software decoder instead.
[16:17:20 CEST] <pgorley> jkqxz: i mean, once the hardware is decoding and for some reason screws up and starts spewing errors, is there a way to redo the get_format callback to choose a software format?
[16:18:27 CEST] <jkqxz> If your hardware actually fails (like, you snap your graphics card in half mid-decode) then it won't have any way to recover until the next get_format() callback, because that's the granularity of the selection.
[16:18:56 CEST] <jkqxz> (Well, maybe that particular failure would be more serious, but hopefully you get the idea.)
[16:20:45 CEST] <alexpigment> jkqxz: i'm interested to know what happens when you snap your GPU in half mid-decode. please report your findings ;)
[16:22:02 CEST] <pgorley> jkqxz: haha, thanks for the help!
[16:23:01 CEST] <pgorley> so i'd have to reinitialize the AVCodecContext to get the next call to get_format
[16:23:05 CEST] <Asuran> hi, can -b:v use some automatic value like the bitrate of the source?
[16:23:54 CEST] <alexpigment> Asuran: I assume a CRF value is out of the question?
[16:24:02 CEST] <alexpigment> (if you're using x264)
[16:25:50 CEST] <Asuran> alexpigment, actually i wanted to use crf
[16:26:15 CEST] <Asuran> but now im unsure if i should not use some automatic -b:v and use vbr encoding?
[16:26:20 CEST] <Asuran> codec libvpx-vp9
[16:26:27 CEST] <alexpigment> ah, gotcha
[16:26:32 CEST] <Asuran> but im open to use x265
[16:27:03 CEST] <alexpigment> I think with libvpx, you can specify -crf but it actually maps to a -qp internally
[16:27:09 CEST] <alexpigment> you still have to set the bitrate though
[16:27:16 CEST] <alexpigment> -b:v effectively becomes the maxrate
[16:27:26 CEST] <alexpigment> unless you set -b:v to 0, then it's unconstrained
[16:27:42 CEST] <alexpigment> (I'm basing this off my experience with VP8, but I believe VP9 is the same
[16:27:46 CEST] <furq> it is
[16:28:02 CEST] <Asuran> ye in vp9 if you don't set it it defaults to 256k or so
[16:28:12 CEST] <kepstin> asdzxc: You're using x264 multithreading - which is a *non-deterministic* encoder. You aren't guaranteed to get the same result each time.
[16:28:16 CEST] <Asuran> so -b:v is like b:v auto?
[16:28:17 CEST] <alexpigment> yeah, just set -crf [number] -b:v 0
[16:28:27 CEST] <Asuran> thanks!
[16:28:40 CEST] <Asuran> to both of you
[16:28:46 CEST] <alexpigment> np
[16:29:01 CEST] <alexpigment> hope it works. i've done very little testing with vp9, but I *think* it's the same as how vp8 works
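A minimal sketch of that constant-quality invocation, with an arbitrary crf value and placeholder filenames:

    ffmpeg -i input.mkv -c:v libvpx-vp9 -crf 30 -b:v 0 -c:a libopus output.webm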
[16:30:22 CEST] <kepstin> huh, I thought cq mode with vp8 wasn't supported - you just had to set -b:v to some really high value and hope for the best. The unconstrained cq mode is new in vp9?
[16:30:32 CEST] <furq> cq mode is in vp8 and vp9
[16:30:35 CEST] <furq> constant quality mode is new in vp9
[16:30:39 CEST] <kepstin> er, cq mode with unconstrained bitrate in vp8 wasn't supported*
[16:30:45 CEST] <Asuran> furq, cq is crf?
[16:30:47 CEST] <furq> you have to set -b:v 0 to enable it
[16:30:51 CEST] <furq> cq is constrained quality
[16:31:00 CEST] <furq> constant quality (VPX_Q) is presumably supposed to be crf
[16:31:06 CEST] <alexpigment> kepstin: yeah, I didn't *know* it was supported years ago, so I set some really high bitrate as the max. and then I felt dumb when I found out -b:v 0 worked
[16:31:37 CEST] <furq> https://github.com/FFmpeg/FFmpeg/blob/master/libavcodec/libvpxenc.c#L512-L516
[16:32:04 CEST] <Asuran> thanks again
[16:32:12 CEST] <furq> http://www.webmproject.org/docs/webm-sdk/group__encoder.html#gaf50e74d91be4cae6f70dfeba5b7410d2
[16:32:20 CEST] <kepstin> the libvpx constrained/constant quality modes are implemented in a completely different way from x264's "crf" mode tho, so don't be confused by the name and expect them to behave like x264 :/
[16:32:24 CEST] <furq> it was probably naive of me to expect them to explain what those modes actually are
[16:34:24 CEST] <alexpigment> kepstin: yeah, as I said above, I think CRF just maps to qp (or cq or something) on the back end
[16:34:45 CEST] <Asuran> which is your fav codec for lossy archiving?
[16:34:48 CEST] <furq> x264
[16:34:51 CEST] <alexpigment> x264
[16:34:53 CEST] <Asuran> i understand
[16:34:53 CEST] <kepstin> Asuran: x264.
[16:35:15 CEST] <furq> it will continue to be x264 at least until av1 comes out
[16:35:17 CEST] <Asuran> so if i understand it right due encoding times?
[16:35:19 CEST] <furq> and probably for a few years after that
[16:35:27 CEST] <Asuran> av1 is the successor of vp9?
[16:35:29 CEST] <furq> yes
[16:35:34 CEST] <furq> vp9 and x265 aren't really that much better
[16:35:47 CEST] <furq> certainly not enough to warrant how slow they are
[16:35:57 CEST] <alexpigment> Asuran: it's just a fully featured encoder, and it's reasonably fast. it supports -crf and interlacing and has blu-ray compatibility. that's enough for me :)
[16:36:17 CEST] <furq> and from what i hear the encoders are much less mature and more prone to weird quality issues that have been ironed out in x264
[16:36:47 CEST] <kepstin> the best part of x264 was when they added the "preset" system, which finally put an end to copying around encoder command lines you didn't understand, and made it easy to use.
[16:36:52 CEST] <furq> and also obviously the h265 patent situation is a fucking nightmare
[16:36:58 CEST] <furq> not that it matters that much for archival
[16:37:06 CEST] <alexpigment> kepstin: agreed
[16:37:31 CEST] <furq> and yeah i remember all the wizardry needed to create a decent-looking xvid rip
[16:37:36 CEST] <furq> i'd rather not go back to that
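For context, a sketch of the kind of x264 invocation being praised here; the preset and crf values are arbitrary examples, not a recommendation from the channel:

    ffmpeg -i input.mkv -c:v libx264 -preset slow -crf 18 -c:a copy archive.mkv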
[16:37:45 CEST] <alexpigment> furq: what's nightmarish about it? I presume you just have to deal with MPEG-LA as usual
[16:37:53 CEST] <furq> you wish
[16:37:58 CEST] <furq> there are now four different patent pools
[16:38:05 CEST] <furq> so you need to pay all of them
[16:38:07 CEST] <kepstin> alexpigment: haha. Nope, that's the problem, it's not just the MPEG-LA
[16:38:09 CEST] <alexpigment> really?
[16:38:15 CEST] <alexpigment> well, consider this news to me ;)
[16:38:21 CEST] <furq> mpeg-la, hevc advance, technicolour, and a new one whose name i forget now
[16:38:27 CEST] <furq> we were making fun of their awful website a while back
[16:38:41 CEST] <alexpigment> and you have to pay those four regardless of which features you use?
[16:38:47 CEST] <furq> i've not looked into it that much
[16:38:56 CEST] <furq> i just know i would run away from that shit like usain bolt
[16:39:03 CEST] <alexpigment> haha
[16:39:16 CEST] <alexpigment> well, I know that MPEG-LA has a minimum number of units before you actually have to pay
[16:39:25 CEST] <alexpigment> hopefully the others are the same (not that I would bet on it)
[16:39:26 CEST] <furq> velos media, that's the fourth one
[16:41:08 CEST] <alexpigment> noted for the future; thanks
[16:42:05 CEST] <furq> the fees have come down a lot though
[16:42:21 CEST] <furq> hevc advance initially wanted $2.80 per device plus 0.5% of revenue from hevc content
[16:42:36 CEST] <alexpigment> wow
[16:43:24 CEST] <kepstin> hah, of course the velos media site serves up their videos... in h264/mp4 and vp9/webm.
[16:43:28 CEST] <furq> lol yeah
[16:43:37 CEST] <furq> i really like their "h264 vs hevc" comparison video
[16:43:42 CEST] <furq> which is a webm
[16:43:58 CEST] <furq> it's just the same video but one of them was scaled with nearest
[16:44:05 CEST] <kepstin> some of the video tags even link to ogv files, but the one I checked returned a 403, so.
[16:44:41 CEST] <DHE> I know some TV box manufacturers are starting to include VP9 player support...
[16:44:52 CEST] <furq> but yeah unless you need to do a wide deployment of 4k Right Now then i would stay far away from hevc
[16:45:18 CEST] <furq> or your name is apple and you're definitely not getting kickbacks at all from the patent pools for suddenly deciding to use it for everything
[16:47:20 CEST] <furq> with that said, apparently some dvb-t2 stations are already using hevc
[16:47:25 CEST] <furq> so it's probably in it for the long haul
[17:19:17 CEST] <nahsi> OK, I'm back with the same question. I need to make a clip from a 4h long video, which will consist of short clips, 50 frames in length, evenly extracted. Hope that makes sense. The total length of the clip should be around 6000 frames. I need that to estimate the optimal bitrate for encoding.
[17:20:39 CEST] <DHE> so you want to break the input video into 50 chunks of roughly identical length?
[17:21:01 CEST] <DHE> for later transcoding, I imagine on multiple systems
[17:21:39 CEST] <nahsi> No-no, I want to make a single representative clip of ~6000 frames from a video
[17:21:53 CEST] <nahsi> 50 frames from here 50 frames from there
[17:23:45 CEST] <DHE> so some kind of supercut? :)
[17:24:55 CEST] <nahsi> Yes you can call it like that. Just a bunch of random scenes to encode and see average bitrate
[17:25:57 CEST] <nahsi> There is a script for AviSynth that does it but there is no AviSynth on linux
[17:25:59 CEST] <nahsi> selectTotal1=framecount()/100
[17:26:00 CEST] <nahsi> selectTotal2=selectTotal1*2
[17:26:03 CEST] <nahsi> selectrangeevery(selectTotal2,50)
[17:28:26 CEST] <dl2s4> there is vapoursynth on linux, also sometimes avxsynth can help too
[17:30:38 CEST] <DHE> I'm thinking you might be able to do some trickery with select, setpts and funky math but I'd need to check the options available first.
[17:31:11 CEST] <nahsi> Actually I think vapoursynth is what I need
[17:31:15 CEST] <nahsi> I'll try it
[17:31:18 CEST] <nahsi> Thanks
[17:31:18 CEST] <kepstin> the main thing is that ffmpeg filters don't know the total no of frames or length of the video, so you'd have to precalculate it
[17:31:30 CEST] <kepstin> but yeah, it could be done with ffmpeg other than that.
[17:31:55 CEST] <nahsi> If I know the length then how I do that?
[17:32:04 CEST] <nahsi> I can check it with ffprobe right?
[17:32:10 CEST] <kepstin> yeah
[17:32:30 CEST] <nahsi> So I have the length what's next
[17:34:03 CEST] <kepstin> you'd use the 'select' filter, which takes a mathematical expression (has current frame no and current timestamp available) to pick which frames to include vs. exclude, then throw a setpts filter after it (probably something like 'setpts=N', depending on the stream timebase) to fix up the gaps in the timestamps.
[17:34:29 CEST] <nahsi> OK I'll google it then
[17:34:31 CEST] <nahsi> Thank you
[17:35:56 CEST] <shincodex> typedef struct AVInputFormat {
[17:36:03 CEST] <shincodex> shouldnt the read functor of this take options
[17:36:57 CEST] <kepstin> probably something like select='between(mod(t\,600)\,0\,10)',setpts=N which would grab the first 10 seconds of video from each 10 minute segment - i.e. seconds 0-10,600-610,1200-1210, and so on
[17:38:30 CEST] <kepstin> you could also use frame numbers rather than time, switch the variable t to be n instead
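A minimal sketch of that select/setpts approach using frame numbers, assuming a 24 fps, 4-hour source (345600 frames): taking 50 frames out of every 2880 gives roughly 120 x 50 = 6000 frames, and the 2880 has to be precalculated from the real frame count:

    ffmpeg -i input.mkv -an -vf "select='lt(mod(n\,2880)\,50)',setpts=N/FRAME_RATE/TB" -c:v libx264 -crf 18 sample.mkv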
[17:54:38 CEST] <shincodex> // Awful piece of @#$@#%@#%@$#%@#%@#%@#%@#%@#2355 extern "C" { // Go get the tcp global for control extern int TCPBufferSize; }
[17:54:57 CEST] <shincodex> tcp.c
[17:55:04 CEST] <shincodex> // a global controlling tcp recieve/send buffer size since options are not working because ffmpeg devs suck at design. int TCPBufferSize = -1;
[17:55:14 CEST] <shincodex> static int tcp_open(URLContext *h, const char *uri, int flags)
[17:55:19 CEST] <shincodex> if (TCPBufferSize > 0) { setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &TCPBufferSize, sizeof(TCPBufferSize)); setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &TCPBufferSize, sizeof(TCPBufferSize)); }
[17:55:25 CEST] <shincodex> go patch the master with that
[17:55:30 CEST] <shincodex> that awful fucking hack
[18:08:52 CEST] <DHE> wut?
[18:12:31 CEST] <Asuran> if i do 2pass encoding, can i tell ffmpeg to auto clean stuff up?
[18:16:08 CEST] <BtbN> Just put an rm into the script you have anyway?
[18:34:27 CEST] <albb> Hi there, can someone help me with a problem with x264 library?
[18:42:06 CEST] <DHE> and he's already gone
[18:47:42 CEST] <Asuran> BtbN, does the file keep the same name every time?
[18:48:12 CEST] <BtbN> I'd be surprised if it doesn't? The second process has to reliably find it.
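A minimal sketch of that, assuming libx264 two-pass with an explicit -passlogfile prefix so the cleanup knows what to delete (x264 writes PREFIX-0.log and PREFIX-0.log.mbtree):

    ffmpeg -y -i input.mkv -c:v libx264 -b:v 2M -pass 1 -passlogfile x264pass -an -f null /dev/null
    ffmpeg -i input.mkv -c:v libx264 -b:v 2M -pass 2 -passlogfile x264pass -c:a copy output.mkv
    rm -f x264pass-0.log x264pass-0.log.mbtree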
[18:57:21 CEST] <albb> Hi all someone can help me with this error using ffmpeg?
[18:57:32 CEST] <albb> Unrecognized option 'x264opts'. Error splitting the argument list: Option not found
[19:03:25 CEST] <BtbN> The option is called x264-params.
[19:03:56 CEST] <ploprof> hello
[19:04:07 CEST] <ploprof> I get this error [hevc_nvenc @ 0x563f46e7cd00] Failed to create nvenc instance: invalid version (15)
[19:04:22 CEST] <kepstin> Asuran: the filename is derived from a prefix (which you can optionally specify) and a number to indicate which output stream.
[19:04:22 CEST] <ploprof> I am on fedora 25 with up to date nvidia drivers and cuda
[19:04:35 CEST] <ploprof> this was the command I used ffmpeg -i Grosse.Pointe.Blank.mkv -vcodec hevc_nvenc output.mkv
[19:04:36 CEST] <albb> @BtbN the error is the same
[19:04:57 CEST] <BtbN> You are not encoding with libx264 then.
[19:05:12 CEST] <albb> how can I check it?
[19:05:25 CEST] <BtbN> well, do you tell it to encode with libx264?
[19:05:36 CEST] <albb> I have followed the Compiling guide of ffmpeg.org
[19:06:02 CEST] <albb> Here https://trac.ffmpe
[19:06:18 CEST] <albb> I have followed all the compile section..
[19:07:16 CEST] <BtbN> albb, that's not a compile time option though.
[19:07:28 CEST] <BtbN> ploprof, what version of nvidia-drivers are you using? That are probably too old.
[19:08:46 CEST] <ploprof> 375.66
[19:09:38 CEST] <albb> this is the pastebin
[19:09:40 CEST] <albb> https://pastebin.com/giCNje2e
[19:09:52 CEST] <BtbN> ploprof, the minimum driver version is 378.13 now
[19:10:06 CEST] <ploprof> D a r n
[19:11:46 CEST] <kepstin> albb: ok, what does the output of "ffmpeg -h encoder=libx264" say?
[19:11:58 CEST] <BtbN> albb, what ffmpeg version is that? The commandline looks fine.
[19:13:24 CEST] <albb> "ffmpeg -h encoder=libx264" lists some libavutil, libavcodec and says Codec 'libx264' is not recognized by FFmpeg
[19:13:41 CEST] <albb> ffmpeg version N-86654-gd2ef9e6
[19:13:45 CEST] <kepstin> albb: alright, so your ffmpeg was not built with x264 enabled.
[19:14:05 CEST] <albb> What should I do to enable?
[19:14:30 CEST] <Asuran> recompile it
[19:14:42 CEST] <Asuran> or get a version with it enabled
[19:15:06 CEST] <kepstin> make sure you have the libx264 headers installed, and use "--enable-libx264" on the configure line.
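A minimal sketch of that configure step, assuming the ~/ffmpeg_sources / ~/ffmpeg_build / ~/bin layout from the Ubuntu compilation guide the user followed (other --enable flags from the guide omitted):

    cd ~/ffmpeg_sources/ffmpeg
    PATH="$HOME/bin:$PATH" PKG_CONFIG_PATH="$HOME/ffmpeg_build/lib/pkgconfig" ./configure \
        --prefix="$HOME/ffmpeg_build" --bindir="$HOME/bin" \
        --extra-cflags="-I$HOME/ffmpeg_build/include" --extra-ldflags="-L$HOME/ffmpeg_build/lib" \
        --enable-gpl --enable-libx264
    make && make install && hash -r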
[19:16:17 CEST] <albb> where should I check for the libx264 headers?
[19:17:39 CEST] <albb> I have the x264 application-executable in Home/ffmpeg_sources/x264-snapshot-20170627-2245
[19:18:23 CEST] <kepstin> did you build x264 yourself? or are you using e.g. distro-provided packages?
[19:18:47 CEST] <kepstin> albb: honestly, unless you have a really good reason to build it yourself, you might want to consider just grabbing some prebuild ffmpeg binaries
[19:19:44 CEST] <albb> I have followed https://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu#RevertingChangesMadebyThisGuide
[19:19:49 CEST] <albb> this guide
[19:20:13 CEST] <Asuran> get the source pkg of this pkg
[19:20:16 CEST] <Asuran> and add what he said
[19:20:24 CEST] <albb> Before I have simply used git clone https://git.ffmpeg.org/ffmpeg.git ffmpeg
[19:20:36 CEST] <kepstin> albb: so... are you actually running the ffmpeg binary that you built?
[19:21:05 CEST] <kepstin> unless it's in your path, you'll need to specify the full path to the binary to run it
[19:21:07 CEST] <albb> I hope, sorry but I am a newbie and I don't know how to check these
[19:21:52 CEST] <albb> I need a step by step :D
[19:23:27 CEST] <kepstin> albb: honestly, if you're using ubuntu, it's probably easiest to just run version 16.10 or later and use the system ffmpeg, that should work for you well enough :/
[19:24:30 CEST] <albb> But I have a lot of programs in Ubuntu 14.04
[19:24:57 CEST] <albb> with an OS upgrade i lose them, right?
[19:26:18 CEST] <kepstin> well, you should do a backup obviously, but the ubuntu system upgrade from 14.04 to 16.04 is pretty reliable, and even 16.04 ships with a usable - if somewhat old (2.8) - ffmpeg.
[19:27:12 CEST] <albb> At the moment I can't do a backup :/
[19:28:32 CEST] <albb> <kepstin> can you help me tomorrow cause now I have to go?
[19:28:40 CEST] <kepstin> any data you don't have backed up is obviously data you don't care about, so you should be good to just reformat and reinstall then ;)
[19:29:19 CEST] <albb> Yes, I have backed up in cloud
[19:29:42 CEST] <albb> Not all, and I don't have an external hard disk at the moment
[19:30:28 CEST] <albb> I have some virtual machine, and a lot of programs :/
[19:30:45 CEST] <Asuran> arent those easy to backup?
[19:30:51 CEST] <Asuran> i believe its just some vhd file?
[19:31:35 CEST] <albb> University project
[19:31:44 CEST] <Asuran> still same?
[19:32:47 CEST] <albb> It's a long procedure even if it's easier later for ffmpeg use
[19:33:55 CEST] <albb> but I have Eclipse, VirtualBox and so on that are time-expensive to re-install
[19:38:45 CEST] <albb> Thank you for all advices.. tomorrow I'll try, and I hope to find you here
[19:44:45 CEST] <alex88> Hello everyone, I'm trying to stream a webcam over the network, these are my ffserver config and ffmpeg command https://gist.github.com/alex88/1df3d37744726c9125f3a69dc7c850c8 the video is not very smooth, it lags and the fps seems very low, ffmpeg on the server command reports 22/24 fps and the same on the client (on the other machine)
[19:46:18 CEST] <alex88> I don't know if it's a wifi problem (the machines are like 2 meters away and the router is at the same distance), is it because rtp transfers the video "as-is" without control on the timestamp or? (sorry for the idiot things I may say :D )
[19:48:28 CEST] <kepstin> alex88: in general people aren't gonna be able to help you with ffserver stuff; what kind of use case are you thinking of?
[19:49:08 CEST] <alex88> kepstin: it's a software which in some part, based on some sensors input has to start/stop recording from a webcam connected to another machine
[19:49:46 CEST] <alex88> so on the machine with the webcam I run ffserver+ffmpeg and on the app server I just run ffmpeg to save the stream to file
[19:50:19 CEST] <kepstin> alex88: anyways, some things to check: make sure you're not encoding the video twice (with some ffmpeg+ffserver configs that might happen), check cpu usage
[19:50:21 CEST] <alex88> I've tried with https://github.com/mpromonet/v4l2rtspserver but the quality was very low and most of the time the stream wasn't working until I manually restart it
[19:50:44 CEST] <kepstin> remember that many webcams don't actually have consistent framerate to start with, particularly in low light
[19:51:31 CEST] <alex88> kepstin: cpu usage is mostly ffmpeg, ffserver usage seems very low, about the light, well, it's in a closed room where people work, not so dark, plus sometimes framerate is not that bad, it just drops for no reason randomly
[19:52:25 CEST] <alex88> I've seen that setting the video codec in ffmpeg but not in ffserver wasn't working, I don't remember why it wasn't working, but the format on the client was unexpected, so I had to set it in both
[19:52:48 CEST] <kepstin> try using -tune zerolatency, make sure you have a reasonably small gop size so there's no long delay on startup, and don't use wifi because it adds random unpredictable delays.
[19:54:29 CEST] <alex88> kepstin: dumb question, isn't there a way to make the client wait until it receives the correct frames instead of writing the last frame it received?
[19:54:43 CEST] <alex88> -tune zerolatency where? server's ffmpeg or client ffmpeg?
[19:54:56 CEST] <kepstin> alex88: that's an encoder option for x264
[19:55:29 CEST] <kepstin> alex88: and by default, the client should be waiting until it receives a keyframe before starting to write, but I dunno the details of how ffserver works for that sort of thing.
[19:56:10 CEST] <alex88> well that's when to start the video (and it does, sometimes it's immediate, sometimes it takes some seconds), I mean after it starts writing the video, what if the framerate drops while streaming
[19:57:25 CEST] <kepstin> you can probably do your use case fairly well without ffserver, btw; just have ffmpeg running on the server machine continuously sending udp/rtp to a port on the client, and then start/stop an ffmpeg on the client machine which listens to that udp/rtp stream.
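A minimal sketch of the ffserver-free setup kepstin describes, folding in the earlier -tune zerolatency / small-GOP advice; CLIENT_IP, the port, and the encoder settings are placeholders, not anything taken from the log:

    # on the webcam machine: capture, encode, and push an MPEG-TS over UDP
    ffmpeg -f v4l2 -framerate 25 -i /dev/video0 \
        -c:v libx264 -preset veryfast -tune zerolatency -g 50 \
        -f mpegts udp://CLIENT_IP:1234

    # on the app server: start/stop this as the sensors dictate
    ffmpeg -i udp://0.0.0.0:1234 -c copy recording.mkv

Copying the stream on the receiving side also avoids the double-encode kepstin warned about.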
[19:57:33 CEST] <hiru> how do you capture multiple screenshots from a video sequence and combine them to make a single tall image? do you know when during a video there is a slow sequence that shows something from feet to the head? that's what I'm trying to do
[19:58:12 CEST] <kepstin> hiru: I expect that most people do that by simply taking screenshots from the video, opening them in layers in e.g. the gimp or photoshop, and lining them up manually.
[19:58:45 CEST] <kepstin> I haven't heard of any automated way to do it, although such might exist? (maybe something in opencv?)
[19:58:54 CEST] <Asuran> hiru, if you want some kind of thumbnails look for imagemagick or google ffmpeg thumbnails
[19:59:34 CEST] <hiru> not looking for a thumbnail, just wondering how it was done and if ffmpeg was the tool people are using
[19:59:50 CEST] <Asuran> there you'll also find an explanation of how to get a specific number of screenshots between second X and second Y
[20:01:42 CEST] <hiru> maybe ffmpeg is used to pipe the screenshots in some other tool
[20:01:53 CEST] <kepstin> hiru: maybe look into software for image stack registration, like used for multi-exposure HDR stuff. Anyways, you're going to end up just turning the pan into a pile of images, then using some other tool or tools.
[20:02:33 CEST] <hiru> ok thanks for the info. I'll try looking on google now that I know what to look for :)
[20:02:42 CEST] <Asuran> hiru, why don't you do as I said? I googled it yesterday and found an explanation there for getting screenshots from second X until Y
[20:03:07 CEST] <kepstin> easy way? open the video in mpv, and use its framestep and screenshot feature to get the images. Slightly harder? you can use ffmpeg to turn a segment of the video into images per frame.
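A sketch of the "slightly harder" ffmpeg route kepstin mentions; the start time, duration, and filenames are placeholders:

    # dump every frame between 1:23 and 1:30 of the pan as PNGs
    ffmpeg -ss 00:01:23 -i input.mkv -t 7 frame_%04d.png

The resulting images can then be handed to whatever stitching tool you settle on.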
[20:03:27 CEST] <hiru> uhm? Asuran actually taking screenshots from sec X to sec Y should be pretty easy. I was more interested in the combination part
[20:03:44 CEST] <Asuran> combination of what?
[20:04:03 CEST] <alex88> kepstin: thanks for the idea, I'll try that too, thank you!
[20:04:09 CEST] <hiru> maybe I was unclear
[20:04:34 CEST] <kepstin> Asuran: the video contains a pan over a static scene, and hiru wants to turn this into a single image of the entire scene.
[20:04:53 CEST] <nicolas17> oh well have fun syncing it
[20:05:00 CEST] <hiru> take the panoramic capture feature modern smartphones now have
[20:05:12 CEST] <hiru> something like that
[20:05:25 CEST] <hiru> you move and the phone combines the images in a long one at the end
[20:06:28 CEST] <nicolas17> hugin is a GUI that combines several tools to do image stitching and panoramic photo stuff
[20:07:53 CEST] <kepstin> ooh, I completely forgot about hugin. been a long time since I played with that :)
[20:11:46 CEST] <hiru> maybe I found something useful http://www.kolor.com/wiki-en/action/view/Fun_:_Stitching_video_frames
[20:29:32 CEST] <cryptodechange> hm crop=1920:698:0:191 produces a black line on top, crop=1920:698:0:192 produces a black line underneath
[20:29:47 CEST] <cryptodechange> Though measuring the pixels, the source is definitely 698px in height
[20:32:33 CEST] <kepstin> cryptodechange: hmm, the problem is probably that the image isn't lined up with the chroma subsampling
[20:33:23 CEST] <kepstin> if you want to get the exact pixel offsets, you might have to convert to 4:4:4 sampling (e.g. yuv444p) before cropping (then convert back afterwards)
[20:34:24 CEST] <kepstin> this is obviously a slightly lossy operation, since it requires reinterpolating the chroma
[20:34:41 CEST] <kepstin> the other alternative would be to just take the middle 696px instead :/
[20:35:00 CEST] <cryptodechange> hm, well 2 pixels is no harm I suppose
[20:35:28 CEST] <cryptodechange> media info states I'm using yuv420, haven't changed it
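A sketch of the yuv444p round-trip kepstin suggests, using the crop values from the log; the codec and quality settings are placeholders:

    ffmpeg -i input.mkv \
        -vf format=yuv444p,crop=1920:698:0:191,format=yuv420p \
        -c:v libx264 -crf 18 output.mkv

As noted above, this reinterpolates the chroma, so it is slightly lossy compared with cropping on an even offset.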
[20:38:25 CEST] <lmat> I have an impulse response of a space and a dry recording. Can I convolve the two to create a "wet" recording in Audacity?
[20:39:07 CEST] <lmat> Or command line or whatever?
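As a hedged aside to lmat's question: ffmpeg builds newer than many 2017 packages include an afir filter that convolves one input with an impulse response, roughly like this (filenames are placeholders; check that your build has the filter):

    ffmpeg -i dry.wav -i impulse_response.wav -lavfi afir wet.wav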
[20:41:33 CEST] <The_8472> does the png encoder support multithreading?
[20:42:00 CEST] <durandal_170> yes it does
[20:42:46 CEST] <The_8472> hrrm, then I must be holding it wrong. h.264 decoding for input uses multiple cores, but when i wire it together with a scaler, png encoder and image2 output it's using ~1core
[20:43:56 CEST] <nicolas17> lol holding it wrong
[20:45:51 CEST] <The_8472> kepstin> Asuran: the video contains a pan over a static scene, and hiru wants to turn this into a single image of the entire scene. <- oh heh, that's what i'm working on atm
[20:46:23 CEST] <durandal_170> The_8472: filters are not frame thread encoding
[20:46:48 CEST] <durandal_170> only directly encoding png is very fast iirc
[20:48:01 CEST] <The_8472> is filter the right term? because i'm instantiating it as a codec
[20:49:42 CEST] <The_8472> feed AVFrame into AVPacket via avcodec, feed AVPacket to AVFormatContext
[20:50:22 CEST] <roxlu> hey, does someone knows what example I can look at when I want to write a .mp4 file when I have raw h264 nals (in code). I've seen this one but that does a lot more: https://ffmpeg.org/doxygen/trunk/muxing-example_8c-source.html
[20:59:04 CEST] <lmat> Forget it! I'll just read wikipedia and write it up myself!
[21:00:02 CEST] <Vaska> Hello - I'm able to change ffplay's visualization when playing a local file but I can't figure out how to do it when playing a streaming url.
[21:00:23 CEST] <Vaska> Is there a url equivalent to amovie?
[21:03:25 CEST] <Vaska> I'd like to use something like this, but with an url instead of a local mp3 file --> ffplay -f lavfi "amovie=file.mp3,asplit[out0],showwaves[out1]"
[21:06:48 CEST] <kepstin> Vaska: you should be able to use any url that works in ffplay in the amovie filter - the restriction is that : is a special character, so you'll have to escape it in the filter chain with \
[21:08:34 CEST] <kepstin> ... I'm not actually sure how to get that right
[21:08:42 CEST] <kepstin> filtergraph escaping is kinda tricky
[21:10:28 CEST] <Vaska> kepstin: I'm using a windows binary too which probably complicates things
[21:11:13 CEST] <kepstin> ok, so on *linux*, you can do: ffplay -f lavfi 'amovie=filename=https\\\://www.example.com/audiofile.opus'
[21:11:35 CEST] <kepstin> note that the filename= is required, and that single quotes are needed to get the escaping right
[21:11:58 CEST] <kepstin> turning that into windows syntax is an exercise for the reader :/
[21:12:01 CEST] <Vaska> I get "Failed to avformat_open_input 'http'" when I try this --> ffplay.exe -f lavfi "amovie=http://wuwm.streamguys1.com/live.mp3,asplit[out0],showwaves[out1]"
[21:12:31 CEST] <kepstin> Vaska: ^^
[21:12:44 CEST] <BtbN> kepstin, pretty sure you can just omit most quotes and escapes
[21:13:06 CEST] <BtbN> Or just get bash
[21:13:30 CEST] <Vaska> same thing when I add amovie=filename=http://wuwm....
[21:13:45 CEST] <kepstin> Vaska: needs more backslashes.
[21:13:46 CEST] <BtbN> still need to escape the : for ffmpeg
[21:14:30 CEST] <kepstin> specifically, ffplay has to get three backslashes in front of the : because it gets unescaped twice for some reason
[21:15:01 CEST] <kepstin> if your shell interprets backslashes too, you may need more
[21:15:28 CEST] <Vaska> it seems to be balking at http though -> Failed to avformat_open_input 'http'
[21:16:10 CEST] <kepstin> Vaska: the character ':' is used to separate options in ffmpeg filters. So it reads up to the :, assigns the 'http' to the filename, then goes on and sets the next option with the rest of the string
[21:16:16 CEST] <kepstin> which is why you need to escape the :
[21:16:44 CEST] <Vaska> kepstin: ah, ok, trying some things, but a \ in front of the : doesn't seem to help
[21:16:56 CEST] <Vaska> OH!
[21:16:56 CEST] <kepstin> please go back and look at my example command again
[21:17:02 CEST] <kepstin> and the rest of what i wrote
[21:17:28 CEST] <Vaska> got it, thanks and sorry, had to use two backslashes, like this --> ffplay.exe -f lavfi "amovie=filename=http\\://wuwm.streamguys1.com/live.mp3,asplit[out0],showwaves[out1]"
[21:17:59 CEST] <kepstin> huh, you're right, it does work with just 2
[21:18:14 CEST] Action: kepstin is confused by the fact that it also works with 3.
[21:18:51 CEST] <Vaska> kepstin: thanks for the help though, wouldn't have figured it out without you
[21:23:24 CEST] <Asuran> kepstin, I'm not sure but is there something which makes ffmpeg take care of audio sync? I've got a feeling it's maybe a bit out of sync
[21:23:32 CEST] <Asuran> i used libvp9
[21:23:40 CEST] <Asuran> and libopus
[21:23:46 CEST] <Asuran> with -f webm
[21:24:28 CEST] <kepstin> Asuran: unless you do anything strange, ffmpeg should preserve the synchronization as it was in your original file.
[21:26:42 CEST] <nicolas17> how do I get filenames from a text file?
[21:26:53 CEST] <nicolas17> I was using glob patterns with -f image2 before, but now I need a specific list of files
[21:30:24 CEST] <kepstin> nicolas17: hmm, no way to do that with the image2 demuxer as far as I know. You might be able to use some external tool to concatenate the images then pipe them to ffmpeg (use the 'image2pipe' demuxer), depending on the image format used.
[21:30:33 CEST] <kepstin> I think that works ok with png and jpeg.
[21:30:46 CEST] <nicolas17> it's jpeg
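A sketch of the cat-and-pipe approach kepstin describes, assuming files.txt lists one JPEG per line with no spaces in the names; the frame rate and encoder settings are placeholders:

    xargs cat < files.txt | \
        ffmpeg -f image2pipe -c:v mjpeg -framerate 2 -i - \
        -c:v libx264 -pix_fmt yuv420p timelapse.mp4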
[21:38:15 CEST] <alexpigment> does anyone really know what the "Past duration too large" message is about?
[21:39:16 CEST] <kepstin> I actually read the code a while back to figure it out.
[21:40:06 CEST] <kepstin> it indicates that at the end of the filter chain, some frames have timestamps closer together than the indicated constant framerate would allow
[21:40:13 CEST] <kepstin> I think the offending frames get dropped
[21:41:02 CEST] <kepstin> the most common way to get it is probably to have something like a VFR mkv file where ffmpeg made a guess about the expected framerate, but was wrong or didn't look far enough ahead
[21:41:21 CEST] <alexpigment> ah
[21:41:33 CEST] <nicolas17> I'd love to turn all these pictures into a variable framerate video but I don't have subsecond timestamps D:
[21:42:10 CEST] <alexpigment> i've never really understood the appeal of VFR
[21:42:17 CEST] <alexpigment> especially for modern lossy codecs
[21:42:22 CEST] <alexpigment> or modern lossless codecs, for that matter
[21:42:26 CEST] <nicolas17> alexpigment: the pictures were taken with variable framerate to begin with
[21:42:39 CEST] <alexpigment> oh gotcha
[21:42:42 CEST] <nicolas17> because the camera's timer sucks, or it couldn't keep up to save them to the SD
[21:42:50 CEST] <kepstin> it's mostly not useful, but sometimes when you're dealing with mixed telecined vs. interlaced vs. progressive content from old NTSC.. :/
[21:43:29 CEST] <nicolas17> it's *supposed* to be 2fps (0.5s) but the actual interval averages to 0.625s with quite some drift :/
[21:44:50 CEST] <kepstin> I *think* that using -vsync 0 or -vsync 2 should allow vfr output and get rid of the 'Past duration too large' message? But I haven't actually checked, and I don't remember if that's correct.
[21:46:04 CEST] <kepstin> If the video was supposed to be CFR but there's just a lot of timestamp jitter, then just use '-vf fps=XXX' to fix it, of course.
[21:46:47 CEST] <kepstin> If you're concatenating two videos with different framerates then... I need to resubmit the patch that fixes the concat video filter to mark the output as vfr ;)
[21:48:38 CEST] <kepstin> (right now the concat filter sets the output fps to the fps of the first video input, so if the second is higher fps, then you'll get the 'Past duration too large' message a lot once the second video starts)
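Rough sketches of the two fixes kepstin mentions; the frame rate and filenames are placeholders:

    # keep variable frame rate instead of forcing CFR (same as -vsync 2)
    ffmpeg -i input.mkv -vsync vfr -c:v libx264 output.mkv

    # or, if the source was meant to be CFR with jittery timestamps, force a rate
    ffmpeg -i input.mkv -vf fps=30 -c:v libx264 output.mkv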
[21:54:07 CEST] <nicolas17> "If the video was supposed to be CFR but there's just a lot of timestamp jitter, then just use '-vf fps=XXX' to fix it"
[21:54:11 CEST] <nicolas17> kepstin: was that for me?
[21:57:20 CEST] <nicolas17> frame I:9 Avg QP:21.07 size: 31610
[21:57:21 CEST] <nicolas17> frame P:510 Avg QP:23.44 size: 992
[21:57:22 CEST] <nicolas17> frame B:1500 Avg QP:26.49 size: 268
[21:57:25 CEST] <nicolas17> okay wow that partly explains why the video is so small
[21:57:25 CEST] <kepstin> nicolas17: no, that's not really applicable to what you're doing.
[21:57:30 CEST] <nicolas17> that's a big GOP
[21:57:50 CEST] <kepstin> nicolas17: unless you specify, default gop on x264 is... 250, i think.
[21:57:57 CEST] <nicolas17> hah
[21:58:21 CEST] <nicolas17> keyint=250
[21:58:31 CEST] <kepstin> (although it has scene cut detection which can cause it to insert keyframes earlier)
[21:59:04 CEST] <nicolas17> in MPEG2 it was like 15, right?
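For reference, the x264 default kepstin mentions can be overridden with -g (maximum GOP length); the value, quality, and filenames here are placeholders:

    ffmpeg -i input.mp4 -c:v libx264 -g 50 -crf 20 output.mp4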
[22:00:08 CEST] <The_8472> <alexpigment> i've never really understood the appeal of VFR <- it can still help lossy codecs to get a deeper reference buffer. i'm seeing some decent savings with vfr + duplicate frame decimation with the mpdecimate filter when encoding webms.
[22:01:55 CEST] <The_8472> or when shoving TV content that switches rates into a single stream. i think mkv could also handle it some other way but multiple segments in a single mkv are arcane
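A sketch of the mpdecimate-plus-VFR approach The_8472 describes for webm; the quality settings are placeholders:

    ffmpeg -i input.mp4 -vf mpdecimate -vsync vfr \
        -c:v libvpx-vp9 -crf 33 -b:v 0 -c:a libopus output.webm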
[22:02:53 CEST] <alexpigment> The_8472: I suppose that makes sense. In my head I was just thinking about how no changes from frame to frame would result in very little extra data. And I guess I'm also thinking about how CFR video probably adheres to a standard and is more widely supported
[22:03:49 CEST] <The_8472> yes, but whenever someone implements code that assumes CFR in popular software it won't take long until someone opens a bug to complain :)
[22:04:00 CEST] <alexpigment> true :)
[22:04:29 CEST] <alexpigment> but knowing that there will be bugs in various software when using VFR is almost reason enough to not use it imho
[22:04:45 CEST] <nicolas17> 640x480, crf 24 -> 173kbps <3
[22:05:06 CEST] <The_8472> alexpigment, yes, best to stop using any software with non-zero complexity if you want to avoid bugs
[22:05:17 CEST] <nicolas17> heh :)
[22:05:25 CEST] <alexpigment> haha
[22:05:39 CEST] <alexpigment> I was really referring to VFR, but point taken :)
[22:06:49 CEST] <JEEB> talking of low bit rates, I should do another floppy thing to have senpai notice me
[22:07:20 CEST] <nicolas17> JEEB: ...now I kinda want to fit this timelapse into a floppy
[22:07:41 CEST] <JEEB> do note that headers start taking space at some point
[22:07:53 CEST] <JEEB> at which point you will have to start lowering frame rate
[22:08:49 CEST] <JEEB> (this was a 22min lower-SD clip of a cartoon with something pre-opus for audio)
[22:09:49 CEST] <nicolas17> this is only 5000 frames
[22:12:52 CEST] <roxlu> When I want to write raw h264 (e.g. that I receive from a socket) into a .mp4 file, do I need to set up the codec for the AVStream too? Also, how should I create and init the stream? I'm looking at this muxing example: https://ffmpeg.org/doxygen/2.0/doc_2examples_2muxing_8c-example.html which sets up a video stream / codec, which makes sense when encoding, but I already have encoded data
[22:26:50 CEST] <kepstin> roxlu: it might make sense to run the received h264 data through the "h264" demuxer, which will handle packetizing it, setting up timestamps, etc.
[22:29:58 CEST] <roxlu> kepstin: yeah I was thinking about that, thanks
[22:30:13 CEST] <roxlu> kepstin: I guess that's kinda the preferred solution from what I find online
[22:36:39 CEST] <roxlu> kepstin: although I would think there is no need for an input context
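A command-line sketch of the h264-demuxer route kepstin suggests; since a raw Annex B stream carries no timestamps, a frame rate has to be assumed (the 30 here and the filenames are placeholders):

    ffmpeg -f h264 -r 30 -i input.h264 -c copy output.mp4

Doing the same through the API would presumably mean opening the data with the "h264" input format (e.g. feeding it via a custom AVIOContext) rather than setting up an encoder.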
[22:40:50 CEST] <BtbN> you should not ever deal with raw h264 though
[22:40:57 CEST] <BtbN> it's lacking important information. Like timestamps
[22:41:31 CEST] <roxlu> BtbN: I think I have all the necessary information (decoder config, timestamps)
[22:41:56 CEST] <BtbN> Where do you get timestamps from for your raw h264 input?
[22:42:12 CEST] <roxlu> I don't get them from the h264, I have them in memory
[22:42:34 CEST] <kepstin> where are you getting the frames from then, and why are they already h264?
[22:42:44 CEST] <roxlu> webcam / hw-encoder
[22:42:49 CEST] <alexpigment> Hey guys, I'm drawing a blank. What's the command line parameter to ignore video rotation metadata?
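alexpigment's question doesn't get an answer in the log; for what it's worth, reasonably recent ffmpeg and ffplay builds accept -noautorotate as an input option to stop the rotation metadata from being applied (check your build's documentation). Filenames and codec here are placeholders:

    ffmpeg -noautorotate -i rotated.mp4 -c:v libx264 output.mp4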
[22:42:58 CEST] <BtbN> and it's not muxed into some format already?
[22:43:08 CEST] <roxlu> No
[22:43:15 CEST] <kepstin> roxlu: for a webcam, you should be using ffmpeg's device support for webcams, generally...
[22:43:52 CEST] <kepstin> roxlu: and ffmpeg has hwaccel support for several types of hardware encoders.
[22:44:04 CEST] <roxlu> kepstin: that's all fine, but I'm not looking for another way to deal with the data / input. I'm curious if ffmpeg is usable in this case
[22:44:09 CEST] <kepstin> roxlu: you're not another person using an rpi & builtin cam, are you?
[22:44:34 CEST] <roxlu> I understand the points you're making, but I can't change that I only have raw h264 data
[22:44:41 CEST] <roxlu> hehe no
[22:45:40 CEST] <kepstin> roxlu: well, the way I'd recommend proceeding is rather than attempting to stuff some data you got somehow into ffmpeg, you should write an ffmpeg input device so that others can also use this hardware ;)
[22:46:46 CEST] <kepstin> that's basically what you're going to end up writing in order to get this to work, really.
[22:47:14 CEST] <BtbN> How is this not a normal ipcam anyway? What's so special about it?
[22:47:25 CEST] <kepstin> maybe take a look at how the v4l2 driver handles webcams which return h264 frames as a reference.
[22:47:35 CEST] <nicolas17> kepstin: hm I plan to do something with a rpi and builtin cam in the future, what's the problem with that? :)
[22:48:04 CEST] <nicolas17> bad CPU for encoding?
[22:48:13 CEST] <kepstin> nicolas17: most people who try something like that end up getting frustrated by bottlenecks due to underperforming hardware and issues with wifi performance
[22:48:22 CEST] <nicolas17> oh ew wifi
[22:49:31 CEST] <nicolas17> I want to take a timelapse, every 2 or 3 seconds, and I wonder if I should save individual JPEGs or if I should use some video codec to use less space... although I think I may run out of battery before running out of space
[23:22:38 CEST] <kepstin> nicolas17: for that use case, an rpi would probably be fine, and using the hw encoder should in theory save a little battery power. Reducing amount of data written is a good idea because SD cards are mostly awful.
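A hedged sketch for the Pi timelapse idea, assuming the frames were saved as JPEGs and that the build includes the Raspberry Pi's h264_omx encoder; the interval, bitrate, and paths are placeholders:

    ffmpeg -framerate 0.5 -pattern_type glob -i 'shots/*.jpg' \
        -c:v h264_omx -b:v 2M -pix_fmt yuv420p timelapse.mp4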
[23:51:21 CEST] <nicolas17> looks like I'm exceeding the limits of the ffmpeg command line tool
[23:51:47 CEST] <nicolas17> not really looking forward to writing my own C program with libav* though
[00:00:00 CEST] --- Thu Jun 29 2017