[Ffmpeg-devel-irc] ffmpeg-devel.log.20161215
burek
burek021 at gmail.com
Fri Dec 16 03:05:03 EET 2016
[01:27:55 CET] <cone-706> ffmpeg Andreas Cadhalpun master:e558a6348ac1: 4xm: prevent overflow during bit rate calculation
[01:27:56 CET] <cone-706> ffmpeg Andreas Cadhalpun master:baba9c6aef88: cafdec: prevent overflow during bit rate calculation
[01:27:57 CET] <cone-706> ffmpeg Andreas Cadhalpun master:076c3a9fa23c: mov: prevent overflow during bit rate calculation
[01:31:49 CET] <cone-706> ffmpeg Andreas Cadhalpun master:ed412d285078: tiff: fix overflows when calling av_reduce
[04:37:09 CET] <cone-433> ffmpeg Chris Cunningham master:ab87df9a47cd: avformat/mp3dec: fix msan warning when verifying mpa header
[11:42:00 CET] <nevcairiel> does anyone happen to have a HDR broadcast test clip using Hybrid-Log-Gamma?
[11:42:14 CET] <JEEB> yeah I tried to look for those as well but didn't find any
[11:42:26 CET] <JEEB> to test the mpv opengl renderer implementation
[11:42:33 CET] <JEEB> because it was just done according to the spec :P
[12:25:48 CET] <kierank> nevcairiel: I can ask someone tonight
[13:10:04 CET] <durandal_1707> strange issue I have, clang miscompiles code if optimizations are used
[13:56:15 CET] <durandal_1707> BBB: is it ok to add >8 bit support to ssim filter?
[13:56:24 CET] <BBB> of course
[13:56:30 CET] <BBB> what an awfully strange question
[13:56:39 CET] <BBB> unless it breaks 8bit support?
[13:57:26 CET] <durandal_1707> I thought one could get the same results
[14:06:12 CET] <BBB> durandal_1707: should be good then :)
[15:04:33 CET] <durandal_1707> is there some program which removes gotos from source code while still keeping it functional?
[15:04:56 CET] <ubitux> yes, $EDITOR
[15:29:35 CET] <BBB> is there a problem with gotos?
[15:32:32 CET] <funman> if you have just a couple, gotos are considered handful
[15:33:27 CET] <durandal_1707> any goto is harmful
[15:35:02 CET] <wm4> I use gotos instead of loops because I'm a prick
[15:38:04 CET] <BBB> I agree gotos can make code unreadable, I also believe that in some situations, they are more readable than other solutions.. its very situation-dependent
[15:53:52 CET] <iive> agree
[16:05:14 CET] <cone-828> ffmpeg Paul B Mahol master:745f4bcc2c1d: avfilter/vsrc_testsrc: draw_bar: make sure width is not negative
[17:50:58 CET] <BBB> so nobody gives a crap about the massive buttload of overflow checks andreas wants to add to each and every single decoder in our tree?
[17:51:12 CET] <BBB> thats rather saddening TBH :(
[17:53:25 CET] <wm4> haven't even checked the ML today
[17:53:37 CET] <wm4> too busy doing more important things
[17:53:41 CET] <wm4> (baking christmas cookies)
[17:53:53 CET] <jamrial> BBB: aren't they all in header parsing functions and such? it's not like they are in the middle of a decode loop
[17:54:22 CET] <wm4> BBB: those 6 patches?
[17:54:29 CET] <BBB> Im not worried about runtime complexity
[17:54:33 CET] <BBB> Im worried about code complexity
[17:54:43 CET] <jamrial> ah
[17:54:54 CET] <BBB> the extension of this is that each and every multiplication in our whole tree needs an overflow check to make ubsan happy
[17:55:00 CET] <BBB> I find that terribly worrying
[17:55:39 CET] <BBB> as in, Im not sure ubsan is that helpful in that case (see also previous arguments Ive made about helgrind)
[17:56:09 CET] <jamrial> you can tell him you're against all these, and it would be a blocker until he comes up with a better generic solution
[17:56:17 CET] <rcombs> don't necessarily need actual checks in all cases
[17:56:18 CET] <BBB> but at the very least I dont like the proposed fixes because - as said - at the extreme, this approach means every single multiplication needs an overflow check, a specific error message and a return value
[17:56:22 CET] <BBB> I dont think that makes sense
[17:56:30 CET] <BBB> the error message certainly isnt helpful to users
[17:56:42 CET] <BBB> nor is the error return value
[17:56:42 CET] <rcombs> like, you can assert(a < INT_MAX && b < INT_MAX); a * b
[17:56:56 CET] <rcombs> (for a 64-bit mul)
[17:57:04 CET] <BBB> I think I said that Im against that already, but he pushed some anyway
[17:57:27 CET] <BBB> (I dont want to get into revert war territory, Im trying to see if others agree with my point of view)
[17:57:29 CET] <rcombs> which is basically a hint that "the overflow will never happen"
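
A minimal sketch of the assert-as-hint pattern rcombs describes, assuming both factors are already known to be non-negative and 32-bit sized (illustrative only, not code from the patches under discussion):

    #include <assert.h>
    #include <limits.h>
    #include <stdint.h>

    static int64_t mul_checked(int64_t a, int64_t b)
    {
        /* factors known to be non-negative and to fit in 32 bits cannot
         * overflow a 64-bit product, so the assert documents (and checks
         * in debug builds) that this multiply is safe */
        assert(a >= 0 && a <= INT_MAX && b >= 0 && b <= INT_MAX);
        return a * b;
    }
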
[17:58:30 CET] <jamrial> BBB: as long as you say "please listen" instead of "I'm against this, don't push it and find a better solution", he will consider it a suggestion he can ignore and will go ahead with the first positive review he gets afterwards
[17:58:41 CET] <rcombs> well there he goes
[17:58:44 CET] <rcombs> I think ubsan is generally worthwhile, and if asserts like that are necessary for it to work properly, that's probably not too much to ask
[17:58:52 CET] <rcombs> though I haven't seen the patches in question
[17:59:48 CET] <rcombs> BBB: <rcombs> I think ubsan is generally worthwhile, and if asserts like that are necessary for it to work properly, that's probably not too much to ask
[17:59:48 CET] <rcombs> <rcombs> though I haven't seen the patches in question
[18:00:16 CET] <BBB> i wish ubsan could tell us about values being used, similar to valgrind
[18:01:05 CET] <BBB> sorry for the offline, had to switch locations for something...
[18:01:15 CET] <rcombs> though
[18:01:22 CET] <rcombs> why not just change bit_rate to be unsigned
[18:02:16 CET] <rcombs> (reading the patches now)
[18:03:26 CET] <nevcairiel> Negative bitrate is not very meaningful indeed, but it still only fixes one particular occurrence of such troubles and he'll find others
[18:03:44 CET] <jamrial> BBB: as long as you say "please listen" instead of "I'm against this, don't push it and find a better solution", he will consider it a suggestion he can ignore and will go ahead with the first positive review he gets afterwards
[18:03:48 CET] <nevcairiel> So a generic agreement is needed either way
[18:05:15 CET] <nevcairiel> Checking every single signed multiplication is of course quite ugly and should be avoided
[18:06:21 CET] <rcombs> maybe we should have a function in mathematics.h that multiplies its arguments and returns INT64_MAX if it would overflow
[18:08:12 CET] <rcombs> a la __builtin_mul_overflow
[18:08:51 CET] <rcombs> could have a variadic version that does this on an arbitrary number of inputs
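
A rough sketch of such a helper built on the __builtin_mul_overflow builtin rcombs mentions (GCC 5+ and Clang); sat_mul64 is a made-up name for illustration, not an existing mathematics.h function:

    #include <stdint.h>

    /* multiply two 64-bit values, returning INT64_MAX instead of
     * invoking undefined behaviour when the product would overflow */
    static int64_t sat_mul64(int64_t a, int64_t b)
    {
        int64_t res;
        if (__builtin_mul_overflow(a, b, &res))
            return INT64_MAX;
        return res;
    }
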
[18:10:58 CET] <nevcairiel> Can most of these overflows even happen? Most of the time we read a fixed number of bits, fewer than 32, and such analyzers are often too dumb to know that. So should we uglify code because of limited analyzers?
[18:15:21 CET] <rcombs> the 4xm one is multiplying 3 32-bit integers from the file, so that could legitimately happen
[18:15:45 CET] <rcombs> but would be solved by a mul function that returns INT64_MAX on overflow
[18:16:55 CET] <nevcairiel> Definitely less ugly than if checks everywhere
[18:18:39 CET] <rcombs> CAF is a 32-bit int cast from a double, multiplied by a 32-bit int from the file, times 8, divided by a 64-bit int
[18:19:00 CET] <jamrial> rcombs: no, it's three signed 32-bit integers, since there's already a check for <= 0 for all three of them
[18:19:12 CET] <jamrial> that can't overflow int64_t
[18:19:35 CET] <jamrial> so the new checks andreas added are not necessary, but the int64_t cast probably is
[18:20:35 CET] <rcombs> (2^31-1)^3?
[18:21:26 CET] <rcombs> is my math badly wrong
[18:25:48 CET] <jamrial> rcombs: no, you're right, seems i messed up mine
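
For the record, the arithmetic: (2^31 - 1)^3 is about 9.9 * 10^27, while INT64_MAX = 2^63 - 1 is about 9.2 * 10^18, so a product of three arbitrary positive 32-bit values can indeed overflow int64_t; a product of two (at most about 4.6 * 10^18) cannot.
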
[18:26:19 CET] <rcombs> mov has a 64-bit int in the multiplication, so yes that can also happen
[18:26:44 CET] <rcombs> but in general since bitrate isn't that important a value, just clamping to INT64_MAX on overflow seems fine
[18:27:09 CET] <rcombs> I guess in theory when dividing at the end you should upcast to something higher-precision, or to double, but I can't bring myself to care
[18:27:29 CET] <rcombs> and someone will yell at me for using FPU unnecessarily =
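
With a saturating multiply like the sat_mul64 sketched above, the clamping rcombs suggests becomes a one-liner (all variable names here are hypothetical, not the actual mov/CAF field names):

    bit_rate = sat_mul64(8LL * bytes_per_packet, sample_rate) / frames_per_packet;
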
[18:28:29 CET] <durandal_1707> wait to see if he will start adding this for every nonsense case..
[18:50:10 CET] <BBB> durandal_1707: he said hes currently doing it for codec parameters, it sounds like he wants to do it for anything ubsan notices
[18:50:45 CET] <BBB> so as a generic solution, maybe its a good idea to make all these values unsigned? if that prevents the ub, then we can ignore the weird values and do clipping in a more generic way in something/utils.c
[18:50:52 CET] <BBB> that might make everyone happy
[18:51:22 CET] <BBB> Im just trying to prevent every decoder from having 10-20 if (bla bla > INT_MAX / sizeof(bla bla)) { av_log(some error); return some error; } blocks
[18:51:33 CET] <BBB> especially because the some error isnt going to be helpful to any user
[18:51:57 CET] <BBB> av_log(your file was fuzzed, stop doing that!);
[18:52:10 CET] <durandal_1707> lol
[18:53:58 CET] <rcombs> for cases where the overflow is ridiculous, we could have a macro for the content of the block
[18:54:34 CET] <rcombs> like FAIL_CRAZY_FILE(INT64_MAX / int1 > int2)
[18:54:57 CET] <rcombs> which expands to if (&) {av_log(&); return AVERROR_INVALIDDATA}
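
A sketch of what that macro might expand to; FAIL_CRAZY_FILE is rcombs' hypothetical name, the AVFormatContext *s used for logging is assumed to be in scope, and the argument here is the failure condition (the inverse of his inline example):

    /* hypothetical convenience macro for demuxers: reject the file
     * when a value read from it fails a plausibility check */
    #define FAIL_CRAZY_FILE(cond) do {                                  \
        if (cond) {                                                     \
            av_log(s, AV_LOG_ERROR, "implausible field in file\n");     \
            return AVERROR_INVALIDDATA;                                 \
        }                                                               \
    } while (0)

    /* usage: fail when the product of two positive file-provided
     * values would overflow int64_t */
    FAIL_CRAZY_FILE(int2 > 0 && int1 > INT64_MAX / int2);
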
[18:59:57 CET] <BBB> that would be slightly nicer, yes
[19:00:27 CET] <wm4> <BBB> av_log(your file was fuzzed, stop doing that!); <- I like this solution
[19:00:47 CET] <wm4> fuzz files -> watch ubsan output -> fix every single one of them -> send 1000s of patches
[19:01:08 CET] <wm4> is this really a good way to spend time, does it really make anything safer or less buggy
[19:01:44 CET] <BBB> I dont think ubsan fundamentally helps in that respect
[19:01:52 CET] <BBB> Ive complained about helgrind before also, its sort of the same problem
[19:02:13 CET] <BBB> I would like tools to help us find and fix bugs, not tell us things the authors of that tool hated but that arent necessarily harmful in and of themselves
[19:02:28 CET] <iive> are the results of the overflows used in memory address calculations?
[19:03:26 CET] <BBB> no, theyre printed out in some tools as metadata
[19:03:40 CET] <BBB> so you get things like bitrate: -100000000000 bps
[19:54:50 CET] <Compn> BBB : would it be better to have asserts than overflow checks ?
[19:54:55 CET] <Compn> dumb question possibly
[19:55:04 CET] <Compn> same amount of buttload.
[20:16:08 CET] <BBB> Compn: asserts are for programming mistakes
[20:16:15 CET] <BBB> Compn: some of these fields exist in files
[20:16:29 CET] <BBB> Compn: e.g. imagine a ffm (ffservers internal format) file with channels=UINT_MAX
[20:16:39 CET] <BBB> Compn: so no asserts
[20:16:59 CET] <Compn> BBB : my ultimate idea was to run over every file to find out the valid and invalid numbers and then assert any out of bounds...
[20:17:07 CET] <Compn> for each function.
[20:17:11 CET] <Compn> yes , i may be mad.
[20:17:17 CET] <Compn> but it would work :D
[20:18:05 CET] <Compn> every file = at least 1000 files for each codec that is
[20:26:45 CET] <BBB> Compn: youre mad :D
[20:30:15 CET] <RobertHorstmann1> Hello ... a short question about NVENC functionality in FFMPEG ... NVIDIA released version 7.1 of the NVENC SDK a few weeks ago. FFMPEG currently is at 7.0. Are there any plans to upgrade to 7.1? Thank you!
[20:31:21 CET] <J_Darnley> Isn't that for the users to do?
[20:33:07 CET] <RobertHorstmann1> I am not sure, sorry. Usually, once a new SDK comes out, there are changes to the FFMPEG code. There were a number of git commits related to SDK 7.0, for example.
[20:33:21 CET] <JEEB> nevcairiel: btw I wonder if anyone ripped the BBC Planet Earth II HLG sample
[20:33:31 CET] <JEEB> http://www.bbc.co.uk/mediacentre/latestnews/2016/planet-earth-2-uhd
[20:35:44 CET] <durandal_1707> this channel is publicly logged
[20:35:48 CET] <Compn> BBB : but i am smart.
[20:36:35 CET] <Compn> RobertHorstmann1 : a number of our devs have been reading the nvenc sdk changes... its complicated ?
[20:36:52 CET] <Compn> im not sure what the plans are exactly
[20:37:01 CET] <Compn> is there a change you want done specifically, RobertHorstmann1?
[20:37:51 CET] <Compn> BBB : also my suggestion is smarter than running fuzz testing haha
[20:37:56 CET] <RobertHorstmann1> Changelog for SDK version 7.1 lists quality improvements for temporal adaptive quantization.
[20:38:26 CET] <BBB> Compn: that is probably true
[20:38:56 CET] <Compn> by "complicated" i meant the sdk changes are poorly documented.
[20:39:14 CET] <durandal_1707> Compn: don't type
[20:41:24 CET] <Compn> durandal_1707 : you can never take muh freedommmmm!
[20:42:04 CET] <durandal_1707> you are CoC evader
[20:42:28 CET] <Compn> you know it
[20:42:36 CET] <Compn> no one likes a big coc in their face
[20:43:06 CET] Action: Compn hides
[20:43:20 CET] <Compn> RobertHorstmann1 : so yes, the plan is to keep updated with the sdk. i dont have a timetable.
[20:44:08 CET] <Compn> feel free to file a bug on the trac though. http://trac.ffmpeg.org
[20:45:45 CET] <RobertHorstmann1> thank you ... that's good to know ... and I'll have a look at the trac. have a nice evening!
[22:30:09 CET] <cone-828> ffmpeg 03Michael Niedermayer 07master:c62beba49a90: avcodec/rscc: return the packet size instead of 0
[22:30:10 CET] <cone-828> ffmpeg 03Michael Niedermayer 07master:2eebcda10a65: avcodec/screenpresso: return the packet size instead of 0
[22:30:11 CET] <cone-828> ffmpeg 03Michael Niedermayer 07master:0888c5a24273: avcodec/tdsc: return the packet size instead of 0
[22:30:12 CET] <cone-828> ffmpeg 03Michael Niedermayer 07master:c869e00f8810: avcodec/smvjpegdec: return the packet size instead of 0
[22:30:13 CET] <cone-828> ffmpeg 03Michael Niedermayer 07master:eb7aa6bde42c: avcodec/h263dec: Return the correct error code in explode mode
[22:34:06 CET] <blue_misfit> hey guys, any words of wisdom on using ffmpeg to transcode large files from S3?
[22:34:34 CET] <blue_misfit> we're reading via HTTP and in some cases we end up with truncated results
[22:34:50 CET] <blue_misfit> working theory is that the connection is closed for some reason and ffmpeg thinks the source is finished
[22:37:32 CET] <blue_misfit> can somebody help me understand how ffmpeg would (for example) deal with reading a 200 GB ProRes MOV file over HTTP?
[22:37:41 CET] <blue_misfit> like, I assume it does a series of range requests?
[22:38:59 CET] <BtbN> it will most likely just do one normal http get request, and keep reading from it as fast as the chain after it can consume it.
[22:39:15 CET] <blue_misfit> BtbN, thanks for the info
[22:39:21 CET] <BBB> blue_misfit: is the terminal point reproducible?
[22:39:24 CET] <blue_misfit> any thoughts on a best practice here?
[22:39:29 CET] <BBB> blue_misfit: like, does it always truncate at the same point?
[22:39:30 CET] <blue_misfit> BBB, yes, in about 10-15% of cases
[22:39:34 CET] <blue_misfit> oh, no
[22:39:37 CET] <blue_misfit> different point each time
[22:39:40 CET] <BtbN> Don't access huge files via http.
[22:39:41 CET] <BBB> blue_misfit: yeah, so its just connection close then
[22:39:53 CET] <BBB> you could make the http reader understand disconnects and auto-reconnect
[22:39:54 CET] <BtbN> Sounds to me like the server just closes the connection at some point, because it has a hard timeout
[22:39:59 CET] <blue_misfit> yeah
[22:40:04 CET] <BBB> but dont forget your token on s3 only has limited lifespan
[22:40:21 CET] <BBB> so you need to change the default lifespan for the token to keep it alive in addition to auto-reconnect
[22:40:22 CET] <blue_misfit> maybe - AWS support basically told me: "I am not familiar with how ffmpeg reads the source files, but I would guess that it reads using a single thread sequentially. It might be running into an issue if it is making all the requests over the same HTTP connection. S3 will only accept up to 100 requests before it will close the connection on its side. Unfortunately there isn't much we can do to alter the way the software reads data from the video source."
[22:40:30 CET] <BBB> otherwise itll give you a connection refused because the token is no longer valid
[22:40:37 CET] <blue_misfit> ok we can check that for sure
[22:40:38 CET] <BtbN> There is only one request
[22:41:01 CET] <BBB> yeah its one request
[22:41:07 CET] <blue_misfit> ok
[22:41:14 CET] <blue_misfit> so given the use case (AWS encoding of files on S3)
[22:41:23 CET] <blue_misfit> having to pull down a 200 gig file is pretty painful
[22:41:31 CET] <blue_misfit> hugely more efficient to just read directly
[22:41:39 CET] <BtbN> Well, you download it anyway, so might as well cache it for the moment
[22:41:42 CET] <blue_misfit> do you guys see a better way to do this?
[22:41:50 CET] <BBB> add some code to http.c to reconnect
[22:41:59 CET] <blue_misfit> ;)
[22:42:05 CET] <BBB> and make sure it didnt close it because the token expired
[22:42:10 CET] <BBB> thats what I would look for
[22:42:13 CET] <blue_misfit> yeah our tokens are good for 24h
[22:42:15 CET] <BtbN> Or rather to automatically do range-requests, for a given max size
[22:42:19 CET] <blue_misfit> and these jobs take a couple hours max
[22:42:40 CET] <blue_misfit> is there code already to do the range requests or are we going to have to write code regardless?
[22:42:50 CET] <BBB> youll need to write code I think
[22:42:56 CET] <BBB> not 100% sure :-p
[22:43:28 CET] <BBB> brb
[22:44:30 CET] <blue_misfit> haha someone ping me if you're potentially interested in writing this for me as a contract
[22:45:19 CET] <BtbN> adding a max request size wouldn't even be that hard
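
For illustration, a reader doing that would replace the single open-ended GET with a series of bounded requests, each answered with 206 Partial Content (host and chunk size below are arbitrary):

    GET /bar/bas.mov HTTP/1.1
    Host: example-bucket.s3.amazonaws.com
    Range: bytes=0-67108863

    GET /bar/bas.mov HTTP/1.1
    Host: example-bucket.s3.amazonaws.com
    Range: bytes=67108864-134217727
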
[22:45:21 CET] <Compn> blue_misfit : why not just wget files first and work locally ?
[22:45:35 CET] <Compn> or probably adjust the tcp timeout and retry settings in http
[22:45:39 CET] <BtbN> Might just not be enough space for that.
[22:45:51 CET] <BtbN> It's not a tcp timeout, the connection is busy
[22:45:59 CET] <BtbN> Just the server having a hard max connection time or something.
[22:46:49 CET] <blue_misfit> we could make enough space
[22:47:00 CET] <blue_misfit> but then we wait x amount of time before even starting the transcode
[22:47:09 CET] <blue_misfit> considering 200 GB sources, it's pretty painful
[22:47:20 CET] <Compn> blue_misfit : why not mount the http (or other) server as a fuse filesystem?
[22:47:28 CET] <Compn> and then let fuse take care of it
[22:47:32 CET] <blue_misfit> that's kind of a neat idea actually
[22:47:37 CET] <blue_misfit> feels a little hacky but it might work :D
[22:47:50 CET] <Compn> i accept paypal or bitcoin :P
[22:48:01 CET] <Compn> or cash/checks
[22:48:05 CET] <Compn> i'm easy going lol
[22:48:11 CET] <blue_misfit> hehehe
[22:48:29 CET] <Compn> not sure if there is an AWS fuse module
[22:48:41 CET] <Compn> although that would be $$$ to code up... probably lots of people want that
[22:49:00 CET] <Compn> im not sure what kind of connections you can run between you and amazonian
[22:49:53 CET] <blue_misfit> what does the 'multiple_requests' flag do for the http protocol handler?
[22:49:55 CET] <Compn> you probably figured it out already and im just talking to myself now lol
[22:50:05 CET] <Compn> in which handler ?
[22:50:13 CET] <blue_misfit> nah not at all, I'm screwed lol
[22:50:16 CET] <Compn> i mean ... what interface ? ffmpeg? aws?
[22:50:19 CET] <blue_misfit> https://ffmpeg.org/ffmpeg-protocols.html#http
[22:50:20 CET] <blue_misfit> yeah ffmpeg
[22:50:46 CET] <Compn> enables persistent connections, which you want
[22:50:50 CET] <Compn> so you should enable that
[22:50:58 CET] <Compn> the default is no persistent connections
[22:51:01 CET] <blue_misfit> nice!
[22:51:03 CET] <blue_misfit> I will try
[22:51:23 CET] <Compn> now whether or not that fixes things or introduces more problems, i have no idea ;)
[22:51:54 CET] <blue_misfit> so I'd do ffmpeg -multiple_requests 1 http://foo/bar/bas.mov
[22:52:01 CET] <Compn> oh yeah here you go, blue_misfit https://github.com/s3fs-fuse/s3fs-fuse/wiki/Fuse-Over-Amazon
[22:52:26 CET] <blue_misfit> max file size = 64 GB :(
[22:52:30 CET] <blue_misfit> I need 200+ GB
[22:52:38 CET] <Compn> ah i see that :D
[22:53:26 CET] <Compn> anyway you could change your file sizes on aws to 64gb? ffmpeg can cut at the 64gb mark...
[22:53:45 CET] <blue_misfit> It would be a little tricky
[22:53:52 CET] <blue_misfit> I'm running an x264 first pass encode
[22:54:17 CET] <Compn> x264 should possibly be able to do so too, but i have not checked, dont quote me
[22:54:18 CET] <blue_misfit> so ideally that process would read the whole source straight through
[22:54:25 CET] <blue_misfit> yeah
[22:54:34 CET] <blue_misfit> I could possibly split the first pass into a few pieces and then combine the stats files
[22:54:40 CET] <Compn> well i mean its just ffmpeg -i 1 -i 2 -i 3 -i 4...
[22:54:45 CET] <blue_misfit> OH
[22:55:00 CET] <Compn> or whatever program should do multiple input maybe
[22:55:05 CET] <Compn> but this might just complicate things further
[22:55:12 CET] <blue_misfit> we're all ffmpeg (libx264 in ffmpeg)
[22:55:16 CET] <Compn> which you probably dont need.
[22:55:17 CET] <Compn> oh
[22:55:30 CET] <blue_misfit> when I do that, can I concatenate all those input statements and have it look like a single input stream to libx264?
[22:55:38 CET] <Compn> 200 gb over http, i'm impressed any of that works actually.
[22:55:39 CET] <Compn> yes
[22:55:44 CET] <blue_misfit> it does!! most of the time!!
[22:55:47 CET] <blue_misfit> lol like 90% of the time!
[22:55:54 CET] <Compn> thats crazy to me honestly
[22:55:59 CET] <blue_misfit> can you give me an example of how to do that?
[22:56:05 CET] <blue_misfit> S3 is pretty wicked honestly
[22:56:07 CET] <Compn> lets seeeee
[22:56:41 CET] <Compn> https://trac.ffmpeg.org/wiki/Concatenate
[22:57:42 CET] <llogan> maybe sshfs would be worth a look. never used s3 so maybe it's a dumb idea. also, it's fuse client based.
[22:58:10 CET] <Compn> yes sshfs is also useful
[22:58:16 CET] <Compn> and very easy
[23:01:25 CET] <blue_misfit> how could sshfs help me? Isn't that for mounting sftp servers?
[23:04:19 CET] <Compn> if you can access your aws via ssh, sshfs just makes the ssh connection a local hd
[23:04:37 CET] <blue_misfit> well, but that would connect me to maybe the ebs store of an ec2 instance
[23:04:44 CET] <blue_misfit> it wouldn't connect me to s3
[23:05:02 CET] <Compn> ok, so the connections to the s3 are limited, i just dont know s3 stuff...
[23:05:12 CET] <Compn> we're kind of throwing darts in the dark here
[23:05:18 CET] <blue_misfit> ya
[23:05:30 CET] <blue_misfit> s3 is the big massive object storage system that you interact with via http(s) only
[23:05:53 CET] <Compn> ah
[23:05:57 CET] <blue_misfit> ebs = basically kinda like local disks on a vm
[23:06:00 CET] <blue_misfit> mounted kinda via iscsi
[23:06:02 CET] <blue_misfit> but it's block data
[23:06:12 CET] <blue_misfit> s3 is all object storage via rest
[23:06:19 CET] <blue_misfit> then there's also glacier which is their offline cold storage
[23:06:43 CET] <Compn> blue_misfit : as to your question, i'm not sure concat works over http. i'm trying to find more info
[23:07:13 CET] <blue_misfit> yeah, trying to cobble something together for a POC
[23:10:00 CET] <Compn> see mencoder could do this ez :P
[23:10:10 CET] <Compn> just mencoder kinda sucks
[23:10:59 CET] <blue_misfit> lol!!!
[23:11:13 CET] <Compn> oh wait i got it working
[23:11:26 CET] <Compn> for some reason it doesnt work in a txt file
[23:11:43 CET] <Compn> ffmpeg -i "concat:http://1|http://2" -encodingoptions
[23:11:48 CET] <Compn> there you go blue_misfit
[23:12:07 CET] <blue_misfit> so I'd need to support -ss and -to for each of those segments
[23:12:21 CET] <Compn> you're busting my balls.
[23:12:24 CET] <blue_misfit> hehehehehe
[23:12:29 CET] <Compn> let me seeeeee...
[23:13:43 CET] <blue_misfit> works locally
[23:13:44 CET] <blue_misfit> ffmpeg -ss 0 -i AlvinAndTheChipmunks3Chipwrecked_Feature_ENG.mov -t 5 -ss 5 -i AlvinAndTheChipmunks3Chipwrecked_Feature_ENG.mov -t 5 -ss 10 -i AlvinAndTheChipmunks3Chipwrecked_Feature_ENG.mov -t 5 -ss 15 -i AlvinAndTheChipmunks3Chipwrecked_Feature_ENG.mov -t 5 -ss 20 -i AlvinAndTheChipmunks3Chipwrecked_Feature_ENG.mov -t 5 -filter_complex "[0][1][2][3][4]concat=n=5:v=1[outv]" -map "[outv]" -c:v libx264 -preset superfast local-concat.mp4
[23:13:57 CET] <blue_misfit> trying over s3 now..
[23:13:58 CET] <nevcairiel> Seeking is probably not accurate enough, it would likely be easier to hack up a new protocol to just open a new connection occasionally
[23:14:09 CET] <blue_misfit> well, seeking is accurate in our case since we have all-intra sources
[23:14:15 CET] <blue_misfit> pro-res :)
[23:14:52 CET] <Compn> blue_misfit : try dis ffmpeg -ss 1 -i "concat:http://1" -ss 30 -i "concat:http2"
[23:15:14 CET] <Compn> hmm no i dont think that works
[23:15:38 CET] <blue_misfit> hmm so it worked in this short test
[23:16:00 CET] <Compn> oh ok theres a concat protocol and a concat demuxer, i see now
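
For reference, the concat demuxer route over HTTP looks roughly like this (URLs are placeholders; http entries in a list file need -safe 0, and recent builds also need a protocol whitelist, which is exactly what the blog link further down addresses):

    # list.txt
    file 'http://example.com/part1.mov'
    file 'http://example.com/part2.mov'

    ffmpeg -protocol_whitelist file,http,tcp -f concat -safe 0 -i list.txt -c:v libx264 out.mp4
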
[23:16:00 CET] <blue_misfit> https://paste.ee/p/oRXws
[23:18:16 CET] <Compn> your job is to make sure people can watch the chipwrecked sequel lol
[23:18:21 CET] <Compn> put that on the old resume :D
[23:18:57 CET] <Compn> so if your example works with the local file, does it work with the http files ?
[23:19:08 CET] <Compn> oh you are trying
[23:19:14 CET] Action: Compn sits back and enjoys
[23:19:42 CET] <blue_misfit> hahahaha
[23:19:46 CET] <blue_misfit> it's just a sample trailer
[23:19:55 CET] <blue_misfit> it does
[23:20:03 CET] <blue_misfit> using 5 second chunks did actually work
[23:20:06 CET] <blue_misfit> trying more now..
[23:20:11 CET] <Compn> but does this actually solve your problem ?
[23:20:15 CET] <blue_misfit> unknown
[23:20:20 CET] <blue_misfit> we'll have to deploy and test at scale
[23:20:27 CET] <blue_misfit> like I said - what we're doing now works most of the time :D
[23:20:43 CET] <blue_misfit> oh....
[23:20:46 CET] <blue_misfit> actually I think I messed up
[23:26:34 CET] <blue_misfit> sooo it kinda works
[23:26:37 CET] <blue_misfit> it seems to read all the data
[23:26:43 CET] <blue_misfit> but the final output file only contains the first chunk
[23:27:43 CET] <blue_misfit> https://paste.ee/p/bW4FT
[23:29:19 CET] <Compn> blue_misfit : i think you want ffmpeg -i http:// -f http -multiple_requests output.mp4
[23:29:25 CET] <Compn> to enable multiple requests feature
[23:29:41 CET] <blue_misfit> without any of this crazy concat stuff?
[23:29:43 CET] <Compn> or maybe add a 1 after multiple_requests
[23:29:45 CET] <Compn> yes
[23:29:47 CET] <blue_misfit> ok
[23:29:48 CET] <blue_misfit> let me try
[23:30:03 CET] <Compn> oops, dont add any numbers after multiple_requests
[23:30:18 CET] <blue_misfit> oh yeah?
[23:30:28 CET] <blue_misfit> I assumed it would be -multiple_requests 1 -i http://.....
[23:30:30 CET] <blue_misfit> but ok
[23:31:07 CET] <Compn> well try both see what happens
[23:31:14 CET] <blue_misfit> Requested output format 'http' is not a suitable output format
[23:31:17 CET] <Compn> note that im no expert ;)
[23:33:04 CET] <Compn> blue_misfit : oh this is probably maybe what you need http://blog.yo1.dog/fix-for-ffmpeg-protocol-not-on-whitelist-error-for-urls/
[23:33:10 CET] <blue_misfit> -multiple_requests 1 -i http:// seems to at least not throw errors
[23:33:12 CET] <Compn> to do concat stuff
[23:33:22 CET] <Compn> i think my ffmpeg may be too old to help :D
[23:33:26 CET] Action: Compn needs to update
[23:52:06 CET] <blue_misfit> thanks for your help, Compn ! We're going to deploy a test build with -multiple_requests 1 and see the impact
[23:52:20 CET] <blue_misfit> I verified it's setting http connection type 'keep-alive' instead of 'close'
[23:52:27 CET] <blue_misfit> not sure what S3 will do with that, but we'll see :)
[23:53:14 CET] <Compn> technically you found that option, i didnt help much
[00:00:00 CET] --- Fri Dec 16 2016