[Ffmpeg-devel-irc] ffmpeg-devel.log.20180222

burek burek021 at gmail.com
Fri Feb 23 03:05:03 EET 2018


[00:01:39 CET] <jamrial> kierank: doing that will probably break a lot of weird samples h264dec has been tuned to parse
[00:02:02 CET] <kierank> ok 
[00:02:10 CET] <kierank> good i guess
[00:02:27 CET] <jamrial> also, every time some big change was done to h264dec you'd freak out and say all the previous fuzzing suddenly became pointless :p
[00:02:43 CET] <jkqxz> Yeah, it's not really a sane thing to do because of the legacy, even if it would kill all the stupid fuzzing things.
[00:04:04 CET] <jamrial> jkqxz: first batch of fuzzing found a lot of real issues, especially with slice threading decoding
[00:04:18 CET] <jamrial> recent stuff is mostly UB, which is not really that useful
[00:04:25 CET] <jkqxz> Fuzzing for meaningless undefined behaviour in bitstream reading, I mean.
[00:04:31 CET] <jamrial> yeah
[00:05:17 CET] <kierank> yeah most of them these days are useless
[00:05:42 CET] <jkqxz> The "oh noes you can get a signed overflow on this variable just before the parser detects the error and returns failure!" stuff.
[00:06:45 CET] <JEEB> yea, too bad it doesn't figure that the case was handled
[00:07:59 CET] <jkqxz> Well technically it isn't because the undefined behaviour lets the compiler make it delete all your files and punch you in the face before the check happens, but meh.
[00:08:13 CET] <JEEB> right
[00:09:43 CET] <jdarnley> Will the next version of C please define signed integers as twos complement.
[00:10:23 CET] <iive> it would be great
[00:11:19 CET] <jkqxz> Or at least implementation-defined to something nonstupid, prohibiting nasal demons.
[00:11:46 CET] <jkqxz> (I bet there are still crazy people in the standards stuff who would reject a twos complement definition.)
[00:13:07 CET] <iive> are there any cpu's that don't use it?
[00:13:39 CET] <iive> i mean, currently being manufactured.
[00:13:52 CET] <peloverde> It's not just two's complement issues. The compiler folks like being able to assume that you can't drive the result negative by adding two positives because it enables a bunch of peephole optimizations.
[00:14:23 CET] <iive> that could be resolved in more elegant way.
[00:14:30 CET] <iive> e.g. explicit overflow handling.
[00:14:38 CET] <iive> it's good for security too.
[00:14:54 CET] <jkqxz> Probably the sort of CPUs you use to run nuclear power stations.
[00:15:16 CET] <iive> jkqxz, intel atoms ?
[00:15:29 CET] <nevcairiel> nah, something from the 60s
[00:16:02 CET] <peloverde> Of course they insist negative << [0,31] is UB don't bother writing the optimization for that one https://t.co/kffl71HQYV
[00:16:30 CET] <jkqxz> peloverde:  "Signed overflow may return any value in the range of the type" is fine for optimisation.
[00:17:03 CET] <peloverde> I agree it's fine, but they want to optimize out a bunch of conditionals
[00:17:28 CET] <peloverde> watch chandler carruth's talk on UB if you want to know how they think
[00:17:42 CET] <jkqxz> No, I mean that is sufficient for them to optimise out conditionals as well if they want (since they can assert that it will always return a particular value which doesn't trigger something).
[00:17:46 CET] <jdarnley> iive: haha
[00:18:08 CET] Action: iive is happy somebody figured out the joke :D
[00:18:22 CET] <peloverde> so you want signed overflow to return an "unspecified value" rather than be "undefined behavior"
[00:18:30 CET] <jkqxz> Since CPUs don't actually trap on signed overflow.
[00:18:32 CET] <jkqxz> Yes, exactly.
[00:19:09 CET] <peloverde> that still prevents optimizing on "X + 1 > X"
[00:19:21 CET] <iive> one thing i've mentioned before is creating a new unsigned type where the overflow is defined.
[00:19:28 CET] <peloverde> because if x is the max value for the type they will be equal
[00:19:32 CET] <iive> this way only new code would be affected.
[00:19:46 CET] <nevcairiel> it would help all that nonsense in dsp functions though
[00:19:57 CET] <nevcairiel> right now its  UB and people claim that could make your PC explode, so its bad
[00:19:57 CET] <peloverde> overflow is already defined for unsigned types
[00:20:06 CET] <nevcairiel> if its just undefined values .. noone cares about that in dsp stuff
[00:20:07 CET] <iive> i mean signed, sorry
[00:20:54 CET] <iive> one thing i've mentioned before is creating a new signed type where the overflow is defined. <<fixed>>
[00:22:35 CET] <iive> anyway, nuclear power plants use modern computers, even if they were built in the 60s
[00:23:25 CET] <nevcairiel> then use something worse, computers controlling nuclear weapons, because those definitely are from the stoneage =p
[00:24:08 CET] <jkqxz> iive:  <https://www.theregister.co.uk/2013/06/19/nuke_plants_to_keep_pdp11_until_2050/>
[00:24:36 CET] <nevcairiel> infrastructure is full of legacy stuff like that
[00:24:41 CET] <nevcairiel> replacing it costs a fortune, and "it works"
[00:24:44 CET] <jkqxz> Yeah.
[00:25:55 CET] <iive> jkqxz, did you notice that they are looking for assembler coders? no C for them :P
[00:26:07 CET] <nevcairiel> thats just one option
[00:26:13 CET] <thardin> don't replace something that works
[00:26:18 CET] <nevcairiel> but yeah C wasnt really popular back then
[00:26:28 CET] <nevcairiel> assembler or maybe COBOL
[00:26:40 CET] <jkqxz> Ha, I guess that's true.
[00:28:16 CET] <thardin> there's more than a few pdp collectors in the world who likely know enough pdp asm to work with that
[00:28:58 CET] <iive> are they really keeping an original pdp in working condition, just for that?
[00:30:09 CET] <nevcairiel> replacing such a system would probably cost hundreds of millions in development and validation
[00:30:10 CET] <thardin> nasa is looking for people to code for voyager
[00:30:36 CET] <thardin> though in that case replacing the hardware is a bit more difficult
[00:31:04 CET] <nevcairiel> FORTRAN eh
[00:31:09 CET] <cone-576> ffmpeg 03Felix Matouschek 07master:5ac3a309fddd: avdevice: add android_camera indev
[00:31:10 CET] <cone-576> ffmpeg 03Calvin Walton 07master:ff0de964e7ab: libavfilter/vf_fps: Add tests for start_time option
[00:31:11 CET] <cone-576> ffmpeg 03Gyan Doshi 07master:4f8c691040b0: avformat/mpegenc - log error msgs for unsupported LPCM streams
[00:31:12 CET] <iive> at least I know nasa is using 486 hardened to withstand cosmic radiation.
[00:31:12 CET] <cone-576> ffmpeg 03Michael Niedermayer 07master:ae2eb0464883: avcodec/cavsdec: Check alpha/beta offset
[00:31:34 CET] <jkqxz> Remember that minicomputers of that time are actually possible to fix, since it's pre-VLSI.
[00:32:19 CET] <iive> though voyager is a bit older than 486, iirc
[00:32:36 CET] <thardin> jkqxz: nah, I hear it's more down to the Q bus
[00:32:41 CET] <nevcairiel> voyager launched in 77
[00:32:49 CET] <nevcairiel> quite a while before any x86 =p
[00:32:55 CET] <thardin> like you just install more modern replacement modules
[00:33:30 CET] <thardin> source: pdp-8 collector who lives in my town
[00:37:45 CET] <atomnuker> nah, they don't use x86 for space things, afaik they use POWER ones because IBM makes hardened versions themselves
[00:39:27 CET] <nevcairiel> the mars rovers use a PowerPC type-chip, but its not directly from IBM, some other company makes them radiation  hardened etc
[00:39:29 CET] <thardin> the shuttle used 486
[00:39:58 CET] <iive> http://gawker.com/5064694/nasas-shame-hubble-space-telescope-runs-on-a-486-chip
[00:40:08 CET] <iive> sorry for gawker...
[00:41:32 CET] <nevcairiel> its kinda sad, even the next mars  mission planned for 2020 is supposedly still using a hardened PowerPC chip from 2001
[00:41:40 CET] <thardin> how is it running a shame?
[00:41:41 CET] <nevcairiel> no wonder they typically run out of developers for this stuff
[00:42:33 CET] <thardin> you don't do any heavy lifting on these cpus anyway. just capture the data and crap it down the radio link
[00:42:44 CET] <thardin> cram. well I guess crap works too
[00:44:38 CET] <thardin> since a failed CPU means you have sent a $1G brick to mars you'll want to make at least the central ones something that's well used
[00:44:42 CET] <iive> wow, earthquake here... quick and vertical. it's over.
[00:45:07 CET] <jkqxz> POWER systems from 2003 apparently still work pretty well on Mars - <https://en.wikipedia.org/wiki/Opportunity_(rover)>.
[00:45:18 CET] <atomnuker> the new mars rover does hardware jpeg compression (or it can do lossless) in the camera itself before passing it onto the powerpc chip
[00:45:21 CET] <iive> https://en.wikipedia.org/wiki/DF-224
[00:45:31 CET] <iive> that one is two's complement too.
[00:46:13 CET] <thardin> atomnuker: neat
[00:46:40 CET] <atomnuker> (new == 2012 one, forgot its name since they chose it so late)
[00:47:06 CET] <thardin> plated wire memory?
[00:47:44 CET] <nevcairiel> Curiosity was the latest rover
[00:48:02 CET] <cone-576> ffmpeg 03Mark Thompson 07master:9ca79784e9e6: lavc/mjpeg: Add profiles for MJPEG using SOF marker codes
[00:48:03 CET] <cone-576> ffmpeg 03Mark Thompson 07master:6c0bfa30c00d: mjpegdec: Add hwaccel hooks
[00:48:04 CET] <cone-576> ffmpeg 03Mark Thompson 07master:fabcbfba3846: hwcontext_vaapi: Add more surface formats
[00:48:05 CET] <cone-576> ffmpeg 03Mark Thompson 07master:193e43e6195e: hwcontext_vaapi: Fix frames context creation with external attributes
[00:48:06 CET] <cone-576> ffmpeg 03Mark Thompson 07master:99ab0a13dc23: vaapi_decode: Make the frames context format selection more general
[00:48:07 CET] <cone-576> ffmpeg 03Mark Thompson 07master:63c690ad1545: vaapi: Add MJPEG decode hwaccel
[00:48:08 CET] <cone-576> ffmpeg 03Philip Langdale 07master:cd98f20b4aba: avcodec/nvdec: Implement mjpeg nvdec hwaccel
[00:48:08 CET] <nevcairiel> and it uses the successor chip  of the one that Opportunity used
[00:48:17 CET] <jkqxz> Got to make the memory bits big enough that cosmic rays can't flip them.
[00:48:47 CET] <thardin> I think core memory (and FeRAM?) is inherently immune to that
[00:49:42 CET] <thardin> of course, the dirty little secret to many of these chips is that they just run at lower voltage and speed than their non-space counterparts
[00:50:33 CET] <nevcairiel> well power is at a premium too if you have limited batteries and some solar panels - and need to drive around with that energy too
[00:50:46 CET] <thardin> but being certified for ~space~ adds around two zeroes to the price
[00:51:03 CET] <thardin> nevcairiel: it has to do with band gap stuff
[00:51:19 CET] <thardin> the lower the voltage the less risk there is of charges getting knocked around
[00:51:28 CET] <thardin> I think
[00:53:29 CET] <kierank> atomnuker: depends where the satellite is
[00:53:43 CET] <kierank> because most LEO stuff doesn't really need radiation hardening
[00:53:47 CET] <kierank> it's just tradition to do it that way
[00:54:08 CET] <nevcairiel> magnetic field already protecting it there?
[00:56:24 CET] <thardin> I talked to some of the engineers at the swedish institute of space physics last fall, they said you don't need to care about that stuff near earth or the moon. especially if it's short missions
[00:56:50 CET] <thardin> maaaybe if you're planning on running something for 10+ years
[00:57:06 CET] <nevcairiel> satellites probably have an expected lifetime of "as long as it goes" =p
[00:58:34 CET] <thardin> jupiter on the other hand..
[01:11:47 CET] <philipl> jkqxz: yay
[01:34:43 CET] <iive> so, how are we going to ask the C standard body to make two's complement part of the standard?
[01:44:49 CET] <jamrial> philipl: can the same thing you did for color_range be done for the other parameters, like primaries, trc and colorspace?
[01:45:09 CET] <jamrial> look at ticket 7033. none of them seem to be passed to the encoder
[01:54:22 CET] <philipl> jamrial: It's Just Work(tm).
[01:54:35 CET] <philipl> So yeah, you can pass anything you want to put in the effort for.
[01:55:20 CET] <philipl> But it'll quickly turn into a more complex exercise like durandal_1707's full change, with negotiation in some cases, and filters altering values.
[01:55:33 CET] <philipl> But it's obviously all necessary to be correct.
[01:56:57 CET] <philipl> I'm surprised we don't transfer colourspace today - that seems too obvious to miss.
[02:00:36 CET] <philipl> So seems like you'd want to add an AVMasteringDisplayMetadata to the codec context and then pass that through in the same basic way.
[02:01:26 CET] <philipl> a quick grep says that we only read and write it for mkv, and not mp4.
[02:01:51 CET] <philipl> I take that back. I found it for mov
[02:04:24 CET] <jamrial> the mastering metadata in this sample is part of a SEI in the hevc bitstream and not in the container, so it's afaik available in all or most decoded frames
[02:05:00 CET] <jamrial> how to get it from there to the encoder, i'm not sure
[02:05:24 CET] <jamrial> also, the encoder (libx265) is currently not looking for it, so that also needs to be written
[02:05:33 CET] <philipl> Small steps.
[02:05:48 CET] <philipl> Getting it there is adding codec context fields and then repeating my change.
[02:06:16 CET] <philipl> There's also additional work in demuxers and codecpar to transfer the information too, in case it's not in the stream
[02:20:19 CET] <jamrial> mastering metadata was written as side data to avoid adding more fields to avcodeccontext and avframe, though
[02:22:50 CET] <atomnuker> I wish more fields could be added to it arbitrarily, but they can't, since its name might as well be "AVSMPTEMasteringData"
[02:25:07 CET] <atomnuker> also, even though it's side data it isn't really well done IMO, it has its own function for alloc
[02:25:51 CET] <philipl> Well, codec side data would be fine. Is that a thing?
[02:36:57 CET] <jamrial> philipl: no, just frame, container and packet side data
[02:41:58 CET] <philipl> So, you could look at the frame side data on the encoder side, but this didn't seem generally desirable when I was discussing the colour range problem.
[02:42:13 CET] <philipl> It's weird to take stream level data from a frame - which frame? after all.
[02:58:55 CET] <JEEB> basically with AVFrames you'd have to have reconfigurability
[02:59:16 CET] <JEEB> which is generally OK with the first frame, but most things want that during initialization, not when the first frame is fed
[02:59:19 CET] <JEEB> vOv
[03:15:45 CET] <iive> it's just an idea... 
[03:15:59 CET] <iive> how about removing the packet/frame thing and making everything side data?
[03:16:20 CET] <iive> that is, the packet/frame becomes just another type of side data
[03:17:33 CET] <iive> i think it is kind of classical event system.
[03:17:48 CET] <iive> gtg, n8
[03:19:34 CET] <cone-576> ffmpeg 03Dale Curtis 07master:a246701e9abe: Parse and drop gain control data, so that SSR packets decode.
[08:56:12 CET] <cone-266> ffmpeg 03Tobias Rapp 07master:aedbb3c72c97: doc/filters: add links to ffmpeg-utils and ffmpeg documentation
[12:14:43 CET] <jdarnley> atomnuker: Can you explain to my dumb ass how the separated fields are handled in the dirac_trimmed branch you wrote?
[12:15:48 CET] <jdarnley> We (I) have completely broken it by adding fragment support.
[12:17:15 CET] <jdarnley> We get a blank green output picture.
[12:17:44 CET] <atomnuker> jdarnley: https://github.com/OpenBroadcastSystems/ffmpeg/blob/dirac_trimmed/libavcodec/diracdec.c#L781
[12:17:51 CET] <atomnuker> https://github.com/OpenBroadcastSystems/ffmpeg/blob/dirac_trimmed/libavcodec/diracdec.c#L885
[12:18:38 CET] <atomnuker> so it checks the field number from the bitstream, allocs a frame if 0 and renders to it with 2x the stride, and it doesn't output it
[12:19:28 CET] <atomnuker> on the second field it reuses that frame but renders to it with starting on line 1, and finally to output it refs the internal frame used to the output frame
[12:20:15 CET] <atomnuker> when starting the next frame the decoder unrefs the last frame used and creates a new one
[12:26:08 CET] <jdarnley> atomnuker: thank you, that is what I want to happen with the addition of caching the picture between fragments
[12:27:30 CET] <jdarnley> We are failing to allocate a buffer in some situations because the data planes in current_picture are sometimes NULL causing a segfault when transforming.
[12:28:31 CET] <jdarnley> I also suspect that we are not transforming one plane
[12:28:40 CET] <kierank> jdarnley: yes, we are not iirc
[12:28:43 CET] <kierank> the current code does that
[12:28:47 CET] <kierank> it calls the transform once
[12:28:49 CET] <kierank> in our code
[12:31:51 CET] <jdarnley> yes, I am going to re-enable the transform in dirac_decode_frame_internal when we think we have all the fragments
[12:48:34 CET] <Chloe> michaelni: my mail client seemed to not include Muhammad, should i just forward my reply to him or will you cc him?
[12:50:20 CET] <jdarnley> Okay, good, it hasn't crashed yet but I still get a blank green output so there is clearly still a problem in which frame we are outputting
[12:51:17 CET] <jdarnley> Oh, I should check progressive
[12:52:09 CET] <jdarnley> All fine there.
[12:52:54 CET] <jdarnley> I now notice that we don't switch correctly between interlaced and progressive
[13:04:54 CET] <jdarnley> https://github.com/OpenBroadcastSystems/ffmpeg/blob/dirac_trimmed/libavcodec/diracdec.c#L885
[13:05:53 CET] <jdarnley> atomnuker: is that ^ line keeping a ref for the decoder or is it setting the picture for output?
[13:08:46 CET] <jdarnley> I am confused because it doesn't appear to check that both fields are present.  It only checks that for the got_frame variable.
[13:16:03 CET] Action: jdarnley goes to lunch
[13:25:15 CET] <atomnuker> jdarnley: its setting the picture for output obviously
[13:25:22 CET] <atomnuker> and it does check for both fields
[13:25:51 CET] <atomnuker> if the decoder receives 2 field 1's the previous one will be scrapped
[13:26:11 CET] <atomnuker> if it receives a field 2 it'll report invalid data
[14:03:30 CET] <jdarnley> Hm
[14:04:18 CET] <jdarnley> I seem to have forgotten that progressive will use the AVFrame provided by avcodec
[14:04:44 CET] <jdarnley> So it doesn't need to "move" from the internal one
[14:51:15 CET] <jdarnley> Dammit!  Why do I not understand this?
[14:55:34 CET] <jdarnley> It seems like this interlaced data is vanishing into thin air.
[14:59:02 CET] <atomnuker> working as intended then
[17:57:42 CET] <jamrial> https://git.videolan.org/?p=ffmpeg.git;a=blob;f=fftools/ffmpeg.c;h=a37de2ff98419f2f0eb7c4ba92f4f556d195cf05;hb=HEAD#l2055
[17:57:45 CET] <jamrial> that call is dead code
[18:00:02 CET] <nevcairiel> i never called av_parser_change from my code, does that function even do anything useful
[18:00:29 CET] <jamrial> av_parser_change() does nothing if the avctx passed to it has the flags and flags2 fields set to 0 (or rather, global_header and/or local_header flags not set)
[18:00:31 CET] <jamrial> which is the case here
[18:01:02 CET] <jamrial> because the parser_avctx passed to it is initialized as a copy of an avcodecparameters
[18:01:10 CET] <jamrial> which has no flags field
[18:02:01 CET] <jamrial> so, that call is pretty much a noop
[18:02:38 CET] <jamrial> http://coverage.ffmpeg.org/libavcodec/parser.c.gcov.html
[18:03:11 CET] <jamrial> ffmpeg.c is the only thing that calls that function
[18:03:32 CET] <jamrial> and lcov confirms that it's doing nothing
[18:06:55 CET] <jamrial> fate doesn't seem to set global_header or local_header for any test, so no wonder it wasn't noticed
[18:48:31 CET] <jdarnley> How can it be this hard to output these damn frames correctly?
[18:48:34 CET] <jdarnley> How can one cached picture be correct and not another?
[18:49:05 CET] <Chloe> michaelni: the method you described seemed interesting. What's the benefit of having to enable internal components? I was thinking to have un/register functions, and then a function to configure how _iterate should work i.e. only iterate internal, only external, both with internal before and both with external before. If you have to register internal components manually then you get issues if you're replacing an external component
[18:49:05 CET] <Chloe> (how do you distinguish between the two? Are internal components public symbols which would be passed into the register function or do you have multiple register functions?)
[18:49:12 CET] <Chloe> Just some thoughts
[18:51:02 CET] <Chloe> Either way i think that enabling components to be shown in an iterate function vs registering a component should be separate
[19:09:31 CET] <michaelni> Chloe, the reason behind having to enable internal components (or the ability to disable them) vs. having them fixed is that it reduces the attack surface for exploits in a quite hard way
[19:11:24 CET] <durandal_1707> Compn: do you have windows movie maker files?
[19:11:37 CET] <michaelni> Chloe, i was thinking that internal and external could be handled identically, so there would be only one set of functions
[19:12:06 CET] <michaelni> minimal complexity that is needed ...
[19:23:25 CET] <atomnuker> I hope I don't have to argue that encoder options need to be decided by the encoder, not the user
[19:31:28 CET] <Chloe> michaelni: yes but how do you then register an internal component? Surely not by symbol, since you can't guarantee that a symbol is compiled in
[19:32:34 CET] <jamrial> you can, that's how the old register api did it. pre processor check to make sure the symbol was compiled in
[19:33:34 CET] <Chloe> jamrial: with dynamic libs?
[19:35:06 CET] <jamrial> the registration was internal. if you mean the api user registering internal components manually then nevermind, that indeed wasn't the case
[19:35:55 CET] <Chloe> jamrial: one of my ideas with removing the registration api and replacing it with the current design was to reimplement registration in such a way which allows external components
[19:36:06 CET] <Chloe> obviously this needs a bit of work other than 'just' adding an api
[19:44:31 CET] <michaelni> Chloe, it depends on what the exact use cases are we target. If its for applications which register all OR register a minimal set of absolutely needed components then per symbol should be fine as a missing symbol due to it not being compiled in would make the application fail either way as it needs that component
[19:46:38 CET] <michaelni> there is dlsym and weak symbols but these dont work in all cases / platforms and may be complex. I do prefer simple solutions if possible
[19:49:03 CET] <wm4> it should be a requirement for a new API not to have any globally visible state
[20:07:34 CET] <Compn> durandal_1707 : .wmm ?
[20:08:33 CET] <Compn> durandal_1707 : i think those are only project files , e.g. playlists? not actual video files ?
[20:09:33 CET] <durandal_1707> Compn: do you have movie maker?
[21:11:05 CET] <JEEB> (45
[21:26:36 CET] <philipl> jkqxz: to complete the jpeg mpv story - the native vavpp filter doesn't do the right thing, but forcing scale_vaapi (or presumably any other vavpp based filter) makes it work.
[21:35:57 CET] <thardin> lol g2meet
[21:38:30 CET] <wm4> philipl: aw
[21:38:53 CET] <wm4> philipl: pixfmt issues?
[21:51:59 CET] <philipl> wm4: This is the weird thing where vaapi supports more in-memory formats than it can put on a GL surface
[21:52:42 CET] <philipl> So the jpeg is 422 and ends up in a yuv440p (whatever) buffer which then can't natively be displayed - so you have to force a format conversion to nv12
[21:53:38 CET] <philipl> The logic for negotiating this situation isn't in the mpv vavpp filter, so it doesn't do the right thing.
[21:57:27 CET] <wm4> philipl: yeah, I was afraid of that
[21:57:46 CET] <wm4> the d3d11 stuff has some logic, but it's terrible
[21:58:04 CET] <wm4> probably would be easier to just instantiate the filter in the VO
[21:59:47 CET] <philipl> wm4: yeah - just wrap up the ffmpeg filters with convenience syntax.
[22:00:15 CET] <wm4> (what syntax?)
[22:00:38 CET] <philipl> You mean auto-inserting the filter? I was meaning make vf=vavpp use the ffmpeg filters underneath.
[22:01:36 CET] <wm4> that could be done too of course
[22:02:51 CET] <wm4> but I was thinking of sidestepping the need for manual user interaction or complexity in the mpv chain by just converting it on the VO
[22:04:52 CET] <philipl> that makes sense.
[22:05:57 CET] <Chloe> wm4: this is why I want internal components and external registered components to have separate lists, so you're not exposing anything. The only thing the libraries then have is a separate list of pointers to external components
[22:05:59 CET] <wm4> though could also lead to double conversions
[22:06:32 CET] <wm4> Chloe: makes sense too
[22:06:55 CET] <Chloe> the issue would then be, how do the iterate functions handle multiple lists
[22:07:47 CET] <Chloe> and I thought of a few use cases for why people would want to register external components and figured that it might be best if it's configurable how the lists are within iterate, but I dont know about this really
[22:08:07 CET] <wm4> really depends what you have in mind
[22:08:53 CET] <wm4> but IMO you won't get away with some sort of context object (be it a library context like nicolas george suggested, or an explicit object that represents the list)
[00:00:00 CET] --- Fri Feb 23 2018

