[Ffmpeg-devel-irc] ffmpeg-devel.log.20180314
burek
burek021 at gmail.com
Thu Mar 15 03:05:03 EET 2018
[00:50:34 CET] <cone-194> ffmpeg 03James Almer 07master:b173e035362b: avcodec/hapqa_extract: fix two error return values
[01:27:45 CET] <jamrial> rcombs: to be honest i don't know. maybe send an email to the ml asking other people's opinion about it
[04:51:40 CET] <rcombs> jamrial: btw movenc also modifies extradata in write_packet, so I think it's fairly safe to say this is the API in-practice and that we should update documentation to reflect that
[04:52:43 CET] <jamrial> no, it's more like nobody bothered to not do it by using a pointer in the muxer's private context instead
[04:53:30 CET] <rcombs> right, just, the fact that it's used like that in current muxers means nobody's depending on the documented behavior
[12:40:05 CET] <bogdanc> could someone explain how "get_bits" in get_bits.h works?
[12:40:39 CET] <nevcairiel> it's all a bit magical, wouldn't it be easier to just accept that it does
[12:42:37 CET] <bogdanc> ok...
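For readers following along, here is a simplified illustration of what a get_bits()-style call does conceptually. This is not the actual get_bits.h implementation (the real GetBitContext keeps a cached read position and has optimized variants); it is only a sketch of the idea, assuming an MSB-first byte buffer:

    #include <stdint.h>

    /* Minimal sketch: return the next n bits of buf as an unsigned integer,
     * advancing *bit_pos. The real GetBitContext does this with a cache word. */
    static unsigned read_bits(const uint8_t *buf, unsigned *bit_pos, int n)
    {
        unsigned ret = 0;
        while (n--) {
            unsigned byte = *bit_pos >> 3;        /* which byte we are in       */
            unsigned bit  = 7 - (*bit_pos & 7);   /* which bit inside that byte */
            ret = (ret << 1) | ((buf[byte] >> bit) & 1);
            (*bit_pos)++;
        }
        return ret;
    }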
[12:43:05 CET] <fx159> Hi, is it possible that the new API for iterating input devices skips the first input device? Seems like a bug to me
[12:43:51 CET] <wm4> the new API is potentially not too well tested, so it's a possibility and worth checking out
[12:43:52 CET] <fx159> device_next method in alldevices.c
[12:44:42 CET] <fx159> line 129 sets the pointer to the first device in the list and line 138 already overwrites it with the second device
[12:46:27 CET] <wm4> isn't that the old API
[12:46:39 CET] <BtbN> The assignment to prev in line 129 seems wrong
[12:46:40 CET] <wm4> (should still be fixed of course)
[12:46:53 CET] <BtbN> well, not wrong
[12:47:03 CET] <BtbN> it's needed as the rest of the code relies on prev != NULL
[12:48:09 CET] <BtbN> should be more like if (!prev) return output ? (void*)outdev_list[0] : (void*)indev_list[0];
[12:48:10 CET] <fx159> was touched in commit 0fd475704e871ef3a535947596a012894bae3cbd while adding the new api
[12:48:40 CET] <BtbN> but that would skip the category check
[12:55:37 CET] <fx159> Could this work in line 138?
[12:55:37 CET] <fx159> if (!(prev ? ((AVInputFormat *)prev)->next : (void*)indev_list[0]))
[12:56:32 CET] <fx159> oops
[12:57:02 CET] <fx159> if (!(prev = prev ? ((AVInputFormat *)prev)->next : (void*)indev_list[0]))
[12:57:05 CET] <fx159> with assignment
[12:58:02 CET] <BtbN> I'm not sure if it isn't just intended for the list to have a dummy segment at the beginning
[12:58:12 CET] <wm4> try it, if it works send as patch to mailing list
[13:00:48 CET] <cone-439> ffmpeg 03Ravindra 07master:6010537956d5: avformat/hlsenc: Option to set timeout for socket I/O operation
[13:01:34 CET] <BtbN> Yeah, it's definitely bogus
[13:03:20 CET] <BtbN> fx159, that won't work. What if next is NULL? It would just start over again.
[13:03:46 CET] <BtbN> or would it, hm
[13:03:53 CET] <BtbN> depends on the calling code reacting properly
[13:06:22 CET] <fx159> BtbN: I think it would loop...
[13:06:57 CET] <fx159> or well.. no, the break cancels the loop and a NULL is returned
[13:08:22 CET] <fx159> cmdutils.c:2230 aborts if NULL... so it could work theoretically, will test
[13:10:21 CET] <fx159> Yeah, it works
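A minimal sketch of the corrected iteration step being discussed, assuming the indev_list/outdev_list arrays from libavdevice/alldevices.c (an approximation, not the verbatim code; the category check of the real function is left out): the point is that on the first call, with prev == NULL, the head of the list is returned instead of being skipped.

    /* Hedged sketch: start at entry 0 when prev is NULL, otherwise advance
     * to ->next; returning NULL tells the caller the iteration is finished. */
    static void *device_next_step(void *prev, int output)
    {
        if (!(prev = prev ? (output ? (void *)((AVOutputFormat *)prev)->next
                                    : (void *)((AVInputFormat  *)prev)->next)
                          : (output ? (void *)outdev_list[0]
                                    : (void *)indev_list[0])))
            return NULL;   /* end of list: e.g. the loop in cmdutils.c stops here */
        return prev;       /* the real code then checks the AVClass category      */
    }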
[13:21:58 CET] <fx159> I submitted a patch to the mailing list
[14:56:53 CET] <cone-439> ffmpeg 03James Almer 07master:f706cdda5694: avcodec/hapqa_extract: remove the AVOption flags
[17:19:39 CET] <gagandeep> kierank: i have been looking into the cineform sdk and have completely tracked the interlaced routines using printf
[17:20:20 CET] <gagandeep> kierank: there are many functions which perform the same math for the levels; the only difference is the output format
[17:20:57 CET] <gagandeep> like one converts to 16-bit output, one to YUV, and so on...
[17:21:41 CET] <gagandeep> so can you explain to me how ffmpeg is handling all the data you have processed and putting it into the AVFrame
[17:29:44 CET] <gagandeep> kierank: nevermind, i might have found the correct function finally
[17:54:23 CET] <gagandeep> kierank: can you provide me with some more interlaced frames, with a gradient or some other YUV image, that can help me debug
[18:16:17 CET] <gagandeep> kierank
[18:18:29 CET] <kierank> Sorry was in a meeting
[18:18:35 CET] <kierank> I have no other samples
[18:31:16 CET] <durandal_1707> gagandeep: you can't write code using the CineForm SDK to encode images?
[18:39:00 CET] <kierank> gagandeep: we don't do the crazy function duplication that cineform do
[18:39:04 CET] <kierank> we output to one pixel format usually
[18:39:08 CET] <kierank> and then that's it
[18:50:54 CET] <gagandeep> durandal_1707: hmmm, that's gonna be a bit ambitious looking at the encoder >,<
[18:50:59 CET] <gagandeep> will take a bit of time
[18:51:23 CET] <gagandeep> maybe something is going wrong with the luma channel
[18:51:46 CET] <durandal_1707> gagandeep: there must be an encoding function in the SDK, right?
[18:52:10 CET] <durandal_1707> just call it..
[18:52:30 CET] <gagandeep> haven't seen it but must be, like in the decoder sdk
[18:52:42 CET] <gagandeep> will see that as well
[18:53:20 CET] <kierank> durandal_1707: maybe not
[18:53:45 CET] <gagandeep> i will need some raw file i guess to call it
[18:53:55 CET] <durandal_1707> kierank: what? then that's a useless PoS
[18:56:11 CET] <kierank> it might be some older code that did it
[18:56:43 CET] <durandal_1707> there is TestCFHD
[18:57:28 CET] <gagandeep> durandal_1707: exactly, TestCFHD is correctly decoding and i have traced all the final functions doing the work
[18:57:39 CET] <gagandeep> using printf and other markers
[18:58:19 CET] <gagandeep> all have the same formula and the only difference is the output format, like RGBA, YUV
[18:58:46 CET] <gagandeep> kierank: by the way, cfhd.c is using a 10-bit YUV format to write into the AVFrame?
[18:59:13 CET] <kierank> yes
[18:59:14 CET] <kierank> or 12-bit
[18:59:19 CET] <kierank> or gbr(a)
[19:00:01 CET] <gagandeep> when is it changing the format to 12bit or gbr(a)
[19:01:33 CET] <gagandeep> oh, when bpc is 12
[19:01:40 CET] <kierank> depending on the signalled pixel format and bitdepth
[19:02:39 CET] <gagandeep> though for the samples i have for testing they are only 10bit in av_log
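As a rough illustration of the selection kierank describes (the tag names here are hypothetical simplifications, not the verbatim cfhd.c logic), the decoder maps the signalled coded format and bit depth onto a single output AVPixelFormat and writes everything in that one layout:

    #include "libavutil/pixfmt.h"

    enum { TAG_YUV422, TAG_RGB };   /* hypothetical coded-format tags */

    /* Hedged sketch: one output pixel format per signalled input format. */
    static enum AVPixelFormat pick_output_format(int coded_format, int has_alpha)
    {
        if (coded_format == TAG_YUV422)
            return AV_PIX_FMT_YUV422P10;              /* 10-bit planar YUV         */
        if (coded_format == TAG_RGB)
            return has_alpha ? AV_PIX_FMT_GBRAP12     /* 12-bit planar RGB + alpha */
                             : AV_PIX_FMT_GBRP12;     /* 12-bit planar RGB         */
        return AV_PIX_FMT_NONE;                       /* unknown: error out        */
    }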
[19:05:12 CET] <BodecsBela> durandal_1707: I found a bug in the process_frame function of f_astreamselect.c. On some occasions ff_filter_frame is not called and it causes the filter to hang.
[19:05:30 CET] <BodecsBela> I could fix it with a call to ff_filter_set_ready, but I am not sure whether it is a good fix or just a hack
[19:10:39 CET] <durandal_1707> BodecsBela: when does it happen? send a patch to the mailing list
[19:11:27 CET] <BodecsBela> I thought I'd ask here before submitting the patch. Yesterday I submitted a patch for another filter and it was a duplicate of something I was not aware of.
[19:12:17 CET] <durandal_1707> ah, sure feel free to ask here anytime before writing something :)
[19:13:10 CET] <BodecsBela> I will submit the patch very soon. Please review it.
[19:15:17 CET] <durandal_1707> BodecsBela: when does this happen with audio? always, or?
[19:20:09 CET] <durandal_1707> kierank: it should be CineForm, not Cineform
[19:24:50 CET] <BodecsBela> I can reproduce it with the audio filter only, but only with those inputs (stream or file) where there was video too.
[19:24:56 CET] <BodecsBela> I have submitted the patch
[19:26:43 CET] <BodecsBela> I would like to ask the reason for the if block for audio content in process_frame, in the inner for loop
[19:28:51 CET] <durandal_1707> BodecsBela: probably so it doesn't output same frame over and over
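A sketch of the shape such a fix could take inside process_frame() (the field names s->is_audio / s->last_pts and the surrounding loop variables are assumed from the discussion; this is not the actual f_astreamselect.c patch): when the duplicate-pts check decides not to forward an audio frame, the filter re-arms itself so its activate callback runs again instead of stalling.

    /* Hedged sketch, fragment of the inner loop of process_frame(). */
    if (s->is_audio && s->last_pts[j] == in[j]->pts) {
        ff_filter_set_ready(ctx, 100);   /* reschedule activate() instead of hanging */
        continue;                        /* skip ff_filter_frame() for this frame    */
    }
    s->last_pts[j] = in[j]->pts;
    ret = ff_filter_frame(ctx->outputs[i], av_frame_clone(in[j]));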
[19:29:25 CET] <BodecsBela> but why does this filter receive the same frame twice or more?
[19:29:34 CET] <BodecsBela> and why only audio?
[19:29:51 CET] <durandal_1707> it is how framesync works
[19:30:02 CET] <BodecsBela> I have debugged it and the if branch is taken at the start of the files
[19:31:00 CET] <durandal_1707> try it with that code removed, do you get correct output?
[19:31:02 CET] <BodecsBela> I have checked the framesync code but I could not conclude anything from it. Can you suggest where to see this same-frame-more-than-once behaviour?
[19:31:59 CET] <BodecsBela> If I remove the if block, some content produces many non-monotonic DTS messages
[19:32:15 CET] <durandal_1707> the framesync code works that way: it will return the frame with the pts that matches best, even if a frame with that pts is already in the output
[19:32:21 CET] <BodecsBela> so I thought you had a good reason to put it there
[19:32:37 CET] <durandal_1707> yes
[19:32:54 CET] <BodecsBela> and why do you apply this on audio content only?
[19:33:13 CET] <durandal_1707> so the patch should probably be ok, unless Nicolas objects...
[19:34:05 CET] <durandal_1707> BodecsBela: for video frames it probably never happened...
[19:35:19 CET] <BodecsBela> I have two files (mp4) with which anyone can reproduce the bug. But if I remux them to have audio content only, the bug doesn't occur.
[19:36:19 CET] <BodecsBela> I read somewhere that it is harmless to activate the filter more often than necessary, except that it causes some CPU overhead. Isn't it?
[19:36:34 CET] <durandal_1707> yeah, nothing much can be done about that
[19:37:28 CET] <BodecsBela> I would like to enhance the streamselect filter a little bit.
[19:37:51 CET] <BodecsBela> But I would like to understand it better
[19:38:25 CET] <BodecsBela> when remapping occurs, is it not a problem that the timestamps have a gap?
[19:38:54 CET] <durandal_1707> BodecsBela: first, why do you use it at all?
[19:39:06 CET] <durandal_1707> ideally no gap should be there
[19:39:17 CET] <BodecsBela> to do a live switch over two streams
[19:39:45 CET] <BodecsBela> I use ffmpeg in a realtime environment.
[19:40:04 CET] <durandal_1707> just code it, using API
[19:40:41 CET] <BodecsBela> I would like to make it available to everyone
[19:40:41 CET] <durandal_1707> the (a)streamselect filters are more proof of concept, than anything else
[19:41:12 CET] <durandal_1707> BodecsBela: are inputs of same dimensions and pixel format?
[19:41:17 CET] <BodecsBela> yes
[19:41:27 CET] <durandal_1707> ok, good
[19:42:13 CET] <BodecsBela> I think your filter is very useful and I have read in several places that this functionality would be very useful for others
[19:42:54 CET] <durandal_1707> BodecsBela: so you always get gaps?
[19:42:57 CET] <BodecsBela> I can switch over two streams with the overlay filter and the amix filter
[19:43:21 CET] <BodecsBela> setting volume to 0/100 and setting x to 0 or 99999
[19:43:48 CET] <BodecsBela> but your filters seem easier to use
[19:44:27 CET] <BodecsBela> theoretically there may be a gap, even a very small one
[19:44:36 CET] <durandal_1707> the amix solution needs volume compensation on both inputs....
[19:45:28 CET] <BodecsBela> one of the sources is 0 and the other one is 100, and after switching vice versa
[19:46:19 CET] <durandal_1707> BodecsBela: ah, try using asetnsamples filter, with same number to both inputs, that should fix gaps
[19:46:35 CET] <durandal_1707> prior to calling astreamselect
[19:48:05 CET] <BodecsBela> in the case of the amix and overlay filters there is a "master" input; its pts will be on the output.
[19:48:18 CET] <durandal_1707> yes, that too
[19:49:03 CET] <BodecsBela> but in the streamselect case, after each remapping a new "master" timestamp may be slightly before or after the previous "master".
[19:49:52 CET] <BodecsBela> wouldn't it be good to have a virtual master stream?
[19:50:34 CET] <BodecsBela> this way to avoid the glitch? or to adjust the timestamp glitch at each remapping.
[19:50:36 CET] <durandal_1707> BodecsBela: make sure they use same timebase, ((a)settb filters + asetnsamples for audio)
[19:51:24 CET] <durandal_1707> i dunno what would virtual master stream do...
[19:51:47 CET] <BodecsBela> to normalize the timestamps to it
[19:51:58 CET] <BodecsBela> I always think of live sources
[19:52:14 CET] <BodecsBela> this problem could not occur if the sources are seekable files.
[19:52:32 CET] <durandal_1707> BodecsBela: yes, use asettb, the same sample rate and the asetnsamples filter prior to calling astreamselect
[19:52:44 CET] <BodecsBela> ok I see
[19:53:14 CET] <durandal_1707> need to add that to docs..
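For reference, a command-line sketch of the normalization being suggested (the file names and the 1024-sample block size are placeholders, not values from the discussion): resample both inputs to the same rate, force the same timebase with asettb, chunk the audio with asetnsamples, and only then feed astreamselect.

    ffmpeg -i main.mp4 -i backup.mp4 \
      -filter_complex "[0:a]aresample=48000,asettb=1/48000,asetnsamples=n=1024[a0];[1:a]aresample=48000,asettb=1/48000,asetnsamples=n=1024[a1];[a0][a1]astreamselect=inputs=2:map=0[aout]" \
      -map "[aout]" output.mka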
[19:54:19 CET] <BodecsBela> The new functionality I would like to achieve with the (a)streamselect filter is the following: to have a "secondary" input in case the original mapped input is unavailable
[19:55:10 CET] <durandal_1707> yeah, need to check if the framesync code allows the "master" stream to disappear
[19:55:46 CET] <BodecsBela> but if the main input is available again use the primary input
[19:56:09 CET] <BodecsBela> currently the EOF state is "forever" in framesync
[19:57:01 CET] <BodecsBela> this way e.g. I can show a silent color bar when the live source is unavailable
[19:57:06 CET] <wm4> EOF is in general forever in libavfilter
[19:57:33 CET] <durandal_1707> BodecsBela: in[i].before/after = EXT_STOP; ----> change this to something else
[19:57:43 CET] <durandal_1707> and play with that
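The knob being pointed at is the per-input extrapolation mode of framesync (the enum names EXT_STOP/EXT_NULL/EXT_INFINITY come from libavfilter/framesync.h; the surrounding field and variable names here are assumed, not the exact filter code). A sketch of the suggested experiment:

    /* Hedged sketch: instead of EXT_STOP (one EOF terminates everything),
     * try extending the last frame of a finished input forever.          */
    for (i = 0; i < ctx->nb_inputs; i++) {
        s->fs.in[i].before = EXT_STOP;      /* before the first frame: still stop */
        s->fs.in[i].after  = EXT_INFINITY;  /* after EOF: repeat the last frame   */
    }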
[19:58:29 CET] <BodecsBela> I will try
[19:58:40 CET] <BodecsBela> This functionality is built into the decklink source
[19:58:56 CET] <BodecsBela> So I would like to implement it in this filter pair
[19:59:46 CET] <durandal_1707> yes, i remember people asking for this feature, to put up another video when the primary is not available...
[20:01:29 CET] <BodecsBela> another question: you call avfilter_config_links when processing the remapping command. I think after it we should call ff_filter_set_ready? or not?
[20:01:59 CET] <BodecsBela> in function process_command
[20:02:33 CET] <durandal_1707> is it needed at all?
[20:03:15 CET] <BodecsBela> I have tried to send a remap command and the filter stopped, as in the case of the audio bug
[20:04:05 CET] <BodecsBela> to send by zmqsend
[20:04:49 CET] <durandal_1707> hmm, then reply to the thread you already posted, with a new upgraded patch
[20:05:55 CET] <BodecsBela> I do not understand your last sentence
[20:06:26 CET] <BodecsBela> I have posted a single patch
[20:06:31 CET] <BodecsBela> only
[20:07:07 CET] <durandal_1707> BodecsBela: post another one that has both changes, one in process_frame and another one in the remap process command
[20:08:18 CET] <BodecsBela> I could not test the process_command fix, so I did not dare to put it in.
[20:09:40 CET] <BodecsBela> another question is how I should choose the priority value in ff_filter_set_ready?
[20:10:05 CET] <BodecsBela> I see 100, 200, 300; which one is used when?
[20:13:48 CET] <durandal_1707> BodecsBela: it's just for a priority queue to be added later; currently it does not mean much, except whether it is 0 or !0
[20:13:59 CET] <BodecsBela> thank you
[20:14:35 CET] <BodecsBela> later today I will post a new patch and will test it before submitting
[20:15:13 CET] <BodecsBela> a 4-day national holiday here, so I will have plenty of time in the next days ...
[20:15:15 CET] <BodecsBela> :)
[20:22:41 CET] <BodecsBela> and one last question: is there a way to re-open an input stream from a filter?
[20:23:53 CET] <durandal_1707> nope, what use case do you need it for?
[20:25:47 CET] <JEEB> if you need multiple inputs and to take the input from one of them, you need that logic outside of a filter
[20:26:01 CET] <JEEB> + you need to set the required rate
[20:26:49 CET] <JEEB> actually not sure how you'd implement such a fallback exactly since I'm not sure if waiting is the best way and/or if you risk missing a frame if it comes in at the last moment
[20:27:22 CET] <JEEB> because what you don't want is to have N hz input coming in and some random jitter somewhere leading to a black back-up input frame getting utilized
[20:29:25 CET] <BodecsBela> there should be a maximal threshold timeout to wait for the frame, like the fps filter
[20:30:38 CET] <BodecsBela> <durandal_1707>: in case of EOF of e.g. an HLS stream
[20:30:40 CET] <JEEB> yes, but that requires actual input to come in, no?
[20:31:11 CET] <JEEB> and while I do agree that my first idea would be to have a timeout on "receive data"
[20:31:23 CET] <JEEB> I haven't given proper thought to it
[20:31:45 CET] <BodecsBela> I have been thinking of it for months
[20:31:48 CET] <JEEB> and how you switch between, say, input + back-up input + back-up static overlay
[20:33:01 CET] <BodecsBela> this could be similar to how framesync handles EOF
[20:34:03 CET] <durandal_1707> yes, but i don't think there is anything which would handle the reappearance of the primary stream
[20:34:27 CET] <durandal_1707> so if you receive eof, you are done
[20:34:28 CET] <JEEB> except filters require input, no? I see this as something that can have multiple inputs and you switch between them. one of them can be a libavfilter generator filter chain, of course
[20:34:28 CET] <BodecsBela> yes I agree. That's why I need reopen
[20:34:46 CET] <JEEB> anyways, what I'm trying to say is that this is an application layer problem
[20:34:50 CET] <JEEB> not a libavfilter problem
[20:35:08 CET] <BodecsBela> yes you are right
[20:35:08 CET] <JEEB> libavfilter is very much "feed avframes, receive bacon" kind of thing
[20:35:30 CET] <BodecsBela> I am curious where this functionality should be implemented?
[20:35:40 CET] <JEEB> in your API client
[20:35:44 CET] <BodecsBela> input demux filter
[20:35:57 CET] <BodecsBela> but I think of ffmpeg
[20:36:25 CET] <JEEB> I don't see a spot in the FFmpeg libraries themselves (the current set) to handle this on the level of abstraction
[20:36:32 CET] <JEEB> your API client is the one that handles lavf contexts
[20:36:39 CET] <JEEB> your API client is the one that generates lavfi filter chains
[20:36:46 CET] <JEEB> it is the one doing the routing
[20:37:11 CET] <BodecsBela> ffmpeg is the best API client :)
[20:37:30 CET] <JEEB> no, no it is not. it works for a surprising amount of things but it is very static and very specific for some use cases
[20:37:51 CET] <BodecsBela> ok, I see, but if I write my own API client how do I make it available to others?
[20:38:06 CET] <JEEB> push it to a repo, publish it
[20:38:25 CET] <JEEB> if it's a simple example it can go under docs/examples
[20:38:33 CET] <JEEB> if it's not, then it's a stand-alone tool in its own right
[20:39:14 CET] <JEEB> also just for the record, during FOSDEM this source switching was mentioned during the upipe talk
[20:39:25 CET] <JEEB> so looking at that might be worthwhile
[20:39:40 CET] <durandal_1707> upipe has that?
[20:39:55 CET] <JEEB> yes, they said they have the functionality for that
[20:40:02 CET] <JEEB> primary vs back-up and switching between
[20:41:17 CET] <JEEB> it's a generally required feature for a lot of encoders
[20:41:25 CET] <JEEB> because you start a stream and the input might or might not be there
[20:41:30 CET] <JEEB> so you start with your back-up signal
[20:41:41 CET] <BodecsBela> yes, that is exactly what I am thinking of
[20:41:55 CET] <JEEB> then you set an input to your encoder (one or multiple to switch between)
[20:42:11 CET] <JEEB> and when the input drops it should fall back according to a logic
[20:42:14 CET] <BodecsBela> I thought that with the filter's input queue it is feasible: if the queue is empty, switch to the other input
[20:42:50 CET] <BodecsBela> I will check upipe
[20:44:24 CET] <JEEB> yea I'm not sure such things abstractly fit into lavfi as soon as you go beyond AVFrames->AVFrames
[20:44:55 CET] <JEEB> you could have a switcheroo filter that would time out after time X but whatever happened with your lavf contexts is not its problem
[20:46:25 CET] <BodecsBela> there is a "realtime" filter in ffmpeg, so I think it is not so far off
[20:46:34 CET] <JEEB> but it would also mean that the interface for it would have to be either blocking or callback-based? since output might or might not be available at the time of request for a new frame?
[20:47:08 CET] <JEEB> and as I noted, lavfi is definitely not the place to do the mapping before the X inputs -> 1 output
[20:47:13 CET] <BodecsBela> I feel it is a tough problem, but I think that is because I know less than required
[20:48:13 CET] <BodecsBela> can you give some more details about "not the place to do the mapping before the X inputs -> 1 output"?
[20:48:23 CET] <durandal_1707> (a)streamselect filters ideally should work offline, both with VFR and CFR and with gaps...
[20:49:00 CET] <JEEB> BodecsBela: it is not avfilter's job to deal with keeping track of input contexts or decoding or anything like that
[20:49:05 CET] <durandal_1707> BodecsBela: mapping is fine, reviving a stream is not
[20:49:22 CET] <JEEB> yea that's what I meant
[20:49:39 CET] <JEEB> you have the stuff you receive those AVFrames from X inputs from
[20:49:51 CET] <BodecsBela> ok I see. Maybe put this functionality in as an input filter?
[20:50:10 CET] <JEEB> I do not see currently any place for this functionality than the API client
[20:50:52 CET] <durandal_1707> BodecsBela: that would basically make it not a source filter
[20:51:10 CET] <JEEB> people tend to try to hack random crap into the libraries like lavf or avfilter etc just not to have to properly do it in an API client, but that is not properly thinking about it IMHO
[20:52:16 CET] <BodecsBela> ok, maybe
[20:53:04 CET] <JEEB> as I said, I can see the X AVFrame inputs -> 1 AVFrame output
[20:53:07 CET] <JEEB> thing being in avfilter
[20:53:08 CET] <JEEB> BUT
[20:53:25 CET] <JEEB> does the API in avfilter let you do that?
[20:53:32 CET] <durandal_1707> you could hack a source filter to re-establish the connection all the time and to never report EOF to the other filters in the chain........, but that looks like a hack
[20:53:43 CET] <rcombs> free VTune https://software.intel.com/en-us/system-studio/choose-download
[20:55:11 CET] <JEEB> durandal_1707: let's not go for hacks shall we? opening and handling inputs is not the job of the thing that is the "switch", which is what I believe matches the design of avfilter IF AND ONLY IF it has the APIs to handle such filters which might not return right away
[20:55:23 CET] <JEEB> (or that give you the AVFrame in a callback)
[20:56:02 CET] <JEEB> anyways, as I already noted the upipe people advertised this so I recommend taking a look at how they did it. upipe is a callback based design.
[20:56:17 CET] <durandal_1707> lavfi is all about processing AVFrames in filters via multidimensional graphs
[20:56:27 CET] <JEEB> yes
[20:56:58 CET] <JEEB> but now think that you might or might not get an AVFrame from all of your inputs, and that you have one that might be constantly giving you AVFrames (such as the signal generator filter)
[20:57:25 CET] <JEEB> now the first thing that comes to mind is a time-based timeout
[20:57:30 CET] <JEEB> which I have no idea if it's a good idea
[20:57:52 CET] <durandal_1707> framesync just works TM in such situations, for offline purposes; once the filter sink receives EOF it's too late
[20:57:55 CET] <JEEB> so do you block the receive_frame() call or do you ask the API client to register a callback which will get called when the AVFrame is ready
[20:57:57 CET] <BodecsBela> and what about something like concat demuxer?
[20:58:08 CET] <JEEB> BodecsBela: do not fucking remind me of those hackjobs
[20:58:15 CET] <durandal_1707> lol
[20:58:22 CET] <JEEB> or the concat protocol
[20:58:33 CET] <durandal_1707> concat demuxer and concat protocol are HACKS
[20:58:34 CET] <JEEB> the only semi-sane thing was the concat avfilter, which makes sense
[20:59:02 CET] <JEEB> people are just way too lazy to handle concat etc properly on the API client level (aka they just wanted it quickly in ffmpeg.c)
[20:59:39 CET] <JEEB> durandal_1707: yea but I don't think "I didn't get input yet" is an EOF situation btw
[20:59:48 CET] <JEEB> the filter is just a simple picker
[21:00:16 CET] <durandal_1707> it will pick the previous frame, or will wait for the next frame..
[21:00:41 CET] <BodecsBela> You are right, but very few people are able to write an API client, yet they want to concatenate inputs
[21:00:46 CET] <durandal_1707> in special situations it may consume a lot of memory...
[21:01:06 CET] <JEEB> BodecsBela: and then they whack their fucking heads into wall because all of those things work with VERY SPECIFIC USE CASES
[21:01:10 CET] <JEEB> just because of their pure hackery
[21:01:28 CET] <JEEB> the fact that they're in is not a good thing and it is not /that/ hard to make it proper in an API client
[21:01:54 CET] <JEEB> and if you're trying to think of such solutions in this problem that shows that you're not really thinking of actually trying to do things proper
[21:02:17 CET] <BodecsBela> I think we who have the ability to program need to others to make their needs
[21:02:58 CET] <JEEB> that didn't parse in my english parser, unfortunately
[21:02:59 CET] <BodecsBela> ok, I see that this filter way is not the proper way; that's why I asked here
[21:03:37 CET] <JEEB> well part of the thing is possibly in the realm of avfilter
[21:03:40 CET] <BodecsBela> I mean I would like to create something to do this failover job
[21:03:42 CET] <JEEB> not the whole problem
[21:04:20 CET] <JEEB> I don't know avfilter well enough to know if it's possible to make such a filter that f.ex. operates on a timeout
[21:04:28 CET] <JEEB> without EOFs
[21:05:08 CET] <durandal_1707> in a timeout situation it will just wait and consume memory
[21:05:17 CET] <JEEB> that's about current filters, right?
[21:05:31 CET] <durandal_1707> current framesync code AFAIK
[21:05:48 CET] <BodecsBela> durandal_1707: "just wait and consume memory" - will you please describe it in more detail?
[21:06:12 CET] <durandal_1707> BodecsBela: it will wait for the other input to give it a frame
[21:06:37 CET] <JEEB> durandal_1707: basically what I was trying to hint is someone like you who knows avfilter to step in and say that design-wise such a filter is possible/not possible
[21:06:47 CET] <durandal_1707> actually in the current situation it may not consume a lot of memory, after the latest API
[21:07:23 CET] <JEEB> one where you have for example X inputs (high availability, sticky), and then a back-up (usually an avfilter that will give you a new AVFrame right away)
[21:07:26 CET] <durandal_1707> BodecsBela: as i said earlier, you can't revive filters after EOF
[21:07:51 CET] <JEEB> you're still talking about existing structures, and we've already settled on the fact that current filters are not capable of what is needed :P
[21:08:00 CET] <durandal_1707> JEEB: it will just stall, but better check it out in a real scenario
[21:08:02 CET] <JEEB> so either they need to be improved
[21:08:09 CET] <JEEB> or framework rethought
[21:08:44 CET] <BodecsBela> I understand that realtime and offline pipelines have different constraints
[21:08:44 CET] <JEEB> I know I'm bad at communication but I'm really disliking the fact that you haven't yet said "not possible with current design" or "possible" durandal_1707
[21:08:56 CET] <durandal_1707> well, a special option could be added for when to trigger the timeout situation, and operate in a semi-EOF way
[21:09:26 CET] <JEEB> because all of my text and IF AND ONLY IF kind of was trying to coerce a response regarding that :P
[21:09:41 CET] <durandal_1707> reviving after EOF is not possible, and I doubt it will ever be possible
[21:09:45 CET] <BodecsBela> maybe each filter has a realtime flag
[21:09:48 CET] <JEEB> yes, EOF is EOF
[21:09:57 CET] <JEEB> EOF is "input is <completely> dead"
[21:10:08 CET] <BodecsBela> movie filter has seek command
[21:10:43 CET] <BodecsBela> isn't it capable of jumping to the start?
[21:11:03 CET] <JEEB> a filter is not supposed to have the capability to do lavf things
[21:11:15 CET] <JEEB> if it does it needs to be purged as it is breaking the framework's constraints
[21:11:38 CET] <JEEB> and yes I know FFmpeg has a fuckload of such things, but it doesn't mean that proper design isn't something people want
[21:12:02 CET] <JEEB> "the existence of crapola in a project is not a reason to spawn more crapola"
[21:12:50 CET] <JEEB> anyways, the part that befits avfilter (in theory, if avfilter could do certain things - I just don't know) could be done in a filter
[21:14:00 CET] <durandal_1707> technically one can use the amovie filter and the new reinit option...
[21:14:05 CET] <JEEB> and I'm sorry if I come out as aggressive, but what I really want to underline that as long as it's not impossible to do something properly it should be attempted that the design is sound and follows abstraction layers of the frameworks provided
[21:14:22 CET] <durandal_1707> but that is with limitations....
[21:14:41 CET] <JEEB> there is a part in this thing (the switcher to which AVFrames are fed) that could be fit for avfilter
[21:15:02 CET] <JEEB> but as I noted that is IF avfilter's design lets you do blocking filters
[21:15:14 CET] <JEEB> (since lavfi has no callback design as far as I know)
[21:15:24 CET] <durandal_1707> if you just need to replace a timed-out signal with a static picture, you could use the FFmpeg libs' API
[21:15:33 CET] <BodecsBela> so summarizing, I need to write an API client; and where do I put this failover functionality?
[21:15:39 CET] <JEEB> yes
[21:15:47 CET] <JEEB> also I recommend looking into upipe since they advertised this
[21:15:55 CET] <JEEB> and upipe seems to be a nice broadcast thing
[21:16:05 CET] <JEEB> aka "if it already seems to exist, check it out"
[21:16:43 CET] <JEEB> and btw, I'm not trying to downplay avfilter. I am using it myself as well
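To make the "do it in the API client" suggestion concrete, here is a very rough sketch of the loop such a client could run. read_decoded_frame(), FALLBACK_US and encode_and_mux() are hypothetical helpers, not FFmpeg API; only the buffersrc/buffersink and frame calls are real lavfi/lavu API. The client owns the lavf/lavc contexts, decides which decoded frame to use, and feeds a single filter graph input:

    #include "libavfilter/buffersrc.h"
    #include "libavfilter/buffersink.h"
    #include "libavutil/frame.h"

    /* Hedged sketch of application-level failover between two inputs. */
    AVFrame *out = av_frame_alloc();
    for (;;) {
        /* hypothetical helper: decode the next frame, give up after FALLBACK_US */
        AVFrame *frame = read_decoded_frame(primary, FALLBACK_US);
        if (!frame)                                      /* primary late or gone      */
            frame = read_decoded_frame(backup, 0);       /* e.g. color bars + silence */
        if (!frame)
            break;                                       /* both inputs exhausted     */

        av_buffersrc_write_frame(buffersrc_ctx, frame);  /* feed the single graph input */
        av_frame_free(&frame);

        while (av_buffersink_get_frame(buffersink_ctx, out) >= 0) {
            encode_and_mux(out);                         /* hypothetical downstream step */
            av_frame_unref(out);
        }
    }
    av_frame_free(&out);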
[21:16:53 CET] <BodecsBela> I know there is a realtime filter in ffmpeg to slow down the filter pipeline
[21:17:23 CET] <durandal_1707> that is not the issue; the frame pts is the issue, or its non-availability
[21:17:43 CET] <JEEB> yes, avfilter 100% seems to currently be designed around the fact that the input will be there
[21:18:00 CET] <JEEB> and that's a valid limitation
[21:18:57 CET] <BodecsBela> movie filter with reinit sounds good
[21:19:11 CET] <JEEB> durandal_1707: fuck you for giving this guy ideas of hacking
[21:19:15 CET] <JEEB> fucking hell
[21:19:23 CET] Action: JEEB goes calm down somewhere else
[21:19:41 CET] <JEEB> I was just going to write that FFmpeg is great because it gives one the base components to build all these things it doesn't support itself yet
[21:19:44 CET] <JEEB> fuck this shit
[21:19:51 CET] <durandal_1707> BodecsBela: that is hardly doable
[21:21:18 CET] <durandal_1707> and it has numerous issues
[21:21:39 CET] <BodecsBela> thank you for your help. I understand that my idea is a little bit outside ffmpeg's abstraction layers. It was worth it for me to ask here.
[21:24:19 CET] <BodecsBela> now I have to leave. I will upload the patch as I promised.
[21:25:00 CET] <durandal_1707> ok, remember: these filters are for offline operation
[21:25:55 CET] <BodecsBela> shouldn't we create something for realtime use cases? but let's discuss it next time
[21:27:18 CET] <durandal_1707> lavfi is hardly designed for realtime use cases; it can work in some situations, but that is it
[22:23:48 CET] <_jamrial> wm4: so basically, using lavu's tree.h to manage the pool, using find() to look for the "smallest which is larger than" buffer in the pool
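For context, a sketch of the lookup being described with lavu's tree API (the PoolEntry type and its size field are hypothetical; av_tree_find() and FFDIFFSIGN are real): when there is no exact size match, av_tree_find() leaves the next larger element in next[1], i.e. the smallest buffer that still fits the request.

    #include "libavutil/common.h"
    #include "libavutil/tree.h"

    typedef struct PoolEntry { size_t size; /* plus buffer pointer, refcount, ... */ } PoolEntry;

    static int cmp_size(const void *key, const void *node)
    {
        return FFDIFFSIGN(*(const size_t *)key, ((const PoolEntry *)node)->size);
    }

    /* Hedged sketch: exact size match if present, otherwise smallest larger entry. */
    static PoolEntry *find_smallest_fitting(struct AVTreeNode *root, size_t want)
    {
        void *next[2] = { NULL, NULL };                 /* next[0]=prev, next[1]=next */
        PoolEntry *exact = av_tree_find(root, &want, cmp_size, next);
        return exact ? exact : next[1];
    }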
[00:00:00 CET] --- Thu Mar 15 2018