[MPlayer-dev-eng] Lots of stuff for NUT
Oded Shimon
ods15 at ods15.dyndns.org
Fri Jan 13 23:34:55 CET 2006
On Fri, Jan 13, 2006 at 11:25:39PM +0100, Michael Niedermayer wrote:
> Hi
>
> On Fri, Jan 13, 2006 at 10:18:02PM +0200, Oded Shimon wrote:
> > On Fri, Jan 13, 2006 at 08:56:21PM +0100, Michael Niedermayer wrote:
> > > On Fri, Jan 13, 2006 at 09:04:26PM +0200, Oded Shimon wrote:
> > > > On Fri, Jan 13, 2006 at 07:46:26PM +0100, Michael Niedermayer wrote:
> > > > > the strict DTS rule is a subset of the MN rule, so a simple muxer can always
> > > > > use the DTS rule and have no more complexity than with multiple timestamps.
> > > > > if for whatever reason the muxer doesn't follow the strict DTS rule, then it
> > > > > would have some additional complexity to deal with.
> > > > > keeping track of all syncpoints and keyframes is pretty much needed for the
> > > > > index anyway; you could also store backptrs for every stream and syncpoint
> > > > > instead of the keyframes for the index, sure ...
> > > >
> > > > You forgot EOR, even with strict DTS you need it, so if you handle it you
> > > > might as well handle all of it.
> > >
> > > what was the EOR problem? we don't allow EOR + delay>0, or do we?
> >
> > Well, the spec nowhere says that it is not allowed...
>
> maybe, but if it's allowed we will have many more issues to deal with
> a demuxer in most frameworks (no, no one will change their API for nut)
MPlayer G2 will. :)
> can just
> send packets to the decoder (or muxer in case of remuxing)
> what do you send to the decoder/muxer for EOR?
> in case of 1in 1out decoders (again no one will rewrite their decoders for nut)
> there are decode_delay frames in the decoder which must be output; if you
> don't flush/output them, then you will receive them when you feed the next
> frames in
> should EOR produce decode_delay zero-keyframes? decode_delay+1? none? one?
> what about remuxing, do we lose the EORs?
Ask Rich. :)
I have no idea really...
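The only halfway sane thing I can picture a player doing is flushing the
decoder itself when it sees EOR, roughly like this (decode_one_frame(),
output_frame() and friends are made-up placeholders, not real MPlayer or
libnut code):

/* rough sketch: drain a 1-in-1-out decoder with decode_delay > 0 once the
 * demuxer signals EOR for its stream; the NULL-means-flush convention is
 * invented for illustration */
#include <stddef.h>

typedef struct frame frame_t;                       /* decoded picture/samples */
frame_t *decode_one_frame(void *codec, const void *data, size_t len);
void output_frame(frame_t *f);

static void drain_on_eor(void *codec, int decode_delay)
{
    /* the decoder still holds decode_delay frames; feed it empty input
     * until they have all come out */
    for (int i = 0; i < decode_delay; i++) {
        frame_t *f = decode_one_frame(codec, NULL, 0);
        if (!f)
            break;                                  /* nothing left to flush */
        output_frame(f);
    }
}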
> > > > > one possible method to simplify single ts + MN rule / SP TS selection might
> > > > > be the following (assuming my headache doesn't affect my ability to think)
> > > > >
> > > > > 1. add exactly one "virtual/imaginary" syncpoint stream
> > > > > 2. every time the muxer is fed a packet, add a virtual syncpoint with
> > > > > its dts=pts equal to either the pts or dts (always choose the same but either
> > > > > seems ok)
> > > > > 3. discard these virtual syncpoints as needed
> > > > > 4. write them like the other packets filling in ptrs and such
> > > >
> > > > Completely confused by this explanation. Doesn't matter, I've more or less
> > > > figured out everything necessary for implementing this method.
> > >
> > > the idea is to handle syncpoints like audio & video packets; they should end up
> > > with the correct ts and position automagically even with the MN rule
> >
> > Well, it sounds insane, I can't even begin to think how well this would work,
> > and the problem is that (I'm guessing) it deals with the optional high
> > level muxer which reorders packets to meet the MN rule, while the low level muxer
> > is the one that is supposed to deal with syncpoints...
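If I'm reading it right, it would look something like this (queue_packet(),
mux_packet_t and SYNCPOINT_STREAM are invented for illustration, not real
libnut code):

#include <stdint.h>

#define SYNCPOINT_STREAM (-1)          /* the one imaginary syncpoint stream */

typedef struct {
    int      stream;
    uint64_t pts, dts;
    /* payload omitted */
} mux_packet_t;

void queue_packet(const mux_packet_t *p);   /* the normal interleaving queue */

void mux_feed(const mux_packet_t *p)
{
    queue_packet(p);                        /* the real frame, queued as usual */

    /* one virtual syncpoint per input packet, dts == pts, always derived the
     * same way; the usual MN/DTS reordering then places it like any other
     * packet, most of them get discarded, the survivors are written out as
     * real syncpoints with back_ptr etc. filled in */
    mux_packet_t sp = { SYNCPOINT_STREAM, p->pts, p->pts };
    queue_packet(&sp);
}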
>
> does your muxer reorder packets to meet the MN or DTS rule? if not it's not
> compliant ...
The high level muxer, yes. The low level muxer only checks and fails if the
input is not compliant. (The goal is to avoid buffering in the low level
muxer..)
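For the strict DTS rule case, the high level reordering boils down to
something like this (the types and names are invented, not the actual code):

#include <stdint.h>

typedef struct {
    uint64_t dts;
    /* ... */
} qframe_t;

typedef struct {
    qframe_t *head;    /* oldest queued frame of this stream, NULL if empty */
    int       eor;     /* stream already ended */
} squeue_t;

qframe_t *dequeue(squeue_t *q);

/* pick the stream whose oldest frame has the smallest dts; this is only a
 * final choice once every still-active stream has at least one frame queued */
qframe_t *next_in_dts_order(squeue_t *q, int nstreams)
{
    int best = -1;
    for (int i = 0; i < nstreams; i++) {
        if (q[i].eor)
            continue;
        if (!q[i].head)
            return NULL;                  /* must buffer more input first */
        if (best < 0 || q[i].head->dts < q[best].head->dts)
            best = i;
    }
    return best < 0 ? NULL : dequeue(&q[best]);
}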
> > > > Since (I think) we have completely proved that both methods are 100%
> > > > working, the tradeoff here is purely:
> > > > 1. complexity and allowing several syncpoints with no frame in between
> > > > 2. overhead of (most likely) 1 byte per syncpoint for almost any file
> > > >
> > > > My vote is still 2, and I have a feeling Rich will flame on the several
> > > > syncpoints thing; however, I'm gonna start implementing single ts, and if
> > > > Rich agrees with it then I'm OK with finalizing it...
> >
> > OK, umm, I'm trying, but as I feared it is getting increasingly
> > complicated, especially the interleaved dts cache flushing part... I'm
> > still strongly suggesting multiple ts. The overhead is REALLY not an issue
> > now, so it's not an excuse IMO... (You do remember that it is as many ts's
> > as the number of delay>0 streams, plus one, right? So more than two ts are
> > rarely EVER necessary..)
>
> the flushing is needed anyway if you want delay>0 EORs, as 1in 1out decoders
> need the extra packets to output anything, and for delay=0 EORs there's no
> flushing if I understand it correctly, so really I can't see how multiple ts
> are going to simplify anything here ...
With multiple ts there is no interleaved cache flushing that causes
syncpoints. There isn't even any possible situation of 2 syncpoints with no
frames in between. I finished all the nut_write_frame() code necessary for
multiple ts in 5 lines:
/* after writing frame data */
if (is_key) {
    old_back_ptr = stream->back_ptr;
    /* the stream's latest keyframe now anchors to the latest syncpoint */
    stream->back_ptr = nut->last_syncpoint;
    /* first keyframe after that syncpoint: remember its pts */
    if (nut->last_syncpoint != old_back_ptr) stream->key_pts = pts;
    if (old_back_ptr - nut->last_syncpoint > nut->max_distance) put_syncpoint();
}
Now all you have to do in the syncpoint is grab all stream->key_pts and
stream->back_ptr, compress and output!.. Certainly much simpler than what
I'm still trying to implement here.. :/
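The matching put_syncpoint() side would then be roughly this
(output_position(), write_syncpoint() and the struct layout are guesses, not
the actual code, and the on-disk coding is left out):

#include <stdint.h>

#define MAX_STREAMS 256

typedef struct {
    uint64_t back_ptr;   /* syncpoint the stream's latest keyframe anchors to */
    uint64_t key_pts;    /* pts of the first keyframe after that syncpoint */
} stream_state_t;

typedef struct {
    stream_state_t stream[MAX_STREAMS];
    int            stream_count;
    uint64_t       last_syncpoint;
} nut_state_t;

uint64_t output_position(nut_state_t *nut);
void write_syncpoint(nut_state_t *nut, const uint64_t *back_ptr,
                     const uint64_t *ts, int n);

static void put_syncpoint(nut_state_t *nut)
{
    uint64_t pos = output_position(nut);
    uint64_t back_ptr[MAX_STREAMS], ts[MAX_STREAMS];

    /* "grab all stream->key_pts and stream->back_ptr" */
    for (int i = 0; i < nut->stream_count; i++) {
        back_ptr[i] = nut->stream[i].back_ptr;
        ts[i]       = nut->stream[i].key_pts;
    }

    /* "compress and output" - however the coded syncpoint ends up looking */
    write_syncpoint(nut, back_ptr, ts, nut->stream_count);
    nut->last_syncpoint = pos;
}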
- ods15