[FFmpeg-devel] [PATCH] libvpx: alt reference frame / lag
Reimar Döffinger
Sat Jul 3 00:30:13 CEST 2010
On Fri, Jul 02, 2010 at 05:39:45PM -0400, John Koleszar wrote:
> There are a number of ways to actually pack the data, that's not the
> issue. It would require every decoder out there to be
> rebuilt/reinstalled/etc. For this change to be invisible to users,
> you'd have to coordinate all of them, which is impossible, or wait
> long enough that it was reasonable to expect everyone had updated, and
> then there'd still probably be issues. It's effectively a change to
> the bitstream, or at least would have to be managed like one.
If that is your position, then what are you going to do if there's
ever a bug discovered in the decoder? Or, even worse, a security issue?
Give up and go home?
Why would this case necessarily be different from any other such bug?
Also, a particularly thorough decoder implementation would probably be
able to play the "new" format, since it would try to do something sensible
when it finds the start of another frame in the packet after having decoded
the first one.
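To make that last point concrete, here is a minimal caller-side sketch
(not the patch under discussion), written against the avcodec_decode_video2()
API of that era: if the decoder reports how many bytes of the packet it
actually consumed, the caller can simply keep decoding until the packet is
exhausted, so a second (e.g. ARF) frame packed behind the first one is
picked up naturally. The function name is made up for illustration, and
the length-delimited packing it relies on is an assumption.

#include <libavcodec/avcodec.h>

static int decode_whole_packet(AVCodecContext *avctx, AVFrame *picture,
                               const AVPacket *pkt)
{
    AVPacket avpkt = *pkt;   /* local copy so we can advance data/size */

    while (avpkt.size > 0) {
        int got_picture = 0;
        int used = avcodec_decode_video2(avctx, picture, &got_picture,
                                         &avpkt);
        if (used < 0)
            return used;                 /* decode error */
        if (got_picture) {
            /* hand the decoded picture to the application here */
        }
        if (used == 0 && !got_picture)
            break;                       /* no progress, stop looping */
        avpkt.data += used;              /* continue with whatever follows: */
        avpkt.size -= used;              /* a second (ARF) frame, if present */
    }
    return 0;
}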
>
> >> For the question about what to do with the pts, the closest thing to
> >> the current implementation would be to create a pts between frames if
> >> the user's time base supports it, and disable the feature otherwise.
> >
> > And how would a transcoding application figure out the real frame-rate
> > from this mess when e.g. transcoding to DivX (MPEG-4 in AVI)?
> > But more importantly, how would the _encoder_ know which time base the
> > _muxer_ will use?
>
> err... lavc encoders don't know that now, and a number of muxers use
> their own, different time bases... I don't see what you're getting at
> here.
You want to disable an encoder feature based on the time base.
The encoder can hardly disable a feature based on the time base
if it does not know which time base the muxer will actually use...
Obviously your idea was to fix this only for FFmpeg, which of course
leaves every other user of libavcodec out in the cold...
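For illustration, here is roughly what the proposed "precise enough"
check would have to look like, expressed with libavutil's AVRational
helpers. It is a sketch only: the arf_pts_representable() name is made
up, and the frame_rate parameter is precisely the information the
encoder cannot be assumed to have.

#include <stdio.h>
#include <libavutil/rational.h>

/* Enable ARF only if the caller's time base leaves room for an extra
   pts strictly between two visible frames, i.e. one frame lasts at
   least two time-base ticks. */
static int arf_pts_representable(AVRational time_base, AVRational frame_rate)
{
    /* one frame's duration measured in time-base ticks:
       (frame_rate.den / frame_rate.num) / time_base */
    AVRational dur = av_div_q((AVRational){ frame_rate.den, frame_rate.num },
                              time_base);
    return av_cmp_q(dur, (AVRational){ 2, 1 }) >= 0;
}

int main(void)
{
    /* 1/25 time base at 25 fps: 1 tick per frame, no room for an ARF pts */
    printf("%d\n", arf_pts_representable((AVRational){ 1, 25 },
                                         (AVRational){ 25, 1 }));
    /* 1 ms time base at 25 fps: 40 ticks per frame, an ARF pts fits */
    printf("%d\n", arf_pts_representable((AVRational){ 1, 1000 },
                                         (AVRational){ 25, 1 }));
    return 0;
}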
> >> So if the user passes in a timebase on the command line, the time is
> >> calculated in that base either based on a frame rate from the command
> >> line or the time in the originating container, and ARFs are enabled
> >> (presuming the user's time base is precise enough).
> >
> > So with each transcoding from VP8 to VP8 you end up doubling the frame
> > rate?
>
> I never said anything about doubling. Users who wanted to use this
> feature aren't likely to know the real frame rate, so they'd probably
> end up specifying a timebase of ms/us/ns. The encoder wouldn't change
> the time base given in this scheme. If you're transcoding material
> that's already in a high precision timebase, you'd be able to use
> ARFs, since the frame duration would be >1.
A time base of 1 ms will, for AVI, result in at least 4 kB/s of
overhead, since AVI is a constant-frame-rate format and the muxer has
to emit an empty chunk for every tick that carries no real frame.
I think this is layering one hack on top of another, with not even
half of the corner cases considered. If that is the plan, I think just
removing ARF from the spec would result in a far better user experience.
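A back-of-the-envelope check of that overhead figure, as a standalone
sketch: the 8-byte chunk header and 16-byte idx1 entry sizes come from
the RIFF/AVI format, while the 30 fps source rate is an assumption
picked purely for illustration.

#include <stdio.h>

int main(void)
{
    const int ticks_per_sec = 1000; /* 1 ms time base */
    const int real_fps      = 30;   /* assumed actual frame rate */
    const int chunk_header  = 8;    /* 'xxdc' fourcc + 32-bit size field */
    const int index_entry   = 16;   /* one idx1 entry per chunk */

    /* every tick without a real frame still needs an empty chunk */
    int empty = ticks_per_sec - real_fps;
    printf("empty chunks/s: %d -> %d B/s headers + %d B/s index\n",
           empty, empty * chunk_header, empty * index_entry);
    /* ~7.8 kB/s of chunk headers alone, so "at least 4 kB/s" is
       actually a conservative estimate */
    return 0;
}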