[Ffmpeg-devel] Timestamps on Decoding
Paul Curtis
Mon Oct 24 00:45:20 CEST 2005
Ronald S. Bultje wrote:
> They are probably correct, and more than tired of having to explain it
> again every time. I explained it to you as simply as possible in my
> other email, which you appear to have silently ignored.
I did read your e-mail several times ... thanks, it explained a number
of things. In addition, I've reduced the 'mplayer' playback code to the
basics. Luckily, I don't have to rescale to correct for the display
versus the stored aspect ratio.
> Richard may not be very nice to you, but he is completely right. The
> timestamps that you are using are illegal, and the fact that it works is
> most likely nothing more than pure coincidence or luck. Most likely, the
> Helix SDK does timestamp smoothing to make sure the output file is
> valid.
The Helix SDK expects a start time and end time (in milliseconds) for a
single frame of either audio or video. When I receive a "finished" frame
from the decoder, there is no way (as several e-mail threads attest) to
determine the presentation timestamp. I have tried every value in all
the structures. I have used values that were available and interpolated
the missing values.
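For what it's worth, the unit conversion itself is the easy half. A
sketch of what I mean, using av_rescale_q; packet_window_ms is a name
I've made up, and it assumes the demuxer actually filled in pts and
duration on the packet, which is exactly what I can't count on for the
decoded frame:

    #include <libavformat/avformat.h>
    #include <libavutil/mathematics.h>

    /* Turn one packet's timestamp window into the start/end
     * millisecond pair the Helix SDK expects. Hypothetical helper;
     * assumes pkt->pts and pkt->duration are valid. */
    static void packet_window_ms(const AVStream *st, const AVPacket *pkt,
                                 int64_t *start_ms, int64_t *end_ms)
    {
        const AVRational ms = {1, 1000};
        *start_ms = av_rescale_q(pkt->pts, st->time_base, ms);
        *end_ms   = *start_ms +
                    av_rescale_q(pkt->duration, st->time_base, ms);
    }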
It *is* pure luck that the container/stream combination I'm using
happens to present a monotonically increasing DTS for the decoded audio and
video frames. I realize that this may not hold true for all
container/stream combinations.
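To make the failure mode concrete with a made-up GOP: frames whose
display order is I B B P carry pts 0,1,2,3, but they are coded and
decoded as I P B B, so the demuxer emits dts 0,1,2,3 against pts
0,3,1,2. DTS climbs steadily while PTS jumps around, which is why a
monotonic DTS can pass for a presentation time right up until B-frames
appear.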
But the last question was never answered: when I receive a complete raw
frame, how can I accurately tell what it is I have received? Queueing
the frames (audio or video) and presenting them to the Helix encoder
wouldn't be a problem IF I could determine when the frame SHOULD be
presented. This is the crux of the problem, and even Richard has said
it's a problem.
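A minimal sketch of how this looks with the much newer send/receive
decoder API (well after this thread's vintage), where the decoder
reorders timestamps onto the frame it hands back, so the frame that
comes out carries the time it should be presented; sink() here is a
hypothetical caller-supplied callback:

    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>
    #include <libavutil/mathematics.h>

    /* Drain every finished frame for one input packet and report the
     * time at which each should be presented, in milliseconds. */
    static int decode_and_time(AVCodecContext *dec, const AVStream *st,
                               const AVPacket *pkt,
                               void (*sink)(const AVFrame *f, int64_t ms))
    {
        int ret = avcodec_send_packet(dec, pkt);
        if (ret < 0)
            return ret;

        AVFrame *frame = av_frame_alloc();
        if (!frame)
            return AVERROR(ENOMEM);

        while ((ret = avcodec_receive_frame(dec, frame)) >= 0) {
            /* The decoder reorders timestamps for us: this is the
             * presentation time, falling back to DTS when PTS is
             * missing from the container. */
            int64_t pts = frame->best_effort_timestamp;
            int64_t ms  = av_rescale_q(pts, st->time_base,
                                       (AVRational){1, 1000});
            sink(frame, ms);
            av_frame_unref(frame);
        }
        av_frame_free(&frame);
        return (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) ? 0 : ret;
    }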
Paul