[Ffmpeg-devel] Re: [PATCH] Tiertex .SEQ files support
Michael Niedermayer
michaelni
Thu Sep 21 23:52:43 CEST 2006
Hi
On Wed, Sep 20, 2006 at 12:19:48AM +0200, Gregory Montoir wrote:
>
> Hi,
>
> Updated patch.
>
> Michael Niedermayer wrote:
> > Hi
> >
> > [...]
> >
> >>+static int seq_read_header(AVFormatContext *s, AVFormatParameters *ap)
> >>+{
> >>+ int i, rc;
> >>+ SeqDemuxContext *seq = (SeqDemuxContext *)s->priv_data;
> >>+ ByteIOContext *pb = &s->pb;
> >>+ AVStream *st;
> >>+
> >>+ if (get_buffer(pb, seq->current_frame_data, SEQ_FRAME_SIZE) != SEQ_FRAME_SIZE)
> >>+ return AVERROR_IO;
> >
> > why do you read the data into a buffer instead of using get_le*() everywhere?
> > if there's no reason, then please change it to use get_le*()
>
> Mostly because I thought it would be more efficient to do it like this.
> Instead of fseek'ing/ftell'ing ByteIOContext, I just read the whole 6kb
> buffer and do the various memcpy calls from the different video data
> offsets.
you don't need fseek/ftell, see below
[...]
>
>
> I also have a question regarding the way palette update is handled.
> Looking at idcin.c/idcinvideo.c, AVPaletteControl is used. If I
> understand things correctly, the AVCodecContext structure contains
> this field; the demuxer updates it when it detects a palette change,
> and the next time the video decoder processes a frame, it can handle
> it properly. But what happens if the (demuxer)_read_packet() and
> (video)_decode_frame() calls are not exactly sequential? For example,
> in the following situation:
>
> demuxer_read_packet() is called
> -> a palette change is detected on frame (n+0), AVPaletteControl
> is updated
>
> demuxer_read_packet() is called (probably to buffer data)
> -> no palette change in frame (n+1)
>
> demuxer_read_packet() is called a third time (probably to buffer even
> more data)
> -> a palette change is detected on frame (n+2), AVPaletteControl
> is updated
>
> video_decode_frame() is called
> -> it processes frame (n+0), but with the palette of frame (n+2) (as
> the demuxer has just updated AVPaletteControl in the previous call).
>
> I am just wondering because that's the behaviour I'm observing under
> ffplay. Is there a way to properly do some kind of synchronization?
No, the AVPaletteControl system is totally broken; a patch which removes
it and instead passes palette changes as AVPackets would be very welcome.
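(For illustration only, a rough sketch of how such a packet-based scheme
might look; this is not an existing API. The seq->current_palette_* names
come from the quoted patch; current_video_data, current_video_data_size
and the leading flag byte are made up for the example:)

    static int seq_read_packet_sketch(AVFormatContext *s, AVPacket *pkt)
    {
        SeqDemuxContext *seq = s->priv_data;
        int pal = seq->current_palette_pkt_size;   /* 768 or 0 */

        /* ship the palette inside the video packet so it can never get
         * out of sync with the frame it belongs to */
        if (av_new_packet(pkt, 1 + pal + seq->current_video_data_size))
            return AVERROR_IO;
        pkt->data[0] = pal ? 1 : 0;   /* tells the decoder a palette follows */
        if (pal)
            memcpy(pkt->data + 1, seq->current_palette_pkt_ptr, pal);
        memcpy(pkt->data + 1 + pal, seq->current_video_data,
               seq->current_video_data_size);
        return 0;
    }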
[...]
> Index: ffmpeg/libavcodec/tiertexseqv.c
> ===================================================================
> --- ffmpeg/libavcodec/tiertexseqv.c (revision 0)
> +++ ffmpeg/libavcodec/tiertexseqv.c (revision 0)
[...]
> +typedef struct SeqVideoContext {
> + AVCodecContext *avctx;
> + AVFrame frame;
> + unsigned int palette[256];
> + unsigned char unpack_buffer[64];
> +} SeqVideoContext;
> +
> +
> +static unsigned char *seq_unpack_rle_frame(unsigned char *src, unsigned char *dst, int dst_size)
> +{
> + int i, len, sz;
> + GetBitContext gb;
> + int code_table[64];
> +
> + for (i = 0, sz = 0; i < 64 && sz < dst_size;) {
> + init_get_bits(&gb, src, 8); ++src;
> +
> + code_table[i] = get_sbits(&gb, 4);
> + sz += ABS(code_table[i]);
> + ++i;
> +
> + code_table[i] = get_sbits(&gb, 4);
> + sz += ABS(code_table[i]);
> + ++i;
> + }
Calling init_get_bits() in the loop is of course not what I had in mind;
I rather meant that you should do a single init_get_bits() per frame.
Though I see now that there are some problems with that approach,
as there are plenty of cases where byte-based accesses happen.
In the specific case above I would suggest to either
A. put the init_get_bits() before the loop,
B. do the init_get_bits() outside and long before the function and use
   get_bits_count() to find the byte position for the memcpy/memset, or
C. forget init_get_bits() and return to the old byte-based code.
Also, I would suggest not to unroll loops "by hand" unless they are
speed critical.
Note: if any of my suggestions leads to slower, more bloated or messier
code, then you should of course not follow it but rather complain.
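For reference, a rough, untested sketch of suggestion A, with the manual
unrolling also removed (note the sz check then runs after every code
instead of after every pair, so termination can differ by one nibble):

    static unsigned char *seq_unpack_rle_frame(unsigned char *src,
                                               unsigned char *dst, int dst_size)
    {
        int i, sz;
        GetBitContext gb;
        int code_table[64];

        /* one init for the whole code table (at most 64 nibbles = 32 bytes);
         * the nibble order is the same as with the per-byte init */
        init_get_bits(&gb, src, 32 * 8);
        for (i = 0, sz = 0; i < 64 && sz < dst_size; i++) {
            code_table[i] = get_sbits(&gb, 4);
            sz += ABS(code_table[i]);
        }
        src += (get_bits_count(&gb) + 7) >> 3;
        /* ... rest of the function unchanged, continuing from src ... */
    }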
[...]
> +static unsigned char *seq_decode_op2(SeqVideoContext *seq, unsigned char *src, unsigned char *dst)
> +{
> + int b, i, len;
> + GetBitContext gb;
> +
> + len = *src++;
> + if (len & 0x80) {
> + switch (len & 15) {
> + case 1:
> + src = seq_unpack_rle_frame(src, seq->unpack_buffer, sizeof(seq->unpack_buffer));
> + for (b = 0; b < 8; ++b) {
> + memcpy(dst, &seq->unpack_buffer[b * 8], 8);
> + dst += seq->frame.linesize[0];
> + }
> + break;
> + case 2:
> + src = seq_unpack_rle_frame(src, seq->unpack_buffer, sizeof(seq->unpack_buffer));
> + for (i = 0; i < 8; ++i) {
> + for (b = 0; b < 8; ++b)
> + dst[b * seq->frame.linesize[0]] = seq->unpack_buffer[i * 8 + b];
> + ++dst;
> + }
> + break;
> + }
I would move the call to seq_unpack_rle_frame() before the switch so that
there is just a single call; that way the cost of inlining the function
would be smaller for the compiler (if it does inline it ...).
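i.e. roughly (untested sketch; note the unpack then also runs for opcode
values the original switch ignores):

    src = seq_unpack_rle_frame(src, seq->unpack_buffer,
                               sizeof(seq->unpack_buffer));
    switch (len & 15) {
    case 1:
        for (b = 0; b < 8; ++b) {
            memcpy(dst, &seq->unpack_buffer[b * 8], 8);
            dst += seq->frame.linesize[0];
        }
        break;
    case 2:
        for (i = 0; i < 8; ++i) {
            for (b = 0; b < 8; ++b)
                dst[b * seq->frame.linesize[0]] = seq->unpack_buffer[i * 8 + b];
            ++dst;
        }
        break;
    }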
> + } else {
> + unsigned char *color_table = src;
> + src += len;
> + if (len > 8) {
> + init_get_bits(&gb, src, 32 * 8); src += 32;
> + for (b = 0; b < 8; ++b) {
> + for (i = 0; i < 8; ++i)
> + dst[i] = color_table[get_bits(&gb, 4)];
> + dst += seq->frame.linesize[0];
> + }
> + } else if (len > 4) {
> + init_get_bits(&gb, src, 24 * 8); src += 24;
> + for (b = 0; b < 8; ++b) {
> + for (i = 0; i < 8; ++i)
> + dst[i] = color_table[get_bits(&gb, 3)];
> + dst += seq->frame.linesize[0];
> + }
> + } else if (len > 2) {
> + init_get_bits(&gb, src, 16 * 8); src += 16;
> + for (b = 0; b < 8; ++b) {
> + for (i = 0; i < 8; ++i)
> + dst[i] = color_table[get_bits(&gb, 2)];
> + dst += seq->frame.linesize[0];
> + }
> + } else {
> + init_get_bits(&gb, src, 8 * 8); src += 8;
> + for (b = 0; b < 8; ++b) {
> + for (i = 0; i < 8; ++i)
> + dst[i] = color_table[get_bits(&gb, 1)];
> + dst += seq->frame.linesize[0];
> + }
> + }
> + }
why not (assuming it's not slow):

    if (len > 8) bits = 4;
    else         bits = foobar_table[len];  /* or some av_log2-based stuff */
    init_get_bits(&gb, src, 8*8*bits); src += 8*bits;
    for (b = 0; b < 8; ++b) {
        for (i = 0; i < 8; ++i)
            dst[i] = color_table[get_bits(&gb, bits)];
        dst += seq->frame.linesize[0];
    }
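Filling in the placeholder (a sketch; seq_bits_table is a made-up name,
its values just mirror the thresholds of the quoted if/else chain:
len 1..2 -> 1 bit, 3..4 -> 2 bits, 5..8 -> 3 bits, above 8 -> 4 bits):

    static const uint8_t seq_bits_table[9] = { 0, 1, 1, 2, 2, 3, 3, 3, 3 };

    bits = len > 8 ? 4 : seq_bits_table[len];
    init_get_bits(&gb, src, 8 * 8 * bits);
    src += 8 * bits;
    for (b = 0; b < 8; ++b) {
        for (i = 0; i < 8; ++i)
            dst[i] = color_table[get_bits(&gb, bits)];
        dst += seq->frame.linesize[0];
    }

(av_log2(len - 1) + 1 yields the same values, so the table could be dropped.)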
[...]
> +static int seq_parse_frame_data(SeqDemuxContext *seq, ByteIOContext *pb)
> +{
> + unsigned int offset, end_offset;
> +
> + if (get_buffer(pb, seq->current_frame_data, SEQ_FRAME_SIZE) != SEQ_FRAME_SIZE)
> + return AVERROR_IO;
> +
> + /* sound data */
> + offset = LE_16(&seq->current_frame_data[0]);
> + if (offset != 0) {
> + seq->current_audio_pkt_size = SEQ_AUDIO_BUFFER_SIZE * 2;
> + seq->current_audio_pkt_ptr = seq->current_frame_data + offset;
> + } else {
> + seq->current_audio_pkt_size = 0;
> + seq->current_audio_pkt_ptr = 0;
> + }
> +
> + /* palette data */
> + offset = LE_16(&seq->current_frame_data[2]);
> + if (offset != 0) {
> + seq->current_palette_pkt_size = 768;
> + seq->current_palette_pkt_ptr = seq->current_frame_data + offset;
> + } else {
> + seq->current_palette_pkt_size = 0;
> + seq->current_palette_pkt_ptr = 0;
> + }
> +
> + /* video data */
> + offset = LE_16(&seq->current_frame_data[8]);
> + if (offset != 0) {
> + end_offset = LE_16(&seq->current_frame_data[10]);
> + if (end_offset == 0)
> + end_offset = LE_16(&seq->current_frame_data[12]);
> + if (end_offset == 0)
> + end_offset = LE_16(&seq->current_frame_data[14]);
the indentation looks wrong
> + seq_fill_buffer(seq, seq->current_frame_data[5],
> + seq->current_frame_data + offset,
> + end_offset - offset);
> + }
> + offset = LE_16(&seq->current_frame_data[10]);
> + if (offset != 0) {
> + end_offset = LE_16(&seq->current_frame_data[12]);
> + if (end_offset == 0)
> + end_offset = LE_16(&seq->current_frame_data[14]);
> + seq_fill_buffer(seq, seq->current_frame_data[6],
> + seq->current_frame_data + offset,
> + end_offset - offset);
> + }
> + offset = LE_16(&seq->current_frame_data[12]);
> + if (offset != 0) {
> + end_offset = LE_16(&seq->current_frame_data[14]);
> + seq_fill_buffer(seq, seq->current_frame_data[7],
> + seq->current_frame_data + offset,
> + end_offset - offset);
> + }
    for(i=0; i<3; i++)
        buf_num[i]= get_byte();
    for(i=0; i<4; i++)
        offset[i]= get_le16();
    end_offset[3]= 0;
    for(i=2; i>=0; i--)
        end_offset[i]= offset[i+1] ? offset[i+1] : end_offset[i+1];
    for(i=0; i<3; i++){
        if(offset[i])
            seq_fill_buffer(seq, buf_num[i],
                            seq->current_frame_data + offset[i],
                            end_offset[i] - offset[i]);
    }
Don't you think that something like the above code is more readable? Also,
no url_fseek() is needed ...
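(A made-up example of the fallback: with offset[] = {0x0010, 0x0200,
0x0000, 0x0400}, the backward pass gives end_offset[] = {0x0200, 0x0400,
0x0400, 0x0000}, so block 0 spans 0x0010..0x0200, block 1 spans
0x0200..0x0400, and block 2 is skipped because its offset is 0.)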
[...]
> + pkt->pts = seq->frame_number;
> +
> + memcpy(pkt->data, seq->current_audio_pkt_ptr, seq->current_audio_pkt_size);
indentation is wrong
[...]
> Index: ffmpeg/libavcodec/bitstream.h
> ===================================================================
> --- ffmpeg/libavcodec/bitstream.h (revision 6286)
> +++ ffmpeg/libavcodec/bitstream.h (working copy)
> @@ -402,7 +402,7 @@
> {
> #ifdef CONFIG_ALIGN
> const uint8_t *p=v;
> - return (((p[0]<<8) | p[1])<<16) | (p[2]<<8) | (p[3]);
> + return (((p[0]<<24) | p[1])<<16) | (p[2]<<8) | (p[3]);
this looks wrong; look at how the () are placed: the whole ((p[0]<<8) | p[1])
term is shifted left by another 16, so p[0] already ends up at bit 24, and
shifting it by 24 first would push it out of the 32-bit word
[...]
> @@ -463,8 +463,13 @@
> NEG_USR32(name##_cache, num)
> # endif
>
> +# ifdef ALT_BITSTREAM_READER_LE
> # define SHOW_SBITS(name, gb, num)\
> + NEG_SSR32((name##_cache)<<(32-(num)), num)
> +# else
> +# define SHOW_SBITS(name, gb, num)\
> NEG_SSR32(name##_cache, num)
> +# endif
these two SHOW_SBITS definitions should be put under the same #ifdef
ALT_BITSTREAM_READER_LE that the SHOW_UBITS definitions are already under
-> fewer #ifdefs
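i.e. roughly (a sketch; the LE SHOW_UBITS body is elided since it is
outside the quoted context):

    # ifdef ALT_BITSTREAM_READER_LE
    #  define SHOW_UBITS(name, gb, num)\
         ... /* existing LE definition */
    #  define SHOW_SBITS(name, gb, num)\
         NEG_SSR32((name##_cache)<<(32-(num)), num)
    # else
    #  define SHOW_UBITS(name, gb, num)\
         NEG_USR32(name##_cache, num)
    #  define SHOW_SBITS(name, gb, num)\
         NEG_SSR32(name##_cache, num)
    # endif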
Except for these things, the bitstream.h patch looks OK.
[...]
--
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB
In the past you could go to a library and read, borrow or copy any book
Today you'd get arrested for merely telling someone where the library is