[Ffmpeg-devel] RTP patches & RFC
Ryan Martell
rdm4
Thu Oct 12 15:59:36 CEST 2006
On Oct 12, 2006, at 5:29 AM, Michael Niedermayer wrote:
> Hi
>
> On Wed, Oct 11, 2006 at 10:41:36PM -0500, Ryan Martell wrote:
> [...]
>>>
>>> [...]
>>>> 4) I am using my own packet queues. This means that a packet comes
>>>> in (from network), gets allocated and copied to a packet, and
>>>> once a
>>>> full frame is received, is conglomerated and
>>>> copied into an AVPacket. I got rid of the extra allocations and
>>>> reallocations and copies with the fragmentation packets (see the
>>>> add_h264_partial_data)
>>>
>>> either
>>> A. you support out of order packets (like with a n packet buffer
>>> which
>>> reorders packets) and then after reordering you output them
>>> B. you output packets in their order and fragmentation
>>>
>>> either way you should set AVStream->need_parsing=1 and leave the
>>> merging
>>> of packets to the AVParser
>>
>> Okay, sorry to be dense about this, but I'm still confused. The
>> packets that come over rtp have their size indicated in various ways,
>> depending on the packet. That information is NOT in the NAL, and the
>> NAL is NOT preceded by the start sequence. So what I was doing was
>> converting them to AvC style (preceded by length) packets. This is
>
> iam more in favor of adding the startcode prefix instead of the length
> i dont think the avc style length stuff is supported by our AVParser
I can try that; I tried it in the past and it didn't work, but I
wasn't using the AVParser then, and I had other (possibly offsetting)
bugs at the time. I thought the length prefix was cleaner, though,
since it avoided any searching through the bitstream.
The other part of the issue is that the rtp stuff wouldn't start
until the extradata for the codec was filled in; the only way to fill
it in (I thought) was to give it a valid AVC block, which is why I
ended up going down that path. I'll reinvestigate.
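Just to make sure I try the right thing, here's what I understand the
suggested change to be -- a minimal sketch only, not the actual patch:
each NAL unit pulled from the RTP payload gets the Annex B start code
prepended when the AVPacket is built, and the parser then finds the
unit boundaries itself:

#include <string.h>
#include "avformat.h"

/* Emit one received NAL unit with an Annex B start code prefix
 * (00 00 00 01) instead of a 4-byte AVC length field. */
static int nal_to_annexb(AVPacket *pkt, const uint8_t *nal, int nal_len)
{
    static const uint8_t start_code[4] = { 0, 0, 0, 1 };

    if (av_new_packet(pkt, nal_len + 4) < 0)
        return -1;

    memcpy(pkt->data, start_code, 4);     /* start code prefix */
    memcpy(pkt->data + 4, nal, nal_len);  /* NAL unit, unmodified */
    return 0;
}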
>> (Or does the AVParser know enough about h264 nals to
>> determine where they start and end?). There are no sequence packets
>> in the stream (it relies on the rtp timestamps). Furthermore, if
>> they are fragmented, they aren't just a fragmented bitstream, they
>> have header data that must be stripped first, and they have to be
>> accumulated, and only if certain parameters are met.
>>
>> Step two, is accumulating all of the packets with the same timestamp
>> into a frame. I can see how the AVParser could do this for me, based
>> on the pts of the packet.
>
> it doesnt need the pts for this, it will analyze the nal units
I haven't looked at the H.264 stream specification too closely, but I
thought there was something called a sequence NAL with timestamp
information in it. From what I understand from reading the RTP docs,
that packet isn't present in the RTP stream; you're expected to get
the timing from the timestamps of the RTP packets.
If I just prepend the start code to my packets (instead of the
length bytes) and set need_parsing=1, then I don't have to worry
about the timestamp/pts issue at all?
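And just to spell out the fragmentation side mentioned above: whether
the output ends up length-prefixed or startcode-prefixed, the FU-A
fragments still have to be reassembled on the RTP side first. Roughly
like this -- a sketch of the RFC 3984 rule, not the actual
add_h264_partial_data code, and with no bounds checking:

#include <stdint.h>
#include <string.h>

/* Append one FU-A fragment to the NAL unit being rebuilt in out[]:
 * strip the two fragmentation bytes, rebuild the original NAL header
 * from the start fragment, and accumulate payload until the end bit.
 * Returns 1 when the NAL unit is complete. */
static int handle_fu_a(const uint8_t *buf, int len,
                       uint8_t *out, int *out_len)
{
    uint8_t fu_indicator = buf[0];            /* F, NRI, type == 28 */
    uint8_t fu_header    = buf[1];            /* S, E, R, original NAL type */

    if (fu_header & 0x80) {                   /* start bit: begin a new unit */
        *out_len = 0;
        out[(*out_len)++] = (fu_indicator & 0xE0) | (fu_header & 0x1F);
    }

    memcpy(out + *out_len, buf + 2, len - 2); /* payload minus FU bytes */
    *out_len += len - 2;

    return (fu_header & 0x40) != 0;           /* end bit: unit is complete */
}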
>
> [...]
>>>>
>>>> struct h264_packet {
>>>> uint8_t *data;
>>>> uint32_t length;
>>>> uint32_t timestamp;
>>>>
>>>> struct h264_packet *next;
>>>> };
>>>
>>> timestamps probably need to be 64bit
>>
>> they're only 32 bit in the rtp packet.
>
> ok, if you just compare them for equality then thats fine otherwise
> you should use libavformat/utils.c lsb2full() to make 64bit timestamps
> out of them as they will overflow after ~ 13h (assuming 90khz) and
> 13h isnt that long ...
I just had to take care of this with the rtcp statistics stuff,
although I'm using that timestamp internally for putting frames
together and for accumulating fragmented NALs (so it's almost more
like an id to me).
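For reference, the wraparound extension itself is tiny; something
along these lines -- a sketch of the lsb2full idea specialized to
32-bit RTP timestamps, not the actual utils.c function:

#include <stdint.h>

/* Extend a 32-bit RTP timestamp to 64 bits by picking the candidate
 * closest to the previously extended value (handles wraparound in
 * either direction, e.g. for slightly out-of-order packets). */
static int64_t ext_rtp_timestamp(int64_t prev_full, uint32_t ts32)
{
    int64_t full = (prev_full & ~0xFFFFFFFFLL) | ts32;

    if (full + 0x80000000LL < prev_full)      /* wrapped forward */
        full += 1LL << 32;
    else if (full > prev_full + 0x80000000LL) /* stepped back over a wrap */
        full -= 1LL << 32;
    return full;
}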
> [...]
>>
>>>> // av_set_pts_info(s->st, 33, 1, 90000);
>>>
>>> you must set the timebase correctly (the default of 90khz is very
> likely not
>>> correct)
>>
>> h264/rtp is automatically 90khz. It's also specified on the sdp line,
>> but it will always be 90kHz.
>
> what does the ADJUST_TIMESTAMP or whatever it was called code do then?
If you #define ADJUST_TIMESTAMP, it does the timestamp normalization
that rtp.c did (scaling the timestamp to the 90kHz timebase, using
the rtcp ntp time at start minus the present time), and that value is
stored in the pts of the packet.
If you don't #define ADJUST_TIMESTAMP, it just puts the timestamp
from the packet directly into the pts field.
I was trying to figure out how to get my audio to sync. I suspect
that ADJUST_TIMESTAMP should always be defined (it also handles the
conversion from the 32-bit rtp timestamp to the 64-bit timestamp).
I have to keep the unadjusted timestamp around, because it's possible
that an rtcp packet arrives before a fragmented nal unit is complete,
which would potentially change the adjusted timestamp mid-frame...
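To spell out what I think that normalization amounts to (illustrative
only, with made-up names, not the actual rtp.c code): the rtcp sender
report gives an (NTP time, RTP timestamp) pair, and the pts is the
packet's RTP offset from that report plus the NTP distance from the
first report, converted to the 90kHz timebase:

#include <stdint.h>

/* pkt_ts:              32-bit RTP timestamp of the data packet
 * last_rtcp_ts:        RTP timestamp from the most recent sender report
 * first/last_rtcp_ntp: 64-bit NTP times (32.32 fixed point) from the
 *                      first and most recent sender reports            */
static int64_t rtp_ts_to_pts(uint64_t first_rtcp_ntp, uint64_t last_rtcp_ntp,
                             uint32_t last_rtcp_ts, uint32_t pkt_ts)
{
    /* signed RTP-tick offset of this packet from the sender report
     * (the cast handles wraparound between the two) */
    int64_t delta = (int32_t)(pkt_ts - last_rtcp_ts);

    /* NTP distance from stream start, converted to 90 kHz ticks;
     * done in two shifts to avoid 64-bit overflow:
     * x * 90000 / 2^32  ==  ((x >> 14) * 5625) >> 14 */
    int64_t base = (((int64_t)(last_rtcp_ntp - first_rtcp_ntp) >> 14) * 5625) >> 14;

    return base + delta;
}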
>> Is this the correct way to set it?
>
> yes, though the 33 should match the number of bits in the AVPacket-
> >pts
>
>
> [...]
>> I'm also having to get the rtsp rtcp packet (receiver reports)
>> working, because that's why the server is cutting me off after two
>> minutes.
>
> great
Yeah, but I'm having a hard time writing packets out on the incoming
rtp stream. Something's getting closed (even though I'm opening it
with URL_RDWR). I'll figure it out.
>>
>> Finally, I can give a Blocksize parameter to the server on RTSP
>> initial handshake. With this, I could specify a maximum packet size,
>> which would allow me to preallocate all of my internal packets (using
>> around 2k buffers each, since MTU is 1500 by default). This would
>> basically mean that there would be initial memory allocation for
>> packets until I had enough in the packet pool, and I could retire
>> them to another linked list and pull them from there. So there would
>> be no malloc's after an initial startup period. I think this would
>> be a good thing, but I'm not sure how the rest of FFMPEG feels about
>> memory allocation. Is this a better approach?
>
> yes, less *malloc() and *free() is better
Will do.
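For concreteness, the pool I have in mind is just a free list; a
rough sketch with made-up names (plain malloc standing in for
av_malloc):

#include <stdlib.h>
#include <stdint.h>

/* Buffers sized for the negotiated Blocksize (plus headroom over the
 * 1500-byte MTU) are retired to a free list instead of being freed,
 * so steady-state operation does no allocation. */
typedef struct PooledPacket {
    uint8_t data[2048];
    int     len;
    struct PooledPacket *next;
} PooledPacket;

static PooledPacket *pool_free_list = NULL;

static PooledPacket *pool_get(void)
{
    PooledPacket *p = pool_free_list;
    if (p)
        pool_free_list = p->next;  /* reuse a retired packet */
    else
        p = malloc(sizeof(*p));    /* only during the warm-up period */
    return p;
}

static void pool_put(PooledPacket *p)
{
    p->next        = pool_free_list;  /* retire instead of free() */
    pool_free_list = p;
}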