[FFmpeg-devel] RTCP packets issue
Luca Abeni
lucabe72
Thu Jul 22 09:05:55 CEST 2010
On 07/21/2010 04:43 PM, sateesh babu wrote:
> Hi,
>
> I am using ffmpeg libraries to receive an MPEG-4 stream from an RTSP
> server (Siqura C60 E encoder). The server expects a RR packet from the
> client as a reply to the SR packet it sends.
Are you sure about this? In my understanding of the RFC (RFC 3550), a
server behaving in this way is broken.
> VLC is able to play the
> stream without any disconnection. The server seems to tear down the
> connection with ffmpeg libraries.
In my understanding, the only thing that needs to be fixed is the client's
SSRC in the RTCP RR packets.
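To make the SSRC fix concrete, here is a minimal sketch of how the start of an RTCP RR packet is laid out per RFC 3550, section 6.4.2. The point is that the first 32-bit word after the fixed header carries the SSRC of the *sender of the RR* (the client), not the server's SSRC copied from the SR. The function name and layout are illustrative, not the actual FFmpeg code:

```c
#include <stdint.h>

/* Illustrative sketch: serialize the first 8 bytes of an RTCP RR
 * packet (RFC 3550, 6.4.2).  The word at offset 4 is the SSRC of
 * the client sending this RR -- the field that must be set to the
 * client's own SSRC, not the server's. */
void write_rr_header(uint8_t *buf, uint32_t our_ssrc)
{
    buf[0] = (2 << 6) | 1;      /* version 2, no padding, report count 1 */
    buf[1] = 201;               /* packet type 201 = RR */
    buf[2] = 0;
    buf[3] = 7;                 /* length in 32-bit words, minus one */
    buf[4] = (uint8_t)(our_ssrc >> 24);   /* SSRC of packet sender */
    buf[5] = (uint8_t)(our_ssrc >> 16);
    buf[6] = (uint8_t)(our_ssrc >> 8);
    buf[7] = (uint8_t)(our_ssrc);
}
```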
[...]
> 2. The current code puts a value of 0 for 'delay_since_last' value.
Last time I checked, this value seemed to be computed correctly. It is
set to 0 only before the first RTCP SR packet is received (which makes
sense).
> I
> have changed uint64_t ntp_time= s->last_rtcp_ntp_time to uint64_t
> ntp_time= av_gettime()/1000000 to get a reasonable value.
This is wrong. This value should be the delay between receiving the
last SR packet and sending this RR packet.
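In other words, DLSR is expressed in units of 1/65536 seconds (RFC 3550, section 6.4.1). A sketch of the computation, with hypothetical variable names (now_us and last_sr_us are wall-clock times in microseconds, e.g. as returned by av_gettime()):

```c
#include <stdint.h>

/* Sketch: DLSR ("delay since last SR") in units of 1/65536 s.
 * last_sr_us is the local time at which the last SR arrived;
 * 0 means no SR has been received yet, so DLSR must be 0. */
uint32_t compute_dlsr(int64_t now_us, int64_t last_sr_us)
{
    if (last_sr_us <= 0)
        return 0;
    int64_t delta_us = now_us - last_sr_us;
    /* convert microseconds to 1/65536-second units */
    return (uint32_t)((delta_us << 16) / 1000000);
}
```

This is why replacing the value with av_gettime()/1000000 is wrong: an absolute wall-clock time is not a delay relative to the last SR.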
> 3. Wireshark is displaying all the SR RTCP packets but sometimes the
> library seems to send a RR packet with an old ntp timestamp
AFAIR, there is no NTP timestamp in RR packets. If you mean the "last
SR timestamp" field (LSR), then rtp_check_and_send_back_rr() seems to
compute it correctly. What do you mean by "old ntp timestamp"? According
to the code, last_rtcp_ntp_time is set every time an SR packet is
received, and this field is used to compute LSR.
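For reference, LSR is defined (RFC 3550, section 6.4.1) as the middle 32 bits of the 64-bit NTP timestamp carried in the most recent SR: the low 16 bits of the integer seconds plus the high 16 bits of the fraction. A one-line sketch of that computation (the function name is illustrative):

```c
#include <stdint.h>

/* Sketch: extract the LSR field from a 64-bit NTP timestamp as
 * received in an RTCP SR (RFC 3550, 6.4.1): the middle 32 bits. */
uint32_t lsr_from_ntp(uint64_t ntp_time)
{
    return (uint32_t)((ntp_time >> 16) & 0xFFFFFFFFu);
}
```

So as long as last_rtcp_ntp_time holds the timestamp from the latest SR, the resulting LSR cannot be "old".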
Luca