[FFmpeg-devel] PATCH: RTP/MJPEG low contrast image on low quality setting
Ico Doornekamp
libav at zevv.nl
Thu Mar 24 14:31:38 CET 2016
Original mail and my own follow-up on ffmpeg-user earlier today:
I have a device sending out a MJPEG/RTP stream on a low quality setting.
Decoding and displaying the video with libavformat results in a washed-out,
low-contrast, greyish image. Playing the same stream with VLC results in
proper color representation.
Screenshots for comparison:
http://zevv.nl/div/libav/shot-ffplay.jpg
http://zevv.nl/div/libav/shot-vlc.jpg
A pcap capture of a few seconds of video and an SDP file for playing the
stream are available at:
http://zevv.nl/div/libav/mjpeg.pcap
http://zevv.nl/div/libav/mjpeg.sdp
I believe the problem might be in the calculation of the quantization
tables in the function create_default_qtables(); the attached patch
solves the issue for me.
The problem is that the argument 'q' is of type uint8_t. According to RFC 2435,
for 1 <= Q <= 50 the scale factor 'S' should be 5000 / Q. Because
create_default_qtables() reuses the variable 'q' to store the result of this
calculation, for small values of q (q < 20, where 5000 / q exceeds 255) the
result no longer fits in the uint8_t, wraps around, and produces wrong values
in the calculated quantization tables. The patch below uses a new variable 'S'
(same name as in RFC 2435) with a wide enough range to store the result of the
division.
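
To make the failure mode concrete, here is a minimal standalone sketch (not
part of the patch; the value q = 10 is just an illustrative low-quality
setting, not necessarily what the device actually sends):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint8_t  q = 10;                  /* example Q value, for illustration only */
        int      factor = q;
        uint16_t S       = 5000 / factor; /* correct scale factor per RFC 2435: 500 */
        uint8_t  q_trunc = 5000 / factor; /* what the uint8_t ends up holding: 500 % 256 = 244 */

        printf("correct S = %u, truncated q = %u\n", (unsigned)S, (unsigned)q_trunc);
        return 0;
    }

The sender is expected to have quantized with tables derived from S = 500,
while the receiver dequantizes with tables derived from 244 (roughly half as
large), so the reconstructed coefficients come out scaled down, which would
match the washed-out ffplay screenshot above.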
--- a/libavformat/rtpdec_jpeg.c
+++ b/libavformat/rtpdec_jpeg.c
@@ -207,16 +207,17 @@ static void create_default_qtables(uint8_t *qtables, uint8_t q)
 {
     int factor = q;
     int i;
+    uint16_t S;
 
     factor = av_clip(q, 1, 99);
 
     if (q < 50)
-        q = 5000 / factor;
+        S = 5000 / factor;
     else
-        q = 200 - factor * 2;
+        S = 200 - factor * 2;
 
     for (i = 0; i < 128; i++) {
-        int val = (default_quantizers[i] * q + 50) / 100;
+        int val = (default_quantizers[i] * S + 50) / 100;
 
         /* Limit the quantizers to 1 <= q <= 255. */
         val = av_clip(val, 1, 255);