[FFmpeg-devel] Creating an AVFrame from a QVideoFrame
Stefano Sabatini
stefasab at gmail.com
Sun Jun 8 12:45:29 CEST 2014
On date Saturday 2014-06-07 07:52:12 -0600, Mike Nelson encoded:
> I'm struggling with converting a QVideoFrame (Qt) into an AVFrame in order
> to encode video coming from a webcam.
>
> Documentation for QVideoFrame:
> http://qt-project.org/doc/qt-5/qvideoframe.html
>
> I understand the basics of image formats, stride and such but I'm missing
> something. This is what I've got so far, adapted from a libav example (
> https://libav.org/doxygen/release/0.8/libavformat_2output-example_8c-example.html
> ):
>
>
> static void fill_yuv_image(const QVideoFrame &frame, AVFrame *pict,
>                            int frame_index, int width, int height)
> {
>     pict->pts = frame.startTime() * 1000000.0; // time_base is 1/1000000
>     pict->width = frame.width();
>     pict->height = frame.height();
>     pict->format = STREAM_PIX_FMT; //todo: get this from the frame
>     pict->data[0] = (uint8_t*)frame.bits();
>     pict->linesize[0] = frame.bytesPerLine();
> }
Here you're setting only the Y plane of the AVFrame. You also need to set
the U and V planes (and the corresponding linesizes). Also, I don't know
what the image layout (AKA pixel format) of the QVideoFrame is.
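As an illustration only, here is a minimal, untested sketch of how the three
planes could be filled, assuming the QVideoFrame has already been mapped, is
in QVideoFrame::Format_YUV420P, and stores its Y, U and V planes contiguously
in the mapped buffer (which depends on the Qt backend). The simplified
fill_yuv_image() signature here is hypothetical, not the one from the example
you adapted:

    #include <QVideoFrame>
    extern "C" {
    #include <libavutil/frame.h>
    #include <libavutil/pixfmt.h>
    }

    /* Hypothetical helper: fill all three planes of a YUV420P AVFrame
     * from a mapped QVideoFrame. Assumes the Qt buffer stores the Y, U
     * and V planes contiguously, which may not hold for every backend. */
    static void fill_yuv_image(const QVideoFrame &frame, AVFrame *pict)
    {
        const int height = frame.height();

        pict->width  = frame.width();
        pict->height = height;
        pict->format = AV_PIX_FMT_YUV420P;
        /* QVideoFrame::startTime() already reports microseconds, so with
         * a 1/1000000 time_base no extra scaling should be needed. */
        pict->pts    = frame.startTime();

        uint8_t *buf           = (uint8_t *)frame.bits();
        int      luma_stride   = frame.bytesPerLine();
        int      chroma_stride = luma_stride / 2;

        /* Y plane */
        pict->data[0]     = buf;
        pict->linesize[0] = luma_stride;

        /* U plane starts after the full-resolution Y plane */
        pict->data[1]     = buf + luma_stride * height;
        pict->linesize[1] = chroma_stride;

        /* V plane starts after the quarter-resolution U plane */
        pict->data[2]     = pict->data[1] + chroma_stride * (height / 2);
        pict->linesize[2] = chroma_stride;
    }

Before encoding you would still want to check frame.pixelFormat() against the
format you expect, and convert with libswscale if it does not match.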
Note: this question really belongs on libav-user; ffmpeg-devel is for
FFmpeg development, so please post any follow-up on libav-user.
--
FFmpeg = Fostering & Fancy MultiPurpose Erroneous Genius