[FFmpeg-devel] Added HW H.264 and HEVC encoding for AMD GPUs based on AMF SDK
Mark Thompson
sw at jkqxz.net
Tue Nov 14 16:14:19 EET 2017
On 13/11/17 23:00, Mironov, Mikhail wrote:
>>> + res = ctx->factory->pVtbl->CreateContext(ctx->factory, &ctx->context);
>>> + AMF_RETURN_IF_FALSE(ctx, res == AMF_OK, AVERROR_UNKNOWN, "CreateContext() failed with error %d\n", res);
>>> + // try to reuse existing DX device
>>> + if (avctx->hw_frames_ctx) {
>>> +     AVHWFramesContext *device_ctx = (AVHWFramesContext*)avctx->hw_frames_ctx->data;
>>> +     if (device_ctx->device_ctx->type == AV_HWDEVICE_TYPE_D3D11VA){
>>> +         if (amf_av_to_amf_format(device_ctx->sw_format) == AMF_SURFACE_UNKNOWN) {
>>
>> This test is inverted.
>>
>> Have you actually tested this path? Even with that test fixed, I'm unable to
>> pass the following initialisation test with an AMD D3D11 device.
>>
>
> Yes, the condition should be inverted. To test I had to add
> "-hwaccel d3d11va -hwaccel_output_format d3d11" to the command line.
Yeah. I get:
$ ./ffmpeg_g -y -hwaccel d3d11va -hwaccel_device 0 -hwaccel_output_format d3d11 -i ~/bbb_1080_264.mp4 -an -c:v h264_amf out.mp4
...
[AVHWDeviceContext @ 000000000270e120] Created on device 1002:665f (AMD Radeon (TM) R7 360 Series).
...
[h264_amf @ 00000000004dcd80] amf_shared: avctx->hw_frames_ctx has non-AMD device, switching to default
It's then comedically slow in this state (about 2fps), but works fine when the decode is in software.
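(For reference, a minimal sketch of the corrected reuse check. The amf_av_to_amf_format() helper is the patch's own; the structure and variable names here are an illustration, not the patch's actual code.)

    // Reuse the existing D3D11 device only when the incoming sw_format
    // maps to a known AMF surface format; otherwise fall back to the
    // default internally-created device.
    if (avctx->hw_frames_ctx) {
        AVHWFramesContext *frames_ctx = (AVHWFramesContext*)avctx->hw_frames_ctx->data;
        if (frames_ctx->device_ctx->type == AV_HWDEVICE_TYPE_D3D11VA &&
            amf_av_to_amf_format(frames_ctx->sw_format) != AMF_SURFACE_UNKNOWN) {
            // hand frames_ctx->device_ctx->hwctx to AMF here
        } else {
            // create the default device instead
        }
    }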
>>> +
>>> + // Dynamic
>>> + /// Rate Control Method
>>> + { "rc", "Rate Control Method",
>> OFFSET(rate_control_mode), AV_OPT_TYPE_INT, { .i64 =
>> AMF_VIDEO_ENCODER_RATE_CONTROL_METHOD_PEAK_CONSTRAINED_VB
>> R }, AMF_VIDEO_ENCODER_RATE_CONTROL_METHOD_CONSTANT_QP,
>> AMF_VIDEO_ENCODER_RATE_CONTROL_METHOD_LATENCY_CONSTRAINED
>> _VBR, VE, "rc" },
>>> + { "cqp", "Constant Quantization Parameter", 0,
>> AV_OPT_TYPE_CONST, { .i64 =
>> AMF_VIDEO_ENCODER_RATE_CONTROL_METHOD_CONSTANT_QP },
>> 0, 0, VE, "rc" },
>>> + { "cbr", "Constant Bitrate", 0,
>> AV_OPT_TYPE_CONST, { .i64 =
>> AMF_VIDEO_ENCODER_RATE_CONTROL_METHOD_CBR }, 0, 0,
>> VE, "rc" },
>>> + { "vbr_peak", "Peak Contrained Variable Bitrate", 0,
>> AV_OPT_TYPE_CONST, { .i64 =
>> AMF_VIDEO_ENCODER_RATE_CONTROL_METHOD_PEAK_CONSTRAINED_VB
>> R }, 0, 0, VE, "rc" },
>>> + { "vbr_latency", "Latency Constrained Variable Bitrate", 0,
>> AV_OPT_TYPE_CONST, { .i64 =
>> AMF_VIDEO_ENCODER_RATE_CONTROL_METHOD_LATENCY_CONSTRAINED
>> _VBR }, 0, 0, VE, "rc" },
>>
>> I think the default for this option needs to be decided dynamically. Just
>> setting "-b:v" is a not-unreasonable thing to do, and currently the choice of
>> PEAK_CONSTRAINED_VBR makes it then complain that maxrate isn't set.
>> Similarly, if the only setting is some constant-quality option
>> (-q/-global_quality, or your private ones below), it ignores that and uses
>> the default 2Mbps instead.
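
A hedged sketch of the kind of dynamic default I mean (the -1 "auto" sentinel and the field names are assumptions for illustration, not the patch's code):

    // If the user left "rc" unset, infer a mode: explicit QP options
    // imply constant-QP, a maxrate implies peak-constrained VBR, and a
    // bare -b:v falls back to plain CBR. -q/-global_quality could be
    // folded into the first branch as well.
    if (ctx->rate_control_mode == -1) {
        if (ctx->qp_i != -1 || ctx->qp_p != -1 || ctx->qp_b != -1)
            ctx->rate_control_mode = AMF_VIDEO_ENCODER_RATE_CONTROL_METHOD_CONSTANT_QP;
        else if (avctx->rc_max_rate > 0)
            ctx->rate_control_mode = AMF_VIDEO_ENCODER_RATE_CONTROL_METHOD_PEAK_CONSTRAINED_VBR;
        else
            ctx->rate_control_mode = AMF_VIDEO_ENCODER_RATE_CONTROL_METHOD_CBR;
    }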
>>
>>> + /// Enforce HRD, Filler Data, VBAQ, Frame Skipping
>>> + { "enforce_hrd", "Enforce HRD", OFFSET(enforce_hrd),
>> AV_OPT_TYPE_BOOL, { .i64 = 0 }, 0, 1, VE },
>>
>> Does this option work? I don't seem to be able to push it into generating
>> HRD information with any combination of options.
>>
>
> Fixed.
What combination of options are needed to get the HRD parameters in the output stream? I still don't see them with the new version.
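
(For reference, the property I'd expect the fix to be setting, going by the public AMF SDK headers; a sketch under that assumption, not the patch's actual code:)

    // Enable HRD enforcement on the encoder component.
    // AMF_VIDEO_ENCODER_ENFORCE_HRD is declared in the SDK's
    // components/VideoEncoderVCE.h, AMF_ASSIGN_PROPERTY_BOOL in
    // core/PropertyStorage.h.
    AMF_RESULT res = AMF_OK;
    AMF_ASSIGN_PROPERTY_BOOL(res, ctx->encoder, AMF_VIDEO_ENCODER_ENFORCE_HRD, ctx->enforce_hrd != 0);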
>>> + { "filler_data", "Filler Data Enable", OFFSET(filler_data),
>> AV_OPT_TYPE_BOOL, { .i64 = 0 }, 0, 1, VE },
>>> + { "vbaq", "Enable VBAQ", OFFSET(enable_vbaq),
>> AV_OPT_TYPE_BOOL, { .i64 = 0 }, 0, 1, VE },
>>> + { "frame_skipping", "Rate Control Based Frame Skip",
>> OFFSET(skip_frame), AV_OPT_TYPE_BOOL, { .i64 = 0 }, 0, 1, VE },
>>> +
>>> + /// QP Values
>>> + { "qp_i", "Quantization Parameter for I-Frame", OFFSET(qp_i),
>> AV_OPT_TYPE_INT, { .i64 = -1 }, -1, 51, VE },
>>> + { "qp_p", "Quantization Parameter for P-Frame", OFFSET(qp_p),
>> AV_OPT_TYPE_INT, { .i64 = -1 }, -1, 51, VE },
>>> + { "qp_b", "Quantization Parameter for B-Frame", OFFSET(qp_b),
>> AV_OPT_TYPE_INT, { .i64 = -1 }, -1, 51, VE },
>>> +
>>> + /// Pre-Pass, Pre-Analysis, Two-Pass
>>> + { "preanalysis", "Pre-Analysis Mode", OFFSET(preanalysis),
>> AV_OPT_TYPE_BOOL,{ .i64 = 0 }, 0, 1, VE, NULL },
>>> +
>>> + /// Maximum Access Unit Size
>>> + { "max_au_size", "Maximum Access Unit Size for rate control (in bits)",
>> OFFSET(max_au_size), AV_OPT_TYPE_INT, { .i64 = 0 }, 0, INT_MAX, VE },
>>
>> Can you explain more about what this option does? I don't seem to be able
>> to get it to do anything - e.g. setting -max_au_size 80000 with 30fps CBR 1M
>> (which should be easily achievable) still makes packets of more than 80000
>> bits.
>>
>
> It means the maximum frame size in bits, and it should be used together
> with enforce_hrd enabled. I tested it; it works after the related fix for
> enforce_hrd. I added dependency handling.
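(A sketch of the dependency handling described, with assumed field names; warn-and-enable is one reasonable policy:)

    // max_au_size is only honoured while HRD enforcement is active, so
    // turn enforce_hrd on (with a warning) if the user set one without
    // the other.
    if (ctx->max_au_size > 0 && !ctx->enforce_hrd) {
        av_log(avctx, AV_LOG_WARNING,
               "max_au_size requires enforce_hrd; enabling enforce_hrd.\n");
        ctx->enforce_hrd = 1;
    }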
$ ./ffmpeg_g -y -nostats -i ~/bbb_1080_264.mp4 -an -c:v h264_amf -bsf:v trace_headers -frames:v 1000 -enforce_hrd 1 -b:v 1M -maxrate 1M -max_au_size 80000 out.mp4 2>&1 | grep 'Packet: [0-9]\{5\}'
[AVBSFContext @ 00000000029d7f40] Packet: 11426 bytes, key frame, pts 128000, dts 128000.
[AVBSFContext @ 00000000029d7f40] Packet: 17623 bytes, key frame, pts 192000, dts 192000.
[AVBSFContext @ 00000000029d7f40] Packet: 23358 bytes, pts 249856, dts 249856.
(That is, packets bigger than the supposed 80000-bit maximum.) Expected?
>>
>> And some thoughts on the stream it makes:
>>
>> "ffmpeg_g -report -y -f lavfi -i testsrc -an -c:v h264_amf -bsf:v trace_headers -
>> frames:v 1000 out.mp4"
>>
>> [AVBSFContext @ 000000000049b9c0] Sequence Parameter Set
>> [AVBSFContext @ 000000000049b9c0] 40          max_num_ref_frames        00101 = 4
>> [AVBSFContext @ 000000000049b9c0] 206         max_dec_frame_buffering   00101 = 4
>>
>> Where did 4 come from? It never uses more than 1 reference in the stream.
>
> According to the codec guys this field is filled in by the HW and represents
> how many frames can be stored in the DPB. But in reality the HW encoder
> will reference only one frame at a time.
Why set it to 4, then? A conforming decoder may hold up to max_dec_frame_buffering frames before outputting anything, so that just creates needless incompatibility and latency.
Thanks,
- Mark