[Ffmpeg-devel-irc] ffmpeg-devel.log.20190924
burek
burek at teamnet.rs
Wed Sep 25 03:05:09 EEST 2019
[00:23:17 CEST] <rcombs> BtbN: GPU arrived, thanks
[00:23:46 CEST] <BtbN> oh nice, that went faster than I expected.
[00:26:20 CEST] <rcombs> think this is just barely gonna fit in my case
[00:26:33 CEST] <rcombs> eyeballing it, looks like it'll have maybe a mm or two of clearance
[01:54:33 CEST] <Lynne> no way
[01:54:50 CEST] <Lynne> people are using the gain field in the opus extradata for replaygain
[02:00:19 CEST] <Lynne> I'm okay with it, it's actually smart to let decoders do scaling like that since it's free
[02:01:00 CEST] <Lynne> just didn't expect it to be used at all
[02:37:28 CEST] <kepstin> huh, I haven't actually seen any tools write that field, do you have examples in the wild?
[02:39:39 CEST] <Lynne> https://github.com/desbma/r128gain/blob/master/r128gain/opusgain.py
[02:43:40 CEST] <Lynne> I think they should have specified a different base, pow(10, 0xffff/(20.0*256)) = 6306736518360.512
[02:44:25 CEST] <Lynne> with base 2, pow(2, 0xffff/(20.0*256)) = 7130.584808402239, more reasonable amplification
[02:47:13 CEST] <Lynne> and maybe made anything above 0x7fff inverse to reduce the volume
[10:27:32 CEST] <JEEB> ok, so in a case where a stream gets removed from a program in MPEG-TS, should:
[10:27:40 CEST] <JEEB> 1. framework send End-of-Stream
[10:28:16 CEST] <JEEB> 2. API client like ffmpeg.c catch the PMT update (program updates are communicated to the API client), and send an End-of-Stream of its own to whatever it is doing
[10:31:31 CEST] <JEEB> I'm just seeing cases of a client like ffmpeg.c buffering its input because it thinks it will still be getting audio from a stream that has since disappeared from the program
[10:31:46 CEST] <JEEB> so I'm wondering if this should be fixed on either the framework, or the client side
[11:19:59 CEST] <JEEB> also this reminds me
[11:20:05 CEST] <JEEB> what was the lavf way of saying end-of-stream?
[11:21:02 CEST] <JEEB> avformat.h doesn't seem to document it
[11:29:08 CEST] <cone-244> ffmpeg 03Paul B Mahol 07master:a214c17414bd: avfilter/vf_v360: do not use mod where it is not needed
[11:47:01 CEST] <durandal_1707> Lynne: can I pass runtime parameters to vulkan shaders, for example to change view coordinates without needing to reinit the whole filter?
[12:04:13 CEST] <cone-244> ffmpeg 03Timo Rothenpieler 07master:89cbbe9f70fc: avcodec/nvenc: fix typo in new Windows driver version
[12:05:40 CEST] <cone-244> ffmpeg 03Timo Rothenpieler 07release/4.2:0eb1088960a2: avcodec/nvenc: add driver version info for SDK 9.1
[12:08:53 CEST] <cone-244> ffmpeg 03Timo Rothenpieler 07release/4.1:fe1064f77954: avcodec/nvenc: add driver version info for latest SDKs
[12:11:07 CEST] <cone-244> ffmpeg 03Timo Rothenpieler 07release/4.0:25d1d5929fdb: avcodec/nvenc: add driver version info for latest SDKs
[12:26:16 CEST] <Lynne> durandal_1707: how big is the data do you want to transfer and do you need to download it later?
[12:27:02 CEST] <Lynne> if its small and only needs to be read by the GPU, use pushconstants, like chromaber
[12:27:29 CEST] <Lynne> otherwise look at how overlay creates a buffer on init
[12:30:59 CEST] <durandal_1707> Lynne: it is about changing roll/yaw/pitch and maybe fov dynamically
[12:31:42 CEST] <Lynne> yeah, pushconstants are perfect for this
[12:57:51 CEST] <durandal_1707> Lynne: what does the number mean in GLSLC(1, ... ?
[13:01:20 CEST] <Lynne> indentation
[13:52:18 CEST] <Lynne> can bitstream filters modify ctx->par_in->extradata in the filter or init functions or do they have to create a new AV_PKT_DATA_NEW_EXTRADATA side data?
[13:53:00 CEST] <cone-244> ffmpeg 03Michael Niedermayer 07master:74bbf9bc8279: avcodec/adpcm: Check number of channels for MTAF
[13:53:01 CEST] <cone-244> ffmpeg 03Michael Niedermayer 07master:c7c0229bebbb: avcodec/truespeech: Eliminate some left shifts
[13:53:02 CEST] <cone-244> ffmpeg 03Michael Niedermayer 07master:8c7d5fcfc32d: avcodec/dxv: Check op_offset in both directions
[14:05:06 CEST] <mkver> Lynne: Why would you want to do that? Isn't modifying ctx->par_out->extradata enough?
[14:09:22 CEST] <Lynne> is ctx->par_out->extradata initialized to be equal to ctx->par_in->extradata? if so that would work
[14:10:30 CEST] <mkver> Yes, it is.
[15:46:29 CEST] <Lynne> so a few new things I've learned about opus: the gain is in fact signed, not unsigned, which is good, so base 10 makes sense
[15:46:49 CEST] <Lynne> and our downmix handling has always been broken
[15:47:07 CEST] <Lynne> but so is libopus' downmix handling so it's fine!
[15:48:53 CEST] <VenomFK> Hi, I am trying to build ffmpeg on an AWS EC2 instance, and I am facing the following error
[15:48:54 CEST] <VenomFK> undefined reference to `ff_aac_at_encoder'
[15:49:35 CEST] <VenomFK> ./configure --extra-cflags="-I$HOME/ffmpeg_build/include" --extra-ldflags="-L$HOME/ffmpeg_build/lib" --extra-libs=-ldl --enable-version3 --enable-libx264 --enable-libmp3lame --enable-libtheora --enable-nonfree --enable-gpl --enable-postproc --enable-libfdk-aac --disable-optimizations --disable-x86asm
[15:50:35 CEST] <JEEB> usage discussion to the users' channel
[15:50:45 CEST] <JEEB> -> #ffmpeg
[15:50:52 CEST] <VenomFK> thanks
[16:00:35 CEST] <durandal_1707> VenomFK: why do you disable x86 asm?
[16:07:51 CEST] <VenomFK> durandal_1707: it's not that important for me now
[16:11:18 CEST] <Lynne> durandal_1707: postproc is explicitly enabled and you ask that?
[16:13:48 CEST] <BBB> sounds like it's for the android sdk
[16:14:07 CEST] <BBB> although no --enable-pic
[16:14:10 CEST] <BBB> hm...
[16:15:34 CEST] <jamrial> aac_at is audiotoolbox, macos stuff. sounds like a dirty build folder
[16:38:50 CEST] <durandal_1707> Lynne: but chromaber_vulkan pushes constants during initialization and not during execution
[16:39:56 CEST] <Lynne> durandal_1707: it calls ff_vk_update_push_exec on every frame to update it
[16:40:23 CEST] <durandal_1707> ah
[17:19:22 CEST] <durandal_1707> Lynne: shouldn't your vulkan filters crash with gray pix fmt inputs?
[17:21:34 CEST] <Lynne> no, it still works
[17:22:27 CEST] <Lynne> you can always sample .rgba, you'll just get 0 in the .gba channels
[17:25:52 CEST] <cone-244> ffmpeg 03Andreas Rheinhardt 07master:f83ac5fd793f: avcodec/cbs_h264: Automatically free SEI payload on error
[17:27:34 CEST] <Lynne> durandal_1707: btw if you find a good glsl language reference tell me
[17:27:51 CEST] <Lynne> because I wasn't able to find one so all I know is from other code I've seen
[17:32:39 CEST] <Lynne> btw since you're doing 360 video to "normal", here's something you should know
[17:33:08 CEST] <Lynne> you need to work backwards, that is, determine which pixel of the input image maps to the output image
[17:34:48 CEST] <Lynne> in each main() invocation, gl_GlobalInvocationID.xy tell you which output pixel you'll need to fill
[17:35:38 CEST] <durandal_1707> Lynne: v360 already works like that
[17:38:39 CEST] <Lynne> cool
[17:55:38 CEST] <cone-244> ffmpeg 03Michael Niedermayer 07master:5fe6a9db1539: tools/target_dec_fuzzer: Adjust threshold for MSS2
[17:55:39 CEST] <cone-244> ffmpeg 03Michael Niedermayer 07master:d217691eec56: libavcodec/mpeg12dec: Check input for minimal frame size
[17:55:39 CEST] <cone-244> ffmpeg 03Michael Niedermayer 07master:59163731e992: tools/target_dec_fuzzer: consider potential padding/edge in pixel threshold
[17:55:41 CEST] <cone-244> ffmpeg 03Michael Niedermayer 07master:cede385018f5: avcodec/aacdec_fixed: Add FF_CODEC_CAP_INIT_CLEANUP
[17:55:42 CEST] <cone-244> ffmpeg 03Michael Niedermayer 07master:8e51f35f81c2: avformat/vividas: Check n_sb_blocks against input space
[17:55:43 CEST] <cone-244> ffmpeg 03Michael Niedermayer 07master:27a2f6594810: avformat/vividas: Test size and packet numbers a bit more
[17:55:43 CEST] <cone-244> ffmpeg 03Michael Niedermayer 07master:c7ccbf40edb8: avcodec/ffwavesynth: Fix integer overflow in timestamps
[17:55:45 CEST] <cone-244> ffmpeg 03Michael Niedermayer 07master:72db18e929cf: avformat/utils: Do not assume duration is non negative in compute_pkt_fields()
[17:55:46 CEST] <cone-244> ffmpeg 03Michael Niedermayer 07master:0831cbfe0991: avcodec/alac: fix undefined behavior with INT_MIN in lpc_prediction()
[17:55:47 CEST] <cone-244> ffmpeg 03Michael Niedermayer 07master:b30c07cc2b9e: avcodec/alac: Fix invalid shifts in 20/24 bps
[17:55:48 CEST] <cone-244> ffmpeg 03Michael Niedermayer 07master:033d2c4884ec: avcodec/smacker: Fix integer overflow in signed int multiply in SMK_BLK_FILL
[17:55:49 CEST] <cone-244> ffmpeg 03Michael Niedermayer 07master:340ab13504dd: avcodec/utils: Use av_memcpy_backptr() in ff_color_frame()
[17:55:50 CEST] <cone-244> ffmpeg 03Michael Niedermayer 07master:1e984a69155c: avcodec/h264_slice: clear frame only on gaps when it is not otherwise initilaized
[17:55:51 CEST] <cone-244> ffmpeg 03Michael Niedermayer 07master:3dce4d03d5a5: avcodec/aacdec: Check if we run out of input in read_stream_mux_config()
[17:55:52 CEST] <cone-244> ffmpeg 03Michael Niedermayer 07master:95e5396919b1: avcodec/utils: Optimize ff_color_frame() using memcpy()
[17:56:51 CEST] <durandal_1707> Lynne: I can't use function pointers in glslang?
[18:10:27 CEST] <Lynne> durandal_1707: I don't think so, AFAIK pointers are somewhat restricted, but you shouldn't need to
[18:11:08 CEST] <Lynne> after all you can rewrite the shader on init, so you can directly substitute the function you need
[18:38:06 CEST] <durandal_1707> Lynne: I get a segv when the shader fails to compile
[18:48:43 CEST] <durandal_1707> Lynne: I cannot get the size of the input image
[18:54:27 CEST] <Lynne> can you link the filter somewhere?
[18:55:02 CEST] <Lynne> imageSize only works for non-sampled images, so the output planes
[19:04:43 CEST] <durandal_1707> Lynne: how to use M_PI in shader?
[19:11:58 CEST] <durandal_1707> i got basic working now sort of
[19:21:29 CEST] <Lynne> durandal_1707: GLSLF(0, #define M_PI (%f) ,M_PI);
[19:21:36 CEST] <durandal_1707> and i got same performance as v360 with x86 asm and multithreading
[19:22:00 CEST] <durandal_1707> except this one uses 100% cpu instead of almost 400% cpu
[19:22:25 CEST] <Lynne> that's good for a starter, I'm sure performance can be improved with maybe some caching
[19:25:15 CEST] <Lynne> are you using an intel gpu? how much cpu is used if you hwmap from vaapi instead of hwupload?
[19:26:27 CEST] <durandal_1707> Lynne: this is on Intel(R) HD Graphics 5500 (Broadwell GT2)
[19:27:56 CEST] <Lynne> you should be able to put -vaapi_device /dev/dri/renderD128 -hwaccel vaapi -hwaccel_output_format vaapi before the input file and change the hwupload to hwmap
[19:32:24 CEST] <durandal_1707> https://github.com/richardpl/FFmpeg/tree/vulkan
[19:36:56 CEST] <Lynne> that's it? I thought it would be more complex
[19:37:29 CEST] <durandal_1707> Lynne: that is just equirectangular to flat, real v360 have more conversions
[19:37:45 CEST] <durandal_1707> this is also without rotation of vector v
[19:37:55 CEST] <durandal_1707> no yaw/pitch/roll
[19:39:32 CEST] <durandal_1707> also i think this one does not handle wrap around for bilinear
[19:40:06 CEST] <Lynne> accessing outside of input image's bounds?
[19:40:50 CEST] <durandal_1707> Lynne: no, wrapping x in width when interpolating back to 0
[19:41:26 CEST] <durandal_1707> once i implement yaw/pitch/roll will inspect back view
[19:44:58 CEST] <durandal_1707> Lynne: what GPU do you have and what speed gain do you get?
[19:49:26 CEST] <durandal_1707> shit, 820M nvidia card is not for gaming, so no vulkan with it even if it worked by miracle
[20:22:23 CEST] <Lynne> no idea, haven't tested yet
[20:22:53 CEST] <Lynne> but do keep in mind video players will be able to skip downloading and uploading, so it's really only a fair comparison if you hwmap and don't hwdownload
[20:23:30 CEST] <Lynne> well, maybe not, if hardware decoders can't decode large resolution 360 video
[20:28:26 CEST] <durandal_1707> Lynne: please try it
[20:28:35 CEST] <durandal_1707> and report speed
[20:51:18 CEST] <Lynne> k, will do in a few hours, can you link a sample?
[21:26:01 CEST] <durandal_1707> Lynne: sample is irrelevant, just use same resolution when comparing with v360=e:flat:1
[21:59:13 CEST] <durandal_1707> heh, if I use the rgba pixel format then v360_vulkan is almost 2x faster than v360
[22:34:07 CEST] <durandal_1707> Lynne: i guess i can cache results into separate image
[22:36:26 CEST] <durandal_1707> Lynne: but how to create read+write cache image ?
[22:36:54 CEST] <Lynne> durandal_1707: avgblur_vulkan does that
[22:37:14 CEST] <durandal_1707> no, avgblur does not do it
[22:37:56 CEST] <durandal_1707> i need to cache all vec2 values for all planes
[22:38:18 CEST] <Lynne> it creates a temporary image into which the horizontal blur pass is stored, then runs the vertical blur pass on it
[22:38:40 CEST] <Lynne> tmp = ff_get_video_buffer(outlink, outlink->w, outlink->h);
[22:43:39 CEST] <durandal_1707> Lynne: and it holds values in floats?
[22:47:18 CEST] <durandal_1707> and it seems planar formats with alpha are not supported at all?
[22:52:26 CEST] <Lynne> durandal_1707: it holds values in whatever the pixel format is
[22:53:00 CEST] <Lynne> I also wanted to create a different pixel format image for another filter but this can't be done through lavfi, you need to do it through the avhwframescontext
[22:53:01 CEST] <durandal_1707> Lynne: but i need float values, because that allows interpolation
[22:54:21 CEST] <durandal_1707> so there is no way to allocate 2 float arrays[W*H]
[23:02:57 CEST] <Lynne> you can, just use a buffer, like overlay_vulkan does
[23:03:25 CEST] <Lynne> wait, you need to interpolate, that won't work
[23:04:02 CEST] <Lynne> I guess I'll write a function which creates a custom avhwframescontext for whatever sw_format you need
[23:04:10 CEST] <Lynne> but I'll do it tomorrow
[23:07:36 CEST] <durandal_1707> i need to keep those 2 arrays
[23:07:52 CEST] <durandal_1707> and redo it only if parameters change
[00:00:00 CEST] --- Wed Sep 25 2019
More information about the Ffmpeg-devel-irc mailing list