[Ffmpeg-devel-irc] ffmpeg.log.20180828

burek burek021 at gmail.com
Wed Aug 29 03:05:01 EEST 2018


[00:01:27 CEST] <LigH> Well, maybe I found the position.
[00:02:37 CEST] <LigH> I just need someone who understands shell scripts to confirm it
[00:02:49 CEST] <LigH> Or blindly test and hope
[00:22:08 CEST] <LigH> OK, I got lensfun built in media-autobuild suite. Waiting for ffmpeg to link it too...
[00:22:13 CEST] <LigH> Bye for now.
[00:31:58 CEST] <pi-> ffmpeg seems to be refusing to process certain MP3 files
[00:32:16 CEST] <pi-> I'm getting "[mp3 @ 0x55eb48657e40] Format mp3 detected only with low score of 1, misdetection possible!"
[00:32:24 CEST] <pi-> Then "[mp3 @ 0x55eb48657e40] Failed to read frame size: Could not seek to 1327."
[00:32:38 CEST] <pi-> Is there any sensible next step?
[00:32:48 CEST] <pi-> Or have I reached a dead-end?
[00:33:01 CEST] <DHE> you sure the file isn't corrupted?
[00:36:00 CEST] <LigH> Might it have a large ID3 tag (e.g. with lyrics or cover art)?
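A hedged aside: if a very large ID3v2 tag really is pushing the first MPEG frame past the probe window, one thing to try from the API side is enlarging the probing options before opening the file. A minimal sketch, assuming the file is opened through libavformat; the values are arbitrary examples and this will not help if the file is actually truncated or unseekable:

    #include <libavformat/avformat.h>

    /* Sketch: enlarge the probe window so format detection can look past a
     * large ID3v2 tag before giving up on the mp3 demuxer. */
    static int open_mp3_with_bigger_probe(AVFormatContext **ctx, const char *path)
    {
        AVDictionary *opts = NULL;
        av_dict_set(&opts, "probesize",       "10000000", 0); /* bytes to probe */
        av_dict_set(&opts, "analyzeduration", "10000000", 0); /* microseconds */

        int ret = avformat_open_input(ctx, path, NULL, &opts);
        av_dict_free(&opts);                 /* also frees entries nobody consumed */
        if (ret < 0)
            return ret;
        return avformat_find_stream_info(*ctx, NULL);
    }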
[00:38:30 CEST] <LigH> My test failed in ffmpeg's configure step: "lensfun not found using pkg-config"
[00:38:41 CEST] <LigH> I'm out.
[00:38:45 CEST] <LigH> GN8
[04:05:44 CEST] <hojuruku> I'm not having much luck with ffmpeg kmsgrab on amdgpu gcn 1.4 https://pastebin.com/pxjZVa6q
[04:07:20 CEST] <hojuruku> [pid 28577] ioctl(5, DRM_IOCTL_AMDGPU_GEM_CREATE or DRM_IOCTL_VIA_ALLOCMEM, 0x7ffe6e9d1ac0) = 0 [pid 28577] ioctl(5, _IOC(_IOC_READ|_IOC_WRITE, 0x64, 0x48, 0x28), 0x7ffe6e9d1b00) = 0 [pid 28577] --- SIGSEGV {si_signo=SIGSEGV, si_code=SEGV_MAPERR, si_addr=0x15c} --- system call trace
[05:16:53 CEST] <hojuruku> https://superuser.com/questions/908280/what-is-the-correct-way-to-fix-keyframes-in-ffmpeg-for-dash ok I got it working....
[05:17:08 CEST] <hojuruku> now to set the keyint with vaapi - what options does it support?
[06:04:41 CEST] <hojuruku> "Please check the video resolution. The current resolution is (65535x65535), which is not optimal." says youtube but ffmpeg reports 1080p
[06:57:57 CEST] <hojuruku> yeah, can't get around that youtube glitch. How do I KMS grab on the amd - hwdownload - then upload to the intel for encoding?
[06:59:29 CEST] <hojuruku> DRI_PRIME=1  ffmpeg -threads 4  -framerate 15 -device /dev/dri/card1 -thread_queue_size 1024 -f kmsgrab -i - -f lavfi -i anullsrc=channel_layout=stereo:r=44100 -ar 44100 -init_hw_device vaapi=amd:/dev/dri/renderD129 -filter_hw_device amd -filter:v hwmap,hwupload,format=nv12 -init_hw_device vaapi=intel:/dev/dri/renderD128 -filter_hw_device intel -filter:v hwdownload,format=nv12 -c:v h264_vaapi -g 30 -bf 0 -profile:v constrained_baseline -level:v 4
[06:59:29 CEST] <hojuruku> -coder:v cavlc -qp 23 -codec:a aac -b:a 128k  -f flv rtmp://a.rtmp.youtube.com/live2/
[07:09:43 CEST] <hojuruku>  ffmpeg -vaapi_device /dev/dri/renderD128 -hwaccel_output_format vaapi -framerate 30 -device /dev/dri/card1 -f x11grab -i :0.0 -f lavfi -i anullsrc=channel_layout=stereo:r=48000 -threads 4  -vf 'format=nv12,hwupload' -g 60 -c:v h264_vaapi -qp 24 -bf 0 -coder:v cavlc -profile:v constrained_baseline -level 4.1 -acodec aac -ar 48000 -b:a 128k -f flv rtmp://a.rtmp.youtube.com/live2/
[07:10:00 CEST] <hojuruku> that's what "works" with youtube, but the stream can't start because the screen dimensions are wrong
[07:10:33 CEST] <hojuruku> "Please check the video resolution. The current resolution is (65535x65535), which is not optimal."
[07:10:49 CEST] <hojuruku> (in VBR mode)
[07:12:01 CEST] <hojuruku>  DRI_PRIME=1 LIBVA_DRIVER_NAME=intel ffmpeg -threads 4  -framerate 15 -device /dev/dri/card1 -thread_queue_size 1024 -f kmsgrab -i - -f lavfi -i anullsrc=channel_layout=stereo:r=44100 -ar 44100 -init_hw_device intel=v:/dev/dri/renderD128 -filter_hw_device v -filter:v hwmap,scale_vaapi=format=nv12 -video_size 1920x1080 -c:v h264_vaapi -g 30  -bf 0 -profile:v constrained_baseline -level:v 4.1 -coder:v cavlc -qp 23 -codec:a aac -b:a 128k -aspect 16:
[07:12:01 CEST] <hojuruku> flv rtmp://a.rtmp.youtube.com/live2/
[07:12:17 CEST] <hojuruku> sorry, that's the "working" line, as in it sends data to youtube
[07:12:50 CEST] <hojuruku> (with -hw_device intel; it should be amd on both of them... you get the idea)
[11:02:27 CEST] <Yagiza> Is there any news about RTP/SDP implementation/usage?
[11:06:33 CEST] <Mavrik> What do you mean news? :)
[11:06:37 CEST] <Mavrik> It works?
[11:09:40 CEST] <Yagiza> I have some questions, and no one could help me a few weeks ago when I asked here.
[11:10:23 CEST] <Yagiza> So, maybe an RTP expert has appeared?
[11:11:22 CEST] <Yagiza> The question was about sdp_flags=custom_io option.
[11:14:36 CEST] <Yagiza> When I specify this option for the format via the dictionary passed to avformat_open_input(), it accepts the option, but nothing happens.
[11:15:32 CEST] <Yagiza> It still listens on the port specified in the SDP instead of reading from the I/O context used to provide the SDP.
[11:16:52 CEST] <Yagiza> But when I specify the option for ffplay.exe, it does play. So it seems the option works with ffplay.exe.
[11:17:22 CEST] <Yagiza> So, I'm looking for any working example of using the option.
[11:18:17 CEST] <Yagiza> Or any example of specifying the RTCP port when using the SDP demuxer.
[11:43:18 CEST] <Mavrik> I'd look at sdp demuxer and see if it's not hardcoded
[11:43:21 CEST] <Mavrik> (source that is)
[11:51:12 CEST] <Yagiza> Mavrik, I just wonder if SDP demuxer is in srtp.c/h or sdp.c/h.
[11:51:31 CEST] <Mavrik> Last time I worked with it, it was in sdp.c
[11:51:53 CEST] <Yagiza> Mavrik, it seems sdp.c/h contains only the SDP muxer, and the SDP demuxer is in rtsp.c/h. But I'm not sure.
[11:53:19 CEST] <Yagiza> Mavrik, I didn't find any SDP parser code in sdp.c. Only SDP builder.
[11:53:46 CEST] <Yagiza> Mavrik, but I see SDP parsing in rtsp.c.
[11:56:40 CEST] <Yagiza> Mavrik, is #ffmpeg-devel a better place to discuss such things?
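For reference, a minimal sketch of the sdp_flags=custom_io pattern discussed above, assuming that with custom_io both the SDP text and the subsequent RTP data are read from the caller-supplied AVIOContext; my_read and the buffer size are placeholders, and error handling is elided:

    #include <libavformat/avformat.h>
    #include <libavformat/avio.h>

    /* Hypothetical read callback: returns the SDP text first, then RTP packets. */
    extern int my_read(void *opaque, uint8_t *buf, int buf_size);

    static AVFormatContext *open_sdp_custom_io(void *opaque)
    {
        const int bufsize = 4096;
        unsigned char *buf = av_malloc(bufsize);
        AVIOContext *pb = avio_alloc_context(buf, bufsize, 0 /* read-only */,
                                             opaque, my_read, NULL, NULL);

        AVFormatContext *ctx = avformat_alloc_context();
        ctx->pb = pb;                               /* our custom I/O */

        AVDictionary *opts = NULL;
        av_dict_set(&opts, "sdp_flags", "custom_io", 0);

        const AVInputFormat *sdp = av_find_input_format("sdp");
        if (avformat_open_input(&ctx, NULL, sdp, &opts) < 0)
            ctx = NULL;                             /* cleanup of pb/buf elided */
        av_dict_free(&opts);
        return ctx;
    }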
[12:47:29 CEST] <barhom> Hello, trying Apple's mediastreamvalidator on some HLS created with ffmpeg. Scaling using "-2:240"
[12:47:46 CEST] <barhom> Video is getting scaled to 300x240 (reported in vlc), but mediastreamvalidator is still saying,
[12:47:55 CEST] <barhom> Error: Playlist video resolution doesn't match parsed video resolution
[12:47:55 CEST] <barhom> --> Detail:  Playlist: 300 x 240, Video: 426 x 240
[12:48:03 CEST] <barhom> Does this have anything to do with setsar or similar?
[13:37:19 CEST] <barhom> anything to do with setsar?
[15:08:45 CEST] <Yagiza> For options like this:
[15:08:56 CEST] <Yagiza> -err_detect        <flags>      .D.VA... set error detection flags (default 0)
[15:08:56 CEST] <Yagiza>      crccheck                     .D.VA... verify embedded CRCs
[15:08:56 CEST] <Yagiza>      bitstream                    .D.VA... detect bitstream specification deviations
[15:08:56 CEST] <Yagiza>      buffer                       .D.VA... detect improper bitstream length
[15:08:56 CEST] <Yagiza>      explode                      .D.VA... abort decoding on minor error detection
[15:08:56 CEST] <Yagiza>      ignore_err                   .D.VA... ignore errors
[15:08:56 CEST] <Yagiza>      careful                      .D.VA... consider things that violate the spec, are fast to check and have not been seen in the wild as errors
[15:08:57 CEST] <Yagiza>      compliant                    .D.VA... consider all spec non compliancies as errors
[15:08:57 CEST] <Yagiza>      aggressive
[15:10:21 CEST] Action: DHE abandons the flooding ship
[15:10:23 CEST] <Yagiza> Is it possible to specify more than one flag, or are they mutually exclusive?
[15:10:53 CEST] <durandal_1707> Yagiza: use + for flags options
[15:11:08 CEST] <Yagiza> durandal_1707, thanx
[15:11:19 CEST] <durandal_1707> -err_detect buffer+explode
[15:15:50 CEST] <Yagiza> durandal_1707, when specifying options with an AVDictionary, does the same rule apply?
[15:33:40 CEST] <durandal_1707> Yagiza: for every option of type <flags>
[15:37:03 CEST] <Yagiza> durandal_1707, thanx
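A small illustration of the same '+' syntax going through an AVDictionary when a decoder is opened programmatically; dec_ctx and decoder stand in for whatever context and codec the application already has:

    #include <libavcodec/avcodec.h>
    #include <libavutil/dict.h>

    /* Sketch: combine several <flags> values with '+', exactly like
     * "-err_detect buffer+explode" on the command line. */
    static int open_decoder_strict(AVCodecContext *dec_ctx, const AVCodec *decoder)
    {
        AVDictionary *opts = NULL;
        av_dict_set(&opts, "err_detect", "buffer+explode", 0);

        int ret = avcodec_open2(dec_ctx, decoder, &opts);
        av_dict_free(&opts);    /* entries the codec did not recognize remain here */
        return ret;
    }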
[15:38:46 CEST] <Yagiza> durandal_1707, and what about AV_OPT_TYPE_CONST?
[15:48:05 CEST] <durandal_1707> Yagiza: it is for use in C
[15:54:06 CEST] <Yagiza> durandal_1707, that's what I need
[15:54:45 CEST] <Yagiza> durandal_1707, I have this option:
[15:55:21 CEST] <Yagiza> static const AVOption sdp_options[] = {
[15:55:21 CEST] <Yagiza>     RTSP_FLAG_OPTS("sdp_flags", "SDP flags"),
[15:55:21 CEST] <Yagiza>     { "custom_io", "use custom I/O", 0, AV_OPT_TYPE_CONST, {.i64 = RTSP_FLAG_CUSTOM_IO}, 0, 0, DEC, "rtsp_flags" },
[15:55:21 CEST] <Yagiza> ...
[15:55:21 CEST] <Yagiza> };
[15:55:46 CEST] <Yagiza> durandal_1707, I try to set it with av_dict_set()
[15:57:23 CEST] <Yagiza> durandal_1707, then pass it to demuxer as "options" parameter of avformat_open_input().
[15:58:21 CEST] <Yagiza> durandal_1707, I specify "sdp_flags" as option name, and "custom_io" as string value.
[15:58:43 CEST] <Yagiza> durandal_1707, but it seems the option is ignored.
[15:59:24 CEST] <Yagiza> durandal_1707, but when I pass it to ffplay.exe as -sdp_flags custom_io, it works!
[15:59:53 CEST] <Yagiza> durandal_1707, so I'm trying to figure out what's wrong with my code.
[16:02:07 CEST] <durandal_1707> show me the code
[16:09:02 CEST] <Yagiza> durandal_1707, one moment, please. I need to check something.
[16:42:37 CEST] <Yagiza> durandal_1707, where can I paste my code?
[16:43:24 CEST] <tdr> Yagiza,  https://paste.pound-python.org
[16:44:56 CEST] <Yagiza> tdr, thanx
[16:45:26 CEST] <Yagiza> durandal_1707, oh! I found a bug in my code. It seems to be working right now.
[16:45:44 CEST] <Yagiza> durandal_1707, but I need to write more code to check it.
[17:02:18 CEST] <mixfix41> argh, on ARM, trying to play video with ffplay in full-screen mode, but it doesn't play very well compared to firefox on the same machine. But there was a change in xorg.conf and some libraries were added to get firefox playing that way
[17:09:11 CEST] <durandal_1707> mixfix41: use mpv
[17:26:43 CEST] <DHE> mixfix41: ffplay is more of a demo application or to do some filter testing. it's not intended as a fully featured consistent video player
[17:30:23 CEST] <salatfreak> Hey ho! I'd like to cut out a segment of a long h264 video but only reencode the stream before the first keyframe and after the last keyframe in the time range and copy the rest in between. What's the best way to go about that?
[17:38:04 CEST] <mixfix41> good to know
[17:46:19 CEST] <salatfreak> When using '-ss' and '-t' to cut out a segment from a video with the '-c copy' option, can I specify that I want the nearest keyframes *before* the start time and *after* the end time selected?
[17:46:45 CEST] <DHE> ending usually doesn't need to be on a keyframe
[17:59:17 CEST] <salatfreak> DHE: Then I'd only need to worry about the first keyframe to be before the specified start time. Is there a solution for that?
[17:59:44 CEST] <DHE> if you're looking to cut the video into keyframe segments, I suggest looking at the "segment" muxer
[18:07:32 CEST] <salatfreak> DHE: My goal is to cut out a segment at exact times without reencoding more than necessary.
[18:10:01 CEST] <hojuruku> RI_PRIME=1 LIBVA_DRIVER_NAME=radeonsi ffmpeg -threads 8  -framerate 24 -device /dev/dri/card1 -thread_queue_size 1024 -f kmsgrab -i - -f lavfi -i anullsrc=channel_layout=stereo:r=44100 -ar 44100 -init_hw_device vaapi=amd:/dev/dri/renderD129 -filter_hw_device amd -filter:v hwmap,scale_vaapi=format=nv12 -video_size 1920x1080 -c:v h264_vaapi -max_delay 500000 -g 48 -keyint_min 48 -force_key_frames "expr:gte(t,n_forced*3)"  -bf 0 -profile:v
[18:10:01 CEST] <hojuruku> constrained_baseline -level:v 4.1 -coder:v cavlc -sei timing -b:v 3M -maxrate 3M  -codec:a aac -b:a 128k  -f flv rtmp://a.rtmp.youtube.com/live2/
[18:11:07 CEST] <hojuruku> That's working, but youtube is not letting the stream go live because AMD is reporting the wrong screen size (65535x65535) to youtube - it works fine with intel. Is there some way to copy from amd's kmsgrab video memory to the intel's memory to use it as the encoder until they fix the bug?
[18:12:00 CEST] <hojuruku> also the timing information in the stream is usually wrong. mkvmerge manages to fix it up to work around this issue: https://bugs.freedesktop.org/show_bug.cgi?id=105277 Why I didn't update my ticket with the success is a long story I won't go into, but the amd driver puts out broken streams.
[18:12:44 CEST] <jkqxz> The AMD VAAPI implementation does not support out-of-band extradata.  Can you send TS or something where inband extradata is generally supported?
[18:14:42 CEST] <jkqxz> The copy from video memory should work automatically if you supply a different filter_hw_device to hwmap, assuming there isn't any funny tiling.  (IIRC from my testing in X AMD capture mapping to Intel worked, but Intel capture mapping to AMD doesn't because of tiling.  May be different with other things running, though.)
[18:15:26 CEST] <hojuruku> yeah, I don't mind using the intel for encoding. How do you tell / disable SEI so amd stops sending the wrong screen size etc. to youtube?
[18:15:43 CEST] <hojuruku>  "Please check the video resolution. The current resolution is (65535x65535), which is not optimal." that's the error i would get in live dashboard from youtube
[18:16:06 CEST] <jkqxz> I assume youtube is trying to read the nonexistent extradata and making something up to tell you.
[18:19:39 CEST] <hojuruku> jkqxz: I think we talked before about ffmpeg making corrupted mp4 files - it wasn't just the missing moov atom, it had timing issues; however, mkvmerge could fix the timing issues by running the video through it.
[18:20:31 CEST] <hojuruku> I set -sei to identification only, hoping that works better, but somehow the stream is still reporting the wrong size; the radeon vaapi needs lots of work
[18:21:12 CEST] <hojuruku> any help telling me how to hwmap from one device to another (is that possible? letting the intel hwmap the amd's ram?), or hwdownload from one gpu -> hwupload to the intel.
[19:04:14 CEST] <jkqxz> hojuruku:  In your command above, you can make the second device the Intel (presumably /dev/dri/renderD128).
[19:04:35 CEST] <jkqxz> The SEI stuff doesn't work at all in the AMD driver, it's completely ignored.
[19:05:28 CEST] <hojuruku> ah ok, so the amd driver is hardcoded to output a fake screen resolution and broken timestamp data that freaks out some players (like gstreamer and hardware-decoding TVs)
[19:06:37 CEST] <jkqxz> No, it just gives you nothing at all as extradata.  You only get the stream with inband parameter sets.
[19:06:41 CEST] <hojuruku> jkqxz: I've tried for 30 minutes to modify the above to take the kmsgrab from the amdgpu card1 and hwmap it to the intel encoder, or, if all else fails, hwdownload -> hwupload. The docs talk about having two vaapi devices working on different streams, but not talking to each other.
[19:08:20 CEST] <jkqxz> What goes wrong with 'ffmpeg ... -f kmsgrab -device /dev/dri/card0 -i - ... -init_hw_device vaapi=intel:/dev/dri/renderD128 -filter_hw_device intel -filter:v hwmap,scale_vaapi=format=nv12 ...'?
[21:56:50 CEST] <leandro_> hello
[21:57:13 CEST] <leandro_> how can I set the Referer, Origin and Cookie headers with ffmpeg?
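No answer followed in the log; one way to do it through the API is the http(s) protocol's "headers" and "cookies" options (the same option names can be passed as input options on the command line before -i). The URL and header values below are placeholders:

    #include <libavformat/avformat.h>

    /* Sketch: send Referer/Origin/Cookie when opening an http(s) input. */
    static int open_with_headers(AVFormatContext **ctx)
    {
        AVDictionary *opts = NULL;
        av_dict_set(&opts, "headers",
                    "Referer: https://example.com/page\r\n"
                    "Origin: https://example.com\r\n", 0);
        av_dict_set(&opts, "cookies",
                    "session=abc123; path=/; domain=example.com\n", 0);

        int ret = avformat_open_input(ctx, "https://example.com/stream.m3u8",
                                      NULL, &opts);
        av_dict_free(&opts);
        return ret;
    }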
[22:53:22 CEST] <hojuruku> jkqxz: sorry about before, I missed your reply; I got hit by a kernel panic (no swap enabled since my ssd died, but zswap enabled in the kernel boot arguments - BOOM when OOM)
[23:16:23 CEST] <jkqxz> hojuruku:  "17:08 < jkqxz> What goes wrong with 'ffmpeg ... -f kmsgrab -device /dev/dri/card0 -i - ... -init_hw_device vaapi=intel:/dev/dri/renderD128 -filter_hw_device intel -filter:v hwmap,scale_vaapi=format=nv12 ...'?"
[23:19:00 CEST] <hojuruku> Failed to create surface from DRM object: 18 (invalid parameter).
[23:19:08 CEST] <hojuruku> I could debug it.
[23:21:40 CEST] <hojuruku> it needs to capture the amd screen and get it onto an intel vaapi surface to encode. I think we have to take it out of amd memory via vaapi hwdownload, then init a new vaapi device on the intel and hwupload it. Just changing the vaapi driver and dri/render device to intel didn't work.
[23:22:12 CEST] <jkqxz> That case should be the usual DRM PRIME sharing between devices setup.
[23:22:28 CEST] <jkqxz> Though yeah, certainly you can do download/upload if you prefer.
[23:23:17 CEST] <hojuruku> not using DRI prime, but you are right, that might be what I need to enable sharing; xinerama is no good. But I don't want to use reverse prime, I'd rather just have two separate cards the old-fashioned way using xinerama, but xrandr is giving me no joy either. Right now I've only got the one screen working.
[23:54:57 CEST] <jkqxz> Hmm, right.  What I was remembering working was export of video surfaces from AMD, which can indeed be imported directly to Intel from PRIME fds.
[23:56:10 CEST] <jkqxz> It looks like however the scanout surfaces get set up means that isn't the case for them, though, so it's probably not going to work directly.
[23:57:25 CEST] <jkqxz> (From testing with Polaris + Coffee Lake recording a bare framebuffer console, but could well be different with other setups.)
[00:00:00 CEST] --- Wed Aug 29 2018

