[Ffmpeg-devel-irc] ffmpeg.log.20190801
burek
burek021 at gmail.com
Fri Aug 2 03:05:03 EEST 2019
[00:19:57 CEST] <dastan> hello people
[00:20:11 CEST] <dastan> is it possible to accelerate a decklink (blackmagic) output with cuda?
[00:20:21 CEST] <dastan> is it possible to accelerate a decklink (blackmagic) output with cuda(nvidia)?
[00:25:53 CEST] <kepstin> dastan: decklink stuff is all raw video, not sure what you would use cuda for?
[00:26:31 CEST] <dastan> i want to hardware accelerate the decklink video insertion
[00:30:12 CEST] <BtbN> ...?
[00:30:15 CEST] <BtbN> That doesn't make sense
[00:33:49 CEST] <dastan> why?
[00:34:15 CEST] <dastan> i get this when i do the video conversion in the normal way
[00:34:37 CEST] <dastan> (h264 (native) -> v210 (native))
[00:35:49 CEST] <dastan> is it not possible to create this (h264_cuda -> v210 (native))?
[00:36:27 CEST] <dastan> i am making the conversion from h264 with software, is it possible to convert it with hardware?
[00:37:52 CEST] <another> if your hardware supports h264 encoding
[00:38:18 CEST] <dastan> yes it supports
[00:38:22 CEST] <dastan> but how?
[00:41:49 CEST] <klaxa> huh, shouldn't that be decoding?
[00:42:29 CEST] <another> err, yeah. thx klaxa
[00:43:54 CEST] <klaxa> if you are lucky you can just use ffmpeg -c:v h264_cuvid -i some_h264.mkv
[00:44:08 CEST] <klaxa> if all the drivers are present and your ffmpeg is built with the correct support
[00:44:19 CEST] <klaxa> i think at least
[00:44:48 CEST] <klaxa> (see: ffmpeg -codecs | grep h264 and ffmpeg -help decoder=h264_cuvid)
[00:45:26 CEST] <another> and hope you have better luck than klaxa :D
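[Editorial note: a minimal sketch of what klaxa describes, applied to dastan's h264 -> v210 decklink case. It assumes an ffmpeg build with NVDEC/cuvid support; the DeckLink output name below is hypothetical (real device names can be listed with the decklink list_devices option). Note that only the h264 decode is offloaded to the GPU; the conversion to uyvy422/v210 for the card still runs on the CPU, which is kepstin's and BtbN's point.]

    ffmpeg -c:v h264_cuvid -i input.mp4 -vf format=uyvy422 -f decklink 'DeckLink Mini Monitor'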
[00:55:03 CEST] <dastan> i am seeing in nvidia-smi that i can only allocate 2 processes because the memory is 210 MiB
[00:55:25 CEST] <dastan> i dont know if MiB is MB, but the card has 2GB
[00:56:01 CEST] <dastan> and nvidia is sending error messages saying it is out of memory
[00:56:11 CEST] <dastan> where i can change the memory?
[03:09:30 CEST] <cmptr> :close
[03:14:35 CEST] <fred1807> how much power do I need in my VPS to properly stream 1080p videos to rtmp youtube, flawlessly?
[03:33:05 CEST] <NubNubNub> kepstin FYI i tried the streamselect and it's working like a charm
[10:04:08 CEST] <machtl> hi guys!
[10:05:01 CEST] <machtl> how can i reuse an input with concat? i will post a pastebin afterwards
[10:08:01 CEST] <machtl> https://pastebin.com/1SBhV9XD
[10:09:00 CEST] <machtl> i want to reuse the input 1... also the problem is that i would need to "start" the input of the stream when the first filter starts (otherwise the trim wont work)
[10:11:20 CEST] <furq> [0:v]scale=1280:720:force_original_aspect_ratio=1,split=2[v0][v1]
[10:14:56 CEST] <furq> or even [0:v]scale=1280:720:force_original_aspect_ratio=1,split,concat[concatvideo]
[10:15:07 CEST] <furq> if i'm reading that filterchain right
[10:15:20 CEST] <furq> i have no idea about the trimming thing, i'm not sure why you'd want a fixed trim on a live input
[10:15:58 CEST] <durandal_1707> split,concat will eat lots of RAM if input is too long
[10:16:08 CEST] <furq> yeah you probably just want to use loop for that
[10:16:38 CEST] <furq> scale,loop=2
[10:17:02 CEST] <durandal_1707> loop also eats memory :) and that is invalid command for loop
[10:17:55 CEST] <durandal_1707> you better use concat demuxer with loop option and scale final output
[10:18:47 CEST] <durandal_1707> if input is not too long, say few seconds, it is fine to use split,concat
[10:19:09 CEST] <furq> i'm guessing it's more than that if the trim starts at 224 seconds
[10:20:18 CEST] <durandal_1707> for reusing long inputs you better use intermediate files
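[Editorial note: a sketch of durandal_1707's 10:17 suggestion, with hypothetical file names: repeat the clip via the concat demuxer's list file (or loop the whole input with -stream_loop) and scale only the final output, instead of splitting and concatenating inside the filtergraph.]

    # list.txt
    file 'clip.mp4'
    file 'clip.mp4'

    ffmpeg -f concat -safe 0 -i list.txt -vf scale=1280:720 out.mp4
    # or play the input twice (original + 1 loop) without a list file:
    ffmpeg -stream_loop 1 -i clip.mp4 -vf scale=1280:720 out.mp4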
[10:20:29 CEST] <machtl> hmm yes its actually a bigger project
[10:23:25 CEST] <machtl> here is the bigger output: https://pastebin.com/CDGeNeyx
[10:23:35 CEST] <machtl> not the output, the ffmpeg commands
[10:24:56 CEST] <machtl> there are 2 things bothering me, 1 is that the "stream" aa0 doesnt start because of the trim (I/O Error) therefore i added the dummies (which don't help either)
[10:26:19 CEST] <machtl> and the second is that i cant re-use the inputs
[10:27:30 CEST] <furq> well for the last part you want split/asplit
[10:27:36 CEST] <furq> i don't even know where to start with the rest of that
[10:31:02 CEST] <durandal_1707> whenever you reuse an input with split, it's better to store everything up to the split in an intermediate file
[10:31:44 CEST] <machtl> furq: is it that bad :) ?
[10:31:48 CEST] <durandal_1707> and then instead of intermediate_file -vf split do -i intermediate_file -i intermediate_file twice
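[Editorial note: a sketch of the intermediate-file approach durandal_1707 describes, with hypothetical file names and a lossless intermediate codec: do the expensive work once, write it out, then open that file as many independent inputs as needed instead of using split.]

    # 1) scale once into an intermediate file
    ffmpeg -i input.mp4 -vf scale=1280:720 -c:v ffv1 intermediate.mkv
    # 2) reuse it as two independent inputs instead of split
    ffmpeg -i intermediate.mkv -i intermediate.mkv \
           -filter_complex "[0:v][1:v]concat=n=2:v=1:a=0[out]" -map "[out]" out.mp4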
[10:33:01 CEST] <furq> it's mostly the trimming live inputs that concerns me
[10:33:16 CEST] <furq> you're going to end up buffering a ton of stuff waiting for that
[10:34:23 CEST] <machtl> yes, i couldnt find another option to have the input of the livestreams at the live position in the videos
[10:39:39 CEST] <machtl> i wouldnt need to do that if it is possible to "restart" the input of the livestream to the actual live stream
[10:39:48 CEST] <machtl> position
[10:41:02 CEST] <machtl> and end when the mapped video is over
[11:34:17 CEST] <ocx32> hello community, so i am trying to create a kind of a bridge from WEBRTC to RTP, my question is can i take the SDP that is embedded in WEBRTC and use the same one for RTP or is it better to generate a new one? can the same SDP be used in the webrtc and RTP SDP payload? Thanks
[12:44:29 CEST] <machtl> does someone know an icecast stream for testing which has an audioclock or counter or something?
[17:02:26 CEST] <lyncher3> hi. I'm trying to add metadata OBUs to an AV1 bitstream. What are the muxing guidelines?
[17:13:05 CEST] <dastan> hello
[17:14:36 CEST] <dastan> i want to ask where i can find documentation about the QSV flags....i have a document that mixes filter_complex with hwupload, scale_qsv and things like that but i can't find documentation about them
[18:49:31 CEST] <tepiloxtl> Hey. Im trying to use ffmpeg with vaapi (for twitch streaming, but I think thats not important). Ive been basing my attempts on this github gist https://gist.github.com/Brainiarc7/7b6049aac3145927ae1cfeafc8f682c1 and ffmpeg docs https://trac.ffmpeg.org/wiki/Hardware/VAAPI. My problem is that I cant find a way to make my output even close to quality that I was achieving on Windows
[18:50:20 CEST] <tepiloxtl> Heres my command https://pastebin.com/Vf7x1kh3. My CPU is Intel i5-4460. Can I do anything to make quality better, or is it as optimal as it can be?
[18:51:34 CEST] <tepiloxtl> I was trying to use higher bitrate, but I need to stay around 6-10M for twitch. Trying to run with -qp 23 made bitrate quickly balloon up to 50M before my PC locked up
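[Editorial note: one possible direction for tepiloxtl, not a guaranteed fix: -qp is constant-quantizer mode and will let the bitrate run away on complex scenes, so instead cap the encoder with the generic bitrate options and let it stay inside Twitch's ~6M range. The input, filter chain and device path below are placeholders for whatever is in the pastebin; quality at a fixed 6M is still bounded by what a Haswell-era (i5-4460) hardware encoder can do.]

    ffmpeg -vaapi_device /dev/dri/renderD128 -i INPUT \
           -vf 'format=nv12,hwupload' -c:v h264_vaapi \
           -b:v 6M -maxrate 6M -bufsize 12M -g 120 \
           -c:a aac -b:a 160k -f flv rtmp://live.twitch.tv/app/STREAM_KEY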
[18:52:28 CEST] <fred1807> is it possible to broadcast many video files, without breaking the stream?
[18:57:39 CEST] <tepiloxtl> This is one of the better results I achieved, but its still not exactly appetising to watch: https://www.twitch.tv/videos/461064865
[18:57:56 CEST] <HickHoward> uhh
[18:58:09 CEST] <HickHoward> so, i have some file with a "s302m" track
[18:58:30 CEST] <HickHoward> and i want to decode that s302m track without the result being one big noisy sound file
[18:58:37 CEST] <HickHoward> any idea as to how i can actually achieve this?
[19:05:02 CEST] <Spring> What is the 'Unique ID' metadata from WebMs as exposed in MediaInfo? Tried searching for 'webm "unique id"' but couldn't find exact answers
[19:05:42 CEST] <Spring> it's a long, random-looking string
[19:06:32 CEST] <Spring> I haven't observed it in WebMs I've downloaded but do see it in ffmpeg-transcoded ones I have
[19:08:59 CEST] <Mavrik> Spring, it seems to be just a kinda random number
[19:09:55 CEST] <Mavrik> Spring, it's not even a UUID/GUID so it's not unique at all :)
[19:10:59 CEST] <Spring> closest I got was this, which mentions TrackUID and other 'UID's but most is crossed out and the only current comment is the Track one should be '0', http://wiki.webmproject.org/webm-metadata/global-metadata
[19:11:43 CEST] <Mavrik> Spring, see https://www.matroska.org/technical/specs/index.html
[19:11:45 CEST] <Mavrik> FileUID
[19:11:57 CEST] <Mavrik> And source: https://github.com/FFmpeg/FFmpeg/blob/master/libavformat/matroskaenc.c#L1794
[19:12:08 CEST] <cehoyos> HickHoward: It is most likely Dolby-E, there is a patch on the development mailing list that may allow decoding the audio.
[19:12:41 CEST] <cehoyos> But you forgot to provide the complete, uncut "ffmpeg -i" console output
[19:12:53 CEST] <Spring> interesting, at least I know what it's for
[19:13:11 CEST] <Spring> (now)
[19:13:31 CEST] <Mavrik> (webm is essentially matroska/mkv so it uses the same codepath on ffmpeg)
[19:13:45 CEST] <Mavrik> That's probably the reason why you see structures not supposed to be added to webm in there :)
[19:15:19 CEST] <JEEB> I guess we need a similar thing like in movenc for a mode/profile?
[19:15:29 CEST] Action: JEEB doesn't remember if that already exists
[19:19:12 CEST] <HickHoward> here's the "complete, uncut" console output
[19:19:12 CEST] <HickHoward> https://pastebin.com/TgnFEhLN
[20:11:31 CEST] <Ua-Jared> Hey all, I was here yesterday asking this question so tell me to go away if I'm getting annoying, haha. My problem is that I'm trying to use ffmpeg to decode an H.264 stream from a camera. I like using ffmpeg for this purpose because it'll use my GPU somewhat for better performance (with the -hwaccel flag set). Anyways, my problem is that I then want to process these decoded, uncompressed images in my Java program. I tried piping the output of ffmpeg into my Java program, but I can't seem to get it to work. The command I used was ./ffmpeg -hwaccel dxva2 -threads 1 -loglevel verbose -i "rtsp://admin:admin@192.168.15.20/media/video2" -f matroska pipe:1 | java Test , and my Java program looks like this (just reads from stdin) https://pastebin.com/ycevPw8i
[20:11:36 CEST] <Ua-Jared> To be honest I don't even know if I'm approaching this right. Fundamentally I'm just trying to use ffmpeg to do some gpu-accelerated decoding of the H264 stream, and then somehow further process the decoded images in my Java program. Is there some other way I should do it?
[20:19:44 CEST] <kepstin> Ua-Jared: the overhead of handling the pipe and decoded frames is probably not worth it just to get hardware decoding going, unless you're *really* limited on cpu core count or power :/
[20:20:40 CEST] <kepstin> in your example ffmpeg command, ffmpeg would actually be re-encoding the stream (using a software encoder) to h264 in matroska, which is definitely not what you want
[20:21:28 CEST] <kepstin> and of course video data is binary, not text, so you'd want to be using binary read/write methods on the System.in io object.
[20:22:47 CEST] <kepstin> i'd recommend either figuring out how to get hardware decoding working with javacv, or switch to a different language that can use libavcodec,etc. directly.
[20:23:26 CEST] <kepstin> if you're really set on java, you can write a libavcodec wrapper that does all the c bits and provides a simple interface then use that from jni I guess?
[20:24:53 CEST] <kepstin> You can get raw video going through pipes fine - indeed, I've done it for some applications (a python script that generates video, sent to an external ffmpeg to encode) - but it requires careful buffer handling since pipe buffers aren't normally big enough to hold an entire raw video frame so you get a lot of blocking io :/
[20:24:57 CEST] <kepstin> at least on linux.
[20:25:30 CEST] <DHE> yeah I did something like that back in the day with mencoder and some HUGE -buffer -and -audio-buffer values...
[20:25:48 CEST] <Ua-Jared> kepstin: dang, I was wondering about that too :( . And dang! I should have understood what that command was doing. To be honest I'm still a total noob about all things video, but I have read enough to start understanding it (I think). I know the video files are called "containers", and I know inside the containers are the video data (encoded by some format) and audio data (encoded by some format). And I know some containers (e.g., .mp4) will only allow certain types of encoded video (e.g., h.264). So that makes sense that a matroska container would need *some* video format for the data inside, and ffmpeg just uses h264. And yeah, I do understand that the binary from that pipe definitely wasn't going to print right as characters :P .
[20:25:51 CEST] <kepstin> with my python script, I ended up buffering inside python and having threaded io handlers.
[20:26:16 CEST] <kepstin> matroska can hold some types of raw video iirc
[20:26:16 CEST] <furq> Ua-Jared: ffmpeg defines default codecs for every container
[20:26:24 CEST] <furq> so if you don't explicitly set one then it'll use those
[20:26:25 CEST] <kepstin> but if you don't specify a codec, ffmpeg picks one
[20:26:45 CEST] <kepstin> in general, you always want to explicitly specify the codec.
[20:26:53 CEST] <kepstin> the defaults have even changed between ffmpeg versions :)
[20:28:02 CEST] <kepstin> Ua-Jared: note that matroska is a complicated format, and is kind of hard to demux to get the video frames in your java code later. That's why we recommended you use "yuv4mpeg", which is a *super* simple format designed just to transfer raw video between applications.
[20:28:15 CEST] <furq> well i assume there's a matroska demuxer for java out there, but yeah
[20:28:20 CEST] <furq> y4m is simple enough to parse by hand in a few lines of code
[20:28:30 CEST] <Ua-Jared> And Thank you for the tip about further approaches! I think those are really good directions to go. I think I'm going to try to see if I can't get OpenCV to do the GPU decoding somehow. I'd like to be able to call ffmpeg source libraries in C, but JNI is rather scary, lol. Or at least it is from what I've seen of it
[20:28:47 CEST] <furq> https://wiki.multimedia.cx/index.php/YUV4MPEG2
[20:28:53 CEST] <Ua-Jared> (I know OpenCV uses ffmpeg on the backend, there's gotta be a way to do it)
[20:29:24 CEST] <Ua-Jared> Thank you furq! If I end up going the "pipe the output of ffmpeg into my Java program" route, I'll use y4m for a container
[20:29:55 CEST] <kepstin> you can also send raw video without any container, but it's harder to work with. the benefit of y4m is that it passes the video resolution, pixel format, etc. in headers.
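[Editorial note: to illustrate what kepstin and furq mean about the y4m header, a rough Java sketch of a reader. The class name Y4mReader is hypothetical, and it assumes 4:2:0 chroma (width*height*3/2 bytes per frame); it is not a full implementation, just the header parse plus a frame-read loop that copes with the partial reads a pipe delivers.]

    import java.io.BufferedInputStream;
    import java.io.IOException;
    import java.io.InputStream;

    // Minimal YUV4MPEG2 (y4m) reader sketch: parses the stream header on stdin,
    // then reads raw 4:2:0 frames. A real reader should also handle other C tags.
    public class Y4mReader {
        static String readLine(InputStream in) throws IOException {
            StringBuilder sb = new StringBuilder();
            int c;
            while ((c = in.read()) != -1 && c != '\n') sb.append((char) c);
            return sb.toString();
        }

        public static void main(String[] args) throws IOException {
            InputStream in = new BufferedInputStream(System.in);

            // Header line, e.g. "YUV4MPEG2 W1280 H720 F30000:1001 Ip A1:1 C420mpeg2"
            String header = readLine(in);
            if (!header.startsWith("YUV4MPEG2")) throw new IOException("not a y4m stream");

            int width = 0, height = 0;
            for (String tok : header.split(" ")) {
                if (tok.startsWith("W")) width = Integer.parseInt(tok.substring(1));
                if (tok.startsWith("H")) height = Integer.parseInt(tok.substring(1));
            }

            int frameSize = width * height * 3 / 2;   // Y plane + quarter-size U and V planes
            byte[] frame = new byte[frameSize];
            int n = 0;

            while (true) {
                String marker = readLine(in);          // "FRAME" (optionally with parameters)
                if (marker.isEmpty()) break;           // end of stream
                int off = 0;
                while (off < frameSize) {              // pipes deliver partial reads, so loop
                    int r = in.read(frame, off, frameSize - off);
                    if (r < 0) throw new IOException("truncated frame");
                    off += r;
                }
                n++;                                   // frame[] now holds one raw 4:2:0 image
            }
            System.err.println("read " + n + " frames of " + width + "x" + height);
        }
    }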
[20:30:25 CEST] <furq> i guess you'd want to avoid java demuxers anyway if you have tight constraints on input buffer size
[20:30:39 CEST] <furq> since you probably won't have any control over how they're doing that
[20:32:14 CEST] <Ua-Jared> So just to make sure I'm thinking about this all right, when we say "raw video", we mean pure decoded and uncompressed video right? Like as I understand it, something like h264 encoded video is basically a series of "base images", and then it just stores deltas / changes in the image from there. So it requires some decoding to play on screen, because it's not simply storing image after image. Would "raw video" for the video format in a container mean that the video data is really truly just image after image? I know it's uncommon and that videos are almost always compressed for storage, but just trying to get my head conceptually around it all
[20:33:33 CEST] <furq> yes
[20:33:37 CEST] <furq> it's just a stream of raw bitmap data
[20:33:39 CEST] <Ua-Jared> Because I think that "raw video" is basically what I want; just plain images to pass to Java. Apologies for my ignorance, I know it's more complicated than a 2d matrix of pixel values, but that's just how I've conceptualized it lol
[20:34:13 CEST] <Ua-Jared> Aww sweet! Ok so I have got the basics. So the raw video would be stupid large in file size (compared to say h264), but very easy to process
[20:34:15 CEST] <furq> so if you know what the frame size is (i.e. you know the dimensions, pixel format etc) then that is all you need
[20:34:38 CEST] <furq> y4m is nice if you don't know those in advance because it'll tell you those in the header
[20:34:58 CEST] <furq> and then after that you more or less just get a rawvideo stream but with markers between every frame
[20:36:07 CEST] <Ua-Jared> Awesome. So how would you tell ffmpeg to use y4m for the container format, and then "raw video" for the underlying video data inside that container?
[20:36:26 CEST] <furq> you don't need to set the codec for y4m, it can only use rawvideo
[20:36:32 CEST] <furq> so just -f yuv4mpegpipe
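[Editorial note: putting furq's answer together with the original command from 20:11, a hedged sketch of the whole pipeline. -pix_fmt yuv420p is added because the yuv4mpegpipe muxer only accepts a few planar formats, and Y4mReader is the hypothetical reader sketched above, not the user's actual Test class.]

    ./ffmpeg -hwaccel dxva2 -i "rtsp://admin:admin@192.168.15.20/media/video2" \
             -pix_fmt yuv420p -f yuv4mpegpipe pipe:1 | java Y4mReader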
[20:37:25 CEST] <durandal_1707> video is 3d matrix
[20:39:32 CEST] <Ua-Jared> Ahh ok, that makes sense. Thank you furq. My gameplan going forward is a) try to research OpenCV further, and see if I can't tell it (somehow) to use hwaccel with ffmpeg. And then b) maybe investigate this piping option further, taking care about large buffers and to use y4m for the container format. And also keeping in mind all this might kill the added performance anyway. And then finally c) writing some JNI bindings to call the underlying C code for ffmpeg in my Java program
[20:43:11 CEST] <furq> http://bytedeco.org/javacv/apidocs/org/bytedeco/javacv/FFmpegFrameRecorder.html
[20:43:16 CEST] <furq> maybe this
[20:43:36 CEST] <furq> and setVideoCodec(AV_PIX_FMT_DXVA2_VLD)
[20:46:58 CEST] <Ua-Jared> That actually looks super promising, I will give that a shot! Thanks for the link. I had actually just been using the "pure" OpenCV wrapper for Java, not JavaCV. I'll definitely give JavaCV a shot (I know it's kinda just a wrapper for OpenCV, but still, haha)
[20:48:09 CEST] <furq> nvm that's not a thing. i should stop listening to stackoverflow
[20:49:15 CEST] <Ua-Jared> Lol, my life for the past few days summarized. It's hard for me to keep track of all these technologies lol, JavaCV using OpenCV using FFmpeg, pretty rad.
[20:52:31 CEST] <Ua-Jared> Here's the Java 8 code I have that works pretty well, by the way. Uses the OpenCV 4.1.0 wrappers. https://pastebin.com/iUFUiWQc . I just need to figure out how to get it to use the GPU :) . But it's actually MORE responsive than VLC, which kinda shocked me (i.e., less latency). But it uses more of my CPU and no GPU, whilst VLC only uses a little CPU and some of my GPU
[00:00:00 CEST] --- Fri Aug 2 2019