[Ffmpeg-devel-irc] ffmpeg.log.20180713

burek burek021 at gmail.com
Sat Jul 14 03:05:01 EEST 2018


[01:42:43 CEST] <BtbN> wlfgang, that depends entirely on the API. There is no generic "hwaccelerate this" recipe
[03:45:54 CEST] <wlfgang> well, for example, is there a way to query for what is actually supported by the hardware/OS? (h264_nvenc, h264_qsv, ...)
[03:46:52 CEST] <wlfgang> i am interested in h264 only, so if i need to query each one, that is fine
[03:47:05 CEST] <memeka> hi, is it possible to set the pix_fmt sent to the decoder from the ffmpeg command line?
[03:47:25 CEST] <memeka> the AVCodecContext  pix_fmt
[07:16:07 CEST] <memeka> is it possible to set the pix_fmt sent to the decoder from the ffmpeg command line? the AVCodecContext  pix_fmt
[07:16:33 CEST] <nicolas17> -pix_fmt? :P
[08:10:40 CEST] <memeka> nicolas17: pix_fmt will actually be recognized later in the process
[08:10:52 CEST] <memeka> and it's not set to AVCodecContext
[08:11:33 CEST] <memeka> so it will actually try to do conversion to it, instead of setting it as AVCodecContext  pix_fmt
[12:03:10 CEST] <aaaa> I am trying to reverse an mp4 video but keep getting this error: Error while decoding stream #0:0: Invalid data found when processing input. The video plays fine with vlc. I notice that memory usage rockets when I run the command. The command I am using is /usr/bin/ffmpeg -i /tmp/original.mp4 -vf reverse /tmp/reversed.mp4
[12:03:27 CEST] <aaaa> Can anyone help with the error?
[12:04:42 CEST] <JEEB> -v verbose and post the full terminal output on pastebin or so, and link here
[12:04:48 CEST] <JEEB> most likely though you're running out of memory
[12:04:57 CEST] <JEEB> since -vf reverse probably is going to buffer the whole shebang
[12:05:13 CEST] <JEEB> https://www.ffmpeg.org/ffmpeg-all.html#reverse
[12:05:14 CEST] <JEEB> yes
[12:05:19 CEST] <JEEB> > Warning: This filter requires memory to buffer the entire clip, so trimming is suggested.
[12:05:26 CEST] <JEEB> and that is the *decoded* clip
[12:05:30 CEST] <JEEB> not just your clip
[12:09:59 CEST] <aaaa> Here is the verbose output: https://pastebin.com/VsREznmf
[12:10:41 CEST] <aaaa> Is there any workaround for this issue? Sorry if this is a stupid question... I'm very new to this :P
[12:11:43 CEST] <sfan5> you're running out of memory
[12:11:59 CEST] <sfan5> your best bet is not to run this on a terribly underpowered device (a raspberry pi)
[12:13:23 CEST] <JEEB> sfan5: well he *is* buffering the whole clip
[12:13:30 CEST] <JEEB> because that's the only way to reverse
[12:14:26 CEST] <sfan5> sure, just using a rpi for this task doesn't help
[12:14:36 CEST] <JEEB> yes, of course
[12:14:46 CEST] <JEEB> I didn't check the length of his clip
[12:14:55 CEST] <aaaa> it's 15 seconds long
[12:14:57 CEST] <JEEB> but I would guess a lot of *nix distros would just nope out
[12:15:04 CEST] <sfan5> reversing would work better if it was done for each GOP: start with the last one, encode it reversed, go to the previous one
[12:15:09 CEST] <sfan5> ffmpeg doesn't support any of this though
[12:15:25 CEST] <JEEB> yea, going backwards efficiently is a special case
[12:15:35 CEST] <JEEB> and generally with just FFmpeg you can't expect frame-exactness
[12:15:44 CEST] <JEEB> (although with mp4 it will most likely be frame-exact)
[12:15:51 CEST] <aaaa> :(
[12:17:35 CEST] <aaaa> Would it work if the file was segmented to say 1 second clips and each one reversed separately and then join them together at the end?
[12:17:59 CEST] <JEEB> it would require less buffering, but with an rpi I don't think you're gonna get far
[12:20:15 CEST] <aaaa> Ok, I'll give it a go and see what happens. Thanks for your help
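The segment-then-reverse idea discussed above can be sketched as follows; filenames, the 1-second segment length and the frame rate are illustrative, and stream-copied segments will only cut on keyframes, so the pieces will not be exactly one second long:

```shell
# 1) split the source into ~1 second segments without re-encoding:
ffmpeg -i original.mp4 -c copy -f segment -segment_time 1 -reset_timestamps 1 seg%03d.mp4
# 2) reverse each short segment; only ~1 s has to be buffered at a time
#    (drop -af areverse if the file has no audio):
for f in seg*.mp4; do ffmpeg -i "$f" -vf reverse -af areverse "rev_$f"; done
# 3) list the reversed segments in reverse order and concatenate them:
for f in $(ls -r rev_seg*.mp4); do echo "file '$f'"; done > list.txt
ffmpeg -f concat -safe 0 -i list.txt -c copy reversed.mp4
```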
[12:23:42 CEST] <King_DuckZ> hey guys, I've finally got something up, this was really messy and hard
[12:24:18 CEST] <King_DuckZ> now my video looks mostly ok, except for some crazy colours appearing where highlights should be
[12:24:43 CEST] <King_DuckZ> like bright magenta, yellow and such - any idea of what might be causing this?
[12:24:58 CEST] <King_DuckZ> my input is rgb24, output is yuv422p
[12:25:41 CEST] <sfan5> what are you decoding it with?
[12:25:59 CEST] <King_DuckZ> I can change those, I just picked two random ones but it's not like I must use them - however my input is RGB
[12:26:39 CEST] <King_DuckZ> sfan5: nothing, input comes from separate still images (jpg, png...) which I load with openimageio
[12:26:49 CEST] <sfan5> the final video, i mean
[12:28:31 CEST] <King_DuckZ> sfan5: you mean what I'm using to watch it? mplayer
[12:29:12 CEST] <sfan5> that should handle 4:2:2 fine
[12:29:59 CEST] <King_DuckZ> should I use something else instead of 422?
[12:30:15 CEST] <sfan5> you can give yuv420p a try to see if that fixes it
[12:30:21 CEST] <King_DuckZ> ok
[12:52:27 CEST] <King_DuckZ> uhm no, that just breaks my code when I select the dnxhd codec, and with h264 it just shows the same green/magenta/blue bands in the sky and in lit areas
[12:52:56 CEST] <King_DuckZ> I'm sure mplayer is right and my code is wrong, though I don't see how
[12:59:13 CEST] <King_DuckZ> maybe a better question would be: how do I make any sense out of all the pixelformats in the enum, and how do I know which one can be used?
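One hedged way to narrow this down is to let the ffmpeg CLI do the same RGB-to-YUV conversion on the same stills and compare the result (filenames and rate are illustrative):

```shell
# Encode the same image sequence letting ffmpeg handle the RGB->YUV
# conversion; if this plays back with correct highlights in mplayer,
# the bug is in the hand-written conversion/pixel-format handling.
ffmpeg -framerate 25 -i frame%04d.png -c:v libx264 -pix_fmt yuv420p check.mp4
```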
[14:02:23 CEST] <mort> Weird question: when converting a png to a yuv (nv12) with ffmpeg, is there a way to set the number of bytes per line?
[14:02:34 CEST] <mort> using the CLI
[14:27:57 CEST] <DHE> you're saving to rawvideo?
[14:28:00 CEST] <utack> is there anything that can be done here? "[png @ 0x557e39382b00] [IMGUTILS @ 0x7f8430390300] Picture size 21465x31581 is invalid"
[14:28:12 CEST] <utack> is the size too large, or the uneven pixel number a problem?
[14:28:50 CEST] <DHE> I'd try the uneven pixel thing first. the resolution is large but not unreasonably so
[14:29:29 CEST] <sfan5> 2.5GiB is pretty large, wouldn't surprise me if there was a sanity check put in for that
[14:29:46 CEST] <sfan5> s/GiB/GiB per frame/
[14:37:59 CEST] <DHE> looking at the source where that error comes from, it looks like it is a memory allocation limit...
[15:17:40 CEST] <utack> thanks DHE sfan5
[15:17:54 CEST] <utack> not a fan of resizing to even pixels but i will try
[16:07:36 CEST] <aaaa> I managed to reverse the video by extracting all the frames and joining them in reverse. It's a bit slower but works
[16:33:42 CEST] <DHE> aaaa: that's going to be the best way without some kind of custom solution. like the batching by GOP which still requires a lot of RAM or disk space for buffering
[17:02:07 CEST] <aaaa> DHE: Yeah it seems to be working well. I did try increasing swap space (can't add more ram as it's a raspberry pi) but I underestimated the amount of ram it actually uses so abandoned that idea..
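The extract-and-rejoin workaround aaaa describes could look like this; filenames are illustrative, and the frame rate must match the source clip:

```shell
# Dump every decoded frame to disk as an image:
mkdir frames
ffmpeg -i original.mp4 frames/f%05d.png
# Re-number the frames in reverse so the image2 demuxer reads them
# back-to-front:
i=1
for f in $(ls -r frames/f*.png); do
    mv "$f" "frames/r$(printf %05d "$i").png"
    i=$((i+1))
done
# Re-encode from the reversed sequence:
ffmpeg -framerate 25 -i frames/r%05d.png reversed.mp4
```

This trades RAM for disk space, which is usually the better deal on a raspberry pi.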
[17:51:08 CEST] <King_DuckZ> I want to give my program's users the ability to pipe my output into ffmpeg, I normally have RGB frames one at time, how should I output them so that ffmpeg will understand data across the pipe?
[17:51:41 CEST] <furq> as rgb frames one at a time
[17:51:59 CEST] <furq> if you want your users to not have to specify the frame size, pixel format etc then it's a bit trickier
[17:53:23 CEST] <TheAMM> NUT is a simple container ffmpeg understands, so that's one solution to ez piping
[17:53:39 CEST] <TheAMM> However I don't consider NUT all that simple to implement
[17:53:40 CEST] <King_DuckZ> furq: that would be ideal... I can change pretty much anything quite easily on my buffers - planar or interleaved, alignment, bit depth...
[17:53:47 CEST] <furq> there are much simpler ways than nut
[17:53:52 CEST] <furq> unless your program links to lavf
[17:53:55 CEST] <TheAMM> Do tell
[17:54:09 CEST] <furq> i'd probably just output a sequence of bmp/tiff frames or something
[17:54:12 CEST] <TheAMM> Because I'm still half-looking for piping frames with timestamps
[17:54:19 CEST] <furq> oh right
[17:54:22 CEST] <furq> yeah that doesn't help with timestamps
[17:54:30 CEST] <furq> y4m would be another way but obviously it doesn't work with rgb
[17:55:12 CEST] <King_DuckZ> hold on, bmp? you mean the actual windows format, with headers, 4-byte aligned scanlines and all?
[17:55:44 CEST] <furq> i do mean that but it doesn't necessarily have to be bmp
[17:55:55 CEST] <furq> whatever the simplest image format that image2 will accept is
[17:57:07 CEST] <King_DuckZ> I can do bitmap... just wondering if there's any case where I'd want to use more than 8bpp, because bmp is capped to that iirc
[17:59:03 CEST] <furq> maybe ppm would be simpler
[17:59:24 CEST] <furq> you'd still have to specify input framerate in ffmpeg
[18:01:14 CEST] <King_DuckZ> that's fine, odds are original input comes from an image sequence, 1 frame per file, so users would have to specify the frame rate anyways
[18:01:29 CEST] <King_DuckZ> what is ppm tho?
[18:01:43 CEST] <furq> portable pixmap
[18:01:45 CEST] <furq> part of netpbm
[18:02:02 CEST] <furq> iirc it's a simple header and then just rgb24 data
[18:03:08 CEST] <King_DuckZ> ok I found some resources on ddg, thanks for the help! :)
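The two piping approaches discussed above might look like this; the producer name, frame geometry and rate are illustrative:

```shell
# Raw rgb24 frames over the pipe: the user has to tell ffmpeg the
# geometry and rate, since raw video carries no headers.
my_program --output rgb24 | ffmpeg -f rawvideo -pixel_format rgb24 \
    -video_size 1920x1080 -framerate 25 -i - -pix_fmt yuv420p out.mp4
# PPM over the pipe: each frame's header carries width/height/depth,
# so only the frame rate still has to be specified.
my_program --output ppm | ffmpeg -f image2pipe -c:v ppm -framerate 25 -i - out.mp4
```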
[18:03:44 CEST] <furq> it's annoying that there's no rgb extension for y4m
[18:03:46 CEST] <furq> that'd be perfect otherwise
[18:04:14 CEST] <King_DuckZ> what format would that need? YUV?
[18:04:20 CEST] <furq> any yuv rawvideo, yeah
[18:05:20 CEST] <King_DuckZ> hm I can write code to convert to YUV... it's probably a bit complicated, but in terms of run time, that conversion would happen anyways, one side of the pipe or the other, no?
[18:05:32 CEST] <furq> depends what you're doing
[18:05:42 CEST] <furq> or what your user is doing, rather
[18:06:15 CEST] <King_DuckZ> who knows, this is a support-any-input/write-any-output kind of tool
[18:12:45 CEST] <King_DuckZ> am I looking at the right thing? https://wiki.multimedia.cx/index.php?title=YUV4MPEG2
[18:13:15 CEST] <furq> yeah
[18:14:07 CEST] <furq> http://vpaste.net/Y6ZlI
[18:14:12 CEST] <furq> that's all the supported pixel formats
[18:18:13 CEST] <King_DuckZ> those in the "use strict -1" list, are they also valid?
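For reference, ffmpeg itself can write Y4M via the yuv4mpegpipe muxer; the pixel formats outside the classic Y4M set are the ones that need `-strict -1` (file names are illustrative, and the exact extended-format list depends on the ffmpeg version):

```shell
# A standard Y4M pixel format works as-is:
ffmpeg -i input.mp4 -pix_fmt yuv422p -f yuv4mpegpipe out.y4m
# A non-official (e.g. high-bit-depth) format needs -strict -1:
ffmpeg -i input.mp4 -pix_fmt yuv420p10le -strict -1 -f yuv4mpegpipe out10.y4m
```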
[18:27:40 CEST] <podman> Hi there! Is the 'allowed_extensions' options only available when the input file is a hls manifest? I get "Option allowed_extensions not found." when including that option for other file types
[19:03:31 CEST] <saml_> podman, https://www.ffmpeg.org/ffmpeg-all.html#hls-1  yeah it's for hls only
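Since `allowed_extensions` is a private option of the HLS demuxer, it is only accepted when the input is actually handled by that demuxer, e.g. (illustrative filenames):

```shell
# Works: the .m3u8 input goes through the hls demuxer, which owns
# the allowed_extensions option.
ffmpeg -allowed_extensions ALL -i playlist.m3u8 -c copy out.mp4
# Fails with "Option allowed_extensions not found": a plain mp4 input
# never reaches the hls demuxer.
ffmpeg -allowed_extensions ALL -i input.mp4 -c copy out2.mp4
```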
[20:17:37 CEST] <chocolate-elvis> If someone wanted to emulate this cheezy Adobe Media Encoder setting, https://postimg.cc/image/w5xoyrmgn/ just -timecode and using ; or . would work right?
[20:25:42 CEST] <insonifi> hi, I want to use VAAPI along with VidStab filter, but can't figure out the correct filter chain. Is it at all possible?
[20:27:00 CEST] <insonifi> here is the script I tried: https://pastebin.com/h20d3ptA
[20:53:07 CEST] <Hello71> why bother with vaapi
[20:53:27 CEST] <Hello71> oh, you want to *output* with vaapi
[21:19:18 CEST] <insonifi> Hello71: because it does recompression 3-6 times faster :) No, I don't want to output.
[21:19:35 CEST] <Hello71> encode
[21:19:44 CEST] <Hello71> and also much worse, but whatever
[21:25:33 CEST] <DHE> hardware encoding is very fast, but loses to software encoders like x264 even on the default "medium" settings
[21:44:43 CEST] <kerio> hardware encoding is useful if you're encoding at crazy high bitrates for later reencoding
[21:45:00 CEST] <kerio> like, if you're saving a video stream on an embedded device
[21:45:19 CEST] <kerio> or a computer as a background task (game recording, for instance)
[21:45:27 CEST] <kerio> otherwise, it's just shite
[22:25:53 CEST] <fsphil> hey all. I've been using av_frame_get_best_effort_timestamp() and noticed it's marked as deprecated, what's the replacement? just read the pts value from AVFrame?
[22:27:05 CEST] <JEEB> fsphil: `git grep -B5 "av_frame_get_best_effort_timestamp"` in the FFmpeg source directory
[22:29:24 CEST] <JEEB> basically the AVFrame structure has a field called like that, and it can just be accessed as-is
[22:29:37 CEST] <JEEB> if you need that specific field as opposed to just the pts/dts
[22:31:13 CEST] <fsphil> oh yes, best_effort_timestamp, I'd missed that one
[23:54:54 CEST] <figgisfiggis> Hi! Any hevc guys around? I'm trying to extract the qp-map for hevc on an x86 system but I'm lost in the code. It's simple for H.264 with everything gathered nicely in mpeg.c. No debug information implemented yet for hevc...
[00:00:00 CEST] --- Sat Jul 14 2018


More information about the Ffmpeg-devel-irc mailing list