[Ffmpeg-devel-irc] ffmpeg.log.20180420
burek
burek021 at gmail.com
Sat Apr 21 03:05:02 EEST 2018
[00:12:00 CEST] <zamba> ok, the conclusion is that photorec does a terrible job of restoring stuff
[00:34:55 CEST] <alone-y> zamba
[00:34:59 CEST] <alone-y> u still here?
[00:37:48 CEST] <exastiken> Hi
[00:38:02 CEST] <exastiken> How do I compile ffmpeg with debugging options?
[00:38:09 CEST] <exastiken> I wish to run callgrind on it.
[00:39:08 CEST] <JEEB> --disable-stripping
[00:39:47 CEST] <JEEB> if you mean ffmpeg.c then the debug symbol'd version is always left around in the build root as ffmpeg_g
[00:40:05 CEST] <JEEB> disable-stripping just skips stripping of all end result binaries
[00:40:12 CEST] <JEEB> and when you install
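As a minimal sketch of that workflow (flag names as reported by FFmpeg's own ./configure --help; the extra debug flags are optional assumptions that tend to help callgrind):

    ./configure --disable-stripping
    make
    # the unstripped binary is always left as ./ffmpeg_g in the build root;
    # --disable-stripping additionally keeps symbols in the installed binaries
    # optionally: --enable-debug=3 --disable-optimizations for richer traces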
[00:40:33 CEST] <exastiken> okay
[00:43:09 CEST] <exastiken> JEEB: is this the same for x265 as well?
[00:44:01 CEST] <alone-y> jeeb hello, can i ask you about lut?
[00:45:00 CEST] <JEEB> exastiken: x265 is cmake so no effing idea
[00:45:07 CEST] <exastiken> okay
[00:45:33 CEST] <whs> Hi can I start asking a question?
[00:46:33 CEST] <whs> I am following the tutorial http://dranger.com/ffmpeg/tutorial01.html
[00:46:57 CEST] <whs> But when I reach the line pCodec=avcodec_find_decoder(pCodecCtx->codec_id)
[00:47:22 CEST] <whs> I always get an error
[00:47:38 CEST] <whs> It seems that codec_id is always 0
[00:47:50 CEST] <whs> Could you please help?
[00:49:35 CEST] <whs> pCodec is always null
[00:51:07 CEST] <furq> alone-y: -vf "format=gray8,lutyuv=if(eq(val\,0)\,0\,255)"
[00:52:34 CEST] <alone-y> furq!!!
[00:52:37 CEST] <alone-y> thank you!!!!
[00:52:51 CEST] <furq> lol
[00:52:57 CEST] <furq> it's always good to have a break and then come back
[00:52:59 CEST] <alone-y> i have some result already but not quite what i need
[00:53:16 CEST] <alone-y> yes, the baby just isn't sleeping well today
[00:53:23 CEST] <alone-y> sorry.
[00:53:48 CEST] <furq> i'm still not sure why i couldn't get geq to work
[00:53:57 CEST] <furq> but that was a worse solution anyway so nvm
[00:54:01 CEST] <alone-y> i am trying
[00:54:03 CEST] <alone-y> just a moment
[00:54:28 CEST] <alone-y> WOW!!!!!!!!!!!!!!!!!!!!!
[00:54:33 CEST] <alone-y> IT'S WORKING!!!!!
[00:54:39 CEST] <alone-y> KEWL!!!!!
[00:54:44 CEST] <alone-y> THANK YOU A LOT !
[00:54:47 CEST] <furq> yeah i actually tested this one before posting it
[00:54:52 CEST] <furq> in an unprecedented move
[00:55:08 CEST] <alone-y> furq
[00:55:30 CEST] <zamba> alone-y: yes
[00:55:38 CEST] <furq> good timing
[00:55:44 CEST] <alone-y> i understand that i'm bisecting the bounds, but how can i do it in real time with mpv?
[00:56:00 CEST] <alone-y> i know mpv can use ffmpeg filters
[00:56:15 CEST] <furq> you can probably pass that string straight into --vf
[00:56:20 CEST] <furq> or you can just pipe it from ffmpeg
[00:56:33 CEST] <furq> i've never really messed with mpv filtering too much
[00:56:58 CEST] <alone-y> thank you furq a lot!
[01:05:50 CEST] <Anonrate> Is xlib supported on Windows?
[01:07:34 CEST] <alone-y> furq, mpv --lavfi-complex='format=gray8,lutyuv=if(eq(val\,0)\,0\,255)' "file.mkv"
[01:14:50 CEST] <alone-y> grayscale is working - mpv --vf=format=gray8,crop=1600:900:0:0
[01:21:05 CEST] <garoto> mpv --vf=lavfi="[format=gray8,lutyuv=if(eq(val\,0)\,0\,255)]" file.ext
[01:24:59 CEST] <Mista_D> is there a method to run pass1 once, and then split "_passlog and mbtree file_"and source video into pieces to run pass2 on multiple servers? Keeping proper VBR rate distribution etc? Looking for x264 and x265 methods.
[01:25:05 CEST] <alone-y> garoto!!!!
[01:25:09 CEST] <alone-y> MANY MANY THANKS!!!
[01:25:15 CEST] <alone-y> 1000 times
[01:25:42 CEST] <garoto> cheers
[01:25:55 CEST] <alone-y> thank you a lot
[01:26:01 CEST] <alone-y> have a nice night!
[01:26:08 CEST] <alone-y> 2:26 here.
[01:26:13 CEST] <alone-y> gotta sleep
[01:26:20 CEST] <alone-y> thank you all, guyz
[01:26:27 CEST] <alone-y> (and maybe ladies too)
[10:02:04 CEST] <will_> @alexpigment the source file is a wav. I tried converting to mp3 and the file was slightly longer
[10:08:36 CEST] <acresearch> people, i have an audio file .mp3 and i want to add a static picture to it so i can upload it to youtube. how can i do that please?
[10:11:31 CEST] <BtbN> That's a device_ctx, not a frames_ctx.
[10:11:37 CEST] <BtbN> It will use it automatically if it exists.
[12:07:46 CEST] <acresearch> hello people, is there a way to convert an audio file into a video file with a wave form?
[12:09:42 CEST] <durandal_1707> acresearch: yes, be more specific
[12:11:05 CEST] <faxmodem> I guess for displaying an oscilloscope view
[12:11:54 CEST] <acresearch> durandal_1707: so i have an audio file, a podcast recording, and i want to do something like this: https://twitter.com/WNYC/status/707576942837374976 add an image (though not necessary) and, more importantly, add a wave form
[12:46:44 CEST] <zerodefect> In the AC-3 encoder, is it possible and legal to dynamically change the channel count/layout? I'm thinking about when it's used in conjunction with a transport stream.
[12:53:47 CEST] <acresearch> durandal_1707: i found this command: ffmpeg -i epi.mp3 -filter_complex "showwaves=s=1920x1080:mode=cline" test.avi is it good?
[13:03:22 CEST] <durandal_1707> acresearch: have you tried it?
[13:14:36 CEST] <acresearch> durandal_1707: yes it works, i just want to know if there is something essential that is missing? maybe something in the background that i cannot see?
[13:42:29 CEST] <acresearch> in this command: ffmpeg -i epi.mp3 -filter_complex "showwaves=s=1920x1080:mode=cline" test.avi how can i add colours?
[13:58:23 CEST] <durandal_1707> acresearch: use recent ffmpeg, not historic one
[13:58:29 CEST] <acresearch> the colors option doesn't seem to be working, can anyone tell me what i am doing wrong?
[13:58:44 CEST] <acresearch> durandal_1707: i just installed ffmpeg 2 hours ago
[13:58:46 CEST] <durandal_1707> pastebing full command
[13:58:57 CEST] <durandal_1707> and output
[14:00:01 CEST] <acresearch> durandal_1707: http://paste.debian.net/1021198/
[14:05:18 CEST] <durandal_1707> acresearch: learn to read english, your ffmpeg is historic
[14:07:42 CEST] <acresearch> durandal_1707: what are you talking about? i just installed it today
[14:07:56 CEST] <durandal_1707> acresearch: you installed old version
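For the colours question above: recent builds of showwaves have a colors option (which is presumably why a historic ffmpeg rejects it), so something like this should work once a current version is installed (a sketch; the colour values are placeholders):

    ffmpeg -i epi.mp3 -filter_complex "showwaves=s=1920x1080:mode=cline:colors=cyan|white" test.avi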
[15:35:12 CEST] <tuna> So I've got a ffmpeg+nvenc question....
[15:35:54 CEST] <DHE> go on...
[15:38:21 CEST] <tuna> So I have assigned the pointer to my AvCudaDeviceCtx to AVCodecContext.hw_device_ctx....that AvCudaDeviceCtx includes the pointer to the cuda context. When I call avcodec_open2(AVCodecContext, codec, NULL), MSVC++ reports an access violation reading location 0xFFFFFFFFFFF, which I think means I am missing some input to that AVCodecContext struct...
[15:38:44 CEST] <tuna> So my question is what else am i missing when setting up the encoder AVCodecContext
[15:39:13 CEST] <tuna> FWIW this worked just fine when I set up with nvenc before I tried to use hardware input
[15:39:34 CEST] <tuna> aka when hw_device_ctx was a nullptr
[15:45:52 CEST] <jkqxz> tuna: AVCodecContext.hw_device_ctx needs to be a pointer to an AVBufferRef containing an AVHWDeviceContext.
[15:46:01 CEST] <jkqxz> The AVCudaDeviceContext you want to fill is hwctx in that AVHWDeviceContext, which is allocated by av_hwdevice_ctx_alloc().
[15:47:56 CEST] <tuna> I figured what I was doing was incorrect...: c->hw_device_ctx = (AVBufferRef*)avCudaDeviceCtx;
[15:56:27 CEST] <tuna> jkqxz: so is AVBufferRef.buffer of type AvCudaDeviceCtx after the alloc is called?
[15:58:10 CEST] <jkqxz> No.
[15:58:15 CEST] <jkqxz> The sequence you want is:
[15:58:33 CEST] <jkqxz> - Make a new AVHWDeviceContext with av_hwdevice_ctx_alloc().
[15:59:26 CEST] <jkqxz> - The hwctx field of that is an AVCudaDeviceContext, fill it with your Cuda setup.
[16:00:18 CEST] <jkqxz> - Call av_hwdevice_ctx_init() to finalise the device.
[16:00:40 CEST] <jkqxz> - Place the reference to the device in AVCodecContext.hw_device_ctx.
[16:01:47 CEST] <jkqxz> If you want to then use AV_PIX_FMT_CUDA frames on input to something you want the same sequence to make an AVHWFramesContext and place it in AVCodecContext.hw_frames_ctx.
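A minimal C sketch of the sequence jkqxz describes (type and field names from libavutil/hwcontext.h and hwcontext_cuda.h; error checks omitted, and my_cuda_context / codec_ctx stand in for your own CUcontext and AVCodecContext):

    #include <libavutil/hwcontext.h>
    #include <libavutil/hwcontext_cuda.h>

    AVBufferRef *device_ref = av_hwdevice_ctx_alloc(AV_HWDEVICE_TYPE_CUDA);
    AVHWDeviceContext *device_ctx = (AVHWDeviceContext *)device_ref->data;
    AVCUDADeviceContext *cuda_hwctx = (AVCUDADeviceContext *)device_ctx->hwctx;
    cuda_hwctx->cuda_ctx = my_cuda_context;               /* your existing CUcontext */
    av_hwdevice_ctx_init(device_ref);                     /* finalise the device */
    codec_ctx->hw_device_ctx = av_buffer_ref(device_ref); /* the codec keeps its own reference */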
[16:04:05 CEST] <tuna> AVBufferRef is a AVHWDeviceContext?
[16:04:45 CEST] <tuna> jkqxz: since av_hwdevice_ctx_alloc returns a pointer to AVBufferRef can i just cast that pointer to AVHWDeviceContext?
[16:06:50 CEST] <atomnuker> no, you need to do (AVHWDeviceContext*)AVBufferRef->data
[16:07:41 CEST] <tuna> gotcha
[16:28:13 CEST] <tuna> What is the point/use of AV_PIX_FMT_CUDA? Is that to say hey im using a cuda buffer??
[16:28:28 CEST] <atomnuker> yes
[16:28:57 CEST] <atomnuker> the sw_format of the avhwframescontext determines what type of data you have in your hardware frames
[16:29:58 CEST] <tuna> I just call cudaGraphicsGLRegisterImage with my opengl texture, will that mean I need to use AV_PIX_FMT_CUDA or some other FMT?
[16:30:40 CEST] <atomnuker> pix_fmt needs to be set to cuda
[16:30:45 CEST] <tuna> Ok
[16:30:59 CEST] <atomnuker> hwfc->sw_format needs to be set to whatever's in your image (rgb/yuv/whatever)
[16:32:45 CEST] <tuna> im sorry, hwfc?
[16:49:15 CEST] <atomnuker> avhwframescontext
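For reference, a hedged sketch of that frames-context setup, continuing the device_ref from the sketch above (field names from libavutil/hwcontext.h; the sw_format and dimensions are placeholder assumptions):

    AVBufferRef *frames_ref = av_hwframe_ctx_alloc(device_ref);
    AVHWFramesContext *frames_ctx = (AVHWFramesContext *)frames_ref->data;
    frames_ctx->format    = AV_PIX_FMT_CUDA; /* the frames live in GPU memory */
    frames_ctx->sw_format = AV_PIX_FMT_NV12; /* what the data inside them looks like */
    frames_ctx->width     = 1920;
    frames_ctx->height    = 1080;
    av_hwframe_ctx_init(frames_ref);
    codec_ctx->hw_frames_ctx = av_buffer_ref(frames_ref);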
[17:14:21 CEST] <BtbN> I doubt you can pass a registered OpenGL image to nvenc
[17:14:31 CEST] <BtbN> It won't be in any known sw_format
[17:34:02 CEST] <tuna> does sws_scale have any idea of hardware use?
[17:34:25 CEST] <tuna> Can I use it to convert a hardware based frame to another hardware based frame before encoding?
[17:34:36 CEST] <tuna> convert rgb to yuv that is
[17:35:07 CEST] <tuna> Is it as simple as just adding the hwframecontext to each frame and just using it as normal?
[17:38:37 CEST] <atomnuker> no, swscale is software scale only like the name implies, you can use vf_scale_cuda maybe, not sure what it supports
[17:46:47 CEST] <gasdg> who is?
[17:48:06 CEST] <gasdg> When ffprobe analyzes a media file or stream, does it do it by parsing metadata or does it actually analyze data packets as they are received?
[17:48:32 CEST] <furq> depends
[17:48:44 CEST] <furq> show_packets and show_frames read the whole stream
[17:49:59 CEST] <gasdg> so running ffprobe with no flags reads just the metadata?
[17:50:24 CEST] <furq> something like that
[17:51:40 CEST] <gasdg> does anyone know the reason why when running ffprobe and trying to send the results to a .txt file, the file created is blank?
[17:51:50 CEST] <furq> because it outputs on stderr
[17:52:11 CEST] <DHE> it does decode a little bit of the opening of the file to get specs like width/height, number of audio channels, etc.
[17:52:46 CEST] <furq> it definitely looks at stream headers, yeah
[17:52:54 CEST] <gasdg> got it... that explains why I had to use show_frames to get what I wanted.
[17:52:57 CEST] <furq> idk if it reads beyond that or which codecs/containers it needs to do it for
[17:53:42 CEST] <furq> gasdg: if you need accurate duration or something then look at -count_frames and -show_streams
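For example (a sketch: with a -show_* option the structured output goes to stdout, so a plain redirect captures it, and -count_frames makes ffprobe decode the stream and report nb_read_frames per stream):

    ffprobe -v error -count_frames -select_streams v:0 -show_streams input.mkv > out.txt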
[17:54:49 CEST] <gasdg> Thank you for the info. Really appreciate all of your help, furq and DHE!! You rock!!
[18:34:32 CEST] <shtomik> Hi guys :) I'm trying to understand why I have a problem with the AVFoundation audio device. It all works great, but sometimes I get an error from avformat_open_input(): 'avfoundation audio format is not supported'... but the next time it works...
[18:34:41 CEST] <shtomik> Is it bug?
[18:34:50 CEST] <shtomik> Thanks to all :)
[18:35:52 CEST] <shtomik> Oh... yes, for the same device every time...
[18:42:23 CEST] <tuna> It seems nvenc supports rgb input for encoding...does ffmpeg also support rgb input when doing hardware h264_nvenc encoding?
[18:44:17 CEST] <BtbN> It doesn't care when you do that.
[18:44:23 CEST] <BtbN> It's up to nvenc
[19:20:21 CEST] <tuna> ret = av_frame_get_buffer(rgb_frame, 32); do I need to do this when using hardware input with nvenc?
[19:21:58 CEST] <BtbN> If you don't allocate them yourself
[19:22:15 CEST] <BtbN> if you want to try and pass your mapped frames, you surely won't
[19:26:03 CEST] <tuna> I have a ogl buffer that I have registered as a cuda resource...that is what I am "passing" in to ffmpeg such that nvenc can operate on it
[19:26:20 CEST] <tuna> So you're saying with that setup i should call that allocate function?
[19:43:59 CEST] <BtbN> no, I said the exact opposite...
[20:01:40 CEST] <spicypixel> is there a noticeable performance hit when you enable -count_frames on ffprobe lookups?
[20:04:31 CEST] <furq> yes
[20:04:50 CEST] <furq> it decodes the entire file
[20:06:10 CEST] <spicypixel> figured as much as it'd need to
[20:08:26 CEST] <tuna> Ok, I have it all wired up and running, but on the client side I only see a black screen....I know that doesn't give much to go off of....but when I call avcodec_receive_packet() it works and I get packets with a data field that has a pointer and a size of 60
[20:08:41 CEST] <tuna> so nvenc must be doing something, possibly an input error?
[20:09:37 CEST] <tuna> I've got my hwdevicecontext and hwframecontext both being initialized with data....
[20:10:17 CEST] <tuna> avcodecopen2 works just fine
[20:11:22 CEST] <tuna> If there are errors in nvenc, how would I know??
[20:48:25 CEST] <tuna> So I am getting an error it turns out...its on my av_hwframe_ctx_init(frameAvBuffer) call...it says: Pixel format 'rgb24' is not supported...
[20:48:40 CEST] <tuna> but from what I can tell that is the format of the OGL buffer I am using....
[20:48:53 CEST] <tuna> glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, data);
[20:49:00 CEST] <tuna> GL_RGB ^^
[20:58:02 CEST] <tuna> Looks like ffmpeg's implementation of nvenc encoder only supports YUV type....that sucks
[20:59:43 CEST] <JEEB> tuna: the library doesn't support RGB, it is always converted to YCbCr for encoding
[20:59:57 CEST] <JEEB> if you are dealing with GPU surfaces then you can do the conversion in shaders
[21:00:15 CEST] <JEEB> if you are dealing with hw decoded video then you should be able to get YCbCr surfaces from it
[21:00:30 CEST] <tuna> This is on the encode side
[21:00:41 CEST] <JEEB> yes, I know
[21:00:50 CEST] <JEEB> the library doesn't support RGB encoding
[21:01:11 CEST] <tuna> Ok, thanks...Will have to convert to YUV then pass in, I guess
[21:02:00 CEST] <BtbN> the ffmpeg nvenc wrapper supports RGB input just fine
[21:02:09 CEST] <BtbN> it accepts 0RGB32 and 0BGR32
[21:02:50 CEST] <BtbN> as those map natively to what nvenc supports
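(In pixfmt.h terms those are AV_PIX_FMT_0RGB32 and AV_PIX_FMT_0BGR32, the endian-agnostic aliases for the packed 32-bit RGB formats with one unused byte.)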
[21:06:50 CEST] <JEEB> right
[21:07:01 CEST] <JEEB> but of course internally it converts to YCbCr
[21:07:45 CEST] <BtbN> it explicitly supports RGB input for stuff like direct GL/D3D surface input
[21:12:38 CEST] <JEEB> yea, it makes sense since otherwise every API client would have to do the shadering
[21:39:30 CEST] <tuna> Well, then why when I run av_hwframe_ctx_init(m_avBufferRefFrame); with a frame context.sw_format of AV_PIX_FMT_RGB24 does it say unsupported pixel format???
[21:39:50 CEST] <tuna> Seems the source code only supports: AV_PIX_FMT_NV12, AV_PIX_FMT_YUV420P, AV_PIX_FMT_YUV444P, AV_PIX_FMT_P010, AV_PIX_FMT_P016, AV_PIX_FMT_YUV444P16,
[21:39:50 CEST] <JEEB> that's not 0RGB32
[21:40:00 CEST] <JEEB> or 0BGR32
[21:41:02 CEST] <BtbN> because the hwframes ctx does not support RGB/BGR
[21:41:18 CEST] <BtbN> If you want to supply your own mapped frames, you don't need or even want one anyway
[21:41:31 CEST] <BtbN> the point of a hwframes_ctx is to make allocating them easier
[21:42:06 CEST] <tuna> Ok i must be confused on how this all works together then.....
[21:43:45 CEST] <BtbN> using an hw_frames_ctx is optional and purely for convenience
[21:43:51 CEST] <BtbN> in your case, there is no point in using one
[21:44:05 CEST] <tuna> https://pastebin.com/XEwu3Wsv
[21:44:23 CEST] <tuna> There is what I have, if you could look at it please and let me know if anything looks off
[21:44:34 CEST] <tuna> This is in a class, and all the variables are member variables
[21:45:23 CEST] <BtbN> you can just let the hwdevice_ctx create the cuda context for you
[21:45:36 CEST] <BtbN> that's the primary point of it
[21:45:44 CEST] <BtbN> and you don't need the hwframes stuff at all
[21:46:01 CEST] <tuna> Oh ok
[21:46:15 CEST] <BtbN> hwframes is for allocating hwframes
[21:46:25 CEST] <BtbN> because that can be quite a mess, it's abstracted for the generic case
[21:47:26 CEST] <tuna> Oh ok, that makes more sense
[21:50:18 CEST] <tuna> How do I pass my m_cudaGraphicsResource down into ffmpeg to point it at the frame?
[21:50:35 CEST] <tuna> Give it to my avframe ?
[21:50:59 CEST] <BtbN> I have no idea, never tried OpenGL stuff
[21:51:23 CEST] <tuna> well, that would be a cuda ptr...does that help at all?
[21:51:40 CEST] <BtbN> The CUDA pix_fmt behaves exactly like the sw pix_fmt it contains
[21:51:47 CEST] <BtbN> except that the data pointers are cuda pointers
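A hedged sketch of what that looks like in code (devptr and pitch are assumed to come from your own CUDA allocation or mapping; whether nvenc accepts such hand-built frames without a frames context is exactly what is being worked out above):

    AVFrame *frame = av_frame_alloc();
    frame->format      = AV_PIX_FMT_CUDA;
    frame->width       = width;
    frame->height      = height;
    frame->data[0]     = (uint8_t *)(uintptr_t)devptr; /* a CUdeviceptr, not host memory */
    frame->linesize[0] = pitch;                        /* row pitch of the device allocation, in bytes */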
[22:08:59 CEST] <tuna> BtbN: The av_hwdevice_ctx_alloc only seems to alloc the space for the cuda ctx, but it remains null unless I run my code that fills it with the cuCtxCreate call
[22:09:13 CEST] <BtbN> that's what the init function is for.
[22:12:25 CEST] <tuna> ret = av_hwdevice_ctx_init(m_avBufferRefDevice);.....still doesn't do it????
[22:13:38 CEST] <tuna> cuda_ctx remains null after init is called
[22:15:37 CEST] <BtbN> oh, it's actually in create
[22:15:51 CEST] <BtbN> it's alloc, then init, then create
[22:16:21 CEST] <User____> Hello! I'm trying to concat 2 videos, but due to some timestamp problems, even with "-fflags +igndts" I'm getting a result file that lasts for thousands of hours instead of 1 hour... Is it possible to merge them somehow without those dts and pts?
[22:16:33 CEST] <BtbN> or was it the other way
[22:16:34 CEST] <BtbN> hm
[22:16:47 CEST] <BtbN> http://git.videolan.org/?p=ffmpeg.git;a=blob;f=libavutil/hwcontext.h;h=f5a4b623877477c0d778bea1ab051be074fe09d5;hb=HEAD#l277
[22:17:30 CEST] <BtbN> yeah, it's alloc, init, then create
[22:18:36 CEST] <BtbN> but you don't need init if you use create, it calls it for you internally
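So, as a sketch of the one-call path that header documents (the device string "0" is an assumption meaning the first GPU):

    AVBufferRef *device_ref = NULL;
    /* allocates the context, creates a CUDA context on the given device and initialises it */
    int ret = av_hwdevice_ctx_create(&device_ref, AV_HWDEVICE_TYPE_CUDA, "0", NULL, 0);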
[22:21:49 CEST] <utack> hi. how does one find a suitable container for a given encoder?
[22:22:09 CEST] <utack> it is listed in the source files i think, but there must be a way to get it from the command line
[22:22:42 CEST] <BtbN> I don't think there is, no
[22:22:46 CEST] <kepstin> hmm, there's no real way to figure that out from ffmpeg.
[22:23:16 CEST] <kepstin> i mean, you can throw most things into mkv, so that's a reasonable default.
[22:24:06 CEST] <utack> which source is it in again, the compatibility list?
[22:24:38 CEST] <kepstin> there's no general compatibility list in the ffmpeg source
[22:25:25 CEST] <kepstin> the only way to generically know if you can put a certain codec in a certain container is to try it and see if it returns an error.
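One partial exception: libavformat does expose avformat_query_codec() for asking a muxer whether it can store a given codec ID, though how completely each muxer answers varies. A sketch:

    #include <libavformat/avformat.h>

    AVOutputFormat *ofmt = av_guess_format("matroska", NULL, NULL);
    /* returns 1 if the codec can be stored, 0 if not, negative if unknown */
    int ok = avformat_query_codec(ofmt, AV_CODEC_ID_H264, FF_COMPLIANCE_NORMAL);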
[22:27:03 CEST] <tuna> BtbN: so I just need to call create, instead of alloc and init, it seems?
[22:28:53 CEST] <BtbN> you will still need to allocate it
[22:31:12 CEST] <tuna> Ok, that seemed to create the cuda context this time
[22:32:46 CEST] <tuna> ret = avcodec_send_frame(c, rgb_frame);.....so in this line I pass in the rgb_frame....when I assign its data[0] to my cudaResource pointer the program just crashes...no warning or anything...MSVC doesn't even throw an exception...it just dies
[22:33:01 CEST] <tuna> if data[0] is NULL it's ok
[22:33:16 CEST] <BtbN> you need to fill at least data[0] to 2, maybe even 3
[22:33:24 CEST] <BtbN> not 100% sure how 0BGR and stuff works
[22:33:35 CEST] <BtbN> oh wait, nevermind
[22:33:39 CEST] <BtbN> it's not YUV with planes and stuff
[22:33:58 CEST] <tuna> Yea, the rgb I was using before was just at data[0] for the sw encoder
[22:34:23 CEST] <BtbN> well, ask a debugger about the crash and investigate what's wrong
[22:35:12 CEST] <tuna> rgb_frame->data[0] = (uint8_t*)m_cudaGraphicsResource; does this seem logical?
[22:35:29 CEST] <tuna> cudaGraphicsResource_t m_cudaGraphicsResource
[22:38:12 CEST] <BtbN> i have no idea what cudaGraphicsResource_t is
[22:38:18 CEST] <BtbN> it wants a devptr
[22:41:30 CEST] <tuna> Whats a devptr?
[22:41:47 CEST] <tuna> I got this error from ffmpeg when I ran it in the command line...Assertion abs(src_linesize) >= bytewidth failed at src/libavutil/imgutils.c:313
[22:41:52 CEST] <BtbN> CUdeviceptr
[22:42:05 CEST] <BtbN> yeah, you'll also need to set the linesize correctly
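That assertion is the constraint to satisfy: for a packed 4-bytes-per-pixel format, linesize[0] must cover the full row. A sketch, assuming pitched device memory:

    /* e.g. the pitch returned by cuMemAllocPitch(); may exceed width * 4 due to alignment */
    frame->linesize[0] = (int)pitch;
    /* bytewidth for a packed 32-bit format is width * 4 */
    assert(frame->linesize[0] >= frame->width * 4);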
[22:47:00 CEST] <exastiken> is there a way to re-compile ffmpeg with debug options turned on for both ffmpeg and x265
[22:47:14 CEST] <exastiken> compiling both from git sources
[22:48:03 CEST] <BtbN> define debug options
[00:00:00 CEST] --- Sat Apr 21 2018