[Ffmpeg-devel-irc] ffmpeg.log.20190929
burek
burek at teamnet.rs
Mon Sep 30 03:05:03 EEST 2019
[00:00:17 CEST] <JEEB> I don't think this was a regression ever :P
[00:00:22 CEST] <JEEB> it just never worked correctly
[00:00:25 CEST] <JEEB> (with gnutls)
[00:01:09 CEST] <JEEB> http://git.videolan.org/?p=ffmpeg.git;a=commit;h=bc1749c6e46099ec85110361dbe6f7994a63040d
[00:01:14 CEST] <JEEB> that's the relevant patch I think
[00:03:35 CEST] <JEEB> (because IIRC the definition of a regression is: it used to work, now it doesn't)
[00:05:41 CEST] <JEEB> anyways, I'm not sure if I 100% agree about regression fixes only because a lot of stuff just was incorrect to begin with
[00:05:57 CEST] <JEEB> and only started breaking when someone on the "other side" of a line started using something
[00:06:00 CEST] <JEEB> but I digress
[00:07:21 CEST] <Thomas_J> I now have another problem. I am taking an rtsp feed in from an IP camera as map0 and adding an external sampled stereo feed as map1 and outputting to facebook via rtmps. The stream plays on facebook but the audio starts failing after a few seconds and ffmpeg throws a bunch of info lines, "[flv @ 0x55a96c1ee0] Delay between the first packet and last packet in the muxing queue is 10006000 > 10000000: forcing output".
[00:08:07 CEST] <Thomas_J> When I write to a mp4 file instead, it runs smoothly.
[00:14:51 CEST] <Thomas_J> Could this be caused by a buffer overrun?
[05:23:32 CEST] <KodiakIT[m]> Not strictly ffmpeg related (outside of it being used on the back end I 'spose), but as #handbrake has only ~50 people I figure it can't hurt to ask here: any ideas what's going on with the banding in the re-encoded video here? https://imgur.com/zrgxzUk
[08:59:53 CEST] <cards> does #ffmpeg have the capacity to validate media, i.e., detect if a file is truncated, detect if all the frames are intact, detect CRC and other parity corruption, whether the headers represent the content, etc etc
[09:00:20 CEST] <cards> s/#//
[11:15:01 CEST] <orue> Hi. I am trying to encode a video file from ffv1 to h264 and I noticed a slight yellow/greenish color shift in the video output file. I assume that it is due to a colorspace shift? Is there a way to compensate for such shifts reliably?
[11:15:32 CEST] <orue> Ah forgot. I am trying to use cuda/nvenc https://pastebin.com/qQxak7G8
[11:45:35 CEST] <snooky> morning
[19:35:57 CEST] <RazWelles> Hey, anybody know how to get debug information out of avcodec_encode_video2? It's silently crashing on me
[19:36:46 CEST] <RazWelles> avcodec_encode_video2( output_context->streams[0]->codec, packet, picture, &got_packet);
[19:38:52 CEST] <JEEB> don't use an AVCodecContext from lavf contexts :P
[19:40:35 CEST] <JEEB> also I think by now for 3-4 years we've had a new API for decoding and encoding that decouples feeding and receiving
[19:40:43 CEST] <JEEB> https://www.ffmpeg.org/doxygen/trunk/group__lavc__encdec.html
[19:40:46 CEST] <JEEB> recommended reading
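For reference, a minimal sketch of the decoupled feed/receive loop that documentation describes, assuming enc_ctx is an already opened encoder AVCodecContext and frame is a filled AVFrame (pass NULL to flush at the end); write_packet here is only a placeholder callback for whatever consumes the output, not an FFmpeg function:

    #include <libavcodec/avcodec.h>

    /* Feed one frame to the encoder and drain every packet it has ready.
     * Returns 0 on success or a negative AVERROR code. */
    static int encode_frame(AVCodecContext *enc_ctx, AVFrame *frame,
                            int (*write_packet)(AVPacket *pkt, void *opaque),
                            void *opaque)
    {
        AVPacket *pkt = av_packet_alloc();      /* the packet must be allocated before use */
        int ret;

        if (!pkt)
            return AVERROR(ENOMEM);

        ret = avcodec_send_frame(enc_ctx, frame);   /* NULL frame == start flushing */
        if (ret < 0)
            goto end;

        while (ret >= 0) {
            ret = avcodec_receive_packet(enc_ctx, pkt);
            if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
                ret = 0;                        /* no more output right now; not an error */
                break;
            } else if (ret < 0) {
                break;                          /* real encoding error */
            }
            ret = write_packet(pkt, opaque);    /* e.g. hand it to the muxer */
            av_packet_unref(pkt);
        }

    end:
        av_packet_free(&pkt);
        return ret;
    }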
[19:42:15 CEST] <RazWelles> Thank you.. x_x I'm seeing "deprecated" everywhere and I'm having a hard time finding up-to-date information
[19:43:28 CEST] <JEEB> you can find an AVCodec * to create a codec context by either using a name (to get a specific encoder or decoder), or by using a codec id
[19:44:36 CEST] <JEEB> then you allocate the codec context with f.ex. avcodec_alloc_context3, and set the basic options depending on if it's video or audio
[19:45:17 CEST] <RazWelles> Oh I think I found something I did wrong there then, I'm using avformat_alloc_output_context2
[19:45:32 CEST] <JEEB> that's for avformat
[19:45:53 CEST] <RazWelles> ohh ok
[19:46:51 CEST] <JEEB> it really depends on what encoder you're using exactly, but generally width/height/sample_aspect_ratio/pix_fmt and time_base seem to be a minimum set of things to set for video, for example
[19:47:16 CEST] <JEEB> I usually create the encoder after the first AVFrame comes out of the decoder, because that way I don't need to guess things vOv
[19:48:07 CEST] <RazWelles> What I have to do here is convert opencv mat's to avframes, so I'm filling the buffer manually
[19:48:36 CEST] <RazWelles> I'm using the h.264 encoder
[19:49:07 CEST] <pink_mist> which h.264 encodier?
[19:49:11 CEST] <pink_mist> *encoder
[19:49:15 CEST] <RazWelles> AV_CODEC_ID_H264
[19:49:38 CEST] <JEEB> that just picks *some*
[19:49:45 CEST] <RazWelles> oof
[19:49:55 CEST] <RazWelles> I think it's using x264
[19:50:05 CEST] <JEEB> that's why the by name functions that give you an AVCodec are nice
[19:50:13 CEST] <JEEB> you can specifically say "libx264" in that case
[19:52:03 CEST] <RazWelles> I think I might have assembled a bad mental framework for how this stuff fits together. Is there an up-to-date resource I can read? I really appreciate the help here and don't want to inundate you with dumb questions
[19:52:04 CEST] <JEEB> but yea, get an encoder AVCodec, create a context for it, set basic values according to what you're going to be feeding to it, open the encoder and you can start using the stuff noted in the documentation link I posted
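A rough illustration of that sequence with libx264; a sketch only, where the concrete values (1:1 SAR, yuv420p, 1/25 time base) are example parameters and would be replaced by whatever matches the frames being fed in:

    #include <libavcodec/avcodec.h>

    /* Get a specific encoder by name, allocate its context, set the basics
     * and open it.  The values below are example parameters only. */
    static AVCodecContext *open_video_encoder(int width, int height)
    {
        const AVCodec *codec = avcodec_find_encoder_by_name("libx264");
        if (!codec)
            return NULL;                        /* libx264 not compiled in */

        AVCodecContext *enc_ctx = avcodec_alloc_context3(codec);
        if (!enc_ctx)
            return NULL;

        /* the minimum set of fields mentioned above for a video encoder */
        enc_ctx->width               = width;
        enc_ctx->height              = height;
        enc_ctx->sample_aspect_ratio = (AVRational){1, 1};
        enc_ctx->pix_fmt             = AV_PIX_FMT_YUV420P;
        enc_ctx->time_base           = (AVRational){1, 25};    /* 25 fps, example only */

        if (avcodec_open2(enc_ctx, codec, NULL) < 0) {
            avcodec_free_context(&enc_ctx);
            return NULL;
        }
        return enc_ctx;
    }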
[19:52:27 CEST] <JEEB> RazWelles: there are examples of varying quality under doc/examples
[19:52:48 CEST] <RazWelles> How does that fit together with AvFormatContext? I think I'm using that to write to a file
[19:53:08 CEST] <JEEB> avformat aka lavf is used for I/O and container level stuff
[19:53:19 CEST] <JEEB> so avcodec handles the decoding to and from AVFrames
[19:53:42 CEST] <JEEB> and then lavf either gives you AVPackets (reading) or eats your AVPackets (writing)
[19:54:02 CEST] <JEEB> so you f.ex. can open an avformat context for mp4
[19:54:12 CEST] <JEEB> then you add the stream(s) that you need
[19:54:45 CEST] <JEEB> call the write_header() function, and then push AVPackets received from an encoder to the avformat context
[19:54:55 CEST] <JEEB> and when you're done, call the write_footer() function
[19:54:59 CEST] <RazWelles> and I do that through receive_packet?
[19:55:11 CEST] <JEEB> that gives you the AVPacket from the encoder
[19:55:24 CEST] <JEEB> you can then take that into avformat
[19:56:11 CEST] <JEEB> 1) set the AVStream index to the AVPacket you received 2) av_interleaved_write_frame
[19:56:32 CEST] <JEEB> and yes, lavf uses _frame() in the functions
[19:56:36 CEST] <JEEB> I am really sorry for that
[19:56:40 CEST] <JEEB> it deals with AVPackets
[19:57:09 CEST] <JEEB> so `av_read_frame` and `av_interleaved_write_frame`
[19:57:14 CEST] <JEEB> actually read and write AVPackets
[19:57:23 CEST] <JEEB> once again, sorry
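Put together, the container side described above looks roughly like this; a sketch only, assuming a single video stream whose parameters come from the opened encoder context enc_ctx, with out-of-the-ordinary error handling omitted and an example filename:

    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>

    /* Open an mp4 muxer with one stream described by the encoder context. */
    static AVFormatContext *open_muxer(AVCodecContext *enc_ctx, const char *filename)
    {
        AVFormatContext *ofmt = NULL;
        if (avformat_alloc_output_context2(&ofmt, NULL, NULL, filename) < 0)
            return NULL;                                /* container guessed from the name */

        AVStream *st = avformat_new_stream(ofmt, NULL);     /* add the stream */
        if (!st)
            goto fail;
        avcodec_parameters_from_context(st->codecpar, enc_ctx);    /* fills st->codecpar */
        st->time_base = enc_ctx->time_base;

        if (avio_open(&ofmt->pb, filename, AVIO_FLAG_WRITE) < 0 ||  /* mp4 needs a real file */
            avformat_write_header(ofmt, NULL) < 0)                  /* the write_header() step */
            goto fail;
        return ofmt;

    fail:
        if (ofmt->pb)
            avio_closep(&ofmt->pb);
        avformat_free_context(ofmt);
        return NULL;
    }

    /* For each AVPacket received from the encoder: */
    static int mux_packet(AVFormatContext *ofmt, AVStream *st,
                          AVCodecContext *enc_ctx, AVPacket *pkt)
    {
        av_packet_rescale_ts(pkt, enc_ctx->time_base, st->time_base);  /* encoder tb -> stream tb */
        pkt->stream_index = st->index;              /* 1) set the stream index on the packet */
        return av_interleaved_write_frame(ofmt, pkt);   /* 2) write; yes, it eats AVPackets */
    }

    /* When everything has been written, the "footer" step is av_write_trailer(). */
    static void close_muxer(AVFormatContext *ofmt)
    {
        av_write_trailer(ofmt);
        avio_closep(&ofmt->pb);
        avformat_free_context(ofmt);
    }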
[19:58:29 CEST] <RazWelles> Not at all, I really appreciate this, was having a hard time piecing this together until now, thank you so much :)
[20:39:24 CEST] <RazWelles> Are there any examples out there on using codecpar?
[20:39:29 CEST] <RazWelles> Or is it just a warning I can ignore for now?
[20:39:49 CEST] <JEEB> codecpar is utilized for accessing parameters from lavf contexts
[20:39:54 CEST] <durandal_1707> RazWelles: see doc/examples/
[20:40:09 CEST] <JEEB> I'm not sure where you're touching a codecpar if you're not reading nor decoding stuff?
[20:41:02 CEST] <RazWelles> I was setting parameters on the avcodeccontext, param by param
[20:41:06 CEST] Action: RazWelles takes a quick look under doc
[20:55:44 CEST] <JEEB> RazWelles: with an encoder setting values to avcodeccontext is normal
[20:55:52 CEST] <JEEB> not sure why you were setting them to the avcodecpar?
[21:16:45 CEST] <RazWelles> So I seem to be able to make it to avcodec_send_frame, I created and set an avcodeccontext and I set the picture format to yuv420, but avcodec_receive_packet is where it crashes now
[21:17:14 CEST] <RazWelles> I think I do an avcodec_send_frame and avcodec_receive_packet one after the other, correct? Using the same context?
[21:18:52 CEST] <JEEB> RazWelles: since you're making the AVFrame, have you made sure your buffers are aligned and all?
[21:19:01 CEST] <JEEB> and that values such as line-to-line stride are correct
[21:20:21 CEST] <RazWelles> JEEB, I think I did, I'll double check though. By alignment do you mean setting the YUV planes? I might be using an old example where I'm setting and defining the buffer pointers myself via malloc; I couldn't get it working via get_buffer
[21:20:50 CEST] <JEEB> you can get a pre-allocated AVFrame with specific pix_fmt, width and height IIRC
[21:22:17 CEST] <RazWelles> Do I write to the buffer like picture->data[0][y*linesize[0]+x]?
[21:22:57 CEST] <RazWelles> I'll try playing with get_buffer again
[21:23:37 CEST] <durandal_1707> yes, but pixel format is important
[21:24:00 CEST] <durandal_1707> if you use valgrind you can see if there are overreads/overwrites
[21:24:03 CEST] <JEEB> https://ffmpeg.org/doxygen/trunk/group__lavu__frame.html#ga6b1acbfa82c79bf7fd78d868572f0ceb
[21:24:08 CEST] <JEEB> this was it
[21:24:12 CEST] <JEEB> av_frame_get_buffer
[21:24:22 CEST] <JEEB> you feed it an AVFrame with the pix_fmt, width and height defined
[21:25:57 CEST] <RazWelles> Ooh, I'll try setting it and accessing it again, I think I didn't have those defined before
[21:26:19 CEST] <JEEB> that basically allocates you buffers with the required alignment
[21:26:26 CEST] <JEEB> so you only need to copy over the line data
[21:31:33 CEST] <RazWelles> My code is kind of blinding but is it alright if I post some of my code in a pastebin here?
[21:31:45 CEST] <RazWelles> I set the get buffer alignment param to 0 because it says it does it automatically there
[21:32:41 CEST] <RazWelles> https://pastebin.com/MWXWyBsZ
[21:37:32 CEST] <durandal_1707> RazWelles: why do you change the picture->data[] pointers and linesize[]?
[21:39:26 CEST] <RazWelles> The original code for that example set the data[0] buffer to point to the malloc'ed memory block. The linesizes 1, 2 and 3 are different because YUV is Y at full frame size, U and V at half frame size
[21:40:05 CEST] <RazWelles> Since I encode as YUV420p
[21:41:43 CEST] <durandal_1707> RazWelles: remove those lines, they are incorrect
[21:42:38 CEST] <JEEB> yea you don't need to set those, those are set by av_frame_get_buffer
[21:42:50 CEST] <RazWelles> oh really? Does that account for the YUV format too?
[21:42:59 CEST] <JEEB> you set the pix_fmt, width and height
[21:43:13 CEST] <JEEB> that is enough for av_frame_get_buffer to just get you a buffer
[21:43:16 CEST] <RazWelles> ooh ok
[21:43:18 CEST] <JEEB> or buffers
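A small sketch of that allocation path, assuming yuv420p; the grey fill below is only a stand-in for copying rows from an external source (an OpenCV Mat, for instance), the point being that rows are stepped by linesize[] rather than by width:

    #include <string.h>
    #include <libavutil/frame.h>
    #include <libavutil/pixfmt.h>

    /* Allocate a yuv420p AVFrame.  av_frame_get_buffer() fills data[] and
     * linesize[] itself with properly aligned buffers, so neither should be
     * set by hand. */
    static AVFrame *alloc_yuv420p_frame(int width, int height)
    {
        AVFrame *frame = av_frame_alloc();
        if (!frame)
            return NULL;

        frame->format = AV_PIX_FMT_YUV420P;
        frame->width  = width;
        frame->height = height;

        if (av_frame_get_buffer(frame, 0) < 0) {    /* 0 = choose alignment automatically */
            av_frame_free(&frame);
            return NULL;
        }
        return frame;
    }

    /* Fill the planes with mid-grey.  Rows are stepped by linesize[], which
     * can be wider than the visible width because of padding; copying from an
     * external buffer works the same way, one row of width (or width/2 for
     * U and V) bytes at a time. */
    static void fill_grey(AVFrame *frame)
    {
        av_frame_make_writable(frame);              /* in case the buffers are shared */

        for (int y = 0; y < frame->height; y++)             /* Y plane: full resolution */
            memset(frame->data[0] + y * frame->linesize[0], 0x80, frame->width);

        for (int y = 0; y < frame->height / 2; y++) {       /* U and V planes: half resolution */
            memset(frame->data[1] + y * frame->linesize[1], 0x80, frame->width / 2);
            memset(frame->data[2] + y * frame->linesize[2], 0x80, frame->width / 2);
        }
    }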
[21:43:55 CEST] <RazWelles> Hm.. still crashed for me
[21:46:38 CEST] <durandal_1707> RazWelles: where does it crash now?
[21:48:54 CEST] <RazWelles> Still at the same place, avcodec_receive_packet
[21:49:16 CEST] <RazWelles> It does manage to send the frame though
[21:49:53 CEST] <durandal_1707> what kind of crash?
[21:52:09 CEST] <RazWelles> Oddly enough it's just silent
[21:52:19 CEST] <RazWelles> I have the log level set to debug too
[21:52:32 CEST] <RazWelles> I check for the return value but it never makes it to that
[21:54:08 CEST] <durandal_1707> how do you fill the frame data[]?
[21:55:38 CEST] <RazWelles> Lines 36-52, I adapted it (maybe badly?) from the example code in ffmpeg's examples
[21:56:04 CEST] <RazWelles> I wonder if I should try just not filling it and seeing if I can get blank packets through
[21:56:37 CEST] <gunchyo> Hi, I cannot finish compiling ffmpeg from latest git https://paste.debian.net/plainh/e4af44b9
[21:56:48 CEST] <gunchyo> I have no libav* packages installed from Debian's repo to avoid conflicts
[22:11:03 CEST] <RazWelles> I got it to stop crashing at avcodec_receive_packet durandal_1707, JEEB :o .. I had to allocate the packet pointer
[22:11:10 CEST] Action: RazWelles does walk of shame
[22:19:42 CEST] <ztube> Hey, is anyone here experienced with converting VOB to mkv? I have some issues regarding the metadata, e.g. the individual audio tracks and subtitles aren't assigned any language. I know how to assign the metadata manually but I would like to take this data from my input file
[23:08:22 CEST] <kadiro> Hello, can we replace an embedded subtitle inside an mkv, or must we copy the video/audio tracks excluding the subtitle and include a new subtitle at the end?
[23:08:54 CEST] <kadiro> I mean in the output
[23:09:37 CEST] <pink_mist> I don't really understand what you think the difference between those two propositions is
[23:09:44 CEST] <pink_mist> they sound like the same thing to my ears
[23:11:56 CEST] <cehoyos> kadiro: Please provide ffmpeg -i output
[23:12:03 CEST] <kadiro> pink_mist> let's say I have a file called something.mkv that has video, audio and a subtitle, and I want to remove or replace that subtitle. Do I need to copy only the video and audio to another output and work on that, or can ffmpeg do it on the fly without taking up a lot of space? I don't have the space for another mkv file
[23:12:11 CEST] <kadiro> cehoyos> ok
[23:12:25 CEST] <pink_mist> kadiro: ffmpeg can do it in one operation
[23:12:45 CEST] <pink_mist> kadiro: this operation will require you to copy it
[23:13:16 CEST] <pink_mist> kadiro: you can't have ffmpeg output to the same file as your input, that will ruin everything
[23:13:44 CEST] <kadiro> oh
[23:17:32 CEST] <kadiro> cehoyos> sorry to be late, the command line didn't work so I did it manually: https://paste.ubuntu.com/p/dTDMm9zJQh/
[23:18:24 CEST] <kadiro> pink_mist> you mean it has to be something like: ffmpeg myfile <some_arguments> outputfile ?
[23:18:54 CEST] <pink_mist> -i myfile, but yes
[23:18:55 CEST] <kadiro> :(
[23:19:11 CEST] <kadiro> pink_mist> can I just replace the subtitle by overriding it?
[23:19:33 CEST] <pink_mist> whatever media player you're playing it with can possibly do that
[23:19:43 CEST] <pink_mist> I'm pretty sure mpv for instance has that ability
[23:19:55 CEST] <pink_mist> I don't really use other media players, so I can't say much about others
[23:20:00 CEST] <kadiro> pink_mist> I manage to play it on a TV
[23:20:44 CEST] <pink_mist> you'll need to ask your tv manufacturer then I guess
[23:21:04 CEST] <kadiro> ok thank you pink_mist
[23:35:52 CEST] <kadiro> pink_mist> If I do: ffmpeg -i input.mkv -srt input.srt -c:v copy -c:a copy output.mkv ... will it override the subtitle or add it?
[23:36:41 CEST] <pink_mist> I'm the wrong person to ask, I don't honestly know ... my guess would be that should override the subtitle
[23:36:49 CEST] <pink_mist> but you may need some -map invocation too
[23:37:36 CEST] <kadiro> ah ok, isn't -map used when we have multiple tracks?
[23:38:40 CEST] <pink_mist> like I said: I don't know
[23:38:52 CEST] <pink_mist> I would suggest you try it and see
[23:39:04 CEST] <kadiro> ok sorry, thank you
[23:41:12 CEST] <RazWelles> Any particular reason why avformat_alloc_output_context2 would fail to open a file?
[23:51:43 CEST] <ztube> maybe something like -map 0:s -map 1:s might add the subtitles
[23:55:01 CEST] <kadiro> with this command: ffmpeg -i input.mkv -f ass -i input.ass -c:v copy -c:a copy test.mkv ==> ffmpeg works but the old subtitle is still there; with this: ffmpeg -i input.mkv -f ass -i input.ass -c:v copy -c:a copy -c:s mov_text test.mkv ==> ffmpeg says: [matroska @ 0x55d50c938360] Subtitle codec 94213 is not supported. av_interleaved_write_frame(): Function not implemented. Error writing trailer of test.mkv: Function not implemented
[23:55:48 CEST] <kadiro> ztube> will try that
[23:56:25 CEST] <ztube> so if you want to replace the subtitle instead of adding it, I would use
[23:57:22 CEST] <ztube> ffmpeg -i input.mkv -i input.ass -c:v copy -c:a copy -c:s copy -map 0:v -map 0:a -map 1:s test.mkv
[00:00:00 CEST] --- Mon Sep 30 2019