[Ffmpeg-devel-irc] ffmpeg.log.20190205
burek
burek021 at gmail.com
Wed Feb 6 03:05:01 EET 2019
[00:03:19 CET] <turnage> I am using libavcodec to encode and would like to know the size of buffer I ought to prepare for a receive_packet call. In the examples they use constants and I do not know how they are chosen. Can libavcodec provide me a reasonable hint, or does the buffer size I provide inform libavcodec on how to size its packets?
[00:26:11 CET] <DHE> the AVPacket will be prepared by the codec. You can just do "AVPacket *pkt = av_packet_alloc(); int code = avcodec_receive_packet(avc_context, pkt);" and go with it
[00:26:24 CET] <DHE> just be sure to free it properly afterwards (where applicable)
[00:33:58 CET] <turnage> I want to prepare the buffers myself; I need to share them with another process and can't afford the copy.
[02:05:52 CET] <brimestone> Hey guys, anyone here knows of a streaming server that works with ffmpeg -re and has less than 2 second delay?
[02:15:42 CET] <DHE> rtmp is probably the best you can hope for, but encoder settings are important. not -tune zerolatency, but there are other options for x264
[09:34:35 CET] <botik101> hello, I have a question about using select between. My command lets me cut segments out of a video, but it does not cut the audio. Can someone please help
[09:35:45 CET] <botik101> I am trying: ffmpeg -i /tmp/inputvideo.mp4 -vf "select='between(n,1264, 4244) +between(n,20663, 20664)',setpts=N/FRAME_RATE/TB" /tmp/out.mp4
[09:35:58 CET] <botik101> it cuts video but audio remains the same
[09:38:07 CET] <friendofafriend> botik101: -vf is for "video filter".
[09:38:36 CET] <botik101> how do I tell it to cut both video and audio? If I replace -vf with -filter it tells me "cannot connect video filter to audio output"
[09:39:20 CET] <botik101> friendofafriend: yes, thank you! I noticed it. I tried using -filter but then it gives me "cannot connect video filter to audio output". How do I rip out both video and audio ? So confused
[09:40:34 CET] <friendofafriend> I believe that is the "aselect" filter.
[09:42:35 CET] <botik101> friendofafriend: so you are saying i should use -vf AND -af in the same string and repeat SELECT BETWEEN twice
[09:42:40 CET] <friendofafriend> There are some pretty good examples over here. https://superuser.com/questions/866144/cutting-videos-at-exact-frames-with-ffmpeg-select-filter/1223509
[09:43:50 CET] <botik101> friendofafriend: yes, thank you. I saw that. It still does not tell you how to cut both video and audio in one line
[09:43:54 CET] <friendofafriend> Also pretty good material here, specifying an -af and -vf. https://video.stackexchange.com/questions/17164/accurate-audio-selection
[09:46:57 CET] <botik101> friendofafriend: trying! :)
[09:48:48 CET] <friendofafriend> botik101: Good luck to you, botik101!
[09:58:53 CET] <botik101> ffmpeg -i /tmp/input.mp4 -vf "select='between(n,1264, 4244)+between(n,20663, 20664)',setpts=PTS-STARTPTS" -af "aselect='between(n,1264, 4244)+between(n,20663, 20664)',asetpts=PTS-STARTPTS" -y -strict -2 /tmp/out.mp4
[09:59:27 CET] <botik101> friendofafriend: nope....still the same problem
[10:15:04 CET] <botik101> almost
[10:23:09 CET] <botik101> guys, I still do not have audio being cut correctly in this case:
[10:23:50 CET] <botik101> ffmpeg -i /tmp/input.mp4 -vf "select='between(n,1264, 4244)+between(n,20663, 20664)',setpts=N/FRAME_RATE/TB" -af "aselect='between(n,1264, 4244)+between(n,20663, 20664)',asetpts=N/SR/TB" -y -strict -2 /tmp/out.mp4
[10:24:41 CET] <botik101> audio does not start anywhere near video frame and abruptly ends midway through the final video. what am I doing wrong
[10:26:05 CET] <durandal_1707> botik101: audio frames numbers are not same as video ones
[10:26:37 CET] <durandal_1707> you can not use n, use pts
[10:32:41 CET] <botik101> durandal_1707: thanks! trying it now!
[10:36:52 CET] <durandal_1707> you need to make sure pts timebase of audio and video is same
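Following durandal_1707's advice, the same cut can be expressed with t (the frame's pts in seconds) instead of frame number n, so select and aselect pick the same wall-clock span for both streams. A sketch, assuming 25 fps input purely for illustration (frames 1264-4244 would then be roughly 50.56s-169.76s; the times are placeholders):

```shell
# select/aselect on t (seconds) keeps audio and video cuts in agreement,
# since audio frame numbers never match video frame numbers.
ffmpeg -i /tmp/input.mp4 \
  -vf "select='between(t,50.56,169.76)',setpts=PTS-STARTPTS" \
  -af "aselect='between(t,50.56,169.76)',asetpts=PTS-STARTPTS" \
  -y /tmp/out.mp4
```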
[11:58:57 CET] <X-Kent> Hello, I cannot figure out how to correctly transcode segments of the HLS stream. I want to transcode each segment individually while preserving all the stream information. To eliminate possible codec problems I use "copy" for now. My ffmpeg command is "ffmpeg -i in_segment_i.ts -map 0:0 -map 0:1 -map 0:2 -c:v copy -c:a copy -f mpegts -copyts out_segment_i.ts". When I try to play the stream with ffmpeg I get "Continuity check failed for pid 17 expected 7 got 0". It doesn't happen in the source stream.
[12:00:14 CET] <X-Kent> I used the map to preserve stream 0 as it is a "timed_id3" stream and it gets stripped off if I don't use map.
[12:01:39 CET] <X-Kent> The stream is a live stream. The playback works in general but sometimes (like every minute) there is a slight freeze. I believe that's because I don't copy some metadata. Do I need to copy something other than what "copyts" does ?
[12:03:01 CET] <X-Kent> I tried setting the GOP to 25 (stream is 50fps), tried "-vsync 0". No effect.
[14:10:45 CET] <th3_v0ice> Is it possible to create fake input file using API? Like specify the number of streams and decoders and everything?
[14:13:47 CET] <JEEB> yes
[14:14:54 CET] <th3_v0ice> Stupid question. Sorry :)
[16:40:48 CET] <th3_v0ice> JEEB: In order to create this fake file, what do you think appropriate methods and order of calling them would be? I currently have avformat_alloc_context(), and then avformat_new_stream() and avcodec_parameters_copy(). Is anything else needed? avio_open or something?
[16:58:44 CET] <JEEB> th3_v0ice: depends on what exactly you want to create
[17:07:28 CET] <th3_v0ice> Well, I have an input file with 10 streams and I want to create a fake file which will only have 2 of those 10 streams. I will copy the decoder contexts and stream properties. I would like to copy everything possible so that the fake input file is identical except for the number of streams.
[17:17:01 CET] <th3_v0ice> JEEB: Currently I get segmentation fault because AVFormatContext->iformat is NULL in the fake file I created.
[17:18:43 CET] <kepstin> why do you need a fake file? can't you just ignore the streams you don't want to read from the input?
[17:21:17 CET] <th3_v0ice> I was using that technique and currently it's limiting me. I need to create a fake file.
[17:29:03 CET] <th3_v0ice> Copying the AVFormatContext->iformat pointer seems to create a proper fake file. At least av_dump_format() prints everything as it should. I don't know if this can have some consequences.
[17:43:18 CET] <kepstin> I suspect you might get issues with double-frees when trying to tear this all down. There might not be any way to do this "properly" without access to ffmpeg private functions/structures.
[17:47:45 CET] <JEEB> th3_v0ice: I was more interested in what exactly you want to test.
[17:47:56 CET] <JEEB> do you want to test your iteration etc logic based on AVStreams etc?
[17:48:16 CET] <JEEB> or do you want to just have a test initialization etc from X decoders etc
[17:50:13 CET] <JEEB> if you really really want to emulate an AVFormatContext, then there's two ways: either use your own data through an AVIO context, or make a thing in libavformat that lets you filter etc the input data (but it requires an actual input file for that)
[17:50:43 CET] <JEEB> so really, I would generate the required data and use AVIO context if I need to test the whole API client logic without changing things
[18:03:17 CET] <kepstin> if this is for tests, I'd probably just use ffmpeg cli to remux the input file to a temporary file on disk before running the test.
[18:03:35 CET] <kepstin> (or run ffmpeg in a subprocess, piped to your test program's input)
[18:08:46 CET] <JEEB> the movenc tests have pre-generated H.264 headers etc
[18:09:15 CET] <JEEB> so in a similar way if you pre-generate some data to be demuxed you can always feed that to lavf through AVIO
[18:48:01 CET] <turnage> Can I peek at or configure the size of packets that will be written during encoding with libavcodec in calls to receive_packet before actually writing the bytes?
[19:16:23 CET] <turnage> I see an option called packetsize in the options table. Does this refer to the size of packets output by encoders?
[19:17:21 CET] <DHE> where's this?
[19:18:08 CET] <turnage> https://github.com/FFmpeg/FFmpeg/blob/8522d219ce805ce69ff302f259e6f083fdb4887c/libavformat/options_table.h#L41
[19:18:30 CET] <turnage> actually this may not have anything to do with libavcodec
[19:18:33 CET] <DHE> well that's libavformat, and second not all formats will handle that. just ones where it makes sense
[19:18:59 CET] <turnage> is there an option for it in libavcodec?
[19:19:11 CET] <DHE> how would it work?
[19:19:39 CET] <DHE> packets are encoded data. you don't set that directly. you might be able to set the bitrate but there's still going to be variance
[19:19:51 CET] <kepstin> most codecs don't know how big the packet will be until it's done encoding it, so... (there's some special cases for constant bit rate which are configured with codec-specific options)
[19:20:11 CET] <JEEB> in other words, start with the use case explanation
[19:20:36 CET] <JEEB> after we figure out the use case we can see if there's a better way of doing it
[19:20:37 CET] <JEEB> :P
[19:20:40 CET] <turnage> helpful info. An upper bound for h264 encoded packet sizes with a known target variable bitrate is what I'd like
[19:20:58 CET] <JEEB> just use VBV/HRD
[19:21:05 CET] <JEEB> aka -maxrate and -bufsize
[19:21:20 CET] <JEEB> that makes sure that over bufsize the average rate is at most maxrate
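With libx264, the VBV/HRD capping JEEB describes looks like this (the filenames and rate numbers are placeholders):

```shell
# -maxrate/-bufsize engage VBV: over any window of bufsize bits the
# stream averages at most maxrate, which also bounds burst sizes.
ffmpeg -i input.mp4 -c:v libx264 -b:v 4M -maxrate 5M -bufsize 10M output.mp4
```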
[19:21:26 CET] <DHE> there are strategies to reduce the bursts, like intra_refresh instead of regular keyframes but that one has some pretty serious downsides
[19:21:42 CET] <JEEB> that does work with libx264, for example
[19:21:45 CET] <kepstin> note that vbv doesn't have a limit on encoded frame size directly, but i think it requires that any given frame is no bigger than the buffer.
[19:21:51 CET] <JEEB> other encoders... mileage may vary
[19:22:43 CET] <turnage> it's probably invariant in h264 that any given packet is smaller than an uncompressed frame yes?
[19:22:55 CET] <kepstin> turnage: not specifically required.
[19:23:11 CET] <kepstin> why do you need this anyways?
[19:23:38 CET] <kepstin> ffmpeg allocates the right amount of memory for packets for you, you don't need to care in most cases.
[19:24:25 CET] <turnage> I'm working in a multiprocess pipeline that needs to agree on buffer sizes ahead of time.
[19:24:57 CET] <turnage> Looks like I may just have to estimate up and suffer a copy on occasion though.
[19:25:29 CET] <kepstin> oh, you're doing some sort of shared memory thing?
[19:25:33 CET] <turnage> yeah
[19:27:39 CET] <kepstin> Hmm, so you're using a fixed size shared buffer rather than changing ownership of memory pages between processes?
[19:35:11 CET] <turnage> Yeah; it's a pipeline of vmos: https://fuchsia.googlesource.com/zircon/+/HEAD/docs/objects/vm_object.md
[19:36:09 CET] <kepstin> hmm, so roughly analogous to mmaping a "file" on /dev/shm on linux then. Resizable, that's good.
[19:37:09 CET] <turnage> Thanks for the helpful info. I'll deal with a copy for now. It may be possible to get RT even with the copies and I'll revisit later if not.
[19:37:37 CET] <kepstin> how are you convincing the encoder to write the frame into your shm block, anyways?
[19:38:03 CET] <turnage> I map it into the process: https://fuchsia.googlesource.com/zircon/+/HEAD/docs/objects/vm_address_region.md
[19:38:25 CET] <turnage> then make an AVBuffer backed by it
[19:39:57 CET] <kepstin> hmm, right, if the destination avpacket contains a preallocated buffer, then it'll be used if it's big enough.
[19:45:34 CET] <kepstin> it looks like if the buffer is too small, the encoder functions will automatically allocate a new buffer and use that instead :/
[19:46:03 CET] <turnage> lol
[19:50:02 CET] <kepstin> best thing I can think of is that if the packet the encoder gives you back has a reallocated buffer, then your buffer was too small and you should resize it larger for next time.
[19:50:19 CET] <kepstin> should at least limit the number of copies you have to do.
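kepstin's resize heuristic might be sketched like this (the `shared_buf` wrapper and the 1.5x growth policy are invented for illustration; the core check is simply whether the packet's data landed inside the preallocated region, which means the encoder used it):

```c
#include <stdint.h>
#include <stddef.h>

struct shared_buf { uint8_t *base; size_t size; };

/* Return nonzero if the encoder wrote into our preallocated region. */
static int used_our_buffer(const struct shared_buf *sb,
                           const uint8_t *pkt_data)
{
    return pkt_data >= sb->base && pkt_data < sb->base + sb->size;
}

/* If the encoder fell back to its own allocation, the packet was bigger
 * than our buffer: grow (e.g. 1.5x the observed packet size) for next time. */
static size_t next_buffer_size(const struct shared_buf *sb,
                               const uint8_t *pkt_data, size_t pkt_size)
{
    if (used_our_buffer(sb, pkt_data))
        return sb->size;              /* big enough, keep it */
    size_t grown = pkt_size + pkt_size / 2;
    return grown > sb->size ? grown : sb->size + 1;
}
```

After each receive_packet call, compare the packet's data pointer against the shared region and resize before the next frame; only the occasional oversized packet then costs a copy.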
[19:52:45 CET] <turnage> that's a good idea, kepstin
[19:54:18 CET] <kepstin> it would require a bit of experimenting to pick a reasonable initial buffer size. A buffer the same size as an uncompressed frame should certainly cover 99.99% of encoded frames; you could probably pick something smaller (but how much? I dunno)
[19:56:43 CET] <turnage> yeah I foresee a lot of math in that optimization problem. Whatever I end up needing to get RT, I'll come back and let you know how it went.
[20:20:19 CET] Action: kepstin didn't know that the fuchsia kernel was to the point where people were actually trying to build apps using ffmpeg on it, that's kinda neat
[20:23:21 CET] <JEEB> oh, that's from fuchsia? funky
[20:30:31 CET] <LunaLovegood> Is there a way to make avcodec_send_packet() and avcodec_receive_frame() act like blocking i/o functions? Like I'd have a thread that simply sends packets in a loop, and another thread that would receive decoded frames. ?
[20:31:19 CET] <turnage> that is what I do but I use a signal to block
[20:31:38 CET] <turnage> e.g. send on one thread, then signal to the other thread to unblock and receive until EAGAIN
[20:31:55 CET] <turnage> e.g. with a condition variable
[20:32:17 CET] <LunaLovegood> oh right. that makes sense. I guess I'll just write wrapper functions like that.
[20:33:03 CET] <kepstin> LunaLovegood: you can't call functions on a single avcodeccontext from different threads without taking care of any required synchronization yourself.
[20:33:22 CET] <kepstin> but you can send AVFrame and AVPacket structures safely across threads
[20:34:37 CET] <kepstin> so i'd probably suggest using some sort of bounded length threadsafe queue with blocking operations to transfer either frames or packets between threads
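A minimal sketch of such a bounded blocking queue with pthreads (holding void* so it can carry AVFrame* or AVPacket* pointers between the send and receive threads; the names and the fixed capacity are invented for illustration):

```c
#include <pthread.h>
#include <stddef.h>

#define QCAP 16

struct blocking_queue {
    void *items[QCAP];
    size_t head, count;
    pthread_mutex_t mu;
    pthread_cond_t not_full, not_empty;
};

static void bq_init(struct blocking_queue *q)
{
    q->head = q->count = 0;
    pthread_mutex_init(&q->mu, NULL);
    pthread_cond_init(&q->not_full, NULL);
    pthread_cond_init(&q->not_empty, NULL);
}

/* Block until there is room, then enqueue. */
static void bq_push(struct blocking_queue *q, void *item)
{
    pthread_mutex_lock(&q->mu);
    while (q->count == QCAP)
        pthread_cond_wait(&q->not_full, &q->mu);
    q->items[(q->head + q->count++) % QCAP] = item;
    pthread_cond_signal(&q->not_empty);
    pthread_mutex_unlock(&q->mu);
}

/* Block until there is an item, then dequeue. */
static void *bq_pop(struct blocking_queue *q)
{
    pthread_mutex_lock(&q->mu);
    while (q->count == 0)
        pthread_cond_wait(&q->not_empty, &q->mu);
    void *item = q->items[q->head];
    q->head = (q->head + 1) % QCAP;
    q->count--;
    pthread_cond_signal(&q->not_full);
    pthread_mutex_unlock(&q->mu);
    return item;
}
```

The bounded capacity gives the blocking-I/O feel LunaLovegood asked for: the sender stalls when the decoder falls behind, and the receiver stalls when no frames are ready, while all avcodec calls stay on one thread each.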
[20:34:47 CET] <LunaLovegood> alright
[20:36:10 CET] <LunaLovegood> do ffmpeg's decoding threads run in the background, or are they only active until avcodec_send_packet or avcodec_receive_frame returns?
[20:36:44 CET] <LunaLovegood> err. not that it should matter much since I'll be calling those functions often enough. sorry.
[20:38:32 CET] <kepstin> i don't actually know, tbh. It might depend on the particular codec.
[20:39:14 CET] <turnage> could be wrong but I don't think threads will be spun up without you asking
[20:39:36 CET] <turnage> ime work happens on the receive side
[20:39:56 CET] <turnage> and the buffers from send need to stay alive while referenced which may be long after the corresponding data has been received
[20:40:05 CET] <kepstin> default for codecs that support it is to do multithreaded decoding (you can override this by setting threads to 1 in the context iirc)
[20:40:35 CET] <turnage> oh interesting I didn't know default wasn't 1.
[20:40:40 CET] <kepstin> and the ffmpeg buffer/frame api takes care of keeping data alive, you don't need to do anything special
[20:41:10 CET] <kepstin> (buffers are actually thread-safe reference counted)
[20:43:43 CET] <DHE> it will vary by codec, but the most popular ones usually support multi-threading
[21:35:39 CET] <BtbN> LunaLovegood, you can't do that unless you lock every call with a mutex for that context. Which removes most benefits, since you end up waiting for it to finish again anyway.
[00:00:00 CET] --- Wed Feb 6 2019