[Ffmpeg-devel-irc] ffmpeg.log.20191015
burek
burek at teamnet.rs
Wed Oct 16 03:05:03 EEST 2019
[01:03:22 CEST] <Diag> yall just look past that
[02:19:45 CEST] <ncouloute> So even the debug statements from the ffmpeg CLI are not accurate when it comes to concatenation, for both the concat filter and the concat demuxer. The pts and pts_time are wrong. So I'm forced to learn the ffmpeg api and hope I can figure out what's going on. :(
[02:25:01 CEST] <anticw> JEEB re: mp4 and indexing, how can i determine if it has a single index at the start (i assume this is done for mobile/http-byte-range optimised clients?) vs a series of fragments?
[02:25:57 CEST] <anticw> actually, that's not so pressing right now, i could convert either way ... the more pressing issue is whether i can take a stream and robustly determine where the i-frames are for splitting?
[02:28:05 CEST] <kepstin> anticw: if you're writing an application using ffmpeg libraries, that's pretty easy, the demuxed packets will have a field set if they're keyframes.
[02:28:38 CEST] <kepstin> anticw: if you have a file on a hd, ffprobe can show that information along with the frame pts, which could be used with other commands to split later.
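(For reference, one possible shape of the ffprobe call kepstin is describing; "input.mp4" is a placeholder. It prints each video packet's pts_time and its flags, where a leading "K" marks a keyframe packet.)

    ffprobe -v error -select_streams v:0 \
            -show_entries packet=pts_time,flags -of csv=p=0 input.mp4
    # keyframe packets show a leading 'K' in the flags column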
[02:29:21 CEST] <anticw> the plan was to use the ffmpeg library, do you know what field i should be googling/grepping for?
[02:34:58 CEST] <klaxa> anticw: AVPacket.flags
[02:35:22 CEST] <klaxa> if (pkt.flags & AV_PKT_FLAG_KEY) { /* code */ }
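A minimal sketch of that pattern with the libavformat API, assuming a file path on the command line (error handling trimmed; not taken from the log itself):

    #include <inttypes.h>
    #include <stdio.h>
    #include <libavformat/avformat.h>

    int main(int argc, char **argv)
    {
        AVFormatContext *fmt = NULL;
        AVPacket *pkt = av_packet_alloc();
        int video_idx;

        if (argc < 2 || !pkt)
            return 1;
        if (avformat_open_input(&fmt, argv[1], NULL, NULL) < 0)
            return 1;
        if (avformat_find_stream_info(fmt, NULL) < 0)
            return 1;

        /* pick the "best" video stream in the file */
        video_idx = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);
        if (video_idx < 0)
            return 1;

        /* every demuxed packet carries a keyframe flag */
        while (av_read_frame(fmt, pkt) >= 0) {
            if (pkt->stream_index == video_idx && (pkt->flags & AV_PKT_FLAG_KEY))
                printf("keyframe at pts %" PRId64 "\n", pkt->pts);
            av_packet_unref(pkt);
        }

        av_packet_free(&pkt);
        avformat_close_input(&fmt);
        return 0;
    }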
[02:36:16 CEST] <anticw> thanks ... re: audio ... is that usually encoded "lock step" with the video ... that is if i split on keyframes and preserve the audio (as-is/copy) will that work?
[02:37:51 CEST] <klaxa> if you demux a file you will receive packets of different streams interleaved
[02:37:57 CEST] <klaxa> afaik
[02:38:20 CEST] <anticw> and it's up to me to make sure the timing of each is sane/valid i take it?
[02:38:58 CEST] <anticw> that is i could get a video frame ... whose duration i can infer from the frame rate, and an audio pkt which could represent any number of video frames in some sense?
[02:40:08 CEST] <klaxa> it's usually the other way around, don't worry too much about the audio packets, just copy them like the video frames
[02:42:08 CEST] <klaxa> https://github.com/klaxa/ffserver/blob/master/ffserver.c#L252 is kind of what you want i think
[02:42:13 CEST] <klaxa> maybe have a look
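For the "just copy them like the video frames" part, the core of a stream-copy loop looks roughly like the sketch below. It assumes ifmt_ctx and ofmt_ctx are already opened, the output streams were created by copying codec parameters from the input, and the stream indices map 1:1, as in FFmpeg's doc/examples/remuxing.c; the names are illustrative.

    AVPacket pkt;
    while (av_read_frame(ifmt_ctx, &pkt) >= 0) {
        AVStream *in  = ifmt_ctx->streams[pkt.stream_index];
        AVStream *out = ofmt_ctx->streams[pkt.stream_index];

        /* no decoding: rescale timestamps into the output stream's time base
         * and let the muxer handle audio/video interleaving */
        av_packet_rescale_ts(&pkt, in->time_base, out->time_base);
        pkt.pos = -1;
        if (av_interleaved_write_frame(ofmt_ctx, &pkt) < 0)
            break;
        /* av_interleaved_write_frame() took ownership; nothing left to unref */
    }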
[02:43:05 CEST] <anticw> https://github.com/FFmpeg/FFmpeg/blob/master/doc/examples/demuxing_decoding.c not easier?
[02:44:04 CEST] <klaxa> well it doesn't write anything
[02:44:23 CEST] <klaxa> wait
[02:44:56 CEST] <klaxa> well it writes raw frames :P
[02:45:15 CEST] <anticw> ok ... so that code you pointed me at ... it knows about DASH ... i'm curious if there is enough there (related but diff issue) to stream video to a chromecast
[02:45:41 CEST] <anticw> (as a cheap output device, some versions of vlc seemed to do it for a while, so presumably it's not that difficult at this stage)
[02:46:24 CEST] <klaxa> yeah should be, it leaks a bit of memory but in a small setup it should work, don't expect real-time latency though
[02:47:26 CEST] <anticw> klaxa: what's the latency issue? something about dash?
[02:47:48 CEST] <klaxa> buffering of packets in general
[02:48:24 CEST] <klaxa> the server waits for 1 GOP until it serves it
[02:48:54 CEST] <klaxa> so the segment is done writing before being served
[02:49:46 CEST] <anticw> GOP is an i-frame/keyframe? or an entire run of frames?
[02:50:03 CEST] <klaxa> the latter
[02:50:17 CEST] <klaxa> it's from one keyframe up to the last non-keyframe (usually)
[02:50:37 CEST] <klaxa> one could possibly hack it to serve packets over chunked http transfer with a small buffer, and change the code to allow segments to be read before they're finished writing
[02:51:03 CEST] <anticw> you've tested with a chromecast specifically?
[02:51:07 CEST] <klaxa> no
[02:51:13 CEST] <klaxa> don't have one
[02:52:32 CEST] <klaxa> i think i tested it with a fire tv
[02:53:14 CEST] <anticw> dash is what's most tractable in both cases?
[02:53:34 CEST] <anticw> (vs say hls)
[02:54:00 CEST] <klaxa> doesn't really matter i think
[02:57:15 CEST] <nicolas17> afaik Android phones can stream the screen to a chromecast, does anyone know what protocol that uses?
[02:57:37 CEST] <klaxa> a proprietary one afaik
[02:58:35 CEST] <nicolas17> that doesn't surprise me but doesn't stop me either :P
[03:49:16 CEST] <anticw> vlc can stream to a chromecast ... i didn't check what it uses
[03:52:03 CEST] <nicolas17> anticw: well that doesn't necessarily need low latency
[04:08:49 CEST] <Fenrirthviti> nicolas17: https://github.com/balloob/pychromecast I've used this
[04:10:03 CEST] <nicolas17> that gives the chromecast a URL to play from, doesn't look like you can stream video
[04:10:14 CEST] <nicolas17> unless you run a webserver and give it a LAN URL I guess?
[04:10:35 CEST] <Fenrirthviti> read the examples
[04:11:42 CEST] <Fenrirthviti> I don't think a chromecast can play media that isn't sent from some kind of URL though.
[04:11:57 CEST] <nicolas17> afaik Android phones can stream the screen to a chromecast
[04:12:53 CEST] <Fenrirthviti> which I'm pretty sure sends the screen as a vp9 stream
[04:13:31 CEST] <nicolas17> and hosts it over HTTP with chunked encoding and gives the URL to the chromecast? I don't think that will have low enough latency
[04:15:46 CEST] <ponyrider> nicolas17: why do you think that?
[04:19:37 CEST] <nicolas17> I know Apple's AirPlay screen mirroring has a protocol completely different than normal AirPlay
[16:21:22 CEST] <ncouloute> hmm seems I was mistaken. There are several different streams... Sometimes the audio stream.. and stream 2 (subtitle stream?) will start before the video stream. Is there a command I can use to make all the streams start at the same time?
[16:23:32 CEST] <Mavrik> That might actually break your video tho
[16:24:51 CEST] <ncouloute> True, I suppose I can just find the video stream start time, as I don't want to break the frame timings.
[18:16:43 CEST] <Beerbelott> Hello, How does one fix "[AVBSFContext @ 0x6dd6400] Error parsing ADTS frame header! // [AVBSFContext @ 0x6dd6400] Failed to receive packet from filter aac_adtstoasc for stream 0"?
[18:17:17 CEST] <Beerbelott> Associated question: how do I work out whether it is a source file problem or a decoder one?
[18:18:53 CEST] <DHE> that's a bitstream filter. it may be used internally for conversion between formats. I know mpegts will invoke it if certain metadata needed for mpegts is missing from the stream
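(A common way to apply that filter explicitly when copying ADTS AAC out of a transport stream into an MP4-style container is shown below; "input.ts" and "output.mp4" are placeholders. If the same ADTS parse error appears even on a plain stream copy, that tends to point at the source file rather than the filter.)

    ffmpeg -i input.ts -c copy -bsf:a aac_adtstoasc output.mp4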
[18:20:12 CEST] <Beerbelott> Is there a way to show metadata in the source MPEG-TS in order to highlight potential missing required entries?
[18:25:17 CEST] <Beerbelott> Would "ffmpeg-current/ffprobe -v quiet -print_format json -show_streams <input>" be helpful?
[18:25:35 CEST] <Beerbelott> whoops, scrap the filesystem path mess at the start
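(The same probe without the stray path prefix, with -show_format added so container-level metadata is printed alongside the per-stream data; "<input>" stays a placeholder.)

    ffprobe -v quiet -print_format json -show_format -show_streams <input>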
[21:22:43 CEST] <ChrisJW> Hi, I am currently hardware decoding with cuvid, but I want to choose the CUcontext which it uses. This https://ffmpeg.org/pipermail/libav-user/2018-May/011163.html indicates this may be possible but I can't find any resources on the subject
[22:54:22 CEST] <ChrisJW> I am going to try replacing cuda_ctx after initialisation, and then swapping it back to the original on destruction... This may or may not explode
[23:33:48 CEST] <BtbN> ChrisJW, you are aware the cuvid decoder is deprecated? You should use nvdec.
[23:35:34 CEST] <BtbN> But both nvdec and cuviddec will first try to grab the CUDA context from the hw_frames_ctx in the AVCodecContext. If that fails, they try hw_device_ctx, and if that also fails, they create one themselves.
[23:35:46 CEST] <BtbN> http://git.videolan.org/?p=ffmpeg.git;a=blob;f=libavcodec/cuviddec.c;h=acee78cf2cb74b8c93e01ea01c1936dc8b045eff;hb=HEAD#l879
[23:36:42 CEST] <BtbN> So if you want to pass your own CUDA context, create it via av_hwdevice_ctx_create()
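A sketch of that route, assuming "avctx" is the not-yet-opened AVCodecContext for the nvdec/cuvid-backed decoder and "attach_cuda_device" is a made-up helper name. To wrap an already-existing CUcontext instead of letting FFmpeg create one, one alternative is av_hwdevice_ctx_alloc() plus filling AVCUDADeviceContext.cuda_ctx before av_hwdevice_ctx_init().

    #include <libavcodec/avcodec.h>
    #include <libavutil/hwcontext.h>

    /* hypothetical helper: hand the decoder a CUDA device context so it
     * does not create its own */
    static int attach_cuda_device(AVCodecContext *avctx)
    {
        AVBufferRef *device_ref = NULL;
        /* NULL = default GPU; pass "1" (etc.) to pick another device */
        int ret = av_hwdevice_ctx_create(&device_ref, AV_HWDEVICE_TYPE_CUDA,
                                         NULL, NULL, 0);
        if (ret < 0)
            return ret;

        /* nvdec/cuviddec look here (after hw_frames_ctx) for the CUDA context */
        avctx->hw_device_ctx = device_ref;
        return 0;
    }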
[00:00:00 CEST] --- Wed Oct 16 2019