[Ffmpeg-devel-irc] ffmpeg.log.20180404

burek burek021 at gmail.com
Thu Apr 5 03:05:02 EEST 2018


[00:06:48 CEST] <atomnuker> DHE: what resolution? also what fps?
[00:07:06 CEST] <atomnuker> by my calculations you've encoded 30 seconds of 24fps video
[00:08:06 CEST] <furq> atomnuker: 1080p
[00:19:19 CEST] <DHE> atomnuker: fps is irrelevant to encoding speed. but it's 25fps if it really matters
[00:26:15 CEST] <atomnuker> I know, I just wanted to know how many seconds of video you've encoded, to see if you've broken a record
[00:46:00 CEST] <DHE> frame=  849 fps=0.0 q=-0.0 size=    4809kB time=00:00:34.72 bitrate=1134.7kbits/s speed=0.000206x
[00:46:34 CEST] <DHE> note that's frames into the encoder, not packets out from the encoder
[00:51:05 CEST] <undeclared> Could I possibly use ffmpeg to merge 2 m2ts files while keeping proper headers?
[00:52:39 CEST] <undeclared> or repair the headers if they're broken? (e.g. after copying with cat, maybe it could rebuild them)
[00:53:33 CEST] <DHE> merge 2 mpegts multi-program streams?
[00:54:59 CEST] <undeclared> yup
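
For sequentially joining two transport streams, the concat protocol plus a remux is the usual approach; an untested sketch (file names are placeholders) that rewrites the PAT/PMT headers on the way through:

    ffmpeg -i "concat:input1.m2ts|input2.m2ts" -c copy output.m2ts

Merging two multi-program streams into one multi-program output is a different job and would need explicit -map/-program arguments.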
[02:48:43 CEST] <strfry> Is there anything in libavcodec to improve handling of lost packets in real-time H264 decoding?
[02:51:28 CEST] <strfry> i'm seeing that packet loss leads to big parts of the output image being corrupted, where i'd expect that applying the current B-frames to the last valid output would still look better
[02:53:28 CEST] <strfry> looking that term up, i probably mean P-frames ;)
[03:02:32 CEST] <JEEB> strfry: IIRC FFmpeg's error concealment is among the better ones, and that image most likely is what can be correctly decoded
[03:03:12 CEST] <JEEB> although if FFmpeg has gone backwards in that, feel free to make an issue about it
[03:36:51 CEST] <strfry> JEEB: i now suspect that i'm somehow using the API the wrong way. i found a different code path in my application (one that doesn't go through the RTP stack), and when i add simulated packet loss there, the concealment looks much better
[03:38:43 CEST] <JEEB> the decoder really shouldn't matter too much
[03:38:48 CEST] <JEEB> it takes in AVPackets
[03:38:53 CEST] <JEEB> gives you out AVFrames
[03:39:04 CEST] <JEEB> you can configure some of them
[03:39:11 CEST] <JEEB> in some ways
[03:39:36 CEST] <JEEB> but generally speaking, if you have N decoders of the same type with generally the same settings, they should give the same result
[04:29:35 CEST] <strfry> JEEB: i did some more tests, and noticed that while smaller packet losses get concealed nicely, bigger bursts of lost packets lead to the partial destruction of the image
[04:30:55 CEST] <strfry> you say that's probably what can be correctly decoded (whatever correct means here), but i don't understand why it can't just display what was previously there
[04:34:50 CEST] <strfry> say you transmit a still image, and incoming packets basically just encode camera noise. then a burst of lost packets shouldn't destroy that image, IMHO
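
For reference, these are the decoder-side concealment knobs libavcodec exposes; a minimal C sketch (some of these are already on by default, and how much they help depends on the decoder):

    AVCodecContext *ctx = avcodec_alloc_context3(avcodec_find_decoder(AV_CODEC_ID_H264));
    /* guess motion vectors for damaged blocks, deblock across damaged areas,
       and bias concealment toward the previous frame's content */
    ctx->error_concealment = FF_EC_GUESS_MVS | FF_EC_DEBLOCK | FF_EC_FAVOR_INTER;
    /* keep receiving frames even when they are flagged as corrupt */
    ctx->flags |= AV_CODEC_FLAG_OUTPUT_CORRUPT;

FF_EC_FAVOR_INTER in particular is close to what strfry is asking for: it prefers filling damaged regions from the previous frame rather than from intra prediction.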
[06:02:23 CEST] <Johnjay> hey if I take a course in machine vision for electrical eng. does that cover ffmpeg image coding stuff?
[06:02:29 CEST] <Johnjay> or is that a different course ?
[06:34:04 CEST] <TD-Linux> utack, frame parallel encoding is unlikely to be added for a while; it's been superseded by GOP parallelism
[06:43:46 CEST] <furq> TD-Linux: is performance work actually going to happen in libaom or is that just going to be the reference encoder
[07:46:54 CEST] <TD-Linux> furq, it will happen in libaom, some companies are going to use libaom as their production encoder
[09:16:28 CEST] <MarkedOne> Hello good ppl... I have probably found a bug in ffmpeg, or I am doing something wrong..
[09:17:24 CEST] <MarkedOne> I have crafted this command to take a picture of the stream every minute: ffmpeg -rtsp_transport tcp -i rtsp://185.23.112.114:1935/live/parkovisko/ -vf fps=1/60 -compression_level 9 -strftime 1 "%Y-%m-%d_%H-%M-%S.png" But the problem is that after a while it takes 2 photos every minute, and then it hangs...
[09:19:39 CEST] <MarkedOne> The last photo was taken hours later...
[09:20:22 CEST] <MarkedOne> Could TCP be to blame?
[09:31:49 CEST] <dystopia_> fps isn't a video filter, is it?
[09:32:14 CEST] <dystopia_> try removing "-vf fps=1/60" and using -r to set the framerate you want
[09:34:41 CEST] <durandal_1707> fps is a video filter
[09:35:00 CEST] <dystopia_> ahh ok
[09:35:21 CEST] <MarkedOne> Here https://trac.ffmpeg.org/wiki/Create%20a%20thumbnail%20image%20every%20X%20seconds%20of%20the%20video it says that fps needs to be used
[10:04:13 CEST] <MarkedOne> Do you have any ideas? The only thing that comes to my mind is that ffmpeg keeps waiting for packets that were lost
[10:07:53 CEST] <durandal_1707> MarkedOne: try latest ffmpeg
[10:11:11 CEST] <MarkedOne> durandal_1707: The project is probably over for now.. I only noticed it after I had collected all the photos... Just curious..
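
For reference, a variant of MarkedOne's command with a socket timeout, so that a stalled RTSP connection errors out instead of hanging indefinitely (untested sketch; the URL is a placeholder, and -stimeout is the RTSP demuxer's TCP socket timeout in microseconds):

    ffmpeg -rtsp_transport tcp -stimeout 5000000 \
           -i rtsp://example.com/live/stream \
           -vf fps=1/60 -compression_level 9 -strftime 1 "%Y-%m-%d_%H-%M-%S.png"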
[11:07:15 CEST] <termos> I'm not getting an AVCodecContext.time_base when I open some input streams. I'm trying to fall back to AVStream.r_frame_rate, but it seems to cause issues. Is there a work-around for something like this?
[11:08:08 CEST] <Mavrik> Read time_base from the stream
[11:08:19 CEST] <Mavrik> Might be that the codec doesn't have it defined, but the container does
[11:09:20 CEST] <Mavrik> I wouldn't rely on frame rate since a lot of modern codecs don't have a static one
[11:15:44 CEST] <termos> the r_frame_rate actually worked hm
[11:16:28 CEST] <Mavrik> It might work for some codecs, yeah :)
[11:16:39 CEST] <slavanap> Hi, is that possible to mute short segments of long video without full audio re-encoding?
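
One possible answer to slavanap's question, since nobody picked it up: stream-copy everything outside the segment, re-encode (and mute) only the affected slice's audio, then concatenate. A rough shell sketch muting 60s-70s of a hypothetical in.mp4 (cut points snap to keyframes, and all parts must end up with matching codec parameters for the concat demuxer):

    ffmpeg -i in.mp4 -t 60 -c copy part1.mp4
    ffmpeg -ss 60 -i in.mp4 -t 10 -c:v copy -af volume=0 part2.mp4
    ffmpeg -ss 70 -i in.mp4 -c copy part3.mp4
    printf "file '%s'\n" part1.mp4 part2.mp4 part3.mp4 > list.txt
    ffmpeg -f concat -safe 0 -i list.txt -c copy out.mp4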
[11:18:09 CEST] <termos> so the best approach is to always use AVStream.time_base? I've had some issues using 1/1000 or similar values as the codec timebase, but it's been a long time since I tried that
[11:18:42 CEST] <Mavrik> termos: reminder again, many modern codecs (the most popular H.264 being one of them) do not require constant framerate
[11:18:53 CEST] <Mavrik> And you can easily get a video that doesn't have actual fps :)
[11:19:41 CEST] <termos> ugh :) I see, the space of possible inputs is so large
[11:22:56 CEST] <Mavrik> Of course if you only get inputs from certain sources that's fine
[11:23:13 CEST] <Mavrik> But for example mobile phones these days are the primary source of VFR videos
[11:25:14 CEST] <termos> I can get input from multiple weird encoders, usually rtmp flv but sometimes mpegts over udp as well
[11:25:21 CEST] <termos> but it's always h264
[11:26:27 CEST] <Mavrik> If it's TV broadcast it'll usually be CFR
[11:26:41 CEST] <Mavrik> So I guess frame rate parameters should be populated with reasonable accuracy
[11:27:47 CEST] <termos> ah okay, that's good to know. now i'm checking if the time_base of the codec is set, and if not I fall back to using av_inv_q(stream->r_frame_rate)
[11:28:18 CEST] <Mavrik> Well other thing to remember, time_base won't give you framerate
[11:28:27 CEST] <Mavrik> Difference in timestamp between two frames will :)
[11:28:48 CEST] <Mavrik> time_base is an arbitrary number more or less
[11:29:15 CEST] <termos> ah yes, I don't go that way; I don't derive the framerate from the timebase, I just set the time_base from the framerate if everything else fails
[11:29:34 CEST] <Mavrik> ah :)
[11:29:48 CEST] <termos> but only for the video codec; the audio codec always seems fine with 1/sample_rate as the time_base
[11:30:06 CEST] <Mavrik> Yeah, I've seen audio stuff break if that's not true
[11:30:11 CEST] <Mavrik> Video codecs depend on container
[11:30:17 CEST] <Mavrik> IIRC FLV really demands 1/fps timebase
[11:30:24 CEST] <Mavrik> mpeg-ts loves to just keep it at 1/90000
[11:31:57 CEST] <termos> ah i see, hopefully this "fix" is good enough for most input streams then
[12:00:39 CEST] <JEEB> Mavrik: FLV has 1/1000
[12:00:41 CEST] <JEEB> time base
[12:00:45 CEST] <JEEB> hard-coded
[12:01:48 CEST] <Mavrik> ah
[12:02:04 CEST] <Mavrik> Yeah, looking at my code it is 1/1000
[12:02:09 CEST] <Mavrik> Sorry for being misleading :/
[12:02:18 CEST] <JEEB> just like MPEG-TS has 1/90000
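
Summarizing the thread as code, a rough C sketch of the fallback order discussed (av_guess_frame_rate() already encapsulates most of the heuristics):

    AVStream *st = fmt_ctx->streams[video_idx];
    /* the demuxer's time base is authoritative for packet timestamps,
       e.g. 1/1000 for FLV and 1/90000 for MPEG-TS */
    AVRational tb = st->time_base;
    /* best-effort frame rate guess; meaningless for truly VFR input */
    AVRational fr = av_guess_frame_rate(fmt_ctx, st, NULL);
    if (fr.num == 0 || fr.den == 0)
        fr = st->r_frame_rate;  /* last resort, as discussed above */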
[14:25:57 CEST] <colekas> hi guys, I asked in here earlier, but are there any multicast experts here? I'm trying to generate a udp multicast stream at a low bitrate (i.e. ~1Mbps), and I'm finding that when I ffprobe my udp multicast I have to raise the probesize/analyzeduration to get the proper video codec parameters
[14:26:24 CEST] <colekas> is there something on the encode side that I can do to make it so I don't need to set the probesize/analyzeduration?
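
The probe-side workaround looks roughly like this (values are illustrative; probesize is in bytes, analyzeduration in microseconds):

    ffprobe -probesize 10M -analyzeduration 10M udp://239.1.1.1:5000

On the encode side, a shorter GOP (e.g. -g 25) makes keyframes and in-band parameter sets arrive sooner, which should reduce how much the probe has to analyze.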
[17:24:16 CEST] <utack> DHE: in the last ~18h i made it to frame 250, on what is not even HD
[17:24:30 CEST] <utack> no wonder it shows 0.0 fps rounded
[17:50:31 CEST] <DHE> utack: I'm running an x264 placebo encode of the same video (1080p 25fps) and it's doing just a bit under 0.5 fps... (also limited to 1 thread for the sake of comparison)
[17:52:26 CEST] <utack> DHE: yeah, i think there's a good chance they can tune it to a reasonable level.
[17:52:53 CEST] <utack> libvpx-vp9 in "best" mode is also ridiculously slow, and it can be quite reasonable at speed "good"
[17:59:27 CEST] <DHE> right, but I'm running av1 with defaults, so there's that...
[18:02:07 CEST] <kepstin> I wonder if the -cpu-used option in libaom/aomenc does anything useful yet
[18:06:20 CEST] <kepstin> looks like it does, significantly adjusts search sizes and turns on faster search methods
[18:08:42 CEST] <DHE> yes, the AV1 build does support AVX[2], SSE family, etc
[18:09:00 CEST] <kepstin> so yeah, it would be interesting to try the encoder with a few different -cpu-used (speed) settings, to see what it does
[18:09:37 CEST] <kepstin> higher numbers switch to faster searches and start disabling features
[18:11:45 CEST] <kepstin> defaults have all of the adaptive and skip search features disabled.
[18:14:46 CEST] <kepstin> in av1/encoder/speed_features.c, the function av1_set_speed_features_framesize_independent shows all the defaults, and set_good_speed_features_framesize_{dependent,independent} shows what the cpu-used (speed) setting changes.
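
At the time of this log, trying different speed levels through ffmpeg's libaom-av1 wrapper would look roughly like this (illustrative; the encoder was still gated behind -strict experimental, and file names are placeholders):

    ffmpeg -i input.mkv -c:v libaom-av1 -strict experimental -cpu-used 4 -b:v 500k output.mkv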
[18:37:16 CEST] <poutine> Is there a generic channel for video in the OSS world? I'm running into some issues with captioning, specifically converting SRT to SCC files in an OSS way. I found SCC TOOLS from 2005, which doesn't seem to work (I even put on my perl hat and modified it a bit to get it further along), and then a bunch of crapware websites that do the conversion, but nothing really OSS. Is this for a reason? I've looked over how SCC works, and it seems really complicated (like padding captions with 8080 to make up for loading time and frame adjustment). As far as I understand, any conversion would have to take text that is likely unstructured in the SRT and lay it out into the 4 rows and 32 columns that are available for line21 data.
[21:46:36 CEST] <adminjoep> I would like to use an existing cuda context for decoding frames with avcodec_send_packet and avcodec_receive_frame, so I can use the decoded frame in my cuda kernel. I use avcodec_alloc_context3 to allocate the decoder context as pCodecContext. Should I be able to simply set the cuda context pointer in pCodecContext.hw_device_ctx?
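
Assigning a raw CUcontext straight into hw_device_ctx won't work: hw_device_ctx is an AVBufferRef wrapping an AVHWDeviceContext. The usual pattern for adopting an application-owned CUDA context is roughly this C sketch (error handling omitted; my_existing_cuctx is the application's CUcontext):

    #include <libavutil/hwcontext.h>
    #include <libavutil/hwcontext_cuda.h>

    AVBufferRef *dev_ref = av_hwdevice_ctx_alloc(AV_HWDEVICE_TYPE_CUDA);
    AVHWDeviceContext *dev_ctx = (AVHWDeviceContext *)dev_ref->data;
    AVCUDADeviceContext *cuda_dev = dev_ctx->hwctx;
    cuda_dev->cuda_ctx = my_existing_cuctx;   /* adopt the existing context */
    av_hwdevice_ctx_init(dev_ref);
    pCodecContext->hw_device_ctx = av_buffer_ref(dev_ref);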
[00:00:00 CEST] --- Thu Apr  5 2018

