[Ffmpeg-devel-irc] ffmpeg.log.20190925
burek
burek at teamnet.rs
Thu Sep 26 03:05:03 EEST 2019
[00:00:10 CEST] <kepstin> (also, ffvhuff is basically an improved huffyuv - if you're gonna be using ffmpeg/libavcodec to decode it later, prefer ffvhuff)
[00:00:27 CEST] <kepstin> if only because it supports more pixel formats :)
[00:17:20 CEST] <^Neo> hello! I'm working with some HEVC 1080i video and it appears that when I ffprobe/av_find_stream_info with the video it comes up as 1920x540p60 instead of 1920x1080i30...
[00:17:36 CEST] <^Neo> I found some patches about supporting field paired HEVC streams and they look to be integrated in the latest FFmpeg
[00:17:54 CEST] <^Neo> but was curious what people were doing to get the reported 1920x1080i resolution
[00:43:24 CEST] <kepstin> ^Neo: you can use the tinterlace filter to turn the separate field images into full-height interlaced frames.
[00:43:55 CEST] <^Neo> yeah, I just thought of that after I posted - duh! thanks for confirming what I thought
[00:43:56 CEST] <kepstin> or weave
[00:44:01 CEST] <kepstin> which is equivalent i think
[00:44:52 CEST] <kepstin> i suppose weave is easier to use
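A sketch of the weave approach kepstin suggests (file names are hypothetical; assumes the decoder delivers the stream as separate 1920x540 field pictures, top field first):

```shell
# Pair each two successive 540-line field pictures into one 1080-line
# interlaced frame, then encode with x264's interlaced coding flags.
ffmpeg -i fields.mkv -vf weave=first_field=top \
       -c:v libx264 -flags +ildct+ilme out_1080i.mkv
```

Use `first_field=bottom` instead if the material is bottom-field-first.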
[03:05:07 CEST] <b0bby|> hello
[03:08:09 CEST] <b0bby|> Scenario: I have a 4k file that takes a significant amount of processing power to decode. I would like to lower the quality to make it easier to play back. What is the fastest (computationally fastest) way to do this? The size of the files doesn't really matter as long as it's not absurdly large (think raw video).
[03:13:04 CEST] <klaxa> probably something like -c:v libx264 -profile:v baseline
[03:20:50 CEST] <b0bby|> klaxa: does that keep a decent amount of quality? My goal is 1080p
[03:22:52 CEST] <b0bby|> klaxa: better question, whats the catch of using baseline?
[03:27:09 CEST] <klaxa> bigger file because of less expensive computations
[03:27:16 CEST] <klaxa> you can control quality with -crf
[03:27:26 CEST] <klaxa> do some test encodes to check
[03:27:42 CEST] <b0bby|> klaxa: thats super cool
[03:28:05 CEST] <b0bby|> klaxa: I suspected something existed but didn't know for sure
[03:28:39 CEST] <klaxa> well if it's not fast enough there are even faster encoders with worse quality :P
[03:29:16 CEST] <klaxa> or you could first tune it even more to fastness by using different presets
[03:30:47 CEST] <b0bby|> klaxa: ok, do those presets affect quality heavily?
[03:31:32 CEST] <klaxa> not sure tbh, just give it a try, i think the default is -preset medium; faster values are fast, faster, veryfast, superfast, ultrafast
[03:31:35 CEST] <klaxa> afaik
[03:31:58 CEST] <b0bby|> thanks for the help
[03:38:11 CEST] <another> see also: https://trac.ffmpeg.org/wiki/Encode/H.264
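Putting klaxa's pieces together, one possible invocation (file names are hypothetical; the numbers are starting points to refine with test encodes, as suggested above):

```shell
# Scale 4K down to 1080p and bias everything toward cheap decoding:
# the baseline profile drops CABAC and B-frames, the expensive parts
# of H.264 decoding; -crf sets quality (lower = better but bigger),
# and -preset trades encoding time for file size.
ffmpeg -i input_4k.mp4 -vf scale=-2:1080 \
       -c:v libx264 -profile:v baseline -preset ultrafast -crf 18 \
       -c:a copy output_1080p.mp4
```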
[09:10:05 CEST] <lain98> how would i get the stream duration of a webm file. i enabled webm demuxer in the ffmpeg build. AVStream.duration does not seem to report correct value. even the ffmpeg application reports "Duration: N/A, start: 0.000000, bitrate: N/A"
[10:42:58 CEST] <Radiator> Hello everyone ! I'm remuxing a stream from mpeg to h264 and converting the frame in a YUV420P format. When I use ffplay to display the output stream, the frame is all pixelised, grey and not scaled at all. Any idea why it would do that ? Using other containers works fine but h264 with YUV420P doesn't.
[13:29:11 CEST] <safinaskar> what is best encoder for videos captured from screen? (in terms of file size) and why ffv1 and ffvhuff produce so big files compared to x264 -crf 0?
[13:29:25 CEST] <safinaskar> wait a minute, i will show you sample video
[13:31:07 CEST] <cehoyos> It seems you already found the best encoder for your use case...
[13:31:34 CEST] <cehoyos> Feel free to test qtrle
[13:33:26 CEST] <safinaskar> so, i am trying to find the best encoder for videos captured from my screen. to make the videos even smaller i heavily reduce the number of distinct colors appearing in the video, i. e. i apply a strong filter which collapses the whole palette down to a very small number of colors. this is a sample of a typical video captured from my screen with such a reduced palette:
[13:33:27 CEST] <safinaskar> https://gitlab.freedesktop.org/gstreamer/gst-examples/uploads/0a23a18aa80168708dd990e0c5ff34c4/sample.libx264-crf0.mkv . so, i need a lossless encoder for such videos. i tried x264, ffv1 and ffvhuff, and i noticed that the ffv1 and ffvhuff files are a lot bigger than x264's:
[13:33:28 CEST] <safinaskar> https://gitlab.freedesktop.org/gstreamer/gst-examples/uploads/dce9129d8311d6a90ef2226a10837f58/sample.ffv1.mkv , https://gitlab.freedesktop.org/gstreamer/gst-examples/uploads/3fd9ab069b19885c00cc95d2437bdcb4/sample.ffvhuff.mkv . why?
[13:33:56 CEST] <durandal_1707> because
[13:35:30 CEST] <cehoyos> I believe you don't want yuv444p encoding
[13:37:19 CEST] <safinaskar> cehoyos: ok
[13:40:11 CEST] <safinaskar> -rw-r--r-- 1 user user 540M Sep 25 13:54 /mnt/tmp/sample.ffv1.mkv  -rw-r--r-- 1 user user 4,8G Sep 25 14:08 /mnt/tmp/sample.ffvhuff.mkv  -rw-r--r-- 1 user user 21M Sep 25 14:02 /mnt/tmp/sample.libx264-crf0.mkv
[13:59:58 CEST] <lain98> https://www.ffmpeg.org/doxygen/3.2/structAVCodecParameters.html#abee943e65d98f9763fa6602a356e774f . However I get format = -1. Any clue ?
[14:00:33 CEST] <lain98> format should be AVPixelFormat or AVSampleFormat enum according to documentation
[14:01:42 CEST] <lain98> -1 corresponds to AV_PIX_FMT_NONE but i know the file is yuv420p
[14:10:14 CEST] <cehoyos> safinaskar: the difference is that frame-exact seeking is not easily possible with the x264 encoding, use -g 1 to get a comparable result
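cehoyos's suggestion spelled out (file names hypothetical): `-g 1` sets the GOP size to one, so every frame is a keyframe, which is what makes frame-exact seeking possible and also what makes the file big:

```shell
# Lossless, intra-only H.264: every frame is a seek point.
ffmpeg -i capture.mkv -c:v libx264 -crf 0 -g 1 intra.mkv
```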
[14:15:08 CEST] <DHE> lain98: have you actually run something like avformat_find_stream_info to populate it?
[14:35:43 CEST] <termos> Any way to make h264 decoding more resilient? Getting 5+ decoding errors in a row results in the stream no longer being able to continue. I tried with -fflags +discardcorrupt which seems to make it a bit better
[14:36:36 CEST] <BtbN> If too much data is missing to decode the stream, there is not much to be done about it.
[14:40:49 CEST] <furq> you can try -err_detect ignore_err
[14:52:45 CEST] <termos> hmm that's an interesting flag, will give it a try
[14:57:03 CEST] <safinaskar> cehoyos: i just tested x264 -g 1 and got a big file (file size similar to ffv1). is there a way to use x264 with exact seeking? maybe some container will help?
[14:57:29 CEST] <cehoyos> termos: "not being able to continue working" should never happen, if there is a sample input, please provide it.
[14:57:36 CEST] <cehoyos> safinaskar: Yes, -g 1
[14:58:00 CEST] <cehoyos> afk
[14:58:45 CEST] <kepstin> safinaskar: if you need to be able to cut (without re-encoding) at arbitrary frames, then you need to encode intra-only, and the result will therefore be big.
[14:58:46 CEST] <DHE> sorry, your only other option is a player that can seek to a keyframe and then decode to the true desired seek position quietly
[14:59:41 CEST] <kepstin> note that ffmpeg itself supports frame-accurate seeking when decoding h264
[14:59:57 CEST] <JEEB> ***
[15:00:05 CEST] <JEEB> *** depends on container
[15:01:02 CEST] <kepstin> yeah, needs a container that indicates keyframes, iirc? and definitely works best if there's an index.
[15:01:12 CEST] <kepstin> mkv, mp4 are fine, for example.
[15:01:24 CEST] <JEEB> keyframes and index (that is utilized in seeking functionality)
[15:09:31 CEST] <safinaskar> Well. Let me finally describe my actual task. I want to capture my screen, and then play the recording in a player which supports playing in both directions (back and forth) and supports arbitrary seeking. Also, my computer sometimes fails abruptly, and i want the player to be able to play even such videos. I. e. even if the computer powered off while
[15:09:31 CEST] <safinaskar> recording the video, the player should still be able to play it with all features enabled, i. e. playing back and forth, seeking, etc. Also I want the videos to be small. Unfortunately, this task appears to be very difficult. First of all, it seems there is no player which supports playing backwards. There is an unofficial mpv branch which
[15:09:32 CEST] <safinaskar> supports it, but I don't want to depend on an unofficial branch. gst-play supports it too, but gst-play is not meant to be a full-featured player, it is a util to test the gstreamer libs. So, I decided to write my own player using the gstreamer libs, because they support backwards playback. Now I am trying to choose an encoder. libx264
[15:09:32 CEST] <safinaskar> compresses very well. Also (when I pack x264 into a matroska container) seeking and backwards playback work too (in gstreamer). So, everything is ok. Unfortunately, problems arise when the computer powers off. If I record such an x264 video and then power off my computer, gstreamer cannot seek in the broken video. So, now I am in search of an
[15:09:33 CEST] <safinaskar> encoder which satisfies these 2 criteria: 1) it should compress well, similarly to x264; 2) when the computer crashes while recording, the gstreamer libs should still be able to seek the video
[15:10:14 CEST] <safinaskar> So far I haven't found such a codec. x264 seeks badly when the computer crashes; ffv1 seeks relatively well, but compresses badly
[15:11:24 CEST] <furq> safinaskar: did you try remuxing the interrupted files
[15:11:35 CEST] <furq> sounds like the container seek index was never built or is incomplete
[17:17:00 CEST] <mlok> Hello, what do you think I should learn and know in order to effectively understand and modify FFMPeg source code written in C to create a new application? Thank you
[17:17:40 CEST] <mlok> Should I improve C/C++ skills? Or perhaps understand how FFPlay works in terms of decoding video to start?
[17:17:54 CEST] <pink_mist> you should probably not modify ffmpeg source code, you should just link to it ... and I'm pretty sure there are loads of examples of doing that kind of thing in the sources
[17:18:10 CEST] <JEEB> doc/examples has various examples of varying quality
[17:18:26 CEST] <JEEB> and yes, ffmpeg.c (the command line application) is just an API client
[17:18:28 CEST] <JEEB> just like any other
[17:18:30 CEST] <JEEB> :)
[17:18:57 CEST] <JEEB> another thing to look at are the trunk doxygen documentation
[17:19:02 CEST] <mlok> Thanks :) for example I would like to generate output frames at a constant rate to feed to ffmpeg regardless of the state of the inputs
[17:19:18 CEST] <JEEB> site:ffmpeg.org doxygen trunk KEYWORD
[17:19:32 CEST] <JEEB> mlok: so you basically need a time-based generator
[17:19:46 CEST] <JEEB> you can use lavfi (libavfilter) to have a source, but you need to add the timings yourself
[17:20:03 CEST] <mlok> JEEB: ahh ok, that sounds interesting
[17:20:05 CEST] <JEEB> say, "if nothing came in in 250ms, stuff this into the chain"
[17:20:31 CEST] <JEEB> mlok: I must say that the upipe framework did get multiple alternative inputs with back-ups implemented
[17:20:34 CEST] <JEEB> so you might want to see how that was made
[17:20:40 CEST] <JEEB> upipe also uses FFmpeg for some parts
[17:20:46 CEST] <JEEB> (although it is broadcast oriented)
[17:20:51 CEST] <mlok> Thanks, I'll check these things out
[17:21:01 CEST] <Media_Thor> safinsakar: try MPEG2 video codec in MPEG2-TS container maybe, most you'll lose is 1/2 a sec gop.
[17:21:33 CEST] <mlok> Media_Thor: Thanks
[17:21:57 CEST] <mlok> This is mainly for RTMP which can drop, causing FFMPEG to stop transcoding into HLS
[17:27:37 CEST] <mlok> JEEB: If I wanted to create a web based GUI for FFMPEG, is it a good idea to use a wrapper or e.g. python bindings?
[17:59:01 CEST] <taliho> anyone know if there is a way to disable probing on certain streams?
[18:00:03 CEST] <taliho> I have an mpegts stream with one stream as binary data and ff_read_packet gets stuck trying to probe this stream
[18:01:32 CEST] <taliho> I feel like this issue has come up a few times before
[18:03:57 CEST] <taliho> I could try to add this as an option if there is any interest
[18:04:50 CEST] <taliho> at the moment I have:
[18:04:56 CEST] <taliho> : ./ffmpeg -y -loglevel trace -f mpegts -i udp://127.0.0.1:10001 -copy_unknown -map 0:d:0 -codec:d copy -f data udp://127.0.0.1:10002
[18:05:57 CEST] <taliho> maybe with a disable probing option it could be:
[18:06:39 CEST] <taliho> : ./ffmpeg -y -loglevel trace -f mpegts -i udp://127.0.0.1:10001 -disable_probing:d:0 -copy_unknown -map 0:d:0 -codec:d copy -f data udp://127.0.0.1:10002
[18:14:11 CEST] <cehoyos> I don't think FFmpeg is the right tool for this job.
[18:14:16 CEST] <cehoyos> Maybe tcpdump?
[18:19:08 CEST] <DHE> probing should timeout as long as any data is being received at all (eg: joining a multicast group that's dead will hang) and ffmpeg will still be able to retroactively process the data that was used in probing
[18:29:28 CEST] <void09> how does ffmpeg behave if you tell it to cut a video on non-I frame ?
[18:31:54 CEST] <cehoyos> If you re-encode, there is no problem, if you remux, your request will not be 100% satisfied
[18:33:45 CEST] <JEEB> taliho: usually due to the data stream being low bit rate
[18:34:00 CEST] <JEEB> so the probing logic waits and waits until it gets enough
[18:34:19 CEST] <JEEB> I've had that with broadcast radio channels
[18:34:31 CEST] <JEEB> although it probably happens with any stream whose data stream grows slowly enough
[18:34:34 CEST] <void09> "With the mp4 container it is possible to cut at a non-keyframe without re-encoding using an edit list. In other words, if the closest keyframe before 3s is at 0s then it will copy the video starting at 0s and use an edit list to tell the player to start playing 3 seconds in."
[18:34:40 CEST] <void09> I assume this does not work for mkv ?
[18:34:50 CEST] <JEEB> you have virtual timelines in matroska as well
[18:35:14 CEST] <JEEB> they aren't really well supported in FFmpeg so you might want to look into mkvtoolnix's mkvmerge and the "ordered chapters" stuff
[18:35:24 CEST] <JEEB> "ordered chapters" is how virtual time lines are called in Matroska
[18:35:29 CEST] <cehoyos> FFmpeg does not support cutting with edit lists
[18:35:33 CEST] <void09> :(
[18:36:08 CEST] <void09> What I want to achieve is, make a script that splits a video into chunks at scene changes. without any re-encoding.
[18:36:10 CEST] <JEEB> taliho: I think there's a patch on the ML to alleviate the low bit rate data stream probing problem
[18:36:17 CEST] <cehoyos> The reason is that for cutting with edit lists you need an mp4 editor, FFmpeg is not a file editor, it is a transcoding application (that also supports remuxing)
[18:37:38 CEST] <cehoyos> void09: What you want is not generally possible - there is no guarantee that the (original) encoder was smart enough to detect the scene change
[18:37:38 CEST] <void09> so if the scene change frame is not an I-frame, I need the frames leading (and following, for the end of the video) up to the I-frame, to be included. but I also need some way of marking those padded frames so they are not considered when re-encoding, for my purposes
[18:38:07 CEST] <void09> cehoyos: ffmpeg supports scene change detection, and giving it a good enough parameter for sensitivity, it's pretty much certain to be correct
[18:38:12 CEST] <cehoyos> (And it is even possible that inserting an I frame at the "scene change" would not have resulted in a minimal file size / best quality)
[18:38:42 CEST] <JEEB> void09: sounds like you want a project file format. something like EDL?
[18:38:48 CEST] <JEEB> which then refers to files and has times
[18:38:52 CEST] <cehoyos> It supports scene change detection (this is absolutely necessary for any transcoding application) but this is not related to the question you asked
[18:38:55 CEST] <void09> JEEB: no idea what that is
[18:39:04 CEST] <void09> I want to deploy a distributed encoder, just for fun
[18:39:37 CEST] <cehoyos> What you want is not possible with current FFmpeg
[18:39:37 CEST] <void09> cehoyos: I do not care or assume anything about the original encoder's efficiency. it will mostly be BD anyway, and that has a fixed I-frame interval
[18:39:39 CEST] <JEEB> then if youtube and others are fine with ffms2 then you should be too
[18:40:15 CEST] <void09> cehoyos: it might not be possible with ffmpeg, but by embedding ffmpeg in a little script to add that functionality
[18:40:25 CEST] <cehoyos> With a bad vp9 encoding, you end up with n files for n scenes, all the same size as the original encoding;-)
[18:40:43 CEST] <void09> cehoyos: I want to try AV1
[18:40:51 CEST] <cehoyos> I don't think a little script will be sufficient, especially since edit lists are not written
[18:41:39 CEST] <void09> well, it does not have to be edit lists. can be a text file that records the frame offsets that need to be ignored
[18:41:46 CEST] <void09> or even embed that info in the filename
[18:42:59 CEST] <JEEB> anyways I know that some applications take in EDL files ("edit lists") which have references to files and timestamps
[18:43:03 CEST] <void09> you say ffmpeg does not support cutting with edit list info. but does it support decoding with taking edit lists into consideration ?
[18:43:11 CEST] <JEEB> partially
[18:43:22 CEST] <JEEB> it can be quite the shitshow with the GOOG-provided implementation
[18:43:31 CEST] <void09> GOOG?
[18:43:44 CEST] <another> google
[18:44:40 CEST] <giaco> hello
[18:45:04 CEST] <giaco> I have an unknown udp stream coming from a camera, and I want to identify it
[18:45:35 CEST] <giaco> I have setup an offline network with wireshark in the middle and I see the packet stream in the subnetwork
[18:45:35 CEST] <JEEB> I'd just have a list somewhere of where you want to split
[18:45:39 CEST] <JEEB> (for encoders)
[18:45:51 CEST] <JEEB> and then utilize something like ffms2 to easily get frame accurate access
[18:45:55 CEST] <void09> ok then, how would you write this.. first get the scene change frame numbers. then tell ffmpeg to cut at those frame intervals (by remuxing and adding extra frames up to the nearest I-frames, not re-encoding). and then running ffmpeg again so to find the nearest previous and next I-frame for every split frame point, and record those in the file
[18:45:58 CEST] <giaco> how can I identify the stream? Get some info out of it?
[18:46:00 CEST] <JEEB> and just read it simultaneously
[18:46:16 CEST] <JEEB> (from each node with different spots they'd access)
[18:46:37 CEST] <JEEB> audio can be done separately since it generally is fast enough to just encode in a single blast
[18:47:02 CEST] <void09> JEEB: yeah, audio will be done on the server that manages the jobs, once all the encoded pieces are received
[18:47:31 CEST] <another> void09: look into the segment muxer
[18:47:33 CEST] <void09> especially since i don't think there are I-frames in an audio stream ?
[18:47:49 CEST] <JEEB> another: he just needs the nice split points for his *encoder*
[18:47:50 CEST] <void09> so cutting audio into little pieces will reduce encoding efficiency
[18:48:05 CEST] <JEEB> void09: there are. generally all audio frames are separately decode'able
[18:48:19 CEST] <JEEB> the problem is, there is such a thing as "encoder delay" which is a PITA to handle :P
[18:48:20 CEST] <another> void09: you can also look into https://github.com/klaxa/dist-enc
[18:48:47 CEST] <void09> another: that's one of the projects i was eyeing
[18:48:48 CEST] <JEEB> basically the first XYZ samples in the first audio frame will be encoder initialization stuff
[18:48:57 CEST] <another> i'm currently working on a rewrite of it
[18:49:09 CEST] <void09> JEEB: yeah whatever, I think a 2 hour 5.1 opus encode will not be so straining, I have a 12 core machine
[18:49:15 CEST] <JEEB> yes, as I said
[18:49:22 CEST] <JEEB> the audio will go through like a breeze :P
[18:49:24 CEST] <void09> but a 2 hour av1 optimum encode will take 2 months
[18:50:07 CEST] <JEEB> void09: I used to do split encoding with HM with avisynth over wine, but nowadays you have vapoursynth. pipe your python-scripted output from vspipe to an encoder like ffmpeg.c or so
[18:50:23 CEST] <another> void09: you can also look into https://pyscenedetect.readthedocs.io/en/latest/
[18:50:38 CEST] <JEEB> so you index your input file once, and then ffms2 will provide frame-exact access
[18:50:43 CEST] <void09> PySceneDetect is a command-line application and a Python library for detecting scene changes in videos, and automatically splitting the video into separate clips
[18:51:00 CEST] <void09> oh wow, i knew about PySceneDetect but didn't notice it can do splitting too
[18:51:15 CEST] <JEEB> then you have a thing generated for each worker, which together with the vpy file would generate the wanted segments
[18:51:17 CEST] <another> well, it's just calling ffmpeg
[18:51:34 CEST] <JEEB> and then you just stitch the outputs together and walk home happy
[18:52:07 CEST] <void09> vpy ?
[18:52:11 CEST] <void09> too much new info here :)
[18:52:35 CEST] <JEEB> vapoursynth python. basically normal python but utilizing the vapoursynth module
[18:52:56 CEST] <JEEB> and vapoursynth is just a convenient way of using ffms2 effectively, instead of writing custom C code
[18:52:58 CEST] <another> void09: although note that klaxa's tool is a bit buggy
[18:53:36 CEST] <void09> I cannot believe nobody thought to do this already.. perfectly optimized cutting & distributed encoding
[18:54:13 CEST] <taliho> JEEB: Re: Probing: did you mean this patch? https://patchwork.ffmpeg.org/patch/12239/ or is there another one?
[18:54:29 CEST] <JEEB> void09: mostly because you can do business with it :P
[18:54:40 CEST] <JEEB> some people would probably be afraid of telling you what I already did
[18:54:57 CEST] <void09> lol what?
[18:55:08 CEST] <taliho> JEEB: that's already merged, but it only applies to probing of the mpegts packet size
[18:55:11 CEST] <JEEB> I don't disagree
[18:55:22 CEST] <JEEB> taliho: yea I do remember there was something regarding data streams
[18:55:34 CEST] <JEEB> the simplest way to skip that is to just revert the commit that started probing them
[18:55:48 CEST] <JEEB> or add a codec id for the type of data you're encountering
[18:57:28 CEST] <taliho> yes, that sounds easier: add a codec id and initialize it from the command line
[18:57:56 CEST] <taliho> I'll try this idea
[18:58:06 CEST] <JEEB> I don't think you can do it from cli
[18:58:18 CEST] <JEEB> you have to actually figure out how the data stream is being marked in the MPEG-TS
[18:58:35 CEST] <JEEB> in other words, it's simpler to check which commit started probing the data streams
[18:58:41 CEST] <JEEB> and see if that's revertible :P
[18:59:20 CEST] <taliho> ok I see, I'll have a look at the history
[19:00:55 CEST] <JEEB> I'll have to try my luck later with the radio channels I've seen and if the change you linked doesn't help with the probe time then I'll have to figure out how to improve that
[19:01:08 CEST] <JEEB> because with low rate data streams you can easily be waiting for a minute
[19:01:14 CEST] <JEEB> for the probe to get enough data
[19:02:17 CEST] <taliho> yep :( it's a pain
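For what it's worth, the stock knobs for bounding that probing wait are `-probesize` (bytes) and `-analyzeduration` (microseconds); the values and addresses below are illustrative:

```shell
# Cap probing at ~2 MB of input or 5 seconds of stream time
# (-analyzeduration is in microseconds), whichever is hit first.
ffmpeg -probesize 2M -analyzeduration 5M \
       -f mpegts -i udp://127.0.0.1:10001 \
       -map 0 -c copy -f mpegts udp://127.0.0.1:10002
```

This only shortens the wait; it does not help identify the unknown data stream.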
[19:06:57 CEST] <void09> JEEB: there's business in cloud transcoding clips ?
[19:07:20 CEST] <void09> video*
[19:07:20 CEST] <JEEB> just look at the various services doing that?
[19:07:20 CEST] <void09> unless you have a shit ton of video.. who would pay for that
[19:07:20 CEST] <void09> noobs?
[19:07:43 CEST] <JEEB> quite a lot of places just want video out quickly, with multiple variants
[19:07:52 CEST] <JEEB> and ready to pay for it
[19:08:13 CEST] <JEEB> some interfaces to automate it all into their media creation/receival systems, and voila :P
[19:10:26 CEST] <void09> https://cloud.qencode.com/# - 2 hours of 1080p encoding . $57
[19:10:31 CEST] <void09> AV1*
[19:14:20 CEST] <giaco> no hint on how to inspect an unknown udp stream?
[19:16:54 CEST] <another> ffprobe ?
[19:17:54 CEST] <JEEB> ffprobe -v verbose -i udp://blah
[19:26:17 CEST] <giaco> JEEB: with wireshark I see that the stream gets broadcasted into my subnetwork (192.168.8.255)
[19:26:46 CEST] <giaco> is it ffprobe -v verbose -i udp://192.168.8.255 ?
[19:28:57 CEST] <another> you need to take the source ip
[19:29:22 CEST] <DHE> you definitely need the port
[19:29:26 CEST] <DHE> :1234 or whatever
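Combined, giaco's probe would look something like this (the port is hypothetical; read the real destination port off the wireshark capture):

```shell
# Bind to the destination address:port seen in wireshark and let
# ffprobe identify the container and codecs from the payload.
ffprobe -v verbose -i "udp://192.168.8.255:1234"
```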
[19:30:07 CEST] <taliho> JEEB: The issue for me is that if an mpegts stream contains a binary stream (AVMEDIA_TYPE_DATA) it is assigned st->codecpar->codec_id = AV_CODEC_ID_NONE. So probe_codec() will continue probing because of utils.c:749
[19:30:24 CEST] <taliho> and probe_codec() cannot identify the codec because it's some random binary data
[19:30:29 CEST] <JEEB> yup
[19:30:34 CEST] <JEEB> and often low bit rate data
[19:30:41 CEST] <taliho> so shouldn't I create a dummy decoder (i.e. AV_CODEC_ID_DUMMY) and set it with: ./ffmpeg -f mpegts -codec:d:0 dummy -i udp://blah ... -map etc
[19:30:41 CEST] <JEEB> so it's not gonna get much of it
[19:30:55 CEST] <JEEB> taliho: usually the demuxer sets that
[19:31:04 CEST] <JEEB> so I'd be surprised if you could do that :P
[19:31:40 CEST] <taliho> I see. I'll have a look
[19:31:43 CEST] <taliho> thanks
[19:37:07 CEST] <AlexVestin> I'm having issues freeing/closing resources when using the ffmpeg cuda implementation, does anyone know how to close the cuda session?
[19:38:57 CEST] <AlexVestin> I've tried some combinations of using the driver API with the ffmpeg provided free methods but still hit the 2 encoding streams limit when doing encodings sequentially
[19:39:24 CEST] <giaco> DHE: thanks
[19:41:22 CEST] <kepstin> AlexVestin: specifically, you're using avcodec_free_context() to close and free the encoder?
[19:42:33 CEST] <AlexVestin> kepstin it throws errors when called so I tried with the av_buffer_unref
[19:43:09 CEST] <AlexVestin> Im basically using the setup from this SO post: https://stackoverflow.com/questions/49862610/opengl-to-ffmpeg-encode/50339978#comment101367660_50339978
[19:43:18 CEST] <kepstin> av_buffer_unref has nothing to do with closing an encoder
[19:47:34 CEST] <kepstin> AlexVestin: when you say "throws errors", you are of course putting those errors into a pastebin somewhere so we can see them, right?
[19:48:08 CEST] <AlexVestin> hehe of course, it's a short one: malloc_consolidate(): invalid chunk size
[19:48:45 CEST] <kepstin> did you call the function on the wrong thing? It needs to be passed the address of a pointer to a heap-allocated AVCodecContext.
[19:50:35 CEST] <kepstin> note that it frees the codec context, so to start encoding another stream you'll have to allocate and set up a new codec context.
[19:51:50 CEST] <AlexVestin> yeah, two sessions work fine so I think the re-opening works
[19:52:33 CEST] <AlexVestin> the context is allocated with c = avcodec_alloc_context3(codec);
[19:53:03 CEST] <kepstin> in theory, it should be the avcodec_free_context() that closes the encoder and the cuda encoder context. Without that you'd *definitely* be leaking them.
[20:18:56 CEST] <AlexVestin> Yeah I think I might be doing something wrong before I try to free the context, I'll try to isolate the problem
[20:30:16 CEST] <AlexVestin> this reproduces the error: https://pastebin.com/b0z1cEsK
[20:59:08 CEST] <AlexVestin> seems like these free the same memory https://github.com/FFmpeg/FFmpeg/blob/95e5396919b13a00264466b5d766f80f1a4f7fdc/libavcodec/utils.c#L1136
[21:02:17 CEST] <AlexVestin> (Or I get a `free(): double free detected in tcache 2` here at least)
[21:11:04 CEST] <AlexVestin> it also gets freed here when hw_frames_ctx gets freed https://github.com/FFmpeg/FFmpeg/blob/a0ac49e38ee1d1011c394d7be67d0f08b2281526/libavutil/hwcontext.c#L235
[21:13:17 CEST] <void09> blah, is there no video player that can jump to a very specific time or frame ?
[21:14:50 CEST] <BtbN> That's oftentimes not trivially possible, and mostly not needed
[21:16:52 CEST] <void09> I fail to understand what's not trivially possible about it
[21:17:54 CEST] <void09> are there no timestamps/frame ids in an mkv video ?
[21:20:18 CEST] <kepstin> very specific time is easy, many players support that (e.g. mpv)
[21:20:24 CEST] <kepstin> at least for files with indexes
[21:20:28 CEST] <kepstin> specific frame is hard
[21:20:38 CEST] <void09> but for constant fps video, how is that hard?
[21:20:52 CEST] <void09> you convert time into frame
[21:20:56 CEST] <kepstin> indexes don't include frame numbers, so only way to get a specific frame is to decode from the start and count
[21:21:13 CEST] <kepstin> a lot of video formats don't really do "constant fps"
[21:21:34 CEST] <kepstin> constant fps just means that the pts increases by about the same amount (modulo rounding errors) each frame
[21:22:21 CEST] <kepstin> if you know that a video is constant fps through other information, then you can feel free to do the conversion to time yourself, then seek to time.
[21:22:46 CEST] <void09> yes, but i found no video player to allow this
[21:22:52 CEST] <void09> only to the second
[21:23:00 CEST] <kepstin> mpv supports subsecond seeking just fine
[21:23:05 CEST] <void09> how?
[21:23:17 CEST] <void09> oh, I forgot to mention, through a gui
[21:23:33 CEST] <kepstin> well, you can write arbitrary guis in mpv with lua :)
[21:23:43 CEST] <kepstin> the default gui probably doesn't have something for that, yeah
[21:24:53 CEST] <kepstin> but you can tell mpv to start at a particular point in a video with frame accuracy by running with the --start option
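For a constant-frame-rate file, the frame-to-time conversion kepstin describes is simple division (the frame number and rate below are made up for illustration):

```shell
# Frame 990 of a 30 fps video starts at 990/30 = 33.000 seconds.
frame=990
fps=30
start=$(awk -v f="$frame" -v r="$fps" 'BEGIN { printf "%.3f", f / r }')
echo "$start"    # 33.000
# Then seek there frame-accurately, paused:
#   mpv --start="$start" --pause video.mkv
```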
[21:27:37 CEST] <void09> I am trying to verify the accuracy of pyscenedetect scene detection (and the subsequent cutting)
[21:28:30 CEST] <Hello71> heh
[21:29:38 CEST] <void09> and it seems 1 frame off, at least. there's a scene ending at timecode 00:00:33.000. i play mpv with that timecode, paused, and i get the first frame of the next scene
[21:29:47 CEST] <void09> or maybe i am missing some basic maths
[21:32:33 CEST] <void09> scene starts are accurate though. it's just the end timecode corresponding with the first frame of the next scene
[21:32:34 CEST] <void09> any idea?
[21:33:45 CEST] <JEEB> end of a video frame usually is the start of the following one?
[21:33:53 CEST] <kepstin> yeah, that seems right
[21:34:01 CEST] <JEEB> as in, pts + duration = usually pts of packet+1
[21:34:03 CEST] <kepstin> a frame is displayed until the next frame replaces it
[21:34:11 CEST] <kepstin> so the end of a frame is the moment that the next frame appears
[21:35:10 CEST] <void09> damn, that makes sense
[21:35:59 CEST] <void09> does ffmpeg support ignoring the first x frames and the last y frames when decoding a video?
[21:36:25 CEST] <kepstin> you'd use the -ss and -to options to do that, normally.
[21:36:35 CEST] <kepstin> but those take times, not frames
[21:36:40 CEST] <kepstin> conveniently, you have times.
[21:37:16 CEST] <TheAMM> There's the trim filter
[21:37:38 CEST] <kepstin> (note that in this case, you'd normally want to use both -ss and -to as input options, so it can do fast seeking, and the timestamps -to uses are input timestamps)
[21:37:50 CEST] <TheAMM> If you only have to skip a few frames and know exactly how many you want to skip
[21:38:46 CEST] <kepstin> the trim filter operates on decoded video, and the reason it can work with frame numbers is because it sees every frame from the start and counts them.
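What kepstin describes might look like this (the timestamps are hypothetical, taken from a scene list): as input options, `-ss` seeks by index to just before the start time, then ffmpeg decodes forward and discards frames up to it, and `-to` is measured against input timestamps:

```shell
# Decode only the span between two scene-change timestamps and
# re-encode it; frames outside the window are never emitted.
ffmpeg -ss 00:00:33.000 -to 00:00:45.042 -i piece.mkv \
       -c:v libx264 -crf 18 scene.mkv
```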
[21:39:38 CEST] <void09> I have times, I guess, it's just that frames looked like a better, more accurate, option
[21:39:44 CEST] <void09> as they're just integers
[21:40:20 CEST] <kepstin> mkv (as written by most muxers) stores frame timestamps in milliseconds
[21:40:28 CEST] <kepstin> so millisecond timestamps are exact
[21:42:26 CEST] <JEEB> void09: frames are more exact but the FFmpeg framework doesn't work in frames seeking-wise. so, uh, yea
[21:42:34 CEST] <JEEB> write your own indexer or use ffms2
[21:42:43 CEST] <void09> timestamps will have to do then
[21:43:19 CEST] <void09> problem is, when splitting a video with pyscenedetect, using -copy (uses mkvmerge instead of ffmpeg to split losslessly)..
[21:43:51 CEST] <kepstin> keyframes don't necessarily line up with your detected scene cuts, so there's no guarantee that will work
[21:43:58 CEST] <void09> I get no reference to the original time in the video file, and don't know how to compute the extra frames added when cutting
[21:45:00 CEST] <void09> kepstin: yeah, that's what I am talking about. I want to find the extra frames in the video file that are not in the scene as detected by pyscenedetect, so I can ignore them (with ffmpeg -ss -to?)
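To see where the keyframes actually fall (and thus where a lossless cut can land, per kepstin's point that they need not line up with detected scene cuts), ffprobe can list them; this sketch again synthesizes its own input, with a keyframe forced every 20 frames:

```shell
# Synthetic stand-in for the real file; -g 20 forces a keyframe every 20 frames
ffmpeg -y -f lavfi -i testsrc=duration=4:size=128x72:rate=10 -c:v mpeg4 -g 20 input.mkv

# Print keyframe timestamps only: -skip_frame nokey tells the decoder
# to skip every frame that is not a keyframe
ffprobe -v error -skip_frame nokey -select_streams v:0 \
    -show_entries frame=pts_time -of csv=p=0 input.mkv
```

Comparing these timestamps against the scene-cut times shows how many extra frames each losslessly cut piece will carry.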
[21:45:34 CEST] <kepstin> void09: if you're doing stuff with ffmpeg, why not use ffmpeg with -ss and -to directly on the original file?
[21:46:26 CEST] <void09> kepstin: I want to split a long video into scenes (the place where an encoder would put an I-frame), but without re-encoding bits that don't match with the I-frame
[21:46:59 CEST] <void09> and with keeping track of how many extra frames are in each cut piece, so I can skip them when re-encoding the pieces
[21:47:13 CEST] <kepstin> you should really be writing a custom application using ffmpeg apis
[21:47:23 CEST] <void09> I'm not a coder :(
[21:47:40 CEST] <JEEB> then ffms2 through vapoursynth and some python scripting?
[21:47:42 CEST] <kepstin> also note that re-encoding only pieces of a video is hard unless you know the encoder and options used on the original
[21:47:42 CEST] <void09> and this is not commercial, otherwise I'd hire someone
[21:48:15 CEST] <void09> kepstin: not sure i follow that. why should it be hard ?
[21:48:39 CEST] <JEEB> I think he means getting an exact cut with minimal re-encoding
[21:48:43 CEST] <JEEB> which is not what you want
[21:48:52 CEST] <JEEB> your stuff is just really easy to misunderstand :P
[21:48:54 CEST] <void09> yeah, no re-encoding at all for me
[21:49:05 CEST] <kepstin> if you don't re-encode, then you can't get exact seeking
[21:49:14 CEST] <JEEB> you can
[21:49:17 CEST] <JEEB> index with ffms2 or so
[21:49:35 CEST] <void09> I'll have to look into that ffms2 of yours later, JEEB
[21:49:55 CEST] <JEEB> index once, then frame exact access through that index
[21:50:02 CEST] <JEEB> and frame based access
[21:50:10 CEST] <JEEB> it uses FFmpeg underneath
[21:50:14 CEST] <JEEB> the libraries
[21:52:06 CEST] <void09> oh I see. so something ffmpeg should have had in the first place
[21:52:32 CEST] <JEEB> dunno, it's a rather specialized use case and it's better served by an API client that specifically utilizes the functionality for that use case
[21:52:36 CEST] <kepstin> nah, it's really a special case thing for people who explicitly need seeking by frame number
[21:52:58 CEST] <JEEB> of course you could put it into ffmpeg.c
[21:53:02 CEST] <kepstin> an external library makes sense, especially given that you should only get the frame number index if you opt in
[21:53:20 CEST] <JEEB> but yea, vapoursynth is on linux probably the simplest way to utilize ffms2
[21:53:39 CEST] <JEEB> you can make yourself a vapoursynth python script that loads up a list of things
[21:53:47 CEST] <void09> vapoursynth is another thing I have to look into
[21:53:56 CEST] <kepstin> void09: i'm missing a bit of your overall use case for this stuff, it might be that we're getting into complex solutions to a simple problem.
[21:53:56 CEST] <JEEB> like, if you have a file where you have exported how the jobs look like
[21:54:09 CEST] <void09> kepstin: distributed "lossless" encoding
[21:54:18 CEST] <JEEB> uhh
[21:54:25 CEST] <JEEB> what does that lossless mean there?
[21:54:27 CEST] <void09> I want to encode some movies into av1, and a 1080p 2 hour movie takes ~1-2months to encode on a single PC
[21:54:33 CEST] <kepstin> so not lossless
[21:54:45 CEST] <JEEB> void09: you're unnecessarily confusing people :P
[21:54:47 CEST] <void09> lossless as in, i do not want to lose quality when distributing vs encoding on a single machine
[21:54:54 CEST] <void09> sorry :)
[21:54:59 CEST] <kepstin> void09: ah, right, i remember you.
[21:55:11 CEST] <void09> lol yeah I obsess about this
[21:55:24 CEST] <JEEB> anyways, do your scene detection and export a file that your vapoursynth script will take in on each worker
[21:55:37 CEST] <JEEB> then have a network drive or something else that all the workers will access
[21:55:44 CEST] <kepstin> if every worker has access to the full original file this is trivial
[21:56:02 CEST] <void09> oh wow I have not thought of that
[21:56:05 CEST] <kepstin> just get your seek points from the detection, then run ffmpeg with -ss -to to encode the segment, then concatenate after
[21:56:09 CEST] <void09> giving access to the full file :O
[21:56:22 CEST] <void09> omg
[21:56:25 CEST] <JEEB> file + index
[21:56:26 CEST] <JEEB> done
[21:56:43 CEST] <void09> what would ffmpeg allow as a remote ? http works?
[21:56:46 CEST] <kepstin> or index if you want to use frame numbers, but the extra accuracy may or may not be needed.
[21:56:52 CEST] <kepstin> http works, ffmpeg can seek over http
[21:56:56 CEST] <kepstin> make sure the file has an index
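The workflow kepstin outlines — each worker seeking into the shared original, encoding its own segment, then concatenating the results — might look like the sketch below. All names, times, and the codec are hypothetical; in the real setup each worker would open something like an http:// URL to the file server instead of a local path, and ffmpeg seeks over http the same way, as long as the file has a seek index:

```shell
# Stand-in for the shared original (40 frames at 10 fps)
ffmpeg -y -f lavfi -i testsrc=duration=4:size=128x72:rate=10 -c:v mpeg4 original.mkv

# Each worker encodes one scene; -ss/-to come from the scene detection
ffmpeg -y -ss 0 -to 2 -i original.mkv -c:v mpeg4 seg0.mkv
ffmpeg -y -ss 2 -to 4 -i original.mkv -c:v mpeg4 seg1.mkv

# Afterwards the finished segments are joined without re-encoding,
# using the concat demuxer with a simple list file
printf "file 'seg0.mkv'\nfile 'seg1.mkv'\n" > segments.txt
ffmpeg -y -f concat -safe 0 -i segments.txt -c copy joined.mkv
```

Since every worker decodes from the full original, each segment starts from a clean decode at its own cut point and no quality is lost relative to encoding on a single machine.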
[21:57:14 CEST] <void09> what kind of index ?
[21:57:19 CEST] <kepstin> a seek index
[21:57:28 CEST] <void09> is this something that mkv files usually have?
[21:57:30 CEST] <kepstin> yes
[21:57:39 CEST] <void09> oh, so nothing special then. yes, they are proper mkv files
[21:59:24 CEST] <void09> thanks kepstin, I was so stuck on what I thought was the optimal solution, that I missed the obvious :)
[23:34:43 CEST] <sagax> hi all!
[23:34:49 CEST] <sagax> how to crop a video? i want to change the aspect ratio.
[23:35:05 CEST] <sagax> change aspect ratio with crop
[23:36:16 CEST] <sagax> through crop video
[00:00:00 CEST] --- Thu Sep 26 2019