[Ffmpeg-devel-irc] ffmpeg.log.20190814

burek burek at teamnet.rs
Thu Aug 15 03:05:03 EEST 2019


[00:01:46 CEST] <bencc> kepstin: ok so h264 hw acceleration is a gpu thing and not a cpu thing
[00:01:55 CEST] <bencc> they also have nvidia gpus: https://cloud.google.com/compute/gpus-pricing
[00:02:42 CEST] <bencc> should I expect order of magnitude improvement compared to cpu transcoding or is it not worth it?
[00:02:49 CEST] <kepstin> the hardware encoder (it's *not* hardware acceleration) is dedicated hardware separate from the gpu and cpu, but intel bundles it with their gpu
[00:03:52 CEST] <kepstin> nvidia's hardware encoder is one of the best around, as far as i know. they have specifications about how many streams and framerates their encoders support on particular cards on their site
[00:04:54 CEST] <bencc> I'll search for the nvidia specs
[00:04:57 CEST] <bencc> what about amd?
[00:04:59 CEST] <kepstin> my understanding is that nvidia's hardware encoder does quality/filesize comparable to x264's "veryfast" to "medium" depending on settings you choose.
[00:07:21 CEST] <kepstin> i think google's using amd's hardware encoder for the stadia platform? i don't really know much about its performance/quality.
[00:08:07 CEST] <kepstin> the hardware encoders seem to be mostly useful for handling realtime/live encoding circumstances, where you'll be able to get N realtime streams depending on card specs and video resolution/framerate
[00:08:47 CEST] <kepstin> if you're doing encoding for video on demand type applications, the additional compression you can get from a slower-than-realtime software encoder might be preferable.
[00:10:30 CEST] <kepstin> with video on demand you can split an entire video into chunks that are encoded in parallel, so the whole encode can be done quickly even tho each chunk is encoded slower than realtime.
[00:47:27 CEST] <bencc> kepstin: how can I split a video to chunks and combine it later?
[00:47:47 CEST] <kepstin> segment muxer, concat demuxer
[01:18:47 CEST] <bencc> thanks
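
A minimal sketch of the split/encode/join flow described above, using the segment muxer and the concat demuxer; the file names, chunk length, and encoder settings here are assumptions:

  # split at keyframes into ~60s chunks without re-encoding
  ffmpeg -i in.mp4 -c copy -f segment -segment_time 60 chunk_%03d.mp4
  # encode each chunk, one ffmpeg process per chunk, in parallel
  ffmpeg -i chunk_000.mp4 -c:v libx264 -preset slow enc_000.mp4
  # list.txt holds one line per encoded chunk: file 'enc_000.mp4'
  ffmpeg -f concat -safe 0 -i list.txt -c copy joined.mp4
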
[02:35:54 CEST] <w1d3m0d3>  /join #libre.fm
[02:37:57 CEST] <barg> kepstin: ok, so if i do ffmpeg -i blah.mp4 -ss 0:05:00 -to 00:10:00 -c copy blah2.mp4   is that right, no need to re-encode, but should copy the codecs so it doesn't put default ones in?
[02:42:14 CEST] <klaxa> yes, be aware that copying codecs will cut at gop boundaries
[02:43:17 CEST] <nicolas17> what's the boundary for audio?
[02:46:37 CEST] <barg> i don't know but the whole video is about 00:00:00-00:14:00
[02:50:36 CEST] <klaxa> nicolas17: don't know for sure, but a lot smaller than for videos, probably also depends on the codec
[02:50:54 CEST] <klaxa> depending on how many samples they put in one "gop" or rather group of samples
[02:51:46 CEST] <barg> I notice that when I do a line like ffmpeg -i VID_20190808_214609987.mp4 -acodec copy -vcodec copy -ss 00:10:30 987b_TEST.mp4 (so, starting 10min 30sec in), it takes a while saying frame=0 before it begins. And the further in the -ss, the longer it takes to start going beyond frame=0. I guess that's unavoidable?
[02:52:28 CEST] <nicolas17> yes, it decodes all frames from the beginning until it reaches 0:10:30
[02:52:54 CEST] <c_14> you can put the -ss before the -i
[02:52:55 CEST] <klaxa> you can put the -ss before
[02:52:58 CEST] <klaxa> you sniper
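
Concretely, for the stream-copy case above (file names assumed), output seeking reads everything up to the cut point, while input seeking jumps near the target before reading starts:

  # -ss after -i: reads from the beginning up to 00:10:30 (slow start)
  ffmpeg -i in.mp4 -acodec copy -vcodec copy -ss 00:10:30 out.mp4
  # -ss before -i: seeks in the input before reading (fast start)
  ffmpeg -ss 00:10:30 -i in.mp4 -acodec copy -vcodec copy out.mp4
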
[02:53:04 CEST] <nicolas17> hm
[02:53:22 CEST] <nicolas17> what is key_frame in the output of "ffprobe -show_frames"?
[02:54:16 CEST] <klaxa> for video streams it's the start of gops
[02:54:27 CEST] <klaxa> not sure about audio?
[02:54:30 CEST] <nicolas17> I guess I found another bug / missing feature
[02:54:44 CEST] <nicolas17> I get key_frame=1 for *every* frame in av1 video
[05:08:25 CEST] <void09> is it possible to record the output of a full screen program in a way that it does not interfere with my current desktop work ? that is, so I can do other stuff while it's recording. something like a virtual monitor.
[05:19:37 CEST] <nicolas17> -vcodec libx265 -profile:v slow
[05:19:47 CEST] <nicolas17> what defines what 'slow' means there? is it in ffmpeg or in x265?
[05:20:20 CEST] <nicolas17> er preset*
[05:24:22 CEST] <void09> it's in ffmpeg, for x265 ? the encoding profile for x265
[05:26:41 CEST] <nicolas17> I think it's in x265 (eg. I can't find the string "placebo" in ffmpeg's source code)
[05:26:47 CEST] <nicolas17> now to figure out *where* in x265 code
[05:30:08 CEST] <void09> it's the cpu time allocated for the encode, from ultrafast to placebo. the faster the preset, the less cpu time needed and the less efficient the encode
[05:30:43 CEST] <void09> hmm I'd have figured those strings would be present in the ffmpeg code too
[05:41:23 CEST] <nicolas17> found x265/doc/reST/presets.rst which explains what each preset does :)
[08:06:39 CEST] <kepstin> nicolas17: if your ffmpeg is built with the dav1d decoder, that decoder should indicate frame properties better.
[09:01:21 CEST] <Hackerpcs> how is the aac encoder nowadays? Would switching from fdk aac be advisable?
[09:40:52 CEST] <furq> no
[12:19:32 CEST] <DHE> Hackerpcs: it's good enough if you can't deal with the fdk license. but if you can deal, stick with fdk
[13:52:29 CEST] <shroomM> hey
[13:52:41 CEST] <shroomM> I have a question regarding outputting fragmented mp4
[13:53:26 CEST] <shroomM> when I use ffmpeg to output to a fragmented mp4 (-movflags frag_keyframe), the resulting file's first frame has a timestamp of 0.8s and not 0s as I'd expect
[13:54:35 CEST] <shroomM> this seems to cause issues with a packager I'm trying to use to produce a DASH manifest
[13:54:49 CEST] <shroomM> any idea what's the reason behind this ?
[13:58:27 CEST] <shroomM> a cli "ffmpeg -i video.mov -an -g 48 -movflags frag_keyframe -b:v 1048576 -f mp4 foo.mp4" produces a file which shows this information: "Duration: 00:01:00.00, start: 0.083333, bitrate: 1484 kb/s"
[13:59:48 CEST] <shroomM> removing the movflags part (ffmpeg -i video.mov -an -g 48 -b:v 1048576 -f mp4 foo2.mp4) gives me: "Duration: 00:01:00.00, start: 0.000000, bitrate: 1484 kb/s"
[14:11:32 CEST] <shroomM> ah, found it, seems "negative_cts_offsets" makes the first frame have a timestamp of 0
[14:13:44 CEST] <ritsuka> you either need to use negative ctts offsets, or an edit list
[14:13:57 CEST] <ritsuka> but maybe your packager doesn't like edit lists
[14:33:20 CEST] <shroomM> ritsuka thanks, found that as well, yes
[14:33:34 CEST] <shroomM> I'm just now looking at the movenc.c code to see what it all means
[14:33:46 CEST] <shroomM> and I found there's an undocumented movflag, "dash"
[14:33:51 CEST] <shroomM> should this be used?
[14:34:17 CEST] <shroomM> well, I'm not sure whether it's undocumented, I didn't see it on ffmpeg.org
[16:14:10 CEST] <shroomM> ok, so the negative_cts_offsets doesn't have any effect if it's specified with the empty_moov movflag
[16:14:48 CEST] <shroomM> "ffmpeg -i video.mov -an -g 48 -movflags frag_keyframe+empty_moov+negative_cts_offsets -b:v 1048576 -f mp4 foo.mp4" still produces an mp4 with "Duration: 00:00:10.13, start: 0.083333, bitrate: 3672 kb/s" :(
[16:15:01 CEST] <shroomM> if I omit the empty_moov, then it's fine
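
So, per the experiments above, a command along these lines should give a start time of 0, as long as empty_moov is not also set:

  ffmpeg -i video.mov -an -g 48 -movflags frag_keyframe+negative_cts_offsets -b:v 1048576 -f mp4 foo.mp4
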
[17:24:25 CEST] <devinheitmueller> Question:  has anyone put together a methodology to inject arbitrary frames if the real source stops delivering frames?  I.e. encode colorbars on loss-of-signal.  I can probably write some sort of filter from scratch, but wasn't sure if there was some wonky pipeline I could set up which uses the streamselect filter.
[17:24:50 CEST] <BtbN> Twitch does that if your stream crashes and you enabled the option.
[17:25:00 CEST] <BtbN> It does it by just replacing entire HLS segments
[17:25:20 CEST] <devinheitmueller> This is a bit harder than the traditional filtering scenario, where filters are run when a frame arrives.  In this case I want the filter to be used in the *absence* of frames.
[17:25:27 CEST] <BtbN> An ffmpeg filter won't help you. If the input dies, the entire process stops.
[17:25:46 CEST] <BtbN> Easiest to do this out of band
[17:25:56 CEST] <devinheitmueller> Well in this case I'm talking about a realtime TS, so I don't get an EOF or anything.
[17:25:57 CEST] <furq> yeah and if a frame is late then the filter has no input
[17:26:42 CEST] <devinheitmueller> I know the libavfilter framework has an alternative entry point (request_frame) which I assume can be invoked even if the original source doesn't deliver a frame.
[17:27:30 CEST] <BtbN> libavfilter simply is the entirely wrong place for that kinda thing
[17:27:42 CEST] <BtbN> They are filters, nothing more
[17:28:17 CEST] <BtbN> You would need to write an input protocol wrapper that has reconnect-logic and sends "Input is disconnected" intermediate data
[17:29:47 CEST] <devinheitmueller> Hmm, that feels like the wrong approach.  I can imagine all sorts of cases where I might want to arbitrarily replace the source after the decode phase.  For example, based on the blackdetect filter.
[17:30:15 CEST] <BtbN> ffmpeg CLI has no provisions for arbitrarily replacing the source
[17:30:24 CEST] <BtbN> You will have to write your own application for that
[17:30:38 CEST] <devinheitmueller> Well I'm not against having two sources (the real source, and one producing colorbars)
[17:30:54 CEST] <devinheitmueller> And then some filter chooses which one to pass through.
[17:31:09 CEST] <BtbN> If one of two sources dies, the whole chain dies with it
[17:31:24 CEST] <devinheitmueller> Yeah, but the source isn't actually dying.  It may very well come back.
[17:31:30 CEST] <BtbN> That doesn't matter
[17:31:34 CEST] <BtbN> it does not produce frames anymore
[17:31:54 CEST] <BtbN> so either it blocks the entire chain until more data comes in, or it goes EOF and the chain finishes up and ends
[17:32:06 CEST] <devinheitmueller> Right, but it might start producing frames again 30 seconds later.  It would be great to be able to throw up a "please stand by" message in the video for that duration.
[17:32:29 CEST] <devinheitmueller> Presumably ffmpeg would continue to service the second source even if the first is blocked.
[17:32:29 CEST] <BtbN> That's why you need a wrapped source that accounts for that
[17:32:34 CEST] <BtbN> ffmpeg does not
[17:32:52 CEST] <BtbN> that's not how any of the code works, and changing it to behave like that is pretty much impossible without a major rewrite of... everything
[17:33:20 CEST] <BtbN> I don't see the issue with a wrapper-protocol that implements that kind of fallback behaviour
[17:33:58 CEST] <devinheitmueller> Well the wrapper protocol approach would require me to deliver an entire alternative compressed stream with the same map layout.
[17:34:34 CEST] <devinheitmueller> You would think people would have this issue with stuff like mosaics/multiviews.  If one of the sources stops, you wouldn't expect the entire app to hang...
[17:34:46 CEST] <DHE> you'd have to write some kind of protocol, like http or whatever, that expects realtime data from the wrapped protocol and if it doesn't get it starts producing alternative frames.
[17:35:16 CEST] <DHE> the main issue I see with that is it would need to provide these alternative frames in the same encoding as the original source (eg: h264 at specific resolution, probably same framerate, etc)
[17:36:04 CEST] <devinheitmueller> That's why I think it should be at the filter level.  From within the filter it's trivial to create video frames at the appropriate resolution and framerate.
[17:36:51 CEST] <BtbN> Except that it's too late to do what you want to do at the filter level
[17:36:52 CEST] <kepstin> filters don't handle realtime stuff like that. they just request frames from their sources or upstream filters, and then get activated again later when the frames are available.
[17:38:21 CEST] <devinheitmueller> kepstin: My thought would be to have a filter where request_frame delivers the most recent frame from upstream, and if there is no such frame it creates one.  Of course in that case it would be good to repeat the previous frame a certain number of times before switching to the secondary frame source.
[17:38:53 CEST] <BtbN> If there is no such frame, it blocks until there is one, or until EOF
[17:38:59 CEST] <kepstin> how does it know that there's "no such frame"? all it does is request a frame, then it doesn't get run again until the frame is available.
[17:39:40 CEST] <devinheitmueller> Yeah, I think that's part of the issue - I don't fully understand how the request_frame API is implemented.
[17:40:02 CEST] <devinheitmueller> All my filters have used the more traditional filter_frame API, where the filter gets invoked when a frame arrives from upstream.
[17:40:35 CEST] <kepstin> all the filter frame stuff is actually implemented in terms of the newer activate api, fwiw (there's some wrapper code in the filter core that adapts between the layers)
[17:41:14 CEST] <devinheitmueller> Yeah, I'm not shocked that filter_frame is called from within the new API.  Is there any documentation describing the activate api?
[17:41:31 CEST] <devinheitmueller> PS.  ffmpeg's documentation is absolutely terrible.
[17:41:40 CEST] <kepstin> (rough summary:) in the background, the filter_frame wrapper gets kicked when a downstream filter/sink wants a frame, then it requests a frame from the upstream filter/source, then when that frame is available it runs the filter_frame function.
[17:42:04 CEST] <devinheitmueller> Ok, that makes sense.
[17:43:07 CEST] <devinheitmueller> Hmm, ok I see the problem.  I can't rely on the call to filter_frame telling the filter whether this is the frame arriving for the first time, or if it's because the activate API invoked it.
[17:43:53 CEST] <BtbN> It would also be a major API and ABI break to change that behaviour
[17:44:06 CEST] <kepstin> the filter system's all pull-based, a new frame being buffered to a filter doesn't activate it iirc
[17:45:03 CEST] <kepstin> i can't actually remember if that's true, the code's hard to follow :)
[17:45:35 CEST] <devinheitmueller> kepstin: I think that was part of my misunderstanding.  Because I've always done filter_frame, I assumed it was push-based.  If it really is pull-based, then I should be able to synthesize frames from scratch in the event the source isn't delivering any.
[17:46:15 CEST] <kepstin> devinheitmueller: the problem is that the only way to know if an upstream filter is delivering frames, is to ask it for a frame, and then your filter doesn't get run again until a frame is available
[17:46:16 CEST] <BtbN> except that the pull-function blocks if there is no frame
[17:46:45 CEST] <BtbN> making it non-blocking would break the behaviour of all of the libav* family libs
[17:46:47 CEST] <devinheitmueller> Ah, I see.
[17:47:29 CEST] <durandal_1707> that's just how buffersrc/buffersink works
[17:49:39 CEST] <kepstin> in the activate based api, the pull function isn't blocking, it's asynchronous. You ask for frame, then return, and you get run again later once there's a frame sitting in your input buffer to read (or an eof/error indicator).
[17:49:54 CEST] <devinheitmueller> I guess if I had a dedicated thread servicing the filters and feeding the result onto a FIFO, I could have the other end of the FIFO time out if no frames are delivered within a time period.
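
That dedicated-thread-plus-FIFO idea is application-level plumbing rather than libavfilter code. A minimal sketch in C with pthreads (all names are hypothetical, the one-slot queue is a simplification, and the 200 ms deadline is an arbitrary choice) could look like:

  #include <pthread.h>
  #include <stdio.h>
  #include <time.h>

  static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
  static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
  static void *pending;   /* one-slot frame queue; a real one would be deeper */

  /* producer side: the thread servicing the real source pushes frames here */
  static void push_frame(void *frame)
  {
      pthread_mutex_lock(&lock);
      pending = frame;
      pthread_cond_signal(&cond);
      pthread_mutex_unlock(&lock);
  }

  /* consumer side: returns a source frame, or NULL after ~200 ms of silence,
   * at which point the caller repeats the last frame or switches to bars */
  static void *pull_frame_or_timeout(void)
  {
      struct timespec deadline;
      void *frame;
      clock_gettime(CLOCK_REALTIME, &deadline);
      deadline.tv_nsec += 200000000L;
      if (deadline.tv_nsec >= 1000000000L) {
          deadline.tv_sec++;
          deadline.tv_nsec -= 1000000000L;
      }
      pthread_mutex_lock(&lock);
      while (!pending &&
             pthread_cond_timedwait(&cond, &lock, &deadline) == 0)
          ;   /* loop again on spurious wakeup; break out on timeout */
      frame = pending;
      pending = NULL;
      pthread_mutex_unlock(&lock);
      return frame;   /* NULL means no frame arrived in time */
  }

  int main(void)
  {
      if (!pull_frame_or_timeout())
          puts("timed out: synthesize a standby frame here");
      (void)push_frame;   /* would be called from the source thread */
      return 0;
  }
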
[17:50:44 CEST] <kepstin> gonna be honest here, you probably could build what you want in the gstreamer graph system (which does support threads, things timed to clocks, etc).
[17:51:10 CEST] <kepstin> or of course by using ffmpeg apis yourself and swapping between reading from different inputs based on knowledge you have of stream state.
[17:51:41 CEST] <devinheitmueller> Oh I'm totally not against calling the APIs directly.  I don't need it to work in ffmpeg.c.  That said, I want it to conform to the libavfilter API so I can upstream it.
[17:52:23 CEST] <devinheitmueller> (would be nice if it worked from within ffmpeg.c of course, but that's not a showstopper)
[17:52:49 CEST] <kepstin> this cannot be done in the libavfilter api, and putting it in the ffmpeg.c code would require some rewriting (which it probably needs, really, given the number of people who try to use it for realtime stuff)
[17:53:16 CEST] <devinheitmueller> Oh don't get me started on the steaming pile of garbage that is ffmpeg.c.  No disrespect intended.
[17:53:43 CEST] <BtbN> It's pretty good for what it is. It tries to cater to every possible use case, and in most cases manages to do so.
[19:33:02 CEST] <Hackerpcs> DHE: Thanks, it's only for personal use so no licensing issues. Sticking to fdk then
[19:33:31 CEST] <durandal_1707> really?
[19:41:45 CEST] <furq> Hackerpcs: wrong kind of license
[19:42:04 CEST] <furq> fdk isn't gpl compatible so you can't distribute ffmpeg binaries with fdk
[19:45:45 CEST] <Hackerpcs> ehm I just said it's for personal use?
[19:46:22 CEST] <furq> oh
[19:46:26 CEST] <furq> ok then
[20:45:10 CEST] <Spring> is there a way to tell ffmpeg to not include the last image (in an incremental sequence of image files) for the input when encoding?
[20:45:31 CEST] <Spring> eg: let's say I have 60 images and want to only include the first 59 of them for the encode
[20:45:39 CEST] <durandal_1707> yes trim filter
[20:50:50 CEST] <Spring> looks like -vframes <total frames> is perhaps the simpler method in my case since I only need to grab the first n number of frames
[20:51:09 CEST] <durandal_1707> yes
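
For the 60-image case above, that would be something like this (the input pattern, rate, and encoder are assumptions):

  # encode only the first 59 frames of the sequence
  ffmpeg -framerate 30 -i img_%03d.png -vframes 59 -c:v libx264 out.mp4
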
[20:52:50 CEST] <void09> what would be a good lossless codec to record on a dual core intel i7 mobile cpu, with a sata ssd drive connected over usb3?
[20:53:34 CEST] <durandal_1707> resolution and framerate?
[20:53:51 CEST] <void09> 1920x1080 60fps. I'm thinking something that can detect duplicate frames maybe, as the source video is 24fps
[20:54:20 CEST] <void09> but recording at 24 fps might miss some frames I think, no?
[20:54:36 CEST] <another> ffvhuff e.g.
[20:55:13 CEST] <durandal_1707> it doesn't detect dupe frames
[20:55:16 CEST] <void09> that's what I am using at the moment. I could do with something even faster (less cpu intensive) as I have 2TB to spare
[20:55:28 CEST] <void09> faster and dupe frame detection would be great
[20:55:43 CEST] <void09> x264 is too much for that cpu I think
[20:55:55 CEST] <durandal_1707> you can't get much faster
[20:56:01 CEST] <durandal_1707> than ffvhuff
[20:56:12 CEST] <void09> the original huff ?
[20:56:17 CEST] <durandal_1707> both
[20:56:28 CEST] <void09> raw video then :D
[20:57:13 CEST] <durandal_1707> waste of resources
[20:57:40 CEST] <void09> I just want to not drop frames, I can re-encode after
[20:58:19 CEST] <void09> is it possible to tell it not to drop frames, but somehow keep data it can't process fast enough in the RAM, like in a queue ?
[20:59:17 CEST] <another> you can also try utvideo
[20:59:21 CEST] <durandal_1707> that can not work
[20:59:30 CEST] <durandal_1707> try netter hw
[20:59:35 CEST] <durandal_1707> better
[21:00:50 CEST] <durandal_1707> or gpu for encoding
[21:01:20 CEST] <void09> can gpu be used to accelerate ffvhuff ? or x264 lossless
[21:01:43 CEST] <durandal_1707> i doubt so
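
For reference, the capture commands under discussion might look roughly like this, assuming an x11grab screen source (display, sizes, and file names are placeholders):

  ffmpeg -f x11grab -framerate 60 -video_size 1920x1080 -i :0.0 -c:v ffvhuff grab.nut
  ffmpeg -f x11grab -framerate 60 -video_size 1920x1080 -i :0.0 -c:v utvideo grab.mkv
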
[21:02:29 CEST] <petecouture> For building an RTMP server: has placement of the listen parameter changed on the input? I'm using an old script which is now throwing a socket error like it's trying to connect instead of waiting for a connection.
[21:02:30 CEST] <petecouture> https://pastebin.com/1s1aBhHg
[21:02:34 CEST] <void09> blah, I guess I'll have to build another machine
[21:03:26 CEST] <DHE> lossless H264 maybe, if the GPU supports it.
[21:03:47 CEST] <DHE> of course, there's still colourspace losses as always
[21:04:57 CEST] <void09> DHE no way to avoid those ? I am recording in rgb24
[21:05:18 CEST] <DHE> I strongly suspect no GPU supports RGB encoding, let alone with lossless
[21:05:28 CEST] <void09> video I am recording is x264 so most likely yuv420
[21:05:48 CEST] <DHE> x264 does support higher colourspace modes if you ask for them
[21:06:07 CEST] <void09> ok nevermind about codec and acceleration, just avoiding the colourspace loss
[21:06:52 CEST] <Spring> what would be the simplest way to crop by one pixel amount from the left side and a different amount from the right side? My brain is a bit fizzled atm.
[21:06:57 CEST] <DHE> yuv4:4:4 might work, and while it's not perfectly lossless it's probably close enough
[21:07:55 CEST] <void09> you mean it would be a better pick for recording yuv420 as they share the same original colourspace?
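
For reference, libx264 can do genuinely lossless encodes with -qp 0, either staying in RGB via libx264rgb or converting once to yuv444p (a sketch, not a recommendation for a dual-core cpu; file names assumed):

  # lossless, keeping rgb24 end to end
  ffmpeg -i grab.nut -c:v libx264rgb -qp 0 -preset ultrafast lossless.mkv
  # lossless in 4:4:4 (one rgb->yuv conversion, then no further loss)
  ffmpeg -i grab.nut -c:v libx264 -qp 0 -pix_fmt yuv444p lossless444.mkv
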
[21:08:05 CEST] <Spring> or does ffmpeg always require that the dimensions /after/ the first crop be the resulting output?
[21:08:46 CEST] <Spring> oh god, hold on, it's coming back to me now :p
[21:08:58 CEST] <DHE> Spring: filters can be chained. if you only invoke 1 filter, don't be surprised that its output is the resulting video
[21:09:49 CEST] <Spring> yeah, don't worry, I can achieve this how I used to by setting a beginning x/y coord (the offset from either the left/right side) then the crop dimensions instead.
[21:10:13 CEST] <Spring> was a bit hazy for a minute
[21:10:55 CEST] <Spring> (of course this particular way only works when I know the original dimensions but it'll do for now)
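
The crop Spring is after can also be written without knowing the source dimensions, since the crop filter exposes iw/ih; here 100 and 40 are placeholder amounts trimmed from the left and right:

  # crop=w:h:x:y -> width shrinks by left+right, x offset skips the left part
  ffmpeg -i in.mp4 -vf "crop=iw-100-40:ih:100:0" out.mp4
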
[21:33:13 CEST] <void09> on a screen grab, any way to see or log which frame numbers were skipped?
[21:33:21 CEST] <void09> dropped*
[21:38:17 CEST] <Spring> has the syntax for metadata rotation in MP4 containers changed in the last few years? As I switched to a recent build and `-metadata:s:v rotate="270"` doesn't do anything now.
[21:45:14 CEST] <Spring> yeah, switched back to a 2016 build (also from Zeranoe) and the rotation metadata works again.
[22:39:36 CEST] <petecouture> When using the -listen 1 flag on input to create an RTMP server, does the origin RTMP stream have to be publishing prior to starting the server, or is there a way that ffmpeg can sit idle awaiting an input connection?
[22:39:55 CEST] <petecouture> I know the common solution is to use nginx but I'm seeing if it can be done via command line
[22:45:42 CEST] <petecouture> I have the -timeout 60 flag set per the rtmp spec. I'm assuming it would keep the listener open awaiting an attempt
[22:50:30 CEST] <petecouture> Realize I probably have the input formatted incorrectly. What I've found online is people listing the listen parameter for the HTTP server but there isn't an example of the RTMP server being used
[23:37:32 CEST] <petecouture> another: This is the old script
[23:37:33 CEST] <petecouture> https://pastebin.com/1s1aBhHg
[23:37:41 CEST] <petecouture> Which includes the listen command you can find on google
[23:38:09 CEST] <petecouture> After reviewing all the rtmp docs I believe they are all wrong, incorrectly listing how to form the rtmp string based on the HTTP server settings
[23:38:12 CEST] <another> i've seen that
[23:38:40 CEST] <petecouture> New one
[23:38:41 CEST] <petecouture> https://pastebin.com/ZfhFCb2i
[23:38:52 CEST] <petecouture> with the listen and timeout parameters included within the URI string
[23:39:13 CEST] <petecouture> Which I believe is the correct way based on the examples from the other rt* based connection documentation
[23:39:18 CEST] <petecouture> I'm pulling the ffmpeg source code now
[23:40:14 CEST] <another> https://ffmpeg.org/ffmpeg-all.html#rtmp
[23:40:31 CEST] <petecouture> Correct, I've been staring at that for a few days now
[23:40:32 CEST] <another> the rtmp protocol has a -listen option
[23:40:50 CEST] <petecouture> Correct, but it doesn't document how the option is used
[23:41:01 CEST] <petecouture> And it lists it as a parameter
[23:41:07 CEST] <petecouture> included within the URL parameter lists
[23:41:10 CEST] <another> which when i run it, opens a socket on the specified port
[23:41:13 CEST] <petecouture> only the example doesn't show how it's used
[23:41:19 CEST] <petecouture> you're serious...
[23:41:23 CEST] <petecouture>   derr Encoder Error:  ffmpeg exited with code 1: RTMP_Connect0, failed to connect socket. 61 (Connection refused)
[23:41:26 CEST] <petecouture> This is the error I get
[23:41:38 CEST] <BtbN> so, the server refused your connection then
[23:41:59 CEST] <petecouture> The server is supposed to open the connection
[23:42:04 CEST] <another> what version are you running?
[23:42:06 CEST] <petecouture> and there isn't anything publishing to the URI yet
[23:42:16 CEST] <petecouture> My goal is to open the socket and leave it listening
[23:42:24 CEST] <petecouture> Or understand how that works
[23:42:28 CEST] <BtbN> Can ffmpeg even act as rtmp server? I was only aware of client support.
[23:42:39 CEST] <petecouture> With the listen flag its supposed to be able to
[23:42:50 CEST] <petecouture> listen
[23:42:51 CEST] <petecouture> Act as a server, listening for an incoming connection.
[23:43:41 CEST] <petecouture> BtbN: This is one of those undocumented things where a Google search will bring up many bad examples.
[23:43:56 CEST] <petecouture> another: 4.1.4
[23:44:07 CEST] <petecouture> Which is this year, the listen flag was added in 2017
[23:44:08 CEST] <BtbN> RTMP_Connect0 indicates you are using librtmp. The listen flag is only implemented in the native rtmp code. Those two are mutually exclusive.
[23:44:22 CEST] <petecouture> BtbN: dammit, you're right
[23:44:29 CEST] <petecouture> So I have to rebuild ffmpeg
[23:44:48 CEST] <BtbN> librtmp support really should just be removed at this point imo
[23:45:06 CEST] <petecouture> I literally thought about that.
[23:45:15 CEST] <petecouture> I was building ffmpeg with librtmp for raspi a year ago
[23:45:33 CEST] <BtbN> You have to build with --disable-librtmp to get proper rtmp support.
[23:45:50 CEST] <petecouture> <3
[23:46:35 CEST] <petecouture> Literally had a meeting last week where I said the issue may be because librtmp was built into ffmpeg, but didn't go down that path... cats and dogs getting along this week, total anarchy...
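
With a native-rtmp build (--disable-librtmp), the listening side then looks something like this; ffmpeg opens the socket and waits for a publisher to connect (the app/stream name and output file are assumptions):

  ffmpeg -listen 1 -timeout 60 -i rtmp://0.0.0.0:1935/live/stream -c copy dump.flv
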
[23:56:26 CEST] <Classsic> Hi, can somebody point out how to decode with qsv and encode with libx264 on windows?
[00:00:00 CEST] --- Thu Aug 15 2019


More information about the Ffmpeg-devel-irc mailing list