[Ffmpeg-devel-irc] ffmpeg.log.20190802
burek
burek021 at gmail.com
Wed Aug 21 22:32:31 EEST 2019
[01:51:00 CEST] <stevessss> so.. zhiyun crane 3 lab offers image transmission to your phone from hdmi capture over wifi.... their mobile app includes bundled ffmpeg.... how do I go about asking that they release the source to their ffmpeg based mobile app?
[01:51:14 CEST] <stevessss> I think a copyright holder to ffmpeg has to assert their gpl rights
[01:57:25 CEST] <stevessss> https://play.google.com/store/apps/details?id=com.zhiyun.zyplay
[01:58:20 CEST] <stevessss> it uses ffmpeg for playing the video stream over udp, using some undocumented protocol... as a linked .so file... reporting to apple will make it a permanent app ban.. but maybe google will block it until such point as they release their source code for that portion of the app that uses ffmpeg
[01:59:48 CEST] <stevessss> apple hates open source and would want it blocked from their store for using it regardless of whether it goes back into gpl compliance... and also finds that distributing gpl apps is against their tos
[02:06:07 CEST] <stevessss> sorry.. that is lgpl.. as long as it's linked and not modified they can use that
[02:43:46 CEST] <Retal> Hello guys. I have 2 Nvidia GPUs in my Linux server. How can I turn off 1 card from startup?
[03:06:26 CEST] <Beam_Inn> would anyone be willing to help with a command/script that converts date-time?
[03:06:28 CEST] <Beam_Inn> I have a rough draft
[03:10:06 CEST] <relaxed> wrong channel?
[03:12:03 CEST] <Beam_Inn> nah, i'm trying to use ffmpeg to clip a section of the video
[03:12:12 CEST] <Beam_Inn> and i haven't been successful using snippets from the internet
[03:12:26 CEST] <Beam_Inn> ffmpeg -i test.mp4 -ss 00:00:03 -t 00:00:08 -async 1 cut.mp4
[03:12:39 CEST] <Beam_Inn> for example I should expect this to start from 3 seconds into the video and create an 8-second clip
[03:13:53 CEST] <relaxed> that's correct
[03:13:57 CEST] <Beam_Inn> i mean, I'm using a programming language that doesn't have date-time to convert the 'clip length' into a 'end-time' parameter.
[03:14:10 CEST] <Beam_Inn> oh. maybe i just need to update or something, then.
[03:14:27 CEST] <Beam_Inn> let me see if they posted a COM datetime method yesterday
[03:15:38 CEST] <relaxed> you can use seconds if that's easier
[03:16:24 CEST] <Beam_Inn> how's that?
[03:16:34 CEST] <relaxed> -ss 3 -t 8
[03:16:38 CEST] <Beam_Inn> I mean, honestly I was planning to just write an hh mm ss converter
[03:16:52 CEST] <relaxed> that would work too
[03:17:29 CEST] <relaxed> which language?
[03:17:40 CEST] <Beam_Inn> and then convert the starttime to seconds and the endtime to seconds then use the endtime-starttime to get cliptime.
[03:18:03 CEST] <Beam_Inn> i always need to write it out because I guess it's just the way my mind works, it seems like starttime+endtime should equal total time, but obviously that's stupid.
[03:18:07 CEST] <Beam_Inn> just ahk
[03:18:16 CEST] <Beam_Inn> i mean it's for personal time saving
[03:18:47 CEST] <Beam_Inn> i do a lot of video clipping for premiere studies
[03:20:11 CEST] <Beam_Inn> i mean i already wrote the code.
[03:20:17 CEST] <Beam_Inn> i guess it's just buggy.
[03:20:37 CEST] <Beam_Inn> it's probably a scope error or something. i'll try to figure it out and check in later.
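The hh:mm:ss-to-seconds conversion Beam_Inn describes above can be sketched as a small bash helper (a hypothetical script, not part of ffmpeg; assumes bash for the here-string and base-10 arithmetic):

```shell
# Convert hh:mm:ss to total seconds. 10# forces base 10 so "08" is
# not parsed as an (invalid) octal number.
to_seconds() {
  IFS=: read -r h m s <<< "$1"
  echo $(( 10#$h * 3600 + 10#$m * 60 + 10#$s ))
}

# Clip length for -t is end time minus start time.
clip_length() {
  echo $(( $(to_seconds "$2") - $(to_seconds "$1") ))
}

to_seconds 00:01:30          # prints 90
clip_length 00:00:03 00:00:11  # prints 8
```

As relaxed notes, ffmpeg itself accepts plain seconds for -ss and -t, so the converted values can be passed straight through.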
[04:05:37 CEST] <renatosilva> is there any utility in ffmpeg that can detect video streaming on web browser? I'm on windows
[04:06:36 CEST] <another> what do you mean?
[04:10:04 CEST] <DHE> what do you mean by "detect"? some browsers (most? I know chrome does) use ffmpeg for their video streaming capabilities
[05:18:20 CEST] <renatosilva> detect that a video stream has started, or detect that a video is being watched
[05:19:02 CEST] <renatosilva> for example, I used to do this through windows event log, used to work pretty well but the event is not registered anymore
[07:53:16 CEST] <twb> I've got two security camera feeds, and I want to join them (the video part) together side-by-side into a single feed. i.e. the movie equivalent of imagemagick's montage -mode concatenate
[07:53:28 CEST] <twb> Where in the manpage should I be looking?
[09:26:52 CEST] <twb> https://stackoverflow.com/questions/28078510/ffmpeg-side-by-side-camera-and-audio-recording looks relevant
[09:27:10 CEST] <twb> Unsurprisingly, the filters look complicated and scary
[09:27:25 CEST] <furq> twb: https://ffmpeg.org/ffmpeg-filters.html#hstack
[09:28:09 CEST] <furq> bear in mind the ffmpeg cli doesn't do any kind of reconnection management or anything like that, so if one feed drops the output will die
[09:28:23 CEST] <furq> if these feeds are on lan it should be tolerable though
[09:28:39 CEST] <JEEB> yea, I would do an API client that would handle inputs going up and down personally
[09:28:45 CEST] <JEEB> possibly with something like upipe, dunno
[09:29:22 CEST] <furq> for two local feeds ffmpeg should be fine
[09:29:48 CEST] <furq> with lots of feeds and/or remote feeds you're going to run into sync issues and dropouts etc
[09:31:15 CEST] <furq> twb: -lavfi [0:v]setpts=PTS-STARTPTS[l];[1:v]setpts=PTS-STARTPTS[r];[l][r]hstack
[09:31:17 CEST] <furq> something like that
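Put together, furq's filter chain would look roughly like this (file names are placeholders; a sketch assembled for inspection, not verified against this setup):

```shell
# Side-by-side composite of two inputs: setpts aligns both streams to a
# common start timestamp, then hstack joins them horizontally (the inputs
# must have the same height).
cmd=(ffmpeg -i left.mp4 -i right.mp4
     -filter_complex '[0:v]setpts=PTS-STARTPTS[l];[1:v]setpts=PTS-STARTPTS[r];[l][r]hstack'
     -c:v libx264 out.mp4)
printf '%s\n' "${cmd[*]}"
```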
[09:33:05 CEST] <twb> For stupid reasons, the cameras are not directly accessible; they're being streamed to a shitty proprietary windows thing that runs on the same host as ffmpeg. ffmpeg is currently reading rtmp:// from one camera, then remuxing it as flv, then sending it to a youtube stream thing
[09:33:40 CEST] <twb> I wanted to just run two ffmpegs, each reading from one camera and writing to one youtube, but apparently the owner's youtube... account?... doesn't allow that
[09:34:07 CEST] <JEEB> lol
[09:35:04 CEST] <furq> https://support.google.com/youtube/answer/2853812
[09:35:05 CEST] <furq> maybe this
[09:35:41 CEST] <twb> thanks
[09:35:53 CEST] <twb> I realize now I say it, my original question should have been "how do I make youtube not suck?"
[09:35:57 CEST] <twb> X/Y problem
[09:41:01 CEST] <twb> $user says they were in the event in youtube and couldn't find the ingestion settings thing
[09:41:15 CEST] <twb> part of the problem is, apparently, that youtube is halfway through changing all the UI
[09:54:01 CEST] <furq> they're always halfway through changing all the ui
[10:05:04 CEST] <twb> FYI I've just been trying your hstack approach and it seems to be working nicely
[10:05:36 CEST] <twb> If I just want the audio from the first camera, is it sufficient to do -c:a copy ?
[10:07:55 CEST] <twb> It is also reporting speed=0.7xx so I suspect this is not able to hstack in real-time :-/
[10:08:36 CEST] <twb> I also saw some crap about it wanting pthreads for some circular buffer thing
[10:12:10 CEST] <durandal_1707> twb: hstack is memcpy, what you encode and how?
[10:12:59 CEST] <twb> ah sorry
[10:13:14 CEST] <twb> I'll have to retype because I'm not connected to the rdesktop computer
[10:13:53 CEST] <durandal_1707> use ultrafast preset :)
[10:15:13 CEST] <twb> ffmpeg -rtsp_transport tcp -i rtsp://<camera 1> -i rtsp://<camera 2> -filter_complex "[0:v]setpts=PTS-STARTPTS[l];[1:v]setpts=PTS-STARTPTS[r];[l][r]hstack" -tune zerolatency -vcodec libx264 -t 7:58:00 -pix_fmt + -c:a copy -f flv rtmp://<youtube crap>
[10:15:50 CEST] <twb> Those other options were cargo-culted by the last guy; I can guess the x264 and flv and a copy; the rest I don't know WTF it is
[10:16:46 CEST] <twb> In the meantime, I "solved" the speed problem by switching the camera's feeds from high-quality to low-quality. Now speed=1.02
[10:16:53 CEST] <durandal_1707> twb: use faster preset for x264
[10:17:06 CEST] <twb> OK
[10:17:16 CEST] <twb> How? :-)
[10:17:45 CEST] <twb> Hrm, https://trac.ffmpeg.org/wiki/Encode/H.264
[10:17:51 CEST] <durandal_1707> after -vcodec libx264 add -preset:v ultrafast ?
[10:17:56 CEST] <twb> Cool thanks
[10:18:10 CEST] <durandal_1707> also it depends on speed of your network...
[10:19:06 CEST] <twb> FTTH so like 25Mbit/s synchronous to the interboobs
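twb's command with durandal_1707's -preset suggestion applied would look something like the following (camera and ingest URLs are placeholders; a sketch assembled for inspection, not a verified pipeline):

```shell
# Two RTSP cameras hstacked and pushed to an RTMP ingest, with the
# ultrafast x264 preset added after the encoder selection.
cmd=(ffmpeg -rtsp_transport tcp
     -i 'rtsp://camera1' -i 'rtsp://camera2'
     -filter_complex '[0:v]setpts=PTS-STARTPTS[l];[1:v]setpts=PTS-STARTPTS[r];[l][r]hstack'
     -c:v libx264 -preset:v ultrafast
     -c:a copy -t 7:58:00
     -f flv 'rtmp://ingest-url')
printf '%s\n' "${cmd[*]}"
```

(furq's later advice in this log also applies: prefer superfast over ultrafast if the CPU can keep up, and drop -tune zerolatency.)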
[10:30:25 CEST] <koz_> I'm having some weird behaviour when using hevc_vaapi on my AMD GPU. Instead of video, I get a bunch of weirdly-coloured static.
[10:31:53 CEST] <furq> twb: use at least superfast if you can afford it
[10:32:03 CEST] <furq> ultrafast turns pretty much everything good about h264 off
[10:32:03 CEST] <twb> cool
[10:32:30 CEST] <twb> I'm also changing between high/med/low quality from the cameras themselves
[10:32:41 CEST] <furq> i'm guessing that's just resolution if it's actually making a difference
[10:32:51 CEST] <furq> that would make a very big difference though
[10:33:01 CEST] <twb> both cameras at med, and using ultrafast, I'm seeing it starting at speed=2.42 then dropping down to speed=1.04
[10:33:25 CEST] <furq> you'll want to check your cpu usage with live inputs
[10:33:34 CEST] <furq> since it'll obviously be capped at ~1x
[10:34:25 CEST] <twb> good thinking
[10:37:42 CEST] <twb> btw you're right - I tried mixing camera qualities and ffmpeg was upset because one camera was 1080 and the other was 576
[10:37:48 CEST] <twb> so clearly "quality" is just resolution
[10:39:09 CEST] <furq> oh also don't use -tune zerolatency
[10:39:33 CEST] <furq> it'll have no effect on latency for this use case but it will turn off a bunch of stuff
[10:39:39 CEST] <furq> like proper multithreading for starters
[10:42:29 CEST] <twb> good to know
[11:10:22 CEST] <twb> Grr. I tried to put ffmpeg into a loop like: while true; do timeout 11h ffmpeg ...; sleep 5m; done
[11:10:39 CEST] <twb> ...but PowerShell has no equivalent of "timeout" that I can see
[11:11:02 CEST] <twb> So I gave up and ran ffmpeg and bash inside an Ubuntu WSL container
[11:11:16 CEST] <twb> ...but then ffmpeg can't see the RTMP network streams
[11:11:27 CEST] <furq> -t should work for that
[11:12:06 CEST] <twb> oh OK
[11:12:09 CEST] <furq> or just run it in msys
[11:12:18 CEST] <twb> So the previous person had -t in there already, but set to *nine* hours
[11:12:27 CEST] <twb> So I can just skip that and do a regular while loop
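The restart loop twb describes can be sketched in shell, using -t as the per-run time limit instead of an external timeout command (run_once is a hypothetical stub standing in for the real ffmpeg invocation; the loop bound and sleep are shortened so the sketch terminates):

```shell
# Re-run a streaming command every time it exits, pausing between runs.
runs=0
run_once() { runs=$((runs + 1)); }  # stands in for: ffmpeg ... -t 11:00:00 ...

while [ "$runs" -lt 3 ]; do  # in production: while true; do
  run_once
  sleep 0                    # in production: sleep 300
done
echo "$runs"                 # prints 3
```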
[11:27:34 CEST] <twb> Thanks for all your help everybody
[11:27:46 CEST] <twb> (/me goes to make panchmel dal for dinner)
[12:00:11 CEST] <Pompolus> hi, someone willing to help?
[12:00:21 CEST] <Pompolus> I'm trying to get an rtsp stream from an IP camera and restream it on RTMP.
[12:00:34 CEST] <Pompolus> The problem is that I managed to get it to work on an 18.04 ubuntu machine acting as a media-server, but now that I'm trying to duplicate the installation on another machine (running ubuntu 18.04 too) ffmpeg gets stuck when retrieving the input.
[12:00:48 CEST] <Pompolus> I can access the rtsp stream from VLC on any pc, with different IPs, so the issue is not some firewall rule on the IP camera side.
[12:01:53 CEST] <Pompolus> this is the ffmpeg string I'm trying to use:
[12:01:55 CEST] <Pompolus> ffmpeg -i rtsp://admin:qw12er34ty56@net.siralab.com:554/onvif2 -vcodec libx264 -crf 23 -acodec copy -vbsf h264_mp4toannexb -hls_time 2 -hls_list_size 999999999 -f flv rtmp://116.203.34.212:1935/show/294d495d-352e-4bc8-95f9-92d2834454e1
[12:02:31 CEST] <Pompolus> I tried several different strings, but it seems like ffmpeg cannot access the input stream
[12:02:47 CEST] <Pompolus> log:
[12:02:48 CEST] <Pompolus> [rtsp @ 0x56440c9c4980] Could not find codec parameters for stream 0 (Video: h264, none): unspecified size
[12:02:48 CEST] <Pompolus> Consider increasing the value for the 'analyzeduration' and 'probesize' options
[12:02:48 CEST] <Pompolus> Input #0, rtsp, from 'rtsp://admin:qw12er34ty56@net.siralab.com:554/onvif2':
[12:02:48 CEST] <Pompolus> Metadata:
[12:02:48 CEST] <Pompolus> title : H.264 Video, RtspServer_0.0.0.2
[12:02:50 CEST] <Pompolus> Duration: N/A, start: 0.000000, bitrate: N/A
[12:02:52 CEST] <Pompolus> Stream #0:0: Video: h264, none, 90k tbr, 90k tbn, 180k tbc
[12:02:54 CEST] <Pompolus> Output #0, image2, to 'hello/img%03d.png':
[12:02:56 CEST] <Pompolus> Output file #0 does not contain any stream
[12:04:22 CEST] <Pompolus> I tried to increase 'analyzeduration' and 'probesize' but the output is the same
[12:04:28 CEST] <pink_mist> please use a pastebin for that kind of pasting in the future
[12:04:33 CEST] <Pompolus> any hints? I'm going crazy
[12:04:40 CEST] <Pompolus> ok, sorry, you are right
[12:04:49 CEST] <pink_mist> (I've no experience with rtmp stuff though, so I can't really help)
[12:05:22 CEST] <Pompolus> it's not about rtmp, even trying to save the stream to a file the result is the same
[12:05:48 CEST] <Pompolus> I cannot access the rtsp input stream
[12:06:37 CEST] <Pompolus> the weird part is that it works flawlessly on the first installation I made
[12:07:14 CEST] <Pompolus> and with vlc I can access the stream without issues
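For reference, raising the probing limits that the error message suggests looks like this (URL and values are arbitrary examples; as noted above, it did not help in Pompolus's case):

```shell
# -analyzeduration is in microseconds, -probesize in bytes, and both must
# appear before -i so they apply to that input's stream probing.
cmd=(ffmpeg -analyzeduration 20M -probesize 50M
     -i 'rtsp://example-camera/stream' -c copy out.flv)
printf '%s\n' "${cmd[*]}"
```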
[12:09:06 CEST] <koz_> On here: https://trac.ffmpeg.org/wiki/Hardware/VAAPI it says that "H.264 encode is working on GCN GPUs, but is still incomplete. No other codecs are supported by Mesa for encode yet.". Is this still current?
[12:38:47 CEST] <Pompolus> no one?
[13:07:40 CEST] <tinytoast> Pompolus: as soon as someone mentions they are using ubuntu regarding issues i always remember that they use seriously outdated libraries and code. they also have a tendency to do path changes and other things that most other linux distros do not. your problem may in fact be the ubuntu distro of ffmpeg itself. just a thought.
[13:09:03 CEST] <another> Pompolus: you may want to change your passwords btw
[13:44:56 CEST] <machtl> as some guys helped me yesterday i have found 2 solutions that work for my issues: either re-use my inputs, or use split with concat, which uses too much memory
[13:45:20 CEST] <machtl> as some of you mentioned split and concat uses too much memory
[13:45:22 CEST] <machtl> https://pastebin.com/3AtEJ4ip
[13:45:28 CEST] <machtl> https://pastebin.com/an0GJV5C
[13:45:48 CEST] <machtl> is there another way to do this?
[14:41:33 CEST] <Pompolus> tinytoast thank you but I can get the input stream from an ubuntu installation without issues, but only there. I'm not sure what is wrong with the other systems, they run the exact same version of both ubuntu and ffmpeg
[14:42:33 CEST] <Pompolus> even this simple string doesn't work:
[14:42:34 CEST] <Pompolus> ffmpeg -rtsp_transport udp -i rtsp://admin:qw12er34ty56@net.siralab.com:554/onvif2 -vcodec copy -acodec copy -f flv a.flv
[14:42:42 CEST] <Pompolus> can anyone try?
[14:57:52 CEST] <Pompolus> the same happens with ffplay too. Works on server, fails on every other place
[14:57:55 CEST] <Pompolus> ffplay -rtsp_transport udp rtsp://admin:qw12er34ty56@net.siralab.com:554/onvif2
[15:48:12 CEST] <Pompolus> any idea on this error?
[15:48:13 CEST] <Pompolus> Could not find codec parameters for stream 0 (Video: h264, 1 reference frame, none(left)): unspecified size
[15:48:13 CEST] <Pompolus> Consider increasing the value for the 'analyzeduration' and 'probesize' options
[15:48:34 CEST] <JEEB> Pompolus: there were no packets that could initialize the decoder
[15:48:49 CEST] <JEEB> either no packets for video stream at all, or there are no parameter sets
[15:48:55 CEST] <JEEB> in other words, the decoder cannot initialize
[15:49:08 CEST] <Pompolus> the question is why, then :\
[15:49:27 CEST] <JEEB> I think you should ask that from the encoder that's creating the stream?
[15:49:35 CEST] <DHE> the stream says there's supposed to be h264 video, but no video was actually received or at least not enough to work with
[15:50:22 CEST] <Pompolus> on vlc and in another system runs smoothly
[15:50:46 CEST] <JEEB> hmm, I wonder if it doesn't register the source or something
[15:50:56 CEST] <JEEB> vlc uses live555 for the rtsp access I think
[16:24:58 CEST] <Pompolus> if someone wants to help me, here are the logs (with -loglevel debug) of ffmpeg running on the server (the only system where I can get it to work) and on another system (with the same OS and ffmpeg version).
[16:25:25 CEST] <Pompolus> failing system log: https://pastebin.com/raw/xn86kmb8
[16:25:37 CEST] <Pompolus> server log: https://pastebin.com/raw/gjsL1J6v
[16:25:56 CEST] <Pompolus> I pasted only the relevant part where they differ
[21:51:01 CEST] <dastan> hello, i have a problem with hardware acceleration
[21:52:24 CEST] <dastan> can someone help me
[21:52:37 CEST] <dastan> i am trying to send this command
[21:52:45 CEST] <dastan> ffmpeg -hide_banner -hwaccel cuvid -c:v h264_cuvid -f hls -i "YOUTUBE-HLS-LINK" -vcodec h264_nvenc -r 50 -acodec pcm_s16le -ac 2 -ar 48000 -f hls /home/build/temp/tn/tn.m3u8 -vcodec prores_aw -r 25 -acodec pcm_s16be -ac 2 -ar 48000 /
[21:53:35 CEST] <dastan> it gives me an error like Impossible to convert between the formats supported by the filter 'Parsed_null_0' and the filter 'auto_scaler_0'
[21:53:45 CEST] <dastan> Impossible to convert between the formats supported by the filter 'Parsed_null_0' and the filter 'auto_scaler_0'
[21:53:47 CEST] <BtbN> That looks like a wild copy&paste hell of unrelated things
[21:54:03 CEST] <dastan> jaja
[21:54:09 CEST] <dastan> yes it is a little
[21:54:16 CEST] <dastan> i explain
[21:54:41 CEST] <dastan> ffmpeg -c:v h264_cuvid -f hls -i "LOCAL-HLS-LINK" -vcodec prores -r 25 -acodec pcm_s16le -ac 2 -ar 48000 /home/build/temp/tn/tn.mov
[21:54:49 CEST] <dastan> this command works fine
[21:54:55 CEST] <BtbN> You are decoding as On-GPU CUDA frames, and telling it to encode that to prores, which is not a hw encoder. It won't have any idea what to do with the frames.
[21:55:33 CEST] <BtbN> Also, -vcodec/acodec are deprecated, use -c:v and -c:a
[21:57:50 CEST] <dastan> to make the mov in the same command i need to use hwdownload in the second output, isn't it?
[21:58:24 CEST] <dastan> because there is not a hardware codec to make a prores video, isn't it?
[21:58:35 CEST] <koz_> On here: https://trac.ffmpeg.org/wiki/Hardware/VAAPI it says that "H.264 encode is working on GCN GPUs, but is still incomplete. No other codecs are supported by Mesa for encode yet.". Is this still current?
[21:59:02 CEST] <BtbN> dastan, no idea if such a construct works. Probably better to do it in two separate commands
[21:59:38 CEST] <dastan> ok, right now i have it working in two separate commands
[22:01:40 CEST] <BtbN> You could also just not use hwframes. The performance won't be much worse.
[22:02:38 CEST] <dastan> i will check everything, because without hw frames it's possible to make everything in one codec
[22:02:44 CEST] <dastan> in one command
[22:02:50 CEST] <dastan> with two outputs
[22:03:08 CEST] <BtbN> two outputs in one command are far from as efficient as one might think
[22:03:17 CEST] <BtbN> since ffmpeg will work through them in sequence, not in parallel
[22:03:28 CEST] <BtbN> per frame that is
[22:04:57 CEST] <dastan> you are saying that two outputs in one command is not efficient?
[22:05:36 CEST] <BtbN> If one of the two outputs is slow, it'll also slow down the other one. So, yes.
[22:06:16 CEST] <dastan> why can one output become slow?
[22:06:28 CEST] <BtbN> nvenc is magnitudes faster than prores
[22:06:47 CEST] <dastan> in that example i am writing one output locally and other is a network device
[22:06:59 CEST] <BtbN> That's even worse, cause slow network can then kill the local file
[22:08:55 CEST] <dastan> aaaaaa....you are saying because if you have two outputs, one in h264 and the other in prores, h264 is faster than prores, and if you use the hw accelerated version of h264 it's even faster
[22:10:37 CEST] <BtbN> ffmpeg, as in the ffmpeg cli tool, is inherently single threaded
[22:10:46 CEST] <BtbN> so if one of the two output chains blocks, it will block the other, too
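The alternative BtbN recommends above, running the two outputs as independent processes so a slow chain cannot stall the other, would look roughly like this (input URL, codecs, and paths are placeholders; a sketch assembled for inspection):

```shell
# Two separate ffmpeg invocations instead of one command with two outputs:
# each process encodes at its own pace, so slow prores or a slow network
# cannot block the nvenc output.
cmd_hls=(ffmpeg -i 'https://example/input.m3u8' -c:v h264_nvenc -f hls 'out/tn.m3u8')
cmd_mov=(ffmpeg -i 'https://example/input.m3u8' -c:v prores -c:a pcm_s16le 'out/tn.mov')
printf '%s\n' "${cmd_hls[*]}"
printf '%s\n' "${cmd_mov[*]}"
```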
[00:00:00 CEST] --- Sat Aug 3 2019