[Ffmpeg-devel-irc] ffmpeg.log.20160920
burek
burek021 at gmail.com
Wed Sep 21 03:05:01 EEST 2016
[02:35:01 CEST] <ossifrage> Do you think ffplay could handle h.264 with dynamically changing resolution?
[02:36:28 CEST] <ossifrage> In low light (high noise) the picture quality starts to suffer to the point that 1080p doesn't make any sense. I think my encoder hardware can be convinced to switch to a lower base frame rate on the fly (with an I-slice at the boundary)
[02:36:44 CEST] <ossifrage> err not frame rate, frame size
[02:58:10 CEST] <relaxed> Can someone with a raspberry pi please test my static build? https://www.johnvansickle.com/ffmpeg/
[03:12:13 CEST] <furq> relaxed: seems to work fine
[03:38:24 CEST] <relaxed> great! thanks
[03:44:23 CEST] <furq> i should really figure out why omx has stopped working on my rpi so i can test that as well
[04:15:28 CEST] <furq> oh
[04:15:48 CEST] <furq> relaxed: apparently --extra-ldflags=-static breaks omx
[04:15:51 CEST] <furq> at least for me
[04:21:55 CEST] <radicaldev> Hello, can anyone recommend good ways to take an audio file and divide it based on various parameters like intensity thresholds and whatnot, or point me in the direction to learn more about how to do that well?
[04:24:41 CEST] <debianuser> radicaldev: What about -af silencedetect=...? It prints silence timestamps in the file.
[04:58:25 CEST] <radicaldev> debianuser: that's pretty cool
[05:09:59 CEST] <radicaldev> debianuser: Seems like a really simple approach, like the one I'm already using with a python module called Auditok: a threshold and a limit, basically. What I've seen so far is that I can get 'good' results, but I'm working with speech which may have some noise, and I want to capture as much speech as possible and as little noisy non-speech as possible.
[07:12:58 CEST] <relaxed> furq: I'm pretty sure it breaks all hwaccels, nvenc and quicksync too
[07:20:41 CEST] <furq> weird
[07:27:34 CEST] <furq> http://sprunge.us/ZLaM
[07:27:38 CEST] <furq> well there's a backtrace if it's any use to you
[08:48:42 CEST] <NetworkingPro> hey everyone
[08:48:55 CEST] <NetworkingPro> im trying to copy an rtsp stream from an internal ip to a public address
[08:49:07 CEST] <NetworkingPro> ffmpeg -i rtsp://admin:password@172.31.92.138/ISAPI/Streaming/channels/101 -vcodec copy rtsp://127.0.0.1:52125
[08:49:11 CEST] <NetworkingPro> I think Im close?
[08:49:23 CEST] <NetworkingPro> Any ideas how to define the output?
[09:11:21 CEST] <NetworkingPro> https://www.irccloud.com/pastebin/vs3ek5q8/
[09:11:42 CEST] <NetworkingPro> Anyone know how to get around that fail to bind port issue?
[10:20:29 CEST] <k_sze[work]> There's nothing to prevent me from using ffmpeg library in a worker thread, right?
[10:27:49 CEST] <soulshock1> library? worker thread?
[10:39:54 CEST] <flux> you can certainly use most any library in a separate thread, ffmpeg included. the interesting case is using the same library from multiple threads..
[10:40:21 CEST] <flux> I do wonder how safe ffmpeg is in that case (ie. are reference counts safe).
[10:40:51 CEST] <bencoh> I personally wouldnt risk it :)
[10:42:06 CEST] <flux> just to pick an example it seems neither av_frame_ref nor av_frame_unref is safe at all regarding the access of src. so don't do that :)
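[Editor's note: a minimal C sketch of one conservative pattern for the concern above; the lock is our own assumption, not something libavutil provides, since av_frame_ref() reads the source frame without any locking of its own:]

    #include <libavutil/frame.h>
    #include <pthread.h>

    /* Assumption: every thread touching the shared frame takes this lock. */
    static pthread_mutex_t frame_lock = PTHREAD_MUTEX_INITIALIZER;

    static int ref_shared_frame(AVFrame *dst, const AVFrame *src)
    {
        pthread_mutex_lock(&frame_lock);
        int ret = av_frame_ref(dst, src); /* dst gets its own reference to src's buffers */
        pthread_mutex_unlock(&frame_lock);
        return ret;
    }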
[10:57:52 CEST] <thegoodguy> morning
[10:59:07 CEST] <thegoodguy> guys can someone tell me, how much cpu power is needed to remux a dts audio to stereo ?
[10:59:52 CEST] <thegoodguy> i am using ffmpeg via Emby and when i live remux i have just one core in use, but this is 100% and the movie starts to stutter after a short period because of this
[11:00:08 CEST] <thegoodguy> i thought remuxing isnt such a big thing for a modern cpu
[11:00:14 CEST] <thegoodguy> cpu is n3700 intel
[11:02:28 CEST] <BtbN> "remux to stereo"?
[11:04:14 CEST] <thegoodguy> 2 channels
[11:04:15 CEST] <mrelcee> thegoodguy you don't remux dts to stereo. you re-encode it. muxing is mixing little bits of the audio stream to go along with the video frames for the length of the video...
[11:05:38 CEST] <thegoodguy> its a player in the browser and the browser is ofc not able to play dts
[11:06:37 CEST] <thegoodguy> ok, so re-encode it, but is it really that expensive for the cpu?
[11:06:58 CEST] <BtbN> depends on the CPU
[11:07:08 CEST] <BtbN> are you sure you're not also re-encoding the video?
[11:07:29 CEST] <thegoodguy> and/or is it possible at least to use vaapi for this or more than one core?
[11:07:45 CEST] <BtbN> vaapi is a video API.
[11:08:09 CEST] <BtbN> audio coding is usually dirt cheap.
[11:08:16 CEST] <BtbN> So you probably are transcoding the video as well.
[11:08:25 CEST] <thegoodguy> /usr/bin/ffmpeg -ss 01:25:09.714 -fflags +genpts -i file:"/tmp/test.mkv" -map 0:0 -map 0:1 -map -0:s -codec:v:0 copy -map_metadata -1 -threads 0 -codec:a:0 aac -strict experimental -ac 6 -ab 1536000 -af "aresample=async=1" -f mp4 -movflags frag_keyframe+empty_moov -y "/var/lib/emby-server/transcoding-temp/55dc19f7eb27da795d337743908a7ef2.mp4"
[11:08:26 CEST] <BtbN> And ffmpeg uses as many cores as it can get, if you don't tell it otherwise.
[11:08:39 CEST] <thegoodguy> thats the command
[11:09:34 CEST] <BtbN> you can drop the experimental thing
[11:09:39 CEST] <BtbN> or you really need to update your ffmpeg.
[11:09:53 CEST] <thegoodguy> i am on 3.1.3
[11:10:33 CEST] <BtbN> Also, that audio bitrate seems a bit excessive. That's 1.5Mbit/s
[11:10:56 CEST] <BtbN> And it's 6 channels, not 2.
[11:11:00 CEST] <thegoodguy> Stream #0:1(ger): Audio: dts (DTS), 48000 Hz, 5.1(side), fltp, 1536 kb/s (default)
[11:11:27 CEST] <thegoodguy> exactly, but its getting down to 2 channels for the webplayer
[11:11:27 CEST] <k_sze[work]> avcodec_find_encoder(AV_CODEC_ID_H264) doesn't necessarily get me libx264, right?
[11:11:57 CEST] <BtbN> thegoodguy, no it's not. You're telling it to encode 6 channels.
[11:12:07 CEST] <BtbN> And that bitrate guess seems plain wrong, even for DTS.
[11:12:21 CEST] <BtbN> for aac, two channels, 160kbit/s is fine.
[11:13:02 CEST] <BtbN> k_sze[work], if libx264 is enabled, it should.
[11:13:16 CEST] <BtbN> otherwise you might end up with whatever h264 encoder is next in the list.
[11:15:30 CEST] <k_sze[work]> BtbN: avcodec_find_encoder_by_name("libx264") would force the use of libx264 (or just fail), correct?
[11:15:58 CEST] <BtbN> yes.
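[Editor's note: a minimal sketch of the two lookup styles discussed above; avcodec_find_encoder_by_name() is the actual libavcodec function name, and the fallback behaviour matches BtbN's description:]

    #include <libavcodec/avcodec.h>

    static const AVCodec *pick_h264_encoder(void)
    {
        /* by-name lookup returns NULL (not some other encoder) if libx264 is absent */
        const AVCodec *enc = avcodec_find_encoder_by_name("libx264");
        if (!enc)
            enc = avcodec_find_encoder(AV_CODEC_ID_H264); /* whichever H.264 encoder is registered */
        return enc;
    }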
[11:21:07 CEST] <thegoodguy> BtbN: sorry, i really try to understand this ffmpeg stuff, but to be honest its really complicated in the beginning
[11:21:50 CEST] <thegoodguy> i still dont understand why my ffmpeg process is still using just one core, does it depend on -threads 0 ?
[11:22:59 CEST] <BtbN> I think the aac encoder is not multi threaded
[11:23:09 CEST] <BtbN> because it's more than fast enough on any half decent CPU
[11:23:15 CEST] <BtbN> what CPU is this? Some RPi1 like?
[11:23:30 CEST] <thegoodguy> nope its an n3700
[11:23:40 CEST] <BtbN> what's that?
[11:24:01 CEST] <thegoodguy> http://ark.intel.com/de/products/87261/Intel-Pentium-Processor-N3700-2M-Cache-up-to-2_40-GHz
[11:24:34 CEST] <BtbN> that's a 1.6GHz x86. It should easily handle aac encoding.
[11:24:39 CEST] <thegoodguy> the weird thing is with vaapi it has 30% cpu usage in total ;)
[11:24:41 CEST] <BtbN> Something else must be hogging your CPU
[11:24:49 CEST] <BtbN> VAAPI does not encode audio though
[11:24:58 CEST] <thegoodguy> 97.0 1.2 0:14.52 ffmpeg
[11:25:01 CEST] <BtbN> and you are not encoding video, so vaapi does not help you at all
[11:25:30 CEST] <BtbN> try replacing your strange codec:v:0 copy with just -c:v copy
[11:25:35 CEST] <thegoodguy> yes, but its weird to see that transcoding doesnt hit my cpu as hard as re-encoding audio
[11:25:43 CEST] <BtbN> same for audio, just -c:a aac
[11:25:59 CEST] <BtbN> and if you want two channels, you'll have to tell it to encode 2 channels instead of 6
[11:26:08 CEST] <BtbN> and 160k is fine for your bitrate, 1.5M is crazy
[11:26:57 CEST] <thegoodguy> i guess they calculate the bitrate from the source
[11:27:16 CEST] <BtbN> the input bitrate is just a wild guess from the first X seconds of input
[11:33:27 CEST] <thegoodguy> tried this now
[11:33:29 CEST] <thegoodguy> -map 0:0 -map 0:1 -map -0:s -c:v copy -copyts -avoid_negative_ts disabled -start_at_zero -map_metadata -1 -threads 0 -codec:a:0 libmp3lame -ac 2 -ab 136000 -af "aresample=async=1,volume=2" -y "/var/lib/emby-server/transcoding-temp/65aeaeb631033d57dcd9eb3ab5cc7028.mkv"
[11:33:34 CEST] <thegoodguy> still 100%
[11:33:40 CEST] <thegoodguy> maybe my ffmpeg is broken?
[11:36:09 CEST] <thegoodguy> sorry, wrong paste -map 0:0 -map 0:1 -map -0:s -c:v copy -copyts -avoid_negative_ts disabled -start_at_zero -map_metadata -1 -threads 0 -c:a aac -ac 2 -ab 136000 -af "aresample=async=1,volume=2" -y "/var/lib/emby-server/transcoding-temp/65aeaeb631033d57dcd9eb3ab5cc7028.mkv"
[11:37:49 CEST] <thegoodguy> these options should be ez for the cpu? or am i wrong?
[11:43:28 CEST] <k_sze[work]> What does this actually mean? [swscaler @ 0x7f44c4f7cbc0] Warning: data is not aligned! This can lead to a speedloss
[11:46:48 CEST] <k_sze[work]> Is it because the input byte array I pass to sws_scale() is not aligned to something 32-byte boundaries or something like that?
[12:07:54 CEST] <iive> k_sze[work]: yes, plane(s) should start at 32 byte boundaries and linesize(s) should be multiple of 32 .
[12:08:14 CEST] <iive> otherwise you can't use fast sse/avx .
[12:08:46 CEST] <k_sze[work]> What trick can I use to ensure my array is aligned?
[12:15:23 CEST] <k_sze[work]> Currently, my frame data lives in a cyclic buffer like this (in C++11): https://bpaste.net/show/a826cf69c3d8
[12:22:14 CEST] <BtbN> use an aligned alloc, and use a linesize that's a multiple of 32
[12:22:49 CEST] <BtbN> is that struct for packed RGB?
[12:23:27 CEST] <k_sze[work]> colour_data is non-planar BGRX
[12:23:33 CEST] <k_sze[work]> depth_data is gray16le
[12:23:47 CEST] <k_sze[work]> erm, I mean, depth_data is float
[12:24:02 CEST] <k_sze[work]> (that I'll cast to gray16le later)
[12:24:04 CEST] <BtbN> So what pixel format is that?
[12:24:11 CEST] <BtbN> two at the same time?
[12:24:32 CEST] <BtbN> Anyway, use something like aligned_alloc from C11 (std::aligned_alloc landed in C++17), with a 32 byte alignment to allocate the buffer.
[12:24:33 CEST] <k_sze[work]> every 4-byte is one pixel
[12:24:40 CEST] <BtbN> If you have only one plane, that's all you need to do.
[12:25:05 CEST] <BtbN> Hm, actually, still need to make sure each line is aligned
[12:25:18 CEST] <BtbN> so you have to support a linesize != width.
[12:25:47 CEST] <k_sze[work]> I think my lines are guaranteed to be aligned, no?
[12:25:57 CEST] <k_sze[work]> both 512 * 4 and 1920 * 4 are divisible by 32.
[12:27:37 CEST] <k_sze[work]> (well, the depth data will become 512 * 2 bytes once I cast it into a uint16_t array)
[12:27:42 CEST] <k_sze[work]> still divisible by 32
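[Editor's note: a minimal sketch of the manual route described above, assuming a single packed 4-bytes-per-pixel plane; aligned_alloc() is C11 and requires the size to be a multiple of the alignment:]

    #include <stdint.h>
    #include <stdlib.h>

    static uint8_t *alloc_packed_plane(int width, int height, int *linesize)
    {
        *linesize = (width * 4 + 31) & ~31; /* pad rows to a multiple of 32 bytes;
                                               1920*4 and 512*4 already are */
        /* size is a multiple of 32 because linesize is */
        return aligned_alloc(32, (size_t)*linesize * height);
    }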
[12:40:47 CEST] <k_sze[work]> Actually, AVFrame's data array is automatically aligned, right?
[12:41:16 CEST] <k_sze[work]> when allocated by alloc_picture()
[12:41:33 CEST] <k_sze[work]> I mean av_frame_alloc()
[12:43:52 CEST] <k_sze[work]> and av_frame_get_buffer()
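[Editor's note: yes; av_frame_get_buffer() allocates the planes itself and honours its align argument, so this is the simpler route. A sketch, assuming the packed BGRX data is handled as AV_PIX_FMT_BGRA:]

    #include <libavutil/frame.h>

    static AVFrame *alloc_bgrx_frame(int width, int height)
    {
        AVFrame *f = av_frame_alloc();
        if (!f)
            return NULL;
        f->format = AV_PIX_FMT_BGRA; /* assumption: BGRX treated as BGRA */
        f->width  = width;
        f->height = height;
        /* align = 32 requests 32-byte-aligned planes and padded linesizes */
        if (av_frame_get_buffer(f, 32) < 0)
            av_frame_free(&f);
        return f;
    }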
[12:48:23 CEST] <soulshock1> why does speed drop to about half when encoding MP3 to multiple outputs, each output at different bitrate?
[12:48:23 CEST] <soulshock1> -i 72fcc9e4-64dc-4fa3-bc40-a68700c5c744.wav -q:a 3 72fcc9e4-64dc-4fa3-bc40-a68700c5c744_192.mp3 produces 54x speed
[12:48:23 CEST] <soulshock1> vs
[12:48:23 CEST] <soulshock1> -i 72fcc9e4-64dc-4fa3-bc40-a68700c5c744.wav -q:a 3 72fcc9e4-64dc-4fa3-bc40-a68700c5c744_192.mp3 -q:a 9 72fcc9e4-64dc-4fa3-bc40-a68700c5c744_64.mp3 produces 33x speed
[12:48:23 CEST] <soulshock1> looking at cpu usage, ffmpeg only uses about 12.5%
[13:03:01 CEST] <jkqxz> soulshock1: ffmpeg is not itself threaded: it serialises all calls to lavc, though they may use other threads internally. (Your 12.5% sounds like all of one thread of an eight-thread processor.)
[13:03:35 CEST] <soulshock1> meaning if I want it to be twice as fast, I have to spawn 2 separate ffmpeg processes?
[13:04:46 CEST] <jkqxz> If the encoder you are using is not itself threaded, yes.
[13:07:11 CEST] <soulshock1> ok. thank you :)
[14:07:24 CEST] <thegoodguy> me again :(
[14:07:37 CEST] <thegoodguy> trying still to get my cpu down a bit
[14:07:40 CEST] <thegoodguy> ffmpeg -i input.mp4 -c:v copy -c:a libfdk_aac -b:a 384k output.mp4
[14:07:55 CEST] <thegoodguy> still 100%, isnt this a joke?
[14:08:27 CEST] <thegoodguy> every parameter i try is 100%
[14:17:57 CEST] <jkqxz> thegoodguy: What is wrong with it being 100%? A local file-to-file command always runs as fast as it can.
[14:20:11 CEST] <k_sze> When using the ffmpeg library, is there a way to control the thread priority of the threads that the ffmpeg library functions spawn?
[14:21:41 CEST] <k_sze> e.g. if I encode using libx264 with 12 slices, multiple threads are spawned to handle the slices, right? Is there a way I can programmatically control the thread priorities?
[14:25:39 CEST] <kepstin> k_sze: not with any public apis. In fact, I don't think you can do that even when using libx264 directly
[14:26:29 CEST] <DHE> x264 manages its own thread priorities actually. it lowers the priority of some of its other helper threads like rc-lookahead
[14:26:44 CEST] <DHE> which I don't approve of, but like kepstin said, not much you can do about it without code hacking
[14:29:15 CEST] <retard> thegoodguy: look into nice(1)
[14:31:06 CEST] <kepstin> k_sze: x264 works with a thread pool, too, so there's no way to determine in advance which thread is working on which slice either
[14:31:45 CEST] <ElAngelo> hi, i've been struggling a bit with this, i have a .mov video file that contains a timecode stream: looks like : Stream #0:1(eng): Data: none (tmcd / 0x64636D74) (default)
[14:31:54 CEST] <ElAngelo> i would like to extract that info
[14:31:55 CEST] <k_sze> So I guess my closest alternative without hacking ffmpeg/x264 is to control the scheduling priority of my *other* threads.
[14:32:05 CEST] <ElAngelo> with ffmpeg, is that somehow possible?
[14:33:19 CEST] <k_sze> This is going to be a bit painful.
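[Editor's note: one possible shape of k_sze's workaround, sketched under the assumption of a Linux host; realtime scheduling classes need root or CAP_SYS_NICE, and this only touches your own threads, not x264's pool:]

    #include <pthread.h>
    #include <sched.h>

    /* Call from the capture thread so the encoder workers cannot starve it. */
    static int boost_capture_thread(void)
    {
        struct sched_param sp = { .sched_priority = 10 };
        return pthread_setschedparam(pthread_self(), SCHED_FIFO, &sp);
    }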
[14:35:03 CEST] <kepstin> k_sze: what's your use case for this? realtime encoding on an underprovisioned cpu or something like that?
[14:35:21 CEST] <k_sze> Underprovisioned?
[14:35:38 CEST] <DHE> underpowered
[14:35:46 CEST] <k_sze> 4-core Core i5
[14:36:01 CEST] <k_sze> and it's nowhere near saturated when I look at htop.
[14:36:32 CEST] <kepstin> ... then why are you playing with thread priorities? if it's not saturated, that should do effectively nothing
[14:37:01 CEST] <k_sze> I am writing a program that grabs frames from a camera in one thread and queues them for ffmpeg to do the encoding in a worker thread.
[14:37:31 CEST] <k_sze> The worker thread is somehow interfering with the capture thread, such that I get dropped frames here and there.
[14:38:31 CEST] <kepstin> hmm, that would depend a lot on the queue implementation and how you're communicating with ffmpeg (using libav*, piping to an external process?)
[14:38:43 CEST] <k_sze> I'm using libav*
[14:39:43 CEST] <k_sze> The queue is actually a simple cyclic buffer with a condition_variable so the capture thread can notify the encoding worker thread that data is available.
[14:40:22 CEST] <kepstin> is the queue big enough for multiple frames?
[14:40:35 CEST] <k_sze> I have enough for 600 frames
[14:40:40 CEST] <k_sze> and I know it's not the queue filling up.
[14:40:55 CEST] <kepstin> I assume the reason for that is you wanted to avoid dynamic allocation anc copying memory
[14:43:31 CEST] <k_sze> yes
[14:44:18 CEST] <kepstin> (of course, there's gonna be a bunch of both inside libav*, but whatever)
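[Editor's note: a C sketch of the queue pattern k_sze describes (the original is C++11 with std::condition_variable); single producer, single consumer, with the frame payload living in the preallocated cyclic buffer and omitted here:]

    #include <pthread.h>

    #define QUEUE_SLOTS 600 /* matches the 600-frame buffer mentioned above */

    struct ring {
        int head, tail, count;
        pthread_mutex_t lock;     /* init with PTHREAD_MUTEX_INITIALIZER */
        pthread_cond_t  nonempty; /* init with PTHREAD_COND_INITIALIZER */
    };

    static void ring_push(struct ring *r) /* capture thread */
    {
        pthread_mutex_lock(&r->lock);
        if (r->count < QUEUE_SLOTS) {     /* if full, the frame is dropped */
            r->head = (r->head + 1) % QUEUE_SLOTS;
            r->count++;
            pthread_cond_signal(&r->nonempty); /* wake the encode thread */
        }
        pthread_mutex_unlock(&r->lock);
    }

    static int ring_pop(struct ring *r)   /* encode thread */
    {
        pthread_mutex_lock(&r->lock);
        while (r->count == 0)
            pthread_cond_wait(&r->nonempty, &r->lock);
        int slot = r->tail;
        r->tail = (r->tail + 1) % QUEUE_SLOTS;
        r->count--;
        pthread_mutex_unlock(&r->lock);
        return slot; /* index of the next frame to encode */
    }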
[14:44:39 CEST] <NetworkingPro> Im having a little bit of trouble transcoding from RTSP to MJPEG via HTTP
[14:44:47 CEST] <NetworkingPro> I keep getting the following errors:
[14:45:11 CEST] <k_sze> kepstin: the less dynamic allocation, the better.
[14:46:07 CEST] <NetworkingPro> ffmpeg -i rtsp://admin:2627c1478d1facc3@172.31.86.15/ISAPI/Streaming/channels/101 -f mpegts http://0.0.0.0:65514
[14:46:10 CEST] <NetworkingPro> theres the command
[14:46:21 CEST] <NetworkingPro> heres the error: [tcp @ 0x244f700] Connection to tcp://0.0.0.0:65514 failed: Connection refused
[14:46:22 CEST] <NetworkingPro> http://0.0.0.0:65514: Connection refused
[14:46:31 CEST] <NetworkingPro> it looks like its failing to bind an http port
[14:46:45 CEST] <NetworkingPro> anyone know how to properly serve http?
[14:47:34 CEST] <bencoh> this is probably not what you're looking for
[14:48:12 CEST] <NetworkingPro> bencoh: was that to me?
[14:48:21 CEST] <kepstin> NetworkingPro: ffmpeg does not support acting as a multiple-client server for any tcp based protocols. I dunno if you could maybe do it with ffserver, but it's unlikely anyone would help you with that (ffserver is deprecated)
[14:48:33 CEST] <bencoh> NetworkingPro: yup
[14:48:50 CEST] <bencoh> hm I thought ffserver had been deprecated?
[14:49:02 CEST] <kepstin> bencoh: that's what I said?
[14:49:10 CEST] <bencoh> oh mybad, misread
[14:49:31 CEST] <NetworkingPro> kepstin: ah, that helps. So you're saying it doesnt support RTSP to MJPEG via HTTP then?
[14:49:40 CEST] <kepstin> NetworkingPro: the command you have is trying to post an mpegts (not mjpeg) to an external http server/
[14:49:44 CEST] <NetworkingPro> So, it should support RTSP to RTSP copy?
[14:50:16 CEST] <NetworkingPro> kepstin: so is there one for mjpeg that you know of?
[14:50:38 CEST] <bencoh> that's unrelated to codec/video format
[14:51:00 CEST] <kepstin> NetworkingPro: well, you could use different parameters to get mjpeg, but that still doesn't solve the problem that ffmpeg isn't an http server :0
[14:51:09 CEST] <NetworkingPro> right
[14:51:23 CEST] <NetworkingPro> Let me throw something else out that I tried.
[14:51:38 CEST] <NetworkingPro> ffmpeg -i rtsp://admin:2627c1478d1facc3@172.31.86.15/ISAPI/Streaming/channels/101 -vcodec copy -f rtp -an rtp://0.0.0.0:55422
[14:51:55 CEST] <NetworkingPro> so the objective was to take an internal RTSP stream and republish it on the public interface.
[14:52:07 CEST] <NetworkingPro> So that command works, starts giving me stats, frames, etc.
[14:52:08 CEST] <kepstin> NetworkingPro: sending rtp to 0.0.0.0 makes no sense
[14:53:02 CEST] <NetworkingPro> kepstin: So, a little background. Im coming from using VLC, which seems to have all sorts of oddities with transcoding, and in VLC 0.0.0.0 binds to all interfaces.
[14:53:08 CEST] <kepstin> (rtp is a udp-based broadcast protocol - connectionless - so you can't make a server that waits for connections)
[14:53:50 CEST] <kepstin> with rtp you have to send packets to somewhere - either a specific ip address where something is receiving it, or e.g. a multicast address
[14:54:28 CEST] <NetworkingPro> So does RTSP not use RTP?
[14:54:38 CEST] <bencoh> rtsp is basically rtp + control session
[14:54:57 CEST] <bencoh> (actually the other way around, but you get the idea)
[14:55:09 CEST] <NetworkingPro> RTP is the control protocol, no?
[14:55:15 CEST] <NetworkingPro> and RTSP the streaming?
[14:55:22 CEST] <kepstin> no, other way around
[14:55:53 CEST] <bencoh> rtp is the most commonly implemented streaming protocol for rtsp, and rtcp is usually the control protocol there
[14:56:17 CEST] <kepstin> with rtsp, the server waits for someone to connect, they negotiate a stream, then media transfer is started over rtp once both sides know the codecs and ip addresses of each-other.
[14:56:41 CEST] <kepstin> (ffmpeg is not an rtsp server either, but it can push streams to an external rtsp server)
[14:56:43 CEST] Action: NetworkingPro is reading.
[14:57:20 CEST] <NetworkingPro> Ok, thats all starting to make more sense.
[14:58:08 CEST] <NetworkingPro> So RTP and RTCP (its control/QoS protocol) are independent of RTSP. They're a standalone streaming protocol. So its what another application is pushing when a stream is occurring?
[14:58:13 CEST] <NetworkingPro> that pretty much cover it?
[14:58:17 CEST] <bencoh> so ffmpeg can be used as a transcoding/publishing tool, but not to actually distribute streams / serve (multiple) clients (apart from basic streaming to multicast)
[14:58:54 CEST] <NetworkingPro> and RTSP seems to be a session negotiation protocol.
[14:59:31 CEST] <NetworkingPro> So ffmpeg can't really sit and wait for a client to connect to it. It doesnt handle any negotiation of video exchange, but rather only transcodes and delivers media to a pre-designated session?
[14:59:37 CEST] <NetworkingPro> that pretty accurate?
[14:59:55 CEST] <NetworkingPro> Im looking for a VLC replacement, and basically this isnt it.
[15:00:29 CEST] <kepstin> NetworkingPro: pretty much yeah. the 'ffmpeg' command line tool is basically a batch media conversion tool with some network support and hacked-in basic realtime stuff.
[15:00:53 CEST] <NetworkingPro> gotcha,
[15:01:01 CEST] <NetworkingPro> Sorry for the complete stupidity.
[15:01:09 CEST] <NetworkingPro> Thanks for helping me grasp it.
[15:01:21 CEST] <kepstin> ffserver could do some of what you want, but it's deprecated due mostly to lack of dev interest to keep it updated/working
[15:01:44 CEST] <NetworkingPro> So the reason I *thought* it might work is because I heard that it was what VLC used under the hood.
[15:02:04 CEST] <retard> ffserver
[15:02:07 CEST] <retard> rip
[15:02:14 CEST] <NetworkingPro> And that may be true, but it makes sense that VLC uses it to handle the transcoding part, and all the non network, non server related things.
[15:02:19 CEST] <kepstin> vlc uses ffmpeg (actually libavcodec, libavformat, etc - the underlying libraries) to handle media encoding/decoding and possibly some protocols.
[15:03:05 CEST] <NetworkingPro> kepstin: yea, I looked at ffserver, but also saw it was deprecated. Also, it wouldn't really do what I want as it looked to be heavily config-file driven.
[15:03:26 CEST] <NetworkingPro> I need a very agile product that can spin up streams based on CLI arguments, and not config files.
[15:03:34 CEST] <NetworkingPro> Any of you know of a replacement for VLC?
[15:03:53 CEST] <CruX|> mpv ? mplayer2 ?
[15:04:11 CEST] <CruX|> ah for transcoding dunno
[15:04:36 CEST] <kepstin> NetworkingPro: if you're using an external media server or cdn to do the actual serving/broadcasting, you could certainly use ffmpeg to do the source screen grabbing, transcoding, and publishing.
[15:05:20 CEST] <NetworkingPro> well, i have a few video cameras that are all on a LAN
[15:05:34 CEST] <NetworkingPro> I need to publish them to clients without NATing
[15:05:49 CEST] <NetworkingPro> So Ive been doing that with CVLC
[15:05:53 CEST] <NetworkingPro> which has worked OK for the most part.
[15:06:15 CEST] <NetworkingPro> But Ive realized that cvlc is dropping frames badly, and causing hiccups.
[15:06:42 CEST] <NetworkingPro> If I simply open the RTSP stream on the LAN segment with the cameras using VLC GUI it works like a champ.
[15:07:04 CEST] <NetworkingPro> If I open the transcoded stream it pretty quickly falls behind, starts dropping frames, etc.
[15:08:09 CEST] <NetworkingPro> Not gonna lie, video is kind of cryptic. Im very good about Googling things, and self-teaching but it's hard to find qualified facts on video things, it seems.
[15:09:26 CEST] <kepstin> hmm, if the transcoded stream is falling behind, you should check cpu usage. it might be that the codec conversion is using too much cpu.
[15:10:28 CEST] <NetworkingPro> kepstin: yea, Ive done that. At any given time CPU and RAM are less than 5% used.
[15:10:33 CEST] <NetworkingPro> Its a beefy server.
[15:10:40 CEST] <NetworkingPro> Mind if I ask some general video questions?
[15:11:01 CEST] <kepstin> sure, there's probably some generally knowledgable video folks around here
[15:11:05 CEST] Action: kepstin is off tho
[15:11:16 CEST] <NetworkingPro> Whats the best way to determine the best variable bitrate?
[15:11:34 CEST] <bencoh> or that vlc is trying to stick to some framerate and/or respect (invalid) timestamps
[15:11:44 CEST] <bencoh> which will eventually lead to delay and/or framedrop
[15:11:48 CEST] <NetworkingPro> Also, I would assume that my --out video stream should have at least the same VBR as my input?
[15:12:29 CEST] <NetworkingPro> bencoh: yea, you know VLC, too? If I sent the command Im throwing into vlc would you mind seeing if Im doing something obviously wrong?
[15:12:57 CEST] <bencoh> NetworkingPro: I've never liked streaming using vlc
[15:13:28 CEST] <bencoh> or using it for anything else than playing video (and then I usually prefer mplayer/mpv ;p) ... I always end up grepping the --full-help
[15:14:08 CEST] <bencoh> ... or reading the source.
[15:14:31 CEST] <NetworkingPro> If you had a lower output framerate than your in could that be an issue?
[15:14:40 CEST] <NetworkingPro> or should it handle that in the transcoding?
[15:15:09 CEST] <bencoh> no idea what it should do
[15:22:03 CEST] <NetworkingPro> Yea, me either. :D
[15:22:33 CEST] <NetworkingPro> Im just trying to make sure I make the best use of it as possible at this point (VLC).
[15:38:15 CEST] <rhn> hello! I'd like to report a bug in ffmpeg
[15:38:52 CEST] <rhn> unfortunately, I can't register on trac - your blacklist is keeping me off. Is there anyone who can help?
[16:17:37 CEST] <mcjack> Hi all, to open a video file for writing I use avformat_alloc_output_context2…
[16:18:11 CEST] <mcjack> But I want to supply the codecIDs in the AVOutputFormat rather than using NULL for the defaults
[16:18:33 CEST] <mcjack> How do I create the AVOutputFormat?
[16:23:41 CEST] <mcjack> Simply AVOutputFormat ofmt; ?
[16:24:42 CEST] <mcjack> or exists an avformat_alloc_output_format similar to the context allocation?
[16:34:05 CEST] <kepstin> mcjack: you shouldn't ever be allocating storage for an AVOutputFormat; you just need a pointer to hold a reference to one that you've looked up.
[16:34:32 CEST] <kepstin> (unless you're making a new one, I suppose)
[16:38:28 CEST] <mcjack> kepstin: thanks, so this can only return the defaults from avformat_alloc_output_context2? I'm looking at the example transcoding.c
[16:38:58 CEST] <mcjack> I want to replace the audio of a video file, so I want the output format to be the same as the input
[16:39:18 CEST] <kepstin> mcjack: you should be getting the AVOutputFormat either from av_guess_format() or by iterating through them with av_oformat_next() to find the one you want
[16:41:23 CEST] <kepstin> mcjack: the avformat_alloc_output_context2() function helpfully calls av_guess_format() on your behalf if you give it a format name or filename but leave oformat NULL.
[16:42:11 CEST] <kepstin> probably the easiest way to get the same output format as input is to get the name of the input format, and pass it as the format_name when allocating your output context
[16:43:35 CEST] <mcjack> so every format name is unique? it's not like there can be different codecs inside a certain outputformat?
[16:43:51 CEST] <kepstin> codecs are independent from formats
[16:44:25 CEST] <kepstin> format is the "container", and most containers support multiple different codecs you can choose between
[16:45:22 CEST] <kepstin> the formats all have a default codec that is picked if you don't manually choose one; e.g. mkv will default to using h.264 video.
[16:45:32 CEST] <mcjack> so I thought, thanks. Now what I don't understand is, if I create an output context, it sets the output format from the file extension. But I want to set the defaults in the format from my input file.
[16:45:46 CEST] <mcjack> so I can safely change it after the alloc context?
[16:46:40 CEST] <kepstin> which context? the AVFormatContext?
[16:47:02 CEST] <kepstin> that doesn't have any codec information in it to change
[16:47:28 CEST] <kepstin> you pick the codec when making AVStream structures
[16:48:43 CEST] <mcjack> Ok, the context creates a default format in avformat_alloc_output_context2, right?
[16:49:30 CEST] <kepstin> if you don't pass an AVOutputFormat, avformat_alloc_output_context2 will look one up based on the filename and format_name parameters. (not create - look up)
[16:50:02 CEST] <kepstin> but that's independent from the AVCodec, aside from that the AVOutputFormat says what the "default" codec is if you don't manually pick one.
[16:50:29 CEST] <mcjack> ah, ok, I start to understand
[16:51:09 CEST] <mcjack> and the created context holds a pointer to the proposed format
[16:51:39 CEST] <mcjack> and it's up to me, if I use the codec named in that format or if I set my own, when I add the streams?
[16:52:08 CEST] <kepstin> yep. Obviously, the format can reject any particular codec if it doesn't know how to save it.
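[Editor's note: a sketch pulling the above together. One caveat to verify: in->iformat->name is a demuxer name and can be a comma-separated list like "mov,mp4,m4a,3gp,3g2,mj2", so reusing it as the muxer name is an assumption; matching on the output filename may be safer. AV_CODEC_ID_AAC stands in for whatever codec you actually want:]

    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>

    static AVFormatContext *open_like_input(AVFormatContext *in, const char *filename)
    {
        AVFormatContext *out = NULL;
        /* look up the output format by the input's format name */
        if (avformat_alloc_output_context2(&out, NULL, in->iformat->name, filename) < 0)
            return NULL;
        /* pick the codec explicitly instead of the format's default */
        const AVCodec *enc = avcodec_find_encoder(AV_CODEC_ID_AAC);
        AVStream *st = avformat_new_stream(out, enc);
        if (!st) {
            avformat_free_context(out);
            return NULL;
        }
        /* configure st->codecpar / the encoder context here */
        return out;
    }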
[16:54:09 CEST] <ashish> hey i'm trying to stream live usb cam from raspberry pi
[16:54:27 CEST] <ashish> but having trouble in it can any one help me
[16:54:42 CEST] <mcjack> cool, I think I got it now… Thanks a bunch
[16:56:50 CEST] <ashish> any one online
[16:58:18 CEST] <ashish> hey please help me to live stream video from raspberry pi
[16:58:50 CEST] <ashish> any one inline
[16:58:53 CEST] <ashish> online
[16:59:26 CEST] <ashish> hey please help me to live stream video from raspberry pi
[17:00:37 CEST] <ashish> ?
[17:00:49 CEST] <ashish> #hackerrank
[17:03:22 CEST] <ashish> hey
[17:14:54 CEST] <ashish> any one online
[17:16:09 CEST] <ashish> hello
[17:22:31 CEST] <ashish> hey can any one help me with live video streaming
[17:28:00 CEST] Last message repeated 1 time(s).
[18:19:03 CEST] <acamargo> hello there, I'm seeking some guidance about capturing CEA-708 subtitles with ffmpeg. anyone?
[18:27:47 CEST] <DHE> there's some subtitles already captured by the mpeg2video and h264 decoders. it's just a matter of format, don't recall which format is used off the top of my head
[18:32:13 CEST] <acamargo> DHE, I'm using a DeckLink Studio 4K card to capture a HD-SDI feed
[18:33:47 CEST] <DHE> that's outside my area of knowledge....
[18:34:08 CEST] <acamargo> no prob ;-) thanks anyway
[18:35:27 CEST] <acamargo> I read in MAINTAINERS file that Deti Fliegl is in charge of decklink module
[18:35:49 CEST] <acamargo> how can I contact him?
[18:37:17 CEST] <ashish> hey
[18:38:33 CEST] <acamargo> hallo
[18:39:13 CEST] <ashish> i am trying to stream live from usb webcam
[18:39:22 CEST] <ashish> from my raspberry pi
[18:43:52 CEST] <acamargo> ashish, nice :-)
[18:44:48 CEST] <ashish> sorry i was asking for help
[18:44:56 CEST] <ashish> how to do
[18:45:12 CEST] <ashish> ffserver -f /etc/ffserver.conf & ffmpeg -v verbose -r 5 -s 640x480 -f video4linux2 -i /dev/video0 http://localhost/webcam.ffm
[18:45:50 CEST] <DHE> ffserver is basically discontinued...
[18:46:25 CEST] <ashish> through this command video is not playing in browser, instead it started downloading
[18:56:19 CEST] <A3G1S> Hey guys, anyone has any info on x86 text relocations
[19:16:26 CEST] <acamargo> ashish, did you try to play with vlc?
[19:17:58 CEST] <ashish> yes but mjpeg is not playing
[19:19:08 CEST] <Jnorthrup> https://ffmpeg.org/ffmpeg-filters.html#afifo is not a thing
[19:19:43 CEST] <Jnorthrup> time to refresh the docs
[19:19:53 CEST] <furq> Jnorthrup: https://ffmpeg.org/ffmpeg-filters.html#fifo_002c-afifo
[19:25:48 CEST] <Jnorthrup> oh gotcha.
[19:38:18 CEST] <MattTS> I have a 4k camera that can output 30fps as mjpeg. I want to be able to record this to a file directly and also encode it to h264 in lower resolution/frame rate to be broadcast. I can record it to a file fairly simply with vcodec copy and save it at around 30fps - get slight variations but I think the camera itself lowers fps if it needs to increase the exposure time. I can get it to encode at 720p at around 10fps into a fifo that I can broadcast from. What's the
[19:38:19 CEST] <MattTS> best way to do both at the same time from a single camera without introducing bottlenecks?
[19:38:44 CEST] <MattTS> Using multiple outputs from a single ffmpeg seems to cause it to slow the recording/dumping to file down to the rate of the encoding
[19:51:25 CEST] <kepstin> MattTS: yes, ffmpeg is a single-threaded application, so multiple outputs block each-other when processing.
[19:52:38 CEST] <furq> you should be able to write the mjpeg stream and then run a separate process which encodes that with -re
[19:52:46 CEST] <furq> it's not entirely trustworthy though
[19:52:57 CEST] <furq> and also i'm not sure what container you'd want to use
[19:52:58 CEST] <DHE> some aspects are multi-threaded, and codecs can be multi-threaded. but ffmpeg alone isn't intended to be tuned for maximum real-time under major workloads.
[19:53:06 CEST] <furq> apparently ffmpeg will mux mjpeg into mpegts but then won't read it back
[19:55:35 CEST] <kepstin> could just use image2pipe with a stream of jpegs, i guess
[19:57:40 CEST] <MattTS> Yeah, I tried muxing into mpegts but couldnt get anything to play it back
[19:58:12 CEST] <MattTS> So you mean first ffmpeg -> file and then second ffmpeg reads from that file and encodes? Isnt it bad to have a file open twice?
[20:00:54 CEST] <MattTS> Would -re solve the issue of it trying to encode every frame in the file too?
[20:01:41 CEST] <furq> -re will cap the encoding speed to the framerate of the input
[20:02:35 CEST] <furq> if the encoding is much slower than the capture then it's probably safe
[20:02:51 CEST] <MattTS> Yeah, the encoding is definitely slower
[20:06:33 CEST] <kepstin> if your capture speed is realtime from a device, you almost certainly don't want -re
[20:07:08 CEST] <kepstin> without -re, ffmpeg encodes as fast as it can, with -re it adds an extra sleep each frame to slow down to approximately realtime
[20:12:52 CEST] <ashish> any one know about android
[20:13:34 CEST] <ashish> clear
[20:18:00 CEST] Last message repeated 1 time(s).
[20:55:53 CEST] <example6> Hi all, I'm trying to stream live H.264 video from an IP camera. I'm noticing a pretty significant delay. I noticed setting -probesize and -fflags nobuffer seemed to speed it up a bit, but I'm wondering if there's any obvious solution. I tried using -vsync options to see if I can have it "always grab the latest frame" but that didn't seem to make a difference. Any suggestions anyone?
[21:00:36 CEST] <DHE> example6: there's usually a fair bit of buffering going on in the encoders, even in hardware. I assume you're just streaming without processing
[21:01:14 CEST] <example6> I'm very new to streaming and ffmpeg in general, so I'm not really sure what that means
[21:01:47 CEST] <example6> DHE: The part about streaming without processing
[21:05:59 CEST] <DHE> your ffmpeg command I presume makes use of "-c copy"
[21:20:43 CEST] <example6> DHE: I'm just piping it into VLC to test for now
[21:22:02 CEST] <example6> I'm encoding it as 'asf' as per an example I found online
[21:22:14 CEST] <furq> uh
[21:22:17 CEST] <furq> that sounds like a bad example
[21:25:02 CEST] <furq> is there some reason you're not just playing the stream directly
[21:31:53 CEST] <example6> well, ffplay doesn't support the same arguments I was troubleshooting
[21:32:13 CEST] <furq> pastebin the command
[21:32:26 CEST] <furq> either way you definitely shouldn't be using asf
[21:32:50 CEST] <furq> it doesn't support h264 so it's presumably reencoding to wmv, which doesn't sound like something that should be done
[21:33:22 CEST] <example6> http://pastebin.com/heg76PVV
[21:33:56 CEST] <furq> ffplay -fflags nobuffer rtsp://stream
[21:40:18 CEST] <example6> furq: with that, I'm getting around ~6s delay. If I add -probesize 32 it reduces to around ~3s
[21:40:43 CEST] <example6> It doesn't support the -vsync cfr option I was using, which is why I tried to just pipe output of ffmpeg
[21:42:13 CEST] <example6> It also doesn't seem to support forcing a framerate, or -re
[21:42:38 CEST] <furq> you don't need -re
[21:43:57 CEST] <example6> I have a standalone program to stream the feed from this camera (It's in a Cisco VSOM server) and the delay there is less than 1 second, which is what I'm trying to imitate
[22:31:12 CEST] <Threads> in theory could yadifmod be ported into ffmpeg ?
[22:33:34 CEST] <furq> probably
[22:34:06 CEST] <furq> there are other filters which use an external clip
[22:35:24 CEST] <iive> Threads: what does it do differently than yadif?
[22:35:30 CEST] <furq> iive: http://avisynth.nl/index.php/Yadifmod
[22:35:37 CEST] <furq> This version doesn't internally generate spatial predictions, but takes them from an external clip.
[22:35:58 CEST] <iive> huh?
[22:36:06 CEST] <furq> shrug
[22:36:15 CEST] <furq> i have no idea why that's good either
[22:39:47 CEST] <iive> it should be able to merge the files using the separate and merge fields filters...
[00:00:00 CEST] --- Wed Sep 21 2016