[Ffmpeg-devel-irc] ffmpeg.log.20170727
burek
burek021 at gmail.com
Fri Jul 28 03:05:01 EEST 2017
[00:12:58 CEST] <stapler> and libavformat handles most of what i need to do rt(s)p streaming?
[00:28:11 CEST] <DHE> stapler: it should be fairly obvious how to map ffmpeg command-line parameters to libav. RTSP is no different than any other output format
[00:29:29 CEST] <stapler> DHE, i suppose so
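For reference, a minimal command-line sketch of RTSP output, as a starting point before translating it into libavformat calls (the input file and URL are placeholders):

    ffmpeg -re -i input.mp4 -c copy -f rtsp rtsp://localhost:8554/stream

The -re flag paces reading at the input frame rate, which is usually what you want when feeding a live protocol.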
[04:35:39 CEST] <cryptodechange> so a 10bit encode with the same CRF produces a file only ~0.4mbps smaller
[04:36:13 CEST] <cryptodechange> Though I noticed some artifacts in darker areas on the 8bit, smoother on the 10bit
[04:44:14 CEST] <blimpse_> Is there a specific way I need to compile ffmpeg to support converting .SPC files?
[04:44:46 CEST] <blimpse_> This is the command I tried and its output: https://pastebin.com/4rGWucE7
[04:55:02 CEST] <kepstin> blimpse_: by spc you mean snes emulated audio?
[04:56:41 CEST] <kepstin> I think you have to have a version of ffmpeg compiled with libgme ("game music emu") in order to do anything with that
[05:00:11 CEST] <furq> the ffmpeg in debian has libgme
[05:00:22 CEST] <furq> i guess that's from a PPA or something if it's ubuntu 14.04 though
[05:00:57 CEST] <furq> oh nvm i guess you built it yourself
[05:14:49 CEST] <blimpse_> Thanks! I'm compiling it with libgme now.
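The relevant build switch is --enable-libgme; a rough sketch, assuming the game-music-emu headers and libraries are already installed (the .spc filename is hypothetical):

    ./configure --enable-libgme
    make && sudo make install
    ffmpeg -i music.spc out.wav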
[10:04:20 CEST] <franciscui> Hi, all. I use the command "ffmpeg -re -fflags +genpts -f concat -safe 0 -i /data/videos/vod2live/dst/201707212300101924703playlist.txt -vcodec h264 -acodec aac -ac 2 -bufsize 3M -preset fast -f flv rtmp://127.0.0.1:8081/live/101924703" to push a stream, and
[10:04:36 CEST] <franciscui> then I get: WriteN, RTMP send error 104 (129 bytes); WriteN, RTMP send error 32 (44 bytes); WriteN, RTMP send error 9 (42 bytes); av_interleaved_write_frame(): Operation not permitted; Last message repeated 1 times; [flv @ 0x21908c0] Failed to update header with correct duration. [flv @ 0x21908c0] Failed to update header with correct filesize. Error writing trailer of rtmp://127.0.0.1:8081/live/101924703: Operation not permitted frame=222
[10:05:29 CEST] <franciscui> I'm hitting the above problem, can anyone help me? Thanks
[11:26:33 CEST] <squ> operation not permitted it says
[11:26:39 CEST] <squ> :)
[12:35:32 CEST] <mnr200> I'm getting an "Unknown input format: 'dshow'" error
[12:36:08 CEST] <mnr200> with the option dshow
[12:36:42 CEST] <mnr200> can anyone give me an idea what is wrong here?
[12:36:58 CEST] <stevenliu> command line?
[12:39:18 CEST] <mnr200> stevenliu, ffmpeg -f dshow -i video="Virtual-Camera" -preset ultrafast -vcodec libx264 -tune zerolatency -b 900k -f mpegts rtsp://localhost:8554/live.sdp
[12:39:36 CEST] <stevenliu> ffmpeg -show_devices
[12:40:04 CEST] <stevenliu> ffmpeg -devices
[12:41:15 CEST] <mnr200> https://pastebin.com/2myU2pwt
[12:42:21 CEST] <mnr200> I'm trying to stream live video from webcam
[12:45:41 CEST] <mnr200> stevenliu, ffmpeg -f video4linux2 -s 640x480 -r 30 -i /dev/video0 -f rtsp -rtsp_transport tcp rtsp://localhost:8554/live.sdp this one somewhat works
[12:46:02 CEST] <stevenliu> ok
[12:46:04 CEST] <mnr200> but with huge latency and delay
[12:50:15 CEST] <mnr200> so whats the problem with the above command, any idea?
[12:52:11 CEST] <stevenliu> what codec from the camera?
[12:52:19 CEST] <stevenliu> mjpeg? AVC?
[12:53:31 CEST] <mnr200> not sure, mjpeg may be
[12:53:38 CEST] <mnr200> how do I find it?
[13:34:23 CEST] <stevenliu> you can find the message at input info
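On Linux/v4l2, a hedged way to see what the camera actually delivers is to list the device's supported formats, or simply read the "Stream #0:0" line of the input info ffmpeg prints:

    ffmpeg -f v4l2 -list_formats all -i /dev/video0
    # the probe output shows e.g. "Video: mjpeg" or "Video: rawvideo"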
[13:43:31 CEST] <long-klong> hello !
[13:43:41 CEST] <long-klong> I want to implement close-to-real-time encoding of images into a video stream from an ov5642 camera, and I would like to parallelize some parts of the algorithm using OpenCL
[13:43:50 CEST] <long-klong> is it possible on a raspberry pi 3 ?
[13:43:56 CEST] <long-klong> has anyone done something close to this ?
[13:49:38 CEST] <squ> some codecs are parallelized
[16:06:18 CEST] <rev0n> anyone here?
[16:06:46 CEST] <c_14> no
[16:09:37 CEST] <rev0n> that's fine ;) so maybe I will just ask a question. Is it possible with the ffmpeg lib to convert blob data from the socket to another format and also output it as a stream, rather than writing it to a file? If so - how?
[16:10:11 CEST] <furq> i couldn't tell you how to do it with the libs, but that's certainly possible
[16:12:34 CEST] <DHE> ffmpeg can make a basic TCP/UDP socket itself as well as common network protocols like rtsp, or you can specify a virtual IO library for ffmpeg to use for reads/writes. then use that as an input or output
[16:13:10 CEST] <c_14> rev0n: depends on what "blob data" is
[16:13:30 CEST] <c_14> it has to be in a format/protocol that FFmpeg can read
[16:13:44 CEST] <rev0n> blob data means a webm-encoded package
[16:14:01 CEST] <c_14> webm should be fine
[16:17:05 CEST] <rev0n> I'm actually trying to record the stream from the webcam on the client side, send it through websockets to a basic Python Tornado server (a very basic one, as I started learning Python just yesterday - you know, PHP/Java developer), convert it to flv and write it to an rtmp nginx-based server for further publishing via HLS to viewers. What's the purpose, you would ask - I'm trying hard to avoid Flash for live webcam streaming
[16:20:06 CEST] <faLUCE> hello. Is there some option for 0 latency on the aac encoder?
[16:22:06 CEST] <kepstin> faLUCE: define "0" latency
[16:22:30 CEST] <kepstin> and with aac-lc, which is what the internal aac encoder supports, there is no real low-latency ability.
[16:22:46 CEST] <faLUCE> kepstin: something like h264. The encoder doesn't have to buffer before outputting an encoded frame
[16:23:13 CEST] <faLUCE> I mean: it doesn't have to buffer MORE than 1 aac frame (1024 samples)
[16:24:02 CEST] <c_14> rev0n: assuming you can grab the webcam on the client side you can generate hls directly with ffmpeg or pass it to nginx-rtmp with ffmpeg. Might not even need to use the libraries directly (assuming the webcam has dshow/v4l2)
[16:26:13 CEST] <rev0n> for getting webcam I use html5 getUserMedia which is browser thing. So one Idea I had was to stream data from the camera with JS WebSockets. This is why I cannot generate any other format than webm
[16:26:34 CEST] <kepstin> faLUCE: well, buffering 1024 samples is already ~23ms, and I think aac needs info from the next frame for overlapping mdct windows to encode the current frame.
[16:28:31 CEST] <kepstin> reducing audio latency with lossy codecs usually involves making frames shorter; e.g. opus can go down to 2.5ms (at significantly reduced efficiency)
[16:28:51 CEST] <faLUCE> kepstin: the time for buffering 1024 samples depends on the sample rate....
[16:29:09 CEST] <kepstin> faLUCE: I assumed 44.1kHz there, adjust as needed.
[16:29:58 CEST] <faLUCE> In addition, I need to understand the minimum number of frames that aac has to buffer before producing the first encoded frame... do you think it's two frames or more?
[16:30:04 CEST] Action: kepstin notes that opus has additional latency of 2.5-4ms on top of the frame size depending on settings.
[16:30:58 CEST] <kepstin> faLUCE: I don't know the details of the aac format well enough. I suspect that it is "at least x samples from the next frame"
[16:31:51 CEST] <rev0n> How can I open an ffmpeg input socket, then, to retrieve the data from the user?
[16:33:18 CEST] <kepstin> rev0n: a websocket? ffmpeg doesn't have native support for that; you'd have to handle the socket in your own code and pass the data to ffmpeg.
[16:34:03 CEST] <faLUCE> kepstin: aac has a framesize of 1024
[16:34:09 CEST] <faLUCE> it's fixed
[16:34:31 CEST] <faLUCE> I wonder how many frames it needs to buffer before outputting the first encoded frame
[16:34:33 CEST] <rev0n> kepstin: ok, I have that - a Python Tornado server. But how can I feed ffmpeg this data?
[16:36:00 CEST] <kepstin> rev0n: i don't know of any way to use ffmpeg libraries directly from python usefully, but one option you could do is create an ffmpeg cli subprocess which is reading from a pipe, and then write the data from the socket to that pipe in your python app.
[16:38:19 CEST] <rev0n> kepstin: thank you. That's what I'm gonna try. I thought maybe there's a better way. Should be fine tho
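A minimal sketch of that pipe approach with plain subprocess instead of Tornado (the handler names and rtmp URL are hypothetical):

    import subprocess

    # ffmpeg reads webm from stdin ("-i -") and transcodes to flv/rtmp
    proc = subprocess.Popen(
        ["ffmpeg", "-y", "-i", "-",
         "-c:v", "libx264", "-c:a", "aac",
         "-f", "flv", "rtmp://localhost/live/stream"],
        stdin=subprocess.PIPE)

    def on_websocket_message(chunk):    # called with each binary chunk
        proc.stdin.write(chunk)

    def on_websocket_close():
        proc.stdin.close()              # EOF lets ffmpeg flush and finish
        proc.wait()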
[16:42:38 CEST] <kepstin> faLUCE: it looks like the size of the aac transform is 2048 samples at 1024 sample offsets, but most encoders use a number of priming samples that's not a multiple of 1024.
[16:42:40 CEST] <kepstin> In theory an aac encoder only needs 1024 future samples, but if you're giving it 1024 sample frames, they'll be misaligned, so it'll need 2 future frames.
[16:43:37 CEST] <kepstin> faLUCE: so one way to reduce encoder delay might be to provide audio to the encoder in smaller chunks
[16:43:58 CEST] <kepstin> (i dunno if this works with ffmpeg's aac encoder, need to find someone familiar with the internals)
[16:44:31 CEST] <faLUCE> kepstin: I provide the encoder with chunks of 148. It then buffers them into a frame with size=1024
[16:45:02 CEST] <faLUCE> (1024 samples)
[16:45:21 CEST] <kepstin> faLUCE: then you're probably already doing as well as can be expected with aac.
[16:45:34 CEST] <faLUCE> kepstin: it seems that a minimum of 2 frames (2048 samples) is required... in fact I see that it buffers 2 frames
[16:45:43 CEST] <faLUCE> but I wonder if this can be reduced to 1 frame
[16:45:52 CEST] <kepstin> faLUCE: yes, that's what I said above
[16:46:03 CEST] <kepstin> "the size of the aac transform is 2048 samples at 1024 sample offsets"
[16:46:12 CEST] <kepstin> the only way to reduce that is to use a different codec.
[16:46:23 CEST] <faLUCE> kepstin: but I wonder if there is some "zerolatency" option for aac too...
[16:46:28 CEST] <faLUCE> that reduces it
[16:47:39 CEST] <kepstin> there's "aac-ld" which reduces latency to ~20ms, but keep in mind that although it's called "aac" it's actually a fairly different codec from the common aac-lc, and it's not supported by all decoders.
[16:47:55 CEST] <kepstin> and ffmpeg's internal aac encoder can't encode anything besides aac-lc right now, i think
[16:48:25 CEST] <kepstin> if you're switching codecs anyways to reduce latency, just use opus :)
[16:48:41 CEST] <faLUCE> kepstin: what about mp2 ?
[16:50:40 CEST] <kepstin> faLUCE: I think mp2 uses even larger frames (1152 samples), but i dunno how much overlapping it needs. might have less latency than aac? probably still not great.
[16:52:40 CEST] <faLUCE> kepstin: you said that [16:26] <kepstin> faLUCE: well, buffering 1024 samples is already ~23ms <---- but I measured it, and it seems to be 35ms
[16:52:46 CEST] <faLUCE> (at 44100)
[16:53:32 CEST] <lukas_gab> Hi folks. I want to record a split screen from a few IP cameras. I tried it this way https://pastebin.com/82y7bi9h but I get the error "Filter overlay has an unconnected output". How can I fix it?? Thanks for your help and time.
[16:53:33 CEST] <kepstin> 44100 Hz is 44100 samples in 1 second. 1024 samples is therefore 1024/44100 = 23.2ms
[16:53:50 CEST] <kepstin> any additional delays are due to something other than simply buffering 1024 sample increments
[16:55:15 CEST] <faLUCE> I see, thanks
[17:51:32 CEST] <lukas_gab> ok, I got the split screen working
[17:51:38 CEST] <lukas_gab> but I have another problem
[17:51:46 CEST] <lukas_gab> I do this -
[17:51:46 CEST] <lukas_gab> ffmpeg -rtsp_transport tcp -i "rtsp://admin:Pass@192.168.88.76:554/h264" -rtsp_transport tcp -i "rtsp://admin:Pass@192.168.88.76:554/h264" -filter_complex "[0:v][1:v]hstack" -c:v libx264 combo.avi
[17:52:16 CEST] <lukas_gab> but the output has a high resolution, ffmpeg goes to 100% cpu, loses packets and gives me an error
[17:52:32 CEST] <lukas_gab> how can I resize the streams before stacking and encoding?
[17:52:37 CEST] <lukas_gab> please help me
[17:52:50 CEST] <rev0n> Has anybody any clue whether I should do some additional data conversion while retrieving webm chunks via websocket? The data seems to be ok on the JS client side, but when the data comes to the python server and I try to save it as bytes to a .webm file, I get a file working in VLC but the image looks multiplied by four (actually I get 4 images in a splitscreen)
[18:01:47 CEST] <kepstin> rev0n: i can't think of anything that would cause that other than something in how the video is encoded at the source
[18:06:13 CEST] <rev0n> Just found the problem. It was a stupid mistake, as I had multiplied the stream in my js code. It's fine now. However I have one more problem: [mp4 @ 0x5610442d44a0] Non-monotonous DTS in output stream 0:1; previous: 143394, current: 137320; changing to 143395. This may result in incorrect timestamps in the output file.
[18:06:30 CEST] <rev0n> and in fact my output file is empty
[18:10:23 CEST] <rev0n> any ideas?
[18:21:24 CEST] <rev0n> Input #0, matroska,webm, from 'pipe:':
[18:21:24 CEST] <rev0n> Metadata:
[18:21:24 CEST] <rev0n> encoder : Chrome
[18:21:24 CEST] <rev0n> Duration: N/A, start: 0.000000, bitrate: N/A
[18:21:24 CEST] <rev0n> Stream #0:0(eng): Audio: opus, 48000 Hz, mono, fltp (default)
[18:21:25 CEST] <rev0n> Stream #0:1(eng): Video: vp8, yuv420p, 640x480, SAR 1:1 DAR 4:3, 16.67 tbr, 1k tbn, 1k tbc (default)
[18:21:27 CEST] <rev0n> Metadata:
[18:21:29 CEST] <rev0n> alpha_mode : 1
[18:21:49 CEST] <rev0n> looks fine, doesn't it?
[18:23:13 CEST] <durandal_1707> lukas_gab: use scale filter
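Applied to the command above, that might look like the following sketch (640 is an arbitrary target width; -2 keeps the aspect ratio with even dimensions):

    ffmpeg -rtsp_transport tcp -i "rtsp://admin:Pass@192.168.88.76:554/h264" \
           -rtsp_transport tcp -i "rtsp://admin:Pass@192.168.88.76:554/h264" \
           -filter_complex "[0:v]scale=640:-2[l];[1:v]scale=640:-2[r];[l][r]hstack" \
           -c:v libx264 combo.avi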
[18:54:17 CEST] <rev0n> Could you please help me out with right configuration to convert webm to flv for rtmp stream?
[19:09:56 CEST] <furq> -i foo.webm -c:v libx264 -c:a aac rtmp://
[19:16:31 CEST] <faLUCE> Do you know if there is a way to obtain less than 2 frames of latency with the aac encoder?
[19:17:10 CEST] <faLUCE> without aac-ld
[19:26:04 CEST] <kepstin> faLUCE: no, that's inherent to the design of the codec.
[19:26:27 CEST] <kepstin> er, inherent to the design of aac-lc
[19:27:27 CEST] <faLUCE> kepstin: I see... I was hoping for a different answer ;-)
[19:28:13 CEST] <faLUCE> kepstin: anyway, about mp2, even if it has a larger frame-size (1152samples), maybe it doesn't need to buffer two frames...?
[19:28:58 CEST] <kepstin> i don't know enough about mp2, but it has a simpler design, so that might be the case.
[19:29:11 CEST] <kepstin> my recommendation of 'use opus' stands.
[19:31:25 CEST] <Blubberbub> is 2 frames really that big of a deal?
[19:31:57 CEST] <kepstin> two frames in aac plus the other delays in the codec probably add up to around 50ms
[19:32:42 CEST] <kepstin> which isn't terrible, but it's a bit too long for e.g. synced musical performances and whatnot
[19:34:43 CEST] <rev0n> furq: I get the following error: [NULL @ 0x55ceb7926d20] Unable to find a suitable output format for 'rtmp://
[19:35:47 CEST] <rev0n> obviously there's the whole rtmp address there; I was able to stream from mp4 to the exact same rtmp server
[19:44:11 CEST] <rev0n> relaxed: https://pastebin.com/90FgV4XL I'm trying to pipe data from the websocket to ffmpeg and convert it to any format other than webm
[19:46:57 CEST] <furq> rev0n: add -f flv
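Combining furq's two suggestions, the full command would be roughly (the rtmp URL is a placeholder):

    ffmpeg -i foo.webm -c:v libx264 -c:a aac -f flv rtmp://server/app/stream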
[19:49:03 CEST] <rev0n> furq: I tried that. It does not work properly. It seems like there's something wrong with my incoming stream; it looks ok, ffmpeg recognizes webm, but I cannot convert it to any other format
[19:54:45 CEST] <rev0n> Input #0, matroska,webm, from 'pipe:':
[19:54:45 CEST] <rev0n> Metadata:
[19:54:45 CEST] <rev0n> encoder : Chrome
[19:54:45 CEST] <rev0n> Duration: N/A, start: 0.000000, bitrate: N/A
[19:54:45 CEST] <rev0n> Stream #0:0(eng): Audio: opus, 48000 Hz, mono, fltp (default)
[19:54:46 CEST] <rev0n> Stream #0:1(eng): Video: vp8, yuv420p, 640x480, SAR 1:1 DAR 4:3, 1k tbr, 1k tbn, 1k tbc (default)
[19:55:31 CEST] <kepstin> rev0n: please use a pastebin site, and include the *complete* output.
[19:55:32 CEST] <relaxed> your error referred to the output format
[19:59:20 CEST] <rev0n> just trying to get the full log, however my lack of knowledge of Python complicates things. Also my ffmpeg runs in Docker, which gives me a limited number of lines when getting the log from docker
[19:59:57 CEST] <rev0n> and I can't seem to write log to a file with ffmpeg as it does not recognize > something.txt
[20:00:14 CEST] <kepstin> the logs are put on stderr, not stdout
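So capturing the log means redirecting stderr, e.g.:

    ffmpeg -i input.webm out.flv 2> ffmpeg-log.txt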
[20:00:55 CEST] <faLUCE> kepstin: I can't use opus, I use mpegts as muxer
[20:00:59 CEST] <faLUCE> as container
[20:01:30 CEST] <JEEB> opus is specified in mpeg-ts
[20:01:46 CEST] <JEEB> usually it's the receiver that you'll have issues with with opus-in-mpegts
[20:01:50 CEST] <kepstin> is the muxing actually implemented in ffmpeg tho?
[20:01:58 CEST] <JEEB> not sure
[20:02:57 CEST] <JEEB> I know demuxing should be there at least
[20:04:03 CEST] <kepstin> faLUCE: are you sending mpegts over udp or something like that?
[20:04:05 CEST] <faLUCE> kepstin: what do you mean by that, exactly? [19:31] <kepstin> two frames in aac plus the other delays in the codec probably add up to around 50ms <--- it is the codec which adds the two-frame latency
[20:04:18 CEST] <faLUCE> kepstin: over http
[20:04:22 CEST] <kepstin> lol
[20:04:33 CEST] <kepstin> why are you even worrying about the encoder delay then
[20:04:49 CEST] <faLUCE> kepstin: I already made a ~0 latency http mpegts h264 streamer + receiver
[20:05:13 CEST] <kepstin> well, aside from until a packet gets dropped and you have to wait for a retransmit
[20:05:14 CEST] <faLUCE> now I want to obtain a good result for audio too
[20:06:12 CEST] <kepstin> but yep, looks like ffmpeg can mux opus in mpegts
[20:06:20 CEST] <faLUCE> [20:05] <kepstin> well, aside from until a packet gets dropped and you have to wait for a retransmit <--- what do you mean?
[20:06:22 CEST] <kepstin> and since you control both ends of the stream, should be ok
[20:06:32 CEST] <faLUCE> kepstin: I drop frames in the receiver
[20:06:52 CEST] <kepstin> faLUCE: http is built on tcp, which is a reliable stream transport that implements buffering and packet retransmit at a lower level than the application
[20:07:15 CEST] <faLUCE> kepstin: I know that
[20:07:29 CEST] <kepstin> so it can add unpredictable delays which you can't do anything about in the face of network errors like dropped packets
[20:07:45 CEST] <faLUCE> but for a LAN it's good
[20:08:32 CEST] <kepstin> usually, yeah, but if you're building a custom app anyways why not use udp? which will also work well for lan, with potentially less os-level buffering
[20:09:09 CEST] <faLUCE> kepstin: I already made a library: https://github.com/paolo-pr/laav
[20:09:13 CEST] <kepstin> i assume you're already setting the tcp stream with the nodelay flag at least, so it's not trying to buffer full ethernet frames
[20:09:15 CEST] <faLUCE> kepstin: now, I'm working on the player side
[20:09:23 CEST] <kepstin> full ip frames*
[20:09:52 CEST] <faLUCE> I don't bother with tcp delay, for now... I'm removing unnecessary delays on localhost
[20:10:10 CEST] <kepstin> oh, you're only running this on localhost for testing?
[20:10:10 CEST] <faLUCE> kepstin: anyway, I could send mpegts over udp as well
[20:10:17 CEST] <faLUCE> kepstin: yes, for now
[20:10:21 CEST] <kepstin> yeah, you're gonna get some surprises when you switch to a real network interface
[20:11:03 CEST] <faLUCE> kepstin: I know, but I'm removing all the delays in many parts ( mpegts muxer, h264 parser, h264 decoder etc.)
[20:11:17 CEST] <faLUCE> then I can switch to udp as well
[20:11:32 CEST] <faLUCE> kepstin: what about mpegts over udp?
[20:11:44 CEST] <JEEB> often used in LANs
[20:11:50 CEST] <kepstin> faLUCE: should be fine, commonly used yeah
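A minimal sketch of mpegts over udp with the CLI, matching the localhost setup discussed above (pkt_size=1316 packs 7 x 188-byte TS packets per datagram):

    # sender
    ffmpeg -re -i input.mp4 -c copy -f mpegts "udp://127.0.0.1:1234?pkt_size=1316"
    # receiver
    ffplay "udp://127.0.0.1:1234"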
[20:12:22 CEST] <faLUCE> I see. It will be easy to change HTTP to UDP. I just have to add a simple module
[20:12:31 CEST] <faLUCE> (in my library)
[20:12:49 CEST] <faLUCE> but I don't like UDP very much to be honest. I had many problems with it, in the past
[20:12:59 CEST] <faLUCE> (FWs)
[20:13:03 CEST] <JEEB> sure, it can have a lot of issues depending on your network quality
[20:13:07 CEST] <JEEB> and/or other things
[20:14:02 CEST] <kepstin> but... tcp still has to deal with the same issues, and it prioritizes not losing data and transfer rate over latency, which is not always the best fit for every application
[20:14:25 CEST] <faLUCE> I know
[20:14:48 CEST] <faLUCE> I did not think about UDP with MPEGTS
[20:14:53 CEST] <faLUCE> it seems a good idea
[20:15:04 CEST] <faLUCE> an alternative to RTP over UDP
[20:15:59 CEST] <faLUCE> btw: I'm making the player with gstreamer... which is much easier than libav for the player side
[20:16:15 CEST] <faLUCE> (I used libav for the encoder side)
[20:16:58 CEST] <kepstin> gstreamer is probably using ffmpeg/libav to pull in the actual audio/video decoders, of course, but it has its own network, av sync, etc. stuff.
[20:17:08 CEST] <faLUCE> yes, it uses libav
[20:17:12 CEST] <faLUCE> gst-libav
[20:17:45 CEST] <kepstin> the really hilarious thing about gstreamer is that in gstreamer 0.10, gst-ffmpeg uses libav, and in 1.0, gst-libav uses ffmpeg
[20:17:45 CEST] <faLUCE> but it has a huge API, and the user doesn't have to bother with threads, queues etc.
[20:19:40 CEST] <faLUCE> I spent a lot of time finding a good library for the player. libav was too complex and too "low level", mpv has too minimal an API. I tried some others too... In the end, I found gstreamer
[20:24:52 CEST] <rev0n> running self.app = Subprocess(['ffmpeg', '-y', '-i', '-', '-an', 'new.webm'], stdout=Subprocess.STREAM, stdin=Subprocess.STREAM) gives me: https://pastebin.com/HF6SQs9m
[20:25:08 CEST] <rev0n> file new.webm is empty (0 bytes)
[20:26:09 CEST] <kepstin> faLUCE: anyways, opus in mpegts. If configured for lowest delay mode, at great expense to efficiency, and you avoid/disable resampling, the encoder delay is 5ms.
[20:26:41 CEST] <kepstin> (and it's quite tweakable; adjustable frame size)
[20:27:49 CEST] <faLUCE> kepstin: I'll consider it
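For reference, a hedged sketch of a low-delay libopus encode muxed into mpegts (2.5 ms is the minimum frame duration kepstin mentions; input and output names are arbitrary):

    ffmpeg -i input.wav -c:a libopus -application lowdelay -frame_duration 2.5 -f mpegts out.ts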
[20:30:35 CEST] <rev0n> any thoughts on my issue?
[20:33:37 CEST] <kepstin> rev0n: it's not clear what's going on there. there's nothing in the ffmpeg output that says there's any error; are you continuously sending it data, then closing the stdin stream when done to signal eof?
[20:35:37 CEST] <rev0n> kepstin: that's correct. In the tornado server, on socket open I create self.app = Subprocess(['ffmpeg', '-y', '-i', '-', '-an', 'new.webm'], stdout=Subprocess.STREAM, stdin=Subprocess.STREAM), and later, on each message, I do self.app.stdin.write(incoming).
[20:37:33 CEST] <kepstin> well, it did get enough input to detect the file format and print logs about that
[20:37:44 CEST] <rev0n> kepstin: sorry, but it's not clear to me. I am a total newbie to ffmpeg unfortunately, and I'm just trying to get going with it. The js script, which uses the MediaRecorder API, sends chunks of video every 3 seconds.
[20:39:47 CEST] <rev0n> kepstin: I left the script going in the background and actually after a few minutes (without any input from me) another part of the log just came up out of nowhere: https://pastebin.com/1RP3LdUW
[20:40:27 CEST] <kepstin> rev0n: that all seems to be working as expected, the vp9 encoder that it's using has a fair delay to start up.
[20:40:33 CEST] <kepstin> and is probably kinda slow
[20:40:39 CEST] <kepstin> you might want to use "-c:v copy"?
[20:42:32 CEST] <rev0n> kepstin: another thing is that if I set the script to send packets every 300ms, ffmpeg is not able to read the type. However if I just write the incoming bytes to a file with Python (not using ffmpeg, just file.write()) I get a perfectly readable webm file whose length matches the js script's sending interval.
[20:42:46 CEST] <rev0n> trying that just now
[20:45:54 CEST] <rev0n> kepstin: When adding -c:v copy I get the following: https://pastebin.com/TRTCF91U
[20:47:19 CEST] <kepstin> well, the pts issue is because of the weird segmentation, i guess, i assume that's why you had -fflags +genpts before?
[20:47:53 CEST] <rev0n> yes, but they changed nothing. Will try them once more
[20:48:53 CEST] <kepstin> you could also use the -r input option to override the framerate and write new pts values
[20:49:59 CEST] <rev0n> Tried with genpts and the log looks just the same.
[20:52:41 CEST] <rev0n> -r doesn't change a thing
[20:53:39 CEST] <rev0n> ['ffmpeg', '-fflags', '+genpts', '-r', '24', '-y', '-i', '-', '-r', '24', '-c:v', 'copy', '-an', 'new.webm'] still getting Non-monotonous DTS
[20:54:43 CEST] <kepstin> oh, huh, that's dts not pts
[20:54:46 CEST] <rev0n> right now, my new.webm file is growing (~7mb so far), but it's still unreadable to vlc
[20:54:51 CEST] Action: kepstin doesn't know the best way to handle that
[20:55:49 CEST] <rev0n> any ideas how to handle it?
[20:58:14 CEST] <rev0n> when trying to run the file in VLC nothing happens, when using native ubuntu movie app it shows "gstreamer error, general stream error"
[20:58:27 CEST] <kepstin> this is a bit weird, because I guess with each of the segments you're generating in the browser, it's encoding a completely new block separately
[20:58:56 CEST] <kepstin> it might need special handling to concatenate the pieces, since i guess they're webm fragments rather than a single webm file?
[20:59:12 CEST] <kepstin> re-encoding the video would probably work
[20:59:30 CEST] <kepstin> (of course, vp8/9 encoder is pretty cpu intensive)
[21:00:15 CEST] <rev0n> they are webm fragments, that's correct. when console.log the piece of data that's being sent to server I get in console: Blob {size: 356630, type: "video/webm"}
[21:00:31 CEST] <rev0n> size vary with each piece of data
[21:01:00 CEST] <rev0n> how can I do the re-encoding? again sorry for the newbie question, but again... I'm a newbie :)
[21:01:38 CEST] <kepstin> well, you do re-encoding by setting -c:v to something other than copy... or omitting it completely, in which case ffmpeg will re-encode to some 'default' codec for the format.
[21:01:47 CEST] <kepstin> for webm, that's vp9
[21:02:23 CEST] <kepstin> from your previous output, it was only managing 0.1fps with the default vp9 encoder settings, so you either need a faster computer or a bunch of settings tweaking to get that working.
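For what it's worth, the usual knobs for speeding up libvpx-vp9 look something like this (a sketch, not tuned values):

    ffmpeg -y -i - -an -c:v libvpx-vp9 -deadline realtime -cpu-used 8 -b:v 1M out.webm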
[21:04:06 CEST] <rev0n> well, it would be strange if my machine had insufficient resources; I'm running a 6-core AMD FX @ 3.6 GHz, 16 gigs of RAM, and some SSD drives (if that matters)
[21:07:08 CEST] <rev0n> ok, I tried -c:v vp9 and I get the following: https://pastebin.com/MXYxZmU6
[21:07:28 CEST] <rev0n> file on disk is still unreadable and seems suspiciously small
[21:07:38 CEST] <kepstin> hmm, 6fps
[21:07:55 CEST] <kepstin> it should be readable tho, with that ffmpeg version
[21:08:23 CEST] <kepstin> actually, it hasn't output much video yet, probably a fair bit buffered in the encoder
[21:08:35 CEST] <rev0n> oh, ok, it is readable
[21:08:48 CEST] <rev0n> very low quality tho
[21:08:58 CEST] <rev0n> but it's not important at this point
[21:09:11 CEST] <kepstin> well, you didn't tell it what quality settings to use, so it's just some random defaults
[21:10:51 CEST] <rev0n> random defaults will do just fine for now. I don't care about quality, but about making it work at all :P file is readable, but it works really slow. Currently I wait about 1 minute for any output and nothing really happens
[21:11:48 CEST] <kepstin> yeah, the default encoder settings will run really slow (note the 6fps figure, which is probably much slower than the original video was captured), and buffer a lot of frames (which is ok for saving to a file, but not if you're transmitting somewhere else in realtime)
[21:12:13 CEST] <rev0n> the new log: https://pastebin.com/YvSm3fh2
[21:12:35 CEST] <rev0n> again I've waited about 1-2 minutes for the "frame=" log entries to appear
[21:12:52 CEST] <rev0n> How can I speed it up for the live broadcast? is it possible?
[21:13:06 CEST] <kepstin> what's your final goal gonna be?
[21:13:09 CEST] <rev0n> or any workarounds?
[21:13:17 CEST] <kepstin> i don't imagine it's saving to a webm file on disk :)
[21:14:08 CEST] <kepstin> right now it's slow because of the use of a vp9 encoder without any tuning. But it's only using that because you're writing to a webm file; if you're planning to do e.g. hls or rtmp streaming, then you'd be using a different encoder entirely
[21:14:44 CEST] <rev0n> My goal is to getUserMedia (js) -> send to websocket -> ffmpeg gets data from websocket -> ffmpeg converts the data to flv and sends it over rtmp to my live transmission server on nginx
[21:15:34 CEST] <kepstin> ok, so for testing you could do something like write to a local flv file, something like "-c:v libx264 -tune veryfast -f flv local-test-file.flv"
[21:15:51 CEST] <rev0n> just trying
[21:15:57 CEST] <kepstin> er, "-preset veryfast", typo
[21:18:03 CEST] <rev0n> like this? ['ffmpeg', '-y', '-i', '-', '-c:v', 'libx264', '-preset', 'veryfast', '-f', 'flv', 'local-test-file.flv']
[21:18:44 CEST] <kepstin> sure, although you might want to leave the '-an' if you don't want audio - or add "-c:a aac -b:a 128k" or something if you want to keep the audio.
[21:20:26 CEST] <rev0n> kept -an to make things simpler
[21:20:48 CEST] <rev0n> again waiting for about 1 minute and it hangs on "stream mapping ...."
[21:21:35 CEST] <rev0n> at this point local-test-file.flv has about 271 bytes
[21:21:56 CEST] <kepstin> x264 still has a lot of frame buffering internally - around 250 frames by default, i think. You can try adding "-tune zerolatency" to disable that (at a fair bit of quality loss)
[21:22:39 CEST] <JEEB> definitely not 250
[21:22:59 CEST] <JEEB> around 60 or less + frame threading delay
[21:23:24 CEST] <DHE> 60 is about right. maybe 250 if you use preset placebo or something
[21:24:03 CEST] <kepstin> oh, i'm getting confused with the gop size
[21:24:04 CEST] <rev0n> not really promising: https://pastebin.com/r7GkeFn5
[21:24:10 CEST] <furq> is it lookahead plus threads
[21:24:13 CEST] <furq> or thereabouts
[21:26:53 CEST] <kepstin> right, default lookahead is 40 frames at medium, 60 at 'slower' through 'placebo', and reduces by 10 for each faster preset.
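Putting kepstin's suggestions together, a low-latency variant of the test command would be roughly (the filename is arbitrary):

    ffmpeg -y -i - -an -c:v libx264 -preset veryfast -tune zerolatency -f flv local-test-file.flv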
[21:28:26 CEST] <rev0n> and after another few minutes it crapped out another part of "frame=" entries: https://pastebin.com/H3uyRgNR
[21:28:32 CEST] <rev0n> file is readable
[21:31:44 CEST] <rev0n> banging my head against the wall right now...
[00:00:00 CEST] --- Fri Jul 28 2017