[Ffmpeg-devel-irc] ffmpeg.log.20170209

burek burek021 at gmail.com
Fri Feb 10 03:05:02 EET 2017


[00:00:51 CET] <Threadnaught> e.g. ffmpeg -i audio-firstpass.mp3 -i frames/%d.png -strict -2 out.mp4 just gives the audio and a black screen
[00:01:46 CET] <Threadnaught> what do I do?
[00:05:50 CET] <Threadnaught> ack, never mind, had to loop it, figured it out
[00:19:06 CET] <xtina> can anyone help me figure out how to sync audio and video in my ffmpeg stream?
[00:19:09 CET] <xtina> please?
[00:36:24 CET] <rebel2234> xtina: what raspberry pi are you using?
[00:42:18 CET] <xtina> hey, can anyone please help me with my audio/video sync issues? i'm piping 2 named pipes into ffmpeg and one is skipping
[01:24:23 CET] <xtina> hey, can anyone please help me with my audio/video sync issues? i'm piping 2 named pipes into ffmpeg and one is skipping?
[01:24:40 CET] <xtina> please?
[01:25:50 CET] <xtina> debianuser: you around?
[01:27:49 CET] <xtina> my audio file is playing at realtime speed
[01:27:54 CET] <xtina> but my video file is playing way faster than realtime
[01:28:07 CET] <xtina> how do i tell the video file to play according to realtime timestamps?
[02:16:32 CET] <xtina> should i be checking out the :map parameter?
[02:16:36 CET] <xtina> for audio/video syncing issues?
[02:35:50 CET] <xtina> why is my video stream running fast and skipping 2-3 seconds of frames every so often? what param controls that?
[02:38:09 CET] <xtina> please guys, can anyone help?
[02:54:47 CET] <spiderkeys> I am decoding h264 that I am receiving over a udp stream, but my program is doing a ton of other stuff and is continuously building up a buffer. This is to be expected, but I can't figure out how to drop all old frames and only use the latest frame. Are there any examples or reference literature for how to flush all old packets and only use the latest?
[03:38:19 CET] <xtina> quiet in here today :O
[03:45:50 CET] <kepstin> spiderkeys: using ffmpeg libraries?
[03:47:20 CET] <kepstin> spiderkeys: should be as simple as just decoding the video until it stops giving you frames, while throwing out all but the last frame you got...
[03:51:26 CET] <spiderkeys> kepstin: is that a while loop around the avcodec_decode_video2( pCodecCtx, pFrame, &got_frame, &packet ); call until...? I've seen some mailing list threads describe needing to feed a non-null packet with 0 size to do some kind of flushing operation
[03:53:24 CET] <kepstin> spiderkeys: hmm, I'd have to check the docs for the avcodec_decode_video2 function. I think it's easier to do with the new avcodec_receive_frame() api, since you can just keep calling it until it stops giving you frames
[03:54:09 CET] <kepstin> assuming you're actually receiving all the UDP data, and pushing it into ffmpeg decoder, and it's not getting lost...
[03:54:33 CET] <kepstin> if you're losing UDP data, I hope it's in something like RTP or mpeg-ts, so it can actually find a new keyframe and resync
[03:55:12 CET] <spiderkeys> Ah yea, I'm still using the old API. I am definitely getting all of the data, it is just buffering bigtime as I am currently only able to decode a frame at a time. There is a post processing call that takes 40ms, and the stream is 30fps, so it just creeps slightly more and more behind
[03:55:37 CET] <spiderkeys> I'll take a look at receive frame and see if that helps out
[03:56:05 CET] <kepstin> I think with the old api, there isn't a way to check whether you've decoded all of the currently available frames
[03:56:47 CET] <xtina> kepstin: any tips for preventing my video from skipping ahead?
[03:57:37 CET] <xtina> it is jumping frames, and i just want it to play the named pipe in realtime
[03:57:58 CET] <kepstin> xtina: not a clue, sorry. given the limitations you're working with, it might be best to write a custom application so you get more control, rather than attempting to convince the ffmpeg command line to do what you want :/
[03:58:24 CET] <xtina> it seems like lots of ffmpeg flags are *related* to adjusting video speed
[03:58:27 CET] <xtina> like setpts
[03:58:29 CET] <xtina> map?
[03:58:32 CET] <xtina> async/vsync?
[03:58:34 CET] <xtina> -re?
[03:58:40 CET] <xtina> are you familiar with those?
[03:58:50 CET] <xtina> i'm having trouble understanding them, and whether any suit my needs
[03:58:59 CET] <xtina> i just want to record video and play it in realtime, doesn't seem like a niche application..?
[03:59:31 CET] <xtina> sorry about my frustration, this just seems so simple and i don't know how to proceed
[03:59:34 CET] <kepstin> many of those wouldn't really help, stuff like setpts (and other filters) only work when re-encoding video, not when copying it. and "-re" only slows down ffmpeg, so it would probably just make it worse
[04:00:04 CET] <xtina> the first thing i want to understand is *why* this is happening
[04:00:13 CET] <xtina> https://www.youtube.com/watch?v=Bz12pDbay0U
[04:00:20 CET] <xtina> stuff like that video skipping ^^
[04:00:20 CET] <kepstin> my guess is that the framerate you're getting from the camera doesn't quite match the framerate that ffmpeg is using when reading the stream, so the a/v get desynced, but I'm not really sure if that's actually what's happening
[04:00:52 CET] <xtina> basically, the video will be perfectly real-time for 3-8 seconds
[04:00:55 CET] <xtina> then skip forward
[04:01:05 CET] <xtina> then perfectly realtime for another 3-8 seconds
[04:01:17 CET] <xtina> if the framerates didn't match wouldn't it always be wonky
[04:01:18 CET] <xtina> ?
[04:02:17 CET] <xtina> and the resulting file has video and audio perfectly in sync with no skips
[04:02:28 CET] <xtina> so, i tried the same command but outputting to a local file instead of to Youtube
[04:02:34 CET] <xtina> does that give any clues?
[04:02:36 CET] <xtina> i also tested my internet speed, 5mbps up
[04:02:42 CET] <xtina> and my cpu usage is <30%
[04:02:44 CET] <kepstin> so it works with a file, but not to youtube?
[04:02:55 CET] <xtina> correct
[04:03:01 CET] <kepstin> you're probably just hitting some of the limitations of the ffmpeg tool being single-threaded and not designed for realtime stuff :/
[04:03:18 CET] <kepstin> if the network blocks, even momentarily, it'll hang the tool and stop it from processing inputs
[04:03:24 CET] <xtina> if i take this command: http://pastebin.com/MG4jfhUa
[04:03:35 CET] <xtina> and output to a flv file then it's all good
[04:03:42 CET] <kepstin> and wifi in particular is known to have lots of small latency bursts or packet drops that would do that
[04:04:01 CET] <xtina> so people don't use ffmpeg for streaming?
[04:04:22 CET] <xtina> well, what i'm trying to understand is
[04:04:24 CET] <xtina> the audio NEVER skips
[04:04:28 CET] <xtina> it is always perfectly realtime
[04:04:30 CET] <xtina> only the video skips
[04:04:39 CET] <xtina> if there were latency bursts or packet drops, why would that never affect the audio?
[04:05:05 CET] <kepstin> hard to say, maybe it's just that since video is bigger than audio, it's more likely to run out of buffer
[04:05:09 CET] <xtina> if i do audio and null video, the audio doesn't skip. if i do video and null audio the video doesn't skip
[04:05:14 CET] <xtina> if i do them together the video skips
[04:06:25 CET] <furq> you could try `sysctl fs.pipe-max-size=1048576`
[04:07:46 CET] <kepstin> I have a tool that pipes raw frames into ffmpeg, I had to write some asynchronous (threaded) IO code to handle passing stuff to it to prevent my tool from being blocked when ffmpeg wasn't reading
[04:07:57 CET] <kepstin> larger pipe could help if it's something like that which is happening
[04:08:07 CET] <furq> i've done streaming with ffmpeg but never over wifi
[04:08:15 CET] <kepstin> alternatively, there's tools you can put in the middle of the pipe that just add more buffer
[04:08:51 CET] <furq> also didn't you get the video working with ffmpeg v4l2
[04:08:55 CET] <kepstin> (e.g. the 'pv' tool has an option that'll make it add a userspace buffer)
[04:09:09 CET] <kepstin> and mbuffer
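As a rough sketch of those two suggestions (the raspivid/ffmpeg arguments here are placeholders, not the exact command from the pastebin):

  # raise the cap on pipe buffer sizes (needs root)
  sudo sysctl fs.pipe-max-size=1048576

  # or put a userspace buffer in the middle of the pipe, e.g. with pv or mbuffer
  raspivid -o - -t 0 | pv -C -B 1000000 | ffmpeg -i pipe:0 ...
  raspivid -o - -t 0 | mbuffer -m 16M | ffmpeg -i pipe:0 ...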
[04:22:01 CET] <xtina> sorry i stepped off for a sec
[04:23:03 CET] <xtina> furq: audio *must* be via arecord to prevent constant stuttering, but i did manage arecord + ffmpeg v4l2 video, however, i only get 3-5 fps, lots of overrun errors, constant buffering, and this error:
[04:23:04 CET] <xtina> Error while decoding stream #0:0: Invalid argument, video output low)
[04:23:14 CET] <xtina> the best performance i've had so far has been arecord+raspivid -> ffmpeg
[04:23:29 CET] <xtina> i can get a steady 10fps (my target) with almost no buffering or over/underruns
[04:23:42 CET] <xtina> the only issue being that the video sometimes skips
[04:24:41 CET] <xtina> i will try your idea of sysctl fs.pipe-max-size=1048576
[04:25:23 CET] <xtina> i still don't totally understand the network issue
[04:25:30 CET] <xtina> since my video is getting written into a named pipe
[04:25:38 CET] <xtina> nothing should be 'lost' right?
[04:25:59 CET] <xtina> if there's a latency burst can i force ffmpeg to keep reading every frame, rather than skipping over a bunch?
[04:26:17 CET] <xtina> that's what i'm trying to find the setting for (but maybe i'm still misunderstanding)
[04:26:27 CET] <xtina> like ideally the "issue" would be that the video and audio sometimes pause, but in sync
[04:26:36 CET] <xtina> rather than the video skipping ahead
[04:27:13 CET] <xtina> isn't that what 'vsync' is for?
[04:27:57 CET] <xtina> i'd also be happy to reduce quality if that alleviated video packet drops.. does that make sense?
[04:47:04 CET] <xtina> hmm, right now i'm sending 1920x1080 since thats the native rez of my pi cam
[04:47:22 CET] <xtina> if i sent a lower rez, would that be helpful (because less data) or not helpful (because requires an extra scaling step)?
[04:47:34 CET] <xtina> i'd like to sacrifice quality to reduce buffering
[04:51:32 CET] <kepstin> xtina: the pipe only can hold a certain amount of data, so when the pipe is full (ffmpeg's not reading it), then the next time raspivid tries to write a frame, it'll "block" (or hang) until ffmpeg reads a frame. During that time it's blocked, it's not grabbing frames from the camera, so you'll get a skip.
[04:51:47 CET] <kepstin> that's just a guess at what's happening, but it seems to fit
[04:52:05 CET] <furq> if the camera can put out multiple resolutions then there's no extra step
[04:52:12 CET] <furq> and yeah i'd expect that to help
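Concretely, that would mean asking raspivid for the smaller mode directly rather than scaling afterwards, along these lines (the numbers are only an example):

  raspivid -w 640 -h 480 -fps 10 -b 1000000 -o - -t 0 | ffmpeg -i pipe:0 ...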
[05:04:25 CET] <xtina> kepstin: oh, hmm. so if i write raspivid to a file, not a pipe, then it won't block, right?
[05:30:15 CET] <xtina> furq: it's a miracle, but i think your sysctl command actually did the trick......
[05:30:21 CET] <xtina> wow!! :)
[05:30:36 CET] <xtina> i wasn't sure i'd ever get a synced audio/video stream but i just got one
[05:33:12 CET] <xtina> https://www.youtube.com/watch?v=zCuPZHTumYU in case anyone is curious to see
[05:33:43 CET] <xtina> and this was my command: http://pastebin.com/bRPAQKWt
[05:48:35 CET] <furq> raising the pipe size is a band-aid over the real problem
[05:48:46 CET] <furq> it should improve matters but it won't fix it 100%
[05:48:59 CET] <furq> maybe it'll be good enough though
[06:04:10 CET] <xtina> furq: what do you mean?
[06:04:35 CET] <xtina> i did notice that after 2 minutes of streaming the audio and video stuttered once and then desynced by about 0.5s
[06:05:53 CET] <xtina> is that the kind of thing you're talking about?
[06:06:16 CET] <furq> pretty much
[06:11:27 CET] <xtina> why would ffmpeg stall in reading the pipe?
[06:12:19 CET] <xtina> also ... i dont mind if a few frames are dropped every few minutes
[06:12:25 CET] <xtina> but i do need the audio and video to resync
[06:12:30 CET] <xtina> since it seems like it's just the video frames dropping
[06:12:33 CET] <xtina> i wonder how to do that?
[06:20:22 CET] <xtina> furq: any ideas? could i write a script or something..?
[06:21:10 CET] <xtina> essentially i'd like to 'snap' the audio and video frames based on recording timestamp, so if the video stream loses a few frames, it doesn't hop ahead of the audio
[06:50:55 CET] <xtina> interesting, i've noticed that it was the audio that skipped a few frames, not the video
[06:56:33 CET] <xtina> furq: are keyframes a way to prevent the desync from happening?
[07:03:09 CET] <xtina> btw thanks for your help guys (furq, kepstin, kerio, debianuser.. i've gotten a lot of help lol) .. made a lot more progress than expected :)
[07:31:28 CET] <xtina> kepstin: hey, how far did you get on that tool that pipes raw frames into ffmpeg? i would also like to prevent blocking when ffmpeg isn't reading
[07:31:39 CET] <xtina> and is your method different from tools you can put in the middle of the pipe to add buffer?
[08:06:49 CET] <naquad> hi
[08:07:33 CET] <naquad> is there a way to make ffmpeg broadcast a stream w/o a streaming server? i want an http endpoint with x264 from my cam, but it looks like i'll need a streaming server no matter what :(
[08:07:40 CET] <voxadam> Is it possible to split the audio, video, subtitle, and other tracks out of MP4 or MKV containers in such a way that they can be reassembled such that they are bit-perfect reproductions of the original?
[08:09:09 CET] <furq> voxadam: do you mean the streams themselves or the entire file
[08:09:14 CET] <c_14> naquad: you can push rtp or udp
[08:09:45 CET] <naquad> won't do, i don't have second server
[08:10:09 CET] <c_14> second server for what?
[08:11:10 CET] <c_14> I'm not sure what you're trying to do
[08:11:40 CET] <voxadam> furq: Split the streams into standalone files (plus a metadata file, I assume), transport the files independently, reassemble the streams and metadata into the original MP4 or
[08:11:40 CET] <naquad> i've got a raspberry pi with webcam connected, i want to publish video stream via http
[08:11:47 CET] <voxadam> MKV
[08:12:04 CET] <kerio> naquad: HLS
[08:12:07 CET] <furq> i mean it's possible but i don't know of any tool which does it
[08:12:16 CET] <kerio> it's literally just files
[08:12:18 CET] <naquad> kerio, not real time :(
[08:12:24 CET] <kerio> what do you mean
[08:13:21 CET] <kerio> then just install nginx-rtmp
[08:14:20 CET] <naquad> seems it'll be easier to just install vlc
[08:14:30 CET] <naquad> thanks for the help anyways
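For reference, the "HLS is literally just files" route kerio is pointing at would look roughly like this, with the playlist and segments dropped into a directory that any plain HTTP server can serve (device path, encoder settings and paths are only illustrative, and there is a few seconds of latency, which is naquad's objection):

  ffmpeg -f v4l2 -i /dev/video0 -c:v libx264 -preset veryfast -g 50 \
         -f hls -hls_time 2 -hls_list_size 6 -hls_flags delete_segments \
         /var/www/html/cam.m3u8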
[08:15:36 CET] <furq> voxadam: is there any reason it needs to be the exact original file
[08:15:43 CET] <furq> or that you need to send the streams separately
[08:15:54 CET] <kerio> a solution in search of a problem
[08:17:29 CET] <c_14> voxadam: I mean, you can do it with ffmpeg and it'll be almost the exact same thing, but there's no way to guarantee that it will be bitexact
[08:17:43 CET] <c_14> But why do you need this in the first place
[08:18:05 CET] <furq> there is -fflags +bitexact which will stop the muxer writing version information
[08:18:24 CET] <c_14> But if the input file has them it won't be bitexact to the input
[08:18:24 CET] <furq> which might work, if the original file was muxed with ffmpeg with -fflags +bitexact
[08:18:30 CET] <furq> which it probably wasn't
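A sketch of what the split-and-remux looks like with ffmpeg, with c_14's caveat that the result will be close but not provably bit-identical (the stream layout here is assumed; adjust the -map options to the real file):

  # pull each stream out losslessly
  ffmpeg -i in.mkv -map 0:v:0 -c copy video.mkv \
                   -map 0:a:0 -c copy audio.mka \
                   -map 0:s:0 -c copy subs.mks
  # put them back together
  ffmpeg -i video.mkv -i audio.mka -i subs.mks \
         -map 0 -map 1 -map 2 -c copy -fflags +bitexact rebuilt.mkv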
[08:44:31 CET] <xtina> hey furq, i'm curious. do you think it will be impossible for me to avoid audio/video desync, if I attempt a 1 hour stream from my Pi Zero via wifi?
[08:47:41 CET] <furq> not impossible but i wouldn't count on it working perfectly every time
[08:48:39 CET] <furq> it'll be less likely to fuck up if you can reduce the resolution/bandwidth etc or boost your wifi signal somehow
[08:50:35 CET] <kerio> x265 is so slow :<
[08:53:36 CET] <voxadam> furq: Not really, it's more of a theory thing at the moment.
[08:56:18 CET] <xtina> hmm, do you think my 5mbps upload speed is too slow?
[08:57:35 CET] <xtina> what do you mean by boost the signal? sorry this isn't my area :(
[09:00:37 CET] <furq> stick a bigger antenna on your router or something
[09:00:58 CET] <furq> not sure if there's an easy way to do it on the pi that doesn't make it less wearable
[09:14:35 CET] <xtina> and you say that because.. 5mbps is too low for good streaming?
[09:14:58 CET] <xtina> like how do you know the issue is that my signal is too weak and not that my command isn't optimized?
[09:14:59 CET] <furq> i say that because wifi has inconsistent throughput
[09:15:44 CET] <furq> if your command works when you write to a file then the problem is almost certainly the wifi
[09:16:47 CET] <furq> iirc you can't really make the pipe buffers any bigger than 1MB without running everything as root
[09:17:02 CET] <furq> so reducing the stream bandwidth or improving your signal consistency is about all you can do
[09:18:01 CET] <furq> or writing your own streaming application using ffmpeg's libs which lets you buffer as much as you want
[09:19:25 CET] <furq> you could try doing it all with ffmpeg and using -rtbufsize, but idk how much that'll help
[09:19:27 CET] <xtina> furq: even with the pipe viewer tool, pv
[09:19:36 CET] <furq> i guess that's another option
[09:20:53 CET] <furq> but yeah i don't use wifi very often but both devices i have with builtin 802.11g and no external antenna are pretty awful
[09:21:44 CET] <furq> i can have my laptop sat two feet from my router and it still keeps renegotiating
[09:25:01 CET] <xtina> so i'm actually connecting my pi to my phone's wifi hotspot
[09:25:19 CET] <xtina> so the phone and pi will be like 1 ft from each other, both on the same person
[09:45:52 CET] <xtina> anyway, yea fair enough. i've dropped the rez to 640x480, dropped the video bitrate to 1mbps
[09:45:58 CET] <xtina> i dropped the audio bitrate too
[09:46:06 CET] <xtina> hopefully the bandwidth consumption is low enough now
[09:46:11 CET] <xtina> and i will try the PV trick :)
[09:46:15 CET] <xtina> thx for all the tips man
[09:47:43 CET] <kerio> i've had very good results with mbuffer
[09:47:59 CET] <kerio> very fine-grained control
[09:53:39 CET] <xtina> i'm not familiar but i'll try both mbuffer and PV
[09:53:47 CET] <xtina> :)
[12:12:26 CET] <Elirips> Hello. If I use ffmpeg to extract frames from a stream, ffmpeg will first write the frame into a foo.jpg.tmp file, and then copy that foo.jpg.tmp to foo.jpg. Any way to avoid this? I would like to make ffmpeg directly write into foo.jpg, without a tmp-file
[12:13:45 CET] <c_14> ffmpeg shouldn't do that
[12:14:15 CET] <xtina> I'm trying to use pipe viewer to look at a video pipe that i'm passing to ffmpeg
[12:14:23 CET] <xtina> for some reason if i pass the pipe name directly to ffmpeg, it works
[12:14:32 CET] <xtina> but if i pass the pipe through PV, ffmpeg never reads from it
[12:14:44 CET] <xtina> i think i've made a stupid mistake but can't find it
[12:15:29 CET] <xtina> my command .. can anyone see something wrong? http://pastebin.com/FJaUJzV3
[12:23:07 CET] <JoshX> why would ffprobe give me "nb_frames": "26941" and ffprobe -show_frames <file> actually give me 26941 frames (coded_picture_number 0 - 26940)
[12:23:48 CET] <JoshX> but when i dump the mp4 file to frames, jpg files with ffmpeg -i <file> %05d.jpg it gives my root at cuda02:/scratch/mnt/input1/20161007/00# ls -1 t/*.jpg | wc -l
[12:23:51 CET] <JoshX> 26988
[12:23:55 CET] <JoshX> 26988 files?
[12:24:09 CET] <JoshX> where do the 47 'extra' frames come from?
[12:24:43 CET] <JoshX> ah! ffmpeg gives me 'duplicates'
[12:25:02 CET] <JoshX> but why? how do i just dump all the frames without ffmpeg giving me dups?
[12:25:16 CET] <JoshX> frame=26988 fps=1113 q=24.8 Lsize=N/A time=00:14:59.60 bitrate=N/A dup=47 drop=0 speed=37.1x
[12:25:25 CET] <JoshX> i see where the 47 extra frames come from
[12:26:13 CET] <JoshX> so how do i tell ffmpeg to just dump frames like they are in the file?
[12:26:36 CET] <JoshX> I seem to have gaps in the file which i'm trying to fill with 'black frames'
[12:27:01 CET] <JoshX> to make all files exactly 27000 frames (900 sec x 30fps) to fix synchronisation errors
[12:27:07 CET] <furq> is the source variable framerate
[12:27:12 CET] <JoshX> we're trying to overay 3 different files
[12:27:22 CET] <JoshX> no the source is somewhat crappy due to crappy rtsp streams
[12:27:37 CET] <furq> you can try -vsync vfr
[12:28:12 CET] <JoshX> so i see some different pkt_durations
[12:28:21 CET] <JoshX> which i can use to hunt down the gaps i guess
[12:28:25 CET] <JoshX> pkt_duration=3014
[12:28:25 CET] <JoshX> pkt_duration=3015
[12:28:25 CET] <JoshX> pkt_duration=51003
[12:28:26 CET] <JoshX> pkt_duration=6000
[12:28:34 CET] <JoshX> the 30xx are the 'normal' frames
[12:28:47 CET] <JoshX> so the 51003 is a 16 frame gap
[12:29:02 CET] <JoshX> 51003 minus the 3000 for the frame itself leaves about a 16 * 3000 gap
[12:29:10 CET] <JoshX> should be something like that right?
[12:29:11 CET] <furq> yeah the image2 muxer defaults to -vsync cfr which will dup frames
[12:29:24 CET] <JoshX> let me try the vfr option
[12:29:25 CET] <furq> forcing vfr output should fix it
[12:29:26 CET] <JoshX> hang on
[12:29:38 CET] <furq> afaik the gaps won't be reflected in the filenames though
[12:29:47 CET] <furq> so you'll need to figure it out from ffprobe
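In other words, something like this for the extraction step, with the gap positions recovered separately from ffprobe (file names are placeholders):

  # dump frames without duplicating up to a constant frame rate
  ffmpeg -i input.mp4 -vsync vfr %05d.jpg

  # the gaps then have to be located from the per-frame timestamps/durations
  ffprobe -show_frames -select_streams v input.mp4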
[12:30:39 CET] <JoshX> yeah i figured.. i can use a concat file i guess for putting jpgs back to mp4?
[12:30:47 CET] <furq> you can just use the image2 demuxer
[12:31:13 CET] <JoshX> yes, 26941 files now good
[12:31:19 CET] <JoshX> so now i have my frames
[12:31:22 CET] <furq> you should probably also use png if this is an intermediate format
[12:31:36 CET] <JoshX> thats like 20 times slower ...
[12:31:42 CET] <JoshX> and i need to do 650 hours
[12:31:45 CET] <JoshX> :-/
[12:31:49 CET] <furq> bmp?
[12:31:58 CET] <furq> that should be faster than both if you've got the disk space for it
[12:32:01 CET] <JoshX> frame=26941 fps=1189 q=24.8 Lsize=N/A time=00:14:59.60 bitrate=N/A speed=39.7x
[12:32:25 CET] <JoshX> bmp starts quick but drops fast
[12:32:42 CET] <JoshX> < 10x now
[12:32:48 CET] <furq> weird
[12:32:52 CET] <JoshX> middles at 8.7fps
[12:33:02 CET] <JoshX> i can try on a ssd scratch drive
[12:33:04 CET] <JoshX> let me check
[12:33:32 CET] <furq> well if it's io limited then that's probably way too much data if you're writing 650 hours
[12:34:05 CET] <JoshX> i'm doing 15 minutes at the time
[12:34:06 CET] <furq> you'll obviously get extra generation loss if you go via jpeg
[12:34:11 CET] <JoshX> the files are 15 minutes long
[12:34:24 CET] <JoshX> frame=26941 fps=519 q=-0.0 Lsize=N/A time=00:14:59.60 bitrate=N/A speed=17.3x
[12:34:45 CET] <JoshX> also, i tried doing hardware decoding with cuda, but that doesn't work to images?
[12:35:11 CET] <furq> well yeah decoding isn't encoding
[12:35:43 CET] <JoshX> well i usually do ffmpeg -y -hwaccel_device 0 -hwaccel cuvid -c:v h264_cuvid -i <input>
[12:35:51 CET] <JoshX> and that speeds things up
[12:36:04 CET] <JoshX> but then when i try to go to jpg it gives an error
[12:36:07 CET] <furq> oh fun
[12:36:20 CET] <furq> probably some colourspace thing
[12:36:28 CET] <furq> i've never really used hwaccel decoding so i couldn't help there
[12:36:39 CET] <JoshX> CUVID hwaccel requested, but impossible to achieve.
[12:36:43 CET] <JoshX> thats the error
[12:36:59 CET] <furq> does it work on that file otherwise
[12:37:03 CET] <JoshX> yes
[12:37:09 CET] <furq> fun
[12:37:45 CET] <JoshX> well i usually do ffmpeg -y -hwaccel_device 0 -hwaccel cuvid -c:v h264_cuvid -i <input> -vsync vfr -c:v h264_nvenc bla.mp4
[12:37:50 CET] <JoshX> that works fine :)
[12:37:58 CET] <JoshX> also just normal
[12:38:05 CET] <JoshX> well i usually do ffmpeg -y -hwaccel_device 0 -hwaccel cuvid -c:v h264_cuvid -i <input> -vsync vfr blah.mp4
[12:38:08 CET] <JoshX> works fine
[12:38:21 CET] <JoshX> it just transcodes it then, but that works
[12:38:43 CET] <JoshX> so i don't see why the %05d.jpg doesnt work :)
[12:39:29 CET] <JoshX> i'm going to try with jpgs first, just to see how the output looks
[12:39:42 CET] <JoshX> and if i can succesfully find and fill the gaps
[12:40:11 CET] <JoshX> can i use a concat file? for file 'frame_00001.jpg' etc? and then feed that to ffmpeg and let it create an mp4 file?
[12:40:31 CET] <furq> just -i %05d.jpg will work
[12:41:11 CET] <furq> fwiw you can probably do this without writing to temporary files
[12:41:19 CET] <furq> the filter's probably going to get pretty complicated though
[12:41:31 CET] <jkqxz> Does it work if you remove the mentions of hwaccel (just use the h264_cuvid decoder)?  The hwaccel option on things with an explicit decoder really means "output hardware surfaces", which doesn't sound like what you want there.
[12:42:12 CET] <JoshX> ah, let me check
[12:42:33 CET] <JoshX> furq: i need to insert a couple of 'black.jpg' lines where the gaps are
[12:42:51 CET] <JoshX> i need the files to be as 'correct' as possible and black frames are better than skips
[12:42:54 CET] <JoshX> in this case
[12:43:02 CET] <JoshX> because of synchronisation
[12:43:07 CET] <furq> yeah you could do that with e.g. the trim/concat/color filters
[12:43:16 CET] <furq> there's probably a more elegant way but that's the one that comes to mind
[12:44:46 CET] <JoshX> to generate a frames file would be easier i guess if that is possible
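Concretely, furq's "-i %05d.jpg" suggestion amounts to filling the gaps first (e.g. copying black.jpg into the missing slots so the numbering is contiguous) and then letting the image2 demuxer rebuild the video; a rough sketch, with the frame rate taken from the 900 s / 27000 frame target:

  ffmpeg -framerate 30 -i %05d.jpg -c:v libx264 -pix_fmt yuv420p out.mp4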
[12:45:17 CET] <JoshX> jkqxz: ghe with just the -c:v h264_cuvid i get only 50% the performance :)
[12:45:31 CET] <JoshX> so software decoding to jpg is faster apparently
[14:36:18 CET] <kepstin> JoshX: yeah, it's probably mostly slow because of copying to/from gpu memory being the limiting factor. jpg decoding is pretty fast.
[14:37:09 CET] <kepstin> er, i'm misinterpreting what you said, but still
[14:54:26 CET] <s-ol> does the Apple HLS muxer support mp4 streams (apple updated the hls spec to allow it)?
[14:55:09 CET] <s-ol> and/or does anyone know a good workflow to encode videos for adaptive streaming with HLS and MPEG-DASH without duplicate streams if possible
[15:24:09 CET] <Pasteur43> I am building ffmpeg shared libraries (.lib/.dll) on Windows using media-autobuild_suite for use in a C++ application. I want to be able to create debug info to allow source debugging in VS (presumably pdbs) and to control whether I create a debug or release build.
[15:24:21 CET] <Pasteur43> Can anyone tell me how to do this? Thanks
[15:26:43 CET] <JEEB> no idea about media-autobuild but if you have VS2013 or VS2015 up-to-date you should be able to follow what the MSVC FATE box does to create PDBs as well (requires compilation with MSVC as opposed to mingw-w64)
[15:29:40 CET] <JoshX> kepstin: i see.. thanks for the info
[15:33:46 CET] <Pasteur43> Thanks. I hadn't come across FATE I will investigate. media-autobuild_suite is a script that downloads and builds FFmpeg and related tools on Windows using msys2 and msvc. Apart from Visual Studio it downloads all dependencies before building.
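For context, a plain MSVC debug build of the FFmpeg libraries (outside media-autobuild_suite) usually starts from a configure line along these lines, run from an msys2 shell where cl.exe/link.exe are on PATH; treat the exact flags as a starting point, not a verified recipe:

  ./configure --toolchain=msvc --enable-shared --disable-static \
              --enable-debug --disable-optimizations \
              --extra-cflags="-Z7" --extra-ldflags="-DEBUG"
  make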
[15:40:18 CET] <IamTrying> http://i.imgur.com/YfRReWQ.png - IS FFMPEG VIRUS? I downloaded the official FFmpeg and just wanted to execute it. But Norton is scaring me to death with all its Popup. Why Norton is telling ffmpeg.exe is Virus or not trusted remove it?
[15:43:01 CET] <DHE> image quality is so poor I can't read that
[15:43:24 CET] <DHE> as for "being a virus", no it's not. but if you got a copy made by someone else it might be infected by them and it's not our fault that happens.
[15:46:09 CET] <JEEB> IamTrying: there is no official FFmpeg binaries
[15:50:48 CET] <IamTrying> JEEB: Norton is very confusing. When i am compiling any .exe using Visual Studio 2015 or Qt5, or just downloading the official ffmpeg from http://ffmpeg.org/download.html for windows and trying to execute it, Norton wakes up and scares the hell out of me
[15:51:19 CET] <IamTrying> It's very confusing; is Norton just ignorant or is my machine screwed up?
[15:51:37 CET] <IamTrying> I also have 3 years EV code signing fully paid
[15:58:11 CET] <IamTrying> DHE: the error says in dutch: 1) "our information about this file is unknown, it's not trusted to use ffmpeg, unless you trust it" 2) very few users, fewer than 5 users have used ffmpeg 3) very new application, it was made 1 week ago 4) not enough information available about the ffmpeg app 5) ffmpeg.exe was downloaded from an unknown source (even though it was from the official site) 6) we recommend deleting it
[15:58:42 CET] <IamTrying> This is no respect at all to FFmpeg, the great FFmpeg. what  a nonsense error.
[15:59:24 CET] <IamTrying> But Norton is scaring to hell even i code signed using EV code signer. Norton just dont give a ss....t.
[16:42:32 CET] <kerio> s-ol: i believe "fragmented isobmff" is the correct term
[16:46:41 CET] <s-ol> kerio: ah, I hadn't followed that term yet but its not my issue atm
[16:46:46 CET] <s-ol> generating a fragmented mp4 is working fine
[16:47:24 CET] <s-ol> using -g and -keyint_min I get multiple boxes according to mp4parser.com and my demo DASH stream is working too
[16:48:05 CET] <s-ol> I have a few patches to the undocumented mpeg-dash muxer that fix a few issues. my real problem is having an 'intermediate stage' or something from which i can deploy HLS and MPEG-DASH manifests
[16:49:19 CET] <s-ol> I guess I can write my own tool to jumble together the data from the .mpd to generate a .m3u8 or the other way around now that I've been reading through the HLS docs
[16:50:49 CET] <kerio> i'm not entirely sure the client support is up to par
[17:00:55 CET] <s-ol> kerio: thats why I need HLS and MPEG-DASH; our target platforms are pretty much covered by that
[17:01:14 CET] <kerio> no i mean for fMP4 HLS
[17:01:25 CET] <kerio> you still probably need the mpegts one
[17:01:39 CET] <s-ol> HLS works on everything Apple reasonably far back (not sure about the mp4 container part here) and mpeg-dash works with the media encoder js things
[17:02:15 CET] <s-ol> okay yeah, I haven't really investigated the compatibility for that and older iOS devices yet
[17:03:05 CET] <kerio> why couldn't we just add rtmp to browsers? :|
[17:04:26 CET] <s-ol> i need to test that compatibility matrix stuff again later anyhow, considering crosswalk support also
[17:05:26 CET] <s-ol> it might be ok to drop support for native iOS web a few versions back if crosswalk has better support for something but I have no idea what kind of features it includes in this area
[17:34:44 CET] <s-ol> (how) can i get the AVOption help text from the ffmpeg CLI?
[17:35:41 CET] <s-ol> nvm, it's ffmpeg --help muxer=...
[17:48:31 CET] <wouter> hi -- how do I tell ffmpeg to take a stereo audio track and turn it into a mono audio track by mixing both channels together?
[17:48:52 CET] <wouter> I know I can turn it into a mono track by picking one channel with -map_channel, but that's not what I need
[17:52:24 CET] <kepstin> wouter: adding the output option "-ac 1" should cause it to downmix, that's probably the simplest way
[17:56:02 CET] <wouter> kepstin: that works, thanks!
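For completeness, the downmix kepstin is describing is just (file names are placeholders):

  ffmpeg -i stereo_in.wav -ac 1 mono_out.wav
  # or, with explicit control over the mixing weights, via the pan filter
  ffmpeg -i stereo_in.wav -af "pan=mono|c0=0.5*c0+0.5*c1" mono_out.wav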
[18:31:56 CET] <Fenrirthviti> kerio: cause FLASH!
[18:32:51 CET] <Fenrirthviti> s-ol: There's some forks of nginx-rtmp that do HLS/DASH pretty well.
[19:09:24 CET] <s-ol> Fenrirthviti: for on-demand video too? i am working on that
[19:09:32 CET] <flyBoi> Anyone else use ffmpeg in a docker image and know of a way to shorten the docker build process?
[19:09:40 CET] <flyBoi> Or am I just forever stuck with 6mins+?
[19:15:02 CET] <s-ol> so I guess realtime encoding doesn't fit?
[19:34:36 CET] <teoo> hello, i have a problem with a DV AVI video, when i encode to h264 the audio plays at double speed
[19:35:15 CET] <BtbN> but the video is fine?
[19:35:20 CET] <teoo> yes
[19:35:31 CET] <BtbN> sounds like the samplerate is misdetected.
[19:35:34 CET] <teoo> vlc says this in the audio info: 0: video, 1: PCM S16 LE (s16l) 48kHz, 2 (Encoding): PCM S16 LE (s16l) 32kHz
[19:35:59 CET] <teoo> ok
[19:36:49 CET] <teoo> http://pastebin.com/ciUYtUDb
[19:37:01 CET] <teoo> here is the command for audio only (that is where the problem is)
[19:37:43 CET] <teoo> there are many errors "AC EOB marker is absent" but vlc plays the original fine without any problem
[19:38:35 CET] <BtbN> that's for the video stream
[19:38:53 CET] <BtbN> does ffplay play it fine?
[19:41:06 CET] <Fenrirthviti> s-ol: yup, should
[19:42:51 CET] <teoo> now i check with ffplay
[19:42:57 CET] <teoo> other strange thing: http://pastebin.com/Cr5Fms0Y
[19:43:06 CET] <teoo> if i encode also the video in that way
[19:43:14 CET] <teoo> audio speed is correct
[19:43:29 CET] <teoo> but there is 1 sec of sound, 1 sec muted, 1 sec sound.....
[19:44:01 CET] <teoo> removing the video part from the second paste make it play again fast (without mute)
[19:46:20 CET] <teoo> ffplay play fast both audio and video!!
[19:46:27 CET] <teoo> interesting
[19:48:19 CET] <teoo> ffplay output: http://pastebin.com/Y6JPhbk3
[19:51:03 CET] <BtbN> seems to me something with that file is either messed up or not supported in ffmpeg
[19:56:15 CET] <teoo> it was recorded from a camera with my old windows xp computer and windows movie maker (the new pc doesn't have a dv/1394 input)
[19:56:35 CET] <teoo> and from windows movie maker i used "save as dv video"
[19:57:35 CET] <teoo> fps is correctly detected (25) so why does it play at double or more speed?
[19:57:49 CET] <teoo> i'm now taking a look here https://trac.ffmpeg.org/wiki/How%20to%20speed%20up%20/%20slow%20down%20a%20video
[20:00:40 CET] <llogan> which player plays it wrong? do you have a short sample input file you can share?
[20:01:29 CET] <teoo> ffplay plays video and audio fast, vlc works fine (both audio and video)
[20:03:14 CET] <teoo> i have 5 videos, 1 hour each, recorded in the same way and all have the same problem
[20:03:55 CET] <llogan> make a short sample using "dd" (unless you're on Windows...I don't know what the equivalent is if there is one).
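The dd invocation llogan has in mind is roughly this (the byte count is arbitrary, just enough to carry a few seconds of DV):

  dd if=R1.avi of=sample.avi bs=1M count=16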
[20:04:11 CET] <teoo> i could try to cut with hex editor
[20:08:18 CET] <llogan> in the meantime show the complete console output of: ffprobe -show_format -show_streams input.foo
[20:08:24 CET] <teoo> is it necessary to have a short part of the video? i'm from italy and we have third world connection, i have 56kb/s=dial up upload speed
[20:09:28 CET] <llogan> i guess that complicates things
[20:13:52 CET] <teoo> i managed to "dd" copy 16 mb of video
[20:14:19 CET] <teoo> there are about 4 seconds of video, vlc still plays it correctly
[20:14:31 CET] <llogan> what about ffplay?
[20:16:43 CET] <teoo> works too, still fast (vlc shows a broken index warning with the cut video but you can ignore it and it works)
[20:17:03 CET] <llogan> are your ffmpeg/ffplay old?
[20:17:17 CET] <teoo> no, i have downloaded them yesterday
[20:17:33 CET] <teoo> (windows static compiled version)
[20:17:56 CET] <teoo> ffmpeg version N-83410-gb1e2192
[20:21:49 CET] <teoo> go here and tell me the email that has been assigned to you, i will send you by email (video in zip+password)
[20:22:50 CET] <teoo> https://www.guerrillamail.com
[20:25:55 CET] <teoo> ok i have the mail ready to be sent, the attachment is loaded
[20:27:04 CET] <llogan> would be more effective to share with anyone via dropbox or whatever
[20:27:42 CET] <teoo> i don't have credentials here, do you know any service that allows free upload without registering?
[20:28:36 CET] <llogan> not any good ones, but i have my own server and don't have a need for these services. if you have google account you could use google drive.
[20:33:21 CET] <teoo> uploading... i have also checked windows media player and it works fine there too
[20:33:41 CET] <teoo> in windows media player it starts "frozen" and then there are 3 seconds of video
[20:33:58 CET] <teoo> in vlc and ffplay there is a "beeeeep" at the start and then it play
[20:34:16 CET] <teoo> probably it was some kind of delay between camera play and pc recording
[20:36:37 CET] <teoo> here is the video: https://mega.nz/#!9IliCaQJ!Cy638GFXxCV3mglQ5svkkeHpxz_AVUL6yZwzb-lxPPM
[20:38:44 CET] <teoo> it was a show with fireworks (after, not in the 3 seconds) on the beach at Marina di Venezia, awesome place for your holidays :)
[20:39:32 CET] <llogan> i've only been to Siena
[20:50:57 CET] <teoo> are you trying something right now? (it's not that i want to hurry you) i'm asking because i'm constantly watching the screen hoping for a miracle :)
[20:57:25 CET] <aster__> what is the purpose of read_packet and write_packet in avio_alloc_context? any use case? thanks in advance
[21:00:35 CET] <llogan> teoo: looks like there is some garbage at the beginning of the file causing the issues. if you use "-ss 1" as an input option you can probably skip  and the output will probably play fine.
[21:02:58 CET] <teoo> llogan: no, unfortunately it doesn't change anything
[21:03:41 CET] <teoo> as i said, that beep/garbage is probably there because when you press rec on the pc, the camera (with videocassette) auto-clicks play but there is a delay before it actually starts
[21:03:49 CET] <dbz2k1> hello
[21:03:53 CET] <teoo> hello!
[21:04:38 CET] <dbz2k1> could you help with figuring out commands?
[21:04:50 CET] <dbz2k1> I am confused on something
[21:04:58 CET] <teoo> llogan: this is also suspicious [avi @ 000000000056a400] New audio stream 0:2 at pos:2519560 and DTS:1.00723s (and it is reported by vlc); i don't get why there should be 2 audio streams with two different bitrates
[21:05:14 CET] <teoo> dbz2k1: tell me, i will try
[21:06:44 CET] <dbz2k1> I have a input that uses my soundcard output and like steromix all setup I just want to make a simple udp stream with that input?
[21:09:39 CET] <teoo> dbz2k1: i have never used streams :/ only some tests using vlc
[21:10:55 CET] <teoo> here is vlc command, i don't know if it can help you: :sout=#udp{dst=1.2.3.4:1234} :sout-keep
[21:11:46 CET] <dbz2k1> can I convert vlc commands to ffmpeg in some way?
[21:11:56 CET] <teoo> let me take a look
[21:12:07 CET] <dbz2k1> http://pastebin.com/JyRFXsXL
[21:13:09 CET] <teoo> try reading here: https://trac.ffmpeg.org/wiki/StreamingGuide#StreamingasimpleRTPaudiostreamfromFFmpeg
[21:13:26 CET] <llogan> teoo: IIRC, DV can have more than one audio stream. the junk in the beginning probably prevents its detection until later. see if -probesize and/or -analyzeduration input options help.
[21:14:11 CET] <dbz2k1> ok
[21:16:13 CET] <dbz2k1> llogan: is the http streaming from vlc a vlc feature or is it part of ffmpeg?
[21:17:16 CET] <llogan> i don't know anything about streaming
[21:17:31 CET] <teoo> llogan: how do i add those options? here ffplay (and also ffmpeg) say unknown command/argument. anyway, i don't care at all about the second audio track. and that garbage is "analog"/from the source, so the video is not "broken" i think
[21:20:13 CET] <teoo> i don't know if it's a feature but the commands are similar. i remember that i tested vlc streaming and i could do it in lan with http but not udp
[21:20:20 CET] <llogan> ffmpeg -probesize <value> -analyzeduration <value> -i input ...
[21:25:23 CET] <teoo> ffmpeg -probesize 100000 -i R1.avi 2> result.txt (seems that the number i write doesn't matter) http://pastebin.com/QViynr60
[21:25:24 CET] <thebombzen> hmm is -c opus still incompatible with -f nut?
[21:26:20 CET] <teoo> mmm what if you connect to me with team viewer so you can do what you want easily
[21:26:58 CET] <teoo> ffmpeg -analyzeduration 10 -i R1.avi 2> result2.txt http://pastebin.com/4tVuRf7j
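For what it's worth, -probesize is counted in bytes and -analyzeduration in microseconds (both default to 5000000), so an -analyzeduration of 10 is effectively zero; larger values along these lines would be the more meaningful test (numbers are only illustrative):

  ffmpeg -probesize 50M -analyzeduration 20M -i R1.avi -f null -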
[21:37:06 CET] <teoo> llogan: if you want to take a look with teamviewer
[21:41:58 CET] <aster__> what is the purpose of read_packet and write_packet params in avio_alloc_context? any use case would be great. thanks
[21:42:28 CET] <teoo> i don't know i'm sorry
[21:44:03 CET] <klaxa> teoo: they are used by the AVIOContext for reading and writing packets, they are called when data should be written to an output or read from an input
[21:44:16 CET] <klaxa> they can be overridden for custom AVIOContexts
[21:44:36 CET] <klaxa> https://www.ffmpeg.org/doxygen/trunk/doc_2examples_2avio_reading_8c-example.html
[22:01:12 CET] <xtina> hey guys. i have a stupid question. i have a working FFMPEG command, but when I stick PV (pipe viewer) into it, it breaks
[22:01:36 CET] <xtina> command: http://pastebin.com/Xfet1Qdc
[22:01:40 CET] <xtina> if i take out 'pv | \', the command works
[22:01:43 CET] <xtina> does anyone know what is wrong?
[22:04:18 CET] <teoo> xtina: i'm not sure what you are trying to do (watching the text output in a different program?)
[22:04:43 CET] <xtina> i want to use PV to add a buffer to my video pipe
[22:04:52 CET] <xtina> because right now, it occasionally drops frames when it is full
[22:04:59 CET] <xtina> so i want to use pv -B
[22:05:13 CET] <teoo> i don't know pv :/
[22:09:10 CET] <teoo> you seems more expert than me in linux but if i'm not wrong | takes left stdout and put it to stdin at right
[22:09:39 CET] <teoo> echo hello | echo doesn't work because echo wants data as an argument, not on stdin
[22:09:47 CET] <teoo> can it be your problem?
[22:09:48 CET] <xtina> teoo: ya, i thought raspivid would send output to pv, which would send output to ffmpeg
[22:10:05 CET] <xtina> hmm what do you mean
[22:10:09 CET] <xtina> which command is the problem?
[22:10:38 CET] <teoo> i'm not sure because i never used / heared pv
[22:10:50 CET] <teoo> echo hello | echo <--try this, it will not work
[22:11:19 CET] <teoo> because echo (on the right) wants the data to be echoed passed as an argument, while the pipe | passes it to its stdin
[22:11:35 CET] <teoo> i'm not sure that this is your problem...
[22:12:07 CET] <xtina> pv wants its file from stdin
[22:12:44 CET] <xtina> 'pv will copy each supplied FILE in turn to standard output (- means standard input), or if no FILEs are specified just standard input is copied'
[22:14:00 CET] <xtina> is ffmpeg encountering an issue with pipe format?
[22:15:01 CET] <teoo> you might try to dump everything to file and check the differences:
[22:15:11 CET] <teoo> command1 > file1
[22:15:17 CET] <teoo> command1 | pv > file2
[22:16:06 CET] <xtina> i checked the output log
[22:16:11 CET] <xtina> when i use PV
[22:16:20 CET] <xtina> ffmpeg does not open the PV pipe
[22:16:51 CET] <xtina> kepstin: do you have any experience with PV since we were talking about it last night?
[22:19:46 CET] <teoo> check pv options and see if it has a buffered output, something like "print only after x bytes received"
[22:24:58 CET] <kepstin> by default, pv does splicing, I think you have to use the -B option to make it buffer. Read the man page, of course...
[22:25:39 CET] <kepstin> (might have to use -C as well)
[22:29:27 CET] <xtina> kepstin: thanks for the tips. i've tried -B and -C with pv but still cannot get ffmpeg to open/read from the pipe
[22:29:34 CET] <xtina> i am doing
[22:29:36 CET] <xtina> raspivid -w 640 -h 480 -fps 10 -v -b 1000000 -o - -t 0 | \
[22:29:39 CET] <xtina> pv -B 1000000 -C | \
[22:29:56 CET] <xtina> ffmpeg -i pipe:0 ...
[22:30:11 CET] <xtina> could there be anything wrong with the stdout pipe format of PV?
[22:31:10 CET] <xtina> ffmpeg appears to *open* the video pipe but not read from it
[22:31:12 CET] <xtina> Input #0, h264, from 'pipe:0':   Duration: N/A, bitrate: N/A     Stream #0:0, 52, 1/1200000: Video: h264 (High), yuv420p(progressive), 640x480, 10 fps, 10 tbr, 1200k tbn, 20 tbc
[22:35:06 CET] <xtina> actually, on closer inspection it looks like it's the audio file that isn't being read
[22:35:10 CET] <xtina> when I use PV
[22:35:39 CET] <xtina> kepstin: any idea why using PV on the video pipe blocks the audio pipe?
[22:36:32 CET] <kepstin> no idea :/
[22:38:10 CET] <xtina> OK, thanks anyway
[22:38:18 CET] <xtina> maybe i'll try mbuffer
[22:47:14 CET] <klaxa> oh wow i meant to highlight aster__ an hour ago
[22:47:24 CET] <klaxa> oops :P
[23:55:18 CET] <roasted_> dumb question - can I use ffmpeg to simply stream and display an RTSP feed? All of the examples I read suggest input + output (as in, recording). Wanted to make sure I can even simply 'stream' it before going further.
[23:57:30 CET] <DHE> not display with ffmpeg. but you can use ffplay to show it on-screen.
[23:58:54 CET] <roasted_> is that something as simple as ffplay -i rtsp://path/to/stream/url ?
[23:59:15 CET] <DHE> basically
[23:59:42 CET] <DHE> you can also add -af and -vf to run filters, etc
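e.g. (the URL and filters are placeholders):

  ffplay -i rtsp://camera.local/stream
  ffplay -vf "scale=1280:-1" -af "volume=0.5" -i rtsp://camera.local/stream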
[00:00:00 CET] --- Fri Feb 10 2017



More information about the Ffmpeg-devel-irc mailing list