[Ffmpeg-devel-irc] ffmpeg.log.20161030
burek
burek021 at gmail.com
Mon Oct 31 03:05:01 EET 2016
[00:44:04 CEST] <Filarius> so it seems the issue was the short set of test data and my software closing the stream before pushing all data to the pipe. I think, with the h264 encoder, it just doesn't send the "bad" data after the last keyframe.
[01:12:53 CEST] <fgfhdwfh> hello
[01:13:19 CEST] <fgfhdwfh> when i execute : ffmpeg -i rtmp://184.72.239.149/vod -c copy dump.flv
[01:13:30 CEST] <fgfhdwfh> i always get a connection refused error
[01:13:45 CEST] <fgfhdwfh> can you help? i can't find anything in the docs
[01:14:36 CEST] <fgfhdwfh> it's an open rtmp im testing
[01:19:04 CEST] <BtbN> well, the rtmp server refuses the connection
[01:21:01 CEST] <fgfhdwfh> thank you for your reply
[01:22:56 CEST] <BtbN> Well, it's not something ffmpeg can do much about, so not sure what you'd expect.
[01:23:09 CEST] <BtbN> Just make sure you are using the right IP and port.
[01:24:25 CEST] <fgfhdwfh> ok i understand
[01:24:35 CEST] <BtbN> also, rtmp is usually a two part url
[01:24:41 CEST] <BtbN> so it should be /vod/something
[01:24:45 CEST] <BtbN> and not just /vod
[01:25:20 CEST] <fgfhdwfh> can i use http stream url ?
[01:25:26 CEST] <fgfhdwfh> ffmpeg -i http://218tv.ddns.net/ -c copy dump.flv
[01:25:38 CEST] <BtbN> sure, if the server serves something ffmpeg understands
[01:25:50 CEST] <BtbN> if it's a website with some (flash) player on it, no
[01:27:26 CEST] <fgfhdwfh> i understand that i have to ask simply for rtmp url for the live stream so i can use it with ffmpeg
[01:28:02 CEST] <BtbN> if the website plays it with some flash player, you should be able to extract it. If not from the source of the site, wireshark will find it.
[01:44:19 CEST] <fgfhdwfh> thank you :)
[01:46:07 CEST] <fgfhdwfh> i tracked the rtmp link, but from my server i again get a connection refused error. So i'll just ask the stream administrator to give permission to my server's ip
[01:46:15 CEST] <fgfhdwfh> thank s again
[02:46:55 CET] <Keridos> how can i convert an RGB input video into yuv444p output?
[02:47:18 CET] <Keridos> currently I got wrong colors in output (looks like it interprets the input as BGR??)
[02:52:08 CET] <furq> paste your command line and output
[03:00:46 CET] <Keridos> furq: currently cannot, using custom ffmpeg output with obs, cannot get the full command line it launches
[03:01:18 CET] <Keridos> just trying to find out how I can select a specific pixel format to interpret the input?
[03:02:09 CET] <Keridos> so I do not want to specify pix_fmt for the output, but tell ffmpeg to interpret the input as a specific pixel format, even if it autodetects another one
[03:02:21 CET] <furq> specify -pix_fmt before -i
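For context: -pix_fmt (like -s and -r) placed before -i tells ffmpeg how to interpret the input rather than what to produce, and it only takes effect for formats with no pixel-format signalling of their own, such as a rawvideo pipe. A minimal sketch of the pattern, using a lavfi testsrc2 source and a 64x64 frame size as stand-ins for the real capture:

```shell
# First ffmpeg emits raw rgb24 frames; the second is forced to interpret the
# piped bytes as rgb24 (the -pix_fmt/-s/-r before -i describe the INPUT),
# then converts to yuv444p. Sizes and rates here are illustrative.
ffmpeg -f lavfi -i testsrc2=duration=0.2:size=64x64:rate=25 \
       -pix_fmt rgb24 -f rawvideo - |
ffmpeg -f rawvideo -pix_fmt rgb24 -s 64x64 -r 25 -i - \
       -pix_fmt yuv444p -f rawvideo -y /tmp/pix_test.yuv
```

If the input format carries its own pixel-format information (e.g. a decoded video stream), the input-side -pix_fmt is ignored and a format filter on the output side is needed instead.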
[05:24:32 CET] <kanzure> pp/win 79
[06:02:04 CET] <Ekho> probably just being blind, but I can't find what flags are needed during configure to compile with alsa support. can somebody point me in the right direction? all i can find is that I need libasound
[06:14:38 CET] <furq> Ekho: it's enabled by default
[06:15:21 CET] <furq> if you don't have asoundlib.h or libasound then it'll be automatically disabled
[06:23:23 CET] <Ekho> I have both asoundlib.h and libasound
[06:40:44 CET] <Ekho> ignore me, I'm an idiot and had a typo character in my alsa-lib prefix, so ffmpeg couldn't find them
[12:41:08 CET] <Filarius> funny, FFmpeg doesn't write all frames from stdin if stdout and stderr are not closed...
[12:49:02 CET] <crst> I haven't been able to figure this out from google or the docs: with the latest ffmpeg, what is the best practical way to convert an mp3 of a voice recording, keeping the file as small as bearable while getting the highest quality for that small file size?
[12:58:28 CET] <StephenS> is there an issue with opus git? It seems I can't clone it with: git clone http://git.opus-codec.org/opus.git
[13:00:51 CET] <StephenS> hmm seems to work now, just takes a lot of time heh
[15:30:52 CET] <TAFB2> Thank you FFMPEG :) Live streaming of the event yesterday went FLAWLESSLY! Thank you apple for HLS streaming too. Time lapse Zombie video: http://www.skyviewelectronics.com/zombie-walk-brooklin
[15:30:59 CET] <kerio> \o/
[15:50:39 CET] <SchrodingersScat> nice
[16:32:48 CET] <paule32> hello, how can i stream my X server screen, scaled from 800x600 to 640x480, video only and compressed, to an nginx server on rtmp port 1935?
[16:44:37 CET] <paule32> ffmpeg -f x11grab -maxrate 128k -video_size 800x600 -g 60 -bufsize 1024k -f mp4 rtmp://192.168.178.80:1935/live
[16:44:50 CET] <paule32> Output #0, flv, to 'rtmp://192.168.178.80:1935/live':
[16:44:50 CET] <paule32> Output file #0 does not contain any stream
[16:50:40 CET] <jkqxz> You need to tell it where to get the input from. It's probably X display :0, so add "-i :0" after "-f x11grab".
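jkqxz's fix applied to the command above, as a sketch; the display name, server address and rate numbers are taken from the log and may need adjusting:

```shell
# x11grab needs an input: the X display (often :0 or :0.0). Input options
# like -video_size must come before -i; -f flv is used because RTMP carries
# FLV (see the later discussion in this log).
ffmpeg -f x11grab -video_size 800x600 -i :0 \
       -b:v 128k -maxrate 128k -bufsize 1024k -g 60 \
       -f flv rtmp://192.168.178.80:1935/live
```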
[16:51:24 CET] <BtbN> you also won't have much fun with a 128kbps video stream.
[16:52:37 CET] <jkqxz> It would take a while to converge, but for a mostly-static PC stream it might only be an incomplete disaster.
[16:54:46 CET] <kerio> i have some 1.8mbps songs
[17:14:52 CET] <paule32> jkqxz: what do you can point 1024k ?
[17:15:22 CET] <paule32> is ffserver needed?
[17:16:02 CET] <furq> no
[17:16:22 CET] <paule32> how then rtmp to nginx?
[17:17:21 CET] <furq> ffmpeg -f lavfi -i testsrc -c:v libx264 -f flv rtmp://192.168.178.80:1935/live
[17:17:32 CET] <furq> if that works then your problem is probably the thing jkqxz said
[17:18:47 CET] <paule32> unknown encode libx264
[17:19:21 CET] <furq> well that's a great start
[17:19:54 CET] <furq> you probably want to start by getting an ffmpeg binary that has an h264 encoder
[17:21:22 CET] <paule32> D.V.LS h264 H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10
[17:21:22 CET] <paule32> jens at kallup:/media/sdb1/e-learning/mov$ ffmpeg -codecs | grep h264
[17:22:10 CET] <furq> so you probably want to start by getting an ffmpeg binary that has an h264 encoder
[17:26:34 CET] <paule32> Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1280x720 [SAR 1:1 DAR 16:9], 1532 kb/s, 29.97 fps, 29.97 tbr, 30k tbn, 59.94 tbc (default)
[17:38:43 CET] <paule32> runs
[17:59:27 CET] <paule32> ./tests/../ffserver -f ffserver.conf
[17:59:36 CET] <paule32> cancel running
[18:02:13 CET] <paule32> https://paste.fedoraproject.org/465342/14778468/
[18:11:03 CET] <paule32> ok, ffmpeg works, for creating mp4 file
[18:11:16 CET] <paule32> input is screen, output is file
[18:11:28 CET] <paule32> so, how can i send to nginx?
[18:11:42 CET] <paule32> or ffserver?
[18:12:38 CET] <JEEB> never use ffserver unless you're ready to maintain or rewrite it
[18:13:07 CET] <JEEB> if you want to stream, utilize a proper streaming server. icecast and a few others come to mind
[18:13:40 CET] <paule32> i am under Linux
[18:13:47 CET] <paule32> testing only, at the moment
[18:14:26 CET] <paule32> i am a c programmer, so rewriting should not be so hard ...
[18:14:39 CET] <Mavrik> why on earth.
[18:14:40 CET] <paule32> but i shall learn the basics ..
[18:14:49 CET] <Mavrik> when you can use nginx-rtmp plugin.
[18:15:04 CET] <JEEB> btw, did that thing support multiple profiles in any sane way?
[18:15:13 CET] <JEEB> since flv IIRC is limited to a single thing or so?
[18:15:39 CET] <Mavrik> nginx-rtmp?
[18:15:39 CET] <bencoh> what is "that thing"?
[18:15:45 CET] <Mavrik> or ffserver? :)
[18:16:02 CET] <paule32> hold on guys
[18:16:18 CET] <paule32> i test and then, i try to customize
[18:16:21 CET] <bencoh> nginx-rtmp can stream out to hls, so ... yes, somehow
[18:16:24 CET] <Mavrik> JEEB, nginx-rtmp supports HLS and DASH streaming which should be used instead of RTMP :P (yeah, dumb name I know :P )
[18:16:36 CET] <JEEB> yes, but the *input* is FLV in RTMP
[18:16:47 CET] <JEEB> thus I'm not sure how you're supposed to input multiple video tracks to it
[18:16:59 CET] <bencoh> JEEB: sure, but the point is you stream only one (highest) profile to it
[18:17:01 CET] <paule32> ffmpeg -f x11grab -i :0.0+100,200 -framerate 7 -f mp4 /media/sdb1/e-learning/mov/feed1.mp4
[18:17:11 CET] <paule32> this create a mp4 file
[18:17:17 CET] <Mavrik> JEEB, ahh, it's rather basic
[18:17:23 CET] <Mavrik> it doesn't do switching IIRC
[18:17:31 CET] <furq> JEEB: https://github.com/arut/nginx-rtmp-module/wiki/Directives#hls_variant
[18:17:39 CET] <paule32> but i have to cap it (max size) so it does not fill up my hd
[18:17:56 CET] <JEEB> furq: ahh
[18:17:59 CET] <JEEB> that makes sense
[18:18:16 CET] <bencoh> actually this hls_variant thing is far from being perfect, since GOP won't be aligned
[18:18:40 CET] <JEEB> aligning GOPs isn't that hard anyways :P
[18:18:43 CET] <bencoh> (unless you explicitly turn off scene cut)
[18:18:54 CET] <paule32> guys, i am here :-)
[18:19:14 CET] <furq> i'd expect to have to use fixed-size gops for adaptive streaming anyway
[18:19:16 CET] <bencoh> JEEB: when using separate encoders it's not "hard" but it's kindof a waste ... but anyway :)
[18:19:32 CET] <JEEB> furq: I don't but a lot of shit seems to expect it
[18:19:48 CET] <JEEB> I was way too happy back with just HLS, where apple and others were all m'kay about it
[18:19:52 CET] <bencoh> furq: you can do without it but you'll need synced encoders, somehow
[18:19:54 CET] <JEEB> then I entered the lairs of the dragons
[18:20:04 CET] <JEEB> and fuck those pieces of shit
[18:20:27 CET] <JEEB> (excuse me for the language but that's what I got after interacting with certain vendors)
[18:20:28 CET] <paule32> JEEB: i want to go the ffserver way
[18:20:32 CET] <furq> streaming is fun
[18:20:37 CET] <furq> please don't use ffserver
[18:20:53 CET] <JEEB> paule32: which is pretty much "rewrite your own streaming server"
[18:21:01 CET] <paule32> no
[18:21:03 CET] <JEEB> because ffserver is dead and will not be alive
[18:21:08 CET] <paule32> oh
[18:21:08 CET] <furq> hopefully after 3.2 comes out i will never have to explain that again
[18:21:18 CET] <furq> a beautiful dream~
[18:21:31 CET] <JEEB> a shot in the head and thrown into the back alley
[18:21:37 CET] <paule32> so let me try the nginx way?
[18:22:15 CET] <JEEB> paule32: just for your information - ffserver probably sounds good as a title, but the thing would never work like you'd expect and you had to know its internals to do *anything*. also it required hacks all around because it liked to poke the internals of the libraries instead of just utilizing them properly
[18:22:25 CET] <furq> paule32: do you have libx264 yet
[18:22:48 CET] <Mavrik> Is there anything cheaper than Wowza if you want to have a decent streaming server?
[18:22:49 CET] <paule32> yes
[18:22:59 CET] <paule32> ffmpeg is compiled and running
[18:23:07 CET] <Mavrik> (set-it-and-forget-it variety with transcode and HLS/DASH/Smooth)?
[18:23:11 CET] <furq> what's the problem then
[18:23:24 CET] <JEEB> I am not sure if I would put transcode together with a media serving thing
[18:23:29 CET] <paule32> i cant stream to my nginx server
[18:23:36 CET] <JEEB> my preference is usually to separate the two
[18:23:36 CET] <paule32> i get error
[18:23:56 CET] <Mavrik> JEEB, yeah, mine too, but I get a lot of queries for smaller setups
[18:24:12 CET] <Mavrik> Where people aren't really experts, have < 500 streams active and don't want too much hassle.
[18:24:21 CET] <Mavrik> I usually recommended Wowza which works well enough for those cases.
[18:24:25 CET] <JEEB> Mavrik: well nothing stops you from putting the two onto the same machine if you really want
[18:24:55 CET] <JEEB> but yeah, I should start looking at solutions again
[18:25:35 CET] <paule32> https://paste.fedoraproject.org/465427/77848319/
[18:25:37 CET] <paule32> look
[18:26:12 CET] <BtbN> that's not a valid rtmp url.
[18:26:23 CET] <BtbN> the name is missing
[18:26:26 CET] <furq> that's also not the same x11grab invocation you just pasted
[18:26:42 CET] <paule32> that differs, yes
[18:26:50 CET] <paule32> that is, what i try, with nginx
[18:29:19 CET] <paule32> https://paste.fedoraproject.org/465440/47784854/
[18:29:26 CET] <paule32> now, i get this
[18:29:34 CET] <furq> -f flv
[18:29:44 CET] <paule32> flash
[18:29:45 CET] <paule32> ?
[18:29:49 CET] <paule32> is dead
[18:29:50 CET] <furq> rtmp uses flv
[18:30:03 CET] <furq> also that's encoding to mpeg4, which is not x264
[18:30:34 CET] <paule32> can you give me correct cmd line?
[18:30:36 CET] <paule32> thx
[18:31:15 CET] <furq> replace -f mp4 with -f flv
[18:31:28 CET] <furq> if you don't have x264 then it's going to encode to some garbage codec
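Putting furq's two corrections together (flv output format, a real H.264 encoder), a sketch; the display, crop offset, and stream name are assumptions:

```shell
# -f flv because RTMP carries FLV; libx264 for H.264 (requires an ffmpeg
# built with --enable-libx264). "mystream" is a hypothetical stream name,
# since RTMP URLs are usually app/name (here: live/mystream).
ffmpeg -f x11grab -i :0.0+100,200 \
       -c:v libx264 -preset veryfast -pix_fmt yuv420p \
       -f flv rtmp://192.168.178.80:1935/live/mystream
```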
[18:31:38 CET] <paule32> ah ok
[18:31:53 CET] <paule32> but will the client players see mp4 ?
[18:33:02 CET] <Mavrik> They shouldn't for live stream.
[18:33:10 CET] <Mavrik> At least not for HLS.
[18:33:43 CET] <paule32> https://paste.fedoraproject.org/465455/48792147/
[18:51:32 CET] <kerio> can't really livestream mp4
[18:52:17 CET] <JEEB> *objection*
[18:52:33 CET] Action: JEEB inserts loads of text about segmented ISOBMFF
[18:53:24 CET] <BtbN> in theory you don't even need to segment it. Just generate the moov atom early, and put it in the beginning. like DASH does
[18:53:43 CET] <JEEB> DASH is just fragmented ISOBMFF
[18:53:59 CET] <BtbN> it has one special element, which contains the header/moov atom.
[18:54:00 CET] <JEEB> except you can have separate initialization segments with the parameter sets etc
[18:54:11 CET] <BtbN> and if you cat header seg1 seg2 ... you get a valid file
[18:54:14 CET] <JEEB> you could as well have them in the beginning of each normal segment
[18:54:14 CET] <BtbN> so you can stream that
[18:54:21 CET] <Mavrik> JEEB, can we agree on "streaming MP4 is a sin and DASH an abomination" ? :P
[18:54:40 CET] <JEEB> no? it's less bad than HLS because MPEG-TS eats bandwidth?
[18:54:41 CET] <kerio> long live hls
[18:55:16 CET] <JEEB> and I still think fragmented ISOBMFF is the least bad ingest format for HTTP streaming remuxing
[18:55:17 CET] <BtbN> that 1 or 2 % of overhead doesn't matter too much
[18:55:52 CET] <paule32> hey BtbN nice to see you
[18:56:14 CET] <kerio> how did we end up with https/ip
[18:56:15 CET] <paule32> did you code in pascal, anymore?
[18:56:28 CET] <JEEB> kerio: people wanted to use their HTTP infra
[18:56:35 CET] <JEEB> like load balancers and caching
[18:56:40 CET] <JEEB> so, uh, here we are
[18:56:57 CET] <paule32> JEEB: is that output normal?
[18:56:59 CET] <paule32> https://paste.fedoraproject.org/465455/48792147/
[19:01:17 CET] <paule32> the argument -b 128k does nothing
[19:01:39 CET] <paule32> what is -crf 20 ?
[19:04:15 CET] <paule32> and -vcodec libx264
[19:04:31 CET] <paule32> give me unknown codec 'libx264'
[19:05:05 CET] <JEEB> then you built without x264
[19:05:06 CET] <JEEB> simple as that
[19:10:12 CET] <paule32> when i remove the option, i get "muxer is not streamable"
[19:11:38 CET] <paule32> what does "Past duration ... too large" mean?
[19:12:56 CET] <paule32> will this debug output be stored in a log file?
[19:13:24 CET] <paule32> hallelujah, then happy streaming
[19:14:05 CET] <TheXzoron> how would I record from multiple audio input devices in ffmpeg
[19:16:05 CET] <TheXzoron> ffmpeg -f alsa -ac 1 -ar 44100 -i webcamrate -f alsa -ac 2 -ar 48000 -i looprecvol /tmp/test.wav
[19:16:21 CET] <TheXzoron> is what I'm trying but it does not get the webcam mic
[19:16:34 CET] <TheXzoron> but it does individually
[19:20:53 CET] <Filarius> -i input1 -i input2 -map 0 -map 1 output
[19:21:47 CET] <Filarius> if your input has several tracks, like video and audio, then -i input1 -i input2 -map 0:a -map 1:a output
[19:22:52 CET] <Filarius> -map 0:0 -map 0:1 also works; here you just show the track number, audio must be 1 or more
[19:24:46 CET] <TheXzoron> ffmpeg -i alsa -i alsa -map webcamrate -map looprecvol /tmp/test.wav this didn't work so
[19:30:00 CET] <TheXzoron> ok I got it it was ffmpeg -f alsa -ac 1 -ar 48000 -i webcamrate -f alsa -ac 2 -ar 44100 -i looprecvol -map 0 -map 1
[19:34:51 CET] <TheXzoron> but
[19:34:57 CET] <TheXzoron> no it wasn't
[19:43:14 CET] <TheXzoron> it doesn't play properly
[19:45:41 CET] <TheXzoron> oh I see
[19:45:50 CET] <TheXzoron> one source is 1 channel and the other has 2 channels
[19:46:02 CET] <TheXzoron> and it keeps trying to switch between channels for some reason
[19:46:06 CET] <TheXzoron> why
[19:46:23 CET] <Filarius> ffmpeg -f alsa -ac 1 -ar 48000 -i webcamrate -f alsa -ac 2 -ar 44100 -i looprecvol -map 0 -map 1 out.mkv
[19:47:35 CET] <Filarius> and I do not see how you can put 2 audiostreams in one wav file
[19:47:58 CET] <TheXzoron> I didn't know wave was mono only
[19:48:29 CET] <Filarius> mono/stereo is audio track parameters
[19:49:25 CET] <Filarius> and i'm not sure what FFmpeg will do if you try to mux 2 tracks to WAV without letting it know which are the left and right channels
[19:49:54 CET] <paule32> so, i have pipe the output 2> /dev/null
[19:50:04 CET] <paule32> how can i see, if something wrong?
[19:52:06 CET] <paule32> ah ok
[19:52:06 CET] <TheXzoron> ok when I
[19:52:17 CET] <paule32> i see
[19:52:25 CET] <TheXzoron> Filarius: the audio of looprecvol is not present
[19:52:49 CET] <paule32> have you /dev/video0 ?
[19:53:28 CET] <TheXzoron> I'm using the webcam as a mic not a video source paule32
[19:53:41 CET] <Filarius> to make stereo from 2 mono -filter_complex "[0:a][1:a]amerge=inputs=2[aout]" -map "[aout]" output.wav
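Filarius's amerge line, made runnable here with two sine generators standing in for the two ALSA capture devices (webcamrate / looprecvol) from the log:

```shell
# Merge two mono inputs into one stereo track. The lavfi sine sources are
# placeholders for "-f alsa -i webcamrate" and "-f alsa -i looprecvol".
ffmpeg -f lavfi -i sine=frequency=440:duration=1 \
       -f lavfi -i sine=frequency=880:duration=1 \
       -filter_complex "[0:a][1:a]amerge=inputs=2[aout]" \
       -map "[aout]" -y merged.wav
```

merged.wav then contains a single stereo stream, which is what the WAV muxer expects; ffprobe should report channels=2 for it.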
[19:54:03 CET] <paule32> webcams are video sources (mine has a mic in it)
[19:55:02 CET] <Filarius> okay, give me two examples of just recording audio from each source, separately
[19:55:26 CET] <paule32> have you tried vlc before? it is easy to stream and play... it is a universal knife, and you can select the source with the mouse
[19:56:03 CET] <paule32> -i input1 -i input2 -map out1 -map out2
[19:56:57 CET] <paule32> okay, i misunderstood
[19:57:01 CET] <paule32> mic not video
[19:57:03 CET] <paule32> sorry
[19:57:34 CET] <Filarius> TheXzoron, first make examples how to record from each source one by one
[19:58:50 CET] <TheXzoron> ffmpeg -f alsa -ac 1 -ar 44100 -i webcamrate out.ogg
[19:59:20 CET] <TheXzoron> ffmpeg -f alsa -ac 2 -ar 48000 -i looprecvol out.ogg
[20:00:35 CET] <paule32> ah
[20:00:39 CET] <paule32> also
[20:00:42 CET] <paule32> alsa
[20:00:47 CET] <paule32> very old
[20:01:05 CET] <paule32> pulseaudio is the replacement
[20:01:43 CET] <paule32> on older linux boxes, pulse runs on top of alsa, but please try to use pulseaudio
[20:01:57 CET] <paule32> but don't say i told you so :-)
[20:02:34 CET] <TheXzoron> I have but it's very unstable
[20:02:44 CET] <paule32> yes, old linux box
[20:02:55 CET] <paule32> i use linux mint, and debian gnome
[20:02:59 CET] <TheXzoron> no I was using pulseaudio 9
[20:03:08 CET] <TheXzoron> crashed randomly
[20:03:11 CET] <paule32> gnome was a little instable, but mint is perfect
[20:03:33 CET] <paule32> then is something wrong i ibus?
[20:03:57 CET] <TheXzoron> I have alsa working the way I want without it and recording works in obs from both sources
[20:04:06 CET] <paule32> you can reset audio with "alsainit"
[20:04:12 CET] <TheXzoron> I'm just trying to make a recording script
[20:04:56 CET] <paule32> you can install "arecord/(a)play" - sorry, i haven't worked with alsa in a very long time
[20:04:59 CET] <Filarius> ffmpeg -f alsa -ac 1 -ar 44100 -i webcamrate -f alsa -ac 2 -ar 48000 -i looprecvol out.mka - is it working ?
[20:05:31 CET] <Filarius> i'm not sure how to handle more than one custom input
[20:05:46 CET] <paule32> you can have many
[20:05:53 CET] <paule32> 2^32-1
[20:05:56 CET] <paule32> :-)
[20:06:01 CET] <Filarius> not about "can" but about "how"
[20:06:08 CET] <TheXzoron> meta-l /window bare
[20:06:14 CET] <TheXzoron> whoops
[20:06:20 CET] <Filarius> does it need "-f" every time or not
[20:06:21 CET] <paule32> input1 = -i source1
[20:06:30 CET] <paule32> input2 = -i source2
[20:06:33 CET] <paule32> ..
[20:06:45 CET] <paule32> -f is format
[20:07:11 CET] <Filarius> i know, its not an answer )
[20:07:23 CET] <paule32> $ ../ffmpeg/ffmpeg -f x11grab -i :0.0+100,200 -framerate 5 -f flv rtmp://192.168.178.80:8585/live 2> /dev/null
[20:07:29 CET] <TheXzoron> Filarius: yes, that does not get the webcam and only gets the desktop
[20:08:40 CET] <TheXzoron> I don't know why it doesn't error or something
[20:08:43 CET] <paule32> you have to give over the /dev/devicenode
[20:08:50 CET] <TheXzoron> because it completely ignores the first source
[20:09:44 CET] <paule32> ok, so far
[20:10:01 CET] <paule32> how can i scale the video output?
[20:10:34 CET] <paule32> but before, why this error(s): https://paste.fedoraproject.org/465596/85394114/
[20:17:36 CET] <paule32> ah
[20:17:48 CET] <paule32> where can i download asf codec driver?
[20:20:48 CET] <bencoh> ?
[20:26:24 CET] <Trump_2016> How do you reduce the key frame interval with mpeg2video?
[20:26:46 CET] <Trump_2016> paule32, -s XxY sets the video size in the target
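The mpeg2video question above goes unanswered in the log; as with other ffmpeg encoders, -g sets the GOP size, i.e. the maximum keyframe interval in frames. A sketch with illustrative values:

```shell
# Force a keyframe at least every 15 frames (at 30 fps, one every 0.5 s).
# The testsrc2 source stands in for a real input file.
ffmpeg -f lavfi -i testsrc2=duration=1:rate=30 \
       -c:v mpeg2video -g 15 -y out.mpg
```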
[20:26:56 CET] <paule32> thx
[20:27:00 CET] <paule32> i need a codec
[20:27:04 CET] <paule32> for vlc
[20:28:38 CET] <paule32> http://www.free-codecs.com/download/voxware_metasound_audio_codec.htm
[20:29:03 CET] <paule32> here i can find only window stuff
[21:22:58 CET] <pipo1> where can i find ffmpeglauncher ?
[21:24:51 CET] <pipo1> commented here https://www18.atwiki.jp/live2ch/pages/419.html
[21:26:19 CET] <BtbN> on some japanese page I'd guess.
[21:27:24 CET] <pipo1> yes but where exactly
[21:28:34 CET] <JEEB> you could have used google translate
[21:28:39 CET] <JEEB> https://www18.atwiki.jp/live2ch/pages/419.html#id_ef55d105
[21:29:29 CET] <JEEB> it then links to a nico community which contains the download links :P
[21:30:11 CET] <JEEB> do note that this piece of software has nothing to do with FFmpeg or the ffmpeg command line tool, it's a thing someone completely unrelated made
[21:31:07 CET] <pipo1> ok
[21:41:47 CET] <pipo1> why doesn't ffmpeg stream to older versions of icecast ?
[21:54:48 CET] <steve__> Hi, I just purchased a Garmin DashCam that encodes GPS inside the MP4 files it outputs. I believe its being stored using SEI messages. Does anyone know if/how I can use FFMPEG to export that data?
[22:03:27 CET] <paule32> i can't find codec asf for vlc linux
[22:03:54 CET] <paule32> i run into error on vlc client side: undf not supported
[23:50:59 CET] <deweydb> hey guys
[23:57:09 CET] <deweydb> i've got a file that is in MTS format. usually my scripts use ffprobe -of json -show_streams ... to pull out metadata, most importantly the number of frames in the video. i get this from the 'nb_frames' key in the video stream, but this MTS file doesn't have that populated. Is there some other way of getting the precise number of frames, maybe from some sort of magical multiplication of ts and something else? this is the stream in question's metadata: http://pastebin.com/BRPDDEZi
[23:57:57 CET] <deweydb> another stream (any non MTS file i have encountered looks like this, i.e. it has nb_frames): http://pastebin.com/ZrBhbygJ
[23:58:25 CET] <c_14> Does your command have -count_frames ?
[00:00:00 CET] --- Mon Oct 31 2016