[Ffmpeg-devel-irc] ffmpeg.log.20160111

burek burek021 at gmail.com
Tue Jan 12 02:05:01 CET 2016

[00:08:39 CET] <thetrueavatar> Ok I'm leaving you. Thanks for all the time dedicated to supporting me and sorry if sometimes I have been a bit annoying. Have a good night
[00:23:37 CET] <waressearcher2> thetrueavatar: see you soon
[01:40:25 CET] <iRy> Is there a way to cache the input before encoding? System Linux
[01:40:52 CET] <c_14> copy it onto a ramfs?
[01:41:41 CET] <iRy> Kind of. VLC has an option called :network-caching=[time in ms]
[01:41:59 CET] <c_14> What does vlc have to do with anything?
[01:42:03 CET] <iRy> Looking for similar with ffmpeg
[01:42:16 CET] <iRy> Just using it as reference
[01:42:25 CET] <c_14> Why do you want it?
[01:43:30 CET] <iRy> I've got a network stream which is a little unreliable, some missing frames etc. I would like to cache it before
[01:44:04 CET] <c_14> Depending on the stream type it might have a caching/buffer option
[01:44:23 CET] <iRy> It's a HLd
[01:44:27 CET] <iRy> HLS
[01:44:30 CET] <iRy> Stream
[01:44:38 CET] <iRy> Sorry cell phone
[01:46:10 CET] <c_14> There's a timeout/reconnect_at_eof option
[01:46:17 CET] <c_14> Other than that, nothing really.
[01:47:49 CET] <iRy> I'll give it a try
[01:48:27 CET] <c_14> You'll probably want reconnect_streamed as well
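Those input options might be combined along these lines; the URL and output name are placeholders, and availability of the reconnect options depends on the ffmpeg build:

```shell
# Sketch: pull a flaky HLS/HTTP stream with the reconnect options above.
# reconnect_streamed retries even on streamed (non-seekable) input;
# timeout for the HTTP protocol is given in microseconds.
ffmpeg -reconnect 1 -reconnect_streamed 1 -reconnect_at_eof 1 \
       -timeout 5000000 \
       -i "http://example.com/stream.m3u8" \
       -c copy output.ts
```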
[01:50:17 CET] <iRy> Well I'm not losing the connection, sometimes it's just laggy because of one or two missing frames
[01:51:12 CET] <iRy> I'll be at the computer in about 15min. I could post the log
[01:51:55 CET] <c_14> sure. I mean you could try using -c copy to output to disk and then use that as input, but
[01:52:37 CET] <iRy> Was thinking same thing but I would like to avoid that solution
[01:53:38 CET] <Logicgate> llogan you around?
[01:55:38 CET] <Logicgate> Can one do -t 6s
[01:55:47 CET] <Logicgate> instead of -t 00:00:06?
[01:55:54 CET] <c_14> drop the s, and yes
[01:55:59 CET] <c_14> just -t 6
[01:56:35 CET] <Logicgate> ok thanks
[03:40:30 CET] <sagerdearia> Hi all. I'm working on setting up some video surveillance stream administration to work across an open public mesh network. I am researching to determine if it is possible to convert RTSP streams from ip cams to appear on a web page.
[03:40:45 CET] <waressearcher2> sagerdearia: hello and welcome
[03:40:49 CET] <sagerdearia> Thanks :)
[03:41:03 CET] <sagerdearia> Is it possible to use ffmpeg to produce an rtp stream? and if so, how does that work exactly?
[03:41:22 CET] <sagerdearia> I'm currently playing around with this: ffmpeg -i rtsp:// -vcodec copy -acodec copy
[03:55:53 CET] <sagerdearia> Alright, this is doing something, but I have no idea what: ffmpeg -i rtsp:// -vcodec copy -acodec copy -y -f rtp rtp://
[03:56:16 CET] <sagerdearia> After running that, I checked `nmap -p 7000` but it is still closed and not open
[03:57:26 CET] <tdr> you could find that from netstat output too
[03:59:15 CET] <sagerdearia> tdr, true: `netstat -a|grep 7000` shows nothing also
[03:59:40 CET] <sagerdearia> What does the "-f rtp rtp://" actually do?
[03:59:56 CET] <sagerdearia> Must I have some kind of server running on port 7000 that is listening for connections before running that ffmpeg command?
[04:00:44 CET] <sagerdearia> I think what I am trying to do is run ffmpeg so that in a web page running on local webserver, I can try to display a live video stream that is sourced from the rtsp stream from the ip cam
[04:00:54 CET] <sagerdearia> I'm not sure if that will work
[04:01:12 CET] <tdr> try looking for a howto guide for your setup, may be easier than guessing
[04:03:03 CET] <jbg> hello, I get no audio when I play local video files with ffplay. I get the following error: No more combinations to try, audio open failed. Installing libsdl2 didn't resolve my issue. How can I get audio to play?
[07:14:39 CET] <Renari> I have a webm encoding command like so: https://gist.github.com/Renari/91574d81defdee2d19c8
[07:15:08 CET] <Renari> Now I'm trying to speed up the video output and read that this can be done by manipulating the keyframes.
[07:15:30 CET] <Renari> However in my command this isn't working and I believe it's because I have two -vf flags (thus ignoring the first one).
[07:16:10 CET] <Renari> How do I pass multiple filters with vf? Just separate them with commas?
[07:17:11 CET] <mark4o> yes
[07:19:21 CET] <Renari> Alright, thanks everywhere I checked about video filters was just showing examples of single filters.
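The comma syntax looks like this in practice; the filenames are placeholders, and setpts=0.5*PTS is one common way to double playback speed (audio is dropped here and would need atempo to match):

```shell
# Two filters chained in a single -vf, applied left to right:
ffmpeg -i input.webm -vf "scale=640:-2,setpts=0.5*PTS" -an output.webm
```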
[07:21:56 CET] <k_sze> Weird. A few days ago I ran `ffmpeg` on a .nut file without specifying -threads, and /proc/<pid>/status shows there were 50 threads
[07:22:05 CET] <k_sze> today I run the same thing and I get only 42 threads.
[07:22:30 CET] <k_sze> (and I have not added or removed CPUs from the system)
[07:24:04 CET] <mark4o> Renari: see https://trac.ffmpeg.org/wiki/FilteringGuide for some more complex examples
[07:25:28 CET] <k_sze> And I don't get why the base number of threads seems to be 24.
[07:25:56 CET] <Renari> Ah this is separating them within the string with a comma. I separated the strings with a comma and that worked.
[07:33:37 CET] <Logicgate> hey guys
[07:33:44 CET] <Logicgate> -acodec copy -c:v libx264 -movflags +faststart -crf 18 scale=720:-2:flags=lanczos,format=yuv420p when doing this I'm getting an error
[07:33:51 CET] <Logicgate> Unable to find a suitable output format for 'scale=720:-2:flags=lanczos,format=yuv420p'
[07:34:19 CET] <mark4o> Logicgate: you need -vf before scale=...
[07:34:41 CET] <Logicgate> oh my god lol
[07:34:42 CET] <Logicgate> thanks
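With -vf added, the command above would look something like this (input and output names are placeholders):

```shell
# scale/format go in a -vf filter chain, not as bare output arguments.
ffmpeg -i input.mp4 -acodec copy -c:v libx264 -crf 18 \
       -vf "scale=720:-2:flags=lanczos,format=yuv420p" \
       -movflags +faststart output.mp4
```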
[08:12:40 CET] <k_sze> Similarly weird on Windows: ffmpeg with no -threads argument: total 16 threads according to Task Manager
[08:12:53 CET] <k_sze> ffmpeg with -threads 4: total 15 threads according to Task Manager.
[08:12:58 CET] <k_sze> And I have 4 real cores.
[08:22:44 CET] <odinsbane> k_sze: how much is it working?
[08:28:33 CET] <odinsbane> Is it swamping a bunch of cores, or just using multiple threads.
[09:17:56 CET] <Logicgate> My god
[09:23:58 CET] <waressearcher2> Logicgate: what?
[10:29:46 CET] <k_sze> odinsbane: it *does* look like it uses all 4 cores in any case.
[10:30:20 CET] <k_sze> just a bit strange that ffmpeg seems to automatically pick 5 threads when I have 4 real cores.
[10:31:58 CET] <furq> it's not that unusual
[10:32:14 CET] <furq> e.g. x264 with -threads auto uses (1.5 * logical cores) threads
[10:32:22 CET] <k_sze> I only have one FFV1 level 3 stream in a .nut file.
[12:18:15 CET] <k_sze> Does ffmpeg have built-in filter to colormap gray data (e.g. gray16le) to color?
[12:18:48 CET] <k_sze> I have these gray16le videos that I would like to convert and make them streamable to web browsers.
[12:19:06 CET] <sagerdearia> I'm back from previous discussion, for anyone that was around.
[12:19:29 CET] <sagerdearia> This is doing something, but I have no idea what: ffmpeg -i rtsp:// -vcodec copy -acodec copy -y -f rtp rtp://
[12:19:40 CET] <sagerdearia> Must I have some kind of server running on port 7000 that is listening for connections before running that ffmpeg command?
[12:20:21 CET] <sagerdearia> What I am trying to do is run ffmpeg so that in a web page running on local webserver, I can try to display a live video stream that is sourced from the rtsp stream from the ip cam
[12:20:55 CET] <bencoh> this is not what you're looking for :)
[12:21:08 CET] <furq> sagerdearia: yes you must and also web browsers can't play rtp or rtsp
[12:21:26 CET] <furq> you probably want HLS or DASH
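A minimal sketch of the HLS route: transcode the RTSP feed and write a playlist plus segments somewhere the web server can serve them (the URL, paths, and segment settings are all placeholders):

```shell
# Pull the camera feed over TCP, encode to H.264/AAC, emit HLS
# segments into the web server's document root.
ffmpeg -rtsp_transport tcp -i "rtsp://camera.local/stream" \
       -c:v libx264 -preset veryfast -c:a aac \
       -f hls -hls_time 4 -hls_list_size 5 -hls_flags delete_segments \
       /var/www/html/live/stream.m3u8
```

The page then points a JS player (or Safari's native HLS support) at the .m3u8.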
[13:50:48 CET] <thebombzen> I noticed that FFmpeg uses the API of libquvi 0.4, which is the one in the Debian repos, and it's incompatible with the latest git (2 years ago) libquvi. Would that be something we'd switch to, even though it'd break those builds?
[14:06:14 CET] <BtbN> Isn't libquvi dead anyway?
[14:12:25 CET] <JEEB> pretty much
[14:12:31 CET] <JEEB> the AGPL version never went much anywhere
[14:12:42 CET] <JEEB> youtube-dl pretty much has been my choice for quite a while now
[14:12:57 CET] <JEEB> (despite its name it supports pretty much all kinds of services)
[14:19:32 CET] <bencoh> :(
[14:49:06 CET] <sweb> i convert mp3 to ogg but converted file not seekable ffmpeg -i '/home/sweb/www/mymusss/out/Jenny Lewis with The Watson Twins/2006 - Rabbit Fur Coat - 4 - Happy.mp3' -codec:a libvorbis -qscale:a 4 '/home/sweb/www/mymusss/conv/2006 - Rabbit Fur Coat - 4 - Happy.ogg'
[14:49:11 CET] <sweb> http://paste.ubuntu.com/14469116/
[15:27:53 CET] <hero_biz> hi guys...
[15:28:35 CET] <hero_biz> guys, I wonder if anyone has ever encoded a video suitable for playing on smartphones(around 5" screen).
[15:29:11 CET] <hero_biz> I'd like to know what settings are better for such encodes.
[15:30:16 CET] <hero_biz> I always use semi-hq settings, so I'm not sure what setting is good for easy play on such small screen.
[15:32:50 CET] <DHE> the screen size is of minimal consequence. your bigger issues are hw decoder capabilities and bitrates. find out what the smartphone you're targeting can do and encode to those specs. 5" modern phones can probably do 720p h264 under a Main profile
[15:34:04 CET] <DHE> though 720p might sound overkill, this is where you'll want to actually check it yourself. you might think it's a high resolution but people hold phones less than 2 feet from their faces
[15:35:23 CET] <bencoh> I'd rather show a well-encoded 480p than a bit-starved 720p, though
[15:35:35 CET] <bencoh> but that's up to you (or your customer feedback)
[15:36:41 CET] <hero_biz> but I think size is a real concern too.
[15:37:08 CET] <hero_biz> because videos for smart phones are the ones that you want to send through apps normally.
[15:37:18 CET] <hero_biz> so big videos will quickly be a problem.
[15:37:43 CET] <DHE> I just double-checked my Nexus 5's specs. It has a 1080p screen. So it can PROBABLY do 1080p decoding at 30fps.
[15:38:06 CET] <hero_biz> DHE, it can, because I have tested.
[15:38:15 CET] <hero_biz> but those videos will be too big...
[15:38:29 CET] <DHE> what kind of bitrates are you looking at? 3 to 4 megabit h264 is actually okay for low-to-medium action 720p
[15:38:55 CET] <DHE> for, say, a football or hockey game you might want higher
[15:39:06 CET] <hero_biz> lets assume someone want to share it through apps, like fb, telegram,...
[15:39:37 CET] <hero_biz> I think those rates will be too high for these reasons.
[15:40:13 CET] <hero_biz> I think maybe something like webrip is more suitable, isn't it?
[15:49:51 CET] <furq> hero_biz: fwiw youtube's 720p videos are usually about 2.5mbit
[15:50:06 CET] <furq> i'm pretty sure they run heavy denoising filters on everything though
[15:50:54 CET] <furq> if you're concerned about users' bandwidth then you'll probably want to use adaptive bitrate anyway
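As a rough, illustrative starting point for a bandwidth-capped 720p encode in the range furq mentions (the numbers are not a recommendation, just an example):

```shell
# Cap the video around 2.5 Mbit/s with VBV constraints, Main profile
# for broad hardware-decoder compatibility.
ffmpeg -i input.mp4 -vf scale=-2:720 \
       -c:v libx264 -profile:v main -b:v 2500k -maxrate 2500k -bufsize 5000k \
       -c:a aac -b:a 128k output.mp4
```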
[15:53:23 CET] <DHE> I am a bit confused. you're uploading to facebook, don't they do their own transcoding or other video prep anyway?
[15:53:24 CET] <hero_biz> i think I go around crf 25, 320p, maybe somedenoise filter
[15:53:47 CET] <hero_biz> DHE, at least youtube does that
[15:53:53 CET] <hero_biz> fidelity adaptation
[15:54:25 CET] <DHE> so facebook just serves up whatever you give it?
[15:54:26 CET] <hero_biz> so people with different bandwidth all could use resulting video suitable for them
[15:54:42 CET] <hero_biz> hm...I think facebook encodes too,not sure though
[15:56:25 CET] <furq> is this for livestreaming
[16:00:22 CET] <_julian> hi
[17:49:38 CET] <kynlem> hey. what would be a good lossless audio codec to use inside a mp4 container?
[17:49:48 CET] <c_14> alac
[17:50:24 CET] <c_14> Not sure there are many others you can even put in mp4
[17:51:23 CET] <c_14> You'll have to use -f mov though. ffmpeg won't find the correct tag if you use -f mp4
[17:52:22 CET] <kynlem> does using -f mov have any other implications?
[17:53:48 CET] <c_14> There are minor differences, but it should still be playable.
[17:56:05 CET] <furq> it supports ALS and SLS but those are probably even less well-supported than alac
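The ALAC-in-mov route c_14 describes might look like this (the input name is a placeholder):

```shell
# ALAC needs the mov muxer; with -f mp4 ffmpeg may not find the codec tag.
ffmpeg -i input.wav -c:a alac -f mov output.m4a
```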
[17:56:32 CET] <c_14> Does ffmpeg even have encoders for those?
[17:56:37 CET] <kynlem> i see. well, what i need is not lossless encoding per se
[17:57:06 CET] <kynlem> i am just splitting a huge video file into many smaller ones (easier to deal with)
[17:57:21 CET] <c_14> use -c copy?
[17:57:26 CET] <c_14> Should be exact enough with audio
[17:57:28 CET] <c_14> eh -c:a copy
[17:57:52 CET] <kynlem> yeah, did that. and doing the split at i-frames, so everything's cool with the video
[17:58:26 CET] <bencoh> but when merging them back it introduces some a/v desync?
[17:58:43 CET] <kynlem> no, i'm just getting a popping sound at the point where the files meet in adobe premiere
[17:59:03 CET] <kynlem> tried alac (worked as you said), but still popping sound
[17:59:53 CET] <kynlem> i wonder if it has something to do with premiere. this should never be happening when i reencode the audio losslessly when splitting, right?
[18:00:54 CET] <c_14> Have you tried concatting the audio streams with ffmpeg to see if it happens there as well?
[18:04:06 CET] <kynlem> yeah, just that. no popping sound. does ffmpeg perform any kind of transition automatically when concat'ing?
[18:04:16 CET] <c_14> no
[18:04:53 CET] <kynlem> i see. is it safe to cut aac stream at any point in general?
[18:05:13 CET] <kynlem> or does it have a concept similar to what i-frame is for video streams?
[18:05:43 CET] <c_14> If you're copying the audio stream you're cutting at audio frames anyway, so that's fine. If you're reencoding to alac you're reencoding so that's fine as well.
[18:08:15 CET] <kynlem> yeah, what i meant is: can i cut it at *any frame* without reencoding?
[18:08:33 CET] <c_14> yeah
[18:08:53 CET] <c_14> That's what I was trying to say.
[18:08:55 CET] <kynlem> thanks, c_14 :)
[18:09:37 CET] <bencoh> as long as you dont cut in the middle of an audio frame
[18:10:06 CET] <c_14> ffmpeg won't let you do that
[18:10:11 CET] <bencoh> oh and, actually you might have issues when re-encoding to an audio codec with a different frame size
[18:10:41 CET] <bencoh> since avcodec would have to add some blank samples for the last frame
[18:10:54 CET] <kynlem> ffmpeg -ss 120.12 -t 60.06 -i VideoForCutting.mp4 -acodec copy -vcodec copy ~/chunk-02.mp4
[18:11:39 CET] <kynlem> that's what i'm doing basically. it's 1800 (video?) frames per file.
[18:12:10 CET] <kynlem> (each 8th frame is an i-frame in case of my camera, so i'm safe on the video side.)
[18:13:24 CET] <kynlem> bencoh: is there a way to secure myself from that happening?
[18:13:52 CET] <bencoh> use -c:a copy as c_14 told you :)
[18:14:06 CET] <kynlem> yeah, i'm doing that
[18:14:28 CET] <kynlem> as long as ffmpeg doesn't exit with an error, i can assume the audio frame didn't get cut?
[18:14:48 CET] <bencoh> yeah
[18:15:17 CET] <kynlem> kudos.
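As an aside, the segment muxer can do this kind of copy-splitting in one pass, cutting only at keyframes (duration and filenames are placeholders):

```shell
# Split losslessly into ~60 s chunks, restarting timestamps per file.
ffmpeg -i VideoForCutting.mp4 -c copy -f segment \
       -segment_time 60 -reset_timestamps 1 chunk-%02d.mp4
```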
[18:15:27 CET] <thebombzen> hey, I seem to be having an issue. I'm getting mjpeg from a webcam and I want to use the mjpeg2jpeg bitstream filter so I can put jpeg frames in an image2pipe. but the filter complains that the input isn't mjpeg.
[18:16:00 CET] <thebombzen> ffmpeg -f v4l2 -input_format mjpeg -video_size 1280x720 -i /dev/video0 -c copy -f mjpeg -bsf mjpeg2jpeg -y /dev/null
[18:16:23 CET] <thebombzen> but it complains that: [NULL @ 0x222f280] input is not MJPEG/AVI1
[18:16:31 CET] <thebombzen> which is wrong. because it's mjpeg. any ideas?
[18:18:44 CET] <BtbN> well, which format is it instead?
[18:20:18 CET] <thebombzen> it's mjpeg. that's what's weird.
[18:20:50 CET] <relaxed> thebombzen: pastebin.com the command and console output
[18:21:01 CET] <thebombzen> http://pastebin.com/Tyth5MLp
[18:21:03 CET] <thebombzen> here it is
[18:21:40 CET] <thebombzen> and then a bunch more error messages like those that I didn't copy in
[18:21:50 CET] <thebombzen> by like those I mean Identical to those
[18:23:10 CET] <relaxed> lose -f mjpeg and try using img-%04d.jpg as the output
[18:24:09 CET] <thebombzen> nope. let me pastebin it
[18:24:31 CET] <thebombzen> http://pastebin.com/pkrRdY0e
[18:29:22 CET] <relaxed> what if you omit the -bsf ?
[18:31:13 CET] <thebombzen> it works: http://pastebin.com/wn5HWYe8
[18:31:41 CET] <thebombzen> however, the jpegs are corrupted.
[18:31:51 CET] <thebombzen> display.im6: Huffman table 0x00 was not defined `image0001.jpg' @ error/jpeg.c/JPEGErrorHandler/316.
[18:32:01 CET] <relaxed> what are you trying to pipe it to?
[18:32:41 CET] <thebombzen> I'm trying to pipe it to a Java program that reads the images from stdin and displays them to the screen. simple Java webcam viewer
[18:33:09 CET] <thebombzen> but it shouldn't matter. it should work no matter why I want to do that
[18:33:28 CET] <thebombzen> of course, reencoding it with -c mjpeg fixes it. but the idea is not to waste cpu
[18:45:48 CET] <thebombz_> Also, using -f image2pipe did not work
[20:20:14 CET] <FeeL_LiKe_GoD> Is there any way to obtain real CBR on transport stream via ffmpeg? As far as I tried it seems almost impossible
[20:23:11 CET] <DHE> like mpegts? there's a -muxrate option which will pad out the stream with NULLs
[20:23:57 CET] <FeeL_LiKe_GoD> Yeah like mpegts. I tried this parameter but the TS bitrate still varies
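For what it's worth, -muxrate only yields true CBR if the encoder itself is rate-constrained and the muxrate sits above the combined peak bitrate, so padding never underflows. A sketch (all numbers illustrative):

```shell
# Constrain the video encoder with VBV, then pad the transport
# stream out to a constant 6 Mbit/s with NULL packets.
ffmpeg -i input.mp4 \
       -c:v libx264 -b:v 4000k -maxrate 4000k -bufsize 8000k \
       -c:a aac -b:a 128k \
       -muxrate 6000k -f mpegts output.ts
```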
[20:56:53 CET] <guest2015> hello
[20:57:40 CET] <guest2015> I'm getting a 403 error when trying to download m3u8 playlist file
[20:58:06 CET] <guest2015> 403 Forbidden error message, what can I do to download that video?
[20:59:27 CET] <DHE> if it's your web server, try checking the error.log
[20:59:44 CET] <guest2015> no not my webserver
[21:00:04 CET] <DHE> then you're screwed
[21:00:24 CET] <furq> do you get a 403 outside of ffmpeg
[21:00:37 CET] <guest2015> yes in Safari as well
[21:00:46 CET] <furq> well then yeah, what DHE said
[21:02:12 CET] <guest2015> but how does the video player on the website "authorize" to play the video?
[21:02:20 CET] <guest2015> its included via Javascript
[21:02:45 CET] <guest2015> any chances that I can find a key in the source code of the website?
[21:03:12 CET] <furq> does safari have builtin developer tools
[21:03:16 CET] <DHE> that's up to the player. could be anything from cookies to IP address association that only lasts 10 seconds to a super-secret handshake that only the cool kids know
[21:03:26 CET] <furq> in chrome or firefox you could check the request in the developer tools network tab
[21:04:28 CET] <guest2015> yes Safari does as well and I have a m3u8 URL but sometimes I can see the video with this URL in Safari and sometimes it results in 403 error
[21:05:08 CET] <furq> check the request headers for a successful get of the m3u8
[21:05:27 CET] <furq> it might well be more complicated than that though
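One way to compare is to replay the successful request with its headers, then hand the same headers to ffmpeg (all URLs and header values here are placeholders):

```shell
# Reproduce the browser's request with its Referer/User-Agent:
curl -v -H "Referer: http://example.com/player" \
     -A "Mozilla/5.0" "http://example.com/playlist.m3u8"

# ffmpeg can send custom headers too (note the trailing CRLF):
ffmpeg -headers $'Referer: http://example.com/player\r\n' \
       -i "http://example.com/playlist.m3u8" -c copy out.mp4
```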
[22:15:21 CET] <podman> i'm surprised there are so few good tutorials for dash :(
[22:18:09 CET] <Betablocker> dash is kinda new, but it will grow in the future
[22:18:56 CET] <Betablocker> there are only a few players supporting dash, and the support is often experimental or at an early stage
[22:19:39 CET] <JEEB> most DASH implementations suck... hard
[22:19:46 CET] <JEEB> hopefully it will get better
[22:20:13 CET] <Betablocker> no need to get in panic - it will grow
[22:20:34 CET] <JEEB> I'm not panicking
[22:20:46 CET] <Betablocker> just kidding :)
[22:21:35 CET] <Betablocker> take a look at the clappr player dash plugin: https://github.com/clappr/dash-shaka-playback
[22:22:04 CET] <Betablocker> sounds interesting
[22:22:09 CET] <JEEB> I just know I have some needs, like possibly having audio and video not start at the exact same point (like there being 0.9 seconds of audio before the first video picture, which often happens with live streams). and you can guess how well the browser crap (except for Edge's and I think Google's Android's parser) handles it
[22:26:18 CET] <podman> I'm just having a difficult time even creating and properly segmenting files for dash
[22:26:41 CET] <podman> seems like some implementations don't even like muxed content, like dash.js
[22:35:24 CET] <JEEB> do note that how well dash.js works seems to heavily depend on the browser as well
[22:35:49 CET] <JEEB> for example under latest chrome beta I can't get that 0.9 second of audio before first video PTS sample to work at all
[22:35:53 CET] <JEEB> (a VoD sample)
[22:36:14 CET] <JEEB> meanwhile if I use the same damn thing on Firefox Dev Ed (aka aurora), that works OK except doesn't let me seek into the first 0.9 seconds :P
[22:36:50 CET] <JEEB> in browsers funny enough MS Edge is the best so far in DASH support
[22:36:59 CET] <JEEB> for android there is a google library to support it, which is good
[22:37:14 CET] <podman> have you tried other players? like google's?
[22:37:16 CET] <JEEB> everything else that is not VLC or so just sucks
[22:37:18 CET] <JEEB> yes
[22:37:20 CET] <TD-Linux> JEEB, well Edge has as native DASH player, are you using that or MSE?
[22:37:30 CET] <podman> https://github.com/google/shaka-player
[22:37:31 CET] <JEEB> TD-Linux: of course the native one
[22:37:51 CET] <JEEB> podman: that one sucks dongs, is pretty much chrome-specific and only works with content that is done in a very specific way
[22:37:57 CET] <JEEB> it's worse off than their android lib
[22:38:47 CET] <JEEB> I'm actually more hopeful of the MSE HLS parsers than the MSE DASH implementations
[22:39:14 CET] <podman> JEEB: seems fine to me. works in Chrome, Safari, Firefox... Haven't tested in IE yet, but would be surprised if it had issues
[22:39:29 CET] <JEEB> podman: then you just haven't tried anything that it doesn't like
[22:39:48 CET] <podman> why would i feed it something it doesn't like if i control the source?
[22:40:01 CET] <JEEB> because sometimes what you want to do is what it doesn't like?
[22:40:11 CET] <podman> ok, well i don't?
[22:40:17 CET] <JEEB> well then sure fine
[22:40:25 CET] <JEEB> but then again in that case it's not any better than dash.js IIRC
[22:40:42 CET] <JEEB> although they're all trainwrecks at this point :P
[22:40:45 CET] <JEEB> if they work for you, great
[22:40:50 CET] <TD-Linux> some of this is DASH's fault
[22:41:06 CET] <podman> dash.js has some random restrictions based on the profiles it supports
[22:41:08 CET] <TD-Linux> it's exceedingly flexible
[22:41:30 CET] <podman> like it doesn't support muxed audio/video as far as i can tell (at least not their demo)
[22:43:18 CET] <podman> "Multiplexed representations are intentionally not supported, as they are not compliant with the DASH-AVC/264 guidelines"
[22:43:20 CET] <podman> :(
[22:43:43 CET] <TD-Linux> "guidelines"
[22:43:57 CET] <JEEB> uhh
[22:44:10 CET] <JEEB> what do you mean multiplexed? as in muxed into one source file?
[22:44:25 CET] <JEEB> because most DASH examples have had tracks in containers ;)
[22:44:51 CET] <podman> i'm not using the dash-avc/264 profile though :\
[22:45:04 CET] <JEEB> I don't give a fuck how it's called :/
[22:45:10 CET] <JEEB> just tell me what it actually means
[22:45:33 CET] <TD-Linux> you're not a professional unless you know all of the specifications and acronyms
[22:45:40 CET] <JEEB> specs are OK
[22:45:41 CET] <podman> multiplexed = muxed
[22:45:44 CET] <JEEB> yes, I know
[22:45:59 CET] <podman> yes, it means that the audio and video are in a single container in a single file
[22:46:03 CET] <JEEB> ok
[22:46:07 CET] <JEEB> so what I guessed
[22:46:23 CET] <podman> right, it's what it means :P
[22:46:34 CET] <JEEB> no, muxed is just muxed
[22:46:38 CET] <JEEB> as in, not a raw bit stream
[22:46:47 CET] <podman> uh, that's not what that means
[22:47:20 CET] <JEEB> "I output a video track into a raw bit stream and then mux it into ISOBMFF"
[22:47:53 CET] <JEEB> I do understand the multi in multiplexing
[22:47:55 CET] <podman> muxed is short for multiplexed which means combining two signals into one signal
[22:48:14 CET] <podman> has nothing to do with raw
[22:48:16 CET] <JEEB> well you could think of the container information as another signal :P like timestmaps
[22:48:56 CET] <podman> you could, i guess
[22:49:00 CET] <JEEB> but yeah, as far as I can see muxing is putting something into a "container" and demuxing is taking something from a container
[22:49:13 CET] <podman> sort of
[22:49:13 CET] <JEEB> at least that's how it's used around most multimedia OSS circles
[22:49:33 CET] <podman> anyway, dash.js doesn't like that
[22:49:43 CET] <JEEB> yeah, kind of not surprising
[22:53:10 CET] <JEEB> since the idea seems to be to have f.ex. audio completely separate and then X different video streams
[22:53:10 CET] <podman> which adds an unneeded layer of complexity on top of everything
[22:53:10 CET] <JEEB> well you could just support ISOBMFF with fragments, but almost nobody does that :/
[22:53:10 CET] <TD-Linux> JEEB, well the idea is more "generated stream on the fly" so there are use cases where DASH makes sense but the other stuff isn't necessary
[22:53:11 CET] <JEEB> (also I love calling it that, lol - such an oversized callsign for the format)
[22:53:11 CET] <TD-Linux> probably the best thing to do for now is "whatever youtube does" because most browser implementations are geared basically just for that :/
[22:53:14 CET] <podman> yeah, pretty much. it was interesting reading firefox's discussion about supporting dash
[22:53:49 CET] <podman> so, it seems like that's probably the best bet for now. separate audio and video streams, eh?
[22:54:03 CET] <podman> I guess that'll save a few megs here and there for our customers
[22:54:14 CET] <JEEB> yeah, separately muxed into the sub/superset of ISOBMFF
[22:54:25 CET] <JEEB> which DASH uses
[23:17:53 CET] <podman> TD-Linux: there isn't a channel for DASH, is there?
[23:18:02 CET] <TD-Linux> nope
[23:18:11 CET] <podman> or a mailinglist?
[23:18:25 CET] <TD-Linux> don't know of any.
[23:18:37 CET] <podman> :\
[23:18:44 CET] <TD-Linux> for browser related stuff there is #media on irc.mozilla.org
[23:19:15 CET] <podman> i love how flash is "dead" and no one wants to use it anymore but there is really no viable alternative yet. clearly DASH is the successor but there is like zero community that I know of
[23:20:06 CET] <TD-Linux> yeah. I don't even know of any opensource working DASH streaming servers
[23:22:59 CET] <podman> which is silly because youtube and netflix and the browser makers have crowned dash the successor for video playback on the web
[23:26:42 CET] <Kalculus> I'm trying to make a stream like: ffmpeg -f dshow -i audio="Microphone" -c:a libmp3lame -f mpegts udp://    However, how can I password protect the stream?  From the manual it doesn't say udp:// can have a username & pass.  Recommendations?
[23:32:32 CET] <podman> TD-Linux: well, i got it working with the onDemand Profile and single segments. progress!
[23:32:48 CET] <podman> now i just want multiple segments and I'll be happy
[23:33:53 CET] <TD-Linux> podman, well to be precise, chrome crowned MSE, not dash :)
[23:34:03 CET] <podman> TD-Linux: that's true
[23:35:09 CET] <podman> let's see if I can get the live profile to work
[23:35:53 CET] <TD-Linux> Kalculus, that's just blasting unencrypted UDP packets. there's no way to protect the stream
[23:36:41 CET] <c_14> Besides throwing it over something like an ipsec tunnel or a vpn or something
[23:36:43 CET] <Kalculus> TD-Linux: is there a simple way to make a stream from my microphone that requires a username/password to connect to?
[23:37:58 CET] <TD-Linux> Kalculus, you can with icecast
[23:37:59 CET] <TD-Linux> http://www.icecast.org/docs/icecast-2.4.0/auth.html
[23:38:33 CET] <TD-Linux> it's a bit more complicated than what you're doing though. you need to both run icecast and a source client
[23:39:53 CET] <TD-Linux> the nice thing about icecast though is you can just play the stream in a browser.
[23:40:23 CET] <c_14> The easiest thing to do imo would be to forward it over ssh
[23:41:28 CET] <Kalculus> oh.  let me try Icecast
[23:45:56 CET] <Kalculus> For the command I posted above, it's just constantly broadcasting UDP packets all the time?  Is there a way I can make it more client/server, so it only sends the stream to a user that connected?
[23:46:26 CET] <podman> Interesting, the video is working but the audio is not
[23:46:59 CET] <c_14> Kalculus: icecast
[23:47:19 CET] <c_14> or maybe tcp with ?listen
[23:49:21 CET] <TD-Linux> Kalculus, yeah icecast does what you want. it's designed for internet radio and the like.
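A sketch of c_14's tcp-with-?listen idea from above: ffmpeg waits for a single TCP client instead of blasting UDP (the address is a placeholder; this gives single-client serving, not password protection):

```shell
# Sender: listen on port 7000 and start streaming when a client connects.
ffmpeg -f dshow -i audio="Microphone" -c:a libmp3lame \
       -f mpegts "tcp://0.0.0.0:7000?listen"

# Client side:
ffplay tcp://server.example.com:7000
```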
[00:00:00 CET] --- Tue Jan 12 2016

More information about the Ffmpeg-devel-irc mailing list