[Ffmpeg-devel-irc] ffmpeg.log.20160904

burek burek021 at gmail.com
Mon Sep 5 03:05:01 EEST 2016


[00:00:54 CEST] <durandal_1707> woodbeastz: mmfpeg?
[00:02:07 CEST] <woodbeastz> Oh man I mean FFMpeg
[00:02:16 CEST] <woodbeastz> sorry
[00:03:11 CEST] <woodbeastz> I have a remote Linux server with FFMpeg installed and I  have no clue how to set it up
[00:03:40 CEST] <woodbeastz> to like do live streams
[00:16:25 CEST] <DHE> you really are best off starting with a guide online. but you need something to serve the content, like twitch.tv or nginx
[00:16:34 CEST] <DHE> with the rtmp module
[01:39:33 CEST] <qroft> Good evening everyone. I am playing around with ffmpeg and some GIF files and got a problem now. Can i create a 5 seconds long MP4 file out of one GIF file that is only 2 seconds long?
[01:39:55 CEST] <c_14> ffmpeg -loop 1 -i gif -t 5 out.mp4
[01:39:58 CEST] <qroft> I would love the output file to contain the GIF file looped for the whole 5 seconds.
[01:40:54 CEST] <qroft> thanks c_14, but i get the "Option loop not found" error
[01:41:18 CEST] <c_14> How old is your ffmpeg?
[01:42:00 CEST] <qroft> My ffmpeg version is N-75841-g5911eeb
[01:43:54 CEST] <c_14> hmm, doesn't work here either
[01:43:57 CEST] <c_14> still in the docs though...
[01:45:17 CEST] <qroft> to be honest i could not even find the loop option in the docs. it mentions the deprecated "loop_input" and "loop_output" options.
[01:48:45 CEST] <c_14> use -ignore_loop 0 instead
[01:49:27 CEST] <c_14> That'll tell the gif demuxer to loop the input forever (in most cases)
[01:50:31 CEST] <qroft> that's it. thanks a lot c_14 !
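[Editor's note: the working command from the exchange above, as a sketch; input and output filenames are placeholders.]

```shell
# -ignore_loop 0 tells the gif demuxer to keep looping the input,
# and -t 5 cuts the output at 5 seconds.
# -pix_fmt yuv420p keeps the MP4 playable on common devices.
ffmpeg -ignore_loop 0 -i input.gif -t 5 -pix_fmt yuv420p output.mp4
```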
[03:38:32 CEST] <lindylex> How do I install ffmpeg on Debian testing?
[03:39:18 CEST] <lindylex> Nevermind
[04:43:13 CEST] <vans163> hey. anyone know how to receive input from unix socket?  I am using ffmpeg -i unix://mnt/ramdisk/unixsocket but it gives me error connection refused. I want ffmpeg to spawn the unix socket, then I want to pipe into it
[04:43:18 CEST] <vans163> *not pipe but send into it
[04:44:13 CEST] <DHE> you can specify "-listen 1" before the -i, according to this
[04:46:05 CEST] <vans163> DHE: ty working. and a Q about the input. I am sending it .ppm files from a camera
[04:46:20 CEST] <vans163> do I need to pass a certain -f so it will know its ppm?
[04:47:32 CEST] <DHE> there is autodetection, but -f can help or speed up the process. never tried ppm.
[04:48:23 CEST] <vans163> DHE: autodetect is default?
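[Editor's note: a sketch of the setup discussed above, assuming PPM frames are pushed into the socket; the socket path, frame rate, and output name are illustrative.]

```shell
# -listen 1 makes ffmpeg create the unix socket and wait for a writer.
# -f image2pipe forces the piped-image demuxer; PPM frames are then
# autodetected (an explicit -c:v ppm before -i can speed detection up).
ffmpeg -listen 1 -f image2pipe -framerate 25 -i unix:/mnt/ramdisk/unixsocket \
       -c:v libx264 -pix_fmt yuv420p output.mkv
```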
[07:34:27 CEST] <VamoMenem> some ideas about deinterlacing in a nvenc workflow?
[07:59:51 CEST] <Spring> is there a foolproof drawtext syntax that works?
[08:00:30 CEST] <Spring> Error initializing filter 'drawtext' with args 'text=blah:fontcolor=white:fontsize=20'
[08:00:53 CEST] <Spring> text is literally 'blah;
[08:01:05 CEST] <Spring> *blah
[08:09:12 CEST] <furq> Spring: pastebin the full output
[08:11:26 CEST] <Spring> furq, http://pastebin.com/VVgLJPSX
[08:14:56 CEST] <furq> you need to pass fontfile or build with --enable-fontconfig
[08:20:02 CEST] <Spring> furq, hmm, had assumed Zeranoe's build added it to ffplay
[08:20:56 CEST] <Spring> there's no Fonts directory system variable on Windows that I could find, so it looks like the path will have to be fixed
[08:33:56 CEST] <Spring> not so bad, can use %systemdrive%. Good-O. Tried using %t to display the timestamp but even escaped using %%{t} or %%t or \%t it returns an error.
[08:44:12 CEST] <Spring> %%{n} correctly draws the number of the input frame fwiw, but not t as per docs
[08:45:06 CEST] <Spring> it does say 'NAN if the input timestamp is unknown' but wouldn't it know from the trim command?
[09:09:51 CEST] <Spring> finally got it working text=\'%%{pts\:hms}\'
[09:10:07 CEST] <Spring> I'm amazed how long it took to find a working solution
[09:11:06 CEST] <Spring> probably as I wasn't looking further enough down the docs -_-
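[Editor's note: the working drawtext incantation found above, written for a POSIX shell; on Windows cmd the % must be doubled, hence the %%{pts\:hms} form in the log. The fontfile path is an assumption, needed when ffmpeg is built without fontconfig.]

```shell
# Draws the running timestamp (hours:minutes:seconds) on the video.
ffplay -i input.mp4 \
  -vf "drawtext=fontfile=/usr/share/fonts/TTF/DejaVuSans.ttf:text='%{pts\:hms}':fontcolor=white:fontsize=20"
```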
[10:29:51 CEST] <Spring> so adding the expanded title in the ffplay window title doesn't expand, which is a pity, would be better than adding it as drawtext tbh
[10:29:59 CEST] <Spring> *expanded timecode
[13:36:47 CEST] <roxlu> Hey, when I want to encode some audio that I receive from e.g. a mic. what's the best way to send the PCM data into the encoder? Do I collect enough frames (AVCodecContext.frame_size) and then feed it somehow? Or can I feed the frames I receive from the mic. directly into the encoder?
[14:44:49 CEST] <DHE> roxlu: you might want to look at some of the examples in the documentation. transcode_aac might be a good step for you
[14:45:27 CEST] <DHE> it's a bit involved, since it covers decoding an input, resampling it, buffering it to meet the frame_size requirement, and then encoding. but it follows fairly well
[19:42:29 CEST] <ricemuffinball> what is best audio codec for doing mono(people talking/radio show) (no music)
[20:12:49 CEST] <klaxa> ricemuffinball: you maybe want speex or opus, that's what software like teamspeak and mumble use
[20:13:12 CEST] <klaxa> speex is specifically tuned for human voice reproduction (afaik)
[20:13:47 CEST] <furq> speex has been replaced by opus
[20:13:48 CEST] <klaxa> opus is tuned for everything and therefore better than most competitions, but it is not very widely spread
[20:13:59 CEST] <klaxa> *competitors
[20:14:09 CEST] <klaxa> or is it widely spread by now?
[20:14:12 CEST] <furq> it's more widespread than speex
[20:14:18 CEST] <klaxa> oh?
[20:14:21 CEST] <furq> maybe not in dedicated voip applications
[20:14:29 CEST] <furq> but definitely in terms of general playback support
[20:14:40 CEST] <furq> all the major browsers except safari support it
[20:14:40 CEST] <klaxa> my favorite music player for android (deadbeef) doesn't support it :(
[20:15:05 CEST] <klaxa> but that's just the dev being lazy i guess
[20:15:28 CEST] <JEEB> even the android media framework should support opus now tho
[20:15:46 CEST] <furq> yeah chrome for android has supported it for a while
[20:15:51 CEST] <klaxa> it does, at least since 6 also in ogg containers
[20:16:01 CEST] <furq> and i'd hope that uses the native media stuff
[20:17:53 CEST] <furq> is deadbeef the one which dreams of being fb2k
[20:18:01 CEST] <furq> on *nix, at least
[20:19:02 CEST] <klaxa> there are some i think
[20:19:15 CEST] <klaxa> but deadbeef was interesting for me at one point because it plays nes chiptunes
[20:19:35 CEST] <furq> everything plays those nowadays though
[20:20:19 CEST] <furq> i'm not impressed unless it plays mdx
[20:28:47 CEST] <ricemuffinball> what about vorbis?  is vorbis useless now since everybody seems to be on opus hype
[20:29:17 CEST] <furq> opus is recommended over both vorbis and speex now
[20:29:42 CEST] <furq> vorbis will probably do a fine job unless you want really low bitrates
[20:29:42 CEST] <ricemuffinball> furq what about  aac and he-aac
[20:30:00 CEST] <furq> shrug
[20:30:15 CEST] <furq> i don't use them unless i'm muxing an mp4
[20:30:35 CEST] <ricemuffinball> mp4 doesn't support  opus?
[20:30:38 CEST] <furq> no
[20:30:42 CEST] <BtbN> even wma will "work". But you asked for best. and opus voice is the best.
[20:31:01 CEST] <furq> anything newer than mp3 will do pretty much fine on voice at 32-64kbps
[20:31:11 CEST] <furq> if you want lower than that then you'll probably want a codec which is optimised for it
[20:31:14 CEST] <ricemuffinball> what about 20kbps
[20:31:17 CEST] <furq> ^
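[Editor's note: a sketch of the recommendation above, mono speech with libopus at the asked-about 20 kbps; filenames are placeholders.]

```shell
# -ac 1 downmixes to mono; -application voip biases the Opus encoder
# toward speech, which suits talk/radio content at low bitrates.
ffmpeg -i talk.wav -ac 1 -c:a libopus -b:a 20k -application voip talk.opus
```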
[20:37:14 CEST] <ricemuffinball> which aac encoder is best
[20:37:30 CEST] <furq> do you mean in general or of the ones in ffmpeg
[20:37:43 CEST] <ricemuffinball> in general
[20:37:45 CEST] <furq> qaac
[20:38:15 CEST] <ricemuffinball> why doesn't ffmpeg have qaac or neroaac
[20:38:40 CEST] <furq> because you're too selfish to pay for it
[20:39:00 CEST] <ricemuffinball> they are free though
[20:40:36 CEST] <furq> i hope you're not a lawyer
[20:51:11 CEST] <spaam> when did ffmpeg move from -vcodec to -c:v ?
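[Editor's note: the question above goes unanswered in the log; for reference, both spellings still work and are equivalent, with -c:v being the per-stream form (-c:a for audio, etc.). The filenames here are placeholders.]

```shell
# These two commands do the same thing:
ffmpeg -i in.mp4 -vcodec libx264 out.mp4
ffmpeg -i in.mp4 -c:v libx264 out.mp4
```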
[20:52:10 CEST] <ricemuffinball> what happens if you convert 7.1 audio to 5.1
[20:54:30 CEST] <furq> ricemuffinball: depends how you do it
[20:54:44 CEST] <furq> -ac will downmix
[20:54:54 CEST] <furq> you can use -af pan to drop channels
[20:55:06 CEST] <ricemuffinball> what is default
[20:55:09 CEST] <ricemuffinball> if you don't use any switch
[20:56:31 CEST] <furq> it'll either keep the same layout or use -ac if it's not supported
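[Editor's note: sketches of the two 7.1-to-5.1 routes mentioned above; the pan gains are illustrative, not a canonical downmix matrix, and filenames are placeholders.]

```shell
# Route 1: -ac 6 lets ffmpeg's resampler do a standard downmix to 5.1.
ffmpeg -i in_71.mkv -ac 6 -c:a aac out_51.mkv

# Route 2: -af pan gives explicit per-channel control, e.g. folding the
# side channels into the fronts at half gain.
ffmpeg -i in_71.mkv \
  -af "pan=5.1|FL=FL+0.5*SL|FR=FR+0.5*SR|FC=FC|LFE=LFE|BL=BL|BR=BR" \
  -c:a aac out_51.mkv
```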
[21:32:57 CEST] <roxlu> When I encode audio using AAC, what's the easiest way to check if the encoded aac data is correct?
[22:18:25 CEST] <BtbN> listen to it
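[Editor's note: besides listening to it, a quick mechanical check in the spirit of the question above; the filename is a placeholder.]

```shell
# Decode the whole stream and discard the output; with -v error,
# anything printed indicates a real decoding problem.
ffmpeg -v error -i encoded.aac -f null -
```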
[22:21:30 CEST] <hetii> Hey :)
[22:22:00 CEST] <hetii> I try to dump my screen under ubuntu and use result file in my TV as a screencast.
[22:22:16 CEST] <hetii> but I have trouble with the mkv and h264 format
[22:22:52 CEST] <hetii> when I try play it on my tv via DLNA then I got that my file format is unsupported
[22:23:18 CEST] <hetii> but that's not true because other mkv files, e.g. from youtube, work fine.
[22:23:49 CEST] <hetii> so far I use this command: ffmpeg -f alsa -ac 2 -i pulse -f x11grab -r 30 -s 1920x1080 -i :0.0 -acodec pcm_s16le -vcodec libx264 -preset ultrafast -crf 30 -threads 2 /var/lib/minidlna/output.mkv
[22:24:56 CEST] <BtbN> try with aac
[22:25:31 CEST] <intracube> hetii: might be because of 4:4:4 colour format
[22:25:48 CEST] <hetii> ok how I can set it to 4:2:0 ?
[22:26:04 CEST] <intracube> -pix_fmt yuv420p
[22:26:12 CEST] <hetii> ok thx, will try it
[22:26:34 CEST] <intracube> ffmpeg tries to autodetect from the src. for screen cap it'll likely be 4:4:4
[22:26:59 CEST] <intracube> which isn't often supported on standalone devices
[22:27:38 CEST] <hetii> yep, this is it
[22:27:42 CEST] <hetii> now it works :)
[22:27:43 CEST] <intracube> :)
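[Editor's note: the capture command from above with both fixes applied, yuv420p for standalone-device compatibility and aac audio as suggested.]

```shell
ffmpeg -f alsa -ac 2 -i pulse -f x11grab -r 30 -s 1920x1080 -i :0.0 \
       -c:a aac -c:v libx264 -preset ultrafast -crf 30 -pix_fmt yuv420p \
       -threads 2 /var/lib/minidlna/output.mkv
```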
[22:29:20 CEST] <hetii> do you know maybe if there is some lib wrapper that I could use with python to write a server who will do this screencast directly over network without dumping it to file ?
[22:30:15 CEST] <hetii> n
[22:32:29 CEST] <BtbN> just stream with ffmpeg?
[22:32:30 CEST] <intracube> not sure about that. what about: https://trac.ffmpeg.org/wiki/StreamingGuide#Pointtopointstreaming
[22:34:07 CEST] <hetii> hmm, need to read more about the DLNA protocol, but if it uses RTP then it should work
[22:34:32 CEST] <BtbN> just setup an nginx-rtmp somewhere and stream to that
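[Editor's note: a hedged sketch of the nginx-rtmp suggestion, the same x11grab capture sent to an RTMP endpoint instead of a file; the server URL and stream key are placeholders.]

```shell
# Stream the desktop live to an nginx-rtmp (or similar) server.
ffmpeg -f x11grab -r 30 -s 1920x1080 -i :0.0 -f alsa -ac 2 -i pulse \
       -c:v libx264 -preset veryfast -pix_fmt yuv420p -c:a aac \
       -f flv rtmp://example.local/live/screencast
```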
[22:38:10 CEST] <hetii> ?
[22:38:33 CEST] <hetii> well my tv need somehow talk to dlna server first
[22:42:41 CEST] <BtbN> you want to stream directly to your TV? Without some PC on it?
[22:47:02 CEST] <hetii> I have a PC in my LAN and want to use it as a media center via DLNA to my TV
[22:47:55 CEST] <hetii> so instead using HDMI my plan is to use LAN
[22:51:38 CEST] <BtbN> That might be complicated or even impossible.
[22:52:44 CEST] <hetii> well ... http://realmike.org/blog/2011/02/09/live-desktop-streaming-via-dlna-on-gnulinux/
[22:53:50 CEST] <BtbN> I'd say that definitely qualifies as complicated.
[22:53:58 CEST] <BtbN> And it also doesn't look complete?
[22:55:10 CEST] <hetii> yep, some work was done but it's a 5 year old topic, so today we probably have a lot more possibilities
[22:56:06 CEST] <hetii> as I said I can play my captured screencast, now the point is to have it as a live session
[22:56:06 CEST] <BtbN> DLNA seems kind of dead. And SmartTVs turned into a security nightmare that I wouldn't let near my network.
[23:01:19 CEST] <hetii> I found https://github.com/coherence-project/Coherence
[23:01:52 CEST] <hetii> its based on twisted and maybe its a good start point to try write such live screencast server.
[23:02:48 CEST] <BtbN> Buying a Pi and streaming from that would probably save you some headaches.
[23:02:59 CEST] <BtbN> And would keep that TV off the network.
[23:03:44 CEST] <hetii> well, I have few such board like odroid XU4 etc...
[23:04:13 CEST] <hetii> but still would like to test if idea with dlna server could work.
[00:00:00 CEST] --- Mon Sep  5 2016


More information about the Ffmpeg-devel-irc mailing list