[Ffmpeg-devel-irc] ffmpeg.log.20140826

burek burek021 at gmail.com
Wed Aug 27 02:05:01 CEST 2014


[00:01] <benlieb> tnx!
[00:01] <benlieb> can anyone recommend a way to do what I need: I need to go through 2000+ videos and choose an accurate thumbnail frame down to the .xx decimal place.
[00:01] <benlieb> Ffplay works fine for this
[00:01] <benlieb> but it's tedious to manually copy paste this value from the terminal
[00:02] <benlieb> is there a way once the video is paused to somehow write to a file with the filename and location?
[00:08] <llogan> benlieb: you mean you want to make a screenshot wherever you pause in ffplay?
[00:08] <llogan> doesn't vlc allow you to do just that?
[00:09] <benlieb> I don't want a graphic, I want text, but it needs to be done from a program where I can pause with accuracy to the ss.mm level
[00:10] <benlieb> in an ideal world I would have a file with lines like: filename 33.22
[00:10] <benlieb> http://stackoverflow.com/questions/25495112/how-can-i-write-to-file-with-video-state-while-paused-in-ffplay
[00:10] <benlieb> llogan: ^
[00:19] <llogan> stackoverflow is for programming questions only. super user is probably the better place
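
For reference, once those filename/timestamp pairs are collected, the extraction step benlieb describes can be scripted; a minimal sketch, reusing the 33.22 timestamp from his example (file names are placeholders):

    ffmpeg -ss 33.22 -i input.mp4 -frames:v 1 -q:v 2 thumbnail.jpg

With a fractional -ss value, ffmpeg seeks to the requested sub-second position before grabbing a single frame.
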
[01:15] <woof-woof> Hi!
[01:31] <Camusensei> llogan: I managed to make the ass filter work using the information from the thread you linked me, thanks :)  now I'm trying to get the subtitles larger ^^
[01:32] <Camusensei> it works \o/
[01:34] <Baked_Cake> what would happen if i used aac_he but set it to 128k instead of 64k
[01:34] <Baked_Cake> would that be better than aac_lc
[01:35] <Baked_Cake> i dont really need vbr at such low bit rates
[01:36] <llogan> Camusensei: how did you deal with the issue?
[01:37] <llogan> Camusensei: as for font size: http://stackoverflow.com/a/21369850/1109017
[01:38] <Camusensei> llogan: I already solved the font size issue
[01:38] <Camusensei> llogan: I followed camelotmsl's post (http://ffmpeg.zeranoe.com/forum/viewtopic.php?f=10&t=318&start=20) as well as setting the FONTCONFIG_FILE variable, and it worked. I didn't bother trying out which one was the relevant one.
[01:39] <llogan> i see. how about the font size?
[01:40] <Baked_Cake> for subs?
[01:40] <llogan> yes
[01:41] <llogan> Baked_Cake: if you want 128k you might as well use AAC-LC
[01:41] <Baked_Cake> u can edit sub fonts in aegisub
[01:43] <Baked_Cake> llogan is there some drawback to using aac_he aside from limited playback?
[01:45] <Camusensei> llogan: default font size suited me well. I had previously set my video size (1080p) into the ass file, which resulted in very tiny subs. removing these values got me the default font size back
[01:46] <llogan> Baked_Cake: it's designed for lower bitrates. if you want to use it outside of that usual application then you're using the wrong tool.
[01:46] <Camusensei> llogan: (I had set PlayResX: 1920 and PlayResY: 1040)
[01:46] <Baked_Cake> ic
[01:47] <Baked_Cake> how does the audio quality compare to aac_lc at 128k? it seems about the same to my ears
[01:48] <llogan> if it sounds the same to you then use whatever you want to use. some people claim to like the way aac-he sounds.
[01:50] <llogan> Baked_Cake: some info here https://trac.ffmpeg.org/wiki/Encode/AAC
[01:50] <Baked_Cake> ya ive read thru that badboy
[01:51] <llogan> also https://trac.ffmpeg.org/wiki/Encode/HighQualityAudio but i don't know if any of that is substantiated
[01:51] <Baked_Cake> it does say on there that i can go up to 160k with it
[01:52] <Baked_Cake> o ill check out that other link
[02:02] <Baked_Cake> hmm it says the aac_he profiles cant reach transparency
[02:04] <Baked_Cake> i guess ill toss that idea out the window
[02:05] <Baked_Cake> aand stick to lc_aac
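
For the record, a minimal sketch of the two encoders being compared, assuming an ffmpeg built with --enable-libfdk-aac (file names and bitrates are placeholders):

    # AAC-LC: the usual choice at 128k and up
    ffmpeg -i input.wav -c:a libfdk_aac -b:a 128k output.m4a
    # HE-AAC v1: designed for low bitrates around 48-64k
    ffmpeg -i input.wav -c:a libfdk_aac -profile:a aac_he -b:a 64k output.m4a
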
[02:07] <maujhsn> Is there an ffmpeg command that will bring up a video image in vlc media player?
[02:09] <Baked_Cake> hmm but the other page that says version 1 can go up to 160k has been updated more recently
[02:09] <am0rphis> hi117, how do i disable the window with the equalizer graphics in X when i'm listening with ffplay?
[02:10] <Baked_Cake> ive never used ffplay
[02:17] <am0rphis> i like listening to the radio in the background with it
[02:18] <maujhsn> Is there an ffmpeg command that will bring up a video image in vlc media player?
[02:18] <maujhsn> Is there an ffmpeg command that will bring up a webcam video image in vlc media player?
[02:18] <Baked_Cake> looks like i need to do some reading on SBR
[02:22] <Baked_Cake> hmm the idea is pretty cool, but i can see why its not considered transparent
[02:22] <Baked_Cake> i guess ill just have to listen for my self some more
[02:28] <Baked_Cake> o this is interesting
[02:28] <Baked_Cake> Again, this is not a limitation of the standard, it's an encoder design decision. I thought I already made this clear in another thread once, but I'm happy to repeat: there's a mode called "downsampled SBR", in which you can move the SBR start frequency above half the input signal bandwidth. For example, with 44.1- or 48-kHz audio input, you could let SBR code only the frequencies above 16 kHz or so. Such a setting will be transparent for many people (assuming the core bit-rate is high enough). Fraunhofer's encoder supports downsampled SBR, but that mode might not (yet) be available in Winamp, I don't remember.
[02:29] <am0rphis> -nodisp hides the window ._.
[05:11] <aho> i'm trying to join some images and it's dropping 2 frames
[05:11] <aho> why is it doing that?
[05:18] <aho> upgraded to the most recent build. works now.
[05:26] <aho> or not. drops half the frames if i try to save it as webm
[05:31] <aho> http://superuser.com/questions/452542/ffmpeg-drops-frames-when-encoding-a-png-image-sequence-into-an-x264-mp4-video
[05:31] <aho> that works
[06:52] <djdduty> what am I doing wrong here? http://puu.sh/b8dpu/3a19691647.png
[07:09] <tbarletz_> Is there a way to dump the PMT content of a TS file using ffprobe?
[07:10] <tbarletz_> dump=show the programs and pids in a human readable form
[07:10] <aho> djdduty, try %2d (or 3 if there are up to 3 digits)
[07:11] <djdduty> aho: they are up to 3 digits, but there are no leading 0s on the first ones
[07:11] <blazer420> Greetings. I'm having trouble piping rtmpdump to ffmpeg. Currently testing with unprotected rtmp source to make it easier, code is here.. http://pastebin.com/a9J5Me9G
[07:12] <blazer420> Any assistance is appreciated. Thank you. :)
[07:12] <aho> you'd have to use %02d if there are leading zeros
[07:12] <aho> do you get the same error with %3d?
[07:12] <djdduty> aho: yeah
[07:14] <djdduty> aho: got it, there was a space in the front
[07:14] <djdduty> a leading space*
[07:14] <aho> ah
[07:14] <aho> because i just tried %d and it worked just fine :>
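
For anyone reading along, the pattern rules under discussion, as a sketch (file names are placeholders):

    # zero-padded sequence (img001.png, img002.png, ...): %03d
    ffmpeg -framerate 25 -i img%03d.png -c:v libx264 -pix_fmt yuv420p out.mp4
    # unpadded sequence (img1.png ... img123.png): plain %d
    ffmpeg -framerate 25 -i img%d.png -c:v libx264 -pix_fmt yuv420p out.mp4
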
[07:18] <blazer420> Oh im sorry. Also, the error message I get is "pipe:: Invalid data found when processing input"
[08:11] <ArM23> hi
[08:14] <ArM23> hi guys how do i record RTMP live stream
[09:16] <Baked_Cake> what do u guys think about libopus 5.1 vs aac 5.1
[09:16] <Baked_Cake> or just libopus vs libfdk_aac
[09:25] <K4T> hi
[09:26] <K4T> can someone provide me a 30fps film sample, but not a timelapse or music video?
[09:26] <K4T> I searched over one hundred videos on vimeo and found nothing
[09:35] <Baked_Cake> looks like opus-tools is the only way to go
[09:37] <K4T> ok, found one
[09:43] <K4T> can someone give me some hints or a link to information about converting 29fps video to 25?
[09:43] <K4T> I hope I can convert it with ffmpeg
[09:44] <K4T> and get smooth playback
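
A minimal sketch of the straightforward conversion (file names are placeholders): the fps filter drops roughly one frame in six to get from 29.97 to 25, so perfectly smooth motion is not guaranteed:

    ffmpeg -i input.mp4 -filter:v fps=25 -c:a copy output.mp4
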
[10:35] <K4T> can I use FFSERVER as an alternative to Wowza Media Server? I need to stream video card output (Decklink Mini Monitor) to multicast, flash players on the web (like JWPlayer), android devices (via RTSP I think) and iOS devices
[10:58] <K4T> no? :p
[12:04] <alinescoo> Hi all. I have a question: is it possible to use the android port of ffmpeg and add a watermark image over a video?
[12:17] <spaam> using the overlay filter? should work
[12:20] <alinescoo> Thanks @spaam, do you know how reliable ffmpeg is for android?
[12:21] <alinescoo> I need to learn a lot on this video matter and I don't want to invest time in something which has compatibility issues
[12:22] <spaam> alinescoo: i have no idea. i know it builds :)
[12:44] <K4T> is it possible to use ffserver on Windows?
[13:03] <devnode> goooood ...day.
[13:03] <devnode> i missed the morning :D
[13:19] <Eftekhari> hi
[13:21] <Eftekhari> i have ffserver. when the feed is larger than 2 GB, if the ffmpeg that feeds the ffserver exits, re-executing ffmpeg to feed the server does not work, with the error: Error reading write index from feed file '/tmp/feed1.ffm': Resource temporarily unavailable
[13:27] <bigzed> Hello, I have a 16channel audio file. How do I convert this to stereo?
[13:42] <bigzed> Or more precisely, how can I drop everything except the first two channels and mix them to stereo?
[13:54] <Mavrik> bigzed, you'll have to use audio filters in sequence
[13:55] <Mavrik> bigzed, see channelsplit filter (for splitting) and amix filter (for remixing back)
[13:59] <bigzed> Mavrik, Is there maybe a howto about it?
[13:59] <Eftekhari> i have ffserver. when the feed is larger than 2 GB, if the ffmpeg that feeds the ffserver exits, re-executing ffmpeg to feed the server does not work, with the error: Error reading write index from feed file '/tmp/feed1.ffm': Resource temporarily unavailable
[13:59] <Mavrik> bigzed, look on ffmpeg wiki and example on ffmpeg documentation
[14:00] <Mavrik> I bet there's a remix example somewhere there
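
A sketch of an alternative to the channelsplit/amix route Mavrik describes: the pan filter can select the first two channels directly (file names are placeholders; c0/c1 are the first two input channels):

    ffmpeg -i input_16ch.wav -af "pan=stereo|c0=c0|c1=c1" output_stereo.wav
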
[14:15] <blight> hi guys
[14:17] <blight> i am thinking about opening a file twice (two format context's) and use one to read the audio and one to read the video so i can treat both as independent streams which would make my code more simple i think... does anybody know how much overhead this will have or if there is any reason why it could be a bad idea?
[14:17] <blight> *contexts
[14:18] <blight> (i always open files, no streams or anything)
[14:37] <Hello71> sounds like you might have issues with A/V sync
[14:40] <Mavrik> both audio and video frames have absolute timestamps so that's not an issue really.
[15:02] <blight> the timestamps are not always 100% correct are they? and when handling the interleaved packets the code is more complex isnt it?
[15:03] <blight> iirc i have a file where the audio stream has some strange timebase and the timestamps have an accumulated rounding error
[15:05] <Mavrik> ugh
[15:05] <Mavrik> the timestamps HAVE to be correct
[15:05] <Mavrik> how else does the presenter know when it's time to present a frame or set of audio samples?
[15:07] <blight> i am sure you know a lot more about ffmpeg internals, but do audio timestamps have to be sample correct?
[15:07] <Mavrik> PTS has to be correct and granular enough to allow playback.
[15:08] <Mavrik> note that PTS might not be in the same timebase for each stream
[15:08] <blight> like if you decode a stream from some position you get some audio samples, then you seek before that position and you get samples with timestamp, will the delta between the timestamps be the exact sample count so the 2 pieces of samples can be matched with sample accuracy?
[15:10] <Mavrik> depends on format
[15:10] <Mavrik> but it should be for most sane formats.
[15:10] <blight> hmm i am thinking about decoding an audio stream in reverse... by decoding blocks from back to front and then putting them together
[15:10] <Mavrik> note that packet and data loss does exist in real world :)
[15:11] <blight> well i always work with files but some of them are not perfect of course :(
[16:29] <Sashmo_> here is a good question..... if my source is 59.94......  and I am encoding it to youtube live.... and they say to use a GOP of 2 seconds.... what do I set?  should it be -g 120 or should it be like -g 119 ?
[16:39] <BtbN> flash can't play 59.94 or 60 fps anyway, so you better encode it as 30 fps
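
For the record, -g counts frames, so a 2-second GOP is 2 x the output frame rate: 2 x 59.94 = 119.88, making -g 120 the usual rounding, or -g 60 at the 30 fps BtbN suggests.
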
[16:51] <chchjesus> Hey, I was wondering if anyone has experience with transcoding to webm from mkv?
[17:01] <c_14> chchjesus: What's your question?
[17:03] <chchjesus> http://pastebin.com/bnmQ9qXQ
[17:03] <chchjesus> Here's a paste
[17:04] <chchjesus> I was wondering what an appropriate way would be to transcode an mkv file to a webm file
[17:04] <chchjesus> I'd like to make a clip out of part of the mkv file
[17:04] <c_14> vpx does not have a preset setting
[17:04] <chchjesus> but the resulting clip I'd like to be in webm
[17:04] <chchjesus> ok
[17:05] <c_14> -threads 0 is also a noop for vpx
[17:06] <chchjesus> c_14: Yet theok
[17:06] <chchjesus> oops
[17:06] <chchjesus> It encodes it to 7 minutes instead of the 17 seconds I asked it for
[17:06] <chchjesus> And the size of the file doesn't start counting up until ffmpeg has at least gotten to 12 seconds
[17:09] <chchjesus> When I play the file
[17:09] <chchjesus> It plays 17 seconds
[17:10] <chchjesus> But it's the wrong 17 seconds, and the file says it's 7 minutes long
[17:10] <c_14> Which 17 second range is it?
[17:10] <c_14> about
[17:11] <chchjesus> Instead it starts at what is 00:16:21 in the original mkv file
[17:13] <chchjesus> Let me give you another paste
[17:13] <c_14> try setpts=897 instead of the PTS+897/TB, if that doesn't work, do 897/TB
[17:15] <chchjesus> What is pts?
[17:15] <c_14> Presentation Time Stamp
[17:15] <chchjesus> I've used it previously only to get the subtitles to the right position
[17:15] <c_14> I'm guessing it has something to do with the setpts filters anyway, you can try the command without the -vf just to see if it cuts correctly.
[17:16] <c_14> I know what you're using it for, I just don't know what else could be causing the difference.
[17:17] <chchjesus> I suspect it's a difference in the formats?
[17:17] <chchjesus> And I'm just not transcoding it correctly?
[17:18] <c_14> Shouldn't be. I've done similar things rather often without issues.
[17:19] <c_14> You can try using -ss and -t as output options to see if that helps.
[17:19] <c_14> Maybe add -loglevel debug and see what ffmpeg has to say about the commandline.
[17:20] <chchjesus> http://pastebin.com/gnUgbGSg
[17:20] <chchjesus> Ok
[17:20] <chchjesus> I'll try those
[17:21] <chchjesus> Where do I put ss and t for output options?
[17:21] <chchjesus> Do I just need to add them after the input file?
[17:21] <c_14> Just on the other side of the input file.
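
The placement c_14 is describing, as a sketch (timestamps, codecs, and file names here are placeholders, not taken from the paste):

    # input seeking: -ss before -i, fast because it seeks before decoding
    ffmpeg -ss 00:14:57 -i input.mkv -t 17 -c:v libvpx -c:a libvorbis clip.webm
    # output seeking: -ss after -i, slower but decodes and discards up to the point
    ffmpeg -i input.mkv -ss 00:14:57 -t 17 -c:v libvpx -c:a libvorbis clip.webm
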
[17:23] <chchjesus> Oh crazy
[17:24] <chchjesus> It's given me a 17 second file now
[17:24] <chchjesus> But it's still the wrong 17 seconds
[17:24] <chchjesus> But
[17:24] <chchjesus> Lemme give you the output
[17:24] <chchjesus> Hold up
[17:25] <chchjesus> http://pastebin.com/RpVzXmjr
[17:25] <chchjesus> There you go
[17:25] <chchjesus> c_14
[17:30] <misitawright> hi guys i need some help. I am evaluating how to go about making a bunch of videos using ffmpeg
[17:30] <c_14> I have no idea why it would be cutting incorrectly, does this happen with every file you try or just the one.
[17:31] <misitawright> I would like to have a text overlay that reads "options: blah" and have each option fade in and out while leaving the "options:" text in place. how would i go about doing something like this?
[17:31] <chchjesus> c_14: It's happened with others I think
[17:32] <chchjesus> c_14: So, the only problem now is that it's offset wrong
[17:32] <c_14> Is it a static offset every time? (ie the same offset)
[17:33] <chchjesus> What do you mean?
[17:33] <chchjesus> The same time?
[17:33] <c_14> The same start/end points. ie instead of from 14:57 it's from 16:30 every time.
[17:33] <chchjesus> Oh, yes
[17:34] <chchjesus> It does mention in the output about a delay
[17:34] <c_14> Ok, my cheap answer would be to find that offset and just apply it to the seek time. The slightly more intelligent answer would be to test with a recent git build (you can try a static build) and if it still doesn't work with that, check the bug tracker and if you can't find a similar bug, make a new one.
[17:35] <chchjesus> Yeah, I was thinking that too (about finding the right offset)
[17:35] <chchjesus> But sure
[17:36] <chchjesus> I'll build from the AUR soon if this doesn't work
[17:36] <c_14> (if you don't like building stuff)
[17:36] <chchjesus> Ah
[17:37] <chchjesus> c_14: Oh, btw, there's a long pause at the start of running it where neither the time nor the filesize goes up
[17:37] <chchjesus> They both remain at 0
[17:37] <c_14> That's (probably) while it's seeking.
[17:40] <chchjesus> It might just be easier to install ffmpeg-git from the user repos
[17:43] <chchjesus> c_14: Weird. It just replaced both libx264 and x264 with the x264-git package
[17:43] <chchjesus> maybe there was something wrong with the x264 codec
[17:43] <chchjesus> It's compiling it now, anyway.
[17:54] <Vish__> Hi - I am getting an mjpeg stream from an android device which is connected via USB, by firing a terminal command. Android itself runs ffmpeg and provides the mjpeg data. How do i get that data to ffserver on my mac? Any ideas?
[17:56] <chchjesus> Vish__: Can you stream it to an ffm file?
[17:58] <Vish__> chchjesus: thats what I am not able to do.. when I fire a terminal command eg: adb shell "ffmpeg -options", it starts printing output in the console which is mjpeg format, and an accurate one. I know that. How do I pipe that output to ffserver?
[18:00] <chchjesus> I'll give you a command I use
[18:00] <chchjesus> First I get ffserver running
[18:00] <chchjesus> with
[18:00] <Vish__> chchjesus: part of problem is - ffmpeg runs on android while ffserver runs on mac
[18:00] <chchjesus> It shouldn't matter
[18:00] <chchjesus> ffserver -f /etc/ffserver.conf
[18:00] <chchjesus> sudo ffmpeg -i $1 -vf "subtitles=$1, scale=1280:720" -b:v 256K http://localhost:8090/feed1.ffm
[18:01] <chchjesus> ffmpeg -i input_file http://localhost:8090/feed1.ffm
[18:01] <chchjesus> So, you want to have ffserver running on your mac, on 0.0.0.0
[18:01] <chchjesus> Then you use ffmpeg to stream to the ip address of the mac
[18:02] <chchjesus> You'll also need to properly set the permissions of the feed in ffserver config
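
For context, a minimal ffserver.conf sketch of the feed permissions being referred to (the port, sizes, network range, and mjpeg stream settings are assumptions, not from the conversation):

    HTTPPort 8090
    HTTPBindAddress 0.0.0.0
    <Feed feed1.ffm>
      File /tmp/feed1.ffm
      FileMaxSize 200M
      # allow the android device's network to push into this feed
      ACL allow 192.168.0.0 192.168.255.255
    </Feed>
    <Stream live.mjpg>
      Feed feed1.ffm
      Format mpjpeg
      VideoFrameRate 15
      VideoSize 640x480
      NoAudio
    </Stream>
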
[18:03] <Vish__> sorry but i did not understand. You fired 2 ffmpeg commands on android itself.. on android there is no ffserver, so the ffm does not exist there..
[18:07] <chchjesus> Yes, you have the ffm file running on the mac
[18:07] <chchjesus> Then you stream from the android with ffmpeg to the mac
[18:08] <chchjesus> You want to send stuff to ffserver from ffmpeg, correct?
[18:08] <chchjesus> ffmpeg being on the android?
[18:13] <Vish__> yes correct.. but I guess unless ffmpeg and ffserver are running on the same machine, the localhost thing is not going to work...
[18:13] <chchjesus> Yes, change localhost to 0.0.0.0
[18:14] <chchjesus> Are you transmitting on the same network?
[18:14] <chchjesus> Or across the internet?
[18:15] <chchjesus> Eh
[18:30] <chchjesus> i tried to help them
[19:39] <davis> hello
[19:39] <davis> i pulled the ffmpeg sources using git and did a make install, but ffplay did not install. it appears it's not built. Where do I get the ffplay sources?
[19:40] <JEEB> then you didn't have the x11 stuff
[19:40] <JEEB> check the output that configure gave
[19:40] <JEEB> you can re-run configure if you want to
[19:41] <davis> ok let me see what it says
[19:41] <davis> hmm ./configure | less does not show any entries for ffplay
[19:43] <klaxa> pastebin config.log maybe?
[19:44] <JEEB> davis, it should tell you if it did find or do anything related to x11
[19:45] <davis> is there a pastebin site which allows you to upload a file?
[19:45] <davis> the log is long and will be difficult with copy and paste
[19:45] <davis> i do see x11 lines here, nothing with an error pops out
[19:47] <davis> maybe a better question is, on the www.ffmpeg.org website, is there a place which lists all the libraries/packages necessary to build ffmpeg ?
[19:48] <brontosaurusrex> slightly offtopic: i did a simple cmx 3600 plotter, but i can't make adobe import it properly, any ideas? seems valid and all
[19:48] <brontosaurusrex> i mean it wont import edl
[19:51] <brontosaurusrex> example: http://paste.debian.net/plain/117692
[19:56] <llogan> davis: ffplay requires sdl
[19:56] <davis> yes, i have found a ffmpeg-user post which seems to imply that.
[19:57] <davis> i'm trying to get ubuntu to install it. sadly it looks like I've got broken dependencies.
[19:57] <llogan> http://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu
[20:04] <davis> llogan: many thanks
[20:05] <davis> JEEB: many thanks
[20:06] <llogan> were you successful?
[20:06] <davis> i pulled down the libsdl-dev
[20:07] <davis> i reran configure and now I am doing a new build
[20:07] <llogan> i wonder why someone added --disable-opencl to the x264 instructions
[20:16] <davis> yes, it now works. many thanks again.
[20:43] <drawesome> Is there a command line switch to set ffplay to be fullscreen when it opens?
[20:44] <llogan> you can press 'f' while playing, but i guess that's not what you want exactly
[20:45] <llogan> oh, -fs
[21:02] <mistawright> hi guys i need some help. I have a png i am trying to overlay on top of a video i am making from images and an audio file. When i add the overlay the overlay shows at an offset. the image is 640x360 and the video is 640x480. how can i scale the image input and have it overlay directly on top?
[21:04] <Fjorgynn> aha
[21:04] <VeL0X> Hey guys
[21:06] <VeL0X> when i try to run my new music files, i get some errors. i was messing around, but at least i'm now at ffmpeg; when i try to run it i get the error "Output file #0 does not contain any stream"
[21:06] <VeL0X> what to do?
[21:11] <Hello71> how the ass do you run a music file
[21:11] <VeL0X> with my player (moc / mocp) :D
[21:12] <llogan> and can you provide the output image so i can see what you are describing?
[21:12] <VeL0X> but thats showing an error like "Could not find codec parameters (err -543....)
[21:13] <VeL0X> yep. what was the exact pastebin command?
[21:18] <llogan> ok, next!
[21:19] <mistawright> fflogger, This is the command i am using. http://pastebin.com/Mg6a6N8G
[21:23] <llogan> mistawright: you forgot the complete console output.
[21:28] <mistawright> llogan, http://pastebin.com/kX944PBs
[21:28] <mistawright> here goes the console output
[21:28] <mistawright> https://www.youtube.com/watch?v=-RBLvgQMlKU&feature=youtu.be
[21:29] <mistawright> you can see my issue here
[21:30] <mistawright> the bar should be at the very bottom of the screen. the width is fine. but the image is 120 pixels shy of 480. I have tried scaling but am just barely beginning to use ffmpeg
[21:31] <llogan> can you provide frame.png?
[21:32] <mistawright> https://docs.google.com/file/d/0B4rh1bWqHxtLUEFlYUEycDE5WkU/edit?pli=1
[21:33] <mistawright> I havent been able to figure out positioning. I will be using that with two different overlays based on a situation and will also need to have text draw on top
[21:37] <llogan> mistawright: overlay=W-w:H-h
[21:39] <mistawright> with the command i used how should i have gone about getting this image in place properly?
[21:41] <llogan> mistawright: ffmpeg -framerate 1/5 -i image%03d.jpg -i audio.mp3 -i frame.png -filter_complex "[0:v][2:v]overlay=(W-w)/2:H-h,format=yuv420p[vid]" -map "[vid]" -map 1:a -c:v libx264 -c:a aac -strict experimental -r 30 outAudio.mp4
[21:42] <llogan> if you're just uploading to youtube then you can stream copy the audio instead of re-encoding it (-c:a copy)
[21:43] <mistawright> thats good to know definitely helpful
[21:44] <llogan> in your case W-w looks the same as (W-w)/2, but (W-w)/2 centers it in case the frame.png is smaller (if you don't scale it first)
[21:44] <llogan> alternatively you could use drawbox filter to make the red and black bars instead of an image file
[21:45] <llogan> also, add -crf 18 if you're uploading to youtube. since they re-encode, you want to give it as high quality as is practical for you to upload.
[21:45] <mistawright> that sounds like it may be easier. I have another transparent png that I need overlaid as well. I have not seen an example of using multiple overlays. Once I have those overlays situated I will then go through and start adding text and transitions
[21:46] <llogan> adding -shortest might be useful too to make sure the output ends when the shortest input ends
[21:47] <llogan> multiple overlays aren't hard: [0:v][1:v]overlay[a];[a][2:v]overlay[b]
[21:48] <llogan> drawtext also has a box drawing option, but i think it only is as tall and wide as the text (I may be wrong though)
[21:48] <mistawright> for the overlay [a] input, should that be written as -i inputa.png -filter_complex "blah", or should it be part of the existing filter_complex options?
[21:49] <llogan> all filtering can occur within one filtergraph (use one -filter_complex). you can chain filters in a row with a comma to make filterchains, and you can chain filterchains together with a semicolon
[21:51] <llogan> as in this shitty example: [0:v]scale=640:-2,crop=iw/2,negate[scn];[0:v][scn]overlay
[21:51] <llogan> you'll get the hang of it after a while
[21:52] <mistawright> trying to figure out how to add the overlay to the command you provided now
[21:52] <llogan> you mean another image that you want to overlay?
[21:53] <mistawright> yeah. trying to recreate what we currently have in an opensource  solution minus windows servers
[21:55] <llogan> something like: ffmpeg -framerate 1/5 -i image%03d.jpg -i audio.mp3 -i frame.png -i anotherimage.png -filter_complex "[0:v][2:v]overlay=(W-w)/2:H-h[v1];[v1][3:v]overlay,format=yuv420p[out]" -map "[out]" -map 1:a -c:v libx264 -c:a aac -strict experimental -r 30 outAudio.mp4
[21:58] <mistawright> I am lost as to where the [v1] label came from.
[22:04] <llogan> mistawright: it's an arbitrary label name. you can name it anything. it's so the output from that filterchain can be referenced by other filters or via -map
[22:08] <mistawright> makes sense. thanks for the help. i do appreciate it. concise documentation and examples have eluded me
[22:09] <llogan> http://ffmpeg.org/ffmpeg-filters.html#Description
[22:09] <llogan> https://trac.ffmpeg.org/wiki/FilteringGuide
[22:15] <K4T> is ffserver still developed?
[22:16] <llogan> K4T: it still receives rare updates.
[22:17] <llogan> but i would not call it "active development"
[22:19] <mistawright> llogan, not finding much on transitions between each image in my slideshow. I had read earlier today that kenburns and blend were available transitions
[22:19] <mistawright> or at least allowing each image to fade into the next would be helpful, then kenburns would just be icing
[22:19] <K4T> so it is better to not use ffserver for professional solutions, yes?
[22:20] <llogan> i don't know. i've never used it.
[22:20] <llogan> mistawright: i don't know the easiest way to place a blend or fade between sequential images
[22:23] <llogan> but i do have a drawbox example: [0:v]drawbox=x=iw-w:y=ih-h:w=iw:h=48:c=black@0.9:t=h,drawbox=x=iw-w:y=ih-48-h:w=iw:h=5:c=red:t=h
[22:25] <llogan> height could also be a percentage of the input size if you wanted it to be dynamic
[22:25] <mistawright> interesting
[22:25] <mistawright> time to open up pastebin so i dont forget anything
[22:26] <llogan> http://lists.ffmpeg.org/pipermail/ffmpeg-devel-irc/2014-August/thread.html
[22:26] <llogan> channel is logged
[22:28] <llogan> mistawright: using srt or ass subtitles to create hardsubs is a good alternative to drawtext if you want certain text to show up at certain times
[22:29] <mistawright> I was going to do that for a list of options but wanted it to appear in a box about 100px by 20px
[22:30] <mistawright> and figured an srt with the options would do fine. that way I could have it displayed easily
[22:30] <llogan> aegisub is a good tool to make subs if you don't want to do it manually
[22:32] Action: llogan will return in an hour or so
[22:42] <vlatkozelka> hi
[22:42] <vlatkozelka> when i stream from file to udp or any other protocol (tho udp is what i need)... it gets encoded way too fast, like as fast as the cpu can read from the file
[22:57] <c_14> Get a slower CPU? :P
[22:57] <c_14> If you want real-time, use -re
[22:57] <vlatkozelka> and
[22:57] <vlatkozelka> i lose timestamps
[22:58] <c_14> real-time as in stream 1s of video every second
[22:58] <vlatkozelka> its a ts stream saved onto ts files
[22:58] <vlatkozelka> i tried -copyts it ruined everything , started recording 1 second files
[22:58] <vlatkozelka> i tried -use_wallclock_as_timestamps .. also bad
[22:58] <vlatkozelka> ill try this -re
[22:59] <vlatkozelka> ok that solved the speed issue ... and dropped cpu usage :) thx
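
For the record, a sketch of the working invocation (the file name and address are placeholders): -re throttles reading to the input's native frame rate, so one second of stream goes out per wall-clock second:

    ffmpeg -re -i input.ts -c copy -f mpegts udp://239.0.0.1:1234
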
[23:18] <KjetilK> I'm organizing a sports event where we will attempt to give spectators (we expect 250-300 in total) the possibility to view a video stream on their own devices using a Wifi network we set up using 7-8 access points.
[23:18] <KjetilK> We think we have the wifi part rather well covered now, but we have maximum 70 Mbits/s into the arena, and so the plan we had to use livestream or ustream seems risky, since that seems to open one stream per client
[23:20] <KjetilK> so, I was thinking of putting up a box with VLC,  but then I just realised ffmpeg perhaps does the job better
[23:21] <KjetilK> so, I can have a proxy on the LAN serving all the clients on the inside
[23:21] <KjetilK> the problem seems to be that iOS clients should have HLS, whereas Android clients should have RTSP
[23:22] <KjetilK> so, my question is just if I could have ffmpeg set up both a HLS stream and an RTSP stream?
[23:22] <vlatkozelka> like run 2 processes at same time ?
[23:23] <c_14> KjetilK: either use two outputs (this will encode everything twice) or see if you can get everything working with the tee muxer
[23:23] <vlatkozelka> c_14
[23:23] <vlatkozelka> that -re thing worked
[23:23] <vlatkozelka> but the timestamps always start from 00:00:00
[23:24] <vlatkozelka> idc about the timestamp in the files, i need the real timestamp, like say the windows clock
[23:25] <c_14> use the setpts filter
[23:25] <KjetilK> ok, so this should be doable, if I have one incoming stream, dunno which protocol, and then (re-)encode to get two streams out?
[23:25] <c_14> yep
[23:25] <KjetilK> cool :-)
[23:26] <KjetilK> I may come back to haunt you with the details, meanwhile I'll report back to the rest of the pack that my evil plans are likely to work out :-)
[23:26] <vlatkozelka> lol
[23:26] <vlatkozelka> good luck :)
[23:27] <KjetilK> thanks! :-)
[23:27] <llogan> https://trac.ffmpeg.org/wiki/Creating%20multiple%20outputs#Teepseudo-muxer
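
A sketch of the tee route from that wiki page, encoding once and fanning out to two consumers (the output URLs and paths are placeholders; an RTSP leg would additionally need a server to push to):

    ffmpeg -i input -map 0 -c:v libx264 -c:a aac -strict experimental \
        -f tee "[f=hls]/var/www/html/live.m3u8|[f=mpegts]udp://10.0.0.2:1234"
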
[23:27] <vlatkozelka> c_14 how to use it ? i know i should look at docs but ive been reading docs about stuff for about 2 months man ... this is the last bit
[23:27] <KjetilK> llogan, ah, great, thanks!
[23:28] <vlatkozelka> i made a monitor program in JAVA to monitor TV channels and record them and extract from them ... just need this last bit of code :v
[23:29] <c_14> setpts=time(0)
[23:29] <c_14> I think.
[00:00] --- Wed Aug 27 2014

