[Ffmpeg-devel-irc] ffmpeg.log.20150717

burek burek021 at gmail.com
Sat Jul 18 02:05:01 CEST 2015

[00:30:22 CEST] <opdop> Hi. I have a question about streaming over HTTP via ffserver and ffmpeg
[00:30:41 CEST] <DHE> go on
[00:31:45 CEST] <opdop> The question is: is there a way to view just the latest part of the stream (in real time) instead of the whole video (from the beginning), like happens to me?
[00:32:53 CEST] <opdop> let's say I do mplayer http://ip_address:port/streamlink
[00:33:38 CEST] <opdop> It gives me the video since the beginning and not in real time (that happens whatever I use...mplayer, web browser, vlc...)
[00:35:14 CEST] <opdop> what I understood is that the feed is a video recorded constantly; then via the ffserver config it gets connected to a link and served with the codec setup
[00:36:21 CEST] <opdop> I'm just missing how to get the video in real time and not from the beginning (or whether there is a way for ffserver/ffmpeg to serve just, say, the last 30 seconds instead of recording the whole time)
[00:38:28 CEST] <opdop> I can obtain what I want easily via VLC, but it's too heavy. I would prefer to use ffserver/ffmpeg
[00:38:44 CEST] <klaxa> it sounds like what you want is http live streaming
[00:38:49 CEST] <opdop> yes
[00:39:51 CEST] <klaxa> as a matter of fact i'm basically currently implementing that in the libavformat library as a google summer of code sponsored project
[00:40:38 CEST] <klaxa> it still needs a lot of time, work, code and love
[00:40:53 CEST] <opdop> Good luck :)
[00:41:29 CEST] <klaxa> thanks, seeing that people want these features gives me motivation to work harder :)
[00:42:04 CEST] <opdop> I see a lot of people rely on stuff like motion
[00:42:30 CEST] <opdop> but I don't want a flow of jpg pics, I need a real video, audio included
[00:43:03 CEST] <DHE> I thought HLS was already in ffmpeg...
[00:43:10 CEST] <DHE> it's not spectacular, but it works fairly well
[00:43:49 CEST] <opdop> Can you provide more info?
[00:44:15 CEST] <opdop> As I said, I'm able to get video streaming (captured from a webcam)
[00:44:33 CEST] <opdop> My only problem is to get that streaming in real time then
[00:44:45 CEST] <opdop> I get the whole video since the beginning instead
[00:46:30 CEST] <opdop> when you run ffserver + ffmpeg, you can visit the page http://ip_address:port/stat.html (usually), click the link of your stream and get it
[00:47:11 CEST] <opdop> Just, you get it from the beginning, as if you are watching a video/movie instead of getting the real-time part of the video
[00:50:17 CEST] <DHE> opdop: I'm running an HLS producing ffmpeg process now. Converts one live feed into HLS. it supports live events, VOD, and even HLS encryption. so it does decently well for me
[00:52:54 CEST] <opdop> DHE: cool, any links about how to do that? I'm not a ffmpeg guru
[00:54:11 CEST] <DHE> ffmpeg -i ..... -f hls -hls_flags delete_segments -hls_list_size 4 -hls_time 10  /var/www/html/hlsvideos/livefeed.m3u8
[00:54:49 CEST] <DHE> something like this would provide a live channel as long as the input to ffmpeg rate-limits itself to real-time input
[00:54:56 CEST] <DHE> I'm running the git head, and I think some of these HLS features are new
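[Editor's note: DHE's command with a v4l2 webcam as the input might look like the sketch below. The device node and web-root path are placeholder assumptions, and the assembled command is only echoed so it can be inspected before running.]

```shell
# Sketch of the HLS command above, with a webcam as input.
# /dev/video0 and /var/www/html/hlsvideos are placeholder paths.
INPUT=/dev/video0
OUTDIR=/var/www/html/hlsvideos

CMD="ffmpeg -f v4l2 -i $INPUT \
-f hls -hls_flags delete_segments -hls_list_size 4 -hls_time 10 \
$OUTDIR/livefeed.m3u8"

echo "$CMD"   # inspect it, then run it directly once the paths exist
```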
[00:56:26 CEST] <opdop> the input part is the link to the ffm feed?
[00:57:30 CEST] <DHE> if you can play it with ffplay or vlc, chances are it'll work here
[00:58:40 CEST] <opdop> I'm a little confused about the last part...
[00:58:45 CEST] <DHE> ?
[00:59:15 CEST] <klaxa> opdop: for DHE's example you don't use ffserver at all, you use ffmpeg
[00:59:31 CEST] <opdop> oh ok, now we are...
[00:59:53 CEST] <opdop> because otherwise it looked like it would duplicate the feed
[01:00:28 CEST] <DHE> in my case I have apache (or similar) as my server. any static content server would do.
[01:00:42 CEST] <opdop> so, now the question is: how do I reach the HLS from another device?
[01:01:53 CEST] <DHE> From a stock install of apache (at least on centos 6) put into your video player
[01:02:29 CEST] <opdop> oh yes, I see it now that you are talking about apache
[01:04:16 CEST] <opdop> so, in my case using ffserver, how could I adapt that command?
[01:04:44 CEST] <DHE> this negates the need for ffserver
[01:05:23 CEST] <opdop> but I don't wanna run the "heavy" apache just to get some server functions :)
[01:05:33 CEST] <opdop> btw I got it now...
[01:05:35 CEST] <DHE> pick something else then. nginx, lighttpd
[01:05:40 CEST] <DHE> I like the former myself
[01:06:14 CEST] <opdop> the -i part gets the webcam (let's say /dev/video0), the last part the link to the stream
[01:06:33 CEST] <opdop> just...the feed is name.ffm
[01:07:11 CEST] <opdop> would it be ok anyway or do I need to switch it to .m3u8?
[01:08:06 CEST] <DHE> if you're using hls, .m3u8 is the output format
[01:08:26 CEST] <opdop> ok, I'm gonna try with that format then
[01:08:30 CEST] <DHE> actually how real-time do you need this? for webcam output it sounds like you want it to be very realtime and HLS introduces a fair amount of delay
[01:09:16 CEST] <opdop> let's say 2-3 seconds of delay would be ok
[01:09:35 CEST] <DHE> yeah, HLS is a lot worse. I'm thinking 10-15 seconds.
[01:10:28 CEST] <opdop> mmm...is there a better solution?
[01:10:58 CEST] <opdop> because that delay is pretty much the same as I get using VLC and I'm not ok with that
[01:11:47 CEST] <opdop> are you running all of that on a powerful machine?
[01:12:02 CEST] <opdop> I'm working on a raspberry pi 2
[01:12:58 CEST] <klaxa> you might want to skip encoding then, try adding -c copy to your command if it's not already present
[01:14:02 CEST] <opdop> klaxa: what I understood is that the feed is a "raw" video. so, -c copy shouldn't be helpful, right?
[01:14:25 CEST] <opdop> I mean, you need to encode it
[01:14:46 CEST] <opdop> Am I wrong?
[01:15:08 CEST] <klaxa> no i think you are right for ffserver, i thought you were now using ffmpeg + a light webserver
[01:15:49 CEST] <opdop> Maybe... if it's the best solution for avoiding huge delay
[01:15:50 CEST] <klaxa> a light webserver has literally no more impact on your machine than your ssh server
[01:16:14 CEST] <opdop> klaxa: until you start to use it :P
[01:16:37 CEST] <klaxa> you can use something simple like: python -m SimpleHTTPServer 8000
[01:17:31 CEST] <opdop> I would prefer to build a little socket in C then
[01:17:54 CEST] <opdop> it would be lighter and perform better than involving python
[01:18:21 CEST] <klaxa> you can probably do it with combining shell commands
[01:18:43 CEST] <pzich> just netcat in a loop :D
[01:21:06 CEST] <opdop> :D
[01:21:52 CEST] <klaxa> i underestimated the complexity of http, doing it in a shell is not as easy as i thought
[01:22:52 CEST] <opdop> It's enough to create a socket listening on a port...
[01:28:00 CEST] <opdop> ok, with apache it works well enough (with the delay DHE mentioned)
[01:28:20 CEST] <opdop> if I try to use it with ffserver I get a broken pipe
[01:28:49 CEST] <opdop> about av_interleaved_write_frame()
[01:31:49 CEST] <opdop> mmm, noticed a thing...
[01:32:23 CEST] <opdop> using that command, you don't get just file.m3u8, but also different file*.ts
[01:33:03 CEST] <pzich> isn't the m3u8 just the playlist?
[01:33:51 CEST] <opdop> and those file*.ts are little pieces of video. So, the whole stream gets split into different pieces and when you visit the link, you get just the last one (the most recent one)
[01:34:03 CEST] <pzich> yup
[01:34:06 CEST] <opdop> nice trick, but that's not real streaming
[01:35:26 CEST] <pzich> you don't want to share just the last one though, right? You want the playlist and all the TS files available?
[01:37:09 CEST] <opdop> I would like more something like: mplayer --ss=5 http://...stream to get the last 5 seconds of the streaming for example (using a loop, you would get those last 5 seconds updated constantly)
[01:38:12 CEST] <opdop> pzich: no, I don't want that, I just want a real live stream like you get with VLC for example (but it's kinda too heavy on the raspberry pi and the delay is over 10-15 seconds)
[01:42:07 CEST] <opdop> isn't there a way to make ffmpeg provide a stream (without recording it, like VLC does) and watch it from the moment I connect onward?
[01:44:36 CEST] <Svenska> well, the difference is that the HLS thingy saves short segments on disk
[01:44:43 CEST] <Svenska> what you want saves short segments in RAM
[01:45:01 CEST] <opdop> I think it's what VLC does
[01:45:03 CEST] <Svenska> so, if you put the folder with playlist and *.ts on a ramdisk, you essentially get the same thing
[01:45:19 CEST] <opdop> the point is not that
[01:45:30 CEST] <Svenska> how "live" it is depends on the buffer size
[01:46:00 CEST] <opdop> the point is that if I have a HLS with VLC, when I connect to it, I get the streaming from that moment ahead
[01:46:14 CEST] <Svenska> isn't that what you wanted?
[01:46:19 CEST] <opdop> yes
[01:46:37 CEST] <opdop> but VLC is too heavy on the raspberry pi it seems
[01:46:45 CEST] <Svenska> with HLS, the playlist is constantly updated, and --delete sounds to me like "and when it's over, remove it from playlist and filesystem"
[01:47:05 CEST] <DHE> for live playlists you use the delete option. you can't rewind, it's literally live.
[01:47:21 CEST] <Svenska> exactly, so this is a live stream
[01:47:37 CEST] <Svenska> if you want to keep the old parts, you just add more (fills up your SD card, though)
[01:47:38 CEST] <opdop> ok with or without playlist
[01:47:54 CEST] <Svenska> *just add more TS segments
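[Editor's note: Svenska's ramdisk idea is a one-line mount. A minimal sketch, where the mount point /srv/hls and the 64m size are assumed values; the command is only echoed so it can be reviewed before running it as root.]

```shell
# Write the playlist and .ts segments to tmpfs so nothing hits the
# SD card. /srv/hls and size=64m are placeholder values.
MNT=/srv/hls
CMD="mount -t tmpfs -o size=64m tmpfs $MNT"
echo "$CMD"   # run as root after 'mkdir -p /srv/hls'
```

Point the ffmpeg output (the .m3u8 path) at that directory and the webserver's document root at the same place.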
[01:48:34 CEST] <opdop> is there a way to avoid the huge delay? something like making every .ts 5 seconds, so the stream would be updated every 5 seconds?
[01:48:52 CEST] <Svenska> last parameter of the command?
[01:49:14 CEST] <Svenska> -hls_time 5
[01:49:33 CEST] <opdop> I tried with -hls_time 1, but I get ~7-second .ts files
[01:49:45 CEST] <Svenska> -hls_list_size 4
[01:49:56 CEST] <Svenska> because you have 4 segments of N seconds each
[01:50:04 CEST] <Svenska> try list size 2, time 2
[01:50:13 CEST] <opdop> ok, let me try, thanks
[01:50:14 CEST] <Svenska> should be 4s (plus client-side buffering)
[01:50:14 CEST] <DHE> also transcoding can introduce lag if you're doing it. for example x264 has a preset called zerolatency
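[Editor's note: the HLS muxer can only cut segments at keyframes, which is why -hls_time 1 above still produced ~7-second .ts files with the encoder's default GOP. A lower-latency sketch combining the suggestions in this thread; the paths and the 25 fps assumption are placeholders, and the command is echoed for inspection.]

```shell
# Lower-latency variant: x264 with -tune zerolatency, 2 s segments, a
# 2-segment playlist, and -g 50 (a 2 s GOP at an assumed 25 fps) so
# the muxer can actually cut every 2 s. Paths are placeholders.
CMD="ffmpeg -f v4l2 -framerate 25 -i /dev/video0 \
-c:v libx264 -preset veryfast -tune zerolatency -g 50 \
-f hls -hls_time 2 -hls_list_size 2 -hls_flags delete_segments \
/srv/hls/livefeed.m3u8"

echo "$CMD"
```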
[01:50:42 CEST] <opdop> DHE, I'm using the command as you gave it to me
[01:50:58 CEST] <DHE> I never gave you any codec options, which probably means it's going to mpeg2 ?
[01:51:09 CEST] <opdop> I suppose
[01:52:56 CEST] <opdop> yep, ffprobe confirms that, mpeg2
[01:54:10 CEST] <opdop> Svenska: ffprobe says 8.35 seconds with those settings
[01:54:19 CEST] <DHE> if the input is H264 then '-c:v copy' might be a good idea
[01:54:34 CEST] <opdop> the input is my webcam
[01:54:49 CEST] <DHE> what if you ffprobe the webcam then?
[01:55:31 CEST] <opdop> v4l2
[01:56:13 CEST] <opdop> aka video4linux2
[01:56:26 CEST] <Svenska> what video format does it produce? MJPEG or so?
[01:57:18 CEST] <opdop> the .ts files are mpeg2 without codec options
[01:57:38 CEST] <Svenska> if ffmpeg produces mpeg2, then the ts files will be mpeg2
[01:57:39 CEST] <opdop> I think that's because it's the default codec used by ffmpeg
[01:57:44 CEST] <Svenska> what happens if you "copy"?
[01:57:50 CEST] <opdop> let me try
[01:59:06 CEST] <DHE> is v4l2 a codec?
[01:59:20 CEST] <Svenska> no
[02:00:29 CEST] <opdop> Svenska: ffprobe still says mpeg2 for those .ts files, but it also gives an error saying unsupported codec
[02:01:26 CEST] <opdop> ok, wait a second guys
[02:01:31 CEST] <opdop> checked better...
[02:01:53 CEST] <opdop> without -c copy, the video codec is h264
[02:02:22 CEST] <opdop> with -c copy for video I get unsupported codec
[02:02:40 CEST] <Svenska> then it's probably mjpeg or some raw yuv, depends on the webcam
[02:03:07 CEST] <opdop> I think a raw yuv
[02:03:09 CEST] <Svenska> then try to make the encoder introduce less latency
[02:03:30 CEST] <opdop> how?
[02:03:45 CEST] <Svenska> <DHE> also transcoding can introduce lag if you're doing it. for example x264 has a preset called zerolatency
[02:04:16 CEST] <opdop> should I use -tune zerolatency?
[02:06:19 CEST] <ps-auxw> If you are building something like OnLive! or a video conference system, maybe. Otherwise probably not.
[02:06:33 CEST] <Svenska> why not?
[02:06:35 CEST] <ps-auxw> s/!//
[02:07:14 CEST] <DHE> realtime has negative impacts on quality and potentially performance
[02:07:35 CEST] <ps-auxw> Svenska: Because the settings that tune sets will lower quality significantly, so you should only use it if you have to. (0 b-frames, no lookahead...)
[02:08:04 CEST] <ps-auxw> You can check the exact settings it sets in x264 --fullhelp output.
[02:08:10 CEST] <Svenska> ah, okay
[02:09:08 CEST] <DHE> again, this is h264 output only
[02:09:35 CEST] <opdop> yep
[02:10:32 CEST] <opdop> btw I'm glad I learned this new stuff about ffmpeg today :)
[02:10:42 CEST] <opdop> I didn't know all those HLS options
[02:11:15 CEST] <opdop> I will play around with that to see if I can get a shorter delay
[02:11:23 CEST] <opdop> thank you guys for your help
[13:41:08 CEST] <KimiNewt> Hello, I'm having trouble capturing video from a Dazzle DVC 100. It mostly works, but the screen flickers every second or so
[13:42:07 CEST] <KimiNewt> I run "ffplay -s 720x480 -f v4l2 /dev/video1" (it works fine on OBS)
[13:42:47 CEST] <KimiNewt> I'm not sure if I'd define it as "fidgets" or "flickering", but it's weird.
[13:44:17 CEST] <KimiNewt> Full output: http://pastebin.com/TWYuJNyv
[13:49:59 CEST] <tomred> If I run `ffprobe -select_streams v:0 -show_entries stream=duration -of flat somefile.mov`, I get "streams.stream.0.duration="12.880000"". Can anyone tell me the spec for this duration? Is it "SS:Centiseconds"?
[13:58:56 CEST] <AndrewMock> I don't want to touch my input h264 at all except for cropping it. Is it possible to do a stream-copy-but-still-crop?
[13:59:50 CEST] <AndrewMock> or will ffmpeg be limited to just re-encoding h264 to h264 in a full round trip?
[14:00:49 CEST] <relaxed> AndrewMock: it's not possible
[14:01:19 CEST] <AndrewMock> :( okay well thank you
[14:01:45 CEST] <relaxed> you can do it during playback
[14:02:10 CEST] <BtbN> well... in theory it is possible
[14:02:19 CEST] <BtbN> But it's a really bad idea
[14:02:26 CEST] <AndrewMock> why?
[14:02:36 CEST] <AndrewMock> it's just weird or bad?
[14:02:48 CEST] <BtbN> Because at least the ffmpeg decoder doesn't implement top/left cropping via SPS params
[14:03:02 CEST] <BtbN> It calls it braindead-cropping and refuses to do it.
[14:03:24 CEST] <AndrewMock> so ffmpeg wasn't programmed for that?
[14:03:30 CEST] <BtbN> Also, that kind of cropping will not reduce the data rate
[14:03:31 CEST] <KimiNewt> Um, regarding my problem, I'm on ffmpeg 2.4.3. I looked at the PPA and it recommends not to use it on ubuntu 14.10
[14:03:39 CEST] <BtbN> all the data will still be transmitted, and only discarded when decoding
[14:03:41 CEST] <KimiNewt> I thought maybe upgrading would solve my problem
[14:03:41 CEST] <AndrewMock> I'm just a OCD freak.
[14:04:21 CEST] <BtbN> I also don't think it is implemented to set SPS params like that. You'd have to hex edit every IDR frame
[14:04:35 CEST] <AndrewMock> interesting... so I could set the .mp4 display resolution lower than the video stream haha
[14:04:45 CEST] <BtbN> nothing to do with mp4
[14:04:58 CEST] <BtbN> there are cropping parameters in the SPS/PPS
[14:05:10 CEST] <BtbN> because h264 can only encode multiples of 8
[14:05:18 CEST] <BtbN> so 1920x1080 wouldn't be possible
[14:05:24 CEST] <BtbN> (Or was it 16?)
[14:05:55 CEST] <BtbN> yeah, it's 16
[14:06:08 CEST] <AndrewMock> wow, codecs are getting crazy now
[14:06:10 CEST] <BtbN> so every 1080p video actually is 1088p with 8 pixels cropped off
[14:06:16 CEST] <AndrewMock> hyped for Ultra HD Bluray
[14:06:26 CEST] <AndrewMock> huh
[14:06:57 CEST] <KimiNewt> um so anyone have any ideas about my issue?
[14:07:42 CEST] <AndrewMock> what issue?
[14:08:19 CEST] <KimiNewt> Having screen flickering/fidgeting when trying to capture my dazzle dvc 100 vid via ffplay
[14:08:29 CEST] <KimiNewt> Output: http://pastebin.com/TWYuJNyv
[14:08:59 CEST] <KimiNewt> It works fine when I run it with OBS (no flickering)
[14:09:02 CEST] <relaxed> "Option -s is deprecated, use -video_size"
[14:09:04 CEST] <KimiNewt> Or VLC for that matter
[14:09:06 CEST] <DHE> yes it's 16 for H264. and x264 won't accept anything that isn't a multiple of 2 even with codec-level cropping
[14:09:11 CEST] <KimiNewt> Yeah even without that relaxed
[14:09:17 CEST] <KimiNewt> Still happens
[14:09:22 CEST] <relaxed> you might also want to set -framerate
[14:09:29 CEST] <KimiNewt> I've tried that, didn't work
[14:09:35 CEST] <BtbN> DHE, even with yuv444?
[14:10:18 CEST] <relaxed> KimiNewt: remove yadif
[14:10:26 CEST] <KimiNewt> Tried that, still the same
[14:10:30 CEST] <AndrewMock> this isn't in A VM, right?
[14:10:38 CEST] <KimiNewt> Right
[14:10:48 CEST] <KimiNewt> Ubuntu 14.10, physical machine..
[14:12:20 CEST] <relaxed> try adding -use_libv4l2 1
[14:13:08 CEST] <relaxed> if that doesn't work, compile the latest version
[14:13:32 CEST] <KimiNewt> Yeah doesn't work, libavdevice is not built with libv4l2 support
[14:13:33 CEST] <KimiNewt> Alright..
[14:14:35 CEST] <KimiNewt> Latest version of ffmpeg
[14:14:51 CEST] <KimiNewt> Question mark was supposed to be there
[14:17:55 CEST] <KimiNewt> compiling..
[14:28:11 CEST] <KimiNewt> jesus this takes longer than compiling the kernel
[14:30:20 CEST] <ddass> Hey everybody! My question is about watermarking mp4 videos. I have a lot of them and they all need the watermark in the bottom-right corner. How should I write the .bat file so that it burns the watermark onto all the videos, ignoring all errors that might come up? I.e., what is the most generic command to simply burn the .png onto all videos? Currently I have this: for %%a in ("*.mp4") do ffmpeg -i "%%a" -i blue.png -filter_complex "overlay=x=(main_w-overlay_w)/1:y=(main_h-overlay_h)/1" "newfiles\%%~na.mp4"
[14:30:23 CEST] <ddass> pause
[14:34:12 CEST] <ddass> Currently I get a "Past duration xxx too large" error.
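[Editor's note: the /1 divisors in ddass's overlay expression are no-ops, so the bottom-right placement can be written directly. A shell rendering of the same loop for comparison; the 10 px margin is an assumption (drop the -10s for a flush corner), and the ffmpeg lines are echoed so the loop can be inspected dry first.]

```shell
# Burn blue.png into the bottom-right corner of every .mp4.
# The 10 px margin is an assumed aesthetic choice.
FILTER="overlay=main_w-overlay_w-10:main_h-overlay_h-10"
mkdir -p newfiles
for f in *.mp4; do
    echo ffmpeg -i "$f" -i blue.png -filter_complex "$FILTER" "newfiles/$f"
done   # remove 'echo' to actually run ffmpeg
```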
[14:38:33 CEST] <AndrewMock> "By default when using -ac 2 the LFE channel is omitted."
[14:38:38 CEST] <AndrewMock> uhhh why
[14:39:06 CEST] <durandal_1707> 2 is stereo
[14:39:28 CEST] <AndrewMock> https://trac.ffmpeg.org/wiki/AudioChannelManipulation#a5.1stereo
[14:39:42 CEST] <AndrewMock> the LFE is not included as input into the encoder
[14:40:36 CEST] <ddass> Here ? http://pastie.org/10298366
[14:43:48 CEST] <AndrewMock> i guess it would make sense for a52 spec sheets to not say that LFE should be downmixed to stereo if the LFE was mono
[14:44:13 CEST] <AndrewMock> i will have to take the LFE and add it to both L and R as well as the rest of the panning
[14:45:44 CEST] <AndrewMock> oh  man it isn't just a simple pan -.- SCREW YOU ATSC
[14:49:54 CEST] <AndrewMock> yeah how do i *actually* downmix from 7.1 to 2.0?
[14:50:36 CEST] <DHE> the easy automatic way is "-ac 2"
[14:50:46 CEST] <AndrewMock> but that chops off the sub channel
[14:50:54 CEST] <AndrewMock> the Low Frequency Effects
[14:51:12 CEST] <DHE> then you'll need to build an audio filter chain
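[Editor's note: such a filter chain can be built with ffmpeg's pan filter, which takes explicit per-channel gain expressions. A sketch for 7.1 → stereo that folds the LFE into both output channels; the 0.30 coefficients and file names are illustrative assumptions, not an A/52-specified downmix, and the command is echoed for inspection.]

```shell
# Explicit 7.1 -> stereo downmix keeping the LFE, via the pan filter.
# Channel names (FL, FR, FC, LFE, BL, BR, SL, SR) are ffmpeg's 7.1
# layout; the 0.30 gains are illustrative, tune to taste.
FILTER='pan=stereo|FL=FC+0.30*FL+0.30*SL+0.30*BL+0.30*LFE|FR=FC+0.30*FR+0.30*SR+0.30*BR+0.30*LFE'
echo ffmpeg -i input.mkv -af "$FILTER" -c:v copy output.mkv
```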
[14:57:18 CEST] <KimiNewt> It's been compiling for like an hour, I swear
[14:57:35 CEST] <KimiNewt> well, thirty minutes apparently :P
[14:57:45 CEST] <durandal_1707> ddass: does output looks ok?
[14:58:02 CEST] <durandal_1707> KimiNewt: slow CPU?
[14:58:04 CEST] <BtbN> Even my RPi2 builds ffmpeg in like 10 minutes
[14:58:15 CEST] <KimiNewt> It's i7..
[14:58:18 CEST] <KimiNewt> I dunno why it's so slow
[14:59:09 CEST] <BtbN> because you're only building on one core?
[14:59:44 CEST] <iive> `make -j 8` to build with 8 cores.
[15:00:10 CEST] <KimiNewt> deeeeeeerp
[15:00:49 CEST] <KimiNewt> what the heck does it use zmq for
[15:01:16 CEST] <ddass> durandal_1707: output seems ok... I thought it was a more important error, so I figured I should ask you guys.
[15:01:44 CEST] <AndrewMock> eac3to input.ac3 output.ac3 -down2 -mixlfe
[15:01:50 CEST] <AndrewMock> seems way easier haha
[15:01:58 CEST] <AndrewMock> and still proper a52 compliance
[15:02:02 CEST] <AndrewMock> sorta
[15:10:14 CEST] <KimiNewt> woo, compilation complete
[15:18:54 CEST] <KimiNewt> Okay, I've compiled and installed the latest version and I still get the same problem
[15:19:08 CEST] <KimiNewt> and the error saying "libavdevice is not build with libv4l2 support", though I'm not sure if that's related
[15:21:46 CEST] <KimiNewt> wait it didn't actually compile a new ffplay, just ffmpeg
[15:31:51 CEST] <relaxed> KimiNewt: you need sdl-dev installed
[15:33:23 CEST] <KimiNewt> yeah installed it already
[15:42:01 CEST] <Parsec300> Hi people, the documentation on how to compile makes no mention of libxvid or how to compile with it. Is there a good tutorial on how to do this?
[15:44:48 CEST] <DHE> if you have xvid installed, running configure with --enable-libxvid should do it
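[Editor's note: libxvid is one of the GPL-gated components in ffmpeg's configure, so --enable-gpl is needed alongside --enable-libxvid. A minimal sketch, echoed for inspection:]

```shell
# libxvid is GPL-gated in ffmpeg's configure, so enabling it
# requires --enable-gpl as well.
CMD="./configure --enable-gpl --enable-libxvid"
echo "$CMD"   # then: make && make install
```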
[15:49:44 CEST] <KimiNewt> I ran configure with --enable-ffplay and recompiled, yet it still didn't compile ffplay!
[16:03:11 CEST] <KimiNewt> Now I have to run it with a bunch of flags and wait for it to fail a thousand times for different libraries.. fun
[16:14:03 CEST] <AndrewMock> a02 The libDcaDec DTS Decoder reported the error "CRC check failed" while decoding.
[16:14:20 CEST] <AndrewMock> https://paste.debian.net/plainh/abeb8c5a
[16:18:56 CEST] <KimiNewt> okay three hours over this thing is too fucking much
[16:19:34 CEST] <KimiNewt> probably won't even work, gah
[16:36:11 CEST] <Parsec300> DHE, thanks. Compiled it just now
[16:37:27 CEST] <newtc> What should be the command to compile ffplay? If I can't use the makefile?
[16:38:17 CEST] <newtc> I tried this: http://ffmpeg-users.933282.n4.nabble.com/Howto-compile-ffplay-c-in-seperate-directory-with-my-own-svn-version-of-ffmpeg-td940857.html#a3751830
[16:38:21 CEST] <newtc> but it seems like it's for an older version
[16:42:51 CEST] <noncom> hi!
[16:44:58 CEST] <noncom> could anyone tell me, if I am to use FFmpeg in a non-GPL application (say, a BSD-licensed one), what terms do I have to comply with? For example, should I avoid the x264 codec and such
[16:45:04 CEST] <noncom> is there anywhere such info available?
[16:47:43 CEST] <AstralStorm> noncom: generally LGPL v2.1 or v3
[16:47:59 CEST] <AstralStorm> the LICENSE file is in ffmpeg itself
[16:48:09 CEST] <AstralStorm> some codecs taint ffmpeg to GPL
[16:48:31 CEST] <AstralStorm> others are nonfree - both of those are compile options
[16:49:08 CEST] <noncom> i have read the details on the website.. I have to let the user replace the codecs. if I just pack everything in the jar, that complies with that. and I must not make use of codecs like x264..?
[16:51:49 CEST] <AstralStorm> what do you mean by "provide the user to replace codecs"? FFmpeg does not allow codecs to be added at runtime
[16:52:24 CEST] <AstralStorm> your jar is generally fine, since LGPL is permissive as to use, unless you use one of the nonfree or GPL codecs
[16:52:47 CEST] <AstralStorm> by use I mean compile ffmpeg with the support
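[Editor's note: in configure terms, a default build with neither --enable-gpl nor --enable-nonfree (and therefore no GPL-gated codecs such as libx264) stays LGPL; the built binary reports its license with `ffmpeg -L`. A sketch, echoed for inspection:]

```shell
# An LGPL-only build simply omits --enable-gpl / --enable-nonfree
# (and any GPL-gated codecs such as libx264).
CMD="./configure && make"
echo "$CMD"
echo "ffmpeg -L   # prints the license the binary was built under"
```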
[17:07:50 CEST] <newtc> I've recompiled ffmpeg from the newest build
[17:08:23 CEST] <newtc> with ffplay, and I still get the same problem and the "libavdevice is not build with libv4l2 support." error
[17:09:58 CEST] <AstralStorm> it's not enabled by default, there are multiple configure flags
[17:39:34 CEST] <rgoodwin> Morning folks. Trying to troubleshoot why drawtext filter isn't actually drawing anything :) I compiled with --enable-libfreetype, i'm not doing a vcodec copy, I gave a valid fontfile path. -v debug doesn't seem to indicate any problems. Any thoughts? output is here: http://pastebin.com/3NqSmXQ9
[17:44:01 CEST] <rgoodwin> I can't even tell for sure if it's configuring freetype properly. hmm
[17:50:37 CEST] <rgoodwin> trying to add -libfontconfig but it complains that it can't be found, even though it didn't complain about libfreetype
[18:08:35 CEST] <rgoodwin> oh good grief. is it because i'm specifying multiple -vf options in the command instead of -vf filter1,filter2?
[18:11:30 CEST] <Aristide> Hi !
[18:11:40 CEST] <Aristide> I have a problem with ffmpeg : ffmpeg -f x11grab -r 25 -s 1920x1080 -i :0.0 -vcodec libx264 -threads 4 screencast.avi
[18:11:47 CEST] <Aristide> I get « Unable to find a suitable output format for 'screencast.avi' »
[18:12:11 CEST] <Aristide> I have tried lots of formats but none work :(
[18:12:35 CEST] <Aristide> I Use OpenSUSE
[18:17:21 CEST] <rgoodwin> yes rgoodwin, yes that was it ;) still can't make fontconfig work but don't need it now.
[18:20:42 CEST] <kritzikratzi> Aristide: try .mp4 extension instead of .avi
[18:20:55 CEST] <kritzikratzi> i dont think its possible to have h264 inside avi
[18:21:51 CEST] <Aristide> kritzikratzi: I have already tried that :) But I may have a solution
[18:21:56 CEST] <Aristide> I use « Multimedia:libs » repository
[18:22:09 CEST] <Aristide> I have deleted this repo (Factory on openSUSE) and will try without it ^^
[18:22:16 CEST] <Aristide> Time to reboot
[18:25:13 CEST] <Aristide> Time to try
[18:25:53 CEST] <Aristide> Same problem :"
[18:25:57 CEST] <noncom> AstralStorm: thanks! I am getting into it. see, I am employed by JavaCV, we are to determine the configuration of JavaCV which would allow us to use it (with FFmpeg features for video playback) in BSD-licensed games, made on JMonkeyEngine (a Java OpenGL engine)
[18:26:38 CEST] <noncom> we are aiming to create a version of JavaCV which would be legally usable in this respect
[18:27:08 CEST] <noncom> so that the end users (game programmers of JMonkeyEngine, and other people who use JavaCV) would be able to use the package in their products without a second thought
[18:30:52 CEST] <johnnybgood123> I want to split a video into parts, but not equal parts.
[18:30:53 CEST] <johnnybgood123> I want each clip to be between 15 and 50 seconds. But I want each clip length to be random (within that 15 to 50 second range).
[18:30:53 CEST] <johnnybgood123> How would I do this with ffmpeg in mac terminal?
[18:30:53 CEST] <johnnybgood123> Below is the command I was using for splitting a video into equal parts
[18:30:53 CEST] <johnnybgood123> http://pastebin.com/CkahMWXP
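[Editor's note: one way to sketch johnnybgood123's random-length split is to compute a cut plan first and then feed each (start, length) pair to ffmpeg with -ss/-t. The plan_clips helper name and awk-based randomness are assumptions; the ffmpeg step is shown commented so the plan can be inspected first.]

```shell
# Emit "start length" pairs covering TOTAL seconds, each clip 15-50 s
# (the final clip may be shorter). awk's rand() is seeded per clip so
# plain POSIX sh suffices; plan_clips is a hypothetical helper name.
plan_clips() {
    total=$1
    start=0
    while [ "$start" -lt "$total" ]; do
        len=$(awk -v seed="$start" 'BEGIN { srand(seed); print 15 + int(rand() * 36) }')
        [ $((start + len)) -gt "$total" ] && len=$((total - start))
        echo "$start $len"
        start=$((start + len))
    done
}

plan_clips 120
# Then, per pair (stream copy, so cuts snap to keyframes):
#   plan_clips "$total" | while read -r s l; do
#       ffmpeg -ss "$s" -t "$l" -i input.mp4 -c copy "part_$s.mp4"
#   done
```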
[18:31:20 CEST] <Aristide> Sorry but with mp4 i have same problem
[18:31:38 CEST] <Fjorgynn_> aha
[18:47:27 CEST] <AndrewMock> Is there a way to do a CRC repair on my DTS-HD track?
[20:13:33 CEST] <stakewinner00> is there some high-level API for converting audio from one format to ogg?
[21:47:47 CEST] <dericed> is the %06d format for incrementing integers in image sequences an ffmpeg protocol or adopted from somewhere else?
[21:49:19 CEST] <durandal_1707> that can hardly be called a protocol
[21:49:35 CEST] <dericed> ha, a 'tradition', 'pattern', whatever
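[Editor's note: the pattern is C's printf-style conversion syntax, which ffmpeg's image2 muxer/demuxer adopted (it supports the zero-padded %0Nd subset). The shell's printf expands it the same way, which makes for an easy demonstration:]

```shell
# %06d is printf-style: a decimal integer zero-padded to width 6.
# ffmpeg's image2 muxer expands it with the frame number.
echo ffmpeg -i input.mp4 'frame_%06d.png'
printf 'frame_%06d.png\n' 42   # -> frame_000042.png
```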
[00:00:00 CEST] --- Sat Jul 18 2015
