[Ffmpeg-devel-irc] ffmpeg.log.20150624

burek burek021 at gmail.com
Thu Jun 25 02:05:01 CEST 2015


[10:29:36 CEST] <zipped> Hi there guys ,
[10:31:21 CEST] <zipped> I'm trying to use ffmpeg encoder "tscc2" which appears on the supported encoders list . However when I run ffmpeg -f x11grab -s 1920x1080 -r 15 -i :0.0 -f pulse -ac 2 -i default -qscale 0 -vcodec tscc2 Videos/test.avi I get :
[10:31:21 CEST] <zipped>  Please use -q:a or -q:v, -qscale is ambiguous
[10:31:21 CEST] <zipped> Unknown encoder 'tscc2'
[10:31:21 CEST] <zipped>  
[10:31:25 CEST] <zipped> any Ideas ?
[10:34:45 CEST] <zipped> also when I change the resolution to 1280 x 720 it will not capture my entire screen (my screen resolution is 1920x1080)
[10:44:16 CEST] <mico> Hi ! I have some issues with my convert application. I use ffmpeg on android to convert 3gp -> mp4
[10:44:51 CEST] <mico> I use : String args = " ffmpeg " + "-i " + input + " -sameq -ab 64k -ar 44100 " + output; Runtime r = Runtime.getRuntime(); // we start a shell so we can use ffmpeg -- Process p = r.exec("/system/bin/sh " + "-c" + args);
[10:46:37 CEST] <mico> But it returns a permission denied error, and I don't know how to get around that
[10:47:27 CEST] <mico> Someone has an idea to help me please ?
[10:59:09 CEST] <zipped> I'm trying to use ffmpeg encoder "tscc2" which appears on the supported encoders list . However when I run ffmpeg -f x11grab -s 1920x1080 -r 15 -i :0.0 -f pulse -ac 2 -i default -qscale 0 -vcodec tscc2 Videos/test.avi I get :
[10:59:13 CEST] <zipped> Unknown encoder 'tscc2'
[10:59:15 CEST] <zipped> any Ideas ?
[11:14:28 CEST] <durandal_170> zipped: tscc2 is only decoder
[11:16:39 CEST] <zipped> durandal_170: ah
[11:17:01 CEST] <zipped> durandal_170: so how can I see the list of available encoders ?
[11:18:23 CEST] <durandal_170> zipped: ffmpeg -encoders
[11:21:29 CEST] <zipped> thanks durandal_170 !
[11:21:59 CEST] <zipped> durandal_170:  would you happen to know also how come when I change my resolution to 1280 x 720 it will not capture my entire screen (my screen resolution is 1920x1080) ?
[11:33:16 CEST] <Mavrik> zipped, are you sure you're not changing the capture resolution instead of video resolution?
[11:34:24 CEST] <zipped> Mavrik: no :) ffmpeg -f x11grab -s 1280x720 -r 15 -i :0.0 -f pulse -ac 2 -i default -qscale 0 -vcodec libx264 . -s is capture resolution ?
[11:36:05 CEST] <Mavrik> you're telling x11grab to grab a 1280x720 square
[11:36:15 CEST] <zipped> ah
[11:36:28 CEST] <Mavrik> you're also using command-line parameters mostly deprecated -_-
[11:36:34 CEST] <Mavrik> do -s "1920x1080"
[11:36:49 CEST] <Mavrik> and add "-vf scale=-2:720" to add a scale video filter
[11:37:50 CEST] <zipped> instead of the "-qscale 0" ?
[11:48:14 CEST] <Mavrik> No.
[11:48:39 CEST] <Mavrik> qscale = video quality (should probably use newer -q:v)
[11:48:57 CEST] <Mavrik> scale filter - scale each video frame to set resolution
[11:50:35 CEST] <zipped> Mavrik: what types of qualities are there? I mean what can I pass to -q:v
[11:51:05 CEST] <Mavrik> zipped, what are you trying to achieve? :)
[11:51:35 CEST] <Mavrik> zipped, I think this will help: https://trac.ffmpeg.org/wiki/Encode/H.264
[11:52:06 CEST] <Mavrik> "-qscale 0" has special meaning for x264 - lossless
[11:53:21 CEST] <zipped> Mavrik: hehe , just trying to capture a video with audio , with resolution of 1280x768 ,  format AVI , and with a good quality , but not something that will consume alot of space
[12:34:06 CEST] <Bigsby> msg NickServ indentify hotdog
[12:34:14 CEST] <Bigsby> urm...
[12:34:16 CEST] <Bigsby> oops
[12:47:37 CEST] <nick123> Hi, Does any one know how to fix this error "[mov,mp4,m4a,3gp,3g2,mj2 ] overread end of atom 'color' by 1 bytes."
[12:47:37 CEST] <jonny_> Hello guys. Is there any way to use four identical raw video files (300 frames CC: I420) and merge them into one 8k raw file? I know there is a method to use the overlay command for this, but I can't seem to get it working with .raw files.
[12:59:35 CEST] <jonny_> this is what i tried so far: http://pastebin.com/SkLSX9n6
[13:07:11 CEST] <jonny_> Hello guys. Is there any way to use four identical raw video files (300 frames CC: I420) and merge them into one 8k raw file? I know there is a method to use the overlay command for this, but I can't seem to get it working with .raw files.
[13:26:00 CEST] Last message repeated 1 time(s).
[13:29:18 CEST] <claz> is there any option for cutting a frame every X milliseconds?
[13:37:50 CEST] <durandal_170> jonny_: paste non working command
[13:39:04 CEST] <claz> nvm found it
[13:39:23 CEST] <jonny_> @durandal_170 ffmpeg-20150407-git-235589e-win64-static\bin>ffmpeg -y -i 1.raw -i 2.raw -i 3.raw -i 4.raw -filter_complex "[0:0]pad=iw*2:ih*2[a];[1:0]null[b];[2:0]null[c];[3:0]null[d];[a][b]overlay=w[x];[x][c]overlay=0:h[y];[y][d]overlay=w:h" o5.raw
[13:39:43 CEST] <jonny_> http://pastebin.com/SkLSX9n6
[13:39:46 CEST] <jonny_> output of the console
[13:46:28 CEST] <durandal_170> you need to tell ffmpeg it is raw video. Pixel format and size
[13:56:06 CEST] <jonny_> so add -rawvideo and -i420?
[13:56:13 CEST] <jonny_> or what is the command for pixel format?
[13:56:39 CEST] <jonny_> *-vcodec rawvideo
[13:56:40 CEST] <durandal_170> -pix_fmt
[13:57:05 CEST] <jonny_> -pix_fmt i420 -s 3840x2160 -vcodec rawvideo ?
[14:00:58 CEST] <jonny_> [image2 @ 0000000002e76800] Format image2 detected only with low score of 5, misdetection possible!
[14:04:04 CEST] <relaxed> jonny_: something like, ffmpeg -f rawvideo -pixel_format yuv420p -framerate 25 -video_size 3840x2160 -i input.raw ...
[14:07:48 CEST] <jonny_> ffmpeg -y -vcodec rawvideo -framerate 60 -s 3840x2160 -pix_fmt yuv420p -i 1.raw -vcodec rawvideo -framerate 60 -s 3840x2160 -pix_fmt yuv420p -i 2.raw -vcodec rawvideo -framerate 60 -s 3840x2160 -pix_fmt yuv420p -i 3.raw -vcodec rawvideo -framerate 60 -s 3840x2160 -pix_fmt yuv420p -i 4.raw -filter_complex "[0:0]pad=iw*2:ih*2[a];[1:0]null[b];[2:0]null[c];[3:0]null[d];[a][b]overlay=w[x];[x][c]overlay=0:h[y];[y][d]overlay=w:h" o5.raw
[14:08:18 CEST] <relaxed> -f rawvideo
[14:17:07 CEST] <jonny_> hey again. my browser crashed while trying to run the command line
[14:18:03 CEST] <jonny_> this is my output atm: http://pastebin.com/DcebsHK6
[14:18:10 CEST] <jonny_> there seems to be an error with the output
[14:18:42 CEST] <relaxed> you need -f rawvideo, it's trying to use the image2 demuxer
[14:20:43 CEST] <jonny_> instead of -y?
[14:21:43 CEST] <relaxed> no
[14:23:01 CEST] <jonny_> so where to place it?
[14:23:11 CEST] <jonny_> infront of every input file or just at the beginning?
[14:23:29 CEST] <relaxed> the former
[14:25:45 CEST] <jonny_> like this:
[14:25:46 CEST] <jonny_> ffmpeg -y -f rawvideo -vcodec rawvideo -framerate 60 -s 3840x2160 -pix_fmt yuv420p -i 1.raw -f rawvideo -vcodec rawvideo -framerate 60 -s 3840x2160 -pix_fmt yuv420p -i 2.raw -f rawvideo -vcodec rawvideo -framerate 60 -s 3840x2160 -pix_fmt yuv420p -i 3.raw -f rawvideo -vcodec rawvideo -framerate 60 -s 3840x2160 -pix_fmt yuv420p -i 4.raw -filter_complex "[0:0]pad=iw*2:ih*2[a];[1:0]null[b];[2:0]null[c];[3:0]null[d];[a][b]overlay=w[x];[x][c]overlay=0:h[y];[y][d]overlay=w:h" o5.raw
[14:26:04 CEST] <jonny_> running this makes my pc useless for several minutes so i gotta be sure
[14:26:53 CEST] <relaxed> yes
[14:28:23 CEST] <jonny_> [NULL @ 0000000002cd54e0] Unable to find a suitable output format for 'output.ra w' output.raw: Invalid argument
[14:28:29 CEST] <jonny_> ah nvm my b
[14:29:03 CEST] <jonny_> or no. not working
[14:29:22 CEST] <jonny_> there is no space in output.raw its just from copying
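For reference, a sketch of the whole command with the format forced on every input and on the output as well (ffmpeg cannot guess a muxer from a .raw extension, which is what the "Unable to find a suitable output format" error means; the frame rate, size and filtergraph are simply the ones used in the attempts above):

    ffmpeg -y \
        -f rawvideo -pix_fmt yuv420p -video_size 3840x2160 -framerate 60 -i 1.raw \
        -f rawvideo -pix_fmt yuv420p -video_size 3840x2160 -framerate 60 -i 2.raw \
        -f rawvideo -pix_fmt yuv420p -video_size 3840x2160 -framerate 60 -i 3.raw \
        -f rawvideo -pix_fmt yuv420p -video_size 3840x2160 -framerate 60 -i 4.raw \
        -filter_complex "[0:0]pad=iw*2:ih*2[a];[1:0]null[b];[2:0]null[c];[3:0]null[d];[a][b]overlay=w[x];[x][c]overlay=0:h[y];[y][d]overlay=w:h" \
        -f rawvideo output.raw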
[14:52:33 CEST] <hrw> hi
[15:27:32 CEST] <hypfer> are there hardware h264 encoders that work with ffmpeg?
[15:30:18 CEST] <BtbN> qsv, nvenc
[15:32:03 CEST] <DHE> I have an external USB capture box. It outputs mpegts with H264. makes life pretty easy actually
[16:11:38 CEST] <bertieb> Good day all, I have a niche question or two, but first a sanity check: should it be possible using ffmpeg to create a video file with codec parameters to match an already-extant clip, with the intention of passing those to the concat demuxer and expecting a sane output?
[16:13:08 CEST] <bertieb> Or, asked from the other side: I am trying to produce videos to be concat demuxed together, but they are currently giving audiovisual artifacts at the joins; if I match up the codec parameters, should the joins be artifact-free?
[16:13:48 CEST] <bertieb> I can give more info and context if necessary. Thanks in advance.
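For context, the concat demuxer reads a list file and stream-copies the pieces, and it only gives clean joins when every listed file has the same streams with identical codec parameters (codec, resolution, pixel format, time bases). A minimal sketch of the usual invocation, with hypothetical file names:

    $ cat list.txt
    file 'clip1.mp4'
    file 'clip2.mp4'
    $ ffmpeg -f concat -i list.txt -c copy joined.mp4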
[16:22:28 CEST] <hypfer> BtbN: not GPU. like real hardware h264 encoders
[17:06:07 CEST] <bertieb> Further to my last, on the assumption that I can proceed- is there a way to set codec_time_base (tbc?) for encoded output?
[18:19:03 CEST] <kreb> can anyone tell me where ffmpeg installs in OS X (10.10)?
[18:19:15 CEST] <kreb> Audacity can't find the .dylib file
[18:19:41 CEST] <kreb> and it doesn't show up in installed libraries in Audacity prefs
[18:40:27 CEST] <ldiamond> I'm converting .tga image sequences to a video. The process takes a while and if the footage is long, the tga takes insane amount of disk space
[18:40:56 CEST] <ldiamond> Is there a way to launch ffmpeg to consume all the .tga as they are created and wait for new files?
[18:41:08 CEST] <ldiamond> and also delete them
[18:41:17 CEST] <JEEBsv> no
[18:41:41 CEST] <JEEBsv> you would have to make your own app with the libraries that does that in addition to encoding
[18:41:44 CEST] <ldiamond> Is it easy to launch ffmpeg to consume all .tga and append movie files?
[18:42:17 CEST] <JEEBsv> or you just use raw pictures or so and pipe them to ffmpeg
[18:42:20 CEST] <ldiamond> The problem I see with that is the sound, but I guess I could tell ffmpeg to ignore the .wav and just add it to the video later
[18:42:50 CEST] <ldiamond> The process generating the files can't be piped
[18:43:07 CEST] <ldiamond> unfortunately
[18:43:13 CEST] <JEEBsv> or you make your generating app use the libraries to encode things
[18:43:25 CEST] <ldiamond> I have no control over the generating app
[18:43:31 CEST] <ldiamond> it's Counter Strike Go
[18:43:44 CEST] <JEEBsv> well you are capturing it in some way, no?
[18:43:54 CEST] <JEEBsv> usually one uses screen capture software for it
[18:44:13 CEST] <ldiamond> cs go has a 'startmovie' command which generates .tga and .wav from demos
[18:44:24 CEST] <ldiamond> it just renders the demo live then writes an image on disk
[18:45:03 CEST] <ldiamond> I guess I could write a simple python script that batches a few hundred images into an ffmpeg command that generates a short video with a lossless codec, then append all videos together
[18:45:15 CEST] <JEEBsv> yeah, if that is an alternative
[18:45:18 CEST] <ldiamond> Surely a lossless codec will take less space
[18:45:35 CEST] <ldiamond> later on I can convert to x264 and work with it
[18:46:09 CEST] <JEEBsv> ut video would be my recommendation if the use case would be video editing
[18:46:19 CEST] <JEEBsv> it has plugins for vfw/ds/mf and QT
[18:46:33 CEST] <JEEBsv> thus making it supported in all video editing things that can use one of those
[18:47:15 CEST] <ldiamond> it has good Linux support? (first result, from a shady source, says "lossless video codec for Windows")
[18:47:43 CEST] <JEEBsv> well it is supported by anything with a post-2012 libavcodec
[18:48:01 CEST] <JEEBsv> kostya did the decoder in 2011, and I did the encoder in 2012
[18:48:14 CEST] <JEEBsv> but with FOSS you indeed have more alternatives
[18:48:53 CEST] <JEEBsv> and it definitely isn't a thing "for windows", as I said the original author made plugins for all of the commonly used media frameworks, except for gstreamer (for obvious reasons)
[18:48:56 CEST] <ldiamond> yea I dont really mind which one I use, as long as it compresses and is lossless.
[18:49:03 CEST] <ldiamond> I'll transcode it to x264 later anyways
[18:49:53 CEST] <JEEBsv> well if you are going to use something FOSS for editing then you don't have to limit your choices, ut video is just the thing I recommend for people who use proprietary video editors
[18:50:01 CEST] <JEEBsv> because of its wide support
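A sketch of what encoding one batch of frames to Ut Video could look like (the sequence pattern, frame rate and segment name are placeholders); the resulting lossless segments could then be joined with the concat demuxer before the final x264 encode:

    ffmpeg -framerate 60 -i batch_0001_%04d.tga -c:v utvideo segment_0001.avi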
[18:51:46 CEST] <ldiamond> I'll be using blender most likely
[18:52:02 CEST] <ldiamond> I use FOSS as much as possible
[18:52:05 CEST] <bertieb> ldiamond: blender as an nle, out of interest?
[18:52:19 CEST] <ldiamond> yea
[18:52:37 CEST] <JEEBsv> yea, it's the least retarded FOSS NLE there is
[18:53:01 CEST] <bertieb> ldiamond: Cool, I've not tried it myself but have it set up for emergencies. Do you have a YT/Twitch channel I could check out btw?
[18:53:01 CEST] <ldiamond> I've read that there's a few, but I also read blender is the best
[18:53:18 CEST] <ldiamond> I dont have it yet, I'm setting up for a youtube channel
[18:53:23 CEST] <ldiamond> then might do twitch
[18:53:35 CEST] <JEEBsv> all of the FOSS things that have so far tried to be a "video editor" seem to have just failed in life
[18:53:38 CEST] <ldiamond> but I still have to figure out how to move the webcam around while streaming with ffmpeg
[18:53:44 CEST] <JEEBsv> but blender seems to have gotten good
[18:53:48 CEST] <ldiamond> I think lightworks is decent
[18:53:50 CEST] <ldiamond> not sure though
[18:54:39 CEST] <ldiamond> but if I'm gonna learn something, might as well be blender.
[18:54:43 CEST] <ldiamond> It's so much more than an NLE
[18:54:43 CEST] <JEEBsv> yup
[18:55:04 CEST] <ldiamond> I could also create 3d footage for intro's and such.
[18:55:29 CEST] <bertieb> I tried lightworks briefly; didn't find it intuitive (though what decent nle is, I guess) and a short clip had an av desync for reasons I couldn't get to.
[18:56:10 CEST] <bertieb> PEBKAC problem for sure, but finding the problem and fixing it was beyond me
[18:56:22 CEST] <ldiamond> tbh, it would be simple to just use ffmpeg and create a cli wrapper
[18:56:34 CEST] <Parsec300> Hi people, I'm trying to get some padding added to my output video, but it keeps giving errors
[18:56:39 CEST] <ldiamond> commands like cut vid_name start end
[18:56:55 CEST] <ldiamond> then use some naming convention and easily concatenate them
[18:57:44 CEST] <Parsec300> ffmpeg -i BH6.mp4 -f avi -c:v libx264 -b:v 800k -g 300 -vf scale=640:-1 -r 30 -c:a libmp3lame -b:a 128k -vf pad="pad=640:480:0:160:black" -map 0:0 -map 0:2 BH6.avi
[18:58:12 CEST] <Parsec300> Keeps saying that negative values are not acceptable
[18:58:16 CEST] <bertieb> ldiamond: Hah! That's what I thought... I'm working on an 'automatic' highlighter at the moment (takes a start time stop time and optional name for cuts). Then you realise any non-transcoped (stream copy) cuts are not frame-exact and the decision becomes "slow and precise" or "fast and close enough probably"
[18:58:45 CEST] <bertieb> *non-transcoded
[18:59:25 CEST] <bertieb> (I have footage produced in h264 and I'm keen to avoid a re-encode so the latter was the one I went with)
[19:00:34 CEST] <bertieb> At the moment I'm trying to set the tbc (codec_time_base) to see if some of the transitions I've done with very short segments can be re-integrated
[19:01:25 CEST] <bertieb> But I since I asked I haven't had a reply about what setting controls tbc output :-/ (I thought it was video_track_timescale but it doesn't appear to be)
[19:01:56 CEST] <bertieb> I'm hanging around in the hope that someone reads the scrollback :P
[19:01:58 CEST] <bertieb> But for simple things, you are quite right
[19:15:04 CEST] <ldiamond> I have a .tga image sequence, recorded at 240 frames per second. I'm trying to convert it to an h264 mkv video @ 60fps. My command is:
[19:15:06 CEST] <ldiamond> ffmpeg -i full_demo_pov%04d.tga -vcodec h264 -r 240 -framerate 60 pov.mkv
[19:15:23 CEST] <c_14> -framerate is an input option
[19:15:29 CEST] <c_14> In this case
[19:15:49 CEST] <ldiamond> The result is a slow motion video, the time slider in vlc is all messed up
[19:15:56 CEST] <ldiamond> So I should set -framerate 240
[19:15:59 CEST] <ldiamond> and -r 60?
[19:16:19 CEST] <c_14> ffmpeg -framerate 60 -i %04d.tga -c:v libx264 -r 60 out.mkv
[19:16:53 CEST] <ldiamond> where do I say that the source is 240fps?
[19:17:07 CEST] <c_14> *ffmpeg -framerate 240
[19:17:13 CEST] Action: c_14 got distracted while typing
[19:17:19 CEST] <ldiamond> Ok
[19:17:32 CEST] <ldiamond> Does the order of the -framerate option matter?
[19:17:47 CEST] <c_14> yes
[19:18:26 CEST] <DHE> options are applied to the input or output specified immediately after it
[19:19:13 CEST] <DHE> ffmpeg -r 240 -i [input]  -r 60 [other-output-options] [output]
[19:19:30 CEST] <ldiamond> ah, so -r and -framerate are the same!
[19:19:48 CEST] <ldiamond> their position is what matters. Good to know
[19:20:40 CEST] <c_14> No, they're not.
[19:20:58 CEST] <c_14> -framerate is an option specific to the image2 muxer/demuxer
[19:21:00 CEST] <c_14> (in this case)
[19:21:12 CEST] <ldiamond> I see
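So the corrected version of the command above, per c_14 and DHE, reads the sequence as 240 fps and encodes 60 fps output (this drops frames to keep real-time speed; keeping -framerate 60 on the input instead would give a slow-motion result):

    ffmpeg -framerate 240 -i full_demo_pov%04d.tga -c:v libx264 -r 60 pov.mkv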
[19:23:09 CEST] <ldiamond> Does anyone know how I could stream my desktop and add a webcam overlay but be able to move the overlay around with ffmpeg?
[19:23:48 CEST] <ldiamond> I was thinking about having one ffmpeg with two input streams and put one over the other, one of the input stream is the desktop, the other is the webcam
[19:24:04 CEST] <ldiamond> when I want to move the webcam around I stop the webcam stream, restart it with a different x/y
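A rough sketch of that two-input idea on Linux (x11grab for the desktop, a video4linux2 device for the webcam, the overlay filter to position it; the device path, sizes and streaming URL are assumptions):

    ffmpeg -f x11grab -video_size 1920x1080 -framerate 30 -i :0.0 \
           -f v4l2 -video_size 320x240 -i /dev/video0 \
           -filter_complex "[0:v][1:v]overlay=main_w-overlay_w-10:main_h-overlay_h-10" \
           -c:v libx264 -preset veryfast -f flv rtmp://example.invalid/live/streamkey

Moving the webcam would indeed mean stopping and restarting with a different overlay position, which is what the OBS suggestion that follows avoids.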
[19:24:20 CEST] <bertieb> ldiamond: Any particular reason not to use OBS which is also FOSS?
[19:24:43 CEST] <ldiamond> No particular reason, I haven't tried it, I looked at a few other and none of them allow to move the webcam live
[19:24:47 CEST] <ldiamond> if obs does it I'll just use that
[19:25:15 CEST] <bertieb> OBS most definitely does, but its output formats might be more restrictive than you wish
[19:25:47 CEST] <ldiamond> we'll see
[19:25:51 CEST] <ldiamond> I'll give it a shot
[19:26:01 CEST] <bertieb> Cool
[19:26:32 CEST] <BtbN> It basically only outputs h264/aac into flv
[19:26:52 CEST] <ldiamond> I guess I can easily change that if it's using ffmpeg
[19:26:58 CEST] <BtbN> It's not.
[19:27:02 CEST] <BtbN> It supports exactly that.
[19:28:23 CEST] <bertieb> It does support other containers, but it uses x264 yeah
[19:28:32 CEST] <BtbN> It supports mp4
[19:28:36 CEST] <BtbN> But that's a terrible idea
[19:28:38 CEST] <BtbN> flv is better
[19:28:59 CEST] <bertieb> BtbN: True, but the multiplatform version supports more than just flv/mp4
[19:29:07 CEST] <ldiamond> well ok, obs is neat enough
[19:29:10 CEST] <BtbN> multiplatform version is far from finished
[19:29:20 CEST] <BtbN> But does use lavc/lavf
[19:29:28 CEST] <ldiamond> first time I hear flash is better than something else.
[19:29:31 CEST] <ldiamond> :p
[19:29:44 CEST] <BtbN> nobody said anything about flash.
[19:30:00 CEST] <ldiamond> isnt flv the flash format
[19:30:10 CEST] <BtbN> It's a very simple video container.
[19:30:20 CEST] <BtbN> and audio, of course
[19:30:32 CEST] <BtbN> mp4 is a very complex and annoying one
[19:30:45 CEST] <ldiamond> yea I guess I can easily catch that stream and put it in another container easily
[19:30:50 CEST] <BtbN> OBS crashes in the middle of a recording? recording up to that point is useless
[19:30:51 CEST] <bertieb> BtbN: Far from finished yes but usable, depending on what you're doing
[19:30:52 CEST] <ldiamond> I tend to use mkv.
[19:31:27 CEST] <bertieb> BtbN: Not for production use though :)
[19:31:58 CEST] <BtbN> I haven't checked on its progress in a while
[19:32:35 CEST] <ldiamond> oh wow.
[19:32:49 CEST] <ldiamond> finished re-encoding that video with the fix you guys provided.
[19:33:08 CEST] <ldiamond> then I go to my console's command history and execute rm pov.mkv instead of vlc pov.mkv
[19:35:18 CEST] <bertieb> ldiamond: sudo apt-get install trash-cli; echo "alias rm=trash" >> ~/.bashrc  ;)
[19:35:53 CEST] <ldiamond> yea that could work :p
[19:36:02 CEST] <bertieb> For next time :P
[19:36:16 CEST] <ldiamond> or there must be an undelete function for btrfs
[19:38:25 CEST] <bertieb> ldiamond: Probably... not sure that there are any that are less work than re-encoding, particularly if it was done since you asked about 25 minutes ago :P
[19:38:51 CEST] <ldiamond> nah I just rm'd it
[19:39:00 CEST] <ldiamond> I re-encoded though, cause it's no big deal
[19:39:09 CEST] <ldiamond> but I could have undeleted right away
[19:39:16 CEST] <ldiamond> I just haven't set it up yet
[19:39:23 CEST] <ldiamond> I might get trash though
[19:40:54 CEST] <ldiamond> ah trash-cli uses python 2.7
[19:41:28 CEST] <ldiamond> oh I already have pip2, nice
[20:29:01 CEST] <azizulhakim_> Is it possible to use ffmpeg in a kernel module? I'm interested in using it in a framebuffer driver
[20:29:52 CEST] <kepstin-laptop> hah ah hah... that's got to be one of the worst ideas I've ever heard ;)
[20:30:16 CEST] <DHE> ffmpeg runs in userspace, but grabbing data from the framebuffer might be possible...
[20:30:25 CEST] <kepstin-laptop> you'd want to have the driver expose the capabilities the hardware can do on its own, and if you need media transcoding or format conversion, do it in userspace.
[20:34:23 CEST] <azizulhakim_> thanks :)
[21:07:28 CEST] <kepstin-laptop> an interesting example of a driver that uses ffmpeg in userspace is that alsa supports outputting dolby digital surround audio (a52) via spdif by doing the audio encoding with ffmpeg.
[21:15:52 CEST] <Parsec300> Hi people, I'm trying to get some padding added to my output video, but it keeps giving errors
[21:16:30 CEST] <Parsec300> Could somebody please shed some light on this?
[21:16:43 CEST] <BtbN> CBR padding?
[21:17:25 CEST] <Parsec300> Adding black space to make the video play in the correct aspect on some player
[21:17:31 CEST] <Fjorgynn> Hello
[21:17:38 CEST] <Parsec300> Like this:
[21:17:43 CEST] <Parsec300> ffmpeg -i BH6.mp4 -f avi -c:v libx264 -b:v 800k -g 300 -vf scale=640:-1 -r 30 -c:a libmp3lame -b:a 128k -vf pad="pad=640:480:0:160:black" -map 0:0 -map 0:2 BH6.avi
[21:30:38 CEST] <llogan> use one filtergraph, not two
[21:30:43 CEST] <llogan> also...
[21:32:49 CEST] <Parsec300> fflogger, http://pastebin.com/pbir2TJa
[21:33:18 CEST] <Parsec300> llogan, one filtergraph? Oh, I get it
[21:33:26 CEST] <llogan> it's nice to have the command and output in the same link
[21:33:40 CEST] <Parsec300> llogan, but you do
[21:34:22 CEST] <Parsec300> But how to combine the two?
[21:34:26 CEST] <llogan> i don't see the command in the console output link, so i have to go back and forth between the IRC and the browser
[21:34:35 CEST] <llogan> inefficient
[21:34:42 CEST] <llogan> </bitchmode>
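On llogan's "one filtergraph" point: when -vf is given twice for the same output, only one of them ends up being used, so the scale and the pad have to be chained with a comma inside a single -vf. A sketch along those lines, reusing Parsec300's own numbers (the exact pad geometry is a guess at this point):

    ffmpeg -i BH6.mp4 -vf "scale=640:-2,pad=640:480:0:(oh-ih)/2" \
           -c:v libx264 -b:v 800k -g 300 -r 30 -c:a libmp3lame -b:a 128k \
           -map 0:0 -map 0:2 BH6.avi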
[21:35:13 CEST] <anoop_r> anybody tried to compile ffmpeg in gcc 5
[21:35:38 CEST] <llogan> i did recently with 5.1.0
[21:35:40 CEST] <Parsec300> Sorry, my bad
[21:35:42 CEST] <Parsec300> http://pastebin.com/wF0cgsT9
[21:36:23 CEST] <anoop_r> was it a success
[21:36:32 CEST] <llogan> yes
[21:36:50 CEST] <anoop_r> any problems with gcc 5
[21:37:15 CEST] <anoop_r> any performance improvements ?
[21:37:25 CEST] <llogan> there may be some issues with gcc itself as per tradition with new releases.
[21:37:53 CEST] <llogan> if there were improvements i haven't noticed, but i did not benchmark
[21:39:09 CEST] <llogan> Parsec300: what do you want to achieve, exactly? what player are you watching it in?
[21:40:54 CEST] <Parsec300> It's for my kids' portable DVD player (NextBase). The screen is widescreen, but it stretches/flattens movies that are wider than that
[21:45:09 CEST] <kepstin-laptop> ah, right, dvd format only supports 4:3 and 16:9 aspect ratios, anything else you'll have to pad.
[21:50:51 CEST] <Parsec300> And I'd like to add some black space to make the aspect correct. I don't want my kids to grow used to a stretched view of the world
[21:51:03 CEST] <Parsec300> People are stretched enough these days ;)
[21:51:04 CEST] <feliwir> is there a way to check if an object passed to av_free is deletable?
[21:51:18 CEST] <feliwir> somehow i pass invalid AVFrame's to it at some point (not sure where)
[21:54:26 CEST] <lowsider> hello people
[22:00:28 CEST] <c_14> Parsec300: -vf pad=w=(iw*16/9):h=ih:x=(ow-iw)/2 <- should pad the sides with black bars until it's 16/9. Analog for 4:3 or for padded height instead of padded width
[22:00:58 CEST] <c_14> wait
[22:01:23 CEST] <c_14> w=(ih*16/9)
[22:05:39 CEST] <tclarke> hi, I've got a multi PES mpeg-ts file with a bunch of video streams and a stream with KLV metadata...is there a way to use the ffmpeg command to extract the raw klv? I tried "ffmpeg -i foo.ts -codec:2 copy -f data foo.klv" and I get "Output file #0 does not contain any stream"
[22:06:14 CEST] <tclarke> tried "ffmpeg -i foo.ts -map 0:2 -codec copy -f data foo.klv" and get Cannot map stream #0:2 - unsupported type
[22:22:12 CEST] <Parsec300> c_14, thank you for your advice. I will certainly give it a try
[22:38:34 CEST] <Parsec300> c_14, and how to combine that with scale?
[22:38:55 CEST] <Parsec300> To downsize it to 640 since the Nextbase players accept max width 640
[22:40:44 CEST] <c_14> ,scale=640:-2
[22:40:47 CEST] <c_14> after the pad
[22:45:29 CEST] <Parsec300> Ah, comma. Thanks
[22:46:08 CEST] <feliwir> how frequently zeranoe builds are released?
[22:46:51 CEST] <Parsec300> How come -2?
[22:47:29 CEST] <c_14> Parsec300: makes sure the height is even
[22:48:09 CEST] <Parsec300> Oh
[22:52:27 CEST] <Parsec300> c_14, sorry, but I get the same error
[22:52:39 CEST] <c_14> What error?
[22:53:28 CEST] <Anoia> feliwir: nightly
[22:53:32 CEST] <Parsec300> http://pastebin.com/PdT8dcK9
[22:54:56 CEST] <Parsec300> btw without the parentheses, I got a bash syntax error
[22:55:58 CEST] <c_14> negative values...
[22:58:14 CEST] <c_14> oh
[22:58:17 CEST] <c_14> that's easy
[22:58:22 CEST] <c_14> put the '"' before pad
[23:00:14 CEST] <Parsec300> c_14, I did, but still complains about negative values
[23:01:22 CEST] <c_14> oh, derp
[23:02:18 CEST] <c_14> "pad=w=iw:h=(iw*9/16):y=(oh-ih)/2,scale=640:-2"
[23:02:25 CEST] <c_14> You need to pad the top and bottom
[23:02:27 CEST] <c_14> Not the sides
[23:02:36 CEST] <c_14> For your case, anyway.
[23:02:42 CEST] <Parsec300> I will try
[23:02:50 CEST] <c_14> You could adjust the pad with an if to check which direction it'll have to adjust, but I was lazy.
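For the record, one way to write the direction-agnostic version c_14 alludes to is with max() from the expression evaluator, so only the short dimension gets padded (untested sketch; the commas inside max() need escaping so they aren't read as filter separators):

    -vf "pad=w=max(iw\,ih*16/9):h=max(ih\,iw*9/16):x=(ow-iw)/2:y=(oh-ih)/2,scale=640:-2"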
[23:04:22 CEST] <zhanshan> c_14 do you see anything wrong with that video stream: https://gist.github.com/zhanshan/9cc26d93e1ec8c0d39fe
[23:05:20 CEST] <zhanshan> I put together jpg and flac into an MKV video and wm4 of #mpv told me: key frames are exactly 10 seconds apart, but the index looks correct and audio frames are only 100ms long, so there should be no problem
[23:05:34 CEST] <zhanshan> so I don't see what's wrong with the keyframe
[23:05:54 CEST] <zhanshan> is it a ffmpeg command issue or h.264 issue or not at all?
[23:06:03 CEST] <c_14> What's the problem?
[23:07:45 CEST] <zhanshan> in mpv if I seek to like a minute and seek backwards sound always takes a bit to go back up
[23:08:37 CEST] <Parsec300> c_14, you can do if statements with ffmpeg?
[23:08:43 CEST] <zhanshan> this is hard to work with
[23:09:04 CEST] <zhanshan> and vlc doesn't seem to have jack audio output?? strange enough or am I just not finding it
[23:10:04 CEST] <c_14> Parsec300: yep https://ffmpeg.org/ffmpeg-utils.html#Expression-Evaluation
[23:11:28 CEST] <c_14> zhanshan: have you tried ffplay?
[23:11:34 CEST] <zhanshan> I need a small videoplayer, and I really like mpv for the classification of video footage
[23:12:12 CEST] <zhanshan> what's the command for jack audio output?
[23:12:19 CEST] <c_14> For ffplay?
[23:12:52 CEST] <zhanshan> yah
[23:13:03 CEST] <zhanshan> mpv can use it automatically
[23:13:15 CEST] <zhanshan> ffplay told me No more combinations to try, audio open failed
[23:13:26 CEST] <zhanshan> SDL_OpenAudio (2 channels, 48000 Hz):
[23:13:39 CEST] <Parsec300> c_14, it's converting now
[23:13:51 CEST] <zhanshan> trying to use alsa by default
[23:15:01 CEST] <c_14> zhanshan: I don't think ffmpge has jack output
[23:15:10 CEST] <c_14> *ffmpeg
[23:15:15 CEST] <c_14> you can check with ffmpeg -devices
[23:15:46 CEST] <Parsec300> c_14, works like a charm, thank you!
[23:17:01 CEST] <zhanshan> c_14 so I gotta stick to mpv's delay and have to install a vlc version with jack-support. someone in the channel told me this exists
[23:17:36 CEST] <zhanshan> Demuxing supported: D  jack            JACK Audio Connection Kit
[23:17:41 CEST] <c_14> You can build it yourself iirc.
[23:18:10 CEST] <zhanshan> so would that be ffplay -d jack file.mkv??
[23:18:17 CEST] <zhanshan> or -ao?
[23:18:34 CEST] <c_14> Demuxing probably means input only
[23:18:41 CEST] <c_14> Because devices are formats
[23:18:48 CEST] <c_14> and demuxing = reading, and muxing = writing
[23:18:54 CEST] <zhanshan> ah ok
[23:18:57 CEST] <zhanshan> :/
[23:20:15 CEST] <zhanshan> wm4 said: it's odd, normally x264 decides when it's a good time to insert a keyframe
[23:20:32 CEST] <zhanshan> and: anyway, no sample no fix, and I'm off for today
[23:20:40 CEST] <zhanshan> I don't quite understand
[23:20:50 CEST] <zhanshan> except that he's off for today :P
[00:00:00 CEST] --- Thu Jun 25 2015


More information about the Ffmpeg-devel-irc mailing list