[Ffmpeg-devel-irc] ffmpeg.log.20160601

burek burek021 at gmail.com
Thu Jun 2 02:05:01 CEST 2016


[00:00:55 CEST] <rainabba> JEEB: That makes sense because it takes 3-4 jobs to saturate the 36-core for encoding, but just the unsharp appears to max out at 6. I'm doing prores -c:a copy -c:v prores -vf.... so there's no encoding happening when I see that.
[00:01:17 CEST] <rainabba> bbiaf.
[00:01:26 CEST] <JEEB> uhh
[00:01:37 CEST] <rainabba> Meeting with Akamai...
[00:01:39 CEST] <JEEB> non-copy -c is *always* both decoding and encoding
[00:01:44 CEST] <JEEB> lol akamai
[00:01:59 CEST] <rainabba> Yeah, well. Gotta maintain relationships :/
[00:02:15 CEST] <JEEB> sure, I've just been on the side of trying to integrate against some of their crapola
[00:02:31 CEST] <rainabba> Just 40 mins ago was on the phone with Multicoreware
[00:02:44 CEST] <JEEB> they tried to sell you a lot of magic
[00:02:45 CEST] <rainabba> THEY know their shit :)
[00:03:04 CEST] <JEEB> I just know MCW's marketing people are really obnoxious
[00:03:21 CEST] <rainabba> We do all our own encoding/transcoding so don't really need Akamai.
[00:03:23 CEST] <JEEB> at least for someone who knows how they work
[00:03:37 CEST] <JEEB> akamai is mostly used for CDN purposes anyways
[00:05:37 CEST] Action: DHE googles names
[00:06:17 CEST] <DHE> huh... okay, moving on
[00:14:12 CEST] <Threads> what would the reason be for ffmpeg to encode but the first few frames are still images?
[00:24:48 CEST] <llogan> Threads: i don't think anyone understands your question.
[00:33:21 CEST] <Threads> llogan when i encode an h264 input stream it freezes for the first 1-2 seconds on the video
[00:35:35 CEST] <DHE> what's the source?
[00:37:01 CEST] <Threads> h264.ts input
[00:37:25 CEST] <DHE> okay... when you start converting, does it spam errors from the video codec first?
[00:39:16 CEST] <Threads> yep
[00:43:25 CEST] <DHE> I've seen this. the video doesn't start with a keyframe, so the video needs to wait for a keyframe to start decoding. but the audio is good
[00:43:53 CEST] <DHE> but after conversion you see the first keyframe held as a still, and the video waits until the point where that frame actually belongs before it starts moving
[00:44:55 CEST] <Threads> DHE thats what i have
[00:45:07 CEST] <Threads> cant seem to fix it tho
[01:04:42 CEST] <rainabba> Threads: Sounds like you need to seek on the input to the first keyframe. How to identify which is the bit I'm not sure about. https://trac.ffmpeg.org/wiki/Seeking
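A sketch of one way to find that first keyframe with ffprobe, along the lines of the wiki page above (filenames and the 1.24s timestamp are hypothetical):

    ffprobe -select_streams v:0 -show_entries frame=key_frame,pkt_pts_time -of csv=p=0 -read_intervals "%+5" input.ts
    ffmpeg -ss 1.24 -i input.ts -c:v libx264 -c:a aac output.mp4

The first csv line of the form "1,<time>" marks the first keyframe; seeking the input to that time with -ss skips the frozen lead-in.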
[01:10:09 CEST] <rainabba> JEEB: I'm now following this guide: https://trac.ffmpeg.org/wiki/CompilationGuide/Centos  Based on what you said about libavcodec including all the decoders I'll need, I don't need to follow the libx264 steps in that guide, or was that just referring to decoders and I still need to follow those steps (and the ones for libx265, libfdk_aac, etc..) for those encoders?
[01:15:42 CEST] <llogan> rainabba: if you want to encode with libx26*, fdk, etc then you'll need to compile the appropriate external libraries.
[01:20:54 CEST] <furq> rainabba: https://ffmpeg.org/general.html#Supported-File-Formats_002c-Codecs-or-Features
[01:20:58 CEST] <furq> anything marked with an X is built-in
[01:40:53 CEST] <rainabba> furq: ty
[01:43:06 CEST] <CoJaBo> ..how do i fix the framerate of a video without reencoding?
[01:43:30 CEST] <furq> with difficulty
[01:43:55 CEST] <CoJaBo> ..yes?
[01:45:12 CEST] <furq> i don't think you can do it with just ffmpeg but it's possible with mp4box or mkvmerge
[01:45:34 CEST] <furq> assuming this is h.264, idk about doing it with other codecs
[01:45:50 CEST] <furq> http://video.stackexchange.com/a/15003
[01:45:53 CEST] <CoJaBo> Yes, it is
[01:46:11 CEST] <furq> you can also probably do it with l-smash so that people in here don't yell at me for recommending gpac
[01:46:46 CEST] <CoJaBo> Its gona be faster to just start over, isn't it :/
[01:47:13 CEST] <furq> it should be pretty simple
[01:52:55 CEST] <CoJaBo> ..i dont even know where to get those tools
[01:53:05 CEST] <furq> what os/distro are you on
[01:53:14 CEST] <furq> and also what container are you using
[01:54:07 CEST] <furq> if you're on debian/ubuntu then mp4box is in gpac and mkvmerge is in mkvtoolnix
[01:54:38 CEST] <CoJaBo> ubuntu old-lts
[01:59:35 CEST] <rainabba> furq: For CoJaBo's question, this wouldn't work? https://trac.ffmpeg.org/wiki/How%20to%20speed%20up%20/%20slow%20down%20a%20video
[02:00:05 CEST] <furq> that reencodes
[02:00:14 CEST] <furq> you can't change the framerate with ffmpeg without reencoding
[02:00:21 CEST] <rainabba> Gotcha
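A sketch of the two remux approaches furq points to (the track IDs and the 25fps value are hypothetical; no reencoding happens in either case):

    # mkvmerge: rewrite the video track's frame duration while remuxing
    mkvmerge -o fixed.mkv --default-duration 0:25fps input.mkv
    # MP4Box: dump the raw h.264 track, then re-import it at the desired fps
    MP4Box -raw 1 input.mp4
    MP4Box -add input_track1.h264:fps=25 -new fixed.mp4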
[02:11:14 CEST] <rainabba> If I've built correctly from the following guide, should I find the actual ffmpeg binary at ffmpeg_build/bin/ ?  https://trac.ffmpeg.org/wiki/CompilationGuide/Centos
[02:12:33 CEST] <rainabba> My actual build script: http://pastebin.com/C1Lystun
[02:13:10 CEST] <furq> i should have thought so
[02:14:18 CEST] <furq> actually you're setting --bindir to $HOME/bin which might override that
[02:15:40 CEST] <rainabba> Interesting. I do see vsyasm, x264, yasm and ytasm there, but no ffmpeg. That was a script copy/paste thing, should I continue with that practice?
[02:16:08 CEST] <llogan> is there a ffmpeg binary in "~/ffmpeg_sources/ffmpeg"?
[02:16:28 CEST] <furq> no idea, i don't like to set --prefix or --bindir because then my build system's paths end up in the binaries i distribute
[02:18:40 CEST] <rainabba> `find ~ -name ffmpeg` didn't return anything, but I've cleaned up to try again. Updated my script to: http://pastebin.com/C1Lystun If you have any other suggestions, I'd appreciate hearing them.
[02:18:55 CEST] <furq> i can't imagine it's a good idea to run make distclean after building
[02:19:44 CEST] <rainabba> After `make install`, would it matter?
[02:19:45 CEST] <furq> i've seen at least some makefiles that wipe out your prefix if you do that
[02:19:55 CEST] <furq> which is a lot of fun when your prefix is /usr
[02:20:05 CEST] <rainabba> Hmm.
[02:20:06 CEST] <furq> thanks vlc!
[02:20:11 CEST] <furq> fortunately that was in a chroot so it wasn't that bad
[02:20:17 CEST] <rainabba> Oh
[02:20:32 CEST] <rainabba> Just realized what you were saying :/
[02:21:05 CEST] <rainabba> Not an issue with --prefix set to an isolated build folder like this script then right?
[02:21:57 CEST] <furq> it'd be an issue in the sense that it'd delete the binary you just built
[02:22:25 CEST] <rainabba> That would make this guide pretty useless (which would be sad considering its popularity) :)
[02:22:33 CEST] <furq> i have no idea if that's true of ffmpeg
[02:22:55 CEST] <furq> but if you're not running as root, and i assume you're not, then the binary will be somewhere in ~ unless it didn't build
[02:22:58 CEST] <furq> and you'd probably have noticed that
[02:23:32 CEST] <c_14> distclean doesn't do anything outside the build directory
[02:23:52 CEST] Action: rainabba ducks away so nobody sees his horrible and outdated sysadmin practices
[02:24:14 CEST] <furq> also, why does your ffmpeg make invocation not have -j32
[02:24:19 CEST] <furq> that's the one where you really need it
[02:24:45 CEST] <llogan> furq: users were attempting to recompile without running make distclean beforehand. it's just a preventative measure.
[02:24:45 CEST] <rainabba> That might explain why this is taking ages :)
[02:25:11 CEST] <furq> llogan: i'm sure it's fine, i've just been burned by other video-related projects who should know better
[02:25:47 CEST] <furq> i always run `make install-progs prefix=$PREFIX` so that i don't have to have a hardcoded path show up in the banner
[02:27:01 CEST] <llogan> rainabba: i'm guessing compilation failed somewhere but your script just kept going.
[02:27:18 CEST] <furq> yeah you probably want to put `set -e` at the top
[02:27:32 CEST] <furq> running make distclean at the end will probably scroll any error messages off the screen
[02:27:53 CEST] <llogan> rainabba: refer to console output or just do it manually and script it once you get it working.
[02:30:14 CEST] <rainabba> furq: Not familiar and google not helping readily. `set -e` does what?
[02:30:29 CEST] <c_14> errexit
[02:30:32 CEST] <furq> set -e is a shell builtin which breaks a script on error
[02:30:34 CEST] <c_14> if there's an error, the script terminates
[02:30:38 CEST] <rainabba> Ahh
[02:30:57 CEST] <furq> it stops if any subcommand exits with a non-zero status
[02:31:12 CEST] <furq> which is normally what you want, but shell defaults are dumb
[02:31:27 CEST] <rainabba> Haha. I'm actually pasting that to the terminal and considered converting to include && between each :)
[02:31:50 CEST] Action: rainabba isn't a bash expert, but has been poking at it since the early days of RedHat ~1999
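A minimal sketch of what set -e changes in a build script like this one:

    #!/bin/sh
    set -e                        # stop at the first command that exits non-zero
    cd ~/ffmpeg_sources/ffmpeg
    ./configure --prefix="$HOME/ffmpeg_build"
    make -j32                     # if this fails, the install step below never runs
    make install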
[02:33:00 CEST] <CoJaBo> Achievement Unlocked: Stalled a vehicle in the middle of a busy intersection.
[02:33:04 CEST] <rainabba> If I don't get a binary this time, I'll do each chunk separately then put it all in a proper bash script with #!/bin/bash for future use.
[02:33:15 CEST] <furq> #!/bin/sh pls
[02:33:23 CEST] <furq> or at least #!/usr/bin/env bash
[02:33:35 CEST] <furq> not that you're ever going to run this on a system where bash isn't in /bin, but it will help me sleep at night
[02:33:37 CEST] <rainabba> furq: Oh?
[02:34:02 CEST] <rainabba> furq: You code and find tabs > spaces too huh? ;)
[02:34:17 CEST] <furq> four spaces for life
[02:34:21 CEST] <furq> (four life)
[02:34:31 CEST] <rainabba> furq: Seriously though, that's news to me. Thanks.  You watch Silicon Valley at all?
[02:34:36 CEST] <furq> i can't say i do
[02:34:54 CEST] <furq> but yeah /bin/bash won't work on at least the BSDs
[02:35:02 CEST] <furq> probably any system where bash isn't the default shell
[02:35:09 CEST] <rainabba> Mike Judge. It's disturbingly accurate and funny as shit.
[02:36:17 CEST] <rainabba> Last episode, the main character ended up breaking up with his girlfriend from Facebook because she preferred spaces and he's ..... no words for him
[02:36:33 CEST] <furq> i'll give her a good home
[02:37:35 CEST] <rainabba> "from Facebook" meaning she is a developer there and the show is so dead-serious at being accurate, the show did a closeup of a github page and they made this just to get the shot: https://github.com/stitchpunk Pay close attention to https://github.com/Stitchpunk/ghdecoy
[02:38:38 CEST] <rainabba> Grr: "ERROR: opencl not found" because I added `--enable-opecl` (because I need the support), but apparently I need another project to support that.
[05:40:47 CEST] <prelude2004c> hey guys.. anyone around ?
[06:05:35 CEST] <Demon_Fox> prelude2004c, You just have to ask an ffmpeg question
[06:06:00 CEST] <prelude2004c> np.. so i started using vdpau for decoding and nvenc for encoding and the world is good again... problem is my audio/video are off sync
[06:06:09 CEST] <prelude2004c> not sure if it is the delay or something because of the vdpau
[06:09:38 CEST] <prelude2004c> seems my audio is off -0.700s
[06:18:06 CEST] <prelude2004c> ${ffmpeg} -hwaccel vdpau -threads 1 -i "$stream" $mapping -c:s copy -c:v nvenc -2pass 1 -s 1280x720 -b:v 1800k -preset llhq -minrate 1k -maxrate 2200k -g 180 -r 30 -af "aresample=async=1000" -af 'volume=10dB' $audio -avoid_negative_ts 1 -frame_drop_threshold 1.0 -dts_delta_threshold 0 -hls_time 6 -break_non_keyframes 1 -hls_flags delete_segments -start_number $start -hls_list_size 5 ${HLS_PATH_FILES}/${CHN}/${CHN}2M.m3u8 2>/var/log/channels/${CHN}/720p.txt;sleep 5;done
[06:18:28 CEST] <prelude2004c> what in there causes the video / audio to be out of sync.. the timestamps should be good so why isn't the aresample keeping audio/video in sync
[06:20:42 CEST] <Demon_Fox> You should be getting warnings if it was a corruption in the input
[06:21:07 CEST] <Demon_Fox> From your settings, i can surmise that this is a live stream
[06:21:22 CEST] <prelude2004c> its a live stream :)
[06:21:38 CEST] <prelude2004c> and the audio seems to be drifting further too
[06:21:39 CEST] <prelude2004c> over time
[06:24:33 CEST] <prelude2004c> any ideas how i am able to keep the audio/video in check so it can go the distance ?
[06:34:47 CEST] <prelude2004c> any suggestion ?
[06:58:08 CEST] <Demon_Fox> Well
[06:58:20 CEST] <Demon_Fox> I would try a different command line to see if it still happens
[06:58:33 CEST] <Demon_Fox> On a file instead of a stream to see if it persists
[07:28:54 CEST] <Demon_Fox> prelude2004c, if no one responds now, it's probably because it's night time on the part of the world most the people are in
[10:28:59 CEST] <andrey_utkin> I wonder if ffmpeg could allow implementation of such a hack as "downscaled decoding" for H.264, to save CPU cycles when a lot of decoding is needed. Please let me know if you know how to achieve this, we may want to pay you.
[11:24:46 CEST] <theeboat> Does anybody know which codecs I can use within an mxf using ffmpeg?
[11:47:50 CEST] <Franciman> Hi all, I am using libavcodec to extract samples from an audio stream, and I would like to show a progress bar, now, what can I use in order to track progress?
[11:48:24 CEST] <Franciman> Specifically, I can't seem to find a value that represents the total number of frames, or something like that
[13:23:33 CEST] <CoJaBo> How often are the static builds, built?
[13:24:11 CEST] <ln-> the comma in that question must be ungrammatical.
[13:24:54 CEST] <CoJaBo> Yeh, I hit comma instead of space somehow :/
[14:07:36 CEST] <DHE> CoJaBo: for purely static binaries, --extra-ldflags=-static but the build environment must be prepared for it. and some external libraries don't support static builds
[14:07:54 CEST] <CoJaBo> DHE: ..?
[14:08:53 CEST] <DHE> for example, x264 with opencl mode enabled can't be statically linked
[14:16:23 CEST] <CoJaBo> DHE: http://johnvansickle.com/ffmpeg/
[14:17:18 CEST] <DHE> yeah, and?
[14:17:32 CEST] <CoJaBo> My question was how often are those built..
[14:17:57 CEST] <DHE> well, the git build is dated May 29th
[14:18:16 CEST] <DHE> but I don't think this is an official build
[14:18:50 CEST] <CoJaBo> It's linked from the official site, anyway
[14:57:24 CEST] <nyuszika7h> rainabba: you misspelled "opencl"
[15:01:18 CEST] <mr_pinc> Greetings.  I've got a request at work to make my video encodes look as good and be the same size as a pirated version of Kill Bill which is 1920x1080 - 1.5 gigs in mkv format.  Thing is I have to target the web - Currently I am using the following settings to match (approximately) the bitrate - -c:v libx264  -profile:v high -preset slow -b:v 12000k - anyone have any recommendations on how I can improve the quality while keeping file size comparable?
[15:07:46 CEST] <Taoki> Hello.
[15:08:09 CEST] <Taoki> Is it possible to use ffmpeg to compile an image sequence into a gifv file? Not gif but gifv.
[15:08:23 CEST] <Taoki> I have a series of transparent png files which I want to make an animated image of.
[15:08:35 CEST] <Taoki> Animated png also works, but I'm not aware of ffmpeg supporting that.
[15:10:49 CEST] <durandal_1707> It is supported
[15:10:51 CEST] <ritsuka> isn't gifv just mp4 or webm?
[15:10:59 CEST] <furq> yeah it is
[15:12:05 CEST] <furq> i don't think it's even a distinct wrapper or anything
[15:12:34 CEST] <Taoki> ah
[15:12:35 CEST] <furq> afaik the gifv extension just forces imgur to wrap the webm/mp4 in some html which makes it loop
[15:12:40 CEST] <Taoki> Any way to encode to apng then?
[15:12:59 CEST] <furq> -c:v apng
[15:13:13 CEST] <Taoki> Thanks, will give it a try!
[15:13:51 CEST] <durandal_1707> It supports only raw Apng container
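For example, a sketch (the input pattern and framerate are hypothetical):

    ffmpeg -framerate 10 -i frame%03d.png -c:v apng -plays 0 out.apng

-plays 0 makes the animation loop forever, and alpha in the source PNGs is preserved.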
[15:14:08 CEST] <Taoki> As long as it renders animated in a browser that's ok
[15:14:18 CEST] <furq> it depends on the browser
[15:14:25 CEST] <furq> only firefox supports apng
[15:14:27 CEST] <durandal_1707> Yea
[15:14:43 CEST] <Taoki> Really? I thought all browsers did by now. Wow
[15:14:45 CEST] <furq> chrome was on the mng side iirc, although i don't think they support either any more
[15:14:49 CEST] <ritsuka> safari supports apng too
[15:14:58 CEST] <Taoki> Better use webp then
[15:15:18 CEST] <furq> firefox doesn't support webp
[15:15:18 CEST] <Taoki> I'd use gif, but the 256 colors limitation sucks. Don't know how to save 32-bit animated gifs
[15:15:26 CEST] <furq> welcome to web standards!
[15:15:28 CEST] <Taoki> I think it does, seen it used a few times
[15:15:41 CEST] <furq> http://caniuse.com/#feat=webp
[15:15:54 CEST] <durandal_1707> Taoki: dithering
[15:16:01 CEST] <furq> don't ask me how they support webm without supporting webp
[15:16:13 CEST] <furq> it's some fun political thing, much like with apng
[15:17:48 CEST] <furq> there's some stuff you can do with frame disposal to have >256 colours in a gif, but it's a bit hacky and i don't think ffmpeg will do it for you
[15:17:50 CEST] <Taoki> Can ffmpeg export to gif then? And if so, can it export to full 32-bit color gif, not the old 256 colors one?
[15:17:56 CEST] <Taoki> ok
[15:18:01 CEST] <furq> there is no such thing as 32-bit gif
[15:18:15 CEST] <furq> it's always 256 colours per frame
[15:18:29 CEST] <furq> but you can leave colours from the previous frame in the next one and it doesn't count towards the limit
[15:20:15 CEST] <furq> you would think in 2016 it would be easy to have a lossless animation in a browser, wouldn't you
[15:20:35 CEST] <furq> i sure did the last time i wanted to do this and ended up using a gif
[15:31:34 CEST] <BtbN> It's not 256 colours per frame.
[15:31:39 CEST] <BtbN> It's 256 in total.
[15:31:44 CEST] <BtbN> All frames share the same palette
[15:32:37 CEST] <BtbN> Just use mp4 with h264, all browsers support it.
[15:36:42 CEST] <Taoki> BtbN: It's for something I need to embed as the background of a div element, so probably wouldn't work
[15:36:56 CEST] <BtbN> why wouldn't it?
[15:37:16 CEST] <furq> you can use a video as a background if you want your website to be bad
[15:37:27 CEST] <furq> and gif does support multiple palettes
[15:38:18 CEST] <furq> not that i'm recommending it
[15:38:34 CEST] <Taoki> Not a website... more like a javascript / html5 game.
[15:38:43 CEST] <Taoki> I have websites that do that :P
[15:39:01 CEST] <Taoki> But you can like, embed mp4 directly in the background-image field of an element? That's new...
[15:39:26 CEST] <Taoki> Of course, mp4 doesn't support transparency :P Yeah... think I'll go with gif.
[15:39:31 CEST] <furq> no, but you can fake it
[15:39:37 CEST] <furq> webm does support transparency but not in every browser
[15:39:45 CEST] <furq> i don't think it works in firefox yet
[15:41:14 CEST] <furq> there might be a tool out there which will generate faux-truecolour gifs but i'm not aware of one
[15:41:35 CEST] <furq> http://static.tweakers.net/ext/f/L9kEhvvZMizHQ5SE4rDoyh9P/full.gif
[15:41:40 CEST] <furq> it works better than i thought though
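On durandal_1707's dithering point, ffmpeg's palettegen/paletteuse filter pair gets noticeably better-looking gifs out of the 256-colour limit than the default encoder path; a sketch, filenames hypothetical:

    ffmpeg -i frame%03d.png -vf palettegen palette.png
    ffmpeg -i frame%03d.png -i palette.png -filter_complex "[0:v][1:v]paletteuse" out.gif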
[16:13:38 CEST] <matthias_> Hi, i want to remove all silence intervals from a video-file. ffmpeg -i aud.mkv -af "silenceremove=0:0:0:-1:1:-20dB" aud_2.mkv   if i set the output file to mp3 etc. it works, but when i use a video container, the video is not cut right. any solutions?
[16:21:02 CEST] <c_14> matthias_: you're going to have to use silencedetect, parse the output in a script and then use something like the select/aselect filters to get the parts you want
[16:21:10 CEST] <theeboat> Does anybody know which codecs I can use within an mxf using ffmpeg?
[16:21:17 CEST] <c_14> The silenceremove filter only works on audio
[16:21:43 CEST] <matthias_> c_14: oh, well there will be plenty of files for a 1.5h video
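A sketch of the two-pass workflow c_14 describes (the threshold and cut times are hypothetical; a script would turn the silence_start/silence_end pairs in the log into between() expressions):

    # pass 1: detect silences
    ffmpeg -i aud.mkv -af silencedetect=noise=-20dB:d=0.5 -f null - 2> silence.log
    # pass 2: drop one detected interval from video and audio, then regenerate timestamps
    ffmpeg -i aud.mkv -vf "select='not(between(t,12.3,15.7))',setpts=N/FRAME_RATE/TB" -af "aselect='not(between(t,12.3,15.7))',asetpts=N/SR/TB" aud_2.mkv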
[17:15:18 CEST] <hispeed67> sweet. i want to add a 3 second clip of text to the beginning of a video... thoughts/ideas?
[17:15:52 CEST] <hispeed67> black background, white text, 3 seconds, "who, what, why, when"
[17:19:30 CEST] <DHE> well, you can build an image with drawtext if you need to. or maybe you can just make a PNG yourself and have ffmpeg turn it into a 3-second video with silent audio
[17:19:35 CEST] <DHE> then put two videos together
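A sketch of DHE's two-step suggestion (filenames, size and codecs are hypothetical; the concat demuxer needs both parts to share codecs and parameters):

    # turn a PNG into a 3-second clip with silent stereo audio
    ffmpeg -loop 1 -i title.png -f lavfi -i anullsrc=r=48000:cl=stereo -t 3 -c:v libx264 -pix_fmt yuv420p -c:a aac -shortest intro.mp4
    # list.txt contains two lines:  file 'intro.mp4'  and  file 'main.mp4'
    ffmpeg -f concat -safe 0 -i list.txt -c copy joined.mp4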
[17:21:25 CEST] <nyuszika7h> why is x264 encoding with a bitrate that's much higher than the source
[17:21:54 CEST] <dv> hi. on a sitara AM3352 1GHz machine (single-core armv7 cortex-A8) , decoding a 384 kHz ALAC file can use ~40% CPU.
[17:22:14 CEST] <dv> is the alac decoder significantly optimized?
[17:22:24 CEST] <dv> or has there been no demand yet?
[17:22:35 CEST] <hispeed67> DHE: that's what i was thinking.. make a PNG and turn it into a video.. i'll have to check into that.
[17:23:16 CEST] <c_14> nyuszika7h: because you set a higher bitrate/"low" crf?
[17:23:27 CEST] <nyuszika7h> I used 20 as CRF, I guess it can go higher then?
[17:23:57 CEST] <c_14> crf for libx264 8bit goes to 53 or something afaik
[17:24:13 CEST] <c_14> 51
[17:26:26 CEST] <c_14> https://trac.ffmpeg.org/wiki/Encode/H.264 <- nyuszika7h
[17:28:49 CEST] <nyuszika7h> yes, I've seen that, I guess I never really tried higher CRF values because I thought that would reduce the quality too much, didn't notice it's using such a high bitrate
[17:33:00 CEST] <hispeed67> hey, video (non-ffmpeg) question. when i d/l from youtube (using youtube-dl) i d/l .mkv files. when i use the '-k' option, it keeps the orig. mp4 as well as a .webm file, then merges the .webm with the .mp4 and creates a new .mp4.. so i get three files..wassup with that?
[17:33:35 CEST] <furq> that's what you asked it to do
[17:34:12 CEST] <furq> although it should create an mkv from a webm and an mp4
[17:34:22 CEST] <furq> since mp4 doesn't support vp9 or opus
[17:34:50 CEST] <hispeed67> furq, when i first started, i wanted .mp4, so i was converting the .mkv (yea, that's what i meant, sorry) into .mp4. then i found the -k option now i don't have to convert to .mp4
[17:35:02 CEST] <furq> the mp4 will only contain video or audio
[17:35:11 CEST] <furq> if you want an mp4 then use -f and select two mp4-compatible streams
[17:35:34 CEST] <hispeed67> i think my confusion comes from unfamiliarity with youtube-dl
[17:36:12 CEST] <hispeed67> it was after watching it that i noticed it was deleting the .mp4 file that i was actually wanting.. it seems to be both video and audio..
[17:36:31 CEST] <furq> http://vpaste.net/QTYB3
[17:36:55 CEST] <furq> use -F to list the formats and then (in that example) -f 137+140 to get h.264 and aac
[17:37:30 CEST] <hispeed67> sweet. thnx
[17:37:34 CEST] <furq> the default format selection is weird and it'll often pick h.264 and opus
[17:39:11 CEST] <hispeed67> freakin nice.. :)
[17:39:14 CEST] <hispeed67> thnx dude.
[17:39:36 CEST] <furq> or just use -f 22 if you want the old http mp4
[17:39:46 CEST] <furq> that's limited to 720p though
[17:40:15 CEST] <hispeed67> do those numbers change with each video? i.e. will 137 always be mp4, just sometimes the option for 137 won't be listed.. ?
[17:40:21 CEST] <furq> i believe so
[17:40:51 CEST] <furq> 137 will only be present if the video is 1080p
[17:40:58 CEST] <furq> and there'll be another one if there's a 1080p60 video
[17:43:30 CEST] <rainabba> So my efforts to get a build yesterday went well, but aren't yet complete. The reason I finally needed my own build was to get opencl support in addition to what I already had in another build. To that end, I've added `--enable-opencl`, but the ffmpeg build failed with "ERROR: opencl not found" so I started Googling, and what I could find was very unclear. Can anyone point me in the right direction?
[17:43:49 CEST] <furq> rainabba: what does config.log say
[17:44:35 CEST] <rainabba> Quite a bit in there :)  What am I looking for?
[17:44:42 CEST] <furq> the error will be right at the end
[17:44:55 CEST] <furq> pastebin the last 100 lines or so if you can't figure it out
[17:45:12 CEST] <rainabba> ty
[17:45:55 CEST] <rainabba> Looks like I need headers (no idea where to get them or put them though): /tmp/ffconf.blZamvHf.c:1:23: fatal error: OpenCL/cl.h: No such file or directory
[17:46:49 CEST] <rainabba> Hmm: https://ffmpeg.org/doxygen/2.8/opencl_8h_source.html ?
[17:47:08 CEST] <furq> it's part of the cuda sdk if you're using an nvidia card
[17:47:11 CEST] <vade> I have an AVFrame that reports FLTP sample format, contains 1024 nb_samples, line size is 8192, contains 2 data buffers both reporting 8192 as size, and also contains 2 AVBuffers reporting size of 8192. But, if I have 2 planar channels of float , at 1024 samples, should not buf[0] and buf[1] both be 4096 in size, as they contain one channel of 1024 samples at sizeof(float) = 1024 x 4? I need an adult :D
[17:47:41 CEST] <rainabba> Ahh. Helpful hint. Will explore.
[17:47:49 CEST] <furq> https://wiki.tiker.net/OpenCLHowTo#Installation
[17:47:52 CEST] <furq> that seems useful
[17:48:11 CEST] <furq> i know debian packages it, but you're on rhel aren't you
[17:48:29 CEST] <rainabba> AWS Linux AMI with yum
[17:48:30 CEST] <furq> and i wouldn't expect rhel to contain any useful packages at all
[17:50:26 CEST] <rainabba> This looks helpful too since NVENC is something else I wanted to explore (yes, I'm WELL aware of the quality difference): https://trac.ffmpeg.org/wiki/HWAccelIntro
[17:53:06 CEST] <kepstin> I think with the AWS linux it is easier to get the nvidia sdks on the box, since they designed it for use with their gpu instances. Look up some aws-specific stuff maybe?
[17:53:09 CEST] <vade> ah we just got NVEnc working on a G2 instance in a custom app using libavcodec. It "just worked"
[17:53:20 CEST] <vade> like literally yesterday
[17:54:04 CEST] <furq> a lot of distros package them up with the gpu drivers
[17:54:23 CEST] <furq> although i'd have thought they'd be preinstalled on an aws gpu instance
[17:55:12 CEST] <vade> my colleague did the setup using Ubuntu Server 14.04 and it works with NVIDIA-Linux-x86_64-361.42
[17:58:12 CEST] <f00bar80> Suppose a source live stream channel stops from the tuner server. When that happens, ffmpeg fails for this channel, and when I revert the channel back to its output link, ffmpeg will not auto start. Is there a way I can let ffmpeg autostart if a source input is reverted back after being stopped?
[18:00:07 CEST] <rainabba> I'm on a G2 instance and have nvidia support working so I likely already have the SDK on the box and just need to make it known to the build environment.
[18:01:19 CEST] <rainabba> Added it to get NVENC support for wowza (in a Docker container at that, which turned out to be one step harder).
[18:05:04 CEST] <rainabba> It looks like what I need (almost) is `TGT_DIR=/opt/opencl-headers/include/CL && mkdir -p $TGT_DIR && cd $TGT_DIR && wget https://www.khronos.org/registry/cl/api/1.2/{opencl,cl_platform,cl,cl_ext,cl_gl,cl_gl_ext}.h`, but those files don't exist and looking at https://www.khronos.org/registry/cl/ I can see there are much newer versions. Anyone know the newest I can/should use?
[18:06:01 CEST] <rainabba> I'm thinking https://raw.githubusercontent.com/KhronosGroup/OpenCL-Headers/opencl21/
[18:08:44 CEST] <rainabba> So assuming those would work, how do I tell the build process where to find those headers?
[18:09:08 CEST] <furq> presumably --extra-cflags="-I/opt/opencl-headers/include"
[18:10:25 CEST] <rainabba> Ahh. Would it then make sense to place/link the headers in the currently declared folder ($HOME/ffmpeg_build/include) rather than modifying the cflags?
[18:10:53 CEST] <furq> i guess
[18:10:55 CEST] <f00bar80> more clear , what i meant is how to autostart ffmpeg if a source stopped then resumed
[18:10:57 CEST] <rainabba> If some of these questions are ignorant, it's because these concepts are somewhat familiar, but I wouldn't dare say I "know" them :)
[18:11:23 CEST] <furq> if there are newer versions available then just put those into your include dir
[18:16:07 CEST] <rainabba> Success!
[18:19:58 CEST] <f00bar80> ppl any comment ?
[18:21:06 CEST] <rainabba> f00bar80: I imagine the most proper approach would use some kind of task runner intended to keep a process running, and wouldn't be ffmpeg specific.
[18:22:02 CEST] <f00bar80> rainabba: some more details plz
[18:22:03 CEST] <rainabba> If ffmpeg exits with code 0, a simple hack would be a bash script  and `sh ./runffmpeg.sh && sh ./runffmpeg.sh && sh ./runffmpeg.sh && sh ./runffmpeg.sh`, but I'm pretty lame with these things so that's just my hack :)
[18:24:10 CEST] <f00bar80> some more details
[18:24:42 CEST] <c_14> rainabba: you mean while true; do ./runffmpeg.sh; done ?
[18:25:05 CEST] <c_14> f00bar80: the ffmpeg binary itself does not handle what you want to do, either use the api or script around it
[18:25:55 CEST] <rainabba> c_14: No doubt that's a better approach :)
[18:27:43 CEST] <c_14> or if you only want it when it exits "successfully" while [ $? -eq 0 ]; do
[18:31:36 CEST] <f00bar80> c_14: I'm not sure about the exit status for a non-existing source, and if i use a script to autostart, how would the script know when the source is resumed, other than by checking for the source somehow at some specific interval?
[18:32:08 CEST] <c_14> Well, all you can do in that case is poll
[18:32:31 CEST] <f00bar80> what do you mean by poll
[18:32:55 CEST] <c_14> https://en.wikipedia.org/wiki/Polling_(computer_science)
[18:35:38 CEST] <f00bar80> c_14: Any idea what is the exit status for a non-existing source?
[18:35:56 CEST] <c_14> probably 1
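A sketch of the polling wrapper being discussed (the URL, output path and 10-second retry interval are hypothetical):

    #!/bin/sh
    # keep one channel's encode alive: ffmpeg exits when the source drops,
    # then the loop waits and retries until the tuner comes back
    while true; do
        ffmpeg -i "$SOURCE_URL" -c copy -f hls "$OUT_DIR/stream.m3u8"
        sleep 10
    done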
[18:50:28 CEST] <rainabba> Refining my build script (so I don't have to remember all this later) and despite `make -j 32` on the actual ffmpeg build, I see only 1 core being used (maxed out) when seeing `[ 87%] Built target encoder`. Is that to be expected? It's the longest step in the whole process, which is why it caught my attention.
[18:50:54 CEST] <DHE> link step?
[18:51:16 CEST] <furq> if there's only one job that can be run then only one core will be in use
[18:51:27 CEST] <rainabba> Line before is `[ 87%] Building CXX object common/CMakeFiles/common.dir/deblock.cpp.o` Other link steps actually said "linking".
[18:51:36 CEST] <furq> that appears to be from libx265
[18:51:46 CEST] <rainabba> K. Just wanted to be sure I hadn't missed something obvious
[18:51:50 CEST] <furq> which your script is needlessly rebuilding every time, but that's beside the point
[18:52:20 CEST] <rainabba> Haha. First line of my script blows EVERYTHING away to try and prevent other issues related to my ignorance :)
[18:52:28 CEST] <furq> make -j just runs a bunch of jobs in parallel, but some jobs will depend on other jobs being completed
[18:52:55 CEST] <rainabba> As I get things working, I'm documenting here: https://gist.github.com/rainabba/07425c3bc14a0bb51632f12e913d9081
[18:52:58 CEST] <furq> and obviously it's smart enough to not run a job while one of its dependencies is still running
[18:53:04 CEST] <rainabba> rgr
[18:53:43 CEST] <furq> fwiw if you replaced your script with a makefile and then ran that with -j32 it could build the various libraries in parallel
[18:54:36 CEST] <furq> i can't imagine it's particularly slow with -j32 though
[18:55:38 CEST] <rainabba> furq: Hmm. Makes sense. Haven't built a makefile myself so I'll check that out. No, this step is the only one that sits for any length of time so I'm not sure it would make any real difference, but I'm all for learning more about systems I depend on [blindly].
[18:56:25 CEST] <furq> the naive solution would be `if [ ! -f .libx265 ]; then (build x265); touch .libx265; fi`
[18:56:26 CEST] <DHE> you using link-time optimization? (lto)
[18:56:33 CEST] <rainabba> No clue
[18:56:35 CEST] <furq> to avoid rebuilding the libs every time
[18:56:44 CEST] <furq> then just rm .libx265 if you want to rebuild it
[19:13:23 CEST] <rainabba> furq: So .libx265 in that case is really just a flag to indicate that the build process is done and I'd remove that if updating the source (then rebuild and create it again)?
[19:13:40 CEST] <rainabba> .. that's all just bash too then, not makefile?
[19:14:00 CEST] <rainabba> OR is makefile bash? (hadn't considered that before)
[19:14:55 CEST] <furq> that's shell
[19:15:35 CEST] <rainabba> For that matter, I'd only want to create the flag file on a 0 exit for each dependency, then have an if statement that checked for all before building ffmpeg?
[19:15:51 CEST] <furq> if you set -e then it wouldn't get as far as touch anyway
[19:16:01 CEST] <rainabba> Ahh
[19:16:06 CEST] <furq> or as far as building ffmpeg, for that matter
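A sketch of the makefile furq suggests, with stamp files as the skip markers (paths and flags hypothetical; recipe lines are tab-indented):

    PREFIX := $(HOME)/ffmpeg_build

    all: ffmpeg.stamp

    x264.stamp:
    	cd x264 && ./configure --prefix=$(PREFIX) --enable-static && make -j32 && make install
    	touch $@

    ffmpeg.stamp: x264.stamp
    	cd ffmpeg && ./configure --prefix=$(PREFIX) --enable-gpl --enable-libx264 && make -j32 && make install
    	touch $@

rm x264.stamp forces a rebuild of just that library, and independent library stamps can build in parallel under make -j.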
[19:18:15 CEST] <f00bar80> so ppl not comment ?
[19:18:45 CEST] <f00bar80> Again, what I'm asking about is: I have multiple live stream sources encoded continuously. If any of the sources stops then resumes, i want the script to know and run ffmpeg on it, while the other streams are still being encoded
[19:19:48 CEST] <rainabba> f00bar80: This is a chat channel where some people MAY help, not a paid support system, and what you're asking are higher-level architecture questions that have little to do with ffmpeg and would vary depending on the platform used and how you're launching your processes.
[19:20:29 CEST] <rainabba> The more specific your questions are and the more effort you demonstrate in solving your own problem, the more likely people are to respond IF they have the time, knowledge and inclination.
[19:20:58 CEST] <rainabba> These are general guidelines for any community support (web, mailing lists, irc, etc..)
[19:24:40 CEST] <apt-get> hey!
[19:25:08 CEST] <apt-get> is there a way to set a default codec for a file format? For example, vp8 for webm instead of vp9 when doing something like "ffmpeg -i something.mp4 something.webm".
[19:25:27 CEST] <apt-get> I don't want to always set -c:v libvpx
[19:27:57 CEST] <kepstin> apt-get: at the moment, the default codec for a given container is hardcoded in the ffmpeg source. I generally recommend always specifying a codec.
[19:28:05 CEST] <apt-get> okay, kepstin
[19:28:58 CEST] <apt-get> by the way, when I use something like "ffmpeg -f x11grab -s "$W"x"$H" -i :0.0+$X,$Y -c:v libvpx -f pulse -ac 2 -i default file.webm", it still saves as vp9
[19:29:02 CEST] <apt-get> where should I place the -c:v ?
[19:34:06 CEST] <furq> after -i default
[19:34:17 CEST] <imperito> Hello. I'm trying to make some live streaming video. I think ffmpeg might be part of what I need. I'm not sure how it would work, though.
[19:34:27 CEST] <furq> options go before the file to which they apply
[19:35:29 CEST] <imperito> I've got some Python that can generate video frames, which I think I need to encode to something like h.264, and segment somehow, and then somehow serve to clients using a protocol like HLS.
[19:35:38 CEST] <imperito> If I'm understanding this correctly
[19:37:06 CEST] <imperito> But I'm not sure what tools to use where. I'm assuming I can't just write out a video file and point apache at it, because that wouldn't be a live stream
[19:39:25 CEST] <rainabba> imperito: At a glance, this looks to hit most of the core concepts you need (except for generating HLS, but that would be the easier part I think): http://zulko.github.io/blog/2013/09/27/read-and-write-video-frames-in-python-using-ffmpeg/
[19:40:07 CEST] <rainabba> Doubt I'll be any more help than that though
[19:40:33 CEST] <rainabba> I can encode/transcode all day long and do live encodes from capture cards, but never used other sources for live.
[19:41:31 CEST] <imperito> I see. I had already seen that, actually. The only thing I'm not sure about at that end is how the frame rate would work. Since the Python code would be generating frames at an arbitrary rate but I assume ffmpeg would need to output them at some defined rate for web video streaming
[19:42:10 CEST] <imperito> But that just shows how to write to a file, or read raw video from ffmpeg back into Python
[19:42:35 CEST] <imperito> Which I figure I could do but then I'm not sure how to make that a live stream
[19:44:53 CEST] <rainabba> imperito: largely a guess, but I'd think that by defining a framerate on your output with -r, ffmpeg would try to maintain that framerate (buffering extra frames) and you'd get dropped frames if your source couldn't keep up.
[19:45:52 CEST] <kepstin> using -r as an output option is the same as adding "-vf fps". what it does is make ffmpeg duplicate frames if the input framerate is below the requested, and drop frames if the input framerate is above the requested.
[19:46:12 CEST] <imperito> That sounds like what I'd want
[19:46:50 CEST] <kepstin> it might not be, actually.
[19:47:03 CEST] <kepstin> it depends a lot on how you're getting the video from your application into ffmpeg
[19:47:47 CEST] <kepstin> if you're, for example, sending raw video frames into ffmpeg via a pipe, you'll want to use the '-framerate' input option and possibly the '-re' global option instead.
[19:48:07 CEST] <imperito> The suggestion I saw online and that rainabba mentioned involves piping from Python into ffmpeg using rawvideo
[19:48:49 CEST] <rainabba> imperito: For context, what's the content or nature of content? Live motion, information, etc?
[19:49:36 CEST] <imperito> An image that is built up over time from many smaller tiles
[19:50:00 CEST] <rainabba> Also, what rate do you expect to be sending frames to ffmpeg?
[19:50:07 CEST] <rainabba> expect/hope
[19:50:30 CEST] <kepstin> imperito: one thing to remember is that by "input framerate", I mean "pts (time) value attached to each input frame" not "time at which ffmpeg read the frame". ffmpeg isn't really built as a tool for live stuff, although in many cases it can be made to work reasonably.
[19:51:07 CEST] <imperito> That's up in the air I think. I assume if I used an asynchronous double-buffering approach I could have my Python code emit frames at an arbitrary rate, or rate-limit it in the code somehow
[19:51:47 CEST] <rainabba> imperito: So you hope to feed frames at a rate somewhere near your output rate? Not 1 frame in every 10 seconds or something like that?
[19:51:53 CEST] <imperito> In the former case I'd want ffmpeg to drop any excess frames so that 1 second of wall clock input made 1 second of video, regardless of how many frames were emitted by my application
[19:52:07 CEST] <kepstin> imperito: if you want to have the output as a "live stream", I would suggest using ffmpeg with the '-re' option, and writing frames from your python app as fast as possible. The -re option to ffmpeg will cause the python app to block while it waits for the next frame
[19:54:06 CEST] <kepstin> (with -re, ffmpeg will read a frame, sleep until the next frame should be shown, then read the next frame)
[19:54:17 CEST] <Smashcat> Hi, I'm trying to resize a video, which is working but I'm losing the audio track. If I do a "file 1.mp4" on the input file I get "1.mp4: ISO Media, MPEG v4 system, version 2". I am using the command "./ffmpeg -i 1.mp4 -an -vf scale=1600:960 -t 7 -vcodec h264 o1.mp4" to rescale. Can someone tell me what I'm doing wrong?
[19:54:28 CEST] <furq> -an removes the audio
[19:54:39 CEST] <Smashcat> Ah! thanks :)
[19:57:16 CEST] <imperito> OK, I'll look into the -re option
[19:57:23 CEST] <imperito> Thanks for the advice
[19:57:48 CEST] <imperito> Although I'm still at a loss as to what happens after the video is encoded
[19:58:33 CEST] <kepstin> imperito: the other option, which might also work in your case, is to do frame pacing in your python app, then use the 'setpts' ffmpeg filter to set the timestamps based on realtime (optionally followed by fps filter to smooth out the framerate)
[20:00:07 CEST] <imperito> kepstin: I don't see "-re" in man ffmpeg. Is it new?
[20:00:35 CEST] <imperito> kepstin: nevermind
[20:00:46 CEST] <imperito> Apparently my pager doesn't autocorrect typos
[20:09:05 CEST] <kepstin> imperito: if you want to try the second option - where you output paced frames from your python app, which ffmpeg reads as fast as it can - an ffmpeg filter chain like '-vf settb=AVTB,setpts=RTCTIME-RTCSTART,fps=30' would adjust the frame timestamps to the real time they were read, then normalize the framerate to 30fps by dropping/duplicating frames.
[20:09:24 CEST] <kepstin> (in that case, you wouldn't use -re)
[20:13:41 CEST] <imperito> OK
[20:13:54 CEST] <imperito> I'll probably try it both ways, see what makes sense
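A sketch of that second option end to end (the frame size, pixel format and python producer are hypothetical; the filter chain is kepstin's):

    python gen_frames.py | ffmpeg -f rawvideo -pixel_format rgb24 -video_size 1280x720 -i - -vf "settb=AVTB,setpts=RTCTIME-RTCSTART,fps=30" -c:v libx264 -preset veryfast -g 60 -f hls -hls_time 4 stream.m3u8

The python side just writes packed rgb24 bytes for each frame to stdout as fast as it can; the setpts step stamps each frame with the wall-clock time it arrived, and fps=30 normalizes that to a steady rate.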
[20:25:05 CEST] <prelude2004c> already put on developer chat but .. just in case someone here knows what the diff. is ( ok so i can confirm that the git version is different than the 3.0.1 version ... eg..( 3.0.1 ) >  Stream #0:4[0x1511]: Video: h264 (High) ([27][0][0][0] / 0x001B), yuv420p(tv, bt709), 1280x720 [SAR 1:1 DAR 16:9], Closed Captions, 59.94 fps, 59.94 tbr, 90k tbn ... &  ( GIT version ) >  Stream #0:4[0x1511]: Video: h264 (High) ([27][0][0][0] / 0x001B), yuv420p(tv, bt709), 1280x720 [SAR 1:1 DAR 16:9], 59.94 fps, 59.94 tbr, 90k tbn )
[20:25:35 CEST] <prelude2004c> the closed caption data is seen on the 3.0.1 and 3.0.2 .. the git version does not see the closed captions
[20:25:39 CEST] <prelude2004c> odd right ?
[20:38:36 CEST] <nille002> does someone have good knowledge about nvenc usage? i tried a 2-pass encode with it but the statfile is empty each time. my first run test looks like this
[20:38:36 CEST] <nille002> ffmpeg -y -i "input.mkv" -c copy -c:v nvenc_hevc -2pass 1 -pass 1 -b:v 4000k -an -f matroska NUL
[20:40:35 CEST] <nille002> that works but like i said the statfile is empty. the second run with ffmpeg -n -i "input.mkv" -c copy -c:v nvenc_hevc -2pass 2 -pass 2 -b:v 4000k -f matroska "output.mkv" can only fail then
[20:48:03 CEST] <kepstin> nille002: as far as I know, nvenc doesn't actually support a real 2-pass mode, and you shouldn't use the '-pass' option with it. The -2pass option just enables a thing where it analyzes each individual frame before encoding, to better distribute quality within the frame.
[20:48:26 CEST] <kepstin> kinda confusing, and they should have called it something else :/
[20:48:34 CEST] <nille002> ok thank you
[20:49:32 CEST] <nille002> is the -tier option somewhere documented? because i find nothing about it
[20:49:44 CEST] <nille002> -tier              <string>     E..V.... Set the encoding tier (main or high) (default "main")
[20:59:26 CEST] <jnorthrup> i have been encoding mpeg 720x486x29.97    but i need 720x486x60i
[20:59:36 CEST] <jnorthrup> the input is in interlaced
[21:01:28 CEST] <kepstin> jnorthrup: 60 fields per second is usually represented as 30 frames per second, with each frame containing two fields
[21:01:34 CEST] <kepstin> jnorthrup: that looks as expected
[21:02:13 CEST] <kepstin> (might be some encoder options needed so that everything knows the video is interlaced rather than progressive, but that's it)
[21:02:24 CEST] <jnorthrup> so... -vf tinterlace is where my rtfm leads me to
[21:02:48 CEST] <jnorthrup> i guess i want to transcode with whatever interlace it had before and preserve the thing it said it had
[21:04:35 CEST] <furq> what are you encoding it to
[21:04:59 CEST] <kepstin> and what are you encoding it from?
[21:06:25 CEST] <jnorthrup>  http://pastie.org/10860621
[21:06:51 CEST] <nille002> jnorthrup i would try to get rid of the interlacing, it only makes problems later ...
[21:07:21 CEST] <jnorthrup> im working on broadcast endpoint bugfixes
[21:07:55 CEST] <furq> Sorry, there is no pastie #10860621 or it has been removed. Why not create a new pastie?
[21:08:21 CEST] <jnorthrup> http://pastie.org/10860621#3,34
[21:08:30 CEST] <furq> yeah it still doesn't exist
[21:09:00 CEST] <imperito> kepstin: I'm working on getting the -re option working, it isn't doing quite what I want yet. I'm wondering if I've got it in the wrong place.
[21:09:30 CEST] <jnorthrup> http://pastebin.com/M8ZPrS7M
[21:11:02 CEST] <llogan> imperito: make sure you're using it as an input option, not output option
[21:11:32 CEST] <kepstin> jnorthrup: so, is the original source video interlaced or progressive? if it's a movie at '29.97fps', i'd assume that it's been telecined, so the video  is interlaced.
[21:13:32 CEST] <jnorthrup> kepstin, apparently
[21:13:40 CEST] <jnorthrup> intelraced
[21:14:33 CEST] <nille002> i guess you have to deinterlace it, and interlace it again for the output
[21:14:52 CEST] <furq> i assume you can just use `-flags +ilme+ildct` to do an interlaced encode
[21:14:54 CEST] <kepstin> so in this case, it should simply be a matter of ensuring that the mpeg2 encoding is being done with interlaced-aware encoding mode, and that appropriate interlaced flags are set in the output container
[21:15:37 CEST] <jnorthrup> so -vf yadif undoes it, then `-flags +ilme+ildct`
[21:15:38 CEST] <ChocolateArmpits> nille002: if it's a movie with visible combing then typical deinterlacing would be destructive. He needs to check if inverse telecine does the job first
[21:15:54 CEST] <jnorthrup> so -vf yadif undoes it, then `-flags +ilme+ildct`  ?
[21:16:12 CEST] <ChocolateArmpits> yadif is a deinterlacer for actually interlaced content
[21:16:17 CEST] <kepstin> jnorthrup: from the sounds of it, you *do not* want to remove the interlacing, since the output is still supposed to be interlaced
[21:17:21 CEST] <ChocolateArmpits> If it's a 24p movie that has been converted to 29.97fps then it will have interlace combing but typical deinterlacer like yadif won't do the perfect job
[21:17:24 CEST] <kepstin> jnorthrup: so do not use a deinterlacing filter. simply use the flags to tell the encoder to do interlaced output.
[21:17:40 CEST] <furq> jnorthrup: you may also need -top 0 or -top 1 to set the field order
[21:18:11 CEST] <kepstin> jnorthrup: you might also need to pass an extra option to the scale filter to get it to do interlaced-aware scaling.
[21:18:14 CEST] <ChocolateArmpits> probably -top 1 as consumer video like dvd has that
[21:18:28 CEST] <jnorthrup> this may be sourced from masters
[21:18:36 CEST] <jnorthrup> whatever that means
[21:18:40 CEST] <jnorthrup> anamorphic to be exact
[21:19:14 CEST] <furq> i believe the default is to autodetect, so don't set it manually unless it comes out wrong
[21:19:22 CEST] <furq> it'll be pretty obvious if the field order is swapped in the output
[21:20:06 CEST] <furq> this is also a remarkably unproductive google search
[21:20:18 CEST] <jnorthrup> hahahaha
[21:20:23 CEST] <furq> i didn't think my opinion of videohelp.com could get any lower, but it just did
[21:20:42 CEST] <jnorthrup> my next question is how to bring over all these pcm audio streams to mxf format via ffmpeg
[21:21:08 CEST] <jnorthrup> -map 0 -c copy is not friendly to "data" streams
[21:21:10 CEST] <kepstin> you're doing a scale from anamorphic 720x480 to letterbox, which changes the height. so make sure you are doing interlace-aware scaling. i think there's a filter option for that.
[21:21:25 CEST] <furq> -map 0:a
[21:21:35 CEST] <furq> after -map 0:v ofc
[21:22:10 CEST] <nille002> furq thank you i didnt know this is working ^^
[21:22:31 CEST] <furq> -vf scale=123:456:interl=1
[21:22:33 CEST] <furq> apparently, anyway
[21:23:09 CEST] <jnorthrup> >ffmpeg -i SpongeBob_TEST_Paramount.mov -map 0:v   -vf scale="720:396",setsar=1/1,pad="720:486:0:44:black" -map 0:a -c copy  -ar 48000 -ac 2 -r ntsc -qscale:v 2 -aspect 4:3 -flags +ilme+ildct  -y SBTP.mxf
[21:23:26 CEST] <furq> jnorthrup: ^
[21:23:30 CEST] <jnorthrup> Filtergraph 'scale=720:396,setsar=1/1,pad=720:486:0:44:black' was defined for video output stream 0:0 but codec copy was selected.
[21:23:33 CEST] <jnorthrup> Filtering and streamcopy cannot be used together.
[21:23:48 CEST] <furq> well yeah you want -c:a copy
[21:23:55 CEST] <furq> otherwise it won't reencode the video
[21:24:06 CEST] <kepstin> have to be careful with the pad to make sure the vertical offset is a multiple of 2 so you don't change the top/bottom field first stuff. I think you're good there with '44'.
[21:24:23 CEST] <furq> and yeah you need -vf scale=720:396:interl=1
[21:26:59 CEST] <jnorthrup> hew! uglified
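Pulling furq's and kepstin's corrections together, the command becomes something like this sketch (untested; -ar/-ac are dropped because a copied audio stream can't be resampled):

    ffmpeg -i SpongeBob_TEST_Paramount.mov -map 0:v -map 0:a -vf "scale=720:396:interl=1,setsar=1/1,pad=720:486:0:44:black" -r ntsc -qscale:v 2 -aspect 4:3 -flags +ilme+ildct -c:a copy -y SBTP.mxf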
[21:27:35 CEST] <kepstin> obviously you'll want to watch this on a 4:3 CRT television to make sure it's all coming out correctly :/
[21:28:00 CEST] <jnorthrup> getting the SDI dapter in tomorrow actually, have a debug SDI monitor
[21:28:48 CEST] <furq> is 4:3 CRT compliance still a thing
[21:28:53 CEST] <furq> how dreadful
[21:28:54 CEST] <kepstin> (a good check to make sure everything's working would be to run the output file through a detelecine filter like 'pullup' to see if you get a reasonable 24fps video)
[21:29:03 CEST] <tonsofpcs> is there a way to have ffmpeg output the audio on stdout and ONLY the audio?
[21:29:17 CEST] <furq> -map 0:a
[21:29:22 CEST] <jnorthrup> well, i guess interlace makes quality worse, might be interesting to see a real interlaced monitor play it
[21:29:44 CEST] <kepstin> jnorthrup: if you can at all possibly do it, please don't generate an interlaced signal ;)
[21:29:54 CEST] <jnorthrup> hahahaha
[21:30:00 CEST] <furq> there's nothing wrong with interlacing if you live in 1993
[21:30:01 CEST] <jnorthrup> customer specified 60i
[21:30:14 CEST] <furq> but if you don't then yes, there is everything wrong with interlacing
[21:30:44 CEST] <jnorthrup> im new to broadcast, just humoring the various opinions
[21:31:10 CEST] <kepstin> assuming people are going to be using actual modern tvs to watch this, you should check what the video looks like after it's run through various deinterlacing and/or detelecine filters
[21:31:27 CEST] <kepstin> since barely has tvs that can natively display interlaced any more :/
[21:31:32 CEST] <kepstin> barely anyone*
[21:32:12 CEST] <furq> you'd think that would maybe prompt broadcasters to stop using 50i/60i
[21:32:48 CEST] <tonsofpcs> sorry, we use 59.94
[21:32:51 CEST] <jnorthrup> i can bring this up as a failing point of taking the early 2000's codebase with a grain of salt
[21:32:57 CEST] <jnorthrup> without
[21:33:03 CEST] <tonsofpcs> (well, 60*1000/1001)
[21:33:06 CEST] <kepstin> well, in some cases that's bw limits, they don't have enough for 60p, and sports broadcast does actually look smoother in 60i than 30p usually
[21:33:10 CEST] <tonsofpcs> (and we call it 29.97)
[21:33:27 CEST] <kepstin> tonsofpcs: yes, we know that, we're all just rounding so we don't have to type as much.
[21:33:34 CEST] <tonsofpcs> something something ATSC 3.0
[21:33:53 CEST] <tonsofpcs> kepstin: oh, no, some folks actually use 60i and wonder why they end up with sync and playback issues...
[21:33:55 CEST] <podman> Anyone know the algorithm used by the thumbnail filter?
[21:34:59 CEST] <c_14> podman: http://notbrainsurgery.livejournal.com/29773.html
[21:35:02 CEST] <tonsofpcs> kepstin: well, don't the newer "120fps" and 'faster' sets actually use the interlace data to recreate 'frames' that are different, not deinterlaced frame-to-frame?
[21:35:18 CEST] <c_14> As listed in the first non-license comment of vf_thumbnail.c
[21:35:59 CEST] <kepstin> tonsofpcs: i'd expect that the 120fps interpolated tv sets usually work by starting with 30p or "60i deinterlaced to 60p", then do motion compensated interpolation to 120p.
[21:36:14 CEST] <kepstin> but i'm not a tv scaler asic designer, so I don't know for sure :)
[21:36:17 CEST] <tonsofpcs> kepstin: let me cry a little
[21:36:19 CEST] <tonsofpcs> :)
[21:36:35 CEST] <podman> c_14: Thanks! yeah, didn't check out the code. sorry. Seems like it should be listed here too: https://ffmpeg.org/ffmpeg-filters.html#thumbnail
[21:37:11 CEST] <tonsofpcs> I need to find an HDMI to component 'adapter' that lets me choose what rates it advertises so I can put my HD CRT to useful use...
[21:37:15 CEST] <furq> does motion compensation to 120fps look more or less like a mexican telenovela than motion compensation to 60fps
[21:37:23 CEST] <tonsofpcs> more.
[21:37:41 CEST] <furq> that's good to know
[21:37:46 CEST] <tonsofpcs> that said, if you want something that looks really awesome check out the brazilian stuff.
[21:37:53 CEST] <furq> i'll stick with my 60hz screens then
[21:38:06 CEST] <furq> since i don't play quake any more
[21:38:09 CEST] <tonsofpcs> they produce their telenovelas in 60p
[21:38:14 CEST] <furq> nice
[21:38:25 CEST] <tonsofpcs> (and it wouldn't surprise me if it were 4k 60p)
[21:38:38 CEST] <tonsofpcs> furq: was the map 0:a for me?
[21:38:50 CEST] <furq> yes
[21:38:56 CEST] <furq> assuming you only want audio output
[21:39:17 CEST] <furq> if you want the video and audio output to a file and the audio on stdout then maybe the tee muxer will do it
[21:39:28 CEST] <kepstin> tonsofpcs: if you want audio on stdout and video in a file somewhere, you can use multiple output files with different -map on each
[21:39:40 CEST] <furq> oh yeah that's a less stupid idea
[21:39:40 CEST] <tonsofpcs> furq: nah, just capturing from alsa, want to output just the audio to stdout as wav or raw pcm (wav w/o header)
[21:39:52 CEST] <tonsofpcs> (no other output)
[21:40:02 CEST] <furq> what's the problem then
[21:41:25 CEST] <kepstin> tonsofpcs: and if that's all you're doing, you don't even really need all of ffmpeg, you could just use arecord or something.
[21:41:45 CEST] <tonsofpcs> kepstin: well, device only supports 2 channel capture but I need single channel output to the pipe
[21:42:03 CEST] <tonsofpcs> so I'm doing ffmpeg -f alsa -ac 2 -i hw:1 -ac 1 -ar 8000 -f wav - 2>null
[21:42:03 CEST] <kepstin> ah, so you're using some filters and stuff.
[21:42:46 CEST] <tonsofpcs> I'm also trying to figure out how to set the buffer size because it seems like it's set quite high
[21:44:24 CEST] <kepstin> if you're piping to stdout, you also have to worry about the pipe buffer, etc.
[21:44:50 CEST] <tonsofpcs> I'm thinking -f wav pipe:1 2>null and I found a page that references setting the buffer for the pipe output but it doesn't give an example
[21:46:49 CEST] <tonsofpcs> https://ffmpeg.org/ffmpeg-protocols.html#pipe
[21:46:53 CEST] <tonsofpcs> "blocksize"
[21:49:31 CEST] <tonsofpcs> so is that pipe:1:blocksize:2048 or pipe,blocksize=2048:1 or pipe,bocksize,2048:1 or --blocksize 2048 pipe:1 or ???
[21:50:00 CEST] <tonsofpcs> or pipe:1:2048 perhaps?
[21:51:33 CEST] <tonsofpcs> andrey_utkin may perhaps know?
[21:54:28 CEST] <tonsofpcs> I suppose it helps if I run a version of ffmpeg newer than this change *curses rh*
[21:55:43 CEST] <llogan> tonsofpcs: just download a build if yours is old: http://johnvansickle.com/ffmpeg/
[21:55:45 CEST] <tonsofpcs> (looks like -blocksize 2048 before or after pipe:1 might work as it says invalid option but this is version 0.6.5 lol)
[21:55:52 CEST] <llogan> ancient and unsupported
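With a current build, the pipe protocol's blocksize option from that documentation page is passed like any other output option, so something like this sketch of tonsofpcs' command (behaviour with the wav muxer untested):

    ffmpeg -f alsa -ac 2 -i hw:1 -ac 1 -ar 8000 -f wav -blocksize 2048 pipe:1 2>/dev/null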
[22:02:15 CEST] <prelude2004c> llogan, can you help with my answer as to why ffmpeg 3.0.1 shows the closed captions from the TS input and the git version does not
[22:04:14 CEST] <tonsofpcs> llogan: yea, I just followed someone's "use this third party repo" guide and ... well, lol
[22:06:05 CEST] <podman> c_14: do you have any idea if there are ways to optimize memory usage for that? I was thinking that maybe I could reduce both the frame rate and resolution of the input so that it doesn't have to calculate the histogram of a huge 4k video. Especially if the beginning of the video is a fade in... probably worth analyzing at least the first three seconds or so
[22:06:38 CEST] <podman> Seems like using -r before the input and setting it to something like 1 reduces the amount of memory and CPU used but am just looking for other possible optimizations
[22:13:51 CEST] <c_14> fewer frames/smaller frames
[22:15:53 CEST] <c_14> short of optimizing internal data structures etc
[22:17:16 CEST] <podman> alright, so that's what I guessed
[22:18:54 CEST] <llogan> prelude2004c: sorry, i know nothing of closed captions. maybe one build has --enable-libzvbi and the other doesn't?
[22:35:50 CEST] <tonsofpcs> yay, it works!
[22:38:24 CEST] <tonsofpcs> now to test it in the system...
[22:39:00 CEST] <tonsofpcs> grrr, doesn't work
[22:45:58 CEST] <tonsofpcs> I wonder if it's because of the wav headers...
[23:08:31 CEST] <Guest35> Hi All, I have a question regarding VideoToolbox on OS X/iOS. This table: https://trac.ffmpeg.org/wiki/HWAccelIntro seems to indicate that VideoToolbox encoding is supported with ffmpeg, but I see no evidence of that. Does anyone know how to use VideoToolbox for encoding (either programmatically, or through cli)?
[23:09:40 CEST] <rkern> Guest35: it's in master, but not in a release yet.
[23:13:34 CEST] <Guest35> rkern: thank you, I will try master!
[23:18:53 CEST] <kepstin> tonsofpcs: if you don't want wav headers, use -f rawaudio
[23:19:08 CEST] <kepstin> er, got that wrong
[23:19:12 CEST] Action: kepstin looks up the correct one
[23:19:53 CEST] <kepstin> ah, you'll want '-f s16le' or whatever's appropriate to the audio format you're using
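i.e. for the alsa capture above, a sketch of the headerless variant:

    ffmpeg -f alsa -ac 2 -i hw:1 -ac 1 -ar 8000 -f s16le pipe:1 2>/dev/null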
[00:00:00 CEST] --- Thu Jun  2 2016


More information about the Ffmpeg-devel-irc mailing list