[Ffmpeg-devel-irc] ffmpeg.log.20170817
burek
burek021 at gmail.com
Fri Aug 18 03:05:01 EEST 2017
[00:41:35 CEST] <medfly> Hi, friends. I can't build ffmpeg 3.3.3 with -Wl,--warn-shared-textrel -Wl,--fatal-warnings on netbsd/i386.
[00:41:54 CEST] <medfly> kinda "don't do that" but also I believe that text relocations violate W^X
[00:42:05 CEST] <medfly> I may be wrong
[00:42:30 CEST] <medfly> if I add -fPIC to CFLAGS, it gets further, but not to the end
[00:42:48 CEST] <medfly> LD libavresample/libavresample.so.3
[00:42:48 CEST] <medfly> ld: libavresample/x86/audio_convert.o: warning: relocation in readonly section `.text'
[00:42:51 CEST] <medfly> ld: warning: creating a DT_TEXTREL in a shared object.
[00:42:54 CEST] <medfly> library.mak:93: recipe for target 'libavresample/libavresample.so.3' failed
[00:44:02 CEST] <medfly> I think if you attempt to do a text relocation at runtime on a platform like netbsd which enforces W^X it will crash
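Editor's note: the workarounds that come up later in this log boil down to a configure-time choice. A sketch (these are real FFmpeg configure flags, but whether they fix this particular NetBSD/i386 build is untested):

```shell
# Build position-independent code so the linker doesn't need to emit
# text relocations; untested on NetBSD/i386.
./configure --enable-pic --extra-cflags=-fPIC
# or, more drastically, drop the hand-written asm that commonly
# causes text relocations on i386:
./configure --disable-asm
make
```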
[00:44:47 CEST] <iive> medfly: how is code written into memory? only dma allowed ?
[00:45:09 CEST] <iive> also, it would help if there is an option to say the name of the relocation symbol.
[00:45:14 CEST] <medfly> there are exceptions
[00:47:10 CEST] <medfly> just that the default allocation will silently not allow you to map things executable
[00:47:16 CEST] <medfly> if they're writable

[00:47:27 CEST] <medfly> not great, i know
[00:48:07 CEST] <iive> well, I would assume that it would allow writing the code as data and then forbid writing and allow execution in a single operation.
[00:48:32 CEST] <iive> so a texrel should be done after the writing and before the switch of page attributes
[00:49:39 CEST] <medfly> I think program .text is mapped RX, so if you attempt to do te relocation and write, it will fail to write. maybe it still succeeds. but there's nothing attempting to change mapping
[00:49:54 CEST] <medfly> fail to write as in memory violation
[00:50:53 CEST] <medfly> I don't use the extension thing that allows to enforce memory protections actually, but just saw my rtld complain about text relocations
[00:50:58 CEST] <medfly> and other things are crashy, so suspected
[00:51:07 CEST] <medfly> s/memory protections/fine grained memory protections/
[00:52:03 CEST] <medfly> $ mpv *
[00:52:03 CEST] <medfly> /usr/pkg/lib/ffmpeg3/libavcodec.so.57: text relocations
[00:52:03 CEST] <medfly> /usr/pkg/lib/ffmpeg3/libavutil.so.55: text relocations
[00:52:29 CEST] <medfly> It manages to run! but I had to switch to an old version of firefox which is maybe related
[00:52:38 CEST] <iive> things being crashy is not proof for your suspicions. especially if they crash after the program starts.
[00:53:12 CEST] <medfly> It doesn't do the relocations at load, it lazily does them as needed
[00:53:41 CEST] <logicaltechno> Hello. I am the developer of an Android game that relies heavily on ffmpeg to extract frames from multiple videos at a time. I am trying to improve the performance of said decoding by using the hardware-accelerated h264_mediacodec feature. I am currently experiencing this error spammed in the logs: E C2DColorConvert: unknown format passed for luma alignment number. I also see this in the logs: W VideoCapabilities: Unrecognized profile
[00:54:22 CEST] <medfly> I will make a bug report :)
[00:55:03 CEST] <iive> medfly: try to find the texrels with `objdump`
[00:56:18 CEST] <medfly> hmm, I am told this is how to do it
[00:56:21 CEST] <medfly> $ readelf --relocs /usr/pkg/lib/ffmpeg3/libavcodec.so.57 | egrep '(GOT|PLT|JU?MP_SLOT)'
[00:56:23 CEST] <medfly> 00a5532c 00005307 R_386_JUMP_SLOT 00000000 _Jv_RegisterClasses
[00:56:26 CEST] <medfly> 00a55330 00009607 R_386_JUMP_SLOT 00000000 __cxa_finalize
[00:56:28 CEST] <medfly> that doesn't seem like ffmpeg functions
[00:56:30 CEST] <medfly> 00a55334 0001b807 R_386_JUMP_SLOT 00000000 __register_frame_info
[00:56:33 CEST] <medfly> 00a55338 00024c07 R_386_JUMP_SLOT 00000000 __deregister_frame_inf
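For readers following along: the JUMP_SLOT entries above are ordinary PLT relocations, not text relocations. A quicker check for the actual DT_TEXTREL dynamic tag (library path taken from the log, adjust as needed):

```shell
# A non-empty result means the shared object was linked with
# text relocations (the DT_TEXTREL flag the runtime linker warns about).
readelf -d /usr/pkg/lib/ffmpeg3/libavcodec.so.57 | grep TEXTREL
```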
[00:56:53 CEST] <iive> yeh
[00:57:21 CEST] <medfly> I wonder if it's something about my local toolchain that is creating its own text relocations unless built with -fPIC
[00:57:24 CEST] <medfly> I'll ask :)
[00:57:41 CEST] <logicaltechno> You can work around the text relocations with --disable-asm, --disable-yasm, --disable-inline-asm, --disable-neon
[00:58:32 CEST] <logicaltechno> text relocations are fixed in ffmpeg versions newer than 3.something... there's no text relocations in the 3.3 version that I'm using
[00:58:40 CEST] <iive> logicaltechno: i think recent ffmpeg uses PIC for asm too
[00:59:32 CEST] <logicaltechno> I'm looking at a commit I made last year called "remove text relocations in ffmpeg" where I just changed compile options. This year, I'm using ffmpeg 3.3 and do not have to remove asm
[01:00:47 CEST] <logical> Is anyone aware of the cause of " E C2DColorConvert: unknown format passed for luma alignment number." when using android hardware acceleration?
[01:01:01 CEST] <logicaltechno> Yo logical and me are the same person..
[01:03:14 CEST] <iive> no idea about that.... i'm not even sure what luma alignment number is. i mean, luma is the Y plane in YUV,
[01:04:58 CEST] <iive> and alignment in ffmpeg is done to set planes to start from specific addresses, or each line of the image to have a specific size (linesize/stride).
[01:05:27 CEST] <iive> but then why would you need a specific format to pass a number for it...
[01:05:46 CEST] <logicaltechno> Sorry I missed what you said before "and alignment"
[01:06:02 CEST] <iive> "no idea about that.... i'm not even sure what luma alignment number is. i mean, luma is the Y plane in YUV, "
[01:06:24 CEST] <logicaltechno> I see. Thanks for giving my issue your attention.
[01:06:55 CEST] <logicaltechno> The video looks properly encoded. This android source file is where the error originates from: https://android.googlesource.com/platform/hardware/qcom/media/+/2a914b33529b826083ff39ec764426680367b157/libc2dcolorconvert/C2DColorConverter.cpp
[01:07:16 CEST] <iive> there are people who know more... just wait a bit longer.
[01:08:26 CEST] <iive> so that's from line 531?
[01:09:01 CEST] <iive> NV12 is known image format.
[01:10:20 CEST] <iive> the function seems to be called from ::getBuffReq()
[01:10:26 CEST] <logicaltechno> Yes its from line 531. Exactly.
[01:11:37 CEST] <iive> so, since you are decoding the video, I'd assume that it expects the output format to be nv12_2k
[01:14:09 CEST] <logicaltechno> I'm not sure how to check that. In what function call would nv12 be specified?
[01:15:02 CEST] <logicaltechno> If I run ffprobe on my videos I get "Stream #0:0(eng): Video: h264 (Main) (avc1 / 0x31637661), yuv420p,"
[01:15:11 CEST] <logicaltechno> Maybe I need to encode them with an nv12 color format?
[01:15:57 CEST] <logicaltechno> maybe I need "-pix_fmt nv12"
[01:17:39 CEST] <logicaltechno> Alright I'm encoding all my videos with -pix_fmt nv12, see ya in 30 minutes!
[01:17:40 CEST] <iive> logicaltechno: are you using ffmpeg/ffplay? or libavcodec in some program.
[01:17:48 CEST] <logicaltechno> libavcodec in an android program
[01:18:40 CEST] <iive> well, iguess setting breakpoint at the error message and checking the backtrace is not as simple on adb?
[01:20:33 CEST] <logicaltechno> It's relatively simple for my execution environment. The break would be in Android source code. I'm pretty optimistic about the chances of this nv12 re-encode. The thing that confuses me is that I downloaded someone's example and ran ffprobe on his example mp4, and it was yuv color format
[01:27:28 CEST] <furq> logicaltechno: if you're using x264 then -pix_fmt nv12 won't change anything
[01:28:17 CEST] <furq> x264 will convert it to yuv420p internally
[01:29:34 CEST] <logicaltechno> Well snap. so if the error is from getBuffReq in https://android.googlesource.com/platform/hardware/qcom/media/+/2a914b33529b826083ff39ec764426680367b157/libc2dcolorconvert/C2DColorConverter.cpp
[01:30:00 CEST] <logicaltechno> then I need to make "isYUVSurface(format)" return true
[01:33:35 CEST] <redrabbit> hi
[01:33:36 CEST] <redrabbit> Cmd=-analyzeduration {analyzeduration} {offset} {realtime} -i "{infile}" -f mpegts -pat_period 0.2 -c:v libx265 -crf 21 -x265-params vbv-maxrate=1000:vbv-bufsize=2000 -g 50 {framerate} -map 0:v:0 -map 0:a:0 -vf "yadif=0:-1:1, scale={scalex}:{scaley}" -preset {vpreset} -level 30 -c:a aac -b:a 148k -ar 44100 -cutoff 15000 -ac 2 -async 1 -y "{outfile}"
[01:33:57 CEST] <redrabbit> i use this command atm, id like to passthrough the audio instead of transcoding it
[01:34:05 CEST] <redrabbit> unsure what i should change
[01:34:20 CEST] <logicaltechno> -c:a copy
[01:34:25 CEST] <redrabbit> thanks
[01:34:30 CEST] <logicaltechno> get rid of -b:a 148k
[01:34:51 CEST] <logicaltechno> probably get rid of -ar, -cutoff, -ac too
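Putting the advice together, redrabbit's command with the audio passed through instead of re-encoded (placeholders kept from the original command):

```shell
# -c:a copy passes the audio stream through untouched, so the
# encode-side audio options (-b:a, -ar, -cutoff, -ac) no longer apply.
ffmpeg -i "{infile}" -f mpegts -pat_period 0.2 \
  -c:v libx265 -crf 21 -x265-params vbv-maxrate=1000:vbv-bufsize=2000 -g 50 \
  -map 0:v:0 -map 0:a:0 -vf "yadif=0:-1:1, scale={scalex}:{scaley}" \
  -preset {vpreset} -level 30 \
  -c:a copy -y "{outfile}"
```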
[01:36:16 CEST] <redrabbit> works !
[01:36:19 CEST] <redrabbit> o/
[01:36:43 CEST] <logicaltechno> \o
[01:37:02 CEST] <logicaltechno> anyone know where the format for "isYUVSurface" is coming from?
[01:37:33 CEST] <redrabbit> oh yeah that's much better. sounds nicer than transcoding for sure
[01:59:45 CEST] <RandomCouch_> Hi, I'm trying to stream an mp3 file to an rtmp server and I'm having some issues. I'm trying to combine the mp3 with an image
[01:59:50 CEST] <RandomCouch_> this is my command
[02:00:05 CEST] <RandomCouch_> "avconv -re -i " + filePath + " -loop 1 -i local.jpg -map 0:a -map 1:v -preset veryfast -maxrate 320k -bufsize 256k -g 25 -c:a copy -b:v 128k -s 320x240 -r 30 -ar 44100 -f flv " + stream
[02:00:36 CEST] <RandomCouch_> the rtmp server receives it but it's very laggy
[02:00:38 CEST] <RandomCouch_> or choppy
[02:06:48 CEST] <hiihiii> how do you set user agent
[02:07:15 CEST] <hiihiii> I've been using -header but no success
[02:07:34 CEST] <hiihiii> I'm on a windows machine
[02:08:03 CEST] <RandomCouch_> -headers 'User-Agent: "FMLE/3.0 (compatible; FMSc/1.0)"'
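Two common ways to set the UA; the -user_agent shortcut avoids the trailing-CRLF pitfall that comes up below (the URL is a placeholder):

```shell
# Option 1: the dedicated HTTP protocol option.
ffmpeg -user_agent "FMLE/3.0 (compatible; FMSc/1.0)" -i "http://example.com/in" out.mp4

# Option 2: a raw header; -headers wants each header terminated with CRLF.
ffmpeg -headers $'User-Agent: FMLE/3.0 (compatible; FMSc/1.0)\r\n' -i "http://example.com/in" out.mp4
```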
[02:09:25 CEST] <RandomCouch_> so, I'm trying to stream this mp3 file combined with an image to an rtmp server, my previous command is actually working fine now, but the stream doesn't stop when it reaches the end of the mp3 because I'm looping the image with -loop 1
[02:09:39 CEST] <hiihiii> I've tried that with \r\n at the end but still not solving my issue
[02:09:47 CEST] <RandomCouch_> is there a way to have the stream stop at the end of reading the mp3 but loop the image ?
[02:11:34 CEST] <hiihiii> RandomCouch_: maybe you could set shortest=1 somewhere idk
[02:12:12 CEST] <hiihiii> -loop isn't a filter so I'm not sure
[02:12:25 CEST] <RandomCouch_> -shortest 1 ?
[02:12:31 CEST] <RandomCouch_> I just tried that it gave me Unable to find a suitable output format for '1'
[02:12:56 CEST] <hiihiii> it's not an option
[02:13:14 CEST] <hiihiii> try setting -t with the mp3's duration
[02:14:19 CEST] <RandomCouch_> hm
[02:14:25 CEST] <hiihiii> hope that won't override loop
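For the record, -shortest is a flag rather than an option taking a value, which is why "-shortest 1" failed above. A sketch of RandomCouch_'s command with it added (filenames and the rtmp URL are placeholders):

```shell
# -shortest ends the output when the shortest input (the mp3) runs out,
# even though -loop 1 would otherwise repeat the image forever.
ffmpeg -re -i input.mp3 -loop 1 -i local.jpg \
  -map 0:a -map 1:v -shortest \
  -c:a copy -c:v libx264 -pix_fmt yuv420p -r 30 -s 320x240 \
  -f flv rtmp://example.com/live/stream
```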
[02:18:48 CEST] <hiihiii> I get this error when setting the header "no trailing CRLF found in HTTP header". tried adding "\r\n" to it but nope
[02:31:19 CEST] <pingouin> hello
[02:32:14 CEST] <pingouin> is it allowed to copy paste command lines on the chan ? or is a pastebin url/link mandatory ?
[02:35:41 CEST] <JEEB> since you usually want to post the terminal output as well, that's why pastebin should be used :P
[02:37:49 CEST] <pingouin> ok thanks
[02:40:15 CEST] <pingouin> so, i record my desktop video+sound with ffmpeg and it's working great - i'm sure my command line options aren't the best, that's another problem - i can play back my screencast with mplayer, working great, but when i try to use some vdpau options to play back the file, i only get some choppy/globby green/yellow/black/nothing for the video, while the sound plays great
[02:40:37 CEST] <pingouin> my command line to record with ffmpeg and playback with mplayer : https://pastebin.com/YVyi3yN9
[02:41:50 CEST] <pingouin> installed the latest nvidia drivers to give it a shot - mine were old - no change
[02:43:00 CEST] <pingouin> the mplayer playback with vdpau options work great on dvd or others files i did not recorded myself with ffmpeg
[03:08:29 CEST] <kepstin> pingouin: with that ffmpeg command, your video will probably be yuv444p pixfmt, which i guess your vdpau decoder doesn't handle correctly.
[03:09:36 CEST] <kepstin> you can add "-pix_fmt yuv420p" to the ffmpeg command and it'll use a more widely supported format
[03:09:54 CEST] <kepstin> (with some quality loss, since it reduces the chroma resolution)
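For example (the capture half of this command is illustrative; the point is the added -pix_fmt):

```shell
# Force the widely supported 4:2:0 pixel format so VDPAU-era hardware
# decoders can handle the recorded file.
ffmpeg -f x11grab -video_size 1920x1080 -i :0.0 \
  -c:v libx264 -pix_fmt yuv420p \
  screencast.mkv
```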
[03:10:10 CEST] <pingouin> ok, hmmm, how can i find the yuv444p info from the video ? i see nothing with the mplayer -identify
[03:10:23 CEST] <pingouin> i've to try that
[03:14:25 CEST] <redrabbit> is there a way to offload some of the load of that command to the gpu decoder ?
[03:14:26 CEST] <redrabbit> Cmd=-analyzeduration {analyzeduration} {offset} {realtime} -i "{infile}" -f mpegts -pat_period 0.2 -c:v libx265 -crf 19 -x265-params vbv-maxrate=3200:vbv-bufsize=6400 -g 50 {framerate} -map 0:v:0 -map 0:a:0 -vf "yadif=0:-1:1, scale={scalex}:{scaley}" -preset {vpreset} -level 30 -c:a copy -async 1 -y "{outfile}"
[03:14:41 CEST] <redrabbit> i have a 1060 6gb
[03:15:01 CEST] <redrabbit> i heard they had encoders built in but im unsure i can use them
[03:15:51 CEST] <redrabbit> gains from decoding source from the hw decoders were marginal last time i tried
[03:16:36 CEST] <iive> i think you have to look for "nvenc"
[03:17:44 CEST] <redrabbit> h264_nvenc
[03:17:58 CEST] <redrabbit> looks like it doesnt do HEVC
[03:18:00 CEST] <redrabbit> ?
[03:18:16 CEST] <iive> it does, if it is new enough.
[03:18:34 CEST] <redrabbit> i shall use h265_nvenc then i guess
[03:18:35 CEST] <iive> (driver and ffmpeg)
[03:18:51 CEST] <redrabbit> last drivers and ffmpeg
[03:19:01 CEST] <iive> well, yeh. h265 is hevc
[03:21:29 CEST] <furq> redrabbit: nvenc is much lower quality than x265
[03:21:51 CEST] <furq> it's not that bad for realtime stuff though
[03:23:07 CEST] <pingouin> kepstin: thank you, it's working with this strange option ! GPU decode works, and the video is playing without any blob
[03:23:14 CEST] <pingouin> wtf is this option ?!
[03:24:11 CEST] <iive> yuv is a different color space, similar to the one used internally by analog color TV
[03:25:10 CEST] <iive> Y is luminance (aka gray levels) and U and V are called color difference (e.g. Green-Blue, Green-Red)
[03:26:17 CEST] <iive> a 1:1 conversion from rgb to yuv would produce yuv444. because the human eye is less sensitive to color changes, you can actually send less color info.
[03:26:49 CEST] <iive> this is done by sending the UV planes at half resolution for yuv422 (half width)
[03:27:09 CEST] <iive> or quarter resolution for yuv420 (half width and half height)
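The bandwidth effect of the subsampling iive describes can be checked with a little arithmetic (assuming 8-bit samples and a 1920x1080 frame):

```shell
# Bytes per uncompressed frame: the luma plane is w*h bytes;
# the two chroma planes shrink with subsampling.
w=1920; h=1080
yuv444=$(( w * h * 3 ))       # U and V at full resolution
yuv422=$(( w * h * 2 ))       # U and V at half width
yuv420=$(( w * h * 3 / 2 ))   # U and V at half width and half height
echo "yuv444=$yuv444 yuv422=$yuv422 yuv420=$yuv420"
```

So yuv420 carries half the raw bytes of yuv444 before the codec even starts.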
[03:27:34 CEST] <pingouin> whoaw.... :/
[03:28:14 CEST] <pingouin> and vdpau do not like the default used when i record...
[03:28:24 CEST] <pingouin> have to check what it can handle...somewhere
[03:29:34 CEST] <iive> it probably reports yuv444 as supported, but handles it as yuv420...
[03:30:00 CEST] <pingouin> http://us.download.nvidia.com/XFree86/Linux-x86_64/384.59/README/vdpausupport.html#vdpau-implementation-limits-video-surface
[03:30:09 CEST] <pingouin> is it what we talking about ?
[03:30:47 CEST] <iive> VDP_CHROMA_TYPE
[03:31:22 CEST] <furq> you probably want -pix_fmt yuv422p then
[03:31:37 CEST] <furq> unless 420 is acceptable quality
[03:31:51 CEST] <furq> oh nvm
[03:31:57 CEST] <furq> In all cases, VdpDecoder objects solely support 8-bit 4:2:0 streams,
[03:33:48 CEST] <pingouin> yeah quality is ok on everything, maybe a little blurry on command line text...
[03:34:33 CEST] <iive> lossy video doesn't like sharp edges
[03:35:59 CEST] <iive> -qp 0 would make libx264 lossless.
[03:36:31 CEST] <iive> if there are any artifacts, they could be from the yuv444->420 reduction.
[03:37:11 CEST] <iive> gtg, n8 ppl
[03:37:31 CEST] <pingouin> yeah, tried the -qp 0 option.....the file was huge !
[03:37:44 CEST] <pingouin> yeah, good night & thank iive
[03:39:38 CEST] <pingouin> thank you kepstin !
[03:40:11 CEST] <pingouin> i would never figure it out by myself....until 1000 years of test
[03:52:00 CEST] <redrabbit> Cmd=-analyzeduration {analyzeduration} {offset} {realtime} -i "{infile}" -f mpegts -pat_period 0.2 -c:v h264_nvenc -preset slow -bufsize 4000k -maxrate 2000k -crf 20 -g 50 {framerate} -map 0:v:0 -map 0:a:0 -vf "yadif=0:-1:1, scale={scalex}:{scaley}" -level 30 -c:a copy -async 1 -y "{outfile}"
[03:52:05 CEST] <redrabbit> im using this command
[03:52:13 CEST] <redrabbit> somehow it only works with mpeg2 sources
[03:52:19 CEST] <redrabbit> strange
[03:52:57 CEST] <redrabbit> h265_nvenc didn't work at all though
[03:55:44 CEST] <pingouin> thank you very much again.
[03:55:50 CEST] <pingouin> Good day/night
[04:13:19 CEST] <redrabbit> http://dpaste.com/3W24QPN
[04:13:34 CEST] <redrabbit> Error initializing output stream 0:0 -- Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height
[04:14:03 CEST] <redrabbit> when i try to transcode AVC source it doesnt start
[04:14:22 CEST] <redrabbit> mpeg2 source works though
[04:14:45 CEST] <redrabbit> probably an option in the command i posted before
[04:19:54 CEST] <hiihiii> hello
[04:20:26 CEST] <hiihiii> how do you pipe the output of ffmpeg to a compression tool, say zip
[04:21:37 CEST] <hiihiii> ffmpeg -i input | zip archive.zip ?
[04:22:29 CEST] <hiihiii> since zip expects an output followed by a list of inputs, the above command is failing
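zip indeed wants an archive name plus file arguments; a compressor that reads stdin (gzip, xz, zstd) fits a pipe better. A sketch, assuming a local input file and a streamable container:

```shell
# "-f matroska -" writes a stream-friendly container to stdout,
# which gzip then compresses on the fly.
ffmpeg -i input.mp4 -c copy -f matroska - | gzip > output.mkv.gz
```

Note that already-compressed video rarely shrinks further, so this mostly pays off for raw or lossless streams.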
[04:26:47 CEST] <redrabbit> "yadif=0:-1:1, scale={scalex}:{scaley}" was messing it up
[04:32:43 CEST] <redrabbit> doesnt look too bad for something that uses so little cpu
[04:40:01 CEST] <redrabbit> -c:v hevc_nvenc
[04:40:08 CEST] <redrabbit> and not -c:v h265_nvenc
[04:40:09 CEST] <redrabbit> heh
[04:40:11 CEST] <redrabbit> _o/
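As redrabbit found, the HEVC encoder is named hevc_nvenc, not h265_nvenc. A quick way to confirm what a given build ships, followed by a hypothetical invocation:

```shell
# List the NVENC encoders this ffmpeg build knows about.
ffmpeg -hide_banner -encoders | grep nvenc

# Then, for example (input/output names are placeholders):
ffmpeg -i input.ts -c:v hevc_nvenc -preset slow -b:v 3M -c:a copy output.ts
```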
[04:45:16 CEST] <redrabbit> hw encoding works nicely
[04:45:22 CEST] <redrabbit> hw decoding : Error while opening decoder for input stream #0:0 : Operation not permitted
[04:45:43 CEST] <redrabbit> only uses 10% cpu with hw decoding, but id like to shave as much as i can
[04:45:56 CEST] <redrabbit> without hw decoding*
[04:50:47 CEST] <redrabbit> ah, solved it
[06:20:35 CEST] <redrabbit> well looks like hw encoder for hevc maxes out around 3Mbit
[06:50:36 CEST] <xacktm> redrabbit: hey, do you mind me asking what you have installed for nvenc? I've tried this command to capture the screen but I keep getting "Cannot Init CUDA!" https://bpaste.net/show/de8052d290b8
[10:13:54 CEST] <JustASquid> Anyone familiar with dshow for webcam capture?
[14:02:09 CEST] <certaindestiny> Hi all, I am getting a Too many inputs specified for the setpts filter error when using this command https://pastebin.com/U7tzCf4Y, Anybody able to figure out why this is happening?
[14:03:34 CEST] <DHE> always include the output from ffmpeg as well
[14:03:48 CEST] <DHE> first guess without said output is that the input source has multiple video streams?
[14:04:18 CEST] <durandal_1707> certaindestiny: you do not name output pad for each filter
[14:04:31 CEST] <durandal_1707> only for some
[14:04:52 CEST] <certaindestiny> @durandal_1707, I am sorry can you please elaborate?
[14:05:34 CEST] <certaindestiny> @dhe, How can i specify which inputs to keep?
[14:06:39 CEST] <DHE> that's also a good point. if you separate filters with a comma, the outputs from one carry to the input of the next
[14:06:48 CEST] <durandal_1707> certaindestiny: what to elaborate? i described what you need to do, simply copypasta can not work
[14:06:55 CEST] <DHE> use a ;semicolon; if you want only the indicated names to be used
[14:07:30 CEST] <durandal_1707> you need to understand how filtergraphs are built
[14:09:09 CEST] <certaindestiny> @durandal_1707, Do you mean i cannot use both the setpts and hstack filter in the same line and need to apply first pts and then the hstack?
[14:09:51 CEST] <durandal_1707> certaindestiny: hstack takes and needs multiple inputs
[14:09:59 CEST] <DHE> the rules of how you specify filter inputs and outputs vary depending on what character you use.
[14:10:12 CEST] <durandal_1707> you can not do setpts,hstack
[14:10:19 CEST] <DHE> ;[input] filter=params [output]; # the semicolons means this is an isolated filter
[14:10:46 CEST] <DHE> , filter=params, # the commas mean to chain the previous output into this input, and this output into the next filter's input
[14:11:05 CEST] <durandal_1707> you need to do: setpts a, setpts b, a b hstack
[14:11:18 CEST] <certaindestiny> So i would need to do something along the lines of [0:v] setpts [a] ; [a] hstack
[14:11:20 CEST] <DHE> as a matter of style, I suggest always using ;semicolon; style when doing complex filters
[14:11:49 CEST] <DHE> [0:v] setpts=...[a] ; [1:v] setpts=... [b] ; [a][b] hstack [out]
[14:12:05 CEST] <DHE> this filtergraph will horizontally stack 2 videos and call the resulting video [out]
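DHE's filtergraph, wrapped in a complete command line (input and output filenames are placeholders):

```shell
# Two inputs, each given its own labeled setpts chain, then stacked
# side by side; -map picks the labeled filter output for the file.
ffmpeg -i left.mp4 -i right.mp4 \
  -filter_complex "[0:v] setpts=PTS-STARTPTS [a] ; [1:v] setpts=PTS-STARTPTS [b] ; [a][b] hstack [out]" \
  -map "[out]" output.mp4
```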
[14:12:58 CEST] <certaindestiny> ah ok ok ok, I thought i could directly apply the setpts filter to the video input
[14:13:30 CEST] <certaindestiny> as here: https://trac.ffmpeg.org/wiki/Create%20a%20mosaic%20out%20of%20several%20input%20videos both pts and scale were applied in a row
[14:15:29 CEST] <durandal_1707> certaindestiny: you can do the same just you need to connect right pads to filter input/outputs
[14:17:00 CEST] <certaindestiny> durandal_1707, Something like this? https://pastebin.com/NHP912tH
[14:19:08 CEST] <durandal_1707> certaindestiny: sure. but better use alphabet only for naming pads
[14:20:07 CEST] <certaindestiny> @durandal_1707, I am now getting No such filter: '' does it not allow spaces to be present?
[14:37:26 CEST] <durandal_1707> you mistyped something
[14:40:26 CEST] <certaindestiny> @durandal_1707, It is working now. Thanks for the help
[14:40:35 CEST] <certaindestiny> thanks as well DHE
[14:41:34 CEST] <ZoomZoomZoom> Hi! How can I extract an audio stream from a mp4 file, when the stream misses a header?
[14:42:40 CEST] <ZoomZoomZoom> Should I force an input format? It should work, but I don't know how to do it per-stream.
[14:43:05 CEST] <BtbN> misses a header? So, a broken mp4 file?
[14:44:09 CEST] <ZoomZoomZoom> Kinda. It says "[mp2 @ 00000000004edbe0] Header missing"
[14:45:54 CEST] <ZoomZoomZoom> I have 2 files from same source. ffprobe says "Stream #0:1[0x1c0]: Audio: mp2, 0 channels, s16p" for the audio stream in the first file and "Stream #0:1[0x1c0]: Audio: mp2, 0 channels" for the second.
[14:49:22 CEST] <ZoomZoomZoom> I could analyze and convert the stream by hand, if I could just separate it from the container.
[14:50:19 CEST] <BtbN> 0 channels looks very broken though
[14:51:49 CEST] <ZoomZoomZoom> The files play (with no sound, of course), and I can demux the video stream. But not audio.
[15:24:16 CEST] <redrabbit> hi, im using ffmpeg to stream my iptv service with my 1060 as the encoder
[15:24:41 CEST] <redrabbit> wondering what is the best solution to use ffmpeg/hw encoding to stream the content of my win10 desktop
[15:32:23 CEST] <furq> redrabbit: you probably want to use OBS for screen capture on windows
[15:33:20 CEST] <redrabbit> looking it up
[15:33:38 CEST] <redrabbit> so it works with nvidia hw encoder i guess
[15:34:36 CEST] <furq> should do
[15:34:36 CEST] <redrabbit> installing it
[15:34:52 CEST] <redrabbit> yeah it would make sense
[15:36:37 CEST] <certaindestiny> I am still trying to stream 4 rtsp streams to 1 udp stream. However i am seeing an enormous number of errors, for which i can not find a reason. Anybody having any experience with this? https://pastebin.com/qBxjkdhb
[15:37:15 CEST] <furq> do you get those errors while decoding one of those streams
[15:37:58 CEST] <certaindestiny> You mean separately?
[15:38:17 CEST] <furq> yeah
[15:38:27 CEST] <furq> ffmpeg -i rtsp://foo -f null -
[15:39:11 CEST] <redrabbit> out of curiosity is it called multiplexing
[15:39:26 CEST] <furq> is what called multiplexing
[15:39:28 CEST] <certaindestiny> I am getting those errors as well on the single rtp stream
[15:39:45 CEST] <redrabbit> kind of like TV streams ?
[15:39:56 CEST] <furq> certaindestiny: those are mpeg2 decoder errors, so if you're getting it with one stream then your upstream is probably broken
[15:40:21 CEST] <furq> i doubt there's much ffmpeg can do about that
[15:40:22 CEST] <redrabbit> with multiple channels in one multiplex so variable bandwidth works together
[15:40:39 CEST] <certaindestiny> That would be highly unlikely as decoding the stream directly on VLC is not giving any error messages at all
[15:42:56 CEST] <certaindestiny> Think i might have got it. Buffer size and max delay should be after the stream input not before i think
[15:45:06 CEST] <certaindestiny> nvm, that fixes it for the single stream but not all 4 of them
[15:52:44 CEST] <certaindestiny> Furq, any advice?
[15:57:50 CEST] <redrabbit> furq: i figured i would need to run a streaming server
[15:58:00 CEST] <redrabbit> what do you recommend for debian
[16:04:21 CEST] <c7j8d9> I'm running a batch script to convert multiple files in a folder. is there a way to add a delete command to delete the current file when done converting?
[16:04:43 CEST] <c7j8d9> @echo off & setlocal
[16:04:43 CEST] <c7j8d9> FOR /r %%i in (*.mkv) DO ffmpeg -i "%%~fi" -c:v copy -c:a ac3 "%%~dpni.mp4"
[16:46:23 CEST] <relaxed> c7j8d9: do ffmpeg ... && <command to delete current file in loop>
[16:47:33 CEST] <c7j8d9> del "%%~fi" ?
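The same success-gated pattern, sketched in a POSIX shell for comparison; 'true' stands in for the real ffmpeg invocation so the example is self-contained:

```shell
# "&&" runs the delete only if the preceding command exited with status 0,
# so a failed conversion never deletes its source file.
mkdir -p demo && : > demo/a.mkv
for f in demo/*.mkv; do
  true "$f" && rm -- "$f"   # replace 'true' with the real ffmpeg command
done
```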
[16:54:00 CEST] <SpeakerToMeat_> I can't find any concise online explanation about the difference between tbr, tbn and tbc? how they're calculated, used, sourced, set, etc?
[16:54:23 CEST] <SpeakerToMeat_> Is there any such document somewhere? or is the only solution to read source code?
[16:55:05 CEST] <JEEB> "tb" is time base
[16:55:36 CEST] <SpeakerToMeat_> Ok,
[16:56:23 CEST] <JEEB> which exact values are being gotten you would have to check ffmpeg.c's code
[16:56:24 CEST] <JEEB> but
[16:56:25 CEST] <JEEB> 29.97 tbr, 90k tbn, 59.94 tbc
[16:56:35 CEST] <JEEB> 90k seems like the container time base
[16:56:45 CEST] <JEEB> 59.94 would probably be the decoder's time base?
[16:57:02 CEST] <JEEB> 29.97 would probably be some "frame rate" time base?
[16:57:08 CEST] <JEEB> let's see how wrong I am :)
[16:57:22 CEST] <SpeakerToMeat_> Sigh, thanks, I think I'll end up perusing the source.
[16:57:41 CEST] <kepstin> i think tbr is from the framerate, and is sometimes estimated, yeah
[16:58:05 CEST] <JEEB> seems to be libavformat/dump.c
[16:58:09 CEST] Action: kepstin thought tbc was container time base, but that doesn't make sense on mpeg-ts
[16:58:28 CEST] <JEEB> kepstin: yea what I just poked with a stick is MPEG-TS and you can see 90k in tbn
[16:58:50 CEST] <JEEB> it's interlaced NTSC so the 59.94 tbc makes sense for the AVCodecContext frame rate
[16:58:59 CEST] <JEEB> "frame rate"
[16:59:03 CEST] <JEEB> I mean time base
[16:59:39 CEST] <JEEB> yup, tbc is the decoder time base
[17:00:14 CEST] <JEEB> http://git.videolan.org/?p=ffmpeg.git;a=blob;f=libavformat/dump.c;h=8fd58a0dbac9bf3ecf4d739a41b691313665a1c8;hb=HEAD#l509
[17:00:25 CEST] <JEEB> SpeakerToMeat_: there you have the definitions :)
[17:00:42 CEST] <JEEB> don't be fooled by the "print_fps", it is not frame rate
[17:00:50 CEST] <JEEB> it is the time base for that given thing
[17:01:19 CEST] <SpeakerToMeat_> there's this https://stackoverflow.com/questions/3199489/meaning-of-ffmpeg-output-tbc-tbn-tbr but these things make no sense without some sort of strange/crazy units, as there's no reason to the numbers. how can I have a file with a tbn of 24 and a tbc of 12288? 12288 frames per second? and this is a codec-copied file where the original has a tbn of 24 and a tbc of 24
[17:01:35 CEST] <JEEB> as I said, it's not a frame rate
[17:01:54 CEST] <SpeakerToMeat_> Then, it's a time fraction? 12288 means 1/12288 of a second?
[17:01:55 CEST] <JEEB> time base is the maximum precision of timestamps
[17:05:26 CEST] <JEEB> so in "29.97 tbr, 90k tbn, 59.94 tbc" , tbr is the time base of a "frame rate" of the stream (however that is filled), tbn is the demuxer's time base (MPEG-TS has a time base of 90 kilohertz so the value is correct), tbc is the time base gotten from the decoder
[17:06:01 CEST] <JEEB> these are all internal values that can also be very much misleading or somehow wrong :D
[17:06:32 CEST] <JEEB> SpeakerToMeat_: I hope this helped :)
[17:10:37 CEST] <pingouin> hello #ffmpeg
[17:10:48 CEST] <SpeakerToMeat_> JEEB: And why the original file has 24 for tbn? btw this is prores jic
[17:10:59 CEST] <pingouin> iive: thank you again for the help last night
[17:11:24 CEST] <iive> pingouin: what was the issue?
[17:11:35 CEST] <JEEB> SpeakerToMeat_: different containers have different limitations for values etc
[17:11:39 CEST] <JEEB> among other possibilities
[17:12:05 CEST] <JEEB> I really recommend not to stare too much at all of those values but rather I would probably look at ffprobe's stuff like -show_streams or -show_frames with -of json
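JEEB's suggestion, spelled out (output filename taken from the discussion):

```shell
# Dump per-stream details, including time_base, codec_time_base and
# r_frame_rate, as JSON instead of relying on the dump line.
ffprobe -v error -show_streams -of json out.mov
```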
[17:12:14 CEST] <pingouin> also thank you too kepstin
[17:13:34 CEST] <pingouin> iive: the -vdpau option for the playback with mplayer; when encoding with ffmpeg i need to set the -pix_fmt due to some limitation of the nvidia vdpau
[17:13:49 CEST] <iive> oh, sure.
[17:14:18 CEST] <pingouin> you quit before i could thank you
[17:14:35 CEST] <iive> i think you did
[17:15:55 CEST] <SpeakerToMeat_> JEEB: Yeah but I worry because I do a -c:v copy, source is 24,24,24, output is 24,12288,24 and when played in quicktime (not mplayer, ffplay or vlc) it has unexplainable pauses
[17:16:10 CEST] <pingouin> when i do an : ffmpeg -pix_fmts
[17:16:29 CEST] <JEEB> SpeakerToMeat_: from what container to what container btw?
[17:16:40 CEST] <SpeakerToMeat_> JEEB: from quicktime to quicktime
[17:16:45 CEST] <JEEB> mov to mov?
[17:16:50 CEST] <SpeakerToMeat_> yes
[17:16:55 CEST] <JEEB> ok, that's funky
[17:16:58 CEST] <JEEB> do you set any params?
[17:16:59 CEST] <pingouin> i get a list of a lot of options available, one drew my attention : ..H.. vdpau_h264
[17:17:10 CEST] <JEEB> also can you test with the latest master of FFmpeg?
[17:17:23 CEST] <pingouin> is it to encode especially for the vdpau ?
[17:17:24 CEST] <SpeakerToMeat_> JEEB: None, only two maps as I'm adding an audio track to a quicktime that had none
[17:17:31 CEST] <JEEB> ah
[17:17:44 CEST] <JEEB> I wonder if that's where it's getting the time base for one of the tracks
[17:18:20 CEST] <SpeakerToMeat_> JEEB: basically (filenames changed:( ffmpeg -i video.mov -i audio.wav -map 0:0 -map 1:0 -c copy out.mov
[17:18:30 CEST] <JEEB> yea
[17:18:48 CEST] <SpeakerToMeat_> Hmm maybe I'd have to ffprobe the sound file... yet it's the stream 0 (video) on the out that's changing
[17:19:30 CEST] <JEEB> yea, since the time bases are stream-specific (although f.ex. MPEG-TS always has 90k there because that's the global specified time base for that specific container) and the print is only done when AVMEDIA_TYPE_VIDEO
[17:19:59 CEST] <JEEB> I would rather check both time bases with -show_streams -of json with ffprobe in that output .mov
[17:20:20 CEST] <JEEB> also it would be interesting to know the general ffprobe details (without any parameters) of the input files
[17:20:25 CEST] <SpeakerToMeat_> pingouin: I have no personal idea, but from what I gather it's probably a vdpau sped-up encoder?
[17:20:35 CEST] <SpeakerToMeat_> JEEB: Ok.
[17:20:53 CEST] <JEEB> metadata and file names aren't really too interesting (as in, tags etc) - mostly time bases and other technical information on the input files
[17:21:04 CEST] <pingouin> SpeakerToMeat_: any idea where can i read about that ?
[17:21:39 CEST] <SpeakerToMeat_> pingouin: ... maybe try.... one second
[17:22:21 CEST] <SpeakerToMeat_> pingouin: ffmpeg -h encoder=vdpau_h264
[17:22:31 CEST] <SpeakerToMeat_> if it's an encoder, it should show more help there
[17:22:42 CEST] <SpeakerToMeat_> JEEB: Until it starts failing :D
[17:23:56 CEST] <pingouin> bah, not a big deal, wanted to understand better what this is....
[17:24:02 CEST] <pingouin> thank you anyway
[17:24:04 CEST] <JEEB> well, yea - I'm trying to gather the general details of the input files so the thing suddenly becomes reproducible without your original input files
[17:24:05 CEST] <SpeakerToMeat_> pingouin: I myself don't have it, but I have an nvenc_h264 which is an nvidia encoder, so I'm guessing hardware assisted
[17:24:33 CEST] <SpeakerToMeat_> JEEB: Yeah I gather that, I will try in a short while, right now I'm doing a heavy process. Thanks for the help
[17:24:54 CEST] <pingouin> SpeakerToMeat_: Codec 'vdpau_h264' is not recognized by FFmpeg when I try your above command (ffmpeg version 3.2.5-1)
[17:25:14 CEST] <pingouin> SpeakerToMeat_: and I've got all the nvidia drivers (playing with vdpau works great)
[17:26:25 CEST] <SpeakerToMeat_> pingouin: Yeah I don't have that one either, you saw this on your "ffmpeg -codecs" list?
[17:27:51 CEST] <pingouin> nope, it's the answer from the ffmpeg -h encoder=vdpau_h264
[17:29:11 CEST] <pingouin> SpeakerToMeat_: but if I do an ffmpeg -codecs I get: DEV.LS h264 H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10 (decoders: h264 h264_crystalhd h264_vdpau ) (encoders: libx264 libx264rgb h264_nvenc h264_omx h264_vaapi nvenc nvenc_h264 )
[17:30:53 CEST] <jkqxz> h264_vdpau is a deprecated standalone decoder, replaced by the VDPAU hwaccel. It is nothing to do with encoding.
[17:33:03 CEST] <pingouin> jkqxz: ok thank you for the information
[17:33:58 CEST] <pingouin> jkqxz: do you know any hardware acceleration encoder/option to try for h264 with ffmpeg ?
[17:35:34 CEST] <jkqxz> See the standalone hardware encoders in your list - h264_nvenc, h264_omx, h264_vaapi. Which of those might be usable will depend on what hardware you have.
[17:37:36 CEST] <pingouin> jkqxz: ok, thanks for the help, I'll test that
[17:38:25 CEST] <SpeakerToMeat_> JEEB: I'll retake this asap, but I'm pretty sure so far a) the audio influenced/influences the tbc somehow b) this might not have to do with tbc but with a TC track
[18:00:04 CEST] <pingouin> sorry to bother, but what is the difference between h264_nvenc & nvenc_h264? I'm confused
[18:03:31 CEST] <SpeakerToMeat_> pingouin: I have no idea, I'd guess they're different names for the same encoder... but it's a guess
[18:06:03 CEST] <durandal_1707> there is long description for the reason
[18:07:32 CEST] <SpeakerToMeat_> JEEB: It seems for some reason, the time base for the original video is being changed when rewrapped with the audio, and that's odd... https://pastebin.com/ibYZsuPu
[18:07:56 CEST] <SpeakerToMeat_> JEEB: I gotta run about now, but when I come back I'll see if I can somehow compile a newer ffmpeg (lots of deps. LOOOTS of deps, but it might be worth it)
[18:08:00 CEST] <SpeakerToMeat_> And test that one
[18:08:12 CEST] <pingouin> durandal_1707: where can i read about ?
[18:09:08 CEST] <jkqxz> nvenc_h264 is a deprecated name for h264_nvenc. (It existed before it was agreed that all of those encoders should use the form "$codec_$api".)
[18:09:31 CEST] <pingouin> ok ! great
[18:10:27 CEST] <jkqxz> So they are the same thing, but you should use h264_nvenc because nvenc_h264 will be removed in some future version.
[18:10:50 CEST] <pingouin> many thanks for the light about this jkqxz !
[18:14:37 CEST] <SpeakerToMeat_> JEEB: Now if you see my inputs, I have a 0.1% discrepancy between video and audio, and I think it might be the reason for it all; maybe (just maybe) ff is trying to make them match and thus changes the time base for the video... this happened because of a bug in Resolve which caused troubles with Premiere. sigh
[18:41:01 CEST] <SpeakerToMeat_> Ok, no, doing a codec copy of ProRes 422, ffmpeg 3.2.5 insists on changing the time base, from 1/24000 to 1/12288
[18:41:04 CEST] <SpeakerToMeat_> for some reason
[18:56:07 CEST] <jdelStrother> Hi there
[18:59:32 CEST] <jdelStrother> I'm trying to find a decent programmatic way of determining if an mp3 is VBR or not. I thought I'd be able to walk through the mp3 frames with `ffprobe -show_frames` and figure out the bitrate for each frame from that, but that doesn't quite give what I was hoping for
[18:59:57 CEST] <furq> mediainfo?
[19:00:33 CEST] <jdelStrother> urgh, I was hoping to avoid adding another binary dependency
[19:09:42 CEST] <furq> if you're using python then mutagen will do it as well
[19:10:20 CEST] <furq> the fastest way is to just check the xing header but that's only reliable for mp3s encoded by lame
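furq's frame-header idea can also be done by hand with only the standard library. A minimal sketch that walks MPEG-1 Layer III frame headers and calls the file VBR when more than one distinct bitrate appears; a real implementation would also need to skip ID3 tags, handle other MPEG versions, and ignore the Xing/Info frame itself:

```python
# Bitrate (kbps) and sample-rate tables for MPEG-1 Layer III only.
BITRATES = [0, 32, 40, 48, 56, 64, 80, 96, 112, 128, 160, 192, 224, 256, 320, 0]
SAMPLE_RATES = [44100, 48000, 32000, 0]

def frame_bitrates(data):
    """Return the bitrate of each MPEG-1 Layer III frame found in `data`."""
    rates, i = [], 0
    while i + 4 <= len(data):
        b0, b1, b2, _ = data[i:i + 4]
        # 11-bit sync, version=MPEG1, layer=III (protection bit ignored)
        if b0 == 0xFF and (b1 & 0xFE) == 0xFA:
            br = BITRATES[b2 >> 4]
            sr = SAMPLE_RATES[(b2 >> 2) & 3]
            if br and sr:
                rates.append(br)
                # frame length in bytes, plus the padding bit
                i += 144 * br * 1000 // sr + ((b2 >> 1) & 1)
                continue
        i += 1
    return rates

def is_vbr(data):
    return len(set(frame_bitrates(data))) > 1
```

mutagen does essentially this walk (plus Xing parsing) if a Python dependency is acceptable.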
[19:11:19 CEST] <dustobub> Hey! I'm trying to do a video capture from a WDM device (Blackmagic Decklink) and have it record for only 8 seconds and then exit the process. Currently when I run the following command, the recording finishes after 8 seconds, but the process doesn't exit. Does anyone know of a way to have the process exit cleanly without resorting to timeout+pkill
[19:11:19 CEST] <dustobub> ? Here's the command: ffmpeg -y -f dshow -video_size 3840x2160 -pixel_format uyvy422 -rtbufsize 256MB -framerate 24 -t 00:00:08 -i video="Blackmagic WDM Capture" -c:v h264_nvenc -b:v 20000k capture.mp4
[19:12:17 CEST] <furq> maybe move -t after the input
[19:15:52 CEST] <dustobub> furq: thanks for the suggestion, but that seems to do the same thing. any other ideas?
[19:17:06 CEST] <furq> the same thing as what
[19:18:49 CEST] <dustobub> furq: the behavior is the same (it doesn't automatically exit after the recording stops)
[19:19:12 CEST] <furq> oh right i thought you were someone else
[19:19:22 CEST] <furq> i'm not really sure what would be causing that
[19:39:53 CEST] <c_14> check strace or a--, wait windows
[19:39:56 CEST] <c_14> attach the debugger?
[19:40:01 CEST] <c_14> I'm pretty sure that's a thing you can do in windows
[19:56:09 CEST] <ney> I'm in trouble with streaming ffmpeg to ffserver using C++ code; can someone help?
[19:57:03 CEST] <BtbN> Don't use ffserver
[19:57:05 CEST] <BtbN> it's dead
[19:58:06 CEST] <ney> Whoa... this is new to me. What do I have to use instead?
[19:59:19 CEST] <ney> By the way, the ffserver is configured and working for me, the problem is sending the output to it.
[19:59:39 CEST] <BtbN> It's been unmaintained for years, just dragged along because some people refuse to drop it
[20:00:06 CEST] <BtbN> And nobody really knows how to work with it, and it has a long list of issues nobody cares to fix.
[20:00:22 CEST] <ney> It's important; what do you recommend for the streaming?
[20:00:32 CEST] <BtbN> The usual thing people use is nginx-rtmp
[20:00:51 CEST] <ney> Ah! Great, I'll look for it.
[20:02:12 CEST] <ney> My application consists of capturing images from one or more webcams and streaming over the internet. But I have to do some realtime image manipulation, like zooming, panning, overlaying them.
[20:02:52 CEST] <BtbN> sounds like you are looking for OBS?
[20:03:13 CEST] <ney> So, I'm trying to capture and manipulate it with OpenCV, send it to a server with ffmpeg and broadcast it. Is that a good idea?
[20:03:28 CEST] <ney> "OBS"? What does it mean?
[20:03:38 CEST] <BtbN> That's the name of the software.
[20:03:46 CEST] <ney> * broadcast with nginx now.
[20:04:14 CEST] <ney> I want to build one, for my master's dissertation.
[20:04:38 CEST] <devinheitmueller> ney: https://obsproject.com/
[20:05:49 CEST] <ney> OBS won't work for me.
[20:07:13 CEST] <ney> More detailed: I have a webcam connected to a Raspberry Pi. I send commands to it, and the Raspberry has to prepare the image (zoom, pan, crop ... these things) and send the treated image to the server, understood?
[20:08:14 CEST] <ney> There is a robot connected to the Raspberry.
[20:08:28 CEST] <devinheitmueller> Ugh. Good luck doing any serious image processing on a Pi. :-)
[20:08:52 CEST] <BtbN> the processing is not the problem
[20:08:55 CEST] <BtbN> the encoding to stream it is
[20:09:36 CEST] <ney> Ahhh, I have heard about this. Even at 15 fps, is it not practicable?
[20:09:53 CEST] <ney> A 640x480 image
[20:09:56 CEST] <BtbN> if you manage to utilize the hardware encoder, maybe
[20:09:59 CEST] <BtbN> but it's shit quality
[20:11:03 CEST] <ney> I think I'll have to try... and do the best I can, even if it's just to tell the team that it's unviable.
[20:12:16 CEST] <devinheitmueller> BtbN: I assumed he had access to the MMAL facilities on the Pi, but it turns out under ffmpeg only decode was implemented. So yeah, he's not going to get very far.
[20:12:48 CEST] <BtbN> I remember there being some h264_omx?
[20:13:00 CEST] <devinheitmueller> Why on earth does everybody work so hard to do everything with a Pi? For most people, a NUC would work just as well and only costs $200.
[20:13:19 CEST] <ney> Hey, NUC? What is it?
[20:13:21 CEST] <atomnuker> less, even,
[20:13:37 CEST] <atomnuker> you can get one for around 80 or so bux
[20:14:01 CEST] <devinheitmueller> ney: https://www.intel.com/content/www/us/en/products/boards-kits/nuc.html
[20:14:06 CEST] <atomnuker> (having at least some celeron with avx, they're not bad)
[20:14:59 CEST] <ney> Here in Brazil it's expensive (about 4x more than an RPi)
[20:15:04 CEST] <devinheitmueller> atomnuker: Yeah, I assumed $200 after you add memory and storage. But yeah, still quite cheap and well worth it if you're not trying to ship thousands of units.
[20:15:51 CEST] <atomnuker> ah, yeah, memory is expensive
[20:16:08 CEST] <atomnuker> (very expensive nowadays, what happened?)
[20:16:16 CEST] <devinheitmueller> ney: Are you planning on shipping a thousand robots? If you're really only going to build a single robot, just buy a NUC or other small form-factor PC and save yourself a lot of headache.
[20:17:21 CEST] <ney> I'll have to talk with my team about changing hardware. But it's a very good tip that we were uninformed about.
[20:18:00 CEST] <ney> Sure I'll look for it
[20:19:03 CEST] <ney> But, even if we change the hardware, I think the software will be the same, right? OpenCV -> ffmpeg -> nginx...
[20:20:29 CEST] <devinheitmueller> ney: Probably. Except in that case the hardware will be 50X faster than a Pi, so you may actually be able to do that workload on the platform without having to spend three months optimizing the code.
[20:20:51 CEST] <devinheitmueller> :-)
[20:20:58 CEST] <ney> That sounds very good!!!
[20:21:30 CEST] <ney> So, I have one ffmpeg question to you.
[20:22:02 CEST] <ney> I used sample code that saves a stream to a file, opening from the webcam. Worked just fine.
[20:22:47 CEST] <ney> Then, instead of redirecting the output to a file, I redirected it to a feed at ffserver (which I'm abandoning now)
[20:23:24 CEST] <ney> I want to know if streaming to a server is very different from streaming to a file, because my attempt didn't work.
[20:24:10 CEST] <ney> I'm using FLV as the container and libx264 as the encoder, to be more suitable.
[20:24:42 CEST] <devinheitmueller> You would probably have to provide more detail as to why it failed. Probably the easiest thing to do for debug/testing would be to just set the output to udp://x.x.x.x:1234 and see if you can watch it with VLC.
[20:25:16 CEST] <ney> I made the output to http and tried to get in VLC.
[20:25:40 CEST] <ney> The VLC debug received a lot of data, but not displayed anything.
[20:26:27 CEST] <devinheitmueller> HTTP adds a lot of complexity where things can go wrong, hence using just UDP multicast avoids that.
[20:28:29 CEST] <ney> Is there some source code or a tutorial where I can research how to do this?
[20:28:45 CEST] <devinheitmueller> https://trac.ffmpeg.org/wiki/StreamingGuide
[20:29:28 CEST] <furq> ney: what pi is it
[20:30:06 CEST] <ney> (SHAME!!!) RPi B+ <:(!
[20:30:18 CEST] <furq> yeah that's not going to do much
[20:30:24 CEST] <furq> i was going to say it might be worth a try if you have a 3
[20:30:47 CEST] <furq> the encoder is the same on all of them, but opencv is probably too much for a pi
[20:31:29 CEST] <ney> As I'm using OpenCV only to do some transformations on matrices, is there some lighter alternative?
[20:32:05 CEST] <furq> opencv apparently has some neon optimisations so i guess that's your best choice on a pi
[20:32:21 CEST] <furq> the encoding is basically free, but obviously the hardware encoder is really poor quality
[20:32:41 CEST] <furq> and if you're using the pi camera interface then the decoding is basically free as well
[20:32:55 CEST] <furq> so opencv (and audio if you need that) is the only bit that will hit the cpu
[20:33:20 CEST] <ney> Yeah, but I'm thinking if the Pi cannot handle it, I'll stream the whole image over the net and do the processing client-side.
[20:33:39 CEST] <furq> yeah that'll work fine
[20:33:54 CEST] <ney> I'm weighing saving CPU against saving bandwidth, whichever is more critical for the project.
[20:34:09 CEST] <furq> you probably don't have a choice really
[20:34:18 CEST] <furq> unless your webcam is like 480p or less
[20:34:37 CEST] <ney> The way all of you are talking... I guess not, really.
[20:34:57 CEST] <furq> obviously it's worth trying it for yourself
[20:35:17 CEST] <devinheitmueller> On that topic, look what just came across on Hackaday: http://hackaday.com/2017/08/17/hackaday-prize-entry-inspectorbot-aims-to-look-underneath/
[20:35:23 CEST] <furq> just encoding and streaming the raw images off the webcam should be no problem
[20:35:31 CEST] <devinheitmueller> Might be worth seeing how this person wrote their software.
[20:35:36 CEST] <furq> as long as it's not a usb webcam
[20:35:57 CEST] <furq> you just need ffmpeg built with --enable-omx
[20:36:49 CEST] <ney> Ohh, that robot is a dream!!!
[20:37:24 CEST] <ney> That webcam looks like USB, no?
[20:38:00 CEST] <furq> the issue with usb is that you'd need to use mjpeg for HD
[20:38:10 CEST] <furq> also i'm pretty sure that's using the pi camera interface
[20:38:15 CEST] <furq> at least the top camera is
[20:38:40 CEST] <furq> mmal on the pi does mjpeg decoding but i don't think that made it into ffmpeg yet
[20:38:57 CEST] <ney> I'm having difficulties with one camera and they made it with two! Humiliating!
[20:39:01 CEST] <furq> and rawvideo over usb2 won't work well for anything over 480p
[20:39:29 CEST] <furq> someone was trying something similar with a pi zero and they couldn't decode 720p30 mjpeg in realtime
[20:39:43 CEST] <ney> So, let's start my research on nginx. Is making a C++ program that sends a stream to nginx almost the same as sending a video stream to the hard disk?
[20:39:59 CEST] <furq> not really
[20:40:05 CEST] <furq> you'd need to mux to flv and send over rtmp
[20:40:18 CEST] <furq> if you're already using the ffmpeg libs then it's not that much different
[20:40:36 CEST] <ney> Yes, using the ffmpeg libs...
[20:40:43 CEST] <furq> that should be fine then
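The command-line equivalent of what furq describes, sketched as an argument list (the capture device, server URL and stream key are all hypothetical):

```python
# "mux to FLV, send over RTMP" as an ffmpeg invocation:
cmd = [
    "ffmpeg",
    "-f", "v4l2", "-i", "/dev/video0",   # webcam capture on Linux
    "-c:v", "libx264", "-preset", "veryfast",
    "-tune", "zerolatency",              # favour latency over compression
    "-f", "flv",                         # nginx-rtmp speaks FLV over RTMP
    "rtmp://example.com/live/streamkey",
]
print(" ".join(cmd))
```

With the FFmpeg libraries the analogous change is small: hand the output context the "flv" format name and the rtmp:// URL where a file path would otherwise go.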
[20:41:09 CEST] <ney> Man, all of you, thanks for the chat! I learnt a lot!
[20:41:24 CEST] <ney> I'll return when have some news!
[20:41:44 CEST] <ney> And thanks for the NUC tip, I'll try to use them with the team!
[20:41:57 CEST] <ney> See you soon!
[21:06:27 CEST] <Pandela> Afternoon, just curious still. Does anyone know if FFmpeg can process an entire file but apply a filter only to some frames? Say, only apply a filter to the first few seconds or so?
[21:09:22 CEST] <durandal_1707> yes
[21:12:28 CEST] <devinheitmueller> Just run ffmpeg -frames 60 foo.ts and it will only process the first 60 frames of video.
[21:13:22 CEST] <devinheitmueller> (i.e. 60 frames = 20 seconds of video if FPS=30)
[21:13:31 CEST] <devinheitmueller> s/20/2/
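The corrected arithmetic, for the record:

```python
fps, frames = 30, 60
seconds = frames / fps
print(seconds)  # 2.0
```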
[21:17:22 CEST] <Pandela> devinheitmueller: That only encodes the first 60 frames lol, I want to process the entire video but only apply a filter to the first few frames or so
[21:17:49 CEST] <BtbN> add the video as input twice, once skipping 60, once only 60. Filter first one, concat.
[21:18:18 CEST] <durandal_1707> Pandela: yes use smthing like: vflip=enable=....
[21:20:18 CEST] <Pandela> durandal_1707: I'm gonna give that a shot; sadly ladspa audio filters don't have timeline support, which is what that enable= flag uses
[21:20:31 CEST] <Pandela> BtbN: That might work too <3
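durandal_1707's timeline suggestion spelled out as a full command line, built here as an argument list (the filenames and the 5-second window are made up):

```python
# Apply vflip only while 0 <= t <= 5; the rest of the video passes through
# the filter untouched (the whole stream is still re-encoded).
cmd = [
    "ffmpeg", "-i", "in.mp4",
    "-vf", "vflip=enable='between(t,0,5)'",
    "-c:a", "copy",
    "out.mp4",
]
print(" ".join(cmd))
```

Filters that advertise timeline support (the T flag in ffmpeg -filters) accept the enable option.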
[21:21:10 CEST] <BtbN> if you encode to a similar format, you might even be able to not re-encode the rest of the video
[21:21:40 CEST] <Pandela> what about codec copy?
[21:22:35 CEST] <BtbN> you can't filter if you don't encode.
[21:26:27 CEST] <SpeakerToMeat_> It has nothing to do with the audio, if I try to copy the video stream only to a new file, it still changes the time base to 1/12288
[21:28:21 CEST] <Pandela> BtbN: touche
[21:29:06 CEST] <durandal_1707> you can encode to null
[21:57:20 CEST] <SpeakerToMeat_> Arrrrrrgh, ffmpeg 3.3.3 does the same thing
[22:25:13 CEST] <techno_> I'm trying to use the h264_mediacodec for android hw accel decoding and am receiving this message spammed in the logcat output: C2DColorConvert: unknown format passed for luma alignment number
[22:25:16 CEST] <techno_> anyone have any ideas?
[22:26:52 CEST] <niklob> Hello gents, I have a .ts file with DVB subs. Is it possible to find out which character encoding was used, even though DVB subs are a bitmap stream?
[22:27:22 CEST] <JEEB> the best you might get is the language code
[22:32:35 CEST] <SpeakerToMeat_> niklob: As it's a bitmap stream, the original encoding should be lost/gone, it's just pixels now, no encoding for those
[22:33:22 CEST] <niklob> yeah, that's what I expected. I was just hoping there might be something in the metadata, but ffprobe didn't show it
[22:35:29 CEST] <paveldimow> Hi, any chance that ffmpeg will support amf0?
[22:37:26 CEST] <niklob> Another problem: I imported srt subs like this: "ffmpeg -i input.mkv -i eng.srt -c:v copy -c:a copy -c:s dvbsub -map 0:0 -map 0:1 -map 1:0 -metadata:s:s:0 language=eng output.ts" and they show up as dvbsubs in the tracklist in VLC
[22:37:34 CEST] <niklob> BUT: they are invisible
[22:38:39 CEST] <devinheitmueller> paveldimow: what are you trying to do with amf0?
[22:39:00 CEST] <niklob> It's strange, because ffmpeg didn't return any errors or warnings, but after activating the track, the subtitles remain hidden/invisible
[22:39:01 CEST] <furq> presumably something to do with rtmp
[22:39:13 CEST] <devinheitmueller> furq: Yeah, perhaps pass through Ad triggers?
[22:40:08 CEST] <paveldimow> devinheitmueller: I am trying to read that metadata
[22:40:54 CEST] <devinheitmueller> Yeah, presumably you would have to build a bitstream filter.
[22:41:03 CEST] <paveldimow> the thing is that the client is doing live streaming via a mobile app, and when the media server saves the mp4 I need to read this metadata
[22:41:17 CEST] <llogan> niklob: use a pastebin to show complete console output
[22:41:17 CEST] <paveldimow> it should contain timestamps
[22:41:57 CEST] <paveldimow> but I don't know, so I would like to dump this metadata
[22:41:59 CEST] <devinheitmueller> Sounds like fun. Pretty sure you're going to have to write some code. There isn't any functionality like that readily available to users.
[22:42:21 CEST] <paveldimow> well... I tried :) thank you for your answer
[22:42:52 CEST] <paveldimow> maybe any other tool that is capable of this?
[22:43:56 CEST] <devinheitmueller> Not that I know of. Feels like the sort of thing you could hack together in a few hours if you knew what you were doing.
[22:45:03 CEST] <paveldimow> well I just saw your github... I am not 0.1% near.. :)
[23:12:33 CEST] <SpeakerToMeat_> The only encoder that can do prores HQ is prores_ks
[23:13:43 CEST] <SpeakerToMeat_> that was supposed to be a question. ugh
[23:13:54 CEST] <SpeakerToMeat_> It seems prores and prores_aw can't do HQ, am I right?
[23:17:02 CEST] <llogan> SpeakerToMeat_: I believe they both can, but prores_aw may not accept named values, so use -profile:v 3 or whatever
[23:17:27 CEST] <llogan> there really should be only one prores encoder... dumb to have two
[23:18:33 CEST] <JEEB> I thought one of them was removed recently
[23:19:00 CEST] <llogan> ah. i haven't been paying attention lately.
[23:19:16 CEST] <JEEB> I could be wrong of course
[23:19:42 CEST] <JEEB> fug
[23:19:43 CEST] <JEEB> still there
[23:19:46 CEST] <JEEB> :V
[23:20:00 CEST] <llogan> may you live 1000 years and never hunt again
[23:21:10 CEST] <alexpigment> is one of them better than the other in terms of encoding speed?
[23:21:22 CEST] <alexpigment> i vote on keeping that one and fixing whatever annoyances there are with profile naming ;)
[23:21:42 CEST] <JEEB> IIRC anatoly's was faster by default but kostya's compressed better?
[23:21:51 CEST] <alexpigment> interesting
[23:22:07 CEST] <alexpigment> i'm sure it's probably pretty small in terms of visual differences
[23:22:13 CEST] <llogan> AFAIK and/or IIRC there was some discussion about it on the -devel ML
[23:22:14 CEST] <alexpigment> unless you're doing a proxy preset
[23:22:23 CEST] <JEEB> llogan: yea
[00:00:00 CEST] --- Fri Aug 18 2017