[Ffmpeg-devel-irc] ffmpeg.log.20170326
burek
burek021 at gmail.com
Mon Mar 27 03:05:01 EEST 2017
[00:31:22 CET] <lindylex> Is there a way to maintain the position of the url.png in each video, 10 units from the left and 10 units from the top? If I change the location to start the crop it changes the url.png location. This is my command ffmpeg -i MVI_9579.MOV -i url.png -ss 00:00:14.0 -t 00:00:52.0 -codec:v libx264 -filter_complex "overlay=x=(main_w-overlay_w)/2-95:y=(main_h-overlay_h)/2-280,crop=640:640:440:50,setpts=.8*PTS" -profile:v baseline
[00:31:22 CET] <lindylex> -preset slow -pix_fmt yuv420p -b:v 3500k -threads 0 -an -y 4.mp4
[00:37:32 CET] <furq> move crop before overlay?
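furq's suggestion, written out as a sketch: crop first, then overlay, so the PNG's offset is measured against the cropped frame rather than the full one. Input names and crop geometry are taken from lindylex's command above; the x=10:y=10 values are an assumption based on the stated "10 units from the left and top".

```shell
# Crop (and retime) the main video first, then place the PNG at a fixed
# offset relative to the cropped frame.
ffmpeg -i MVI_9579.MOV -i url.png -ss 00:00:14.0 -t 00:00:52.0 \
  -filter_complex "[0:v]crop=640:640:440:50,setpts=.8*PTS[base];[base][1:v]overlay=x=10:y=10" \
  -codec:v libx264 -profile:v baseline -preset slow -pix_fmt yuv420p \
  -b:v 3500k -an -y 4.mp4
```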
[00:42:26 CET] <lindylex> furq, let me try.
[00:43:03 CET] <lindylex> I understand now why you suggested this. One sec.
[00:54:52 CET] <ZeroWalker> when trying to compile x264 with msys2 msvc (for use in ffmpeg), i get complaints about config.guess being too old. I have installed automake and seem to have a newer version, but it still uses one from 2009, any ideas?
[01:03:12 CET] <lindylex> furq: truly, thanks for all your help. It really speeds up learning this application. You are the best!
[01:11:53 CET] <lindylex> @search A Goal Digger's Guide
[01:12:04 CET] <lindylex> Mistake
[01:37:05 CET] <kittyfoots> im seeing some memory leaks from x264 when encoding, it seems to happen when there's an error on a frame. I've run the code through valgrind and it seems to come from x264_frame_pop_unused. Is there something I can do to release that memory?
[04:19:49 CEST] <ZeroWalker> no one has any ideas on my x264 question from before, been searching a lot to no avail ;(
[05:38:24 CEST] <ZeroWalker> okay i was able to compile x264, had to use mingw64 instead of msvc. anyhow, i can't understand how to compile ffmpeg with it. tried using ld and cl flags to point to the lib/include thing as pkg-config doesn't seem to do it automatically
[05:38:47 CEST] <ZeroWalker> never had much clue about how this pkg-config stuff works so i am most likely doing something wrong
[05:41:01 CEST] <ZeroWalker> JEEB, i am hinting for your assistance
[05:41:14 CEST] <c3r1c3-Win> ZeroWalker: Jim created a script to cross-compile ffmpeg on linux for Windows. Have you tried that?... and actually do you run Linux at all?
[05:41:45 CEST] <ZeroWalker> oh he did
[05:42:04 CEST] <ZeroWalker> haven't, and no i don't run linux at all, my experience with it is close to none
[05:42:18 CEST] <ZeroWalker> i am on Windows that will say
[06:39:10 CEST] <ZeroWalker> so, no one;(?
[08:36:08 CEST] <alexpigment> Zerowalker: I found it much much easier to cross compile ffmpeg on LInux with this script: https://github.com/rdp/ffmpeg-windows-build-helpers
[08:36:50 CEST] <alexpigment> fwiw, i installed Ubuntu on a virtual machine via VMWare Workstation
[08:37:06 CEST] <alexpigment> although VirtualBox (free) probably will work
[08:37:23 CEST] <alexpigment> just make sure to allocate 30GB+
[08:38:06 CEST] <alexpigment> i made a tutorial too which i can give to you if you think it would help
[08:42:01 CEST] <ZeroWalker> ah, would prefer to get it running on Windows, seems a bit wasteful to spend 30gb and use a virtual os just for that. I have actually done this before, somehow, with much help, don't remember any though
[08:42:34 CEST] <ZeroWalker> i looked at your script though, but can't see what's going on, it looks like ffmpeg just gets the information from --pkg-config=pkg-config
[08:42:36 CEST] <kerio> why doesn't ffmpeg build in windows natively?
[08:42:38 CEST] <ZeroWalker> that doesn't work for me
[08:42:40 CEST] <kerio> with like mingw
[08:42:59 CEST] <ZeroWalker> it does, doesn't it?
[08:45:02 CEST] <furq> 30gb wtf
[08:46:26 CEST] <ZeroWalker> i have x264 compiled with make install, so i would assume i just need to tell ffmpeg somehow where that is
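ZeroWalker's guess is roughly right: after `make install`, ffmpeg's configure can find x264 via pkg-config once PKG_CONFIG_PATH points at the directory containing x264.pc. A sketch, assuming the default `/usr/local` install prefix (the actual prefix on his MSYS2 setup may differ):

```shell
# x264's "make install" drops x264.pc under <prefix>/lib/pkgconfig;
# tell pkg-config where to look, then configure ffmpeg normally.
export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig:$PKG_CONFIG_PATH
pkg-config --exists x264 && echo "x264 found"   # sanity check
./configure --enable-gpl --enable-libx264
make
```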
[10:34:14 CEST] <alexpigment> @furq: i was building 4 different copies - 32-bit 8-bit x26x and 10-bit x26x as well as the 64-bit versions. i hit the 20gb limit during that process on my VM
[10:34:48 CEST] <alexpigment> Linux VMs are hard to expand, so i just made a 40GB VM and started again
[13:29:23 CEST] <idlus> Hello, I'm trying to capture X11 from an ssh session, and it core dumps with "operation not permitted". Are there security parameters/environment variables to set?
[13:33:18 CEST] <thebombzen> idlus: the issue is your xauth
[13:34:00 CEST] <thebombzen> xauth is a security mechanism that makes it so you can't interact with an X session from another tty (or ssh session) without authority
[13:34:30 CEST] <flux> the ssh option ForwardX11Trusted is probably relevant.
[13:34:40 CEST] <thebombzen> flux: not really
[13:35:02 CEST] <idlus> I tried to set XAUTHORITY=~/.Xauthority, that did not suffice
[13:35:10 CEST] <thebombzen> try also setting DISPLAY
[13:35:21 CEST] <thebombzen> it also depends on your display manager
[13:35:31 CEST] <thebombzen> if you use lightdm, your xauthority is in /var/run/lightdm/:0/root
[13:35:32 CEST] <idlus> X11forwarding is set to yes
[13:35:40 CEST] <thebombzen> X11 Forwarding with ssh allows you to start X applications on an ssh session as though it were a separate screen and forward the connection
[13:35:42 CEST] <thebombzen> it's not relevant here
[13:35:51 CEST] <idlus> true
[13:36:09 CEST] <flux> so you're trying to capture the display of a remote host?
[13:36:12 CEST] <thebombzen> idlus: is your X server from a display manager or from "startx"
[13:36:22 CEST] <idlus> flux: basically yes
[13:36:30 CEST] <idlus> thebombzen: from startx
[13:36:45 CEST] <thebombzen> did you start it as an ordinary user or as root?
[13:37:01 CEST] <thebombzen> but yes I'd also try setting the DISPLAY variable
[13:37:04 CEST] <idlus> the same user that owns the X display
[13:37:05 CEST] <thebombzen> not just XAUTHORITY
[13:37:11 CEST] <flux> idlus, find a process running on the console, tr '\0' '\n' < /proc/processid/env | grep XAUTHORITY, export the XAUTHORITY to that value
[13:38:12 CEST] <flux> (oh, it was /environ, not /env)
[13:38:40 CEST] <idlus> yep, I figured it was environ, it gave the same value I set before
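flux's /proc trick, spelled out as a runnable sketch. A process's environment lives in /proc/&lt;pid&gt;/environ as NUL-separated KEY=VALUE pairs; it's demonstrated here against a sample file, so on a real system substitute the environ of a process actually attached to the X display:

```shell
# Simulate /proc/<pid>/environ (NUL-separated) with a sample file.
printf 'HOME=/home/clara\0XAUTHORITY=/home/clara/.Xauthority\0' > /tmp/sample_environ

# Translate NULs to newlines, pick out XAUTHORITY, and export it.
xauth_line=$(tr '\0' '\n' < /tmp/sample_environ | grep '^XAUTHORITY=')
export "$xauth_line"
echo "$XAUTHORITY"   # /home/clara/.Xauthority
```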
[13:39:49 CEST] <idlus> thebombzen: I tried DISPLAY before (which I thought redundant with -i :0.0?), this time also with XAUTHORITY, to no avail
[13:40:17 CEST] <idlus> I give you the command line in case I make an obvious mistake
[13:40:27 CEST] <thebombzen> no, that's right
[13:40:42 CEST] <thebombzen> it would be possibly relevant with x authority
[13:40:58 CEST] <idlus> command line: DISPLAY=:0 XAUTHORITY=/home/clara/.Xauthority ffmpeg -y -f x11grab -r 25 -i :0 /tmp/v.mkv
[13:42:10 CEST] <flux> idlus, are you running as used clara?
[13:42:16 CEST] <flux> "user"
[13:42:49 CEST] <thebombzen> idlus: either way, try this https://unix.stackexchange.com/questions/10121/open-a-window-on-a-remote-x-display-why-cannot-open-display
[13:42:55 CEST] <thebombzen> see what this thing gives
[13:43:02 CEST] <idlus> yes, technically I su clara from my default ssh user, I'll try to ssh directly as clara
[13:43:20 CEST] <flux> idlus, btw, does anything work over X, such as xdpyinfo?
[13:43:23 CEST] <idlus> thebombzen: thank you
[13:43:51 CEST] <idlus> flux: scrot (screenshot utility) works, I tried for sanity
[13:44:32 CEST] <idlus> and DISPLAY=:0 xdpyinfo as well
[13:44:39 CEST] <flux> hmm, I would have expected x11grab to use the same facility as a general screenshot utility, is x11grab somehow optimized?
[13:55:01 CEST] <idlus> I'm not sure the problem lies with XAUTHORITY and DISPLAY, I think I set those right, I'll file a bug
[15:23:39 CEST] <DHE> idlus: anything that might interfere with shared memory perhaps? ffmpeg uses shared memory extensively. performance would be terrible otherwise
[15:30:31 CEST] <idlus> DHE: you mean ffmpeg needs access to X's memory?
[15:42:00 CEST] <bencc1> mpeg-dash works in ios?
[15:42:19 CEST] <bencc1> or do I need both hls and mpeg-dash for broad browser support?
[15:43:50 CEST] <JEEB> there are SDKs for MPEG DASH but officially they support fragmented MP4 segments in HLS playlists
[15:44:11 CEST] <JEEB> so you might be able to share the segments, but having two separate playlists
[15:44:33 CEST] <kerio> just do hls with fmp4
[15:48:28 CEST] <DHE> idlus: yes
[15:48:36 CEST] <bencc1> kerio: I'll have broad support for hls with fmp4?
[15:48:54 CEST] <kerio> maybe :^)
[15:49:01 CEST] <JEEB> I'm not sure of the support among stuff like exoplayer on android or hls.js on browsers
[15:49:17 CEST] <kerio> surely it should be easier than remuxing mpegts into mp4
[15:49:19 CEST] <DHE> the dual playlist method looks interesting though
[15:49:31 CEST] <JEEB> yes, that's what I'd do if possible
[15:49:37 CEST] <JEEB> no support for both playlists yet in lavf tho
[15:49:50 CEST] <kerio> honestly i'm pretty sure that the single-file playlist thing is the best thing ever
[15:50:55 CEST] <bencc1> so the chunks are the same only two separate playlists point to them?
[15:51:05 CEST] <JEEB> yea
[15:51:10 CEST] <JEEB> one DASH and one HLS
[15:51:55 CEST] <bencc1> hls use segment*.ts and playlist.m3u8 ?
[15:52:01 CEST] <bencc1> or segment*.mp4?
[15:52:22 CEST] <kerio> recent hls can use segment*.mp4 and playlist.m3u8
[15:52:39 CEST] <kerio> or even just a single mp4 and then byte ranges specified in the playlist
[15:52:53 CEST] <JEEB> https://tools.ietf.org/html/draft-pantos-http-live-streaming-20#section-3.3
[15:52:56 CEST] <JEEB> related part of the spec
[15:53:18 CEST] <bencc1> mpeg-dash can also use byte ranges in a single mp4?
[15:53:25 CEST] <bencc1> can I create this mp4 in real time?
[15:53:49 CEST] <kerio> yes, the point of fragmented mp4 is that you don't need to "backtrack" when writing it
[15:54:04 CEST] <kerio> bencc1: https://developer.apple.com/streaming/examples/ see example here
[15:54:26 CEST] <kerio> "fMP4 stream compatible with macOS v10.12 or later, iOS 10 or later, and tvOS 10 or later"
[15:54:31 CEST] <kerio> so quite recent stuff
[15:54:44 CEST] <bencc1> cool. thanks
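For the record, the "HLS with fMP4" approach kerio and JEEB describe can be produced directly by ffmpeg's hls muxer via `-hls_segment_type fmp4` — though note that option landed in releases newer than the one current at the time of this March 2017 log (JEEB's "no support for both playlists yet in lavf" reflects the state back then). A sketch with placeholder filenames:

```shell
# One set of fragmented-MP4 segments plus an m3u8 playlist.
# Requires an ffmpeg whose hls muxer supports -hls_segment_type fmp4.
ffmpeg -i input.mp4 -c:v libx264 -c:a aac \
  -f hls -hls_segment_type fmp4 \
  -hls_time 6 -hls_playlist_type vod \
  -hls_fmp4_init_filename init.mp4 \
  out.m3u8
```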
[19:01:18 CEST] <RonaldsMazitis> how do I convert every file in folder from mp4 to mp3
[19:01:42 CEST] <RonaldsMazitis> ok I think I got it
[19:01:50 CEST] <kerio> no problem
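The loop RonaldsMazitis presumably arrived at is worth recording; `${f%.mp4}` strips the extension, so `song.mp4` becomes `song.mp3`:

```shell
# Convert every .mp4 in the current directory to .mp3 (audio only).
for f in *.mp4; do
  ffmpeg -i "$f" -vn -c:a libmp3lame -q:a 2 "${f%.mp4}.mp3"
done
```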
[20:01:18 CEST] <peg_the_ffm> Since this channel is publicly logged, I just want to say hi to my mom in Florida: "HI MOM"
[20:06:50 CEST] <peg_the_ffm> Okay, so I use -i in.mp4 -i palette.png -lavfi "fps=25 [x]; [x][1:v] paletteuse" -f image2 out%05d.gif BUT the output is actually jpeg images with a gif extension
[20:07:44 CEST] <peg_the_ffm> what I would like is individual GIF images with the -lavfi paletteuse actually being used
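No answer appears in the log, but one thing to try (an assumption, not confirmed here) is pinning the encoder explicitly: the symptom suggests the image2 muxer chose a JPEG encoder despite the .gif extension, which `-c:v gif` would override:

```shell
# Hypothetical fix: force the gif encoder so each output image really is a GIF.
ffmpeg -i in.mp4 -i palette.png \
  -lavfi "fps=25 [x]; [x][1:v] paletteuse" \
  -c:v gif -f image2 out%05d.gif
```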
[20:28:22 CEST] <yos> When I compile the following
[20:28:46 CEST] <yos> #include <libavutil/avutil.h> #include <libavutil/imgutils.h> #include <libavcodec/avcodec.h> #include <libavformat/avformat.h> #include <libswscale/swscale.h>
[20:29:04 CEST] <yos> int decode(AVCodecContext *avctx, AVFrame *pkt, process_frame_cb cb, void *priv) { AVFrame *frame = av_frame_alloc(); int ret; ret = avcodec_send_packet(avctx, pkt); // Again EAGAIN is not expected if (ret < 0) { av_frame_free(&frame); if (ret == AVERROR(EAGAIN)) return 0; } while (!ret) { ret = avcodec_receive_frame(avctx, frame); if (!ret) ret
[20:29:55 CEST] <yos> I get an error: undefined reference to `avcodec_send_packet'
[20:30:10 CEST] <yos> is there any way to take care of it?
[20:30:20 CEST] <JEEB> sounds like you're trying to use old libavcodec
[20:31:17 CEST] <yos> #define LIBAVCODEC_VERSION_MAJOR 57 #define LIBAVCODEC_VERSION_MINOR 64 #define LIBAVCODEC_VERSION_MICRO 101
[20:33:32 CEST] <yos> I checked and this is the right version
[20:33:54 CEST] <JEEB> yea but if it doesn't have those functions that doesn't exactly help does it
[20:35:37 CEST] <yos> It has the functions, but somehow the compiler doesn't see them. And I have the -lavcodec
[20:36:35 CEST] <JEEB> then you're passing the wrong search paths to either I or L
[20:37:01 CEST] <yos> JEEB can you explain?
[20:37:30 CEST] <peg_the_ffm> how to output INDIVIDUAL .gif images but still use the -lavfi paletteuse conversion
[20:37:34 CEST] <JEEB> header or library search paths
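JEEB's point, sketched as a compile command: `-I` sets the header search path, `-L` the library search path, and the `-l` flags come after the source file. An undefined reference with the right headers usually means `-L` points at a different (older) libavcodec than the one the headers came from. `/opt/ffmpeg` here is a placeholder prefix:

```shell
# Make sure -I and -L point at the SAME ffmpeg installation.
gcc decode.c \
  -I/opt/ffmpeg/include \
  -L/opt/ffmpeg/lib \
  -lavformat -lavcodec -lswscale -lavutil \
  -o decode

# Or let pkg-config supply matching paths for both:
gcc decode.c $(pkg-config --cflags --libs libavformat libavcodec libswscale libavutil) -o decode
```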
[20:38:56 CEST] <BugzBunny> Hello, what would be the easiest way to concat to video files and transcode them to H.264 via VAAPI?
[20:39:07 CEST] <BugzBunny> s/to/two/
[20:47:51 CEST] <peg_the_ffm> BugzBunny: probably using -f concat -i list_of_files.txt and list of files looks like this file file1.mov\nfile file2.mov\nfile file3.mov
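peg_the_ffm's `\n`-separated description, written out literally — the concat demuxer's list file is one `file <name>` directive per line (`-safe 0` is only needed for paths the demuxer considers unsafe, and `-c copy` assumes the inputs share codecs):

```shell
# Build the list file the concat demuxer expects.
cat > list_of_files.txt <<'EOF'
file file1.mov
file file2.mov
file file3.mov
EOF
ffmpeg -f concat -safe 0 -i list_of_files.txt -c copy joined.mov
```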
[20:48:59 CEST] <peg_the_ffm> and hello
[20:49:52 CEST] Action: peg_the_ffm wonders if balrog used to hang out on #dsdev
[20:50:50 CEST] <peg_the_ffm> with WinterMute & the boyz
[20:53:38 CEST] <BugzBunny> peg_the_ffm, Would something like this would work http://pastebin.com/JkXkqHtW?
[21:15:28 CEST] <peg_the_ffm> BugzBunny: can't see it
[21:16:20 CEST] <peg_the_ffm> BugzBunny: but maybe this https://superuser.com/questions/607383/concat-two-mp4-files-with-ffmpeg-without-losing-quality/607384
[21:17:12 CEST] <peg_the_ffm> or in th edocs https://trac.ffmpeg.org/wiki/Concatenate
[21:20:31 CEST] <BugzBunny> Yeah, I am looking over it, but I would preferably want hwaccel to reduce encoding time
[21:20:51 CEST] <Duality> hi
[21:21:05 CEST] <Duality> is there any way to disable input buffering ?
[21:31:40 CEST] <ChocolateArmpits> Duality, you can try -probesize 32 or any other low value
[21:31:49 CEST] <ChocolateArmpits> also there's a nobuffer flag
[21:32:19 CEST] <ChocolateArmpits> ahh -fflags nobuffer
[21:32:28 CEST] <ChocolateArmpits> both are input options
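Both of ChocolateArmpits's input options combined, as a sketch (the stream URL is a placeholder; 32 is the smallest value -probesize accepts):

```shell
# Reduce input-side buffering/probing; both options go BEFORE -i.
ffmpeg -fflags nobuffer -probesize 32 \
  -i rtmp://example.com/live/stream -f null -
```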
[21:42:49 CEST] <BugzBunny> I am getting this error [concat @ 0x55ab4d3fd1a0] Line 1: unknown keyword 'intro_video.mp4', am I doing something wrong here?
[21:44:56 CEST] <ChocolateArmpits> BugzBunny, are you using a text file concatenation ?
[21:48:16 CEST] <BugzBunny> Yeah, looks like I forgot to add 'file'. I got further but it fails at the encoder: Line 1: unknown keyword. However, I read that with the concat demuxer both videos have to have the same codec, right?
[21:51:09 CEST] <BugzBunny> https://hastebin.com/unitudeqab.go
[21:51:35 CEST] <BugzBunny> The exact error rather: Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height.
[21:53:24 CEST] <BugzBunny> I noticed that the input file isn't added in the "stream" and I'm assuming that's because the second video is h.265. My end goal is concatenating two video files with different codecs via VAAPI to help speed up encoding. Doing it on CPU atm is about 2 hours per video.
[22:08:28 CEST] <ChocolateArmpits> BugzBunny, when using text file concat they don't have to I believe, only when you're specifying the video files in the command line
[22:08:55 CEST] <Duality> ChocolateArmpits: thanks i'll try those :)
[22:11:53 CEST] <BugzBunny> Going over https://trac.ffmpeg.org/wiki/Concatenate, it appears you need to use concat filter to "Concatenation of files with different codecs", well that doesn't work as apparently I can't use filter_complex with -vf.
[22:12:27 CEST] <BugzBunny> I think I am going to give up on this endeavor. It's not possible.
[22:13:11 CEST] <ChocolateArmpits> BugzBunny, you can specify video filters inside filter_complex just as well
[22:13:34 CEST] <ChocolateArmpits> filter_complex signals that all types of filters can be used and that filter outputs can be mapped
[22:13:48 CEST] <ChocolateArmpits> it's more complex but more powerful too
[22:16:08 CEST] <furq> i heard a rumour that filter_complex was complex
[22:16:16 CEST] <furq> but i forget where
[22:16:42 CEST] <ChocolateArmpits> must've been me
[22:18:09 CEST] <ChocolateArmpits> BugzBunny, So in your case it could be something like -filter_complex [0:v][1:v]concat=n=2:v=1:a=1[v1][aout];[v1]format=nv12,hwupload[vout] -map [vout]:v -map [aout]:a
[22:18:17 CEST] <ChocolateArmpits> oh wait
[22:18:23 CEST] <ChocolateArmpits> BugzBunny, So in your case it could be something like -filter_complex [0][1]concat=n=2:v=1:a=1[v1][aout];[v1]format=nv12,hwupload[vout] -map [vout]:v -map [aout]:a
[22:18:35 CEST] <BugzBunny> Let me give it a shot
[22:18:38 CEST] <ChocolateArmpits> or maybe something like that
[22:18:46 CEST] <ChocolateArmpits> you need to try the input mapping for the concat filter
[22:18:48 CEST] <ChocolateArmpits> with your source
[22:18:56 CEST] <furq> you don't need to map the inputs to concat if they're in the right order
[22:18:58 CEST] <furq> which is useful
[22:19:02 CEST] <ChocolateArmpits> I'm not sure if you need to specify each stream separately there
[22:19:11 CEST] <ChocolateArmpits> or maybe that
[22:19:20 CEST] <furq> e.g. if you have file1.mp4 and file2.mp4 and they both have the same number of video/audio streams as specified in the concat filter
[22:19:41 CEST] <furq> just -i file1.mp4 -i file2.mp4 -filter_complex concat=2:1:1 will work
[22:24:32 CEST] <BugzBunny> Alright, for the simpler version, how would I incorporate format=nv12,hwupload?
[22:26:13 CEST] <furq> -filter_complex "concat=2:1:1[tmp][a];[tmp]format=nv12,hwupload[v]" -map "[v]" -map "[a]"
[22:26:34 CEST] <BugzBunny> Thanks.
[22:29:57 CEST] <BugzBunny> Alright, now trying to figure why it's failing: https://hastebin.com/ofabewufab.sql. Error Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height. Can't tell yet if this is the culprit: [h264_vaapi @ 0x563e95540540] B frames are not supported (1).
[22:30:33 CEST] <BugzBunny> My GPU doesn't support B frames in hardware, period. I am not sure if that's ignored.
[22:30:53 CEST] <furq> Reading option '-bf' ... matched as AVOption 'bf' with argument '2'.
[22:31:14 CEST] <furq> that's probably wrong if your encoder doesn't support bframes
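Putting furq's pieces together — his filter_complex from earlier plus `-bf 0` to disable B-frames, since BugzBunny's encoder can't produce them. The device path and filenames are placeholders (the log shows BugzBunny using `-vaapi_device :0`):

```shell
# Concatenate two inputs of differing codecs with the concat FILTER,
# then upload to the GPU and encode with h264_vaapi, B-frames disabled.
ffmpeg -hwaccel vaapi -vaapi_device /dev/dri/renderD128 \
  -i intro_video.mp4 -i main_video.mp4 \
  -filter_complex "concat=n=2:v=1:a=1[tmp][a];[tmp]format=nv12,hwupload[v]" \
  -map "[v]" -map "[a]" -c:v h264_vaapi -bf 0 output.mp4
```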
[22:33:59 CEST] Action: BugzBunny is googling
[23:02:08 CEST] <BugzBunny> I got it to do something, but it seems like it segfault
[23:02:44 CEST] <BugzBunny> "[1] 16639 floating point exception (core dumped) ffmpeg -loglevel debug -hwaccel vaapi -vaapi_device :0 -i intro_video.mp4 -i "
[00:00:00 CEST] --- Mon Mar 27 2017