[Ffmpeg-devel-irc] ffmpeg.log.20160908
burek
burek021 at gmail.com
Fri Sep 9 03:05:01 EEST 2016
[01:29:25 CEST] <kahunamoore> DHE: agreed. Trying to avoid that since eventually we want to do 4k/UHD and our embedded processor/memory is sloooow. One memcpy is too many in that case. :-\
[01:34:11 CEST] <kahunamoore> Take care - thanks for the input everyone.
[02:19:14 CEST] <Alexey_> hi all ! I have transcoded AVI (DivX) to MPEG4 with the following command: ffmpeg -y -fflags +genpts -i "$f" -codec copy -bsf:v mpeg4_unpack_bframes "$f.mp4" ... but my video is "jumpy"
[02:19:18 CEST] <Alexey_> any ideas ?
[02:20:23 CEST] <Alexey_> (when played back on an iPad)
[02:24:11 CEST] <Threads> same framerate as the source ?
[02:29:08 CEST] <Alexey_> Threads, unchanged
[04:59:13 CEST] <k_sze[work]> SWS_BICUBIC (or any SWS_BILINEAR or whatever) has no effect if the input and output have the exact same dimensions, right?
[05:00:59 CEST] <k_sze[work]> Or let's say I *know* that the input and output have the same dimensions, I'm just using swscale to convert the pixel format. Is there a faster flag?
[05:06:43 CEST] <kepstin> if the input and output res are the same, swscale doesn't do any scaling, it only does the format conversion
[05:07:07 CEST] <kepstin> (although note that some format conversions require scaling, e.g. yuv420 to yuv422 changes the chroma resolution)
[05:13:45 CEST] <k_sze[work]> So if I'm converting from, say BGRX to yuv422p, does the flag have any effect?
[05:14:30 CEST] <k_sze[work]> I think the desired behaviour is to just average the chroma across every pair of horizontal pixels.
[05:16:12 CEST] <kepstin> yeah, BGRX to yuv422p will involve a scale
[05:16:55 CEST] Action: kepstin notes that "averaging the chroma across every pair of horizontal pixels" is pretty much what should happen in bilinear or area scale mode
[08:34:02 CEST] <Spring> how would I work out the appropriate mapping for the shuffleframes filter? Is there a simpler explanation of the mapping values than the docs?
[08:37:09 CEST] <ss_> Hi, I'm new to ffmpeg and I'm trying to save rtsp stream from camera (H264, pcm-alaw) in flv format but in 10mins segments. So far my code can read stream from camera and store it in a single file
[08:37:32 CEST] <ss_> Is there a working example of segments (code not commands)?
[08:37:40 CEST] <ss_> or some guide to help me start
[08:41:41 CEST] <nonex86> ss_: using cli or you writing your own code?
[08:42:14 CEST] <ss_> nonex86: writing c code
[08:42:17 CEST] <nonex86> ss_: if you are using ffmpeg api directly i dont see any problem
[08:42:26 CEST] <nonex86> ss_: check rtp timestamps
[08:42:37 CEST] <nonex86> ss_: and write data to new one file
[08:42:47 CEST] <nonex86> ss_: when reached required time
[08:43:09 CEST] <ss_> nonex86: libavformat file segment.c has segment API, write!
[08:45:00 CEST] <nonex86> ss_: never used it so cant tell anything about
[08:45:00 CEST] <ss_> nonex86: I just need proper initialization sequence of this API
[08:45:20 CEST] <ss_> oh ok
[08:45:44 CEST] <ss_> nonex86: so you suggest manually creating multiple files based on timestamps?
[09:04:17 CEST] <DHE> when the video stream produces a keyframe and the pts (or dts) passes a threshold, close the file and start the next one
[09:16:38 CEST] <ss_> DHE: you are right, I can do that but i want to do it using segment.c api from libavformat
[09:16:49 CEST] <ss_> DHE: i just need initialization sequence for segment.c as there is no example given
[09:17:01 CEST] <ss_> DHE: actually I've got 8 camera streams at the same time, so manually handling them all based on keyframes can be messy
[09:22:12 CEST] <Spring> is there a way to increase the blur amount greater than the maximum smartblur allows? It's fairly weak and was needing a stronger blur.
[09:23:34 CEST] <Spring> looks like I can stack them, hmm.
[09:26:49 CEST] <Spring> bit hard to reproduce the same output as the Photoshop Gaussian blur
[09:27:10 CEST] <Spring> stacking 7x gets roughly the same blur though not the same type of blurring
[09:28:16 CEST] <durandal_1707> Spring: whats not clear about shuffleframes?
[09:28:49 CEST] <Spring> durandal_1707, I'm not sure what the mapping values represent
[09:31:09 CEST] <durandal_1707> Spring: input number to output number
[09:31:20 CEST] <Spring> in the example for instance: "shuffleframes=0 2 1" which 'swaps second and third frame every three frames'. Tried changing the values to experiment but each returned an error.
[09:31:53 CEST] <durandal_1707> to what change?
[09:32:32 CEST] <durandal_1707> Spring: there's boxblur, avgblur and gblur now too
[09:34:29 CEST] <durandal_1707> number of entries must be greater than max number in entry by exactly one
[09:34:46 CEST] <Spring> durandal_1707, like 4 2 1 (not that I clearly understood what controlled the order and looping). Huh, gblur must be a very recent addition.
[09:36:26 CEST] <durandal_1707> Spring: you need to add two extra numbers to that
[09:37:43 CEST] <durandal_1707> shuffleframes can't know what frames to drop/dup without them
[09:38:13 CEST] <durandal_1707> number of entries tells how many frames to take
[09:39:09 CEST] <durandal_1707> so if you have 4 there, you need 5 numbers
[09:39:33 CEST] <durandal_1707> 4 2 1 0 0
[09:41:25 CEST] <Spring> so if I'm understanding it correctly thanks to your clarification, you would write the same number of values as the initial value and then within those values tell it which ones to re-order. Eg: with '4' frames order the 2nd as the first. Is that correct?
[09:41:41 CEST] <Spring> *er, one extra than the initial value
[09:42:43 CEST] <Spring> so for 10 frames it would be: 10 3 0 1 0 0 0 0 0 0 0 <- to order the third frame as the first every 10 frames?
[09:49:40 CEST] <Spring> gblur works nicely
[09:51:26 CEST] <durandal_1707> Spring: it counts from 0, so the 10th is actually 9
[09:53:53 CEST] <durandal_1707> and first number is first number in shuffle table, not number of entries
[09:55:02 CEST] <Spring> durandal_1707, mind elaborating on that last point?
[09:55:15 CEST] <Spring> so what should the first value be, just 0?
[09:55:53 CEST] <Spring> basically making them all zeroes except the ones intending to re-order
[09:56:08 CEST] <durandal_1707> 2 0 1 3 4 5 6 7 8 9
[09:56:29 CEST] <Spring> oh, so they all have to be named
[09:57:01 CEST] <durandal_1707> yes, that set mapping
[09:57:12 CEST] <Spring> I honestly think that would have made more immediate sense if it was in the docs
[09:57:33 CEST] <Spring> even if just a secondary example
[09:57:54 CEST] <Spring> or maybe I'm just not used to thinking about it that way :p
[09:58:33 CEST] <durandal_1707> first entry maps first frame of output, and so on
[09:58:43 CEST] <Spring> right, gotcha
[10:00:33 CEST] <steve___> I'm having an issue opening https streams with avformat_open_input, where I'm getting the following error:
[10:00:38 CEST] <steve___> [https @ 0x1287f1fe0] Setting default whitelist 'http,https,tls,rtp,tcp,udp,crypto,httpproxy'( [tls @ 0x128b57550] error:140A90F1:SSL routines:SSL_CTX_new:unable to load ssl2 md5 routines(
[10:01:04 CEST] <steve___> the server im using does not support ssl2 or ssl3 according to this report: https://www.ssllabs.com/ssltest/analyze.html?d=bffdotfm.s3.amazonaws.com
[10:58:24 CEST] <k_sze[work]> libx264 doesn't take advantage of VA-API, right?
[11:01:52 CEST] <DHE> no. it's a software implementation. opencl is available but hit-and-miss as to whether it even helps
[11:15:49 CEST] <flux> don't the opencl implementations even reduce CPU load even if they don't actually speed it up?
[11:20:37 CEST] <JEEB> well you're only offloading the ME lookahead onto whatever is running the opencl instructions
[11:20:49 CEST] <JEEB> which is a tiny thing that can be run completely separately from the rest of the threads
[11:21:12 CEST] <JEEB> + I don't remember if the opencl version of the ME lookahead actually did it as accurately as the non-opencl one
[11:40:17 CEST] <DHE> there is a documented quality loss. probably related to the use of floats instead of doubles on most GPUs
[11:40:53 CEST] <DHE> flux: I benchmarked opencl on several videos in terms of encoding FPS. I had some up, some down
[11:41:24 CEST] <DHE> both the CPU and GPU were decent desktop grade systems (core i7 and a workstation Quadro) so they would have done well on their own
[11:47:22 CEST] <k_sze[work]> I need some pointers on how to use the hardware accelerated encoders when using the ffmpeg *libraries*.
[11:47:44 CEST] <k_sze[work]> e.g. there doesn't seem to be a corresponding AV_CODEC_ID_XXX for h264_vaapi?
[11:50:43 CEST] <nonex86> not sure about vaapi, but for for dxva its not related with codecid, its all about pixel_format
[11:52:31 CEST] <nonex86> guess for vaapi it should work same way
[11:54:48 CEST] <jkqxz> You want to find it by name. If it's the only H.264 encoder built in then you will get it by ID, but libx264 will be preferred if it's there.
[11:55:07 CEST] <nonex86> AV_PIX_FMT_VAAPI_VLD
[11:55:45 CEST] <nonex86> or AV_PIX_FMT_VAAPI, check deprecation flags to be sure
[11:56:46 CEST] <jkqxz> For the encoder, I mean - avcodec_find_encoder_by_name("h264_vaapi") rather than avcodec_find_encoder(AV_CODEC_ID_H264).
[11:58:57 CEST] <nonex86> ah... encoder.. my bad
[12:29:59 CEST] <livingBEEF> Does anyone know if minterpolate was added just recently (3.1.2+)?
[12:30:18 CEST] <durandal_1707> In master only
[12:31:53 CEST] <livingBEEF> Ah, ok. I have 3.1.1 and I wondered if I just forgot to enable some library.
[14:25:24 CEST] <Chris2A> hello there !
[14:25:58 CEST] <Chris2A> Is there anyone interested in building an ffmpeg module for an Android project ?
[14:37:12 CEST] <JEEB> building FFmpeg for Android is rather simple
[14:51:50 CEST] <Chris2A> @JEEB, can you do it ?
[14:56:07 CEST] <JEEB> don't pm me without a proper reason :P
[14:56:30 CEST] <Chris2A> alright
[14:59:18 CEST] <Chloe> Chris2A: 'Is there anyone interested', as in, 'I'm looking to hire someone to do this for me'?
[15:00:35 CEST] <mrelcee> well there's no money but think of the exposure!
[15:01:38 CEST] <Chloe> mrelcee: ;_;
[15:04:25 CEST] <redgetan> im trying to statically build ffmpeg using this guide (https://trac.ffmpeg.org/wiki/CompilationGuide/Centos) with a minor modification of the ffmpeg configure line (adding --enable-static --disable-shared)
[15:04:56 CEST] <redgetan> i ended up having a dynamic linked exec instead of static one - http://pastebin.com/raw/3xeDh1Mr
[15:05:53 CEST] <redgetan> good thing is that the libs such as x264/lame are statically linked, but just not sure why final executable is still not static
[15:14:34 CEST] <theholyduck> redgetan: based on some googling. this might help you https://github.com/zimbatm/ffmpeg-static
[15:14:48 CEST] <theholyduck> https://github.com/pyke369/sffmpeg or this
[15:15:03 CEST] <theholyduck> they are scripts so you should be able to figure out how they are doing it
[15:16:20 CEST] <relaxed> redgetan: you need --extra-ldflags="-static"
[15:18:58 CEST] <Chris2A> Yeah, I'm paying for this, of course
[15:20:41 CEST] <Chris2A> Chloe, are you in ?
[15:28:28 CEST] <redgetan> theholyduck: thanks. i actually tried the zimbatm repo the first time but didnt work (turned out dynamic which is an open ticket in github), but sffmpeg looks promising, i'll give it a try
[15:29:13 CEST] <redgetan> relaxed: i'll try that again as well. i tried that before and i got an error, but i'll see if i can make that work
[15:30:43 CEST] <relaxed> It is required for a static binary. You can try mine, https://www.johnvansickle.com/ffmpeg/
[15:30:49 CEST] <furq> redgetan: if you don't have static libraries for the external libs you're using then it won't work
[15:31:02 CEST] <erikalds> hello, i'm trying to use libav* with libvpx and the VP8 encoder to stream live video. i'm able to decode the stream at the receiving end, but only when the receiver is running when the transmitter starts encoding. am i trying to do something impossible or do i just have the incorrect options?
[15:35:02 CEST] <redgetan> furq: when i added --extra-ldflags='-static', im seeing /usr/bin/ld: cannot find -lc, i guess that means i also need static version of libc ?
[15:37:05 CEST] <relaxed> redgetan: on debian you would install libc6-dev
[15:39:34 CEST] <furq> apparently it's glibc-devel on rpm distros
[15:40:18 CEST] <Chloe> Chris2A: sure
[15:40:20 CEST] <Chloe> PM me
[15:43:23 CEST] <redgetan> furq: i tried installing glibc-devel, but apparently its already installed, yet still getting cannot find -lc
[15:43:53 CEST] <Enverex> Hi all, I'm trying to record using x11grab but every ~6 seconds or so, a black bar quickly scans down the screen. Any ideas what this is or what causes it? Given that it's roughly every 6 seconds and my FPS/Hz is 60, I'm thinking it's tied to that somehow.
[15:44:02 CEST] <relaxed> redgetan: is there a glibc-static?
[15:46:50 CEST] <redgetan> oh, cool, glibc-static worked
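For reference, a configure invocation along the lines of what worked in this thread (library selection follows the CentOS guide linked earlier; exact flags, paths, and the set of --enable-lib* options vary per system):

```shell
# Fully static build: static FFmpeg libs plus a statically linked binary.
# Requires static versions of every dependency, including libc
# (glibc-static on CentOS/Fedora; libc6-dev covers it on Debian).
./configure --enable-static --disable-shared \
            --extra-ldflags="-static" \
            --enable-gpl --enable-libx264 --enable-libmp3lame
make
```

Verify the result with `ldd ffmpeg`, which should report "not a dynamic executable".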
[15:51:45 CEST] <Enverex> Hrm, if I give ffmpeg the -framerate 59.81 value, the issue seems to go away...
[15:56:26 CEST] <erikalds> the call that fails at the receiving end is avformat_open_input(...)
[16:18:33 CEST] <relaxed> Enverex: yeah, whatever xrandr reports
[16:22:57 CEST] <Enverex> relaxed: The only issue then is, if I'm recording and something changes the screen res to a different resolution that does support 60Hz, how wonky will it go? (as it's now got an extra frame every 120 frames or so)
[16:44:41 CEST] <screcorder> Hi, I want to capture my screen on Windows but I can't use video_size, can anyone help me? It always captures the full desktop! http://pastebin.com/KR1Ajy6E
[18:00:12 CEST] <mvardan> can anyone provide some info or link for compilation and usage of intel qsv with ffmpeg?
[18:02:18 CEST] <DHE> mvardan: https://trac.ffmpeg.org/wiki/HWAccelIntro start here
[18:10:05 CEST] <mvardan> DHE: I have done as described here, but failed test
[18:10:05 CEST] <mvardan> on "ffmpeg -i /dev/video0 -c:v h264_qsv -preset:v faster out.qsv.mp4"
[18:10:05 CEST] <mvardan> it says https://drive.google.com/open?id=0B2xidSTkSMV8a3lHN1F4ZGVOZW8
[18:11:11 CEST] <mvardan> sorry for link let me provide new one
[18:12:16 CEST] <mvardan> https://drive.google.com/open?id=0B2xidSTkSMV8bllwSEdzZkk0V0E
[18:12:33 CEST] <jkqxz> Do the media SDK samples work for you?
[18:15:55 CEST] <mvardan> unfortunately no... :(
[18:15:58 CEST] <mvardan> https://drive.google.com/open?id=0B2xidSTkSMV8VGpSU25Ldlp6Sms
[18:16:39 CEST] <jkqxz> If you just want to use the quick sync hardware, vaapi is much easier to use. If you have to use the media SDK, note that setting it up to work is hard, requiring significant knowledge of what you are doing if you aren't using the one supported CentOS version.
[18:18:55 CEST] <mvardan> jkqxz: I see that it is too hard... I just want to HW encode video from camera and stream it to my RTSP server
[18:20:46 CEST] <jkqxz> "ffmpeg -i /dev/video0 -vf 'format=nv12,hwupload' -c:v h264_vaapi out.mp4"
[18:22:55 CEST] <jkqxz> (That will still eat CPU on the input (possibly including a decode if it's mjpeg?), converting to nv12 and uploading the frames to the GPU to encode. It should use significantly less than an encode with libx264 would, though.)
[18:25:04 CEST] <jkqxz> Oops. Also "-vaapi_device /dev/dri/renderD128" after ffmpeg in that command line to tell it the device to use.
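Putting jkqxz's two messages together, the complete command would be something like this (the render node path may differ per system):

```shell
# Encode from a V4L2 camera with the Quick Sync hardware via VA-API.
# /dev/dri/renderD128 is the usual render node; adjust if needed.
ffmpeg -vaapi_device /dev/dri/renderD128 \
       -i /dev/video0 \
       -vf 'format=nv12,hwupload' \
       -c:v h264_vaapi out.mp4
```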
[18:27:14 CEST] <mvardan> jkqxz: https://drive.google.com/open?id=0B2xidSTkSMV8VXhKY0c3R0tTWEU
[18:29:32 CEST] <mvardan> Is there any way to use h264_qsv without Intel media SDK ?
[18:29:44 CEST] <jkqxz> Installing the media SDK installs their special incompatible version of libva which only works for them. Remove the media SDK (or fiddle with paths so it can't find it) and make sure you have the real driver installed (should be i965_drv_video, not iHD_drv_video).
[18:37:42 CEST] <mvardan> this method works for me, I'll try to do some benchmarking to understand how it improves performance
[18:38:36 CEST] <mvardan> But anyways is there any way to use h264_qsv without Intel media SDK ?
[18:40:47 CEST] <jkqxz> The "h264_qsv" encoder uses the proprietary Intel Media SDK interface to using the Quick Sync hardware. The "h264_vaapi" encoder uses the open source interface to the Quick Sync hardware.
[18:41:08 CEST] <jkqxz> So no, there is no way to use the "h264_qsv" encoder without the Intel Media SDK, but I don't think that was the question you wanted to ask.
[18:46:34 CEST] <mvardan> I am trying to use HW encoding on an Intel compute stick, it has a Core m5 processor with Quick Sync support
[18:46:34 CEST] <mvardan> http://ark.intel.com/products/88197/Intel-Core-m5-6Y57-Processor-4M-Cache-up-to-2_80-GHz
[18:46:34 CEST] <mvardan> But I can't find that cards in https://01.org/linuxgraphics/community/vaapi
[18:47:37 CEST] <mvardan> So I want to understand will this solution help me on my compute stick with ubuntu installed on it...
[18:48:40 CEST] <jkqxz> Yes, it works on Skylake too. That page probably hasn't been updated in a while.
[18:55:09 CEST] <mvardan> jkqxz: Thank you VERY MUCH
[20:38:21 CEST] <fauno> hi
[20:41:44 CEST] <durandal_1707> Hi
[20:46:26 CEST] <fauno> i'm trying to stream a logitech c920 webcam to an icecast server
[20:46:52 CEST] <fauno> but it seems to get stuck
[20:47:19 CEST] <fauno> i see lots of 'past duration ... too large' messages
[20:47:51 CEST] <fauno> and then it starts to drop lots of frames
[20:48:17 CEST] <fauno> i'm able to read the h264 stream using something like this
[20:48:21 CEST] <fauno> ffmpeg -f v4l2 -input_format h264 -video_size 320x400 -i /dev/video1 -copyinkf -codec copy stream.mp4
[20:48:51 CEST] <fauno> if i remove the -codec flag it gets stuck also
[20:50:22 CEST] <fauno> :O i think i got it
[20:51:56 CEST] <fauno> mmno
[20:52:08 CEST] <fauno> at some point it gets stuck again
[20:52:23 CEST] <fauno> stuck = the frame count doesn't increment
[20:53:50 CEST] <BtbN> The h264 streams those things generate are horribly broken.
[20:54:17 CEST] <BtbN> not suitable for streaming, barely enough to decode locally by a fault tolerant decoder
[20:54:27 CEST] <furq> since when did icecast support h264
[20:54:29 CEST] <BtbN> And the quality is horrible, too.
[20:55:09 CEST] <fauno> i'm encoding to webm
[20:55:10 CEST] <fauno> ugh
[20:55:37 CEST] <furq> which bit is breaking
[20:55:42 CEST] <fauno> the module just throws stack traces :P
[20:56:02 CEST] <fauno> it streams correctly (with quite a few frame drops) for ~20s and then it dies
[20:56:21 CEST] <fauno> frame= 1293 fps=9.2 q=0.0 Lsize= 1134kB time=00:09:34.20 bitrate= 16.2kbits/s dup=0 drop=2912 speed=4.09x
[20:56:43 CEST] <fauno> the frame doesn't increment while the drop count does
[20:56:51 CEST] <furq> if you're reencoding anyway then try capturing in mjpeg/rawvideo
[20:58:49 CEST] <trudev> Hello, How can I tweak this command so that the resulting video will be the same length of the original video? http://pastebin.com/7Wz4nX81
[20:59:22 CEST] <trudev> I added -shortest because if the audio is bigger than the video, the rest of the video will be black
[20:59:46 CEST] <trudev> but how can I make it so that the video will play even if the audio is cut out?
[21:00:45 CEST] <fauno> furq: it dies faster :P
[21:04:39 CEST] <kepstin> trudev: you'll basically need to make a filterchain that appends extra silence to the end of the audio so it's at least as long as the video.
[21:04:46 CEST] <kepstin> Could be as simple as "-af apad", but if you're selecting a particular source with map you might need to use -filter_complex with named pads
[21:05:22 CEST] <kepstin> The docs for apad in ffmpeg-filters have an example
[21:05:37 CEST] <trudev> Oh god, that sounds complex
[21:05:44 CEST] <kepstin> it's not, really.
[21:05:57 CEST] <trudev> I'm really new to ffmpeg
[21:06:04 CEST] <trudev> first time using it
[21:07:13 CEST] <Nahra_> Hi, I'm trying to compile ffmpeg in the msvc toolchain but I'm having problems enabling pthread support, after I run this configure: http://pastebin.com/rnRbxLJf the output shows "threading support no", what am I doing wrong?
[21:07:14 CEST] <kepstin> alright, i'll adapt your existing command line as an example for you: ffmpeg -i 1473360796576.mp4 -i sound.ogg -filter_complex '[1:a:0]apad[padded_audio]' -c:v copy -c:a aac -strict experimental -map 0:v:0 -map '[padded_audio]' -shortest 1473360796576_mixed.mp4
[21:07:57 CEST] <trudev> What is padded _audio?
[21:08:15 CEST] <kepstin> it's a name I made up for the stream containing the output of the apad filter
[21:08:37 CEST] <kepstin> it's just there in the filter_complex bit so that the output of that filter can be used in a "-map" option later
[21:08:55 CEST] <kepstin> (that could be pretty much any string, it's just an arbitrary name)
[21:09:18 CEST] <trudev> Interesting. Got it. Gonna give it a spin
[21:09:26 CEST] <trudev> thanks in advance kepstin!
[21:15:46 CEST] <trudev> kepstin: any reason you removed the -ss in that command?
[21:16:13 CEST] <furq> -ss 0 doesn't do anything
[21:16:19 CEST] <kepstin> you had -ss 00:00:00, which is redundant since it starts at the file start anyways
[21:16:34 CEST] <trudev> Yeah, but it changes depending on user input
[21:16:44 CEST] <trudev> that was just there for placeholder
[21:16:55 CEST] <kepstin> well, I didn't know that, you're free to add it back if needed :)
[21:18:24 CEST] <trudev> Ah ok, still trying to get it to run. I'm doing ffmpeg stuff on android and it's super annoying to build and run :/
[21:24:51 CEST] <JEEB> huh
[21:25:10 CEST] <JEEB> if that was true I wouldn't be doing mpv on android
[21:25:20 CEST] <JEEB> unless you mean actually executing ffmpeg cli
[21:33:51 CEST] <llogan> trudev: you won't need "-strict experimental" unless your ffmpeg is ancient
[21:34:44 CEST] <trudev> llogan: thanks I didn't realize that
[21:35:27 CEST] <trudev> JEEB: I'm using a library where you're basically using the command line version of ffmpeg
[21:36:10 CEST] <trudev> Android studio is annoying me right now
[21:36:14 CEST] <JEEB> and the only issue with android compilation is x86 >= 6.0
[21:36:28 CEST] <JEEB> because FFmpeg uses text relocations and android 6.0 doesn't let you use them with x86
[21:36:59 CEST] <trudev> rip
[21:45:07 CEST] <trudev> kepstin: Here is command I ran and the error I'm getting
[21:45:09 CEST] <trudev> http://pastebin.com/15Gpwkti
[21:45:15 CEST] <trudev> any ideas?
[21:46:19 CEST] <llogan> yet another android and quotes issue. try without any quotes
[21:47:04 CEST] <kepstin> oh, yeah. I gave it quoted as required for a standard unix shell
[21:47:15 CEST] <llogan> also, where is the version info? it is missing in your output
[21:49:23 CEST] <trudev> it worked without quotes! You guys are awesome!
[21:49:41 CEST] <trudev> Just gotta come up with a way to shrink the file size...
[21:50:30 CEST] <trudev> 15MB for 12 seconds of video. The backend won't be too pleased. Thanks for all the help. I really appreciate it
[21:50:57 CEST] <DHE> that's about 10 megabits, which is high but not absurdly so.
[21:51:29 CEST] <furq> there's not much you can do about that if you're copying the video stream
[21:51:41 CEST] <furq> and i doubt you want to encode it on a phone
[21:51:45 CEST] <kepstin> trudev: I assume that's the video as from the hardware encoder attached to the camera?
[21:52:08 CEST] <trudev> kepstin: yeah exactly
[21:52:27 CEST] <kepstin> yeah, software encoding on a phone is a good way to get people angry at your app which makes their phone hot, slow, and run out of battery :)
[21:53:00 CEST] <trudev> good to know. I'm pretty sure we compress the hell out of the video server side anyways
[21:53:26 CEST] <kepstin> the only reason you might want to have a smaller file on the phone is if your users are worried about the data upload size
[21:53:44 CEST] <trudev> Yeah, that's exactly what I'm thinking
[21:53:46 CEST] <kepstin> and if that's the case, you should be reconfiguring the camera to give a lower res/lower bitrate stream, not re-encoding
[21:54:14 CEST] <trudev> Currently set to 720p, gonna see what it looks like at 480p
[21:54:51 CEST] <llogan> users probably won't notice a difference
[21:55:31 CEST] <llogan> just upscale from 128x78
[21:59:08 CEST] <trudev> Yeah that's probably the best option
[22:45:03 CEST] <Enverex> Hi all, I'm trying to record with x11grab but I seem to be getting what I can only assume is screen tearing, despite vsync being enabled in the programs that I'm recording from.
[22:45:15 CEST] <Enverex> Skip about 15 seconds in here and you'll see what I mean - https://emerald.xnode.org/as/160908214140.mp4
[22:45:26 CEST] <Enverex> Basically manifests as black lines going up/down the screen
[22:45:41 CEST] <Enverex> What should/can I do to combat this?
[22:51:54 CEST] <kepstin> Enverex: that's a bug with your X server or your desktop compositor (if you're running one). Not really anything ffmpeg can do about it.
[22:52:14 CEST] <kepstin> ffmpeg just goes "hey, X server, give me a pixmap of the screen"
[22:53:53 CEST] <kepstin> if you're not running a compositor, you might consider running one, since that'll usually mean screen updates go from being partial to being "replace the entire screen image atomically". Could reduce the symptoms.
[22:55:32 CEST] <Enverex> kepstin: I'm indeed not running a compositor as I hear constant issues about them and that they reduce performance... know any that would work well with x11grab and don't nail performance?
[00:00:00 CEST] --- Fri Sep 9 2016