[Ffmpeg-devel-irc] ffmpeg.log.20170517

burek burek021 at gmail.com
Thu May 18 03:05:01 EEST 2017

[00:10:35 CEST] <Cracki> I have a video file with large gops and I'd like to know which frames are intra/keyframes (cheap to decode). how to get those? just a pointer in the right direction please
[00:10:37 CEST] <debianuser> sikilikis: Maybe your kernel was built without high-resolution timer support? Can you check sleep times on your kernel? Copy to some pastebin the output of:   for i in `seq -w 100`; do time sleep .$i; done
[00:10:43 CEST] <Cracki> talking about ffmpeg _api_
[01:01:02 CEST] <teratorn> Cracki: I'm thinking AVPacket::flags
[01:02:00 CEST] <Cracki> I'm now at pyav. the vid.demux(vid.streams.get()[0]) -> packets. packet.decode() -> frames. frame.key_frame is 0 or 1
[01:02:14 CEST] <Cracki> however I'd like to get that info without decoding the actual frames, just looking at them. what do I do?
[01:02:28 CEST] <Cracki> I think the workflow is similar to original lavf apis
[01:03:07 CEST] <teratorn> Cracki: relevant, http://stackoverflow.com/questions/14044335/keyframe-is-not-a-keyframe-av-pkt-flag-key-does-not-decode-to-av-picture-type-i
[01:03:12 CEST] <Cracki> thx
[01:03:49 CEST] <Cracki> I just want to know what frame indices are cheap to seek to
[01:04:26 CEST] <Cracki> my problem: i might need to play "backwards", so jump ahead to a keyframe, decode from there into buffers, then work with those frames.
[01:05:18 CEST] <Cracki> currently I'm just seeking -1 or -NUM relative using OpenCV, which seeks exactly to that position, even if it means decoding some frames and discarding them
[01:06:00 CEST] <Cracki> I'd be ok with "inaccurate-mode" seeking, if it'll tell me where it actually ended up
[01:07:17 CEST] <Cracki> video codec is usually h.264, very likely small gops (size ~4 I think), but in .m2ts, so no stored index
[01:07:42 CEST] <Cracki> (on occasion, also larger gops, ~100-500 frames)
[01:13:27 CEST] <teratorn> Cracki: you're doing your seeking based on frame numbers not timestamps?
[01:13:56 CEST] <Cracki> I can assume the frames are at a constant rate
[01:14:04 CEST] <Cracki> however I'd be ok with timestamp-based seeking too
[01:14:43 CEST] <Cracki> I just don't want to seek _into_ a GOP, or if I have to, I'd like to keep/get the frames that had to be decoded up to that exact point
[01:14:52 CEST] <teratorn> Cracki: well, you can check the flags on AVPacket to see if it holds a keyframe. then seek to that pts
[01:15:32 CEST] <Cracki> hm ok. unfortunately PyAV packets don't have flags or any other representation of keyframicity
[01:16:20 CEST] <Cracki> I understand this is not the channel if I have problems with pyav :P
[01:32:27 CEST] <Cracki> thanks though for avpacket::flags, I suspect pyav just didn't expose that
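The approach teratorn suggests can be sketched against the libavformat API: demux packets without decoding and test AV_PKT_FLAG_KEY. A minimal sketch; the file name is a placeholder and error handling is abbreviated:

```c
// Sketch: list keyframe timestamps by demuxing only, no decoding.
#include <stdio.h>
#include <inttypes.h>
#include <libavformat/avformat.h>

int main(void) {
    AVFormatContext *fmt = NULL;
    if (avformat_open_input(&fmt, "input.m2ts", NULL, NULL) < 0)  // placeholder path
        return 1;
    avformat_find_stream_info(fmt, NULL);

    int vstream = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);

    AVPacket pkt;
    while (av_read_frame(fmt, &pkt) >= 0) {
        if (pkt.stream_index == vstream && (pkt.flags & AV_PKT_FLAG_KEY))
            printf("keyframe at pts %" PRId64 "\n", pkt.pts);
        av_packet_unref(&pkt);
    }
    avformat_close_input(&fmt);
    return 0;
}
```

Note that, per the Stack Overflow link above, AV_PKT_FLAG_KEY is set by the demuxer and is not always a perfect guarantee of an IDR frame in every container.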
[02:19:26 CEST] <Cracki> ah, if I seek in an M2TS file, it jumps to the nearest gop after the pts I gave
[02:23:09 CEST] <Cracki> seems I can't jump reliably *at* a gop, meaning seek to the pts the first decoded frame would have, instead jumps to the next gop?
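For landing at or before a target rather than after it, the usual tool is AVSEEK_FLAG_BACKWARD. A hedged sketch (the context, codec context, and stream index are assumed to exist; as the log above observes, how precisely a given demuxer such as mpegts honors this can vary):

```c
// Sketch: seek to the keyframe at or before target_ts instead of the one after it.
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>

static int seek_to_prev_keyframe(AVFormatContext *fmt, AVCodecContext *dec,
                                 int stream, int64_t target_ts)
{
    // AVSEEK_FLAG_BACKWARD: pick the keyframe <= target_ts (in stream time base)
    int ret = av_seek_frame(fmt, stream, target_ts, AVSEEK_FLAG_BACKWARD);
    if (ret >= 0)
        avcodec_flush_buffers(dec);  // drop decoder state from before the seek
    return ret;
}
```

After the seek, decode forward and compare each frame's pts against the original target to know where you actually landed.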
[02:59:48 CEST] <hxla> Hello guys, I have a gopro video file, that has multiple data streams, and I want to copy the video, audio and one specific data stream, but I can't, I get an error that the codec is not supported (https://pastebin.com/ZxSinT0P)
[03:00:13 CEST] <hxla> how should I copy this stream to another video file?
[03:11:31 CEST] <cryptodechange> I've seen aq2 and aq3 in some mediainfo results, is there any info on what this does?
[03:13:59 CEST] <cryptodechange> compared to standard aq
[03:23:24 CEST] <furq> http://vpaste.net/D50k8
[03:32:05 CEST] <cryptodechange> Thanks for that, I also looked at the git
[03:32:22 CEST] <cryptodechange> so 3, in theory, should be the best option?
[03:32:45 CEST] <cryptodechange> for maintaining quality
[03:32:45 CEST] <furq> if 3 was the best option it'd be the default
[03:33:15 CEST] <furq> there are presumably some sources where 2 and 3 give a significant quality boost, but that's evidently not the general case
[03:34:17 CEST] <furq> also from what i'm reading, 2 and 3 are prone to causing haloing on some sources
[03:34:42 CEST] <cryptodechange> I suppose dark, grainy films would benefit from aq 3
[03:34:52 CEST] <furq> shrug
[03:35:08 CEST] <furq> at a low enough crf, none of these settings will make any difference
[03:36:03 CEST] <cryptodechange> using crf=16 and aq=1, I noticed a bit of ringing around the stars in the universal studios intro
[03:36:30 CEST] <cryptodechange> At least that's what I think I see, slightly more pixelated around the bright stars
[03:36:31 CEST] <cryptodechange> will try with aq=3
[03:36:40 CEST] <furq> if grainy stuff generally benefited from 3 then tune grain would probably use it
[03:36:58 CEST] <furq> all the tunings use 1 except for psnr and ssim
[03:37:03 CEST] <furq> and you can safely ignore what they're doing
[03:38:48 CEST] <cryptodechange> I think I'm getting a slightly higher bitrate with aq=3 too
[03:38:57 CEST] <furq> that sounds about right
[07:36:50 CEST] <blue_misfit> hey guys I'm doing chunked encoding into mp4 files and then joining the resulting chunks using the concat demuxer. This is usually fine, but some sources are producing VFR outputs. Indeed the first frame at the start of each chunk (other than the first chunk) has a DTS that's 1536 units higher than the last frame, whereas all the rest have a DTS difference of 512
[07:37:55 CEST] <blue_misfit> any ideas why this is happening?
[10:31:00 CEST] <zerodefect> I understand that AVFrame has the ability to handle reference counting.  I'm struggling a bit to understand it though.  There doesn't seem to be a method to decrement the reference count by '1'.  There is only av_frame_unref(..) which clears all the references.
[10:33:16 CEST] <Mavrik> Hmm, unref certainly should decrement count by 1
[10:33:17 CEST] <jkqxz> The reference counting is on the buffers which make up the frame, not the frame structure itself.  An AVFrame structure contains one reference to each buffer making up the frame data, and there can be multiple AVFrames referring to the same buffers.
[10:33:53 CEST] <Mavrik> Mhm that - the AVFrame will get destroyed but the underlying buffer is shared with other AVFrames you get by calling av_frame_ref or whatsit
[10:33:54 CEST] <jkqxz> So av_frame_unref() decrements the reference count on the buffers in that frame, and then clears the AVFrame structure.
[10:36:26 CEST] <zerodefect> Ah ok. That makes sense.  So I could create my own buffer and assign it to multiple AVFrames?
[10:39:14 CEST] <zerodefect> Just trying to grasp the relationship between buffers and frames.
[10:44:45 CEST] <jkqxz> Correct.  (Where assignment to an AVFrame is via setting the next member of the buf[] array inside the AVFrame to the result of av_buffer_ref(), and the corresponding data element to the actual pointer.)
[10:45:41 CEST] <zerodefect_> Connection bummed out
[10:46:53 CEST] <jkqxz> "08:44 < jkqxz> Correct.  (Where assignment to an AVFrame is via setting the next member of the buf[] array inside the AVFrame to the result of av_buffer_ref(), and the corresponding data element to the actual pointer.)"
[10:47:39 CEST] <zerodefect_> Ok. Understood. Thanks for your help
[11:58:23 CEST] <k_sze> I have a question not strictly related to ffmpeg, but I can't think of where else to ask. In libjpeg, what's the difference between jpeg_read_raw_data and jpeg_read_scanlines?
[11:59:12 CEST] <k_sze> I assume that people in this channel are more likely to also be knowledgeable about libjpeg.
[12:00:22 CEST] <BtbN> I'd guess one reads raw data to decode, and the other one reads lines of pixels to encode?
[12:09:00 CEST] <k_sze> BtbN: The thing is, I read the documentation here: http://refspecs.linuxfoundation.org/LSB_3.1.1/LSB-Desktop-generic/LSB-Desktop-generic/libjpeg.jpeg.read.raw.data.1.html and http://refspecs.linuxfoundation.org/LSB_3.1.1/LSB-Desktop-generic/LSB-Desktop-generic/libjpeg.jpeg.read.scanlines.1.html
[12:09:08 CEST] <k_sze> And I have a hard time understanding the difference.
[12:09:53 CEST] <k_sze> The way I see gstreamer use jpeg_read_raw_data makes me think that it's also already decoded.
[12:10:56 CEST] <BtbN> Yeah, they both seem to be for decoding, as the first parameter is a decompress_ptr
[12:11:32 CEST] <BtbN> from the looks of it, you might need both?
[12:11:37 CEST] <BtbN> read_raw_data: "shall return upto max_lines number of scanlines of raw downsampled data into the JSAMPIMAGE array argument"
[12:11:53 CEST] <k_sze> And I have no idea what they mean by "downsampled"
[12:11:54 CEST] <BtbN> And jpeg_read_scanlines seems to take that as an input
[12:13:07 CEST] <k_sze> BtbN: No, jpeg_read_raw_data is not needed for jpeg_read_scanlines.
[12:13:30 CEST] <k_sze> I have seen libraries use only jpeg_read_scanlines.
[12:17:54 CEST] <BtbN> Maybe it gives you yuv420 data, or whatever raw format the jpeg image had, instead of decoding to RGB
[12:19:29 CEST] <BtbN> yeah, looks like it
[12:19:30 CEST] <BtbN> "To obtain raw data output, set cinfo->raw_data_out = TRUE before jpeg_start_decompress() (it is set FALSE by jpeg_read_header()). Be sure to verify that the color space and sampling factors are ones you can handle. Then call jpeg_read_raw_data() in place of jpeg_read_scanlines(). The decompression process is otherwise the same as usual."
[12:21:33 CEST] <k_sze> BtbN: where did you find that?
[12:21:49 CEST] <BtbN> https://www4.cs.fau.de/Services/Doc/graphics/doc/jpeg/libjpeg.html
[12:42:31 CEST] <k_sze> Thanks.
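Following the libjpeg documentation quoted above, the raw-data path can be sketched like this. It is a sketch only: it assumes a 4:2:0 source, glosses over padding the width to the iMCU boundary, and abbreviates error handling.

```c
// Sketch: jpeg_read_raw_data returns the still-subsampled YUV planes,
// whereas jpeg_read_scanlines returns upsampled, colorspace-converted rows.
#include <stdio.h>
#include <stdlib.h>
#include <jpeglib.h>

int main(int argc, char **argv) {
    FILE *f = fopen(argv[1], "rb");
    struct jpeg_decompress_struct cinfo;
    struct jpeg_error_mgr jerr;

    cinfo.err = jpeg_std_error(&jerr);
    jpeg_create_decompress(&cinfo);
    jpeg_stdio_src(&cinfo, f);
    jpeg_read_header(&cinfo, TRUE);

    cinfo.raw_data_out = TRUE;       /* must be set before start_decompress */
    jpeg_start_decompress(&cinfo);

    /* One iMCU row: v_samp_factor * DCTSIZE lines per component
       (16 luma + 8 Cb + 8 Cr lines for 4:2:0). */
    JSAMPROW y[16], cb[8], cr[8];
    JSAMPARRAY planes[3] = { y, cb, cr };
    int w = cinfo.output_width;      /* should really be padded to iMCU width */
    JSAMPLE *ybuf  = malloc(w * 16);
    JSAMPLE *cbbuf = malloc((w / 2) * 8);
    JSAMPLE *crbuf = malloc((w / 2) * 8);
    for (int i = 0; i < 16; i++) y[i]  = ybuf  + i * w;
    for (int i = 0; i < 8;  i++) cb[i] = cbbuf + i * (w / 2);
    for (int i = 0; i < 8;  i++) cr[i] = crbuf + i * (w / 2);

    while (cinfo.output_scanline < cinfo.output_height)
        jpeg_read_raw_data(&cinfo, planes, 16);  /* consume the planes here */

    jpeg_finish_decompress(&cinfo);
    jpeg_destroy_decompress(&cinfo);
    fclose(f);
    return 0;
}
```

This is why gstreamer uses it: a decoder that wants yuv420 output can skip libjpeg's upsampling and color conversion entirely.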
[13:53:35 CEST] <eynix> Hi everyone
[13:54:50 CEST] <eynix> I'm working on something to create a video from an openGL scene images
[13:55:04 CEST] <eynix> using openCV
[13:55:43 CEST] <eynix> the final avi file has this error when I open it with vlc : "avi error: no key frame set for track 0"
[13:56:37 CEST] <eynix> I'm kind of stuck, any clue ?
[14:06:38 CEST] <BtbN> why would you use avi?
[14:06:53 CEST] <BtbN> And why do you need opencv for that?
[14:07:09 CEST] <DHE> indeed. AVI has been on the way out for a decade
[14:07:14 CEST] <eynix> BtbN: I'm very new to this image/video thing
[14:07:26 CEST] <eynix> openCV looks a lot simpler than ffmpeg
[14:07:41 CEST] <BtbN> OpenCV is something entirely different than ffmpeg
[14:07:43 CEST] <eynix> and the example I found showing the openCV capability was using avi
[14:14:11 CEST] <eynix> BtbN: since openCV uses ffmpeg I thought maybe you could know this error
[14:14:48 CEST] <BtbN> Using ffmpeg via OpenCV can easily cause all kinds of confusion. Better ask the OpenCV people about their wrapper layer.
[14:15:03 CEST] <eynix> (they sent me here :x)
[14:15:15 CEST] <eynix> np, i'll find what's going on
[14:17:32 CEST] <BtbN> Well, when the avi muxer is used correctly, it usually produces a working file
[15:10:54 CEST] <pedjaman> Hello everyone :) Is there a comparison chart(s) of supported video encoders/formats regarding file size and speed of encoding? Thanks :)
[15:51:40 CEST] <dystopia_> speed of encoding is down to cpu and encoding settings
[15:51:46 CEST] <dystopia_> same deal for filesize
[15:51:59 CEST] <dystopia_> you can't really make such a comparison chart
[15:53:37 CEST] <hlechner> Hey guys, anyone know why the same ffmpeg command to generate a video thumbnail produces a slightly different image size when using jpeg compared with png? (jpeg: 172x96) (png: 172x97)
[15:54:16 CEST] <pedjaman> thanks. Shouldn't that actually be comparable if we take CPU and settings as constant?
[16:08:32 CEST] <anchvi> Hi
[16:08:53 CEST] <anchvi> How can i add multiple transitions to an image in the video?
[16:09:13 CEST] <anchvi> I want image to slide from left to right and after a while from right to left
[16:09:21 CEST] <anchvi> I know how to do these separately
[16:09:26 CEST] <anchvi> but not together
[16:22:36 CEST] <kepstin> hlechner: the jpeg is probably using subsampled chroma, which means the dimensions have to be a multiple of 2. png doesn't have that restriction.
[16:23:22 CEST] <kepstin> (since the pixel format has to be converted to non-subsampled rgb for the png image)
[16:23:50 CEST] <iive> that's only for yuv420, and partially for 422; jpeg supports 444 too
[16:25:28 CEST] <hlechner> Thanks guys
[16:36:06 CEST] <roasted> Hi friends. Checking out some nvidia documentation for enabling cuda in ffmpeg. I see documentation regarding compiling ffmpeg for use with this, but got to wondering if anybody knows if this is already built into recent ffmpeg versions or if a PPA (Ubuntu) may exist that I'm not finding?
[16:58:12 CEST] <BtbN> roasted, you don't need to do anything to build ffmpeg with most nvidia features.
[16:58:33 CEST] <BtbN> Some features need the CUDA SDK and flag ffmpeg as nonfree, so you won't ever find binaries for those
[17:00:09 CEST] <roasted> BtbN: sounds good. I'm working with an OBS rig with a GTX 1070 running Ubuntu and proprietary drivers. I just didn't want to be missing out on any untapped resources. :D
[17:00:31 CEST] <BtbN> OBS can only use the features it had programmed into it anyway
[17:01:00 CEST] <roasted> Things run *really* well, but it's just... if the hardware is capable of more, why not open that door, ya know? That's all I was curious about.
[17:01:13 CEST] <roasted> I assumed from this documentation this was something additional that can be set up beyond the 'standard' ffmpeg.
[17:03:39 CEST] <kepstin> roasted: keep in mind that the benefit from using gpu is ... kinda minimal. there's a few filters that can use gpu compute, I think? and you can use the hardware encoder rather than software encoding, but the hardware encoder's only really useful for realtime encoding if your cpu is busy doing something else, since the quality's rather lower.
[17:04:17 CEST] <roasted> kepstin: we'll be recording + streaming at once. Based on what OBS devs said, on Linux as the target OS for OBS, Intel CPU/Nvidia GPU is the best combo, so I ran with that.
[17:04:33 CEST] <roasted> (this is for a morning announcement video broadcast system for a school)
[17:06:32 CEST] <kepstin> what kind of video sources are you using? cameras, screen capture? what else is running on the box?
[17:07:17 CEST] <kepstin> i'd probably consider foregoing the gpu and just doing all software encoding if you're not doing other cpu intensive stuff on the box.
[17:07:24 CEST] <roasted> this box is dedicated to OBS. It'll run two Logitech C920 cameras, which are 1080 30 FPS cams but I'll likely downscale that to 720p for the stream. It'll record + stream at once on the same box. Nothing else will run on this box. It leverages nginx/rtmp to broadcast an rtmp stream that clients (teachers) will tap into.
[17:07:36 CEST] <roasted> The only other thing it'll run is a public nginx share for downloading past announcements.
[17:08:26 CEST] <roasted> We already have the hardware, it's already built and OBS is installed. I just got to thinking if (IF) I could get more resources out of it via compiling ffmpeg with the -nv flag (if I recall, that's what it is), whether I could boost it a bit. But keep in mind, the box runs really well as is. I'm just strictly curious since I wondered if there were untapped resources available here.
[17:08:56 CEST] <kepstin> if you hadn't already bought the hardware, i'd recommend not buying the gpu and spending the money on more cpu. As it is, it's kind of going to waste, imo.
[17:09:22 CEST] <roasted> According to OBS devs, there are some benefits to having Nvidia GPU pending you have the proprietary drivers installed. What exactly, I'm not sure.
[17:09:37 CEST] <roasted> But the box already has a kaby i7, so we have some good CPU power.
[17:10:14 CEST] <kepstin> if you have the proprietary drivers installed, then you can use the hardware encoder on the gpu. This is lower quality than software encoding, but its useful if you're doing screen capture of CPU intensive apps
[17:10:29 CEST] <kepstin> or find yourself otherwise cpu limited
[17:10:41 CEST] <roasted> I'll have to tinker around with it and see. This is all testing mode at the moment. This isn't meant to launch until August.
[17:10:53 CEST] <roasted> I just wanted to cross off this "ffmpeg T" while I was in planning mode.
[17:11:27 CEST] <Nacht> The cameras are to be used as PIP over what you stream, or as a source?
[17:11:48 CEST] <roasted> At the moment I just have them plugged in as v4l devices.
[17:11:53 CEST] <roasted> They lit right up and the stream worked fine.
[17:13:34 CEST] <kepstin> roasted: yeah, simply "enabling nvidia support" in ffmpeg won't magically make everything faster. It just means you have the option of using the hardware encoder instead of software encoder (software encoder is still default), and a couple of filters that you're probably not using might gain some gpu acceleration.
[17:13:50 CEST] <roasted> good to know.
[17:14:23 CEST] <roasted> truth be told I was kind of hoping to leave it as-is, so this is good news (for me). But like I said, if there was some magical bonus to compiling ffmpeg in a way to better leverage the nvidia hardware, I wouldn't sneeze at it.
[17:14:29 CEST] <kepstin> but on the other hand, you probably have a decent rig for doing some gaming during downtime, so... :/
[17:14:42 CEST] <roasted> kepstin: ...ssshhhhh!
[17:14:57 CEST] <roasted> appreciate the insight guys. :)
[17:15:13 CEST] <kepstin> and you don't even need the nvidia gpu to do hardware encoding; the intel cpu has a hardware encoder on it too (I dunno if obs supports using it?)
[17:15:31 CEST] <roasted> I'm not sure.
[17:16:04 CEST] <roasted> It's a long story, but basically, video studio hasn't been upgraded in many years, wiring upgrade happening to building, current studio won't work, needed something else, enter OBS, OBS devs said Nvidia GPU can be a bonus, I ran with it.
[17:16:47 CEST] <roasted> I had a lot to do, check, plan in a short amount of time, so I wasn't in the business or position of taking chances or making guesses without further testing. And I needed hardware to *really* test, so...
[17:17:04 CEST] <dbpolito> Hello guys, does ffmpeg have support for Right-to-Left subtitles? I see it has an option called text_shaping for drawtext, but I don't see a way to do it for subtitles... help? =)
[17:17:52 CEST] <kepstin> roasted: keep in mind that OBS is very commonly used for livestreaming screen capture of stuff like games or other cpu+gpu intensive applications, where having a decent gpu and hardware encoding are real benefits
[17:18:07 CEST] <roasted> definitely. I got that gist as I read into OBS more, for sure.
[17:18:40 CEST] <roasted> the video studio hadn't seen any love in years. I was given a budget and even with this rig I came in under half of the allotment to do everything I needed, so...
[17:18:49 CEST] <furq> dbpolito: i believe that works automatically if you built libass with fribidi
[17:18:54 CEST] <kepstin> dbpolito: I believe that the standard subtitle renderer used in ffmpeg (libass) should support rtl text just fine..? might depend on configure options on the external library.
[17:20:05 CEST] <dbpolito> huh, i'm currently using a .srt file... do i need to use .ass instead? or is it possible to use libass with srt?
[17:20:13 CEST] <kepstin> roasted: think of it as investment in the future then, there might be some need that comes up later in the life of this hardware where having gpu and hardware encoding is useful :)
[17:20:26 CEST] <roasted> kepstin: for sure.
[17:21:47 CEST] <dbpolito> i'm currently using ffmpeg cli, installed from default ppa of ubuntu
[17:23:07 CEST] <kepstin> dbpolito: I'd expect it to work with srt, the same renderer is used for all text subtitle formats. I assume you're using the "subtitles" filter to do this?
[17:24:33 CEST] <dbpolito> kepstin: correct, something like: ffmpeg -i input.mp4 -vf "subtitles=input.srt" -strict -2 output.mp4
[17:24:59 CEST] <dbpolito> i add some forced_style too, but simple font size and color changes
[17:28:08 CEST] <dbpolito> i'm currently using: ffmpeg version 2.8.11-0ubuntu0.16.04.1, maybe it's because i'm using an old version
[17:31:09 CEST] <aster_> is this channel is appropriate for querying related to ffmpeg programming api instead of ffmpeg tools?
[17:32:56 CEST] <durandal_1707> yes it is
[17:34:05 CEST] <aster_> ok thanks
[17:39:38 CEST] <aster_> in Sws_context and in av_image_alloc, I'm using AV_PIX_FMT_YUV420P and SDL_CreateTexture uses SDL_PIXELFORMAT_YV12, but when I display picture it has greenish color, Where am i going wrong? Thanks
[17:47:51 CEST] <kepstin> aster_: I think ffmpeg's yuv420p and YV12 have the color planes swapped; you can probably fix it by just switching the U/V plane pointers between the two.
[17:52:22 CEST] <aster_> Sorry It''s my mistake, I mess up param in SDL_UpdateYUVTexture call, I corrected and its working. Thanks for your suggestion
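For reference, a green or purple tint is the classic symptom of swapped or misordered chroma planes. A minimal sketch of the call aster_ fixed: SDL_UpdateYUVTexture takes the three planes explicitly, so it absorbs the plane-order difference between FFmpeg's yuv420p (I420) and SDL's YV12 as long as the right pointers are passed.

```c
#include <SDL2/SDL.h>
#include <libavutil/frame.h>

// Upload a decoded yuv420p AVFrame into an SDL_PIXELFORMAT_YV12 texture.
static void upload_frame(SDL_Texture *tex, const AVFrame *frame)
{
    SDL_UpdateYUVTexture(tex, NULL,
                         frame->data[0], frame->linesize[0],  /* Y */
                         frame->data[1], frame->linesize[1],  /* U (Cb) */
                         frame->data[2], frame->linesize[2]); /* V (Cr) */
}
```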
[20:01:24 CEST] <OzgrK> hi, i am trying to create a 16 hour video by looping an online png image at wikimedia. encoding is so slow (starts fast but stabilizes around 0.6fps) using http protocol for input. also it takes a while (sometimes a couple of minutes) to start encoding, but i think that is fine since ffmpeg downloads the image first and my internet connection is not stable... but when i download the png file and use it locally with the file protocol to encode, the speed starts fast
[20:01:54 CEST] <OzgrK> again but stabilizes around 25-26fps approx. i am suspicious that ffmpeg tries to pull the png image from the wikimedia website each time it loops. i tried the cache:URL and async:cache:http://host/resource options but am not sure they are for this task, and also not sure i applied them right in my command. is there a way to make ffmpeg download an image once using the http protocol as input and cache it to use for -loop, so that the speed would
[20:02:31 CEST] <OzgrK> improve? or is the default behaviour and the cause of the very slow speed something else? any help much appreciated. outputs of both methods i mentioned above are at https://pastebin.com/Sv82x0Ey . i did quit the processes around the points where the processing speed stabilizes
[20:08:02 CEST] <arvut> do I need to supply a videocodec when copying audiostream from a video?
[20:08:14 CEST] <arvut> or just -c:a copy?
[20:08:27 CEST] <dystopia_> copy is fine
[20:08:33 CEST] <dystopia_> -acodec copy also
[20:08:36 CEST] <dystopia_> both do the same thing
[20:08:50 CEST] <dystopia_> if you don't want video
[20:08:53 CEST] <dystopia_> do -vn
[20:09:00 CEST] <arvut> thanks
[20:09:05 CEST] <dystopia_> np
[20:09:36 CEST] <kepstin> OzgrK: is it an option for you to just download the image locally before doing anything else? Another thing to consider is using the loop filter rather than the loop input option; it might work better for this.
[20:10:13 CEST] <arvut> dystopia_: so I could save it in the same container but without video then, the file is a .mp4
[20:10:37 CEST] <Threads> yes
[20:11:41 CEST] <dystopia_> you can do yeah, but may as well save it as .mp2 .ac3 .aac or whatever it is
[20:13:01 CEST] <arvut> .aac actually
[20:13:13 CEST] <arvut> Output file #0 does not contain any stream
[20:13:27 CEST] <arvut> I can play it in mpv and it plays just fine
[20:14:18 CEST] <arvut> oh wait, forgot -i
[20:15:55 CEST] <arvut> weird, mixxx cannot play the .aac file but can play the .mp4 container containing same audio
[20:25:56 CEST] <kepstin> yeah, raw aac bitstreams are fairly unusual to see, I'd expect aac in mp4 (aka .m4a) would be better supported
[21:07:10 CEST] <alexpigment> does anyone know if ffmpeg 32-bit is large address aware (can utilize 4gb of memory on a non-32-bit OS) by default?
[21:07:14 CEST] <alexpigment> or does this still require a patch?
[21:08:52 CEST] <BtbN> why use a 32bit version on a 64 bit OS?
[21:17:59 CEST] <alexpigment> @BtbN, it's being called by a 32-bit application
[21:18:55 CEST] <alexpigment> rather than make two installations, it's easier to make one installation and just have ffmpeg large address aware
[21:19:15 CEST] <JEEB> alexpigment: if it's a subprocess then it doesn't really matter
[21:19:25 CEST] <JEEB> you can call a 64bit binary from a 32bit binary
[21:19:37 CEST] <alexpigment> JEEB, it's not about calling one from the other
[21:19:44 CEST] <JEEB> with libraries it gets less nice but it's still possible to do out of process
[21:19:47 CEST] <alexpigment> it's about making one installer for the application that works on 32-bit and 64-bit
[21:19:48 CEST] <JEEB> if you really want to
[21:20:12 CEST] <JEEB> alexpigment: just check if the OS is 32bit or 64bit? most installer systems let you do that
[21:20:30 CEST] <JEEB> and most likely your app will not die if the installer is ~10MiB bigger
[21:21:19 CEST] <alexpigment> i understand your logic, and i do appreciate it, but my ideal goal is dependent on whether ffmpeg 32-bit is large address aware by default or needs to be patched
[21:22:19 CEST] <JEEB> LARGEADDRESSAWARE gets set in some configurations
[21:22:33 CEST] <JEEB> but in any case, if you are running to 4GiB of RAM you are most likely going to run out of RAM soon
[21:22:55 CEST] <JEEB> http://git.videolan.org/?p=ffmpeg.git;a=commit;h=e81eca0ce59aa4973bc53ec064f83610e3642ce5
[21:23:27 CEST] <alexpigment> ok, so this is committed to the main branch at this point?
[21:23:34 CEST] <alexpigment> that's great to hear. thank you
[21:24:12 CEST] <alexpigment> i was reading a mailing list submission for a patch from sept 2010, but i didn't see any indication that it was going to get committed
[21:25:00 CEST] <JEEB> but yea, with 64bit binaries you also get a lot of other optimizations so in general it really doesn't make sense to try and run a 32bit binary there
[21:25:14 CEST] <alexpigment> JEEB: you'll just have to trust me on this one
[21:25:24 CEST] <kepstin> just the fact that the x86_64 abi has twice as many registers can help in a lot of cases :/
[21:25:28 CEST] <JEEB> ^this
[21:25:45 CEST] <JEEB> and that people have stopped caring about trying to fit their hand-written optimizations to the lesser amount of registers
[21:26:09 CEST] <JEEB> so while things won't be getting slower, they most likely are not going to start getting faster
[21:26:28 CEST] <JEEB> (and in certain things there's already a pretty big gap)
[21:28:13 CEST] <alexpigment> i understand what you're saying. the installer is completely custom, and can't support putting in a different file based on the OS without modification. i'm currently using the path of least resistance
[21:28:36 CEST] <alexpigment> of course i agree that 64-bit is better and will try to push for that
[21:28:44 CEST] <JEEB> enjoy that
[21:31:19 CEST] <alexpigment> JEEB: i'm not a developer. but i do get a lot of traction by saying "this requires no developer time" :)
[21:31:46 CEST] <alexpigment> and i'm guessing that you, as a developer, probably cannot relate to that scenario at all
[21:33:24 CEST] <JEEB> depends on the situation
[21:33:42 CEST] <JEEB> if you can get something done without work that won't bite you in the long term that's all good
[21:34:11 CEST] <JEEB> my comments mostly come because there really are people who don't have such limitations and just don't know here every now and then :P
[21:34:40 CEST] <alexpigment> yep. this is a short term thing and is under limitations. the long term goal does not have such limitations and will be done "right"
[21:34:46 CEST] <JEEB> ooh
[21:34:50 CEST] <JEEB> the classic thing :D
[21:35:04 CEST] <JEEB> man, how many PoCs have ended up in production and maintained :P
[21:36:08 CEST] <alexpigment> i can assure you it won't. i've already built the 64-bit builds of FFMPEG for the next generation product which is not 32-bit
[21:36:52 CEST] <alexpigment> i'm effectively just trying to add some extra legs to an EOL product at the moment
[21:44:42 CEST] <OzgrK> kepstin: there is no problem with the downloaded local file but i need to input an online image. when i used the "loop" filter as -vf loop=loop=1440 instead of -loop 1, it produces just a single frame
[21:44:52 CEST] <OzgrK> my initial command was ffmpeg -loop 1 -i https://upload.wikimedia.org/wikipedia/commons/c/c4/PM5544_with_non-PAL_signals.png -vf hue=h=t*360/57600 -t 57600 -pixel_format yuv420p -c:v libvpx-vp9 -lossless 1 testHttp1.webm
[21:45:55 CEST] <kepstin> OzgrK: why can't you replace "ffmpeg -loop 1 -i https://..." with "wget https:// && ffmpeg -loop 1 -i ..." ?
[21:47:04 CEST] <kepstin> that said, in order to use the loop filter, you need to set the size parameter, so e.g. "-vf loop=size=1:loop=-1" to loop a single frame forever
[21:47:13 CEST] <kepstin> by default the size is 0 so it does nothing
[21:50:17 CEST] <R1CH> is it normal for avcodec_decode_video2 to return a larger value than the input packet size if a complete frame wasn't decoded on a previous call?
[21:54:19 CEST] <BtbN> http://git.videolan.org/?p=ffmpeg.git;a=blob;f=libavcodec/decode.c;h=fabdab369400b04a201c481f6f4089d4d6bc052a;hb=HEAD#l891
[21:54:25 CEST] <BtbN> I'd say it's basically impossible?
[21:59:54 CEST] <R1CH> well that confuses things even further :|
[22:00:20 CEST] <BtbN> Which ffmpeg version?
[22:01:04 CEST] <R1CH> 57.61.100
[22:05:07 CEST] <JEEB> the library versions are rather unfortunate. I recommend either a git hash or the release version of the whole package if you're using a release branch
[22:05:48 CEST] <BtbN> cause decode_video2 is deprecated in the latest versions
[22:08:26 CEST] <R1CH> i saw, i think we are intentionally using this to keep compatibility with older installed ffmpeg packages
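For context, the send/receive API that replaces avcodec_decode_video2 in newer versions sidesteps the "bytes consumed" return value entirely: a packet is either accepted or not, and frames are pulled in a separate loop. A minimal sketch:

```c
#include <libavcodec/avcodec.h>

// Sketch: decode one packet with the post-3.1 send/receive API.
// Returns 0 on success (including "need more input"), <0 on real errors.
static int decode_packet(AVCodecContext *ctx, const AVPacket *pkt, AVFrame *frame)
{
    int ret = avcodec_send_packet(ctx, pkt);   // pkt == NULL starts flushing
    if (ret < 0)
        return ret;
    while ((ret = avcodec_receive_frame(ctx, frame)) >= 0) {
        /* ... use frame here ... */
        av_frame_unref(frame);
    }
    return (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) ? 0 : ret;
}
```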
[22:08:48 CEST] <OzgrK> kepstin: i am trying to make this happen without using any other tool, so the wget option does not work for me. but it works just fine, with the same speed as file input, when i use loop as a "filter" as in your sample! great, thanks. so what works for my case is
[22:08:50 CEST] <OzgrK> ffmpeg -i https://upload.wikimedia.org/wikipedia/commons/c/c4/PM5544_with_non-PAL_signals.png -vf loop=size=1:loop=-1,hue=h=t*360/57600 -t 57600 -pixel_format yuv420p -c:v libvpx-vp9 -lossless 1 testloop1.webm
[22:10:06 CEST] <OzgrK> speed is approx 25fps for the 16 hour video, but for a 2min video it is approx 16fps, and for 9 seconds of video it is much lower again... is this normal?
[22:11:23 CEST] <kepstin> OzgrK: the same video each time, just with different options for '-t' on the output to change the lenght?
[22:11:54 CEST] <OzgrK> yes, and for the 57600 in hue=h=t*360/57600
[22:12:27 CEST] <kepstin> there's a fair amount of work that has to be done once at startup with many video codecs, and also the time needed to download the  frame off the web is included in that
[22:12:33 CEST] <OzgrK> i want a full hue circle change over the total duration of the video
[22:12:56 CEST] <kepstin> so if you take the long time for the first frame and the short time for each frame after that, then average them, a longer video will have a higher average framerate than a shorter video
[22:13:24 CEST] <OzgrK> ok, that makes sense, thanks. i have another issue though
[22:13:26 CEST] <kepstin> and also the more change per frame the longer the frame takes to encode
[22:13:45 CEST] <kepstin> so if you make the hue cycle shorter, the frames are less similar, so they'll be slower to encode
[22:14:00 CEST] <kepstin> although that depends on the codec, it's not always true
[22:14:46 CEST] <OzgrK> i also want to fade out a 16 hour sound, starting the fade at the beginning and finishing it at the end of the 16 hours, so it will be a veeery slow fade out
[22:14:48 CEST] <OzgrK> -f lavfi -i sine=frequency=1000:sample_rate=44100:duration=57600,afade=t=out:st=0:d=57600
[22:15:00 CEST] <furq> kepstin: i assume that's not the case for lossless libvpx
[22:15:35 CEST] <kepstin> yeah, I haven't played with that yet.
[22:15:42 CEST] <OzgrK> when i use this it doesn't work for such a long audio, but when i use a smaller total duration and fade duration, for example 60 seconds, it works as expected. any idea?
[22:15:59 CEST] <kepstin> but with such a slow color fade, it's possible that you might have a bunch of identical frames in a row, so it could just be doing fast skip...
[22:16:16 CEST] <durandal_1707> OzgrK: define "doesn't work"
[22:17:35 CEST] <durandal_1707> near the end it should be silence
[22:17:49 CEST] <OzgrK> it was silence when i tried it yesterday but now i cannot reproduce it. gimme a min, pls
[22:17:49 CEST] <durandal_1707> if not, it's some kind of bug
[22:18:20 CEST] <OzgrK> not near the end but from the very beginning. i am trying with ffplay now, lemme encode
[22:19:00 CEST] <cryptodechange> my encode jumps up by 2mbits when I change the horizontal cropping from 800 to 802
[22:19:11 CEST] <furq> durandal_1707: thanks for crossfeed btw
[22:19:13 CEST] <OzgrK> yes i have reproduced it.
[22:19:17 CEST] <OzgrK> -f lavfi -i sine=frequency=1000:sample_rate=44100:duration=57600,afade=t=out:st=0:d=57600
[22:19:22 CEST] <furq> i'll probably rebuild tomorrow and check it out
[22:19:32 CEST] <OzgrK> this produces a silence when run with ffplay. lemme try to encode once more
[22:19:58 CEST] <kepstin> cryptodechange: what codec?
[22:20:06 CEST] <OzgrK> without the fade out filter it works fine and creates a sine sound
[22:20:29 CEST] <kepstin> cryptodechange: note that 800 is divisible by standard macroblock sizes like 8 and 16, while 802 is not, so codecs are less efficient with the larger crop
[22:21:03 CEST] <kepstin> cryptodechange: as for a "2mbit jump", well, that's meaningless without context
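kepstin's macroblock point, as a sketch: encoders round each dimension up to the block size and must still code the padded columns (a 16-pixel block is assumed here; actual block/CTU sizes vary by codec):

```python
def aligned(dim: int, block: int = 16) -> int:
    """Round a frame dimension up to the next multiple of the block size;
    the codec pads the frame and encodes the full blocks anyway."""
    return -(-dim // block) * block  # ceiling division, then scale back up

assert aligned(800) == 800  # already block-aligned: no wasted columns
assert aligned(802) == 816  # 14 padded columns must still be coded
```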
[22:23:17 CEST] <durandal_1707> OzgrK: i will reproduce and fix ASAP
[22:25:25 CEST] <willbarksdale> Hey Folks, got a question, I was hoping someone could point me in the right direction. I am using FFmpeg to decode an RTMP stream. The stream opens fine and streams for a bit, but after somewhere between 1-10 minutes, I get a hanging call to av_read_frame(...) which I can't seem to recover from without tearing down everything and reconnecting to the stream. Are there any known gotchas that might be causing this? I am using FFmpeg compiled for iOS
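One known way out of a blocking av_read_frame() on a dead network stream is libavformat's interrupt callback: AVFormatContext.interrupt_callback (an AVIOInterruptCB) is polled during blocking I/O, and returning non-zero from it makes the blocked call fail with AVERROR_EXIT instead of hanging forever. A sketch of the same deadline predicate in Python (the helper class and its names are hypothetical; in C you would set fmt_ctx->interrupt_callback.callback and .opaque):

```python
import time

class ReadDeadline:
    """Deadline predicate in the style of libavformat's AVIOInterruptCB:
    return 1 to abort the blocking call, 0 to keep waiting."""

    def __init__(self, timeout_s: float):
        self.timeout_s = timeout_s
        self.armed_at = time.monotonic()

    def rearm(self) -> None:
        # Call right before each av_read_frame() so a healthy stream
        # never trips the deadline.
        self.armed_at = time.monotonic()

    def __call__(self) -> int:
        elapsed = time.monotonic() - self.armed_at
        return 1 if elapsed > self.timeout_s else 0
```

The application then treats AVERROR_EXIT as "reconnect" rather than tearing down blindly when the socket goes quiet.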
[22:25:29 CEST] <OzgrK> thanks. i am also trying to encode with opus in a webm container and will report the result. but encoding also didn't work when i tried yesterday, probably because of the veeery long fade out option. if i change the 57600 (16h) to 60 for example, it works. do you have an idea about the cause of the problem? is it really a bug?
[22:26:34 CEST] <OzgrK> when i encode, it is also a total silence of 16 hours
[22:26:53 CEST] <OzgrK> just completed
[22:41:23 CEST] <CoJaBo> Anyone happen to know anyone who creates Youtube videos (professionally or semi-professionally..)?
[22:46:14 CEST] <james999> CoJaBo: well i have an account and ffmpeg but don't make videos
[22:46:17 CEST] <james999> what's your question?
[22:46:43 CEST] <CoJaBo> Unless you used it, you won't have been affected by The Incident :/
[22:51:48 CEST] Action: kepstin posts silly videos every week, but doesn't do any monetization or have any sizable fan base.
[22:51:59 CEST] <jgirot> Sorry, not sure which channel to ask for help registering a new account on trac.ffmpeg.org. I'm getting an error after filling in new username/password etc.
[23:04:31 CEST] <durandal_1707> OzgrK: fixed afade with very long durations in latest master
[23:05:12 CEST] <OzgrK> great! thanks!
[23:50:26 CEST] <kms_> ALSA buffer xrun
[23:50:30 CEST] <kms_> wtf?
[23:55:54 CEST] <debianuser> kms_: The "alsa buffer" is what the soundcard writes data to (capture) or reads data from (playback). When the app records some sound it asks the soundcard to write data to that buffer, and it's the app's duty to read the data from it in time; if it doesn't, the buffer overflows and you get an "alsa buffer overrun" error.
[23:56:48 CEST] <debianuser> It's not a fatal error; the application should automatically recover from it and restart the capture, but you may hear a slight click in the recorded data when that happens
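debianuser's description can be illustrated with a toy producer/consumer model of the capture buffer (all numbers are assumed for illustration, not ALSA's real scheduling):

```python
def simulate_capture(buffer_frames: int, rate_hz: int,
                     app_read_period_s: float, seconds: float) -> int:
    """Toy model of an ALSA capture buffer: the card deposits rate_hz
    frames/s continuously; the app drains the buffer every
    app_read_period_s. If the app is too slow, the buffer overruns
    (an xrun) and capture restarts."""
    fill, overruns, t, step = 0.0, 0, 0.0, 0.001
    next_read = app_read_period_s
    while t < seconds:
        fill += rate_hz * step      # card keeps writing regardless
        if t >= next_read:
            fill = 0.0              # app drains the buffer on time
            next_read += app_read_period_s
        if fill > buffer_frames:
            overruns += 1           # "ALSA buffer xrun"
            fill = 0.0              # recover: restart the capture
        t += step
    return overruns

# 44100 Hz into a 4096-frame buffer: reading every 50 ms keeps up,
# but reading only every 200 ms overruns repeatedly.
assert simulate_capture(4096, 44100, 0.05, 2.0) == 0
assert simulate_capture(4096, 44100, 0.20, 2.0) > 0
```

A 4096-frame buffer at 44100 Hz holds under 100 ms of audio, so any read gap longer than that (CPU starvation, a blocked thread) produces exactly the repeating xrun kms_ sees.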
[23:56:49 CEST] <kms_> how to improve?
[23:57:03 CEST] <debianuser> make sure app has enough CPU to read the buffer in time
[23:57:46 CEST] <kms_> the sound is bad, it is a stream from a mic
[23:58:58 CEST] <kms_> after a restart it is ok, but after 20 sec the error repeats
[00:00:00 CEST] --- Thu May 18 2017