[Ffmpeg-devel-irc] ffmpeg.log.20171019

burek burek021 at gmail.com
Fri Oct 20 03:05:01 EEST 2017


[01:25:09 CEST] <Mista_D> any fast frame-accurate range selection/splicing recommendations for MP4 and TS files with MPEG2/H.264, please?
[01:28:04 CEST] <wyth> Does anyone have any resource they can point me to for adding libraw support to my ffmpeg installation?
[01:29:49 CEST] <wyth> I'm hoping to figure out a way to wrap a dng sequence to mov without re-encoding. Is it impossible?
[01:31:13 CEST] <DHE> might be doable as long as mov supports the codec. there are limits. I don't know what dng is so I can't comment
[01:32:16 CEST] <g0n> I'm trying to decode AVPackets and I get frames, but when I display them they look weird, with no color and a triple image (no, I'm not drunk)
[01:32:49 CEST] <g0n> anyone knows what the problem could be?
[01:32:53 CEST] <wyth> dng is a raw format that comes out of cameras, like those from blackmagicdesign specifically is what I'm working with
[01:33:48 CEST] <klaxa> g0n: are you getting YUV planes maybe?
[01:33:52 CEST] <DHE> sounds like you've got red,green,blue or YUV planes
[01:34:35 CEST] <g0n> I don't really know, I'm a noob concerning ffmpeg in C
[01:34:50 CEST] <g0n> Took me 3 days to get frames...
[01:35:44 CEST] <g0n> That has something to do with pix_fmt?
[01:36:08 CEST] <klaxa> probably
[01:36:27 CEST] <klaxa> a YUV frame is stored as 3 planes, first all Y values, then all U values then all V values
[01:36:37 CEST] <klaxa> to get one pixel color you have to combine the three planes
[01:37:04 CEST] <klaxa> for yuv420 for example, you take 4 Y pixels and 1 U and 1 V pixel to create one 2x2 pixelblock
[01:37:25 CEST] <g0n> Ahh, ok, so then Ill look on that...
[01:37:37 CEST] <klaxa> but i'm sure there is an easier way to do that in the library?
[01:37:37 CEST] <g0n> Thanks for the explanation
[01:37:39 CEST] <klaxa> not sure
[01:38:12 CEST] <g0n> Now that I know the name and what it is, it'll be easier to find
[01:38:27 CEST] <g0n> Thanks :)
[01:38:40 CEST] <DHE> you can use a filter or swscale to convert to another format, like rgb
[01:41:51 CEST] <g0n> I found an answer
[01:41:54 CEST] <g0n> AVFrame can be interpreted as an AVPicture to fill the data and linesize fields. The easiest way to fill these field is to the use the avpicture_fill function.
[01:42:05 CEST] <g0n> :)
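For context: avpicture_fill() has since been deprecated; av_image_fill_arrays() from libavutil/imgutils.h is the current equivalent. A minimal sketch of the swscale route DHE mentions, assuming "dec_ctx" is the opened AVCodecContext and "frame" a decoded YUV AVFrame:

    #include <libavcodec/avcodec.h>
    #include <libswscale/swscale.h>

    /* convert a decoded YUV frame to packed RGB24 so each pixel is a plain R,G,B triple */
    static AVFrame *frame_to_rgb(AVCodecContext *dec_ctx, AVFrame *frame)
    {
        AVFrame *rgb = av_frame_alloc();
        if (!rgb)
            return NULL;
        rgb->format = AV_PIX_FMT_RGB24;
        rgb->width  = frame->width;
        rgb->height = frame->height;
        if (av_frame_get_buffer(rgb, 0) < 0) {
            av_frame_free(&rgb);
            return NULL;
        }
        struct SwsContext *sws = sws_getContext(frame->width, frame->height, dec_ctx->pix_fmt,
                                                rgb->width, rgb->height, AV_PIX_FMT_RGB24,
                                                SWS_BILINEAR, NULL, NULL, NULL);
        if (!sws) {
            av_frame_free(&rgb);
            return NULL;
        }
        sws_scale(sws, (const uint8_t * const *)frame->data, frame->linesize,
                  0, frame->height, rgb->data, rgb->linesize);
        sws_freeContext(sws);
        return rgb;
    }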
[01:45:21 CEST] <wyth> Hm.. all I can see is requests to support DNG, so I guess it's looking like I'm out of luck? :-\
[04:13:13 CEST] <Toba> hey yall... this is going to sound like a dumb question but what is a lavfi
[04:13:31 CEST] <Toba> what does that string refer to - it's tossed out all over online as a -f option, but I have no clue what it does and it isn't very google-able
[04:20:16 CEST] <Toba> aha it's technically a virtual input device, i see http://ffmpeg.org/ffmpeg-devices.html#lavfi
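A typical lavfi use feeds a filter graph as if it were an input; for example (testsrc here is just a test pattern generator):

    ffmpeg -f lavfi -i "testsrc=size=1280x720:rate=30" -t 5 out.mp4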
[08:57:36 CEST] <cowai> Hello all,
[08:58:16 CEST] <cowai> Has anybody here any experience with distributed encoding (splitting source file into segments and rendering separately on different hosts) ?
[09:00:15 CEST] <cowai> I want to segment the file with -f segment without audio, and render the audio separately. Then after all the segments are rendered I want to concat them along with the audio. I have tried this, and I always end up with lip-sync problems.
[09:01:06 CEST] <blap> nice idea
[09:10:54 CEST] <JEEB> cowai: many here have experience with that, but that's generally the part where people write solutions on top of the APIs and those tend to be (C) YourDayJob and not publicly available
[09:11:21 CEST] <JEEB> I mean, every darn VOD video site on the internet is doing it :P
[09:13:11 CEST] <Nacht> Netflix wrote a nice bit about it on their techblog
[09:19:50 CEST] <JEEB> the general details are not exactly secret, yes
[09:19:58 CEST] <JEEB> it's just that nobody is just going to publish their code
[09:20:20 CEST] <Nacht> Yeah I can imagine that. Although, in the age of open source, you occasionally get a kind soul :)
[09:21:34 CEST] <Nacht> I haven't tried it myself, as I haven't had the need for it yet.
[09:23:57 CEST] <JEEB> basically the issue is that at that point you're publishing a workflow rather than just a tool in your workflow. for VOD it's pretty simple generally since there's stuff like ffms2 which lets you index and frame-exactly access content. as for audio, I'm not even sure it's worth encoding in separate chunks :P
[09:24:14 CEST] <JEEB> (you can always fragment it according to your video fragment lengths)
[09:24:15 CEST] <Nacht> https://medium.com/netflix-techblog/high-quality-video-encoding-at-scale-d159db052746
[09:25:00 CEST] <JEEB> also don't remind me of the XML mess that is IMF... :|
[09:27:05 CEST] <cowai> Does netflix split the source into chunks before rendering?
[09:27:35 CEST] <Nacht> Of course they do, read the article
[09:28:25 CEST] <JEEB> although tbh there's multiple ways around that problem, and the problem depends on if you're taking in files of the size of hundreds of gigabytes or not
[09:28:27 CEST] <Nacht> If you're rendering a movie of 2 hours in the highest bitrate and your process stalls, you just threw away a lot of encoding. If you chunk it before, you only lose a tiny part
[09:28:35 CEST] <JEEB> uhh
[09:28:49 CEST] <JEEB> that has nothing to do with chunking the input (file)
[09:29:02 CEST] <JEEB> that's just chunked decoding/encoding
[09:29:13 CEST] <Nacht> That's one of the reasons Netflix mentioned as why they cut up the encoding
[09:30:15 CEST] <cowai> In my project the input can be any format, any framerate and any resolution. Do I have a chance with simple scripting and ffmpeg ?
[09:30:21 CEST] <JEEB> no
[09:30:30 CEST] <Nacht> Also the fact that you can run a lot of encoding in parallel, so the process takes less time
[09:34:21 CEST] <cowai> I have a working example where I split the source into segments in the "master node" and render the audio to the end format in the same process. I then render each segment with the end video format and -c:a copy. I then concat it and it works.
[09:34:21 CEST] <cowai> But this way is not efficient if I need 5 different renditions. The segments should have no audio, and the audio should be encoded separately and merged back in in the last step. But this results in out of sync issues most of the time.
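A rough command-line sketch of the steps cowai describes (not a fix for the sync problem; stream-copy segmenting can only cut on keyframes, and the chunk list file name is made up):

    # 1) split the video only (no audio), stream copy
    ffmpeg -i source.mov -map 0:v -c copy -f segment -segment_time 10 chunk_%04d.ts
    # 2) encode the audio once, separately
    ffmpeg -i source.mov -map 0:a -c:a aac audio.m4a
    # 3) once the chunks are encoded on the workers, concat them and mux the audio back in
    ffmpeg -f concat -safe 0 -i encoded_chunks.txt -i audio.m4a -map 0:v -map 1:a -c copy out.mp4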
[10:19:32 CEST] <Nacht> Anyone got a good recommendation on a hardware encoder brand ?
[10:19:50 CEST] <Nacht> Been looking at Harmonic and Ericsson(Envivio)
[10:20:19 CEST] <Nacht> But judging from the website of Ericsson it's almost as if they don't want people to know they even sell the things
[10:26:27 CEST] <ChocolateArmpits> Nacht, maybe they are discontinued
[10:26:56 CEST] <JEEB> Nacht: pretty sure people here either make their own solutions or buy solutions based on open source (there are vendors selling such boxes, even for broadcast use)
[10:40:30 CEST] <Nacht> Can never hurt to ask
[10:53:35 CEST] <lethalwp> hello;   mpv --hwdec=vdpau shows me this: [vd] Pixel formats supported by decoder: vaapi_vld yuv420p10le
[10:53:35 CEST] <lethalwp> [vd] Codec profile: Main 10 (0x2)
[10:53:35 CEST] <lethalwp> [vd] Requesting pixfmt 'yuv420p10le' from decoder.
[10:53:35 CEST] <lethalwp> [vd] Falling back to software decoding.
[10:53:44 CEST] <lethalwp> but it should work:  vdpauinfo shows: HEVC_MAIN                      186 65536  4096  4096
[10:53:44 CEST] <lethalwp> HEVC_MAIN_10                   186 65536  4096  4096
[10:54:00 CEST] <lethalwp> any idea why libavcodec is refusing vdpau?
[11:00:13 CEST] <lethalwp> mm if i try ffmpeg -hwaccel vdpau -i samefile to otherone  i get;:   [AVHWDeviceContext @ 0x560fcd6825a0] Successfully created a VDPAU device (G3DVL VDPAU Driver Shared Library version 1.0) on X11 display :0
[11:00:13 CEST] <lethalwp>     So i guess it should work
[11:14:23 CEST] <JEEB> lethalwp: vdpau as far as I know has no 10bit output support
[11:14:39 CEST] <JEEB> you could get it with cuvid I think?
[11:15:12 CEST] <lethalwp> JEEB, even if vdpauinfo shows it's supported?
[11:15:18 CEST] <lethalwp> it's an rx480
[11:15:25 CEST] <JEEB> then use vaapi
[11:15:37 CEST] <JEEB> also yes, even if the PROFILE is supported it doesn't mean it can output 10bit
[11:15:43 CEST] <JEEB> vdpau doesn't specify 10bit YCbCr surfaces
[11:15:44 CEST] <lethalwp> ok, doesn't work either; but I will dig more into that
[11:15:59 CEST] <JEEB> vaapi should be the preferred way for AMD/Intel
[11:16:10 CEST] <JEEB> and it does have a 10bit surface thing
[11:16:14 CEST] <JEEB> if I recall correctly
[11:16:42 CEST] <lethalwp> JEEB, I was hoping to use my RX 480 to decode 10-bit HEVC even if it's displayed on a FullHD non-HDR screen, but I'm failing to do it. Do you know if some GeForce 1030 could do it: read the 10-bit content, decode it, but display it as normal FullHD?
[11:17:35 CEST] <JEEB> well display is anyways up to your video renderer (which with mpv would be the opengl renderer)
[11:18:25 CEST] <lethalwp> atm mpv always falls back to software decoding, whatever I choose. (my next step was kodi, but it doesn't even decode H.264 in hw, so even worse..)
[11:19:12 CEST] <JEEB> it only falls back when something in the chain doesn't support the thing :P
[11:19:57 CEST] <JEEB> I don't see how something else would be able to hwdec something unless you've failed to build or configure the other thing properly or something
[11:20:04 CEST] <JEEB> also this is more related to #mpv to be honest
[11:20:10 CEST] <JEEB> just move there :P
[11:21:12 CEST] <lethalwp> still a little bit ;)  just rebuilt mpv from git (with ffmpeg from git, mesa from an up-to-date PPA); when decoding vaapi and rendering vaapi I get: [ffmpeg] AVHWFramesContext: Failed to create surface: 14 (the requested RT Format is not supported).
[11:21:42 CEST] <JEEB> just fucking move to #mpv , ok?
[11:21:48 CEST] <lethalwp> ok nvmd
[11:21:50 CEST] <lethalwp> thx anyway
[11:21:52 CEST] <JEEB> more people relevant to this discussion there
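For what it's worth, a quick way to check whether the VAAPI decode path itself works outside of any player is something along these lines (the render node path and the input file name are placeholders):

    ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -i hevc10_sample.mkv -f null -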
[11:28:53 CEST] <orzel> Hello. I'm extracting quite a lot of useful ("metadata") information from video files using ffprobe's JSON output. Though there's one piece of information I'm not able to get, which is called "Scan type". Typically mediainfo would display it under this name, but I can't make ffprobe output it
[11:29:50 CEST] <orzel> do you have any idea how to achieve this? On a typical *.mov file I would get from mediainfo "Scan type": interlaced, progressive, interleaved, ...
[11:30:28 CEST] <orzel> I kinda get some hint when using -show_frames, but the output is huge, and it's not exactly what I want
[11:30:54 CEST] <JEEB> yea, show_frames and if you only care about what the first frame gives you can limit it to the first frame
[11:31:03 CEST] <JEEB> because it's something that can change
[11:31:10 CEST] <JEEB> like, it actually does in broadcast etc
[11:31:22 CEST] <JEEB> top first/bottom first and all that jazz
[11:31:31 CEST] <JEEB> and of course it only contains the information on the *coding* of the content
[11:31:35 CEST] <JEEB> not the actual content itself
[11:31:47 CEST] <JEEB> there are cases where interlaced coding is used for progressive content and vice versa
[11:32:01 CEST] <JEEB> which are mistakes more or less but just to note that you can only know as much :)
[11:34:10 CEST] <ChocolateArmpits> I usually skip 5 seconds and only then grab frame-related information, makes it safer I guess
[11:35:16 CEST] <willhunt> I want to record images captured from a high speed frame buffer into video format, I need the exact timestamp of each frame, how can I do that, any suggestion?
[11:36:22 CEST] <JEEB> what is your time line? does the frame buffer API come with a timestamp or is it just "you have to call some proper time API when getting stuff from the frame buffer"?
[11:39:53 CEST] <willhunt> I call some API to capture the image, and when I get the image I know the timestamp; I just don't know how to encode the images to video and how to save the timestamp of each frame
[11:40:58 CEST] <JEEB> in the FFmpeg libraries a raw image is an AVFrame, so you will have to create an AVFrame and utilize/create+copy buffers and set the PTS etc values
[11:41:08 CEST] <JEEB> (and of course stuff like the pixel format etc)
[11:41:58 CEST] <JEEB> there are helper functions for you to first get an AVFrame, and if you have to move your image to a more aligned memory buffer there's also one to allocate you that buffer with proper alignment as well
[11:42:11 CEST] <JEEB> then the AVFrame can be fed to video encoders
[11:42:14 CEST] <JEEB> then you get AVPackets
[11:42:23 CEST] <JEEB> and those can be written by libavformat
[11:42:31 CEST] <JEEB> I think that sounds relatively straightforward :P
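A rough sketch of that flow; "enc", "ofmt" and "vstream" are assumed to be an already-opened encoder context, output AVFormatContext and its video stream, and the microsecond time base is only an example:

    /* wrap one captured image in an AVFrame and set its PTS from the capture timestamp */
    AVFrame *frame = av_frame_alloc();
    frame->format = enc->pix_fmt;          /* whatever pixel format the encoder expects */
    frame->width  = enc->width;
    frame->height = enc->height;
    av_frame_get_buffer(frame, 0);         /* allocates properly aligned image buffers */
    /* copy the captured pixels into frame->data[] honoring frame->linesize[] */
    frame->pts = capture_time_us;          /* in units of enc->time_base, here {1, 1000000} */

    avcodec_send_frame(enc, frame);
    AVPacket *pkt = av_packet_alloc();
    while (avcodec_receive_packet(enc, pkt) == 0) {
        av_packet_rescale_ts(pkt, enc->time_base, vstream->time_base); /* encoder -> muxer time base */
        pkt->stream_index = vstream->index;
        av_interleaved_write_frame(ofmt, pkt);
        av_packet_unref(pkt);
    }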
[11:43:20 CEST] <orzel> JEEB: i noticed that "scan type" can be about encoding and also about storing. That's it ?
[11:43:38 CEST] <willhunt> is the timestamp information saved into the video?
[11:44:02 CEST] <JEEB> willhunt: if you set the timestamp (PTS) correctly in the AVFrame
[11:44:34 CEST] <willhunt> when I am decoding the video, what can I get?
[11:44:36 CEST] <JEEB> and know what your time base is, and rescale the timestamps accordingly where needed it should go through the workflow a-ok
[11:44:53 CEST] <JEEB> willhunt: if you are using the decoding APIs AVFrame's pts field will have the pts :P
[11:45:08 CEST] <JEEB> https://ffmpeg.org/doxygen/trunk/structAVFrame.html#a0452833e3ab6ddd7acbf82817a7818a4
[11:45:47 CEST] <JEEB> (the time base generally is the stream's time base which comes out of the demuxer, or is otherwise known)
[11:46:02 CEST] <JEEB> I am kind of lobbying for AVFrame to have its own time_base field
[11:46:10 CEST] <JEEB> but it has to be actually done
[11:50:32 CEST] <willhunt> when decoding the video, what information can I get? I want to know the total number of frames in the video, and the original image frame data and its timestamp by frame number
[11:52:17 CEST] <JEEB> willhunt: to absolutely know how many frames a thing has you need to parse all packets first, image is a "d'uh, yes" and timestamps have nothing to do with frame numbers? if you need frame-exact access and indexing under a simpler API take a look at ffms2
[11:52:23 CEST] <orzel> JEEB:  any idea which 'scan type' mediainfo actually display ? encoding or storage ?
[11:52:38 CEST] <JEEB> ffms2 has frame-exactness and indexing of input done on top of FFmpeg's APIs
[11:52:54 CEST] <JEEB> and it does give you the original YCbCr or RGB or whatever images
[11:53:20 CEST] <JEEB> orzel: generally it's the coding type. Although if you can have the metadata in the container that is also exported
[11:54:09 CEST] <JEEB> if I just run as a test on a random interlaced MOV sample I have around
[11:54:30 CEST] <JEEB> http://up-cat.net/p/a2118527
[11:54:50 CEST] <JEEB> so all that information is available to you through the json output as well
[11:55:24 CEST] <JEEB> the "bottom coded first" part is the interlacism part
[11:56:09 CEST] <JEEB> it could be in the container for that track, or it could be in the video track itself; personally I don't care as long as I get the information :)
[11:57:48 CEST] <willhunt> How fast should I encode the video? The capturing frame rate is about 100 FPS
[11:59:48 CEST] <JEEB> that depends on so many variables. how fast is your PC hardware, how much filtering is needed and if that is going to be a bottleneck if libavfilter is used
[11:59:51 CEST] <JEEB> etc etc
[12:00:02 CEST] <orzel> JEEB, ChocolateArmpits: is there any way to ask for ffprobe -show_frames to display only one (or few) frames, starting at 5S for example ?
[12:00:08 CEST] <JEEB> yes
[12:00:21 CEST] <JEEB> and you can even limit it to a specific stream after you've done -show_streams
[12:00:22 CEST] <orzel>  /frame in man ffprobe only reports -show_frames
[12:00:54 CEST] <ChocolateArmpits> the documentation has frame limit syntax
[12:01:47 CEST] <orzel> yop, ok https://stackoverflow.com/questions/35043972/how-to-limit-the-number-of-frames-ffprobe-interprets
[12:01:53 CEST] <orzel> so is such a good doc :)
[12:02:33 CEST] <JEEB> that's actually documented https://ffmpeg.org/ffprobe.html
[12:02:46 CEST] <JEEB> and this is generated from the man page
[12:02:54 CEST] <JEEB> if you look for "read_intervals" there
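For example, combining read_intervals with ChocolateArmpits' "skip a few seconds in" idea, something like this reads a single frame at the 5 second mark from the first video stream:

    ffprobe -v error -select_streams v:0 -read_intervals "5%+#1" \
            -show_entries frame=interlaced_frame,top_field_first -of json input.mov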
[12:03:51 CEST] <JEEB> willhunt: anyways you'd have to first find out the input parameters (pixel format etc), and then find out what your required output is
[12:04:02 CEST] <JEEB> and then one could start looking into possible bottlenecks
[12:14:02 CEST] <willhunt> the captured image is 1024x1024 grayscale and the capture frame rate is 100 FPS; if I want to record the images into a video with their capture timestamps, what should I learn? Could you give me some concrete help?
[12:15:18 CEST] <orzel> JEEB: yes, thanks
[12:53:38 CEST] <mushy> hey
[12:54:14 CEST] <mushy> Is anybody in this chat?
[12:55:08 CEST] <blap> maybe
[12:55:50 CEST] <mushy> Awesome, I'm new to ffmpeg (and not a very good linux user either)
[12:56:06 CEST] <mushy> I'm currently trying to compile FFmpeg from source so that I can use it in a C++ program
[12:56:28 CEST] <mushy> do you think you'd be able to offer any help with this?
[12:58:11 CEST] <cowai> is high average psnr good or bad?
[12:59:34 CEST] <mushy> Does anyone have a decent, up-to-date guide on installing & using FFmpeg in C++ for a beginner?
[13:00:52 CEST] <blap> i don't know
[13:01:12 CEST] <blap> i mean you can use a system call mushy
[13:01:22 CEST] <blap> and pass parameters to ffmpeg
[13:01:28 CEST] <blap> but maybe there's an api too
[13:01:56 CEST] <mushy> Yeah I know, but I'd like to use the API as I will eventually be cross-compiling and using it from the NDK on Android
[13:02:09 CEST] <mushy> so I felt it would be worthwhile getting familiar with the API
[13:02:36 CEST] <JEEB> mushy: docs/examples has quite a few examples
[13:02:56 CEST] <JEEB> and there's the documentation that you can find in doxygen comments which can also be browsed on the web
[13:03:12 CEST] <mushy> JEEB thanks, I'll check them out.. currently feeling a little bit demotivated as I have hit hiccups even with the compilation guide...
[13:03:13 CEST] <JEEB> stuff like https://ffmpeg.org/doxygen/trunk/group__lavf__decoding.html
[13:04:04 CEST] <JEEB> mushy: don't follow a guide with all the bells and whistles. keep in mind that 99% of all things for decoding are enabled within FFmpeg by default and need no external libraries. for encoding libraries are more often required but you must have a specific requirement so focus on that
[13:04:07 CEST] <mushy> JEEB brilliant - https://ffmpeg.org/doxygen/trunk/encode_video_8c-example.html - that gives me something to work on
[13:04:53 CEST] <mushy> Sure, this time I have built without:
[13:04:54 CEST] <mushy> --enable-libfdk_aac --enable-libfreetype --enable-libmp3lame --enable-libopus --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265
[13:05:17 CEST] <JEEB> basically start with a plain ./configure
[13:05:25 CEST] <JEEB> and then add things you *require*
[13:05:44 CEST] <JEEB> if you don't even require libx264 for H.264 encoding then it is highly likely that you don't need a lot of stuff you're enabling
[13:06:19 CEST] <JEEB> also most compilation guides focus on static dependencies and just getting the ffmpeg.c command line tool
[13:06:42 CEST] <JEEB> which might not be what you want when you're building your stuff
[13:07:00 CEST] <JEEB> so yea, find what you actually require
[13:08:07 CEST] <mushy> for now my goal is to write a very basic C++ program which will take frames from a grayscale image (I get the data from a fingerprint scanner) and save the frames into an AVI container
[13:08:33 CEST] <JEEB> if it's just images you might as well initially just write out PNG files?
[13:08:44 CEST] <JEEB> (you need zlib for that)
[13:09:23 CEST] <mushy> The program i am talking about building is purely an example for me to use to learn
[13:09:26 CEST] <JEEB> so yea, configure --prefix=/your/prefix --enable-zlib gives you static FFmpeg libraries, and add `--disable-static --enable-shared` to get shared libraries
[13:09:32 CEST] <JEEB> yes, the flow is the same
[13:09:59 CEST] <mushy> JEEB perfect, so will those flags allow me to link the libraries I need and have a self-contained program?
[13:10:19 CEST] <JEEB> mushy: you get required flags from the pc (pkg-config) files
[13:11:09 CEST] <JEEB> PKG_CONFIG_PATH=/your/prefix/lib/pkgconfig pkg-config --libs libavcodec
[13:11:11 CEST] <JEEB> for example
[13:11:26 CEST] <JEEB> many build systems have support for it and it's the standard way to convey "which flags do I need for this library"
[13:12:33 CEST] <mushy> Okay, i think ill have to do a little bit of reading up on pkg-config before starting
[13:12:52 CEST] <JEEB> the pc files get installed into the prefix when make install is run
[13:13:09 CEST] <JEEB> for example I have my ARMv7 prefix in /home/jeeb/ownapps/armv7_prefix/
[13:13:15 CEST] <mushy> make && make install seem to have run fine.
[13:13:25 CEST] <JEEB> http://up-cat.net/p/97b74014
[13:13:34 CEST] <mushy> I'll try to find where mine are
[13:13:36 CEST] <JEEB> (PKG_CONFIG_LIBDIR completely overrides)
[13:13:45 CEST] <JEEB> PKG_CONFIG_PATH appends to the default search path
[13:13:56 CEST] <JEEB> in my case since I have built for ARM I don't want my local x86 stuff there ;)
[13:14:22 CEST] <JEEB> same for cflags, they're also there
[13:15:09 CEST] <mushy> would the command echo $PKG_CONFIG_PATH show me what is in mine?
[13:17:21 CEST] <JEEB> mushy: no since it's not around by default
[13:17:34 CEST] <JEEB> even though it's "SOMETHING_PATH" it's *appending* to the default locations
[13:17:42 CEST] <JEEB> which is kind of misleading, yes
[13:17:50 CEST] <JEEB> mushy: they always go under what you set as --prefix
[13:18:01 CEST] <JEEB> <prefix>/lib/pkgconfig
[13:18:41 CEST] <mushy> Right, it's a little confusing right now. I think I'll go and find some basic resources that describe it
[13:19:04 CEST] <mushy> do you know if it has anything to do with the error messages I'm getting trying to compile some of the sample code?
[13:19:05 CEST] <mushy> In file included from /usr/local/include/libavutil/mem.h:34:0,                  from /usr/local/include/libavutil/common.h:464,                  from /usr/local/include/libavutil/avutil.h:296,                  from /usr/local/include/libavutil/samplefmt.h:24,                  from /usr/local/include/libavcodec/avcodec.h:31,                  from main.cpp:31: main.cpp: In function int main(int, char**): main.cpp:112:55: error: tak
[13:19:36 CEST] <mushy> formatted
[13:19:36 CEST] <mushy> https://pastebin.com/p48VHRNR
[13:20:07 CEST] <JEEB> because it's not C++
[13:20:13 CEST] <JEEB> there's a thing about that in the docs
[13:20:30 CEST] <JEEB> https://trac.ffmpeg.org/wiki/Including%20FFmpeg%20headers%20in%20a%20C%2B%2B%20application
[13:20:40 CEST] <JEEB> it's in the actual docs as well, but this popped up on top on google :P
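The gist of that page is wrapping the C headers in extern "C" when including them from C++, roughly:

    // FFmpeg's headers are C, so they need C linkage when included from C++
    extern "C" {
    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>
    #include <libavutil/avutil.h>
    }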
[13:22:18 CEST] <mushy> Strange, still getting the error when I try to compile. Should I be using any flags or will g++ main.cpp suffice?
[13:23:55 CEST] <JEEB> they're not C++ if you're talking about the examples
[13:24:25 CEST] <JEEB> I mean, it's not *that* different to make them into valid C++, but just for the record they're not made with that in mind
[13:24:44 CEST] <mushy> Ahhhh makes sense
[13:24:46 CEST] <JEEB> also I have never built the examples, I just have looked at how they're built at times
[13:25:02 CEST] <JEEB> but I would bet they'd require the linker and cflags
[13:25:03 CEST] <JEEB> :P
[13:25:07 CEST] <JEEB> which you can get from pkg-config
[13:25:45 CEST] <mushy> I think I've found exactly what I need... https://trac.ffmpeg.org/wiki/CompilationGuide/Generic
[13:26:24 CEST] <mushy> Thanks for the help so far though JEEB
[13:26:55 CEST] <JEEB> no problem
[13:38:14 CEST] <mushy> If i were to make install with the prefix set to /opt/ffmpeg on Fedora would uninstallation just be a case of deleting that directory?
[13:39:06 CEST] <JEEB> mushy: yes, but for quick development purposes I would just be utilizing a local prefix under /home
[13:39:26 CEST] <JEEB> that way no sudo is needed during make install, either
[13:40:02 CEST] <mushy> Okay thanks for the tip. I wasn't aware of what a prefix even was until just now so my previous installation is to /usr/local/
[13:40:22 CEST] <JEEB> so you just without thought did sudo make install? :P
[13:41:01 CEST] <JEEB> thankfully /usr/local should be relatively empty so you can double-check if there's anything that wasn't installed today and then clean that directory just in case
[13:44:27 CEST] <JEEB> mushy: think of prefix as the "installation location"
[13:44:53 CEST] <JEEB> that can have stuff under it like bin/ , lib/ , include/ to mention the most technically needed ones
[13:45:36 CEST] <JEEB> (which is the standard *nix directory structure)
[13:46:13 CEST] <JEEB> that way if you have two or more libraries required for a thing, you can install them all into a single prefix, and then use that prefix during configuration of the build of whatever you're finally trying to build
[13:46:45 CEST] <mushy> Ahhh makes sense!
[13:47:27 CEST] <mushy> do you recommend I clean out ffmpeg from my /bin, /lib and /include and reinstall with a prefix set to my home directory?
[13:48:00 CEST] <JEEB> during development usually I prefer a prefix that doesn't require root access to write to
[13:48:05 CEST] <JEEB> so I can easily and quickly make changes
[13:48:29 CEST] <mushy> Fair enough, I mean I am currently working in a VM so I don't really mind mucking about
[13:48:48 CEST] <JEEB> you can keep your prefix at /usr/local (since that's usually not utilized by the package management)
[13:48:52 CEST] <JEEB> if you want to
[13:49:02 CEST] <JEEB> just that it requires root to access it
[13:49:19 CEST] <JEEB> positive side is that /usr/local/lib usually is in your default shared library search path
[13:58:03 CEST] <yooyo> Need help from developers... Is it possible to call avio_open(...) then avformat_write_header(...) then write some packets, then av_write_trailer(..) then avio_close, AND change filename and start all over again.. avio_open, avformat_write_header, write packets, av_write_trailer(..), avio_close?
[13:58:43 CEST] <JEEB> sounds like simpler for you to make your own AVIO thing
[13:58:51 CEST] <yooyo> without touching codec context, streams, etc... just want to start writing file on different location
[13:59:35 CEST] <JEEB> so you provide C functions for read/seek/write (the required ones)
[13:59:44 CEST] <JEEB> and then tell avformat "use these functions"
[13:59:51 CEST] <JEEB> and you can handle all of your custom logic in that stuff of yours
[14:00:30 CEST] <JEEB> and then your lavf code would call the usual lavf APIs as required
[14:00:44 CEST] <JEEB> but all that "not pretty" custom I/O logic could be pushed somewhere else
[14:01:06 CEST] <JEEB> I made something for reading and seeking in 2013, and while the APIs in general have changed, I think this hasn't too much :P
[14:01:09 CEST] <JEEB> https://github.com/jeeb/matroska_thumbnails/blob/master/src/istream_wrapper.c#L14
[14:02:14 CEST] <yooyo> I'll check that
[14:02:22 CEST] <JEEB> see https://ffmpeg.org/doxygen/trunk/structAVIOContext.html
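A stripped-down sketch of the write side of that idea (the callback name and the "my_state" opaque pointer are made up, and error handling is omitted):

    /* lavf pushes the muxed bytes into this callback; where they go is entirely up to you */
    static int my_write(void *opaque, uint8_t *buf, int buf_size)
    {
        /* e.g. write buf_size bytes to whichever file is currently "active" */
        return buf_size;
    }

    /* later, when setting up the muxer: */
    unsigned char *iobuf = av_malloc(4096);
    AVIOContext *avio = avio_alloc_context(iobuf, 4096, 1 /* write_flag */, my_state,
                                           NULL /* read */, my_write, NULL /* seek */);
    ofmt_ctx->pb = avio;  /* the muxer now calls my_write instead of touching files itself */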
[14:03:28 CEST] <yooyo> I cant simply reopen files on lower level... old file hast to be properly closed, and new one properly open
[14:04:30 CEST] <yooyo> seems that avio_close or av_write_trailer mess with muxer
[14:04:31 CEST] <JEEB> well, I thought you had your logic for that stuff? although yes, if lavf is using multiple threads (to write stuff) that can be fun, but I'm not sure it does
[14:04:46 CEST] <JEEB> yooyo: yes, those are higher level things that you call from lavf
[14:04:55 CEST] <JEEB> those then cause it to call the write function
[14:05:09 CEST] <JEEB> avio_close is AVIO stuff and you can override it with your AVIO context
[14:05:11 CEST] <yooyo> and I cant find a way to "reset" muxer
[14:05:30 CEST] <JEEB> I'm still not sure WTF you're doing, I'm just giving you examples of how to write the different packets into different files
[14:05:39 CEST] <JEEB> because that's what it seemed to be what you're doing
[14:05:43 CEST] <JEEB> by your short explanation
[14:05:59 CEST] <JEEB> in which case you put your custom logic into the AVIO wrapper you have which decides which packets or whatever go where
[14:06:12 CEST] <JEEB> or well, at that point, which bytes
[14:07:10 CEST] <JEEB> and yes, if you were trying to do it with a single muxer instance that wouldn't work because that instance would be "done" with after you write the trailer/close
[14:07:22 CEST] <JEEB> so either you want to ignore the lavf level and just receive bytes
[14:07:28 CEST] <JEEB> in which case you have the AVIO level
[14:07:37 CEST] <JEEB> you receive bytes from lavf and you write them wherever the hell you want
[14:07:41 CEST] <JEEB> files are not even a thing at that point
[14:07:42 CEST] <yooyo> Im recording live stream and I want to record to a file and after say 10min close file and continue recording in new file
[14:08:02 CEST] <JEEB> ok, then there's the segment muxing capabilities of things
[14:08:07 CEST] <yooyo> so.. after 10 min close current file, open new one and loop
[14:08:43 CEST] <JEEB> you can do it by making new muxers of course, which is that I think the segment muxer is doing
[14:09:03 CEST] <yooyo> i can do it by freeing everything and creating context and codec etc.. but I would like to avoid that step
[14:09:16 CEST] <JEEB> you can just partially push it into lavf itself as I said :P
[14:09:23 CEST] <JEEB> it has the segment muxer
[14:09:39 CEST] <JEEB> https://www.ffmpeg.org/ffmpeg-all.html#segment_002c-stream_005fsegment_002c-ssegment
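On the command-line side the segment muxer covers exactly the "new file every N minutes" case, e.g. (the input URL and file names are placeholders); the same muxer and options can also be set up through the API:

    ffmpeg -i rtmp://example.com/live/stream -c copy -f segment \
           -segment_time 600 -reset_timestamps 1 recording_%04d.mkv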
[14:10:29 CEST] <mushy> Am I doing something wrong? I'm compiling this super simple code https://pastebin.com/v8dERt8M with the command "g++ main.cpp -L /usr/local/lib -I /usr/local/include" and getting the error https://pastebin.com/PD9NWgeS
[14:11:13 CEST] <JEEB> mushy: yes you need the required linker flags too
[14:11:34 CEST] <JEEB> PKG_CONFIG_PATH=/usr/local/lib/pkgconfig pkg-config --libs libavcodec for libavcodec for example
[14:11:36 CEST] <mushy> g++ main.cpp -L/usr/local/lib -lavutil -lavformat -I/usr/local/include
[14:12:22 CEST] <yooyo> OK.. I'll check segment_stream code.. there should be some trick
[14:12:44 CEST] <JEEB> mushy: just call pkg-config in your build process. in the worst case something like $(pkg-config --libs libavcodec)
[14:12:51 CEST] <JEEB> and cflags for the C flags
[14:13:13 CEST] <JEEB> so when you get some reference you cannot find, look up in which library it is and add that to your dependencies
[14:14:44 CEST] <mushy> Thanks JEEB, I'm having a little trouble with the syntax but I'm sure I'll figure it out
[14:14:54 CEST] <mushy> g++ main.cpp -L/usr/local/lib -lavutil -lavformat -I/usr/local/include pkg-config '--libs libavcodec libavformat'
[14:15:26 CEST] <JEEB> yea the singular quotes there might be a problem :P
[14:15:39 CEST] <JEEB> because they make '--libs libavcodec
[14:15:53 CEST] <JEEB> blah be a single parameter to g++ :P
[14:16:04 CEST] <mushy> right so it thinks its a file
[14:16:20 CEST] <JEEB> or just a parameter to itself, how it interprets that is separate :P
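The unquoted form of that command would be roughly the following; the PKG_CONFIG_PATH assignment inside the substitution matches the /usr/local prefix used above:

    g++ main.cpp $(PKG_CONFIG_PATH=/usr/local/lib/pkgconfig pkg-config --cflags --libs libavformat libavcodec libavutil) -o main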
[14:16:44 CEST] <JEEB> I would recommend you to either make a Makefile quickly or poke something like meson or cmake. since your thing only needs the C++ compiler and pkg-config it should be pretty simple
[14:17:26 CEST] <JEEB> check if meson for example is available in your installed version of fedora
[14:17:36 CEST] <JEEB> and check making something like this quickly for your test thing http://mesonbuild.com/Tutorial.html
[14:17:50 CEST] <JEEB> then you don't have to make the long commands manually all the time :P
[14:18:37 CEST] <mushy> I'm a little familiar with cmake. Is there any reason to use meson instead?
[14:19:02 CEST] <JEEB> not really. it seems more promising but if you already know of cmake that should be fine as well
[14:19:22 CEST] <mushy> Okay, I'll see what I can come up with, thanks for the direction again!
[14:19:25 CEST] <JEEB> np
[14:23:17 CEST] <cowai> is there a preset with better quality than "slow" on h264_nvenc ?
[14:23:49 CEST] <cowai> slow on h264_nvenc is worse quality than ultrafast on x264, at roughly the same speed.
[14:23:54 CEST] <cowai> using a gtx 770
[14:24:15 CEST] <JEEB> don't expect good compression capabilities from hw encoders
[14:24:29 CEST] <JEEB> they're purely there for the speed + "no CPU usage"
[14:26:14 CEST] <cowai> Do you know if paid solutions like aws elastics encoder use gpu or x264?
[14:26:43 CEST] <JEEB> if they're cloud they're generally something based on FFmpeg or similar
[14:27:09 CEST] <JEEB> I just can't see dedicated server GPUs at any point being as price-performant as CPU-based encoders
[14:27:27 CEST] <JEEB> even with the fact that the server GPUs can give you some extra streams
[14:27:30 CEST] <cowai> aws delivers nvidia gpus with ec2
[14:27:47 CEST] <JEEB> yes, and I really wonder how the fuck people select those. is it that pure CPU pricing is just crazy there?
[14:28:00 CEST] <cowai> no idea.
[14:28:39 CEST] <JEEB> but yea, in very specific use cases I can see GPU ASIC encoding being useful
[14:29:00 CEST] <JEEB> but if the "box" is just going to be doing video streaming...
[14:29:15 CEST] <JEEB> (or vod transcoding or whatever)
[14:30:09 CEST] <cowai> I can see them using GPUs and not doing the slow preset, and just not caring about the quality
[14:30:24 CEST] <JEEB> yea, but you're paying how much for the GPU per month?
[14:30:35 CEST] <JEEB> and how many streams at the same time
[14:30:36 CEST] <cowai> i mean with their encoding service.
[14:30:51 CEST] <JEEB> I'd be surprised if that was GPU based for anything else than marketing
[14:30:56 CEST] <JEEB> it just doesn't make sense
[14:31:06 CEST] <cowai> if they can encode twice as fast with, let's say, 20% worse quality, then for AWS it's maybe worth it
[14:31:47 CEST] <JEEB> even if it's less fast you can stick more streams on it, since you're usually not maximizing the CPU utilization with a single stream
[14:31:48 CEST] <cowai> trading quality for speed makes sense when the audience doesn't know what x264 is and all they want is an API.
[14:32:32 CEST] <JEEB> basically, given the extra costs for the GPUs and the fact that they have a very good system for putting CPU loads around I don't think there's *any* reason to use GPUs for encoding
[14:32:38 CEST] <JEEB> not to mention that most of that GPU would be idle doing nothing
[14:32:44 CEST] <JEEB> after all, it's not the actual GPU doing the encoding
[14:32:46 CEST] <JEEB> it's the ASIC on it
[14:32:57 CEST] <cowai> right now I am rending at 700fps with my gtx
[14:33:17 CEST] <cowai> *rendering.
[14:33:31 CEST] <JEEB> if you're actually rendering stuff that's actual GPU GPU usage
[14:33:46 CEST] <JEEB> the encoding has nothing to do with the GPU GPU parts
[14:34:07 CEST] <cowai> makes sense
[14:34:30 CEST] <JEEB> + the ASICs are limited by the amount of streams they can take in
[14:34:48 CEST] <JEEB> so while on a server you can put ~30 live streams
[14:34:55 CEST] <cowai> I have read that utilizing CUDA for scaling as well has some benefits. But I think it all depends on whether one must have the CPU ready for other things or not.
[14:35:08 CEST] <JEEB> I'm not sure if you can put that amount of streaming on a GPU, or then you start needing multiple GPUs
[14:35:28 CEST] <JEEB> yes, you can do some filtering on the card as well
[14:35:41 CEST] <JEEB> but seriously, if you cannot feed the ASIC encoder enough streams then what's the point
[14:35:47 CEST] <cowai> the aws service I am talking about is for VOD, not live.
[14:36:15 CEST] <JEEB> and for VOD you can splice and encode separately, you don't care about absolutely fastest yet crappiest encoder
[14:36:29 CEST] <JEEB> and you can really put that load onto umpteen things
[14:36:42 CEST] <JEEB> even for VOD it really doesn't make sense on a scale
[14:37:03 CEST] <JEEB> because even if you don't split you most likely have more than one thing that you want to encode in a system like that (or well, your customers want to encode)
[14:37:36 CEST] <JEEB> I see the use case for GPU encoding being where you don't want to use the CPU for it for some reason, like gaming
[14:37:50 CEST] <JEEB> I've used my GPU's ASIC for encoding lossless H.264 game capture
[14:37:51 CEST] <cowai> and for marketing purposes ;)
[14:37:52 CEST] <JEEB> completely valid use case
[14:37:57 CEST] <JEEB> yes, of course
[14:38:54 CEST] <JEEB> but I've actually seen people on this channel pick GPU encoding for their solution, and it seems to somehow be viable. I haven't looked at the "cloud" providers' prices too much but are the proper CPU-based things actually priced that much higher than the GPU-included ones, that suddenly that starts making sense?
[14:39:12 CEST] <JEEB> in which case goddamn, someone should start looking at rented dedicated servers
[14:40:07 CEST] <JEEB> although to be honest, such selections probably have over 9000 non-technical reasons there
[14:45:30 CEST] <cowai> Does anybody see anything fundamentally wrong with this approach? https://gist.github.com/cowai/bc6ef0a9b12f6a0d44a6d73d74ca8947
[14:45:50 CEST] <cowai> Its an example of chunked encoding
[14:46:13 CEST] <cowai> It works. But I haven't tested with more than one file right now.
[14:55:48 CEST] <mushy> Its working!!
[14:56:13 CEST] <JEEB> with the segment muxer I guess
[15:22:35 CEST] <paveldimow> Hi, I want to use the overlay feature but I would like to dynamically change the input stream. Is this possible?
[15:24:55 CEST] <JEEB> paveldimow: with the API *you* define what gets fed, but with the cli ffmpeg.c most likely not
[15:44:05 CEST] <captainmurphy> Hi everyone! Could someone point at functions (in libav* sources) that take care of handling RTMP/RTSP input? Thank you.
[15:45:04 CEST] <JEEB> a) make sure those protocols are enabled in libavformat b) open the thing as usual with the rtmp or rtsp url
[15:45:11 CEST] <JEEB> I don't think there's much more special to them than that
[15:45:47 CEST] <captainmurphy> Great. Thanks!
[15:46:35 CEST] <JEEB> setting any additional parameters to the protocol or demux is the same as usual as well, in case additional things are needed
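In API terms that is just the normal open call with the URL, with any protocol/demuxer options passed in the dictionary (the rtsp_transport option here is only an example):

    AVFormatContext *fmt = NULL;
    AVDictionary *opts = NULL;
    av_dict_set(&opts, "rtsp_transport", "tcp", 0);   /* example RTSP option */
    if (avformat_open_input(&fmt, "rtsp://example.com/stream", NULL, &opts) < 0) {
        /* handle error */
    }
    avformat_find_stream_info(fmt, NULL);
    av_dict_free(&opts);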
[15:55:38 CEST] <paveldimow> tnx JEEB !
[16:10:40 CEST] <blap> jeeb's a helpful person
[17:35:19 CEST] <yooyo> I think I have found a very rare bug... in libavformat/mux.c, in function av_write_trailer, add the following:
[17:35:21 CEST] <yooyo> s->internal->nb_interleaved_streams = 0;
[17:35:49 CEST] <yooyo> this prevents reusing AVFormatContext
[17:35:57 CEST] <yooyo> after closing file
[17:36:20 CEST] <JEEB> you should pose that question on the -devel channel
[18:02:32 CEST] <BtbN> yooyo, reusing a context is not supported
[18:03:00 CEST] <jbreeden> Hello all. I am working on developing an ffmpeg codec that takes advantage of a proprietary static library (*.a) that I have licensed. When building ffmpeg, where is the appropriate place to link this library?
[18:03:28 CEST] <BtbN> I am pretty sure that highly violates both GPL and LGPL
[18:03:58 CEST] <Johnjay> BtbN: code that uses proprietary code violates the GPL?
[18:04:14 CEST] <jkqxz> Only if you then distribute it.
[18:04:25 CEST] <BtbN> statically linking to a completely non-free library is
[18:04:51 CEST] <Johnjay> oh ok that makes sense
[18:05:03 CEST] <Johnjay> he said "takes advantage of" instead of "statically links to" and I got thrown off
[18:05:32 CEST] <Johnjay> The last time I fussed with the GPL I was building Python executables on Windows
[18:05:46 CEST] <BtbN> It's not forbidden to do so, but you can never distribute that
[18:05:47 CEST] <Johnjay> And it had an option to make a stand alone exe file to distribute. wasn't sure if that was GPL friendly or not
[18:06:25 CEST] <jbreeden> Hmmm okay. FFmpeg doesn't have any mechanism of dynamically linking plugins, does it?
[18:07:49 CEST] <thebombzen> not at the moment
[18:07:51 CEST] <thebombzen> or rather
[18:07:55 CEST] <thebombzen> it can dynamically link
[18:07:58 CEST] <thebombzen> but it can't load at runtime
[18:09:31 CEST] <Johnjay> would there be any benefit to that?
[18:09:34 CEST] <thebombzen> which means you can't make your version of FFmpeg search for your proprietary library and load it at runtime only if it's found, unless you do the hard work of making that happen
[18:09:59 CEST] <thebombzen> however, you can link dynamic libraries at compile-time to ffmpeg. that's very common and expected (I mean how else are you supposed to link libc?)
[18:10:30 CEST] <Johnjay> thebombzen: you can even link to dlls on windows with mingw! amazing
[18:10:34 CEST] <BtbN> You cannot dynamically load a static library
[18:11:01 CEST] <thebombzen> not sure why you'd want a static library anyway tbh
[18:11:04 CEST] <JEEB> well, on mingw-w64 .dll.a files are actually stubs for shared linking
[18:11:04 CEST] <BtbN> And even if you dlopen something proprietary, that does not clean up the license situation. Only makes it even more complicated.
[18:11:50 CEST] <JEEB> but with shared linking you're still way against GPL and just somewhat problematic wrt LGPL (since you're supposed to be able to replace the LGPL part with your own build with the same things)
[18:12:17 CEST] <thebombzen> well with LGPL isn't it fairly simple? isn't the point of LGPL that you can link components together and the LGPL doesn't interfere?
[18:13:02 CEST] <JEEB> well, the point in LGPL is that the user can replace the library if he or she so wants from the same sources
[18:13:20 CEST] <JEEB> now the problem when you link FFmpeg itself against a proprietary thing
[18:13:30 CEST] <JEEB> is that you'd have to provide the lib and headers for that
[18:13:40 CEST] <JEEB> so that it could be linked successfully
[18:15:34 CEST] <JEEB> basically it's problematic but not impossible (IANAL though)
[18:15:52 CEST] <thebombzen> man being in the EU must make this a lot easier tho
[18:16:01 CEST] <thebombzen> IP law in the US is a pain
[18:16:54 CEST] <JEEB> well the LGPL really doesn't differ
[18:17:44 CEST] <BtbN> The EU just doesn't have idiotic software patents
[18:18:35 CEST] <thebombzen> god I hate those
[18:22:31 CEST] <blap> the average german gets 8.83 liters / 100km (bier consumption)
[18:23:20 CEST] <JEEB> &32
[18:39:12 CEST] <jbreeden> Alright, so
[18:39:27 CEST] <jbreeden> I see this company makes an FFmpeg plugin that presumably is proprietary: http://beamr.com/h264-hevc-encoder-codec-software-development-kits/
[18:39:54 CEST] <jbreeden> I don't see how they'd be able to do this if there weren't some way around the licensing.
[18:45:26 CEST] <jkqxz> I imagine they supply you with a proprietary static library and a set of source code patches to ffmpeg.
[18:45:32 CEST] <JEEB> those services don't usually distribute their stuff, even if they use it
[18:45:40 CEST] <jkqxz> You can then build the modified source and link it with the static library, and use it yourself.
[18:45:43 CEST] <JEEB> yes
[18:45:45 CEST] <jkqxz> You can't distribute it to anyone else.
[18:46:04 CEST] <JEEB> because you yourself can do your own non-free builds just fine
[18:46:16 CEST] <JEEB> also lol, man that marketingtalk
[18:46:23 CEST] <Johnjay> JEEB: do you use mingw much?
[18:46:35 CEST] <JEEB> Johnjay: every now and then
[18:47:02 CEST] <Johnjay> TIL that mingw targets all the windows subsystems VC targets.. except one. Boot-Application
[18:47:24 CEST] <JEEB> is that the UEFI stuff?
[18:47:38 CEST] <jbreeden> Okay, thanks for the feedback.
[18:47:45 CEST] <Johnjay> idk
[18:47:48 CEST] <Johnjay> just noticed it
[18:48:01 CEST] <Johnjay> something about BCD
[18:48:01 CEST] <JEEB> jbreeden: so basically if you were planning on doing anything you distribute - that's a no go
[18:49:24 CEST] <JEEB> Johnjay: ok so it's something for the windows bootloader post-UEFI it seems
[18:49:31 CEST] <JEEB> at least that's how it looks like
[18:49:46 CEST] <JEEB> so in theory it's closer to windows, but not really
[18:49:59 CEST] <JEEB> a rather limited target if I may say
[18:50:00 CEST] <Johnjay> JEEB: is that significant, being after UEFI in this context?
[18:50:37 CEST] <JEEB> yes, since UEFI is completely separate from the OS. the bootloader binary loaded by the UEFI system then is closer to your OS (as it's provided by it)
[18:50:41 CEST] <marlenka> hi
[18:51:14 CEST] <jbreeden> So if I were to make an LGPL codec that used dlopen to access a proprietary shared library that was distributed separately, would that be in violation to your knowledge?
[18:51:15 CEST] <Johnjay> ohh ok lol
[18:51:23 CEST] <Johnjay> i thought you meant historically, as in after the advent of UEFI
[18:53:43 CEST] <JEEB> jbreeden: IANAL, with LGPL the base thing is that the source code for the LGPL thing is given out (in this case it would be FFmpeg) and that the user can rebuild the whole thing to replace the LGPL component with his own if he wishes (not that you need to support it, it just needs to be technically possible)
[18:55:46 CEST] <JEEB> jbreeden: also to be honest I'm not sure how much you'd be losing at that point by just having your proprietary thing using the FFmpeg APIs having its own encode API for that encoder, that way just not using FFmpeg for that and making it a simpler LGPL case :P
[18:56:44 CEST] <JEEB> since "just linking LGPL FFmpeg to a proprietary app" is something that's pretty common and OK as long as you a) provide sources to the LGPL part and b) enable the replacement of the binary by the user (technically)
[18:58:12 CEST] <Johnjay> JEEB: is that replacement of the binary thing part of the LGPL?
[18:58:17 CEST] <Johnjay> the wikipedia page mentions it too
[18:59:12 CEST] <JEEB> yes, although stuff like DRM can still stop the user from actually doing it, but if the user does it correctly he should be able to compile the binary as well.
[18:59:24 CEST] <JEEB> that's why people generally don't do LGPL with static linking
[18:59:36 CEST] <JEEB> because then you'd have to provide all your object files for the proprietary stuff :P
[18:59:47 CEST] <JEEB> (as the f.ex. FFmpeg gets linked to it statically)
[19:00:14 CEST] <JEEB> but as I said, this is all IANAL and generally from my history of looking at how things sit
[19:00:38 CEST] <JEEB> the most general failure with people who don't want to release their sources is that they pack in an FFmpeg binary with GPL features enabled :P
[19:00:49 CEST] <JEEB> and then complain "I just copied my configuration from <project X>"
[19:01:14 CEST] <Johnjay> it says on the ffmpeg webpage not to use --enable-gpl or --enable-nonfree
[19:01:30 CEST] <JEEB> enable-nonfree makes an FFmpeg binary non-distributable
[19:01:37 CEST] <JEEB> enable-gpl enables GPL features in the configuration
[19:01:41 CEST] <jbreeden> Yeah, this is useful feedback. I have no issue releasing source for the wrapper plugin, but I have no control over the proprietary licensing. So, navigating this is a pita.
[19:02:31 CEST] <JEEB> do first check if whatever you're trying to utilize is actually worth it. I mean, x265 isn't great but if you're looking for HEVC it's pretty much one of the least retarded alternatives
[19:03:18 CEST] <JEEB> (it is GPL though, although it's under dual licensing if you find it actually useful)
[19:03:20 CEST] <Johnjay> i'm still confused about this providing object files thing
[19:03:36 CEST] <JEEB> Johnjay: it's specific to static linking of an LGPL library
[19:03:53 CEST] <JEEB> now you have a proprietary app X that utilizes an LGPL FFmpeg, built into static libraries
[19:03:59 CEST] <JEEB> it all goes into a single binary
[19:04:14 CEST] <jbreeden> Yeah, in lieu of providing source you can compile your propietary pieces but just provide the *.o so others can rebuild LGPL portions and statically link
[19:04:15 CEST] <Johnjay> ok that's what confused me, in the wiki page it only talks about the LGPL as relevant for dynamic linking
[19:04:27 CEST] <JEEB> yea, technically you can do it with static
[19:04:34 CEST] <JEEB> but it's really not generally what you want to do
[19:04:36 CEST] <JEEB> it's messy :P
[19:04:45 CEST] <JEEB> (and you have to release the object files which most people don't want to do)
[19:05:07 CEST] <Johnjay> because of the requirement to be able to compile with a new version of the LGPL library?
[19:05:13 CEST] <JEEB> not a new version
[19:05:34 CEST] <JEEB> you just have to be able to replicate the same thing with the sources provided by the vendor
[19:05:44 CEST] <Johnjay> >Essentially, if it is a "work that uses the library", then it must be possible for the software to be linked with a newer version of the LGPL-covered program.
[19:05:48 CEST] <jbreeden> https://www.gnu.org/licenses/gpl-faq.html#LGPLStaticVsDynamic
[19:06:12 CEST] <JEEB> Johnjay: that's bullshit, there's no requirement for a vendor to support a newer version of a thing that he's never heard of
[19:06:36 CEST] <JEEB> or well, newer micro version without API changes sure - that could be possible. but the vendor doesn't have to do that updating or anything for you
[19:06:48 CEST] <JEEB> in the sense that you can do your hacks and update stuff - sure
[19:06:51 CEST] <JEEB> that has to be possible
[19:06:58 CEST] <Johnjay> oh ok that page on gnu.org makes sense
[19:07:24 CEST] <JEEB> but the vendor doesn't have to add any support for things they didn't distribute
[19:08:17 CEST] <Johnjay> right. this is all basically trying to make up a legal tool to allow distributing GPL libraries with proprietary code
[19:08:23 CEST] <Johnjay> hence why ffmpeg is LGPL
[19:08:30 CEST] <jbreeden> I think the meaning isn't that the vendor needs to provide future-proof support, it's that if they provide objects or a dynamic library, the end user can theoretically make changes to maintain compatibility
[19:08:45 CEST] <Johnjay> so in the case of static linking you couldn't update the library or change it or whatever without the obj files
[19:09:04 CEST] <Johnjay> right it's trying to preserve the "4 freedoms" of the GPL
[19:09:06 CEST] <JEEB> they're not GPL libraries, they're LGPL if LGPL applies :P
[19:09:08 CEST] <Johnjay> i.e. the right to modify
[19:09:28 CEST] <JEEB> just a project generating libraries doesn't make it LGPL
[19:09:36 CEST] <JEEB> a GPL library is still a GPL library :P
[19:09:52 CEST] <JEEB> but yes, that is why FFmpeg is by default LGPL and most of the internals are LGPL
[19:10:17 CEST] <Johnjay> so if a library was GPL'd, could you dynamically link to it with your proprietary app?
[19:10:30 CEST] <Johnjay> static linking would violate the GPL ofc
[19:10:48 CEST] <jbreeden> https://en.wikipedia.org/wiki/GPL_linking_exception
[19:12:38 CEST] <Johnjay> that page just says the idea of what I said is not supported by legal evidence
[19:12:42 CEST] <JEEB> IANAL, but GPL and linking is generally treated as the same thing. even if you make the library dload'd, if the software doesn't work in meaningful ways without the library it's pretty obvious (also if you're distributing that GPL library)
[19:13:03 CEST] <JEEB> if you start wanting to piss off open source developers there's plenty of ways of course
[19:13:12 CEST] <JEEB> not that I recommend them :P
[19:14:24 CEST] <Johnjay> it says dynamically linking to GPL libraries makes it covered by GPL
[19:14:44 CEST] <Johnjay> but how is that different than me calling say gcc from the command line?
[19:14:50 CEST] <JEEB> that is my understanding. be it static or dynamic - it's linking
[19:16:22 CEST] <JEEB> Johnjay: I'm too tired to go through the details of what the exact thing you're comparing to is and how it is different from making an application that links against a GPL library...
[19:16:45 CEST] <JEEB> and under certain circumstances I'd say that yes, it's very similar if your app is just a layer that calls ffmpeg.c configured with GPL
[19:16:55 CEST] <JEEB> and you distribute that GPL ffmpeg.c
[19:17:57 CEST] <JEEB> but IANAL
[19:18:01 CEST] <Johnjay> yes
[19:18:05 CEST] <Johnjay> i'm tired too, gonna go eat now
[19:18:11 CEST] <jbreeden> If there's one thing that we can all agree on, it is that none of us are lawyers
[19:18:16 CEST] <JEEB> yes
[19:18:23 CEST] <Johnjay> lol
[19:18:28 CEST] <jbreeden> Thanks for the help, people
[19:18:39 CEST] <JEEB> np
[19:18:50 CEST] <Johnjay> stallman isn't either, but apparently to him static vs dynamic doesn't matter when it's the GPL: https://softwareengineering.stackexchange.com/questions/167773/how-does-the-gpl-static-vs-dynamic-linking-rule-apply-to-interpreted-languages
[19:19:51 CEST] <JEEB> yes, and I agree with that as far as I know :P the idea of the GPL after all is that if you make a derivative work your work is also GPL (not limited to, but *also*)
[19:21:19 CEST] <JEEB> there's a lot of stuff under more permissive licenses for a lot of stuff, if your needs do not match that specific project you have two different ways
[19:21:29 CEST] <JEEB> - you explain yourself and possibly start a project of re-licensing
[19:21:39 CEST] <JEEB> - you change the GPL dependency to something else
[19:21:55 CEST] <JEEB> mpv got to the point where they have the libmpv core licensed to LGPL
[19:22:00 CEST] <Johnjay> here's another discussion of the matter: https://softwareengineering.stackexchange.com/questions/158789/can-i-link-to-a-gpl-library-from-a-closed-source-application
[19:22:09 CEST] <JEEB> I know there's a lot of discussion about it
[19:22:37 CEST] <JEEB> but what I'm noting is the understanding of it that most people around here (most likely) have. and as I've said countless times, IANAL
[19:22:39 CEST] <Johnjay> JEEB: the second link goes into more detail, because there's a lot of ambiguous cases between "calling ffmpeg from an open shell" and "statically linking it into your program"
[19:23:21 CEST] <JEEB> well FFmpeg is available as LGPL so it doesn't even have the GPL problem unless you specifically selected GPL as the license
[19:23:39 CEST] <JEEB> and yes I know there are more cases but they are basically in three things
[19:24:05 CEST] <JEEB> 1) statically linked lib 2) shared linked lib 3) shared library and dload
[19:24:41 CEST] <JEEB> first two with GPL are pretty obviously having you as a thing because you're using it
[19:25:12 CEST] <JEEB> third one I'd say you're trying to skirt the GPL unless it's something like a framework for something and you have plugins you have no control over
[19:25:30 CEST] <JEEB> and someone completely different made a GPL plugin not knowing that GPL for proprietary stuff doesn't exactly work
[19:26:01 CEST] <JEEB> so of course in that case even if you load that plugin you have no control over it since you're not distributing it, you're not installing it and you have no connection to it whatsoever
[19:26:38 CEST] <JEEB> this all is IANAL just to note it once again
[19:34:21 CEST] <JEEB> in the end, if it gets far enough it's not technical people who will decide it but judges based on "the letter and the *spirit* of the law" and "the spirit in which something was done". and if you have to start asking you're already playing with wanting to go against the common understanding of the spirit of what GPL is. and at that point I would already be looking for alternatives since I gain much more in
[19:34:27 CEST] <JEEB> the long term by not pissing off OSS developers
[19:39:16 CEST] <jbreeden> Sounds reasonable
[19:47:30 CEST] <JEEB> jbreeden: btw regarding your case it's usually simpler licensing wise if you just have the encoding part in your own thing part of your thing, and that you don't then need to poke additions into FFmpeg itself for it. that way the LGPL case is simple. but I'd really make sure you've tested the vendor's solution properly before skipping on x264 or x265 (both of which also have non-GPL licensing options if you
[19:47:36 CEST] <JEEB> pay)
[19:48:15 CEST] <JEEB> although you still might make something similar because the x264/5 modules in libavcodec are generally under GPL configuration only
[19:49:13 CEST] <jbreeden> Unfortunately x264/5 don't provide the codec I'm trying to use, which is Annex G of the h264 spec (Scalable Video Coding)
[19:49:18 CEST] <JEEB> oh
[19:49:24 CEST] <JEEB> interesting, someone actually trying that
[19:50:17 CEST] <DHE> oh that's cool... would make grooming very easy
[19:50:27 CEST] <JEEB> I think there might have been a patch for that at some point, but I'm not sure. could have been MVC only.
[19:50:48 CEST] <JEEB> SVC IIRC was only used by some cisco stuff if I recall correctly
[19:51:23 CEST] <jbreeden> It is really neat stuff, after seeing how well it works, it's kinda bewildering that it didn't gain more traction
[19:51:47 CEST] <JEEB> I think people generally either add adaptive rate control to their video call stuff
[19:51:58 CEST] <JEEB> as in, it adapts according to the connection quality
[19:52:18 CEST] <JEEB> or just have multiple alternative video streams of different bit rates / qualities
[19:52:23 CEST] <JEEB> (or resolutions)
[19:52:41 CEST] <jbreeden> Yeah, but all that HLS/DASH stuff requires quite a bit of technical overhead to do all the switching
[19:52:42 CEST] <JEEB> I think part of the problem also was that not a whole lot of things supported decoding it, either
[19:53:01 CEST] <JEEB> well yea, they're not really a streaming protocol :P
[19:53:13 CEST] <JEEB> it's all due to a) wanting to use the existing HTTP stack b) browsers
[19:53:13 CEST] <jbreeden> :D
[19:53:21 CEST] <Johnjay> so jbreeden has some nifty proprietary codec related to x265 and wanted to make a version of ffmpeg with it statically compiled in and distribute it?
[19:53:32 CEST] <JEEB> no
[19:53:44 CEST] <JEEB> he wanted to use it somehow and we've discussed alternatives
[19:54:11 CEST] <JEEB> I've noted what in my opinion is the simplest way if he requires LGPL FFmpeg features
[19:54:14 CEST] <jbreeden> Yeah, I'm trying to make sure I'm conforming to the licensing before digging in deep
[19:55:32 CEST] <JEEB> jbreeden: have you made sure that whatever you're planning on supporting actually supports SVC :P
[19:55:39 CEST] <JEEB> the support for it after all is not that common
[19:55:43 CEST] <jbreeden> lol yes
[19:56:04 CEST] <jbreeden> It's an end-to-end solution
[19:56:32 CEST] <JEEB> yea, it's just that on mobile etc HW decoding is much more important
[19:56:44 CEST] <JEEB> and if that hw decoder doesn't support MVC then that doesn't help :)
[20:00:28 CEST] <Johnjay> he also said ffmpeg doesn't provide some x265 codecs
[20:00:39 CEST] <Johnjay> is adding codec support a large part of updates to ffmpeg?
[20:00:44 CEST] <Johnjay> i would guess it would have to be
[20:01:33 CEST] <BtbN> "doesn't provide some x265 codecs"? Last time I checked x265 only supported exactly one codec.
[20:01:35 CEST] <JEEB> ok, let's start fixing the inaccuracies, which I decided not to go into at first because jbreeden seemed to know what he was talking about and was just using the wrong vocabulary
[20:01:41 CEST] <JEEB> s/x265 codecs/HEVC features/
[20:02:13 CEST] <JEEB> and it doesn't support them because those features are not supported in the x265 encoder library. they are not supported because they are not widely seen as anything that brings good gains.
[20:02:30 CEST] <JEEB> in this case, it was the scalable video extensions
[20:02:57 CEST] <JEEB> which I am honestly surprised someone is trying to sell because of the lack of support :P
[20:07:15 CEST] <furq> does hevc even have svc
[20:07:18 CEST] <furq> i thought that was an avc thing
[20:07:29 CEST] <JEEB> I think it has it under a slightly different name
[20:07:54 CEST] <JEEB> yes, SHVC
[20:11:19 CEST] <JEEB> anyways, I've basically over the years discussed things with multiple people from various major video sites - people who do test out proprietary solutions, but so far pretty much only EVE (and maybe Ateme?) have been anywhere near the open source alternatives. SVC wasn't utilized because of most things just not supporting it.
[20:12:35 CEST] <JEEB> but I think now Ateme is fully hardware only aiming for the broadcast market
[20:13:20 CEST] <JEEB> I have myself only tested a few over the years, although if I had more free time I do have a few lying around :P
[20:13:32 CEST] <Johnjay> JEEB: quick question, if I want to split up a file using -f segment but also add silence from anullsrc at the end, how would I do that?
[20:14:06 CEST] <JEEB> no idea how to append silence at the end of your input so that video and audio lengths match :)
[20:14:14 CEST] <JEEB> (it should be irrelevant to -f segment)
[20:14:14 CEST] <Johnjay> it's just audio
[20:14:45 CEST] <JEEB> I know how I'd do it with the API, but no idea with the cli
[20:15:06 CEST] <Johnjay> yeah i haven't learned how to combine filters yet
[20:15:15 CEST] <Johnjay> so i assumed filter_complex does it somehow
[20:15:41 CEST] <JEEB> although have you just tried to set -t longer than your actual duration?
[20:15:57 CEST] <JEEB> I wonder if there's a way to append stuff... no idea, sorry
[20:16:05 CEST] <ChocolateArmpits> there's a concat filter
[20:16:28 CEST] <JEEB> yea but this is an "append until X duration" kind of case I guess? for the last segment?
[20:16:31 CEST] <JEEB> no idea
[20:16:59 CEST] <Johnjay> i just want 10 seconds of silence at the end
[20:17:06 CEST] <Johnjay> i've been manually doing it with a file called 10secsilence.mp3
[20:17:15 CEST] <ChocolateArmpits> https://ffmpeg.org/ffmpeg-filters.html#concat
[20:17:24 CEST] <JEEB> with the API you could just open the filter chain with anullsrc and get enough samples
[20:17:26 CEST] <Johnjay> but i'm not sure how to combine segment and anullsrc filters
[20:17:43 CEST] <JEEB> segment is not a filter :)
[20:20:00 CEST] <Johnjay> sorry yes it's a muxer
[20:20:25 CEST] <JEEB> but yes, if you always need a static amount of stuff then ChocolateArmpits's thing could be useful
[20:20:33 CEST] <JEEB> by just concatenating it in all of your workchains
[20:20:39 CEST] <JEEB> that way you always get +10 seconds
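(A minimal CLI sketch of that suggestion: generate 10 seconds of silence with anullsrc, append it with the concat filter, then feed the padded audio to the segment muxer. The input name, sample rate, channel layout and segment length below are placeholder assumptions and should be adjusted to the real input.)

    # Append 10 s of silence, then split into 60-second segments.
    # Set anullsrc to match the input's channel layout and sample rate.
    ffmpeg -i input.mp3 \
      -filter_complex "anullsrc=channel_layout=stereo:sample_rate=44100,atrim=duration=10[sil];[0:a][sil]concat=n=2:v=0:a=1[padded]" \
      -map "[padded]" -f segment -segment_time 60 out%03d.mp3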
[20:32:05 CEST] <Johnjay> well i'll figure it out later
[20:38:18 CEST] <Alina-malina> can i set a download speed from internet with ffmpeg?
[20:39:13 CEST] <Alina-malina> i mean set the bitrate of downloading
[20:39:18 CEST] <Alina-malina> throttle down the downloading
[20:42:11 CEST] <JEEB> not that I know
[20:43:39 CEST] <Alina-malina> :-/
[21:18:46 CEST] <utack> and of course ffmpeg supports opus in whatever this "caf" is... amazing how fast you enabled that after iOS added support for it :D
[21:19:12 CEST] <BtbN> It's not officially documented yet
[21:19:26 CEST] <BtbN> So can be broken or wrong.
[21:19:30 CEST] <utack> i have no idea if it works, but i sent a file to someone with an iphone and they will let me know
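(For reference, the remux being tested here is just a stream copy into the CAF muxer; the file names are placeholders and this assumes a build that already carries the new, still undocumented Opus-in-CAF support.)

    # Copy an existing Opus stream into a CAF container without re-encoding.
    ffmpeg -i input.opus -c:a copy output.caf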
[21:21:15 CEST] <JEEB> the thing seems to work in QT but blatantly goes the wrong way compared to how QT does it
[21:21:33 CEST] <JEEB> "I don't know what this thing contains exactly so I'll just not write it"
[21:21:40 CEST] <utack> i tried to send it via whatsapp. it got a fancy headphone icon, but then something crashed whatsapp :D
[21:21:41 CEST] <JEEB> and it got merged even if more research was requested
[21:21:46 CEST] <utack> luckily i managed to delete the message
[21:22:01 CEST] <BtbN> It was just pushed without review in the first place.
[21:22:04 CEST] <BtbN> I think?
[21:22:21 CEST] <JEEB> oh? the discussion looked like it happened before the push
[21:22:38 CEST] <BtbN> It was a reply to cvslog iirc
[21:27:37 CEST] <JEEB> ah
[22:49:30 CEST] <agrantgreen> Can someone help me to understand the error "Failed to inject frame into filter network: Cannot allocate memory Conversion failed!"? Google results are all over the place and refer to issues with flags we don't use.
[22:49:37 CEST] <agrantgreen> Exact command incoming...
[22:50:15 CEST] <agrantgreen> https://gist.github.com/adamjgrant/3bb5b9a36c0309c3f05e0e9f7153e022
[22:50:37 CEST] <agrantgreen> Essentially we are trying to stitch together a set of images into a video.
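(The exact command is in the gist above; for reference, the usual shape of an image-sequence-to-video command is roughly the following, with the file pattern, frame rate and encoder settings as placeholder assumptions.)

    # Read numbered PNGs at 25 fps and encode them to H.264 in an MP4.
    ffmpeg -framerate 25 -i img%04d.png -c:v libx264 -pix_fmt yuv420p out.mp4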
[23:10:11 CEST] <mbr__> ffplay has the -fflags nobuffer option. Is there a way to specify the buffer size? I think I need just a tiny buffer for a near-real-time stream
[23:20:29 CEST] <AlRaquish> Hello everyone, I have written a script in bash that uses ffmpeg to create a rather complicated effect, and I am wondering if I can do it in pure ffmpeg (or as little bash as possible)
[23:20:49 CEST] <mbr__> what is the effect?
[23:21:07 CEST] <AlRaquish> So what I want is to split the video into columns of a given width and have column i delayed by i frames
[23:21:41 CEST] <AlRaquish> You can see this effect here https://youtu.be/Dw9hjeQhDMY (with rows)
[23:22:04 CEST] <AlRaquish> I already have a working script, so I can provide that as well (it is rather ugly though)
[23:22:54 CEST] <AlRaquish> Here is my script https://pastebin.com/MSGNcw63
[23:23:55 CEST] <AlRaquish> I essentially cut it into n videos with the given delay, and then stack them back
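(A rough single-command sketch of the same idea, assuming 3 columns, an input width divisible by 3, and that trimming a couple of leading frames instead of padding the tail is acceptable; the real script is the pastebin above.)

    # Split into 3 columns, advance the left column by 2 frames and the middle
    # one by 1 frame so each column to the right lags one frame behind, then
    # stack the columns back together side by side.
    ffmpeg -i in.mp4 \
      -filter_complex "[0:v]split=3[a][b][c];[a]crop=iw/3:ih:0:0,trim=start_frame=2,setpts=PTS-STARTPTS[p0];[b]crop=iw/3:ih:iw/3:0,trim=start_frame=1,setpts=PTS-STARTPTS[p1];[c]crop=iw/3:ih:2*iw/3:0,setpts=PTS-STARTPTS[p2];[p0][p1][p2]hstack=inputs=3:shortest=1[out]" \
      -map "[out]" out.mp4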
[00:00:00 CEST] --- Fri Oct 20 2017

