[Ffmpeg-devel-irc] ffmpeg.log.20170212

burek burek021 at gmail.com
Mon Feb 13 03:05:01 EET 2017


[02:01:32 CET] <truexfan81> what do i need to do to get libebur128 working on ffmpeg master?
[02:01:51 CET] <truexfan81> configure doesn't recognize it as an option when doing a compile
[05:15:28 CET] <Diag> Question, when making a gif with ffmpeg, can you select the amount of colors
[07:51:28 CET] <scootley> On Windows, "-hwaccel qsv" seems to hang when I try to use it to do true in-GPU transcoding.  Without that option, I can encode and decode just fine with QSV GPU acceleration
[09:45:42 CET] <thunfisch> hi! i'm trying to sync up a rt(s)p stream with locally recorded audio. the stream comes from an axis security camera adhering to the onvif 1.0 standard, which says that a utc timestamp is transmitted via rtcp. i have not found a way to get this timestamp, or use it to sync audio and video. any ideas? is this even possible?
[09:46:30 CET] <thunfisch> see page 26: https://www.onvif.org/specs/stream/ONVIF-Streaming-Spec-v210.pdf
[13:16:30 CET] <dasj> Hello, I am trying to download a brightcove video with the following: ffmpeg -i master.m3u8 -c copy master.ts but I get [https @ 0x563da2300d40] Protocol not on whitelist 'file,crypto'! - master.m3u8: Invalid argument
[13:16:37 CET] <dasj> what should I do?
[13:20:54 CET] <dasj> I added -protocol_whitelist "file,http,https,tcp,tls" and it seems to work
[14:33:45 CET] <nostrora> Hi, i use ffmpeg in Arch Linux to play vorbis, mp3, flac etc. I want to know if ffmpeg is packaged with libflac or if ffmpeg uses a lib which is already on my arch linux ?
[14:33:58 CET] <c_14> ffmpeg has an internal flac encoder
[14:34:12 CET] <nostrora> c_14, why ? it is not KISS :(
[14:34:14 CET] <c_14> And an internal flac decoder
[14:34:48 CET] <nostrora> i want ffmpeg to use the flac library of my system. i don't want to have 5000 flac libraries everywhere in my system ... :/ you understand what i mean ?
[14:35:28 CET] <furq> it doesn't install a separate flac library
[14:35:41 CET] <furq> it's part of libavcodec, which you'll need to install anyway if you want ffmpeg to work
[14:36:29 CET] <nostrora> furq, so ffmpeg or libavcodec, use MY flac library which is already on my system ?
[14:36:34 CET] <furq> no
[14:37:09 CET] <nostrora> why
[14:37:12 CET] <c_14> The flac encoder/decoder is part of libavcodec
[14:37:38 CET] <nostrora> c_14 why ? i have already the flac package. why ffmpeg don't use it ?
[14:38:11 CET] <furq> why would it
[14:38:25 CET] <nostrora> i think my english is too bad for speak together :p
[14:38:48 CET] <furq> if ffmpeg only used external libraries for encoders and decoders then you'd end up with 5000 codec libraries installed
[14:38:57 CET] <c_14> Because ffmpeg provides a "base" set of codecs/formats so that you can install libav* and have everything you need without having to install lots of packages for each individual codec/format
[14:39:16 CET] <nostrora> c_14, yes. this is a bad thing
[14:39:19 CET] <nostrora> (i think)
[14:39:51 CET] <nostrora> Because if another tool uses my libflac already installed on my system, then i have libflac 2 times somewhere, and this is bad
[14:39:58 CET] <furq> it's not libflac
[14:40:16 CET] <nostrora> furq, ffmpeg has its own implementation of the flac codec ?
[14:40:19 CET] <furq> yes
[14:40:28 CET] <nostrora> furq, why has ffmpeg rewritten this ?
[14:40:36 CET] <JEEB> because someone cared enough
[14:40:48 CET] <furq> so it doesn't have to depend on libflac
[14:41:11 CET] <nostrora> ok i understand
[14:41:44 CET] <JEEB> sometimes it goes bad, like with the vorbis encoder, which you cannot use unless you enable experimental features, but thankfully with flac as long as you're lossless you're fine
[14:41:46 CET] <nostrora> what is the name of ffmpeg flac integration project ?
[14:41:54 CET] <furq> ffmpeg
[14:42:19 CET] <nostrora> ok
[14:42:22 CET] <furq> or i guess libavcodec to be more precise
[14:42:37 CET] <nostrora> I have trouble understanding why libflac is not good enough for ffmpeg
[14:43:05 CET] <JEEB> well if you have something just as good internally there's not much reason to have an external library dependency
[14:43:25 CET] <furq> it's preferable to have internal implementations with no external dependencies
[14:43:46 CET] <furq> flac is a simple enough codec that it's easy to have that
[14:44:37 CET] <nostrora> furq, but with this logic there are potentially infinite versions of libraries, and potentially infinite versions installed on my system. i don't like it
[14:44:47 CET] <nostrora> it is not KISS in my mind. i'm not right ?
[14:44:51 CET] <furq> by potentially infinite do you mean two
[14:44:59 CET] <JEEB> and they're not the same library
[14:45:10 CET] <JEEB> you have one copy of libflac and one copy of libavcodec
[14:45:17 CET] <furq> i'm not saying this approach is good in general
[14:45:21 CET] <nostrora> JEEB, yes you're right on this point
[14:45:35 CET] <furq> but for something whose goal is to decode hundreds of codecs out of the box, it makes more sense
[14:45:48 CET] <furq> and anything else which wants to do that can just use libavcodec
[14:46:22 CET] <nostrora> That means there is 0 libflac code in libavcodec. this is a rewrite from scratch ?
[14:46:33 CET] <JEEB> yes
[14:46:38 CET] <faLUCE> hello. Given a 3 planes AVFrame *frame, I can access the planes' data through frame->data[0,1,2];  but how can I obtain the size of each plane?
[14:47:00 CET] <furq> it's not a direct port, no
[14:47:24 CET] <nostrora> JEEB, and how do you know the difference between the latest libflac version and libavcodec's flac ? maybe one is faster ? maybe one is older
[14:47:45 CET] <nostrora> it's creepy if libavcodec is based on an old flac recommendation and doesn't support all flac 1.3 features
[14:48:12 CET] <JEEB> I'm not sure FLAC has added more features
[14:48:28 CET] <JEEB> libflac is mostly getting bug fixes AFAIK
[14:48:33 CET] <furq> there's some minor new features in 1.3 but that's nearly four years old at this point
[14:48:38 CET] <furq> and point releases are purely bugfixes
[14:48:42 CET] <nostrora> JEEB, this is not the question, it is possible. so we have to care about it i think
[14:48:59 CET] <nostrora> and what about opus ?
[14:49:03 CET] <JEEB> well if you care then you go compare. I most certainly do not care as long as the thing is not broken.
[14:49:04 CET] <furq> actually all the new 1.3 features are the flac binary
[14:49:11 CET] <nostrora> libavcodec has its own implementation of the opus codec ?
[14:49:19 CET] <furq> it has a decoder
[14:49:25 CET] <furq> you still need libopus for encoding
[14:49:26 CET] <JEEB> nostrora: atomnuker made his own encoder recently, it's on the mailing list in review https://patchwork.ffmpeg.org/patch/2497/
[14:49:33 CET] <furq> but yeah that's being worked on
[14:49:47 CET] <JEEB> "The aim of the encoder is to prove how far the format can go by writing the craziest encoder for it."
[14:49:54 CET] <nostrora> recently we have Opus 1.2-alpha
[14:50:03 CET] <nostrora> so again, we have 2 implementation of opus.
[14:50:07 CET] <nostrora> i don't like this idea :(
[14:50:19 CET] <JEEB> actually having two separate implementations is often a requirement to become a standard
[14:50:26 CET] <JEEB> which opus became :P
[14:50:31 CET] <furq> it makes no difference at all for decoding, the bitstream format was frozen long ago
[14:50:42 CET] <furq> and you can use libopus for encoding with ffmpeg
[14:50:47 CET] <JEEB> and as long as those two implementations are not broken you really shouldn't care :P
[14:50:52 CET] <nostrora> ok
[14:51:07 CET] <JEEB> the only negative cases are like the vorbis encoder in libavcodec
[14:51:16 CET] <JEEB> which thankfully is locked behind the "experimental features" switch
[14:51:19 CET] <furq> as far as encoding goes it's no different than having multiple aac encoders
[14:51:21 CET] <JEEB> because it's a piece of shit
[14:51:23 CET] <furq> at least six or seven that i know of
[14:51:33 CET] <nostrora> ok
[14:51:37 CET] <nostrora> Thanks for all answers guys :)
[14:52:16 CET] <nostrora> I do not blame ffmpeg ;) I wanted to be sure I understood the idea. Just a little sad to lose the KISS principle
[14:52:42 CET] <furq> like i said, if you had external dependencies for every decoder then it would make everything much less simple
[14:52:48 CET] <JEEB> would you say the same if, say, libflac was shit
[14:53:00 CET] <JEEB> and would still be maintained of course, but still shit
[14:53:02 CET] <furq> and yeah it's not a good philosophy because the external lib might be awful
[14:53:02 CET] <faLUCE> sorry, I had connection problems:
[14:53:08 CET] <furq> like a lot of the old external aac encoders
[14:53:37 CET] <furq> ffmpeg's aac encoder is probably the best gpl-compatible one in existence
[14:53:50 CET] <JEEB> anyways, when what libavcodec provides is Good Enough, I think the result is good
[14:53:56 CET] <JEEB> also if you remember libdcadec
[14:54:02 CET] <faLUCE> hello. Given a 3 planes AVFrame *frame, I can access the planes' data through frame->data[0,1,2];  but how can I obtain the size of each plane?
[14:54:02 CET] <faLUCE> should I do height*linesize ?
[14:54:09 CET] <furq> yeah that got folded into ffmpeg didn't it
[14:54:18 CET] <JEEB> that guy moved on to maintain the dca decoder in libavcodec, which pretty much became what libdcadec was
[14:54:43 CET] <JEEB> so sometimes things move into libavcodec from being separate libraries
[14:55:30 CET] <nostrora> so i understand this does not break the KISS principle, because it is not the same implementation :)
[16:20:35 CET] <faLUCE> when decoding from planar to packed, how the input AVPacket  pkt must be filled?  the AVPacket struct has only one pointer to data, and not one pointer for each plane...
[16:22:47 CET] <DHE> you mean AVFrame?
[16:23:00 CET] <DHE> AVPacket is for data going directly to disk, or at least via a container
[16:25:01 CET] <faLUCE> DHE: int avcodec_decode_video2() wants an AVPacket as input, not an AVFrame
[16:25:19 CET] <DHE> right. it'll be the encoded data out of your disk or network stream or whatever
[16:25:37 CET] <DHE> maybe you want to use an swscalar to convert between formats instead?
[16:27:46 CET] <faLUCE> DHE: I'll explain better. I grab frames from a V4L device. For packed frames, I simply put the captured content into an AVPacket and decode it with avcodec_decode_video2(). But what if my device returns 3 plane arrays? How can I compose the AVPacket for avcodec_decode_video2() ?
[16:28:16 CET] <DHE> what codec are you receiving?
[16:28:23 CET] <faLUCE> YUV420P
[16:29:16 CET] <DHE> then you're probably better off writing directly into an AVFrame with AVFrame.format=AV_PIX_FMT_YUV420P;
[16:29:51 CET] <faLUCE> DHE: I see, but in this case which decode function do I have to use?
[16:30:46 CET] <DHE> none. it's already been decoded into an AVFrame equivalent by the time you receive it
[16:30:49 CET] <BtbN> none, because there'S nothing to decode.
[16:31:05 CET] <BtbN> Unless your device gives you mjpeg or something.
[16:31:09 CET] <DHE> avcodec_decode_video2 is intended to convert from, say, H264 into YUV420p
[16:31:57 CET] <faLUCE> DHE: BtbN. I was confused because it decodes formats too
[16:32:04 CET] <BtbN> it?
[16:32:12 CET] <faLUCE> avcodec_decode_video2()
[16:32:31 CET] <BtbN> it decodes video, exactly.
[16:32:36 CET] <DHE> no, it converts AVPackets (raw codec data) into AVFrames (raw pixels in a given format/packing)
[16:32:40 CET] <BtbN> But if you get a raw frame from v4l, what do you want to decode?
[16:33:47 CET] <faLUCE> Well, the question is a bit more complex. Unfortunately, in an example for capturing data from a v4l device, the avcodec_decode_video2() was used
[16:34:12 CET] <faLUCE> it was used only as a wrapper
[16:34:36 CET] <DHE> some devices do return codecs like mjpeg, in which case the AVPacket contains JPEG data and it will return an AVFrame containing the image in YUV420p (or other format)
[16:35:39 CET] <faLUCE> DHE: yes, I see, but in this case it was used as a decoder for MJPEG and wrapper for rawdata
[16:35:49 CET] <faLUCE> and that confused me
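As an aside on what the packed-to-planar step actually involves when the capture format is packed: below is a naive plain-C sketch of converting YUYV422 (a common V4L pixel format) to YUV420P. The function name is invented for illustration; in real code you would let libswscale (sws_scale) do this, since it handles strides and proper chroma filtering, whereas this version simply drops the chroma of odd rows.

```c
#include <stdint.h>

/* Naive packed->planar conversion: YUYV422 stores pixel pairs as
 * Y0 U Y1 V (4 bytes per 2 pixels).  For 4:2:0 output we keep the
 * shared chroma of even rows only (no filtering -- sws_scale does
 * this properly).  width and height must be even. */
static void yuyv422_to_yuv420p(const uint8_t *src, int width, int height,
                               uint8_t *y, uint8_t *u, uint8_t *v)
{
    for (int row = 0; row < height; row++) {
        const uint8_t *line = src + row * width * 2;
        for (int col = 0; col < width; col += 2) {
            y[row * width + col]     = line[col * 2];      /* Y0 */
            y[row * width + col + 1] = line[col * 2 + 2];  /* Y1 */
            if (row % 2 == 0) {
                int ci = (row / 2) * (width / 2) + col / 2;
                u[ci] = line[col * 2 + 1];                 /* U  */
                v[ci] = line[col * 2 + 3];                 /* V  */
            }
        }
    }
}
```

With raw YUV420P from the device there is nothing to decode, as said above: you write the three planes straight into an AVFrame with format=AV_PIX_FMT_YUV420P.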
[16:45:41 CET] <faLUCE> Now: after resampling from YUYV422 to YUV420P I obtain an AVFrame with AVFrame.format=AV_PIX_FMT_YUV420P. I have to wrap it into a custom container. I know how to get the planes (avframe->data[0,1,2]) but I don't understand how to get the SIZE of each plane. It should be avframe->linesize[i]*height, but I see that avframe->data[i] contains more elements than that. And there's not a function that gives me the size of the arrays
[16:47:07 CET] <faLUCE> is this "extra data" created by ffmpeg for optimizations?
[16:49:59 CET] <faLUCE> the API says "Some decoders access areas outside 0,0 - width,height, please see avcodec_align_dimensions2(). Some filters and swscale can read up to 16 bytes beyond the planes, if these filters are to be used, then 16 extra bytes must be allocated" .... so, what if I fill a YUV420P frame with no extra-data ?
[16:51:46 CET] <BtbN> the number of planes is defined by the pix_fmt
[16:52:01 CET] <BtbN> The data array is just as large as required for the largest one
[16:52:17 CET] <BtbN> The size of each plane also depends on the pixel format
[16:52:41 CET] <BtbN> With YUV 420 the U/V planes can be half the size
[16:53:58 CET] <faLUCE> BtbN: I know that, but this doesn't explain why avframe->data[i] has more elements than avframe->linesize[i]*height
[16:54:06 CET] <BtbN> what?
[16:55:11 CET] <DHE> how do you tell? stepping off the array by 1 byte won't necessarily cause a crash
[16:55:29 CET] <faLUCE> DHE: but they are not zeroes
[16:56:00 CET] <faLUCE> I don't know if I'm stepping off, because the API says:"Some decoders access areas outside 0,0 - width,height"
[16:56:50 CET] <BtbN> yes, because they write 16 or 32 byte blocks, hence the padding and linesizes
[16:56:56 CET] <DHE> that's because sometimes CPU instructions will touch more than exactly the number of bytes you're thinking of. 4, 8, 16, maybe more bytes are often accessed in a single instruction. all must be valid or the process segfaults
[16:57:27 CET] <BtbN> Also, some codecs require the dimensions to be multiple of 16, like h264, where 1080p is actually 1088p
[16:57:50 CET] <faLUCE> DHE: then, if I have to fill my custom YUV420P frame, how can I insert that extra data?
[16:58:03 CET] <BtbN> you ignore the "extra data"
[16:58:14 CET] <DHE> just allocate extra rows and extra wide rows to allow for the information.
[16:58:30 CET] <BtbN> There is no data in those regions
[16:58:41 CET] <BtbN> It's just padding
[16:58:45 CET] <BtbN> no need to read or write to them
[16:58:50 CET] <DHE> avframe_get_buffer should deal with that automatically, right?
[16:59:34 CET] <faLUCE> BtbN: so, these non-zero values outside the bounds are generated by who?
[16:59:59 CET] <BtbN> Whoever used the memory first, or whatever the instructions used to fill the memory generated.
[17:00:28 CET] <DHE> linux doesn't zero out memory allocated unless documentation says explicitly that it does (eg: calloc)
[17:01:22 CET] <faLUCE> DHE: this is true for out of bounds data
[17:01:29 CET] <BtbN> I'm pretty sure "fresh" memory is also everything but zeroed.
[17:01:35 CET] <faLUCE> DHE: sorry, now I see
[17:01:45 CET] <faLUCE> it's mallocd data, not callocd data
[17:02:32 CET] <DHE> for performance reasons I see why you wouldn't normally zero out huge 1080p frames by default
[17:03:11 CET] <faLUCE> Yes, I see. So: for a YUV420P frame, how much memory I have to allocate for plane0 ?
[17:04:15 CET] <BtbN> Why do you want to allocate it yourself?
[17:04:41 CET] <faLUCE> BtbN: I don't have to allocate it myself, but I have to wrap it into another container, and I must know the size
[17:05:00 CET] <BtbN> you already mentioned the formula earlier
[17:05:19 CET] <BtbN> linesize * (height << chroma_subsample)
[17:05:25 CET] <BtbN> *>>
[17:06:43 CET] <faLUCE> BtbN: but in this case, this size is not  the ffmpeg's size
[17:07:12 CET] <BtbN> the ffmpeg size?
[17:09:14 CET] <faLUCE> BtbN: I mean, if I have to fill my container with a YUV420P image which I don't take from libav, and then I have to send that image to a ffmpeg function, the plane0 will have the  linesize * height size, and not the extra-size returned by libav
[17:09:50 CET] <BtbN> just add the required padding and you will be fine
[17:10:06 CET] <iive> is extra-size for simd alignment?
[17:10:07 CET] <faLUCE> BtbN: yes, and how  do I calculate this padding ?
[17:10:14 CET] <BtbN> you add it
[17:11:21 CET] <faLUCE> BtbN: what do you mean with "required padding" ? For YUV420P plane0 which is the required padding?
[17:11:32 CET] <BtbN> there is no format specific padding
[17:12:12 CET] <faLUCE> ok, so, plane0's size = linesize*height + padding.   Padding = ?
[17:13:32 CET] <faLUCE> iive: in fact the API says:   "see avcodec_align_dimensions2()"
[17:14:04 CET] <faLUCE> should I call this function in order to compute the new height?
[17:14:12 CET] <faLUCE> (for a given format)
[17:14:32 CET] <faLUCE> so I do: planesize = linesize*newHeight ?
[17:16:58 CET] <faLUCE> in fact the API for this function says:  "Modify width and height values so that they will result in a memory buffer that is acceptable for the codec if you also ensure that all line sizes are a multiple of the respective linesize_align[i]"
[17:18:14 CET] <iive> yeh, i was going to say that if you need simd alignment it is better to put it into linesize :D
[17:18:33 CET] <faLUCE> iive: thank you. You GOT the sense of the question.
[17:18:35 CET] <iive> some codecs need few pixels more around the edges.
[17:19:07 CET] <faLUCE> iive: anyway, this is a bit of a mess. Because I have to allocate a codeccontext when there's no encoding/decoding
[17:21:01 CET] <faLUCE> (but this is a fault of the API)
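The padding arithmetic discussed above can be made concrete in plain C. The helper names and the 32-byte alignment below are illustrative assumptions; the alignment libav actually picks is platform-dependent, and in practice you would call av_image_get_buffer_size() or av_frame_get_buffer() rather than computing this by hand.

```c
/* Plane sizes for YUV420P: each plane holds linesize * plane_height
 * bytes, where linesize is the pixel row width rounded up to the
 * allocator's alignment (32 here, an illustrative choice).  Anything
 * past width bytes in a row is padding, not picture data. */
static int align_up(int x, int a) { return (x + a - 1) / a * a; }

static void yuv420p_plane_sizes(int width, int height, int align,
                                int linesize[3], int size[3])
{
    linesize[0] = align_up(width, align);      /* Y rows, full res     */
    linesize[1] = align_up(width / 2, align);  /* U rows, half width   */
    linesize[2] = linesize[1];                 /* V rows, same as U    */
    size[0] = linesize[0] * height;            /* Y plane              */
    size[1] = linesize[1] * (height / 2);      /* U plane, half height */
    size[2] = size[1];                         /* V plane              */
}
```

So for a 1920x1080 frame with this assumed alignment, plane 0 would be 1920*1080 bytes and the chroma planes 960*540 each; any extra bytes libav allocates beyond that are alignment padding you can ignore when copying the picture out.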
[17:40:18 CET] <kerio> should i -tune film
[18:20:33 CET] <IntruderSRB> I know I'm boring with the same question but I still hope to stumble upon someone who did it recently
[18:20:45 CET] <IntruderSRB> anyone here have experience with 'cbcs' MPEG-CENC encryption scheme?
[18:29:32 CET] <faLUCE> Well, I just checked the size of the allocated planes when a new frame is allocated:   av_buffer_alloc(frame->linesize[i] * h + 16 + 16/*STRIDE_ALIGN*/ - 1);  <---- then, the "extra data" is just uninitialized memory. And this comment in the API, "Some decoders access areas outside 0,0 - width,height, please see avcodec_align_dimensions2().", is put in the wrong place and is misleading
[18:56:35 CET] <jarkko> i compiled ffmpeg 1st time yesterday and installed winff. any idea how do i get second pass done?
[18:57:33 CET] <jarkko> i changed the quality to placebo and it produces around 70mb videoclip and i couldnt see any difference to some other version i tried earlier with worse speed
[18:58:02 CET] <jarkko> durandal_1707: 3.2.1 is outdated already
[19:03:43 CET] <furq> jarkko: it's called placebo for a reason
[19:08:39 CET] <jarkko> furq: english isnt my native language what's the meaning of it
[19:27:05 CET] <durandal_1707> jarkko: that it does nothing
[19:37:36 CET] <DHE> jarkko: a placebo is offered to make you think it provides a benefit when really it does not
[19:39:17 CET] <stefan91> Hi all
[19:39:36 CET] <stefan91> Can anybody tell me something more about ffmpeg real dft?
[19:39:49 CET] <stefan91> I have issue when result of rdft is all nan or inf values
[19:39:56 CET] <stefan91> or simply zeros
[19:47:51 CET] <jajaja> i'm trying to merge subtitles.srt into a video.mp4 file but the subtitle is always a couple seconds late?
[19:48:58 CET] <jajaja> any fix?
[19:49:50 CET] <jajaja> i use following command to merge the subtitle: ffmpeg -i video.mp4 -i sub.srt -c copy -c:s mov_text new.mp4
[19:56:48 CET] <durandal_1707> stefan91: show the code
[19:59:27 CET] <stefan91> http://pastebin.com/nV1PHUfp
[20:07:54 CET] <faLUCE> DHE: but why would someone spend time implementing a placebo feature if it's useless?
[20:07:57 CET] <truexfan81> why does ffmpeg compile give so many warnings about deprecated things?
[20:08:43 CET] <furq> faLUCE: it's a preset
[20:08:51 CET] <furq> it just turns everything up as far as it'll go
[20:09:22 CET] <furq> it's incredibly slow and you usually get an immeasurably tiny benefit over veryslow
[20:09:58 CET] <furq> they could've called it "ultraslow" for consistency but i assume they want to discourage people from using it because it's a waste of time
[20:12:34 CET] <jarkko> how do you do second pass from command line?
[20:12:37 CET] <jarkko> and is it worth it
[20:13:00 CET] <jarkko> i installed winff, but are there better guis for linux?
[20:13:04 CET] <furq> -pass 1 and -pass 2
[20:13:17 CET] <furq> and it's not worth it with x264 unless you want to hit a bitrate target
[20:13:24 CET] <kerio> a global bitrate target, that is
[20:13:28 CET] <jarkko> i want to use x265
[20:13:29 CET] <kerio> ie a file size
[20:13:35 CET] <furq> x265 is much the same afaik
[20:14:03 CET] <furq> if you don't want to hit a bitrate target then use crf for both
[20:14:24 CET] <furq> and in theory for vp9 but apparently vp9's ratecontrol is so bad that it's somehow better to use 2pass in crf mode
[20:14:42 CET] <jarkko> what crf you recommend?
[20:15:01 CET] <furq> i normally use 20 for sd and 21 for hd
[20:15:08 CET] <jarkko> i have a cartoon that i want to encode and i installed ffmpeg yesterday all is so new to me now
[20:15:12 CET] <furq> actually nvm that's with x264
[20:15:19 CET] <furq> those numbers don't map to x265
[20:17:10 CET] <jarkko> i think the source is aac. how do i change it to mp3/ogg 2 channels 128kb/s?
[20:17:13 CET] <jarkko> and 44.1
[20:17:21 CET] <furq> why would you want to do that
[20:17:26 CET] <jarkko> its a cartoon
[20:17:31 CET] <jarkko> it doesnt need aac sounds
[20:17:41 CET] <furq> "aac sounds"?
[20:18:42 CET] <jarkko> i thought acc is lossless...
[20:18:47 CET] <furq> no
[20:18:52 CET] <jajaja> is there anyway to adjust the delay of subtitles when burning them into video?
[20:18:52 CET] <jarkko> https://en.wikipedia.org/wiki/Advanced_Audio_Coding
[20:19:56 CET] <furq> jajaja: are they synced before you burn them in
[20:20:06 CET] <furq> if they are then that sounds like a bug
[20:20:19 CET] <furq> if they're not then any decent subtitle editor will be able to timeshift them
[20:20:47 CET] <jajaja> furq, yes
[20:21:03 CET] <kerio> are opus and aac distinguishable at 128kbps stereo?
[20:21:22 CET] <furq> probably not
[20:21:23 CET] <jajaja> furq, yes they are sync before i burn them
[20:21:53 CET] <kerio> don't burn the subtitles in ._.
[20:22:06 CET] <furq> i'm pretty sure he's muxing them in
[20:22:41 CET] <furq> 18:49:49 ( jajaja) i use following command to merge the subtitle: ffmpeg -i video.mp4 -i sub.srt -c copy -c:s mov_text new.mp4
[20:22:47 CET] <kerio> phew
[20:22:49 CET] <jajaja> i use following command to merge the subtitle: ffmpeg -i video.mp4 -i sub.srt -c copy -c:s mov_text new.mp4
[20:22:55 CET] <furq> if that introduces a delay then that sounds like a bug
[20:23:24 CET] <jajaja> is that a correct command to burn subs into video?
[20:23:42 CET] <jajaja> this is the first time i've tried that
[20:23:45 CET] <furq> "burn in" means hardcoding them into the picture
[20:23:50 CET] <jajaja> yes
[20:23:59 CET] <furq> that's not what that command does
[20:24:07 CET] <jajaja> my cellphone wont show subs in a separate file that is
[20:24:27 CET] <jajaja> at least i haven't figured out how
[20:24:34 CET] <furq> if it supports mov_text in mp4 then what you're doing is fine
[20:24:48 CET] <furq> that's not burning them in though, that's muxing them in
[20:25:19 CET] <jajaja> okay, i copied the term from the web site where i got the command
[20:25:40 CET] <kerio> yo would it make sense to use -tune stillimage when encoding the video of a terminal emulator
[20:25:49 CET] <furq> no and also use asciinema
[20:25:49 CET] <relaxed> jajaja: if you have an android phone, mxplayer supports mov_text
[20:25:50 CET] <jajaja> what is a decent sub editor?
[20:26:00 CET] <furq> i believe aegisub is popular these days
[20:26:02 CET] <jajaja> i have android
[20:26:14 CET] <jajaja> in linux editor
[20:26:22 CET] <kerio> furq: asciinema doesn't have twitch chat
[20:26:29 CET] <furq> another good reason to use it
[20:26:34 CET] <kerio> and if i wanted to stream that, i'd just use termcast
[20:27:35 CET] <furq> i know you're sitting there waiting for me to ask "why are you live streaming your terminal emulator, you brave internet pioneer"
[20:27:38 CET] <furq> so assume i just did
[20:27:51 CET] <kerio> why are you so racist against text
[20:28:02 CET] <furq> i like text. it's good
[20:28:11 CET] <kerio> nethack is a game
[20:28:13 CET] <furq> that's why i keep it as text
[20:28:15 CET] <kerio> twitch.tv is a site to stream games
[20:29:14 CET] <furq> is there not such a thing as a nethack server
[20:31:30 CET] <kerio> of course there is
[20:31:34 CET] <kerio> i run one :3
[20:31:46 CET] <kerio> but it's different
[20:32:01 CET] <furq> it sure is
[21:15:29 CET] <faLUCE> [20:09] <furq> they could've called it "ultraslow" for consistency but i assume they want to discourage people from using it because it's a waste of time <--- furq, then is it some deprecated stuff?
[21:15:45 CET] <faLUCE> furq: why not mark it as deprecated?
[21:16:03 CET] <kerio> it's not deprecated
[21:16:07 CET] <kerio> it's the highest preset
[21:16:11 CET] <kerio> it sets everything to the maximum
[21:16:16 CET] <kerio> it's just... not very useful
[21:16:40 CET] <faLUCE> I think this is highly confusing for users
[21:17:19 CET] <kerio> users need to RTFM
[21:17:54 CET] <faLUCE> kerio: one of the most difficult things in ffmpeg is RTFM :-)
[21:18:05 CET] <faLUCE> the documentation is a mess
[21:18:13 CET] <jarkko> faLUCE: agree
[21:18:27 CET] <jarkko> i wouldnt be here if it was well done
[21:18:58 CET] <faLUCE> and I'm talking about ffmpeg's documentation. If you talk about libav's doc, it's not a mess, it's one of the worst nightmares I've ever seen in my life
[21:20:11 CET] <faLUCE> jarkko: anyway, these people in the channel are very kind and helpful
[21:20:55 CET] <jarkko> if you use gui for ffmpeg what do you use?
[21:21:06 CET] <jarkko> i have winff now
[21:21:09 CET] <faLUCE> jarkko: what do you mean?
[21:21:18 CET] <faLUCE> ffplay ?
[21:21:28 CET] <kerio> if you want a better ffplay, use mpv
[21:21:37 CET] <kerio> it's like a fork of a fork of mplayer
[21:21:46 CET] <jarkko> i use vlc
[21:21:57 CET] <jarkko> but i want something better for setting encoding options
[21:22:40 CET] <faLUCE> jarkko: which option do you miss in vlc ?
[21:22:58 CET] <jarkko> faLUCE: encoding, not decoding
[21:23:12 CET] <jarkko> i dont want to use command line for encoding videoclips
[21:23:24 CET] <jarkko> but winff is a bit too simple
[21:23:31 CET] <faLUCE> jarkko: you can't have a complex gui for that
[21:23:46 CET] <faLUCE> because it's too much codec specific
[21:24:00 CET] <jarkko> handbrake has good gui
[21:24:04 CET] <jarkko> but it doesnt work with ffmpeg
[21:24:23 CET] <faLUCE> jarkko: it's virtually impossible to have a complex gui for encoding options
[21:24:36 CET] <jarkko> not really
[21:24:58 CET] <faLUCE> jarkko: I disagree. You can have it only if you restrict it to a limited number of codecs
[21:25:11 CET] <faLUCE> but if you want to make it general, it's impossible
[21:25:19 CET] <jarkko> i only need it for x265
[21:25:34 CET] <faLUCE> jarkko: in fact, I said:  You can have it only if you restrict it to a limited number of codecs
[21:25:52 CET] <faLUCE> then your question should be: is there a good gui for x264/x265 ?
[21:25:53 CET] <relaxed> ffmpeg's wikis on encoding are good
[21:26:49 CET] <faLUCE> jarkko: you can build the GUI yourself, with some GUI designer. it's very easy
[21:27:06 CET] <faLUCE> you even don't have to code
[21:27:12 CET] <jarkko> what program would you suggest
[21:27:22 CET] <faLUCE> jarkko: do you know basic coding?
[21:27:30 CET] <faLUCE> c/c++/python etc.
[22:20:55 CET] <hiihiii> hello
[22:22:17 CET] <hiihiii> how can I do resampling similar to how sony vegas does it with ffmpeg? I read about tblend filter but I'm not sure of how to set its params.
[22:23:31 CET] <hiihiii> consider the frames:  A B A B A B... here's what I need : A (A+B) B A (A+B) B A (A+B) B...
[22:24:00 CET] <hiihiii> where (A+B) is a blended frame inserted between frames A and B
[22:24:54 CET] <durandal_1707> should it have blended frame after every frame?
[22:25:37 CET] <hiihiii> not sure what you mean? but let me write it another way.
[22:25:50 CET] <hiihiii> source frames : A B...
[22:26:01 CET] <hiihiii> output frames : A X B...
[22:26:14 CET] <hiihiii> X = A blended somehow with B
[22:26:45 CET] <hiihiii> it will insert a new frame between each pair of frames
[22:26:49 CET] <durandal_1707> what about: a b c d e f?
[22:27:36 CET] <hiihiii> a x1 b x2 c x3 d x4 e x5 f...
[22:27:43 CET] <hiihiii> x1 = a+b
[22:27:49 CET] <hiihiii> x2= b+c
[22:27:59 CET] <hiihiii> x3 = c+d
[22:28:01 CET] <hiihiii> an so on
[22:29:24 CET] <hiihiii> I could do trial and error but there are too many parameters for that. If I could narrow it down a bit that would be nice
[22:30:54 CET] <cali> Hi there.
[22:32:16 CET] <cali> I've compiled ffmpeg yesterday from the snapshot. I'm unable to get audio and video recorded together in a mkv or mp4. Recording audio alone works.
[22:32:21 CET] <durandal_1707> hiihiii: tblend doesnt dupe frames
[22:33:08 CET] <durandal_1707> hiihiii: try framerate filter it does something similar
[22:33:28 CET] <cali> That's what I run: "-f decklink -i 'Intensity Pro at 13' -f pulse -i 5 -c:a aac -b:a 192k -c:v libx264 -preset veryfast -s 1280x720 -flags +ilme+ildct+pass1 -pix_fmt yuv422p10le -b:v 4000k -r 30 output.mkv"
[22:34:05 CET] <cali> But "-f pulse -i 5 -c:a aac -ac 2 test.aac", records audio fine.
[22:36:19 CET] <hiihiii> durandal_1707:  my source is 15fps. I'm thinking of making it 24fps but I don't want to duplicate frames. I thought if I could blend frames I could get it to 30fps
[22:37:53 CET] <durandal_1707> hiihiii: a a+b b is creating extra frames
[22:38:18 CET] <hiihiii> that's what I want
[22:38:58 CET] <hiihiii> in vegas without resampling if you change the fps it just duplicates frames
[22:39:21 CET] <hiihiii> [a b] would be : [a a b b]
[22:39:46 CET] <hiihiii> I don't want that [a a+b b] is what I'm looking for
[22:40:04 CET] <durandal_1707> hiihiii: then use fps filter and tblend with enable expresssion
[22:40:29 CET] <hiihiii> ok give me a sec I'll read about that
[22:41:29 CET] <durandal_1707> fps filter would increase fps by 1/3
[22:43:47 CET] <durandal_1707> tblend filter would be all_mode=average and enabled for every third frame or something like that
[22:44:05 CET] <hiihiii> you said tblend does not duplicate. then what does it do? blend then insert on top of existing frame?
[22:44:25 CET] <hiihiii> if that's the case guess i'll fps and tblend
[22:44:56 CET] <durandal_1707> hiihiii: it blends 2 frames in sequence
[22:45:19 CET] <hiihiii> where does it insert the new frame?
[22:45:32 CET] <hiihiii> after the two frames it processed?
[22:46:19 CET] <durandal_1707> it doesnt insert frame. fps does that
[22:46:57 CET] <hiihiii> ok
[23:34:20 CET] <hiihiii> durandal_1707: tblend=all_mode=average ok but then how can I enable this for every third frame like you said. framestep=2 just drops half of my fps
[23:35:13 CET] <durandal_1707> fps=25,tblend...
[23:36:32 CET] <hiihiii> umm I did use -r 30 before and after -vf
[23:37:12 CET] <durandal_1707> no use fps filter explicitly
[23:37:27 CET] <durandal_1707> drop -r 30
[23:40:54 CET] <hiihiii> yes that's much better
[23:41:08 CET] <hiihiii> fps=30,framestep=2,tblend=all_mode=average
[23:41:38 CET] <hiihiii> result in 15fps
[23:41:47 CET] <durandal_1707> why you use framestep?
[23:41:50 CET] <hiihiii> keeping it equal to src
[23:42:00 CET] <durandal_1707> wrong
[23:42:29 CET] <durandal_1707> tblend doesnt create frames
[23:42:54 CET] <durandal_1707> so you need fps filter to create ones
[23:44:18 CET] <hiihiii> my src is 15fps. so after I apply fps=30, I get each frame duplicated right?
[23:44:40 CET] <durandal_1707> yes
[23:45:04 CET] <hiihiii> so what if I fps=45,framestep=2,tblend=all_mode=average
[23:46:08 CET] <durandal_1707> same as fps=22.5,tblend....
[23:50:12 CET] <hiihiii> ok thank you. "fps=30,tblend=all_mode=average" achieved the desired result for me
[23:51:10 CET] <hiihiii> there's a blended frame between each consecutive pair of frames [A A+B B B+C C]
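The fps=30,tblend=all_mode=average chain that solved this can be sketched in plain C on frames reduced to single grey values. The function name is invented for illustration, and the exact rounding and first/last-frame handling of the real filters may differ; this only models why the output pattern is A, (A+B)/2, B, (B+C)/2, ...

```c
#include <stdint.h>
#include <stddef.h>

/* Model of fps=30 followed by tblend=all_mode=average on a 15 fps
 * input: fps duplicates every frame (a a b b c c ...), then each
 * output frame is the average of two consecutive frames of that
 * duplicated stream, yielding a, (a+b)/2, b, (b+c)/2, c, ...
 * Returns the number of output frames (2*n - 1 in this sketch). */
static size_t fps_double_then_blend(const uint8_t *in, size_t n, uint8_t *out)
{
    size_t m = 0;
    for (size_t i = 1; i < 2 * n; i++) {
        uint8_t prev = in[(i - 1) / 2];  /* duplicated stream, frame i-1 */
        uint8_t cur  = in[i / 2];        /* duplicated stream, frame i   */
        out[m++] = (uint8_t)((prev + cur + 1) / 2);
    }
    return m;
}
```

Blending two identical duplicates reproduces the original frame, which is why only the inserted in-between frames end up blended.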
[00:00:00 CET] --- Mon Feb 13 2017

