[Ffmpeg-devel-irc] ffmpeg.log.20190129

burek burek021 at gmail.com
Wed Jan 30 03:05:02 EET 2019


[01:20:31 CET] <retrojeff> hello I am trying to join (concat) 2 mkv files with ffmpeg
[01:20:56 CET] <retrojeff> tried ffmpeg -i "concat:input1.mkv|input2.mkv" -c copy output.mkv
[01:21:04 CET] <retrojeff> no errors but it did not join
[01:22:15 CET] <retrojeff> full output here https://pastebin.com/WSvVrjdu
[01:27:45 CET] <relaxed> retrojeff: "ffmpeg version 0.10.16" is ancient
[01:28:09 CET] <retrojeff> it's the version that's on the website for RHEL/CentOS
[01:28:32 CET] <retrojeff> from RPMFusion
[01:28:54 CET] <relaxed> that's from 2015
[01:29:00 CET] <retrojeff> yikes
[01:29:29 CET] <relaxed> I have static recent builds here: https://www.johnvansickle.com/ffmpeg/
[01:29:39 CET] <retrojeff> ya I saw that
[01:29:39 CET] <relaxed> recent static*
[01:29:44 CET] <retrojeff> trying them now
[01:32:00 CET] <retrojeff> cool got yours to work
[01:32:01 CET] <retrojeff> https://pastebin.com/cggd3LGT
[01:32:30 CET] <retrojeff> maybe not
[01:32:32 CET] <retrojeff> [concat @ 0x4fcdc40] Line 1: unknown keyword '?Eã?'
[01:32:32 CET] <retrojeff> input1.mkv: Invalid data found when processing input
[01:33:06 CET] <relaxed> pastebin the command and output
[01:33:32 CET] <furq> retrojeff: you need to use the concat demuxer for mkv
[01:33:40 CET] <furq> https://trac.ffmpeg.org/wiki/Concatenate#demuxer
[01:34:15 CET] <retrojeff> https://pastebin.com/A10K8EZQ
[01:34:34 CET] <retrojeff> thats with ffmpeg -f concat -i input1.mkv -i input2.mkv -c copy output.mkv
[01:34:50 CET] <furq> yeah that's not how the demuxer works
[01:40:25 CET] <friendofafriend> Howdy all.  I'd like to change the "service_name" in the MPEG transport stream muxer.  How do I actually pass that option on the command line?
[01:40:28 CET] <retrojeff> my files play in VLC just fine
[01:40:46 CET] <retrojeff> just wanna join them together
[01:44:53 CET] <retrojeff> if I convert these to other formats I might lose quality
[01:50:48 CET] <friendofafriend> Ah, figured out the service name is the '-metadata service_name="foo"' flag.  Can I change the publisher data?
[01:59:45 CET] <friendofafriend> Ah, found it.  Thank you, all.
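For reference, the mpegts muxer reads both fields from stream metadata; `service_provider` is presumably the "publisher data" friendofafriend found. A sketch with made-up filenames:

```shell
# Set DVB service name/provider while stream-copying a transport stream.
# in.ts / out.ts are placeholder filenames for this sketch.
if command -v ffmpeg >/dev/null && [ -f in.ts ]; then
    ffmpeg -i in.ts -c copy \
           -metadata service_name="foo" \
           -metadata service_provider="bar" out.ts
fi
```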
[04:09:00 CET] <retrojeff> still no solution to my issue?
[04:52:47 CET] <fella> retrojeff: < furq> yeah that's not how the demuxer works
[04:57:27 CET] <retrojeff> ?????
[04:58:11 CET] <retrojeff> followed https://stackoverflow.com/questions/7333232/how-to-concatenate-two-mp4-files-using-ffmpeg
[06:17:45 CET] <kurufu> Is there a filter I can apply to a video to double/triple up frames? A la, I want video to play back at the same rate but take say 24fps to 48fps.
[06:18:58 CET] <kurufu> ideally the duplicated frame would be an exact duplicate. The idea behind this is driving an adaptive sync display at a low fps results in serious overdrive artifacts.
[06:22:43 CET] <kurufu> oh this appears to be how the -r flag works.
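The filter answer to kurufu's question is the fps filter (output `-r` behaves similarly): raising the rate inserts exact duplicates without changing playback speed. A sketch, with assumed filenames:

```shell
double_fps() {
    # fps=48 on a 24 fps input emits each source frame twice; the
    # duplicates are bit-exact copies and playback speed is unchanged.
    ffmpeg -i "$1" -vf fps=48 -c:a copy "$2"
}
# double_fps in.mkv out.mkv   # in.mkv/out.mkv are made-up names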
[07:31:45 CET] <fella> retrojeff: furq told you to use the concat demuxer and gave you a link that shows how to use it!
[07:34:10 CET] <fella> ... and even your link shows how it's done properly ;)
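For the record, the concat demuxer furq linked takes a single list file, not multiple `-i` inputs. A sketch using the filenames from the discussion:

```shell
# Write the list file the concat demuxer expects, then stream-copy.
# -safe 0 permits arbitrary paths in the list.
printf "file 'input1.mkv'\nfile 'input2.mkv'\n" > list.txt
if command -v ffmpeg >/dev/null && [ -f input1.mkv ] && [ -f input2.mkv ]; then
    ffmpeg -f concat -safe 0 -i list.txt -c copy output.mkv
fi
```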
[09:20:20 CET] <retrojeff> you're funny, man
[09:25:53 CET] <pink_mist> retrojeff: not sure what you think the joke is, but you have the correct answers already; if you refuse to use those answers that's your own fault
[12:21:45 CET] <oliv3> hi there, I'm using ffmpeg to encode an mp4. Input is video frames dumped using a pipe (-i pipe:). I'm looking for a way to feed audio samples from another pipe; is it possible? Using named pipes maybe?
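Named pipes do work for this; one fifo per stream, each fed by a separate writer process. A hedged sketch (the frame geometry, rate, and s16le sample format are made-up examples):

```shell
# Create one named pipe for raw video and one for raw audio.
mkfifo video.pipe audio.pipe 2>/dev/null || true
# The command below blocks until something writes into BOTH pipes,
# so it is left commented out in this sketch:
#   ffmpeg -f rawvideo -pix_fmt rgb24 -s 1280x720 -r 25 -i video.pipe \
#          -f s16le -ar 44100 -ac 2 -i audio.pipe \
#          -c:v libx264 -c:a aac out.mp4
```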
[12:30:32 CET] <yrios> Is there any way to get the number of bytes consumed from the new 'avcodec_receive_frame' API? Like with the old 'avcodec_decode_audio4'?
[12:31:58 CET] <BtbN> It consumes packets and outputs frames. Not raw bytes.
[12:34:26 CET] <yrios> BtbN: I understand this is supposed to be transparent to the library user, but I really need to know how many bytes were consumed! Do you know a workaround?
[12:34:33 CET] <BtbN> no
[12:34:52 CET] <BtbN> You can get the size of the packet before you send it if you want to know the size
[12:37:19 CET] <yrios> The returned frame has AVFrame::pkt_size, but it is not updated correctly, there is no delta (before - after) on first decode :(
[12:38:21 CET] <BtbN> delta of what?
[12:39:54 CET] <JEEB> AVFrame is AVFrame
[12:39:57 CET] <JEEB> AVPacket is AVPacket
[12:40:25 CET] <JEEB> and if you have packets with multiple things then generally parsers separate them for you?
[12:40:39 CET] <JEEB> you might want to explain more about what exactly you're trying to do
[12:43:02 CET] <yrios> delta is d in: a = frame->pkt_size; avcodec_receive_frame(ctx, frame);  d = a - frame->pkt_size
[12:44:07 CET] <JEEB> can you actually explain the use case?
[12:44:18 CET] <JEEB> like, going from the beginning
[12:44:39 CET] <JEEB> (unfortunately I'm busy at $dayjob so I will not be able to help)
[12:44:47 CET] <BtbN> frames don't really have a packet size
[12:44:48 CET] <yrios> I don't have the moov atom, so I try to reconstruct it. For this I need to know how long each packet is
[12:45:19 CET] <BtbN> You get that infrom from the demuxer
[12:45:27 CET] <BtbN> *info
[12:45:47 CET] <JEEB> the AVPacket you're feeding to the decoder should have that info, unless you're just randomly spewing data into the decoder?
[12:45:56 CET] <JEEB> in which case you maybe should try utilizing a parser?
[12:46:06 CET] <JEEB> (lavf/lavc should have different parsers)
[12:47:16 CET] <yrios> You don't know the size of the packet, so you set it to a constant. 'avcodec_decode_audio4' worked fine for getting the actual size. BtbN: How would I use the demuxer to get this information?
[12:47:48 CET] <BtbN> It gives you packets, and they have size.
[12:49:47 CET] <yrios> function names?
[12:50:59 CET] <BtbN> You already must be getting packets somehow?
[12:51:31 CET] <JEEB> he has broken files as input, and is just feeding data in chunks to a decoder
[12:51:37 CET] <JEEB> in order to recreate the index
[12:52:41 CET] <JEEB> I don't know/remember the parser API, but it sounds like that might be a better option if the data of a single stream is already there. although still, I would not be 100% sure what lavf gives out matches his needs (although if the older stuff worked then it should, maybe)
[12:52:46 CET] <JEEB> anyways, $dayjob
[12:53:17 CET] <JEEB> yrios: if you cannot figure it out within some time, explain your use case as well as possible and seek help on the trac issue tracker.
[12:53:41 CET] <JEEB> because your explanation doesn't land 100% for everyone, as you could see with BtbN. thus explaining it out will help with the context :P
[12:55:42 CET] <BtbN> I don't think decoders even generally support that, and will just randomly throw away excess data at the end of "packets".
[12:56:02 CET] <yrios> JEEB: ok, thank you
[15:35:11 CET] <mfolivas> Need to leverage the use of multithreads on my renders
[15:35:22 CET] <mfolivas> Right now I am using something like this on a Debian box
[15:35:26 CET] <mfolivas> `time ffmpeg -i some-random-video.mp4 -vf scale=1920:1080 -crf 16 -bf 0 scaled-some-random-video.mp4`
[15:35:40 CET] <mfolivas> it takes about 6 minutes per render
[15:36:03 CET] <mfolivas> this is for a small size video (seconds)
[15:36:24 CET] <mfolivas> for videos with more than 10 minutes, it takes a very long time
[15:36:44 CET] <mfolivas> I want to see if I can increase the speed by multithreading it
[15:36:51 CET] <BtbN> It already is.
[15:36:51 CET] <mfolivas> how can I do that?
[15:37:02 CET] <BtbN> You need to turn it explicitly off.
[15:37:12 CET] <BtbN> To not have it use all the CPUs it can get
[15:38:30 CET] <mfolivas> @BtbN thanks!!
[15:38:40 CET] <mfolivas> noticed that there is a `thread` command
[15:38:47 CET] <mfolivas> that is how you can tweak the threads?
[15:39:22 CET] <BtbN> yes
[15:39:28 CET] <BtbN> The default is 0, which means auto-detect
[15:39:51 CET] <mfolivas> yeah, I got confused because in StackExchange someone said
[15:39:56 CET] <mfolivas> >it depends on codec used, ffmpeg version and your CPU core count. Sometimes it's simply one thread per core. Sometimes it's more complex like:
[15:40:13 CET] <BtbN> Some codecs and filters indeed are single threaded
[15:40:30 CET] <furq> mfolivas: that command will use x264 which is multithreaded
[15:40:48 CET] <furq> and with no additional effort
[15:40:54 CET] <BtbN> You also most likely want to use scale=-2:1080
[15:40:55 CET] <egrouse> sorry to randomly jump in - is there a chance that a single threaded filter could cause a bottleneck if others are multi/
[15:41:00 CET] <BtbN> Also, why are you turning off B-Frames?
[15:41:04 CET] <mfolivas> @furq thank you!
[15:41:10 CET] <BtbN> egrouse, yes
[15:41:14 CET] <furq> egrouse: yes but it has nothing to do with threading
[15:41:20 CET] <furq> a slow filter will bottleneck faster filters
[15:41:30 CET] <egrouse> yeah, makes sense
[15:41:38 CET] <furq> libavfilter only supports slice threading iirc so it's not like one filter will need to buffer more frames
[15:41:54 CET] <mfolivas> @ BtbN I just started on this startup and they have this bottleneck - developer is no longer with company
[15:42:22 CET] <BtbN> Throw more/faster CPUs at it
[15:42:22 CET] <mfolivas> so, I do not know why we are using b-frames?
[15:42:31 CET] <BtbN> you are turning them off
[15:42:51 CET] <mfolivas> @BtbN by using b-frames I am turning them off?
[15:42:56 CET] <BtbN> what?
[15:43:03 CET] <BtbN> "-bf 0" 0 B-Frames. So you turn them off.
[15:43:25 CET] <furq> if you just want it to run faster then use a faster preset
[15:43:26 CET] <mfolivas> oh, I see...thanks
[15:43:32 CET] <furq> -preset fast/faster/veryfast/superfast/ultrafast
[15:43:38 CET] <mfolivas> gotcha
[15:43:44 CET] <mfolivas> @furq thanks again!
[15:43:48 CET] <furq> all of those should look fine at crf 16
[15:44:03 CET] <furq> ultrafast turns a lot of stuff off though so i wouldn't use that unless absolutely necessary
[15:44:16 CET] <DHE> what resolution is the source? if you can skip the rescaling that will help significantly
[15:44:29 CET] <DHE> and personally I would try to keep it no worse than "veryfast"
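Putting the suggestions so far together as one command (filenames are the ones from mfolivas's original invocation; `-bf 0` is omitted here since BtbN questioned it):

```shell
# scale=-2:1080 derives the width from the aspect ratio and keeps it
# even (mod 2) for the encoder; -preset veryfast trades some
# compression efficiency for a large speedup over the default medium.
if command -v ffmpeg >/dev/null && [ -f some-random-video.mp4 ]; then
    ffmpeg -i some-random-video.mp4 -vf scale=-2:1080 \
           -c:v libx264 -preset veryfast -crf 16 \
           scaled-some-random-video.mp4
fi
```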
[15:45:01 CET] <mfolivas> @DHE using a 4k video and scaling to 1080p
[15:45:16 CET] <mfolivas> @DHE thanks for the tip!!
[15:45:48 CET] <DHE> ah... okay yeah the...
[15:45:49 CET] <DHE> *then
[15:46:07 CET] <BtbN> That will be slow no matter what really
[15:46:14 CET] <BtbN> scaling alone will take a bit
[15:47:00 CET] <mfolivas> ok, so, by default ffmpeg uses ALL the cores in the system (unless you explicitly define them).  To increase the throughput, the quickest way is to scale vertically (add more CPUs)...or play with the `preset` option
[15:47:21 CET] <BtbN> If it's an nvidia system you could use the special features of the cuvid decoders, which can scale in hardware.
[15:47:35 CET] <furq> mfolivas: libx264 defaults to using all cores, not ffmpeg
[15:47:48 CET] <furq> most other codecs won't do that
[15:48:08 CET] <DHE> CPU scaling works to a point. diminishing returns sets in somewhere in the 10-16 core range I think..
[15:48:38 CET] <JEEB> mfolivas: if it's HEVC you could try spawning dozens of threads as a hack :P
[15:48:43 CET] <JEEB> (the input that is)
[15:48:48 CET] <JEEB> (or just use hwdec if available)
[15:49:01 CET] <furq> yeah decoding 4k hevc in software is no fun
[15:49:07 CET] <mfolivas> @furq thanks, the code we're using is in C++ so not sure if that is using libx264
[15:49:15 CET] <furq> ffmpeg is using libx264
[15:49:40 CET] <furq> if you used e.g. -c:v libvpx-vp9 then it would not multithread as easily as that
[15:49:46 CET] <furq> or easily at all
[15:50:07 CET] <furq> also yes listen to what JEEB just said
[15:56:31 CET] <mfolivas> @JEEB thanks!! I do not know if we're using HEVC
[15:57:28 CET] <mfolivas> but at this time, I just need to get this SIGNIFICANTLY fast (like twice as fast)
[15:59:34 CET] <DHE> the ffmpeg cli tool could use a multithreading makeover, but that's quite the project by itself...
[15:59:51 CET] <mfolivas> right now for a 2 minute 4K video to 1980p, the render takes more than 3 minutes
[15:59:58 CET] <mfolivas> *1080p
[16:00:43 CET] <egrouse> i take it that investing in a 2x more powerful machine is out of the question
[16:01:17 CET] <mfolivas> @egrouse not really, we will get a more powerful machine
[16:01:35 CET] <mfolivas> but also I would like to make the video rendering as optimal as possible
[16:01:44 CET] <egrouse> i am doing some streaming stuff right now and the cpu can't even handle downscaling 1080>720 fast enough
[16:01:50 CET] <egrouse> so i had to preprocess everything to 720
[16:01:52 CET] <egrouse> zzz
[16:01:56 CET] <DHE> mfolivas: benchmark the video decoder with: ffmpeg -i $INPUTFILE -map 0:v -c:v rawvideo -f raw -y /dev/null
[16:02:19 CET] <mfolivas> standby @DHE
[16:02:53 CET] <DHE> eg: for h264 input at 1080p I'm getting 189fps (6.3x realtime)
[16:03:28 CET] <mfolivas> @DHE > Requested output format 'raw' is not a suitable output format
[16:03:36 CET] <DHE> whoops, -f null
[16:03:37 CET] <DHE> my bad
[16:03:39 CET] <mfolivas> `time ffmpeg -i some-random-video.mp4 -map 0:v -c:v rawvideo -f raw -y /dev/null`
[16:03:44 CET] <furq> just ffmpeg -i foo -f null -
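Consolidating the corrections above: the benchmark that works is the `-f null` form, which discards the output so the fps/speed ffmpeg reports reflects decode speed alone. A sketch using the filename from the discussion:

```shell
decode_bench() {
    # Decode the video stream only and throw the frames away;
    # watch the fps=/speed= fields in ffmpeg's progress line.
    ffmpeg -i "$1" -map 0:v -f null -
}
# decode_bench some-random-video.mp4
```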
[16:03:49 CET] <mfolivas> thanks guys!
[16:04:18 CET] <mfolivas> @DHE you're good!
[16:04:24 CET] <mfolivas> stand by
[16:04:32 CET] <mfolivas> real 0m22.858s user 1m6.764s sys 0m0.216s
[16:04:53 CET] <mfolivas> doing this: time ffmpeg -i some-random-video.mp4 -map 0:v -c:v rawvideo -f null -y /dev/null
[16:04:53 CET] <furq> what fps did ffmpeg report
[16:05:16 CET] <mfolivas> @furq checking
[16:05:46 CET] <DHE> you said this is a 2 minute video... so that's 5.2x realtime...
[16:06:06 CET] <DHE> which isn't too bad I think, but it's consuming ~3 cores doing that...
[16:06:09 CET] <mfolivas> give me a minute, this is NOT a 2 minute video
[16:06:13 CET] <furq> yeah that's not ideal
[16:06:15 CET] <mfolivas> this is just some random video that I am using
[16:06:30 CET] <mfolivas> but the majority of the videos that we render (on prod) are about 2 minute
[16:06:32 CET] <mfolivas> sorry for the confusion
[16:06:35 CET] <mfolivas> let me check the video
[16:06:40 CET] <mfolivas> length
[16:07:19 CET] <furq> also this is academic if you don't have a device that can hwdec hevc
[16:08:21 CET] <furq> so a recent intel cpu, recent nvidia gtx card, or amd rx400 or better
[16:08:33 CET] <mfolivas> so the video is 34 seconds and its dimensions are 3840x2160, codecs: AAC, H.264
[16:09:11 CET] <furq> oh man 4k h264
[16:09:16 CET] <furq> that's always annoying
[16:09:51 CET] <furq> i think nvdev/uvd will deal with that though
[16:09:52 CET] <mfolivas> I'm using an Intel NUC on Debian with processor:3, vendor_id : GenuineIntel, model name : Intel(R) Core(TM) i5-7260U CPU @ 2.20GHz
[16:09:53 CET] <furq> nvdec
[16:10:18 CET] <furq> that should have qsv
[16:12:33 CET] <furq> is scale_qsv still a thing
[16:12:50 CET] <furq> it's not in the docs and i don't have an intel cpu newer than 2009 to hand
[16:13:05 CET] <mfolivas> @furq regarding your question of the fps:
[16:13:07 CET] <mfolivas> frame= 1020 fps= 45 q=-0.0 Lsize=N/A time=00:00:34.03 bitrate=N/A speed=1.51x
[16:13:25 CET] <furq> yeah that's more useful than the output of time
[16:13:53 CET] <furq> anyway i don't know how to use qsv but here is a wiki page
[16:13:55 CET] <furq> https://trac.ffmpeg.org/wiki/Hardware/QuickSync
[16:14:06 CET] <furq> that should at least speed up the decoding and scaling quite a bit
[16:14:17 CET] <furq> you could also consider using qsv for encoding but it's not as efficient as x264
[16:16:05 CET] <mfolivas> will look into that
[16:16:26 CET] <furq> using qsv for the whole pipeline will be massively quicker on such a low-power cpu
[16:16:34 CET] <mfolivas> @furq so don't bother with the multithreads at this time and focus on the QuickSync
[16:16:38 CET] <furq> and there's no reason not to use it for decoding and scaling
[16:17:51 CET] <mfolivas> sorry, that was a question @furq
[16:17:52 CET] <mfolivas> :)
[16:18:08 CET] <furq> if you do use x264 then just stick with the default threads setting
[16:18:22 CET] <furq> it rarely helps you to change it, and it especially won't help if you're offloading the decoding and filtering
[16:18:52 CET] <furq> and if you use qsv for everything then there's no threads setting to change, everything is running in hardware
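A rough sketch of the all-QSV pipeline furq is describing; this is untested here and the exact flags vary by build, so treat the option names as assumptions to verify against the wiki page linked above:

```shell
qsv_transcode() {
    # Decode, scale and encode on the Intel GPU. -hwaccel_output_format
    # qsv keeps frames in GPU memory so scale_qsv can consume them;
    # QSV encoders take a bitrate target rather than -crf, and 8M is an
    # arbitrary example value.
    ffmpeg -hwaccel qsv -hwaccel_output_format qsv -c:v h264_qsv -i "$1" \
           -vf scale_qsv=w=1920:h=1080 \
           -c:v h264_qsv -b:v 8M "$2"
}
# qsv_transcode some-random-video.mp4 scaled.mp4   # needs Intel hw + a qsv-enabled build
```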
[16:36:42 CET] <mfolivas> @furq seems like this is the code that we're using based on my research
[16:36:43 CET] <mfolivas> ffmpeg -i $input -vf scale=w=$width:h=$height -profile:v baseline -level 4.1 -crf 17 -preset fast -bf 0 $output
[16:37:28 CET] <mfolivas> also, the reason why we turn off b-frames is because our cameras (we have a proprietary camera) do not produce b-frames
[16:38:54 CET] <pixelou> Hi, I would like to dump the data from a video frame as a binary array of RGB values with the C API. How do I make sure that all pixels are contiguous in one big array of size w*h*3?
[16:56:27 CET] <mfolivas> @furq so we are using x264, which I guess means we're using *all* cores.  One of my developers said that it may not actually correspond to the number of cores on the machine. He said: I'd try explicitly setting the core count to the number of hyperthreads
[17:00:18 CET] <Mavrik> pixelou: video frames will probably be in YUV420 format
[17:00:20 CET] <mfolivas> Also, the bottleneck we have is with the video *encoding* time, so I don't think QuickSync can help me
[17:00:23 CET] <Mavrik> so you'll need to do a pixel format conversion to RGB
[17:00:31 CET] <Mavrik> non-planar I guess
[17:01:50 CET] <pixelou> Mavrik: I was planning to use sws_scale for the pixel format conversion.
[17:02:00 CET] <Mavrik> Yp.
[17:10:27 CET] <pixelou> For the data packing, I have found this: https://stackoverflow.com/a/35837289 I'm not sure whether this is an idiomatic way of doing things.
[17:28:21 CET] <mfolivas> @DHE seems that we are using x264 and the issue is with the encoding
[17:29:11 CET] <Mavrik> pixelou: you shouldn't really have to do that if you set input and output pixel formats correctly
[17:29:27 CET] <Mavrik> I'm not sure if RGB24 output is planar or not
[17:30:00 CET] <mfolivas> we're also turning off the b-frames because our cameras don't support them
[17:30:08 CET] <mfolivas> we
[17:30:19 CET] <mfolivas> we're also using a "fast" preset
[17:30:53 CET] <mfolivas> since we're using the x264 lib, does that mean that we're using all the cores of the machine?  No need to do any type of multithreading?
[17:34:39 CET] <egrouse> it defaults to use all cores
[18:21:03 CET] <pixelou> Mavrik: you are right, sws_scale takes the line sizes of the destination
[18:22:06 CET] <pixelou> So I can choose the minimal value instead of rounding to the closest multiple of 32.
[18:22:54 CET] <pixelou> Actually, the line sizes are set by av_image_alloc which has an align parameter.
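For comparison with the C-API route pixelou is taking, the CLI can produce the same packed layout; with the rawvideo muxer and rgb24 each frame comes out as w*h*3 contiguous bytes. Filenames here are made up:

```shell
# -f rawvideo concatenates decoded frames back to back; rgb24 is a
# packed format (3 bytes per pixel, no alignment padding between rows).
if command -v ffmpeg >/dev/null && [ -f in.mp4 ]; then
    ffmpeg -i in.mp4 -f rawvideo -pix_fmt rgb24 frames.rgb
fi
```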
[19:47:03 CET] <rememberYou> hello friends, I have an acquaintance who filmed in portrait mode, which results in black stripes on the sides. Is it possible to convert this video to landscape mode to remove these black bands on either side of the video?
[19:48:06 CET] <rememberYou> NOTE: I tried to use `ffmpeg -i foo.mp4 -filter:v "crop=w:h:x:y" output.mp4` but the black strips are still there, I'm not sure if this is the right thing to do
[19:49:36 CET] <friendofafriend> rememberYou: What is the actual command you are using for crop?
[19:51:46 CET] <rememberYou> well, I took a single frame of the video and opened it with GIMP to find the "x, y" of the rectangle and its "width" and "height", and this is what it gave me: `ffmpeg -i test.mp4 -filter:v "crop=875:1275:882:0"`, but... ffmpeg doesn't like 1275, saying the height is too big, so I tried 1000 instead of 1275 (as the bar should be deleted anyway, just the height of the rectangle wouldn't be enough), but the black
[19:51:46 CET] <rememberYou> strips are still there
[19:52:50 CET] <rememberYou> if you want to take a look at the video, this is the file that I've got: https://wetransfer.com/downloads/9605afe5f8a2f43044ca46b34048137820190129172928/b39284abbf45b244553f8cdb3d1fd87b20190129172928/6daf01
[19:53:27 CET] <friendofafriend> Sure!
[19:54:47 CET] <rememberYou> note, the idea is to crop these black strips to be able to upload this video on YouTube (for a friend) :p
[19:56:05 CET] <friendofafriend> So, you want the black bars cropped, but not the video rotated?
[19:56:50 CET] <rememberYou> yeah, if possible I don't want the video rotated, as it's already straight, but I would like to put this video in landscape mode without these black strips, to be able to upload it to YouTube.
[19:57:13 CET] <rememberYou> could you also share how you do that? It's a good way for me to learn, as this is the first time I'm doing this kind of thing
[19:59:54 CET] <friendofafriend> rememberYou: Sure.  You will lose a lot of resolution turning it to landscape.
[20:01:27 CET] <rememberYou> friendofafriend: hmm, it's probably a bad idea then. Well, maybe it's just possible to delete these black strips?
[20:04:53 CET] <friendofafriend> rememberYou: Sure.  Is she talking about cleaning a cat box?
[20:05:13 CET] <rememberYou> friendofafriend: ahaha yeah, it's a french video :p
[20:05:50 CET] <friendofafriend> The language of cleaning up after your cat is universal.
[20:06:27 CET] <rememberYou> hahaha, you're right
[20:06:29 CET] <friendofafriend> rememberYou: ffmpeg.new -i ./pelle\ huile.mp4 -vf "crop=607:1080:416:0" -c:v libx264 -crf 0 -c:a copy ./pelle_noblackbars.mp4
[20:06:36 CET] <friendofafriend> rememberYou: ffmpeg -i ./pelle\ huile.mp4 -vf "crop=607:1080:416:0" -c:v libx264 -crf 0 -c:a copy ./pelle_noblackbars.mp4
[20:06:57 CET] <friendofafriend> Sorry, your ffmpeg would be called just "ffmpeg".  :)
[20:07:19 CET] <rememberYou> alright, that's fine. Can you explain a little bit how you found the right "crop" and why these extra options?
[20:07:50 CET] <friendofafriend> Sure, I used the crop tool in GIMP, and opened tool options.
[20:08:43 CET] <friendofafriend> The first part, crop=607:1080, is the size you want the output to be.
[20:09:10 CET] <friendofafriend> But then the second part, 416:0 is how far that output is from pixel 0,0.
[20:09:53 CET] <friendofafriend> In this case, we cut off 416 pixels of black bar on the left, and did not cut anything from the top.
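As an alternative to measuring in GIMP, ffmpeg's cropdetect filter can find the black borders itself. A sketch (the 100-frame sample size is an arbitrary choice):

```shell
find_crop() {
    # cropdetect scans for black borders and logs a suggested
    # crop=w:h:x:y string on stderr; keep the last (most settled) one.
    ffmpeg -i "$1" -vf cropdetect -frames:v 100 -f null - 2>&1 \
        | grep -o 'crop=[0-9:]*' | tail -n 1
}
# find_crop ./pelle\ huile.mp4   # filename from the discussion
```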
[20:10:51 CET] <rememberYou> ah yeah, so you took a screenshot of a single frame of the video and you opened with GIMP to know these properties
[20:11:11 CET] <rememberYou> damn, a big thank you for her, that's exactly what she needed
[20:11:36 CET] <friendofafriend> You are welcome.  If I were you, I would just use 640x480.
[20:12:10 CET] <friendofafriend> rememberYou: ffmpeg -i ./pelle\ huile.mp4 -vf "crop=640:480:399:241" -c:v libx264 -crf 0 -c:a copy ./pelle_sd.mp4
[20:12:38 CET] <friendofafriend> But I don't know if her catbox tools will show as well.
[20:12:49 CET] <rememberYou> I will send her both versions of the video :p I'm glad to go to sleep a bit smarter today
[20:13:21 CET] <friendofafriend> Regards to you and your friend.  My cat box is cleaner already.  ;)
[20:14:04 CET] <rememberYou> hahaha, if you need some cat advice, you can find her on YouTube "Cosmo Chats", she's a cat pro and talks English better than I do *laughter*
[20:15:01 CET] <friendofafriend> I will certainly visit there, thank you!
[21:01:19 CET] <ring0> does ffmpeg support h265 encoding using OMX IL? looking at https://github.com/FFmpeg/FFmpeg/blob/2e2b44baba575a33aa66796bc0a0f93070ab6c53/libavcodec/omx.c#L949 it is not present. am I missing something?
[21:01:41 CET] <BtbN> Is there even a hwencoder that does that?
[21:03:04 CET] <ring0> yes, Xilinx provides a OMX IL integration, which features OMX IL w/ h265
[21:07:41 CET] <ring0> there already is a GStreamer implementation available, but I'm somehow not a fan
[21:08:52 CET] <ring0> they do GStreamer -> OMX IL -> Control SW -> FPGA
[21:28:44 CET] <ring0> BtbN, btw I looked at the achieved bitrate using their FPGA, as you told me. e.g. on a 4k20 sample w/ h265, main, lvl 5, it produced ~45000 kbit/s
[22:17:21 CET] <johnjay> I don't suppose linux has any programs that can do text to speech on a wav file?
[22:17:37 CET] <johnjay> it's probably asking too much but meh. ffmpeg does so much
[22:18:58 CET] <Hello71> wait what
[22:19:16 CET] <Hello71> do you mean speech to text
[22:19:32 CET] <johnjay> yes
[22:19:40 CET] <johnjay> sorry i'm exhausted at the  moment
[22:20:00 CET] <johnjay> i get the impression it's so difficult no such programs exist except Dragon
[22:21:42 CET] <Hello71> there are lots of speech recognition programs
[22:21:55 CET] <Hello71> iirc there are no good foss ones
[22:22:07 CET] <Hello71> possibly you can get some free trial
[22:27:44 CET] <johnjay> yeah maybe. thanks
[22:29:18 CET] <Hello71> there are also paid cloud services
[22:29:23 CET] <Hello71> I think all three have one
[22:29:27 CET] <Hello71> or possibly only google and amazon
[22:59:39 CET] <Bray90820> How much performance boost would i get while encoding video if i upgraded to 8GB ram from 6
[23:00:07 CET] <Bray90820> Encoding mkv to mp4 to be exact
[23:01:44 CET] <Hello71> about .5 performances
[23:07:31 CET] <Bray90820> But still a boost alright
[23:13:24 CET] <DHE> maybe... unless you had 3 sticks of 2GB on a quad-channel controller. upgrading from triple channel to quad channel memory may give 10-20% performance improvement. maybe more. 1->2 channel gave me +30%
[23:36:41 CET] <Bray90820> I have a 4 gb and a 2 gb stick and I was gonna replace the 2 gb with a 4 gb
[23:37:14 CET] <Bray90820> 4+2 becomes 4+4
[23:41:45 CET] <DHE> it might make a difference for dual channel memory controllers. make sure both sticks go into blue RAM slots
[23:42:02 CET] <furq> if you get matching sticks then that could potentially make a noticeable difference
[23:42:09 CET] <furq> or matching enough that you can run dual channel at 1T
[23:48:49 CET] <Bray90820> I don't have matching sticks
[23:49:11 CET] <Bray90820> I found a 4gb stick in my house and thought I would put it in
[23:52:24 CET] <furq> just having more ram won't speed anything up unless you're actually running out of it
[00:00:00 CET] --- Wed Jan 30 2019