[Ffmpeg-devel-irc] ffmpeg.log.20170606

burek burek021 at gmail.com
Wed Jun 7 03:05:01 EEST 2017


[03:35:03 CEST] <Obliterous> anyone know how to record audio from a webcam with ffmpeg 3.3? -f alsa no longer works...
[03:37:55 CEST] <Obliterous> I just get: Unknown input format: 'alsa'
[03:38:37 CEST] <Obliterous> arecord -l
[03:38:41 CEST] <Obliterous> D'oh
[03:46:31 CEST] Action: Obliterous gets stabby with alsa
[05:11:47 CEST] <zippers> I'm making video clips of all the dialogue in a movie, to help me learn Chinese. I'm using a shell script with commands as below. The runtime is really slow - about 26 hours for a 1h46 movie. How can I make it faster?
[05:11:50 CEST] <zippers> ffmpeg -i "movie.mp4" -ss 01:36:29.542 -to 01:36:33.542 -async 1 -strict -2 "Clips/01-36-29.542_01-36-33.542.mp4"
[05:40:40 CEST] Action: Obliterous bangs head on wall
[05:40:53 CEST] <Obliterous> Why isn't alsa a recognized input format?!?!?!?!?
[05:41:25 CEST] <debianuser> Obliterous: You probably built ffmpeg without alsa support.
[05:41:41 CEST] <Obliterous> how can I check that?
[05:42:24 CEST] <debianuser> Check the output of configure?
[05:42:42 CEST] <debianuser> how do you build ffmpeg?
[05:43:02 CEST] <Obliterous> alsa is/was listed there
[05:43:29 CEST] <Obliterous>  Enabled indevs: alsa                      dv1394                    fbdev                     lavfi                     oss                       v4l2
[05:45:00 CEST] <debianuser> Can you go to the ffmpeg build dir, run `./ffmpeg -formats` there and check if you have "alsa" among them?
[05:45:58 CEST] <Obliterous> Yup, its there.
[05:46:07 CEST] <Obliterous> ...
[05:46:17 CEST] <Obliterous> let me check for something stupid...
[05:46:25 CEST] <debianuser> which ffmpeg? ;)
[05:46:29 CEST] <debianuser> `which ffmpeg`? ;)
[05:46:49 CEST] <Obliterous> headwall
[05:46:52 CEST] <Obliterous> headbrick
[05:46:57 CEST] <Obliterous> headcar
[05:47:02 CEST] <Obliterous> headrock
[05:47:35 CEST] <Obliterous> 'updatedb && locate ffmpeg' returns 3 answers...
[05:47:38 CEST] Action: Obliterous sighs
[05:47:57 CEST] <Obliterous> *nukepave
[05:48:56 CEST] <Obliterous> fscking deviated install locations, and all in the path.
[05:50:09 CEST] <Obliterous> fixt. now to see if 'make install' gives me alsa support
[05:51:51 CEST] Action: debianuser doesn't like `make install`... It's usually better to build a package. But in the case of ffmpeg I never install it at all, I just do `cp ./ffmpeg ~/bin/`. :)
[05:52:07 CEST] <Obliterous> yippityboo!
[05:52:24 CEST] <Obliterous> now I get a completely different error~!
[05:53:05 CEST] <Obliterous> THIS one I know how to deal with. must turn on streaming server.
[05:56:02 CEST] <Obliterous> yay. now to check the rest of them.
[05:56:02 CEST] <zippers> Can I make many clips at one time, instead of seeking for each one? Seeking is slow.
[05:59:38 CEST] <Obliterous> zippers: this appears to have notes on faster seeking: https://trac.ffmpeg.org/wiki/Seeking#Cuttingsmallsections
[06:03:50 CEST] <Obliterous> debianuser: thanks for helping me cure my case of the stupids
[06:04:21 CEST] <debianuser> Obliterous: you're welcome! I wish all problems were that easy to solve :)
[06:05:29 CEST] <Obliterous> Hah. mine have just started. I've got to get multiple webcam streams from multiple machines all visible on a public-facing web page...
[06:05:54 CEST] <Obliterous> the wife wants to watch the squirrels on our back porch. from work. :-s
[06:07:57 CEST] <zippers> Obliterous: My previous command is ffmpeg -i "movie.mp4" -ss 01:36:29.542 -to 01:36:33.542 -async 1 -strict -2 "Clips/01-36-29.542_01-36-33.542.mp4"
[06:08:46 CEST] <zippers> Obliterous: If I change the command to the first "faster seek" example, the whole video (not just the clip) is exported. ffmpeg -ss 01:36:29.542 -i "movie.mp4" -to 01:36:33.542 -async 1 -strict -2 "Clips/01-36-29.542_01-36-33.542.mp4"
[06:09:44 CEST] <Obliterous> thats because the first example uses -to 0:0:0 as a DURATION
[06:10:28 CEST] <Obliterous> so if you only want 5 seconds, it would be $ ffmpeg -ss 01:36:29.542 -i "movie.mp4" -to 00:00:05 -async 1 -strict -2 "Clips/01-36-29.542_01-36-33.542.mp4"
[06:11:35 CEST] <zippers> I don't know how long I want. I only know the start and end times (it's coming from an srt subtitle file)
[06:12:29 CEST] <Obliterous> well, in the case of the example you posted, basic math says you only want 4 seconds....
[06:12:31 CEST] <zippers> It says "If you want to keep the original timestamps, add the -copyts option", and that makes a clip very fast, but the video and audio are blank.
[06:12:56 CEST] <zippers> Yes, basic math is fine, but I'm trying to do this for 1321 items.
[06:14:15 CEST] <Obliterous> Sounds like a script-worthy challenge.
[06:16:05 CEST] <zippers> I think that's what I need to do - calculate all the clip lengths in advance. Times are complicated though, especially with millisecond intervals. Now I know what I'll be busy doing this afternoon.
[06:16:43 CEST] <Obliterous> hopefully the fast seek method works better for you
[06:21:39 CEST] <zippers> Thanks, a quick test seems to be running much faster (40 clips/minute = probably about 33 minutes runtime, instead of 26 hours)
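
zippers' plan of computing each clip's duration up front is mechanical enough to script. A minimal C sketch, not from the log: it assumes SRT cue lines of the form HH:MM:SS,mmm --> HH:MM:SS,mmm, and reuses "movie.mp4" and the Clips/ naming from zippers' own commands (srt2clips is a made-up name). It prints one fast-seeking ffmpeg command per cue, with -ss before -i and the duration passed as -t:

    #include <stdio.h>

    int main(void)
    {
        char line[512];
        int h1, m1, s1, ms1, h2, m2, s2, ms2;

        while (fgets(line, sizeof line, stdin)) {
            /* match SRT cue timing lines: 01:36:29,542 --> 01:36:33,542 */
            if (sscanf(line, "%d:%d:%d,%d --> %d:%d:%d,%d",
                       &h1, &m1, &s1, &ms1, &h2, &m2, &s2, &ms2) != 8)
                continue;
            double start = h1 * 3600 + m1 * 60 + s1 + ms1 / 1000.0;
            double dur   = h2 * 3600 + m2 * 60 + s2 + ms2 / 1000.0 - start;
            /* fast input seek (-ss before -i) plus a computed duration (-t) */
            printf("ffmpeg -ss %.3f -i movie.mp4 -t %.3f -async 1 -strict -2 "
                   "\"Clips/%02d-%02d-%02d.%03d.mp4\"\n",
                   start, dur, h1, m1, s1, ms1);
        }
        return 0;
    }

Piped through a shell (./srt2clips < movie.srt | sh) this runs the clips one after another; the printed commands are equally easy to spread across cores.
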
[08:06:18 CEST] <Pandela> Aye
[08:06:35 CEST] <Pandela> Does ffmpeg support 360 video or spatial media playback yet?
[08:24:39 CEST] <johnjay> furq does ubuntu do the same thing as debian w.r.t. stable/testing/sid?
[08:25:07 CEST] <johnjay> I'm on this askubuntu question and it sounds like they call it stable and "development"
[09:35:45 CEST] <madprops> Hi. I captured a video of my screen using ShareX, using an ffmpeg version provided by Collision. The problem is it seems the silent part of the audio channel was chopped off. When the video is played back the audio starts immediately, ignoring the silence. Weird thing is, if I open it in a video editor it does show the two channels correctly. I tried re-encoding it and did some other offset and adding silence tricks but they didn't work.
[09:39:27 CEST] <k_sze[work]> Is there a guide to modernize ffmpeg API usage? I wrote an app a while back by following a tutorial (I don't remember which one, might have been dranger's). I now get deprecation warnings if I compile the app against FFmpeg 3.x.
[09:39:52 CEST] <k_sze[work]> e.g. warning: 'context_model' is deprecated
[09:40:55 CEST] <k_sze[work]> warning: 'avcodec_encode_video2' is deprecated
[09:40:56 CEST] <k_sze[work]> etc
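
The question goes unanswered in the log, but both warnings point at known migrations: avcodec_encode_video2() was superseded in FFmpeg 3.1 by the send/receive API, and deprecated AVCodecContext fields like 'context_model' generally moved to encoder private options (settable with av_opt_set() plus AV_OPT_SEARCH_CHILDREN). A minimal sketch of the replacement encode loop; encode_frame is a made-up name and error handling is abbreviated:

    #include <libavcodec/avcodec.h>

    /* Send one frame, then drain whatever packets the encoder has ready.
     * Pass frame = NULL once at EOF to flush the encoder. */
    static int encode_frame(AVCodecContext *enc, AVFrame *frame)
    {
        AVPacket *pkt = av_packet_alloc();
        int ret = avcodec_send_frame(enc, frame);

        while (ret >= 0) {
            ret = avcodec_receive_packet(enc, pkt);
            if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
                ret = 0;        /* needs more input, or fully drained */
                break;
            }
            if (ret < 0)
                break;          /* genuine encoding error */
            /* ... hand pkt to the muxer, e.g. av_interleaved_write_frame ... */
            av_packet_unref(pkt);
        }
        av_packet_free(&pkt);
        return ret;
    }
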
[11:23:22 CEST] <guarani> Hi, I have a question about legally using ffmpeg. I want to make a website for video processing, like timelapse. Is there anything else I have to do except the License Compliance Checklist in the Legal tab on the ffmpeg website?
[11:25:05 CEST] <furq> if it's a website and you're using ffmpeg on the backend then you're not distributing ffmpeg
[11:25:17 CEST] <furq> so you don't even really have to pay attention to that
[11:25:57 CEST] <furq> you only really need to be concerned about potential codec licensing fees
[11:27:15 CEST] <guarani> like H.264 ?
[11:28:45 CEST] <furq> yeah
[11:29:02 CEST] <guarani> you ffmpeg guys are awesome ! 1st of all - great product! 2nd - well documented and 3rd - the support and everything is just awesome!
[11:33:05 CEST] <DHE> thankfully ffmpeg isn't AGPL and I'm not aware of any components that are, so that's usually fine. you're not redistributing ffmpeg itself or any of its components
[15:05:42 CEST] <phillipp> hey
[15:06:02 CEST] <phillipp> got a special question about extracting frames from a video. does someone have a minute?
[15:06:26 CEST] <c_14> just ask your question, if someone can help you they will
[15:07:54 CEST] <phillipp> ok, basically, i want to extract every 10th frame from a video, where the files are saved as frame_<frame>.png where frame is 1, 11, 21, 31 etc and not 1, 2, 3, 4
[15:09:16 CEST] <c_14> probably easiest to just 2pass the process
[15:09:26 CEST] <c_14> generate every image then delete 9 out of every 10
[15:09:42 CEST] <c_14> don't think the image2 muxer supports setting increments
[15:10:31 CEST] <phillipp> thats what i do right now
[15:10:44 CEST] <phillipp> but it takes quite long and is also quite an overhead
[15:12:15 CEST] <c_14> write a program that accepts a stream of images on stdin and writes every 10th frame out using ffmpeg [] -f image2pipe - | program
[15:12:18 CEST] <c_14> ?
[15:12:23 CEST] <c_14> can be a python script or something
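
A sketch of what c_14 suggests, with one substitution: -f rawvideo instead of image2pipe, so each frame is a fixed byte count and no image parsing is needed. The invocation in the comment and the frame_<n>.ppm naming (1, 11, 21, ... as phillipp asked for) are illustrative:

    #include <stdio.h>
    #include <stdlib.h>

    /* usage: ffmpeg -i in.mp4 -f rawvideo -pix_fmt rgb24 - | ./every10th 1280 720 */
    int main(int argc, char **argv)
    {
        if (argc != 3)
            return 2;
        int w = atoi(argv[1]), h = atoi(argv[2]);
        size_t sz = (size_t)w * h * 3;               /* bytes per rgb24 frame */
        unsigned char *buf = malloc(sz);

        for (long n = 0; fread(buf, 1, sz, stdin) == sz; n++) {
            if (n % 10)                              /* keep every 10th frame */
                continue;
            char name[64];
            snprintf(name, sizeof name, "frame_%ld.ppm", n + 1);  /* 1, 11, 21... */
            FILE *f = fopen(name, "wb");
            fprintf(f, "P6\n%d %d\n255\n", w, h);    /* minimal PPM header */
            fwrite(buf, 1, sz, f);
            fclose(f);
        }
        free(buf);
        return 0;
    }

Raw RGB over a pipe is bulky, but the video is decoded exactly once and nothing is re-sought.
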
[15:14:27 CEST] <phillipp> hmm, not sure about it. i would like to have ffmpeg handle the image generation from the video
[15:14:39 CEST] <furq> -i foo.mp4 -vf select='not(mod(n\,10))' out%04d.png
[15:14:43 CEST] <furq> then rename the images afterwards
[15:14:50 CEST] <c_14> that works too I guess
[15:15:08 CEST] <phillipp> my other solution, which i tried first, was to spawn an ffmpeg command for each frame
[15:15:16 CEST] <BtbN> Or just add a 0 after the %04d
[15:15:20 CEST] <furq> yeah
[15:15:20 CEST] <phillipp> similar to your one there
[15:15:28 CEST] <furq> except that'd be out by 10
[15:15:42 CEST] <phillipp> it works with a small video but takes even longer for normal videos (tested with 5 min vid)
[15:17:29 CEST] <furq> that's not really similar to what i suggested
[15:18:09 CEST] <phillipp> yeah just realized, sorry
[15:19:01 CEST] <furq> -i foo.mp4 -vf select='not(mod(n\,10))' -start_number 0 out%04d0.png
[15:19:03 CEST] <furq> should work
[15:19:23 CEST] <phillipp> let me try and see if its faster
[15:22:03 CEST] <phillipp> hmm, no, it doesnt process faster
[15:22:32 CEST] <furq> you're not going to get much faster than that
[15:24:15 CEST] <phillipp> hmm, thats bad
[15:24:49 CEST] <phillipp> guess i have to handle each frame on its own and put my worker on a server with many cores to handle it concurrently
[15:25:48 CEST] <furq> that makes even less sense
[15:26:03 CEST] <furq> you have to decode multiple frames to get the frame you want
[15:26:07 CEST] <phillipp> why? that way i only process 920 frames instead of 9200
[15:26:15 CEST] <furq> you still process every frame
[15:26:27 CEST] <phillipp> yeah but i can do it concurrent
[15:26:49 CEST] <furq> yeah but you'll end up decoding the same frame multiple times
[15:26:58 CEST] <furq> up to 25 times if this is x264 with normal settings
[15:28:11 CEST] <furq> if it's h264 or some other codec with a multithreaded decoder you can try adding -threads n before -i
[15:28:12 CEST] <phillipp> i think i have to test it on 8 or 16 cores
[15:28:55 CEST] <furq> it would be faster to do that if you split by gop
[15:28:59 CEST] <furq> which i guess is the only way to do it anyway
[15:29:28 CEST] <phillipp> split by gop?
[15:29:38 CEST] <furq> https://en.wikipedia.org/wiki/Group_of_pictures
[15:32:03 CEST] <phillipp> how can i make use of it with ffmpeg?
[15:32:11 CEST] <furq> !muxer segment
[15:32:11 CEST] <nfobot> furq: http://ffmpeg.org/ffmpeg-formats.html#segment_002c-stream_005fsegment_002c-ssegment
[15:33:25 CEST] <furq> the problem there is if you don't have fixed-length gops
[15:33:56 CEST] <furq> which a lot of modern encoders don't by default
[15:34:37 CEST] <phillipp> the first problem is that i am completely confused now, haha xD
[16:28:55 CEST] <Nacht> Anyone know why, when I use -t 00:00:02.01, and I see a nice output duration of "time=00:00:02.00", I get a totally different time when I use ffprobe to check the duration of that same file? "Duration: 00:00:02.20, start: 1.480000"
[16:37:53 CEST] <zerodefect> Attempting to understand how to programmatically configure multithreading for an individual filter (C-API).  It looks like I have to set the correct 'thread_type' flag in both the AVFilterGraph and AVFilterContext.  The confusion is that there are two similar flags I've come across: AVFILTER_THREAD_SLICE and AVFILTER_FLAG_SLICE_THREADS. Which one do I set where?
[16:51:45 CEST] <c_14> afaik FLAG_SLICE_THREADS is an internal flag stating that the filter supports slice threads
[16:52:00 CEST] <c_14> >Add support for slice multithreading to lavfi. Filters supporting threading are marked with AVFILTER_FLAG_SLICE_THREADS.
[16:52:02 CEST] <c_14> yeah
[16:52:41 CEST] <c_14> so you'll want to set THREAD_SLICE and maybe check SLICE_THREADS (to see if the filter supports threading)
[16:53:43 CEST] <zerodefect> Ok cool. Thanks c_14.  I saw that in 'avfilter.c' : line 933.  I wasn't checking if filter supported threading though. Let me try that.
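
Putting c_14's answer into a minimal sketch: thread_type requests slice threading on the graph and/or an individual filter context, while AVFILTER_FLAG_SLICE_THREADS is only read, to check that the filter supports it. The "scale" filter and its arguments are just an example; error checks omitted:

    #include <libavfilter/avfilter.h>

    static AVFilterGraph *make_threaded_graph(void)
    {
        AVFilterGraph *graph = avfilter_graph_alloc();
        graph->thread_type = AVFILTER_THREAD_SLICE;  /* permit slice threads */
        graph->nb_threads  = 0;                      /* 0 = auto-detect */

        const AVFilter *f = avfilter_get_by_name("scale");
        if (f && (f->flags & AVFILTER_FLAG_SLICE_THREADS)) {
            /* this filter can be sliced; opt its context in too */
            AVFilterContext *ctx = NULL;
            avfilter_graph_create_filter(&ctx, f, "scale0", "1280:720",
                                         NULL, graph);
            ctx->thread_type = AVFILTER_THREAD_SLICE;
        }
        return graph;
    }
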
[17:08:24 CEST] <zerodefect> c_14: I'm using the dev packages of ffmpeg that are available through Ubuntu Zesty's pkg manager. Would there need to be special compiler flags for enabling threading in avfilter?
[18:13:10 CEST] <c_14> zerodefect: packages probably too old
[18:13:44 CEST] <c_14> hmm, although
[18:15:08 CEST] <zerodefect> https://packages.ubuntu.com/zesty/libavfilter6
[18:16:07 CEST] <zerodefect> ...which looks to be 3.2.4 if I read it correctly :)
[18:16:13 CEST] <c_14> should probably be fine
[18:16:17 CEST] <c_14> are you getting an error or?
[18:17:12 CEST] <zerodefect> No errors, I get frames pushed in, and then get them on the other side without an issue.  If I put a breakpoint in the callback 'execute' function pointer, it's never hit.
[18:18:01 CEST] <zerodefect> Just been trying with encoding too, and I'm not having much success there either.  Maybe I ought to write a sample app?
[18:19:09 CEST] <c_14> probably a good idea
[19:33:27 CEST] <ChocolateArmpits> It seems x264 attempts to use more than 1 physical cpu only when the load is high enough.
[19:37:18 CEST] <DHE> x264 doesn't measure CPU load. it does as it's told. the most it does is count how many CPUs are available for automatic detection
[19:41:22 CEST] <kepstin> if you're encoding fairly low res video using a faster preset, you're not gonna get much advantage from the multithreading code, of course
[20:06:07 CEST] <raduser> good evening #ffmpeg, is this also a good place to ask about audio encodings? I'm at my wit's end trying to figure out the difference between two .wav files. the only measurable difference is ffmpeg reports the encoder as Lavf56.40.101 for one, and Lavf56.40.101 (libsndfile-1.0.25) for the other.
[20:15:13 CEST] <ChocolateArmpits> raduser, did you try running ffprobe -show_streams input.wav  and comparing stream information for both?
[20:15:16 CEST] <ChocolateArmpits> DHE, so how does it initialize the second processor?
[20:15:25 CEST] <DHE> ChocolateArmpits: huh?
[20:16:41 CEST] <ChocolateArmpits> I have two physical CPUs on the target system, if the load isn't high ffmpeg running x264 encode only works on one of the cpus. If the load is high the second cpu is engaged in parallel
[20:16:57 CEST] <ChocolateArmpits> I'm using -threads 40 too
[20:17:45 CEST] <DHE> could be related to other things, multi-socket motherboards have NUMA. I don't know how linux does NUMA in great detail.
[20:18:00 CEST] <ChocolateArmpits> System is running Windows 10, not Linux
[20:24:33 CEST] <raduser> ChocolateArmpits, first, great username. second, before i read your message i thought about opening up the wav files as bytes in python and trying to find the discrepancy that way. I have a better idea what the problem is, but not the fix.
[20:25:21 CEST] <raduser> the error i get when i send these wav files to the program that is supposed to accept them and turn them into waveforms, tells me that when the badly formatted wav file DOESN'T work, it's because it (and i'm paraphrasing) found "LIST" instead of "data"
[20:26:07 CEST] <raduser> looking at the first 200 bytes of each wav file shows that the one that doesn't work has the bytes spelling out "LIST" before its bytes spell out "data", which suggests that when the good wav file is made, it is excluding this metadata
[20:26:25 CEST] <raduser> but i'll run that ffprobe command to see if it's more helpful than looking at bytes in python XD
[20:28:36 CEST] <raduser> ffprobe says every single listed STREAM is identical for both files
[20:29:28 CEST] <raduser> sadly i don't have the slightest clue as to what LIST or data has to do with wav files
[20:30:11 CEST] <raduser> i'm half tempted to use python to crop out those 4 bytes spelling LIST and seeing if it'll work XD
[20:30:20 CEST] <kepstin> ChocolateArmpits: yeah, I suspect the os cpu scheduler is keeping the x264 threads all on one socket due to numa - iirc the threading code is very memory intensive.
[20:30:42 CEST] <kepstin> ChocolateArmpits: but ymmv, and you might get better results in linux
[20:31:53 CEST] <kepstin> raduser: what's the actual problem you're having? ffmpeg's report is just saying that there's some metadata differences between the files, but i'd expect both would work fine...
[20:33:06 CEST] <kepstin> raduser: it could be that whatever app you're using can't handle some of the fancier riff stuff in wav files, so consider using ffmpeg to "convert" from wav to wav to fix it
[20:33:22 CEST] <raduser> kepstin: i'm using Tensorflow to turn wav files into spectrograms. when i open up an m4a file with audacity and save it as a wav file (meeting the tensorflow program's parameters as far as bit length and such)
[20:33:30 CEST] <kepstin> raduser: also make sure you're not using wav files >=4gb, that just causes pain.
[20:34:04 CEST] <raduser> when i give the audacity-made wav file to tensorflow everything works hunky-dory. but if i do the same conversion with ffmpeg then the same wav_to_spectrogram program fails with the hint of something like "found LIST, expected data"
[20:34:17 CEST] <raduser> lol wav files are WELL under 4 gb XD
[20:34:30 CEST] <raduser> i never plan on them being more than 10-20 seconds
[20:34:54 CEST] <kepstin> raduser: right, that's just a bug in the program, it should just skip chunks it doesn't recognize :/
[20:35:16 CEST] <kepstin> that said, there might be an option in the ffmpeg wav muxer to make it not write them...
[20:35:57 CEST] <raduser> would it be easier to change that, or since i can just open it up with the python it'll be passing through anyways just cut out the section?
[20:36:24 CEST] <raduser> like if i can find that section's specifications for whatever this LIST thing is i could just cut out x bytes before and/or after that string
[20:36:58 CEST] <kepstin> it's a standard riff chunk, it's a 4-byte name followed by a 4-byte length field, iirc; trivial to skip.
[20:37:18 CEST] <kepstin> ideally you'd just fix the wav parser in the tensorflow stuff to skip unknown chunks until it finds a data chunk :/
[20:37:19 CEST] <raduser> sounds MUCH easier than trying to understand Tensorflow
[20:37:41 CEST] <raduser> absolutely that's ideal. but it's... uh... complicated
[20:37:53 CEST] <raduser> i mean, i know c++, but that thing is on a whole different level
[20:46:01 CEST] <kepstin> anyways, looks like there's no way to tell ffmpeg not to write the "LIST" chunk with tags (or even just move it to a different spot, e.g. after the "data" chunk)
[20:46:47 CEST] <kepstin> given the issues with the tensorflow wav parser, i wouldn't be surprised if it does something like ignore the data chunk length and treat anything after it as part of the audio, tho ...
[21:00:37 CEST] <raduser> you said that chunks are 4-byte names and then 4 bytes describing length, right? coz another that might get in the way after LIST is INFOISFT
[21:01:17 CEST] <raduser> nvm, duh, just google it -_-
[21:04:41 CEST] <kepstin> raduser: the INFO stuff is inside the LIST chunk, so if you skip the whole LIST all at once you never even see it
[21:06:00 CEST] <raduser> ya that's what i'm reading, i'm going to try cutting from index of LIST to the index of data and see if it's fine with it
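
The chunk walk kepstin describes is short enough to show in full. A hedged C sketch, assuming an ordinary little-endian WAV under 4 GB, with error handling abbreviated: it copies the input while dropping every top-level chunk except "fmt " and "data", then patches the top-level RIFF size field (the detail kepstin raises a little further down):

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
        if (argc != 3)
            return 2;
        FILE *in = fopen(argv[1], "rb"), *out = fopen(argv[2], "wb");
        uint8_t hdr[12], ch[8], buf[65536];

        fread(hdr, 1, 12, in);                /* "RIFF" <size> "WAVE" */
        fwrite(hdr, 1, 12, out);

        while (fread(ch, 1, 8, in) == 8) {    /* 4-byte id + 4-byte LE size */
            uint32_t len  = ch[4] | ch[5] << 8 | ch[6] << 16 | (uint32_t)ch[7] << 24;
            uint32_t todo = len + (len & 1);  /* chunk bodies are word-aligned */
            int keep = !memcmp(ch, "fmt ", 4) || !memcmp(ch, "data", 4);

            if (keep)
                fwrite(ch, 1, 8, out);
            while (todo > 0) {                /* copy or discard the body */
                size_t n = fread(buf, 1, todo > sizeof buf ? sizeof buf : todo, in);
                if (n == 0)
                    return 1;                 /* truncated file */
                if (keep)
                    fwrite(buf, 1, n, out);
                todo -= (uint32_t)n;
            }
        }

        uint32_t riff = (uint32_t)(ftell(out) - 8);  /* patch RIFF size */
        uint8_t sz[4] = { (uint8_t)riff, (uint8_t)(riff >> 8),
                          (uint8_t)(riff >> 16), (uint8_t)(riff >> 24) };
        fseek(out, 4, SEEK_SET);
        fwrite(sz, 1, 4, out);
        fclose(in);
        fclose(out);
        return 0;
    }
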
[21:06:35 CEST] <johnjay> raduser: are you submitting a patch to ffmpeg to fix something?
[21:07:01 CEST] <raduser> no
[21:07:34 CEST] <raduser> this is discovering a problem with the thing using the converted file, not with ffmpeg's conversion
[21:08:56 CEST] <kepstin> a patch to tell ffmpeg not to write the LIST chunk might be acceptable, and probably wouldn't be that hard to write.
[21:09:58 CEST] <raduser> is it really a problem, though?
[21:10:33 CEST] <kepstin> it's not usually, but some really simple software that doesn't implement a correct riff parser sometimes has trouble with it, as you've seen.
[21:11:12 CEST] <raduser> i'd have thought that would be the simple software's problem
[21:11:46 CEST] <raduser> if i made such a software that's what i'd think upon finding this issue
[21:12:36 CEST] <raduser> just in case in the future it would be using a file converted from something other than ffmpeg that might also be including the LIST chunk for the sake of complete data
[21:16:48 CEST] <kepstin> but yeah, fixing this on the tensorflow side shouldn't be that hard, probably just a matter of adding a loop in the wav reading function to skip chunks until it finds one named "data" :/ you should probably open a bug about it at least.
[21:17:51 CEST] <raduser> IT WORKS! cutting it out produced a nice waveform. finally.
[21:18:03 CEST] <raduser> and much rejoicing was had! \o/
[21:19:12 CEST] <kepstin> to make a "correct" wav file with the list chunk removed, you should also update the file length field on the top level "RIFF" chunk, but I think tensorflow just ignores that.
[21:25:09 CEST] <raduser> as soon as it's turned into a waveform the file is getting deleted anyway, so it's not that important to me
[22:13:51 CEST] <zerodefect> Trying to get mp2v mt encoding going using the C-API.  I've modified an existing FFmpeg code sample to generate a .ts file containing mp2v video, but I cannot figure out why multi-threaded encoding will not work (well, the ffmpeg library will not call back into my custom execute callback funcs).  https://pastebin.com/y1kae2fu
[22:16:06 CEST] <kepstin> zerodefect: not sure what you're trying to do? the mpeg2video encoder doesn't support multithreading...
[22:18:36 CEST] <kepstin> or, maybe i'm wrong about that :/
[22:18:39 CEST] <zerodefect> I should be clearer. When I say multithreading, I'm actually trying to get parallel encoding going so that it uses my own threadpool. I tried it within a larger testing rig, but I couldn't get it to work, so I've fallen back to the next best thing which is a sample app.
[22:19:08 CEST] Action: kepstin takes a closer look
[22:19:12 CEST] <zerodefect> You know more than me ;)
[22:20:12 CEST] <kepstin> mpeg1/2video doesn't set AV_CODEC_CAP_FRAME_THREADS but does have AV_CODEC_CAP_SLICE_THREADS, huh
[22:20:18 CEST] <zerodefect> So to remove the complexity of an external threadpool, I've instead just substituted some simpler code so that the 'execute' functions just perform the operations serially (single-threaded).
[22:20:48 CEST] <zerodefect> Yeah, was that a question or observation (the huh threw me off :) )
[22:20:53 CEST] <zerodefect> ?
[22:21:14 CEST] <kepstin> observation.
[22:21:41 CEST] <zerodefect> Ah. Yes, that is what I've observed running it in a debugger too.
[22:22:39 CEST] <zerodefect> So I should mention that I'm using the FFmpeg dev packages on an Ubuntu Zesty (17.04) system which looks to be using ffmpeg 3.2.4 - not the latest and greatest, but it seems it should still work.
[22:23:52 CEST] <kepstin> yeah, the slice threading code i don't expect has changed in a long time
[22:29:02 CEST] <rafal_rr> Hi. I want to convert subtitles from MicroDVD format to SRT format: "ffmpeg -i 1.txt 1.srt". Unfortunately output subtitles are scaled. E.g. Text at frame 800 is placed at 5min 33 sec (24 fps) instead of 5min 20 sec (25 fps, which is correct). How to tell ffmpeg to set fps=25 ?
[22:29:46 CEST] <rafal_rr> I meant "at frame 8000", of course
[22:30:10 CEST] <MoPac> Hello - I have a (rookie?) resolution question. I'm making 1080p VP9 .webm clips out of software-generated .png frames (not photos) that I can create at any arbitrary resolution. Is there a significant benefit to creating the .pngs in higher res (4k? 8k?) and then telling ffmpeg to downscale?
[22:31:19 CEST] <MoPac> I've read about the benefits of bigger-than-native frames, but most of that discussion is about photos because of cameras' limitations
[22:34:35 CEST] <kepstin> zerodefect: yeah, not sure what's up. it looks like you've put in enough to trigger multithreaded encoding, fwiw, so there should be at least one call to the execute() function - it doesn't use execute2 - on most frames, I think
[22:36:16 CEST] <zerodefect> Thanks for taking a look. Do you think I ought to try the mailing list (Libav-user)?
[22:36:22 CEST] <kepstin> MoPac: probably the only reason to generate at higher res and downscale is if your generating tool doesn't anti-alias the images
[22:36:48 CEST] <kepstin> MoPac: in which case, rendering higher res and downscaling is, well, basic supersampling anti-aliasing.
[22:37:22 CEST] <MoPac> kepstin: ah, thanks, I'll have to see whether it does or not
[22:40:52 CEST] <kepstin> rafal_rr: use the '-subfps' input option (see "ffmpeg -h demuxer=microdvd")
[22:44:55 CEST] <rafal_rr> kepstin: it works, thanks! I couldn't find it before. many thanks
[00:00:00 CEST] --- Wed Jun  7 2017

