[Ffmpeg-devel-irc] ffmpeg.log.20181228

burek burek021 at gmail.com
Sat Dec 29 03:05:02 EET 2018


[00:13:23 CET] <octet> sorry, I lost connection. did anyone reply to my inquiry about ASS to SRT subs?
[00:24:53 CET] <pink_mist> no
[00:28:20 CET] <octet> I guess I will just use "sed -i 's/size="36"/size="20"/' *.srt" for now
[00:40:03 CET] <beandog> octet, what are you trying to do?
[00:40:51 CET] <octet> trying to extract some SSA/ASS subtitles, and create SRTs with proper size as the original
[00:41:00 CET] <beandog> ah
[00:41:06 CET] <beandog> good luck. :)
[00:41:18 CET] <octet> my media center forces transcode with SSA subs, so I need SRT
[00:41:30 CET] <beandog> what's your source?
[00:41:38 CET] <beandog> can you grab it from open subtitles?
[00:41:52 CET] <octet> no, it is embedded in an mkv
[00:41:56 CET] <beandog> lame
[00:42:16 CET] <octet> yeah, my hacked solution will work fine for now though
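For reference, the extraction octet describes can be done with ffmpeg alone; a minimal sketch, assuming the file is `input.mkv` and the wanted track is the first subtitle stream:

```shell
# Pull the first subtitle stream and let ffmpeg convert ASS -> SRT.
# Note: the srt output drops most ASS styling, so font-size fixups
# like octet's sed one-liner may still be needed afterwards.
ffmpeg -i input.mkv -map 0:s:0 -c:s srt output.srt
```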
[00:48:07 CET] <fantastictoys> when i convert from flac to opus, i find that my file has been truncated. how can i fix this?
[00:50:18 CET] <relaxed> fantastictoys: pastebin.com your command and output
[00:59:21 CET] <fantastictoys> command was "ffmpeg -i 01-06\ Shake\ It\ Off.flac 01-06\ Shake\ It\ Off.opus" output is here https://ptpb.pw/kXPT
[01:22:37 CET] <relaxed> looks like a decoding error. Is the track complete?
[01:28:15 CET] <relaxed> fantastictoys: see if this decodes it correctly, flac -d 01-06\ Shake\ It\ Off.flac
[01:34:33 CET] <fantastictoys> relaxed: lol the source file is incomplete
[01:36:41 CET] <fantastictoys> relaxed: thanks for your help
[05:32:06 CET] <NGTmeaty> Hey guys. https://trac.ffmpeg.org/ticket/1291 doesn't seem to be fixed. Can anyone else verify as well?
[06:25:00 CET] Last message repeated 1 time(s).
[09:31:08 CET] <tempuser> For every 5 seconds that go by, I want just 1 frame extracted.  What's the command for that?
[09:31:36 CET] <tempuser> I'm trying ffmpeg -i video.mp4 -framerate 1/120 image-%03d.png
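The usual approach for this is the `fps` filter rather than `-framerate` (which is an input option, and 1/120 would be one frame per two minutes anyway); a sketch assuming `video.mp4`:

```shell
# fps=1/5 emits one output frame for every 5 seconds of input.
ffmpeg -i video.mp4 -vf fps=1/5 image-%03d.png
```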
[11:04:42 CET] <pagios>  hi all, i did a modprobe v4l2loopback and i got /dev/video0 how can i get /dev/video1 , 2 ,3 ,4 ?
[11:22:29 CET] <relaxed> modprobe v4l2loopback devices=5
[11:22:55 CET] <relaxed> is this for webrtc?
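Expanding on relaxed's answer, v4l2loopback takes the device count (and optionally explicit node numbers) as module parameters; a sketch, assuming the module can be unloaded and reloaded:

```shell
# Remove the single-device instance, then ask for five loopback nodes.
sudo modprobe -r v4l2loopback
sudo modprobe v4l2loopback devices=5      # creates /dev/video0 .. /dev/video4
# Or request specific node numbers:
# sudo modprobe v4l2loopback video_nr=1,2,3,4
```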
[11:33:53 CET] <pron> so, i got video like this 720x576 [SAR 64:45 DAR 16:9] and if i add a png overlay it gets stretched, i suspect it is related to the sar, but i am not entirely sure how to fix it properly
[11:34:00 CET] <pron> any hints/tips?
[11:36:16 CET] <pron> what i tried was to scale overlay width i_w/(64/45):in_h  but i am really unsure if it is the correct way
[11:38:55 CET] <relaxed> what is the frame size of the png?
[11:40:44 CET] <pron> relaxed: Stream #0:0: Video: png, rgba(pc), 122x193, 25 tbr, 25 tbn, 25 tbc
[12:05:05 CET] <relaxed> pron: bc -l <<<'122*(64/45)/(122/193)'
[12:07:08 CET] <pron> relaxed: can you explain me what is that?
[12:11:13 CET] <relaxed> width of png * (SAR you want to match) / ( pngwidth/pngheight) = final png height
[12:12:56 CET] <pron> i thought it's only the width of pixels that is affected by the non-square pixel thing
[12:14:29 CET] <relaxed> hmm, it's early. maybe I'm wrong :)
[12:14:36 CET] <pron> so what i did was i resized png to 122/(64/45):193
[12:15:01 CET] <pron> i just wanted to understand if i am not messing up with sar par and what not
[12:15:15 CET] <pron> since i dont fully understand those things
[12:15:29 CET] <pron> i do think i get the idea =)
[12:15:39 CET] <pron> and the result looked fine ;D
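To spell out what pron did: the video's pixels are 64:45 (wider than square), so a square-pixel overlay must be pre-squeezed horizontally by 45/64 before overlaying, otherwise display scaling stretches it. For the 122x193 png:

```shell
# New overlay width: 122 * 45/64, rounded to the nearest pixel.
awk 'BEGIN { printf "%.0f\n", 122 * 45 / 64 }'
# -> 86, i.e. scale the png to 86x193 before overlaying
```

In a filtergraph this can be done inline, e.g. `[1:v]scale=iw*45/64:ih[ovl]` ahead of the `overlay` filter.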
[12:52:35 CET] <skident> Hi there
[12:53:26 CET] <skident> Is it possible to change AVFilter options without re-creating the whole filtergraph?
[12:56:30 CET] <durandal_1707> skident: no, use filter commands
[12:56:42 CET] <durandal_1707> if filter have such support
[12:57:02 CET] <durandal_1707> skident: in what filter(s) are you interested?
[12:57:31 CET] <skident> acompressor, agate, equalizer, alimiter
[12:58:48 CET] <skident> durandal_1707: are there any list of commands for each filter?
[12:59:01 CET] <durandal_1707> equalizer supports commands
[12:59:24 CET] <durandal_1707> skident: they are documented in usual filter documentation
[12:59:29 CET] <skident> I see
[13:00:08 CET] <durandal_1707> http://ffmpeg.org/ffmpeg-filters.html#toc-equalizer
[13:01:16 CET] <skident> durandal_1707: Thanks for link.
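As a concrete example of filter commands, the `asendcmd` filter can change `equalizer` options mid-stream without rebuilding the graph; a sketch, where the filenames, the `@eq` instance label, and the 10-second mark are assumptions:

```shell
# At t=10s, send the "gain" command to the equalizer instance named "eq".
ffmpeg -i in.wav -af \
  "asendcmd=c='10.0 equalizer@eq gain -5', equalizer@eq=f=1000:t=q:w=1:g=0" \
  out.wav
```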
[13:02:44 CET] <skident> durandal_1707: I have a problem with creating a new filtergraph and replacing the old one with it. After replacing filtergraphs there is some audio clipping for less than 1 sec., but I can hear it
[13:04:00 CET] <durandal_1707> skident: you cannot simply replace the filtergraph and expect no clipping with realtime filtering
[13:04:33 CET] <skident> yeah, I see. But how to do it smoothly?!
[13:05:53 CET] <durandal_1707> filter audio with both filtergraphs from beginning and do crossfade when switching to another one?
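That idea can be sketched inside a single filtergraph: run both chains in parallel and hand over with volume ramps plus `amix`. The compressor settings, the 10-second switch point, and the 1-second fade are assumptions; note that `amix` averages its inputs on older builds, so levels may need compensating (the `normalize` option only exists in newer ffmpeg):

```shell
ffmpeg -i in.wav -filter_complex "\
  [0:a]asplit[a][b];\
  [a]acompressor=ratio=2,volume='1-min(max(t-10,0),1)':eval=frame[old];\
  [b]acompressor=ratio=8,volume='min(max(t-10,0),1)':eval=frame[new];\
  [old][new]amix=inputs=2:normalize=0[out]" \
  -map "[out]" out.wav
```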
[13:07:03 CET] <skident> For example, I use only filters with zero latency and I could try to filter the last audio frame through both filtergraphs (old and new) and then merge these buffers somehow to avoid clips
[13:08:06 CET] <durandal_1707> yes, but this is at level above ffmpeg
[13:08:48 CET] <skident> durandal_1707: do you mean, I need a third filtergraph with only one filter 'crossfade' and pass both buffers through it to get a "normalized" audio?
[13:09:31 CET] <durandal_1707> no, ffmpeg can't do what you seek; also with 0 latency you generally can not do what you seek
[13:10:47 CET] <durandal_1707> generally when doing new filtergraphs, you lose the old state of the old filtergraph
[13:11:07 CET] <skident> so what does crossfade do then? Doesn't it smooth two audio streams?
[13:11:52 CET] <durandal_1707> yes, but if you gonna use it, then you should use yet another filtergraph which complicates stuff
[13:12:01 CET] <skident> :)
[13:12:48 CET] <durandal_1707> and sure can not be 0 latency
[13:13:23 CET] <skident> could you please explain why 0 latency is bad in this case?
[13:13:49 CET] <durandal_1707> it is not bad, you just can not do it without latency
[13:14:48 CET] <skident> I meant to say that I push N samples into my filtergraph and then I receive the same amount of filtered samples
[13:15:01 CET] <durandal_1707> also what parameters do you need to change with what filters? as I may implement commands for filters missing them
[13:16:04 CET] <durandal_1707> skident: yea, but some filters may introduce unwanted delay, i fixed one such example recently
[13:16:05 CET] <skident> Because some filters like 'loudnorm' can't work like this. Such a filter waits until some amount of samples has been pushed and then can do its stuff
[13:16:21 CET] <durandal_1707> yes, i know
[13:16:40 CET] <skident> so, I meant that I don't use such filters
[13:16:48 CET] <durandal_1707> ok
[13:17:16 CET] <durandal_1707> skident: how big is your input amount of samples?
[13:17:48 CET] <skident> something like 1600 samples, 2 channels
[13:18:18 CET] <durandal_1707> so 1600 samples per channel for 44100?
[13:18:24 CET] <skident> 48000
[13:18:33 CET] <skident> fltp format
[13:22:39 CET] <skident> I'd like to change almost all parameters of 'acompressor'
[13:25:35 CET] <durandal_1707> if parameters are changed abruptly some artifacts may happen
[13:27:20 CET] <durandal_1707> so some interpolation might be needed
[13:28:57 CET] <skident> can you suggest any interpolation algo?
[13:29:57 CET] <durandal_1707> you just change parameters to final value in small step across timeline
[13:33:12 CET] <skident> sounds good. Seems like it will cause some amount of filtergraph re-creation.
[13:34:40 CET] <durandal_1707> well, i meant commands, not parameters; parameters set as commands
[13:35:20 CET] <durandal_1707> any recreation of filtergraph in middle of processing will cause artifacts
[13:37:50 CET] <skident> oh, sure. But as You said and I also checked, only 'equalizer' supports commands
[13:39:07 CET] <durandal_1707> yes, adding commands to acompressor seems rather trivial
[13:39:57 CET] <durandal_1707> also you can already use ladspa filter with commands just look for theirs variants of compressor or limiter
[13:40:16 CET] <durandal_1707> if you are on linux
[13:40:54 CET] <skident> unfortunately, macos and windows (
[13:45:02 CET] <durandal_1707> i think macos can use ladspa
[13:49:00 CET] <skident> durandal_1707: thank you for your help.
[13:52:09 CET] <pron> can i use nvenc together with -filter_complex?
[13:53:57 CET] <BtbN> sure, but be aware what kind of frames each filter takes
[13:55:30 CET] <pron> i am interested in adding a png overlay
[13:59:54 CET] <BtbN> If you're not using hardware frames, there is nothing different there
[17:39:32 CET] <microcolonel> Hmm
[17:39:51 CET] <microcolonel> I have no idea what the individual components of encoding cost for e.g. baseline H.264
[17:40:19 CET] <microcolonel> is there a way to take shortcuts transcoding MJPEG to a genuine video format?
[17:54:58 CET] <pi-> Could anyone critique this code? https://paste.pound-python.org/show/7tkwvakxrHNti4gLn344/
[17:55:23 CET] <pi-> I'm guessing it is pretty horrible, but it seems to work.
[17:55:45 CET] <pi-> The line I contributed was:  $FFMPEG -i "$SRC"  -i LAST_FRAME.mp4 -filter_complex "[0:v] [1:v] concat=n=2:v=1 [v]"  -map "[v]" "$DST"
[17:56:04 CET] <pi-> Because the `$FFMPEG  -f concat  -safe 0  -i list.txt  -c copy  "$DST"` line was glitching the target vid.
[17:56:39 CET] <pi-> But have I got that `-filter_complex "[0:v] [1:v] concat=n=2:v=1 [v]"  -map "[v]"` part down right?
[17:57:33 CET] <furq> pi-: shorter than what
[17:58:21 CET] <pi-> furq: ah that error message is obsolete, tx
[17:58:36 CET] <furq> if you just want to repeat the last frame then there are easier ways to do it
[17:58:44 CET] <furq> https://ffmpeg.org/ffmpeg-filters.html#framesync
[17:58:55 CET] <furq> maybe overlay with eof_action=repeat or something
[17:59:28 CET] <furq> lots of filters support framesync so there's probably something even easier
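For just holding the last frame, newer ffmpeg (4.2+) also has the `tpad` filter, which avoids the two-input concat entirely; a sketch with assumed filenames and a 5-second hold:

```shell
# Clone the last frame for 5 extra seconds (video only; audio untouched).
ffmpeg -i src.mp4 -vf "tpad=stop_mode=clone:stop_duration=5" dst.mp4
```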
[18:00:04 CET] <pi-> I did have another technique before, but it was slooooow: https://paste.pound-python.org/show/ef7Us9FUjNBNVBkEmubh/
[18:00:45 CET] <furq> well if you're using filters instead of the concat demuxer then both ways require you to reencode the entire video
[18:01:17 CET] <pi-> I wish I could find an ffmpeg engineer to rewrite my Python-wrapping-ffmpeg lib. It feels really unstable.
[18:01:27 CET] <furq> that other paste is basically what i was suggesting
[18:01:32 CET] <furq> i'd be surprised if it was much slower
[18:01:35 CET] <pi-> I don't suppose you might be interested... furq?
[18:01:46 CET] <furq> not today
[18:03:08 CET] <pi-> well, if any time in the future, do get back to me. What I have at the moment is a hack job. I would love to have someone who knows what they are doing look over it.
[18:45:45 CET] <jinboli> Hello, does anyone know if OpenMAX supports hardware encoding on Android? I notice OpenMAX partially supports hardware encoding on Android.
[18:47:39 CET] <Mavrik> ?
[18:50:57 CET] <pink_mist> jinboli: did ... you just answer your own question in the same breath as you asked it?
[18:51:17 CET] <Mavrik> Yeah, I'm not quite sure what that was
[19:08:33 CET] <jinboli> Sorry for the confusion. I'm new to ffmpeg. I'm trying to implement hardware encoding for video calls in an Android app. I don't quite understand what "partially supported" means. And I don't quite understand the difference between "standalone" and "hardware input" in the FFmpeg API Implementation Status
[19:10:50 CET] <Mavrik> Android has MediaCodec APIs for encoding
[19:10:54 CET] <Mavrik> OpenMAX isn't for app use
[19:16:49 CET] <furq> jinboli: mediacodec will use a hardware encoder if there's one available
[19:18:52 CET] <mrskman> Hello! Is there any way to select a video stream by bitrate/resolution? I have a dash stream input and I don't want to map the best quality. Problem is that streams in the dash manifest are randomly ordered with every request so I can't map them by index.
[19:19:51 CET] <jinboli> But it says on the FFmpeg API Implementation Status table that MediaCodec only support hardware decoding on Android. So the table is not up to date?
[19:21:28 CET] <furq> jinboli: i meant mediacodec in general, not the ffmpeg implementation of it
[19:22:02 CET] <jinboli> Ok, thanks a lot
[20:19:32 CET] <Bombo> hi
[20:21:04 CET] <Bombo> i want to record video from v4l device + alsa audio, when i do "ffmpeg -thread_queue_size 1024 -framerate 25 -video_size 720x576 -c:v libx264 -f v4l2 -i /dev/video0 -f alsa -i hw:3 out.mkv" i get "unknown decoder 'libx264'"
[20:21:55 CET] <Bombo> i want to ENcode the thing
[20:22:14 CET] <Bombo> i don't get why it says DEcoder
[20:39:29 CET] <Bombo> ok got the right order now i think: ffmpeg -thread_queue_size 1024 -framerate 25 -video_size 720x576  -f v4l2 -i /dev/video0 -thread_queue_size 1024 -f alsa -i hw:3 -c:v libx264 -preset superfast -crf 23 -vf yadif -c:a libmp3lame -b:a 128k -t 00:00:10 out.mkv
[20:39:36 CET] <Bombo> seems to work ;)
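For reference, the cause of the earlier error: options placed before an `-i` apply to that input, so `-c:v libx264` there selects a *decoder* (hence "unknown decoder 'libx264'"); placed after the inputs it selects the encoder:

```shell
# Wrong: input option -> ffmpeg looks for a libx264 *decoder*
# ffmpeg -c:v libx264 -f v4l2 -i /dev/video0 ...

# Right: output option -> libx264 is used as the *encoder*
ffmpeg -f v4l2 -i /dev/video0 -f alsa -i hw:3 -c:v libx264 out.mkv
```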
[00:00:00 CET] --- Sat Dec 29 2018


More information about the Ffmpeg-devel-irc mailing list