[Ffmpeg-devel-irc] ffmpeg.log.20190227

burek burek021 at gmail.com
Thu Feb 28 03:05:01 EET 2019


[03:31:06 CET] <YellowOnion> How do I force the filter chain to do a rgb24 -> nv12 -> yuyv422p -> rgb24  conversion?
[03:32:54 CET] <furq> -vf format=rgb24,format=nv12,format=yuv422p,format=rgb24
[03:33:03 CET] <furq> i don't know why you'd want to do that but ok
[03:34:34 CET] <YellowOnion> I want to see the degradation of nv12 -> yuyv422p, and I just want the inputs and outputs as pngs.
[03:35:36 CET] <furq> just -vf format=nv12,format=yuv422p then
[03:35:49 CET] <furq> also there is no yuyv422p, there's yuyv422 and yuv422p
[03:36:44 CET] <YellowOnion> Yeah yuyv422
[03:37:07 CET] <YellowOnion> the output looks identical with that command.
[03:40:16 CET] <YellowOnion> I'm trying to debug this: https://github.com/CatxFish/obs-virtual-cam/issues/43
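
A minimal sketch of the chain furq suggests, written out against a single PNG so the input and the degraded output can be compared side by side (file names are hypothetical):

    ffmpeg -i input.png -vf format=nv12,format=yuyv422,format=rgb24 degraded.png
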
[07:43:58 CET] <ricemuffinball> i have a video file that was separated into 2,  how do i join them back together into one file?
[08:58:31 CET] <goodgrief> Hi! I need to reduce noise in an audio recording. The length of the audio is 5 seconds, it is human voice with white noise. Which ffmpeg filter should I use? I've tried to read the documentation, but there are a lot of filters, so I'm confused. Please help.
[09:01:38 CET] <Umori> ricemuffinball: ffmpeg -i "concat:input1.ts|input2.ts" -c copy output.ts (https://trac.ffmpeg.org/wiki/Concatenate)
[09:01:48 CET] <ricemuffinball> that doesn't work
[09:01:54 CET] <ricemuffinball> already tried
[09:02:27 CET] <Umori> What error do you get?
[09:04:05 CET] <ricemuffinball> no error
[09:04:28 CET] <ricemuffinball> the resulting/created file = just input1.ts
[09:04:32 CET] <ricemuffinball> the resulting/created file = just input1.*
[09:07:06 CET] <Umori> Are the sources the exact same a/v codecs, resolution etc?
[09:07:06 CET] <Umori> Have you tried with the concat filter?
[09:11:34 CET] <Umori> goodgrief: I don't know much about ffmpeg audio filter but have you checked with audacity? If the white noise is present continuously and you can get a sample of it without the voice, you should be able to use the noise reduction feature.
[09:13:18 CET] <ricemuffinball> umori yes
[09:13:41 CET] <ricemuffinball> umori  yes i did, that also created a corrupted file without any errors
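
If the "concat:" protocol silently keeps only the first part, the concat demuxer is the usual alternative, assuming both parts share the same codecs and parameters (file names hypothetical):

    printf "file '%s'\n" input1.ts input2.ts > list.txt
    ffmpeg -f concat -safe 0 -i list.txt -c copy output.ts
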
[09:14:58 CET] <goodgrief> Umori, yeah, I know how to do this in Audacity. But I have too many short audios with voice and white noise and I want to reduce noise programmatically - I want to create a Python script for this.
[09:17:50 CET] <goodgrief> Umori, this noise filter should work like a noise gate, not like noise reduction, which is based on a noise sample free of voice, as you advised.
[09:28:10 CET] <th3_v0ice> How can I copy over the stream PID using the -map option? In my MPEGTS file with 5 programs and multiple streams I can use -map 0 to copy everything to the UDP output, but the stream PID's change. Is there a way to preserve them?
[09:29:11 CET] <th3_v0ice> Or maybe force FFmpeg to show the new PID's it created in the output file. Either will do the job.
[09:30:28 CET] <JEEB> there's some way of doing multiple programs in mpegtsenc.c
[09:30:41 CET] <furq> goodgrief: check the afftdn and anlmdn filters
[09:30:49 CET] <JEEB> also the PID is the "id" of the stream so I'd think the thing is configurable
[09:31:38 CET] <furq> also the compand filter
[09:38:17 CET] <goodgrief> furq, thanks
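
A minimal sketch of the two denoise filters furq mentions; the nr value (noise reduction amount in dB) is only an illustrative starting point and the file names are hypothetical:

    ffmpeg -i noisy.wav -af afftdn=nr=12 clean_fft.wav
    ffmpeg -i noisy.wav -af anlmdn clean_nlm.wav
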
[09:59:54 CET] <th3_v0ice> JEEB: Are you talking about API or FFmpeg binary? I only needed the stream PID's, programs are not relevant.
[10:04:29 CET] <JEEB> th3_v0ice: in the API it's the id field and I know that it can be set, so I would say it's highly likely that there's an option to either set them, or pass them
[10:06:44 CET] <th3_v0ice> JEEB: For this particular issue I am using the binary, not the API :)
[10:07:08 CET] <th3_v0ice> I am sorry if I didn't make that clear in my question.
[10:08:33 CET] <JEEB> well I noted that most likely ffmpeg.c has that capability as well, and if not you could make an issue about it :P (stream indexes are not settable as far as I know, but I think IDs are?)
[10:12:29 CET] <th3_v0ice> There is actually -streamid option. Thanks for the help :)
[10:12:50 CET] <JEEB> ok, that's it then
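
A minimal sketch of pinning output PIDs with -streamid; the output_stream_index:PID pairs and the UDP address are hypothetical:

    ffmpeg -i input.ts -map 0 -c copy -streamid 0:289 -streamid 1:290 -f mpegts udp://239.0.0.1:1234
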
[11:44:17 CET] <Marcin_PL> Hello, does ripping an audio AAC stream via ffmpeg look any different to the server broadcast software than listening to it via Winamp or something else?
[11:53:50 CET] <Accord> hey, I have a lot of mp4 files with both video and audio streams but if I add the duration field reported by ffprobe the resulting duration is different from the duration I get after concatenating all of the files
[11:55:28 CET] <Accord> so let's say I have v1, v2, v3 with durations reported like 5000, 5000, 5000 but after concatenating them I get 15500
[11:55:59 CET] <Accord> any ideas on how to predict what the concat result will be and why this is happening?
[11:57:03 CET] <cslcm> Marcin_PL: the useragent will be different
[12:37:46 CET] <Accord> also ffmpeg -i doesn't report the same duration as ffprobe
[12:44:30 CET] <Marcin_PL> cslcm: may I fake useragent of ffmpeg somehow?
[12:52:15 CET] <cslcm> Marcin_PL: Yes, -user-agent switch. Google :)
[12:53:52 CET] <cslcm> Marcin_PL: To mimic winamp you probably want "WinampMPEG/5.80"
[13:02:52 CET] <Accord> ah, ffmpeg -i reports format duration
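
One way to see the two different numbers being compared here, container (format) duration vs. per-stream duration (file name hypothetical):

    ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 v1.mp4
    ffprobe -v error -select_streams v:0 -show_entries stream=duration -of default=noprint_wrappers=1:nokey=1 v1.mp4
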
[13:05:13 CET] <Marcin_PL> cslcm: thank you very much, dunno why man ffmpeg and /agent returns nothing for me, but the switch works. -user-agent is deprecated, now it's -user_agent, but both work, thanks again.
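
A minimal sketch of ripping with a spoofed user agent; the stream URL is hypothetical, and -user_agent, being an input protocol option, goes before -i:

    ffmpeg -user_agent "WinampMPEG/5.80" -i "http://example.com/stream.aac" -c copy rip.aac
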
[14:05:20 CET] <ichlubna> Hello, does anyone have any experience with av_hwframe_ctx_create_derived? I cannot create any derived context of a different device type than the source. Tried running FFmpeg/libavutil/tests/hwdevice.c but none of the tests work.
[15:47:33 CET] <jkqxz> ichlubna:  What combination are you wanting to use?
[15:47:42 CET] <alex``> What does -max_muxing_queue_size mean?
[15:48:23 CET] <jkqxz> ichlubna:  The derived contexts are for interop cases, so it will only work where there is actually interop support.
[15:48:40 CET] <ichlubna> So there is no interop at all? :D
[15:49:12 CET] <jkqxz> Possibly not for whatever device you are using.
[15:49:29 CET] <ichlubna> I tried the libavutil/tests/hwdevice.c test with some (hopefully supported) combinations they prepared but none is working. I am using RTX.
[15:51:34 CET] <jkqxz> As in Nvidia?  No one has written any interop cases for that, though I think in theory it should be able to work with D3D11 at least.
[15:53:01 CET] <ichlubna> Hmm OK well I expected at least the vaapi and opencl interop to work, as mentioned here: https://trac.ffmpeg.org/wiki/HWAccelIntro#OpenCL
[15:57:38 CET] <jkqxz> Hmm, that should probably mention drivers.  Only the Beignet OpenCL ICD for Intel is supported properly for VAAPI interop.
[16:00:02 CET] <ichlubna> I see, OK, thanks for the help. I guess it can't be helped
[16:02:44 CET] <jkqxz> Yeah.  Generally the interop methods are all different and each case has to be written/tested individually.
[16:04:27 CET] <ichlubna> OK I thought that some are working everywhere...I hoped :D
[16:06:44 CET] <ichlubna> Anyway thanks!
[17:34:01 CET] <richar_d> on my computer, the example on the wiki for converting a still image and audio file into a video suitable for uploading to YouTube (https://trac.ffmpeg.org/wiki/Encode/YouTube) (the third command) outputs a video that has 00:00:28.80 of silence at the end. the actual length of the audio track I specified is 00:03:46.80. can anyone else reproduce this?
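
Not a confirmed answer to the trailing-silence report above, but when a still image is looped over an audio track the usual way to stop the output exactly at the end of the audio is -shortest, if the command being used doesn't already include it (a rough sketch, file names hypothetical):

    ffmpeg -loop 1 -i cover.png -i audio.m4a -c:v libx264 -tune stillimage -pix_fmt yuv420p -c:a copy -shortest out.mp4
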
[18:38:19 CET] <cfoch> hi
[18:38:33 CET] <cfoch> is it possible to remove silence at the end of an audio file with a single command?
[18:41:03 CET] <cfoch> I wonder if silenceremove with start_periods=-1 would work
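
A common recipe for trailing silence is to reverse the audio, strip the now-leading silence, and reverse back; a minimal sketch with a hypothetical threshold and file names:

    ffmpeg -i in.wav -af areverse,silenceremove=start_periods=1:start_threshold=-50dB,areverse out.wav
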
[21:47:00 CET] <piotr> helo
[21:47:15 CET] <piotr> i need to convert files very simple
[21:47:52 CET] <piotr> i do ffmpeg -b:v 256k -i "$fileline" "$filetempconv"
[21:48:17 CET] <piotr> first is  file.flac for example and other file.mp3
[21:48:25 CET] <furq> -b:v goes after the input filename
[21:48:39 CET] <furq> also with lame you should probably use -q:v 0
[21:48:41 CET] <piotr> yeah it tells me that
[21:49:36 CET] <piotr> so b is for bitrate, and v? is v an option or something other?
[21:49:40 CET] <piotr> not sure i get the syntax
[21:49:52 CET] <furq> oh right yeah
[21:49:58 CET] <piotr> I know it's in the man page but at least a simple hint
[21:49:59 CET] <furq> it should be -b:a
[21:50:03 CET] <furq> bitrate:audio
[21:51:08 CET] <richar_d> the quality will be higher if you use variable bitrate (VBR) encoding
[21:51:24 CET] <piotr> better ?  ffmpeg -i "$fileline" -b:a 256k "$filetempconv"
[21:51:31 CET] <piotr> richard  and how would i do that ?
[21:51:42 CET] <piotr> im basically writing script to turn all non mp3 to mp3
[21:52:21 CET] <furq> piotr: replace -b:a 256k with -q:a 0
[21:52:31 CET] <piotr> then use mp3gain to have all the same volume
[21:52:41 CET] <piotr> I see thankyou
[21:53:24 CET] <piotr> what does q and 0 stand for ?
[21:54:22 CET] <computron> Hey everyone, looking to see if ffmpeg can do something and wondering if I am looking at the correct software for what I want to do. I have 2 IP camera streams and an audio stream I would like to combine into a sort of "picture in picture" stream and then kick that out as a single stream I can send to a streaming service. Is FFMPEG capable of this?
[21:54:25 CET] <furq> q is quality, 0 is specific to libmp3lame
[21:54:38 CET] <furq> in this case it does the same thing as lame -V0
[21:55:03 CET] <furq> fwiw if you're doing this for a phone you should probably use a more modern codec like aac or opus
[21:55:11 CET] <richar_d> piotr, https://trac.ffmpeg.org/wiki/Encode/MP3 "`-q:a 0` is the highest quality"
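
Putting the pieces together, a minimal sketch of the VBR conversion being discussed (file names hypothetical):

    ffmpeg -i input.flac -c:a libmp3lame -q:a 0 output.mp3
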
[21:55:32 CET] <piotr> this is my computer music ;-)
[21:56:13 CET] <piotr> also i got those
[21:56:31 CET] <piotr> Parse error, at least 3 arguments were expected, only 1 given in string 'ity Hunter Collection (1987-2005)/05 NEVER GO AWAY.flac'
[21:57:04 CET] <piotr> tho seems like  bug in my script tbh
[21:57:09 CET] <piotr> ill investigate
[21:59:42 CET] <richar_d> piotr, I wrote something earlier that may help you. give me a minute…
[22:00:43 CET] <furq> piotr: http://vpaste.net/iZCOX
[22:00:51 CET] <furq> i wrote this for someone else and never tested it so ymmv
[22:02:08 CET] <richar_d> furq, https://dpaste.de/XP5i
[22:02:56 CET] <richar_d> ugh, I feel so dirty when writing Bash scripts
[22:03:18 CET] <furq> is there any benefit to `for file in "${@}"` over `for file`
[22:05:21 CET] <richar_d> yes: 1. you can run the script like this: `./convert-audio ~/Music/Album\ 1/*.flac ~/Music/Album\ 2/*.flac` 2. it can process file names that contain spaces
[22:05:49 CET] <furq> mine does that as well
[22:06:13 CET] <furq> the only reason i'm screwing around with -print0 is for multiprocessing with xargs
[22:06:38 CET] <piotr> it blew up because it asks for yes on overwrite; also i see on flac it has some questions
[22:07:16 CET] <richar_d> I didn't even know `for` without `in` was a valid syntax
[22:07:26 CET] <furq> yeah it just iterates over $@
[22:07:32 CET] <furq> or $*, i never remember what the difference is
[22:07:50 CET] <richar_d> `$*` doesn't preserve spaces
[22:08:00 CET] <furq> that's probably it
[22:08:21 CET] <piotr> thx for scripts tho i use ksh
[22:08:34 CET] <furq> bash scripts are still fine as long as you have it installed somewhere
[22:08:56 CET] <piotr> although it probably would work, i have mine almost working, so just looking at yours out of curiosity
[22:10:06 CET] <richar_d> Python is what I use when I want to write something elegant :)
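
For comparison, a minimal, untested sketch of the kind of loop being passed around; -nostdin and -y keep ffmpeg from stopping to ask about overwrites inside the loop (paths and output naming are hypothetical):

    #!/bin/sh
    # convert every file given on the command line to VBR MP3
    for file in "$@"; do
        ffmpeg -nostdin -y -i "$file" -q:a 0 "${file%.*}.mp3"
    done
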
[22:13:46 CET] <computron> Hey everyone, looking to see if ffmpeg can do something and wondering if I am looking at the correct software for what I want to do. I have 2 IP camera streams and an audio stream I would like to combine into a sort of "picture in picture" stream and then kick that out as a single stream I can send to a streaming service. Is FFMPEG capable of this?
[22:15:56 CET] <furq> it's capable of it but it'll be a bit fragile
[22:16:07 CET] <furq> https://ffmpeg.org/ffmpeg-filters.html#overlay-1
[22:16:43 CET] <furq> if the two IP cameras are on a local network then it should be ok
[22:17:26 CET] <furq> but ffmpeg won't do anything to try to keep the two streams in sync, so if there are dropouts or discontinuities or whatever then things might go wrong
[22:17:31 CET] <furq> there's not really anything you can do about that
[22:18:47 CET] <furq> you'd want something like -lavfi "[0:v]setpts=PTS-STARTPTS[bg];[1:v]setpts=PTS-STARTPTS[fg];[bg][fg]overlay=..."
[22:19:53 CET] <computron> furq, thank you, yes both are on a local network. it's for a church; the second stream is a PowerPoint presentation
[22:20:09 CET] <computron> that is getting kicked through an axis encoder
[22:21:09 CET] <computron> Is there a good primer or list of examples for FFMPEG? i have never looked at it, just doing some research right now
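
A rough sketch of the picture-in-picture pipeline furq outlines, with hypothetical camera/audio URLs and stream key; the scale size and overlay position are arbitrary choices:

    ffmpeg -i rtsp://camera1/main -i rtsp://camera2/slides -i rtsp://audio/feed \
      -filter_complex "[0:v]setpts=PTS-STARTPTS[bg];[1:v]setpts=PTS-STARTPTS,scale=480:-2[pip];[bg][pip]overlay=W-w-20:H-h-20[out]" \
      -map "[out]" -map 2:a -c:v libx264 -preset veryfast -c:a aac -f flv rtmp://example.com/live/streamkey
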
[00:00:00 CET] --- Thu Feb 28 2019


More information about the Ffmpeg-devel-irc mailing list