[Ffmpeg-devel-irc] ffmpeg.log.20190604
burek
burek021 at gmail.com
Wed Jun 5 03:06:38 EEST 2019
[00:00:02 CEST] <Necrosand> pink_mist yes in one file
[00:00:12 CEST] <pink_mist> then you want to merge them using a filter
[00:00:25 CEST] <pink_mist> find the right filter to use, and figure out how to use it
[00:02:37 CEST] <Necrosand> cehoyos -map 0 didn't work
[00:02:44 CEST] <Necrosand> ffmpeg -map 0 -f dshow -i audio="desktop audio" -f dshow -i audio="Microphone" output.mka
[00:02:48 CEST] <pink_mist> Necrosand: http://ffmpeg.org/ffmpeg-filters.html#amerge-1
[00:02:49 CEST] <cehoyos> That's because I did not suggest it;-)
[00:02:51 CEST] <cehoyos> (As said)
[00:03:30 CEST] <pink_mist> or possibly amix
[00:04:01 CEST] <pink_mist> Necrosand: http://ffmpeg.org/ffmpeg-filters.html#amix
[00:05:16 CEST] <Necrosand> cehoyos you said -map 0 is missing
[00:06:09 CEST] <Necrosand> pink_mist okay problem is it did not create mka with 2 audio tracks
[00:06:41 CEST] <pink_mist> well you probably needed what cehoyos said, but that hardly matters now anyway, since that's not what you wanted
[00:06:58 CEST] <cehoyos> No
[00:07:12 CEST] <Necrosand> pink_mist true but i know how to merge 2 audio tracks using Audacity
[00:08:17 CEST] <pink_mist> then why did you come here and scream about stuff not working?
[00:08:28 CEST] <Necrosand> pink_mist what happens when you run the same command?
[00:08:37 CEST] <pink_mist> I don't even have ffmpeg installed here
[00:08:44 CEST] <pink_mist> so it tells me that ffmpeg doesn't exist
[00:08:56 CEST] <Necrosand> lol
[00:09:11 CEST] <Necrosand> you make it sound like ffmpeg is so hard to get
[00:10:17 CEST] <pink_mist> it takes a while to compile, especially with all the prerequisites I'd want to make it more useful
[00:15:18 CEST] <Necrosand> cehoyos where do i put -map 0
[00:19:45 CEST] <cehoyos> -map 0 -map 1 missing
[00:19:58 CEST] <cehoyos> But only if you want two audio streams in your output file.
[00:20:06 CEST] <cehoyos> If you want one stream, use one of the filters mentioned
[00:20:50 CEST] <Necrosand> cehoyos i want to see if two audio streams even works
[00:21:08 CEST] <Necrosand> even that working is progress
[00:23:55 CEST] <Necrosand> ffmpeg -map 0 -f dshow -i audio="what u hear (Creative SB X-Fi)" -map 1 -f dshow -i audio="Microphone (Creative SB X-Fi)" test.mka
[00:24:04 CEST] <Necrosand> that didn't work
[00:31:03 CEST] <cehoyos> Doesn't the console output tell you why?
[00:31:23 CEST] <cehoyos> (I ask because I just tested and I get a very descriptive error message)
[00:32:24 CEST] <cehoyos> Anyway: ffmpeg -f dshow -i audio="what u hear (Creative SB X-Fi)" -f dshow -i audio="Microphone (Creative SB X-Fi)" -map 0 -map 1 test.mka
[00:34:49 CEST] <Necrosand> oh okay, i did it wrong
[00:34:50 CEST] <Necrosand> thanks
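[Editor's note: the ordering matters because ffmpeg applies options to the file whose name follows them: input options go before each -i, output options like -map go before the output filename. A minimal sketch of the corrected layout, with lavfi sine tones standing in for the dshow devices (the PCM codec is specified only so the sketch runs on any build):]

```shell
# Two synthetic tones stand in for the two dshow capture devices.
# -map 0 -map 1 are output options, so they come after both inputs,
# right before the output filename.
ffmpeg -y -f lavfi -t 1 -i "sine=frequency=440" \
          -f lavfi -t 1 -i "sine=frequency=880" \
       -map 0 -map 1 -c:a pcm_s16le /tmp/two_tracks.mka
```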
[00:48:58 CEST] <Necrosand> -filter_complex "[0:a][1:a]amerge=inputs=2[a]" -map "[a]" output.mka : why is this creating 4 channel output.mka file
[02:35:28 CEST] <kepstin> Necrosand: because that's what the amerge filter does, according to its documentation. you probably wanted amix instead.
[02:35:37 CEST] <Necrosand> oh ok
[02:35:45 CEST] <Necrosand> amerge is always 4 channels?
[02:36:47 CEST] <kepstin> no, amerge combines an N channel input and an M channel input into an N+M channel output
[02:37:15 CEST] <Necrosand> oh ok
[02:37:21 CEST] <Necrosand> thanks
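[Editor's note: to get a single mixed-down stream instead of stacked channels, amix is the filter, as kepstin says. A self-contained sketch with two mono tones (the 4-channel result in the chat came from amerging two stereo inputs):]

```shell
# amerge stacks channels (stereo + stereo -> 4ch); amix mixes the
# inputs together instead, keeping a normal channel layout.
ffmpeg -y -f lavfi -t 1 -i "sine=frequency=440" \
          -f lavfi -t 1 -i "sine=frequency=880" \
       -filter_complex "[0:a][1:a]amix=inputs=2[a]" \
       -map "[a]" -c:a pcm_s16le /tmp/mixed.mka
```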
[11:52:25 CEST] <Atlenohen> Hey
[11:52:37 CEST] <Atlenohen> libx264rgb is not mentioned on the configure
[11:53:03 CEST] <Atlenohen> there's no error or anything, it's not reported as included, whether I enable it specifically or with libx264*, same result
[11:53:19 CEST] <Atlenohen> When trying to build FFmpeg
[11:53:36 CEST] <Atlenohen> I'll see later if it's really included or not
[11:54:36 CEST] <BtbN> Isn't libx264rgb just a differently named endpoint of just libx264, which feeds RGB to x264, which thinks it's YUV?
[11:55:29 CEST] <Atlenohen> That may be the case yes, with some of these phantom encoders/muxers, but I'm no FFmpeg expert.
[11:57:26 CEST] <Atlenohen> btw, would lossless mode x264 be of any benefit over FFV1 ?
[11:57:50 CEST] <BtbN> You'll need to test what's smaller/faster for your case
[11:58:08 CEST] <BtbN> And if speed or efficiency is more important
[11:58:24 CEST] <Atlenohen> oh, ok
[12:03:35 CEST] <Atlenohen> Slight bug, after building, avconfig.h and ffversion.h don't come out as CRLF
[12:12:06 CEST] <dongs> [matroska @ 000001b98c918800] Can't write packet with unknown timestamp
[12:12:08 CEST] <dongs> how do i fix this
[12:14:36 CEST] <dongs> aight muxing to mp4 works
[12:20:26 CEST] <BtbN> Why would anything come out with Windows line endings?
[12:20:43 CEST] <BtbN> What even still needs them? Even Windows is slowly abandoning it.
[12:20:44 CEST] <dongs> wut
[12:21:21 CEST] <dongs> are you one of those dudes who thinks lunix is succeeding on desktop?
[12:21:50 CEST] <BtbN> You are aware they made notepad.exe aware of non-Windows line endings recently?
[12:22:02 CEST] <BtbN> They are moving away from CRLF. Very slowly though.
[12:22:04 CEST] <zamba> hi! i have an audio recording that has been made with the samsung voice recorder.. when pulling the file from the mobile phone, its extension is .m4a and i'm not able to play it back using mplayer/ffplay.. 'file' says: "ISO Media, MPEG v4 system, 3GPP"
[12:22:05 CEST] <dongs> no because i literally have never encountered such a file
[12:22:10 CEST] <dongs> in all the decades of usign windows
[12:22:28 CEST] <zamba> ffplay barfs with:moov atom not found
[12:22:29 CEST] <dongs> zamba, could it be DRM/protected?
[12:22:34 CEST] <dongs> oh, its short then
[12:22:39 CEST] <zamba> Invalid data found when processing input
[12:22:41 CEST] <dongs> did recording abort due to low battery or such?
[12:23:14 CEST] <BtbN> Best case copying it from the device was just interrupted and it's missing part of the file.
[12:23:24 CEST] <BtbN> Worst case, that recording is busted.
[12:23:51 CEST] <zamba> BtbN: what about just parsing/reading what's ok? if the file in fact was cut short?
[12:24:12 CEST] <BtbN> Can't, without the moov atom, an mp4 file is unreadable. And it's at the end of the file.
[12:25:56 CEST] <zamba> so it's not possible to make sense of the raw binary stream without the moov atom?
[12:26:20 CEST] <zamba> i mean.. not even "forensicly"?
[12:26:59 CEST] <dongs> you can make another recording
[12:27:00 CEST] <dongs> with same settings
[12:27:03 CEST] <dongs> and there's software/scripts
[12:27:16 CEST] <dongs> that can transplant the required mp4 bits from a known good file to a bad one
[12:27:26 CEST] <dongs> not sure how well that works, all the ones I've seen are geared for video
[12:27:31 CEST] <dongs> maybe audio works the same way?
[12:28:43 CEST] <zamba> but i'm quite sure this is not mp4
[12:28:51 CEST] <zamba> i believe it should be 3ga
[12:39:39 CEST] <dongs> why dont you just make another recording and see
[12:39:42 CEST] <dongs> how did you get the file
[12:42:51 CEST] <zamba> dongs: i'm trying to save a file for someone else
[12:43:07 CEST] <dongs> ic then not much you can do
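[Editor's note: for recordings one controls, the moov-at-the-end fragility BtbN describes can be avoided up front with -movflags +faststart, which relocates the moov atom to the head of the file. A sketch with a synthetic tone and the native aac encoder:]

```shell
# +faststart rewrites the file after encoding so the moov atom (the
# index) sits at the front; an interrupted *copy* of such a file still
# has its index, and only the tail of the audio data is lost.
ffmpeg -y -f lavfi -t 1 -i "sine=frequency=440" \
       -c:a aac -movflags +faststart /tmp/front_moov.m4a
```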
[12:43:33 CEST] <dongs> -c:v h264_nvenc -preset slow -profile:v high -level 5.2 -b:v 20M why doesnt this work?
[12:43:46 CEST] <dongs> oh hm wat no nvenc capable devices found
[12:43:52 CEST] <dongs> wtf is going on
[13:03:31 CEST] <BtbN> Increase the log level, it should tell you more about what exactly it's unhappy about
[13:05:16 CEST] <Atlenohen> I'm starting to delve into the FFmpeg API in depth, rewriting the old API implementation
[13:05:39 CEST] <Atlenohen> Wondering what does this do right now? https://pastebin.com/biNbErNz
[13:06:31 CEST] <Atlenohen> I'm getting rid of support for anything below the version I included, I believe it's 4.1.3
[13:07:12 CEST] <Atlenohen> so this whole segment can be deleted?
[13:07:14 CEST] <BtbN> Stuff was renamed, and this just renames it via macros
[13:07:20 CEST] <BtbN> If the version is too old to have it
[13:08:54 CEST] <Atlenohen> oh I guess it doesn't hurt, but I think the stuff I'll be doing later will break old version anyway
[13:10:51 CEST] <Atlenohen> Doxygen is the optimal source to look for the version specific API stuff right?
[13:10:55 CEST] <BtbN> Might as well just get rid of it if you hard-require something newer
[13:11:11 CEST] <Atlenohen> Yeah I was thinking like that too
[13:11:16 CEST] <BtbN> I don't think there's anything to look at that
[13:11:42 CEST] <BtbN> stuff like that usually comes to be because an old codebase broke when built against a newer ffmpeg version, and then gets patched up.
[13:11:43 CEST] <Atlenohen> Oh, I mean in general, when beginning to work on the initial implementation
[13:12:02 CEST] <BtbN> I'd just look at the headers
[13:12:23 CEST] <Atlenohen> Like what to do first, second, next ... first I need a av_output_context, etc like that, if there's some flow guide, not just alphabetic order
[13:12:38 CEST] <Atlenohen> Because it's the first time I'm doing it
[13:13:40 CEST] <Atlenohen> This is only encode tho, well first is libavcodec, I'm sure that's not the place to start looking at right
[14:00:30 CEST] <Atlenohen> BtbN, are example files of any use?
[14:01:11 CEST] <Atlenohen> like encode_video.c ... however this is a C++ program integrating FFmpeg.
[14:03:55 CEST] <Atlenohen> Also, it seems like the version-specific doxygen shows exactly that version, so it doesn't include everything, is that trunk?
[14:04:12 CEST] <Atlenohen> For example wasn't av_network_init deprecated but it's not showing up in 4.1?
[14:04:32 CEST] <Atlenohen> sry: avformat_network_init
[14:04:50 CEST] <DHE> it's deprecated. you don't need to do it anymore, but calling it won't break anything
[14:06:23 CEST] <Atlenohen> I go to doxygen and search but it finds nothing in 3.2 or 2.0 either
[14:06:40 CEST] <Atlenohen> Maybe it's mentioned only in the version it was deprecated in
[14:07:01 CEST] <DHE> Doxygen should still mention the function, if only to say it's deprecated and what to do instead now
[14:07:04 CEST] <Atlenohen> I think av_register_all(); was too right?
[14:08:05 CEST] <DHE> https://www.ffmpeg.org/doxygen/trunk/group__lavf__core.html#ga84542023693d61e8564c5d457979c932 # avformat_network_init is deprecated and not needed anymore (on Trunk)
[14:09:59 CEST] <Atlenohen> oh ok
[15:11:37 CEST] <Gunstick> I already asked yesterday and get reply "channelsplit". Yeah, duh, I don't understand complex filter. So how do I remove the first 2 channels and graph the other 3? Need I 2 ffmpeg commands piped? ffmpeg -y -t 5 -i You_Bore.wav -filter_complex "[0:a]channelsplit=channel_layout=5.0[BL][BR][FL][FR][FC][BL][BR];[0:a]showwaves=s=1280x720:split_channels=y:mode=line,format=yuv420p[v]" -map "[v]" -map 0:a -c:v libx264 You_Bore.mkv # source file is here:
[15:11:37 CEST] <Gunstick> http://gkess.homeip.net/~georges/You_Bore.wav
[15:12:37 CEST] <Gunstick> the above thing does obviously not work as is.
[15:14:34 CEST] <DHE> well you're using 5.0 but have 6 outputs to channelsplit
[15:14:46 CEST] <DHE> *7
[15:15:12 CEST] <Gunstick> oops, but removing the duplicate [BL][BR] does not change a thing.
[15:16:07 CEST] <Gunstick> because I do not understand complex filter. So "BR has an unconnected output" does not mean anything to me. sorry
[15:16:47 CEST] <durandal_1707> learn how to learn how to learn to learn to read documentation
[15:17:21 CEST] <DHE> I think channelmap to re-assemble the split waves into a single 3.0 stream for showwaves would do you
[15:17:42 CEST] <durandal_1707> no
[15:17:49 CEST] <Gunstick> indeed. The documentation just says how to split into different streams, but I want to have one 3-channel stream and then put that through showwaves.
[15:18:00 CEST] <DHE> there a better way?
[15:18:38 CEST] <durandal_1707> plus join/amerge and anullsink
[15:19:32 CEST] <DHE> okay amerge makes more sense
[15:19:51 CEST] <dongs> yall 5.1 audio plebs, i've got some 22.2 ch audio i can't even play it back
[15:26:46 CEST] <Gunstick> @DHE @durandal_1707 anullsink + amerge! ffmpeg -y -t 5 -i You_Bore.wav -filter_complex "[0:a]channelsplit=channel_layout=5.0[BL][BR][FL][FR][FC];[BL]anullsink;[BR]anullsink;[FL][FR][FC]amerge=inputs=3[YM];[YM]showwaves=s=1280x720:split_channels=y:mode=line,format=yuv420p[v]" -map "[v]" -map 0:a -c:v libx264 You_Bore.mkv
[15:27:10 CEST] <durandal_1707> dongs: you haven't provided it
[15:27:38 CEST] <dongs> durandal_1707: eh i could, what would it do tho
[15:27:45 CEST] <Gunstick> DHE: thanks for the help
[15:27:47 CEST] <DHE> Gunstick: seems okay. try it
[15:27:53 CEST] <dongs> its LATM AAC btw
[15:28:19 CEST] <dongs> and im not quite sure of the mapping either. it could be 3 separate streams since the aac spec doesn't have a configuration for that many channels
[15:28:21 CEST] <Gunstick> DHE: yes, I confirm it does what I want. Now back to rotating the result 90° and I'm done. hoooray.
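[Editor's note: for the record, the pan filter can do the drop-and-keep in one step, without the channelsplit/anullsink/amerge chain. A self-contained sketch with a silent 5.0 lavfi source; the mpeg4 encoder is used only because it is always compiled in:]

```shell
# pan picks FL/FR/FC out of the 5.0 layout directly, then showwaves
# graphs the resulting 3 channels.
ffmpeg -y -f lavfi -t 1 -i "aevalsrc=0|0|0|0|0:channel_layout=5.0" \
       -filter_complex "[0:a]pan=3.0|c0=FL|c1=FR|c2=FC,showwaves=s=640x360:split_channels=1:mode=line,format=yuv420p[v]" \
       -map "[v]" -c:v mpeg4 /tmp/waves.mkv
```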
[16:03:50 CEST] <dongs> uh... where exactly do i find out if F302VC has TIM3 or not
[16:03:54 CEST] <dongs> im seeing conflicting shit
[16:04:01 CEST] <dongs> and i powered it up and all registers are at 0...
[16:05:14 CEST] <dongs> hmm my reference manual is for 303
[16:05:17 CEST] <dongs> i bet 302 is more gimped
[16:14:54 CEST] <dongs> hm 302xB/C is supposed to have TIm3
[16:14:59 CEST] <dongs> err what
[16:15:01 CEST] <dongs> wrong channel
[16:15:21 CEST] <durandal_1707> dongs: nope, please continue, its very interesting :)
[16:28:53 CEST] <roxlu> hey, when I play a video with `ffplay` can I see what video decoder is used?
[16:31:14 CEST] <roxlu> this is what i see https://gist.github.com/roxlu/e9b37b04bd749f45774d7e583bb4f10c
[16:36:02 CEST] <tinystoat> roxlu: ffprobe
[16:37:02 CEST] <roxlu> tinystoat: what option tells me what decoder is used?
[16:37:46 CEST] <tinystoat> roxlu: wow sorry mate i completely misread what you were asking.
[16:38:05 CEST] <roxlu> hehe np :)
[16:46:56 CEST] <roxlu> ok, I just tried to start a conversion to check if that shows what decoder is used and I got ` Stream #0:0 -> #0:0 (h264 (native) -> theora (libtheora))`
[16:47:08 CEST] <roxlu> I guess the "native" is the built-in decoder?
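[Editor's note: yes, "(native)" in the stream-mapping line is ffmpeg's built-in decoder. A quick self-contained way to see that line, using the always-available mpeg4 codec:]

```shell
# Generate a tiny clip, then decode it to a null sink; the printed
# "Stream mapping" line names the decoder, e.g. "mpeg4 (native)".
ffmpeg -y -f lavfi -t 1 -i "testsrc=size=128x96:rate=10" -c:v mpeg4 /tmp/clip.avi
ffmpeg -y -i /tmp/clip.avi -f null -
# All decoders registered for a codec can be listed with:
ffmpeg -decoders | grep mpeg4
```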
[16:47:14 CEST] <Anony11> Hi Fellas
[16:47:28 CEST] <Anony11> Have a particular question: Have a program that continuously generates a file, named "file.jpg", every millisecond for 10 seconds. Need ffmpeg to input (-i) the file continuously and produce a "video.mp4" that is being "streamed" (keeps growing).
[16:55:56 CEST] <dongs> same filename?
[16:56:02 CEST] <dongs> is it getting overwritten?
[16:57:09 CEST] <kepstin> you'll have a bad time if you're doing something like truncating it then rewriting it, due to race conditions.
[16:57:15 CEST] <dongs> yeah every 10ms sounds kinda meh
[16:57:17 CEST] <dongs> err every ms
[16:57:21 CEST] <dongs> (even worse)
[16:57:39 CEST] <kepstin> yeah, 1000fps is kind of a lot :)
[16:58:07 CEST] <kepstin> need a hell of a computer to encode that in realtime, unless the images are really small.
[16:58:29 CEST] <Anony11> Yeah, but those "were numbers"... lol
[16:58:45 CEST] <dongs> anyway so?
[16:58:53 CEST] <dongs> either 1) fix your program to number the jpegs
[16:58:54 CEST] <Anony11> Yes, it's getting overwritten
[16:59:37 CEST] <Anony11> It's to save space
[16:59:37 CEST] <dongs> 2) do some filesystem hacking where it takes the written file and somehow sends it to ffmpeg on each open/write/close
[16:59:37 CEST] <dongs> maybe lunix + fuse or some other garbage could do it
[16:59:37 CEST] <Anony11> and resources, i can't keep getting filename_3billions.jpg + the file
[16:59:37 CEST] <kepstin> Anony11: truncate and rewrite is *not* going to work, you'll get partial file reads and it'll be corrupt :/
[16:59:46 CEST] <kepstin> Anony11: you probably want to pipe your applications output to ffmpeg rather than write a file.
[16:59:49 CEST] <dongs> do you want the output in realtime?
[17:00:30 CEST] <Anony11> Oh, yes, even better then
[17:00:31 CEST] <dongs> just pipe it as raw YUV/whatever
[17:00:40 CEST] <kepstin> yeah, save time by not bothering with the jpeg encoding
[17:01:00 CEST] <kepstin> (unless your high speed camera does hardware jpeg or something, i guess)
[17:01:56 CEST] <Anony11> No, it's worse. It's a generated computer image, and I need to "stream it", that's why i said jpg and possibly can't use raw data, because who knows how it operates low level
[17:02:31 CEST] <dongs> how much control do you have over whatever generates it
[17:02:32 CEST] <Anony11> it's a square on sdl
[17:02:34 CEST] <Anony11> lol
[17:02:34 CEST] <kepstin> if it's a generated computer image, at some point it must exist as a bitmap in memory before it's encoded to jpeg...
[17:02:44 CEST] <dongs> yeah SDL so uh
[17:02:50 CEST] <dongs> you can certainly access to the bbackbuffer
[17:03:41 CEST] <Freneticks> hey is it possible to make like a "rule" only rencode sound if it's eac3 ?
[17:04:22 CEST] <Anony11> Yeah, but sdl probably handles it through opengl, so who knows... it's not really something "intermediate"; yeah it would be very fast, but I'm trying to "develop fast" rather than "execute fast"
[17:04:54 CEST] <dongs> instead of saving to jpeg just pipe whatever intermediate shit you get before that to ffmpeg directly.
[17:05:22 CEST] <Anony11> R-really? would it work?
[17:05:30 CEST] <dongs> why not?
[17:05:43 CEST] <dongs> i mean you'll probably need to edit sources either way
[17:06:31 CEST] <Anony11> Cause, even tho of Italian Descent, Mario never tought me how to use pipes, and gave me Windows
[17:07:15 CEST] <kepstin> Freneticks: no, you'd have to script it (e.g. by using ffprobe to check codecs before writing your ffmpeg command line)
[17:07:31 CEST] <Freneticks> okay
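[Editor's note: kepstin's suggestion for Freneticks, sketched: probe the codec first, then build the ffmpeg command. The sample input is generated here with the native eac3 encoder so the snippet is self-contained; filenames are arbitrary.]

```shell
# Make a sample file with an eac3 track.
ffmpeg -y -f lavfi -t 1 -i "sine=frequency=440" -c:a eac3 /tmp/in.mkv

# Ask ffprobe what the first audio stream is, and branch on it.
codec=$(ffprobe -v error -select_streams a:0 \
        -show_entries stream=codec_name -of csv=p=0 /tmp/in.mkv)
if [ "$codec" = "eac3" ]; then
    ffmpeg -y -i /tmp/in.mkv -c:a ac3 /tmp/out.mkv    # re-encode the audio
else
    ffmpeg -y -i /tmp/in.mkv -c copy /tmp/out.mkv     # leave everything alone
fi
```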
[17:11:32 CEST] <dongs> Anony11: pipes work on windows too.
[17:11:59 CEST] <dongs> so instead of writing your shit to a file every time, maybe add some mjpeg header (if you really insist on using jpeg) and then feed that into ffmpeg
[17:13:39 CEST] <kepstin> no special header needed, ffmpeg can read a stream of jpeg images from a pipe
[17:13:45 CEST] <dongs> yea?
[17:13:53 CEST] <dongs> just continuous JFIF or whatever shit works?
[17:13:56 CEST] <kepstin> but yeah, skipping the jpeg and streaming raw video would be better here.
[17:14:03 CEST] <kepstin> yeah.
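[Editor's note: the pipe approach, sketched end to end. The first ffmpeg stands in for the SDL program dumping its backbuffer as raw RGB frames to stdout:]

```shell
# Producer writes raw rgb24 frames to stdout; consumer reads them from
# stdin, which needs the geometry spelled out since raw video carries
# no header.
ffmpeg -y -f lavfi -t 1 -i "testsrc=size=128x96:rate=10" \
       -f rawvideo -pix_fmt rgb24 - |
ffmpeg -y -f rawvideo -pix_fmt rgb24 -video_size 128x96 -framerate 10 \
       -i - -c:v mpeg4 /tmp/piped.avi
# A stream of concatenated JPEGs works the same way via:
#   -f image2pipe -c:v mjpeg -i -
```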
[18:54:50 CEST] <StyXman> -pattern_type glob -i '*.jpg' is not working for me: Could not open file : *.jpg
[19:30:53 CEST] <pink_mist> lol
[19:48:18 CEST] <DHE> on the plus side, I doubt they'll make that mistake again any time soon
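[Editor's note: two usual causes of StyXman's error, for the record: the pattern reached ffmpeg already expanded by the shell (missing quotes), or the build has no glob() at all (Windows builds of ffmpeg don't support -pattern_type glob). The numbered-sequence pattern avoids both. A self-contained sketch:]

```shell
# Write a few numbered jpegs, then read them back with a quoted glob.
mkdir -p /tmp/globdemo
ffmpeg -y -f lavfi -t 1 -i "testsrc=size=64x48:rate=5" /tmp/globdemo/img%03d.jpg
ffmpeg -y -framerate 5 -pattern_type glob -i '/tmp/globdemo/*.jpg' \
       -c:v mpeg4 /tmp/globdemo/out.avi
# Portable alternative (works on Windows builds too):
#   ffmpeg -framerate 5 -i /tmp/globdemo/img%03d.jpg out.avi
```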
[19:59:58 CEST] <steve___> I'd like to skip the first 1:40 of an mp4 video I took with my android phone and add a video fade in and a video fade out. Should this be done in two steps, something like -ss 1:49.0 -c copy and then add the fades, to avoid re-encoding?
[20:07:33 CEST] <DHE> it can be done in a single shot, but it will require a transcode step
[20:07:54 CEST] <DHE> though you'll need to know how many frames are involved ahead of time, so a dry run may be required first
[20:15:03 CEST] <steve___> DHE: So making a 01-fade_in.mp4 and a 03-fade_out.mp4 and then concat those three files would be faster/saner?
[20:32:04 CEST] <Toneloc> Can ffmpeg be scripted to create videos such as this?? >> https://www.youtube.com/watch?v=qtdqJQqaajM
[20:33:36 CEST] <Toneloc> I would like to import text data into ffmpeg and generate the video using pre-defined graphics and format (transition, fade etc.)
[20:34:47 CEST] <kepstin> Toneloc: sort of; one way to do that sort of thing would be to design the text as an ass subtitles file and use ffmpeg to render the subtitles file over the static background image.
[20:35:49 CEST] <Toneloc> Would it be easy to also embed an audio track that was made separately?
[20:36:44 CEST] <kepstin> combining separate audio and video tracks into a single output file is probably one of the easiest things you can do with ffmpeg.
[20:37:09 CEST] <Toneloc> kepstin - Thanks for that, do you know anywhere there is an example of this setup with ffmpeg?
[20:37:48 CEST] <Toneloc> kepstin - I don't really mind if it's not ffmpeg, I'm just looking for an easy and cheap way to achieve this
[20:37:59 CEST] <Toneloc> what do people usually use for this?
[20:38:29 CEST] <kepstin> this is a kind of unusual use. but there's examples out there for putting subtitles onto a video with ffmpeg and combining audio/video tracks. The hardest part of doing this would be writing the subtitle file to use as input.
[20:39:35 CEST] <Toneloc> kepstin - I'm sure I can write a program to write the subtitle file
[20:39:52 CEST] <Toneloc> if the format is open, I'm sure its something like .srt ?
[20:40:23 CEST] <kepstin> srt doesn't have much in the way of formatting; if you want to do positioning and animations and whatnot, you'd want to use the ass format.
[20:41:24 CEST] <Toneloc> kepstin - thanks, I'll look into that
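[Editor's note: kepstin's approach, sketched. bg.png, text.ass and music.mp3 are placeholder filenames, and the subtitles filter requires an ffmpeg built with libass:]

```shell
# Loop a still image, burn the .ass subtitles over it, add the audio
# track, and stop when the shortest input (the audio) ends.
ffmpeg -loop 1 -i bg.png -i music.mp3 \
       -vf "subtitles=text.ass" \
       -c:v libx264 -c:a copy -shortest out.mp4
```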
[20:47:11 CEST] <DHE> steve___: no, pretty sure you can just do a single encode job with 2 fade commands, one for start and one for end. but you need an exact duration to set up the end fade.
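[Editor's note: DHE's single-pass answer, sketched on a synthetic 3-second clip. For a real file the fade-out start time comes from the clip's duration minus the fade length, which is why a dry run or ffprobe is needed first:]

```shell
# 3 s source; keep everything after the 1 s mark, fade in over the
# first 0.5 s of the result and out over its last 0.5 s (output: 2 s).
ffmpeg -y -f lavfi -t 3 -i "testsrc=size=128x96:rate=10" -c:v mpeg4 /tmp/src.avi
ffmpeg -y -ss 1 -i /tmp/src.avi \
       -vf "fade=t=in:st=0:d=0.5,fade=t=out:st=1.5:d=0.5" \
       -c:v mpeg4 /tmp/faded.avi
```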
[21:32:47 CEST] <Janhouse> I have a problem where using -hwaccel vaapi crashes for me when trying to output to rtmp stream. But it works fine when writing to a file.
[21:33:10 CEST] <Janhouse> This works fine: ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -hwaccel_output_format vaapi -i rtmp://10.10.12.2:1935/stream/YOLO -vf 'fps=30,scale_vaapi=w=1920:h=1080' -c:a copy -c:v h264_vaapi -b:v 6M -maxrate 6M out.mp4
[22:02:25 CEST] <an3k> git on Windows loves to do this and some code fails because of this
[22:02:35 CEST] <an3k> nvm
[23:12:01 CEST] <upgreydd> JEEB, sleeping? :D
[23:13:53 CEST] <upgreydd> Is there an option to set `cpb_*` things in h264? What's it related to?
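[Editor's note: for libx264 the cpb/HRD values are derived from the VBV options rather than set directly: -maxrate/-bufsize define the buffer, and x264's nal-hrd option signals the parameters in the stream. A sketch, assuming an ffmpeg built with libx264; mpegts output is used since CBR HRD is a transport-stream use case:]

```shell
# maxrate/bufsize define the coded picture buffer; nal-hrd=cbr writes
# the HRD/cpb parameters into the bitstream.
ffmpeg -y -f lavfi -t 1 -i "testsrc=size=128x96:rate=10" \
       -c:v libx264 -b:v 500k -maxrate 500k -bufsize 1M \
       -x264-params "nal-hrd=cbr" /tmp/cbr.ts
```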
[00:00:00 CEST] --- Wed Jun 5 2019