[Ffmpeg-devel-irc] ffmpeg.log.20170511

burek burek021 at gmail.com
Fri May 12 03:05:01 EEST 2017

[00:04:18 CEST] <james999> I thought Fabrice Bellard's name sounded familiar
[00:04:32 CEST] <james999> I read that in the Qemu article he was the founder and thought fuck where did i hear that before?
[00:05:21 CEST] <Tatsh> he's the guy who forked ffmpeg off
[00:05:24 CEST] <Tatsh> isn't he
[00:05:31 CEST] <Tatsh> created libav
[00:05:53 CEST] <james999> wikipedia says he created ffmpeg
[00:06:00 CEST] <james999> but those aren't logically exclusive
[00:06:39 CEST] <Tatsh> well he did create the fork of libav
[00:06:42 CEST] <Tatsh> under a pseudonym
[00:06:49 CEST] <Tatsh> and now it just causes confusion
[00:07:43 CEST] <Tatsh> but in any case, https://lwn.net/Articles/650816/
[00:08:00 CEST] <james999> hmm i didn't think so
[00:08:03 CEST] <james999> but maybe i'm wrong... lol
[00:09:06 CEST] <Tatsh> all for competition when it makes sense
[00:09:09 CEST] <james999> Tatsh: do you remember the pseudonym? i really think you've got to be wrong on this one
[00:09:56 CEST] <Tatsh> it says on wikipedia
[00:10:03 CEST] <Tatsh> Gérard Lantau
[00:10:14 CEST] <Tatsh> https://www.theregister.co.uk/2015/08/05/ffmpeg_leader_steps_down/
[00:15:17 CEST] <james999> right ok
[00:15:27 CEST] <james999> i'm trying to find the names of the libav people but none of the articles say
[02:43:24 CEST] <Tatsh> trying to use mjpeg_cuvid
[02:43:25 CEST] <Tatsh> Impossible to convert between the formats supported by the filter 'Parsed_setpts_0' and the filter 'auto_scaler_0'
[02:43:29 CEST] <Tatsh> i'm using -f concat
[02:43:49 CEST] <Tatsh> ffmpeg -y -hwaccel cuvid -f concat -safe 0 -c:v mjpeg_cuvid -i chunks.DwnIP3isM8.txt -an -preset veryslow -profile:v high -level 4.1 -flags:v +cgop -vf 'setpts=0.25*PTS' -g 12 -bf 2 -b:v 8M -maxrate:v 8M -bufsize:v 16M -pix_fmt yuv420p -movflags +faststart -vcodec libx264 -pass 1 11270506.mp4
[02:43:53 CEST] <Tatsh> any ideas?
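[Editor's note] A possible explanation, not confirmed in the log: with `-hwaccel cuvid` the decoder emits CUDA frames that software filters (setpts, the auto-inserted scaler) cannot accept. One sketch of a workaround is to keep the mjpeg_cuvid decoder but drop `-hwaccel cuvid`, so frames land in system memory before the filtergraph runs:

```shell
# Sketch only: assumes the "Impossible to convert between the formats" error is
# caused by CUDA frames reaching software filters.  Filenames are from the log.
cmd='ffmpeg -y -f concat -safe 0 -c:v mjpeg_cuvid -i chunks.DwnIP3isM8.txt -an -vf setpts=0.25*PTS -c:v libx264 -preset veryslow -profile:v high -level 4.1 -flags:v +cgop -g 12 -bf 2 -b:v 8M -maxrate:v 8M -bufsize:v 16M -pix_fmt yuv420p -movflags +faststart -pass 1 11270506.mp4'
echo "$cmd"
```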
[04:32:28 CEST] <Exairnous> anyone know what's wrong with this filter_complex?
[04:32:30 CEST] <Exairnous> -filter_complex "[0:a]volume=2.0[a0]; [1:a]volume=0.8[a1]; [a0][a1]amerge=inputs=2[a],pan=stereo|c0<c0+c1|c1<c2+c3[aout]"
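[Editor's note] Exairnous never got an answer in the log. The parser most likely rejects this graph because an output label such as `[a]` ends a filterchain, so it cannot be followed by `,pan=...`; chaining amerge straight into pan should fix it (input/output filenames below are placeholders):

```shell
# Corrected graph: drop the stray [a] label so amerge feeds pan directly.
graph='[0:a]volume=2.0[a0];[1:a]volume=0.8[a1];[a0][a1]amerge=inputs=2,pan=stereo|c0<c0+c1|c1<c2+c3[aout]'
echo "ffmpeg -i in0.mp4 -i in1.mp4 -filter_complex \"$graph\" -map '[aout]' out.mka"
```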
[04:33:12 CEST] <james999> hmm, anybody know about the openmax decoder on the raspberry pi?
[04:33:26 CEST] <james999> apparently you have to use omxplayer and it sucks according to this gentleman in another channel
[04:48:29 CEST] <Wallboy> i have a question about scale2ref. I'm using it to scale a watermark so it's 30% the width of the video while maintaining the correct aspect ratio of the watermark by doing the following -filter_complex "[1:v][0:v]scale2ref=iw*0.3:(iw*0.3)*(LOGOHEIGHT/LOGOWIDTH)[logo][base]". I have to calculate the logo aspect ratio prior to the command, but is there a way to specify iw/ih variables for the logo input?
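[Editor's note] If I'm reading the scale2ref documentation right, the w/h expressions use the *reference* input's size as iw/ih, and additionally expose the scaled input's own size as main_w/main_h (plus mdar for its aspect ratio), so the logo's dimensions don't have to be hard-coded:

```shell
# Sketch: 30% of the video's width, height derived from the logo's own
# main_w/main_h ratio.  video.mp4/logo.png and the overlay position are placeholders.
graph='[1:v][0:v]scale2ref=w=iw*0.3:h=iw*0.3*main_h/main_w[logo][base]'
echo "ffmpeg -i video.mp4 -i logo.png -filter_complex \"$graph;[base][logo]overlay=0:0\" out.mp4"
```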
[05:09:32 CEST] <furq> james999: --enable-mmal for decoding, --enable-omx-rpi for encoding
[05:14:29 CEST] <james999> i take it those are configure flags
[05:14:34 CEST] <furq> yes
[05:14:53 CEST] <james999> do I have to be on the pi itself to compile ffmpeg or can i do it from the comfort of my desktop?
[05:15:09 CEST] <furq> you can cross compile
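[Editor's note] A rough sketch of a cross-compile configure line for the Pi combining furq's flags; the `arm-linux-gnueabihf-` toolchain prefix is an assumption and should be adjusted for your cross toolchain:

```shell
# Hypothetical configure invocation for cross-compiling ffmpeg for the Pi
# with MMAL decoding and OMX encoding enabled.
conf='./configure --enable-cross-compile --cross-prefix=arm-linux-gnueabihf- --arch=arm --target-os=linux --enable-mmal --enable-omx --enable-omx-rpi'
echo "$conf"
```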
[05:16:06 CEST] <james999> yes!, awesome thanks
[06:03:01 CEST] <Wallboy> Does the order of filters matter? If I want to hflip a video and also scale it, should I first flip it, then scale, or vice versa? Or does that really matter?
[06:08:53 CEST] <JC_Yang> I guess flip then scale saves you a bit of processing power, just a guess
[07:02:05 CEST] <Glanzmann> Hello, I would like to record my screen plus audio on Debian stable. Screen recording works. But when I try to record audio with ffmpeg 3.2.4-static from the website, it no longer has the alsa input device. What should I use instead?
[07:02:33 CEST] <Glanzmann> https://pbot.rmdir.de/IYIuhFZXS6vEcSp56AzoRw
[07:03:08 CEST] <Glanzmann> I'm using: ffmpeg -f alsa -ac 2 -i ${audiodevice} -f x11grab -video_size 1024x768 -framerate 10 -i :0.0 -preset ultrafast lab-%02d-%02d.mkv
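[Editor's note] Glanzmann got no answer in the log. A plausible cause: the static builds are generally compiled without ALSA support, so `-f alsa` simply isn't there. A sketch of how to verify, and a fallback to the distro package (`/usr/bin/ffmpeg` and the `default` device name are assumptions):

```shell
# Print the commands to check device support and to record with the distro build.
check='ffmpeg -hide_banner -devices'
fallback='/usr/bin/ffmpeg -f alsa -ac 2 -i default -f x11grab -video_size 1024x768 -framerate 10 -i :0.0 -preset ultrafast out.mkv'
echo "$check"      # "alsa" should appear among the input devices
echo "$fallback"
```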
[08:04:31 CEST] <beandog> I'm stuck. I'm working on my first audio encoder in C, and I can figure out how to decode the audio (I think) but not put the frames / packets in the new one
[08:04:42 CEST] <beandog> I'm not even sure if I'm explaining it right, heh
[08:05:26 CEST] <beandog> code if you wanna see where I'm at so far, https://gist.github.com/beandog/7f51cdd45cf7ca9c8e4fce33760dfc60#file-avbox-c-L202
[08:44:49 CEST] <Pandela> beandog: Wish I could help, but pretty dank man. Is it your own codec or are ya rebuilding one?
[08:45:08 CEST] <beandog> just going from ac3 on a dvd to aac
[08:45:17 CEST] <beandog> nothing special
[08:49:29 CEST] <Pandela> Stilll though. And did you try the development channel?
[08:51:23 CEST] <beandog> nope, I should do that
[08:51:31 CEST] <beandog> thx
[08:51:59 CEST] <beandog> topic says to bump back in here :|
[08:52:01 CEST] <beandog> s'all good
[09:12:23 CEST] <monoxane> hey, i need ffmpeg to output in realtime, ie at 30fps instead of whatever it can pump out, how do i do this?
[09:13:22 CEST] <celyr> I have a car that can't go over 50km/h but I want it to go 140km/h, how i do this ?
[09:15:55 CEST] <vlt> bend space time?
[09:15:58 CEST] <monoxane> *cant tell if shitting on me or cant understand question*
[09:16:17 CEST] <monoxane> ffmpeg is pumping out frames as fast as it can and i need it to output at 30fps, the same as the input
[09:16:47 CEST] <vlt> That's basically the 50 vs. 140 km/h example.
[09:17:27 CEST] <Nacht> Doesn't he mean the -re option ?
[09:17:28 CEST] <Mandevil> Or vice versa.
[09:17:54 CEST] <celyr> if you eventually are able to bend space time to output the correct number of fps can you please share your technology ? I have some applications in my mind
[09:18:21 CEST] <celyr> Jokes aside, can you please provide all the information and what is the goal ? So we can eventually help you to find out a solution
[09:18:28 CEST] <vlt> monoxane: ffmpeg should be able to make use of multiple cpu cores (if you make them available to the process).
[09:19:02 CEST] <Mandevil> vlt: Maybe monoxane means that ffmpeg is outputing frames at 140 fps, but they only need 50 fps?
[09:19:05 CEST] <Nacht> I think he means ffmpeg is going to fast ?
[09:19:42 CEST] <monoxane> Mandevil, is correct
[09:20:28 CEST] <Mandevil> monoxane: Is this for streaming?
[09:20:59 CEST] <monoxane> yup
[09:21:42 CEST] <Mandevil> monoxane: Then it should be the client who is throttling the server.
[09:22:09 CEST] <celyr> yeah, client should be happy to fill his buffer asap
[09:22:10 CEST] <Mandevil> monoxane: Client only consumes frames at specified rate, so eventually it will block the server from producing any more.
[09:22:52 CEST] <monoxane> yea but i cant do that because of how im running the websockets stream
[09:23:04 CEST] <monoxane> so i ned to limit ffmpegs output to the input fps
[09:23:07 CEST] <monoxane> *need
[09:23:16 CEST] <Nacht> Using the -re option on the input should do that afaik
[09:24:15 CEST] <monoxane> thanks Nacht
[09:24:22 CEST] <monoxane> that did exactly what i wanted
[09:24:26 CEST] <Nacht> np
[09:42:34 CEST] <monoxane> now weird other problems, it stutters when -re is used
[09:42:49 CEST] <monoxane> would that just be input read issues with my drive?
[10:03:48 CEST] <Pandela> Share your command line
[10:03:50 CEST] <Pandela> ?
[10:05:31 CEST] <Pandela> And have you messed with the bitrate at all?
[10:05:34 CEST] <monoxane> ffmpeg -re -i <video> -f mpegts -codec:v mpeg1video -s 960x540 -b:v 1500k -r 30 -bf 0 -codec:a mp2 -ar 44100 -ac 1 -b:a 128k <password>/<streamone>.ts
[10:05:37 CEST] <monoxane> no i havnt
[10:07:00 CEST] <Pandela> Maybe a lower video bitrate, and see what happens when you dont have audio
[10:07:41 CEST] <Pandela> With the -an option perhaps
[10:08:07 CEST] <monoxane> hm yea thats the problem
[10:09:36 CEST] <Pandela> The audio?
[10:10:07 CEST] <monoxane> yea
[10:10:15 CEST] <monoxane> -an fixed it
[10:10:27 CEST] <monoxane> but i need audio >_>
[10:10:32 CEST] <Pandela> Maybe a different audio codec will help, maybe remove your audio options and see if its too high for the codec or something
[10:10:38 CEST] <Pandela> <_<
[10:11:03 CEST] <monoxane> i literally cant change codecs though, i have to be outputting .ts
[10:11:21 CEST] <Pandela> It has to be mp2?
[10:12:03 CEST] <monoxane> yea
[10:12:11 CEST] <monoxane> streaming over websockets to html5
[10:12:15 CEST] <monoxane> via nodejs
[10:12:17 CEST] <Pandela> Oh i see
[10:12:20 CEST] <Pandela> noice
[10:13:40 CEST] <durandal_1707> mp2 should be fast to encode
[10:13:47 CEST] <Pandela> But hmm, the only thing i can think of so far is lowering the audio bitrate, since lowering the sample rate or -ar would probably lower the pitch of the audio
[10:15:04 CEST] <monoxane> lowering bitrate does nothing
[10:15:17 CEST] <Pandela> figures :/
[10:15:17 CEST] <Threads> is it actually going to localhost ?
[10:15:48 CEST] <monoxane> both localhost and over the internet have the same issue
[10:16:41 CEST] <monoxane> wtf no audio also results in higher res video
[10:16:56 CEST] <monoxane> theres something fuckey going on here
[10:20:12 CEST] <monoxane> im gonna try pumping obs into this to see if its ffmpeg or my streaming stuff
[10:20:47 CEST] <Pandela> Good call
[10:24:10 CEST] <Wallboy> I'm trying to concat an outro video and I'm a bit lost on where I need to add the concat filter and which streams I'm supposed to use for input/output. Here is my current ffmpeg command that adds a watermark:
[10:24:18 CEST] <Wallboy> ffmpeg -y -i test2.mp4 -i testlogo2.png -i outro1.mp4 -c:a copy -filter_complex "[0:v]hflip,scale=640:480:force_original_aspect_ratio=decrease,pad=640:480:(ow-iw)/2:(oh-ih)/2[scaled];[1:v]format=rgba,lut=a=val*0.7[logo];[logo][scaled]scale2ref=iw*0.3:(iw*0.3)*(118/409)[logo][base];[base][logo]overlay=0:342" -threads 4 test_enc.mp4
[10:24:48 CEST] <Wallboy> I'm guessing I need to add it after the overlay filter, but I tried and was getting some no matching stream errors
[10:28:38 CEST] <Wallboy> overlay=0:342[out];[out][2:0][2:1]concat=n=2:v=1:a=1[outv][outa]" -map "[outv]" -map "[outa]" -threads 4 test_enc.mp4 is what is giving me no matching streams error
[10:28:41 CEST] <Wallboy> what am I doing wrong?
[10:30:56 CEST] <Pandela> If you're trying to concat a couple of videos, isnt it ffmpeg -f concat -i concat.txt
[10:31:22 CEST] <Pandela> Been awhile since I used it, but I believe thats how you concatenate
[10:31:23 CEST] <Wallboy> Well I have to do processing on the main video by adding the logo watermark first
[10:32:11 CEST] <Wallboy> Then I'm trying to take that output [out] and concat with the outro1.mp4 video
[10:32:36 CEST] <Wallboy> "stream specifier ':0' in filtergraph description ... matches no streams
[10:34:28 CEST] <Wallboy> which I'm guessing means the problem is with [2:0], but why? I can see in the output: Stream #2:0(und): video exists
[10:38:21 CEST] <Wallboy> tried [2:v][2:a] as well, same problem
[10:50:44 CEST] <monoxane> hm its definitely ffmpeg giving the stuttering
[11:34:23 CEST] <Pandela> monoxane: Doesnt OBS use ffmpeg anyway?
[11:37:19 CEST] <monoxane> no idea
[11:39:40 CEST] <JEEB> a lot of stuff utilizes in some cases the APIs provided by FFmpeg (libavcodec/-format etc)
[11:40:02 CEST] <JEEB> the utilization of those APIs can and most likely will differ from what ffmpeg.c (the command line tool) does
[11:51:13 CEST] <JC_Yang> will patches improving documentation be accepted? as a user, I find ffmpeg's api and documentation quite frustrating. without abandoning the c-centric project guideline, improving the documentation is the path requiring the least effort.
[11:56:51 CEST] <JEEB> JC_Yang: yes
[11:56:55 CEST] <JEEB> patches to ffmpeg-devel
[11:59:44 CEST] <JC_Yang> get it
[12:07:43 CEST] <Wallboy> I'm getting some other errors now. I'm trying to get both videos first scaled to the same aspect ratio before concat, but now I'm getting a "Media type mismatch between the 'Parsed_pad_6' filter output pad 0 (video) and the 'Parsed_concat_9' filter input pad 1 (audio)"
[12:07:52 CEST] <Wallboy> With the following command:
[12:08:02 CEST] <Wallboy> ffmpeg -y -i test.mp4 -i logo.png -i outro.mp4 -filter_complex "[0:v]scale=640:480:force_original_aspect_ratio=decrease,pad=640:480:(ow-iw)/2:(oh-ih)/2[scaled];[1:v]format=rgba,lut=a=val*0.7[logo];[2:v]scale=640:480:force_original_aspect_ratio=decrease,pad=640:480:(ow-iw)/2:(oh-ih)/2[outro];[logo][scaled]scale2ref=iw*0.3:(iw*0.3)*(118/409)[logo][base];[base][logo]overlay=0:342[out];[out][outro]concat=n=2:v=1:a=1" -threads 4 test_enc.mp4
[12:08:51 CEST] <Wallboy> I'm quite new to ffmpeg and I know I'm already way over my head in this stuff, but I'm a bit lost on where I'm doing something wrong lol
[12:12:11 CEST] <dystopia_> use 1 -i
[12:12:22 CEST] <dystopia_> encode/scale/whatever your video
[12:12:27 CEST] <dystopia_> do same with video 2
[12:12:30 CEST] <dystopia_> and your png
[12:12:34 CEST] <dystopia_> then cat the outputs
[12:12:54 CEST] <Wallboy> 1 -i where?
[12:13:01 CEST] <Wallboy> sorry, again quite new to ffmpeg lol
[12:13:36 CEST] <Wallboy> you mean I should do seperate encodes first?
[12:16:00 CEST] <Wallboy> I also want to eventually add a crossfade between the two videos, but for now I am just trying to get ANY sort of concatenation working lol
[12:16:28 CEST] <thebombzen> Wallboy: you're doing it correctly, but you're not feeding it to the concat filter the right way
[12:16:49 CEST] <thebombzen> for the concat filter as you have it, which is n=2:v=1:a=1 you have two streams each of video and audio
[12:17:17 CEST] <thebombzen> also don't use the same [id] twice
[12:17:51 CEST] <Wallboy> did i use the same id twice?
[12:18:02 CEST] <thebombzen> yea you used [logo] twice
[12:18:08 CEST] <thebombzen> as both the input and output of scale2ref
[12:18:31 CEST] <Wallboy> you're right, changing that now
[12:18:38 CEST] <thebombzen> also, you have [out][outro] fed to concat but those are just video streams, and concat needs V/A/V/A
[12:18:43 CEST] <thebombzen> because it needs two streams each of video and audio
[12:18:57 CEST] <Wallboy> i thought [out] contains both streams, and same with outro
[12:19:17 CEST] <thebombzen> it does not, because [out] is the output of the overlay video filter
[12:19:31 CEST] <thebombzen> when you use [0:v], this means "the video stream from input 0"
[12:19:41 CEST] <Wallboy> how do i get the streams from [out] and [outro] then
[12:20:11 CEST] <thebombzen> you can't "get the audio streams from [out] and [outro]" because the audio was never even in that filterchain to start
[12:20:19 CEST] <thebombzen> you should probably use [out][0:a][outro][2:a] if you're looking to concat the audio from the original videos
[12:20:41 CEST] <thebombzen> the '[0:v]' you used specifically means "the video stream from input 0." which excludes audio
[12:20:51 CEST] <thebombzen> likewise, "[0:a]" is the audio stream from input zero
[12:21:01 CEST] <thebombzen> however you probably also have to do it like this
[12:21:23 CEST] <thebombzen> "[out][0:a][outro][2:a]concat=n=2:v=1:a=1[v][a]" and then add the option -map "[v]" -map "[a]"
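[Editor's note] A hedged sketch assembling thebombzen's advice into a full command (V/A/V/A inputs to concat, `-map` for the labelled outputs, the reused [logo] label renamed to [wm]). It assumes the outro already matches the main video's size and SAR; filenames and the 118/409 logo ratio are from the log. Note the original command's `-c:a copy` has to go, since concatenated audio must be re-encoded:

```shell
# Sketch of the corrected watermark-then-concat command.
graph='[0:v]hflip,scale=640:480:force_original_aspect_ratio=decrease,pad=640:480:(ow-iw)/2:(oh-ih)/2[scaled];[1:v]format=rgba,lut=a=val*0.7[logo];[logo][scaled]scale2ref=iw*0.3:(iw*0.3)*(118/409)[wm][base];[base][wm]overlay=0:342[out];[out][0:a][2:v][2:a]concat=n=2:v=1:a=1[v][a]'
echo "ffmpeg -y -i test2.mp4 -i testlogo2.png -i outro1.mp4 -filter_complex \"$graph\" -map '[v]' -map '[a]' test_enc.mp4"
```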
[12:23:49 CEST] <Wallboy> [Parsed_concat_9 @ 01247540] Input link in1:v0 parameters (size 640x480, SAR 0:1) do not match the corresponding output link in0:v0 parameters (640x480, SAR 21001567:20997120)
[12:23:49 CEST] <Wallboy> [Parsed_concat_9 @ 01247540] Failed to configure output pad on Parsed_concat_9
[12:23:49 CEST] <Wallboy> Error configuring complex filters.
[12:23:50 CEST] <Wallboy> Invalid argument
[12:24:28 CEST] <Wallboy> i'm guessing it means something with my  two videos isn't matching up?
[12:24:30 CEST] <thebombzen> that's because they don't have the same sar
[12:24:48 CEST] <Wallboy> i thought I fix that with scaling each video to the same res
[12:25:04 CEST] <thebombzen> you know how DVDs are 720x480 but they can still be widescreen? it's cause the pixels aren't square
[12:25:05 CEST] <thebombzen> that's SAR
[12:25:33 CEST] <thebombzen> you used force_original_aspect
[12:25:37 CEST] <thebombzen> don't use that and you'll be fine
[12:26:04 CEST] <thebombzen> or, better yet, you could use the setsar filter
[12:26:06 CEST] <thebombzen> -vf setsar=1
[12:26:16 CEST] <thebombzen> er, put in in the filterchain, but yes
[12:27:34 CEST] <Wallboy> so remove the force_original... from both [0:v] and [2:v] chains and replace with setsar?
[12:27:46 CEST] <thebombzen> no, just leave setsar
[12:27:51 CEST] <thebombzen> just use setsar
[12:28:03 CEST] <thebombzen> note that if your videos weren't originally the same size, they might be stretched
[12:28:12 CEST] <Wallboy> they aren't the same size
[12:28:33 CEST] <Wallboy> that's why I was trying to force the aspect to be the same with adding letterbox or pillarbox bars
[12:31:05 CEST] <Wallboy> this is where I got the information on using force_aspect ratio https://superuser.com/questions/547296/resizing-videos-with-ffmpeg-avconv-to-fit-into-static-sized-player
[12:33:54 CEST] <Wallboy> i used setsar=1 in each of the [0:v] and [2:v] chains and it doesn't seem to have any effect
[12:33:56 CEST] <Wallboy> getting same error
[12:37:04 CEST] <Wallboy> i tried setsar=sar=1/1 and now it's showing the SAR 1:1 does not match SAR 30749:30720
[12:38:29 CEST] <Wallboy> had to put it after force aspect ratio
[12:38:52 CEST] <Wallboy> PogChamp it's working
[12:39:19 CEST] <Wallboy> :D
[12:42:30 CEST] <Wallboy> curious though why SAR would be different for videos for the computer made for monitors. I could understand if the video came from a DVD like you mentioned. Maybe it was just that outro video that had a weird SAR?
[12:43:18 CEST] <durandal_1707> weird sar
[12:44:34 CEST] <Wallboy> I guess setsar=sar=1/1 is just good practice to always have in the chain anyway then?
[12:44:44 CEST] <Wallboy> to avoid those problems
[12:45:11 CEST] <thebombzen> no it's not
[12:45:16 CEST] <thebombzen> because weird SARs like that are uncommon
[12:45:49 CEST] <Wallboy> could it ever hurt things to have it in there then?
[12:46:50 CEST] <Wallboy> i'm gonna run some more tests with different resolution videos to make sure no stretching or anything happens in the meantime
[12:48:54 CEST] <Wallboy> any pointers for adding a crossfade between the two videos? Or am i going down a really deep rabbit hole to get that working... lol
[12:50:00 CEST] <thebombzen> my recommendations are that for what you're doing you probably want to use a NLE
[13:04:29 CEST] <Wallboy> I'm guessing you mean like video editing software? I don't think I can since I need to batch process videos using ffmpeg
[13:04:55 CEST] <Wallboy> https://superuser.com/questions/1001039/what-is-an-efficient-way-to-do-a-video-crossfade-with-ffmpeg i'm reading this now, seems this guy figured it out
[13:42:12 CEST] <durandal_1707> Wallboy: efficient way is to write crossfade filter
[13:52:15 CEST] <Wallboy> lol write my own filter?
[13:52:35 CEST] <Wallboy> i've seen acrossfade for audio streams
[14:21:23 CEST] <SouLShocK> I'm trying to detect if my source material is interlaced and I'm rather confused by the output of the idet filter: https://pastebin.com/TJfwH3Tj almost half the frames are progressive
[14:22:07 CEST] <SouLShocK> is that normal for interlaced video?
[14:23:16 CEST] <SouLShocK> I thought it would be 0 progressive frames
[14:24:54 CEST] <kepstin> SouLShocK: it might be telecined, not interlaced?
[14:25:13 CEST] <kepstin> also, parts of the video with low motion can be misdetected
[14:28:10 CEST] <SouLShocK> ah ok
[14:30:17 CEST] <furq> it could also be hybrid film/video
[14:30:26 CEST] <furq> but most likely it's just misdetecting low-motion scenes
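[Editor's note] For reference, a sketch of how the idet numbers in SouLShocK's paste are typically produced (input filename assumed). As kepstin and furq note, idet is a heuristic: low-motion stretches often read as progressive even in genuinely interlaced material:

```shell
# Run the interlace-detection filter over the clip without writing any output.
cmd='ffmpeg -i input.mov -an -vf idet -frames:v 1000 -f null -'
echo "$cmd"
```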
[14:34:55 CEST] <DHE> can't check the codec for the interlaced frames bit? (codec permitting)
[14:36:06 CEST] <SouLShocK> ah yeah. silly me, didn't think of that. MediaInfo says "Interlaced. TFF"
[14:36:23 CEST] <furq> what's the source
[14:36:36 CEST] <SouLShocK> XDCAM HD422
[14:37:02 CEST] <SouLShocK> wrapped in mov container
[14:38:30 CEST] <furq> i've not worked with that format but i wouldn't necessarily trust what mediainfo says
[14:38:50 CEST] <furq> e.g. all dvd video is flagged as tff interlaced, even when it's not
[14:39:44 CEST] <furq> with that said, that clip is obviously tff interlaced
[14:39:51 CEST] <furq> at least a large part of it is
[14:40:13 CEST] <furq> and if it's off a camera then i assume the whole thing is
[14:45:13 CEST] <SouLShocK> yeah it's from a camera
[14:45:24 CEST] <SouLShocK> thanks
[14:47:33 CEST] <furq> i mean if the interlace flag is accurate then that should save you a lot of time
[14:47:43 CEST] <furq> you probably want to check that
[14:54:37 CEST] <ritsuka> SouLShocK: xdcam uses a different fourcc for each flavour
[14:54:45 CEST] <ritsuka> you can check that too, or the encoder name if available
[14:54:52 CEST] <ritsuka> your file says XDCAM HD422 1080i50 (50 Mb/s)
[14:56:34 CEST] <SouLShocK> ah good point
[16:26:08 CEST] <termos> my stream is playing well in ffplay but not in the rtmp flash players that i've tried on the web, just getting a black screen but it seems like it's streaming something. Missing some crucial metadata?
[16:33:54 CEST] <marcurling> Hello, how would I change/set audio frame rate (especially if it differs from video one), please?
[17:45:30 CEST] <kepstin> marcurling: audio works in a completely different way from video, attempting to make audio and video "frame" rates the same doesn't make sense
[17:46:46 CEST] <kepstin> marcurling: are you trying to change the audio speeed to fix audio/video sync?
[17:51:00 CEST] <Hfuy> Hello.
[17:51:03 CEST] <kerio> well, audio has frames too
[17:51:08 CEST] <kerio> they're just a bit more frequent than video frames
[17:51:33 CEST] <Hfuy> Is it me or is there no MPEG-3. There's MPEG-2, and MPEG-4. The audio mp3 is part of MPEG-2, as I understand it.
[17:51:43 CEST] <kepstin> yeah, but the frame length/rate is dependent on the codec - most audio codecs have frames that fit a fixed number of samples
[17:52:01 CEST] <kepstin> Hfuy: mp3 is actually "MPEG 1 layer 3"
[17:52:06 CEST] <kepstin> iirc
[17:52:12 CEST] <Hfuy> Oh yes, MPEG-1, you're absolutely right
[17:52:52 CEST] <Hfuy> So what happened to MPEG-3.
[17:53:11 CEST] <Hfuy> Oh. Rolled into MPEG-2.
[17:53:16 CEST] Action: Hfuy should have wiki'd first
[17:53:51 CEST] <teratorn> kepstin: I thought the common parlance was that an audio frame was one sample per channel (?)
[17:53:53 CEST] Action: kepstin notes that MPEG-2 has some updates to the 'mp3' audio format, but it mostly just consists of adding lower bitrate/sample rate modes
[17:55:19 CEST] <Hfuy> I'm writing a technical article about why H.264 (being part of MPEG-4) is notionally better than MPEG-2.
[17:55:53 CEST] <kepstin> teratorn: in audio codecs, a "frame" is a group of samples that are encoded/decoded together, e.g. the DCT is run over the group together
[17:56:03 CEST] <Hfuy> From what I read, it boils down to "more ways to turn parts of I frames into B frames."
[17:59:15 CEST] <kepstin> Hfuy: don't forget the improved entropy coding, and the ability to do prediction on sub-block partitions
[18:00:29 CEST] <kepstin> the much-increased amount of reference frames allowed helps a fair bit as well.
[18:02:34 CEST] <furq> https://en.wikipedia.org/wiki/H.264/MPEG-4_AVC#Features
[18:02:38 CEST] <furq> that pretty much covers it
[18:03:31 CEST] <Hfuy> The entropy coding is fairly easy to talk about.
[18:03:37 CEST] <Hfuy> "Magic lossless compression"
[18:03:48 CEST] <kepstin> Hfuy: note that using h264 in baseline or constrained baseline turns off a /lot/ of the new features. iirc it's still better than mpeg2, but not by all that much?
[18:04:03 CEST] <furq> it should still be a lot better than mpeg-2
[18:04:16 CEST] <furq> i'd have thought it'd be quite close to mpeg4
[18:04:35 CEST] <kepstin> hmm, or maybe i was thinking of mpeg4 asp, yeah
[18:04:59 CEST] <Hfuy> When you say "prediction on sub block partitions" are we talking about the motion compensation
[18:05:08 CEST] <Hfuy> or some other sort of prediction
[18:05:18 CEST] <Hfuy> the terminology gets a bit messy
[18:06:08 CEST] <kepstin> Hfuy: it can do both temporal (motion compensated) and spatial predictions in multiple block sizes, and apparently weighted prediction that combines both? :/
[18:06:40 CEST] <Hfuy> (Apropos of nothing, who or what is a Demi Lovato, and how does it compare to an entirely Lovato?)
[18:06:48 CEST] <Hfuy> (I am not a pop culture reference)
[18:07:06 CEST] <marcurling> kepstin no, I'm trying to make some (video) audio play on an old (hard) player (a FAI 'box') which doesn't get updates anymore.
[18:07:48 CEST] <marcurling> I just run a ffmpeg -r 25 : I upload to the player and will tell you ;)
[18:08:25 CEST] <kepstin> marcurling: ok, so, ... "audio frame rate" isn't relevant for that. Sample rate might be, but as long as you're doing either 44.1kHz or 48kHz it should be fine on most players...
[18:08:53 CEST] <kepstin> marcurling: more likely, the issue is that you're using either an audio or video codec that's not supported, or you're using too high of a profile on the video codec
[18:09:31 CEST] <marcurling> Oh, mediainfo tells me audio frame rate is still 43.066 fps (1024 spf) ; will see if it works...
[18:10:29 CEST] <marcurling> and FYI, sample rate is 44.1 so sure to play on any
[18:10:30 CEST] <kepstin> marcurling: that sounds like .. ~23ms audio frames? Hmm. kind of odd, what codec are you using? It had better not be opus, that definitely won't work on an out of date hw player ;)
[18:10:33 CEST] <furq> yeah that doesn't actually mean anything
[18:10:41 CEST] <furq> kepstin: that's standard aac
[18:11:10 CEST] <furq> 1024 samples per block
[18:11:14 CEST] <marcurling> I confirm: aac
[18:11:50 CEST] <furq> but yeah that has absolutely nothing to do with the video framerate
[18:12:05 CEST] <Hfuy> I'm not clear on what spatial prediction means in terms of MPEG-4.
[18:12:32 CEST] <Hfuy> The whole painting-gradients thing is part of that in other standards.
[18:13:14 CEST] <kepstin> Hfuy: spatial prediction just refers to any method of guessing the contents of a block based on the contents of nearby previously decoded blocks
[18:13:27 CEST] <furq> marcurling: the only way to change the "audio framerate" would be to change the sample rate
[18:13:47 CEST] <furq> lc-aac is always 1024 samples per block, so as long as it's 44.1khz then it'll always be 43.066 "fps"
[18:13:49 CEST] <Hfuy> Ah. This is how we get those lovely skies that should be graduated but look like a series of squares.
[18:14:01 CEST] <furq> and i hope the amount of scare quotes i used there will convince you that this information doesn't actually mean anything
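[Editor's note] Where MediaInfo's "43.066 fps" figure comes from, per furq's explanation: LC-AAC always packs 1024 samples per frame, so at 44.1 kHz the audio "frame rate" is just 44100/1024:

```shell
# 44100 samples/s divided by 1024 samples/frame = 43.066 frames/s.
awk 'BEGIN { printf "%.3f audio frames per second\n", 44100 / 1024 }'
```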
[18:14:27 CEST] <kepstin> Hfuy: well, that'll only happen if the codec decides to use dc prediction and doesn't have enough bitrate to encode residuals to make it look good...
[18:14:48 CEST] <Hfuy> Residuals?
[18:14:52 CEST] Action: Hfuy thought he knew more about this
[18:16:06 CEST] <marcurling> ok, thank you/got it all.
[18:17:25 CEST] <kepstin> Hfuy: "residuals" is the difference between what the predicted block is, and what it's actually supposed to look like.
[18:17:45 CEST] <kepstin> Hfuy: The way codecs work is the encoder says "build a block based on these nearby blocks/ blocks from other frames", picking values to make it look as close as possible to the original. Then it takes the difference between the original video and the prediction, and says "and make these little changes to it afterwards"
[18:18:06 CEST] <kepstin> but depending on bitrate, sometimes it doesn't have room to encode all the remaining differences, so it ignores some
[18:18:13 CEST] <kepstin> and then you get a "lossy" codec :)
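[Editor's note] Kepstin's description in toy numbers: a flat prediction, residuals against the real pixels, and a crude "drop small residuals" pass standing in for quantization. All values are invented for illustration:

```shell
# Prediction + residuals + lossy residual dropping, on four made-up pixels.
out=$(awk 'BEGIN {
  n = split("100 102 104 120", orig, " ");  # original pixel values
  pred = 101;                               # flat prediction from a neighbouring block
  for (i = 1; i <= n; i++) {
    r = orig[i] - pred;                     # residual the encoder would transmit
    q = (r >= -2 && r <= 2) ? 0 : r;        # lossy step: small residuals ignored
    printf "pixel %d: residual %+d, kept %+d, decoded %d\n", i, r, q, pred + q;
  }
}')
echo "$out"
```

Pixels 1 and 2 decode slightly wrong (101 instead of 100 and 102) because their residuals were dropped; pixels 3 and 4 keep theirs and decode exactly.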
[18:18:19 CEST] <Hfuy> Oh, right.
[18:18:55 CEST] <Hfuy> My knowledge runs out in I-frame DCT stuff like ProRes.
[18:19:01 CEST] <Hfuy> I'm broadly aware of motion compensation.
[18:25:02 CEST] <kepstin> so, since the encoding of residuals is the big part - they're hard to compress, entropy limits and all that, modern video codecs get better by figuring out how to encode fewer of them. Either by improving the quality of the prediction, or by improving psychological visual modelling to figure out which missing/wrong data people wouldn't notice as much
[18:27:18 CEST] <kepstin> h264 (and hevc) add a lot more choices the encoder can make to give better predictions, and encoders like x264 have done work in the psy optimization field to improve efficiency within an existing codec.
[18:27:40 CEST] <Hfuy> One thing I notice is that they're getting better at encoding fades to black.
[18:27:49 CEST] <Hfuy> My impression was that MPEG-1 didn't have any way of saying "like this, but darker"
[18:29:36 CEST] <Hfuy> This is a useful overview: https://www.vcodex.com/an-overview-of-h264-advanced-video-coding/
[18:37:39 CEST] <marcurling> Guys, do you prefer/recommend h264 (default) or x264 encoding?
[18:38:04 CEST] <kepstin> marcurling: by default ffmpeg will choose libx264 to encode h264 video
[18:38:04 CEST] <james999> Hfuy: nice overview, reading it now
[18:38:22 CEST] <kepstin> Hfuy: I think that sort of fade thing is done in h264 using the weighted prediction stuff - it could encode a block as "take this block from frame n-1, and take this DC block that's all black, and mix them 30%-70%"
[18:38:29 CEST] <kepstin> obviously it's more complicated than that
[18:39:13 CEST] <kepstin> I think weighted predictions can also be used to encode crossfades between 2 scenes?
[18:43:46 CEST] <Hfuy> Apparently weighted prediction can include explicit scaling and offset, so you absolutely can say "this, but darker."
[18:44:02 CEST] <kepstin> huh, cool.
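[Editor's note] The scale-and-offset form Hfuy describes can be written as pred = w*ref + offset; a toy number check for a 50% fade step (values invented):

```shell
# "This, but darker": weighted prediction of a flat reference block.
pred=$(awk 'BEGIN { ref = 200; w = 0.5; off = 0; printf "%d", w * ref + off }')
echo "predicted sample: $pred"
```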
[18:44:58 CEST] <marcurling> ty again kepstin
[18:45:43 CEST] <Hfuy> I believe you can also declare a block to be a specific colour, so it can actually encode flat colour with precision.
[18:45:53 CEST] <Hfuy> Although it'll break terribly around the edges of objects against that colour.
[18:48:51 CEST] <Hfuy> I'm not quite clear on where the motion tracking stuff comes in. The GOP arrives. We create an I-frame using DCT and the various prediction modes.
[18:51:30 CEST] <Hfuy> We then test the following frame for potential matches of its picture information with picture information from the I-frame.
[18:51:35 CEST] <Hfuy> ...right?
[18:54:33 CEST] <james999> idk Hfuy but if you come across any more good documents or books on h264 let me know
[18:54:58 CEST] <james999> tutorial might be a better word. we already discussed the lack of prospects for books in here
[18:56:07 CEST] <Hfuy> Need someone from the x264 staff on it really.
[18:56:17 CEST] <Hfuy> I'm a technical writer, but I haven't the background.
[19:04:31 CEST] <james999> yeah
[19:04:48 CEST] Action: james999 idly wonders which FOSS projects have the most comprehensive docs from the team itself
[19:07:29 CEST] <Hfuy> FOSS projects have docs?
[19:08:36 CEST] <Hfuy> It's a notable irony that a breed of software specifically designed to be shared and reused tends to simultaneously be the worst-documented and most impenetrable code on the planet.
[19:10:54 CEST] <georgios> i have problems with transcoding h264 with with vdpau and va. my gpu is integrated. AMD A10 8xxx
[19:11:00 CEST] <james999> yeah well apparently I wasn't the first to realize that. ;)
[19:11:43 CEST] <Hfuy> I've tried repeatedly to write a .net wrapper for libav.
[19:11:48 CEST] <Hfuy> Good bloody luck.
[19:11:52 CEST] <georgios> [h264_vdpau @ 0x56161221ff40] decode_slice_header error
[19:11:53 CEST] <georgios> [h264_vdpau @ 0x56161221ff40] no frame!
[19:11:55 CEST] <georgios> Error while decoding stream #0:0: Invalid data found when processing input
[19:12:00 CEST] <Hfuy> The documentation is essentially "the source code to ffmpeg.exe"
[19:12:57 CEST] <JEEB> not the examples? :P also even I who consider myself oblivious to the whole external APIs could demux and decode stuff :D
[19:13:15 CEST] <JEEB> ffmpeg.c IMHO would be an awful example
[19:13:35 CEST] <JEEB> it and its related files just sprawl random hacks that nobody knows if they're actually needed
[19:13:38 CEST] <JEEB> among other things
[19:13:44 CEST] <georgios> Impossible to convert between the formats supported by the filter 'Parsed_null_0' and the filter 'auto_scaler_0'
[19:14:04 CEST] <JEEB> Hfuy: this is old API usage but I deffo could figure this out https://github.com/jeeb/matroska_thumbnails/blob/master/src/matroska_thumbnailer.cpp#L98
[19:14:05 CEST] <james999> JEEB: which is precisely what happens when things aren't documented properly lol
[19:14:07 CEST] <Hfuy> The issue with writing a .net wrapper is that a lot of the types the API handles are rather large and complex structures which I suspect (I don't know) change rather frequently.
[19:14:15 CEST] <Hfuy> This can be handled but without better docs than exist it's a nightmare.
[19:14:21 CEST] <james999> have you heard the story of the cow poop saddles?
[19:14:38 CEST] <JEEB> james999: yes the documentation of that stuff is awful
[19:15:10 CEST] <JEEB> Hfuy: I *love* rust's bindgen for generating stuff out of the FFmpeg headers I'm using during build time
[19:15:17 CEST] <JEEB> has worked nicely for me so far
[19:16:38 CEST] <JEEB> james999: also all the awfulness in ffmpeg.c is why I try to get people to make their own API clients as soon as they get it validated that they can do what they want with the libraries (with ffmpeg.c or otherwise)
[19:17:18 CEST] <Hfuy> This is how you call into a native DLL in C#. https://pastebin.com/3FfyDKBE
[19:17:19 CEST] <JEEB> because ffmpeg.c tries to do /everything/ and handle completely broken stuff as well (although the latter also matches some of the demuxers/decoders)
[19:17:22 CEST] <Hfuy> Imagine doing that for libavcodec.
[19:17:28 CEST] <Hfuy> ALL of libavcodec.
[19:17:49 CEST] <JEEB> yes. that is something you want to generate based on your headers during build time
[19:17:58 CEST] <JEEB> I think VLC had something written in python for python
[19:18:03 CEST] <Hfuy> That's possible in theory for some simple types.
[19:18:32 CEST] <furq> that doesn't seem too bad
[19:18:42 CEST] <Hfuy> It's not too bad if you're just dealing with pointers and basic numeric types.
[19:18:46 CEST] <JEEB> if you have proper stuff like bindgen it should generally just work
[19:18:54 CEST] <furq> i meant that binding example
[19:19:05 CEST] <furq> there are much more cumbersome binding apis
[19:19:15 CEST] <Hfuy> If you're dealing with whatever damnation-spawned hunk of awfulness sws_getContext returns, for instance, it quickly becomes about as much fun as sucking off Satan.
[19:19:17 CEST] <JEEB> but yes, if you cannot read headers to get the data types of enums
[19:19:26 CEST] <james999> ah ok finally found the source for the story. it was supposedly british warplanes had to rub camel dung into the seat leather and nobody knew why. Then an old guy said it was because in Africa they used camels and the leather freaked them out so they rubbed camel dung.
[19:19:29 CEST] <JEEB> oh, sws
[19:19:32 CEST] <james999> The lesson is not to blindly follow tradition
[19:19:35 CEST] <JEEB> yes that is a special type of <beep>
[19:19:38 CEST] <georgios> basically accelerated decoding and encoding each gives its own error
[19:19:56 CEST] <Hfuy> I have no idea what an AVCodecContext is and I don't want to find out.
[19:19:58 CEST] <furq> how do you know how much fun it is to suck off satan
[19:20:07 CEST] <Hfuy> furq: I've done work for the BBC.
[19:20:08 CEST] <Hfuy> :)
[19:20:19 CEST] <JEEB> Hfuy: well that one is just a problem in doing bindings tbqh
[19:20:28 CEST] <JEEB> that's why you use llvm-based stuff to automatize it for you
[19:20:29 CEST] <furq> in what capacity
[19:20:37 CEST] <Hfuy> furq: Freelance thank god.
[19:20:45 CEST] <furq> i meant what role
[19:20:51 CEST] <JEEB> Hfuy: https://github.com/jeeb/ffmpeg_ffi_test/blob/master/build.rs#L17
[19:20:56 CEST] <JEEB> I halleluyah'd
[19:21:12 CEST] <Hfuy> JEEB: It's a problem in doing bindings if you can go to msdn and read all about it. It's a hideous nightmare of pain and torment if you are relying on FOSS-standard docs.
[19:21:13 CEST] <JEEB> that generates a bindings.rs
[19:21:16 CEST] <furq> hallelujah
[19:21:23 CEST] <furq> we pronounce that j as a y because english is very consistent
[19:21:44 CEST] <JEEB> well for bindings you wouldn't be using the docs anyways
[19:22:01 CEST] <JEEB> at least I cannot see any reason to do it rather than just parse the headers
[19:22:03 CEST] <Hfuy> well that's just your FOSS experience talking.
[19:22:09 CEST] <Hfuy> If the docs were sane, you could.
[19:22:13 CEST] <JEEB> eh
[19:22:13 CEST] <Hfuy> Otherwise.. eheh.
[19:22:20 CEST] <JEEB> only the headers define the structures
[19:22:20 CEST] <Hfuy> "the source code is the docs." no thanks.
[19:22:28 CEST] <JEEB> yes, the source code is not the docs
[19:22:31 CEST] <Hfuy> Source code is not documentation ever.
[19:22:34 CEST] <furq> yeah i don't know why you'd use the docs for binding generation
[19:22:40 CEST] <JEEB> but we're talking about the BINDINGS generation
[19:22:42 CEST] <furq> for actually writing something by hand, sure
[19:22:56 CEST] <JEEB> bindings specifically are C types that you want to (hopefully mechanically) parse out of headers
[19:22:59 CEST] <james999> best documentation is x += 1; //Increment x by one
[19:23:03 CEST] <furq> or for writing something that uses the api rather than just binding it
[19:23:08 CEST] <Hfuy> If it's any consolation, sometimes binding internal Microsoft stuff is evil.
[19:23:24 CEST] <JEEB> I've done manual bindings and it sucks unless the API you're writing against is not gonna change. ever.
[19:23:26 CEST] <furq> i don't think you need to console us because we're not using microsoft apis
[19:23:27 CEST] <Hfuy> There's a specific datatype in .net best described as "pointer to System.Threading.NativeOverlapped instance"
[19:23:36 CEST] <furq> i'm pretty happy about it
[19:23:49 CEST] <Hfuy> which is used solely in platform invokes of the WriteFile function in kernel32.dll
[19:24:00 CEST] Action: Hfuy looks very mournful
[19:24:16 CEST] <JEEB> anyways, rather than having an issue with FFmpeg IMHO you're having an issue with unstable structures or APIs.
[19:24:22 CEST] <furq> at least it's not event tracing for windows
[19:24:29 CEST] <Hfuy> furq: speak not its name :(
[19:24:30 CEST] <JEEB> the only way to deal with that is to generate the bindings during build time
[19:24:41 CEST] <JEEB> instead of whacking them yourself
[19:25:01 CEST] <Hfuy> JEEB: I don't mind how unstable the structures are if there's a reliable set of docs and ways to get updates when they change
[19:25:13 CEST] <Hfuy> This being FOSS, there isn't.
[19:25:15 CEST] <furq> the way to get updates is to parse the headers again
[19:25:29 CEST] <Hfuy> Source code is not docs.
[19:25:33 CEST] <JEEB> ...
[19:25:42 CEST] <furq> didn't we already cover this
[19:25:46 CEST] <JEEB> once again, why the FUCK would you make the BINDINGS out of DOCS
[19:26:02 CEST] <JEEB> even if they were stable and the API/structs wouldn't change
[19:26:03 CEST] <furq> i can make it a similar-length soundbite if it helps
[19:26:05 CEST] <JEEB> what if there's a goddamn typo
[19:26:07 CEST] <furq> binding generation is not programming
[19:26:07 CEST] <Hfuy> Mainly so you had half a clue what was going on.
[19:26:52 CEST] <Hfuy> I think that avformat and avcodec are probably very good examples of where simply auto-generating a bunch of p/invokes that sort of sound a bit like the native API calls is a totally inadequate way to create a binding.
[19:27:12 CEST] <Hfuy> I mean what the hell actually is an AVFormatContext, and how is it supposed to work?!
[19:27:13 CEST] <JEEB> yes, but on the lowest layer you need to link against the structures
[19:27:23 CEST] <furq> yeah
[19:27:33 CEST] <JEEB> or are you actually herping a derp that you don't /understand how to create the higher level wrapper/?
[19:27:34 CEST] <furq> you're welcome to create a high-level wrapper for the bindings
[19:27:38 CEST] <furq> but you still need the bindings
[19:27:50 CEST] <JEEB> rather than not being able to make the base bindigns
[19:27:52 CEST] <JEEB> *bindings
[19:28:01 CEST] <Hfuy> You are aware, of course, that C# has no type called "AVFormatContext."
[19:28:04 CEST] <Hfuy> Nor will it ever.
[19:28:18 CEST] <furq> sounds like you'll be needing some bindings then
[19:28:20 CEST] <Hfuy> Said structure would need to be carefully rebuilt.
[19:28:56 CEST] <JEEB> nor does python have any of those https://github.com/jeeb/murphy/blob/fmbt_work/tests/mrp_libresource.py#L146
[19:29:10 CEST] <Hfuy> I'm fairly sure that's not something you could do without some manual intervention and understanding of what the structure is and how it works.
[19:29:40 CEST] <JEEB> no, the structure you would first make private and then you'd create your higher level wrapper around it. and yes, the latter requires good understanding
[19:29:44 CEST] <Hfuy> Especially if it changes a lot, which I would assume it does.
[19:29:45 CEST] <JEEB> in which the docs play a role
[19:31:03 CEST] <Hfuy> From what I've seen, avcodec and avformat sort of combine to create a rather chaotic equivalent of DirectShow, structurally. Only DirectShow can step frames backwards :)
[19:32:32 CEST] <JEEB> also AVFormatContext looking at https://ffmpeg.org/doxygen/trunk/structAVFormatContext.html#details seems like just a general structure for information on the input. not final as it can change during runtime (mpeg-ts for example can have streams come up in the middle etc)
[19:33:03 CEST] <furq> the person writing the bindings doesn't need to give a shit what AVFormatContext::io_repositioned is
[19:33:06 CEST] <furq> you just need to know that it's an int
[19:33:19 CEST] <Hfuy> See, that's what I mean.
[19:33:21 CEST] <Hfuy> THAT is the docs.
[19:33:29 CEST] <furq> the person using the bindings (or writing a high-level wrapper) needs to care what that is
[19:33:33 CEST] <furq> potentially
[19:33:50 CEST] <Hfuy> This is how you actually write API documentation: https://msdn.microsoft.com/en-us/library/windows/desktop/aa365747(v=vs.85).aspx
[19:33:52 CEST] <JEEB> well it's clear at this point that Hfuy wants to try and write a C#-style proper higher level wrapper
[19:33:58 CEST] <JEEB> and not just bind
[19:34:09 CEST] <Hfuy> Well once you've got stable bindings the API is another issue.
[19:34:36 CEST] <JEEB> if you still go around with the word "bindings" meaning something else than what you're actually binding against the C API I'll hit you over TCP
[19:34:56 CEST] <JEEB> when you have the C structures in your code what you are writing at that point is a wrapper
[19:34:56 CEST] <furq> writing a high-level wrapper for libav* seems like a lot of fun
[19:35:02 CEST] <furq> and you can substitute whatever word you want for "fun"
[19:35:13 CEST] <furq> you might want to keep the first two letters
[19:35:26 CEST] <Hfuy> I'd substitute a phrase like "as much fun as a tornado full of razor blades while it's raining lemon juice. On fire."
[19:35:36 CEST] <JEEB> anyways, how people generally do it is they have their use case
[19:35:41 CEST] <JEEB> and they write the library for it
[19:35:50 CEST] <JEEB> like ffms2 wrote a library for frame-exact access
[19:35:57 CEST] <furq> yeah i'm sure it's reasonable if you're wrapping some specific functionality
[19:36:08 CEST] <furq> wrapping the entire thing is some kind of sisyphean nightmare
[19:36:32 CEST] <Hfuy> My interest was in creating h.264 proxies from an SDI input card while simultaneously writing uncompressed DPX.
[19:36:52 CEST] <Hfuy> I had my own DPX code, it would be nice to use the av libraries to do the 264 and perhaps some burn-ins.
[19:36:59 CEST] <Hfuy> Theoretically it is possible to do that. Theoretically.
[19:37:18 CEST] <JEEB> avformat/avcodec for better or worse give you a fuckton of functionality and trying to make a generalized wrapper either leads to it just being the bindings
[19:37:31 CEST] <JEEB> or just fails
[19:37:46 CEST] <JEEB> that's my current understanding although I haven't read what on earth the ffmpeg crate people from rust did
[19:38:17 CEST] <Hfuy> Yes it's rather like trying to create a completely generic ffmpeg front end.
[19:38:29 CEST] <Hfuy> Even making it do one thing is not particularly easy.
[19:38:46 CEST] <furq> idk how .net does it, but if you're doing something that specific then you probably don't even need to care about what half these structs are at all
[19:38:47 CEST] <Hfuy> Most of the example code that's out there is a) extremely out of date, and b) only deals with file conversion.
[19:39:10 CEST] <JEEB> hmm so the rust people do it like this https://github.com/meh/rust-ffmpeg/blob/master/examples/transcode-audio.rs
[19:39:11 CEST] <Hfuy> It's presumably possible to take a pointer full of picture and get avcodec/avformat to throw the frames into an mp4 file.
[19:39:20 CEST] <furq> at least with FFIs i've used, if you don't need to access fields then you can just treat it as an opaque pointer
[19:39:23 CEST] <Hfuy> There is no documentation for doing that I could find.
[19:39:54 CEST] <JEEB> what I did was pretty much the opposite of that
[19:40:01 CEST] <JEEB> IStream -> demuxer -> decoder -> raw BMP
[19:40:13 CEST] <Hfuy> Ha there's a BMP writer in .net
[19:40:29 CEST] <furq> try!(try!(try!(filter.output("in", 0)).input("out", 0)).parse(spec));
[19:40:30 CEST] <JEEB> oh, and I did do swscale but I would just use zscale/zimg itself now
[19:40:32 CEST] <furq> did mark morrison write this
[19:40:47 CEST] <JEEB> furq: seems like they're lazy at error handling :P
[19:40:57 CEST] <furq> i don't think you're enough of a 90s kid to get that
[19:41:09 CEST] <JEEB> yea, I was born in the 1980s
[19:41:12 CEST] <furq> this is why you'll never be employed by buzzfeed
[19:41:36 CEST] <furq> if you were born after 1985 then you're still a 90s kid in my opinion
[19:41:48 CEST] <JEEB> ok, then I enter that area
[19:42:01 CEST] <JEEB> but I lived in Northern Europe and Soviet Union so welp
[19:42:24 CEST] <furq> was "return of the mack" not a big hit in leningrad then
[19:42:30 CEST] <james999> Hfuy: that msdn thing says minimum required system is windows xp. for a function named "WriteFile"???
[19:42:40 CEST] <JEEB> james999: they always update the docs
[19:42:41 CEST] <Hfuy> I am sitting ten feet from someone who grew up in Bulgaria.
[19:42:49 CEST] <Hfuy> She has no love for the soviet world.
[19:43:21 CEST] <Hfuy> james999 I would assume that the documentation given probably only fully applies to XP and up.
[19:43:53 CEST] <james999> yeah. it's just we were talking in here about win98 and NT and all that. i would have assumed they still had docs applying to those.
[19:43:54 CEST] <JEEB> anyways, Hfuy - https://ffmpeg.org/doxygen/trunk/structAVFrame.html#details . if you have your raw stuff aligned well enough you can just create one of these and feed them to the encoder
[19:44:01 CEST] <james999> e.g. something basic in kernel32 like "WriteFile"
[19:44:05 CEST] <JEEB> (or filter them before)
[19:44:39 CEST] <JEEB> or wait, was AVFrame the raw one :P I remember it being
[19:44:52 CEST] <JEEB> yea, AVPacket is the "what comes out of demuxer"
[19:45:07 CEST] <JEEB> you then get an AVPacket from the encoder and feed that to a muxer
[19:48:34 CEST] <JEEB> yea, you can either feed the buffer yourself or use av_frame_get_buffer to have the library allocate the needed space for you
[19:48:55 CEST] <Hfuy> JEEB: You can create one of those if you have the binding to do it.
[19:49:26 CEST] <Hfuy> And you will need to create a lot more than exactly one AVFrame instance to actually encode stuff to a file.
[19:49:28 CEST] <Hfuy> a LOT more.
[19:49:39 CEST] <JEEB> well, d'uh
[19:49:49 CEST] <furq> is making two avframes more difficult than making one
[19:50:09 CEST] <JEEB> anyways, roughly this amount I guess :P https://ffmpeg.org/doxygen/trunk/encode_video_8c-example.html#_a3
[19:50:25 CEST] <JEEB> except it skips the muxing part
[19:51:36 CEST] <JEEB> Hfuy: if you cannot programmatically create bindings for C# (around which you write your "proper" C# style wrapper) then just write a small, limited thing around the parts of the libraries that you need
[19:51:42 CEST] <JEEB> and then create bindings against that
[19:51:52 CEST] <Hfuy> The reason for doing this was that then we'd have the proxy available for people to watch immediately after the recording finished, rather than some minutes later as would be the case if we simply used ffmpeg
[19:51:55 CEST] <Hfuy> In the end, we just used ffmpeg.
[19:52:00 CEST] <JEEB> but at this point you have clearly given up
[19:52:15 CEST] <JEEB> so I should just fucking stop looking at this channel and wasting my fucking time
[19:52:17 CEST] <Hfuy> Believe me, I did not quit early on this.
[19:52:46 CEST] <Hfuy> The demo code you cite does, if you look at it, simply use a dummy image, so it isn't really demonstrating much.
[20:00:52 CEST] <Hfuy> ...calm down?
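Hfuy's use case above (pushing raw frames from a capture card into an H.264 proxy file) can also be approached without binding libavcodec at all, by piping raw frames into the ffmpeg binary. A minimal Python sketch of that idea; the function name and the exact encoder settings here are illustrative assumptions, not something taken from the discussion:

```python
import subprocess

def rawvideo_to_mp4_cmd(width, height, fps, outfile):
    """Build an ffmpeg argv that reads headerless raw RGB24 frames
    from stdin and encodes them to H.264 in an mp4 container."""
    return [
        "ffmpeg", "-y",
        "-f", "rawvideo",            # input has no container/header
        "-pix_fmt", "rgb24",         # packed, one byte per channel
        "-s", f"{width}x{height}",   # geometry must be stated explicitly
        "-r", str(fps),
        "-i", "-",                   # read frames from stdin
        "-c:v", "libx264",
        "-pix_fmt", "yuv420p",       # widely compatible output format
        outfile,
    ]

# Usage sketch: write one width*height*3 byte buffer per frame.
# proc = subprocess.Popen(rawvideo_to_mp4_cmd(1920, 1080, 25, "proxy.mp4"),
#                         stdin=subprocess.PIPE)
# proc.stdin.write(frame_bytes)  # repeat per frame
# proc.stdin.close(); proc.wait()
```

This sidesteps the binding problem entirely at the cost of a subprocess boundary, which may or may not be acceptable for a low-latency proxy recorder.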
[20:23:03 CEST] <SviMik> hi
[20:23:07 CEST] <SviMik> I just stumbled into wrong colorspace conversion when used "format=rgba" as part of a complex filter
[20:23:16 CEST] <SviMik> Then I did a simple check making a png screenshot. And I was surprised how different it is compared with the snapshot taken with VLC
[20:23:23 CEST] <SviMik> http://svimik.com/screencap_by_ffmpeg.png
[20:24:03 CEST] <SviMik> how to explain that?
[20:24:12 CEST] <SviMik> the command used: ffmpeg -ss 30.5 -i tmp.flv -vframes 1 out.png
[20:24:42 CEST] <SviMik> and for VLC: Video - Snapshot
[20:32:40 CEST] <mccc> Hi, I'm having a problem with ffmpeg recording audio from alsa to AAC in MPEG2-TS using libfdk_aac.  I have a couple audio files, about 20kb each, along with details I can put on pastebin.  Is there a recommended place I can host the audio files for my question?  Thank you.
[20:45:33 CEST] <thebombzen> SviMik: the VLC timecode might be wrong
[20:45:55 CEST] <thebombzen> also perhaps the rounding is different. if 30.5 is between frames then VLC might round up and ffmpeg might round down (or vice versa)
[20:46:14 CEST] <SviMik> thebombzen I mean the colors are wrong. of course the timing is not perfect
[20:46:38 CEST] <SviMik> it was taken manually, just pausing
[20:46:51 CEST] <thebombzen> the colors are probably because VLC does filtering
[20:47:00 CEST] <thebombzen> or something like that
[20:47:06 CEST] <thebombzen> it also could be a color range issue
[20:47:26 CEST] <thebombzen> yuv full/partial range
[20:47:41 CEST] <thebombzen> did you try it in mpv?
[20:48:32 CEST] <SviMik> mpv? never heard of it
[20:51:29 CEST] <james999> Video player based on MPlayer/mplayer2. Contribute to mpv development by creating an account on GitHub.
[20:51:37 CEST] <james999> https://github.com/mpv-player/mpv
[20:52:04 CEST] Action: james999 ponders an irc bot that automatically retrieves the URL Title...
[20:58:53 CEST] <SviMik> thebombzen I have managed to make two differently looking screenshots by the same ffmpeg :)
[20:58:55 CEST] <SviMik> ffmpeg -ss 30.5 -i tmp.flv -vframes 1 out1.png
[20:58:55 CEST] <SviMik> ffmpeg -i tmp.flv tmp.mkv && ffmpeg -ss 30.5 -i tmp.mkv -vframes 1 out2.png
[20:59:07 CEST] <SviMik> out2.png looks like the one VLC did
[20:59:33 CEST] <thebombzen> what version of ffmpeg are you using
[21:00:00 CEST] <SviMik> ffmpeg version N-71497-gedbb9b5
[21:00:50 CEST] <SviMik> perhaps built from sources
[21:01:19 CEST] <thebombzen> that's from 2015
[21:01:26 CEST] <thebombzen> I recommend upgrading to git master
[21:01:32 CEST] <thebombzen> or at least something that isn't two years old
[21:03:10 CEST] <SviMik> okay, will try with git master
[21:03:14 CEST] <james999> idk i keep wanting to write -acodec libaac instead of -acodec aac
[21:10:41 CEST] <james999> hmm
[21:11:00 CEST] <james999> someone is asking me a question about compiling intel qsv support into ffmpeg on linux kernel 4+
[21:11:22 CEST] <james999> apparently this guide on github is for kernel 3.14.5 and asks you to patch it for the compilation to succeed: https://gist.github.com/Brainiarc7/dd9e9b62bddb53b771b3a0754e352f53
[21:16:08 CEST] <furq> i don't think you need the qsv stuff any more on linux
[21:16:14 CEST] <furq> it should work through vaapi
[21:19:20 CEST] <geosmin> would 'ffmpeg -i foo.aiff bar.wav' be lossless?
[21:19:30 CEST] <geosmin> files are much smaller (~30%)
[21:19:40 CEST] <c_14> maybe
[21:19:46 CEST] <c_14> depends on bit depth of input format
[21:20:22 CEST] <james999> furq: vaapi is a way of doing coding/decoding with hardware directly and supports intel qsv?
[21:20:52 CEST] <furq> yes
[21:21:02 CEST] <geosmin> c_14: how would i check?
[21:21:03 CEST] <james999> wild
[21:21:23 CEST] <furq> geosmin: wav defaults to 16-bit pcm, so i guess your input is 24-bit
[21:21:26 CEST] <c_14> geosmin: ffprobe on the input
[21:21:48 CEST] <furq> if it is then use -c:v pcm_s24le
[21:21:52 CEST] <furq> er
[21:21:52 CEST] <furq> -c:a
[21:22:48 CEST] <james999> furq: is ffmpeg use of va api something like "check if hardware exists, if not use software codecs"?
[21:23:14 CEST] <furq> no
[21:24:11 CEST] <geosmin> furq, c_14: Stream #0:0: Audio: pcm_s16be, 44100 Hz, 2 channels, s16, 1411 kb/s
[21:25:21 CEST] <c_14> hmm, nah that's 16bit
[21:26:13 CEST] <geosmin> hmm, hmm, bitrate went from 1300 to 1000. so lossy then?
[21:26:33 CEST] <furq> that seems wrong
[21:27:07 CEST] <c_14> sample rate?
[21:27:12 CEST] <c_14> nah 44.1 should be default
[21:27:14 CEST] <furq> yeah
[21:27:39 CEST] <furq> the only conversion that should be happening there is from big endian to little endian
[21:27:44 CEST] <c_14> it shouldn't drop a channel either
[21:27:51 CEST] <c_14> You could try -c:a pcm_s16be ?
[21:27:51 CEST] <furq> dropping a channel would be 700kbps
[21:27:59 CEST] <furq> does wav support big endian
[21:28:11 CEST] <c_14> good question
[21:28:14 CEST] <furq> it does not
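The aiff-to-wav case furq describes is really just a byte-order swap: AIFF carries pcm_s16be, WAV wants pcm_s16le, and the sample values themselves are untouched. A stdlib-only sketch of that swap; it assumes you already have the raw sample bytes, i.e. the AIFF/WAV chunk headers have been stripped:

```python
import array

def s16be_to_s16le(data: bytes) -> bytes:
    """Swap 16-bit PCM samples from big-endian to little-endian.
    Sample values are unchanged; only the byte order differs."""
    samples = array.array("h")   # signed 16-bit items
    samples.frombytes(data)
    samples.byteswap()           # in-place per-sample byte reversal
    return samples.tobytes()

# One sample stored big-endian as 0x01 0x02 comes out as 0x02 0x01.
swapped = s16be_to_s16le(bytes([0x01, 0x02]))
```

Because no sample is requantized, this kind of conversion is lossless, which is the point being made above.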
[21:28:22 CEST] <geosmin> is aiff wav?
[21:28:26 CEST] <furq> no
[21:28:29 CEST] <james999> no
[21:28:30 CEST] <geosmin> this is aiff (to flac)
[21:28:33 CEST] <geosmin> not wav
[21:28:34 CEST] <furq> ...
[21:28:45 CEST] <furq> well there's your answer
[21:28:57 CEST] <james999> aiff  is codec, wav is container
[21:29:03 CEST] <furq> aiff is a container
[21:29:16 CEST] <geosmin> fwiw i never said wav :{
[21:29:20 CEST] <geosmin> :P*
[21:29:31 CEST] <furq> 20:19:20 ( geosmin) would 'ffmpeg -i foo.aiff bar.wav' be lossless?
[21:29:45 CEST] <geosmin> oh crap, my bad
[21:29:55 CEST] <james999> well shit wikipedia, why u say it format
[21:30:07 CEST] <geosmin> i meant foo.aiff > foo.flac
[21:30:12 CEST] <james999> https://en.wikipedia.org/wiki/Pulse-code_modulation
[21:30:30 CEST] <c_14> And yeah, flac is compressed, aiff isn't, so yeah
[21:30:46 CEST] <thebombzen> how about -c copy
[21:30:54 CEST] <geosmin> so the conversion was lossless?
[21:31:06 CEST] <geosmin> even considering the 1.3k to 1k bitrate?
[21:31:10 CEST] <c_14> should be yeah
[21:31:16 CEST] <thebombzen> geosmin: have you ever seen a zip file?
[21:31:20 CEST] <thebombzen> that compressed things?
[21:31:52 CEST] <thebombzen> bitrate and quality aren't the same thing
[21:31:54 CEST] <geosmin> sure, but i would imagine ffprobe's output to be the uncompressed value
[21:31:59 CEST] <thebombzen> why would it
[21:32:01 CEST] <furq> ^
[21:32:11 CEST] <thebombzen> the uncompressed bitrate is always going to be the same
[21:32:18 CEST] <furq> that would make as much sense as it showing the uncompressed bitrate for mp3
[21:32:19 CEST] <thebombzen> which makes it an extremely useless number
[21:32:39 CEST] <thebombzen> all 16-bit 48 kHz files in stereo are 1536 kb/s
[21:32:47 CEST] <thebombzen> that number doesn't actually tell you anything you didn't know
[21:33:03 CEST] <thebombzen> uncompressed, that is. the compressed bitrate is what really matters
[21:33:14 CEST] <geosmin> ah, interesting
[21:33:15 CEST] <thebombzen> because it's the number of bits of information in the file required to encode one second of audio
[21:33:51 CEST] <thebombzen> bitrate is exactly what it sounds like. it's the number of bits used to encode the audio per second
[21:33:53 CEST] <furq> you can always derive the uncompressed bitrate of anything from its properties
[21:33:57 CEST] <thebombzen> i.e. bits per second
[21:34:14 CEST] <furq> as soon as i typed that i remembered vfr video exists
[21:34:23 CEST] <furq> but you get the idea
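thebombzen's point is plain arithmetic: the uncompressed PCM bitrate is fully determined by sample rate, bit depth and channel count, so having ffprobe print it would tell you nothing you didn't already know. For example:

```python
def pcm_bitrate_kbps(sample_rate, bits_per_sample, channels):
    """Uncompressed PCM bitrate in kilobits per second."""
    return sample_rate * bits_per_sample * channels / 1000

# geosmin's file: 44.1 kHz, 16-bit, stereo
print(pcm_bitrate_kbps(44100, 16, 2))  # 1411.2, ffprobe's "1411 kb/s"

# thebombzen's example: 48 kHz, 16-bit, stereo
print(pcm_bitrate_kbps(48000, 16, 2))  # 1536.0
```

The ~1000 kb/s figure on the flac output is the compressed bitrate, the bits actually stored per second of audio, which is why it can drop without any loss.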
[21:35:43 CEST] <mccc> Hello, I'm trying to record using ffmpeg from an ALSA sound card in to AAC within an MPEG-2 TS container for HLS.  When I play back the audio (on VLC for Windows) it seems to play about 5% - 10% slower than it should.  I have details at https://pastebin.com/4s2FmWzp and short audio files I can share.
[21:36:16 CEST] <mccc> Something interesting, although I'm using ffmpeg to record 2 second segments, when I look at one of the segments in ffprobe, it shows me: Duration: 00:00:01.71
[21:39:13 CEST] <SviMik> thebombzen upgraded to git master, and the bug is still there...
[21:39:37 CEST] <SviMik> idk what's happening
[21:39:38 CEST] <thebombzen> probably a colorspace thing
[21:39:43 CEST] <thebombzen> post the full command and output
[21:39:46 CEST] <thebombzen> full output
[21:39:51 CEST] <thebombzen> or wait lemme use furqbot
[21:40:03 CEST] <furq> that's not my bot
[21:40:12 CEST] <thebombzen> which one is yours then
[21:40:58 CEST] <furq> !source mandelbrot
[21:40:58 CEST] <nfobot> furq: http://ffmpeg.org/ffmpeg-filters.html#mandelbrot
[21:41:01 CEST] <furq> that one
[21:41:58 CEST] <furq> also someone earlier mentioned a bot which prints the page title of urls pasted in here
[21:42:01 CEST] <furq> please don't do this
[21:42:48 CEST] <furq> pretty much don't ever make a bot which responds when not specifically requested to
[21:44:59 CEST] <SviMik> thebombzen here is how I made the first snapshot: http://svimik.com/ffmpeg_snapshot1.txt
[21:45:15 CEST] <SviMik> thebombzen here I have re-encoded it and made it again: http://svimik.com/ffmpeg_snapshot2.txt
[21:45:57 CEST] <SviMik> thebombzen and the colors are very different
[21:46:11 CEST] <furq> that's doing a colourspace conversion
[21:46:23 CEST] <furq> oh nvm no it isn't
[21:47:19 CEST] <SviMik> both videos are yuvj420p h264
[21:48:38 CEST] <SviMik> in VLC both tmp.flv and tmp.mkv looks exactly the same
[21:49:42 CEST] <SviMik> but ffmpeg somehow screws the colors when taking png from tmp.flv
[21:51:05 CEST] <BtbN> try jpeg vs. mpeg yuv color spaces
[21:52:52 CEST] <SviMik> BtbN just export to .jpg? or need to specify something?
[21:53:05 CEST] <BtbN> There's some parameter for that
[21:53:23 CEST] <BtbN> The source is some h264 video?
[21:53:28 CEST] <SviMik> yes
[21:53:37 CEST] <furq> if you're reencoding then do it with -pix_fmt yuv420p
[21:54:00 CEST] <BtbN> Could try adding -pix_fmt yuvj420p as input parameter.
[21:54:18 CEST] <BtbN> There's also some more modern parameters for that
[21:54:41 CEST] <furq> why would you need to do that
[21:54:53 CEST] <SviMik> furq why? isn't it logical to preserve colorspace?
[21:55:16 CEST] <furq> 20:52:52 ( SviMik) BtbN just export to .jpg? or need to specify something?
[21:55:18 CEST] <furq> in response to that
[21:55:21 CEST] <furq> i'm not saying it's a good idea
[21:55:30 CEST] <SviMik> ah
[21:55:48 CEST] <furq> maybe vlc is converting it to yuv420p before taking the screenshot
[21:55:58 CEST] <furq> vlc isn't known for wise choices
[21:57:32 CEST] <SviMik> well, I have tried just .jpg, without extra parameters, and it produced the same picture from both files
[21:59:22 CEST] <SviMik> so, what we have: 1) flv to png - colors are screwed 2) flv to jpg - colors are ok 3) mkv to png - colors are ok 4) mkv to jpg - colors are ok
[22:00:05 CEST] <furq> wait
[22:00:11 CEST] <furq> is 3 a typo
[22:00:27 CEST] <furq> if the mkv is full range then that makes no sense at all
[22:00:33 CEST] <BtbN> flv probably lacks color space information, while mkv does carry them
[22:00:46 CEST] <furq> ffmpeg is still detecting it as full range according to that paste
[22:01:22 CEST] <BtbN> Well, maybe it isn't full range though?
[22:01:23 CEST] <SviMik> no. it's what we have started with. when I re-encoded from flv to mkv - the png screenshot was ok from mkv, and screwed from the source flv file
[22:01:42 CEST] <SviMik> jpg is ok from both files
[22:01:59 CEST] <furq> if it wasn't then presumably the mkv conversion would look fucked
[22:02:16 CEST] <furq> although isn't the pixel format part of the stream
[22:02:45 CEST] <SviMik> flv is the source. if it lacks something - how it appeared in mkv?
[22:07:51 CEST] <james999> <furq> pretty much don't ever make a bot which responds when not specifically requested to
[22:08:00 CEST] <james999> yeah now that I think about it that's a good idea
[22:12:36 CEST] <SviMik> there's something different with exported png files: http://svimik.com/ffmpegpngformats1.png
[22:13:18 CEST] <SviMik> chromaticities? gamma? how? what? what for?
[22:13:55 CEST] <SviMik> what ffmpeg tried to do with that?
[22:16:12 CEST] <BtbN> it probably copies the chroma information from the source file
[22:17:41 CEST] <SviMik> how to discard this and make just normal screenshot?
[22:18:42 CEST] <BtbN> the video is yuv, png is rgb. Some conversion has to happen.
[22:19:32 CEST] <SviMik> BtbN well, both videos are yuv, both images are rgb, so?
[22:19:55 CEST] <SviMik> why image png is normal, another with colors screwed
[22:20:19 CEST] <SviMik> *one
[22:20:54 CEST] <BtbN> because jpeg is yuv.
[22:21:07 CEST] <BtbN> so it doesn't need to convert anything
[22:21:11 CEST] <SviMik> BtbN both are png. where do you see jpeg here?
[22:21:36 CEST] <SviMik> one png is ok, another png is screwed
[22:22:03 CEST] <SviMik> both PNGs were taken from yuv videos
[22:22:21 CEST] <BtbN> have you tried forcing it to use full/limited color ranges yet?
[22:22:34 CEST] <SviMik> nope. how to?
[22:23:44 CEST] <BtbN> force the input pix_fmt to yuvj420
[22:23:52 CEST] <BtbN> force the input pix_fmt to yuvj420p or yuv420p
[22:23:59 CEST] <BtbN> depending on what it is right now
[22:25:58 CEST] <SviMik> BtbN Incompatible pixel format 'yuv420p' for codec 'png', auto-selecting format 'rgb24'
[22:26:22 CEST] <BtbN> input format, not output format. png can only do rgb
[22:27:03 CEST] <SviMik> err. how do I force the input format?
[22:27:16 CEST] <SviMik> ffmpeg -ss 30.5 -i tmp.flv -vframes 1 outpf.png
[22:27:16 CEST] <BtbN> put pix_fmt as an input option before your input...
[22:27:20 CEST] <SviMik> insert here ^
[22:28:22 CEST] <SviMik> >ffmpeg -ss 30.5 -pix_fmt yuv420p -i tmp.flv -vframes 1 outpf.png
[22:28:22 CEST] <SviMik> >Option pixel_format not found.
[22:28:33 CEST] <BtbN> pix_fmt
[22:28:45 CEST] <SviMik> sorry, I'm new to ffmpeg
[22:28:48 CEST] <furq> that's what he used
[22:28:54 CEST] <furq> that option probably doesn't exist for that demuxer
[22:29:12 CEST] <BtbN> pretty sure you can force a pix_fmt for the h264 decoder
[22:29:32 CEST] <furq> wouldn't you need to demux it first to do that
[22:30:53 CEST] <BtbN> It should match the pix_fmt option to the video streams. But seems like it indeed doesn't work with h264, weird, I'm sure I have done that before
[22:31:07 CEST] <BtbN> -vf format=yuv420p it is then, after the input
[22:31:17 CEST] <BtbN> or yuvj420p
[22:31:45 CEST] <BtbN> not sure if that doesn't try and do some equally broken colorspace conversion
[22:32:58 CEST] <SviMik> well, it was accepted, but nothing changed
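The full/limited range confusion suspected here has a concrete numeric shape: limited ("MPEG") range stores luma in 16-235, full ("JPEG", yuvj420p) range uses 0-255, and a decoder that picks the wrong interpretation either applies or skips this expansion, which shows up as exactly the kind of contrast/color shift seen between the two PNGs. A sketch of the usual BT.601-style luma expansion, as a plain per-sample calculation:

```python
def limited_to_full_luma(y):
    """Expand one limited-range (16-235) luma sample to full range (0-255),
    clamping out-of-range inputs."""
    return max(0, min(255, round((y - 16) * 255 / 219)))

print(limited_to_full_luma(16))   # 0   (limited black -> full black)
print(limited_to_full_luma(235))  # 255 (limited white -> full white)
print(limited_to_full_luma(126))  # 128 (mid grey stays mid grey)
```

Applying this to data that was already full range crushes blacks and clips whites; skipping it on limited-range data leaves the image washed out. Either mistake matches the "screwed colors" symptom.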
[22:34:54 CEST] <slalom> I need to use Nielsen's watermarking on an audio file, but they only support MPEG-2 transport streams and seem to be assuming you're watermarking a video file's audio track.  I'm trying to convert our PCM WAV audio to a stream it can read and watermark.    I tried just outputting -f mp2 and their encoder said it wasn't a valid MPEG stream.
[22:35:11 CEST] <slalom> any other output ideas?
[22:35:15 CEST] <furq> mpegts
[22:35:57 CEST] <slalom> awesome! thank you, it worked
[22:45:04 CEST] <slalom> Nielsen's stuff is having an issue but im guessing i can deal with this with settings for the format
[22:47:35 CEST] <slalom> Error:AES3- SPMTE 302M format identifier, 0x42535344 is missing or mismatch.. haaa... ok...
[22:50:25 CEST] <slalom> my -c:a copy from my stereo wav file is still apparently making an mpegts with 8 audio channels. maybe that is just required?
[22:51:40 CEST] <slalom> i actually can't see what ffmpeg thinks about the stream, it just says Stream #0:0[0x100]: Data: bin_data ([6][0][0][0] / 0x0006)
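The identifier in that Nielsen error is readable as ASCII: 0x42535344 spells 'BSSD', the registration code that SMPTE 302M audio carries in an MPEG-TS registration descriptor, so the tool is complaining that the stream isn't 302M-wrapped AES3 at all. A quick decode of the four bytes:

```python
ident = 0x42535344
# Big-endian byte order gives the four-character code as written.
tag = ident.to_bytes(4, "big").decode("ascii")
print(tag)  # BSSD
```

As an assumption about the build rather than anything confirmed in the log: if the local ffmpeg includes the experimental s302m encoder, something like `-c:a s302m -strict -2 -f mpegts` may produce a stream carrying that identifier; `ffmpeg -encoders` will show whether it's available.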
[00:00:00 CEST] --- Fri May 12 2017
