[Ffmpeg-devel-irc] ffmpeg.log.20151024
burek
burek021 at gmail.com
Sun Oct 25 02:05:01 CEST 2015
[00:01:22 CEST] <onefix> Ok, so my CHIP took ~15x the time to encode a 30 second video as my AMD ... that's without any hardware drivers, but I don't suspect it will get much better with those ... has anyone done an extensive test of several systems with FFMPEG encoding???
[00:01:50 CEST] <onefix> I would really like to find out what would give the best "bang for the buck" so to speak
[00:12:12 CEST] <furq> which amd cpu is that
[00:16:35 CEST] <Mavrik> O.o
[00:36:19 CEST] <TD-Linux> probably using one thread on the AMD
[01:06:43 CEST] <BullHorn> is there a way to use ffmpeg inside Adobe Premiere's encoder?
[01:06:52 CEST] <BullHorn> they use h264 mainconcept and its TERRIBLE
[01:06:59 CEST] <BtbN> ask adobe?
[01:09:22 CEST] <TD-Linux> no, export to a lossless or near lossless format and encode separately
[01:13:20 CEST] <xsi> mavrik here
[01:14:49 CEST] <xsi> mavrik It is usually 44100 and 96k or so but i started trying lesser and lesser for sound to break through working
[01:20:19 CEST] <TD-Linux> xsi, since you mention it works with other codecs, and presumably you are stuck with AAC, have you tried ffmpeg's built in aac encoder?
[01:20:35 CEST] <TD-Linux> also you should probably only use 44100 or 48000
[01:36:26 CEST] <xsi> how to use the built-in aac? just to clarify..
[01:38:55 CEST] <xsi> 44100 48000 ok i'll be testing within those parameters
[01:39:34 CEST] <xsi> ffmpeg -f alsa -ac 2 -i hw:0,0 -f x11grab -framerate 30 -video_size 1280x1024 -i :0.0+0,0 -vcodec libx264 -preset veryfast -maxrate 1984k -bufsize 3968k -vf format=yuv420p -g 60 -acodec libmp3lame -b:a 96k -ar 44100 -f flv 'rtmp://live.twitch.tv/app/live_100571654_3GRona33f8mCtHKYeCjnqLzNDNewf7'
[01:40:05 CEST] <xsi> almost sure that lots of parameters bufsize maxrate.. preset.. are excessive
[01:40:32 CEST] <xsi> but i see video, and i try again and re-emerge. so to go builtin i used
[01:40:42 CEST] <xsi> libfaac or aac -strict experimental
[01:40:59 CEST] <xsi> again, not very sure how..
[01:41:38 CEST] <xsi> It's really a pain. here on the forum they advise using ffcast (i guess it's only a frontend): http://www.linuxquestions.org/questions/showthread.php?p=5439297#post5439297
[01:41:52 CEST] <xsi> so that somewhat disheartens me from the cmdline parameters((
[01:49:42 CEST] <xsi> so again, it's very bad that i'm stuck with sound and codecs, and it shows again that it's not ffmpeg but my system, which is freshly updated with kernel 4.2.3 and other stuff emerged yesterday
[01:49:56 CEST] <xsi> it shows picture but not sound
[01:55:15 CEST] <xsi> /upd: the same with mp4 and libmp3lame / libfaac. maybe a wrong library? or alsa problems, as i thought previously
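For reference, the built-in AAC encoder xsi was asked to try would be selected roughly like this. This is a sketch, not xsi's exact setup: the capture devices (`hw:0,0`, `:0.0`) are taken from the command above, the RTMP URL and stream key are placeholders, and in ffmpeg builds of that era the native encoder required `-strict experimental`.

```shell
# Sketch: the same ALSA + x11grab capture as above, but using ffmpeg's
# built-in AAC encoder instead of libmp3lame/libfaac.
# The rtmp URL and STREAM_KEY are placeholders.
cmd="ffmpeg -f alsa -ac 2 -i hw:0,0 \
 -f x11grab -framerate 30 -video_size 1280x1024 -i :0.0+0,0 \
 -c:v libx264 -preset veryfast -vf format=yuv420p -g 60 \
 -c:a aac -strict experimental -b:a 128k -ar 44100 \
 -f flv rtmp://example.invalid/app/STREAM_KEY"
# Printed rather than executed, since it needs a live X display and
# an ALSA capture device:
echo "$cmd"
```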
[02:29:41 CEST] <DHE> is the ffmpeg native AAC encoder better than libfdk at this point? or should I stick with libfdk
[02:30:57 CEST] <meh`> what am I supposed to do with the avcodec_encode_subtitle output buffer?
[02:31:38 CEST] <meh`> just stick it into an AVPacket like there's no tomorrow and write it to the format?
[02:47:18 CEST] <koz_> What kind of bitrate should I set for music encoded with opus?
[02:47:38 CEST] <koz_> I'm aiming for minimal size without sounding like ass.
[02:54:25 CEST] <furq> koz_: 96k should be more than enough for all your azis videos
[02:56:11 CEST] <koz_> Hi furq! Nice to see you again. Will 48k still sound OK?
[02:56:50 CEST] <furq> no idea, i'm just looking at http://listening-test.coresv.net/nonblocked_means_all2.png
[02:57:40 CEST] <furq> 64k opus scored a 4 on the same test so i'd probably stick with that for music
[02:58:06 CEST] <koz_> OK.
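A minimal invocation at the 64k rate furq suggests might look like this (a sketch assuming an ffmpeg built with libopus; `input.flac` and `output.opus` are placeholder filenames):

```shell
# Sketch: encode a music file to Opus at 64 kb/s with libopus.
# input.flac / output.opus are placeholder filenames; requires an
# ffmpeg build with --enable-libopus.
cmd="ffmpeg -i input.flac -c:a libopus -b:a 64k output.opus"
echo "$cmd"
```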
[03:56:10 CEST] <BullHorn> wait wat
[03:56:21 CEST] <BullHorn> apparently ffmpeg already has support for exporting from premiere pro
[03:56:21 CEST] <BullHorn> https://trac.ffmpeg.org/wiki/Encode/PremierePro
[03:56:32 CEST] <BullHorn> so why are you guys telling me to go ask adobe
[03:56:33 CEST] <BullHorn> ;/
[04:38:49 CEST] <BullHorn> anyway
[04:38:53 CEST] <BullHorn> i followed the guide at https://trac.ffmpeg.org/wiki/Encode/PremierePro
[04:39:10 CEST] <BullHorn> and when i do the final stage of using ffmpeg i get an error 'frameserver.avs: Unknown error occurred'
[04:39:14 CEST] <BullHorn> not very helpful ;x
[04:39:42 CEST] <BullHorn> the cmd i used was
[04:39:47 CEST] <BullHorn> ffmpeg -i f:\frameserver.avs -c:v libx264 -preset veryfast -crf 19 -pix_fmt yuv420p -c:a libfdk_aac -vbr 4 Output.mp4
[09:28:38 CEST] <worstje> Does anyone here have experience generating a video from an image+sound file combination? No matter what I try, the result always ends up longer than the audio source file for some weird reason.
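The usual recipe for what worstje describes is to loop the still image and let `-shortest` end the output with the audio; output running longer than the audio is the classic symptom of a missing `-shortest`. A sketch (filenames are placeholders, and `-strict experimental` reflects the native AAC encoder of that era):

```shell
# Sketch: still image + audio -> video that ends with the audio.
# cover.png and audio.mp3 are placeholder filenames.
cmd="ffmpeg -loop 1 -i cover.png -i audio.mp3 \
 -c:v libx264 -tune stillimage -vf format=yuv420p \
 -c:a aac -strict experimental -shortest out.mp4"
echo "$cmd"
```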
[12:57:38 CEST] <Jan-> hihi guys
[12:58:06 CEST] <Jan-> this is not directly ffmpeg related, but you guys might be able to help. Does the string "AVC_VBR150M_3840_2160_25P_High@5.1" mean anything in terms of a type of video codec?
[13:00:34 CEST] <JEEB> AVC is the MPEG name for MPEG-4 Part 10 aka H.264, VBR just means variable bit rate (which is what most video is), 150M = probably 150 megabits per second, the next two values are probably the resolution, and then you get the progressive rate (pictures in a second), and the last thing is the AVC profile and level
[13:00:58 CEST] <Mavrik> yp.
[13:01:11 CEST] <Mavrik> So I guess a 4K record at 25FPS? :)
[13:01:28 CEST] <ChocolateArmpits> i'd say broadcast 4k
[13:01:34 CEST] <JEEB> UHD you mean, while "4K" is often misused for that
[13:01:42 CEST] <JEEB> "4K" is a bit wider
[13:01:52 CEST] <ChocolateArmpits> and has a different aspect
[13:02:03 CEST] <Mavrik> Point there.
[13:04:13 CEST] <Jan-> it's a file made by a jvc gy-ls300 camera.
[13:04:38 CEST] <Jan-> and yes it does support UHD and 4K :D
[13:04:39 CEST] <JEEB> well, it's AVC
[13:05:04 CEST] <JEEB> and that's pretty much all that's somewhat relevant that you can tell about the file from the file name :)
[13:05:42 CEST] <Jan-> ah
[13:05:46 CEST] <Jan-> ffmpeg -i thing.mov
[13:05:47 CEST] <Jan-> Stream #0:0(eng): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, bt709), 3840x2160 [SAR 1:1 DAR 16:9], 100647 kb/s, 25 fps, 25 tbr, 2500 tbn, 50 tbc (default)
[13:06:05 CEST] <JEEB> yup
[13:06:16 CEST] <Jan-> so that's "good" h.264?
[13:06:24 CEST] <JEEB> in what sense "good"?
[13:07:21 CEST] <Jan-> I uhoh.
[13:07:28 CEST] <Jan-> I guess it's hard to evaluate how good a job an encoder is doing.
[13:07:45 CEST] <Jan-> probably a hardware encoder in a camera is doing less of a good job than a piece of software could.
[13:08:23 CEST] <Jan-> can ffmpeg determine if the stream is 10 bit?
[13:08:32 CEST] <JEEB> yes
[13:08:33 CEST] <Jan-> does yuv420p imply 8 bit?
[13:08:36 CEST] <JEEB> yes
[13:08:40 CEST] <Jan-> bummer
[13:08:41 CEST] <JEEB> yuv420p10 would be 10bit
[13:08:57 CEST] <Jan-> there's no chance that's incorrect? because this is supposed to be a broadcast camera.
[13:09:07 CEST] <JEEB> nope
[13:09:13 CEST] <Jan-> Bumfluff :/
[13:09:15 CEST] <JEEB> as long as your ffmpeg is not from like 2011
[13:09:29 CEST] <Jan-> errr
[13:09:30 CEST] <Jan-> 2013
[13:09:32 CEST] <Mavrik> Also because it's a broadcast camera it probably records in YUV420P
[13:09:42 CEST] <Mavrik> Since that's the only widely supported pix format for players.
[13:09:44 CEST] <JEEB> yeah, that's way more than new enough
[13:09:55 CEST] <Jan-> ffmpeg version N-57245-gf6b56b1 Copyright (c) 2000-2013 the FFmpeg developers
[13:09:55 CEST] <Jan-> built on Oct 18 2013 18:08:17 with gcc 4.8.2 (GCC)
[13:10:16 CEST] <JEEB> the camera seems to be able to do 4:2:2 with 1080p
[13:10:22 CEST] <JEEB> but no word about 10bit
[13:10:41 CEST] <Jan-> interesting that the 4k is 420
[13:10:46 CEST] <Jan-> it doesn't say that on the advertisements :)
[13:11:03 CEST] <Jan-> to be fair it is a low end broadcastish camera, it records to sdxc cards.
[13:11:04 CEST] <JEEB> well with progressive you don't really need 4:2:2
[13:11:20 CEST] <JEEB> 4:4:4 would be best, but almost nothing supports that :P
[13:11:41 CEST] <JEEB> and well, of course, but usually good hardware encoders compensate by throwing bit rate at the problem
[13:12:01 CEST] <Jan-> 150 megabits is quite a lot I suppose. Although not for uhd.
[13:12:09 CEST] <Mavrik> Yeah, IMO 4:4:4 would be way more useful than 10bit :)
[13:12:33 CEST] <Mavrik> The realtime camera encoders are generally terrible, so it never hurts throwing more bitrate at them
[13:12:34 CEST] <Jan-> to be honest we often shoot 4k for hd output, and you get almost-444 and almost-10bit when you scale it down anyway :)
[13:12:56 CEST] <JEEB> well, yeah - 2160p contains 1080p chroma
[13:13:09 CEST] <Jan-> I'm not sure if there's a way of making ffmpeg do that.
[13:13:14 CEST] <Jan-> produce 10 bit out from 8 bit in, via scaling
[13:13:16 CEST] <ChocolateArmpits> 10bit is better than 8 bit for color correction
[13:13:30 CEST] <JEEB> ChocolateArmpits: yes your filtering workflow should be higher bit depth
[13:13:38 CEST] <JEEB> but having the input be 8bit isn't *that* bad
[13:13:42 CEST] <ChocolateArmpits> But 4:2:2 and up is better for keying
[13:13:57 CEST] <ChocolateArmpits> However at high enough resolution chroma subsampling isn't really an issue
[13:13:58 CEST] <JEEB> and of course with AVC 10bit helps with compression
[13:14:14 CEST] <Mavrik> Jan-, well, make a filterchain, convert to 10bit first, scale second, do the processing ?
[13:14:32 CEST] <JEEB> you could test the new scaling/colorspace conversion filter
[13:14:34 CEST] <JEEB> zscale
[13:14:43 CEST] <Mavrik> Since your input is 8bit anyway and you need additional precision when running filters.
[13:14:48 CEST] <JEEB> it uses the z.lib library https://github.com/sekrit-twc/zimg
[13:15:19 CEST] <JEEB> although I'm not sure if it supports switching colorspaces yet... will have to see, I was meaning to poke it today
[13:15:25 CEST] <JEEB> (I mean the ffmpeg wrapper)
[13:15:25 CEST] <Mavrik> What's the benefit over swscale?
[13:15:30 CEST] <ChocolateArmpits> JEEB: Oh I saw that before for Vaporsynth
[13:15:32 CEST] <JEEB> much better quality
[13:15:38 CEST] <ChocolateArmpits> So it got "ported" or something?
[13:15:43 CEST] <JEEB> no, it's a library
[13:15:51 CEST] <JEEB> unlike fmtconv
[13:15:55 CEST] <JEEB> which is a vapoursynth filter
[13:16:23 CEST] <JEEB> z.lib does have a vapoursynth filter as well, but the library part exists which is generic
[13:17:01 CEST] <ChocolateArmpits> aww SD PAL colorspace not supported
[13:18:31 CEST] <lyss> hi
[13:18:35 CEST] <JEEB> http://git.videolan.org/?p=ffmpeg.git;a=blob;f=libavfilter/vf_zscale.c;h=50cac77d1f839af49649d0b9c18d92d75ade19f7;hb=416e35e5aafc2a2bf77372d5e8479c28796d1451#l314
[13:18:40 CEST] <JEEB> these seem to be the ones it actually supports
[13:19:14 CEST] <JEEB> it seems to support all of the main primaries and matrices and transfers
[13:19:31 CEST] <JEEB> I'd just guess the readme was never updated :P
[13:19:36 CEST] <ChocolateArmpits> k
[13:20:53 CEST] <JEEB> I still wonder if you can do colorspace conversion with the libavfilter thing (well, it often does it when it's required internally in zimg, but I mean as in output format being not input format)
[13:21:48 CEST] <JEEB> you can set output subsampling at least
[13:22:49 CEST] <JEEB> Mavrik: and some benchmarks from the author http://forum.doom9.org/showpost.php?p=1744072&postcount=33
[13:23:08 CEST] <JEEB> and yes, it seems like the guy likes his bullshit ;)
[13:23:19 CEST] <JEEB> "Behind the scenes, "z" has been thoroughly rearchitected to be scalable from the living room to the datacenter."
[13:23:32 CEST] <Mavrik> For start I'd like to know what the numbers on Y axis are :P
[13:23:42 CEST] <JEEB> fps
[13:23:48 CEST] <Mavrik> Ah.
[13:23:55 CEST] Action: Mavrik interpreted that as ms per frame.
[13:23:58 CEST] <Mavrik> No idea why.
[13:24:09 CEST] <JEEB> I will guess the X axis is threads
[13:24:23 CEST] <Mavrik> That actually looks rather good, but we kinda only do PAL stuff :/
[13:25:01 CEST] <JEEB> I'm pretty sure this supports all of the colorspaces you have ;)
[13:25:42 CEST] <JEEB> looking at the list of zimg_{matrix,transfer,primaries,range} variables :)
[13:26:04 CEST] <Mavrik> That's what I get for not clicking the link :P
[13:28:32 CEST] <JEEB> but yeah, the avfilter filter is very young and was literally pushed when I was asking the (filter) author regarding the latest version of the patch for testing :D
[13:28:46 CEST] <JEEB> seems to work, though
[13:28:55 CEST] <JEEB> made some simple tests, nothing too fancy yet
[13:44:53 CEST] Action: Jan- doesn't get this whole filter thing
[13:44:57 CEST] Action: Jan- is not an experienced ffmpeg user
[13:57:55 CEST] <durandal_1707> JEEB: use format filter after zscale to change pixel format
[13:58:38 CEST] <JEEB> ah
[13:58:44 CEST] <JEEB> so that's where it gets the output pixfmt
[13:59:12 CEST] <JEEB> I always thought it converts
[14:01:59 CEST] <durandal_1707> it must be immediately after zscale otherwise sws is used
[14:02:05 CEST] <JEEB> yeah
[14:10:44 CEST] <Jan-> I assume there's some way to turn something to 10 bit RGB, then scale it down
[14:10:47 CEST] <Jan-> but I have no idea how
[14:11:37 CEST] <JEEB> with a good scaler you only have to care about the input and output format and the size of both
[14:12:05 CEST] <Jan-> I have no idea.
[14:12:08 CEST] <JEEB> it should do internal filtering in a higher bit depth when required and only do YCbCr<->RGB conversions when absolutely required
[14:12:22 CEST] <Jan-> theoretically I agree but this is open sores we're talking about :)
[14:12:49 CEST] <JEEB> well, I recommend you take a look at zimg and the zscale filter
[14:13:08 CEST] <Jan-> is there a decent guide on filtering basics anywhere?
[14:13:59 CEST] <JEEB> you should just look for examples in the docs, it's not really that hard. esp. when you just want to do scaling/colorspace conversion
[14:14:26 CEST] <JEEB> -vf 'zscale=w=1024:h=576' is what I used to test the zscale filter a couple of days ago
[14:14:48 CEST] <JEEB> it is comma-delimited if I recall correctly
[14:14:52 CEST] <Jan-> how does that differ from just -s
[14:15:14 CEST] <JEEB> -s is an internal shoot-off to swscale and can do almost everything to your cat
[14:15:31 CEST] <JEEB> basically currently I think it uses the normal scale filter with very similar parameters
[14:15:32 CEST] <Jan-> you leave my cat out of this!
[14:15:37 CEST] Action: Jan- gives izzy a tickle between the ears
[14:16:45 CEST] <JEEB> so I guess if I wanted my output to be 10bit YCbCr with the zscale filter it'd be -vf 'zscale=w=1024:h=576,format=yuv420p10'
[14:16:55 CEST] <JEEB> *10bit 4:2:0 YCbCr
[14:17:20 CEST] <JEEB> it's two filters but it seems like the latter sets the output format for the previous
[14:17:50 CEST] <JEEB> if the zscale is not there to be able to do the conversion, it would automagically put the swscale scaler there
[14:18:21 CEST] <JEEB> so I guess it's a marker more than a filter? "at this point I want the video to be of this format"
[14:19:20 CEST] <durandal_1707> yes
[14:22:17 CEST] <Jan-> um, er
[14:22:38 CEST] <Jan-> so if I have a uhd video in 8 bit 420 and I want an HD video in 10 bit 422 or even better 444...?
[14:23:01 CEST] <JEEB> -vf 'zscale=w=1024:h=576,format=yuv444p10'
[14:23:10 CEST] <Jan-> well, 1920x1080 ideally :)
[14:23:16 CEST] <JEEB> switch the numbers then :)
[14:23:39 CEST] <JEEB> and of course you will have to actually have a very recent ffmpeg built with the zimg library
[14:23:52 CEST] <Jan-> ohdear.
[14:24:05 CEST] Action: Jan- copies JEEB's suggestion carefully to notes.txt
[14:24:53 CEST] <Jan-> if I then have it output that to prores will I get 10 bit prores?
[14:25:25 CEST] <JEEB> I think prores was always 10bit? but yes
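Putting JEEB's pieces together, the whole conversion Jan- wants (UHD 8-bit 4:2:0 in, HD 10-bit 4:4:4 ProRes out) might look like this. A sketch under the assumptions stated in the channel: it needs a then-very-recent ffmpeg built with the zimg library, `input.mov`/`output.mov` are placeholders, and `prores_ks` is one of ffmpeg's ProRes encoders with a 4444 profile.

```shell
# Sketch: 3840x2160 8-bit yuv420p -> 1920x1080 10-bit yuv444p, ProRes.
# Requires ffmpeg configured with --enable-libzimg (for zscale);
# input.mov / output.mov are placeholder filenames.
cmd="ffmpeg -i input.mov \
 -vf zscale=w=1920:h=1080,format=yuv444p10 \
 -c:v prores_ks -profile:v 4444 output.mov"
echo "$cmd"
```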
[14:25:56 CEST] Action: JEEB builds himself a new FFmpeg with zimg for testing on his server
[14:27:02 CEST] <furq> has he really called this library zlib
[14:27:07 CEST] <JEEB> z.lib
[14:27:10 CEST] <JEEB> but yes
[14:27:17 CEST] <furq> that's some confidence
[14:27:18 CEST] <JEEB> also the header is called zimg I think
[14:28:51 CEST] <JEEB> and the library is called libzimg.{a,so} as well
[14:29:13 CEST] <JEEB> so I guess he just named it z.lib but actually didn't go full troll mode regarding the library/header file names
[14:30:50 CEST] <Jan-> arrrgh
[14:30:58 CEST] <furq> http://abload.de/img/desktopajjie.png
[14:31:05 CEST] <furq> i saw that and got really confused for a second
[14:31:48 CEST] <JEEB> it doesn't help that he didn't name his axes
[14:32:14 CEST] <furq> 75 scales per desktop
[14:32:39 CEST] <Jan-> I think the open source community needs to be clearer on who's a user and who's a developer
[14:32:42 CEST] <Jan-> nobody should have to do this stuff :/
[14:33:02 CEST] <furq> what stuff
[14:33:28 CEST] <Jan-> know about headers :/
[14:33:40 CEST] <JEEB> uhh, as a user you wouldn't care
[14:33:52 CEST] <Jan-> you would if you wanted any help on say a mailing list
[14:33:58 CEST] <Jan-> "we only support current git head"
[14:34:52 CEST] <JEEB> yes, there is no LTS. primarily bugs are to be tested on the latest code and if they still happen they get fixed and possibly backported to some release(s) that have already been made
[14:35:50 CEST] <JEEB> if it's not broken in the latest code then the search for what fixed it begins, and whether it can be considered just a bug fix, which is something that can be backported to older releases
[14:35:51 CEST] <Jan-> teehee open sores authors fix bugs?
[14:35:57 CEST] <JEEB> har har har, yes
[14:36:54 CEST] <JEEB> most systems (windows, 32bit and 64bit linux) seem to have automated git HEAD binaries available by third parties
[14:37:08 CEST] <JEEB> so testing with one shouldn't be too hard
[14:37:10 CEST] <Jan-> there's a reason we're still using one from 2013. It works.
[14:37:20 CEST] <JEEB> sure
[14:37:32 CEST] <furq> that's the same reason why i'm using 2.8.1
[14:37:40 CEST] <Jan-> if it ain't broke etc
[14:37:58 CEST] <JEEB> but if you get a bug, you might want to reproduce it in the latest code, and see if it's still broken
[14:38:04 CEST] <JEEB> no need to actually, you know, switch your production
[14:38:05 CEST] <JEEB> :D
[14:38:18 CEST] <Jan-> and if it is still broken?
[14:38:31 CEST] <furq> then you're already running git head so you can report it on the ml
[14:38:35 CEST] <JEEB> then you make a bug report on the trac with a way to replicate it
[14:39:13 CEST] <JEEB> it would be best if there could be a sample attached (and there is a private FTP for private samples)
[14:39:21 CEST] <Jan-> trac?
[14:39:26 CEST] <JEEB> trac.ffmpeg.org
[14:39:29 CEST] <JEEB> the issue tracker
[14:39:42 CEST] <Jan-> usually I'd expect there to be a support at ffmpeg.org but hey.
[14:39:56 CEST] <JEEB> if you were actually paying for support there might be
[14:40:13 CEST] <Jan-> I find it easier to assume there isn't any support and you can get what you can get.
[14:40:48 CEST] <durandal_1707> you can ask for paid support
[14:41:24 CEST] <Venti^> for most software, you are lucky if the company provides a forum for the users to support them selves
[14:41:45 CEST] <Jan-> there isn't even that for ffmpeg
[14:41:48 CEST] <JEEB> uhh
[14:41:59 CEST] <JEEB> there's the users mailing list for random user stuff
[14:42:06 CEST] <JEEB> and then there's trac for issues
[14:42:19 CEST] <furq> and then there's this irc channel for complaining about them both
[14:42:25 CEST] <Jan-> from what I've seen of the mailing list, the "user" part of it is mainly about coding, and I haven't even dared look at the "developer" part.
[14:42:42 CEST] <JEEB> it might have API users there, but it's a general users' mailing list :P
[14:42:42 CEST] <Jan-> I mean it's fine in its way but it isn't very helpful for people who aren't coders
[14:42:54 CEST] <JEEB> the developer part is strictly for internals development
[14:43:20 CEST] <furq> is it really that off-putting that people are asking questions about issues that you don't have
[14:43:20 CEST] <c_14> Most of the mails on ffmpeg-user aren't about coding.
[14:43:45 CEST] <JEEB> but seriously, I don't find it hard to grasp: users mailing list for users (together with this IRC channel), and trac for actual bugs/issues
[14:44:05 CEST] <Jan-> usually when you post an issue there you get told to "use git head" so you spend a day trying to set up a cygwin build environment, verify the fault still exists, get told to "build with debug symbols" (?!) and use "valgrind" (!?) or some other analysis tool
[14:44:25 CEST] <Jan-> basically they just don't want to do support. Which again is fine, but I wish they'd just say so.
[14:44:39 CEST] <JEEB> did you provide any way for a developer to test the issue?
[14:44:48 CEST] <JEEB> I mean, developers if given a simple way to test can test it as well
[14:45:06 CEST] <JEEB> they will require you to do it only if you cannot produce out any way of testing it
[14:45:27 CEST] <JEEB> (or if they are lazy, but most of the cases I've seen revolve around there being no simple way to test it)
[14:45:53 CEST] <JEEB> I mean, some devs have even signed NDAs with places in case the sample is not something to be shared with other eyes
[14:46:03 CEST] <Jan-> that's mainly the problem we have
[14:46:10 CEST] <Jan-> but usually you can make a test file that's just color bars or something
[14:46:16 CEST] <JEEB> yes
[14:46:46 CEST] <Jan-> and they go "Upload it to ftp://wherever.org" and you upload it and it vanishes.
[14:47:00 CEST] <Jan-> Or you get partway through and it says "too big!" because everyone who works on ffmpeg is used to 360p youtube videos.
[14:47:59 CEST] <JEEB> then you complain about that, and when possible make sure that your test case is as small as possible for the issue to happen (like, if the issue doesn't need hours of content to happen you can usually cut it to a few dozen megabytes)
[14:48:00 CEST] <Jan-> they don't make it very easy, is all I'm saying. In fact they seem to be deliberately making it really hard.
[14:48:55 CEST] <JEEB> I have no idea what kind of issues you've been running into with the trac :P and I know people like cehoyos aren't the best in all cases (but he does like check every goddamn issue there is, which as such is kind of commendable)
[14:49:19 CEST] <Jan-> he's a complete dick
[14:49:23 CEST] <JEEB> but in general, as long as it gets to the point where there's a sample available somewhere, it should generally get chugging
[14:49:28 CEST] <Jan-> I've been assuming it's some sort of language problem
[14:49:36 CEST] <Jan-> I think he's austrian
[14:49:45 CEST] <durandal_1707> Jan-: what bugs have you encountered?
[14:50:26 CEST] <Jan-> mainly improper handling of color conversions 601 to 709 or srgb to 709 luma levels, even when you specify it properly (not that it is easy to work out what it is even trying to do)
[14:50:34 CEST] <JEEB> ah
[14:50:36 CEST] <JEEB> swscale
[14:50:51 CEST] <JEEB> you should really test out zscale then :D
[14:51:00 CEST] <Jan-> I don't think there are many (any?) real video engineers on the coding team. And they really won't listen to anyone else, so..
[14:51:47 CEST] <durandal_1707> we have video devs
[14:52:17 CEST] <JEEB> but yeah, swscale is a monstrosity that in the worst case can do a lot of things wrong
[14:52:30 CEST] <durandal_1707> and I listen others
[14:52:34 CEST] <JEEB> in many cases it does things more or less right and is very quick
[14:52:53 CEST] <JEEB> but then you can easily hit cases where only the quick part applies
[14:53:41 CEST] <Jan-> this is what you get when you get software guys with no management.
[14:53:47 CEST] <durandal_1707> some stuff in swscale are simply missing
[14:54:12 CEST] <Jan-> "Hey it works really fast and it's sort of roughly vaguely approximately right, that'll do" :D
[14:54:24 CEST] <JEEB> Jan-: also remember swscale is something developed over 10 years ago
[14:54:31 CEST] <durandal_1707> swscale is old
[14:54:43 CEST] <JEEB> speed was a *thing* in the early 2000s with video processing
[14:54:53 CEST] <Jan-> Then it's a lot younger than the international telecommunications union's recommendation bt.709 :)
[14:54:55 CEST] <JEEB> nobody was really thinking of colorimetry or higher bit depths
[14:55:18 CEST] <JEEB> sure
[14:55:28 CEST] <JEEB> also not a lot of people were thinking of conversions between things ;)
[14:55:37 CEST] <JEEB> I mean from one colorimetry to another
[14:55:50 CEST] <Jan-> We noticed.
[14:56:19 CEST] <JEEB> that kind of stuff only caught on like... five+ years ago, first with avisynth filters and then vapoursynth filters
[14:56:36 CEST] <JEEB> we finally got enough of the CPU power, and started looking into correct solutions
[14:57:17 CEST] <Jan-> well that's sort of the issue.
[14:57:25 CEST] <Jan-> it actually caught on a lot longer ago than that.
[14:57:31 CEST] <Jan-> it's just that nobody on the ffmpeg team noticed.
[14:57:40 CEST] <JEEB> not limited to ffmpeg in any way
[14:57:57 CEST] <Jan-> we still can't make mp4 videos where black is actually black not gray.
[14:58:59 CEST] <ChocolateArmpits> what?
[14:59:01 CEST] <JEEB> the only reason why it was less pronounced in avisynth world was that everyone was doing every step pretty much manually. you had no way of signaling the metadata through, so...
[14:59:18 CEST] <JEEB> you just knew that input was this, then you did filter, that filter etc
[15:00:03 CEST] <JEEB> but only like around 2010 or so? you started getting avisynth filters with functionality like internal filtering done in a higher bit depth to make it more accurate etc
[15:00:13 CEST] <durandal_1707> Jan-: with h264?
[15:00:50 CEST] <Jan-> uhhuh
[15:01:03 CEST] <Jan-> hollywood trailers (encoded in mainconcept) on youtube have black blacks
[15:01:07 CEST] <Jan-> ffmpeg ones have grey.
[15:01:22 CEST] <Jan-> because swscale STILL doesn't understand studio luma ranges.
[15:01:53 CEST] <ChocolateArmpits> I don't see how's that possible
[15:01:56 CEST] <JEEB> welp, wanted to try out zscale BT.2020->BT.709 and then remembered that my sample was HEVC
[15:02:12 CEST] <durandal_1707> hmm, I think that might get fixed
[15:03:26 CEST] <durandal_1707> if source is properly marked as full or limited range zscale should do right job
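On the swscale side, the range handling Jan- complains about can at least be made explicit: the stock `scale` filter takes `in_range`/`out_range` options. A sketch, assuming the source really is full range and needs to become limited (studio swing) range; `input.mov` is a placeholder:

```shell
# Sketch: explicit full-range -> limited-range (studio swing) conversion
# using the scale filter's in_range/out_range options, so blacks land at
# 16 rather than staying at 0. input.mov is a placeholder filename.
cmd="ffmpeg -i input.mov \
 -vf scale=in_range=full:out_range=limited \
 -c:v libx264 -crf 18 out.mp4"
echo "$cmd"
```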
[15:03:28 CEST] <ChocolateArmpits> JEEB: Does the colorspace conversion there include any rendering intents ?
[15:04:00 CEST] <JEEB> Jan-: that's why often people used avisynth or vapoursynth to do the actual colorspace conversions before that and just pushed the already converted stuff into ffmpeg or so :)
[15:04:33 CEST] <JEEB> ChocolateArmpits: ./ffmpeg -i ~/samples/Japanese_Rugby_Game-4K_Channel.ts -vf 'zscale=w=1280:h=-2:p=709,format=yuv420p' -c:v libx264 -crf 21 -c:a copy bt2020_to_bt709.mp4
[15:04:56 CEST] <JEEB> or wait, I should have set the other ones too
[15:04:56 CEST] <JEEB> shit
[15:05:00 CEST] <JEEB> transfer and matrix
[15:05:18 CEST] <Jan-> ffmpeg -i -vf 'rumpelstiltskin monkey caboose'
[15:05:35 CEST] <Jan-> yes yes very obvious :)
[15:06:10 CEST] <JEEB> at least the options are at least somewhat well documented
[15:06:13 CEST] <JEEB> https://www.ffmpeg.org/ffmpeg-all.html#Options-72
[15:07:23 CEST] <ChocolateArmpits> JEEB: By "transfer characteristics" it means the gamma transform?
[15:07:27 CEST] <Jan-> The filter accepts the following options: rumpelstiltskin monkey caboose underside intrinsic blancmange blancmange furtive.
[15:07:39 CEST] <Jan-> now implement HDR :)
[15:07:59 CEST] <JEEB> yes, and "rumpelstiltskin" is "Set the dither type." f.ex.
[15:08:18 CEST] <JEEB> HDR is a marketing term, you generally mean some colorspace with it
[15:08:28 CEST] <Jan-> does it do 3D lookup tables yet? I remember it being talked about, but it's not like there's a changelog
[15:08:33 CEST] <ChocolateArmpits> b-bbut the 4000 nits display
[15:08:40 CEST] <Jan-> ChocolateArmpits: it's the future.
[15:08:56 CEST] <JEEB> basically even just having content as BT.2020 right now is "HDR" as far as the blu-ray consortium is concerned
[15:09:03 CEST] <Jan-> but it's OK, you can repurpose it as a tanning bed if it isn't the future after all.
[15:09:19 CEST] <JEEB> as it is a wider space than BT.709
[15:09:47 CEST] <Jan-> if you guys were any good, you'd use the open jpeg2000 library and implement xyz color, and have it be able to make DCPs :D
[15:10:01 CEST] <durandal_1707> Jan-: lut3d filter, but currently without high bitdepth support
[15:10:34 CEST] <Jan-> *facepalm* whynot
[15:11:27 CEST] <Jan-> by the way the documentation is wrong
[15:11:33 CEST] <JEEB> Jan-: I think when I mentioned digital cinema XYZ support he said it wasn't possible as it wasn't specified well enough or whatever. there was just some "semi-standard" that is what most such content seems to use or whatever. he said it shouldn't be too hard
[15:11:34 CEST] <Jan-> or at least incomplete
[15:11:48 CEST] <Jan-> smmmrrrnnkkk not specified well enough that's hilarious
[15:11:58 CEST] <JEEB> XYZ is specified well enough of course
[15:12:02 CEST] <JEEB> let me find the logs
[15:12:09 CEST] <Jan-> for a piece of software that still doesn't know about studio swing luma
[15:12:10 CEST] <Jan-> snrrrrrk!
[15:12:27 CEST] <JEEB> zimg is a separate library
[15:12:34 CEST] <JEEB> which is now used in the zscale filter in ffmpeg
[15:13:09 CEST] <JEEB> ok... let's see how this went
[15:13:19 CEST] <Jan-> that remark comes under the heading of "I don't care about your internal engineering" :/
[15:14:12 CEST] <JEEB> welp, I had forgotten that the frame count in ffmpeg isn't the amount of frames already encoded...
[15:14:26 CEST] <JEEB> stopped it at ~40 with HEVC input and x264 output and I got a couple of fabulous frames
[15:14:54 CEST] <JEEB> just not too fancy of encoding this whole thing on this 8core atom box
[15:14:55 CEST] <JEEB> lol
[15:15:11 CEST] <Jan-> there are very fast gpu based hevc encoders now
[15:15:14 CEST] <Jan-> may be hard to catch those
[15:15:33 CEST] <JEEB> yeah, but those are meant for completely different use cases, just like hw AVC encoders
[15:15:49 CEST] <furq> JEEB: is that one of those avoton NAS boards
[15:15:55 CEST] <JEEB> furq: ye
[15:16:09 CEST] <furq> neat
[15:16:22 CEST] <JEEB> a dediserver provider had it for a good price per month as a cheap ECC'd server box
[15:16:30 CEST] <JEEB> with an SSD
[15:16:36 CEST] <JEEB> so I grabbed it for random usage
[15:16:46 CEST] <Jan-> mmmm
[15:16:48 CEST] <Jan-> coooores :)
[15:17:29 CEST] <JEEB> yeah, if I was either more wealthy or was actually buying/renting this for work then I'd just get a much more expensive xeon thing
[15:17:42 CEST] <JEEB> might buy a xeon box for homeserver use next year with a nice amount of HDDs
[15:17:49 CEST] <Jan-> happily the sort of computers that are good at video editing also tend to be quite good at encoding.
[15:19:27 CEST] <JEEB> but yeah, the only reason why I had asked the zimg developer regarding digital cinema JPEG2000 content was because I was thinking of using zimg to convert the tears of steel 4K master into 16bit YCbCr for encoding
[15:19:35 CEST] <JEEB> because JPEG2000 decoding is slow as molasses
[15:19:42 CEST] <Jan-> tell barco that
[15:19:45 CEST] <JEEB> so it's faster and simpler if I just make a lossless intermediated
[15:20:02 CEST] <JEEB> *intermediate
[15:20:44 CEST] <Jan-> "Intel(R) Xeon(TM) CPU E5-2630 @ 2.60GHz 2.60GHz"
[15:20:48 CEST] <Jan-> I don't know if that is good
[15:20:50 CEST] <Jan-> it was certainly expensive
[15:21:14 CEST] <JEEB> which version?
[15:21:22 CEST] <JEEB> there's like three versions of that ID from intel now
[15:21:23 CEST] <JEEB> :D
[15:21:29 CEST] <Jan-> I uhoh. I'm copy-pasting from properties on "my computer"
[15:21:57 CEST] <JEEB> but the latest from late 2014 is pretty good. 20MB of cache, 8 cores
[15:22:31 CEST] <JEEB> I hope there will be more broadwell/sky lake server stuff soon'ish
[15:22:41 CEST] <JEEB> there's already the xeon d series for broadwell
[15:23:21 CEST] <Jan-> I have no idea how to find out how many cpu cores this computer has.
[15:23:33 CEST] <JEEB> on windows there's cpu-z
[15:23:44 CEST] <JEEB> that is what I usually launch up to see what's in the box
[15:23:48 CEST] <JEEB> if I don't already know
[15:24:18 CEST] <BtbN> Well, technicaly there are no GPU based encoders.
[15:24:24 CEST] <BtbN> They just happen to be on graphics cards.
[15:24:29 CEST] <JEEB> yeah
[15:24:37 CEST] <JEEB> thankfully they gave up on the GPGPU based stuff
[15:24:43 CEST] <Jan-> in device manager under "processors" it says " "Intel(R) Xeon(TM) CPU E5-2630 @ 2.60GHz" 24 times.
[15:25:09 CEST] <JEEB> Jan-: http://www.cpuid.com/downloads/cpu-z/cpu-z_1.74-en.zip
[15:25:12 CEST] <JEEB> I recommend this :)
[15:25:18 CEST] <Jan-> two six core chips with hyper threading?
[15:25:31 CEST] <furq> if it's dual-cpu then yeah
[15:25:31 CEST] <JEEB> yeah, could be
[15:25:37 CEST] <Jan-> hur hur
[15:25:39 CEST] <Jan-> 24 cores
[15:25:40 CEST] Action: Jan- wins
[15:25:44 CEST] <furq> 12 cores
[15:25:49 CEST] <furq> HT cores don't count
[15:26:18 CEST] <JEEB> HT does help with the latest CPU types nicely, but yeah - you should compare by actual core count/speed/architecture
[15:26:20 CEST] <Jan-> the scheduler thinks they do.
[15:27:54 CEST] <JEEB> also it seems like my issue with not enough pictures getting muxed was due to one of the things I find "fun" explaining to users (bit stream filters). at least I think that's being improved now so maybe in the future the user shouldn't care about how their streams are encapsulated and if the encapsulation has to be changed between two containers
[15:28:41 CEST] <JEEB> f.ex. transport streams have usually AAC streams in one type of packets, and when muxing into mp4 a different scheme is used 8)
[15:30:03 CEST] Action: Jan- doesn't care or want to have to care
[15:30:27 CEST] <JEEB> yes, in theory that should all be automatized.
[15:31:24 CEST] <Jan-> I just wrote a riff file parser. That was about my limit.
[15:32:25 CEST] <JEEB> also I guess I should have enabled verbose debugging to see if I got swscale in there somewhere with this filtering...
[15:33:01 CEST] <JEEB> durandal_1707: 'zscale=w=1280:h=-2:p=709:t=709:m=709,format=yuv420p' should in theory have no swscale, right?
[15:36:10 CEST] <Jan-> p? t? m?
[15:38:06 CEST] <JEEB> (output) primaries, transfer, matrix
[15:38:17 CEST] <JEEB> I think they have a longer name too if you like longer options :)
[15:38:46 CEST] <JEEB> I'm currently comparing the result with mpv's video rendering which should be one of the few things handling BT.2020 rendering correctly
[15:41:11 CEST] <JEEB> oh of course
[15:41:29 CEST] <JEEB> seems like the output didn't get tagged with those primaries, transfer, matrix...
[15:41:40 CEST] <JEEB> at least ffmpeg -v debug -i file.mp4 doesn't tell
[15:41:43 CEST] <Jan-> Er,
[15:41:45 CEST] <Jan-> okay
[15:42:00 CEST] <Jan-> surely you need to specify input and output primaries and gamma function
[15:42:33 CEST] <JEEB> at least with this input file the input information should be in the bit stream
[15:42:49 CEST] Action: Jan- sounds error klaxon
[15:42:55 CEST] <Jan-> *NRRRK*
[15:42:59 CEST] <Jan-> must allow user to override :/
[15:43:09 CEST] <Jan-> also where do you specify the luma ranges
[15:43:18 CEST] <JEEB> yuv420p10le(tv, bt2020nc/bt2020/bt2020-10)
[15:43:25 CEST] <JEEB> yes, there are overrides
[15:43:28 CEST] <JEEB> just not used by me
[15:43:34 CEST] <JEEB> because the input data was correct
[15:43:37 CEST] <Jan-> usually best to be very specific
[15:44:05 CEST] <JEEB> luma range by default is "same as input" except maybe with RGB input/output?
[15:44:11 CEST] <JEEB> not sure, didn't check how the zscale filter handles it
[15:44:31 CEST] <Jan-> it's very very very bad to make any sort of assumptions with that stuff
[15:44:39 CEST] <Jan-> without printing out a big warning in 100 point red letters
[15:45:21 CEST] <Jan-> from what we've seen ffmpeg outputs in studio swing no matter what if it's yuv input.
[15:45:25 CEST] <JEEB> yeah, except for data that is actually written to the input stream. and that "not sure" was regarding cases where one side is RGB and the other isn't
[15:45:29 CEST] <Jan-> we have never been able to make it do anything else.
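(The "studio swing" point above comes down to simple arithmetic; a minimal Python sketch of the standard 8-bit limited-vs-full-range luma mapping, not ffmpeg code:)

```python
# Limited ("studio swing") 8-bit luma uses Y in [16, 235] per BT.601/BT.709;
# full range ("full swing") uses [0, 255].

def limited_to_full(y):
    """Expand limited-range 8-bit luma to full range, clipped to [0, 255]."""
    v = round((y - 16) * 255 / 219)
    return max(0, min(255, v))

def full_to_limited(y):
    """Compress full-range 8-bit luma into the limited range."""
    return round(y * 219 / 255) + 16

print(limited_to_full(16))   # limited black -> full black (0)
print(limited_to_full(235))  # limited white -> full white (255)
print(full_to_limited(255))  # full white -> limited white (235)
```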
[15:45:52 CEST] <JEEB> Jan-: this is not swscale so as long as swscale is not related we're now discussing a whole different thing regarding colorspaces
[15:45:55 CEST] <JEEB> and their conversions
[15:46:08 CEST] <Jan-> if you can even control that
[15:46:25 CEST] <JEEB> before now you couldn't really
[15:46:26 CEST] <Jan-> also, are there better scaling options for subsampled chroma now?
[15:46:28 CEST] <JEEB> it was up to swscale
[15:46:49 CEST] <JEEB> zscale should be up your alley for that, too
[15:48:12 CEST] <JEEB> and yes, I agree that things that are not written in the input file shouldn't be assumed upon
[15:48:27 CEST] <Jan-> even if they are, they're often wrong
[15:48:28 CEST] <JEEB> and there should always be a force-option in case that stuff is wrong
[15:48:38 CEST] <JEEB> which is what is the case here f.ex.
[15:48:47 CEST] <JEEB> the levers are there
[15:48:51 CEST] <Jan-> for the longest time ffmpeg seemed to assume that anything that was less than 1280 pixels wide was in rec 601 color.
[15:49:01 CEST] <Jan-> which was horribly broken for some dslrs that recorded video in 601, but hd resolution
[15:49:05 CEST] <JEEB> yes
[15:49:32 CEST] <JEEB> and it would have been broken for Japanese DTV where SD content was also BT.709 (although properly signaled)
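(Why guessing 601 vs. 709 matters is visible from the luma coefficients alone; a rough Python sketch using the published coefficients of each standard:)

```python
# Luma (Y') from non-linear R'G'B' under the two standards:
# BT.601:  Y = 0.299  R + 0.587  G + 0.114  B
# BT.709:  Y = 0.2126 R + 0.7152 G + 0.0722 B

def luma_601(r, g, b):
    return 0.299 * r + 0.587 * g + 0.114 * b

def luma_709(r, g, b):
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

# A saturated green shifts visibly if decoded with the wrong matrix:
print(luma_601(0.0, 1.0, 0.0))  # 0.587
print(luma_709(0.0, 1.0, 0.0))  # 0.7152
```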
[15:50:04 CEST] <Jan-> it was properly signaled by canon too
[15:50:13 CEST] <Jan-> but ffmpeg went "nono, you don't want THAT, you want THIS..."
[15:50:27 CEST] Action: Jan- beats on ffmpeg with a mallet
[15:50:29 CEST] <JEEB> I know it painfully well that the signaled information was not properly used, and it probably still isn't
[15:50:50 CEST] <Jan-> it's because none of the coders really know what they're doing, and they won't listen to anyone who isn't a coder.
[15:50:50 CEST] <JEEB> poked the zimg developer with the sample and stuff regarding the BT.2020->BT.709 thing I did:
[15:50:59 CEST] <JEEB> 16:47 < anon32> idk, I'll have to look into it
[15:50:59 CEST] <JEEB> 16:47 < JEEB> thanks :)
[15:51:00 CEST] <JEEB> 16:47 < anon32> gamut reduction is not support anyway, so nothing will be fixed
[15:51:00 CEST] <JEEB> 16:47 < anon32> not supported*
[15:51:20 CEST] <Jan-> if at this stage you don't have a way to just plug in xy coords of primaries and a gamma function, your software is Not Finished Yet.
[15:51:54 CEST] <JEEB> nobody is saying anything is finished :)
[15:52:10 CEST] <Jan-> no, but if you mention it on the mailing list you'll get a buttload of butthurt in response.
[15:54:25 CEST] <JEEB> there's this "old" quote by loren
[15:54:26 CEST] <JEEB> <pengvado> making an alpha product into final is easy
[15:54:26 CEST] <JEEB> <pengvado> the hard part is adding features so that it stays alpha
[15:56:03 CEST] <Jan-> I guess to be fair, having written a colorspace converter, if you did implement actually being able to punch in primaries and gamma function, you would probably have to stop calling it ffmpeg and start calling it reallyfrickenslowmpeg
[15:56:26 CEST] <Jan-> unless you calculated a really high precision lut on the fly
[15:56:28 CEST] <Jan-> that might work
[15:59:29 CEST] <durandal_1707> JEEB: so what conversion is not supported?
[16:00:13 CEST] <JEEB> well he officially doesn't support gamut reduction, but we're still looking into if it's otherwise done correctly or not
[16:01:25 CEST] <Jan-> how can you not support gamut reduction
[16:01:25 CEST] <JEEB> also does the -colorspace option *really* only take in integers?
[16:01:52 CEST] <JEEB> Jan-: it's called "I haven't tested that yet with large gamut reduction", basically :P
[16:02:30 CEST] <Jan-> [r,g,b]out = ([r,g,b]in * (1-factor)) + (sum([r,g,b]in/3) * factor)
[16:02:33 CEST] <Jan-> gamut reduction
[16:02:34 CEST] <JEEB> 17:01 < anon32> looks fine to me
[16:02:34 CEST] <JEEB> 17:01 < anon32> 709->2020->709 reverses exactly
[16:02:35 CEST] <JEEB> 17:01 < anon32> everything else is associated with clipping, so it's as described in the manual
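(Jan-'s one-liner formula above, as runnable Python; this is a crude desaturation toward grey, a simplistic sketch rather than a colorimetric gamut mapping:)

```python
def desaturate(rgb, factor):
    """Blend each channel toward the channel mean (grey).
    factor=0 leaves the colour unchanged; factor=1 collapses it to grey."""
    grey = sum(rgb) / 3
    return tuple(c * (1 - factor) + grey * factor for c in rgb)

print(desaturate((1.0, 0.0, 0.0), 0.0))  # unchanged: (1.0, 0.0, 0.0)
print(desaturate((1.0, 0.0, 0.0), 1.0))  # fully grey: all channels 1/3
```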
[16:03:20 CEST] <Jan-> why would -colorspace take only integers? you might want to say "p3" or "xyz" or "adobergb"
[16:03:36 CEST] <Jan-> or "srgb" ffs
[16:03:42 CEST] <Jan-> srgb should be really, really common
[16:03:57 CEST] <JEEB> it most probably does support string-based arguments but it's just not mentioned in the docs
[16:03:59 CEST] <Jan-> in a practical sense it's almost never corrected for because 709 looks okay on srgb displays, but...
[16:05:13 CEST] <Jan-> I would take the position that "709" in that context is not a number anyway, it's an abbreviation of the name of a standard.
[16:05:24 CEST] <Jan-> thus it is a string
[16:05:34 CEST] <Jan-> it's not like changing it to 710 is valid
[16:07:14 CEST] <Jan-> also clipping gamuts is worth testing because good converters will implement a soft rolloff (optionally) to minimise ugliness.
[16:18:03 CEST] <JEEB> I think I might have my first patch idea for the zscale filter
[16:18:10 CEST] <JEEB> [Parsed_zscale_0 @ 0x3423d00] w:3840 h:2160 fmt:yuv420p10le sar:1/1 -> w:1280 h:720 fmt:yuv420p sar:1/1
[16:18:19 CEST] <JEEB> guess what this debug output lacks ;)
[16:18:57 CEST] <Jan-> primaries and gamma function?
[16:19:11 CEST] <Jan-> of course it becomes even more entertaining when you're considering yuv
[16:19:13 CEST] <JEEB> well yes, colorimetry information on the whole :)
[16:19:29 CEST] <Jan-> also luma range
[16:19:35 CEST] <JEEB> ye
[16:19:37 CEST] <Jan-> which ffmpeg still likes to pretend doesn't exist and isn't a thing
[16:19:54 CEST] <JEEB> actually everything but swscale seems to handle it relatively well now
[16:19:55 CEST] <Jan-> and bear in mind, the chroma ranges for yuv are different for the y range.
[16:20:00 CEST] <JEEB> yes
[16:20:31 CEST] <JEEB> the issue of course is that until now swscale was the only thing doing colorspace conversions and scaling
[16:20:55 CEST] <Jan-> it's not THAT hard.
[16:21:01 CEST] <Jan-> there's a number of bases to cover with a conversion tool
[16:21:05 CEST] <Jan-> but it's just math and it's easy to test
[16:21:21 CEST] <JEEB> anyways, at this point I would be more interested in having to make sure that zscale does it right
[16:21:26 CEST] <JEEB> rather than trying to fix swscale
[16:21:32 CEST] <Jan-> I guess... have they made it hard for themselves by not breaking everything down into floats internally?
[16:22:05 CEST] <Jan-> if I had to design somethin like this I'd probably fire it all into linear 32 bit float and go from there
[16:22:19 CEST] <JEEB> that's what people have done during the last five years or so
[16:22:23 CEST] <Jan-> this is what aces is for but I bet you guys have never heard of it.
[16:23:05 CEST] <JEEB> basically swscale is a massive lump of issues that seems to kind of work well enough as long as you're not poking it the wrong way. which is why nobody really likes it
[16:23:28 CEST] <Jan-> Personally I think we should probably set it on fire.
[16:24:13 CEST] <JEEB> that might happen if zscale covers all the ground well enough
[16:26:29 CEST] <Jan-> I suppose if they made it work in linear float internally you wouldn't be able to do lossless conversions.
[16:27:00 CEST] <Jan-> But then if it is supposed to be lossless you wouldn't be scaling.
[17:01:24 CEST] <waressearcher2> if I amplify level of audio by increasing or decreasing dBm do I lose quality ? say I increased it by 5dBm and then decided I don't want it so I decreased it by -5dBm, was there loss in data or quality after these two changes ?
[17:02:33 CEST] <ChocolateArmpits> waressearcher2: normally depends on bit depth and if the resulting audio has clipping, if the clipping happens after +5db increase you will lose quality
[17:08:38 CEST] <waressearcher2> ChocolateArmpits: what if its a wav file ? and no bit clipping
[17:08:59 CEST] <waressearcher2> "bit depth" doesn't matter for wav right ?
[17:09:08 CEST] <ChocolateArmpits> waressearcher2: To be more specific about the bit depth, say if you have 24bit audio but the audio filter performs only at 16bit precision, after the very first operation you will lose audio quality
[17:09:26 CEST] <ChocolateArmpits> It's usually a one-way relation between the filter and audio, where filter plays the major role
[17:10:09 CEST] <ChocolateArmpits> wav can store pcm at various bit depths, most common is 16bits
[17:11:03 CEST] <ChocolateArmpits> Unless there's clipping introduced I don't think even a 16bit precision filter will do any real harm to the audio
[17:11:58 CEST] <ChocolateArmpits> provided your input is no more than 16 bits
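(The bit-depth point can be made concrete with a toy Python model, assuming the filter simply truncates a 24-bit sample to 16 bits of precision:)

```python
def truncate_to_16bit(sample_24bit):
    """Model a filter that keeps only the top 16 bits of a 24-bit sample:
    the low 8 bits are discarded, which is irreversible."""
    return (sample_24bit >> 8) << 8

s = 0x12345                       # a 24-bit sample with detail in the low bits
print(truncate_to_16bit(s))       # low 8 bits are gone and cannot be recovered
print(truncate_to_16bit(s) == s)  # False -- quality was lost on the first pass
```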
[17:13:22 CEST] <waressearcher2> can bit clipping happen if I not increase but decrease dBm very much ?
[17:15:34 CEST] <ChocolateArmpits> Not only clipping but the dynamic range will also suffer
[17:17:41 CEST] <ChocolateArmpits> If it's processed digitally and then transmitted over cables noise can be introduced
[17:18:35 CEST] <ChocolateArmpits> Which one is more destructive (increasing or decreasing) depends really on the loudness of the content
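(The +5 dB / -5 dB round-trip question can be simulated for 16-bit integer samples; a rough sketch, since real filters may use higher internal precision:)

```python
def apply_gain_db(samples, db):
    """Apply a dB gain to 16-bit integer samples, with rounding and clipping."""
    gain = 10 ** (db / 20)  # amplitude ratio for a given dB change
    out = []
    for s in samples:
        v = round(s * gain)
        out.append(max(-32768, min(32767, v)))  # clip to the 16-bit range
    return out

samples = [1000, -2500, 30000]
roundtrip = apply_gain_db(apply_gain_db(samples, 5), -5)
# Quiet samples survive apart from rounding error, but 30000 clipped
# at +5 dB and stays damaged after the -5 dB step.
print(roundtrip)
```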
[18:01:17 CEST] <waressearcher2> is dynamic range compression destroying music ?
[18:03:24 CEST] <ChocolateArmpits> Conceptually probably yes, as the artists's or audio engineer's vision gets warped
[18:03:49 CEST] <ChocolateArmpits> It's all very relative
[18:04:58 CEST] <ChocolateArmpits> Orchestra will suffer more from audio compression than say pop music
[18:05:18 CEST] <ChocolateArmpits> Former usually utilize a much broader range of volumes
[18:05:30 CEST] <ChocolateArmpits> or should I loudness
[18:05:32 CEST] <ChocolateArmpits> say*
[19:29:34 CEST] <RobotsOnDrugs> yes, it very much depends on the genre
[19:30:46 CEST] <RobotsOnDrugs> in some genres, loudness and compression are very much part of the music, and in others, it's meant to be mostly quiet
[19:31:54 CEST] <RobotsOnDrugs> of course, compression is a necessary evil in many environments where one cannot simply focus solely on the music
[19:35:56 CEST] <zhanshan> hi
[19:36:07 CEST] <zhanshan> I got problems trying to record my screen with audio
[19:36:13 CEST] <zhanshan> running jack using cadence
[19:38:31 CEST] <zhanshan> I always get "[alsa @ x] cannot open audio device hw:0 (Device or resource busy)"
[19:56:42 CEST] <Bhaskar> How to merge mp3 and image to generate video? I tried "ffmpeg -i 1.png -i 1.mp3 output.mp4" but video generated is having error
[19:56:48 CEST] <Bhaskar> what is correct way to do it?
[19:57:04 CEST] <Bhaskar> i want to generate video in mp4 forat
[19:57:07 CEST] <Bhaskar> format
[20:12:59 CEST] <grublet> Bhaskar: i think you need to use -loop to let ffmpeg know to repeat the image, i dont know the specifics of the loop command though maybe someone else can help
[20:16:44 CEST] <Bhaskar> i tried using loop as well but it didn't work :( Might be I am missing something
[21:14:12 CEST] <lauren> hey folks, anyone used the nvenc encoder?
[21:15:56 CEST] <lauren> I'm trying to encode jpegs to single-frame h264 files as fast as possible
[21:16:22 CEST] <lauren> (because h264 is a way better compressor for single frames than jpeg is, even though it's designed for video)
[21:21:39 CEST] <DHE> x264 produces better images, but nvenc is crazy fast
[21:23:04 CEST] <lauren> the command I used was: ffmpeg -i -y -vf "crop=1280:720:0:0" -pix_fmt yuv420p -c:v nvenc_h264 -qmax 22 -qmin 22 -bsf:v h264_mp4toannexb -f mpegts "$2"
[21:23:11 CEST] <lauren> unfortunately, it wasn't faster
[21:23:27 CEST] <lauren> oops, I deleted one of the arguments while I was editing the command for understandability
[21:23:52 CEST] <lauren> anyway, I gave it a jpeg on the input and .ts file on the output
[21:24:10 CEST] <lauren> I tried again with mp4 to mp4, and that *was* fast
[21:24:18 CEST] <DHE> you don't need the bsf for transcoding. just for copying
[21:24:29 CEST] <lauren> oh, ok
[21:26:58 CEST] <furq> is there some reason you're using h264 over webp
[21:29:40 CEST] <lauren> yes, it can also be decoded quickly on nvidia cards
[21:29:51 CEST] <lauren> and ultimately this is just an intermediate format that has to be fast
[22:16:59 CEST] <cs_> hi, i'm trying to batch convert .flac files to .ogg - what command should i use?
[22:29:17 CEST] <IntelRNG> cs_: I'd just script it with oggenc if you have it around.
[22:29:55 CEST] <IntelRNG> Use flac -d to decode it and oggenc to encode it. Asuming you have them.
[22:37:31 CEST] <cs_> sorry, but i'm a bit of a linux noob. i hate to ask this, but can you give me the full command?
[22:48:19 CEST] <tp__> find -name "*.flac" -exec ffmpeg -i {} -acodec libvorbis {}.ogg \;
[22:52:44 CEST] <cs_> thanks tp__
[22:53:02 CEST] <cs_> but i get "Unknown encoder 'libvorbis'
[22:53:02 CEST] <cs_> "
[22:53:07 CEST] <cs_> even though i have it installed
[22:53:40 CEST] <tp__> ffmpeg should be compiled with libvorbis support
[22:55:30 CEST] <tp__> you could try "-codec:a vorbis -strict experimental" as arguments instead, but lower quality encodes
[23:03:30 CEST] <cs_> okay thanks tp__ ill try fiddling around some more
[23:09:28 CEST] <TD-Linux> ehhh don't use the built in vorbis encoder
[23:10:16 CEST] <TD-Linux> ah he left
[23:22:52 CEST] <rsully> I'm converting a wmv -> mp4, how can I deinterlace it?
[23:24:18 CEST] <relaxed> rsully: yadif filter
[23:26:18 CEST] <rsully> relaxed what params? 0,-1,0 is an example in the wiki, and im looking at the page that explains them but I have no idea what I would want
[23:29:51 CEST] <JEEB> in general the defaults should work
[23:29:56 CEST] <JEEB> as in, no params
[23:30:14 CEST] <JEEB> the parameters are documented @ https://www.ffmpeg.org/ffmpeg-all.html#yadif-1
[23:30:44 CEST] <JEEB> and yes, they are named so a command line can be less black box'd than that example
[23:31:36 CEST] <JEEB> the last number is the only non-default value from 0,-1,0. and it just means that will deinterlace all pictures even if they would be signaled as non-interlaced
[23:31:56 CEST] <JEEB> you should only use that when the source file is incorrectly created
[23:32:33 CEST] <rsully> ok I'll try without the params
[23:34:16 CEST] <relaxed> create some small samples with -t 60 and see what works best
[00:00:00 CEST] --- Sun Oct 25 2015