[Ffmpeg-devel-irc] ffmpeg.log.20190330
burek
burek021 at gmail.com
Sun Mar 31 04:00:02 EEST 2019
[00:00:39 CET] <kevinnn> BtbN: okay let me see if I can tweak it to do that
[00:01:08 CET] <BtbN> There could also be a network send queue at some point. Or the encoder could be going slow
[00:01:14 CET] <BtbN> real-time encoding is insanely complex
[00:01:20 CET] <BtbN> or streaming, rather
[02:20:55 CET] <HickHoward> uhh
[02:20:58 CET] <HickHoward> you guys awake?
[02:21:40 CET] <DHE> no, sleeptyping
[02:22:51 CET] <HickHoward> whatever
[02:22:52 CET] <HickHoward> so
[02:22:55 CET] <HickHoward> i've made this thing
[02:23:12 CET] <HickHoward> https://ghostbin.com/paste/aehkn
[02:23:56 CET] <HickHoward> it's a WIP .bms script based on both m35's amazing work at documenting the STR format and some exe from a game i loaded up using Cutter, a radare2 gui program
[02:39:05 CET] <kevinnn> Can anyone tell me why tuning b_vfr_input is crashing my encoding program?
[02:43:40 CET] <kevinnn> I am sending raw annexb
[02:43:46 CET] <kevinnn> as well, don't know if that affects things
[02:56:34 CET] <kepstin> kevinnn: probably a good idea to check with a debugger if you're actually getting a crash. but if you have b_vfr_input=1 set, then you should be providing a timebase and pts values, so make sure you're doing that.
[02:57:55 CET] <kevinnn> kepstin: hmm, timebase and pts? can you provide an example?
[02:58:48 CET] <faLUCE> do you know if opus decoder produces interleaved or non-interleaved audio frames?
[02:59:42 CET] <kepstin> which opus decoder?
[03:01:00 CET] <kepstin> and i assume you mean samples, not frames
[03:01:26 CET] <kepstin> you can tell the sample format in ffmpeg, there's different types for interleaved vs planar
[03:01:27 CET] <faLUCE> kepstin: yes, frames with interleaved samples
[03:01:37 CET] <kepstin> e.g. 'flt' is interleaved, 'fltp' is planar
[03:01:57 CET] <kevinnn> kepstin: what is i_timebase_num equivalent to?
[03:01:57 CET] <kepstin> i assume there's some way to programmatically see the number of planes and whatnot
[03:02:20 CET] <kepstin> kevinnn: that's the numerator of the timebase, which is a rational number.
[03:02:43 CET] <kevinnn> kepstin: rational number that represents what?
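The timebase kepstin refers to is a rational number giving the duration, in seconds, of one pts tick; x264's i_timebase_num/i_timebase_den fields hold that rational, and each picture's pts is counted in those ticks. A minimal sketch of the arithmetic (illustrative Python; the helper name pts_for_frame is made up):

```python
from fractions import Fraction

def pts_for_frame(n, fps, timebase_num=1, timebase_den=90000):
    """pts of frame n, counted in ticks of timebase_num/timebase_den seconds
    (the rational that x264's i_timebase_num/i_timebase_den fields hold)."""
    seconds = Fraction(n, fps)                   # wall-clock time of frame n
    tick = Fraction(timebase_num, timebase_den)  # duration of one pts tick
    return int(seconds / tick)

# 30 fps in a 1/90000 timebase: frames land 3000 ticks apart
print([pts_for_frame(n, 30) for n in range(4)])  # [0, 3000, 6000, 9000]
```

With b_vfr_input=1, x264 uses the supplied timebase and per-picture pts (rather than a fixed frame rate) for ratecontrol, which is why kepstin says both must be provided.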
[03:02:55 CET] <faLUCE> kepstin: so, it produces interleaved or non-interleaved depending on the content of the opus packets?
[03:03:09 CET] <kepstin> faLUCE: no, opus codec itself has no concept of anything like that
[03:03:35 CET] <kepstin> whether you get interleaved or non-interleaved depends on the implementation of the decoder, they'll do whichever they think is more efficient
[03:03:57 CET] <faLUCE> kepstin: I wonder if there's some constraint for opus dec
[03:03:57 CET] <kepstin> i think the libopus decoder will give you back interleaved samples, I dunno about ffmpeg's internal decoder.
[03:04:13 CET] <faLUCE> sorry:
[03:04:28 CET] <faLUCE> I wonder if there's some constraint for ffmpeg's internal dec
[03:04:34 CET] <kepstin> strangely, "ffmpeg -h decoder=opus" doesn't print the supported sample formats
[03:05:40 CET] <kepstin> looks like ffmpeg's internal decoder is hardcoded to give you fltp
[03:05:55 CET] <kepstin> ... which it sets on the avctx, so you already knew that
[03:06:23 CET] <faLUCE> [03:05] <kepstin> looks like ffmpeg's internal decoder is hardcoded to give you fltp <--- non interleaved then
[03:06:37 CET] <faLUCE> so I just found a bug for gstreamer's wrap
[03:06:51 CET] <kepstin> yeah. but don't hardcode that assumption - check the sample format on the frames you get from the decoder, and behave appropriately
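The flt/fltp distinction above is just two memory layouts for the same samples: interleaved stores L0,R0,L1,R1,..., planar stores each channel contiguously. A toy sketch (illustrative Python lists; real converters such as ffmpeg's aresample or gstreamer's audioconvert do this on raw buffers):

```python
def interleave(planes):
    """Planar -> interleaved: [[L0,L1,...],[R0,R1,...]] -> [L0,R0,L1,R1,...]
    (per-sample, what an fltp -> flt conversion amounts to)."""
    return [s for frame in zip(*planes) for s in frame]

def deinterleave(samples, channels):
    """Interleaved -> planar: one list per channel."""
    return [samples[c::channels] for c in range(channels)]

left, right = [0.1, 0.2, 0.3], [-0.1, -0.2, -0.3]
print(interleave([left, right]))  # [0.1, -0.1, 0.2, -0.2, 0.3, -0.3]
```

As kepstin advises, code consuming decoder output should branch on the sample format it actually receives instead of hardcoding one of these layouts.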
[03:07:09 CET] <faLUCE> kepstin: unfortunately I'm using the gstreamer's wrap
[03:07:30 CET] <faLUCE> it claims to output both interleaved and non interleaved, but this is a bug due to what you just noted
[03:07:48 CET] <kepstin> why would you be using gstreamer's ffmpeg wrapper to decode opus rather than gstreamer's libopus wrapper?
[03:08:12 CET] <faLUCE> kepstin: is opusdec the libopus wrapper?
[03:08:18 CET] <kepstin> yes
[03:08:24 CET] <faLUCE> it doesn't work for realtime
[03:08:27 CET] <faLUCE> high latency
[03:08:53 CET] <kepstin> well, that's a bug. it really should work fine for realtime, they're promoting gstreamer for webrtc and whatnot
[03:09:43 CET] <faLUCE> so, the only solution for realtime is to use the libav wrapper
[03:09:53 CET] <faLUCE> but it requires a conversion
[03:10:03 CET] <faLUCE> from non-interleaved to interleaved
[03:10:14 CET] <kepstin> i hope you're helping the gstreamer folks out by reporting these issues to them, instead of to ffmpeg folks when this has nothing to do with us
[03:10:19 CET] <faLUCE> so the pipe is opusdec ! audioconvert ! audiosink
[03:10:26 CET] <faLUCE> yes, I'll talk to them tomorrow,
[03:10:30 CET] <faLUCE> now the channel is idle
[03:10:31 CET] <kepstin> opusdec isn't ffmpeg
[03:10:43 CET] <faLUCE> so the pipe is avdec_opus ! audioconvert ! audiosink
[03:10:58 CET] <faLUCE> with opusdec I don't need audioconvert, but it's buggy for realtime
[03:12:48 CET] <kepstin> so.. what's your problem?
[03:13:45 CET] <faLUCE> my problem is that I don't want to make a conversion
[03:14:16 CET] <kepstin> it looks like avdec_opus is correctly outputting planar audio (although gst-inspect says it outputs interlaced, but when it runs it correctly identifies what the decoder is giving it.)
[03:14:33 CET] <kepstin> so if you want to send that to something that needs interleaved audio, you need an audioconvert
[03:15:11 CET] <kepstin> general best practice with gstreamer is to stick audioconverts around here and there where you might need conversions, iirc it's basically a no-op if it doesn't need to do anything
[03:15:46 CET] <kepstin> ffmpeg's filter chain automatically inserts conversions where needed by default, using aresample (which more or less does the same thing as gstreamer's audioconvert)
[03:16:15 CET] <faLUCE> kepstin: the problem is that gst-inspect avdec_opus claims to produce both non-interleaved and interleaved
[03:16:27 CET] <faLUCE> which is not true
[03:16:37 CET] <kepstin> faLUCE: at runtime it'll correctly negotiate non-interleaved.
[03:16:57 CET] <faLUCE> kepstin: this doesn't mean that gst-inspect doesn't have a bug
[03:17:18 CET] <kepstin> the ffmpeg codec wrappers in gstreamer are generic - it's a single C file used to wrap all the audio codecs.
[03:17:18 CET] <faLUCE> if I gst-inspect and see both caps, then I expect to use both
[03:17:55 CET] <faLUCE> kepstin: yes, but in this case, caps are not correctly set for avdec_h264
[03:18:06 CET] <faLUCE> so, it would be trivial to fix it
[03:18:10 CET] <kepstin> gstreamer is very heavily built around runtime negotiation for stuff like this, since in many cases stuff like formats can't be fully identified until the decoder's acutally running
[03:18:32 CET] <faLUCE> kepstin: please... gst-launch doesn't have to output both caps. That's all
[03:18:45 CET] <faLUCE> gst-inspect
[03:18:48 CET] <kepstin> avdec_h264 says it supports I420 and RGB and Y444
[03:18:51 CET] <faLUCE> (not gst-launch)
[03:18:59 CET] <kepstin> but which one you get at runtime depends on the media you're decoding
[03:19:06 CET] <kepstin> it doesn't use whichever one you request
[03:19:15 CET] <faLUCE> kepstin: of course
[03:19:18 CET] <kepstin> it's the exact same thing.
[03:19:21 CET] <faLUCE> read again what I wrote
[03:19:24 CET] <kepstin> (well, almost)
[03:19:24 CET] <faLUCE> no, it's not
[03:19:57 CET] <faLUCE> gst-inspect doesn't have to output both caps. That's all
[03:20:11 CET] <kepstin> but if it doesn't output both, what should it say? none?
[03:20:29 CET] <kepstin> (note that ffmpeg -h decoder=opus doesn't say any, fwiw, so that would at least be consistent)
[03:20:33 CET] <faLUCE> kepstin: it should say what you just noted is hardcoded
[03:20:46 CET] <faLUCE> otherwise it's misleading
[03:21:05 CET] <kepstin> it's hardcoded in the *codec initializer* - so something using the ffmpeg apis doesn't know what format it's going to get until it has *initialized the decoder*
[03:21:16 CET] <kepstin> so the gstreamer wrapper is correct as it is
[03:21:39 CET] <faLUCE> I understand that but a note could be added
[03:21:58 CET] <kepstin> it advertises the possible options, and then when the encoder is initialized at runtime it negotiates down to a supported one
[03:22:24 CET] <kepstin> that's just how gstreamer works, on all codec formats audio and video, it doesn't need a note
[03:22:56 CET] <faLUCE> then it should be fixed on libavcodec?
[03:24:59 CET] <kepstin> nothing's broken, so it doesn't need to be fixed, but you could certainly improve libavcodec to have opusdec put a list of sample formats into the AVCodec struct.
[03:25:47 CET] <faLUCE> kepstin: don't be overkill with concepts. you got the idea ;-)
[03:27:26 CET] <kepstin> anyways, even if that gets set, you'd still need the audioconvert in your gstreamer pipeline
[03:27:37 CET] <kepstin> because avdec_opus will still generate non-interleaved audio
[03:27:39 CET] <faLUCE> kepstin: of course
[03:28:11 CET] <faLUCE> but at least I'm not misled by gstreamer's API
[03:28:59 CET] <faLUCE> kepstin: anyway, I'll see tomorrow if to send a note to the ml or a patch
[03:29:11 CET] <kepstin> you're not being misled. the gstreamer api says "the decoder will produce either interleaved or non-interleaved audio", and then when you run it, it gives you one of those.
[03:29:55 CET] <faLUCE> kepstin: but from what we saw, it doesn't produce interleaved in any case
[03:30:12 CET] <kepstin> sure, but gstreamer doesn't know that.
[03:30:34 CET] <faLUCE> kepstin: then it doesn't have to write that it produces BOTH
[03:30:47 CET] <kepstin> no, it says it produces one of the things in the list
[03:30:50 CET] <kepstin> not both
[03:31:03 CET] <faLUCE> no, because they are called "capabilities"
[03:31:18 CET] <faLUCE> for example, if you gst_inspect opusdec
[03:31:27 CET] <faLUCE> it prints "interleaved" only
[03:31:50 CET] <kepstin> just like flacdec says it'll produce s8, s16le, s24_32le, s32le - but then when you decode a flac file, it won't let you use any of them, it'll only let you use one that matches the bit depth of the file you're decoding
[03:31:58 CET] <kepstin> all gstreamer capabilities work like that
[03:32:26 CET] <faLUCE> kepstin: you are not following what I said...
[03:32:29 CET] <kepstin> runtime negotiation will pick one of the options from the list or range
[03:32:43 CET] <kepstin> and it might not be the one you wanted
[03:33:01 CET] <faLUCE> kepstin: I'm talking about gst-inspect
[03:33:09 CET] <pridkett> does ffmpeg site use third party hub to report bugs ?
[03:33:10 CET] <faLUCE> not RUNTIME
[03:33:16 CET] <faLUCE> and not gst-launch
[03:33:18 CET] <faLUCE> ok?
[03:33:30 CET] <faLUCE> now, look at the output of gst-inspect opusdec
[03:33:41 CET] <kepstin> faLUCE: exactly. since gst-inspect isn't doing runtime things, it can't tell you what you'll get at runtime
[03:33:45 CET] <faLUCE> and compare it with the output of gst-inspect avdec_opus
[03:33:51 CET] <faLUCE> kepstin: wait
[03:34:01 CET] <faLUCE> follow what I'm saying
[03:34:21 CET] <faLUCE> now: the output of gst-inspect of opusdec says that it produces interleaved only
[03:34:24 CET] <faLUCE> ok?
[03:34:53 CET] <kepstin> sure, because opusdec in gstreamer knows that the libopus decoder it uses internally will always produce interleaved, so it doesn't put interleaved in the caps
[03:34:53 CET] <faLUCE> BUT the output of gst-inspect avdec-opus claims to be able to produce both
[03:35:41 CET] <faLUCE> yes. now, in case of avcodec_opus, gstreamer doesn't obtain a list of layers, right?
[03:35:48 CET] <kepstin> the output of gst-inspect avdec_opus says "the output layout will at runtime be negotiated to one of the values from this list: interleaved, non-interleaved"
[03:36:00 CET] <kepstin> the output of gst-inspect opusdec says "the output layout will at runtime be negotiated to one of the values from this list: interleaved"
[03:36:20 CET] <kepstin> both are correct.
[03:36:58 CET] <faLUCE> kepstin: so, you think that "layout: { (string)interleaved, (string)non-interleaved }"
[03:37:15 CET] <faLUCE> as output of gst-inspect avdec_h264 is correct ?
[03:37:26 CET] <faLUCE> as output of gst-inspect avdec_opus is correct ?
[03:37:50 CET] <faLUCE> (capabilities part)
[03:38:14 CET] <furq> the sentence as worded is correct
[03:38:33 CET] <kepstin> given the knowledge avdec_opus has of the opus decoder in ffmpeg, it's behaving correctly - the avcodec structure says that the sample format is "unknown" (it's not set until encoder initialization), so gstreamer advertises both possible options, and negotiates at runtime after the sample format is known.
[03:38:36 CET] <faLUCE> furq: but there 's also the OTHER sentence
[03:39:32 CET] <faLUCE> kepstin: then also capabilities should be "unknown" for the "layer" field
[03:39:48 CET] <kepstin> faLUCE: that's not how gstreamer works.
[03:42:13 CET] <faLUCE> I'm asking in the gstreamer channel
[03:42:27 CET] <kepstin> hmm, well, maybe it is, i forget whether the gstreamer stuff lets you leave a capability like that unset and add it later
[03:42:31 CET] <kepstin> they could confirm that
[03:43:03 CET] <kepstin> but the current way that works isn't wrong.
[03:43:18 CET] <faLUCE> kepstin: in fact they already told me that I was correct and should send a patch
[03:43:43 CET] <faLUCE> now, I'm asking again because you are insisting so much
[03:44:48 CET] <kepstin> i assume they're also wondering why you're using avdec_opus instead of opusdec, and if you're having issues with that you *really* should get help with that
[03:44:59 CET] <faLUCE> kepstin: they already know
[03:45:05 CET] <faLUCE> and I also told you why
[03:45:07 CET] <kepstin> libopus is an encoder/decoder designed for realtime usage, and used in gstreamer for realtime stuff already
[03:45:15 CET] <faLUCE> and they confirmed there's a latency problem for realtime
[03:45:15 CET] <kepstin> so... ? i dunno.
[03:45:21 CET] <faLUCE> with opusdec
[03:47:33 CET] <kepstin> anyways, the one-line patch to avcodec/opusdec.c to throw a list of supported sample_fmts into the avcodec struct will improve this situation... eventually
[03:47:50 CET] <faLUCE> kepstin: yes, in fact I agree with that
[03:48:14 CET] <faLUCE> and then, accordingly, the gst_inspect wrap could be automatically fixed
[03:49:15 CET] <kepstin> if they think that the gstreamer ffmpeg wrapper should not set the layout capability when an ffmpeg codec says its supported sample formats are unknown, then they need to fix that on their end.
[03:50:10 CET] <faLUCE> kepstin: yes, agree with that too
[03:51:00 CET] <kepstin> oh wow, i think it's actually even buggier than that. on my system with gst 1.14 the gst-inspect avdec_opus is *only* returning layout: interleaved
[03:51:05 CET] <kepstin> and that's definitely wrong
[03:52:21 CET] <faLUCE> kepstin: with GStreamer 1.15.2 (GIT) it shows both
[03:52:35 CET] <kepstin> well, at least that's an improvement
[03:52:56 CET] <faLUCE> yes, maybe they are working on that
[03:53:03 CET] Action: kepstin assumes that avdec_opus is mostly untested, because people mostly use opusdec
[03:53:20 CET] <faLUCE> it works well, I tested it
[03:53:37 CET] <kepstin> opusdec has a much higher rank, so it's autoselected when you use pipelines based on playbin/decodebin, etc.
[03:54:21 CET] <faLUCE> ok, time to bed now
[03:54:24 CET] <faLUCE> have a good night
[04:16:14 CET] <pridkett> everybody big news
[04:16:41 CET] <FishPencil> I'd like to add some random "sensor like" noise to an image, is FFmpeg able to do this (and if so how), and is FFmpeg the right tool for this?
[04:25:31 CET] <kepstin> FishPencil: it's really hard to say, depends on the type of noise you want to add. https://www.ffmpeg.org/ffmpeg-filters.html#noise might be able to do it
[04:25:40 CET] <kepstin> would have to play around with options.
[04:26:01 CET] <kepstin> if you have an external image or video of just the "noise", you can also use ffmpeg filters to blend it onto a video.
[04:27:08 CET] <pridkett> everybody i have big news
[04:28:14 CET] <FishPencil> kepstin: I'm looking to do some de-noising algo stuff, so it probably need to be implemented in a way that I can rely on the result
[04:29:03 CET] <kepstin> ah, denoising is hard. you probably want to avoid artificial noise for that sort of thing :/
[04:29:24 CET] <FishPencil> kepstin: that's how you normally assess the effectiveness of an algo
[04:29:32 CET] <kepstin> (especially if you're doing neural net type stuff, it'll probably overfit to match the characteristics of the artificial noise)
[04:30:41 CET] <kepstin> I'd suggest finding some sources of real noise (like, filmed blank/neutral grey video) that can be merged onto clean video if you really need a clean source to compare to the result of denoising.
[04:31:23 CET] <kepstin> although getting that right would be tricky, i don't really know if the characteristics of say ccd noise in low light would work well like that :)
[04:31:44 CET] <FishPencil> kepstin: It will actually be for DNN training, but what's the difference between "random" noise and actually random noise?
[04:32:29 CET] <kepstin> well, real noise depends on the physical characteristics of the devices being used, so it's quite biased
[04:33:01 CET] <kepstin> plain artificial random noise just adds a random amount of error to each pixel in an image.
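That per-pixel model can be sketched in a few lines (illustrative pure Python; the function name and parameters are made up, and ffmpeg's noise filter offers more shaping options than this):

```python
import random

def add_gaussian_noise(pixels, sigma=10.0, seed=1234):
    """Add zero-mean Gaussian noise to 8-bit pixel values, clamped to [0, 255].
    A toy stand-in for 'a random amount of error on each pixel'."""
    rng = random.Random(seed)  # seeded so the noise is reproducible
    return [min(255, max(0, round(p + rng.gauss(0.0, sigma)))) for p in pixels]

noisy = add_gaussian_noise([128] * 8)
```

As kepstin warns, such synthetic noise is unbiased per pixel, whereas real sensor noise is shaped by the device, which is exactly why a denoising DNN can overfit to the artificial version.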
[04:33:42 CET] <FishPencil> kepstin: over a large sample size though wouldn't that get close to all the different sensors out there?
[04:34:11 CET] <FishPencil> Or do all sensors have some sort of common noise pattern that cannot be duplicated
[04:34:15 CET] <kepstin> FishPencil: probably not without manual tweaking
[04:34:28 CET] <kepstin> your first step should be to analyze and model the type of noise that you want to remove, tbh
[04:34:46 CET] <kepstin> it might be that you can build an artificial noise generator that will work well
[04:34:50 CET] <FishPencil> I think it's just Gaussian sensor noise?
[04:35:35 CET] <kepstin> FishPencil: I can't answer, i dunno how this works. start researching some papers, I'm sure people have modelled this kind of thing before :)
[09:33:28 CET] <pridkett> dongs hey
[09:47:51 CET] <dongs> well hello
[10:05:20 CET] <pridkett> dongs so how do you feel?
[10:07:58 CET] <dongs> pretty good until you started chatting
[10:08:30 CET] <pridkett> dongs lol
[10:08:37 CET] <pridkett> dongs so how do you feel to be busted
[10:08:42 CET] <dongs> ?
[10:08:50 CET] <dongs> still no idea what you're on
[10:08:52 CET] <pridkett> [dongs VERSION reply]: irssi v0.8.19
[10:08:58 CET] <dongs> i dont get it
[10:08:59 CET] <dongs> and?
[10:09:02 CET] <pridkett> what kind of linux hater uses irssi
[10:10:00 CET] <dongs> as if there's no irssi port for windows
[10:10:00 CET] <pridkett> that makes no sense to me: please make some sense into me
[10:10:23 CET] <pridkett> dongs why would you want to use irssi period especially in windows
[10:15:13 CET] <dongs> i donno man, ask dongs
[10:15:28 CET] <pridkett> huh
[10:15:44 CET] <pridkett> everybody big news: neural vocoder using LPCNet https://people.xiph.org/~jm/demo/lpcnet_codec/
[10:16:17 CET] <pridkett> dongs did you hear about that
[10:17:41 CET] <pridkett> everybody big news: neural vocoder using LPCNet https://people.xiph.org/~jm/demo/lpcnet_codec/
[10:19:49 CET] <dongs> why are big news spaced like 2 minutes apart in your chats
[10:20:03 CET] <pridkett> sorry
[10:20:48 CET] <pridkett> dongs so what do you think?
[10:21:07 CET] <dongs> about?
[10:21:14 CET] <dongs> i dont care bout opus/xiph
[10:21:20 CET] <dongs> like at al
[10:21:21 CET] <dongs> l
[10:22:12 CET] <pridkett> dongs look how low the kbps is
[10:24:39 CET] <dongs> By contrast, the complexity of this 1.6 kb/s LPCNet-based codec is just 3 GFLOPS
[10:24:42 CET] <dongs> 3 fucking niggaflops
[10:24:42 CET] <dongs> comeon
[10:24:50 CET] <dongs> nobody is gonna use that in real life
[10:25:01 CET] <dongs> how is CIA gonna tap every mobile phone conversation
[10:25:05 CET] <dongs> if all the carriers switch to this
[10:26:53 CET] <dongs> anyway the numbers are completely unreasonable
[10:27:05 CET] <dongs> to be used for anything beyond academic wank
[10:28:52 CET] <pridkett> dongs what is GFlops of current typical VOIP use then
[10:29:08 CET] <dongs> its not even in G-anything range
[10:29:13 CET] <pridkett> i see
[10:29:36 CET] <pridkett> what does 2k Video use then?
[10:29:39 CET] <dongs> https://www.adaptivedigital.com/g-729/
[10:29:50 CET] <dongs> 1st random hit for g729 which is still poular
[10:29:58 CET] <pridkett> does 4k video use 3 Gflops ?
[10:30:00 CET] <dongs> like around 12 MIPS or so
[10:30:04 CET] <dongs> 4K video of what
[10:30:15 CET] <dongs> 4K video has hardware decoders that uses little power compared to CPU
[10:30:16 CET] <pridkett> decoding/watching 4k video
[10:30:28 CET] <pridkett> decoding/watching 4k video without using hardware decoder
[10:30:34 CET] <dongs> nobody normal does that
[10:32:04 CET] <dongs> i have some i7 and im almost certain software-only implementation (like vlc or whatever) will use 100% cpu
[10:32:16 CET] <dongs> to the point where i dont even care about trying
[10:32:20 CET] <pridkett> i see
[10:32:26 CET] <pridkett> but how do you calculate gflops
[10:32:45 CET] <pridkett> when i get 100 cpu usage
[10:33:08 CET] <dongs> https://www.pugetsystems.com/pic_disp.php?id=32669
[10:33:45 CET] <pridkett> i7 920 does 40 gflops
[10:33:52 CET] <pridkett> so 3 Gflops is nothing then
[10:34:07 CET] <pridkett> why are you crying about 3 gFlops
[10:34:23 CET] <dongs> thats 40gflops of highly optimized benchmark shit
[10:34:32 CET] <dongs> your own link says it uses something like 13% of a threadcrapper
[10:34:48 CET] <dongs> at 3ghz
[10:34:49 CET] <dongs> thats a LOT
[10:34:55 CET] <dongs> for a fucking voice codec
[10:35:13 CET] <pridkett> not sure what Gflop is threadripper
[10:35:22 CET] <dongs> software based G729 implementations on voip gateways run hundreds if not thousands of calls transcoding 711 to 729
[10:35:24 CET] <pridkett> not sure what Gflop is threadripper 3ghz
[10:38:13 CET] <pridkett> AMD A1100 (Cortex-A57) 1.7 GHz 102% 0.98x
[10:40:48 CET] <dongs> yeah, 100% cpu usage for a voice codec
[10:40:51 CET] <dongs> on a mobile processor
[10:40:58 CET] <dongs> for a single stream
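Taking the numbers thrown around here at face value (3 GFLOPS quoted for the LPCNet codec, roughly 40 GFLOPS for an i7 920 on an optimized benchmark), the back-of-the-envelope arithmetic both sides are arguing from looks like:

```python
codec_gflops = 3.0   # LPCNet codec complexity quoted from the demo page
cpu_gflops = 40.0    # rough i7 920 benchmark figure quoted above

streams_per_cpu = cpu_gflops // codec_gflops  # best-case concurrent streams
share_per_stream = codec_gflops / cpu_gflops  # fraction of the CPU per stream
print(streams_per_cpu, share_per_stream)
```

The same figures support both readings: ~7.5% of a desktop CPU is "nothing" for one stream, but it is enormous next to a classic voice codec measured in MIPS, and it caps a gateway at roughly a dozen streams per CPU.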
[10:41:17 CET] <pridkett> lol
[10:41:55 CET] <pridkett> but cpu is getting powerful every year
[10:42:18 CET] <dongs> software developers are getting lazier by the minute
[10:42:36 CET] <dongs> using shit tools , shit languages and writing (or rather copy pasting) shit code
[10:44:19 CET] <pridkett> what does your HD camera use for audio?
[10:44:29 CET] <dongs> lpcm
[10:44:39 CET] <pridkett> what bit/samplerate
[10:44:55 CET] <dongs> 16/48 iirc
[10:45:21 CET] <pridkett> that's a lot of data ,
[10:45:25 CET] <pridkett> just for audio
[10:45:37 CET] <dongs> it does video up to 200mbit so thats fine
[10:45:56 CET] <pridkett> what video codec?
[10:46:03 CET] <dongs> h264
[10:46:08 CET] <pridkett> which one
[10:46:14 CET] <dongs> there is only one
[10:46:27 CET] <pridkett> there are several h264 encoders
[10:46:32 CET] <dongs> wot
[10:46:39 CET] <pridkett> apple has their own
[10:46:43 CET] <pridkett> ms has their own
[10:46:45 CET] <dongs> its a camera dude.
[10:47:35 CET] <pridkett> dongs let me see the quality, upload a sample
[10:48:22 CET] <dongs> https://www.panasonic.com/global/consumer/camcorder/gallery.html
[10:48:42 CET] <pridkett> wow 4k?
[10:49:05 CET] <pridkett> can you even play it on a smartphone?
[10:49:27 CET] <dongs> i don't play contents i record on a phone
[10:49:49 CET] <pridkett> dongs these retards uploaded the sample on youtube
[10:49:57 CET] <dongs> yea well
[10:49:59 CET] <pridkett> youtube degrades quality
[10:50:12 CET] <pridkett> they should know that
[10:50:30 CET] <dongs> nobody is gonna download several gigs of shit and try to play it back on a random desktop pc
[10:50:34 CET] <dongs> just to see the quality
[10:50:56 CET] <pridkett> people who is interested in buying would
[10:51:09 CET] <dongs> no
[10:51:35 CET] <dongs> youtube gives a perfectly useful sample of what the optics/dynamic range/colors its capable of
[10:51:37 CET] <pridkett> dongs wouldn't you care to see the samples before buying that camera
[10:51:45 CET] <dongs> its hard to fuck up video when you're doing it at 200mbit
[10:51:49 CET] <dongs> no
[10:51:51 CET] <pridkett> they reencode everything to shit quality
[10:53:11 CET] <pridkett> dongs how much YEN did it cost you, may i ask
[10:53:51 CET] <dongs> i paypal'd it from teh jews in new york (b&hphoto) for around 3.5k iirc.
[10:53:56 CET] <dongs> usd
[10:54:01 CET] <dongs> back when it was new
[10:54:06 CET] <dongs> i think its about ~2k now
[10:54:15 CET] <pridkett> aren't you in Japan?
[10:54:22 CET] <dongs> yes, but it was too expensive in japan
[10:54:27 CET] <pridkett> i see
[10:54:36 CET] <pridkett> it's panasonic
[10:54:42 CET] <pridkett> it should be cheaper
[10:54:46 CET] <pridkett> since it's panasonic
[10:54:52 CET] <dongs> i recently had it serviced to replace the lens assembly as the zoom motor crapped out
[10:55:01 CET] <dongs> and there was no issue wiht the warranty/whatever
[10:56:19 CET] <dongs> https://www.bhphotovideo.com/c/product/1451846-REG/panasonic_ag_cx350_4k_camcorder.html this is the new hot shit
[10:57:51 CET] <pridkett> what is the disk space does it come with
[10:58:20 CET] <dongs> sdxc cards, so any
[10:58:31 CET] <dongs> and there's 2 slots so you can swap stuff out while its recording on the other slot.
[10:58:38 CET] <pridkett> so none
[10:58:44 CET] <pridkett> you have to rely on sdxc
[10:58:52 CET] <pridkett> cheap bastards
[10:59:03 CET] <pridkett> what filesystem does sdxc cards even use these days
[10:59:28 CET] <dongs> exfat
[10:59:36 CET] <dongs> at least thats what i use on mine
[10:59:45 CET] <dongs> i think it can also read NTFS-formatted USB3 hdd
[10:59:45 CET] <pridkett> that's what i thought, because i know you cannot use fat32
[10:59:48 CET] <dongs> read/write
[11:00:03 CET] <pridkett> people don't seem to like exfat
[11:00:15 CET] <pridkett> "linux people"
[11:00:18 CET] <dongs> by "people" you probably mean "
[11:00:19 CET] <dongs> yes exactly
[11:00:22 CET] <dongs> normal people don't care.
[11:00:29 CET] <pridkett> lol yeah "linux people"
[11:01:33 CET] <pridkett> i don't like the idea of swapping sdxc cards though
[11:01:42 CET] <pridkett> why can't it have a 512 GB SSD on it
[11:01:52 CET] <dongs> because you can have 1TB SD card
[11:01:55 CET] <dongs> or two of them
[11:02:11 CET] <pridkett> SD card is not reliable
[11:02:16 CET] <dongs> ?
[11:02:29 CET] <pridkett> they are not as reliable as ssd
[11:02:37 CET] <pridkett> it's true
[11:03:06 CET] <pridkett> trying to fit 1TB on that small space
[11:03:07 CET] <dongs> removable 512gb unreliable media is orders of magnitude more useful than non-removable SSD
[11:04:20 CET] <pridkett> also it's slow
[11:04:45 CET] <dongs> non-issue
[11:05:58 CET] <pridkett> 1tb sdxc would be super expensive
[11:06:11 CET] <pridkett> and can you imagine if you lost it
[11:08:03 CET] <dongs> you'd be surprised
[11:08:26 CET] <pridkett> dongs stop using irssi
[11:08:52 CET] <dongs> cheap 1tb sdxc (certainly not capable of doing 400mbit for that panasonic cam) is half price of last gen samsung 860 evo 1tb
[11:09:41 CET] <pridkett> $200 for samsung evo 1tb
[11:10:04 CET] <pridkett> 1tb sdxc is not $100
[11:10:20 CET] <dongs> it is
[11:12:23 CET] <pridkett> aren't you the guy who does JAV AV ?
[11:14:12 CET] <dongs> i am
[11:14:29 CET] <pridkett> editing or shooting?
[11:14:37 CET] <dongs> both
[11:14:49 CET] <pridkett> what camera do you use?
[11:14:59 CET] <dongs> we literally just finished discussing cameras
[11:15:07 CET] <pridkett> or is it the same one we were discussing
[11:15:17 CET] <dongs> yes
[11:15:19 CET] <pridkett> dongs sorry
[11:15:49 CET] <pridkett> do you think everybody should be shooting with 4k these days even if final product is 2k ?
[11:18:28 CET] <pridkett> you have this? Panasonic 4K Camcorder WXF990/WXF991 but you want to upgrade to: Panasonic AG-CX350 4K Camcorder ??
[11:18:46 CET] <dongs> no i have HC-HX1000
[11:18:52 CET] <dongs> and yeah cx350 looks nice
[11:19:06 CET] <pridkett> what is better? HC-HX1000 or WXF991
[11:19:10 CET] <dongs> yeah definitely shoot in 4K and downrez in post
[11:19:17 CET] <dongs> i dont know, wxf is some consumer shit i think
[11:20:14 CET] <pridkett> wow AG-CX350 is way more expensive than HC-HX1000
[11:20:22 CET] <dongs> it is now, yes
[11:20:26 CET] <dongs> launch price for x1000 was same
[11:20:28 CET] <dongs> around 3.5k
[11:20:34 CET] <pridkett> i see
[11:20:52 CET] <pridkett> you better upgrade very soon, to satisfy your customers
[11:24:42 CET] <pridkett> dongs do you shoot in PAL mode or NTSC mode
[11:42:10 CET] <`mist> hey guys, i'm trying to hw transcode plex using ffmpeg but i've got an interesting message in my plex log
[11:42:22 CET] <`mist> Mar 30, 2019 11:33:54.876 [0x7f0db77fe700] ERROR - [FFMPEG] - Cannot load libcuda.so.1
[11:42:46 CET] <`mist> So i would assume that means it can't find libcuda, however if i do ldconfig -p | fgrep cuda, i can see it there
[11:47:17 CET] <pridkett> `mist what is the exact command you are typing
[11:47:36 CET] <pridkett> for your transcoding
[11:50:08 CET] <`mist> nvm, i solved it
[11:50:19 CET] <`mist> i had forgotten to create my docker container with the nvidia runtime
[12:55:49 CET] <dongs> pridkett: lol.
[12:55:55 CET] <dongs> can you stop trolling or something
[12:56:01 CET] <pridkett> what is funny?
[12:56:11 CET] <pridkett> did i say something funny?
[14:32:41 CET] <pink_mist> dongs: he's not even capable of holding a coherent conversation, so stopping trolling seems like you're asking for a mountain when he can't even give you a pebble
[14:32:59 CET] <dongs> arent you the same guy?
[14:33:14 CET] <pink_mist> ???
[14:33:56 CET] <pink_mist> how in hell did you get that idea?
[14:34:09 CET] <pink_mist> I think that's the most insulted I've ever been
[14:34:14 CET] <pink_mist> mistaking me for fucking pridkett
[14:34:17 CET] <dongs> haha
[00:00:00 CET] --- Sun Mar 31 2019