[Ffmpeg-devel-irc] ffmpeg.log.20180127

burek burek021 at gmail.com
Sun Jan 28 03:05:01 EET 2018


[02:08:09 CET] <islanti> is there a command line solution to convert an aax file to multiple mp4s for each chapter?
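A hedged sketch of one way to do that (assuming an Audible .aax file, whose activation bytes you would have to supply yourself; times and filenames are placeholders): list the chapters with ffprobe, then stream-copy each range with -ss/-to:

    # list chapter start/end times in seconds
    ffprobe -v error -print_format csv -show_chapters book.aax

    # cut one chapter without re-encoding; repeat per chapter
    ffmpeg -activation_bytes 0xDEADBEEF -i book.aax -ss 0 -to 731.5 -c copy chapter01.mp4

Looping over ffprobe's CSV output in the shell automates the rest.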
[02:29:30 CET] <M6HZ> c_14, Sequel of: "Jan 19 20:42:36 <M6HZ>  c_14, I will use: curl -L '[URL]' | tee >(ffplay -) > "$(date --rfc-3339=seconds -u | sed 's/ /_/g')"_-_radio-record.mp3"
[02:29:30 CET] <M6HZ> The problem happened again, but the recorded file is playable without any problem.
[02:31:36 CET] <M6HZ> c_14, I will try to reproduce this result a second time to confirm the issue.
[02:37:05 CET] <mainomenon> i am kind of stupid with computers. how do i add the libx265 encoder to my build of ffmpeg on macOS ?
[02:43:08 CET] <kazuma_> you could just download a pre built binary mainomenon
[02:43:49 CET] <kazuma_> https://ffmpeg.zeranoe.com/builds/ has mac builds for example
[02:50:25 CET] <mainomenon> thank you
[02:50:37 CET] <mainomenon> i got mine from homebrew and theirs doesn't have it
[02:52:38 CET] <furq> mainomenon: brew install ffmpeg --with-x265
[02:52:41 CET] <furq> (apparently)
[02:53:06 CET] <mainomenon> ah
[02:53:14 CET] <mainomenon> why don't they include something like that by default?
[03:08:11 CET] <youngluoyang> JOIN
[03:08:16 CET] <youngluoyang> HELP
[03:09:15 CET] <youngluoyang> how do I use Docker to build an FFmpeg development environment?
[10:55:47 CET] <kikobyte> BtbN, also, I guess, this initialization in ffmpeg_cuvid.c https://github.com/FFmpeg/FFmpeg/blob/master/fftools/ffmpeg_cuvid.c#L58 should be the same as https://github.com/FFmpeg/FFmpeg/blob/master/libavfilter/vf_scale_npp.c#L178, because when the context allocated in ffmpeg_cuvid.c gets into the h264_nvenc encoder, for certain resolutions it cannot be accepted by the NVIDIA API
[10:56:22 CET] <kikobyte> Alignment and stuff
[10:57:21 CET] <kikobyte> Ideally there would have been allocator negotiation across the pipeline, where every component could impose its requirements on the buffer, but since that's not actually there, at least some guaranteed alignment could be added
[11:52:42 CET] <specing> Hi
[11:52:55 CET] <BtbN> kikobyte, the CUDA hwframes context takes care of proper linesize alignment itself.
[11:52:57 CET] <specing> Is it possible to continue ffmpeg after it got killed?
[11:53:18 CET] <specing> as in continue the last running job
[11:53:41 CET] <BtbN> what scale_npp does is actually wrong. It will lead to frames with wrong dimensions, which it then has to correct later
[11:54:16 CET] <BtbN> http://git.videolan.org/?p=ffmpeg.git;a=blob;f=libavutil/hwcontext_cuda.c;h=37827a770c3a621234e75da9d8a07b7893f6ae4f;hb=HEAD#l118
[11:55:11 CET] <BtbN> so far I have not encountered a resolution that nvenc refused.
[13:00:18 CET] <specing> I've heard that it is possible to "resume" by stitching two files together before the last keyframe in the first file
[13:00:25 CET] <specing> how to do so?
[13:01:59 CET] <specing> I don't know what/where the last keyframe is and -ss and -to only take time, not frames
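For the record, a hedged sketch of the stitching approach (filenames and the cut time are placeholders): find the keyframe timestamps with ffprobe, trim the first file just before its last keyframe, then join losslessly with the concat demuxer:

    # keyframe timestamps of the video stream
    ffprobe -v error -select_streams v -skip_frame nokey \
            -show_entries frame=pkt_pts_time -of csv part1.mp4

    # trim part1 just before its last keyframe (stream copy snaps to keyframes)
    ffmpeg -i part1.mp4 -to 123.456 -c copy part1_cut.mp4

    # concatenate the two pieces without re-encoding
    printf "file 'part1_cut.mp4'\nfile 'part2.mp4'\n" > list.txt
    ffmpeg -f concat -safe 0 -i list.txt -c copy joined.mp4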
[16:52:46 CET] <onodera> Hi, will ffmpeg ever be fixed to work with libressl?
[16:53:07 CET] <c_14> already has been?
[16:53:08 CET] <onodera> I saw a patch last year and thought I'd wait till it got merged, but even master is still not building with libressl unpatched
[16:53:20 CET] <c_14> was the patch not applied
[16:53:22 CET] <c_14> ?
[16:53:36 CET] <onodera> I tried building a few days ago
[16:53:57 CET] <onodera> let me pull in master and rebuild to verify
[16:57:28 CET] <c_14> There's definitely --enable-libtls
[16:59:16 CET] <onodera> nope still fails, I'm talking about this patch: https://github.com/gentoo/libressl/blob/master/media-video/ffmpeg/files/ffmpeg-3.3-libressl.patch
[16:59:34 CET] <sfan5> that's just a workaround to make --enable-openssl compile on libressl
[16:59:40 CET] <c_14> yeah
[16:59:43 CET] <c_14> there's actual libtls support now
[16:59:51 CET] <sfan5> but if you have libressl installed, it makes much more sense to use the "native" TLS library it provides
[16:59:54 CET] <sfan5> ...which is libtls
[16:59:54 CET] <onodera> hmm let me look into this
[17:07:28 CET] <BtbN> You will need latest master
[17:07:34 CET] <BtbN> stable branches will most likely not get it
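That is, on a checkout of current master, a LibreSSL-native build would be configured roughly like so:

    ./configure --enable-libtls
    make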
[17:46:54 CET] <tyng> ffbox0-bg.ffmpeg.org at 79.124.17.100 is unresponsive
[17:49:14 CET] <JEEB> is it used for anything?
[17:49:40 CET] <JEEB> since the web site works and I would guess the git is on videolan territory
[17:49:48 CET] <JEEB> yup http://git.videolan.org/?p=ffmpeg.git;a=summary
[17:49:54 CET] <c_14> ml maybe?
[17:50:54 CET] <tyng> both ffmpeg.org and git.ffmpeg.org resolve to this server
[17:51:44 CET] <c_14> it's responsive for me
[17:57:32 CET] <tyng> c_14 try getting this file https://ffmpeg.org/releases/ffmpeg-3.4.1.tar.xz
[17:57:56 CET] <JEEB> tyng: that's why I generally do not utilize the git.ffmpeg.org name for the git repo
[17:58:19 CET] <JEEB> it's on videolan infrastructure behind it so I just point towards that
[17:58:28 CET] <JEEB> and as I noted git.videolan.org seems to be OK
[17:59:52 CET] <tyng> does the videolan server mirror releases too?
[18:00:02 CET] <tyng> then maybe set it as the default in the download page
[18:01:04 CET] <JEEB> seems like the git instance has disabled tarballs for tags
[18:01:24 CET] <JEEB> but that release tarball downloads just fine for me?
[18:09:22 CET] <tyng> i am getting ~2s ping, ~80% packet loss, and TLS/HTTP timeouts from several endpoints
[18:11:00 CET] <tyng> but it seems to be fine now
[18:37:33 CET] <sine0> hello folks, pardon the offtopic, but I don't know what tool I should use to do the job; this seems like one of those Linux CLI jobbies. I have thousands of images and I want to do a batch resize on them. what tool should I use?
[18:37:56 CET] <TaZeR> ffmpeg is so badass
[18:38:53 CET] <JEEB> sine0: you can either use imagemagick's convert or ffmpeg
[18:39:13 CET] <JEEB> see whichever fits you more
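Two hedged one-liners, by way of example (the 800-pixel target width is a placeholder): ImageMagick's mogrify resizes a batch in place, and ffmpeg can simply be looped over the files:

    mogrify -resize 800x *.jpg

    for f in *.jpg; do ffmpeg -i "$f" -vf scale=800:-1 "resized/$f"; done

Note that mogrify overwrites the originals unless given -path.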
[18:39:32 CET] <ddubya> has anyone coded a script or tool to track the history of every api in FFmpeg. Namely, when it was introduced, and any changes to the function signature
[18:39:49 CET] <ddubya> oh, and when it was removed of course
[18:40:38 CET] <sine0> JEEB: sweet.
[18:40:57 CET] <JEEB> ddubya: there's an APIchanges document under doc
[18:41:10 CET] <JEEB> not perfect but recent changes I think seem to have been added there
[18:42:41 CET] <ddubya> Thanks, that could be useful. But I need to go waaay back. For some reason the project is stuck on FFmpeg 2.2
[18:42:48 CET] <ddubya> but still needs to work on 3.5
[18:43:14 CET] <ddubya> I guess some combination of git bisect and grep is in order
[18:44:14 CET] <ddubya> apichanges seems to go back far enough but nothing about the apis I'm looking to validate
[18:44:27 CET] <ddubya> avcodec_get_name and av_get_default_channel_layout
[20:04:06 CET] <ddubya> it log --follow -p libavcodec/avcodec.h | grep avcodec_get_name
[20:04:13 CET] <ddubya> git log --follow -p libavcodec/avcodec.h | grep avcodec_get_name
[20:04:18 CET] <ddubya> ^^
[20:04:36 CET] <ddubya> for some reason that doesn't show when CodecID was renamed to AVCodecID
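git's pickaxe is a better fit for this kind of archaeology than grepping a full -p log: `git log -S` finds every commit that changed the number of occurrences of a string, which catches introductions, removals, and renames like CodecID -> AVCodecID. For example:

    # commits that added or removed occurrences of the symbol
    git log -S avcodec_get_name --oneline -- libavcodec/

    # the rename itself shows up the same way
    git log -S 'enum AVCodecID' --oneline -- libavcodec/avcodec.h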
[20:26:19 CET] <kevinn> Does anyone know how to have avcodec_receive_frame() output in packed RGB?
[20:27:31 CET] <JEEB> the output format depends on the decoder, and you will have to utilize avfilter to convert it to packed if it happens to be planar (or anything else that you know can handle the conversion)
[20:28:21 CET] <kevinn> Any help would be greatly appreciated!
[20:28:42 CET] <JEEB> I replied to you right after you quit
[20:28:47 CET] <JEEB> "the output format depends on the decoder, and you will have to utilize avfilter to convert it to packed if it happens to be planar (or anything else that you know can handle the conversion)"
[20:30:26 CET] <kevinn> if I do something like c->pix_fmt = AV_PIX_FMT_0RGB32; before avcodec_open2() will that output in packed RGB
[20:30:43 CET] <JEEB> no
[20:31:24 CET] <kevinn> okay I must be missing something then, how can I configure my decoder if not through avcodec_open2?
[20:31:30 CET] <JEEB> decoding is just decoding, so you get what is there in the data (for example, RGB from H.264 ends up being planar as it's coded in separate planes)
[20:31:57 CET] <kevinn> ahh, so x264 will always be planar?
[20:32:19 CET] <JEEB> decoders do not convert formats for you
[20:32:30 CET] <JEEB> they give you something and then you deal with that as you require
[20:32:44 CET] <kevinn> Is there any way I can tell x264 to not use planar?
[20:32:48 CET] <JEEB> no?
[20:33:12 CET] <JEEB> seriously, just add the filter that converts to packed RGB?
[20:33:28 CET] <JEEB> there's an example for avfilter under docs
[20:33:36 CET] <JEEB> doc/examples, if I recall correctly :P
[20:33:54 CET] <JEEB> avfilter is not perfect but you should get it going since you only want to convert to packed RGB
[20:34:30 CET] <JEEB> another way is to utilize libp2p which is a thing that specializes on packing/unpacking (https://github.com/sekrit-twc/libp2p)
[20:34:33 CET] <kevinn> well, therein lies the problem. avfilter code is actually pretty bad. I wrote my own conversion algorithm from planar to packed, but it takes about 20 milliseconds every render cycle to run, which is about 10 more milliseconds than I have
[20:34:42 CET] <ddubya> JEEB, is swscale no good now?
[20:34:52 CET] <JEEB> ddubya: avfilter uses swscale for various things, yes
[20:35:06 CET] <ddubya> oic i always used it directly
[20:35:24 CET] <JEEB> if it works for you like that, that's fine as well. for a newcomer "just feeding AVFrames" is simpler though
[20:35:35 CET] <JEEB> which is the main point of avfilter for me
[20:35:44 CET] <JEEB> it takes in AVFrames and outputs them
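For comparison, the direct-swscale route ddubya describes is only a few lines. A hedged sketch for the case under discussion (planar GBRP from the decoder to packed RGB24; `decoded` and `packed_buf` are assumed to exist in the caller):

    #include <libswscale/swscale.h>
    #include <libavutil/frame.h>

    /* decoded: AVFrame in AV_PIX_FMT_GBRP from avcodec_receive_frame();
       packed_buf: caller-allocated buffer of width*height*3 bytes */
    struct SwsContext *sws = sws_getContext(
        decoded->width, decoded->height, AV_PIX_FMT_GBRP,
        decoded->width, decoded->height, AV_PIX_FMT_RGB24,
        SWS_POINT, NULL, NULL, NULL); /* same size, so the scaler choice barely matters */

    uint8_t *dst_data[4]     = { packed_buf, NULL, NULL, NULL };
    int      dst_linesize[4] = { decoded->width * 3, 0, 0, 0 };

    sws_scale(sws, (const uint8_t * const *)decoded->data, decoded->linesize,
              0, decoded->height, dst_data, dst_linesize);
    sws_freeContext(sws);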
[20:35:59 CET] <kevinn> Should I just put the planar to packed algorithm into threads or something to increase the speed?
[20:36:51 CET] <ddubya> kevinn, probably not, your video decode is going to dominate most likely, that's where using a multithreaded codec will help
[20:36:52 CET] <JEEB> depends. do check out libp2p as well since it's a thing specialized in (un)packing. of course I think it mostly has optimizations for x86 so if you're running on ARM that's a whole separate pain
[20:37:12 CET] <JEEB> and of course one thing is to do the RGB related stuff on a GPU
[20:37:19 CET] <JEEB> if the upload to VRAM is fast enough
[20:38:03 CET] <ddubya> JEEB, If I use hw codec like cuvid is it simple to keep the frame in hardware and pass it to cuda for example
[20:38:30 CET] <kevinn> ddubya: hmm, okay, so your suggestion is to use a multithreaded codec to increase the speed of my decode?
[20:38:33 CET] <JEEB> yes, with most hw decoding APIs you have ways to keep the image in VRAM, and then you can use something like libplacebo or anything else
[20:38:46 CET] <JEEB> well H.264 is already multithreaded decoding-wise :P
[20:38:51 CET] <kevinn> does avcodec support multiple threads?
[20:38:54 CET] <JEEB> yes
[20:39:09 CET] <JEEB> as long as you have some threading primitives (posix threads or windows threads)
[20:39:09 CET] <kevinn> I thought x264 is just encoding?
[20:39:19 CET] <JEEB> x264 is an encoder
[20:39:23 CET] <JEEB> H.264 is the format
[20:39:28 CET] <kevinn> okay and can I configure that using avcodec_open2?
[20:39:46 CET] <JEEB> yes you can control threads
[20:40:30 CET] <JEEB> https://www.ffmpeg.org/doxygen/trunk/structAVCodecContext.html#aa852b6227d0778b62e9cc4034ad3720c
[20:40:46 CET] <JEEB> although as far as I remember lavc should kind of auto-decide on the number of threads
[20:41:19 CET] <kevinn> hmm, okay I'll set thread_count and I'll see if it helps at all.
[20:41:36 CET] <JEEB> you might want to look at its value after you initialize the decoder already
[20:41:40 CET] <JEEB> if it's already set to something
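A hedged sketch of the fields in question, set before avcodec_open2() and read back after it:

    #include <stdio.h>
    #include <libavcodec/avcodec.h>

    AVCodec *dec = avcodec_find_decoder(AV_CODEC_ID_H264);
    AVCodecContext *c = avcodec_alloc_context3(dec);

    c->thread_count = 0;               /* 0 = let lavc pick the count itself */
    c->thread_type  = FF_THREAD_FRAME; /* or FF_THREAD_SLICE, see below */

    if (avcodec_open2(c, dec, NULL) < 0)
        return -1; /* error handling elided */

    /* see what lavc actually settled on */
    printf("threads: %d, type: %d\n", c->thread_count, c->thread_type);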
[20:41:55 CET] <kevinn> okay I will, thank you again for the help
[20:42:10 CET] <kevinn> Don't know if you remember me from a few months ago ;)
[20:42:18 CET] <JEEB> I have a very bad memory on things
[20:42:25 CET] <JEEB> which might be a good thing
[20:42:56 CET] <kevinn> ya, you started cursing at me last time c:
[20:47:01 CET] <kevinn> JEEB: so I checked the value it was before and it was set to 1
[20:47:06 CET] <kevinn> and I set it to 4
[20:47:29 CET] <kevinn> it didn't get any faster but instead introduced what I can only describe as latency
[20:47:33 CET] <kevinn> same effect with 2
[20:47:57 CET] <JEEB> yes, you will get latency with frame threads, d'uh
[20:48:20 CET] <JEEB> you can use sliced threads if your encode has slices and the decoder supports them (H.264 does), which is not as fast but doesn't have latency
[20:49:16 CET] <kevinn> okay, I'll try sliced threads, I've seen examples online. Do I have any other options on cutting down on decode time?
[20:49:34 CET] <JEEB> not correct ones
[20:49:47 CET] <JEEB> also, as I said, in terms of FPS frame threads are faster
[20:49:58 CET] <JEEB> sliced threads will give you LESS fps
[20:50:07 CET] <ddubya> kevinn, only if you have a hardware decoder (I didn't see your original post)
[20:50:09 CET] <JEEB> (also as I said, the videos have to be encoded WITH SLICES)
[20:50:25 CET] <JEEB> otherwise there's one slice per image
[20:50:31 CET] <JEEB> and of course that doesn't thread
[20:51:39 CET] <kevinn> ddubya: I unfortunately do not have a hardware decoder... am I really stuck with this 20 millisecond number?
[20:52:01 CET] <kevinn> JEEB: oh okay, then I definitely don't want sliced threads
[20:52:17 CET] <ddubya> kevinn, if you can't control the input files then nope
[20:52:17 CET] <kevinn> is 20 milliseconds fairly standard for decode time?
[20:52:31 CET] <kevinn> control input files?
[20:52:34 CET] <kevinn> what do you mean?
[20:52:42 CET] <JEEB> no idea, never had to compare on my machines because my stuff was always fast enough :P
[20:52:47 CET] <ddubya> reformat them, reencode or whatever
[20:53:02 CET] <JEEB> well, one exception being HEVC decoding on the CPU
[20:53:09 CET] <JEEB> which is just not optimized because nobody cares enough
[20:53:25 CET] <kevinn> ddubya: you mean there are things I can do to x264 to make it quicker to decode?
[20:53:34 CET] <ddubya> sure
[20:53:42 CET] <kevinn> like what!!
[20:53:57 CET] <ddubya> lowering the bitrate & resolution are the main things
[20:54:01 CET] <ddubya> and frame rate
[20:54:20 CET] <JEEB> it really depends on which is being slower for you: CABAC decoding or having more bits for the same original data
[20:54:37 CET] <JEEB> tune fastdecode in x264 makes the encoder disable more CPU intensive features
[20:54:56 CET] <JEEB> (but on the other hand if the bit rate goes waay up then suddenly your CAVLC decoding is the bottleneck)
[20:55:26 CET] <kevinn> JEEB: will fastdecode conflict heavily with zerolatency?
[20:55:41 CET] <kevinn> which is what I have set now
[20:55:47 CET] <JEEB> no?
[20:56:07 CET] <JEEB> also do note that zerolatency is for latency, not speed. just so you aren't misunderstanding things
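For reference, x264 accepts several non-psy tunes comma-separated, so the combination under discussion should look something like this on the command line (file names are placeholders):

    ffmpeg -i in.mkv -c:v libx264 -tune zerolatency,fastdecode out.mkv

Whether fastdecode actually took effect can be checked in libx264's banner (cabac=0), as discussed below.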
[20:56:08 CET] <kevinn> okay let me try that and see what happens
[20:57:11 CET] <ddubya> kevinn, what speedup do you get by removing the yuv->rgb conversion? And what is the frame used for (display? analysis?)
[20:57:35 CET] <JEEB> it seems like his content is RGB anyways
[20:57:41 CET] <kevinn> JEEB: yes I know. so if I do this "zerolatency+fastdecode" it'll use both right?
[20:57:50 CET] <ddubya> the point is, you might achieve your goals with YUV format, for example computer vision only needs grayscale normally
[20:57:52 CET] <kevinn> ddubya: not sure, let me test that
[20:57:56 CET] <JEEB> kevinn: I don't remember exactly :P
[20:58:19 CET] <JEEB> kevinn: you don't have YCbCr to RGB conversion at least during decoding if your H.264 is already RGB :P
[21:00:00 CET] <ddubya> another example, if you are using OpenGL it is possible to make a shader for the YUV -> RGB conversion
[21:01:05 CET] <JEEB> if he doesn't have an issue with encoding RGB I don't think he will get less problems on the receiving ends by switching to YCbCr :P
[21:01:39 CET] <JEEB> but yes, all sorts of stuff can be moved to GPU (p2p conversion or whatever) as long as it makes sense in the larger context
[21:01:55 CET] <JEEB> how much time the upload and download take, f.ex.
[21:02:13 CET] <JEEB> so if he was doing more filtering on the GPU, that would make sense
[21:02:24 CET] <JEEB> just pushing it there and back doesn't often make sense
[21:03:05 CET] <JEEB> of course depending on your needs having less data to decode might make sense with YCbCr :P
[21:03:05 CET] <ddubya> yeah, not worth the trip to GPU for that operation
[21:03:13 CET] <kevinn> JEEB: okay the fastdecode tune didn't have an effect at all
[21:03:23 CET] <JEEB> too bad
[21:03:23 CET] <ddubya> unless doing more work there
[21:03:25 CET] <kevinn> ddubya: let me run that test you suggested now
[21:03:38 CET] <JEEB> kevinn: did you make sure it disabled CABAC and instead used CAVLC?
[21:03:44 CET] <JEEB> as in, the option actually got utilized
[21:04:24 CET] <kevinn> JEEB: well I didn't touch those settings when configuring x264 so I assume it did right?
[21:04:43 CET] <JEEB> that's one of the options tune fastdecode is supposed to modify
[21:04:49 CET] <JEEB> you should see it in libx264's logging
[21:04:55 CET] <JEEB> either cavlc or cabac
[21:05:05 CET] <JEEB> also `strings your_encode | grep x264`
[21:05:23 CET] <kevinn> okay let me confirm that...
[21:05:24 CET] <JEEB> it writes out a custom SEI message with the parameters and x264 version at the beginning of the stream
[21:05:31 CET] <JEEB> it's either cabac=0/1 or something else
[21:05:55 CET] <JEEB> tune fastdecode is what I used when I still cared about the 1st gen Xbox :P
[21:05:59 CET] <JEEB> to get SD H.264 to play
[21:06:09 CET] <JEEB> (but then I got tired of re-encoding pretty darn quickly)
[21:08:35 CET] <kevinn> ddubya: as for your point, when I comment out the planar to packed conversion it's taking about 5 milliseconds to decode, which is pretty darn fast. so 15 milliseconds is spent converting
[21:08:52 CET] <JEEB> is this on x86 or ARM or what, btw?
[21:08:54 CET] <kevinn> which leads me to believe the problem is still in the planar to packed conversion
[21:09:19 CET] <kevinn> right now it is on x86, but I do intend on running on ARM eventually
[21:09:21 CET] <JEEB> yea, H.264 decoding is pretty optimized :P
[21:09:39 CET] <JEEB> (at least on x86, and some on ARM - although ARM in general is much slower and mostly kept up by hw decoding chips)
[21:09:55 CET] <JEEB> kevinn: for x86 you might want to give libp2p a try which I linked
[21:10:10 CET] <JEEB> it specializes in packed<->planar optimized conversions if I recall correctly
[21:10:17 CET] <JEEB> it's by the zimg dude
[21:11:01 CET] <kevinn> okay I will try that
[21:11:18 CET] <kevinn> for reference here is my planar to packed algorithm
[21:11:19 CET] <kevinn> https://pastebin.com/ik9HzHfF
[21:11:43 CET] <kevinn> does it look like libp2p would be better
[21:12:08 CET] <JEEB> quite possible
[21:13:03 CET] <kevinn> oh p2p looks really easy to implement, let me give this a try
[21:13:35 CET] <JEEB> yea, it's a simple thing you're meant to put into your code base according to the guy
[21:13:57 CET] <JEEB> of course if that's not fast enough then you will have to start looking at what makes sense in your use case
[21:14:17 CET] <ddubya> so the h.264 is actually encoded in rgb? interesting
[21:14:30 CET] <JEEB> yes, it is actually encoded in BGR I think
[21:14:32 CET] <kevinn> ya I am slightly worried about what I am to do if it isn't fast enough
[21:14:33 CET] <JEEB> but yes, it's RGB
[21:14:43 CET] <ddubya> that's a bit unusual, any reason for that?
[21:14:53 CET] <JEEB> probably screen capture or something
[21:14:59 CET] <ddubya> hmm
[21:15:13 CET] <kevinn> JEEB is right, it is for essentially remote desktop
[21:18:00 CET] <JEEB> kevinn: btw what's the final destination of the decoded picture?
[21:18:34 CET] <kevinn> It's the machine watching the screen of the other machine, is that what you mean?
[21:18:49 CET] <JEEB> well, stuff like "it will get rendered on screen" etc
[21:18:53 CET] <JEEB> the use case, basically
[21:18:54 CET] <ddubya> yeah but what GUI library or image format
[21:19:33 CET] <kevinn> oh yes, well it is slightly unusual but I am rendering directly to /dev/fb0
[21:19:55 CET] <JEEB> well I've already seen opengl-on-drm so that's not even perverse
[21:20:23 CET] <JEEB> (mpv got all sorts of opengl back-ends added)
[21:20:42 CET] <JEEB> just wondered if you could move the RGB data as-is, planar, into the GPU and render it there
[21:21:19 CET] <kevinn> no unfortunately that won't work :(
[21:21:53 CET] <JEEB> well then you have a problem
[21:22:22 CET] <kevinn> you don't think p2p will help?
[21:22:30 CET] <JEEB> no, just if that fails
[21:22:46 CET] <JEEB> although there are some ways you can still get your head further into the chaos
[21:22:48 CET] <kevinn> ahh, well I am praying that it will work
[21:23:05 CET] <kevinn> how so?
[21:24:12 CET] <JEEB> like writing special SIMD for all architectures, or praying that decoding of H.264 can be made so that it outputs into a single buffer instead of separate ones, or you switch to 4:2:0 YCbCr and pray that the dumb fbdev you're pushing stuff into supports planar 4:2:0 YCbCr
[21:24:25 CET] <JEEB> all of those three are possibilities
[21:25:25 CET] <kevinn> okay let me finish up libp2p first before I think about those ;)
[21:30:34 CET] <kevinn> JEEB: hey a few questions on libp2p
[21:31:06 CET] <kevinn> so in p2p_api.h there this line: typedef void (*p2p_pack_func)(const void * const src[4], void *dst, unsigned left, unsigned right);
[21:31:31 CET] <kevinn> does the const void * const src[4] mean that it can only pack 4 bytes at a time?
[21:31:40 CET] <kevinn> as in I'd have to do each individually?
[21:31:53 CET] <GyrosGeier> four void pointers
[21:32:07 CET] <JEEB> it means it wants four pointers
[21:32:08 CET] <kevinn> jesus
[21:32:13 CET] <kevinn> of course
[21:32:27 CET] <JEEB> (since generally planar formats have up to four planes)
[21:32:48 CET] <kevinn> and what does unsigned left and right mean?
[21:32:52 CET] <kevinn> what does it want?
[21:33:16 CET] Action: GyrosGeier suspects that these are different planes than what we do on Amiga
[21:35:48 CET] <kevinn> JEEB: what is left and right?
[21:36:02 CET] <kevinn> do you know?
[21:37:07 CET] <ddubya> kevinn, I'm just farting around here, but maybe this function is better than what you had: https://pastebin.com/qK9kHFr3
[21:37:21 CET] <ddubya> *totally untested and not compiled*
[21:38:13 CET] <kevinn> ddubya: what does __restrict do! I've never seen it before
[21:38:34 CET] <JEEB> kevinn: not sure but the tests seem to be calling the latter with stuff https://github.com/sekrit-twc/libp2p/blob/master/main.c#L21
[21:38:46 CET] <JEEB> I can ask the author
[21:39:26 CET] <DHE> https://en.wikipedia.org/wiki/Restrict This seems related?
[21:39:28 CET] <kevinn> JEEB: I will just use 0, 2 like the author does. thanks again
[21:39:38 CET] <ddubya> kevinn, it tells the compiler that pointers of the same type can't point to the same memory
[21:40:17 CET] <JEEB> he uses 0, 2 for rgbx_be but then rgb24_le|be has 0, 1
[21:40:34 CET] <kevinn> it's an optimization thing, okay I see, let me try that too, thank you again for all the help ddubya
[21:43:34 CET] <JEEB> < Arwen> JEEB, left/right offset
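Putting the pieces together, a hedged per-row sketch (the R, G, B plane order expected by the pack functions is an assumption, as is the exact function name; FFmpeg's GBRP keeps G, B, R in planes 0, 1, 2):

    #include "p2p_api.h" /* https://github.com/sekrit-twc/libp2p */

    /* frame: decoded AVFrame in AV_PIX_FMT_GBRP;
       packed: caller-allocated width*height*3 byte RGB24 buffer */
    for (int y = 0; y < frame->height; y++) {
        const void *src[4] = {
            frame->data[2] + y * frame->linesize[2], /* R (plane 2 in GBRP) */
            frame->data[0] + y * frame->linesize[0], /* G (plane 0) */
            frame->data[1] + y * frame->linesize[1], /* B (plane 1) */
            NULL,
        };
        /* left/right: pixel-column bounds of the row segment to pack */
        p2p_pack_rgb24_be(src, packed + (size_t)y * frame->width * 3, 0, frame->width);
    }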
[22:07:01 CET] <stsbqm> Hi!
[22:10:21 CET] <stsbqm> -c:a flac -sample_fmt s24 :: Invalid sample format 's24'
[22:10:24 CET] <stsbqm> -c:a flac -sample_fmt s32 :: [flac @ 0x55b061c63e60] encoding as 24 bits-per-sample
[22:10:31 CET] <stsbqm> Is there an explanation for this?
[22:13:03 CET] <JEEB> stsbqm: there is no separate sample format for 24bit; instead 24bit is internally handled as 32bit with the coded_bits (I think) set to 24
[22:13:24 CET] <DHE> I think it's related to the fact that the compiler doesn't have a good data type for 24 bits
[22:13:27 CET] <JEEB> so it means in the framework "this is a 32bit sample, but actually utilized in it are 24bit"
[22:15:05 CET] <stsbqm> That's confusing. But okay, ty.
[22:15:31 CET] <JEEB> DHE: that in theory could have been handled with a separate 24bit sample format which would have been in many ways exactly the same as 32bit
[22:16:53 CET] <DHE> I guess.. that's what 9-bit and 10-bit H.264 use for example (with 16-bit data types)
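In API terms the convention seems to boil down to something like this on the encoder context (a sketch; bits_per_raw_sample appears to be the field that carries the "actually utilized" bit count):

    #include <libavcodec/avcodec.h>

    AVCodec *enc = avcodec_find_encoder(AV_CODEC_ID_FLAC);
    AVCodecContext *c = avcodec_alloc_context3(enc);

    c->sample_fmt          = AV_SAMPLE_FMT_S32; /* samples carried in 32-bit ints */
    c->bits_per_raw_sample = 24;                /* ...of which 24 are meaningful */
    /* sample rate, channel layout, avcodec_open2() etc. elided */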
[22:48:32 CET] <BtbN> Can ffmpeg "decode" a Dolby Pro Logic (II) signal?
[22:51:19 CET] <JEEB> depends on what you mean, but there was a filter mentioning that if I recall correctly
[22:58:45 CET] <BtbN> You can produce it, aresample has support for it.
[22:58:50 CET] <BtbN> But I can't find anything to consume it
[23:02:21 CET] <durandal_1707> surround filter
[23:03:41 CET] <BtbN> That can take Pro Logic?
[23:04:00 CET] <BtbN> The documentation is a bit barren then
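For the record, a hedged round trip (filenames are placeholders): aresample's matrix_encoding option produces Pro Logic II stereo, and the surround filter upmixes matrixed stereo back to 5.1 (its default layout):

    # encode: 5.1 -> Dolby Pro Logic II stereo
    ffmpeg -i in_51.wav -af aresample=matrix_encoding=dplii -ac 2 stereo.wav

    # decode: matrixed stereo -> 5.1
    ffmpeg -i stereo.wav -af surround out_51.wav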
[23:23:35 CET] <mkdir_> Yo 2chainz
[23:23:40 CET] <mkdir_> what's up woodward
[23:24:02 CET] <mkdir_> Idk spectrograms
[23:24:07 CET] <mkdir_> I gotta learn spectrogram
[23:24:10 CET] <mkdir_> How I do dat?
[23:25:50 CET] <mkdir_> Sharkigator
[23:25:52 CET] <mkdir_> sup
[23:29:40 CET] <mkdir_> mete, hi
[00:00:00 CET] --- Sun Jan 28 2018


More information about the Ffmpeg-devel-irc mailing list