[Ffmpeg-devel-irc] ffmpeg-devel.log.20170405

burek burek021 at gmail.com
Thu Apr 6 03:05:03 EEST 2017


[02:01:00 CEST] <J_Darnley> God fucking dammit, Cygwin!  You've hosed the git directory by crashing so much.
[02:12:34 CEST] <J_Darnley> Hm... lucky bastard.  Deleting a few files and now git thinks things are fine.
[02:19:50 CEST] <llogan> I'm glad I remain ignorant of this Cygwin thing.
[02:33:38 CEST] <iive> +1
[03:46:36 CEST] Action: J_Darnley prods FATE
[03:46:40 CEST] <J_Darnley> go faster
[03:53:55 CEST] <J_Darnley> and with those emails sent: good night
[03:56:50 CEST] <chatter29> hey guys
[03:58:07 CEST] <chatter29> allah is doing
[03:58:12 CEST] <chatter29> sun is not doing allah is doing
[03:58:14 CEST] <chatter29> to accept Islam say that i bear witness that there is no deity worthy of worship except Allah and Muhammad peace be upon him is his slave and messenger
[03:58:17 CEST] <kierank> fuck off
[04:25:33 CEST] <blb> wow. that guy spams my channel on a different network too
[04:29:13 CEST] <Shiz> he's everywhere
[04:32:42 CEST] <Compn> batman ?
[04:33:23 CEST] <Jaex> yea that guy keep joining my channel too
[10:59:56 CEST] <mateo`> I wonder why the h264 decoder output depends on the probed values of avctx->has_b_frames in avformat_find_stream_info. I'm trying to only rely on the h264 parser to do the probing in avformat_find_stream_info without entering try_decode_frame (to speed things up). Shouldn't the probed value of avctx->has_b_frames be just informational (not used when we actually want to decode samples - because the
[10:59:58 CEST] <mateo`> decoder will correct it when needed)? A lot of conformance tests break because of this, the samples are not output in the right order and the ts are wrong.
[11:00:42 CEST] <nevcairiel> this information is currently required, yes.
[11:00:55 CEST] <nevcairiel> you wont be able to get rid of decoding to retrieve that without breaking output
[11:02:17 CEST] <mateo`> but why do we need this information in advance ?
[11:09:41 CEST] <wm4> mateo`: because of the old decode API, AFAIK
[11:10:11 CEST] <wm4> also I've never set this field with my code
[11:10:20 CEST] <wm4> (for mkv input at least)
[11:11:01 CEST] <mateo`> you do not rely on avformat_find_stream_info ?
[11:11:14 CEST] <wm4> no, fuck that shit
[11:11:20 CEST] <mateo`> hahaha :D
[11:11:27 CEST] <nevcairiel> if you dont set that field, decoding can randomly drop frames at the start of a stream
[11:11:31 CEST] <nevcairiel> and its not related to the API
[11:11:36 CEST] <nevcairiel> its just h264 insanity
[11:12:57 CEST] <wm4> I thought with the new decode API we can adjust the delay better at runtime
[11:13:04 CEST] <nevcairiel> its not about that
[11:13:48 CEST] <nevcairiel> if you dont know the delay beforehand, you only know when its already too late and all you can do is drop the frame (or output it out of order, which would be worse)
[11:16:04 CEST] <wm4> what's the proper solution?
[11:16:59 CEST] <nevcairiel> you can set strict mode, which balloons memory use and delay to the maximum (especially with gpu decoding this can be annoying, as you need 32 surfaces for delay+references then, which adds up to quite a bit)
[11:19:31 CEST] <wm4> I mean for determining the delay beforehand
[11:20:37 CEST] <nevcairiel> you can probably do decoding without decoding, if you want to speed that up (ie. skip the actual decoding of slices)
[11:20:49 CEST] <nevcairiel> but not sure its really meant to be possible
[11:20:58 CEST] <nevcairiel> luckily hevc cleaned up that mess
[11:21:37 CEST] <mateo`> maybe this can be done in the parser ?
[11:21:43 CEST] <jkqxz> For proper streams you look at the reordering values in the VUI information.  Unfortunately, it isn't mandatory for encoders to set them.
[11:21:52 CEST] <jkqxz> And yeah, H.265 makes it mandatory.
[11:22:28 CEST] <nevcairiel> mateo`: you would have to replicate a lot of decoding logic
[11:22:34 CEST] <nevcairiel> just pick a new codec to work on
[11:22:38 CEST] <nevcairiel> leave h264 be :p
[11:23:40 CEST] <mateo`> i guess i will end up with a very specific patch in my codebase to do the probing
[11:24:44 CEST] <mateo`> i don't have the h264 sw decoder on my android builds, i wanted to avoid doing the probing with the mediacodec one
[11:25:36 CEST] <wm4> nevcairiel: and this is broken in Libav?
[11:33:05 CEST] <nevcairiel> they just use the maximum delay
[11:33:29 CEST] <nevcairiel> which i personally really dont like due to the excessive memory use (or surface use) and added latency
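
For background on the field being discussed: after probing, the value ends up in AVCodecParameters.video_delay, and a typical API user inherits it when copying the stream parameters into the decoder context. Below is a minimal sketch of that path; the function name is made up and error/cleanup handling is mostly omitted.

    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>

    /* Sketch: open an input, let avformat_find_stream_info() probe it (this is
     * where try_decode_frame() may run and fill in the reorder delay), then
     * pass the result on to the decoder. */
    int open_probed_video_decoder(const char *url, AVCodecContext **out)
    {
        AVFormatContext *fmt = NULL;
        int ret = avformat_open_input(&fmt, url, NULL, NULL);
        if (ret < 0)
            return ret;
        ret = avformat_find_stream_info(fmt, NULL);
        if (ret < 0)
            return ret;
        int idx = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);
        if (idx < 0)
            return idx;
        AVCodecParameters *par = fmt->streams[idx]->codecpar;
        const AVCodec *dec = avcodec_find_decoder(par->codec_id);
        AVCodecContext *avctx = avcodec_alloc_context3(dec);
        /* copies par->video_delay into avctx->has_b_frames; without a correct
         * value here the H.264 decoder may drop or mis-order the first frames,
         * which is the breakage discussed above */
        avcodec_parameters_to_context(avctx, par);
        *out = avctx;
        return avcodec_open2(avctx, dec, NULL);
    }
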
[14:01:21 CEST] <BBB> michaelni: Im looking at your simple_idct implementation in x86/simple_idct.c (the inline asm one)… uhm… thats a… ton of code? it seems to me that its basically a way of manually unrolling the 1d idcts and then (because its not using a loop) it uses zero-checks to optimize for the (common) case where part of the idct is zero (not just dc-only, but also top/left 4x4 sub-idct), is that right?
[14:12:23 CEST] <michaelni> BBB, the simple idct was hand optimized alot, yes
[14:17:02 CEST] <BBB> I guess most of that is not transferrable to a potential sse2 version, right?
[14:17:32 CEST] <BBB> I also dont see how that code resembles my code in _template.asm in any way TBH (you added a comment saying this is identical to "simple" IDCT written by Michael Niedermayer [..])
[14:32:01 CEST] <michaelni> BBB dunno about SSE2 its very long since i wrote this, something "similar" can probably be done with SSE2
[14:32:58 CEST] <michaelni> about the identical comment that seems from e9a68b0316ab127098ac4c24a6762ce68980bd23 Christophe Gisquet or am i missing something ?
[14:34:50 CEST] <michaelni> the comment sounds like it refers to an algorithmic or numerical relation btw
[14:35:57 CEST] <BBB> maybe… it implies the code is similar (which it isn't)
[14:36:18 CEST] <BBB> so I guess you have no interest in writing a yasmified version (or sse2 version) of your inline asm in simple_idct.c?
[14:43:11 CEST] <michaelni> I have no interest in converting between inline and yasm/nasm. I think i wrote nasm based asm before inline and the whole push toward rewriting all inline to yasm somewhat failed to infect me. In fact i wouldnt be surprised if at some point in the future the whole thing would flip back and there would be a push to rewrite it all to inline. Im not interested in that either
[14:44:04 CEST] <kierank> BBB: i'm going to ask j_darnley to do it
[14:44:18 CEST] <kierank> when i am in vegas and on vacation
[14:44:31 CEST] <BBB> michaelni: ok understood
[14:48:39 CEST] <kierank> BBB: is there anything else I could ask james to work on (ideally relating to mpeg2/h264)
[14:49:26 CEST] <nevcairiel> Inline is not portable, ie. there is no universally accepted syntax that works everywhere, so I doubt that inline is going to be celebrating a comeback :)
[14:50:32 CEST] <kierank> nevcairiel: yeah, sunk cost fallacy imo michaelni is arguing for
[14:50:54 CEST] <BBB> michaelni may not necessarily be arguing for gcc-style inline asm, but for something else
[14:51:16 CEST] <BBB> but right now yasm/nasm syntax is the most portable since it works on gcc-compatible compilers as well as MSVC
[14:51:25 CEST] <BBB> inline asm only works on gcc-compatible compilers
[14:51:44 CEST] <nevcairiel> Writing pure asm is the most efficient for big functions, of course actual inline functions also serve some purpose
[14:52:13 CEST] <BBB> kierank: does he have a preference for assembly work or just anything?
[14:52:24 CEST] <nevcairiel> And we use it in a few places that should be actually inlined
[14:52:48 CEST] <BBB> kierank: and dont underestimate the idct, that code is yuge [tm]
[14:52:56 CEST] <kierank> yes i know, i'm thinking more long term
[14:53:15 CEST] <kierank> because james has written all combinations of the asm we use internally for broadcast pixel formats
[14:55:01 CEST] <BBB> so, I can think of a few things for mpeg2
[14:55:23 CEST] <BBB> first, its probably worth profiling that code and seeing if all code actually has sse2 counterparts (where useful) for all dsp functions
[14:55:31 CEST] <BBB> for example, the idct for 8bit is still mmx only
[14:55:46 CEST] <kierank> we have profiles of that already
[14:55:55 CEST] <kierank> https://usercontent.irccloud-cdn.com/file/9ctGXrOA/mpeg2hd.png https://usercontent.irccloud-cdn.com/file/QwdctwxR/mpeg2sd.png
[14:56:14 CEST] <BBB> I didnt mean see whats slow, I just meant see that all things that loop in mmx simd have a sse2 counterpart :) its more review than profile I guess
[14:56:23 CEST] <kierank> ah
[14:56:53 CEST] <BBB> then, two things come to mind for me
[14:56:56 CEST] <nevcairiel> do you really still need more h264/mpeg2 speed? those decoders are already quite speedy
[14:57:21 CEST] <BBB> in h264 and beyond, we typically integrated the put/add into the actual idct, but for pre-h264 codecs (including idctdsp/etc), the two are actually separate functions
[14:57:31 CEST] <nevcairiel> not that more speed isnt good, but gaining 1.12x speedups on some h264 idcts seems like a tiny gain
[14:57:32 CEST] <BBB> in the same way, clear_blocks and idct_add are separate functions
[14:57:44 CEST] <BBB> whereas for h2645/vp8/9, the clear_blocks and idct are integrated
[14:58:11 CEST] <BBB> thats quite a bit of work (and involves various archs), but would lead to some nice speedup as well as a simplification of the API on the decoder side
[14:58:12 CEST] <kierank> nevcairiel: yes, in $job any increases in cpu performance are good
[14:58:38 CEST] <BBB> the clear_blocks integration might not work (or help) on the encoder-side, on the other hand…
[14:58:48 CEST] <BBB> but at least integrating idct and put/add should be universally useful
[14:59:27 CEST] <BBB> thats all mpegvideo (mpeg1/2/4)
[14:59:46 CEST] <BBB> maybe michaelni knows some other outstanding fixmes in the design there, he knows that code better than me
[15:00:27 CEST] <BBB> if eventually we can kill this thing called blockdsp, that would be absolutely amazing
[15:00:39 CEST] <BBB> (but maybe not realistic)
[17:04:17 CEST] <BBB> michaelni: for timestamp calculations, compute_pkt_fields() is kind of annoying in vp9 frames after superframe (invisible frame) splitting… is it OK to add an invisible frame flag to AVPacket and skip compute_pkt_fields() for invisible frames?
[17:05:04 CEST] <BBB> michaelni: it causes some bugs/inconsistencies for vp9 files with duration set (invisible frames have pts=dts=AV_NOPTS_VALUE, but compute_pkt_fields() will then make up timestamps manually based on last_pts+duration)
[17:10:15 CEST] <nevcairiel> thats the one for encoding, isnt it
[17:11:29 CEST] <nevcairiel> shouldnt invisible frames be super-framed at that point
[17:14:42 CEST] <nevcairiel> guess not, thats the one for reading, 2 is the one for muxing
[17:17:33 CEST] <nevcairiel> could use the sledgehammer method and just make it skip timestamp interpolation for vp9, just like it does  for h264/hevc
[17:17:38 CEST] <wm4> BBB: why not do it like Libav and split the packets in the decoder?
[17:18:11 CEST] <BBB> needs to be done pre-decoder to allow frame-mt
[17:18:12 CEST] <nevcairiel> thats slower for frame threads
[17:18:17 CEST] <wm4> (or rather, a BSF right before the decoder)
[17:18:26 CEST] <BBB> we could do BSF before decoder
[17:18:50 CEST] <BBB> auto-bsf would come in useful here since the decoder could request the bsf rather than the demuxer
[17:18:51 CEST] <wm4> Libav's libavcodec can apply BSFs automatically for decoders
[17:18:55 CEST] <BBB> but right now thats not how it works
[17:19:04 CEST] <BBB> can someone backport that? ;)
[17:19:24 CEST] <wm4> I feel like an "invisible" flag would be abused for other stuff, also we already have a "do not show" flag for the broken edit list stuff
[17:19:41 CEST] <BBB> hehe maybe
[17:19:47 CEST] <wm4> it shouldn't be so far ahead in the merges
[17:19:52 CEST] <BBB> Im fine with the BSF, hows the backport for that coming along?
[17:20:12 CEST] <wm4> not aware that anyone is working on it
[17:21:14 CEST] <BBB> :)
[17:21:51 CEST] <michaelni> i feel there should be no invisible AVPackets, their content should be merged into the AVPacket actually using it. We do the same for all other codecs, all additional stuff in mpeg and h26* is put in the packet/access unit it belongs to
[17:22:06 CEST] <michaelni> i think this would be cleaner
[17:22:59 CEST] <wm4> so we might actually need a vp9 superframe merge BSF for some demuxers (possibly)
[17:24:34 CEST] <wm4> I'm also wondering what vp9 codec copy will do
[17:24:42 CEST] <wm4> *stream copy
[17:25:50 CEST] <BBB> wm4: we already have that, its called vp9_superframe_bsf.c
[17:25:57 CEST] <BBB> wm4: I wrote it to fix stream copy ;)
[17:26:21 CEST] <BBB> but we need a superframe split bsf which decoders can use before actual decoding with frame-mt not breaking
[17:26:43 CEST] <BBB> michaelni: superframe is like packed divx, I believe packed divx is handled in parsers also
[17:27:04 CEST] <BBB> (also b/c of frame-mt)
[17:27:23 CEST] <michaelni> packed divx IIRC goes packed into decoders 
[17:27:37 CEST] <wm4> BBB: oh.
[17:29:13 CEST] <BBB> how does frame-mt work with packed divx?
[17:31:19 CEST] <michaelni> off the top of my head (which may be wrong) the 2 packed frames go in, B frame gets decoded, next a dummy frame (thats how packed divx is stored) goes in and the 2nd frame comes out
[17:32:16 CEST] <wm4> delicious vfw bullshit (?)
[17:32:21 CEST] <michaelni> yes
[17:32:30 CEST] <wm4> with restrictions due to avi etc.
[17:32:52 CEST] <wm4> didn't we have a BSF that fixes this? (just guessing..)
[17:33:26 CEST] <michaelni> yes for stream copy
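
The vp9_superframe merge BSF mentioned above (and BSFs like it) are driven through the av_bsf_* API. A minimal sketch of that usage follows, assuming a tree with the new BSF API; the helper names are made up and only the happy path is shown:

    #include <libavcodec/avcodec.h>   /* newer trees: also libavcodec/bsf.h */

    /* Sketch: set up the existing vp9_superframe merge BSF for one stream. */
    static int init_vp9_merge_bsf(AVBSFContext **bsf, const AVCodecParameters *par)
    {
        const AVBitStreamFilter *f = av_bsf_get_by_name("vp9_superframe");
        int ret;
        if (!f)
            return AVERROR_BSF_NOT_FOUND;
        if ((ret = av_bsf_alloc(f, bsf)) < 0)
            return ret;
        if ((ret = avcodec_parameters_copy((*bsf)->par_in, par)) < 0)
            return ret;
        return av_bsf_init(*bsf);
    }

    /* Per input packet: feed it, then drain zero or more merged packets. */
    static int merge_and_mux(AVBSFContext *bsf, AVPacket *pkt)
    {
        int ret = av_bsf_send_packet(bsf, pkt);
        while (ret >= 0) {
            ret = av_bsf_receive_packet(bsf, pkt);
            if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
                return 0;
            if (ret < 0)
                break;
            /* ...hand pkt to the muxer here... */
            av_packet_unref(pkt);
        }
        return ret;
    }
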
[17:56:05 CEST] <PaoloP> Hello ffmpeg people. About 6 days ago I posted this new API example, following ALL the guidelines that ubitux, wm4, michaelni and others gave me: http://ffmpeg.org/pipermail/ffmpeg-devel/2017-March/209494.html. Then, after the code was finished (it's very clean and it works; I also did a clean memory check with valgrind), I got no more reviews. Obviously I don't expect that it has to be pushed, but it seems
[17:56:06 CEST] <PaoloP> that it has not been reviewed yet because the team was very busy with the last release; at least, can you tell me if there is a chance it will be pushed? No problem if it won't, but I would be glad to know the reasons the code has been rejected, or whether it is still pending.
[17:57:27 CEST] <PaoloP> (I ask that also because I see that the current state of doc/examples is a bit unclear)
[17:57:35 CEST] <wm4> did you make all changes that were requested?
[17:57:42 CEST] <PaoloP> wm4: yes
[17:59:02 CEST] <PaoloP> wm4: the only changes that I did not make were the ones that were not asked as "do it", but as "let's discuss this" (and I explained why I preferred to leave the code the way I proposed)
[18:00:01 CEST] <wm4> personally I still think it does too much at once
[18:01:21 CEST] <PaoloP> wm4: obviously I don't want to change your mind, but I don't understand whether the rest of the ffmpeg team thinks the same. So I ask you all to look at it.
[18:03:28 CEST] <PaoloP> wm4: anyway, consider that "muxing.c", and "transcode_aac.c" do much more than my example (and they are really messy). But, as said before, it's up to the ffmpeg team to decide what is useful and what's not. I would only be glad to receive an answer, here
[18:04:34 CEST] <atomnuker> michaelni: could you run fate to make sure the 3 mjpeg patches are ok when you have the time?
[18:06:12 CEST] <jamrial> atomnuker: the mjpeg valgrind fix should be backported to 3.3 as well once it's pushed to git master
[18:06:18 CEST] <PaoloP> wm4: so, can I consider the code rejected?
[18:06:31 CEST] <wm4> PaoloP: no, that's just my opinion
[18:06:44 CEST] <atomnuker> jamrial: yep, that's the plan
[18:07:07 CEST] <PaoloP> wm4: how can I know if the code has been rejected or it's in a pending state?
[18:07:54 CEST] <PaoloP> the guideline says: "Give us a few days to react. But if some time passes without reaction, send a reminder by email. Your patch should eventually be dealt with."
[18:08:30 CEST] <PaoloP> but I don't understand to whom I should send the email. In addition, I saw that you were all very busy in the last days
[18:08:43 CEST] <cone-983> ffmpeg 03Damien Riegel 07master:549acc999533: codec: bitpacked: add decoder
[18:08:43 CEST] <cone-983> ffmpeg 03Damien Riegel 07master:01718dc0df57: rtp: rfc4175: add handler for YCbCr-4:2:2
[18:09:26 CEST] <wm4> PaoloP: there's no overly formal way to see whether it's accepted or rejected
[18:10:35 CEST] <PaoloP> wm4: I see, but the strange thing is that people here gave me precise indications of things to modify in it, and I applied them. Then they got very busy doing other things
[18:11:10 CEST] <PaoloP> They also told me to modify the build stuff (and I did that)
[18:11:37 CEST] <wm4> maybe nobody feels responsible
[18:11:40 CEST] Action: wm4 looks away
[18:12:39 CEST] <PaoloP> [18:11] <wm4> maybe nobody feels responsible .... it could be
[18:13:26 CEST] <PaoloP> wm4: but I think that a single word of reply could be spared
[18:14:16 CEST] <wm4> I'm just trying to explain why it is how it is, I don't claim it's ideal
[18:15:46 CEST] <wm4> PaoloP: I'd recommend sending the patch again (with no --reply-to)
[18:16:45 CEST] <BBB> michaelni: I know thats how decoding works ;) I meant how is it achieved with frame-mt (which is enabled in mpeg4, of which divx is a part)
[18:16:47 CEST] <PaoloP> wm4: ok.  Ubitux: what do you think about that?
[18:17:21 CEST] <BBB> michaelni: how do we get the two frames in the packed packet ( :-p ) to be decoded in parallel if theyre in the same packet? they must be split pre-decoding (i.e. in a parser), right
[18:17:23 CEST] <BBB> ?
[18:17:25 CEST] <PaoloP> wm4: did it happen in the past that patches were accepted after a rather long time, after being re-sent to the mailing list?
[18:17:35 CEST] <BBB> or does the decoder literally serialize that step, thus choking up the whole frame-mt mechanism?
[18:17:57 CEST] <nevcairiel> i think it does
[18:18:05 CEST] <BBB> it chokes frame-mt?
[18:18:22 CEST] <nevcairiel> there are even warnings about the file being inefficient
[18:18:29 CEST] <wm4> PaoloP: it just reminds people that it exists, rather than being buried under hundreds of other mails
[18:18:30 CEST] <BBB> /quit Im gonna kill myself
[18:18:35 CEST] <BBB> really?
[18:18:50 CEST] <ubitux> PaoloP: TBH i personally think we have too many examples and i'd rather have a cleanup of the existing examples instead of adding yet another audio encoding one
[18:19:12 CEST] <ubitux> PaoloP: like, we have encode_audio. and resampling_audio.c
[18:19:17 CEST] <ubitux> what does your example add?
[18:19:24 CEST] <ubitux> so while the code itself is starting to be ok
[18:19:44 CEST] <ubitux> i have doubts wrt how the example mess is going to be perceived by our users
[18:21:18 CEST] <PaoloP> ubitux: as explained in the comments: "It can be adapted, with few changes, to a custom raw audio source (i.e: a live one)." "It uses a custom I/O write callback (write_adts_muxed_data()) in order to show the user how to access muxed packets written in memory, before they are written to the output file". In addition, it shows sequentially all the common steps: reading raw input frames, resampling, encoding and
[18:21:19 CEST] <PaoloP> muxing, all in one.
[18:21:56 CEST] <PaoloP> step by step
[18:22:47 CEST] <ubitux> yeah it's just another combination of things we already have
[18:22:52 CEST] <wm4> PaoloP: is this about educating the user how the API works, or for providing copy&paste snippets for applications?
[18:23:01 CEST] <PaoloP> wm4: both
[18:23:33 CEST] <PaoloP> wm4: don't reject the idea of copy/paste snippets. They're very useful for all libraries
[18:24:32 CEST] <PaoloP> ubitux: it's a combination, but it's written in a CLEAR way (strictly sequential, without splitting the code into functions that force the user to jump from one line to another when reading)
[18:25:49 CEST] <nevcairiel> BBB: on a quick glance it seems like it stores the "packed" frame in an allocated buffer in the context which gets even copied over to the next thread in update_thread_context and then decoded there - this is all assuming that after such a packed frame an "NVOP" frame appears (basically a placeholder for the re-ordered packed frame), which the decoder can just ignore and instead decode the buffered packed frame
[18:25:55 CEST] <PaoloP> from the user's point of view (consider me a _user_) it was very difficult and time consuming to adapt and test the existing examples for some real cases.
[18:26:21 CEST] <BBB> nevcairiel: yeah so it splits also, thats what I thought :)
[18:26:28 CEST] <nevcairiel> it just does it very ugly
[18:26:32 CEST] <nevcairiel> its so old
[18:26:40 CEST] <BBB> people think vp9s method is ugly also ^^
[18:26:59 CEST] <nevcairiel> doing it in the parser was probably a bit ugly
[18:27:01 CEST] <ubitux> PaoloP: that's my opinion, but i don't think your example helps in that regard, so while i won't object to its inclusion, i will not encourage it
[18:27:05 CEST] <nevcairiel> doing it in a bsf might be better
[18:27:27 CEST] <PaoloP> for example, IMHO (and I repeat IMHO) it's a bad idea to generate tones instead of reading "real" raw frames. It adds code that the user has to remove in order to adapt the code to his needs
[18:27:58 CEST] <wm4> PaoloP: there's always a trade-off between requesting improvements and taxing a patch author too much with amends
[18:28:11 CEST] <nevcairiel> if the point is to demonstrate processing and muxing, we also dont need to implement full file reading
[18:28:37 CEST] <nevcairiel> examples are not supposed to be "templates" you can copy and modify
[18:28:42 CEST] <nevcairiel> they are for learning and understanding
[18:29:44 CEST] <ubitux> PaoloP: i'm sorry if you saw my previous reviews as cheering for its inclusion and made you work on it if it doesn't get applied
[18:30:37 CEST] <wm4> I don't understand what's the advantage over other "multi-examples" like transcode_aac is
[18:30:41 CEST] <PaoloP> nevcairiel: IMHO in all libraries, examples should be used as templates too. Anyway, I repeat: this is strictly a personal opinion. I obviously accept what the team says. Anyway, consider that the examples in the lib are really too difficult for users, for real scenarios.
[18:30:47 CEST] <michaelni> BBB, was afk, i see nevcairiel already answered, yes theres a bitstream buffer ...
[18:31:28 CEST] <BBB> is that changed to be a BSF in libav also?
[18:31:38 CEST] <wm4> I doubt Libav touched dixv
[18:31:38 CEST] <nevcairiel> dont think so
[18:31:45 CEST] <nevcairiel> everyone is afraid of mpegvideo
[18:31:47 CEST] <wm4> *divx (but works either way)
[18:32:12 CEST] <BBB> :)
[18:32:16 CEST] <BBB> hm...
[18:32:42 CEST] <jamrial> BBB: about libav, they added a vp9 superframe splitter bsf based on your superframe merge bsf
[18:32:54 CEST] <nevcairiel> yeah we talked about that earlier
[18:33:02 CEST] <jamrial> ah ok
[18:33:17 CEST] <nevcairiel> might be a good idea to merge that functionality and get the parser to stop doing that
[18:33:31 CEST] <BBB> sure
[18:33:37 CEST] <PaoloP> ubitux: I did not see your review that way. I only did not understand what to do with the example. I'm still convinced that the state of the doc/examples folder is bad and not useful at all for people who want to adapt the library to real needs (so they are forced to work a lot on the code before they have something that "works"). But, anyway, if this is your opinion I accept it. I'm not going to re-
[18:33:38 CEST] <PaoloP> send the patch to the mailing list again
[18:33:39 CEST] <jamrial> it's queued
[18:34:08 CEST] <PaoloP> so, the case is closed for me
[18:34:26 CEST] <wm4> well I still think the examples could use improvements
[18:34:33 CEST] <nevcairiel> the way I see it, being forced to understand what the code does is a good thing
[18:34:44 CEST] <wm4> but IMO most examples are bad because they do too much
[18:34:51 CEST] <nevcairiel> instead of just copying some template example and somehow having something that barely works, but not knowing why
[18:35:25 CEST] <wm4> e.g. transcode_aac.c... who the hell is going to write a transcoder and start with that example to learn how? that's absurd
[18:35:34 CEST] <PaoloP> wm4: IMHO generating tones and dummy pixels is very bad. And it is very bad to use functions only in order to split the code.
[18:35:53 CEST] <nevcairiel> splitting the code with functions is something every developer should learn
[18:35:59 CEST] <nevcairiel> having one mega function is bad design anywhere
[18:36:06 CEST] <wm4> PaoloP: it provides an easy way to get raw data
[18:36:13 CEST] <PaoloP> nevcairiel: I agree when the functions are used more than once
[18:36:14 CEST] <wm4> PaoloP: it's either that, or fopening raw files
[18:36:54 CEST] <PaoloP> but not when the functions are something "global" called once, with obscure names that hide everything the functions do
[18:37:03 CEST] <wm4> (or involving a complete decoder chain)
[18:37:53 CEST] Action: wm4 stares at http_multiclient.c
[18:38:03 CEST] <PaoloP> wm4: no, because with raw files you have _real_ frames. In the case of generated tones, the user has to take care to skip that part of the code, which is not useful at all for any real scenario (which surely has to be handled)
[18:38:37 CEST] <wm4> PaoloP: even generating a raw tone shows how AVFrames are setup and how you put data into them
[18:38:58 CEST] <cone-983> ffmpeg 03Clément Bœsch 07master:d8eb40bd70c9: doc/general: fix project name after 2b1a6b1ae
[18:38:59 CEST] <wm4> (but it's bad if the example is just doing that to pass it to swresample or so)
[18:40:01 CEST] <nevcairiel> examples should be concise and centered on one specific part to show-case, unfortunately dealing with multi-media you always need some data to process, so that data has to come from somewhere
[18:40:02 CEST] <iive> well, I love kernel_function() _kernel_function() , __kernel_function() and ___kernel_function()
[18:40:12 CEST] <nevcairiel> cant just write a decode example without having real data
[18:40:28 CEST] <nevcairiel> but you can write an encode example without having to  worry about also decoding some file before
[18:40:32 CEST] <nevcairiel> by just generating random data
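
For reference, the tone-generation step being debated is only a handful of lines once the AVFrame is set up. A minimal sketch for mono float samples, using the channel_layout field as it existed at the time (current trees use AVChannelLayout); the function name is made up:

    #include <math.h>
    #include <libavutil/frame.h>
    #include <libavutil/channel_layout.h>

    /* Sketch: fill one AVFrame with a 440 Hz sine so an encode example has input. */
    static AVFrame *make_tone_frame(int sample_rate, int nb_samples)
    {
        AVFrame *frame = av_frame_alloc();
        if (!frame)
            return NULL;
        frame->format         = AV_SAMPLE_FMT_FLT;
        frame->sample_rate    = sample_rate;
        frame->nb_samples     = nb_samples;
        frame->channel_layout = AV_CH_LAYOUT_MONO;  /* ch_layout on current trees */
        if (av_frame_get_buffer(frame, 0) < 0) {
            av_frame_free(&frame);
            return NULL;
        }
        float *dst = (float *)frame->data[0];
        for (int i = 0; i < nb_samples; i++)
            dst[i] = 0.5f * sinf(2.0 * M_PI * 440.0 * i / sample_rate);
        return frame;
    }
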
[18:40:33 CEST] <PaoloP> wm4: it adds a lot of code, which the user will surely remove. I would say: look at the muxing.c example. It's wrong starting from the name (it doesn't simply "mux", it does many things in addition)
[18:40:54 CEST] <PaoloP>  and ALL the functions inside it have wrong and incomplete names
[18:41:15 CEST] <PaoloP> let me show an example
[18:41:19 CEST] <wm4> that's a bloated POS full of bad stuff from the first look
[18:41:22 CEST] <wm4> (muxing.c)
[18:41:47 CEST] <PaoloP> wm4: this is the result of using these weird functions
[18:42:22 CEST] <PaoloP> and this is why I AVOIDED those functions
[18:42:34 CEST] <nevcairiel> having everything in one function is also terrible
[18:42:35 CEST] <PaoloP> and I preferred to write strictly procedural code. Which is not long
[18:42:47 CEST] <nevcairiel> a single function over 100 lines should likely be split
[18:42:49 CEST] <nevcairiel> general rule ^
[18:42:50 CEST] <wm4> you aren't going to win me over by writing big, monolithic functions
[18:43:10 CEST] <PaoloP> wm4: I don't intend that like a "competition" (win you)
[18:43:34 CEST] <wm4> but yeah, muxing.c absolutely fucking sucks as example
[18:43:41 CEST] <wm4> not even I can tell what's going on in there
[18:43:43 CEST] <PaoloP> IMHO, functions that are called once, only in order to split the code, are absolutely NONSENSE
[18:43:48 CEST] <wm4> and I'd call myself an experienced API user
[18:43:58 CEST] <wm4> PaoloP: disagree
[18:44:10 CEST] <PaoloP> I used few functions, in my code, that are called more than once
[18:44:12 CEST] <wm4> if done cleanly, it's superior to monolithic functions
[18:44:48 CEST] <PaoloP> wm4: anyway, I said that I could modify the code to use functions, if I got consistent feedback
[18:44:59 CEST] <nevcairiel> splitting logical blocks of code into functions improves readability immensely
[18:45:12 CEST] <nevcairiel> even if only used once
[18:45:13 CEST] <PaoloP> but given that the example has not been accepted it doesn't matter anymore
[18:45:43 CEST] <PaoloP> nevcairiel: if you use comments, the readability is automatically improved. In fact I used detailed comments
[18:45:57 CEST] <nevcairiel> comments dont replace well-structured code
[18:46:03 CEST] <PaoloP> not atomic comments, but comments about the sequence of operations
[18:46:37 CEST] <wm4> using functions with good names is superior over commented sections of code in a single big functions
[18:47:19 CEST] <PaoloP> wm4: as said before, it would not have been a problem, for me, to modify the code to use functions. I don't expect other people to accept my idea
[18:47:39 CEST] <PaoloP> but in this case, all that is pointless, because the code has not been approved
[18:52:34 CEST] <wm4> I guess I understand if you don't want to invest more time into this
[18:53:17 CEST] <PaoloP> wm4: this is obvious, but this is not my or your fault.
[18:53:45 CEST] <wm4> whose fault is it?
[18:54:59 CEST] <PaoloP> wm4: the point is not the "fault". The point is that the doc/examples folder will stay in a bad state in the future too. And a lot of people will ask on the libav-user mailing list how to do this and that without getting replies
[18:56:33 CEST] <PaoloP> I made many snippets for both audio and video tasks, for a lot of practical needs. But if you start by thinking that "copy and paste" snippets are bad (which is very dogmatic, IMHO), they are really not applicable at all to the ffmpeg tree
[18:57:06 CEST] <PaoloP> so, this is not "fault". different point of views
[19:03:14 CEST] <cone-983> ffmpeg 03Kyle Swanson 07master:f3d8e0d36945: avfilter/af_loudnorm: do not upsample during second-pass linear normalization
[19:22:37 CEST] <StevenLiu> Hello guys?
[19:23:04 CEST] <StevenLiu> Can the patch renaming av_strreplace to av_strireplace be applied?
[19:23:31 CEST] <jamrial> StevenLiu: no
[19:23:33 CEST] <StevenLiu> I saw Michael released 3.3?
[19:23:44 CEST] <jamrial> you can't rename, but you can apply the actual changes to the function
[19:23:58 CEST] <BtbN> that sounds highly API and ABI breaking
[19:24:04 CEST] <StevenLiu> ok, apply the patch without the rename, right?
[19:25:14 CEST] <jamrial> yes
[19:25:19 CEST] <StevenLiu> if so, i will remake a new patch without the rename operation
[19:25:20 CEST] <michaelni> StevenLiu, 3.3 is not released yet, i wanted to give more time to people to test the release branch
[19:26:22 CEST] <jamrial> michaelni: this is about the branch being already cut, which already happened, not about the release being tagged
[19:26:25 CEST] <StevenLiu> Okay, Thanks, let me remake a new patch without renaming av_strreplace and apply it :)
[19:28:22 CEST] <StevenLiu> maybe if we rename av_strreplace to av_strireplace, we need to add an entry to APIChanges?
[19:29:23 CEST] <BtbN> It'll need a major bump
[19:30:38 CEST] <jamrial> looks like it didn't even get an APIChanges line when it was pushed...
[19:31:08 CEST] <jamrial> fuck, i don't even know at this point what would be best
[19:40:38 CEST] <StevenLi_> I've resent a new patch that just removes the rename operation
[19:47:59 CEST] <wm4> I don't understand why it can't just be renamed?
[19:48:53 CEST] <BtbN> It's public API?
[19:49:01 CEST] <wm4> unreleased
[19:49:13 CEST] <StevenLi_> it is a new API
[19:49:31 CEST] <StevenLi_> probably no module or user calls it right now
[19:50:06 CEST] <StevenLi_> it's in libavutil/avstring.c
[19:50:35 CEST] <jamrial> wm4: i have stopped caring, the thing wasn't even properly applied to begin with
[20:08:45 CEST] <michaelni> ubitux, sws-pixdesc-query fails on mips http://fate.ffmpeg.org/report.cgi?time=20170405024041&slot=mips-ubuntu-qemu-gcc-4.4
[20:09:20 CEST] <ubitux> mmh, i forgot this thing
[20:09:31 CEST] <ubitux> i'll have a look
[20:09:49 CEST] <jamrial> also, that filter-pixfmts-scale failure is old, and happens on every big endian system
[20:09:56 CEST] <jamrial> i think it was introduced by paul
[20:10:12 CEST] <michaelni> jamrial, yes i think so too
[20:10:25 CEST] <wm4> just drop big endian support
[20:12:30 CEST] <jamrial> durandal_1707: could you look at the above? 6427c9ffee is probably the commit that introduced it
[20:14:27 CEST] <durandal_1707> jamrial: it is about 16bit pix fmt?
[20:15:26 CEST] <jamrial> durandal_1707: yes, the test i mentioned above is failing on big endian since that commit it seems
[20:38:57 CEST] <wm4> I guess I should never have suggested to move it to a separate function
[21:37:36 CEST] <cone-983> ffmpeg 03Ronald S. Bultje 07master:7c7e7c44a6eb: huffyuv: assign correct per-thread avctx pointer to HYuvContext::avctx.
[21:53:04 CEST] <cone-983> ffmpeg 03Rostislav Pehlivanov 07master:c901ae944040: bitpacked: fix potential overflow
[22:17:16 CEST] <kierank> BBB: i think you upset J_Darnley
[22:17:23 CEST] <kierank> oh not you
[22:17:26 CEST] <kierank> jamrial apparently
[22:20:15 CEST] <BBB> huh?
[22:20:23 CEST] <BBB> what did I do? Im sorry :(
[22:20:27 CEST] <kierank> BBB: sorry was jamrial 
[22:20:35 CEST] <kierank> about REP_RET discussion
[22:21:54 CEST] <BBB> if what hes saying is true, were probably not the right people to discuss it, it looks highly technical about x86inc.asm internals… maybe ask gramner?
[22:22:18 CEST] <BBB> (Im not saying the patch is wrong or doesnt belong here, just that its very technical and I probably am not sure about it)
[22:22:37 CEST] <BBB> also ubitux thanks for the slice-mt-tsan station, it found me a bug ;)
[22:23:01 CEST] <ubitux> ah right i didn't check if it worked
[22:23:09 CEST] <ubitux> so it spot some new stuff?
[22:24:12 CEST] <ubitux> wow, it goes crazy with h264dec
[22:24:35 CEST] <ubitux> oh, you found issues in vp8 ok
[22:26:08 CEST] <BBB> I knew there was one issue
[22:26:17 CEST] <BBB> (which is atomic_int so not very interesting practically)
[22:26:23 CEST] <BBB> but it found a second issue which is a real bug
[22:26:30 CEST] <BBB> (not one triggered by our fate samples, but still)
[22:26:40 CEST] <BBB> so cool stuff
[22:26:59 CEST] <BBB> and yes it has issues w/ h264 and hevc, big surprise, slices kill puppies
[00:00:00 CEST] --- Thu Apr  6 2017

