[Ffmpeg-devel-irc] ffmpeg-devel.log.20170802
burek
burek021 at gmail.com
Thu Aug 3 03:05:04 EEST 2017
[01:38:42 CEST] <rcombs> jamrial: actually I just realized I can't make the change you suggested in flacenc because it conflicts with my change to rewrite the sample count value in-muxer
[01:39:09 CEST] <jamrial> how so?
[01:39:52 CEST] <rcombs> see the last patch in the set
[01:40:25 CEST] <rcombs> it overwrites the sample count with the actual number of samples written at the end, regardless of whether the extradata was otherwise updated
[01:41:10 CEST] <rcombs> I suppose I could do one write for the updated extradata (only if it had actually changed), and then another seek+write for sample count
[01:42:35 CEST] <jamrial> if this header editing you're doing per packet invalidates the contents of STREAMINFO, then you should make sure that's updated to reflect said changes, yes
[01:42:37 CEST] <rcombs> but that ends up reading from streaminfo anyway (*shakes fist at 36-bit integers*) so it doesn't really simplify anything
[01:44:19 CEST] <jamrial> in any case, i'm not sure if libavformat is the correct place for this kind of bitstream handling
[01:44:20 CEST] <rcombs> it's actually sort of tangential to the per-packet editing
[01:44:33 CEST] <jamrial> this seems more the job for an avcodec bitstream filter
[01:44:50 CEST] <rcombs> e.g. do ffmpeg -i input.flac -c copy -t [period shorter than the duration] out.flac
[01:45:09 CEST] <rcombs> currently you'll end up with an incorrect duration in the output, because we copy the streaminfo from the input
[01:45:39 CEST] <jamrial> also an invalid md5 checksum, i suppose
[01:45:43 CEST] <rcombs> streaminfo and some portions of the FLAC header behave less like codec headers and more like container headers
[01:46:47 CEST] <rcombs> (a decent portion of both are completely redundant when muxed into some other container format)
[01:47:56 CEST] <rcombs> I can't really fix the md5 thing because it operates on raw audio data, for reasons I will never understand
[01:48:34 CEST] <rcombs> but it doesn't actually affect demuxing so I care less about it
[01:50:30 CEST] <jamrial> because it's a lossless format. you care about the decoded pcm data, not the encoded flac data
[01:51:56 CEST] <rcombs> I mean, an integrity check on one is equivalent to an integrity check on the other
[01:52:10 CEST] <rcombs> except doing it on the encoded data would let me rewrite it in the muxer
[01:56:02 CEST] <rcombs> I guess it serves as a validation check of the decoder implementation, but… eh
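For context, the 36-bit total-samples field rcombs is fighting with sits at bit offset 108 of the 34-byte STREAMINFO block, straddling the low nibble of byte 13. A minimal sketch of rewriting just that field in a raw STREAMINFO buffer (the function name is made up; this is not the actual flacenc code) could look like:

    #include <stdint.h>

    /* Hypothetical helper: overwrite the 36-bit total-samples count inside a
     * raw 34-byte FLAC STREAMINFO block. Per the FLAC spec the field starts
     * at bit 108, i.e. the low nibble of byte 13, followed by bytes 14..17. */
    static void set_streaminfo_total_samples(uint8_t streaminfo[34], uint64_t samples)
    {
        samples &= (1ULL << 36) - 1;                /* field is 36 bits wide      */
        streaminfo[13] = (streaminfo[13] & 0xF0) |  /* keep bits-per-sample bits  */
                         (uint8_t)(samples >> 32);  /* top 4 bits of the count    */
        streaminfo[14] = (uint8_t)(samples >> 24);
        streaminfo[15] = (uint8_t)(samples >> 16);
        streaminfo[16] = (uint8_t)(samples >>  8);
        streaminfo[17] = (uint8_t) samples;
    }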
[02:24:02 CEST] <atomnuker> dcherednik: it takes floats
[02:24:09 CEST] <atomnuker> float samples
[03:37:06 CEST] <ZeroWalker> Hmm, if you have Audio and Video encoded in different threads, is there any good practice on how to flush the packets to av_interleaved_write_frame, as it (as far as i can tell) should be done on a single thread
[04:23:40 CEST] <DHE> ZeroWalker: you can have 2 threads do writes as long as you do your own synchronization. like a mutex for them
[04:27:34 CEST] <ZeroWalker> so, a CriticalSection (windows) would be fine to use just at the place where it writes?
[04:32:14 CEST] <DHE> I decline to answer anything involving APIs I know nothing about
[04:35:56 CEST] <ZeroWalker> okay, thanks though, the mutex gives the clear picture:)
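A minimal sketch of what DHE is describing, assuming pthreads and an already-opened output context ("oc", "mux_lock" and "write_packet_locked" are made-up names); both the audio and the video thread would call the wrapper instead of the muxer directly:

    #include <pthread.h>
    #include <libavformat/avformat.h>

    static pthread_mutex_t mux_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Serialize writes coming from the audio and video threads; the muxer
     * itself takes care of interleaving the packets by timestamp. */
    static int write_packet_locked(AVFormatContext *oc, AVPacket *pkt)
    {
        pthread_mutex_lock(&mux_lock);
        int ret = av_interleaved_write_frame(oc, pkt);
        pthread_mutex_unlock(&mux_lock);
        return ret;
    }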
[08:57:21 CEST] <ldts> I am testing master before rebasing (commit 1193301 lavc/htmlsubtitles: reindent after previous commits) and fate seems to be broken https://hastebin.com/egobehutoq.pas
[11:26:16 CEST] <cone-576> ffmpeg 03Paul B Mahol 07master:c79e7534712f: avfilter: add unpremultiply filter
[13:12:34 CEST] <J_Darnley> atomnuker: may I bug you with more questions? About diracdec this time.
[13:12:56 CEST] <J_Darnley> How does it arrange the coeffs after dequant?
[13:13:57 CEST] <J_Darnley> The templated dequant_subband appears to place them in the 4 orientation layout.
[13:15:25 CEST] <J_Darnley> because it writes to the buffer with "*dst_r++ = c*sign"
[13:17:55 CEST] <atomnuker> it's different from the encoder
[13:18:25 CEST] <J_Darnley> Well, yeah
[13:18:37 CEST] <atomnuker> IIRC during dequant they're reordered
[13:20:06 CEST] <J_Darnley> Oh, I should say I'm not looking at master but at your "trimmed" branch on github, if that makes a difference.
[13:20:38 CEST] <J_Darnley> I cannot see where the transform interleaves, or treats as interleaved, the coeffs
[13:21:21 CEST] <atomnuker> I think the transforms did that themselves as one step
[13:21:28 CEST] <atomnuker> so it's not a separate step
[13:32:53 CEST] <J_Darnley> Okay. I did find the interleave templated function. (NOTE: I really need to run that through cpp)
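A generic sketch of that kind of quadrant-to-interleaved reshuffle (not diracdec's actual templated function, just the idea): the LL/HL/LH/HH quadrants of one decomposition level get woven into the in-place layout a lifting IDWT expects, with low-band coefficients on even rows/columns and high-band on odd ones:

    #include <stdint.h>
    #include <stddef.h>

    /* Illustrative only: src holds the four quadrants of a w x h level
     * (LL top-left, HL top-right, LH bottom-left, HH bottom-right);
     * dst receives the interleaved layout. stride is in coefficients. */
    static void interleave_level(int32_t *dst, const int32_t *src,
                                 int w, int h, ptrdiff_t stride)
    {
        int hw = w / 2, hh = h / 2;
        for (int y = 0; y < hh; y++)
            for (int x = 0; x < hw; x++) {
                dst[(2*y    ) * stride + 2*x    ] = src[ y       * stride + x     ]; /* LL */
                dst[(2*y    ) * stride + 2*x + 1] = src[ y       * stride + hw + x]; /* HL */
                dst[(2*y + 1) * stride + 2*x    ] = src[(hh + y) * stride + x     ]; /* LH */
                dst[(2*y + 1) * stride + 2*x + 1] = src[(hh + y) * stride + hw + x]; /* HH */
            }
    }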
[13:34:30 CEST] <J_Darnley> Now the idwt_plane function does the idwt over a series of 16 lines
[13:35:16 CEST] <J_Darnley> oh ignore that
[13:36:05 CEST] <J_Darnley> the coefs can't be laid out so that you could do an idwt of 16 lines because only haar can be done as a block transform.
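The reason Haar is the only kernel that works as a pure block transform is that its synthesis lifting steps only ever touch one (low, high) pair; a generic integer Haar inverse (not necessarily bit-exact with Dirac's definition) makes that locality obvious:

    /* Generic integer Haar synthesis for one coefficient pair: each output
     * pair depends only on its own (lo, hi) pair, so any block boundary is
     * also a transform boundary. Longer kernels (LeGall 5/3 etc.) need
     * neighbouring coefficients and therefore can't be sliced like this. */
    static inline void haar_synth_pair(int lo, int hi, int *even, int *odd)
    {
        *even = lo - ((hi + 1) >> 1);   /* undo the update step  */
        *odd  = *even + hi;             /* undo the predict step */
    }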
[13:38:02 CEST] <J_Darnley> I do see why you said to look at the encoder first. It is so much simpler.
[13:41:47 CEST] <atomnuker> its not
[13:49:11 CEST] <J_Darnley> Huh? The encoder is much simpler. Only 1 32-bit data type, only 1 dsp struct, no templating.
[13:52:41 CEST] <atomnuker> oh, sorry, I read "I do not see"
[13:52:58 CEST] <J_Darnley> Ah
[13:55:34 CEST] <J_Darnley> :( I forgot that make doesn't clean a file after cpp like it does for nasm preprocessing
[14:35:28 CEST] <iive> J_Darnley: what are you working on?
[14:38:27 CEST] <J_Darnley> Trying to make the dirac decoder able to decode anything less than a whole picture.
[14:39:01 CEST] <J_Darnley> ideally: make it able to decode a slice independently of any other
[14:39:32 CEST] <J_Darnley> but that extreme is only possible with the haar transform/kernel
[16:26:05 CEST] <J_Darnley> atomnuker: is init_planes() in diracdec.c called every frame or just when frame properties change?
[16:26:30 CEST] <J_Darnley> It is called from dirac_decode_picture_header
[16:27:11 CEST] <J_Darnley> which itself is called Sometimes(tm)
[16:28:33 CEST] <atomnuker> dirac packets can have multiple subtypes of packets in them
[16:28:59 CEST] <atomnuker> but there must always be a picture header, some other header
[16:29:22 CEST] <atomnuker> and they're always wrapped in those start of sequence headers
[16:29:41 CEST] <atomnuker> so it's not called sometimes, it's called on every frame
[16:31:25 CEST] <J_Darnley> thank you
[17:07:06 CEST] <J_Darnley> Yay! Something has come out which looks a little like what it should.
[17:08:12 CEST] <ZeroWalker> how do you use avdevice_capabilities_create, i am trying to use it on a dshow device format context
[17:08:51 CEST] <J_Darnley> On second thoughts, that is *very* like what it should be.
[17:11:38 CEST] <J_Darnley> I am clearly missing the last row of slices
[17:12:01 CEST] <J_Darnley> so my if condition must be wrong
[17:12:34 CEST] <J_Darnley> Ah! Should be less than or equal to.
[17:14:50 CEST] <J_Darnley> Oh wow. I don't believe it.
[17:14:59 CEST] <J_Darnley> That fixed all my problems?
[17:43:08 CEST] <J_Darnley> oh... not quite all my problems
[17:43:35 CEST] <J_Darnley> 420 doesn't work completely
[17:47:27 CEST] <kierank> J_Darnley: don't waste too much time on that if you can't get it to work
[17:57:38 CEST] <J_Darnley> Oh, I think it is just the chroma from the last row of slices not being transformed
[17:58:17 CEST] <J_Darnley> hd720 only has 360 chroma lines which is not mod16
[17:58:52 CEST] <J_Darnley> That means I need to test hd1080
[18:00:51 CEST] <J_Darnley> Hm. Needs more work. 1080 has wrong lines throughout
[18:17:57 CEST] <Compn> anyone test 8k workflow in ffmpeg ?
[20:05:47 CEST] <BBB> Compn: you mean other than -s 7680x4320?
[20:05:56 CEST] <BBB> Compn: or are you talking about how slow is it"?
[20:55:09 CEST] <Compn> BBB : real world samples, testing if all filters and containers handle it, etc
[20:57:34 CEST] <BBB> hm… nope; only on 4K from me so far
[20:57:39 CEST] <BBB> sorry :/
[20:58:33 CEST] <Compn> is ok
[20:58:38 CEST] <Compn> i've heard youtube trial 8k
[20:58:41 CEST] <Compn> is why i ask :)
[20:59:13 CEST] <Compn> http://neumannfilms.net/?product=ghost-towns-8k
[21:05:54 CEST] <thardin> is relying almost entirely on compute_pkt_fields() kosher?
[21:06:16 CEST] <thardin> I just compute pkt->duration and make use of ff_pcm_read_seek(), seems to work just fine
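For raw PCM that approach works because packet duration in samples is just bytes over frame size; a rough sketch of what the demuxer's read_packet side would set (assuming the stream time base is 1/sample_rate and using 2017-era codecpar fields; the function name is made up):

    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>

    /* Sketch only: give compute_pkt_fields() a duration to work with. */
    static void set_pcm_pkt_duration(AVStream *st, AVPacket *pkt)
    {
        int bytes_per_frame = st->codecpar->channels *
                              (av_get_bits_per_sample(st->codecpar->codec_id) >> 3);
        if (bytes_per_frame > 0)
            pkt->duration = pkt->size / bytes_per_frame;
    }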
[21:15:27 CEST] <ZeroWalker> when you get data from a dshow device, shouldn't it be event based, or are you supposed to poll av_read_frame?
[21:16:24 CEST] <BtbN> the libav* APIs are synchronous
[21:17:50 CEST] <ZeroWalker> so av_read_frame will block till data arrives?
[21:18:07 CEST] <BtbN> if you don't use nonblocking mode, yes.
[21:18:20 CEST] <ZeroWalker> nice:)
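A minimal capture-loop sketch along those lines (the format context is assumed to have been opened on the dshow input elsewhere; with AVFMT_FLAG_NONBLOCK set you would also see AVERROR(EAGAIN)):

    #include <libavformat/avformat.h>

    /* Sketch: pull packets off an already-opened AVFormatContext. In
     * blocking mode av_read_frame() simply waits for the next packet. */
    static void capture_loop(AVFormatContext *fmt_ctx)
    {
        AVPacket *pkt = av_packet_alloc();
        for (;;) {
            int ret = av_read_frame(fmt_ctx, pkt);
            if (ret == AVERROR(EAGAIN))
                continue;                    /* nonblocking mode: no data yet */
            if (ret < 0)
                break;                       /* EOF or a real error           */
            /* ... hand pkt to the encoder / muxer ... */
            av_packet_unref(pkt);
        }
        av_packet_free(&pkt);
    }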
[21:53:12 CEST] <cone-058> ffmpeg 03Aleksandr Slobodeniuk 07master:0aa8fa963f79: avformat/riff.h : remove unused function parameter "const AVCodecTag *tags" of "void ff_put_bmp_header()"
[21:53:13 CEST] <cone-058> ffmpeg 03Aleksandr Slobodeniuk 07master:50aeb6e4edf6: avformat/riff: remove useless tag correlation 'mpg2'->MPEG1VIDEO.
[23:18:51 CEST] <ZeroWalker> hmm, how do i know if something is RGB or BGR (except looking at the end result)
[23:19:23 CEST] <ZeroWalker> cause for example, in directshow a device shows RGB24, but in ffmpeg it shows as BGR
[23:20:15 CEST] <ZeroWalker> though then again, it's flipped in ffmpeg, and it has some extradata thing that says "bottom up" i think
[23:23:19 CEST] <Compn> ZeroWalker : depends on your source
[23:32:45 CEST] <ZeroWalker> hmm, well the reason i wonder is because, as far as i can tell, you can't list the dshow devices programmatically, so i have to do it the normal way and then give ffmpeg the information afterwards
[23:33:46 CEST] <ZeroWalker> and it's kinda hard if the source says RGB24 but it's actually BGR for ffmpeg, so unless there's a way to know it, it's a guessing game:(
[23:33:57 CEST] <Compn> yes in mplayer its interesting
[23:34:07 CEST] <Compn> since we have binary codecs, so we have to add "flip" code to them sometimes
[23:34:14 CEST] <Compn> to get them working in vfw/dshow
[23:34:20 CEST] <Compn> (and display right side up)
[23:34:30 CEST] <Compn> thats all enumerated with guid in dshow/windows
[23:34:50 CEST] <Compn> so you're trying to add a new dshow device ?
[23:35:01 CEST] <Compn> or are you trying to figure out if a webcam will display correctly ?
[23:35:10 CEST] <Compn> because i can offer solutions and ideas
[23:35:22 CEST] <Compn> depending on what you are doing haha
[23:38:06 CEST] <ZeroWalker> well i simply want to first list all dshow devices and their capabilities (formats etc) in ffmpeg (if it's possible programmatically), cause that would solve that problem
[23:39:07 CEST] <ZeroWalker> If that's not possible and i have to get all that information via Windows itself, then i need to somehow be able to interpret the media type information, cause i guess somehow you can tell if it's flipped, BGR and whatnot. Even if Windows just says "RGB24"
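One heuristic, assuming the usual DirectShow conventions (this is an assumption about dshow media types, not something libavdevice exposes): MEDIASUBTYPE_RGB24 buffers are B,G,R in memory, and a positive biHeight in the BITMAPINFOHEADER means the image is stored bottom-up:

    #include <windows.h>
    #include <libavutil/pixfmt.h>

    /* Hypothetical mapping helper: dshow "RGB24" corresponds to
     * AV_PIX_FMT_BGR24 on the ffmpeg side; uncompressed DIBs with a
     * positive biHeight are bottom-up and need a vertical flip. */
    static enum AVPixelFormat map_dshow_rgb24(const BITMAPINFOHEADER *bih,
                                              int *bottom_up)
    {
        *bottom_up = bih->biHeight > 0;
        return AV_PIX_FMT_BGR24;
    }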
[23:44:23 CEST] <J_Darnley> It must be possible to list dshow devices programmatically, ffmpeg.exe does it.
[23:44:43 CEST] <J_Darnley> I have no idea how you would do that with the ffmpeg API though.
[23:45:30 CEST] <ZeroWalker> https://trac.ffmpeg.org/wiki/DirectShow
[23:45:43 CEST] <ZeroWalker> FFmpeg does not provide a native way to do this yet, but you can lookup the devices yourself or just parse standard out from FFmpeg
[23:45:52 CEST] <ZeroWalker> it's really weird:(
[23:46:09 CEST] <ZeroWalker> perhaps that information is old though, what do i know
[23:46:49 CEST] <ZeroWalker> i mean, there are functions that seem to do what i want, but i can't get them to work and basically no examples or information can be found for them either. Or i am just bad at googling xd
[23:47:44 CEST] <J_Darnley> Yeah, that is what I was thinking of.
[23:48:01 CEST] <J_Darnley> I know nothing of using the API though.
[23:49:11 CEST] <J_Darnley> The command line just looks to me like API use: opening a "file", the library printing stuff, then returning.
[23:51:15 CEST] <ZeroWalker> yeah, it basically looks like a hacky way
[23:51:33 CEST] <ZeroWalker> in the API i would assume it to be something like avdevice_list_input_sources this one or something
[23:51:55 CEST] <ZeroWalker> but, can't get any of these to work, so i am doing something wrong, or they aren't what i think they are
[23:52:46 CEST] <nevcairiel> avdevice is just a hack around the existing demuxer api, it doesn't expose any magic function. if you're lucky the devices support enumeration to the log, but as an API user that doesn't help you, so basically you're stuck there
[23:53:53 CEST] <ZeroWalker> hmm
[23:53:57 CEST] <nevcairiel> dshow in any case has the log device list thing, which may be useful for the CLI, but nothing else
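That log-only enumeration looks roughly like this from the API side (a sketch of the commonly used workaround, mirroring the documented CLI "-list_devices true -f dshow -i dummy", with 2017-era signatures; the open call is expected to fail and the device names only show up in the av_log output):

    #include <libavdevice/avdevice.h>
    #include <libavformat/avformat.h>

    /* Sketch only: ask the dshow input to dump its device list to the log. */
    static void log_dshow_devices(void)
    {
        avdevice_register_all();
        AVInputFormat   *dshow = av_find_input_format("dshow");
        AVDictionary    *opts  = NULL;
        AVFormatContext *ctx   = NULL;
        av_dict_set(&opts, "list_devices", "true", 0);
        avformat_open_input(&ctx, "dummy", dshow, &opts);
        avformat_close_input(&ctx);   /* no-op if the open failed */
        av_dict_free(&opts);
    }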
[23:54:34 CEST] <ZeroWalker> how does it actually get the dshow input etc in the first place?
[23:55:13 CEST] <ZeroWalker> cause it's Really messy to get directshow data cause of the filters, you have to use samplegrabber etc (as far as i know), and using ffmpeg (as i am encoding it anyhow) is much cleaner in that regard
[00:00:00 CEST] --- Thu Aug 3 2017