[Ffmpeg-devel-irc] ffmpeg-devel.log.20160627

burek burek021 at gmail.com
Tue Jun 28 02:05:03 CEST 2016

[02:18:32 CEST] <cone-970> ffmpeg 03Marton Balint 07master:b18d6c58000b: avdevice/decklink: fix mingw portability
[02:24:57 CEST] <cone-970> ffmpeg 03Marton Balint 07n3.1:HEAD: avdevice/decklink: fix mingw portability
[05:24:20 CEST] <cone-970> ffmpeg 03Marton Balint 07n3.2-dev:HEAD: avdevice/decklink: fix mingw portability
[10:31:29 CEST] <saste> hey, anybody interested in reviewing my "lavf: add textdata virtual muxer and demuxer" patch?
[10:38:51 CEST] <mateo`> nevcairiel: are you working on the next libav merge 1e9c5bf ?
[10:39:23 CEST] <nevcairiel> not right now
[10:39:47 CEST] <mateo`> can I pick it ?
[10:40:00 CEST] <nevcairiel> sure
[10:40:26 CEST] <nevcairiel> i wont have time until tonight anyway, and maybe not even then
[10:45:53 CEST] <cone-802> ffmpeg 03Paul B Mahol 07master:d693392886b8: avformat/mov: parse rtmd track timecode
[11:41:39 CEST] <mateo`> I'm having so much fun ... never used the space key so much.
[11:42:03 CEST] <ubitux> mateo` the new space master
[12:30:00 CEST] <mateo`> ubitux aka the const master ...
[12:30:11 CEST] <ubitux> ;)
[17:25:28 CEST] <cone-802> ffmpeg 03Diego Biurrun 07master:1e9c5bf4c136: asm: FF_-prefix internal macros used in inline assembly
[17:25:29 CEST] <cone-802> ffmpeg 03Matthieu Bouron 07master:39d6d3618d48: Merge commit '1e9c5bf4c136fe9e010cc8a7e7270bba0d1bf45e'
[17:25:30 CEST] <cone-802> ffmpeg 03Matthieu Bouron 07master:9eb3da2f9942: asm: FF_-prefix internal macros used in inline assembly
[17:39:56 CEST] <cone-802> ffmpeg 03Diego Biurrun 07master:5b1b495c8d21: build: Print a message when generating version scripts
[17:39:57 CEST] <cone-802> ffmpeg 03Matthieu Bouron 07master:0fd76d77d60a: Merge commit '5b1b495c8d21600eac694d50f428654a3125e217'
[17:41:44 CEST] <fritsch> since it is a deprecation question and I am currently porting to 3.1, I have a non-user question (at least I hope so)
[17:41:49 CEST] <fritsch> how to access: codec_ctx = stream->codec;
[17:41:55 CEST] <fritsch> without running into a deprecation warning
[17:42:05 CEST] <fritsch> _all_ examples for 3.1 use it like that
[17:42:23 CEST] <fritsch> therefore I wonder what the right way to do it is nowadays
[17:42:31 CEST] <fritsch> http://ffmpeg.org/doxygen/trunk/transcoding_8c-example.html#a12
[17:46:04 CEST] <cone-802> ffmpeg 03Rick Kern 07master:d9561718135a: Changelog: Add VideoToolbox encoder entry for 3.1
[17:47:07 CEST] <cone-802> ffmpeg 03Rick Kern 07release/3.1:36fcb8cc559a: Changelog: Add VideoToolbox encoder entry for 3.1
[18:01:45 CEST] <andrey_turkin_> examples really should be updated to the latest API
[18:03:22 CEST] <cone-802> ffmpeg 03Diego Biurrun 07master:535a742c2695: build: Change structure of the linker version script templates
[18:03:23 CEST] <cone-802> ffmpeg 03Matthieu Bouron 07master:0acc170aad99: Merge commit '535a742c2695a9e0c586b50d7fa76e318232ff24'
[18:04:21 CEST] <cone-802> ffmpeg 03Clément Bœsch 07master:c5566f0a944e: lavc/pnm_parser: disable parsing for text based PNMs
[18:11:48 CEST] <BBB> does anyone know if I can use the concat filter in some way to concat the first N frames of a set of video files together?
[18:12:22 CEST] <BBB> like, Im wondering if I can do something along the lines of ffmpeg -vframes N -i fileA -vframes N -i fileB [..] -filter_complex concat[..] [output]
[18:29:04 CEST] <tomonori> hello, can someone tell me how to make ffmpeg process frames in parallel? currently, I have written a filter that can remove watermarks dynamically, but it's too slow; it processes frames one by one. Is there any way to make it process frames in parallel?  thanks
[18:31:58 CEST] <BtbN> no
[18:32:37 CEST] <BtbN> filters in ffmpeg are only threaded per-frame, with each thread processing a portion of the same frame.
[18:38:22 CEST] <tomonori> if so, can you tell me where in the source code ffmpeg invokes the filter functions?  I want to create some threads for preprocessing; I think this could save some time in the filters
[18:46:37 CEST] <iive> BBB: should be possible, the filter just needs each file to start at 0 timestamp
[18:47:52 CEST] <BtbN> you can't just filter multiple frames in parallel that way. Filters can depend on processing frames in order.
[18:50:36 CEST] <iive> he can buffer frames and unload them once they're done, can't he?
[18:52:19 CEST] <iive> tomonori: maybe you should try some SIMD to speed up things first. byte access is the slowest thing there is.
[18:53:52 CEST] <tomonori> my filter is very slow; ffmpeg can only process 5 frames per second. the reason is that I call a 3rd-party library to analyze the frame, and this analysis takes about 200ms, so I want to use multiple threads to analyze each frame.
[18:57:06 CEST] <iive> opencv also supports threading and simd
[18:57:16 CEST] <ubitux> BBB: yes; [0]trim=...[f0trimed]; [1]trim=...[f1trimed]; [f0trimed][f1trimed] concat=... ?
[18:58:54 CEST] <BBB> thats what Im doing now, it seems kind of convoluted but I guess its ok
[18:59:04 CEST] <ubitux> tomonori: BtbN: lavfi has slice threading though
[18:59:07 CEST] <BBB> (Im using select=lt(n,..) instead of trim=..)
[18:59:13 CEST] <BBB> thats the same thing right?
[18:59:24 CEST] <ubitux> mmh with the select filter you might actually decode the whole file
[18:59:39 CEST] <BBB> oh right
[18:59:39 CEST] <BBB> ok
[18:59:43 CEST] <BBB> let me change that
[18:59:43 CEST] <BtbN> ubitux, isn't that what I said? Or how does it work?
[18:59:44 CEST] <BBB> tnx
[19:00:03 CEST] <ubitux> BtbN: right, i misread
[19:01:47 CEST] <ubitux> BBB: you can have the filtergraph in a file, with line breaks etc
[19:01:59 CEST] <ubitux> if it's getting too convoluted
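[Editor's note] A minimal sketch of the trim+concat approach ubitux describes above. File names, the frame count (100), and label names are placeholders; setpts resets each trimmed segment to a 0 timestamp, which is what iive pointed out concat needs.

```shell
# Take the first 100 frames of each input and concatenate them.
ffmpeg -i fileA.mp4 -i fileB.mp4 -filter_complex \
  "[0:v]trim=end_frame=100,setpts=PTS-STARTPTS[a]; \
   [1:v]trim=end_frame=100,setpts=PTS-STARTPTS[b]; \
   [a][b]concat=n=2:v=1:a=0[out]" \
  -map "[out]" out.mp4

# If the graph grows unwieldy, it can live in a file with line breaks:
#   ffmpeg -i fileA.mp4 -i fileB.mp4 -filter_complex_script graph.txt ...
```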
[19:12:09 CEST] <tomonori> how can I get the AVFormatContext from filter?
[19:13:51 CEST] <nevcairiel> you cannot
[19:15:38 CEST] <tomonori> I need to use av_seek_frame() to preprocess every frame using multiple threads
[19:19:44 CEST] <ubitux> you won't get frame threading in your filter unless you add frame threading support to libavfilter
[19:40:47 CEST] <cone-802> ffmpeg 03Diego Biurrun 07master:c5fd4b50610f: build: Simplify postprocessing of linker version script files
[19:40:48 CEST] <cone-802> ffmpeg 03Clément Bœsch 07master:da7c918e80ed: Merge commit 'c5fd4b50610f62cbb3baa4f4108139363128dea1'
[19:48:43 CEST] <cone-802> ffmpeg 03Diego Biurrun 07master:b2d5d6a7f20a: build: Only enable symbol reduction if the compiler does proper DCE
[19:48:44 CEST] <cone-802> ffmpeg 03Clément Bœsch 07master:85a52a77ce87: Merge commit 'b2d5d6a7f20a255a5f3c9bf539cc507afd909ce5'
[19:55:07 CEST] <cone-802> ffmpeg 03Denis Charmet 07master:38f99017e69b: vp9: Return the correct size when decoding a superframe
[19:55:08 CEST] <cone-802> ffmpeg 03Clément Bœsch 07master:e6d0acd4383d: Merge commit '38f99017e69bd25e88be87117237c29727c25635'
[20:01:15 CEST] <pfelt> afternoon all.  i'm looking at the decklink stuff and trying to clean up a little more of the code and am having an issue finding where some deprecated functionality went.   the decklink module is trying to read AVCodecContext::coded_frame in order to set the frame interlacing.  docs say to use quality factor side channel, but interlacing is in the AVFrame.  is there an easy way to track down how to go from a AVStream to an AVFrame?
[20:11:39 CEST] <cone-802> ffmpeg 03Vittorio Giovara 07master:20a8c78ce0a5: avconv: Do not copy extradata if source buffer is empty
[20:11:40 CEST] <cone-802> ffmpeg 03Clément Bœsch 07master:8b4d6cc809c2: Merge commit '20a8c78ce0a5baf37f6a94e2d1e57e186b6f4b54'
[21:02:26 CEST] <BBB> ubitux: thanks, it looks like its working now
[21:03:11 CEST] <BBB> pfelt: coded_frame was internal stuff so its basically gone
[21:03:23 CEST] <BBB> pfelt: anything that was exposed in coded_frame should probably not be exposed at all
[21:17:33 CEST] <pfelt> BBB: makes sense, but for some reason libavdevice/decklink thought it needed to set interlaced and the field ordering on the frame.  is that just not needed at all anymore?
[21:18:08 CEST] <BBB> I couldnt possibly imagine why you would set that on coded_frame
[21:18:18 CEST] <BBB> in a decoder or demuxer, you set it on AVFrame
[21:18:27 CEST] <BBB> in an encoder, the input sets it so the output doesnt need to do anything
[21:19:09 CEST] <omerjerk> Hey, How do I write testing code for my patch? Michael mentioned about some "fuzzing" yesterday.
[21:19:19 CEST] <BBB> omerjerk: google zzuf
[21:19:35 CEST] <pfelt> bbb: ok.  i'll remove this whole block then
[21:19:55 CEST] <BBB> pfelt: what is the output from declink?
[21:19:58 CEST] <BBB> is it an AVFrame?
[21:20:00 CEST] <BBB> or something else?
[21:21:33 CEST] <BBB> its AVPacket
[21:21:55 CEST] <pfelt> the code will pull the data from the DL api and drop it into a AVPacket
[21:22:14 CEST] <pfelt> (i'm cleaning up ff_decklink_read_packet())
[21:22:45 CEST] <BBB> it seems to me that for raw frame data, it should reference a AVFrame that is wrapped in a AVPacket
[21:22:55 CEST] <BBB> if that makes any sense
[21:23:02 CEST] <BBB> and then the avframe would contain the relevant fields
[21:23:11 CEST] <BBB> anything else seems like a total hack to me
[21:23:39 CEST] <pfelt> so, you're thinking i should do something along these lines (pseudo code here)  AVPacket::AVFrame::interlaced_frame
[21:24:31 CEST] <pfelt> ah, but i don't see a good way to go from AVPacket to AVFrame either
[21:25:08 CEST] <andrey_turkin_> there is some codec involved, yes? s302m or something?
[21:25:41 CEST] <pfelt> andrey_turkin: that for us?
[21:25:49 CEST] <andrey_turkin_> yes
[21:26:11 CEST] <pfelt> DL pulls in rawvideo.  i'm not sure of any codec other than that
[21:26:52 CEST] <andrey_turkin_> ok, anyway, there is always a codec. So the interlaced state could be communicated through codecpar
[21:27:02 CEST] <nevcairiel> we have a wrapped_avframe pseudo codec for raw devices like that, which encapsulates an AVFrame in AVPacket
[21:27:50 CEST] <pfelt> ugh.  so do we need to look at moving the whole DL code over to this?
[21:27:52 CEST] <andrey_turkin_> for decklink it seems either v210 or rawvideo (depending on bitness)
[21:28:41 CEST] <andrey_turkin_> width/height is already passed through for rawvideo this way
[21:30:55 CEST] <andrey_turkin_> it seems to me that this is the cleanest way to do it. wrapped_avframe is not used here anyway
[21:31:11 CEST] <pfelt> i'm not following you andrey_turkin
[21:33:23 CEST] <pfelt> i don't actually see where it sets height
[21:33:23 CEST] <andrey_turkin_> ok, so decklink device is actually a demuxer, right? It will setup avformat context for itself, it will output packets which have to be decoded. And it will claim they are to be decoded using say v210. And also that encoded frames are codecpar->width and codecpar->height
[21:34:33 CEST] <andrey_turkin_> here: https://github.com/FFmpeg/FFmpeg/blob/master/libavdevice/decklink_dec.cpp#L568
[21:34:37 CEST] <pfelt> ok.  yeah.  we set that all up in read_header()
[21:34:46 CEST] <pfelt> yep
[21:35:13 CEST] <pfelt> but it doesn't set AVFrame::width
[21:35:17 CEST] <andrey_turkin_> neither v210 nor rawvideo ever see avframes going in from decklink, they operate solely on data in avpacket based on metadata passed through to them
[21:35:48 CEST] <pfelt> ok.  i follow that
[21:36:09 CEST] <andrey_turkin_> so now in decklink there is a "horrid hack" where it snoops into decoder and sets some fields
[21:36:17 CEST] <pfelt> but at some point in the past, and i've no idea if it's still needed, the original writer thought they needed to set AVFrame::interlaced_frame
[21:36:27 CEST] <pfelt> (that's the hack right)
[21:36:28 CEST] <pfelt> ?
[21:37:09 CEST] <andrey_turkin_> that's a hack if I ever saw one
[21:37:27 CEST] <BBB> pfelt: if interlacing is in coded_params, thats obviously fine also, yes, but I dont know exactly wht goes where
[21:37:40 CEST] <andrey_turkin_> I assume there wasn't a way to communicate field ordering, or the writer just didn't find one
[21:37:41 CEST] <BBB> pfelt: ask nevcariel or one of the libav guys that did the actual coded_params (e.g. anton)
[21:39:21 CEST] <pfelt> BBB: you mean codecpar ?
[21:39:28 CEST] <BBB> yes
[21:39:28 CEST] <BBB> sorry
[21:39:35 CEST] <pfelt> ah.  kk.  had me worried there :D
[21:39:40 CEST] <andrey_turkin_> there is field_order now. 
[21:40:24 CEST] <pfelt> yeah.  i saw that and thought maybe setting that
[21:40:49 CEST] <andrey_turkin_> so you can set proper field order there and also fix codecs to use it
[21:40:50 CEST] <pfelt> there is no bool for interlaced_frame though, so i guess if it's set to TT it's assumed interlaced?
[21:40:55 CEST] <andrey_turkin_> TT or BB
[21:41:30 CEST] <andrey_turkin_> interlaced_frame=0 => PROGRESSIVE; interlaced_frame=1 & top_field_first=1 => TT; interlaced_frame=1 & top_field_first=0 => BB
[21:41:53 CEST] <BBB> pfelt: sorry for being unclear, Im a little woolie-headed today
[21:41:59 CEST] <BBB> wooly-headed?
[21:42:01 CEST] <BBB> brb
[21:44:04 CEST] <andrey_turkin_> ah, and both codecs already know how to use this field. That should be it then
[21:46:47 CEST] <pfelt> ok.  so when i set bmd_field_dominance i should be able to find and set the codecpar for interlaced to the right value
[21:46:52 CEST] <pfelt> and then not do it per frame
[21:46:56 CEST] <andrey_turkin_> right
[21:47:55 CEST] <pfelt> or i guess better would be when it sets up the stream
[21:47:57 CEST] <pfelt> but same idea
[21:48:24 CEST] <andrey_turkin_> that's what I meant, right. same place where it sets other metadata
[21:48:38 CEST] <fritsch> I am in the process of transitioning kodi to ffmpeg 3.1 and fixing all the deprecated warnings and wanted to know how to avoid using codec as AVCodecContext in that method: https://github.com/fritsch/xbmc/commit/4935c8ccb314d22636596f59e9b11d25c99f99f6#diff-6c48c7b2be31f1d5311113860b1a10ddR277
[21:49:23 CEST] <fritsch> creating a new context, allocating it, copying all the parameters from codecpar and afterwards destroying the ctx can't be what the new api wants us to do, right?
[21:49:29 CEST] <andrey_turkin_> fritsch: iirc you are supposed to loop receive_frame
[21:49:39 CEST] <fritsch> that's not my question
[21:49:54 CEST] <fritsch> it works as is - EAGAIN is handled separately
[21:50:02 CEST] <andrey_turkin_> I don't think it is what it wants you to do
[21:50:05 CEST] <fritsch> it's just about: m_fctx->streams[0]->codec
[21:50:42 CEST] <fritsch> accessing codec produces a deprecation warning and I wanted to know how this is meant to be done
[21:50:50 CEST] <fritsch> as all the ffmpeg examples use this ->codec
[21:51:20 CEST] <andrey_turkin_> right. So now at setup time you don't use streams->codec. You allocate your own context, you copy parameters from codecpar and use that context to decode video
[21:51:32 CEST] <andrey_turkin_> or maybe it is that you meant
[21:51:57 CEST] <fritsch> so that's really the way to go
[21:52:28 CEST] <andrey_turkin_> for me it made things more clean. Less dependencies between avformat and avcodec
[21:52:48 CEST] <fritsch> https://github.com/fritsch/xbmc/blob/4935c8ccb314d22636596f59e9b11d25c99f99f6/xbmc/guilib/FFmpegImage.cpp#L244
[21:53:00 CEST] <fritsch> for me, it does not - as the AVCodecContext is created not by me
[21:53:35 CEST] <pfelt> ok.  next question.  av_dup_packet() used to take just one parameter (a pkt); what exactly was that function supposed to do?
[21:54:27 CEST] <andrey_turkin_> I think it copied the data from its argument
[21:54:41 CEST] <fritsch> https://www.ffmpeg.org/doxygen/2.7/group__lavc__packet.html#ga04c83bc8a685960564a169f3a050b915 <- pfelt 
[21:55:45 CEST] <fritsch> i think the source is the "best" documentation for av_dup_packet https://www.ffmpeg.org/doxygen/2.7/avpacket_8c_source.html#l00248
[21:55:48 CEST] <andrey_turkin_> ah, right. It was to make "proper" avpacket from non-refcounted ones.
[21:56:02 CEST] <fritsch> andrey_turkin_: do I overlook something concerning my AVCodecContext?
[21:56:15 CEST] <andrey_turkin_> what do you mean?
[21:56:27 CEST] <fritsch> see: https://github.com/fritsch/xbmc/blob/4935c8ccb314d22636596f59e9b11d25c99f99f6/xbmc/guilib/FFmpegImage.cpp#L244
[21:56:52 CEST] <pfelt> fritsch: uh… yeah, but where did it copy to?
[21:57:02 CEST] <pfelt> that function takes a single argument and returns an int
[21:57:08 CEST] <andrey_turkin_> pfelt: into the same packet
[21:57:18 CEST] Action: pfelt is lost.  that doesn't seem useful
[21:58:28 CEST] <andrey_turkin_> well, say you have a packet with no reference (so the data is somewhere else, in a static buffer) and you want to preserve its data. You give it to av_dup_packet and it does the memory allocation and copies the data. So you are left with a packet with the same data, but now the memory for that data is owned by the packet
[21:58:57 CEST] <fritsch> that's a very good explanation for that dup_packet
[21:59:27 CEST] <andrey_turkin_> fritsch: that line is fine. using codec context from avformat is deprecated
[22:00:35 CEST] <fritsch> yeah, see the line after it
[22:00:39 CEST] <fritsch> how would you solve that?
[22:00:57 CEST] <fritsch> create an own AVCodecContext and copy all the codecpars from the stream?
[22:01:03 CEST] <andrey_turkin_> exactly that
[22:01:07 CEST] <fritsch> and then destroy this temp context?
[22:01:14 CEST] <fritsch> that seems a whole lot overhead
[22:01:21 CEST] <andrey_turkin_> no, don't touch format context at all
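[Editor's note] The setup andrey_turkin_ describes boils down to roughly the following sketch against the 3.1-era API. Error handling is omitted, `fmt_ctx` is assumed to be an already-opened AVFormatContext, and stream 0 is assumed to be the one being decoded; this is an illustration of the pattern, not a drop-in implementation (the timebase copy in particular should be checked against the headers for the exact FFmpeg version in use).

```c
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>

/* Replacement for the deprecated stream->codec access: allocate our own
 * decoder context and fill it from the stream's codecpar. */
static AVCodecContext *open_decoder_for_stream0(AVFormatContext *fmt_ctx)
{
    AVStream *st = fmt_ctx->streams[0];
    AVCodec *dec = avcodec_find_decoder(st->codecpar->codec_id);
    AVCodecContext *ctx = avcodec_alloc_context3(dec);

    /* copy the demuxer's parameters instead of sharing its context */
    avcodec_parameters_to_context(ctx, st->codecpar);

    /* the timebase is not part of codecpar and must be carried over by hand */
    av_codec_set_pkt_timebase(ctx, st->time_base);

    avcodec_open2(ctx, dec, NULL);
    return ctx;  /* keep this for decoding; free with avcodec_free_context() */
}
```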
[22:01:43 CEST] <pfelt> andrey_turkin: ok.  so the fix for av_dup_packet(&pkt) is av_packet_ref(&pkt, &pkt)
[22:02:20 CEST] <andrey_turkin_> better fix is to use referenced packets from the start
[22:02:36 CEST] <andrey_turkin_> packet_ref won't do the right thing in this case I think
[22:03:58 CEST] <andrey_turkin_> actually, is there any decoder which doesn't output referenced packets? 
[22:05:43 CEST] <pfelt> i'm sure that rewriting decklink to do all referenced packets would be a good place to end
[22:05:45 CEST] <andrey_turkin_> pfelt: I think av_dup_packet calls are not needed anymore
[22:08:49 CEST] <andrey_turkin_> actually in decklink something has to be fixed for that. Instead of init_packet and setting its data field, use av_packet_from_data
[22:09:10 CEST] <fritsch> andrey_turkin_: http://sprunge.us/dBQF <- this really is the truth?
[22:09:55 CEST] <fritsch> as it compiles, but does not work - I think not :-)
[22:10:07 CEST] <andrey_turkin_> I think so. and you need to preserve codec_ctx. 
[22:10:28 CEST] <andrey_turkin_> and also copy timebase from context iirc
[22:11:49 CEST] <andrey_turkin_> you can check libav examples. They are updated to use codec params
[22:12:05 CEST] <fritsch> I will - you got a link by chance?
[22:12:17 CEST] <fritsch> though - I find this overly complicated, resource hungry and so on
[22:12:19 CEST] <andrey_turkin_> https://github.com/libav/libav/blob/master/doc/examples/transcode_aac.c e.g.
[22:12:52 CEST] <andrey_turkin_> it is much better from architecture point of view. Much less linkage between muxer/demuxer and coder/decoder
[22:14:09 CEST] <fritsch> mmh, that looks exactly like the code I just wrote
[22:14:21 CEST] <fritsch> then I need to find out why it does not work :-)
[22:14:46 CEST] <andrey_turkin_> for me main issue when converting was using wrong timebases everywhere
[22:15:30 CEST] <andrey_turkin_> assumptions which were true became false so I had some fun time fixing timestamps throughout the transcoding chain
[22:15:51 CEST] <pfelt> andrey_turkin: i removed the call, recompiled, and am running it.  it seems to be outputting the data to sdi right, so you may be right
[22:16:52 CEST] <andrey_turkin_> you don't need to fix packets on encoder side (where they come from the users). Something has to be done on decoder side though
[22:31:38 CEST] <fritsch> andrey_turkin_: thx - it worked :-)
[22:31:55 CEST] <fritsch> it took me a bit to see my error
[22:33:05 CEST] <andrey_turkin_> great
[22:33:23 CEST] <fritsch> one needs to use this ctx to do the decoding later on
[22:33:29 CEST] <fritsch> obviously
[22:33:40 CEST] <andrey_turkin_> yeah
[00:00:00 CEST] --- Tue Jun 28 2016

More information about the Ffmpeg-devel-irc mailing list