[Ffmpeg-devel-irc] ffmpeg.log.20180510
burek
burek021 at gmail.com
Fri May 11 03:05:02 EEST 2018
[00:00:20 CEST] <Purebes> It's possible I could setup that build chain and fix it but I'm not really sure how I'd maintain it since I have 4-5 different flavors to support
[00:02:00 CEST] <tuxuser> I see, hmm.. sorry, no idea then
[00:03:04 CEST] <tuxuser> it's confusing tho, since ffmpeg detects #0.1 as audio
[00:03:22 CEST] <tuxuser> can you maybe post a -loglevel debug pastebin?
[00:03:30 CEST] <Satao> is there anything similar to the prft (presenter reference time box) from mp4 when outputting in mpegts?
[00:03:36 CEST] <tuxuser> with: -i video="Blackmagic WDM Capture"
[00:04:30 CEST] <JEEB> Satao: PCR is supposed to be the program clock reference
[00:04:57 CEST] <JEEB> do note that the standard MPEG-TS clock is a 90k time base with a 33-bit field
[00:05:04 CEST] <JEEB> so every ~26 hours there's a wrap-around
[00:05:26 CEST] <JEEB> not sure how big the PCR is, but the value is on a time base of 27 MHz
[00:05:33 CEST] <JEEB> so 300 times 90000
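As a quick sanity check of those numbers, a standalone C sketch (the 33-bit field at 90 kHz and the 27 MHz PCR rate come straight from the conversation above; this is plain arithmetic, not FFmpeg API):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* PTS/DTS and the PCR base are 33-bit counters at 90 kHz;
         * the full PCR adds a 9-bit extension at 27 MHz (300 * 90000). */
        uint64_t max_base = (1ULL << 33) - 1;
        double seconds = (double)max_base / 90000.0;
        printf("33-bit 90 kHz counter wraps after %.0f s (~%.1f h)\n",
               seconds, seconds / 3600.0);  /* ~95444 s, i.e. ~26.5 hours */
        return 0;
    }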
[00:07:13 CEST] <Purebes> I can do that, https://pastebin.com/MVi9xX3R
[00:07:38 CEST] <Purebes> Here's another one where I tried setting the crossbar pin numbers https://pastebin.com/VR2u7K4Z
[00:10:50 CEST] <dragmore88> anyone got any tips on why i get silence in left and right channels with this syntax? -filter_complex "[0:a:6][0:a:7]amerge=inputs=2,pan=stereo[aout]" -map "[aout]" -codec:a aac -b:a 128k
[00:14:05 CEST] <Satao> thks for the info JEEB. I was reading about that and I'm failing to understand how to achieve what I need
[00:14:28 CEST] <Satao> to explain, I want to calculate the overall latency between encoder and presentation
[00:16:05 CEST] <Satao> using isobmff I add a prft box with encoder localtime to each segment and read it in the player
[00:16:25 CEST] <Satao> compare client time with encoder time
[00:17:00 CEST] <Satao> now... if I'm using mpegts between encoder and packager... I lose that info
[00:21:04 CEST] <Mista_D> tuxuser: https://pastebin.ca/4024153 it's the FFOUT var, I think arrows cause bash to add single quotes...
[00:31:51 CEST] <tuxuser> Mista_D: Use $(FFOUT) instead of $FFOUT
[00:33:21 CEST] <Mista_D> tuxuser: My hat's off to you sir. Thanks!
[00:33:40 CEST] <tuxuser> you're welcome
[01:06:42 CEST] <Purebes> Turns out everything I said earlier was wrong, apparently the computer's soundcard died a couple hours earlier... definitely did not expect that.
[04:29:13 CEST] <brandor5> hello everyone: I'm looking to build an rpm for ffmpeg for fedora 28 and I'd like to use --enable-libfdk-aac... From the docs I see that I need to use --enable-nonfree as well, but how to enable those in the spec file is over my head... can anyone help a noob out?
[04:32:33 CEST] <brandor5> hmm actually i just figured it out.... simply add `%global _with_nonfree 1` in the globals section at the top... :)
[11:59:29 CEST] <cryptopsy> can ffmpeg be built with -O3 -flto -pipe ?
[11:59:42 CEST] <cryptopsy> err, ffmpeg-4
[12:06:35 CEST] <furq> lto definitely
[12:07:10 CEST] <furq> --enable-lto
[12:07:46 CEST] <furq> --optflags=O3 but it's probably not guaranteed to work properly
[12:32:56 CEST] <cryptopsy> i think i got 3.4 to build with O3 lto but 4 doesn't seem to work even without O3
[12:52:10 CEST] <cryptopsy> is 4.0.0 stable on gentoo?
[13:19:47 CEST] <PureTryOut[m]> hey guys, ffmpeg segfaults for me when trying to use a file from sftp as input. turning on verbose logging doesn't show any interesting errors or warnings...
[17:12:28 CEST] <tuna> does the output of av_log print to console or a file?
[17:13:32 CEST] <c_14> normally console
[17:14:33 CEST] <c_14> If you're using the API you can set your own log callback with av_log_set_callback() though
[17:26:58 CEST] <tuna> Thanks
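For API users, a minimal sketch of the callback c_14 mentions, using av_log_set_callback() from libavutil (the log-file destination and the formatting in the callback body are illustrative assumptions):

    #include <stdio.h>
    #include <stdarg.h>
    #include <libavutil/log.h>

    static FILE *logfile;  /* hypothetical destination instead of the console */

    static void log_to_file(void *avcl, int level, const char *fmt, va_list vl)
    {
        if (level > av_log_get_level())
            return;  /* respect the configured log level */
        vfprintf(logfile, fmt, vl);
    }

    int main(void)
    {
        logfile = fopen("av.log", "w");
        av_log_set_callback(log_to_file);  /* all libav* logging now goes here */
        av_log(NULL, AV_LOG_INFO, "hello from av_log\n");
        return 0;
    }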
[17:30:21 CEST] <tuna> How often does the windows build of ffmpeg happen...if I requested some code changes to help me debug, how long till I would see these changes in the downloadable binaries?
[17:32:18 CEST] <JEEB> we don't build official binaries at all
[17:32:25 CEST] <JEEB> so I have no idea whom you requested help from
[17:32:56 CEST] <JEEB> I think the quickest way to test is to have a local linux system in either a VM or in the Windows Subsystem for Linux running a cross-compilation
[17:33:10 CEST] <JEEB> (or you can of course build natively, but welcome to "configure script takes 15 minutes")
[17:33:35 CEST] <tuna> Yea I just tried building with mingw and had an awful time...haha
[17:33:41 CEST] <tuna> couldn't get it to work
[17:35:08 CEST] <JEEB> no idea what you hit, but it shouldn't be too hard unless someone failed at packaging/installing something
[17:49:49 CEST] <friki_> Hi. I'm testing ffplay against an https url and it works even with an invalid certificate
[17:50:30 CEST] <JEEB> yes, many of the TLS things in FFmpeg seem to disable CA checking by default
[17:51:58 CEST] <friki_> IMAO, it should be fixed
[17:52:49 CEST] <cryptopsy> '<portage.util._dyn_libs.LinkageMapELF._ObjectKey object at 0x7f0c2853c328> (/usr/lib64/libavutil.so.55.78.100) not in object list'
[17:53:07 CEST] <JEEB> not sure if there's been discussions about that, but to be honest I'm not sure if invalid certificates are a major issue with FFmpeg's use cases
[17:53:24 CEST] <JEEB> there's an AVOption which you can set of course to force validation
[17:53:41 CEST] <JEEB> you can even propose a patch that sets that option to 1 by default
[17:53:53 CEST] <JEEB> as I said, I haven't ctrl+F'd the mailing list archives about this
[17:54:39 CEST] <friki_> it may become critical in some common situations like "hls push". Even downloading the wrong video may involve security problems
[17:54:51 CEST] <friki_> ok, i'll take a look around, thanks for your time, JEEB
[17:56:07 CEST] <JEEB> friki_: http://git.videolan.org/?p=ffmpeg.git;a=blob;f=libavformat/tls.h;h=beb19d6d5571634a6d35bf19ed4e303093d054a8;hb=HEAD#l45
[17:56:19 CEST] <JEEB> so -tls_verify is the common option
[17:56:38 CEST] <philipp64> what's an example of taking a separate audio and video stream and multiplexing them together programmatically?
[17:56:41 CEST] <JEEB> depending on the TLS library you may or may not need to specify a CA file
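In API terms, the option in that header can be passed per-open via an AVDictionary; a hedged sketch (the URL and CA-file path are placeholders, and as JEEB notes, whether ca_file is needed depends on the TLS backend):

    #include <libavformat/avformat.h>

    static int open_verified(AVFormatContext **ctx, const char *url)
    {
        AVDictionary *opts = NULL;
        int ret;

        av_dict_set(&opts, "tls_verify", "1", 0);  /* force certificate checking */
        av_dict_set(&opts, "ca_file",
                    "/etc/ssl/certs/ca-certificates.crt", 0);  /* backend-dependent */
        ret = avformat_open_input(ctx, url, NULL, &opts);
        av_dict_free(&opts);  /* entries left in opts were not consumed */
        return ret;
    }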
[17:57:58 CEST] <JEEB> philipp64: open avformat contexts for both, open up an output avformat context. add streams to it as needed, call the write headers function and then read both while feeding the AVPackets from streams you "picked" to the streams in the output avformat context
[17:58:51 CEST] <JEEB> that's the gist of it and expects that your timestamps on both inputs are somehow relevant. if they are off-sync, you will have to deal with that by adjusting timestamps so that the streams' timestamps are aligned so that they make sense
[17:59:19 CEST] <philipp64> i figured the multiplexor would read from the contributing streams... is there a snippet out there in an example of doing this? I figure it's something reasonably common...
[17:59:22 CEST] <JEEB> for example, if you just have 2 seconds extra of audio and you somehow know that, then you have to make sure that the video timestamps start with the 2 second mark
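As a concrete fragment of that shift (variable names are illustrative; av_rescale_q() converts the 2 seconds into the output stream's time base):

    /* delay one stream by +2 s so it lines up with the other input */
    int64_t off = av_rescale_q(2, (AVRational){1, 1}, out_st->time_base);
    pkt->pts += off;
    pkt->dts += off;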
[18:00:57 CEST] <JEEB> there are examples under docs/examples
[18:01:14 CEST] <JEEB> one of them has demuxing at least, and then there's a remuxing example as well?
[18:01:27 CEST] <JEEB> also googling `site:ffmpeg.org doxygen trunk KEYWORD`
[18:01:28 CEST] <JEEB> generally helps
[18:06:41 CEST] <tuna> I found something interesting in the code... the possible EINVAL return of av_pix_fmt_count_planes is not handled on line 753 of frame.c... this is not a bug to me, just something that seems off
[18:07:30 CEST] <tuna> But I guess that should never really happen, at least by the route I have taken... because the check that returns that error is done before this func is called... prob a non-issue
[18:54:12 CEST] <tuna> BtbN: I found the mysterious EINVAL I believe... seems it's on line 98 of imgutils... it's an if that checks:
[18:54:14 CEST] <tuna> if (!desc || desc->flags & AV_PIX_FMT_FLAG_HWACCEL)
[18:54:39 CEST] <tuna> inside of av_image_fill_linesizes
[18:55:00 CEST] <BtbN> why are you calling that?
[18:55:46 CEST] <tuna> damnit... nvm... seems it's not possible
[18:56:04 CEST] <tuna> because of !frame->linesize[0]
[18:56:08 CEST] <BtbN> btw., ffmpeg master can now give you the two nvenc rgb formats from the CUDA hwframes ctx
[18:56:11 CEST] <tuna> line 219 of frame.c
[18:56:43 CEST] <tuna> ah ok, thanks
[18:56:47 CEST] <pablo__> Hello, I want to add a 5.1 .flac audio track to a .ts file that already has three audio tracks. I tried with ffmpeg with unsuccessful results. Everything seems to work fine until the very last moment when I check the file and the .flac audio track is not included in the "output.ts". The .flac track is about 3GB and its length is around two and a half hours. Thank you!
[18:57:07 CEST] <JEEB> is FLAC in MPEG-TS even standardized?
[18:57:37 CEST] <pablo__> JEEB: I don't know, I also used ffmpeg to convert it from a .wav file
[18:58:34 CEST] <pablo__> I tried first with the .wav and decided to try it with flac, I thought size could be quite problematic
[18:59:04 CEST] <JEEB> well, I am not 100% sure what on earth the mpegts muxer is doing, but it might just be muxing unknown stuff as "data" tracks
[19:00:17 CEST] <JEEB> yes, seems like it would fit under "STREAM_TYPE_PRIVATE_DATA"
[19:00:19 CEST] <pablo__> I don't know. It works flawlessly with smaller files but not with bigger ones
[19:00:21 CEST] <JEEB> good luck reading that out
[19:00:44 CEST] <JEEB> at most ffmpeg.c could maybe figure out and probe the stream, but I have my doubts
[19:00:55 CEST] <JEEB> or well, libavformat I guess, since that's doing the probing :P
[19:01:09 CEST] <JEEB> what I'm trying to tell thee is that FLAC in MPEG-TS is not going to work
[19:01:26 CEST] <JEEB> if it has ever worked for you that has been a miracle
[19:06:10 CEST] <furq> 17:59:04 ( JEEB) well, I am not 100% sure what on earth the mpegts muxer is doing, but it might just be muxing unknown stuff as "data" tracks
[19:06:24 CEST] <furq> this is exactly what ffmpeg does so i guess lavf does the same
[19:07:04 CEST] <JEEB> ffmpeg.c just feeds it to lavf
[19:07:08 CEST] <JEEB> thus, it is lavf that does that
[19:07:08 CEST] <furq> it's quite happy to mux it and then ffprobe just gives unsupported codec
[19:07:21 CEST] <JEEB> since I don't see it having a format check
[19:07:31 CEST] <JEEB> just cases for various known-working things
[19:09:14 CEST] <^Neo> hello
[19:09:40 CEST] <^Neo> can I chain encodes together in a single FFmpeg command?
[19:09:47 CEST] <pablo__> is there any size limitation when adding a .wav track?
[19:10:10 CEST] <^Neo> so do something like input > scale > encode > decode > scale > encode > output
[19:12:49 CEST] <JEEB> pablo__: none other than in WAV itself :P
[19:13:01 CEST] <JEEB> WAV after all is a RIFF format and thus has its own size limitation
[19:13:13 CEST] <JEEB> some applications work around that by having multiples of something in the WAV file
[19:13:23 CEST] <JEEB> then there are extensions/improved WAV-like formats such as W64
[19:14:42 CEST] <pablo__> Oh okay, I'm gonna try w64 then
[19:14:45 CEST] <pablo__> thank you!
[19:15:37 CEST] <furq> ^Neo: no
[19:15:44 CEST] <furq> unless piping counts
[19:16:05 CEST] <^Neo> furq: OK, that's what I thought... thanks!
[19:17:09 CEST] <furq> is there a list of supported mpegts codecs somewhere
[19:17:23 CEST] <furq> google's not helping
[19:17:38 CEST] <JEEB> it's in multiple specs just to make it funnier for you
[19:17:42 CEST] <JEEB> the general ITU-T H.222 spec
[19:17:49 CEST] <JEEB> and then the DVB specs (thankfully freely available)
[19:17:56 CEST] <JEEB> and the ATSC/ARIB/china/whatever specs
[19:18:00 CEST] <furq> i mean i could just ask in here
[19:18:11 CEST] <furq> does it support any lossless/uncompressed audio codec that ffmpeg has an encoder for
[19:18:42 CEST] <JEEB> I would rather read the switch statement in the mpegtsenc.c file in that case
[19:18:48 CEST] <JEEB> in the write PMT function
[19:18:49 CEST] <furq> apparently pcm doesn't work so the best thing i can think of is MPEG-4 ALS and DTS-HD MA
[19:18:56 CEST] <ariyasu_> ac3/aac/mp2 = only broadcast audio i have ever seen
[19:18:58 CEST] <furq> and neither of those are in ffmpeg last i checked
[19:18:59 CEST] <JEEB> huh, PCM doesn't work?
[19:19:08 CEST] <JEEB> MPEG-TS most definitely has extensions for raw PCM
[19:19:09 CEST] <furq> doesn't look like it
[19:19:19 CEST] <JEEB> ok, then that's a limitation in the muxer
[19:19:40 CEST] <JEEB> I mean, blu-ray uses it just fine and I'm pretty sure it's specified somewhere :P
[19:20:03 CEST] <JEEB> mpegts_write_pmt
[19:20:18 CEST] <^Neo> s302m is dual channel pcm?
[19:20:19 CEST] <JEEB> this is the function that has the switch around st->codecpar->codec_id
[19:20:34 CEST] <JEEB> ^Neo: yea but not even going there (that's coded bit stream in PCM)
[19:20:49 CEST] <^Neo> fair
[19:20:49 CEST] <JEEB> there is actual PCM support in MPEG-TS
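For orientation, an illustrative (not verbatim) sketch of the kind of switch JEEB points at in mpegts_write_pmt: it maps st->codecpar->codec_id to an MPEG-TS stream_type, with unknown codecs falling through to private data as discussed above. The stream_type values shown are the standard H.222 ones:

    int stream_type;
    switch (st->codecpar->codec_id) {
    case AV_CODEC_ID_MP2:
    case AV_CODEC_ID_MP3:
        stream_type = 0x03;  /* MPEG-1 audio */
        break;
    case AV_CODEC_ID_AAC:
        stream_type = 0x0f;  /* ADTS AAC */
        break;
    case AV_CODEC_ID_H264:
        stream_type = 0x1b;  /* AVC video */
        break;
    default:
        stream_type = 0x06;  /* STREAM_TYPE_PRIVATE_DATA: unknown codecs land here */
        break;
    }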
[19:21:43 CEST] <furq> yeah i forgot about bluray audio
[19:22:22 CEST] <furq> i wonder if that's new in the bdav spec or whatever they call m2ts
[19:23:22 CEST] <JEEB> m2ts is just MPEG-TS with 192-byte packets
[19:23:26 CEST] <JEEB> nothing really special to be honest
[19:23:38 CEST] <JEEB> heck, they even used it in H.222 with DVDs, so it's *old*
[19:23:54 CEST] <JEEB> (DVDs were MPEG-PS though, not -TS)
[19:23:58 CEST] <furq> i was about to say
[19:24:53 CEST] <JEEB> but it's the same spec, just different type of stream
[19:25:06 CEST] <JEEB> most likely case is that nobody needed raw PCM support
[19:25:13 CEST] <JEEB> so nobody implemented it in mpegtsenc
[19:25:20 CEST] <furq> that would also make sense
[19:25:54 CEST] <JEEB> file an issue about it, see if someone cares enough (I've got enough things on my todo list that I'm now over two months late with various things more fun than coding open source)
[19:33:29 CEST] <pablo__> Yay, it works flawlessly with .w64! Thanks again :)
[20:18:57 CEST] <SpeakerToMeat> what do you guys consider is the best quality scaling algo for upscaling? Lanczos?
[20:21:01 CEST] <philipp64> JEEB: sorry for being a bit thick-headed... I'm actually a kernel hacker and don't play with ffmpeg (or libavcodec) much... can I then read from the output stream? I'd figure that taking 2 input streams and multiplexing them together would result into another input stream... or maybe I'm just thinking too much in the GStreamer paradigm of things... I looked in doc/examples and couldn't...
[20:21:03 CEST] <philipp64> ...find an example like you talked about, but maybe I missed it or you're talking of some other examples...
[20:21:46 CEST] <JEEB> philipp64: I don't think you want to be reading too much into the output context's streams
[20:21:47 CEST] <philipp64> my scenario is I have two sockets open with video/webm and audio/webm and I want to create a single, synchronized stream that I can then read from.
[20:22:17 CEST] <JEEB> ok, is anything synchronizing the timestamps in those webm inputs?
[20:23:00 CEST] <philipp64> they should both be zero based... they'll have different ts's, but that shouldn't be a problem, should it?
[20:23:15 CEST] <JEEB> or is it just that you have to note that you received the first packet from input X at monotonic clock stamp Y, and for Z at Q - and hope those two mean what time they contain?
[20:23:23 CEST] <JEEB> and then adjust accordingly
[20:23:37 CEST] <JEEB> yes, timestamps are a problem if your input doesn't synchronize them
[20:23:52 CEST] <philipp64> in the trivial case, it's YT content and I'm using the adaptive content, pairing the matching video and audio so the duration, etc. should all match up...
[20:24:45 CEST] <JEEB> ok, if they match up then it's just a case of reading the packets from each input and pushing them to the matched up output streams
[20:24:47 CEST] <philipp64> oh, I was thinking that the multiplexer would synchronize the 2 sources...
[20:25:02 CEST] <JEEB> uhhh
[20:25:10 CEST] <SpeakerToMeat> if it had something to sync on....
[20:25:23 CEST] <JEEB> like, yes. it will synchronize the output on the timestamps it receives
[20:25:42 CEST] <JEEB> but what I was asking is if the timestamps actually mean something or not together
[20:26:27 CEST] <JEEB> now, if you are just ripping the crap out of youtube then yes, most likely that is not a problem
[20:27:24 CEST] <SpeakerToMeat> And if it's not a live stream, youtube-dl will probably take care of it (using ffmpeg) for you
[20:27:27 CEST] <JEEB> you just feed the demultiplexed AVPackets through and just make sure your timestamps are accordingly adjusted for input/output AVStream time_base differences (there's a function for that) - although with matroska that would be most likely a no-change change since most likely youtube is using the default time_base
[20:27:43 CEST] <JEEB> yes, youtube-dl can call ffmpeg.c to re-multiplex the separate audio and video
[20:28:10 CEST] <philipp64> let's say the video is 60fps and the audio is 44.1kHz... I guess the duration of the audio will need to be 1/60s (which it probably isn't) otherwise I'll need to apply some gating function to packets to apply a merge-sort.
[20:28:45 CEST] <JEEB> the streams are generally separate
[20:28:54 CEST] <JEEB> as long as the packets' timestamps are correct it's all good
[20:29:08 CEST] <kepstin> the timestamps on the youtube audio and video streams should align correctly, so just doing "ffmpeg -i video.webm -i audio.opus output.mkv" or whatever should do the right thing :/
[20:29:11 CEST] <JEEB> generally you receive audio as single compressed packets anyway, and you can't "cut" those any finer than that
[20:29:19 CEST] <SpeakerToMeat> at 60fps, 44.1kHz audio comes out at 735 samples per frame
[20:29:23 CEST] <JEEB> since it's compressed packets
[20:29:50 CEST] <JEEB> so don't really care about the FPS and the audio rate being separate
[20:29:52 CEST] <philipp64> sorry, I need to do this programmatically, not CLI...
[20:29:59 CEST] <JEEB> they are separate streams
[20:30:28 CEST] <philipp64> I actually don't ever bother to render the content... I'm performing broadband measurements (stalls, etc).
[20:30:31 CEST] <JEEB> philipp64: I haven't read too much into this, but there's this example http://git.videolan.org/?p=ffmpeg.git;a=blob;f=doc/examples/remuxing.c;h=9e4d1031b4a59761617b7de3c1dac5d97b5191cb;hb=HEAD
[20:30:57 CEST] <klaxa> i think i used that a lot
[20:31:40 CEST] <klaxa> yeah my muxing stuff is basically remuxing.c :P
[20:31:55 CEST] <JEEB> but yea, the gist is - make one AVFormatContext for input, another for output. figure out the streams you want from the input, create streams in output that match, then write the "header" (there's a function for that), and then start pushing AVPackets into the output
[20:31:55 CEST] <kepstin> philipp64: hmm, are you trying to analyze network behaviour of a simulated player?
[20:31:59 CEST] <JEEB> and at the end you flush it
[20:32:08 CEST] <JEEB> that's how you re-multiplex
[20:32:20 CEST] <philipp64> kepstin: something like that, yeah.
[20:32:21 CEST] <JEEB> from FFmpeg's perspective that's all that there is
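A condensed sketch of that flow, following the shape of doc/examples/remuxing.c (error handling elided; stream_map, in_fmt and out_fmt are illustrative names, with streams already created in the output context to match the picked input streams):

    AVPacket pkt;

    avformat_write_header(out_fmt, NULL);  /* "write the header" */
    while (av_read_frame(in_fmt, &pkt) >= 0) {
        AVStream *ist = in_fmt->streams[pkt.stream_index];
        AVStream *ost = out_fmt->streams[stream_map[pkt.stream_index]];

        /* "there's a function for that": rescale pts/dts/duration
         * between the input and output stream time bases */
        av_packet_rescale_ts(&pkt, ist->time_base, ost->time_base);
        pkt.stream_index = ost->index;
        av_interleaved_write_frame(out_fmt, &pkt);  /* takes ownership of pkt */
    }
    av_write_trailer(out_fmt);  /* "at the end you flush it" */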
[20:32:41 CEST] <kepstin> philipp64: note that unlike the real youtube player, ffmpeg won't dynamically switch quality levels and whatnot, that's up to you to decide how to handle
[20:32:56 CEST] <JEEB> also if you're simulating playback, players never re-multiplex
[20:32:59 CEST] <JEEB> they just play
[20:33:13 CEST] <JEEB> so I'm not sure why you'd be re-multiplexing?
[20:33:19 CEST] <JEEB> other than if you want to check the results
[20:33:22 CEST] <kepstin> philipp64: but you can do this pretty much completely independently for the audio video streams - read each stream separately in its own thread or process or whatever, and slow it down by timing it to the system clock
[20:33:25 CEST] <JEEB> for data corruption
[20:33:56 CEST] <kepstin> (a real video player would usually time to the sound card clock, but system clock's good enough for this)
[20:33:59 CEST] <philipp64> JEEB: actually, I'd want to "read" the output packets as if it were another input... since that's the way the code is structured... i.e. if it's a multiplexed stream like the YT classic format with MP4 video and audio combined, then read from that stream...
[20:34:15 CEST] <JEEB> you lost me
[20:34:22 CEST] <philipp64> if it's two separate streams, multiplex them together into a new pseudo-stream and read from that.
[20:34:29 CEST] <kepstin> philipp64: ffmpeg abstracts the multiplexing away - you just read separately from each stream
[20:35:09 CEST] <kepstin> philipp64: and then time each stream to the system clock to slow it down to simulate realtime playback. done.
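A hedged sketch of that pacing, using av_gettime_relative() and av_usleep() from libavutil/time.h (the loop variables are illustrative; real code would cap the sleep and handle timestamp discontinuities):

    #include <libavutil/time.h>

    /* inside a per-stream read loop (illustrative fragment) */
    int64_t start = av_gettime_relative();  /* microseconds, monotonic */
    while (av_read_frame(fmt, &pkt) >= 0) {
        AVStream *st = fmt->streams[pkt.stream_index];
        int64_t pts_us = av_rescale_q(pkt.pts, st->time_base, AV_TIME_BASE_Q);
        int64_t elapsed = av_gettime_relative() - start;
        if (pts_us > elapsed)
            av_usleep((unsigned)(pts_us - elapsed));  /* hold until "its" time */
        /* ... record stall/underrun metrics here ... */
        av_packet_unref(&pkt);
    }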
[20:36:09 CEST] <philipp64> okay, I want to read from a single AVFormatContext, but have packets interleaved that have their stream_index and PTS/DTS all sorted out for me... just like it was coming from a container that held all of the combined content.
[20:36:54 CEST] <JEEB> then you need something separate that puts your multiple inputs into a single thing :P
[20:37:09 CEST] <philipp64> kepstin: I already have something that does that, but it expects a single AVFormatContext, as above... reading from 2 different ones will require adding another thread, rewriting the code, etc.
[20:37:17 CEST] <JEEB> PTS/DTS synchronization is all sorted for you... if your input makes sense when put together
[20:37:26 CEST] <kepstin> philipp64: well, just run two copies of it, one with the video one with the audio :/
[20:37:36 CEST] <kepstin> should give basically the same results.
[20:37:44 CEST] <philipp64> JEEB: yes, that's what I'm asking for... a Y-connector that got knocked over...
[20:38:10 CEST] <JEEB> ok, then just look at the remuxer example and add another avformatcontext
[20:38:16 CEST] <philipp64> this would be so much easier in GStreamer....
[20:38:18 CEST] <JEEB> then output to whatever abomination you want with the AVIO stuff
[20:38:23 CEST] <JEEB> oh really?
[20:38:49 CEST] <philipp64> well, that might just be because I'm much less versed in ffmpeg...
[20:39:35 CEST] <JEEB> anyways, I don't see the reason why you need the Y connector because couldn't you just read from both input avformatcontexts during each loop or something?
[20:39:46 CEST] <JEEB> if you're not having to do any extra timestamp adjustments then that should just work
[20:39:51 CEST] <philipp64> JEEB: the code as it stands expects to read MPEG with audio and video as 2 streams in the same format context...
[20:40:26 CEST] <JEEB> anyways, I've given you enough hints already and it's almost ten PM and I've still not reviewed the bloody code I thought I'd review today
[20:40:29 CEST] <philipp64> I want to trick it to think it's getting a single format context (which I'll cobble together) from separate network streams of audio/webm video/webm.
[20:41:10 CEST] <philipp64> I'll start coding and look at remuxing.c again and maybe it will start to make sense once I start throwing things together...
[20:41:52 CEST] <philipp64> too bad there's not a canned function to setup the merge for me...
[20:42:32 CEST] <kepstin> it's not needed, since in almost all cases you want to demultiplex audio and video streams to handle them separately. You're doing something really unusual :/
[20:42:54 CEST] <philipp64> kepstin: PM?
[20:43:02 CEST] <kepstin> no.
[20:43:43 CEST] <philipp64> well, for the adaptive bit, we don't have to try that. we just try lower and lower quality streams until we find one that doesn't cause buffer underruns.
[20:47:11 CEST] <kepstin> so anyways, I guess you could write some code that downloads, demultiplexes, multiplexes, then hands off to your other code that demultiplexes again
[20:47:38 CEST] <kepstin> but it would probably be easier just to adapt the existing code so you can run it on multiple parallel streams :/
[20:49:10 CEST] <JEEB> yes, that's what was my feeling as well :P
[20:49:43 CEST] <JEEB> just have N AVFormatContexts. it will hurt at first, but IMHO much better than just making some sort of multiplexing
[21:04:30 CEST] <philipp64> I thought about that... problem is that (a) I don't know how much data the next av_read_packet() is going to take, and (b) when I call it and we don't have enough data, it's going to block...
[21:05:21 CEST] <philipp64> would be nicer if I could "feed" data to a function and it would tell me when it had a complete frame/packet ready for me.
[21:06:26 CEST] <philipp64> when you're reading from a disk file, all the data is already available before you start so you can just read it as you go. with a network stream, data arrives asynchronously of course...
[21:06:53 CEST] <JEEB> if you want to control the reading from network etc, you could just implement your own AVIO wrapper
[21:07:07 CEST] <JEEB> you implement the callbacks for read/seek/write (according to what you actually support)
[21:07:09 CEST] <philipp64> work-around is have N threads, one for each network connection... but this is running on an extremely resource-constrained embedded device.
[21:07:29 CEST] <JEEB> https://github.com/jeeb/matroska_thumbnails/blob/master/src/istream_wrapper.c
[21:07:31 CEST] <philipp64> JEEB: what would that look like? what's an example?
[21:07:40 CEST] <JEEB> there's an avio example in the examples dir
[21:07:43 CEST] <JEEB> for the record :P
[21:07:51 CEST] <JEEB> but that's what I coded up almost five years ago
[21:08:25 CEST] <JEEB> you'd probably log with av_log or something in reality
[21:08:39 CEST] <JEEB> (since you most likely set up your logging handling with a callback for that, too)
[21:09:27 CEST] <philipp64> you're talking about http://git.videolan.org/?p=ffmpeg.git;a=blob;f=doc/examples/avio_reading.c;h=cbfeb174b8ce43c5fed7510bcb940e8126a7b3ed;hb=HEAD ?
[21:09:34 CEST] <JEEB> yes
[21:09:45 CEST] <JEEB> I've never looked at it, but I think the name implies it uses custom AVIO callbacks
[21:10:24 CEST] <JEEB> and IIRC returning zero now is not an EOF I think? although you'd have to test it
[21:10:32 CEST] <philipp64> well, it uses the read_packet() callback to drain data from a buffer...
[21:10:44 CEST] <JEEB> yes, and you control what that calls
[21:10:59 CEST] <JEEB> (AVERROR_EOF is now the EOF marker)
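Putting JEEB's pointers together, a minimal sketch of a custom read callback in the style of doc/examples/avio_reading.c (the net_buf type and the 4096-byte buffer size are hypothetical placeholders for a real network buffer):

    #include <string.h>
    #include <libavformat/avformat.h>

    struct net_buf { uint8_t *data; size_t size, pos; };  /* hypothetical */

    static int read_cb(void *opaque, uint8_t *buf, int buf_size)
    {
        struct net_buf *nb = opaque;
        size_t left = nb->size - nb->pos;
        if (!left)
            return AVERROR_EOF;  /* AVERROR_EOF, not 0, marks end of stream */
        if ((size_t)buf_size > left)
            buf_size = left;
        memcpy(buf, nb->data + nb->pos, buf_size);
        nb->pos += buf_size;
        return buf_size;
    }

    /* wiring it up, inside your setup code; nb is a struct net_buf
     * that your network code keeps filled */
    struct net_buf nb = {0};
    uint8_t *io_buf = av_malloc(4096);
    AVIOContext *avio = avio_alloc_context(io_buf, 4096, 0, &nb,
                                           read_cb, NULL, NULL);
    AVFormatContext *fmt = avformat_alloc_context();
    fmt->pb = avio;
    avformat_open_input(&fmt, NULL, NULL, NULL);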
[21:11:20 CEST] <philipp64> if you don't have enough data for a frame then it just keeps calling the read_packet() function until it does...
[21:11:51 CEST] <philipp64> which makes it hard to poll multiple streams.
[21:11:55 CEST] <JEEB> yes, that could be true, I don't remember the wrapper that calls read_packet and if it will keep looping for zeroes
[21:12:15 CEST] <JEEB> but yea, if you think there are better ways for lavf to do IO I think new API proposals are welcome
[21:12:26 CEST] <JEEB> we all know the current design is not perfect
[21:12:51 CEST] <philipp64> well, if I did return zero... how would I signal back to it, "okay, I've got more data now"... not seeing how the API would accommodate that.
[21:14:17 CEST] <JEEB> without thinking about it longer, I would loop over both inputs and if one didn't have anything I'd check the next one. and you get the usual loop with that. or you have an off-band marker for "I have more data"
[21:14:37 CEST] <JEEB> and you call read on the input(s) when you know you've got some data in the buffers that you have for your AVIO
[21:14:55 CEST] <philipp64> still doesn't fix it... having "more data" isn't the same as having "enough data", i.e. a full-packet.
[21:15:25 CEST] <JEEB> sure
[21:16:06 CEST] <JEEB> but for a full packet you need a parser for that format to begin with. and as I noted, if you have better ideas for IO interfaces design recommendations are welcome at the mailing list
[21:18:34 CEST] <philipp64> I'm guessing av_get_packet() isn't relevant here, either.
[21:19:32 CEST] <JEEB> I'm honestly not too sure what that was again
[21:39:07 CEST] <SpeakerToMeat> Is there any "I can't believe it's not official", "all you could ever want ever!", "wow this is very current" build repo for debian stretch?
[21:39:53 CEST] <JEEB> I'd guess not since setting up compilation on *nix is generally straightforward
[21:40:13 CEST] <JEEB> and it's not like you don't know which things you need out of the kitchen sink (quite few in the end)
[21:40:45 CEST] <SpeakerToMeat> hm
[21:40:46 CEST] <JEEB> because a default ../configure --disable-autodetect already gives you like 99% of all decoders etc
[21:40:57 CEST] <JEEB> so that generally leaves you with "what encoders you need?"
[21:41:08 CEST] <JEEB> as in, external ones
[21:41:20 CEST] <JEEB> most popular one is probably libx264
[21:42:01 CEST] <SpeakerToMeat> Nod
[21:42:08 CEST] <SpeakerToMeat> Makes sense
[21:43:10 CEST] <SpeakerToMeat> Sigh, I can't wait until either on-the-fly 22:1 lossless compression or $15 20TB ssd drives are a thing
[21:48:24 CEST] <djk> SMB 1 turned off and the win10 machine aren't seeing each other or connecting with net use
[21:51:17 CEST] <djk> sorry wrong channel
[22:36:13 CEST] <Plippz> In some assignment we're supposed to run some ffmpeg.exe command on Windows, but I'd rather use Linux, so I tried the same in Linux: https://bpaste.net/show/ca77d0950dff no luck. Do the Windows and Linux versions have different command line argument syntax? Or maybe the Windows version is an old version or something
[22:36:53 CEST] <JEEB> no, that sounds like something that wouldn't work in windows, either
[22:37:00 CEST] <JEEB> the keyword for input height IIRC was something like ih?
[22:37:27 CEST] <JEEB> also I would recommend to specify which options for the filter you are setting
[22:37:37 CEST] <JEEB> like scale=w=-1:h=ih
[22:37:52 CEST] <klaxa> i want a class where you are supposed to use ffmpeg
[22:38:08 CEST] <JEEB> (most likely you either needed to script that, or put the height you wanted where the text "height" is mentioned :P)
[22:40:31 CEST] <Plippz> JEEB: ah, I may indeed just need to replace 'height' :D
[22:40:52 CEST] <Plippz> Yes indeed.. thanks!
[22:40:55 CEST] <JEEB> but yea, the usage of unnamed options is not recommended
[22:41:07 CEST] <JEEB> and might be removed in the future
[22:41:14 CEST] <JEEB> (although I bet it won't be removed :P)
[22:41:32 CEST] <JEEB> any new stuff you write should have name_of_option=value key-value list
[23:23:15 CEST] <zerodefect> So I've been using the C-API to create a simple graph which uses the overlay filter. I'm overlaying an image which is 253x253 over a frame that is 720x576. What I've noticed is that when I create the 'buffer' filter for the overlay, I have to set the w/h to the size of the image otherwise the image/AVFrame's true dimensions are not respected during compositing. That caught me by surprise.
[23:23:55 CEST] <zerodefect> Having looked at the activate() method in the overlay, I believe that the w/h of the overlay should be dynamic.
[23:25:46 CEST] <durandal_1707> zerodefect: w/h on input image of overlay is not dynamic
[23:26:34 CEST] <zerodefect> Yeah, I figured out after writing some sample code (as you suggested) - https://github.com/zerodefect/ffmpeg_overlay_demo :)
[23:26:38 CEST] <durandal_1707> only the position where to overlay is dynamic
[23:27:29 CEST] <zerodefect> Isn't the w/h of overlay being updated in do_blend() method though?
[23:30:02 CEST] <kepstin> the normal method for writing filters like that is to set the outlink w/h once at filter initialization, and use the same value during runtime
[23:30:18 CEST] <kepstin> I don't think the filter chain system supports changing image size on filter links at run time at all?
[23:30:21 CEST] <durandal_1707> zerodefect: for the case where it is partially not visible
[23:31:01 CEST] <zerodefect> Darn
[23:32:31 CEST] <zerodefect> Out of interest, is this the sort of problem that people regularly run into?
[23:34:45 CEST] <zerodefect> It would be quite nice if there was an API that could be used to overlay AVFrames outside the filter graph (like the scaler). Thoughts?
[23:35:23 CEST] <durandal_1707> zerodefect: i do not know what you are attempting to do
[23:39:19 CEST] <zerodefect> Looking for a compositor to compose one overlay (or more?) frame at pos (x,y) over another; however, looking for something that is dynamic such that I can change both the x,y and dimensions of the overlay (so modify the properties of the overlay on the fly; any actual modifications of the overlay content would be done upstream).
[23:40:20 CEST] <zerodefect> It makes for an easy way to implement simple(!) video effects (DVE).
[23:47:41 CEST] <vlt> Hello. I'd like to encode a video; space is not much of a concern, but I want to avoid any "your CPU is too slow to play this" messages from mplayer. Which x264 preset or setting would you recommend?
[23:48:29 CEST] <vlt> Does -preset ultrafast also affect CPU consumption during playback?
[23:48:30 CEST] <JEEB> vlt: resolution, frame rate and your CPU are?
[23:48:35 CEST] <JEEB> no
[23:48:45 CEST] <durandal_1707> zerodefect: use multiple overlays, or something like shotcut
[23:48:47 CEST] <vlt> JEEB: Full HD, 25 fps
[23:48:50 CEST] <kepstin> vlt: it's rare for any settings on x264 to increase complexity enough to cause issues on modern computers - frame rate and resolution are the main issue.
[23:48:50 CEST] <JEEB> I mean, it does but there is a separate tune
[23:48:55 CEST] <JEEB> -tune fastdecode
[23:49:11 CEST] <vlt> Intel(R) Atom(TM) x5-Z8300 CPU @ 1.44GHz
[23:49:21 CEST] <vlt> JEEB: Perfect, thanks!
[23:49:26 CEST] <JEEB> doesn't that have hwdec?
[23:49:48 CEST] <kepstin> intel ARK doesn't list quicksync on that, which is kinda weird
[23:49:50 CEST] <JEEB> you can try with mpv and --hwdec=d3d11va-copy or the one without -copy
[23:50:14 CEST] <JEEB> this one contains a quite recent build since lachs0r only builds releases https://github.com/mpv-player/mpv/pull/5804
[23:50:26 CEST] <vlt> JEEB: Maybe it has but I still got that message from mplayer (Ubuntu 16.04) on a "-preset slow" video.
[23:50:33 CEST] <JEEB> oh, ubuntu
[23:50:47 CEST] <JEEB> --hwdec=vaapi-copy then
[23:50:57 CEST] <JEEB> although 16.04 will have an ancient mpv
[23:51:37 CEST] <zerodefect> @durandal_1707, not familiar with shotcut. Is that some sort of API associated with an open source video editor? (just did a web search)
[23:52:09 CEST] <JEEB> for broadcast live overlays there was some norwegian thing
[23:52:48 CEST] <vlt> JEEB: "oh, ubuntu"? What might be the problem there? Can you suggest something?
[23:53:03 CEST] <JEEB> no, your initial comments just sounded like windows
[23:53:04 CEST] <JEEB> :P
[23:53:13 CEST] <JEEB> which is why I linked a windows build
[23:53:16 CEST] <zerodefect> @JEEB: sounds interesting. Any more info?
[23:53:19 CEST] <kepstin> zerodefect: honestly, I'd consider just converting the image to a pixel format you can use with, i dunno, cairo or something - then just use that to draw on top of the buffer.
[23:54:25 CEST] <JEEB> zerodefect: I actually tried googling for it as I didn't remember the name of the open source project, but OBS basically overtook my keywords, and it's not that one
[23:54:52 CEST] <zerodefect> @kepstin: not familiar with that project, but that does look _very_ interesting. Thanks!
[23:55:45 CEST] <zerodefect> Ok, no worries JEEB. I'll go down the route kepstin suggested
[23:57:28 CEST] <durandal_1707> zerodefect: shotcut is gui video editor
[00:00:00 CEST] --- Fri May 11 2018