[Ffmpeg-devel-irc] ffmpeg.log.20180328

burek burek021 at gmail.com
Thu Mar 29 03:05:01 EEST 2018


[00:45:43 CEST] <user234234> Hi, I asked 2 days ago about an audio problem with a USB grabber and found out that it is solved in the current git state. Static builds from johnvansickle.com are still too old, so I'm simply compiling it myself statically against musl. I do get an error about a missing library (ERROR: libfdk_aac not found) despite having it set via --enable-libfdk-aac --enable-nonfree --extra-ldflags=-L/prefix/lib. The library
[00:45:43 CEST] <user234234> libfdk-aac.a and the pkgconfig are there. Could someone with more insight please help?
[00:56:18 CEST] <DHE> user234234: your pkg-config might not be searching the right path. /usr/local/lib/pkgconfig (the default location when installing from source) isn't on the default search path
[00:57:22 CEST] <furq> user234234: there's probably a more descriptive error near the end of ffbuild/config.log
[00:59:51 CEST] <user234234> A lot of "relocation * against symbol `*' can not be used when making a shared object; recompile with -fPIC" messages ... - Thanks for pointing out that log
[01:00:36 CEST] <furq> are you building ffmpeg with --enable-shared
[01:01:11 CEST] <furq> you need to build all your static libs with -fPIC if you are
[01:01:21 CEST] <furq> or alternatively you could just build a static ffmpeg which is way easier
[01:01:46 CEST] <user234234> forgot to disable dynamically linked ffmpeg building ...
[01:02:26 CEST] <furq> it would be nice if that configure error message was made a bit more verbose
[01:02:41 CEST] <furq> granted it does tell you to check config.log but we still get this question an awful lot
[01:05:49 CEST] <user234234> it would be cool if that were mentioned in the ./configure error output
[01:06:36 CEST] <user234234> --disable-shared did nothing. My compiler setup probably already disables it by default. I'll try -fPIC.
[01:07:53 CEST] <furq> --enable-static?
[01:08:04 CEST] <furq> that should be the default though so shrug
[01:09:29 CEST] <furq> you probably also want --extra-ldflags="-static" --pkg-config-flags="--static"
[01:09:41 CEST] <furq> for a fully static binary
[01:09:46 CEST] <furq> (other than libc)
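Roughly what that adds up to for a fully static build against libs installed under /prefix (paths are illustrative; the pkg-config path assumes the .pc files landed in /prefix/lib/pkgconfig):

  PKG_CONFIG_PATH=/prefix/lib/pkgconfig ./configure \
      --enable-static --disable-shared \
      --enable-libfdk-aac --enable-nonfree \
      --extra-cflags="-I/prefix/include" \
      --extra-ldflags="-L/prefix/lib -static" \
      --pkg-config-flags="--static"
  make -j4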
[01:15:08 CEST] <user234234> ok thanks for the help. I'll probably try again tomorrow and recompile all the libs. It's already late. gn
[01:29:57 CEST] <jpabq_> Hi all. I want to make use of ff_load_image. I am building ffmpeg from the git 3.4 branch. It builds libavfilter/lavfutils.c, which shows up in libavfilter.a, but the libavfilter/lavfutils.h file does not get installed with the other headers. Is ff_load_image considered 'private', or something like that?
[01:32:22 CEST] <JEEB> yes
[01:32:28 CEST] <JEEB> everything starting with ff_ is private
[01:33:01 CEST] <jpabq_> Ah, okay.  Thanks.     So, if I want to load an image into a frame, what is the correct way?
[01:33:53 CEST] <JEEB> open an avformat context, read the AVPacket (most likely singular in the file), then open an avcodec context for it for decoding
[01:34:03 CEST] <JEEB> then https://www.ffmpeg.org/doxygen/trunk/group__lavc__encdec.html
[01:34:11 CEST] <JEEB> follow that for how to decode
[01:34:18 CEST] <JEEB> then you will receive an AVFrame
[01:34:22 CEST] <JEEB> which is the raw decoded picture
[01:34:37 CEST] <jpabq_> JEEB: thank you.  I will look at that.
[01:36:13 CEST] <JEEB> also take a look at doc/examples, although some examples are not top-notch. see open_input_file in the transcoding.c example, for instance
[01:36:55 CEST] <JEEB> also it is generally recommended to work against master if at all possible, as everyone is focused on that. the releases really get a random() amount of focus
[01:37:14 CEST] <JEEB> http://fate.ffmpeg.org/ is a way of checking how things generally seem to be in current master
[01:37:23 CEST] <JEEB> it's the automated testing suite
[01:37:27 CEST] <jpabq_> Good to know.  Thanks again.
[01:38:00 CEST] <JEEB> 25
[01:38:11 CEST] <JEEB> but yea, that should get you started :)
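A minimal sketch of the flow JEEB describes (open an avformat context, feed the packet to a decoder via the send/receive API, end up with an AVFrame). Error handling is abbreviated, and the function name load_image is only illustrative:

  #include <libavformat/avformat.h>
  #include <libavcodec/avcodec.h>

  /* Decode the (usually single) picture in an image file into an AVFrame. */
  static AVFrame *load_image(const char *path)
  {
      AVFormatContext *fmt = NULL;
      AVFrame *frame = NULL;

      if (avformat_open_input(&fmt, path, NULL, NULL) < 0)
          return NULL;
      avformat_find_stream_info(fmt, NULL);

      int idx = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);
      if (idx < 0) {
          avformat_close_input(&fmt);
          return NULL;
      }

      const AVCodec *dec = avcodec_find_decoder(fmt->streams[idx]->codecpar->codec_id);
      AVCodecContext *ctx = avcodec_alloc_context3(dec);
      avcodec_parameters_to_context(ctx, fmt->streams[idx]->codecpar);
      avcodec_open2(ctx, dec, NULL);

      AVPacket pkt;
      while (av_read_frame(fmt, &pkt) >= 0) {        /* reads an AVPacket */
          if (pkt.stream_index == idx)
              avcodec_send_packet(ctx, &pkt);
          av_packet_unref(&pkt);
      }
      avcodec_send_packet(ctx, NULL);                /* flush the decoder */

      frame = av_frame_alloc();
      if (avcodec_receive_frame(ctx, frame) < 0)     /* the raw decoded picture */
          av_frame_free(&frame);

      avcodec_free_context(&ctx);
      avformat_close_input(&fmt);
      return frame;
  }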
[10:46:59 CEST] <meins> Hello
[10:48:44 CEST] <meins> i have a problem with the avformat_find_stream_info function
[12:52:02 CEST] <apus> hi, i would like to convert my vhs tapes. is there a way of using ffmpeg to encode the signal received via usb video grabber directly to x264 or x265 in a way that is as lossless as possible, so that i have enough raw material to be able to postprocess the video afterwards? the format is PAL, 576i
[13:04:14 CEST] <DHE> apus: true lossless with x264 means using -qp 0 and a -pix_fmt that is lossless (yuv444, rgb, etc)
[13:26:36 CEST] <th3_v0ice> Hi guys. Is it possible that av_interleaved_write_frame is somehow duplicating the packets it writes? I marked the frames I am sending to the encoder with numbers, and the encoded frames that come out have the same numbers in the same order. But when I write them to a .h264 file and decode that same file back to YUV, it contains the first frame twice. Am I doing something wrong or is this some kind of a bug?
[13:28:25 CEST] <JEEB> it shouldn't duplicate unless you're sending it the same AVPacket
[13:28:53 CEST] <apus> DHE: from what i see this results in really high file sizes. is there a way to get "nearly" lossless when talking about archiving vhs streams while applying a good compression?
[13:29:21 CEST] <JEEB> DHE: the pix_fmt should be the same as the source one, no need to make it RGB or 4:4:4
[13:29:34 CEST] <JEEB> because as long as you have no difference in it, you should be OK
[13:30:11 CEST] <JEEB> apus: sure, low'ish CRF value instead of qp 0. it's not lossless of course
[13:31:16 CEST] <apus> JEEB: something like this a "good" option: ffmpeg -f v4l2 -standard PAL -thread_queue_size 512 -i /dev/video0 -f alsa -thread_queue_size 512 -i hw:2,0 -vcodec libx264 -preset superfast -crf 8 -s 720x576 -r 25 -aspect 4:3 -acodec libmp3lame -b:a 128k -channels 2 -ar 48000 out.avi ?
[13:35:18 CEST] <GuiToris> hello, is anyone active now or should I come back later?
[13:35:36 CEST] <pmjdebru1jn> just ask and wait around for an answer
[13:35:50 CEST] <JEEB> apus: > superfast > expecting compression
[13:35:55 CEST] <GuiToris> I'll ask it but I need to leave soon
[13:35:56 CEST] <GuiToris> so
[13:36:08 CEST] <JEEB> apus: but yes, around that range maybe for CRF?
[13:37:08 CEST] <GuiToris> my nephew got a dvd but my sister doesn't have a dvd player, I thought I would rip it with ffmpeg. I did ffmpeg -i /path/to/dvd /path/to/file.mkv but I only got the first chapter
[13:37:20 CEST] <GuiToris> which is not the cartoon
[13:37:36 CEST] <JEEB> with lossless of course the preset only affects compression ratios
[13:37:57 CEST] <JEEB> also not sure how much you'd gain with that sort of content :)
[13:38:08 CEST] <JEEB> although superfast *is* rather "dumb"
[13:45:48 CEST] <GuiToris> should I use handbrake?
[13:50:27 CEST] <iive> GuiToris, yes, if it works for you.
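If it has to be ffmpeg, one thing that sometimes works is feeding it the VOBs of the main title via the concat protocol; the VTS_01_*.VOB names are an assumption, pick whichever title set on the disc is the largest:

  ffmpeg -i "concat:VTS_01_1.VOB|VTS_01_2.VOB|VTS_01_3.VOB" -map 0:v -map 0:a -c:v libx264 -crf 20 -c:a aac file.mkv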
[13:50:47 CEST] <apus> JEEB: so just leave that preset option out? i'm trying that command out at the moment and after a few seconds/minutes i always get: Dequeued v4l2 buffer contains 414720 bytes, but 829440 were expected. Flags: 0x00002001. /dev/video0: Invalid data found when processing input  This issue seems to be caused by the stk1160 driver. is there a solution? https://superuser.com/questions/1048637/ffmpeg-video-recording-freezes-after-invalid-data-found-when-processing-input
[13:59:55 CEST] <JEEB> apus: oh right, realtime
[14:00:17 CEST] <JEEB> also there's more presets than medium or superfast :)
[14:00:40 CEST] <JEEB> I would recommend using a faster preset (ultra/superfast) for the lossless capture, and then you can use a slower preset to re-encode that
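A sketch of that two-step approach, reusing apus' input options (device names as above, everything else illustrative): capture losslessly first, re-encode properly afterwards.

  ffmpeg -f v4l2 -standard PAL -thread_queue_size 512 -i /dev/video0 \
         -f alsa -thread_queue_size 512 -i hw:2,0 \
         -c:v libx264 -preset ultrafast -qp 0 -c:a flac capture.mkv
  ffmpeg -i capture.mkv -vf yadif -c:v libx264 -preset slow -crf 18 -c:a aac -b:a 128k final.mkv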
[14:07:56 CEST] <DHE> JEEB: point taken about the source pix_fmt thing
[14:13:21 CEST] <GuiToris> thanks iive
[14:13:33 CEST] <GuiToris> I need to go now, see you all :)
[15:41:50 CEST] <kiroma> Hey, I can't compile ffmpeg
[15:42:22 CEST] <kiroma> Just pulled it and it fails with undeclared function
[15:42:28 CEST] <kiroma> https://pastebin.com/XsMzeHqK
[15:43:43 CEST] <kiroma> Could be related to --enable-hardcoded-tables
[15:43:57 CEST] <c_14> could be, wouldn't be the first time that broke
[15:44:06 CEST] <c_14> is AV_INPUT_BUFFER_PADDING_MAX defined in the source?
[15:44:11 CEST] <c_14> s/MAX/SIZE
[15:45:37 CEST] <kiroma> Should be, where do I check that?
[15:47:06 CEST] <kiroma> Alright it's defined in avcodec.h
[15:48:33 CEST] <c_14> Have you tried compiling without --enable-hardcoded-tables?
[15:48:46 CEST] <kiroma> Yes, it's working.
[15:50:42 CEST] <c_14> yeah, one of the fate machines with hardcoded tables is also failing
[15:50:44 CEST] <c_14> looks like a bug
[15:51:05 CEST] <c_14> Can you report it on trac? (and preferably bisect to find the commit that makes it fail)
[15:52:09 CEST] <kiroma> Alright, give me a minute to bisect and I'll report it.
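For reference, a typical bisect run for a build break like this (using the n3.4 tag as the known-good point is only an assumption):

  git bisect start
  git bisect bad HEAD
  git bisect good n3.4
  # rebuild at each step, e.g. ./configure --enable-hardcoded-tables && make,
  # then mark the result:
  git bisect good    # or: git bisect bad
  git bisect reset   # when finished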
[15:54:49 CEST] <kalipso> hey guys, i have a webm with a size of about 50mb and want to compress it down to at most 6mb, is there any command so that ffmpeg reduces the resolution etc. automatically to get to a fixed size?
[15:58:32 CEST] <kiroma> You can use two pass encoding.
[16:01:44 CEST] <kiroma> How do I tell make to compile a specific file?
[16:02:38 CEST] <kiroma> oh nvm
[16:04:56 CEST] <c_14> kalipso: no, like kiroma said you can use two-pass encoding to get a certain filesize but you have to decide on the resolution yourself
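The arithmetic behind that, assuming for example a 60-second clip and a VP9/Opus webm (all numbers illustrative): 6 MB is about 48 Mbit, so roughly 800 kbit/s total, say 700k video plus 64k audio, then two passes at that rate:

  ffmpeg -i in.webm -c:v libvpx-vp9 -b:v 700k -pass 1 -an -f null /dev/null
  ffmpeg -i in.webm -c:v libvpx-vp9 -b:v 700k -pass 2 -c:a libopus -b:a 64k out.webm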
[16:13:47 CEST] <c_14> kiroma: it's this commit: e529fe7633762cb26a665fb6dee3be29b15285cc; it's been reported on the ml already
[16:13:51 CEST] <c_14> so there should be a fix soon
[16:14:00 CEST] <kiroma> Oh okay thanks
[16:14:13 CEST] <c_14> https://ffmpeg.org/pipermail/ffmpeg-devel/2018-March/227466.html
[16:15:28 CEST] <c_14> you can just revert the commit or build one commit before that if you want to build from git with hardcoded-tables
[18:14:09 CEST] <th3_v0ice> Hi guys, I am using the new API to decode frames, but even when I send NULL to avcodec_send_packet to drain the remaining frames, avcodec_receive_frame just returns EOF, and I know there are more frames in the input file because FFmpeg outputs them. 3 frames are missing from the end. Does anyone know what the problem might be?
[18:15:27 CEST] <JEEB> which video format out of interest? and FFmpeg version?
[18:15:44 CEST] <JEEB> Also ffmpeg.c utilizes the same APIs as far as I know
[18:15:49 CEST] <JEEB> the send/receive one
[18:21:46 CEST] <th3_v0ice> JEEB: video format is mp4, and FFmpeg version is 3.4.2
[18:25:56 CEST] <th3_v0ice> and is this flag "avctx->internal->draining" supposed to be set to 1? Because its set to 0 even after sending the NULL packet.
[18:26:19 CEST] <JEEB> with format I meant the actual avcodec side of things
[18:26:31 CEST] <JEEB> and I would recommend testing with master
[18:27:06 CEST] <JEEB> although I'm pretty sure my tests from like summer last year had the flushing work
[18:27:44 CEST] <th3_v0ice> Its h264 video
[18:28:26 CEST] <JEEB> ok, that should have worked but I have not tested 3.4.x specifically
[18:33:17 CEST] <th3_v0ice> Well it's working in FFmpeg so we can assume that I am doing something wrong :)
[18:39:44 CEST] <JEEB> yea, ffmpeg.c is using that API as I can see by decode() in it
[18:40:44 CEST] <JEEB> https://www.ffmpeg.org/doxygen/trunk/group__lavc__encdec.html and https://www.ffmpeg.org/doxygen/trunk/group__lavc__decoding.html#ga58bc4bf1e0ac59e27362597e467efff3 / https://www.ffmpeg.org/doxygen/trunk/group__lavc__decoding.html#ga11e6542c4e66d3028668788a1a74217c
[18:40:49 CEST] <JEEB> should be enough docs
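For reference, a minimal sketch of the drain step those docs describe, assuming dec_ctx is the opened decoder context, frame is an allocated AVFrame and use_frame() stands in for whatever the caller does with decoded pictures (all three names illustrative):

  /* After the last real packet, enter draining mode and pull out the
   * frames the decoder is still holding (B-frame reordering, threads). */
  avcodec_send_packet(dec_ctx, NULL);         /* send NULL exactly once */
  for (;;) {
      int ret = avcodec_receive_frame(dec_ctx, frame);
      if (ret == AVERROR_EOF)
          break;                              /* really done now */
      if (ret < 0)
          break;                              /* unexpected error */
      use_frame(frame);                       /* delayed pictures arrive here */
      av_frame_unref(frame);
  }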
[18:53:58 CEST] <apus> i'm trying to convert VHS content in real time from a usb 2.0 video grabber to x264 (ultrafast) and mp3 (audio). i even tried aac. for whatever reason after a few minutes i always get Non-monotonous DTS ... Any solutions? ffmpeg -f v4l2 -standard PAL -thread_queue_size 512 -i /dev/video0 -f alsa -thread_queue_size 512 -i hw:2,0 -vcodec libx264 -preset ultrafast -crf 12 -s 720x576 -r 25 -aspect 4:3 -acodec libmp3lame -b:a 128k -channels 2 -ar 48000 out.avi
[18:55:22 CEST] <th3_v0ice> JEEB: Thanks for the help!
[18:55:37 CEST] <dystopia_> crf 12 seems kinda pointless
[18:56:27 CEST] <dystopia_> a crf of 21 and a -preset of slow would be better imo and you would probably be able to do it in realtime without issue,
[18:57:11 CEST] <apus> i have to post-process the thing afterwards. thought i'd keep the data until that is finished. afterwards i can choose those options
[18:57:23 CEST] <dystopia_> also, it's probably 576i
[18:57:31 CEST] <dystopia_> so you might want to deinterlace on the fly also
[18:57:40 CEST] <dystopia_> -vf yadif=0:0
[18:58:15 CEST] <dystopia_> well doing it afterwards doesn't help when the original capture is lossy
[18:58:16 CEST] <Meins> can someone help me with streaming? ffmpeg needs a lot of time to show me the stream
[18:58:52 CEST] <dystopia_> regarding the error "Non-monotonous DTS" i get it occasionally too
[18:58:59 CEST] <apus> dystopia_: so i'd better just save it completely lossless and only afterwards use x264 on it?
[18:59:05 CEST] <dystopia_> i wouldn't worry about it unless the video is messed up
[18:59:34 CEST] <apus> the non-monotonous DTS error appears constantly from some point onward. like 5 per second
[18:59:38 CEST] <dystopia_> not completely lossless, but go to your desired output on the fly, it's only sd at 25fps so your pc can more than handle going straight to the output
[19:00:08 CEST] <apus> anything else i'd need to handle on-the-fly aside from yadif?
[19:00:19 CEST] <dystopia_> crop
[19:00:55 CEST] <dystopia_> the original resolution might be 720x576 but that likely has letterbox bars that need cropping
[19:02:00 CEST] <Meins> the function avformat_find_stream_info needs too much time to analyze the stream
[19:02:44 CEST] <dystopia_> -vf crop=desired_width:desired_height:pixels_cropped_from_left:pixels_cropped_from_top,yadif=0:0,setsar=1/1
[19:02:49 CEST] <dystopia_> for video filter
[19:04:37 CEST] <dystopia_> e.g., -vf crop=720:300:0:138,yadif=0:0,setsar=1/1
[19:05:00 CEST] <dystopia_> something like that would crop 138 pixels from top and bottom, deinterlace and set the sar
[19:05:18 CEST] <dystopia_> but you need to work out correct crop value for your video
[19:05:31 CEST] <dystopia_> every one is likely to be slightly different
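One way to work those values out is to let the cropdetect filter look at a capture and print a suggested crop= string (filename illustrative):

  ffmpeg -i capture.mkv -vf cropdetect -f null -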
[19:18:12 CEST] <apus> okay. i'll try that. thanks!
[20:14:40 CEST] <th3_v0ice> JEEB: I just did a few tests. I had 3 sequences, one with only I frames. This one decoded in full, drain loop working properly. The second one with I & P frames also decoded correctly. But the third sequence, which has I & P & B frames, doesn't decode the last few frames. Does this ring any bells? Thanks!
[20:15:39 CEST] <JEEB> th3_v0ice: B-frames and threads are what usually bring in latency in getting back pictures, which is why flushing is so important
[20:15:58 CEST] <JEEB> latency as in you are not getting pictures back right away
[20:16:36 CEST] <JEEB> with b-frames it's the re-ordering delay, and then you have threads giving you delay
[20:16:49 CEST] <th3_v0ice> But I did send a NULL packet, and I did while(ret >= 0) avcodec_receive_frame. Is there something else that I need to do?
[20:18:21 CEST] <th3_v0ice> or, how can I know when these pictures are available? What am I missing here?
[20:19:09 CEST] <JEEB> if you start flushing that's the end of decoding, and all decode'able pictures should be returned
[20:20:58 CEST] <th3_v0ice> The problem is that they are not...
[20:21:58 CEST] <JEEB> and if you say that ffmpeg.c works for you... it is using exactly the same API
[20:22:45 CEST] <JEEB> http://git.videolan.org/?p=ffmpeg.git;a=blob;f=fftools/ffmpeg.c;h=d581b40bf2f791064d23408468da1bffabedccff;hb=refs/heads/release/3.4#l2258
[20:22:49 CEST] <JEEB> this is from the 3.4 branc
[20:22:50 CEST] <JEEB> *branch
[20:24:46 CEST] <th3_v0ice> Can this have something to do with av_read_frame?
[20:26:51 CEST] <th3_v0ice> The logic is the same between ffmpeg.c and my code, but with B-frames in the mix avcodec_receive_frame returns AVERROR_EOF immediately after av_read_frame does.
[20:32:20 CEST] <axew3> hello all
[20:34:20 CEST] <axew3> is it possible to ask a question here about installing ffmpeg onto a centos server?
[20:35:18 CEST] <kiroma> Don't ask to ask and just ask the question.
[20:36:39 CEST] <JEEB> th3_v0ice: out of interest, you're not re-using the avcodecontext from the avformat context, right? that should give you warnings and is not the right way anyways
[20:36:46 CEST] <JEEB> although no idea if it would cause this
[20:37:04 CEST] <JEEB> av_read_frame is anyways one of the most misnamed functions :P it reads an AVPacket
[20:41:58 CEST] <th3_v0ice> JEEB: No, I am opening my own decoder AVCodecContext, based on the parameters from the input AVFormatContext. Haha, yeah, luckily it has a good description :)
[21:37:22 CEST] <Fenrirthviti> Is av1 support functional in ffmpeg at this point? Or is that still ongoing?
[21:38:30 CEST] <JEEB> do note that the PR stunt the marketing dept of AOM did was bad, it's not finished *yet* although it's close
[21:38:45 CEST] <JEEB> there's a patch for AV1 decoding and it might get merged soon as it was just discussed
[21:38:55 CEST] <Fenrirthviti> yeah, that's fair, I just wasn't sure if it got merged yet
[21:38:57 CEST] <JEEB> (the patch has been around in one way or another for months now)
[21:39:16 CEST] <JEEB> but yea, what I am trying to say is that AV1 is not finished. not yet, at least.
[21:39:18 CEST] <Fenrirthviti> I saw a few other commits earlier today that have supporting av1 functions is why I was asking
[21:39:33 CEST] <Fenrirthviti> like the ivf one
[21:39:47 CEST] <JEEB> yea, the muxing commits went in quite a while ago
[21:39:55 CEST] <durandal_1707> Fenrirthviti: why do you care for av1?
[21:40:15 CEST] <Fenrirthviti> Because rtmp and flv is absolutely awful
[21:40:16 CEST] <JEEB> because AOM's PR department just released the news article?
[21:40:30 CEST] <Fenrirthviti> I've been interested in it as a potential replacement for streaming video for a while now
[21:41:06 CEST] <JEEB> it's the least bad thing coming up it seems
[21:41:19 CEST] <JEEB> given that HEVC is just not kosher licensing-wise for corporate use
[21:41:22 CEST] <Fenrirthviti> The only other thing is SRT, but they seem even worse at playing with others
[21:41:28 CEST] <JEEB> SRT is a protocol
[21:41:34 CEST] <JEEB> AV1 is a video format
[21:41:57 CEST] <Fenrirthviti> yes, I'm looking for replacements for both rtmp and flv
[21:42:10 CEST] <DHE> if the AV1 reference encoder runs at 0.05fps for 1080p I'll consider that good...
[21:42:22 CEST] <BtbN> both rtmp and flv won't go anywhere with a new video codec though
[21:42:28 CEST] <BtbN> they are containers/transports
[21:42:51 CEST] <BtbN> And AV1 is not a replacement for them
[21:42:54 CEST] <Fenrirthviti> fair enough I guess.
[21:43:28 CEST] <BtbN> AV1 won't be relevant for another 2 years at least
[21:44:28 CEST] <Fenrirthviti> sure, not really expecting anything right now
[21:44:56 CEST] <Fenrirthviti> just would be fun to start playing with it
[21:47:07 CEST] <DHE> if the decoder can improve to the point of realtime playback, some people might be willing to put up with the slow encoder for VOD style playback...
[21:47:15 CEST] <DHE> but that's still probably a year out at very best
[21:47:51 CEST] <JEEB> the decoder is already realtime 1080p30
[21:47:54 CEST] <JEEB> without threading
[21:48:09 CEST] <JEEB> and rav1e is a fast encoder if you want SPEEED
[21:48:17 CEST] <JEEB> TD-Linux was doing a live stream with it after FOSDEM
[21:48:27 CEST] <DHE> oh? cool...
[21:48:40 CEST] <JEEB> rav1e is basically part of the effort to verify the spec
[21:48:44 CEST] <Fenrirthviti> What were they using for transport?
[21:48:53 CEST] <JEEB> so they wrote another encoder just for that :P
[21:49:03 CEST] <JEEB> (and also to see how well/bad you can write an encoder in rust)
[21:49:11 CEST] <Fenrirthviti> hah
[21:49:33 CEST] <JEEB> AV1 is going to most likely have matroska and ISOBMFF (mp4) mappings from the get-go
[21:49:49 CEST] <JEEB> MPEG-TS might follow if someone cares enough.
[21:50:49 CEST] <DHE> mpegts would suggest they want it to become a broadcast standard... (format hacks aside)
[21:51:07 CEST] <Fenrirthviti> that was in their mission statement at some point
[21:52:00 CEST] <JEEB> well AV1 as a video format isn't limited to any specific use case. but MPEG-TS or broadcast isn't a focus point as far as I can tell. thus the "if someone cares enough"
[21:52:40 CEST] <JEEB> also reminds me of how opus got registered after paying over 1000 USD, and then the standards body updated its listing+site and opus just disappeared. understandably some people were rather salty.
[21:52:44 CEST] <Fenrirthviti> they had a whole subset of goals for streaming though, I thought? unless I'm just misunderstanding
[21:53:00 CEST] <JEEB> streaming for web is one of the primary use cases, thus matroska and isobmff
[21:53:14 CEST] <JEEB> as in, GOOG and others care about that probably the most
[21:53:31 CEST] <Fenrirthviti> ah ok, broadcast meaning something else in this context then (I'm not so great with terminology semantics)
[21:53:48 CEST] <furq> Fenrirthviti: if you're looking for an nginx-rtmp replacement then https://github.com/arut/nginx-ts-module might be of interest
[21:53:59 CEST] <furq> although it's only really useful if you want to do dash live streaming
[21:54:13 CEST] <Fenrirthviti> yeah, the problem is latency
[21:54:22 CEST] <Fenrirthviti> rtmp is by far the lowest latency of anything viable right now
[21:54:26 CEST] <furq> yeah
[21:54:38 CEST] <Fenrirthviti> and that's basically priority one for my purposes
[21:54:45 CEST] <Fenrirthviti> it just blows I have to keep supporting flash :\
[21:54:45 CEST] <DHE> and webm by extension...
[21:54:56 CEST] <DHE> really? google dropped flash entirely last year
[21:55:04 CEST] <JEEB> it still kind of works :P
[21:55:13 CEST] <Fenrirthviti> you just have to add the exception manually
[21:55:17 CEST] <Fenrirthviti> it still works fine
[21:55:19 CEST] <DHE> if your browser doesn't support html5, it doesn't play anything... I've tried.
[21:55:36 CEST] <Fenrirthviti> I mean, I do it basically every day, it works.
[21:55:53 CEST] <JEEB> also just plain live presentation through HTTP without HLS/DASH does work and with probably similar latency to RTMP etc
[21:55:58 CEST] <JEEB> too bad the browsers are stupid
[21:56:03 CEST] <JEEB> regarding XHR and other stuff
[21:56:05 CEST] <DHE> there was a few weeks during the migration where embedded players supported flash but the main web player didn't...
[21:56:14 CEST] <Fenrirthviti> yeah, ingest becomes the issue with that scenario
[21:56:27 CEST] <furq> JEEB: does that work well with fmp4 now
[21:56:28 CEST] <Fenrirthviti> basically all output clients support is rtmp right now
[21:56:34 CEST] <furq> emphasis on "well"
[21:56:55 CEST] <JEEB> furq: it should. not that I've tested it with anything else than MPEG-TS back in the days :P
[21:56:57 CEST] <Fenrirthviti> DHE: I use video.js and the flash tech plugin
[21:57:06 CEST] <Fenrirthviti> not native browser stuff
[21:57:07 CEST] <furq> well yeah i specifically mean to browsers
[21:57:21 CEST] <BtbN> You are forced to use WebRTC if you want low latency now
[21:57:37 CEST] <JEEB> well I heard there's a chunked XHR-like API in webshit browsers now?
[21:57:46 CEST] <JEEB> so you could utilize that?
[21:57:55 CEST] <BtbN> Just use a websocket?
[21:57:56 CEST] <JEEB> so it doesn't buffer the whole received data
[21:57:57 CEST] <Fenrirthviti> yup, or SRT, but that's a nightmare to even get working with PoC
[21:58:03 CEST] <JEEB> fuck SRT
[21:58:11 CEST] <JEEB> you can just KISS it
[21:58:25 CEST] <JEEB> like seriously, I don't see why the flying fuck you would use WS or SRT or anything
[21:58:36 CEST] <JEEB> if you can just do it with a well-defined thing like an HTTP end point
[21:58:45 CEST] <BtbN> Because Websockets are the only thing that allows streaming to JavaScript
[21:59:00 CEST] <DHE> Fenrirthviti: and that will integrate with youtube?
[21:59:01 CEST] <JEEB> ok, so the chunked API is not around? so yes, webshits are still webshits
[21:59:15 CEST] <Fenrirthviti> DHE: Not sure what youtube has to do with anything here?
[21:59:17 CEST] <JEEB> I just heard there was a chunked transfer api
[21:59:20 CEST] <BtbN> There is no JavaScript API that does not buffer whole http requests
[21:59:32 CEST] <JEEB> ok, let me go beat TD-Linux around with a bush
[21:59:42 CEST] <JEEB> because he said something about a chunked thing
[21:59:42 CEST] <Fenrirthviti> I host my own streaming site, I don't use any public services
[22:00:05 CEST] <DHE> Fenrirthviti: I do something similar for limited uses. I'm using hls.js
[22:00:19 CEST] <furq> same
[22:00:28 CEST] <Fenrirthviti> yeah, I have hls support through nginx rtmp, it's just not viable for my use case because of the latency
[22:00:29 CEST] <BtbN> JEEB, you can process the data in chunks, there is an API for that
[22:00:40 CEST] <BtbN> But XHR will still buffer the whole thing internally and return it at the end
[22:00:46 CEST] <Fenrirthviti> and I don't have anywhere near the skillset to build something custom to get that latency down to any kind of reasonable time
[22:00:50 CEST] <JEEB> BtbN: ok
[22:01:00 CEST] <furq> Fenrirthviti: i'm pretty sure there's no solution for that that doesn't suck anyway
[22:01:06 CEST] <Fenrirthviti> That too.
[22:01:16 CEST] <furq> webrtc is the only thing i know of that has hassle-free support in browsers
[22:01:19 CEST] <JEEB> to be honest I would bet that MPEG-TS ingesting thing could be pretty well turned into serving MPEG-TS from HTTP as-is or something
[22:01:21 CEST] <Fenrirthviti> The major problem is that you have to start with RTMP.
[22:01:32 CEST] <furq> and by "hassle-free" i mean the playback, there is still plenty of hassle in actually getting a webrtc stream from a server to a browser
[22:01:33 CEST] <JEEB> and then you could stick WS onto that if XHR sucks that much with webshits
[22:01:40 CEST] <furq> and also you're then stuck with baseline h264
[22:01:56 CEST] <Fenrirthviti> If I could start with something else, it would be a hell of a lot easier.
[22:01:57 CEST] <BtbN> All browsers play high h264 just fine via WebRTC
[22:02:02 CEST] <furq> do they really
[22:02:03 CEST] <BtbN> They just can't encode it
[22:02:16 CEST] <JEEB> it probably uses the same lib for decoding anyways :P
[22:02:24 CEST] <JEEB> as normal H.264 content
[22:02:31 CEST] <DHE> do browsers even support encoding in the video API?
[22:02:37 CEST] <BtbN> yes
[22:02:41 CEST] <JEEB> yup
[22:02:41 CEST] <furq> i thought chrome still used openh264 for decoding
[22:02:44 CEST] <DHE> the only h264 encoder I know of for browsers is libopenh264
[22:02:53 CEST] <furq> in some kind of effort to push webm for webrtc
[22:02:59 CEST] <JEEB> yes, the encoder usually is openh264 in browsers
[22:03:01 CEST] <furq> unless they changed their mind pretty recently
[22:03:06 CEST] <lindylex> Can I use -frames:v to start a slice?  This is what I am doing ffmpeg -ss 00:00:11.0  -i leftleg.mp4  -frames:v 200 -vcodec copy -acodec copy -y cartWheel1.mp4
[22:03:06 CEST] <JEEB> the decoder is either lavc or native hwdecs
[22:04:01 CEST] <JEEB> but yea, I wish webshits would just give us streaming HTTP request API :P
[22:04:07 CEST] <JEEB> so much overly clever bullshit going on
[22:04:31 CEST] <JEEB> I did MPEG-TS over HTTP in 2008 to watch .jp TV remotely
[22:04:44 CEST] Action: JEEB points at the clouds and acts old
[22:06:04 CEST] <JEEB> (webrtc being a thing on top of RTP is kind of OK for the use cases it was made for, just for the record. which isn't normal media streaming as far as I can tell)
[22:06:19 CEST] <DHE> I've done things I'm not proud of to get free TV...
[22:06:35 CEST] <JEEB> in this case it was my own server with a receiver :P
[22:06:41 CEST] <JEEB> and vlc 0.8.6 or something
[22:06:53 CEST] <JEEB> because it could serve stuff through http
[22:07:16 CEST] <DHE> yep. there's an OTA receiver in my office. I've done something... not identical, but remote viewing nonetheless
[22:07:28 CEST] <furq> i successfully used ffserver 0.5
[22:07:29 CEST] <furq> what do i win
[22:07:53 CEST] <DHE> howie mandell's haircut (sp?)
[22:08:29 CEST] <furq> in case you're wondering, it sucked, it dropped out a lot, and then one day debian updated to ffmpeg 0.7 and my previously working config stopped working forever
[22:08:38 CEST] <Fenrirthviti> hurray
[22:08:45 CEST] <furq> but i did briefly use it with some degree of success
[22:08:58 CEST] <JEEB> in a way I must pay respects to those who got ffserver to actually chooch
[22:09:14 CEST] <furq> my place in hell is assured
[22:09:51 CEST] <durandal_1707> you will write ffserver config files for the rest of eternity
[22:10:04 CEST] <furq> sisyphus will envy me
[22:10:22 CEST] <furq> wait
[22:10:24 CEST] <furq> pity
[22:10:28 CEST] <JEEB> durandal_1707: btw how did lavfi reconfig work? I'd like to give a try at fixing the PTS fuck-ups with audio?
[22:12:43 CEST] <durandal_1707> there is no reconfig, complete filtergraph is recreated iirc
[22:12:51 CEST] <JEEB> oh
[22:12:57 CEST] <JEEB> no wonder it loses the state then
[22:13:41 CEST] <durandal_1707> i guess you would need to feed frames after new graph with different pts
[22:14:45 CEST] <durandal_1707> anyway explain your usage/scenario/issues as RFC and Nicolas may respond
[22:15:01 CEST] <durandal_1707> on ml
[22:15:15 CEST] <JEEB> sure, although thankfully it doesn't seem to break too much
[22:15:31 CEST] <JEEB> you just get a "queue just went backwards" in the audio encoder :D
[22:15:56 CEST] <JEEB> (and yes, timestamps fed into the filter chain are going up the whole time)
[22:16:12 CEST] <JEEB> it's just that at the reconfig it loses the offset from frame size differences or whatever
[22:17:45 CEST] <durandal_1707> what do you mean by reconfig? an auto-inserted filter in the graph?
[22:20:22 CEST] <JEEB> you can make it happen with just ffmpeg.c for example. "ffmpeg -i INPUT -filter_complex "[0:a:0]aresample=48000[a_out]" -map "[a_out]" -c:a aac -b:a 192k -ac 2 blah.mp4"
[22:20:36 CEST] <JEEB> then just go over a thing that switches audio layouts
[22:20:45 CEST] <JEEB> it will reconfig the filter chain
[22:21:17 CEST] <JEEB> (AC3 or something preferred, since it has different audio sample counts in AVFrames than AAC)
[22:26:49 CEST] <durandal_1707> JEEB: i run aresample with eac3 that changes 7.1 -> 5.1 and pts is continuous here, if I run astats i get 2 reports, because, obviously it calls uninit twice
[22:27:58 CEST] <JEEB> alright, might have something to do with other factors as well such as start timestamps etc. it happens with an MPEG-TS input I have around
[22:29:09 CEST] <durandal_1707> give me sample and command and i will play with it
[22:31:21 CEST] <JEEB> only thing I additionally do is :async=1 in that aresample, but the timestamps should be good in the sample I had the issue with. let me try and cut the sample since it's ~300MiB now
[23:28:25 CEST] <apus> for x264, if i decrease the crf value the speed gets slower and doesn't reach the 1x needed for realtime capture. if i go from preset medium to ultrafast the cpu load goes down but the speed goes up, however it still doesn't reach 1x unless i increase crf again. is there a way to utilize more cpu while using preset ultrafast to reach 1x speed?
[23:29:19 CEST] <BtbN> sounds more like your bottleneck is somewhere else
[23:31:16 CEST] <apus> hm, i'll take a closer look at the rest of the hardware then. thanks for the tip!
[23:32:32 CEST] <BtbN> x264 should not have issues fully making use of a cpu with non crazy amounts of cores
[23:33:01 CEST] <BtbN> unless you disabled or limited threading
[23:34:18 CEST] <kepstin> i don't think the entropy encoder can be parallelized tho, and in cases with really high bitrate streams, that might be the limit with single-core perf.
[23:34:31 CEST] <apus> for preset medium it goes to 100% on all cores. for ultrafast it just uses 70-80% of one.
[23:34:53 CEST] <kepstin> ultrafast turns off cabac tho, so hmm.
[23:35:11 CEST] <BtbN> if you can't hit 1.0x speed with ultrafast, and the CPU is not at 100%, I really think your slow part is elsewhere
[23:35:12 CEST] <kepstin> apus: what kind of bitrate and video resolution are you looking at?
[23:35:16 CEST] <BtbN> like, some filter, or the io
[23:37:33 CEST] <apus> i'm capturing vhs with -standard PAL, a yadif video filter and a black box over the bottom 6 pixels, audio is aac 128k with low- and highpass filters for audio noise. framerate set to 25, 720x576
[23:38:42 CEST] <kepstin> SD video? it's really strange to have a problem like this with SD video :/
[23:39:08 CEST] <kepstin> are you actually seeing problems, or are you just seeing that the "speed" number that ffmpeg cli shows is <1?
[23:39:16 CEST] <ChocolateArmpits> apus, over firewire or analog composite ?
[23:40:02 CEST] <apus> i'm saving it to external usb 3.0 wd red, so that shouldn't be the problem. i get errors if the buffers i increased already are filled up and speed hasn't reached 1x till then. scart -> composite -> usb 2.0 video grabber stk1160
[23:40:05 CEST] <kepstin> because of weirdness with frame queueing, etc., the speed number that ffmpeg reports probably won't be exactly 1, although it should average to that eventually.
[23:40:31 CEST] <kepstin> assuming a live capture source, like v4l or something
[23:41:14 CEST] <ChocolateArmpits> apus, have you tried encoding without any output ?
[23:44:42 CEST] <apus> ChocolateArmpits: no, i guess i'll try that to make sure the problem is not on that end. then i'll remove the filters one by one. something is off here. Early on i had problems with video/audio sync after only a few seconds, then i set thread_queue_size 512, then 1024, then 4096 for both audio (alsa) and video (v4l) and added rtbufsize 15M. afterwards if it got to 1x speed fast enough, there weren't any issues with corrupt video/audio.
[23:45:04 CEST] <apus> could that cause problems?
[23:45:49 CEST] <ChocolateArmpits> single-threaded filters can cause frames to not get rendered realtime even if the overall cpu usage isn't maxed out
[23:46:26 CEST] <ChocolateArmpits> I had that situation so had to restructure my filtergraph to be "less" heavy
[23:48:37 CEST] <apus> are yadif, drawbox and highpass,lowpass single-threaded filters?
[23:50:51 CEST] <furq> apus: ffmpeg -filters | grep ^..S
[23:50:55 CEST] <ChocolateArmpits> except for yadif they are
[23:51:00 CEST] <furq> those are all the filters with threading support
[23:51:05 CEST] <ChocolateArmpits> however those two audio filters shouldn't be causing any problems here
[23:52:22 CEST] <furq> what cpu is this anyway
[23:54:58 CEST] <apus> laptop cpu. i'll just use another system for the work then and see what doing the work over ssh looks like. then the single-core performance problem goes away. didn't expect yadif to be limited to single-threading. that is the problem then i guess. thanks for all your help!
[00:00:00 CEST] --- Thu Mar 29 2018

