[Ffmpeg-devel-irc] ffmpeg.log.20180112
burek
burek021 at gmail.com
Sat Jan 13 03:05:01 EET 2018
[09:59:41 CET] <Nacht> Hmm. I keep getting Segmentation fault (core dumped) when I try to use HTTP links as input in Windows Bash. Any ideas how I can debug it?
[10:13:29 CET] <Nacht> https://ffmpeg.org/ down ?
[10:19:44 CET] <rrva> Hello! Can we detect if an output video is interlaced or not in the same pass as we are encoding it? Or do I have to invoke ffmpeg a second pass to do this? Using the technique in http://www.aktau.be/2013/09/22/detecting-interlaced-video-with-ffmpeg/
[10:22:02 CET] <rrva> sorry, mixed up things.. I meant input video
[10:31:32 CET] <DHE> rrva: afaik the only way to tell interlaced content is to check the interlaced field in the AVFrame that comes out of a decoder
[10:32:01 CET] <DHE> some deinterlacers, like yadif, may support examining this field for itself and doing nothing on progressive content. though this may explode in your face on mixed content if that comes up
[10:31:32 CET] <rrva> so, conditionally encoding video with different flags depending on interlacing needs to be done by probing it first
[10:38:10 CET] <DHE> I'm not aware of a means of doing that, no.
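A minimal sketch of the yadif behaviour DHE mentions, assuming placeholder file names (`in.ts`, `out.mp4`): yadif's `deint=interlaced` mode only deinterlaces frames the decoder flagged as interlaced and passes progressive frames through.

```shell
# Sketch, not executed here: deinterlace only frames the decoder marked as
# interlaced, leaving progressive frames untouched. File names are placeholders.
cmd='ffmpeg -i in.ts -vf yadif=mode=send_frame:deint=interlaced -c:a copy out.mp4'
echo "$cmd"
```

As DHE warns above, this per-frame switching can behave unevenly on mixed content.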
[10:43:49 CET] <gp> ver 3.4 of ffmpeg has rockchip MPP support. I am struggling to find info on how to implement it. any help would be appreciated. thx
[11:44:45 CET] <pagios> hi all, I am writing x264 ! mpegtsmux ! hlssink; on the client side, when the user plays the m3u8 file it plays fine but with a huge latency, the video playing about 1 minute behind. I tried reducing the segment duration to 2 sec, same result. Anything else I can try?
[15:59:11 CET] <ThugAim> guise... I love tahrpup but this thing with wlan drivers for HP 350 models is murder... got it working in 6.0.5 but apparently the process is outdated for 6.0.6
[16:00:15 CET] <ThugAim> and sadly 9/11 laptops in my office are models requiring the same drivers...
[16:00:33 CET] <ThugAim> So... any word on an update of Tahr? :D
[16:11:01 CET] <ThugAim> TFW wrong chat
[17:06:13 CET] <yumbox> hi, how can I see lame's progress instead of ffmpeg's when I do this? "ffmpeg -i in.webm -f wav - | lame -V1 - out.mp3"
[17:11:49 CET] <buu> yumbox: what does: ffmpeg ... 2>/dev/null | lame ...; do for you?
[17:12:50 CET] <yumbox> the transcoding works, but I see no output at all
[17:16:12 CET] <c_14> does lame even show progress?
[17:16:26 CET] <c_14> also, why not use -c:v libmp3lame ?
[17:16:32 CET] <c_14> *-c:a
[17:17:30 CET] <yumbox> when I do "lame in.wav out.mp3" it does show progress
[17:18:08 CET] <c_14> probably just doesn't show it for stdin then
[17:18:14 CET] <c_14> because there's no way to know how long it is
[17:20:30 CET] <yumbox> okay
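A sketch of the pipeline buu suggests, with ffmpeg's console output silenced so only lame writes to the terminal (file names are placeholders; as c_14 notes above, lame still cannot show percent progress on piped input because it does not know the total length).

```shell
# Sketch, not executed here: ffmpeg writes its stats to stderr, so redirecting
# stderr leaves the terminal to lame. lame still cannot show a percentage for
# stdin input, since the total length of a pipe is unknown.
cmd='ffmpeg -i in.webm -f wav - 2>/dev/null | lame -V1 - out.mp3'
echo "$cmd"
```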
[17:20:41 CET] <yumbox> is there a difference between using libmp3lame and lame?
[17:20:57 CET] <yumbox> how can I add options like "-q 0" and "-V 1" to ffmpeg?
[17:32:18 CET] <furq> yumbox: the only difference is that ffmpeg doesn't write the encoder settings to the xing header
[17:32:33 CET] <furq> so it won't show up as V1 in an audio player that shows that sort of thing
[17:32:44 CET] <furq> the files will be identical if it's the same version of lame
[17:32:51 CET] <furq> or the audio stream, rather
[17:37:37 CET] <oborot> I resolved my issue yesterday where the video stream was getting set to 1 frame after adding an audio stream.
[17:38:04 CET] <oborot> Seems to be some sort of bug with ffmpeg; switching to mp4box to add the audio stream works.
[17:41:53 CET] <alexpigment> orobot: if you have a small sample that reproduces the problem, it may be worth logging something on trac
[18:04:59 CET] <oborot> alexpigment: Unfortunately I've been unable to reproduce it outside of my production environment
[18:05:25 CET] <alexpigment> ah
[18:37:17 CET] <oyla_> Greetings from Rostock/GER :) I'm currently working on the livestream for our alternative community radio, and now I'm at the point where I try to insert the metadata (title) dynamically into an opus/ogg stream. So every time the title changes, I'd like to put the new title into the stream. I tried the -metadata title="Foo" option for the output; this works initially, but I can't update it without stopping the stream. Anyone have an idea? :)
[18:38:52 CET] <DHE> this may be one of those situations where you have to rebuild ffmpeg with additional features in C, or just write an application to do the same job also in C.
[18:39:06 CET] <JEEB> I would recommend utilizing the API and then some sort of containerization that lets you update that information
[18:42:04 CET] <JEEB> and if libavformat only writes that data for the container you're using when you call the write_header() function, then you're more or less crapped on, because officially you're not supposed to call that after you've started writing data :)
[18:42:10 CET] <JEEB> (as in, another time)
[18:42:20 CET] <JEEB> even though for some formats it might work accidentally
[18:44:31 CET] <DHE> a quick skim of the ogg muxer makes it look like it only writes metadata into the headers
[18:46:28 CET] <JEEB> so basically it depends 100% on if header writing once more happens to work (and writes a valid bit stream for a metadata update), and then that topic should be brought up on the ML
[18:46:50 CET] <JEEB> or you just flush your old packets and re-initialize the muxer I guess
[18:59:28 CET] <oyla_> So it's not a regular move with ffmpeg as I hoped ;) We used liquidsoap before and it had a feature for that (but it was a little buggy and there is no deb maintainer, so we're using ffmpeg now). There is also a feature in Shoutcast for streaming, but we're using Icecast... Thanks for the quick input, it gets a lower priority now ;)
[19:02:21 CET] <JEEB> but yea, for live streaming and e.g. keeping the input alive while there's no actual input coming in, the ffmpeg.c API client might hit a wall soon enough :)
[19:02:39 CET] <JEEB> so just be ready that at some point you might need to write an API client for the libraries provided by FFmpeg
[19:02:57 CET] <JEEB> or something better, if such exists specifically for the containers and formats you require
[19:23:53 CET] <yumbox> how can I add options like "-q 0" and "-V 1" to ffmpeg?
[19:24:07 CET] <yumbox> (for trans/encoding to mp3)
[19:25:07 CET] <c_14> !codec libmp3lame @yumbox
[19:25:22 CET] <c_14> meh
[19:25:24 CET] <c_14> https://ffmpeg.org/ffmpeg-codecs.html#libmp3lame
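From the libmp3lame encoder docs linked above, the usual option mapping (a sketch with placeholder file names) is: lame's `-V n` becomes ffmpeg's `-q:a n` (VBR quality), and lame's `-q n` becomes `-compression_level n` (algorithm quality).

```shell
# Sketch, not executed here: libmp3lame with VBR quality 1 (lame's -V 1) and
# best algorithm quality (lame's -q 0). File names are placeholders.
cmd='ffmpeg -i in.webm -c:a libmp3lame -q:a 1 -compression_level 0 out.mp3'
echo "$cmd"
```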
[19:27:11 CET] <JEEB> `/33
[19:31:28 CET] <furq> !encoder libmp3lame @c_14
[19:31:29 CET] <nfobot> c_14: http://ffmpeg.org/ffmpeg-codecs.html#libmp3lame-1
[19:32:51 CET] <c_14> almost wrote that, but then wasn't sure
[19:33:11 CET] <yumbox> !encoder mp3 @nfobot
[19:33:14 CET] <furq> i would say "you can pm the bot to check" but that sort of undermines the point of a time-saving bot
[19:33:21 CET] <yumbox> !encoder libmp3lame @nfobot
[19:33:21 CET] <nfobot> nfobot: http://ffmpeg.org/ffmpeg-codecs.html#libmp3lame-1
[19:33:42 CET] <yumbox> !encoder libmp3lame @/quit
[19:33:42 CET] <nfobot> /quit: http://ffmpeg.org/ffmpeg-codecs.html#libmp3lame-1
[19:33:53 CET] <furq> lol
[19:33:58 CET] <yumbox> bad coding
[19:34:12 CET] <yumbox> software should always do what its user intends
[19:34:17 CET] <yumbox> he didnt quit
[19:34:19 CET] <furq> i'm sorry i let you down
[19:34:39 CET] <yumbox> How do you sleep at night?
[19:35:01 CET] <yumbox> How do you sleep at night, knowing you write bad software?
[19:35:39 CET] <yumbox> wait, that's redundant to ask in a #ffmpeg channel, nevermind.
[19:37:26 CET] <furq> uh
[19:37:28 CET] <furq> http://ffmpeg.org/ffmpeg-codecs.html#mpeg2
[19:37:31 CET] <furq> did this get renamed
[19:37:41 CET] <furq> it's still mpeg2video in 3.3
[19:39:30 CET] <JEEB> that sounds like a weird change
[19:39:53 CET] <JEEB> while I do agree that mp2 is the thing used for mpeg-1 layer 2
[19:42:30 CET] <c_14> It's been called that since 2014 apparently (in the docs at least)
[19:43:10 CET] <JEEB> ah yes
[19:43:15 CET] <JEEB> that probably is it :P
[19:43:49 CET] <teratorn> anyone know of solutions for encoding fMP4 video once and generating HLS (.m3u8) and DASH (.mpd) manifests off the same assets?
[19:45:28 CET] <thebombzen__> teratorn: probably want to use the tee muxer
[19:45:31 CET] <JEEB> teratorn: I'm pretty sure either the HLS or the DASH meta muxer got that support added
[19:45:42 CET] <JEEB> or at least I've seen patches fly on the mailing list
[19:46:08 CET] <thebombzen__> !muxer tee @teratorn
[19:46:08 CET] <nfobot> teratorn: http://ffmpeg.org/ffmpeg-formats.html#tee-1
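A sketch of the tee muxer nfobot links (output names and the UDP address are placeholders): encode once and fan the same encoded streams out to multiple outputs.

```shell
# Sketch, not executed here: one encode, two outputs via the tee muxer.
# The archive file name and the UDP address are placeholders.
cmd='ffmpeg -i in.mp4 -map 0 -c:v libx264 -c:a aac -f tee "archive.mkv|[f=mpegts]udp://10.0.0.1:1234"'
echo "$cmd"
```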
[19:46:31 CET] <teratorn> not familiar with meta muxers :/
[19:46:43 CET] <JEEB> well it's meta because it calls another one :P
[19:47:01 CET] <JEEB> it's the dash/hls muxers in lavf, which call movenc and/or mpegts muxers internally
[19:47:44 CET] <teratorn> for what purpose exactly?
[19:47:56 CET] <JEEB> ?
[19:49:19 CET] <JEEB> right, the feature was in dashenc
[19:49:20 CET] <JEEB> http://git.videolan.org/?p=ffmpeg.git;a=commit;h=8c2b37e678e3d5ab16fef471fffc741b88622a85
[19:49:48 CET] <JEEB> so that or newer FFmpeg revision should let you write both a HLS and an MPEG-DASH manifest with fragmented ISOBMFF as written by the movenc muxer
[19:51:22 CET] <teratorn> hrmmm, https://github.com/jronallo/abrizer
[19:51:48 CET] <teratorn> ok cool, i'll check it
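Based on the dashenc commit JEEB links above, a sketch (placeholder file names; the `hls_playlist` option assumes an FFmpeg build that includes that commit):

```shell
# Sketch, not executed here: write a DASH manifest plus an HLS playlist over
# the same fragmented-MP4 segments. Requires a build with the commit above.
cmd='ffmpeg -i in.mp4 -c copy -f dash -hls_playlist 1 out.mpd'
echo "$cmd"
```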
[21:24:34 CET] <zerodefect> Using the C-API, can I assume that the duration of AVPacket will be correctly handled by the encoder? The reason I ask is that when I hand over the AVPacket to the HLS muxer, I'm getting a repeated log message per AVPacket: 'pkt->duration = 0, maybe the hls segment duration will not precise'. Now as anticipated, the duration of AVPacket after the encoding process is '0'.
[21:26:03 CET] <DHE> zerodefect: HLS specifically calls for the .m3u8 file to specify the duration of each individual segment file. that means that it needs to know the duration of the last frame. the actual duration is (lastframe->pts + lastframe->duration - firstframe->pts)
[21:29:43 CET] <zerodefect> DHE: Oh ok. So when you say 'firstframe' and 'lastframe', are those the first and last frames of the entire segment?
[21:30:03 CET] <DHE> yes
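DHE's formula worked through with made-up numbers: a 50-frame segment of 25 fps video in the 90 kHz MPEG-TS timebase, so 3600 ticks per frame.

```shell
# Worked example of (lastframe->pts + lastframe->duration - firstframe->pts)
# using invented values: 50 frames at 25 fps in a 90 kHz timebase.
first_pts=0
last_pts=$((49 * 3600))      # pts of the 50th frame
last_duration=3600           # one frame at 25 fps
segment_ticks=$((last_pts + last_duration - first_pts))
echo "$segment_ticks ticks = $((segment_ticks / 90000)) s"
```

This prints `180000 ticks = 2 s`, i.e. the 2-second segments typical of low-latency HLS configurations.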
[21:30:19 CET] <devinheitmueller> Anybody have any suggestions for dealing with data streams with sparse packet intervals? I've hacked an extra codec into mux.c to indicate it's not interleaved, but it's pretty ugly.
[21:32:11 CET] <zerodefect> DHE: So on the current segment I _think_ what you're saying is that I'd give the muxer a running (and updated) duration of the segment?
[21:33:12 CET] <DHE> well for constant FPS material the duration is just the difference between any consecutive frames' PTS values
[21:34:57 CET] <zerodefect> DHE: Ok. Thanks. Now on the encoder/AVPacket front, should I not expect the encoder to set the duration of AVPacket?
[21:35:19 CET] <zerodefect> DHE: Using H.264 by the way. Duration is always 0.
[21:39:22 CET] <BtbN> the encoder will just pass through the original duration
[21:42:15 CET] <zerodefect> BtbN: Thanks. Not seeing that but maybe there is something funny with my AVFrame that is handed to the encoder.
[21:46:37 CET] <zerodefect> Or if it has something to do with the content being interlaced
[21:47:50 CET] <kepstin> I'd hope you're deinterlacing it, web and mobile players that support hls/dash tend not to have deinterlacers...
[21:49:09 CET] <zerodefect> kepstin: Good point! Let me double check if it's interlaced!
[21:55:10 CET] <zerodefect> kepstin: Quite right. It's being de-interlaced (must be happening further upstream).
[00:00:00 CET] --- Sat Jan 13 2018