[Ffmpeg-devel-irc] ffmpeg.log.20180226

burek burek021 at gmail.com
Tue Feb 27 03:05:01 EET 2018


[00:28:27 CET] <gagandeep> guys, what do you call the library inclusion in gcc, like in this command: 'gcc -o main.c -lavcodec'
[00:29:06 CET] <gagandeep> the -lavcodec while compiling; i want to read more about this option
[00:37:58 CET] <gagandeep> nevermind it's related to something called pkg-config
[00:45:47 CET] <DHE> gagandeep: pkg-config is a tool that produces the commandline options required for compiling with libraries. avcodec will require avutil among other things, so pkg-config can track that
[00:46:47 CET] <JEEB> as I noted the other day, set PKG_CONFIG_PATH to <your prefix>/lib/pkgconfig where the prefix is what you set in FFmpeg's configure (default is /usr/local)
[00:46:54 CET] <JEEB> and pkg-config --libs libavcodec
[00:46:58 CET] <GTAXL> How do you set HTTPS headers for an HLS, I'm using version ffmpeg version 3.2.2-1~bpo8+1
[00:47:11 CET] <JEEB> will give you the flags to link against libavcodec
[00:47:26 CET] <JEEB> and if you add --cflags you get the stuff to be able to include the headers
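A minimal sketch of the workflow described above, assuming the default /usr/local prefix and a single-file main.c (both are assumptions, not from the log):
    export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig
    gcc main.c -o main $(pkg-config --cflags --libs libavcodec libavutil)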
[08:46:57 CET] <techbomber> is it better to use 12bit/10bit over 8bit ?
[08:47:16 CET] <pmjdebruijn> without context that question has no answer
[08:47:58 CET] <techbomber> for video encoding
[08:48:01 CET] <pmjdebruijn> but essentially, if your source is 8bit, it's unlikely you'll get a big advantage out of >8bit
[08:48:18 CET] <pmjdebruijn> also >8bit may reduce your video file compatibility with older players
[08:48:27 CET] <techbomber> fair enough
[08:48:44 CET] <techbomber> what does commercial bluray use then
[08:48:49 CET] <pmjdebruijn> but again, you're not providing much information to sensibly answer your question
[08:48:58 CET] <pmjdebruijn> techbomber: 10bit is only available on 4K bluray IIRC
[08:49:05 CET] <pmjdebruijn> but /me isn't an expert by a long shot
[08:49:23 CET] <pmjdebruijn> so you'll likely need a very recent bluray player
[08:49:38 CET] <pmjdebruijn> AFAIK 1080p "traditional" BluRay is 8bit
[08:50:01 CET] <techbomber> i see even 12bit now
[08:52:21 CET] <pmjdebruijn> "see" where?
[08:52:33 CET] <pmjdebruijn> anyhow increasing bit depth is increasingly pointless
[08:52:59 CET] <pmjdebruijn> there's a difference what you want/need in edit, vs the final output product
[08:53:08 CET] <pmjdebruijn> higher bit depth mostly makes sense for editing
[08:53:12 CET] <pmjdebruijn> as intermediates
[08:53:33 CET] <pmjdebruijn> the 10bit in an output format is mostly to accommodate higher dynamic range displays, which are still fairly uncommon
[08:54:11 CET] <pmjdebruijn> but again, if you're transcoding, and your source is 8bit, it's irrelevant anyhow
[08:54:26 CET] <pmjdebruijn> unless you're going to do some funky editing
[09:07:17 CET] <furq> 10bit will give a minor compression advantage even for 8-bit sources and particularly for animation
[09:07:32 CET] <furq> but support for h264 10-bit is basically nonexistent outside of desktop computers
[09:07:45 CET] <furq> hevc 10-bit is a bit better in that regard
[09:08:31 CET] <furq> if you're looking for picture quality improvements with 8-bit sources then no
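A hedged example of forcing a 10-bit encode as furq describes, assuming an ffmpeg build whose libx265 has 10-bit support (file names and codec choice are illustrative):
    ffmpeg -i input.mkv -c:v libx265 -pix_fmt yuv420p10le -c:a copy output.mkv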
[11:03:42 CET] <capreyon> can someone please suggest the repository link for the 360 video filter
[11:03:46 CET] <termos> The AVCodecContext.time_base is `0/2` after `avcodec_parameters_to_context` and `avcodec_open2` when opening my input. This causes my filter graph to segfault as it requires a proper time_base to generate the `buffer` arguments. Is this a bug in the library? Right now I'm forced to use the legacy AVStream.codec AVCodecContext instead, as that's the only one with the correct time_base values
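A sketch of one workaround for the time_base problem termos describes: take the time_base from the AVStream rather than the decoder context when building the buffersrc arguments (variable names follow the filtering examples but are assumptions):
    AVStream *st = fmt_ctx->streams[video_stream_index];
    char args[512];
    snprintf(args, sizeof(args),
             "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
             dec_ctx->width, dec_ctx->height, dec_ctx->pix_fmt,
             st->time_base.num, st->time_base.den,           /* stream time_base is reliably set */
             dec_ctx->sample_aspect_ratio.num, dec_ctx->sample_aspect_ratio.den);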
[11:05:44 CET] <durandal_1707> capreyon: are you applying for gsoc?
[11:09:18 CET] <capreyon> yes @durandal_1707
[11:12:01 CET] <durandal_1707> capreyon: search for it on github under name panorama
[11:14:19 CET] <durandal_1707> it should be in ffmpeg fork
[11:20:28 CET] <capreyon> https://github.com/FFmpeg/FFmpeg @durandal_1707 are you talking about this link?
[11:21:52 CET] <durandal_1707> capreyon: no, but you are looking at a repository which is a fork of that one and has a panorama branch
[11:22:39 CET] <durandal_1707> s/at/for
[11:34:17 CET] <durandal_170> capreyon: found it?
[12:04:30 CET] <capreyon> can someone please send me the link to the 360 video filter repository on github?
[12:06:11 CET] <durandal_170> capreyon: why don't you contact your mentor via mail?
[12:07:33 CET] <capreyon> kk fine thanks @durandal_170
[12:42:02 CET] <shtomik> Can somebody help me with transcoding please?
[12:56:42 CET] <shtomik> Guys, I always get this issue: [mp4 @ 0x7f87ee000600] Application provided invalid, non monotonically increasing dts to muxer in stream 1: -1024 >= -1024 (and it's always the video stream) when I want to transcode an mp4 file to mp4. I'm using the transcoding.c example; what could it be? thanks!
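One common cause of that muxer error is packet timestamps not being rescaled from the encoder time base to the output stream time base before writing; a minimal sketch of the rescale step (names follow transcoding.c but are assumptions):
    /* after avcodec_receive_packet(enc_ctx, &enc_pkt) succeeds */
    av_packet_rescale_ts(&enc_pkt, enc_ctx->time_base,
                         ofmt_ctx->streams[stream_index]->time_base);
    enc_pkt.stream_index = stream_index;
    ret = av_interleaved_write_frame(ofmt_ctx, &enc_pkt);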
[14:51:41 CET] <classic> Hi, does somebody know how I can use vaapi with two graphics cards? because it alternates between /dri128 and /dri129
[14:51:48 CET] <classic> I need to get this fixed.
[15:33:06 CET] <shtomik> Hi guys, can someone explain this to me: I'm using transcoding.c to transcode an mp4 file to mp4 (with different encoder params), but the output mp4 has low fps and bitrate, and as a result it's twice as long?
[15:33:58 CET] <shtomik> But I set the fps and bitrate to 2x. Thanks!
[15:34:30 CET] <shtomik> Input fps = 23 and I set output fps = 30; input bitrate = 2k, output bitrate = 4k.
[16:02:07 CET] <caim> Hello again :-) Anyone know what might be wrong with this command? /usr/local/bin/ffmpeg/ffmpeg-3.4.1-64bit-static/ffmpeg -i rtmp://ipaddress:1935/live1/streamname -f image2 -s 1000x564 -vf fps=1/60 -an -updatefirst 1 -y /var/www/streamx.jpg . Trying to generate thumbnails for a bunch of streams but it seems really hit or miss. Sometimes it takes one minute, sometimes it takes even 5 or 6...
[16:02:40 CET] <King_DuckZ> hi, can somebody point me to a recent tutorial/guide/example on how to write a video file using ffmpeg please?
[17:03:47 CET] <gh0st3d> Anyone know of good ways to programmatically create animated videos? Currently using phantomjs with an animated html file and piping screenshots to ffmpeg. It runs a little slower than I'd like with not the best quality, and some of the animations aren't supported in phantom it seems
[17:27:30 CET] <blap> render each frame to a png?
[17:37:53 CET] <saml> you got some html crap and you want to turn that into video?
[17:38:22 CET] <saml> can't think of a better way than screencapture
[17:38:36 CET] <saml> unless browser can export frames
[17:39:43 CET] <DHE> I've seen browser javascript that can export frames. but I think I'd rather do the screen capture  method
[17:40:46 CET] <saml> use chrome cast :P
[17:41:01 CET] <DHE> that... might actually work..
[17:41:24 CET] <gh0st3d> Lol, i like all the phrasing. Yeah I'm just wondering if there's some magical library that's relatively easy to use to create animated videos without HTML
[17:42:05 CET] <saml> what's it for? what are you building?
[17:42:05 CET] <gh0st3d> I've been googling for a week or so though, so at this point I'm pretty confident there's not haha
[17:42:22 CET] <saml> are you generating video for customer support through automated web page interaction?
[17:42:47 CET] <gh0st3d> We're starting to create content videos, and I'm building a system that creates "personalized" versions for our members. It basically just adds 10s to the end of the video with their profile showing
[17:42:59 CET] <gh0st3d> I have it working with a static image, and I've got the rough version of the animated HTML with their profile
[17:43:23 CET] <gh0st3d> But the animated is pushing the time close to 30s which will cause it not to work with the serverless method I was planning on
[17:43:49 CET] <saml> sounds pretty web scale. good luck
[17:44:16 CET] <gh0st3d> Thanks, appreciate it!
[18:31:48 CET] <joao> Hi there, can anyone help me?
[18:32:34 CET] <pomaranc> joao: no
[18:33:25 CET] <joao> I have 2 ".m3u8" links, and I want to use ffmpeg to rtmp them to 2 different places. But I don't know how
[18:35:09 CET] <joao> Can anyone help?
[18:40:42 CET] <joao> I have 2 ".m3u8" links, and I want to use ffmpeg to RTMP them to 2 different places. But I don't know how
[18:42:18 CET] <saml> joao, two ffmpeg commands?
[18:42:40 CET] <saml> ffmpeg -i 1.m3u8 <input options like codec> rtmp://1
[18:42:49 CET] <saml> ffmpeg -i 2.m3u8 <input options like codec> rtmp://2
[18:43:18 CET] <saml> *output options
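A hedged example of the kind of command being sketched here, copying one HLS input straight to an RTMP output (URLs are placeholders; RTMP servers generally expect the flv muxer):
    ffmpeg -re -i https://example.com/1.m3u8 -c copy -f flv rtmp://server/app/streamkey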
[18:44:49 CET] <joao> didn't work
[18:44:58 CET] <joao> it stops after 5 seconds
[18:45:35 CET] <joao> saml thanks for trying to help :)
[18:45:40 CET] <saml> i have no idea
[18:46:02 CET] <saml> what's rtmp endpoint? twitch?
[18:47:03 CET] <joao> widestream.io
[18:47:13 CET] <saml> wow nice
[18:47:28 CET] <saml> did you try streaming a regular video file to rtmp?
[18:47:45 CET] <saml> like ffmpeg -re -i yolo.mp4 ....  rtmp://widestream.io
[18:47:52 CET] <joao> yes,take a look
[18:47:53 CET] <joao> http://widestream.io/live-24134
[18:48:04 CET] <saml> nice
[18:48:12 CET] <saml> do you have example url of m3u8?
[18:48:21 CET] <saml> actually i  never tried streaming m3u8 to rtmp
[18:48:53 CET] <saml> wow it's live streaming now at widestream.io
[18:48:59 CET] <saml> is that copyrighted movie?
[18:49:16 CET] <joao> it is a tv channel actually
[18:49:22 CET] <joao> from Portugal
[18:50:03 CET] <saml> that's nice
[18:51:18 CET] <joao> the idea is to make this for like 3 or 4 channels
[18:51:26 CET] <joao> but i can only do one at a time
[18:51:30 CET] <joao> :(
[19:02:45 CET] <alexpigment> hey guys, i've got a question about using filters with multiple input files
[19:03:09 CET] <alexpigment> i'm specifying -i [infile] twice so that i can adjust the audio/video sync with -itsoffset
[19:03:15 CET] <alexpigment> i've got a handful of video filters being used
[19:03:23 CET] <alexpigment> but i also need to apply dcbias to the audio
[19:03:32 CET] <alexpigment> do i do two -vf chains?
[19:04:07 CET] <alexpigment> oh nm
[19:04:08 CET] <alexpigment> -af
[19:04:12 CET] <alexpigment> sorry :)
[19:12:36 CET] <saml> i use -filter_complex because i'm complex
[19:23:31 CET] <alexpigment> -filter_complex requires that i look at examples every time i use it, so i try not to use it :)
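A hedged example of the layout alexpigment describes: the same file given twice, -itsoffset shifting the copy whose audio is used, video filters on -vf and the audio filter on -af (file name, offset value, and the dcshift filter standing in for the DC-bias correction are all illustrative):
    ffmpeg -i in.mp4 -itsoffset 0.5 -i in.mp4 -map 0:v -map 1:a -vf yadif -af dcshift=0.1 out.mp4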
[19:35:58 CET] <gagandeep> guys, is the tutorial on writing a video player in 1000 lines deprecated? when using the function av_register_all(), the compiler prints a deprecation warning
[19:50:38 CET] <King_DuckZ> is there any difference between av_register_all() and avcodec_register_all()? which one should I use?
[19:52:20 CET] <gagandeep> King_DuckZ: even though it was deprecated, my compiler was giving an error because i was not providing flags for libswscale; now it is compiling properly
[19:53:19 CET] <King_DuckZ> gagandeep: which one is deprecated?
[19:54:06 CET] <JEEB> King_DuckZ: in the latest master av_register_all is no longer in the docs
[19:54:24 CET] <JEEB> http://git.videolan.org/?p=ffmpeg.git;a=blob;f=doc/APIchanges#l33
[19:54:31 CET] <JEEB> it's in the APIchanges doc
[19:55:01 CET] <King_DuckZ> JEEB: I'm following the 3.4 docs, isn't that the latest version?
[19:55:09 CET] <JEEB> it's the latest released version
[19:55:24 CET] <JEEB> the difference is that av_register_all is in libavformat
[19:55:37 CET] <JEEB> and avcodec_register_all is in avcodec
[19:56:26 CET] <JEEB> that said I only called av_register_all :D
[19:56:30 CET] <JEEB> in my example code I wrote some time ago
[19:56:43 CET] <King_DuckZ> so I should call both? or just avcodec_register_all() since the other one is getting removed?
[19:56:43 CET] <JEEB> (and avfilter_register_all)
[19:56:50 CET] <JEEB> both are removed
[19:57:08 CET] <JEEB> the whole pseudo-registration stuff is going away
[19:57:09 CET] <King_DuckZ> :s
[19:57:24 CET] <JEEB> but as it notes it's still within deprecation period
[19:57:27 CET] <JEEB> so just call av_register_all
[19:57:31 CET] <JEEB> that seems to be enough
[19:57:38 CET] <King_DuckZ> ok
[19:57:51 CET] <JEEB> (or you can make an ifdef or configure check in your project about it)
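A sketch of the ifdef check JEEB mentions; the 58.9.100 cutoff is taken to be the lavf version that deprecated av_register_all() and may need checking against the APIchanges doc for your tree:
    #include <libavformat/avformat.h>

    static void init_libav(void)
    {
    #if LIBAVFORMAT_VERSION_INT < AV_VERSION_INT(58, 9, 100)
        av_register_all();   /* still required on pre-deprecation libavformat */
    #endif
    }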
[19:58:24 CET] <King_DuckZ> I'm trying to follow this http://ffmpeg.org/doxygen/3.4/encode_video_8c-example.html and I'm getting confused
[19:59:02 CET] <JEEB> don't follow that one specifically, since it does a lot of stuff that you can do better with the helpers in the API
[19:59:39 CET] <JEEB> like the dummy image generation
[19:59:53 CET] <JEEB> also you generally don't want to just fwrite your stuff out
[20:00:04 CET] <JEEB> instead you want to use avformat to properly write it out in a container
[20:00:11 CET] <King_DuckZ> ok, do you have a link to some tutorial or example then please? in my program I have frames being generated by the code, and I just need to write the backend that saves them in a video file
[20:00:37 CET] <King_DuckZ> so I don't care about reading at this point
[20:01:10 CET] <JEEB> either the transcoding or muxing example
[20:01:42 CET] <JEEB> it contains a whole lot more than you need, but you already probably grasp the idea of having an AVFrame and filling it with your image?
[20:01:56 CET] <JEEB> then you just need to initialize an encoder and feed/receive frames from it
[20:02:05 CET] <King_DuckZ> that's wishful thinking :p
[20:02:14 CET] <JEEB> this API is the currently recommended decoding/encoding API https://www.ffmpeg.org/doxygen/trunk/group__lavc__encdec.html
[20:02:44 CET] <JEEB> (it has been for quite a while, we don't switch the internals around too often vOv)
[20:03:23 CET] <King_DuckZ> ok, I'll have to go catch my train, I'll look into this tomorrow again, thanks so far!
[20:04:21 CET] <JEEB> but yea, basically you want to 1) make your images into AVFrames, 2) open an encoder 3) feed it stuff while you can't receive stuff 4) when you receive stuff you then open the avformat context and add stream(s) and write header -> start writing
[20:04:35 CET] <JEEB> at the end flush the encoder and write the footer
[20:04:55 CET] <JEEB> close avformat context and consider the thing written
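A compressed sketch of the feed/receive loop JEEB outlines, using the avcodec_send_frame()/avcodec_receive_packet() API from the link above; error handling is trimmed, and enc_ctx, ofmt_ctx and the output AVStream *st are assumed to be set up already with avformat_write_header() already called:
    avcodec_send_frame(enc_ctx, frame);              /* pass NULL at the end to flush */
    AVPacket *pkt = av_packet_alloc();
    while (avcodec_receive_packet(enc_ctx, pkt) == 0) {
        av_packet_rescale_ts(pkt, enc_ctx->time_base, st->time_base);
        pkt->stream_index = st->index;
        av_interleaved_write_frame(ofmt_ctx, pkt);   /* muxer takes the packet reference */
    }
    av_packet_free(&pkt);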
[20:07:28 CET] <shtomik_> JEEB: Hi JEEB, can you help me with transcoding, please? I don't understand why I have a problem with audio when I use avformat devices for transcoding to mp4.
[20:08:57 CET] <JEEB> shtomik_: as soon as I saw those mac specific input things all my interest stopped. also it's telling that your problem might not be in the code per se if it works without the video
[20:13:02 CET] <shtomik_> JEEB: I think the problem is with timing, because on output I have more video packets than audio (video packets for 30 sec and audio for 25)
[20:13:23 CET] <shtomik_> JEEB: but with ffmpeg it works perfectly ;(
[21:54:11 CET] <marcurling> Hello, are there some keys I can press while encoding to get stats, pause, or whatever?
[21:57:32 CET] <DHE> unless you changed the loglevel, you should get a status line refreshed every 1/2 second by default
[22:01:18 CET] <marcurling> DHE I got it; I just wanted to know whether you could get some other info or tune some params while encoding... Thanks indeed
[22:51:09 CET] <greentea> I am having trouble trying to get libvorbis and libogg into ffmpeg in a static build using the MSVC compiler and linker, windows x64, using the msys shell
[22:51:34 CET] <greentea> https://pastebin.com/wqYmRbMr
[23:00:42 CET] <trfl> I'm trying to do hardsubbing (burn in subtitles) by piping mpv into ffmpeg, and am having problems keeping the colorspace from getting lost
[23:00:56 CET] <trfl> for example, the original mkv that i'm reading from will say the following if handed to ffmpeg: Stream #0:0(jpn): Video: h264 (High 10), yuv420p10le(tv, bt709, progressive), 1920x1080 [SAR 1:1 DAR 16:9]
[23:01:17 CET] <trfl> but if I transcode it to libx264, I get this: Stream #0:0(jpn): Video: h264 (High) (H264 / 0x34363248), yuv420p, 1920x1080 [SAR 1:1 DAR 16:9]
[23:02:43 CET] <JEEB> could either be a filter chain not passing that on issue, or something else not passing it on
[23:03:02 CET] <JEEB> the progressiveness is by default so that's kept, just the bt.709 flag is lost
[23:03:31 CET] <trfl> the full command I'm attempting currently is:  ffmpeg -i blurei.mkv -map 0:v -vcodec libx264 -t 1 -f nut hardsub/ffmpeg.nut
[23:03:41 CET] <trfl> losing the bt.709 flag could cause issues for some players right?
[23:04:07 CET] <JEEB> well thankfully your resolution is something that everything expects to be BT.709
[23:04:14 CET] <JEEB> larger than 1024x576
[23:04:27 CET] <JEEB> but still, would be nice if it would keep that
[23:04:39 CET] <trfl> yeah, was hoping the same command would work for dvdrips as well
[23:04:41 CET] <JEEB> (when no relevant conversions happened from the filter chain)
[23:04:50 CET] <alexpigment> trfl: you can specify the colorspace stuff
[23:05:03 CET] <JEEB> yes, but that's kind of besides the point :D
[23:05:19 CET] <alexpigment> -color_primaries bt709 -color_trc bt709 -colorspace bt709
[23:05:33 CET] <alexpigment> the real issue though is losing high 10
[23:05:34 CET] <JEEB> if he isn't doing actual colorspace conversions the metadata should stay, and if there was a conversion the AVFrame should be updated to match
[23:05:44 CET] <alexpigment> in which case, you need to have a 10-bit version of x264 in your ffmpeg
[23:05:45 CET] <JEEB> well generally hardsubbing is done for hw compatibility
[23:05:57 CET] <JEEB> and you well know how well 10bit is supported in hwdec :)
[23:06:04 CET] <trfl> that's nice to know (and I'll use that if push comes to shove) but I was hoping for a "simple" command that could be handed an .mkv file and it'd automatically Do The Right Thing without having to tweak it
[23:06:06 CET] <JEEB> so that might be as expected
[23:06:13 CET] <trfl> i'll explicitly drop Hi10p and go for yuv420p8
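A hedged rewrite of trfl's command with the two changes discussed, forcing 8-bit yuv420p and carrying the BT.709 tags via alexpigment's flags (paths are taken from trfl's own command line):
    ffmpeg -i blurei.mkv -map 0:v -vcodec libx264 -pix_fmt yuv420p -color_primaries bt709 -color_trc bt709 -colorspace bt709 -t 1 -f nut hardsub/ffmpeg.nut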
[23:07:11 CET] <JEEB> in theory since ffmpeg.c is "static as fuck" it could wait until it gets the first image from the filter chain to initialize the encoder
[23:07:15 CET] <JEEB> and set the values from the filter chain
[23:07:28 CET] <JEEB> after which the only thing that's required is to make sure the filters pass it on
[23:09:40 CET] <trfl> sounds like a good approach in the long run yea
[23:09:55 CET] <trfl> I suppose the alternative would be to always convert the colorspace to bt709 on the way out, and then tag it as that? I'm not familiar with the kind of loss you'd get out of that
[23:10:04 CET] <trfl> assuming a dvdsrc that is
[23:10:38 CET] <JEEB> DVDs are BT.601/the other very similar thing
[23:10:52 CET] <JEEB> which is what most players expect for 1024x576 or smaller
[23:10:56 CET] <trfl> ooh, thought they were night and day :/
[23:11:19 CET] <JEEB> well, I meant there are two very similar if not the same values which DVDs use
[23:11:33 CET] <JEEB> there is a small diff between BT.709 and BT.601 but even that is barely visible usually
[23:11:43 CET] <JEEB> BT.709 vs BT.2020 is where the gamut goes wiiide
[23:12:02 CET] <alexpigment> i always thought 601 vs 709 was just limited vs full
[23:12:07 CET] <alexpigment> but i could be very wrong about that
[23:12:12 CET] <JEEB> http://avisynth.nl/index.php/Colorimetry#How_can_I_see_if_the_correct_standard_is_used_upon_playback.3F
[23:12:27 CET] <JEEB> that's the same image rendered as either Rec.601 or Rec.709
[23:13:06 CET] <alexpigment> yeah that's pretty subtle..
[23:13:15 CET] <alexpigment> definitely more subtle than limited vs full range
[23:13:19 CET] <JEEB> yup
[23:14:12 CET] <JEEB> anyways, since crap was really bad at messaging the colorimetry before, players will basically expect BT.601 if ~< (1024x576), and BT.709 if >~ (1024x576)
[23:14:18 CET] <JEEB> even if not marked
[23:14:23 CET] <alexpigment> although there is a bigger difference in 601 mistakenly decoded as 709 and vice versa
[23:14:37 CET] <alexpigment> yeah i ran into that in browsers before
[23:14:50 CET] <JEEB> well the second thing is that
[23:14:56 CET] <alexpigment> i had to tag something as 601 or 709 explicitly if the resolution was a certain size
[23:14:58 CET] <JEEB> the upper "correct" one is BT.601
[23:15:14 CET] <JEEB> and then the lower one is "mistakenly converted to RGB as BT.709"
[23:15:21 CET] <trfl> aight so that's not bad enough to worry about, good to know
[23:15:37 CET] <trfl> but how about the plan to convert everything to bt.709 and tag it as that?
[23:15:44 CET] <JEEB> not recommended
[23:15:50 CET] <JEEB> why do an extra conversion if you don't have to?
[23:16:33 CET] <JEEB> trfl: anyways, if you are not doing conversions you can just run ffprobe first with the json output
[23:16:37 CET] <JEEB> -show_streams
[23:16:47 CET] <JEEB> and pick the parameter from there
[23:16:51 CET] <JEEB> and set it in ffmpeg.c
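A hedged example of the probe JEEB suggests; -of is the short form of -print_format, and the input name is reused from trfl's command:
    ffprobe -v quiet -of json -show_streams blurei.mkv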
[23:18:25 CET] <trfl> aight, sounds good! say, is there a way to set it using the ffmpeg cli tool? or is it purely an internal thing?
[23:18:41 CET] <JEEB> yes, alexpigment over there noted the parameters
[23:18:50 CET] <JEEB> for ffmpeg.c to force the encoder to have those params
[23:19:10 CET] <trfl> ...i'm a dum
[23:19:15 CET] <trfl> aight, thanks o/
[23:19:51 CET] <alexpigment> the parameters i gave are generic ffmpeg params; should work with any encoder. there are specific x264 params that you can also specify, but i think (rather, i hope) the generic ones map to those x264 ones on the backend
[23:20:34 CET] <JEEB> they're the ones that set stuff in the avcodeccontext
[23:20:43 CET] <JEEB> http://git.videolan.org/?p=ffmpeg.git;a=blob;f=libavcodec/libx264.c;h=12379ff76364760b8d95a35f9d8a7a3711729fa9;hb=HEAD#l758
[23:20:49 CET] <JEEB> and yes, libx264 utilizes those values if available
[23:20:59 CET] <JEEB> (during init time)
[23:21:19 CET] <JEEB> I actually tried to enable reconfig when you feed it frames but I failed so far :<
[23:22:04 CET] <JEEB> (and I had a little API client I was poking things with so I knew the AVFrames contained the data)
[23:22:07 CET] <JEEB> *metadata
[23:28:53 CET] <JEEB> greentea: this is the proper channel, but I'm pretty sure you just didn't get a reply because MSVC is not what people generally try things with, including me
[23:29:12 CET] <JEEB> building FFmpeg itself is not a problem, and with some trying you can even get libx264 linked
[23:29:23 CET] <JEEB> haven't seen libvorbis or libopus though
[00:00:00 CET] --- Tue Feb 27 2018


More information about the Ffmpeg-devel-irc mailing list