[Ffmpeg-devel-irc] ffmpeg.log.20170201
burek
burek021 at gmail.com
Thu Feb 2 03:05:01 EET 2017
[00:00:06 CET] <llogan> or just make them the proper size so they can be vstack/hstacked and then you can view it all at once
[00:00:37 CET] <Mateon1> That's odd..: Output with label 'out0' does not exist in any defined filter graph, or was already used elsewhere.
[00:01:00 CET] <Mateon1> Oh, this is the -vf command, I'll switch to -filter_complex
[00:01:18 CET] <DHE> yes, use filter_complex for that
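A minimal sketch of what llogan and DHE are suggesting (file names and sizes below are made-up placeholders): scale the inputs to matching dimensions, hstack them in a -filter_complex graph, and map the labelled output:
  ffmpeg -i left.mp4 -i right.mp4 \
      -filter_complex "[0:v]scale=640:360[a];[1:v]scale=640:360[b];[a][b]hstack=inputs=2[out0]" \
      -map "[out0]" -c:v libx264 stacked.mp4
The "[out0]" label only resolves when the graph is given with -filter_complex and consumed with -map, which matches the error Mateon1 hit with -vf.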
[00:01:37 CET] <hyponic> can ffmpeg detect AV out of sync when transcoding a live stream?
[00:02:30 CET] <Mateon1> Okay, awesome, it seems to be working
[00:03:07 CET] <Mateon1> I'm getting a bunch of warnings "Past duration 0.999992 too large"
[00:11:05 CET] <Mateon1> It worked, kudos to DHE and logan
[00:53:36 CET] <Mateon1> Also, just curious, is it possible to switch the polar/oscilloscope view from L/R to X/Y?
[01:16:47 CET] <Mateon1> Ah, so you just need to change mode to lissajous_xy, then rotate the output by 90 degrees
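For reference, a sketch of that avectorscope setup (the input name is a placeholder): mode=lissajous_xy selects the X/Y mapping and a transpose filter rotates the scope by 90 degrees:
  ffmpeg -i input.wav \
      -filter_complex "[0:a]avectorscope=mode=lissajous_xy:s=512x512,transpose=1[v]" \
      -map "[v]" -map 0:a -c:v libx264 -c:a aac scope.mp4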
[01:52:37 CET] <Diag> furq: whats the ffmpeg to do opus
[01:52:44 CET] <Diag> i have a shit i want to encode
[01:53:01 CET] <Diag> that or kerio
[01:55:36 CET] <TD-Linux> ffmpeg -i in out.opus
[01:55:46 CET] <Diag> THANKS
[01:55:48 CET] <Diag> thanks*
[01:57:11 CET] <Diag> TD-Linux: how do i set the bitrate?
[01:57:45 CET] <TD-Linux> -b:a 96k
[01:57:55 CET] <Diag> 96k is enough?
[01:58:02 CET] <TD-Linux> try it and see
[01:58:06 CET] <Diag> ok but
[01:58:08 CET] <Diag> where do i put that
[01:58:10 CET] <Diag> before the -i?
[01:58:20 CET] <TD-Linux> after
[01:58:22 CET] <TD-Linux> I think both work
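Both placements are accepted, but since -b:a is an output option it is normally put after -i. A minimal sketch (file names are placeholders; libopus is the default encoder for .opus output anyway):
  ffmpeg -i input.wav -c:a libopus -b:a 96k output.opus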
[01:58:46 CET] <Diag> that was fast
[01:59:04 CET] <Diag> oh my god
[01:59:13 CET] <Diag> furq: holy fuckballs
[01:59:19 CET] <Diag> opus is the best thing ever
[02:19:31 CET] <DHE> you mean "ffmpeg is the best thing ever"
[02:47:07 CET] <kepstin> i dunno, opus is also pretty awesome
[03:17:21 CET] <Diag> kepstin: this is the shit
[03:50:49 CET] <Threads> <Diag> i have a shit i want to encode <-- best i've seen said on irc all day
[07:16:02 CET] <satinder___> hi
[07:16:13 CET] <satinder___> anyone here ?
[07:20:00 CET] <satinder___> I have some queries about ffmpeg
[07:20:54 CET] <satinder___> can we make a chat conference client (multiple clients) - server using ffmpeg? Is it possible?
[07:20:58 CET] <satinder___> in linux
[07:33:38 CET] <satinder___> hi
[07:38:43 CET] <kerio> Diag: the default is 96k for stereo
[07:38:49 CET] <kerio> i recommend 128 probably
[07:38:56 CET] <Diag> im blown away
[07:42:07 CET] <kerio> Diag are you converting from a lossy format to another lossy format >:C
[07:42:23 CET] <TD-Linux> if you're dropping a lot of rate it hardly matters
[07:42:24 CET] <Diag> no fuckass im doing it from a wav
[07:43:06 CET] <kerio> TD-Linux: no, the one who wanted to convert audio to 6kbps opus is not Diag
[07:45:04 CET] <satinder___> kerio : please see my question , Is it valid , if yes then please give your opinion
[07:45:07 CET] <satinder___> thnaks
[07:45:15 CET] <satinder___> thanks*
[07:45:30 CET] <kerio> satinder___: you absolutely can by using ffmpeg the library
[07:45:37 CET] <kerio> using ffmpeg the commandline tool... eeh
[07:46:32 CET] <satinder___> yes
[07:46:52 CET] <thebombzen> a two-way chat conference client is not really ffmpeg.c's job
[07:47:02 CET] <satinder___> kerio : Is there another option
[07:47:04 CET] <satinder___> ?
[07:47:12 CET] <thebombzen> but using libavcodec, libavformat, and libavdevice definitely could help
[07:48:44 CET] <satinder___> thebombzen : I want more than three clients to connect at the same time via a streaming server, and that server can save all the chat (video streams)
[07:48:51 CET] <satinder___> Is it possible
[07:49:29 CET] <satinder___> kerio : please see above question ?
[07:49:30 CET] <thebombzen> it's definitely possible, given that this already exists
[07:49:36 CET] <thebombzen> but not with nothing but ffmpeg.c
[07:49:40 CET] <kevmitch> I'm trying to grab a m3u8 stream, but I'm getting an error "http://v.watch.cbc.ca/p//38e815a-00b9d7a9e04//CBC_MICHAELEVERYDAY_SEASON_01_S01E06-v2-11715224/segments/CBC_MICHAELEVERYDAY_SEASON_01_S01E06_v9/prog_index.m3u8: could not find codec parameters"
[07:49:44 CET] <kevmitch> http://sprunge.us/WMDM
[07:50:03 CET] <kevmitch> Above is a command executed by youtube-dl
[07:50:13 CET] <satinder___> thebombzen : Any tutorial or reference ?
[07:50:33 CET] <satinder___> second thing is that which one is good ;
[07:50:43 CET] <satinder___> via command line or using API
[07:50:52 CET] <satinder___> thebombzen : ?
[07:51:12 CET] <thebombzen> no I can't give you a tutorial on rewriting skype
[07:51:51 CET] <thebombzen> the sort of thing you want to do has 1. been done before 2. is sort of more in depth than a tutorial
[07:52:03 CET] <thebombzen> kevmitch: unfortunately we have 403 forbidden on that m3u8
[07:52:19 CET] <satinder___> ok
[07:52:21 CET] <kevmitch> it's probably geo locked.
[07:52:30 CET] <satinder___> but I want know just one thing
[07:52:43 CET] <satinder___> commandline is better option or APi
[07:52:44 CET] <thebombzen> kevmitch: well you could sprunge it (if you're allowed)
[07:52:54 CET] <thebombzen> satinder___: definitely you want to write this yourself
[07:53:03 CET] <satinder___> yea
[07:53:05 CET] <thebombzen> you're trying to come up with a GUI - so doing it with nothing but CLI tools won't really work
[07:53:14 CET] <thebombzen> so you should write your own GUI
[07:53:20 CET] <thebombzen> rather than use existing CLI tools
[07:53:24 CET] <satinder___> but I have just 15 days
[07:53:24 CET] <kevmitch> you mean it's a text file?
[07:53:32 CET] <thebombzen> m3u8 is a playlist format
[07:53:49 CET] <thebombzen> so if the file is actually an m3u8 file then it'll be text
[07:54:27 CET] <satinder___> dranger is helpful ?
[07:54:39 CET] <satinder___> thebombzen : ?
[07:54:41 CET] <kevmitch> thebombzen: http://sprunge.us/OBSi
[07:55:35 CET] <thebombzen> well lol that won't help us tbh
[07:55:43 CET] <thebombzen> cause it has relative paths
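For context, an HLS media playlist like that is just text; an illustrative example (segment names made up here) shows why relative paths only resolve against the playlist's own base URL:
  #EXTM3U
  #EXT-X-VERSION:3
  #EXT-X-TARGETDURATION:10
  #EXTINF:10.0,
  segments/prog_index0.ts
  #EXTINF:10.0,
  segments/prog_index1.ts
  #EXT-X-ENDLIST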
[07:56:08 CET] <kevmitch> is that why it's failing?
[08:02:42 CET] <satinder___> thebombzen : can you give me entry point for develop a application
[08:02:45 CET] <satinder___> ?
[08:02:47 CET] <satinder___> please
[08:03:06 CET] <satinder___> I am searching alot
[08:03:13 CET] <thebombzen> https://doc.qt.io/qt-5/qtexamplesandtutorials.html
[08:03:29 CET] <thebombzen> kevmitch: I don't know
[08:03:33 CET] <thebombzen> I can't access it ^_^
[08:03:48 CET] <satinder___> ok
[08:03:52 CET] <satinder___> thank you
[08:05:54 CET] <kevmitch> satinder___: I'm going to go out on a limb and guess you'd need more than 15 days
[08:06:47 CET] <satinder___> kevmitch : not getting your point sir ,
[08:06:58 CET] <satinder___> what you want to say
[08:06:59 CET] <satinder___> ?
[08:08:55 CET] <kevmitch> that it's a more difficult problem than you can solve in 15 days
[08:12:43 CET] <kevmitch> aside from a GUI and av processing you'd use ffmpeg for, you'd also need to figure out how to push the data over networks and deal with crap like connecting to computers behind router NATs.
[08:13:13 CET] <satinder___> kevmitch : yes you're right bro
[08:13:42 CET] <satinder___> but we can do anything if we want
[08:13:44 CET] <satinder___> :)
[08:14:07 CET] <satinder___> have you any idea or reference ?
[09:06:19 CET] <kerio> am i a bad enough dude to stream lossless x264 to youtube
[09:07:46 CET] <ritsuka> yes
[09:07:55 CET] <kerio> would they appreciate it tho
[11:42:59 CET] <hyponic> can ffmpeg detect AV out of sync when transcoding a live stream?
[11:48:02 CET] <satinder___> why we need sdp file during rtp streaming ?
[11:59:19 CET] <thebombzen> satinder___: because that's how the protocol works
[12:03:45 CET] <satinder___> thebombzen : when I stream rtp, vlc does not play and gives the error: cannot play without sdp file
[12:03:50 CET] <satinder___> :(
[12:03:59 CET] <thebombzen> what is the URL
[12:04:16 CET] <thebombzen> cause it depends on whether you're using rtp over tcp or udp
[12:14:09 CET] <satinder___> ok
[12:14:30 CET] <satinder___> rtp://@192.168.1.84:1234
[12:16:03 CET] <satinder___> A description in SDP format is required to receive the RTP stream. Note that rtp:// URIs cannot work with dynamic RTP payload format (96).
[12:16:12 CET] <satinder___> that is error in vlc
[12:16:18 CET] <satinder___> thebombzen : ?
[12:17:30 CET] <thebombzen> it appears you've discovered the difficulty of point to point streaming lol
[12:17:45 CET] <thebombzen> there's a reason we said you could not finish this project in 15 days
[12:19:23 CET] <satinder___> I am using CLI
[12:19:27 CET] <satinder___> :(
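One way to get the SDP description that VLC is asking for is to have ffmpeg write it with -sdp_file while it streams; a sketch, where the input file and address are placeholders (the plain rtp muxer carries only one stream per session, hence -an):
  ffmpeg -re -i input.mp4 -an -c:v libx264 -f rtp rtp://192.168.1.84:1234 -sdp_file stream.sdp
Then open stream.sdp in VLC, or: ffplay -protocol_whitelist file,udp,rtp -i stream.sdp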
[12:23:51 CET] <luc4> Hello! I need to stream a video from a source to a number of clients inside a LAN. I was thinking of using ffmpeg to stream h264 via multicast RTP and again ffmpeg to decode and render the stream. It should be feasible right? Would you say this is a good solution? Can you suggest other solutions I should consider?
[12:35:44 CET] <thebombzen> well if by "decode and render" you just mean "play"
[12:36:01 CET] <thebombzen> then you could use any player that can play the stream
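A hedged sketch of that LAN setup (the multicast address, port and file name are examples only): plain MPEG-TS over UDP multicast keeps things simple and plays in ffplay or VLC without needing an SDP file; ffmpeg's rtp_mpegts muxer is the analogous option if RTP framing is specifically wanted:
  # sender
  ffmpeg -re -i source.mp4 -c:v libx264 -c:a aac -f mpegts "udp://239.255.0.1:5004?ttl=1"
  # each client on the LAN
  ffplay udp://239.255.0.1:5004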
[13:22:56 CET] <faLUCE> thebombzen_: about what you told me yesterday. Are there video formats with different bits per channel?
[13:23:06 CET] <faLUCE> for example: 16, 16, 8
[13:24:40 CET] <BtbN> Well, there's 16bit RGB
[13:25:00 CET] <faLUCE> BtbN: and how are the bitdepths distributed?
[13:25:03 CET] <BtbN> 565
[13:25:20 CET] <faLUCE> 5 bits, 6 bits, 5 bits ?
[13:26:21 CET] <faLUCE> More specifically, my question is restricted to different byte counts per channel
[14:01:01 CET] <xeberdee> Hi - I'm trying to work out which flags I need to make Final Cut Pro compliant ProRes files - does anyone have a link to good article about these?
[14:17:24 CET] <kepstin> faLUCE: you see a lot of YUV video formats with subsampled chroma, which gives you an effectively lower bits per pixel per channel
[14:18:13 CET] <kepstin> e.g. yuv420p uses quarter-size chroma, so a single pixel effectively has 8 bit luma, and 2 + 2 bit chroma, so 12 bit per pixel total.
[14:18:45 CET] <kepstin> but each individual sample in that case is still the same number of bits - there's just fewer samples in the chroma planes
[14:22:59 CET] <xeberdee> kepstin: is that 4:2:2?
[14:26:54 CET] <xeberdee> no sorry - I would imagine that 4:2:0 and 4:1:1 are the same as far as pixels are concerned
[14:27:11 CET] <kepstin> 4:2:0 is quarter-size chroma; 4:2:2 is half-size
[14:27:46 CET] <kepstin> (4:2:0 is half the horizontal sampling freq. and skip alternate lines, 4:2:2 is half horizontal sampling freq, but sample every line)
[14:28:25 CET] <xeberdee> anybody help me with a relevant link for reading about flags for Final Cut Pro compliant prores encoding
[14:32:14 CET] <xeberdee> I didn't see that it was 8 bits per pixel - the 4 refers to the ratio, so 8:2:2 is 4:1:1 - or 4:2:0 - I would assume there is no difference in pixel bitdepth between 4:1:1 and 4:2:0.
[14:33:25 CET] <kepstin> there's no difference in the bit depth of the individual samples, they're all 8 or 10 or 12 bits or whatever.
[14:33:46 CET] <kepstin> the difference is in the effective number of bits for a single output pixel when the video's displayed
[14:34:31 CET] <kepstin> since e.g. with 4:2:0, an output pixel is made from a whole luma sample, but only a quarter of a chroma sample, effectively.
[14:50:27 CET] <xeberdee> I'm looking for information on flags and metadata for compliant Final Cut Pro ProRes with the Zeranoe build of FFMPEG
[14:51:48 CET] <xeberdee> what might I need more than setting the profile and the colorspace -c:v prores_ks -profile:v 3 -pix_fmt yuv422p10le
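A sketch of a commonly suggested invocation for Final Cut friendly ProRes (untested here; input/output names are placeholders). Besides the profile and pixel format, the -vendor apl0 flag makes prores_ks tag frames with Apple's encoder ID, and FCP generally expects PCM audio in MOV:
  ffmpeg -i input.mov -c:v prores_ks -profile:v 3 -vendor apl0 -pix_fmt yuv422p10le -c:a pcm_s16le output.mov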
[17:19:54 CET] <rjp421> JEEB, the linux static git build is using old librtmp? http://pastebin.com/raw/71gS9Ufs
[17:48:09 CET] <paperManu> hi there. I have a question. I'm decoding audio frames using the new API (avcodec_send_packet and avcodec_receive_frame), and I have issues with planar formats (tested with S16P and FLTP): only the first channel is decoded correctly, the other ones are a mess
[17:48:38 CET] <paperManu> I output the frame from avcodec_receive_frame directly to the disk as a raw audio
[17:48:54 CET] <paperManu> is there anything specific I should do regarding planar formats?
[17:52:28 CET] <faLUCE> [14:18] <kepstin> e.g. yuv420p uses quarter-size chroma, so a single pixel effectively has 8 bit luma, and 2 + 2 bit chroma, so 12 bit per pixel total. <--- then, how are they laid out in the planes' data? 8, 8, 0, and do I have to get the other channel by masking the second byte?
[17:53:31 CET] <kepstin> faLUCE: each individual sample is 8-bit; you have to interpolate the missing samples by upscaling the chroma. It's entire missing samples, not really fewer bits per sample
[17:57:17 CET] <faLUCE> kepstin: I wonder if this applies to v4l too. Isn't it overhead?
[17:58:31 CET] <kepstin> hmm? chroma subsampling is a standard way of reducing video bandwidth. and the upscaling is fairly low overhead, since you normally need to scale the video for display anyways...
[18:00:41 CET] <faLUCE> kepstin: in the case of RGB, I'm seeing that in v4l the samples are packed into the same bytes. It doesn't use 8 bits per sample, as you just wrote
[18:01:16 CET] <kepstin> well, RGB and YUV are different, and it depends a lot on details of the particular pixel format you're using...
[18:01:50 CET] <kepstin> I didn't think there were many V4L devices which even returned RGB tho, I'd expect most to be YUV?
[18:02:28 CET] <faLUCE> kepstin: do you say that the 8 bit per sample rule applies to YUV and not to RGB ?
[18:02:38 CET] <faLUCE> (planar)
[18:02:53 CET] <kepstin> 8 bit per sample applies to a few specific pixel formats, including some types of both YUV and RGB, but not all of either
[18:03:00 CET] <faLUCE> in case of planar, I understand that, because of the different planes
[18:03:14 CET] <faLUCE> kepstin: then, are these fmts planar?
[18:03:16 CET] <kepstin> you have to know which *specific* pixel format you're talking about, since they're all different
[18:03:24 CET] <kepstin> some are planar, some are not
[18:03:36 CET] <faLUCE> kepstin: I see, tnx
[18:06:18 CET] <faLUCE> so, we can have 1) 1 byte per sample 2) 2 bytes per sample 3) 1 byte per multiple samples 4) samples with different bitdepths but same bytedepths
[18:06:37 CET] <faLUCE> 5) sample with different bitdepths and different bytedepths
[18:07:08 CET] <faLUCE> right?
[18:07:18 CET] <kepstin> in most cases, if you're working with samples, you'll want to just pick a few simple ones to work with, and rely on libswscale to convert to and from it.
[18:07:32 CET] <faLUCE> kepstin: I'm writing an interface
[18:07:44 CET] <faLUCE> and I have to understand the different cases
[18:08:04 CET] <kepstin> iirc, there's some fun packed formats for e.g. 10-bit video where you can have 3 samples in 4 bytes with 2 bits left unused
[18:08:31 CET] <kepstin> why are you writing an interface for this stuff, when you could just re-use ffmpeg's pixel format handling? :)
[18:08:50 CET] <faLUCE> I'm writing a safe interface
[18:09:00 CET] <faLUCE> for handling few formats
[18:09:22 CET] <kepstin> ok, if you're just handling a few formats, then pick easy ones
[18:09:34 CET] <faLUCE> kepstin: I pick the libx264 ones
[18:10:54 CET] <kepstin> hmm, then you'll need YUV pixel formats, planar, with 4:4:4, 4:2:2 and 4:2:0 subsampling, supporting 10 and 8 bit per pixel (i'm not sure how the 10bpp ones are packed? have to check that...) along with a couple of rgb formats that i'm not familiar with :/
[18:11:11 CET] <faLUCE> I don't want to provide to the user a generic bytebuffer, and then tell him: "read the doc in order to understand how many bytes and which bits to pick"
[18:11:57 CET] <faLUCE> I'll give him the right bitsdepth and bytedepths for those formats
[18:12:07 CET] <kepstin> in this case, tho, a "safe" interface is probably a slow one - particularly since in many cases, they'll either already have video in the right format, or will have to convert it with something like libswscale anyways
[18:12:43 CET] <faLUCE> kepstin: you can't know if it is slow. I don't do resampling
[18:12:53 CET] <faLUCE> I only provide the right types
[18:13:06 CET] <faLUCE> and the functions for getting the samples
[18:14:00 CET] <faLUCE> but I leave an internal generic bytesbuffer which has to feed libswscale
[18:14:03 CET] <kepstin> the "right type" in most cases, will be an array-style buffer of samples per plane which can be copied in a single batch operation.
[18:14:27 CET] <kepstin> and potentially addressed directly by sample too
[18:15:08 CET] <faLUCE> kepstin: no, IMHO the right INTERNAL type is the one you say. The type provided to the user can be safer
[18:16:12 CET] <faLUCE> the user doesn't have to bother with bit or byte depths
[18:16:24 CET] <kepstin> hmm, I have no idea what the target user of your api will be, but it doesn't sound particularly useful for most video use cases
[18:16:45 CET] <faLUCE> kepstin: if you want to manipulate separate pixel it's useful
[18:16:48 CET] <kepstin> but they will have to know about bit/byte depth to know what range of values is acceptable in a sample?
[18:17:08 CET] <faLUCE> no, they just obtain that through the API
[18:17:17 CET] <kepstin> sure, but if you're manipulating separate pixels, you're probably using a drawing api, and the result of that will be... an array buffer full of pixels in some format.
[18:17:41 CET] <faLUCE> kepstin: no, because the internal buffer will remain the generic bytes one
[18:18:14 CET] <kepstin> e.g. I have an app that draws frames using the Cairo drawing API, which gives me an RGB image buffer that I pass off to ffmpeg for conversion and encoding
[18:18:50 CET] <faLUCE> kepstin: I see, but I don't want to apply what I said to the entire image
[18:19:03 CET] <faLUCE> I only apply it to the pixel get/set function
[18:19:34 CET] <faLUCE> otherwise I would Have to copy to another context the entire image, and it would be overkill
[18:20:07 CET] <faLUCE> IMHO the image (or plane) buffer has to be bytebuffer
[18:20:41 CET] <faLUCE> kepstin: do you agree?
[18:21:08 CET] <kepstin> well, if you have a use case for this stuff, and they want individual sample addressing, and don't mind the overhead of function calls per sample (which means they can't do e.g. assembly vector optimizations), go ahead...
[18:21:59 CET] <faLUCE> kepstin: the function will only make a cast inside. no overhead
[18:22:14 CET] <kepstin> there's a few use cases for this sort of thing in a couple of ffmpeg filters, which is why ffmpeg has an internal api to draw pixels in a frame in a sample-agnostic way, but it's not generally that useful.
[18:23:22 CET] <kepstin> the function's only no overhead if it can be compile-time inlined into the calling code, so that in the end it's operating directly on the backing buffer ;)
[18:24:02 CET] <faLUCE> kepstin: in the end the user operates directly in the backing buffer. Then function will only hide the cast
[19:01:07 CET] <Duality> so i want to stream some data with h264 coding and this is the command i am using: http://pastebin.com/MJu3U9gW
[19:01:21 CET] <Duality> and i want to play it to test if it works, so not sure how to do that
[19:01:37 CET] <Duality> i tried with vlc say, and it doesn't display anything,
[19:01:45 CET] <Duality> but it's playing
[19:05:18 CET] <Duality> i tried with ffplay like so: ffplay -an -sn -i udp://127.0.0.1:12345
[19:05:32 CET] <Duality> but that says invalid data when processing input
[19:07:15 CET] <faLUCE> does anyone know if planar frames obtained with v4l already provide a set of arrays (one array per plane) or if libav reorders the samples?
[19:09:07 CET] <BtbN> it probably just forwards whatever pix_fmt comes out of v4l2
[19:09:28 CET] <BtbN> Would be strange if a libavdevice source would implement its own swscale
[19:09:50 CET] <faLUCE> BtbN: but it's not a scaling... it's a reordering
[19:10:03 CET] <BtbN> so?
[19:10:19 CET] <faLUCE> then it would not be so strange
[19:10:23 CET] <BtbN> yes it would
[19:10:40 CET] <faLUCE> um
[19:10:42 CET] <BtbN> it would be duplicating swscale functionality into lavdevice
[19:11:10 CET] <faLUCE> no, because it's not a scale. It would just duplicate the reordering
[19:12:45 CET] <faLUCE> making planes causes a copy. And it's not obvious that the v4l API provides a copy function
[19:13:42 CET] <faLUCE> I don't even know if ffmpeg can grab planar frames through v4l
[19:13:47 CET] <kepstin> Duality: I don't think the 'nut' container can be played starting at arbitrary points. Try using mpegts instead.
[19:14:21 CET] <Duality> kepstin: i noticed that if I stop the stream with ffmpeg, open vlc, and then start the stream with ffmpeg again, it works.
[19:14:48 CET] <kepstin> Duality: makes sense, there's probably a header sent out when ffmpeg first starts which is needed.
[19:15:34 CET] <Duality> i'll try the mpegts :)
[19:15:53 CET] <Duality> i am all new to this and most of what i am doing with ffmpeg doesn't make sense
[19:15:54 CET] <Duality> to me
[19:16:12 CET] <faLUCE> Duality: with mpegts some players need the global header too
[19:16:26 CET] <faLUCE> although it should not be required
[19:16:39 CET] <BtbN> with mpegts it's frequently repeated
[19:16:54 CET] <kepstin> mpegts is explicitly designed so that it can be used for things like tv broadcasting, where you can tune into a channel at any point, wait for a resync point, then start showing video.
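A minimal sketch of that mpegts approach for Duality's test (the source file, codec settings and port are placeholders, since the original pastebin command isn't visible here):
  # sender
  ffmpeg -re -i input.mp4 -c:v libx264 -preset ultrafast -tune zerolatency -f mpegts udp://127.0.0.1:12345
  # receiver, can be started before or after the sender
  ffplay -i udp://127.0.0.1:12345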
[19:17:46 CET] <faLUCE> BtbN: even if it is repeated they need it at the beginning
[19:18:03 CET] <faLUCE> (with http)
[19:18:17 CET] <BtbN> the client just waits until the next "entry point" comes by
[19:18:26 CET] <faLUCE> BtbN: not with all the players
[19:18:36 CET] <faLUCE> I know it's absurd, but it happens
[19:18:37 CET] <BtbN> would be a terrible player if not
[19:18:54 CET] <faLUCE> BtbN: in fact many players are terrible
[19:18:57 CET] <BtbN> mpegts is designed to be broadcasted via DVB and the like. There is no way to guarantee entry at an exact beginning
[19:19:20 CET] <faLUCE> and I'm not talking about PA
[19:19:20 CET] <BtbN> players just discard the data until the required information came by
[19:19:22 CET] <faLUCE> PAT
[19:19:29 CET] <Duality> well i notice that vlc says that a lot of frames are dropped, and when a lot of frames are dropped the video skips
[19:19:47 CET] <BtbN> Well, that's VLC not being able to decode in real time
[19:19:50 CET] <faLUCE> Duality: in fact when vlc drops frames, it goes into confusion
[19:19:55 CET] <faLUCE> BtbN: exactly
[19:20:09 CET] <faLUCE> I was forced to provide a starting header in order to avoid that
[19:20:11 CET] <Duality> faLUCE: what does that mean?
[19:20:18 CET] <BtbN> It means your CPU is probably overwhelmed
[19:20:37 CET] <Duality> it might be i am doing it on my old laptop :)
[19:21:02 CET] <faLUCE> Duality: it means that vlc makes stupid things when caching mpegts
[19:21:24 CET] <faLUCE> in addition, there's not a probe feature
[19:21:32 CET] <faLUCE> (like ffplay and mplayer)
[19:21:50 CET] <faLUCE> with HTTP
[19:21:53 CET] <BtbN> what the hell are you even talking about?
[19:21:54 CET] <faLUCE> with file, it works ok
[19:22:13 CET] <BtbN> of course vlc probes the container and codec information... how else would it play shit
[19:22:39 CET] <faLUCE> BtbN: vlc wants a file-like mpegts stream
[19:22:51 CET] <faLUCE> over http
[19:23:02 CET] <faLUCE> so, you have to start with 0 pts
[19:23:10 CET] <faLUCE> and a header at the beginning of the stream
[19:23:22 CET] <faLUCE> I know it's absurd, but it is.
[19:23:24 CET] <BtbN> So me watching live tv via DVB with VLC was just some weird dream?
[19:23:48 CET] <faLUCE> BtbN: you will have a huge delay
[19:23:49 CET] <BtbN> mpegts doesn't even have any kind of global header, everything that's a header is frequently repeated.
[19:24:13 CET] <faLUCE> BtbN: in fact, I said that vlc expects a file-like stream
[19:24:21 CET] <faLUCE> with pts starting from 0
[19:24:26 CET] <BtbN> no it doesn't
[19:24:38 CET] <faLUCE> yes it does. otherwise it makes a mess with cache
[19:24:49 CET] <faLUCE> and it provides a long delay before starting
[19:24:57 CET] <faLUCE> so, it's good for live tv
[19:25:06 CET] <faLUCE> but it's weird for video-conferencing
[19:25:11 CET] <faLUCE> (audio only)
[19:25:17 CET] <faLUCE> but it's too long to explain
[19:25:31 CET] <BtbN> It has a delay before starting because it waits for the next PAT/PMT and IDR frame to come by, so the delay depends entirely on the interval between those and the random entry point into the stream.
[19:25:33 CET] <thebombzen> there's no such thing a a file-like mpegts stream
[19:25:37 CET] <faLUCE> BtbN: no
[19:25:38 CET] <thebombzen> that sort of defeats the whole point of mpegts
[19:25:39 CET] <BtbN> yes
[19:25:51 CET] <faLUCE> BtbN: then, try to stream audio only with mpegts
[19:26:06 CET] <faLUCE> and try to reduce the delay by adjusting the pes frequency
[19:26:19 CET] <faLUCE> and see if vlc reduces its delay
[19:26:20 CET] <thebombzen> faLUCE: are you confusing "startup delay" with "latency"
[19:26:37 CET] <faLUCE> thebombzen: I'm talking about startup delay
[19:26:44 CET] <thebombzen> why do you care about startup delay
[19:26:44 CET] <faLUCE> not latency
[19:26:52 CET] <faLUCE> thebombzen: you did not read.
[19:27:00 CET] <faLUCE> I talked about videoconferencing
[19:27:10 CET] <thebombzen> yes and it's not important for videoconferencing
[19:27:28 CET] <faLUCE> it's important, because you have a delay between sender and receiver
[19:27:33 CET] <BtbN> ...
[19:27:35 CET] <faLUCE> if both are senders
[19:27:35 CET] <thebombzen> that's called "latency"
[19:27:41 CET] <thebombzen> not "startup delay"
[19:27:51 CET] <thebombzen> see my question "are you confusing startup delay" with "latency"
[19:27:55 CET] <faLUCE> you are not understanding
[19:28:05 CET] <BtbN> no, you seem to be not understanding, quite a few things.
[19:28:10 CET] <thebombzen> no I'm definitely aware of what videoconferencing is
[19:28:17 CET] <thebombzen> and I'm fully aware of what latency is
[19:28:24 CET] <thebombzen> it appears you are not though
[19:28:54 CET] <faLUCE> thebombzen: and I'm sure you did not try to make a phone call with ffmpeg http mpegts and vlc
[19:29:00 CET] <thebombzen> "latency" is the time difference between when an event is recorded by one sender and when it is displayed by the viewer
[19:29:12 CET] <thebombzen> it's very important to keep that low in twoway communication
[19:29:28 CET] <faLUCE> I'm not talking about latency
[19:29:32 CET] <BtbN> yes you are
[19:29:34 CET] <thebombzen> "startup delay" is the amount of time before you start being able to receive the communication
[19:29:41 CET] <thebombzen> this is not as important
[19:30:03 CET] <faLUCE> latency is the time in which the encoder encodes data
[19:30:09 CET] <thebombzen> no it's not
[19:30:11 CET] <BtbN> low latency, as in real-time communication, is far from trivial and not achieved with regular audio/video streaming solutions.
[19:30:33 CET] <thebombzen> latency is the time between when the speaker says something over the phone and when the receiver hears it
[19:30:50 CET] <thebombzen> it's not the amount of time it takes to encode - that's only one factor in the amount of latency
[19:31:42 CET] <faLUCE> thebombzen: I repeat: try to stream mpegts audio with vlc and see if the vlc receiver can manage that in a good way
[19:31:53 CET] <faLUCE> the rest is not important
[19:32:02 CET] <faLUCE> the other things are not important
[19:32:11 CET] <thebombzen> first of all "manage that in a good way" is super vague
[19:32:17 CET] <thebombzen> but again the thing you're describing has a name
[19:32:20 CET] <thebombzen> the word for that is "latency"
[19:32:41 CET] <faLUCE> in my world this is called delay
[19:32:57 CET] <faLUCE> and there's not a STANDARD for these terms
[19:33:18 CET] <thebombzen> yes but most people in the a/v field call it "latency"
[19:33:29 CET] <faLUCE> I use latency for encoding/decoding
[19:33:37 CET] <faLUCE> not for what you say
[19:33:41 CET] <thebombzen> well if you say "I'm not talking about latency" then you're wrong
[19:33:45 CET] <thebombzen> because you are talking about latency
[19:33:57 CET] <thebombzen> given that the people you're talking to use the word that way
[19:33:59 CET] <faLUCE> thebombzen: I'm quite tired of discussing this. think what you want
[19:34:24 CET] <thebombzen> it really doesn't matter how you use it. what matters is how WE use it, given that you're asking us for help
[19:34:37 CET] <furq> latency does have a well-defined meaning if you're talking about telecom networks
[19:34:51 CET] <thebombzen> so if you want to be like "I use latency to talk about this one thing" we'll say, "that's nice, honey"
[19:36:13 CET] <thebombzen> "thebombzen: and I'm sure you did not try to make a phone call with ffmpeg http mpegts and vlc" well actually
[19:36:17 CET] <thebombzen> about that
[19:40:35 CET] <faLUCE> BtbN: https://linuxtv.org/downloads/v4l-dvb-apis/uapi/v4l/planar-apis.html <--- "Initially, V4L2 API did not support multi-planar buffers and a set of extensions has been introduced to handle them"
[19:41:30 CET] <faLUCE> then it may be that ffmpeg provided that for v4l2 in the past. But what it does now is obscure
[19:43:11 CET] <Karede> Hey, I just installed ffmpeg on an ubuntu 16.04 server for the first time but I am having a problem installing ffmpeg-php afterwards.
[19:43:32 CET] <Karede> configure: error: ffmpeg headers not found. Make sure ffmpeg is compiled as shared libraries using the --enable-shared option
[19:43:47 CET] <varion> hey, guys. I'm working on a fairly obscure one. I'm attempting to use an mp4 as a composite on top of another video (which is easy enough) except since mp4 doesn't have alpha, the video is split horizontally. the left half is color information, and the right half is alpha. i'm attempting to use alphamerge, but so far I don't think I'm on the right track. any recommendations?
[19:43:54 CET] <Karede> is the error I'm getting and all the solutions I've found on Google do not fix the problem.
[20:11:52 CET] <varion> is there a way to use luminosity as alpha? it seems colorkey works... awkwardly
[20:22:12 CET] <wodim> what codec/quality options do you recommend for a screencast?
[20:28:25 CET] <BtbN> if it's just a desktop, you don't really need much for it to look decent
[20:37:54 CET] <durandal_1707> varion: you could use shuffleplanes
[20:45:45 CET] <varion> i'm going to check out shuffleplanes, then
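For reference, the usual alphamerge recipe for a side-by-side colour+alpha MP4 looks roughly like this (file names are placeholders; the left half is assumed to be colour and the right half alpha, per varion's description):
  ffmpeg -i background.mp4 -i sidebyside.mp4 -filter_complex \
      "[1:v]crop=iw/2:ih:0:0[color];[1:v]crop=iw/2:ih:iw/2:0[alpha];[color][alpha]alphamerge[wm];[0:v][wm]overlay=0:0" \
      -c:v libx264 out.mp4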
[20:54:08 CET] <Karede> I guess the reason it wasn't working is because ffmpeg-php is 10 years old and probably no longer compatible. A new question then, what would you recommend to replace it?
[20:57:17 CET] <rnsanchez> greetings. anyone using h264_nvenc?
[20:57:39 CET] <rnsanchez> I am running on a Tesla K20c, and can successfully transcode the video---but only the video stream
[20:57:40 CET] <llogan> Karede: do you *need* a PHP wrapper for ffmpeg? can you not just use ffmpeg directly?
[20:58:36 CET] <rnsanchez> the command I'm using: ffmpeg -y -benchmark -hwaccel cuvid -c:v h264_cuvid -i big_buck_bunny_1080p_h264.mov -vf scale_npp=1920:-1 -vcodec h264_nvenc -level 4 -preset llhq -qmin 1 -qmax 43 -b:v 3000k -bufsize 2000k -minrate 2500k -maxrate 8000k -codec:a libvorbis -qscale:a 3 output-1920.h264
[20:58:44 CET] <JEEB> and if you need to have API wrappers, you'd have to make one yourself
[20:58:46 CET] <varion> can you loop an output (x times) of something with a complex filter? everything seems to suggest using stream_loop, which doesn't seem to like the idea of complex filters
[20:59:33 CET] <llogan> there is a loop filter, IIRC
[20:59:38 CET] <varion> not for video
[20:59:42 CET] <varion> loop filter works on images
[20:59:51 CET] <rnsanchez> I've tried the -map I use on regular sessions (no acceleration), and pretty much all knobs I know/have used for audio, no success
[21:00:02 CET] <kepstin> rnsanchez: you're gonna have to save the output to a container format that can hold an audio stream. ".h264" means a raw h264 video-only file
[21:00:21 CET] <llogan> varion: there is aloop filter too
[21:00:24 CET] <kepstin> rnsanchez: .mkv is probably a good choice
[21:00:43 CET] <rnsanchez> kepstin: oh boy.. have I been that silly? :-)
[21:01:43 CET] Action: rnsanchez ducks in shame
[21:02:13 CET] <rnsanchez> there we go: Stream mapping:
[21:02:13 CET] <rnsanchez> Stream #0:0 -> #0:0 (h264 (h264_cuvid) -> h264 (h264_nvenc))
[21:02:13 CET] <rnsanchez> Stream #0:2 -> #0:1 (copy)
[21:02:30 CET] <rnsanchez> thank you so much, kepstin. and sorry for the noise :^)
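For the record, the fix was just the container: rnsanchez's own command with the output renamed from .h264 to .mkv (or .mp4) lets the audio stream be kept, roughly:
  ffmpeg -y -hwaccel cuvid -c:v h264_cuvid -i big_buck_bunny_1080p_h264.mov \
      -vf scale_npp=1920:-1 -c:v h264_nvenc -preset llhq -b:v 3000k -maxrate 8000k -bufsize 2000k \
      -c:a libvorbis -qscale:a 3 output-1920.mkv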
[21:02:54 CET] <llogan> Karede: some examples http://trac.ffmpeg.org/wiki/PHP
[21:05:33 CET] <Karede> According to my boss just using ffmpeg would pose a security risk so he wanted the php wrapper. Also, sorry for the late response.
[21:07:06 CET] <Karede> I'm new to this, needless to say.
[21:07:08 CET] <llogan> so use a 10 year old wrapper instead
[21:07:33 CET] <Karede> No, not using that one. I didn't realize it was 10 years old when I grabbed it.
[21:08:16 CET] <llogan> it was a joke. any security exploits should be so old that nobody would be using them anymore. Graybeard Security.
[21:08:42 CET] <Karede> lol
[21:09:19 CET] <llogan> how would using ffmpeg pose a security risk compared to using some sort of wrapper?
[21:10:08 CET] <kerio> how do you even use ffmpeg from php
[21:10:21 CET] <kerio> is this some long-running php meme
[21:11:48 CET] <llogan> i wonder why the tee muxer won't allow AAC-LC in .ts to be stream copied to...anything. http://pastebin.com/raw/AQhzN18G
[21:11:53 CET] <varion> so if i use: -filter_complex "loop=5:100:0", i can loop 100 frames, but why doesn't: -filter_complex "loop=5" seem to loop the entire thing? do you always have to specify?
[21:12:07 CET] <Karede> I asked him why we need it and that was the answer he gave me. Not forthcoming with info.
[21:18:39 CET] <llogan> Karede: whatever this is or does seems more up to date. https://github.com/PHP-FFMpeg/PHP-FFMpeg although can you trust something that doesn't spell FFmpeg correctly?
[21:21:32 CET] <JEEB> Karede: running FFmpeg on anything user-provided is a huge potential security hole, which is why generally you put it into its own little world
[21:22:24 CET] <Karede> I found a newer version on Packagist for php. Going to try that one.
[21:23:04 CET] <JEEB> that said, whether or not you use a PHP wrapper mostly has pretty much nothing to do with security
[21:25:46 CET] <Karede> I'll be trying to understand it better later. For now I am just doing what I am told.
[21:25:46 CET] <durandal_1707> varion: yes, you always need to specify, mainly to avoid triggering a memory bomb
[21:27:09 CET] <varion> durandal_1707: so i guess i'm going to need to ffprobe first or something, right? because i always need to loop aaaaaall of the frames
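A sketch of that two-step approach (N and the file names are placeholders): count the decodable frames with ffprobe, then hand the count to the loop filter. Note that loop keeps up to "size" decoded frames in memory, which is the memory-bomb concern durandal_1707 mentions:
  ffprobe -v error -count_frames -select_streams v:0 \
      -show_entries stream=nb_read_frames -of default=nokey=1:noprint_wrappers=1 input.mp4
  ffmpeg -i input.mp4 -filter_complex "loop=loop=5:size=N:start=0" -c:v libx264 looped.mp4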
[21:27:11 CET] <JEEB> Karede: as in, if the input is fed by an untrusted actor running FFmpeg-based stuff (including PHP wrappers) without further security precautions can be rather... insecure, to say the least
[21:27:36 CET] <JEEB> Karede: because FFmpeg tries to support so much and a clever attacker can try to utilize a bug in one of the less popular formats
[21:28:15 CET] <durandal_1707> varion: how big are frames? how much ram do you have?
[21:29:13 CET] <varion> this will likely be on an ec2 at the end of the day. i can try doing an additional step using concat or stream_loop or something
[21:29:27 CET] <varion> does stream_loop re-encode? or quite literally just duplicate the data inside the container?
[21:32:35 CET] <durandal_1707> varion: stream loop loops decoded frames
[21:32:53 CET] <varion> so i'd have an additional generational loss
[21:33:43 CET] <durandal_1707> varion: using filter complex does reencoding anyway
[21:34:13 CET] <durandal_1707> you could use stream loop without reencoding iirc
[21:34:13 CET] <varion> no, i'm fine with that. i mean, i don't want to do my filtering, then do another pass (another ffmpeg command) to loop it, and lose the additional generation of quality
[21:34:50 CET] <varion> if stream_loop can be used to dupe the data without an additional generation of encoding, it could work
[21:36:23 CET] <varion> at the end of the day, i basically need to get a video, and a video watermark. composite them, and then loop them x times. what i don't want to do is re-encode every video twice and lose the quality in the middle generation
[21:37:15 CET] <varion> and i don't think the stack will allow me to throw intermediate codecs into the mix, because of HD space concerns
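If the compositing pass and the looping pass are split, the second pass does not have to re-encode; a sketch using -stream_loop as an input option together with stream copy (names are placeholders; 4 means four extra repetitions):
  ffmpeg -stream_loop 4 -i composited.mp4 -c copy looped.mp4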
[21:52:39 CET] <faLUCE> how could you define "R" "G" and "B" for both planar and packed frames? Components? I can't use the word "channel", because in planar frames a channel can have more than one component.
[21:54:05 CET] <durandal_1707> faLUCE: context?
[21:54:38 CET] <faLUCE> durandal_1707: it's a generic question
[21:54:53 CET] <faLUCE> but I see that they are called components too
[22:49:32 CET] <thebombzen> do you guys recommend building ffmpeg with clang or gcc?
[22:50:00 CET] <jkqxz> Yes.
[22:50:13 CET] <JEEB> both should be fine, with android I recently switched to clang
[22:50:16 CET] <thebombzen> I was waiting for someone to give the obnoxious answer
[22:50:29 CET] <thebombzen> is there any reason to prefer one over the other?
[22:50:45 CET] <JEEB> what is supported better by your system
[22:50:54 CET] <JEEB> I mostly love clang's static analysis and warnings|errors
[22:51:12 CET] <thebombzen> well I don't really care about warnings/errors/analysis
[22:51:15 CET] <jkqxz> What, it's totally a useful answer! You are far better off with clang or gcc over msvc, say.
[22:51:24 CET] <thebombzen> given that I'm just building from the git master
[22:51:24 CET] <jkqxz> And I don't think there is much to choose between them.
[22:51:40 CET] <thebombzen> I wouldn't be using msvc :P
[22:52:18 CET] <JEEB> the old saying used to be that gcc optimized somewhat better, but not sure if people actually did any benchmarking with recent versions of both with x86_64 or ARM
[22:52:33 CET] <JEEB> both certainly create working binaries
[22:52:38 CET] <thebombzen> my system is x86_64-linux-elf
[23:16:03 CET] <eagspoo> Hi, I'm trying to create a test HLS audio output (m3u8 file and ts files) by using a series of 5 mp3 files. I'd like them split into 2s segments and I want the mp3 id3 metadata to pass through to the HLS output. I naively tried this: http://pastebin.com/A976f8TY. The result is an m3u8 but only a single ts file. I'm (probably obviously) very new to ffmpeg so any help would be great.
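A minimal sketch of the segmentation half of that question (names and bitrate are placeholders; it does not address the ID3 passthrough part): -hls_time sets the target segment length and -hls_list_size 0 keeps every segment in the playlist:
  ffmpeg -i input.mp3 -c:a aac -b:a 128k -f hls -hls_time 2 -hls_list_size 0 output.m3u8
The five mp3s could be joined beforehand with the concat demuxer (-f concat -safe 0 -i list.txt) so a single playlist covers all of them.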
[23:30:03 CET] <hyponic> Hi.. transcoding a real-time stream using vaapi is giving really unsatisfying results: http://148.251.126.76/test.ts and i can't seem to figure out why. anyone with experience with that? QSV gives better results but there is a limit on the number of simultaneous sessions. any way around that too?
[00:00:00 CET] --- Thu Feb 2 2017