[Ffmpeg-devel-irc] ffmpeg.log.20180315
burek
burek021 at gmail.com
Fri Mar 16 03:05:01 EET 2018
[01:09:46 CET] <Arbition> I would like some guidance in transcoding interlaced content. I have been getting this message dumped continuously: "Past duration 0.999992 too large". The "fix" I found was to set the output framerate; however, I notice the tbc has thus gone from 50 to 25 (for 25 fps). This sounds like interlaced frames are now being dropped
[01:10:29 CET] <Arbition> Would it be better to squash interlacing to progressive, thus merging the two frames, or is there some other approach to preserving the interlacing?
[01:11:12 CET] <Arbition> Or am I misunderstanding the TBC value?
[01:12:52 CET] <kepstin> you're just misunderstanding the 'tbc' value. It's a weird legacy thing and you should really just ignore it
[01:13:11 CET] <kepstin> it's just a number that, for certain mpeg codecs, is sometimes twice the framerate for no real reason.
[01:13:38 CET] <Arbition> Does it make a difference to force the output FPS then?
[01:13:56 CET] <Arbition> (DVD mpeg2 -> x264)
[01:14:16 CET] <kepstin> forcing the output fps (using the fps filter is one way) will solve the Past duration too large issue, and shouldn't drop any frames with most video sources
[01:14:24 CET] <Arbition> ok cool
[01:15:06 CET] <kepstin> that message comes from a problem where sometimes ffmpeg misdetects the file as having constant framerate that's lower than the real framerate.
[01:15:21 CET] <kepstin> when you see that message, ffmpeg might be dropping frames to make it match the guessed framerate
[01:15:44 CET] <kepstin> (but I'm not actually sure about that, it's in a weird bit of code)
[01:16:34 CET] <kepstin> note that you need some extra options to encode interlaced mode with libx264
[01:16:52 CET] <kepstin> (although I'd suggest deinterlacing instead, depending on your use case)
[01:17:14 CET] <Arbition> -flags +ilme+ildct ?
[01:18:06 CET] <kepstin> -flags +ildct is sufficient
[01:18:32 CET] <kepstin> (+ilme is ignored by libx264, so it doesn't hurt anything)
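Putting kepstin's advice together, a minimal sketch of the command being discussed might look like the following (the file names, rate and quality settings are assumptions, not from the log):

    ffmpeg -i dvd.mpg -vf fps=25 -c:v libx264 -flags +ildct -crf 20 -c:a copy out.mkv

or, taking the deinterlacing route he mentions instead:

    ffmpeg -i dvd.mpg -vf yadif -c:v libx264 -crf 20 -c:a copy out.mkv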
[02:23:43 CET] <miguex> Hello, I'm trying to download an m3u8 stream but I'm having a problem in the middle of it. The thing is that after some time the stream goes into some kind of "pause" in which it loops a video and a #EXT-X-DISCONTINUITY-SEQUENCE tag starts to appear on the m3u8 file. This causes an error while downloading the stream and I end up getting the error "
[02:23:43 CET] <miguex> Non-monotonous DTS in output stream 0:0; previous: 8013626261, current: 29319390; changing to 8013626262. This may result in incorrect timestamps in the output file.
[02:23:44 CET] Last message repeated 1 time(s).
[02:23:44 CET] <miguex> "
[02:24:50 CET] <miguex> Once that happens, when I try to play the video it appears to be 2 hrs long even though I recorded just a few seconds; the video won't open, and if it does open, the audio is completely out of sync.
[03:10:47 CET] <miguex> So does anyone have an idea about what I'm asking?
[03:10:54 CET] <miguex> Is there anyone even here lol?
[03:24:23 CET] <miguex> Anyone @here ?
[03:29:57 CET] <klaxa> everyone's asleep zzz
[03:32:50 CET] <furq> miguex: https://trac.ffmpeg.org/ticket/5419
[03:32:57 CET] <furq> that doesn't look promising
[03:39:54 CET] <furq> if this is a vod stream then you could potentially split the playlist at every discontinuity, download them all separately and then concat them
[03:40:11 CET] <furq> if it's a live stream then you're pretty much screwed as far as i can tell
[03:45:07 CET] <monokrome> nope, we all fell over
[03:45:08 CET] <monokrome> we can't get up
[03:53:16 CET] <miguex> it's a live stream
[04:14:57 CET] <miguex> furq: so that post fixes it just by rebuilding a version with the modified code?
[04:16:55 CET] <furq> that was over a year ago and the patch hasn't been merged
[04:17:05 CET] <furq> so uh
[04:17:12 CET] <furq> i guess you could try it but i wouldn't hold my breath that it works
[04:17:16 CET] <furq> or that it won't break anything else
[04:17:50 CET] <miguex> I'm using windows
[04:18:01 CET] <miguex> So idk if I can build from here
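For the VOD case furq describes above (splitting the playlist at each discontinuity and downloading the pieces separately), the downloaded parts could then be joined with the concat demuxer, roughly like this (file names are placeholders):

    # list.txt contains one line per piece, e.g.  file 'part1.ts'
    ffmpeg -f concat -safe 0 -i list.txt -c copy joined.mp4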
[08:45:11 CET] <rk_> Hi, I am using libavformat to mux a webm file. The output webm file doesn't have the duration set in its segment info. What could be the reason?
[09:46:06 CET] <dragmore88> Anyone know if there's an alternative to Bitrate Viewer 2.3 (that uses ffmpeg as a backend) out there, commercial or open?
[13:47:04 CET] <gloomy> Hi :)
[13:48:00 CET] <gloomy> Why is it that `ffmpeg -i input.mp4 -filter_complex "atrim=0:60" output.mp4` is so slow (3x) when the operation requested is extremely simple?
[13:48:13 CET] <furq> because you're reencoding the video
[13:49:14 CET] <gloomy> 1. What's the right way to do it then? and 2. how come it keeps going even after the 1st minute?
[13:49:29 CET] <furq> because you're only trimming the audio
[13:49:48 CET] <furq> the right way is -i in.mp4 -c copy -t 60 out.mp4
[13:50:12 CET] <furq> that might not cut precisely at the end but it's as close as you'll get without reencoding
[13:51:05 CET] <gloomy> Aaah, so it's atrim as in audio-trim
[13:51:09 CET] <furq> right
[13:51:11 CET] <gloomy> should have read the doc more carefully
[13:51:19 CET] <gloomy> Ok, thank you :-)
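As a small follow-up to furq's command: `-ss` combines with stream copy the same way, e.g. `ffmpeg -ss 30 -i in.mp4 -c copy -t 60 out.mp4`, with the caveat that the cut points land on keyframes, so they are only approximate unless you re-encode.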
[15:09:34 CET] <a-l-e> hello. i'm creating a "loop" webm from a single png...
[15:09:55 CET] <a-l-e> using vp8 takes forever and with vp9 it's really fast...
[15:10:16 CET] <a-l-e> is there anything i can do to speed up the creation of a vp8 webm?
[15:10:22 CET] <a-l-e> here is the command with the times:
[15:10:24 CET] <Smith> Hi
[15:11:14 CET] <a-l-e> https://framabin.org/?d4c653276018d2cc#+mZyteBO0HV5qOU/JqSCoCiYSjh4LQ0YtyACR1knIxU=
[15:11:16 CET] <Guest34935> How do I speed up seeking in video files for an app that displays a video and requires someone to click on characters inside the video?(The only part I am asking is about how to seek inside a video fast, not the clicking part)
[15:11:57 CET] <Guest34935> I am using EmguCV that wraps OpenCV which use ffmpeg as I can tell
[15:12:47 CET] <Guest34935> The most painful thing is stepping back one frame
[15:13:15 CET] <Guest34935> It is possible I just need to use a more seek friendly video format, but I didn't find much info about it
[15:13:36 CET] <kepstin> a-l-e: there's various options to adjust speed/quality tradeoffs in vp8 and vp9 encoding.
[15:14:12 CET] <kepstin> a-l-e: but do note that vp9 is a better codec, and (with recent versions) a better encoder, so you really shouldn't bother doing vp8 unless you need to support particular old browsers.
[15:15:02 CET] <a-l-e> the problem is that i'm using webm in vp8 created by another application...
[15:15:21 CET] <kepstin> Guest34935: if backstepping (going back 1 frame) is too slow, then the easiest fix is to re-encode the video with a shorter keyframe interval (gop size)
[15:15:40 CET] <a-l-e> but if there is no (easy) way to speed up the creation of my vp8, i will convert the existing ones to vp9...
[15:16:23 CET] <a-l-e> ok, the conversion from vp8 to vp9 seems to be fast...
[15:16:45 CET] <kepstin> a-l-e: don't re-encode from vp8 to vp9, that's silly, it would just lower quality from the extra lossy encode generation
[15:16:54 CET] <kepstin> leave your vp8 files as vp9, encode new ones as vp9
[15:17:00 CET] <kepstin> leave your vp8 files as vp8, encode new ones as vp9
[15:17:06 CET] <kepstin> sorry, typo in the first one :)
[15:18:36 CET] <a-l-e> kepstin, the problem is that i'm using mkvmerge to merge multiple webm files and it complains (and fails) because of the mismatch between vp8 and vp9...
[15:18:47 CET] <kepstin> but to adjust speed, use the "-speed N" option, with a number from about 0 to 6, I think. Higher numbers encode faster, but have lower quality.
[15:18:56 CET] <kepstin> default is -speed 1
[15:20:19 CET] <kepstin> (when using vp8, you should also be setting bitrate and possibly also quality options - the defaults aren't great)
[15:21:27 CET] <a-l-e> no idea what i should set... basically i'm simply producing some screencasts
[15:21:55 CET] <kepstin> a-l-e: well, with the defaults i'd expect that you're getting pretty low quality video :/
[15:22:05 CET] <kepstin> depends on video size and content what options make sense
[15:22:30 CET] <a-l-e> well, i'm just creating a static webm with the content of an svg (mostly text and shapes)
[15:22:38 CET] <a-l-e> and then merging them with screencaptures.
[15:23:11 CET] <a-l-e> for now, the only issue is that it takes forever to create the vp8 files and it's snappy with vp9...
[15:23:44 CET] <a-l-e> so i'm considering adding an option to first convert the vp8 files i already have to vp9, and filing a ticket with peek to get them to produce vp9 files : - )
[15:24:05 CET] <a-l-e> (filing the ticket in the peek tracker)
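Putting kepstin's suggestions together, a hedged sketch of a faster vp8 encode from a single png might look like this (the input name, duration, framerate and bitrate/quality numbers are placeholders):

    ffmpeg -loop 1 -framerate 25 -i slide.png -t 5 -c:v libvpx -b:v 1M -crf 10 -speed 4 out.webm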
[15:42:59 CET] <Guest84446> Hello! I have a question: i need to make multiple conversions of one video (to different qualities), can i do this with only one command?
[15:43:09 CET] <furq> sure
[15:43:44 CET] <furq> -i foo.mp4 -crf 20 -s 1920x1080 bar.mp4 -crf 19 -s 1280x720 baz.mp4 ...
[15:44:04 CET] <Guest84446> and with different bitrate ?
[15:44:15 CET] <DHE> he's using -crf, but -b also works
[15:44:23 CET] <furq> everything works
[15:44:29 CET] <furq> you can have as many outputs as you want
[15:44:49 CET] <DHE> ffmpeg [input options] -i input [output1 options] output1 [output1 options] output2 ...
[15:44:59 CET] <furq> output2 options
[15:45:02 CET] <DHE> hell you can do multiple inputs using the same rules (though it gets complicated fast)
[15:45:03 CET] <DHE> whoops
[15:45:28 CET] <furq> just remember that subsequent outputs don't inherit options from earlier ones
[15:45:39 CET] <furq> you need to specify them all for every output
[15:47:51 CET] <Guest84446> Ok !
[15:47:59 CET] <Guest84446> Ty very much
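Spelling out DHE's skeleton with the options repeated per output, as furq notes they must be (codecs, bitrates and sizes are just placeholders):

    ffmpeg -i input.mp4 \
        -c:v libx264 -b:v 4M -s 1920x1080 -c:a aac -b:a 192k out_1080.mp4 \
        -c:v libx264 -b:v 2M -s 1280x720  -c:a aac -b:a 128k out_720.mp4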
[16:01:23 CET] <Guest34935> Thanks guys, apparently the file format was really awkward, so re-encoding it to avi with qscale 2 (overkill I know) solved it
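For reference, kepstin's shorter-keyframe-interval suggestion for faster backstepping would look roughly like this with libx264 (the GOP size and quality values are only illustrative):

    ffmpeg -i in.mp4 -c:v libx264 -g 15 -crf 18 -c:a copy seekable.mp4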
[16:38:04 CET] <debianuser> Hello. A short question: what container should I store video+audio+"hdmv_pgs_subtitle" in? Details: I have a bunch of *.MTS files to concat, crop and cut a little. I.e. I'm going to reencode video anyway. But those files also have timestamps as "hdmv_pgs_subtitle" that I'd like to keep. So, what should I save the result to? .mkv? .mp4? .ts? Should I reencode subs to some other scodec?
[16:39:52 CET] <CoreX> .mkv as mp4 is dead as the night now
[16:40:17 CET] <furq> "dead as the night" meaning by far the most popular container in use in the world
[16:40:23 CET] <furq> but yeah it doesn't support PGS
[16:40:38 CET] <furq> if mkv supports it then use that, otherwise ts
[16:52:45 CET] <debianuser> It's just I don't know what scodec can be put in what container (is there a list somewhere?). And I'm not sure I can just "try it", i.e. `ffmpeg -i file.MTS -map 0:v -map 0:a -map 0:s -scodec copy -t 10 file.mp4` creates a file (!), but... can any player actually play that?
[16:54:59 CET] <furq> are you sure that's actually creating a subtitle stream in the output
[16:57:13 CET] <furq> also i checked and pgs is supported in mkv and it seems to play back ok
[16:57:14 CET] <debianuser> Well... it says it does: Output #0, mp4, to 'file.mp4': ... Stream #0:2: Subtitle: hdmv_pgs_subtitle ([144][0][0][0] / 0x0090), 1920x1080
[16:57:30 CET] <furq> weird
[16:57:34 CET] <furq> that shouldn't work
[16:58:29 CET] <furq> [mp4 @ 0x80e06a600] Could not find tag for codec hdmv_pgs_subtitle in stream #5, codec not currently supported in container
[16:59:39 CET] <debianuser> ("ffmpeg version 3.4" here, if that matters)
[17:00:02 CET] <furq> i'm on 3.4.2 so that doesn't help much
[17:07:38 CET] <debianuser> ffmpeg successfully created file.mp4, file.mkv and file.ts. `ffplay file.mkv` displays subs on screen and prints "Stream #0:2: Subtitle: hdmv_pgs_subtitle, 1920x1080 (default)". But `ffplay file.mp4` prints "Stream #0:2(und): Data: none ([144][0][0][0] / 0x0090), 47 kb/s (default)" and shows no subs. Also no subs from `ffplay file.ts` : "Stream #0:2[0x102]: Data: bin_data ([6][0][0][0] / 0x0006)".
[17:08:29 CET] <debianuser> So either I must recode subs to some other scodec, or .ts doesn't support hdmv_pgs_subtitle, or there's some bug in my ffmpeg. :(
[17:13:26 CET] <debianuser> Not sure why .ts didn't work... I guess I'll use .mkv then. Thanks furq and CoreX!
[17:22:54 CET] <debianuser> (Actual reencode would probably take weeks, so if anyone has any other suggestions, a better container or a better scodec - any suggestions welcome!)
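A sketch of the mkv route that worked here, keeping the PGS stream untouched while re-encoding the video (the video/audio codec choices are assumptions; only -c:s copy is the point):

    ffmpeg -i file.MTS -map 0:v -map 0:a -map 0:s -c:v libx264 -crf 18 -c:a copy -c:s copy out.mkv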
[17:43:42 CET] <Nik05> Does anyone know if the ffmpeg static build works on Windows Server Core?
[17:44:35 CET] <Nik05> When running it on Windows 10 I get output, but on Windows Server Core nothing happens
[17:51:48 CET] <Nik05> Turns out there are still dll dependencies
[17:55:53 CET] <Jodon> Hello, I tried posting a long question to the libav-user list, but it doesn't look like it went through (this was weeks ago with no response). Was hoping some people could help me on here instead. There seems to be a thread going on right now about AVCodecParameters vs. AVCodecContext which is part of what I'm confused about. I thought I'd paste my original post since it's quite long and covers a whole area of things: https://paste
[17:56:38 CET] <furq> Jodon: that cut off in the middle of the url
[17:57:45 CET] <Jodon> Doh! https://pastebin.com/0qXR564k
[17:58:39 CET] <Jodon> Does that URL work?
[18:28:34 CET] <Jodon> Is my post too long, too disconnected in thoughts, or unclear?
[18:33:27 CET] <memo1> Hi, im capturing video with ffmpeg, but how do i manage to overwrite the disk (automatically) once the disk is full?
[18:51:26 CET] <gh0st3d> Hey everyone... I'm concatenating two videos. The output video is working fine in VLC player & the web, but the "second" video is just showing black when played in quicktime. All I can find online is that the video should be pixel format yuv420p. The output video says it's yuv420p. The second source video also says yuv420p and plays fine by itself in quicktime. Any ideas?
[18:55:58 CET] <kepstin> gh0st3d: mp4 container?
[18:56:29 CET] <gh0st3d> Yes, sorry I tried to include everything and forgot that part haha
[18:56:59 CET] <kepstin> gh0st3d: it could just be that the two h264 streams have some different parameters and that one player doesn't care but quicktime breaks :/
[18:57:19 CET] <kepstin> iirc, ffmpeg can't store the sps or whatever it is that signals parameter changes into mp4 container
[18:57:56 CET] <gh0st3d> That would make sense to me if the source file for the second video didn't play correctly in quicktime, but the source file does
[18:58:19 CET] <gh0st3d> Ah, I reread what you said, I'll check the two sources
[19:00:40 CET] <kepstin> both files could individually play fine, but the second could fail to play when concatenated due to this issue
[19:02:26 CET] <ritsuka> probably ffmpeg doesn't add a separate sample description for the concatenated h.264 track. So QuickTime will try to use the first sample description for the second part, and it won't work
[19:03:13 CET] <durandal_1707> Jodon: do you have issues with porting code to AVCodecParams?
[19:08:28 CET] <gh0st3d> Which of these need to be exactly the same: 380 kb/s, 25 fps, 25 tbr, 12800 tbn ? (re the fps: it looks like while ffprobe reports 25 as the fps on the first video, the stream actually says 24.95)
[19:18:33 CET] <kepstin> gh0st3d: there's a lot of internal parameters in the h264 stream that have to match (this isn't stuff most tools will show you)
[19:21:16 CET] <gh0st3d> Well, hopefully changing the fps to 24.95 will fix it then :(
[19:21:37 CET] <kepstin> different fps between streams isn't normally a problem
[19:22:16 CET] <gh0st3d> Hmm, I did also just notice the pixel format on the video that plays both times is "yuv420p(tv, bt709)" while the secondary clip is just "yuv420p"
[19:28:14 CET] <Jodon> @durandal_1707: Kind of. I mean, I followed tutorials that use AVCodecContext for everything. I'm writing new software that allows the user to basically paste in configuration (using the AVOptions serialization API -- I forget the name of it). However, this new AVCodecParameters is kind of killing this idea. For instance, video_size=1920x1080 is parsed correctly from AVCodecContext which we're not supposed to use.
[19:30:31 CET] <Jodon> So I'm just trying to understand. What should I be doing? Should I be avoiding using AVOptions and instead using AVDictionary? Should I be copying over from the CodecContext to AVCodecParameters ... that seems oddly redundant, and I assumed those calls were hacks to maintain current compatibility.
[19:31:11 CET] <durandal_1707> AVCodecParameters is filled by demuxers and given to muxers, and are not hacks
[19:33:34 CET] <Jodon> Yup, AVCodecParameters is the new way forward, I understand that. What I mean is the avcodec_parameters_from_context ... is that a hack? It seems like my AVCodecParameters should always be filled out by an AVCodecContext.
[19:33:49 CET] <Jodon> (this is for encoding, I'm currently only dealing with that)
[19:34:22 CET] <durandal_1707> that is useful helper utility
[19:41:13 CET] <Jodon> Cool, so that's not going away anytime soon? So I guess the other part of the question is, if I use AVStream and do av_new_stream, passing in the codec I want to use (the documentation tells me to do that, as it needs this information to setup defaults)... it will give me an AVStream::codec (which is a CodecContext) whose access is deprecated. Should I not be using that?
[19:57:26 CET] <durandal_1707> deprecated stuff should not be used, as its gonna be removed
[20:05:57 CET] <gh0st3d> kepstin ok I made some changes and used ffprobe to check the streams of both source files... They show a different "major brand" one is mp42 and one is isom. Other than that the only differences are: profile, time_base, duration_ts, duration, bit_rate, nb_frames.... Could it be any of those? I know most of those are just the time of the video. Also not sure if ffprobe can give me everything I need to check.
[20:06:59 CET] <kepstin> gh0st3d: ffprobe won't show you the internal details of the h264 stream, no. It mostly just has generic stuff.
[20:09:36 CET] <gh0st3d> Gotcha. Damn. Any suggestions for fixing the quicktime issue?
[20:10:22 CET] <kepstin> gh0st3d: what exactly are you trying to do?
[20:10:52 CET] <kepstin> i assume you have some existing video clip, already encoded, and you're trying to add some newly encoded clip either before or after it?
[20:11:41 CET] <gh0st3d> Yes so we start with a content clip made with a video maker and then re-encoded in handbrake.... Then we use phantomjs to animate some html and pipe screenshots to FFMPEG which generates another video
[20:11:47 CET] <gh0st3d> Then the two videos are concatenated
[20:12:19 CET] <kepstin> gh0st3d: ok. so what you need to do is find out the *exact* options that handbrake is using on libx264 and then copy them to your ffmpeg command
[20:12:31 CET] <kepstin> once you get those to match, you should be good
[20:12:43 CET] <kepstin> also make sure handbrake is using libx264 and not a hardware encoder.
[20:13:24 CET] <furq> gh0st3d: if it's x264 then you can just use mediainfo to pull the x264 settings string
[20:13:25 CET] <kepstin> (you can actually see the libx264 options used to encode a file with certain stream info tools, or even by just opening it in a hex editor)
[20:13:37 CET] <furq> or yeah strings or xxd will also show it but less nicely
[20:14:15 CET] <furq> if you have a profile set up in handbrake or something then that ought to be easier
[20:14:28 CET] <furq> since it's very likely to just be -preset that you need to set
[20:14:31 CET] <kepstin> gh0st3d: the other option is to just skip the "re-encoded with handbrake" step, and just have ffmpeg do the final encode all at once
[20:14:58 CET] <kepstin> depending how exactly you're generating the video, you might be able to do that.
[20:22:38 CET] <gh0st3d> I'm not seeing much other info on handbrake than what ffprobe shows. Gonna see if I can find any output logs from handbrake. If not I'll look into installing mediainfo on the server so I can check the files that way.
[20:46:12 CET] <Jodon> @durandal_1707: Okay, so for encoding. Are these the steps then? avcodec_find_encoder_by_name, pass that AVCodec to avformat_new_stream. The newly created AVStream will take the information from AVCodec and store it in its AVCodecParameters. However, you now create your own AVCodecContext... you copy the parameters out of AVStream::codecpar into your own AVCodecContext.
[20:46:50 CET] <durandal_1707> yes
[20:47:41 CET] <Jodon> Then you configure your own AVCodecContext, copy those parameters back into AVStream::codecpar (in case you've changed them), then do all of your encoding with your AVCodecContext?
[20:48:53 CET] <durandal_1707> yea
[20:49:47 CET] <Jodon> Excellent. Thanks!
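A minimal C sketch of the flow just confirmed, assuming an already-opened output AVFormatContext (error handling omitted; the codec name, sizes and rates are placeholders):

    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>

    /* add one video stream to an output context and return its encoder context */
    static AVCodecContext *add_video_stream(AVFormatContext *oc)
    {
        AVCodec *codec = avcodec_find_encoder_by_name("libx264");
        AVStream *st = avformat_new_stream(oc, codec);
        AVCodecContext *enc = avcodec_alloc_context3(codec);

        /* configure the context yourself */
        enc->width     = 1280;
        enc->height    = 720;
        enc->pix_fmt   = AV_PIX_FMT_YUV420P;
        enc->time_base = (AVRational){1, 25};
        enc->gop_size  = 50;
        if (oc->oformat->flags & AVFMT_GLOBALHEADER)
            enc->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;

        avcodec_open2(enc, codec, NULL);

        /* copy the final settings into the stream's codecpar for the muxer */
        avcodec_parameters_from_context(st->codecpar, enc);
        st->time_base = enc->time_base;

        return enc;  /* use with avcodec_send_frame()/avcodec_receive_packet() */
    }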
[21:01:14 CET] <Jodon> Oh two more (related) things: Is there a preferred method of using AVOptions or using AVDictionary during the creation of the codec? I prefer AVOptions because it tells me when I get something wrong. Is there ever a case where one would work and the other doesn't? In the case I've just outlined, would AVOptions be required since you need the codecpar defaults before your configuration takes place?
[21:02:36 CET] <kepstin> I believe there shouldn't be any intersection between things configured via avoptions/avdictionary (which are normally codec-private options) vs. common options which can appear in codecparameters or be set on the codeccontext.
[21:03:19 CET] <kepstin> (although there's some weird cases, like x264opts/x265opts which can override settings on the context)
[21:06:34 CET] <Jodon> Oh really? That's even more confusing now. Now I'm not sure of what I would even put in the AVDictionary. I thought they were the same!
[21:09:27 CET] <kepstin> Jodon: the AVOptions stuff is basically a way for codecs to support additional features specific to that codec. Some examples are the "preset" and "crf" options on libx264
[21:10:09 CET] <Jodon> I've been using it to set common things like gop size, and even pix_fmt
[21:11:25 CET] <kepstin> Huh, I've always just set those on the appropriate contexts directly, I didn't realize you could set them via avoptions.
[21:11:28 CET] <Jodon> Because I don't know what I'm doing, I basically have everything user-configurable from a text string where we can quickly tune g=10,flags=+qscale and all that kind of stuff.
[21:14:52 CET] <gh0st3d> Looks like the profile=Main & profile=High mismatch was the problem. Think we're good to go now
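In other words, when producing the second clip, forcing the x264 profile (and pixel format) to match the other clip should avoid the mismatch, along these lines (the input here is a placeholder; in gh0st3d's setup the frames arrive over a pipe, and whether to force main or high depends on what the other clip uses):

    ffmpeg -framerate 25 -i frame%05d.png -c:v libx264 -profile:v main -pix_fmt yuv420p -preset medium clip2.mp4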
[21:15:44 CET] <kepstin> https://git.videolan.org/?p=ffmpeg.git;a=blob;f=libavcodec/options_table.h - huh, there they are. I guess everything on the context does have an avoption.
[21:19:56 CET] <kepstin> Jodon: creating the context with stuff in an AVDictionary vs. creating it and then setting stuff via AVOptions afterwards should be equivalent
[21:22:26 CET] <kepstin> so just do whichever fits your application better.
[21:31:07 CET] <Jodon> Excellent. Thanks. I really like AVOptions since it tells me when I'm wrong. I spent so much time trying to set a flag as an option (since I misread how the options_table worked).
[21:37:00 CET] <kepstin> the avdictionary stuff sort of tells you when something didn't work - it updates the list to remove options it read and leave options it didn't read.
[22:07:17 CET] <kepstin> note that if you have to deal with an avoption that's reporting an 'invalid' error, you might have to look at the log output from the library to see a message saying what's wrong.
[22:08:56 CET] <Jodon> Yeah... I'm also developing on windows so sometimes I can't get some of the error messages since they go to the error log... haven't figured out how to redirect that to windows' outputdebugstring stuff
[22:11:26 CET] <kepstin> you can install a handler/callback for the log messages that'll let you output them however you like
[22:19:02 CET] <Jodon> oh really? didn't know that.
[22:19:15 CET] <Jodon> you mean as part of ffmpeg, or as part of like MSVC
[22:19:22 CET] <kepstin> part of ffmpeg
[22:19:50 CET] <memo1> Hi, im capturing video with ffmpeg, but how do i manage to overwrite the disk (automatically) once the disk is full?
[22:21:50 CET] <kepstin> Jodon: see av_log_set_callback and related functions in libavutil/log.h
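A minimal sketch of such a callback, assuming you just want every message routed somewhere visible on Windows (the OutputDebugStringA call is the only Windows-specific part; the static prefix flag is not thread-safe, which is fine for a sketch):

    #include <libavutil/log.h>
    #include <stdio.h>
    /* #include <windows.h>  for OutputDebugStringA */

    static void my_log_cb(void *ptr, int level, const char *fmt, va_list vl)
    {
        static int print_prefix = 1;
        char line[1024];
        if (level > av_log_get_level())
            return;
        av_log_format_line(ptr, level, fmt, vl, line, sizeof(line), &print_prefix);
        fputs(line, stderr);          /* or OutputDebugStringA(line); */
    }

    /* somewhere during init: */
    av_log_set_callback(my_log_cb);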
[22:36:40 CET] <Jodon> kepstin: Sounds great. Thanks. Do you know about thread-safety in ffmpeg at all? I'm doing almost all my encoding on the render thread in my application (which kills performance). I was wondering if I can spin-off a worker thread to perform avcodec_send_frame, avcodec_receive_packet, and/or av_interleaved_write_frame.
[22:38:03 CET] <DHE> ffmpeg is not thread-safe within a single AVSomethingContext
[22:38:09 CET] <kevinnn> Hi all! How do I convert BGRA 32 packed to YUV 4:4:4 planar c++
[22:38:15 CET] <kevinnn> in c++
[22:38:22 CET] <DHE> but each instance of any Context may be used independently by a thread, or by multiple threads with your own locking
[22:38:22 CET] <kevinnn> Is there a library for it?
[22:38:27 CET] <kepstin> Jodon: it's safe to pass AVFrame and AVPacket between threads, and that's pretty much it.
[22:40:56 CET] <JEEB> kevinnn: you can either use libswscale or zimg straight, or either through libavfilter
[22:41:33 CET] <kevinnn> libswscale was what I was looking for! thanks
[22:44:03 CET] <kevinnn> JEEB: Do you by any chance have a link to an example I could look at?
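Not a link, but a minimal libswscale sketch for this conversion (assumes dst is an AVFrame already set up as AV_PIX_FMT_YUV444P with av_frame_get_buffer(); error handling trimmed):

    #include <libswscale/swscale.h>
    #include <libavutil/frame.h>

    /* convert one w x h packed BGRA buffer into a planar YUV 4:4:4 frame */
    static int bgra_to_yuv444(const uint8_t *bgra, int src_stride,
                              AVFrame *dst, int w, int h)
    {
        struct SwsContext *sws = sws_getContext(w, h, AV_PIX_FMT_BGRA,
                                                w, h, AV_PIX_FMT_YUV444P,
                                                SWS_BILINEAR, NULL, NULL, NULL);
        if (!sws)
            return -1;
        const uint8_t *src_data[4]     = { bgra, NULL, NULL, NULL };
        const int      src_linesize[4] = { src_stride, 0, 0, 0 };
        sws_scale(sws, src_data, src_linesize, 0, h, dst->data, dst->linesize);
        sws_freeContext(sws);
        return 0;
    }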
[22:51:10 CET] <Jodon> Cool, that's just the info I needed, thanks!
[22:57:20 CET] <zamba> i'm using s-video to get input to my video4linux card.. the problem is i'm only getting black and white
[22:58:26 CET] <zamba> i tried specifying -standard pal, but to no avail
[22:58:32 CET] <zerodefect> Using the C API, I think I'm observing some behaviour where it looks like FFmpeg file readers will buffer data from a file before it is needed. I'm wondering if there is a way to control the readahead/lookahead? I appreciate why buffering is performed, but I was wondering if there was some fine-grained control over its behaviour.
[23:07:32 CET] <ltrudeau> yaowu: sorry for the delay, like I said in the email, unless members of the alliance speak up in favor, it probably won't get merged
[23:10:52 CET] <ltrudeau> Sorry wrong channel
[23:26:30 CET] <atomnuker> zerodefect: there's no such thing as lookahead when using the C api
[23:26:41 CET] <atomnuker> you control when you feed it frames or packets
[23:27:29 CET] <zerodefect> Ok. I wondered if there was an internal buffer: file <=> buffer <=> api
[23:27:50 CET] <zerodefect> And that buffer was performing some look ahead.
[23:29:01 CET] <JEEB> if nothing else, you can do the IO yourself if you really want to
[23:30:47 CET] <zerodefect> Yeah, I saw that... I wouldn't like to put my bets down that I could improve it
[23:30:51 CET] <zerodefect> :)
[23:31:03 CET] <JEEB> I usually use it to implement custom things
[23:31:10 CET] <JEEB> like IStreams in windows
[23:31:15 CET] <JEEB> used for IPC
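For what JEEB is describing, the usual entry point is avio_alloc_context(): you supply your own read (and optionally write/seek) callbacks and hand the resulting AVIOContext to the demuxer. A hedged sketch, with the source type and callback body as placeholders:

    #include <libavformat/avformat.h>
    #include <libavutil/mem.h>

    struct my_src { /* whatever state your byte source needs */ };

    static int my_read(void *opaque, uint8_t *buf, int buf_size)
    {
        /* fill buf from your own source; return bytes read, or AVERROR_EOF */
        (void)opaque; (void)buf; (void)buf_size;
        return AVERROR_EOF;
    }

    static AVFormatContext *open_with_custom_io(struct my_src *src)
    {
        int bufsize = 4096;
        unsigned char *iobuf = av_malloc(bufsize);
        AVIOContext *avio = avio_alloc_context(iobuf, bufsize, 0, src,
                                               my_read, NULL, NULL);
        AVFormatContext *fmt = avformat_alloc_context();
        fmt->pb = avio;
        if (avformat_open_input(&fmt, NULL, NULL, NULL) < 0)
            return NULL;
        return fmt;
    }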
[23:33:01 CET] <zerodefect> What happens if I decode the same file simultaneously using two different decoder instances in the same process? Will both decoders require re-reading from disk, with no caching?
[23:33:13 CET] <zerodefect> I don't suspect so, but just trying to understand what I'm observing
[23:43:07 CET] <JEEB> you read packets with avformat. you get avpackets. you can just fine open two decoder avcodec contexts and feed both the same avpackets
[23:43:25 CET] <JEEB> avpackets and the decoding are all in RAM, once you have read packets from the avformat context there's no IO
[23:45:18 CET] <Jodon> I'm trying to stream content as fast as I can display it (VSync); the problem is I don't maintain a constant 60fps. If I use a time_base like 1/60, it gives me issues in VLC where it decides it needs to buffer more. If I use 1/30 it kind of plays in slow motion. Is there any way to signal variable frame rate, or to play each frame as I send it? I'm using mpeg2video at the moment, but have also tried mpeg4.
[23:47:52 CET] <zerodefect> @Jeeb: Thanks
[23:49:44 CET] <JEEB> some formats even require you to open multiple decoders :P
[23:49:51 CET] <JEEB> like teletext
[23:50:21 CET] <JEEB> you can control which pages the decoder will output, but you don't get a flag on which page the decoded things originally were
[23:50:37 CET] <kevinnn> JEEB: hey I am sorry to interrupt you but I need a little bit of help with x265 again, do you think you could look on the x265 irc?
[23:51:14 CET] <JEEB> no
[23:53:47 CET] <kevinnn> JEEB: :( why!
[23:54:47 CET] <JEEB> because I'm already helping some people and I have no interest in trying to debug what the flying fuck you're doing wrong at 1am
[23:55:53 CET] <DHE> this is one reason why you're encouraged to ask "the room" rather than any one person...
[23:56:38 CET] <kevinnn> DHE: okay I will! It's just JEEB usually knows a lot about this kind of stuff
[23:56:48 CET] <kevinnn> Hi, I am having a bit of trouble with x265. Every frame I create has an NAL type of NAL_UNIT_CODED_SLICE_TRAIL_R.
[23:56:59 CET] <kevinnn> and on the decoding side it says this: [hevc @ 0x5568441f7b00] missing picture in access unit
[23:57:29 CET] <DHE> and now the room knows. someone might answer.
[23:58:06 CET] <kevinnn> DHE: thanks, I don't mean to be annoying or anything. Sorry about that
[00:00:00 CET] --- Fri Mar 16 2018