[Ffmpeg-devel-irc] ffmpeg.log.20161017

burek burek021 at gmail.com
Tue Oct 18 03:05:01 EEST 2016


[00:01:19 CEST] <emilsp> I'll check for sure just now
[00:01:45 CEST] <emilsp> but I don't know whether the ffmpeg binary will be part of ffmpeg (the package) or the new and shiny ffmpeg2.8 (the package)
[00:02:16 CEST] <emilsp> ah, the main swscale version is 262500
[00:05:17 CEST] <furq> the ffmpeg package in arch is 3.1
[00:05:36 CEST] <furq> 2.8 will be the legacy package for stuff which hasn't updated to the new api
[00:06:04 CEST] <emilsp> oh, nice
[00:06:18 CEST] <emilsp> so then, in 3.1 SwsContext isn't typedeffed by default ?
[00:08:17 CEST] <c_14> It's not typedeffed
[00:08:22 CEST] <c_14> Was it in the earlier releases?
[00:08:38 CEST] <emilsp> I'm judging by the example code I was looking at
[00:08:49 CEST] <c_14> The functions all take and return a struct SwSContext
[00:08:51 CEST] <emilsp> https://github.com/filippobrizzi/raw_rgb_straming/blob/master/client/x264decoder.h#L39
[00:08:52 CEST] <c_14> What example code?
[00:09:10 CEST] <emilsp> maybe C++ does stupid things
[00:10:33 CEST] <emilsp> it does indeed seem that C++ might be doing incredibly stupid things :(
[00:10:36 CEST] <c_14> yep
[00:10:38 CEST] <c_14> That's C++
[00:10:43 CEST] <emilsp> FUCK ME
[00:11:21 CEST] <emilsp> sorry about that
[00:11:35 CEST] <emilsp> and the spam; again, thank you both very much for the help
[01:50:50 CEST] <MrMonkey31> uhm wow, so trim feature is a gyp... I tried to achieve a simple "crop out" of the middle of a video and I got a file which plays 10 minutes of nothing before any data begins
[01:53:59 CEST] <MrMonkey31> looking at the docs, it waits till the end to say, none too helpfully, insert a setpts filter.  this is the kind of stuff that melts ordinary people's BrAiNS!!.....
[01:55:16 CEST] <MrMonkey31> I saw 'setpts=PTS-STARTPTS ' elsewhere, which I presume is the equation needed to 'fix' it to act like a simple trim, but can anyone confirm this before I start another 20 min encode?
[01:55:41 CEST] <furq> that's correct
[01:56:36 CEST] <furq> you'll need the same thing with asetpts if you're using atrim
[01:57:03 CEST] <MrMonkey31> yeah, thx dood.  skullmelt averted!
[01:58:03 CEST] <furq> it would be nice if those filters had examples with setpts considering that's probably how they're usually used
[01:59:39 CEST] <MrMonkey31> nah! I'm not prepared to let my audio be handled automatically.  instinctively I just forebode all manner of desync, under-run and god-only-knows what other stream "problems".  I can't wrap my mind around stamps controlling a sound wave, man.  that's just too out there
[02:09:43 CEST] <klaxa> shouldn't pts (and dts) even be "filterable" at (de)muxing? there would be no need to re-encode in that case, but i'm not sure if it's in ffmpeg
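The fix furq confirmed above can be sketched numerically: trim keeps the original timestamps, so the first kept frame still carries a large PTS, and `setpts=PTS-STARTPTS` rebases everything so playback starts at zero. A minimal sketch of that arithmetic (the timestamp values are made up for illustration):

```python
def rebase_pts(pts_list):
    """Mimic setpts=PTS-STARTPTS: shift timestamps so the first frame plays at 0."""
    start = pts_list[0]
    return [p - start for p in pts_list]

# Frames kept by trim still carry their original timestamps, which is why
# the output plays minutes of nothing before the first frame appears.
kept = [540000, 540900, 541800]   # hypothetical 90 kHz-clock timestamps
print(rebase_pts(kept))           # rebased: first frame now plays immediately
```

The same rebasing is needed on the audio side with `asetpts` after `atrim`, as noted above.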
[06:58:51 CEST] <bencc> I'm doing screen capture with "-vcodec libx264 -pix_fmt yuv420p -preset:v ultrafast -crf 0"
[06:59:23 CEST] <bencc> what parameters can I change to make the output file smaller without increasing cpu too much?
[10:34:13 CEST] <maarhart> hi, I want to do the same as http://stackoverflow.com/questions/21510521/ffmpeg-move-a-slider-image-over-a-background-from-0-to-100-in-sync-with-t but removing the upper figure, so keeping just the waveform and the overlayed picture. how can I do this?
[10:37:56 CEST] <maarhart> sorry, disregard my question
[10:44:32 CEST] <pihpah> Anyone has ever tried to use Intel Quick Sync Video with ffmpeg?
[10:50:06 CEST] <emilsp> is there a perror equivalent for ffmpeg ?
[10:50:51 CEST] <BtbN> av_strerror?
[10:51:08 CEST] <BtbN> just look at error.h, there are several functions and macros.
[10:52:23 CEST] <emilsp> thanks :)
[11:01:24 CEST] <emilsp> this might be a bit of a stupid question, but what's the coordinate system that is used for ffmpeg ?
[11:01:38 CEST] <emilsp> is y=0 at the top of the frame ? or at the bottom ?
[11:06:57 CEST] <BtbN> as ffmpeg usually uses offset=x+linesize*y I'd say it's safe to assume y=0 is the top row.
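BtbN's offset formula can be written out directly; note that `linesize` is often padded wider than the visible width, which is why code should index with `linesize` rather than `width`. A sketch (the 2048 padding value is illustrative):

```python
def sample_offset(x, y, linesize):
    """Byte offset of the sample at (x, y) in a packed 8-bit plane; y=0 is the top row."""
    return x + y * linesize

LINESIZE = 2048  # illustrative: planes may be padded wider than a 1920-px visible width

print(sample_offset(0, 0, LINESIZE))   # 0: the very first byte is the top-left sample
print(sample_offset(0, 1, LINESIZE))   # one full linesize further: start of the next row down
```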
[12:21:36 CEST] <termos> I'm transcoding using libfdk_aac with sample_fmt s16, but ffprobe keeps telling me my output is fltp format and I get no audio. Why is it detecting it wrongly, or is there something I'm not setting?
[12:21:46 CEST] <termos> It only happens when I transcode audio from ac3 to aac
[12:22:17 CEST] <BtbN> the sample_fmt only matters when writing raw pcm.
[12:23:40 CEST] <termos> hm interesting, but I'm setting it also in my filter graph with the filter aformat=sample_fmts=s16. is it not really doing anything?
[12:24:08 CEST] <BtbN> it is, it will convert to that format, and if the aac encoder does not support it as input, it will be automatically converted to something it supports again.
[12:24:34 CEST] <BtbN> And the output ffprobe shows of the aac stream depends entirely on the output of the aac decoder it uses.
[12:43:29 CEST] <termos> ok so the aac encoder will convert it for me anyway
[12:44:20 CEST] <termos> it's just very curious that if I transcode from aac -> aac and mp3 -> aac it's fine, but ac3 -> aac seems to cause issues with the output being interpreted as fltp
[12:44:38 CEST] <termos> I must have forgotten something, just can't figure out what
[12:45:26 CEST] <BtbN> It will not convert anything
[12:45:33 CEST] <BtbN> it's aac it writes, not some sample format
[12:49:36 CEST] <iive> is fltp just floating point?
[12:50:03 CEST] <iive> most audio decoders work with floats internally, so that's what they output as native.
[12:50:12 CEST] <iive> decoder->codecs
[12:50:31 CEST] <BtbN> fltp should just be 32 bit floats, yes
[12:51:17 CEST] <BtbN> https://github.com/FFmpeg/FFmpeg/blob/master/libavutil/samplefmt.h#L69
[12:53:49 CEST] <termos> the audio plays for maybe 1s when I start my stream, then it stops
[12:54:19 CEST] <termos> rtmp://cp353594.live.edgefcs.net/live/udp_sound_133377_1364k@316196 this is an example stream that I set up
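What BtbN describes is ordinary format negotiation: a requested sample format survives only if the encoder accepts it as input; otherwise it is silently converted to something the encoder does support. A toy model of that negotiation (the capability lists are illustrative, not the real encoder tables):

```python
def negotiate(requested, supported):
    """Pick the encoder's input format: honor the request only if it's supported."""
    return requested if requested in supported else supported[0]

# Illustrative: an encoder that takes s16 honors the aformat request...
print(negotiate("s16", ["s16", "fltp"]))
# ...while one that only takes planar float silently converts the input:
print(negotiate("s16", ["fltp"]))
```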
[13:31:01 CEST] <s0126h> why so many people obsessed with 10 bit video encoding
[13:35:37 CEST] <qmr> mo bits mo problems I always say
[13:36:26 CEST] <termos> bitses?
[14:19:16 CEST] <klaxa> s0126h: with 8-bit per channel banding is quite an issue, especially for animated content
[14:19:58 CEST] <s0126h> what is "banding"?
[14:20:15 CEST] <DHE> when you can see the colours as distinct lines
[14:20:29 CEST] <s0126h> do you have example of a picture
[14:20:51 CEST] <DHE> 256 shades from black to white when you have 1920 pixels in one direction and 1080 in the other (or more!) starts to become noticeable
[14:22:30 CEST] <s0126h> do you have an example
[14:25:30 CEST] <DHE> me personally, no
[14:27:14 CEST] <s0126h> anybody have an example of this  banding issue ?  picture or something
[14:27:43 CEST] <klaxa> https://en.wikipedia.org/wiki/Colour_banding
[14:28:36 CEST] <s0126h> i see a photograph there
[14:28:40 CEST] <s0126h> what's wrong with it
[14:31:28 CEST] <klaxa> in the sky there are bands of different shades of blue
[14:31:49 CEST] <klaxa> zoom in to see it better
[14:45:24 CEST] <s0126h> klaxa i see
[14:45:35 CEST] <s0126h> klaxa why is it doing that
[14:46:02 CEST] <s0126h> and which video encoder is that
[14:46:56 CEST] <BtbN> it is doing that because there are only that many shades of blue in 8 bit color.
[14:47:19 CEST] <s0126h> typo?
[14:47:34 CEST] <s0126h> that sentence didn't make sense
[14:47:37 CEST] <klaxa> it does
[14:47:55 CEST] <klaxa> you have 24 bits for all of rgb
[14:48:12 CEST] <s0126h> so it's only noticable in blue?
[14:48:15 CEST] <klaxa> but in this picture it's only a change  of a few bits
[14:48:40 CEST] <BtbN> The color doesn't matter
[14:48:42 CEST] <klaxa> in the gradient the color changes by less than 1 bit per pixel
[14:48:43 CEST] <BtbN> there are 8 bits per color
[14:48:55 CEST] <s0126h> btbn my bad, your sentence does make sense
[14:49:45 CEST] <s0126h> so it doesn't matter what video encoder is used?   all 8bit encoding will do that in the sky  like the picture?
[14:54:15 CEST] <BtbN> It's not an encoding artifact.
[14:54:25 CEST] <BtbN> It would happen with plain lossless images as well.
[14:54:38 CEST] <s0126h> are you serious
[14:55:02 CEST] <s0126h> why would it do that on lossless encoding? if the original didn't do that
[14:55:14 CEST] <s0126h> unless original did it too
[15:00:41 CEST] <BtbN> Because 8 bit colors are 8 bit colors
[15:01:28 CEST] <furq> the original would do that by definition
[15:01:55 CEST] <furq> if you reduce the bit depth then it's not a lossless conversion
[15:03:51 CEST] <s0126h> furq  but what if original is also 8bit
[15:04:02 CEST] <furq> then it will have those banding artifacts
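The arithmetic behind DHE's point: an 8-bit channel has only 256 levels, so a full-range gradient spread across 1920 pixels must hold each level constant over several pixels, which the eye sees as bands; 10 bits quadruples the level count and shrinks each band below the visibility threshold. A quick sketch:

```python
def band_width_px(width_px, bits_per_channel):
    """Average width of one constant-value band in a full-range gradient."""
    levels = 2 ** bits_per_channel
    return width_px / levels

print(band_width_px(1920, 8))    # 7.5 px per shade: visible stripes
print(band_width_px(1920, 10))   # under 2 px per shade: much harder to see
```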
[15:52:39 CEST] <beauty> hello
[15:54:30 CEST] <klaxa> hi
[15:57:52 CEST] <beauty> how to use a ffmpeg process to support multi-channel stream data?
[16:00:39 CEST] <klaxa> what
[16:00:48 CEST] <klaxa> can you explain that a bit more specifically?
[16:18:35 CEST] <beauty> klaxa: how to make ffmpeg decode four thousand videos at the same time?
[16:18:51 CEST] <beauty> the videos are streamed data.
[16:21:15 CEST] <klaxa> just use a lot of -i
[16:21:34 CEST] <klaxa> make sure you increase your filedescriptor limit
[16:23:46 CEST] <beauty> ??
[16:24:01 CEST] <beauty> I want to use ffmpeg code.
[16:24:07 CEST] <beauty> not ffmpeg command.
[16:24:57 CEST] <retard> you're not really describing what you're trying to do at all
[16:27:35 CEST] <beauty> oh, sorry
[16:28:18 CEST] <beauty> I want to use ffmpeg to decode multiple streamed video files at the same time.
[16:53:31 CEST] <Bogas> Hello all, I have to convert this code from ffmpeg to ffmbc. what do I need to change?
[16:53:33 CEST] <Bogas> C:\utils\ffmpeg.exe -i "S:\pharos\sandbox_cubix\Production\CubixIn\AETN\ATN-29.mxf" -y -s 1024x576 -c:v libx264 -profile:v baseline -level 3 -b:v 1117k -maxrate 1200k -threads 0 -vf "[in]yadif=0:1,drawtext=fontfile=/windows/fonts/arial.ttf: timecode='00\:00\:00\:00': r=25.00: \x=(w-tw)/2: y=(1*lh): fontcolor=white:fontsize=80: box=1: boxcolor=0x00000000 at 1[out]" -aspect 1.78 -pix_fmt yuv420p  -acodec libvo_aacenc -ar 48000 -ac 2 -ab 25
[16:54:57 CEST] <Bogas> can anyone help me? I'm getting an error related to the 1 frame and I read to try to change to ffmbc, but I do not know how to do this
[16:58:01 CEST] <Bogas> Hello any one here ?
[16:58:39 CEST] <BtbN> This is #ffmpeg. I don't even have any idea what ffmbc is.
[17:00:11 CEST] <BtbN> https://github.com/bcoudurier/FFmbc "This branch is 441 commits ahead, 50371 commits behind FFmpeg:master. " oh my. Well, it's an ancient ffmpeg fork. Something you definitely don't want to use.
[18:00:27 CEST] <acamargo> hi. I'm segmenting a live feed using ffmpeg to produce files of 60 seconds duration. but when I import those files in adobe premiere the duration is less than ffprobe/mediainfo show. any tip about this issue? I'm producing keyframes at 30-frame intervals and capturing at 29.97fps
[18:01:40 CEST] <acamargo> one more thing, when I concat the files using ffmpeg the result duration is fine.
[18:03:26 CEST] <acamargo> but when I concat the segments in premiere, the duration is less than ffmpeg's concatenated file
[18:05:22 CEST] <acamargo> it seems like premiere is losing/discarding some frames
[18:09:55 CEST] <acamargo> here's my ffmpeg command http://pastebin.com/3QLuSMcA
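One plausible source of the drift acamargo describes: segment cuts can only land on keyframes, and with a 30-frame GOP at 29.97 fps the keyframe grid never lines up with whole seconds, so each "60 second" segment is actually slightly longer, and tools that round to the keyframe grid report different durations. A sketch of the arithmetic, assuming cuts happen at the first keyframe at or after the requested boundary (an assumption, not a statement of any particular tool's behavior):

```python
import math

FPS = 30000 / 1001        # 29.97 fps
GOP = 30                  # keyframe every 30 frames

def actual_cut_time(target_s):
    """Timestamp of the first keyframe at or after the requested boundary."""
    target_frames = target_s * FPS
    keyframe_index = math.ceil(target_frames / GOP)
    return keyframe_index * GOP / FPS

print(actual_cut_time(60))   # ~60.06 s, not exactly 60
```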
[18:45:45 CEST] <ferdna> how do i tell ffmpeg to keep recording to the same file...?
[18:46:04 CEST] <ferdna> i don't want to overwrite the already-created file
[18:46:10 CEST] <ferdna> ( option -y )
[18:57:22 CEST] <c_14> Not supported internally at all (though you can hack it by using shell redirection for mpegts and similar formats)
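The shell-redirection hack c_14 mentions works because MPEG-TS is a stream of fixed-size, self-delimiting 188-byte packets with no global header or index, so simply appending bytes to an existing capture yields another structurally valid stream; container formats that carry a global index (e.g. MP4) cannot be grown this way. A toy illustration of that property (fake packet payloads, not real TS data):

```python
TS_PACKET = 188  # MPEG-TS packets are fixed-size and self-delimiting

def looks_like_ts(data):
    """Crude structural check: whole packets, each starting with the 0x47 sync byte."""
    return len(data) % TS_PACKET == 0 and all(
        data[i] == 0x47 for i in range(0, len(data), TS_PACKET)
    )

def fake_ts(n_packets):
    """Build a dummy stream of n sync-byte-prefixed packets."""
    return b"".join(b"\x47" + bytes(TS_PACKET - 1) for _ in range(n_packets))

# Appending one capture to another remains structurally valid TS:
combined = fake_ts(3) + fake_ts(2)
print(looks_like_ts(combined))
```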
[19:06:04 CEST] <michael_> Hi, I have a question, born out of this sharex github issue: https://github.com/ShareX/ShareX/issues/205
[19:06:23 CEST] <michael_> developer said, in december 2015, that root cause is ffmpeg, not supporting pause
[19:06:36 CEST] <michael_> is this situation the same as of today?
[19:07:01 CEST] <michael_> (forgive me for avoiding to delve into ffmpeg documentation, i'm not the expert type of guy)
[19:19:37 CEST] <c_14> Well, ffmpeg supports pause in the same way that every posix process supports pause.
[19:19:39 CEST] <c_14> SIGSTOP
[19:19:44 CEST] <c_14> Other than that, no
[19:21:45 CEST] <JEEB> also how do you pause, for example, live streaming to youtube (most probably done through RTMP?)? do you just push null packets to the FLV?
[19:22:34 CEST] <JEEB> pretty sure all those things would have to be defined and then APIs defined around such functionality in specific output protocols
[19:22:38 CEST] <furq> pause/enter works here, but obviously you need a terminal up to do that
[19:22:38 CEST] <furq> and yeah i wouldn't expect it to work reliably
[19:23:16 CEST] <furq> pausing while capturing or broadcasting is probably going to break something
[19:23:44 CEST] <JEEB> or in other cases I think the "pause" functionality starts pushing some pre-defined overlay or something to the end point
[19:33:22 CEST] <furq> i just tested with lavfi and rtmp and i can confirm that it does sort of work and it does break something
[19:59:25 CEST] <michael_> so from what i get, it means pausing is not a natural evolution for ffmpeg, or easy to implement, hence i rather not insist on sharex issue. thanks for the info. it appears the only way is to make many screen recordings, and then glue them together, afterwards
[20:02:03 CEST] <JEEB> nah, it just isn't clear what exactly is meant by "pause"
[20:02:59 CEST] <JEEB> because in most cases it's just not as simple as "pause the whole process from input to pushing packets over the internets"
[20:03:20 CEST] <JEEB> people and esp. protocols tend to want different things
[20:03:42 CEST] <JEEB> or more specifically protocols can just kick you out if you do nothing with the connection
[20:05:30 CEST] <furq> michael_: if you're recording to a file then the current behaviour might be what you want
[20:05:53 CEST] <furq> you'd need some way to send keys to the process, though
[20:07:11 CEST] <furq> if you're broadcasting then you could probably hack something together with sendcmd
[20:07:35 CEST] <michael_> i didn't realize the implications, i was just focusing on sharex feature, screen recording which creates a video file, no internet, no broadcast or anything
[20:08:06 CEST] <JEEB> the ticket mentions streaming etc
[20:08:27 CEST] <JEEB> and thus I of course started going off of that since that's what many people nowadays do with their recordings
[20:09:48 CEST] <michael_> i have to admit, i read the issue fast, it wasn't created by me. developer closed my feature request, and sent me to this issue. indeed, if you get streaming into the picture, or shared screen, it gets nasty
[20:11:01 CEST] <michael_> but, for simple screen recording, which is creating a video file, does ffmpeg (current version) have such a feature as pause?
[20:11:06 CEST] <JEEB> and then with the screen capture modules it depends how you can "pause" them or "destroy" them
[20:11:46 CEST] <JEEB> well, furq seemed to note that something might be available even if you just call the ffmpeg cli (as opposed to utilizing the APIs within your application)
[20:14:02 CEST] <michael_> sharex most probably uses ffmpeg cli (in options, it shows command line preview, like: -y -rtbufsize 100M -f gdigrab -framerate 20 -offset_x 0 -offset_y 0 -video_size 1920x1200 -draw_mouse 1 -i desktop -f dshow -i audio="Microphone (Realtek High Defini" -c:v libx264 -r 20 -preset veryfast -tune zerolatency -crf 30 -pix_fmt yuv420p -c:a aac -strict -2 -ac 2 -b:a 96k "output.mp4")
[20:16:40 CEST] <JEEB> yeah, that's the integration most things begin with
[20:16:50 CEST] <JEEB> and then they notice that they can do X,Y,Z better by utilizing the API
[20:20:19 CEST] <DrSlony> Hello, what is the best automated video stabilization method available in the latest ffmpeg? Has anything changed on this front in the last 3 years?
[20:26:09 CEST] <michael_> ok, thank you guys, thank you JEEB, thank you furq. i appreciate your help, now it's clearer
[20:29:04 CEST] <furq> michael_: you could just use ffmpeg directly
[20:29:57 CEST] <michael_> directly, for screen recording? :D i wasn't aware of it
[20:30:16 CEST] <furq> yeah
[20:30:30 CEST] <furq> the options you just pasted from sharex will capture your entire desktop to output.mp4
[20:30:44 CEST] <michael_> i will look into it, sounds interesting
[20:30:58 CEST] <furq> it looks like sharex does some fancy stuff but if you just want to capture your desktop or a window then ffmpeg will do the job
[20:31:34 CEST] <furq> those options also look really bad for recording to a file
[20:31:42 CEST] <furq> -tune zerolatency in particular
[20:33:09 CEST] <michael_> my issue with sharex is that i would like to pause the recording, and then continue afterwards. other than that, even with imperfect options, it does a good job, small file, enough quality
[20:34:45 CEST] <furq> the other options are questionable but down to taste
[20:34:51 CEST] <furq> but -tune zerolatency is just wrong
[20:35:31 CEST] <michael_> i don't know what this option should do. why is it wrong? i can open an issue on their github if i understand
[20:36:06 CEST] <furq> it makes the file much bigger and disables frame multithreading to reduce the encoder latency
[20:36:21 CEST] <furq> which is potentially an issue if you're live streaming
[20:36:23 CEST] <DrSlony> I use: ffmpeg -y -f x11grab -show_region 1 -s 1920x1080 -i :0.0+0,0 -an -c:v libx264 -preset ultrafast -qp 0 -threads 0 /ram/drive/screencast.mp4
[20:36:25 CEST] <furq> otherwise it's just a waste
[20:37:42 CEST] <michael_> good point then, i will take this to their github
[20:38:21 CEST] <furq> DrSlony: how much lossless 1080p screencast can you fit in that ramdisk
[20:38:27 CEST] <DrSlony> enough for my needs
[20:38:38 CEST] <DrSlony> never reached the limit so can't tell ']
[20:38:39 CEST] <DrSlony> ;]
[20:42:37 CEST] <DrSlony> Is there a better codec for *intermediate* files if I want either lossless or close-to-it lossy? something which won't bog down the CPU while recording but which also won't take jiggawatts of space. I transcode all video after recording.
[20:46:20 CEST] <bencc> makes sense to do transcoding over tmpfs?
[20:46:37 CEST] <DHE> probably not. if you don't have the disk space, you probably won't have the RAM
[20:47:25 CEST] <DHE> x264 with -qp 0 will be lossless, so long as you're okay with the 4:2:2 colourspace loss (there are more options for fixing that though)
[20:47:42 CEST] <furq> the fastest lossless codec in ffmpeg is ffvhuff afaik
[20:47:43 CEST] <DHE> other options include ffv1, huffyuv or the ffmpeg variant...
[20:47:48 CEST] <DHE> that's the one
[20:47:58 CEST] <furq> but it'll be bigger than ffv1 and x264 lossless
[20:48:03 CEST] <bencc> DHE: I have the disk space and enough RAM. trying to avoid iowait
[20:48:13 CEST] <bencc> DHE: when transcoding several files at the same time
[20:48:16 CEST] <DHE> so you have to choose between CPU or disk space
[20:48:22 CEST] <DHE> oh, then you need an SSD
[20:48:24 CEST] <Jaex> furq: https://trac.ffmpeg.org/wiki/StreamingGuide suggests -tune zerolatency
[20:48:45 CEST] <furq> it also says "Streaming" right at the top of the page
[20:49:04 CEST] <furq> like i said, it's potentially useful for streaming
[20:49:05 CEST] <Jaex> yes?
[20:49:13 CEST] <bencc> DHE: tmpfs increase RAM usage?
[20:49:14 CEST] <Jaex> <furq> which is potentially an issue if you're live streaming
[20:49:21 CEST] <Jaex> you told it is issue for live streaming
[20:49:42 CEST] <DHE> bencc: tmpfs is a linux filesystem that operates as a ramdisk
[20:49:42 CEST] <furq> i meant that encoder latency is potentially an issue if you're live streaming
[20:49:53 CEST] <DHE> about the only exception is that tmpfs does support being swapped out
[20:49:54 CEST] <bencc> DHE: sorry, I meant does tmpfs increase CPU usage?
[20:49:55 CEST] <Jaex> it is also issue for recording
[20:50:00 CEST] <Jaex> otherwise fps will drop a lot
[20:50:04 CEST] <furq> no
[20:50:20 CEST] <bencc> DHE: you said I have to choose between CPU and disk space.
[20:50:25 CEST] <furq> that's what the buffer is for
[20:50:42 CEST] <Jaex> the buffer is not a solution, it will fill nonstop
[20:50:45 CEST] <Jaex> you cant rely on buffer
[20:51:03 CEST] <furq> you can absolutely rely on the buffer if you're not filling it faster than you can encode
[20:51:14 CEST] <DrSlony> I don't mind lossy as long as it's easy on the CPU. Is there a lossless or (not-very-)lossy codec which is easy on the CPU and doesn't require chroma subsampling?
[20:51:26 CEST] <bencc> DHE: I have 2TB HDD and 16GB RAM. using tmpfs get iowait of the way. I'm saving files to s3 anyway so I don't mind losing them
[20:51:27 CEST] <Jaex> ofc it is filled faster than we can encode
[20:51:38 CEST] <Jaex> this is why using -tune zerolatency
[20:51:56 CEST] <DHE> bencc: ramdisk is only going to hold ~15 GB of space, max then. is that enough space? I'm guessing No if you have multiple projects going at once
[20:52:02 CEST] <furq> what
[20:52:54 CEST] <bencc> DHE: I'm cpu bound anyway. can do about 4 concurrent transcodings. each file is about 1-2 GB so RAM is enough
[20:54:27 CEST] <bencc> DHE: transcoding jobs will start and stop randomly so not all will use max ram at the same time
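furq's point about the capture buffer can be simulated: as long as the encoder drains frames at least as fast as capture produces them, the backlog stays bounded; only when encoding is persistently slower does it grow without limit (which calls for a faster preset, not `-tune zerolatency`). A toy simulation with made-up rates:

```python
def peak_backlog(capture_fps, encode_fps, seconds):
    """Worst-case backlog of captured-but-unencoded frames over a run."""
    backlog = 0.0
    peak = 0.0
    for _ in range(seconds):
        backlog = max(0.0, backlog + capture_fps - encode_fps)
        peak = max(peak, backlog)
    return peak

print(peak_backlog(30, 35, 60))   # encoder keeps up: backlog never accumulates
print(peak_backlog(30, 25, 60))   # encoder too slow: backlog grows every second
```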
[21:52:17 CEST] <iamtakingiteasy> hi, i am not strictly on ffmpeg topic (though using it extensively), but rather have a general question: what readings or basic concepts can you recommend/provide for audio/video synchronization? i am implementing a realtime rtmp -> mp4 muxer and am confused a lot by audio bitrates, samplerates and bit depths; how are they generally mapped onto video frames?
[21:54:11 CEST] <iamtakingiteasy> until considering audio all was great with the h264 payload -- simply use a timescale equal to FPS and increment the counter by 1 each full frame, but now i am not so sure what i can do about mp3/aac payloads
[21:54:57 CEST] <iamtakingiteasy> they have a lot of samples in a single frame and i am not sure if they could be split without decoding in order to match video frames
[21:57:07 CEST] <DHE> the demuxer will provide you with a time base and pts values for all packets. typically you only decode a little bit of the video/audio to determine codec parameters - you don't need to decode the whole thing for simple remuxing
[21:57:13 CEST] <DHE> at least, most formats do that
[21:57:38 CEST] <iamtakingiteasy> yeah, that's what i do. but how can i match 1/nth audio frames to 1/FPSth frames of video?
[21:58:25 CEST] <iamtakingiteasy> is it even possible without complete decoding of audio samples and re-encoding them back in matching chunks?
[21:59:24 CEST] <iamtakingiteasy> i am worried about timings: timescale and frametick parameters
[22:00:25 CEST] <iamtakingiteasy> or should i rather adjust video frametick/timescale to match audio instead?
[22:01:32 CEST] <iamtakingiteasy> is simple 1/FPS frametick interval okay?
[22:01:54 CEST] <JEEB> usually audio and video are going with different tick rates anyways
[22:02:22 CEST] <JEEB> and with any sane muxing library you should be able to just feed it packets for the streams as they come
[22:02:28 CEST] <JEEB> without thinking too much how it's doing the interleaving
[22:03:18 CEST] <JEEB> basically in mp4 you have a DTS, CTS and duration
[22:03:26 CEST] <JEEB> (CTS being very similar to PTS)
[22:04:50 CEST] <iamtakingiteasy> hou. i am using a [ftyp moov] header with zero defaults and [moof mdat]+ sequences, each with its own [tfdt] box providing the time offset expressed in number of frames transmitted
[22:05:24 CEST] <iamtakingiteasy> not sure what DTS, CTS and PTS is
[22:05:51 CEST] <JEEB> Decoding Time Stamp, whatever the C was Time Stamp, and Presentation Time Stamp
[22:06:01 CEST] <iamtakingiteasy> aha
[22:06:09 CEST] <JEEB> so as long as you have timestamps for all samples the decoding entity can match things up
[22:06:49 CEST] <iamtakingiteasy> so i shouldn't worry much about keeping them in sync at muxer side?
[22:07:13 CEST] <JEEB> you should worry about having the timestamps correct :P
[22:07:17 CEST] <iamtakingiteasy> aha
[22:07:18 CEST] <JEEB> and that's it
[22:07:50 CEST] <iamtakingiteasy> okay, thanks, i was under impression that i had to match each video frame with related audio frame having them exactly the same duration long
[22:07:57 CEST] <JEEB> no
[22:08:07 CEST] <JEEB> I mean, that happens very very rarely
[22:08:15 CEST] <JEEB> just look at any normal mp4 file with L-SMASH's boxdumper :P
[22:08:22 CEST] <JEEB> `boxdumper --box file`
[22:08:36 CEST] <JEEB> (you probably want to either redirect to file or to less or something)
[22:09:04 CEST] <iamtakingiteasy> i am currently using mp4dump from bento tools and mp4file --dump from whatever it comes from, thanks for another tool reference
[22:09:34 CEST] <iamtakingiteasy> thanks for the hints
[22:09:48 CEST] <JEEB> yeah, you can never have too few tools for stuff like that :)
[22:09:54 CEST] <iamtakingiteasy> indeed
[22:10:06 CEST] <JEEB> I should really get my smooth streaming things upstreamed to it
[22:10:25 CEST] <JEEB> have some patches in my fork that I utilized while debugging some legacy crap
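The bookkeeping JEEB describes can be sketched: each mp4 track carries its own timescale, audio and video tick at different rates, and the muxer only needs every sample's timestamps expressed in its own track's ticks, with no frame-for-frame pairing between streams. A sketch (the timescale choices below are common conventions, not requirements):

```python
from fractions import Fraction

def to_track_ticks(seconds, timescale):
    """Convert a wall-clock time to integer ticks in one track's timescale."""
    return round(seconds * timescale)

# Video track: 29.97 fps with timescale 30000 -> each frame lasts 1001 ticks.
video_frame_dur = Fraction(1001, 30000)
# Audio track: 48 kHz AAC with timescale 48000 -> each frame lasts 1024 ticks.
aac_frame_dur = Fraction(1024, 48000)

# The tracks stay in sync through timestamps alone, despite unequal durations:
print(to_track_ticks(float(video_frame_dur) * 3, 30000))  # video PTS after 3 frames
print(to_track_ticks(float(aac_frame_dur) * 3, 48000))    # audio PTS after 3 frames
```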
[00:00:00 CEST] --- Tue Oct 18 2016

