[Ffmpeg-devel-irc] ffmpeg.log.20181010

burek burek021 at gmail.com
Thu Oct 11 03:05:02 EEST 2018


[00:03:29 CEST] <JEEB> trashPanda_: fftools/ffmpeg.c:            av_usleep(10000);
[00:03:37 CEST] <JEEB> it looks like it's straight in ffmpeg.c :P
[00:04:00 CEST] <trashPanda_> I saw those calls, but how could that be used to match input framerates? It's a flat sleep for 10 ms
[00:08:01 CEST] <JEEB> ctrl+F "rate_emu" in ffmpeg.c
[00:08:13 CEST] <JEEB> I checked which variable the "re" option goes to in ffmpeg.c
[00:08:15 CEST] <JEEB> and that was it
[00:17:48 CEST] <trashPanda_> that makes more sense, thank you
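
For context, a rough sketch of what -re / rate_emu does conceptually (this is an illustration, not the actual code in fftools/ffmpeg.c): convert the stream timestamp to wall-clock microseconds and sleep in 10 ms slices until real time has caught up with it.

    #include <libavutil/avutil.h>
    #include <libavutil/mathematics.h>
    #include <libavutil/time.h>

    /* Hypothetical rate-emulation helper: pts is in stream time_base units,
     * start_time is av_gettime() taken when reading started. */
    static void emulate_input_rate(int64_t pts, AVRational time_base, int64_t start_time)
    {
        int64_t pts_us = av_rescale_q(pts, time_base, AV_TIME_BASE_Q); /* -> microseconds */
        while (av_gettime() - start_time < pts_us)
            av_usleep(10000); /* 10 ms at a time, like the call quoted above */
    }
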
[02:27:04 CEST] <garyserj> I want to concatenate two mp4 files, each AVC/AAC, and keep those codecs. What command should I use? Is it ffmpeg -i a.mp4 -i b.mp4 output.mp4? If I do that, would I get some default encoding? When I try it I get lots of errors/warnings like "Past duration 0.600655 too large", and I ctrl-c. If I try ffmpeg -i part1.mp4 -c copy -i part2.mp4 -c copy output2.mp4 then it says
[02:27:04 CEST] <garyserj> "Unknown decoder 'copy'". What command should I be using?
[02:59:23 CEST] <relaxed> garyserj: ffmpeg -f concat -safe 0 -i <(for i in part1.mp4 part2.mp4; do printf "file '%s'\n" "$PWD/$i"; done) -c copy -movflags +faststart done.mp4
[03:18:36 CEST] <garyserj> relaxed: I see that uses a linux shell. so I switched from cmd to cygwin(which gives some linux commands and bash functionality in windows). I got the message "/dev/fd/63: No such file or directory"
[03:22:15 CEST] <kepstin> garyserj: ah, you'd need a cygwin build of ffmpeg for that command to work.
[03:22:50 CEST] <kepstin> try just making the playlist file for the concat muxer manually
[03:28:24 CEST] <garyserj> thanks I think that worked, making the playlist file for the concat muxer manually!
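
For reference, a minimal version of the manual playlist approach kepstin suggests (filenames are placeholders):

    # mylist.txt
    file 'part1.mp4'
    file 'part2.mp4'

    ffmpeg -f concat -safe 0 -i mylist.txt -c copy -movflags +faststart output.mp4
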
[05:11:03 CEST] <quint> How can I tell ffmpeg to use more threads? I used the -threads option and it remained the same while encoding with opus
[05:11:31 CEST] <quint> Or is it codec specific?
[05:25:34 CEST] <furq> it is codec specific
[05:25:34 CEST] <furq> i don't think any audio codecs have multithreading
[05:25:34 CEST] <furq> if you're encoding multiple files then use xargs or parallel or something
[05:35:44 CEST] <quint> I'll have a look furq, thanks
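
One possible way to parallelize across files with xargs, along the lines furq suggests (file names, job count and bitrate are just examples):

    printf '%s\0' *.wav | xargs -0 -P 4 -I {} ffmpeg -n -i {} -c:a libopus -b:a 96k {}.opus
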
[06:36:19 CEST] <geri> hi, how do I merge 2 videos? they are mixed horizontally and vertically, the format is .mov
[06:36:21 CEST] <geri> ?
[06:49:17 CEST] <geri> i get that error:  Invalid data found when processing input
[06:49:18 CEST] <geri> hm
[07:11:27 CEST] <geri> hu?
[11:46:25 CEST] <illuminated> when encoding, does the time= output correspond to the timestamp in the video being encoded, or to how long the encoding process has been running?
[14:09:22 CEST] <Essadon> Hello, I would like to know the syntax for combining a slideshow of multiple GIF pictures with the sound of an MP4 file. I have timestamps for when each picture should be shown, and I want to merge the pictures and the sound together into a single file.
[15:43:43 CEST] <King_DuckZ> hi, I'm still working on my seeking problem and I found I'm not the only one https://stackoverflow.com/questions/23018933/calculate-current-video-time-with-ffmpeg
[15:43:58 CEST] <King_DuckZ> so is that frame number thing just a monotonic counter?
[15:52:19 CEST] <JEEB> there's no frame-exact frame number thing in libavformat
[15:52:26 CEST] <JEEB> if you need to be frame exact use something like ffms2
[15:54:44 CEST] <King_DuckZ> JEEB: I looked into that but it looks like it's going to be a lot of work before I can use it
[15:55:05 CEST] <King_DuckZ> JEEB: I'd have to refactor the whole reader backend, which I tried already and I failed
[15:55:38 CEST] <w1kl4s> I know this isn't a vapoursynth-related channel, but I don't know a better one :P How do I obtain frame timestamps using the ffms2 plugin?
[15:55:42 CEST] <w1kl4s> King_DuckZ that's easier than it sounds once you get through the black magic
[15:56:02 CEST] <w1kl4s> Depends what you want to do with those frames
[15:57:00 CEST] <King_DuckZ> it's just used in debug print statements as I'm trying to find out what's wrong with my seeking, and I'm wondering if I'm looking at some useless number
[16:03:19 CEST] <King_DuckZ> this is my program's output https://alarmpi.no-ip.org/kamokan/cw?colourless ; starting from the first time I seek back (line 137), the frame number goes out of sync with the dts (5338 is the DTS for frame 161)
[16:03:36 CEST] <King_DuckZ> so that's my question, am I looking at a pointless number or do I have some real problem there?
[16:04:20 CEST] <King_DuckZ> as for my overall problem, as you can see I seek to different DTS but every read restarts from 5338 for some reason
[16:05:29 CEST] <King_DuckZ> compare line 47 and 68 for example, how can the same DTS have different frame numbers?
[16:06:49 CEST] <Foaly> it nowhere says dts in that text, only pts
[16:07:32 CEST] <Foaly> i mean, it only talks about pts in your log
[16:12:12 CEST] <Foaly> also, where do you get your frame number?
[16:27:34 CEST] <King_DuckZ> Foaly: yes, I just found out that av_seek_frame expects a dts, I used to pass the pts before and haven't updated all the output strings and variable names yet
[16:28:11 CEST] <Foaly> and where are the frame numbers from?
[16:28:56 CEST] <King_DuckZ> Foaly: that frame number comes from AVCodecContext's frame_number
[16:29:50 CEST] <King_DuckZ> avcodec_receive_frame(my_decoder, my_frame); printf("%d", my_decoder->frame_number);
[16:30:49 CEST] <Foaly> well, and it is monotonically increasing?
[16:31:03 CEST] <Foaly> the docs just say "decoding: total number of frames returned from the decoder so far."
[16:31:17 CEST] Action: King_DuckZ facepalms
[16:31:30 CEST] <Foaly> you should rely on the pts usually
[16:31:51 CEST] <King_DuckZ> I'm trying to, I was trying to print some meaningful debug messages
[16:32:01 CEST] <Foaly> it is not even guaranteed that every frame is equally long, since you may have a variable framerate
[16:32:43 CEST] <Foaly> probably makes sense to output a timestamp in seconds.milliseconds
[16:32:50 CEST] <King_DuckZ> indeed, pts increases by 32 or 33 at each frame
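
A small sketch of printing timestamps in seconds, as Foaly suggests, using the stream's time_base (the function and variable names are made up for the example):

    #include <inttypes.h>
    #include <stdio.h>
    #include <libavformat/avformat.h>

    /* Print a decoded frame's pts both in time_base units and in seconds. */
    static void print_frame_time(const AVStream *st, const AVFrame *frame)
    {
        if (frame->pts != AV_NOPTS_VALUE)
            printf("pts=%" PRId64 " (%.3f s)\n",
                   frame->pts, frame->pts * av_q2d(st->time_base));
        else
            printf("pts=N/A\n");
    }
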
[16:34:39 CEST] <King_DuckZ> I have this code https://alarmpi.no-ip.org/kamokan/ct?colourless which is currently returning an incorrect value at frame 160, as explained in that pastie
[16:35:55 CEST] <Foaly> incorrect means ...?
[16:35:59 CEST] <King_DuckZ> the other day I've been told that I can't rely on just that multiplication to be pts-accurate so I'm trying to fall back to the closest frame now whenever my pts is between 2 frames
[16:36:41 CEST] <King_DuckZ> Foaly: it means frame_to_pts(160) = 5339, but input file has 5338
[16:37:49 CEST] <Foaly> well, it may have rounded differently on encoding
[16:37:58 CEST] <King_DuckZ> I have this:  5338 (frame 160)    --- 5339 (my calculated pts)     ------   5371 (frame 161)
[16:38:00 CEST] <King_DuckZ> yes
[16:38:32 CEST] <Foaly> so what do you need?
[16:39:12 CEST] <King_DuckZ> in that case there is no exact match so I just take 5338 because it's the closest, but frame 161 has been read already, or otherwise I wouldn't know its pts
[16:40:53 CEST] <King_DuckZ> so I take frame 160 and av_seek_frame to 5338, so that frame 161 doesn't get skipped during the next attempt to read a frame
[16:42:22 CEST] <Foaly> well, arguing about frame numbers is hard, since you can't be sure that your input has a fixed framerate
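
If the input really is constant frame rate, one way to approximate the frame-index-to-pts mapping being discussed is via the stream's average frame rate (a sketch, assuming CFR; as Foaly says, it cannot be trusted for variable-frame-rate input):

    #include <libavformat/avformat.h>
    #include <libavutil/mathematics.h>

    /* Rough frame index -> pts conversion, valid only for constant-frame-rate input.
     * Ignores any stream start offset and rounding done at encode time. */
    static int64_t frame_to_pts(const AVStream *st, int64_t frame_index)
    {
        /* one frame lasts 1/avg_frame_rate seconds; convert that to time_base units */
        return av_rescale_q(frame_index, av_inv_q(st->avg_frame_rate), st->time_base);
    }
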
[16:43:04 CEST] <King_DuckZ> but then look at the log, line 58 and line 68: av_seek_frame(5371) and avcodec_receive_frame() -> 5338
[16:43:37 CEST] <King_DuckZ> I know, it's extremely frustrating to debug and I can't figure out what I'm doing wrong
[16:45:10 CEST] <King_DuckZ> but I keep on getting the same frame at every read, starting from that frame the code goes like 160, then 160,161, then 160,161,162 etc, and so decoding becomes slower and slower after each frame
[16:45:53 CEST] <King_DuckZ> if you quickly scroll my log you'll notice a chunk of text that becomes longer and longer
[16:46:07 CEST] <Foaly> well, what is your application for this?
[16:46:13 CEST] <Foaly> why do you do that?
[16:46:51 CEST] <Foaly> also, av_seek_frame says: "Seek to the keyframe at timestamp."
[16:47:02 CEST] <Foaly> so you cannot seek to a frame that is not a keyframe
[16:47:41 CEST] <King_DuckZ> I'm writing an API that's like load_image(int frame_number), so although client code is likely to request sequential frames I can't be sure
[16:48:10 CEST] <Foaly> well, then you will have to keep state internally to optimize for that
[16:51:04 CEST] <King_DuckZ> this https://stackoverflow.com/questions/39983025/how-to-read-any-frame-while-having-frame-number-using-ffmpeg-av-seek-frame says I should land at the first keyframe at or before the PTS
[16:51:55 CEST] <Foaly> and it does not do that?
[16:54:05 CEST] <King_DuckZ> interesting... can I retrieve that somehow? for debug printing?
[16:55:13 CEST] <Foaly> what?
[16:55:36 CEST] <King_DuckZ> I'd like to print if my current frame is a keyframe or not
[16:56:36 CEST] <Foaly> AVFrame.key_frame duh
[16:57:52 CEST] <King_DuckZ> kewl
[17:02:05 CEST] <King_DuckZ> you're right, 160 is a keyframe :S
[17:02:09 CEST] <King_DuckZ> I see what's going on now
[18:02:58 CEST] <King_DuckZ> thanks for helping, I haven't fixed my problem yet but at least I know where I should look now :)
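
For reference, the usual exact-seek pattern with libavformat/libavcodec that this thread circles around: seek backwards to the keyframe at or before the target, flush the decoder, then decode forward until a frame at or past the target pts (a sketch with trimmed error handling; target_pts is in stream time_base units):

    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>

    static int seek_exact(AVFormatContext *fmt, AVCodecContext *dec,
                          int stream_index, int64_t target_pts, AVFrame *out)
    {
        AVPacket *pkt = av_packet_alloc();
        int ret;

        if (!pkt)
            return AVERROR(ENOMEM);

        /* land on the keyframe at or before target_pts */
        ret = av_seek_frame(fmt, stream_index, target_pts, AVSEEK_FLAG_BACKWARD);
        if (ret < 0)
            goto end;
        avcodec_flush_buffers(dec);              /* drop pre-seek decoder state */

        while ((ret = av_read_frame(fmt, pkt)) >= 0) {
            if (pkt->stream_index == stream_index)
                ret = avcodec_send_packet(dec, pkt);
            av_packet_unref(pkt);
            if (ret < 0)
                goto end;
            while ((ret = avcodec_receive_frame(dec, out)) >= 0) {
                if (out->pts >= target_pts)      /* first frame at/after the target */
                    goto end;                    /* ret is 0 here */
                av_frame_unref(out);
            }
            if (ret != AVERROR(EAGAIN))
                goto end;
        }
    end:
        av_packet_free(&pkt);
        return ret;
    }
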
[19:41:20 CEST] <grkblood13> is there a way to reuse a predefined list of filenames in segment mode? for example, if I defined 5 segments, could I have ffmpeg use file_0.ts through file_4.ts and overwrite the oldest one once all files are in use?
[19:41:43 CEST] <grkblood13> so after file_4.ts is written, file_0.ts would be rewritten next
[19:44:10 CEST] <ChocolateArmpits> grkblood13, there's a segment_wrap option
[19:45:29 CEST] <grkblood13> thanks, ill give it a look
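
A possible command along those lines, using the segment muxer's segment_wrap option (segment length and count are examples):

    ffmpeg -i input.ts -c copy -f segment -segment_time 6 -segment_wrap 5 file_%d.ts

With -segment_wrap 5 the output names cycle through file_0.ts ... file_4.ts and the oldest file is overwritten, as described above.
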
[19:45:46 CEST] <grkblood13> can you define segments by byte size?
[19:46:27 CEST] <ChocolateArmpits> grkblood13, don't see such an option
[19:46:37 CEST] <ChocolateArmpits> seems only frames or time specified
[19:47:26 CEST] <grkblood13> would frames from the same video all be the same size?
[19:47:37 CEST] <JEEB> possibly
[19:47:45 CEST] <JEEB> always check your AVFrame width/height
[19:47:49 CEST] <JEEB> after decoding
[19:48:00 CEST] Action: JEEB has streams that switch SD to HD and back
[19:48:24 CEST] <grkblood13> ok, thanks for the info
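
A tiny sketch of the check JEEB means, detecting a mid-stream resolution change after decoding (prev_w/prev_h are whatever the caller cached from the previous frame):

    #include <stdio.h>
    #include <libavutil/frame.h>

    /* Call after each successful avcodec_receive_frame(). */
    static void check_resolution(const AVFrame *frame, int *prev_w, int *prev_h)
    {
        if (frame->width != *prev_w || frame->height != *prev_h) {
            printf("resolution changed: %dx%d -> %dx%d\n",
                   *prev_w, *prev_h, frame->width, frame->height);
            *prev_w = frame->width;
            *prev_h = frame->height;
        }
    }
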
[19:48:32 CEST] <ChocolateArmpits> I think he's asking in terms of file size
[19:48:36 CEST] <JEEB> right
[19:48:40 CEST] <JEEB> that's all over the place :P
[19:48:52 CEST] <JEEB> depending on the encoder, parameters etc
[19:49:01 CEST] <grkblood13> oh, well nvmd then :/
[19:49:06 CEST] <JEEB> some encoders can possibly make you packets of exact size
[19:49:10 CEST] <JEEB> but that's just *some*
[19:49:19 CEST] <ChocolateArmpits> grkblood13, check out the options here https://www.ffmpeg.org/ffmpeg-formats.html#toc-segment_002c-stream_005fsegment_002c-ssegment
[19:49:21 CEST] <JEEB> also that tends to be inefficient as hell
[19:49:40 CEST] <grkblood13> i want to use ffmpeg in a web-worker to repack an mpegts stream into an hls one
[19:49:59 CEST] <grkblood13> so im thinking i need to create blobs beforehand
[19:50:11 CEST] <grkblood13> and Im guessing ill need to know the size of those blobs
[19:50:22 CEST] <grkblood13> so this might not be possible
[19:52:56 CEST] <BtbN> you want segments to always start at an iframe, so filesize is a bad measurement
[19:53:11 CEST] <BtbN> If you use CBR and a constant gop length, you will get a constant file size automatically
[19:53:32 CEST] <JEEB> but in HLS the size of segments doesn't really matter
[19:53:38 CEST] <JEEB> you just need to know your theoretical max
[19:53:42 CEST] <JEEB> (length)
[19:53:50 CEST] <JEEB> and the length in duration of each segment
[19:53:52 CEST] <BtbN> well, you do want them to be in roughly the same size
[19:53:58 CEST] <JEEB> sure
[19:54:02 CEST] <BtbN> otherwise bad networks will hate you
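
For x264, CBR plus a fixed GOP length might look roughly like this (rates and GOP size are placeholders; even then, segment sizes are only approximately constant):

    ffmpeg -i input -c:v libx264 -b:v 3000k -minrate 3000k -maxrate 3000k -bufsize 6000k \
           -x264-params nal-hrd=cbr -g 60 -keyint_min 60 -sc_threshold 0 -c:a aac out.ts
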
[19:54:03 CEST] <grkblood13> right, but the output would be written to blobs, not files
[19:54:09 CEST] <BtbN> to what?
[19:54:09 CEST] <JEEB> yes?
[19:54:11 CEST] <grkblood13> so i need to know beforehand how big to make those blobs
[19:54:28 CEST] <JEEB> you parse the input in your input parser far enough?
[19:54:35 CEST] <JEEB> then you know the size and then you can memcpy
[19:54:47 CEST] <JEEB> or write that block or whatever
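
If the segments really have to end up in in-memory blobs rather than files, one option at the libav* API level is a custom write callback via avio_alloc_context, roughly like this (the blob struct and the buffer handling are placeholders):

    #include <string.h>
    #include <libavformat/avformat.h>
    #include <libavutil/mem.h>

    struct blob { uint8_t *data; size_t size; };

    /* Collect muxer output into a growing in-memory buffer instead of a file. */
    static int write_blob(void *opaque, uint8_t *buf, int buf_size)
    {
        struct blob *b = opaque;
        uint8_t *p = av_realloc(b->data, b->size + buf_size);
        if (!p)
            return AVERROR(ENOMEM);
        memcpy(p + b->size, buf, buf_size);
        b->data = p;
        b->size += buf_size;
        return buf_size;
    }

    /* When setting up the output context:
     *   unsigned char *iobuf = av_malloc(4096);
     *   ofmt_ctx->pb = avio_alloc_context(iobuf, 4096, 1, &my_blob,
     *                                     NULL, write_blob, NULL);
     */
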
[00:00:00 CEST] --- Thu Oct 11 2018

