[Ffmpeg-devel-irc] ffmpeg.log.20150825
burek
burek021 at gmail.com
Wed Aug 26 02:05:01 CEST 2015
[01:24:15 CEST] <ChazNunz> Anyone available to help with a concat issue? I have three 15 second duration clips that I am trying to concat. The resulting video is 2:15 in duration, and not the expected 45 seconds. http://pastebin.com/PHrpeNc4
[01:28:33 CEST] <llogan> ChazNunz: since you're filtering (and therefore re-encoding) anyway, just use the concat filter. you could do it all in one command.
[01:31:46 CEST] <ChazNunz> llogan, I'm still learning (confused, really) about how i string together multiple commands. I'd love to do it all in one command.
[01:34:41 CEST] <ChazNunz> In theory though, my existing commands *should* work, right?
[01:39:43 CEST] <llogan> read docs about filterchains, filtergraphs, link labels. example: http://pastebin.com/raw.php?i=ELpy7EV3
[01:41:27 CEST] <ChazNunz> llogan, Very helpful. Let me go learn... Thank you very much.
[01:46:45 CEST] <llogan> ChazNunz: why is the input frame rate 30.14?
[01:48:29 CEST] <ChazNunz> llogan: The initial videos are recorded from RTSP streams from a security camera. They are recorded with FFMPEG -r 30, but there is some variability in the streams.
[01:48:58 CEST] <ChazNunz> Cameras are configured for RTSP to stream @ 30 FPS.
[01:50:52 CEST] <ChazNunz> All the streams come in at slightly different FPS - 30.14, 30.13, 30.15 It's never perfectly 30.00.
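[Editor's note] The one-command approach llogan suggests might look like the sketch below. The clip names and the testsrc-generated stand-ins are hypothetical, and the filter assumes video-only inputs (as with these camera recordings).

```shell
# Stand-ins for the three 15-second camera clips (names are hypothetical)
for i in 1 2 3; do
  ffmpeg -f lavfi -i testsrc=duration=1:size=320x240:rate=30 -y clip$i.mp4
done
# Concatenate and re-encode in a single command with the concat filter
ffmpeg -i clip1.mp4 -i clip2.mp4 -i clip3.mp4 \
  -filter_complex "[0:v][1:v][2:v]concat=n=3:v=1:a=0[out]" \
  -map "[out]" -y output.mp4
```

With a=0 the filter joins only the video streams; any other filtering (scaling, overlays) can be chained into the same filtergraph around the concat.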
[06:04:08 CEST] <FlorianBd> Hi guys :)
[06:05:17 CEST] <FlorianBd> I realize that whatever -threads option I use, ffmpeg does not use the full cpu power, because if I encode 2 videos at the same time, each of them takes the same amount of time (output to mp4 and webm, 30s and 12s respectively)
[06:05:38 CEST] <FlorianBd> so my question is: how can I encode to these formats using the maximum cpu power?
[07:34:34 CEST] <baadf00d> FlorianBd -threads 0
[07:39:43 CEST] <FlorianBd> baadf00d: thanks but that does not change anything. Same encoding times as with any other -threads option, and encoding 2 videos at the same time takes the same amount of time for each video, which means the cpu was not fully used.
[07:40:38 CEST] <FlorianBd> Intel(R) Xeon(R) CPU E3-1270 V2 @ 3.50GHz - ffmpeg 2.7.1 compiled native on 64
[08:17:06 CEST] <k_sze> Internally, does libav attempt to reuse memory blocks from some kind of pool when I call av_malloc?
[09:09:52 CEST] <c_14> k_sze: no
[10:52:04 CEST] <maksim_> hello, I am trying to merge a normal mp4 video (which did not have any audio) with an mp3 file.. i used a simple ffmpeg -i file.mp4 -i file.mp3 output.mp4 and it worked, but at the end of the video the sound suddenly becomes wrong (similar to playing a very slow version of the audio).. how can I just make it play entirely without this issue in the audio?!
[11:12:22 CEST] <r3m1> hello
[11:15:33 CEST] <durandal_1707> Hello
[11:15:48 CEST] <r3m1> i have a .mp4 video file. I extract frames very simply using -i video.mpg %5d.png . Now I see that all frames are duplicated and interlaced: i mean it is obvious that even rows of frame i are taken at time N, and odd rows of frame i are taken at time N+1
[11:16:13 CEST] <r3m1> and frames i and i+1 are duplicate when I extract them using the command above
[11:16:53 CEST] <r3m1> I would like to extract frames from the video such that image i contains only even rows, and image i+1 only odd rows
[11:17:24 CEST] <r3m1> so images have resolution height/2 but it doesn't matter
[11:17:32 CEST] <durandal_1707> see field filter
[11:17:33 CEST] <r3m1> is there a way to do it with filters ?
[11:18:29 CEST] <r3m1> i'm looking at filters, but there are so many. can you point me to relevant ones?
[11:20:17 CEST] <maksim_> durandal_1707: http://pastebin.com/tQpXUR8r
[11:20:44 CEST] <maksim_> everything goes fine, but at the end of the video the audio gets very strange as if it is a slowed down version of it.
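[Editor's note] One common cause of audio going wrong near the end is the mp3 simply being longer than the video. A hedged sketch (file names hypothetical; the inputs are testsrc/sine stand-ins) that copies the video, converts the audio to AAC, and stops at the shorter input:

```shell
# Stand-in inputs; real files would replace these two lines
ffmpeg -f lavfi -i testsrc=duration=2:size=320x240:rate=25 -an -y file.mp4
ffmpeg -f lavfi -i sine=frequency=440:duration=3 -y file.mp3
# Copy the video stream, encode the audio to AAC, trim to the shorter input
ffmpeg -i file.mp4 -i file.mp3 -c:v copy -c:a aac -shortest -y output.mp4
```

The mp3 stand-in assumes a build with an mp3 encoder; any existing mp3 file works just as well.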
[11:21:51 CEST] <durandal_1707> r3m1: field filter
[11:29:58 CEST] <r3m1> durandal_1707: alright! thanks
[11:30:46 CEST] <r3m1> durandal_1707: but is it possible to set it to alternate btw field 0 and 1 for each extracted frame?
[11:32:34 CEST] <durandal_1707> no
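[Editor's note] For reference, the field filter keeps one field per frame, halving the height; sketch below with a testsrc stand-in for the video. If the build has it, the separatefields filter may be closer to the alternating behaviour asked about, emitting each frame's two fields as consecutive half-height frames, though that is worth checking against the local documentation.

```shell
# Stand-in for the interlaced source
ffmpeg -f lavfi -i testsrc=duration=1:size=320x240:rate=25 -y interlaced.mp4
# Keep only the top (even) rows of every frame; output height is halved
ffmpeg -i interlaced.mp4 -vf field=type=top top_%05d.png
# Possible alternative: one image per field, alternating top/bottom
ffmpeg -i interlaced.mp4 -vf separatefields fields_%05d.png
```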
[14:08:03 CEST] <zhanshan> hi
[14:08:27 CEST] <zhanshan> does it make sense to choose 'keyframe=12' if I want to jump around while watching a lot?
[14:08:33 CEST] <zhanshan> it's a 28 min video
[14:08:44 CEST] <zhanshan> VLC kinda freezes and does crazy things!
[14:19:58 CEST] <Ping> Hello I'm trying to make a custom build of ffmpeg. I've downloaded the following build... https://github.com/Brainiarc7/ffmpeg_libnvenc and was able to configure make and install it.
[14:20:46 CEST] <Ping> My problem is that I would like to add the librtmp library to this particular build, but Mingw (the compiler I'm using) is unable to find the library when I download it
[14:36:04 CEST] <FlorianBd> c_14: sorry for the delay, here it is with the times : http://pastebin.com/U1U3dnDX
[14:36:12 CEST] <r3m1> I have a video where a big cross with thin lines is incrusted on top. I would like to remove this. delogo filter is a bit harsh as the cross is really just a big "+" sign with thin lines and I would like to preserve the interior... any filters for that?
[14:36:59 CEST] <FlorianBd> and for those who don't know what I'm talking about, the problem is that the encoding time is the same when encoding one video and when encoding another at the same time in another ffmpeg process. Therefore, not all my cpu is used.
[14:37:07 CEST] <c_14> FlorianBd: it's probably the scale filter
[14:37:13 CEST] <c_14> It doesn't thread very well (or at all)
[14:37:21 CEST] <FlorianBd> ah, hmmm
[14:38:38 CEST] <FlorianBd> c_14: is there a way to run e.g. 4 processes of ffmpeg that will create pngs w/ scale (each of 25% of the clip), then encode these png to mp4 and save time in the end?
[14:39:27 CEST] <FlorianBd> because in the use I'll make of it in the end, I will have to scale the video twice, one for mp4, one for webm
[14:40:35 CEST] <FlorianBd> so the idea would be to first make scaled pngs in a temp ramfs folder w/ 4 processes, then encode from them simultaneously w/ 2 processes to mp4 and webm while using the audio of the original video file
[14:41:28 CEST] <r3m1> is there a filter like "delogo" but where you could give a mask image to where the logo is?
[14:42:31 CEST] <c_14> FlorianBd: If the video is the same for the mp4 and the webm, you could just scale once, then use the split filter and have two outputs in one command. One for the mp4 and the other for the webm. You'll save one scaling pass.
[14:42:55 CEST] <c_14> https://trac.ffmpeg.org/wiki/Creating%20multiple%20outputs
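[Editor's note] In outline (hypothetical names, testsrc stand-in for the input), the scale-once-then-split idea could look like:

```shell
# Stand-in input
ffmpeg -f lavfi -i testsrc=duration=1:size=1280x720:rate=25 -y input.mp4
# Scale a single time, then split the scaled frames into two branches
ffmpeg -y -i input.mp4 \
  -filter_complex "scale=1024:576,split=2[a][b]" \
  -map "[a]" out.mp4 \
  -map "[b]" out.webm
```

Audio, if present, would need its own -map per output; this sketch is video-only, and the webm branch assumes a build with libvpx.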
[14:43:28 CEST] <FlorianBd> c_14: ok thanks, but will it use full cpu when encoding comparing to using 2 different ffmpeg instances?
[14:44:08 CEST] <r3m1> removelogo! great
[14:44:42 CEST] <FlorianBd> because I have a huge amount of ram, and this machine does only encoding, so maybe non-compressed png would be fast?
[14:44:53 CEST] <c_14> Well, you'll be able to run both encoding processes simultaneously. And if the scale filter is the blocker, you'll save the time for the second run of that. If your CPU is powerful enough to handle both encoding ops faster than the scale filter feeds data *shrug*.
[14:45:18 CEST] <FlorianBd> ok
[14:45:43 CEST] <FlorianBd> thanks c_14, I'll try this in the next couple hours and let you know :)
[14:45:46 CEST] <c_14> What you _could_ do if that isn't enough for you is to run numerous ffmpeg processes that scale chunks of the input video into some lossless codec (ffv1, ffvhuff, or whatever) then concatenate those and encode them
[14:56:04 CEST] <FlorianBd> c_14: that's exactly what I meant w/ png
[14:56:36 CEST] <FlorianBd> because in that case it couldn't be a video codec, it has to be frame by frame pictures
[14:57:51 CEST] <c_14> It can be a video codec. You just have to make sure to cut on i-frames and then concat together with the concat demuxer. You could also use pngs with the image2 muxer/demuxer.
[15:01:27 CEST] <FlorianBd> I just tried the double outputs but that does not save time
[15:02:05 CEST] <FlorianBd> but look, webm takes 12s, mp4 takes 32s. Whether they are in parallel or not, same thing. So it's obviously the encoding that is long.
[15:02:34 CEST] <FlorianBd> I need to find a way to use full cpu w/ that mp4 encoding
[15:02:54 CEST] <FlorianBd> 12s is acceptable in my situation (webm) but not over 30 (mp4)
[15:03:30 CEST] <FlorianBd> c_14: so a solution could be to use 2 processes to encode mp4, while using one for webm.
[15:03:47 CEST] <FlorianBd> but I have no idea if that's possible for mp4
[15:08:29 CEST] <c_14> You could use preset medium instead of preset slow if the filesize up is ok, otherwise you can cut the video into two parts on an i-frame and then encode to 2 files which you then concat with codec copy
[15:25:28 CEST] <FlorianBd> c_14: can I concat all that via pipes in real time to avoid temp files? (even though it will be in ramfs)
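[Editor's note] The cut-encode-concat route c_14 describes might be sketched like this (names hypothetical, testsrc stand-in, and the 2-second split point chosen arbitrarily; in practice the cut should land on an i-frame for a clean copy-concat):

```shell
# Stand-in source with a keyframe every 25 frames
ffmpeg -f lavfi -i testsrc=duration=4:size=320x240:rate=25 -g 25 -y src.mp4
# Encode the two halves (these two commands can run in parallel)
ffmpeg -i src.mp4 -t 2 -y part1.mp4
ffmpeg -ss 2 -i src.mp4 -y part2.mp4
# Rejoin the halves without re-encoding, via the concat demuxer
printf "file 'part1.mp4'\nfile 'part2.mp4'\n" > list.txt
ffmpeg -f concat -safe 0 -i list.txt -c copy -y joined.mp4
```

On the pipes question: the concat demuxer opens the listed files itself, and the mp4 muxer generally wants a seekable output, so temp files in ramfs (as planned) are probably the practical route.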
[15:42:12 CEST] <deivior_> Hi, I'm streaming ogg over http and I'd like to know if there's any flag I can use to send the file duration on the headers
[15:46:47 CEST] <DrBenway> hey guys, I'm trying to transcode an mp4 h264 of about 4 seconds using avformat. i copy most of my codec/format settings from the input stream. problem is: the output video seems to have a missing second. (should be 4 seconds but it only has 3)
[15:46:51 CEST] <DrBenway> what could this be due to?
[15:51:07 CEST] <deivior_> can you paste the link?
[15:51:19 CEST] <deivior_> sorry,command
[15:51:33 CEST] <DrBenway> this is code
[15:51:34 CEST] <DrBenway> not a command
[15:52:31 CEST] <DrBenway> i load an mp4 using avformat, decode the frames and then write them back using av_format and avcodec_encode_video2
[15:52:47 CEST] <DrBenway> (let me know if i should be on ffmpeg-devel instead)
[15:52:53 CEST] <DrBenway> (or some other channel)
[15:53:55 CEST] <deivior_> No clue, sorry
[17:02:33 CEST] <deivior_> quit
[17:22:38 CEST] <brontosaurusrex> a. what would be a nice cli to make a non-black movie thumbnail (just one)? b. few frames animation for web (gif, png, webm?)?
[17:27:18 CEST] <durandal_1707> ffmpeg
[17:30:03 CEST] <brontosaurusrex> durandal_1707: lmao, very funny
[17:55:03 CEST] <brontosaurusrex> testing: ffmpeg -i in.dv -vf thumbnail -frames:v 1 out.png
[17:55:21 CEST] <brontosaurusrex> but that doesn't take anamorphicity into account
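[Editor's note] One way to fold the sample aspect ratio into the thumbnail (a sketch; in.mp4 is a testsrc stand-in for the DV file) is to scale by iw*sar and reset the SAR:

```shell
# Stand-in input (a real anamorphic DV file would replace this)
ffmpeg -f lavfi -i testsrc=duration=1:size=320x240:rate=25 -y in.mp4
# Pick a representative frame, stretch to display width, mark pixels square
ffmpeg -i in.mp4 -vf "thumbnail,scale=iw*sar:ih,setsar=1" -frames:v 1 -y out.png
```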
[18:00:49 CEST] <FlorianBd> so ffmpeg can save to -f rawvideo but cannot read it in input after? (Invalid data found when processing input)
[18:01:39 CEST] <DHE> you can't just use rawvideo. you need to give it information about it, like format and resolution and stuff. raw video has no headers
[18:44:53 CEST] <FlorianBd> DHE: I see, thanks :)
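[Editor's note] DHE's point in sketch form: writing raw video discards all metadata, so reading it back requires restating the pixel format, geometry, and rate (the values here match the testsrc stand-in):

```shell
# Write raw frames: no container, no headers
ffmpeg -f lavfi -i testsrc=duration=1:size=320x240:rate=25 \
  -f rawvideo -pix_fmt yuv420p -y frames.raw
# Read them back: everything a header would have carried must be restated
ffmpeg -f rawvideo -pix_fmt yuv420p -video_size 320x240 -framerate 25 \
  -i frames.raw -y restored.mp4
```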
[18:56:46 CEST] <DeifLou> hello. I'm trying to build ffmpeg with libopenjpeg support under msvc. I have built libopenjpeg as a release shared library. I configure ffmpeg with --enable-libopenjpeg, but the configure script ends with "ERROR: libopenjpeg not found". The config.log file says that it failed linking the program that configure creates internally to test the presence of the openjpeg library. The specific error is: "unresolved external symbol _opj_ver
[18:57:46 CEST] <DeifLou> the openjpeg library exports the symbols as _stdcall and i don't know how to make ffmpeg recognize that
[18:58:33 CEST] <DeifLou> thanks for the help in advance
[20:13:35 CEST] Action: FlorianBd just realized that the position of -y matters... should be at the end then
[20:17:22 CEST] <FlorianBd> ah no it actually doesn't. I have a problem coming from something else
[20:19:16 CEST] <FlorianBd> ffmpeg -hide_banner -loglevel info -threads 0 -i video -i over.png -filter_complex scale=1024x576,overlay=main_w-overlay_w:main_h-overlay_h -t 30 -c:v v410 -af aresample=44100 -c:a pcm_s16le -y -f mov video-temp
[20:19:33 CEST] <FlorianBd> any idea what can make this hang right after writing the headers of the output file ?
[20:20:19 CEST] <FlorianBd> "Press [q] to stop, [?] for help", headers are written (36 bytes), then nothing ever happens and no cpu usage
[20:20:35 CEST] <FlorianBd> but that seems to happen only when called from a perl script, not directly in command line
[20:21:04 CEST] <FlorianBd> but I already executed other ffmpeg commands from perl successfully
[20:43:37 CEST] <DrBenway> I'm using avformat and the h264 decoder to get raw frames. I have a 4 seconds video at 25 fps. is there any common reason for av_read_frame to return eof after 92 frames?
[20:43:49 CEST] <DrBenway> it feels like logically i should get 100 frames
[20:45:16 CEST] <DrBenway> (this is that typical bunny video)
[20:48:09 CEST] <DrBenway> although if i do ffmpeg.exe -i deleteme\h264.mp4 deleteme\pictures%d.jpg
[20:48:12 CEST] <DrBenway> i get my 100 frames
[20:51:52 CEST] <DrBenway> what i notice is that avcodec_decode_video2 fails to return a frame ~8 times
[20:52:06 CEST] <DrBenway> am i supposed to do something about those non frames?
[20:53:00 CEST] <DrBenway> the other thing that i notice is that they are the 8 first frames of the video
[21:12:15 CEST] <FlorianBd> ah I found the problem, the command needs </dev/null for some reasons
[22:34:00 CEST] <podman> Is there any sort of inherent priority for video streams in a container?
[22:34:21 CEST] <podman> if i have a multi-stream WMV that I need to convert down to a single stream mp4, how do I pick the correct stream?
[22:36:19 CEST] <JodaZ> podman, the map option
[22:36:37 CEST] <podman> JodaZ: I guess I meant how do I know which one I should be interested in
[22:36:51 CEST] <podman> I'll give you an example in a sec
[22:37:13 CEST] <iive> podman: play them one by one
[22:37:24 CEST] <podman> iive: this is an automated system
[22:38:05 CEST] <iive> podman: how do you know which one is the correct one?
[22:38:13 CEST] <podman> https://gist.github.com/podman/5b2fd4405ade3fd9b65b
[22:40:05 CEST] <JodaZ> so you want the ai that decides what language you prefer sentient or does some kinda heuristic suffice? xD
[22:40:07 CEST] <iive> so, the first video stream is single image
[22:40:38 CEST] <podman> JodaZ: yeah, a simple heuristic would work, assuming that there is some sort of general rule that works for most cases
[22:42:02 CEST] <JodaZ> podman, you see, the way i see it is that you are asking the question because the "general rule that works for most cases" that ffmpeg uses just failed for you
[22:42:52 CEST] <podman> JodaZ: is the general assumption that the first stream is the "best" one? or is there no real preference given?
[22:43:09 CEST] <JodaZ> i dunno really, but i think it'd use the first one
[22:43:28 CEST] <podman> Actually... I wonder.
[22:44:06 CEST] <podman> Part of it could be my fault. I'm extracting width and height information from the video streams but I'm using the first one in all cases
[22:45:20 CEST] <podman> and then i'm doing something like this: -vf "scale='iw*sar:ih',scale='min(iw+mod(iw,2),160):-2'"
[22:46:03 CEST] <podman> So, I could potentially try using the last stream in all cases
[22:47:10 CEST] <podman> I have a feeling ffmpeg is doing the right thing but I'm picking the wrong stream to use for the video dimensions
[22:48:48 CEST] <llogan> default stream selection will choose the video stream with the largest frame size.
[22:49:54 CEST] <JodaZ> do you really need to manually put the height into the scale command, can't you just use ih?
[22:52:50 CEST] <podman> JodaZ: it's not ih though
[22:54:01 CEST] <JodaZ> why not?
[22:55:16 CEST] <podman> well, for once it should be the width
[22:55:54 CEST] <podman> and it's not something that's coming from ffmpeg. i have to go back through my code... one second
[22:56:54 CEST] <podman> JodaZ: basically that code is trying to preserve the aspect ratio
[22:57:03 CEST] <podman> llogan: actually that's probably the most helpful bit of info
[22:57:22 CEST] <podman> so, basically i should choose the stream with the largest frame size and I should be all set
[22:58:22 CEST] <JodaZ> podman, i think you can do that with just some scale magic
[22:59:09 CEST] <podman> JodaZ: well it's to get it to fit within a specified width and height and preserving the aspect ratio
[22:59:27 CEST] <podman> without being larger than the original width & height
[22:59:52 CEST] <JodaZ> sounds doable with all that lt gt the scale thing can evaluate
[23:00:10 CEST] <podman> The way I'm doing it mostly works except for this odd case. If I'm smarter about which stream to use to get the dimensions, it should be fine though
[23:01:16 CEST] <JodaZ> i mean that macro language you can use in that scale command for example is like close to turing complete if not
[23:03:17 CEST] <podman> llogan: do you know where I can find documentation of that behavior?
[23:21:50 CEST] <llogan> podman: http://ffmpeg.org/ffmpeg.html#Stream-selection
[23:22:01 CEST] <podman> llogan: thanks!
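[Editor's note] Once the wanted stream is known (e.g. the largest one, per the stream-selection rule llogan cites), -map selects it explicitly. The two-video-stream input below is fabricated for illustration:

```shell
# Build an mp4 with two video streams of different sizes (illustration only)
ffmpeg -f lavfi -i testsrc=duration=1:size=160x120:rate=25 \
       -f lavfi -i testsrc=duration=1:size=320x240:rate=25 \
       -map 0:v -map 1:v -y multi.mp4
# Explicitly keep only the second video stream (index 1, zero-based)
ffmpeg -i multi.mp4 -map 0:v:1 -y single.mp4
```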
[00:00:00 CEST] --- Wed Aug 26 2015
More information about the Ffmpeg-devel-irc
mailing list