[Ffmpeg-devel-irc] ffmpeg.log.20180111

burek burek021 at gmail.com
Fri Jan 12 03:05:01 EET 2018


[00:05:50 CET] <oborot> Has anybody used ffmpeg on AWS Lambda?
[00:06:27 CET] <oborot> Seems extremely slow, even if I give my function 1.5GB of memory
[00:07:29 CET] <oborot> I was using a pre-built binary made around Christmas so it was fairly up to date
[00:09:07 CET] <furq> what's the command
[00:09:54 CET] <oborot> Here's one of the commands I was doing that took a while:
[00:09:54 CET] <oborot>     ffmpeg -loop 5 -i $TEMP_FILE -vf "zoompan=z='if(lte(zoom,1.0),1.2,max(1.001,zoom-0.0035))':d=125" -t 1 -threads $(nproc)   -preset ultrafast -crf 0 -r 25 -c:v libx264 -y "${WORK_DIR}/${counter}.mp4" < /dev/null
[00:10:20 CET] <oborot> It's a zoom out effect on an image
[00:10:45 CET] <oborot> If I recall, that took around 20-30 seconds or so
[00:10:57 CET] <oborot> It's pretty fast on my Mac though
[00:11:22 CET] <oborot> Err, ignore that -loop 5 bit, it's actually -loop 1
[00:12:07 CET] <oborot> Here is the actual command:
[00:12:08 CET] <oborot>     ffmpeg -loop 1 -i $TEMP_FILE -vf "zoompan=z='if(lte(zoom,1.0),1.2,max(1.001,zoom-0.0035))':d=125" -t 5 -threads $(nproc)   -preset ultrafast -crf 0 -r 25 -c:v libx264 -y "${WORK_DIR}/${counter}.mp4" < /dev/null
[00:13:34 CET] <oborot> I'm trying to build the latest git head source and testing that out
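[Editor's note: one thing worth checking on a CPU-limited environment like Lambda is zoompan's output size, which defaults to hd720 regardless of the input. A hypothetical variant of the command above that pins the filter's `s=` option to a smaller size (all paths and variables are the placeholders from the original command):]

```shell
ffmpeg -loop 1 -i "$TEMP_FILE" \
  -vf "zoompan=z='if(lte(zoom,1.0),1.2,max(1.001,zoom-0.0035))':d=125:s=1280x720" \
  -t 5 -r 25 -c:v libx264 -preset ultrafast -crf 0 \
  -y "${WORK_DIR}/${counter}.mp4" </dev/null
```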
[00:51:20 CET] <zamba> how do i turn a mono signal into a stereo signal?
[00:51:28 CET] <zamba> meaning, how to put what's in the left channel in both channels
[00:51:55 CET] <teratorn> zamba: theres a filter that will do that
[00:52:49 CET] <zamba> teratorn: any more details?
[00:53:02 CET] <zamba> ffmpeg -i input.mp3 -ac 2 output.m4a
[00:53:05 CET] <zamba> but that didn't work
[00:57:24 CET] <teratorn> zamba: https://trac.ffmpeg.org/wiki/AudioChannelManipulation
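[Editor's note: the recipe on that wiki page for duplicating one channel into both stereo channels is the pan filter; filenames here follow zamba's example:]

```shell
# Put the single (left) input channel into both output channels.
ffmpeg -i input.mp3 -af "pan=stereo|c0=c0|c1=c0" output.m4a
```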
[01:10:42 CET] <zamba> teratorn: that's what i tried, but i'm only getting from one channel
[01:11:14 CET] <teratorn> zamba: try the one with the filter
[01:12:55 CET] <zamba> Stream specifier ':a' in filtergraph description [0:a][0:a]amerge=inputs=2[aout] matches no streams.
[01:23:36 CET] <teratorn> uh
[01:23:59 CET] <teratorn> zamba: to me that sounds like it means that input #0 has no audio at all
[01:28:47 CET] <zamba> well, it has
[03:29:20 CET] <buu> So I'm using ffplay to play a file containing the stream 'Video: h264 (High), yuv420p(tv, bt709, progressive), 1912x568', on a 4k tv. It's playing full screen but the video itself is letter boxed
[03:29:29 CET] <buu> on all four sides
[03:31:05 CET] <furq> ffplay isn't intended to be used as an actual player, it's just there for debugging
[03:31:25 CET] <furq> you probably have to explicitly scale it if you want it to play at non-native res
[03:34:51 CET] <DHE> I've seen ffplay get confused by things like dual monitors, etc. like furq said, it's more a debug tool and demonstration of the decoder/playback APIs than a real player. I suggest mpv for real playback
[04:29:34 CET] <buu> =[
[04:29:35 CET] <buu> ok
[05:19:54 CET] <verite> Has anyone in here used both ffmpeg and avconv on a Mac who can summarize the pros and cons of each?
[05:35:38 CET] <chocolaterobot> furq: hi. `ffmpeg -i file:filename.ogg -loop 1 -i image.png -tune stillimage -shortest -c:a copy out.mkv` made my 5.1MB ogg into a 10.6MB out.mkv. (the jpeg is 110KB).
[05:35:44 CET] <chocolaterobot> is this to be expected?
[06:16:06 CET] <kepstin> chocolaterobot: by default it makes a 25fps video. assuming your song is several minutes long, that'll have a bunch of keyframes, so it'll be larger than the jpeg, yeah
[06:17:36 CET] <kepstin> chocolaterobot: depending what your final use case is, you might consider lowering the framerate, increasing the gop size (keyframe interval), or lowering the video quality (higher crf value)
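[Editor's note: a sketch combining kepstin's three suggestions — low framerate, long keyframe interval, high CRF — with the audio copied untouched; filenames are the ones from chocolaterobot's command, and the `-r`/`-g`/`-crf` values are illustrative:]

```shell
ffmpeg -i file:filename.ogg -loop 1 -i image.png -shortest \
  -c:a copy -c:v libx264 -r 1 -g 300 -crf 35 -tune stillimage out.mkv
```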
[07:23:18 CET] <troyborg> Hey, so I got a new security camera that transfers videos wirelessly to my NAS.   It seems to screw up writing the moov part sometimes.  But it generates an IDX file that looks like it could be used to rebuild the moov part of the file..
[07:23:42 CET] <troyborg> Is there a way to do this..   Here is what the IDX file kinda looks like:    https://pastebin.com/raw/QuvjLSbK
[07:35:50 CET] <furq> i have no idea what that is, but there are tools that will rebuild a moov atom if you have a working file from the same device
[07:36:13 CET] <furq> i forget the name of the oss one that does it and that isn't way out of date
[11:17:23 CET] <pagios> k any gstreamer freelance developers in the house? IOS/Android for GST needed
[11:20:15 CET] <Imaster> Hi,  Is it possible to add some custom atom in the header of a video file?  I need to add the term "loop" in the header of MP4 files before the "moov" atom.  More information can be found here :  https://stackoverflow.com/questions/44893316/whatsapp-video-as-gif-sharing-on-android-programatically  Note: Just opening the video file using any text editor shows the file header and atoms.
[11:20:25 CET] <Imaster> I do not want to edit the headers via too much calculations, as the answer suggests. Is there a simple FFMPEG command to add the atom "loop" to any MP4 files?
[11:20:38 CET] <Imaster> Use Case :-  WhatsApp does a smart thing. Their animated GIFs are actually MP4 files (which contain this atom "loop"), and the MP4 files which it wants to treat as videos in their UI do not get that "loop" atom.  I tried loads of -loop and stream loop commands.. Nothing is adding the atom "loop" to my mp4 files
[11:20:46 CET] <Imaster> My current command for converting a GIF to MP4 is as follows :
[11:20:51 CET] <Imaster>     ffmpeg -i giphy9.gif -c:v libx264 -c:a aac -pix_fmt yuv420p -movflags +faststart  giphy9_loop.mp4
[11:20:56 CET] <Imaster> I wish to add some parameter to the above command (or maybe run a separate command) so that the output mp4 file has that "loop" atom in the header.
[11:22:36 CET] <kili> Hi
[11:31:25 CET] <kili> I have quick question
[11:35:47 CET] <kili> At my company we use WMV8 on an embedded device, but lately we've noticed stability problems. How can we improve stability when we use ffmpeg to convert the video? I mean, would lowering the bitrate help?
[11:36:25 CET] <kili> Maybe this is simple question, but not for me
[11:37:16 CET] <squ> what is stability in video?
[11:37:32 CET] <squ> stable video is what?
[11:39:42 CET] <neca> hey, i have a little question. the '-ar' - what exactly does it do and what options do i have? could i, e.g., do a '-ar 43200' to convert a 440hz file to a 432hz file?
[11:40:59 CET] <BtbN> I doubt non-standard samples rates are supported. And if they are, they are a terrible idea.
[11:43:03 CET] <neca> 432hz is >>the<< standard. it's called pythagorean tuning: https://en.wikipedia.org/wiki/Pythagorean_tuning -- it only changed in the ~forties or fifties, because of industrial reasons.
[11:43:36 CET] <neca> or am i wrong here?
[11:44:25 CET] <neca> (we used 432hz for several thousand years. just recently we switched to 440hz)
[11:45:29 CET] <BtbN> It's 44100Hz, or 44.1kHz.
[11:45:52 CET] <BtbN> Also, tell me more about digital audio sampling in the 50s.
[11:46:35 CET] <neca> sry, didn't want to sound offensive. i was just amazed you called it non-standard
[11:46:42 CET] <BtbN> Well, because it is.
[11:46:57 CET] <neca> orchestras still tune to 432hz most of the time?
[11:47:12 CET] <BtbN> 44.1kHz and 48kHz is what is used in way over 99% of the normal cases. Hi-Fi stuff sometimes uses notably higher sample rates.
[11:47:30 CET] <BtbN> Orchestras don't have sample rates. You're talking about something entirely different.
[11:47:45 CET] <neca> ah, ok. that is very well possible :D
[11:48:12 CET] <neca> is there an option to change tuning via ffmpeg or do the source need to be tuned?
[11:49:02 CET] <neca> *does
[11:49:12 CET] <BtbN> I'm not even sure what you mean by tuning in the context of digital audio. You can't easily digitally re-tune instruments used in the music.
[11:51:54 CET] <neca> well, the C in most cases is tuned to 440hz. i thought it maybe would be possible to change the audio digitally to tune the C to 432hz (and everything else accordingly) or would it be too hard to "figure out where the C is"?
[11:53:09 CET] <neca> i saw someone doing that with a digital file - but that was only a single guitar.
[11:55:37 CET] <neca> maybe it was a digital created guitar-track (e.g. he could simply change it within his tool of choice where he put the nodes etc in)
[11:57:52 CET] <neca> sry, am noob regarding that stuff :3
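[Editor's note: what neca is asking for is a pitch shift, which ffmpeg can approximate without detecting any particular note: resample with asetrate to lower the pitch by the ratio 432/440, then correct the speed with atempo so the duration is unchanged. This shifts everything uniformly (it cannot re-tune individual instruments); filenames and the 44100 Hz rate are illustrative:]

```shell
ffmpeg -i input.wav \
  -af "asetrate=44100*432/440,aresample=44100,atempo=440/432" \
  output.wav
```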
[11:59:04 CET] <kili> squ: stability for me is not having slide show on screen.
[12:10:19 CET] <FurretUber> Hi, I'm trying to build a static FFmpeg but the configure does not find/use the libraries I have built, it searches only the system ones. The configure fails at libfdk_aac. The command I'm using to configure is:
[12:10:20 CET] <FurretUber> PKG_CONFIG_PATH=/home/usuario/Documentos/ffmpeg_linux/ffmpeg_build/lib/pkgconfig ./configure --extra-libs=-static --prefix=/home/usuario/Documentos/ffmpeg_linux --enable-gpl --enable-libx264 --enable-libfdk_aac --enable-libopus --enable-libvorbis --enable-libvpx --enable-nonfree --disable-shared --disable-doc --pkg-config-flags=--static '--extra-cflags=-I/home/usuario/Documentos/ffmpeg_linux/include -static'
[12:10:20 CET] <FurretUber> --extra-ldflags=-static '--extra-ldflags=-I/home/usuario/Documentos/ffmpeg_linux/include -L/home/usuario/Documentos/ffmpeg_linux/lib -ldl' --disable-ffprobe --disable-ffplay
[12:10:20 CET] <FurretUber> And the error message is:
[12:10:20 CET] <FurretUber> ERROR: libfdk_aac not found
[12:16:56 CET] <FurretUber> By the logs, clearly FFmpeg was not able to find the libfdk_aac, but shouldn't the --extra-cflags and --extra-ldflags options point to the correct directory?
[12:29:30 CET] <q66> it probably uses pkg-config to find fdk_aac and cannot find it
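[Editor's note: q66's point can be checked directly — configure locates libfdk_aac through pkg-config, not through --extra-cflags/--extra-ldflags, so the module (named "fdk-aac") must resolve under the same PKG_CONFIG_PATH used on the configure line above:]

```shell
export PKG_CONFIG_PATH=/home/usuario/Documentos/ffmpeg_linux/ffmpeg_build/lib/pkgconfig
# Report exactly why the module cannot be found, if it cannot.
pkg-config --exists --print-errors fdk-aac
# Show the static link line configure would get.
pkg-config --static --libs fdk-aac
```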
[12:37:41 CET] <th3_v0ice> Hello, can anybody give me an example or the required steps to mux two input files together using the ffmpeg C API? I am not sure what exactly I need to do, because PTS and DTS need to be changed? I have looked at transcoding.c and remux.c and muxing.c but I can't understand how to do it with two input files. Thanks!
[12:40:33 CET] <JEEB> th3_v0ice: all streams (and decoders) can have their own time bases and timestamp ranges. you will have to make them match, yes.
[12:40:59 CET] <JEEB> so that the packets from all inputs are in synch
[12:45:50 CET] <th3_v0ice> @JEEB, so I open up the input files, read them one by one, use av_packet_rescale_ts with that particular stream time base and write the result in the output file? I can make them match by just having a next_pts value and using that as an next frame pts?
[12:49:59 CET] <JEEB> th3_v0ice: although I think the biggest possible thing is the start timestamp actually rather than the time base (although scaling the timestamps to the output stream's time base is important)
[12:50:53 CET] <JEEB> because audio streams for example can have encoder delay of N ms, which would have their initial timestamps be negative (or X amount of smaller than the first timestamp of the other related stream, such as video)
[13:03:26 CET] <th3_v0ice> JEEB, some pseudo code would be greatly appreciated if you have it. What i dont understand is should all PTSs be different? To clarify, if I have a video frame and then the audio frame and audio frame, PTSs should be 1,2,3 respectively, right? Thanks for all the help.
[13:03:49 CET] <JEEB> each stream has its own time line
[13:06:57 CET] <th3_v0ice> JEEB, so for each stream PTS value is increasing in some manner, but PTS can be the same between the streams? This should make sense.
[13:07:24 CET] <th3_v0ice> *This makes sense.
[15:17:23 CET] <Fyr> guys, why do the developers not implement parameters for libx265 in command line?
[15:17:23 CET] <Fyr> libx264 has plenty of options to set in command line, while one has to use -x265-params for libx265.
[15:29:37 CET] <JEEB> Fyr: because nobody cares? and libx26{4,5}-params works from the command line as well. Also IIRC some of the matches in libx264 weren't that great. I do agree that libx265 could have some extra stuff utilized like maxrate/bufsize, but as far as I can tell nobody's caring about that encoder too much :P
[15:45:55 CET] <FurretUber> When using filters as showwaves, is there a way to choose the background color of the output?
[15:47:38 CET] <durandal_1707> FurretUber: default background is transparent so overlay it over something
[15:49:00 CET] <FurretUber> Ah, I thought that it had black color
[15:51:36 CET] <durandal_1707> FurretUber: in recent version its transparent
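[Editor's note: following durandal_1707's advice, one way to get an opaque background is to overlay the transparent showwaves output on a color source; sizes, colors and filenames here are illustrative:]

```shell
ffmpeg -i input.mp3 -filter_complex \
  "color=c=black:s=640x240[bg];[0:a]showwaves=s=640x240:mode=line[w];[bg][w]overlay=shortest=1" \
  -c:v libx264 -pix_fmt yuv420p output.mp4
```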
[17:13:42 CET] <chocolaterobot> kepstin: thx for your reply. I'd like to upload the audio to YouTube, and they'll only accept audio if combined with Video.
[17:14:13 CET] <chocolaterobot> kepstin: video quality therefore doesn't matter
[17:14:53 CET] <chocolaterobot> But I don't want audio quality to deteriorate
[17:15:41 CET] <furq> chocolaterobot: if it's for youtube then drop the framerate
[17:16:28 CET] <furq> although there used to be a bug where -loop 1 -i image.png -shortest would get the duration wrong if you copied the audio
[17:16:34 CET] <furq> and it would get worse at lower framerates
[17:16:41 CET] <furq> so you'll want to double check that
[17:17:18 CET] <furq> idk if that ever got fixed, youtube won't let me upload anything until i complete their COPYRIGHT SCHOOL exam
[17:24:24 CET] <chocolaterobot> furq: I never had to complete a copyright exam
[17:24:51 CET] <furq> you will if someone falsely copyright claims an entire album you uploaded
[17:25:14 CET] <chocolaterobot> My audio is not copyrighted though (it's my audio creation)
[17:25:31 CET] <chocolaterobot> Furq I upload them as private
[17:25:39 CET] <chocolaterobot> Only for my eyes and ears
[17:25:52 CET] <furq> that will help to an extent
[17:26:06 CET] <furq> not as much as it being something you made
[17:27:08 CET] <chocolaterobot> Okay
[17:27:22 CET] <chocolaterobot> I understand
[18:12:33 CET] Action: kepstin notes that audio not being copyrighted (being of your own creation) is not really enough to stop youtube automatic copyright claims... https://twitter.com/littlescale/status/949032404206870528 :)
[18:14:08 CET] <Cracki> white noise is just data you can't decipher
[18:14:21 CET] <Cracki> (maybe)
[18:43:32 CET] <kiroma> Hey, I just tried to compile ffmpeg statically with lto and it failed.
[18:44:32 CET] <kiroma> build log from where it failed: https://pastebin.com/6DW0MYE5
[18:45:28 CET] <kiroma> Could it be a compiler bug?
[18:47:34 CET] <kiroma> I think I saw a similar problem while compiling another project with lto so I'm not sure.
[18:50:53 CET] <JEEB> kiroma: that's the toolchain failing, yes
[18:51:30 CET] <kiroma> Alright, thanks.
[18:51:48 CET] <JEEB> or well, that's how it looks like
[18:51:49 CET] <JEEB> :P
[20:47:01 CET] <ccyang> Hi, I am trying to install ffmpeg from source with H.264 support.  Before this, I have installed libx264 from VideoLAN.  However, when I compile ffmpeg, it complains about the missing identifier x264_bit_depth in libavcodec/libx264.c.  I have looked through the source trees of both ffmpeg and x264, but I can't find any definition for x264_bit_depth.  Does anybody know where the issue comes from?
[20:49:03 CET] <kepstin> ccyang: it's a symbol from the x264 library, it's declared in the main x264.h header.
[20:50:02 CET] <ccyang> Hi kepstin: yes, I do have x264.h and x264_config.h installed by VideoLAN source.  But there is no x264_bit_depth defined there.
[20:50:12 CET] <kepstin> getting an error with that undefined usually indicates an x264 installation issue.
[20:50:45 CET] <kepstin> latest x264 git?
[20:50:54 CET] <ccyang> Latest stable release
[20:51:09 CET] <kepstin> x264 doesn't really do stable releases
[20:51:19 CET] <ccyang> Let me check, hold on.
[20:51:38 CET] <ccyang> x264-snapshot-20180103-2245
[20:51:43 CET] <ccyang> It's pretty new
[20:52:03 CET] <kepstin> right, that should match the latest git then, I don't think it's changed since late last december.
[20:52:28 CET] <ccyang> I'm confused why there is no x264_bit_depth defined anywhere in their headers
[20:52:32 CET] <kepstin> so, you built & installed x264, and are looking at the installed copy of the x264.h?
[20:52:47 CET] <ccyang> yes, and all the source code in x264 tree
[20:52:54 CET] <ccyang> no such thing exists
[20:53:10 CET] <kepstin> oh, wait, hmm. you might actually have *too new* of an x264 library
[20:53:18 CET] <ccyang> @@? oh?
[20:53:34 CET] <kepstin> https://git.videolan.org/?p=x264.git;a=commit;h=71ed44c7312438fac7c5c5301e45522e57127db4 they removed the symbol and made the library multi-bit-depth
[20:54:03 CET] <kepstin> grab a snapshot from november 2017 or earlier and it'll work with ffmpeg until ffmpeg is updated.
[20:54:28 CET] <kepstin> or try a git version of ffmpeg, people might have already been working on it
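[Editor's note: a sketch of kepstin's workaround. The snapshot date below only illustrates the naming scheme on download.videolan.org and may need adjusting to whatever pre-December-2017 tarball is actually available:]

```shell
curl -LO https://download.videolan.org/pub/videolan/x264/snapshots/x264-snapshot-20171130-2245.tar.bz2
tar xjf x264-snapshot-20171130-2245.tar.bz2
cd x264-snapshot-20171130-2245
./configure --enable-static --enable-pic && make && sudo make install
```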
[20:54:49 CET] <ccyang> Thanks so much.  This helps a lot.
[20:54:55 CET] <ccyang> I know what to do now.
[21:36:40 CET] <Bobobo> Hi everyone
[21:37:35 CET] <Bobobo> I'm trying to compile ffmpeg (latest version) in a raspberry pi 3b, using raspbian (also latest version), everything is updated, even raspberry firmware, however I can't get 'mmal' hw acceleration. Do you know if there is currently any bug with that?
[21:42:46 CET] <c_14> Bobobo: does --enable-mmal fail?
[21:43:33 CET] <Bobobo> no, in fact, ./configure with --enable-mmal works perfectly, and it even shows mmal inside 'Hardware accelerators:' section
[21:43:47 CET] <JEEB> ok, then it's being built
[21:43:50 CET] <Bobobo> but when it's finally compiled, the command 'ffmpeg -hwaccels' won't show anything
[21:43:54 CET] <JEEB> and something like mpv should be able to utilize it
[21:44:08 CET] <JEEB> ffmpeg.c is limited in what it can utilize so it might not be usable through it
[21:44:22 CET] <Bobobo> there isn't any '*mmal*' file inside ffmpeg install dir (I use custom --prefix)
[21:44:30 CET] <JEEB> nope
[21:45:04 CET] <Bobobo> in my case, this is for attract mode (a front-end for gaming emulators), which uses mmal for video decoding
[21:45:18 CET] <Bobobo> but attract mode doesn't recognize the mmal hwaccel from ffmpeg
[21:45:30 CET] <JEEB> well then it wasn't built for it?
[21:45:48 CET] <JEEB> and if you're using a custom prefix I hope you're using PKG_CONFIG_PATH or PKG_CONFIG_LIBDIR (latter overriding the search path)
[21:45:54 CET] <Bobobo> it should, I had it working in earlier versions
[21:46:00 CET] <furq> Bobobo: does -c:v h264_mmal -i foo.mp4 work
[21:46:05 CET] <Bobobo> yep, pkg-config is correctly showing ffmpeg, also ldconfig -p
[21:46:10 CET] <JEEB> ok
[21:46:37 CET] <JEEB> I would recommend checking if mpv's configure finds FFmpeg's MMAL hwaccel
[21:46:45 CET] <JEEB> or whatever the rpi thing was
[21:46:59 CET] <kepstin> mmal isn't a hwaccell, i think? it's just a decoder
[21:48:46 CET] <Bobobo> furq: yes, that command is working
[21:49:06 CET] <Bobobo> JEEB: I'll try to ./configure mpv
[21:49:11 CET] <furq> uh
[21:49:17 CET] <furq> if that command works then mmal is working
[21:49:53 CET] <Bobobo> kepstin: mmal is a chip the raspberry has for hwaccel and ffmpeg uses to decode videos among other things (mainly video/cam recording)
[21:50:02 CET] <furq> right
[21:50:08 CET] <furq> he's saying it doesn't show up as an hwaccel
[21:50:16 CET] <furq> it just adds accelerated decoders
[21:50:46 CET] <JEEB> Bobobo: with mpv you first get waf by running the bootstrap script and then while setting your pkg-config related variables you do ./waf configure
[21:51:41 CET] <Bobobo> furq: that makes sense, indeed
[21:54:12 CET] <JEEB> and yes, it could quite well be that MMAL things are not hwaccels but just "normal" decoders utilizing the hw decoding APIs in the background
[21:54:15 CET] <JEEB> just like mediacodec
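[Editor's note: because h264_mmal is a decoder rather than a -hwaccel entry, it never appears in `ffmpeg -hwaccels`; it is selected with `-c:v` before the input, as in furq's quick decode-only test (filename illustrative):]

```shell
ffmpeg -c:v h264_mmal -i foo.mp4 -f null -
```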
[22:08:00 CET] <oborot> For some reason when I run the below command to add an audio stream to my video the video becomes 0 seconds long:
[22:08:03 CET] <oborot> ffmpeg -y -i myvideo.mp4 -i music.aac -map 0:0 -map 1:0 -c:v copy -c:a aac -b:a 256k -shortest myvideo.mp4
[22:08:26 CET] <oborot> Tested using the most up to date ffmpeg built from git src
[22:08:27 CET] <Bobobo> thank you guys, I'll do some troubleshooting, as it actually seems mmal is working but not in my other software (attract mode), not ffmpeg fault it seems
[22:09:05 CET] <furq> oborot: pastebin the full output somewhere
[22:09:17 CET] <furq> you shouldn't need -shortest though
[22:10:30 CET] <oborot> furq: Sure, here is the output:
[22:10:31 CET] <oborot> https://pastebin.com/3ji94aub
[22:11:07 CET] <oborot> I also posted this question in the mailing list a couple of days ago, but I thought to ask here too since I haven't made any progress yet :(
[22:11:18 CET] <oborot> I'm testing now without the shortest option
[22:12:45 CET] <furq> does ffprobe say the output is 0 seconds long
[22:12:58 CET] <furq> if not then i'm guessing your player can't handle yuv444p
[22:13:08 CET] <furq> and it's just very bad at giving errors
[22:13:49 CET] <alexpigment> i wonder why it's set to high 444
[22:13:58 CET] <alexpigment> maybe it was encoded with crf 1 ?
[22:14:15 CET] <alexpigment> seems kind of odd that it would get that profile
[22:14:21 CET] <furq> oh wait
[22:14:30 CET] <furq> one frame huh
[22:14:47 CET] <oborot> ffprobe says 00:00:00.05,
[22:14:50 CET] <oborot> for duration
[22:15:00 CET] <furq> does the input video have one frame
[22:15:15 CET] <oborot> no
[22:15:21 CET] <oborot> It's a 48 second long input video
[22:15:27 CET] <furq> it could still have one frame
[22:15:30 CET] <oborot> oh
[22:15:35 CET] <oborot> Uh let me check
[22:15:44 CET] <furq> ffprobe -count_frames -show_streams
[22:16:06 CET] <furq> it'll show up as nb_read_frames iirc
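[Editor's note: furq's suggestion as a full command, selecting only the video stream and printing the frame count alongside the container duration (input name illustrative):]

```shell
ffprobe -v error -count_frames -select_streams v:0 \
  -show_entries stream=nb_read_frames,duration input.mp4
```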
[22:16:47 CET] <furq> i would guess it's a single frame with a duration of 48 seconds and then ffmpeg is remuxing it and assuming it's 25fps for some reason
[22:16:56 CET] <furq> idk why it would want to do that
[22:17:14 CET] <oborot> nb_read_frames=1150
[22:17:17 CET] <furq> oh
[22:17:19 CET] <furq> well then that's weird
[22:17:20 CET] <oborot> On the input file
[22:17:52 CET] <oborot> Well hang on, I re-downloaded this file from vimeo after uploading it. Maybe they did some conversion on it.
[22:17:58 CET] <oborot> I'll try and get the actual file.
[22:18:41 CET] <alexpigment> they definitely did some conversion on it :)
[22:18:57 CET] <alexpigment> i don't think vimeo has a way to format a video so that it doesn't get re-encoded
[22:24:23 CET] <alexpigment> i wonder if the problem here is that the time scale is just wrong
[22:25:09 CET] <alexpigment> maybe it's possible to add -video_track_timescale 25 (or perhaps set the framerate to 25 before the input)
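[Editor's note: alexpigment's suggestion spelled out as a full command. The output name is deliberately different here, since the original command at 22:08 wrote to the same filename as the input, which is itself a possible source of the truncated result:]

```shell
ffmpeg -y -i myvideo.mp4 -i music.aac -map 0:0 -map 1:0 \
  -c:v copy -c:a aac -b:a 256k -video_track_timescale 25 out.mp4
```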
[22:37:22 CET] <oborot> furq: alexpigment: Oh, so I finally got back the "real" before and after videos prior to any additional vimeo conversions.
[22:37:44 CET] <oborot> After removing the shortest option the video is the proper duration and the audio plays
[22:37:54 CET] <oborot> However it's just a still image
[22:37:57 CET] <oborot> no video
[22:39:16 CET] <oborot> And here's what I get from the output of ffprobe with -count_frames and -show_streams:
[22:39:19 CET] <oborot> nb_read_frames=1
[22:39:21 CET] <oborot> nb_read_frames=2201
[22:40:17 CET] <oborot> For the 2 streams
[22:40:42 CET] <oborot> Not sure why it's changing to 1 frame :/
[22:45:09 CET] <oborot> I tried this in a docker container simulating my production environment and it works which is very strange...
[22:45:13 CET] <oborot> It's using the same static binary
[22:45:35 CET] <d4re> i use the docker image
[22:45:59 CET] <d4re> saves time compiling the whole thing
[22:46:10 CET] <oborot> Sadly I can't in production
[22:46:21 CET] <oborot> Limited to a 500MB disk
[22:46:34 CET] <d4re> lol
[22:46:49 CET] <d4re> what year are you from?
[22:47:00 CET] <oborot> It's actually using modern technology
[22:47:02 CET] <oborot> AWS lambda
[22:47:12 CET] <oborot> They limit you to 500MB of disk space
[22:47:35 CET] <oborot> Which is fine for me, since I'm only producing short < 1 minute videos.
[22:47:42 CET] <d4re> i see
[22:47:49 CET] <oborot> However I'm running into this weird issue.
[22:57:36 CET] <oborot> Here's my pastebin output if anybody can make any sense of it: https://pastebin.com/tvhxZc0n
[23:14:23 CET] <alexpigment> orobot: as i mentioned earlier, you might try to add -video_track_timescale 25
[23:14:44 CET] <alexpigment> not sure if it'll work - just a theory. i don't have any samples that have an incorrect duration to test with
[23:25:55 CET] <oborot> I'll try that
[23:26:23 CET] <alexpigment> also, just to make sure, your file size is greater than 1.9 megabytes, right?
[23:26:33 CET] <oborot> The input or the output?
[23:26:37 CET] <alexpigment> the input
[23:26:59 CET] <oborot> It's 18mb
[23:27:08 CET] <alexpigment> k, just making sure
[23:32:01 CET] <oborot> Uploading....
[23:32:24 CET] <oborot> Also, does ffmpeg manage audio directly for videos? Or does it use a third-party library for that?
[23:37:17 CET] <oborot> alexpigment: No dice on -video_track_timescale 25
[23:37:27 CET] <oborot> Same result I'm afraid
[23:38:19 CET] <alexpigment> oh well, i had a slight bit of hope that would work. no clue why ffmpeg/ffprobe are reading the file wrong
[23:38:39 CET] <alexpigment> my gut says that it's reading the thumbnail for the video rather than the actual video, but that would generally show up as a separate stream
[23:40:09 CET] <oborot> That's the only frame I can actually see is the thumbnail
[23:42:05 CET] <oborot> I did notice this in my log: [aac @ 0x3b27600] Estimating duration from bitrate, this may be inaccurate
[23:42:20 CET] <oborot> Not sure if it's related to my issue or not
[23:43:56 CET] <NVENC_001> Hello
[23:44:10 CET] <NVENC_001> I need help with NVENC
[23:44:30 CET] <NVENC_001> ffmpeg stops working after 4-5 min
[23:44:55 CET] <NVENC_001> CPU core at 100%
[23:45:26 CET] <NVENC_001> ?
[00:00:00 CET] --- Fri Jan 12 2018

