[Ffmpeg-devel-irc] ffmpeg.log.20190605

burek burek021 at gmail.com
Thu Jun 6 03:05:02 EEST 2019


[00:05:23 CEST] <electrotoscope> Is there a way to use drawtext=textfile to write one line of the txt file for every frame of the video? Like if my text file was "hello[linereturn]world" it would write "hello" to the first frame and "world" to the second frame
[00:10:52 CEST] <Zcom> try to add a subtitle file
[00:12:07 CEST] <BtbN> one line of text per frame is either a very low fps video or very fast text
[00:16:41 CEST] <electrotoscope> it's from a text document I have with the date/time that the frame was recorded, but it's not written into the source file. It is indeed fast text!!
[00:17:06 CEST] <electrotoscope> I was thinking about a .srt but it's weird VFR source
[00:17:48 CEST] <electrotoscope> I think I've found a way where I put in a ludicrous amount of line returns and set y=(offsetpx-(n*linereturnspx))
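The y-offset trick described above can be sketched as below: the text file holds one line per frame, and drawtext's frame-number variable `n` shifts the whole block up by one line height each frame, so a different line lands at the target position (the 24 px line height, file names, and test input are assumptions for illustration; in practice a large offset or a crop keeps the neighbouring lines off-screen, and the ffmpeg build needs drawtext with font support).

```shell
# times.txt: one line of text per frame of the video
printf 'hello\nworld\n' > times.txt
# a two-frame test clip; drawtext renders the whole file every frame,
# shifted up by one 24 px line per frame number n
ffmpeg -y -f lavfi -i "color=c=black:s=320x240:r=2:d=1" \
       -vf "drawtext=textfile=times.txt:fontcolor=white:fontsize=24:x=10:y=40-n*24" \
       out.mp4
```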
[00:49:27 CEST] <ctcx> Trying to watch DVR rtsp stream with ffplay. I use "ffplay rtsp://user:pass@full_url",
[00:49:33 CEST] <ctcx> and get "method DESCRIBE failed: 404 stream not found"
[00:50:01 CEST] <ctcx> So the server (the DVR) is indeed found, but stream is not. Any ideas?
[02:52:46 CEST] <friendofafriend> Have any of you nice folks made a normal video into a "side-by-side" video for Google Cardboard VR (et al)?
[02:58:28 CEST] <mozzarella> why?
[03:00:44 CEST] <friendofafriend> mozzarella: The geometry seems kind of confusing.  I'm guessing you'd have to crop the sides of a 16:9 to make two side-by-side 8:9 images, or scale it.
[03:01:10 CEST] <friendofafriend> (And I didn't see any examples around that I could use as a starting point.)
[03:10:45 CEST] <friendofafriend> Ah, this seems to be doing sort of what I'm after.
[03:10:51 CEST] <friendofafriend> ffmpeg -i "left.mkv" -i "right.mkv" -filter_complex "pad=in_w*2:in_h, overlay=main_w/2:0, scale=in_w/2:in_h, scale=-1:1080" -b:v 10000k -vcodec libx264 -an "output.mkv"
[03:24:30 CEST] <friendofafriend> And this works much better:  ffmpeg -i "input.mkv" -i "input.mkv" -filter_complex "pad=in_w*2:in_h, overlay=main_w/2:0, scale=in_w/2:in_h, scale=-1:in_h*2" -b:v 10000k -vcodec libx264 -c:a copy "output.mkv"
[03:24:34 CEST] <friendofafriend> Thanks for the help, all.
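For reference, the same full-width side-by-side layout can also be built with hstack instead of pad+overlay, which avoids chaining scales; this is a sketch against a generated test clip (file names are placeholders).

```shell
# a short 720p test clip standing in for the real input
ffmpeg -y -f lavfi -i "testsrc=s=1280x720:d=1" in.mp4
# squeeze to half width once, duplicate the stream, stack the two copies
# side by side: each eye sees the same horizontally squeezed image
ffmpeg -y -i in.mp4 \
       -filter_complex "[0:v]scale=iw/2:ih,split[l][r];[l][r]hstack" \
       -c:v libx264 -an sbs.mp4
```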
[07:31:06 CEST] <cards> Could someone walk me through the process of demuxing and remuxing audio from one video onto another?
[07:31:11 CEST] <cards> example:  this video https://www.facebook.com/watch/?v=448098122617310  to this audio  https://www.youtube.com/watch?v=DnsDbOi6sTY
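The usual remux pattern (once both files have been downloaded locally by whatever means) is to map the video stream from one input and the audio stream from the other and stream-copy both; the sketch below generates two stand-in files, since the linked URLs aren't local files.

```shell
# stand-ins for the downloaded video and audio
ffmpeg -y -f lavfi -i "testsrc=s=320x240:d=1" -c:v libx264 video.mp4
ffmpeg -y -f lavfi -i "sine=frequency=440:duration=1" -c:a aac audio.m4a
# take the video from input 0 and the audio from input 1, no re-encoding;
# -shortest trims the output to the shorter of the two streams
ffmpeg -y -i video.mp4 -i audio.m4a -map 0:v:0 -map 1:a:0 -c copy -shortest output.mp4
```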
[09:33:01 CEST] <dongs> [matroska,webm @ 000001cf18d5ad40] Element at 0x1aaaa90d ending at 0x1aaaa96f exceeds containing master element ending at 0x1aaaa908
[09:33:04 CEST] <dongs> ???
[09:33:42 CEST] <dongs> this is on like couple days old ffmpeg build
[09:35:09 CEST] <pink_mist> sounds like someone made a shitty webm
[09:36:35 CEST] <dongs> its a mkv that came from eac3to from a disk that I own
[09:36:40 CEST] <dongs> not even webm
[10:13:35 CEST] <dongs> [aac_latm @ 0000027aa6c39900] Multiple programs is not implemented. Update your FFmpeg version to the newest one from Git. If the problem still occurs, it means that your file has a feature which has not been implemented.
[10:36:41 CEST] <ntt> Hello, I'm trying to stream a webcam video to an nginx rtmp server. It works, but I have a huge delay (about 30 sec). How can I solve the issue? Actually I'm using this: ffmpeg -i /dev/video0 -framerate 1 -video_size 720x404 -vcodec libx264 -maxrate 768k -bufsize 8080k -vf "format=yuv420p" -g 60 -f flv rtmp://X.X.X.X:1936/live/camerapc
[10:53:37 CEST] <dongs> try using windows
[10:54:40 CEST] <ntt> dongs: I don't think this is the solution
[10:55:25 CEST] <dongs> i mean
[10:55:26 CEST] <dongs> at 1fps
[10:55:29 CEST] <dongs> and bufsize of 8megs
[10:55:33 CEST] <dongs> and 720x400
[10:55:41 CEST] <dongs> why are you complaining about 30sec delay?
[10:55:45 CEST] <dongs> that sounds quite reasonable
[10:56:02 CEST] <dongs> 30 sec of 1fps trash in h264 probably is around 8megs.
[11:08:44 CEST] <ntt> I can modify all the values. The goal is to reduce the delay, even if I have to decrease quality
[11:09:10 CEST] <ntt> can you give me some ideas? The real problem is the delay
[11:09:17 CEST] <furq> ntt: how are you watching this video
[11:09:25 CEST] <dongs> i just explained the reason for the delay
[11:10:00 CEST] <ntt> furq: I'm using vlc
[11:10:28 CEST] <ntt> dongs: so, can I use a different framerate and reduce the buffer size?
[11:12:20 CEST] <ntt> I'm using vlc with rtmp://X.X.X.X:1936/live/camerapc  .... and I'm using nginx-rtmp
[11:12:46 CEST] <ntt> dongs: I'm trying with ffmpeg -i /dev/video0 -framerate 15 -video_size 720x404 -vcodec libx264 -preset veryfast -maxrate 768k -bufsize 100k -vf "format=yuv420p" -g 30 -f flv
[11:12:51 CEST] <ntt> but nothing changes
[11:16:31 CEST] <furq> ntt: -framerate and -video_size are input options, they're not doing anything in that command
[11:16:36 CEST] <furq> move them before -i
[11:19:11 CEST] <ntt> I have about 8/9 seconds of delay. Is this normal? Can I obtain something better?
[11:20:45 CEST] <ntt> Actually I'm using this: ffmpeg -framerate 25 -video_size 640x480 -i /dev/video0  -vcodec libx264 -preset veryfast -maxrate 768k -bufsize 100k -vf "format=yuv420p" -g 30 -f flv
[11:22:51 CEST] <ntt> Could I use something different than h264?
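The codec isn't the main issue; most of the delay comes from buffering in the encoder, the muxer, and the player (VLC adds its own network cache on top, configurable in its settings). A sketch of the usual latency-reduction flags, reusing the device and URL from the commands above:

```shell
# -tune zerolatency disables lookahead and B-frames; a short GOP (-g) lets
# players join a keyframe sooner; keeping -bufsize close to -maxrate bounds
# how far ahead the rate control can buffer
ffmpeg -f v4l2 -framerate 25 -video_size 640x480 -i /dev/video0 \
       -c:v libx264 -preset veryfast -tune zerolatency \
       -maxrate 768k -bufsize 768k -g 25 -pix_fmt yuv420p \
       -f flv rtmp://X.X.X.X:1936/live/camerapc
```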
[12:28:10 CEST] <Anill> Hi
[12:28:26 CEST] <rnderr> Hu
[12:28:44 CEST] <rnderr> \O/
[12:28:46 CEST] <Anill> is there any way to estimate how much time ffmpeg will take to transcode the files in a folder, without actually transcoding them?
[12:29:01 CEST] <Yuyu0> no
[12:29:43 CEST] <Anill> I just started with ffmpeg. Can you explain what this command will do: ffmpeg -i input.avi -vf scale=320:240 output.mp4
[12:29:52 CEST] <Anill> what's the -vf option
[12:29:59 CEST] <Anill> and what does scale=320:240 mean
[12:38:53 CEST] <DHE> convert the AVI to MP4 with transcoding, and rescaling the input image to 320x240
[12:41:03 CEST] <Anill> is it possible to pass a directory to ffmpeg and have it transcode all the files in the directory?
[12:43:19 CEST] <DHE> no, have your shell do that
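A minimal shell loop along the lines of DHE's answer, reusing the scale example from above (the .avi extension and the generated sample input are assumptions for illustration):

```shell
# a sample input so the loop has something to chew on
ffmpeg -y -f lavfi -i "testsrc=s=640x480:d=1" sample.avi
# transcode every .avi in the current directory to a 320x240 mp4;
# "${f%.avi}" strips the extension so output names mirror input names
for f in *.avi; do
    ffmpeg -y -i "$f" -vf scale=320:240 "${f%.avi}.mp4"
done
```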
[12:44:11 CEST] <Anill> DHE: One more question, can we transcode a playlist of video files with ffmpeg
[13:17:10 CEST] <Anill> Can we transcode a playlist of video files with ffmpeg.
[13:17:32 CEST] <TheDude9000> hi, does anyone know if there's a straightforward way to get the desired speaker location for a given audio stream channel?
[13:18:01 CEST] <TheDude9000> also does FFMPEG automatically reorder the channel layout?
[13:51:28 CEST] <Anill> Has anyone used this https://pypi.org/project/ffmpeg-progress/ on windows?
[13:52:07 CEST] <grkblood13> not an ffmpeg-specific question but I wasn't sure where else to ask this. When using the HTML5 fetch API to download an MPEG-TS stream, is there a way to separate the download into proper MPEG-TS packets?
[13:53:11 CEST] <grkblood13> fill a buffer up until it has a complete packet, hand it off to the next process, empty the buffer, rinse and repeat
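Not an ffmpeg answer, but the container makes this easy: MPEG-TS packets are a fixed 188 bytes, each starting with the sync byte 0x47, so the fetch reader only has to accumulate bytes and hand off every complete 188-byte slice (chunk boundaries from fetch won't align with packet boundaries, hence the buffer). The idea, sketched with shell tools on a generated capture:

```shell
# generate a small TS capture, then cut it on 188-byte boundaries
ffmpeg -y -f lavfi -i "testsrc=s=320x240:d=1" -c:v libx264 -f mpegts cap.ts
split -b 188 cap.ts pkt_
# every piece is one TS packet; its first byte is the 0x47 sync byte
head -c1 pkt_aa | od -An -tx1
```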
[14:22:38 CEST] <deterenkelt> ffmpeg -v error -dump_attachment:t "" -i input.mkv   prints a confusing error: "At least one output file must be specified." The attachments are all extracted. How do I avoid this message?
[14:24:02 CEST] <deterenkelt> If you specify -f null - at the end, ffmpeg tries to transcode the video instead of just quickly extracting the fonts.
[14:25:45 CEST] <deterenkelt> FFmpeg is compiled from git, here's the -version output: https://pastebin.com/raw/4wpYfsuN
[14:47:47 CEST] <th3_v0ice> Why do the API usage examples show that a packet's DTS and PTS should be converted from the stream's time base to the codec's time base? I can't find the place where ffmpeg.c does this particular step; it uses the stream time base only.
[14:48:31 CEST] <th3_v0ice> I am considering a case where decoding is needed.
[16:42:49 CEST] <atbd> hi, i'm trying to use complex filters in C to rescale (1024x576 -> 720x576), change format (yuv420p -> gray8) and deinterlace (yadif), but i have issues with the conversion: once i add it, the output frames are empty. Am i missing something?
[16:43:00 CEST] <atbd> Using ffmpeg api 4.1.3
[16:44:17 CEST] <atbd> My filter description: yadif,scale=720x576,format=pix_fmts=gray8
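Two things worth cross-checking against the CLI before debugging the C code: the scale filter separates width and height with a colon, and `ffmpeg -pix_fmts` lists the 8-bit grayscale format as `gray`, so `gray8` may not be what the format filter expects. A sketch of the same chain run from the command line on a generated input (PGM output is used because that image format is natively 8-bit gray):

```shell
# a 1024x576 test input like the one described
ffmpeg -y -f lavfi -i "testsrc=s=1024x576:d=1" -c:v libx264 in.mp4
# same chain on the CLI: deinterlace, rescale, convert to grayscale;
# one frame out as PGM for easy inspection
ffmpeg -y -i in.mp4 -vf "yadif,scale=720:576,format=gray" -frames:v 1 out.pgm
```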
[17:04:47 CEST] <steve___> I'm trying to concat three videos without transcoding them.  The second video is two seconds of black which I generated from two black PNGs.  Here is the ffprobe stream output for all three videos:
[17:04:49 CEST] <steve___> Stream #0:0(eng): Video: h264 (Baseline) (avc1 / 0x31637661), yuvj420p(pc, bt470bg/bt470bg/smpte170m), 1920x1080, 16998 kb/s, SAR 1:1 DAR 16:9, 29.93 fps, 29.92 tbr, 90k tbn, 180k tbc (default)
[17:04:51 CEST] <steve___> Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuvj420p(pc), 1920x1080, 21 kb/s, 29.93 fps, 29.93 tbr, 11972 tbn, 59.86 tbc (default)
[17:04:53 CEST] <steve___> Stream #0:0(eng): Video: h264 (Baseline) (avc1 / 0x31637661), yuvj420p(pc, bt470bg/bt470bg/smpte170m), 1920x1080, 16998 kb/s, SAR 1:1 DAR 16:9, 29.93 fps, 29.92 tbr, 90k tbn, 180k tbc (default)
[17:06:15 CEST] <DHE> I would suggest redoing that blank video with -profile:v baseline then
[17:08:30 CEST] <steve___> It didn't work.  This is the command I used to generate the blank video -- ffmpeg -y -loglevel -8 -framerate 1 -i img/black-%02d.png -c:v libx264 -r 29.93 -pix_fmt yuvj420p -profile:v baseline 03.mp4
[17:08:46 CEST] <steve___> the stream output -- Stream #0:0(und): Video: h264 (Constrained Baseline) (avc1 / 0x31637661), yuvj420p(pc), 1920x1080, 29 kb/s, 29.93 fps, 29.93 tbr, 11972 tbn, 59.86 tbc (default)
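For `-c copy` concatenation, all three clips need matching codec parameters (profile, pixel format, resolution, frame rate, and ideally timebase); a profile flag alone doesn't make the middle clip match, as the differing tbn/tbc values in the ffprobe output above suggest. A self-contained sketch of the pattern with generated clips (the real filenames and parameters differ):

```shell
# two stand-in source clips and a matching 2-second black clip: same codec,
# profile, pixel format, size, and frame rate throughout
ffmpeg -y -f lavfi -i "testsrc=s=1920x1080:r=30000/1001:d=1" \
       -c:v libx264 -preset veryfast -profile:v baseline -pix_fmt yuv420p 01.mp4
ffmpeg -y -f lavfi -i "color=c=black:s=1920x1080:r=30000/1001:d=2" \
       -c:v libx264 -preset veryfast -profile:v baseline -pix_fmt yuv420p 02.mp4
ffmpeg -y -f lavfi -i "testsrc=s=1920x1080:r=30000/1001:d=1" \
       -c:v libx264 -preset veryfast -profile:v baseline -pix_fmt yuv420p 03.mp4
printf "file '01.mp4'\nfile '02.mp4'\nfile '03.mp4'\n" > list.txt
# concat demuxer with stream copy: no transcoding at the joins
ffmpeg -y -f concat -i list.txt -c copy joined.mp4
```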
[21:03:33 CEST] <CarlFK> can someone show me a current "save mixed to disk" command?
[21:04:07 CEST] <CarlFK> I just discovered the one I use has -ilme "Force interlacing support..."
[21:04:20 CEST] <furq> mixed what
[21:09:38 CEST] <CarlFK> um, mixed streams?
[21:09:51 CEST] <CarlFK> https://github.com/CarlFK/voctomix-outcasts/blob/master/record-timestamp.sh#L36
[21:11:16 CEST] <CarlFK> oops, wrong chan - I thought this was #voctomix
[21:11:45 CEST] <CarlFK> context: https://github.com/voc/voctomix/blob/master/example-scripts/ffmpeg/record-mixed-ffmpeg.sh
[21:25:08 CEST] <another> CarlFK: still not sure what you want to do
[21:25:29 CEST] <CarlFK> another: not force interlacing ;)
[21:25:48 CEST] <another> is your content interlaced?
[21:27:32 CEST] <CarlFK> no - but don't worry about it, I mostly got what I was looking for:
[21:27:41 CEST] <CarlFK> https://github.com/voc/cm/blob/master/ansible/roles/encoder/templates/voctomix-scripts/recording-sink.sh.j2
[21:28:59 CEST] <another> right. the currently maintained scripts are in ansible
[21:29:35 CEST] <CarlFK> another: you know/do vocto?
[21:30:02 CEST] <another> i looked at some of the scripts
[21:30:52 CEST] <another> and wrote a mini patch once
[21:33:14 CEST] <CarlFK> neat
[00:00:23 CEST] --- Thu Jun  6 2019


More information about the Ffmpeg-devel-irc mailing list