[Ffmpeg-devel-irc] ffmpeg.log.20170706
burek
burek021 at gmail.com
Fri Jul 7 03:05:01 EEST 2017
[00:05:49 CEST] <nemosphere> hi there. I've never managed to install FFmpeg the right way for audacity to find the lib it requires. I decided to learn enough to be able to do it myself. Why didn't my ffmpeg, freshly compiled from source, provide the libavformat.so.55 that audacity requires?
[00:06:12 CEST] <c_14> --enable-shared probably
[00:06:23 CEST] <nemosphere> default yes, no ?
[00:06:32 CEST] <c_14> It's not the default, you need to add it
[00:06:52 CEST] <nemosphere> mmh, i go compile it again
[00:08:37 CEST] <nemosphere> --enable-shared build shared libraries [no]
[00:08:42 CEST] <nemosphere> you're right
[00:08:47 CEST] <c_14> I'm always right
[00:08:57 CEST] <furq> stop stealing my gimmick
[00:09:19 CEST] <BtbN> you don't want to install that globally into your system
[00:09:23 CEST] <BtbN> it's most likely going to collide
[00:09:29 CEST] <nemosphere> do i have another trap to manage ?
[00:09:40 CEST] <nemosphere> for using audacity with ffmpeg inside
[00:09:45 CEST] <BtbN> Install it somewhere in your homedir, and point audacity there
[00:09:53 CEST] <BtbN> static libs should work just fine for that
[00:10:10 CEST] <nicolas17> yes, pass a --prefix=$HOME/myownffmpeg to configure too
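Putting c_14's, BtbN's and nicolas17's advice together, the build boils down to something like this (the prefix path is only an example):

    ./configure --prefix=$HOME/myownffmpeg --enable-shared
    make
    make install

Audacity can then be pointed at the libraries under $HOME/myownffmpeg/lib.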
[00:10:43 CEST] <nemosphere> i don't understand collide
[00:10:55 CEST] <nemosphere> i know collapse
[00:11:10 CEST] <nemosphere> i'm not an english speaker
[00:11:11 CEST] <c_14> If your system installs a different version of ffmpeg they'll clash
[00:11:18 CEST] <nemosphere> oh
[00:11:18 CEST] <c_14> things will (most probably) break
[00:11:23 CEST] <nemosphere> thx
[00:11:31 CEST] <nicolas17> are you compiling audacity from source?
[00:11:36 CEST] <nemosphere> yes
[00:11:58 CEST] <nicolas17> do you need to install ffmpeg from source? why not install the -dev packages from your distro?
[00:12:26 CEST] <nemosphere> i don't think it's an available package, i'm using slack
[00:12:38 CEST] <nicolas17> slackware?
[00:12:42 CEST] <nemosphere> yep
[00:13:02 CEST] <nicolas17> yeah probably need to compile your own then
[00:13:18 CEST] <nemosphere> in fact i remember i tried, but audacity didn't see what it was looking for
[00:13:42 CEST] <nemosphere> that's why i'm trying from sources, i want to win against the machine :)
[00:14:35 CEST] <nemosphere> i have some scripts i called avi2wav, or things like that, but now i would like to save time, and gain comfort
[00:15:27 CEST] <nemosphere> thx for your quick answers, i'll read them again to be sure i understand, then try
[01:23:52 CEST] <nicolas17> I managed to produce a video!
[01:23:57 CEST] <nicolas17> but it doesn't play properly
[01:24:11 CEST] <nicolas17> and it reports pretty bad metadata
[01:30:31 CEST] <nicolas17> Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 2592x1944, 7070412 kb/s, 15515.15 fps, 15360 tbr, 15360 tbn, 60 tbc (default)
[02:14:17 CEST] <hexptr> hi, I'm trying to call swr_convert_frame with a NULL input to flush the buffered data out at the end (as the docs tell me to)
[02:14:58 CEST] <hexptr> but this seems to lead to a sigsegv on line 143 of swresample_frame.c where "in" is dereferenced despite being NULL
[02:15:09 CEST] <hexptr> what am I doing wrong?
[02:15:23 CEST] <hexptr> this is ffmpeg 3.3.2 btw
[02:19:53 CEST] <atomnuker> no, its not wrong, its just the way it is
[02:20:29 CEST] <atomnuker> just put your data plane pointers in an array and give swr the pointer of the array
[02:20:39 CEST] <atomnuker> and specify 0 as the number of input samples available
[02:20:44 CEST] <atomnuker> it'll then flush itself out
[02:20:55 CEST] <atomnuker> do remember that the return value is what's important
[02:21:18 CEST] <atomnuker> the function which tells you how many samples swr can output gives you a ceiling, not the actual amount that will be output
[02:21:26 CEST] <hexptr> all this time I've been passing in empty AVFrames, assuming that swr would allocate the buffers for me
[02:21:39 CEST] <hexptr> I take it I'm not meant to be doing that then
[02:22:21 CEST] <atomnuker> of course not, you have to allocate your avframe buffers prior to calling swr
[02:22:51 CEST] <hexptr> aha.
[02:22:53 CEST] <atomnuker> populate format, layout, sample rate and the number of samples and then call av_frame_get_buffer(frame, 0)
[02:23:20 CEST] <hexptr> I was lured into thinking that that was valid because it seemed to work all the way until I needed to flush
[02:23:20 CEST] <atomnuker> after that you can call swr_convert like this:
[02:23:44 CEST] <atomnuker> frame->nb_samples = swr_convert(swr, frame->data, frame->nb_samples, src, input_samples)
[02:24:16 CEST] <hexptr> I see what you mean, I'll end up doing that, thanks
[02:24:17 CEST] <atomnuker> with src being defined as "const uint8_t *src[] = { (const uint8_t *)src_samples };"
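For readers following along, a minimal sketch of the flush pattern atomnuker describes, assuming an already-initialized SwrContext swr and S16 stereo output at 44100 Hz (all parameters illustrative, error checks omitted):

    AVFrame *out = av_frame_alloc();
    out->format         = AV_SAMPLE_FMT_S16;
    out->channel_layout = AV_CH_LAYOUT_STEREO;
    out->sample_rate    = 44100;
    /* ceiling on the remaining output, per atomnuker's note above */
    out->nb_samples     = swr_get_out_samples(swr, 0);
    av_frame_get_buffer(out, 0);
    /* NULL input with 0 input samples drains swr's internal buffer;
     * the return value is the number of samples actually written */
    out->nb_samples = swr_convert(swr, out->data, out->nb_samples, NULL, 0);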
[02:24:26 CEST] <hexptr> but looking at the docs for swr_convert_frame
[02:24:42 CEST] <hexptr> pretty sure it suggests that the output buffers can be unallocated:
[02:24:46 CEST] <hexptr> "If the output AVFrame does not have the data pointers allocated the nb_samples field will be set using av_frame_get_buffer() is called to allocate the frame."
[02:25:39 CEST] <hexptr> I mean the grammar is a bit odd, but I took from that that swr would call av_frame_get_buffer for me
[02:25:43 CEST] <atomnuker> swr_convert_frame?
[02:25:49 CEST] <hexptr> yes
[02:26:20 CEST] <atomnuker> huh, didn't realize there was such a function
[02:26:25 CEST] <atomnuker> nevermind then, I guess it works
[02:27:42 CEST] <hexptr> well, I wish it worked as claimed...
[02:28:07 CEST] <hexptr> I'll dig into the code and see if I can propose a patch
[02:28:13 CEST] <hexptr> either to docs or to the code
[02:28:34 CEST] <hexptr> thanks for clarifying anyway :)
[02:29:32 CEST] <atomnuker> nevermind what I said, I was assuming you used swr_convert(), not swr_convert_frame()
[02:30:01 CEST] <atomnuker> it should always allocate data planes if there's enough data to output
[02:31:14 CEST] <hexptr> hm, reading the code I'm pretty sure it's just missing a NULL check in the calculation of number of samples to allocate
[02:32:19 CEST] <hexptr> getting late, I'll deal with this later
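Pre-allocating the output frame as in the sketch above would also sidestep the crash hexptr describes: with the data pointers already set, swr_convert_frame() never reaches the size calculation that dereferences the NULL input. A hedged workaround until the missing NULL check is patched:

    /* allocate out's buffers as in the previous sketch, then: */
    int ret = swr_convert_frame(swr, out, NULL);   /* NULL input = flush */
    if (ret < 0) { /* handle error */ }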
[02:47:08 CEST] <nicolas17> still clueless here
[02:47:34 CEST] <nicolas17> can't figure out what other parameter I'm missing
[02:50:54 CEST] <nicolas17> is it okay for my output AVFormatContext, AVCodecContext, and AVStream's AVCodecParameters to have bit_rate=0?
[02:55:19 CEST] <nicolas17> I'm using crf so I guessed I didn't need to set the bitrate
[03:51:36 CEST] <nicolas17> when I encode with ffmpeg, I get a debug line with all the libx264 options, how do I print that from my own program?
[03:51:50 CEST] <nicolas17> [libx264 @ 0x55fd6e35d220] 264 - core 148 r2795 aaa9aa8 - H.264/MPEG-4 AVC codec - Copyleft 2003-2017 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 etc=etc
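That banner is routed through av_log() when avcodec_open2() opens the libx264 encoder, so in your own program it should reach stderr as long as the default log callback is in place and the level is at least AV_LOG_INFO; a sketch:

    #include <libavutil/log.h>
    av_log_set_level(AV_LOG_VERBOSE);   /* before avcodec_open2() */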
[05:02:46 CEST] <nicolas17> my video ends up with correct timestamps and all
[05:02:52 CEST] <nicolas17> but ffprobe -show_streams says codec_time_base=33/2000
[05:02:57 CEST] <nicolas17> only way I found to change that is
[05:03:08 CEST] <nicolas17> outputVideoStream->codec->time_base = {1,30}
[05:03:17 CEST] <nicolas17> but AVStream.codec is deprecated and I shouldn't be using it
[05:04:00 CEST] <nicolas17> why the heck are there two different AVCodecContexts anyway? the one in AVStream.codec and the one I create and feed with data
[05:11:13 CEST] <Tangome> Hello
[05:31:34 CEST] <thebombzen> why do people do that
[05:31:57 CEST] <nicolas17> this is weird
[05:32:50 CEST] <nicolas17> av_dump_format, which prints the '30 fps, 30 tbr, 15360 tbn, 60 tbc' line, gets the tbc from AVStream.codec
[05:33:08 CEST] <nicolas17> ffprobe gets the number for "codec_time_base=" from AVStream.codec
[05:33:15 CEST] <thebombzen> I was referring to saying "hi" and leaving less than 40 seconds later but why would you set the timebase to be 1/30?
[05:33:20 CEST] <nicolas17> the only way I found to set that timebase is AVStream.codec
[05:33:25 CEST] <nicolas17> yet it's deprecated
[05:33:38 CEST] <thebombzen> 1/30 second time base sounds extremely high
[05:33:39 CEST] <nicolas17> yet it's clearly being saved somewhere in the file
[05:33:51 CEST] <thebombzen> for reference, mpegts uses 1/90000 as its timebase
[05:33:56 CEST] <thebombzen> why would you want a 1/30 timebase?
[05:33:57 CEST] <thebombzen> :O
[05:34:30 CEST] <nicolas17> also I see time_base=1/15360
[05:34:45 CEST] <nicolas17> which I'm using to calculate the pts
[05:34:47 CEST] <nicolas17> so what *is* the codec timebase?
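A hedged sketch of the non-deprecated route: set the timebase on the AVCodecContext you encode with and copy the parameters to the stream, instead of touching AVStream.codec (names illustrative; whether ffprobe's codec_time_base then reflects it depends on what the decoder later derives from the bitstream):

    enc_ctx->time_base = (AVRational){1, 30};  /* one tick per frame at 30 fps */
    enc_ctx->framerate = (AVRational){30, 1};
    /* ... avcodec_open2(enc_ctx, codec, NULL) ... */
    avcodec_parameters_from_context(stream->codecpar, enc_ctx);
    stream->time_base = enc_ctx->time_base;    /* the muxer may rescale this,
                                                  e.g. mp4 often uses 1/15360 */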
[09:29:35 CEST] <nemosphere> Hink, i tried to compile ffmpeg in my home dir with --enable-shared, but something is missing when 'make' compiles libavcodec/libavcodec.so.57
[09:30:05 CEST] <nemosphere> /usr/lib64/gcc/x86_64-slackware-linux/5.3.0/../../../../x86_64-slackware-linux/bin/ld: libavcodec/mqc.o: relocation R_X86_64_32 against `.rodata' can not be used when making a shared object; recompile with -fPIC
[09:30:59 CEST] <nemosphere> wait a min, i'm in doubt
[09:31:08 CEST] <nemosphere> i will try all that again
[09:31:13 CEST] <nemosphere> sorry for the noise
[09:55:15 CEST] <nemosphere> in fact it seems ok after a make clean
[09:58:42 CEST] <nemosphere> not fun! I compiled libavformat.so successfully in my home dir, but then audacity doesn't take it. It says that lib is still absent
[10:09:02 CEST] <chuckleplant> I have a broad question about h264 encoding. How many NAL units are generated per frame? What does that number depend on? Hope it's not too broad, a very broad/generic answer would work for me at this point
[10:28:00 CEST] <squ> chuckleplant: why NAL units are important to you?
[10:28:14 CEST] <squ> I don't know what they are
[10:29:58 CEST] <Nacht> There isn't a fixed number of NAL units per frame. It kinda depends on the video and what it includes
[10:30:17 CEST] <Nacht> But there's always an AUD per frame
[10:31:10 CEST] <Nacht> A good page explaining NAL units is this one: http://yumichan.net/video-processing/video-compression/introduction-to-h264-nal-unit/
[10:31:16 CEST] <Nacht> I found it to be quite helpful
[10:31:31 CEST] <squ> thank you
[11:44:16 CEST] <gallegr> hello! anyone here?
[11:44:41 CEST] <hexptr> if you have a question just ask :)
[11:46:56 CEST] <gallegr> i'm trying to segment an mp4 video into segments (every segment of a different time length).
[11:47:16 CEST] <squ> -t -ss
[11:47:53 CEST] <gallegr> i'm using this code https://pastebin.com/WxPV6M4t, but it generates only segments of 10 seconds. I've tried both on Windows and Ubuntu
[11:48:11 CEST] <furq> you usually can't do that with -c copy
[11:48:17 CEST] <furq> every segment has to start on a keyframe
[11:49:25 CEST] <gallegr> doesn't force_key_frames do that?
[11:49:29 CEST] <furq> no
[11:49:32 CEST] <furq> that does nothing with -c copy
[11:53:33 CEST] <gallegr> i'll do some more research and try, thanks for advice
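Since -c copy can only cut on existing keyframes, the usual approach is to re-encode and force keyframes at the intended cut points so the segment muxer can split exactly there; a hedged sketch (cut times illustrative):

    ffmpeg -i input.mp4 -c:a copy -c:v libx264 \
           -force_key_frames 4,10,30 \
           -f segment -segment_times 4,10,30 out%03d.mp4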
[12:38:06 CEST] <middleman> Hello, i have a problem: a sanyo camera model VCC-HD2300P with ffplay returns "Server returned 400 Bad Request"
[12:38:25 CEST] <middleman> What may be the problem ?
[13:45:00 CEST] <tlhiv_laptop> i'm trying to combine audio streams in an MP4 and a WMA file. The MP4 has a video stream and may or may not have an audio stream. The WMA file only has an audio stream. I want to create an output MP4 that copies the video stream and "merges" the audio streams from the two input files (which may merge only one audio stream or two depending on whether the input MP4 contains an audio stream or not). The following does not seem to work
[13:45:28 CEST] <tlhiv_laptop> ffmpeg -i foo.mp4 -i foo.wma -vcodec copy bar.mp4
[14:42:59 CEST] <kepstin> tlhiv_laptop: you want a single output audio stream that contains a mix of the audio from both files?
[14:43:12 CEST] <kepstin> or do you want the result to have multiple (selectable) audio tracks?
[14:44:56 CEST] <tlhiv_laptop> kepstin: the former
[14:46:25 CEST] <kepstin> tlhiv_laptop: no way to do that with one command that'll work in all cases, you'll have to check whether the mp4 has an audio track first
[14:46:38 CEST] <tlhiv_laptop> :-/
[14:47:25 CEST] <tlhiv_laptop> writing a script in Windows may be challenging ... if it were Linux, then it would be a piece of cake
[14:47:44 CEST] <kepstin> but the command will be something like ffmpeg -i foo.mp4 -i foo.wma -filter_complex '[0:a][1:a]amix=inputs=2[audio]' -map 0:v -map '[audio]' <codec options> bar.mp4
[14:48:57 CEST] <tlhiv_laptop> what would "ffmpeg -i video.mp4 -i audio.wav -filter_complex amix=duration=first -vcodec copy -y foo.mp4" do?
[14:49:25 CEST] <tlhiv_laptop> i cannot test it at the moment
[14:49:43 CEST] <kepstin> tlhiv_laptop: take the "first two" audio tracks and mix them, and if there's fewer than two audio tracks, fail with an error.
[14:50:10 CEST] <kepstin> and I'm uncertain about which streams will be included in the output, would have to double-check that
[14:53:34 CEST] <tlhiv_laptop> what about "ffmpeg -i video.mp4 -i audio.wav -vcodec copy -y foo.mp4" ?
[14:54:26 CEST] <kepstin> tlhiv_laptop: without any map options, ffmpeg will select one video and one audio stream (usually the first of each) and use those stream unmodified in the output
[14:55:31 CEST] <tlhiv_laptop> then it seems like i need to play with windows batch file programming ... it's been a long time since i've done that
[14:56:14 CEST] <kepstin> maybe consider powershell, instead? (I haven't used it, but it's probably better than batch files)
[14:56:28 CEST] <tlhiv_laptop> is that native to windows?
[14:56:59 CEST] <kepstin> yeah
[14:57:04 CEST] <tlhiv_laptop> thanks for the tip
[14:57:18 CEST] <tlhiv_laptop> i may just grab a windows bash :-)
[14:58:56 CEST] <tlhiv_laptop> unxutils :-)
[15:21:43 CEST] <tlhiv_laptop> thank you again kepstin ... i'm going to code it with bash
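A hedged sketch of that bash logic, using ffprobe to detect whether the mp4 already carries audio (filenames and codec choices illustrative):

    if [ -n "$(ffprobe -v error -select_streams a -show_entries stream=index \
               -of csv=p=0 foo.mp4)" ]; then
        ffmpeg -i foo.mp4 -i foo.wma \
               -filter_complex '[0:a][1:a]amix=inputs=2[aout]' \
               -map 0:v -map '[aout]' -c:v copy bar.mp4
    else
        ffmpeg -i foo.mp4 -i foo.wma -map 0:v -map 1:a -c:v copy bar.mp4
    fi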
[16:42:03 CEST] <nicolas17> so!
[16:42:09 CEST] <nicolas17> trying again in another timezone :P
[16:42:26 CEST] <nicolas17> what is the 'tbc' shown in, for example, ffprobe?
[16:44:45 CEST] <nicolas17> it seems that's taken from AVStream.codec.time_base when encoding, and read from the same field by av_dump_format
[16:45:03 CEST] <nicolas17> same for codec_time_base in ffprobe -show_streams
[16:45:11 CEST] <nicolas17> but AVStream.codec is deprecated
[16:46:02 CEST] <nicolas17> and I don't understand what the "codec timebase" *is*, conceptually, since players seem to interpret pts timestamps using the container's timebase
[17:22:54 CEST] <krzee> im running mint 17.3 (based on debian trusty) and mmpeg is not in my apt
[17:23:05 CEST] <krzee> should i install from source or is there a repo i can use?
[17:24:13 CEST] <kepstin> ffmpeg you mean?
[17:24:25 CEST] <krzee> oops, yes
[17:24:33 CEST] <kepstin> and there's no 'debian trusty'; I think it's based off ubuntu trusty (14.04) instead
[17:24:33 CEST] <krzee> i spelt it right in the shell ;]
[17:24:39 CEST] <kepstin> which is... kind of old
[17:24:43 CEST] <pgorley> there's this ppa which is (i find) well-maintained: https://launchpad.net/~jonathonf/+archive/ubuntu/ffmpeg-3
[17:24:54 CEST] <krzee> thank you sir
[17:25:33 CEST] <kepstin> I hope mint switches to an ubuntu xenial or newer base at some point, then you'll have a working ffmpeg in the standard repos.
[17:26:37 CEST] <krzee> that did it, installed now =]
[17:26:59 CEST] <pgorley> kepstin: mint 18 is based on 16.04, no?
[17:27:38 CEST] <kepstin> oh is it? I have no idea what the mint version numbers mean :)
[17:28:07 CEST] <pgorley> just checked, and yes, 18 is based on xenial
[17:29:04 CEST] <kepstin> of course, you only get ffmpeg 2.8 in xenial, so it's still kind of old (would want a newer one for the aac encoder, at least)
[17:36:27 CEST] <krzee> so im trying to turn a directory of screenshots into a small slideshow, i used this command: k@smallaptop:~/mom > ffmpeg -r 1/3 -i pics/*.jpg -vcodec copy out.mp4 but i just ended up overwriting all the images in the directory with the first image, and made a slideshow of the single image that copied to all the others :D
[17:56:24 CEST] <krzee> i see one issue is that my geometry doesnt match, i will use gm to fix that... do i have some problem with my -i pics/*.jpg that is causing the overwrite?
[17:56:51 CEST] <nicolas17> yeah
[17:56:59 CEST] <nicolas17> that will be expanded by your shell into
[17:57:10 CEST] <krzee> -i <all files>
[17:57:15 CEST] <nicolas17> -i pics/pic1.jpg pics/pic2.jpg pics/pic3.jpg
[17:57:17 CEST] <nicolas17> which means
[17:57:27 CEST] <nicolas17> use pic1.jpg as an input, and pic2.jpg and pic3.jpg as outputs
[17:57:36 CEST] <krzee> ahhh
[17:57:50 CEST] <krzee> so i need a -i for every pic? i can do that
[17:58:02 CEST] <nicolas17> you could let ffmpeg expand the wildcard
[17:58:06 CEST] <nicolas17> -pattern_type glob -i "pics/*.jpg"
[17:58:15 CEST] <krzee> ahhh
[17:58:31 CEST] <krzee> nice thank you!
[18:00:08 CEST] <krzee> hey that seems to have worked without even using gm to resize them all to the same
[18:01:03 CEST] <krzee> now i just need to slow this sucker down, -r 1/3 and -r 1/8 are both light speed
[18:01:19 CEST] <nicolas17> oh you're making a slideshow?
[18:01:25 CEST] <krzee> yessir
[18:01:54 CEST] <krzee> automagically generating a slideshow of adoptable dogs for my mom's dog rescue's weekly dog adoption day
[18:02:37 CEST] <krzee> already built the scraper that grabs the dogs' pics from her website and watermarks them with each dog's name =]
[18:05:37 CEST] <nicolas17> too bad doing ken burns effect with ffmpeg video filters would be a pain in the ass :P
[18:06:49 CEST] <krzee> haha totally
[18:07:02 CEST] <nicolas17> it's probably *possible* but :P
[18:07:07 CEST] <krzee> but she never imagined that this could be automated, she'll be fine without effects
[18:07:27 CEST] <krzee> just scrolling through the dogs at like 2-3 sec per dog is greatness
[18:07:44 CEST] <krzee> and i even see ffmpeg will let me add audio, that may end up happening later on
[18:08:29 CEST] <krzee> this will free up extra time for a volunteer that will be able to spend that extra time with the doggies every week :)
[18:24:28 CEST] <krzee> oh wow your transition suggestion led me to diascope
[18:25:36 CEST] <krzee> its pretty crazy but seems to totally simplify doing this, it's a cli frontend for making slideshows using gm and ffmpeg
[18:28:22 CEST] <krzee> diascope creates a long shell script to do all the craziness, which is funny because i'll have a short shell script generating the diascope file, and calling diascope to make a much longer shell script :D
[18:29:27 CEST] <krzee> thanks nicolas17, i actually will do transitions now :D
[19:38:12 CEST] <ShaneVideo> I read ffserver was going to be deprecated, is it still in the project or is there a replacement?
[19:40:53 CEST] <BtbN> It's in there, but nobody maintains it
[19:40:59 CEST] <BtbN> the moment it gets in the way, it will go
[19:41:06 CEST] <ShaneVideo> i see, thanks
[19:41:17 CEST] <BtbN> it's horrible and should not be used anyway
[19:41:48 CEST] <ShaneVideo> is there an alternative or better way?
[19:41:54 CEST] <BtbN> for what?
[19:41:57 CEST] <DHE> nginx-rtmp enjoys popularity
[19:42:16 CEST] <ShaneVideo> I need to stream a mjpg and h264 stream at the same time
[19:42:21 CEST] <ShaneVideo> from /dev/video1
[19:42:25 CEST] <DHE> to what?
[19:42:55 CEST] <ShaneVideo> mjpg to client web browsers, h264 to vlc
[19:43:04 CEST] <BtbN> I doubt Browsers can play mjpeg
[19:43:14 CEST] <ShaneVideo> IOS plays it great
[19:43:23 CEST] <BtbN> But why would you do that? It's huge
[19:44:11 CEST] <ShaneVideo> its part of a webapp, it just needs to show a preview of whats happening on the live feed
[19:45:56 CEST] <krzee> weird... i came back to ffmpeg cause that other frontend for it is too old to use, i found that ffmpeg -r 1/3 -pattern_type glob -i "pics/*.jpg" out.flv gives me a slideshow video file that plays how i want, but i cant get the same results with .mpg .avi or .mp4
[19:46:46 CEST] <krzee> even converted the flv to avi and then even that stays at the first image
[19:56:34 CEST] <krzee> boom got it, ffmpeg -framerate 1/3 -pattern_type glob -i "pics/*.jpg" -c:v libx264 -r 30 -pix_fmt yuv420p out.mp4
[20:00:52 CEST] <BtbN> you will want to specify a bitrate, or it will look horrible
[20:01:04 CEST] <BtbN> or a crf, or at least something
[20:01:11 CEST] <furq> x264 should look ok with default settings
[20:02:45 CEST] <furq> if it's a slideshow then you probably want -tune stillimage
[20:04:24 CEST] <kepstin> ShaneVideo: if you actually want to send stuff to browsers, why not do e.g. HLS to the browser, and then not even require vlc at all?
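A hedged sketch of what kepstin means, reading the v4l2 device and writing an HLS playlist a browser player can fetch over plain HTTP (all values illustrative):

    ffmpeg -f v4l2 -i /dev/video1 -c:v libx264 -preset veryfast \
           -f hls -hls_time 2 -hls_list_size 5 /var/www/html/stream.m3u8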
[20:05:09 CEST] <nicolas17> I was disappointed to find VLC can't seek in HLS
[20:05:14 CEST] <nicolas17> at least the version I have and with the stream I tried
[20:06:01 CEST] <kepstin> well, yeah, it depends a lot on the stream. A live stream where the playlist only includes a few recent segments, for example, won't be seekable.
[20:06:35 CEST] <kepstin> but it should be able to seek if all the segments from the start are available, and the playlist has correct segment duration info
[20:06:42 CEST] <nicolas17> it wasn't live
[20:07:34 CEST] <nicolas17> 'vlc http://devstreaming.apple.com/videos/wwdc/2016/504m956dgg4hlw2uez9/504/hls_vod_mvp.m3u8'
[20:07:51 CEST] <nicolas17> which is a video *about* HLS, enjoy the meta :P
[20:08:32 CEST] <kepstin> seeking works in that video in mpv, fwiw.
[20:08:39 CEST] <kepstin> so yeah, i guess it's a vlc issue
[20:18:27 CEST] <Hopper_> Hey FFmpeg, looking to move this from linux to windows. https://pastebin.com/PxGTyER9 I know I need to change v4l2 to dshow, what else am I missing?
[20:34:28 CEST] <kepstin> Hopper_: well, the value for -i to use with dshow is quite a bit different, I think it takes a device name or something. Run "ffmpeg -h demuxer=dshow", it might have an option to list devices, and you can double-check the other options there too.
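For what it's worth, dshow can also enumerate devices and their supported modes directly (documented options; the device name is illustrative):

    ffmpeg -list_devices true -f dshow -i dummy
    ffmpeg -list_options true -f dshow -i "video=Some Camera"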
[20:35:07 CEST] <Hopper_> Ya, I'm working on that part now, either of them work independently, but when I try to use both I get an error on the second feed.
[20:36:11 CEST] <kepstin> hmm. I think with dshow, you might be able to do something to grab multiple devices with a single input
[20:36:39 CEST] <Hopper_> I don't really know what dshow is, should I be using that or something else?
[20:37:04 CEST] <kepstin> dshow (short for 'directshow') is the name of the windows specific api to grab frames from typical cameras.
[20:37:26 CEST] <Hopper_> So it's the standard?
[20:37:29 CEST] <kepstin> looks like it should work with multiple inputs; can you pastebin the complete command & output from a failing command?
[20:37:39 CEST] <Hopper_> Yup
[20:38:00 CEST] <kepstin> Hopper_: on linux you use 'v4l2', which is the "video for linux (version 2)" api
[20:38:08 CEST] <kepstin> these are just all os-specific drivers.
[20:38:16 CEST] <Hopper_> I understand.
[20:40:37 CEST] <kepstin> but if you're using the same config on all the cameras, doing something like -i "video=Camera1DeviceName:video=Camera2DeviceName" might work. worth a try, anyways.
[20:40:48 CEST] <Hopper_> Man, is there an easier way to copy in Windows CMD?
[20:41:11 CEST] <Hopper_> I'll look into that as well, they will be identical.
[20:41:13 CEST] <Hopper_> https://pastebin.com/Avb5UzwM
[20:41:14 CEST] <kepstin> Hopper_: I thought they improved that in windows 10, hmm.
[20:41:31 CEST] <Hopper_> I'm in 7, not used to it. So much more work than debian.
[20:41:52 CEST] <kepstin> ah, yeah, 7 still has the old terminal window. It's a lot better in 10 :/
[20:42:43 CEST] <kepstin> Hopper_: you forgot to include "-f dshow" before the second input, so it's trying to open a file named "Logitech HD Pro Webcam C920"
[20:42:55 CEST] <Hopper_> Hm, didn't have to do that in linux.
[20:43:22 CEST] <nicolas17> well /dev/video0 is certainly a v4l device
[20:43:26 CEST] <kepstin> Hopper_: a quirk, I think on linux, there's some magic where the special dev files /dev/video* are recognized as v4l devices
[20:43:41 CEST] <nicolas17> but "Logitech HD Pro Webcam C920" *could* be a relative path to a file with whatever
[20:43:56 CEST] <kepstin> Hopper_: it wouldn't be wrong to put another -f v4l2 before the second input on linux :)
[20:44:00 CEST] <tdr> kepstin, its not some implied magic, its just where the device nodes are created
[20:44:26 CEST] <Hopper_> kepstin: Got it, my linux syntax was wrong.
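So the corrected Windows command needs -f dshow in front of each input, roughly like this (the first device name and the output options are illustrative; the second name is from Hopper_'s paste):

    ffmpeg -f dshow -i "video=First Camera" \
           -f dshow -i "video=Logitech HD Pro Webcam C920" \
           <outputs as before>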
[20:44:44 CEST] <buu> When I'm using the concat filter/filter complex and I start off by saying [0:0] [0:1] and so on, I know that 0:0 is the first stream of the first input but how is this information passed to the rest of the command?
[20:44:54 CEST] <kepstin> tdr: the magic is that inside ffmpeg, it uses the v4l driver to read the device rather than just opening it as a file and trying to read a video stream...
[20:44:54 CEST] <buu> Is this something specific to how concat= is defined to take arguments?
[20:44:56 CEST] <kepstin> I think?
[20:45:13 CEST] <tdr> kepstin, ah ok, so inside ffmpeg not "kernel magic" got it
[20:46:26 CEST] <nicolas17> tdr: yeah, magic like "if the path is /dev/video0 then use -f v4l2 by default"
[20:46:34 CEST] <nicolas17> like it does for file extensions
[20:46:35 CEST] <tdr> very cool
[20:47:50 CEST] <buu> does the order of the -map '[v]' /[a] lines matter?
[20:48:52 CEST] <kepstin> buu: yes, the order that you use the map options is the order the streams will be included in the output
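Tying buu's questions together, a hedged concat sketch, assuming stream 0 of each input is video and stream 1 is audio and that the inputs share compatible parameters, with the -map order setting the output stream order as kepstin says (filenames illustrative):

    ffmpeg -i one.mp4 -i two.mp4 -filter_complex \
        "[0:0][0:1][1:0][1:1]concat=n=2:v=1:a=1[v][a]" \
        -map "[v]" -map "[a]" out.mp4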
[20:51:18 CEST] <krzee> furq: adding -tune stillimage makes the filesize go from 3mb to 3.5mb for my video but i cant see a difference when playing it, i dont see it in the manpage nor in ffmpeg -h so im not sure what its doing
[20:51:46 CEST] <nicolas17> krzee: are you using crf or bitrate or what?
[20:52:38 CEST] <furq> http://vpaste.net/FqDLA
[20:52:48 CEST] <krzee> ffmpeg -framerate 1/3 -pattern_type glob -i "pics/*.jpg" -c:v libx264 -r 30 -pix_fmt yuv420p -tune stillimage out.mp4
[20:53:03 CEST] <furq> it should generally look a bit sharper
[20:53:13 CEST] <nicolas17> no point playing with -tune if you're not even setting a basic parameter for quality vs size :P
[20:53:23 CEST] <nicolas17> furq: what's the default crf?
[20:53:25 CEST] <furq> 23
[20:53:37 CEST] <krzee> oh maybe its because the source images arent that special in quality i dont see a difference
[20:54:14 CEST] <nicolas17> krzee: maybe using -tune stillimage means you can afford to use a higher crf and get a smaller file
[20:54:48 CEST] <nicolas17> maybe with crf 23 the quality is so good you won't see the difference, but with a higher crf (lower quality, smaller file) you'll notice -tune stillimage vs no tuning
[20:55:08 CEST] <kepstin> well, i seem to recall that the '-tune stillimage' was originally designed for encoding single-frame "video" (i.e. using x264 as a still image format, rather than video)
[20:55:42 CEST] <kepstin> it can be used on video, and I think it'll preserve sharpness better? not sure
[20:55:59 CEST] <nicolas17> kepstin: he's making a slideshow, so
[20:56:07 CEST] <furq> yeah
[20:56:11 CEST] <kepstin> but yeah, -crf values are *not the same* across different tune and profile options
[20:56:27 CEST] <furq> higher aq-strength and lower deblock should increase sharpness
[20:56:58 CEST] <furq> i assume those psy settings will do the same but i forget how that actually works now
[20:57:00 CEST] <nicolas17> let's try this again:
[20:57:23 CEST] <nicolas17> does anyone know what tbc is, in the "30 fps, 30 tbr, 15360 tbn, 60 tbc" line?
[20:57:57 CEST] <kepstin> note that the reason crf values are not equivalent across tunes is the same reason as why you shouldn't change tune between first and second pass of video encodes ;)
[20:58:02 CEST] <furq> apparently it's the timebase from AVCodecContext
[20:58:13 CEST] <nicolas17> it's read from AVStream.codec.time_base, and if I set that field when encoding, I get the same value back when decoding
[20:58:27 CEST] <nicolas17> same for codec_time_base in "ffprobe -show_streams" output
[20:58:37 CEST] <nicolas17> but what *is* it? it doesn't seem to change anything when playing
[20:59:10 CEST] <krzee> ya with -crf 30 -tune stillimage its 1.8M now, looks the same too, very sweet
[20:59:41 CEST] <nicolas17> krzee: nice, do you see a difference with and without tuning at that crf?
[21:00:07 CEST] <krzee> i dont see any difference at all, just better file size
[21:00:31 CEST] <krzee> quite a bit better, from 3M (3.5 with -tune stillimage) to 1.8M
[21:01:16 CEST] <buu> kepstin: ok
[21:01:26 CEST] <nicolas17> I meant crf30 vs crf30+tune
[21:01:28 CEST] <buu> Is there any actual difference between audio being stream 0 and video being stream 1?
[21:01:32 CEST] <buu> or vice versa
[21:01:45 CEST] <Hopper_> Does anyone know of a guide to the -v options?
[21:01:58 CEST] <buu> Also in the filter man page, "The stream sent to the second output of split, labelled as [tmp], is processed through the crop filter", why does it say "sent to the output"
[21:02:08 CEST] <furq> Hopper_: https://ffmpeg.org/ffmpeg.html#Options
[21:02:11 CEST] <furq> ^F loglevel
[21:02:24 CEST] <krzee> ahh, crf30 = 1.9M, +tune is 1.8M, still no visible difference when watching
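For reference, the full command krzee converged on over the course of the day, combining the glob input, the slow input framerate, and the crf/tune flags (paths illustrative):

    ffmpeg -framerate 1/3 -pattern_type glob -i "pics/*.jpg" \
           -c:v libx264 -crf 30 -tune stillimage -r 30 -pix_fmt yuv420p out.mp4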
[21:02:25 CEST] <kepstin> buu: if you've only got 1 audio and 1 video stream? I don't think any players will care about the order.
[21:02:27 CEST] <buu> Is this just bad english or is there some convention that you send things to outputs of filters?
[21:02:34 CEST] <nicolas17> furq: also AVStream.codec is deprecated
[21:02:36 CEST] <buu> kepstin: Yeah, what about multiples of either?
[21:02:39 CEST] <Hopper_> furq: Thanks
[21:02:56 CEST] <nicolas17> buu: I think it means split sends the stream to its output, but let me read the whole thing
[21:03:13 CEST] <buu> https://ffmpeg.org/ffmpeg-filters.html#Filtering-Introduction
[21:03:14 CEST] <kepstin> buu: filters have inputs and outputs. You send frames to the inputs of a filter, then the filter does something and sends fromes to its outputs, which then might be connected to the inputs of other filters
[21:03:24 CEST] <buu> kepstin: That is what I expected
[21:04:05 CEST] <buu> Also, reading further, does what the split filter actually do is copy the input?
[21:04:08 CEST] <kepstin> buu: if you have multiple audio tracks, a player that allows switching between them will show them in the order that you use the -map otpions in.
[21:04:09 CEST] <nicolas17> "the stream you told the 'split' filter to send to its second output..." :)
[21:04:35 CEST] <buu> nicolas17: I'm not sure that works =]
[21:04:37 CEST] <nicolas17> buu: yes
[21:04:55 CEST] <buu> I think the actual problem is that when it says 'stream' it means 'name'
[21:04:56 CEST] <nicolas17> split is like Unix 'tee', it copies the input into multiple outputs
[21:05:07 CEST] <buu> i.e. you're naming the outputs coming from split
[21:05:32 CEST] <furq> "the name...is processed through the crop filter"
[21:05:34 CEST] <furq> that doesn't work
[21:05:35 CEST] <kepstin> a "stream" is a series of frames (video) or samples (audio). it means exactly what it says.
[21:05:53 CEST] <furq> also "the name, labelled as [tmp]"
[21:06:27 CEST] <buu> Ohh
[21:06:30 CEST] <buu> I see what it means
[21:07:03 CEST] <buu> Something about "sent to the output of split" just read really oddly
[21:07:34 CEST] <buu> "The second output of split, the stream labeled [tmp], is processed through the crop filter.." ?
[21:42:24 CEST] <alexpigment> hey guys, if i get a warning while encoding mpeg2video that says "warning, clipping 1 dct coefficients to -255..255", i can safely ignore this, right?
[21:42:46 CEST] <alexpigment> i'm working within a particular set of constraints and my tests show that things look mostly fine
[21:43:24 CEST] <alexpigment> if there were a simple workaround that wouldn't introduce a speed hit, though, i'd try it of course
[21:46:11 CEST] <alexpigment> for what it's worth, "-mbd rd" eliminates those warnings, but a) there's a significant speed hit, and b) i don't notice a huge improvement in quality
[21:52:28 CEST] <alexpigment> http
[21:52:40 CEST] <alexpigment> oops - ignore last message
[21:59:42 CEST] <kepstin> alexpigment: looks like that message is printed only when using -mbd simple, I guess because the other values for mbd reconstruct the mb and account for the clipping when making decisions, but simple doesn't.
[21:59:48 CEST] <kepstin> alexpigment: so, it just is what it is.
[22:00:31 CEST] <alexpigment> kepstin: thanks for the reply. not knowing enough about it, it seems like I should be able to ignore it
[22:00:43 CEST] <alexpigment> I just didn't know if there were deeper implications that I didn't observe in my tests
[22:00:48 CEST] <kepstin> alexpigment: if the quality looks good enough for you, then yeah
[22:01:12 CEST] <alexpigment> good to know. thanks again
[22:01:13 CEST] <kepstin> it just means that it's not making optimal decisions, so you're getting an efficiency loss (lower quality/bit)
[22:01:59 CEST] <alexpigment> yeah, i'm already having to bit starve this encode anyway, so it's really hard to tell what decisions are "bad" or "good" when the scenes get too intense
[22:34:55 CEST] <leif> I am using the asetpts filter to slow down audio (the documentation for setpts says you can do 2*(PTS-STARTPTS) to do it), but the resulting file has the same length.
[22:35:29 CEST] <leif> I am feeding it in a ~3 minute file, and ffmpeg says it produced a ~6 minute file, but when I play it the file is only 3 minutes long, and sounds the same.
[22:35:33 CEST] <leif> Here is the command: https://gist.github.com/LeifAndersen/4ad123e2bf3b528f5d13b10d4f78d973
[22:35:39 CEST] <leif> Any suggestions?
[22:35:46 CEST] <leif> (command+output ^)
[22:43:40 CEST] <leif> Also asetpts=2*PTS seems to have the same result.
[22:45:57 CEST] <ShaneVideo> I've currently got a 5 second delay when using HLS, is there something im missing to get that lower?
[22:46:32 CEST] <ChocolateArmpits> leif, i would guess mp3 doesn't support timestamps like that, so you would have to either interpolate the audio or lower the sampling rate, only a guess though
[22:47:18 CEST] <ChocolateArmpits> try wrapping to mkv and see if that works
[22:48:28 CEST] <ChocolateArmpits> ShaneVideo, is hls an input stream ?
[22:48:36 CEST] <ShaneVideo> yes from /dev/video1
[22:49:08 CEST] <ChocolateArmpits> a video device that gives hls ??
[22:49:14 CEST] <leif> ChocolateArmpits: OH, that would make sense. I'll play with this a bit more. Thanks.
[22:49:25 CEST] <ShaneVideo> When I use ffserver and encode to mjpeg, the latency is 100ms, when I convert to a m3u8 the delay is 8 seconds
[22:49:27 CEST] <ShaneVideo> *5
[22:49:53 CEST] <ChocolateArmpits> HLS delay is primarily affected by segment length
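A hedged sketch along those lines, shortening the segments and keeping a keyframe per segment (values illustrative; players typically still buffer a few segments, so some delay is inherent to HLS):

    ffmpeg -f v4l2 -i /dev/video1 -c:v libx264 -tune zerolatency -g 30 \
           -f hls -hls_time 1 -hls_list_size 3 -hls_flags delete_segments \
           stream.m3u8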
[22:50:04 CEST] <leif> Also nope, sadly adelay didn't work. :(
[22:50:17 CEST] <leif> (for delaying the audio)
[22:51:24 CEST] <ShaneVideo> Thanks
[22:53:32 CEST] <leif> ChocolateArmpits: So, when I use mkv I seem to get the track twice.
[22:53:42 CEST] <leif> (At least it still seems like it has the same pitch.)
[22:53:53 CEST] <leif> I'll try a different song.
[22:54:42 CEST] <leif> OH NO
[22:54:53 CEST] <leif> Actually, my media player is now just playing it twice as fast.
[22:54:59 CEST] <leif> That's very odd.
[22:55:02 CEST] <leif> Thanks btw.
[22:58:08 CEST] <leif> ChocolateArmpits: Here is the ffprobe of the two files: https://gist.github.com/LeifAndersen/a2a3892d5aa1c2c7a48a735087ad0bad
[22:58:35 CEST] <ChocolateArmpits> leif, is there a problem ?
[22:59:31 CEST] <leif> Even though one is twice as long as the other, they both take the same amount of time to play. Although I think that's because of the differing bitrates.
[22:59:41 CEST] <leif> But if that is expected behavior that is fine. Thanks. :)
[23:00:30 CEST] <ChocolateArmpits> try playing back with ffplay or any other libav based player
[23:03:21 CEST] <leif> ChocolateArmpits: Hmm...same behavior.
[23:03:26 CEST] <leif> (I was testing it with mplayer before)
[23:03:47 CEST] <leif> AH, okay
[23:03:59 CEST] <leif> VLC worked as expected. (aka, really choppy, and half speed)
[23:04:01 CEST] <leif> Thanks. :)
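A closing note on leif's thread: asetpts only rewrites timestamps, so players either ignore the stretch or stutter through it (as VLC did); to actually slow audio down smoothly, a different technique is the atempo filter, which time-stretches the samples themselves, e.g.:

    ffmpeg -i in.mp3 -filter:a atempo=0.5 out.mp3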
[00:00:00 CEST] --- Fri Jul 7 2017