[Ffmpeg-devel-irc] ffmpeg.log.20171101
burek
burek021 at gmail.com
Thu Nov 2 03:05:01 EET 2017
[01:17:36 CET] <Cracki> so I have a webcam via dshow that claims vcodec=mjpeg min s=1280x720 fps=5 max s=1280x720 fps=30
[01:17:46 CET] <Cracki> however [mjpeg @ 0000000000503740] No JPEG data found in image
[01:18:09 CET] <Cracki> for: ffplay -f dshow -video_size 1280x720 -framerate 30 -vcodec mjpeg video="Logitech HD Pro Webcam C920"
[01:18:25 CET] <Cracki> any idea what's wrong?
[01:18:32 CET] <Cracki> rawvideo yuv420p works
[01:20:04 CET] <Cracki> -video_pin_name 3 -vcodec h264 works (it can do that too)
[01:20:23 CET] <Cracki> might the camera send weird mjpeg?
[01:20:33 CET] <Cracki> appears that nothing is coming at all
[01:21:48 CET] <Cracki> I suspect the camera can also do varying bitrates of h264, but -list_options true doesn't list those
[01:22:49 CET] <Cracki> I'm also interested in setting a fixed exposure time and gain... any hints?
[01:25:15 CET] <Cracki> I can pop open the dialog, but I'd like to set those things in the command line
[01:25:39 CET] <Cracki> basically https://ffmpeg.zeranoe.com/forum/viewtopic.php?t=2487
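For reference, a hedged sketch of the dshow commands being discussed: `-list_options true` dumps what the device claims to support, and dshow's `show_video_device_dialog` option pops the driver's own property sheet, which is the closest built-in hook for exposure/gain (the device name below is the one from the log; there are no generic dshow flags for those settings).

```shell
# Dump the formats/pins the dshow device advertises:
ffmpeg -f dshow -list_options true -i video="Logitech HD Pro Webcam C920"

# No generic dshow exposure/gain options exist; the nearest built-in
# hook is opening the driver's property dialog before capture starts:
ffplay -f dshow -show_video_device_dialog true video="Logitech HD Pro Webcam C920"
```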
[03:39:39 CET] <vandemar> Is there some way to get ffmpeg to treat -to differently from -t, or is -to just completely broken? -to acts just like -t for me (ffmpeg 3.4).
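A hedged note on the 3.x behavior being reported: `-to` was an output-only option in that era, and because input seeking with `-ss` resets timestamps to zero, `-to` ends up measured from the seek point, which is indistinguishable from `-t`. A sketch of the difference, with a placeholder input file:

```shell
# Input seeking: fast, but timestamps restart at 0, so -to looks like -t:
ffmpeg -ss 30 -i input.mp4 -to 60 clip.mp4   # ~60s long, not "until t=60"

# Output seeking: decodes and discards the first 30s, but -to keeps its
# absolute meaning, yielding the 30s..60s range of the source:
ffmpeg -i input.mp4 -ss 30 -to 60 clip.mp4
```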
[03:45:59 CET] <Guest28023> Hey I was wondering if someone can help me with an issue I'm encountering with piping an image sequence to ffmpeg? I'm trying to pipe an svg image sequence which should be possible with ffmpeg 3.4. It works with file patterns but fails when piping. See the following: https://pastebin.com/Mj9WXj7S
[03:51:45 CET] <klaxa> Guest28023: just a hunch but you probably have to specify a resolution for the output
[03:51:54 CET] <klaxa> although i don't know why it doesn't need that in the first case
[03:52:10 CET] <klaxa> maybe because files are seekable and pipes are not
[03:52:31 CET] <klaxa> >Consider increasing the value for the 'analyzeduration' and 'probesize' options
[03:54:35 CET] <klaxa> so try adding -probesize some_value
[03:55:04 CET] <klaxa> where some_value is maybe as large as one svg?
[03:55:24 CET] <Guest28023> I believe probesize is 5mb whereas the svgs are a few kb so probably not the issue
[03:55:33 CET] <Guest28023> (default probe size)
[03:55:58 CET] <Guest28023> I've tried to set resolution via cat image/*.svg | ffmpeg -i - -s 640x640 video.mp4 but same error
[03:56:53 CET] <klaxa> maybe move it before the -i but after the -f image2pipe ?
[03:57:12 CET] <klaxa> that sets it as an input option instead of an output option
[03:59:30 CET] <cryptodechange> I'm getting this message during an encode
[03:59:33 CET] <cryptodechange> "Timestamps are unset in a packet for stream 1. This is deprecated and will stop working in the future. Fix your code to set the timestamps properly"
[03:59:36 CET] <cryptodechange> Should I ignore?
[04:01:04 CET] <Guest28023> setting scale before or after input has no effect :(
[04:01:42 CET] <klaxa> hmm :/
[04:06:03 CET] <Guest28023> There's a commit related to svg piping which says it only checks extension and mime type, both of which are correct with the files I'm using:
[04:06:05 CET] <Guest28023> img2dec: add support for piped SVG demuxing Only checks the extension and MIME type, since determining whether a file is SVG is difficult since they're just XML files.
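Since a pipe carries no filename extension (and usually no MIME type), the auto-detection that commit describes never fires; forcing the demuxer explicitly may be the missing piece. A hedged sketch, assuming the piped-SVG demuxer is registered as `svg_pipe` like the other img2dec pipe demuxers:

```shell
# Force the SVG pipe demuxer; framerate is an input option (before -i),
# scaling is an output option (after -i):
cat images/*.svg | ffmpeg -f svg_pipe -framerate 25 -i - \
    -vf scale=640:640 -pix_fmt yuv420p video.mp4
```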
[04:36:19 CET] <atomnuker> oh wow someone's using the svg demuxer I wrote, this is awesome
[04:36:28 CET] <atomnuker> I thought I was the only one who wanted that
[04:40:53 CET] <cryptodechange> dang, grain tune really does use a lot more bitrate on crf
[04:41:01 CET] <cryptodechange> I suppose I can up the CRF as it would be more efficient
[06:49:37 CET] <hendry> why is a 10s .ts blank segment 5.2M? Shouldn't it be tiny? There is no change frame to frame with https://s.natalian.org/2017-11-01/blank.ts
[07:57:09 CET] <twid> is it possible to create a relay server based on the ffmpeg library? By relay server I mean a server that records and streams a remote stream simultaneously.
[07:57:42 CET] <twid> I am asking because ffmpeg supports various protocols.
[08:45:37 CET] <dan3wik> twid, yes
[08:47:17 CET] <dan3wik> https://trac.ffmpeg.org/wiki/Creating%20multiple%20outputs
[08:47:59 CET] <dan3wik> That should help if I understood you correctly.
[09:03:33 CET] <twid> Thanks dan3wik. I want to write my own media server, so I was asking about the code. I tried to understand the ffmpeg code, but it seems a bit complicated.
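The "record and stream simultaneously" case from the linked wiki page boils down to the tee muxer; a hedged one-liner sketch with placeholder URLs:

```shell
# Pull one remote stream, stream-copy it, and fan it out to a local
# recording and a downstream RTMP server in a single process:
ffmpeg -i rtmp://origin.example/live/in -c copy -map 0 \
    -f tee "recording.flv|[f=flv]rtmp://relay.example/live/out"
```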
[12:28:06 CET] <rouslanzs> Hi! I'm trying to install ffmpeg as described here: https://trac.ffmpeg.org/wiki/CompilationGuide/Centos I use libmp3lame 3.100. ffmpeg says not found. How to fix it?
[12:28:27 CET] <rouslanzs> ERROR: libmp3lame >= 3.98.3 not found
[12:29:50 CET] <tdr> that's the wrong version of libmp3lame
[12:30:52 CET] <tdr> see how the error points to a specific version # ... if you used 3.100 it won't match
[12:32:19 CET] <Mavrik> Shouldn't 3.100 BE >= 3.98? :P
[12:32:31 CET] <Mavrik> rouslanzs: check what the actual error in config.log is
[12:32:45 CET] <rouslanzs> one minute
[12:47:57 CET] <rouslanzs> vbrquantize.c:(.text+0xaa2): undefined reference to `log10f' /root/ffmpeg_build/lib/libmp3lame.a(vbrquantize.o): In function `VBR_encode_frame': vbrquantize.c:(.text+0x35b7): undefined reference to `sqrt' vbrquantize.c:(.text+0x35da): undefined reference to `sqrt' vbrquantize.c:(.text+0x38ec): undefined reference to `sqrt' vbrquantize.c:(.text+0x3afd): undefined reference to `sqrt' collect2: error: ld returned 1 exit status ERROR: l
[12:48:21 CET] <rouslanzs> Mavrik , is it help?
[12:48:50 CET] <rouslanzs> Mavrik: or not )
[12:53:51 CET] <twid> It's a linker error. What OS are you using?
[12:54:03 CET] <rouslanzs> twid: CentOS7
[12:56:01 CET] <rouslanzs> Linux 3.10.0-693.5.2.el7.x86_64 #1 SMP Fri Oct 20 20:32:50 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
[12:56:31 CET] <twid> ok. are you building libmp3lame from source? otherwise you can install with the help of Yum installer.
[12:57:21 CET] <rouslanzs> i took it from http://downloads.sourceforge.net/project/lame/lame/3.100/lame-3.100.tar.gz and built it from source.
[12:57:32 CET] <twid> ok.
[12:57:55 CET] <twid> you can follow this link: https://gist.github.com/icaliman/1ee56b7f3ed5abf0dec1
[12:59:05 CET] <rouslanzs> twid: ok, i try use older version of libmp3lame. thank you.
[13:00:01 CET] <twid> ok
[13:25:46 CET] <rouslanzs> twid: I deleted lame-3.100, then downloaded lame-3.99 from your link, then ran make and make install. When trying to configure ffmpeg, I get ERROR: libmp3lame >= 3.98.3 not found again.
[13:26:05 CET] <rouslanzs> what can i do? )
[13:28:11 CET] <JEEB> --extra-libs="-lm"
[13:28:19 CET] <JEEB> since I don't think the math library got added globally yet
[13:28:24 CET] <JEEB> blame static linking
[13:28:34 CET] <JEEB> (and projects' pkg-config files sucking)
[13:28:55 CET] <JEEB> generally you have your "internal" (relevant with static linking) dependencies under Libs.private
[13:29:00 CET] <JEEB> LAME doesn't have them :P
[13:29:49 CET] <rouslanzs> it's very interesting, but .. i don't understand how to install it %)
[13:30:07 CET] <rouslanzs> and sorry for my English
[13:37:01 CET] <rouslanzs> --extra-libs="-lm" it works for me )
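To spell out JEEB's point: the static libmp3lame.a calls libm functions (the `sqrt`/`log10f` in the paste above), but LAME's pkg-config file doesn't declare libm under Libs.private, so the linker never sees it. A hedged sketch of the configure invocation, with the prefix paths from the CentOS guide as assumptions:

```shell
# Append -lm to every link command so static libmp3lame.a can resolve
# its libm symbols (paths follow the CompilationGuide/Centos layout):
cd ~/ffmpeg_sources/ffmpeg
./configure --prefix="$HOME/ffmpeg_build" \
    --pkg-config-flags="--static" \
    --extra-cflags="-I$HOME/ffmpeg_build/include" \
    --extra-ldflags="-L$HOME/ffmpeg_build/lib" \
    --extra-libs="-lm" \
    --enable-libmp3lame
```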
[13:37:20 CET] <twid> rouslanzs: It's not a version number problem; when compiling ffmpeg, it looks for libmp3lame.so, which it's not able to find.
[13:38:03 CET] <twid> rouslanzs: run 'make VERBOSE=1' and check out all the lib paths.
[13:38:22 CET] <twid> that's great
[13:45:56 CET] <JEEB> there were recently improvements to the configure script where the dependencies were properly separated and we put more trust into the pc files :)
[13:46:12 CET] <JEEB> now, the problem is, almost nobody seems to make their pc files properly for static linking
[15:18:37 CET] <ovi> hello to everyone
[15:19:00 CET] <ovi> little help please https://pastebin.com/pUk3MWgv
[15:31:03 CET] <BtbN> ovi, there is no vf_nvresize.c in ffmpeg. Contact whoever gave you that and ask them for advice on how to build it.
[15:36:53 CET] <ovi> BtbN: I'm trying to build ffmpeg with nvenc
[15:37:37 CET] <BtbN> Whatever you found must have been horribly outdated then. nvenc is included by default and does not need some 3rd party fork
[15:37:58 CET] <ovi> following the tutorial from here https://dwijaybane.wordpress.com/2017/07/19/ffmpeg-with-nvidia-acceleration-on-ubuntu-16-04-nvenc-sdk/
[15:38:42 CET] <BtbN> that's horrible
[15:38:52 CET] <ovi> aw ok
[15:39:09 CET] <BtbN> Any ffmpeg build that does not explicitly disable it comes with nvenc.
[15:39:33 CET] <BtbN> The resize-filters are more involved though, as they require a non-free build
[17:45:04 CET] <djk> Looking for suggestions. I want to stream live audio from an input on the PC. I have looked at Unreal and Icecast but have not had good success with them. Any thoughts?
[17:51:36 CET] <djk> this on a windows 10 system
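One hedged route on Windows 10: capture the input with dshow and push MP3 straight to an Icecast mountpoint using ffmpeg's icecast protocol (the device name, host, and password below are placeholders):

```shell
# List capture devices first to get the exact audio device name:
ffmpeg -f dshow -list_devices true -i dummy

# Then encode the input and feed an Icecast mount:
ffmpeg -f dshow -i audio="Microphone (Realtek High Definition Audio)" \
    -c:a libmp3lame -b:a 128k -content_type audio/mpeg \
    -f mp3 icecast://source:hackme@example.com:8000/live
```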
[18:51:20 CET] <FurretUber> Is there a way to see the available options for a specific encoder, similar to the ones for specific input and output devices?
[18:52:18 CET] <sfan5> ffmpeg -h encoder=libx264
[18:53:38 CET] <FurretUber> Thank you
[19:31:47 CET] <domane> so I want to read x264 --fullhelp, but I don't know how to do it. "ffmpeg x264 --fullhelp" is an invalid command
[19:33:13 CET] <DHE> no, just x264 --fullhelp
[19:35:47 CET] <domane> I don't have that exe anywhere
[19:38:28 CET] <relaxed> domane: https://www.johnvansickle.com/x264.txt
[19:39:44 CET] <domane> thanks, but how do I find out if ffmpeg was compiled with 8 bit or 10 bit x264?
[19:53:03 CET] <relaxed> domane: look at the output of "ffmpeg-10 -h encoder=libx264" - you should see "Supported pixel formats: yuv420p10le yuv422p10le yuv444p10le nv20le" if it was compiled with 10bit support
[19:53:17 CET] <relaxed> er, ffmpeg -h encoder=libx264
[19:54:12 CET] <relaxed> domane: https://trac.ffmpeg.org/wiki/Encode/H.264
[19:54:46 CET] <domane> awesome, thanks. I got "yuv420p yuvj420p yuv422p yuvj422p yuv444p yuvj444p nv12 nv16 nv21" so that must mean it's 8 bit?
[19:54:56 CET] <relaxed> correct
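The check above is easy to script: grep the "Supported pixel formats" line for a 10-bit format name. In practice you would pipe `ffmpeg -h encoder=libx264` into it; in this self-contained sketch the two lines quoted in the conversation stand in for that output.

```shell
# Succeeds when a pixel-format line advertises a 10-bit format:
is_10bit() { printf '%s\n' "$1" | grep -q '10le'; }

# Sample lines as quoted above, standing in for real ffmpeg output:
ten="Supported pixel formats: yuv420p10le yuv422p10le yuv444p10le nv20le"
eight="Supported pixel formats: yuv420p yuvj420p yuv422p yuvj422p yuv444p yuvj444p nv12 nv16 nv21"

is_10bit "$ten" && echo "10-bit build"
is_10bit "$eight" || echo "8-bit build"
```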
[19:55:17 CET] <domane> cool, thanks!
[19:59:58 CET] <mozzarella> what's yuv?
[20:01:39 CET] <blap> a way of representing color
[20:01:48 CET] <blap> like rgb is
[20:39:36 CET] <FishPencil> I need to split a video into 36 even parts, saving the first frame from each to a png. Right now I'm using ffprobe to get the duration, then performing the division outside, and doing a ffmpeg -ss s -i ... s.png where s is for (s = 0; s < d; s += split) is there any better way to do this?
[20:40:18 CET] <FishPencil> I know I can use -vf fps=1/split but that is much slower than the -ss method
[21:53:27 CET] <relaxed> FishPencil: look at the segment muxer
[21:54:05 CET] <relaxed> ffmpeg -h muxer=segment
[21:54:07 CET] <FishPencil> relaxed: I don't think that's any faster than doing -vf fps?
[21:57:02 CET] <relaxed> you can stream copy with it
[22:03:03 CET] <relaxed> FishPencil: understand?
[22:04:20 CET] <FishPencil> relaxed: not exactly, I don't understand your stream copy comment.
[22:04:33 CET] <FishPencil> Or how that could be faster than doing -ss before -i
[22:05:25 CET] <relaxed> -vf requires transcoding, whereas the segment muxer will copy the video/audio streams
[22:06:05 CET] <FishPencil> does it use -ss to find the part of the file to start the segment?
[22:07:09 CET] <relaxed> no, see https://ffmpeg.org/ffmpeg-formats.html#segment_002c-stream_005fsegment_002c-ssegment
[22:10:45 CET] <relaxed> you would divide the duration as you have been, then supply the quotient to the muxer. it should be much faster than what you were doing
[22:14:31 CET] <FishPencil> relaxed: I must have something wrong: -f stream_segment -segment_time 10 -frames 1 %03d.png
[22:14:45 CET] <FishPencil> relaxed: [stream_segment,ssegment @ 00000000046e00a0] format image2 not supported.
[22:15:15 CET] <relaxed> are you spliting the video or do you just need images?
[22:16:28 CET] <relaxed> if it's the former, segment the video and then run a second ffmpeg instance to grab an image from the beginning of each segment
[22:16:31 CET] <FishPencil> The end result is to tile, but since -vf fps=1/s,tile=6x6 is super slow, I wanted to use -ss and combine after
[22:23:32 CET] <relaxed> then yes, -ss is probably the way to go
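The -ss schedule being described can be generated mechanically: divide the duration by 36 and emit one fast-seek command per slice. A sketch that prints the commands instead of running them, with the duration hard-coded as a stand-in for an ffprobe query:

```shell
#!/bin/sh
# Duration would come from:
#   ffprobe -v error -show_entries format=duration -of csv=p=0 input.mp4
# A 360-second placeholder keeps the sketch self-contained.
duration=360
parts=36
i=0
while [ "$i" -lt "$parts" ]; do
    ss=$((duration * i / parts))
    echo "ffmpeg -ss $ss -i input.mp4 -frames:v 1 shot_$(printf '%02d' "$i").png"
    i=$((i + 1))
done
```

Each printed line is one independent extraction; piping the output to `sh` would run them all.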
[22:27:43 CET] <arvut> heya, is the .mpg format a compressed format or a raw video container like .ts?
[22:28:22 CET] <arvut> and how do I go about transcoding it to .mp4, presumably something an iPad could play
[22:28:39 CET] <JEEB> ffprobe file.mpg
[22:29:30 CET] <JEEB> if it's "Input #0, mpeg"
[22:29:32 CET] <JEEB> that is MPEG-PS
[22:29:38 CET] <JEEB> which is a "sister" format to MPEG-TS
[22:30:12 CET] <sfan5> if you're lucky ffmpeg -i input.mpg -c copy output.mp4 might even work
[22:30:39 CET] <FishPencil> relaxed: Could a 2nd FFmpeg catch the pipe from stream_segment and do the png save?
[22:30:45 CET] <JEEB> I'd be surprised if an ipad could play streams as-is from MPEG-PS, since you've got a lot of MPEG-1 Video and MPEG-1 Layer 2 audio there :D
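Putting both suggestions together as a hedged sketch: probe first, try the remux, and fall back to an H.264/AAC transcode if the iPad refuses the MPEG-1 streams:

```shell
ffprobe input.mpg                               # confirm it's "Input #0, mpeg" (MPEG-PS)
ffmpeg -i input.mpg -c copy output.mp4 ||       # lucky case: pure remux
ffmpeg -i input.mpg -c:v libx264 -pix_fmt yuv420p -c:a aac output.mp4
```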
[00:00:00 CET] --- Thu Nov 2 2017