[Ffmpeg-devel-irc] ffmpeg.log.20161008
burek
burek021 at gmail.com
Sun Oct 9 03:05:01 EEST 2016
[01:39:25 CEST] <CorvusCorax> Hi. I have a question regarding creation of video timestamps. I'm writing a program that creates a timestamp at the same time it sends a trigger signal to a camera. Then it retrieves a frame from the camera. Currently both timestamps and frames are stored separately in individual files (per frame), but that's not performant enough. I want to stream this raw video data to ffmpeg for encoding in a
[01:39:25 CEST] <CorvusCorax> suitable container. But how do I get the timestamps into ffmpeg?
[01:44:05 CEST] <klaxa> this might be a long shot, but i think at least matroska supports a generic data track
[01:44:12 CEST] <klaxa> not sure if this can be interleaved
[01:44:21 CEST] <klaxa> i wouldn't be surprised though
[01:45:43 CEST] <CorvusCorax> that's not really it though. I don't want that as generic auxiliary data (though that might be a cool thing for other data, so thanks for the suggestion :) ) but to be used for the actual frame timestamps in the video file
[01:47:23 CEST] <CorvusCorax> aka if the trigger signals to the camera were 12 ms apart, then the frame timestamps in the resulting stream should be 12ms apart. even if due to system lag, I have to feed the raw image data to ffmpeg with different timing. I can't have ffmpeg make its own timestamps based on fixed framerate or system time, since neither would be accurate
[01:47:30 CEST] <c_14> If you're using the libraries, you can magic up a timestamp using your input and the appropriate functions. If you're using the binary, set the mtime on the images using the timestamps and use the -ts_from_file option of the image2 demuxer
[01:49:17 CEST] <CorvusCorax> thanks c_14, that might be an option. So I could either save them temporarily in image files in a ramdisk, and have ffmpeg read them from there, or call the encoder in libavcodec directly with home-brew timestamp info
[01:49:20 CEST] <CorvusCorax> ?
[01:56:57 CEST] <klaxa> haven't heard of that yet
[01:57:38 CEST] <klaxa> you could look at doc/examples/remuxing.c
[01:57:48 CEST] <klaxa> it helped me understand remuxing a lot better
[01:57:59 CEST] <klaxa> if you really want to go down the rabbit hole
[01:58:10 CEST] <klaxa> although it might actually only need slight adjustments
[01:58:24 CEST] <klaxa> maybe there is a variable fps filter i don't know of
[02:06:25 CEST] <CorvusCorax> that looks quite promising actually. All I'd have to do is override pkt.pts for the output stream. The trickier thing might be to form pkt from raw rgb data, but that should be in another example
[02:09:11 CEST] <CorvusCorax> https://ffmpeg.org/doxygen/trunk/muxing_8c_source.html <-- I guess that would be it :-)
[02:09:31 CEST] <CorvusCorax> thanks klaxa :-)
[02:14:22 CEST] <CorvusCorax> this is an awesome chat room! :-) thanks @all, all my problems solved :)
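For the record, a rough sketch of what was worked out above (modelled on doc/examples/muxing.c; the variable `trigger_time_us` is hypothetical, standing in for whatever clock the trigger logic records):

```c
/* Sketch, untested: feed raw frames to the encoder with externally
 * measured timestamps instead of letting ffmpeg invent them.
 * Rescale the trigger time (here assumed to be in microseconds)
 * into the encoder's time base before encoding. */
frame->pts = av_rescale_q(trigger_time_us,
                          (AVRational){1, 1000000},  /* source: microseconds */
                          codec_ctx->time_base);     /* encoder time base */
avcodec_send_frame(codec_ctx, frame);
while (avcodec_receive_packet(codec_ctx, &pkt) == 0) {
    /* packets carry the pts through to the muxer, in stream time base */
    av_packet_rescale_ts(&pkt, codec_ctx->time_base, stream->time_base);
    pkt.stream_index = stream->index;
    av_interleaved_write_frame(fmt_ctx, &pkt);
    av_packet_unref(&pkt);
}
```

This way a 12 ms gap between trigger signals stays a 12 ms gap in the stream, regardless of when the frames actually reach the encoder.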
[06:35:29 CEST] <yong> Is there a rule of thumb as to when re-encoding looks worse than immediate encoding with a higher qp? For example, lossless -> qp 24 -> qp 27 vs lossless -> qp 29, which one should look better?
[06:40:03 CEST] <klaxa> that's a good question, if you are talking about the same codec, it may be so that the artifacts "overlap" with the new artifacts and little additional quality is lost, it could also be that many bits are spent on those artifacts to reproduce them and other areas lose quality
[06:40:09 CEST] <klaxa> that's just speculation though
[06:40:35 CEST] <klaxa> i would test with a series of videos/short clips ?
[06:40:40 CEST] <klaxa> if you are really interested
[06:43:29 CEST] <yong> klaxa: It's not only the same codec, it's the same encoder ;) (x264) yeah, maybe I should test it, but even testing for one specific case is kind of annoying, and that still doesn't give me a general rule ;)
[06:44:12 CEST] <klaxa> that's why i said "with a series" :)
[06:54:25 CEST] <yong> klaxa: lol, when re-encoding my sample -preset veryfast produces a smaller file than -preset slow - wtf?
[06:55:12 CEST] <klaxa> might be exactly what i described :D
[06:55:17 CEST] <klaxa> both cases even
[06:55:33 CEST] <klaxa> just that quality doesn't get degraded, but bits get added
[06:57:19 CEST] <yong> klaxa: The thing is, I didn't do the re-encoding test yet. I was just comparing sizes from lossless -> qp 31 once with veryfast and once with slow, this makes no sense to me. Maybe 10 seconds isn't a long enough sample, but still, I'd always expect slow to be smaller. Also medium seems to be right in the middle, exactly the opposite of what I'd expect
[06:57:56 CEST] <klaxa> ah, look at the files and how they look visually
[06:59:46 CEST] <klaxa> i bet even though you set -qp to 31 they look different (fast being worst up to slow)
[06:59:57 CEST] <klaxa> afk, getting some food :)
[07:00:18 CEST] <yong> klaxa: Slow seems to be looking the best (it's the largest file after all), but isn't -qp 31 supposed to mean that they should all look almost the same?
[07:00:54 CEST] <klaxa> not too sure about that, i think it also depends on the preset
[07:01:18 CEST] <klaxa> maybe -crf is the better choice?
[07:01:28 CEST] <klaxa> i don't know, read up on those two and what the differences are
[07:04:14 CEST] <yong> klaxa: crf is just qp but it adjusts the qp value slightly depending on how the human eye tends to perceive scenes, so for testing things like these I tend to prefer a constant qp by setting it directly and not using crf
[07:15:17 CEST] <klaxa> ah
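A quick way to reproduce the comparison being discussed (filenames are placeholders; the same fixed -qp is used across presets, so any size difference comes from the preset's analysis settings rather than the quantizer):

```shell
# Sketch: encode the same lossless source at a fixed QP under several
# presets. Sizes can still differ because presets change motion
# estimation, psy-rd, partitioning etc., not the quantizer itself.
for p in veryfast medium slow; do
  ffmpeg -i lossless.mkv -c:v libx264 -preset "$p" -qp 31 "out_$p.mkv"
done
ls -l out_*.mkv
```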
[08:54:37 CEST] <teratorn> is there any way to get a `split' filter, but with actually independent AVFrame containers? which would in theory prevent pts changes on frames in one of the split streams from affecting the pts values of frames in other split streams? I would still like to share and ref-count frame data, but not share timestamps... any way other than writing my own modified
[08:54:37 CEST] <teratorn> split filter? I know I can have the same input multiple times on the ffmpeg command-line to achieve the same effect, but that means decoding the media multiple times, plus the memory overhead of non-shared frame data... any clues? =)
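In case it helps anyone reading the log: av_frame_clone() already gives roughly this behaviour, since a clone ref-counts the underlying buffers but carries its own copy of the frame properties, pts included. An untested sketch:

```c
/* Sketch: two clones share the pixel data (ref-counted), but each has
 * independent metadata, so retiming one does not affect the other. */
AVFrame *a = av_frame_clone(src);
AVFrame *b = av_frame_clone(src);

a->pts += offset;   /* b->pts is untouched */

/* freeing the clones only unrefs the shared buffers */
av_frame_free(&a);
av_frame_free(&b);
```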
[10:46:25 CEST] <fqtw> what does -g 250 -sc_threshold 1000000000 do?
[10:53:48 CEST] <furq> fqtw: it's probably supposed to forced fixed-size gops
[10:54:08 CEST] <furq> s/forced/force/
[11:16:03 CEST] <fqtw> furq: what would be the libffmpeg function that does the same?
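No one answered in the log, but with the libraries the equivalent appears to be two AVCodecContext fields set before avcodec_open2() (this mirrors the CLI flags under the 2016-era API; untested sketch):

```c
/* Sketch: library-side equivalent of "-g 250 -sc_threshold 1000000000". */
AVCodecContext *c = avcodec_alloc_context3(codec);
c->gop_size = 250;                     /* -g: maximum GOP length */
c->scenechange_threshold = 1000000000; /* -sc_threshold: huge value so
                                          scene-change detection never
                                          cuts the GOP short */
```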
[15:05:38 CEST] <n4zarh> hello there, I still have problem with encoding samples to pcm_mulaw :D
[15:09:36 CEST] <cuba_> is there any image format i can pipe into ffmpeg to encode to a video
[15:09:37 CEST] <n4zarh> not using command line, I have problem with my C++ code using ffmpeg (specifically avcodec)
[15:09:44 CEST] <cuba_> that holds a timestamp
[15:10:02 CEST] <cuba_> so the timing will be correct
[15:10:28 CEST] <furq> cuba_: if the timing is regular you can specify the input framerate
[15:10:42 CEST] <cuba_> furq: but what if it is slighty changing
[15:10:57 CEST] <cuba_> depending on camera lighting etc
[15:11:15 CEST] <furq> you can use -ts_from_file with the image2 demuxer
[15:11:25 CEST] <furq> i don't know if that works with image2pipe though
[15:11:48 CEST] <c_14> I would assume no because piped data doesn't have an mtime
[15:11:58 CEST] <cuba_> yes c_14 not really image2pipe
[15:12:15 CEST] <furq> if there's anything equivalent, rather
[15:14:45 CEST] <cuba_> so I guess best method would be to create a ts file and reencode it afterwards
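If writing the frames to disk is acceptable, the mtime route furq and c_14 described might look like this (filenames and timestamps are example values; -ts_from_file 1 asks the image2 demuxer for second precision):

```shell
# Sketch: stamp each image's mtime with its capture time, then let the
# image2 demuxer derive each frame's PTS from the file's mtime.
touch -d @1475928000 frame0001.png
touch -d @1475928001 frame0002.png
ffmpeg -f image2 -ts_from_file 1 -i 'frame%04d.png' -c:v libx264 out.mkv
```

As noted above, this only works with files on disk, since piped data has no mtime.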
[15:22:30 CEST] <n4zarh> okay, sorry for not being there for a while, but again: I have problem using avcodec_fill_audio_frame, it returns -22, avlog shows no error at all
[15:26:46 CEST] <realies> what would be a modern hardware config for optimal hardware accelerated ffmpeg encoding/transcoding awesomeness?
[15:28:08 CEST] <n4zarh> I am sure I have frame allocated with nb_samples, format and channel_layout set, I know what kind of data I'm putting into that function, but it just returns -22 every time and I am clueless why
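In case someone hits the same error later: -22 is AVERROR(EINVAL), and the usual cause is a buffer size that doesn't match what the frame parameters imply. An untested sketch of the relationship:

```c
/* Sketch: avcodec_fill_audio_frame() returns AVERROR(EINVAL) when
 * buf_size disagrees with nb_samples * channels * bytes-per-sample
 * (for the given alignment), so derive the size from the same values. */
frame->nb_samples     = ctx->frame_size;
frame->format         = ctx->sample_fmt;      /* pcm_mulaw expects AV_SAMPLE_FMT_S16 */
frame->channel_layout = ctx->channel_layout;

int needed = av_samples_get_buffer_size(NULL, ctx->channels,
                                        frame->nb_samples,
                                        ctx->sample_fmt, 0);
int ret = avcodec_fill_audio_frame(frame, ctx->channels, ctx->sample_fmt,
                                   buf, needed, 0);
```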
[15:30:09 CEST] <furq> realies: a quadro, or lots of xeons
[15:31:05 CEST] <realies> furq, why quadro?
[15:31:12 CEST] <furq> nvenc is artificially restricted to two concurrent jobs on consumer cards
[15:31:31 CEST] <realies> and unrestricted on quadros?
[15:31:50 CEST] <furq> idk about unrestricted but you get more than two
[15:32:00 CEST] <furq> the docs on this are nonexistent
[15:32:25 CEST] <realies> wow
[15:32:48 CEST] <realies> any clues on why is this limited?
[15:33:01 CEST] <furq> so that you buy a quadro
[15:33:02 CEST] <realies> or is nvenc written by nvidia
[15:33:06 CEST] <furq> yeah
[15:33:08 CEST] <realies> meh
[15:33:15 CEST] <furq> nvenc is the asic block on nvidia cards
[15:33:27 CEST] <realies> I get you, I thought it's a software limitation
[15:34:04 CEST] <furq> i believe the consensus in here is to just buy a lot of xeons and use x264
[15:34:18 CEST] <furq> maybe nvenc stacks up better if you're doing hevc encoding though
[15:37:46 CEST] <realies> a quadro sounds like a better plan than multiple xeons
[15:38:28 CEST] <furq> well it depends what you're doing
[15:38:33 CEST] <furq> one i7 might be plenty
[15:39:22 CEST] <realies> just looking forward to upgrade a P4 that I'm using for cloud storage but recently started to transcode full-quality footage for web preview and the poor thing is not happy
[15:40:12 CEST] <furq> oh
[15:40:12 CEST] <realies> I was considering https://www.scan.co.uk/products/asrock-c2750d4i-intel-octa-core-avoton-c2750-ddr3-sata-iii-6gb-s-vga-2x-gbe-lan-1x-ipmi-lan-2x-usb-2
[15:40:25 CEST] <furq> everything i mentioned is overkill then
[15:40:32 CEST] <realies> but I won't know for sure before I see/do some nvidia/cpu benches
[15:40:42 CEST] <furq> an old gtx750 or something will probably be fine
[15:41:03 CEST] <realies> what about gtx750 vs octacore cpu?
[15:41:04 CEST] <furq> you'll definitely want hwaccel with that cpu
[15:41:08 CEST] <realies> asic probably beats cpu
[15:41:08 CEST] <furq> that's an 8-core atom
[15:41:33 CEST] <realies> fair enough
[15:42:02 CEST] <furq> fwiw you can get quad core versions of those for a bit less money
[15:42:33 CEST] <furq> i know someone who's got one in his nas, they're pretty nice
[15:43:54 CEST] <realies> I know, but why go for less cores
[15:44:28 CEST] <realies> if you're running a file server, ffmpeg instances and what not
[15:45:43 CEST] <realies> MAXWELL GEN 2, Standard 4:2:0, 4:4:4 and H.264 lossless encoding, ~900 fps 2-pass encoding @ 720p
[15:45:46 CEST] <realies> woah
[15:46:05 CEST] <realies> that's way better than the current 14fps
[15:46:14 CEST] <realies> on ultrafast
[15:46:56 CEST] <furq> that's nice but it's a bit useless unless you get a quadro
[15:47:07 CEST] <furq> you're not going to be able to use more than 120fps of that
[15:47:23 CEST] <realies> why? I thought if you have a single instance it's just going to be fast
[15:47:39 CEST] <furq> for some reason i've got it in my head that you're doing realtime
[15:47:54 CEST] <realies> I might be :P
[15:47:57 CEST] <realies> if it's fast enough
[15:48:14 CEST] <realies> which it will be
[15:48:45 CEST] <furq> maxwell 1st gen (gtx7**) is probably good enough
[15:48:47 CEST] <furq> it's your money though
[15:49:49 CEST] <realies> i wonder how much faster than the P4 that 8 core atom would go
[15:50:51 CEST] <realies> at least 10 times I guess
[15:51:06 CEST] <furq> i'd be surprised
[15:51:21 CEST] <realies> depends if the encoder is optimised for multithreading
[15:51:26 CEST] <furq> it is
[15:51:32 CEST] <furq> that's a 14W cpu though
[15:51:40 CEST] <realies> And way newer
[15:51:54 CEST] <furq> it's not in the same league as a desktop i5
[15:52:02 CEST] <realies> i'm talking pentium 4 :P
[15:52:14 CEST] <furq> it'll probably be faster, sure
[15:55:47 CEST] <furq> oh nvm that's a 20W part
[15:56:07 CEST] <furq> it is also a lot faster than any other atom
[15:56:14 CEST] <furq> i guess that's why they dropped the name then
[15:56:29 CEST] <realies> I wonder why are they not refreshing the line as this mobo is released some time ago
[15:57:27 CEST] <furq> apparently it's close to a core 2 quad in performance
[15:57:32 CEST] <furq> that's pretty good for 20W
[15:58:54 CEST] <furq> maybe hold off on buying a gpu until you've benchmarked that then
[15:59:11 CEST] <furq> you should easily be able to do 720p30 realtime with that
[15:59:30 CEST] <realies> yeah, although I sort of wanted to be able to do a few at a time
[16:00:40 CEST] <realies> hence the quadro idea
[16:01:10 CEST] <furq> intel have quicksync which is the same idea as nvenc
[16:01:18 CEST] <furq> it's not supported on that cpu though
[16:01:49 CEST] <furq> it's on the i3 but then you lose ecc support
[16:03:31 CEST] <furq> actually i forgot they added ecc support to the new i3s and pentium Gs
[16:03:49 CEST] <furq> i'm sure it'll be characteristically easy to find a board which lets you use it
[16:06:14 CEST] <realies> I doubt I can find a mobo with that many sata ports
[16:06:32 CEST] <realies> I assume the i3s and pentium Gs have integrated GPUs, hence the asic?
[16:06:51 CEST] <furq> yeah
[16:10:03 CEST] <furq> i've seen mini-itx boards for those cpus with 6xSATA
[16:12:20 CEST] <realies> what about 12x? :P
[16:12:59 CEST] <furq> you'll be lucky to get more than 8
[16:13:08 CEST] <furq> those avoton boards are the only mini-itx boards i've ever seen with more than 8
[16:13:14 CEST] <realies> same
[16:13:57 CEST] <furq> you could always add a raid card
[16:14:34 CEST] <furq> although you wouldn't be able to add an nvidia card then
[16:15:04 CEST] <realies> yus
[16:15:10 CEST] <realies> what about ffmpeg + cuda?
[16:15:22 CEST] <furq> cuda/opencl don't really do anything for h264
[16:15:41 CEST] <realies> ah
[16:16:00 CEST] <furq> there are some filters which use them, but they're no use for encoding
[16:24:18 CEST] <realies> apparently xeon-d is the successor of avoton
[16:29:19 CEST] <furq> that looks to be about twice as powerful
[16:35:22 CEST] <realies> which one are you looking at? most of them are 45W
[16:37:15 CEST] <deostroll> Hello, I want to crunch the length of a video to a determined length. Can we do this? This is basically timelapsing but with this specific constraint I just mentioned...
[16:38:32 CEST] <deostroll> Usually those videos don't have audio...
[16:38:34 CEST] <furq> realies: there are some 35W ones
[16:39:25 CEST] <furq> deostroll: https://ffmpeg.org/ffmpeg-filters.html#setpts_002c-asetpts
[16:42:09 CEST] <deostroll> furq, that doesn't have an argument for specifying an output length I want...
[16:43:17 CEST] <furq> -vf setpts=(outputlength/inputlength)*PTS
[16:43:28 CEST] <furq> you'll need to work out the input length separately
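Put together, a sketch of the whole thing (durations are example values, and the input-length probe uses ffprobe, which wasn't named above):

```shell
# Sketch: compress a 120 s clip into a 10 s timelapse.
# 1) find the input duration
ffprobe -v error -show_entries format=duration -of csv=p=0 in.mp4
# 2) scale PTS by target/input; drop audio since these clips have none
ffmpeg -i in.mp4 -vf "setpts=(10/120)*PTS" -an out.mp4
```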
[17:48:59 CEST] <realies> any ideas of free web-based asset managers?
[18:56:18 CEST] <n4zarh> I guess I will ask again, maybe there will be someone to answer :) I have problem with filling audio frame for pcm_mulaw encoding, it always returns -22 and avlog does not show any error/warning
[18:56:52 CEST] <n4zarh> here's my code http://pastebin.com/ycTxME58 - problem appears on line 53, but it might be something before that point
[18:59:31 CEST] <n4zarh> I'm compiling the code with android NDK and JNI library; I am able to decode both video and audio without problem with my built ffmpeg libs
[21:20:33 CEST] <fahadash> in the -t switch can I instead of telling the duration tell the end time?
[21:21:11 CEST] <furq> use -to
[21:21:18 CEST] <fahadash> Thanks
[21:22:51 CEST] <fahadash> furq: Got this error: http://pastebin.com/ACLkrJAU
[21:23:27 CEST] <c_14> Just read the message and do what it tells you.
[21:24:04 CEST] <c_14> >>you are trying to apply an input option to an output file or vice versa. Move this option before the file it belongs to.
[21:27:00 CEST] <someOne_> Hi everyone, I'm currently developing a multimedia player based on ffmpeg libraries (libav*). I implemented the seek feature which works but since I don't/can't flush codec buffers, first frames decoded after seek are frames from the old position. I can't flush codec buffers because avcodec_flush_buffers uses a AVCodecContext structure and AVStream::codec has been marked as deprecated (and seems to be removed in trunk), generating
[21:27:20 CEST] <someOne_> So my question is simple, what is the equivalent of this function/way to do this with a AVCodecParameters parameter ? I tried avcodec_parameters_to_context but it doesn't work. Any idea? Thx!
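For anyone finding this later: the intended replacement for AVStream::codec seems to be owning your own decoder context, built from AVCodecParameters, and flushing that one on seek. A rough, untested sketch:

```c
/* Sketch: keep your own AVCodecContext instead of using AVStream::codec. */
AVCodec *codec = avcodec_find_decoder(stream->codecpar->codec_id);
AVCodecContext *dec = avcodec_alloc_context3(codec);
avcodec_parameters_to_context(dec, stream->codecpar);
avcodec_open2(dec, codec, NULL);

/* ... decode as usual against 'dec' ... */

/* on seek: drop buffered frames from the old position */
av_seek_frame(fmt_ctx, stream_index, target_ts, AVSEEK_FLAG_BACKWARD);
avcodec_flush_buffers(dec);
```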
[21:33:30 CEST] <fahadash> c_14: Not sure what am I doing wrong, I am trying to clip a segment from an input file, it works fine if I use just -t <duration> but -to <end_time> gives me that error. Here is my full command http://pastebin.com/3rkxCuEs
[21:33:56 CEST] <c_14> -to is an output option, place it before the output file, not before the input file
[21:36:51 CEST] <fahadash> I am trying to create a timelapse of a segment, and -filter:v has hosed up my computer and seems to be taking too long
[21:40:39 CEST] <fahadash> looks like the -to didn't make it stop at the end_time I provided it continued after that
[21:41:52 CEST] <fahadash> ah, I better stick with -t and just compute the duration;
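For the archive: if -ss is given as an input option, the input timestamps are reset, so an absolute -to no longer lines up with the original timeline; computing a duration for -t, as fahadash concluded, avoids the ambiguity (times below are example values):

```shell
# Sketch: clip 30 s starting at 5 min; -t is a duration, so it is
# unaffected by where the timestamps restart after the input seek.
ffmpeg -ss 00:05:00 -i in.mp4 -t 00:00:30 -c copy clip.mp4
```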
[00:00:00 CEST] --- Sun Oct 9 2016