[Ffmpeg-devel-irc] ffmpeg.log.20170518

burek burek021 at gmail.com
Fri May 19 03:05:01 EEST 2017

[00:00:05 CEST] <debianuser> cpu usage?
[00:00:28 CEST] <kms_> cpu is ok 50%
[00:01:27 CEST] <kms_> command is ffmpeg -f v4l2 -s 640x360 -r 5 -i /dev/video0 -f alsa -ac 1 -i hw:2,0 -c:v libx264 -preset ultrafast -acodec libmp3lame -ar 44100 -b:a 128k -ar 44100  -b:v 300k -g 10 -f flv rtmp://....
[00:14:13 CEST] <kms_> after some time the speed slows down: speed=0.997x
[00:14:23 CEST] <kms_> and error is coming
[00:16:23 CEST] <kms_> but why does the speed slow down after 20-30 sec if the cpu is ok
[00:34:34 CEST] <debianuser> kms_: Does it still happen if you, e.g. reduce encoded image size to 160x90? E.g. ffmpeg -f v4l2 -s 640x360 -r 5 -i /dev/video0 -f alsa -ac 1 -i hw:2,0 -vf scale=160:90 -c:v libx264 -preset ultrafast -acodec libmp3lame -ar 44100 -b:a 128k -ar 44100  -b:v 300k -g 10 -f flv rtmp://....
[00:35:44 CEST] <debianuser> (that should reduce cpu usage of x264 encoder, you're supposed to see like 10% usage)
[00:35:56 CEST] <debianuser> *10% cpu usage
[00:50:54 CEST] <kms_> hmm... on this computer i have same problem with -vf scale=160:90, but on more powerful computer i have no problem (speed is always 1x)
[00:54:13 CEST] <kms_> oh no
[00:54:31 CEST] <kms_> on powerful pc i try on builtin webcam
[00:54:51 CEST] <kms_> but on same webcam i have same error
[00:56:21 CEST] <debianuser> And CPU usage is rather low, like 10% CPU, right?
[00:58:29 CEST] <kms_> cpu 20%, on builtin webcam i have no error, but on usb webcam(s) i have this error
[01:04:48 CEST] <kms_> (go to reboot)
[01:13:21 CEST] <kms_> yes, this problem on different usb webcams
[01:13:33 CEST] <kms_> cpu 4%
[01:13:55 CEST] <kms_> but no problem on builtin webcam
[01:15:05 CEST] <kms_> oops
[01:15:31 CEST] <kms_> now error on builtin webcam
[01:16:15 CEST] <kms_> cpu 2-3 % it is strange
[01:19:00 CEST] <dystopia_> speed should be 1x
[01:19:17 CEST] <dystopia_> you're encoding from a live source in real time, at a low resolution
[01:19:29 CEST] <dystopia_> so you're never going to go faster than real time
[01:48:37 CEST] <debianuser> kms_: It could be that your webcam provides samples at a slightly lower frame rate, e.g. 4.997 fps instead of 5 fps; then if ffmpeg syncs audio to video, it'd read audio from the buffer slower than it gets filled, and you'll get an overrun. That's just a guess, but if that's right - you need to switch ffmpeg to sync video to the audio stream instead. And I don't know how to do that.
[01:48:47 CEST] <debianuser> Does anyone know if ffmpeg uses video or audio as the base timeline? And if it uses video - is it possible to switch it to using audio as the base?
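A sketch of what testing that guess might look like. `-use_wallclock_as_timestamps` (a generic demuxer option) and `-async` (audio sync) are real ffmpeg options, but whether they cure this particular drift is untested; the device names are simply copied from kms_'s command:

```shell
# Hedged sketch: timestamps taken from the wall clock on both inputs, plus
# -async 1 to let the resampler correct small audio drift. Only runs if a
# capture device is actually present.
CMD="ffmpeg -f v4l2 -use_wallclock_as_timestamps 1 -s 640x360 -r 5 -i /dev/video0 \
 -f alsa -use_wallclock_as_timestamps 1 -ac 1 -i hw:2,0 -async 1 \
 -c:v libx264 -preset ultrafast -c:a libmp3lame -ar 44100 -b:a 128k \
 -b:v 300k -g 10 -f flv rtmp://example/stream"
if [ -e /dev/video0 ]; then
  $CMD
else
  echo "no /dev/video0; command under test: $CMD"
fi
```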
[02:16:10 CEST] <k1ngdom> Hello
[02:17:03 CEST] <k1ngdom> I am having an issue with ffprobe and encrypted hls. I need to get the bitrate but ffprobe always returns 0
[02:21:13 CEST] <k1ngdom> http://paste.ubuntu.com/24595934/
[03:16:11 CEST] <k1ngdom> anyone?
[05:54:57 CEST] <hurricanehrndz> anyone know if john's static build works with vaapi
[08:47:38 CEST] <mkozjak> hello. is there a way to use the hls aes encryption feature from libavformat in my own code, which includes avformat.h? is there a way to call the hls_write_* methods from hlsenc.c from the outside somehow?
[08:55:35 CEST] <mkozjak> should i be using AVOutputFormat's struct methods?
[11:21:53 CEST] <teratorn> is there a way to *accurately* cut a section of video (from non-keyframe to non-keyframe) while using the copy codec for the middle part of video that presumably doesn't really need to be re-encoded?
[11:32:18 CEST] <Mavrik> teratorn, no.
[11:32:33 CEST] <Mavrik> How would that work? :)
[11:32:39 CEST] <teratorn> Mavrik: figured. but do you think it is theoretically possible?
[11:32:54 CEST] <Mavrik> Where would you get information for reconstructing frames before the first keyframe? :)
[11:33:17 CEST] <Mavrik> Maybe if you'd patch the encoder in a way that it would only reencode stuff before first keyframe.
[11:33:57 CEST] <teratorn> Mavrik: hmm well, those GOPs in the middle stand on their own, don't they? so you would have to decode and re-encode the frames from the starting seek point to the first GOP, and re-encode the frames from right after the last GOP until the ending cut point
[11:34:33 CEST] <Mavrik> Yes, that's what I meant.
[11:34:34 CEST] <Mavrik> :)
[11:39:37 CEST] <teratorn> heh, such selling points http://www.fame-ring.com/sk/accurate.frames.html
[11:40:16 CEST] <teratorn> maybe i'll get the chance to build a tool with ffmpeg that can do it :)
[12:21:31 CEST] <Wallboy> Anyone know what could cause scale2ref to have different results with the exact same ffmpeg command? I'm encoding a watermark over my video and I noticed the watermark was stretched, however after I ran the exact same ffmpeg command it came out non-stretched and fine. What sort of ffmpeg voodoo could cause that?
[12:53:29 CEST] <durandal_1707> Wallboy: pastebin full command output
[12:59:50 CEST] <Wallboy> i already tossed the video that had the problem; i couldn't duplicate the error after trying to encode it 10 times, they all came out fine. I had -v quiet on as well so i couldn't see what happened
[13:01:02 CEST] <Wallboy> could fifo filters cause any issues?
[13:01:59 CEST] <durandal_1707> fifo just eats memory if buggy or misused
[13:04:16 CEST] <Wallboy> i have quite a long filter chain. can ffmpeg run the filters in a slightly different order?
[13:08:15 CEST] <DHE> no. each filter will be deterministic. if threading is available, it's one filter using threads for faster processing of a single frame, but the overall pipeline process is unchanged
[13:14:25 CEST] <Wallboy> https://pastebin.com/KPTbKwmT this is the ffmpeg command. It's slightly different per video, as certain calculations are done beforehand to figure out logo placement, audio padding, etc. anull/null are replaced with the correct filter if specified in my program
[13:25:00 CEST] <Wallboy> so when the logo was stretched something with the math of: "scale2ref=iw*0.5:(iw*0.5)*(351/1073)" must have been wrong. The logo was stretched about 25% more vertically
[13:25:54 CEST] <Wallboy> logo is 1073x351 w/h
[13:37:15 CEST] <Wallboy> i've heard of weird things happening when using odd numbers for resolutions. maybe it was a case of that
[13:38:09 CEST] <Wallboy> then again, in this case i don't see why since it's just used in an expression
[13:38:41 CEST] <zerodefect> I've written some sample code to decode a DV video clip and encode it to MPEG.  The clip is that of a testcard.  I notice that the colors on the output are slightly off from the original colors, but in the correct region...
[13:39:28 CEST] <zerodefect> I've checked the AVCodecContext and the AVFrame from the decoder and the color ranges are all unspecified.  The documentation seems to imply that the decoder will set these values. Is that correct?
[13:39:29 CEST] <Wallboy> i'm no expert on that, but I believe that would happen when you switch yuv formats
[13:39:52 CEST] <zerodefect> Yeah, I'm pretty confident that the input clip is using ITU-R BT601-6 525
[13:40:10 CEST] <zerodefect> Sorry, ITU-R BT601-6 625 not 525
[13:41:07 CEST] <Wallboy> output shows the same?
[13:42:30 CEST] <zerodefect> Wallboy: Sorry can you clarify? Do you mean the encoded picture colors?
[13:42:58 CEST] <Wallboy> ya what format is the encode in afterwards?
[13:44:16 CEST] <zerodefect> So I encode it as YUV422P with the m2v encoder, but I don't explicitly set any of the colour ranges anywhere.
[13:44:41 CEST] <zerodefect> It's being wrapped in a .ts container
[13:49:54 CEST] <zerodefect> Is it common for the color properties not to be set by the decoder?
[13:51:17 CEST] <Wallboy> https://ffmpeg.org/ffmpeg-all.html#colormatrix maybe that would help?
[13:53:17 CEST] <Wallboy> the colorspace filter below that seems of interest as well
[13:55:13 CEST] <zerodefects> (changed username).  Thanks I'll dig a bit more. I have a few things I can try.
[13:56:51 CEST] <Wallboy> it might be a PC Levels vs TV Levels thing instead
[13:57:07 CEST] <Wallboy> color_range integer (decoding/encoding,video)
[13:57:07 CEST] <Wallboy> If used as input parameter, it serves as a hint to the decoder, which color_range the input has.
[14:05:31 CEST] <zerodefects> Ok. Thanks.  I'll check that out
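A hedged sketch of what setting those tags explicitly might look like for zerodefect's case. input.dv is a placeholder; `-color_range` before `-i` is the decoder hint quoted above, and the BT.601-625 value names (bt470bg, smpte170m) are ffmpeg's, though whether they are the right tags for this particular clip is an assumption:

```shell
# Hypothetical: tag both the decode (hint) and the encode explicitly instead of
# relying on the decoder to fill in unspecified color properties.
CMD="ffmpeg -color_range tv -i input.dv -c:v mpeg2video -pix_fmt yuv422p \
 -color_primaries bt470bg -color_trc smpte170m -colorspace bt470bg -color_range tv out.ts"
if [ -f input.dv ]; then
  $CMD
else
  echo "no input.dv; command under test: $CMD"
fi
```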
[14:21:02 CEST] <Tatsh> hey all
[14:21:10 CEST] <Tatsh> anyone figured out with alsa how to record 24-bit 96 KHz?
[14:21:45 CEST] <Tatsh> i'm trying to record from my card with pcm_s32le and then reduce the bit depth so i can pass it to flac live (flac can't do 32)
[14:22:01 CEST] <Tatsh> when i use -f alsa -i hw:0,0 it always gives an input of pcm_s16le
[14:22:13 CEST] <Tatsh> i don't see a way to force the bit depth for this?
[14:22:18 CEST] <Tatsh> or format in general
[14:24:16 CEST] <Tatsh> hmm
[14:24:19 CEST] <Tatsh> -acodec before the -i
[14:26:24 CEST] <Tatsh> ffmpeg -f alsa -ar 96000 -acodec pcm_s32le -i hw:0,0
[14:26:34 CEST] <Tatsh> flac supports storing this format after all
[14:26:48 CEST] <Tatsh> cuts the size by 50% on default settings :)
[14:28:02 CEST] <Tatsh> actually, flac doesn't support 32-bit, but ffmpeg automatically converts
[14:28:08 CEST] <Tatsh> very good
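Tatsh's working invocation, assembled into one guarded command. hw:0,0 is his device; the automatic conversion he mentions is ffmpeg resampling the s32 capture down to a sample format the flac encoder accepts:

```shell
# Sketch of the 24-bit/96 kHz ALSA capture to FLAC discussed above.
# Only runs when an ALSA capture device node is present.
CMD="ffmpeg -f alsa -ar 96000 -acodec pcm_s32le -i hw:0,0 -c:a flac out.flac"
if [ -e /dev/snd/pcmC0D0c ]; then
  $CMD
else
  echo "no ALSA capture device; command under test: $CMD"
fi
```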
[16:32:21 CEST] <maziar> hi every one
[16:35:25 CEST] <Guest57739> Hello, I have been trying for days to use ffmpeg shared libraries in Unity, using a C# wrapper, to decode raw h264 frames. It works great on Windows but I'm struggling to get it working on Android. Whatever I try I always end up with a DllNotFoundException. Home-made shared libraries (libTest.so) work great. Has anyone here managed to do what I'm trying to do?
[16:36:36 CEST] <Guest57739> the file -L command says that my lib is a 32-bit ARM EABI5 shared library
[16:47:16 CEST] <kepstin> Guest57739: the issue is probably that one of the libraries which ffmpeg is linked to is missing. Run `ldd libavcodec.so` (or whichever library you're trying to load) and see if there's anything with 'not found' on the right side
[16:47:54 CEST] <Guest57739> in a shell you mean ?
[16:49:25 CEST] <Guest57739> I am not using the executable file, just trying to use the shared libraries in a Unity project
[16:49:54 CEST] <Guest57739> just tried 'ldd libavcodec.so' : 'not a dynamic executable'
[16:52:45 CEST] <mutowang> Could you share the Exception stack log?
[17:03:55 CEST] <Mavrik> Guest57739, you need to use ldd from the NDK for your architecture
[17:03:58 CEST] <Mavrik> not the system one.
[17:04:10 CEST] <Mavrik> or ndk-depends
[17:05:29 CEST] <Guest57739> ok i will try this in a moment, thanks
[17:07:13 CEST] <Guest57739> I have no ldd in toolchains/arm-linux-androideabi-4.9/prebuilt/linux-x86_64/bin/
[17:08:36 CEST] <Guest57739> I will give you the stack log in a moment
[17:09:39 CEST] <Mavrik> did you try ndk-depends?
[17:10:31 CEST] <kerio> what's the best way to encode a terminal screen with h264?
[17:10:40 CEST] <kerio> libx264 with `-tune stillimage`?
[17:13:43 CEST] <Guest57739> (I have all libs in a single folder) it said libavutil.so was missing. I renamed all my libs so <version> is not part of the name anymore
[17:13:52 CEST] <Guest57739> and now everything is found
[17:14:02 CEST] <Guest57739> I will try this in a moment on Unity
[17:14:12 CEST] <Guest57739> Thanks for the tip
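One way to do that dependency check without an on-target ldd, since readelf from host binutils can read ELF files for any architecture. The library name here is a placeholder; the printed sonames must exist under exactly those names on the device, which is why dropping the version suffix helped Guest57739:

```shell
# Sketch: list the DT_NEEDED sonames of a (cross-compiled) shared library.
# LIB is a placeholder path; point it at the .so you actually ship.
LIB=libavcodec.so
if [ -f "$LIB" ] && command -v readelf >/dev/null 2>&1; then
  readelf -d "$LIB" | awk '/NEEDED/ {print $NF}'   # bracketed soname per line
else
  echo "nothing to inspect: $LIB"
fi
```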
[17:25:15 CEST] <thebombzen> kerio: the most important thing is to use yuv444p
[17:25:18 CEST] <thebombzen> and not yuv420p
[17:26:37 CEST] <Guest57739> I come back to you shortly, had to reinstall android studio; takes some time
[17:27:06 CEST] <thebombzen> kerio: when I record a terminal in realtime I just do it losslessly. if you have an 8-bit libx264, do -c libx264rgb -preset ultrafast -qp 0
[17:28:21 CEST] <thebombzen> when I want to recompress it, I use -pix_fmt yuv444p -c:v libx264 -preset:v veryslow -qp:v 0
[17:28:52 CEST] <thebombzen> You can actually record losslessly from the terminal without an enormous bitrate because nearly everything stays the same
[17:29:02 CEST] <thebombzen> inter prediction is also very easy
[17:30:17 CEST] <thebombzen> as for tuning, it doesn't really matter, but I think the best tuning is --tune touhou
[17:30:34 CEST] <thebombzen> which is specifically designed to preserve hard boundaries accurately
[17:31:03 CEST] <thebombzen> although the tune doesn't really matter at higher qualities and bitrates
[17:47:33 CEST] <furq> kerio: stillimage is probably the best
[17:47:39 CEST] <furq> although the best way is to not encode it to a video at all
[17:51:59 CEST] <PlanC> I've just found out about ffmpeg's built-in download function which is really cool
[17:52:19 CEST] <Guest57739> I will come back to you tomorrow if I'm still facing issues with ffmpeg, but you helped me a lot. I'm reinstalling Android Studio; I made a mistake and have to do it again, what a waste of time
[17:52:36 CEST] <PlanC> ffmpeg -i https://website.com/video.mp4 output.webm
[17:52:45 CEST] <PlanC> which downloads the remote file and converts it to webm
[17:53:01 CEST] <PlanC> it's a really cool feature but I've found a huge issue which is bandwidth limitation
[17:53:25 CEST] <PlanC> is there any way to set a download speed limit to the downloads since it's eating up my whole line slowing everything on my PC down?
[17:53:59 CEST] <PlanC> for example: ffmpeg -i https://website.com/video.mp4 --max-download-speed-kb 100 output.webm
[17:54:06 CEST] <furq> not within ffmpeg
[17:54:09 CEST] <furq> https://github.com/mariusae/trickle
[17:54:13 CEST] <furq> you can use something like that
[17:54:28 CEST] <kepstin> PlanC: if that's a problem you have in general, you might want to look at getting a router which can do smart qos, fix the problem as close to the source as you can :/
[17:54:45 CEST] <furq> i was going to suggest qos but it's probably not going to help much if it's one https connection causing issues
[17:54:51 CEST] <PlanC> furq: I actually spent a ton of time with trickle yesterday
[17:54:59 CEST] <furq> presumably from the same ip as the traffic he wants prioritised
[17:55:08 CEST] <furq> unless qos is much smarter now than the last time i messed with it
[17:55:09 CEST] <PlanC> furq: the problem is that since ffmpeg talks to the kernel directly it doesn't work
[17:55:16 CEST] <kepstin> well, what you actually want is smart queue management
[17:55:29 CEST] <PlanC> furq: https://superuser.com/questions/701527/trickle-throttle-all-programs-works-for-statically-linked-executables
[17:55:50 CEST] <PlanC> kepstin: that sounds really complicated, any more info on how I can do that?
[17:56:04 CEST] <furq> "talks to the kernel directly" is a bad way of putting that
[17:56:30 CEST] <kepstin> PlanC: if you have a router  that supports the openwrt/lede project firmware, install it and follow https://lede-project.org/docs/howto/sqm :/
[17:56:50 CEST] <furq> that looks nicer than the dd-wrt/tomato stuff i've used
[17:56:58 CEST] <furq> that was years ago on an actual wrt54g though
[17:57:16 CEST] <PlanC> furq: direct system calls to the kernel would be a better term, but regardless what I'm trying to say is that trickle doesn't work with ffmpeg
[17:57:31 CEST] <kepstin> yeah, most of these firmware don't even support the wrt54g(l) anymore, because ram is too low or flash is too small :/
[17:57:39 CEST] <furq> it doesn't matter anyway
[17:57:57 CEST] <furq> i have 150mbit down and the wrt54g won't do more than ~30mbit on the wan port because the cpu is so slow
[17:58:25 CEST] <PlanC> furq: I don't have that luxury unfortunately so it matters to me lol
[17:58:40 CEST] <kepstin> PlanC: one option is to build a non-static copy of ffmpeg; then stuff that intercepts C library functions should work.
[17:58:50 CEST] <furq> yeah or use your distro ffmpeg
[17:58:57 CEST] <furq> assuming it has one which isn't useless
[17:59:07 CEST] <PlanC> kepstin: oh that might be a good solution
[18:01:02 CEST] <PlanC> kepstin: would it be enough to remove the "--static" pkg config flag when compiling to make it non-static?
[18:01:17 CEST] <furq> no
[18:01:20 CEST] <PlanC> kepstin: and perhaps adding --enable-shared (or doesn't that have anything to do with it)?
[18:01:26 CEST] <furq> enable-shared has nothing to do with it either
[18:01:37 CEST] <furq> you presumably have --extra-ldflags=-static
[18:01:41 CEST] <furq> get rid of that
[18:02:36 CEST] <PlanC> furq: this is what I'm using: https://trac.ffmpeg.org/wiki/CompilationGuide/Centos
[18:03:19 CEST] <PlanC> furq: ffmpeg only has --static when compiling
[18:03:31 CEST] <PlanC> furq: and libx264 has "--enable-static"
[18:04:24 CEST] <furq> weird
[18:04:51 CEST] <furq> what does `ldd /path/to/ffmpeg` return
[18:07:34 CEST] <PlanC> furq: https://pastebin.com/raw/d620fZ1y
[18:07:46 CEST] <furq> yeah that's linked against your system libc
[18:10:32 CEST] <MeXIst3nZ> If I convert an .mp3 to .wav and then to .mp3 again, it won't lose any quality compared to how it was in its initial .mp3 state, right?
[18:10:47 CEST] <furq> it absolutely will lose quality
[18:11:29 CEST] <MeXIst3nZ> MP3 => WAV is lossless, no? So WAV => MP3 should produce an identical file?
[18:11:36 CEST] <furq> no
[18:11:44 CEST] <MeXIst3nZ> I don't see why not.
[18:11:46 CEST] <R1CH> you're decoding already lossy source file
[18:11:53 CEST] <MeXIst3nZ> Yes?
[18:11:56 CEST] <furq> it's exactly the same as going mp3 to mp3
[18:12:00 CEST] <R1CH> just because you're storing that lossy data in a lossless file doesn't magically restore it
[18:12:02 CEST] <furq> you're encoding a different signal
[18:12:07 CEST] <furq> with a lossy encoder
[18:12:10 CEST] <furq> so you're going to lose more data
[18:12:36 CEST] <MeXIst3nZ> Since WAV has no compression at all, the contents of the WAV should "perfectly contain" the MP3, no?
[18:12:45 CEST] <furq> yes, that's not the part you're wrong about
[18:13:09 CEST] <furq> that's exactly what would happen if you encoded an mp3 to mp3
[18:13:15 CEST] <R1CH> encoding to mp3 will always degrade the input no matter what
[18:13:23 CEST] <furq> it would be decoded to pcm and then that decoded signal would be encoded to mp3
[18:13:33 CEST] <furq> the encoder doesn't magically know that the signal was already compressed
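The point can be demonstrated empirically: encode a tone to mp3, round-trip it through wav, and hash the decoded PCM of each generation. This assumes an ffmpeg built with libmp3lame and is skipped otherwise; the two hashes will almost certainly differ, showing the second encode degraded the signal further:

```shell
# Generation-loss demo: mp3 -> wav -> mp3 is not a no-op, because the second
# mp3 encode operates on an already-lossy decoded signal.
if command -v ffmpeg >/dev/null 2>&1 && ffmpeg -hide_banner -encoders 2>/dev/null | grep -q libmp3lame; then
  ffmpeg -hide_banner -loglevel error -f lavfi -i "sine=frequency=440:duration=2" \
         -c:a libmp3lame -b:a 128k -y gen1.mp3
  ffmpeg -hide_banner -loglevel error -i gen1.mp3 -y gen1.wav   # "lossless" wav of lossy data
  ffmpeg -hide_banner -loglevel error -i gen1.wav -c:a libmp3lame -b:a 128k -y gen2.mp3
  # hash the decoded PCM of each generation
  h1=$(ffmpeg -loglevel error -i gen1.mp3 -f md5 -)
  h2=$(ffmpeg -loglevel error -i gen2.mp3 -f md5 -)
  echo "gen1 pcm: $h1"
  echo "gen2 pcm: $h2"
fi
```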
[18:17:43 CEST] <R1CH> is there anything obviously wrong with this new decode api code? https://pastebin.com/raw/rKLvdpZR
[18:17:55 CEST] <R1CH> d->pkt comes from av_read_frame calls
[18:18:45 CEST] <R1CH> everything seems to work fine but the video always comes out corrupted (eg http://i.imgur.com/hwIMMuF.png)
[18:20:19 CEST] <R1CH> the problem is only for certain videos too, but other ffmpeg based players (and ffplay) handle it perfectly
[18:22:14 CEST] <PlanC> furq: it definitely has to be linked to the system since trickle isn't working
[18:22:46 CEST] <PlanC> furq: I'll try to compile it without the --static and see if it helps
[18:23:44 CEST] <furq> if it's linked to libc.so then that won't make any difference
[18:23:50 CEST] <furq> something else is presumably causing the issue
[18:24:06 CEST] <furq> it's definitely not --enable-static, that has absolutely nothing to do with libc
[18:24:23 CEST] <PlanC> furq: no I mean the "--static" for ffmpeg
[18:24:31 CEST] <PlanC> PKG_CONFIG_PATH="$HOME/ffmpeg_build/lib/pkgconfig" ./configure --prefix="$HOME/ffmpeg_build" --extra-cflags="-I$HOME/ffmpeg_build/include" --extra-ldflags="-L$HOME/ffmpeg_build/lib -ldl" --bindir="$HOME/bin" --pkg-config-flags="--static" --enable-gpl --enable-nonfree --enable-libfdk_aac --enable-libfreetype --enable-libmp3lame --enable-libopus --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265
[18:24:31 CEST] <furq> is there such an option
[18:24:47 CEST] <furq> if you mean pkg-config-flags then i doubt that'll make any difference
[18:24:57 CEST] <furq> maybe ldd trickle and see if it's linked to the same libc
[18:25:16 CEST] <furq> i vaguely remember rpm distros handling 32/64 bit in some wonky way
[18:26:15 CEST] <BtbN> R1CH, I think the intended flow is you send input, and then get output in a loop until EAGAIN
[18:27:03 CEST] <BtbN> the new API explicitly does not require a de/encoder to produce one output per input
[18:27:40 CEST] <BtbN> see https://github.com/FFmpeg/FFmpeg/blob/master/libavcodec/avcodec.h#L79
[18:27:41 CEST] <PlanC> furq: could all the "--disable-shared" be causing it?
[18:28:19 CEST] <MeXIst3nZ> How can FFMPEG be so much better than Adobe at saving JPEGs in maximum quality but with much smaller file size?
[18:28:46 CEST] <MeXIst3nZ> I saved a JPEG from Photoshop and it was > 4 MB. The same file, first saved as a PNG and then converted to JPEG with FFMPEG, was ~2 MB.
[18:29:02 CEST] <R1CH> BtbN, this is kind of in a loop, if got_frame isn't 1 then decode_packet is called with the next packet
[18:29:24 CEST] <R1CH> it's just abstracted a bit so we can use both old and new api for users who don't have a recent ffmpeg
[18:29:25 CEST] <BtbN> yeah, but in the case of multiple output per input, you'd accumulate more and more of a backlog
[18:30:20 CEST] <R1CH> we always try calling avcodec_receive_frame before sending more packets though, wouldn't that prevent it?
[18:30:39 CEST] <BtbN> it should, yeah
[18:31:43 CEST] <R1CH> i just can't figure out where the corruption is coming from
[18:32:46 CEST] <BtbN> you still have the same corruption with this?
[18:33:05 CEST] <R1CH> yup
[18:33:48 CEST] <R1CH> its unrelated to the avcodec_decode_video2 issue from yesterday which turned out to be fixed in a newer ffmpeg build
[18:34:14 CEST] <R1CH> but the corruption is persistent with yesterday's git head
[18:34:30 CEST] <R1CH> yet not present in ffplay..
[18:47:13 CEST] <PlanC> perhaps a better way would be to use wget since I then could use --limit-rate to control the download speeds
[18:47:16 CEST] <PlanC> http://stackoverflow.com/a/7137391
[18:47:51 CEST] <PlanC> the only problem is that I want to have two remote inputs
[18:48:25 CEST] <PlanC> for example: wget [URL1] | wget [URL2] | ffmpeg -i pipe:0 -i pipe:1 -vcodec mpeg4 -s qcif -f m4v -y output.flv
[18:48:31 CEST] <PlanC> anyone know how I could do this?
[18:54:15 CEST] <furq> you wouldn't be able to encode on the fly with mp4 anyway
[18:54:29 CEST] <furq> ffmpeg can only do that because it does a range request from the end to get the moov atom
[18:54:42 CEST] <furq> or rather, you wouldn't reliably be able to do that
[19:00:30 CEST] <PlanC> furq: oh no I just copied the example from stackoverflow
[19:00:53 CEST] <PlanC> furq: but is there a way to use 2 pipes with two wget requests running at the same time?
[19:01:38 CEST] <PlanC> I know that it's possible to do "wget -r http://link1.com/file1 & wget -r http://link2.com/file2 &" which would download "file1" and "file2" at the same time
[19:01:44 CEST] <PlanC> but how can I pipe the output to ffmpeg?
[19:03:18 CEST] <kepstin> PlanC: it gets complicated, you'd have to make use of some advanced shell redirect features. doable tho.
[19:04:14 CEST] <PlanC> kepstin: any tips that would get me started because I have no idea where to begin with this?
[19:04:16 CEST] <kepstin> if you're using a modern bash, you can probably do something like "ffmpeg -i <(wget -r http://example.com/whatever)"
[19:05:02 CEST] <kepstin> what that does is open the wget in a subshell with the output as a pipe, then it gets replaced with "/dev/fd/N" in the ffmpeg command line which makes ffmpeg read from that pipe
[19:07:51 CEST] <PlanC> kepstin: thanks I'll try that out now
[19:07:56 CEST] <kepstin> er, you'd need to use "ffmpeg -i <(wget -O - http://example.com/whatever)" so the wget outputs on stdout and sends to the pipe
[19:08:26 CEST] <PlanC> + the "-r" to make it simultaneous, right?
[19:08:45 CEST] <kepstin> no, the r isn't needed here; that's for recursive download of e.g. all linked files on a page
[19:09:18 CEST] <kepstin> which doesn't really make sense if you're downloading a single file?
[19:12:08 CEST] <kepstin> note that if you're using wget anyways, it has built-in rate limiting, you can do e.g. "--limit-rate=1m" to limit it to 1MB/s
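Combining kepstin's process substitution with wget's rate limiting covers PlanC's two-input case. The URLs are placeholders and the command is only printed here, since running it needs real remote files; note furq's earlier caveat that non-streamable mp4 generally won't work over a pipe:

```shell
# Sketch: two rate-limited downloads feeding ffmpeg at once. Process
# substitution <(...) requires bash, not plain sh.
URL1="https://example.com/one.webm"
URL2="https://example.com/two.webm"
CMD="ffmpeg -i <(wget --limit-rate=100k -O - $URL1) -i <(wget --limit-rate=100k -O - $URL2) -map 0:v -map 1:a -c:v libvpx -c:a libvorbis output.webm"
echo "run under bash: $CMD"
```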
[19:13:23 CEST] <hlechner> Hey guys, speaking of ffmpeg thumbnails: generating a mosaic/tile thumbnail from a video with the examples from the ffmpeg website consumes too much CPU and time; a 16GB file can take several minutes (on an intel i7)
[19:13:31 CEST] <hlechner> But on the other hand, if I take separate thumbnails (one thumbnail per command) using the respective time with '-ss', it works instantly; in that case, though, I need to build the mosaic/tile afterwards from the separate thumbnails generated.
[19:13:36 CEST] <hlechner> Anyone know a cleaner/better way to do that?
[19:15:02 CEST] <grublet> hlechner: you could probably use a script to pipe the screenshots to something like imagemagick, which has a command called "append" that can be used to make mosaics fairly easily
[19:16:21 CEST] <PlanC> kepstin: I just tried it out and it's working, thanks! :)
[19:16:31 CEST] <PlanC> kepstin: and it's one file indeed so no need for "-r"
[19:17:41 CEST] <PlanC> kepstin: the "--limit-rate" is indeed the benefit of using wget since trickle wasn't able to limit its speed
[19:18:07 CEST] <kepstin> hlechner: you can probably just do something like "-vf fps=0.1,tile" (using whatever tile options you like), to grab a frame from every 10s of the video. adjust the fps to taste
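kepstin's suggestion made concrete against a synthetic 60-second input (testsrc stands in for the real video): fps=0.1 keeps one frame every 10 seconds, so six frames fill a 3x2 tile:

```shell
# Single-pass mosaic thumbnail; skipped if ffmpeg is not installed.
if command -v ffmpeg >/dev/null 2>&1; then
  ffmpeg -hide_banner -loglevel error \
         -f lavfi -i "testsrc=duration=60:size=320x240:rate=5" \
         -vf "fps=0.1,scale=160:-1,tile=3x2" -frames:v 1 -y mosaic.png
fi
```

As noted below, this decodes the whole file, which is exactly what makes it slow on a 16GB input.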
[19:18:32 CEST] <PlanC> kepstin: it is however for some strange reason giving the warning "Thread message queue blocking; consider raising the thread_queue_size option (current value: 8)" when using wget
[19:19:09 CEST] <kepstin> hlechner: but yeah, that might in some cases be slower than seeking, if you use really large gaps between tiles, since it has to decode the entire video :/
[19:19:25 CEST] <hlechner> grublet: thanks, it seems a good option
[19:19:52 CEST] <hlechner> kepstin: yeah, that is the problem I'm having, it is decoding the whole video :(
[19:20:41 CEST] <c_14> you can try -discard nokey
[19:20:58 CEST] <c_14> then it'll only use keyframes which will speed up seeking and decoding
[19:21:29 CEST] <c_14> that or -skip_frame nokey
[19:22:08 CEST] <c_14> ah, discard is demuxer and not supported by all. so skip_frame is prob. better
[19:24:45 CEST] <hlechner> I have tried with -skip_frame nokey but it seems that on files as big as the 16GB one I'm using it still takes too long
[19:26:39 CEST] <c_14> probably io?
[19:26:43 CEST] <c_14> how long is it taking?
[19:28:17 CEST] <hlechner> more than 10 minutes for sure. but I'm running it from an SSD, so I shouldn't have an I/O problem here; besides, it's not a server but a desktop machine
[19:28:35 CEST] <c_14> 10 min seems too long
[19:28:39 CEST] <c_14> what's your command?
[19:30:40 CEST] <hlechner> here one (example): '-ss', distance ,'-skip_frame', 'nokey','-i', inPath ,'-f', 'image2','-vframes', count || 1,'-aspect', '4:3','-filter:vf', 'select=\'isnan(prev_selected_t)+gte(t-prev_selected_t\\,'+ distance +')\',' + 'scale=\'if(gt(a,4/3),128,-1)\':\'if(gt(a,4/3),-1,96)\',pad=w=128:h=97:x=(ow-iw)/2:y=(oh-ih)/2:color=black,tile=4x1','-y'
[19:31:24 CEST] <hlechner> this example runs on node.js (javascript) but I made most of my tests outside the node, just running from bash
[19:34:57 CEST] <hlechner> and here my command to generate just one frame: ffmpeg -ss start_time_here -i input_here -f image2 -vframes 1 -aspect 16:9 -filter:vf scale="'if(gt(a,16/9),174,-1)':'if(gt(a,16/9),-1,98)',pad=w=174:h=98:x=(ow-iw)/2:y=(oh-ih)/2:color=black" -y preview.jpeg
[19:36:09 CEST] <hlechner> The last command generates its frame instantly, but I need to generate each thumb and afterwards create the mosaic/tile
[19:36:27 CEST] <c_14> the select is probably taking too long
[19:38:12 CEST] <hlechner> I didn't find a better way to do that; it was taken from an ffmpeg example
[19:38:35 CEST] <hlechner> here a different select (from ffmpeg examples too): "select=not(mod(n\,1000)),scale=128:-1,tile=2x2" but the result is just similar
[19:39:52 CEST] <c_14> you can try using the thumbnail filter, might be faster. Can't remember if I ever benched it
[19:45:44 CEST] <kepstin> nah, thumbnail filter does some complicated processing to pick a "representative" frame, I'd imagine it could only be slower
[19:46:10 CEST] <furq> it buffers all the uncompressed frames as well
[19:46:18 CEST] <furq> so it's prone to running out of memory
[19:46:29 CEST] <c_14> it might be better to just run the command in several passes for each frame
[19:46:39 CEST] <c_14> if you know the separation you want anyway
[19:46:49 CEST] <mick27> Howdy folks, anyone running ffmpeg 3.3 + cuda 8 + aws P2 instances around here ?
[19:46:50 CEST] <furq> having tried to do this exact thing before, there's not really an easy answer
[19:47:03 CEST] <furq> it's either going to be slow, memory hungry or both
[19:48:09 CEST] <hlechner> Yea, ffprobe give some details to work and get variables, so I can "know" where to use the '-ss' to take the thumbnails.
[19:48:50 CEST] <kepstin> if you know the separation you want, and it's really big, then maybe do multiple inputs with a different seek on each, trim each input to 1 frame, concatenate them, feed to tile filter?
[19:49:19 CEST] <c_14> can you limit frames on input?
[19:49:31 CEST] <c_14> other than trying to specify a really small -t and hoping?
[19:50:10 CEST] <furq> i'd have thought multiple -ss would be slower than -skip_frame nokey -vf select
[19:50:15 CEST] <kepstin> no, but that would have to be a complex filter chain anyways, so just do a trim=end_frame=1 or whatever
[19:50:17 CEST] <furq> given that you have to seek from the start every time iirc
[19:50:18 CEST] <c_14> though I guess if you've done an ffprobe before you can know the exact duration of the frame you want and pick that
[19:50:37 CEST] <c_14> furq: sure, but you can seek using the index so you can skip a lot of frame processing
[19:50:46 CEST] <furq> the best solution for me was -skip_frame nokey -vf select,scale,thumbnail,tile
[19:50:48 CEST] <kepstin> furq: if the file seeking is indexed and the gap between desired frames really big, seeking can save decoding a lot of keyframes
[19:50:55 CEST] <furq> fair enough
[19:51:30 CEST] <kepstin> in particular if the source file used a really low keyframe interval, i guess :/
[19:51:45 CEST] <furq> yeah i was testing with an h264 mp4 iirc
[19:52:08 CEST] <furq> which i can imagine would tip the scales against ss
[19:52:50 CEST] <hlechner> Guys, maybe my terrible english is the problem here, but what do you mean by "a different seek on each"? would that be generating separate thumbs?
[19:53:01 CEST] <furq> on which note, i started doing something like this with the api but got distracted by a dog with a puffy tail
[19:53:08 CEST] <furq> i should probably finish it
[19:53:10 CEST] <c_14> ffmpeg -ss seek1 -i input -ss seek2 -i input [..]
[19:54:34 CEST] <hlechner> I didn't try that - it seems really useful!! if I can use the same file as each input and use select as well, that would be perfect
[19:54:49 CEST] <mick27> Howdy folks, anyone running ffmpeg 3.3 + cuda 8 + aws P2 instances around here ?
[19:55:18 CEST] <hlechner> I will test it here
[19:55:25 CEST] <c_14> hlechner: you'll have to use a select or trim on each of the inputs to grab only one frame and then use the concat filter to concat all those pads and then pass that to the tile filter
[19:57:02 CEST] <kepstin> don't use select, it won't generate an eof until it sees an eof from its source. use trim :)
[19:57:50 CEST] <kepstin> should be trim=end_frame=1 I think? Assuming frame numbers are 0 based.
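The multi-input variant kepstin describes, sketched end-to-end against a generated sample file: each -ss seeks independently in the index, trim=end_frame=1 keeps one frame per input, and concat feeds the tile filter:

```shell
# Seek-per-input mosaic; skipped if ffmpeg is not installed. The 40-second
# testsrc sample stands in for the real (large, indexed) video file.
if command -v ffmpeg >/dev/null 2>&1; then
  ffmpeg -hide_banner -loglevel error \
         -f lavfi -i "testsrc=duration=40:size=320x240:rate=5" -y sample.avi
  ffmpeg -hide_banner -loglevel error \
    -ss 5 -i sample.avi -ss 15 -i sample.avi -ss 25 -i sample.avi -ss 35 -i sample.avi \
    -filter_complex "[0:v]trim=end_frame=1,setpts=PTS-STARTPTS[a];\
[1:v]trim=end_frame=1,setpts=PTS-STARTPTS[b];\
[2:v]trim=end_frame=1,setpts=PTS-STARTPTS[c];\
[3:v]trim=end_frame=1,setpts=PTS-STARTPTS[d];\
[a][b][c][d]concat=n=4:v=1:a=0,scale=160:-1,tile=4x1" \
    -frames:v 1 -y strip.png
fi
```

Because each input only decodes from its seek point to the next frame, this avoids decoding the whole file, at the cost of opening the input several times.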
[20:00:35 CEST] <hlechner> I don't know yet, I have never used trim, fortunately there is documentation on ffmpeg website
[20:00:51 CEST] <hlechner> I will take a further look there
[20:23:26 CEST] <ThugAim> Guys, on windows I wanna do batch conversion to libx264 mp4 from a text file with all the directories listed in there... how do I do that?
[20:23:43 CEST] <ThugAim> g'dafteroon btw.
[20:27:56 CEST] <ChocolateArmpits> ThugAim, if you use powershel you can do: gc text.txt|%{ gci $_ -file | %{ ffmpeg -i $_.fullname -vcodec libx264 -acodec aac "$($_.directoryname)\$($_.basename).mp4"  } }
[20:28:33 CEST] <ChocolateArmpits> presuming the source file isn't mp4 originally
[20:30:27 CEST] <ChocolateArmpits> eh, you can make it shorter: gc text.txt | gci -file |%{ ffmpeg -i $_.fullname -vcodec libx264 -acodec aac "$($_.directoryname)\$($_.basename).mp4"  }
[20:33:20 CEST] <ThugAim> I'll do some testing. Is there a way using command prompt? I've got the .bat thing down with a directory and was trying to figure out how to get individual files from multiple directories in from a text file with the directories listed. brb after testing.
[20:39:53 CEST] <ChocolateArmpits> it's doable by running command output from a for loop but looks pretty nasty
[20:40:16 CEST] <ChocolateArmpits> imo powershell is more straightforward
[20:49:37 CEST] <zerodefects> Decoding a DV video and then encoding into m2v wrapped in ts.  I notice that the encoded frame had some interesting artifacts which I had originally put down to differences in color primaries, but now I'm not so sure...
[20:49:55 CEST] <zerodefects> Decoded: http://imgur.com/a/V083K
[20:50:07 CEST] <zerodefects> Encoded: http://imgur.com/a/zgj4I
[20:51:05 CEST] <zerodefects> Unfortunately, none of the color characteristics are set in the AVCodecContext or AVFrame by the decoder, so I'm not too sure which primaries it's using.
[21:00:02 CEST] <kepstin> zerodefects: looks like a combination of not enough bitrate and maybe a simple deinterlacer to me
[21:00:30 CEST] <kepstin> but mostly not enough bitrate
[21:01:37 CEST] <zerodefects> Thanks for checking it out :)  Interesting. My bitrate is at 800K. I'll try increasing that.
[21:02:06 CEST] <zerodefects> If I look at the yellow bar, I found it interesting how it shadows all the way to the bottom of the encoded pic.
[21:03:01 CEST] <kepstin> hmm. 800K is low, but for mpeg 2 I wouldn't think it's low enough to do that, at least on mostly static color bars :/
[21:03:16 CEST] <zerodefects> Yeah, that was my thinking.
[21:04:11 CEST] <zerodefects> Do you think the color primaries (YUV ranges) could be playing a part too?
[21:05:35 CEST] <zerodefects> Originally, I thought the clip was BT.601 625, but I see the clip has YUYV values of 0x0C which is outside the range of that color space (0x10 to 0xF0).
[21:10:22 CEST] <kepstin> if it's originally an analog capture, it's normal for values to overshoot the nominal range
[21:10:40 CEST] <kepstin> due to miscalibration or noise or whatever other reason
[21:12:42 CEST] <zerodefects> Would an AtoD not ensure that the values were correctly bounded though?
[21:13:15 CEST] <kepstin> well, it would have to clip the values, when the purpose of the range limit is to give headroom for overshoots like this...
[21:13:19 CEST] <kepstin> it's really not a problem
[21:13:44 CEST] <kepstin> if you need to, you can digitally clip the values (and they'll be clipped during playback by the player)
[21:13:49 CEST] <zerodefects> That makes sense
[21:13:58 CEST] <zerodefects> Oh ok
[21:15:19 CEST] <zerodefects> I'll keep digging. I must be doing something wrong with the encoder because the decoded picture looks fine. Thanks for looking.
[21:16:00 CEST] <furq> does anyone have any amazing insights about segmenting and rejoining mp3s without getting gaps
[21:17:59 CEST] <kepstin> furq: in theory if you break on frame boundaries within a file then rejoin, it should be fine, but concatenating arbitrary mp3s without gaps is impossible :/
[21:18:12 CEST] <furq> well i'm segmenting with ffmpeg
[21:18:51 CEST] <zerodefects> kepstin: One last question. Is it often the case that color characteristics are not set by the various decoders on the AVFrame and.or AVCodecContext? Is that considered a bug?
[21:18:58 CEST] <furq> i'm guessing it's not cutting on a frame boundary or something
[21:19:22 CEST] <kepstin> furq: with -c:a copy it should only be able to copy on frame boundaries, so that's confusing.
[21:19:30 CEST] <kepstin> zerodefects: I don't know :/
[21:19:39 CEST] <furq> it's somewhat non-obvious what i'm doing because it's not something that's useful
[21:19:56 CEST] <zerodefects> Ok. No worries. Thanks :)
[21:20:06 CEST] <furq> i'm writing a script that segments an audio file, encodes each part with mp3 an increasing number of times, then concats them
[21:20:14 CEST] <kepstin> that said, if you're remuxing mp3 with ffmpeg, the extra headers it adds might make concatenating the files back again troublesome
[21:20:14 CEST] <furq> like those images you get of jpeg generation loss
[21:20:43 CEST] <kepstin> furq: oh, well, each pass through the encoder will add a chunk of encoder delay, so yeah that's not gonna work
[21:20:51 CEST] <furq> well even the first two parts have a gap
[21:21:06 CEST] <kepstin> if you decode the result, concatenate the raw audio, then save to flac or something you can probably get it right
[21:21:15 CEST] <kepstin> assuming the decoder handles the delay/sample trim correctly
[21:21:32 CEST] <furq> so er
[21:21:38 CEST] <furq> concat the list of mp3s and save as pcm?
[21:21:50 CEST] <furq> or do you mean save each segment to pcm and concat those
[21:21:53 CEST] <kepstin> decode the mp3s individually to pcm, concat that
[21:22:02 CEST] <furq> i don't see how that'd make a difference but i'll try it anyway
[21:22:08 CEST] <furq> i'll try both, rather
[21:22:57 CEST] <kepstin> the encoder has a delay - it adds a bunch of samples to the start of the first frame which have to be thrown out when decoding. but if you concat at the mp3 level, it'll just play them in place. they're usually just silence
[21:23:07 CEST] <furq> ah
[21:23:15 CEST] <furq> would this affect other lossy codecs
[21:23:30 CEST] <kepstin> yes, affects basically all lossy audio codecs
[21:23:40 CEST] <furq> well i guess if this works then it won't matter
[21:25:02 CEST] <kepstin> the encoder puts some metadata (in the LAME-compatible "INFO" header) saying how many samples at the start/end of the file need to be removed to get only the original audio back
[21:25:22 CEST] <kepstin> I think current versions of ffmpeg write that correctly, but for best results just use the lame cli :/
[21:25:45 CEST] <furq> that would make it much more annoying to make the script support other codecs
[21:27:21 CEST] <furq> i wonder if splitting to wav instead of mp3 would help
[21:28:24 CEST] <kepstin> taking the original file, decoding to pcm/wav, splitting that, then doing your re-encoding on the wav files would give the most accurate start/end times, yeah
[21:28:52 CEST] <kepstin> then you could re-encode those segments as many times as you like, and it *shouldn't* add or remove any samples if the encoders/decoders are working correctly.
[21:30:03 CEST] <furq> atm i'm going flac -> segment with -c mp3 -> encode -> concat with -c copy
[21:30:18 CEST] <furq> i just tried converting them all to wav and concating those, it seemed to help
[21:30:22 CEST] <furq> there's still a click but it's not as bad
[21:30:50 CEST] <furq> segmenting to wav made more sense, it just saves a couple of lines to not do that
[21:32:03 CEST] <kepstin> I'd do anything -> segment with pcm audio -> encode segments separately -> decode segments to pcm -> concat, encoding to anything
[21:32:14 CEST] <furq> yeah i'm doing that now
[21:32:28 CEST] <furq> it unsurprisingly takes a while to run
[21:32:37 CEST] <furq> i should really parallelise it
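kepstin's pipeline ("anything -> segment with pcm audio -> encode segments separately -> decode segments to pcm -> concat") might be sketched like this; file names, the 10-second segment length, and the `-V4` quality are arbitrary placeholders, and it assumes `ffmpeg` and the `lame` cli on PATH, run in a clean directory:

```shell
#!/bin/sh
# 1. split the source into gapless PCM segments
ffmpeg -i input.flac -f segment -segment_time 10 -c:a pcm_s16le seg_%03d.wav

# 2. encode each segment, then decode straight back to PCM; lame's
#    decoder honours the INFO header and trims the encoder delay
for f in seg_*.wav; do
    lame -V4 "$f" "${f%.wav}.mp3"
    lame --decode "${f%.wav}.mp3" "${f%.wav}.dec.wav"
done

# 3. concat the decoded PCM with the concat demuxer, encode once
for f in seg_*.dec.wav; do printf "file '%s'\n" "$f"; done > list.txt
ffmpeg -f concat -i list.txt out.flac
```

The key point from the discussion: the concat happens on decoded PCM (step 3), never on the mp3 frames themselves, so the per-segment encoder delay never ends up in the joined audio.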
[21:33:27 CEST] <tezogmix> Hey all.
[21:34:15 CEST] <ThugAim> heya.
[21:35:21 CEST] <ThugAim> guys been failing at the bash only to realize i might have the list in the .txt file formatted wrong.
[21:36:45 CEST] <tezogmix> I had a 1080p source mp4 that's ~3gb size, 59.94 fps @12.5kbps bitrate, was going to convert it to mp4 @ 29.97 at same resolution... i've been playing around between fast & veryfast presets + -crf 18 and -crf 16... anything else i can consider to keep the bitrate closer to the 12mb mark? it seems I lose ~3000-4000kbps on the conversion
[21:36:47 CEST] <dystopia_> what are you trying to do ThugAim
[21:37:11 CEST] <dystopia_> 12.5kbps :O
[21:37:16 CEST] <dystopia_> you mean mbps?
[21:37:38 CEST] <tezogmix> yes sorry :)
[21:37:44 CEST] <tezogmix> ffmpeg -r 60000/1001 -i input.mp4 -preset veryfast -crf 18 -r 30000/1001 -c:a copy output.mp4
[21:38:05 CEST] <tezogmix> so something like that is what i've started with and switched from fast and 16
[21:38:11 CEST] <ThugAim> dystopia_, I'm trying to batch convert multiple files from multiple directories listed in a text file to h.264 mp4
[21:38:15 CEST] <tezogmix> wondering what else i could try...
[21:38:42 CEST] <dystopia_> if you're reducing the frame rate the bit rate will naturally drop
[21:39:50 CEST] <tezogmix> understood on that part dystopia_ was there any other string to adjust to the above? I wasn't sure...
[21:41:24 CEST] <tezogmix> the file output size isn't a major issue although, something in the same 3gb-range is ok
[21:44:43 CEST] <ThugAim> in the text file i have things listed like #Media for Friday
[21:44:44 CEST] <ThugAim> file "E:/Folder1/file.mpg"
[21:44:44 CEST] <ThugAim> file "E:/Folder2/file2.mxf"
[21:44:44 CEST] <ThugAim> file "E:/Folder3/file3.mxf"
[21:44:44 CEST] <ThugAim> file "E:/Folder4/file4.mov"
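For the non-PowerShell route ThugAim is after, a POSIX-shell sketch that reads a list in exactly the shape pasted above (the demo list here is generated inline so the loop can be tried as a dry run; in real use, point `list` at the actual text file, and note `FFMPEG` defaults to just printing the commands instead of encoding):

```shell
#!/bin/sh
# Demo list in the same shape as the paste above; in real use point
# $list at the actual text file instead of generating one.
list=$(mktemp)
cat > "$list" <<'EOF'
#Media for Friday
file "E:/Folder1/file.mpg"
file "E:/Folder2/file2.mxf"
EOF

# Dry run by default: prints the commands. Set FFMPEG=ffmpeg to encode.
FFMPEG="${FFMPEG:-echo ffmpeg}"

while IFS= read -r line; do
    case "$line" in
        file\ \"*\") ;;     # a 'file "..."' entry
        *) continue ;;      # skip comments and anything else
    esac
    src=${line#file \"}     # drop the keyword and opening quote
    src=${src%\"}           # drop the closing quote
    dir=${src%/*}           # directory part of the path
    base=${src##*/}         # file name part
    $FFMPEG -i "$src" -c:v libx264 -c:a aac "$dir/${base%.*}.mp4"
done < "$list"
rm -f "$list"
```

Each output lands next to its source with the extension swapped to `.mp4`, matching what the PowerShell one-liner above does.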
[21:46:14 CEST] <kepstin> tezogmix: if you actually want a specific bitrate, then do a 2-pass mode with that bitrate. If you're using crf, then just pick a value you like and note that the file size will be variable.
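For reference, the 2-pass shape kepstin means might look like this (bitrate, frame rate, and file names are placeholders taken from the conversation; use `NUL` instead of `/dev/null` on Windows):

```shell
# pass 1: analysis only, discard the output
ffmpeg -y -i input.mp4 -r 30000/1001 -c:v libx264 -b:v 12M -pass 1 \
       -an -f null /dev/null
# pass 2: real encode at the chosen average bitrate
ffmpeg -i input.mp4 -r 30000/1001 -c:v libx264 -b:v 12M -pass 2 \
       -c:a copy output.mp4
```

This targets a specific average bitrate (and hence a predictable file size), whereas `-crf` targets a quality level and lets the size float.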
[21:47:44 CEST] <furq> kepstin: fwiw splitting to pcm made a big difference, joining to pcm also makes a big difference
[21:47:58 CEST] <furq> decoding each file to pcm before joining doesn't seem to make any difference if i also do the other two
[21:48:03 CEST] <maicod> does anyone know of a tool to check a x264/h264 file for frame errors and report those frame numbers ?
[21:48:15 CEST] <tezogmix> thanks for the addon to dystopia_ 's comment kepstin ... how much of a noticeable difference will there be between a crf 15 and crf 18?
[21:48:17 CEST] <furq> there's still a click at the join points but it's much better than it was
[21:48:20 CEST] <furq> there was an audible gap before
[21:48:33 CEST] <tezogmix> i know it's quite subjective
[21:49:26 CEST] <tezogmix> hey furq what's up, btw thanks also from before on the tips on converting from 4k to 1080... it worked out ok!
[21:49:26 CEST] <kepstin> tezogmix: depends a lot on the original source, but unless you're looking really close it's hard to tell the difference between any setting crf18 or better with veryslow, imo.
[21:51:07 CEST] <tezogmix> i see (no pun intended) kepstin -
[21:52:21 CEST] <kepstin> if you're doing multiple generations of encodes, using better than that can be useful to reduce generation loss
[21:53:10 CEST] <tezogmix> some content for example would be at the microscopic level (e.g. if you're looking at a sample of moving cells under a microscope, so the extra details do help)
[21:54:16 CEST] <ThugAim> I might just have to copy all the files to a folder and batch convert from there temporarily
[21:54:45 CEST] <kepstin> for medical/scientific stuff my impression was that you really want to avoid adding any compression artifacts which could distort the data, so yeah, reallly low crf or even lossless might be appropriate
[21:55:46 CEST] <kepstin> (note that crf is a decibel scale, more or less)
[21:55:46 CEST] <tezogmix> is it normal in those cases for the output to be double the size? i did a test before with a crf 12 and crf 14
[21:57:12 CEST] <kepstin> so a change of ~6 crf would double or halve the file size, if I am remembering correctly
[21:57:14 CEST] <tezogmix> putting crf at 0 (lossless), and the file was way too large to the point i had to cancel it, but it was more of the tradeoff to compare different levels of detail
[21:57:25 CEST] <tezogmix> ok that helps with perspective kepstin
[21:58:00 CEST] <tezogmix> so output files larger than the source file in that sense, is not then due to a poorly formatted command line
[21:58:03 CEST] <tezogmix> i take it
[21:58:41 CEST] <kepstin> if you tell x264 to make output larger than the input, it will happily try to find a way
[21:59:22 CEST] <kepstin> it has no idea what the compressed size of your input was anyways :)
[22:00:49 CEST] <tezogmix> i meant otherwise, i just thought it was an interesting observation to see the output bigger than input (video-tech isn't my field of expertise obviously hehe (medical student) and have been kind of appreciating ffmpeg over handbrake),
[22:03:06 CEST] <tezogmix> i'm looking to build a new pc in the summer... what kind of cpu-gpu specs should i be looking for (general consumer, not for commercial) that can handle ffmpeg at some of the more higher quality parameters without taxing the system? right now, i'm doing this on a i5 laptop, windows 7 with 8gb ram, 5400hdd
[22:03:45 CEST] <tezogmix> i think an ssd would help one aspect...
[22:04:59 CEST] <tezogmix> really left i suppose then between amd or an intel variant, i'm not sure how much the gpu would even play in this role... but right now, i can't even playback 4k content on the current machine.
[22:07:32 CEST] <kepstin> for general video playback, gpu can help because they often include a hardware decoder that can make up for the cpu being too slow to decode something
[22:08:15 CEST] <kepstin> but for encoding, particularly high quality stuff, software encoders are preferred. So that's just as many high performance cpu cores as possible :/
[22:08:22 CEST] <tezogmix> i'm not an active gamer but video playback/video capture-streaming would be more of my interest uses
[22:10:00 CEST] <kepstin> if you're doing screen capture/streaming of cpu-intensive applications, the gpu encoders can be useful because they take load off the cpu. Pretty much any modern nvidia gpu would work well for that use case, doesn't even have to be particularly high end.
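The GPU encoders kepstin refers to show up in ffmpeg as separate codecs; for example NVIDIA's NVENC is `h264_nvenc`, assuming a build with NVENC support and a supported card (flags here are a minimal sketch, not tuned settings):

```shell
# hardware H.264 encode; note NVENC's rate/quality knobs differ
# from libx264's, so a bitrate target is the simplest starting point
ffmpeg -i input.mp4 -c:v h264_nvenc -preset slow -b:v 6M -c:a copy output.mp4
```

This offloads the encode to the GPU's fixed-function block, which is why it barely taxes the CPU during capture/streaming, at some cost in compression efficiency versus libx264.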
[22:10:01 CEST] <tezogmix> any particular gpu-cpu recommendations to keep in mind so i can research more about those kepstin ?
[22:10:26 CEST] <furq> the new amd zen stuff is nice for encoding
[22:10:39 CEST] <kepstin> as far as cpu recommendations - if you're doing software encoding "as many high performance cores as you can get"
[22:10:44 CEST] <furq> because of that
[22:10:49 CEST] <kepstin> and yeah, zen is a pretty cheap way to do that :)
[22:10:50 CEST] <furq> 8 cores without the intel premium
[22:11:09 CEST] <tezogmix> ok cool furq and kepstin , i'll look into that...
[22:11:11 CEST] <furq> old intel server stuff is nice as well
[22:11:14 CEST] <furq> it's cheap but it uses a lot of power
[22:11:22 CEST] <furq> i wouldn't really use it for a general-purpose pc
[22:11:32 CEST] <tezogmix> available to general consumers? :) ah, i see furq
[22:11:43 CEST] <furq> well you can just get it on ebay
[22:12:11 CEST] <furq> every so often $bigcorp will buy a bunch of new servers and suddenly there'll be a new batch of haswell xeon board/cpus on ebay for very cheap
[22:12:25 CEST] <tezogmix> so not as applicable for me in that sense... but cool to know if ever i learn more and get one on the cheap to use solely as a standalone machine
[22:12:27 CEST] <kepstin> as sold it'll be hot and noisy :/ but it can be turned into an ok workstation with some creativity.
[22:12:35 CEST] <JEEB> furq: yea I'm waiting for the next wave
[22:12:36 CEST] <furq> generally the dual cpu stuff is incredibly cheap for how powerful they are
[22:12:46 CEST] <furq> but yeah you'll need to put extra effort in to make a serviceable pc out of them
[22:12:47 CEST] <JEEB> E5 2670v2 or something
[22:12:56 CEST] <JEEB> since the v1 was already <3
[22:13:25 CEST] Action: kepstin just got a ryzen 1700 box to play with, it's pretty impressive even at stock speeds.
[22:13:27 CEST] <tezogmix> what kind of price range are you all talking about the latter stuff to check on ebay
[22:13:30 CEST] <tezogmix> ?
[22:13:52 CEST] <JEEB> a first gen E5 2670 is around 70-90 bucks a pop
[22:13:58 CEST] <furq> https://0x0.st/gIf.wav
[22:14:03 CEST] <furq> here's some mp3 v4 generation loss for you
[22:14:06 CEST] <tezogmix> i know my local pc store sells a lot of dell and hp workstation desktops
[22:14:10 CEST] <furq> a new generation every four seconds
[22:14:14 CEST] <tezogmix> not sure of their cpu's
[22:14:31 CEST] <JEEB> kepstin: yea a dual cpu setup of ryzen would be cool too. esp. with the 16 core thing they're pushing out
[22:14:35 CEST] <kepstin> furq: one way to work around 'clicks' at boundaries might be to do overlapping segments, then trim them before concatenating.
[22:14:45 CEST] <furq> JEEB: aren't they calling the zen server cpus EPYC
[22:14:50 CEST] <furq> it's not worth the stigma of owning something called that
[22:15:03 CEST] <JEEB> I think it was THREADRIPPER
[22:15:05 CEST] <JEEB> or something
[22:15:11 CEST] <furq> that's the uarch isn't it
[22:15:15 CEST] <JEEB> could be :D
[22:15:17 CEST] <kepstin> furq: you could have at least flaced that :/
[22:15:20 CEST] <furq> i'm pretty sure they're branding them as EPYC
[22:15:22 CEST] <kepstin> it's kinda big ;)
[22:15:23 CEST] <furq> kepstin: oh yeah
[22:15:27 CEST] <furq> <--
[22:15:35 CEST] <furq> also it would play in the browser if i did that
[22:15:56 CEST] <kepstin> the consumer 16core ones are gonna be Ryzen THREADRIPPER, yeah
[22:16:05 CEST] <kepstin> but those aren't gonna support multi-cpu, apparently
[22:16:11 CEST] <JEEB> aww
[22:16:17 CEST] <JEEB> still, ECC works
[22:16:18 CEST] <JEEB> \o/
[22:16:36 CEST] <kepstin> if you get lucky and find a board that has it wired up and bios support :(
[22:16:43 CEST] <kepstin> the cpus have it, the boards are iffy
[22:16:46 CEST] <furq> yeah that's always been a pisser with amd boards
[22:16:47 CEST] <JEEB> well yea
[22:17:03 CEST] <furq> i was going to build an am1 system for my nas but it was impossible to get reliable information on which boards had ecc
[22:17:04 CEST] <JEEB> but at least there's results that when the sticks get strained there's actual error correction happening
[22:17:08 CEST] <JEEB> which is guy
[22:17:09 CEST] <JEEB> *gut
[22:17:47 CEST] <kepstin> I've seen confirmation that at least some ASUS and ASRock X370 boards have working ecc, I think the asus one even has some bios config options for scrubbing and whatnot
[22:18:06 CEST] <furq> http://vpaste.net/O4ZYF
[22:18:09 CEST] <furq> there's the script fwiw
[22:19:16 CEST] <furq> it's a shame the guy who didn't believe in generation loss left hours ago
[22:19:23 CEST] <furq> since that's the reason i thought this would be a good idea
[22:30:04 CEST] <james999> generation loss?
[22:50:20 CEST] <tezogmix> ok thanks all again for the chat and additional feedback... have a good weekend to come!
[23:44:36 CEST] <riataman> hey
[23:44:44 CEST] <durandal_170> hi
[23:44:55 CEST] <riataman> I'm trying to save a rtsp stream to segmented mp4 files
[23:45:01 CEST] <riataman> but those files don't work in iOS
[23:45:18 CEST] <riataman> and are kind of funny in chrome (desktop and mobile) like the slider acts very weird
[23:45:27 CEST] <riataman> they work very well in firefox
[23:45:41 CEST] <riataman> I'm using this line:
[23:45:44 CEST] <riataman> ffmpeg -loglevel error -i rtsp:// -c copy -an -map 0 -f segment -segment_time 60 -segment_format mp4 -strftime 1 /root/video-recordings/video-%Y%m%d-%H%M.mp4
[23:46:14 CEST] <riataman> I have tried a bunch of options like -segment_format_options movflags=+faststart
[23:46:22 CEST] <riataman> but nothing can make it play in iOS
[23:46:23 CEST] <riataman> any ideas?
[23:47:36 CEST] <riataman> I would love to avoid transcoding if possible
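One thing worth trying for the iOS case, sketched here as an assumption rather than a confirmed fix: iOS plays HLS natively, so keeping the stream copy but letting the hls muxer do the segmenting sidesteps the plain-MP4 playback problem entirely (segment length and paths mirror the command above; playlist/segment naming is illustrative):

```shell
# same stream copy, but emit 60s MPEG-TS segments plus an HLS playlist;
# -hls_list_size 0 keeps every segment in the playlist for later review
ffmpeg -loglevel error -i rtsp://camera -c copy -an \
  -f hls -hls_time 60 -hls_list_size 0 \
  /root/video-recordings/index.m3u8
```

The playlist (and the individual `.ts` segments) can then be served to iOS Safari directly, with no transcoding.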
[00:00:00 CEST] --- Fri May 19 2017

More information about the Ffmpeg-devel-irc mailing list