[Ffmpeg-devel-irc] ffmpeg.log.20171129
burek
burek021 at gmail.com
Thu Nov 30 03:05:01 EET 2017
[00:38:57 CET] <whysohard> Hi guys. For years I couldn't understand. Ffmpeg is very powerful, but it cannot handle a very simple trim case: trimming the end of a file. There are several options -ss -sseof -t... but they're not sufficient. There is a question about that: https://stackoverflow.com/questions/31862634/i-need-to-cut-mp4-videos-a-part-at-the-beginning-and-a-part-at-the-end-in-a-bat The solution is sooo complex. Why don't we have an easy single command :(
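The pain point is that removing the last N seconds needs the total duration first; a minimal sketch of the usual two-step workaround (assuming a shell with ffprobe and bc available; numbers and filenames are placeholders):

    dur=$(ffprobe -v error -show_entries format=duration -of default=nw=1:nk=1 input.mp4)
    ffmpeg -i input.mp4 -t "$(echo "$dur - 10" | bc)" -c copy output.mp4    # drop the last 10 seconds
    ffmpeg -sseof -10 -i input.mp4 -c copy tail.mp4                         # or keep only the last 10 seconds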
[01:46:11 CET] <anomie____[m]> I just noticed a couple of rtp streams in a video file. I'm wondering how I can extract them and view their contents.
[02:16:12 CET] <TheRock> a question
[02:16:31 CET] <DHE> a response
[02:17:12 CET] <TheRock> can you answer my question
[02:20:33 CET] <klaxa> if you can ask one, maybe
[02:24:48 CET] <TheRock> do you know the answer
[02:27:09 CET] <klaxa> unknown until you ask the question
[02:28:45 CET] <TheRock> before i can ask the question, i must know if you can answer it
[02:30:02 CET] <klaxa> sounds like a deadlock, i'm out
[02:31:14 CET] <TheRock> k, i will try to ask it later
[02:38:17 CET] <DHE> acknowledgement and acceptance of terms
[02:40:30 CET] <zash> Tomorrow, on "How to ask questions on IRC", will TheRock finally ask their question? Tune in and see!
[03:20:50 CET] <raytiley_> i'm messing around with ffmpeg to generate HLS from a decklink card using dshow. I think i'm seeing the bitrate of the files grow if the decklink card doesn't get video for a while and x264 is encoding black frames... the output is still working ok.. the files are just too big, does that make sense? `-c:v libx264 -x264opts keyint=60:no-scenecut -s 416x234 -r 29.97 -b:v 200k -preset ultrafast -profile:v main -level 3.0
[03:20:50 CET] <raytiley_> -pix_fmt yuv420p -y -c:a aac -ar 48000 -hls_list_size 5 out.ts` is the output part of the command i'm using
[03:42:57 CET] <CCFL_Man> kerio: the transport stream identified the video stream as mpeg2
[03:43:11 CET] <CCFL_Man> but it's really mpeg4
[03:46:12 CET] <dystopia_> your tivo records it as mpeg4
[03:46:36 CET] <dystopia_> it was probably broadcast originally as mpeg2
[03:46:55 CET] <dystopia_> but pvr's including tivo don't record the transport stream or program stream raw
[03:47:22 CET] <dystopia_> they usually transcode it on the fly to save space
[03:56:35 CET] <CCFL_Man> dystopia_: no, the hd channels of my cable are mpeg4
[03:56:49 CET] <CCFL_Man> tivo dumps the stream to the MFS database
[03:57:07 CET] <CCFL_Man> they do record it raw
[03:57:21 CET] <CCFL_Man> Tivo always recorded raw
[04:05:06 CET] <CCFL_Man> except for analog, where an mpeg2 encoder was used.
[04:07:23 CET] <ian5v> hi all, looking for a way to fill the black portions of a screen on a video taken on a phone in portrait mode with some blurred out still from the video
[04:07:43 CET] <ian5v> news channels do this pretty frequently for videos they get
[04:08:16 CET] <ian5v> has anyone seen it done in ffmpeg? if not; elsewhere?
[04:11:24 CET] <ian5v> something that looks roughly like this: https://youtu.be/LdCLQ2DQ0NY
[04:13:30 CET] <ian5v> (worth noting: the video doesn't have black bars yet)
[04:29:20 CET] <CCFL_Man> SD channels are mpeg2, HD channels are mpeg4
[05:15:27 CET] <furq> ian5v: scale,boxblur;overlay
[05:17:27 CET] <furq> -filter_complex "[0:v]scale=1920:-2,boxblur=4:4[bg];[bg][0:v]overlay=240:0[out]" -map "[out]"
[05:17:35 CET] <furq> something like that but you'll want to mess around with the actual numbers
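One way to flesh furq's suggestion out into a full command, using split so the same input feeds both layers (all numbers are guesses to be tuned; the audio mapping is optional):

    ffmpeg -i portrait.mp4 -filter_complex \
      "[0:v]split[a][b];[a]scale=1920:1080,boxblur=10[bg];[b]scale=-2:1080[fg];[bg][fg]overlay=(W-w)/2:0[out]" \
      -map "[out]" -map 0:a? -c:a copy landscape.mp4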
[05:17:36 CET] <KTSamy> some of the ffmpeg recommended download sources like evermeet.cx and ffmpeg.zeranoe.com are not providing the complete source code. Thus they are breaking the LGPL. But the ffmpeg team still recommends them.
[05:18:39 CET] <KTSamy> According to the LGPL, they must provide the source code along with the scripts used to control compilation, so that we can produce the exact library or executable.
[05:20:11 CET] <KTSamy> if ffmpeg itself supports those who are violating the LGPL, how can we expect others to comply with the LGPL?
[05:20:59 CET] <furq> zeranoe does provide the source
[05:21:03 CET] <furq> there's a link to it at the top of the page
[05:22:13 CET] <furq> actually nvm that just links to the ffmpeg repo
[05:22:23 CET] <furq> he definitely used to provide the source for everything
[05:22:38 CET] <furq> and also providing it on request is still lgpl compliant
[05:25:42 CET] <KTSamy> Which section of LGPL allows disclosing the sources only on demand?
[05:27:27 CET] <furq> https://www.gnu.org/licenses/gpl-faq.en.html#WhatDoesWrittenOfferValid
[05:27:46 CET] <furq> the zeranoe builds include the license text so that offer is there
[05:28:11 CET] <furq> and he definitely used to autogenerate and package all the sources, so i assume you can still get them on request
[05:33:39 CET] <KTSamy> I can understand that they might be using a script to autogenerate the binary based on the latest/specific version of the sources.
[05:34:14 CET] <KTSamy> but, it must be disclosed to comply with LGPL.
[05:34:34 CET] <KTSamy> I will contact them first to find out their stance on this.
[05:55:21 CET] <ian5v> furq: thank yoU!!
[05:56:20 CET] <ian5v> i'll give it a shot
[05:58:59 CET] <KTSamy> @furq
[05:59:03 CET] <KTSamy> thank you
[08:12:09 CET] <KTSamy> furq: I have received a reply from evermeet.cx
[08:12:43 CET] <KTSamy> They are not ready to release the build script to compile the binaries
[08:13:43 CET] <KTSamy> Their Reply:
[08:14:35 CET] <KTSamy> I'm sorry, I won't release my script to build the binaries.
[08:14:35 CET] <KTSamy> Some of the 3rd party libraries have broken build systems and some require manual intervention to create static libs to then be able to create a static binary. These things took many, many hours of trial and error and years of maintenance of my script to get it working.
[08:16:35 CET] <KTSamy> They are violating LGPL terms.
[09:20:35 CET] <foul_owl> How do I reverse a video with filter_complex?
[09:24:02 CET] <Nacht> -i inputfile.mp4 -vf reverse reversed.mp4
[09:27:58 CET] <foul_owl> Already using filter_complex to do other stuff, so -vf reverse doesn't work
[09:28:37 CET] <KTSamy> can you just provide a sample command?
[09:33:59 CET] <Nacht> Then add reverse to your filter complex
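Since a sample command was asked for, a hedged example of what "add reverse to your filter complex" might look like (the crop stands in for whatever other filtering is already in the graph; note that reverse buffers the whole stream in memory, and areverse does the same for audio):

    ffmpeg -i in.mp4 -filter_complex \
      "[0:v]crop=640:480,reverse[v];[0:a]areverse[a]" \
      -map "[v]" -map "[a]" out.mp4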
[10:58:54 CET] <pihpah> Hi everyone, I've got customer feedback: she reported that a video file has an out-of-sync audio stream. I checked it myself and noticed the same problem, but I did that on Windows using the Chrome browser; today I checked the same video on Linux and everything worked just fine. Any idea? This is ffprobe's output https://pastebin.com/1kfgz0m4
[11:12:54 CET] <pupp> How to use dshow device "virtual-audio-capturer" inside a amovie filter?
[11:13:00 CET] <pupp> I tried this, but it didn't work. "No such file or directory".
[11:13:04 CET] <pupp> ffplay -f lavfi "amovie='virtual-audio-capturer'[aud];[aud]asplit[aud][out1];[aud]showcqt=1280x720:volume=30:volume2=30[out0]"
[12:02:35 CET] <Brian_> im cropping/scaling a video with ffmpeg. It works for most videos but when i try to process one with a rotation in the metadata i get an error:
[12:02:37 CET] <Brian_> https://pastebin.com/raw/h3JKA5qv
[12:02:56 CET] <Brian_> i compiled ffmpeg myself with minimal settings so maybe im missing a filter?
[12:03:29 CET] <Brian_> 'Error reinitializing filters!'
[12:36:49 CET] <pihpah> mp4 files made for streaming can't be played on macOS, I am having problems with those both in Safari and Firefox, on Linux and Windows everything is fine. Any idea what's wrong?
[12:44:50 CET] <Chloe> pihpah: ffprobe them
[12:50:44 CET] <pihpah> Chloe: https://pastebin.com/RJfHf8qK
[12:50:57 CET] <Martchus> pihpah: maybe those are mp4 dash? what happens if you just remux the files in question (with ffmpeg)?
[12:52:42 CET] <pihpah> Martchus: did not try that, I am using -crf 28 -strict -2 options when transcoding video streams.
[12:53:08 CET] <pihpah> Maybe the problem is in the video stream rather than the audio.
[12:53:38 CET] <KTSamy> Probably it's an issue with hardware acceleration
[12:54:28 CET] <KTSamy> have you used any other encoding parameters?
[12:55:36 CET] <pihpah> KTSamy: I am thinking of using -preset fast option instead of -crf, would that do it? I mean if there is a possible problem with video decoding and the speed of that process.
[12:58:57 CET] <KTSamy> I think it wouldn't fix anything.
[13:00:20 CET] <KTSamy> I have tried a file after transcoding. It works absolutely fine for me.
[13:00:31 CET] <KTSamy> can you share a sample?
[13:09:39 CET] <pihpah> KTSamy: https://www.dropbox.com/s/2i0nv33ctorxvlh/S06E06_720p.mp4?dl=0
[13:26:07 CET] <KTSamy_> pihpah: it plays absolutely fine for me on both Firefox & Safari
[13:32:09 CET] <SortaCore> it may be processor speed?
[13:32:16 CET] <SortaCore> maybe the hardware isn't up to par
[13:33:28 CET] <pihpah> What about profile levels? https://trac.ffmpeg.org/wiki/Encode/H.264#Compatibility
[13:33:53 CET] <SortaCore> yea, I'm not sure that would cause lag?
[13:34:02 CET] <SortaCore> that might make it undecodable
[13:34:13 CET] <SortaCore> although, maybe a slower software decode would be used by the browser automatically
[13:34:29 CET] <pihpah> Well, on macOS I can't play them at all.
[13:34:49 CET] <SortaCore> have you checked the profile levels that your mac supports?
[13:36:28 CET] <SortaCore> you could try baseline, main and high
[13:36:30 CET] <SortaCore> see where it dies
[13:36:56 CET] <SortaCore> baseline 3.0, 3.1, main 3.1, high 4.0, 4.1, 4.2
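A sketch of trying that: re-encode the same clip at each profile/level pair and see which ones the Mac plays (filenames are placeholders; yuv420p per the QuickTime note further down):

    ffmpeg -i in.mp4 -c:v libx264 -profile:v baseline -level 3.0 -pix_fmt yuv420p -c:a copy test_baseline30.mp4
    ffmpeg -i in.mp4 -c:v libx264 -profile:v main -level 3.1 -pix_fmt yuv420p -c:a copy test_main31.mp4
    ffmpeg -i in.mp4 -c:v libx264 -profile:v high -level 4.0 -pix_fmt yuv420p -c:a copy test_high40.mp4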
[13:37:01 CET] <KTSamy_> I am also checking in mac
[13:37:07 CET] <BtbN> I don't think any actual baseline encoder even exists
[13:37:15 CET] <BtbN> it's always constrained baseline
[13:37:23 CET] <KTSamy_> Macbook Pro with High Sierra
[13:38:46 CET] <SortaCore> wasn't there a big security risk announced for high sierra?
[13:38:50 CET] <SortaCore> https://www.theverge.com/2017/11/28/16711782/apple-macos-high-sierra-critical-password-security-flaw
[13:38:51 CET] <KTSamy> h264 hardware acceleration might be broken on your device.
[13:39:12 CET] <KTSamy> yeah, noticed that. hope they will announce a fix soon.
[13:39:31 CET] <KTSamy> disabled Remote Login to be on the safe side.
[13:40:11 CET] <SortaCore> it's to do with root user, if y'all haven't, add a password for it
[13:40:56 CET] <SortaCore> there's a note in h264 that QuickTime only supports h264 with yuv420p
[13:57:13 CET] <ritsuka> pihpah: I can play that file even in the old and deprecated QuickTime Player 7
[13:58:27 CET] <pihpah> ritsuka: weird
[13:58:52 CET] <pihpah> Okay, I made some changes to ffmpeg options, will try on another video.
[13:59:14 CET] <ritsuka> sync issues in some players are caused by those players' lack of edit list support
[14:01:09 CET] <ritsuka> in your file the audio track is delayed by 978 ms, if the player can't handle the edit list, it will be out of sync
[14:03:05 CET] <pihpah> ritsuka: how did you figure that out? I could not perceive any audio delay.
[14:47:16 CET] <seirl> hi
[14:47:18 CET] <seirl> i don't understand why the -to parameter cannot be used before -i
[14:47:56 CET] <seirl> is there a reason why i can't just do ffmpeg -ss start -to end -i input.mp3 output.mp3?
[14:49:00 CET] <seirl> instead i either have to use relative position with -t, or do the cut on the output (which isn't really good when you want to integrate that to another file)
[14:49:24 CET] <seirl> or use -ss start -i input -to end -copyts which is both inconvenient and stupid
[14:50:00 CET] <seirl> i have trouble understanding the technical reason behind that choice
[14:50:23 CET] <seirl> anyone could shed some light on the issue for me? :-)
[14:50:31 CET] <BtbN> Nobody implemented it.
[14:51:09 CET] <BtbN> And since there is no overhead from specifying it on the output, nobody cares enough
[14:52:05 CET] <seirl> well that's the thing, i looked at the code and it *looks* like it is implemented: https://github.com/FFmpeg/FFmpeg/blob/86cead525633cd6114824b33a74d71be677f9546/fftools/ffmpeg_opt.c#L963
[14:52:32 CET] <seirl> this function handles a non-null recording_time option
[14:52:38 CET] <seirl> but maybe i'm mistaken?
[14:52:58 CET] <seirl> stop_time* sorry, not recording_time which is for -t
[14:53:10 CET] <seirl> line 989
[14:53:45 CET] <BtbN> https://github.com/FFmpeg/FFmpeg/blob/master/fftools/ffmpeg_opt.c#L3426 it's an input and output option.
[14:54:32 CET] <seirl> oh... 11 days ago https://github.com/FFmpeg/FFmpeg/commit/80ef3c83601881ff2b6a90fa5c6e82c83aad768f
[14:54:41 CET] <seirl> thank you for helping me find that commit!
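For reference, the two forms being discussed look like this; the first only works in builds that contain the commit seirl found, the second is the equivalent on older releases:

    ffmpeg -ss 00:01:00 -to 00:02:00 -i input.mp3 -c copy output.mp3
    ffmpeg -ss 00:01:00 -i input.mp3 -t 60 -c copy output.mp3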
[14:56:44 CET] <dradakovic> guys i have a problem compiling ffmpeg with libmp3lame enabled. I do ./configure --enable-libmp3lame but i get error: libmp3lame >= 3.98.3 not found
[14:57:19 CET] <DHE> you might need to export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig or such if it's a custom build not installed to standard locations
[14:58:15 CET] <dradakovic> Understood. It would be easier then to install it to a standard location. How can i do that with ./configure?
[14:58:42 CET] <seirl> --prefix
[14:58:44 CET] <BtbN> Installing into your system manually is a bad idea, don't do that.
[14:58:45 CET] <seirl> maybe?
[14:58:57 CET] <dradakovic> I am doing that because the yum version of ffmpeg is too old and i'm a noob at compiling
[15:02:22 CET] <dradakovic> How else can i obtain a new 3.4 version on centos? Did i already ruin it if i ran ./configure and make install?
[15:02:42 CET] <dradakovic> There was some installation going on
[15:03:15 CET] <BtbN> You might have made a mess that's hard to clean up
[15:04:49 CET] <dradakovic> Hmm. So how exactly should i have run my ./configure command?
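A sketch of the local-prefix route DHE and seirl are pointing at, which keeps everything out of the system directories (paths and the lame version are arbitrary; lame ships no pkg-config file, so --extra-cflags/--extra-ldflags do the pointing here rather than PKG_CONFIG_PATH):

    cd lame-3.100
    ./configure --prefix="$HOME/ffmpeg_build" --disable-shared
    make && make install
    cd ../ffmpeg
    ./configure --prefix="$HOME/ffmpeg_build" --enable-libmp3lame \
        --extra-cflags="-I$HOME/ffmpeg_build/include" \
        --extra-ldflags="-L$HOME/ffmpeg_build/lib"
    make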
[15:33:35 CET] <tolland> the problem with all this GPL stuff on the mailing list is that no one is paying attention to my mail about getting ONVIF data from RTSP :-(
[15:41:26 CET] <atomnuker> ping the patch
[16:49:24 CET] <rom1v> hi, on ubuntu 16.04, avcodec_send_packet() does not exist https://www.ffmpeg.org/doxygen/3.1/group__lavc__decoding.html#ga58bc4bf1e0ac59e27362597e467efff3
[16:49:53 CET] <rom1v> how can I easily find the exact version where it appears, so that I can #if LIBAVCODEC_VERSION_INT < AV_VERSION_INT(?, ?, ?) ?
[16:51:55 CET] <rom1v> s/appears/appeared/
[17:00:06 CET] <rom1v> 7fc329e2dd6226dfecaa4a1d7adf353bf2773726 ok, the version was bumped by the commit itself
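doc/APIchanges lists that commit (7fc329e) as lavc 57.37.100, so a guard along these lines should pick the right code path:

    #if LIBAVCODEC_VERSION_INT >= AV_VERSION_INT(57, 37, 100)
        /* new API: avcodec_send_packet() / avcodec_receive_frame() */
    #else
        /* older libavcodec: fall back to avcodec_decode_video2() */
    #endif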
[18:20:25 CET] <SortaCore> Couldn't send an ending video image to encoder: avcodec_send_frame failed with error -11: Resource temporarily unavailable
[18:24:39 CET] <BtbN> "ending video image"?
[18:26:31 CET] <SortaCore> the remaining frames received from the rtsp format context, after a null packet was sent to rtsp
[18:26:50 CET] <BtbN> rtsp is not an encoder
[18:27:13 CET] <BtbN> And if you send EOF to an actual encoder, it will go into EOF mode, you cannot send it any further frames.
[18:27:23 CET] <SortaCore> yea, I'm transcoding
[18:27:56 CET] <SortaCore> and I've not sent the null to the encoder yet, not until the attempt to receive from RTSP goes AVERROR_EOF
[18:30:00 CET] <SortaCore> atm I do avcodec_send_packet with null packet, then I do avcodec_receive_frame(rtsp) until EOF and avcodec_send_frame to encoder, then send null frame to encoder and do avcodec_receive_packet until it EOFs
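That -11 is AVERROR(EAGAIN): for avcodec_send_frame() it means the encoder wants its pending output read with avcodec_receive_packet() before it accepts more input. A compact sketch of the drain order described above, not SortaCore's actual code (dec and enc are already-opened contexts, write_packet() stands in for the muxing step, error handling trimmed):

    #include <libavcodec/avcodec.h>

    static int drain(AVCodecContext *dec, AVCodecContext *enc,
                     int (*write_packet)(AVPacket *))
    {
        AVFrame  *frame = av_frame_alloc();
        AVPacket *pkt   = av_packet_alloc();
        int ret;

        avcodec_send_packet(dec, NULL);                 /* tell the decoder no more input is coming */
        while ((ret = avcodec_receive_frame(dec, frame)) >= 0) {
            avcodec_send_frame(enc, frame);             /* hand the remaining frames to the encoder */
            av_frame_unref(frame);
        }                                               /* loop ends on AVERROR_EOF */

        avcodec_send_frame(enc, NULL);                  /* now flush the encoder itself */
        while ((ret = avcodec_receive_packet(enc, pkt)) >= 0) {
            write_packet(pkt);                          /* mux the final packets */
            av_packet_unref(pkt);
        }

        av_frame_free(&frame);
        av_packet_free(&pkt);
        return ret == AVERROR_EOF ? 0 : ret;
    }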
[18:30:15 CET] <SortaCore> would be neater with some C++ streams amirite
[18:30:41 CET] <SortaCore> although I don't know C++ well enough to do that
[18:33:40 CET] <SortaCore> atm to do transcoding I'm getting the YUV420P frames from rtsp, passing them through sws_scale to convert to NV12, then passing them to the encoder
[18:33:55 CET] <SortaCore> otherwise H264 won't be all hardware-ified
[18:34:12 CET] <SortaCore> although, it's not QSV hw frame, so... it may be not very hardware anyway
[18:34:21 CET] <SortaCore> just regular NV12
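And a rough sketch of that yuv420p -> nv12 hop, again not the actual code (same dimensions on both sides assumed, error checks omitted):

    #include <libswscale/swscale.h>
    #include <libavutil/frame.h>

    /* Convert one decoded yuv420p frame into an NV12 frame of the same size. */
    static AVFrame *to_nv12(struct SwsContext **sws, const AVFrame *src)
    {
        AVFrame *dst = av_frame_alloc();
        dst->format = AV_PIX_FMT_NV12;
        dst->width  = src->width;
        dst->height = src->height;
        av_frame_get_buffer(dst, 0);                    /* allocate the NV12 planes */

        *sws = sws_getCachedContext(*sws, src->width, src->height, AV_PIX_FMT_YUV420P,
                                    dst->width, dst->height, AV_PIX_FMT_NV12,
                                    SWS_BILINEAR, NULL, NULL, NULL);
        sws_scale(*sws, (const uint8_t * const *)src->data, src->linesize,
                  0, src->height, dst->data, dst->linesize);
        dst->pts = src->pts;                            /* keep the timestamp for the encoder */
        return dst;
    }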
[18:53:54 CET] <Cracki> -f image2, how to specify individual images explicitly? multiple -i? comma-separated?
[19:04:15 CET] <Cracki> got a build that doesn't have globbing, so image2 with glob not possible
[19:21:12 CET] <relaxed> Cracki: use a static build
[19:21:21 CET] <Cracki> I have a static build
[19:22:03 CET] <relaxed> And it doesn't have glob support? Look at "ffmpeg -h demuxer=image2"
[19:23:09 CET] <Cracki> [image2 @ 00000000004f9bc0] Pattern type 'glob' was selected but globbing is not supported by this libavformat build *.png: Function not implemented
[19:23:28 CET] <relaxed> which static build are you using?
[19:23:49 CET] <relaxed> pastebin.com your command and output
[19:23:51 CET] <Cracki> minor inconvenience. I have to take the -f concat -i somelist way
[19:24:00 CET] <Cracki> ok hold on I'll check
[19:25:06 CET] <Cracki> the 3.4 build from zeranoe
[19:26:11 CET] <Cracki> https://pastebin.com/y6Q9s9T3
[19:34:13 CET] <relaxed> Cracki: try without the quotes
[19:35:20 CET] <Cracki> changes nothing
[19:35:29 CET] <Cracki> you see this is windows
[19:35:40 CET] <Cracki> there is no globbing on windows, except if you do it yourself
[19:35:54 CET] <Cracki> apparently libavformat has no own glob impl
[19:36:14 CET] <Cracki> or zeranoe built it without support for that, if there were support
[19:37:43 CET] <Cracki> https://github.com/FFmpeg/FFmpeg/blob/a20f64bee235042f6e35c8e7ae65ccfddbf7343b/libavformat/img2dec.c#L282
[19:37:51 CET] <Cracki> I think there's no fallback
[19:41:21 CET] <JEEB> and you most likely would be correct
[19:41:26 CET] <JEEB> some image formats can be piped though
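For the record, the concat-demuxer workaround Cracki mentions looks roughly like this (names are placeholders), and a plain numbered %d pattern also works without any globbing:

    somelist.txt:
        file 'img001.png'
        file 'img002.png'
        file 'img003.png'

    ffmpeg -f concat -safe 0 -i somelist.txt out.mp4
    ffmpeg -framerate 25 -i img%03d.png out.mp4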
[20:05:35 CET] <gon_> Hi. Is there a way to use more than 2 inputs with vstack and hstack?
[20:07:07 CET] <furq> gon_: hstack=inputs=3
[20:09:25 CET] <gon_> Awesome, thanks furq
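Spelled out as a full command (three arbitrary inputs, which must share the same height for hstack):

    ffmpeg -i a.mp4 -i b.mp4 -i c.mp4 \
        -filter_complex "[0:v][1:v][2:v]hstack=inputs=3[v]" \
        -map "[v]" out.mp4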
[20:47:56 CET] <DocHopper_> Hey ffmpeg, I'm using a program called BDA Viewer Plus to view/log the video feed transmitted by my remote comp running ffmpeg. In this program, there is an option for "UART Demux", which includes GPS options. Is anyone here familiar with adding a data stream of some sort to a .ts stream from FFMPEG?
[20:49:49 CET] <SortaCore> may have something to do with -map?
[20:53:36 CET] <DocHopper_> SortaCore: That could be an option. There's documentation on subs, maybe there's a way to do it through that?
[21:01:20 CET] <BtbN> I don't think ffmpeg can embed gps information
[21:02:46 CET] <BtbN> you can set arbitrary metadata, but an embedded running gps stream, I don't think so
[21:03:35 CET] <DocHopper_> BtbN: So how do I go about adding metadata that updates frequently?
[21:03:45 CET] <DHE> maybe not metadata, but a whole private stream?
[21:04:30 CET] <BtbN> If you have something that generates that uart data stream, you might be able to just pipe it in
[21:04:37 CET] <BtbN> But there is no direct gps support in ffmpeg.
[21:04:46 CET] <BtbN> And metadata does not update. It's set once in the header.
[21:04:59 CET] <DHE> I think this might be better suited to making your own via the API. add a private stream type, build AVPackets for it and mux them as you go
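A very rough sketch of that API route, not a tested recipe: add a data stream to the output context and interleave hand-built packets (the codec id, time base and text payload here are assumptions):

    #include <libavformat/avformat.h>
    #include <string.h>

    /* Add a data stream to an already-created muxer context. */
    static AVStream *add_data_stream(AVFormatContext *oc)
    {
        AVStream *st = avformat_new_stream(oc, NULL);
        st->codecpar->codec_type = AVMEDIA_TYPE_DATA;
        st->codecpar->codec_id   = AV_CODEC_ID_BIN_DATA;    /* generic binary data; a private id would also do */
        st->time_base            = (AVRational){1, 90000};  /* mpegts-style clock, a hint the muxer may adjust */
        return st;
    }

    /* Wrap one text record (e.g. an NMEA sentence) in a packet and interleave it with the A/V packets. */
    static int write_data_record(AVFormatContext *oc, AVStream *st,
                                 const char *text, int64_t pts)
    {
        AVPacket pkt;
        av_init_packet(&pkt);
        pkt.data         = (uint8_t *)text;
        pkt.size         = (int)strlen(text);
        pkt.stream_index = st->index;
        pkt.pts = pkt.dts = pts;            /* caller supplies a timestamp in st->time_base units */
        return av_interleaved_write_frame(oc, &pkt);
    }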
[21:05:42 CET] <DocHopper_> DHE: That sounds wonderful, and well beyond my current knowledge.
[21:06:16 CET] <DocHopper_> Right now I'm just adding my datastream to the video feed and hoping I can set something up to read it through character recognition.
[21:06:22 CET] <DocHopper_> Or maybe generating a barcode?
[21:29:46 CET] <furq> is it just me or is showspectrumpic size broken
[21:30:21 CET] <furq> it truncates the y axis if the height isn't a multiple of 512
[21:31:43 CET] <furq> https://imgur.com/a/aSUEB
[21:31:53 CET] <furq> top is s=1024x512, bottom is s=1024x513
[21:37:59 CET] <durandal_1707> furq: height must be multiple of 2
[21:38:52 CET] <furq> it still truncates it with non-mod2 heights
[21:39:10 CET] <furq> unless you mean power of 2
[21:39:29 CET] <durandal_1707> ah, yes , power of 2
[21:40:20 CET] <furq> well that's annoying
[21:47:12 CET] <SortaCore> DocHopper, you may be better off just converting it to a subtitle format and using that
[21:47:22 CET] <SortaCore> not exactly efficient or ideal though
[22:56:20 CET] <DocHopper_> SortaCore: I've considered that, but I'm not sure how I would extract that from the stream on the receiving computer.
[23:04:35 CET] <SortaCore> some video formats are designed to embed subtitles as well
[23:04:42 CET] <SortaCore> *cough*
[23:04:46 CET] <SortaCore> some video container formats
[23:04:58 CET] <SortaCore> video streams, audio streams, subtitle streams
[23:05:03 CET] <SortaCore> it's how you get dual-audio
[23:05:23 CET] <klaxa> how do subtitles lead to dual audio?
[23:05:37 CET] <SortaCore> BOI
[23:06:02 CET] <SortaCore> point is you can embed any number of streams of any type in a container format
[23:06:18 CET] <kinkinkijkin> so, how extensive is opencl support in x264? a google says it's just frame lookahead, but these results are from 2012
[23:06:27 CET] <SortaCore> and most players will pick up on it, so you just need a way to convert gps to coordinates
[23:06:34 CET] <SortaCore> I think there were updates on opencl just last week
[23:09:44 CET] <SortaCore> gps to coordinates?
[23:09:53 CET] <SortaCore> gps to coords + time it so it appears with the video at the correct time
[23:38:34 CET] <DocHopper_> SortaCore: So how should I go about doing that in a method that is compliant with DVB standards?
[23:59:00 CET] <SortaCore> DVB = ? for me
[23:59:03 CET] <SortaCore> so, who knows
[23:59:19 CET] <SortaCore> I'm not a ffmpeg guy
[23:59:28 CET] <SortaCore> I'm just hanging because my current project involves ffmpeg
[00:00:00 CET] --- Thu Nov 30 2017