[Ffmpeg-devel-irc] ffmpeg.log.20180321

burek burek021 at gmail.com
Thu Mar 22 03:05:01 EET 2018


[00:19:43 CET] <keglevich> hey all... I've been struggling for the whole day with this "project" and it seems I'm not "big" enough to finish it alone...therefore I'm wondering if someone is kind enough to provide compiled ffmpeg with NDI support for windows x64 platform? I'm willing to donate if needed...
[07:41:48 CET] <Jerry007B> Hi guys, can you help me with solving an issue? Described it in here: https://ffmpeg.zeranoe.com/forum/viewtopic.php?f=7&t=5575&sid=784a22b4ef284525a80a08df6aa3b8e2
[09:51:57 CET] <keglevich> hey all...I've put together a script to manually compile ffmpeg with NDI support... now I'm struggling to edit the script, so it would cross-compile also windows files..I'm running it on linux ubuntu... the script is this https://pastebin.com/mJGNWWi9
[09:52:56 CET] <keglevich> how should I modify it, so it would complete without an error...each time I enter the cross compile parameters, I get an error (ERROR: libass not found using pkg-config)... without the crosscompile parameters, script runs fine
[09:53:42 CET] <pmjdebru1jn> so it's ubuntu linux, not linux ubuntu :)
[09:54:04 CET] <pmjdebru1jn> just like it's a ford mustang, not a mustang ford :)
[09:54:15 CET] <pmjdebru1jn> </pedantic>
[09:54:17 CET] <pmjdebru1jn> anyhow
[09:54:35 CET] <pmjdebru1jn> to crosscompile you need all your libraries to be available in crosscompiled form as well IIRC
[09:55:03 CET] <pmjdebru1jn> given ffmpeg's dependencies, that's a bit of a task I guess
[09:57:21 CET] <pmjdebru1jn> you might be able to get some of the dependancies here http://mirrors.kodi.tv/build-deps/win32/
[09:57:47 CET] <pmjdebru1jn> not sure how trustworthy that source is, and if they're suitable for compiling (maybe header files might be lacking)
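For what it's worth, a minimal sketch of a mingw-w64 cross-compile setup (the toolchain triple and the prefix path are assumptions; every external library, libass included, must itself be cross-compiled and its .pc files made visible to pkg-config, which is exactly what the "libass not found using pkg-config" error is complaining about):

```shell
# Sketch, not a complete build script. Assumes the Ubuntu mingw-w64
# toolchain (apt-get install mingw-w64) and Windows builds of the
# external libs installed under $HOME/mingw-prefix (hypothetical path).
triple=x86_64-w64-mingw32
# Make pkg-config look at the *Windows* .pc files, not the host ones:
export PKG_CONFIG_LIBDIR="$HOME/mingw-prefix/lib/pkgconfig"
configure_flags="--arch=x86_64 --target-os=mingw32 --cross-prefix=${triple}- --enable-cross-compile"
echo ./configure $configure_flags
```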
[09:58:13 CET] <keglevich> ok...is there an "easier" alternative, maybe a script on windows where I can directly recompile ffmpeg?
[09:58:24 CET] <keglevich> I'm actually just trying to get working ffmpeg.exe with ndi support
[09:58:34 CET] <keglevich> and I'm trying to do this for about two days without success
[09:58:44 CET] <keglevich> I really didn't think this would be such a task
[09:59:22 CET] <pmjdebru1jn> no clue to be honest
[09:59:36 CET] <pmjdebru1jn> windows isn't a particularly great development environment at all really
[09:59:57 CET] <pmjdebru1jn> keglevich: have you ever considered contacting your vendor? (NDI)?
[10:00:05 CET] <pmjdebru1jn> maybe they have a custom build available
[10:00:10 CET] <pmjdebru1jn> for customers
[10:01:14 CET] <keglevich> huh... I think they don't...it's freely available and people are using it...it just seems impossible to get a working bin somewhere online
[10:01:21 CET] <keglevich> that's all I need...one single file
[10:02:44 CET] <pmjdebru1jn> just repeating that you need a single file doesn't really help
[10:03:08 CET] <pmjdebru1jn> ffmpeg is a huge project, so that crosscompiling it isn't trivial is completely expected
[10:03:48 CET] <pmjdebru1jn> the ability to crosscompile something at all isn't to be taken for granted at all
[10:04:11 CET] <pmjdebru1jn> keglevich: my point being, your perception that this should be simple, is a bit misguided
[10:04:46 CET] <pmjdebru1jn> if anybody in the world should provide a ready-to-go binary with NDI support, it should be your vendor, whom you've presumably paid good money
[10:05:50 CET] <keglevich> no, I don't think this is a simple task...I just tried to say that I don't understand why there's no precompiled ffmpeg binaries floating around with NDI support... NDI is free in the end, and it's such a great feature
[10:06:47 CET] <pmjdebru1jn> as I said, if anybody should provide you with this, it should have been your vendor
[10:07:21 CET] <pmjdebru1jn> as for the rest of the world, you've been explained why already
[10:07:42 CET] <pmjdebru1jn> https://trac.ffmpeg.org/wiki/CompilationGuide/CrossCompilingForWindows https://github.com/rdp/ffmpeg-windows-build-helpers not sure if those help
[10:07:48 CET] <pmjdebru1jn> I don't have any win32 crosscompile experience myself
[10:09:53 CET] <moreentropy> hi
[10:10:27 CET] <moreentropy> i'm trying to encode flac into mp4/dash, which is marked experimental
[10:10:34 CET] <moreentropy> now I have this problem:
[10:10:46 CET] <moreentropy> trying mp4 only first, this doesn't work:
[10:11:04 CET] <moreentropy> ffmpeg -i https://sender.eldoradio.de/icecast/192.webm -c:a flac -f mp4 test.mp4
[10:11:43 CET] <moreentropy> as the message says, adding -strict -2 / -strict experimental helps:
[10:11:46 CET] <moreentropy> ffmpeg -i https://sender.eldoradio.de/icecast/192.webm -c:a flac -strict -2 -f mp4 test.mp4
[10:11:49 CET] <pmjdebru1jn> moreentropy: the mp4 container doesn't support the flac codec
[10:11:52 CET] <pmjdebru1jn> IIRC
[10:12:00 CET] <pmjdebru1jn> -c:a is the codec you want to encode INTO
[10:12:10 CET] <pmjdebru1jn> -f mp4 is the container you want to store the encoded data INTO
[10:12:23 CET] <pmjdebru1jn> presumably you want -c:a aac
[10:12:37 CET] <moreentropy> now the same doesn't work for dash:
[10:12:37 CET] <moreentropy> ffmpeg -i https://sender.eldoradio.de/icecast/192.webm -c:a flac -strict -2 -f dash test.mpd
[10:12:55 CET] <pmjdebru1jn> as I said -c:a flac is wrong
[10:13:22 CET] <pmjdebru1jn> also, if your distro has an old ffmpeg, consider getting a static binary from https://www.ffmpeg.org/download.html as the aac encoder was greatly improved
[10:13:36 CET] <moreentropy> pmjdebru1jn: I'm running aac/hls and dash/webm/opus streams in production using ffmpeg, this all works fine
[10:14:04 CET] <moreentropy> I'm thinking about providing a lossless compressed stream right into the browser
[10:14:09 CET] <pmjdebru1jn> that statement has no bearing on my point
[10:14:28 CET] <pmjdebru1jn> you want to store flac in an mp4 container
[10:14:32 CET] <moreentropy> so yes, -c:a flac is exactly what I want, the input will be from alsa later
[10:14:36 CET] <pmjdebru1jn> ffmpeg's error almost literally states this
[10:15:10 CET] <moreentropy> it states flac in mp4 is experimental
[10:15:27 CET] <pmjdebru1jn> moreentropy: possibly, but it's very doubtful many clients will be able to decode it
[10:17:04 CET] <moreentropy> Just experimenting here, from https://caniuse.com/#search=flac I see that browsers now support flac, they support mp4 and mp4 seems to be the container of choice for flac in dash (?)
[10:17:23 CET] <moreentropy> this is all without very thorough research, I see that BBC Radio UK is experimenting with flac+dash right into the browser
[10:17:34 CET] <pmjdebru1jn> support for flac as a codec and support for mp4 as a container doesn't mean they'll work together
[10:17:38 CET] <pmjdebru1jn> but it might
[10:19:12 CET] <pmjdebru1jn> moreentropy: stick around though, maybe someone else is more clued in on the particulars of this
[10:19:44 CET] <moreentropy> ok, yeah that's what I expect, I have no idea if this will work in the end. status right now is that I can make ffmpeg encode flac into mp4 when directly using the mp4 muxer (although with -strict experimental) but I can't make ffmpeg encode flac into mp4 used by the dash muxer
[10:19:46 CET] <pmjdebru1jn> https://bugzilla.mozilla.org/show_bug.cgi?id=1286097 is marked as resolved
[10:21:51 CET] <moreentropy> DASH with webm/OPUS streaming to mediaelement.js in web browsers is working beautifully with ffmpeg 3.4.2, having a lot of fun implementing this
[10:22:09 CET] <moreentropy> pmjdebru1jn: thanks so far :)
[10:23:45 CET] <moreentropy> It seems the dash muxer chooses the fitting output container depending on the codec: if I use -c:a aac it will automatically create mp4 fragments, if I use -c:a libopus it will create webm fragments
[10:26:45 CET] <pmjdebru1jn> ah
[10:28:06 CET] <moreentropy> I assume the -strict flag will be set for the dash muxer but not for the mp4 container muxer invoked by the dash muxer
[10:29:06 CET] <moreentropy> the HLS muxer has -hls_ts_options to set options for the mpegts muxer used by the hls muxer
[11:09:51 CET] <yusa> Hello, i am trying to use overlay filter option for some images in a folder and overlay them on a background image. In the folder there are many images but only the first three are processed. Any ideas? Command: ffmpeg -i black_background.png -i data/%010d.png -filter_complex "[0:v][1:v] overlay=104:0" out/%010d.bmp
[11:11:45 CET] <yusa> So, the first three images are processed correctly, but then the execution stops. The input files follow the naming convention strictly (%010d).
[11:12:05 CET] <yusa> frame=    3 fps=1.8 q=0.0 Lsize=N/A time=00:00:00.12 bitrate=N/A dup=0 drop=297
[11:12:19 CET] <yusa> it says drop 297 images. 300 images are in the folder.
[11:14:59 CET] <pmjdebru1jn> yusa: you don't have any missing numbers in your sequence do you?
[11:15:20 CET] <yusa> They are consecutive :/
[11:19:27 CET] <yusa> for example: "ffmpeg -i data/%010d.png -vf scale=1600:512 out/%010d.bmp" works just fine
[11:23:30 CET] <kepstin> yusa: the problem is probably that you're running out of background images.
[11:25:34 CET] <kepstin> yusa: try adding "-loop 1" before the background input to make it into an infinite stream, then use overlay=104:0:shortest=1 to make it cut the stream when it runs out of overlay images.
[11:31:15 CET] <yusa> kepstin: Thanks a lot! "ffmpeg -loop 1 -i background.png -i data/%010d.png -filter_complex "[0:v][1:v] overlay=104:0:shortest=1" out/%010d.bmp" worked for me just fine.
[11:35:01 CET] <Jerry007B> Hi, I am using FFmpeg to capture what is happening in my app, but I ran into an issue I need help solving. The issue is that while capturing with parameter -i title=MyApp, the title of the window is not visible and mouse is way off the button it's clicking. I assume it is caused by missing title of the window. This is my setup: ffmpeg -y -rtbufsize 100M -f gdigrab -framerate 20 -probesize 10M -draw_mouse 1 -i
[11:35:01 CET] <Jerry007B>  title="Klik" -c:v libx264 -r 30 -preset ultrafast c:/video_test.mkv and here is what the result looks like: https://www.dropbox.com/s/rdqpw88zxh4y2tq/video_test_title.mkv?dl=0
[11:35:01 CET] <Jerry007B> So how can I have title shown or mouse positioned correctly? I need to record only that one application.
[11:41:14 CET] <kepstin> Jerry007B: are you using a scale factor other than 100% in windows?
[11:41:51 CET] <Jerry007B> kepstin: no, I do have a 100% scale factor set up
[11:42:28 CET] <kepstin> hmm. yeah, the mouse appears to be offset by the amount of space the titlebar should be taking up :/
[11:42:59 CET] <kepstin> what windows version?
[11:43:04 CET] <Jerry007B> W10 x64
[11:43:21 CET] <Jerry007B> I can try in any OS I guess
[11:44:12 CET] <Jerry007B> but I need it to work in any OS to be honest
[11:45:46 CET] <kepstin> hmm. I'm not really sure what's going on there. I mostly use the full screen or screen area mode rather than window capture :/
[11:46:58 CET] <Jerry007B> I usually do that as well, but in this case only one window can be recorded. Where should I turn to?
[11:48:36 CET] <kepstin> honestly, for recording stuff on windows you might want to consider using OBS instead, it has capture modes that are much better performance than the legacy gdi stuff.
[11:49:31 CET] <kepstin> other than that, it's getting someone familiar with the windows apis to take a look at the gdigrab code. I'd do it, but I'm pretty busy with other stuff right now :(
[11:50:33 CET] <kepstin> if you otherwise know where the window is on the screen - and it's not moving - then use the full desktop capture mode with the size and offset options.
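A sketch of that fallback (the geometry numbers are made up; you'd substitute the window's real position and size):

```shell
# Hypothetical position/size of the window to record:
off_x=200; off_y=150; size=1280x720
grab_args="-f gdigrab -framerate 20 -draw_mouse 1 -offset_x $off_x -offset_y $off_y -video_size $size -i desktop"
# The actual capture would then be:
echo ffmpeg -rtbufsize 100M $grab_args -c:v libx264 -preset ultrafast video_test.mkv
```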
[11:51:19 CET] <Jerry007B> I will look for OBS, thanks. The problem is that the window can be anywhere, on any monitor
[11:52:38 CET] <kepstin> i remember that someone mentioned that in windows 8+, sometimes the selection by title would pick up the taskbar previews rather than the actual window :/
[11:52:45 CET] <kepstin> i dunno if that ever got fixed
[11:55:15 CET] <Jerry007B> Hmm, OBS seems to be not useful in my case, as it is not just a command like FFmpeg, right?
[13:03:58 CET] <arpad> Hi! I'm using libav to encode some video (h264) and mux it with an existing audio file (aac) into an MP4 container. Some players (e.g. Firefox) are missing the audio, but if I remux the file using the ffmpeg command line (-c copy) then it works fine, Firefox can play the audio too.
[13:04:49 CET] <arpad> Comparing the two files (mine and the remuxed one) there are a few differences, but the most obvious is mine is getting audio stream profile=-1 and the remuxed one has profile=LC
[13:05:23 CET] <arpad> this is comparing using ffprobe
[13:06:11 CET] <arpad> running the  doc/examples/remuxing example also results in profile=-1
[13:06:36 CET] <ritsuka> arpad: did you set the aac track extradata?
[13:07:28 CET] <ritsuka> are you using ffmpeg cli or libavformat?
[13:08:21 CET] <arpad> I'm using libavformat etc - I'm not sure what the aac extradata is, so it does sound like I'm missing that
[13:08:31 CET] <arpad> hang on I'll paste the relevant code
[13:09:46 CET] <arpad> https://pastebin.com/akK0yqB5
[13:12:31 CET] <arpad> ost->st->codec->profile has the value 1 after the copy - but I'm not sure if that's the same "profile" which ffprobe is reporting
[13:15:32 CET] <arpad> looks like avcodec_parameters_copy is copying the extradata member
[13:16:34 CET] <arpad> actually looking in gdb, extradata is null in the source stream
[13:19:04 CET] <arpad> any ideas where the profile=LC is coming from?
[13:25:01 CET] <arpad> ok so FF_PROFILE_AAC_LOW=1 so that value is correct, I guess the question is then how I'm ending up with the reported profile=-1
[13:52:04 CET] <moreentropy> pmjdebru1jn: I found the solution to my problem btw, by adding ctx->strict_std_compliance = s->strict_std_compliance; in dashenc.c I can hand down the -strict setting from the dash muxer to the mp4 muxer, when doing this it will start encoding without errors, will test if this actually works later.
[13:53:38 CET] <pmjdebru1jn> cool
[14:16:03 CET] <pagios> hello, can i capture a 360 video source using ffmpeg?
[14:19:25 CET] <pmjdebru1jn> pagios: isn't a 360 video source just a normal video with a weird lens ?
[14:19:50 CET] <pmjdebru1jn> but that begs a better question
[14:20:04 CET] <pmjdebru1jn> which video source specifically, and do you already have the device in question?
[14:20:15 CET] <pagios> pmjdebru1jn, yea
[14:20:32 CET] <pagios> idea is i wanna capture it and restream it to fb
[14:20:41 CET] <pagios> so i would act as a gateway only right?
[14:20:51 CET] <pmjdebru1jn> i'm not sure
[14:20:53 CET] <pagios> no need to change anything, just get the rtmp rtsp link of that device and send t right?
[14:21:08 CET] <pmjdebru1jn> presumably ou have a device you're asking about
[14:21:12 CET] <Nacht> facebook needs extra metadata
[14:21:17 CET] <Nacht> Checkout Facebook 360
[14:21:42 CET] <arpad> re my dodgy AAC-in-MP4 issue - I'm now sure that the codec ctx keeps profile=1 throughout, and av_dump_format shows "aac (LC)" but ffprobe on the resulting file still shows profile=-1 - any ideas where it could be getting broken?
[14:24:43 CET] <pagios> Nacht, so how can you catpure that extra metadata?
[14:35:25 CET] <bodqhrohro> Is there a reference of functions that can be used in geq expressions?
[14:43:58 CET] <DHE> bodqhrohro: https://ffmpeg.org/ffmpeg-all.html#Expression-Evaluation  this should be it
[14:44:11 CET] <bodqhrohro> Thank you
[15:05:00 CET] <arpad> a bit more info.. MP4Box -info -v shows "Corrupted AAC Config" for my output file
[15:05:30 CET] <arpad> that line for the file remuxed using the ffmpeg command line is "MPEG-4 Audio AAC LC - 2 Channel(s) - SampleRate 44100"
[15:05:50 CET] <ritsuka> it still means you are missing the aac extradata ;)
[15:06:18 CET] <ritsuka> is your source file a adts aac?
[15:06:37 CET] <ritsuka> there is a bitstream filter to recreate the extradata
[15:07:24 CET] <arpad> I'm already using the aac_adtstoasc filter
[15:08:02 CET] <arpad> I am indeed missing extradata in the codec context, not sure how I should be creating it though
[15:09:37 CET] <arpad> ritsuka: is aac_adtstoasc the one you mean?
[15:10:58 CET] <ritsuka> yes
[15:39:48 CET] <arpad> ritsuka: I see that filter creates the extradata with av_packet_new_side_data but it has access to the output codec context there too
[15:39:53 CET] <spade> Hello. I'm using swscale with the format AV_PIX_FMT_YUVJ420P, which is deprecated. When I create a new context, it correctly changes the format to AV_PIX_FMT_YUV420P and sets the correct color range. My question is, can I somehow create a context with the proper AV_PIX_FMT_YUV420P format and color range directly? The function SwsContext *sws_getContext() doesn't take color range as an argument and the internals of the SwsContext structure are hidden.
[15:41:50 CET] <arpad> ritsuka: should I be parsing the extradata from the packet where I'm calling av_bitstream_filter_filter or is this handled automatically somewhere?
[15:43:06 CET] <furq> spade: you can set range in sws_setColorspaceDetails
[15:43:09 CET] <ritsuka> I don't know, it's been a long time since I used it
[15:45:36 CET] <arpad> ritsuka: ok no worries, thanks for the pointer though, it's starting to make sense :)
[15:52:45 CET] <spade> furq: ok, so that takes some sort of input table in addition to color range, where can I get that?
[16:28:18 CET] <pgorley> hello, is it possible to save an rtp stream to a file without decoding, while at the same time decode and display its contents?
[16:58:44 CET] <philipp64> is there a good explanation of DTS and PTS and converting frames to normalized time?
[17:02:01 CET] <kepstin> dts you can mostly ignore as an internal detail of modern encoders that re-order frames. It's not relevant once the frames are decoded. pts / timebase = time_in_seconds.
[17:02:43 CET] <kepstin> or is it pts * timebase = time_in_seconds
[17:02:47 CET] <kepstin> I can never get that right
[17:03:03 CET] <DHE> since timebase is usually fractional as 1/30 I would use * for multiplication
[17:04:11 CET] <kepstin> for regular constant framerate stuff, you usually just set the timebase such that you can use a simple incrementing number (frame number) as the pts.
[17:08:05 CET] <teratorn> kepstin: time_base is the unit-of-measure, so
[17:33:09 CET] <spade> the demuxer works with an open file, can it also work with arbitrary buffers as input? I do my own reading from the network and want to feed the demuxer data and receive frames
[17:42:51 CET] <arpad> still stuck on my MP4 aac_adtstoasc issue - I'm calling av_bitstream_filter_filter and I can see the filter itself being called, but extradata is never being set on the codec ctx
[17:43:28 CET] <arpad> it looks like it's the muxer's responsibility to read the side data from the packet and set that as extradata
[17:43:55 CET] <arpad> any ideas what might be missing?
[17:50:56 CET] <arpad> if I set a breakpoint here it's never hit - https://github.com/FFmpeg/FFmpeg/blob/master/libavformat/movenc.c#L5400
[17:54:48 CET] <arpad> just above that check does get hit, so I know it's the correct muxer, codec etc. - it's just that it's not receiving the side data set by aac_adtstoasc_bsf
[17:55:26 CET] <arpad> seems like I must be missing something simple :)
[18:16:52 CET] <philipp64> DHE are there helpers for dealing with timestamps?
[18:18:53 CET] <DHE> philipp64: there's some generic rational number functions which can be used to assist in the fractional calculations
[18:20:00 CET] <DHE> if the time_base changes during your processing it's on you to recalculate the proper pts and/or dts values when moving data around
[18:20:14 CET] <DHE> (so if possible, recycle any existing time_base values)
[18:20:26 CET] <kepstin> philipp64: note that you generally don't want to convert to or through floating point for anything except display purposes.
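To make the earlier point concrete: it's pts * time_base = seconds, with time_base a rational, so the arithmetic can stay exact. A quick numeric check (the pts value is made up; 1/90000 is the common MPEG-TS timebase):

```shell
pts=450000
tb_num=1; tb_den=90000
# time_in_seconds = pts * (tb_num / tb_den); floating point here is for display only
secs=$(awk -v p="$pts" -v n="$tb_num" -v d="$tb_den" 'BEGIN { printf "%.2f", p * n / d }')
echo "$secs"    # 5.00
```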
[18:22:53 CET] <philipp64> I'm downloading data (not using the HTTPS helpers from ffmpeg) and then just decoding the frames with av_read_frame()...  I want to decode 1 frame every 1/n seconds... if I can.  if I can't then I've seen a network underrun and I count that.
[18:23:48 CET] <philipp64> it's mostly for measuring the reliability of streaming video as part of a test and measurement suite. so typically I'd be comparing timestamps normalized to 'struct timeval' or 'struct timespec'.
[18:24:22 CET] <philipp64> for now, it's just Youtube URL's that we use for sample data.
[18:31:36 CET] <dealer> I need help
[18:32:00 CET] <dealer> [root at server1 ffmpeg_sources]# cd ffmpeg/ [root at server1 ffmpeg]# PATH="$HOME/bin:$PATH" PKG_CONFIG_PATH="$HOME/ffmpeg_build/lib/pkgconfig" ./configure Unable to create and execute files in /tmp.  Set the TMPDIR environment variable to another directory and make sure that it is not mounted noexec. Sanity test failed.  If you think configure made a mistake, make sure you are using the latest version from Git.  If the latest version fail
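The error text itself has the fix: set TMPDIR to a writable directory that isn't mounted noexec before re-running configure (the path here is just an example):

```shell
# Create a scratch dir under $HOME (assumed writable and exec-mounted)
mkdir -p "$HOME/tmp"
export TMPDIR="$HOME/tmp"
# then re-run, e.g.:
#   PATH="$HOME/bin:$PATH" PKG_CONFIG_PATH="$HOME/ffmpeg_build/lib/pkgconfig" ./configure
echo "$TMPDIR"
```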
[18:37:38 CET] <arpad> is there an up to date example of using bitstream filters? I see av_bitstream_filter* is deprecated
[19:00:09 CET] <arpad> for the record, my issue with aac_adtstoasc_bsf was I was using the old API with av_bitstream_filter_filter which doesn't copy the side data
[19:01:05 CET] <arpad> switching to av_bsf_send_packet / av_bsf_receive_packet directly works fine
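For anyone hitting the same thing from the command line rather than through libavformat, the equivalent remux (filenames are placeholders) attaches the bitstream filter to the copied audio stream so it can rebuild the extradata:

```shell
# Sketch only; input_adts.aac / output.mp4 are placeholder names.
cmd="ffmpeg -i input_adts.aac -c:a copy -bsf:a aac_adtstoasc output.mp4"
echo "$cmd"
# Recent ffmpeg versions reportedly auto-insert this filter when muxing ADTS AAC into MP4.
```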
[20:33:39 CET] <pgorley> how do i achieve the same effect as "-c:v copy" in code? i want to save an rtp stream to a file without transcoding
[20:34:56 CET] <jkqxz> Don't do anything with libavcodec.  Just pass packets from the input to the output.
[20:35:58 CET] <JEEB> open an input and output avformat contexts, start reading packets with the "read_frame" function. create streams and pass packets from input to output as needed
[20:36:33 CET] <JEEB> possibly adding bit stream filters if the bit stream format between input and output containers doesn't match up (although in various cases this is automated, IIRC)
[20:36:36 CET] <pgorley> jkqxz: i'd have 2 AVFormatContext and no AVCodecContext? essentially i'd call av_read_frame and just pass its AVFrame to my other AVFormatContext?
[20:36:48 CET] <JEEB> av_read_frame returns an AVPacket
[20:36:53 CET] <JEEB> it's a bad name for an old function
[20:37:06 CET] <JEEB> AVFrames are raw decoded frames, while AVPackets are packets of data
[20:37:13 CET] <pgorley> my bad, i meant AVPacket
[20:37:24 CET] <JEEB> the name of the function is misleading, I agree
[20:38:04 CET] <JEEB> not sure how great they are, but see the demuxing and muxing examples under docs
[20:38:19 CET] <JEEB> also google with "site:ffmpeg.org" doxygen trunk KEYWORD is useful
[20:38:27 CET] <pgorley> and i could use an AVCodecContext to decode and display the stream live at the same time?
[20:38:45 CET] <JEEB> yes, you could also pass the AVPacket to a decoder as well
[20:38:54 CET] <pgorley> perfect, thank you!
[20:39:16 CET] <JEEB> for decoding this API is the recommended one https://www.ffmpeg.org/doxygen/trunk/group__lavc__encdec.html
[21:31:29 CET] <nurupo> i have an audio file (mp4 container with aac LC audio stream) that has silent segments that are several minutes long. i want to remove the silent segments, while keeping everything else the same. to do this, i ran `ffmpeg -i original.mp4 -af silenceremove=0:0:0:-1:20:-50dB silenceremove1.mp4` on it. this did remove the silent segments, but also lowered the bitrate and changed the acoustic spectrum: the original goes as high as 24kHz, but the
[21:31:29 CET] <nurupo> prcessed goes as high as 17kHz https://imgur.com/a/hbxBi
[21:32:26 CET] <nurupo> here is ffmpeg's output when running the command https://pastebin.com/TuEMdaM4
[21:34:24 CET] <ChocolateArmpits> nurupo, aac encoders have -cutoff parameter to define frequency cutoff value.
[21:36:59 CET] <nurupo> thanks, i will try that. i have tried specifying a matching bitrate before with `-b:a 158k`, but the frequency was still cut off
[21:37:32 CET] <nurupo> is there any option to just keep the audio the same, without changing bitrate or cutting off frequencies?
[21:38:37 CET] <ChocolateArmpits> nurupo, is the silenceremove only removing audio portions or does it affect video as well ?
[21:39:16 CET] <nurupo> the mp4 container has no video, only an audio stream
[21:39:30 CET] <ChocolateArmpits> oh ok
[21:39:36 CET] <nurupo> you can see it in the paste
[21:41:53 CET] <ChocolateArmpits> You'd have to determine the timespans separately and then use one of several ways to combine those parts together
[21:42:13 CET] <ChocolateArmpits> that would allow you to copy the stream without reencoding
[21:43:00 CET] <ChocolateArmpits> you can either use multiple inputs of same file with seek and duration times OR use a concat file with seek and end times
[21:43:37 CET] <ChocolateArmpits> though thinking about it the first option may not necessarily work
[21:43:50 CET] <ChocolateArmpits> The second one should have no problems though
[21:47:54 CET] <nurupo> there is a silencedetect filter in ffmpeg which prints out offsets and durations of silences in a video
[21:48:22 CET] <nurupo> sounds like i could parse ffmpeg's output and use this information to do what you just said
[21:49:49 CET] <nurupo> ChocolateArmpits: can you give an example of how would i do what you have said, assuming i have seek offsets, durations and end times?
[21:50:11 CET] <durandal_1707> nurupo: you cant trim aac to lossless remove silence...
[21:50:21 CET] <nurupo> not very familiar with ffmpeg, would probably spend hours goolging this :P
[21:50:51 CET] <durandal_1707> the only way is to enocode to lossless audio format
[21:51:04 CET] <ChocolateArmpits> nurupo, https://www.ffmpeg.org/ffmpeg-formats.html#concat-1
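A sketch of that two-step approach, with made-up timestamps and filenames (and keeping durandal_1707's caveat in mind: without re-encoding, the cuts can only land on AAC frame boundaries, not exact samples). Step one is silencedetect; step two turns its log into a concat list of the non-silent spans:

```shell
# Step 1 (not run here): ffmpeg -i in.mp4 -af silencedetect=n=-50dB:d=20 -f null -
# Pretend it printed this:
cat > detect.log <<'EOF'
[silencedetect @ 0x1] silence_start: 120.5
[silencedetect @ 0x1] silence_end: 300.25 | silence_duration: 179.75
EOF

# Step 2: emit the spans *between* the silences as concat-demuxer entries.
awk '
  BEGIN           { prev = 0 }
  /silence_start/ { sub(/.*silence_start: /, ""); print "file in.mp4"; print "inpoint " prev; print "outpoint " $1 }
  /silence_end/   { sub(/.*silence_end: /, "");   prev = $1 }
  END             { print "file in.mp4"; print "inpoint " prev }
' detect.log > cuts.txt
cat cuts.txt
# Then: ffmpeg -f concat -safe 0 -i cuts.txt -c copy no_silence.mp4
```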
[21:51:09 CET] <BtbN> shouldn't it, at least in theory, be possible, as long as you cut on frame boundaries? Not that ffmpeg can do that though.
[21:51:26 CET] <barhom> a couple of years ago the most cost-effective transcoding (mpeg2->h264) would be using something like a Xeon E3-12xxVx processor and doing everything in CPU. I am a bit outdated now and I am wondering what is the most cost-effective today? Still CPU? Intel QSV? NVENC ?
[21:51:27 CET] <ChocolateArmpits> durandal_1707, i think concat file should work for this
[21:51:31 CET] <durandal_1707> BtbN: for aac it is simply put not possible
[21:51:54 CET] <BtbN> barhom, all hardware encoders are crap
[21:52:08 CET] <barhom> BtbN: source on that?
[21:52:24 CET] <furq> it depends what you're doing really
[21:52:25 CET] <durandal_1707> ChocolateArmpits: concat is concatenation of several files, not for trimming silence in single file
[21:52:35 CET] <furq> hardware encoders are no good for archival
[21:52:48 CET] <furq> nvenc is competitive if you need a lot of realtime streams
[21:52:48 CET] <barhom> I am transcoding live-streams (mpeg2 input) h264 output
[21:53:00 CET] <furq> but you need a quadro or you're artificially limited to two concurrent
[21:53:06 CET] <furq> which makes it way less cost-effective than it could be
[21:53:19 CET] <ChocolateArmpits> durandal_1707, that's why I'm suggesting him to first determine silence timespans in the file, then grab them through a concat file using inpoint and outpoint timestamps
[21:53:43 CET] <durandal_1707> ChocolateArmpits: nonsense you speak
[21:53:53 CET] <ChocolateArmpits> you may want to be more specific
[21:54:01 CET] <ChocolateArmpits> if you see error in this
[21:54:07 CET] <durandal_1707> aac does not have keyframes
[21:54:34 CET] <ChocolateArmpits> and what they have to do with this?
[21:55:37 CET] <barhom> 50 channels, real-time transcoding (SD, 720x576 3.5mbit mpeg2) 24/7 transcoding into 3 profiles h264 - I would have gone with something like a supermicro with 24 nodes and run a bunch of ffmpeg to do this before all on CPU.
[21:55:58 CET] <barhom> Is this STILL the right approach? or should I really look into QSV or NVENC?
[21:57:49 CET] <barhom> furq: I looked into the Tesla NVIDIA cards but damn they are expensive
[21:57:59 CET] <barhom> It's an expensive test, since I have no idea how they would perform
[21:58:28 CET] <furq> the actual gpu part is irrelevant
[21:58:38 CET] <furq> any card from the same generation has the exact same nvenc asic
[21:58:50 CET] <furq> the only difference is that consumer cards are artificially locked down
[21:59:00 CET] <BtbN> I wonder if two consumer cards can use double the amount of streams
[21:59:04 CET] <furq> they can
[21:59:09 CET] <barhom> furq: can they be flashed to remove that lock?
[21:59:14 CET] <furq> not as far as i'm aware
[21:59:16 CET] <BtbN> So put a bunch of GTX1050 in there, and it will be damn cheap
[21:59:22 CET] <BtbN> The lock is Driver-Side
[21:59:23 CET] <furq> i'm only going from previous conversations in here though
[21:59:39 CET] <barhom> but wait, "two concurrent streams", what does it mean exactly?
[21:59:53 CET] <durandal_1707> ChocolateArmpits: read carefully what the OP said: he transcoded the file to remove silence and it gave him degraded quality; using concat will not help if he's gonna transcode
[21:59:54 CET] <furq> it means a single card can only run two streams at once
[21:59:55 CET] <BtbN> not more than two encoders can be open at a time
[22:00:00 CET] <barhom> 2 input sources? 2 output profiles?
[22:00:06 CET] <furq> two encodes
[22:00:23 CET] <barhom> so 1 source with two profile encode = limited
[22:00:32 CET] <BtbN> I'd say you will be happier with a bunch of like E5-2690 v4 running x264.
[22:00:47 CET] <furq> if you can find some cheap quadros then it's worth looking into
[22:00:49 CET] <BtbN> notably better quality, less pain
[22:00:52 CET] <furq> but cpu is definitely still a good option
[22:00:56 CET] <barhom> BtbN: cheaper than running a bunch of E3?
[22:01:09 CET] <BtbN> Per-Core, definitely
[22:01:30 CET] <barhom> Ive never used E5's for transcoding. Got like 50 E3-1270v2,v3,4 running though
[22:01:43 CET] <ChocolateArmpits> durandal_1707, I don't think he knows about stream copy, however his original example can't work without transcoding because it relies on the filter, so only concat can help
[22:02:05 CET] <BtbN> A Threadripper or Epyc will probably be even more cost effective
[22:02:09 CET] <BtbN> by a lot
[22:02:45 CET] <jkqxz> E3 is completely pointless for Intel, I think.  If you want to cheap out, get the consumer stuff.  Otherwise buy the standard mid-range dual-socket stuff that every server maker churns out by the zillion.
[22:03:58 CET] <barhom> I'm getting a feeling I will go CPU since I am most comfortable with that. And as everyone said x264 > quality over HW encoders
[22:04:26 CET] <jkqxz> EPYC may well be worth a look too, yeah.  Threadripper probably not if EPYC is possible (fewer faster cores and much less memory bandwidth, which isn't what you want for encoding throughput).
[22:05:00 CET] <barhom> I should still buy a quadro just to test with, furq can you reccomend some quadro card?
[22:05:14 CET] <BtbN> the cheapest one of the latest gen
[22:05:24 CET] <BtbN> They all have the exact same chip anyway
[22:05:28 CET] <barhom> yeh
[22:05:31 CET] <jkqxz> If you really want a hardware solution then lots of the cheapest boards you can find with bottom end Intels are going to be a lot cheaper than the Nvidia, though obviously somewhat more difficult to manage.
[22:05:35 CET] <durandal_1707> ChocolateArmpits: and concat demuxer cant help with aac
[22:06:25 CET] <jkqxz> (That is probably technically the right answer to greatest transcode throughput for lowest cost, but it's also not very sane.)
[22:08:09 CET] <barhom> last time I purchased this, 24 E3 CPUs: https://www.supermicro.com/products/system/3U/5038/SYS-5038MD-H24TRF.cfm
[22:08:29 CET] <barhom> That worked out really well, but this time I want to actually calculate if there is a better option
[22:08:40 CET] <barhom> I'll look into EPYC, although I don't think I want to go AMD
[22:15:28 CET] <BtbN> Intels prices are just ridiculous
[22:15:32 CET] <BtbN> AMD is way more realistic
[22:18:08 CET] <jkqxz> Intel's list prices on non-consumer-puchasable products are pretty meaningless.
[22:19:09 CET] <BtbN> The prices you actually pay for them are not any less ridiculous
[22:35:51 CET] <barhom> BtbN: You said E5 would be cheaper per core than E3. Are they really?
[22:35:56 CET] <nurupo> ChocolateArmpits: -cutoff 24k did the trick
[22:37:07 CET] <nurupo> ChocolateArmpits: also had to up the bitrate to that of the source, i.e. -b:a 160k, it somehow ended up with a higher bitrate than that though, 163kb/s, but that's fine
[22:55:01 CET] <cfsmp3> I'm trying to receive a stream from an android phone into ffmpeg... this fails:
[22:55:03 CET] <cfsmp3> https://pastebin.com/vuD83iHk
[22:55:30 CET] <cfsmp3> I know the android app works - tested using a Wowza server
[22:56:16 CET] <cfsmp3> I set up ffmpeg to listen for rtsp, the app connects, but then ffmpeg stops in a couple seconds
[22:56:32 CET] <cfsmp3> tried a number of parameters (just shooting in the dark), no luck with any combination
[22:56:58 CET] <nurupo> ChocolateArmpits: this is the result it got btw. what i had before https://imgur.com/a/hbxBi and now https://imgur.com/a/n1NEr, both original vs processed. thanks!
[22:57:16 CET] <ChocolateArmpits> np
[23:04:04 CET] <durandal_1707> nurupo: what command you used?
[23:38:46 CET] <xeberdee> Hi - quick question - looking at the encoder list, is it possible to encode XDCAM422HD with 8 channels of audio with ffmpeg?
[23:40:39 CET] <durandal_1707> yes
[23:52:38 CET] <Guest66109> Hello everyone, is there a way to cut a video file at specific frame without re-encoding?
[23:55:19 CET] <JEEB> only if that frame is a random access point
[23:55:40 CET] <JEEB> otherwise even if you force the code to cut at a certain frame you will get just borked pictures until the next random access point
[23:59:15 CET] <Guest66109> Thanks JEEB, what do you mean by a random access point frame? Does it mean it is a keyframe?
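A sketch of the copy-mode cut JEEB is describing, plus one way to list where the random access points actually sit (in.mp4 and the timestamps are placeholders):

```shell
start=60; dur=30
# With -c copy, the cut snaps to a random access point near $start rather than the exact frame:
cut_cmd="ffmpeg -ss $start -i in.mp4 -t $dur -c copy cut.mp4"
# Keyframe timestamps can be listed with something like:
probe_cmd="ffprobe -select_streams v -skip_frame nokey -show_entries frame=pts_time -of csv in.mp4"
echo "$cut_cmd"
echo "$probe_cmd"
```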
[00:00:00 CET] --- Thu Mar 22 2018

