[Ffmpeg-devel-irc] ffmpeg.log.20170804

burek burek021 at gmail.com
Sat Aug 5 03:05:01 EEST 2017


[02:38:46 CEST] <EodiV> I have 24-bit audio, but I suspect it is simply re-encoded 16-bit audio; how would I test this?
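One way to check (a hedged sketch; `input.wav` is a placeholder filename): the astats filter measures the number of bits actually used per sample, so 16-bit audio zero-padded into a 24-bit container shows a measured depth of roughly 16.

```shell
# Measure the bits actually used per sample; if a 24-bit file reports
# a used bit depth of ~16, it is likely zero-padded 16-bit audio.
ffmpeg -i input.wav -af astats -f null - 2>&1 | grep -i "bit depth"
```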
[05:17:57 CEST] <bdheeman> hello
[05:20:45 CEST] <bdheeman> any possibility of continuing a conversion after a system crash, hangup, or power failure? ffmpeg processes the media streams frame by frame, so why can't it continue?
[05:38:09 CEST] <thebombzen> bdheeman: it depends on what you're converting it to
[05:38:27 CEST] <thebombzen> if you're converting it to an mp4 or mov? No, sorry.
[05:38:42 CEST] <thebombzen> If it's a streamable format, perhaps
[05:42:58 CEST] <bdheeman> thebombzen: doesn't it depend on both the input and output container formats?
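For illustration, a hedged sketch of the streamable-container idea thebombzen is pointing at: MPEG-TS is self-synchronizing, so a file truncated by a crash is still playable, and the remainder can be encoded separately and concatenated. Filenames and the resume timestamp are placeholders.

```shell
# First run, interrupted by a crash; the truncated .ts stays usable:
ffmpeg -i input.mkv -c:v libx264 -c:a aac -f mpegts part1.ts

# Find where part1.ts ends (e.g. with ffprobe), then resume from there:
ffmpeg -ss 00:12:34 -i input.mkv -c:v libx264 -c:a aac -f mpegts part2.ts

# Join the pieces without re-encoding:
ffmpeg -i "concat:part1.ts|part2.ts" -c copy -bsf:a aac_adtstoasc output.mp4
```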
[14:18:31 CEST] <cq1> EodiV: Can you paste ffmpeg's output?
[14:49:06 CEST] <Mockarutan> I'm having trouble with quality loss over time in VP8 encoding, in my own C++ code. I'm not setting any special variables myself, just deadline at "good" and cpu_used at 5. But at around frame 150, this happens in a single frame: https://gyazo.com/7d82cc869bbe9342ee2a98ab68a3ee61
[14:49:13 CEST] <Mockarutan> Anyone have any tips?
[15:07:39 CEST] <nofacetimber> I'm looking to convert videos to mp4 and upload them to Amazon S3 storage for streaming.  No other video manipulation is necessary except for maybe retrieving a thumbnail for the video.  What standard libraries are required for this?  Or rather which standard libraries could I forgo?  Would you recommend using the qt-faststart tool as well for streaming purposes?
[15:14:51 CEST] <Mavrik> You don't need qt-faststart if you add the faststart flag to ffmpeg encode command
[15:26:23 CEST] <nofacetimber> Thank you. Does the standard installation come with all the libraries I might need? I'm looking to install ffmpeg via Homebrew, and if I type "brew options ffmpeg" there are a lot of options there.
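The faststart flag Mavrik mentions looks like this (a sketch; filenames and codec choices are placeholders):

```shell
# +faststart moves the moov atom to the front of the file after encoding,
# so playback can begin before the whole mp4 has downloaded.
ffmpeg -i input.mov -c:v libx264 -c:a aac -movflags +faststart output.mp4
```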
[17:11:21 CEST] <Bear10> Does anyone know how to include a 3rd party lib into the makefile by any chance? Is there anything special I'd need to do other than add to LD_FLAGS?
[17:12:12 CEST] <pgorley> does ffmpeg do cpu detection on android? i haven't found anything related to cpu-features.h in the code
[17:13:28 CEST] <JEEB> the usual ways work
[17:13:36 CEST] <JEEB> for ARM generally there's very little runtime CPU detection
[17:13:40 CEST] <JEEB> IIRC
[17:14:00 CEST] <JEEB> for x86 the usual capability checks are done
[17:14:28 CEST] <pgorley> JEEB: thanks
[17:16:00 CEST] <pgorley> i know that arm has little cpu detection to be done, but i know that libvpx does it anyway, so i was just wondering
[17:16:53 CEST] <JEEB> yea if a dependency library does it that's up to it of course
[17:17:10 CEST] <JEEB> (if you are using it)
[17:17:54 CEST] <pgorley> i guess it's mostly neon that needs checking though
[17:18:38 CEST] <JEEB> yea, and in FFmpeg as far as I can tell it's a build time switch
[17:18:43 CEST] <pgorley> yup
[17:19:04 CEST] <JEEB> oh wait no
[17:19:09 CEST] <JEEB> it's checked if the compiler can output them
[17:19:21 CEST] <JEEB> but then checkasm does check for it
[17:19:21 CEST] <JEEB> tests/checkasm/checkasm.c:    { "NEON",     "neon",     AV_CPU_FLAG_NEON },
[17:19:29 CEST] <JEEB> or at least it has a flag for it
[17:32:56 CEST] <crot> Hello all, I am using ffmpeg to transcode live broadcast TV, and the audio on my output goes silent randomly. I want to detect that silence on the output and restart the stream. What is the best way to accomplish this?
[17:39:46 CEST] <thebombzen> crot: try using the silencedetect filter
[17:40:57 CEST] <thebombzen> !filter silencedetect
[17:40:57 CEST] <nfobot> thebombzen: http://ffmpeg.org/ffmpeg-filters.html#silencedetect
[17:43:08 CEST] <crot> thebombzen: that is what I'm currently testing just wanted to see if there were any better ways to go about it.
[17:43:25 CEST] <thebombzen> it is literally exactly designed to detect silence in a configurable way
[17:43:31 CEST] <thebombzen> I'm not sure what else you want
[17:49:39 CEST] <crot> No that sounds perfect.
[18:00:22 CEST] <Raigin> Hey guys, I've got an interesting question/setup I'd like your opinion on. I'm using ffmpeg for a 24/7 live-stream broadcast, driven by a python script that follows a schedule.
[18:00:50 CEST] <Raigin> What I'd like to do is configure ffmpeg (or setup something else) to have the option to "Go Live" from a secondary stream without having to stop and restart the stream.
[18:02:15 CEST] <Raigin> Kind of like a "barge" thing. Right now I have ffmpeg to listen in on a UDP port and that's what Python streams to. ffmpeg converts it and then sends it to another UDP stream that goes on the tele. Any ideas on how to do this?
[18:03:07 CEST] <Raigin> I'm thinking another ffmpeg process that has priority over the first one listening on a different port?
[19:07:55 CEST] <arthure> Is there a way to get metadata information about each AVFrame of an input video (e.g. PTS and pkt pos) without decoding them (e.g. calling avcodec_decode_video2 and avcodec_decode_audio4 on the old API)?
[19:13:31 CEST] <DHE> usually the PTS is still available in the AVPacket. did you check there?
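As DHE says, demuxed packets already carry timing, and ffprobe can dump it without opening any decoder (a sketch; the filename is a placeholder):

```shell
# Print pts_time and byte position of every video packet; only the
# demuxer runs, no decoding happens.
ffprobe -v error -select_streams v:0 -show_entries packet=pts_time,pos \
        -of csv=p=0 input.mp4
```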
[20:30:30 CEST] <mickie> Hi All, I want to transcode a video which has #0:1 (pcm_u8 (native) -> mp2 (native)) - but I'm getting an error: "Error while opening encoder for output stream #0:1 - maybe incorrect parameters such as bit_rate, rate, width or height". What is the correct syntax to achieve this?
[20:31:01 CEST] <mickie> i.e. input of raw pcm_u8 and output of mp2
[20:31:25 CEST] <klaxa> we'll need more than that
[20:34:20 CEST] <mickie> Thanks fflogger, I had missed -b:a in my syntax, as soon as I added it all worked out fine!
[22:22:38 CEST] <leeaandrob> hello, I am building an OTT system and I want to use ffmpeg to encode mp4 to HLS..
[22:23:06 CEST] <leeaandrob> what's the recommended way to do that?
[22:29:18 CEST] <user890104> leeaandrob: you can use nginx rtmp module, and push your video using ffmpeg .... -f flv rtmp://nginx-host/..., then use the HLS stream from nginx module (it also provides DASH)
[22:30:22 CEST] <furq> if this is vod then you can just use the hls muxer
[22:30:24 CEST] <furq> !muxer hls
[22:30:24 CEST] <nfobot> furq: http://ffmpeg.org/ffmpeg-formats.html#hls-1
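A minimal VOD sketch with the hls muxer furq points to (filenames, segment length, and codec choices are placeholders):

```shell
# Split into ~6 s MPEG-TS segments and write a complete VOD playlist
# (hls_list_size 0 keeps every segment in the playlist).
ffmpeg -i input.mp4 -c:v libx264 -c:a aac \
       -hls_time 6 -hls_list_size 0 -hls_playlist_type vod out.m3u8
```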
[22:30:36 CEST] <leeaandrob> so I was using the AWS Elastic Transcoder but it's very, very expensive, and I am thinking about replacing AWS with ffmpeg
[22:31:46 CEST] <leeaandrob> I will build an application using Python and Django to receive the file; after upload I will use ffmpeg to encode to HLS, and then stream it using S3 and CloudFront.
[22:32:21 CEST] <furq> do you really need to transcode
[22:36:21 CEST] <leeaandrob> yes.
[22:36:46 CEST] <leeaandrob> for example I am thinking of using a single core to create a queue and transcode the files that users upload
[22:37:09 CEST] <leeaandrob> but I don't know if there's a better way to do this, for example using threads, etc.
[22:45:13 CEST] <leeaandrob> what do you think about that @furq?
[22:49:44 CEST] <arthure2> DHE:
[23:11:27 CEST] <DHE> leeaandrob: are we talking live video, or just on-demand viewing? (ie. TV or Netflix)
[23:14:11 CEST] <leeaandrob> on-demand viewing DHE
[23:14:47 CEST] <leeaandrob> I am looking at AWS Lambda solutions to help me transcode automatically and then save to S3 buckets
[23:23:59 CEST] <DHE> that could work
[23:53:05 CEST] <leeaandrob> nice.. i will try
[00:00:00 CEST] --- Sat Aug  5 2017

