[Ffmpeg-devel-irc] ffmpeg.log.20150813

burek burek021 at gmail.com
Fri Aug 14 02:05:01 CEST 2015


[01:09:12 CEST] <Nolski> When encoding from vp9 -> huffyuv -> vp9, is it reasonable to expect skips and little cuts in the video?
[01:15:02 CEST] <klaxa> Nolski: no?
[01:15:03 CEST] <klaxa> not if everything is stored on disk
[01:16:26 CEST] <Nolski> Hm, it must be one of the edits I'm running. I wasn't sure if that sort of thing is common when switching encodings
[01:37:55 CEST] <Guest45151> hi guys! i'm trying to convert a bunch of jpgs to gif. for 48 pictures at 420x270, it takes less than 1s and the produced gif is ~1.6Mb (which is good), but the quality is not great (looks pixelated). I saw examples using scale and dither, but they all take a video as input. Providing a palette is not really an option. Any suggestions? thx
[01:39:27 CEST] <occupant> maybe you should be using imagemagick
[01:42:08 CEST] <Guest45151> We have great results with imagemagick, but it's too slow (> 2s for the same batch of images)
[01:49:17 CEST] <Nolski> Guest45151: Maybe the reason ffmpeg is faster is because it produces worse results
[01:50:07 CEST] <Nolski> and maybe running scale and dither would end up making ffmpeg run just as slowly as (if not slower than) imagemagick
[01:52:40 CEST] <chungy> ffmpeg uses a generic palette by default, you can generate a global palette suited for your set of images
[01:52:42 CEST] <klaxa> i would think so too
[01:52:58 CEST] <klaxa> imagemagick probably does some internal 2-pass encoding where it calculates an optimal palette
[01:53:00 CEST] <chungy> gif supports per-frame palettes but ffmpeg doesn't (yet) support making them (neither does imagemagick iirc)
[01:53:42 CEST] <chungy> http://blog.pkh.me/p/21-high-quality-gif-with-ffmpeg.html
[01:55:20 CEST] <chungy> the gifenc.sh script there will have its own two passes
[02:02:44 CEST] <Guest45151> ok, thanks!
[02:07:25 CEST] <chungy> https://web.archive.org/web/20140216175606/http://phil.ipal.org/tc.html
[02:08:17 CEST] <chungy> This was kind of more relevant before PNG became widely adopted :P
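For reference, a minimal sketch of the two-pass palette approach described in that post; the input pattern img%03d.jpg and the 12 fps rate are placeholders for whatever the image set actually uses:

    ffmpeg -framerate 12 -i img%03d.jpg -vf palettegen palette.png
    ffmpeg -framerate 12 -i img%03d.jpg -i palette.png -lavfi "[0:v][1:v]paletteuse" out.gif

The first pass computes a 256-colour palette suited to the images, the second maps the frames onto it with dithering, which is usually what fixes the pixelated look.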
[02:57:29 CEST] <qmr> how can I flip + recode h264 with least loss of quality?
[04:18:45 CEST] <c_14> qmr: ffmpeg -i input -vf vflip -c copy -c:v ffv1 -map 0 out.mkv
[06:39:43 CEST] <jY> recording a live stream.. is there a way to write like 60 second files
[06:47:15 CEST] <c_14> segment muxer
[06:55:21 CEST] <jY> thanks
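A minimal sketch of the segment muxer for this; the input URL and output pattern are placeholders, and -c copy only works if the stream's codecs fit the chosen output format:

    ffmpeg -i http://example.com/live -c copy -f segment -segment_time 60 -reset_timestamps 1 out%04d.mkv

When stream-copying, splits land on the first keyframe after each 60-second mark, so segment lengths are only approximate.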
[08:59:03 CEST] <k_sze> When using the ffmpeg library, is it generally possible to encode video frames with a certain codec but not mux it into one of the "standard" container formats? E.g. if I want to encode some video with H.264 and I want to push it to a server using my own HTTP transport stream format.
[09:25:00 CEST] <JEEB> k_sze: that's why it's libraries and not library
[09:25:30 CEST] <JEEB> you can replace io, muxer or even encoder if you want to :p
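Roughly what that looks like with the libavcodec API of that era: encode frames yourself and hand the raw packets to your own transport instead of a libavformat muxer. A sketch only; send_to_server() is a stand-in for the application's own HTTP code, error handling is omitted, and the context is assumed to come from avcodec_alloc_context3()/avcodec_open2() with an H.264 encoder:

    #include <libavcodec/avcodec.h>

    /* placeholder for the application's own transport */
    extern void send_to_server(const uint8_t *data, int size, int keyframe);

    /* pass frame = NULL at the end to flush delayed packets */
    void encode_and_send(AVCodecContext *ctx, AVFrame *frame)
    {
        AVPacket pkt;
        int got_packet = 0;

        av_init_packet(&pkt);
        pkt.data = NULL;   /* let the encoder allocate the payload */
        pkt.size = 0;

        if (avcodec_encode_video2(ctx, &pkt, frame, &got_packet) < 0)
            return;

        if (got_packet) {
            /* pkt.data/pkt.size is the raw bitstream (Annex B for libx264
             * by default), ready for whatever transport you like */
            send_to_server(pkt.data, pkt.size, pkt.flags & AV_PKT_FLAG_KEY);
            av_free_packet(&pkt);
        }
    }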
[11:34:19 CEST] <whald> hi! i have some encoding code which uses "avcodec_encode_video2" to, well, encode video. the packets generated there optionally get the AV_PKT_FLAG_KEY set iff the coded frame was a key frame, like so:
[11:34:25 CEST] <whald> pkt.flags |= c->coded_frame->key_frame ? AV_PKT_FLAG_KEY : 0
[11:36:12 CEST] <whald> i have two questions with this: a) is this actually necessary? b) with recent ffmpeg this gives a deprecated warning for accessing AVCodecContext::coded_frame, suggesting I should "use the quality factor packet side data instead" -- what does that mean?
[11:36:15 CEST] <Mavrik> ?
[11:37:09 CEST] <whald> Mavrik, slow typing, sorry. :-)
[11:39:07 CEST] <Mavrik> sec, checking source :)
[11:41:23 CEST] <Mavrik> Ok, it's like that: Some muxers actually check for the flag
[11:41:57 CEST] <Mavrik> But for some reason avcodec_encode_video2 doesn't set it, even though avcodec_encode_video did
[11:43:34 CEST] <whald> Mavrik, ok, that avcodec_encode_video2 doesn't do that is kind of odd from my limited understanding, but whatever. so i'll just have to figure out what that side data i'm supposed to use is.
[11:44:39 CEST] <Mavrik> side data? :)
[11:46:39 CEST] <whald> yes, the deprecation message says "use the quality factor packet side data instead", and I have no clue what that means
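For what it's worth, with avcodec_encode_video2() the returned packet's own flags are what muxers look at, and the number that used to live in coded_frame->quality is delivered as packet side data. A sketch against the API of that time, assuming the encoder fills both in (libx264 does):

    #include <libavcodec/avcodec.h>
    #include <libavutil/intreadwrite.h>

    /* call after avcodec_encode_video2() reported got_packet */
    static void inspect_packet(AVPacket *pkt)
    {
        /* a) keyframe info is on the packet itself, no coded_frame needed */
        int is_key = !!(pkt->flags & AV_PKT_FLAG_KEY);

        /* b) the "quality factor packet side data"; the type was named
         * AV_PKT_DATA_QUALITY_FACTOR when introduced and
         * AV_PKT_DATA_QUALITY_STATS in later releases */
        int side_size = 0;
        uint8_t *sd = av_packet_get_side_data(pkt, AV_PKT_DATA_QUALITY_FACTOR,
                                              &side_size);
        if (sd && side_size >= 4) {
            int quality = AV_RL32(sd); /* first 32 bits: the old ->quality value */
            (void)quality;
        }
        (void)is_key;
    }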
[15:25:11 CEST] <Pawel_> Hi
[15:36:58 CEST] <wizbit> how can one detect bpm of mp3 and flac files?
[15:37:26 CEST] <klaxa> listen to it, clap with the beat, measure the time between your claps
[15:37:40 CEST] <DHE> that would be acoustic analysis at best, and human intervention at worst
[15:38:02 CEST] Action: DHE prefers the head-banging method
[15:38:31 CEST] <klaxa> a quick search through the ffmpeg-filters doesn't make it look like there is a filter that detects beats or bpm
[15:38:52 CEST] <klaxa> i know that audacity has a beat detection "filter"/effect
[15:39:10 CEST] <wizbit> i want to re-tag my collection with bpms so i can create smart playlists
[15:39:18 CEST] <klaxa> there may be other software better suited for that
[15:42:28 CEST] <wizbit> there is an old tool for linux called 'bpmcount'
[15:42:32 CEST] <wizbit> ill see if i can get it working
[15:42:51 CEST] <wizbit> http://superuser.com/questions/129041/any-beat-detection-software-for-linux
[15:46:09 CEST] <wizbit> lets get this beast compiled
[15:46:10 CEST] <wizbit> https://github.com/mihow/bpmdj/tree/future-main
[16:31:30 CEST] <well0ne> hi guys, i'm trying to hardcode some srt-sub into the video
[16:32:03 CEST] <well0ne> but i always get "Invalid UTF-8 in decoded subtitles", and setting -sub_charenc is not resolving the issue
[16:32:05 CEST] <well0ne> any advice?
[16:32:08 CEST] <well0ne> i'm on windows
[16:32:37 CEST] <KarlFranz> well0ne: Convert the srt subtitles to utf-8 if they are not utf-8
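One way to do the conversion, assuming iconv is available and guessing Windows-1252 as the source encoding (replace with whatever the file actually is):

    iconv -f CP1252 -t UTF-8 subtitle.srt > subtitle_utf8.srt

Note that -sub_charenc is an input option, so it only takes effect when placed before the -i of the subtitle file it describes.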
[16:48:19 CEST] <well0ne> thanks
[16:48:30 CEST] <well0ne> but now i see that its only put into the container
[16:48:35 CEST] <well0ne> how do i hardcode
[16:49:02 CEST] <DHE> you need to transcode it to re-render the video with the subtitles on top
[16:49:20 CEST] <klaxa> well0ne: https://trac.ffmpeg.org/wiki/HowToBurnSubtitlesIntoVideo
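That wiki page boils down to something like this; the video stream has to be re-encoded for the subtitles to be drawn into the picture, so it cannot be combined with -c:v copy:

    ffmpeg -i video.avi -vf subtitles=subtitle.srt -c:a copy out.avi

If the .srt is not UTF-8, the subtitles filter also takes a charenc option, e.g. subtitles=subtitle.srt:charenc=CP1252.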
[16:49:31 CEST] <kosc> Hello. Is there any special channel for avconv users?
[16:49:39 CEST] <klaxa> #libav
[16:49:42 CEST] <klaxa> afaik
[16:50:20 CEST] <kosc> klaxa: thank you.
[17:00:21 CEST] <Synthase_> Good day all
[17:00:39 CEST] <Synthase_> Trying to compile ffmpeg, running as root, and encountering this: ./configure: line 3345: /tmp/ffconf.TvJRxTby.sh: Permission denied
[17:01:02 CEST] <Synthase_> Have confirmed that /tmp is accessible and writable
[17:05:03 CEST] <klaxa> are files in /tmp executable?
[17:06:21 CEST] <Synthase_> They are. Fixed just now by creating a throwaway directory in /root, and declaring it in the configure string. Kludge but working it seems.
[17:33:19 CEST] <fling> Can you guys help me with 'map' ?
[17:33:30 CEST] <fling> I have two input video files.
[17:34:01 CEST] <fling> I want to use left channel from the first video and right channel from the second video for the output stereo stream.
[17:34:09 CEST] <fling> How to do so the proper way?
[17:34:15 CEST] <well0ne> klaxa ffmpeg -i video.avi -vf subtitles=subtitle.srt out.avi is not working
[17:34:28 CEST] <well0ne> it appends the srt file, but does not hardcode it
[17:45:58 CEST] <klaxa> "not working" is pretty generic
[17:47:35 CEST] <klaxa> fling: have you had a look at -map_channel? https://ffmpeg.org/ffmpeg.html#toc-Advanced-options
[17:47:46 CEST] <fling> klaxa: looking at it.
[17:48:18 CEST] <fling> -vn -map_channel 0.1.0 left.flac
[17:48:25 CEST] <fling> klaxa: trying this ^
[17:48:44 CEST] <fling> How do I then merge all the things together properly? with -map?
[17:50:52 CEST] <klaxa> probably like: ffmpeg -i left.flac -i right.flac -map_channel 0.0.0 -map_channel 1.0.0 -c copy stereo.flac
[17:50:53 CEST] <klaxa> not tested
[17:52:53 CEST] <fling> klaxa: thanks ;>
[17:53:23 CEST] <fling> klaxa: not 1.0.0 but 1.0.1 ?
[17:53:55 CEST] <klaxa> assuming left.flac and right.flac are mono, it should be 1.0.0 since they only have one channel each, right?
[17:55:42 CEST] <klaxa> in the end, whatever works for you is the correct command
[17:56:11 CEST] <fling> ohh
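Worth noting: the -map_channel documentation says each output stream can only take channels from a single input stream, so combining channels from two different files is normally done with filters instead. A sketch, assuming both inputs carry stereo audio as their first audio stream and the goal is the left of the first plus the right of the second; -map 0:v keeps the first input's video:

    ffmpeg -i first.mkv -i second.mkv -filter_complex "[0:a][1:a]amerge=inputs=2,pan=stereo|c0=c0|c1=c3[a]" -map 0:v -map "[a]" -c:v copy -c:a flac out.mkv

amerge stacks the four channels, and pan picks channel 0 (left of the first input) and channel 3 (right of the second) for the stereo output.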
[19:15:35 CEST] <Cs123> Hi, I'm looking for advice on how to start a project on webcam video modification, to show the modified video on screen, in C#/C++. Is ffmpeg the tool for it?
[19:17:10 CEST] <Yahia> Hello.. Can somebody help me with some questions?
[19:17:13 CEST] <DHE> potentially. ffmpeg will let you decode the video and later encode it for whatever you're showing the video on
[19:17:37 CEST] <klaxa> Yahia: don't ask to ask, just ask
[19:18:01 CEST] <Yahia> Okay.. I have been trying to download ffmpeg and install it and i failed in all my tries
[19:18:06 CEST] <Yahia> Is there an automatic installer?
[19:18:08 CEST] <Cs123> DHE: thanks.. .so I'd say I need to look for something else. Any  ideas?
[19:18:15 CEST] <Yahia> or do I have to do it manually?
[19:18:25 CEST] <klaxa> i'm assuming you are on windows?
[19:18:37 CEST] <Yahia> Yea i am On windows
[19:19:07 CEST] <Yahia> i'm on that website, but i can't find any installer that i can use
[19:19:12 CEST] <Yahia> only found rar files
[19:19:21 CEST] <klaxa> extract the rar and you are good to go?
[19:19:25 CEST] <Yahia> okay
[19:19:29 CEST] <Yahia> Thank u
[19:19:52 CEST] <DHE> Cs123: so you want to do live video editing/manipulation?
[19:19:54 CEST] <klaxa> you can just run the executable from a command prompt
[19:20:01 CEST] <DHE> eg: overlay something on the current screen
[19:20:04 CEST] <Cs123> DHE: right..
[19:21:01 CEST] <Yahia> Yes but i am facing a problem, there is a program that requires me to have ffmpeg to render a video, i have the rar extracted and till now it keeps saying: FFMPEG NEEDED TO RENDER
[19:21:53 CEST] <klaxa> i have never seen an installer for ffmpeg for windows, you will have to ask the party providing the software for support
[19:22:10 CEST] <klaxa> maybe in the readme it says where the ffmpeg executable has to be placed?
[19:22:20 CEST] <Yahia> okay thanks for the support
[19:22:49 CEST] <Cs123> DHE: on a new screen..
[19:23:48 CEST] <Cs123> DHE: Like Cheese Webcam, where I'd work the code myself to modify the video
[19:24:35 CEST] <klaxa> isn't cheese open-source?
[19:26:08 CEST] <Cs123> If so (cheese = open), which tools do I need to rework the code?
[19:27:16 CEST] <klaxa> a text editor?
[19:27:33 CEST] <Cs123> ;) how about building & debugging?
[19:31:07 CEST] <klaxa> that's your task to find out :P
[19:32:40 CEST] <Cs123> klaxa: see above: to modify the incoming video with my own filter.. so how-to-get-started for a beginner ..
[19:37:32 CEST] <klaxa> well checkout the cheese source and read code
[19:37:39 CEST] <klaxa> run it in a debugger, follow the program
[19:37:57 CEST] <klaxa> i've come to like the nemiver debugger
[19:39:58 CEST] <Cs123> klaxa: ok, thanks, I'll look into that. Is there like a project file to get started, as with MS Visual Studio?
[19:40:34 CEST] <klaxa> phew dunno
[19:46:59 CEST] <TikityTik> How can I concatenate something and cut the duration?
[20:09:59 CEST] <qmr> c_14:  why switch container to mkv?
[20:12:42 CEST] <c_14> qmr: you never told me what the original container was nor which container you wanted so I just picked one I liked
[20:14:30 CEST] <c_14> Matroska has the benefit of supporting basically every codec under the sun plus a few other things.
[20:16:46 CEST] <qmr> mmm
[20:16:48 CEST] <qmr> I think mp4 or mov
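If the goal is to stay H.264 in mp4 rather than go lossless, the usual compromise is a fresh libx264 encode at a low CRF; 18 here is just a commonly used "visually close to lossless" ballpark:

    ffmpeg -i in.mp4 -vf vflip -c:v libx264 -crf 18 -preset slow -c:a copy out.mp4

Any re-encode loses a little quality; the ffv1/mkv route above avoids that entirely at the cost of much larger files.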
[20:21:11 CEST] <TikityTik> anyone know why you can't seem to cut videos without them freezing if you use -codec copy?
[20:21:26 CEST] <akurilin> quick question: can I generate both an ogv and a webm at the same time with one ffmpeg call? There's that whole -map functionality, wondering if that's the right way to go
[20:21:30 CEST] <TikityTik> and then when you re-encode  the video stream, the subs are out of sync?
[20:22:18 CEST] <c_14> akurilin: https://trac.ffmpeg.org/wiki/Creating%20multiple%20outputs
[20:23:34 CEST] <akurilin> c_14: let's say I do ffmpeg -i foo.mp3 foo.ogv foo.webm, is that the right simplest way to generate multiple conversions at once?
[20:23:45 CEST] <c_14> yes
[20:24:00 CEST] <akurilin> oops that was meant to be .mp4 for the first one
[20:24:12 CEST] <c_14> doesn't matter
[20:24:14 CEST] <akurilin> c_14: perf-wise, I'm better off doing the above rather than running ffmpeg twice, yes?
[20:24:34 CEST] <c_14> You'll save some memory&overhead, yes.
[20:25:07 CEST] <akurilin> perfect, thanks
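Spelled out with per-output codecs, since output options only apply to the file that follows them; the quality and bitrate numbers are placeholders:

    ffmpeg -i foo.mp4 -c:v libtheora -q:v 7 -c:a libvorbis foo.ogv -c:v libvpx -crf 10 -b:v 1M -c:a libvorbis foo.webm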
[20:25:56 CEST] <akurilin> I have a few hundred files I need to convert, is my best use of my multicore box to just run 1 ffmpeg process per core?
[20:26:08 CEST] <akurilin> or is ffmpeg able to max out all of the cores in one single process?
[20:28:16 CEST] <klaxa> tricky question
[20:28:29 CEST] <klaxa> depends on whether your CPU can encode faster than your disk can write
[20:29:05 CEST] <akurilin> I can see one core being maxed out, so I'd be surprised if I wasn't CPU-limited
[20:31:16 CEST] <klaxa> it also depends on the encoding settings, you'll have to find the optimum for yourself
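One simple way to keep several encodes running in parallel, assuming GNU find/xargs; -P 4 is a placeholder for the core count, the output naming is only an example, and -nostdin stops ffmpeg from swallowing the pipe's stdin:

    find . -name '*.mp4' -print0 | xargs -0 -P 4 -I{} ffmpeg -nostdin -i {} -c:v libvpx -c:a libvorbis {}.webm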
[20:33:26 CEST] <well0ne> it's me again, i'm having problems hardcoding subs into video with ffmpeg
[20:33:37 CEST] <well0ne> i followed the instructions
[20:33:47 CEST] <well0ne> but the subtitle is added in the container, not in video
[20:35:10 CEST] <well0ne> omg
[20:36:13 CEST] <well0ne> http://pastie.org/10349129
[20:36:20 CEST] <well0ne> https://trac.ffmpeg.org/wiki/HowToBurnSubtitlesIntoVideo
[20:36:22 CEST] <well0ne> tried both
[20:36:29 CEST] <well0ne> with ass / subtitles filter
[20:36:46 CEST] <klaxa> >Sorry, there is no pastie #10349129 or it has been removed. Why not create a new pastie?
[20:36:59 CEST] <well0ne> it works for me http://pastie.org/10349129
[20:37:13 CEST] <well0ne> Plain text   1 minute ago
[20:37:58 CEST] <klaxa> haha based firefox i guess? when i curl the url i get a paste
[20:38:10 CEST] <well0ne> but there is no need for a log file, believe me when i say that ffmpeg just puts the file into the container
[20:40:28 CEST] <well0ne> dont know what to do now
[20:40:39 CEST] <klaxa> fontconfig seems broken
[20:41:04 CEST] <klaxa> but i'm not sure that's messing everything up, shouldn't it fall back to a default font?
[20:42:01 CEST] <well0ne> eeewww
[20:42:07 CEST] <well0ne> my fault
[20:42:08 CEST] <well0ne> Fontconfig error: Cannot load default config file
[20:44:29 CEST] <klaxa> you might also have to extract the fonts from the mkv if the .ass is using fonts from there
[20:45:17 CEST] <klaxa> here is a poor man's script to do that: https://gist.github.com/klaxa/ecd82401d921a4b487dc
[20:45:21 CEST] <klaxa> needs mkvtoolnix
[20:53:09 CEST] <well0ne> i got it
[20:53:18 CEST] <well0ne> needed to create a fontconfig
[20:53:24 CEST] <well0ne> works for now
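For anyone hitting the same "Cannot load default config file" error with a Windows ffmpeg build: the usual workaround is to write a minimal fonts.conf and point fontconfig at it with the FONTCONFIG_FILE environment variable (e.g. set FONTCONFIG_FILE=C:\fontconfig\fonts.conf). The paths below are only an example:

    <?xml version="1.0"?>
    <!DOCTYPE fontconfig SYSTEM "fonts.dtd">
    <fontconfig>
      <dir>C:/Windows/Fonts</dir>
      <cachedir>C:/fontconfig/cache</cachedir>
    </fontconfig>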
[00:00:00 CEST] --- Fri Aug 14 2015


More information about the Ffmpeg-devel-irc mailing list