[Ffmpeg-devel-irc] ffmpeg.log.20140912
burek
burek021 at gmail.com
Sat Sep 13 02:05:01 CEST 2014
[00:00] <tab1293> how can I specify the output to be a mpeg audio elementary stream?
[00:00] <_dunno_> okay - it's something about the aspect ratio - when i halve x and y coordinates i can also downscale the video - why? and how to overcome?
[00:03] <_dunno_> tab1293: where's the prob? if you're using the right acodec and you're still running into problems, maybe setting an atag may solve it
[00:04] <tab1293> _dunno_: here is the command I am running to transcode an mp3 file to an AAC HLS stream
[00:04] <tab1293> ffmpeg -i sample.mp3 -codec:a libfdk_aac -b:a 128k -map 0:a:0 -f segment -segment_time 10 -segment_list outputlist.m3u8 -segment_format mpegts 'output%03d.m4a'
[00:05] <tab1293> with this command each segment is an MPEG transport stream
[00:05] <tab1293> I would like it to be an elementary stream instead
[00:15] <valder> is there a way to have av_dump_format output to Logcat?
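(A minimal sketch, not from the log: av_dump_format() prints through av_log(), so on Android one way to see its output in Logcat is to install a log callback that forwards to the system log. The "ffmpeg" tag below is arbitrary.)

    #include <android/log.h>
    #include <libavutil/log.h>

    /* Forward ffmpeg's log output (including av_dump_format) to Logcat. */
    static void logcat_log_cb(void *ptr, int level, const char *fmt, va_list vl)
    {
        if (level > av_log_get_level())
            return;
        __android_log_vprint(ANDROID_LOG_INFO, "ffmpeg", fmt, vl);
    }

    /* Call once before av_dump_format():
     *     av_log_set_callback(logcat_log_cb);
     */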
[00:27] <_dunno_> it's quite trivial - when scale=x:y are not multiples of crop=x:y you have to add '-aspect cropX:cropY' to the cmdline
[00:58] <pyBlob> is there an ffmpeg native filter that does the same as mp=eq2?
[01:11] <pyBlob> ... the curves filter seems to be it
[01:17] <ac_slater_> I know I should only say nice things ... but man, libavformat/libavcodec are not at all friendly. Documentation is scarce and I literally have to run everything through a debugger/callgraph to find out the simplest things - i.e. how to set the video size on the rawvideo decoder/demuxer
[01:17] <ac_slater_> +1 for ffmpeg, -1 for bad/lack of documentation :(
[01:33] <c_14> Did you try reading the doxygen?
[01:39] <ac_slater_> c_14: yeah, I did; it's mostly just type info, etc., though there are some bits regarding simple usage
[01:39] <ac_slater_> c_14: sadly, it doesn't really do much to explain the plugin process or creating new ones.
[01:39] <ac_slater_> :(
[01:47] <llogan> ac_slater_: was anything in doc/examples useful?
[01:48] <dahat> Q: I'm developing an app to play basic videos with FFMPEG as a backend... and currently it accepts a local path into avformat_open_input(), now I am trying to expand it to target RTMP files, currently I can do the following at the command line, how can I apply the same args to avformat_open_input()? ffmpeg -rtmp_app <AppName> -rtmp_conn <AuthToken> -rtmp_playpath <PlayPath> -i rtmp://streaming.example.com/
[01:49] <ac_slater_> llogan: sort of. I guess I always seem to do the things that boilerplate examples won't cover. They were helpful getting started though.
[01:50] <ac_slater_> dahat: one way to start is to compile the ffmpeg binary in debug mode, run `callgrind` on it, and get a call graph.
[01:50] <ac_slater_> dahat: that's time consuming though
[01:55] <dahat> ac_slater: given how much time I've spent trying to get FFMPEG to work in my environment... what's another few hours? ;) Q then... is --enable-debug=LEVEL the way to do that? And does it support non-GCC compilers?
[01:58] <dahat> Hrm... configure looks to add the -gX argument (where X is the debug level)... MSVC would use /Zi I believe
[02:13] <ac_slater_> dahat: I know nothing about windows, but `-g` is used for clang and gcc ... I do `-ggdb3` and it works well
[02:14] <ac_slater_> dahat: BUT, it looks like all you need to do is set options.
[02:14] <ac_slater_> (for the demux/decoder)
[02:15] <ac_slater_> dahat: specifically http://www.ffmpeg.org/doxygen/2.1/rtmpproto_8c.html#a01616ba57274bb27f2e3f86ebc252203
[02:16] <ac_slater_> dahat: you can pass an AVDictionary to avformat_open_input with those values filled in
[02:18] <ac_slater_> dahat: or maybe use AVOption ... I get those two mixed up and am facing the same issue
[02:19] <ac_slater_> dahat: and by AVOption, I mean av_opt_set*
[02:28] <relaxed> dahat ac_slater_ are you guys on the libav-user mailing list?
[02:28] <ac_slater_> relaxed: I am, yes.
[02:28] <ac_slater_> relaxed: I rarely get replies though ;), it's ok
[02:48] <dahat> relaxed: I am not
[02:49] <dahat> ac_slater: AVOption or AVDictionary are a couple of ways I've suspected it could be done, but haven't found a bit of example code that demonstrates their use yet
[02:52] <relaxed> ok, I was just thinking you might get some answers there.
[02:52] <ac_slater_> dahat: https://libav.org/doxygen/master/group__lavf__decoding.html ... under "Opening A Media File"
[02:53] <ac_slater_> (I've never gotten AVOptions to work via av_opt_set* ... one day)
[02:53] <ac_slater_> also, I know that's libav, not ffmpeg... it still works, it just had a clear example
[02:53] <ac_slater_> You can find some examples in the `avcodec.c` example in 2.2
[02:54] <ac_slater_> (and the other ones, but they are mostly for audio... same idea)
[02:56] <llogan> ac_slater_: https://ffmpeg.org/doxygen/trunk/group__lavf__decoding.html
[02:57] <ac_slater_> llogan: thanks!
[02:57] <ac_slater_> I should have known
[02:57] <ac_slater_> dahat: use that ^
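(For reference, a minimal sketch of the approach pointed at above, using the option names from dahat's command line; the URL and values are placeholders, error handling is mostly omitted, and on a 2014-era build av_register_all() must have been called first.)

    #include <libavformat/avformat.h>
    #include <libavutil/dict.h>

    static int open_rtmp(const char *url)
    {
        AVFormatContext *fmt = NULL;
        AVDictionary *opts = NULL;
        int ret;

        /* Same options as "-rtmp_app ... -rtmp_conn ... -rtmp_playpath ..." on the CLI. */
        av_dict_set(&opts, "rtmp_app",      "AppName",   0);
        av_dict_set(&opts, "rtmp_conn",     "AuthToken", 0);
        av_dict_set(&opts, "rtmp_playpath", "PlayPath",  0);

        ret = avformat_open_input(&fmt, url, NULL, &opts);
        av_dict_free(&opts);   /* entries left in opts were not recognized/consumed */
        if (ret < 0)
            return ret;

        avformat_find_stream_info(fmt, NULL);
        av_dump_format(fmt, 0, url, 0);
        avformat_close_input(&fmt);
        return 0;
    }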
[03:19] <Baked_Cake> windows malicious tools found my kms service, oh noes
[05:04] <dahat> ac_slater: That looks like it might just do it! will try now, thanks :)
[07:31] <dahat> ac_slater: thank you thank you thank you! your link gave me the insight needed to get my prototype functioning
[07:32] <dahat> I've been silent for a while as I've been singing and dancing around the house in happiness that my prototyping of something is finally doing something I want
[07:33] <dahat> bah! seems he has already left at a reasonable hour of the day... I should do the same... but there is too much to do!
[09:00] <omnivibe> Here is my command, which is giving me errors: http://pastebin.com/NpmvCLpD
[09:00] <omnivibe> when I leave out the "-profile:v main -level 4.0", it works...
[09:01] <relaxed> pastebin the error
[09:03] <omnivibe> http://pastebin.com/UnjHdRp4
[09:04] <relaxed> try -level 40
[09:05] <omnivibe> same error. Turns out anyway it's not the -level part that's a problem, but the -profile
[09:07] <relaxed> it would help if you pastebinned all the output
[09:10] <relaxed> -profile:v main works here, maybe your build is old
[09:12] <omnivibe> http://pastebin.com/jvur10kZ
[09:13] <omnivibe> nah, it's a recent build I think...
[09:13] <omnivibe> ffmpeg version 2.3.git Copyright (c) 2000-2014 the FFmpeg developers
[09:14] <relaxed> that's the longest metadata output I've ever seen
[09:15] <relaxed> are those edit points?
[09:15] <relaxed> add -pix_fmt yuv420p
[09:16] <relaxed> I think you were trying to use an unsupported color space with the main profile
[09:22] <toeshred> is there a way to change which audio track is the default (shown as "(default)" when looking at the streams)?
[09:22] <omnivibe> oh, that did it!
[09:22] <omnivibe> the pix_fmt
[09:23] <toeshred> i tried swapping the order of the first and second audio track, but that doesn't change which track is played by default.
[09:23] <relaxed> toeshred: try again with -map_metadata -1
[09:23] <omnivibe> can you explain a bit about what that does, and do I need to pay attention to something for all the different source files to make sure there are no color shifts?
[09:24] <toeshred> relaxed: ok, i'll give that a try. i did use -map originally, but not -map_metadata -1
[09:25] <relaxed> toeshred: they're different
[09:26] <relaxed> omnivibe: certain profiles only support specific color spaces. http://en.wikipedia.org/wiki/H.264/MPEG-4_AVC#Profiles
[09:28] <relaxed> and main only supports 4:2:0, which is why you needed -pix_fmt yuv420p.
[09:30] <relaxed> while the input color space was yuv444p10le
[09:32] <omnivibe> damn, it definitely lost some detail like that :( shadows are more crushed, picture is overall darker
[09:32] <omnivibe> but I guess going from 4:4:4 to 4:2:0 is going to leave something to be desired...
[09:33] <relaxed> do you need main 4:2:0?
[09:34] <K4T> frame=2434691 fps= 34 q=17.0 size=26204796kB time=13:31:33.79 bitrate=4408.6kbit -> this is a log line from my ffmpeg console, which has been up for almost 24h. I noticed that the fps parameter is now 34, but when I started ffmpeg it was 50, and it should always be 50 fps. Can someone tell me what happened? Right now my CPU usage is 15% and RAM 34%.
[09:35] <omnivibe> yeah, targeting ipads
[09:35] <omnivibe> (so main level 4.0)
[09:40] <relaxed> omnivibe: see if -sws_flags accurate_rnd makes any difference
[09:49] <omnivibe> Undefined constant or missing '(' in 'accurate_rn'
[09:49] <omnivibe> oh hehe
[09:49] <omnivibe> nm
[10:02] <omnivibe> same problem with darkness :(
[10:05] <relaxed> omnivibe: the algorithms are listed in "man ffmpeg-scaler", maybe chroma* ?
[10:22] <omnivibe> tried -sws_flags full_chroma_int+full_chroma_inp, still dark :(
[10:37] <relaxed> omnivibe: does it look dark when you play the input with ffplay?
[10:45] <omnivibe> can't really do that since my virtual machine is console only
[10:46] <relaxed> can you put a sample of the video up somewhere?
[11:17] <Keshl> How does ffmpeg get keypresses while it's rendering? Does it use any special libraries or anything?
[11:27] <saste> Keshl, no special library
[11:28] <Keshl> Awesome. o_o. How's it do it, then? o.O I don't know of any special built-in functions for C++ that do it.. (ffmpeg's C++, right?)
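(ffmpeg is plain C; on POSIX systems the CLI simply switches the terminal into non-canonical, no-echo mode via termios and polls stdin between updates. A simplified sketch of that idea, not the actual ffmpeg.c code:)

    #include <termios.h>
    #include <unistd.h>
    #include <sys/select.h>

    static struct termios oldt;

    /* Put stdin into non-canonical, no-echo mode so single keypresses
     * arrive immediately; restore oldt with tcsetattr() on exit. */
    static void term_raw(void)
    {
        struct termios t;
        tcgetattr(STDIN_FILENO, &oldt);
        t = oldt;
        t.c_lflag &= ~(ICANON | ECHO);
        tcsetattr(STDIN_FILENO, TCSANOW, &t);
    }

    /* Return the pressed key, or -1 if nothing is pending (non-blocking). */
    static int read_key(void)
    {
        fd_set rfds;
        struct timeval tv = { 0, 0 };
        unsigned char ch;

        FD_ZERO(&rfds);
        FD_SET(STDIN_FILENO, &rfds);
        if (select(STDIN_FILENO + 1, &rfds, NULL, NULL, &tv) > 0 &&
            read(STDIN_FILENO, &ch, 1) == 1)
            return ch;
        return -1;
    }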
[11:37] <michaelni> omnivibe, if you think the darkness is a bug, then please open a bug report on trac with a reproducible testcase
[12:08] <K4T> about the ffmpeg console output format - is it possible to get the number of days in the time column too? Now the format is HH:mm:ss
[12:08] <K4T> I would like to see days too
[12:31] <paolo_> Hi, I want to stream my desktop with a webcam overlay. I am struggling with the filters. When I am recording to a file the webcam is missing, and when I am using ffplay the video window won't open. I would appreciate some help. Here is what I do:
[12:31] <paolo_> ffmpeg -video_size 1680x1050 -framerate 25 -f x11grab -i :0.0+0,0 -vf "[in] scale=1152:720 [tmp]; movie=/dev/video0:f=video4linux2, scale=320:-1, fps [webcam]; [tmp][webcam] overlay=main_w-overlay_w-10:main_h-overlay_h-10 [out]" test.mp4
[12:37] <relaxed> paolo_: -filter_complex is recommended now for multiple inputs
[12:39] <paolo_> relaxed: thank you for your answer. but it can't be used with ffplay right ?
[12:42] <relaxed> ffplay can't open test.mp4 until ffmpeg is finished encoding. Use test.mkv as your output instead.
[12:43] <paolo_> relaxed: I am using ffplay to create the filter, so I can see the results without encoding first.
[12:45] <paolo_> relaxed: does the representation of the filtergraph remain the same between -vf and -filter_complex
[12:45] <paolo_> ?
[12:46] <relaxed> https://www.ffmpeg.org/ffmpeg-filters.html#overlay-1
[12:46] <omnivibe> Thanks for the help everyone
[12:46] <omnivibe> peace
[12:48] <relaxed> paolo_: ffplay does not support filter_complex, which is annoying
[12:53] <paolo_> relaxed: "Stream specifier 'in' in filtergraph description [...] matches no streams"
[12:54] <paolo_> relaxed: i used ffmpeg and replaced vf with filter_complex.
[13:00] <paolo_> I think I will switch to a stable version first and see if it still happens
[13:00] <paolo_> relaxed: thank you very much for your help. i appreciate it.
[13:01] <halt2> Hi All, I will probably ask something that has been asked a million times here before, but I can't really find the answer, so forgive me for asking it again. I have huge MOV files and I want to convert them to .MP4 to be able to play them with flow-player (web), if possible without much quality loss. I know nothing about codecs, encoding, etc., so please keep it simple
[13:06] <__jack__> halt2: try something like this: ffmpeg -i input.mov -c copy output.mp4
[13:25] <someone-noone> Can ffmpeg encode vp6 somehow?
[13:26] <ubitux> relaxed: you can use -f sdl - with ffmpeg if you want to use -filter_complex and visualize
[13:26] <ubitux> (or -f gl, -f xv)
[13:36] <paolo_> I am using -filter_complex now but the overlay is still missing. any hints ?
[13:36] <paolo_> here is what I do now
[13:37] <paolo_> ffmpeg -video_size 1680x1050 -framerate 25 -f x11grab -i :0.0+0,0 -filter_complex "[0:v] scale=1152:720 [tmp]; movie=/dev/video0:f=video4linux2, scale=320:-1, fps [webcam]; [tmp][webcam] overlay=main_w-overlay_w-10:main_h-overlay_h-10 [out]" -map '[out]' test.mp4
[13:37] <paolo_> I am sure the webcam is recording because the LED is on
[13:51] <ubitux> paolo_: are you sure of the overlay position?
[13:52] <ubitux> also, if you use -filter_complex, you can drop the movie= ..
[13:52] <paolo_> ubitux: i am not sure about the overlay position. I have 2 screens.
[13:53] <ubitux> try just "overlay" first
[13:53] <paolo_> ubitux: i did
[13:53] <ubitux> and?
[13:53] <ubitux> nothing is overlayed?
[13:53] <paolo_> it didn't work :-)
[13:55] <paolo_> ubitux: i'll try another corner. 0:0 is the upper left ?
[13:56] <paolo_> it doesn't work :-/
[13:56] <ubitux> going to have a look
[13:56] <ubitux> just a moment
[13:57] <ubitux> ffmpeg -video_size 1680x1050 -framerate 25 -f x11grab -i :0.0+0,0 -f v4l2 -i /dev/video0 -filter_complex "[0:v] scale=1152:720 [tmp]; [1:v] scale=320:-1, fps [webcam]; [tmp][webcam] overlay [out]" -map '[out]'
[13:57] <ubitux> try this
[13:58] <paolo_> ubitux: your settings are working
[13:59] <ubitux> now why it didn't work with the amovie, i don't really know
[13:59] <ubitux> maybe the pts were kind of broken or something
[13:59] <ubitux> saste might know better
[13:59] <brontosaurusrex> what would be a 2014 cli for converting to jpg image sequence?
[14:00] <ubitux> ffmpeg -i ... out%03d.jpg
[14:00] <brontosaurusrex> i have "-f image2 -an -qmax 1" as well
[14:02] <paolo_> ubitux: thank you for the help !
[14:03] <brontosaurusrex> ubitux: thanks
[14:04] <ubitux> paolo_: you can probably drop the fps filter btw
[14:04] <paolo_> ubitux: OK :-)
[15:00] <brontosaurusrex> ubitux: uhmm, how to control compression?
[15:02] <ubitux> qscale, qmax, or something
[15:02] <__jack__> Use -qscale:v (or the alias -q:v) as an output option. The full range for JPEG is 1-31 with 31 being the worst quality. I recommend trying values of 2-5.
[15:03] <__jack__> (dixit some links on the web)
[15:08] <brontosaurusrex> -q:v 1 would be max quality?
[15:10] <__jack__> yep, I guess, but remember, you won't get better quality than the source
[15:12] <brontosaurusrex> __jack__: sure, I have like 100 droplets and I am trying to make them all work with the new ffmpeg; I can't even remember why I ever needed a jpg sequence at all now
[16:10] <pmart> Hi. Should I use the VBR mode of libfdk_aac? It is considered experimental, but I much prefer VBR in general (e.g. CBR encodes silence at the same bitrate).
[16:13] <pmart> I'm talking about a document with actor voices but no music
[16:13] <halt2> __jack__: Thanks!
[17:13] <sine0> does ffmpeg do batch resizing of images, such as jpg?
[17:14] <__jack__> bah, use something else
[17:16] <sine0> i am
[17:16] <__jack__> like imagemagick, which may be far better than ffmpeg for such task (example: convert input.jpg -resize 64x64 output.jpg)
[17:18] <ubitux> sine0: sure you can
[17:19] <ubitux> __jack__: why?
[17:20] <__jack__> dunno, in my head, ffmpeg is designed to "record, convert & stream audio & video", which is a bit outside of "image resizing", and thus may not be the best tool for that
[17:21] <ubitux> ffmpeg -i input.jpg -vf scale=64:64:force_original_aspect_ratio=decrease output.jpg should do exactly the same
[18:35] <valder> hey all. I need some assistance. I'm having a problem adding audio to my video that I'm creating. I'm splicing videos from multiple sources into a single output. I can't seem to get the associated audio to be a part of the final output. All my videos come out silent. here's my code: http://pastebin.com/rAZkU3XZ the code I have to write audio to the output file starts at line 911.
[18:35] <valder> any help would be appreciated.
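(For comparison, the usual stream-copy pattern for carrying one input's audio into an output looks roughly like the sketch below; the names are illustrative, it uses the 2014-era AVStream->codec API, and error handling is omitted. It is not valder's code.)

    #include <libavformat/avformat.h>
    #include <libavutil/mathematics.h>

    /* Copy the audio stream "audio_index" of in_ctx into out_ctx (already opened,
     * header not yet written). Packet timestamps must be rescaled from the input
     * stream's time_base to the output stream's before writing. */
    static void copy_audio(AVFormatContext *in_ctx, AVFormatContext *out_ctx,
                           int audio_index)
    {
        AVStream *in_ast  = in_ctx->streams[audio_index];
        AVStream *out_ast = avformat_new_stream(out_ctx, NULL);
        AVPacket pkt;

        avcodec_copy_context(out_ast->codec, in_ast->codec);
        out_ast->codec->codec_tag = 0;

        avformat_write_header(out_ctx, NULL);
        while (av_read_frame(in_ctx, &pkt) >= 0) {
            if (pkt.stream_index == audio_index) {
                pkt.stream_index = out_ast->index;
                pkt.pts = av_rescale_q_rnd(pkt.pts, in_ast->time_base, out_ast->time_base,
                                           AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX);
                pkt.dts = av_rescale_q_rnd(pkt.dts, in_ast->time_base, out_ast->time_base,
                                           AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX);
                pkt.duration = av_rescale_q(pkt.duration, in_ast->time_base, out_ast->time_base);
                av_interleaved_write_frame(out_ctx, &pkt);
            }
            av_free_packet(&pkt);
        }
        av_write_trailer(out_ctx);
    }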
[18:38] <tab1293> can anyone tell me how I can output an elementary audio stream with ffmpeg?
[19:09] <relaxed> tab1293: what type of stream?
[19:23] <vlatkozelka> hi, I'm trying to save a udp stream to .ts file segments with -f segment -segment_time 60, and using -copyts to keep the timestamps untouched ... but with -copyts I get 1-second-long files (with the right timestamps), and if I don't use -copyts I lose the timestamps
[19:36] <stevehehehe> what's the best way to do mp3 -> mp4 (mp3 + jpg)?
[19:37] <stevehehehe> specifically for youtube. this has been way harder than I thought it would be
[20:32] <tab1293> relaxed: aac
[21:01] <haspor> I'm trying to decode the adpcm_swf audio stream of an FLV file but all I get is really messed-up sound; for mp3 and aac it works perfectly. Any special tricks I need in my code to make it work?
[21:02] <haspor> with ffplay.exe it works ok, so the FLV file itself is ok
[21:02] <relaxed> tab1293: ffmpeg -i input -map 0:a -c copy -f aac output.aac
[21:03] <tab1293> relaxed: thank you
[21:06] <relaxed> tab1293: that should be -f adts instead of -f aac
[21:14] <stevehehehe> to create basically an mp4 from an mp3 + image really shouldn't take this long, surely?
[21:15] <relaxed> stevehehehe: use -shortest
[21:16] <stevehehehe> could you give me an example commandline? i'm not too familiar with ffmpeg
[21:16] <relaxed> what are you doing?
[21:16] <stevehehehe> basically trying to upload an mp3 to youtube
[21:16] <stevehehehe> with a nice image as a background/video
[21:20] <relaxed> ffmpeg -i input.mp3 -loop 1 -i input.jpg -shortest -c:a copy -c:v libx264 -tune stillimage -movflags faststart output.mp4
[21:22] <relaxed> add -pix_fmt yuv420p
[21:22] <stevehehehe> thanks thats going pretty fast
[21:33] <brontosaurusrex> relaxed: is movflags faststart not the default behaviour?
[21:42] <llogan> brontosaurusrex: it is not the default
[21:43] <relaxed> movflags slowstart
[21:45] <brontosaurusrex> ok, and thats the same as using that qt-faststart tool?
[21:46] <relaxed> yes
[21:48] <brontosaurusrex> just re-bash-ing some of my old scripts, but I found so many stupid things it will probably take me a week to fix it all...
[21:52] <lipizzan> I'm using ffmpeg v0.7.15, and encoding dv to (mp4) h.264 using libx264 -vpre lossless_medium -crf 20. The bitrate can vary WIDELY depending on the source image. Is this to be expected?
[21:53] <brontosaurusrex> lipizzan: yes, especially if you use -tune film < just my experience
[21:53] <lipizzan> ~700 kb/s for a largely black & white image with < 20% region change, vs 4000+ kb/s for a frame of tree leaves.
[21:57] <llogan> lipizzan: i have a feeling that 0.7 branch lacks the old preset "emulation" files that you're using (lossless_medium). but i may be wrong.
[21:58] <lipizzan> I'm piping dvgrab to ffmpeg, but I'll paste the cmd shortly.
[21:59] <llogan> doesn't have to be the whole file. just a short output will suffice
[22:01] <llogan> but yes, input complexity will affect bitrate, and DV can be "complex" due to noise (such as from a low light situation using a crappy camcorder), etc.
[22:07] <lipizzan> http://pastie.org/9548738
[22:18] <lipizzan> I guess my concern is making sure I'm encoding 640x480 source DV into h.264 efficiently and with decent quality. I don't wanna waste bits if I don't have to.
[22:21] <llogan> lipizzan: i'd start with a more recent ffmpeg build.
[22:22] <llogan> and use a real preset instead of "lossless_medium"
[22:22] <llogan> which no longer exists
[22:26] <llogan> ffmpeg -i - -i logo.png -filter_complex "[0:v]yadif=1:1,hqdn3d=3,format=yuv420p[bg];[bg][1:v]overlay=W-w-3:H-h-62,fade=in:0:30[out]" -map "[out]" -map 0:a:0 -c:v libx264 -preset slow -crf 24 -movflags +faststart -c:a libfdk_aac -vbr 4 -metadata title="title" output.mp4
[22:26] <llogan> or something like that
[22:26] <llogan> you can dump movie filter
[22:27] <lipizzan> llogan: I know, I know, but I'm using the Linux distro AVLinux v6.0.3, and this is what it has. Also, I'm doing a lot with the script and don't have time to rework it at the moment.
[22:27] <llogan> format should be in the second filterchain instead of the first in my example
[22:27] <llogan> you can just download a recent build (but you would have to use the native AAC encoder)
[22:28] <llogan> not that libfaac is that great anyway
[22:28] <lipizzan> llogan: can I use neroAccEncode?
[22:28] <llogan> sure. you can pipe to it, then remux the audio from nero
[22:30] <lipizzan> llogan: remux? Is that a separate step? I try to do this all in one pass.
[22:30] <llogan> yes, a separate step
[22:31] <lipizzan> llogan: I'll have to work on that, since there is another step where I add a "sponsors" leader to the video. I want good audio quality; is neroAccEncode worth it?
[22:32] <llogan> i don't know. you'll just have to compare it yourself and listen
[22:33] <lipizzan> llogan: others, in the past, have indicated that it was probably preferable to libfaac.
[22:33] <llogan> since quality is subjective, and your level of "do I care" is too, then it will have to be up to you
[22:34] <lipizzan> llogan: thx. probably worth my time to chek it out.
[22:34] <llogan> i'm a lazy bastard, so i just use libfdk_aac (but i compile my own ffmpeg)
[22:35] <lipizzan> llogan: I'm not familiar with libfdk_aac. I need to check it out. thx!
[22:35] <llogan> https://trac.ffmpeg.org/wiki/Encode/AAC
[22:36] <llogan> https://trac.ffmpeg.org/wiki/Encode/H.264
[22:36] <llogan> 264 - core 118 is old.
[00:00] --- Sat Sep 13 2014