From burek021 at gmail.com Thu Aug 1 02:05:01 2013 From: burek021 at gmail.com (burek) Date: Thu, 1 Aug 2013 02:05:01 +0200 (CEST) Subject: [Ffmpeg-devel-irc] ffmpeg.log.20130731 Message-ID: <20130801000501.4654018A02C0@apolo.teamnet.rs> [00:16] Hello! I've written some code using the decode example, except I'm using huffyuv and RGB24. However, the output file fails to be loaded in to any media player, including ffmpeg. The only error I get from ffmpeg is "Invalid data found when processing input". My code is here: http://sprunge.us/iHcG Any help would be ... helpful? [00:21] Jookia: without a container format, how would ffmpeg know what codec to use? [00:22] teratorn: oh. i guess H264/MPG have magic numbers that ffmpeg picks up on? [00:23] well, it looks at the file extension and tries to guess [00:23] if you dont specify [00:24] you *can* create mpeg2 video by concatenating encoded packets together in a file, but I'm not sure that will work for h264 [00:24] well, mpeg1 video anyway, im not sure about mpeg2 [00:24] but you should probably just use a container format like matroska [00:24] The example encoding/decoding must do that then? [00:24] or something [00:24] they are doing mpeg1 right? [00:24] They're doing h264 and mpg, yeah [00:25] raw encoded h264 packets concatenated in a file? [00:25] i'm not sure how it recognizes that on playback. some combination of file extension and guessing [00:25] yeah [00:26] Is there an example on how to write the packets in to a container (is that what I'm suppose to do?) [00:26] muxing.c ? [00:27] yeah look there [00:27] you'll open up an AVFormatContext using one of the nice APIs [00:27] then avformat_write_packet() calls writes your encoded packets to a stream in that format [00:28] Ah, thanks so much [00:28] good luck [01:18] I'm piping raw images into ffmpeg to create a video. The images are not taken at a constant framerate, so is there any way to timestamp those images as they go in? [01:18] so as to encode in "realtime" rather than as fast as possible at a specific framerate [01:24] did you try specifying -r for the input? [01:32] klaxa: yes, but -r sets a constant framerate while the frames are coming in at a variable frame rate [01:32] ah... hmm... [01:33] kevint_, we added a new image2 option [01:33] ts_from_file or something [01:33] unfortunately it is undocumented [01:34] ffmpeg -h demuxer=image2 [01:35] that's okay I can walk through the code... how does it detect the timestamp of the incoming images? [01:35] do I prepend it to the image? [01:36] "If set to 1, will set frame timestamp to modification time of image file" [01:36] wow [01:36] how accurate should that be? milliseconds? [01:40] looks like 1 second resolution, which isn't good enough unfortunately [01:41] Hello @ all [01:42] Something curious happened, it seems when I try to parse "deblockalpha" parameter to the new ffmpeg it does not recognize the option [01:42] Seems it is depreciated, does anybody know if it was replaced by another argument? [01:43] err I mean parameter [01:43] kevint_, no it is a different thing [01:43] it is taking the time from the file stats [01:43] can't be used to provide timestamps [01:44] there are some tickets related to the feature you ask for, please upvote the tickets [01:44] i plan to work on it, but I don't know when [01:45] Hello saste [01:45] this channel almost feels spooky ^^ [01:46] saste - What if the file's last modified field is the timestamp I want to provide the frame with? 
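The command-line tool could not take per-frame timestamps from a pipe at the time, but anyone driving the encoder through the API can stamp each frame with its capture time instead of a fixed rate. A minimal sketch, assuming the caller opened the encoder with a suitable time_base and that start_us records when capture began; the function name is only illustrative:

    #include <libavcodec/avcodec.h>
    #include <libavutil/mathematics.h>
    #include <libavutil/time.h>

    /* Stamp a grabbed frame with wall-clock time relative to start_us and
     * hand it to the encoder.  Assumes enc->time_base was set by the caller. */
    static int encode_grabbed_frame(AVCodecContext *enc, AVFrame *frame,
                                    AVPacket *pkt, int *got_packet,
                                    int64_t start_us)
    {
        frame->pts = av_rescale_q(av_gettime() - start_us,
                                  (AVRational){1, 1000000}, enc->time_base);
        return avcodec_encode_video2(enc, pkt, frame, got_packet);
    }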
[01:46] I am creating the image file at the moment I want the frame to be timestamped as [01:54] wb saste [01:54] Do you happen to be involved with the development of ffmpeg? [01:58] t4nk674, yes why? [01:59] Maybe I could intrigue you with my quesiton about whether deblock parameter has full depreciated without replacement [02:03] t4nk674, what's that an x264 option? I don't know [02:03] it is a ffmpeg option [02:04] probably part of the x264 lib though [02:11] t4nk674: it was removed years ago.... [02:11] you can pass all x264 options via AVOpt [02:11] x264-params [02:12] AVOpt? [02:12] if you are only using ffmpeg then you should not need to bother about AVOpt [02:12] would it be like ffmpeg -i bla bla.mp4 -AVOpt -someparameterfor-x264-lib output.mp4 ? [02:12] nope [02:13] there is documentation [02:13] and its -x264-params [02:13] ah [02:13] ffmpeg -h encoder=libx264 [02:13] isnt the encoder already specified with vcodec ? [02:14] above thing will show help [02:14] for encoder libx264 [02:14] ah [02:15] well i have enough of reading documentation [02:15] so to make it short, i can pass anything if i add -x264 before the x264 parameter [02:15] i'm busy [02:16] there are few cases ffmpeg does not provide an equivalent parameter for x264 [02:17] t4nk674: https://trac.ffmpeg.org/wiki/x264EncodingGuide#Overwritingdefaultpresetsettings [02:21] thank you llogan [02:22] t4nk674: although you should probably just be using the presets and not monkeying with various x264 options [02:22] durandal_1707: thanks [02:22] phr3ak: for what? [02:23] llogan why would I want to use the presets [02:23] read the guide [02:24] there is no learning curve in using presets [02:25] llogan, where are the presets like normal, fast, ultrafast stored? [02:25] tried to look for them, but only found presets for ipod and stuff [02:27] they are not stored as files anymore [02:29] http://git.videolan.org/?p=x264.git;a=blob;f=common/common.c;hb=HEAD#l180 [02:41] thanks llogan [02:42] good bye all [02:42] thank you for your help [02:42] im gonna start the next youtube site now! [02:43] lol [03:16] Hello there, I'm trying to use ffplay on a audio file but It seems to ignore my filters and give me the showspectrum scale=sqrt video output instead. I'm tried as a more generic result -vf vflip but even that wouldn't make my spectrum upside down. [03:16] The command I'm using is: ffplay -i Test.wav -vf showspectrum=scale=lin -loop 0 [03:18] I've also tried using "-vn -vf showspectrum..." and -"vf nullsink,showspectrum=scale=lin" but that didn't work either. [03:22] gamax92: see the example in the showspectrum documentation: http://ffmpeg.org/ffmpeg-filters.html#showspectrum [03:25] "Argument 'asplit' provided as input filename, but ''amovie=Test.wav,' was already specified." [03:27] http://pastebin.com/uPp1E937 [03:30] oh, windows. maybe it doesn't like the single quotes. [03:30] change them to " [03:33] Well, that worked, but why is it that i have to use lavfi to generate the filter? [03:34] I also just realized that scale refers to color intensity and not frequency scale and so the entire effort is worthless. [03:34] i'm not sure [07:20] I am attempting to set up a flash stream using a webcam and my local server. 
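On the AVOpt point above: API users reach the same libx264 private options (preset, tune, and the catch-all x264-params) with av_opt_set() on the codec context's priv_data before avcodec_open2(). A rough sketch; the option values are only examples:

    #include <libavcodec/avcodec.h>
    #include <libavutil/opt.h>

    /* enc must be an AVCodecContext allocated for the libx264 encoder and
     * not yet opened; private options live behind priv_data. */
    static void apply_x264_options(AVCodecContext *enc)
    {
        av_opt_set(enc->priv_data, "preset", "veryslow", 0);
        /* same colon-separated syntax as the command-line -x264-params flag */
        av_opt_set(enc->priv_data, "x264-params", "keyint=120:min-keyint=30", 0);
    }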
I have configured ffserver.conf and I am running this command to initiate the stream: [07:21] ffmpeg -f v4l2 -s 640x480 -r 15 -i /dev/video0 http://prannon.net:8090/chopstick.swf [07:21] I can visit the above URL and it appears that the server is trying to serve me the stream, but I don't see that my webcam is online and I don't see that the system is writing to the chopstick.swf file in /tmp/. I am not sure what I'm doing wrong. [07:21] Are there any tips on what I should check? [10:18] how do I get the AV_PIX_FMT_* of a AVCodecContext ? [10:20] heh, pix_fmt member, ok [10:29] Hi! Is there any way to avoid the need for seeking when muxing using libavformat? [10:31] hi all, I have a small question. Is there any way to place two input images next to each other using a filter? [10:32] I have two input sources and I need to place the m next to the other one [10:43] can I somehow create a video from two different sources but the resultant video will contain these two video side by side? [10:44] myFfmpeg, couldn't you just memcpy the two sources to a double-sized buffer? [10:44] and then encode with ffmpeg? [10:45] decode to rgb -> memcpy into a single buffer -> encode again [10:45] yes but actually I am trying to get my answer by simlplfying my main question [10:45] I have two image sources, one of them is yuv image and the other one is rgb [10:45] what I want to do is to have one resultant image which contains two input images side by side [10:45] I want to do it by one call if possible [10:46] go look at the ffmpeg's video filter docs, seriously. Although the fact that you are dealing with two pictures of different colorspace can make it a bit less simple [10:46] I have looked at it JEEB [10:47] I see tile filter but I have no idea if I can solve my problem with it [10:47] Have you, you know, TRIED it? Although I must agree that such less simple cases of filtering can be rather funky to get right the first time :P [10:48] I have other things I've gotten used to using, and thankfully those work. But I'm pretty sure you can do what you want to do with just ffmpeg [10:48] you just have to wrap your head around the filtering syntax of ffmpeg's [10:48] ohh, really? Can you give me a hint where to start? [10:49] "you just have to wrap your head around the filtering syntax of ffmpeg's" that I didn't understand [10:49] so you are saying that I don't need any filder? I can do it ffmpeg? [10:50] filder = filter [10:50] ... [10:50] ok. let me read more [10:50] you do know the -vf option, right? [10:50] thanks [10:51] well I am using API [10:51] good luck and have fun [10:51] it's surely possible [10:51] but you'll still have to deal with the avfilter syntax and stuff [10:52] hah [10:52] and I already found you an example [10:52] http://ffmpeg.org/ffmpeg-filters.html#Examples-34 [10:52] Compose output by putting two input videos side to side [10:53] first try with command line, then try to poke that into API usage [10:53] ok [10:53] I am trying now [10:53] thanks a lot [11:08] another question came to my mind. is it possible to convert yuv to jpeg without converting it to rgb first? [11:08] ] [11:08] i mean is there any direct conversion? [11:09] or is it even possible [11:09] em [11:09] JPEG is stored in yuv usually :) [11:09] so yes. [11:09] how do you mean? [11:09] as far as I know jpeg2000 is a sort of wavelet transformation [11:10] ahh you mean it is applied on yuv channels seperately [11:10] is there any public algorithm for this? [11:11] I mean for the conversion? 
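The conversion being asked about is the standard BT.601 matrix, used in its full-range form by JFIF; libswscale applies it for you, but spelled out per pixel (rounding and clamping omitted) it is roughly:

    #include <stdint.h>

    /* Full-range BT.601 RGB -> Y'CbCr as used by JPEG/JFIF. */
    static void rgb_to_ycbcr(uint8_t r, uint8_t g, uint8_t b,
                             uint8_t *y, uint8_t *cb, uint8_t *cr)
    {
        *y  = (uint8_t)( 0.299    * r + 0.587    * g + 0.114    * b);
        *cb = (uint8_t)(-0.168736 * r - 0.331264 * g + 0.5      * b + 128);
        *cr = (uint8_t)( 0.5      * r - 0.418688 * g - 0.081312 * b + 128);
    }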
[11:11] myFfmpeg, JPEG compression works on YPbPr color space not RGB, so it's always converted [11:12] so please tell me this, I have stored a YUV image and I want to have jpeg out of it. what is the process to apply? [11:12] myFfmpeg, depends with that [11:12] *what [11:13] myFfmpeg, are you looking for API calls, command-line tools, ffmpeg command-line? [11:13] any of them :D [11:13] but I would preser API call [11:13] or even any algorith mthat I can apply myself [11:13] use swscale to convert image color space to PIX_FMT_YUVJ420P [11:13] then just initialize AV_CODEC_ID_MJPEG encoder and pass the image there [11:14] :) [11:14] :D [11:14] perfect man [11:14] thanks a lot [11:14] when you call avcodec_encode_video2 with mjpeg encoder you'll get a full JPEG image out everytime :) [11:15] i see [11:18] Hi! Anybody who knows what the write callback in AVIOContext is supposed to return? The total number of bytes written maybe? [11:19] luc4, yes [11:20] Mavrik: thanks [11:25] Mavrik, I need your help [11:25] :) [11:25] mhm. [11:25] my main problem was the following. I have two frames. one is in yuv format and the other is rgb. [11:26] My aim is to join these frames into one frame and encodi them with jpeg [11:26] well, from yuv to jpeg, it is done by your help [11:26] I can also convert rgb to jpeg [11:26] but do you know how I can join them? [11:27] or should I join them before encoding? [11:27] of course you should join them before encoding, it's the only way you can :) [11:27] myFfmpeg, how do you want to join them? put them up side by side? [11:27] yes exactly [11:27] they have the same res. [11:29] well you could initialize a filter but that would be a huge hassle [11:29] or you convert them to the same pixel format [11:29] then create a new AVFrame with the output resolution [11:29] so, first I will comvert RGB to J420P. and also convert YUV to J420P. Then do memcpy [11:29] and copy them in side by side [11:29] would this wokr [11:29] yes [11:29] perfect man [11:29] God bless you [11:29] just remember images are stored line by line [11:30] so memcpy won't work [11:30] or line by line memcpy [11:30] so if you're putting them side by side in horizontal fasion you'll need to run a loop that'll compose lines [11:30] ok I got it [11:30] and that planar formats (the "P" in format) use data[0] - data[2] for each plane [11:30] e.g. data[0] is the Y channel, data[1] is U, etc. [11:31] that part is the fun part :) [11:31] I will take fcsare of uit [11:31] thanks a lot [11:31] oh and that 420 format has half the resolution of UV channels than Y :) [11:31] it's fun :D [11:31] but not that hard once you get what's going on [11:31] yes but I will do memcpy after converting them to P420 [11:32] so they will have the same resolution [11:32] I mean each channels [11:34] mhm [11:35] no? [11:35] myFfmpeg, Y will have 2x as much pixels as UV [11:35] since it's a 420 format [11:35] so if your image is 1280px wide [11:35] ahh, ok [11:35] data[0] will have 1280 elements, data[1] and [2] will have 640 [11:35] I know that part [11:35] yes yes. 
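A condensed sketch of the recipe just described (convert to full-range 4:2:0 with swscale, then feed the MJPEG encoder, one complete JPEG per packet). API names are roughly as of FFmpeg 2.0, error handling and cleanup are stripped down, and jpeg is assumed to be a zero-initialized AVPacket:

    #include <libavcodec/avcodec.h>
    #include <libswscale/swscale.h>

    static int frame_to_jpeg(AVFrame *src, int w, int h, AVPacket *jpeg)
    {
        AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_MJPEG);
        AVCodecContext *enc = avcodec_alloc_context3(codec);
        AVFrame *yuv = av_frame_alloc();
        struct SwsContext *sws;
        int got_packet = 0, ret;

        enc->width     = w;
        enc->height    = h;
        enc->pix_fmt   = AV_PIX_FMT_YUVJ420P;   /* full-range 4:2:0 */
        enc->time_base = (AVRational){1, 25};
        if (avcodec_open2(enc, codec, NULL) < 0)
            return -1;

        yuv->format = AV_PIX_FMT_YUVJ420P;
        yuv->width  = w;
        yuv->height = h;
        av_frame_get_buffer(yuv, 32);

        sws = sws_getContext(src->width, src->height, src->format,
                             w, h, AV_PIX_FMT_YUVJ420P, SWS_BILINEAR,
                             NULL, NULL, NULL);
        sws_scale(sws, (const uint8_t * const *)src->data, src->linesize,
                  0, src->height, yuv->data, yuv->linesize);

        ret = avcodec_encode_video2(enc, jpeg, yuv, &got_packet);
        /* sws_freeContext(), av_frame_free(), avcodec_close()/av_free() omitted */
        return (ret < 0 || !got_packet) ? -1 : 0;
    }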
[11:35] that's no problem :) [11:35] sorry, data[0] will have 1280xheight elements ;) [13:13] hi people [13:26] ffmpeg -f rawvideo -pix_fmt uyvy422 -vtag 2vuy -s 720x576 -r 25 -aspect 4:3 -vsync 2 -i pipe: -f alsa -acodec pcm_s16le -ac 1 -i default -y output.mpg [13:27] i want to record video and audio from different sources [13:27] raw video on stdin and audio on microphone [13:28] i attempt to do it with the command above [13:29] but audio disappears after a while [13:41] can i pass utf-8 encoded filename to avformat_open_input ? [14:15] anyone? [14:18] it should work [14:18] i have seen some commits about this in the past [14:24] nlight: you can always try and see if it works :) [18:28] is it possible to capture a webcam (/dev/video0) and overlay text input from a serial port on the same output? [18:49] hi, is it possible to increase buffer on streams played with ffplay? [20:50] Does any one know why the ass filter might not burn subtitles into a tif sequence? [20:52] I am trying to compile the resampling_audio in doc/examples in version 1.2 and it segfaults. The stack trace is http://pastebin.com/syHK8xkb . Any ideas ? [21:06] t4nk149: what? compilation segfaults? [21:07] My bad, when I am running it segfaults.. [00:00] --- Thu Aug 1 2013 From burek021 at gmail.com Thu Aug 1 02:05:02 2013 From: burek021 at gmail.com (burek) Date: Thu, 1 Aug 2013 02:05:02 +0200 (CEST) Subject: [Ffmpeg-devel-irc] ffmpeg-devel.log.20130731 Message-ID: <20130801000502.4DF4C18A02D6@apolo.teamnet.rs> [01:48] michaelni, could you bounce mails sent to the old mplayer.hu address? [01:48] so my filtering rules always work, and i get emails in the right inbox [01:49] (potentially affect more users, and the transitions happened more than two years ago) [01:49] ^random use of plural forms [02:01] saste: carl still sends to mplayerhq.hu, AFAIK [02:02] people will send to it indefinitely until it breaks. [02:34] whats the new addy again ? [02:34] oh ffmpeg-user at mplayerhq.hu is the old one, i geti t [02:34] you could just setup a filter to rename it all in mailman [02:50] Compn: are you going to vdd? [02:52] ffmpeg.git 03Michael Niedermayer 07release/0.10:466911f0004a: wmaprodec: tighter check for num_vec_coeffs [02:55] kierank: are you going? [02:55] yes [02:55] i never even knew there was an asterix park [03:10] ffmpeg.git 03Michael Niedermayer 07release/0.11:2a8c3a789549: avcodec/kmvc: fix MV checks [03:10] ffmpeg.git 03Michael Niedermayer 07release/1.0:24cff71d029c: avcodec/kmvc: fix MV checks [03:10] ffmpeg.git 03Michael Niedermayer 07release/1.1:a1ce54ce6a5d: avcodec/kmvc: fix MV checks [03:10] ffmpeg.git 03Michael Niedermayer 07release/1.2:f9c872622e84: avcodec/kmvc: fix MV checks [03:10] ffmpeg.git 03Michael Niedermayer 07release/2.0:64444cd5784b: avcodec/kmvc: fix MV checks [03:18] llogan: you going there at night? [03:20] i'd like to but paris is too far away and i'm about to spend all of my disposable income on a garage [03:23] to house my awesome 1995 Mazda Protege [03:29] ffmpeg.git 03Romain Beauxis 07fatal: ambiguous argument 'refs/tags/n0.10.8': unknown revision or path not in the working tree. [03:29] Use '--' to separate paths from revisions [03:29] refs/tags/n0.10.8:HEAD: Support for shine 3.0.0 [03:32] llogan: just sell it [04:14] kierank : yes [04:14] kierank : anything i should know? what kind of electrical outlets are they in france ? [04:14] like british , or like american? or ... something scary [04:15] 230v European [04:16] oh scary then! 
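To tie off the side-by-side question from the user log above: once both frames are in the same planar 4:2:0 format (after the swscale step), the per-line copy the discussion arrived at looks roughly like this, assuming even dimensions and a dst frame already allocated at twice the width:

    #include <string.h>
    #include <libavcodec/avcodec.h>

    /* Place two same-size YUV420P frames next to each other in dst.
     * Chroma planes (data[1], data[2]) are half width and half height. */
    static void compose_side_by_side(AVFrame *dst, const AVFrame *left,
                                     const AVFrame *right)
    {
        for (int plane = 0; plane < 3; plane++) {
            int h = plane ? left->height / 2 : left->height;
            int w = plane ? left->width  / 2 : left->width;
            for (int y = 0; y < h; y++) {
                memcpy(dst->data[plane] + y * dst->linesize[plane],
                       left->data[plane]  + y * left->linesize[plane],  w);
                memcpy(dst->data[plane] + y * dst->linesize[plane] + w,
                       right->data[plane] + y * right->linesize[plane], w);
            }
        }
    }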
[04:23] Compn: you put you fingers into electrical outlets? [04:24] fingers dont fit, but any stray piece of wire or coins, sure [04:24] paperclips [04:24] why are you doing it? [04:25] i'm just kidding [10:19] how do I get the AV_PIX_FMT_* of an AVCodecContext? [10:20] AVCodecContext->pix_fmt ? [10:25] nlight: just look at AVFrame.format of the decoded frame you get [10:27] am I not guaranteed that AVFrame.format will be the same as AVCodecContext->pix_fmt? [10:27] sorry that I'm asking in this channel, we could move this to #ffmpeg [10:30] in most cases it'll be the same [10:30] there are however some cases where the format changes in the middle of decoding where there may be a slight difference in these formats due to codec delay [10:31] makes sense, thanks [11:12] ffmpeg.git 03Michael Niedermayer 07master:9696740af715: hls: Call avformat_find_stream_info() on the chained demuxers [11:12] ffmpeg.git 03Michael Niedermayer 07master:95960027a59f: Merge remote-tracking branch 'qatar/master' [11:51] what's -fflage +genpts useful for? [11:52] can you show a scenario when it is useful? [12:25] durandal_1707: you mean lucy.pkh.me? [12:26] durandal_1707: i messed up the index.html and don't remember what the content was :p [12:39] ffmpeg.git 03Timothy Gu 07master:ccb212b6c3ed: libxvid: add working lumimasking and variance AQ [12:58] ffmpeg.git 03Timothy Gu 07master:3b3c1ed0768a: libxvid: Add SSIM displaying through a libxvidcore plugin [14:49] beastd? [14:50] michaelni: once again, I invite you again at VDD. It would be nice if you could come, tbh. [14:51] saste: I would love seeing you there too [14:52] merbanan: siretart: Yuvi: nevcairiel: you should come too :) [14:53] j-b: ok, I'll try to arrange it this evening [14:54] I count already 10 people mentionning "FFmpeg" as registered [14:55] 11:51:05 <@saste> what's -fflage +genpts useful for? [14:55] 11:52:05 <@saste> can you show a scenario when it is useful? [14:55] j-b: sadly my schedule doesnt allow it, i'm in the US the week before and can't afford (timewise) to be gone another few days [14:55] avi ? mkv [14:56] nevcairiel: it's on the week-end! But I understand :) [14:56] ubitux, then the next question, why? [14:56] bad timing really, my august is full of things to do [14:56] saste: see #1979 too [14:57] saste: and #1180 eventually [14:57] possibly some others [14:58] saste: i think it's something like format with no pts but only dts or something like that [14:58] but i'm really not familiar with that stuff [15:03] formats that really require a pts (ie. formats with b frames) are really not all that good for avi anyway =p [15:04] ubitux, documentation by bug tickets, interesting idea [15:04] j-b: I move to NYC this fall -> very short on time this year :-( [15:05] saste :) [15:05] saste: i have nothing better for you :P [15:05] BBB: where are the vp9 specs/drafts so i can start reading them during my flight tomorrow? [15:06] i don't see anything on the webm website or the libvpx repo [16:20] ubitux: there's none [16:20] ubitux: there's just the decoder source code [16:20] mmh, ok [16:21] shall I push what I have now? it doesn't work at all, but it's a start. I hope to have working keyframes maybe by the end of next week or so (perhaps minus loopfilter) [16:31] push where? [16:32] saste: doing anything new? [16:36] ubitux: Daemon404: https://github.com/rbultje/ffmpeg/tree/vp9 very very very early code version (i.e. it doesn't work, don't bother reporting bugs etc.), but the code structure is sort of a rough start. 
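On the pix_fmt exchange near the top of this log: the reliable value is the one carried by the decoded frame itself, since the context can lag behind by the codec delay when the format changes mid-stream. A minimal sketch using the 2013-era decode call:

    #include <libavcodec/avcodec.h>

    /* Decode one packet and report the frame's own pixel format; returns
     * AV_PIX_FMT_NONE if no frame came out. */
    static enum AVPixelFormat decoded_format(AVCodecContext *dec, AVFrame *frame,
                                             AVPacket *pkt)
    {
        int got_frame = 0;
        if (avcodec_decode_video2(dec, frame, &got_frame, pkt) < 0 || !got_frame)
            return AV_PIX_FMT_NONE;
        return frame->format;   /* prefer this over dec->pix_fmt */
    }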
it currently does something roughly equivalent to bitstream parsing for keyframes, and implements 4x4/8x8/16x16 intra prediction and inverse transforms. there's probably typos and mistakes all over the place, so again, it does not yet work, but [16:36] start [16:37] ubitux: Daemon404: todos basically A) finish keyframe reconstruction (e.g. 32x32 intra pred, transform, loopfilter) B) backwards adaptivity of probabilities C) inter frame handling, D) probably more, E) simd [16:37] truncated after "not yet work, but" [16:37] oh [16:37] "but it's a start" [16:37] ok [16:39] BBB: so how do you propose we work on this; are you going to rewrite history in that branch? do we need to rebase regularly and you'll merge in your branch? [16:43] whichever you want [16:43] I don't mind either way [16:43] I sent yuvi patches when we did vp8, that was kind of painful, so I'm sure github has better tools than that :) [16:44] there's various immediate things you can do to help, e.g. import the default tables from libvpx so we can do keyframe bitstream parsing and validate correctness (that was going to be my next step) [16:44] you can start reading over the loopfilter and try to understand it so we can impement it from scratch, but better [16:45] try to follow the style of e.g. ffvp8 or ffh264 or ffhevc in terms of allowing for simd optimization of the actual filter itself [16:45] you can try to implement 32x32 idct or intrapred [16:45] you can try to start implement interframe handling, testing that might be somewhat troublesome but that's ok for now, it doesn't have to work yet [16:45] if you know what it is, you can try to do bw adaptivity (that's adaptation of probabilities based on symbol occurrence in previous frame) [16:46] is ffhevc yet a good example for how to put hooks for simd ? [16:46] or just look at the code in general and tell me if you understand it and if it's ok [16:46] kurosu_: no idea, I didn't look at it [16:46] ok [16:46] I was hoping someone here looked at it [16:47] I partially did, but mostly to see the small issues I expected, like implementing SAO without pshufb [16:47] some code puts into dsp things that shouldn't afaik [16:49] lots of newcomers to asm around the hevc implementations I have seen, but look where you are now, so not an issue :D [16:50] so yeah, its probably a good idea to import the vp9 and work on it there [16:50] collectively [16:50] BBB, are we using github PRs? [16:51] PRs? [16:51] pull requests [16:51] you want to make pull request? [16:51] it's a valid way of handling merges for WIP branches [16:51] itl lall be squashed later anyway [16:52] itll all* [16:52] itll? [16:52] add a ' [16:52] it'll [16:52] aka it will [16:52] just do not add pull request to github.com/ffmpeg/ffmpeg [16:53] durandal_1707, this is just for bbb's vp9 branch [16:56] import? [16:56] Compn: what is import [16:57] Daemon404: sure, if ubitux does that also [16:58] oh i thought you meant work on it in ffmpeg git master:P [16:58] no [16:58] bad idea [17:00] i'll follow whatever dev model you want [17:00] i [17:00] i'm interested in A, D and eventually E [17:02] ok why don't you do 32x32 idct and intra pred asembly then [17:02] most of that goes in vp9dsp.c [17:02] then I'll work on getting bitstream parsing to be correct [17:02] unless you want to do that [17:02] then we'll do it the other way around I guess [17:02] anything particular i should work on [17:03] what do you want to work on? [17:03] "not entropy decoding" [17:03] lol [17:03] do the loopfilter? [17:03] inter frame handling? 
[17:04] those are kind of hard to do without intra frame decoding mostly working no? [17:08] they all are [17:08] that's why I was hoping to get intra frame decoding working first [17:08] but I'm not that far yet [17:09] anyway, with some imagination you can just pretend intra frame decoding works and write a skeleton for inter frame handling [17:09] and then, as we finish intra frame, the references for inter frame will just become available [17:09] and it'll magically work [17:10] or fail miserably [17:10] :D [17:14] it would have been a great subject (or several?) for a gsoc - and a way to motivate people to work on it [17:15] 17:02:20 <@BBB> ok why don't you do 32x32 idct and intra pred asembly then // sounds good to me [17:17] just do E [17:20] is vp9 really that good? files are 50% smaller compared to vp8? [17:20] durandal_1707: i thought it was compared to h264 [17:21] but that might be propaganda [17:22] also there is still opus to finish [17:26] ramiro_, image2 muxer patch? [17:26] uhm no i mean the image2 demuxer (documentation) patch [17:29] Daemon404: nah I believe in you! :) [17:30] durandal_1707: do a comparison? [17:30] I can provide parameters for vpxenc if wanted [17:30] I did provide them on the mailinglist and on doom9 previously [17:30] anyway I'll go back to doing entropy decoding of keyframes then [17:30] once that's done, most of keyframes should start working magically [17:31] BBB, ok [17:31] so you want to do inter frames? [17:32] sure... if you point me at relevant info/code [17:32] also im not starting till tomorrow -- plenty of stuff for me to wrap up at work today [17:33] I'll help you tomorrow then :) basically look at vp9_decodemv.c in libvpx [17:33] I'll look up the function names tomorrow [17:33] also vp9_decodframe.c [17:34] one does bitstream parsing, the other does actual frame/block reconstruction [17:34] you want to replicate their behaviour (functionally), but do it faster, better and more ffmpeg-style [17:51] this thread about raising the av_log level is epic [18:06] AV_LOG_BIKESHED, with rainbow colors [18:06] and everyone will be happy [18:07] ubitux: are you "back"? [18:07] no, my plane is tomorrow morning [18:07] i'm just wandering around on the internet right now, gonna disconnect soon [18:07] ok [18:08] typically if you ask me sth about subtitles, i detach immediately [18:08] fetal position? [18:09] :) [21:16] wbs, lavu/opt: make av_opt_get() return NULL in case a string is not set [21:19] saste: did we spent all money? [21:49] ffmpeg.git 03Andrey Utkin 07master:a8f171151f0f: file: Add 'blocksize' option [21:49] ffmpeg.git 03Andrey Utkin 07master:681ad3a5b696: Document new 'blocksize' option of 'pipe' protocol [22:03] ffmpeg.git 03Paul B Mahol 07master:76e27b1d0594: smacker: make code independent of sizeof(AVFrame) [22:03] ffmpeg.git 03Paul B Mahol 07master:451b2ca1b434: indeo2: make code independent of sizeof(AVFrame) [22:03] ffmpeg.git 03Paul B Mahol 07master:02fe531afefa: mss2: make code independent of sizeof(AVFrame) [22:03] ffmpeg.git 03Paul B Mahol 07master:ff1c13b133d5: mss3: make code independent of sizeof(AVFrame) [22:03] ffmpeg.git 03Paul B Mahol 07master:678431d3f2c5: jv: make code independent of sizeof(AVFrame) [22:03] ffmpeg.git 03Paul B Mahol 07master:fe3755124953: sunrastenc: do not set avctx->coded_frame [22:03] ffmpeg.git 03Paul B Mahol 07master:69fe25cdca7c: lclenc: remove unused code [23:38] formatting, will ever end? 
[23:38] and why people don't find nothing more creative to do [23:45] creativity is hard [00:00] --- Thu Aug 1 2013 From burek021 at gmail.com Fri Aug 2 02:05:02 2013 From: burek021 at gmail.com (burek) Date: Fri, 2 Aug 2013 02:05:02 +0200 (CEST) Subject: [Ffmpeg-devel-irc] ffmpeg-devel.log.20130801 Message-ID: <20130802000502.A999018A042C@apolo.teamnet.rs> [00:26] saste: weren't you one of the people making a big deal about 1st vs 3rd person in doxy a few years back? [00:54] BBB, i only wanted it consistent, never cared much about 1st vs 3rd [00:55] but let's not run it again, it's dangerous ;) [00:57] but why it was raised yet again? [01:01] saste: wise words [01:10] ubitux: Daemon404: does it screw you guys over if I rewrite history in my branch? i.e. should I try to avoid doing that from now on? [01:10] (I tend to do that if I work on stuff alone) [01:11] once i actually start working it would [01:25] ok I just accidently killed history, will try to do incremental from now on [01:25] it all gets squashed in the end anyway [01:30] just do it, let them suffer [01:44] if you're just rebasing to pull in upstream changes you can use rebase-with-history from https://github.com/mhagger/git-imerge [01:44] it sets the parent commits of the rebased commits to the unrebased versions, which makes dealing with your rebased history a lot easier [01:45] (it also makes the merge tree a giagantic confusing mess so I'd only use it on stuff that's destined to be squashed away) [04:15] <@BBB> ubitux: Daemon404: does it screw you guys over if I rewrite history in my branch? i.e. should I try to avoid doing that from now on? // it's fine with me [04:16] i'll fetch and rebase on my branch regularly [04:20] and fuck my isp btw [04:28] ffmpeg.git 03Andrey Utkin 07master:11ace706071f: doc/protocols: Document file protocol options [10:46] ffmpeg.git 03Diego Biurrun 07master:a9b04b2c43f9: tree.h: K&R formatting and typo cosmetics [10:46] ffmpeg.git 03Michael Niedermayer 07master:bc47d126bf41: Merge commit 'a9b04b2c43f95cc17c2291f83c27a3119471d986' [11:29] ffmpeg.git 03Diego Biurrun 07master:c2e936de07d0: tree-test: Refactor and plug memory leaks [11:29] ffmpeg.git 03Michael Niedermayer 07master:161054f37b15: Merge commit 'c2e936de07d054bf476e60445b453bf6b4836820' [11:29] ffmpeg.git 03Michael Niedermayer 07master:b32643813961: avutil/tree: fix memleaks [11:41] ffmpeg.git 03Diego Biurrun 07master:45dd1ae1b3c1: avfilter: Add some missing FF_API_AVFILTERBUFFER ifdefs [11:41] ffmpeg.git 03Michael Niedermayer 07master:0c8efe48917f: Merge commit '45dd1ae1b3c18331f3db2293a9135bc5851e553f' [11:47] ffmpeg.git 03Martin Storsj? 07master:2e814d0329ad: rtpenc: Simplify code by introducing a macro for rescaling NTP timestamps [11:47] ffmpeg.git 03Michael Niedermayer 07master:57b8ce414be5: Merge commit '2e814d0329aded98c811d0502839618f08642685' [11:53] ffmpeg.git 03Martin Storsj? 07master:54e03ff6af8a: rtpproto: Support nonblocking reads [11:53] ffmpeg.git 03Michael Niedermayer 07master:d2c613dd1460: Merge commit '54e03ff6af8a070f1055edd26028f3f7b2e2ca8e' [11:58] ffmpeg.git 03Martin Storsj? 07master:7531588fffbc: rtpproto: Remove a misplaced comment [11:58] ffmpeg.git 03Michael Niedermayer 07master:b39f012dee7a: Merge commit '7531588fffbca1f0afdcc06635999c00dfc16ca6' [12:02] ffmpeg.git 03Martin Storsj? 
07master:892b0be1dfbd: rtpproto: Simplify the rtp_read function by looping over the fds [12:02] ffmpeg.git 03Michael Niedermayer 07master:d6b37de4d42d: Merge commit '892b0be1dfbdeaf71235fb6c593286e4f5c7e4ec' [12:10] ffmpeg.git 03Martin Storsj? 07master:b7e6da988bfd: rtpproto: Move rtpproto specific function declarations to a separate header [12:10] ffmpeg.git 03Michael Niedermayer 07master:fcccb4c11dcf: Merge commit 'b7e6da988bfd5def40ccf3476eb8ce2f98a969a5' [12:43] ffmpeg.git 03Martin Storsj? 07master:c7e921a54ffe: avopt: Check whether the object actually has got an AVClass [12:43] ffmpeg.git 03Michael Niedermayer 07master:cca229e75a55: Merge commit 'c7e921a54ffe7feb9f695c82f0a0764ab8d0f62b' [12:53] ffmpeg.git 03Martin Storsj? 07master:b85dbe68e222: avconv: Call exit_program instead of exit in avconv_opt as well [12:53] ffmpeg.git 03Michael Niedermayer 07master:56e682374ecd: Merge commit 'b85dbe68e222586fd77332716eb8ed5724db4e1b' [13:21] ffmpeg.git 03Vittorio Giovara 07master:7748dd41be3d: avconv: add -n option to immediately exit when output files already exist [13:21] ffmpeg.git 03Michael Niedermayer 07master:4f07fcd30b0e: Merge commit '7748dd41be3d6dd6300f14263586af4ee104ead2' [13:51] ffmpeg.git 03Martin Storsj? 07master:1851e1d05d06: rtpproto: Check the size before reading buf[1] [13:51] ffmpeg.git 03Michael Niedermayer 07master:2ee58af53e1c: Merge commit '1851e1d05d06f6ef3436c667e4354da0f407b226' [13:59] ffmpeg.git 03Martin Storsj? 07master:ee37d5811caa: rtpproto: Allow specifying a separate rtcp port in ff_rtp_set_remote_url [13:59] ffmpeg.git 03Michael Niedermayer 07master:0f5a40c2a4bc: Merge commit 'ee37d5811caa8f4ad125a37fe6ce3f9e66cd72f2' [14:28] ffmpeg.git 03Vittorio Giovara 07master:3c8bff0740ab: avframe: have av_frame_get_side_data take const AVFrame* [14:28] ffmpeg.git 03Michael Niedermayer 07master:9408d990c46c: Merge remote-tracking branch 'qatar/master' [16:14] who's going to attend vdd at the end of august? [16:14] Action: funman [16:49] saste : me [17:07] Plorkyeran: I was just going to mention imerge [19:23] ffmpeg.git 03Carl Eugen Hoyos 07master:bb7f71d9b6ae: lavf/movenc: Write total number of tracks as part of metadata. [19:23] ffmpeg.git 03Carl Eugen Hoyos 07master:fbc0004b4b1d: lavf/movenc: Write disc number and total number of discs as part of metadata. [19:50] ... man and i thought libav was bad for bikeshedding [19:50] Action: Daemon404 stares at the ever expanding raise level thread [19:51] and its so extremely significant of a change [19:52] very. [19:53] plenty people agreed on it, i would've just pushed it [19:53] yes but then you get revert wars [19:53] a la vlc [19:54] so what do you do if you have some stubborn person that lost track of the actual commit and just doesnt want to give in anymore on principle? discard a perfectly fine change? [19:55] post the patch to libav, and wait for the merge [19:55] kinda hard if the change is in ffmpeg the application [19:55] so changes to avconv are not merged back? 
[19:55] nevcairiel, ive been there [19:56] you give up and go get booze [19:56] wm4: partly, but they have diverged quite a lot [19:59] on this particular issue, its really quite baffling how one person can be so stubborn, even if you can question the upsides, it has basically zero downsides [19:59] i've actually run into that issue myself once [19:59] at work [19:59] a warning would have been nice [20:03] changes from avconv are merged with ffmpeg [20:03] to answer that question [20:03] last i heard anyhow [20:04] yea michael trys, but some parts of the apps are really too different now [20:04] but then someone would notice and revert it [20:37] michaelni: trac is down ;-( [21:09] oh no!, what you will do now... [21:10] Don't worry, Michael fixed it=-) [21:11] Action: durandal_1707 *sigh [21:15] cehoyos: why are you against raising level for libx264 patch? [21:15] ffmpeg.git 03Stefano Sabatini 07master:f118b4175905: ffmpeg: raise level for message printed in case of auto-select pixel format [21:16] Because dozens of requests were made before the message was introduced several months ago, one since if I counted correctly. [21:17] and if people still complain? what would be next natural step? [21:18] Please don't ask me, you were the only one who ever commented on this issue before last week... [21:18] (imo = who was ever interested in this issue before last week) [21:20] i'm for removal of: libx264rgb encoder, mxf_d10 muxer, and maybe there are other such cases.... [21:21] the libx264rgb encoder is actually in active use [21:21] by an entire community [21:21] (game cappers, retro speedrunners, etc) [21:22] for does not have libx264rgb encoder, i never found their user complained about it [21:22] *k [21:22] they tend to use ffmpeg [21:23] (theyve submitted swscale patches here too) [21:23] deprecated one [21:23] i may be mistaken -- does libx264 support the RGB mode of libx264 too now? [21:23] isn't it just format selection? [21:24] im only ok with removing it if we dont lose that functionality, in any cas [21:24] e [21:24] ugh fork does not support rgb at all [21:30] durandal_1707: why remove the mxf_d10 muxer ? :D [21:30] cehoyos: why you do not put space, ' ' between [PATCH] and rest of title? [21:31] mateo`: not removing funcionality, just stupid separation into extra muxer [21:31] have you ever wondered why we do not have wav_pcm_float and wav_pcm_s16 muxer? [21:32] Why should I? [21:32] its common sense [21:34] well, unless you manually type it [21:44] I can only type manually... [21:46] i do not need to type '[PATCH] ' [21:47] That's cool for you, it's ok for me to type it. [21:47] as you please, including changing bits on trac [00:00] --- Fri Aug 2 2013 From burek021 at gmail.com Fri Aug 2 02:05:01 2013 From: burek021 at gmail.com (burek) Date: Fri, 2 Aug 2013 02:05:01 +0200 (CEST) Subject: [Ffmpeg-devel-irc] ffmpeg.log.20130801 Message-ID: <20130802000501.755D018A042B@apolo.teamnet.rs> [00:16] hi, where can I get up to date ffmpeg/x264 .ffpreset files from? [00:16] You generally don't need them. You can use -preset veryslow (for example) to use the x264 preset directly. [00:17] sacarasc: ah, thanks - I didn't realise they were built in now [00:18] had to remove the old ones from %HOME/.ffmpeg dir. now it seems to be encoding fine. thanks [00:43] hello!, is possible to use ffmpeg to convert a video file from any format to xdcam hd422 mxf file ? [00:43] xdcam allows 422? 
[00:45] i think so [00:46] I have found an 422 sample in http://www.opencubetech.com/page47/ [00:47] mxf muxer writes OP1a files [00:47] so it should be supported, assuming you put into mxf what xdcam expects [00:48] it accepts audio as 24 bit pcm but each channel must be in separate track [00:52] also it seems to be picky on bit rate stored in mxf [00:56] I am bit newbie in this video formats and transcoding, so I am bit lost, what codec should I use with ffmpeg to convert video file to xdcam mxf file [00:56] mpeg2video [00:57] that is only supported by xdcam afaik [01:53] Hi, does anyone know how to install the stereo3d filter for ubuntu 13.04? It doesn't seem to be built-in to ffmpeg [02:48] BubbaP: the "ffmpeg" package for ubuntu is actually libav, which does not have the stereo3d filter [02:49] so you will need to install real ffmpeg [02:49] I assume there's a PPA for it [05:51] are there any ways to reduce the lag in ffserver? [05:56] ___________________ [05:56] < Is anybody there? > [05:56] ------------------- [05:56] \ ^__^ [05:56] \ (oo)\_______ [05:56] (__)\ )\/\ [05:56] ||----w | [05:56] || || [05:58] ________________________________________ [05:58] / __ __ | \/ | ___ ___ | |\/| |/ _ \ / _ \ [05:58] | \ | | | | (_) | (_) | |_| |_|\___/ | [05:58] \ \___/ / [05:58] ---------------------------------------- [05:58] \ ^__^ [05:58] \ (oo)\_______ [05:58] (__)\ )\/\ [05:58] ||----w | [05:58] || || [05:58] Stop spamming. [05:58] sorry [10:26] Hi! When muxing using callbacks I noticed that ffmpeg needs to seek back on data to change it. Is there any way to avoid this? Maybe there are some containers that does not need ffmpeg to seek? [10:59] luc4, TS containers for example. [11:01] Mavrik: ah you're right, but can I place h264 into mpeg 2 for instance? [11:02] you can place H.264 into MPEG2-TS container yes [11:02] Mavrik: thanks! [11:05] is it possible to increase input buffer on streams played with ffplay? [11:06] like mplayers -cache options [11:10] what does it mean if AVCodecContext->pix_fmt is 0? [11:11] when decoding [11:11] means that the demuxer can't figure out the format from the headers? [11:16] Mavrik: do you know off the top of your head what I have to pass to av_guess_format to get mpeg2 ts container? [11:17] luc4, mpegts [11:17] nlight, probably the probe size was not big enough to probe pix format [11:17] nlight, or the input is broken [11:17] nlight, I suggest developing with debug version of ffmpeg so you can step through with gdb and see for yourself whats going on [11:18] Mavrik: thanks! [11:18] luc4, also make sure you don't have global headers flag set on H264 [11:19] Mavrik: it seems I'll have to modify the h264 stream as ffmpeg is complaining... it will take a long time: "[mpegts @ 0x982bfe0] H.264 bitstream malformed, no startcode found, use the h264_mp4toannexb bitstream filter (-bsf h264_mp4toannexb)" [11:20] hence my last statement :P [11:20] if you don't set global header flag, H.264 will insert PSS/SPS data into stream instead of dumping it into priv_data for one-time output [11:24] Mavrik: I'll se how to do that, thanks!! [11:32] Mavrik: sorry for the stupid questions, I'm not a av expert, but do you mean that I have to place the annexb format into the mpegts instead of the avcc? [11:33] avcc [11:33] ? [11:33] luc4, but yes [11:33] H.264 stream in MPEG2-TS must be in AnnexB format [11:33] since MPEG2-TS doesn't have global headers [11:34] and that is why ffmpeg does not need to seek, right? 
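A rough sketch of the callback-driven MPEG-TS setup being worked out here, with API names roughly as of 2013 and error handling omitted; my_write() and its opaque pointer stand in for wherever the bytes actually go:

    #include <libavformat/avformat.h>

    /* Write callback: return the number of bytes consumed (or a negative
     * AVERROR).  No seek callback is installed, so the chosen muxer must not
     * need one; mpegts qualifies, mp4 does not. */
    static int my_write(void *opaque, uint8_t *buf, int buf_size)
    {
        /* hand buf/buf_size to a socket, pipe, ring buffer, ... */
        return buf_size;
    }

    static AVFormatContext *open_ts_muxer(void *opaque)
    {
        AVFormatContext *oc = NULL;
        int bufsize  = 32 * 1024;
        uint8_t *buf = av_malloc(bufsize);

        avformat_alloc_output_context2(&oc, NULL, "mpegts", NULL);
        oc->pb = avio_alloc_context(buf, bufsize, 1 /* write */, opaque,
                                    NULL, my_write, NULL /* no seek */);
        /* then: avformat_new_stream() per stream, avformat_write_header(),
         * av_interleaved_write_frame() per packet, av_write_trailer() */
        return oc;
    }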
[11:37] avcc: http://aviadr1.blogspot.it/2010/05/h264-extradata-partially-explained-for.html, I suppose ffmpeg calls it length prefixed mode. [11:39] Mavrik: works perfectly that way, thanks! [11:40] luc4, the seeking is connected to mp4 container [11:40] since ffmpeg puts MP4 index at the beginning, and the index can only be written after file is done [11:41] well, no [11:41] Mavrik: sure, I perfectly understand that. But my problem is that I can't seek, so I was hoping there was some other solution. [11:41] the index first is put in the end [11:42] and then there's an option that then decides whether or not to seek and put it in the beginning [11:42] :P [11:42] JEEB, there's some seeking going on even if you don't put the faststart flag [11:42] if it was indeed mp4 that was the problem [11:42] no idea why really, didn't check the source [11:42] o_O [11:42] what the flying... [11:42] could also be fixed later :P [11:43] mp4 in general shouldn't need seeking around unless the index was then moved to the beginning [11:43] I did have some annoying problems with that on pre-1.0 [11:43] the only problem is that with the mpeg2ts I just created seek when playing is not working... is this correct? [11:43] luc4, hmm, it should be possible to seek mpeg2ts with most players [11:43] unless you're livestreaming it [11:44] I'm trying with vlc but it is showing some errors. [11:44] no, I'm playing locally. [11:46] JEEB: when muxing to mp4 I get ffmpeg requests to seek many times. Shouldn't this happen? [11:47] JEEB: the resulting mp4 is anyway perfect. If I do not implement seek, it lacks some information and it does not play with vlc. [11:47] make sure you have the flag off regarding moving the index, as well as make sure ffmpeg doesn't think you're writing into a seekable thing [11:48] yuck [11:48] http://ffmpeg.org/doxygen/trunk/movenc_8c_source.html [11:49] search for avio_seek [11:50] lol [11:50] yes, I see it needs to seek to write the moove etc... [11:50] that is exactly what is missing if I do not implement the seek callback [11:53] JEEB: those things you remarked are specified when allocing the context? There is a flag in there I see. [11:57] JEEB: ah yes sorry, found it. [12:03] JEEB: If I set seekable = 0 I get: [mp4 @ 0xa7bafe0] muxer does not support non seekable output [12:04] luc4, then you'd have to use some other muxer I guess [12:05] JEEB: I'm not bound to mp4, I can change. But I tried mpegts which is not seeking but when seeking during playback I get many errors. Anything else that comes into your mind? [12:05] JEEB: I need to mux h264. [12:06] JEEB: also, my h264 stream has vfr. [12:08] luc4, make sure you're passing SPS and PPS before every seeking point with mpeg-ts [12:12] JEEB: sorry, by "seeking point" you mean I-frame? [12:12] a Random Access Point [12:14] I have some video file and those are its parameters http://sprunge.us/gHfO codec and stuff, why is wherever I seek forward or backward in player it looks that way for few seconds http://image.bayimg.com/f6cd8a540b1d5c0dc17561430c747aad1eadfaf5.jpg , what is all those garbage is that player problem or that file was coded that way ? [12:33] JEEB: in fact, I'm not passing SPS and PPS to ffmpeg. How do I pass those to ffmpeg? Do I have to create a separate AVPacket or do I have to preprend to I-frames data passed to the AVPacket? [12:35] put them on extradata [12:51] where would I find the option to enable a specific rtmp authentication? [12:59] is there a way to determine an output format based on an input's format? 
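On the "put them on extradata" advice a little further up: for an Annex B H.264 stream that means attaching the SPS+PPS blob to the stream's codec context, which is where muxers and bitstream filters look for global headers. A sketch using the field names of the period (AVStream.codec and FF_INPUT_BUFFER_PADDING_SIZE, both later renamed):

    #include <string.h>
    #include <libavformat/avformat.h>

    /* sps_pps is an Annex B blob (00 00 00 01 SPS ... 00 00 00 01 PPS ...). */
    static int set_h264_extradata(AVStream *st, const uint8_t *sps_pps, int size)
    {
        st->codec->extradata = av_mallocz(size + FF_INPUT_BUFFER_PADDING_SIZE);
        if (!st->codec->extradata)
            return AVERROR(ENOMEM);
        memcpy(st->codec->extradata, sps_pps, size);
        st->codec->extradata_size = size;
        return 0;
    }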
[13:01] i.e., i want my output to be the same format as my input, the normal output format auto detecting doesn't work for me because i'm piping so there is no file name extension to inspect [13:01] void av_dump_format (AVFormatContext *ic, int index, const char *url, int is_output) [13:02] is second parameter stream index? [13:03] nlight: using the command line utility, not lib [13:03] how do i go about not copying audio? [13:03] in terms of: avconv -i ./Untitled01.avi -c:v copy -c:a none output.avi [13:03] (except none doesnt work) [13:04] parshap, i was asking for myself but ok ;p [13:04] nlight: hehe nm then :) [13:08] when using -codec copy does output format even matter? [13:09] no it doesn't, but it will effect the audio if you only specify video to copy [13:09] i figured out what i wanted btw [13:09] ok I'm really bummed, what do I need to do to have AVCodecContext->pix_fmt properly initialzied? [13:09] whatever file I open it's always 0 [13:11] i think i missed what you're working on in the first place [13:11] ok, now i need a command to convert video into lossless h.264 with 4:2:2 chroma [13:15] Anyone know how I can get this to simply output the file back to me? `ffmpeg -i ./audio.bin -acodec copy -`. I get "Unable to find suitable output format" [13:21] I see in libavformat.a authmod=adobe and authmod=llnw, but the options to use them don't seem apparent [13:44] 0 is a valid format..... really? I mean.. really? [13:44] i spent an hour on this lol [13:45] nlight: what are you doing exactly? seems maybe related to what i'm trying to do. [13:46] i'm decoding a video and kept getting pix_fmt == 0 thinking it meant AV_PIX_FMT_NONE [13:46] but it actually means AV_PIX_FMT_YUV420P [13:46] which is beyond retarded but yae [13:46] yea* [13:46] hah [14:22] should I read from AVFrame->data or AVFrame->data[0]? [14:35] to answer my own question -> data[0] [14:52] how would i pass a list of images to video but where i specify each images name [15:03] ItsMeLenny, http://en.wikibooks.org/wiki/FFMPEG_An_Intermediate_Guide/image_sequence [15:04] nlight, i have the image sequence as a video working perfectly, however i want to specify more images that don't count to the file name, and double up on some images in between [15:05] so i basically want to create a kind of xsheet image list that puts the images in the order i choose into a video [15:05] no idea, sorry [16:19] Hi to all [16:21] I need to read an H264 stream and stream it in MJPEG format [16:22] I understand that ffmpeg is able to perform this transcodification [16:24] I did some experimets, but I always the error message "Protocol not found" if I try to stream-out as rmpt or rtp [16:25] Maybe in command line like follow: [16:25] ffmpeg -i 'rtsp://source/encoder1' [... a lot of options ...] -f flv 'rtmp://server/live/cam0' [16:25] keyword "server" stands for an external component that I miss? [16:27] Any suggestion? [16:29] ( I've googled a lot, and also read documentations, but I'm unable to solve the problem ) [16:37] Hi, I'm trying to encode an m4a file as an mp3 but i'm getting an error with libswresimple.so.0 https://gist.github.com/kotfic/6131991 [16:39] any idea why vp8 encoded through ffmpeg is about 15% slower than vpxenc? [16:39] same lib version [16:39] b4u, using -threads 99 [16:39] or something like it!?! [16:39] using -threads 6 [16:39] on both [16:40] same settings [16:40] and same input y4m, for fairness [16:40] one likely compiled with SSE optimzations etc. 
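Two of the confusions above have short answers worth writing down: AV_PIX_FMT_NONE is -1, so a pix_fmt of 0 is a real format (AV_PIX_FMT_YUV420P), and AVFrame.data is the array of plane pointers, so the first plane's bytes start at data[0]. For illustration:

    #include <stdio.h>
    #include <libavcodec/avcodec.h>
    #include <libavutil/pixdesc.h>

    /* 0 is a valid pixel format; "unset" is AV_PIX_FMT_NONE (-1). */
    static void dump_frame_format(const AVFrame *frame)
    {
        if (frame->format == AV_PIX_FMT_NONE) {
            fprintf(stderr, "pixel format not set\n");
            return;
        }
        fprintf(stderr, "format %s, first plane at %p, stride %d\n",
                av_get_pix_fmt_name(frame->format),
                (void *)frame->data[0], frame->linesize[0]);
    }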
[16:40] they're the same [16:41] downloaded libvpx, compiled, did make install [16:41] 15% is a typical code optimization amount. [16:41] that installed vpxenc, and also let me compile ffmpeg with libvpx support [16:41] what thread priority is set when it runs? [16:42] how do I tell? [16:42] -6 on a 4core can be detremental if threads are not deliberatly core assigned. [16:42] what CPU? do you have 6 "real" cores? [16:42] ( No answers for my question? ) [16:43] it's a server [16:43] 12 real cores, hyperthreaded to 24 [16:43] and the workload on the server is much the same when encoding both? [16:43] there's nothing running on it other than the encodings, so yeah [16:44] w2008usr: what was yours? [16:44] I want to transcod an h264 stream to an MJPEG4 stream [16:44] do you see a fairly even distribution of workload over all 6 cores during encoding? [16:44] zap0: to be honest neither vpxenc nor ffmpeg use 100% CPU on any of the cores [16:45] so you are storage limited? [16:45] but I see that vpxenc is using around 420% in top and ffmpeg is using around 280% [16:46] Experimenting , I noticed that any attempt to stream-out in "rtp", "rmtp", "rstp" stops ffmpeg with error message "Protocol not found" [16:47] is it listed in ffmpeg -protocols? [16:48] if not, perhaps for some reason your build has it disabled [16:48] as it should be enabled by default [16:48] w2008usr, !pb [16:48] I check "-protocols" an d let you know [16:50] zap0: I think you're right btw [16:50] about hardware compilation [16:50] I checked ffmpeg compile line (I didn't build it) and someone compiled with --disable-yasm... [16:50] recompiling now [16:50] rtp is listed [16:50] rtmp should be [16:51] b4u ;) [16:51] (I'm going to paste in pasteie or similar) [16:52] rtmp is present too [16:52] w2008usr: in that case do what saste says and pastebin your whole command and output [16:52] Maybe I've wrote wrong protocol? [16:52] ( I'm going to paste... "please hold the line" ) [16:54] zap0: still not really sure why it doesn't max the cores I give it though, x264 does [16:54] cause it's too effecient! [16:54] maybe you need faster storage [16:54] it's SSDs :< [16:54] or perhaps this means you could increase your quality settings [16:55] is it RAID? [16:55] this is a pretty decent production server... 2 CPUs, RAID10 SSDs (yay no trim), 100GB+ of RAM, etc. [16:55] are the CPUs NUMA? [16:55] sounds like a dream machine! [16:56] i wish i had one :( [16:56] Mavrik: is that something you can tell if I give you the model number? [16:56] not sure of the answer [16:56] not really [16:56] mobo model number ? [16:56] but that usually explains non-100% usage on multisocket machines [16:57] either way, hoping I'll see a speed boost with yasm enabled... [16:57] b4u, he makes a good point. you should try locking the process's threads to a single CPU. [16:58] that might improve cache issues [16:58] ah can't do that, I am testing baremetal now but ultimately this will be rolled out virtualised [16:58] so won't have that kind of control [16:58] I'm back [16:59] if it's going to be virtualized, 15% might turn into 50 [16:59] ugh yeah [16:59] ( ops... please wait ) [16:59] ( I'm sorry ) [16:59] don't expect perfect utilization on virtualized servers [16:59] of course [16:59] seeing way better utilisation by ffmpeg now with yasm [17:00] I will stab whoever compiled it [17:00] Here is it: [17:00] http://pastebin.com/7eSU4Hbe [17:01] it now beat vpxenc by quite some margin :) [17:01] ( mistake... rmtp istead of rtmp... 
the error is now "failed to connect socket") [17:01] is this linux? [17:01] No [17:01] Windows [17:01] it's your output which is failing [17:02] "16:55:16,65>" is the prompt [17:02] but, I am not familiar with streaming so [17:02] maybe someone else can explain why [17:02] Oh [17:03] ( hope someone can explain ) [17:05] it is obviously retrieving your input fine because it has codec, dimensions etc. [17:05] Yes, read works [17:06] b4u, your rtmp url is not complete, check docs [17:06] I've seen the stream with ffplay [17:06] saste: not mine [17:06] w2008usr: I think maybe your output should be rtp:// not rtmp:// for that ip:port format? [17:07] I try [17:07] it is streaming something [17:08] maybe rtmp need some other external component/server to work? [17:08] well as saste said I think your rtmp URL is incorrect [17:09] rtp does't work: "Unsupported RTP version packet received" (ffprobe) [17:09] rtmp is supposed to be like: rtmp://127.0.0.1:1234/destinationresource [17:10] can try -f rtp with that instead of -f flv [17:10] ( now ffmpeg works - is streaming - but ffprobe says "Unable to receive RTP payload type 106 without an SDP file describing it") [17:11] You said it right [17:11] But player can't read it [17:12] see this [17:12] http://trac.ffmpeg.org/wiki/StreamingGuide [17:12] There is an option to list codec? [17:12] ffmpeg -i input -f rtsp -rtsp_transport tcp rtsp://localhost:8888/live.sdp [17:20] The ffprobe says "rstp://127.0.0.1:1234/live.dsp: Protocol not foundsq= 0B f=0/0" [17:23] here is ffmpeg / ffprobe: http://pastebin.com/Eq0EaC9x [17:25] However, can You confirm that - apart from my mistakes - you can receive in input a stream H264 and stream it in output as MJPEG ? [17:28] ( I think so, but I'm not sure, and I have to decide if continue on this way or rewrite a lot of an existing system code ) [17:28] ffmpeg can convert pretty much anything to anything :P [17:29] but of course depending on the quality it may not be realtime [17:30] did you read the page I linked? [17:31] also [17:31] http://trac.ffmpeg.org/wiki/StreamingGuide ? [17:31] you typoed in ffprobe [17:31] you did .dsp [17:31] instead of .sdp [17:31] Bookmarked a couple of hour before I entering here.. [17:31] try ffprobe with correct url [17:32] you're right [17:32] I try [17:33] ( My neurons are in need of rest ) [17:34] "Protocol not found" again [17:35] what if you try to play direct? [17:35] the play command would be this - ffplay -rtsp_flags listen rtsp://localhost:1234/live.sdp [17:36] Protocol not found [17:37] weird [17:37] I am not sure sorry, I am not that familiar with streaming files [17:38] Maybe both command line are wrong: "rstp://" istead of "rtsp://" [17:38] You do not have to be sorry, I'm grateful for helping me :-) [17:43] Ok, I'm going [17:43] Thank you [17:43] Bye! [19:41] i just posted a question regarding libffmpeg on the forum at http://ffmpeg.gusari.org/viewtopic.php?f=11&t=1027 ... if anyone has the time, please take a look. otherwise, i'll wait for an answer (hopefully) on the forum. [19:43] yousef: but ffmpeg project have nothing to do with libffmpeg thing [19:43] it does not? please excuse my ignorance. [19:44] there is no such library as libffmpeg provided by ffmpeg [19:44] no libavcodec, libavformat, etc? [19:45] yes, libavcodec, libavformat, libavutil etc. are ffmpeg libraries [19:45] but there is no such thing as libffmpeg [19:50] my apologies. by libffmpeg i was referring to the collection of ffmpeg libraries. 
[19:50] no libffmpeg.so [19:50] not* [19:52] i just edited my post and corrected this. [20:57] is avfilter slice threading supported? [21:09] RobertNagy: yes [21:10] you can see what filters have slice threading enabled with: ffmpeg -h filters [21:10] *without '-h' [21:11] RobertNagy: you are interested in specific filter or? [21:34] hello. [21:34] i decode a webcam stream, and then encode it using VP8. The webcam stream has a yuyv422 pixel format, which VPx claims doesn't support. [21:34] How could i change the pixel format, so that i can encode the stream using VP8, please? [21:34] what you use to encode? [21:34] i'm writing in C, by the way [21:35] VP8 [21:35] there is code that gives list of encoder supported pixel formats [21:35] i read a webcam stream from /dev/video0, and decode that stream [21:35] would there be no way to convert the pixel format? [21:36] you can convert with swscale [21:36] or using format filter within libavfilter [21:38] I'm trying to connect a desktop stream to someone else's server over RTMP, I haven't yet figured out the needed settings for the RTMP portion [21:40] User name, stream name and password where provided as well as the needed URLs [21:43] Honestly I should come back when I get home, I don't have the stream info right now [21:51] durandal_1707: thanks a lot! :) [22:20] where would I find the options to enable rtmp authentication? [22:27] Is it possible to use a std::vector as the input buffer for decoding audio? It gets to the decode function and says the mp3 header is missing (http://pastebin.com/ZFzrdv4S) Thank You [22:35] spidey_: there is documentation [22:36] sorry, didn't see it in there [22:37] I've been looking for days and I couldn't find smtp authentication either [22:37] Datalink-M: smtp? [22:37] think it was was supposed to be rtmp [22:38] Rtmp in my case [22:38] there is everything in documentation for protocols [22:38] Hey fellas. I'm interested in making a 24/7 stream. There is plenty of documentation on that. From time to time within that stream, I'd like to overlay another stream. There is plenty of documentation related to overlays -- but I'd like to "switch" the overlay on and off as needed, without restarting the ffmpeg stream in progress. [22:38] i repeat, I've been searching it for days without luck [22:39] So - my question is -- how would you fellas go about doing this? Is there a term for what I'm trying to do? As of right now, I was thinking about making a script/application that would manage a series of ffmpeg pipes. When it was time to "switch" on the overlay, the application could direct the source pipe through an additional ffmpeg process in charge of the overlay, and when it was time to stop, would direct the pipe traffic back to [22:39] the ffserver process. Think this is possible? [22:39] Datalink-M: read documentation i linked [22:39] I'm trying to modify the behavior of incoming raw images (over a pipe) while encoding to h264 - I'm looking for the source file that sets the PTS of these incoming images. Where can I find it? [22:41] durandal_1707: I've been hunting through that for days... this is the third time I've said I have been looking and couldn't find it, telling me to RTFM is redundant to the fact I can't find it in the manual [22:41] Datalink-M: what you can not find? [22:41] Rtmp authentication [22:41] did you found rtmp protocol documentation? [22:41] Yes [22:41] RTMP Doc: http://www.ffmpeg.org/ffmpeg-protocols.html#rtmp [22:42] Yes... 
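On the yuyv422-to-VP8 question above: an encoder lists what it accepts in AVCodec.pix_fmts, terminated by AV_PIX_FMT_NONE, and anything else has to be converted first (swscale, or the format filter, as suggested). A small sketch of the check; the conversion itself would mirror the sws_scale example earlier in this log:

    #include <libavcodec/avcodec.h>

    /* Returns 1 if the encoder can take the pixel format directly.  libvpx's
     * VP8 encoder lists planar 4:2:0 formats but not packed YUYV422, which is
     * why the conversion is needed. */
    static int encoder_accepts(const AVCodec *codec, enum AVPixelFormat fmt)
    {
        const enum AVPixelFormat *p;
        if (!codec->pix_fmts)
            return 1;   /* encoder publishes no list: assume anything goes */
        for (p = codec->pix_fmts; *p != AV_PIX_FMT_NONE; p++)
            if (*p == fmt)
                return 1;
        return 0;
    }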
I've tried commands to the point I'm past what should work [22:43] what ffmpeg version are you using? [22:43] what you actually tried? [22:45] 0.8.6 armhf raspbian, user at pass, as well as a few variations of app= parameters, I'd have to check when I get home for more details [22:46] Rtmp://user at pass:host/app/stream [22:47] Basically a lot of stuff for the flv string [22:47] Meh, I'm gonna come back when I'm home [22:48] PatNarciso, basically, write your own transcoder& ffmpeg doesn't really support what you want [22:48] it's not meant as a permanent video streamer, it's a file transcoder first and foremost [22:49] Mavrik: right, and it does a great job at that. and broadcasting -- both pregenerated and live content. [22:50] It also does a great job, via filters, of allowing a person to manipulate video (or audio). [22:51] I'd imagine someone must have made a "switcher" powered by ffmpeg. I figured that this would be the place to ask before attempting to create my own. [23:15] I'm trying to alter the PTS of incoming raw rgba images in the source (from pipe, encoding into h264) - what source file does this? [23:15] I'm swimming through hundreds of source files and just cannot find it [23:31] Anyone know how I can get this to simply output the file back to me? `ffmpeg -i ./audio.bin -acodec copy -`. I get "Unable to find suitable output format". [23:32] define the output format you'd like returned, and then you'll be good to go. [23:33] if... if you want to get it raw without transcoding, just "cat audio.bin" [23:34] PatNarciso: the problem is that my audio.bin could be mp3, could be ogg... i don't want to "hard-code" an output format, I just want to keep it the same as the input [23:34] klaxa: in reality I am using `-metadata` to sets some metadata on the file] [23:34] then you have to specify a container too [23:35] klaxa: i want to keep the same container as the input source [23:35] klaxa: i.e., if audio.bin was an mp3, use mp3. if it was ogg, use ogg. [23:35] shouldn't be too hard with a little scripting [23:35] yeah [23:35] but you will have to add -f [23:35] klaxa: :( [23:35] and you can't use just "-" but use "pipe:" or "pipe:1" [23:35] klaxa: so i guess i could parse the format out of the ffmpeg output [23:36] mhh yes that would be one way [23:36] klaxa: and then run ffmpeg again passing -f with the format i got from the first run [23:36] yep [23:36] klaxa: i'd like to avoid having to read the file twice though, any ideas? [23:36] or you just check the file for the extention and assume it's the right one [23:36] for parsing the output, see ffprobe, and the json format is my favorite. [23:37] PatNarciso: awewsome! thanks for the tip - it looked like it was going to be a pain to get the format from ffmpeg output [23:38] for file in *; ffmpeg -i "$i" -metadata author="someone awesome" -c copy -f "$(echo $i | sed s/.*\.//)" pipe: [23:38] which there was a way to avoid running two procsses though - wish i could do something like `-f copy` like i can `-c copy` [23:38] uh... damn :P [23:38] parshap, no problem. I spent too many hours parsing ffmpeg output before I learned that. 
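Since the thread above settles on probing the container first and remuxing second, here is what the probe step looks like through the library instead of parsing ffmpeg's text output; a minimal sketch that prints the demuxer name, the same string ffprobe's -show_format reports as format_name. Note the demuxer name is not always a valid muxer name (e.g. "mov,mp4,m4a,3gp,3g2,mj2"), so a small mapping step may still be needed before reusing it as -f:

    #include <stdio.h>
    #include <libavformat/avformat.h>

    int main(int argc, char **argv)
    {
        AVFormatContext *ic = NULL;

        if (argc < 2) {
            fprintf(stderr, "usage: %s <input>\n", argv[0]);
            return 1;
        }
        av_register_all();
        if (avformat_open_input(&ic, argv[1], NULL, NULL) < 0)
            return 1;

        printf("format_name=%s\n", ic->iformat->name);         /* e.g. "mp3" */
        printf("format_long_name=%s\n", ic->iformat->long_name);

        avformat_close_input(&ic);
        return 0;
    }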
[23:38] for i# in *; ffmpeg -i "$i" -metadata author="someone awesome" -c copy -f "$(echo $i | sed s/.*\.//)" pipe: [23:38] without the # [23:38] stupid keyboard [23:38] argh that's still no valid bash [23:38] *wish - not which [23:39] for i in *; do ffmpeg -i "$i" -metadata author="someone awesome" -c copy -f "$(echo $i | sed s/.*\.//)" pipe: | ; done [23:39] now it should be valid bash [23:39] klaxa: i don't have filenames, dealing with just raw buffers of data [23:39] ah [23:39] well then ffprobe is your best bet, but you'll still have to read the file twice [23:39] cool, thanks for the help! [23:39] klaxa, whats the best format to use when running a bunch of ffmpeg processes tied together with pipes? [23:40] in how far? [23:40] out of curiosity, is there a technical reason why something like `-f copy` doesn't exist? [23:40] hmm... good question [23:41] PatNarciso, can you explain the usecase a little bit further? [23:41] in general i would think passing data through pipes is best done in raw data since it requires neither compression nor decompression [23:42] sure. ffmpeg -i whatever.mpg - | ffmpeg -i - blah.mp4 [23:42] why... would you do that? [23:42] obviously in that example, there is no need for a pipe [23:42] ah yeah... [23:42] you want multiple outputs i guess? [23:43] I'm trying to cook up the elements needed to create a switcher (or router?) powered by ffmpeg. [23:43] if so there is no need for pipes at all [23:43] afaik [23:43] or in other words: create a live stream, empowering a person to switch feeds at will. [23:44] in how far livestream? [23:45] i mean if you write stuff to a pipe it's blocking [23:45] if you write stuff to a file it's not live [23:46] I'm thinking newsroom application. 4 cameras. and a dude choosing the camera to be on the single output. [23:47] good point about the blocking. fifo may not be the best idea. [23:48] you would need a decoder that decodes the stream and only sends it out over fifo if requested [23:49] but mind you, the output has still to be compliant with container/codec specs, i.e. correct header/trailer [23:52] right. somewhere in my notes I found a person who was removing that, using... I think it was an mpegpts stream. [23:54] maybe so, not too familiar with that [23:54] i've done it once for mp3 but that's it [23:54] PatNarciso, how different are those formats? [23:55] shouldn't one be able to put just everything into one matroska container and selectively play one video stream? [23:55] I recall trying to concat two streams before, and the mpeg header of the second file came along and upset the ffmpeg process. I think the doc's have an example of how to get around that one. [23:56] PatNarciso, MPEG-TS may be switched since it's a streaming format [23:56] just don't expect your decoders to survive a resolution switch or a timestamp jump [23:56] Mavrik, so, for example: the source of camera1 and camera2 could be anything. but I have no problem running additional ffmpeg processes to convert those streams to "whatever" format is needed. [23:56] right. [23:57] so yeah, package your streams into MPEG-TS container [23:57] make sure they're in same color space, video resolution and formats [23:57] and that could be okish [23:57] and I-frame only [23:57] it's my hopes that, I can chain a few ffmpeg processes together -- and a -switch on the final process be told to ignore the timestamp (and generate a new one?) [23:57] mark4o, nah. 
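On the switcher idea being discussed here: the libav-based alternative raised a few lines later boils down to remuxing each camera feed into MPEG-TS inside one process instead of chaining ffmpeg binaries with pipes (see the MPEG-TS notes that follow). A minimal per-input sketch of that building block, stream-copying into an mpegts output with the 2013-era API; the URLs are placeholders and most error handling is omitted:

    #include <libavformat/avformat.h>

    /* Read packets from one input and remux them into MPEG-TS without
     * re-encoding - the per-camera building block of a libav-based switcher. */
    static int remux_to_ts(const char *in_url, const char *out_url)
    {
        AVFormatContext *ic = NULL, *oc = NULL;
        AVPacket pkt;
        unsigned i;
        int ret;

        av_register_all();
        avformat_network_init();

        if ((ret = avformat_open_input(&ic, in_url, NULL, NULL)) < 0)
            return ret;
        if ((ret = avformat_find_stream_info(ic, NULL)) < 0)
            return ret;
        if ((ret = avformat_alloc_output_context2(&oc, NULL, "mpegts", out_url)) < 0)
            return ret;

        for (i = 0; i < ic->nb_streams; i++) {    /* mirror every input stream */
            AVStream *out = avformat_new_stream(oc, NULL);
            avcodec_copy_context(out->codec, ic->streams[i]->codec); /* 2013-era call;
                                      newer libs use codecpar + avcodec_parameters_copy */
            out->codec->codec_tag = 0;
        }

        if (!(oc->oformat->flags & AVFMT_NOFILE))
            avio_open(&oc->pb, out_url, AVIO_FLAG_WRITE);
        avformat_write_header(oc, NULL);

        while (av_read_frame(ic, &pkt) >= 0) {
            AVRational in_tb  = ic->streams[pkt.stream_index]->time_base;
            AVRational out_tb = oc->streams[pkt.stream_index]->time_base;

            if (pkt.pts != AV_NOPTS_VALUE)
                pkt.pts = av_rescale_q(pkt.pts, in_tb, out_tb);
            if (pkt.dts != AV_NOPTS_VALUE)
                pkt.dts = av_rescale_q(pkt.dts, in_tb, out_tb);
            pkt.duration = av_rescale_q(pkt.duration, in_tb, out_tb);
            pkt.pos = -1;

            av_interleaved_write_frame(oc, &pkt);
            av_free_packet(&pkt);
        }

        av_write_trailer(oc);
        avformat_close_input(&ic);
        if (!(oc->oformat->flags & AVFMT_NOFILE))
            avio_close(oc->pb);
        avformat_free_context(oc);
        return 0;
    }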
[23:58] PatNarciso, MPEG-TS can be concatenated on TS packet boundaries [23:58] even though, putting all of that in a horrible chain of pipes will probably work like crap [23:58] considering a specific usecase a more custom solution using libav would probably be way more performant and stable [00:00] --- Fri Aug 2 2013 From burek021 at gmail.com Sat Aug 3 02:05:02 2013 From: burek021 at gmail.com (burek) Date: Sat, 3 Aug 2013 02:05:02 +0200 (CEST) Subject: [Ffmpeg-devel-irc] ffmpeg-devel.log.20130802 Message-ID: <20130803000502.D3ECC18A03BC@apolo.teamnet.rs> [03:26] ffmpeg.git 03Timothy Gu 07master:b3f858b829ab: libxvid: cosmetics: Realign the code [03:40] ffmpeg.git 03Marton Balint 07master:d75d9112236a: mpegts: save last pcr of pcr pids in PES Context [03:55] ffmpeg.git 03Michael Niedermayer 07master:5936d982446e: avcodec/wmaenc: change commented assert to av_assert [03:55] ffmpeg.git 03Michael Niedermayer 07master:bff371e34ce2: jpeg2000dec: simplify jpeg2000_read_bitstream_packets() [10:00] 'morning [10:21] Action: michaelni ZZzzZZZzz [10:45] ffmpeg.git 03Martin Storsj? 07master:fd8f91e3f44a: rtsp: Simplify code for forming the remote peer url [10:45] ffmpeg.git 03Michael Niedermayer 07master:ca1b02910889: Merge commit 'fd8f91e3f44a2bdbefaaebead388133c5fdd3423' [11:01] ffmpeg.git 03Diego Biurrun 07master:390b4d7088b5: flvdec: Fix = vs. == typo in sample rate check [11:01] ffmpeg.git 03Michael Niedermayer 07master:cb73f84087ab: Merge commit '390b4d7088b5cecace245fee0c54a57e24dabdf4' [12:18] ffmpeg.git 03Diego Biurrun 07master:e4529df94461: flvdec: K&R formatting cosmetics [12:18] ffmpeg.git 03Michael Niedermayer 07master:ae48547a5296: Merge commit 'e4529df944616917ae8462f5102253ff7f983093' [12:23] ffmpeg.git 03Diego Biurrun 07master:f900f35ac8db: flvdec: Eliminate completely silly goto [12:23] ffmpeg.git 03Michael Niedermayer 07master:67291ffd6390: Merge commit 'f900f35ac8db4ac30df6fda1c27502c2ef9e6ba5' [12:28] ffmpeg.git 03Diego Biurrun 07master:9ea24e927e5b: twinvq: Add proper twinvq prefixes to identifiers [12:28] ffmpeg.git 03Diego Biurrun 07master:4c7fd58f8ae7: h264_sei: Remove pointless old comment [12:28] ffmpeg.git 03Michael Niedermayer 07master:da4cd6150233: Merge commit '4c7fd58f8ae729b964b6859eace5ab9a55ce3c8c' [12:55] ffmpeg.git 03Vittorio Giovara 07master:b18412171fda: h264_sei: K&R formatting cosmetics [12:55] ffmpeg.git 03Michael Niedermayer 07master:2ae5ac78d820: Merge remote-tracking branch 'qatar/master' [13:16] ffmpeg.git 03Michael Niedermayer 07master:61a28d00e85c: flvdec: silence unused warning [13:20] is the website down? [13:22] or just slow [13:33] is having problems wm4 [13:34] really slow [13:39] not cpu related, i dont think [13:42] --- yahoo.com ping statistics --- [13:42] 88 packets transmitted, 88 received, 0% packet loss, time 87073ms [13:42] rtt min/avg/max/mdev = 122.411/149.325/234.512/28.315 ms [13:47] michaelni : problem with mphq [13:49] wm4 : website is back up to speed [13:49] still seems kinda slow [14:53] Compn, /me was afk, whats the issue ? ffmpeg.org seems normal speedwise [15:01] shell is still slow [15:01] nevermind [15:01] reconnect makes it work better [17:34] ffmpeg.git 03Michael Niedermayer 07master:2b9590ebab8b: avdevice/timefilter-test: dont try to optimize par1 for n0=0 case [17:52] saste: ubitux, Daemon404, kierank, Gisle, Compn, beastd, JEEB, tfoucu, Thilo, buxiness [17:54] hmm, not enough plance in hot tub? [18:14] j-b, sup [18:14] he's saying who has registered from vdd [18:14] o [18:14] ok [18:14] :context: [18:37] j-b : whats ? 
[18:37] ah :P [18:49] ffmpeg.git 03Paul B Mahol 07master:bc2187cfdb5e: ttaenc: fix packet size [21:21] ffmpeg.git 03Michael Niedermayer 07master:65dd93209dc1: movenc: make uuids static const [21:21] ffmpeg.git 03Michael Niedermayer 07master:0553f2c6e585: avformat/matroskaenc: make 2 tables static that are not used outside matroskaenc [23:26] ffmpeg.git 03Michael Niedermayer 07master:04c50cb3a0b9: avcodec/dnxhdenc: make header_prefix static const [23:27] ffmpeg.git 03Michael Niedermayer 07master:b984c727f53c: avcodec/dxtory: make def_lru[8] static [23:27] ffmpeg.git 03Michael Niedermayer 07master:f19a23bd4fb6: avcodec/mjpeg: make 2 outcommented tables static [23:27] ffmpeg.git 03Michael Niedermayer 07master:154c8bf60b65: avcodec/mdec: make block_index static const [00:00] --- Sat Aug 3 2013 From burek021 at gmail.com Sat Aug 3 02:05:01 2013 From: burek021 at gmail.com (burek) Date: Sat, 3 Aug 2013 02:05:01 +0200 (CEST) Subject: [Ffmpeg-devel-irc] ffmpeg.log.20130802 Message-ID: <20130803000501.C590918A0391@apolo.teamnet.rs> [00:00] mmh [00:00] I feel like I could make *something* with ffmpeg -- but I've never worked directly with libav before. [00:01] In my testing a few months back, I did have a lot of overhead -- chaining ffmpeg's together. [00:01] well, try and you'll see how fast and stable it is [00:02] thus, I was curious if there was a more ideal format to toss ffmpeg data -- something more native. [00:02] Do you guys know of any alternatives to `ffmpeg -metadata` for writing metadata like id3 tags or ogg comments that is as robust as ffmpeg? [00:02] PatNarciso, you could probably do it with a single ffmpeg process per camera stream and a piece of code that muxes the TS streams together [00:02] and switches them [00:05] ok -- it took me a few seconds to let that sink in... how do you see the "switcher" working? [00:07] oh wait, now I remember [00:07] you want to have an overlay right? [00:09] heh - yes, that would be the next extension of the equation. [00:09] parshap: most alternatives are format-specific, so it depends what format you are using [00:10] PatNarciso, yeah, good luck with that :) [00:10] sleep time. [00:10] not sure where in the ffmpeg chain I would put that yet - between the camera and switcher? or perhaps after the switcher and before the broadcast? [00:11] PatNarciso: multiple overlay filters and then turn them on and off with timeline editing? [00:11] if I could create a successful switcher, I think everything else would be "easy" after that. [00:11] haha, how often i thought things would be "easy" :') [00:11] Mavrik: goodnight. thanks for your help. [00:12] mark4o, with "timeline editing"? I don't understand. [00:13] If I could turn overlay filters on and off at will -- that would be amazing. Is this possible? [00:13] you can enable or disable the filter at the times you specify, e.g. enable="between(t,100,200)" [00:14] ahh cool -- I follow ya now. problem is: this would be live -- I dunno how long it would be. [00:14] ... or when it would be for that matter. [00:15] I think it supports sendcmd as well; haven't tried it [00:16] or zmq - haven't tried that either [00:16] no idea sendcmd was a thing, googleing now. so i'll obviously be in expert in 2 mins. brb. [00:18] http://ffmpeg.org/ffmpeg-filters.html#Examples-23 [00:23] http://ffmpeg.org/ffmpeg-filters.html#sendcmd_002c-asendcmd [00:23] and wow, this is awesome. [00:32] so zmq is a slick message server/client library? http://zeromq.org/intro:read-the-manual [00:47] zmq sounds like an ircd on crack. 
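The sendcmd/zmq pair discussed just above is the command-line face of a libavfilter feature that is also reachable from C: avfilter_graph_send_command(). A minimal sketch, assuming an already-configured graph containing an overlay instance named "overlay0" and assuming the overlay filter in that build accepts the "x"/"y" commands (command support varies by version; check the filter documentation). Parking the overlay off-screen is one crude way to "switch it off" without rebuilding the graph:

    #include <libavfilter/avfilter.h>

    /* Assumes `graph` is already configured and contains an overlay filter
     * whose instance name is "overlay0" (a placeholder - a graph built with
     * avfilter_graph_parse() gets names like "Parsed_overlay_0"). */
    static int park_overlay(AVFilterGraph *graph)
    {
        char response[256] = { 0 };
        int ret;

        ret = avfilter_graph_send_command(graph, "overlay0", "x", "-10000",
                                          response, sizeof(response), 0);
        if (ret < 0)
            return ret;
        return avfilter_graph_send_command(graph, "overlay0", "y", "-10000",
                                           response, sizeof(response), 0);
    }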
[01:03] PatNarciso: um, I think at least in ffmpeg it is just a way to receive commands from an external source [01:03] although it may be capable of more than that [01:09] true. [02:30] Chroma-key filtering-- I'm looking at the filters doc, but nothing sticks out to me (unless it's a transparent png). Is there a filter for filtering out colors (or a range of colors)? ex: green screen. [02:52] Hello, I am working with the decoding_encoding.c example and seem to be having a problem with the final packet being processed or written. Suggestions? http://pastebin.com/YCWTFLS5 Thank You [03:38] Is there a way I can watch the stream I am saving to disk, as I record it? I am recording from a webcam to a simple .mp4 file. Nothing fancy. [03:47] ffprobe says that this mp4 file has 20746 frames - how do I extract them all? Using ffmpeg -i input.mp4 -r D pos/%5d.jpg with varying values of -r gives different results. [03:47] When I use -r 1, there are hundreds of frames; when I use -r 60 there are 30,000+ images. [03:48] Why does the -r option even matter? I want ALL frames. [03:48] Why would ffmpeg produce more images than the amount even contained in the file? ffprobe says there are 20746 frames, so there should be that many images after I run a command. [04:36] This is what I am trying to do but I don't understand the tee option. http://pastebin.com/BTGJsVNk [04:54] kizzo2: you asked for a fixed frame rate of 60fps; that means to duplicate or drop frames as necessary to make it that rate [04:54] don't use -r if you just want to keep the frames as they are [05:30] mark4o: Thank you. [06:01] There are exactly that many frames now, great. [07:19] The file saved by cv2.imwrite("temp.jpg", frame0) has a different MD5 from file 00001.jpg after running "ffmpeg -i input.mp4 %5.jpg" [07:20] Why is that? [07:20] du -sh 00001.jpg temp.jpg [07:20] 32k 00001.jpg [07:20] 96k temp.jpg [07:46] kizzo2: jpg is lossy [07:49] Yes, someone responded in another channel that the reason may be due to headers or something. [07:49] any reason you're not using png? [07:50] No particular reason - now I'm using it actually. [07:51] Just started reading about the differences and was like, "ok no real reason not to use PNG instead of this lossy JPEG stuff." [08:33] what's the replacement for sws_scale ? [08:33] or I've wrongly assumed it's deprecated? [08:52] what's the way to check if a codeccontext is progressive or interlaced? [08:57] uhh [08:58] that's not as easy as you'd think :) [08:58] ok, scratch that [08:59] I'm trying to get sws_scale to rescale my input [08:59] it works when src width/height equals dst width/height but if dst width/height is different sws_getContext returns null [08:59] sws_getContext(sw, sh, (AVPixelFormat)fmt, dw, dh, (AVPixelFormat)dst_format, SWS_BICUBIC, nullptr, nullptr, nullptr); [09:00] this returns null when sw != dw and/or sh != dh [09:00] why could that be? [09:00] i am aware i should use cachedcontext just want to get this working first [09:02] that's wierd [09:02] nlight, do you have a 420 pixel format and sizes not divisible by 2? [09:02] nlight, or, did you compile ffmpeg without swscale? [09:03] i have a 420 format, yes [09:03] i use the zeranoe builds [09:23] hey i have a wav file and i want to create a new version of it that is louder. how do i do this? [09:27] keep on running into ALSA xruns [09:27] http://ix.io/70K [09:27] encoding to aac for playing on an iphone [09:27] any suggestions how to avoid that? 
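Regarding the decoding_encoding.c question above about the final packet: an encoder with delay (B-frames, lookahead) buffers frames internally, so after the last real frame you keep calling the encode function with a NULL frame until it stops producing output. A minimal drain loop with the 2013-era API; enc is the opened encoder context, out the destination file:

    #include <stdio.h>
    #include <libavcodec/avcodec.h>

    /* Drain delayed packets: call avcodec_encode_video2() with a NULL frame
     * until got_output comes back 0, writing each packet as it appears. */
    static int flush_encoder(AVCodecContext *enc, FILE *out)
    {
        AVPacket pkt;
        int got_output = 1, ret;

        while (got_output) {
            av_init_packet(&pkt);
            pkt.data = NULL;          /* let the encoder allocate the payload */
            pkt.size = 0;

            ret = avcodec_encode_video2(enc, &pkt, NULL, &got_output);
            if (ret < 0)
                return ret;
            if (got_output) {
                fwrite(pkt.data, 1, pkt.size, out);
                av_free_packet(&pkt);
            }
        }
        return 0;
    }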
[09:29] i figured out my problem, avpicture_fill doesn't set width/height of the frame [09:29] so i was passing 0, 0 to sws_scale [09:29] figures [09:46] nlight, ah yes, avpicture_fill only allocates memory :) [09:47] now I set them myself, that is allowed, right? [09:47] well it works so far, so whatever, if it breaks I'll ask again :D [09:49] nlight, it's expected actually :) [09:53] good [09:53] thanks a lot for the help :) [09:55] How can i make a virtual webcam device as: /dev/video18 using FFmpeg (showing a jpeg or video clips for example) ? Which can be then readable from uvccapture -d/dev/video18 or vlc or mplayer etc... as video input source? [09:56] huh [09:56] ffmpeg isn't really a tool for that [10:03] Mavrik, what else i can use to make a Virtual video device playing a jpeg or video clips and available to use as video input source from /dev/video18 ? [10:03] your problem is that you need to implement a V4L2 device [10:08] is it possible that sws_getContext followed by sws_scale followed by sws_freeContext leaks any memory? [10:08] I got a very small leak somewhere [10:08] and I've tracked it down to conversion [10:09] not sure what I'm doing wrong [10:09] no, it shouldn't leak [10:09] are you freeing the AVFrame itself? [10:09] instead of just its buffers? [10:09] i do av_free(frame); [10:10] after freeing the buffers [10:10] should I use avcodec_free_frame instead? [10:10] YES - Mavrik exactly, you got my point. Do you have any idea how i can make one virtual???? [10:10] yea, I should [10:10] IamTrying, besides writing your own driver& no idea [10:11] Mavrik, OK will make one and paste here [10:13] Hi, I'm trying to run the following command to add a background image to a video, and convert it from flash to HTML5 [10:13] I've got the following, but it's losing the audio! [10:13] ffmpeg -loop 1 -f image2 -i bg.png -vcodec libvpx -vf "movie=mercury.flv [logo]; [in][logo] overlay=0:0 [out]" -acodec libvorbis filename.webm [10:13] Mavrik, can we not do like? mencoder vid.avi -nosound -vf scale=320:240 -ovc raw -of rawvideo -o /dev/video0 [10:14] Mavrik, where /dev/video0 is a virtual video input source for other tools [10:14] nope. [10:14] and is there a reason why are you doing something as awful as this? [10:14] instead of modifying your tools to read a file? [10:15] earthworm, losing in what way? [10:15] Mavrik, NO modifying the tools will take me 6 months. And having a virtual video device will solve my problem within hours. [10:15] Mavrik, the new file is silent for some reason ... [10:15] IamTrying, hah. [10:15] Mavrik, i have a video capture reader application. But it must read a video source any if there is none available. [10:15] !pbb earthworm [10:16] bah, stupid bots [10:16] earthworm, do that and we'll see what's going on :) [10:19] Right you are [10:28] Mavrik, i will code on those. 1) http://code.google.com/p/v4l2loopback/ 2) https://github.com/umlaeute/v4l2loopback [10:40] Hello. Is there a way of writing custom filters for ffmpeg? I am not satisfied with what I found, I would like to program my own filter. [10:41] ffmpeg is opensource, so yes, you can add your own filter [10:41] there's probably not much documentation available [10:42] but you can just check a source of a simple filter and build on that [10:44] Mavrik: true. there is any documentation available on how to write filters. I will check an existing filter and try to start from that. 
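To make the avpicture_fill() point above concrete: it only fills data[]/linesize[] from a caller-provided buffer, so width, height and format still have to be set by hand before the frame goes to sws_scale() or an encoder. A minimal sketch with the 2013-era calls:

    #include <libavcodec/avcodec.h>

    /* Wrap a caller-owned pixel buffer in an AVFrame.  avpicture_fill() only
     * sets data[]/linesize[]; the geometry fields are the caller's job. */
    static AVFrame *wrap_buffer(uint8_t *buf, int w, int h, enum AVPixelFormat fmt)
    {
        AVFrame *frame = avcodec_alloc_frame();   /* av_frame_alloc() on newer libs */
        if (!frame)
            return NULL;

        avpicture_fill((AVPicture *)frame, buf, fmt, w, h);

        frame->width  = w;      /* not set by avpicture_fill() */
        frame->height = h;
        frame->format = fmt;
        return frame;           /* buf stays owned by the caller */
    }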
[10:44] wawrek, check for documentation in ffmpeg source [10:44] otherwise, start in libavfilter [10:44] there isn't that much code there :) [10:44] from what I can see, ffmpeg uses mostly c. [10:44] thanks I will [10:45] yes, ffmpeg is written in C [11:09] Here's my command output ... [11:09] http://pastebin.com/xLFJQE6A [11:09] Just ran that, the audio has gone. [11:12] earthworm: your input is a png [11:12] Yeah, I'm mixing a PNG background with a flash video that's got a transparant background [11:13] your ONLY input there is a png [11:13] Hi [11:13] What the hell, the video comes out in the output, just without the audio? [11:13] earthworm: oh, I see. Look at -filter_complex in the man page. [11:15] I'll be honest, I took the command off a forum [11:15] I'll look that up ... [11:17] man ffmpeg | less +/"^ -filter_complex filtergraph" [11:17] Could I just convert the video to HTML5 with a white background? It translates the transparent background to black which is the real problem as it looks naff [11:22] im passing -crf when transcoding to webm/vp8 and getting pretty terrible results no matter what value i pass [11:22] the input is 720p h.264 and the output looks like potatoe [11:23] thoughts? [11:23] setting a specific bitrate seems to work [11:28] I tried this and it made a silent video with just the background image :/ [11:28] ffmpeg -i mercury.flv -i bg.png -filter_complex 'overlay[out]' -map '[out]' -vcodec libvpx -acodec libvorbis filename.webm [11:29] ffmpeg -i mercury.flv -i bg.png -filter_complex 'overlay[out]' -map '[out]' -map 0:a -vcodec libvpx -acodec libvorbis filename.webm [11:29] oh, just use the example for the man page. [11:30] That's kind of what I tried to do, let's see what your command does [11:32] ffmpeg -i mercury.flv -i bg.png -filter_complex '[0:v][1:v]overlay[out]' -map '[out]' -map 0:a -vcodec libvpx -acodec libvorbis filename.webm [11:32] That's good, we've got audio now, but it's put the image on top of the video :D [11:33] I thought you wanted the png overlayed? [11:34] Well, the video overlayed onto the PNG, so the transparent FLV background goes through to the PNG [11:35] transparent flv- I've never heard of such a thing. [11:35] Yeah, it's annoying [11:35] The flash video has a transparent background [11:36] If I just convert it, I end up wth black which will look gash on my web site [13:03] Why does people say that avi is a file format? [13:03] no he said filmformat actually wich means film format [13:03] video format. Yeah but stil.. [13:04] no film fomrat translates to film format, like 35 mm and 120mm [13:06] to many syntactic errors [13:06] s/to/too [13:06] ^^ [13:07] Why can't people learn that file extensions (containers) like avi aint the encoded format like H.264 [13:14] hmm [13:15] so i just finished encoding a 40 minute 720p video in webm/vp8 [13:15] before it finished i tried to watch the first 10 secodns or so and it was fine [13:15] but when it finished i tried to play in vlc and nearly every frame was dropped [13:21] I was asking a bit ago about transcoding a FLV file to HTML5 video, can anyone help with that? [13:21] earthworm, heh that's what im doing [13:21] I need to make sure the transparency in the flash video is replaced with either white, or a background image, either will do [13:22] what do you mean by html5 video though [13:22] since different browsers support different things [13:22] I managed to do it, but this transparency is buggering things up [13:22] I thought I just needed WebM ... 
[13:23] earthworm, not supported in ie or safari without a plugin [13:24] Oh [13:27] earthworm, most video formats don't support transparency btw [13:28] Yeah, I just want to get rid of it and replace it with white instead of black when I transcode [13:37] hrm [13:37] vlc's progress bar is all screwed up [13:37] like it has no idea how long thie file is [13:39] how do I correctly free a frame that has been initialized with avcodec_alloc_frame() and avpicture_fill() [13:40] currently I say avcodec_free_frame() but it seems to leak a tiny bit of memory [13:41] did you check with valgrind which field leaks? [13:41] i'm on win32 currently and don't have valgrind but I will run a debugger now [13:41] hoped that i was just using the wrong calls [13:43] looking at the source [13:43] avpicture_free and av_free after should be enough [13:43] so no avcodec_free_frame? [13:44] hmm, ok, avcodec_free_frame after avpicture_free [13:44] thanks, I will try now [13:48] nope, segfault [14:19] what's the default value of refcounted_frames? [14:32] nlight: that thing is removed in latest API version [14:33] actually ignore that [14:33] there is no default value [14:33] ok, thanks [14:34] I will check for it always then [14:34] decoders just sets value that explains how they decode files [14:36] nlight: what version of lavcodec you use? [14:36] latest zeranoe build [14:37] let me see [14:38] well only single decoder sets it [14:39] i use only h264 files so far [14:39] but still i want to support whatever so I will take care to check it correctly [14:39] nlight: actually caller set it [14:39] so you set it if you want such funcionality as described in header [14:40] ah, I get it [14:40] ok, thanks a lot [15:12] has anybody had experience with the blackmagic intensity pro? [15:17] ItsMeLenny, i work with decklinks maybe i can help you [15:19] I worked with some other blackmagics [15:19] imho blackmagic sucks [15:19] deltacast make waaaay better products [15:19] of course the price range is a bit different [15:19] is it possible to use anything else in linux other than their terrible program? [15:20] can it be directed through ffmpeg [15:20] ItsMeLenny, you can use the SDK [15:20] oh, whats the sdk [15:20] http://www.blackmagicdesign.com/support/sdks [15:20] ItsMeLenny, last I checked they worked with V4L2 as well [15:20] v4l2 also works [15:20] but their sdk is not bad at all [15:20] I used mines via DirectShow on Windows though, since I had the USB3 versions [15:21] you can setup a basic capture/render/playout in 200-300 lines [15:22] Mavrik, i bought the intensity pro coz the usb one wasnt supported on linux, also, last i heard it didnt work with v4l2 :P [15:22] nlight, i'll look into that sdk [15:22] ItsMeLenny, hmm, the internal ones did work with V4L2 and DirectShow for me [15:22] also used ffmpeg to capture stuff [15:22] but they might have f'ed up something in meantime [15:22] is there any simple one line for ffmpeg to capture? [15:24] /dev/blackmagic0 [15:25] hm, i get memory leaks when decoding h264 only [15:25] no memory leaks when decoding with other codecs [15:25] any ideas? [15:25] see it doesnt show up as a device in webcam lists or anything [15:26] nlight, is that h264 or h264_vdpau [15:27] also, none of those SDKs are for intensity? 
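On the free-a-frame exchange above (and the segfault that followed): which cleanup calls are right depends on who allocated the pixel buffer, since avcodec_free_frame() never frees data[]. A sketch of the two common cases with the 2013-era API:

    #include <libavcodec/avcodec.h>

    /* Case 1: the buffer was allocated by avpicture_alloc(). */
    static void free_picture_frame(AVFrame **frame)
    {
        avpicture_free((AVPicture *)*frame);   /* frees the pixel buffer */
        avcodec_free_frame(frame);             /* frees the AVFrame shell */
    }

    /* Case 2: a caller-owned buffer was wrapped with avpicture_fill(). */
    static void free_wrapped_frame(AVFrame **frame, uint8_t *my_buffer)
    {
        av_free(my_buffer);                    /* or keep it, if it is reused */
        avcodec_free_frame(frame);             /* do NOT also call avpicture_free() */
    }

Mixing the two paths (calling avpicture_free() on a frame whose buffer you also free yourself, or whose buffer did not come from av_malloc()) is a typical cause of exactly this kind of crash.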
[15:31] Mavrik, how did you get it to work with v4l2 [15:33] don't remember anymore, I think I did some magickery with loopback [15:33] ah [15:34] i did this: avconv -f rawvideo -s 720x576 -i /dev/blackmagic0 test.mov [15:34] but the video records one frame and stops [15:35] nlight: you need to unref decoding frames you no longer need [15:35] its explained in header, isn't it? [17:12] hi, i think there's a bug with ffmpeg and matroska output: i can't copy a pcm_bluray stream from an m2ts file to a mkv file, i get that error: No wav codec tag found for codec pcm_bluray [17:26] drwx: you can not copy it [17:26] why not? [17:26] mkv does not support pcm_bluray [17:26] oh. ok [21:40] hi @all [21:40] I'm new to ffmpeg multithreading [21:41] Would like to hear your opinion on how many threads you plan per file encoding for ffmpeg? [21:41] right now I have a dual hexacore, with 24 threads [21:42] and am encoding 6 files simultanously [21:42] but ffmpeg spawned over 660 processes [21:42] 110 per file [21:42] do you think that is excessive behaviour and somehow I could improve speed or quality if I reduced the number of threads per file? [21:49] nobody? [21:50] How do I encode from an .avi into a .mpeg2? [21:52] http://www.itbroadcastanddigitalcinema.com/ffmpeg_howto.html#Encoding_MPEG-2_I-frame_only_in_Highest_Quality [21:53] CentRookie: thanks [21:53] -pix_fmt yuv422p -qscale 1 -qmin 1 -intra -an -- why these couldn't be by default [21:54] I mean, couldn't there be a simple cli program that will just understand $ transcoder -i video.avi -o video.mpeg2? [21:54] by loading the most common defaults [21:55] good question, nowadays the community is more focused on creating mobile device presets it seems [21:55] so legacy formats are a bit underdeveloped [21:56] denysonique, becouse mpeg2 and 422 are not really default [21:56] brontosaurusrex: mpeg2 could be just detected by target extension [21:56] by the target extension* [21:57] " -pix_fmt yuv422p -qscale 1 -qmin 1 -intra -an -- why these couldn't be by default" < i'am replying to this crap [21:58] Look at this [21:58] A user has a TV player which only plays .mpeg2. All he wants to do is to run a command which will convert what_ever_encoded.avi into old_standard.mpeg2 [21:58] denysonique: I'd expect it actually would be [21:59] Why do I need to learn everything about video encoding for such simple task? [21:59] denysonique: if you just want to convert to MPEG2, ffmpeg -i input.avi -o output.mp2 should be fine [21:59] denysonique: the extra options are for increasing the quality of the output [21:59] interesting [21:59] "A user has a TV player which only plays .mpeg2" < that is also not something expected in 2013 [22:00] brontosaurusrex is right, though [22:00] MPEG2 is a rather old format [22:00] I have just picked up mpeg2, because for sure it will play that [22:00] but no, you don't have to use all those extra arguments just to convert; it'll use sane settings by default [22:02] Thank you guys [22:02] actually, the -an argument disables audio [22:02] so you definitely don't want that one [22:02] Could someone post the link to the ffmpeg screenrecording guide? [22:06] There was one mentioning raw recording and then later encoding [22:08] rcombs: also I meant .mpeg2 not .mp2 [22:08] either way [22:09] which is H.262, but nvm [22:10] remind me, which libav* is responsible for dithering 10bit H.264 down to 8bit? 
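"Unref decoding frames you no longer need" looks roughly like this with the 2013-era decode call; a minimal sketch, assuming refcounted_frames was set to 1 on the decoder context before avcodec_open2() and that video_idx is the video stream index:

    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>
    #include <libavutil/frame.h>

    static int decode_all(AVFormatContext *ic, AVCodecContext *dec, int video_idx)
    {
        AVFrame *frame = av_frame_alloc();
        AVPacket pkt;
        int got_frame;

        while (av_read_frame(ic, &pkt) >= 0) {
            if (pkt.stream_index == video_idx &&
                avcodec_decode_video2(dec, frame, &got_frame, &pkt) >= 0 &&
                got_frame) {
                /* ... use frame->data / frame->pts here ... */
                av_frame_unref(frame);   /* hand the decoder's buffers back */
            }
            av_free_packet(&pkt);
        }

        av_frame_free(&frame);
        return 0;
    }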
[22:10] denysonique: ffmpeg -i input -codec:v mpeg2video -q:v 2 output.mpg [22:15] rcombs: libswscale i guess [22:16] hmm [22:19] isn't it internally considered a colorspace conversion? [22:21] for instance, if VLC is using libav* to decode an MP4 file with an H.264 High 10 at 4.1 stream, the color space is something like yuv420p10le, and it's converting to rgb24 for display, yeah [22:21] ? [23:00] ffmpeg -i input.wmv -f image2 -vf fps=fps=1/21 out%04d.jpg fails with "Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height" while ffmpeg -i input.wmv -f image2 -vf fps=fps=1/20 out%04d.jpg doesn't complain at all. I'm using ffmpeg-1.2.1-3.fc19.x86_64 (more details http://paste.fedoraproject.org/29820/77220137 ) [23:12] done http://paste.fedoraproject.org/29821/75477936 [23:45] ciupicri: does it work if you replace "-vf fps=fps=1/21" with "-r 1/21"? [23:46] "Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height" [23:46] http://paste.fedoraproject.org/29823/13754799 [23:48] Hello. Please help me. How to apply +10dB volume to output ? For example : ffmpeg -t 30 -i 01-INtroduction.wmv -ac 2 "volume=10dB" 01-INtroduction.mkv [23:50] Moreover : "Unrecognized option 'af'" . ffmpeg version 0.10.7 [23:51] ciupicri: what if you add "-qscale:v 2" as an output option [23:52] llogan: it seems to work, let me try with larger (smaller actually) values [23:53] sane range is 2-5 [23:53] i guess [23:53] what does qscale mean? [23:55] quantizer scale, but you can basically think of it as a quality scale [23:55] ffmpeg -t 30 -i 01-INtroduction.wmv -ac 2 -af "volume=10dB" 01-INtroduction.mkv http://pastebin.com/aHUbT2fe [23:56] g1ra: i guess it's too old. is there any output of a list of filters with "ffmpeg -filters"? [23:56] and what's the relationship between that and the fps? I can't have a low fps without a low quality? [23:56] original reason for this conversion is to convert mono to stereo (-ac 2) but volume is too low [23:56] is the original volume good? [23:57] ciupicri: you can conrol the fps and quality independently [23:57] yes the original vol is ok [23:57] then why did it failed before setting that quality to 2? [23:58] because by default it uses -b:v 200k, and i suppose for your output the resulting bitrate tolerance is too small. [23:58] llogan, "ffmpeg -filters" http://pastebin.com/kFzHbasP [23:59] ok, thanks for the help and the explanation [23:59] so instead you could use -b:v instead of -q:v but then you'd have to find a bitrate that isn't too low. q:v is easier [00:00] --- Sat Aug 3 2013 From burek021 at gmail.com Sun Aug 4 02:05:02 2013 From: burek021 at gmail.com (burek) Date: Sun, 4 Aug 2013 02:05:02 +0200 (CEST) Subject: [Ffmpeg-devel-irc] ffmpeg-devel.log.20130803 Message-ID: <20130804000502.44E7A18A00CC@apolo.teamnet.rs> [00:55] ffmpeg.git 03Carl Eugen Hoyos 07master:401f6a7126fe: Do not suggest to use gas-preprocessor if using it would break compilation. [00:55] ffmpeg.git 03Tim.Nicholson 07master:ae4c912bcecb: Forward interlaced field information from mov to ffv1 decoder. 
[00:55] ffmpeg.git 03Michael Niedermayer 07master:bdccfc3fc354: Merge remote-tracking branch 'cehoyos/master' [02:41] hm getting lots of valgrind errors with valgrind ffmpeg -i test.mkv -vf edgedetect,gradfun out.mkv [02:54] actually it just crashes [03:14] 01[13FFV101] 15michaelni pushed 2 new commits to 06master: 02http://git.io/bXCgjA [03:14] 13FFV1/06master 147210c5c 15Michael Niedermayer: ffv1: Document run length VLC coding [03:14] 13FFV1/06master 14dc84a70 15Michael Niedermayer: ffv1: update year [03:22] now we have two git commit bots? [03:43] github has one source.ffmpeg.org ha another [03:43] the ffv1 spec is just at github [08:59] ffmpeg.git 03Vittorio Giovara 07master:0d8b943d204b: h264_sei: Return meaningful values [08:59] ffmpeg.git 03Michael Niedermayer 07master:7cd13f618c3c: Merge commit '0d8b943d204bd16fcf2f4a59c742e65a401dd3d0' [09:13] ffmpeg.git 03Gavriloaie Eugen-Andrei 07master:0d6fa3977b01: rtmp: Add seek support [09:13] ffmpeg.git 03Michael Niedermayer 07master:8e970a58614f: Merge commit '0d6fa3977b016f1b72b0b24b8834ff9222498548' [09:22] ffmpeg.git 03Diego Biurrun 07master:b5a138652ff8: Give less generic names to global library option arrays [09:22] ffmpeg.git 03Michael Niedermayer 07master:a8e963835a43: Merge commit 'b5a138652ff8a5b987d3e1191e67fd9f6575527e' [09:26] ffmpeg.git 03Diego Biurrun 07master:79be2c325c5e: doc/print_options: Move options headers to a saner place [09:26] ffmpeg.git 03Michael Niedermayer 07master:fa5410f61a75: Merge commit '79be2c325c5ee8f7ac9e28399e51986ebe99bb3c' [09:38] ffmpeg.git 03Diego Biurrun 07master:3a7050ffed5c: build: Add _Pragma macro to disable deprecated declaration warnings [09:38] ffmpeg.git 03Michael Niedermayer 07master:85fc1a18ca7b: Merge commit '3a7050ffed5ce061b114a11e4de4b77aba8efa0b' [10:35] ffmpeg.git 03Diego Biurrun 07master:7950e519bb09: Disable deprecation warnings for cases where a replacement is available [10:35] ffmpeg.git 03Diego Biurrun 07master:038c4f65ee6c: configure: Check for GCC diagnostic pragma support inside of functions [10:35] ffmpeg.git 03Michael Niedermayer 07master:20be5e0a0e75: Merge commit '7950e519bb094897f957b9a9531cc60ba46cbc91' [10:35] ffmpeg.git 03Michael Niedermayer 07master:1607a9854552: avcodec/mlp: Fix bugs in libavs warning fixes [10:53] ffmpeg.git 03Yusuke Nakamura 07master:a8b19271c3b4: avcodec: Add output_picture_number to AVCodecParserContext [10:53] ffmpeg.git 03Michael Niedermayer 07master:82fdfe8e51c5: Merge commit 'a8b19271c3b40ac3c3dc769fe248887acf14ba5a' [11:03] ffmpeg.git 03Diego Biurrun 07master:6da5b57da11b: configure: Check for GCC diagnostic pragma support inside of functions [11:03] ffmpeg.git 03Michael Niedermayer 07master:62f616ed58ba: Merge remote-tracking branch 'qatar/master' [12:59] ffmpeg.git 03Michael Niedermayer 07master:5ad4e2933748: MAINTAINERS: add myself as maintainer for the interface code to swresample & swscale in libavfilter [12:59] ffmpeg.git 03Andrey Utkin 07master:b7ed18b9bd0f: mpegtsenc: add option tables_version [15:28] ffmpeg.git 03Michael Niedermayer 07master:62738157dd54: pnmdec: always output native pixel format [15:28] ffmpeg.git 03Carl Eugen Hoyos 07master:34d48dac252f: avcodec/pnmdec: support more pnm files [16:24] ffmpeg.git 03Michael Niedermayer 07master:d6fd1242f318: avdevice/timefilter-test: provide more space for the printout to allow larger values [16:24] ffmpeg.git 03Michael Niedermayer 07master:bc4e79856281: avdevice/timefilter: 2nd try at avoiding rounding issues [16:31] this command crashes for me after about 33 frames: ffmpeg -f 
rawvideo -video_size 1280x720 -i /dev/zero -vf edgedetect,gradfun /tmp/out.mkv [16:31] *** Error in `../build_libs/bin/ffmpeg': corrupted double-linked list: 0x091d66f8 *** [16:32] could be a series issue? maybe gradfun's asm is writing past the frame memory allocation [16:33] *serious [16:39] \o/ my custom non-bayer ordered dither works =) http://pippin.gimp.org/dither/ [19:22] wm4, crash fixed [19:23] ffmpeg.git 03Michael Niedermayer 07master:e43a0a232dbf: avfilter: fix plane validity checks [19:24] michaelni: uh I don't get it [19:24] michaelni: shouldn't it use the plane count the pixel format implies? [19:24] because the unused fields could contain random values [19:24] and in this case, they apparently did [19:25] ah thx for that commit [19:25] you just made the bogus check a bit tighter [19:25] but still bogus [19:25] should use av_pix_fmt_count_planes() instead [19:27] anyway, must have been hard to find... [19:47] data wasnt random data[1] was a 256 entry palette, the planes could be cheked differently, this just seemed the simplest [20:34] michaelni: so in case of palette in data[1], we must make sure that linesize is *not* set [20:34] shouldn't an assert be added somewhere? [20:34] sounds like a good idea [20:38] huh, in my code I explicitly set linesize for palette... [20:38] hm or actually I don't [20:39] so must unused planes (plus the palette) have linesize set to 0? [21:39] ffmpeg.git 03Andrey Utkin 07master:5b76c3a12049: doc/muxers: Document previously undocumented mpegts muxer options [21:39] wm4, i think they are always null, do you know of anyting that could set them to non null ? [21:40] michaelni: for example when code doesn't explicitly initialize the fields (not all code necessarily uses the libavutil functions to fill a frame) [21:49] with sizeof(AVFrame) not being part of the public API how can an application even allocate a AVFrame without uisng libavutil functions [21:49] ? [21:49] and if it doesnt use the functions how could it set the fields to defaults [21:49] it can allocate an avframe with libavutil, but set the plane pointers itself [21:50] they are initialized to null, so it sets them to something invalid? [21:50] (and overwrite unused plane pointers with crap) [21:50] sounds like a bug in the application [21:50] well, it's not really clear that ffmpeg would use the unused plane pointers [21:51] we should probably check that they are null on public entry points if we keep the null checks [21:53] and it should be documented [21:54] or all these checks could be replaced if people prefer and someone volunteers to implement it [00:00] --- Sun Aug 4 2013 From burek021 at gmail.com Sun Aug 4 02:05:01 2013 From: burek021 at gmail.com (burek) Date: Sun, 4 Aug 2013 02:05:01 +0200 (CEST) Subject: [Ffmpeg-devel-irc] ffmpeg.log.20130803 Message-ID: <20130804000501.3D3B218A00B6@apolo.teamnet.rs> [00:00] g1ra: so it does have filters, but i forgot how to use such old ffmepg [00:01] maybe use -vf instead. [00:02] my version is old.. ?? But I use a recent build . https://launchpad.net/~jon-severinsson/+archive/ffmpeg [00:02] ciupicri: you could probably go back to using "-vf fps=1/20" instead of you like [00:03] that PPA (or any other PPA, AFAIK) does not provide recent builds [00:03] but I wanted to use less [00:03] I don't want tons of thumbnails, only a couple [00:04] well, I try to complile ... but i'm afraid of. 
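For application code touching the same area as the plane-validity discussion above, the safer pattern is to ask libavutil how many planes a pixel format really has instead of guessing from which data[] entries happen to be non-NULL. A minimal sketch; the PAL8 detail is the one mentioned above, where data[1] carries the 256-entry palette rather than a plane:

    #include <stdio.h>
    #include <libavutil/frame.h>
    #include <libavutil/pixdesc.h>

    static void report_planes(const AVFrame *frame)
    {
        enum AVPixelFormat fmt = (enum AVPixelFormat)frame->format;
        int nb = av_pix_fmt_count_planes(fmt);
        int i;

        printf("%s: %d plane(s)\n", av_get_pix_fmt_name(fmt), nb);
        for (i = 0; i < nb; i++)
            printf("  plane %d: data=%p linesize=%d\n",
                   i, frame->data[i], frame->linesize[i]);

        /* Note: for AV_PIX_FMT_PAL8 the palette lives in data[1] even though
         * av_pix_fmt_count_planes() reports a single plane, so entries past
         * the plane count are not always unused. */
    }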
[00:04] ~20 thumbnails for a 30-60 minutes video [00:04] try the select filter [00:06] g1ra: you can use a linux build https://ffmpeg.org/download.html#LinuxBuilds [00:06] instructions http://askubuntu.com/a/270107 [00:06] or follow compile guide [00:06] http://trac.ffmpeg.org/wiki/UbuntuCompilationGuide [00:07] ciupicri: or maybe a combination of -r and -vframes 20, but i'd use select probably [00:08] I can't seem to find something about the select filter in the man page [00:08] only about different input streams [00:08] https://ffmpeg.org/ffmpeg-filters.html#select_002c-aselect [00:08] Action: llogan leaves for a while [00:08] thank you [00:13] llogan, Thank you for your help. Static build is working. [00:14] guys I might have a bug here - installed libvpx from git and ffmpeg is saying I need libvpx > then... ;) [00:15] just in case reinstalling libvpx from the scratch ;) [00:15] llogan, I cant found in ffmpeg docs . But i heard "-af" options. Did the job with volume filter. [00:21] yup its a bug alright. [00:21] ERROR: libvpx decoder version must be >=0.9.1 [00:21] libvpx-git 20130223-1 [00:30] removing libvpx from the configure removes the issue but thats not the solution - nasty workaround [00:34] hack ./configure a little :P [00:34] nah not tonight [00:34] ;) [00:35] well it's probably one line to fix [00:35] yeah ;D but I am just as much of a coder as a rap is a music... [00:36] hmm... i'll have a look [00:38] :D thanks [00:40] latest ffmpeg git? [00:40] Why when I do this does the size of the file change radically? `ffmpeg -i test/test.mp3 -acodec copy test2.mp3` and ffprobe reports "Format mp3 detected only with low score of 24, misdetection possible!" on the new file? [00:45] klaxa: yes [00:47] AndrzejL: did you follow onw of the ffmpeg compile guides? [00:47] *one [00:48] parshap: does it actually copy? should see "Stream #0:0 -> #0:0 (copy)" in the output [00:49] hmm AndrzejL, can you pastebin your config.log? [00:49] drv: yep, i see that [00:49] hmm, it seems to work ok here on the first mp3 i tried [00:49] drv: maybe it's the file [00:50] drv: the only odd things i see in stderr are these: [00:50] [swscaler @ 0x9c76aa0] deprecated pixel format used, make sure you did set range correctly [00:50] [mp3 @ 0x9c752e0] Frame rate very high for a muxer not efficiently supporting it. Please consider specifying a lower framerate, a different muxer or -vsync 2 [00:50] hmm, is the source not actually a mp3? it shouldn't be invoking swscaler for audio [00:50] drv: ffprobe reports the original file is mp3 [00:51] parshap: pastebin ffprobe output :X [00:52] klaxa: https://gist.github.com/parshap/ede206df3820eb48820a [00:52] klaxa: that's the original file [00:53] ah, i guess it's got album art or something [00:53] yeah probably [00:53] if you want to strip the album art, try: ffmpeg -i test/test.mp3 -map 0:a test2.mp3 [00:53] does that throw off the mp3 container parser or something? [00:54] hmm i think it still needs -c:a copy [00:54] it shouldn't [00:54] klaxa: i actually want to keep it [00:55] llogan: no I used pkgbuild file for my distro from aur repository just as always [00:55] klaxa: the failed one? 
[00:55] AndrzejL: yes [00:55] sure [00:56] hmm i'll see if i have an mp3 with album art, but actually i doubt i have [00:56] parshap: i don't have any mp3s with album art handy to test, but you should be able to copy the video too [00:56] gimme a sec and I will do it [00:56] that is probably why it gets smaller - ffmpeg will reencode by default with a low bitrate [00:57] damn c and x are way to close to each other on the keyboard.. I almost asked You for something else... [00:57] heh [00:58] drv: the file is actually getting ~600K bigger (3M -> 3.6M) [00:58] drv: which was surprising to me too [00:58] parshap: can you try ffmpeg -i test/test.mp3 -c copy test.mp3 ? [00:59] klaxa: ah ha! that seems to work [00:59] klaxa: there was a change [00:59] configure | 4 +++- [00:59] libavcodec/ffv1dec.c | 7 +++++++ [00:59] 2 files changed, 10 insertions(+), 1 deletion(-) [00:59] it may or may not build now [00:59] yeah that's why i just pulled it too [01:00] so I will let You know / give You the link if it fails [01:00] yeah it failed [01:02] klaxa: http://andrzejl.cyryl.net/config.log [01:04] >/tmp/ffconf.5kO1VW9W.c:1:29: fatal error: /usr/include/vpx/vpx_decoder.h: Permission denied [01:04] wut [01:04] you don't have permission to read the file [01:04] i'm also not sure whether /usr/include/ is the right directory if you want to use your own libvpx build [01:04] [root at wishmacer andrzejl]# ls --full /usr/include/vpx/vpx_decoder.h [01:04] -rw------T 1 root root 14108 2013-02-23 16:42:57.000000000 +0000 /usr/include/vpx/vpx_decoder.h [01:05] wow.. that seems weird [01:05] not sure if ffmpeg proceeds to compile as root... [01:05] ok gave it 755 [01:05] but still maybe your own version is in /usr/local/lib/? [01:05] or did you install it in /usr/lib/? [01:06] I used pkgbuild :D [01:06] still fails... [01:06] weird [01:07] pastebin config.log :P [01:07] more permission denied errors [01:08] ah yes, it will try to read more than one header [01:08] change it for the headers in /usr/include/vpx [01:08] 2 files had wrong permissions so far [01:08] like for all of them, i mean having include files read access denied for everyone is kinda weird [01:08] it looks like it might kick it now [01:09] lets see... [01:10] so its not a ffmpeg issue [01:10] sorry my bad ;) [01:11] I am going to wreck heads in the libvpx channel if they have one ;D [01:11] no they dont [01:11] and running chmod 755 /usr/include/vpx/vpx_* fixes it [01:11] lol, so are you sure it's the correct version? [01:12] yes it is [01:12] ;D [01:12] its the latest and greatest from their git [01:12] ;D [01:12] ok then [01:13] for some reason all the permission were set to rw for root only [01:13] AndrzejL: libvpx-git and/or ffmpeg-git from AUR? [01:13] yes [01:13] ffmpeg-git-full-fixed [01:14] ffmpeg-full-git-fixed [01:14] ;) [01:15] ah, that. blame the maintainer of the PKGBUILD. [01:15] always blame other people [01:15] i do it all the time [01:15] :) but the PKGBUILD for ffmpeg does not changes libvpx permissions for headers ;D [01:15] and the PKGBUILD for the libvpx does not do it too ;D [01:16] but what i meant is that pkgbuild will not build regardless of your current issue [01:16] so it has to be something... [01:16] hehe [01:16] I'm having another problem now where it looks like when I am piping the output the mp3 file doesn't get written correctly. The piped version is about 1K smaller and piping it to ffprobe gives "[mp3 @ 0x9c4f0e0] Header missing". 
https://gist.github.com/parshap/d2d1da189d6ebe236f9c [01:16] I prefer to try finding solution and if fail then try finding help in finding solution [01:16] and nobdy needs to enable every. single. thing. [01:16] outputting to a path instead of using pipe:0 works fine [01:16] for some stupid reason I missed the obvious error... ;D [01:17] header missing is weird... [01:17] because the header is the first 4 bytes of the mp3 frame [01:18] AndrzejL: why are you compiling anyway? repo ffmpeg is 2.0 which should be new enough for general users [01:19] if you are in #ffmpeg, are you still a "general user"? [01:20] klaxa: especially weird since it seems to be missing only when piping... any ideas? [01:20] yes, because ffmpeg-full-git-fixed is made by someone who doesn't know what they are doing [01:20] llogan: my phone demands one lib cannot remember which one for the video conversion and the library is not available in the repo version [01:20] let me check.. [01:20] parshap: no idea actually... [01:23] this is what winff uses to convert video for my nokia n73 [01:23] -f mp4 -r 15 -vcodec mpeg4 -vf scale=320:240 -b 320k -aspect 4:3 -acodec libfaac -ab 96k -ar 44100 -ac 2 [01:23] and libfaac is unavailable in the repo version so I need to compile.. [01:24] wat [01:24] :) [01:24] that's uh... okay... [01:24] also, use libfdk-aac is advised [01:24] http://ffmpeg.org/trac/ffmpeg/wiki/AACEncodingGuide [01:24] if you're going to compile then don't use libfaac. use libfdk_aac instead, or you can simply use the native AAC encoder if you give it enough bitrate [01:25] hmmmm I will think about it ;) in the future [01:26] Action: llogan wonders when kamedo2 and klaussfreire are going to finish aac patches [01:26] is there any way to specify which output resolution and pixel format to use when capturing video from directshow devices? [01:34] GoaLitiuM: did you see http://ffmpeg.org/ffmpeg-devices.html#dshow [01:35] ah, devices documentation [01:36] yes, it can be confusing sometimes, but the monolithic doc has/had its downsides [01:50] AndrzejL: try ffmpeg-git in AUR. it has libfdk-aac support, is actively maintained, and doesn't have retarded configure settings: https://aur.archlinux.org/packages/ffmpeg-git/ [02:17] FDK has the best AAC encoder, but if you don't want to compile just use -acodec aac -strict -2 [02:17] and screw with it until the quality sounds good (-limit or somesuch?) [03:16] Anyone know off the top of their head where the "av_frame_set_*" functions are defined? [03:17] oops I meant 'av_frame_get_*' [03:18] they are declared in "avcodec.h" but can't find the definition [04:09] bits per sample at 44.1khz sound usually is ... ? [05:10] Hello! Is there an (easy) way to tell ffmpeg to read the frame's data bottom to top, since my data's Y axis is flipped? [05:12] Jookia: I think the transpose filter will do want you want. man ffmpeg-filters| less +/' transpose' [05:12] relaxed: Oh, sorry, I forgot to note: I'm using the API [05:12] from C [05:14] I bet that filter is written in c [05:19] Hmm, can swscale flip Y? [05:22] I'll look in to filters, thanks for the help [09:03] I want to replace a video with black frame! I googled and found this: http://stackoverflow.com/a/6087453 [09:06] I've converted the video to an audio file to bundle it with a picture and compile it with FFMPEG into a mp4 file! but I get some errors with the command mentioed in the above link. 
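On the bottom-to-top frame question above: besides the vflip/transpose filters, a common API-level trick is to point each plane pointer at its last row and negate the stride before calling sws_scale(), which then reads the image upside down. A minimal sketch; it assumes data[]/linesize[] describe a packed or planar image whose luma height is height, with chroma planes of subsampled formats getting their own reduced height:

    #include <libavutil/frame.h>
    #include <libavutil/pixdesc.h>

    static void flip_vertically(uint8_t *data[4], int linesize[4],
                                enum AVPixelFormat fmt, int height)
    {
        const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(fmt);
        int i;

        for (i = 0; i < 4 && data[i]; i++) {
            int h = height;
            if (i == 1 || i == 2)                        /* chroma planes */
                h = -((-height) >> desc->log2_chroma_h); /* ceil division */
            data[i]    += (h - 1) * linesize[i];         /* point at last row */
            linesize[i] = -linesize[i];                  /* walk upwards */
        }
        /* then e.g.:
         * sws_scale(sws, (const uint8_t * const *)data, linesize,
         *           0, height, dst->data, dst->linesize); */
    }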
[09:06] May you review this command: [09:06] ffmpeg -loop 1 -shortest -f image2 -i image.jpg -i audio.wav -c:v libx264 -tune stillimage -c:a aac -strict experimental -b:a 192k out.mp4 [09:11] guys? [09:21] ffmpeg -loop 1 -shortest -f image2 -i image.jpg -i audio.wav -c:v libx264 -tune stillimage -c:a aac -strict experimental -b:a 192k out.mp4 [09:21] Errorrs: [09:21] Unrecognized option 'c:v' [09:21] Failed to set value 'libx264' for option 'c:v' [09:21] Unrecognized option 'c:a' [09:21] Failed to set value 'aac' for option 'c:a' [09:31] lol? [09:32] ALO? [09:56] How to replace a video with black frame? [10:20] hi [10:21] 'ab' option is related to audio quality? how to say ffmpeg to keep the original audio quality during conversion? [10:26] audio bitrate [10:26] R0SSI: do you do -c:a ? [10:26] also? [10:33] Fjorgynn: this way [10:33] ffmpeg -i 1.avi -b 10 -c:a avi.mp4 [10:33] ? [10:33] no [10:34] Fjorgynn: Error: [10:34] Unrecognized option 'c:a' [10:34] Failed to set value 'i.mp4' for option 'c:a' [10:34] -c:a is the same as -acodec in old ffmpeg [10:35] -acodec libmp3lame -ab 128 [10:35] I want the lowest video quality and the original audio quality [10:36] R0SSI: what video is it? [10:36] you mean my ffmpeg is old? [10:36] My version is this one: [10:36] ffmpeg version 0.8.6-6:0.8.6-1ubuntu2, Copyright (c) 2000-2013 the Libav developers built on Mar 30 2013 22:23:21 with gcc 4.7.2 [10:38] you want something like ffmpeg -i video.avi -c:a copy -c:v libx264 -crf 20 output.mp4 [10:38] something like that [10:38] you want something like ffmpeg -i video.avi -c:a copy -c:v libxvid -crf 20 output.avi (if you want xvid) [10:38] https://trac.ffmpeg.org/wiki/x264EncodingGuide [10:40] a series of avi files -> I want to decrease their vidoe quality to decrease their size -> I need just their audio but they should be in video format -> because I want to upload them on youtube -> to use Youtube speech-recognizer to create subtitle for them [10:41] aha [10:42] I got that error again after running your first command: [10:42] Unrecognized option 'c:a' [10:42] Failed to set value 'copy' for option 'c:a' [11:11] Why I get this error: Unrecognized option 'c:a' [11:13] I think you've already been told that -c:a is the same as -acodec in old ffmpeg [11:13] So either update your ffmpeg, or try -acodec [11:16] ffmpeg version 0.8.6-6:0.8.6-1ubuntu2 -> I'm using latest updated software on my Ubuntu -> repositories are completely updated! [11:36] is this a valid command? ffmpeg -i input.mkv -codec copy output.mp4? [14:27] Hi, should the following be the complete output? SOmethings wrong in my video conversion.. http://pastebin.com/eF5SnmpN [14:28] first of all, your ffmpeg is pretty old [14:28] you are running 0.6, 2.0 was released a few weeks ago [14:29] klaxa: Cheap shared hosting I suppose [14:30] well, just grab a static build [14:30] I don't have a VPS/ssh access sadly [14:31] how do you encode then? [14:31] it's installed by default, and I call it by php [14:31] @exec(..) [14:31] :S [14:32] Yeah I know. [14:32] can you paste the command you execute? [14:32] and maybe contact your hoster :X [14:32] or don't if it's too much hassle [14:32] Nindustries, wait... how much are you paying for that hosting? 
[14:33] because VPS and heck even Kimsufi dedis are like a couple of bucks per month [14:33] klaxa: input http://pastebin.com/Pnz6q9FZ [14:33] as in, literally 3 euros something per month [14:33] JEEB: 1EU / month, 12EU/year for domai [14:33] then if you have no terminal access you're paying way too much [14:34] are you sure? :X encoding at almost 30 fps from 1080i to 720p seems rather fast [14:34] then again, i don't know about most of the flags in that command line he pasted [14:35] Me neither tbh, it's copied from the php code that is used in the Joomla component. [14:36] I'm just trying to find as fast as possible why the conversion isn't working [14:37] well that for sure isn't showing any errors, and the damn thing's so old that you're probably setting like over 9000 settings somewhere [14:37] and upkeeping that is like "no, thank you" [14:39] I wish I never accepted this project, stupid website. [14:39] At first, only problem was that gt-quickstart wasn't installed so I removed that from the code [14:42] behh [14:42] I have logfile, but it doesn't even format it [14:42] http://www.d00med.net/debuyl/media/hwdVideoShare_VideoConversionLog.dat [14:46] Wait, qt-faststart if part of ffmpeg? [15:06] Action: pippin is trying to make ffmpeg use ordered dither to generate .gif animatio - trying with: -sws_flags -error_diffusion but it doesn't work :/ [19:48] Hi i have a wheerd question how i can simulate a bad dvb/mpeg stream with 5-10% packet loss? [20:37] is it possible to add that kind of effect to video like it begins from black screen that fade in to picture and at the end of the video picture fade out to the black screen, slowly ? [20:56] hi [20:57] i want to encode a video with ffmpeg/avconv but i get the error "unrecognized option '-angle'". that's the params i used: http://paste2.org/whBLFngb [20:57] For avconv help, go to #libav. [23:44] hello @all [23:44] h [23:45] hello [23:45] while converting videos I stumbled over a mkv file with embedded subtitles, I neither know if this is some .srt or .ass file [23:45] just that it is embedded [23:45] has anybody of you experience with converting mkv with embedded softsubs to mp4? [23:45] i have problems selecting a pulseaudio input device for streaming audio with ffmpeg [23:46] when i use -i default, also my microphone is recorded [23:46] but i want just my playback to be recorded [23:46] i dont know what the name of the input is [23:47] i tried surround51 and playback but i have no idea what to set there :X [23:48] can anyone here help me plz? :) [23:52] ffmpeg -list_devices true -f openal -i dummy out.ogg [23:52] ffmpeg -f openal -i 'DR-BT101 via PulseAudio' out.ogg [23:52] Capture from the OpenAL device DR-BT101 via PulseAudio [23:53] http://ffmpeg.org/ffmpeg-devices.html [23:58] thx for showing me the pulseaudio man page, i still dont get which device is my 5.1 surround without my microphone as input though :X [23:58] its so weird with all these sound devices [23:58] since i have multiple sound cards in my system [23:59] in fact i want to record just my playback [00:00] --- Sun Aug 4 2013 From burek021 at gmail.com Mon Aug 5 02:05:01 2013 From: burek021 at gmail.com (burek) Date: Mon, 5 Aug 2013 02:05:01 +0200 (CEST) Subject: [Ffmpeg-devel-irc] ffmpeg.log.20130804 Message-ID: <20130805000501.5B9AF18A0075@apolo.teamnet.rs> [00:01] cant you list all`and then simply record by trial and error? 
[00:02] as long as you dont have like 10 cards, you will figure out [00:02] which device has which sream [01:50] Hello, When I am am decoding audio (in my case a pcm_s16le wav file). Should the header be removed after the decode. As when I encode I get a pop noise at the beginning on the audio. Is this the old header? Thank You [01:58] What might cause this? -- Whenever I try to seek during playback, it seeks to the beginning [02:11] no i-frames could cause that [02:11] the lack of i-frames [03:38] I think I may have got it, in an AVFormatContext the buf_ptr is the start of the PCM data? So to know the bytes to skip (to avoid the PCM header) I do pointer arithmetic of AVFormatContext buf_ptr - buffer? [04:20] hi folks - I have a question about using multiple video filter arguments [04:21] I want to run both -vf pp=al and -vf "scale="800:trunc(ow/a/16)*16" [04:21] stackoverflow told me I can put both vfs in quotes and separate each argument with a semi-colon, but the issue is that scale is already in quotes [04:29] Is there anyone active here tonight? [05:56] Seems like not really [05:58] Carraway: try: -vf "pp=al, scale=800:trunc(ow/a/16)*16" [05:58] at least that's the syntax in the online documentation [05:59] That worked! Fantastic. [05:59] Must be too tired to tell the difference between a comma and a semicolon [05:59] :) [09:11] can anyone tell me what was the option for ffmpeg to kind of works like anty alias for low bitrate movies? [14:48] hello I have used ffmpeg -i "s and d.MTS" -r 1 -f image2 stillshots2/%05d.png to break a video down that has a lot of green, but ffmpeg is turning it very green and on the redish side is there an easy fix to devide up the video and maintain the original color balance? [14:50] s/devide/divide [14:51] oops wasnt logged in at the risk of a repeat here is the question again [14:51] hello I have used ffmpeg -i "s and d.MTS" -r 1 -f image2 stillshots2/%05d.png to break a video down that has a lot of green, but ffmpeg is turning it very green and on the redish side is there an easy fix to devide up the video and maintain the original color balance? [15:35] hikenboot: what you use to view images? [15:35] you are probably doing yuv->rgb->yuv conversion [16:08] can I ask rtmpdump questions in here? [16:28] Hello. Does anyone know if it is possible to figure out the size of a .ts file just by looking at the headers inside the file? [16:29] Say I got a partial ts file and I want to know the size it was supposed to be.. [16:32] recover_: I hope this might help you http://mediaarea.net/en/MediaInfo [16:36] I tried MediaInfo, but I couldn't find anything conclusive... I think that maybe taking the bitrate of each stream, times total seconds, divide by 8, plus the size of all headers, should be a good estimate [16:36] but I need it in bytes and mediainfo prints kbps, so I need to make my own program... [17:19] Can I have ffmpeg print the bitrate in bps instead of kbps? [17:59] multiply by 1024? [18:08] how do i do ffmpeg -i "s and d.MTS" -r 1 -f image2 stillshots2/%05d.png changed so that it doesnt do a yuv->rgb->yuv conversion ? [19:05] ok i will ask it in a different way how do I determine the current encoding format? [19:06] for color [19:33] well figured it out with trial and error thanks [20:25] Hi I'm new to ffmpeg, how can I use an H.265 encoder if I downloaded it from the official website (http://xhevc.com/en/downloads/downloadCenter.jsp) ? 
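On the partial-.ts size estimate and the bits-per-second question above: ffprobe already reports duration in seconds and bit_rate in plain bits per second, so a sketch (the container-level values may be missing for a truncated file, in which case the per-stream bit_rate values can be summed instead):
ffprobe -v error -show_entries format=duration,bit_rate -of default=noprint_wrappers=1 partial.ts
Expected size in bytes is then roughly bit_rate x duration / 8 plus container overhead; note that the kb/s figures ffmpeg prints use k = 1000, not 1024.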
[20:26] probably not at all atm [20:27] you will have to use the distributed binary you got and feed it some kind of raw format, most accept yuv4mpeg [20:28] oh especially since it's a windows only binary [20:28] (well and a mac version) [20:30] By distributed binary do you mean ffmpeg.exe or the h265 dll? [20:32] the h265.dll if you got one even... i doubt you can just use it with ffmpeg [20:32] I do have one, so I'll try using it directly thanks! [20:33] you can't execute a .dll though [20:33] actually no idea how to use this encoder :| [20:33] Didn't the BBC release a 265 executable? [20:34] maybe someone else knows something, so maybe stay a little longer [20:34] there was an open source h265 encoder on bitbucket https://bitbucket.org/multicoreware/x265/overview [20:35] people complained it uses cmake though [20:35] what is cmake? [20:39] the readme says "The encoder filter is a Transform Filter, with an input Pin and an output Pin. The input Pin supports two media types: MEDIASUBTYPE_IYUV and MEDIASUBTYPE_YV12. The output Pin supports the media type MEDIASUBTYPE_HM91, which is a user-defined type, indicating bitstreams compatible with HM9.1." although I'm not sure what that means exactly [22:10] if you need a 230V power cable, you can be sure the one you have will have a 90 deg plug on it, and is bent in wrong direction that will not fit [22:18] Does anyone know why simply copying out an MP3 would work fine when writing to a file but fail when piping out to pipe:1? I'm doing `ffmpeg -i test.mp3 -codec copy -f mp3 pipe:1 > piped.mp3` which results in a bad file. See full demonstration: https://gist.github.com/parshap/d2d1da189d6ebe236f9c [23:19] How can I set the default flag of a subtitle track? [23:19] Currently all my subtitle tracks in a MKV are set as default. [23:49] Hi. Any idea if/how ffprobe can tell me whether an mp3 file is CBR or VBR-encoded? [23:50] If it's not exactly 32, 56, 64, 96, 112, 128, etc bitrate, it's VBR. :p [23:51] sacarasc: hm, I'd like to be sure, it's causing problems in a car media player [23:51] `mediainfo` will definitely tell you. [23:51] Thanks, I'll try that, I've seen other mentions of it. [23:53] (I've been rather happy with ffprobe so far, I guess mediainfo might deal with pretty-printing all those internals better) [23:55] mediainfo would look something like this: http://pastebin.com/g2iQRb0N [00:00] --- Mon Aug 5 2013 From burek021 at gmail.com Mon Aug 5 02:05:02 2013 From: burek021 at gmail.com (burek) Date: Mon, 5 Aug 2013 02:05:02 +0200 (CEST) Subject: [Ffmpeg-devel-irc] ffmpeg-devel.log.20130804 Message-ID: <20130805000502.686E918A0099@apolo.teamnet.rs> [01:26] ffmpeg.git 03Michael Niedermayer 07master:f1873909070d: doc/examples/filtering_audio: make const arrays also static [01:26] ffmpeg.git 03Michael Niedermayer 07master:8862ed73403a: avdevice/dshow: make constant arrays static [01:26] ffmpeg.git 03Michael Niedermayer 07master:c0ef5d6c169f: avdevice/vfwcap: make constant arrays static [01:27] ffmpeg.git 03Michael Niedermayer 07master:61af627d56c7: avfilter/graphparser: remove 256 char limit from create_filter() [09:11] wm4: sorry, gonna resurect an old discussion (2 wks ago), about the YUVJ vs YUV+colorrange, my concern was indeed about the format negociation: some filters could be working on YUVJ only (let's say some DVD video specific processing), or YUV only (code assuming full range all the time) [09:11] so to me it doesn't look wrong to have different pixel formats [09:12] (yes i'm quickly backlogging the few weeks i missed...) 
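For the default-subtitle-flag question above: newer ffmpeg builds expose a -disposition output option, so a sketch assuming the first subtitle stream should be the only default (stream indices are placeholders, repeat for further tracks):
ffmpeg -i in.mkv -map 0 -c copy -disposition:s:0 default -disposition:s:1 0 out.mkv
Setting the value to 0 clears the flag on that stream; everything is stream-copied, so the remux is quick.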
[13:34] ffmpeg.git 03Michael Niedermayer 07master:48188a512068: MAINTAINERS: order libavutil entries alphabetically [13:34] ffmpeg.git 03Timothy Gu 07master:3415058541a4: vf_scale: add force_original_aspect_ratio [13:37] ubitux: but that would mean you have to add range variants for each pixel formats in theory [13:37] and someone else already thought this was a bad idea in the past (which is why the jpeg formats are deprecated) [13:41] ffmpeg.git 03Andrey Utkin 07master:27cc3e72f850: doc/muxers: Document use case of mpegts muxer option tables_version [13:43] wm4: kinda, but does it make any sense in practice? [13:43] i mean can you have that much J variants? [13:47] but anyway, it's not like i'm against removing the J variants; it's just that there might be a few things to consider before doing so, including this concern [14:08] ffmpeg.git 03Nedeljko Babic 07master:18d7074b4e5a: libavcodec: Implementation of 32 bit fixed point FFT [18:04] https://people.xiph.org/~xiphmont/demo/daala/demo2.shtml [20:42] where does it say that the buffer passed to avio_alloc_context() must be allocated with av_malloc? [20:59] * @param buffer Memory block for input/output operations via AVIOContext. [20:59] * The buffer must be allocated with av_malloc() and friends. [20:59] right in the comment? [20:59] :) [21:00] somehow I overlooked that [21:02] this isn't new though [21:02] the commit from 2 years ago which added this says "Else a later buffer resize in ffio_set_buf_size() will ABORT." [21:03] never had it resize on me, though... [21:03] not sure under which conditions that happens [21:04] mplayer traditionally makes the buffer 32K for some reason, maybe that's large enough [21:06] wm4 : did you look at coverity ? [21:06] i mean do you have an account to look at the mplayer review ther e? [21:06] Compnn: no [22:50] ffmpeg.git 03Michael Niedermayer 07master:62cf5c114a38: avformat/matroskadec: make sipr_bit_rate static const [22:54] If anyone wants a coverity account for ffmpeg, tell me and ill make sure you get one [22:56] michaelni: what motivated the const ? static const? [22:56] some kind of warnings somewhere? [22:58] for ones within functions gcc (in the past at least) initialized the arrays on the stack each time the function is run when its not static while statics are just initialized once [22:58] outside functions theres global namespace polution [22:58] yes i'm not discussing the usefulness of the commits [22:58] just wondering why now [22:58] if that's pure hazard or just spotted by some random tool [22:59] git grep '^ *const[ a-z0-9A-Z_]*\[.*=' | egrep -v 'static|extern|ff_|av_|avpriv_' <-- The tool(tm) [22:59] ok :) [23:01] ffmpeg.git 03Michael Niedermayer 07master:6bbcae2c16e7: avcodec/fft: Fix "warning: unused variable" [23:06] pengvado, seems the CC to you fro the following mail bounced (http://ffmpeg.org/pipermail/ffmpeg-devel/2013-August/146846.html) [00:00] --- Mon Aug 5 2013 From burek021 at gmail.com Tue Aug 6 02:05:02 2013 From: burek021 at gmail.com (burek) Date: Tue, 6 Aug 2013 02:05:02 +0200 (CEST) Subject: [Ffmpeg-devel-irc] ffmpeg-devel.log.20130805 Message-ID: <20130806000502.A7A3C18A02D5@apolo.teamnet.rs> [03:53] ffmpeg.git 03Michael Niedermayer 07master:1787432b2344: mp3dec: make const tables static const [03:53] ffmpeg.git 03Michael Niedermayer 07master:c4f9a4cd2f6b: rdt: make const tables static const [04:49] michaelni: dunno why that would happen. I got the copy of that mail via the list, and that's delivered to the same address as the CC. 
[05:12] maybe something related to SPF caused it, iam no expert on that ... [08:41] ubitux: Daemon404: any progress on your vp9 code parts? [08:41] (I'll probably work on finishing keyframes minus loopfilter this week) [08:42] nope sorry [08:42] i'll start pretty soon [08:42] just a few urgent real life things to deal with and i'm in [08:43] life is overrated... go on vacation, much more enjoyable :) [08:43] i'm just back from it, and got a huge stack of things i should have been dealing with since months [08:43] and it's starting to get a bit critical [08:43] :p [08:43] but should be done soon :) [08:44] well let me know how the code progresses [08:44] I don't want to duplicate efforts but obviously want a working decoder with all features asap [08:44] so these two goals are sort of conflicting in a way [08:44] i'll make public statements about my progress [08:44] and what i plan to do [08:44] cool [08:44] don't worry about that [08:45] ok I'll keep pushing github whenever there's something worth pushing [09:30] ffmpeg.git 03Anton Khirnov 07master:77cc958f60f7: lavfi: add const to the AVFilter parameter of avfilter_graph_create_filter() [09:30] ffmpeg.git 03Michael Niedermayer 07master:46b3dbf9ca1b: Merge commit '77cc958f60f73963be4281d6e82ef81707e40c26' [09:37] ffmpeg.git 03Anton Khirnov 07master:51fc88e74671: avconv: improve some variable names [09:38] ffmpeg.git 03Michael Niedermayer 07master:783c674da7da: Merge commit '51fc88e7467169031b20b9983d80456b893a9fa3' [09:51] ffmpeg.git 03Luca Barbato 07master:71953ebcf94f: aac: Check init_get_bits return value [09:51] ffmpeg.git 03Michael Niedermayer 07master:48af87819a03: Merge commit '71953ebcf94fe4ef316cdad1f276089205dd1d65' [09:57] ffmpeg.git 03Luca Barbato 07master:a10c4ce24bd4: aac: Forward errors properly in aac_decode_frame_int [09:57] ffmpeg.git 03Michael Niedermayer 07master:d6c36fba0be8: Merge commit 'a10c4ce24bd4a0dd557d5849aa53a0cc74677808' [10:02] ffmpeg.git 03Alexandra Khirnova 07master:7684a36113fa: mxfenc: switch to av_reallocp_array() and check allocation errors [10:03] ffmpeg.git 03Michael Niedermayer 07master:dd98d9d1ff88: Merge remote-tracking branch 'qatar/master' [10:21] av_read_frame does neither return nor call interrupt_cb for mmsh://95.0.159.133/TSM after 8.59 seconds. It just freezes.. How can we prevent this problem? Even the untouched ffmpeg binary freezes at the same point. 
[10:41] ffmpeg.git 03Michael Niedermayer 07master:77e37c34cbaa: avformat/latmenc: use init_get_bits8() [10:41] ffmpeg.git 03Michael Niedermayer 07master:6ed1aa4f85fe: avcodec/ra144dec: use init_get_bits8() [10:42] ffmpeg.git 03Michael Niedermayer 07master:5dff26999841: avcodec/diracdec: use init_get_bits8() [10:42] ffmpeg.git 03Michael Niedermayer 07master:263da1a8f7ff: avcodec/eatgq: use init_get_bits8() [10:42] ffmpeg.git 03Michael Niedermayer 07master:89c3f5a907df: avformat/takdec: use init_get_bits8() [10:42] ffmpeg.git 03Michael Niedermayer 07master:22458ccbcc0c: avcodec/tta: use init_get_bits8() [10:47] zidanne, ive just put a fprintf() in the callback and its regularly called with that url and ffmpeg after 9sec [10:53] ffmpeg.git 03James Almer 07master:1ca3902726fb: fate: Add vorbiscomment cover art test [16:42] michaelni: it hangs on "ff_network_wait_fd_timeout ()" and it's code checks for interrupt_cb& interestingly my printf in interrupt callback does not print anything so it's somehow not getting called [16:47] #0 0x00007fff920dcf96 in poll () #1 0x000000010014b2d2 in ff_network_wait_fd_timeout () [18:17] zidanne, if you can reproduce this with command line ffmpeg then please open a ticket on trac.ffmpeg.org [18:18] ffmpeg.git 03Timothy Gu 07master:3217a706e27c: libxvid: Reduce the size of an array [18:46] michaelni: I've put this into network.c / ff_network_wait_fd_timeout method: av_log(NULL, AV_LOG_DEBUG, "ff_network_wait_fd_timeout: %d, %d, %d\n",write,timeout,int_cb==NULL?0:1); and it prints zero for the int_cb.. Are there more than 1 interrupt callback? I am not clearing the related context/pointer, so how can interrupt callback become null? [19:29] Who is Sandeep Hosangadi? [20:07] zidanne, the callback is stored per context so it can be null for some and non null for others [20:08] it is the same AVFormatContext [20:26] michaelni: my first call to avformat_open_input was not returning zero, so I was falling back to other protocol and calling avformat_open_input again.. In the second call, it seems like I have to re allocate format context because when the first avformat_open_input fails, it also somehow invalidates the initial format context. [21:58] surprising amount of ffmpeg devs at vdd this year [22:03] ffmpeg.git 03Wei-Cheng Pan 07master:f646cd44716b: rtp: Make ff_rtp_codec_id() case insensitive [23:23] how hard would it be to include hardware encode/decode for Raspberry Pi through mechanisms similar to omxplayer? [23:45] should I be seeing libav* folders while I compile? or did I get the wrong repo? [23:47] Datalink: libav* folders are normal and native to FFmpeg since its beginnings [23:47] ok [23:48] found out the hard way that the libav fork was causing my problems so I'm compiling the source on native hardware to update the production install [23:49] the forking group of developers just used the common prefix as a name because it was omnipresent in the ffmpeg source and lib names. [23:50] ah okay, lack of creativity, got it [23:51] Datalink: no, not at all i think. imho they should have changed lib names to sth all together. but obviously it was in their interest. [23:51] to no change them. [23:52] meh, it's 'hard' to do a string replace on an entire document tree... [23:52] Datalink : adding new hwaccel decoders is ok if you keep it seperate from the other decoders :) [23:52] and obviously, have the api/source/docs to be able to access it. 
[23:53] lol, sadly I'm not at the level where I could code the hwaccel decoder, but if I did, I'd probably have it as pi(codec).c in the source [00:00] --- Tue Aug 6 2013 From burek021 at gmail.com Tue Aug 6 02:05:01 2013 From: burek021 at gmail.com (burek) Date: Tue, 6 Aug 2013 02:05:01 +0200 (CEST) Subject: [Ffmpeg-devel-irc] ffmpeg.log.20130805 Message-ID: <20130806000501.A00D318A02C0@apolo.teamnet.rs> [00:02] Any idea why 'file' says I got an MPEG ADTS file on an mp3? [01:48] What is the difference between rgb48le and rgb48be, or is this the wrong place? [01:58] endianness? [02:06] so then which one would be an array of shorts? [02:07] both [09:29] Hello all... [09:29] I've a video at 720x576i [09:29] yuv420p, 720x576 [SAR 16:11 DAR 20:11], 25 fps, 50 tbr, 90k tbn, 50 tbc [09:30] This 720x576i is at 16:9 (non-squared pixels). How can I make it *progressive*, at something like width=640, and squared pixels? [10:25] for the command line: "ffmpeg -i mmsh://95.0.159.133/TSM abc.wav" does your version of ffmpeg freezes too and not even process [q] keyboard press to stop? [11:11] hi, could rtp dynamic handlers handle pcmu? [15:54] viric: Looks like there's an error in the video around that point. Handbrake converts fine and it plays fine in MPlayer and VLC, but FFMpeg just freaks out [15:55] no idea [16:14] viric: Thanks anyway [18:57] how could I list the supported output formats? [19:01] ffmpeg -formats [19:07] it doesn't list wmav but it is supported [19:16] phr3ak: that is a codec not an output format [19:16] ffmpeg -codecs [19:18] happy monday everyone. [19:37] Ok, let's try this again... I am attempting to get detailed help using ffmpeg to screencast, the capturing of video is working, the transmission of it is not, I am using: -f flv "rtmp://user:pass at host/EntryPoint/ conn=streamname live=1" the error I get is Operation Not Permitted on the stream URL, I have looked at the manual, spent days trying to find additional info and am asking because I have run out of ideas despite my extensive search [19:40] For the stream info I was given, I have a URL of rtmp://host/EntryPoint, stream name, user, password for 2 entry points (for redundancy) [19:42] I have also tried appending streenname to the end of the URL, with url encoding the @, this also fails [19:50] viric: sth like ffmpeg -i IN -vf yadif,scale=720:360,setsar=1 OUT [19:51] er 640:360 [19:51] if it was really 16x9 to start with [19:54] mark4o: thanks [20:04] Stupid disconnect button [20:06] Ok, asking again, rtmp, info I have causes an error, I think it's something to do with stream name, setup works in Tricaster andd Liveshell but not for rtmp from ffmpeg [20:06] How would I tell avconv the stream name? [20:09] Anyone could tell me please better way to split mp3 file to chunks? http://dtbaker.net/random-linux-posts/split-mp3-file-with-ffmpeg/ [20:21] mark4o: it's really 16:9, taken by a jvc digital camera [20:21] setsar=1... I'll look for it [20:22] When playing,mplayer says: VO: [xv] 720x576 => 1047x576 Planar YV12 [20:22] ah, sorry, it's not 16:9 it seems. [20:40] Has anyone here streamed with ffmpeg to ustream or livestream? [20:47] phr3ak: you can use segment muxer [20:48] phr3ak: e.g. ffmpeg -i in.mp3 -f segment -segment_time 30 -map 0 -codec copy chunk%03d.mp3 [21:09] Is -crf still used or should I start using -qscale now for my videos? [21:09] I'm on ffmpeg version 1.0.7 by the way [21:10] rypervenche: for H.264? use -crf [21:12] Yes, ok thanks. [21:12] http://trac.ffmpeg.org/wiki/x264EncodingGuide [21:14] ooOOoo. 
Thank you very much. [21:14] Oh this is perfect. I've been needing something like this. [21:48] mark4o: thank you! [21:50] np [22:06] hi guys [22:06] when i try to run this cmd [22:06] ffmpeg.exe -ss 00:12:15 -i file.mp4 snapshot.png [22:06] i get this error [22:06] [image2 @ 025d9900] Could not get frame filename number 2 from pattern 'snapshot.png' (either set updatefirst or use a pattern like %03d within the filename pattern) av_interleaved_write_frame(): Invalid argument [22:06] although i get the snapshot.png [22:07] add "-vframes 1" [22:07] as an output option [22:08] sam-lap: if there is more then 1 video frame inside of the input ffmpeg does not know what to do with the remaining. so it tries to tell you that you maybe did something you were not fully aware of. and of course listen to llogan's advice :) [22:09] no, listen to beastd [22:12] eheh [22:12] thanks [22:22] With libxvid will I want to use -qscale or does it have -crf as well? [22:22] that encoder does not support -crf, so you can use -qscale:v [22:24] ok, great. And as far as finding more information on that, is there a place that I can find out about all of the available options for libxvid? Such as a man page or something? [22:24] range should be linear scale of 1-31, 2-5 is a sane range to try [22:24] ffmpeg -h encoder=libxvid [22:25] i personally have used the native "mpeg4" encoder instead of libxvid the 8 times i've needed that format [22:26] Hmmm, that didn't give much information: http://bpaste.net/show/120255/ [22:27] did timothy gu update the libxvid docs recently? [22:27] see http://ffmpeg.org/ffmpeg-codecs.html#libxvid [22:27] note that online docs correspond with current code, so it may not apply to your build [22:32] llogan: All right. I guess I'll use the net then. I was hoping to find documentation on my system for the codec like with x264. Thanks very much :) [22:32] rypervenche: there is this: https://trac.ffmpeg.org/wiki/How%20to%20encode%20Xvid%20/%20DivX%20video%20with%20ffmpeg [22:35] llogan: Thank you. You have been very helpful. [22:45] okay, I have a rtmp stream set up for TriCaster, which requires a URL, stream name, user and password, how do I make ffmpeg able to write to this same RTMP stream... nothing in the manual or other documentation I've hunted for seems to describe this and I've been trying to get that info here for a couple days now [22:48] http://illogicallabs.com/paste/00000003.txt [23:03] Datalink: you're not using ffmpeg [23:05] okay, so it's that fork thing... okay [23:05] maybe they can help you [23:05] or you can use ffmpeg [23:06] switching to ffmpeg [23:06] the ffmpeg included in your distro (Ubuntu I assume) is also not from FFmpeg [23:06] rasbian [23:06] still applies [23:06] rasbian --> debian --> avconv [23:07] yeah, debian based, ugh, okay, I'll have to build a deb for the Pi in question then [23:08] might be useful: https://ffmpeg.org/trac/ffmpeg/wiki/How%20to%20compile%20FFmpeg%20for%20Raspberry%20Pi%20%28Raspbian%29 [23:09] ah, thanks [23:09] I'm doing this stuff on a test unit, as the Pi itself is supplying video to the local Gov't Channel [23:10] we bought 2 of them, so the second one ends up being for testing, I'll go through this, thanks [23:10] ooooh, avconv is part of libav.. that would explain a lot. [23:11] also http://blog.pkh.me/p/13-the-ffmpeg-libav-situation.html [23:17] okay that explains issues I was having then [23:22] source.ffmpeg.org is the current ffmpeg git, right? 
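Putting llogan's and beastd's advice from the snapshot exchange above into one command, -vframes 1 goes after the input as an output option:
ffmpeg -ss 00:12:15 -i file.mp4 -vframes 1 snapshot.png
With only one frame requested, the image2 muxer no longer complains about the missing %03d pattern in the output filename.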
[23:34] ah the sweet sweet joy of having a spare Pi to compile this stuff on [23:36] there's also the fact that the two Pis are running almost identically, so if I can get it running off this Pi, I can just update the slideshow and background then swap the SD cards to make it work across the whole chain [00:00] --- Tue Aug 6 2013 From burek021 at gmail.com Wed Aug 7 02:05:01 2013 From: burek021 at gmail.com (burek) Date: Wed, 7 Aug 2013 02:05:01 +0200 (CEST) Subject: [Ffmpeg-devel-irc] ffmpeg.log.20130806 Message-ID: <20130807000501.3FA1A18A03E5@apolo.teamnet.rs> [00:16] hi guys, I 'm trying to encode a png sequence but it fails with "pipe:: could not find codec parameters", when doing: cat dir/*.png | ffmpeg -f image2pipe -vcodec png -s 1280x720 -i - out.mp4 [00:16] Someone who knows why I get that error? [00:18] How can I pipe in an audio stream (mp4 format) if I am already using stdin to pipe in video stream? [00:20] roxlu: also show the output of "ls dir/*.png" [00:22] kevint_: maybe you could try a named pipe [00:22] llogan: https://gist.github.com/roxlu/76040ba166a155bcce85 [00:23] kevint_: or specify a file descriptor, e.g. pipe:3 [00:28] with jpegs it works. ... looks like abug [00:28] a bug [00:28] roxlu: you probably don't need cat: ffmpeg -i png/frames_recording/frame_%04.png output.mp4 [00:28] also, worksforme. also, you never included your console output. [00:28] %04d [00:29] yes, thanks. [00:29] llogan: yeah I need cat because I need to merge several dirs [00:29] using frame_%04.png works too [00:29] it's just the combination image2pipe + png which doesn't work [00:29] it's never worked [00:29] google gives me many other people mentioning it [00:29] yeah [00:29] : ( [00:30] roxlu: do you have your console output? [00:30] use `ln` to rename to a number sequence in one location. [00:30] http://mplayerhq.hu/pipermail/ffmpeg-devel-irc/2012-December/001119.html (search for image2pipe which give the same error) [00:30] mark4o: Error while decoding stream #0:0 [00:31] :/ complete console output [00:31] i have no troubles using cat with png [00:31] I admit I haven't tried it in a while. [00:32] mark4o: what is the filesize of your pngs? [00:32] that might be related as I found in some posts [00:32] nobody can really help without your commands and console outputs [00:33] full debug output: https://gist.github.com/roxlu/1327a657f396f4c439e4 [00:33] that's not the complete console output [00:34] i want to confirm that you're actually using ffmpeg [00:34] ...and the version number, and the configuration options [00:34] ah I'm using avconv, but aren't those kinda identical ? [00:35] lol [00:35] btw, windows is cutting some of my selection :# [00:35] we don't serve their kind [00:35] (as in stuff from forks) [00:36] :) I'm nobodies kind .. thought they were quite identical [00:36] use ffmpeg from FFmpeg or go to #libav [00:36] http://stackoverflow.com/a/9477756/1109017 [00:36] hmm don't you just love windows :) [00:37] http://ffmpeg.zeranoe.com/builds/ [00:44] hi all, I have a master file with 16 tracks of audio 1 track of video and I'm attempting to cut a sammple out of it. The sample cuts just fine, the problem is the channel position information in all the audio tracks is getting set to FC isntead of keeping the original information. [00:45] I've attempted a bunch of different -map_metadata incatations and nothing seems to do the right thing. [00:45] copious: give the clients a mandelbrot instead: ffmpeg -f lavfi -i mandelbrot -t 10 output.mp4 [00:46] will do... 
be back shortly [01:49] pebkac error apologies [03:31] okay ffmpeg just finished compiling on the studio's Pi, http://illogicallabs.com/paste/00000003.txt now how much of this do I have to change if I switch from avconv to ffmpeg? [03:34] You can remove -vcodec libx264, as you've already declared it on the line before. Other than that, just change avconv to ffmpeg and I think it should work. [03:36] sacarasc, it's throwing an error [03:36] gimme a sec, updating the text file to show the updated code and error [03:39] http://illogicallabs.com/paste/00000004.txt [03:39] ugh, one of these times I'm afraid I'm gonna end up giving the stream credentials trying to get this working [03:40] Hmm, that is an odd error indeed. [03:40] ah, found the option giving the error, -preset fast is right after the libx264, I'll remove that and try again [03:40] .... [03:41] ugh, it seems to not have x11grab now >.< [03:49] oh ok, it appears that x11grab isn't enabled by default in the compiler [03:50] :( [03:51] it's okay, I'm taking advantage of the recompile to enable the code from additional licenses including nonfree code, this is internal, and not for a distro, so I can safely use nonfree code [04:56] i have a bunch of 16-bit grayscale images to encode as video. is there such a thing as a grayscale video? using a certain api based on ffmpeg, opening a video stream with pixel type gray16le is failing. [05:47] Datalink: How's it going? [05:48] sacarasc, finished dephell to use the h264 library and x11grab, now it's compiling, this could take another 2 hours, I figure, since I'm compiling native on the Pi [05:48] Ah. [05:49] I'll have to fix a known bug with the production scripts in this Pi, copy it's root partition, update it, and swap another pi for it when (if) I get this working [05:50] Wow. [05:50] That's a lot of hassle. [05:51] yeah, which is why I'm annoyed libav has been so sloppy with various updates and also annoyed that the Debian branch includes one of those devs [05:51] aka, having to do this [05:51] after this, I have to figure out how to handle a streamname [05:58] cool, it's in the h264 section [05:58] for libavformat at least... this'll take... another couple of hours [07:41] color me a noob, but where might I find the default location of the ffserver log? [07:58] uh... /var/log? [08:08] I think the one I was looking for was smack-dab in the home directory [08:08] (I'm assuming just where you execute from?) [08:10] I'm attempting to write an ffm file to an ffserver feed, but ffserver doesn't like what I'm doing, as I keep getting this: http://pastebin.com/kQLU1Uxw [08:33] hi all [08:33] the question about capture video + audio from USB webCam by ffmpeg [08:34] VLC can not play the stream , it says "PTS is out of range" [08:34] anyone have idea about "PTS is out of range" [08:34] the command I use to capture "ffmpeg -loglevel debug -f video4linux2 -r 30 -s 640x480 -input_format h264 -i /dev/video1 -f alsa -ar 48000 -ac 2 -i hw:0 -vcodec copy -acodec copy http://localhost:8090/feed1.ffm " [13:36] hello. i am decoding an audio stream from a microphone, but the decoded frames just output junk. When reading raw packets, they contain PCM audio. 
But when i decode it, i just get junk [13:45] it's not a command line, it's coded in C [13:45] ah [13:46] well PCM is already decoded, it's raw audio you don't have to decode it, you have to set samplerate and bit depth to properly handle it though [14:11] welp, compiled overnight, but still not working right: http://illogicallabs.com/paste/00000005.txt I think I'm down to my original issue in that there's a streamname that I don't know how to pass to ffmpeg [14:12] Datalink: you had problems with ustream and ffmpeg? [14:13] GoaLitiuM, rtmp stream, actually, but yeah [14:14] adding flashver=FMLE/3.0\20(compatible;\20FMSc\201.0) right after the rtmp url made ustream streaming working for me [14:18] this isn't ustream, and that didn't change the error :/ [14:19] I have a stream name which isn't in the code there, but is part of the authentication and I'm not sure how to add it [14:23] oh hey, it's an adobe server, that'll help figure this out [14:24] [rtmp @ 0x2d417e0] Server error: [ AccessManager.Reject ] : [ code=403 need auth; authmod=adobe ] [14:30] yeah, this is getting frustrating but at least rtmp is throwing the error more usefully [14:30] GoaLitiuM, no difference between flashver and no flashver [14:30] since this isn't actually ustream, but an RTMP with similar auth credentials [14:41] http://illogicallabs.com/paste/00000005.txt it seems to not even care about the flashver [15:13] hi, is it possible to drop all the even lines. i want to deinterlace, but not by combining them but rather by dropping them (i'm doing resize anyhow, so there'll be less quality loss) [15:14] i don't want to do an extra step in AVIdemux just to resize [15:27] Is there some option in ffmpeg to speed up a video by extracting one frame every 1 sec and using that to compose a new file? [15:29] vad_, in libav there's 'select' filter, i'm unsure if it is also in ffmpeg [15:31] "decimate" also looks interesting [15:34] as does "framestep" [15:44] http://illogicallabs.com/paste/00000005.txt I'm looking at what I think is an adobe based stream server, I've ben supplied with a user, a stream name, the URL and a password, how would I enter the stream name? [16:32] hello, i'm trying to set up a decoder for a microphone, but after decoding, the data is corrupt. The raw packets from the audio device do contain valid PCM data though. [16:32] http://pastebin.com/T17gjUX6 [16:33] would this be a bug, or am i doing something wrong? [17:54] Using Mediainfo i have a piece of Media that says Mpeg AUDIO 2 Layer 3....but ffmpeg is crapping out on the file with Stream #0:1: Audio: none, 8000 Hz, [17:54] and then Encoder (codec none) not found for output stream #0:1 [17:54] any ideas? [17:57] tmkt: and what do media players do with it? :) [17:57] nm found the issue [17:57] http://www.dilella.org/unsupported_codec_ffmpeg/ [17:57] solved it [18:00] the last line [18:00] extra-53 [18:55] hi! [18:56] I've a video "720x576 [SAR 16:11 DAR 20:11]" [18:56] If I scale it to a different resolution, do SAR and DAR keep the same? [18:57] I don't really understand the difference between sar and dar [19:03] So, getting it to squared pixels 16:9 would be nice. [19:13] hello, i'm trying to set up a decoder for a microphone, but after decoding, the data is corrupt. The raw packets from the audio device do contain valid PCM data though. [19:13] http://pastebin.com/T17gjUX6 [19:13] would this be a bug, or am i doing something wrong? 
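To make the sar/dar question above concrete: SAR is the shape of each stored pixel and DAR is what the player has to show, related by DAR = SAR x width/height. For this clip that is 16/11 x 720/576 = 20/11, which matches the roughly 1047x576 display size mplayer reported for it earlier in the log (576 x 20/11 is about 1047). A square-pixel, progressive version keeps that DAR by baking the width into the picture; a sketch (audio options omitted, width rounded to an even value):
ffmpeg -i input.mts -vf "yadif,scale=1048:576,setsar=1" -c:v libx264 -crf 20 output.mp4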
[19:15] ok got it about dar/sar [19:22] Martijnvdc: what kind of PCM [19:24] 16le [19:24] the resulting decoded packet should also contain PCM, but it just contains junk [19:25] it's at 48000 samples per sec [19:28] if it's already LPCM (assuming it's LPCM), why would you need a decoder, or are you speaking of an ffmpeg decoder? [19:30] i'm decoding using an ffmpeg decoder, yes [19:31] well, the data shouldn't be corrupt after decoding it [19:43] mh I'm playing with overlay... how can I make so when the shortest video finishes, it's not overlayed anymore? It looks like it stays overlayed forever [19:59] Anarhist: ffmpeg 1.0.6 has "select", but it always errors out when trying one of the examples :/ [19:59] jfyi [20:10] humm using "-ss", affects "movie=" sources in the vf chain too. [20:10] annoying [20:10] I want only the input skipping some seconds, not the vf movie= source [20:45] damn it :D [20:45] I always fall for the same stuff [20:45] x264 not found [20:45] Damn it [20:45] Why is the aur package for aur created for x264 without the enable shared option DUH [20:45] :D [20:46] hehe it took me 10 mins to figure out... 10 minutes I will never get back :D [21:39] Hi all [21:41] people, how I do to receive video via rtsp in ffmpeg with auth? In VLC I do: "vlc rtsp://test1:p1234 at 192.168.0.30" [22:08] hello, i'm trying to set up a decoder for a microphone, but after decoding, the data is corrupt. The raw packets from the audio device do contain valid PCM data though. [22:08] http://pastebin.com/T17gjUX6 [22:08] would this be a bug, or am i doing something wrong? [22:08] Would you need to decode it? Isn't it already decoded? [22:09] i can't put an AVPacket into an encoder after that [22:09] basically, i just want to encode an audio device with the opus codec [22:10] arecord -f - | ffmpeg -i - test.ogg (for oggvorbis, for example) [22:11] just select the proper options for S16LE and Opus [22:11] yeah, but i'm doing it in C, it's for a VOIP application [22:12] video works perfectly, but i have encountered a very strange bug with audio [22:13] see, the code in that pastebin causes the decoder to have a corrupted output, but i don't know why [22:13] so i'm thinking it might be a bug... [22:14] if you want to produce Opus from LPCM, would you not need an *encoder* rather than a decoder? [22:15] how can i encode an AVPacket? wouldn't i need an AVFrame for that? [22:15] possibly (I don't know, I am just looking at this from a high level) [22:16] the packet does hold the raw PCM data, which i want to encode directly; but ffmpeg doesn't allow that, so i decode the PCM stream into an AVFrame [22:17] but the AVFrame holds corrupt data; which is very strange behavior [22:18] what if you feed some framed data (say oggvorbis) into the decoder, does the AVFrame then contain something useful? (either vorbis again or the decoded LPCM) [22:20] the data which i'm feeding the decoder is valid data. the resulting AVFrame never holds anything useful, no matter what i feed to it [23:16] viric: -shortest, and setpts filter [23:25] mark4o: hm I don't want to cut at shortest [23:25] I used fade, at the end, + qtrle container for the overlayed movie [23:28] viric: oic what you mean; you can also use enable option on overlay filter, e.g. overlay=...:enable=between(t\,100\,200) to enable between 100s and 200s only [23:31] or if you want to fade you don't need to use a container with alpha, you can instead just add the alpha with ffmpeg if you prefer, e.g. 
movie=...,format=yuva420p:fade=...:alpha=1 [00:00] --- Wed Aug 7 2013 From burek021 at gmail.com Wed Aug 7 02:05:02 2013 From: burek021 at gmail.com (burek) Date: Wed, 7 Aug 2013 02:05:02 +0200 (CEST) Subject: [Ffmpeg-devel-irc] ffmpeg-devel.log.20130806 Message-ID: <20130807000502.4C60018A03E6@apolo.teamnet.rs> [00:26] ubitux: make testdata after cloning or downloading and configuring libvpx [00:26] ubitux: then look for vp90-02-*.webm [00:43] good night... [01:08] ffmpeg.git 03Carl Eugen Hoyos 07master:4c15f3491f19: Set bits_per_raw_sample when decoding pnm. [02:54] Daemon404: ubitux: fyi keyframe decoding works now (for one file tested, loopfilter not yet done) [02:55] oic [02:55] ive been practicing [02:55] Daemon404: ubitux: is anyone working on loopfilter or bw adaptivity yet? if not I'll pick either of those two (in that order) [02:55] implemented an mpeg1 decoder from scratch [02:55] (what a huge waste of time) [02:55] that's a waste of time [02:55] yeah [02:55] its te first dct-related thing i ever coded [02:56] so it wasnt *entirely* useless to me [02:56] "learning experience" [02:56] walk before you run etc [02:56] ffmpeg.git 03Marton Balint 07master:a7bb12a30756: mpegts: add fix_teletext_pts mpegts demuxer option [02:57] i s'pose [02:57] anyway [02:57] so nobody doing lf/bwadapt? [02:57] then I'll do lf [02:57] lf i loopfilter.. what is bw? [02:58] is* [02:58] bandwidth adaptive loopfilter [02:58] backward [02:58] damn [02:59] updating probabilities based on frequency of symbol/branch in previous frame(s) [02:59] it's a little like cabac but different [02:59] Action: Daemon404 still doesnt understand arithmetic coding so well [02:59] we'll get there later [02:59] brb [02:59] :) [02:59] ;p [03:02] I can explain arithcoding at some point but for now suffice to say it's a bitstream reader that is more efficient in terms of how many bits you spend per symbol [03:03] ive just been thinking of it as a black box where symbls go in and bits come out [03:03] defined in a spec somewhere [03:03] <_< [03:04] that's somewhat generic, no, really, it's not that weird [03:04] we have multple range coder implementations in ffmpeg [03:04] and this is just one particular kind [03:04] but yes black box works for now [03:12] <@Daemon404> its te first dct-related thing i ever coded <-- I thought that was JPEG [03:13] technically [03:13] Daemon404: D_S once explained to me that it was like encoding a huge fraction -- you're writing the digits of the denominator [03:13] (arithmetic coding that is) [03:13] something like that [03:13] and the decoding would be equivalent to executing the division (with 1 as the numerator) and reading the digits of the result [03:53] Daemon404: richardson explains it [03:54] i wasnt in a good enough state of mind while reading it on a plane to comprehend it [03:54] also i like how the aricoding section is as long as all the other sectins combined [03:54] I'm about to recompile ffmpeg for screencasting to an RTMP server from a Raspberry Pi, does anyone know a good list of config options for the compiler? [03:55] I'm mainly wondering if things like --enable-hardcoded-tables would speed up runtime and reduce load on the limited processor [03:55] that only affects init time [03:56] ok, would I need to enable gpl, gpl3 or nonfree for my needs? 
this will likely be streaming constantly unless power goes out to the Pi [03:56] hardcoded tables is for embedded systems with limited RAM, in order to put the tables in ROM instead [03:56] license options are for enabling particular non-LGPL external libraries; it will refuse to configure if you haven't set the right ones [03:57] there's also some gpl mmx code [03:57] but i dont recall it being useful at all [03:57] also >arm [03:57] it's not relevent on ARM, so yeah [03:57] okay so I'll be able to just enable x11grab and be okay? [03:57] x11grab is gpl atually [03:57] :D [03:58] is it? heh, okay... so I do need to enable nonfree [03:58] er, gpl [03:58] no, you need to enable... gpl [03:58] yes [03:58] that was a typo :P [03:58] alright, now to wait the 2 hours+ for this to compile... [03:58] what's the status of libx264? [03:59] also gpl [03:59] yeah, configure just told me that x.X okay, I'll get this done [04:00] if you're using x264, I might warn you the raspberry pi is an armv6 cpu, while most of the encoding optimizations are for armv7 (neon) [04:00] it may be, um. quite slow [04:00] Skyler_, noted [04:01] it's kinda the 'it's cheap' tax, everything's optimized for the generation of CPU after the Pi's SoC core [04:01] partly because armv6 doesn't have much in the realm of SIMD, so there's not really much to optimize for [04:02] true, I wonder how hard it'd be to optimize for the VideoCore GPU [04:03] videocore is more of a DSP than a GPU [04:03] but it's very proprietary [04:05] yeah, true, but after much user pressure, Broadcom opened up it's Linux side code to a point, there's also a player that uses it already, omxplayer [04:06] they opened APIs for their decoding libraries, but they didn't open up the development tools, processor ISA, and so on [04:06] i.e. you can use their decoder; you can't go write your own [04:06] so you wouldn't be able to go port to run on their DSP [04:06] true :/ [04:10] and now to install libx264 [04:13] hm, should I be worried if ./configure takes a long time? [04:14] never mind, it couldn't find X11... [04:14] if you're compiling /on/ the pi, well, it probably will [04:14] true [07:16] hm actually I have to retract that bw adapt and loopfilter stuff [07:16] I need 32x32 intra pred and idct first [07:16] so I'll go do that now [07:22] BBB: well... [07:22] i was looking for the 32x32 intra pred [07:23] i don't mind doing something else, but well... :) [07:23] 32x32 idct* [07:23] I'll do intra pred if you do idct [07:23] that's fine with me [07:23] ok [07:24] ok idct, just give me the time to dive into the code properly [07:24] send patches even if it doesn't work yet :) [07:24] the code I originally uploaded wasn't QA'ed either [07:24] I just put it up there and it had embrassing bugs [07:24] that's an ok model for development for something that moves quickly like this [07:25] i don't move quickly yet myself, need some little time to get familiar with the code first [07:25] and ok for the make testdata, handful, thx [07:27] np [07:27] I can supply some test files if you want to test the 32x32 idct [07:28] the vp90-02-size-*.webm files don't trigger it [07:28] i suppose i just have to generate very low bitrate video with the simple_encoder of libvpx? 
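One way to produce such a low-bitrate clip with libvpx's command-line encoder instead of simple_encoder; the vpxenc flags below are assumptions taken from libvpx's own CLI rather than from this log, and VP9 support has to be compiled in:
vpxenc --codec=vp9 --target-bitrate=100 --limit=50 -o akiyo-100kbps.webm akiyo_cif.y4m
That roughly matches the 50-frame, 100 kbps akiyo sample described next.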
[07:28] for example [07:28] I can send you a 25kb file [07:28] it's 50 frames of akiyo @ 100kbps [07:28] with pleasure then [07:29] sent [07:30] thx [07:30] first 64x64 decodes ok, then it crashes because of missing 32x32 functions being called [07:30] (intra pred) [07:30] ok [07:30] after that, it'll crash b/c the idct 32x32 is missing [07:30] then it should decode correctly? [07:30] (first frame, at least) [07:39] BBB: so you're on init_intra_pred(TX_32X32, 32x32), right? [07:39] yes [07:40] and you're on init_idct(TX_32X32, idct_idct_32x32); [07:40] ok, then i'm on init_idct(TX_32X32, idct_idct_32x32) [07:40] :) [07:40] let me push the boilerplate for my stuff so it doesn't crash for you [07:40] if you play that file [07:40] ignore the missing intra prediction for now, if you compare against libvpx, just focus on the residual only [07:41] ok [07:41] so if the output of intra pred is X and residual is Y, just make sure that your final reconstruction outputs Y [07:41] I'll make it output X+Y [07:41] (or the other way around) [07:41] (i.e. you make it output X+Y and I make it output just X) [07:41] pushing a boilerplate function is fine btw, then at least it doesn't crash for me [07:41] :) [07:43] (also a good practice for making sure we can push to each other's repo and don't kill each other's changes) [07:43] i'm mostly reading now anyway [07:44] itxfm = inverse transform ... function modes? [07:44] inverse transform [07:44] I don't call it idct because we use more than just dct here [07:45] i.e. adst [07:45] so I can call it idct_1d or iadst_1d (inverse dct, inverse adst) [07:45] dct is discrete cosine transform, adst is assyetric discrete sine transform [07:45] oh txfm is "transform" ok [07:45] assymetric * [07:45] yeah [07:45] funny letter compression [07:45] maybe itx is better? [07:45] no no it's ok :) [07:47] so for each of the 2 dimensions of each block, you can choose between the 2 tx? [07:47] hence the 4 combinations? [07:48] for 4x4-16x16 yes [07:48] for 32x32 no [07:48] 32x32 is dct only [07:48] that's why the 32x32 initializer is different [07:48] and that's the reason it was left and the macro is not the same as the other [07:48] also, lossless is one type only (it's not strictly a dct or adst, it's a walsh-hadamard based one) [07:48] hence the name iwht [07:49] I didn't implement idct32x32 because I was lazy [07:49] not strictly because it was different [07:49] it's the longest, as you might have noticed [07:49] and as you translate from libvpx to the real world, you'll notice I order instructions differently [07:49] it typically makes it a lot shorter [07:49] I hope that can be done for the 32x32 one also [07:50] :) [07:51] ok now 32x32 intra pred diagnoal functions... they are a pain [07:51] (because they're long when written out) [08:17] 07:48:11 <@BBB> 32x32 is dct only // why libvpx has a adst 32 then? [08:17] also, why a special case for 32x32? 
[08:17] oh wait [08:18] forget it [08:20] good thing I was off for a while :) [08:20] yeah, 20 sec was w [08:20] was enough [09:19] ffmpeg.git 03Anton Khirnov 07master:612a5049d9b4: avserver: do not use a static string as a default for a string option [09:19] ffmpeg.git 03Michael Niedermayer 07master:4131f21f7761: Merge commit '612a5049d9b4ac1c2a293daf75fe814b7a94fdc7' [09:28] ffmpeg.git 03Anton Khirnov 07master:06cd4c5a68e2: avconv: fix usage of deprecated lavfi API [09:28] ffmpeg.git 03Michael Niedermayer 07master:a59a64cbc853: Merge commit '06cd4c5a68e23f5be199c0d2d563da80989f839f' [09:40] ffmpeg.git 03Anton Khirnov 07master:3799376dd337: lavfi/fifo: fix flushing when using request_samples [09:40] ffmpeg.git 03Michael Niedermayer 07master:bb5ef961647f: Merge commit '3799376dd3373ee255651ed542c75b15665801a8' [10:13] ffmpeg.git 03Anton Khirnov 07master:2e661f26f8b1: avconv: insert extra filters in the same way for both graph inputs and outputs [10:13] ffmpeg.git 03Michael Niedermayer 07master:84bc317019ba: Merge commit '2e661f26f8b12195f75ae3b07d9591e395135bc7' [10:43] ffmpeg.git 03Anton Khirnov 07master:56ee3f9de7b9: avconv: distinguish between -ss 0 and -ss not being used [10:43] ffmpeg.git 03Michael Niedermayer 07master:3fa72de82f04: Merge commit '56ee3f9de7b9f6090d599a27d33a392890a2f7b8' [11:51] ffmpeg.git 03Anton Khirnov 07master:811bd0784679: avconv: make input -ss accurate when transcoding [11:51] ffmpeg.git 03Michael Niedermayer 07master:7cbef2ed7ed4: Merge commit '811bd0784679dfcb4ed02043a37c92f9df10500e' [12:03] somebody wants to test my super2xsai patch on multiple cores cpu? [12:48] ffmpeg.git 03Anton Khirnov 07master:488a0fa68973: avconv: support -t as an input option. [12:48] ffmpeg.git 03Michael Niedermayer 07master:6d77279ed81c: ffmpeg_opt: Remove support for specifying -t anywhere to set the duration [12:49] ffmpeg.git 03Michael Niedermayer 07master:b7fc2693c70f: Merge commit '488a0fa68973d48e264d54f1722f7afb18afbea7' [13:44] ffmpeg.git 03R?mi Denis-Courmont 07master:578ea75a9e4a: vdpau: remove old-style decoders [13:44] ffmpeg.git 03Michael Niedermayer 07master:bf36dc50ea44: Merge commit '578ea75a9e4ac56e0bbbbe668700be756aa699f8' [13:53] ffmpeg.git 03R?mi Denis-Courmont 07master:a0ad5d011318: vdpau: deprecate old codec-specific pixel formats [13:53] ffmpeg.git 03Michael Niedermayer 07master:4ee0984341d8: Merge commit 'a0ad5d011318f951ecd4c9ffe1829518c9533909' [14:25] ffmpeg.git 03R?mi Denis-Courmont 07master:549294fbbe1c: vdpau: deprecate VDPAU codec capability [14:25] ffmpeg.git 03Michael Niedermayer 07master:3b805dcaa97e: Merge commit '549294fbbe1c00fee37dc4d3f291b98945e11094' [14:25] ffmpeg.git 03Michael Niedermayer 07master:50fb8c1114b9: avcodec/vdpau.h: define FF_API_CAP_VDPAU if its not defined [14:30] ffmpeg.git 03R?mi Denis-Courmont 07master:2852740e23f9: vdpau: store picture data in picture's rather than codec's context [14:30] ffmpeg.git 03Michael Niedermayer 07master:c3b290232078: Merge commit '2852740e23f91d6775714d7cc29b9a73e1111ce0' [14:49] http://illogicallabs.com/paste/00000005.txt [14:50] ffmpeg.git 03R?mi Denis-Courmont 07master:f824535a4a79: vdpau: deprecate bitstream buffers within the hardware context [14:50] ffmpeg.git 03Michael Niedermayer 07master:9547e3eef369: Merge commit 'f824535a4a79c260b59d3178b8d958217caffd78' [14:50] ffmpeg.git 03Michael Niedermayer 07master:66056f74a1e9: avcodec/vdpau.h: define FF_API_BUFS_VDPAU if its not defined [14:50] ffmpeg.git 03Michael Niedermayer 07master:318d7a963871: avcodec/vdpau: include attributes.h, needed 
for attribute_deprecated [14:56] ffmpeg.git 03Diego Biurrun 07master:bea3d6f4363f: ismindex: Replace mkdir ifdeffery by os_support.h #include [14:56] ffmpeg.git 03Michael Niedermayer 07master:1dfb34db6dd0: Merge commit 'bea3d6f4363ff1bbbd99c1717f7498b9fdb12cfc' [15:00] ffmpeg.git 03Diego Biurrun 07master:22a154e4363b: build: Add missing img2.o dependency to apetag.o [15:01] ffmpeg.git 03Michael Niedermayer 07master:7ed002d79136: Merge commit '22a154e4363b351dd9f321003de01dffebd2fa18' [15:10] ffmpeg.git 03Ben Avison 07master:daf1e0d3de03: avio: Add an internal function for reading without copying [15:10] ffmpeg.git 03Michael Niedermayer 07master:8878aef04882: Merge commit 'daf1e0d3de03bd424016e2a7520e4e94ece5c0ac' [15:16] ffmpeg.git 03Ben Avison 07master:cabb16816975: mpegts: Remove one memcpy per packet [15:16] ffmpeg.git 03Michael Niedermayer 07master:4ed0b28a45ad: Merge commit 'cabb1681697555e2c319c37c1f30f149207e9434' [15:22] ffmpeg.git 03Ben Avison 07master:c84ea750cf76: mpegts: Make discard_pid() faster for single-program streams [15:22] ffmpeg.git 03Michael Niedermayer 07master:5dd8ca7d1b54: Merge commit 'c84ea750cf765c9d8845fca5546eb0ae25b9c855' [15:31] ffmpeg.git 03Luca Barbato 07master:bc54c2ae3ca6: libx264: add shortcut for the bluray compatibility option [15:31] ffmpeg.git 03Michael Niedermayer 07master:560e9365b6eb: Merge commit 'bc54c2ae3ca6abd225dc331eafc12108513158de' [15:39] ffmpeg.git 03Luca Barbato 07master:605387582bd3: lavf: Support unix sockets [15:39] ffmpeg.git 03Michael Niedermayer 07master:8d06ce79411f: Merge commit '605387582bd35920b83a26dabbe1c0601f425621' [15:44] ffmpeg.git 03Luca Barbato 07master:bb9378251a16: network: Use SOCK_CLOEXEC when available [15:44] ffmpeg.git 03Michael Niedermayer 07master:253976720653: Merge commit 'bb9378251a167ef0116f263912e57f715c1e02ac' [15:53] ffmpeg.git 03Luca Barbato 07master:9991298f2c4d: bink: Bound check the quantization matrix. [15:53] ffmpeg.git 03Michael Niedermayer 07master:06fd4e45d9b6: Merge commit '9991298f2c4d9022ad56057f15d037e18d454157' [16:02] ffmpeg.git 03Luca Barbato 07master:090cd0631140: vc1: check the source buffer in vc1_mc functions [16:02] ffmpeg.git 03Michael Niedermayer 07master:f606c6e92c63: Merge commit '090cd0631140ac1a3a795d2adfac5dbf5e381aa2' [16:14] ffmpeg.git 03Luca Barbato 07master:43bacd5b7d3d: vc1: check mb_height validity. 
[16:14] ffmpeg.git 03Michael Niedermayer 07master:91062ddef18e: Merge commit '43bacd5b7d3d265a77cd29d8abb131057796aecc' [16:23] ffmpeg.git 03Ben Avison 07master:a22ae9f0c579: mpegts: Remove one 64-bit integer modulus operation per packet [16:23] ffmpeg.git 03Michael Niedermayer 07master:0df55e1ba886: Merge commit 'a22ae9f0c579793f411e2bd7a8db557091a3a4ae' [16:29] ffmpeg.git 03Kostya Shishkov 07master:bc909626b0a3: twinvq: move all bitstream reading into single place [16:29] ffmpeg.git 03Michael Niedermayer 07master:9648c8e57b15: Merge commit 'bc909626b0a3c107625f2cb4c85479d18de422a8' [16:34] ffmpeg.git 03Diego Biurrun 07master:4d8d16b596c6: twinvq: Prefix enums and defines shared with VoxWare MetaSound [16:34] ffmpeg.git 03Michael Niedermayer 07master:a97f7499909a: Merge commit '4d8d16b596c63de85e52488734338fbb41238058' [16:41] ffmpeg.git 03Kostya Shishkov 07master:86f4c59bd676: twinvq: Split VQF-specific part from common TwinVQ decoder core [16:41] ffmpeg.git 03Michael Niedermayer 07master:7d03e60c124b: Merge commit '86f4c59bd676672040b89d8fea4c9e3b59bfe7ab' [16:47] ffmpeg.git 03Diego Biurrun 07master:971cce7ebb48: riff.h: Remove stray extern declaration for non-existing symbol [16:48] ffmpeg.git 03Michael Niedermayer 07master:f5b2718c0ae5: Merge commit '971cce7ebb48a58e72e4dc57b3008e2682bcf4e7' [16:53] ffmpeg.git 03Diego Biurrun 07master:0ba4ea312b2a: avcodec/options: Drop deprecation warning suppression macros [16:53] ffmpeg.git 03Michael Niedermayer 07master:83db013a063b: Merge remote-tracking branch 'qatar/master' [17:25] ffmpeg.git 03Martin Storsj? 07master:2a0ec47bd70e: unix: Convert from AVERROR to errno range before comparing error codes [17:25] ffmpeg.git 03Michael Niedermayer 07master:287f7d0ae126: Merge commit '2a0ec47bd70ebb79e8b2d2f956feeb3a813df798' [17:46] BBB: i believe i'll be done with idct32 in the next 24 hours [17:46] if you can wait "that much" [17:51] Thilo ? [17:57] ffmpeg.git 03Martin Storsj? 07master:abe5268c3328: tcp: Use a different log message and level if there's more addresses to try [17:58] ffmpeg.git 03Michael Niedermayer 07master:89efaabc9949: Merge commit 'abe5268c3328bf0e8fcfb7dc6e231b8920177c3a' [18:06] ffmpeg.git 03Diego Biurrun 07master:fcc455ff2e11: avformat/dv: K&R formatting cosmetics [18:06] ffmpeg.git 03Michael Niedermayer 07master:7565aaecb497: Merge commit 'fcc455ff2e11ed04603aead1984a92ac3a4be226' [18:13] Action: funman looks for dilaroga [18:27] ffmpeg.git 03Diego Biurrun 07master:3dd5c95deef5: riff: Move muxing code to a separate file [18:27] ffmpeg.git 03Michael Niedermayer 07master:508a5349da98: Merge commit '3dd5c95deef51d7fbf6f4458ba42d1335d2f1472' [18:36] ffmpeg.git 03Diego Biurrun 07master:255d9c570e11: riff: Move demuxing code to a separate file. 
[18:36] ffmpeg.git 03Michael Niedermayer 07master:0a8f5eb23a2d: Merge commit '255d9c570e117f0fcb8e51fa2c5996f3c4b2052b' [19:20] ffmpeg.git 03Diego Biurrun 07master:406e6c0ba539: configure: Properly split avserver component and system dependencies [19:20] ffmpeg.git 03Michael Niedermayer 07master:1faece7eccdb: Merge commit '406e6c0ba5393fa302080202fe77bd09187889a1' [19:26] ffmpeg.git 03Diego Biurrun 07master:a7d45e06e975: configure: The W64 demuxer should select the WAV demuxer, not depend on it [19:26] ffmpeg.git 03Michael Niedermayer 07master:05f1b4e2ecc4: Merge commit 'a7d45e06e9757f49ea4e105cbefc3462a7324e9a' [19:33] ffmpeg.git 03Diego Biurrun 07master:61c31e4ee7ea: configure: Properly set zlib dependencies for all components [19:33] ffmpeg.git 03Michael Niedermayer 07master:c32db6adab99: Merge commit '61c31e4ee7ea79a9e74c0476b81244febf17e6d7' [19:40] ffmpeg.git 03Diego Biurrun 07master:6fb65973c950: configure: Properly split dv1394 indev dependencies [19:40] ffmpeg.git 03Michael Niedermayer 07master:66328da700c7: Merge commit '6fb65973c9501d3fe94a5a9195c01cd20083066e' [19:45] ffmpeg.git 03Christian Schmidt 07master:1c6d2bb9a927: pcm_bluray: Return AVERROR_INVALIDDATA instead of -1 on header errors [19:45] ffmpeg.git 03Michael Niedermayer 07master:aa24729c2135: Merge remote-tracking branch 'qatar/master' [20:02] ffmpeg.git 03Michael Niedermayer 07master:5b13778f9323: mpegts: remove usage of MOD_UNLIKELY() [20:37] ffmpeg.git 03Michael Niedermayer 07master:ef71717901b0: avformat/smoothstreamingenc: Make const tables static const [20:37] ffmpeg.git 03Michael Niedermayer 07master:9e10b2cfc9ec: avformat/spdifenc make const tables static const [20:37] ffmpeg.git 03Michael Niedermayer 07master:a68b6ec7f55e: avutil/hmac: make const tables static const [20:47] 07:49:32 <@BBB> and as you translate from libvpx to the real world, you'll notice I order instructions differently [20:47] 07:49:40 <@BBB> it typically makes it a lot shorter [20:47] 07:49:59 <@BBB> I hope that can be done for the 32x32 one also [20:47] it seems so ^ [20:48] so far i'm able to follow the changes you did [21:37] ubitux, you dont want to maintain libavutil/timecode.c ? [21:37] michaelni: yeah sorry i didn't reply [21:37] i don't know, i'm not much into broadcasting anymore [21:38] i guess it doesn't require much maintainance anyway, so it should be ok [21:39] ok and yes shouldnt need much/any work [21:39] and as one of the authors it should be even easier for you [21:39] BBB, you dont want to maintain libavutil/atomic ? [21:41] wbs, you dont want to maintain libavutil/hmac.c ? you are the author AFAIK ... [21:45] ubitux: welcome back. [21:45] hello llogan :) [21:45] i'm back since a few days actually [21:45] how was the break? [21:46] mind free [21:47] my summer plans all got canceled due to one thing or another... [21:47] and now it's already august [21:48] yup, worst time of the year [21:48] welcome to hell [22:07] BBB: WIP available at https://github.com/ubitux/FFmpeg/commits/vp9 [22:07] still not usable (doesn't build anyway) [22:08] possibly working tomorrow [22:08] i will start working on vp9 tomorrow -- if you give me a task [22:08] so i dont conflict with someone else [23:35] michaelni: any reason you kept decode_audio3 but not AVCODEC_MAX_AUDIO_FRAME_SIZE when merging ? [23:37] aballier, which commit ? 
[23:38] michaelni: e052f06531c400a845092a7e425ef97834260b3b [23:38] (this is the merge commit) [23:41] aballier, i cant awnser this question, it might have been a mistake or maybe i had something in mind, i dont rememer [23:42] can it be readded ? iirc decode_audio3 was meant to be used with buffers of that size [23:42] (or decode_audio3 killed, i dont mind, its just it doesnt seem useful that way) [23:52] ubitux: ok got it [23:52] ubitux: I'm far from done anyway, so don't worry about it taking another few days [23:52] funky weird bugs all over the place [23:52] first frame of akiyo starting to look good though [23:53] michaelni: let me think about that, on vacation right now, not quite the right time [23:57] oh dear, one merge commit per merged commit [23:59] yes, but michaelni likes having one extra commit per each fork one [00:00] --- Wed Aug 7 2013 From burek021 at gmail.com Thu Aug 8 02:05:01 2013 From: burek021 at gmail.com (burek) Date: Thu, 8 Aug 2013 02:05:01 +0200 (CEST) Subject: [Ffmpeg-devel-irc] ffmpeg.log.20130807 Message-ID: <20130808000502.016A718A0361@apolo.teamnet.rs> [00:45] Does anyone know if it is possible/impossible to do this operation using ffmpeg's existing filters? http://pastebin.com/51mcTShT [00:47] meekohi: can you explain in english what you want to achieve? [00:47] llogan: take the greyscale value at each pixel, and use it to adjust the saturation of the RGB channels at that same pixel. [00:48] Basically I can't figure out how to grab the greyscale value and then use it when doing some math on the original channels [00:48] I was almost thinking I might have to output a greyscale version of the video then combine them somehow? [00:52] i don't have an example for you, but did you refer to the current list of video filters? http://ffmpeg.org/ffmpeg-filters.html#toc-Video-Filters [00:52] llogan: I did indeed, have been trying to find a combination that makes it work. [00:53] meekohi: did you try lut, or maybe even haldclut? [00:54] mark4o: out can't access the other channels while working on each channel unfortunately. Will look into Haldclut [00:55] Haldclut looks like in only supports fixed color lookup tables -- unfortunately saturation is a function of the greyscale value (so it isn't a simple lookup table) [00:56] haldclut can take any combination of r, g, and b into consideration so that shouldn't be an issue [00:57] mark4o: Ahhhh okay I see what you mean, duh. Thanks this looks promising. [00:57] it is normally fixed but can cover every color [00:57] and I think it actually doesn't have to be fixed, can be a movie [06:47] how do I find the framerate when using ffmpeg libraries? [06:47] I have a formatContext and a codecContext [08:03] I want to associate AVPacket with the returned AVFrame from avcodec_decode_video2. since the decoders can produce delayed frames, it's not trivial how to associate them. what's the proper way? [08:06] mkvsynth: can you access to the AVStream structures which belong to the format context? I think AVStream has one. [08:13] I think AVRational? [08:18] avg_frame_rate? [08:27] got it. in the format context you find the video stream and then use avg_frame_rate [09:25] blah, I'm back... [09:41] http://illogicallabs.com/paste/00000006.txt anyone know why I'd have an input/output error on this? [11:10] i'm trying to set up a decoder for a microphone, but after decoding, the data is corrupt. The raw packets from the audio device do contain valid PCM data though. 
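(Aside on the frame-rate question at [06:47]-[08:27] above: the answer given is AVStream.avg_frame_rate, which is an AVRational. A minimal sketch of that lookup, assuming a 2013-era FFmpeg where the per-stream codec context is still reached via st->codec; the helper name first_video_fps is made up for illustration. For variable-frame-rate input, r_frame_rate or the stream time base may be more appropriate.)

    #include <libavformat/avformat.h>
    #include <libavutil/rational.h>

    /* Return the average frame rate of the first video stream, or 0.0 if unknown. */
    static double first_video_fps(AVFormatContext *fmt)
    {
        unsigned int i;
        for (i = 0; i < fmt->nb_streams; i++) {
            AVStream *st = fmt->streams[i];
            if (st->codec->codec_type == AVMEDIA_TYPE_VIDEO && st->avg_frame_rate.den)
                return av_q2d(st->avg_frame_rate);   /* AVRational -> double */
        }
        return 0.0;
    }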
[11:10] pastebin.com/T17gjUX6 [11:10] would this be a bug, or am i doing something wrong? [11:10] um [11:10] why are you even decoding if you have PCM data? [11:11] because i need an AVFrame instead of an AVPacket [11:13] I remember having issues with that... and finding out why. [11:14] Martijnvdc: check my init_input: http://viric.name/cgi-bin/ring/artifact/30b92f876e91c1e719923c43c24930798e55e702 [11:14] and process_input [11:15] I remember troubles initialising the contexts. [11:20] viric: thank you! [11:20] this code works :) [11:21] I suffered making it. hehe. it's the only thing I ever wrote with ffmpeg [13:13] Hi [13:13] What command would I use to convert a whole folder and the subfolders of music from mp3 to ogg [13:18] viric: thanks a lot man. thanks to your example, it now works perfectly [13:19] i don't know what exactly caused the corruption, but at least it now works :D [13:35] I have some video that coded video with "camtasia" codec and audio with "pcm_s16le, 12000 Hz, 1 channels, s16, 192 kb/s", and the audio sounds really poor, and a very little huming noise, is there some filter that can improve its quality ? [13:36] because watching that video for some period of time became really annoying [13:36] 12000 Hz means you will only get sounds up to 6 kHz. [13:37] IOW, someone chose really poor encoding settings [13:38] hi all [13:38] I mean to make it little softer, the only sound is voice its some recording of desktop video with commentary, and that voice sounds like really old radio from 30s, its quite brute for ears to hear [13:40] there is a way to undestand why when i play video fom an ip camera i receave the massege : corrupted macroblock specially on heavy network traffic [13:42] karlox: if the camera sends the stream in realtime to you, and the network is congested, it may have to drop frames because there's no space to write them anywhere [13:42] and there is a way to drop this frame so i don't display video damaged [13:42] so use it. [13:43] vad_: "chose really poor encoding settings", at least that way 26 minutes are 55Mb in size, very efficient [13:43] elkng: for 192kbit/s, you would get wonderful audio if encoded as MP3 or MP4AAC, even multichannel at that. [13:43] vad_, are you talking about network device or opertive system? [13:43] karlox: any component can and must drop something if buffers are full [13:43] s/opertive/aperture [13:44] vad_, may i drop this frame? The image with this error is terrible to diplay! [13:45] vad_, i'm trying with ffplay [13:59] elkng, the problem is that the audio is already destroyed [14:02] Mavrik: shouldn't there be filters for that purpose to restore quality ? [14:02] or make it softer or something [14:02] there's nothing to restore [14:02] you have all frequencies over 6kHz missing [14:16] Martijnvdc: perfect :) [14:16] Martijnvdc: I hope you write something very good and gpl [14:49] viric: it will be gpl :) [15:37] i have a lot of rtp missed warnings also with negative number (RTP: missed -52283 packets) where i can investigate to find the cause of this warnings [15:53] in ffplay the option -fflags discardcorrupt can be used to drop frame this kind of error: error while decoding MB 44 26 [15:53] ? [17:04] hi. are there any examples of using FFmpeg with DirectSound? ffplay uses SDL for audio, but I need DirectSound in this specific case. I have one implementation but it generates a little white noise when I play it [17:31] Hey guys, which one is the currently best video codec supported by ffmpeg? 
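(On the microphone/PCM thread above: the fossil and pastebin code is not reproduced here, but a generic sketch of the kind of initialization such a path needs before the audio decoder will hand back usable AVFrames is shown below. This is an illustration only, not the poster's actual fix; open_stream_decoder is a made-up name and the 2013-era st->codec field is assumed.)

    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>

    /* Open a decoder for one audio stream (e.g. pcm_s16le from a capture device). */
    static int open_stream_decoder(AVStream *st)
    {
        AVCodec *dec = avcodec_find_decoder(st->codec->codec_id);
        if (!dec)
            return AVERROR_DECODER_NOT_FOUND;
        /* avcodec_open2() must succeed before packets of this stream are decoded. */
        return avcodec_open2(st->codec, dec, NULL);
    }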
[17:56] What's the difference between "-q:v 1" and "-qscale 1" ? [18:27] mcbonz: libx264 [18:28] meekohi: it's the same... -q:v 1 is the newer way of specifying it, -qscale is the older way [18:28] microchip_: Ah great, thanks! [18:54] i'm alway stuck with dropping frame to avoid to display artifact image.... [18:54] i can't found the right option in ffplay! [19:07] Hi, how can I convert an RGB image to YUV420 using sws_scale? (I get the RGB image with glReadPixels(), as an char-array) [20:55] which yuv ? [21:15] anyway to copy gop structure of the source file? [21:47] Is there no way to make ffmpeg use the english tagged audiotrack always as default if source has multiple language tracks (dvb input) [21:48] Somewhat like what vlc's --audio-language=blah does [21:59] Haha pretty simple, -alang eng always thought that was for setting output metadata ^^ [22:10] can I get ffmpeg to just dump information about a file without doing anything to it [22:11] ffmpeg -i file [22:11] does just that [22:11] :) [22:44] how could I avoid verbose/progress messages if I run ffmpeg in a cron job? [22:47] -v loglevel ? [22:50] what do you think -v 0 ? [22:50] no idea. [23:16] phr3ak: you could log it all to a file somewhere with, ffmpeg -i .... output 2>/path/to/ffmpeg.log [23:26] phr3ak: you can try -loglevel quiet if you want to see nothing really or -loglevel warning or something in between. the log levels are described in the manual if you are curious. [00:00] --- Thu Aug 8 2013 From burek021 at gmail.com Thu Aug 8 02:05:03 2013 From: burek021 at gmail.com (burek) Date: Thu, 8 Aug 2013 02:05:03 +0200 (CEST) Subject: [Ffmpeg-devel-irc] ffmpeg-devel.log.20130807 Message-ID: <20130808000503.08B3118A042B@apolo.teamnet.rs> [00:00] aballier, is there any software that still uses AVCODEC_MAX_AUDIO_FRAME_SIZE or decode_audio3 ? [00:01] hmm, if decode_audio3 is not removed, it should still be working... [00:01] BBB, ok, ill add add the others to MAINTAINERS and you can add yourself/send me a patch/pull req when you have decided [00:02] BBB: [00:02] [16:08] <@Daemon404> i will start working on vp9 tomorrow -- if you give me a task [00:02] [16:08] <@Daemon404> so i dont conflict with someone else [00:02] because of constant API changes, some users decide to stop using libav* [00:02] Daemon404: inter frame coding!!!!! [00:02] well ok [00:02] if you're up for that [00:03] I can help out once intra is done, but at least for now it's a big wide open [00:03] i have no idea what it entails in vp9 [00:03] givn vp9 has no spec [00:03] it starts with bitstream parsing [00:03] so ill have to see [00:03] you just do the same as what libvpx does [00:03] it's like reverse engineering [00:03] except you have the source [00:03] heh [00:03] so it _is_ easier [00:03] so like every day i touch ffmpeg [00:03] right [00:03] except this source is not ffmpeg's ;) [00:04] it has roots in on2 [00:04] who i.. dont have the most respect for [00:04] <_< >_> [00:04] durandal11707: IMO the problem is that the libav* API is too low level [00:04] wm4, what you dont like to loop over packets? [00:05] it's hilarious how most of the ffmpeg examples are subtly or completely buggy/inapproriate/useless [00:05] if i am using a local file i always use ffms2 [00:05] since it providers a higher level api [00:06] getframe(n) is extremely convenient. [00:07] wanting to grab a frame by frame number in one step is totally an obscure use-case not worth supporting [00:07] of course. 
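(On the [19:07] question above about converting RGB from glReadPixels() to YUV420 with sws_scale: a minimal sketch, assuming packed 24-bit RGB input and 2013-era AV_PIX_FMT_* names; rgb24_to_yuv420p is an illustrative name. Note that glReadPixels() returns rows bottom-up, so a vertical flip may still be needed.)

    #include <stdint.h>
    #include <libswscale/swscale.h>
    #include <libavutil/imgutils.h>

    /* Convert a packed RGB24 buffer (width*height*3 bytes) to planar YUV420P. */
    static int rgb24_to_yuv420p(const uint8_t *rgb, int width, int height,
                                uint8_t *dst_data[4], int dst_linesize[4])
    {
        struct SwsContext *sws = sws_getContext(width, height, AV_PIX_FMT_RGB24,
                                                width, height, AV_PIX_FMT_YUV420P,
                                                SWS_BILINEAR, NULL, NULL, NULL);
        const uint8_t *src_data[4] = { rgb, NULL, NULL, NULL };
        int src_linesize[4] = { 3 * width, 0, 0, 0 };
        int ret;

        if (!sws)
            return -1;
        ret = av_image_alloc(dst_data, dst_linesize, width, height, AV_PIX_FMT_YUV420P, 16);
        if (ret >= 0)
            sws_scale(sws, src_data, src_linesize, 0, height, dst_data, dst_linesize);
        sws_freeContext(sws);
        return ret;   /* >= 0 on success; caller releases dst_data[0] with av_freep() */
    }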
[00:08] if anything ffms2's API forces the user to do twice as much crap as should be needed for basic uses [00:08] how long has ffms2 actually managed to keep ABI/API compatibility? [00:08] although I guess some of that's just a result of no default params/overloading in C [00:08] years? [00:08] wm4, ... 7 years now? [00:08] I broke API compatiblity in uh [00:08] 2011 [00:08] wasnt that related to postproc [00:09] nah, added a param to the audio open function to control how to handle container-level audio delay [00:09] right. so one break since inception more or less [00:09] (also wow.. ffms2 is almost a decade old now i think) [00:10] no [00:10] 2008 or so? [00:10] it was 06 i think the first time it showed up ? [00:10] maybe ffms1 [00:11] michaelni: a lot [00:11] michaelni: k3b for ex. [00:11] also AVCODEC_MAX_AUDIO_FRAME_SIZE is not related to decode_auio3 [00:11] it was cargo culted, but not ACTUALLY related [00:11] michaelni: or gst-ffmpeg/libav [00:11] it was moved to its own repo 2009-06-03 [00:11] and that was still a beta version [00:12] a single api break in 4-5 years is still pretty decent [00:13] if you maintain a project depending on libav*, you have to play catch-up literally all the time [00:13] aballier, if its used it should stay in ffmpeg [00:13] i know, i maintain several [00:13] mostly i gave up on supporting anything but HEAD [00:14] tbh it's not as bad as it used to be [00:14] michaelni: it should be deprecated but all the code i've seen does buffer[AVCODEC_MAX_AUDIO_FRAME_SIZE] then give buffer to decode_audio3 [00:14] aballier, that is wrong [00:14] even for that api [00:14] AVCODEC_MAX_AUDIO_FRAME_SIZE is NOT related to what the buffer size should be [00:14] its a BS cargo culted define [00:14] those apps should migrate to decode_audio4 though [00:14] ffms2 did AVCODEC_MAX_AUDIO_FRAME_SIZE * 10 for no apparent reason [00:15] until it was switched to decode_audio4 [00:15] Daemon404: what should be the buffer size then ? [00:15] there was never a real way to guarantee a buffer size is right [00:15] its one of the reasons the api sucked [00:15] i remember it was not sufficient for some codecs [00:15] but like *4 or *10 was [00:15] thats basically just a arbitrary number [00:15] that happens to work OK [00:16] yep [00:16] also mostly people dont want to because they will need to bother with either: [00:16] a single frame can expand to an arbitrarily large size once decoded [00:16] a) use libswresample or libavresample to deinterleave [00:16] b) write their own deinterleave function [00:16] so any fixed-size output buffer is going to fail eventually [00:16] and thats :effort: for man [00:16] y [00:16] er, i mean interlave, not deinterleave [00:16] also typos. [00:17] why doesn't libswscale and lib*resample not use AVFrames for input and output? is there a specific reason or is everyone just too lazy to implement that [00:17] s/not// [00:17] i thought lib*resampe do [00:17] wm4: i asked the same thing some weeks ago, started to bake up some patches but got lazy :/ [00:17] Daemon404: nope [00:17] maybe to be 'generic' ? [00:18] Daemon404: so far what i've been doing is the lazy fix: #ifndef + #define 192000 * 4 [00:18] too generic? [00:18] aballier, like i said, the api sucked [00:18] it was deprecated for a reason [00:18] BS, it is because of constant API/ABI changes in all libav* libs :D [00:19] [18:18] <+wm4> too generic? <-- this is all of the libav* api [00:19] why its too low level. 
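(To make the decode_audio3 vs decode_audio4 point above concrete: with avcodec_decode_audio4() the decoder sizes and fills the output AVFrame itself, so the AVCODEC_MAX_AUDIO_FRAME_SIZE guess and the "* 4" / "* 10" fudge factors disappear. A minimal sketch of the call, assuming an already-opened AVCodecContext; decode_audio_packet is a made-up helper name.)

    #include <libavcodec/avcodec.h>

    /* Decode one audio packet; returns bytes consumed, or a negative error code. */
    static int decode_audio_packet(AVCodecContext *dec, AVPacket *pkt, AVFrame *frame)
    {
        int got_frame = 0;
        int ret = avcodec_decode_audio4(dec, frame, &got_frame, pkt);
        if (ret < 0)
            return ret;
        if (got_frame) {
            /* frame->nb_samples samples per channel are now in frame->extended_data[];
             * interleave or convert them with libswresample/libavresample if needed. */
        }
        return ret;
    }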
[00:19] you do need the low-level API [00:19] there just also needs to be a higher-level API [00:19] yes [00:20] and that api is ALMOST ffms2 [00:20] almost? [00:20] i still have to do libav* api stuff for streaming [00:20] ffms2 requires a local file to index [00:20] maybe ill look into libvlc [00:20] so it needs to create an index beforehand [00:20] in memory at least [00:20] how else would it seek accurately? [00:20] with libav*'s api? lolno [00:21] seeking a stream doesn't exactly work anyway [00:21] Plorkyeran, im also using libav* for remote files [00:21] you *can* seek in remote files over http [00:21] you could write something with ffms's api that supports streaming, but the innards would have to be pretty different [00:22] generally i want to grab the Nth frame on some remote file via https [00:22] without dling the whole thing [00:22] i had to write a custom thing for that [00:24] ffms2 would be able to support that with very little effort i think [00:24] depending on how much it needs to dl for indexing [00:26] it has to parse the video frames for things like repeat_pic and vp8 hidden frames so sorta depends on how little you can get away with feeding the parsers [00:26] sounds expensive [00:27] ill just stick my custom-rolled thing for now [00:27] (rather than attempt to impl it in ffms2) [00:50] bye... [01:23] ffmpeg.git 03Michael Niedermayer 07master:caa7a494817f: avformat/utils: fix memleak with nobuffer [01:45] ffmpeg.git 03Michael Niedermayer 07master:b45b1d7af93b: MAINTAINERS: Add some maintainers for parts of libavutil [02:45] ubitux: ok afaics all bugs in the first keyframe of that sample I made are b/c of missing idct32 (or some secondary derivative thereof), so I'll wait for your patch before doing anything more... maybe I'll work on bw adapt in the mean time, or loopfilter, or so [02:45] still have a few directional intra32x32s to write also [02:45] ... [02:45] anyway it's looking somewhat ok at this point [08:19] ffmpeg can't finish avformat_open_input (it never returns) for shout cast 1.9.9beta servers. (example server: 95.211.60.38), gdb only prints "0x00007fff90282f96 in poll ()" when I hit ctrl+c [08:20] correction for the example server: http://95.211.60.38:6006 [09:51] http://illogicallabs.com/paste/00000006.txt [09:51] I'm starting to feel the urge to give up on trying to make this connect [10:44] ffmpeg.git 03Michael Niedermayer 07master:81f4d55c3669: MAINTAINERS: alphabetical sort [10:44] ffmpeg.git 03Stephen Hutchinson 07master:f277d6bf4232: avisynth: Cosmetics [10:44] ffmpeg.git 03Stephen Hutchinson 07master:9db353bc4727: avisynth: Exit gracefully when trying to serve video from v2.5.8. [11:22] Why would one get this error while compiling for arm? libavcodec/arm/mdct_vfp.S:114:unknown register alias 'TCOS_D0_HEAD' [12:04] ffmpeg.git 03Martin Storsjö 07master:f542dedf7209: rtspenc: Check the return value from ffio_open_dyn_packet_buf [12:05] ffmpeg.git 03Stephen Hutchinson 07master:7f43a09da7b5: configure: Add -lstdc++ to the requirements for linking with libgme. [12:05] ffmpeg.git 03Michael Niedermayer 07master:5d08f8149c11: Merge commit 'f542dedf72091af8e6f32a12bd64289c58857c21' [12:12] ffmpeg.git 03Martin Storsjö 
07master:62572435d410: rtpenc_chain: Check for errors from ffio_fdopen and ffio_open_dyn_packet_buf [12:12] ffmpeg.git 03Michael Niedermayer 07master:6bf6d6ad498d: Merge commit '62572435d4106098c090fb8f129a9090e41ff1eb' [12:23] ffmpeg.git 03Kostya Shishkov 07master:f544c586364e: deprecate AV_CODEC_ID_VOXWARE and introduce AV_CODEC_ID_METASOUND instead [12:23] ffmpeg.git 03Michael Niedermayer 07master:e4eab2d9bde3: Merge remote-tracking branch 'qatar/master' [12:38] zidanne, is this with apples toolchain on a mac ? [12:38] yes [12:40] gas-pp [12:42] I have gas-pp. You mean that it should get updated? [12:42] with the new/missing instructions? [12:52] zidanne, try: https://github.com/michaelni/gas-preprocessor [12:54] (above contains a few fixes from martin, iam not sure why yuvi hasnt merged them) [12:56] michaelni: thanks, it worked very well. [12:57] Yuvi, please merge martins fixes (see above they are needed) [14:01] someone availabe for testing single patch? [14:02] durandal_1707: ah you wanted someone to test the threading for superx2sai? [14:02] yes [14:02] do you have it in a branch? [14:03] and benchmark, i like benchmarks [14:03] ubitux: there is patch on ml [14:07] and how its quality compares to hxn2 something [14:18] durandal_1707: if you want me to test, just push it in a branch [14:18] it will save me some time [14:19] but you just fetch patch and do git am patch [14:19] i don't have a copy of the mail [14:19] and it requires me to dive through archives etc [14:20] i pushed it into super brach [14:20] *branch [14:21] thx [14:29] doesn't look faster but maybe i'm doing something wrong [14:31] durandal_1707: http://pastie.org/pastes/8215033/text [14:32] (-mt is a build with your patch, and no -mt is a build with your branch HEAD^) [14:32] uses more cpu though =p [14:33] yes [14:33] with -mt, all the cpu are indeed in use, but most of the time 50% of them [14:33] is the decoder fast enough to give it enough to do? [14:34] http://ubitux.fr/pub/pics/_ff-super2xsai-nomt.png [14:34] http://ubitux.fr/pub/pics/_ff-super2xsai-mt.png [14:35] nevcairiel: dunno [14:36] i'm using a random relevant material for the filter: http://archive.org/download/pokemonblue-tas-mrwint/pokemonblue-tas-mrwint.mkv [14:38] http://pastie.org/pastes/8215054/text [14:38] not better with that one [14:38] source: http://www.archive.org/download/AParticularlyFishyDump/pokemongold-tas-fractalfusion.mp4 [14:41] perhaps it fight with decoder threads? [14:42] feel free to propose different cmd line testing [14:44] well the same one with which yadif filter gives performance gains [14:44] or fade filter [14:46] it would be cool to have some information about threading in the log [14:46] like where it is enabled/supported [14:48] how fast is the filter single threaded? because yadif can be relatively slow, so the speed up is easier measurable [14:49] personally i also had better experience looking at the fps from the stats output during transcoding [14:49] time seems rather odd sometimes [15:14] ffmpeg.git 03Michael Niedermayer 07master:5cd57e8758e3: avcodec/jpeg2000dec: check sample sepration for validity [15:18] michaelni: "separation" [16:41] ffmpeg.git 03Michael Niedermayer 07master:2960576378d1: avcodec/g2meet: fix src pointer checks in kempf_decode_tile() [17:08] ubitux: how's idct32 going? 
:) [17:09] still working on it :p [17:09] please wait a little more :) [17:09] got some unexpected events today, so couldn't progress as wanted [17:26] ffmpeg.git 03Michael Niedermayer 07master:731f7eaaade4: ff_flac_parse_picture: assert that len is within the array [17:26] ffmpeg.git 03Michael Niedermayer 07master:d0a882ab1d2a: avformat/oggparsevorbis: fix leak of ct [17:26] ffmpeg.git 03Michael Niedermayer 07master:f3b7f4707012: avformat/oggparsevorbis: fix leak of tt [17:33] ffmpeg.git 03Michael Niedermayer 07master:60ae776d04b6: av_frame_copy_props: fix unintended self assignment [17:36] it's also easy to get lost in that maze, i hope i won't make any confusion [17:36] but it's fun :) [17:48] sox API is sloppy, anyone wants to take over the sox wrapper? [18:02] is robert sykes on this channel? [18:03] allright, i'll bother him by email [18:30] Is this a known issue? Encoding HLS into libmp3lame gives error: Specified sample format s16 is invalid or not supported [19:16] ffmpeg.git 03Nicolas George 07master:39bb26f91b18: lavu/log: do not skip overwritten lines. [19:16] ffmpeg.git 03Nicolas George 07master:dd9555e94b14: ffmpeg: remove obsolete workaround in trim insertion. [19:16] ffmpeg.git 03Michael Niedermayer 07master:b11b7ceb8926: Merge remote-tracking branch 'cigaes/master' [19:22] zidanne, i dont know of that issue (which doesnt mean its not known) but probably a good idea to open a ticket [19:41] Action: pippin continous tweaking a new dithering method - wondering if it would be worthwhile to figure out how to make a clean patch; rather than a quick hack that bastardises the error diffusion :] [19:41] http://pippin.gimp.org/dither/ [19:41] oh my sample :D [19:42] :D [19:43] (my current hack is a 7 line diff that hijacks the error diffusion code paths; providing a constant "error" per pixel) [20:04] the bunny clip looks rather odd with that dithering, the colors are wrong and the pattern is very obvious [20:06] the colors are a bit off - but I find the pattern less objectionable than both the bayer and the diffusion [20:07] I suspect the colors being off to either be due to a bug in my ffmpeg integration - ... or a gamma issue [20:09] the thing that I find worst with my method; is the level of detail in deep shadows and bright highlights [20:09] (when the bunny emerges from the cave, and some cloud details) [20:09] the cave is hard to judge because the wrong colors are very obvious in that part [20:12] my real target for this is eink related dithering, but I like the tendencies it is exhibiting [21:56] ubitux: ffmpeg recognizes a microdvd subtitle file as "Tele-typewriter" [21:57] what's especially dumb is that this "Tele-typewriter" format is completely useless fringe bullshit [22:27] ubitux: actually, this looks like microdvd, but with [] instead of {}... possibly mpl2 [22:27] mplayer's subreader.c handles it anyway [23:02] ubitux: this is the file: http://sprunge.us/RXNE [23:04] mplayer supports it as microdvd? 
[23:25] ubitux: subreader.c detects it as SUB_MPL2 [23:25] it even seems to parse it [23:25] although I don't know what's with these lines starting with \ [23:26] err, starting with / [23:26] it's for italic [23:26] which is handled in lavc/mpl2dec.c [23:27] lavf/mpl2dec probing might not be working or strong enough in comparison to lavf/microdvd [23:27] for that file at least [23:28] i don't have time to look at it right now, feel free to have a look [23:29] I'm not sure why it wouldn't work [23:30] maybe it'll fail quickly at very small probe sizes, but it should be ok with larger ones [00:00] --- Thu Aug 8 2013 From burek021 at gmail.com Fri Aug 9 02:05:02 2013 From: burek021 at gmail.com (burek) Date: Fri, 9 Aug 2013 02:05:02 +0200 (CEST) Subject: [Ffmpeg-devel-irc] ffmpeg-devel.log.20130808 Message-ID: <20130809000502.B54CC18A044D@apolo.teamnet.rs> [00:00] wm4: if you're not looking into it, please open a ticket, i don't have time to look into it currently [00:01] ubitux: yeah, I want to take a closer look at it, it'll end with a ticket or a patch [00:01] thanks [00:02] nevcairiel: done, see #x264dev for x264 patch [00:02] oops maybe wrong channel, dunno [00:21] ubitux: Daemon404: I'll start doing some loopfilter work meanwhile [00:21] ok [00:21] sorry for taking some time :p [00:24] that's fine [00:24] also how can I convince Nicolas George NOT to spam the log with errors when a subtitle is not utf-8 [00:24] because it fucking spams the terminal and you can't prevent it [00:25] grep on the log callback output? [00:25] wm4: come to vdd at the end of the month with a cutter or another random letal weapon [00:25] I think that would get me into prison [00:26] ubitux: found the issue... sending patch [00:32] is there a list of topics that will be discuted during the vdd ? [00:32] "Schedule [00:32] To be defined&" [00:33] :( [00:35] hm: return FFERRTAG('N', 'I', 'G', 'E'); [00:35] I think this creates wrong associations [00:36] then return FFERRTAG('N', 'I', 'C', 'O'); it is [00:40] I'll try to come to the VDD this year if I can a flight for a resonable price [00:40] don't forget you need to register [00:46] is it ok to post unrelated patches as one series? [00:47] hm or maybe git-send-email has an option to do that [01:47] michaelni, dont merge 869b04e89154cd92d2bcfdabcecbe3217864c099 and pals until my fix goes into libav [01:47] they broke all windows builds. [01:57] Daemon404, if you point me to the fix ill cherry pick it immedeatly after the problematic commits or if possible before them. That way i dont have to wait for some external thing to happen and fewer checkouts are broken [02:01] ubitux: I'm starting to believe what you said is the only thing that would work [02:03] michaelni, http://lists.libav.org/pipermail/libav-devel/2013-August/049638.html [02:03] it has no been OK'd yet [02:03] and may change [02:03] not* [02:03] it may not even be correct [02:05] thanks, will cherry pick it when i merge the stuff [02:33] Daemon404 : who cares about windows ? [02:33] :P [02:34] most people who use computers [02:38] doesn't Compnn run windows 2000 or something? 
:P [02:39] yes he hates life [02:45] xp4eva [02:49] ffmpeg.git 03Michael Niedermayer 07master:0b5d1b88e09b: avutil/opt: fix types passed to the format string "%s" [03:02] ffmpeg.git 03Michael Niedermayer 07master:bc721ac9f775: swscale/utils: Fix potential overflow of dstPos*xInc before converting to 64bit [03:02] ffmpeg.git 03Michael Niedermayer 07master:8d745281a489: swscale/utils: Fix potential overflow of srcPos*C before converting to 64bit [03:09] ubitux, did you read / do you have any comments on "76729 0626 1:20 wm4 (1.5K) [FFmpeg-devel] [PATCH] lavc: make invalid UTF-8 in subtitle output a non-fatal error" and [03:10] 0808 1:04 wm4 (1.3K) [FFmpeg-devel] [PATCH 3/4] libavcodec: return specific error code on subtitles with invalid UTF-8 [03:10] Nicolas expressed his dislike over the error code fourcc [03:10] so maybe that should be changed if it's accepted [03:10] nicolas is out of touch with reality [03:17] Daemon404: how's inter mode going? [03:17] i ended up helping ruggles with something today... started reading libvpx [03:18] perhaps a PR toomorrow [03:20] who here is in the US and wants some FFmpeg stickers? [04:41] -pass 512 [07:15] michaelni: i'll try to find some time for it today [10:07] can't we convert AV_SAMPLE_FMT_S16 to AV_SAMPLE_FMT_S16P with swr_convert? I get BAD_ACCESS crash at "CONV_FUNC(AV_SAMPLE_FMT_S16, int16_t, AV_SAMPLE_FMT_S16, *(const int16_t*)pi)" [10:12] (I need this because I want to encode decoded audio (which is AV_SAMPLE_FMT_S16 by request_format) into mp3 with libmp3lame and libmp3lame needs S16P not S16) [10:33] zidanne, S16->S16P should work fine like any other convertion [10:34] I use this line which should be familiar.. I think I'm doing something wrong somewhere else& swr_convert(resampleCtx, &resampledOut, AVCODEC_MAX_AUDIO_FRAME_SIZE,(const uint8_t**)decoded_frame.extended_data,decoded_frame.nb_samples); [10:48] ffmpeg.git 03Martin Storsj? 07master:f4d371b9737c: rtsp: Don't include the listen flag in the SDP demuxer flags [10:48] ffmpeg.git 03Michael Niedermayer 07master:eb02f0c6e833: Merge commit 'f4d371b9737c0405b3bc46d7ca0c856c0a8616b1' [10:51] michaelni: swr_init(resampleCtx) is returning error. The line I use is: SwrContext* resampleCtx = swr_alloc_set_opts(NULL,av_get_default_channel_layout(pCodecCtx->channels), AV_SAMPLE_FMT_S16P, pCodecCtx->sample_rate,av_get_default_channel_layout(pCodecCtx->channels), pCodecCtx->sample_fmt,pCodecCtx->sample_rate,0, 0); [10:51] Maybe swr_convert really does not want S16 as the input? 
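(On the swr_init() failure above: S16 to S16P is a supported conversion, so a failing swr_init() generally points at one of the options handed to swr_alloc_set_opts() being invalid, for example a zero channel count or sample rate, or a sample format that is not what the decoder really produced. A minimal sketch of a well-formed setup; make_s16_to_s16p is an illustrative name. Also note that the in/out counts of swr_convert() are samples per channel, not bytes, so AVCODEC_MAX_AUDIO_FRAME_SIZE is not a meaningful value there.)

    #include <libswresample/swresample.h>
    #include <libavutil/channel_layout.h>
    #include <libavutil/samplefmt.h>

    /* Build a converter from interleaved S16 to planar S16P (what libmp3lame wants). */
    static struct SwrContext *make_s16_to_s16p(int channels, int sample_rate)
    {
        struct SwrContext *swr = swr_alloc_set_opts(NULL,
            av_get_default_channel_layout(channels), AV_SAMPLE_FMT_S16P, sample_rate,  /* out */
            av_get_default_channel_layout(channels), AV_SAMPLE_FMT_S16,  sample_rate,  /* in  */
            0, NULL);
        if (swr && swr_init(swr) < 0) {
            swr_free(&swr);    /* swr_init() rejected one of the options above */
            return NULL;
        }
        return swr;
    }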
[11:07] ffmpeg.git 03Vittorio Giovara 07master:22c879057ead: mpegvideo_enc: drop outdated copy_picture_attributes() in favour of a modern av_frame_copy_props() [11:07] ffmpeg.git 03Michael Niedermayer 07master:1b2a5817fcd6: Merge commit '22c879057ead189c0f59241cb9eeb926381e3299' [11:15] ffmpeg.git 03Rémi Denis-Courmont 07master:869b04e89154: libavutil: add avpriv_open() to open files with close-on-exec flag [11:15] ffmpeg.git 03Michael Niedermayer 07master:5f38317e59bd: Merge commit '869b04e89154cd92d2bcfdabcecbe3217864c099' [11:15] ffmpeg.git 03Derek Buitenhuis 07master:87e8cbf70931: libavutil: Don't use fcntl if the function does not exist [11:24] ffmpeg.git 03Rémi Denis-Courmont 07master:880391ed2d2f: libavutil: use avpriv_open() [11:24] ffmpeg.git 03Michael Niedermayer 07master:579a13789799: Merge commit '880391ed2d2faf796ca3a16f63cec69767546a21' [11:47] ffmpeg.git 03Rémi Denis-Courmont 07master:fee9db1fdf92: libavcodec: use avpriv_open() [11:47] ffmpeg.git 03Rémi Denis-Courmont 07master:71bf6b41d974: libavdevice: use avpriv_open() [11:47] ffmpeg.git 03Michael Niedermayer 07master:eec9c7af3370: Merge commit 'fee9db1fdf921295233e94cbe2769f9cd722206d' [11:47] ffmpeg.git 03Michael Niedermayer 07master:95fa1fe437c7: Merge commit '71bf6b41d974229a06921806c333ce98566a5d8a' [11:53] ffmpeg.git 03Rémi Denis-Courmont 07master:51eb213d0015: libavformat: use avpriv_open() [11:54] ffmpeg.git 03Michael Niedermayer 07master:756f865e3b66: Merge commit '51eb213d00154b8e7856c7667ea62db8b0f663d4' [12:03] ffmpeg.git 03Diogo Franco 07master:e8edf4e1cf60: cmdutils: Only do the windows-specific commandline parsing on _WIN32 [12:03] ffmpeg.git 03Derek Buitenhuis 07master:0f1fb6c0194c: libavutil: Don't use fcntl if the function does not exist [12:03] ffmpeg.git 03Michael Niedermayer 07master:f09b5fb4afed: Merge commit '0f1fb6c0194c85483dedb93b20a5b76f6fc9d520' [12:09] ffmpeg.git 03Ben Avison 07master:5afe1d27912b: avio: Add const qualifiers to ffio_read_indirect [12:09] ffmpeg.git 03Michael Niedermayer 07master:a0fb6083967a: Merge commit '5afe1d27912be9b643ffb4ddc21f6d920260dbb0' [12:26] ffmpeg.git 03Kostya Shishkov 07master:3e5898782dce: Voxware MetaSound decoder [12:27] ffmpeg.git 03Michael Niedermayer 07master:69ea65b46f1a: Merge commit '3e5898782dce60334ab294821ca00b19c648cf66' [12:36] ffmpeg.git 03Ben Avison 07master:7a82022ee2f9: h264_parser: Initialize the h264dsp context in the parser as well [12:36] ffmpeg.git 03Michael Niedermayer 07master:50b7ce12574d: Merge commit '7a82022ee2f9b1fad991ace0936901e7419444be' [12:51] ffmpeg.git 03Ben Avison 07master:218d6844b37d: h264dsp: Factorize code into a new function, h264_find_start_code_candidate [12:51] ffmpeg.git 03Michael Niedermayer 07master:c0f2ad3dbdea: Merge commit '218d6844b37d339ffbf2044ad07d8be7767e2734' [12:57] ffmpeg.git 03Ben Avison 07master:45e10e5c8d3d: arm: Add assembly version of h264_find_start_code_candidate [12:57] ffmpeg.git 03Michael Niedermayer 07master:e1a5ee25e048: Merge remote-tracking branch 'qatar/master' [15:14] ffmpeg.git 03Michael Niedermayer 07master:9386f334af53: avcodec/bitstream: Dont try to free buffers for static VLCs [15:57] Action: pippin updates his test gif of his position stable dither - now better adapted to RGB332 ; and with a less prominent pattern http://pippin.gimp.org/dither/ [16:38] yo [16:44] michaelni: sorry I couldn't connect after my connection is gone, did you write anything about my issue that is related to swr_init() returning error? [16:45] zidanne, did you look at swr_init() ? 
/ where does it fail exactly ? [16:46] ffmpeg.git 03Matthieu Bouron 07master:88a1ff22336e: MAINTAINERS: add myself as maintainer for lavf/aiff* and lavf/movenc.c [16:46] ffmpeg.git 03Michael Niedermayer 07master:55a88daf6ff1: MAINTAINERS: remove myself from movenc, 2 maintainers should be enough [18:04] BBB: totally untested (except compilation) and with personal comments: https://github.com/ubitux/FFmpeg/commit/7801a2ee4f3e9cc9cbe70a3cea570a331bbd9ad5 [18:06] ubitux: funny [18:08] why does that look like disassembled code [18:09] :) [18:12] nevcairiel: to ease the port to assembly of course [18:20] BBB: i hope there is no idct64+ though now :p [19:01] Hi. I've created a blend filter which allows me to remove transparent logos based on a mask. It is simple but it works very well. Is it ok to submit this as a blend filter or should it be in its own module? Is it too trivial? [19:07] what's the difference with removelogo? [19:11] it keeps the remaining information [19:11] can't you update that filter to add the missing bit instead? [19:12] we already have 2 filters to remove logo [19:12] i'm not a real programmer but i can try [19:13] this is how it looks like: http://www.xbmcnerds.com/index.php?page=Attachment&attachmentID=2946 [19:13] "Der Zutritt zu dieser Seite ist Ihnen leider verwehrt. Sie besitzen nicht die notwendigen Zugriffsrechte, um diese Seite aufrufen zu können." (roughly: "Access to this page is denied. You do not have the access rights required to view it.") [19:14] i don't know what that means, but it's in red and there is nothing else [19:14] give me a minute, i will upload it at imageshack [19:15] you have to be registered to view uploads in this forum. i didn't think of that - doh [19:17] i didn't understand the "it keeps the remaining information" btw [19:17] this link works http://imageshack.us/photo/my-images/441/nzf9.png/ [19:18] there is some colour information left in transparent logos [19:23] the colour range in this example spans from 127 to 255. all i have to do is to stretch that to the normal range [19:25] here is an image without those artifacts: http://imageshack.us/photo/my-images/197/8dgy.jpg/ [19:25] membrane: are the results better than the current filters ? [19:30] definitely. removelogo and delogo guess the colour information based on the surrounding pixels. my attempt is to remove the logo without destroying the information underneath [19:34] the equation is: ((A - B) * 255) / (255 - B) [19:35] A is the original pixel, B is the mask [19:37] looks like it works really well [19:37] would be very nice to have as a filter [19:37] delogos never work "well" [19:37] as a viewer, the blurry patch is never gone, and it's always annoying [19:38] this case seems to deal with transparent logo overlays [19:38] in which case there is obviously a lot of information to recover [19:38] It's Never Perfect(TM) [19:39] :) [19:39] just grabbed the latest ffmpeg git and now i get no video (mpeg2 or h264) using vdpau. if i revert to ffmpeg.20130730.git.d4925272, video works again. audio works fine however [19:40] there were recent(ish) changes [19:40] to its API, iirc [19:40] what are you using to view? [19:40] im using vdr with the softhddevice plugin to watch tv [19:41] well.. i have no knowledge of that, so im not of much help [19:41] no prob. if there was an api change in the last 8 days, thats likely the problem i think [19:42] 3 days ago [19:42] http://git.videolan.org/?p=ffmpeg.git;a=search;s=R%C3%A9mi+Denis-Courmont;st=author [19:45] i think it is better than nothing. the loss of colour information is about 12% in my example. 
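(The formula quoted just above, ((A - B) * 255) / (255 - B), is what you get if you treat the overlay as a white logo blended with per-pixel opacity B/255 — i.e. A = orig*(255 - B)/255 + B — and solve for the original value. A minimal per-sample sketch; unblend_pixel is a made-up name, and how the actual filter iterates over planes is not shown in this log.)

    #include <stdint.h>

    /* Undo a white-logo alpha blend at one sample.
     * A = observed pixel value, B = mask value (the logo as seen over black), both 0..255. */
    static uint8_t unblend_pixel(uint8_t A, uint8_t B)
    {
        int v;
        if (B >= 255)
            return A;                      /* fully opaque logo: nothing left to recover */
        v = ((A - B) * 255) / (255 - B);   /* the formula from the discussion */
        if (v < 0)   v = 0;                /* clamp rounding/noise back into range */
        if (v > 255) v = 255;
        return (uint8_t)v;
    }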
the logo is far less noticeable than without the correction. [19:45] thanks Daemon404 [19:45] indeed, I guess the color information loss would be highly dependent on the logo overlay though [19:45] might be 2852740e23f91d6775714d7cc29b9a73e1111ce0 [19:46] just 12% is excellent, most of the tv broadcasts I've seen used transparent logo overlays so it would be much better to use your kind of filter [19:47] defining the logo/position etc should be fairly straightforward, would be a really nice option for e.g. xbmc + pvr [19:48] in theory it should be possible to build a mask after N frames [19:48] it is possible to create these masks with another filter [19:48] just by examining luma SAD [19:48] or something [19:48] ah nice [19:48] just cache the lowest value and use the equation [19:54] there could be a filter which detects the presence of logos [20:02] those exist in e.g. MythTV [20:02] it uses logos and black screens for automatic commercial cutting [20:04] Daemon404, I really stopped believing in that way when some shows started showing the logos a few seconds in :D [20:06] its not ok on its own [20:06] also it failed miserably for 24 [20:06] because of its clock-on-black-screen [20:06] sc metrics + black screen + nologo = failed [20:08] :D [20:08] at least looking at Japanese TV there should be ~0.6 seconds of complete silence (!) between ads and the shows [20:09] that could be used as a hint together with the logos [20:12] there are also scte 104 messages [20:12] or scte 35 i forget which spec hits the consumer [20:12] Daemon404: some of the delogos are superb [20:12] the ones that use motion compensation [20:13] http://pastebin.com/cr4rDgSU OK, I'm attempting to use a Raspberry Pi to connect to one of the streams for the PEG station I volunteer with, I have been trying for a couple weeks, and I've been banging my head against this input/output error for a while without an idea what's causing it. [20:13] kierank, theyre never 100% perfect [20:13] Datalink-Studio: raspberry is far too slow [20:13] can you connect to it using a regular puter Datalink ? [20:13] er Datalink-Studio [20:14] erm [20:14] have you tried that ffmpeg line on a regular puter, i mean :) [20:17] I've gotta get a source, the Pi's handled it otherwise, just not liking going to rtmp [20:17] yeah, I need a source [20:27] what [20:27] does it work on your pc? [20:30] I don't have a video source at the moment, sorry, had other stuff come up related to a different project [20:57] eh [22:15] ffmpeg.git 03Reimar Döffinger 07master:a48979d71559: Reduce MAKE_ACCESSORS code duplication via a new header. [23:34] Kovensky, yours ^ ? [23:43] nope [23:43] that looks completely bananas -- cygwin64 requires gcc 4.8 [23:43] orite [00:00] --- Fri Aug 9 2013 From burek021 at gmail.com Fri Aug 9 02:05:01 2013 From: burek021 at gmail.com (burek) Date: Fri, 9 Aug 2013 02:05:01 +0200 (CEST) Subject: [Ffmpeg-devel-irc] ffmpeg.log.20130808 Message-ID: <20130809000501.AB4E918A03A1@apolo.teamnet.rs> [00:09] hi there [00:11] The hi is totally on my side. [01:46] Hi folks, trying to use ffmpeg to broadcast from my ip camera to ustream, It's sort of working, I'm getting high speed bursts of playback followed by gaps of buffering. Any ideas on how I could fix it? 
[01:46] Here's the command I'm using: ffmpeg -i "http://admin:1234 at 192.168.1.120/mjpg/video.mjpg" -f flv "${RTMP_URL}/${KEY} flashver=FME/2.5\20(compatible;\20FMSc\201.0)" [01:46] and the stream from my camera, so you can see the effect, http://www.ustream.tv/channel/Azelphur [01:52] llogan: http://pastebin.com/d254C7M9 here you go :) [01:53] hehe pasted the API key anyway, will have to change that ;) [02:01] Azelphur: is there a reason you're using old, outdated flv1 instead of libx264? [02:01] llogan: I wouldn't think so, I pretty much followed a guide to get where I'm at. [02:01] change -f flv to -f libx264? [02:02] not quite... [02:02] 1. get a more recent ffmpeg [02:03] llogan: more recent? this is a build from the ppa o.O [02:03] or compile [02:03] the PPA provides old stuff [02:03] tis only 3 weeks old :< [02:03] http://trac.ffmpeg.org/wiki/UbuntuCompilationGuide [02:03] grabbing the latest, anyhow :) [02:04] ah, so he updated, but that branch is considered "old" anyway. [02:04] you want to be a neckbeard like us, don't you? [02:04] sure [02:04] working on the beard ;) [02:05] don't forget to put foil on the windows so neighbors don't see you using the computer at 3am with no clothes on [02:05] haha, no foil on the windows here ;) [02:06] your output shows bitrate=1185.0kbits/s [02:06] so i assume that's too high [02:06] I'm now using ffmpeg version N-55249-ga7bb12a Copyright (c) 2000-2013 the FFmpeg developers built on Aug 6 2013 05:43:01 with gcc 4.6 (Debian 4.6.3-1) from the static link you gave me :) [02:06] so you can try a more efficient encoder, libx264 [02:06] ok, that will work [02:07] not sure how I use libx264, I assume it's something with -f, I tried libx264 and x264, neither seemed to work [02:07] now add -c:v libx264 as an output option. see if the default settings work fine for you for this input. [02:08] or -codec:v libx264 [02:08] either is fine [02:09] llogan: hehe, I think it's still fast playback, it's also beating the hell outta my bandwidth ;) [02:09] 12.6mbit/sec up :) [02:10] this is ./ffmpeg -i "http://admin:1234 at 192.168.1.120/mjpg/video.mjpg" -f flv -codec:v libx264 "${RTMP_URL}/${KEY} flashver=FME/2.5\20(compatible;\20FMSc\201.0)" [02:12] llogan: I just had a thought, it's grabbing all these mjpgs off the camera, they are just a series of images, it has no frame rate information [02:13] perhaps that's the problem, I need to set the output frame rate? 
[02:16] this does sort of seem to stand up too, I notice the input is stream is capturing at 25 fps [02:16] but I know the camera is set up to output 10 [02:18] ffmpeg is assuming your input is 25 fps [02:18] yea, it's wrong about that :) [02:18] i have no experience with http input protocol [02:19] hey look at that, I got it vaguely working [02:19] slightly bad frame rate, but it's looking like the timing is about right now [02:20] this is with -r 5 on both the input and the output [02:20] but you said the camera was 10 fps [02:22] I turned it up to 15 now, just for the hell of it, but -r 15 on both sides still gets me strange hyperspeed behaviour [02:22] 5 seems too slow, I end up with a neverending backlog ;) [03:06] llogan: managed to get an "ok" solution, I set the fps to 8, it gets about 1 second ahead every 20, and stops to catch up [03:06] not perfect, but it'll do :) [03:14] Azelphur: i forgot about the -re input option [03:14] try that isntead of any -r options [03:16] ./ffmpeg -re -i "http://admin:1234 at 192.168.1.120/mjpg/video.mjpg" -vf "hflip" -f flv "${RTMP_URL}/${KEY} flashver=FME/2.5\20(compatible;\20FMSc\201.0)" [03:16] llogan: ^ gets me hyperspeed as usual :) [03:29] pastie.org the output [03:31] relaxed: http://pastebin.com/EtPLjPQr [03:32] relaxed: just to give you the highlights of the conversation, camera is outputting at 15fps, ffmpeg seems to be capturing at 25, if I -r 8 on both sides I get a somewhat usable output, -r 15 results in the hyperspeed playback, less than -r 8 results in a neverending backlog [03:32] and you can see the live playback at http://www.ustream.tv/channel/Azelphur, the timer is counting seconds in a uniform manner :P [03:33] unrelated, but what's with the flashver? do you really need that for ustream? [03:33] llogan: no idea, as I say, I just pulled that from a guide [03:33] had to modify the command a bit, the guide was using a webcam, I'm using an ipcam. [03:33] i doubt you need it [03:34] Azelphur: you probably want -c:v libx264 [03:36] relaxed: llogan mentioned that earlier, I tried it, the result was same hyperspeed playback while using 10x the bitrate ;) [03:36] capture the stream locally using different frame rates and pick the closest one. [03:37] actually, if you have mplayer- mplayer -dumpstream -dumpfile output http://admin:1234 at 192.168.1.120/mjpg/video.mjpg [03:37] then pastebin the output of `ffmpeg -i output` [03:37] mplayer is just an apt-get away :) [03:39] The problem is most likely ffmpeg getting the input frame rate wrong. [03:39] that was my assessment, the camera is outputting at 15 and ffmpeg says 25 [03:39] and you really want -c:v libx264, by the way [03:40] Do you plan to stream 1280x1024 video? [03:41] relaxed: http://pastebin.com/WctrvYFv [03:41] relaxed: yea, ustream seems happy enough with it. [03:41] you must have a pro account. basic is limited 480 [03:42] oh [03:42] if not then they will probably scale it. probably better for ffmpeg to scale it. [03:42] I'll set my camera to 640x480 then [03:42] That's not very helpful. Can mplayer play it? [03:42] my camera can just do it natively, seems faster :) [03:44] maybe a ffmpeg ustream/twitch/justin guide would be useful. i've never done much streaming myself though [03:45] Azelphur: you beat me. 
my upload is ~0.5 Mbps [03:45] relaxed: vlc opened it sort of [03:45] it played every single frame in a split second [03:46] mplayer says "failed to recognise file format" [03:47] try ffmpeg -r 15 -i $input -c:v libx264 -f flv output.flv [03:48] Action: relaxed will brb [03:48] ffmpeg -r 15 -i "http://admin:1234 at 192.168.1.120/mjpg/video.mjpg" -c:v libx264 -f flv output.flv [03:48] just to make sure I'm doing whats expected :) [03:53] hey, changing the camera to 640x480...seems to have fixed it [03:54] ffmpeg -r 15 -i "http://admin:1234 at 192.168.1.120/mjpg/video.mjpg" -r 15 -f flv "${RTMP_URL}/${KEY} flashver=FME/2.5\20(compatible;\20FMSc\201.0)" [03:54] now getting sane results on http://www.ustream.tv/channel/Azelphur woo \o/ [03:55] well, it's a tad on the fast side, seems to buffer occasionally [03:55] try -r 12 [03:56] on both sides I assume? [03:57] the output will inherit the input frame rate, so no [03:57] ok, just before the input then :) [03:58] looks pretty decent now, I don't see it buffering at all :) [04:01] I'd say that's working, thanks all :) [04:02] also consider -bufsize and -maxrate [04:03] what would they be good for? :) [04:06] i'll just paste some quotes for you: http://pastebin.com/ndN59MmE [04:07] fun [04:11] there we go, got it pointing out the window and ready to go, give it 2-ish hours and a nice beach will appear on that ustream page :) [04:11] 2-ish hours because of sunrise? [04:11] yup [04:11] you are now a certified neckbeard [04:11] woo \o/ [04:12] llogan: https://www.dropbox.com/s/lcl7chbyoq1kfsb/2013-07-28%2020.59.52.jpg camera is stationed in the bottom corner of that left window (under the desk) [04:13] so it should get quite a nice view come sunrise :) [04:13] damn. nice setup. [04:13] ty :) [04:13] which country is this beach in? [04:13] *jealous* [04:13] UK [04:13] so it'll probably rain :P [04:13] man, my desk looks shabby and small now [04:14] scotland? [04:14] nah, england, margate specifically [04:14] i'll try to remember to take a look [04:14] hehe :) [04:15] also...windows would be nice. [04:15] Action: llogan needs a new location [04:19] xD [04:19] llogan: my place annoys the hell out of people, because it's awesome and I pay far less than anyone would think [04:20] I pay like, ?720/mo (~$1100) for it :) [04:20] nice. i pay about the same for my dumpartment (but it has 4 rooms). [04:21] ah, this is a 2 bedroom [04:21] but it was built in 1905. [04:21] haha, this was built in the 1700's [04:21] and the bars are noisy so there are tradeoffs [04:22] llogan: ah, no bars here :) [04:23] this is pretty much a really nice place, I think the area is just underrated really [04:46] hi, does ffmpeg support bdmv dir as input? thanks. I try, seem failed [04:50] oh, a directory? maybe i should read first before summoning fflogger [04:54] i wonder if ffmpeg support something like ffmpeg -i "blahblah\bdmv" ? eac3to support this [05:01] jcath: what does the dir contain? [05:02] the bluray movie disc bdmv directory structure [05:02] you will need to find the correct m2ts file [05:02] there are stream, mpls , blah,blah under bdmv, and there are m2ts files under stream dir [05:03] some discs, the movie is build up with some m2ts files, not only one [05:04] use ffmpeg -i concat:1.m2ts\|2.m2ts\|3.m2ts .... [05:05] ok, that's a solution [05:05] tsmuxer can do it too [05:08] yeah, i c. 
i just want to see if it is possible to do the transcode only one command line with ffmpeg, save the disk storage, speed up the transcode time [11:02] Hi, is there a way to capture screenshot of a movie every second in ffmpeg? [11:04] maybe [11:04] you want to make a gif? [11:05] http://blog.room208.org/post/48793543478 [11:05] http://stackoverflow.com/questions/6079150/how-to-generate-gif-from-avi-using-ffmpeg [11:07] no, not gif, .png every second of the movie [11:08] vl4kn0: I think there is a page in the wiki for that. About taking thumbnails every second. [11:08] ah no, wait. It's in the ffmpeg manpage. [11:08] Look for "For extracting images from a video" in ffmpeg(1) [11:09] hi [11:09] i have a question about joining 2 audio streams to 1 stream with 2 channels (L/R) [11:11] i use the following commandline option: [11:11] -filter_complex "join=inputs=8:channel_layout=stereo:map=0.0-FL\,1.0-FR" [11:11] viric: thanks [11:12] that does nearly what i want [11:12] the problem is that the streams are discrete and balanced to center [11:13] you are welcome [11:13] after joining them they are still balanced to center, so that i hear both channels from left AND right [11:14] any suggestions how to re balance them? [11:14] or join them in a different way [11:14] ? [11:31] vl4kn0: man ffmpeg-filters | less +/^' select' [13:32] no ideas? [13:32] :( [13:41] I issued this command ffmpeg -y -i "..url.." -t 60 -c:v libx264 -preset superfast test.mp4 [13:41] and i get errors like these: http://codepad.org/XOJ1Y1Be [13:41] why is there an h264 prefix sometimes and libx264 prefix othertimes? [13:42] am i understanding wrong that these errors are from ffmpegs internal h264 decoder? [13:42] 'h264 @' is the decoder.. 'libx264 @' should be the encoder, from what I remember [13:42] doesn't lix264 decode? [13:43] There are multiple 264 decoders available [13:43] (If only currently "h264" and "h264_vdpau") [13:44] See `ffmpeg -codecs` [13:44] do you think another decoder would work? [13:44] or is the stream just bad? [13:45] Tell me your URL and I show you secrets.. [13:45] xlinkz0, those are either buffer overruns [13:45] 6 [h264 @ 0x1a269a0] RTP: missed 9 packets [13:45] 7 [h264 @ 0x1a269a0] RTP: missed 77 packets [13:45] or your stream is broken [13:45] more like underruns [13:45] so can i fix it from the software or not? [13:46] ffmpeg cannot really fix a broken stream. Ensure that the stream properly gets downloaded and no drops occur, if that is what happens. [13:51] sorry, got disconnected [13:51] did anyone answer by any chance? [13:52] Fix your internet, then your stream issues may go away. [14:04] not really an option, the cameras are connected on LAN [14:04] stream copy always works flawlessly [14:04] transcoding live is an issue.. [14:53] i never really had rtsp or rtp streams without package loss [14:54] udp always has packet loss [14:54] also tcp. Otherwise we wouldn't have bandwidth limits :) [14:56] i was about to say that you are wrong regarding the bandwidth limits, but then i realized you are right :o [15:31] turns out the cisco camera just encodes badly [15:31] stream copied without packet loss errors and transcoded the resulted video with errors [16:22] I have video file 3 minutes and two audio files 1 minutes and 2 minutes can I put that two audios on that video in one command or should I first merger those two audios in one file and then do things regular way ? 
[16:25] hi, I'm trying to create a 1080i file but I keep getting a progressive output, does anyone know whta the options for interlaced upscaling are ? [16:25] usr/local/bin/ffmpeg -threads 4 -i 20SEC_VOX-POP_MASTER___.mov -s 1920x1080 -aspect 16:9 -pix_fmt yuv420p -c:v libx264 -preset superfast -tune fastdecode -crf 22 -c:a libfaac -b:a 128k TEST.x264.mov [16:34] badcompiler_: have a look at https://www.ffmpeg.org/ffmpeg-filters.html#interlace and https://www.ffmpeg.org/ffmpeg-filters.html#telecine [16:44] elkng, several options are available, check the join/merge entry in the FAQ [16:52] ./ffmpeg -r 12 -i "http://admin:1234 at 192.168.1.120/mjpg/video.mjpg" -filter:v "crop=640:360:60:0" -f flv "${RTMP_URL}/${KEY} flashver=FME/2.5\20(compatible;\20FMSc\201.0)" [16:52] can anyone tell me what's up with my output filter? [16:52] the output isn't getting cropped, it's still going out at 640x480 [17:09] oops, that was a pebcak, it does work :) [17:11] how tdo i use the advanced video option -ilme ? to preserve interlace -filter:v ilme ? [18:05] dad gummit] [18:39] Hi All [18:40] I did a fresh install of FFMpeg on my Fedora box and it went alright [18:40] however, I can't find ffplay in there [18:41] any suggestions as to where should I look for it or do I need to build with some added options? [18:41] rurtle: if you don't have the sdl-dev packages installed it won't be built. [18:42] I was checking some blog post on ffplay, they mentioned the same thing - but for a different platform. Thanks for the tip. Lemme try that. [18:57] Hi. I've created an blend filter which allows me to remove transparent logos based on a mask. It is simple but it works very well. Is it ok to submit this as an blend filter or should it be in an own module? Is it too trivial? [18:59] relaxed: After installing SDL-dev, it worked like a charm. Thanks a lot! [19:01] Whoops, wrong window. I will post this in #ffmpeg-devel [19:23] is there a way to control the packet size when reading from av_read_frame? [19:36] when ffprobe tells me "Stream #0:1(und): Audio: aac (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 127 kb/s", what does the "(und)" mean? [19:36] undefined. [19:37] I guess I figured that, but what aspect of the stream is undefined? [19:37] In other words, what does that field represent [19:37] The language of the audio has not been specified [19:37] You will see "eng", "ger", etc. in carefully authored files [19:37] OIC... thx [19:39] I guess that doesn't explain my larger problem, which is that I can't avoid "No channel layout for input 2" errors. I have a MP4 file, containing several video and audio streams I've just captured from a set of cameras and microphones, and now I'm trying to post-process it. Mix some audio streams together, create one 5.1 audio, change the order of the various streams, and eventually postprocess the video a bit. [19:40] specifically, this: http://pastie.org/8219262 [19:41] The ffprobe output at the beginning of that paste is redundant, I guess, but ... oh well. [19:56] http://pastebin.com/cr4rDgSU OK, I'm attempting to use a Raspberry Pi to connect to one of the streams for the PEG station I volunteer with, I have been trying for a couple weeks, and I've been banging my head against this input/output error for a while without an idea what's causing it. [20:11] I'm starting to wonder if I'm doing the impossible at this point [21:01] can ffmpeg generate a video from a bunch of dated files, or do they have to be incrementally named? 
[21:06] Azelphur: are the still in sequential order (but not specifically numerically)? [21:07] think so, the naming convention I went with is 2013-08-08 16:38:01.jpg [21:07] llogan: dunno if you saw btw, the fruits of yesterdays neckbearding, http://www.ustream.tv/channel/Azelphur ;) [21:08] you can use the glob pattern: ffmpeg -pattern_type glob -i '*.jpg' ... [21:08] cool, I'll give that a go [21:09] i just checked it out a few minutes ago. is it almost sunset time now? [21:09] yup [21:09] llogan: sweet, it works :) [21:10] Azelphur: you got some low tide [21:10] note that by default the images will be 25 frame rate unless you add -r as and input option [21:10] vad_: yea, it's pretty much flat out there, complete millpond :) [21:10] yea, the default seems ok :) [21:10] llogan: It's sunset time anytime, for some place on Earth [21:11] llogan: https://www.dropbox.com/s/l901jp1es6trtwf/out.mp4 :) [21:13] hey guys, I'm wondering if it's possible to analyze a video, extract the byte position of each keyframe, and construct a segment of the video on demand using this information [21:13] something like [21:13] i like the part with the tide going out, but is the camera out of focus? [21:13] axorb: that's what the seek indexes in videos are there for, already [21:14] vad_ how can I extract those? [21:14] show_frames seems to decode the entire file [21:14] axorb: you don't really extract those; you simply let them be used when seeking [21:14] vad_: I need them :P [21:14] no? [21:15] I'm constructing a HLS playlist prior to generating the parts [21:15] wtf is hls [21:15] it's a streaming format [21:15] basically, you segment the file into parts [21:15] and I want to segment on keyframe boundries [21:15] except I don't have the entire file at my disposal, I want to download as little of it as possible to reduce latency [21:15] llogan: I'm not sure in all honesty, I think it might be, it's auto focus [21:16] so yeah, reading the seek indexes would be fantastic [21:17] Azelphur: what frame size are you giving to ustream? [21:17] in mpeg4, they should be in the moov atom [21:17] llogan: 640x360 [21:17] I'm downscaling from 720p then cropping [21:17] vad_: I'm going to be supporting all sorts of videos, do you know of a uniform way of doing it? [21:18] like some output from ffprobe or something [22:18] hello [22:19] how would you stream continuosly raw data from a device? [22:20] ./ffmpeg -f rawvideo -i /dev/video0 -vcodec copy -f rawvideo http://localhost:8080/data.ffm [22:20] basically i need to copy raw data from a device and feed a .ffm [22:21] the prblem is that it seems ffmpeg does not keep reading from the device, it exits immediatly (with no errors) [00:00] --- Fri Aug 9 2013 From burek021 at gmail.com Sat Aug 10 02:05:02 2013 From: burek021 at gmail.com (burek) Date: Sat, 10 Aug 2013 02:05:02 +0200 (CEST) Subject: [Ffmpeg-devel-irc] ffmpeg.log.20130809 Message-ID: <20130810000502.1075718A055B@apolo.teamnet.rs> [00:13] Install ffmpeg according to https://ffmpeg.org/trac/ffmpeg/wiki/UbuntuCompilationGuide [00:13] On Ubuntu 13 keep getting The program 'ffmpeg' is currently not installed. [00:15] I have file "file.swf", these are its parameters: http://sprunge.us/RPXh, and thats the command I use to convert it to video: "ffmpeg -i file.swf -vcodec mpeg4 -vb 1500k -ab 128 -ar 44100 -ac 2 -acodec libmp3lame -y -f avi file.avi", and thats the output: http://sprunge.us/UegZ, what is wrong with that file or ffmpeg can't convert from *.swf [00:15] to *.avi ? 
[00:21] elkng: Is it an actual video, or a vector animation? [00:33] hi has anyone used the dshow capture o windows with logitech c920? [00:33] (on windows 8 that is...) [00:34] videoman: did you encounter any errors? [00:36] llogan I did not encounter any errors. [00:36] is the ffmpeg binary in ~/bin? [00:38] no it is not [00:38] then something went wrong somewhere. the next step is to determine which step is messed up at. [00:39] ok thanks, I'll try again but it made it through make install [00:41] apparently this VM ubuntu machine is named "cornhole" [00:42] how does anyone get used to Unity? [01:44] any owner of an logitech c920 that has capture the hardware h264 with it? [01:48] llogan: still trying on Ubuntu 13.04. Can't run make: Nothing to be done for `all'. Some warnings in running .config but it finished. I've done this several times on earlier versions of Ubuntu, never ran into this [01:49] who run make on ubuntu ? [01:49] pun intended [01:49] got it ? [02:16] videoman: then perhaps configure failed. see the tail of config.log in ~/ffmpeg_sources/ffmpeg/ for a clue [02:17] Thanks llogan, I found the issue. The binary was put in home/ubuntu/bin instead of /usr/bin [02:19] yes, the guide is designed to place the binary in the user's home to avoid conflict with repository stuff [02:19] (and libraries especially) [02:19] but issuing "ffmpeg" should still use the version in $HOME/bin if all is well with the guide. [03:22] is it possible to use the segmenter feature of ffmpeg to create segments with individual files for audio and video ? [07:22] hello! i just compile ffmpeg using the guide (http://ffmpeg.org/trac/ffmpeg/wiki/UbuntuCompilationGuide) it looked like everything went smooth till it's all done -- bash: ffmpeg: command not found [07:22] i'm on debian 7 [07:25] sounds like the ffmpeg binary is not in your PATH environment variable [07:27] i can't locate the ffmpeg binaries at all lol [07:28] did you run "make install" ? [07:28] sure [07:28] did you specify --prefix=/some/dir during ./configure? [07:29] yes ofc [07:29] did you add that directory to your PATH? [07:32] soundz? [07:32] sorry, one sec [07:41] this second is taking to long for me to postpone sleep any further, i suggest you check your $PATH, good night [07:42] *too long [07:42] the $path is not the problem [07:42] and i'm sorry again [07:42] i'm just trying to trace the problem [07:42] night [07:43] can you post your $PATH variable? [07:43] and tell me where you installed ffmpeg? [07:44] why are you so sure your PATH variable is correct? [07:44] if the command is not found, there is nothing else that comes to my mind [07:44] undless you cross compiled for another system, but i don't think you did that if you followed the guide properly [07:50] sorry for my delay again lol [07:50] ffmpeg version git-2013-08-09-18be3fa Copyright (c) 2000-2013 the FFmpeg developers [07:50] built on Aug 9 2013 07:48:14 with gcc 4.7 (Debian 4.7.2-5) [07:50] i just followed the old guide [07:50] http://web.archive.org/web/20130326055445/https://ffmpeg.org/trac/ffmpeg/wiki/UbuntuCompilationGuide [07:51] mindlessly copy paste everything except the ffmpeg ./configure part [07:51] and it worked great lol [07:52] thanks again [08:02] soundz: when you followed the current compile guide did it put anything in $HOME/ffmpeg_build? There should have been a bin directory with ffmpeg in it. [08:03] or actually it uses $HOME/bin [08:04] it uses ffmpeg_build [08:04] did you configure with --bindir="$HOME/bin" ? 
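Most of the "command not found" cases above come down to where make install put the binary versus what is on PATH. A quick check, assuming the compile guide's default locations:

    ls ~/bin/ffmpeg ~/ffmpeg_build/bin/ffmpeg 2>/dev/null   # where did it land?
    export PATH="$HOME/bin:$PATH"    # current shell; add to ~/.profile to keep it
    hash -r                          # make bash forget a stale command lookup
    ffmpeg -version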
[08:05] it doesn't matter anymore [08:05] i managed to get it working [08:06] ok, just wanted to find out if there was something that needed to be fixed in the current guide [08:07] i just used the old guide :\ it's idiot proof [08:08] well some people don't have root and can't install stuff systemwide, so it didn't work for them [08:08] i had to root in current guide [08:09] the yasm part needs root [08:10] especially the make install [08:10] that's the old guide; the current guide installs in your home directory [08:12] i had to root in order the compile yasm [08:12] i was getting denies and stuff [08:12] with the new guide [08:14] I don't know why that would happen; there should be no need for root to compile or install in your home directory [09:34] hey [09:34] "intel inside, idiot outside" :) drawn this in 2005, i see it is quite popular on the net now, they sell t-shirts with this on them http://tinypic.com/view.php?pic=332lzls&s=5 [10:31] hi all thereis a way to drop frame when there are this error: Cannot use next picture in error concealment ? [12:37] Hi, I'm using the ffmpeg libraries for a audio streaming and playback application. I'm using "avformat_open_input" and then "av_read_frame" to open a remote ogg file. I'm wondering if FFMPEG does any automatic background preloading/buffering the of a remote data? Or should I read as many frame as possible into a temporary buffer to avoid risking audio-playback to starve? [12:38] And if so, can I configure how much of data is downloaded when calling avformat_open_input? [13:28] qqwhat does --extra-ldflags=somedir actually do when configuing ? [13:31] that does nothing, -L/some/dir/somwhere/lib adds a directory to the library search path [13:32] (with --extra-ldflags since -L is a linker flag [13:33] @jeeb, yeah thanks, I don't know why I didn't figure that out, there are so many ffmpeg guides, some of them specify lib dirs, some of them don't. I stupidly assume everytime I compiled something the "system" would magically find what it needs [13:33] if you poke gcc enough you will see its default include and library search dir paths [13:33] usually googling for it helps [13:35] so if you install into those directories, it will be found automagically (caveat: if you compiled as shared, you will have to do ldconfig first, only after that your library will be found) [13:35] yeah thanks, I was trying to build x264-devel and it was not picking up lavf, I realise now if I point it with ldflags=somedir and cflags=somedir it configures with lavf support [13:37] I guess I should really about how compiling works [13:54] hello, would you please help me out with this one: http://stackoverflow.com/questions/18132342/ffmpeg-rtmp-streaming-process-exit [13:58] opening 2 terminals and running ffmpeg with 2 different rtmps, then killing one of them closes the other one also. [13:58] is this expected? can it be avoided? [15:05] http://pastebin.com/2Ff7DDLn [15:16] is it possible to convert .swf to .avi ? [15:21] trying to build --enable-shared, keep getting XXXXXX.some.thing can not be used when making a shared object; recompile with -fPIC [16:42] Here is a stackoverflow post for my question earlier, if anybody happens to know: http://stackoverflow.com/questions/18120182/ffmpeg-libavformat-internal-buffering [16:53] elkng. swf is vectors in'it ? 
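What --extra-cflags/--extra-ldflags boil down to in the case above, where a dependency was installed under a non-default prefix: they simply extend the header and library search paths. The prefix here is an example:

    ./configure --prefix="$HOME/ffmpeg_build" \
                --extra-cflags="-I$HOME/ffmpeg_build/include" \
                --extra-ldflags="-L$HOME/ffmpeg_build/lib" \
                --enable-gpl --enable-libx264
    # for shared libraries installed system-wide, run ldconfig afterwards,
    # as noted above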
[16:55] zap0: some of them can be converted, by in my case seem like can't [17:23] hello I have an mpeg file which has mpeg2video codec and ac3 audio and I want to convert it to flv and mp4 in a single ffmpeg command when I do it in two steps everything is fine but when i do it in one step for the mp4 file I get 'moov atom not found' [17:23] can anyone help me with this [17:24] ffmpeg -i test.mpg -y -b 512000 -s 320x240 -ar 22050 -ac 2 -ab 32k /var/www/test/test1.flv -b 512000 -s 320x240 -vcodec libx264 -ar 48000 -r 30000/1001 -acodec libfaac -ac 2 -ab 32k /var/www/test/test1.m4v [17:29] was there alien technology used in ffmpeg ? [18:07] :) [18:07] hm [23:26] hello. is there a way to dynamically change the bitrate of an encoder in ffmpeg? [00:00] --- Sat Aug 10 2013 From burek021 at gmail.com Sat Aug 10 02:05:03 2013 From: burek021 at gmail.com (burek) Date: Sat, 10 Aug 2013 02:05:03 +0200 (CEST) Subject: [Ffmpeg-devel-irc] ffmpeg-devel.log.20130809 Message-ID: <20130810000503.1BC8718A055C@apolo.teamnet.rs> [00:00] llogan: there is no -pass [00:04] ffmpeg.git 03Michael Niedermayer 07master:c11c180132b3: MAINTAINERS: add Alexander Strasser for the server [00:04] ffmpeg.git 03Michael Niedermayer 07master:3b2e99fe9ec4: avfilter/vf_perspective: factor u cliping code [00:09] durandal_1707: what you say!!! [00:12] only i see pass1 and pass2 [00:12] really awkward [00:12] so lets add pass3 - pass 255 [00:33] have fun [00:35] its no fun, its pain [00:35] "Life is pain." [00:36] anybody want a peanut? [00:37] no, i prefer nut [00:39] pine nut [00:43] durandal_1707: so i guess the user can pass pass via x264opts/x264-params? da/nyet? [00:46] ubitux: no [00:46] ubitux: 32 is largest [00:47] ubitux: does it work? :) [00:50] ok so it doesn't work yet (after plugging in the function pointer), but it does do something [00:50] so that's a good start [00:50] ubitux: nice! so would you like me to commit this or hold off until it works? [00:56] BBB, think im sending you PR tomorrow with some bitstream parsing [00:56] ok [00:56] Action: Daemon404 has to visit family tonight [00:56] the pre-moving visits. joy. [00:56] yay [00:57] don't worry about timing, I'm on vacation in asia so not doing much useful either [00:57] lol [00:57] stuff like this takes time [03:05] I'm a bit confused about why when I use libfdk_aac on a matroska output, I only get CodecID = 'A_AAC' on the audio TrackEntry. I was hoping for something more descriptive that includes the profile as well. Just knowing that it's AAC isn't enough for when developing a video player. [03:05] any thoughts on that? [03:12] the A_AAC/X/Y codec ids are deprecated [03:12] A_AAC is correct. [03:13] aac is complicated because of backwards compatiblity [03:13] Oh [03:14] So, but in iOS I need to tell the audio queue what type of AAC it is... so is there a way to find out? [03:14] there's something to parse, presumably [03:14] Hehe... yeah [03:14] can't find it (yet) [03:15] why does the audio queue need to know what kind of aac it is [03:15] matroska is a mess.. their spec docs are out of date (like circa 2004) [03:15] you need to use an aac parser and find out the true sample rate [03:15] including extensions [03:15] so enjoy RTFSing... [03:15] ewww [03:16] really?! [03:16] ewww [03:19] kierank: I believe to setup an audio queue, I need to give it a valid AudioStreamBasicDescription, and there are values there that it seems to want. Such as... mFormatID, which can be any number of AAC profiles. [03:19] ok [03:26] that's frigging insane. Sigh. 
I thought I was done with decoding information when I finished my matroska reader. [03:26] grr [07:03] ffmpeg.git 03Marton Balint 07master:9f120e034fbe: ffplay: free subtitle pictures on exit [07:03] ffmpeg.git 03Marton Balint 07master:608989f6bf8e: ffplay: fix memleak of non-bitmap subtitles [07:03] ffmpeg.git 03Marton Balint 07master:e84ca8d38a18: ffplay: ensure the decoder is flushed before exiting or looping [07:03] ffmpeg.git 03Marton Balint 07master:18be3fac1d04: ffplay: check for filter EOF return codes [07:48] BBB-work: yeah please hold, it's a WIP, i will try to make it work [08:00] ffmpeg.git 03Martin Storsj? 07master:dfc6b5c81491: file: Move win32 utf8->wchar open wrapper to libavutil [08:00] ffmpeg.git 03Michael Niedermayer 07master:0dc17da30825: Merge commit 'dfc6b5c81491abf7effb97b23af17ccf7adcd132' [08:59] ffmpeg.git 03Anton Khirnov 07master:fa09e76010b7: FATE: add a TAK test [08:59] ffmpeg.git 03Michael Niedermayer 07master:760c5278dbff: Merge remote-tracking branch 'qatar/master' [09:30] ubitux: ok [09:43] ffmpeg.git 03Michael Niedermayer 07master:190a5893d189: avfilter/fifo: explicitly assert that a frame should have become available after request [10:28] I am trying to manually convert S16 to S16P. Where am I doing wrong? http://pastebin.com/h2DujTa6 [10:28] S16 is interleaved sample by sample [10:29] you cannot convert interleaved to planar with a memcpy [10:29] unless you have a special memcpy :) [10:33] swr_convert crashes, that is why I am trying to find another way :/ I already use swr_convert perfectly to convert FLTP to S16, but I can't do it to convert from S16 to S16P (which I will use in libmp3lame) [10:33] zidanne: swr probably crashes because you're not using it correctly [10:33] zidanne: maybe paste your swr code? [10:34] it's probably just some oversight if it already works with other formats [10:37] wm4: http://pastebin.com/XRbH8Xze [10:40] you pass AVCODEC_MAX_AUDIO_FRAME_SIZE as number of output samples [10:40] this looks wrong [10:41] and passing decoded_frame.linesize[0] as in_count looks wrong too [10:43] as this is non-planar input, i thought that linesize[0] would be correct [10:44] linesize is in bytes [10:44] hi wm4! [10:44] but swr wants something else [10:44] I'm not sure what it is... looks like you could call it number of frames [10:44] bernie_: hi [10:45] so, I've got vp8 video playing from a matroska file on my iOS device... nice! Having a headache with getting the Audio going though (AAC). What a mess! :( [10:46] I'm not sure even where to start debugging the problem. iOS is telling me that it can't decode the first 2 frames... which seem impossibly small to me anyhow (just 160 bytes) [10:54] bernie_: why not discard 2 frames? [10:58] I changed the function to handle interleaved format. manually converting from S16 to S16P but it still does not work. Do you see any problems here? http://pastebin.com/uxzZySt5 [10:59] its ugly [10:59] define src, a and b outside and just increase in the loop [11:00] also, uint8_t for S16? [11:00] also, why the channel check inside the loop? [11:00] will the number change while it runs? [11:01] *a++=*srr++; [11:01] *b++=*src++; [11:01] also [11:01] it fails for channels > 2 [11:03] I am just trying to make it work for the current situation. that is why I ignored situations other than channels==2 [11:03] http://pastebin.com/H6D9eQbY [11:05] that might work [11:35] av500: it worked after a few more adjustments. thank you. 
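As a point of comparison for the interleaved-to-planar discussion above: the ffmpeg command-line tool performs the same S16 -> S16P conversion through libswresample automatically when an encoder such as libmp3lame asks for a planar layout; the hand-rolled loop is only needed when driving the encoder through the API directly. A minimal sketch, file names hypothetical:

    ffmpeg -i input.wav -c:a libmp3lame -q:a 2 output.mp3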
[12:19] ffmpeg.git 03Michael Niedermayer 07master:c9837613ed05: avfilter/trim: Fix assertion failure with empty frames [12:49] smarter, whats the status of hevc ? [12:51] he hasn't worked on it for a while, so the openhevc libav tree is currently the most up-to-date IIRC. And elenril is going through it atm. [12:51] also we still don't have the final draft on 14496-15 amendment 2 :< [12:52] which holds back both "mp4" and matroska with regard to HEVC [12:52] whats the expected timeframe for elenril merging openhevc ? [12:53] it's really a mess so I don't know of timeframes just yet :s openhevc added intrinsics and there's random commented out code etc. [12:54] there is some reason for the rush? [12:54] lol 61af627d56 [12:55] there's no rush really, but everyone kind of wants HEVC to get finally merged into libavcodec [12:55] since it mostly works [12:55] its already in widespread use? [12:56] we're kind of getting stuff created in it almost every day, and the MCW x265 project kind of has brought a semi-usable-speed encoder [12:56] MC/DivX have publicized their decoder, and they should put up the encoder during the summer as well [12:57] and where are such stuff available? [12:58] I don't know which you mean with that, but MCW's project is @ https://bitbucket.org/multicoreware/x265/ and DivX releases its stuff @ http://labs.divx.com/term/HEVC [12:58] i mean watchable files [12:58] oh [12:58] or stream? [12:58] well divx has some samples, I have some samples @ http://x264.fushizen.eu/samples/hevc/ [12:59] and then there's the ITU ftp used for the tests smarter made [13:01] someone should put some samples to samples.ffmpeg.org [13:02] also do note that rovi/mc/divx hacked up a kind of a pre-release spec for hevc in matroska, but it most probably won't be used in the end because everyone wants to copy the extradata format from 14496-15 amendment 2... which is seemingly in a limbo because there are no public working drafts, unlike with JCT-VC [13:04] sounds like many non-issues. We need to support both the final spec and whatever hacks people use before it anyway [13:05] I'm not sure if supporting both will be fun :P Since they both use the same ID and all [13:07] Action: pippin notices that ffmpeg sometimes encodes invalid GIFs (decoders complain about "too much data") [13:07] pippin, is there a ticket on trac about that ? if not please open one (with enough info to reproduce) [13:08] pippin: how to reproduce? [13:09] it seems data dependent [13:09] thus should be possible to reproduce also without the custom dithering algorithm I'm hacking on... (http://pippin.gimp.org/dither/) [13:12] what is exact error reported by decoder? [13:14] gimp reports: [13:14] GIF: too much input data, ignoring extra... [13:15] Last message repeated 1 time(s). [13:16] mplayer reports: GIF-LIB error: Image is defective, decoding aborted. [13:19] what lavf/lavc version you are using? [13:20] bdccfc which is origin/master as of Sat Aug 3 00:45:08 2013 +020 [13:22] the middle of the gif's on the url referenced above should be the defective gif, though I prefer not to check as I'm on expensiveish mobile connection right now - if it loops before the others in the browsers it likely is the one [13:41] this error difusion dither is single pass? 
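A stock-encoder baseline may help narrow the GIF report above down, i.e. whether the "too much input data" complaint also shows up without the custom dithering. A minimal sketch, input path hypothetical; feed the result to the same decoders (gimp, mplayer) that rejected the dithered files:

    ffmpeg -i input.mkv -t 5 -vf "fps=10,scale=320:-1" baseline.gif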
[13:42] yep, and it is deterministic based on input color and position [13:43] well to me all 3 sucks on its own [13:47] finding a good dither (to 256 colors) is about compromising various suck factors [13:47] the test-nesodden.gif is shorter than others, is that one you get error when decoding? [13:48] yes [13:48] the browser loops it at the error [13:48] i do not get any error here [13:49] and firefox loops all of them [13:49] also i failed to find too much input ... error log in code [13:50] ffmpeg.git 03Michael Niedermayer 07master:f58cd2867a8a: avformat/paf: Fix integer overflow and out of array read [13:50] ffmpeg.git 03Michael Niedermayer 07master:c94f9e854228: avutil/mem: Fix flipped condition [13:50] perhaps you have original lossless files somewhere (same dimensions) [13:50] it would have to be the lossless, dithered result [13:51] if I keep stumbling across this while experimenting with dithering; I'll wrap it up in a bug report... [13:54] durandal_1707: btw firefox wouldnt complain, it would just loop the gif prematurely [14:22] ffmpeg.git 03Michael Niedermayer 07release/2.0:1bf2461765c5: avfilter: fix plane validity checks [14:22] ffmpeg.git 03Michael Niedermayer 07release/2.0:211374e52a93: avutil/mem: Fix flipped condition [14:22] ffmpeg.git 03Michael Niedermayer 07release/2.0:50f9c4acc3ea: avformat/paf: Fix integer overflow and out of array read [14:22] ffmpeg.git 03Michael Niedermayer 07release/2.0:d6d168e87b63: avfilter/vf_separatefields: fix ;; [14:28] michaelni: this too 86e722ab97d7f5? [14:29] and bc2187cfdb5eeb82e3ca [14:59] ffmpeg.git 03Michael Niedermayer 07master:a9553e8f3792: avcodec/tiff: avoid seek back on reading tags [14:59] ffmpeg.git 03Michael Niedermayer 07master:200170e8c0b7: avcodec/tiff: remove redundant check [16:50] spam incoming [23:08] ffmpeg.git 03Michael Niedermayer 07master:12538bb9c2f4: avformat/nut: support planar rgb [23:08] ffmpeg.git 03Michael Niedermayer 07master:db8578a809f5: avcodec/raw: gbrp support [23:38] ok, just compiled ffmpeg on my Ubuntu box, same parameters as the Raspberry Pi I'm trying to stream from, same error [23:48] it gives me an input output error on the RTMP [00:00] --- Sat Aug 10 2013 From burek021 at gmail.com Sun Aug 11 02:05:02 2013 From: burek021 at gmail.com (burek) Date: Sun, 11 Aug 2013 02:05:02 +0200 (CEST) Subject: [Ffmpeg-devel-irc] ffmpeg-devel.log.20130810 Message-ID: <20130811000502.32D5318A045D@apolo.teamnet.rs> [00:21] @Datalink: what error? [00:22] gimme a sec [00:24] http://pastebin.com/im7vQ3kB [00:25] what is the receiver? [00:25] Imean the server? [00:25] oh, and you try to push RGB? [00:26] ignore the last question [00:26] that is the input, sorry [00:29] the reciever's an adobe server, I think, we honestly get it as part of a PEG hosting service from TelVue [00:30] I see [00:30] many of this service providers are also validating tcUrl, swfUrl, etc [00:30] is not enough to have the user:passwd correct [00:31] okay, so what do I need to pass for that, it's Stream_Name on the paste since it's part of our authentication chain [00:32] do you have a working thing which can publish to that service? 
if so, I can provide you with a sniff by acting as a man in the middle [00:32] the sniff is human-readable, created by a tool of mine [00:33] than you can compare it with what ffmpeg is trying to do and find the missing/wrong pieces [00:33] a TriCaster video switcher, a dedicated and active stream box, and I would not be allowed to share the passwords involved, so I would need to run that MitM method [00:34] I can share the sniffer with you, but is closed source and I can only offer you a binary. Is part of a commercial product that I'm developing [00:35] I could rig up Netcat to do the same on Monday, I don't have the equipment available here, right now, only the ffmpeg [00:35] yeah, you definitely need to look and compare the traffic [00:35] my method is only more comfortable [00:36] in case you can create a dummy user/passwd, let me know [00:37] forgive me, but there's a security concern with city government property in the mix, so I need to keep that in mind [00:37] I don't have configuration, we had to apply for this as a support ticket [00:37] in the long term, I have to set up ffserver on the studio's mac to act as a relay for live events later [00:45] 42 [01:36] j-b: Will VDD13 be at the same location as last year? [01:53] no [02:00] kierank: no? [02:00] see the email [02:01] ffmpeg.git 03James Almer 07release/2.0:baf92305a6f5: lavf/matroskaenc: Check for valid metadata before creating tags [02:01] ffmpeg.git 03Michael Niedermayer 07release/2.0:f593ac1c2183: matroskaenc: simplify mkv_check_tag() [02:05] kierank: the public announcement that was on multiple mailing lists? i must be blind because i can only see that on friday is the trip to the amusment park but not where the actual meeting will be. [02:05] an email you should have received [02:07] can someone confirm mpeg2 video doesnt work with vdpau [02:07] using current git [02:09] ffmpeg.git 03Paul B Mahol 07release/2.0:b79f337f8a68: ttaenc: fix packet size [02:09] ffmpeg.git 03Paul B Mahol 07release/2.0:80fb38153e7d: sgidec: safer check for buffer overflow [02:15] kierank: Now I can see it. The confirmation mail went into the spam folder somehow :( [02:17] hotwings: seems to work for me [02:32] hmm ok. thanks wm4 [07:57] ffmpeg.git 03Reimar Döffinger 07master:d4db7c334b66: Integrate accessors.h header into internal.h [10:13] ffmpeg.git 03Bryce W. Harrington 07master:d9c46c3cd9b8: doc: apply various grammar fixes [10:13] ffmpeg.git 03Mark Harris 07master:69f543854deb: doc/filters: fix sine sample_rate abbreviation [10:24] ffmpeg.git 03Rémi Denis-Courmont 07master:9d5ec50ead97: ff_socket: put out-of-line and fallback to fcntl() for close-on-exec [10:24] ffmpeg.git 03Michael Niedermayer 07master:296eaa84b9d0: Merge commit '9d5ec50ead97e088d77317e77b18cef06cb3d053' [10:29] ffmpeg.git 03Martin Storsjö 07master:33237123c83b: libavutil: Enable the MSVC DLL symbol loading workaround in shared builds as well [10:29] ffmpeg.git 03Michael Niedermayer 07master:09f1afc784b4: Merge commit '33237123c83bf4f8345e6ac889ad2e7dbd303d0e' [10:36] ffmpeg.git 03Martin Storsjö 07master:cb0244daaca8: bktr: Changed a missed occurrance of open into avpriv_open [10:36] ffmpeg.git 03Michael Niedermayer 07master:e3a296dfa53b: Merge commit 'cb0244daaca83ab666798818f74f5181bf6bc387' [10:41] ffmpeg.git 03Martin Storsjö 07master:a76d0cdf21c3: libavutil: Move avpriv_open to a new file, file_open.c [10:41] ffmpeg.git 03Michael Niedermayer 07master:ef13a005c41c: Merge commit 'a76d0cdf21c3d9e464623cc0ad1c005abf952afa' [10:47] confused by AAC encoding in matroska. 
It looks like it does populate CodecPrivate, but the docs say it's VOID.... so I guess I should not look at it. [10:51] ffmpeg.git 03Martin Storsj? 07master:e743e7ae6ee7: libavutil: Make avpriv_open a library-internal function on msvcrt [10:51] ffmpeg.git 03Michael Niedermayer 07master:b37ff488b8aa: Merge remote-tracking branch 'qatar/master' [11:12] Hmm. Sounds like CodecPrivate is valid... AAC AudioSpecificConfig structure (the bytes contained in Matroskas CodecPrivate). Good. [15:06] BBB: the decode is indeed not perfect; i fixed something yesterday, still not correct [17:10] when will ffmpeg support changing encoding paramaters on the fly? Currently there is no way to change the target bitrate during encoding, is there? [17:34] Joske, you can try to reopen the encoder with different parameters, [18:20] hm what broke flic-af11-palette-change on so many systems this time around? [18:20] some change in trim handling, i guess [18:33] maybe some float rounding [18:59] ffmpeg.git 03Michael Niedermayer 07master:148310ca1659: avutil/log: Use bprint for part [18:59] ffmpeg.git 03Michael Niedermayer 07master:0b5627189d83: avfilter/f_sendcmd: make const arrays static const [18:59] ffmpeg.git 03Michael Niedermayer 07master:db4918b72e11: avformat/tedcaptionsdec: make const arrays static const [18:59] ffmpeg.git 03Michael Niedermayer 07master:18b1381c5f14: avutil/opt: fix av_log type [18:59] ffmpeg.git 03Michael Niedermayer 07master:5fc5170c5575: avutil/opt: make const tables static const [19:02] BBB: fixed a few more, but it still doesn't look perfect [19:03] http://b.pkh.me/vp9-idct32-tmp0.png [19:09] i have a sample triggering "[mpeg4 @ 0x34b0580] new pred not supported" [19:09] anyone interested? [19:27] Action: ubitux winks at michaelni since he's the maintainer of lavc/mpeg4videodec.c [19:30] BBB: do i need to do the remaining pred functions or the output should be correct without them? [19:35] seems the pred callback are not called, so i guess that's my entire fault :) [19:37] ubitux, best open a ticket for newpred [19:38] yeah ok [19:40] what's the size limit for trac upload already? [19:43] 2mb i think [19:45] thx [20:21] ffmpeg.git 03Nicolas George 07master:d5f38847f54b: tests/fli: avoid rounding errors in -t option. [21:51] ffmpeg.git 03Stephen Hutchinson 07release/2.0:2881bfbfd6c3: avisynth: Cosmetics [21:51] ffmpeg.git 03Stephen Hutchinson 07release/2.0:e7a4c34e7c70: avisynth: Exit gracefully when trying to serve video from v2.5.8. [22:02] ^ patch already sent to fix this [22:58] ffmpeg.git 03Derek Buitenhuis 07master:d9c1fb8376aa: bprint: Include va_copy compat [00:00] --- Sun Aug 11 2013 From burek021 at gmail.com Sun Aug 11 02:05:01 2013 From: burek021 at gmail.com (burek) Date: Sun, 11 Aug 2013 02:05:01 +0200 (CEST) Subject: [Ffmpeg-devel-irc] ffmpeg.log.20130810 Message-ID: <20130811000501.2A21C18A045A@apolo.teamnet.rs> [02:17] hello is it possible to create two output files with two differenet video codes and audio codecs from a single input file in just one command line [02:17] yes [02:19] When you were here earlier you posted you command, and the second output was called, "output.m4v." Which to everyone in the world except Apple means raw mpeg4 video. [02:19] your command* [02:19] yeah [02:20] thanks relaxed I would go back and look at the conversation log [02:20] So you need "-f mp4 output.m4v" to force the right container. 
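Applied to the two-output command quoted earlier, that advice gives something like the following; paths, sizes and rates are the ones from that command, written with current option spellings (older builds still want the original -b/-ab forms):

    ffmpeg -i test.mpg \
        -b:v 512k -s 320x240 -ar 22050 -ac 2 -b:a 32k -f flv /var/www/test/test1.flv \
        -c:v libx264 -b:v 512k -s 320x240 -r 30000/1001 \
        -c:a libfaac -ar 48000 -ac 2 -b:a 32k -f mp4 /var/www/test/test1.m4v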
[02:21] I am new to using ffmpeg so is that what I have missed there [02:21] I will try it again and see if that solves my problem [02:21] if not, pastie.org the command and all console output. [02:21] sure I will do it [02:36] relaxed the console output is at http://pastebin.com/50hybAXL [02:38] relaxed I am getting the 'moov atom not found' for the mp4 file I have pasted the output. hre is the link http://pastebin.com/yzukx69C [02:39] I think there was a bug with older version of libmp3lame that gave that error. Which version of ffmpeg are you using? [02:40] >FFmpeg version SVN-r19352-4:0.5+svn20101223-1 [02:40] an old one :X [02:40] FFmpeg version SVN-r19352-4:0.5+svn20101223-1 [02:40] cheri: It's time to update ffmpeg [02:41] yeah but there is lot of devlopment required to port the changes to the new version so our company is living with it [02:41] I haven't seen anyone using a 0.5 in quite some time. You should win a prize. [02:42] "port the changes?" What? [02:43] the problem I have is if I run two ffmpeg processes I am falling short on system resources so I want to do it with one ffmpeg process [02:45] I am having 16 core server and I need to run transcode 4 flv and 4 mp4 in real time which is taking one cpu completely so I thought if I can bring that down to 4 my problem would be solved [02:46] but... that doesn't change the fact that your ffmpeg is outdated [02:46] it's like 3 years old (almost) [02:46] it's older than that I think [02:46] yeah I agree with that [02:46] well it says 20101223-1 [02:47] but maybe that's the build date [02:47] ah yes that's the build date :X [02:47] Anyway, upgrade ffmpeg and then use -threads to specify the amount of cores you want to use. [02:48] I read threads can be used only for few codecs. Is my understanding correct [02:50] libx264 supports threads, not sure about mpeg2video [02:51] cheri: http://johnvansickle.com/ffmpeg/ [02:52] ok I will try with the latest version 2.0 [02:53] thanks relaxed I will try with that [03:11] how do i record my X at 1fps into a 30fps video? (timelapse) [03:12] i tried doing ffmpeg -threads 0 -f x11grab -framerate 1 -s 1366x768 -r 30 -i :0.0+0,0 -vcodec libx264 -crf 16 output.mp4 [03:15] -r 30 after the input [03:18] after the input... ok [03:19] nope [03:19] oh well ill just do it afterwards... [03:39] roboman2444: remove -threads 0 [03:40] or place it after the input, too [03:53] hm [04:10] does anyone in here have much knowledge around segmenting mp4 files for use in a MPEG-DASH stream ? [04:14] hi [04:14] when i'm splitting a h264/aac mp4 using "keyframe seeking" [04:15] the audio usually ends up not in sync with the video in the split files [04:15] is there any way to avoid this? [04:15] my command line is something like "ffmpeg -ss 00:40:33.3 -i IN.mp4 -c:v copy -c:a copy -t 00:05:51.7 seg1.mp4" [04:19] my bad, it's present in my input stream [04:19] bleah [04:48] how do you downmix 6ch audio to 2ch? [05:22] crumb, -ac 2 [05:23] diesel42: and for normalizing? [05:39] hi [05:40] i got ffmpeg to extract audio from a microphone, how do i switch it to taking input from speakers? [05:44] you mean capture audio the computer is playing? 
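Since moving -r after the input did not pan out above, one "do it afterwards" route (as the asker suggested) is to capture at 1 fps and retime in a second pass. File names are hypothetical and the factor of 30 matches the 1-in/30-out ratio being discussed:

    # pass 1: grab the desktop at 1 fps
    ffmpeg -f x11grab -framerate 1 -s 1366x768 -i :0.0 -c:v libx264 -crf 16 capture.mp4
    # pass 2: compress the timestamps 30x and write a 30 fps file
    ffmpeg -i capture.mp4 -vf "setpts=PTS/30" -r 30 -an timelapse.mp4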
[05:44] yes [05:45] I use 'ffmpeg -f alsa -i pulse' then the codecs and bitrate stuff [05:45] my current command line: fmpeg -f x11grab -r 25 -s 1920x1080 -i :0.0 -f alsa -ac 2 -i "sysdefault:CARD=Headset" [05:46] output of arecord -L : http://pastebin.com/uyFiPzzh [05:46] Unknown input format: 'pulse' [05:46] I've compiled my ffmpeg with pulse support [05:47] oh wait [05:48] [alsa @ 0x96aa10] cannot open audio device pulse (No such file or directory) [05:48] this comes up when i run ffmpeg -f x11grab -r 25 -s 1920x1080 -i :0.0 -f alsa -ac 2 -i pulse [05:49] how can i check if i am using pulse? i might not [05:50] I just tested this line:'ffmpeg -f x11grab -r 30 -s 1920x1080 -i :0.0 -f alsa -i pulse -c:a libfaac -b:a 192K -ac 2 test.mkv' works fine [05:51] if you just type ffmpeg with no options you should see the options it was compliled with [05:52] look for --enable-libpulse [05:52] i am pretty sure i don't have pulse on my system [05:52] well you may have pulse but ffmpeg was not compiled with support for it [05:53] i am on gentoo, over here you have to do a lot of work to get pulse to work, i am pretty sure i didn't do it [05:57] try sysdefault:CARD=PCH [06:00] its no longer capturing from microphone but all i hear is low noise but not what is coming from speakers [06:01] when you say speakers are you using your usb headset? [06:01] yes [06:01] sorry, headset [06:02] i don't use my MB audio output, although its connected to small speakers in the monitor [06:02] I would think your initial setting would have worked then [06:03] they work, but capture my microphone [06:03] in the headset [06:03] try front:CARD=Headset,DEV=0 then [06:03] but i want it to capture the speakers in the headset [06:04] [alsa @ 0x874a10] cannot open audio device front:CARD=Headset,DEV=0 (Device or resource busy) [06:07] by the way, kde says i have phonon backend gstreamer [06:09] I would try for the sake of elimination, unplug the usb head set, use the speakers and try some of the card options in your list like the sysdefault:CARD=PCH. That way at least you'll know if its a limitation of the headset device. [06:10] because the way your trying it, it does make sense [06:13] hm [06:17] i think its hw:0 but i get an error (Device or resource busy) [06:22] doing a little reading on it. try also specifying a sub device like hw:0.0 or hw:0,1 [06:22] hw:0,0 not . [06:22] hw:0,0 same as hw:0, busy [06:23] i guess something is using it, and whatever it is, i need to get audio from that [12:51] hi all [12:52] I got a Segfault with prores and 15696x2048 input images [12:52] is this the right channel or should I move to ffmpeg-devel? [12:53] This is the right channel for reporting things. [12:53] -devel is for development only. [12:53] thansk [12:53] I have send a mail to the users list with all informations [12:53] https://lists.ffmpeg.org/pipermail/ffmpeg-user/2013-August/016789.html [12:55] I'm not sure if the prores format is able to use 15kx2k and 30kX2k image inputs for video encoding. [12:55] I must create the video in the same size as the input Images are [13:17] thanks for help and hints to get a working movie with 30kX2k [14:17] does anyone know what do big movie studuos use to proceess their videos ? do they use proprietary video encoding software ? [14:25] elkng: We?re using Telestream Vantage and Rhozet Carbon Coder. So yes, proprietary. [14:27] ackjewt: why don't you use ffmpeg ? [14:27] too unstable ? 
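The capture-what-the-speakers-play thread above never reached a working setup; for what it is worth, on systems that do run PulseAudio (the Gentoo asker likely does not, and it also needs an ffmpeg built with --enable-libpulse) the usual trick is to record from the sink's monitor source. The device name below is hypothetical:

    pactl list short sources            # look for a name ending in ".monitor"
    ffmpeg -f x11grab -r 25 -s 1920x1080 -i :0.0 \
           -f pulse -i alsa_output.usb_headset.analog-stereo.monitor \
           -c:a libfaac -b:a 192k -ac 2 capture.mkv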
[14:28] No, we use ffmpeg as well for smaller customers [14:29] for transcode-only applications [14:31] ackjewt: what the company are you talking about it its not a secret ? [14:31] s/it/if [14:31] elkng: Who i work for? [14:34] woohoo [14:35] I have a whole range of avi files with an audio channel I want to remove. Is there a way to remove the channels from all files with a single command? [14:50] ackjewt: I mean what the company you are talking about ? [15:01] test [15:01] Apparently I've been brutally disconnected [15:01] Oh, apparently my original nickname is banned from this channel? [15:12] Does anyone happen to know how to remove an audio channel from a whole batch of files? [15:12] ffmpeg -v info -i input /mnt/stor/Downloads/Simpsons/Simpsony.01.sezon/ -map 0:0 -map 0:2 -vcodec copy -acodec copy /mnt/stor/Downloads/The Simpsons/Season 1/ [15:12] whops [15:12] Either way, I need that to work for all files inside the input folder [15:20] ThePendulum, you need to script it, ffmpeg can't do that directly [15:25] Ahke [15:32] hello all, trying to compile 3.99.5 but I keep getting a libmp3lame error. I just compiled the latest version of it and still no luck. [15:33] What do is needed to be know to help? [15:33] I have the config.log handy. [16:04] Hi Borg^Queen , I haven't compiled ffmpeg for a while, but what is the error ? [16:04] hi grepper! [16:04] :) [16:04] ERROR: libx264 must be installed and version must be >= 0.118. [16:05] I resolved the libmp3lame but this one is harder. [16:05] I have the latest snapshot of x264 [16:05] going for the log [16:05] have you installed it systemwide? [16:05] anything specific you want to know or should I just pastbin [16:05] if not, did you tell autoconf where to look? [16:06] just pastebin everything [16:06] let me show you the config line [16:06] Borg^Queen: too much info is better than not enough :) [16:06] ./configure --enable-gpl --enable-libx264 --enable-libmp3lame --enable-nonfree --enable-libaacplus --extra-ldflags=-L/usr/local/lib --extra-ldflags=-L/usr/lib [16:06] ok pastebinning [16:09] http://pastebin.com/Npc1ANan [16:09] there [16:09] but I think I figured it out [16:10] 2 versions of libx264 installed ? [16:10] that I didn't notice, hang on [16:10] okay, I was just guessing at what you figured out, don't let me sidetrack you :) [16:16] I don't see any z264 *.h files [16:18] ok going to recompile x264 any suggestions as to how to make it include headers? [16:18] x264 shouldn't really need any headers [16:18] uh [16:18] off hang on [16:18] wait i take that back :X [16:18] odd I meant [16:18] ok brb [16:20] guys will you be here in about an hour? [16:20] maybe not for me [16:20] need to help out a crackmonkey on win8 [16:20] grepper: not to worry [16:21] nice to sort of hear from you again ! [16:21] i'll be probably here [16:21] thanks, been so busy since 8 came out, it's crashing and committing suicide at a greater rate than vista [16:21] klaxa: thanks mate bb as soon as I can, [16:59] I'm trying to get a jpeg2000 viewer for linux 64bit. All I could find is that there would be a "OPJviewer", which supposedly would be in the openjpeg svn trunk. So, I downloaded truck, followed the build instructions (cmake .; make) and got no viewer :/ [16:59] How can I get it to compile the viewer, if at all possible? [17:08] hi [17:08] Which video from Khan Academy is better quality, YouTube: http://pastebin.com/p1Tqq5bk or DDL: http://pastebin.com/hwJJFaQq ? 
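For the batch question above, a minimal sketch of the scripting approach durandal_1707 points at, built around the -map line that was pasted; paths are abridged from the asker's, and the 0:0/0:2 stream layout is assumed identical across files (worth double-checking with ffprobe):

    for f in /mnt/stor/Downloads/Simpsons/*.avi; do
        ffmpeg -i "$f" -map 0:0 -map 0:2 -c:v copy -c:a copy \
            "/mnt/stor/Downloads/The Simpsons/Season 1/$(basename "$f")"
    done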
[17:09] you can't judge video quality from encoding settings [17:09] video quality is a subjective matter [17:10] Actually I'm fairly certain video quality is an objective measure [17:10] Subjective would be your personal tradeoff between file size and picture quality / frame rate [17:10] comparing encoders would be a much simpler process if that was the case [17:11] the ddl version is a quarter of the size of the youtube video despite being a lower profile so it's fairly safe to assume it's worse, though [17:11] but both are so hilariously low bitrate that I assume it's just a talking head video and the video quality is pretty irrelevant [17:12] supe: you can measure video quality (otherwise comparisons would be based on >my opinion is better than yours) [17:12] but you cannot tell whether a video is of good quality without seeing it [17:13] klaxa: Yes, but you should be able to determine given textual attributes of each; which is better quality overall [17:13] if you know the source material you can calculate it [17:13] supe: no [17:13] yes, but i could have encoded it 9001 times with 9001 different codecs [17:13] and the result looks like shit [17:14] i could apply 165 filters to make it look bad [17:14] i could telecine it and use a deinterlace filter to make it look bad [17:14] you cannot tell video quality without looking at the video [17:15] klaxa: Sounds like you're spouting bullshit... [17:15] if you want to believe that, it's fine with me [17:18] http://illogicallabs.com/paste/00000007.txt I've attempted recompiling the same ffmpeg build on a second computer running ubuntu, I get the same error as I do on my Raspberry Pi for this command, does anyone know where I'd find a full list of parameters that can be passed to the -f flv or rtmp handlers? [17:18] grepper: still there? [17:18] klaxa: ? [17:19] yep [17:20] Just got back. I feel so badly for this guy. He just bought a new computer but, because he had a second hd installed, sony will not honour the warranty. MS says the problem is hardware, sony says it's win8. He packed up the computer and tried to give it to me. When I got there is was all boxed up on the porch. [17:20] I took it inside, installed linux and he's chugging away, doing his writing. [17:20] /usr/src/openjpeg/openjpeg-trunk/src/bin/wx/OPJViewer/source/imagjpeg2000.cpp:672:2: error: opj_event_mgr_t was not declared in this scope [17:21] Borg^Queen, that kinda sucks [17:21] Datalink: (cute nick), it had a happy ending. [17:22] so onward to ffmpeg and it not seeing x264 [17:22] but that's likely because x264 build without headers [17:22] this is exactly why I hate closed box computer waranties though, a computer is meant to be a general purpose machine, with upgrade options... [17:22] Borg^Queen, it's distro dev package [17:22] Datalink: agreed [17:22] like cars? :) [17:22] yeah [17:23] Datalink: I build from source. [17:23] joy... probably should for this application [17:23] aye [17:23] which means building it on a Raspberry Pi then rebuilding ffmpeg [17:24] Someone dumped a working sony blueray player with lots of extras, so I picked it up and put it back into service. It plays video off usb stick. [17:24] why raspberry and Pi? [17:25] oh you're speaking of something else. 
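On the question above about where the -f flv and RTMP knobs are documented: the -h form used elsewhere in these logs works for muxers too, and the protocol options have their own manual.

    ffmpeg -h muxer=flv        # private options of the FLV muxer
    ffmpeg -protocols          # protocols this build supports
    man ffmpeg-protocols       # rtmp_* options (rtmp_app, rtmp_playpath, rtmp_flashver, ...),
                               # also online at ffmpeg.org/ffmpeg-protocols.html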
[17:25] the RPi is the computer for the application, I've got a second test machine so I can remove "because it's a cheap embedded CPU" from the equasion [17:26] ah [17:30] I guess I missed grepper [17:31] ehe, it was someone in dev who thought the Pi would be too slow for standard def video [17:31] Seriously... 'DESTDIR=/usr/src/openjpeg/install make install' installs in /usr/src/openjpeg/install/usr/local ?!? [17:32] So then the INSTALL instructions, telling people to do: [17:32] DESTDIR=$HOME/local make install [17:32] will install in $HOME/local/usr/local lol [17:32] is this to be used by the user alone? [17:33] I'm glad I didn't install it as root. [17:33] I suppose you missed all my previous questions :/ [17:33] I did, sorry mate. [17:34] I'm trying to get OPJviewer to compile [17:34] what distro? [17:34] debian [17:34] ah, good distro [17:36] cmake . -DCMAKE_INSTALL_PREFIX=/usr/src/openjpeg/install [17:36] make [17:36] make install [17:39] cmake . -DCMAKE_PREFIX_PATH=/usr/src/openjpeg/install -DCMAKE_INSTALL_PREFIX=/usr/src/openjpeg/install -DBUILD_VIEWER:BOOL=ON [17:39] That should work.. but it doesn't [17:40] This package doesn't even search for it's own libs? :/ [17:40] hang on, I remember something about it, one moment [17:40] My problem is that it find /usr/include/openjpeg.h .. which is the wrong version [17:40] finds* [17:41] Borg^Queen, just recompiled libx264 on the test machine, same error [17:41] ah, don't know what to do about that. sorry [17:41] I want it to find and use /usr/src/openjpeg/install/include/openjpeg.h [17:41] Datalink: ugh [17:41] I was compiling ffmpeg, [17:41] Borg^Queen, yeah, I've been working on this for 2 weeks now, starting to get more than a tiny bit frustrated [17:41] no headers on x264? [17:41] oh well,, guess ... [17:42] poor lad/lass [17:42] ./configure --enable-static --enable-shared [17:42] that was libx264's build config [17:43] aye, same, let me see what I've got now. [17:44] blast, no headers [17:44] no includes [17:46] BINGO [17:47] Datalink: are you trying to compile x264 right? [17:47] Borg^Queen, yeah, I did, (still on the Pi) [17:47] ok hang on [17:47] thanks [17:48] ./configure --prefix=/usr --enable-debug --enable-static [17:48] try that [17:48] I just got a successful build with headers and all [17:50] building now, Ubuntu system [17:50] hmm [17:51] why set prefix to /usr ? [17:51] that would mishmash it with the packaged things [17:51] use /usr/local preferably [17:52] true, [17:52] let me try that. [17:52] Action: Datalink restarts the build [17:53] thank you JEEB [17:55] brb knock at the door [17:56] thank you JEEB, working nicely. [17:56] brb [18:06] still getting IO error on test machine [18:08] blast [18:08] and I broke a nail on a bloody windows computer. [18:08] I blame MS for all this [18:08] yeah :/ [18:08] ouch [18:09] ah bugger, it's bleeding, brb [18:11] JEEB I got a cleaner rpm with local, thanks so much [18:15] Datalink: any luck? [18:15] not yet [18:15] :( [18:16] yeah... taking a break for a bit [18:16] I'll probably chat with support or MitM it Monday [18:17] Sorry I wasn't able to help [18:18] it's okay, heh, I'll keep at it in a bit [18:18] aye, "never give up, never surrender" [18:18] aye, but take breaks to stretch when needed [18:18] aye [18:19] blast, ffmpeg still isn't finding x264 [18:19] the 'fun' part of this will be making it work with the city network [18:20] did you enable GPL? [18:20] hang on [18:20] if you compiled x264 from source, did you configure it with --enable-shared ? 
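Spelled out, the sequence being suggested in the last few messages looks like this; the prefix is an example, and sudo/ldconfig apply to a system-wide install of the shared library:

    # x264 from source, with the shared library ffmpeg's configure looks for
    cd x264
    ./configure --prefix=/usr/local --enable-shared
    make && sudo make install && sudo ldconfig
    # then ffmpeg against it
    cd ../ffmpeg
    ./configure --enable-gpl --enable-libx264
    make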
[18:21] checking config line [18:21] recompiling now [18:22] gpl enable yes, enable shared, no, redoing [18:22] no wait [18:22] --enable-shared for x264 [18:22] pardon? [18:22] if you compiled x264 from source, you have to enable shared libraries for x264 [18:22] i.e. compile libx264.so [18:23] this has nothing to do with ffmpeg - yet [18:23] compiling it again [18:23] ok so just --enable-shared is the addon aye? [18:23] for the configure line for x264 [18:23] if you compiled x264 from source [18:23] only applies if you compiled x264 from source [18:23] ahhh ok I see, what you are saying, ok recompiling x264 [18:24] thank you klaxa [18:25] brb another windows machine just came in... [18:43] ok back [18:45] Here's a new one. Guy brings his mac in to be updated (second hd more ram), goes to pick it up, the "genius" wants to show him the computer up and running, plugs it in and it blows up. The owner, very angry, demands money back and repairs. Of course the manager says ok to the money back but no to the repairs. So, one of the lower tech suggested he take it to me to see why it blow up (the geniuses couldn't find a cause). [18:46] I open the case, smell burnt and follow my nose. They plugged a power cable (with an adapter) into a usb pin array. [18:46] The board is extra crispy. [18:47] That's a new one for me. Never seen that before. [18:47] .... [18:47] SERIOUSLY? [18:47] That I've never seen that before or that they did it? [18:47] both [18:48] what was it, a fan power cable? [18:48] then seriously to both. Are you telling me you've seen power cable connected to usb ? [18:48] LOL Yes! Oh no, seriously mate, you've seen this before? [18:49] not as a messed up wiring job, but as a test job [18:49] the power cable had an adapter on the end of it [18:50] we've got a cable that goes from 5 pin XLR to a NEMA 15 outlet at the studio, I've yet to figure out what that one's for [18:51] ok ..... that's odd [18:51] meh, you see odd stuff in TV [18:52] klaxa: thanks seems to have worked [18:52] great [18:52] hang on let's see of ffmpeg finds it now [18:55] so far no error, usually it spits out a bloody error in seconds [18:56] YES! YES! It configed! [18:57] moving on to make [18:57] Datalink: perhaps your problem is similiar? [18:59] I donno [18:59] I'm not getting build errors [18:59] what are you getting? [19:00] a runtime error trying to make it work with an RTMP stream [19:01] more specifically an IO error [19:01] oooooh [19:15] I'm always finding strange stuff in the recyclers, (he lets me pick through the electronics dumpster), I found an intel board, intact, that was sitting in water for several days. I allowed it to dry out for a week. It booted up! [19:17] you found an intel board? i found a sony laptop [19:18] the graphics chip was broken, it was a known issue with the batch [19:19] got a friend to help me, we took it apart down to the mainboard, put that in the oven for 8 min at 200?C, got it back out, put everything back together and it works flawlessly again [19:19] it heats up even less than the laptop i bought, because we applied new thermal paste [19:19] also, re: supe, which video is better based on encoding settings: http://dedi.klaxa.eu/bbb1.mkv.txt http://dedi.klaxa.eu/bbb2.mkv.txt respective video files are http://dedi.klaxa.eu/bbb1.mkv and http://dedi.klaxa.eu/bbb2.mkv [19:20] tell me which one is better before downloading based on the output of mediainfo [19:28] I don't normally like sony stuff [19:28] how did you resolve the video problem? 
[19:28] >put that in the oven for 8 min at 200?C [19:29] ok how did that fix it? [19:29] it was it a water damage thing [19:30] ah my br rip of the original BSG is almost complete. [19:30] there were some contacts on the graphics chip that weren't connected properly, heating up the chip causes the stuff that connects them to melt and reconnect properly [19:30] 2 hrs 40 mins to go.... [19:30] klaxa: that's brilliant. How did you come up with that? [19:30] internet :X [19:30] what is this internet you speak of? [19:31] still, I'm amazed it worked. [19:32] well we booted it up with some linux live sticks first, not using the nvidia graphics drivers worked up to a resolution of 1024x768 [19:32] it had minor graphic glitches, but when it ran for a while the glitches would disappear (i guess metal expands under heat :P) [19:32] so it's an older laptop [19:33] actually a friend of another friend suggested we try baking it [19:33] 2008 i think [19:33] I like your friends [19:33] but it has an intel core 2 duo at 2 ghz so it can decode 1080p in software [19:33] pretty smoothly [19:33] nice [19:33] i use it to watch stuff on my projector so i don't have to use my regular laptop [19:34] this laptop an IBM/Lenovo T60, I got from someone for $25 quid [19:34] er dollars [19:34] also if i used my regular laptop there would be a lot of shit because of video synchronizing with X and video tearing and window manager overhead and general cpu usage because of other stuff [19:34] i don't want all that [19:35] He said it was dead. I booted it up and find windows couldn't find it's boot yadda yadda. [19:35] I use this one for work and to play videos. I usually do both. I'm currently encoding to mkv on it as well as compiling [19:36] each cpu is at 62/63 C [19:36] Of course, it could barely play video in 1080p with vista. [19:36] another reason that guy was convince it was a POS. [19:37] Vista'll bog any hardware down though [19:38] ffmpeg is taking its sweet time compiling, 40 mins so far. [19:38] Datalink: true, but I installed XP and it barely played. It was better with mplayer, but still only 720 [19:38] with linux, I can play 1080, encode and compile, with smooth video playback. [19:39] Related- my sister's macbook pro died, so she took it the "Genius Bar" at her local Apple store. They said it couldn't be fixed and recommended she buy a new one. Unfortunately she lives 12 hours away, but I told her if she mailed it to me I could proabaly fix it. She declined and bought a new. I begged her to send it to me since it was unfixable. [19:39] She did...and it just needed a new hard drive :/ [19:39] truly unfixable [19:39] See! [19:39] well, for me it's :-D [19:40] I noticed everyone puts "genius" in quotes well. [19:40] She won't make that mistake again. [19:40] i sure hope so [19:40] Because they're not [19:40] how much did she pay for a new wacbook? [19:40] around $1500 I think [19:40] relaxed: aye, they bloody well aren't. [19:40] OH! [19:41] what did she say after you fixed it? [19:41] She wasn't pleased, haha. [19:41] probably choice words about Apple [19:41] relaxed: she wasn't? Why? 0_0 [19:41] Now it runs Debian, so we all win. [19:41] why would she get another macbook anyway? does she need os x? [19:41] relaxed: The Rescuer [19:42] Borg^Queen: you wouldn't be pleased either if you get ripped off [19:42] by the official support [19:42] I have a large bin filled with ipuds, ifonnies, macbooks. [19:42] tried to sell them, they'd only give me $1 for each item. 
[19:43] buying any new computer, especially a mac, is a rip off. [19:43] Seriously, it's messed up that it was the hard drive and they lied, saying it couldn't be fixed. She works from home so she felt pressured to get another one quickly. [19:43] custom is the only way to go. [19:43] That's sad. [19:43] She was happy I got it working, for me, but it sucks for her. [19:44] I need an intel mac one of these days [19:44] I have several in a box. [19:44] blown boards [19:44] I need a running intel mac :P [19:44] LOL [19:44] stupid XCode [19:44] ? [19:45] XCode the dev environment for Apple products [19:45] ok someone to pick up his computer. I need to sit him down and explain he can't install the "plugins" from porn sites. [19:46] i don't write a lot of software, but the software i do write runs on linux and that's pretty much it [19:46] brb, this will be a while... [19:46] klaxa: amen... [19:46] brb [19:46] although, that one C project i wrote uses POSIX only so it *should* run on a mac [19:46] I remember that talk with my dad, his wife and I ripped into him for it [19:46] it runs on solaris at least [20:20] I want to stream from a Camera over http, here is the pastbin link for http://pastebin.com/jrCjfuiZ. And I get error message "Could not find codec parameters (Video: mjpeg)" [20:21] what was the output of the command? [20:21] the Output was "Could not find codec parameters (Video: mjpeg)" [20:22] Is that an input or an output error? (Pasting the full output would be advisable.) [20:23] Here is the pastlink to the full output http://pastebin.com/TPqVmLqh [20:28] a) you are using avconv, try #libav b) include your input command line too, c) are you sure accessing the .cgi page is correct? [20:28] klaxa, he pastebinned the command earlier [20:29] ah [20:29] no i am using ffmpeg [20:29] >*** THIS PROGRAM IS DEPRECATED *** [20:29] >This program is only provided for compatibility and will be removed in a future release. Please use avconv instead. [20:29] no you are not [20:29] i have used the command with rtsp protocol in the past and it worked. [20:30] ffmpeg had an agressive fork a few years ago... Debian branches went with the libav branch, which is slower to adapt fixes [20:32] hearsay [20:32] yes i am using ffmpeg pls. i will give the result from avconv shortly [20:33] ok back. 1. ffmpeg successfully built, I have nice clean rpms. Thank lads/lasses, thank you Yoda klaxa [20:33] 2. that was a very awkward conversation. [20:33] heh, np and great it worked [20:33] thank you lad [20:33] well building ffmpeg with x264 from source is a thing i have done so many times [20:33] It's sad that "there's no support for linux" [20:34] by now i know what to do and what to look out for [20:34] And I am wiser for your tutorage. [20:34] here the link for using avconv command http://pastebin.com/t9LQH8YK [20:35] blah, I need to find out how to deal with stream name, user and pass properly for my stream server [20:35] 53 mins left to see if the BSG 1979 mkv was a waste of 12 hours [20:35] Datalink: open source? :X [20:35] hey is there any reason to install live555? [20:35] klaxa, mn, I'm honestly very burned out on trying to make this work [20:37] i had rebuilt ffmpeg from source including "--enable-decoder=mjpeg" [20:39] sco_: Is it an actual mjpeg? Or it is just a static image that gets refreshed every X seconds? [20:40] i do not know. How can i find that out. the Camera is ""wanscam". 
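One way to settle the "actual MJPEG or a refreshed still?" question above is to look at the HTTP Content-Type the camera answers with; the URL below is hypothetical, patterned on the camera URLs earlier in these logs:

    curl -sI "http://admin:1234@192.168.1.120/videostream.cgi" | grep -i '^content-type'
    #   multipart/x-mixed-replace -> a genuine MJPEG stream
    #   image/jpeg                -> a single snapshot the page merely refreshes
    # some camera CGIs ignore HEAD requests; in that case dump the headers of a GET
    curl -s --max-time 2 -D - -o /dev/null "http://admin:1234@192.168.1.120/videostream.cgi"
    # or let ffprobe identify the stream
    ffprobe -v verbose "http://admin:1234@192.168.1.120/videostream.cgi"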
[20:42] remove -vcodec mjpeg and try again [20:42] Ok [20:44] i have just tried without -vcodec mjpeg and the result is the same. [20:46] this worked flawlessly with vlc. But vlc is too noisy [20:55] so, i've noticed a few Lavf UA headers coming in to an audio service that I run, I think there's a few set top DVRs that are using ffmpeg [20:55] our TriCaster and LiveView.Pro both use ffmpeg under the hood [20:55] as does the studio's CloudCast video server [20:56] as does probably any video streaming server that works :X [20:56] lol [20:56] a good portion of them, yeah [20:57] i'm just wondering how many of these different vendors 1) have the patents for mp3/aac support 2) have anything documenting that they use ffmpeg at all [20:58] i'm looking at possibly releasing an open source audio encoder with some features, but the legal issues are keeping me from even thinking about releasing a binary, which limits distribution a bit. [20:58] it's in the manual for the studio equipment I use [20:59] ogg? [21:00] well, if you're trying to offer the ability to stream to multiple devices and such, mp3 and aac are what have the most penetration [21:01] i wrote a program that re-streams mp3, am i in trouble now? :I [21:01] i don't change the mp3 stream at all [21:01] i don't know how all this royalty stuff works anyway [21:01] if you're in the US, i think that's technically infringement...but not for AAC [21:02] It turns out, never needed to compile ffmpeg. All I needed was to have compiled x264 properly for mplayer to compile with x264 support. [21:02] >: [21:02] brb [21:02] klaxa: btw, i might have to end up doing that...i'd like to avoid tying users to a service, but i might just do the aac/mp3 myself and use ogg/opus on the client [21:03] (streaming source client, that is) [21:03] i stream mp3 and re-stream the same stream [21:03] everything mp3 because the format is so damn simple [21:03] guys any help here for my problem? [21:03] brb [21:04] aac + adts is pretty simple, and it supports slicing on any frame boundary [21:04] :o [21:04] sco_: how did it work with vlc? [21:27] klaxa here is the past to the vlc command http://pastebin.com/bQ0EPpkm [22:29] hmm no mplayer is exiting make with 2 errors [22:35] could someone with a bit of time and experience please look at http://pastebin.com/Xbz4UY5d, and tell me what I'm doing wrong. [22:35] now mplayer isn't compiling. [22:57] Borg^Queen: gcc version? [23:03] one moment [23:03] gcc-4.3.1-0.134806.1ark.i586 [23:05] should be new enough i think? anyway, it looks like there is stuff your compiler doesn't support [23:06] >ibavfilter/internal.h:276: error: #pragma GCC diagnostic not allowed inside functions [23:06] but it compiled before, which is what is confusing [23:17] Borg^Queen, why do you have such an old gcc? [23:17] i have 4.7.2 and this is debian stable [23:18] I'm keeping my fav distro, which is technically "dead" until I find something of equal power and versatility and something that uses either kde 3.5.x or TDE with RPM. [23:19] Ark Linux is faster, runs on old and new stuff. It's never failed me. [23:19] ok ... [23:19] I haven't been able to find anything that can do everything Ark does for me. Sadly I know I have to find something. [23:20] well, crunchbang is nice, but probably not what you are after [23:20] crunchbang? 
[23:21] when in doubt LFS [23:21] http://crunchbang.org [23:21] thanks [23:22] interesting [23:22] Datalink: at this point, it's pretty much what I'm doing [00:00] --- Sun Aug 11 2013 From burek021 at gmail.com Mon Aug 12 02:05:02 2013 From: burek021 at gmail.com (burek) Date: Mon, 12 Aug 2013 02:05:02 +0200 (CEST) Subject: [Ffmpeg-devel-irc] ffmpeg-devel.log.20130811 Message-ID: <20130812000502.3E87A18A028A@apolo.teamnet.rs> [00:00] nice, now I can resume my work on av_bprint_options [00:06] ffmpeg.git 03Michael Niedermayer 07release/2.0:d5dd54df69ec: MAINTAINERS: add myself as maintainer for the interface code to swresample & swscale in libavfilter [00:06] ffmpeg.git 03Michael Niedermayer 07release/2.0:ec334232731a: MAINTAINERS: drop 1.1 from the releases that i maintain [00:06] ffmpeg.git 03Michael Niedermayer 07release/2.0:15ea618ef629: MAINTAINERS: order libavutil entries alphabetically [00:06] ffmpeg.git 03Michael Niedermayer 07release/2.0:b2a9f64e1b12: MAINTAINERS: Add some maintainers for parts of libavutil [00:06] ffmpeg.git 03Michael Niedermayer 07release/2.0:f09f33031b9d: MAINTAINERS: alphabetical sort [00:06] ffmpeg.git 03Matthieu Bouron 07release/2.0:01838c5732e7: MAINTAINERS: add myself as maintainer for lavf/aiff* and lavf/movenc.c [00:06] ffmpeg.git 03Michael Niedermayer 07release/2.0:1155bdb754b0: MAINTAINERS: remove myself from movenc, 2 maintainers should be enough [00:06] ffmpeg.git 03Michael Niedermayer 07release/2.0:fa004f4854db: MAINTAINERS: add Alexander Strasser for the server [00:06] ffmpeg.git 03Michael Niedermayer 07release/2.0:fd2951bb53b3: update for 2.0.1 [00:28] ffmpeg.git 03Michael Niedermayer 07release/2.0:acf511de34e0: avcodec/g2meet: fix src pointer checks in kempf_decode_tile() [00:49] ffmpeg.git 03James Almer 07master:214293b1433d: lavd: Fix make checkheaders [00:49] ffmpeg.git 03Thilo Borgmann 07master:4dcb2f74786e: lavu: fix grammar in doxy for av_frame_ref. [01:41] ubitux: not sure, add an assert(0) or so in them and see if they trigger [01:41] ubitux: I think they didn't trigger (at least in the first frame) when I tested them, but I may be wrong ofcourse [02:06] ubitux: I have some half-assed loopfilter strength calculation code in my tree. will now work on the actual simd for the loopfilter itself, once it works I'll commit [02:42] ffmpeg.git 03Thilo Borgmann 07fatal: ambiguous argument 'refs/tags/n2.0.1': unknown revision or path not in the working tree. [02:42] Use '--' to separate paths from revisions [02:42] refs/tags/n2.0.1:HEAD: lavu: fix grammar in doxy for av_frame_ref. [04:21] with hwaccel, what is AVFrame.data[0] supposed to contain? 
[08:39] the same as [3] but only because it must be non-null [10:26] ubitux: Daemon404: OK I have a rough first implementation of loopfilter done, not exactly working yet and certainly nothing close to finished, but just so you know where I am [10:27] ok :) [10:27] Action: ubitux is still trying to figure out the problem with idct32 :p [10:27] just printf all intermediates for a case where it fails in libvpx and your code and see where it goes off [10:27] ffmpeg.git 03Vittorio Giovara 07master:b3dc260e7fa6: h264: return meaningful values [10:27] ffmpeg.git 03Michael Niedermayer 07master:e0b45ca730f2: Merge commit 'b3dc260e7fa6f3f852dd5cb7d86763c4b5736714' [10:27] ffmpeg.git 03Michael Niedermayer 07master:019eb2c77b7c: svq3: Fix ff_h264_check_intra_pred_mode() return code check [10:28] yeah i was considering this [10:32] that's how I debugged idct8/16 [10:32] idct4 was correct on the first try [10:33] ffmpeg.git 03Vittorio Giovara 07master:5eb488bfa835: h264: use explicit variable names for *_field_flag [10:33] ffmpeg.git 03Michael Niedermayer 07master:d2d8e259fd2e: Merge commit '5eb488bfa835f2902a31ba99d57c16ae36c4f598' [10:39] (then again you have to admit, idct4 is kind of easy :) ) [10:40] BBB: any simpler way than editing libvpx code to disable asm optim? i don't see any runtime nor configure flags to disable them [10:41] there's an environmental flag [10:41] let me figure out what it was [10:41] ah, ok [10:41] BBB: don't worry, i'll look at it, thx [10:41] ffmpeg.git 03Vittorio Giovara 07master:c1076d8479a6: h264: check one context_init() allocation [10:41] ffmpeg.git 03Michael Niedermayer 07master:921c1d4c95a8: Merge commit 'c1076d8479a6c0ee2e0c4b0e2151df5b0228438e' [10:42] VPX_SIMD_CAPS and VPX_SIMD_CAPS_MASK i guess [10:42] ah yes [10:43] setting one of them (or both) to 0 as an env var in your shell should disable all simd [10:43] then you can edit the c function to your liking to spew you with evil output [10:47] idct4? do you use both dct and dst for intra 4x4 ? [10:48] and 8 and 16 [10:48] they're separable 1d filters [10:48] so can be used in any order [10:48] (you could also do rectangular ones, but we didn't do that in vp9) [10:53] vc1 is the only codec I know that uses rectangular transforms [10:53] right [10:54] dont wanna go there i guess ;) [10:55] I'd thought you wouldn't have gone towards dst either ;) [10:56] dst is kinda useful [10:57] it's certainly a big technical investment, i.e. a ton of extra code for really only intra gains at this point, but it's cool technology [10:57] I think we could use it in inter also, but nobody looked into that [11:00] BBB: are you sure vp9 decoder honors those caps? 
[11:00] ubitux: I believe it does yes [11:01] i wonder what i'm doing wrong then, the caps are indeed set to zero, but the _c func are still not called [11:01] export VPX_SIMD_CAPS=0 [11:01] i did that [11:01] oh it might be that there's only a sse2 unction [11:01] the caps are set to zero at runtime, this part is ok [11:01] these are specialcased on x86-64 [11:01] #define func func_sse2 [11:01] instead of them being runtime allocated [11:02] so then you need to do a x86-32 build or comment out the sse2 function in vp9/common/vp9_rtcd_defs.sh [11:02] ffmpeg.git 03Luca Barbato 07master:5a9a9d4a2abe: lavc: Add refcounted api to AVPacket [11:02] ffmpeg.git 03Michael Niedermayer 07master:67a580f423f2: Merge commit '5a9a9d4a2abefa63d9a898ce26715453c569e89d' [11:02] ffmpeg.git 03Michael Niedermayer 07master:b905a7137a51: avcodec/avpacket: Fix memory allocation failure check [11:02] right i was looking at vp9/common/vp9_rtcd_defs.sh [11:04] much better.. [11:04] thx [11:05] np [11:06] BBB: If you used the starting point I'm thinking of, the author only worked on intra; I think inter needs rdo decision to bring gains [11:06] well of course, but that's not an issue [11:06] I mean, let's be honest, vp9 is dog slow, so making it a little slower is ok [11:06] modeling can be done late [11:07] I thought at this stage that vp9 was mostly done [11:07] it is, I'm thinking in terms of vp10 [11:07] or 11, or 12 [11:07] ok [11:08] vp9 bitstream is done and there won't be anything more [11:08] (for vp9) [11:08] I think there's a web profile planned for alpha, 444, rgb etc. support, and a high profile (long future) for 10bit [11:08] but for now, this is it [11:11] ffmpeg.git 03Luca Barbato 07master:4ebc7d659f0d: rtmp: Use PRId64 when needed. [11:11] ffmpeg.git 03Michael Niedermayer 07master:a2b0699f4f0a: Merge commit '4ebc7d659f0da6c1305ca08cf4303959203fff4b' [11:16] ffmpeg.git 03Luca Barbato 07master:ba5393a609c7: rtmp: rename data_size to size [11:16] ffmpeg.git 03Michael Niedermayer 07master:06186a3160d0: Merge commit 'ba5393a609c723ec8ab7f9727c10fef734c09278' [11:17] BBB: well, actually, it seems my function outputs the same thing [11:17] so the problem might be elsewhere [11:22] does the 2d one output the same thing? [11:22] maybe dequant is broken? [11:22] dunno [11:22] i was going to check the 2d [11:22] (there's a specialcase there for 32x32 [11:22] right [11:22] sounds good [11:23] i've pushed a clean 1d if you want to pick/merge/squash/whatever [11:23] the style is not exactly the same as the other 1d (inlined int and space in parenthesis, but i'll fix that later) but the logic matches [11:24] ffmpeg.git 03Martin Storsj? 07master:8e1fe345577a: rtmp: Detect and warn if the user tries to pass librtmp style parameters [11:24] ffmpeg.git 03Michael Niedermayer 07master:6c7a05352f5e: Merge commit '8e1fe345577a42f99591caf8a06c447613449694' [11:25] ok sounds good [11:25] I'll try to merge verysoon [11:26] nice work [11:26] not very useful yet though :p [11:27] it's ok [11:28] once you debug the 2d it should be easy to figure out [11:28] ffmpeg.git 03Martin Storsj? 07master:aa16a6b0c56e: doc: Extend the rtmp example to include how to pass username/password [11:28] ffmpeg.git 03Michael Niedermayer 07master:78242e431028: Merge commit 'aa16a6b0c56e3f46c5d7efb706b87a8f7a1603ec' [11:31] heh that's indeed not the same in the 2d [11:32] ok got it [11:32] BBB: fixed [11:32] will push soon [11:33] ffmpeg.git 03Martin Storsj? 
07master:a435ca5b4d9e: doc: Explain that the default RTMP user agent is different when publishing [11:33] ffmpeg.git 03Michael Niedermayer 07master:159dfd26259d: Merge commit 'a435ca5b4d9efebf0759220681010977c36615f7' [11:33] BBB: pushed, you can merge [11:33] (i squashed it) [11:33] i was using 7 bit instead of 6 for the 2d [11:34] it decodes fine now [11:35] BBB: what do you want me to do now? [11:36] gonna watch an anime while you decide [11:36] ;) [11:39] ffmpeg.git 03Martin Storsj? 07master:3bea53dbdf16: doc: Add librtmp to the section header for the librtmp specific details [11:39] ffmpeg.git 03Martin Storsj? 07master:d175a5730b42: doc: Add an example on publishing over RTMP [11:39] ffmpeg.git 03Michael Niedermayer 07master:5993b962691a: Merge commit 'd175a5730b42166704b7262b33f4b780d9d92f60' [11:54] ffmpeg.git 03Martin Storsj? 07master:205a4502d3da: doc: Clarify the avconv section about -re [11:54] ffmpeg.git 03Michael Niedermayer 07master:99091ff21744: Merge commit '205a4502d3da9de2db75d2c965c9d065574e9266' [12:06] ffmpeg.git 03Luca Barbato 07master:5718e3487ba3: rtmp: Do not misuse memcmp [12:06] ffmpeg.git 03Michael Niedermayer 07master:15c92f8c486a: Merge remote-tracking branch 'qatar/master' [12:37] ubitux: uh, dunno yet, help daemon404 with inter coding [12:37] ubitux: keyframes seem mostly finished now [12:38] ubitux: unless you want to write the remaining 6 32x32 intra pred functions [12:40] Daemon404: do you need/want help or something? [12:41] otherwise yeah i guess i'll go look for the remaining intra pred functions [12:41] pushed I think [12:43] yeah first frame of akiyo decodes correctly now [12:45] oh no there is a difference [12:48] fixed [12:48] ok keyframe is good now [12:48] apparently this one has no loopfilter [12:48] weird [12:48] anyway, yes, off to the second frame [12:48] once that's done we should be mostly good except for small one-off bugs [12:49] then we can work on performance (micro-optimizations, simd, threading, etc.) [12:49] :) [12:50] daemon404 said he was working on code for parsing the mb data, I think [12:51] so let's see how far he is [12:58] BBB: btw, in idct16, last t7 assignment was "lucky" [12:58] it looks like it should have been a t7a [12:58] t7 = t0 - t7; ? [12:58] yeah [12:58] t7a = ... [12:58] t4 also [12:58] in practice it doesn't matter [12:59] right [12:59] yeah I think that was my logic [12:59] it doesn't matter much in practice [12:59] I'm fine with changing it [12:59] it's ok :p [12:59] the idea is simply that the code as-is serves as a boilerplate for simd [12:59] and in simd, let's be honest, registers have no names, so it doesn't matter much [13:00] how is simd gonna work for those function? making the function doing multiple 1d at the same time? [13:01] so take the second 1d idct4 as an example [13:01] (the first 1d needs some changes that are trivial but haven't been done yet) [13:02] what you're doing is that you're reading (from tmp[4*4]) IN(0-3), which is tmp[0*4], tmp[1*4], tmp[2*4] and tmp[3*4] right? 
[13:02] and then calculate t0, t1, t2, t3 [13:02] so instead of doing that once, you can load 8 bytes, representing tmp[0*4] for the first loop (i=0 in the wrapper) as well as for all other iterations of that loop [13:02] then subtract/add with each of these [13:03] and store the result [13:03] that's the basic idea of simd: do the same operation multiple times, assuming each one in different locations of the same register can be done independently [13:04] using mmx (8byte registers), you can do 4 2-byte calculations for a idct/iadst (e.g. a 4x4) [13:04] ok, so indeed multiple 1d [13:04] using sse (16byte registers), you do 8 at a time [13:04] so you can do a 8x8 without a loop [13:04] i was wondering if it was that way, or if the function itself was trying to make several op at the same time [13:04] 16x16 is harder in sse2, you need some intermediate memory stores, but avx2 allows it to be done loopless again [13:05] 32x32 is a bitch :) [13:05] but can still be done, it's just more work [13:05] ok :) [13:15] there are some trailing whitespaces btw, i'm assuming they will be blocking for pushing [13:15] probably [13:15] I'll remove them later [13:16] or if you can, go for it [13:16] ffmpeg.git 03Mark Harris 07master:4ccafaca1cf6: avformat/id3v2enc: use UTF-16 in id3v2.3 APIC frame only if non-ASCII [13:17] fixed [13:17] (was easier than I thought) [13:17] bbl [13:17] :) [17:01] [08:43:33] the same as [3] but only because it must be non-null <- well, I had data[0] set to a random, non-NULL value - but it appears this made the vaapi hwaccel fill in incorrect bitstream info, and led to corrupted decoding [17:01] with data[0] set to data[3] it works [17:02] didn't see anything suspicious in the libavcodec vaapi code (but then I don't know it well) [17:29] might depend on the implementation of the actual hwaccel, not that setting [0] to the same as [3] is hard :) [17:33] nevcairiel: took me quite a while to find, though :( [17:46] ffmpeg.git 03Michael Niedermayer 07master:475df42eb62c: avutil/sha512: make const tables static const [17:46] ffmpeg.git 03Michael Niedermayer 07master:90b40b45d40b: avutil/sha:make const tables static const [17:46] ffmpeg.git 03Michael Niedermayer 07master:0e98a133226f: avutil/xtea: make const tables static const [17:46] ffmpeg.git 03Michael Niedermayer 07master:64a3dbadee7d: avutil/ripemd:make const tables static const [17:46] ffmpeg.git 03Michael Niedermayer 07master:22159474381c: avutil/parseutils:make const tables static const [17:47] ffmpeg.git 03Michael Niedermayer 07master:5717689c750b: avutil/avstring: make const tables static const [17:47] ffmpeg.git 03Michael Niedermayer 07master:06137a496b93: mpeg4videodec: check resolution marker bits [17:47] ffmpeg.git 03Michael Niedermayer 07master:8e119a22c482: mpeg4videodec: Parse newpred headers [18:38] ffmpeg.git 03Michael Niedermayer 07master:61e0e809998f: tiff: continue parsing on non fatal errors [19:32] ffmpeg.git 03Reimar D?ffinger 07master:49cf36f4e3e9: sanm: fix undefined behaviour on big-endian. [19:32] ffmpeg.git 03Reimar D?ffinger 07master:d87f9da53c93: vdpau_internal.h: Add missing include for FF_API_BUFS_VDPAU. [19:32] ffmpeg.git 03Reimar D?ffinger 07master:af05edc658f3: vdpau: Add an allocation function for AVVDPAUContext. [20:17] ffmpeg.git 03Reimar D?ffinger 07master:d404fe35b2fb: Make new VDPAU easier to use by adding context to callback. 
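Picking up the VPX_SIMD_CAPS detour from a bit further up, a rough sketch of the debugging recipe being described: force libvpx onto its plain C paths, printf the intermediates in both decoders, and diff. The sample and output names are placeholders, and ffmpeg here means a build of the work-in-progress ffvp9 branch; on x86-64 the functions hard-wired to their sse2 versions still need to be commented out in vp9/common/vp9_rtcd_defs.sh and libvpx rebuilt, as noted above.

    # zero the runtime SIMD capability mask so libvpx falls back to its C code
    export VPX_SIMD_CAPS=0
    export VPX_SIMD_CAPS_MASK=0
    ./vpxdec --codec=vp9 -o libvpx_ref.y4m sample.webm

    # decode the same sample with the ffvp9 work tree and compare the printf output
    ./ffmpeg -i sample.webm ffvp9_out.y4m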
[21:54] ffmpeg.git 03wm4 07master:a5ef7960fc96: ape: check avio_read() return value [22:10] michaelni: i don't know if it's playable by anything so far, the guy with the sample asked the uploaders for more info [22:11] but i have the complete sample (~300M iirc) if you want [22:11] it seems a bunch of videos were encoded exactly the same way [22:11] note: i'm not responsible for the twisted content ;)_ [22:12] (mpeg4 new pred thing) [00:00] --- Mon Aug 12 2013 From burek021 at gmail.com Mon Aug 12 02:05:01 2013 From: burek021 at gmail.com (burek) Date: Mon, 12 Aug 2013 02:05:01 +0200 (CEST) Subject: [Ffmpeg-devel-irc] ffmpeg.log.20130811 Message-ID: <20130812000501.2922418A0289@apolo.teamnet.rs> [00:35] apparently something in the latest ffmpeg broke, mplayer [00:40] apparently, according to mplayer devs, something in ffmpeg is borked. [01:09] <_dan_> hi anyone here? [01:10] <_dan_> i am trying to figure out how to stream my h264 video to browser using html5 [01:10] <_dan_> without keeping it saved in mp4 format on my drive [01:10] <_dan_> seems like mp4 can't be muxed and streamed in real time? [01:23] _dan_: hang on for a bit, everyone is working on something. Someone who knows will eventually answer. [01:24] or at least send you in the right direction. [02:24] Hello, After using avformat_open_input on an audio file I am able to get some information about each stream (such as the codec, number of channels). But I attempt to query the codec for the particular stream and the channel_layout is always 0. Am I querying incorrectly? Thank You [03:20] how do I remove a specific frame number from a video to keep as a picture? [03:20] i have the frame number I want from avidemux [03:31] dunno how easy it would be to be "frame accurate" but you could get close with -ss POSITION after the -i FILE. (seconds or HH:MM:SS.xxx) Avidemux might make it easier to get the 'exact' frame. Or transcode (transcoding.org). I'm no expert however... [03:57] how do you normalize the volume of channels? [04:12] Unrecognized option 'af' [04:12] what is it now [04:12] i'm trying to do volumedetect [04:25] gn everyone. [06:53] Does anyone have an idea as to what filter to use to reduce the blocking from a source that looks like this? http://i.imgur.com/viiR7e3.png [06:54] I see nothing about blocking in the output of ffmpeg -filters [07:12] pirating jackie chan? [07:16] mebe :3 [07:26] The only copy I found was terrible: uncompressed PCM audio track in the first season, bad upscaling in the 2nd and 3rd, and it's all msmpeg4. [12:08] ffmpeg+ffserver stability issue via 127.0.0.1 http post [12:09] it is not stable about ffmpeg send video to ffserver stability issue via 127.0.0.1 http post [13:23] hi [13:23] I seem to have an issue with creating a slideshow from images [13:23] there are no issues when using matroska as a container format [13:23] only when using avi or mp4 [13:24] the first image is displayed for twice the set time, while the last image is only displayed for a single frame [13:25] the command is in this format: [13:25] ffmpeg -y -f image 2 -r 0.20 -i %02d.png -i audio.wav -shortest -c:v libx264 -tune stillimage -c:a pcm_s16le -pix_fmt yuv420p -crf 0 -r 30 output.avi [13:26] err ... that's "image2", not "image 2" [13:28] can i join images with ffmpeg to procude 1 big png/jpg? 
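Two sketches for the slideshow questions above, the per-image timing one and the "one big png" one. File names, durations and the 3x2 geometry are only illustrative; the concat demuxer approach gives every image an explicit duration, and listing the last file twice is the usual workaround for the last image being shown for only a single frame:

    # list.txt for the concat demuxer:
    #   file '01.png'
    #   duration 5
    #   file '02.png'
    #   duration 5
    #   file '02.png'
    ffmpeg -f concat -i list.txt -i audio.wav -shortest \
           -c:v libx264 -tune stillimage -pix_fmt yuv420p -r 30 -c:a pcm_s16le output.avi

    # joining several numbered images into one big image on a 3x2 grid:
    ffmpeg -f image2 -i %02d.png -filter_complex "tile=3x2" -frames:v 1 big.png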
[15:03] ffmpeg seems to drop either the last or the first batch of frames that should've been duplicated [15:12] http://superuser.com/questions/434232/why-is-the-first-image-changing-so-quickly-when-creating-video-from-images-via-f/552718#552718 [16:02] oh great, using -vf fps=30 now the first image displays correctly, and the last image is displayed for a single frame [16:31] smplayer can't seek correctly in my dvb-s recordings (MPEG-TS), so I tried to convert a recording like this: ffmpeg -i in.ts -vcodec copy -acodec copy out.mpg. this seems to work fine, I see and hear no difference between the two, seeking works too. but out.mpg is smaller than in.ts (5.3GB vs 7GB). where does that size different come from? Stream #0.0 of in.ts is dvb_teletext, can that make such a big difference? "ffmpeg -i in.ts" prints in [16:31] fo about other tv channels, but I have no idea what that is about. info about in.ts http://pastebin.com/SMSaFubK and out.mpg http://pastebin.com/xjcLt5Yk [16:57] heiko_xs, teletext and metadata coult be [17:22] my ffmpeg crashes [17:23] why ? [17:27] this is file's parameters: http://sprunge.us/ARBQ and I use that command: ffmpeg -i "file.flv" -vcodec mpeg4 -vb 1500k -ab 128k -ar 4100 -ac 2 -acodec libmp3lame -y -f avi "_/_3/file.avi" to convert it to *.avi, and thats the entire output: http://sprunge.us/DegF, is there something wrong with file ? or its a ffmpeg's problem ? [17:28] as you can see there is buffer overflow or something: "*** glibc detected *** ffmpeg: free(): invalid next size (normal):" [17:29] I got that file from that link http://www.youtube.com/watch?v=rvj_J0YChmE and its called rvj_J0YChmE.flv, if you can download it and try on you ffmpeg you can see if thats a file's issues or not [17:29] l?gg av [17:29] I use "ffmpeg version 1.2 Copyright (c) 2000-2013 the FFmpeg developers" [17:30] its quite fresh, isn't it ? [17:30] seems like topic says there is "FFmpeg 2.0.1" allready, strange why so big leap from 1.2 to 2.0 version, are there really big changes ? [17:32] you know what interesting thing is ? seem like ffmpeg was able to encode that file to avi, and then crashed, because both are 0:15:9 long, so it actually completed encoding and only then crashed for some reason [17:32] so maybe its not an issue, I just skip that file [17:34] I also noticed that line in dmesg: "libavutil.so[20318]: segfault at 24660 ip b77d4412 sp bff5a63c error 4 in libavutil.so.52.18.100[b77cd000+2d000]" [17:34] if it helps [17:35] I already removed that flv file, maybe I should've tryed it with 2.0.1 versio of ffmpeg ? [17:38] elkng: did you compile this yourself? [17:39] relaxed: I compilled my ffmpeg 1.2 with that script http://sprunge.us/OiVB [17:41] compile it by hand [17:41] what ? [17:42] what the difference ? [17:42] I made that script, and its the same as to issue those commands in command line but instead they are batched in one file [17:42] That script sets SLKCFLAGS="-O2 -march=i486 -mtune=i686" which should really be left to ffmpeg's configure. [17:43] I don't know if that's the issue, but you can quickly check by compiling it by hand to see if you run into the same issue. [17:46] should I remove "SLKCFLAGS="-O2 -march=i486 -mtune=i686" part entirely or what ? [17:49] remove --arch=$ARCH, CFLAGS="${SLKCFLAGS}", and CXXFLAGS="${SLKCFLAGS}" [17:51] on older versions there's --enable-runtime-cpudetect [17:51] which is the default now [18:01] I want to record a webcam and see the output as it records to disk. is this possible? [18:03] linux? 
[18:08] WIndows [18:08] using dshow [18:09] the command I am using now which works fine for outputting the file, all I want to add is a preview window of what is being seen by the webcam: ffmpeg -y -f dshow -s 640x480 -r 60 -rtbufsize 3000k -i video="PS3Eye Camera" out.mp4 [18:09] JustAnotherUser, pretty sure you can use VLC for it tho [18:10] How? [18:10] JustAnotherUser, use the convert/save thing of vlc, with the preview option on [18:11] how can i get it to record to MP4? [18:11] it asks you when you press the "save/convert" button.. [18:12] not sure exactly what to press, but its an easy-to-use gui thing [18:12] should be easy to figure out [18:12] h264 tho, for mp4 [18:13] Profile: Video - H.264 + MP3 (MP4) for example :p [18:16] easy to use gui things I don't care abouit... not to mention it crashed. [18:48] HI?GUYS [18:49] I?want to transform mp3 files into videos with a picture on them (static picture, not changing), can I do that with ffmpeg? [18:51] yes [18:51] how? [18:51] google create a video slideshow with ffmpeg [18:51] it has instructions for single-picture videos as well [18:52] I want to create a true slideshow, but it seems ffmpeg is buggy in that regard [18:52] https://ffmpeg.org/trac/ffmpeg/ticket/1925 [18:53] duplicate of https://ffmpeg.org/trac/ffmpeg/ticket/1578 [18:53] 13 months old :( [19:11] sucks man [19:13] could som1 try that on youtube? :p [19:18] my script is working [21:16] hi im trying to record minecraft videos without sound and i have the dimensions and position of the window but i need help putting it into one command for ffmpeg [21:17] can anyone help me? [21:20] http://trac.ffmpeg.org/wiki/How%20to%20grab%20the%20desktop%20(screen)%20with%20FFmpeg [21:21] ty [21:22] np :) [21:22] im gonna try it out right now brb gonna start minecraft up (java game) so yea bbs ill let u know wut happens [21:31] bak it did its job but the video output was very fuzzy/blurry kinda [21:31] any idea how to fix this? [21:32] what fps to set etc? [21:32] could be any number of things, really [21:32] name a few? so i can check them plz [21:33] codec settings, for instance [21:33] if u think it wud be faster can we try teamviewer? im new to this kinda stuff [21:33] !pbb slick [21:34] ill give u the whole ouput if u want [21:34] im guessing that wud be best right? [21:34] yeah [21:34] ok 1sec [21:34] since that has the information about whats going on :) [21:35] http://pastebin.com/dKY9U5W4 [21:37] ugh [21:37] flv. [21:37] 200kbit. [21:37] that probably looks like barf :D [21:37] lol wut setting shud i use then [21:37] -c:v libx264 -crf 0 -preset ultrafast [21:38] what jure said :D [21:38] crf 0 is losless [21:38] :\ im newb can u give me the whole command for that lol [21:38] id mess it up knowing me [21:38] slick, add jure's parameters before output.flv [21:38] ffmpeg -f x11grab -r 25 -s 854x480 -i :0.0+151,215 -c:v libx264 -crf 0 -preset ultrafast output.mp4 [21:38] ok [21:38] ty [21:38] you can change crf 0 to something like crf 24 to get more compression if the file is too large [21:38] play with crf until you get the size/quality you want [21:39] ok sweet ty ill bbs ima go try it out [21:40] crf 0 = lossless, so you can use that as imtermediate before recompressing to something smaller using a smarter preset later on [21:52] it plays too fast what should i try? [21:53] change crf? i have 3gb ram and 2.8ghz on kde4 kwin and i can switch to blackbox if i have2 [21:55] play around with -r ... 
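One common fix for the "plays too fast" symptom is keeping the grab rate and the output rate consistent; a sketch with the window geometry copied from the paste above, everything else illustrative (on older builds, -r and -s before the input do the same job as -framerate and -video_size):

    ffmpeg -f x11grab -framerate 30 -video_size 854x480 -i :0.0+151,215 \
           -c:v libx264 -preset ultrafast -crf 18 -r 30 output.mkv
    # raise -crf (e.g. 23) for smaller files; -crf 0 stays lossless but huge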
[21:55] try -r 30, -r 60 [21:55] ok ty [22:05] oddly enuff when i lowered -r im guessing fps it acted normally with 10 [22:05] any idea why? [22:07] any1? [22:10] well bye im gonna go try blackbox lol [00:00] --- Mon Aug 12 2013 From burek021 at gmail.com Tue Aug 13 02:05:02 2013 From: burek021 at gmail.com (burek) Date: Tue, 13 Aug 2013 02:05:02 +0200 (CEST) Subject: [Ffmpeg-devel-irc] ffmpeg-devel.log.20130812 Message-ID: <20130813000502.3197D18A0367@apolo.teamnet.rs> [03:07] ffmpeg.git 03Michael Niedermayer 07master:98fd8a78487f: avcodec: Remove ff_packet_free_side_data, use av_packet_free_side_data [04:11] ffmpeg.git 03Thilo Borgmann 07master:f18ccb529fb7: Fix wrong use of "an" in some comments. [05:41] Daemon404: ubitux: ok at this point loopfilter gives me almost correct (1 pixel off in first frame of vp90-02-size-64x64.webm Y plane; UV still completely broken) output; any update on your stuff? [06:06] BBB, sorry ive lacked PRs... moving is more work than i anticipated [06:06] <-- n00b [06:06] (also laziness... :x) [06:54] moving n00b? [06:54] weird [07:18] BBB, indeed -- first permanent move [07:18] between countries [07:18] ok I guess [07:19] [00:06] <@Daemon404> (also laziness... :x) <-- [07:35] Daemon404: ubitux: ok lf works now [07:35] ubitux: Daemon404: wanna review or can I commit? [07:41] its a WIP branch, commits are 'free' [07:50] ok pushed [07:50] hope that doesn't screw up other things but we'll see [07:50] so... inter coding now? [07:51] how far's your wip? [07:54] not far. im basically just not home ever. its scrapable if its holding people back. [07:54] hm... ok [07:54] ubitux: yt? [09:50] BBB: here i am [09:50] no update on my side [09:51] i guess i'm gonna start trying to fill the 6 pred func, or there are other priorities? [09:53] ubitux: inter coding? [09:53] ubitux: I guess I'll start bitstream parsing for inter coding then [09:53] that way we don't overlap [09:56] ok added a slightly more simd-efficient way of doing loopfilter [09:56] still not perfect, but good enough for now [09:56] will do inter bitstream parsing now [10:04] is the lf of vp9 special? [10:04] i mean in comparison to some other codecs [10:06] not really [10:06] you just need to make sure to do it in the correct order, and not do the same lf twice [10:10] more details: the lf acts on 8px edges; adjecent edges can use the same lf settings, in which case you could (using 16byte sse2 simd instructions) do 16px at a time (instead of only 8, using mmx/mmx2 instructions) [10:11] avx2 would allow 32 maybe, haven't looked into it yet [10:12] need hardware anyway [10:12] :) [10:13] the benefits are obviously doing more pixels in ~~same amount of instructions == ~~same amount of cycles/time [10:16] BBB: can you tell me in a few hint words in what sense the intra pred code is different from libvpx? [10:17] ffvp9 vs libvpx? [10:17] I tried to mimic the code in h264pred.c [10:17] it's unrolled, basically [10:20] h264 only goes up to 8x8? 
[10:21] ah i see some 16x16 too [10:21] but they are not unrolled for those [10:22] they are only h, v [10:22] not directional [10:22] do whatever is easier to code, there's some easy ways to write the directional ones as a loop [10:23] 16 funcs are starting to be pretty huge unrolled [10:23] vp90-2-00-quantizer-23.webm uses one of the 32x32 directional intra predictors if you want a sample [10:23] i'm not sure it's wise to unroll the 32 as well [10:23] ok [10:23] it's up to you, you're the implementer :) [10:24] the libvpx c code is kind of slow, you can write it as a loop and still be a lot faster [10:24] it depends on how much code you want to write [10:24] and yes the 16x16 ones are not small, I'm aware of that and not necessarily liking it, but it's ok for now I guess [10:26] ok ok, thx for the help [10:26] gonna get up, and look at that today [10:28] ffmpeg.git 03Hendrik Leppkes 07master:3ca5df36a50e: wmall: use AVFrame API properly [10:28] ffmpeg.git 03Michael Niedermayer 07master:c103d5f538d9: Merge remote-tracking branch 'qatar/master' [10:29] in fact a lot of the vp90-2-00-quantizer-*.webm files use 32x32 intra predictors [10:29] probably good for some basic unit tests [10:29] also reminds me of a bug in the loopfilter/intra mixing which I guess I'll fix soon [11:35] how one would handle mt in png decoder for this abdomination of non-keyframes? [11:36] and not breaking performance with frame multithreading of normal pngs [12:13] durandal_1707: just handle them specially [12:13] durandal_1707: i.e. if keyframe-png, multithread as images. if not, multithread more specially [12:15] what specially? [12:16] non-key frames need previous frame - this just makes frame multithreading near useless [12:17] right, that's why codecs like h264 use frame-mt [12:18] but how? [12:19] why don't you read the code [12:20] (I'm willing to explain if you really don't know, but you're typically not every interested in being lectured, so I won't bother if you don't care) [12:22] BBB I guess you mean slice-mt & [12:22] no [12:22] I mean frame-mt [12:23] why you think I'm not interested in being lectured? [12:23] commit 6a9c85944427e3c4355bce67d7f677ec69527bff [12:23] Author: Alexander Strange [12:23] H264/MPEG frame-level multi-threading. [12:23] cbsrobot: ^^ [12:24] durandal_1707: I still don't know if that means yes please or no thankyou [12:24] BBB I know, I just misread [13:36] BBB: i'm really not interested in your subjective perspection [14:51] ffmpeg.git 03Paul B Mahol 07master:9087dcbe5b7e: lavfi/trim: check for right default value [14:51] ffmpeg.git 03Paul B Mahol 07master:d4ab1292e9ac: ffmpeg_filter: do not pick evil path for trim filters [18:17] ffmpeg.git 03Kirill Gavrilov 07master:2395ae22ce8b: img2dec: fix typo (double "with with") [18:18] ffmpeg.git 03Kirill Gavrilov 07master:53f309c61739: pixfmt: extend description for planar YUV 9/10 bits formats [19:49] we could use youtube to showcase multiple cases of ffmpeg filters in action and make it cool, just for fun :) [19:50] i mean, create multiple videos, showing ffmpeg filters in action, which look very cool and stuff [19:50] create an ffmpeg youtube channel, etc. [19:50] you gonna do this? 
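On the filter-showcase idea just above, a minimal sketch of how such a clip could be generated without any source footage; testsrc and edgedetect are just example picks, any filter chain would do:

    # built-in test pattern run through a filter and encoded for upload
    ffmpeg -f lavfi -i "testsrc=duration=10:size=640x360:rate=25" \
           -vf edgedetect -c:v libx264 -pix_fmt yuv420p showcase_edgedetect.mp4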
[19:51] man, you really know how to kill the great idea :D [19:51] it is really great idea, there are many more great ideas too [19:51] actually I just might do it :) [19:51] do we have any kind of repository with video samples and stuff [19:52] btw, it appears that av_seek_frame has got a bug [19:52] which im not gonna report [19:53] at least not on cehoyos' bug tracker [19:53] http://ffmpeg.gusari.org/viewtopic.php?f=16&t=1011 [19:53] but i just wanted to say you should report youtube as bug ;-/ [19:54] they are fixing it every day :) [19:56] seeking is often quite buggy [19:56] it's probably the most buggy part of libavformat [19:57] I suspect the main reason is that ffmpeg.c barely needs seeking, and seeking with ffplay is awkward at best [19:57] so ffmpeg devs rarely test it [19:57] oh ok [20:03] seeking in files with timestamps or even an index usually works OK in my experience. files without either are not so good indeed. [20:11] ffmpeg.git 03Michael Niedermayer 07master:e1c44be80233: doc/filter_design.txt: Fix duplicate words [20:11] ffmpeg.git 03Michael Niedermayer 07master:a8163a786bcb: doc/filters.texi: Fix duplicate words [20:11] ffmpeg.git 03Michael Niedermayer 07master:7fabf3a4b71c: libavcodec/ac3enc_template.c: Fix duplicate words [20:11] ffmpeg.git 03Michael Niedermayer 07master:f10462377d93: libavcodec/ac3tab.c: Fix duplicate words [20:11] ffmpeg.git 03Michael Niedermayer 07master:9da6e742f45a: libavcodec/avcodec.h: Fix duplicate words [20:11] ffmpeg.git 03Michael Niedermayer 07master:c77c5f6c9f5b: libavcodec/bink.c: Fix duplicate words [20:11] ffmpeg.git 03Michael Niedermayer 07master:805fbbefb3a8: libavcodec/bintext.h: Fix duplicate words [20:12] ffmpeg.git 03Michael Niedermayer 07master:e458fd6cf9ca: libavcodec/dv.c: Fix duplicate words [20:12] ffmpeg.git 03Michael Niedermayer 07master:1e6816dcda7f: libavcodec/lpc.h: Fix duplicate words [20:12] ffmpeg.git 03Michael Niedermayer 07master:c6325e50dd12: libavcodec/mpegvideo.h: Fix duplicate words [20:12] ffmpeg.git 03Michael Niedermayer 07master:ad26aa362389: libavcodec/rv40.c: Fix duplicate words [20:12] ffmpeg.git 03Michael Niedermayer 07master:5347de881bf1: libavcodec/xsubenc.c: Fix duplicate words [20:12] ffmpeg.git 03Michael Niedermayer 07master:5086b26809bc: libavutil/file_open.c: Fix duplicate words [20:12] ffmpeg.git 03Michael Niedermayer 07master:3500f53c9375: libavutil/opt.h: Fix duplicate words [20:13] use single one for such cosmetics [21:05] my nickname is registered? [21:06] can anyone read me? does anyone know what this means? WHO has registered this nick? [21:07] we read you [21:07] thanks for replying. I'm relieved (a bit) [21:09] Don't we care for "Libav" in licence headers all over the place? (see your own grep) [21:10] why care? [21:11] If we do accept wrong license headers why care for them in the first place? [21:12] how they are wrong? [21:14] the inconsistence in licenses like: "This file is part of FFmpeg/Libav." [21:17] well if original file that got merged have such license, why it should be changed? [21:19] so we have 132 original files merged from Libav? [21:23] you sure 132 is correct number? [21:24] i believe the rule currently followed is: file originating from libav have header kept, otherwise it's set to FFmpeg [21:24] but michaelni changed some of them .... [21:24] 113 if grepped more strictly: grep -i 'part of \' `find . 
-iname '*.c' -o -iname '*.h'` | cut -f 1 -d ":" | uniq | wc -l [21:24] some changes were done where libav actually changed FFmpeg to Libav, so that was reverted [21:24] in case the code originated from FFmpeg [21:28] and in some cases michaelni changed it to FFmpeg [21:28] so e.g. libavformat/network.c was originally created by Libav in 2007, eh? [21:29] i guess it was a mistake during one of those merge/rename commit [21:31] basically if you want to change that i would suggest a case by case, assuming you think it's really worth the effort spending time fixing these [21:35] I think it might be worth writing a script [21:35] hf :p [21:36] and do it with each file separate commit - so that it appears its very important [21:37] so much bitterness... :)) [21:37] its candy [21:38] so much sweetness then? maybe it'll do a file separate pull-request to libav-devel... [21:38] cheers [21:39] Action: ubitux wonders what just happened [21:40] perhaps he realized how such thing is absurd [21:41] personally i prefer consistency, and current situation is just confusing [21:43] Action: ubitux really doesn't care [22:38] huh, some people really get too much confused [22:39] ooo i missed drama? [22:40] i'm not talking about useless drama... [23:53] ffmpeg.git 03Michael Niedermayer 07master:4b101ab02ea7: avformat/asfdec: call ff_read_frame_flush() in asf_read_pts() [00:00] --- Tue Aug 13 2013 From burek021 at gmail.com Tue Aug 13 02:05:01 2013 From: burek021 at gmail.com (burek) Date: Tue, 13 Aug 2013 02:05:01 +0200 (CEST) Subject: [Ffmpeg-devel-irc] ffmpeg.log.20130812 Message-ID: <20130813000501.2858D18A0366@apolo.teamnet.rs> [11:14] is there any program in linux thats similar to a jack audio monitoring program which shows spectrums and waveforms but does it for videos [11:14] but not for videos, for video input i should say, webcam, blackmagic (if i can even get it working) [11:46] ItsMeLenny: see https://trac.ffmpeg.org/ticket/1624 [11:48] i just thought up a way to calibrate this actually [12:04] is cbsrobot a bot? [12:06] half man, half machine [12:35] <_dan_> hallo [12:35] <_dan_> av_interleaved_write_frame(): Invalid argument [12:35] <_dan_> what does it mean? [12:53] it means stop using -vsync drop [12:53] it doesn't work [15:28] Hi I'm using ffmpeg+zoneminder to moinitor a parkingspace. Somehow my RTSP frames are corrupted. [15:28] maybe related to "https://ffmpeg.org/trac/ffmpeg/ticket/285". Any ideas how to fix this? [16:33] How can I add a 5s image to the front of a video? [17:27] hi, can anyone give me a hand with this one: http://stackoverflow.com/questions/18132342/ffmpeg-rtmp-streaming-process-exit [17:34] echo [17:34] that behaviour is not expected i think [17:35] thought so.. but any causes for it to behave like this? [17:48] raduu: define crash [17:49] i dont see " process exits " as a crash [17:49] video:5264kB audio:0kB subtitle:0 global headers:0kB muxing overhead -100.000408% [17:49] and then I get the bash $ [17:49] okey so just an exit then. [17:50] right [17:52] I have a reference .ogg file with n samples of audio. I'd like to generate a silence file with n samples of audio using ffmpeg -i /dev/zero. 
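For the silence question just above, a sketch that skips the /dev/zero pipe entirely: anullsrc generates silence and atrim cuts it to an exact sample count before the encode (N stands for the reference file's sample count). Note this only makes the encoder *input* exactly N samples long; as the discussion further down shows, the vorbis encoder itself may still pad the final packet:

    ffmpeg -f lavfi -i "anullsrc=r=44100:cl=mono" \
           -af "atrim=end_sample=N" -c:a libvorbis silence.ogg

    # the reference file's sample count can be read with:
    ffprobe -v error -show_streams ref.ogg | grep duration_ts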
[17:52] so basically I try to run ffmpeg -threads 4 -i rtmp://..../chat/mp4:.mp4 [17:52] -q:v 0.6 -r 15 -s 320x240 /frames/10021237_data/frame-%0999d.jpg [17:52] I've tried setting the duration in seconds, using ffmpeg -t but that isn't sample-accurate, [17:52] in 2 terminals, with 2 different streams [17:53] and I'm noticing that the length of the reference file and the silence file are not identical. [17:53] and when one ends (or I kill the process)... the other one exits [17:53] Any suggestions on generating an .ogg file with an exact number of samples of silences? [17:53] raduu: try lower frame-%0999d.jpg the number there to something lower [17:53] raduu: and you will get some pictures [17:53] and i guess muxing thing is beacuse of .jpg [17:54] since there is no "container" for .jpg [17:54] hwo do I do that? [17:55] what ? [17:55] lower the frame [17:56] why do you have -r 15 and the frame number? [17:56] I tried with different frame rates [17:56] but same behaviour.. [17:56] Action: stefanha tries dding an exact number of /dev/zero bytes and then .ogg encoding that [17:56] I didn t get why you pasted frame-%0999d.jpg [17:57] I thought that is just a tweak to produce the same image name always [17:57] i dont see the problem. [17:58] :) [17:58] raduu: try lower frame-%0999d.jpg the number there to something lower [17:58] what do you mean? [17:58] i didnt know that you chould do that to get the frame number [18:01] that gives me frame-0000000000000000000.jpg [18:01] all the time [18:07] spaam: I m not really sure what I m doing.. I m just trying to produce images from a stream [18:08] and it works really well for 1 stream [18:08] I have no clue what a muxer is in the first place [18:08] so if there is something wrong with the command please let me know [18:27] raduu, get a container (avi etc), stick in some audio (wav/pcm/mp3) and some video (h264,mp4,yuv).. mux it all up, and BLAMMO! you have a video [18:27] I do not want a video.. I want a sequence of images [18:27] from an mp4 adobe fms stream [18:28] that's nice dear. [18:28] ? [19:26] !burek [19:27] !beer :) [19:28] burek: Could you update the bot to point to http://johnvansickle.com/ffmpeg ? [19:30] also, the kernel requirement is now 2.6.32+ [19:31] ok [19:31] just a sec [19:32] Thanks, a lot of businesses block dropbox now. [19:33] I can also give you a hosting place if you need [19:33] could maybe setup a cron [19:33] to download the build and publish it on the web root [19:34] It's what I do now but I've been too lazy to make a cron job for it. [19:35] We should probably scrape fate to check whether it's green or not. [19:35] :) [19:36] i think i made mine with netcat or something [19:37] Oh, you do check fate? [19:37] which acts like a dummy web server that only serves the given file on a command line [19:37] er no. not quite :) [19:37] i was just trying to say that there is no need to install a full-featured web server like apache [19:37] just to be able to get your build [19:38] I am trying to create an .ogg file containing silence. It should be just as long as a reference file. [19:38] I've already purchased hosting for this and other things, so that's not an issue. [19:38] stefanha: what have you tried? 
[19:38] It looks like the encoding process has some kind of block size, the silence file does not end up with the exact same number of samples as the reference: [19:38] $ dd if=/dev/zero bs=2 count=352874 | ffmpeg -y -ar 44100 -ac 1 -f s16le -i - -acodec libvorbis out.ogg [19:38] $ ffprobe -show_streams -i out.ogg | grep duration_ts [19:38] duration_ts=352896 [19:39] 352896 != 352874 [19:39] you could use -i /dev/zero, fyi [19:39] but that's not your problem [19:39] relaxed: I tried that but only found ffmpeg -t , which isn't sample-accurate either [19:40] Later on I mix together multiple audio files, and if their length is not identical, things get out of sync. [19:40] That's why I'm trying to create a file with a precise length. [19:41] right, give me a second [19:42] relaxed: thanks! [19:43] I just noticed another weird thing: converting an .ogg file to .wav results in a different duration_ts. [19:43] That's just decoding to wav, no ogg/vorbis encoding. [19:44] The .wav has a larger duration_ts value. [19:45] by how much? [19:47] durandal_1707: 598 samples (ogg: 352874, wav: 353472) [19:48] you use vorbis, and it may add silence for last frame [19:48] so no you can not use vorbis to encoder exact number of samples [19:49] durandal_1707: I am trying to match a reference .ogg file, so it doesn't need to be arbitrary numbers of samples, [19:49] it just needs to be the same as an existing .ogg file. [19:52] that is certainly useless objective, if file and not decoded data must be bit by bit same [19:53] durandal_1707: It's about timing. I will concatenate and mix several tracks. [19:53] If the silent piece has a different length, then the tracks will be out of sync. [19:54] why not creating identical length wav files as a start, and do whatever magic you need [19:54] maybe use raw formats until you have to encode to lossy [19:54] The input is already .ogg and my goal is to concat ogg without reencoding [19:54] I don't have raw input here. [19:55] yes, but you can not have arbitary duration with vorbis [19:55] durandal_1707: The fact that the reference file exists suggests vorbis can encode that length :) [19:56] It must be possible but I guess there is some buffer size parameter somewhere. [19:56] stefanha: hmm, it could be really possible, but not atleast with libvorbis from ffmpeg [19:57] it's like "I want to create an mp3 file of specific size in bytes"... how will you accomplish it easily? [19:57] but if you can compile ffmpeg yourself i could give you patch which you could try [19:58] durandal_1707: That sounds interesting but maybe I should explain what I'm trying to do first. Perhaps I'm doing it in a silly way :) [19:58] There are several tracks of audio, like vocals, drums, guitar. They come in .ogg except only in 8 second pieces. [19:58] So you have guitar_1.ogg, guitar_2.ogg, etc. [19:59] And I'm mixing them down into a single audio file for listening (that part works). [19:59] The tricky part is that there is also silence sometimes, so guitar_4.ogg, guitar_6.ogg <--- #5 is missing (silent) [19:59] durandal_1707: So what I'm doing at the moment is to find those silent parts and generate silence files with ffmpeg. [20:00] Then I can concat all the guitar tracks (including silent files I generated) into guitar.ogg [20:00] And finally I can mix guitar.ogg, drums.ogg, vocals.ogg down into a single audio file. [20:00] can you concat oggs losslessly? [20:00] relaxed: If they have the same number of channels, sample fmt, etc yes. 
[20:01] good, but I still wonder why you use ogg and vorbis for intermediate step [20:01] I want to keep the concat .ogg tracks so they are lossless. [20:01] So you can have the guitar.ogg file and it's not reencoded. [20:02] That's nice to have, but I can drop it if necessary. [20:02] well if you must use vorbis, and need to create silence of arbitary length in vorbis that is fine [20:02] Are you suggesting decode it all, add sample-accurate silences, and then mix + encode? [20:03] stefanha: no, concatenate, without any transcoding should be enough [20:04] so yes it perhaps should be possible to encode vorbis with arbitary samples, i just need to try it [20:10] or you can try it right a way if you can compile ffmpeg [20:13] durandal_1707: Sure, I can try building from source [20:13] you tried it already? [20:13] durandal_1707: Nope, just cloning ffmpeg.git now [20:14] you just add CODEC_CAP_SMALL_LAST_FRAME to libvorbisenc .capabilities [20:18] durandal_1707: thanks, testing now [20:24] durandal_1707: Unknown encoder 'libvorbis' [20:24] durandal_1707: I must have built without libvorbis support... [20:24] I did ./configure && make [20:26] Yeah, config.mak says "!CONFIG_LIBVORBIS_ENCODER=yes" :( [20:28] external libraries need explicit enable switch [20:28] --enable-libvorbis --enable-libx264 ... [20:28] ubitux: Thanks, just enabled it [20:34] hmm it seems it does not help... [20:38] durandal_1707: It makes a difference for encoding here. Now I'm able to encode 352874 samples exactly. [20:38] $ dd if=/dev/zero bs=2 count=352874 | ffmpeg -y -ar 44100 -ac 1 -f s16le -i - -acodec libvorbis out.ogg [20:39] hmm... [20:39] durandal_1707: It seems to work for encoding, thanks! :) [20:39] what libvorbis you have installed? [20:40] perhaps it works only for silence ........ [20:40] durandal_1707: libvorbis 1.3.3-4.fc19 [20:41] I also tried 352875 (+1) and that worked too. [20:42] durandal_1707: Still working for /dev/urandom here [20:42] i'm puzzled it does not work with random flac i tried [20:42] and i also have 1.3.3 [20:42] durandal_1707: What if you try raw s16le input instead of flac? [20:43] it should not matter ... [20:44] durandal_1707: Here decoding an .ogg test file with 352874 samples to .wav still results in 353472 samples. [20:45] durandal_1707: So I wondered if flac decoding has the same issue. [20:48] flac decoding is fine [20:56] durandal_1707: Thanks for your help. Got to go now. [21:41] Hello! I've got a weird problem where I'm converting a video into jpegs, and around ~6-7k jpegs in, ffmpeg goes to 0% CPU usage and just sits. It doesn't exit, or throw an error. Anyone heard of that? Or have an idea what may be happening? [21:42] jstackhouse: Just a hunch, your disk isn't filling up, is it? [21:42] Nope, it's running on EC2, still got ~29GB on the drive. :S [21:43] what is command? and what ffmpeg version? [21:43] No idea, then. .?. Wait for someone who actually knows what they're talking about like durandal_1707, o?o. [21:44] The command is.. ffmpeg -i /path/to/mp4 -s 384x212 -f image2 -qscale 1 -vf fps=fps=24 /path/to/out%d.jpg [21:44] ffmpeg version git-2013-08-12-d4ab129 Copyright (c) 2000-2013 the FFmpeg developers [21:44] built on Aug 12 2013 16:27:28 with gcc 4.6 (Ubuntu/Linaro 4.6.3-1ubuntu5) [21:45] perhaps limit on number of files in directory? [21:45] i just thinking loud what 0% CPU means [21:45] stalling is far from optimal solution.... [21:46] The odd part is that it works fine doing like 6 videos at a time that are of like length 30 seconds to 4 minutes. 
[21:46] but this one video that is over 9 minutes, causes this issue. [21:46] Happens on my local machine too. [21:46] I'm running OSX, the server is Ubuntu 12.04 [21:47] hmm, so you want to say it depends on input and not number of outputs? [21:47] I suppose yea, initially we thought HTTP might be timing out, so we tried wget and run ffmpeg locally, but it still happens. [21:48] well i would try another file and see if creating 6-7k files will work.... [21:49] It's definitely not a limit on the number of files per folder. I've had far more. [21:49] Heck. Right now I'm looking at a folder with 32,189 items in it. [21:50] ffmpeg doesn't hold open the file descriptors does it? maybe it is hitting ulimit? [21:50] ahh or it may be recently reported issue of max opened files? [21:50] jstackhouse: If it was it couldn't even do 20. [21:50] though it should close file when writting [21:50] The limit most OS's stick on open files by a single program is 20. [21:51] hello everyone! I just started to use ffmpeg and I have lots of questions if I may ask! [21:51] Soe1en: Go ahead, dun ask to ask, o?o [21:52] heh : ) I tried to take a look what de- and encoders are available with ffmpeg -decoders and ffmpeg -encoders, but I get Missing argument for option 'de- encoders' for some reason, how come? [21:53] No idea, I get that too. x.x (Obviously not an expert here well, shit, the documentation is suggesting this, yet it is not working [21:54] Soe1en: that works fine here [21:54] durandal_1707: in fact a lot of things doesn't seem to work [21:54] durandal_1707: You seem incredibly blessed, then. [21:55] in fact perhaps you are not using FFmpeg at all [21:55] No, we are. [21:55] Keshl: because I'm omnipotent [21:55] Nu you's not. durandal_1707: what do you mean? [21:55] is it possible to convert video/quicktime; to wav? [21:56] red6m: Strictly speaking, no. As far as you need to care, yes. Just ffmpeg -i thingy.mov sound.wav [21:56] Keshl, awesome. awesome to the max! [21:56] Welcomes, o?o [22:02] "We used to have people in the industry, but they are basically gone" - Fredrik Reinfeldt, 2013. [22:02] wrong channel [22:03] how come ffmpeg is renamed avconv on ubuntu? [22:03] It's not. [22:03] it's not [22:03] avconv is a fork, not related to ffmpeg. [22:03] Well. It's as related to it as forks go. But that's it., [22:04] Fjorgynn: "Corn? When did I eat corn?" - Fredrik Reinfeldt, 1987 [22:04] llogan: lol [22:04] back in university i guess [22:04] anyways [22:05] they trashed the Swedish industry programme now [22:05] canned maybe [22:06] thanks for the link! [22:07] llogan: why corn? [22:07] because it does not always digest well. [22:08] Keshl, can I also set it to 8bit when converting? [22:08] durandal_1707: So I ran ffmpeg directly, not by spawning it from Node, and it ran without issue [22:08] Soe1en: also refer to the link within the link [22:08] durandal_1707: As well, I didn't put it on an EBS drive on Amazon. So now I move my output to the EBS drive. [22:09] Keshl, is this correct: ffmpeg -i in.mov -ab 8 -ar 11025 -ac 1 out.wav [22:09] ? [22:09] red6m: Dunno. Really, all you need is -i thing.mov out.wav [22:10] Keshl, thanks. [22:10] meteor shower tonight [22:10] red6m: bitrate is in bits 8 bits is nice [22:10] Welcomes, o?o [22:10] llogan: gotcha [22:10] durandal_1707, I don't follow. is this a sarcasm? [22:10] red6m: so -ab 8 means 8bits/sec [22:11] durandal_1707, is that normal for .wav? [22:11] No. Not at all. XD [22:11] 8-bit is.. Remember how the OLLLD Gameboy sounded? 
[22:11] Black and white one? [22:11] That's 8-bit. [22:11] Keshl: are we that old now? [22:12] hmm. is bitrate or bits the same thing? [22:12] ...Yes? [22:12] ...ok [22:12] red6m: no. [22:12] Was saying yes to llogan. [22:12] Bitrate is how many bits are read per second. It depends on the bit depth ("bits") and sample rate. [22:13] -ab is not valid for .wav (pcm) (although ffmpeg won't complain) [22:13] Bit depth is how many different positions the speaker can be told to move. Samples is how many times a second it's told to move. [22:13] ahhh. so I think I'd like to convert that thinkg into 8 bit dept please. [22:13] lol [22:13] Generally, if you want high fidenlity, you need at least 44,100 samples per second and I forget the depth, but way more than 8. [22:13] red6m: use pcm_s8 as audio encoder [22:13] Keshl, it's for single word pronunciation files. [22:14] red6m: You DEFINITELY need more than 8, then. [22:14] Far more than 8. [22:14] Action: Keshl finds what a gameboy sounds like. Appearntly red6m's too younge. [22:14] Keshl, geesh. thanks internet. [22:14] red6m: http://videospielmusik.de/Nintendo/game%20Boy/Pokemon%20(Red,%20Blue,%20Yellow)/PRBLAVEN.mid [22:15] That is literally the most fidelity you can possibly get with 8-bit. [22:15] for extra high fidelity that Keshl prefers use pcm_f64le [22:15] For voice you pretty much need at least 32 bits, preferably more. [22:16] hmm. i see. [22:16] ugh, 16 bits is enough [22:16] durandal_1707: Really? o.O' [22:16] Action: Keshl looks for SNES music. [22:17] Oh. Yeah. my bad, 16 bits'll work. [22:17] the highest audio i managed to find is 24bit flac at 19200 sample rate [22:17] * one 0 missing [22:17] durandal_1707: Really? You've never heard over 19200 sample rate? o.O' [22:18] Keshl: http://people.xiph.org/~xiphmont/demo/neil-young.html [22:18] Keshl: it's typo was supposed to be 192k [22:18] Oh. [22:19] mark4o: This I know. [22:19] thinking about it - I think I mean 8 bits per sample. Does that make it ok? [22:19] Well, about the sampling rate anyway. Bit depth, I'm sure I either hear a difference. [22:20] red6m: That's what bit depth is. Trust me, no, for voice it's not. You won't be able to understand anything. [22:20] Action: Keshl just records his voice and uploads it. [22:20] 8 bits per samples is limited, usually you use it one you do not have enough space [22:22] hmm. im looking at this python module: sndhdr and they list sampling_rate and bits_per_sample) as totally different thins: (type, sampling_rate, channels, frames, bits_per_sample) [22:22] Action: Keshl is definitely doing something wrong here.. [22:22] next worse thing are digital rips of analog recordings [22:24] Oh. No. Okay. Wow, that actually sounds more reasonable than I thought. [22:24] im pretty sure those are different things: http://www.voxforge.org/home/docs/faq/faq/what-are-sampling-rate-and-bits-per-sample [22:24] red6m: They are. I explained that earlier. [22:25] red6m: http://www.youtube.com/watch?v=XUZOzEBAIww 8-bit samples of human voices. Let a few play through. [22:25] Keshl, so - I can just use pcm_s8 ? [22:26] No idea. [22:26] If you're asking if it'll work right, no clue. If you're asking if the sound quality will be acceptable, to me, no way. Not a shot. [22:26] To you? Listen to that video and decide for yourself. [22:27] Keshl, lol. thanks. i'll give it a try. [22:27] red6m: http://xiph.org/video/vid1.shtml - skip to 13:14 for sample formats [22:33] Keshl: what sample rate you use? [22:34] 44,100. 
Same as anyone else on a modern-day system, o?o [22:34] "Libav is totally ignoring FFmpeg", sounds like the way ubuntu acts to debian, heh [22:35] Other way around. Ubuntu's a deritave of Debian. <.< [22:36] so is libav to ffmpeg isn't it? [22:36] No. [22:36] Er, yes. [22:36] But you still have the order backwords on one, o?o. [22:37] Oh. [22:37] Misread. [22:37] Kay. [22:37] >w> [22:38] heh ^^ [22:39] anybody have 16bit float (aka half-float) files? [22:41] so let me get this straight: ffmpeg != ffmpeg @ ubuntu [22:42] ugh, whoever wrote this scenario should get to hollywood [22:44] Soe1en: look what ffmpeg outputs when its run [22:44] if it says something in caps lock, than its virus, you should remove it asap [22:44] durandal_1707: what are you talking about? [22:45] durandal_1707: you mean this? : *** THIS PROGRAM IS DEPRECATED *** [22:45] This program is only provided for compatibility and will be removed in a future release. Please use avconv instead. [22:45] yes [22:45] that is 'ffmpeg' from libav [22:46] how do I get the real thing? [22:46] this is so confusing, even after reading those articles [22:46] do you really need real thing? [22:47] perhaps [22:51] alright, I will try to remove avconv and install ffmpeg instead [22:51] well other things are connected with it [22:52] so perhaps you should download static [22:52] what does download it static mean? [22:52] there are dynamic and static builds [22:52] Static means all the code is built into the executable. No dependancies. Dynamic has dependancies. [22:53] Dynamic, however, can re-use code already existing in RAM and your processor's cache to speed up. Static can't, but there's less complications. [22:53] I am having trouble with ActiveSupport::MessageVerifier after upgrading to Rails 4. Was there something changed about how Cookies are encrypted? [22:54] if you only will encoder 8 bit pcm files, you can live with avconv (but -decoders/-encoders will not work) [22:56] I downloaded and unpacked ffmpeg, having problems already [22:56] ./ffmpeg -version [22:56] Illegal instruction (core dumped) [22:56] Soe1en: Get the 32-bit version. [22:57] it is the 32-bit version! [22:57] o.O' [22:58] ./ffmpeg -version [22:58] Illegal instruction (core dumped) [22:58] ah crap, stupid paste function [22:58] Soe1en: proably wrong kernel version [22:58] I see [22:58] I don't -- Mind explaining that, o?o? [22:59] downloaded version for newer/older kernel and do not have cap to run never/older app [22:59] I thought that only affected drivers.. [23:00] he did not said what he downloaded and what kernel version he have [23:00] I'm using ubuntu 12.04 lts [23:00] no clue what kernel I apparently use! But it must be less than 3.2 [23:01] Ahh, ubuntu, never teaching Linux users important things.. x.x [23:02] Keshl: which distro do you prefer [23:02] Sabayon. Gentoo. [23:02] Although, literally anything besides a *buntu is better. [23:03] Linux may be a kernel, but there's a lot of design principles behind it. Ubuntu doesn't follow them, so some people don't even consider it Linux-based anymore. [23:03] One of the main ones is "Linux doesn't stop you from doing stupid stuff, cuz that'd stop you from doing smart stuff too". [23:04] somehow it does make sense [23:05] Here's a somewhat crude example, but it's what I mean. [23:05] So, by default on windows, there's no utility to control your fan speeds. 
[23:05] well I get your point no worries heh [23:05] There isn't on Linux either, but that's regardless, I'm just showing how something stupid can be used for smartness. [23:06] Now, let's say you're on a laptop and you spill water into your case. [23:06] You don't know where it is, but by some miracle nothing shuts down or shorts. [23:06] At this point you have two choices: Either shut your computer off and try to let it dry into air, which depending on the area you live in might not even be reasonably possible.. [23:07] Or B, turn your fans off, force them to stay off, and then run benchmarks to get your system to warm up so it evaporates. [23:07] Normally that'd be stupid. But in a situation like that, it's /really/ smart. I've actually had to do it before. [23:07] (With careful tempeture monitoring, obviously) [23:07] ow [23:07] Totally worked, too. Using that laptop now. <.< [23:08] and causing that whole are lost electricity in such scenario [23:08] durandal_1707: I coudln't understand that, mind rewording your post? [23:09] are you seriusly telling that if one spills water he should just wait? [23:09] Keshl: how did you know that the water stopped to float? [23:09] If you don't know where it is, and you live in a really humid environment, yes. You kinda have to. [23:09] Unless you feel like running your laptop hot and trying to evaporate it. [23:10] Soe1en: Stopped to float? What do you mean? [23:11] Action: durandal_1707 lemme try it... [23:11] ... [23:11] Do not spill water in your laptop purposely. [23:11] I got VERY lucky and it happened to land in an area with no electronics. [23:12] In general, this won't happen and you'll fry something instead. [23:12] I just didn't want to risk it being knocked around accidently, that's why I tried to evaporate it quickly rather than let it sit and evaporate very slowly on its own. [23:13] I see why you stopped the fan to rotate: 1. to stop spreading more water inside the laptop in case water landed in or near the fan and 2. for the overheat effect [23:13] It landed nowhere near the fan, but yeah, that's another point. [23:14] but how did you know that the water inside your laptop stopped to float into idk, parts which could cause a defect? [23:14] The effect would've been instant if it did. I woudln't be typing right now. [23:14] Simply by it seeping inside and surviving more than two or three seconds, I knew it had to have landed in a spot with no electronics. [23:15] I see, interesting thought process [23:15] Based on where I saw it seeping in, I also had a general idea of how long it'd take to heat the area and how long I'd need to maintain it. [23:16] And in retrospect, I probbaly didn't need to turn the fans off. It was by the battery so I probably could've just unplugged it and let the battery heat up; the processors weren't anywhere near it. [23:16] Still. It worked. [23:18] not bad dude, ingenious indeed, on the other hand, I would never have done that just because I would have considered this situation as too complex to predict what would happen in the next seconds, I would just had shutted down the system and waited some time [23:19] It depends on how well you know your laptop vs your environment. [23:20] I guess that's true heh [23:20] In my case, I know my laptop /very/ well. I know how long parts take to heat up depending on what I'm doing and where the heat generrally flows inside it. 
I also knew that the area it happened to be in had a lot of traffic so there's a chance that someone could've caused enough vibration to shake the water onto something electric. [23:20] Besides that, it probably woudl've been me that did it. <.< I'm in an area that you cannot physically get out of unless you move the laptop. [23:23] But yeah. My point in all this is that Linux lets you do stuff like that and ubuntu tries to stop you from doing that. [23:29] well yeah but I think ubuntu was never targeting senior users [23:29] if you know what I mean [23:29] It wasn't, but it shouldn't be flying under the Linux banner if that's the case. [23:30] From what I understand (and I may be entirely wrong about this), Linux is intended for very serious users, not newbies. [23:31] Stuff just tends to be more efficent when it operates under the assumption that the end-user knows everything about the system. If it tries to be friendly, all the stuff that makes it friendly ends up getting in the way as soon as you try to get more serious, [23:31] *. [23:31] Is there any way to for ffprobe to tell if timecode is drop-frame or non-drop-frame? [23:32] to be honest I never thought about what linux is intended for! [23:33] I just watched an interview about richard stallmann a few weeks ago and understood that I always missunderstood the meaning behind gnu all the time until that point [23:34] Which interview, o?o? I'm pretty sure I misunderstand it too Keshl: http://www.youtube.com/watch?v=uFMMXRoSxnA [23:35] Danks, o?o [23:35] no pasa nada! [23:43] .... -Kicks Stallman's balls.- <.< [23:43] Action: Keshl does not like 4:20. [00:00] --- Tue Aug 13 2013 From burek021 at gmail.com Wed Aug 14 02:05:01 2013 From: burek021 at gmail.com (burek) Date: Wed, 14 Aug 2013 02:05:01 +0200 (CEST) Subject: [Ffmpeg-devel-irc] ffmpeg.log.20130813 Message-ID: <20130814000501.8EA7B18A024A@apolo.teamnet.rs> [00:47] hello to all [00:50] resistance is futile [00:50] Indeed [00:51] trying to see of the kerfuffle between mplayer and ffmpeg as been resolved with the latest nightly [00:52] may fortune favour the foolish [01:01] damn, mplayer still fails to compile libavfilter/internal.h:289: error: #pragma GCC diagnostic not allowed inside functions [01:12] right then, I'm off. Thank again, gn to all [01:50] How may I capture x264 from an uvc webcam? [02:00] ffmpeg -i /dev/video0 -c:v libx264 out.mkv ? [02:03] klaxa: I want to capture in x264, not to encode [02:03] does the camera produce h264? [02:03] (x264 is just the encoder, h264 is the standard) [02:05] oh! [02:05] klaxa: it does, but idk how to use, captured mjpeg in the past. [02:05] you can try ffmpeg -i /dev/video0 -c copy out.mkv and see what codec it produces [02:06] klaxa: it may produce mjpeg, h262 and raw. [02:06] then you can't capture h264 streams from it... (unless h262 was a typo and you meant h264) [02:07] klaxa: by default it is mjpeg, idk how to swith, btw in h264 there is a different resolution [02:07] you should try to find out how to switch it [02:16] fling: http://ffmpeg.org/ffmpeg-devices.html#video4linux2_002c-v4l2 [02:17] see -list_formats then use -input_format [02:17] llogan: thanks. [02:18] also "v4l2-ctl --list-formats-ext" [05:33] hi. i get this warning [libx264 @ 0x2942240] max bitrate less than average bitrate, assuming CBR [05:33] which bitrate does it choose then? my maxrate or my average [05:34] using bitrates with x264 is a bad idea anyway [05:34] did you have a look at http://ffmpeg.org/trac/ffmpeg/wiki/x264EncodingGuide ? 
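Going back to the webcam question earlier in this log (capturing the camera's native H.264 instead of MJPEG), a rough sketch using the v4l2 options pointed to above; the device path and output name are only examples:

    # list the pixel/stream formats the device offers
    ffmpeg -f v4l2 -list_formats all -i /dev/video0
    # request the H.264 stream and store it without re-encoding
    ffmpeg -f v4l2 -input_format h264 -i /dev/video0 -c copy out.mkv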
[05:34] my complete command is ffmpeg -i test.mp4 -vcodec libx264 -vprofile High -preset slow -b:v 3000k -maxrate 400k -bufsize 800k -vf yadif, scale='trunc(oh/ih*iw/2)*2:1080' -threads 0 -acodec libfdk_aac -f mp4 -y test2.mp4 [05:35] i should use crf? [05:35] but i want to control my bitrate for adaptive bitrate streaming [05:35] if you are not bound by the throughput of a decoding chip or need to-- [05:36] well... that makes it a little more complicated, but if there is enough caching it should be okay anyway [05:36] so just to confirm. from my command [05:36] does it take 3000k or 400k? [05:37] i actually don't know, just do a test encode (add -t 30 or something, that means only encode for 30 seconds of video material) and check the bitrate [05:46] klaxa, hmmmm the problem is this used to be OK. the different bitrates will create different filesizes. [05:46] now it just createss one filesize no matter if i use 1000k, 1200k or 1500l [05:46] but you are specifying an average bitrate of 3000k and a maxbitrate of 400k [05:46] maybe cause i updated my ffmpeg [05:47] is the bitrate 400k by any chance? [05:47] i will check that [05:47] maybe you wanted to type 4000k? [05:49] i will test it using what you suggestedd [10:29] hi, can anyone help me with this? http://stackoverflow.com/questions/18132342/ffmpeg-rtmp-streaming-process-exit [15:14] Hi folks [15:14] Is it OK if I ask a general video encoding question here? [15:15] like what? [15:15] Taking in calculation frame stride when resizing a frame [15:16] i do not get it [15:17] So I want to resize a frame in NV21 video format [15:17] Which is tight packed with an interleaved UV plane [15:17] But apparently it has row/plane stride somewhere [15:18] Is there a way to determine the row/plane stride? [15:18] hmm, you mean as returned in AVFrame or ? [15:18] As a byte array [15:19] It is on Android and I don't use FFMPEG, since I want to use hardware encoding [15:19] but there is on internet description how raw nv21 looks like [15:21] From what I've read, they don't mention row/plane padding/stride for NV21 [15:22] if there is one, it may depend on implementation [15:23] I was think about that. Maybe the hardware vendor implemented that. [15:23] What exactly does it mean when ffmpeg says "Read error at pos. "? The original input file is corrupted? :-O [15:24] Pitfall: but microsoft says its stride is same as Y one [15:25] Could you provide me with the link? [15:27] kimochiwarui: it found end of stream, perhaps something is missing, or anything else [15:27] hello [15:27] how come a wav file encoded by ffmpeg is not played back via aplay ? [15:27] it says "is not pcm format" [15:28] when i open that same .wav file with audacity, re-export it [15:28] then aplay plays it [15:28] `file` command in linux reports both files (before and after audacity) 1:1 same microsoft pcm mono 8k [15:29] durandal_1707: ffmpeg's output includes the time at which the error supposedly happen, but when I play both the input and output files, I see nothing wrong at the specified time. [15:30] durandal_1707: Oh, I was wrong. That time had nothing to do with the error. [15:31] sledgeSim: aplay ? [15:36] sledgeSim: it may just be that aplay does not properly supports wav format and get confused by some stuff ffmpeg writes, like riff metadata [15:37] I have a set of images numberated and I'd like to convert it to a video [15:38] There are numbers missing how do I do this? [15:39] what should be done for missing images? 
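On the rate-control confusion above: -b:v 3000k together with -maxrate 400k is contradictory (hence libx264's "assuming CBR" warning), and the x264EncodingGuide approach for capped adaptive-bitrate output is CRF plus a VBV constraint rather than a plain average bitrate. A hedged example with made-up numbers (note also that, as pasted, the space in "-vf yadif, scale=..." splits the filtergraph into two separate arguments):

    ffmpeg -i test.mp4 -c:v libx264 -preset slow -crf 22 \
           -maxrate 3000k -bufsize 6000k \
           -c:a libfdk_aac -y test2.mp4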
[15:39] just skipped or duplicate previous, or something else? [15:40] skipped [15:44] the frames are from a security camera [15:47] and why are they in separate files per frame at first place? [15:48] the software is producing it like this [15:48] 1 folder / event 10 frames / second [15:49] durandal_1707, an option to disable metadata pls? [15:50] -map_metadata -1 :) [15:50] let's try [15:52] didn't help [15:52] durandal_1707, https://pastee.org/zzwaz [15:58] noone? [15:58] 001-capture.jpg first frame 421-capture.jpg <- last frame [16:00] sledgeSim: add -flags +bitexact [16:01] just did [16:01] durandal_1707, https://pastee.org/9u9cp [16:01] still no play on aplay [16:04] i suppose we're both on the same page durandal_1707 http://lists.ffmpeg.org/pipermail/ffmpeg-devel/2013-April/142725.html [16:10] what is aplay, and why should i care? [16:11] durandal_1707, is the only tool we can use to playback in our embedded solution [16:12] aplay is ALSA play [16:46] hey guys, I got a question concerning static and shared libs [16:47] I read up on this in the Internet, but there is still a question coming up concerning ./configure flags [16:47] I can set --enable-shared on certain libs, which ffmpeg uses and on ffmpeg itself. what is the difference then? [16:49] I figure, I would tell ffmpeg, to include these libs directly in the binary with --enable-static and make it use the shared libs with --enable-shared. But what does this have to do with the --enable-shared flag, I can set on libopus for example?! [16:49] durandal_1707, why should you care is that audacity exports compatible (playable) format, and ffmpeg doesn't [16:49] sledgeSim: i still do not care, sorry [16:50] this confuses the hell out of me [16:52] relaxed, just want to say thank you for the static builds :-D [16:54] aacplus or faac? [16:54] schlitzer|work: you're welcome. [16:55] let me start over.... I have a shit ton of vobs that I'd like to convert into mp4 and maintain the audio and video quality [16:55] possible with ffmpeg? [16:56] yes [16:56] suprsonic: http://trac.ffmpeg.org/wiki/x264EncodingGuide [16:57] relaxed I'm a complete noob when it comes encoding [16:58] reading is the only cure for noobism [16:59] start with the url I just gave you and ask questions as you learn. [16:59] Hey guys, I'm trying to convert an audio file to ogg, and in my case it's an mp3 file which has artwork in it. Is there any way to tell ffmpeg to only convert audio tracks? I wouldn't mind the artwork inclusion, but I get this error: Encoder (codec none) not found for output stream #0:0 [17:00] thanks m8 [17:00] (btw, streams are: Stream #0:1 -> #0:0 (mjpeg -> ?), Stream #0:0 -> #0:1 (mp3 -> vorbis)) [17:00] would you recommend any aac encoder vs another relaxed? [17:01] aacplus vs faac [17:01] or x264 vs xvid [17:04] Frantic: ffmpeg -i input -map 0:a ... [17:04] relaxed: just found it, thanks :) [17:12] Hey guys [17:13] I'm working on an android application that for streaming live web cam feeds using FFMpeg as background engine. [17:14] FFmpeg for android has got compiled fine. However, when I'm trying to make use of libffmpeg for my demo app, I'm getting errors while doing ndk-build [17:15] Relevant details can be found http://pastebin.com/F7x1Tfea [17:16] This contains the android makefile and NDK build error log [17:16] Can anyone please tell where I'm doing it wrong? [17:35] static vs. shared, anybody? [17:36] I am not asking about the difference in concept, I know the concept. 
I am asking about why I can set this option both on libs used to compile ffmpeg (x264) and ffmpeg itself?! [17:37] when both static and dynamic libs are available, how do I tell ffmpeg to compile with the static or the dynamic version? [19:58] --enable-shared? [20:00] sorry, was playing bf3 [20:01] static: --enable-static, shared: --enable-shared [20:02] don't be sorry [20:03] durandal_1707: hah :) you reminded me of this: http://www.youtube.com/watch?v=Gqo417l3YSg&t=8s [22:01] What's the best AAC encoder one can get for free (as in beer) these days? [22:03] but what free (as in beer) means? [22:03] does it means you can tak beer and make beer factory of it and sell it? [22:03] at no cost [22:03] not as in freedom [22:04] at no cost - steal it [22:04] sacarasc: libfdk-aac [22:04] stop being obtuse [22:05] That's better than NeroAacEnc? [22:05] there's neroaacenc too [22:05] Any reason "-ac 2" would fail to downmix 7.1 to stereo. [22:06] Are you using a recent ffmpeg verison? [22:06] relaxed: just built it. [22:06] pastebin or it didn't happen [22:08] sacarasc: I have no idea which is better. Test them both. [22:09] Thanks. [22:10] I would say ffmpeg with libfdk-aac would be much easier to use. [22:12] sacarasc: i thin Nerosomething you need to buy or whatever [22:12] Nope. [22:14] http://ftp6.nero.com/tools/NeroAACCodec-1.5.1.zip [22:24] but note you may need to pay if you use encoder commercially [22:39] http://pastebin.ca/2431560 -ac 2 fails to downmix - output file has surround sound only. [22:48] Mista-D: file a bug report and include a small sample. [23:19] relaxed: http://ffmpeg.org/trac/ffmpeg/ticket/2860 [00:00] --- Wed Aug 14 2013 From burek021 at gmail.com Wed Aug 14 02:05:02 2013 From: burek021 at gmail.com (burek) Date: Wed, 14 Aug 2013 02:05:02 +0200 (CEST) Subject: [Ffmpeg-devel-irc] ffmpeg-devel.log.20130813 Message-ID: <20130814000502.9ACBF18A0270@apolo.teamnet.rs> [00:42] ffmpeg.git 03Thilo Borgmann 07master:b7ba7cbd6e5a: avcodec/tiff: Refactor TIFF tag related functions to share the code. [00:42] ffmpeg.git 03Thilo Borgmann 07master:ad0f7574effe: avcodec: Add EXIF metadata parser to libavcodec. [00:43] ffmpeg.git 03Thilo Borgmann 07master:bb4e1b4cf910: avcodec/mjpegdec: Read EXIF metadata in JPEG input. [01:17] llogan: "avilable" [01:24] shit [01:25] i'll fix that [01:25] i must have read it 3 times [02:30] it appears that iOS would like to have more than a few sound samples at a time for reading into its memory queue. Is there a way to control the interleaving of the audio / video samples, so that the sound data precedes the video data in bigger chunks? (like a second of audio data, followed by a second of video data), etc? [02:30] (using matroska, vp8 and AAC) [02:44] bernie_: no, you have to do proper buffering [02:45] so you might have to read some packets ahead in general [02:45] in the end, your playback chain should look at the timestamps of the _decoded_ data, and play them in sync [03:39] ffmpeg.git 03Alexis Ballier 07master:ca2378ad04e1: libavformat/version.h: Drop FF_API_OLD_AVIO (unused and undefined since libavformat 55) [04:11] wm4: thanks. Indeed, I intent to keep things in sync. Though for audio, it should be true that the samples pile up nicely. iOS has a way of querying the audio queue timeline, which I was planning to use to sync to the video. Sometimes the audio channel gets a hiccup, and so it's good to query the audio buffer timeline, and use that for syncing to the video. I assume that will work. 
But it does sound like I'll have to first r [04:12] so that got me thinking... if I really want a "chunk" of audio, and then a "chunk" of video data, why not have the muxer do that for me? Seems more efficient than to do that on the player side. [04:13] But I'm still debugging stuff. Still not hearing anything for AAC on iOS via matroska. There is plenty of magic in AAC I wasn't aware of. [04:19] I have a hard time matching the AAC Audio Object Types here (http://wiki.multimedia.cx/index.php?title=MPEG-4_Audio) with what iOS has (ie: kAudioFormatMPEG4AAC and kAudioFormatMPEG4AAC_* on this page here: http://developer.apple.com/library/ios/documentation/MusicAudio/Reference/CoreAudioDataTypesRef/Reference/reference.html) [04:20] Never mind, if I and how I would have to set a magic cookie... kAudioQueueProperty_MagicCookie SIGH. [04:20] but now I must pickup pizza... [04:27] ffmpeg.git 03Alexis Ballier 07master:7a48b1c49282: Remove FF_API_PKT_DUMP cruft. Not compiled since libavformat 54. [05:35] ubitux: ok I think loopfilter is correct now (not 100% sure, but mostly looks ok) [05:56] ubitux: so while you finish 32x32 intra pred modes (they're triggered by various vp90-2-00-quantizer-*.webm files, e.g. 55, 23, 01), I'll work on basic inter bitstream parsing [05:56] then once that's done, we'll do various pieces of inter reconstruction [06:26] Daemon404: I'll assume you're busy moving until you mention otherwise? [06:27] ragemoving [06:27] lol [06:27] i have to tether my phone for my first two weeks in the UK for internet (outside of work) [06:27] because the isp cant send anyone until after vdd [06:27] (sep 2nd) [06:27] miserable. [06:28] omg [06:29] is the whole company on strike or so? [06:29] it's some fun involving openreach, and a giant monopoly in the UK [06:29] either way im not very pleased [06:30] (what that means is every company uses the same installation service) [06:35] that sounds like a problem that is easily solved by adding a second installation service to the country [06:36] how un-British! [06:39] it's british to suffer needlessly? [06:39] stiff upper lip and all that [06:39] how odd [10:01] morning [10:32] morning [10:47] ffmpeg.git 03Ian Taylor 07master:46dee21a3238: png: allow encoding 16-bit grayscale [10:47] ffmpeg.git 03Michael Niedermayer 07master:1057390d916e: Merge remote-tracking branch 'qatar/master' [11:38] ffmpeg.git 03Piotr Bandurski 07master:e87fcaa8d5cc: avformat/riff: treat msn audio like gsm_ms [11:46] Daemon404: LAND OF HOPE AND GLORY [11:46] MOTHER OF THE FREEEEEE [11:47] I expect you to be singing along [11:47] during last night of the proms [11:47] you drinked too much? [11:48] see the part above about britain [11:48] it is a british joke [12:04] BBB: what is the step following intra pred, for error adjustment? (if any) [12:05] the block being intra pred (assuming !dc) will not have any dct/ast run, right? [12:06] i mean it's either dct/adst or intra pred, right? but is intra pred working as a full replacement for that dct/adst? [12:22] ubitux: error adjustment? [12:22] ubitux: what does that mean? 
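A rough C sketch of the read-ahead buffering suggested above, against the 2013-era libavformat API: demux packets and queue them per stream so the audio queue never starves, instead of relying on the muxer to interleave audio in big chunks. PacketQueue and queue_put are assumed helpers, not ffmpeg API; error handling and the stop condition are omitted:

    #include <libavformat/avformat.h>

    typedef struct PacketQueue PacketQueue;          /* your own FIFO type */
    void queue_put(PacketQueue *q, AVPacket *pkt);   /* assumed helper, takes ownership */

    static void demux_ahead(AVFormatContext *fmt, int audio_idx, int video_idx,
                            PacketQueue *aq, PacketQueue *vq)
    {
        AVPacket pkt;
        while (av_read_frame(fmt, &pkt) >= 0) {      /* packets arrive in muxer order */
            if (pkt.stream_index == audio_idx)
                queue_put(aq, &pkt);
            else if (pkt.stream_index == video_idx)
                queue_put(vq, &pkt);
            else
                av_free_packet(&pkt);                /* av_packet_unref() in later APIs */
            /* break once enough audio duration is queued (check omitted) */
        }
    }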
[12:23] ubitux: oh I see, no, the intra pred in all cases precedes a full inverse 2d transform [12:23] ubitux: the type of transform is mode-dependent (always 2d dct for inter modes, and adst/dct combinations for intra pred), and all that logic works already [12:23] ffmpeg.git 03Nedeljko Babic 07master:7b71feabfb31: MAINTAINERS: add myself as maintainer for MIPS and Zeljko Lukac as maintainer for new fixed point FFT [12:23] ubitux: plus for 32x32 it's always dct [12:24] i don't understand sth: isn't the intra pred some spacial/pixel prediction? [12:24] spatial [12:24] yes [12:24] how can you run a adst/dct after this step? [12:24] ah it's not pixel then [12:25] no it is pixel [12:25] you're predicting the contents of a 4x4 block (or 32x32 in your case) [12:25] then you code a residual, which you add/subtract to/from it [12:25] aaah [12:25] so final reconstructed pixel array is pred + residual [12:25] ok [12:25] now it makes sense [12:25] thx [12:26] the spatial prediction just makes the residual smaller (thus more efficient to code) [12:26] right ok [12:49] I'm relatively halfway doing the block parsing for inter frames [12:49] (although I still have to do the hard part, which is the motion vector coding, that'll take me a while) [14:15] BBB: btw, av_assert* are prefered (and use with --assert-level=...) [14:15] what is av_assert()? [14:16] more controllable asserts [14:16] see libavutil/avassert.h [14:16] av_assert0() ? all the time, and 1 & 2 for speed relevant code [14:17] uh ok [14:18] how's the intra pred code progressing? [14:18] well i'm still trying to understand some things [14:19] for example, trying to explain why the apparent inconsistency in the logic between the 4, 8 and 16 [14:20] ? [14:20] you mean the use of upperright? [14:20] (that's the only thing that's inconsistent I think) [14:22] if you take downleft for instance, 4 and 8 are using the same number of input (in comparison to 16), and 4 has not the "stairway form" [14:22] that kind of stuff [14:22] so i'm not sure what the "top" represent in those case [14:22] (and what form it will follow in 32) [14:23] 32 will be the same as 8/16 [14:23] so the amount of input is the size of the predictor [14:23] 4 is a specialcase, it uses 8 top pixels [14:23] 4 is top, the other 4 is topright [14:23] so for 32, a top 32 input, stairway form? [14:23] like this: 0 1 2 3 4 5 6 7 8 (above) [14:24] L x x x x [14:24] Last message repeated 3 time(s). [14:24] L is left [14:24] 0-8 is topleft, topx4 and toprightx4 [14:24] x is the predicted pixels [14:24] 8x8 and 16x16 only use left, topleft and top (not topright) [14:24] 32x32 follows that design also [14:25] yes [14:25] (sorry ignored your question) [14:25] why the special case for 4? [14:25] I think you're asking "why not the special case for the others (8, 16, 32) also?" [14:25] and the answer is "I don't know", it's probably a bug or not-enough-time [14:26] ah [14:26] ok [14:26] more pixels = more precise predictor [14:26] so using topright improves the predictor [14:26] not using it is thus silly if it's available [14:26] you'll notice it's also not used in all cases where it is available [14:26] another shortcoming [14:26] e.g. if you have a 8x8 block with 4 4x4 blocks in it (2x2) [14:27] are you talking about a bug in the original implementation, or ffvp9? 
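A toy illustration of the reconstruction order just described: spatial intra prediction first, then the inverse-transformed residual is added on top and clipped. This is not the ffvp9 code, just the "pred + residual" idea:

    #include <stdint.h>

    /* dst = clip(pred + residual) for an n-pixel block */
    static void reconstruct(uint8_t *dst, const uint8_t *pred,
                            const int16_t *residual, int n)
    {
        for (int i = 0; i < n; i++) {
            int v = pred[i] + residual[i];
            dst[i] = v < 0 ? 0 : v > 255 ? 255 : v;  /* av_clip_uint8() in ffmpeg */
        }
    }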
[14:27] if the complete row above us has finished decoding, topright is available for 3 of 4 blocks [14:27] bug in original [14:27] so ffvp9 is required to follow it [14:27] ok [14:27] topright is available for topleft, topright and bottomleft 4x4 block in 8x8 block [14:27] but not for bottomright [14:27] yet vp9 dictates we cannot use it for topright and bottomright, oddly [14:28] ok [14:28] thx [14:28] I should go to bed, flying off to thailand tomorrow [14:28] ask more questions, I'l lcheck email at the airport [14:28] or chat :) [14:29] bbl [14:29] gnight [14:29] 'night, hf :) [14:44] ffmpeg.git 03Paul B Mahol 07master:11afe28b9a56: lavfi/ebur128: fix typo: s/negociation/negotiation [15:29] ffmpeg.git 03Michael Niedermayer 07master:ef36ba5e088e: avcodec: clarify documentation of avcodec_copy_context() [15:29] ffmpeg.git 03Michael Niedermayer 07master:e85771f26814: ffserver: allocate rc_eq, prevent freeing invalid pointer [15:29] ffmpeg.git 03Michael Niedermayer 07master:cba9a40d47ae: avcodec: free priv_data in avcodec_copy_context() [15:37] ffmpeg.git 03Compn 07master:8e0b6d82b3dd: riff: add msn audio comment [15:38] been a while. usually i like to sit back and make others do the hard work :) [15:39] Compn: you did not send this on ml for a review [15:42] and there are other msn audio codecs [15:42] like this siren7 thing from aMSN that got posted on ml [16:36] ffmpeg.git 03Thilo Borgmann 07master:6a64b23d93d0: Update my email address. [16:51] hmm i guess i should use fmax and not FFMAX [19:53] ffmpeg.git 03Michael Niedermayer 07master:97064019279d: avcodec/mpeg12dec: check slice size before trying to decode it [21:23] michaelni: but did you see anything obviously wrong in output? [21:31] durandal11707, there are some artifacts vissible on diagonal edges, i guess they are at slice borders [21:32] that is from looking at it again, i didnt originally spot them when i quickly looked [21:34] and tiff does not crash any more? so i can apply it and stare at helgrind report growing [21:57] durandal11707, tiff didnt crash anymore [22:01] what would be the best strategy for adding an additional dithering method? add another sws_flag in addition to error_diffusion? [22:02] if there is enought flags left, sure why not... [22:02] finished off tuning my dither, http://pippin.gimp.org/a_dither/ [22:03] the formulas used are devilishly simple [22:06] how it behave on bigger image sizes? [22:06] because there was already attempt to make pal8 output looks better [22:07] ... thats the most useless use case [22:07] the current one (afaik) is just 8bit with dithering [22:07] so is this one [22:07] i would think the most used case would be dithering 10-bit yuv to 8bit [22:07] for playback [22:07] that is by far te most used dither branch in swscale [22:07] yes, sure... who use pal8 those days.... [22:08] in GIMP it will probably end up being used for 16bpc / 32bit single precision float -> 8bpc dithering [22:08] Daemon404: except it doesn't work [22:08] only works when doing dither with the same colourspace in and out [22:09] it cant dither between the same colorspace at a different bit depth? 
[22:12] durandal11707: the rightmost GIF is the "current one", and the middle one "a dither" [22:17] well it looks better than middle one for sure, and maybe even little better than bayer (this is just personal mumbo-jubmo) [22:17] durandal11707: I dislike the strobing blinking it has; very apparent on dark parts [22:17] but i guess optimizes pallete+dithering if necessary would beat all of them [22:19] there are also other more pathological videos one can create which would make error diffusion look a lot worse [22:19] its very little image and i'm on 1920x1080 [22:20] then listening to you is quite meaningless ;) [22:20] i personally have found the most consistently "same" dither is random dither [22:20] in fact i think lav filters uses this [22:21] you can replace the formulas in the bototm of this with a Math.random() [22:21] also, are you a gimp dev? [22:21] due to hte frequency distribution in the noise; it looks much clumpier and thus worse [22:21] i thought gimp had it's own (never update) colorspace lib [22:21] "a dither" is closer to a o a "blue noise" or "green noise" based mask [22:22] Daemon404: mhm, I'm a gimp dev, and also the author of http://gegl.org/babl/ as well as the maintainer of GEGL ;) [22:22] yes babl [22:22] last i checked it was basically dead... [22:23] your methods of life assesment are severely lacking [22:23] pippin: so you gonna work on libswscale now? [22:24] pippin, perhaps blame the site. it lists 0.1.3 as the newest [22:24] and only release in the last 3 years [22:24] sorry 0.1.2 [22:24] 0.1.2 is likely latest stable, but gimp dev depends on 0.1.3 [22:25] I'm not exzpecting any lecturing on API/ABI stability from here :p [22:25] no [22:25] i mean i see up to 0.1.8 in the dir [22:25] but the site only lists up to 0.1.2 [22:25] leading one to believe no new release have occurred since 2010 [22:25] and development vesions of GIMP likely depends on the unstable version in git [22:26] http://gegl.org/babl/ <-- this page [22:28] durandal11707: the only work I might end up doing on libswsale is hacking up a proper patch enabling that dithering metod for animated GIFs [22:28] ffmpeg.git 03Paul B Mahol 07master:8a7295beeb09: tiff: frame multithreading support [22:30] i think there is already tools that make much better quality-wise gifs than ffmpeg [22:31] where dithering is far less obvious [22:32] its ok though to add more generic dithering methods to libswscale, i think there is much missing there [22:33] "a dither" (I'm going to get fed up with that name) is a very simple approximation of http://white.stanford.edu/~brian/psy221/reader/Ulichney.Dithering.pdf [22:34] michaelni: SWResamples options are mentioned twice in 'ffmpeg -h full' [22:35] pippin: are you trying to patch swscale to improve the gif encoding support in ffmpeg? [22:36] hmm but michaelni already added some kind of error_diffusion dither... [22:37] durandal11707: the three big buck bunnies, are 1) bayer dithering from ffmpeg, 2) my hacked version of libswscale 3) michaelni's already added error_diffusion [22:39] woo... power outtage [22:40] ohh, imho swscale is one big mess, there is lot of thing to clean up [22:41] i wanted to do bunch of various ditherings but than abandoned idea when looked at code [22:43] AFAIK reasonable gif conversion requires knowing a large number of frames in advance [22:43] so this can't be in libswscale or just in a single-frame dither algorithm? 
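To make the size argument concrete: an ordered (mask-based) dither decides each output pixel from the input value and its (x, y) position alone, so regions that do not change between frames produce byte-identical output and tiny GIF deltas, while error diffusion propagates errors and can flip pixels everywhere. The sketch below uses the well-known 4x4 Bayer matrix purely as an illustration; it is not the "a dither" formula and not swscale code:

    #include <stdint.h>

    static const uint8_t bayer4[4][4] = {
        {  0,  8,  2, 10 },
        { 12,  4, 14,  6 },
        {  3, 11,  1,  9 },
        { 15,  7, 13,  5 },
    };

    /* quantize an 8-bit value to "levels" steps using a position-based threshold */
    static uint8_t ordered_dither(uint8_t v, int x, int y, int levels)
    {
        int step = 255 / (levels - 1);
        int t    = bayer4[y & 3][x & 3] * step / 16;   /* 0 .. step-1 */
        int q    = (v + t) / step;
        int out  = q * step;
        return out > 255 ? 255 : (uint8_t)out;
    }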
[22:43] also everyone just uses imagemagick to make gifs anywa [22:43] it does not need to be large [22:43] which has pretty goos dithering [22:44] good* [22:44] durandal11707: yes it has to [22:44] imagemagick uses all frames [22:44] fyi [22:44] you need to pick a good palette, and palette choices are always hard [22:44] i thought you use graphicsmagick [22:45] you mean the fork of imagemagick that nobody gives a shit about? [22:45] no i dont use that. [22:45] *magick has mostly different error diffusion weights IIRC [22:45] Daemon404: imagemagick use so much memory, so maybe i will switch [22:46] i think you will be disappointed [22:46] though, doing that in combination with median cut for generating an optimal palette based on all colors surely will get you better looking things than using an 332 rgb palette [22:47] compared to "a dither" though,. any error diffusion based method will give you the problem that a lot more pixel change between frames in the resulting GIF - doubling it's size [22:47] more than anything [22:47] im impressed animated gifs have made such a comeback [22:47] gif conversion also needs to do other processing than just palette selection and color conversion [22:47] yeah error diffusion does indeed produce giant images [22:47] nevcairiel, yes [22:48] also, noone managed to implement a replacement for animated gifs that got adopted by enough things [22:48] APNG duh [22:48] /troll [22:48] I have tested "a dither" on quite extensive color testing charts, and I am very happy with it overall as a "pal8" converter, that can even be used on the fly inside a "pset" [22:48] Action: durandal11707 cries for apng gsoc [22:48] http://www.kickstarter.com/projects/374397522/apngasm-foss-animated-png-tools-and-apng-standardi [22:48] lol trying to start normal OSS projects on kickstarter [22:49] durandal11707, -h full prints swr twice because its once from the aresample filter and once swresample itself [22:49] gnafu, doesnt mean jack unless you can travel through time and make old browsers support it [22:49] gnafu: you know whats the biggest issue with apng, for example on its demo page .. it doesn't fucking animate :P [22:49] michaelni: ahh [22:49] nevcairiel: perhaps you use ie [22:49] no, chrome [22:50] It animates in the only browser that matters to me ;D. [22:50] and it removed it (if it ever had it) [22:50] firefix is probably the only browser that supports it then :p [22:50] firefox* [22:50] I think opera used to, but they've moved to webkit now [22:50] does firefox not have The Worst Image Downscaling In Existence yet? [22:51] did the yfix that? [22:52] Oh, and: https://chrome.google.com/webstore/detail/apng/ehkepjiconegkhpodgoaeamnpckdbblp [22:53] wm4: what other processing does GIF conversion need to do? [22:54] an unacceptable patch, in case someone just wants to test it: http://pippin.gimp.org/tmp/0001-swscale-hack-up-error-diffusion-to-be-a-dither.patch [22:56] pippin: I don't know, but for example stabilizing parts of the gif that don't move much can be important (since even static parts of a TV capture etc. 
can be noisy) [22:56] I just know that even naive imagemagick conversion doesn't give very good results [22:56] wm4: you would normally scale it down first; and scaling down with a sane resampler would cancel out that noise [22:57] wm4: at least beyond the level of noise neccesary to add for dithering [22:58] I've seen a lot of GIFs lately worse than what ffmpeg produces with that hack [22:58] i think those are from older ffmpeg version [22:59] *sarcasm* [23:53] ffmpeg.git 03Timothy Gu 07master:bbbd9596ad8f: doc/filters: reformat scale filter doc [00:00] --- Wed Aug 14 2013 From burek021 at gmail.com Thu Aug 15 02:05:01 2013 From: burek021 at gmail.com (burek) Date: Thu, 15 Aug 2013 02:05:01 +0200 (CEST) Subject: [Ffmpeg-devel-irc] ffmpeg.log.20130814 Message-ID: <20130815000501.56B1B18A00B5@apolo.teamnet.rs> [01:57] I'm having an issue transcoding an mpegts created with mythtv. The first keyframe is missing from the output file. However, the frame is present in both vlc, and a transcode created via vlc. I'm assuming that the problem lies in the video codec parameters. [02:00] Here's a pastebin with the command line and ffmpeg output. http://pastebin.com/T9KTrwQw [02:26] VLC decodes and plays the file properly. http://pastebin.com/RDDeqYin [04:01] Hello All [04:03] need some help if anyone can help after yasm part of install doc i ov on to x264 and on the configure part im getting Found no assembler [04:03] Minimum version is yasm-1.2.0 If you really want to compile without asm, configure with --disable-asm [04:04] I'm having an issue transcoding an mpegts created with mythtv. The first keyframe is missing from the output file. However, the frame is present in both vlc, and a transcode created via vlc. I'm assuming that the problem lies in the video codec parameters. [04:04] Here's a pastebin with the command line and ffmpeg output. http://pastebin.com/T9KTrwQw [05:11] http://pastebin.com/YjdKGQKq [05:17] Why does transcoding an mpegts stream to avi, then mkv correct a/v sync problems? [05:25] http://pastebin.com/DR4shxFT [07:07] has anyone used ffmpeg to stream to Akami? [07:24] http://illogicallabs.com/paste/00000008.txt [15:22] A file produced by re-muxing an mpegts to mkv has a/v synchronization issues. However, when the mpegts is converted into an avi then mkv, the synchronization problems are gone. Any ideas? [16:32] I seem to have a problem with "-f yuv4mpegpipe" [16:32] It does not produce a header [16:32] Though it will if I use -f y4m and write it to the hard drive [16:33] oops [16:33] I did not use -f y4m [16:33] I just used the extension [16:36] Demon_Fox: You must have muxed it into a container. It works here. ffmpeg -i input -t 1 -f yuv4mpegpipe - 2>/dev/null | sed q [16:36] YUV4MPEG2 W768 H432 F30000:1001 Ip A1:1 C420mpeg2 XYSCSS=420MPEG2 [16:36] Here is what the file says [16:37] YUV4MPEG2 W1280 H720 F24000:1001 Ip A1:1 C444 XYSCSS=444 [16:37] I need chroma 444 for a filter I am writing [16:37] I just need to figure out how to pipe it correctly [16:38] That is the header, so what is the problem? [16:38] hmm [16:38] Could you do me one favor [16:38] Just use -f rawvideo [16:39] Could you write a line to a video with the header you have except it needs chroma 444 [17:09] i would not use a header. i would do what relaxed just said [18:50] Hi! I just encoded a video using ac3 but it seems that there are interruptions in the audio. Is this command correct? 
ffmpeg -i video-2013-08-14-17-56-46.mp4 -vcodec h264 -preset veryslow -crf 22 -acodec ac3 video-2013-08-14-17-56-46_high.mp4 -threads 4 [18:51] -vcodec h264 --> -vcodec libx264 [18:52] remove -threads 4 (and ot goes before the output) [18:52] it* [18:53] relaxed: I'll try that way, thanks [19:15] Hi all [19:53] I'm having an issue transcoding an mpegts created with mythtv. The first keyframe is missing from the output file. However, the frame is present in both vlc, and a transcode created via vlc. I'm assuming that the problem lies in the video codec parameters. Here's a pastebin with the command line and ffmpeg output. http://pastebin.com/T9KTrwQw [19:53] VLC decodes and plays the file properly. http://pastebin.com/RDDeqYin [20:09] interesting. [20:10] but you're not technically transcoding the file. [20:10] you're merely remuxing it. [20:10] Action: file can not be remuxed [20:10] sorry, file. lol. [20:11] I get it a lot, keep on going [20:11] there's something even funnier about your input command line vs. your output. [20:12] you are ostensibly remuxing to matroska (.mkv). [20:12] yet your output is named "output.mp4". [20:12] are you being honest with us, shoop_da_whoop? [20:13] you're not even here, heh. [20:13] Hello All [20:15] is there a work around or another how to doc on Compiling on 12.04.2 lts ? [20:17] relaxed: still here? Nothing changed after the change. Anything else I could be doing wrong? [20:19] Anyone who notices anything wrong in this command for transcoding? ffmpeg -i video-2013-08-14-17-56-46.mp4 -vcodec libx264 -preset veryslow -crf 22 -acodec ac3 -threads 4 video-2013-08-14-17-56-46_high_aac.mp4. The problem is that the resulting video has audio interruptions. [20:23] relaxed: wow& "nothing changed after the change" is really a paradox& :-D I mean with the fixed command [22:24] Hello [22:26] Can some1 help me install ffmpeg so that it, AND all the libraries, are available to all users including www-data (apache)? [23:20] Last message repeated 1 time(s). [23:24] im trying to do ivtc but this '-vf "fieldmatch=order=tff:combmatch=full, yadif=deint=interlaced, decimate"' crashes ffmpeg [23:26] sorry [23:26] brb [23:30] I'd like to destroy audio, changing the samplerate from 44.1kHz to 8kHz, for purposes [23:30] (educational!) [23:30] -ar 8000 [23:30] well, this errors, saying it's not supported [23:31] not supported by what codec? [23:31] Impossible to convert between the formats supported by the filter 'Parsed_anull_0' and the filter 'auto-inserted resampler 0' [23:31] Error opening filters! [23:31] flac? [23:31] flac :) [23:31] I'm using 48kHz flac as the source [23:31] don't use flac [23:31] well, I tried vorbis [23:31] and? [23:31] http://bpaste.net/show/ZDA2FYZ1oHTF70P6CNw7/ [23:31] same thing! [23:31] what about mp3? [23:32] let's see [23:32] -acodec mp3? same! [23:32] the speex codec should support 8Khz [23:33] ffmpeg -i source.flac -ar 8000 -acodec mp3 /tmp/test.mp3 [23:33] this errors to the same thing as above [23:33] er, -acodec speex produces the same result again [23:34] is my command wrong? [23:34] usually the sample rate is set after you specify the audio codec you want to use [23:35] ffmpeg -i source.flac -acodec mp3 -ar 8000 /tmp/test.mp3 ? no change! [23:35] drop mp3 [23:35] speex definitely supports 8kHz [23:36] just need to figure this out [23:36] s/mp3/speex/ doesn't help [23:36] obviously you need the right file extension [23:36] because that sets the muxer to be used [23:36] I tried spx [23:36] mkv? 
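For the 8 kHz experiment above: on a healthy build, setting -ar on the output (or using the aresample filter, which comes up just below) is all the command needs; the filter-negotiation error looks more like the broken installation that surfaces later in this exchange than a wrong option. Illustrative commands only:

    # plain output sample-rate option
    ffmpeg -i source.flac -ar 8000 test_8k.wav
    # the same downsample expressed as a filter
    ffmpeg -i source.flac -af aresample=8000 test_8k.wav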
[23:36] I don't know what it is [23:36] let's see [23:37] yep, it's .spx. [23:37] ok, does it work? [23:37] not at all [23:37] is your ffmpeg compiled with speex support? [23:37] the error I pasted earlier always happens [23:38] let's see [23:38] DEA.L. speex Speex (decoders: libspeex ) (encoders: libspeex ) [23:38] says ffmpeg -codecs |grep speex [23:38] it could be that "ar" is the brute-force version of resampling [23:38] isn't there aresample [23:39] http://ffmpeg.org/ffmpeg-filters.html#aresample-1 [23:39] so -af aresample=8000 [23:40] oh, this doesn't error anymore. compilation problem though(?) [23:40] ffmpeg: relocation error: /usr/lib/libavfilter.so.3: symbol swr_next_pts, version LIBSWRESAMPLE_0 not defined in file libswresample.so.0 with link time reference [23:40] bwargh wtf [23:41] library version mismatch I guess [23:41] uh well, the ffmpeg binary and this lib belong to the same package [23:42] but hum, I have an update, let's not consider this a problem until I apply it. [23:42] just get a statically compiled ffmpeg [00:00] --- Thu Aug 15 2013 From burek021 at gmail.com Thu Aug 15 02:05:02 2013 From: burek021 at gmail.com (burek) Date: Thu, 15 Aug 2013 02:05:02 +0200 (CEST) Subject: [Ffmpeg-devel-irc] ffmpeg-devel.log.20130814 Message-ID: <20130815000502.5F4B118A00C6@apolo.teamnet.rs> [00:01] ffmpeg.git 03Piotr Bandurski 07master:1ee1a3d9f4e1: avcodec/gsmdec: reject unsupported msn audio modes [00:20] michaelni: a patch that makes this dithering method a run-time option http://pippin.gimp.org/a_dither/0001-swscale-add-a_dither-dithering-method-for-GIFs.txt [00:25] hm, I spot a mistake - the flag value should probably be quite a bit lower [00:55] pippin, the code can probably be optimized a bit, its doing quite a bit of stuff per pixel [00:58] the most natural optimizatiob would be to use a lookup-table the size of the framebuffer [00:59] which is what a proper blue noise dither also would need in terms of infrastrucure [01:00] (blue noise dithering, is doing the same thing, but with an expensively pre-computed threshold mask, yielding qualities similar to error-diffusion) [01:01] also looking at your webpage the dithers have a different color tint / contrast [01:02] either the existing ones are off or the new one is off [01:02] whatever is, should be fixed [01:02] or they all are "off" in slightly different ways [01:03] though, for minimizing the average color - error diffusion should in principle be exact when done corectly [01:05] the equation looks like it would produce a pattern that repeats in 512x512 blocks [01:06] while we're on the subject, how would you feel about a random dither using libautil's PRNG, michaelni? [01:07] libavutil* [01:10] Daemon404: plain random ends up with more objectionable clusterings of pixels [01:11] plain random is not good for gif "compression" btw :( [01:11] Daemon404: you can try replacing the case 1: in the code at http://pippin.gimp.org/a_dither/ with "case 1: return input < Math.random() ?0:1;" and compare with case 2 [01:12] Daemon404, i think we should add whatever is best / has users who want to use it / has usecases. 
i dont know if that applies to random dither [01:13] ubitux, again i dont mean for gif [01:13] about PRNGs, a good LCG (64bit and discard the LSBs) will look indistinguishable from a perfect one for dither so no need for PRNG from avutil [01:14] michaelni, i only thought we'd liek deterministic behavior [01:14] yes deterministic behavior is preferred [01:14] what I use for deterministic random in GEGL is: https://git.gnome.org/browse/gegl/tree/gegl/gegl-random.c [01:17] which gives a per-pixel random access sequence (with an additional seed) [01:26] not that it matters for our purpose here but how does this prng perform in the various tests (diehard, u01) ? [01:28] if i do need a good prng, i tend to use marsaglias kiss99 (one possible implementation of that:http://www.ffmpeg.org/~michael/git/noe/random_internal.h) [02:26] michaelni: no idea; I have only done perceptual evaluations of it [02:27] though I would imagine it to work well; if the initial random data it is seeded with is of good quality [03:04] ffmpeg.git 03Michael Niedermayer 07master:640a36a05c4d: ffmpeg_filter: check that the input media type match the filter [10:17] ffmpeg.git 03Luca Barbato 07master:aae159a7cc4d: nuv: Do not ignore lzo decompression failures [10:17] ffmpeg.git 03Michael Niedermayer 07master:7ec7d626a121: Merge commit 'aae159a7cc4df7d0521901022b778c9da251c24e' [10:44] ffmpeg.git 03Luca Barbato 07master:075dbc185521: nuv: Pad the lzo outbuf [10:44] ffmpeg.git 03Michael Niedermayer 07master:86fe16a763ab: Merge commit '075dbc185521f193c98b896cd63be3ec2613df5d' [10:54] ffmpeg.git 03Luca Barbato 07master:feaaf5f7f0af: nuv: Reset the frame on resize [10:55] ffmpeg.git 03Michael Niedermayer 07master:1dee467d262d: Merge commit 'feaaf5f7f0afac7223457f871af2ec9b99eb6cc6' [11:09] I wish ffmpeg was not converted to output FLTP instead of S16. Reason: When it was S16, ARM devices without VFP was working smoothly. After the FLTP change, ARM devices require VFP otherwise library becomes unusable because of the floating point operations :-/ [11:10] the decoder internally always used floating point, it just converted to S16 before output [11:10] which is why the switch made so much sense, directly output the internal format except converting it everytime [11:11] some decoders have fixed point alternatives, like mp3 [11:11] that is strange because codecs those accepting S16 as requested format works smoothly on devices without VFP but not the same for FLTP to S16 (swr_convert -ed codecs) [11:12] I think "mp3" (as you've noted) is one of those accept S16 as the requested format. 
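To illustrate the FLTP/S16 point: since decoders switched to planar-float output, getting interleaved S16 back means an explicit libswresample conversion along these lines (2013-era API, error handling trimmed, names made up). Note the conversion itself is still floating-point math, which is exactly the no-VFP complaint above:

    #include <libswresample/swresample.h>

    /* build a converter from planar float to interleaved S16, same layout and rate */
    static SwrContext *fltp_to_s16(int64_t ch_layout, int sample_rate)
    {
        SwrContext *swr = swr_alloc_set_opts(NULL,
            ch_layout, AV_SAMPLE_FMT_S16,  sample_rate,   /* output */
            ch_layout, AV_SAMPLE_FMT_FLTP, sample_rate,   /* input  */
            0, NULL);
        if (swr && swr_init(swr) < 0)
            swr_free(&swr);                               /* sets swr to NULL on failure */
        return swr;
    }

    /* per decoded frame:
     *   swr_convert(swr, out_planes, max_out_samples,
     *               (const uint8_t **)frame->extended_data, frame->nb_samples);
     */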
[11:51] ffmpeg.git 03Luca Barbato 07master:2df0776c2293: nuv: Use av_fast_realloc [11:51] ffmpeg.git 03Michael Niedermayer 07master:8da4305eb520: Merge commit '2df0776c2293efb0ac12c003843ce19332342e01' [11:55] ffmpeg.git 03Luca Barbato 07master:62cc7a910801: rtjpeg: return meaningful error codes [11:55] ffmpeg.git 03Michael Niedermayer 07master:d12bc01ec55a: Merge commit '62cc7a91080194d9ead162516f779f20931220d9' [12:03] ffmpeg.git 03Luca Barbato 07master:f13fe6020e6a: rtjpeg: Use init_get_bits8 [12:03] ffmpeg.git 03Michael Niedermayer 07master:2bac839bd3c3: Merge commit 'f13fe6020e6a3871f9b0c96b240e58e6ed4fb5d7' [12:22] ffmpeg.git 03Luca Barbato 07master:3562684db716: ogg: Always alloc the private context in vorbis_header [12:22] ffmpeg.git 03Michael Niedermayer 07master:070c22d53a2b: Merge commit '3562684db716d11de0b0dcc52748e9cd90d68132' [12:30] ffmpeg.git 03Luca Barbato 07master:5268bd2900ef: segafilm: Error out on impossible packet size [12:30] ffmpeg.git 03Michael Niedermayer 07master:ab06436dbff5: Merge commit '5268bd2900effa59b51e0fede61aacde5e2f0b95' [12:35] ffmpeg.git 03Martin Storsj? 07master:2427ac6ccd86: rtpproto: Update the parameter documentation [12:35] ffmpeg.git 03Michael Niedermayer 07master:1a01f367a40f: Merge commit '2427ac6ccd868811d1fe9df7c64c50ca58abe6f6' [12:42] ffmpeg.git 03Martin Storsj? 07master:6b58e11a8331: rtpproto: Add an option for writing return packets to the address of the last received packets [12:42] ffmpeg.git 03Michael Niedermayer 07master:2425be894a52: Merge commit '6b58e11a8331690ec32e9869db89ae10c54614e9' [12:48] zidanne: AAC and AC3 have a fixed-point implementation, recently updated by MIPS iirc [12:49] I think you need to manually specify you want the FP decoder rather than the float one [12:52] ffmpeg.git 03Martin Storsj? 07master:b56fc18b20d6: sdp: Add an option for sending RTCP packets to the source of the last packets [12:52] ffmpeg.git 03Michael Niedermayer 07master:20904518e98b: Merge remote-tracking branch 'qatar/master' [12:55] kurosu_: would you mind giving more hint about this? How can shall I manually specify? (the name of the fixed point decoder, etc..) [12:58] sorry, I don't remember, maybe -acodec [12:58] zidanne: I think this is in the documentation [14:21] michaelni: updated patch ( http://pippin.gimp.org/a_dither/0001-swscale-add-a_dither-dithering-method-for-GIFs.txt ) and sample, based on measurments - I was trying to evade too much noise; this trims down the size of the code as well. [14:25] (the measurements were of the actual tone reproduction curve / gamma of the half-toning method) [14:25] this improves the color reproduction [14:33] pippin, i suggest instead of using flags, a enum dither should be added to the context otherwise not only will we run out of flags we also have ambigous cases where multiple dithers are enabled [14:35] and please use something longer than 1 letter to identify the dither unless thats really supposed to be its name and is uniquely identifying it [14:36] it is the name I have chosen for the method; though - painful as it is; that is just one of the 2 (or 4) variations I've ended up playing with out of many more permutations... [14:38] lets hope no other human on the planet will pick the first letter of the alphabet to identify their dither algorithm then ;) [14:38] it is worse it is "a dither" it isn't "a" [14:39] the name is really quite .. bad. 
:) "a dither" could also simply be the english "a", describing "one" dither [14:39] you should invent a new name :D [14:40] nevcairiel: be careful now,.. [14:41] i dont care for careful [14:41] its boring [14:41] http://pippin.gimp.org/git/cairo/log/ <- that is what I named a cairo "backend", pronounced "circle" - in english [14:43] isrgb8?4:4 [14:43] and " (A_DITHER (i+67,y) - 128) / (isrgb8?4:4);" does 2 shifts which could be changed to 1 [14:43] I'd be disappointed in a compiler not catching that one [14:44] (the isrgb8?foo:foo) one [14:44] of course it does, still can be cleaned up in the code [14:45] not doing it, makes the code align with the line above and below [14:45] also the / (2^x) should probably be >>x, the compiler will optimize that one but with signed numbers rounding differs so it will need extra code [14:45] nonsene code for alignment? o.O [14:46] I'm not going to defend it hard; it is an artifact from me initially not caring about the !PAL8 case, and then mechanically transforming the code [14:49] will provide a new patch in some minutes; going afk for some mins first [14:51] pippin: is your nick named after the dog pippin? [14:52] durandal_1707: are you going to asterix? [15:03] Action: michaelni afk [15:09] kierank: you want to hire bodyguards? [15:11] durandal_1707: no [15:12] It's michaelni that wants bodyguards afaik [15:13] what good are bodyguards that you dont hire yourself [15:13] hm, unifying the two shifts; ends up obfuscating the logic of the algorithm with more obscure shifting amounts [15:15] obfuscated - in that the numbers make sense in the wrong way; and if the method is reused for rgb565 16bpp naively - the results would be wrong [15:22] kierank: i'm big, strong and fast [15:23] I don't plan to fight you or anyone else [15:27] durandal_1707 is an important tactical advantage in the expected deathmatch between ffmpeg and libav devs [15:29] thats gonna be fun [15:55] wm4 : two projects enter, one project leaves [15:56] deathmatch of awkwardness [16:00] pippin / michaelni: I propose yadith as the name of that yet another dither algorithm [16:01] Action: durandal_1707 ugh [16:09] ffmpeg.git 03Paul B Mahol 07master:93f4277714ff: WavPack encoder [16:09] r9 [16:17] michaelni: http://pippin.gimp.org/a_dither/0001-swscale-add-a_dither-dithering-method-for-GIFs.txt <- that is how it looks optimized for speed playing manual compiler, taking all the trinary operators out of it [16:33] I couldn't find fixed point arc decoder, there is only fixed point encoder? [16:33] *arc= aac [16:33] pippin, when calculating pixels from left to right A_DITHER(i,y) ; A_DITHER(i + 67,y); ... calcukate the same values 3 times [16:35] michaelni: yes [16:35] zidanne: native aac encoder is not fixed point [16:35] but any compiler worth its salt has common subexpression elimination, and I'v even taken loads of conditionals out of the inner loop... [16:36] Action: pippin leaves any further tweaks to ffmpeg devs - especially touching enums vs flag differences :) [16:37] but ffmpeg compiles on bunch of worthless compilers [16:37] pipin ive not yet seen gcc create a ring buffer to save and reuse sub expressions [16:37] so, I am mixing things.. I must stop drinking bear while coding 8-|. 
I need fixed point aac decoder because armv6 devices without VFP can't handle smooth streaming [16:37] beer (: [16:38] michaelni: if you that is the part you are referring to; then you are willing to optimize this PAL8 thing much further than I am ;) [16:39] (the part where it recomputes the same value after >60 pixels) [16:39] using a full 512x512 lookup-table would be better then, for anything but micro-controllers [16:39] i dont mind a 512x512 LUT if that happens to be faster [16:40] but it seems a bit big to me [16:40] I'd guess it to be faster by ~10-15% of the overhead over doing just shifting [16:41] (based on measurements of whether this was few enough instructions to beat a LUT, it is not) [16:42] should I use libfaad for fixed point aacp decoding? [16:43] Action: pippin needs to stop dithering and focus on some font code instead :) [16:43] zidanne, look at the ML there are some patches for a aac fixed oint decoder [16:44] zidanne, if you test it please post your benchmark results [16:44] shame on me; what is ML? [16:44] mailing list [16:44] ffmpeg-devel [16:45] btw, I have a hunch about what the wrongly encoded GIFs I've seen is [16:45] I don't think GIF deals with more than 254 unique colors when used for animation... [16:46] unless one start mucking with per frame color maps [16:46] michaelni: http://lists.ffmpeg.org/pipermail/ffmpeg-devel/2013-June/144864.html [16:48] I couldn't get attachments from archives [16:48] pippin, btw how does your dither compare to the other 2 ? (for example in terms of gif compression) [16:50] zidanne, try a message that contains a patch [16:51] michaelni: on the bunny clip I've used, with bayer as a baseline: bayer 100% (2.2mb) a-dither: 127% (2.8mb) error-diffusion: 300% (6.6mb) [16:52] the reason error-diffusion loses out is that it keeps changing all pixels all the time, making huge delta frames [16:52] so what makes a-dither better than bayer ? [16:53] many more dithered gray-levels, and less apparent patterning [16:53] both bayer and a-dither are positionally stable [16:53] s/gray-levels/colors/ [16:53] did you try cluster & void dither ? [16:53] its also positionally stable [16:54] cluster and void, are those other names for blue and green noise? [16:58] i suspect, no [17:00] void and cluster is blue noise [17:01] no, I haven't tried it - might end up implementing it later for GEGL/GIMP, a dither is enough for my eink needs [17:02] "implement it" stated loosely;,. more preceisely implementing something more like blue/green noise LUT masks, rather than using a-dither which is a procedural such threshold mask [17:03] the core working mechanism is exactly the same for this and that method though, in the live js web example I use the comparison with the threshold mask instead of mixin it in as noise like this when doing multiple levels [17:10] void and cluster might produce a blue noise like spectrum but iam not sure this is true the other way around, id have to find and read some paper [17:29] I couldn't find Fixed Point AAC decoder files. So I will try to exclude armv6 devices those do not have VFP. 
: [17:29] :/ [17:36] zidanne, you could post in the thread and ask the mips devels if they have a git repo with the patches applied [17:37] it probably makes sense for other people too if that can simply be checked out instead of applying multiple patches from the ML [17:37] pippin, see "FFmpeg devel (7.2K) [FFmpeg-devel] [PATCH] sws: add dither enum" [17:37] review/comments welcome [17:55] michaelni: I think you'd want to check on something other than error_diffusion in utils [17:56] michaelni: since that is another location where I had to hook in when adding a-dither [18:03] hi, i have a problem, I try to use ffmpeg API in VS2010, I add the libs but I still have an error : [18:03] Error3error LNK2019: unresolved external symbol "int __cdecl avcodec_decode_video2(struct AVCodecContext *,struct AVFrame *,int *,struct AVPacket const *)" (?avcodec_decode_video2@@YAHPAUAVCodecContext@@PAUAVFrame@@PAHPBUAVPacket@@@Z) referenced in function "public: void __thiscall rtspPlayerObject::startCapture(void)" (?startCapture at rtspPlayerObject@@QAEXXZ) [18:03] I've downloaded the binaries from ffmpeg.zeranoe.com [18:04] do you have an idea ? I link all the libs I have in the folder [18:04] calling convention.. [18:05] is it possible to fix it ? 'cause for avformat_open_input i have no problem [18:05] ... you need lbavcodec [18:05] libavcodec* [18:06] I have it, in the linker's settings I have : avcodec.lib avdevice.lib avfilter.lib avformat.lib avutil.lib postproc.lib swresample.lib swscale.lib [18:12] I just tried using #pragma comment, but still the same error [18:12] you need to set C linkage around the header imports [18:13] ie: extern "C" {\n #import "libavcodec/avcodec.h" \n} [18:13] i did it [18:14] ehrr, that's what i meant, not calling convention, sorry. saw cdecl vs thiscall and brain farted. [18:14] oups may be I missed one [18:14] it's linking ! [18:15] thank you [18:18] "import" ? [18:19] someone writing too much objc? 8) [18:19] he got the idea [18:20] also, java [18:32] nevcairiel, I hope it's at least java7 :) [18:42] I have a crash on "avformat_open_input", I try to open an rtsp stream but but I reach this function, I have an error about memory and the debugger says that the problem is out of my own code like QWidget class or _unlock function ... do I need to do something particular ? I have : avformat_open_input(&pFormatCtx, "rtsp://192.168.1.17/h264", NULL, 0); and pFormatCtx is defined like this : AVFormatContext [18:42] *pFormatCtx; [19:13] michaelni: cause of corrupted GIF encoding detected; gif.c does not deal correctly with frames with no change, encode a sequence of the same frame to reproduce [19:19] ffmpeg.git 03Michael Niedermayer 07master:dabfa80ce27f: avcodec/mjpegdec: print a message when there was just a single field and no frame [19:30] pippin, i tried to encode lena.pnm as 6 images in a gif, the file plays fine with ffplay thogh, how do i reproduce it ? 
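One quick way to reproduce the no-change case described above is to feed the same still image several times, for example (file name arbitrary; -loop 1 tells the image2 demuxer to repeat the input):

    ffmpeg -loop 1 -i lena.png -frames:v 6 out.gif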
[19:31] play it with mplayer, or try to open it in gimp [19:32] if frames that end up having 1x1 difference in the lower right are not written to the stream; those decoders no longer complain [19:33] (at least one web browser I tested in loops at the wrongly encoded no-diff frame) [19:38] michaelni: (here gimp prints a warning and continues; mplayer prints a warning as well as quits) [19:41] michaelni: for such trivial stuf you do not need review [19:42] durandal_1707, i just didnt want to step on anyones toes by doing such simplifications without asking [19:43] also you might want to add yourself to maintainers for wavpackenc [20:01] -:#ffmpeg-devel- [freenode-info] channel flooding and no channel staff around to help? Please check with freenode support: http://freenode.net/faq.shtml#gettinghelp [21:59] XwZ: use a debug build without stripping enabled. [22:15] ffmpeg.git 03Alexander Strasser 07master:dc2e4c2e532b: lavf/wavdec: Fix seeking in files with unaligned offsets [23:20] ffmpeg.git 03Michael Niedermayer 07master:5dd5985e0530: avcodec/gif: move BITSTREAM_WRITER_LE up [23:20] ffmpeg.git 03Michael Niedermayer 07master:b4e2e0370999: avcodec/lzwenc: Add 1 additional bit of padding for gif [23:20] ffmpeg.git 03Michael Niedermayer 07master:796b20fa1cef: avcodec/gif: use the whole allocated buffer [23:20] ffmpeg.git 03Michael Niedermayer 07master:1a53ddd9a295: avcodec/lzwenc: change asserts to av_asserts [23:26] pippin, the gif "corruption" should be fixed [23:38] was that lzw change really required? [23:38] it could be gimp bug [23:54] durandal_1707: then it is an mplayer, chromium and firefox bug as well [23:54] Action: pippin has decided to label durandal_1707 mr-stop-energy ;] [23:55] haha, no i prefer real solutions and not hacks [23:56] durandal_1707: considering your complains stop-energy is my real solution ;) [23:58] michaelni: and yep - none of the 4 decoding engines that had hickups have them now [23:59] pippin: hmm, what specific complaining? [00:00] --- Thu Aug 15 2013 From burek021 at gmail.com Fri Aug 16 02:05:02 2013 From: burek021 at gmail.com (burek) Date: Fri, 16 Aug 2013 02:05:02 +0200 (CEST) Subject: [Ffmpeg-devel-irc] ffmpeg-devel.log.20130815 Message-ID: <20130816000502.9973C18A033D@apolo.teamnet.rs> [00:03] durandal_1707: not much I'd find in my intentionally short irc-backlog - likely more about delivery than message [00:04] durandal_1707: but the "it could be a gimp bug" is insulting me ;) [00:09] durandal_1707: ( btw michaelni's commit messages is incomplete; .. neither mplayer nor any web-browser I have tried likes the previous .GIFs either ) [00:10] i used gimp to debug it, which was a more or less random pick [00:12] it gave the least vague - though still quite odd - debug warning during loading of the decoders [00:12] i still do not know what is actual bug that commit hypothetically fixed [00:15] so fix may be just band-aid [00:15] durandal_1707: insert 2 or more identical frames in your video to be made gif of.. and most browsers would hick-up and loop on that frame back to the beginning [00:16] i'm not saying previous solution was perfect [01:31] mmh i wonder if mails from my new address will pass [01:32] "Sender address rejected: Greylisted, try again later" aw [01:40] Action: ubitux just learned about greylisting, funny. [01:43] i love people who use automated -listings [01:43] because then i automatically stop trying to contact them :) [01:56] weird configuration [07:23] ubitux: any 32x32 intra pred patches ready for merging or review yet? 
[09:59] BBB: basically just one, but i'm "waiting" for the others because i'm still unsure about having them unrolled [10:01] how did you implement the first one? unrolled or looped? [10:01] unrolled [10:02] but i generated it [10:03] (it helps me see if the unrolled logic is that much smaller) [10:03] (at least i was hoping so :p) [10:03] and yeah sorry again for being slow [10:04] i'll try to rush thing a bit [10:04] how much code is it? [10:04] (compared to e.g. the 16x16) [10:05] like quite literally 4x the screen content? or better? or worse? [10:05] my code didn't 80-line break, so it doesn't look 4x, but i believe it is [10:06] is/will-be [10:06] isn't this code supposed to be simd at some point btw? [10:07] i mean, is it worth the effort trying to unroll-optimize the C versions? [10:07] it'll be simd'ed yes [10:08] I wrote some simd for libvpx' intra pred already, which we can use in ffmpeg also [10:08] (it's not very difficult) [10:08] ok [10:08] whether it's worth having an unrolled or not is totally up to you I think [10:08] isnt this something a good compiler could even unroll? [10:08] btw, i had to "trick" to build your branch recently because of the VLA [10:09] nevcairiel: do you know a good compiler? [10:09] if you really cared you could look up the asm of that function after compiling the looped version [10:09] BBB: my question was also about 16 and 8 actually, not specially the 32 implem [10:09] or you just join the not caring train and hope for simd :) [10:10] ubitux: oh right that VLA [10:10] ubitux: I'll kill that, it's a hack b/c I'm lazy [10:11] ubitux: right... well I'm not against making 8x8/16x16 loopy also, but do keep in mind that this code may be re-merged with h264pred.c and possibly shared with hevc (which also has large intra predictors, and they're presumably similar), so if that's the case then maybe performance of the C is slightly more important? not sure, maybe ask michaelni [10:12] ok :) [10:12] i'll eventually check how it's done in hevc [10:17] mailing list didn't help yet. anyone knows a git repo which has fixed point aac* decoder or it's patches? [10:29] Are there any implementations of integrating opencore-aacdec into ffmpeg? [11:35] ffmpeg.git 03Michael Niedermayer 07master:bfbe07670bb0: wavpackenc: simplify "sign = ((sample) < 0) ? 1 : 0;" [11:35] ffmpeg.git 03Dave Yeo 07master:c3386bd5b4d3: rtpproto: Check for the right feature when reading a sockaddr_in6 [11:35] ffmpeg.git 03Michael Niedermayer 07master:2c959eccc69c: Merge remote-tracking branch 'qatar/master' [12:07] zidanne, you can just apply the patches from the ML to a matching old version to get fixed point aac if its so important to you [12:09] zidanne, also check https://github.com/FFmpeg-mips/FFmpeg, there may be some old versions too [12:09] I see it from the archives and it's not formatted which is hard to get the patches from there in the right way. Is there any other place that keeps the archive with attachments as files? [12:10] and patches that add support for currently unsupported external decoder libs are welcome! [12:11] zidanne, there should be mbox files for the ML on the archive page on ffmpeg.org [12:11] you can download them and use your MUA [12:37] this is the only content I could find: http://ffmpeg.org/pipermail/ffmpeg-devel/2013-June/144437.html [12:37] (which is not fixed point aacdec code. 
but fft) [12:39] this should also apply (with a configure parameter) to arm builds too, not just mips :| [12:47] well my understanding is that fixed point fft is step for fixed point aacdec [12:52] zidanne: and there are patches from Jun 3 for fixed point aacdec modules [13:08] ubitux: oh btw one more thing, don't forget performance isn't of uttermost importance at this point, we're really just caring about building a working decoder. performance comes only after it works, so getting it working is probably preferrable to any refactoring or anything else (if that helps decisions wrt anything you're doing right now) [13:08] yeah [13:08] one we have tools to measure realworld performance (e.g. a working decoder and media files as part of a testsuite), we'll care about perf [13:09] BBB: if i didn't mess anything, the unrolled 32 looks like this: http://pastie.org/8238678 [13:09] inter mode bitstream parsing is almost finished, only mv coding is left now [13:09] and it's likely not the smallest one [13:09] (so i don't think that's a good idea) [13:09] tis is diagdown_left? [13:09] yes [13:09] sure is a wall of text [13:09] that's kinda big yes [13:10] :p [13:10] maybe it's ok for now and we should refactor it to be loopy later [13:10] up to you [13:10] whatever makes it easiest to get a working decoder ;) [13:10] Action: BBB off to dinner, bbl [13:10] if it was generated by some random pass i wouldn't mind, but maintaining such thing is not really a cool thing :p [13:42] ffmpeg.git 03Michael Niedermayer 07master:8ec618826302: ffv1dec: support printing information about the global header [13:42] ffmpeg.git 03Michael Niedermayer 07master:c387c45e8301: ffv1: fix plane_count at version 1.4 [13:42] ffmpeg.git 03Michael Niedermayer 07master:1a01147d7ae5: avcodec/ffv1enc: fix chroma_plane for rgb/rgba [13:45] 01[13FFV101] 15michaelni pushed 1 new commit to 06master: 02http://git.io/gDqirg [13:45] 13FFV1/06master 147fa692a 15Michael Niedermayer: ffv1: document plane_count... [15:04] ffmpeg.git 03Michael Niedermayer 07master:23606f27f081: avcodec/ffv1enc: bump minor_version for the chroma_plane fix [15:47] ffmpeg.git 03Piotr Bandurski 07master:165b65771de9: avformat/riff: add DM4V FourCC [15:51] I still couldn't figure it out :D sorry. 
How to extract attachments from these: http://ffmpeg.org/pipermail/ffmpeg-devel/2013-June/144441.html [19:11] michaelni, just a heads up: my FATE instances will be gone for 2 weeks [19:11] while i move to england [20:26] Daemon404, ok, thx for the info [21:39] ffmpeg.git 03Michael Niedermayer 07master:60e9b8556ab3: swscale_unscaled: make dither_scale static, its not used elsewhere and has no prefix [21:44] michaelni: I am not entirely sure dithering and scaling are done in the right order [21:45] michaelni: I had to pre-scale my big buck bunny clip to the target resolution, and then use that video as input - to be able to create the 3 different GIFs, without doing so; it almost seemed like the scaling was done after the dithering [21:51] dither is done after scaling, maybe you somehow ended up with 2 scalers or maybe theres a bug in the scaled+dither codepath [21:52] if you have a reproduceale testcase for this problem then please open a ticket on trac [21:53] mhm, I also notice that if I make the dither very computationally expensive; also the frame before the frames I want encoded are dithered (using -ss and -frames) [21:59] michaelni: unable to reproduce it now; it was before I fully understood the workings of -sws_flags ; likely a pebcak [21:59] pippin, out of curiosity, what motivated you to work on sws? [21:59] merely curious. [22:01] I was experimenting with dithering with hacked kobo ereaders in mind,. [22:01] the constraints of dithering for eink are similar to dithering for gif, reducing changes between "frames" is desirable [22:02] then I became curious and wondered how GIFs would look like, when piped through the dithering I was hacking on... [22:02] ah [22:03] the screen of a kobo mini can be driven at almost 15fps when in "1bpp mode" [22:04] and my image viewer/browser/slideshow thingy for it, also has a GIF decoder in it ;) [22:07] sounds like something you'd see at an art show [22:10] I already use that "image viewer" for boarding passes and such; I keep it in the bootproceess before the regular "OS-app" of the device; and can shut_it_down_ with an image visible [22:10] (and it boots back to the same image in <5s ;) ) [22:11] with more dithering applied,. the 16 'a bit fuzzy' gray levels, works well for photos [22:12] thus dithering improves both "animated things" (pan/zoom), as well as static display in 4bpp mode [23:04] nevcairiel: to answer your previous question, gcc doesn't really unroll shit [23:05] at least not completely [23:05] it won't unroll a 8x8 loop [23:20] i might have found a way to unroll a bit better [23:20] anyway [23:20] BBB: the "intra" samples from libvpx don't seem to trigger the downleft pred; do you have a sample for that? [23:26] i'm stupid, the intra sample were vp8 only [23:26] 'seems most of the vp9 triggers them [23:28] btw, it crashes a lot since your recent commit(s) [23:54] ffmpeg.git 03Michael Niedermayer 07master:1e0e193240a8: sws: add dither enum [23:54] ffmpeg.git 03Michael Niedermayer 07master:6d246f440e46: avfilter/vf_scale: generic swscale option support [00:00] --- Fri Aug 16 2013 From burek021 at gmail.com Fri Aug 16 02:05:01 2013 From: burek021 at gmail.com (burek) Date: Fri, 16 Aug 2013 02:05:01 +0200 (CEST) Subject: [Ffmpeg-devel-irc] ffmpeg.log.20130815 Message-ID: <20130816000501.87BE018A033B@apolo.teamnet.rs> [00:01] :( [00:07] http://illogicallabs.com/paste/00000008.txt I managed to get to the point where the error is no longer IO but is an auth issue... anyone here familiar with Akama's streaming server?? 
[00:07] they're using Flash Media Server with some specific auth I think [01:04] Anyone here experiencing interruptions on audio transcoded to ac3? [01:29] ?I'm having an issue transcoding an mpegts created with mythtv. The first keyframe is missing from the output file. However, the frame is present in both vlc, and a transcode created via vlc. I'm assuming that the problem lies in the video codec parameters. Here's a pastebin with the command line and ffmpeg output. http://pastebin.com/T9KTrwQw? [01:29] VLC decodes and plays the file properly. http://pastebin.com/RDDeqYin [01:29] you asked that question a couple of hours ago [01:30] and were gone after 15 mins [01:57] jure: my laptop had died and my irc client lost the connection. do u have any ideas about my problem? [01:59] 1. you're not technically transcoding the fil e, you're merely remuxing it [01:59] 2. you are ostensibly remuxing to matroska (.mkv), yet your output is named "output.mp4". [02:00] which means your output does not correspond to your input command line, which means there's something you're not telling us [02:03] ^ shoop_da_whoop [02:14] jure: The mp4 was another remux test with the exact same parameters. Here's the correction. http://pastebin.com/fEFYusFX [02:21] I think it's a bug in ffmpeg's muxer. [02:22] (de)muxer [02:33] try -c:a ac3 instead of -acodec copy [02:39] ffmpeg -i input.mpg -r 29.97 -s 720x480 -c:v copy -c:a ac3 output.mkv [02:40] ^ shoop_da_whoop [02:53] jure: No dice, the first frame is still missing and there is an a/v sync issue. Also, VLC reports the input frame rate as 59.94, while ffmpeg reports 29.97 fps. I've also tried ffmpeg -r 59.94 -i input.mpg -r 29.97 -s 720x480 -c:v copy -c:a ac3 output.mkv [02:54] But to no avail. [02:54] jure: Should I send you a sample of the file? [02:59] Is there a way to increase the bitrate on a stream captured from a webcam? [03:00] This is what I am using. And I don't know where it sets quality in it: ffmpeg -y -f dshow -s 640x480 -r 60 -rtbufsize 3000k -i video="PS3Eye Camera" out.mp4 [10:06] How do I rescale videos, o?o? -s 1280x720 doesn't like me. [10:06] Neither does -s 1280:720 [10:09] Oh, got it. Typo. Was doing -mov_flags rather than -movflags before. i am streaming an MJPEG video from an rtp source to an rtmp server, here is the paste of the code and the Output http://pastebin.com/SSmRApze [10:15] I tried to also stream from and older version of ffmpeg. here is the paste of the command and the output http://pastebin.com/ayh1Yq8c. what do I need to do? [14:17] I'm trying to get rotation data from quicktime, I'm using php with getID3() to get all data from a video. QuickTime can automatically rotate the video no matter how you record the video (upside down or whatever). I can't find a single variable which would have that data. Has anyone successfully gotten the rotation from a mov file before? I need it so I can rotate the video correctly when converting. Unle [14:17] ss ffmpeg can somehow figure that out automatically? [14:22] ffmpeg knows php ? [14:33] braincracker, well, if ffmpeg itself knows how to get rotation data then correctly rotate it, I'll use that [14:38] braincracker, found http://git.videolan.org/?p=ffmpeg.git;a=blobdiff;f=libavformat/mov.c;h=133dd89509384a7b70e22153619580fc9489c70c;hp=06b2f87b0db06ecd2db5850b0159dd4faede6ef3;hb=62d2a75b024bf72e6f3648e33c5bb5baf9018358;hpb=6813450209bab97c30e8b25a018cdc4c936b224a :) [17:36] i'm using the following command in 1.2.2. 
when I play the video with vlc, it appears to play twice as fast as was recorded. trying to figure out what I'm doing wrong: [17:36] ffmpeg -f alsa -ac 2 -i pulse -f x11grab -r 25 -s 1920x1050 -i :0.0+0,30 -acodec pcm_s16le -vcodec libx264 -threads auto -preset:v ultrafast -qp 24 -sc_threshold -1 /tmp/video.mkv [17:38] -r 10 [17:38] ok, gonna look at the docs to see what -r does [17:39] whats with "ultrafast" ? [17:39] so you're using stuff you don't understand? good going [17:39] honestly i don't even know, it's such a mashup of commands (trying to capture a video of a game fullscreen) [17:39] many suggestions from many people [17:39] it's like firing a gun you've never held in your hands before [17:39] i did just say i was going to read. no need to be a cock. [17:40] i'll back up, my goal is to capture a good quality video of a game I'm playing. i've no problem with starting over [17:44] ah okay i see now, during recording it's saying fps=14, so the player is probably trying at about 24, hence apparent speed doubling [17:46] 1920x1050 :snicker: [17:46] you are unhelpful and only offer insults [17:47] but yes, i don't want my menu bar etc captured. hence that and the shift [17:48] read up on -r which is supposed to set framrate. set the one in the command to 24 and added another -r 24 as seen in the man page just before the output file name. still records at 14 fps. removed the first -r 24 leaving the second one, same thing. [17:49] Fieldy: what happens if you omit both -r 24 ? [17:49] your PC. not good enough. [17:50] Fieldy: and here's also some info: http://trac.ffmpeg.org/wiki/How%20to%20grab%20the%20desktop%20%28screen%29%20with%20FFmpeg [17:50] oh, I think he's read that [17:50] his command is almost a carbon copy of the one there [17:51] cbsrobot: i'll try that and I'll read, just a moment, thank you for the response [17:51] Fieldy: and btw: [17:51] can do [17:52] good url. i believe that's where I started, and kept tacking on stuff until it became the monster it is now. without any -r, it's recording at 14fps. i'm going to restart my command based on that page [17:59] cbsrobot: based on that I came up with another command, though i get errors: http://pastie.org/pastes/8239354/text [18:07] [flv @ 0xa66720] FLV does not support sample rate 48000, choose from (44100, 22050, 11025) [18:07] gee, I wonder what was wrong with -r 10 ... [18:12] well set -ar 44100 [18:14] thanks [19:57] are there any options to ffmpeg to get information (bitrate, duration, format) about an input file in structured way (like -f ffmetadata) or do I have to parse that information from stderr? [20:00] guest1234: You could use a programme such as mediainfo to do this. (It's available on Windows, *nix and OS X, in CLI and GUI forms.) [20:02] actually, it seems that ffprobe does exactly what I want [20:02] just found that [20:02] thanks for the suggestion though! [22:11] Hi! Anyone who recently transcoded from aac to ac3? [22:56] why, are there issues? [23:06] jure: yes, I transcoded a video as I commonly do, but it seems that for some reason the result has audio interruptions. [23:08] jure: I'm not able to understand whether this is related to the player or ffmpeg... [23:08] Try a different player!? [23:09] sacarasc: yes, that is the problem. The player I'm using is vlc. The only other player I have is quicktime, but it never plays the audio of the video produced by ffmpeg for some reason. What other player should I use? 
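
Two questions from the log above -- getting duration/bitrate/container name in a structured way, and reading the QuickTime rotation that mov.c exports -- can also be answered from C with libavformat (ffprobe sits on the same library). A minimal sketch with error handling trimmed; note that 2013-era builds needed av_register_all() first, and that newer FFmpeg exposes rotation as display-matrix side data rather than the "rotate" tag shown here.

    #include <stdio.h>
    #include <libavformat/avformat.h>

    int print_info(const char *path)
    {
        AVFormatContext *ctx = NULL;
        if (avformat_open_input(&ctx, path, NULL, NULL) < 0)
            return -1;
        avformat_find_stream_info(ctx, NULL);

        printf("container: %s\n", ctx->iformat->name);
        printf("duration:  %.3f s\n", ctx->duration / (double)AV_TIME_BASE);
        printf("bitrate:   %lld b/s\n", (long long)ctx->bit_rate);

        /* 2013-era mov.c exported the display matrix as a per-stream
         * "rotate" metadata tag (0/90/180/270). */
        for (unsigned i = 0; i < ctx->nb_streams; i++) {
            AVDictionaryEntry *rot =
                av_dict_get(ctx->streams[i]->metadata, "rotate", NULL, 0);
            if (rot)
                printf("stream %u rotation: %s degrees\n", i, rot->value);
        }

        avformat_close_input(&ctx);
        return 0;
    }
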
[23:09] vlc works remarkably well with bad content [23:10] so your ffmpeg-produced output must be really bad [23:10] question: why transcode aac to ac3? [23:11] jure: yes, vlc should not be the issue. This is the output: http://paste.kde.org/pbeb5c7e2/. I see nothing strange. [23:12] jure: it confuses me that I experience the issue with mp4 and mpv, but not with mkv... [23:12] *mpv = mov sorry [23:13] probably because ffmpeg muxes ac3 badly with mp4 [23:13] jure: why not? ac3 is more compatible with many players. [23:13] jure: should I report this as a bug then? [23:14] no. [23:14] jure: I have to say I frequently do this conversion and I never experienced this. [23:14] you are unnecessarily introducing loss by transcoding audio [23:14] why not use -c:a copy [23:15] jure: as I said, it seems to me ac3 is more portable. Especially to desk players. [23:16] jure: anyway, is this the expected behavior? Interruptions? [23:16] no, but it could be due to bad container format / bad timecodes [23:16] I'm not sure a 16kHz sample rate is even officially supported by AC3 [23:17] you could also play around with vsync [23:19] try -ar 32000 [23:19] so: ffmpeg -i input.mp4 -c:v libx264 -preset veryslow -crf 22 -c:a ac3 -ar 32000 -threads 4 output.mp4 [23:20] ^ luc4 [23:20] jure: it is not a problem for me, I could also transcode to mkv or avoid audio transcoding. Just wanted to know if it is a good idea to report to contribute to ffmpeg or not... [23:20] it's not ffmpeg's fault when people flaunt standards [23:21] jure: exactly, I didn't know that. If this is not the case I won't pollute with a wrong report. [23:21] right [23:21] that's why I aked here first [23:21] but try anyway with -ar 32000, see if it works [23:33] jure: it works perfectly [23:33] jure: thanks [23:34] jure: wouldn't it be good to log a warning in such cases? [23:35] np, luc4. and indeed, it would be. personally, I'd even make it error out in case the rate doesn't match the standard. [23:36] jure: can I report this as a "wish" or similar? [23:37] luc4: original AC-3 supported only 32k, 44.1k, and 48k but I think lower sample rates were added in a later revision [23:37] but many players do not support those [23:39] mark4o: interesting, I was reading the wikipedia article on ac3 but I can only see the max frequency. Anyway, as I suppose vlc is still using ffmpeg, in either cases there might be some possible improvements? [23:39] does the file play using ffplay? [23:42] I am able to play ac3 with 16k sample rate in ffplay and in vlc [23:42] let me check [23:42] maybe you have an old version [23:42] of what? [23:43] vlc [23:43] 2.0.7 for mac os [23:43] should be the latest [23:44] 2.0.7 for mac os x works for me [23:44] and I've just built ffmpeg with its dependencies from the git [23:44] I didn't even think of ffplay :-) just a second and I'll try to play it [23:46] ffplay plays it perfectly [23:47] So only related to vlc. It means my suspect was correct and I'm in the wrong place possibly :-) sorry [23:48] thank you guys [23:48] maybe vlc 2.0.8 works [23:49] maybe not stable yet? [23:49] 2.0.8 was released a few weeks ago [23:49] checking for updates on mac reports 2.0.7 to be the latest... [23:51] but it seems it is actually not... 
[00:00] --- Fri Aug 16 2013 From burek021 at gmail.com Sat Aug 17 02:05:01 2013 From: burek021 at gmail.com (burek) Date: Sat, 17 Aug 2013 02:05:01 +0200 (CEST) Subject: [Ffmpeg-devel-irc] ffmpeg.log.20130816 Message-ID: <20130817000501.B669918A0236@apolo.teamnet.rs> [00:02] still 2.0.8 is not working on that file [00:02] hmm... works for me [00:02] maybe something else wrong with your file [00:03] mark4o: maybe& the command line I used and the output is on pastebin [00:04] why are you re-encoding the video? [00:06] llogan: video is baseline, I want it high profile. Audio I'd prefer it ac3. [00:06] anything that supports high should be able to support baseline [00:07] unless it's retarded [00:09] I don't quite understand why the reasons matter techically, anyway, high profile takes less bytes, doesn't it? [00:09] yes, reducing the size by 3.4x seems like a good reason to me [00:10] precisely [00:13] luc4: I would recommend using 32k, 44.1k, or 48k sample rate for ac3, if you want it to be compatible with more players [00:13] but if you want to debug 16k sample rate on vlc you could try encoding just a simple tone or something and play it [00:14] ffmpeg -f lavfi -i sine=b=4:r=16000 -acodec ac3 -t 5 out.mp4 [00:21] mark4o: I would really like to enter the implementation details of vlc, but unfortunately I'm already working on a thousand projects :-) just wanted to know if this was something to report to ffmpeg or not. Not trying to solve a spefic problem with a specific file (I'm keeping it only as a way to reproduce the "issue"). As it really seems not related to ffmpeg, I'm throwing it away. Thanks guys! [00:21] np [00:41] luc4: then you can degrade the quality and have to wait for encoding [00:42] llogan: sorry? [00:43] your input is already H.264 but you are re-encoding it. [00:45] llogan: if it is possible to compress by the same factor without re-encoding it then I might be interested, anyway, I repeat again: the only reason I'm interested in this file is because it could reproduce a possible issue. As I see there is no issue in ffmpeg, case is closed. Anyway, still interested if compressing more without re-encoding is possible, but not related to that specific file. [00:47] -codec:v copy [00:48] llogan: isn't that supposed to simply copy the video stream? [00:49] llogan: there are many benefits to a smaller file size, e.g. if you are putting it on a web site, or emailing it to a bunch of people, or trying to fit as much as possible on a disc [00:49] not everyone is encoding videos only for their personal consumption [00:56] llogan: if there is a better way to make a video file >= 6GB become ~= 600MB without visible differences& then I'm really interested& otherwise, the meaning of this conversation is not clear to me& unless someone is giving away unlimited disk space or hard drives for free& in that case maybe& well, maybe neither& [00:57] llogan: oh, btw, the file must not be cut as well! No tricks! :-D [00:58] i was just pointing out that you may not need to re-encode regardless of any desire that you may or may not want to reduce the size. [00:59] llogan: you mean there is a way to compress by the same factor without re-encoding and changing the profile? Can you point me to some documentation? I'm not an h264 expert. [01:01] i see that the communication today has not worked as I expected. [01:03] 100000000x compression without re-encoding: (a) upload to internet (b) replace file with url [03:05] I have a selecton of frames, starting from dsc_0141.jpg and ending at dsc_0176.jpg. 
How do I Use this with ffmpeg without renaming them as an image sequence, o?o? [03:11] http://trac.ffmpeg.org/wiki/Create%20a%20video%20slideshow%20from%20images [03:11] Keshl: ffmpeg -start_number 141 -i dsc_%04d.jpg ... [03:14] Danks ^?^ [03:14] Action: Keshl huggles on -?- [03:16] And what's the "keep aspect ratio" thing when I'm scaling video, o?o? Used to be -1 but not anymore .?. [03:16] -1 should still work [03:17] It reads it as a paramater and says it's not a recognized argument xwx [03:18] http://www.pasteall.org/44859 o?o [03:20] Keshl: -vf scale=-1:1080 [03:21] Danks -?- -Paws at.- [06:46] hello, what would be a decent format to record to quickly for screen capture? then I could go and re-encode it after the fact to something else. i'm trying to avoid frame drops and what not. [06:46] i have a lot of disk space, so big 1st stage files is fine [09:51] can someone help me to fix this script see http://pastebin.com/uedmy3qP [09:51] the issues are posted on the paste [10:58] WHATDIDIDOO_O? [10:59] Action: Keshl has red text that appears to be hexes scrolling wildly out of control! D: [11:02] Action: Keshl got it to calm down by hitting 'H' but now he still has green text D: [11:03] And hittin' it more just loops between "A different kinda green text" and "that scary red text" D: [11:26] Kay, new question. If I pass an audio file and video file to ffmpeg, how do I tell it to stop endocing whenever one ends? o?o? [11:26] *encoding [11:27] -shortest [11:27] IIRC. [11:28] Tried that, got it now. -shortest has to come directly after the last input or weird stuff happens xwx [11:30] Action: Keshl snugpurrs sacarasc anyway -?- [12:29] Know what'd be great? [12:30] If there was a video codec that'd intelligently detect noise in an image that's changing because it's in a shadow, and then not try to replicate it. [12:30] Just go "Okay, we're using this one frame, not enough in this large, dark area changed, let's just keep it the same". [14:02] hi [14:07] "mplayer"s compilation fails with this: libx264.c:(.text+0x14e): undefined reference to `x264_picture_init' what is the solution? (ffplay works fine, though does not play sound now, also i have 2 soundcards) [14:39] braincracker: --> #mplayer [14:39] thanks, found it already, and flooded them with problems [15:31] <_d0t> hi. I'm using ffmpeg with my custom encoder to stream h264 video via rtp. I have a problem with pps and sps - those are streamed only at the begining. Is there any way to force pps/sps output per every gop? [16:26] Hello :-) Hope you all are ok? Greetings from Berlin and best regrards to the ffmpeg community. A question regarding your online compilaton guide: [16:26] Following the compilation/install guide from http://ffmpeg.org/trac/ffmpeg/wiki/UbuntuCompilationGuide can cause trouble, likely caused by the use of the $HOME variable int he HOWTO. Not sure yet. After complition of the HOWTO and testing it with the version command the result looks fine, but ONLY for ONE TIME. After reopening terminal and typing ffmpeg, it says ffmpeg is not installed. [16:28] Following the guide installs ffmpeg in the user folder/bin while most apps are installed into /usr/bin. I am not sure, but after trying it 3 times I still have no luck getting ffmpeg compiled and installed on ubuntu 12.10 with this guide. Even if it seems all fine for one second after installation [16:30] What I ask myself is: Is it caused by my $HOME variabel which points to the ~userfolder or do I miss something whiel installation? 
It also was need to use some of the terminal commands with sudo to make them work [16:30] ups, typo *variable, *while [16:33] Does anyone had same experiences and give me a little hint where I have to look for any modifikation. It seems my envoriment blocks some suggested settings from the HOWTO. All compilation was ok, but terminal doesnt find ffmpeg [16:44] Digidog: maybe ffmpeg is not in your $PATH environment variable [16:44] read https://help.ubuntu.com/community/EnvironmentVariables [16:52] cbsrobot: thanks for pointing me in the right direction! I will check thru this link! Thank you very much. [17:02] I am trying to get a slimmer set of libav binaries with only codecs etc that I need, I am getting undefined reference to av_register_all. I used --disable-everything, then --enabled the stuff I think I need (that seemed obvious) I think there is something not obvious that I have missed, any ideas? [17:10] n/m I figured it out, needed --disable-avfilter [17:42] is there formula describing a restriction on AAC encoder bitrate in relation of sample rate? Google's mum... ISO 13818-7 has no math for min bitrate. [17:49] mista_D just wondering, in theory why would there be a restriction? and wouldn't also have to consider the number of channels? [17:53] Hi, I am a VTK developer, and was hoping someone might help me track down the format where CodecID was renamed to AVCodecID [17:53] I found http://ffmpeg.org/pipermail/ffmpeg-cvslog/2012-August/053381.html that shows the commit, is there an easy way to track that to a release/ABI change? [17:55] Action: cryos will be out for lunch in about 5, but I will check back later if anyone has ideas/clues. This is related to the FFMPEG API changes, and where they might be documented. [17:57] By format, I meant to say version (need more coffee), ideally with a change in some preprocessor variable I can add an #ifdef for [18:16] bunniefoofoo: yes channles too. The faac won't encode 44.1 stereo at 32 kbps. I think its a quality isse than Nero locked down too as 32 kbos 48kHz stereo sounds nasty (: [19:48] can I sleep? [22:59] I can convert avi to png images with: 'ffmpeg 1.avi -c:v png "%d.png"', is there an option to make it extract not all frames but only evey 50th frame ? so from 10000 frame video it will extract 200 frames ? [23:01] was reconnecting [23:01] I can convert avi to png images with: 'ffmpeg 1.avi -c:v png "%d.png"', is there an option to make it extract not all frames but only evey 50th frame ? so from 10000 frame video it will extract 200 frames ? [23:01] elkng: try the select video filter [23:01] http://ffmpeg.org/ffmpeg-filters.html#select_002c-aselect [23:03] "select=not(mod(n\,50)),setpts=N/((30000/1001)*TB)" [23:03] or something like that [23:03] "select=50" ? [23:04] no [23:04] keep the select part as is. it's the setpts that you may have to change [23:05] so if your input is PAL video, or 25 frames per second, then replace (30000/1001) with 25 [23:06] and if you just want 200 frames, then add -vframes 200 as an output option. probably a way to do that with select instead if you prefer. [23:09] ffmpeg -i 1.avi -c:v png -vf "select=not(mod(n\,50)),setpts=N/((30000/1001)*TB)" -vframes 200 monkey-smells-finger-then-falls-out-of-tree_%04d.png [23:09] -c:v png is probably superfluous [23:09] i mean it is superfluous [23:14] elkng: did it work? 
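
For the VTK question above about #ifdef-ing the CodecID -> AVCodecID rename: the usual trick is to key a small compatibility shim off libavcodec's version macros rather than a release number. The cut-off below is an assumption (the rename landed in the lavc 54 series in August 2012, and Libav used a different minor for the same change), so check doc/APIchanges before trusting the exact value.

    #include <libavcodec/avcodec.h>   /* pulls in LIBAVCODEC_VERSION_INT */
    #include <libavutil/avutil.h>     /* AV_VERSION_INT()                */

    /* Before the rename the type was 'enum CodecID' and the values were
     * CODEC_ID_*; on old headers, map the new spellings back onto the old
     * ones.  The version cut-off is illustrative, not authoritative. */
    #if LIBAVCODEC_VERSION_INT < AV_VERSION_INT(54, 51, 100)
    #define AVCodecID        CodecID
    #define AV_CODEC_ID_NONE CODEC_ID_NONE
    #define AV_CODEC_ID_H264 CODEC_ID_H264   /* ...and so on for the IDs used */
    #endif
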
[23:18] haven't tryed it yet [00:00] --- Sat Aug 17 2013 From burek021 at gmail.com Sat Aug 17 02:05:02 2013 From: burek021 at gmail.com (burek) Date: Sat, 17 Aug 2013 02:05:02 +0200 (CEST) Subject: [Ffmpeg-devel-irc] ffmpeg-devel.log.20130816 Message-ID: <20130817000502.C19CC18A024E@apolo.teamnet.rs> [00:22] michaelni: s/-sws_flags +error_diffusion/-sws_dither ed/ on the commandline, right? [00:24] yes or -vf scale=...:sws_dither=ed [00:25] that way works, the other doesnt [00:26] though it complains for nonvalid sws_dither values [00:35] mmh it seems downright unrolls much better [00:35] or maybe not [01:09] BBB: so well, i pushed diag_down{left,right} in my branch; i'm re-rolling 8 & 16, but we could use that code only for 32 if you want [01:10] consider it an unbenched WIP, i'm gonna do the 4 remaining and we can discuss that when done [01:10] but now, sleep, 'night [01:48] michaelni: squares vs thin rectangles - does that really makes difference, same could be said for any other filter [02:14] ubitux: nope, no samples. if no samples, maybe try to write a unit test (or quite literally copy libvpx code, create a random edge and test that the output matches?) [02:14] ubitux: or just assume it's ok :) [02:16] ubitux: and crash = gimme sample and I fix [05:11] ffmpeg.git 03Michael Niedermayer 07master:ec0e0eb4c16f: avfilter/vf_scale+aresample: minor simpification [10:19] 'morning [11:49] ffmpeg.git 03Janne Grunau 07master:e8c0defe1322: 8bps: decode 24bit files correctly as rgb32 on bigendian [11:49] ffmpeg.git 03Michael Niedermayer 07master:15677e72399d: Merge commit 'e8c0defe1322f0ff281d9bc5eee91fa1b712b6aa' [12:07] ffmpeg.git 03Diego Biurrun 07master:8747fce91fca: electronicarts: K&R formatting cosmetics [12:07] ffmpeg.git 03Michael Niedermayer 07master:a87cf3689ef2: Merge commit '8747fce91fca6bb8e9936497f2de05c905cf43b5' [12:13] ffmpeg.git 03Diego Biurrun 07master:288f2ffb57ae: electronicarts: Remove bogus function documentation [12:13] ffmpeg.git 03Diego Biurrun 07master:a90cff137b2a: electronicarts: comment wording fixes [12:13] ffmpeg.git 03Michael Niedermayer 07master:6a4e55a24636: Merge commit '288f2ffb57ae9e9eee2748aca26da3aeb3ca6f6c' [12:13] ffmpeg.git 03Michael Niedermayer 07master:6bba6957858f: Merge commit 'a90cff137b2aca89380b0acad41cd7bb05619ece' [12:18] ffmpeg.git 03Diego Biurrun 07master:4908c8ef2706: electronicarts: Improve some function/variable names [12:18] ffmpeg.git 03Michael Niedermayer 07master:4195321a82c9: Merge commit '4908c8ef2706d98022bf27a5c5bca1fe109e7529' [12:31] ffmpeg.git 03Diego Biurrun 07master:163a729725c6: electronicarts: Let functions always returning the same value return void [12:31] ffmpeg.git 03Michael Niedermayer 07master:165e42b5428f: Merge commit '163a729725c6eb0081b0af41a7279f7d19aee86e' [12:33] BBB: well, the sample you sent me for instance, but it seems most of the vp9 sample from libvpx as well trigger it [12:35] BBB: http://pastie.org/pastes/8241743/text [12:36] BBB: and basically: http://pastie.org/pastes/8241748/text [12:36] ffmpeg.git 03Martin Storsj? 
07master:4b054a3400f7: rtpproto: Check the right feature detection macro [12:36] ffmpeg.git 03Diego Biurrun 07master:060ce0c697e2: ivi_common: Make some tables only used within the file static [12:36] ffmpeg.git 03Michael Niedermayer 07master:e1ec7990fe25: Merge commit '4b054a3400f728c54470ee6a1eefe1d82420f6a2' [12:36] ffmpeg.git 03Michael Niedermayer 07master:7372177a08c0: Merge commit '060ce0c697e261ca2792a7df30dfd1bae6900a4f' [12:47] ffmpeg.git 03Diego Biurrun 07master:d258531502b2: swscale: Mark a bunch of tables only used within one file static [12:47] ffmpeg.git 03Michael Niedermayer 07master:1ef0b8f9cc07: Merge commit 'd258531502b24cb653204fe4f003c8815755bdc4' [13:01] ffmpeg.git 03Diego Biurrun 07master:aa2ba8c99e57: swscale: Move extern declarations for tables to swscale_internal.h [13:01] ffmpeg.git 03Michael Niedermayer 07master:c14fc4585c2b: Merge commit 'aa2ba8c99e5708884a56aea9c1d96e014866f8a3' [13:07] ffmpeg.git 03Diego Biurrun 07master:c591d4575a6f: avcodec: Replace local extern declarations for tables with header #includes [13:07] ffmpeg.git 03Michael Niedermayer 07master:b7a025092f81: Merge commit 'c591d4575a6f97fbbe6145304b1ea960e8e81e14' [13:12] ffmpeg.git 03Diego Biurrun 07master:ec6c1b1d832e: mpeg12decdata: Remove unused #define [13:12] ffmpeg.git 03Michael Niedermayer 07master:89f4812cda5f: Merge commit 'ec6c1b1d832ec3261cc3faf93a18d7b2a84883c6' [13:19] ffmpeg.git 03Diego Biurrun 07master:38f64c03301a: mpeg12decdata.h: Move all tables to the only place they are used [13:19] ffmpeg.git 03Michael Niedermayer 07master:dbcee7cc5c26: Merge commit '38f64c03301ac66d7b54b3e4bd2bf6454f9fb2d3' [13:37] ffmpeg.git 03Diego Biurrun 07master:cb214707a6cb: vp56data: Move all shared enum/struct declarations to common header [13:37] ffmpeg.git 03Michael Niedermayer 07master:669ea5e102fe: Merge commit 'cb214707a6cb0d3272ec0261af6f1f5d8b7dabc7' [13:43] ffmpeg.git 03Diego Biurrun 07master:239f55bf3c96: vp56data: Move all data tables to the .c file [13:43] ffmpeg.git 03Michael Niedermayer 07master:83386a1f62c0: Merge commit '239f55bf3c966782b781338df284f250393b9ed6' [13:48] ffmpeg.git 03Martin Storsj? 07master:c9031c7c1446: hlsenc: Add a proper dependency on the mpegts muxer [13:48] ffmpeg.git 03Michael Niedermayer 07master:dff205cca315: Merge commit 'c9031c7c1446a1a63eff7c0bf50d1ee559adf3fb' [14:04] ffmpeg.git 03Stefano Sabatini 07master:09c93b1b957f: hlsenc: Append the last incomplete segment when closing the output [14:04] ffmpeg.git 03Michael Niedermayer 07master:50c0837801b2: Merge commit '09c93b1b957f2049ea5fd8fb0e6f4d82680172f2' [14:21] ffmpeg.git 03Carl Eugen Hoyos 07master:9d86bfc259ae: hlsenc: Don't reset the number variable when wrapping [14:21] ffmpeg.git 03Michael Niedermayer 07master:c02945b6d27a: Merge commit '9d86bfc259ae9ba7a76067ec931ff20fbb86ea2a' [14:29] ffmpeg.git 03Kostya Shishkov 07master:f399e406af0c: altivec: perform an explicit unaligned load [14:29] ffmpeg.git 03Michael Niedermayer 07master:d7ed473d5c4b: Merge remote-tracking branch 'qatar/master' [14:32] michaelni: updated the 'a dither' page with the addition based 'a dither' GIF as well, now that other quirks have been sorted out it think looks quite good for rgb 332 :) [14:34] but for 16->10 bit? 
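
The "but for 16->10 bit?" question above is the multi-level case mentioned earlier in the log: instead of comparing against the threshold mask, the mask is mixed in as sub-LSB noise before truncation. A hedged sketch, again with a plain Bayer mask rather than the a-dither formula.

    #include <stdint.h>

    static const uint8_t bayer4[4][4] = {
        {  0,  8,  2, 10 },
        { 12,  4, 14,  6 },
        {  3, 11,  1,  9 },
        { 15,  7, 13,  5 },
    };

    /* 16-bit -> 10-bit reduction: one output step spans 64 input codes, so
     * the 0..15 mask is scaled to 0..60 and added before the shift. */
    static inline uint16_t dither_16_to_10(uint16_t v, int x, int y)
    {
        int noise = bayer4[y & 3][x & 3] * 4;
        int out   = (v + noise) >> 6;
        return out > 1023 ? 1023 : (uint16_t)out;
    }
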
[15:35] ffmpeg.git 03Paul B Mahol 07master:ef6718a5f7eb: lavfi/tile: make color of blank/unused area configurable [15:35] ffmpeg.git 03Paul B Mahol 07master:e74a5acb4085: lavfi/transpose: support slice threading [15:35] ffmpeg.git 03Paul B Mahol 07master:66f1de66b89d: lavfi/transpose: call av_frame_copy_props() [16:07] ffmpeg.git 03Michael Niedermayer 07master:c62801270f3e: swscale: change ff_dither_8x8_128 dimensions to be consistent with the others [16:07] ffmpeg.git 03Michael Niedermayer 07master:247fa6c27c45: avfilter: remove ff_copy_int*_list [16:07] ffmpeg.git 03Michael Niedermayer 07master:cbdf4d6a6118: avfilter/vf_mp: remove unused sws related functions [16:07] ffmpeg.git 03Michael Niedermayer 07master:29852ffc64ed: avcodec/dirac_dwt: Remove unused ff_spatial_idwt2() [16:52] ubitux: looks like intra_pred_data is uninitialized or too small? weird [16:58] BBB: note that i get a bunch on warning on vp9.c [16:58] might be related [17:01] BBB: http://pastie.org/8242434 [18:04] ffmpeg.git 03Stefano Sabatini 07master:faf7c356554d: lavf/tee: add support for bitstream filtering [19:47] encapsulating opus in mkv [19:47] does xiph get it at all? [19:53] http://wiki.xiph.org/MatroskaOpus ? [19:56] Compnn: were are going to do opus in ts at vdd [19:58] gnafu, I don't think the spec's finalized. It was probably the most active thread this year on matroska-devel :D [19:59] Yeah, that page looks like it's still a draft. [20:00] I guess I was just linking to it to demonstrate that Xiph does get it "at all", even if not completely ;D. [20:02] JEEB: i thought they decided on a result in the end [20:02] they decided on some stuff but I don't think it was ever "finished" [20:02] it made its way onto the matroska spec page at least [20:02] the new elements that is [20:02] yeah [20:02] those did [20:02] but those generally do :P [20:02] even if not used in the end [20:03] like the timebase stuff [20:07] I bet in the end everything about this turns out to be broken by design or implementation anyway [20:13] its really not that complicated, players just need to at least support the CodecDelay element, and everything is fine, the seek pre-roll thing isn't all that critical [20:22] durandal_1707: those ordered/spatially stable dithers should work progressively better as you have more levels - experiment with the live js on the page yourself [20:51] durandal_1707: shouldn't work any worse for such purposes [23:18] how do i use bisect-create? [00:00] --- Sat Aug 17 2013 From burek021 at gmail.com Sun Aug 18 02:05:02 2013 From: burek021 at gmail.com (burek) Date: Sun, 18 Aug 2013 02:05:02 +0200 (CEST) Subject: [Ffmpeg-devel-irc] ffmpeg-devel.log.20130817 Message-ID: <20130818000502.D491E18A0334@apolo.teamnet.rs> [00:14] llogan, you run it [00:14] and it creates tools/ffbisect [00:18] i figured out that part right after I asked. [00:24] michaelni: your compile benchmarks are almost a year old. have the times changed much since then? [00:25] dunno, let me try [00:28] gcc 4.6 --enable-gpl -> 0m42.779s [00:29] so id say its still pretty accurate [00:30] gcc 4.6 --enable-gpl --disable-debug --disable-optimizations -> 0m15.803s [00:31] eh, close enough [00:32] gcc 4.5 --enable-gpl --disable-debug --disable-optimizations -> 0m15.901s [00:33] i can rerun all if you think it makes sense ... 
[00:35] up to you [00:35] i'll update the page if you want to re-run them [00:49] --enable-gpl --disable-debug --cc=gcc-4.5 0m35.061s [00:51] --enable-gpl --cc=gcc-4.5 0m42.026s [00:53] --cc=gcc-4.5 --enable-gpl --disable-debug --disable-optimizations --extra-cflags='-O1' 0m23.473s [00:55] --cc=gcc-4.4 --enable-gpl 0m42.787s [00:57] --cc=gcc-4.4 --enable-gpl --disable-debug --disable-optimizations 0m15.643 [00:59] --cc=/usr/bin/clang --enable-gpl --disable-debug --disable-optimizations 0m13.199s [01:00] --cc=/usr/bin/clang --enable-gpl --disable-debug --disable-optimizations --extra-cflags='-O1' 0m18.819s [01:01] --cc=/usr/bin/clang --enable-gpl --disable-debug 0m22.653s [01:02] --cc=/usr/bin/clang --enable-gpl 0m25.556s [01:04] --cc=/usr/bin/clang --enable-gpl ; time make fate -j12 0m53.423s [01:06] --cc=/usr/bin/clang --enable-gpl --disable-debug ; time make fate -j12 0m51.460s [01:08] --cc=/usr/bin/clang --enable-gpl --disable-debug --disable-optimizations ; time make fate -j12 1m24.252s [01:11] --cc=/usr/bin/clang --enable-gpl --disable-debug --disable-optimizations --extra-cflags='-O1' ; time make fate -j12 0m47.401s [01:13] --cc=clang --enable-gpl --disable-debug --disable-optimizations --extra-cflags='-O1' ; time make fate -j12 0m51.419s [01:15] --cc=gcc-4.5 --enable-gpl --disable-debug --disable-optimizations --extra-cflags='-O1' ; time make fate -j12 0m51.920s [01:19] --cc=gcc-4.5 --enable-gpl --disable-debug --disable-optimizations ; time make fate -j12 1m27.695s [01:21] --cc=gcc-4.5 --enable-gpl --disable-debug ; time make fate -j12 1m3.025s [01:23] --cc=gcc-4.5 --enable-gpl ; time make fate -j12 1m9.592s [01:49] ffmpeg.git 03Michael Niedermayer 07master:eeb3fb9e62d4: ffv1enc: check for malloc failure [01:49] ffmpeg.git 03Michael Niedermayer 07master:c8d89be477ac: ffv1enc: propagate error code from write_extradata() [01:49] ffmpeg.git 03Michael Niedermayer 07master:2c1a215ddb8b: ffv1: update years in header [02:14] ubitux: the rest is b/c of the switch statement, gcc doesn't know there's only 4 possible labels there [02:15] i didn't look [02:15] at all :p [02:16] ubitux: and the have_topright is indeed broken [02:18] ubitux: http://pastebin.com/Kcza73EL ? [02:23] BBB: doesn't build; have_topright is used elsewhere [02:23] oh [02:23] L1117, (tx != TX_4X4 || !edges[mode].needs_topright || have_topright)) { [02:27] right [02:27] one sec [02:27] too many changes related to inter coding so it didn't build anyway [02:27] http://pastebin.com/U4pLUkwJ ? [02:28] I still have to parse motion vectors in the bitstream, and do neighbourhood motion vector search for referencing [02:28] then inter frame parsing is done and I can move on to inter reconstruction (but I'll probably commit the parsing already at that point, assuming it doesn't just crash) [02:30] still crashes [02:32] seems to happen with ffplay in particular [02:33] anyway, i still have 4/6 functions to write [02:33] oh ffplay [02:33] hm... [02:33] I only test ffmpeg [02:33] ffmpeg -i vp9file.webm -vframes 1 out.y4m [02:34] that works for me [02:34] i changed the logic with the 2 functions so we can use memcpy for writing; and logic is quite simple to ease auto unrolling [02:34] yeah [02:34] memcpy for writing? [02:34] yes [02:34] i mean [02:34] just a sec [02:34] I think I know what you mean [02:34] it's what simd functions do [02:34] is it faster? 
[02:34] didn't bench at all [02:35] but code is short and pretty straightforward [02:35] (the reason it might not be is b/c it requires 2 mem access instead of 1 for the current code) [02:35] hm ok [02:35] i use a "rolling buffer" which i memcpy [02:36] right, that's what simd does [02:36] in libvpx? [02:36] and then shift by 1-2 bytes depending on what version [02:36] in ffmpeg also [02:36] see e.g. h264/vp9 [02:36] er, vp8 [02:36] ah, fun ok [02:36] i also use a memset in the first func [02:37] https://github.com/ubitux/FFmpeg/blob/vp9/libavcodec/vp9dsp.c#L589 [02:37] https://github.com/ubitux/FFmpeg/blob/vp9/libavcodec/vp9dsp.c#L625 [02:37] so anyway, i'll do that for the others; and if it's not slower we can use it for the c 32x32 only [02:37] can do memset outside the loop [02:38] just set the buf as-is, and make it twice as big, and memset the rest with top[size-1] [02:38] then just index as &v[y] as src for memcpy [02:38] i thought it was likely faster to memset instead of re-reading N times the same value [02:38] but didn't bench anyway so.. [02:38] :) [02:38] well anyway yeah this is fine [02:39] Action: Compn remembers someone long ago saying something about memset not being fastest [02:39] the others are slightly more complex b/c the memset approach only works for the outer two [02:39] or memcpy [02:39] Compn: we'll write simd [02:39] so it's ok [02:39] this is just for the reference [02:39] BBB: yeah i'm wondering about the next one [02:39] BBB: but we can do it with 2 buffers [02:39] 2 interleaved rollings buffers should work [02:39] at least for the next one :p [02:40] next 2 [02:40] :) [02:40] yes that's what simd does for vert_left/right [02:40] am i re-inventing the wheel? [02:40] and then for horiz_up/down, it uses 1 buffer which is 4x as big, and shifts index by 2 instead of 1 per line [02:40] we already have that same code? :) [02:40] in simd [02:40] not in c [02:41] I don't think anyone ever cares about c :-p [02:41] :D [02:41] because in practice it doesn't run [02:41] that's a good reason to re-roll them then [02:42] http://git.chromium.org/gitweb/?p=webm/libvpx.git;a=blob;f=vp9/common/x86/vp9_intrapred_ssse3.asm;h=8ba26f310990ee2a651a8460ed31edb0923f4c21;hb=HEAD#l114 [02:43] yeah I guess I agree [02:43] just as an indication of how many instructions these functions are in simd [02:43] ok :) [02:44] I think I wrote versions of all these functions but theyre' sitting in a patch tracker somewhere while I am on vacation [02:44] you read assembly right? [02:45] [20:41] <@BBB> I don't think anyone ever cares about c :-p <-- mobile world would likely disagree [02:45] mobile world uses neon [02:45] not c [02:45] BBB: a bit; aka it takes me a long time to read simd, and i can read simple compiler assembly relatively easily [02:45] BBB, nobody writes neon for libav* [02:45] hence. c. 
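
The edge-buffer/memcpy idea described above can be sketched in a few lines of C. This follows the words in the log (copy the row above into a buffer twice as large, pad it with its last sample, then row y of the prediction is the buffer at offset y); the real VP9 diagonal predictors also smooth the edge with an (a + 2b + c + 2) >> 2 filter, which is left out here, and the function name is made up.

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Diagonal down-left style N x N prediction via a "rolling" edge buffer:
     * the SIMD versions mentioned in the log do essentially the same thing,
     * loading the (filtered) edge once and storing shifted copies per row. */
    static void diag_downleft_sketch(uint8_t *dst, ptrdiff_t stride,
                                     const uint8_t *top, int n)
    {
        uint8_t edge[2 * 32];                /* n up to 32 in this sketch      */
        int y;

        memcpy(edge, top, n);                /* the row above the block        */
        memset(edge + n, top[n - 1], n);     /* ...padded with its last sample */

        for (y = 0; y < n; y++)
            memcpy(dst + y * stride, edge + y, n);
    }
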
[02:46] then nobody in the mobile world uses libav* [02:46] if they used it, they'd write neon code for it [02:46] it's pretty popular [02:46] (note mru works on libvpx right now) [02:46] oic [02:46] that makes more sense [02:47] since there is no vp8/9 hw [02:48] ubitux: but yes I like this [02:58] ubitux: ok found the issue [02:59] ubitux: it was another memcpy I think, valgrind found it for me [02:59] ubitux: valgrind is clean again now [02:59] yeah i believe pasted you the valgrind ;) [02:59] thx :) [02:59] will retry, but it's late now, so tomorrow [03:00] ubitux: well yes your valgrind didn't help b/c my line numbers are different [03:00] (that happens when you add a ton of code) [03:01] anyway it showed me valgrind is sweet so I gave it a chance [03:02] and g'nite [03:27] Hi I am looking for Nicolas Bertrand [03:27] Does anyone know his irc nick? [03:27] hmm [03:27] dont remember it [03:29] ruby: buxiness [03:31] kierank: Thank you! [04:29] 01[13FFV101] 15michaelni pushed 1 new commit to 06master: 02http://git.io/FmrKAw [04:29] 13FFV1/06master 14cca0bab 15Michael Niedermayer: Update minor_version... [10:36] ffmpeg.git 03Luca Barbato 07master:c59967fa7cc5: h261: check the mtype index [10:36] ffmpeg.git 03Michael Niedermayer 07master:bd7107106660: Merge commit 'c59967fa7cc5bc2fa06b36c17d2c207240c06b3e' [11:41] michaelni: gonna commit that license thing? [11:51] ffmpeg.git 03Luca Barbato 07master:5ef7c84a9374: dxa: Make sure the reference frame exists [11:51] ffmpeg.git 03Michael Niedermayer 07master:7a342f97c48f: Merge remote-tracking branch 'qatar/master' [11:51] ffmpeg.git 03Michael Niedermayer 07master:186e47ef6d7d: dxa: only fail with an error about reference frames if the reference frame would be used [11:51] ffmpeg.git 03Michael Niedermayer 07master:9640ea1da4e0: dxa: fix support of decoding all frames even in the absence of references [12:09] durandal_1707, that kind of change is for the community to decide and apply (or not apply) [12:15] define 'community', define 'decide' [13:01] BBB: no more crashes, thx :) [13:06] btw, http://wiki.libsdl.org/moin.fcg/MigrationGuide [13:07] might be relevant for ffplay [13:19] too late [14:38] anyone need replaygain scanner? [15:19] ffmpeg.git 03Stephen Hutchinson 07master:2c25e83b1d02: avisynth: Support video input from AviSynth 2.5 properly. [15:19] ffmpeg.git 03Michael Niedermayer 07master:0a141b0e49ba: avcodec/dxa: Support printing picture debug info [20:03] If I cut on timestamps that do not correspond to keyframes and try to use those segments with the concat demuxer the resulting file is not linear [20:03] I suspect this is because of the metadata of the cut files [20:03] is this a known issue? does it have a workaround? [20:10] does anyone know how i can modify this metadata? Duration: 00:01:04.00, start: -3.978667 [20:10] to test if this is the problem with the concat demuxer [22:02] Hello the room, does one of you know please i have got this error: [22:03] ...libavformat.a(swfdec.o): In function `swf_read_packet': ..... libavformat/swfdec.c:328: undefined reference to `uncompress' [22:04] + a long list of fail [22:04] s [22:04] tried to compil mlt 0.9 with external build of ffmpeg-2.0 (not distro repo) [22:05] ragedragon : sounds like missing zlib ? [22:07] checking... [22:07] i think zlib is present... [22:08] pkg-config --cflags --libs zlib indicates me: -lz [22:09] trying to put the error output into pastebin... 
[22:17] ffmpeg.git 03Reinhard Tartler 07release/0.5:2abf5eeea6e4: update year to 2013 [22:17] ffmpeg.git 03Reinhard Tartler 07release/0.5:588571d41ddb: Bump version number for the 0.5.11 release [22:17] ffmpeg.git 03Michael Niedermayer 07release/0.5:b5f685211cd4: Merge remote-tracking branch 'qatar/release/0.5' into release/0.5 [22:25] Compn, http://pastebin.com/ZvQ5Ft0E [22:25] any idea please? [22:27] i'm now wondering if i need to use an old version of ffmpeg with the last release of mlt [22:28] ffmpeg.git 03Michael Niedermayer 07master:a0c6c8e53ebc: Revert "Merge commit of 'vdpau: remove old-style decoders'" [22:28] ffmpeg.git 03Michael Niedermayer 07master:6e4b9b8a2fb5: avcodec: fix compilation without vdpau [22:29] uh what [22:29] old style vdpau decoders are back? [22:30] "Keeping support for the old VDPAU API has been requested by our VDPAU maintainer" [22:34] and who is that? [22:35] i'd say Carl according to /MAINTAINERS [22:36] well, why do I care, not my mess to maintain [23:02] Compn, you right... -lz was missing into one of makefile present into mlt... [23:13] cool [23:58] thing is i need to recompile whole tree again because adding back stuff that nobody uses [00:00] --- Sun Aug 18 2013 From burek021 at gmail.com Sun Aug 18 02:05:01 2013 From: burek021 at gmail.com (burek) Date: Sun, 18 Aug 2013 02:05:01 +0200 (CEST) Subject: [Ffmpeg-devel-irc] ffmpeg.log.20130817 Message-ID: <20130818000501.CB85D18A0332@apolo.teamnet.rs> [00:10] hi ... i'm having trouble with A/V sync when recording from my webcam ... is there a way to capture "raw" video and audio into a file and process it after it is done capturing? [00:19] tlhiv_laptop: did you try "-c copy"? [00:19] no [00:20] instead of -vcodec libx264 -acodec libfaac [00:20] yes [00:22] llogan: thank you ... i'll try that [02:00] Hello guys [02:01] I am experiencing an error video4linux2 ioctl(VIDIOC_ENUMSTD): Invalid argument when trying to capture [02:01] I use a raspberry pi computer with archlinux arm [02:02] And a kinda hybrid linux kernel [02:03] I have kernel 3.10.6-1 but with the whole driver/media directory taken from the latest 3.11 rc kernel [02:04] My video device is easycap UTV007 (driver usbtv) [02:04] I wonder were is that error comes from? [02:05] Command just: ffmpeg -f v4l2 -i /dev/video0 out.avi [02:13] My current ffmpeg build is from August 11, quite fresh... [02:13] I just wonder if I need to recompile ffmpeg [02:26] llogan: http://pastebin.com/XJNqENWD [02:33] any ideas? [02:43] Lerg: no. you could try asking on ffmpeg-user mailing list. [03:28] Hm. Is libav and avconf a totally diferent project? [03:28] Because on my other machine x86_64 with 3.11-rc5 this devices works good [03:28] with avconv [03:32] Read that, Lerg. [05:33] hi [05:34] is there a way to capture webcam from v4l2 and output to /dev/null or something [05:35] i'm trying ffmpeg -f video4linux2 -i /dev/video0 -y /dev/null [05:35] ffmpeg -i /dev/video0 pipe: > /dev/null [05:36] oh [05:36] and the -f part [05:44] i try that but i get: At least one output file must be specified [05:45] diegoviola: pastebin your complete output with the command you executed [05:47] https://gist.github.com/diegoviola/6255154 [05:47] sorry if i did something wrong [05:48] you forgot the "pipe:" [05:48] diegoviola: ? 
[05:49] oh [05:49] https://gist.github.com/diegoviola/6255162 [05:50] now i get that [05:50] you will have to specify a container format [05:50] try: ffmpeg -f video4linux2 -i /dev/video0 -f matroska pipe: > /dev/null [05:51] that will mux your video stream into a matroska container [05:51] that works [05:52] but i was expecting i would get a window but the output would go to /dev/null :D [05:52] is that possible [05:52] hm... [05:52] dunno, lol, why not just play back the stream from /dev/video0? [05:52] ffplay -f video4linux2 /dev/video0 [05:53] that's what i wanted to do, thanks [05:53] sorry about that :D [05:53] heh [05:53] well you'll know it for the next time :) [05:53] right, thanks [05:54] :D [05:54] Action: diegoviola blushes [05:56] sweet, it does fullscreen too [f] [05:57] thanks a lot [08:22] Hi guys [08:22] I have a question about ffmpeg (today git version). [08:22] I have Logitech c920 webcam which produce yuvj420p video stream. [08:22] I want to copy this stream into video container (don't decode/encode it). [08:22] I'm trying a command: /home/ps/work/ffmpeg/ffmpeg -f alsa -ac 2 -i plughw:CARD=C920,DEV=0 -acodec aac -vcodec h264 -f v4l2 -i [08:22] dev/video0 -q 0 -vcodec copy -y -strict -2 out.mpeg [08:22] But I get just sound, the video is absent in the container. [08:22] Is it possible just copy video stream without decode/encode? [08:22] Repeat the command: ffmpeg -f alsa -ac 2 -i plughw:CARD=C920,DEV=0 -acodec aac -vcodec h264 -f v4l2 -i /dev/video0 -q 0 -vcodec copy -y -strict -2 out.mpeg [08:57] good morning [08:57] I have this broken flv file that I need to extract usable stuff from [08:57] I know the exact format it's supposed to be [08:57] how can I tell it to ffmpeg so that it only looks for that ? [09:06] format I want to impose should be identical to http://pastebin.com/wYDpkvQh [12:42] Morning& I'm attempting to create a feed for a ffserver stream and I'm receiving this error. What am I missing? [12:42] Could not find codec parameters for stream 0 (Video: rawvideo (I420 / 0x30323449), yuv420p, -4 kb/s): unspecified size [12:42] Consider increasing the value for the 'analyzeduration' and 'probesize' options [12:42] The error is "Picture size 0x0 is invalid" [14:12] Why I get this error: Unknown input format: 'concat' after running the below command: [14:12] ffmpeg -f concat -i mylist.txt -c copy output [14:14] ffmpeg -version: ffmpeg version 0.10.7-6 [14:16] I wouldn't be surprised if that version just didn't have that format/demuxer [14:30] Isn't it the latest version? 
[14:30] no [14:31] or well, it could be within the 0.10.* branch [14:31] but basically release branches are made at some point and they will only get bug fix backports from master after that [14:31] no new features [14:33] JEEB: You mean the below repository is an old one while it shows 2013-01-16 for its updating time: [14:33] https://launchpad.net/~jon-severinsson/+archive/ffmpeg [14:34] did you not understand what I just said about release branch versions :P [14:35] the guy started building 0.10.x [14:35] and there it is probably the newest, but [14:35] you will not get any new features that were put in after 0.10.x [14:35] the release was made, a branch created :P [14:35] only bug fixes go in after that [14:36] http://git.videolan.org/?p=ffmpeg.git;a=commit;h=7e16636995fd6710164f7622cd77abc94c27a064 [14:36] 0.10 release and branch was created in march 2012 [14:36] ok [14:36] (you can look here for the tags: http://git.videolan.org/?p=ffmpeg.git;a=tags ) [14:36] always when you are using a release [14:37] you should check when that release branch was originally started [14:37] :P [15:35] Can i set up ffserver to be connected to by a rtmp client? [15:40] is there a way to tell ffmpeg "if you ask me something, always assume that I answered yes"? [15:40] reason I would like this is because I use some batch-conversions, and it asks me before overwriting, and I always have to manually enter "y" then [15:44] shevy: iirc -y [15:44] (means just add -y as command line option) [15:45] ok cool a moment, lemme test [15:48] I think that works, thanks DonGnom, there was no yes/no query this time when using that command: [15:48] ffmpeg -y -i "pfZ38WrbxYA.mp4" -acodec libmp3lame -ab 128k /Depot/j/pfZ38WrbxYA.mp3 [15:49] shevy: np [15:54] DonGnom: you seem to know stuff ;) how would I tell ffmpeg to look for a specific format in a broken file ? [15:55] (idea being to fix said file) [15:55] sxpert: dunno (in fact im a ffmpeg noob i just knew that with -y because of my own batch conversions :) [15:55] ah [15:55] dang [15:55] ;) [15:56] sxpert: but there are much xperts here if you just kindly ask the public someone will answer you the guys/girls here are usually very helpfull. [15:57] sxpert, the most you can do is set the format with -f [15:57] before -i [15:58] if that fails then you ain't gonna get no magic :P [15:58] JEEB: I would like to tell it to look for the actual format, [15:58] specifying the contnents of the proper header to look for [15:58] no such feature, you can just hope it probes the file correctly and/or have the probesize be made bigger [15:59] if ffmpeg -i welp.file or ffprobe welp.file with a bigger probesize or whatever don't do it [15:59] tried that. finds something that makes no sens [15:59] well, if you then don't know the format you're SOL without some manual handiwork :P [15:59] damn [16:00] there should be a "skip until you actually find blah" [17:56] hi all [17:57] has anyone of you tried hardcoding .sub files into video? [17:57] it seems my ffmpeg doesnt support .sub files, do i have to compile it extra in order to enable .sub support? [17:59] what kind of format is .sub? i mean sure it's a subtitle, but... does it have a more elaborate name? 
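[Note: for reference, the concat demuxer that the 0.10.x build above is missing works like this in releases that do have it (it appeared well after the 0.10 branch); file names are placeholders:
    # mylist.txt
    file 'part1.mp4'
    file 'part2.mp4'

    ffmpeg -f concat -i mylist.txt -c copy output.mp4
All listed files need matching codecs and stream parameters, since the streams are only re-muxed, not re-encoded.]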
[18:00] im not sure how to check, it is 6mb big for 1 hour video length and comes with an idx file [18:00] run ffmpeg -codecs | grep D.S to find all decodable subtitles [18:00] ass and srt are supported for now on my ffmpeg [18:01] sounds like dvd stuff from little google research [18:01] yeah, probably [18:01] i was astonished why a sub file can be 6mb big [18:02] yeah that's hard to achieve, even in .ass [18:02] http://pastebin.com/Ejk9nVxJ [18:02] you could see if aegisub or some other subtitle editor can open it and then save it in a more sophisticated format [18:02] oh [18:03] yeah that would be possible, but im not awware of a batch function in aegisub [18:03] cuz its like 50 files [18:03] ah you have a bunch of that? [18:03] hmm... [18:03] can you upload one .sub .idx pair? [18:03] yeah [18:05] hm where to upload to... [18:05] used to be so easy and hassle free to upload files [18:05] back in megaupload days [18:06] mediafire [18:08] oh i see now [18:09] those are images [18:09] and dual language [18:10] alright, have to do it by hand then [18:10] im running ocr now [18:10] laters everyone [18:12] ahah now i remember [18:12] yeah [18:12] that's the crap with dvdsubs [18:12] truly ... [18:12] its what happens when there are no real utf8 standards [18:13] and the subs are horrible too [18:15] there is a way to hardsub this though... [18:17] CentRookie, what you *could* do is decode the stream with rendering the subtitles, then encode the raw stream produced [18:17] you wouldn't need to save the stream inbetween, just... [18:17] it would seperate the decoding and encoding a bit more on the command line, because i don't know how ffmpeg handles dvd subs [18:18] since i don't know better, i would play back the video with mplayer/mplayer2/mpv and grab the video in yuv4mpeg with ffmpeg over a named pipe and then encode with ffmpeg [18:23] mplayer -nosound -benchmark -vo yuv4mpeg:file=>(ffmpeg -f yuv4mpegpipe -i - output.h264 2>ffmpeg.log) anime.mkv [18:24] hm [18:24] add -ass or whatever you need to the mplayer command. [18:25] that's certainly new to me [18:25] so i would have to install mplayer and go with relaxed code line [18:26] relaxed, those are dvd subs, so libass wouldn't do anything really, would it? [18:26] i tried analyzing with ffmpeg 2.0 and it rejected .sub files [18:26] ah the whatever would be --sub=subtitle_file.sub [18:26] libass doesnt work obviously [18:27] ah right [18:27] it has its own command [18:27] ffmpeg recognizes it as dvdsub for me though [18:28] yeah, works [18:29] it rejected when i tried to convert it to ass [18:30] ffmpeg -i "black01.sub" -map 0:1 black01.ass gives out [ssa @ 0x14abb00] Only SUBTITLE_ASS type supported. [18:43] sorry, just woke up and glanced at the issue. [18:43] Action: relaxed funnels coffee [18:46] relaxed, thank you for your "glance" mm, I havent worked with mplayer before and havent used raw stream output as ffmpeg input, do you think you could write a working example for me so that I can use it without much modification for my own file? I have a input.mpg video file with input.idx and input.sub file, the input.sub file is a subtitle file with 2 language streams, 0:1 is English, this is the stream I would like to use and my overal [18:54] If the video and subs have the same name mplayer will automatically load them. [18:55] do the correct subs come up when you play it with mplayer? [19:00] its a remote server [19:00] CentRookie: the thing with dvds is the framerate may change, so this method could destroy audio/video sync. 
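[Note: an ffmpeg-only alternative to the mplayer pipe above that is sometimes used for image-based (VobSub/dvdsub) subtitles: decode the bitmap subtitle stream and overlay it onto the video. This is a sketch, assuming ffmpeg can open the .idx/.sub pair directly and that the wanted English track is the first subtitle stream of that input — check with ffprobe and adjust [1:s:0] accordingly; filenames and x264 settings are placeholders:
    ffmpeg -i input.mpg -i input.idx -filter_complex "[0:v][1:s:0]overlay" -c:v libx264 -crf 20 -c:a copy output.mkv
Since the subtitles are burned in, the video has to be re-encoded either way.]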
[19:00] so i cant verify until the whole video is complete i guess [19:00] yeah, im aware of that [19:00] i think ffmpeg rounds the fpsrate, right [19:01] when i checked the video input file it only said 23.8 fps, while standard should be 23.76 or something [19:01] but mplayer won't [19:01] and in this method it's feeding ffmpeg [19:01] well yeah, it is strange [19:02] i tried to fast convert mkv to mp4 with audio and video stream copy [19:02] but no matter what i did, it always ended with audio being 0,8 seconds too early [19:03] while mkv would play in sync [19:03] I don't know the reason, but matroska is a more flexible container. [19:04] i then tried seperating video and audio input by loading it via mapping, same video twice [19:05] ffmpeg -y -i input.mkv -itsoffset 00:00:00.700 -i input.mkv -map 0:v -map 1:a -vcodec copy -acodec copy output.mp4 [19:05] looked like this [19:06] but no matter how high i changed offset, the video of the output file had a 0,7 second delay [19:07] does it happen at the beginning? [19:07] yes, stays during the whole movie [19:07] just wanted to know if i used the parameters correctly [19:08] this means that audio would be delayed by 0,7 seconds, or? [19:08] maybe --> man ffmpeg|less +/^' -async' [19:09] ^^ [19:09] enough of manuals! [19:09] i read through 100 pages already [19:11] reading is fundamental [19:12] well time management is even more important! [19:13] -itsoffset might require encoding. [19:13] instead of stream copying [19:13] hm [19:13] test it with -t $time after the 2nd input. [19:13] ok [19:14] tsmuxer might be able to do it. [19:15] why are you inflicting this kind of pain on yourself? work? [19:16] hi there, I have a 462M voice recording in flac and I want it as small as possible. Can any of you suggest another format? [19:16] hendry, acc or vorbis [19:17] or mp3 [19:17] hm, so there is a linux version for tsmuxer [19:17] yes [19:17] CentRookie: isn't it "aac"? [19:18] I use it to create avchd dvds [19:18] yes, aac sorry [19:18] you can also try aac+ [19:20] you need to compile ffmpeg with support for those libs though, hendry, if you are on linux that is [19:21] hendry: ffmpeg -v 0 -codecs|grep "aac " [19:22] what's the output? [19:24] relaxed: DEA.L. aac AAC (Advanced Audio Coding) [19:25] do you need this in a specific format? [19:26] relaxed: just as small as possible really [19:26] trying a conversion now http://ix.io/7m4 [19:27] the MP3 is 116M and my upload channel sucks [19:27] should I be worried about 96 kb/s -> 128 kb/s I wonder [19:28] crap the aac is 158M. bigger! [19:28] lol [19:28] for speech you can probably go with 64 kb/s [19:28] I thought you said this was a flac? [19:29] use the flac as your input. [19:29] relaxed: i've exported it from Audacity again as a smaller mp3 now [19:30] ffmpeg -i input.flac -c:a libvorbis -b:a 64k output.ogg [19:31] so few players support ogg, a pity [19:35] 76M output.ogg is best thanks [19:36] what about speex? it's a voice recording from a meetup? [19:36] bad support [19:37] hendry: same size in mp3 --> ffmpeg -i input.flac -c:a libmp3lame -b:a 64k output.mp3 [19:37] well ogg has slightly better quality [19:38] true [19:38] alrighty, i tried moving itsoffset to other places [19:38] didnt have the effect at all [19:38] all the web tutorials either got it wrong [19:38] did you encode? 
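[Note: a sketch of the "re-encode instead of stream copy" variant relaxed hints at for -itsoffset: keep the video copied but re-encode the delayed audio. Whether this actually cures the 0.7 s offset is untested here, and the 0.7 / aac values are placeholders:
    ffmpeg -i input.mkv -itsoffset 0.7 -i input.mkv -map 0:v -map 1:a -c:v copy -c:a aac -strict experimental output.mp4
The offset is applied to the second input, whose audio is the stream being mapped.]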
[19:38] well it would defeat the purpose [19:38] the goal was not to re-encode [19:39] I think it's a requirement [19:40] oh yes, output.mp3 is just as small as the OGG really [19:41] ^^ [19:41] according to some tut it should be like ffmpeg -i input.flv -itsoffset 00:00:03.0 -i input.flv -vcodec copy -acodec copy -map 0:1 -map 1:0 output_shift3s.flv [19:42] could try moe the vcodec parameters before the mapping [19:42] but doubt it will change anything [19:47] i think it is a version thingy [19:48] ffmpeg 2 might have something in it, that auto synchronizes audio and video stream and trims it to fit [19:48] just ignoring every offsetting parameter [19:48] it is similiar to its resolution [19:49] if i set output resolution to a res with an aspect of 16:9 and width 480 and height 760 or soemthing, and the input file has the aspect 4:3 it changes only width to 480 and keeps height according to aspect ratio [19:49] automatically [20:14] Note that the timestamps may be further modified by the muxer, after this. For example, in the case that the format option avoid_negative_ts is enabled. [20:14] how do i enable avoid_negative_ts ? [21:18] is it possible to tell the concat demuxer to check if the last frame and the first frame of two consecutive files is the same and not copy it twice? [00:00] --- Sun Aug 18 2013 From burek021 at gmail.com Mon Aug 19 02:05:02 2013 From: burek021 at gmail.com (burek) Date: Mon, 19 Aug 2013 02:05:02 +0200 (CEST) Subject: [Ffmpeg-devel-irc] ffmpeg-devel.log.20130818 Message-ID: <20130819000502.94E1A18A00B6@apolo.teamnet.rs> [00:01] ffmpeg.git 03Xi Wang 07release/0.6:636c42de19f6: rtmp: fix multiple broken overflow checks [00:01] ffmpeg.git 03Xi Wang 07release/0.6:9aa60889f3f9: rtmp: fix buffer overflows in ff_amf_tag_contents() [00:01] ffmpeg.git 03Michael Niedermayer 07release/0.6:a7faa1d0703b: huffyuvdec: Check init_vlc() return codes. [00:01] ffmpeg.git 03Michael Niedermayer 07release/0.6:a77cf47d887a: huffyuvdec: Skip len==0 cases [00:01] ffmpeg.git 03Diego Biurrun 07release/0.6:3f785a538b4b: configure: Make warnings from -Wreturn-type fatal errors [00:12] ffmpeg.git 03Vignesh Venkatasubramanian 07master:571efd972986: lavf/matroska: Adding the new SeekPreRoll element [00:28] ffmpeg.git 03Paul B Mahol 07master:02eb15a6c1b4: wavpackenc: do not copy samples if they are not available [02:37] ubitux: cool [02:42] ffmpeg.git 03James Almer 07master:af248fa11742: matroskadec: Improve TTA duration calculation [02:42] ffmpeg.git 03Michael Niedermayer 07master:338f8b2eaf36: avformat/matroskadec: check out_samplerate before using it in av_rescale() [09:15] ubitux: fyi I can now parse the first block of the first nonkeyfame correctly [09:15] ubitux: shall I commit so we can work on reconstruction together? [09:15] Daemon404: you in for doing parts also now? or still moving? [11:05] BBB: would be nice, but don't you want me to finish the intra 32 ? [11:06] ubitux: sure, I'm not trying to pull you away, just asking if you're ok with me pushing it while it's unfinished/unconfirmed [11:06] ubitux: think of it as "review would be nice" rather than anything else [11:06] ah sure, whatever [11:07] how's rework of intra pred going? 
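[Note: on the "how do i enable avoid_negative_ts" question above — it is an ordinary output (muxer) option, so it goes after the input and before the output file name. A sketch of cutting a segment with stream copy, with the times as placeholders:
    ffmpeg -ss 00:01:00 -i input.mp4 -t 00:00:30 -c copy -avoid_negative_ts 1 segment1.mp4
This is the same "-avoid_negative_ts 1" that the trac wiki note further down suggests for segments that will later be rejoined with the concat demuxer.]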
[11:07] slowly :) [11:07] i couldn't do anything relevant yesterday but i'll work on it today [11:15] ok pushed [11:16] btw even if 1 or 2 of the 6 are done, that's a great improvement over the current code already, so that would be fine to push, imo [11:16] but up to you ofc [11:17] I'll try to work on basic inter block reconstruction at this point, so things like subpel interpolation etc. - we'll see how that goes [12:00] ffmpeg.git 03Janne Grunau 07master:c34a96a5ddfa: dxa: fix decoding of first I-frame by separating I/P-frame decoding [12:00] ffmpeg.git 03Michael Niedermayer 07master:66722b4ba955: Merge commit 'c34a96a5ddfa390ce2a352eca79190766c9056d4' [12:41] ffmpeg.git 03Martin Storsj? 07master:0a14fefd68cc: movenc: Indicate that negative timestamps are supported [12:41] ffmpeg.git 03Michael Niedermayer 07master:45975ab7a192: Merge remote-tracking branch 'qatar/master' [12:48] BBB: could be make the topleft accessible from top[-1] and left[-1] ? [12:48] s/be/we/ [12:48] (assuming that's not really the case) [12:49] I suggest adding a note to the https://trac.ffmpeg.org/wiki/Seeking%20with%20FFmpeg page on the "Cutting small section" part that specifies the need to use "-avoid_negative_ts 1" if those segments are to be rejoined using the concat demuxer [12:49] that would simplify things for vert_right [12:51] ubitux: left[-1] works, is that enough? [12:51] ah, if it works then it's perfect :) [12:51] oh wait no it doesn't [12:51] only top[-1] [12:51] sorry [12:51] that's what i was thinking [12:51] *headache* [12:51] i would need both ideally [12:52] that page should be indexed to google on 'ffmpeg cut' , currently a search doesn't even bring up any of the official docs on the first page [12:52] ubitux: maybe add left[-1] if you feel it helps, it's only 1 line of code in check_intra_mode() I think [12:52] ubitux: when do you need it? [12:53] for vert_right [12:53] typically to simplify this chunk: [12:53] DST(0,6) = (l3 + l4 * 2 + l5 + 2) >> 2; [12:53] DST(0,4) = DST(1,6) = (l1 + l2 * 2 + l3 + 2) >> 2; [12:53] DST(0,2) = DST(1,4) = DST(2,6) = (tl + l0 * 2 + l1 + 2) >> 2; [12:53] s/simplify/make a loop out of it/ [12:54] of course i can make an exception [12:54] are you using rolling buffer, or 2, or 3? [12:54] using rolling/using 1 rolling [12:54] i was going for 2 rolling buffers [12:55] one for odd, one for even [12:55] right [12:55] that's what simd does also [12:55] (rolling registers, same thing) [12:55] fun [12:55] i didn't look at all at the simd [12:55] if you think it makes things easier, I'd say just add it to check_intra_mode [12:56] yeah i was looking at that code [12:56] ok i'll check that [13:08] ffmpeg.git 03Michael Niedermayer 07master:f4f6eb5b7489: fate: add ffv1.0 test [13:09] can't modify the title to say Seeking and cutting [13:09] maybe google would index it on "ffmpeg cut" searches [13:32] is it wrong to comment on stuff like that even though i'm not an authority on .. anything? [13:34] authority? [13:34] i mean i'm not entirely sure i'm helping that guy [13:35] or is it ok to try to help someone if you have a slight clue.. until someone who knows more about the subject comes along? 
[13:35] i mean i don't want to get in the way of bug reports or anything [13:38] if you are unsure about what you are doing, it's best to stay away [13:42] lol those youtube captions of non-english speakers [13:44] ok [14:38] ffmpeg.git 03Michael Niedermayer 07master:70967a60dff4: mpeg12dec: also print progressive seq and chroma format [14:38] ffmpeg.git 03Michael Niedermayer 07master:7d776062f97d: avcodec/error_resilience: Fix handling of matrox mpeg2 [16:34] BBB: feel free to merge up to 0c4bc031528ae842214250a4aaee7c17c83724a8 [16:34] note that i didn't bench anything [16:45] ffmpeg.git 03Michael Niedermayer 07master:c666c59ac1a9: mpegts: reanalyze packet size on mismatches [18:47] ok, hor_down down, 2 left. [18:49] ffmpeg.git 03Michael Niedermayer 07master:63c0e9077e22: avcodec/jpeg2000dec: fix near null pointer dereference [18:55] s/ down/ done/ [20:23] vert_left done, one left. [20:24] BBB: actually, don't merge it now [20:24] i'd like to finish all of them [20:24] just one left, should be done soon [22:21] ok, all done [22:24] 1 file changed, 134 insertions(+), 680 deletions(-) [22:25] i wonder if i can re-roll the 4x4 now [22:25] since gcc seems to somehow unroll small versions [22:36] BBB: feel free to pick latest commit from my branch [22:36] ...and tell me what you would like me to work on now :) [22:43] ffmpeg.git 03Carl Eugen Hoyos 07master:47f9a5b737c5: Warn the user if a pix_fmt != yuv420p was chosen for MPEG-2 video encoding. [23:28] ffmpeg.git 03Michael Niedermayer 07master:ee7f2609a0dc: avformat/mpegts: print packet size warning only if new size differs from old [23:28] ffmpeg.git 03Michael Niedermayer 07master:0f2f65bd5835: mpegts: fix pos47_full [23:28] ffmpeg.git 03Michael Niedermayer 07master:b4429c259a64: mpegts_get_pcr: dont loose a packet when resyncing [23:29] ffmpeg.git 03Michael Niedermayer 07master:d73cbc22c5f2: avformat/mpegts: resync from the smallest packet size on [00:00] --- Mon Aug 19 2013 From burek021 at gmail.com Mon Aug 19 02:05:01 2013 From: burek021 at gmail.com (burek) Date: Mon, 19 Aug 2013 02:05:01 +0200 (CEST) Subject: [Ffmpeg-devel-irc] ffmpeg.log.20130818 Message-ID: <20130819000501.8CBF818A00A0@apolo.teamnet.rs> [04:31] Which GPU is best for OpenGL video rendering? I have been informed AMD GPUs are inferior to other cards regarding it. [04:33] nvidia [04:33] "I have been informed", by AMD ? [04:33] at least nvidia is better supported in linux [07:11] G'night [07:11] Don't let the fermis burn you alive while you sleep [07:18] what ? [10:15] Hi. I'm trying to record my audio output along with my screen using ffmpeg, using this command: ffmpeg -f alsa -ac 2 -i hw:0,0 -f x11grab -r 30 -s 1024x768 -i :0.0 -acodec pcm_s16le -vcodec libx264 -preset ultrafast -crf 0 -threads 0 output.mkv. However, it captures my mic instead of my sound output. Any ideas why? [11:48] whats it called when you weave frames together to change an interlaced framerate, and does ffmpeg/avconv have a filter to do that [11:50] so you have each field as its own picture and you want to fieldmatch and weave them? 
[11:51] not sure if you can do that, but I do know that if you have the two fields in a single interlaced "frame" that you should be able to fieldmatch them [11:52] http://ffmpeg.org/ffmpeg-filters.html#fieldmatch [11:52] and then if it's full IVTC you need there's also a decimation filter now [11:52] which takes the duplicate out [11:53] i ripped an animation that was an american animation done at 24fps, but was then interlaced to 30fps, and i want to get it back to its original frame per frame view [11:56] which would mean you want IVTC. Also you just said "weave frames together", which still leaves something unknown. Is it straight from a DVD or TV or whatever, or did you encode it so that each field became its own "frame"? [11:56] because the latter case I'm not sure how to deal with inside ffmpeg, but the usual case of interlaced encoding and fields being fields should work just fine with the two filters I noted [11:56] fieldmatch and decimate [11:57] yeah, ive essentially deinterlaced it in the way that the two feilds are now one frame, but ocmbing is still visible [11:57] ripped from a dvd [11:58] ... [11:58] i did use a 4:2:0 codec, but its not the colours i'm needing to represent, just trying more proof of concept [11:58] I hope you aren't gonna try to use that "deinterlaced" version as input? [11:58] ah, ivtc, inverse telecine, yeah thats exactly what i was thinking [11:58] yeah i want to [11:59] that makes no sense [11:59] why not? [11:59] because you've already "deinterlaced" it, and depending on what the hell you meant with that I have no idea what you did there [11:59] interlacing is rows of pixels, i can just specify which field is which [11:59] so just take the source and fieldmatch/decimate it? [11:59] both the fields are one frame [12:00] yeah, i think thats what i want [12:00] yes, not the thing you did whatchamabob to [12:00] if you did something to the source already and have another file with that, don't use that as the source for the IVTC [12:00] use the original vob or whatever as the source [12:01] just trying to say that [12:01] ah, i understand [12:01] i didnt run any deinterlacing on the rip [12:01] im just saying when one rips to x264 it essentially converts to progressive [12:02] ok... then you just said for whatever reason that you deinterlaced it and thus made me react with STOP WHATEVER YOU'RE DOING AND MAKE SURE YOU'RE NOT USING A WEIRD FILE FOR INPUT! [12:02] lol [12:02] anyways, see the filter documentation I linked, I linked you the point of the fieldmatch filter, and the other if I recall correctly is decimate [12:03] no i made sure that no deinterlacer was on :P [12:03] lol [12:03] you're once again making me unsure [12:03] yeah, i will have an intense read over it [12:03] maybe I will just ignore what you're saying just to not comment on random lines you say all the time [12:03] lol ok, one moment [12:04] because you already said that you just ripped the DVD file structure [12:04] and now you suddenly say "I made sure that no deinterlacer was on" [12:04] no i didnt rip the file structure :P [12:04] yet when you are ripping a DVD structure you shouldn't have options like that to begin with :s [12:04] i ripped to h264 (High), yuv420p, 720x480 [PAR 8:9 DAR 4:3], 29.97 fps, 29.97 tbr, 1k tbn, 59.94 tbc (default) [12:04] lossless and interlaced coding, I hope? 
[12:04] no, no interlacing [12:05] and not lossless either, dvds are lossy in the first place :P [12:05] then go grab the DVD and rip out the dvd structure again [12:05] yes, but you're making it worse [12:05] lol, i didnt rip the dvd structure [12:05] you're now meaning to use this already lossy re-encode of the DVD to make another lossy re-encode!? [12:05] lol i know, but im not using the video as is [12:05] exactly [12:05] thats exactly what i want to do :P [12:05] I... I don't have words for this [12:06] just use the goddamn DVD source as source and IVTC it and that's it? [12:06] there's no reason to make it EVEN WORSE in the process [12:06] yes, but i have to see if it works in the first place [12:06] no, but the final result will not use the video at all [12:07] i just need it as reference in its original animated frames [12:08] I think I'll just really ignore you, my head hurts as to why you would use an already lossily re-encoded encode of the source instead of the source... [12:08] but yes, those two filters, have fun [12:08] :P lol, thanks for your help [12:08] if it works i will go back and rerip [12:09] ok, that sounds better :P [12:11] oh right, id need "reverse telecine" rather than inverse [12:12] usually it's just called IVTC [12:12] those two words in general mean the same thing methinks [12:12] ah [12:12] telecine was done to the source (in theory, unless someone decided to do something awful -- although not doing soft telecine is already awful), and you're inversing/reversing it [12:12] yeah i think reverse telecine is a dvd term [12:13] dvd player term [12:13] (soft telecine is basically a way of letting people encode the actual picture rate while putting in headers that tell the player that it should add fields in case it wants interlaced output) [12:14] (hard telecine is actually doing the telecine in the footage and encoding it) [12:14] ah [12:14] i dont think avconv has fieldmatch [12:14] yeah, libav doesn't have this filter yet, and the packaged libav binaries would be even older, so :) [12:15] :/ [12:15] :( [12:15] oh, i'll see if cinelerra does it, thats what i was gonna try [12:15] grab an ffmpeg binary of ffmpeg binary for your OS I guess? [12:15] argh [12:15] *grab an ffmpeg ffmpeg binar [12:15] *binary [12:15] !bin [12:15] ugh, what was the trigger :| [12:15] ubuntu? [12:16] ubuntus ffmpeg command redirects to avconv [12:16] yes, rightly so. Since ubuntu uses libav [12:16] cinelerra doesnt even recognise the video format i used [12:16] :P [12:16] ok, so you should be able to use the static ffmpeg binaries [12:16] maybe i might reencode it again to import to cinelerra :P [12:16] http://ffmpeg.org/download.html [12:17] check the "FFmpeg Linux Builds" part [12:17] the static builds [12:17] are what you want [12:17] the PPA IIRC has a very old version [12:17] didnt know static versions existed, thats helpful then [12:17] http://ffmpeg.gusari.org/static/ [12:17] this methinks? [12:17] if you have 3.2 or newer kernel [12:18] i cant even remember what kernel sanders i have [12:18] i'll try this one i grabbed, a nightly [12:19] uname -a [12:19] 3.5 [12:19] looks like this one i grabbed works [12:20] yeah [12:20] that should be new enough for having the filter :) [12:21] yeah, i'll see if the default params work automatically [12:21] they seem to work as default, but ill probably redo it specified, it looks so beautiful!! 
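[Note: a minimal sketch of the IVTC chain discussed above (fieldmatch plus decimate), run against the original DVD source as JEEB insists rather than a lossy intermediate; filenames and x264 settings are placeholders:
    ffmpeg -i VTS_01_1.VOB -vf fieldmatch,decimate -c:v libx264 -crf 18 -c:a copy output.mkv
fieldmatch recombines the telecined fields and decimate drops the resulting duplicate, taking 29.97 fps material back to 23.976 fps; sources with leftover combing sometimes get a conditional deinterlacer in between, e.g. fieldmatch,yadif=deint=interlaced,decimate.]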
[12:22] it's pretty nice to have fieldmatch and decimate filters in ffmpeg now :) First Myrsloik rewrote tritical's TIVTC for vapoursynth as VIVTC, and then ubitux ported that over as a libavfilter filter [12:22] i never use programs deinterlacing when i play stuff on the computer, so i always watch things all interlaced and its always visible, this looks absolutely fantastic now though, i never even realised how bad it was [12:22] ah [12:23] itd be cool if theyre brought into programs as a real time filter [12:23] this didnt seem to time the animation completely right, but if i fiddle with some settings i'll proably get it [12:25] well, nothing stops people from using libavfilter for realtime stuff [12:25] I think mpv for example does exactly that [12:25] oh [12:26] (mpv being http://mpv.io/ ) [12:26] and I bet other apps do it too [12:26] i'll see if its in synaptic [12:26] doesnt look liek it :/ [12:26] thanks very much for the help btw [12:27] yeah, it's new enough not to be there yet [12:28] yeah, mashing the default suppositories with outdated software seems to be the thing [12:28] well, more like it's a case of "this stuff is new and not brought up with debian/ubuntu yet" [12:29] yeah [12:29] and then for why debian/ubuntu still use an old version of libav is because there's o9k packages using it, and many use old APIs that would get removed in the next release [12:29] but I think debian is finally transitioning through its first update of that [12:29] hopefully soon, i'm sure they work hard regardless [12:30] been on ubuntu for a good 5 years ive seen it grow [12:30] so the general gist of distributions that do releases is: 1) if we know it and it's already packaged and has no issues, pick the newest version released at the point of final freeze 2) if there are issues, stay with the last package-able version [12:31] its much more compatible now which is a releif, i remember 6.04 (or was it .06 something weird) being very bizzare [12:31] yeah, well debian is about using stable packages right? [12:32] debian uses the release procedure, yes. And ubuntu then imports stuff managed by the debian side at some point from the unstable repository methinks [12:32] ah [12:32] much of the multimedia stuff in ubuntu is managed by debian people [12:32] so to get stuff updated in ubuntu at times you need to just start poking the debian side of things [12:33] there are some packages that have maintainers in ubuntu itself of course [12:33] at least wine was at one point [12:34] ah, i add the wine ppa [12:44] is there a way to change the framerate without it reencoding the frames but instead speeding it up [12:46] May I suggest there be added some additions to the https://trac.ffmpeg.org/wiki/Seeking%20with%20FFmpeg page? [12:47] xlinkz0, you should poke someone with actual rights on the trac :) [12:47] who's that? [12:48] probably some people on the -devel channel, methinks? 
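[Note: the "change the framerate without re-encoding" question above goes unanswered in the log; one approach commonly suggested elsewhere — not taken from this conversation — is to dump the raw video bitstream and re-mux it at a forced rate. A sketch, assuming the video is H.264 and 25 fps is the target; audio is left out because it would no longer line up after the speed change:
    # extract the raw elementary stream (container timestamps are discarded)
    ffmpeg -i input.mp4 -map 0:v -c copy -f h264 video.264
    # re-mux it, telling ffmpeg to treat the stream as 25 fps
    ffmpeg -r 25 -i video.264 -c copy output.mp4
The frames themselves are untouched; only the timing changes.]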
[12:50] i just left a message there, hopefully someone sees it or someone who knows someone forwards it :D [12:50] i'm glad i fixed it for myself [12:54] if I were libav I'd make killer docs, index it properly in google and in a couple of months steal all the new people looking for these tools :D [12:54] so sad that there's so little invested in docs [12:55] xlinkz0: you can modify wiki yourself [12:59] lol JEEB i did a lossless rip, which came out huge for starters, but it worked worse than on the lossy rip :P [13:00] dont even know how that is possible [13:04] oh infact, mightve been my rip [13:07] lol i actually can modify the wiki [13:07] awesome [13:12] modify away xlinkz0, ramp that wiki! [13:13] surely will help me when i'm not home to review all my "hidden ffmpeg magic tricks" file.. [13:14] :P [13:22] hi Im trying to build ffmpeg with msys2 [13:22] --enable-memalign-hack --disable-static --enable-shared --enable-gpl --enable-libvorbis --enable-pthreads --disable-doc [13:22] and it fails with http://paste.kde.org/p019f97db/ [13:23] looks like it does try to link to avutil instad of avutil-52 [13:24] is anything missing in the configuration? [13:26] and im using mingw-w64 x86 4.8.1-rev3 fom mingwbuilds [13:26] well, first of all [13:26] if you're using mingw-w64 [13:26] you don't need memalign hack [13:27] also are you sure you want pthreads and not win32threads? [13:28] and does that linking error happen with the current git HEAD at this moment? [13:28] it happend with 1.1.3 and 2.0.1 [13:29] I'll try githead [13:30] and the best example of how to configure ffmpeg on windows was from 2006 :P [13:32] well, I just noted the problem points :P [13:32] as in, things that you might not need to enable nowadays [13:32] memalign-hack is only needed for certain things, and mingw-w64 is not one of those [13:32] pthreads is generally not needed [13:32] win32 thread implementation should be enabled by default [13:35] anyways, all I know is that I build shared ffmpeg quite often with mingw-w64 and that hasn't failed for me yet [13:42] JEEB: hm do you crosscompile or do you use msys? [13:42] I use msys + mingw-w64 [13:42] hm k [13:42] I use the msys2 builds [13:42] hm [13:43] well, msys and msys2 are just the shells and the terminal thing that converts *nix-like paths to windows paths inside [13:43] the mingw toolchain is what is doing the compilation [13:44] yes [13:45] anyways, try current git HEAD and make distclean, then at first try with just ./configure --enable-shared [13:45] see how that goes, and move from there [13:50] I'm gonna try with my setup as well, with vanilla ffmpeg [14:05] JEEB: the same problem [14:05] Action: JEEB is still compiling [14:21] TheOneRing, make clean and then ./configure --enable-shared and then make resulted in the DLLs and tools being created just fine :D [14:26] linker seems to be the gcc as per usual, and then MS's lib gets called for the dot-lib creation [14:27] I dont have link in my path [14:27] or the msvc stuff [14:27] I have but I don't see link getting used for linking [14:27] gcc gets called for linking [14:27] lib.exe gets called where in your case dlltool gets called [14:27] or lib.exe [14:27] for the dot-lib creation [14:27] but I don't think that's it [14:28] although not like I have an idea :P [14:28] I also use a mingw-w64 toolchain from here http://files.1f0.de/mingw/ [14:29] hm in \libavutil [14:29] so you have a libavutil.dll.a [14:29] or only the .lib? 
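[Note: pulling together JEEB's advice above for the msys2/mingw-w64 build — drop --enable-memalign-hack and --enable-pthreads (win32 threads are used by default) and start from a clean tree; the remaining flags are simply the ones from the original attempt:
    make distclean
    ./configure --enable-shared --disable-static --enable-gpl --enable-libvorbis --disable-doc
    make
If that links cleanly, further options can be added back one at a time.]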
[14:29] .dll.a gets created by the linker for mingw linking [14:29] that's OK [14:30] (with gcc/ld you link the .dll.a to create a dependency on the .dll) [14:31] then the lib file gets created in my case with the MS tool, in your case with dlltool, but that really shouldn't matter since only MSVC uses the lib files :) [14:32] hm yes [14:32] but I didnt see the dll.a [14:32] just cleared the dir, recompiling [14:36] ok [14:36] it is there [14:36] libavutil.dll.a [14:36] ah do you build inside of the source? [14:36] yes [14:37] I didn't create a separate dir from which I ran the configure [14:55] Hello [14:55] does anybody happen to know what does the "-quality good"? [14:56] I couldn't find -quality in the man page [14:58] I think that's a libvpx-specific option? [14:58] you could check what it does in vpxenc [15:20] arg [15:21] I used make -e because I made it use ccache via CC="ccache gcc" [15:21] but yes [15:21] that somehow broke ffmpeg [15:37] HELP! ffserver polling get no any fd ready for read, even ffmpeg post connection [15:38] Last message repeated 1 time(s). [15:38] aha [15:38] when ffserver use poll to get the fd that ready for read/write, then each connection can get service. but sometimes [15:38] ffserver get a condition: [15:38] c->poll_entry->revents does not include any events about POLLIN, even the post connection from ffmpeg [15:38] ffmpeg keep posting video data to ffserver, then the feed file can get new video data [15:39] ffserver says there is no fd ready for read. [15:39] ffmpeg says that the receive side does not read data on time. [15:39] any ideas? [15:39] HELP [15:39] please [15:44] Can I deleat the /tmp/feed1.ffm file in run time? [15:44] Can I delete the /tmp/feed1.ffm file in run time? [15:44] any one have idea? [15:46] TheOneRing, yeah ccache and stuff on windows can be funky :P [15:46] keep to using normal tools instead if possible :) [15:46] hello, I'm encoding some DVD in x264 with avidemux but the pictures are not steady. Does anyone have knowledge in x264 ? I know it's ffmpeg but maybe somebody can help [15:49] I have a sample if you want to check the quality [15:49] https://docs.google.com/file/d/0B4auCK8LbjrmNTdESEpHbFoyZTA/edit?usp=sharing [15:54] fjorgynn, do you have any idea about my question? [15:54] you need to download the file because youtube fix my problem but decrease quality [18:22] hi, is there a library which i can feed data (sound/video) frame by frame into to produce a video output? pref .flv/.mp4 [18:39] erbalist: yes. There's ffmpeg. [18:44] cbreak: when i say data, i mean pixel data, not a .jpg or whatever [18:44] idealy x*y of rgb [18:46] make it a bmp and feed that? :X [18:47] im trying to get around saving a bunch of crap to disk :{ [18:47] save it to /tmp/ [18:47] that's not saved to disk? [18:48] on most distributions /tmp/ is a mounted ramfs [18:48] okay thanks [19:09] erbalist: ffmpeg accepts rgb, yuv and so on [19:10] erbalist: if you use the libswscale component you can get a wide range of conversions [19:10] i mean straight from a pixel array [19:10] erbalist: nothing will be saved to disk until the final result [19:10] rather than from a file [19:10] erbalist: yeah, and? [19:10] so you're saying that's possible, for example from inside a php script? [19:10] you do know that ffmpeg is available as C libraries? [19:11] libswscale, libavformat, libavcodec, ... 
[19:11] it is possible from C/C++ [19:11] no idea if that garbage of a language PHP can do it [19:11] haha [19:11] okay thanks ill look into it [19:11] if you want to go the program route, maybe think about feeding the program via a named pipe or similar [20:09] I have a ffserver video stream that only seems to be able to be played in vlc and not any other player&. how can i work out why? [20:17] asherawelan: what codec? [20:18] durandal_1707: I have specified Format mpeg in the stream [20:18] perhaps it does not use yuv420 subsampling? [20:21] durandal_1707: eek, you lost me there& thought this would be straight forward enough using the sample and all... [20:23] i'm just guessing as i have no enough information [20:23] durandal_1707: this is the stream& http://162.13.84.103:8090/test1.mpg [20:24] durandal_1707: works fine in VLC - but not through other players [20:27] mplayer? [20:27] protocol? [20:28] works with mplayer [20:28] some fat bunny [20:28] i get "Invalid frame dimensions 0x0" [20:29] cbreak: yeah some fat bunny! I'm trying to a few flash/html5 players - nada [20:30] do those even support mpeg? [20:30] shouldn't you rather try mp4 (an mpeg4 container) and h264 codec? [20:31] or maybe that other one from google [20:31] webm [20:32] cbreak: great idea - perhaps I'm not understanding! :) [20:32] i will try setting the format to mp4, and the codec to h264 [20:32] cbreak: sound about right? [20:33] there was no sound. [20:33] cbreak: yeah, thats intentional [20:34] ok, seems I didn't compile with libx264& oops [20:34] so no h264 for me [20:34] balls [20:35] ffmpeg might be able to do it on its own [20:35] or maybe not :) [20:35] (x264 most certainly's the better solution) [20:38] right - downloading tar balls etc, time to recompile! [20:42] cbreak: s/x264/mpeg2 [20:43] mpeg2 is a waste of bandwidth and storage space [20:44] cbreak: at least you can play mpeg2 on pentium2 [20:44] no [20:44] I don't have a pentium2 [20:45] I do have a miniature mobile phone thing, which handles h264 just fine. [21:05] Two strange issues& I have ffmpg configured with --enable-libx264 - but "VideoCodec h264" in the stream gives me this error: "/etc/ffserver.conf:18: Unknown VideoCodec: h264" - what am i missing? [21:26] asherawelan: try libx264 instead of h264 [22:16] asherawelan: did you enable gpl? [22:16] you can't use x264 in lgpl ffmpeg [22:19] in that case ./configure would have failed [22:22] hi [22:23] i am trying to create screencasts with ffmpeg as gif [22:23] but the colors are somewhat screwed up [22:23] http://53280.de/vid/20130818221959.gif [22:23] any way to fix this? 
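[Note: on the earlier "feed pixel data frame by frame" question — besides the C libraries, the pipe route cbreak mentions can be driven entirely from the command line. A sketch, where frame_producer is a hypothetical program writing packed RGB24 frames to stdout and the 1280x720 / 25 fps geometry is a placeholder:
    frame_producer | ffmpeg -f rawvideo -pixel_format rgb24 -video_size 1280x720 -framerate 25 -i pipe:0 -c:v libx264 -pix_fmt yuv420p output.mp4
Nothing touches the disk except the final .mp4; -pix_fmt yuv420p keeps the result playable in common players.]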
[22:23] the same video looks alright if i use byzanz [22:24] no way to fix that, gif is limited to 255 colors [22:24] *256 actually [22:26] i think gimp can create color palettes for gifs that suit the colors used better, but you'd have to post process a recording with more colors [22:26] then i will stick with byzanz for screen recording [22:26] no clue how it does it [22:27] it records screen straight to gif [22:27] and it always looks fine [22:30] cbreak: i will check i enabled gpl [22:30] cbreak: yes [22:30] ffserver version git-2013-08-16-faf7c35 Copyright (c) 2000-2013 the FFmpeg developers [22:30] built on Aug 18 2013 18:51:06 with gcc 4.1.2 (GCC) 20080704 (Red Hat 4.1.2-54) [22:30] configuration: --prefix=/root/ffmpeg_build --extra-cflags=-I/root/ffmpeg_build/include --extra-ldflags=-L/root/ffmpeg_build/lib --bindir=/root/bin --extra-libs=-ldl --enable-gpl --enable-nonfree --enable-libmp3lame --enable-libopus --enable-libvorbis --enable-libvpx --enable-libx264 [22:30] and did configure find the libs? [22:31] Rasi: It does what klaxa just described [22:36] you might want to consider to use webm instead of gif too :X [22:36] if embedding on websites is of importance, webm > gif [22:38] it also may reduce cpu load even though it doesn't seem like it at first [22:38] but that gif kept my cpu pretty busy [22:40] filesize might go down too [22:53] gif is pretty bad at efficiency [23:24] gif is pretty bad at everything [23:24] it's expensive to use due to patents [23:37] cbreak: gif patents expired.... long ago [23:46] cbreak: how still around? [23:46] cbreak: still trying to sort this configure issue out.... [23:48] Can any one please help me, I am receiving this error when starting ffserver - "/etc/ffserver.conf:18: Unknown VideoCodec: h264" - I can confirm that I compiled ffmpg with "--enable-libx264" - am i missing something else? [23:49] asherawelan: try libx264 instead of h264 [23:50] i told you to try that hours ago :x [23:50] klaxa: apologies, my connection has been in and out - i may have missed it [23:50] i tried and receive this error [23:50] Stream #0:0 -> #0:0 (mpeg4 -> libx264) [23:50] Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height [23:50] now that's something different, but we'll need some more info [23:50] pastebin your ffserver.conf [23:51] klaxa: sure thing [23:51] klaxa: http://pastebin.com/424FxHA0 [23:52] >VideoBufferSize 333 [23:52] klaxa: I'm running this line too [23:52] ffmpeg -i /root/bigbuckbunny/big_buck_bunny_1080p_surround.avi http://localhost:8090/feed1.ffm [23:52] klaxa: what should it be? 
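[Note: the palette problem described above (256-colour GIF output with washed-out colours) is addressed in ffmpeg releases newer than the ones in this log by the palettegen/paletteuse filters, which build a custom palette from the actual footage. A two-pass sketch, filenames as placeholders:
    # pass 1: compute a palette tuned to this clip
    ffmpeg -i screencast.mkv -vf palettegen palette.png
    # pass 2: quantize the clip against that palette
    ffmpeg -i screencast.mkv -i palette.png -filter_complex "[0:v][1:v]paletteuse" output.gif
This is the post-processing approach klaxa describes, done inside ffmpeg itself.]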
[23:52] 333 sounds suspicious [23:52] i'm not sure though [23:53] klaxa: I tried commenting it out - same thing [23:53] ok - ill paste error [23:54] klaxa: OK, i missed an error further up& her it is [23:54] http://pastebin.com/4XJvvi4b [23:54] klaxa: line 20 - 24 [23:55] dunno, lol [00:00] --- Mon Aug 19 2013 From burek021 at gmail.com Tue Aug 20 02:05:01 2013 From: burek021 at gmail.com (burek) Date: Tue, 20 Aug 2013 02:05:01 +0200 (CEST) Subject: [Ffmpeg-devel-irc] ffmpeg.log.20130819 Message-ID: <20130820000501.CD98118A027C@apolo.teamnet.rs> [00:17] sigh [04:05] hi all [04:05] I got the big problem [04:05] ffserver hangs [04:05] there is no any data output by ffserver in the condition [04:05] ffserver says :there is no any data arrived from ffmeg via http post [04:06] ffmpeg says: the receive side does not read data on time [04:06] does someone have idea> [04:06] the return from poll says : there is no any fd ready for read/write [04:06] in ffserver [10:21] I have an mkv with an AC3 and MP4 file inside. How can I mux these into an MP4? [10:22] you mean, an AC3 audio stream and a H.264 video stream ? [10:22] coz MP4 inside MKV doesn't make much sense [10:23] anyways, juste use : ffmpeg -i input_file.mkv -c copy output_file.mp4 and you're done [10:24] Hallo! [10:24] I've some troubles with afade=t=in ... it looks to me as not working, in 1.2 [10:24] but I remember it worked before. And by 'before', I don't know it worked for me in 1.2 or 1.1 [10:25] My input files are mpegts; can it be that the time reference in mpegts is somehow weird, to affect the afade start time? [10:25] The data shown by ffprobe on mpegts files includes some "Start Time = 1.330000s" and things like that, that I don't understand. [10:26] Does ffmpeg do anything in parallel? [10:30] some codecs admit multithread. -threads. [10:31] $ ffmpeg -h full | grep threads [10:36] SirDarius & viric: Thanks [10:37] as usual, it's important in what position of the cmdline you place '-threads' :) [11:24] how to save real time h.264 data and aac data to mpeg2-ts file? [11:26] anybody? [11:27] ? [11:28] how does that data reach your computer? [11:28] ... -format mpegts file.mts should work [11:29] a ip camera generate h.264 and aac data. [11:29] via tcp connection. [11:30] thanks, but command line is not my option. [11:32] I am writing an program to generate ts file. [11:32] but I don't know how to do this. [11:33] the ffmpeg program is not a big program, built over the library. You can check its source [11:34] (I only used the ffmpeg lib to get 'input', never to write output) [11:34] lei_: look at the -re option for ffmpeg [11:34] yes, I am checking, but if there are already some work about this problem, it is much better. [11:35] thanks, relaxed, I 'll check it. [11:36] interesting, -re (-report I guess) [11:36] no [11:36] ah no. -re. oops [11:36] :) [11:38] lei_: man ffmpeg | less +/^' -re ' [11:43] I am reading. [11:48] I have a h.264 es file, but I don't know how to write command line to generate ts file. @relaxed [11:49] ffmpeg -i input.h264 -c copy -f mpegts output.ts [11:50] I am kind of newbie of ffmpeg [11:51] http://trac.ffmpeg.org/wiki/x264EncodingGuide and http://ffmpeg.org/documentation.html [11:56] can the h264 be stored 'alone' into a file? [11:57] lei_: what is 'es file'? [11:57] (I'm just curious) [11:58] just write every h.264 packet into a file, it is a es file. [11:58] yes, in a elementary stream [11:58] an* [11:59] ahh. 
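[Note: on the afade=t=in problem above — with MPEG-TS input, the stream's nonzero start time ("Start Time = 1.330000s") shifts where a fade whose start defaults to t=0 lands. One thing worth trying, untested here, is resetting the audio timestamps before fading so the fade window is measured from the first sample; the 3-second duration is a placeholder:
    ffmpeg -i input.ts -af "asetpts=PTS-STARTPTS,afade=t=in:d=3" -c:v copy output.ts
The video stream is left untouched by -c:v copy; only the audio is re-encoded through the filter chain.]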
[11:59] thank you [11:59] but it sounds like you can not write both h.264 and aac into a file. could ffmpeg can not read it. [12:00] of course you can [12:00] ffmpeg can't read it. [12:01] going back to my topic, does anyone know if the mpegts "Start Time" reported by ffmpeg is relevant for anything? [12:01] I can't get afade=t=in working. the output video starts without any fade in. [12:01] hi, fflogger, i never paste a command line in this room. [12:02] thanks for your notice. [12:03] thanks relaxed, your command line works. [12:03] does relaxed's command line violate some rule? [12:04] lei_: I think relaxed just wanted you to provide full detail for your claim about ffmpeg not being able to read the mixed file. And asked the bot to suggest you a paste site. [12:04] fflogger is a bot [12:06] back to my question, I don't think you can save both h.264 data and aac data into a singlle file, I tried, ffmpeg can not read it. [12:08] ffmpeg -i input.h264 -i input.aac -c copy -f mpegts output.ts [12:09] for further support you will need to reread fflogger's message and do it so we can see the problem. [12:10] that's two file, I think a it is a right approach. [12:11] lol. [12:11] copy right suck. [12:11] sucks [13:02] hi any idea why convertign to opus fails? [13:02] http://paste.kde.org/pa492a087/ [13:11] Trax|wrk: it doesn't fail its because of cover art, [13:11] TheOneRing: so add -vn [13:42] k thx [14:18] <__blasty_> hi. I have a FLV container'd video which uses H264 video codec. Unfortunately the first 122megabytes of this 16GB flv capture have gone corrupted (all zeroes by now) [14:18] <__blasty_> is there any way, and what is the easiest way to recover the remaining video/audio data? [14:18] <__blasty_> obviously all players out there will choke on it, as they have no context information about the format etc. [14:19] <__blasty_> I have other captures made using the same setup which have *not* gone corrupted.. Is there any chance I could just jimmyrig the broken FLV by frankensteining the header of the undamaged FLV on top using a hexeditor/cat ? [14:21] __blasty_: I think you can force the format and codecs, with '-format' and '-codec', before the -i. [14:23] <__blasty_> viric: ok. Ill give it a shot, thanks [14:27] if that doesn't work you could use `dd` or `split` to find out where the curroption ends and cut out the remaining portion [14:27] er, corruption [14:31] hi guys i'm using the ffmpeg library in my app only its hard to find documentation about setting up a RTSP stream using the library [14:31] any examples anywhere? [14:33] i dont want to use the exe files because i have a the frames in memory [14:33] already encoded with nvenc [14:33] just need a streaming instance from ffmpeg [14:36] <__blasty_> viric: so im trying something very naive like: avconv -f h264 -i broken.mp4 fixed.mp4, however it cant seem to detect the duration of the broken input file properly, and thus wont write anything to the outputfile [14:37] <__blasty_> I tried forcing that a bit by giving -t NNN (some value >1) .. but it won't fall for it :-( :-P [14:41] avconv? [14:41] this is not ffmpeg [14:42] I've no idea how much avconv diverged [14:44] __blasty_: h264 is not the format. h264 is the video codec. [14:46] <__blasty_> proves ho much I know. punching 'ffmpeg' on my ubuntu machine claims it's deprecated and I should be using 'avconv' instead [14:46] <__blasty_> was sort of assuming there that was all part of the ffmpeg project [14:47] n [14:47] no [14:47] they forked. 
[14:47] <__blasty_> ok, so in my case the format is/was 'flv', and then I need to specify some codec parameters as well I guess [14:47] if you read carefully output you will see ffmpeg project is not mentioned [14:47] ask ubuntu, about their advice "this ffmpeg is bad" [14:48] <__blasty_> heh :) [14:48] or find libav support somewhere, and check yourself who is the bad. [14:48] #libav [14:49] <__blasty_> is there an easy way to derive the correct "codec parameters" from a working flv file made using same setup? [14:49] <__blasty_> 'ffprobe' perhaps ? [14:49] <__blasty_> Stream #0.0: Video: h264 (Main), yuv420p, 1280x720 [PAR 1:1 DAR 16:9], 25 tbr, 1k tbn, 50 tbc <- Now I need to turn this info ffmpeg cli opts.. heh :) [14:50] there is ffprobe documentation [14:50] and it supports various outputs [14:52] <__blasty_> durandal_1707: ok, well, I probably suck at reading this ffprobe manpage. I dont see any easy way to make it output something I can feed back into ffmpeg as cli opts again [14:52] <__blasty_> thanks for all the headsup btw! [15:00] I am trying to concat two files , they have the same h264 profile, same size, same fps, same tbr same tbn same tbc [15:00] but the resulting video is corrupted [15:00] how can i diagnose/fix this? [15:06] <__blasty_> xlinkz0: what about just: ffmpeg -i file1 -i file3 -vcodec copy -acodec copy file_out.ext ? [15:07] <__blasty_> how do I get encoder/decoder specific options? I know I can do -codecs for a list .. but.. no idea how to get encoder/decoder specific options [15:07] you can't concat like that [15:08] <__blasty_> ah that will attempt to mux them or something? Im quite new to all this :-/ [15:09] then you most likely can't help me with this :) [15:09] xlinkz0: what you use to concat? [15:09] concat demuxer [15:09] how is video corrupted? [15:09] <__blasty_> xlinkz0: but maybe you can help me! how do I get list of en/decoder specific options for a given codec? :-P [15:10] dunno, for x264 you can google it, haven't used other encoders [15:10] durandal_1707: uploading a testcase in 5 minutes [15:10] ffmpeg -h encoder=libx264 [15:10] <__blasty_> relaxed: thx. [15:10] __blasty_: specific options are usually documented [15:15] anyone use the library to stream RTSP from memory? [15:15] actually its RTP but anyhow :) [15:17] durandal_1707: do you have any favorite uploading site? [15:18] if you would like me to upload it somewhere else please do so, here's the archive : http://www.sendspace.com/file/0cj9xq [15:18] ffprobe -show_streams shows identical output except for bitrate and duration [15:23] xlinkz0: perhaps its not split at keyframe, and that could cause problems [15:27] you got my last reply? [15:27] xlinkz0: ^ [15:27] no, got disconnected [15:28] durandal_1707: please repeat [15:28] depending on how corruption looks like it may be simply because split is not done at keyframe [15:28] each file looks ok on its own [15:28] and i did not split any of them without stream copying [15:29] but parts you want to join [15:29] if i transcode the second file with x264 the same way i did with the first the only thing that changes is bitrate [15:30] so is bitrate essential for the concat demuxer? [15:30] maybe - maybe not [15:31] only one frame is corrupted? [15:31] and with what player? [15:32] durandal_1707: the first video plays fine, the second one is entirely corrupted [15:33] in the final file [15:33] you can look at it yourself.. it's only 5 mb's [15:33] did freenode filter my download link? 
[15:40] durandal_1707: did you get the download link i sent? [15:44] nope [15:44] is only one frame corrupted? [15:44] durandal_1707: sendspace.com/file/0cj9xq [15:44] no [15:44] the first part ( which corresponds to the first video ) plays fine, the second one is totally corrupt [15:45] likewise if i switch them in the concat file so 2.mp4 is first it also plays fine [15:46] in every player? [15:46] in vlc [15:47] and windows media player [15:48] and it always happens when 2nd file have lower bitrate? [15:48] the first file always plays fine [15:49] second one always is corrupted [15:49] doesn't matter the order [15:49] only thing that fixes it is to have both files transcoded with x264 but that is not possible [15:49] because i have to concat intro / outro files with footage from cameras that is stream copied [15:50] i can't transcode every video from every camera [15:50] every time i want to concat, it's too expensive [15:50] i'm not h264 expert so perhaps you should contact one and/or open bug report [15:50] what's the alternative to concat? [15:51] i don't understand the question [15:51] i am having an extremely difficult time capturing video from my webcam and keeping my audio and video perfectly in sync [15:52] i've tried not even compressing the A/V at all when capturing and then encoding later ... the "capture" command that i am using is [15:52] ffmpeg -f v4l2 -r 25 -s 640x360 -i /dev/video0 -f alsa -i hw:1,0 -g 1 -c copy -y out.avi [15:52] concat filter [17:01] hi all [17:01] can anyone help with build ffmpeg with x264 for windows? [17:03] stima: http://ffmpeg.zeranoe.com/builds/ [17:03] is there any video filter that can measure video quality in terms of pixelation "macroblockiness"? [17:06] relaxed: thnx), but i need understand how it build manually ... I have a problem with linked error: _x264_bit_depth .. i googled but didnt find any info [18:15] <_8680_> How should I delete a segment from an audio file with ffmpeg? Ive tried copying the part before the segment to be deleted and the part after, and then concatenating them, but the concatenation step spews warnings about missing timestamps. [19:43] Greetings. I'm trying to build ffmpeg 1.2.2 on a CentOS 5.8 machine, but I keep getting things like "libass not found" or "celt not found". I've checked and the -devel RPMs are present as are the headers and libraries. [19:43] This is configure complaining, by the way. [19:44] Check config.log (or it might be configure.log). [19:44] It's possible that your versions are just too out of date, as CentOS is a rather stable distro. [19:45] That's why I'm trying to build a later ffmpeg. The 0.6.5 version available for CentOS is too old to do what I need to do (test some playlist.m3u8 files). [19:46] Them links might be useful for you, if CentOS has a new enough kernel... [19:46] Nope. [root at bast1-r1 ffmpeg-1.2.2]# uname -r [19:46] 2.6.18-308.24.1.el5 [19:47] :( [20:13] rps2: 2.6.32 is the last longterm supported kernel. [20:13] which my builds support. [20:14] I'm working on it. So far I sorted out that there's a dependency on libass that wasn't installed. However, libcelt0 doesn't seem to be available for CentOS 5, so that appears out. [20:16] you can compile it yourself, or pay someone to do it for you if you really need it. [20:16] Yeah, I know, relaxed. I'm just trying to set up ffmpeg on a bunch of machines to load test a delivery platform. [20:20] and you need celt support specifically? [20:21] No, I don't think so. These files I'm trying to test are x264/AAC3 for the most part. 
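[Note: the last suggestion above, the concat filter, spelled out. Unlike the concat demuxer it re-encodes — which is exactly what xlinkz0 wanted to avoid — but it sidesteps bitstream-level mismatches between the two inputs. A sketch, assuming both files carry one video and one audio stream (drop the audio pads and a=1 if they are video-only); encoder settings are placeholders:
    ffmpeg -i 1.mp4 -i 2.mp4 \
        -filter_complex "[0:v][0:a][1:v][1:a]concat=n=2:v=1:a=1[v][a]" \
        -map "[v]" -map "[a]" -c:v libx264 -crf 18 -c:a aac -strict experimental output.mp4
]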
[20:22] all your machines run el5? [20:22] No, some are el6. [20:23] I suppose I could try a prebuild for el6 for those machines. [20:24] with ssh access I could build a static ffmpeg that would work on all of them, if you can't get it. [20:26] Give me a few more minutes. It's tedious to do this, but I'm working on it. [20:31] relaxed: The static build at http://dl.dropbox.com/u/24633983/ffmpeg/index.html worked on my el6 machines. [20:35] they must have a 2.6.32+ kernel [20:35] Yes, they do. [20:42] I have been round in circles for days, looking at samples configs, but still feel like I'm no where with this. I have a avi file, h264 with mp3 audio and would like to stream it via mp4 [20:43] Not sure the feed settings are correct, or indeed the stream settings - any advise would be greatly appreciated [23:12] I have this line to encode some video& but I get this err: Unknown encoder 'mp4' [23:12] ffmpeg -re -i /root/bigbuckbunny/big_buck_bunny_1080p_surround.avi -acodec copy -vcodec mp4 http://localhost:8090/feed1.ffm [23:12] What am i missing? [23:16] try -c:v mpeg4 [23:16] instead of -vcodec mp4 [23:16] also, if you send it to ffserver, ffserver should handle conversion [23:33] klaxa: ok, thats giving me a new error: muxer does not support non seekable output [23:34] klaxa: I am still to get this working after 3 days, sigh [23:36] codec is mostly irrelevant [23:36] Yes, you need to feed it to a file, then upload the file to your website. Sending it directly to a website isn't going to work. [23:36] also error is clear, you can not write mp4 to non seekable output, use another format [23:37] or open bug report stating that you actually need mp4 muxer that does not need seekable output [00:00] --- Tue Aug 20 2013 From burek021 at gmail.com Tue Aug 20 02:05:02 2013 From: burek021 at gmail.com (burek) Date: Tue, 20 Aug 2013 02:05:02 +0200 (CEST) Subject: [Ffmpeg-devel-irc] ffmpeg-devel.log.20130819 Message-ID: <20130820000502.D6F6718A027E@apolo.teamnet.rs> [00:28] ffmpeg.git 03Carl Eugen Hoyos 07master:037af63b3373: Fix frame width and height for some targa_y216 samples. [04:56] ubitux: ah you're done? [04:57] cool [04:58] ubitux: did you confirm that the keyframe of the vp90-2-00-quantizer-23.webm (etc.) files now all decode correctly (i.e. output matches what vpxdec produces)? [05:10] ubitux: tested, looks correct to me, nice! [05:11] ubitux: pushed [05:22] ubitux: ok, things to do... 
mc isn't yet complete (I'm currently working on that, but happy to pick up something else if you want to do this), loopfilter for inter frames but that should probably wait for mc to be done so we can actually test it (it basically just involves handling the skip flag) [05:22] ubitux: resolution switching (a little like svc), bw adaptivity for inter frames (basically mirror the keyframe code and use that for interframe-specific probabilities) [05:23] ubitux: start writing simd, add 16-pixel switchable loopfilter dsp functions for 8wd and 4wd mixes (currently we do 2 calls if that happens, one 8px 4wd and one 8wd 8px, but we should be able to merge that into one 16px variable wd (since 8wd is a superset of 4wd) for faster (sse2) simd; the same logic could also be used to create 32px (avx2) versions [05:23] ubitux: fate tests :-p [05:25] ubitux: oh, something other simd'y, write idct_add_block VP9DSPContext functions that basically are like itxfm_add, but dct_dct only, and do multiple at a time, for inter frames [05:26] ubitux: the idea is to do 2 or even 4 4x4 idct adds in a single loop [05:26] since idct4 only requires 4 word registers, so mmx; using sse2, we can do 2 at a time; using avx2, possibly 4 [05:26] I don't know exactly what that would look like, but maybe a fun experiment [09:13] BBB: i'm curious how mc works, what could i possibly do in that area? otherwise i'd like to try writing some simd, for whatever part, eventually one i haven't touched yet [09:26] I can commit what I have done so far, it's basically Y >=8x8; sub8x8 and uv are small pieces of extra wrapper code that still need doing; also, inverse transform for inter block coding isn't yet done [09:26] or better yet, I can show you a patch so you know how far it is [09:26] sec [09:28] ubitux: http://pastebin.com/KLxdPLHJ [09:31] better yet, http://pastebin.com/Un4sVc3V (minus one bug) [09:31] probably still horribly broken [09:34] ubitux: if you want, take that and finish it :) [09:34] ubitux: then I'll go ... and ... maybe take a vacation [09:35] mmh; can you commit a version that builds where i can gradually fill the gaps? [09:35] does this build? [09:35] no idea [09:35] should build [09:35] it basically printfs a few places where stuff is missing, but it mostly works, I think [09:35] it also doesn't crash [09:36] great [09:36] shall I commit it to my tree? [09:36] yeah i guess [09:37] oh, emu_edge handling is missing also, do you know how that works? [09:37] ok so i've no idea what i should start with; i guess the missing bits of Y>=7x7 and sub8x8 in inter_reconn()? [09:38] if (b->bl == BL_8X8 && b->bp != PARTITION_NONE) { [09:38] printf("Sub8x8 inter prediction not yet implemented\n"); [09:38] and [09:38] if (!b->skip) { [09:38] // y itxfm_add [09:38] printf("Inter itxfm add loop not yet implemented\n"); [09:38] start with these pieces, so the Y plane reconstructs correctly [09:38] BBB: we had a talk about that in h264 a while ago but i forgot about it; i can still read my irc logs though [09:38] ok [09:38] it's basically a motion vector that requires out of frame data [09:38] then we reconstruct a temp buffer that extends the edges [09:39] and use that for MC, instead of the reference frame [09:39] h264, vp8, etc. all use it [09:39] ok [09:40] after that, there's also the same, but for uv; it's a separate function so we can take advantage of both u and v planes being identical, so we don't need to calculate some stuff twice [09:40] vp8 does it like that also [09:42] "u and v planes being identical" ? huh? 
[09:42] (in case you're wondering, the sub8x8 code would basically be somewhat of a mirror of the branchy code in vp8.c's inter_predict(), basically branching between bp == PARTITION_V/H/SPLIT), and then doing two 8x4 predictions, two 4x8 predictions, or 4 4x4 predictions; the UV handling needs no special casing here) [09:42] uv planes identical, it uses the same mv at the same subsampling etc. [09:43] the only thing changing is the dst/src pointers [09:43] ok [09:43] see vp8.c vp8_mc_chroma() [09:43] thx, i guess i have more than enough info to start working on this; i'll start with sub8x8 [09:43] cool, I'll do bw adapt [09:43] and commit this patch to my tree [09:44] pushed [09:44] thx :) [11:23] ffmpeg.git 03Reimar Döffinger 07master:9a27acae9e6b: ogg: Fix potential infinite discard loop [11:23] ffmpeg.git 03Michael Niedermayer 07master:54e718d014e0: Merge remote-tracking branch 'qatar/master' [11:48] ffmpeg.git 03Paul B Mahol 07master:daede1e3fa75: matroskaenc: remove unneeded wavpack tag [12:12] michaelni: http://pippin.gimp.org/a_dither/0001-swscale-make-bayer-not-error-diffusion-be-the-specia.txt [12:45] ffmpeg.git 03Stephen Hutchinson 07master:76d8d2388120: Revert "doc/RELEASE_NOTES: add a note about AVISynth" [12:56] pippin, your patch breaks fate (make fate) [12:58] no wonder, I left an evil fprintf in there [12:59] or no, that is with my additional patch on top - wait a min [13:13] ffmpeg.git 03Stephen Hutchinson 07release/2.0:8d9568b4a1a2: avisynth: Support video input from AviSynth 2.5 properly. [13:13] ffmpeg.git 03Stephen Hutchinson 07release/2.0:423b87d62176: Revert "doc/RELEASE_NOTES: add a note about AVISynth" [13:18] michaelni: it is line 1206 in utils.c which is offending; though I do not see why [13:20] michaelni: maybe it has to do with "AUTO" being a new and default member of the dither enum? [13:21] and that if the error diffusion _flag_ has not been set, the enum value still is auto? [13:22] its auto probably [13:43] michaelni: if you also apply http://pippin.gimp.org/a_dither/ffmpeg-swscale-always-resolve-auto-enum.txt fate passes [13:55] iam not sure its a problem but then dither will say bayer even in some cases where no dither is actually applied [13:58] what is the reason for this full color interpolation distinction between the code-paths? 
[13:59] michaelni: making it be else c->dither = SWS_DITHER_NONE; makes fate fail in the same way [14:00] michaelni: thus the test in fate (I do not know what the parameters are) seems to expect auto to default to bayer (at least for GIF) [14:05] for my other project(s); I anticipate to eventually replace 'a dither' with some better variation of a green/blue noise LUT (unless a-dither continus incrementally improving) [14:05] IIRC its not implemented for the other codepath [14:05] and not really provide any ability to tweak the dither [14:07] (for GIMP displaying high bitdepth things on an 8bpc / 10 / 12bpc display - the pattern shouldn't be discernable by the user - thus numerical correctness/color reproduction should be all that matters) [14:08] a dither / bayer could be replaed by a LUT, i suspect they wont beat error diffusion for still images though [14:08] and bayer is not implemented on the error-diffusion code path, but implementing bayer there should be easy [14:09] michaelni: 'a dither' will not, but some of the computationally costly to construct blue/green noise masks can come close to error diffusion [14:09] error diffusion also has problems like the dimple along the top of: http://pippin.gimp.org/a_dither/error-diffusion.png [14:10] which the threshold mask based methods do not have [14:10] i know, ED is pretty "primitve" [14:10] there are better (more complex) dithers that dont suffer from such artifacts [14:10] to circumvent that artifact in ED, people often do serpentine order (alternate directions for scanlines) [14:12] about high bitdepth vissibility, therea a unfortunate problem, some (maybe most?) TFT screens arent true 24bit but use dither themselfs so viewing bayer ditherd images on a bayer dithering display can look suboptimal [14:12] I've got one of those [14:13] it alternates two different buffers temporally [14:13] if I blink, I can see a checkerboard for some colors [14:13] another problem is people that do full screen color management by tweaking the gamma LUTs of the gpu... [14:14] causing unexpected additional quantization leading to banding for optimal dithering methods [14:14] this is a scenario where mor stochastic methods degrade better [14:14] add all mentioned here: http://en.wikipedia.org/wiki/Dither [14:15] the void and cluster there uses a way too small mask [14:35] hi guys i'm using the ffmpeg library in my app only its hard to find documentation about setting up a RTSP stream using the library any examples anywhere? i have the h263ES frames in memory so i dont want to use the exe files [14:45] i'm poring eq2 right now, but filter will be named eq [14:57] cool :) [15:05] can anyone point me in the right direction? [15:06] i've searched the world's wild web but it's all wilderness [15:07] with other goals in mind and such [18:52] ffmpeg.git 03Michael Niedermayer 07master:8c50ea2251b5: swscale: set dither to a specific value for rgb/bgr8 output [18:52] ffmpeg.git 03Michael Niedermayer 07master:23b3141261b7: swscale: improve dither checks [20:00] hmm why should i do saturation in filter, hue does that already [20:01] also why not split it into contrast,gamma,brightness filters [20:02] doesn't sound like a bad idea [20:02] but wouldn't that be 2x or 3x slower if you cumulate them? [20:32] indeed [20:32] durandal_1707: what about updating hue and add more parameters, and alias the filter name to something more meaningful? [20:35] hue does not use lut at all [20:36] it could, if you had a "constant" path (expr is const debate again i guess) [20:36] no? 
[20:39] basically just like vf overlay or vignette, using a "eval" option for now [20:39] well it could use lut table, if my understanding of code is correct [20:40] it would also be much faster [21:03] indeed, converting to lut gives doesn't breaks fate [21:03] review fail [21:04] huh? [21:04] hue fate test is a varying one iirc [21:04] are you recomputing the lut for each frame? [21:05] only if lut actually changes [21:06] doesn't matter, because unless you only care for 256x256(or smaller) images its faster [21:08] actuall i'm wrong [21:08] its 16*16 [21:13] sent patch, feel free to flame^Wbenchmark [21:19] and for gamma,..etc one wants fancy expressions too? [21:20] i guess one might want it at least for brightness [21:20] (to make an advanced fade effect) [21:20] so why not the other parameters? [21:24] yes, i just need to make use of this inline assembly [22:11] huh someone f* libdvdnav, they like to rebase too much [22:17] ubitux, do you have a comment about: 0818 0:38 Matthew Heaney (1.3K)  [22:17] if not ill apply the patch this is about (if it works and looks reasonable) [22:18] my opinion is still the same; to me it's ok (from the theorical PoV, i don't remember the technical details, but i'm not a mkv maintainer anyway) [22:29] ubitux, ok will apply then [22:30] damn, that hue coverage is flawed [22:36] ffmpeg.git 03Matthew Heaney 07master:818ebe930fa4: avformat/matroskadec: add WebVTT support [22:46] j45: if you pick some commits from ffmpeg for libav, please use the correct authorship for the commit, don't use "Author:" in the commit description [22:47] ubitux: it's already said on ml [22:47] ubitux: yes, Luca instructed me in the correct procedure [22:48] thx [22:48] np [22:53] j45: do you know if your QT patch is fixing https://ffmpeg.org/trac/ffmpeg/ticket/1845 ? [22:57] ubitux: there's a good chance [22:57] cool [22:57] would be nice to test [22:59] ubitux: well i can't see how could I put other gamma/contrast/brightness without changing current hue/saturation code [22:59] it was just a suggestion, i dunno [23:02] j45: why doing so much effort btw? :) [23:02] you're likely being requested to rewrite that code anyway [23:04] we want to use avformat for muxing in HandBrake. Need to fix a few things to bring avformat mov and mkv muxers up to par with our current implementations. [23:04] isn't the feature already present in libavformat? ;) [23:06] people should rewrite things every 5 years [23:06] you mean in ffmpeg's version of libavformat? yes faststart is already there. [23:07] then you don't need to do anything ;) [23:07] except rewrite a gob of handbrake code to change libraries... again... no thanks. [23:07] it's fully compatible [23:08] you should not need to rewrite anything [23:08] key word being *should* [23:08] a test of lib switch should be easier than rewriting the faststart feature [23:09] that would actually be useful for us to know [23:10] you're welcome to try it ;) [23:10] i don't personally use handbrake, it would take me time [23:28] well if i just add contrast, brighness and gamma, should it be ok to remove mp filters? [23:30] i think the compromise didn't change [23:30] same features, same speed ? ok drop [23:31] what features would saturation from eq2 represent? 
[23:32] also this is lut only so speed is much less relevant [00:00] --- Tue Aug 20 2013 From burek021 at gmail.com Wed Aug 21 02:05:02 2013 From: burek021 at gmail.com (burek) Date: Wed, 21 Aug 2013 02:05:02 +0200 (CEST) Subject: [Ffmpeg-devel-irc] ffmpeg-devel.log.20130820 Message-ID: <20130821000502.DF77518A02FB@apolo.teamnet.rs> [00:20] ffmpeg.git 03Michael Niedermayer 07master:68b63a343201: mpegvideo: Use picture_ptr instead of picture in ff_mpeg_draw_horiz_band() [02:25] what kind of puter should i take to vdd ? [02:25] laptop? tablet? smartphone ? [03:48] ffmpeg.git 03Michael Niedermayer 07master:d9b0b54a5f8b: ffv1: rename minor to micro version [03:49] 01[13FFV101] 15michaelni pushed 1 new commit to 06master: 02http://git.io/zpEm0A [03:49] 13FFV1/06master 14ed76a5d 15Michael Niedermayer: ffv1: rename minor to micro version [03:55] ubitux: you'll also do sub8x8 mv/mode bitstream parsing right? I just noticed that's missing [03:56] ubitux: it's relatively minor but still [07:44] ubitux: (2 patches, scroll to see second) http://pastebin.com/kVz3jBg5 implements bw adapt for inter frames, and 4/8wd 16px loopfilter. I haven't tested either to see if they work, since the first requires sub8x8 bitstream parsing and the second requires time/effort and I'm lazy, so I'll do that later. I think after that I'll need to write a TODO list, maybe work on SIMD or MT or so [07:45] since my own todo list to you went offscreen so it's lost [11:12] ffmpeg.git 03Luca Barbato 07master:148fbdd1c2a2: mkv: Refactor mkv_write_packet [11:12] ffmpeg.git 03Michael Niedermayer 07master:976de369ddeb: Merge commit '148fbdd1c2a2a88a78ba9fd152c81c840bdb205a' [11:21] ffmpeg.git 03Luca Barbato 07master:98308bd44fac: mkv: Add options for specifying cluster limits [11:21] ffmpeg.git 03Michael Niedermayer 07master:ac957bc60cec: Merge commit '98308bd44face14ea3142b501d16226eec23b75a' [11:31] Daemon404: when are you available? [11:34] i dunno, i only have my (windows) laptop at home on 3g internet until the day after vdd [11:34] (i am at work right now) [11:35] and my work pc has not arrived yet... [11:36] using random servers to code [11:37] ffmpeg.git 03Luca Barbato 07master:59f595921eb2: mkv: Flush the old cluster before writing a new one [11:37] ffmpeg.git 03Michael Niedermayer 07master:d169b56b7bfc: Merge commit '59f595921eb2b848a80a74aa81b6bb43038c9ebe' [11:53] ffmpeg.git 03Martin Storsjö 07master:b886f5c2f1e7: mkv: Allow flushing the current cluster in progress [11:53] ffmpeg.git 03Michael Niedermayer 07master:e1acfd3cb054: Merge commit 'b886f5c2f1e71b3e60e4265c500158d392b4b9a4' [11:54] BBB, perhaps it's a good project for me in Libav/FFmpeg's hack room at VDD [11:55] assuming the various projects dont kill each other first [11:55] Daemon404: they will have separated rooms [11:55] thats going to be awkward for me [11:55] \o/ [11:59] j-b: and same time? 
[12:25] ffmpeg.git 03Luca Barbato 07master:395230301088: mov: Set the timescale for data streams [12:26] ffmpeg.git 03Michael Niedermayer 07master:4a6f1be170a0: Merge commit '39523030108815242178ac5e209c83070bd1baef' [12:35] ffmpeg.git 03Luca Barbato 07master:22de0f8369f1: mov: Compute max duration among the tracks with a timescale [12:35] ffmpeg.git 03Michael Niedermayer 07master:eec75e0a1fc9: Merge commit '22de0f8369f1f3edf1a55e1d275f3c07c617b53e' [12:45] Daemon404: I think it would be, but maybe by then it's done already [12:51] Action: Daemon404 sees nothing has gone to plan for his move [12:53] should've stayed in NY dude [12:53] this is NY telling you to stay put, or else... [12:54] ubitux: let me know how your stuff progresses, I'm going to remake a todo list for myself and slowly work on stuff that's, well, to be done :) [12:54] ubitux: I'm thinking we can have a roughly working decoder by the end of this week, then work on mt/simd/etc [12:56] ffmpeg.git 03Luca Barbato 07master:67400f6b6219: mov: Prevent segfaults on mov_write_hdlr_tag [12:56] ffmpeg.git 03Michael Niedermayer 07master:fb679d53743f: Merge remote-tracking branch 'qatar/master' [13:09] ubitux: Daemon404: also if you guys want, I can add a TODO file to the repo, so we have a master list of what's left to be done (and you can add TODOs to it also then) [13:25] Hi, i have an issue with my code. I try to grab images from an RTSP camera but I have some troubles with the bottom of my frames like on this image : http://my-uploads.fr/images/libavissue.png [13:25] I tried with ffplay and it works fine, I have the full image [13:26] do you have an idea? do I have to set a specific flag ? [13:38] hello, it appears that AVCodecContext::delay is not correctly mapped to lag_in_frames for libvpx [14:30] ffmpeg.git 03Michael Niedermayer 07master:3d64845600c6: movenc: ilbc needs audio_vbr set. [16:04] 142... [16:05] years [17:13] ffmpeg.git 03Michael Niedermayer 07master:6dfffe92004d: swr: clean layouts before checking sanity [17:13] ffmpeg.git 03Michael Niedermayer 07master:c56d4dab039b: swr/rematrix: Fix handling of AV_CH_LAYOUT_STEREO_DOWNMIX output [17:34] BBB: no progress so far, will start soon [17:35] BBB: ok for the TODO ofc [18:22] hello, it appears that AVCodecContext::delay is not correctly mapped to lag_in_frames for libvpx; is this a known issue? [18:30] no, its not known issue [18:50] ffmpeg.git 03Michael Niedermayer 07release/1.0:b416cb979d5d: movenc: ilbc needs audio_vbr set. [18:50] ffmpeg.git 03Michael Niedermayer 07release/1.0:3f3993ac0aef: swr: clean layouts before checking sanity [18:50] ffmpeg.git 03Michael Niedermayer 07release/1.0:739e236aed8f: swr/rematrix: Fix handling of AV_CH_LAYOUT_STEREO_DOWNMIX output [18:51] ffmpeg.git 03Michael Niedermayer 07release/1.1:cb51d9ed254d: movenc: ilbc needs audio_vbr set. [18:51] ffmpeg.git 03Michael Niedermayer 07release/1.1:6124a7edbcb0: swr: clean layouts before checking sanity [18:51] ffmpeg.git 03Michael Niedermayer 07release/1.1:daa809fd9f1c: swr/rematrix: Fix handling of AV_CH_LAYOUT_STEREO_DOWNMIX output [18:51] ffmpeg.git 03Michael Niedermayer 07release/1.2:364495a351b2: movenc: ilbc needs audio_vbr set. [18:51] ffmpeg.git 03Michael Niedermayer 07release/1.2:a94404457b0b: swr: clean layouts before checking sanity [18:51] ffmpeg.git 03Michael Niedermayer 07release/1.2:c55a09a8b65f: swr/rematrix: Fix handling of AV_CH_LAYOUT_STEREO_DOWNMIX output [18:51] ffmpeg.git 03Michael Niedermayer 07release/2.0:61dc8494d70f: movenc: ilbc needs audio_vbr set. 
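For context on the lag_in_frames question above, the libvpx wrapper already exposes the lookahead as a command-line encoder option (listed by ffmpeg -h encoder=libvpx); whether AVCodecContext::delay should additionally map onto it is the open point. A sketch with placeholder file names:

  ffmpeg -i input.mp4 -c:v libvpx -b:v 1M -lag-in-frames 16 -auto-alt-ref 1 output.webm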
[18:51] ffmpeg.git 03Michael Niedermayer 07release/2.0:f0f55e672618: swr: clean layouts before checking sanity [18:51] ffmpeg.git 03Michael Niedermayer 07release/2.0:e231f0fade11: swr/rematrix: Fix handling of AV_CH_LAYOUT_STEREO_DOWNMIX output [18:56] ffmpeg.git 03Thilo Borgmann 07master:78d2a781d092: fate: Add EXIF test. [19:29] michaelni: user thomasjones is a spammer on trac, but "Remove registered user" results in "Unknown administration panel" for me in trac. maybe that user was already removed? [19:30] what he still spams even being removed? [19:30] no, i deleted the spam attachment first and then was going to remove the user [19:31] "thomasjones mymist17 at googlemail.com 08/18/2013 10:17:33 PM" <-- this one ? [19:31] IP address: 82.26.154.161 [19:32] yes, probably. [19:32] deleted [19:33] thanks. how did you delete the user? [19:33] via admin panel for users [19:33] ah. i don't have that. [19:34] carl and me should have access to it [19:34] i see what was happening now [19:34] carl? [19:34] cehoyos [19:35] i know that... [19:37] rename vp9.c into vp9dec.c [20:59] durandal11707: weren't you talking about adding timeline support to drawtext? [20:59] i thought we added it since then [21:00] no, you were telling me to add it [21:00] mmmh [21:01] i remember now [21:01] even it is already there but in different form [21:01] there is a "draw" builtin [22:46] why ffv1dec increases f->picture_number but never use it? [23:32] ffmpeg.git 03Michael Niedermayer 07master:880c73cd7610: avcodec/flashsv: check diff_start/height [00:00] --- Wed Aug 21 2013 From burek021 at gmail.com Wed Aug 21 02:05:01 2013 From: burek021 at gmail.com (burek) Date: Wed, 21 Aug 2013 02:05:01 +0200 (CEST) Subject: [Ffmpeg-devel-irc] ffmpeg.log.20130820 Message-ID: <20130821000501.D1E5718A02F9@apolo.teamnet.rs> [00:03] Yes, you need to feed it to a file, then upload the file to your website. Sending it directly to a website isn't going to work. <-- disregard this, sending it to an ffserver feed is correct [00:12] you actually tried it? [00:15] why do i get these errors from ffserver "buffer underflow" [00:15] how can i fix? [00:16] perhaps your machine is too slow for encoding/uploading ? [00:17] provide more information [03:54] hi [09:30] Hi [09:32] while compiling ffmpeg with ./configure --enable-nonfree --enable-gpl --enable-libfdk_aac --enable-libmp3lame --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 [09:32] iam getting error [09:32] ERROR: libx264 not found [09:32] If you think configure made a mistake, make sure you are using the latest [09:32] version from Git. If the latest version fails, report the problem to the [09:32] ffmpeg-user at ffmpeg.org mailing list or IRC #ffmpeg on irc.freenode.net. [09:32] Include the log file "config.log" produced by configure as this will help [09:32] solving the problem. 
[09:32] i check in the config.log file [09:33] it is showing something like [09:33] END /tmp/ffconf.g5880ADM.c [09:33] gcc -D_ISOC99_SOURCE -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -D_POSIX_C_SOURCE=200112 -D_XOPEN_SOURCE=600 -std=c99 -fomit-frame-pointer -pthread -c -o /tmp/ffconf.onDWNg2e.o /tmp/ffconf.g5880ADM.c [09:33] gcc -Wl,--as-needed -o /tmp/ffconf.6XF0wOFo /tmp/ffconf.onDWNg2e.o -lx264 -lvpx -lvpx -lvpx -lvpx -lvorbisenc -lvorbis -logg -ltheoraenc -ltheoradec -logg -lmp3lame -lfdk-aac -lm -pthread -lz -lrt [09:33] /usr/local/lib/libx264.a(opencl.o): In function `x264_opencl_close_library': [09:33] opencl.c:(.text+0x5f1): undefined reference to `dlclose' [09:33] /usr/local/lib/libx264.a(opencl.o): In function `x264_opencl_load_library': [09:33] opencl.c:(.text+0x643): undefined reference to `dlopen' [09:33] opencl.c:(.text+0x65c): undefined reference to `dlsym' [09:33] opencl.c:(.text+0x676): undefined reference to `dlsym' [09:33] opencl.c:(.text+0x690): undefined reference to `dlsym' [09:33] opencl.c:(.text+0x6aa): undefined reference to `dlsym' [09:33] opencl.c:(.text+0x6c4): undefined reference to `dlsym' [09:33] /usr/local/lib/libx264.a(opencl.o):opencl.c:(.text+0x6de): more undefined references to `dlsym' follow [09:33] /usr/local/lib/libx264.a(opencl.o): In function `x264_opencl_load_library': [09:33] opencl.c:(.text+0x944): undefined reference to `dlclose' [09:33] /usr/local/lib/libx264.a(opencl.o): In function `x264_opencl_lookahead_init': [09:33] opencl.c:(.text+0x1916): undefined reference to `dlopen' [09:33] opencl.c:(.text+0x1931): undefined reference to `dlsym' [09:33] opencl.c:(.text+0x1948): undefined reference to `dlsym' [09:33] opencl.c:(.text+0x195f): undefined reference to `dlsym' [09:33] opencl.c:(.text+0x1973): undefined reference to `dlsym' [09:34] opencl.c:(.text+0x19bc): undefined reference to `dlclose' [09:34] collect2: ld returned 1 exit status [09:34] ERROR: libx264 not found [09:34] please help to resole this issue [09:34] Guest72696: next time dont paste in the channel......... use pastebin [09:34] sorry i don't know about pastebin [09:35] ok [09:35] looks like you need to add -ldl [09:35] --extra-ldflags="-ldl" [09:35] where? [09:36] in the configure line ? [09:36] u r saying while executing ffmpeg ./configure [09:36] that is correct [09:36] ok [09:36] let me check once [09:37] done [09:37] did it work? [09:38] yes [09:38] good [09:38] let me complete whol;e installation [09:38] iam getting error after completing ffmpeg installation also [09:39] right now iam executing make [10:03] how come ffmpeg's configure does not end? (trying to configure for cross compiling case) [10:03] it will end [10:04] how long time have you waited? [10:07] Hi. I'm having troubles preparing media for Cinelerra, from Cannon HD (*.m2ts) to DNxHD.. [10:08] Can somebody help me? [10:08] 35 minute the most [10:08] strangely at first tried only specifying --cc=..., and then make threw some strange things i assumed assembling things [10:09] then specified --as=... , --ld=... and other things [10:09] but is my first cross compilation ever so God help me now [10:12] So, nobody knows hot to use ffmpeg to convert m2ts to DNxHD? [10:12] never heard of DNxHD [10:12] :/ [10:12] :) [10:13] Ok, I'll try to rebuild the question. [10:13] I have HD 16:9 video in m2ts format, and I need to convert it to be used by Cinelerra (http://heroinewarrior.com/cinelerra.php). [10:14] Cinelerra needs a format without loss (not jpeg). [10:15] They recommend DNxHD, but It supports more. 
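On the DNxHD question above: ffmpeg's dnxhd encoder only accepts specific combinations of frame size, frame rate, pixel format and bitrate. A sketch using one combination commonly cited as valid (1080p25, 8-bit 4:2:2, 120 Mb/s); treat the exact numbers as an assumption to verify against the encoder's profile table, and the file names as placeholders:

  ffmpeg -i 00001.m2ts -c:v dnxhd -s 1920x1080 -r 25 -pix_fmt yuv422p -b:v 120M \
    -c:a pcm_s16le 00001.mov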
[10:16] niuniomartinez: what kind of error do you get ? [10:16] cinelerra doesn't support mpegts? [10:18] relaxed: May be, I'm not sure. It supports "*.avi *.mpg *.m2v *.m1v *.mov" [10:18] Morning&. when using using ffmpeg to provide an input for ffserver - do you *have* to specify the video codec, frame rate, bit rate, etc - or can you just define those things in the stream later? [10:19] spaam: I'm re-running it (didn't test since yesterday) [10:20] spaam: Errors are: [10:20] video parameters incompatible with DNxHD [10:20] what kind of videoparameters did you use? [10:20] Error while opening encoder for output stream - maybe incorrect parameters such as bit_rate, width or height. [10:21] Command line is :/usr/bin/ffmpeg -y -i "/home/.../01/00001.m2ts" -b 185M -vcodec dnxhd -flags +ilme+ildct -acodec pcm_s16be "/home/.../00001.mov" [10:21] I'm using Winff [10:21] Is a front end. [10:22] 185M ?! thats sounds much [10:23] It's only about 1.35GB a minute. :p [10:23] Yesterday I was playing with parameters, reading ffmpeg man page and such, but didn't get any progress. [10:24] Oh, wait... [10:24] I did test a change and seems to work... [10:24] Not sure what I did. [10:24] Ok, I'll se what I get and then I'll ask you. [10:24] @ thanks [10:25] Guest72696: great [10:25] its working fine [10:25] thank you very much [10:25] sounds like a "bug" in the configure script. [10:25] Ohh [10:25] it need a way to detect that you enabled opencl when you compiled it with lib264. [10:25] it need a way to detect that you enabled opencl when you compiled it with libx264. [10:26] ok [10:27] spaam, sacarasc: false alarm: cinelerra didn't read it. :( [10:29] Recommendation? [10:37] it is needed to have enabled some output devs or at least one when running configure isnt it? [10:38] otherwise ffmpeg wouldnt output anythng or would go to devnull [10:39] shur: you can check config.log ? [10:40] Ok, so I'll try "plan B": install Corel Movie Factory... :( [10:41] Thanks [10:43] that log is great! there are many ocurrences of 'outdev' [10:44] many labeled with an equal 'yes' [10:44] all the usual alsa, oss and caca [10:45] and sdl [10:47] sadly i dont know what to look for further than that. i would say that are all sound devices but the caca one which is video [10:47] and the neatest video device ever made [10:55] will ffmpeg support libvpx frame lookahead delay? [10:56] it appears the libvpx option for delaying the encoder is not mapped yet in ffmpeg [10:56] i'm developping [10:57] using the libav* libraries [11:27] Martijnvdc: send patches [11:28] did you see -auto-alt-ref? [11:28] er, I meant -lag-in-frames [11:29] ffmpeg -h encoder=libvpx| less [11:31] i do see rc_lookahead... [11:33] relaxed: oh yeah, it'ts there [11:55] relaxed: AVCodecContext has a variable called "lag", but it does not map to -lag-in-frames for libvpx [11:55] it simply doesn't do anything [11:56] Hi again. [11:57] I've finally found a way to convert the videos using Corel software... [11:57] ..but now I have another problem: [11:57] The video was recorded in 16:9 but when playin on Linux it's rendered as 4:3. Can I change this? [11:59] there is -aspect or so, that can change the DAR/SAR [12:01] man says: "-aspect[:stream_specifier] rational number (output,video) [12:01] sample aspect ratio" [12:02] is there a way to tell ffmpeg to use the audio samplerate of the source material? It seems to default to 48khz which is a bit annoying. [12:02] viric: So "ffmpeg -i ~/vid.mpg -aspect 1.7 ..." [12:05] viric: It didn't work (?) 
"ffmpeg -i 01.mpg -aspect 16:9 01-b.mpg" [12:06] I'd expect it to work. But I didn't try recently. [12:06] faemir: it should always use the sources' samplerate [12:07] viric: Ok. I see that Cinelerra allows to change the aspect ratio. May be it's not neccesary to do it with ffmpeg. [12:07] necessary* [12:12] Thank. Bye [12:20] if i want to use ffmpeg with video i need to activate at least sdl or caca as outdevs when configuring, am i right? [12:20] relaxed: apparently opus doesn't support 44.1khz so that would be why it's resampled to 48khz, so problem solved [12:22] shur: do you mean using ffplay? [12:26] ok i think that was an xyp thanks [14:38] Hi how can I tell if a stram uses 24bit or 32bit color? [14:39] yuv420p [14:39] Stream #0.0, 22, 1/90000: Video: h264 (Baseline), yuv420p, 1280x960, 1/180000, 90k tbr, 90k tbn, 180k tbc [14:46] yuv420p is the format [14:46] can you define '24bit or 32bit colour'? It is rather vague :) [14:47] Colorspace [14:47] well, yuv420p is not rgb. [14:59] yep [14:59] can't really call it 8-bit either, since it's 4:2:0 [14:59] er, 24bit that is :) [14:59] yuv420p is 16 bit [14:59] 32 bits per 2 pixels = 16 bpp [15:00] color information is per 4 pixels in 4:2:0 [15:04] not really .... [16:04] hello. anyone got any experience using ffmpeg to create both a split paned video (i.e. two videos side by side) as well as adding an audio track? [16:06] yes, its trivial [16:06] i know how to do each of those things separately - i'm using the 'vf' flag for the side-by-side videos and i can add audio to a video with 'map' but putting the two together isn't working for me (it doesn't add the audio) [16:08] i.e. fmpeg -i video_left.mp4 -i audio.mp3 -map 0:0 -map 1:0 -vf "[in] scale=160:240, pad=320:240[left]; movie=video_right.mp4, scale=160:240[right]; [left][right] overlay=160:0 [out]" -an out.mp4 [16:08] -an disables audio [16:08] when do you use -vf, and when -filter_complex ? [16:08] how you come out with such idea? [16:09] you use -filter_complex when you do not use movie/amovie filters [16:09] in case of multiple '-i', then? [16:09] yes [16:09] ok [16:09] thank you [16:10] durandal_1707: stupid me - thanks a lot! [16:11] my original command was supposed to remove audio, i forgot what the -an was doing [17:43] Hi there! I created some random positioned PNG to be used as watermark. Then I generated a 2min movie with fade-ins and fade-outs with these PNGs. Now, it is possible to add this watermark movie to another movie, looping during the whole movie? [17:45] how many frames have this watermark movie? [17:45] 650 [17:46] unless you loop it manually there is currently no way [17:47] right& what about doing the filter_complex instruction over and over again? [17:48] like, giving lots of inputs and defining fade[in/out] with filter_complex and frame offset? [17:48] only if you loop watermark into big file or filtering overlay over and over again [17:49] and by filtering i mean splitting vidoe you want to add watemark to [17:49] there is no simple way to overlay dynamic output that will loop when end is reached [17:49] hmmm& splitting. is there any easy way of doing that? [17:50] yes split video you want to filter into clips of 650 frames each [18:42] Hi folks, i'm trying to stream audio to start, from a laptop i have at home to vlc on my work laptop. 
[18:43] I run vconv -f alsa -i hw:0,0 -acodec libmp3lame -ab 32k -ac 1 -re -f rtp rtp://86.6.148.64:1234 and it seems to boot up [18:43] this is on the laptop at home [18:43] now, i've ssh'd from work to another box on my home network (little home server) and i tried -L 1234:localhost:1234 and -R 1234:localhost:1234 with the ssh connection [18:44] 86.6.148.64 is the home server network ip, so the laptop ffmpeg process should be broadcasting the rtp stream to that ip [18:44] can i forward the rtp stream this way? [18:45] avconv is not ffmpeg [18:50] hm, that's annoying. [21:05] hey [21:06] can someone here point me in the right directing for encoding video uploaded in portrait mode (like from iphone/android), to make it playable in the web in 640*480? [21:11] >vertical video [21:11] just encode to webm [21:11] webm only accepts vp8 and vorbis audio [21:12] afaik it's the encouraged html5 video standard although vp8 is not as sophisticated as h264 [21:12] hey guys, I need to use -map to get the segmenter to work, but I want to keep the default behavior of choosing the best video and audio track (as opposed to the first) [21:13] all of the segmenter examples use -map 0 but I don't want to copy additional streams [21:13] right now I have -map 0:v:0 -map 0:a:0 but that won't pick the best video and audio stream, just the first [21:14] I'd like something like -map 0:v:a, where a is "auto" [21:15] i've never used the segment, but i assume the -map is required? [21:16] yep unfortunately [21:17] every single example uses -map, and without it you get an error [21:17] sounds like a feature request [21:18] yeah, I just wanted to ask before I started hacking it in, because I only just figured out how -map works [21:30] unfortunately, i have to deal with encoding vide to both mp4 and webm ...I'm running into this problem with vertical videos [21:30] http://pastebin.com/r5C2QYmr [21:32] jayjay: if the player is fixed to 640x360, then you're going to have to add black bars to the side (like Youtube) [21:33] yea im fine with adding black bars [21:33] i just dont know how to do it [21:33] if you control the player size, then changing it to 360x640 for rotated videos is pretty easy [21:33] yeah the player is fixed [21:33] like youtube [21:33] something like [21:33] oh, just do the scale first [21:34] wait, nvm [21:34] -vf transpose=1,scale=-1:360 [21:35] you'll have to use if statements if you want to prevent the width from going larger than 640 [21:35] http://trac.ffmpeg.org/wiki/Scaling%20(resizing)%20with%20ffmpeg [21:35] but yeah, order matters for video filters so the transpose is applied first then the scale [21:35] jayjay: if you absolutely need black borders, look at the padding video filter: http://ffmpeg.org/ffmpeg-filters.html#pad [21:36] let me try this ..i hope it works [21:42] hem..still doesnt work [21:42] intermediate.mp4: Invalid data found when processing input [21:42] here's what i tried: ffmpeg -i intermediate.mp4 -vf scale=-1:360 finalout.mp4 [21:43] doing transpose/scale in one pass i get a different problem [21:43] Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height [21:46] what is the difference between YUV and YCbCr in typical video coding jargon? 
[21:52] jayjay, you have to set codecs [21:53] and you can't copy if you're doing a filter [21:53] aah okay, let me try again then [21:55] teratorn: most people say YUV when they mean YCbCr [21:55] since the real YUV is purely analog [21:56] it's just 'colloquial' versus 'official' [21:56] for example yuv420p is not planar YUV, it is planar YCbCr [21:58] hmmm [21:59] what does planar mean there? that there are multiple planes? [22:05] yes [22:06] one memory plane with the luma, and two others for chroma [22:06] the other alternative is interleaved [22:09] in case anyone is interested, this looks like it somehow worked "ffmpeg -i input.mp4 -vcodec libx264 -acodec libfaac -vf "transpose=1,scale=640:360" output_fun.mp4" , i do get the right size but the video is streteched...using scale=-1:360 causes errors [22:09] like [22:09] width not divisible by 2 (203x360) [22:10] yeah, that's pretty common, one sec [22:12] -vf scale=trunc(oh*a/2)*2:360 [22:12] that makes sure the width is an even number so you can use h264 [22:13] https://ffmpeg.org/trac/ffmpeg/ticket/309 [22:13] axorb: s/h264/4:2:0 YCbCr/ [22:13] ^ [22:14] because H.264 supports plenty of other chroma subsampling modes [22:14] and 4:4:4 mode supports RGB colormatrix, too [22:14] would you still suggest rounding? [22:15] hi all, any help with this would be much appreciated: http://bpaste.net/show/124611/ [22:15] since people mostly want 4:2:0 for hw decoding support and non-libavcodec decoder support in general, yes -- some way of putting the resolution to mod2 would be exactly what most people want [22:15] axorb you're a life saver! [22:16] just nitpicking on the little things :) [22:16] oh no, I was only asking because I'm rounding now and wanted to know if there was a better option [22:17] having an issue with 'unable to parse option value -1 as pixel format' on h264 avc1 input format - have tried -analyzeduration and -probesize with max_int [22:17] jtriley: any time I've had that issue it's been bad input [22:17] i can play it with mplayer but can't seem to transcode for the life of me [22:18] axorb: yea that's what i've been reading [22:18] "multiple edit list entries, a/v desync might occur, patch welcome" [22:19] the second one being the one without a pixel format [22:20] axorb: best part is it's 1.8GB so takes forever to try things out...handbrake seems to transcode it (begins to produce non-zero output file) but then dies towards the end miserably [22:21] so likely just a broken file but does seem odd that I can play it but not transcode it although that might be a common scenario? [22:21] yeah, some video editors produce files like this when concating [22:21] try -pix_fmt yuv420p before -i [22:21] axorb: sadly, i've tried that too [22:21] both before and after [22:22] and both before and after at the same time [22:22] thanks for the feedback though, good to know i'm not missing something horribly obvious [23:28] axorb: interestingly enough i can convert the video using vlc and then ffmpeg can transcode vlc's output [23:40] not sure what vlc is doing differently... 
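Putting the pieces above together, a sketch of the pillarboxed portrait-to-640x360 command (input and output names are placeholders; keep or drop transpose=1 depending on whether the source is physically stored rotated, and libfaac assumes the same build used in the command above):

  ffmpeg -i input.mp4 \
    -vf "transpose=1,scale=trunc(oh*a/2)*2:360,pad=640:360:(ow-iw)/2:0" \
    -c:v libx264 -c:a libfaac output.mp4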
[23:47] axorb: ok figured it out - for me -map 0:0 -map 0:1 worked [23:47] there was a third video stream and a fourth subtitle stream [23:48] removing those with the explicit map commands seems to work [00:00] --- Wed Aug 21 2013 From burek021 at gmail.com Thu Aug 22 02:05:02 2013 From: burek021 at gmail.com (burek) Date: Thu, 22 Aug 2013 02:05:02 +0200 (CEST) Subject: [Ffmpeg-devel-irc] ffmpeg-devel.log.20130821 Message-ID: <20130822000502.D669018A036A@apolo.teamnet.rs> [00:35] ffmpeg.git 03Michael Niedermayer 07master:97e165cdae9b: avformat/unix: include sys/socket.h [01:01] ubitux: u changed address [01:01] yes [01:01] u evil [01:01] still the same gpg key though [01:01] maybe i need to update a few files for copyright [01:31] ubitux: yes finish it! also I'm out for 3 days so expect no response from me for a whil [01:31] ok, have fun :) [01:31] "finish it" yeah well i'll try ;) [01:35] it's a calling ;) [01:35] so inter frame decoding haven't started at all and summer is already over? [01:36] ? [01:36] no I think inter frame decoding is pretty much finished, itxfm_add loop is missing and ubitux is slacking at adding it [01:37] and some sub8x8 block special case loop [01:45] hmm why it appears someone does not like whitelines [01:47] nvm, its fine [06:11] Is the AAC spec really hard to comply with? Is that the reason there hasn't been a decent LGPL AAC encoder? [07:47] Zeranoe, I'd rather say it's because of the effort and time one would have to put into writing psychoacoustic code (and possibly implementing some features that might or might not be in the encoder already) :) [10:30] ffmpeg.git 03Michael Niedermayer 07master:be03912a7805: avformat/unix: reshuffle #includes [10:44] ffmpeg.git 03Diego Biurrun 07master:64af59bc4916: avformat: Fix references to removed av_close_input_file in Doxygen [10:44] ffmpeg.git 03Michael Niedermayer 07master:9ab8de4b2390: Merge commit '64af59bc4916fac5578b31c89da13c30b591bddf' [10:52] ffmpeg.git 03John Stebbins 07master:db03cb37fd96: movenc: Allow chapter track in default MODE_MP4 [10:52] ffmpeg.git 03Michael Niedermayer 07master:8d63aaed1efe: Merge commit 'db03cb37fd9650b4a7c752d24a2e84ff27508ee8' [11:05] ffmpeg.git 03John Stebbins 07master:6c786765cd5e: movenc: Allow chapters to be written in trailer [11:05] ffmpeg.git 03Michael Niedermayer 07master:a76390d100ec: Merge commit '6c786765cd5eb794dedd4a0970dfe689b16dfeeb' [11:10] hi, I have a question about how to close a rtsp stream, I'm using this code : http://pastebin.com/7UNz9zEN and even if i use avcodec_close(vCodecCtxp); and avformat_close_input(&pFormatCtx); my handle on my camera is not released, that mean after few tries I have to reboot my camera to free the handle otherwise i can't connect to it [11:10] what am i don't release ? [11:30] ffmpeg.git 03Diego Biurrun 07master:2a61592573d7: avcodec: Remove some commented-out debug cruft [11:30] ffmpeg.git 03Michael Niedermayer 07master:58e12732dbab: Merge commit '2a61592573d725956a4377641344afe263382648' [11:44] ffmpeg.git 03Justin Ruggles 07master:545a0b807cf4: vf_fps: add 'start_time' option [11:44] ffmpeg.git 03Michael Niedermayer 07master:b69b075ac607: Merge commit '545a0b807cf45b2bbc4c9087a297b741ce00f508' [11:50] saste: i know i insist, but you should really use do { ... } while (0) form for those macro [11:50] ubitux: noted [11:50] it's making an empty statement in the current state (double ;;) [11:50] (don't we have compilers chocking on those?) [11:51] (well, not double ;; but { ... };) [12:12] ffmpeg.git 03Martin Storsj? 
07master:4f2b469da5e4: Add a libfdk-aac decoder [12:12] ffmpeg.git 03Michael Niedermayer 07master:614cf1a6133a: Merge commit '4f2b469da5e4ae221718ae479f6af627cfdebb91' [12:17] ffmpeg.git 03Diego Biurrun 07master:f34de1486aa0: h264data: Remove unused luma_dc_field_scan table [12:17] ffmpeg.git 03Michael Niedermayer 07master:e9cb43c6f67c: Merge commit 'f34de1486aa0eb147d46ba5d2cb86a17407bb7ce' [12:32] ffmpeg.git 03Diego Biurrun 07master:c4e43560fe66: h264data: Move some tables to the only place they are used [12:32] ffmpeg.git 03Michael Niedermayer 07master:16466d92b9f7: Merge commit 'c4e43560fe6677e9d60bfb3cffc41c7324e92a0b' [12:39] ffmpeg.git 03Diego Biurrun 07master:8fed466b0a7d: h264_ps: Drop commented-out cruft [12:39] ffmpeg.git 03Michael Niedermayer 07master:8299ed261a95: Merge commit '8fed466b0a7d636ae5035f9c6074fba9a621539b' [13:03] ffmpeg.git 03Diego Biurrun 07master:330ad1f6a53a: h264_ps: K&R formatting cosmetics [13:03] ffmpeg.git 03Michael Niedermayer 07master:e853cf53256f: Merge commit '330ad1f6a53a37dec228cb424ca57e1268fafc64' [13:09] ffmpeg.git 03Diego Biurrun 07master:c18838f5eb7d: h264_ps: Use more meaningful error values [13:09] ffmpeg.git 03Michael Niedermayer 07master:70a73213b787: Merge commit 'c18838f5eb7d7001a9dc653f5162868c04c1b2a1' [13:16] ffmpeg.git 03Diego Biurrun 07master:e95930eda18e: avcodec/utils: Simplify a condition that combines HAVE_NEON and ARCH_ARM [13:16] ffmpeg.git 03Michael Niedermayer 07master:3d842cf8273f: Merge remote-tracking branch 'qatar/master' [13:52] how can I give private options to x264 from ffmpeg? [13:57] xlinkz0: you can use "-x264opts" to pass options, ie. "-x264opts keyint=123:min-keyint=20" .. multiple key/value pairs separated by : [13:59] thanks [14:00] just don't give ffmpeg multiple -x264opts parameters [14:01] yeah that wont work, just one [14:01] i had to learn that the hard way [14:12] GoaLitiuM: you mean do "-x264opts opt1=val1:opt2=val2" and not "-x264opts opt1=val1 -x264opts opt2=val2" ? [14:12] or just don't give it two opts at all [14:12] xlinkz0: yes [14:12] only have one combined var list [14:12] ok [14:33] ffmpeg.git 03Michael Niedermayer 07master:ca7f637a1eb7: doc/filters: move fps filter start_time item to correct place [17:39] ffmpeg.git 03Paul B Mahol 07master:e6876c7b7b55: lavfi/hue: use lookup tables [17:51] ffmpeg.git 03Stefano Sabatini 07master:5ae3563359cd: lavf/tee: add special select option [17:51] ffmpeg.git 03Stefano Sabatini 07master:71c5f9d29c9e: doc/muxers: add elaborated example for the tee muxer [17:53] those tee argmuents are really amazing [20:14] ffmpeg.git 03Thilo Borgmann 07master:ffa18de2e6cc: configure: Add exif to CONFIG_EXTRA. [20:41] ffmpeg.git 03Paul B Mahol 07master:9a5aa2c48e35: avcodec/mdec: use init_get_bits8() [20:49] why fate-suite/qtrle/Animation-16Greys.mov gives black pixels? 
[21:34] ffmpeg.git 03Paul B Mahol 07master:925d0837b9f0: qtrle: use bytestream2_get_buffer() [21:34] ffmpeg.git 03Paul B Mahol 07master:d5f547389b45: qtrle: use uint16_t and (u)int8_t instead of unsigned short and unsigned char [21:34] ffmpeg.git 03Paul B Mahol 07master:5c9d44d66bd4: qtrle: use memcpy() [21:34] ffmpeg.git 03Paul B Mahol 07master:71c378984b0b: qtrle: make code independent of sizeof(AVFrame) [22:05] ffmpeg.git 03Paul B Mahol 07master:e7834d29f2a8: lavfi/separatefields: fix frame leak [22:31] ffmpeg.git 03Paul B Mahol 07master:920046abf192: loco: use init_get_bits8() [00:00] --- Thu Aug 22 2013 From burek021 at gmail.com Thu Aug 22 02:05:01 2013 From: burek021 at gmail.com (burek) Date: Thu, 22 Aug 2013 02:05:01 +0200 (CEST) Subject: [Ffmpeg-devel-irc] ffmpeg.log.20130821 Message-ID: <20130822000501.CE60F18A0369@apolo.teamnet.rs> [00:04] jtriley: you probably want the second video stream, as it's probably a continuation of the first [00:04] but if you can't get it to work, then that's good enough [00:04] although I'd suggest -map 0:v:0 and -map 0:a:0 instead [00:04] or -map 0:a [06:00] I was attempting to give -acodec pcm_s16le -ac 2 -f s16le output to directly to an alsa handle with hw params SND_PCM_ACCESS_RW_INTERLEAVED SND_PCM_FORMAT_S16_LE, 2 channel [06:01] so, firstly, -ac 1 output with a single-channel playback handle sounds perfectly fine [06:01] but once I try to playback two-channel audio, things sound completely garbled [06:02] with -f s16le, how are the two channels physically written? [06:03] or, better yet, how can I specify the interleave format I want? [08:58] when i specify the custom as, configure does not end.. [08:58] if i specify only cc, cxx, nm, ar and ld, configure ends but am not sure it would be calling a correct as [08:58] this is cross compiling to android [08:59] if all those tools have a common cross-prefix you could set that [09:02] i thought about, and looked for --tolchain, that looking into the configure, wouldnt seem to be, but now am finding a --cross-prefix, maybe is that which i need to use, is that the point? [09:02] am cross compile newcomer btw [09:05] wow it ended now! [09:05] thnks [09:05] no pkg-config but let us hope i dont need it [09:43] woa! it seems to have worked! [11:24] but it seems am having prablems to get the static items and not the dynamic [11:25] both unspecifying static and dynamic and also doing --disable-shared --enable-static [11:52] hi, I have a question about how to close a rtsp stream, I'm using this code : http://pastebin.com/7UNz9zEN and even if i use avcodec_close(vCodecCtxp); and avformat_close_input(&pFormatCtx); my handle on my camera is not released, that mean after few tries I have to reboot my camera to free the handle otherwise i can't connect to it [11:52] what am i don't release ? [12:03] is it the git HEAD static build working? [12:18] guys i've got NVENC encoded frames and i want to use ffmpeg to stream them over the network [12:18] can i just put the encoded frame into an AVPacket? [14:09] anyone online? [14:12] why are there both branch/release/2.0 and tags/n2.0? isnt taht confusing? good one is tags/n2.0 isntit? [14:21] good one is git clone --depth 1 git://source.ffmpeg.org/ffmpeg [14:22] hi all! [14:22] I have a little question [14:23] saw I have a mp4 with x264 vid and vorbis audio codec stream and would like to convert it to mp4 with libAAC audio stream, is it possible without video re-encoding? 
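Going back to the cross-compiling question above, a sketch of a configure line built around --cross-prefix; the toolchain prefix and sysroot path are assumptions that depend on the NDK setup:

  ./configure \
    --enable-cross-compile \
    --cross-prefix=arm-linux-androideabi- \
    --arch=arm --target-os=linux \
    --sysroot=/path/to/android-ndk/platforms/android-14/arch-arm \
    --disable-shared --enable-static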
[14:23] it's clear that auto stream has to be fully re-encoded [14:24] use -c:v copy [14:28] but will that produce a streamable mp4? [14:28] why are there both branch/release/2.0 and tags/n2.0? isnt taht confusing? good one is tags/n2.0 isntit? <-- puristic question, not a particular problem [14:28] No. You'll need -movflags faststart for that, CentRookie. [14:29] (As well as.) [14:29] ok [14:29] if thats all, then its pretty straight forward [14:31] by the way is there a difference between nero aac and faac lib, in terms of quality? [14:35] Nero is one of the better free (as in no money) AAC encoders. faac isn't. [14:38] i see, hm, makes me want to search for nero aac for linux [14:39] You get it in the same zip file as the windows ones. [14:39] oh [14:39] do i need to compile it? [14:39] No. [14:44] im sorry to ask, but im not sure where to put the neroaac files pathwise, my ffmpeg is in usr/ffmpeg/ , when i check faac it seems to be in a lot of paths [14:44] i have centos6 [14:44] IIRC, ffmpeg can't use NeroAacEnc, so it doesn't really matter. [14:44] oh, how about mencoder? [14:45] Pretty sure that can't either. [14:45] hey [14:45] why is the option '-rtsp_transport' missing in ffplay documentation? [14:45] it was important for me [14:46] i see, neroaac is closed source [14:46] hey [14:47] guys i've got NVENC encoded frames and i want to use ffmpeg to stream them over the network [14:47] anyone got any examples of rtsp streaming from library? [14:48] not using ffmpeg.exe :) [14:49] hm so there are no alternatives for mobile and streamable files than to use faac ? [14:49] not speaking of mp3 of course [14:51] sarcastic, there seem to be ways to make ffmpeg work with neroaacenc on windows [14:51] using -f wave - | neroAacEnc [14:51] *wav [14:53] does somebody know where i should put the audio codec lib file in ffmpeg? [14:53] so that it is loaded when i use it as lib [14:55] I do not understand what you mean. [14:56] like you said, i can pipe the audio to neroEnc [14:56] It has nothing to do with ffmpeg, though. You can put it wherever you want, if you call the full path or put in your PATH and just use the executable name. [14:57] ok, but still it would be most basic if both files are in the same folder of /ffmpeg, right ? [14:58] I'd put it in ~/bin myself and have that in the PATH. [15:13] ok, tried that [15:13] sigh, not so easy at all :D [15:14] What are you streaming to? [15:14] to browser and mobile devices [15:14] just want to make sure that it s mobile ready [15:14] so im careful about compatibility [15:15] opera for example supports mp4 streaming, but not skipping, while firefox natively supports mp4 [15:15] opera requires a plugin for that [15:25] since there are some real encoding cracks here [15:26] Would it be ok to discuss encoding parameters? [15:27] Does anyone here have experience with using ffmpeg in applications? Would like to stream video from a server to a client, both written by myself. The server gets video frames from a camera with variable frame rate. Low latency is required. Any best practices? [15:29] what output quality and resolution are you looking for? 
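A sketch of the copy-the-video, re-encode-only-the-audio command discussed above (file names are placeholders; the AAC encoder choice is picked up just below, and libfdk_aac here assumes a build that includes it):

  ffmpeg -i input.mp4 -c:v copy -c:a libfdk_aac -b:a 128k -movflags faststart output.mp4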
[15:30] CentRookie, currently the best available AAC encoder for ffmpeg is the libfdk_aac one [15:30] internal is probably better than libfaac as well [15:31] thanks, mavrik, gonna look into that [15:31] CentRookie, 752x480, good quality [15:31] check out this: http://blog.mmacklin.com/2013/06/11/real-time-video-capture-with-ffmpeg/ [15:31] thanks [15:32] a lot of real time podcast application probably uses ffmpeg or some sort for real time streaming, if they support 264 [16:11] Got a problem with mmacklins example. [16:12] $ ffmpeg -r 60 -f rawvideo -pix_fmt rgba -s 1280x720 -i - -threads 0 -preset fa [16:12] st -y -crf 21 -vf vflip output.mp4 [16:12] ffmpeg version N-55644-g68b63a3 Copyright (c) 2000-2013 the FFmpeg developers [16:12] built on Aug 19 2013 20:27:12 with gcc 4.7.3 (GCC) [16:12] configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libcaca --enable-libfreetype --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libr [16:12] tmp --enable-libschroedinger --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxavs --enable-libxvid --enable-zlib [16:12] libavutil 52. 42.100 / 52. 42.100 [16:14] oops. [16:15] pastebin ;) [16:17] geez [16:17] i give up [16:18] neroaacenc is too restrictive [16:18] would need to do 3 steps encoding to get 1 single video file [16:18] Or... Script it! [16:19] extract audio and encode as wav, encode with nero to aac, merge with video stream [16:19] and i bet somewhere among those steps there will be asynch audio [16:19] like when the input file has varialbe bitrates [16:19] love11ven JOIN [16:19] > love11ven JOIN [16:19] > like this: http://pastebin.com/cpuZdtCg [16:20] even scripting it would be too slow, as you would need to write out the audio stream first, instead of caching it in memory [16:20] when you got 5000 video files, that would be 15-20000 step encoding steps [16:20] probably double or tripple the amount of read and write [16:21] just for slighty better audio quality? [16:21] ffmpeg -i blah.mkv -vn -f wav - | neroaacenc -o cheese.mp4 && ffmpeg -i blah.mkv -i cheese.mp4 -map 0.0 -map 1.0 -movflags faststart final.mp4 [16:21] in theory it should work [16:21] but doesnt [16:21] for example if the video file uses vorbis, ogg, somehow you cant pipe it as wav [16:22] but i can pipe it as ogg [16:22] but neroaac only accepts wav [16:22] so i need to write out the file [16:22] anyway, faac is not so bad [16:23] XD [16:23] until somebody shows me a better solution [16:23] mkfifo temp.wav && ffmpeg -i blah.mkv -vn -f wav temp.wav && neroaacenc temp.wav -o cheese.mp4 && ffmpeg -i blah.mkv -i cheese.mp4 -map 0.0 -map 1.0 -movflags faststart final.mp4 && rm temp.wav [16:23] what does mkfifo do [16:24] It makes a named pipe. [16:24] i see [16:24] clever [16:24] but && is basically still the same, 3 step encoding problem [16:25] Remove the first &, just leave it with 1 there, and I think it should do the two audio steps at the same time. \o/ [16:25] hm [16:26] But, I have no linux box with which to test at the moment. 
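A sketch of the named-pipe variant being worked out above; the neroAacEnc -if/-of flags and all file names are assumptions to double-check against its own help output:

  mkfifo temp.wav
  ffmpeg -y -i input.mkv -vn -f wav temp.wav &    # writer blocks until something reads the fifo
  neroAacEnc -if temp.wav -of audio.m4a           # assumed flags; see neroAacEnc -help
  wait
  ffmpeg -i input.mkv -i audio.m4a -map 0:v -map 1:a -c copy -movflags faststart final.mp4
  rm temp.wav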
[16:26] i think the 4th command should contain -i temp.wav [16:26] not cheese.mp4 [16:26] or you are inputting 2 video streams [16:27] The MP4 in this case is the output from nero. [16:27] Not a video. [16:27] ah, then it should be m4a [16:27] nero only outputs audio [16:27] Not really. [16:27] it doesnt recognize mp4 [16:28] but i get it [16:28] could work that way [16:28] would still be read mkv, extract to wav, read wav, write m4a, read mkv, read m4a, write mp4 [16:29] compared to read mkv write mp4 [16:30] The nero docs say it can write to mp4. Considering m4a is just mp4 with a different name, why wouldn't it? :D [16:30] hit me XD [16:30] it still doesnt [16:30] it says mp4 output not recognized [16:31] might also be just my noobish skills [16:32] but yeah, i abandoned it [16:32] go kill yourself, neroaacenc [16:32] i thought it was opensource [16:32] shouldnt they make it better? [16:32] You could use the lib-fdkaac encoder, which you can use with ffmpeg, though you might have to compile yourself or use a static build. [16:33] hm [16:34] how is the compatibility to older systems? [16:35] do you need some special aac codec to enjoy the higher quality? [16:35] as viewer [16:35] No, it should all create normal AAC, it's just different encoders are better than others. [16:35] i see [16:36] it supports the new he profiles [16:36] but those should be backward compatible right? [16:36] Action: sacarasc shrugs. [16:36] I don't do much with aac. [16:36] i see [16:36] do you fix async video and audio from time to time? [16:36] im having trouble with itsoffset [16:37] no matter where i put that variable and what time parameter, it still is async like the original file [16:37] tried all differnet mapping permutations lol [16:37] in hope it would fix it [16:37] does itsoffset work with c:v copy and c:a copy? [16:38] It should. [16:38] then i really dont know what im doing wrong [16:38] it feels like a curse [16:39] whenever you think you solved one problem, another problem pops up [16:39] ffmpeg -y -i Pacific.Rim.2013.R6.HDCAM.mp4 -itsoffset 00:10:10.000 -i Pacific.Rim.2013.R6.HDCAM.mp4 -vcodec copy -acodec copy -map 0:1 -map 1:0 pacific-sound-delayed-10s.mp4 [16:39] ignore the obvious filename related issue [16:40] still gives me the same audio delayed audio [16:40] the original mkv on the other hand is in sync [16:41] goal was to dely video for 10 min, so that audio is 10 min faster [16:41] but for some reason, ffmpeg ignores the itsoffset command [16:41] putting the audio and video to the original timing [16:42] CentRookie, the new HE profiles aren't backward compatible [16:42] new ones, are the HE 2.0 ? [16:42] e.g. if you try to play HE-AACv2 audio on a non-compatible player all the low frequencies are missing [16:42] argh [16:42] i thought that much [16:43] how about HE-AAC [16:43] but you need to tell fdk_aac explicitly you want a HE-AACv2 profile [16:43] HE-AAC and HE-AACv2 are mostly the same with exception of SBR [16:43] (which means HE-AACv2 mono is HE-AAC) [16:43] so HE-AACv1 would also create missing low frequencies? 
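Returning to the -itsoffset attempt above: the offset applies to the input that follows it, so map the stream you want delayed from that second input. A sketch that delays only the audio by ten seconds while stream-copying (swap the two -map lines to delay the video instead):

  ffmpeg -i input.mp4 -itsoffset 10 -i input.mp4 \
    -map 0:v:0 -map 1:a:0 -c copy output.mp4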
[16:43] CentRookie, HE-AACv2 is widely supported though, so I wouldn't worry much about it [16:43] yeah [16:44] CentRookie, or just encode into low profile and you're done [16:44] well, it is good to know, will take a note of that [16:44] there's no point in using the HE profiles for anything above about 64kbps anyway [16:44] i havent found a way to install fdk_aac yet on centos [16:44] not sure if i have to compile it totally [16:44] and then also recompile ffmpeg for that [16:45] CentRookie, yes and yes [16:45] fdk_aac has a different license and compiled ffmpeg can't be distributed if it's build with fdk_aac support [16:46] i see [16:54] Mavrik, I only found fdk for android [16:54] and a git source code [16:54] but the git source code doenst seem to play well with centos [16:55] bug in configure script [16:55] the android on sourgeforge is the right one? [16:55] cuz it says amr [17:26] you were right, mavrik, fdk is better than faac [17:26] at low bitrates that is [17:30] brilliant! sound is much better [17:34] what the heck [17:35] could it be that fdk doesnt support multicore? [17:37] too bad it's not GPL-compatible [17:37] does it mean it is not multithreading compatible? [17:37] no [17:38] it means you dont know [17:38] well at least the version i downloaded isnt [17:38] sigh [17:38] all the work for nothing [17:43] Hi, can I use libffmpeg to rtp-stream the output of x264_encoder_encode()? [17:45] unfortuantely ffmpeg is not creator of libffmpeg [17:56] do you guys know of a fast way to hardcode subtitles XD [17:56] i guess cant use c:v copy on that one [18:00] CentRookie: it is explained in wiki [18:05] drandal, i know [18:05] i know how to hardcode subtitles [18:05] i was asking if there is a way to cheat or faster way to hardcode it [18:06] but i guess since it has to be overlayed for each frame [18:06] full re-encoding is a must [18:15] works [18:15] pew [18:16] is there any video filter that can measure edges on macroblock boundries? Or can be used to measure percieved video quality? [18:17] what percieved video quality? [18:19] durandal_1707: measure macroblock pixelation/exessive blurness etc... [18:20] using single video? [18:21] CentRookie: when speed is an issue consider using x264 profiles like superfast [18:22] actually best value is always optimization [18:22] want as much quality and speed per computing time as possible [18:23] i found that for the tests i've conducted i can not see the quality drop in superfast profiles [18:23] depends on bitrate [18:23] im working at ultra low bitrate area [18:24] 1hour ~ 120mb [18:26] then why are you complaining about re-encoding? it must work blazingly fast at normal presets [18:26] well it is because of some raw footage [18:26] they were recorded with vorbis audio and later added with external subs [18:27] going through all of them takes a lot of time [18:27] so was just wondering if there is a way to fast overlay subs [18:27] but i dont think there is [18:27] so im re-encoding them fully [18:58] I have a sequence of images frame.0200.png to frame.0400.png, how can I refer to these filenames for encoding video? [18:59] -i "frame.%04d.png" doesn't work: No such file or directory [19:00] wildcards don't seem to work [19:01] cat frame.02* frame 03* frame.04* | ffmpeg -f image2pipe - [19:02] ffmpeg can't accept them as filenames? [19:03] for the %04d method, you'd need to start at 0000, and ffmpeg doesn't support globbing. 
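For anyone driving libx264 through libavcodec rather than the ffmpeg binary, the presets mentioned above (superfast and friends) are ordinary encoder-private options, so they can be set with av_opt_set() on priv_data. A small sketch; setup_x264_speed is a hypothetical helper and the values are arbitrary examples, not recommendations.

    #include <libavcodec/avcodec.h>
    #include <libavutil/opt.h>

    /* ctx must come from avcodec_alloc_context3() on the libx264 encoder,
     * so that priv_data is the x264 wrapper's option struct */
    static void setup_x264_speed(AVCodecContext *ctx)
    {
        /* same knobs as "-preset superfast -crf 23" on the command line */
        av_opt_set(ctx->priv_data, "preset", "superfast", 0);
        av_opt_set(ctx->priv_data, "crf",    "23",        0);
    }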
[19:03] globbing is supported, again read documentation [19:03] I found documentation saying how to use globbing, but when I tried it, it said it was an unknown option [19:04] because you do not use ffmpeg [19:04] or use extremly old version [19:04] hm? [19:04] It wasn't when I last used it. :D [19:05] ffmpeg version 0.8.6-6:0.8.6-0ubuntu0.12.10.1, built on Apr 2 2013 17:02:16 with gcc 4.7.2 [19:05] that is not ffmpeg [19:05] what is it? [19:05] fake ffmpeg [19:05] and what does that mean? [19:06] read output of "ffmpeg -h" [19:06] Samus_Aran: On Ubuntu and other distros, you're getting avconv from the libav project rather than ffmpeg from the ffmpeg project. [19:07] "Hyper fast Audio and Video encoder" [19:07] not that ... [19:08] Action: Samus_Aran gives durandal_1707 the Award For Excellence In Achieving Vagueness [19:09] you cleary lack skills big time [19:09] durandal_1707: you told me to read the help, whatever for? it doesn't even say what app it is [19:10] it says, you just need to read with understanding [19:10] for examples is in that output FFmpeg project ever mentioned [19:11] if you want help for avconv and ffmpeg from Libav go to #libav [19:18] sacarasc: thank you. haven't used ffmpeg recently, didn't know it was forked. [19:19] Mista_D: if you didn't know there is psnr filter, it needs 2 videos [22:12] is there a way to get the properties (Presets) of Video.mp4 file, and then apply this preset to convert other videos? [22:14] encoding properties of a video stream are not stored in the container [22:15] they aren't even usually stored in the video stream (although x264 used to do that a few years ago) [22:17] oh you're here [22:17] hello [22:19] ok. [22:25] I am in a lot of places :) [22:26] if you use x264, it can show you the encoding options if you just run strings on the stream [00:00] --- Thu Aug 22 2013 From burek021 at gmail.com Fri Aug 23 02:05:02 2013 From: burek021 at gmail.com (burek) Date: Fri, 23 Aug 2013 02:05:02 +0200 (CEST) Subject: [Ffmpeg-devel-irc] ffmpeg-devel.log.20130822 Message-ID: <20130823000502.BFB8918A00B8@apolo.teamnet.rs> [01:33] ffmpeg.git 03Michael Niedermayer 07master:3819db745da2: avcodec/rpza: Perform pointer advance and checks before using the pointers [09:53] ffmpeg.git 03Stefano Sabatini 07master:eadb3ad7580a: lavf/tee: initialize ret in parse_bsfs() [11:22] Range header in http.c triggers a bug on some icy (shout cast) servers. [11:22] When I comment out Accept and Range headers in http.c, it connects to problematic servers successfully. [11:35] zidanne: what bugs? [11:35] ok, you can try it: [11:42] http://pastebin.com/V9vnXRFN [11:43] zidanne: what, both Accept and Range? [11:44] I didn't try commenting out just 1 yet. (I commented out both headers till now). Probably Range is the responsible one. I am trying it now. [11:47] ok [11:47] I tried it an the problem is only at "Range:" header [11:47] Server does not like "Range: bytes=0-\r\n" [11:48] it shouldn't be needed (if offset is 0), but according to the code comment, it's to detect seekability with some broken servers [11:49] but if this makes some other servers not work at all, this probably shouldn't be done [11:49] zidanne: what exactly is happening? here, it doesn't close the connection, just gets... stuck [11:49] maybe a workaround is possible [11:49] I tried wireshark [11:49] server stops replying [11:50] and ffmpeg hangs at poll() [11:50] ffmpeg hangs on waiting for data.. it never comes from the server.. 
[11:50] Action: Daemon404 likes seeing giant lists of avfilter deprecation warnings [11:53] zidanne: see commit f0bb88e2bc8b20e0181 [11:54] of course it doesn't say details [11:54] because, as we all know, commit message bytes are scarce and valuable [11:55] zidanne: looks like you could set the seekable option to 0 [11:56] I'd rather have it work with all servers by default, and require setting seekable to 1 for servers where we misdetected seekability [12:03] what did you change? I couldn't find the change [12:06] try ffplay http://95.211.60.38:6004 -seekable 0 [12:07] so, no change in the code, we have to specifically add seekable 0 to options. [12:08] Maybe seekable could be 0 as default. You may not want to seek by default but play by default (for remote streams). [12:09] (but I understand that this can be a breaking change...) [12:12] no, as I explained above, if the offset is 0, the Range header is sent only to determine seekability with some broken (?) servers [12:12] but this in turn breaks connecting to your icy server [12:13] maybe the problem can be worked around by changing the order of sent headers? after all, the icy server (or http.c) doesn't disconnect, just gets stuck somehow [12:16] if changing the order changes things (It shouldn't according to http protocol definition), it means that the developer of icy server did not implemented http protocol correctly. [12:18] a correct implemented http protocol doesn't just freeze after sending headers either [12:20] I tried to contact the developer but no response yet& I will add seekable 0 to options parameter of my avformat_open_input [12:20] thanks, this at least solves the problem in my scenario where I really don't need seekability in my specific situation. [12:54] ffmpeg still picks for filter gray format instead rgb when input is pal8 [12:57] and it looks like if filter supports gbrp chain will still pick yuv444p [13:02] ffmpeg.git 03Paul B Mahol 07master:139a98be8e2c: lavfi/gradfun: support gbrp [13:08] ffmpeg.git 03Diego Biurrun 07master:0b45269c2d73: x86: h264_idct: Remove incorrect comment [13:08] ffmpeg.git 03Michael Niedermayer 07master:503aec142526: Merge commit '0b45269c2d732d15afa2de9c475d85fcf5561ac4' [13:11] git br [13:14] ffmpeg.git 03Rafa?l Carr? 
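The workaround arrived at above, "add seekable 0 to the options parameter of avformat_open_input", looks like the following in code. A sketch only; open_icy_stream is a hypothetical name, and the point is that options in this dictionary are forwarded down to the protocol, so the HTTP layer skips the Range probe that upsets some Icecast/SHOUTcast servers.

    #include <libavformat/avformat.h>
    #include <libavutil/dict.h>

    static int open_icy_stream(AVFormatContext **fmt, const char *url)
    {
        AVDictionary *opts = NULL;
        av_dict_set(&opts, "seekable", "0", 0);
        int ret = avformat_open_input(fmt, url, NULL, &opts);
        av_dict_free(&opts);   /* entries that were consumed have been removed */
        return ret;
    }

This is the programmatic counterpart of the "ffplay ... -seekable 0" test shown above.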
07master:4622f11f9c83: w32pthread: help compiler figure out undeeded code [13:14] ffmpeg.git 03Michael Niedermayer 07master:221a99aae767: Merge commit '4622f11f9c83db8a2e08408c71ff901826ca652c' [13:42] ffmpeg.git 03Cl?ment BSsch 07master:f8ef91ff3d6b: movenc: add faststart option for web streaming [13:42] ffmpeg.git 03Michael Niedermayer 07master:d68adbd666cb: Merge commit 'f8ef91ff3d6bb83d601d816ef9368f911021c64b' [14:03] ffmpeg.git 03John Stebbins 07master:fe5d5a8ffcaf: movenc: Make chapter track QuickTime compatible [14:03] ffmpeg.git 03Michael Niedermayer 07master:606a30c5a115: Merge commit 'fe5d5a8ffcafdc14c0d26eaea6464c89e120cc9e' [14:21] ffmpeg.git 03Cl?ment BSsch 07master:60198742ff85: movenc: fix detection of 64bit offset requirement [14:21] ffmpeg.git 03Michael Niedermayer 07master:25e4ec6aa14d: Merge commit '60198742ff851f11a3757c713fc75a9c19b88566' [14:28] ffmpeg.git 03Stefano Sabatini 07master:7af7b45c3802: lavf/image2: extend start_number range to accept zero [14:34] ffmpeg.git 03Diego Biurrun 07master:e7b31844f68e: x86: Split DCT and FFT initialization into separate files [14:34] ffmpeg.git 03Michael Niedermayer 07master:f903b4266381: Merge remote-tracking branch 'qatar/master' [14:35] ubitux, mateo` , if you have time please check the past few days changes introduced to mov*c by the merges [14:38] i'm slightly concerned about the co64_required() simplification [14:38] that might be correct but.. [14:38] ubitux, dont hesitate to undo/revert it if you think theres an issue [14:38] i need to look a bit more closely [14:42] my main concern being that i'm not convinced about the tracks being ordered properly in every situation [14:43] but well it's been a long time i haven't look at this stuff [15:08] michaelni: i'll take a look at it this week-end (no time until then) [15:13] ubitux: they picked up your patch ? [15:14] they cherry-picked the faststart patchset, but did a few changes [15:15] have they mentionned why ? [15:15] because it makes the code is simpler, there is a discussion on their ml [15:15] but i didn't read it closely [15:15] ok good [15:18] michaelni: why 2850 is not closed? [15:22] what about all this license violation bugs? [15:37] why 1900 is not fixed? [16:46] ffmpeg.git 03Paul B Mahol 07master:6e643239d995: pngdec: frame multithreading support [17:08] ffmpeg.git 03Paul B Mahol 07master:b1e276f8df25: lavfi/hue: allow changing brightness [00:00] --- Fri Aug 23 2013 From burek021 at gmail.com Fri Aug 23 02:05:01 2013 From: burek021 at gmail.com (burek) Date: Fri, 23 Aug 2013 02:05:01 +0200 (CEST) Subject: [Ffmpeg-devel-irc] ffmpeg.log.20130822 Message-ID: <20130823000501.B6E3918A006B@apolo.teamnet.rs> [00:22] I"m attempting to make my own h264 encoded video, and ship it out over rtsp using live555. I'm having problems, and I'm at the point where I believe that I need to make sure I'm using the h264 encoder correctly, so I want to take the encoded data and put it in a matroska file. Since I'm generating the source video myself, I can't use a cli transform or capture program to make the mkv, I need to make it programmatically. Does the ffmpeg project [00:22] provide this capability, or am I confused? [00:23] IYou should be able to do that, but I don't know how, personally. [00:24] And make sure you're using the libx264 encoder, as the h264 codec in ffmpeg is just a decoder. [00:31] thanks. 
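On the question of putting self-encoded H.264 packets into a Matroska file programmatically: libavformat does exactly this. A trimmed sketch using the newer codecpar field names (2013-era trees use st->codec instead); mux_h264_to_mkv is a hypothetical helper, error handling is omitted, and container-specific details such as whether the extradata must be in avcC or Annex B form are glossed over.

    #include <string.h>
    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>

    int mux_h264_to_mkv(const char *filename, int width, int height,
                        const uint8_t *sps_pps, int sps_pps_size)
    {
        AVFormatContext *oc = NULL;
        avformat_alloc_output_context2(&oc, NULL, NULL, filename); /* .mkv => matroska muxer */

        AVStream *st = avformat_new_stream(oc, NULL);
        st->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
        st->codecpar->codec_id   = AV_CODEC_ID_H264;
        st->codecpar->width      = width;
        st->codecpar->height     = height;
        /* matroska wants the SPS/PPS as global header data (extradata) */
        st->codecpar->extradata  = av_mallocz(sps_pps_size + AV_INPUT_BUFFER_PADDING_SIZE);
        memcpy(st->codecpar->extradata, sps_pps, sps_pps_size);
        st->codecpar->extradata_size = sps_pps_size;
        st->time_base = (AVRational){1, 90000};

        avio_open(&oc->pb, filename, AVIO_FLAG_WRITE);
        avformat_write_header(oc, NULL);  /* may change st->time_base; write pts in that base */

        /* for every encoded access unit from the encoder:
         *   wrap data/size in an AVPacket, set pkt.pts/pkt.dts in st->time_base,
         *   set AV_PKT_FLAG_KEY on IDR frames, then
         *   av_interleaved_write_frame(oc, &pkt);                                   */

        av_write_trailer(oc);
        avio_closep(&oc->pb);
        avformat_free_context(oc);
        return 0;
    }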
I am using x264 [00:31] I am modifying a program that currently works, that encodes using xvid [00:32] but when I put h264 data in the same place, I can crash vlc :-) [00:32] things happen that shouldn't, and the live555 folks think the already working program is faulty, so I need to confirm the functionality at every step, at least for my own sanity [00:41] jangle_: libavformat [00:42] works together with libavcodec if you wan [00:44] I crash in libavcodec when using vlc, so I've been trying to link vlc with a dev build of libavcoded so I can debug with gdb, but thats proving to be harder for me than I thought it would [00:45] does that sound sane? [01:13] http://pastebin.com/ePAEb6WV [01:14] need some illumination. I am setting up my x264 encoder to operate on rgb data. but apparently, my encoder isn't emmitting this specification, so I've explicitly specified to ffmpeg what the pixel format is, and I am presented with the pasted error [01:16] I'm pretty sure I've even linked ffmpeg against the same libx264 binary that I'm using for encoding&. [01:25] jangle_: use libx264rgb encoder to encode rgb data as rgb data and not yuv444p [01:26] encoder was split because it caused other users to encoder rgb while in fact they wanted yuv [01:27] so current inconsistency is because of inifinite human stupidity [01:27] I'm not sure I understand you. in my encoder setup, I specify the input pixel format to be rgb? [01:27] you're saying there's a fork of libx264 that I should use for this purpose instead? [01:29] i don't think it's a fork [01:31] so i would look for this in the ffmpeg project? [01:45] jangle_: there are 2 encoders [01:46] libx264 accepts only yuv and libx264rgb accepts only rgb [01:46] is it now more clear? [01:51] hi [01:51] is it possible to use ffmpeg to stream (with as little CPU utilization as possible) to a different machine running ffmpeg which is streaming to some online service? [02:21] RTMP, but VLC might be better at that [02:26] durandal_1707: no, but I figured out that I specify to ffmpeg to use a different encoder (-vcodec libx264rgb), which abstracts the use of libx264 to accept rgb input, not that I had to add a different block of library code to the build of ffmpeg I was using [02:26] durandal_1707: and while I don't get an error about being unable to determine the input color format, my output file is still black [02:30] is there an easy way to specify that no transcoding should be done, that ffmpeg should just stuff the input into the specified container file without modification? [02:32] jangle_: -c:v copy or with the legacy argument -vcodec copy [02:32] using -c:v is recommended though afaik [02:34] thanks, so [02:34] that looks like, -i infile -pix_fmt rgb24, -vcodec -c:v libx264rgb? [02:37] or rather -c:v libx264rgb instead of -vcodec [02:37] so instead of a black screen I get a green one now [02:37] and the file encoder format according to vlc is still wrong, but at least it is different than what it used to be [02:38] ahhh..... [02:39] avcodec in vlc is complaining of an invalid avcodec, gbrp, [02:39] ffmpeg sees that as the input codec pixel format... 
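One way around the gbrp/playback trouble above, if wide player support matters more than avoiding the colorspace conversion, is to convert each frame to yuv420p with libswscale before handing it to plain libx264. A sketch assuming RGB24 input (adjust for RGBA as needed); rgb_to_yuv420p is a hypothetical helper and error checks are dropped.

    #include <libavutil/frame.h>
    #include <libswscale/swscale.h>

    AVFrame *rgb_to_yuv420p(const uint8_t *rgb, int width, int height)
    {
        AVFrame *out = av_frame_alloc();
        out->format = AV_PIX_FMT_YUV420P;
        out->width  = width;
        out->height = height;
        av_frame_get_buffer(out, 0);

        struct SwsContext *sws = sws_getContext(width, height, AV_PIX_FMT_RGB24,
                                                width, height, AV_PIX_FMT_YUV420P,
                                                SWS_BILINEAR, NULL, NULL, NULL);
        const uint8_t *src[4] = { rgb, NULL, NULL, NULL };
        int src_stride[4]     = { 3 * width, 0, 0, 0 };    /* packed RGB24: 3 bytes/pixel */
        sws_scale(sws, src, src_stride, 0, height, out->data, out->linesize);
        sws_freeContext(sws);
        return out;
    }

In a real encoder you would reuse one SwsContext across frames rather than creating one per call.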
[02:40] thank you all, this has been illuminating [03:07] jangle_: if you see black or green in your player that means your play does not support such profile [03:07] *player [03:07] ok [03:07] thanks, tahts good to know [03:09] if vlc crash that means vlc is buggy, perhaps you use very old version [03:10] I'm using git trunk, and have reported the crash, which lands in avcoded [03:10] and if you want to support many players than you should only use yuv420p format with h264 [03:10] git trunk of vlc? [03:11] I"ve been trying to debug my encoder using this crash, thinking they were related [03:11] thanks. I was hoping to do a drop-in replacement and see if avoiding the colorspace conversion would clear up a problem I was having [03:11] yes [03:11] git trunk of vlc. [03:12] that was two weeks ago, so I haven't tried it lately, and I know what setting causes the crash so I stopped using it [03:12] its related to the use of cabac, which I don't yet fully understand, but [03:12] could see if output was correct by playing it with mplayer/ffplay [03:13] but perhaps you mean you modified x264 encoder? [03:14] did you reported crash on vlc bug tracker? [03:15] note i consider crash as serious bugs, so that's why i need clarification it does not happen with ffmpeg itself [03:17] I reported the crash on the vlc bugtracker [03:17] I haven't modified the x264 library, just trying to use it [03:18] so [03:18] try it with ffplay or mplayer? [03:19] there is a closed bug of mine I submitted, where it crashes when you play a file I made [03:19] you can try playing that file yourself [03:19] that bug was closed because vlc thinks they've fixed it for 2.1.0-pre, but I'm getting trunk now and I'm going to try it again. [03:29] I still get the crash, unless I"m not updating correctly, or i'm misunderstanding the reason they closed the bug, that it "will be fixed when a feature targeted for 2.1.0 is complete" [03:33] ok, I'm fetching ffplay [03:45] jangle_: you have link to file/bug? [03:46] sure one moment [03:46] https://trac.videolan.org/vlc/ticket/9123 [03:47] and the other 2 bug reports I filed are about the same problem, and have crash dumps in instances I was reading from a stream instead of a file [04:04] how do I build ffplay? [05:29] the suspect file plays in ffplay [05:29] and correctly too, or mostly [05:32] cant play my newer files yet though [10:46] why if i --disable-shared and --enable-config, 'file ffmpeg' keeps telling that is 'dynamically linked (uses shared libs)' ? [10:46] why if i --disable-shared and --enable-*static... [10:48] maybe are missing some --extra-libs and/or --extra-cflags, according to http://sopues.blogspot.com.es/2007/02/how-to-compile-ffmpeg-statically.html [11:07] shurnor, what does ldd say? [11:12] hi everyone [11:12] My goal is to split a XDCAM or a H264 video, frame-accurately, with ffmpeg. I guess that the problem comes from its long GOP structure, but I'm looking for a way to split the video without re-encoding it. I apply an offset to encode only a specific section of the video (let say from the 10th second to the end of the media) Any ideas ? [11:22] i love you [11:24] says not a dynamic executable (but is targeted to arm built from x86_64) [11:25] am making now a try in 5 minutes with the x86_64 targeted build [11:45] ealdeguer, well, you will have to split the video on IDR frames, there's no way around it if you don't want to reencode [11:52] how do i change the timebase? [11:56] Mavrik, thank you it makes sense. 
I saw on google a tutorial with a YUV reencoding, what is your feeling about that choice ? [11:57] I don't know what exactly is your use case or your goals [12:11] JEEB: do you know by any chance how to modify the time_base ? [12:32] am almost done preparing both static and shared versions for a ldd... [12:37] http://ideone.com/3Ch8aX [12:37] so my answer would be 'static compile yields yet a dynamic binary'? [12:42] (but dependant on seven shared objects instedof fourteen) [13:20] could someone please explain how to use the -time_base option? [13:21] wherever i put it i get errors [13:21] i tried putting it between the input and output file, i get [13:21] Codec AVOption time_base () specified for output file #0 (1.mp4) is not an encoding option. [13:21] before the input file doesn't work either, idk what to do [13:32] hello [14:14] i finally got the 100% static binaries appending -static to extra c cxx and ld flags [14:16] but i dont really know in which of both c cxx and/or ld extra flags is that mandatory ?_? [15:42] hmm [15:43] been thinking how to use https://trac.ffmpeg.org/wiki/Create%20a%20mosaic%20out%20of%20several%20input%20vides to create a video with multichannel output video, but i don't know where to place amerge [15:43] can someone help to get me started? [15:44] multi channel or multi stream audio? [15:45] multistream audio :) [15:46] so i need a clip with one video stream and 4 stream audio, let's say [15:46] i'll give it 4 different videos, each of them having one audio and one video stream [15:48] i see [15:48] and you want to go with mkv? [15:49] does your original source have only 1 stream? [15:49] i could go with mp4 video container, and aac, mp3, or ac3 audio codec [15:50] CentRookie: my source has two streams, one for audio and one for video [15:50] possibly it'll have one for subtitles, but i can ignore that [15:50] no i meant audio stream [15:50] if the original has already 4 audio streams [15:50] you can simply copy stream it [15:50] dont have to reencode [15:51] if you have 4 files with each 1 audio stream, then its probably best to extract the 4 audio streams first [15:51] yeah, the latter [15:51] 4 files with each 1 audio stream [15:51] as audio1.m4a audio2.m4a [15:51] yeah [15:51] good [15:51] you use map [15:53] ok [15:54] ffmpeg -i video.avi -i video1audiosource.avi -i video2audiosource.avi -i video3audiosource.avi -i video3audiosource.avi -i video4audiosource.avi -map 0:0 -map 1:1 -map 2:1 -map 3:1 -map 4:1 -"now come the video codec, bitrate, audio codec, bitrate parameters" finaloutput.mp4 [15:55] map options work like this "-map fileID:streamID" [15:56] where 0:0 is like : "Take the first file ( id=0 ) and first stream (video stream id=0) [15:56] so it is taking the video steam of the first -i inputfile.avi [15:57] after that it takes the audio streams (id :1) [15:57] of file 0, 1,2,3 [16:00] video.avi and video1audiosource.avi are files of similar properties? or video1audiosource.avi only having an audio source? [16:01] it doenst matter if one of those has multiple streams [16:01] you can select the file with -map fileID and the stream with -map fileID:streamID [16:01] ok, thank you, i will go try it now :) [16:02] IDs are numbered from 0 to n-1 [16:02] so 0 for first, 3 for forth file [16:03] ok [16:06] oh you can add different audio properties like aac and mp3 streams too, but just try it with one audio first [16:08] CentRookie: can i use '-map' after -filter_complex ? 
[16:08] oh, yes, seems like i can [16:10] now it would be nice to name that audio channels so they have same pids all the time [16:15] hello [16:15] can someone help me with libavformat? [16:16] i'm trying to do some demuxing/muxing without transcoding [16:17] what do you want to do [16:18] i'm trying to demux h264 video stream from big_buck_bunny_480p_h264.mov [16:18] and then mux it into some other format (like avi) [16:18] what did you try until now [16:19] getting stream frames with av_read_frame [16:19] and then writing them with av_interleaved_write_frame [16:19] dont you just want to change container? [16:20] yes, i do [16:20] from mov to .avi? [16:20] yup [16:20] then just do ffmpeg -i bunny.mov -c:v copy -c:a copy bunny.avi [16:21] i need to do this in C [16:21] this is just a first step of POC of bigger project [16:23] in C? you want to write a program huh [16:23] well thats the command line [16:23] you just need to pass that as string to ffmpeg [16:26] ok, my goal is to perform live-stream transcoding with NVENC [16:26] using FFMPEG as demuxer/muxer [16:30] i've managed to demux mpeg4 stream from an avi file and then mux it into new avi container [16:31] so this would indicate that the code is good so far [16:32] what's special about h264/mov stream/container combination, that the stream can't be just copied to another container? [16:33] nothing is special [16:33] you can put h264 stream into any container [16:33] the mov file contains tmcd stream, but pts/dts values of h264 looks fine [16:33] the question is if it is supported by the devices that should read it [16:34] are you using jwplayer or something to read the stream? [16:35] vlc [16:35] and when i put the stream to an avi container it says the file lenght is 39min [16:35] when i put it into mp4 it says 1:14 [16:35] is there a compiled ffmpeg 2.x for ubuntu precise? [16:37] ffmpeg version git-2013-08-22-3819db7 Copyright (c) 2000-2013 the FFmpeg developers [16:38] mkozjak: you can compile it yourself: http://ffmpeg.org/trac/ffmpeg/wiki/UbuntuCompilationGuide [16:38] getting libx264.c:562: undefined reference to `x264_encoder_open_135' [16:38] so that's why i'm asking :D [16:38] i have x264 built and installed [16:39] i've followed the instruction today and it worked... [16:39] how did you install x264? [16:39] inqb: from here: https://gist.github.com/faleev/3435377 [16:40] inqb: probably LD_LIBRARY_PATH is not correct or something [16:40] i had to link /usr/lib/x86_64-linux-gnu/libx264.so.120 to /usr/lib/libx264.so for ffmpeg to configure correctly [18:18] why do I see a blue icon called ffdshow in taskbar when converting/editing a video? [18:18] maybe because you installed ffdshow [18:50] anyone know if there's a guide on transcoding for adobe fms 3.5 with ffmpeg? cant seem to find any details but cloudfront rtmp streaming of ffmpeg transcoded mp4s seems to have huge audio sync issues [18:50] progressive download of same ffmpeg transcoded mp4s works fine.. [21:16] -static only seemed to be mandatory in --extra-ldflags [21:37] FFmpeg fails to decode Vorbis file posted here: http://www.hydrogenaudio.org/forums/index.php?showtopic=102350&hl= [21:37] the file has nonsensical granulepos info but even after those are fixed by stream rewrite decoding fails [21:51] how can I add a silent audio track to a file? [21:52] (for concat purposes) [21:52] I've a v+a file, then v only, then v+a again. [21:52] by encoding silence [21:53] with the right params you should be able to use /dev/zero as a src. 
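The wrong-duration symptom above (39 minutes in avi vs 1:14 in mp4) usually means packet timestamps were copied straight through instead of being rescaled from the input stream's time_base to the output stream's. A sketch of the stream-copy loop, roughly what FFmpeg's remuxing example does; copy_packets is a hypothetical name and error handling is trimmed.

    #include <libavformat/avformat.h>
    #include <libavutil/mathematics.h>

    /* assumes the output streams were created with the same indices as the input ones */
    void copy_packets(AVFormatContext *in, AVFormatContext *out)
    {
        AVPacket pkt;
        while (av_read_frame(in, &pkt) >= 0) {
            AVStream *ist = in->streams[pkt.stream_index];
            AVStream *ost = out->streams[pkt.stream_index];
            /* without this rescale the muxer misreads the timestamps and the
             * reported duration comes out wrong, as seen above */
            pkt.pts = av_rescale_q_rnd(pkt.pts, ist->time_base, ost->time_base,
                                       AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX);
            pkt.dts = av_rescale_q_rnd(pkt.dts, ist->time_base, ost->time_base,
                                       AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX);
            pkt.duration = av_rescale_q(pkt.duration, ist->time_base, ost->time_base);
            pkt.pos = -1;
            av_interleaved_write_frame(out, &pkt);   /* takes ownership of the packet */
        }
    }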
[21:53] -f lavfi -i aevalsrc=0 can do it, I saw in stackoverflow [21:56] auhm What is: [21:56] [Parsed_overlay_4 @ 0x97fd880] Buffer queue overflow, dropping. [21:56] ? [21:56] (I'm reencoding a video from a file) [21:57] overlay filter tropped some frame because sources have different pts [21:58] *dropped [21:58] pts? [21:58] filter use pts to sync both inputs [21:58] packet timestamp [21:59] ah. [21:59] mh right [22:00] so if that is not wanted, use setpts filter.... [22:01] I'll read about it [22:01] I don't know much what is that of pts :) [22:03] setpts to what streams? [22:03] I've multiple '-i' [22:07] the one you use with overlay [22:09] hm can I know if it's audio requiring that, or video? [22:09] simply to the video overlay doesn't solve it :( [22:09] overlay filter buffers video frames [22:10] but I think the trouble is with concat [22:10] iirc I already have discussion here about concatenation of silence audio [22:13] sorry, I don't see how that is related. [22:14] does any line with any loglevel report about pts? [22:14] two files have tb:1/90000, one tb:1/1000 [22:14] can it be that? [22:14] showinfo shows it among others [22:14] that is time base [22:15] here is the ffmpeg output: http://sprunge.us/HOPV [22:16] showinfo... let me check. [22:20] a mistery for me. [22:21] durandal_1707: sure isn't the timebase thing? [22:24] viric: pts uses time base [22:25] so both should be same.... [22:26] I tried adding setpts everywhere, and no success. [22:26] well, I added 'setpts' without parameters. Which clearly shows that I don't know what I am doing [22:29] viric: maybe try to find the overlay examples in the ffmpeg docs. i think it mentions setpts too. [22:30] ok [22:32] hm maybe it has to do with concat + later overlay [22:32] viric: there is documentation, setpts without any arguments does nothing [22:32] I'm trying: [22:32] movie=portada.mov:loop=1,setpts=PTS-STARTPTS [p]; [22:33] [0:0] yadif,scale=960:540,setpts=PTS-STARTPTS [deint0]; [22:33] [1:0] showinfo,yadif,scale=960:540,setpts=PTS-STARTPTS [deint1]; [22:33] [deint0] [0:1] [deint1] [1:1] concat=n=2:v=1:a=1 [outv] [outa]; [22:33] [outv][p] overlay [outv3]' [22:33] (I've two -i, for [0] and [1]) [22:33] if tb is different you should change it with settb [22:34] chaning only pts while tb is different may not work [22:34] but you first need to be sure what you are doing [22:35] so the point is that 'pts' should be equal for every input stream to overlay, right? (checking with showinfo) [22:35] i do not know... [22:36] if you want to overlay 1-1 frame from 2 sources than yes, otherwise not [22:37] I have the same fps, so I think I want 1-1 frame. [22:40] I guess I have to learn about how video works. [22:45] I think I got pts equal... [22:45] show input at both overlay inputs: [22:45] [Parsed_showinfo_2 @ 0x81e8ee0] n:2 pts:14400 pts_time:0.08 [22:45] [Parsed_showinfo_6 @ 0x81ea620] n:2 pts:14400 pts_time:0.08 [22:50] mh no idea. [22:50] showinfo shows the same for both, now, at every frame. Same pts, same pts_time, same n, ... [22:51] this is my showinfo at overlay inputs:http://sprunge.us/DcDT [22:52] I have a .avi file of which ffprobe says: 640x360 [SAR 2:1 DAR 32:9], SAR 1:1 DAR 16:9 [22:53] so? [22:53] the SAR 1:1 is correct and the 2:1 is not... can anyone suggest an ffmpeg command that would remove the wrong SAR 2:1? [22:54] one is from container and other from bitstream [22:54] so if i guessed correct order, remuxing should fix it [22:54] durandal_1707: which is whic, BTW? 
[22:57] argh [22:58] durandal_1707: do you know what was it? I had *two* output files by error [22:59] ffmpeg -i .... file.webm file.wbm [22:59] what do you mean by, "I had *two* output files by error"? [23:00] what is a .wbm file? [23:00] in my command line, I didn't notice I wrote *twice* the output file [23:00] viric: same thing is encoded into 2 files [23:00] file.webm file.webm [23:01] If I type a single file.webm as output, I don't have any buffer overflow [23:06] uf. I was almost learning all the pts thing, had all matched... and yet didn't work. Buffer overflow. Good that I noticed the double filename. [23:06] becase the two output files are different. one is using default settings since it has no output options applied to it [23:06] ...i assume. i have no idea what your command is [23:12] durandal_1707: I ended up reencoding the file to mp4... avi sucks anyway [23:13] khali: you mean remuxing [23:13] transcoding decrease quality for lossy codecs [23:14] durandal_1707: no, I had the original source still available, so I encoded it to x264 in mp4 container [23:14] smaller and nice it is [23:14] thanks for your support BTW [23:14] now is bed time, see you [23:32] How can i join 3 mov files to all-in-one.mov file? this is not working: $ mencoder -oac copy -ovc copy -idx -o output.mov Supernova1.mov Supernova2.mov Supernova3.mov [23:33] No audio [23:33] IamTrying: this is not mencoder support channel [23:34] durandal_1707, OK how do you do it with FFmpeg? [23:34] durandal_1707, just need 3 mov file to 1 mov file with video/audio [23:34] if there all have same codecs/fps/timebase/format/order/etc with concat demuxer [23:35] durandal_1707, like this ffmpeg -i "concat:input1.mpg|input2.mpg|input3.mpg" -c copy output.mpg ? [23:36] nope, that is concat protocol, same as cat [23:37] Hello. I have a question about which decoder vcodec to use. For a src file that mediainfo tells me Video:Codec ID: --> "V_MPEG4/ISO/AVC", would I use vcodec== h264, mpeg4, msmpeg4v1, msmpeg4v2 or msmpeg4v3? I think I'd use h264, but they seem to overlap. [23:38] In case it matters, the stream output format I want is "mpegts" [23:38] IamTrying: http://ffmpeg.org/faq.html#How-can-I-join-video-files_003f [23:38] https://ffmpeg.org/trac/ffmpeg/wiki/How%20to%20concatenate%20%28join%2C%20merge%29%20media%20files [23:39] JennieL: libx264 [23:39] for encoder, for decoder h264 [23:40] JennieL: why not use ffprobe instead of mediainfo? [23:41] durandal_1707: Ok, thanks [23:41] axorb: Well, for starters, I hadn't FOUND ffprobe yet ;-p [23:41] :P [23:41] I have now ... [23:41] ffprobe -show_streams [23:42] llogan, if i use this then i lose -sameq ? http://stackoverflow.com/a/7333453/285594 [23:42] axorb: much better, thx! [23:42] durandal_1707, if i convert mov to mpg and cat mpg to ffmpeg i lose -sameq of mov ? [23:43] sameq no longer exists [23:43] JennieL: and if you have an updated version of ffmpeg, -print_format json [23:43] http://stackoverflow.com/a/7333453/285594 [23:43] http://ffmpeg.org/faq.html#Why-was-the-ffmpeg-_002dsameq-option-removed_003f-What-to-use-instead_003f [23:43] llogan, will it keep video/audio quality ??? and same .mov format? [23:43] https://trac.ffmpeg.org/wiki/Option%20%27-sameq%27%20does%20NOT%20mean%20%27same%20quality%27 [23:45] llogan, so use -qscale 0 ? [23:45] ffmpeg -i 1.mov -qscale 0 1.mpg ? [23:45] axorb: Thanks -- those'll be helpful I hope. 
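For doing the "which decoder do I need" check from code instead of ffprobe -show_streams, libavformat can report the same information. A sketch using the codecpar API of newer FFmpeg; list_streams is a hypothetical name.

    #include <stdio.h>
    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>

    int list_streams(const char *filename)
    {
        AVFormatContext *fmt = NULL;
        if (avformat_open_input(&fmt, filename, NULL, NULL) < 0)
            return -1;
        avformat_find_stream_info(fmt, NULL);
        av_dump_format(fmt, 0, filename, 0);   /* the familiar banner the ffmpeg tool prints */

        for (unsigned i = 0; i < fmt->nb_streams; i++) {
            const AVCodecParameters *par = fmt->streams[i]->codecpar;
            printf("stream %u: %s / %s\n", i,
                   av_get_media_type_string(par->codec_type),
                   avcodec_get_name(par->codec_id));  /* e.g. "h264" for V_MPEG4/ISO/AVC */
        }

        avformat_close_input(&fmt);
        return 0;
    }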
I've been having a real problem trying to get my stream output responsding to controls, and to troubleshoot I've been trying to make sure everything is "right" step by step. HAving good/easy info output helps. [23:46] cat 1.mpg | ffmpeg -f mpeg -i - -qscale 0 -vcodec mov output.mov ? [23:46] VAlid? ffmpeg -i 1.mov -qscale 0 1.mpg; ffmpeg -i 2.mov -qscale 0 2.mpg; cat 1.mpg | ffmpeg -f mpeg -i - -qscale 0 -vcodec mov output.mov ? [23:47] s/cat 1.mpg/cat 1.mpg 2.mpg [23:47] IamTrying: have you read any of the links i gave you? [23:47] VALID or INVALID ? ffmpeg -i 1.mov -qscale 0 1.mpg; ffmpeg -i 2.mov -qscale 0 2.mpg; cat 1.mpg 2.mpg | ffmpeg -f mpeg -i - -qscale 0 -vcodec mov output.mov ? [23:47] llogan, yes, and its too much to read i just need 3 files in 1 [23:48] Can i use not use this to keep same quality for video codec/audio codec? $ ffmpeg -i 1.mov -qscale 0 1.mpg; ffmpeg -i 2.mov -qscale 0 2.mpg; cat 1.mpg 2.mpg | ffmpeg -f mpeg -i - -qscale 0 -vcodec mov output.mov ? [23:49] -vcodec for mov is mov or something else? [23:50] libx264 or something [23:50] mov is just a container [23:51] while playing with VLC axorb , showing video : H264-MPEG4 AVC, audio: A52 Audio (aka AC3) (a52) [23:51] oh, you can just use copy [23:51] -vcodec copy -acodec copy [23:51] axorb, libx264 is open source broken h264 no? [23:51] but that won't rencode [23:51] ok copy is good axorb [23:52] it'll literally use the same data as the input file, just wrap it in a mpg instead of a mov [23:52] help vampire [23:52] and yes, x264 is open source h264 [23:53] ** [23:53] OK - axorb then better never ever use it, cause it wasted my 2 year not giving stable quality [23:53] well [23:53] I'm using x264 in production and it's perfectly fine [23:54] axorb, x264 with java crash daily like 10 times in my production release [23:54] axorb, finally i quit by hating x264 and java [23:54] not sure how java factored into things, but ffmpeg using x264 works completely fine [23:54] Anyone here by any chance know some ffpmeg magic for getting an output stream's (h264 .mkv -> mpegts conversion) action-controls to work? Like Pause, Skip, FastForward, etc? [23:55] JennieL: you probably want HLS for that [23:55] it's Apple's streaming protocol that splits a video into 10 second long ts files [23:55] uh [23:55] what's HLS have to do with that_ [23:55] ? [23:56] JennieL, that's player dependent, what is your exact use-case? [23:56] Mavrik: I've had difficulty with ts files [23:56] VLC barely plays them [23:56] at least ones produced by ffmpeg [23:56] Mavrik: I've been having a nice discussion all by myself about it: http://ffmpeg.org/pipermail/ffmpeg-user/2013-August/016996.html [23:57] Mavrik: Neither a Panasonic BluRay standalone player or VLC will allow be to control transcoded streams. [23:57] Native mkvs are fine. [23:57] That's "allow me" [23:57] hmm, I see [23:57] does ts have global headers? [23:57] axorb: HLS? New one on me. Googling. [23:57] axorb, TS is a streaming format [23:57] so no. [23:57] so yeah, you won't get stuff like seek support [23:58] and TS can be trickplayed and rewinded as well. [23:58] axorb, yes you will. [23:58] stop saying stuff without support, c'mon :P [23:58] but the player would have to decode everything right? 
[23:58] nop [23:58] hrm, okay [23:58] axorb, Failed to set value '0' for option 'qscale' when trying $ ffmpeg -i Supernova1.mov -qscale 0 1.mpg [23:58] it's a streaming format, you can start playing it anywhere :) [23:59] JennieL, anyway, that seems like a player limitation with TS container [23:59] Mavrik: but you won't know where to seek for a timestamp [23:59] I guess you can guess based on the bitrate [23:59] Mavrik: I though VLC was the swiss-army-knife of players. [23:59] axorb, either the timestamp table is inserted periodically or you guess yeah :) [00:00] --- Fri Aug 23 2013 From burek021 at gmail.com Sat Aug 24 02:05:02 2013 From: burek021 at gmail.com (burek) Date: Sat, 24 Aug 2013 02:05:02 +0200 (CEST) Subject: [Ffmpeg-devel-irc] ffmpeg-devel.log.20130823 Message-ID: <20130824000502.5BDB018A00C6@apolo.teamnet.rs> [00:38] ffmpeg.git 03Michael Niedermayer 07master:c443689afbb3: avformat/movenc: use av_freep() instead of av_free() except for local variables before return [00:47] michaelni: so you are now reporting CVE candidates? [01:05] durandal_1707, you make it sound like theres something new about that [01:07] ffmpeg.git 03Michael Niedermayer 07master:8bb11c3ca77b: avcodec/jpeg2000dec: Check cdx/y values more carefully [01:08] well, i get impression others reported it in past [01:09] michaelni: You have some security mechanism in place for creation of the web pages? [01:09] remote: ok/Makefile Makefile differ: char 169, line 4 [01:10] I pushed archive page as requested by ubitux and durandal_1707 . So I had to add a new source to the Makefile. [01:11] thx :) [01:11] beastd, yes [01:12] michaelni: what is the procedure? should i change the ref file on the web server and rebuild? [01:12] i think so [01:13] ubitux: np. i do not think creating a news archive is related to adding a new entry. but you got me with hat one :) [01:13] :) [01:14] durandal_1707, true, most where, not all though [01:15] where are news older than 2007? [01:16] dunno [01:16] in the git log maybe [01:17] oldest commits is from 2011 [01:17] so i think i should use other tools... [01:18] archive.org ? [01:18] no, TARDIS [01:19] wow it was in php [01:19] http://web.archive.org/web/20020922011917/http://ffmpeg.sourceforge.net/ [01:19] (later) [01:20] omg, what i missed [01:20] and it was written "FFMpeg" [01:20] heresy! [01:21] http://web.archive.org/web/20050301010331/http://ffmpeg.sourceforge.net/index.php [01:21] ok ppl, archive page is online. will go to sleep now. [01:21] see you [01:22] http://web.archive.org/web/20070116171005/http://ffmpeg.mplayerhq.hu/ moar news [01:22] durandal_1707: i guess this one provides you the missing bits: http://web.archive.org/web/20081119203954/http://ffmpeg.org/ [01:27] perhaps missing archive entries should be added back? [03:19] ffmpeg.git 03Carl Eugen Hoyos 07master:2baa12f1d194: Fix dependencies for h263 vaapi decoder. [04:57] ffmpeg.git 03Michael Niedermayer 07master:16a0d75c769a: avcodec/mjpegdec: fix overread in find_marker() [07:26] and back from vacation [07:26] ubitux: any news? :) [09:12] BBB: well i was mostly trying to get a better picture of the whole thing so i wasn't very^Wat all productive [09:14] i'll have a few questions in a few hours [09:36] ubitux: booh! well anyway [09:36] ubitux: yes ask qs [09:58] ubitux: or at least let me know how far you are, we can split work for particular tasks if you want [09:58] e.g. 
I can do sub8x8 bitstream parsing and confirm at least all that works [09:58] ubitux: or something along those lines [10:42] Action: BBB gives ubitux a big fat poke [10:43] BBB: i was on sub8x8, which was as you said similar to the VP8_SPLITMVMODE thing [10:44] bitstream parsing? or reconstruction? [10:44] but i dont know how i'm suppose to test this [10:44] (or both?) [10:44] it was the reconstruction; but i guess i have the bitstream parsing to do too (first), right? [10:44] yes [10:44] otherwise reconstruction will never trigger, I suppose [10:44] i saw that code triggered once, but probably a bug [10:45] anyway, where is that supposed to be done? [10:45] if (b->bl == BL_8X8 && b->bp == PARTITION_NONE) { [10:45] // sub8x8 mode/mv coding [10:45] // inter mode ctx = inter_mode_ctx_lut[a_mode][l_mode]; [10:45] printf("Inter sub8x8 mode/mv coding not yet done\n"); [10:45] return -1; [10:45] (sorry i'm a bit lost at the overall codec) [10:45] line 1455 [10:45] ok [10:45] you basically do the inter equivalent of the intra coding in line 1080-1120 [10:46] that is, parse one inter mode (see line 1421-1426), then parse one mv (if newmv), then parse the second inter mode, next mv (if newmv), etc. [10:47] either twice (for PARTITION_H/V) or 4x (for PARTITION_SPLIT) [10:47] you also need a small extension to find_ref_mvs() to be sub8x8 compatible [10:47] I can point out what's missing in our code or the relevant code in libvpx so you can figure it out yourself [10:48] that would help [10:48] (in libvpx) [10:48] (i need a reference somehow...) [10:49] vp9/common/vp9_mvref_common.c [10:50] see calls to get_sub_block_mv() [10:50] i see, thx [10:50] block index is 0 (topleft), 1 (topright), 2 (bottomleft) or 3 (bottomright 4x4 subblock in a 8x8 parent block) [10:50] if the blocks are 4x8, only 0 and 1 exist [10:50] if the blocks are 8x4, only 0 and 2 exist [10:50] btw, i looked a bit at the vp9 ml; there is not much info [10:51] i found some kind of xml draft for overall design, but not sure if accurate and really up-to-date [10:51] the direct neighbours are filled in vp9_append_sub8x8_mvs_for_idx() in vp9/common/vp9_findnearmv.c [10:51] ok [10:51] basically what it does is look for direct left and/or direct above neighbour (if we're not subblock 0) [10:52] so for block 1, the direct left would be block 0 [10:52] for block 2, the direct above would be 0 [10:52] for block 3, the direct left is 2 and above is 1 [10:52] then after that it calls find_ref_mvs (ffvp9) / vp9_find_mv_refs_idx (libvpx) to fill in the more distant mv references [10:53] for sub8x8, it is subblock aware for the direct left/above neighbours [10:53] that is, for the left 8x8 neighbour, it uses the edge block closest to our subblock [10:53] for 0/1, that is 1 (left), for 2/3, that is 3 (left); for 0/2, that is 2 (above) and the 1/3, that is 3 (above) [10:53] it's still using mv from a maximum of 3 ref frames? (from a maximum of a pool of 8 frames?) [10:53] of course all of this matters only if these blocks use sub9x8x8 coding [10:53] yes [10:54] why limit the limit to 3 ref frames when there is 8 frames cached? [10:54] no idea [10:54] ok [10:54] :-p [10:54] :) [10:55] ok well, i'll try to make a report of my progress in ~12 hours [10:55] i have to go right now [10:55] thx for the help [10:56] bye [10:56] there is two random trivial commits in my branch if you're interested [10:56] what do I do now? :) [10:56] ok I'll check [10:56] haha [10:56] what do you do? i dunno; is my work that blocking for your progress? 
[11:02] ubitux: well a little right? I can't get reconstruction of the first inter frame correct until sub8x8 is done, and I can't test things like bw adaptivity until after first inter frame is correct [11:02] erm [11:03] you can give me something else if that matters for your [11:03] -r [11:03] i'm just curious about how it works, but as i know near to nothing about the codec, i guess anything else will be fine [11:04] no it's fine, I'm gonna do some intra pred simd meanwhile [11:04] I'll hope you finish it this weekend or so [11:05] and your patches (plus minor one by me) pushed [11:05] i'll do my best, but gotta go for real now [11:05] ok :) [11:14] I know what I'll do, I'll add some fate tests for the vp90-2-* files [11:14] (first frame only ofcourse) [11:45] How do we write S16P to fifo to be used in audio encoding for mp3lame? like this? for(int i=0;i ffmpeg.git 03Stefano Sabatini 07master:0be3be901196: lavf/tee: copy metadata to output chained muxers [12:28] ffmpeg.git 03Paul B Mahol 07master:8fbf940e1673: lavfi/tile: do not leak input frame [12:44] ffmpeg.git 03Paul B Mahol 07master:dd1d29bd5f31: pngdec: use av_fast_padded_malloc(z) [12:52] is there filter that would repeat each frame several times? [12:54] i don't think so [12:55] mp=harddup ? [12:55] that was removed [12:56] and it did nothing [12:56] durandal11707, vf_fps? [12:56] just use an integer multiple of the current framerate [12:57] Action: BBB kicks lazy Daemon404 [12:57] is it supposed to work with overlay? [12:57] when you want to loop single main frame over and over again [12:58] oh i unno [12:58] ive always used images as input [12:58] not videos [12:59] yes, but if you loop image it will decode over and over again [12:59] ... lol [13:03] i get 19 vs 28 here [13:03] fps [13:10] hmm, this doesn't work: ffmpeg -i bg.png -f lavfi -i testsrc -filter_complex '[0]fps=999[fps],[fps][1]overlay' [13:19] because you can't seek in lavfi there is no loop filter [13:33] ffmpeg.git 03John Stebbins 07master:1f70a5ad284b: mov: use tkhd enabled flag to set the default track [13:33] ffmpeg.git 03Michael Niedermayer 07master:c6f4a3a70837: Merge commit '1f70a5ad284b33e8b3e2b40a5cb33055419781b7' [14:07] I am using http://pastebin.com/466tUJyd to convert S16 to S16P so I can feed it to avcodec_encode_audio for libmp3lame codec. However, encoded file is just a looping noise like a very fast shooting machine gun. Is the algorithm wrong? [14:11] I'm not sure, but maybe check to be sure that linesize is in units of int16_t, rather than units of bytes? [14:12] I'm not sure which it is, but if it's in bytes and you're acting on it as if it's in units of samples, the code might be wrong [14:12] why not just call libavresample [14:12] the lazy way~ [14:17] ffmpeg.git 03John Stebbins 07master:30ce289074e8: movenc: Make tkhd "enabled" flag QuickTime compatible [14:17] ffmpeg.git 03Michael Niedermayer 07master:800ea20cadce: Merge remote-tracking branch 'qatar/master' [14:20] Daemon404, any reason why you suggest avr over libswresample ? 
[14:22] no [14:22] its easier to type [14:28] I have tried using swr_convert but it didn't work for S16 to S16P [14:30] (however it worked for FLTP to S16, which was valid for another scenario) [14:35] zidanne, swr certainly works for s16->s16p, its a common convertion, not something obscure that isnt tested [14:36] zidanne, if it doesnt work for you please provide a reproduceable testcase and ill take a look [14:45] it crashes on "conv_AV_SAMPLE_FMT_S16_to_AV_SAMPLE_FMT_S16" [14:46] zidanne, how can this be reproduced ? [14:49] I decode an mmsh stream (asf) and I convert it into AV_SAMPLE_FMT_S16 so I can feed the audio system. At the same time, I want to have it in AV_SAMPLE_FMT_S16P too (so I can encode it into mp3). So, I use swr_convert for the "second" time but this time with the output of the previous swr_convert. So, now it converts from S16 to S16P instead of FLTP to S16. [14:51] zidanne, can you provide a reproduceable testcase, a small C program that causes this crash ? [14:52] swr doesnt crash when used by other applications [14:53] or if it does crash for you with ffmpeg and some command line, please provide this command line [14:57] so apropos repeat filter: to be or not to be, if loop ever came out it can be deprecated [15:13] or why seeking in lavfi is so hard? [15:13] michaelni: my bad.. I was not creating an array and assigning the output of the previous swr_convert to it's [0]. Now it didn't crash. [15:14] wasn't there example in source tree [15:14] so you could look at it? [15:15] perhaps its not commented at all and thats why so many people are lost [15:15] But i still get the same audio noise. So I must be using the AVAudioFifo incorrectly.. [15:16] when swr converts to s16p it returns array with pointers to array [15:17] anyway why there is not function that takes AVFrame? [15:18] durandal11707: I was using swr_convert to convert S16 to S16P. As swr_convert always wants an array, that was my problem [15:19] durandal11707: yeah, that'd be very useful [15:19] durandal11707: write it [15:19] wm4: i have paypal [15:20] zidanne: so what you use now? link... [15:24] [14:13] <@durandal11707> or why seeking in lavfi is so hard? [15:24] because seekign with lavf is hard [15:24] or not possible [15:24] without an index. [15:24] which we do not implement. [15:29] let filter just call avformat_seek_frame [15:34] durandal11707: the main problem perhaps is that you'd have to figure out the seek target PTS [15:34] which would require information traveling backwards through the filter chain, or so [15:34] OTOH, you could just add a seek command to the movie src [15:55] I think, we could have a "ffmpeg tips" page. Where we can add notes related to common mistakes, etc.. (: [16:00] zidanne, you can create a page for that on the ffmpeg wiki if you like [16:13] link to wiki: https://ffmpeg.org/trac/ffmpeg [16:14] shorter link : https://trac.ffmpeg.org/ [18:13] ffmpeg.git 03Michael Niedermayer 07master:912ce9dd2080: jpeg2000: fix dereferencing invalid pointers [18:13] ffmpeg.git 03Michael Niedermayer 07master:09927f3eaa93: jpeg2000: zero reslevel array on allocation [18:13] ffmpeg.git 03Michael Niedermayer 07master:9e477a377033: jpeg2000: fix null pointer dereference in case of malloc failure [20:15] ffmpeg.git 03Michael Niedermayer 07master:aadfadd784bb: avformat/redspark: check coef_off [20:46] michaelni: so just report progress and do not care what is put into frame? 
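Putting the S16-to-S16P thread above together: swr_convert() wants an array of per-channel plane pointers on the planar side (the "[0]" fix mentioned), and an AVAudioFifo is a convenient way to buffer the converted samples until a full encoder frame is available for libmp3lame. A sketch using the older channel-layout calls that match this era; s16_to_s16p_into_fifo is a hypothetical helper, stereo 44.1 kHz is assumed, and error checks are dropped.

    #include <libavutil/audio_fifo.h>
    #include <libavutil/channel_layout.h>
    #include <libavutil/mem.h>
    #include <libavutil/samplefmt.h>
    #include <libswresample/swresample.h>

    static int s16_to_s16p_into_fifo(AVAudioFifo *fifo, SwrContext *swr,
                                     const uint8_t *interleaved, int nb_samples)
    {
        /* planar output: one pointer per channel, not the address of a single pointer */
        uint8_t *planes[2] = { NULL, NULL };
        av_samples_alloc(planes, NULL, 2, nb_samples, AV_SAMPLE_FMT_S16P, 0);

        const uint8_t *in[1] = { interleaved };   /* interleaved input is one "plane" */
        int got = swr_convert(swr, planes, nb_samples, in, nb_samples);
        if (got > 0)
            av_audio_fifo_write(fifo, (void **)planes, got);

        av_freep(&planes[0]);   /* av_samples_alloc made one buffer, owned by planes[0] */
        return got;
    }

    /* one-time setup:
     *   SwrContext *swr = swr_alloc_set_opts(NULL,
     *       AV_CH_LAYOUT_STEREO, AV_SAMPLE_FMT_S16P, 44100,
     *       AV_CH_LAYOUT_STEREO, AV_SAMPLE_FMT_S16,  44100, 0, NULL);
     *   swr_init(swr);
     *   AVAudioFifo *fifo = av_audio_fifo_alloc(AV_SAMPLE_FMT_S16P, 2, 1);
     * then, whenever av_audio_fifo_size(fifo) >= enc->frame_size, read exactly
     * frame_size samples into the encoder frame's data[] and encode that frame. */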
[20:47] filling the otherwise uninitialized parts by copying fro the previous frame would be better [20:48] well its hard to find what is initialized and what is not [20:49] some files have multiple idats and only last one may fail [20:49] then just setting progress should be fine for now if it fixes the crashes and deadlocks [20:50] i can't reproduce either [20:53] so if you can test it to find out no crashes/locks happen it would be great [20:53] BBB: what does the b->comp bit mean? [21:12] How can this be possible? (: When I av_free "resampledOut" after "memcpy(ffData->data,(const uint8_t *)resampledOut,ffData->size);" I get "malloc: *** error for object 0xf5fbf5fb: pointer being freed was not allocated" [21:13] if I comment out free, everything works perfectly (: [21:13] maybe because you changed your pointer [21:14] The only thing is, I give the pointer to swr_convert with &resampledOut. [21:26] the problem was av_freep. I changed it to av_free and now it works [21:41] ffmpeg.git 03Alexander Strasser 07master:b329ff3d43ca: MAINTAINERS: Add my GPG fingerprint [23:09] michaelni: so you tried it? [23:14] durandal_1707, i can if you push the code somewhere [23:14] i cant test without having the code [23:15] but i think one line change is less work.... [23:16] do you want your code tested or do you want me to redo the debuging and fix it myself ? [23:18] i cant test your code without seeing your code of course [23:20] you dont have to push it if you dont want but if you want me to test your code you have to provide your code by some means [23:20] ok, stop typing you are wasting time [23:22] you can see "code" in png branch [23:30] lol (: [23:31] durandal_1707, no crashes or deadlocks with that file anymore and fate passed [23:42] michaelni: but asan warnings? [23:43] asan showed nothing [23:43] neither valgrind but that didnt show anything before either [23:45] 36 [23:47] huh, youtube protocol....... [23:50] ._. [23:50] why didn't he asked before :( [23:50] we have libquvi support... :( [23:51] but this one is LGPL [23:51] durandal_1707: it's going to require regular updates [23:52] like maybe every month [23:52] than that is not really protocol... [00:00] --- Sat Aug 24 2013 From burek021 at gmail.com Sat Aug 24 02:05:01 2013 From: burek021 at gmail.com (burek) Date: Sat, 24 Aug 2013 02:05:01 +0200 (CEST) Subject: [Ffmpeg-devel-irc] ffmpeg.log.20130823 Message-ID: <20130824000501.50F9E18A00B8@apolo.teamnet.rs> [00:00] IamTrying: ffmpeg -i Supernova1.mov -vcodec copy -acodec copy 1.mpg [00:00] JennieL, weeeelll&.. yeah. It's also quite buggy. Lemme read your post again :) [00:00] Mavrik: Thanks. I am NOT making any promises that any of the info there is helpful! ;-) [00:00] JennieL, ok, bad news is& you can't really do what you want in the way you have your system set up [00:00] axorb, [mpeg @ 0x17315a0] buffer underflow i=0 bufi=234980 size=238677 [mpeg @ 0x17315a0] packet too large, ignoring buffer limits to mux it [00:01] [mpeg @ 0x17315a0] VBV buffer size not set, muxing may fail [00:01] Mavrik: Which part? Just want to trasncode & stream the mpegts to my TV. 
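On the av_free()/av_freep() confusion above: av_freep() expects the address of the pointer and sets it to NULL afterwards, while av_free() takes the pointer itself. Whether that was the actual bug is guesswork without the pasted code, but passing the wrong level of indirection is the usual way to hit "pointer being freed was not allocated". A tiny sketch:

    #include <stdint.h>
    #include <libavutil/mem.h>

    int main(void)
    {
        uint8_t *buf = av_malloc(1024);

        av_freep(&buf);     /* frees the buffer and sets buf = NULL          */
        av_freep(&buf);     /* safe no-op now, because buf is NULL           */

        buf = av_malloc(1024);
        av_free(buf);       /* also fine, but buf keeps dangling afterwards  */
        /* av_freep(buf);      WRONG: passes the buffer itself where the
         *                     address of the pointer is expected; typical
         *                     result is a "not allocated" abort or worse    */
        return 0;
    }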
[00:01] Or even VLC, at this point [00:02] and, of course, have the controls work [00:02] JennieL, since you are sending output from ffmpeg directly to your TV and ffmpeg doesn't have any controls to move around the stream after it started processing (it's a tool for transcoding files primarly, not streaming :/) [00:02] rtmp would be good if your TV supports it [00:02] JennieL, Mavrik, is right, VLC and all of them are good for nothing, Gstreamer fights with VLC and VLC fights with Gstreamer, at the end of the day both are good for nothing [00:02] Mavrik: ffplay does ... [00:02] ffplay is a simple example of a player :) [00:03] Ok, getting confused. Is it not possible? [00:03] you need a DLNA server which can restart transcoding on another point when you want to seek. [00:03] a DLNA server that understands that request and will properly restart encoding [00:03] It's already in there. Mediatomb ... [00:04] @ http://ffmpeg.org/pipermail/ffmpeg-user/2013-August/016996.html -- "I setup a "Mediatomb" DLNA server to get the videos onto my TVs." [00:04] axorb, dont know but this did not worked ffmpeg -i Supernova1.mov -vcodec copy -acodec copy 1.mpg [00:05] JennieL, what you're setting up is ffmpeg with command to transcode WHOLE file from BEGINNING without any timing parameters always [00:06] JennieL, it also says in MediaTomb FAQ that they do not support seeking on transcoded streams [00:06] Mavrik: Is that a limitation of Mediatomb? or DLNA? The mediatomb folks told me this works. [00:07] mediatomb [00:07] MediaTomb should be doing the transcoding anyway [00:07] seeking probably works if you don't have to transcode. [00:07] axorb: Mediatomb uses an ffmpeg-based script ... [00:07] but your ffmpeg command certanly won't support seeking [00:07] Mavrik: Do you offhand know of a more reliable DLNA that *would* do this? [00:07] I think PS3 Media Server supports seeking while remuxing [00:08] For Linux that'd be the UMS? [00:08] I used it for a bit awhile ago, wouldn't know how compatible is it with other DLNA clients [00:08] IamTrying: ffmpeg -i Supernova1.mov -vcodec libx264 -acodec aac -strict experimental 1.mpg [00:08] then figure out what the -sameq option is now [00:08] but you need to specify the codecs, and this will transcode instead of remux (which is ideal) [00:08] -sameq option never was useful for you ...... [00:09] okay, don't use -same1 [00:09] sameq [00:09] axorb, OK but is not this all is a hard way? its like 2013 can it not be like $ ffmpeg -multi-files-from 1 2 3 4 5 -to one.mov ? [00:10] IamTrying: no [00:10] you're concatting right? [00:10] why you are ignoring info people already gave you? [00:10] https://trac.ffmpeg.org/wiki/How%20to%20concatenate%20(join,%20merge)%20media%20files [00:10] ffmpeg -f concat -i <(for f in ./*.wav; do echo "file '$f'"; done) -c copy output.wav [00:10] Mavrik: Sigh. UMS forums seem to blame ffpmeg (http://www.universalmediaserver.com/forum/viewtopic.php?f=9&t=672&p=3903&hilit=seek#p3903) That's assuming I've found the right topic. [00:10] there's a loop for you [00:11] and ffmpeg can seek, it's just hard to get it right [00:11] JennieL, ffmpeg can't seek once transcoding has started, you have to restart it form another point [00:11] ^ you need to cancel the current job [00:11] and use -ss [00:12] VLC can seek while transcoding though if you use an rtmp output [00:13] axorb: Hm. So no way to 'connect' that cancel_job+seek_to_new_location to the buttons on the remote? [00:14] OK - axorb thank you , it works great. 
[00:14] This stuff really is SO far over my head ... [00:14] DLNA would be the one making the seek request [00:14] and the server would have to translate that into canceling the job and starting a new one [00:14] durandal_1707, thank you too, its working [00:14] axorb: So, back to a DLNA server that actually works ... i.e. !Mediatomb [00:15] JennieL, as I said, try out PS3 Media Server& I've had some success with it on Xbox and it supported seeking then [00:15] never used DLNA, but it definitely is possible [00:15] I just implemented it actually [00:15] except with HLS instead of DLNA [00:16] hmm why they claim ffmpeg does not support internal subtitles? [00:16] durandal_1707: http://trac.ffmpeg.org/ticket/2067 [00:16] maybe? [00:17] Mavrik: PMS is on linux? (I'm looking now ...) [00:17] think so. [00:17] Mavrik: I must be blind ... where on the MediaTomb FAQ did you find that stmt? [00:18] I'm poking around here, http://mediatomb.cc/dokuwiki/faq:faq, and am missing it [00:18] 4.2.1: http://mediatomb.cc/pages/transcoding#id2892931 [00:18] anyway, sleep, g'nite [00:18] durandal_1707, funny. i worked in one company and had a nice experience where i was working there i had few people and my boss included. 1) when ever i write application using java/python and release it (i always put a README file) 2) none of them use that README file and instead of learning they asks me can you not add this feature and that, which is already given to them 3) LOL - you just reminded me that one by saying that NO LOL [00:20] Mavrik: thanks [00:27] Off to try UMS. TA! [00:31] IamTrying: the command you mentioned would when used with ffmpeg mux all stream from all input files into output file [00:37] durandal_1707, yes temporary its working great thank you!! appreciate it [00:38] huh, what is working great? [05:47] I'm using ffmpeg to realtime/live transcode video files and stream them straight to an rtmp server (ffmpeg -re -i blah.avi -f flv rtmp://localhost/live/thing) - is there any way to seek/pause/etc when this is going on? [10:42] hello [10:43] can anyone please help me? [10:46] only if you tell us about what you want help with . if not we wont help you. [10:46] :) [10:46] i'm trying to understand libav libraries [10:46] why do avformat_write_header() emits warning: "Codec for stream 0 does not use global headers but container format requires global headers" [10:47] well, because it's probably true :) [10:48] :) [10:48] your video codec didn't create global header info and the container you're muxing into requires global metadata [10:49] inqb, which codec and format are we talking about here? [10:49] h264/mpeg-ts [10:50] h264 stream is demuxed from mov container [10:56] hmmm, that's wierd [10:56] TS shouldn't require global headres [10:56] MOV does however [10:57] if I recall correctly you're supposed to send that through h264_mp4toannexb bitstreamfilter to H.264 stream gets PPS/SPS packets injected periodically [10:57] yes, you're right... [10:57] i was trying to write to .mp4 [11:18] Hello everyone. Trying to compile a slim ffmpeg for android that should only do audio encoding. Trouble is, I seem to need avfilter.a file, which is 27 MB in size. Also I noticed that a lot of code is build, that is disabled in my configure script. Is this intended? [11:19] fschuetz, nope [11:20] fschuetz, I suggest you configure it with --disable-everything and then enable things you need [11:20] you seem to be compiling all filters [11:21] How can I read a video file and get a raw array of pixels for each frame? 
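On the global-headers warning quoted above: when you run the encoder yourself, the usual pattern is to check the output format's AVFMT_GLOBALHEADER flag before opening the encoder; for pure stream copy into MPEG-TS, the h264_mp4toannexb bitstream filter mentioned here is the part that actually matters. A sketch (AV_CODEC_FLAG_GLOBAL_HEADER is the current spelling; 2013-era trees call it CODEC_FLAG_GLOBAL_HEADER):

    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>

    /* call before avcodec_open2() on the encoder context */
    static void maybe_request_global_header(const AVFormatContext *oc, AVCodecContext *enc)
    {
        if (oc->oformat->flags & AVFMT_GLOBALHEADER)
            enc->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
    }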
I couldn't find a simple example [11:22] shlomo, there's a decode_example.c in doc/examples I think [11:23] shlomo, you need to demux file, decode video stream and you'll get raw video frames from decoder [11:25] Mavrik, Does doc/examples/demuxing.c do it? [11:25] demuxing does just demuxing. [11:27] shlomo, filtering_video.c does it [11:27] you just don't need the filters bit [11:31] how do i fix "H.264 bitstream malformed, no startcode found, use the h264_mp4toannexb bitstream filter (-bsf h264_mp4toannexb)" error? [11:31] ok. found my problem. thanks [11:31] i've tried AVBitStreamFilterContext *filter = av_bitstream_filter_init("h264_mp4toannexb"); [11:31] but av_bitstream_filter_filter(filter, c, NULL, &packet->data, &packet->size, packet->data, packet->size, 0); always return 0; [11:32] Mavrik, instead of lines 219-234, just use frame? https://github.com/FFmpeg/FFmpeg/blob/master/doc/examples/filtering_video.c#L212 [11:32] shlomo, yeah, you'll have raw frame data in frame->data[] array [11:33] shlomo, remember, you'll probably get a YUV420 format, not RGB :) [11:34] you can use swscale library to convert formats and resize frames if you have to :) [11:34] inqb, hmm, I know there were some wierd things about that when used in code [11:34] I had to inject PSS/SPS packets manually [11:38] do you know of any tutorial how to do that? [11:38] i can see, that someone had the same problem: http://ffmpeg.org/pipermail/libav-user/2012-March/001484.html [11:38] no, not really [11:41] inqb, reading the source of mpegtsenc.c of ffmpeg could give you some idea why does it require that [11:41] my usecase was different from yours [11:45] Mavrik: thx for hinting me about PSS/SPS [11:45] mhm [11:46] i found this post: http://aviadr1.blogspot.com/2010/05/h264-extradata-partially-explained-for.html [11:46] and i think this is what i need [11:46] yeah, that's it [11:46] you have to create AnnexB format for MPEG-TS [11:46] yup [11:46] and the input is from .mov file [11:46] the thing I can't really tell you right now is where will you find SPS/PSS data in your input [11:47] x264 encoder dups them into it's priv_data field in AVCodecContext [11:47] you'll probably find them somewhere in AVFormatContext of mov [11:47] then I just prepended every keyframe with those packets [11:47] and everything went well [11:47] ok [11:58] ok, then why [filter = av_bitstream_filter_init("h264_mp4toannexb");] doesn't work? [11:59] it expects some fields formatted in certain way same with bitstream [11:59] check the source [11:59] maybe i need to pass packet->side_data instead of packet->data to the av_bitstream_filter_filter()? [12:02] no, because it manipulates the data to inject packets [12:28] hi, regarding http://ffmpeg.org/pipermail/ffmpeg-user/2013-August/017037.html, can somebody help me out there how to do that with the filter_complex in a fast way? [12:29] switching the inputs puts the background over the sequence :( [12:39] finally some progress [12:39] h264_mp4toannexb works fine [12:40] i had to set AVCodecContext.extradata [12:40] but... [12:41] the resulting video plays for 14sec instead of 10min, display only keyframes (i think) [13:06] and i found out the reason: pts/dts values weren't rescaled to output stream's time_base [13:36] Hello everyone. I've built a ffmpeg and would like to use it to convert audio files from different formats to vorbis. Any advice, where I should start? I've checked the doc at http://ffmpeg.org/doxygen/trunk/ but can't really find the entry point. 
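On the remuxing fix mentioned above (the ten-minute file that played as 14 seconds because pts/dts were not rescaled to the output stream's time_base), this is roughly what the per-packet rescale looks like with the 2013-era API; in_stream and out_stream are assumed to be the demuxer's and the muxer's AVStream for the stream being copied:

    #include <libavformat/avformat.h>
    #include <libavutil/mathematics.h>

    /* Rescale a demuxed packet's timestamps from the input stream's time base
     * to the output stream's time base before handing it to the muxer. */
    static void rescale_packet(AVPacket *pkt,
                               const AVStream *in_stream, const AVStream *out_stream)
    {
        if (pkt->pts != AV_NOPTS_VALUE)
            pkt->pts = av_rescale_q(pkt->pts, in_stream->time_base, out_stream->time_base);
        if (pkt->dts != AV_NOPTS_VALUE)
            pkt->dts = av_rescale_q(pkt->dts, in_stream->time_base, out_stream->time_base);
        if (pkt->duration > 0)
            pkt->duration = av_rescale_q(pkt->duration, in_stream->time_base, out_stream->time_base);
    }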
[13:45] fschuetz: try http://www.inb.uni-luebeck.de/~boehme/using_libavcodec.html [13:46] thanks [13:46] ugh [13:46] that's rather old [13:46] at least looking at the changelog ending in 2009 [13:46] so I recommend looking at the samples in the source directory [13:46] as well as keeping some tabs of the ffmpeg doxygen from the main site open [14:02] ok [14:03] https://ffmpeg.org/trac/ffmpeg/ticket/2748 I can confirm this one aswell, It makes a lot of problems not only with hw decoders, but also software like udpxy. [14:09] avformat_open_input returns an error code. How can I check, what it means? [14:16] Mr_E-, you might want to look into OBE if you want better mpeg-ts muxing [14:22] The muxing is fine, it just needs to output 1 size packets only, and not use it as a max value :) [14:32] JEEB, OBE_ [14:32] ? [14:43] I compilled ffmpeg from source and I used options for gcc: "-O2 -march=i486 -mtune=i686" is it alright to use those options ? or its better to use "-O0" or "-O1" ? [14:48] Mavrik: Open Broadcast Encoder [14:48] by kierank, uses his libmpegts muxer [14:48] which is much stricter than libavformat's as far as I know [14:49] oh that looks very nice [16:17] hi there ... any quick command to transcode a DBT-S2 , 10mbps from satellite to UDP multicast ? [16:17] i found a lot of commands but most of them doesnt has a smooth streaming on the output [16:17] DVB-S2 * [17:42] cant pass a vf with ' from a varaible to FFmpeg. http://pastebin.ca/2436477 [18:03] any way to render embedded (ssa, stream 0.3 - not already hardsubs) subtitles in one pass? Currently I'm extracting the subtitles with ffmpeg out to SSA and then feeding it back in to a second call and it's a little annoying :/ [18:09] Got it. vf='-filter:v foo:bar/' works. [18:16] Is there something similar to avformat_open_input to write a file [18:31] Image overlay to fade out after 10 sec... Any advise? [18:33] Mista_D: you need to be much more specifici [18:35] I am using the Android NDK to convert audio files from different formats to a user supplied output format. [18:37] I managed to open an input file, find the format and stream and decode a packet to frame [18:38] durandal11707: -filter_complex 'overlay=10:main_h-overlay_h-10:enable=lte(t,10)' -- expected it to last 5 sec, but the command is refused. [18:38] So right now I would like to encode the those decoded frames to packets (outputformat) and write them to a file [18:39] the snippet I am working right now http://pastebin.com/F1BvprFT [18:40] uh& that's rather& wrong. [18:40] ok. what do I have to do? [18:47] Mista_D: how is it refused? [18:48] durandal11707: [overlay @ 0x15231a20] Option 'enable' not found [18:49] Mista_D: your ffmpeg is too old [18:50] durandal11707: new one fails too with " Missing ')' or too many args in 'lte(t' " [18:51] Can someone give me some directions on encoding/decoding audio please? [18:53] Mista_D: that is because you do not escape it correctly [18:54] fschuetz, it's obvious you haven't even read the small documentation about the methods you're using [18:54] you're ignoring the return types [18:54] it seems you haven't even checked the examples in doc folder of ffmpeg [18:58] I do read the doc. However I find it somewhat confusing. I've also tried to find source that does, what I want, as obviously ffmpeg must have it already. Just could not find it. [18:59] durandal11707: here's command, all escaped well. http://pastebin.ca/2436494 [19:00] Mista_D: you did not escaped ',' [19:03] durandal11707: Nice. Thanks. 
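For the question above about how to check what avformat_open_input's error code means: libavutil can turn any AVERROR code into a readable string. A small sketch of that pattern:

    #include <stdio.h>
    #include <libavformat/avformat.h>
    #include <libavutil/error.h>

    static int open_input(const char *filename, AVFormatContext **fmt_ctx)
    {
        int ret = avformat_open_input(fmt_ctx, filename, NULL, NULL);
        if (ret < 0) {
            char errbuf[AV_ERROR_MAX_STRING_SIZE];
            av_strerror(ret, errbuf, sizeof(errbuf));   /* human-readable text for the AVERROR code */
            fprintf(stderr, "avformat_open_input(%s): %s\n", filename, errbuf);
        }
        return ret;
    }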
Any way to make it fade at the end? [19:03] what you need to fade main or overlay or both? [19:08] durandal11707: Overlay fadeout after 10 sec with complete fade out by 12 sec. [19:09] then you use fade filter on it, i do not see how is that complicated [19:14] what's the difference between avcodec_free_frame and av_frame_free [19:26] hi, I'd like to hear your opinion. when it comes to speed x quality, what do you think brings the best buck per cpu encoding time, ref 8 instead of 4 or bstratagy2 instead of 1 ? [19:27] to enable both would cost too much in encoding tme [19:35] more reference frames help with noisy footage, like images from a snow storm, rain, filmed content [19:36] whats the diminishing rate of return treshold? [19:36] ref 12 ? [19:37] depends. [19:37] a few years ago, x264 was able to print encoding statistics [19:38] you should be able to see the histogram of reference frames and iframe distances and so on [19:44] well [19:44] thats not really helpful to a beginner like me [19:47] why are you even dealing with this then? [19:47] what kind of use-case prevents you from using presets? [19:48] why do you guys keep pushing presets? [19:48] i dont care about presets [19:49] then learn how x264 works. [19:50] presets are there for people that don't want to learn how x264 works and what the parameters mean [19:50] if you don't want to use presets then learn to read the x264 encoding statistics so you can optimize beyond those presets for your use-case [19:54] CentRookie: as beginner, you should just start out with one of the presets, and then extend them as you go and learn how encoding works. [19:54] CentRookie: all parameters are tradeoffs. Sometimes between compression and time, quality and space, or even different kinds of quality. [19:55] and most of them hugely depend on the specifics of the content [19:55] there wouldn't really be a point in options that are not a tradeoff :D [19:56] well it takes 2 min to read into 1 option, there are roughly 60 or more options, so it would take 120 min just to read them, I have spent already several weeks on this, so I kinda know what they mean. [19:56] still reading a histogram would go into the matter in deeper [19:56] not speaking about finding out which of the options are depreciated [19:57] 2 mins? that's quick. [19:57] as the documentation isn't really up to date concerning ffmpeg 2 [19:57] 2 mins to get a general understanding [19:57] ffmpeg doesn't really matter [19:57] if you use x264, then it does all the encoding [19:57] all parameters are from it [19:57] well ffmpeg and x264 documentation should be similiar [19:58] cant use one without the other except you go with a whole other encoder [20:01] what documentation? [20:05] CentRookie: I used to use x264 without ffmpeg (directly) [20:06] x264 as encoder, mk4tool for muxing, something else for aac, and mplayer for decoding the input (which in turn, of course, used ffmpeg :) [20:22] mm yes [20:23] actually is there a way to automate tests? [20:24] and to output side by side comparison images [20:24] dunno if somebody has already thought about that kind of feature [20:24] im running tests and am creating tens of video files and ahve to test them by hand and the naked eye [20:25] cbreak, i have a question regarding the first few seconds [20:25] in all my videos there seem to be an initial phase where image quality is really bad, like the first 3 seconds [20:26] i have not been able to figure out what causes it [20:26] might be movflag faststart? 
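Putting the two overlay threads above together: the comma inside lte(t,10) has to be quoted so the filtergraph parser does not treat it as a filter separator, and the requested fade-out between 10 s and 12 s can be done by running the overlay input through the fade filter with alpha=1 before overlaying. A sketch, assuming an ffmpeg recent enough to have overlay's enable option and fade's alpha/start_time options, and an overlay image with (or converted to) an alpha channel; file names are placeholders:

    # show the overlay only for the first 10 seconds (note the quotes protecting the comma)
    ffmpeg -i main.mp4 -i logo.png \
        -filter_complex "[0:v][1:v]overlay=10:main_h-overlay_h-10:enable='lte(t,10)'" out.mp4

    # fade the overlay's alpha out between t=10 and t=12 instead of cutting it
    ffmpeg -i main.mp4 -i logo.png \
        -filter_complex "[1:v]format=yuva420p,fade=type=out:start_time=10:duration=2:alpha=1[ov];[0:v][ov]overlay=10:main_h-overlay_h-10" out.mp4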
[20:26] as far as I know, h264 can't handle fades well. maybe. [20:27] the fast start I know is about the location of the index [20:27] well that index takes a certain amount of space right [20:28] you'll have it anyway [20:28] and if bitrate control tries to compensate it, it will affect video quality [20:28] only if you're dumb enough to use bandwidth limiting with two pass or so :) [20:28] nope, all 1 pass [20:28] but what do you mean with bandwidth limiting [20:28] with CRF it should try to get consistent quality [20:29] CentRookie: I mean a bitrate mode as opposed to a quality mode like crf [20:29] ah i see [20:29] but still dont really understand, isnt rc=2pass often used? [20:30] only if you don't care as much about quality and encode speed [20:30] and instead value exact file sizes most. [20:30] what's this about my size? [20:30] well its a trade off [20:31] i try to encode 1 hour worth of video into 120-130mb [20:31] so am working at quite the border [20:31] then you'd use 2 pass rate limiting [20:31] no, i use 1 pass [20:31] havent thought too much abot 2 pass [20:32] tkes too much encoding time [20:32] one pass will give you uneven quality [20:32] i know [20:32] that's the trade you pay [20:32] im not encoding 1 super important videos [20:32] but lots of videos [20:32] thousands [20:32] then I'd use crf [20:32] yep [20:32] im using crf 24 [20:33] you can't decide how big the movies will be, but at least they all get the same quality [20:33] but it tends to blow up file size [20:33] then use a bigger number [20:34] say, if i do 2 pass but lower ref 8 to 4 and bstratgy 2 to 1, and very fast first pass, then would I get better quality with the lower encoding settings? [20:34] no. [20:35] in what way no [20:35] 2 pass vs 1 pass doesn't increase quality. [20:35] the quality should be more even [20:35] it just makes sure it is all more or less the same quality [20:35] less reference frames hurt quality [20:36] well when we talk about low bitrate encoding, more even quality means higher overall quality [20:36] crf is usually superior to any number of passes in quality [20:36] for the same bitrate [20:36] (and same other settings) [20:36] depends [20:36] usually i can get superior compression with 2pass [20:37] nope. [20:37] with the same bitrate, you obviously get the exact same compression. [20:37] much better quality on smaller bitrate [20:37] and since crf makes more even quality than any number of passes [20:37] you get better quality [20:38] might be, but it doesnt control bitrate well [20:38] and since you only use one pass worth of encoding time, you could even throw in more expensive options [20:38] it doesn't control bitrate at all [20:38] lets say i do 1 pass encoding for 1 hour video, for hd i would need at least 1200kbps video rate for decent quality, if i do 2 pass, i can reach the same quality with 600kbps [20:39] crf is decent enough, but the bitrate control is nowhere as good as in 2pass [20:39] i get much smaller files with 2pass at same quality [20:39] no [20:39] well thats my experience [20:40] from testing hundreds of encodings [20:48] well i guess it can depend on input and bitrate you are working on [21:47] hey, I have a sound card with 1 audio input that provides 12 channels. How do I tell ffmpeg I want to just channel number 6 only? 
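To make the crf-versus-two-pass trade-off above concrete: crf targets a quality level and lets the file size fall where it may, while two-pass targets an average bitrate (the 600 kbps figure mentioned) at the cost of a second encode. A sketch with placeholder filenames:

    # one-pass constant quality: size varies, quality stays roughly even
    ffmpeg -i input.mkv -c:v libx264 -preset medium -crf 24 -c:a copy out_crf.mkv

    # two-pass at a fixed average bitrate: predictable size, quality only as good as the budget allows
    ffmpeg -y -i input.mkv -c:v libx264 -preset medium -b:v 600k -pass 1 -an -f null /dev/null && \
    ffmpeg    -i input.mkv -c:v libx264 -preset medium -b:v 600k -pass 2 -c:a copy out_2pass.mkv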
[21:48] (mono) [22:12] hmmrrgrr [22:13] i cant fix this noise effect no matter how high i set the bitrate [22:14] has anyone experienced that after encoding, the first 3-4 seconds are really ugly and bad in quality, until image quality stabilizes? [22:15] I thought it might be because of faststart, but it wasnt [22:47] is it bad in the input too? [22:47] is it bad with crf too? [22:49] i'm recording a full screen game to .mkv with the intention of it being high quality and fast to record, and then later i'll convert it to something smaller. is there something faster than .mkv I could use? disk space isn't an issue nor is speed (it's a high end SSD) [22:51] .mkv is a container only. [22:51] The container part of encoding is pretty damn quick. [22:51] okay so if I am losing frames it's a computational issue and I need to make the area i'm recording smaller? [22:51] no, input is very good [22:51] sorry, for late reply, was coding [22:52] Fieldy: Something like -c:v libx264 -preset:v veryfast -crf 0 will do a lossless dump of it. [22:52] Sorry [22:52] *ultrafast [22:52] I think. [22:52] crf with max bitrate is bad, yes, without max it is ridiculous huge [22:53] what does -crf do? it is not in the manpage I have [22:53] it is a x264-specific option [22:53] sets the constant rate factor [22:53] closest thing we have to "constant quality" in the video encoding :P [22:53] CentRookie, how are you using crf + maxrate? [22:53] it sounds like you're doing something wrong [22:54] probably. the command i'm doing is: ffmpeg -f alsa -ac 2 -i pulse -f x11grab -r 25 -s 1920x1026 -i :0.0+0,54+nomouse -acodec pcm_s16le -vcodec libx264 -preset ultrafast -crf 0 -threads 0 ~/temp/video.mkv [22:54] crf + maxrate (_and_ bufsize) should be rather good [22:54] to be honest there's so many examples out there in so many places, i end up with a mashup which could easily be wrong [22:54] at least better than similar 1pass bit rate-based [22:54] Fieldy, video-wise that looks fine [22:54] lossless H.264 [22:55] the issue is that it starts out at 25fps but drops to 14. on playback, it appears to play twice the actual speed [22:55] crf 0 with a 8bit x264 is lossless, with >8bit (9,10bit) x264 the lossless value is somewhere around negative [23:00] so if i can keep that at 25, i'll be golden. [23:09] how I am using maxrate and crf? together ^^ [23:09] it works on certain videos [23:09] but i have this problem even without crf [23:09] i used to do ratetol [23:09] Fieldy: What does your CPU usage look like when you're doing it? [23:10] sacarasc: not sure, closed everything down, spent way too much time on this today. but probably fairly high, what i'm capturing is very cpu intensive and multithreaded. it's a pretty strong system with lots of ram and a fast SSD, but it can only do so much. thanks for your input [23:11] CPU is more likely to be the bottleneck over anything else. [23:11] probably wouldn't have to mess with all of this if the linux version of the flight sim had functioning video capture in it (the win version does, grumble) [23:51] has anyone experienced "Incomplete MB-tree stats file." while encoding multipass ? 
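A small note on the lossless capture command above: as pointed out in the log, -crf 0 is only lossless on 8-bit x264 builds, while -qp 0 is, I believe, the bit-depth-independent way to ask libx264 for lossless. A sketch of the same capture with that change (audio source and geometry copied from the original command, the stray "+nomouse" suffix dropped):

    ffmpeg -f alsa -ac 2 -i pulse \
           -f x11grab -r 25 -s 1920x1026 -i :0.0+0,54 \
           -c:a pcm_s16le -c:v libx264 -preset ultrafast -qp 0 \
           ~/temp/video.mkv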
[00:00] --- Sat Aug 24 2013 From burek021 at gmail.com Sun Aug 25 02:05:02 2013 From: burek021 at gmail.com (burek) Date: Sun, 25 Aug 2013 02:05:02 +0200 (CEST) Subject: [Ffmpeg-devel-irc] ffmpeg-devel.log.20130824 Message-ID: <20130825000502.B2F3718A0147@apolo.teamnet.rs> [00:02] ffmpeg.git 03Paul B Mahol 07master:2a7545951926: pngdec: do not release buffer on failure instead report full progress [00:21] lol that youtube patch [00:22] converted from a perl script [01:11] ffmpeg.git 03Michael Niedermayer 07master:3941a4f5c2a4: snowenc: change a bunch of assert() to av_assert() [01:11] ffmpeg.git 03Michael Niedermayer 07master:5cc8b816875d: mpeg4videodec: fix GEOV/GEOX fliping [01:22] oh why lavc needs to do flipping? [01:22] ffmpeg.git 03Paul B Mahol 07master:83b915d495ea: truemotion1: use av_freep() [01:22] ffmpeg.git 03Paul B Mahol 07master:b8ff4f5ea3d3: truemotion1: check av_fast_malloc() return value [01:26] Action: durandal_1707 wonders why vp3.c does not have K&R commit [01:27] better delete it [01:27] "it's too ugly" [01:27] why, only theora part looks ugly [01:27] dunno [01:28] havent opened the file [02:27] ubitux: uses two predictors instead of one (like B-frame prediction in classic MPEG-like codecs; in vp9, this is a per-partition/per-block decision) [02:27] ubitux: and the references to predict from are stored in b->ref[0] and b->ref[1] [03:12] BBB: you mean if b->comp is set, it means picking from 2 different references instead of one? [03:12] yes [03:13] we have dsp->mc[6-log2sz][filter][0=put,1=avg][xfilter][yfilter]() [03:13] 0=put is first reference, 1=avg is second reference [03:14] if there is a second one it's doing an average between the two i guess? [03:14] (50-50?) [03:14] yes [03:15] think of it as two separate predictor filters (with separate MVs), and then averaging between the two as (a+b+1)>>1 [03:15] in simd, it's faster to do one store, then a load, avg, store, rather than two stores, two loads, avg, store [03:15] so that's why it's implemented like this [03:15] but it does the same thing [03:38] BBB: btw, mmh, there is no 10-bit with vp9? [03:38] (yeah totally unrelated i know) [03:39] no [03:40] do you know the reasons? [03:45] I think it's planned for a future extension profile or something along those lines [03:46] ah, cool, ok [03:46] just like the current profile is 420 only and 3-plane only (yuv), but the codebase already supports 444 and alpha, which are also planned for stabilization in a future profile [03:47] yeah, ok [03:48] michaelni: please use FF_CEIL_RSHIFT :( [03:49] (in reference to 5cc8b8168) [03:50] ubitux, you could write a cronjob that does a checkout, replaces them and commits it and sends me a pull req :) [03:50] yeah that sounds like the most optimal way :D [03:53] ffmpeg.git 03Michael Niedermayer 07master:9a271a9368ea: jpeg2000: check log2_cblk dimensions [04:15] ffmpeg.git 03Michael Niedermayer 07master:b99d3613cfdb: avcodec/h263dec: use FF_CEIL_RSHIFT() [04:25] ubitux: also I think I may soon have all keyframes decoding correctly in terms of giving same output as libvpx [04:29] 02:38:53 <"ubitux> BBB: btw, mmh, there is no 10-bit with vp9? --> not much point [06:50] ubitux: ok, all non-emuedge fate tests I added pass now; still a few emuedge bugs remaining, will fix those next [06:50] ubitux: any progress on your end? [11:21] ffmpeg.git 03Diego Biurrun 07master:8506ff97c9ea: vp56: Mark VP6-only optimizations as such. 
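To illustrate the compound-prediction averaging described above (two predictions with separate MVs, combined as (a+b+1)>>1, implemented as a "put" for the first reference followed by an "avg" for the second): a plain-C sketch with hypothetical function names, not the actual vp9 dsp entry points:

    #include <stdint.h>
    #include <stddef.h>

    /* First reference: just store the prediction into the destination block. */
    static void put_block(uint8_t *dst, ptrdiff_t dst_stride,
                          const uint8_t *pred, ptrdiff_t pred_stride, int w, int h)
    {
        for (int y = 0; y < h; y++, dst += dst_stride, pred += pred_stride)
            for (int x = 0; x < w; x++)
                dst[x] = pred[x];
    }

    /* Second reference: read back what put_block() stored and round-to-nearest
     * average it with the second prediction, i.e. (a + b + 1) >> 1. */
    static void avg_block(uint8_t *dst, ptrdiff_t dst_stride,
                          const uint8_t *pred, ptrdiff_t pred_stride, int w, int h)
    {
        for (int y = 0; y < h; y++, dst += dst_stride, pred += pred_stride)
            for (int x = 0; x < w; x++)
                dst[x] = (dst[x] + pred[x] + 1) >> 1;
    }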
[11:21] ffmpeg.git 03Michael Niedermayer 07master:f9418d156fe6: Merge commit '8506ff97c9ea4a1f52983497ecf8d4ef193403a9' [11:27] what was the method again to use valgrind for all fate tests? [11:30] ffmpeg.git 03Diego Biurrun 07master:84784c297fe6: libfdk-aacdec: formatting cosmetics [11:30] ffmpeg.git 03Michael Niedermayer 07master:edf6fb64e07c: Merge commit '84784c297fe6a6e538a7e111dcdbd8b893c2d275' [11:40] BBB, --valgrind=valgrind [11:43] ffmpeg.git 03Diego Biurrun 07master:f407856968dc: arm: h264chroma: Do not compile h264_chroma_mc* dependent on h264 decoder [11:43] ffmpeg.git 03Michael Niedermayer 07master:6067186f3a76: Merge remote-tracking branch 'qatar/master' [14:50] ubitux: the webvtt patch has bogus OOM handling; it should do av_free_packet(pkt) instead of av_free(pkt), right? [14:51] or, that should be additional [14:52] it also doesn't check av_malloc return values [15:50] Action: BBB pokes ubitux [16:09] BBB: yeah i think i start to get it, but it takes me some time [16:09] ok [16:09] wm4: no idea, will look later eventually [16:09] BBB: why the 4 inter pred modes start at 10? [16:10] so they can be used in the same context as intra modes (of which there are 10) [16:10] if we can somehow separate that, then I'm fine with removing that [16:12] ah ok; it would have been more obvious if that enum was in vp9.h below its brother (and eventually using N_INTRA_PRED_MODES as start value) [16:13] i see 16 intra pred modes though [16:13] (maybe that was supposed to be 0x10?) [16:19] BBB: am i missing sth obvious? [16:23] on a side note, it would be nice to have some prefix for inter & intra pred mode macro [16:27] ubitux: 10 are coded, the rest are inferred [16:27] naming is just to be consistent with h264pred.h and vp8.h [16:28] eventually I'd like to merge these kind of things [16:28] ubitux: basically there's 10 modes [16:28] ubitux: but some modes mean something different depending on which edges are available [16:28] ubitux: e.g. DC on top row means something else than DC in the middle of the image [16:29] oh, ok [16:29] so for code simlicity (simd) reasons, we do that branch in the calling code, and thus add extra intra modes which we use if the coded mode is dc but not all edges are available [16:29] so that's dc_top, dc_left, dc_128 [16:29] then dc_127/dc_129 are just variations thereof (vp9 defines dc with no edges as 128, but the top edge is 127 and the left edge is 129 for all other intra pred modes [16:30] vp8 did that too [16:30] ok [16:30] so b->mode is only 0-9 [16:30] but the intra pred simd functions can be called with 0-14 [16:38] my todo list is shrinking quite quickly now [16:38] nice [16:40] BBB: i don't get the difference between NEAREST and NEAR mv [16:40] is it related to the reference used? [16:41] zeromv is suppose that's the latest one (no further sub division), and newmv is the opposite (a subdivision, so in case of 8x8 the latest possible 4 partitions?) [16:41] it's what motion vector you use [16:41] s/is supposed/i suppose/ [16:41] ZEROMV means the motion vector is y=0, x=0 [16:41] NEWMV means we code an actual motion vector difference in the bitstream using NEARESTMV as a reference [16:42] NEARESTMV is whatever is the first motion vector returned from find_ref_mvs() [16:42] NEARMV is the second motion vector returned from find_ref_mvs() [16:43] ah. 
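On the OOM-handling remark above (check av_malloc() results, and use av_free_packet() rather than av_free() on the error path): a schematic error path for a demuxer-style routine; everything around the cleanup pattern is hypothetical:

    #include <libavcodec/avcodec.h>

    static int read_packet_payload(AVPacket *pkt, int size, int extra_size)
    {
        uint8_t *buf;

        if (av_new_packet(pkt, size) < 0)   /* allocation of the packet buffer can fail too */
            return AVERROR(ENOMEM);

        buf = av_malloc(extra_size);        /* every av_malloc() result needs checking */
        if (!buf) {
            av_free_packet(pkt);            /* not av_free(pkt): the packet owns a buffer */
            return AVERROR(ENOMEM);
        }
        /* ... fill pkt->data using buf ... */
        av_free(buf);
        return 0;
    }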
[16:43] the first/second motion vector simply are unique MVs found in surrounding blocks [16:43] first the left neihgbour, then the top, then the topleft, then those further away [16:43] see the table in find_ref_mvs() [16:43] we first try to find one that uses the same reference as us [16:44] if that didn't work, we look for ones that use a different reference [16:44] larger blocks use a larger neighbourhood search radius, so to say, so they more sparsely look in blocks that are located further away [16:45] then once you know your motion vector, the rest is straightforward [16:46] Action: BBB goes to bed [16:47] ok thx, 'night :) [19:06] ffmpeg.git 03Michael Niedermayer 07master:7495186fd49f: avcodec/h263dec: fix aspect of lead h263 EHC [20:00] ffmpeg.git 03Michael Niedermayer 07master:5c6a58746b41: ffplay: make next_nb_channels[] static const [22:13] BBB: ping [23:13] ffmpeg.git 03Michael Niedermayer 07master:f55a7ba0376d: avcodec/ituh263dec: detect and warn about RTP [23:28] ffmpeg.git 03Michael Niedermayer 07master:88909beca39e: avcodec/movenc: move chapter_properties under the #if of the code that uses it [23:31] michaelni: avformat* [23:36] ^^;; [00:00] --- Sun Aug 25 2013 From burek021 at gmail.com Sun Aug 25 02:05:01 2013 From: burek021 at gmail.com (burek) Date: Sun, 25 Aug 2013 02:05:01 +0200 (CEST) Subject: [Ffmpeg-devel-irc] ffmpeg.log.20130824 Message-ID: <20130825000501.A809A18A00FB@apolo.teamnet.rs> [01:31] hello all :) [01:32] Does somebody of your knowledge encoders tell me what it is that I have to change in order to fix MB-tree frametype 0 doesn't match actual frametype 2 [01:33] I try to run 1pass with ultrafast but obviously it doesnt work well with my 2nd pass settings, but first pass works well with veryfast [01:39] hm, i guess it cant be fixed [01:54] so silent [01:54] maybe they are all ghosts [01:54] does somebody have experience with multi file 2 pass encoding? [02:10] when i try to make a timelapse video using the command "ffmpeg -y -r 5.0 -f image2 -i /home/angus/glapse/%09d.jpg /home/angus/glapse/timelapse.mp4.avi" i get a video that's much much whiter than the original images [02:10] is there anyway i can avoid this? [02:19] -chromaoffset -2 [02:22] that doesn't change anything [02:22] :( [02:23] i doubt it actually parses the images like videos, since you have no interframes [02:23] so there isnt much you can do with filters [02:26] ugh, just got a friend to look at it and it's fine for him, so i think it's probably the player that's screwing not ffmpeg [02:27] sorry for the time waster [02:41] people, I'm with trouble with decoding HD h.264 using crystalhd hw accel. w/ gst-crystal all works fine, any recent issue related to crystalhd h.264 dec? [02:41] btw, I'm using bcm 70012 hw decoder [05:59] how much RAM is enough for ffmpeg if you encode video ? if there is 8GB RAM will ffmpeg use it all ? what if there is 32GB RAM ? [06:00] elkng: you can use `top` to view its memory usage. [06:02] I doubt it would use that much memory unless your frames were *HUGE* [06:02] like xbox [06:02] lol [06:05] I'm converting 1920x800 to 650x350 and top shows it uses "81MB RSS" so its enough about 128MB ? [06:06] seem like the only thing that benefit from huge amount of RAM is kernel compilation [06:06] virtualization :x [06:06] that uses quite some ram too [06:06] relaxed: "like xbox", what do you mean by that ? 
[06:07] klaxa: blender [06:07] that too [06:07] also: http://knowyourmeme.com/memes/huge-like-xbox-hueg-like-xbox [06:07] "huge like xbox" was a meme [06:07] re: xbox hueg [06:08] or gimp could consume much RAM [06:08] or converters like imagemagic [06:09] actually I remember run imagemagic to convert some image, it was about 10000x5000 or so on a machine with 1GB RAM and it was killed by system because of "out of memory" issues, and I wasn't been able to convert those image, even with 1GB RAM [06:11] so seems like all ffmpeg need RAM for is for one current frame ? so it covert video frame by frame and the most needed amount of RAM for ffmpeg procees depends on size of one single frame ? [06:12] yeah imagemagic eats ram like pacman eats pills [06:12] *imagemagick [06:12] i think it also depends on the codec? [06:12] I though there is some sort of optimization for ffmpeg, some kind of cashing or so, the more RAM the faster procees [06:12] x264 needs a lot of ram for m-trees i think although i have no idea how they work exactly [06:12] but i think they optimize P and B frames? [06:13] hmm... encoding one frame will probably take significantly longer than reading a frame from the disc [06:13] *disk [06:13] I'm converting now from "1920x800 x264" to "650x350 mpeg4" [06:15] and it eats 81MB RAM [06:16] you could put the input file to /tmp if it is mounted as a ramfs and output to /tmp too [06:16] that would speed up reading the file from the filesystem [06:16] since it is actually in ram already [06:16] but i don't think it will speed up the process really UNLESS, your bottleneck is your disk io [06:16] which i highly doubt [07:14] I have atom 1.6 [07:15] optimizing on the ram-side is probably misplaced then :P [09:14] I have a general codec question [09:14] http://developer.android.com/reference/android/media/MediaCodecInfo.CodecCapabilities.html#COLOR_TI_FormatYUV420PackedSemiPlanar [09:14] This 'color format' says it's packed AND semi-planar. [09:14] how is that possible [09:14] ? [11:27] liquidmetal, I wonder if that's something like NV12 [11:28] (not related to nvidia even with that name) [11:28] JEEB, I understand planar formats [11:28] I understand packed formats [11:28] aren't they mutually exclusive? [11:29] well, if you have one plane as planar and two of the other planes as packed together [11:29] (okay, I kinda understand those two terms) [11:29] although to be honest that thing doesn't document what it is so lol [11:30] liquidmetal, and yes [11:30] COLOR_TI_FormatYUV420PackedSemiPlanar is NV12 [11:30] from a random piece of code I found on the internet [11:30] ah! [11:30] So packed semi-planar means, one part is planar and the other is packed [11:31] somehing like that, the name in this case is rather ambiguous [11:31] Got it! [11:31] Now I need to figure out how to decode NV12 on android [11:43] Why is there no avformat_close_output method? Am I missing something? [11:49] I could really use some help with converting an audio file to another format. I have checked the examples and documentation, however each example seems to take a different approach to tasks like opening/closing files and writing to them. [11:50] There also seems to be a general absence of symmetry in ffmpeg. Is there a particular reason, why opening and writing to files are so very different for input and output? 
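To make the NV12 ("packed semi-planar") layout discussed above concrete: a full-resolution Y plane is followed by a single half-resolution plane in which U and V samples are interleaved. A small indexing sketch in C, assuming tightly packed data with no per-line padding (real buffers usually have a stride/alignment to respect):

    #include <stdint.h>

    /* Fetch the Y/U/V samples for pixel (x, y) from a tightly packed NV12 buffer. */
    static void nv12_sample(const uint8_t *buf, int width, int height,
                            int x, int y, uint8_t *Y, uint8_t *U, uint8_t *V)
    {
        const uint8_t *y_plane  = buf;                    /* width * height bytes          */
        const uint8_t *uv_plane = buf + width * height;   /* width * height / 2 bytes      */
        int uv = (y / 2) * width + (x / 2) * 2;           /* one interleaved UV pair per 2x2 block */

        *Y = y_plane[y * width + x];
        *U = uv_plane[uv];
        *V = uv_plane[uv + 1];
    }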
[12:34] A great start would be, if someone could explain to me, what open_audio in http://ffmpeg.org/doxygen/trunk/doc_2examples_2muxing_8c-example.html does [12:35] I think everything it computes should be determined by output format and codec. [13:13] Looking at http://developer.android.com/reference/android/media/MediaCodecInfo.CodecCapabilities.html I found that there are two different color formats mentioned there: [13:13] COLOR_FormatYUV422SemiPlanar [13:13] COLOR_FormatYUV422PackedSemiPlanar [13:14] This makes me wonder - what's the difference between these two formats? [13:17] hello [13:17] is there a version of ffmpeg that supports hardware acceleration [13:18] compile it [13:21] blez, what is "hardware acceleration" for you? [13:21] liquidmetal, I think line alignment [13:21] CUDA support for example? [13:21] liquidmetal, but I haven't used it enough to be sure [13:21] blez, for which part of transcoding process? [13:22] Mavrik, I'm sure the naming wouldn't be android specific - so there must be some documentation about this. [13:22] anyway, no, there are no CUDA encoders in ffmpeg at the moment since they're silly [13:22] Any clue where I can find more about this? [13:22] liquidmetal, why are you sure naming isn't android specific? [13:22] liquidmetal, and you will find that info in SoC documentation [13:22] so look at qualcomm, nvidia [13:23] SoC documentation? [13:23] Mavrik, sure because of faith in Android :) Which means I could be wrong [13:24] But I just want to know the difference between 'packed semiplanar' and 'semiplanar' [13:25] what about OpenCL? [13:31] liquidmetal, and I'm telling you you need to consult documentation of a SoC that produces that kind of images [13:31] since in Android doc it's not clearly specified [13:32] liquidmetal, also, if you love youself just a little [13:32] you'll forget about MediaCodec API for a few versions of Android more at least [13:32] it's a horrible horrible mess [13:33] Mavrik, they now have tests for those APIs - so they'll stay consistent with newer version of android [13:34] Just that the % of devices they work on is limited - I don't care about that just yet [13:34] liquidmetal, the problem is that pixel formats aren't defined [13:34] and the format in which the devices expect frames varies wildly without a reliable way to check [13:35] anyway, as I said, I think the non-packed YUV:4:2:2 has to be line aligned on 16-byte mark, but I'm not 100% sure [13:39] liquidmetal, doh sorry, I'm talking crazy talk [13:39] liquidmetal, "packed" formats have luma and chroma channels interleaved [13:39] Mavrik, how's that different from semiplanar? [13:40] good question actually [13:41] that's also what doesn't make sense to me [14:18] hi [14:18] im trying to run multiple 2 pass encodings simultanously [14:19] but the mbtrees keep overwriting each other [14:19] is there a way to define the mbtree names? [14:21] -passlogfile [14:22] it also applies to mbtree? [14:22] and i only need that for first pass encoding right [14:23] yes and no [14:23] thanks :) [14:23] the second pass needs the name too [14:23] hmm [14:23] do i assign it as input somehow? [14:24] or do i say for 2nd pass -passlogfile video02 [14:24] the same name for both [14:24] passes [14:24] ok [14:24] helpful like always :) [14:25] wish there was a parameter in ffmpeg to auto remove logtrees after last pass [14:27] it's trivial to script [14:28] yup [14:28] still it is so useful [14:28] if there was such an option i mean [15:36] i am trying to convert audio. 
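For the parallel two-pass question above: giving each job its own -passlogfile prefix (the same prefix on both passes of that job) keeps the stats files from overwriting each other, and the cleanup that was wished for is a one-line rm. A sketch with placeholder names; the exact stats filenames (prefix-0.log, prefix-0.log.mbtree) may vary slightly between versions:

    # job "video02" -- a second simultaneous job would use e.g. -passlogfile video03
    ffmpeg -y -i video02.mkv -c:v libx264 -b:v 600k -pass 1 -passlogfile video02 -an -f null /dev/null && \
    ffmpeg    -i video02.mkv -c:v libx264 -b:v 600k -pass 2 -passlogfile video02 -c:a copy video02_out.mkv && \
    rm -f video02-*.log video02-*.log.mbtree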
decode works and got packet, however avcodec_encode_audio2 crashes. Is there something I need to do with the decoded frame, before I can hand it to avcodec_encode_audio2? Code is here: http://pastebin.com/UkQywDHF [15:51] in the examples/muxing.c write_audio_frame there is a resampling step. Is resampling necessary, if you get the frame from avcodec_decode_audio4? [15:59] Would be smarter, if this step was integrated in the respective encode/decode functions [16:08] hey, I'm trying to record audio from alsa with: ffmpeg -f alsa -ac 1 -i hw:1,0 -dn -vn -codec:a libfdk_aac -flags +qscale -ar 44100 -y test.mkv -- but I am getting: "[alsa @ 0x2333bc0] cannot set sample format 0x10000 2 (Invalid argument)" -- the sample format used by arecord is "S32_LE" but I can't see it in the list of -sample_fmts -- any ideas? [16:10] just try s32p maybe? :x [16:11] and s32 [16:11] one of them might sound correct [16:12] klaxa: I'm not sure where to put it, but I tried: ffmpeg -f alsa -ac 1 -sample_fmt s32 -i hw:1,0 -sample_fmt s32 -dn -vn -codec:a libfdk_aac -flags +qscale -ar 44100 -y test.mkv -- and it throws the same: "[alsa @ 0xe74cc0] cannot set sample format 0x10000 2 (Invalid argument)" [16:12] am I putting it in the wrong place? [16:12] no i think that place is right [16:12] wait [16:12] no matter what sample format I use, it always says: "cannot set sample format 0x10000 2" [16:12] remove the second one [16:12] hmm yeah weird [16:13] same thing with: ffmpeg -f alsa -ac 1 -sample_fmt s32 -i hw:1,0 -dn -vn -codec:a libfdk_aac -flags +qscale -ar 44100 -y test.mkv [16:13] ffmpeg -f alsa -ac 1 -sample_fmt dblp -i hw:1,0 -dn -vn -codec:a libfdk_aac -flags +qscale -ar 44100 -y test.mkv --- also returns: "cannot set sample format 0x10000 2 (Invalid argument)" :/ [16:14] can you try: ffmpeg -f alsa -i hw:1,0 test.wav ? [16:15] and if that works add more and more arguments to the command line [16:15] yep: [alsa @ 0xd738c0] cannot set sample format 0x10000 2 (Invalid argument) [16:15] hw:1,0: Input/output error [16:15] so [16:15] input output error sounds quite suspicioius [16:15] second hardware soundcard, first device? [16:16] this is the format from arecord: http://pastie.org/8265575 [16:16] the output I mean [16:16] so the hardware is fine :/ [16:17] and the resulting wav file plays just fine [16:17] hmm hmm [16:20] klaxa: the full ffmpeg debug output: http://pastie.org/8265584 [16:21] hmm dunno wait for someone who knows about this to show up :/ [16:23] also, check this out: http://pastie.org/8265588 -- so no matter what sample format I try, it always says: "cannot set sample format 0x10000 2" - hmmm :/ [16:44] for a workaround I'm recording through dsnoop which is working :) [16:45] but the audio input has 12 channels - how do I record just channel 6 for instance? -- I'm trying ffmpeg -f alsa -ac 12 -i plug:capt -ar 44100 -map_channel 0.0.5 -y test.wav -- but I'm getting silence (probably channel 1) [16:45] any ideas? [16:55] 0.0.5 ? [16:55] is that wrong? 
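On the question above about whether the resampling step from muxing.c is needed when the frames come from avcodec_decode_audio4: only when the decoder's sample format, rate or channel layout differs from what the chosen encoder accepts, and feeding the encoder a frame in a format it does not accept can be one cause of the kind of crash described. A sketch of bridging the two with libswresample, assuming dec_ctx/enc_ctx are the opened decoder and encoder contexts; buffer allocation and most error checks are trimmed:

    #include <libavcodec/avcodec.h>
    #include <libswresample/swresample.h>

    /* Build a converter from the decoder's output format to the encoder's input format. */
    static SwrContext *make_resampler(const AVCodecContext *dec_ctx, const AVCodecContext *enc_ctx)
    {
        SwrContext *swr = swr_alloc_set_opts(NULL,
                enc_ctx->channel_layout, enc_ctx->sample_fmt, enc_ctx->sample_rate,
                dec_ctx->channel_layout, dec_ctx->sample_fmt, dec_ctx->sample_rate,
                0, NULL);
        if (swr && swr_init(swr) < 0)
            swr_free(&swr);
        return swr;
    }

    /* per decoded frame, with out_data prepared via av_samples_alloc() for enc_ctx's format:
     *     int n = swr_convert(swr, out_data, out_samples,
     *                         (const uint8_t **)frame->extended_data, frame->nb_samples);
     */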
[16:56] I tried this: ffmpeg -f alsa -ac 12 -i plug:capt -ar 44100 -map_channel 0.0.0 -y test0.wav -map_channel 0.0.1 -y test1.wav -map_channel 0.0.2 -y test2.wav -map_channel 0.0.3 -y test3.wav -map_channel 0.0.4 -y test4.wav -map_channel 0.0.5 -y test5.wav -map_channel 0.0.6 -y test6.wav -map_channel 0.0.7 -y test7.wav -map_channel 0.0.8 -y test8.wav -map_channel 0.0.9 -y test9.wav -map_channel 0.0.10 -y test10.wav -map_channel 0.0.11 -y test11.wav [16:56] according to the manual, it should record each channel to a separate file [16:57] but every one of the files is just complete silence :/ -- if I do just ffmpeg -f alsa -ac 12 -i plug:capt -ar 44100 -y test.wav -- I get a 12 channel wav file and there's sound on 8 of the 12 channels [16:57] (only 8 channels have microphones so this is expected) [16:59] the example given is ffmpeg -i INPUT -map_channel 0.0.0 OUTPUT_CH0 -map_channel 0.0.1 OUTPUT_CH1 -- which is what I did above :/ - any ideas why I am getting silence in all the output files? [16:59] hackeron, http://ffmpeg.org/ffmpeg.html then search for "map_channal' has some examples [17:00] zap0: that's what I'm doing [17:00] the example is: "ffmpeg -i INPUT -map_channel 0.0.0 OUTPUT_CH0 -map_channel 0.0.1 OUTPUT_CH1" -- what I'm doing is: "ffmpeg -f alsa -ac 12 -i plug:capt -ar 44100 -map_channel 0.0.0 -y test0.wav -map_channel 0.0.1 -y test1.wav -map_channel 0.0.2 -y test2.wav -map_channel 0.0.3 -y test3.wav -map_channel 0.0.4 -y test4.wav -map_channel 0.0.5 -y test5.wav -map_channel 0.0.6 [17:00] -y test6.wav -map_channel 0.0.7 -y test7.wav -map_channel 0.0.8 -y test8.wav -map_channel 0.0.9 -y test9.wav -map_channel 0.0.10 -y test10.wav -map_channel 0.0.11 -y test11.wav" [17:00] (for 12 channels instead of 2) [17:00] and getting silence on all channels [17:01] if file 0, stream 0 really the input? [17:01] yes: Input #0, alsa, from 'plug:capt': [17:01] Duration: N/A, start: 1377355941.479040, bitrate: 9216 kb/s [17:01] Stream #0:0: Audio: pcm_s16le, 48000 Hz, 12 channels, s16, 9216 kb/s [17:02] am I misunderstanding something? [17:03] if I try to use any other file or stream input, other than 0, it says: "mapchan: invalid input file stream index #0.1" [17:03] and if I use a channel higher than 11, it says "mapchan: invalid audio channel #0.0.12" - so I'm using the right parameters it seems [17:03] why haven't you told in the format.. s16le [17:03] it/ [17:04] zap0: I don't need to, dsnoop does in .asoundrc - also ffmpeg -f alsa -ac 12 -i plug:capt -ar 44100 -y test.wav -- records all 12 channels just fine [17:05] if you say so. [17:07] zap0: ok, I changed the command to: ffmpeg -f alsa -ac 12 -sample_fmt s16 -i plug:capt -ar 44100 -map_channel 0.0.0 -y test0.wav -map_channel 0.0.1 -y test1.wav -map_channel 0.0.2 -y test2.wav -map_channel 0.0.3 -y test3.wav -map_channel 0.0.4 -y test4.wav -map_channel 0.0.5 -y test5.wav -map_channel 0.0.6 -y test6.wav -map_channel 0.0.7 -y test7.wav -map_channel 0.0.8 -y test8.wav -map_channel 0.0.9 -y test9.wav -map_channel 0.0.10 -y ... [17:07] ... test10.wav -map_channel 0.0.11 -y test11.wav -- no change, silence in all output files [17:07] any other ideas? [17:07] are they the file length you expected? [17:08] also, im not sure s16 is valid.. how does it know endianess? [17:08] yep, all 7 seconds - if I add -t 5 to the beginning, all are 5 seconds [17:09] the only ones available are: s8 s16 s32 flt dbl u8p s16p s32p fltp dblp -- which one is the correct one? [17:11] don't know... 
i don't use -sample_fmt i use something else [17:11] what do you use? [17:11] i'm trying to find it (hence the delay) [17:11] -f s16le [17:13] ok, current command is: ffmpeg -f alsa -ac 12 -i plug:capt -f s16le -ar 44100 -map_channel 0.0.0 -y test0.wav -map_channel 0.0.1 -y test1.wav -map_channel 0.0.2 -y test2.wav -map_channel 0.0.3 -y test3.wav -map_channel 0.0.4 -y test4.wav -map_channel 0.0.5 -y test5.wav -map_channel 0.0.6 -y test6.wav -map_channel 0.0.7 -y test7.wav -map_channel 0.0.8 -y test8.wav -map_channel 0.0.9 -y test9.wav -map_channel 0.0.10 -y test10.wav -map_channel ... [17:13] ... 0.0.11 -y test11.wav [17:13] silence in all output wav files :/ [17:13] input is: Stream #0:0: Audio: pcm_s16le, 48000 Hz, 12 channels, s16, 9216 kb/s - so that looks right [17:14] have you given the output file format? [17:14] if I output to a single output.wav file, it outputs all 12 channels correctly [17:14] no, I haven't [17:15] doesn't seem like I need to? < # file test1.wav [17:15] test1.wav: RIFF (little-endian) data, WAVE audio, Microsoft PCM, 16 bit, mono 48000 Hz [17:16] hwo do you know it's silence? [17:16] I opened it in audacity, there is absolutely no signal [17:17] if you open it in notepad... can you literally see the file is just a bunch of NUL chars ? (of whatever value is audio-silence) ? [17:17] or whatever/ [17:17] yes: ^@^@^@^@^@^@^@^@^@^@^@^@ [17:17] just a bunch of that, nothing else [17:17] do you have a sample of this 12chan file i could download and try ? [17:18] sure [17:18] one sec [17:18] k [17:18] back in 3 [17:20] recorded with: ffmpeg -t 10 -f alsa -ac 12 -i plug:capt -y test.wav -- http://itstar.co.uk/test.wav [17:21] .ogx file? it's downloading..... slowly.... ETA 14mins [17:22] so as you can see, all 12 channels are there and there is signal on channels 3,4,5,6,7,8 and 11 and 12 is pink noise [17:22] .ogx? -- it's test.wav: RIFF (little-endian) data, WAVE audio, Microsoft PCM, 16 bit, 12 channels 48000 Hz [17:23] so now I need to figure out why -map_channel isn't working to split the channels to separate output files (or to pick just a single channel) [17:24] yep.. test.ogx and VLC plays it (although i only have stereo speakers), but it's info says 12 chns [17:24] if you open it with audacity, it will show a waveform for each separate channel [17:24] and allow you to solo each channel [17:26] so any ideas how to record just 1 channel out of the 12? [17:26] the levels appear to be VERY low.. i 'see' 3,4,5,6,7,8,11,12 the others appear to be silent. [17:26] say channel 4 [17:26] yes, the others are silent [17:29] zap0: so any ideas how to record just channel 4 for instance? [17:31] trying a few things [17:31] thanks :) [17:31] hi guys [17:31] anybody can help me? [17:32] i has been git ffmpeg to my computer [17:32] make all [17:32] error:libmp3lame >=3.98.3 not found [17:33] what's means [17:34] mecil9: http://bit.ly/1dCQNEK [17:36] i can't open the pages [17:38] hackeron.. ffmpeg says during file creation.. "-map_channel is forwarded to lavfi similarly to -af pan=0x4:c0=c6." [17:39] hackeron, then pan says: "This syntax is deprecated. Use '|' to separate the list items" [17:39] zap0: yeh, I saw that -- is ffmpeg doing it wrong? [17:39] Codec question: if I present an h.264 codec (not necessarily the one in ffmpeg) with a black-and-white image, is it likely to be able to use the fact that there's no energy in the U and V channels to improve compression? [17:39] hackeron, maybe.. 
try using | instead [17:39] I understand that h.264 specifies the decoder, not the encoder, so presumably various codecs could handle this differently. [17:41] zap0: no difference at all [17:42] zap0: I'm doing: for i in 0 1 2 3 4 5 6 7 8 9 10 11; do ffmpeg -i test.wav -t 5 -dn -vn -codec:a libfdk_aac -flags +qscale -ar 44100 -af "pan=0x4|c0=c${i}" -y ch${i}.aac; done --- no deprecetaion warnings anymore, silence in output files (3KB per file, empty waveform, etc) [17:42] zap0: are you able to record just 1 channel from that test recording? [17:44] i have a ffmpeg command line that produces are correct file (although it is still full of silence). [17:44] it makes a .wav in the right size, header etc.. just full of NUL chars :( [17:45] yeh, that's what I'm getting :/ [17:45] all examples I can find say something like: ffmpeg -i stereo.wav -map_channel 0.0.1 right_mono.wav -- the 1 in 0.0.1 being the channel id -- but the output file is silence regardless of channel id :/ [17:45] is this a one-off problem? can you not just use Audacity? [17:45] no, it isn't, it's for recording live audio [17:46] ok [17:47] the other day i write a WAV reader/writer in C. i'd have just used my own code by now :) [17:48] lol, can it record from just 1 specified channel from an alsa source with 12 channels? [17:48] I once wrote a WAV reader in Javascript, in windows scripting host. Complicated it ain't. [17:48] and create segment files? [17:48] hackeron: maybe use -af pan ? ? http://superuser.com/questions/601972/ffmpeg-isolate-one-audio-channel [17:49] zap0: that's the page I have open on the screen - no change [17:49] zap0: output file is silence :/ [17:49] it looks like ffmpeg is broken when dealing with 12 channel inputs? [17:50] I haven't heard the whole conversation - are you trying to process a 12 channel wav? [17:50] Hfuy, i'm currently trying to build a simple waveform generator/player in JS, to run on mobile phone browsers! [17:50] zap0: The problem is not so much the language itself, it's the facilities provided by the environment. The trick for doing it in windows was simply how to get a binary stream into a series of numbers. Figure that out and it's trivial. [17:51] Hfuy: yep, here's a wav file, it is 12 channels: http://itstar.co.uk/test.wav -- I need to be able to get just 1 channel out of it with ffmpeg. All the documented methods including -af pan and -map_channel are not working and producing an empty output file. For example: for i in 0 1 2 3 4 5 6 7 8 9 10 11; do ffmpeg -i test.wav -codec:a libfdk_aac -flags +qscale -ar 44100 -af "pan=1|c0=c${i}" -y ch${i}.aac; done [17:51] I'm actually lying anyway - I did it in windows script host, but there was some aspect or other of binary handling that the JScript interpreter's tendency to make everything into ASCII was screwing up. I had to use a tiny bit of VBscript to get around it. But it worked. [17:52] Mmm, I wouldn't be surprised if any $SOFTWARE had a problem reading multichannel audio, they often do. But equally it isn't a complex format. [17:52] i feel so dirty, even just hearing about the use of VB script ;) [17:52] zap0: Imagine how dirty I felt writing it. But it was only a couple of lines. [17:53] hackeron: Sorry, you're massively exceeding my experience with ffmpeg. [17:54] I don't even know if your commandlines have the correct intent, let alone their likely performance. [17:54] I have to ask, though: where the hell did you get a 12-channel wave from?! 
[17:56] zap0: http://pastebin.com/YCpVZw7S [17:57] Action: zap0 reads [17:58] ok, filed a bug report: http://trac.ffmpeg.org/ticket/2899 [17:59] lol.. i just tried ffmpeg 2008 .. "unrecognized option '-map_channel'" [17:59] zap0: If you can spot the big ugly cheat, which isn't in that file, you can have one of my doughnuts. [18:00] zap0: yeh, it was added in 2011 I believe? [18:00] on a diet [18:00] hackeron, good idea. [18:00] : re bug report.. good idea,. [18:00] thanks :) [18:01] hackeron, it's not likely to get looked at unless a 12 channel input file is available... so add a link in your bug report. or at least email so you can be contacted [18:01] zap0: the first line is a link [18:02] I would expect it's somewhat unlikely to be looked at anyway. [18:02] if a dev has an interest in 12 chn audio... it might get a lot of attention [18:02] And the likelihood of that is... [18:02] Hfuy: actually every bug I filed to ffmpeg has generally been looked at very quickly [18:03] Hey, did I figure out how to parse wave files without needing my evil vbscript hack? I think I did! [18:03] Action: Hfuy is a genius [18:04] hackeron, success!! [18:04] my genius mice's buttons died early ;/ [18:04] zap0: yeh??? [18:04] hackeron.. oh oh oh... O M G... w000tttt!!!! /me runs about the room naked! [18:04] zap0: I'm going to join you, what is it??? [18:04] hackeron.. used an older ffmpeg.. the file is non-full-of-NULs [18:05] lemme listen.. back in 3 [18:05] zap0: lol! - ok, now to figure out what someone broke and how and get a developer to revert it [18:05] zap0 <= tin-foil-hat is a must! [18:06] Hfuy <= in C? for loop? [18:06] hackeron, i selected channel 11,, it sounds a bit like white noise heard thru a toilet roll stuck to ones ear. [18:06] braincracker: Sorry? [18:06] hackeron: where did this 12 channel audio come from, out of interest? [18:06] zap0: yep, channels 11 and 12 are pink/white noise -- try something like channels 3 to 8 [18:06] [180330] Hey, did I figure out how to parse wave files without needing my evil vbscript hack? I think I did! [18:06] Hfuy: M-Aaudio lt1010 sound card [18:06] don't you just hate vb* ? [18:07] Oh I do. [18:07] Hfuy: it has 8 analog inputs and 4 digital inputs [18:07] I did it in Javascript. [18:07] I can't quite figure out how, but apparently I did. [18:08] okey, most ms things work like this. [18:09] hackeron, channel 3 sounded like pink noise too. channel 8 sounds very quite.. just turned up it sounds a bit like shitty audio chips on motherboards that produce digital noise into their crappy pre-amps [18:09] is there a specific channel that has something very distringuishable ? [18:09] zap0: yeh, it is recording sound in a few rooms that are empty right now - but at least it is working :D [18:09] yes! [18:09] zap0: well, you should be able to hear rain on 7 and 8 I believe? [18:10] zap0: ok, so any ideas what revision broke it and what specific code? [18:10] nobody knows how, it just works [18:10] also, can you add a comment what version works for you? [18:10] I'm not sure if it's really an MS thing or a JS thing. [18:10] The situation is that the file reader you get in JScript expects to work on text files, and munges character values above 128 in certain circumstances. [18:10] hackeron, ffmpeg.exe --version ffmpeg version N-35295-gb55dd10, built on Nov 30 2011 00:52:52 with gcc 4.6.2 [18:10] ddos logs will be forwarded to authorities [18:11] Hfuy, surely it has a binary mode!? 
[18:12] zap0: it's so annoying :( - I need the latest version of ffmpeg because it has all the -segment beauty but -map_channel is broken, grrr [18:12] hackeron, at least we identified it's a bug.. and not a missing feature.. so it should be fixable... quickly-ish [18:12] zap0: Well, to be completely fair, the function is called openTextFile() [18:13] zap0: yeh, I've added a comment: "Also note that ffmpeg version: N-35295-gb55dd10, built on Nov 30 2011 00:52:52 with gcc 4.6.2 works fine, but latest trunk is broken. [18:13] " [18:13] hackeron, let me find a mid point-date-wise.. see if thats broken too.. maybe something near... oct/nov 2012 [18:17] hackeron, ffmpeg-20121003-git-df82454 throws an error, and writes a zero sized output file. [18:18] zap0: Aha. You have to do some chicanery with translating unicode numbers to their ASCII equivalents. [18:18] zap0: aha, so somewhere between 201111 and 20121003, lol? [18:18] If you read a "text" file byte >127 then do charCodeAt() on it, you'll get a unicode number if it's >127. [18:19] hackeron, indeed!! the last verison i just quoted has some git reference... perhaps that is valuable to someone [18:19] Which is why I got Visual Studio and started using C# instead :) [18:21] zap0: hmm, someone replied - have a look at the ticket [18:22] I don't get that. How is there supposed to be a "known" channel layout for 12 arbitrary inputs? [18:23] cause 5.1 and 7.1 has known layouts... 12 is not a "standard" [18:23] Pan filter just takes numeric input, though, doesn't it? [18:24] (and in fact, in may situations, the channel layout of streams known to contain 5.1 and 7.1 tracks is really not very consistent!) [18:25] zap0: hmm, how do I specify the aformat=channel_layouts=0xFFF? [18:25] hackeron, i don't know. im still staring at it trying to comprehend that too [18:26] heh [18:27] In movie postproduction, multichannel surround is almost always ferried around as a set of single channel files. [18:27] For this exact reason. [18:27] Few things support multichannel files, and even fewer support them properly. [18:28] Really you need something like -af "assign_channels=l,r,c,ls,rs,lfe" [18:29] Hfuy: I want to record 1 channel, in mono, a channel number I specify from an input with 12 channels - there is no left/right/whatever - every channel is a different room [18:29] Oh I completely understand. But if it's going to insist on somehow knowing what the channels represent, there ought to be a way to assign them labels. [18:30] But I agree there seems to be no reason why that should be necessary simply to split out one of the channels. [18:30] RIFF format provides packets for data [18:30] someone needs to set (YET ANOTHER) standard ;) [18:31] Action: Hfuy cries [18:32] The reason I wrote that wave parser was so as to have the ability to read and write "broadcast wave" extensions, with timecode etc. [18:32] i hereby declare a new RIFF packet called 'channel layout', containing a list like 1=23?W @ 206.4mm from center. 2=... [18:32] Gathering example files from field audio recorders, I immediately discovered a collection of RIFF chunks I'd never heard of before. [18:33] There were n of these proprietary chunks, where n is in fact slightly larger than the number of recorder manufacturers involved. [18:33] Hfuy, lol.. the reason i'm write audio in JS is for a timecode generator! [18:33] Writing a slate app? [18:33] more or less! [18:33] I would counsel against it :/ [18:34] why? [18:34] We tried two different ipad slate apps against a real Ambient clockit slate. 
[18:35] The problem I think is that the accuracy of the slate apps is dependent on the clock accuracy of the audio hardware in the ipad. [18:35] And it isn't good enough. [18:35] It lost whole frames an hour, which is way not good enough. [18:35] are you implying the accuracy/drift is an issue? [18:35] There are circumstances where you could make it work, but it isn't good enough to jam sync then walk away. [18:35] yes, i guess you are! [18:36] Depends what you're doing I guess. [18:36] i am quite aware of the haphazard timing of these android/ipad consumer hardware. [18:36] If you want to let it listen to incoming SMPTE timecode using the audio input, and just display what you're getting, fine. [18:36] zap0: any luck? -- I tried ffmpeg -v debug -i test.wav -filter:a aformat=channel_layouts=0xFFF -af "pan=0x4|c0=c4" -y ch4.wav -- still getting silence in the output [18:37] Equally if it's going to be a timecode master and you're going to record its output onto a spare audio track for later syncing, probably fine. [18:38] Perhaps you could put some sort of calibration term into the software but I'm not sure how much of the drift we saw is down to interrupts and so on.\ [18:39] how often do you record a single take that goes for over an hour ? [18:40] Not often. But that's not the factor, if you want to jam sync it. [18:40] The issue is has it been RUNNING for an hour since it was last synced. [18:42] this is for some simple stuff anyway... if i wanted something i'd have to rely on, then i'd use this microcontroller to do it, and run it off a real-tme-clock module thingy. [18:43] I think really this is a microcontroller project. [18:43] I've been pondering doing just that for ages, but it's a lot easier if you can simply ensure the uC is accurately clocked, as opposed to trying to refer your code to an external RTC. [18:43] And I think you can do that. [18:43] i'm using a uC for displaying milliseconds anyway... for the high-speed cameras [18:44] Ooh, high-speed cameras [18:44] Action: Hfuy rubs his hands [18:44] It's not as if SMPTE code is complicated, anyway. [18:45] i've seen some people trying to use POV displays for high-speed. [18:45] I just wish it encoded frame rate. [18:45] there are some empty blocks in SMPTE you can write your own data into [18:45] although not all hardware likes it when you do [18:45] Only a couple of bits. [18:45] Although that'd be enough to indicate whether we were at a fractional frame rate or not. [18:45] But, as you say... [18:46] how many frame rate changes per second do you need ? just write 1 bps of your frame-rate-info stream, until its done! [18:46] Heh. [18:47] Really the issue when I was doing it was simply being able to tell the difference between, say, 29.97 and 30. [18:47] Which I found was fine, even from analogue tape. So it really isn't a huge deal. [18:48] lol.. NTSC [18:49] zap0: this seems to work! < ffmpeg -v debug -i test.wav -filter:a "aformat=channel_layouts=0xFFF,pan=0x4|c0=c4" -y ch4.wav [18:50] zap0: Tell me about it. I live in the UK, where we can count to 25. IN WHOLE NUMBERS. [18:51] lol... .au PAL too ;) [18:51] Action: Hfuy waves a very small union jack [18:51] So, you're out to sea, Hfuy? [18:51] Oh dear. A heraldic pedant. [18:51] OK, OK. Union FLAG. [18:51] \o/ [18:52] hackeron, well done! [18:52] Nobody in the UK understands the difference, or has any idea what you're on about when you talk about the "union flag." And if you use the term in international company, they tend to think of the American civil war. 
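Coming back to the original question in this log (grab just channel 6 of the 12-channel input, as mono): following the working aformat+pan command above, something like this should do it. pan's source channels are zero-based, so channel 6 is c5, and the 0xFFF mask is the 12-channel layout hint the filter chain needed:

    ffmpeg -f alsa -ac 12 -i plug:capt \
           -filter:a "aformat=channel_layouts=0xFFF,pan=mono|c0=c5" \
           -ar 44100 channel6.wav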
[18:52] hackeron [18:52] So yes, I tend to use the more common term. But be happy; I know how not to indicate I'm in distress when flying said flag. [18:53] Hfuy, i have no idea... but i'm going to guess you are talking about the Cross-of_... overlayed on the cross-of-.... that makes up the multiple colours? [18:54] The George Cross is a red cross on white. George, by one mythology, is the patron saint of England. [18:54] mecil9: yes? [18:54] The Cross of St. Andrew is a diagonal white cross on blue. By said mythology, Andrew is patron saint of Scotland. [18:55] hackeron new error played [18:55] libx264 must be >=0.118 [18:55] i git x264 from git.videolan.org [18:56] make it [18:56] Aaaand the cross of St. Patrick is a diagonal red cross on white. But we made that bit smaller. Because frankly, who cares bout the Irish. :) [18:56] And mainly because it was selected more or less at random from the flags of the great houses of Ireland. [18:56] Infodump ends. [18:58] hackeron, i've noted your example in my 'notebook of ffmpeg tricks' which grows by the day! [19:00] zap0: haha, cna I see that notebook? [19:01] I've decoded a frame, using avcodec_decode_audio4, however when I put this frame into avcodec_encode_audio2 it crashes on me. [19:02] I've debugged it all the way to where the segault is thrown, which is at samplefmt.c/av_samples_copy [19:02] the problem is, that the source parameter of the function is messed up and therefore the memcpy call in that function fails [19:04] I am not sure about this, but i think the problem might be in avcodec_encode_audio2, as it enters the branch where pad_last_frame is called. [19:04] this happens, even though it is the very first frame that was decoded. [19:10] zap0: ok, here's my command to record audio :D < ffmpeg -loglevel info -f alsa -ac 12 -i plug:capt -map 0 -analyzeduration 0 -dn -vn -codec:a libfdk_aac -flags +qscale -global_quality 1 -afterburner 1 -f segment -segment_time 60 -segment_wrap 10 -segment_list_flags live -segment_list_size 10 -reset_timestamps 1 -segment_list 'test.csv' -ar 44100 -filter:a "aformat=channel_layouts=0xFFF,pan=1|c0=c4" -y test_%02d.mkv [19:10] simples :P [19:16] Does ffmpeg have -filter:v "camerawork=good" [19:16] and if not why not [19:18] Am I missing something obvious, or is nobody here who has the answer? [19:55] Hi, can anybody help me with this? : how can I have less delay when I use ffplay via RTSP [21:31] hey folks, quick q - is there a way to use ffmpeg to capture from two different windows at once without having to have those windows overlayed on the desktop? [21:32] like, for instance if I was streaming out to a source and wanted to overlay a webcam video in the lower right quadrant of the screen, could I have the webcam on a seperate portion of my desktop without having to have it positioned over the lower right quadrant of the main window I'm streaming? [21:59] decoding audio file, I got following error: [vorbis @ 0x2226e0] Not a Vorbis I audio packet. Error decoding frame: Invalid data found when processing input [21:59] is this recoverable [22:00] maybe with skipping? [00:00] --- Sun Aug 25 2013 From burek021 at gmail.com Fri Aug 30 02:05:02 2013 From: burek021 at gmail.com (burek) Date: Fri, 30 Aug 2013 02:05:02 +0200 (CEST) Subject: [Ffmpeg-devel-irc] ffmpeg-devel.log.20130829 Message-ID: <20130830000502.AE64F18A00DC@apolo.teamnet.rs> [20:17] durandal_1707, i wasnt aware it was offline :/ [20:18] what happened to it? [20:18] im looking into logs now [20:20] a ddos it seems [20:20] SEA! 
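On the avcodec_decode_audio4 / avcodec_encode_audio2 crash reported around [19:01]-[19:04] above (segfault inside av_samples_copy via the pad_last_frame branch), the following is a hedged sketch of the usual handoff with that era's API, not the poster's code; the helper name and the dec/enc contexts are assumptions. The general point is that the encoder trusts whatever the AVFrame advertises, and most encoders also expect exactly enc->frame_size samples per frame unless they set CODEC_CAP_VARIABLE_FRAME_SIZE, so in practice an audio FIFO (libavutil/audio_fifo.h) usually sits between decoder and encoder rather than feeding frames straight across:

    #include <libavcodec/avcodec.h>
    #include <libavutil/frame.h>

    /* Decode one packet and hand the resulting frame to the encoder. */
    static int reencode_packet(AVCodecContext *dec, AVCodecContext *enc,
                               AVPacket *in_pkt, AVPacket *out_pkt)
    {
        AVFrame *frame = av_frame_alloc();
        int got_frame = 0, got_packet = 0, ret;

        if (!frame)
            return AVERROR(ENOMEM);

        ret = avcodec_decode_audio4(dec, frame, &got_frame, in_pkt);
        if (ret < 0 || !got_frame)
            goto end;

        /* The encoder only sees what the frame describes: if the decoder left
         * channel_layout unset, fill it in, otherwise the advertised channel
         * count and the actual buffers can disagree -- one way to end up
         * crashing in av_samples_copy() during encoding. */
        if (!frame->channel_layout)
            frame->channel_layout = dec->channel_layout;

        av_init_packet(out_pkt);
        out_pkt->data = NULL;        /* let the encoder allocate the payload */
        out_pkt->size = 0;

        ret = avcodec_encode_audio2(enc, out_pkt, frame, &got_packet);

    end:
        av_frame_free(&frame);
        return ret < 0 ? ret : got_packet;
    }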
by allah's beard! [20:22] burek: by who? [20:24] who cares :) [20:25] i do [20:25] no harm done :) [20:25] except for few days of outage :) [20:26] who did it? [20:26] internet gremlins :) [20:27] how can you tell anyway with botnet rentals, windows zombies, etc? [20:27] no if you tell, i will give it back [20:27] im so desperate now :) [20:28] why does it matter anyway :) [20:28] what if it happens again? [20:28] the world will stop [20:28] yes, i can not see when someone adds spam [20:29] and refreshing trac page is waste of my time [20:29] use rss feed? [20:29] perhaps you should add something that will detect ddos attacks and ignore such messages? [20:30] i will some time in the future [20:31] durandal_1707: i had the assumption that you didn't like fflogger [20:32] assumptions are bad [20:32] some comment(s) in the past that I can't remember or maybe I am imagining about log spamming channel [20:34] llogan, btw, thanks for your help on the forum [20:35] i was really busy lately and couldnt afford to visit forum more often [20:35] you really helped a lot there [20:37] burek: no problem. it's part of my daily morning procrastination routine. [20:37] i was thinking about migrating the forum to the Q&A type of a web site [20:37] which might be more efficient i guess [20:37] but im not sure if its worth the effort [20:38] i think the forum is fine as is. users can go to superuser if they want the other format. there are some informed answerers there [20:40] ok [20:55] ubitux: what do you think about webvtt output corresponding with select filter scene? users could use it for "thumbnail previews" with supported players [22:58] ffmpeg.git 03Michael Niedermayer 07master:811d58e08386: avcodec/utils: support non edge emu for grayscale [22:58] ffmpeg.git 03Michael Niedermayer 07master:ec120efaa90a: doc/snow: add gray/alpha/gbr [22:58] ffmpeg.git 03Michael Niedermayer 07master:c4224fff1b0a: avcodec/snow: gray support [23:18] michaelni: check return value of av_frame_alloc? [23:30] ffmpeg.git 03Michael Niedermayer 07master:24b4e6c373f8: snow: Check av_frame_alloc() failures [00:00] --- Fri Aug 30 2013 From burek021 at gmail.com Fri Aug 30 02:05:01 2013 From: burek021 at gmail.com (burek) Date: Fri, 30 Aug 2013 02:05:01 +0200 (CEST) Subject: [Ffmpeg-devel-irc] ffmpeg.log.20130829 Message-ID: <20130830000501.A5CF518A00D3@apolo.teamnet.rs> [21:07] any1 have any idea what a "GARMIN BITMAP" is, or how to convert a jpg to 1? :p [21:08] (viewing it with a hex editor suggests its called GARMIN BITMAP") [21:08] i do not, but please share a sample. [21:09] what should i use to concat a bunch of ts files? [21:10] http://4chanx.org/hans/temp/truck_blue.srf [21:11] kode101, idk, but best guess, -i concat:"input1.ts|input2.ts|input3.ts" [21:12] porn? [21:12] improviser, yes, blue truck porn [21:13] actually i think its this "truck" in this image http://www.gpsmagazine.com/assets/review-nuvi465t/trucking13.jpg [21:16] do you know of anything that can decode the file? [21:17] othan than some expensive garmin shit [21:18] llogan, no not really :/ [21:22] ohhhhh, i found it, http://techmods.net/nuvi/ , has srf2png and png2srf ^^ | ping llogan [21:22] and a nice "SRF file format details" [21:22] so som1 has taken the time to figure out the format, most of it, anyway ^^ [21:24] that's a good start i guess. also "garmin garage" site has a bunch more srt files if you want more [21:26] llogan, garmin garage? where is that? 
[21:30] hans_henrik: http://www8.garmin.com/vehicles/product.html?vName=Homer [21:30] what formats should i mux audio and video into for further processing? [21:33] what kind of processing? [21:35] python [21:35] im stretching the audio and video with different algos [21:35] then remuxing [21:36] so i think i need to turn an mp4 into a wav and as pure a video file as i can [21:36] kode101: use concat demuxer [22:51] hi.. I am on squeeze-amd64 - anyone got experience on building ffmpeg-2.0.1 from source? [22:51] OR can provide me with an installable .deb? :-) [22:51] you can just grab a static binary [22:51] or read the ubuntu compilation guide (it also works for debian presumably) [22:52] ok.. I'll give the first option a try, thanks a lot.. [22:54] would it be a good idea to remove the old version with apt-get before using the new one? [22:57] hmm... not if you have software that depends on the shared libraries [22:57] if you use the static build, you shouldn't worry about deinstalling stuff anyway [22:58] just replace the binaries in /usr/bin then? and copy the presets to /usr/share/ffmpeg i presume? [22:59] oh god no, if anything put it in /usr/local/bin [23:00] putting things into /usr/bin is... well you shouldn't do it, because it might mess with some automatic installation stuff [23:00] ok.. [23:21] I ended up uninstalling the old version of ffmpeg (nothing else was removed) and placed the binaries in /usr/local/bin and the presets in /usr/local/share/ffmpeg.. testing with the mmac script now [23:21] thanks for your help, highly appreciated! [00:00] --- Fri Aug 30 2013 From burek021 at gmail.com Sat Aug 31 02:05:01 2013 From: burek021 at gmail.com (burek) Date: Sat, 31 Aug 2013 02:05:01 +0200 (CEST) Subject: [Ffmpeg-devel-irc] ffmpeg.log.20130830 Message-ID: <20130831000501.42C7118A0147@apolo.teamnet.rs> [00:05] magnulu: you could have just put it in ~/bin [00:06] and not messed with system stuff [00:21] well if he wants to have it available for all users, putting it in /usr/local/bin should be the right thing, no? [00:24] it's one option. could use checkinstall or something if you want it in package management system [01:19] hi there XD [01:19] do you know if there is an option that prevents ffmpeg from overscaling? [01:19] interpolating [01:21] can you explain it in more detail what you are trying to solve? [01:21] im running a script that converts videos but some have a lower resolution than the set resolution [01:22] was wondering if there is a switch to prevent overscaling [01:22] underscaling is fine [01:22] *upscaling [01:22] i for one don't know of any, you could script it though [01:22] well yeah, scripting is always a resolution [01:50] oh, found a way [01:51] -vf? scale="min(640\,iw):trunc(ow/a/2)*2" [01:55] now how to get it to work with complex filter [01:57] what complex filter? you confused me again [02:02] the command works with simple filter [02:02] but i use filter_complex [10:09] Okay, so I've been using ffmpeg to record my desktop lately, and I was wondering if there was a way to only capture a single window, and not just by size (in case it resizes or moves. the output size is held at 852x480). [10:15] (using linux, not windows. that probably matters.) [11:07] kaictl: you expect it to track a window? no [11:08] relaxed: meh. not a dealbreaker. is there any way to manipulate ffmpeg while it is running? pause the video or audio input/output? 
[11:10] I don't think so [11:11] there's ctrl+z [11:12] then `fg` to start it again [11:14] true, but i'd like to use this for streaming and have some way to just pause the video output without stopping it, just streaming the last image if possible. [11:16] might be time to just grab a separate monitor for this kind of stuff. [11:56] nice [11:57] is there a way to post a certain useful command line and put it as an example on the wiki page? [12:34] example? [12:36] sample? [12:43] a line for auto resizing without upscale in filter_complex [12:43] 844 is the destined output width, but if the source's width is smaller, it will keep the original width [12:43] ffmpeg -i /test/test.mp4 -filter_complex scale='min(844\,iw):(trunc(ow/a/2))*2' output.mp4 -y [12:44] hehe, there is a frowny on my screen, but i hope you see the code correctly [12:45] it is such an important function, but there is not a single site that covers this line in complex filter [12:45] only in -vf [12:45] write a blog. describe it, give keywords. google will index it if it's quite unique. [12:46] i just think it might be referenced somewhere ppl will actually read [12:48] shouldn't you scale up and pad to maintain the aspect ratio? [12:50] it does by a [12:59] so i have an mp4 and a wav file, and they're roughly the same len [12:59] the mp4 has an audio track i no longer need [12:59] do i convert the wav to aac then mux, or do i do something else [12:59] goal is the mp4 with the new wav as its audio track [13:55] kode101, you can just take both as input, encode audio to aac and mux it with a single command [13:55] use "map" to properly map streams [13:55] nice [13:56] what the command [13:57] I just told you. [13:57] use "-i" twice and use map to map streams to output [13:58] video + new audio [13:58] ffmpeg -i haha [13:58] silly [13:58] i thought mux was a thing [13:58] its not, its just there [13:58] what so do i need to map? [13:59] i tried [13:59] ffmpeg -i input.mp4 -i input.wav -vcodec copy -acodec aac output.mp4 [14:00] kode101, yes, you need to use map to tell ffmpeg to use video track from first output and audio track from the second [14:00] and to ignore the audio track from first input :) [14:00] kode101: man ffmpeg|less +/^'STREAM SELECTION' [14:00] ive removed the audio from the file [14:00] so can skip mapping here no? [14:00] i have a video with no audio and audio with no video so ffmpeg just works it out [14:01] probably yeah [14:01] check the output, the mapping is always written out at the start [14:02] learning would save you headaches later [14:02] tru [14:02] thanks yo [14:02] read about -map [16:34] hi, I'm using ffmpeg to grab images from an rtsp camera with limited slots. When I try to disconnect from the camera, the TEARSDOWN message is sent to the camera but the slot is not freed [16:35] i'm using avformat_open_input and avformat_close_input to open and close the stream [16:36] do you have an idea of the problem ? if I use avplay or ffplay it works fine (frustrating) [19:26] hey, is there a function in libav to skip all frames till next keyframe? and how many frames are normally between to keyframes? can i get this information somehow? [19:47] DasMoeh: Check out the av_seek_frame/av_seek_file functions. How many frames and whether it is consistent between depends on what codec you are in, but look at gop_size and keyint_min variables on your AVCodecContext. [19:49] ok thanks, when i use av_seek_frame with the timestamp from my actual frame i get the next keyframe? 
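For the RTSP question at [16:34]-[16:36] above (camera slot not freed after avformat_open_input / avformat_close_input), here is a minimal sketch of the standard open, read, close sequence those two calls are part of, with the transport passed as an option the way ffplay allows; it only illustrates the API the poster names, it is not a fix for the slot problem, and the URL, helper name and packet limit are placeholders:

    #include <libavformat/avformat.h>
    #include <libavutil/dict.h>

    static int grab_some_packets(const char *url, int max_packets)
    {
        AVFormatContext *ic = NULL;
        AVDictionary *opts = NULL;
        AVPacket pkt;
        int ret, n = 0;

        av_register_all();
        avformat_network_init();

        av_dict_set(&opts, "rtsp_transport", "tcp", 0);  /* optional */

        ret = avformat_open_input(&ic, url, NULL, &opts);
        av_dict_free(&opts);
        if (ret < 0)
            return ret;

        if ((ret = avformat_find_stream_info(ic, NULL)) < 0)
            goto end;

        while (n < max_packets && av_read_frame(ic, &pkt) >= 0) {
            /* ... decode or store pkt here ... */
            av_free_packet(&pkt);
            n++;
        }

    end:
        avformat_close_input(&ic);   /* this is where the TEARDOWN goes out */
        avformat_network_deinit();
        return ret < 0 ? ret : 0;
    }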
[19:53] DasMoeh: I've seen it vary in behavior a bit on different codecs, but it usually (should be always?) returns the first keyframe before the requested time, so you need some idea of where you're trying to jump to. [19:53] DasMoeh: if you're just trying to get to the next keyframe I would just read frames from where I am until I got there. [19:54] That's what i'm doing now. But i'll safe the time needed for decoding each frame. [19:55] *want safe [20:05] mh, gop_size = 12, keyint_min = 25... but description says keyint_min is the minimum GOP size? [20:38] DasMoeh, slightly late but here is goes [20:39] first the bad news: there is no reason for the keyframes to be equally distributed [20:39] in more advanced formats they certanly aren't [20:45] Mavrik: thx [20:45] also, to decode only keyframes [20:45] run the av_read_packet loop and check for the keyframe flag on the packet [20:49] I'm reading a video stream and encoding to disk as h264. Reading that file in and writing it out again with identical codec parameters (verified that libx264 is logging that the bitrate requested is the same) results in a file that is much higher bitrate. Original (and correct) ~600k bitrate file is rewritten at ~1500k. ffprobe shows no difference in the files other than the frame sizes (no extra streams or anything). I'm a bit stumped [20:49] to what could be causing the size inflation. [21:05] run the av_read_packet loop and check for the keyframe flag on the packet <-- thanks, that's a good idea. Didn't found that flag befor. [21:05] yeah, there's a AV_PKT_FLAG_KEY or something like that [21:06] yes now i know what i'm looking for and found the flag [00:00] --- Sat Aug 31 2013 From burek021 at gmail.com Sat Aug 31 02:05:02 2013 From: burek021 at gmail.com (burek) Date: Sat, 31 Aug 2013 02:05:02 +0200 (CEST) Subject: [Ffmpeg-devel-irc] ffmpeg-devel.log.20130830 Message-ID: <20130831000502.4933818A020C@apolo.teamnet.rs> [00:45] michaelni: how complicated would be adding packed yuva444? (YUVAYUVAYUVA....) [00:46] llogan: fun idea, but don't ask me to do it [00:47] durandal11707: you can hardsub bitmap subs, there is hack in ffmpeg to inject bitmaps into frames for overlay iirc [00:47] it's even documented [00:47] ubitux: nicolas already told me hours ago [00:48] time is a bitch [00:53] haha [01:20] ubitux: is uvpred done? [01:20] wasn't it already? [01:21] you said you'd make a small adjustment (remove duplicae off_u/v and stride_u/v [01:21] ) [01:21] otherwise it's done yes [01:34] fuck [01:34] my [01:34] isp [01:34] BBB: what message from me did you get? [01:38] BBB: no changes from last time; the only differing thing is that i pass 2x the offset (and you said linesize[1]==linesize[2]) [01:38] but im not sure about those assumptions [01:38] i dont think that hurts currently [01:38] durandal11707, dunno, see comits that added other packed yuv formats [01:39] michaelni: considering how planar is hard, packed is harder [01:47] ubitux: yes, that, and off_uv being passed twice [01:48] ubitux: off_u and off_v are always the same (and they are; y/uv can be different, but u and v are always the same) [01:48] ubitux: but we can ignore if you feel strongly and then just take as-is so we can finish off (i.e. I'll enable inter frame fate tests and we can start working on simd/mt) [01:49] ubitux: because I think we're finished feature-wise now [01:58] saste: pushed that probe thing? 
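The keyframe-flag approach suggested above ([20:45] and confirmed at [21:05]) comes down to a short read loop. A minimal sketch, assuming the caller has already opened the AVFormatContext and knows the stream index, and using the public av_read_frame() rather than the lower-level av_read_packet() mentioned in the chat, since it returns complete packets:

    #include <libavformat/avformat.h>

    /* Read and discard packets until the next keyframe of stream_index;
     * on success pkt holds that keyframe packet and 0 is returned. */
    static int skip_to_next_keyframe(AVFormatContext *fmt, int stream_index,
                                     AVPacket *pkt)
    {
        while (av_read_frame(fmt, pkt) >= 0) {
            if (pkt->stream_index == stream_index &&
                (pkt->flags & AV_PKT_FLAG_KEY))
                return 0;
            av_free_packet(pkt);    /* not the one we want: drop and go on */
        }
        return AVERROR_EOF;
    }

The packets are still compressed at this point, so nothing is decoded along the way, which is what saves the per-frame decode time discussed at [19:54].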
[01:58] member:ubitux: yes, that, and off_uv being passed twice [01:58] member:ubitux: off_u and off_v are always the same (and they are; y/uv can be different, but u and v are always the same) [01:58] member:ubitux: but we can ignore if you feel strongly and then just take as-is so we can finish off (i.e. I'll enable inter frame fate tests and we can start working on simd/mt) [01:58] ubitux: because I think we're finished feature-wise now [01:58] ubitux: c/p from a few minutes ago [01:59] i'm ok for off_[uv] [01:59] ok, so let's leave the stride in-place, I'll cherry-pick and then enable fate tests [01:59] ffvp9 done \o/ [02:01] not really for ref linesizes [02:01] will fix in a moment, if my isp allows me [02:01] BBB: done, pushed [02:01] BBB: btw, did you see the decode error with the sample i pasted? [02:03] done, and broken \o/ [02:03] BBB: try ./ffplay -noframedrop -ss 29 ~/out.webm [02:04] omg [02:04] also, i don't know if it's a problem with the encode [02:05] but there are some kind of freezes [02:05] can vpxdec decode it? [02:05] i didn't look at all :) [02:05] there are various issues with that sample [02:05] but it might be related to libvpx as well [02:05] it seems to decode fine to me [02:06] ffmpeg.git 03Michael Niedermayer 07master:259292f9d484: avcodec/mpegvideo: Dont incorrectly warn about missing keyframes [02:06] (w/o your patch) [02:06] huh? [02:08] that reminds me i need to open a ticket for the ffplay lag after a seek [02:09] yes libvpx and ffmpeg produce same output to me [02:09] ffplay has issues, tha may be a problem [02:09] but ffmpeg works [02:09] no idea what that means or anything [02:09] in ffplay between 30-40 sec i get a buggy output [02:10] crazy blocks all over [02:10] like a complete mess [02:10] what if you use non-ffplay ? [02:11] it's the same [02:11] http://ubitux.fr/pub/pics/_ffpv9-ffplay-etv.jpg [02:11] reproducible with ffmpeg [02:13] ./ffmpeg -ss 29 -i ~/out.webm -y out.y4m && ./ffplay out.y4m [02:17] the decode with libvpx is ok [02:33] BBB: cant reproduce? [02:53] ubitux: checking [02:53] oh yes I can [02:54] ok looks like an actual bug [02:54] that's cool [02:54] *not [02:55] why whenever i type make: doc/fate.txt is 'rebuild'? [02:56] BBB: it happens again at the end btw [02:56] BBB: what do you want me to work on tomorrow? [02:56] "cleanups"? [02:56] simd, mt, other optimizations? [02:56] cleanups is fine also [02:56] well, it's the end of the month [02:57] i'm starting my new job on monday [02:57] but is pure c bitexact? 
[02:57] durandal11707: there's some small bugs, I'm working them out, but the dsp code is bitexact yes [02:57] i dont mind focusing on one or two mt function though [02:57] mt isn't function, mt is the whole thing [02:57] frame-mt, that is [02:57] i meant simd sorry [02:57] if you look at ffvp8, it's quite trivial to see how to do it [02:57] ok [02:57] brain is off [02:57] :) [02:58] fuck it's 3 am [02:58] i need my 12 hours sleep [02:58] ok [02:58] gnite [02:58] 'nite [03:40] filter that takes 699kb in source code [03:43] and its nearest neighbor 2-4 scaler [05:29] ffmpeg.git 03Michael Niedermayer 07master:20b965a1a43a: avcodec/ffv1dec: check global header version [05:29] ffmpeg.git 03Michael Niedermayer 07master:547d690d6760: ffv1dec: check that global parameters dont change in version 0/1 [05:39] ubitux: pushed your commit (sorry took a while) [07:00] ubitux: and yes I can reproduce the issue now, will fix (fixed another minor issue also, causing some artifacts around the borders) [09:35] BBB: how is that vp9 decoder going? feature complete, or you are avoiding implementing some kind of unusual tool? (ie do you know what is the spec coverage) [09:37] kurosu_: mostly there, small bugs here and there, some related to odd implementation details in libvpx that I obviously have to follow [09:38] kurosu_: I'm avoiding resolution changing support ATM, maybe will do that later, but for now that's not a priority [09:38] kurosu_: then again I'm not 100% sure if that's part of profile 0 or not :) [09:38] I believe it is [09:38] kurosu_: and obviously no simd/mt/anything yet, so it's somewhat slow [09:39] I didn't find the openhevc implementation terribly sexy; hopefully ffvp9 has a better start in that domain [09:40] what's openhevc? [09:40] is that "ffhevc"? [09:40] resolution changing? ie, you code some frames at eg half resolution? reminds me of mpeg4 reduced resolution, although I'm not sure where it belongs [09:40] BBB: yes [09:40] https://github.com/rbultje/ffmpeg/tree/vp9 [09:40] ubitux is also working on it [09:41] afaik, they developped in on their own inside of that French stuff then are now trying to push it on libav [09:41] I thought smarter wrote it? [09:43] wasn't that only the start of it ? 
actually I'm maybe imagining things and I don't actually know the connection [09:43] afaik there's a French consortium trying to promote hevc through an "open" development, [09:43] https://github.com/OpenHEVC [09:44] they probably work together [09:44] mostly a French university doing that work iirc [09:44] I don't know [09:44] I haven't quite followed either [09:45] probably you can ask smarter I guess :D [09:45] (if you wanted to know, not that I'm asking that you ask) [11:25] ffmpeg.git 03Michael Niedermayer 07release/2.0:c7ee4bc016e5: ffv1dec: check that global parameters dont change in version 0/1 [11:40] ffmpeg.git 03Timothy Gu 07master:40b8350b57ad: doc/encoders: reformat libmp3lame doc [11:40] ffmpeg.git 03Timothy Gu 07master:e45e72f5f89e: doc/encoders: reformat and add some clarification in libtwolame doc [11:57] ffmpeg.git 03Diego Biurrun 07master:a6b650118543: ppc: cosmetics: Consistently format CPU flag detection invocations [11:57] ffmpeg.git 03Michael Niedermayer 07master:09c94b57ca2c: Merge commit 'a6b650118543e1580e872896d8976042b7c32d01' [11:57] ffmpeg.git 03Sean McGovern 07master:01a82f1dc544: ppc: don't return a value from a function declared void [12:01] kurosu_: so, https://github.com/OpenHEVC/libav is the latest version of the hevc decoder, based on what I did during the gsoc last year, improved by me and others (mostly people from IETR-INSA) [12:06] ffmpeg.git 03Diego Biurrun 07master:79aec43ce813: x86: Add and use more convenience macros to check CPU extension availability [12:06] ffmpeg.git 03Michael Niedermayer 07master:f0a35623826e: Merge commit '79aec43ce813a3e270743ca64fa3f31fa43df80b' [12:48] ffmpeg.git 03Diego Biurrun 07master:6369ba3c9cc7: x86: avcodec: Use convenience macros to check for CPU flags [12:48] ffmpeg.git 03Michael Niedermayer 07master:8be0e2bd43d9: Merge commit '6369ba3c9cc74becfaad2a8882dff3dd3e7ae3c0' [12:48] ffmpeg.git 03Michael Niedermayer 07master:c1913064e38c: avcodec/x86/vp8dsp: Fix cpu flag checks so they work [12:48] ffmpeg.git 03Michael Niedermayer 07master:7fb758cd8ed0: avcodec/x86/lpc: Fix cpu flag checks so they work [12:54] ubitux: fixed [12:57] cool :) [12:58] ffmpeg.git 03Diego Biurrun 07master:e998b56362c7: x86: avcodec: Consistently structure CPU extension initialization [12:58] ffmpeg.git 03Michael Niedermayer 07master:62a6052974d8: Merge commit 'e998b56362c711701b3daa34e7b956e7126336f4' [12:59] michaelni: why you merged "convenience macros" if they do not work? [13:05] durandal_1707, why do you assume that they do not work ? [13:06] because i read 7fb758cd8ed08e4a37f10e25003953d13c68b8cd commit log [13:08] "avcodec/x86/lpc: Fix cpu flag checks so they work" Fixes the cpu flag checks in x86/lpc, how does that imply that the macors defined outside x86/lpc and used in several other places, dont work ? [13:09] it doesn't [13:09] but why it got broken... [13:10] they appars to work in some but not all cases... [13:21] ffmpeg.git 03Sean McGovern 07master:f1f728cbe4e8: ppc: don't return a value from a function declared void [13:21] ffmpeg.git 03Michael Niedermayer 07master:05507348afa1: Merge remote-tracking branch 'qatar/master' [13:29] ubitux: were you serious about porting hqx filters? This code takes more than 10k lines of code. 
[13:30] i'm pretty sure it can be refactored in a few lines [13:31] maybe with a lut or two [13:31] someone raised a better one btw [13:31] a shader based, but cant remember the name [13:31] iirc another 2 letters filter unwebsearchable [13:31] shader one could only be faster [13:32] the algo was different [13:32] also there is algo that probably generated this nonsense, but where it is ... [13:35] about refactoring: switch takes most of lines, but i doubt i can refactor this... [13:35] it's a long work [13:36] durandal_1707: "xBR is better" [13:36] xBR ? [13:37] i think right solution is self generating code [13:37] like 5xBR [13:37] i got what 5x means... [13:38] https://github.com/libretro/common-shaders/tree/master/xbr [13:38] btw... https://github.com/libretro/common-shaders/blob/master/hqx/hq4x.cg [13:43] ok that is much less lines, but how(if possible at all) can i convert this to c? [13:46] well i dunno what half4 means [13:47] perhaps half is 16 bit float... [13:47] half4 is a 16-bit float, and a vector of 4 [13:47] nice [13:48] half4x4 would be a matrix [13:51] if i do this in pure c it would be extremly slow? [14:15] ffmpeg.git 03Michael Niedermayer 07master:b05cd1ea7e45: ffv1dec: Check bits_per_raw_sample and colorspace for equality in ver 0/1 headers [14:33] ffmpeg.git 03Michael Niedermayer 07master:4f5454d20130: avcodec/mpegvideo: reduce log level for messages about allocating frames. [14:45] ffmpeg.git 03Paul B Mahol 07master:48cd1037f661: cmdutils: silence warning about incompatible pointer types [15:04] michaelni: if you not gonna push license patch, i will [15:10] if someone pushes it, please ommit: [15:10] > - * Copyright (c) 2007 The Libav Project [15:10] > + * Copyright (c) 2007 The FFmpeg Project [15:10] ? [15:10] libavformat/network.c [15:11] why? [15:11] because you are not the copyright holder of the file ? [15:13] but for other files in that patch its also correct [15:13] and for others you changed when merging [15:14] so i'm very confused [15:16] or this was joke... [16:33] ffmpeg.git 03Carl Eugen Hoyos 07master:8fe1fb41ac28: Fix compilation with --disable-mmx. [16:54] ubitux: ok all fate tests (without emu-edge) pass now [16:55] ubitux: with emu-edge still some issues, these are relatively easy to fix, will do that later [16:55] ubitux: simd/mt time now [17:16] michaelni: so you changed mind about license patch? [17:17] ? [17:18] read log [17:18] michaelni: stop playing games with me [17:27] michaelni: about networks.c, I don't think that libav project existed at 2007... [17:29] iive, you are probably correct, but it doesnt really make a difference does it ? i mean except for the entertainment value the contradiction has [17:30] durandal_1707, i read the log and i dont really know what you talk about. If its about the patch, i dint say that i will apply it nor did i say i wont, iam nt stoping you from applying it nor anyone else [17:33] well there is always revert [17:33] if I understand correctly, all "This file is part" are OK, as they are not part of the copyright notice. [17:34] i mean, this file is part of ffmpeg and libav... so it is true either way. [17:35] what about: "the ffmpeg project" vs "The FFmpeg Project" [17:35] Action: durandal_1707 have nothing better to do [17:37] as for network.c it should probably contain 2 copyright notices, 2007-2013 FFMpeg and whatever the first and last modification 2011-2013 libav. 
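On the hq4x.cg porting question at [13:43]-[13:51] above: in Cg, half is a 16-bit float, so half4 and half4x4 are simply 4-vectors and 4x4 matrices of it. A purely illustrative sketch of how those types might be declared when hand-porting the shader to C, widened to float since CPUs have no cheap 16-bit float arithmetic; none of this comes from an existing filter:

    /* Plain-C stand-ins for the Cg types discussed above. */
    typedef struct { float x, y, z, w; } vec4;        /* Cg: half4   */
    typedef struct { vec4 r0, r1, r2, r3; } mat4;     /* Cg: half4x4 */

    /* Component-wise linear interpolation, the kind of per-pixel helper the
     * shader leans on; a direct C port is correct but runs per pixel on the
     * CPU, which is why it is expected to be much slower than the GPU
     * version. */
    static vec4 vec4_lerp(vec4 a, vec4 b, float t)
    {
        vec4 r = { a.x + (b.x - a.x) * t,
                   a.y + (b.y - a.y) * t,
                   a.z + (b.z - a.z) * t,
                   a.w + (b.w - a.w) * t };
        return r;
    }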
[17:37] its not FFMpeg but FFmpeg [17:38] of course it is [17:38] and really changing that each time someone refactor code is pita [17:39] Then 2007 FFmpeg, 2011 Libav ? [17:39] no way, i will commit what I have [17:40] having wrong copyright notice is indeed high amusement value, but in legal trouble it could turn into nightmare. [17:40] then... 2013 FFmpeg, 2013 Libav ? i guess you have merged stuff in it this year. [17:41] in/if [17:41] so i need to update this for every file in repo (except ffmpeg only code) [17:48] most of the files seem to have the copyright assigned to the person who wrote them. not to a project [17:49] yes, this patch just changes This file is part of .... [17:49] ... is free software; ... distributed in hope, etc... [17:50] yes, so no issue. as the discussion also seems to agree. [17:50] michael said he won't commit it, but he won't oppose it. If i got it right. and today i don't get a lot of things right. [17:51] you are not alone [17:55] ffmpeg.git 03Thilo Borgmann 07master:d814a839ac11: Reinstate proper FFmpeg license for all files. [18:51] michaelni: i guess adding support for yuva/gbr(a) to snow is now trivial? [19:50] ffmpeg.git 03Michael Niedermayer 07master:7b47d7f75e6f: avcodec/pngdec: Fix padded alloc code with threads [19:50] ffmpeg.git 03Michael Niedermayer 07master:60fed98e6347: avcodec/pngdec: fix last_row_size type [20:24] ffmpeg.git 03Paul B Mahol 07master:ea3ce0085921: wnv1: remove unused avctx from codec private context [20:24] ffmpeg.git 03Paul B Mahol 07master:057dce5f21cd: kgv1dec: make decoder independent of sizeof(AVFrame) [20:24] ffmpeg.git 03Paul B Mahol 07master:c04268447627: kgv1dec: remove unused avctx from codec private context [21:43] why this again [21:45] ffmpeg.git 03Michael Niedermayer 07master:5c504e4df7b4: vformat/subtitles: check av_copy_packets return code [21:45] ffmpeg.git 03Michael Niedermayer 07master:6e1b1a27a403: avcodec/avpacket: Use av_free_packet() in error cleanups [22:49] ffmpeg.git 03Lukasz Marek 07master:0b46d6f3efa7: lavu/bprint: add append buffer function [23:23] ffmpeg.git 03Michael Niedermayer 07master:86736f59d6a5: avcodec/pngdsp: fix (un)signed type in end comparission [23:42] ffmpeg.git 03Michael Niedermayer 07release/0.11:2d945ac68f7a: avcodec/pngdsp: fix (un)signed type in end comparission [23:42] ffmpeg.git 03Michael Niedermayer 07release/1.0:5bd2b24db399: avcodec/pngdsp: fix (un)signed type in end comparission [23:42] ffmpeg.git 03Michael Niedermayer 07release/1.1:a2e7fd406c5b: avcodec/pngdsp: fix (un)signed type in end comparission [23:42] ffmpeg.git 03Michael Niedermayer 07release/1.2:ddce97c7b0e4: avcodec/pngdsp: fix (un)signed type in end comparission [23:42] ffmpeg.git 03Michael Niedermayer 07release/2.0:a4522ae516b5: avcodec/pngdsp: fix (un)signed type in end comparission [23:47] ffmpeg.git 03Michael Niedermayer 07master:454a11a1c9c6: avcodec/dsputil: fix signedness in sizeof() comparissions [00:00] --- Sat Aug 31 2013