[Ffmpeg-devel-irc] ffmpeg.log.20130531

burek burek021 at gmail.com
Sat Jun 1 02:05:01 CEST 2013


[00:17] <Soyo> So I have a bunch of mp4 files whose mediainfo results look like this --> http://pastebin.com/mfzS1rme -- how would I convert them to something that Kino would work well with, like a .DV file?
[00:32] <Soyo> Keeps saying stream contains no data
[00:32] <Soyo> file is 0 bytes
[00:32] <Soyo> :/
[00:33] <llogan> Soyo: ffmpeg -i input.mp4 -target ntsc-dv output.dv
[00:34] <Soyo> llogan: Thanks that seems to be doing something now
[00:35] <llogan> i haven't seen anyone use Kino in some time
[00:35] <Soyo> Yeah it is pretty old school
[01:47] <llogan> do --extra-cflags and --extra-ldflags work with libraries that require_pkg_config, or is PKG_CONFIG_PATH generally used for those?
[07:49] <kcm1700> where can I find deinterlace implementation in the ffmpeg library code?
[07:51] <kcm1700> I found it, thanks
[07:52] <JEEB> in the current ffmpeg code base there's the yadif filter for actual deinterlacing, and then there's a separate inverse telecine filter
[07:59] <kcm1700> :) thanks
[08:05] <thetruthisoutthe> hey
[08:05] <thetruthisoutthe> http://www.rottentomatoes.com/m/zero_dark_thirty/
[08:05] <thetruthisoutthe> is this cool movie?
[11:23] <superware> I'm trying to do something similar to http://ffmpeg.org/doxygen/trunk/doc_2examples_2demuxing_8c-example.html, but av_find_best_stream returns zero, which never matches in decode_packet's check: if (pkt.stream_index == video_stream_idx). Any ideas?
[11:24] <ubitux> are you able to make the example compile & work?
[11:25] <superware> yes, av_find_best_stream returns 0
[11:27] <superware> ubitux: ? :|
[11:29] <superware> the "file" is "udp://224.0.0.0:1234" which is a h.264 over TS
[11:31] <superware> maybe 0 is ok, maybe pkt.stream_index isn't
[11:31] <ubitux> if the example code works, then start from here, and modify it progressively
[11:32] <superware> it "works", but (pkt.stream_index == video_stream_idx) is always false
[11:32] <ubitux> maybe because there are multiple streams?
[11:32] <ubitux> you need to wait for a video stream packet
[11:34] <superware> that's open_codec_context(pvideo_stream_idx, _fmt_ctx, AVMediaType.AVMEDIA_TYPE_VIDEO)'s job
[11:38] <superware> av_read_frame(_fmt_ctx, ppkt) sets _pkt.stream_index to values like 98141248, does it make sense?
[11:43] <superware> anyone? :(
[11:49] <Mavrik> you're probably passing a pointer where you shouldnt
[11:49] <Mavrik> or the other way around
[11:49] <Mavrik> or something is wrong with your linkage
[11:51] <superware> well, the struct *is* being changed :)
[11:51] <superware> after av_read_frame...
[12:16] <superware> what's av_dump_format?
[12:24] <superware> never mind
[13:16] <superware> Mavrik: you here? :|
[13:26] <superware> does it make sense that pkt.stream_index equals something like 98141248 after av_read_frame(fmt_ctx, &pkt) ?
[13:30] <Mavrik> no.
[13:30] <Mavrik> you're messing something up.
[13:31] <superware> and I guess I'm doing it for hours now
[13:32] <superware> I double checked, this is what I'm doing: http://ffmpeg.org/doxygen/trunk/doc_2examples_2demuxing_8c-example.html
[13:43] <Mavrik> well, that can't be what you're doing, since that's the official doxygen example for the master git branch
[13:44] <superware> Mavrik: what do you mean?
[14:00] <leandrosansilva> Hello to all. I'm using an i7 processor (8 cores) and I'd like to know why the h264 decoder uses 9 threads by default. In my application I open and decode 4 h264 HD (2 megapixel) streams simultaneously, so having 4*9 threads isn't very efficient. Am I going to loose performance or have any other side effect if I use only 2 threads per decoder?
[14:00] <leandrosansilva> lose*
[14:18] <Mavrik> leandrosansilva, um... that's a good question actually
[14:19] <Mavrik> you probably shouldn't lose a noticeable amount of performance
[14:19] <Mavrik> you can use "-threads" BEFORE "-i" to set number of decoding threads
[14:19] <Mavrik> measure and report if you can :)
[14:36] <leandrosansilva> Mavrik, yeah, I was just curious about why 9 threads per decoder. It makes sense if you're watching a movie and have 8 cores, but in my case I have 4 streams being processed in real time. So I think I'll have to do some benchmarks :-) Thx
[14:37] <Mavrik> leandrosansilva, "numcpu + 1" is a paralellization heuristic that works well in most general cases
[14:37] <Mavrik> it's usually safe to assume that a parallelizable process will be the only one on the machine and that the user wants max utilization of the cores ;)
[14:40] <leandrosansilva> I'll try (8/4 + 1 == 3) threads per decoder and see if it changes the performance. Thx for the explanation. I'll investigate it better
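For reference, capping the decoder threads through the library API is a one-field change made before opening the codec. A minimal sketch, assuming codec_ctx and codec come from the usual libavformat/libavcodec setup:

    /* request 3 decoder threads instead of the numcpu+1 default */
    codec_ctx->thread_count = 3;
    if (avcodec_open2(codec_ctx, codec, NULL) < 0)
        fprintf(stderr, "could not open codec\n");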
[15:43] <superware> using the libs, is there a way to do demux + decode at once? or must it be avcodec_decode_video2 to get demuxed frame and then avcodec_decode_video2 again to get raw bitmap?
[15:44] <Mavrik> that would make no sense
[15:44] <Mavrik> since frames arent only video.
[15:44] <superware> Mavrik: you were right before regarding signature mismatches
[15:45] <superware> thanks
[15:45] <Mavrik> ;)
[15:45] <superware> so calling avcodec_decode_video2 right after av_read_frame is common practice, you say
[15:47] <Mavrik> well
[15:48] <Mavrik> checking if you got a video frame (and not, say, audio, subtitles, teletext or data) is common before that as well ;)
[15:48] <Mavrik> and unless you have a rather strange video format the frames wont be RGB
[15:50] <superware> but for video only I'm using av_find_best_stream(&fmt_ctx, type, -1, -1, null, 0)
[15:50] <superware> where type = AVMEDIA_TYPE_VIDEO
[15:51] <Mavrik> yeah, that'll just return you an index of stream you should look for
[15:51] <Mavrik> as "best" video stream
[15:52] <Mavrik> av_read_frame will always return all video frames
[15:52] <Mavrik> er
[15:52] <Mavrik> all frames (not just video) -- it does no filtering
[15:52] <superware> oh
[15:55] <superware> is each video frame a single image?
[15:55] <Mavrik> uuuh
[15:55] <Mavrik> for most formats it is, but it's not guaranteed
[15:55] <superware> h.264?
[15:56] <Mavrik> that's why you always get a result from decode_video2 which tells you if a frame was decoded
[15:56] <Mavrik> "usually, not always"
[15:56] <Mavrik> :)
[15:56] <superware> 0 or 1 sounds better than 0 or more...
[15:59] <Mavrik> superware, for H.264 you'll get either a frame or nothing from decoder
[16:00] <superware> great, thanks
[16:09] <superware> Mavrik: where can I find an example of demux+decode?
[16:10] <Mavrik> doesn't the decoding_encoding.c example do that?
[16:11] <superware> yeah, but I'm kinda lost here. avcodec_decode_video2 gets me a h.264 frame (one or none), what should I do next?
[16:12] <superware> I know av_image_copy will place it somewhere
[16:13] <Mavrik> avcodec_decode_video2 will give you decoded frame
[16:13] <Mavrik> not h.264
[16:13] <Mavrik> a raw frame.
[16:13] <Mavrik> I have no idea what you're trying to do.
[16:13] <Mavrik> Soooo... do with it whatever you want to do with a decoded video frame ;)
[16:15] <superware> my source is udp://224.0.0.0:1234 (h.264 over TS), do you mean to say that avcodec_decode_video2 will give me a bitmap?
[16:16] <Mavrik> no, it will give you a raw uncompressed image in whatever format your stream has it stored
[16:16] <Mavrik> hmm, which could be called a bitmap yeah
[16:16] <Mavrik> sorry for misleading you :)
[16:16] <Mavrik> superware, what are you trying to achieve?
[16:17] <superware> np. but wow, I really thought demuxing was one step where I'd get h.264, and that I'd then have to decode it to get the raw video..
[16:17] <superware> play udp://224.0.0.0:1234 :)
[16:18] <Mavrik> ah :)
[16:18] <superware> according to you ffmpeg does "h.264 over TS over UDP" directly to raw images
[16:18] <Mavrik> mhm
[16:19] <Mavrik> well, now you need to just use the timestamps on the frame to render it to screen ;)
[16:19] <Mavrik> oh, and the frames are almost certainly YUV420 instead of RGB, so make sure you compensate for that :)
[16:19] <JEEB> superware, yes -- reading via some protocol and demuxing is one step, that would be done with libavformat
[16:19] <JEEB> libavcodec comes your way after you have done that
[16:24] <superware> JEEB: but according to Mavrik the process is being done automatically (one pass)
[16:25] <JEEB> you were talking about avcodec_decode_video2
[16:25] <JEEB> which is part of libavcodec
[16:25] <JEEB> so he was expecting that you had already dealt with demuxing it
[16:25] <Mavrik> indeed
[16:28] <superware> I was talking about avcodec_decode_video2 which is used in the demuxing example
[16:30] <superware> novice warning
[16:31] <Mavrik> hmm well, the demuxing example deals with decoding as well ;)
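Putting the pieces of this exchange together, the demux+decode loop looks roughly like this. A sketch in the shape of the linked demuxing example, assuming fmt_ctx, video_dec_ctx and video_stream_idx were initialized as in that example (API names of this vintage, error handling trimmed):

    AVPacket pkt;
    AVFrame *frame = avcodec_alloc_frame();  /* av_frame_alloc() in later releases */
    int got_frame;

    while (av_read_frame(fmt_ctx, &pkt) >= 0) {      /* demuxes packets of ALL streams */
        if (pkt.stream_index == video_stream_idx) {  /* keep only the video packets */
            avcodec_decode_video2(video_dec_ctx, frame, &got_frame, &pkt);
            if (got_frame) {
                /* frame->data / frame->linesize now hold one raw picture
                   (typically YUV420P, not RGB) */
            }
        }
        av_free_packet(&pkt);
    }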
[16:33] <superware> ok. so after using av_image_copy, how do I know how many bytes the raw image is (to render it on screen etc.)? I didn't really understand void av_image_copy(uint8_t *dst_data[4], int dst_linesizes[4], const uint8_t *src_data[4], const int src_linesizes[4], enum AVPixelFormat pix_fmt, int width, int height)
[16:34] <Mavrik> superware, hmm, you don't really have to copy the image
[16:34] <Mavrik> basically you have an AVFrame
[16:34] <Mavrik> your image data is in frame->data[0]
[16:34] <superware> oh :)
[16:34] <Mavrik> it's stored line by line, where each horizontal image line is frame->stride[0] large
[16:35] <Mavrik> frame->linesize[0] actually
[16:35] <JEEB> linesize is the picture line size
[16:35] <JEEB> stride is the amount needed to get to the next line
[16:35] <Mavrik> e.g. if you have a 1280x720 frame in RGBA (where each pixel is stored with 4 bytes) your linesize would be 1280 * 4
[16:36] <JEEB> stride can be equal to linesize, but you should never write code that assumes that because there are many reasons for why the stride would be bigger
[16:36] <JEEB> mostly related to being able to optimize
[16:36] <Mavrik> and the pixels of the image from (1,1) to (1280,1) would be stored from frame->data[0][0] through frame->data[0][5119]
[16:37] <Mavrik> but as JEEB said, always use the number from frame->linesize to traverse the image
[16:37] <Mavrik> never width :)
[16:37] <JEEB> hmm
[16:37] <Mavrik> the size of frame->data[0] should be image_height * frame->linesize[0]
[16:38] <JEEB> ugh
[16:38] <JEEB> you're equalling stride to the size again :s
[16:38] <Mavrik> at least that's what most code in ffmpeg takes as a given :)
[16:38] <JEEB> unless I'm missing something
[16:39] <Mavrik> JEEB, there is no vertical stride
[16:39] <JEEB> oh right
[16:39] <JEEB> vertical
[16:39] <JEEB> didn't notice you were talking about vertical
[16:39] <JEEB> lol
[16:39] <Mavrik> ^^
[16:39] <JEEB> and linesize is the actual amount of memory used?
[16:39] <JEEB> as in stride-like
[16:40] <Mavrik> yeah, linesize holds stride
[16:40] <JEEB> k
[16:40] <Mavrik> in decoded images
[16:40] <superware> my linesize[0] is 352
[16:40] <Mavrik> JEEB, for multiplane images you get multiple data[] fields filled, with the strides in the linesize[] array :)
[16:40] <superware> format is AV_PIX_FMT_YUV420P
[16:40] <JEEB> Mavrik, yes
[16:40] <JEEB> that I know
[16:40] <JEEB> I've written an encoder after all :D
[16:40] <Mavrik> ^^
[16:41] <Mavrik> who hasn't :P
[16:41] <superware> I have non-null in data[0 to 2]
[16:41] <Mavrik> but yeah, it's been a while since I did it so I'm kinda muddy on the details
[16:42] <JEEB> Mavrik, I mean inside libavcodec lol
[16:42] <JEEB> which is why I am not that good at actually *using* libavcodec
[16:42] <Mavrik> ah :)
[16:42] <superware> how can I know the amount of memory used in data[0]?
[16:43] <Mavrik> I've only written a subtitle encoder, no video encoder :)
[16:43] <Mavrik> superware, image_height * linesize[0] -- linesize is already in bytes, so there's no extra per-pixel factor
[16:43] <JEEB> not sure why you'd need an exact copy of it, though
[16:43] <Mavrik> mhm
[16:44] <Mavrik> the next step is usually feeding it to libswscale to convert to RGBA or something :)
[16:44] <JEEB> superware, YUV420P is "4:2:0 YCbCr, planar". So you have the luma on the first memory plane, then the chroma planes come next
[16:45] <JEEB> "YUV" is a colloquial way of saying YCbCr (and incorrect in a way, since "YUV" is an analog thing)
[16:45] <superware> is it standard that they aren't interleaves?
[16:45] <superware> interleaved
[16:46] <JEEB> that picture type is planar, so yes -- not interleaved
[16:46] <JEEB> the P stands for that
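In practice, for a decoded YUV420P frame that layout means three separate planes, each with its own stride. A sketch of walking them (the half-size chroma planes follow from the 4:2:0 subsampling; always step by linesize, never by width):

    /* luma plane: full resolution */
    for (int y = 0; y < frame->height; y++) {
        uint8_t *row = frame->data[0] + y * frame->linesize[0];
        /* row[0] .. row[frame->width - 1] are the Y samples of this line */
    }

    /* chroma planes (data[1] = Cb, data[2] = Cr): half width, half height */
    for (int y = 0; y < frame->height / 2; y++) {
        uint8_t *cb = frame->data[1] + y * frame->linesize[1];
        uint8_t *cr = frame->data[2] + y * frame->linesize[2];
        /* cb[0] .. cb[frame->width / 2 - 1], same for cr */
    }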
[16:57] <superware> can ffmpeg convert to RGB?
[16:57] <JEEB> libswscale is for that
[16:58] <JEEB> depending on your needs you can use that, or you can implement it yourself on a GPU or whatever if you're doing playback
[16:59] <superware> I'm on .NET
[16:59] <superware> I'd rather have unmanaged code do that
[17:01] <Mavrik> yep, libswscale is probably your best bet
[17:01] <Mavrik> since the API is made to be used with existing ffmpeg data structures :)
[17:01] <JEEB> it should be the simplest way of getting something out :D
[17:02] <JEEB> and if you don't need highly optimized performance and results, it could be just fine for the final usage, too
[17:02] <superware> might be HD :)
[17:03] <Mavrik> swscale is pretty fast ;)
[17:03] <Mavrik> with the exception of HW accelerated colorspace converters
[17:03] <Mavrik> check if your rendering method (like OGL etc.) supports colorspace conversion on the GPU :)
[17:04] <superware> btw: avpicture_get_size
[17:04] <superware> http://stackoverflow.com/questions/11815433/ffmpeg-yuv-to-rgb-distored-color-and-position
[17:06] <JEEB> what about it?
[17:06] <bunniefoofoo> is there an lgpl aac encoder that ffmpeg supports?
[17:07] <superware> JEEB: yuv to rgb
[17:07] <bunniefoofoo> the "aac" encoder segfaults on me sometimes
[17:07] <Mavrik> hmm, I think all other AAC encoders are proprietary
[17:07] <JEEB> bunniefoofoo, nothing better than the aac encoder
[17:07] <Mavrik> bunniefoofoo, check licence on libvo_aacenc
[17:08] <JEEB> vo-aacenc would be compatible with LGPLv3
[17:08] <JEEB> not v2
[17:08] <JEEB> but it's really not in any way a better selection
[17:08] <JEEB> you're better off trying to find out why it crashes and get that fixed / updated
[17:08] <bunniefoofoo> that is going to be tough since it doesn't crash when I run the debugger
[17:09] <Mavrik> compile with debug symbols and enable coredumps system-wide
[17:10] <bunniefoofoo> voaac says LGPL v2.1 or later
[17:11] <bunniefoofoo> so does wrapper code in libav
[17:11] <bunniefoofoo> mavrik, unfortunately I am on win32 and a bit limited for debugging tools. same app does not segfault on linux
[17:12] <Mavrik> well, win32 has debuggers as well :)
[17:12] <superware> JEEB: it seems there's no PIX_FMT_YUV420P, but rather PIX_FMT_YUV420P10, PIX_FMT_YUV420P12, PIX_FMT_YUV420P14 etc
[17:23] <bunniefoofoo> mavrik do you know something better than gdb for win32 that works with gcc-compiled executables?
[17:33] <superware> JEEB: if I use avpicture_fill to go from yuv to rgb, how can I get the RGB's linesize?
[17:33] <JEEB> avpicture_fill doesn't do the conversion >_<
[17:34] <JEEB> swscale's scaling does
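The conversion being discussed would look roughly like this with libswscale. A sketch, assuming w/h are the frame dimensions; the context is created once, not per frame (older headers spell the pixel formats PIX_FMT_*):

    struct SwsContext *sws = sws_getContext(w, h, AV_PIX_FMT_YUV420P,
                                            w, h, AV_PIX_FMT_RGB24,
                                            SWS_BILINEAR, NULL, NULL, NULL);
    uint8_t *rgb_data[4];
    int rgb_linesize[4];
    av_image_alloc(rgb_data, rgb_linesize, w, h, AV_PIX_FMT_RGB24, 32);

    sws_scale(sws, (const uint8_t * const *)frame->data, frame->linesize,
              0, h, rgb_data, rgb_linesize);
    /* rgb_data[0] now holds h * rgb_linesize[0] bytes of packed RGB24 */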
[17:39] <Diogo> hi i need to get 1 thumbnail at different resolutions from videos..
[17:40] <superware> JEEB: what's PIX_FMT_YUV420P9?
[17:40] <Diogo> command: /servers/ffmpeg/bin/ffmpeg -i "rod.mp4" -vf "scale=560:-1,pad=max(iw\,ih):420:(ow-iw)/2:(oh-ih)/2" -frames:v 1 best.png ...if necessary i need to add padding..
[17:40] <JEEB> superware, 9-bit picture, as in 9 bits are used out of 16
[17:40] <Diogo> i need the exact size of 560x420 (with black border )
[17:40] <JEEB> superware, don't worry, there most certainly is a YUV420P colorspace
[17:40] <Diogo> any best command?
[17:40] <JEEB> otherwise most of the things just wouldn't work
[17:40] <JEEB> lol
[17:42] <superware> my fault
[17:44] <JEEB> superware, http://ffmpeg.org/doxygen/trunk/pixfmt_8h.html#a9a8e335cf3be472042bc9f0cf80cd4c5a1aa7677092740d8def31655b5d7f0cc2
[17:44] <JEEB> :)
[17:51] <Diogo> is it possible to generate thumbnails of different sizes in the same command?
[17:51] <Diogo> -vf "scale=560x360" -vf  "scale=100x100"
[17:53] <superware> JEEB: thanks, success at last
[17:57] <ubitux> Diogo: what are you trying to do?
[17:57] <ubitux> ah, multiple output, yes sure.
[17:58] <Diogo> generate multiple thumbnails with different sizes
[17:58] <Diogo> ubitux: -vf "thumbnail,scale=320:-1,pad=max(iw\,ih):240:(ow-iw)/2:(oh-ih)/2"
[17:58] <ubitux> ./ffmpeg -f lavfi -i testsrc -vf scale=560x360 -frames:v 1 out1.png -vf scale=100x100 -frames:v 1 out2.png
[17:58] <Diogo> 320 and 240 changes
[17:58] <Diogo> ok
[17:58] <Diogo> thanks
[17:59] <ubitux> if you need to run a complex filtergraph, you should do that differently
[17:59] <ubitux> to avoid running it twice
[18:01] <Diogo> ubitux: /servers/ffmpeg/bin/ffmpeg -i "rod.mp4" -vf "thumbnail,scale=560:-1,pad=max(iw\,ih):420:(ow-iw)/2:(oh-ih)/2" -frames:v 1 best560x420.png -vf "thumbnail,scale=320:-1,pad=max(iw\,ih):240:(ow-iw)/2:(oh-ih)/2" -frames:v 1 best320x240.png -vf "thumbnail,scale=320:-1,pad=max(iw\,ih):240:(ow-iw)/2:(oh-ih)/2" -frames:v 1  best320x240.png
[18:01] <Diogo> appear
[18:01] <Diogo> Killed    0 fps=0.0 q=0.0 q=0.0 q=0.0 size=N/A time=00:00:00.00 bitrate=N/A
[18:01] <Diogo> lol
[18:01] <Diogo> :)
[18:01] <ubitux> looks totally overkill.
[18:01] <Diogo> ehehe
[18:02] <Diogo> that's twice the work compared to this, no
[18:02] <ubitux> of course
[18:02] <ubitux> thumbnail eats a buttload of memory
[18:02] <ubitux> use vf split after the thumbnail
[18:02] <ubitux> then for each split, scale it
[18:03] <ubitux> and you need to re-interleave them somehow then
[18:03] <Diogo> split can you give an example please..
[18:03] <ubitux> http://ffmpeg.org/ffmpeg-filters.html#split_002c-asplit
[18:03] <Diogo> ok thanks
[18:08] <ubitux> ./ffmpeg -f lavfi -i testsrc=s=1280x800 -filter_complex 'split[a][b]; [a] scale=-1:360 [o360p]; [b] scale=-1:240 [o240p]' -map '[o360p]' -frames:v 1 out-360p.png -map '[o240p]' -frames:v 1 out-240p.png
[18:08] <ubitux> this works for me
[18:09] <ubitux> so you would just add the thumbnail before the split
[18:10] <ubitux> also note that you might want to look at the scene detection in vf select instead of the thumbnail filter
[18:11] <ubitux> you might want to write an external script at some point btw
[18:11] <ubitux> ok bye.
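The select-based variant ubitux mentions would be along these lines (a sketch; 0.4 is an arbitrary scene-change threshold to tune for the material):

    ffmpeg -i rod.mp4 -vf "select='gt(scene\,0.4)',scale=560:-1" -frames:v 1 best.png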
[18:23] <Hans_Henrik> when ffmpeg is encoding, and is printing stuff like frame=X fps=X q=X time=~~
[18:23] <Hans_Henrik> what does the "FPS" mean?
[18:24] <saste> frames per second
[18:24] <saste> or first person shooter
[18:24] <Hans_Henrik> i thought it was the frames per sec for playback but that doesn't make sense
[18:24] <JEEB> nope
[18:24] <JEEB> it's the speed of how quickly it gets through the content
[18:25] <Hans_Henrik> thanks
[18:36] <nellynel90> has anyone gotten pyffmpeg or ffvideo to compile on fedora? i cant get it to build
[18:40] <trose> nellynel90: not sure about pyffmpeg but it's pretty easy to just pipe data into ffmpeg from python without that library
[18:40] <trose> http://stackoverflow.com/questions/10400556/how-do-i-use-ffmpeg-with-python-by-passing-file-objects-instead-of-locations-to
[18:40] <nellynel90> trose: like with sp.Popen() ?
[18:41] <trose> yeah
[18:42] <trose> you use the flag '-i -' in your ffmpeg call
[18:42] <trose> and you specify that stdin is a pipe
[18:42] <nellynel90> for stdin yeah.
[18:42] <trose> then you just write your data to stdin
[18:44] <KalD> Hi Guys - got a quick question: I'm running ffmpeg on win32 - I am recording 5 min chunks of mp4 video from a webcam, and I need to add the current date/time to the video. I've looked around online and see many examples for linux, etc, but I can't get any of them to work on win32.
[18:47] <nellynel90> trose: in that example you pointed to me, i can do all sorts of regex stuff in that while loop right?
[18:47] <sacarasc> KalD: In the file name?
[18:47] <trose> nellynel90: should be able to, what are you trying to do?
[18:48] <KalD> sacarasc: no in the video stream (i.e. on the actual video stream)
[18:48] <KalD> sacarasc: like you'd see on a security camera feed
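For the record, the usual tool for a security-camera-style timestamp overlay is the drawtext filter with text expansion. A sketch: the dshow device name and font path are placeholders, %{localtime} needs a build with drawtext expansion support, and Windows paths need the drive colon escaped:

    ffmpeg -f dshow -i video="My Webcam" -vf "drawtext=fontfile='C\:/Windows/Fonts/arial.ttf':text='%{localtime}':x=10:y=10:fontcolor=white:box=1:boxcolor=black@0.5" -t 300 out.mp4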
[18:50] <nellynel90> trose: if data = p.stdout.read(1024), ffmpeg should spit out a 'Output: foo ..... blah blah blah...' line into data right?
[18:51] <trose> yeah
[18:51] <trose> that should spit out everything that ffmpeg normally writes to terminal
[18:52] <nellynel90> i want to start printing all lines when it finds that "output foo blah blah" line
[18:52] <nellynel90> regex should take care of that right?
[18:52] <trose> well if you just leave it alone it'll print to terminal anyways
[18:52] <trose> oh, yeah actually
[18:52] <nellynel90> yeah but it also prints out a bunch of garbage i dont need
[18:53] <nellynel90> i guess what im having trouble with is the part that tells the loop, once that expr has been found, to start printing everything else
[18:53] <trose> you could just say if 'Output' in data then
[18:54] <nellynel90> wouldnt that print a line only if 'Output' is in it?
[18:54] <trose> and use a bool start_printing :P
[18:54] <nellynel90> ah
[18:55] <trose> so if 'Output' in data: start_print = True
[18:55] <trose> if start_print: print data
[18:55] <trose> lol
[18:56] <nellynel90> i see.
[18:56] <nellynel90> let me try
[18:56] <nellynel90> thanks by the way
[18:56] <trose> no problem
[19:02] <trose> nellynel90: lemme know if you need any other help, slow day at work today :P
[19:02] <nellynel90> thanks trose i appreciate that a whole lot
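A cleaned-up sketch of what trose is describing; note that ffmpeg writes its log/progress lines to stderr rather than stdout, so that is where the pipe goes (file names here are placeholders):

    import subprocess, sys

    p = subprocess.Popen(["ffmpeg", "-i", "-", "out.mp4"],  # "-i -" = read input from stdin
                         stdin=open("input.mp4", "rb"),
                         stderr=subprocess.PIPE)            # ffmpeg's messages land here

    start_print = False
    for line in p.stderr:
        if b"Output" in line:          # the "Output #0, ..." header line
            start_print = True
        if start_print:
            sys.stdout.write(line.decode("utf-8", "replace"))
    p.wait()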
[23:19] <nellynel90> f
[23:20] <nellynel90> i cant seem to set the profile to high when using libx264
[23:21] <nellynel90> i've tried -profile:v high, -x264opts profile:high, but to no avail. ffmpeg keeps defaulting to Main
[23:22] <nellynel90> funny thing is that when i do -profile:v baseline, it works.
[23:22] <nellynel90> anyone have an idea what im doing wrong?
[23:24] <JEEB> by default libx264 sets the profile to the highest your stream needs according to the settings set otherwise (preset and such)
[23:25] <JEEB> thus in general when you need to set it, you are in a position where you have to limit the profile, not the other way around
[23:25] <bunniefoofoo> is there a difference between avpicture_alloc and av_image_alloc besides the align parameter? Apparently avpicture_alloc does not align data correctly for sws_scale anymore
[23:25] <JEEB> I don't think the libx264 library itself overrides stuff to the default if you set "high" and it doesn't need it, but I have no idea of ffmpeg's mappings
[23:26] <JEEB> but you really need to have some specific need to want to set the profile at high
[23:26] <JEEB> because it is just the flag in that case
[23:26] <bunniefoofoo> old code: avpicture_alloc((AVPicture*)_avFrame, (PixelFormat)pixelFormat, w, h);
[23:26] <bunniefoofoo> new code: av_image_alloc(_avFrame->data, _avFrame->linesize, w, h, (AVPixelFormat)pixelFormat, 32);
[23:28] <Seanduncan> I seem to have ended up with multiple versions of libavformat, swscale, etc. on my Ubuntu 12.04 system. Anyone know how to completely purge any trace of ffmpeg and related libs from my system? I've already tried apt-get remove <insert library name here> but that still results in libs hanging around. I also don't want to completely wipe my whole system and start all new. Any ideas?
[23:29] <nellynel90> apt-get purge?
[23:29] <nellynel90> apt-get autoremove ?
[23:29] <bunniefoofoo> > locate libavcodec.so ... see how many copies there are
[23:31] <nellynel90> JEEB: i have a couple of files that need to be converted to h264 high at level 5.1. i can set the level fine but not the profile.
[23:31] <JEEB> if you really want them to be of that kind I recommend you actually enable some high profile features then
[23:31] <JEEB> easiest way is to use a slower preset
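In command form, JEEB's suggestion amounts to something like this, keeping the -level flag that was already working (a sketch; -preset slower enables 8x8 transform and other High-profile tools, so the stream comes out flagged High without forcing the profile):

    ffmpeg -i input.mp4 -c:v libx264 -preset slower -level 5.1 output.mp4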
[23:34] <bunniefoofoo> sws_scale prints this: [swscaler @ 0219b680] Warning: dstStride is not aligned
[23:34] <bunniefoofoo> should I be concerned?
[23:34] <JEEB> that means not all optimizations can be used
[23:35] <bunniefoofoo> yeah, I don't get it though... the dst is yuv420p 720x480
[23:35] <bunniefoofoo> not some weird size
[23:35] <JEEB> print out the stride, I guess
[23:36] <JEEB> or wait
[23:36] <JEEB> aligned so the starting pointer might have to be aligned, too
[23:36] <bunniefoofoo> stride == AVFrame lineSize[] ?
[23:36] <bunniefoofoo> ok
[23:36] <nellynel90> JEEB: would the preset overwrite any -x264opts options i include?
[23:37] <bunniefoofoo> data pointer was allocated with av_picture_alloc
[23:42] <bunniefoofoo> data[0]=0645A020 linesize[0]=720
[23:43] <bunniefoofoo> that's a multiple of 16 stride, with 48 bytes alignment, no?
[23:44] <bunniefoofoo> oh this is interesting... sws_scale is called several times before that warning comes out
[23:46] <nellynel90> actually -preset slower did work!
[23:51] <JEEB> nellynel90, -preset sets the defaults, after that come your settings
[23:56] <bunniefoofoo> jeeb, alignment seems to be 16-bytes which I assume is the requirement for most simd stuff: http://pastebin.com/YbBAZPw5
[23:56] <bunniefoofoo> alignment does not seem to change before or after the error comes out
[23:57] <bunniefoofoo> my bad... data pointers are changing! sorry
[23:57] <bunniefoofoo> this is too weird
[23:58] <bunniefoofoo> heh, my bad again.. data pointers should change, it's using about a 15-frame buffer... they all look aligned right to me
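For reference, the usual way to silence that warning is to give the destination buffers explicit alignment at allocation time, as the av_image_alloc call pasted earlier does. A minimal sketch (32-byte alignment covers the common SIMD paths):

    uint8_t *dst_data[4];
    int dst_linesize[4];
    av_image_alloc(dst_data, dst_linesize, 720, 480, AV_PIX_FMT_YUV420P, 32);
    /* hand dst_data/dst_linesize to sws_scale instead of an avpicture_alloc'd buffer */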
[00:00] --- Sat Jun  1 2013

