[Ffmpeg-devel-irc] ffmpeg-devel.log.20141010

burek burek021 at gmail.com
Sat Oct 11 02:05:02 CEST 2014


[00:40] <cone-314> ffmpeg.git 03Marton Balint 07master:ce928d7d2b3b: ffplay: dont leave swresampler in half initialized state
[00:45] <Compn> J_Darnley your vis is really nice :)
[00:51] <J_Darnley> Compn: thanks but I can't take credit for the creativity there.
[01:11] <cone-314> ffmpeg.git 03Vignesh Venkatasubramanian 07master:233f3ad86983: lavf/webm_dash: Allow filenames without directories
[01:12] <mediocregopher_> so this is a noob question, but should it be possible to stream an ffm from an ffmpeg 2.4 client to an ffmpeg 1.0 server?
[01:25] <Compn> depends on codec support mediocregopher_ 
[01:25] <Compn> maybe ffserver wasnt built with all those bells and whistles you are using 
[01:25] <mediocregopher_> i'm just aiming for streaming vp8
[01:26] <mediocregopher_> and the server was built with --enable-libvpx
[01:26] <mediocregopher_> i'm getting the error "Cannot allocate memory"
[01:26] <mediocregopher_> server has 4g so i don't think it's a space issue
[01:27] <Compn> you'll have to paste all output to pastebin
[01:27] <Compn> and probably ask in #ffmpeg
[01:28] <Compn> this is the developer channel
[01:28] <Compn> interesting , comparing mediainfo ffprobe and other tools and highlighting differences, https://docs.google.com/document/d/1wubhYKbY4MhNYOfMJebxA7N8vkrrRPZiuD75O9B8z2Q/edit
[01:29] <mediocregopher_> ah, alright, didn't realize there were two different channels. thanks!
[01:29] <Compn> no problem
[02:14] <cone-314> ffmpeg.git 03Michael Niedermayer 07master:9665a0fdf680: avutil/error: Add AVERROR_INPUT_CHANGED & AVERROR_OUTPUT_CHANGED to error_entries[]
[02:20] <jamrial> michaelni: the lseek64 android patch seems to be broken. all four fate slots are failing since it was commited
[02:38] <cone-314> ffmpeg.git 03Michael Niedermayer 07master:27123a77c111: avformat/os_support: include unistd.h before defining lseek to lseek64 on android
[02:39] <michaelni> jamrial, fixed, thx
[03:14] <cone-314> ffmpeg.git 03Michael Niedermayer 07master:e96fb980dcb6: avformat/format: move mime_type_opt declaration to where its used
[05:46] <cone-314> ffmpeg.git 03Michael Niedermayer 07master:19b4c0ccf924: ffprobe: Simplify by using av_color_range_name()
[09:09] <wm4> just what the heck is this http://cgit.freedesktop.org/pulseaudio/pulseaudio/tree/src/pulsecore/ffmpeg/resample2.c
[09:09] <wm4> copy of the old resampler?
[09:11] <ubitux> hahaha
[09:12] <ubitux> nice :))
[11:55] <thardin> did anyone in here speak to shresh?
[11:56] <ubitux> thresh?
[11:56] <thardin> no, shresh. an opw applicant who wants to work on mxf. got a PM from them
[11:59] <thardin> I'll just idle until they come back :)
[11:59] <wm4> so many applicants
[12:20] <cone-984> ffmpeg.git 03James Almer 07master:73ea3ffcd55b: w32pthreads: use the CONDITION_VARIABLE typedef if available
[12:20] <cone-984> ffmpeg.git 03Michael Niedermayer 07master:09c538b6e6f3: Merge commit '73ea3ffcd55b0b1d23ba4769d65996a8d58ffa11'
[12:32] <cone-984> ffmpeg.git 03James Almer 07master:b7c3bfd5eb31: w32pthreads: use the condition variable API directly when targeting newer versions of Windows
[12:32] <cone-984> ffmpeg.git 03Michael Niedermayer 07master:54df78af657b: Merge commit 'b7c3bfd5eb3153f7de8039f96e7911b2a1d46cae'
[12:45] <Daemon404> j-b, do you know if there is a video of your devcon talk?
[12:56] <Daemon404>  /g 8
[12:59] <arwa> hey can anyone tell me about AVFrame ???
[13:00] <arwa> does it point to the current frame ??
[13:00] <wm4> in what context?
[13:01] <arwa> As in I want to access specific pixel values....so how do I go about traversing the whole frame?
[13:01] <BtbN> depends on the format. Video frames are usualy YUV
[13:01] <wm4> AVFrame.data points to the raw video data
[13:01] <wm4> but yeah how exactly the video data is interpreted depends on the format
[13:02] <wm4> and also what you want to do with it
[13:04] <arwa> Okay...!! For a general case, how is the image stored? (how do i access lets say 2nd row 5th column pixel of the image?)
[13:04] <nevcairiel> there is no general case
[13:04] <wm4> the only common thing is that data is stored in rows
[13:04] <nevcairiel> well, in general its data + row * stride + col
[13:05] <wm4> and that frame->data[0] points to the first pixel of the first plane
[13:05] <nevcairiel> but it still greatly depends on the format and everything
[13:05] <wm4> yeah, but if you consider packed yuv... have fun
[13:05] <BtbN> well, first byte of the first pixel
[13:05] <wm4> or bayer rgb
[13:05] <arwa> stride?
[13:05] <wm4> so in _general_ you'll have a hard time
[13:05] <wm4> but for some pixel formats, it's really simple
[13:06] <BtbN> If you want it simple, convert it to RGB24 or RGBA32 with 0 stride first.
[13:06] <wm4> consider GRAY (forgot the exact name), it's 1 byte per pixel with 1 plane
[13:06] <wm4> then it's uint8_t pixel_value = frame->data[0] + y * frame->linesize[0] + x
[13:06] <wm4> err
[13:06] <BtbN> You have no other choice if you want to work with raw pixels than converting it to a format you can work with in your case.
[13:06] <wm4> probably: uint8_t pixel_value = *(frame->data[0] + y * frame->linesize[0] + x);
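
(A minimal sketch of the access pattern wm4 describes, assuming a single-plane 8-bit format such as AV_PIX_FMT_GRAY8; the helper name is invented for illustration.)

    #include <stdint.h>
    #include <libavutil/frame.h>

    /* Sketch: read the pixel at column x, row y of a one-plane,
     * 8-bit-per-pixel frame (e.g. AV_PIX_FMT_GRAY8).  data[0] points to
     * the first byte of the plane; linesize[0] is the stride in bytes,
     * which can be larger than the width because of alignment padding. */
    static uint8_t get_gray8_pixel(const AVFrame *frame, int x, int y)
    {
        return frame->data[0][y * frame->linesize[0] + x];
    }
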
[13:41] <hima> ubitux: going through the application form there is a section where we have to give details as in the timeline and details of each task. So are there any set of tasks made for OPW?
[13:41] <hima> so that we could write about it in our application?
[13:57] <cone-984> ffmpeg.git 03Rong Yan 07master:31dea05170e5: libavcodec/ppc/pixblockdsp.c : fix get_pixels_altivec() and diff_pixels_altivec() for POWER LE
[13:57] <cone-984> ffmpeg.git 03Rong Yan 07master:c5ca76ad3b50: libavcodec/ppc/mpegvideoencdsp.c : fix pix_norm1_altivec() and pix_sum_altivec() for POWER LE
[14:04] <ubitux> hima: each task of the qualification?
[14:14] <Compn> ubitux : maybe we should make a note on opw page that we dont really care about the application and how complete it is, we focus on the code? :P
[14:14] <ubitux> i don't know
[14:15] <Compn> and how competent the applicant is during the qualifications
[14:15] <ubitux> i'm fine helping with code, not so much with opw stuff
[14:15] <Compn> i think thats what the consensus was with gsoc applicants as well... but maybe i'm wrong.
[14:19] <cone-984> ffmpeg.git 03Di Wu 07master:162b5211080b: vp9: enable multi-thread decoding when refreshctx is equal to 0
[14:22] <Daemon404> Compn, yep.
[14:25] <michaelni> Compn, +1
[14:28] <cone-984> ffmpeg.git 03Rong Yan 07master:c1fa5d1bd464: libavcodec/ppc/me_cmp.c : fix sad16_altivec() sse16_altivec() sad16_xy2_altivec() sad16_x2_altivec() sad16_y2_altivec() sad8_altivec() for POWER LE
[14:28] <cone-984> ffmpeg.git 03Rong Yan 07master:0d71bd5a9493: libavcodec/ppc/hpeldsp_altivec.c : fix ff_put_pixels16_altivec() for POWER LE
[14:30] <Compn> michaelni : so, is it ok for me to put a message like that on our opw wiki ?
[14:31] <michaelni> yes, but make sure it doesnt sound like it would work without an application, they do need t submit one AFAIK
[14:33] <Compn> right, yes
[14:43] <Compn> '''Note''': A friendly reminder that while the application to OPW is important for you and OPW, FFmpeg mentors will not judge the application itself. We will judge the applicant based on their abilities in coding, learning the tools, communication skills etc. So please do not worry about your application being perfect for us. Although it is very important to follow OPW's application rules so they can pay you...
[14:43] <Compn> hows that ?
[14:45] <reynaldo> Compn: the "FFmpeg mentors will not judge the  application itself
[14:45] <reynaldo> "
[14:45] <reynaldo> bit seems confusing
[14:45] <wm4> Compn: what does that even mean
[14:45] <reynaldo> wont base their desicion solely on the OPW application maybe ?
[14:46] <Compn> the opw application detailing what the person will work on, what tasks they will do etc
[14:46] <ramiro> oh, the truehd encoder is still up for grabs =). can I mentor it?
[14:47] <Compn> ramiro : yes we always need more mentors and backup mentors
[14:47] <Compn> edit the wiki and put yourself as mentor...
[14:49] <Compn> reynaldo : your wording sounds better.
[14:50] <ramiro> where can I find the old GSoC code for the truehd encoder?
[14:50] <Compn> it maybe linked in the old gsoc page
[14:50] <Compn> somewhere on svn i'm assuming
[14:50] <Compn> lets see...
[14:53] <Compn> hmmmm
[14:55] <reynaldo> Compn: thanks, Im getting better as this English stuff ;)
[14:57] <Compn> wm4 : i'm trying to say that ffmpeg devs dont care about the application or how complete it is
[14:58] <Compn> i remember a lot of gsoc students spending time on application instead of on the code 
[14:59] <Compn> ramiro : no idea where that truehd encoder code is.
[14:59] <av500> better not look at old student code :)
[14:59] <Compn> av500 : he was the original student on that project
[15:00] <Compn> now... he is the master.
[15:00] <av500> :)
[15:00] Action: Compn goes afk
[15:00] <Compn> feel free to reword my note 
[15:01] <ramiro> av500: I feel bad enough just remembering some parts of that code =)
[15:04] <av500> ramiro: :)
[15:14] <j-b> Daemon404: no idea
[15:31] <Daemon404> j-b, o ok
[15:36] <Daemon404> https://www.dropbox.com/s/x9hdtzx6c54io1z/Screenshot_2014-10-10-12-34-56.png?dl=0
[15:36] <Daemon404> D:
[15:37] <av500> what ugly android is that?
[15:38] <Daemon404> 4.4.2
[15:41] <av500> and samsung on top?
[15:41] <Daemon404> yea
[15:41] <Daemon404> S4
[15:42] <gnafu> I would not like my S3 nearly as much if it wasn't running CyanogenMod.
[15:44] <Daemon404> i dont have the highest opinion of what the cyanogenmod people do
[15:44] Action: gnafu just updated to the M11 build of CM11 last night (Android 4.4.4).
[15:44] <Daemon404> forwardporting random drivers poorly
[15:44] <Daemon404> etc
[15:45] <gnafu> All I know is it runs great on my S3, and I cringe whenever I use someone's stock Samsung phone.
[15:58] <hima> Please have a look at this code for kate demuxer. its not correct as i am getting a segmentation fault 
[15:58] <hima> http://fpaste.org/140928/94945714/
[15:58] <hima> please let me know where i am wrong 
[15:59] <ubitux> what does valgrind say?
[16:00] <Daemon404> line[strcspn(line, "\r\n")] = 0; <-- this looks very evil
[16:00] <hima> http://fpaste.org/140930/94960914/
[16:00] <hima> valgrind says this
[16:00] <ubitux> Daemon404: this works :)
[16:01] <hima> yeah that works. i saw that  in vplayerdec.c 
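
(For reference, the idiom quoted above works because strcspn() returns the length of the initial span containing neither '\r' nor '\n'; a small self-contained illustration with invented sample text.)

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char line[] = "some subtitle line\r\n";
        /* strcspn() returns the index of the first '\r' or '\n', or
         * strlen(line) if there is none, so this truncates the line
         * ending in place and is a no-op otherwise. */
        line[strcspn(line, "\r\n")] = 0;
        printf("[%s]\n", line);   /* prints [some subtitle line] */
        return 0;
    }
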
[16:01] <ubitux> hima: the valgrind debug doesn't match the code you pasted
[16:02] <ubitux> also, your probe function is wrong as i explained
[16:02] <ubitux> ptr doesn't focus on the timestamp pattern you're trying to match
[16:02] <ubitux> and never will, unless you move it to that location
[16:02] <Daemon404> eh
[16:02] <Daemon404> looking at that paste
[16:02] <Daemon404> ff_subtitles_read_text_chunk(&tr, &buf);
[16:02] <Daemon404> tr and buf are both uninitialized
[16:02] <ubitux> indeed
[16:04] <hima> ok right 
[16:05] <hima> btw this is the valgrind result http://fpaste.org/140933/49728141/
[16:06] <ubitux> as Daemon404 said, you call functions with uninitialized data
[16:06] <ubitux> i mean, make sure you understand what the functions you're trying to call are doing
[16:06] <ubitux> it won't work by luck :p
[16:07] <hima> yeah :| that was a fatal error i will change 
[16:10] <hima> ubitux will you explain a bit about the probe fucntion you said. ptr does not focus on timestamp?  i am trying to match something like  0:0:01 --> 0:0:03  right. so %d:%2d:%2d --> %d:%2d:%2d
[16:11] <hima> ptr contains p-> buf um so?
[16:11] <ubitux> sscanf is not a regex
[16:11] <ubitux> it won't magically find "abc" into "foo abc bar"
[16:11] <ubitux> so if ptr is not starting with the pattern you're trying to match, it will just not work
[16:12] <hima> okay so i need to traverse ptr and find where that pattern starts from right?
[16:12] <hima> and then use sscanf 
[16:12] <ubitux> yes
[16:13] <hima> ok
[16:13] <ubitux> but you can also just look for "kate"
[16:13] <ubitux> and check if it has an opening { following later
[16:14] <hima> yeah right 
[16:18] <hima> ubitux:  this should work right  http://fpaste.org/140938/14129506/
[16:18] <hima> ?
[16:19] <ubitux> no
[16:19] <ubitux> 16:11:18 <@ubitux> sscanf is not a regex
[16:19] <ubitux> 16:11:31 <@ubitux> it won't magically find "abc" into "foo abc bar"
[16:20] <hima> but it would enter into sscanf only when strchr(ptr,"kate {\n")
[16:20] <ubitux> no
[16:20] <ubitux> it will do sscanf first
[16:21] <ubitux> also strchr is the wrong function
[16:22] <ubitux> and even if you swap the two, you don't move the ptr so the sscanf won't work anyway
[16:22] <Daemon404> shouldnt that also be checking buf_size
[16:22] <Daemon404> because it is using unsafe functions blindly on a buf.
[16:24] <ubitux> Daemon404: later :)
[16:24] <hima> yeah i understand now ubitux 
[16:25] <hima> actually i looking for the structure definition of AVProbeData so as to understand what type is p->buf
[16:25] <hima> so that i can move ptr
[16:26] <Daemon404> a pointer is a pointer
[16:27] <hima> yeah i get it. just to understand better Daemon404 i am making the changes. understood my fault 
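
(A stand-alone illustration of ubitux's point that sscanf() only matches from the start of the string it is given and never searches; the buffer contents are invented and do not come from a real kate file.)

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *buf = "kate {\n  event {\n    0:0:01 --> 0:0:03\n";
        int h1, m1, s1, h2, m2, s2;

        /* matching from the start of the buffer finds nothing */
        int n = sscanf(buf, "%d:%d:%d --> %d:%d:%d",
                       &h1, &m1, &s1, &h2, &m2, &s2);
        printf("from start: %d fields\n", n);        /* 0 */

        /* move the pointer to where the pattern begins, then parse */
        const char *p = strstr(buf, "0:");
        if (p) {
            n = sscanf(p, "%d:%d:%d --> %d:%d:%d",
                       &h1, &m1, &s1, &h2, &m2, &s2);
            printf("after strstr: %d fields\n", n);  /* 6 */
        }
        return 0;
    }
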
[16:35] <Daemon404> ^ thats not a regression
[16:35] <Daemon404> slow-mo is new, and some apple-only thing
[16:35] <Daemon404> i wasnt able to locate the atom where its flagged
[16:38] <nevcairiel> mp4 is full of crazy things, i'm still trying to find out if a mp4 somehow signals when a "video" stream is only a bunch of cover art images and not actual video
[16:39] <Daemon404> this isnt mp4
[16:39] <Daemon404> mov ;)
[16:39] <nevcairiel> same shit
[16:39] <Daemon404> slightly different!
[16:39] <nevcairiel> smells all the same
[16:39] <Daemon404> i use a bunch of heuristics for 'cover art'
[16:40] <Daemon404> also sometimes ffmpeg will set the disposition properly
[16:40] <nevcairiel> you mean like codec = png and frames = 1 and such
[16:40] <Daemon404> to attached_pic
[16:40] <Daemon404> nevcairiel, something like that. BUT. sometimes its like 10 frames.
[16:40] <Daemon404> because each chapter has a thumb.
[16:40] <nevcairiel> yah i know
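
(For context, the disposition mentioned here is a per-stream flag in libavformat; a minimal check might look like the sketch below, with an invented helper name.)

    #include <libavformat/avformat.h>

    /* Sketch: a "video" stream that is really embedded cover art is
     * flagged with AV_DISPOSITION_ATTACHED_PIC when the demuxer detects
     * it; extra heuristics are needed when it does not. */
    static int is_cover_art(const AVStream *st)
    {
        return !!(st->disposition & AV_DISPOSITION_ATTACHED_PIC);
    }
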
[16:57] <ubitux> is there a container which supports ffv1 and a decent lossless audio codec?
[16:58] <ubitux> (for which we have a native encoder)
[16:59] <nevcairiel> ffv1 should go into mov or maybe even mkv, and put flac to it
[16:59] <Daemon404> NUT!
[16:59] Action: Daemon404 runs
[16:59] <nevcairiel> you are nuts!
[17:00] <Daemon404> you may laugh but *i use nut*
[17:00] <ubitux> i was planing actually to create a nut
[17:00] <Daemon404> ubitux, we use nut for raw video piping + timestamps
[17:00] <ubitux> but yeah indeed i missed the fact that you can put ffv1 into mkv
[17:00] <Daemon404> yeah
[17:01] <nevcairiel> ffv1 is 4 characters, automatic fourcc, of course it can go into mkv in avi-compat mode!
[17:01] <ubitux> actually
[17:01] <ubitux> you can mux flac into nut as well?
[17:01] <Daemon404> i dunno
[17:01] <ubitux> sounds surprising but well, great.
[17:29] <cone-984> ffmpeg.git 03Michael Niedermayer 07master:3537ddb76f3a: avcodec/cavs: use av_freep(), do not leave stale pointers in memory
[17:29] <cone-984> ffmpeg.git 03Michael Niedermayer 07master:ea77d3b889ac: avcodec/atrac3: use av_freep(), do not leave stale pointers in memory
[17:30] <hima> ubitux : http://fpaste.org/140962/12955011/
[17:31] <ubitux> that looks better but your file doesn't have a timestamp after "kate {"
[17:31] <ubitux> you need to look for "event {" or something like that
[17:31] <ubitux> also, remember that there can be a large number of spaces and \n between the word ("kate" or "event") and the "{"
[17:31] <wm4> is it guaranteed that there's one space between "event" and "{"?
[17:31] <hima> last time you said not to include event 
[17:32] <ubitux> in the sscanf
[17:32] <ubitux> but you need to skip it
[17:32] <ubitux> if you include it in the scanf is too strict
[17:32] <ubitux> it* is too strict
[17:32] <hima> ohh okk or i can just add a "%s " in scanf?
[17:33] <ubitux> kind of yeah
[17:33] <ubitux> look at the man
[17:33] <ubitux> but that won't exactly do what you want
[17:33] <hima> such that it reads event there? or i can include "event " in sscanf 
[17:33] <ubitux> you can
[17:34] <ubitux> you can't*
[17:34] <ubitux> because you can have "event \n\n\n     {           \n     \n\n\n\n    <the timestamp>"
[17:34] <ubitux> same for "kate" btw
[17:36] <hima> okay i wil make the changes accordingly 
[17:41] <jermy> Anybody familiar with MPEG2 High profile stuff? I'm trying to support reading some content that only decodes correctly if I explicitly set delay = 2 (rather than 1 from has_b_frames) in compute_pkt_fields
[17:43] <jermy> But I guess the question is if there's likely to already be a flag in the bitstream to indicate the maximum distance between non-B frames, because has_b_frames is only ever set to 1
[17:50] <hima> ubitux : see this http://fpaste.org/140967/12956152/
[17:52] <ubitux> \n is a simple character
[17:52] <ubitux> and you also need to skip spaces
[17:52] <ubitux> you need to skip the { after event also
[17:52] <ubitux> iirc
[17:55] <hima> okay... there is no { after event
[17:56] <hima> i will include the spaces though 
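
(One way to tolerate the variable whitespace described above is a whitespace directive in the sscanf() format: a single space there matches any run of blanks and newlines. A small illustration with an invented buffer; this is only a sketch, not the applicant's actual probe code.)

    #include <stdio.h>

    int main(void)
    {
        const char *buf = "event \n\n   {   \n\n  0:0:01 --> 0:0:03\n";
        int h1, m1, s1, h2, m2, s2;

        /* each ' ' in the format matches any amount of whitespace,
         * including newlines, between the literal tokens */
        int n = sscanf(buf, "event { %d:%d:%d --> %d:%d:%d",
                       &h1, &m1, &s1, &h2, &m2, &s2);
        printf("%d fields\n", n);   /* prints: 6 fields */
        return 0;
    }
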
[17:56] <iive> jermy: are you talking about scalability extensions. 
[17:56] <iive> I don't think any of the free decoders support it.
[18:08] <hima> ubitux: could you please once again explain what does read_header actually do 
[18:42] <thardin> i keep missing shrest
[18:52] <cone-984> ffmpeg.git 03James Almer 07master:5402d1bce5ee: float_dsp-test: allow forcing cpuflags
[18:55] <jermy> iive: Nothing as complex as that, I don't think. Just a regular IBBPBBP... stream (GOP length is probably in the order of 16 or so)
[18:56] <jermy> Assuming you mean multiple bitrate/stream handling with prediction between streams
[18:57] <jermy> Also, I assume that interlaced content is normally encoded in such a way?
[19:02] <iive> no it is not
[19:11] <iive> i mean, yest, scalability is having multiple layers, where the second layer extends the base one (temporal, spatial or SNR)
[19:12] <iive> and no,  interlace is not encoded in such way.
[19:12] <jermy> ok
[19:12] <jermy> This content decodes fine, it just needs a delay larger than 1 for frames to come out in the right order
[19:13] <iive> afaik, I think some people from x264 worked on x262 encoder. maybe kierank would know more.
[19:13] <iive> yeh, that's very strange...
[19:14] <jermy> max_b_frames is always set to a maximum value of 1 (and delay is copied from that), and that's documented as the 'maximum number of B-frames between non-B-frames'
[19:15] <jermy> So I'm suspecting it's just something that doesn't come up very often unless you're working with production content (XDCAM in this case)
[19:16] <kierank> so basically max_b_frames is wrong?
[19:16] <jermy> Although I've seen this with some H.264 content in the past too (but possibly an older version of the h264 decoder), and I know that's got slightly different behaviour for dealing with max_b_frames
[19:17] <jermy> I think so. It's set to 1 (well, the inverse of the low_delay flag from the sequence extension)
[19:18] <jermy> and I don't see anything to obviously ever set it higher
[19:18] <jermy> Frames do come out, but out-of-order and sometimes with duplicate DTS/PTS values
[19:18] <iive> jermy: h264 support b-pyramid
[19:19] <iive> but the closest to that in mpeg2 is temporal scalability.
[19:19] <jermy> Yeah, genuinely nothing that complicated. Just normal IBBP, single stream, not using B's like predictors or anything
[19:20] <cone-984> ffmpeg.git 03Benoit Fouet 07master:5e6fd132ff56: avformat/movenc: add EAC3 muxing support.
[19:21] <kierank> jermy: max_b_frames sounds right
[19:21] <kierank> i mean 1 is correct
[19:23] <jermy> Ok - you mean it's nominally 1 just to indicate each B frame is independently using the I and P frames?
[19:24] <iive> do you know how/why frame reordering is done?
[19:26] <jermy> In terms of the encoding techniques? Yes. If I'd be expected to buffer the frames and reorder them myself at an application layer? No
[19:26] <kierank> no you shouldn't be expected to reoroder
[19:28] <jermy> As I say, setting delay to 2 instead of max_b_frames works fine, but I want to know if there's a nicer way to pick up from the stream that it might need that
[19:29] <jermy> since I imagine that'd probably break things for other more basic streams
[19:43] <iive> jermy: if you can check/detect temporal scalability, then it might be done in a clean way.
[20:03] <cone-984> ffmpeg.git 03Michael Niedermayer 07master:171d971dbf04: avutil/softfloat: Fix undefined shift in av_add_sf()
[20:11] <jermy> I'll have a look in this file if I can find a sequence_scalable_extension header
[20:17] <ubitux> wm4: http://cgit.freedesktop.org/pulseaudio/pulseaudio/tree/src/Makefile.am#n2178
[20:17] <ubitux> :D
[20:19] <wm4> wat
[20:19] <wm4> just
[20:19] <wm4> wat
[20:19] <ubitux> it still works ;)
[20:21] <ubitux> (actually it doesn't)
[20:22] <ubitux> but we still have that file here
[20:29] <cone-984> ffmpeg.git 03Christophe Gisquet 07master:cb530dda7d76: utvideoenc: properly set slice height/last line
[20:32] <jermy> iive: Certainly no sequence scalable extensions headers (ID 0x5) in this file. Only got extensions of 0x1 (sequence), 0x2 (sequence display), 0x3 (quant matrix) and 0x8 (picture coding)
[20:49] <cone-984> ffmpeg.git 03Luca Barbato 07master:eb4f9069002e: lavf: More informative error message
[20:49] <cone-984> ffmpeg.git 03Michael Niedermayer 07master:fc6aa304596b: Merge commit 'eb4f9069002e73648f6640cd054fc814cfda75b8'
[21:03] <cone-984> ffmpeg.git 03Luca Barbato 07master:09e1ccc8cddc: sctp: Use AVERROR_BUG instead of abort()
[21:03] <cone-984> ffmpeg.git 03Michael Niedermayer 07master:cec7afd03640: Merge commit '09e1ccc8cddc946da5e10841f10dc5ebdd187d9d'
[21:17] <cone-984> ffmpeg.git 03Luca Barbato 07master:c27328e749ff: rtsp: Check for command strings without spaces
[21:17] <cone-984> ffmpeg.git 03Michael Niedermayer 07master:544f81145344: Merge commit 'c27328e749ff3be648411765cd17362fee017341'
[21:28] <cone-984> ffmpeg.git 03Luca Barbato 07master:8b2e9636c57b: rtsp: Support tls-encapsulated RTSP
[21:28] <cone-984> ffmpeg.git 03Michael Niedermayer 07master:c9791925a1bf: Merge commit '8b2e9636c57b22582143467a8a06b509b47b92f9'
[21:35] <cone-984> ffmpeg.git 03Luca Barbato 07master:c839b0439f0b: rtsp: Support tls when in listen mode
[21:36] <cone-984> ffmpeg.git 03Michael Niedermayer 07master:7028475f7533: Merge commit 'c839b0439f0b01c72a6d253920d2e342b30f8bcb'
[21:44] <cone-984> ffmpeg.git 03Luca Barbato 07master:3df8d52fcdc9: rtsp: Add rtsps to the probe
[21:44] <cone-984> ffmpeg.git 03Michael Niedermayer 07master:76d1ffffd0b2: Merge commit '3df8d52fcdc9036b4074fdc612d487ece8bb5b7f'
[22:26] <Daemon404> ubitux, ................
[22:27] <ubitux> yes?
[22:27] <ubitux> ah, PA?
[22:27] <Daemon404> (pulse link )
[22:27] <Daemon404> i dont even.
[22:27] <ubitux> it's very old
[22:27] <ubitux> probably when ffmpeg was saying downstreams to just copy the sources and static link
[22:27] <Daemon404> that doesnt make it less derp
[22:37] <Compn> pulseaudio isnt about to call ffmpeg to use its resample engine anyway
[22:39] <iive> btw, why pulse audio haven't been merged in systemd yet?
[22:39] <Compn> ubitux : flameeyes has copyright in that pulse makefile
[22:39] Action: Compn misread it as dondiego :P
[23:02] <cone-984> ffmpeg.git 03Luca Barbato 07master:cd9d6399fd00: tls: Support passing old-style tcp options
[23:02] <cone-984> ffmpeg.git 03Michael Niedermayer 07master:d246397161bb: Merge commit 'cd9d6399fd00f5aeacaa90cdc0b74c3570024119'
[23:38] <ramiro> I added myself as mentor for the TrueHD encoder projet. Should this be announced somewhere or do we just expect people to see it?
[23:39] <ubitux> i don't think it needs to be announced
[23:39] <ubitux> note that student only have 12 days left for the evaluation
[23:39] <ramiro> good. I thought the same...
[23:48] <J_Darnley> Is it really this complicated to do a frequency transform?  Complex numbers, hypotenuses.  All I want is something that looks half decent when drawn.
[23:51] <cone-984> ffmpeg.git 03Clément Bœsch 07master:937aac4f978c: avformat/assenc: make sure we crawl extradata only if it's non-null
[00:00] --- Sat Oct 11 2014

