[Ffmpeg-devel-irc] ffmpeg-devel.log.20120504

burek burek021 at gmail.com
Sat May 5 02:05:03 CEST 2012


[00:01] <saste> uhm ok i reread the thread, there is the $(cat x) argument which may still apply
[00:01] <saste> but i wanted something which could be used by libav* applications, so i gave up
[00:02] <saste> but we could package preset files like it is done with ffpresets
[00:02] <saste> i suppose someone just posted something about that?
[00:03] <ubitux> i was just working on the ffpresets ATM (see the recent patchset on ffmpeg-devel)
[00:03] <ubitux> and i was thinking of moving out all the -target stuff into ffpresets since i think it belongs here
[00:03] <ubitux> of course there is the issue of internal ffmpeg functions, which you actually "solved"
[00:04] <ubitux> <@saste> but we could package preset files like it is done with ffpresets // you mean target files?
[00:04] <ubitux> such as vcd-pal.fftargets ?
[00:04] <saste> metvik, really? when?
[00:05] <saste> weird, i never typed "metvik" but just "me"
[00:06] <ubitux> me<tab>?
[00:07] <ubitux> well yes you somehow solved the issue with "ffmpeg" presets
[00:07] <ubitux> instead of fftools preset
[00:07] <ubitux> :)
[00:09] <ubitux> well anyway
[00:09] <ubitux> i think it's something that could be improved :p
[00:10] <saste> yes i like the idea of a target file, loaded from $random_site which works with $freak_device, with no commandline messing
[00:12] <RobertNagy> would it make sense to add a "vrate" filter to change the frame rate of a video stream?
[00:14] <RobertNagy> that would allow dropping frames at the beginning of a graph, instead of dropping processed frames after the graph, or duplicating frames before the graph?
[00:15] <RobertNagy> if the frame rate control is outside of the graph, you either have the problem of possibly duplicating frames before the graph, or dropping frames after the graph; either way you would be doing unnecessary work depending on the final frame rate
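
For reference, the kind of in-graph frame-rate conversion being proposed is what libavfilter's fps filter ended up providing (it drops or duplicates frames inside the graph); a minimal sketch with hypothetical file names, assuming a build that has the filter:

    ffmpeg -i input.mkv -vf fps=25 -c:v libx264 output.mp4

By contrast, "-r 25" as an output option applies the same drop/duplicate logic outside the filter graph.
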
[00:21] <saste> RobertNagy: http://gitorious.org/~saste/ffmpeg/sastes-ffmpeg/commits/soc-filters-20110221
[00:21] <saste> also check the decimate filter which I recently posted
[00:22] <saste> since the API is changing it's better to wait before working on that
[00:22] <saste> the soc filter is FPS
[00:23] <saste> it only supports frame dropping if I'm right
[00:24] <RobertNagy> soc filter?
[00:24] <RobertNagy> oh
[00:25] <RobertNagy> yea
[00:25] <RobertNagy> but I don't see why it resets the frame
[00:25] <RobertNagy> if you remove the has_frame = 0 and  avfilter_unref_buffer(fps->pic);
[00:25] <RobertNagy> it would work for duplication as well
[00:26] <saste> all but rotate and fps were already applied
[00:26] <saste> RobertNagy: feel free to send an updated patch :-)
[00:27] <RobertNagy> I might just do that
[00:27] <RobertNagy> though, I suppose there is a reason for has_frame
[00:28] <RobertNagy> however, I can't quite figure it out
[00:28] <RobertNagy> oh
[00:28] <RobertNagy> there is another problem
[00:28] <RobertNagy> it assumes that the pts starts at 0
[00:28] <RobertNagy> which might not always be the case
[00:29] <saste> also: AV_TIME_BASE -> link->time_base, so that code needs to be moved to the configure stage
[00:30] <RobertNagy> yea, there is some work left
[00:31] <RobertNagy> saste: I don't follow your last comment
[00:32] <saste> before the addition of AVFilterLink.time_base, all filters assumed a time_base == AV_TIME_BASE
[00:32] <RobertNagy> oh right
[00:32] <saste> that's not the case anymore, since every link can support a different time_base, which is configured *after* the init stage
[00:33] <saste> so you need to read it in config_input (IIRC): inlink->time_base
[00:33] <RobertNagy> hm, that's a bit over my head
[00:33] <saste> don't worry, just send a patch and we'll fix it ;-)
[00:34] <RobertNagy> I'll try to get time to experiment with it this weekend
[00:34] <RobertNagy> thanks for the heads up
[00:34] <RobertNagy> gnight
[00:35] <saste> np, thanks for being willing to volunteer your time on it
[01:55] <tg_> anybody want to make some consulting money?
[01:56] <tg_> need some help with regards to threading limitations
[01:57] <kierank> what are you trying to do
[02:06] <tg_> encode a single video across 48 cores
[02:06] <tg_> have a 4x12 core opteron server
[02:06] <tg_> using ffmpeg with threads=0
[02:07] <tg_> uses the same amount of resources as threads=4
[02:07] <tg_> namely, about 6 cores
[02:07] <tg_> 150fps
[02:07] <iive> that's 6 physical and 12 virtual, isn't it?
[02:07] <tg_> no
[02:07] <tg_> 12 physical
[02:07] <tg_> 12 core opterons
[02:07] <tg_> 2.6ghz each
[02:07] <tg_> here hold on
[02:08] <tg_> http://i.imgur.com/jZKVW.jpg
[02:08] <tg_> that's with threads=0
[02:08] <tg_> 16 threads for decoding, each isn't doing much lifting
[02:08] <tg_> 1 thread is pegged at 100%
[02:11] <tg__> sry irc crashed
[02:11] <tg__> http://i.imgur.com/oWioF.jpg
[02:11] <tg__> there it is with threads=4
[02:11] <tg__> giving the same rendering speed of 150fps
[02:11] <tg__> using ~6 cores
[02:13] <kierank> why do you want to do this
[02:13] <kierank> can't you run multiple files at the same time?
[02:13] <tg__> yes
[02:13] <tg__> but if I only have 1 video to convert
[02:13] <tg__> I want it to convert quickly
[02:13] <kierank> then use a faster preset
[02:13] <tg__> without sacrificing quality
[02:14] <kierank> not much you can do about that
[02:14] <tg__> well
[02:14] <tg__> there is
[02:14] <tg__> convert the video with 10 instances of ffmpeg, each taking 1/10th the video length, and then merge it after
[02:14] <kierank> you can't just expect encoding and decoding to scale linearly with threads
[02:14] <tg__> but I was looking for a more elegant solution
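
A rough sketch of that split-and-merge idea for a 2-hour input, with hypothetical file names: each background job encodes a 12-minute slice, and (in builds that have it) the concat demuxer joins the results without re-encoding. Per-chunk rate control and the audio cut points are the obvious caveats.

    # encode ten 12-minute chunks in parallel; -ss placed after -i seeks by decoding, so the cuts are accurate
    for i in $(seq 0 9); do
      ffmpeg -i input.mkv -ss $((i*720)) -t 720 -c:v libx264 -c:a copy chunk$i.mp4 &
    done
    wait
    # list the pieces and join them without another encode
    for i in $(seq 0 9); do echo "file 'chunk$i.mp4'"; done > list.txt
    ffmpeg -f concat -i list.txt -c copy joined.mp4
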
[02:14] <iive> tg__:  it is possible the decoding is the limiting factor.
[02:15] <tg__> @iive - the decoding threads (the 16 at the top) are not cpu bound
[02:15] <tg__> nor is it IO bound
[02:15] <tg__> as both source and target are in /dev/shm
[02:15] <iive> no, but the spatial and temporal prediction are limiting the parallelism you could expect.
[02:15] <tg__> once that 1 thread reaches 100%
[02:16] <tg__> no more concurrency will increase throughput
[02:16] <kierank> 01:14:48 <kierank> you can't just expect encoding and decoding to scale linearly with threads
[02:16] <tg__> spatial and temporal prediction is linear only?
[02:16] <tg__> is it possible then to have ffmpeg process the file in concurrent chunks?
[02:16] <kierank> yes if you split at i-frames
[02:16] <iive> tg__: ffmpeg has 2 kinds of parallelism. One is using slices, the other is using frames.
[02:17] <tg__> hm
[02:17] <iive> in theory slices could be decoded independently, but with h264 this is not true for all types.
[02:17] <tg__> is there a way to test the decoding theory using ffmpeg and just passing through the input?
[02:18] <iive> when using frames, you also depend on how far the frames you reference have been decoded.
[02:18] <iive> tg__: raw encoder to /dev/null?
[02:18] <tg__> yes
[02:18] <tg__> let me try that
[02:19] <iive> i've heard that when using x264 threaded encoding, the video would be able to decode efficiently with the same number of threads used for encoding.
[02:19] <tg__> ffmpeg -i infile.ext  -map_chapters -1 -vcodec libx264 -b:v 1500k -bf 2 -threads 4 -s 854x480 -pass 1 -partitions +parti4x4+parti8x8+partp4x4+partp8x8+partb8x8 -acodec libfaac -b:a 96k -ac 2 -ar 44100 -y output.mp4
[02:19] <tg__> is the command
[02:20] <tg__> i'll modify it to output to raw on /dev/null
[02:21] <tg__>  ffmpeg -i infile.ext -vcodec rawvideo /dev/null ?
[02:22] <iive> add -an to disable sound.
[02:23] <tg__> ffmpeg -threads 0 -i infile.mkv 264 -an -f rawvideo -y /dev/null
[02:23] <tg__> ?
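
A cleaned-up form of that decode-only test, with the input name kept hypothetical; the null muxer variant skips writing the raw frames altogether:

    ffmpeg -threads 0 -i infile.mkv -an -f rawvideo -y /dev/null
    # or, avoiding the output write entirely:
    ffmpeg -threads 0 -i infile.mkv -an -f null -
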
[02:24] <tg__> 600 fps
[02:25] <tg__> http://i.imgur.com/7NrJe.png
[02:26] <tg__> up and down from 350-600 fps
[02:26] <tg__> using 700-800% cpu
[02:26] <tg__> so 7-8 cores
[02:27] <tg__> let me try some resizing also and see if that limits it
[02:30] <tg__> resizing, about 300fps, 500% cpu use
[02:30] <tg__> still not seeing that 1 process that is at 100%
[02:30] <tg__> they are all threading nicely
[02:30] <tg__> all 16 decoder threads @ ~ 24% cpu
[02:31] <ohsix> you can use perf to see the impact of disk io and cache usage
[02:31] <tg__> its reading from /dev/shm
[02:32] <ohsix> you can use perf to see the impact of disk io and cache usage
[02:33] <tg__> let me check
[02:33] <tg__> just perf record then run it?
[02:33] <tg__> or perf top
[02:34] <ohsix> perf list, then perf -e whatever record
[02:34] <ohsix> perf record -g without any explicit counters would be interesting, but wouldn't tell you much about what the cpu's are doing
[02:34] <tg__> http://i.imgur.com/AE11I.png
[02:35] <tg__> perf top during raw decode @ 500 fps
[02:35] <kierank> you need debug symbols to do that
[02:35] <ohsix> especially won't tell you what the programs aren't doing without symbols
[02:35] <kierank> i.e use ffmpeg_g
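
A typical sequence along those lines, assuming perf is installed and the unstripped ffmpeg_g binary is used so symbols resolve (file names hypothetical):

    perf list                                              # enumerate the available events
    perf record -g ./ffmpeg_g -i infile.mkv -an -f null -  # sample the run with call graphs
    perf report                                            # browse the recorded hotspots
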
[02:36] <ohsix> and glibc, 10% is a lot
[02:36] <tg__> it's running at 10% while idling
[02:36] <tg__> here's idle: http://i.imgur.com/SKE7H.png
[02:39] <tg__> hm
[02:40] <tg__> is there a way to override how many threads ffmpeg uses for decoding?
[02:40] <tg__> it seems it limits itself at 16
[02:43] <ohsix> there's a point where you only suffer from added threads
[02:44] <ohsix> the point of investigating with perf is to see if something completely unrelated is a problem even with 16
[02:44] <tg__> what we've determined so far is the decoder is working properly in parallel
[02:44] <tg__> and can decode at the cpu's limit
[02:44] <tg__> max out 16 threads decoding basically
[02:45] <ohsix> we who?
[02:45] <tg__> with iive's suggestion
[02:46] <tg__> when transcoding, the encoder is fully parallel, the decoder is also
[02:46] <tg__> but something inbetween is running on a single thread and using 100% cpu
[02:46] <ohsix> perf can record only on that one cpu if you want
[02:46] <tg__> really
[02:46] <tg__> hmm
[02:47] <ohsix> but perf can also record just ffmpeg and tell you what it did on all those cpus
[02:47] <iive> perf would tell you what functions are most used for decoding
[02:47] <iive> it won't tell you about interlocking threads. afaik.
[02:47] <ohsix> i'm pretty sure the parent/child relationship with the threads is enough to get a stack dump on the one using 100%
[02:47] <tg__> i can find the pid of the thread that is blocking at 100%
[02:49] <tg__> how can i tell it to listen to pid X
[02:50] <tg__> perf stat -p 21483
[02:50] <tg__> should do it
[02:51] <tg__> http://pastebin.com/WiJxYkHc
[02:52] <tg__> and some more detail on the individual calls for the same pid:
[02:52] <tg__> http://pastebin.com/9RrShmg7
[02:54] <tg__> vs one of the encoding threads: http://pastebin.com/zgHYtscB
[02:55] <tg__> hmm
[02:55] <tg__> looks like that is grabbing the parent 
[02:56] <tg__> confirmed that both perf top -p [pid]'s are the same regardless of child process
[02:58] <tg__> there are 3 types of threads, decoding (16 of them, running at priority of 20, nice of 0)
[02:58] <tg__> 8 encoding threads (running at priority 30, nice 10)
[02:58] <tg__> and that 1 thread which is at 100% cpu
[03:00] <tg__> can't seem to get perf to lock onto that one thread, it always pulls the parent, even when specifying the pid of the subthread
[03:01] <tg__> how can you compile ffmpeg with the flags so you can debug it properly with perf?
[03:02] <tg__> gcc -g ?
[03:04] <tg__> --disable-stripping ?
[03:14] <Daemon404> tg__, ffmpeg_g is always created
[03:15] <tg__> hm
[03:17] <tg__> ok got it working with --disable-stripping
[03:17] <tg__> ff_hscale8to15_8_ssse3.loop 
[03:17] <tg__> is the biggest user of resources
[03:17] <tg__> decode_cabac_residual_nondc_internal
[03:18] <tg__> for some reason i can't get perf to focus on only one PID
[03:18] <tg__> it always give me the parent
[03:18] <tg__> here is the updated perf with ffmpeg broken down
[03:18] <tg__> http://pastebin.com/G8JESmq7
[03:19] <tg__> if I can focus in on one PID i can see which function is not threaded and is blocking
[03:41] <tg__> yeah too bad perf doesn't want to zero in on the thread
[03:41] <tg__> keeps showing me the parent PID's details when I give it a child's PID
[03:42] <ohsix> you need to get the tid, not the pid :]
[03:42] <tg__> how can i get the TID from htop
[03:42] <ohsix> don't know
[03:42] <tg__> the subprocess has its own PID though
[03:42] <tg__> but when I give perf that PID it shows me the parent's details
[03:43] <tg__> getting closer though
[03:49] <tg__> bingo, just put the PID in -t and it works
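
For the record, perf addresses a single thread by its thread id (TID); one way to find and profile it, with placeholder values:

    ps -eLf | grep ffmpeg      # the LWP column holds each thread's id
    perf top -t <tid>          # live profile of just that thread
    perf record -t <tid> -g    # or record it for a later perf report
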
[03:50] <tg__> here is the blocking process's profile: http://pastebin.com/vsECqJqn
[03:52] <tg__> decoding thread: http://pastebin.com/vshzb3F9
[03:53] <tg__> and encoding thread: http://pastebin.com/MHqkfPtH
[03:55] <tg__> here is a better copy of the thread that is blocking: http://pastebin.com/6K97hgBJ
[03:58] <tg__> so looks like an x264 limitation
[03:59] <tg__> x264_macroblock_tree_propagate
[03:59] <tg__> x264_slicetype_mb_cost.isra.20
[03:59] <tg__> x264_pixel_avg2_w8_mmx2
[03:59] <tg__> x264_pixel_satd_8x8_internal_xop
[04:00] <ohsix> it's not easy to usefully put work on so many cpu's
[04:01] <tg__> what do those 4 processes do?
[04:01] <tg__> calculate the slicing?
[04:02] <tg__> those 4 x264 functions do not appear in any of the encoding or decoding processes
[04:02] <tg__> only the blocking thread
[04:02] <tg__> i will contact jason and ask him what they do
[04:04] Action: Daemon404 wonders what spawned this big threading-related discussion
[04:04] <tg__> trying to encode a single video as fast as possible
[04:05] <ohsix> someone found a machine with a lot of cpu cores, that's it :]
[04:05] <ohsix> as fast as possible would probably use something on the sandybridge or an asic, no?
[04:05] <Daemon404> if the video res is large enough, perhaps sliced threading?
[04:05] <ohsix> practically speaking, as fast as possible is usually just a bit over real time :D
[04:06] <tg__> it's 720p
[04:06] <tg__> @ohsix, i'm at 150fps right now on 24fps content
[04:06] <Daemon404> wouldnt this be more suited to an x264 chan?
[04:07] <tg__> probably ;(
[04:07] <ohsix> tg__: i meant from a hardware device :] since they're typically paired with something that only produces frames so fast
[04:08] <Daemon404> --preset ultrafast
[04:08] Action: Daemon404 runs
[04:08] <tg__> lol
[04:08] <tg__> quality aside
[04:08] <tg__> i think the best approach might be cutting the video into 10 sections
[04:09] <tg__> and doing that in parallel
[04:09] <tg__> and then merging it after with a non-decode/encode run
[04:09] <Daemon404> be careful when merging avc streams
[04:10] <tg__> would it have to be cut on keyframes?
[04:10] <tg__> ie from keyframe ->nextkeyframe-1 ?
[04:10] <Daemon404> if you're simply encoding + joining, that is a non-issue
[04:11] <Daemon404> there are more subtle problems involving joining/concatenating avc streams
[04:11] <tg__> all I want to do is take a 2hr mkv
[04:11] <Daemon404> that can cause lovely seeking issues, etc
[04:11] <tg__> distribute its encoding across all cores evenly or as best as possible
[04:11] <ohsix> is this something you need to do once?
[04:11] <tg__> and have the final result as an h264 mp4
[04:11] <tg__> yeah
[04:11] <ohsix> then you would have been done ages ago if you just let it finish
[04:11] <Daemon404> is there something stopping you from remuxing?
[04:12] <tg__> haha no sorry, ongoing
[04:12] <tg__> it is only done once per file
[04:12] <Daemon404> ohsix, he might be near a black hole
[04:13] <tg__> @daemon it needs to be resized also 
[04:14] <Daemon404> downscaled?
[04:14] <tg__> yes
[04:14] <tg__> from any plethora of format
[04:14] <Daemon404> running a streaming service then?
[04:15] <tg__> yes
[04:15] <ohsix> can't you have it output yuv and do all the fancy stuff with x264 directly
[04:15] <tg__> if the limitation is with x264 that will not save much time (and the overhead on space will be large)
[04:16] <tg__> if I have it output YUV it goes quickly
[04:16] <tg__> albeit, ffmpeg seems to limit decoding threads to 16
[04:16] <Daemon404> im not really seeing your Big Problem here
[04:16] <tg__> you have a 120 minute 1080p mkv file
[04:17] <tg__> you want to convert it to a web-friendly mp4 for streaming in flash
[04:17] <tg__> with the way it is now, it will convert at ~ 150fps using 6 cores
[04:17] <tg__> which is quick
[04:17] <tg__> but
[04:17] <tg__> there are 48 cores
[04:17] <tg__> lol
[04:18] <tg__> theoretically they can decode/scale/encode 1200 frames per second
[04:18] <tg__> together
[04:18] <tg__> with 8 concurrent videos, it does
[04:18] <Daemon404> you will not get a single clip to utilize that sort of core-age with any sort of sane threading model
[04:18] <tg__> if I just cut the video into 8 parts
[04:19] <tg__> then muxed it together on the second pass
[04:19] <Daemon404> do you only ever have one transcode job at a time or something?
[04:19] <Daemon404> (and 6 cores seems a bit low... it may be decode-bound)
[04:19] <tg__> correction
[04:19] <tg__> it uses 650% cpu
[04:20] <tg__> on 80 threads
[04:20] <tg__> so ~6 cpu's fully
[04:20] <Daemon404> >80
[04:20] <tg__> -threads 0 
[04:20] <tg__> yields
[04:20] <Daemon404> anyway, read what i just said & asked
[04:20] <tg__> 16 decoding threads and 60 encoding threads
[04:20] <tg__> i do need to transcode the 1 job as fast as possible
[04:20] <tg__> and if there are many jobs
[04:20] <tg__> they will be done in order
[04:21] <Daemon404> that seems like a poor model.
[04:21] <ohsix> how do you propose to split such a tiny amount of work into such tiny bits and pieces
[04:21] <Daemon404> ohsix, by completely fucking over ratecontrol
[04:21] <Daemon404> <_<
[04:21] <ohsix> it costs cache bandwidth and stuff to communicate, so you're already behind
[04:21] <tg__> ratecontrol would be reasonable though
[04:21] <tg__> as on a 2hr video
[04:21] <tg__> 8 pieces is still a significant chunk of data to normalize
[04:22] <tg__> I have no problems concurrently encoding 60 streams in realtime
[04:22] <Daemon404> better hope one chunk is not e.g. credits
[04:22] <tg__> and using 100% resources
[04:22] <tg__> but that means if somebody gives the application a 2hr video
[04:22] <ohsix> you aren't going to be able to usefully use all those resources with one job
[04:22] <tg__> it will take 2hrs * 24/150 to encode it
[04:23] <tg__> any encoding service will just throw the videos up concurrently
[04:23] <tg__> and fill their resources that way
[04:23] <Daemon404> half an hour for a 2 hr video is pretty reasonable
[04:24] <tg__> but that means you have to wait at least realtime/6 for it to finish
[04:24] <tg__> but
[04:24] <Daemon404> or wait i cant do math lulz
[04:24] <tg__> lol
[04:24] <tg__> 19 minutes
[04:24] <ohsix> the proper way to go about this is figuring out how it scales from one core, to several, say 4, or 8, or 16; it will start to drift away from linear speed up
[04:24] <Daemon404> 19 mins
[04:24] <ohsix> your optimal job might take longer, but more efficiently use say, 6 cores
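
One simple way to measure where the speedup flattens out, assuming a hypothetical input.mkv and an ffmpeg whose libx264 wrapper accepts -preset; each run encodes to the null muxer with a different thread count:

    for t in 1 2 4 8 16 32; do
      echo "threads=$t"
      time ffmpeg -v error -i input.mkv -an -threads $t -c:v libx264 -preset medium -f null -
    done
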
[04:24] <tg__> what if I did this
[04:24] <tg__> split the video horizontally 8 ways
[04:24] <Daemon404> ohsix, it can be linear-like if you use sliced threads
[04:24] <tg__> or vertically
[04:24] <Daemon404> but it's a bad idea to use many sliced threads on a relatively low-res video
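
If the build links libx264, sliced threading can be requested through the option passthrough; a hedged sketch (option spelling follows x264's own help output, file names hypothetical):

    ffmpeg -i input.mkv -an -threads 8 -c:v libx264 -x264opts sliced-threads=1 out.mp4
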
[04:25] <tg__> all video's are at least 320 vertical lines of resolution
[04:25] <ohsix> right, you get to the point where you're on the 100th part of a thing with only 100 parts and your communication overhead is 4 parts
[04:25] <Daemon404> you do realize having a ton of small slices will degrade quality right?
[04:25] <tg__> not a ton
[04:25] <tg__> 8
[04:26] <tg__> i can max the system with 8 concurrent ffmpeg instances
[04:26] <tg__> i can even do 4
[04:26] <ohsix> go for efficiency, everything else is worse
[04:26] <ohsix> it will still be many many times faster than realtime
[04:26] <tg__> the second pass is insanely fast
[04:26] <tg__> and uses much more resources
[04:27] <tg__> is it possible to do the first pass in 8 segments
[04:27] <Daemon404> it may actually be faster to do 1-pass crf + some sort of vbv
[04:27] <Daemon404> but dont quote me
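
That 1-pass CRF-plus-VBV idea would look roughly like this through the libx264 wrapper; the CRF value is a placeholder and the 1500k cap is just carried over from the command pasted earlier:

    ffmpeg -i input.mkv -c:v libx264 -crf 22 -maxrate 1500k -bufsize 3000k \
           -c:a libfaac -b:a 96k -ac 2 out.mp4
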
[04:27] <tg__> if I could split the first pass into 8 segments
[04:27] <tg__> that would give reasonable quality
[04:27] <tg__> but would potentially affect the bitrate estimation
[04:27] <tg__> but 
[04:28] <tg__> if you had a second pass after that it would equalize it nicely and the second pass for some reason works very fast
[04:28] <tg__> and doesn't have the blocking thread
[04:28] <tg__> I tested it at ~700fps 
[04:28] <Daemon404> it sounds like you may be using the slow b-frame algo
[04:29] <Daemon404> and not using fast firstpass
[04:29] <tg__> I was only using 1-pass
[04:29] <tg__> no second pass
[04:29] <Daemon404> also this service sounds somewhat illegal :P
[04:30] <tg__> not at all
[04:30] <tg__> no more than dropbox or google drive
[04:30] <tg__> check out put.io
[04:30] <ohsix> google drive lets the public view videos you uploaded?
[04:30] <Daemon404> merely wondering what sort of user uploads a 2hr 1080p matroska file
[04:31] <Daemon404> with DMCA shithawks all over it
[04:31] <tg__> it does
[04:31] <tg__> our project doesn't even let you publicly view
[04:31] <tg__> think of it as a glorified seedbox
[04:32] <Compn> there was a cluster ffmpeg
[04:32] <Compn> some kind of cluster api
[04:32] <Daemon404> there was x264farm as well
[04:32] <tg__> i saw x264farm
[04:32] <tg__> but windows only
[04:32] <Compn> not sure how good they work...
[04:32] <tg__> and same issue with single video performance
[04:32] <ohsix> how do they enforce windows only, heh
[04:32] <tg__> true ;)
[04:32] <ohsix> or the cluster services are windows only?
[04:33] <tg__> no x264farm
[04:33] <tg__> i only saw windows binary
[04:33] <tg__> maybe I didn't look right
[04:33] <Daemon404> in any case
[04:33] <Daemon404> for any decently big site
[04:33] <Daemon404> serialized processing like you speak of is kind of silly :P
[04:33] <tg__> if we cloud fetch a 2 hour video
[04:34] <tg__> with multipath download to speed that up
[04:34] <tg__> you can't start encoding until the file is fully downloaded
[04:34] <Daemon404> sure you can, if its a decent format
[04:34] <ohsix> multipath download? like lftp pget -n ? :D
[04:34] <Daemon404> and you use crf + vbv
[04:34] <tg__> think of a torrent file's pieces
[04:34] <Daemon404> oh, chunked multithreaded dl
[04:34] <Daemon404> lul
[04:34] <tg__> or using a 16 thread download utility
[04:35] <tg__> which downloads in chunks
[04:35] <tg__> you can prioritize pieces
[04:35] <tg__> but not guaranteed
[04:35] <tg__> unless it's http or something
[04:35] <tg__> so
[04:35] <tg__> once the file is downloaded
[04:35] <ohsix> there's a neat fuse filesystem that lets you download a torrent, and it will block on not present chunks, don't remember where though; it's neat
[04:35] <Daemon404> sounds like youre trying to make a transcode/webview/preview for a seedbox
[04:35] <tg__> it's a bit different than that, daemon
[04:35] <Daemon404> ohsix, on the downside... it's fuse, and lolfuse
[04:36] <ohsix> Daemon404: there are one or two services that download torrents for you, and allow you to get the files from http, video is an interesting, if useless extension
[04:36] <tg__> gluster is fuse and it's good on infiniband
[04:36] <tg__> so
[04:36] <tg__> this service also lets you grab links from premium lockers
[04:36] <tg__> using premium accounts
[04:36] <tg__> without you having to have one
[04:36] <Daemon404> i have never seen anything fuse that wasnt a terrible cpu hog
[04:36] <ohsix> getting pages for fuse can be pretty expensive
[04:37] <tg__> so you can paste in 20 filehost links (all .part1.rar, part2.rar etc)
[04:37] <tg__> and it'll fetch it
[04:37] <tg__> extract it
[04:37] <Daemon404> tg__, wouldnt that violate said lockers' ToS
[04:37] <tg__> and encode any video that is in it
[04:37] <ohsix> i use it for ntfs on a bunch of volumes, even with the cpu usage the io rates are comparable, so maybe that's just how much cpu it needs :o
[04:37] <tg__> only if they catch you daemon ;)
[04:37] <ohsix> comparable to windows on the same machine, that is
[04:37] <tg__> we have 65,000 ip's
[04:37] <Daemon404> remember what i said before about this being illegal?
[04:37] <Daemon404> :P
[04:37] <tg__> it's not actually illegal
[04:37] <tg__> there is no file sharing
[04:37] <tg__> it is just a dropbox
[04:37] <tg__> with extra features
[04:37] <Compn> dont let Daemon404 scare you
[04:38] <tg__> anyway, its pretty great
[04:38] <Compn> :)
[04:38] <tg__> i'll give you guys private betas if you want
[04:38] <Daemon404> fun fact: i never had dropbox until today
[04:38] <tg__> 100G of space
[04:38] <Compn> i want priv beta :)
[04:38] <tg__> and 40gbit/s of bandwidth
[04:38] <tg__> on tap
[04:38] <tg__> cloudload.com
[04:38] <tg__> i'll let you know when we do private beta
[04:39] <Compn> my email > patriotact at gmail.com
[04:39] <tg__> have 480 Tb of glusterfs storage right now ;)
[04:39] <Compn> if you need email
[04:39] <tg__> lol
[04:39] <tg__> anyway, google drive and dropbox both allow you to add any file you want
[04:39] <tg__> and stream it too now
[04:39] <tg__> if it's a video
[04:39] <tg__> and share that link with anyone
[04:40] <Compn> tg__ : btw, you might want to ask in #libav-devel for consultants. libav is a fork of ffmpeg and ffmpeg merges changes from there so ...
[04:40] <tg__> are they open to consulting?
[04:40] <Compn> yes
[04:40] <tg__> i was going to ask jason too to see why x264 is non parallel on those functions
[04:40] <tg__> x264_macroblock_tree_propagate
[04:40] <Daemon404> thats a simple answer
[04:40] <Compn> jason is helpful too :)
[04:40] <tg__> x264_slicetype_mb_cost.isra.20
[04:40] <tg__> etc
[04:40] <Daemon404> the algorithms aren't inherently parallelizable
[04:40] <Daemon404> like most.
[04:41] <Compn> it looks like the case against megaupload will not go ahead. usa did it all wrong
[04:41] <tg__> yeah i know 
[04:41] <Daemon404> not very many things in this work are
[04:41] <tg__> i know kim personally actually
[04:41] <Daemon404> s/work/world/
[04:41] <Compn> oh nice
[04:41] <tg__> and the owners of thepiratebay lol
[04:41] <tg__> all of them
[04:41] <Daemon404> Compn, they accomplished their goal though
[04:41] <tg__> and torrentz.eu
[04:41] <Compn> original owners ?
[04:41] <tg__> and kat.ph
[04:41] <Daemon404> throwing a wrench into their tires
[04:41] <tg__> yes gottfrid
[04:41] <tg__> etc
[04:41] <Compn> anakata
[04:41] <Compn> ahh
[04:41] <tg__> we were one of the first customers at prq ;)
[04:41] <Compn> i heard he wasnt doing so well :(
[04:41] <Compn> nice
[04:41] <tg__> he is not extremely well no
[04:41] <tg__> lol
[04:42] <tg__> anyway
[04:42] <tg__> another good possibility
[04:42] <tg__> is streaming the video while it's transcoding
[04:42] <tg__> that way as soon as it's done (or sooner if we can do piece-priority)
[04:42] <tg__> you can start streaming it
[04:42] <Compn> youtube must do a lot of paralellization
[04:43] <tg__> yeah
[04:43] <Compn> too bad they dont share stuff :P
[04:43] <tg__> i noticed something though
[04:43] <Daemon404> Compn, more like 1000000 jobs at once
[04:43] <tg__> about youtube
[04:43] <Daemon404> given the number of uploads
[04:43] <Compn> true
[04:43] <tg__> they complete the various formats at different times
[04:43] <Daemon404> also they have a length limit
[04:43] <Compn> what was it? 24 hours of video uploaded every second? 
[04:43] <tg__> when you upload
[04:43] <Daemon404> so one clip wont be blocking
[04:43] <tg__> its 24 hours of video every minute
[04:43] <Compn> ah
[04:43] <tg__> which isn't that much if you think about it ;D
[04:43] <tg__> but
[04:43] <sj_underwater> random question: will it ever be possible to leverage Quick Sync?
[04:44] <tg__> 5 formats per video
[04:44] <tg__> means 24 hours * 5 for every 1 minute
[04:44] <tg__> lol
[04:44] <Daemon404> tg__, just waiting until they finally give up on webm
[04:44] <tg__> i'm not loving webm
[04:44] <tg__> or html5 video for that matter
[04:44] <Compn> html5 is so much nicer than flash tho :P
[04:44] <Compn> well . in theory...
[04:44] <tg__> yeah in theory :D
[04:44] <tg__> in reality
[04:44] <tg__> encode it all to mp4
[04:44] <Compn> i hate flash
[04:45] <tg__> and serve via flash (or natively for android/iphone)
[04:45] <Compn> sj_underwater : quick sync? could you describe it ?
[04:45] <tg__> I downloaded a torrent at 114Mb/s yesterday
[04:45] <sj_underwater> its the Intel fixed function codec, on the newer CPUs
[04:45] <tg__> and seeded it at twice that
[04:46] <Compn> if only you had an i2 connection :P
[04:46] <tg__> cogent has lots of bandwidth to spare now that megaupload is out of it ;)
[04:46] <Daemon404> sj_underwater, a lot of intel's video stuff is snake oil
[04:46] <tg__> you mean the asic sj?
[04:46] <tg__> or the onboard encoding accelerator for the new intel chipsets?
[04:47] <sj_underwater> Daemon404: well, it's looking good in TH tests, i know they have an SDK out (which i know wouldn't work on everything)
[04:47] <Compn> sj_underwater : unfortunately our hardware accel support is always slow to implement
[04:47] <sj_underwater> tg__: that's basically it, but crazy-fast
[04:47] <sj_underwater> tg__: faster than most GPUs
[04:47] <tg__> If you guys want to make some cash (devs), build me an OpenCL x264 encoder
[04:47] <tg__> so I can put 8 5970's in a server and encode 12,000 fps
[04:47] <sj_underwater> i think it's Intel Media SDK 2.0 or something, to use it
[04:47] <tg__> 3000 stream processors
[04:48] <tg__> at 1ghz
[04:48] <sj_underwater> does a BD rip in ~30min
[04:48] <tg__> on a $500 card
[04:48] <tg__> hm
[04:48] <tg__> what chips have it sj?
[04:48] <Daemon404> tg__, you seem to forget that the algorithms are not necessarily parallelizable
[04:48] <Daemon404> and thus not necessarily very cpu friendly
[04:48] <Compn> sj_underwater : sometimes a company will work on hwaccel support for their chip and send patches ... which happened with nvidia vdpau and vaapi
[04:48] <Daemon404> s/cpu/gpu/
[04:49] <sj_underwater> tg__: it's Sandy Bridge or newer
[04:49] <tg__> is it like the instruction set they put in for aes?
[04:49] <ohsix> yea sandybridge has a unit for it, can do up to 40mbit all the profiles and stuff
[04:49] <sj_underwater> tg__: i think it uses it, the BD protection is also accelerated
[04:50] <tg__> http://www.youtube.com/watch?v=gsptCtfuGfs
[04:50] <tg__> thats' on the media sdk
[04:50] <sj_underwater> tg__: but it's fixed function silicon
[04:50] <tg__> hm
[04:50] <tg__> not just instruction sets?
[04:50] <tg__> so they're calling it quicksync
[04:51] <tg__> any benchmarks?
[04:51] <sj_underwater> yup
[04:51] <tg__> h264 support?
[04:51] <sj_underwater> http://www.anandtech.com/show/4083/the-sandy-bridge-review-intel-core-i7-2600k-i5-2500k-core-i3-2100-tested/9
[04:51] <sj_underwater> http://www.tomshardware.com/reviews/sandy-bridge-core-i7-2600k-core-i5-2500k,2833-5.html
[04:52] <sj_underwater> h264 is in the newer ones
[04:52] <Daemon404> [22:50] < sj_underwater> tg__: but it's fixed function silicon <-- why would anyone want to use this
[04:52] <sj_underwater> dont think it had it in the beginning
[04:52] <tg__> only 2 apps supporting it lol
[04:52] <sj_underwater> Daemon404: umm, it's incredibly fast?
[04:52] <tg__> let me check the benchmarks
[04:52] <Daemon404> sj_underwater, at the expense of shitty quality?
[04:52] <sj_underwater> tg__: that's the thing, no1s using the SDK
[04:52] <Daemon404> i can do that in x264 with preset ultrafast too.
[04:52] <tg__> logically
[04:52] <sj_underwater> Daemon404: i think it's adjustable
[04:52] <ohsix> it's fully awesome
[04:53] <tg__> ultrafast > quicksync, send memo to intel
[04:53] <ohsix> last time i looked it implemented a ton of features software encoders don't even bother with, i'm not sure if that's due to the profile levels it supports or what, but it does _everything_
[04:53] <Daemon404> ohsix, im more concerned with PQ
[04:53] <tg__> here's the numbers
[04:53] <Daemon404> not nifty features noone uses
[04:54] <tg__> avchd 1920x1080 50i, 1m13s clip = 1 minute, 53 seconds on an i7-2600k
[04:54] <tg__> 36 seconds with quicksync
[04:54] <sj_underwater> Daemon404: if they're nifty, shouldn't every1 be using them?
[04:54] <tg__> on the same hardware
[04:55] <Daemon404> sj_underwater, nifty doesnt mean useful.
[04:55] <tg__> wonder if anyone has looked at the quality differential
[04:55] <tg__> http://www.youtube.com/watch?v=Fq84qkZlvqo
[04:55] <sj_underwater> Daemon404: i think you can adjust the quality quite a bit, obviously there's opt work 2 be done
[04:55] <Daemon404> it's silicon
[04:56] <Daemon404> you cant make a less-than-optimal hw algo better.
[04:56] <tg__> "Conclusion While conversion speeds are greatly increased with Intel Quick Sync Video, there is an undeniable reduction in image quality when compared to x264 and H.264; "
[04:56] <tg__> so that's that
[04:56] <tg__> lol
[04:57] <Daemon404> and absolutely nobody was surprised.
[04:57] <tg__> lol
[04:57] <Daemon404> protip: i used to work for intel
[04:57] <tg__> tbh x264 is almost as fast from what I can tell
[04:57] <ohsix> less than optimal, like quality wise? sure you can't improve the speed, it's in hardware
[04:57] <Daemon404> ohsix, ive seen x264 be as fast or faster than many hw encoders
[04:57] <Daemon404> and better or equal PQ
[04:58] <ohsix> right
[04:58] <tg__> I'll give you guys a 48 core opteron server with 64G of ram if you can fully parallelize an x264 transcoding solution
[04:58] <Daemon404> but corps love hardware
[04:58] <Daemon404> because $$$
[04:58] <tg__> and i'll give you one with 2 ati 5970's (good for bitcoin mining, 750Mhash/s) if you can get it done on openCL ;D
[04:58] <Compn> hehe bitcoin! :)
[04:59] <tg__> dont laugh
[04:59] <ohsix> the stuff on sandyvage is still pretty nuts
[04:59] <Daemon404> tg__, its not super hard. find where the threading model flatlines in efficiency, encode in segments and join.
[04:59] <Compn> i made some money off bitcoin
[04:59] <Daemon404> if its not too short
[04:59] <Daemon404> simple logic.
[04:59] <tg__> i make $60/month per 5970
[04:59] <Compn> but only a little :)
[04:59] <tg__> in bitcoins
[04:59] <tg__> lol
[04:59] <Compn> wow , i didnt know it was still profitable
[04:59] <tg__> i just tried it for fun cause i had two 5970's
[04:59] <Compn> i bought a 5830
[04:59] <tg__> dude its $5 per btc
[04:59] <tg__> lol
[04:59] <tg__> you can mine about .3 - .5 per day per 5970
[04:59] <Compn> well i bought two, but sold one for double because bitcoins rose prices so greatly :)
[04:59] <Daemon404> also there is someone (funded) working on opencl
[04:59] <tg__> but they are really the biggest baddest cards for bitcoin mining
[05:00] <tg__> yeah mainconcept has openCL version of h264 encoder
[05:00] <tg__> but it sucks
[05:00] <Daemon404> i mean for x264
[05:00] <tg__> orly
[05:00] <tg__> by who
[05:00] <tg__> i'd pitch in on that
[05:00] <Daemon404> there are patches, but they're not production quality
[05:00] <Daemon404> dont expect Magic Speedups (tm) btw
[05:00] <Daemon404> opencl is not wizardry
[05:00] <tg__> no far from it
[05:01] <ohsix> electricty is a bitch
[05:01] <Daemon404> and as it stands, it's actually degrading PQ atm
[05:01] <Daemon404> for only a little speed gain
[05:01] <Daemon404> it's a WIP
[05:01] <sj_underwater> btw "Intel indicates that these are not hardware limitations of Quick Sync, but rather limitations of the transcoding software. To that extent, Intel is working with developers of video editing applications to bring Quick Sync support to applications that have a more quality-oriented usage model. These applications are using Intels Media SDK 2.0 which is publicly available. Intel says that any de
[05:01] <sj_underwater> veloper can get access to and use it."
[05:01] <Daemon404> sj_underwater, intel loves their PR
[05:01] <sj_underwater> Daemon404: im sure they do
[05:01] <Daemon404> i did say i used to work for them right?
[05:02] <sj_underwater> yes
[05:02] <tg__> what did you do for them Daemon?
[05:02] <Daemon404> linux div
[05:02] <tg__> obv
[05:02] <tg__> on what
[05:02] <tg__> drivers?
[05:02] <Daemon404> lolno
[05:02] <sj_underwater> but it's not like GlobalFoundry is gonna win any awards any time soon
[05:02] <ohsix> tizen?
[05:02] <Daemon404> i was working on yocto stuff, and some other stuff
[05:02] <tg__> kernel?
[05:03] <tg__> ah
[05:03] <tg__> what happened with yocto anyway
[05:03] <Daemon404> but they were pushing us and everyone else to use sandy bridge stuff
[05:03] <Daemon404> and atom stuff
[05:03] <Daemon404> and i was very unimpressed with both
[05:03] <tg__> atom is garbage
[05:03] <Daemon404> yes. yes it is.
[05:03] <tg__> tegra is pretty much destroying it
[05:03] <tg__> i have an hp mini with an atom in it
[05:04] <tg__> needless to say it sucks
[05:04] <tg__> so the wife has it now
[05:04] <Daemon404> i still say if they ever make an atom-based phone
[05:04] <tg__> lol
[05:04] <Daemon404> itll light my pants on fire
[05:04] <Daemon404> literally.
[05:04] <ohsix> heh the wikipedia page says quicksync isn't supported on linux, but the vaapi drop for encoding pretty much came with that quarter's driver
[05:04] <ShadowJK> It's annoying to find tegra in a nice formfactor and not locked down
[05:04] <sj_underwater> Daemon404: actually there is one coming out i think, x86 if not atom itself
[05:04] <Daemon404> tg__, re: yocto, it's around and growing :P
[05:04] <tg__> yeah checking it out now
[05:04] <Daemon404> and i use poky/yocto cross toolchains every day
[05:04] <tg__> i thought it had died
[05:04] <Daemon404> their toolchains are very useful
[05:04] <ShadowJK> transformer prime would be nice if you could put ubuntu on it without spending weeks to do it :)
[05:05] <tg__> new transformer prime is out soon
[05:05] <tg__> btw
[05:05] <tg__> on the 15th
[05:05] Action: Daemon404 is happy with his omap4.
[05:05] <tg__> i just got the galaxy note
[05:05] <tg__> not bad
[05:05] <tg__> not as fast as tegra
[05:06] <Daemon404> i generally use the poky/yocto toolchain to cross-compile for my omap4 on my desktop
[05:06] <Daemon404> infinitely faster
[05:06] <sj_underwater> Daemon404: http://arstechnica.com/gadgets/news/2012/04/the-first-intel-smartphone-comfortably-mid-range-eminently-credible.ars?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+arstechnica%2Findex+%28Ars+Technica+-+Featured+Content%29
[05:06] <Compn> tg__ : dumb question, but did you grep for MAX_THREADS in the code ?
[05:06] <sj_underwater> Daemon404: the Atom Z2460
[05:06] <tg__> on what decode side?
[05:06] <Daemon404> sj_underwater, i expect it to have overheating issues :P
[05:06] <Daemon404> ive nearly burned myself on some atom stuff before
[05:06] <tg__> i can no doubt hard code in more than 16 decode threads
[05:06] <Compn> i guess that is in the decode side . i should read up what you want, encoder side i think 
[05:07] <tg__> but the limitation is still there in 1 thread
[05:07] <tg__> here look
[05:07] <sj_underwater> Daemon404: like your pants?
[05:07] <Compn> i'm searching to see if someone else ran into the same problem and found a workaround :)
[05:07] <Daemon404> sj_underwater, like my hands. burning my skin.
[05:07] <tg__> http://i.imgur.com/7NrJe.png
[05:07] <tg__> that is without x264, just straight raw decoding
[05:07] <tg__> solid FPS
[05:07] <Daemon404> some eeepcs were nasty... melted the kb
[05:08] <tg__> early eeepc's were dangerous
[05:08] <tg__> one lit on fire in the futureshop
[05:08] <tg__> lol
[05:08] <Daemon404> they were celerons
[05:08] <Daemon404> even worse
[05:08] <tg__> dont knock celerons
[05:08] <tg__> back in the day i had a celeron 400mhz
[05:08] <tg__> overclocked to 650
[05:08] <tg__> for 3 years
[05:08] <tg__> and unlocked the l2 cache
[05:09] <tg__> after that they sucked though.
[05:09] <Daemon404> the celeron in the eeepc701 was dangerous
[05:09] <tg__> intel really sucks at mobile processors
[05:09] <tg__> they have never had a foothold
[05:09] <tg__> @Compn
[05:10] <tg__> the thread that is limiting the x264 encoding
[05:10] <tg__> is on the x264 side
[05:10] <Daemon404> you really should just join #x264dev
[05:10] <tg__> the decoding is perfectly parallelized
[05:10] <tg__> i emailed jason
[05:10] <tg__> i'll throw him some bitcoins
[05:10] <tg__> and have him figure it out
[05:10] <Daemon404> you're more likely to get progress on irc
[05:10] <tg__> yeah
[05:10] <tg__> i'll try it out then
[05:10] <tg__> save my bitcoins
[05:10] <tg__> ;)
[05:11] <Daemon404> im pretty sure jason gets a billion emails a day he ignores
[05:11] <ohsix> he should be around soon, on irc, too
[05:11] <Compn> delicious bitcoins
[05:11] <Daemon404> offering money for X or Y
[05:11] <tg__> i put $ in the title
[05:11] <Compn> silk road is amazing...
[05:11] <ohsix> money is a poor motivator :[
[05:11] <Daemon404> it is
[05:11] <tg__> what do you prefer
[05:11] <Compn> wonder if those alpaca socks are that good
[05:11] <ohsix> you can't even get someone to think about switching tasks for less than 50$
[05:11] <tg__> it was more than $50
[05:11] <Compn> not sure they are worth that tho
[05:11] <Compn> lol
[05:12] <Daemon404> tg__, all the money in the world cant make devs do stuff they hate sometimes :P
[05:12] <ohsix> tg__: no suggestion, i just don't ask them unless it's something i can pay them like 1k+ for
[05:12] <Daemon404> look at x264 and mbaff
[05:12] <tg__> trust me guys I know :)
[05:12] <Daemon404> or interlacing in general
[05:12] <tg__> i have 4 devs working for me that used to work for brazzers
[05:12] <ohsix> sandybridge does it
[05:12] <ohsix> porn you say
[05:12] <Daemon404> ohsix, sandybridge might actually be better at interlaced content
[05:12] <Daemon404> x264 isnt terribly great at it.
[05:12] <ohsix> it's stupid fast, but nobody has done IQ comparisons or anything
[05:13] <ohsix> intel probably embargoed them from the early ones, even
[05:13] <tg__> ohsix the budget is higher than that
[05:13] <Compn> tg__ : does ffmpeg use all the threads when you use internal mpeg4 encoder ?
[05:13] <tg__> shoot me the command for it and i'll run it
[05:13] <Daemon404> Compn, why are you asking about that. the internal mpeg4 encoder is pure evil :P
[05:13] <tg__> if i'm just decoding the mkv it uses 16 threads
[05:13] <Daemon404> also slow.
[05:13] <tg__> and keeps them at ~ 50% each
[05:13] <tg__> 50% cpu per thread
[05:13] <tg__> which is good
[05:13] <tg__> that scales nicely
[05:14] <tg__> also the x264 encoding scales nicely
[05:14] <tg__> what doesn't
[05:14] <tg__> is part of the x264 resize
[05:14] <tg__> or estimation
[05:14] <Daemon404> i assume you mean its internal resize
[05:14] <tg__> it's an x264 function though
[05:14] <Daemon404> and not the resize filter
[05:15] <Daemon404> it downscales some stuff for internal use
[05:15] <tg__> you might be on to something
[05:15] <tg__> the -s flag
[05:15] <tg__> does that get passed to the encoder
[05:15] <tg__> or does ffmpeg deal with that
[05:15] <tg__> before passing it off
[05:15] <Daemon404> ffmpeg has its own resize filters (see -vf)
[05:15] <Compn> ffmpeg -i input -vcodec mpeg4 -an -f rawvideo /dev/null
[05:15] <Compn> tg__ :^^
[05:15] <tg__> -threads 0
[05:16] <Compn> with whatever -threads option
[05:16] <tg__> right
[05:16] <Compn> yeah
[05:16] <tg__> k
[05:16] <Daemon404> Compn, um
[05:16] <Daemon404> why not just do
[05:16] <Daemon404> -f null -
[05:16] <Compn> because obviously i didnt know about that option :P
[05:16] <tg__> much more parallel
[05:17] <tg__> no blocking thread obv
[05:17] <Compn> just checking then :)
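
Putting those two suggestions together (file name hypothetical):

    ffmpeg -threads 0 -i input.mkv -an -c:v mpeg4 -f null -
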
[05:17] <tg__> using 800% cpu
[05:17] <tg__> 290fps
[05:17] <tg__> seems decoder limited
[05:17] <tg__> threads is about 16 it looks like
[05:17] <Compn> try hard maxing the threads then
[05:17] <tg__> decoding and encoding
[05:18] <tg__> ok let me try hard maxing
[05:18] <tg__> interesting
[05:18] <tg__> putting -threads 48
[05:18] <tg__> gives me a bunch of threads doing nothing
[05:18] <Compn> sounds like a bug :)
[05:18] <tg__> only the amount of encoding threads matches the amount of decoding threads
[05:18] <Daemon404> it's likely an algorithmic limitation.
[05:18] <tg__> ie: 16 of each
[05:18] <Daemon404> not a bug
[05:19] <Compn> libavcodec/mpegvideo.h for max_threads
[05:19] <Daemon404> Compn, its not a good idea to change that.
[05:19] <Compn> will it assplode ?
[05:19] <tg__> i'll change it hold up
[05:19] <Daemon404> Compn, quite possibly yes
[05:20] <Compn> a lot of those max_ things probably need to be changed at the same time :)
[05:20] <tg__> how can i up the amount of decoding threads?
[05:20] <Compn> like mpeg buffer
[05:20] <Daemon404> it wouldn't be set if not for a reason
[05:20] <Compn> line 61 of ffmpeg/libavcodec/mpegvideo.h
[05:21] <Compn> Daemon404 : i've seen arbitrary limits before in code
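
A quick way to locate compile-time caps like that in the source tree (purely illustrative; the constant's name and location vary between versions):

    grep -rn "MAX_THREADS" libavcodec/ | head
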
[05:21] <tg__> fyi
[05:21] <tg__> http://i.imgur.com/5Az2E.png
[05:21] <tg__> there's the internal encoder run with -threads 24
[05:21] <tg__> decoder threads are on the top
[05:21] <tg__> encoder threads on the bottom
[05:22] <tg__> see the 8 dead ones in the middle
[05:22] <tg__> 24-8  = 16
[05:22] <Daemon404> michaelni likely knows the reason
[05:22] <tg__> so the 16 decoder threads only make 16 encoder threads: 1:1
[05:22] <Daemon404> or BBB
[05:22] <Compn> hmm, where are the encoder threads maxed out at ?
[05:22] <tg__> decoder *
[05:22] <tg__> but if you put threads=0
[05:22] <tg__> it knows to only put as many encoding threads as decoding threads
[05:23] <tg__> whereas on x264 you put threads=0 and it puts 72 encoding threads
[05:23] <tg__> all only using 1-3% cpu
[05:23] <Compn> did you send message to dark_shikari (jason) yet ? 
[05:24] <tg__> is he online?
[05:24] <Compn> hes on irc, dunno if active
[05:24] <Daemon404> protip: jason likely does not like random PMs
[05:24] <Compn> is he in #x264devs then ?
[05:24] <Daemon404> as i said, #x264dev
[05:24] <Compn> ah
[05:24] <Compn> yea try there
[05:24] <tg__> k hold up
[05:24] <Compn> i forget irc etiquette sometimes
[05:24] Action: Compn holds pinky up while sipping tea
[05:26] <tg__> i pinged him in x264dev
[05:26] Action: Daemon404 still thinks this approach will break down in heavy multiuser use
[05:27] <tg__> the platform scales hardware horizontally
[05:27] <tg__> if the box has too many jobs
[05:27] <tg__> there is a daemon that dispatches jobs based on free resources 
[05:27] <Daemon404> ec2-based by any chance?
[05:27] <tg__> we have 8 "workhorses" right now
[05:27] <tg__> nope
[05:27] <tg__> all baremetal in-house private cloud
[05:27] <Daemon404> ah
[05:27] <tg__> ec2 is not price effective
[05:27] <tg__> at this scale
[05:28] <tg__> we have 8 of these 48 core/64G ram 1U servers
[05:28] <tg__> as workhorses
[05:28] <tg__> they extract rar's and encode video, download .torrents, etc
[05:28] <Daemon404> ec2 is really only useful for holyshitrandomspikes on a very large scale
[05:28] <tg__> yeah 
[05:28] <tg__> I am a scalability engineer
[05:29] <tg__> so i've architected the platform
[05:29] <tg__> to scale properly
[05:29] <tg__> even to multiple datacenters
[05:29] <Daemon404> ive just started working on a streaming service that currently uses ec2
[05:29] <Daemon404> (as in new job)
[05:29] <tg__> it's good for prototyping
[05:29] <tg__> but not for large scale
[05:29] <tg__> look at reddit
[05:29] <Daemon404> this is p. large scale
[05:30] <Daemon404> bw usage is in petabyte(s)/day
[05:30] <tg__> if you want, Daemon, I do scalability consulting
[05:30] <tg__> we're talking 25Pb of storage
[05:30] <tg__> and 40gbit/s
[05:30] <tg__> ec2 would make this model unprofitable
[05:30] <tg__> if your clients have very deep pockets
[05:30] <Daemon404> we only use ec2 for transcoding
[05:30] <tg__> ec2/s3 would work for that
[05:30] <tg__> you can get some gpu clusters or whatever
[05:30] <tg__> and crunch your video
[05:31] <tg__> use fasp for fetching and delivery
[05:31] <Daemon404> we do use s3
[05:31] <tg__> but prepare to pay out the ass
[05:31] <Daemon404> were looking to move to in-house eventually
[05:31] <tg__> I'll let you know what we're using
[05:31] <tg__> the result of over 1yr of testing
[05:31] <tg__> do what you want with it :)
[05:32] <Daemon404> heh
[05:32] <Daemon404> im not super involved with the scalability bit
[05:32] <tg__> 4U servers, 36 2Tb drives (hitachi deskstar 5k3000's are unbreakable and better than almost all enterprise sata drives)
[05:32] <tg__> 3 raid5 arrays internally
[05:32] <tg__> gives you 33 usable, redundant drives
[05:32] <tg__> = 66 Tb per 4U
[05:33] <tg__> costs 11K per 4U server
[05:33] <tg__> drives included
[05:33] <tg__> put whatever flavor of linux on there
[05:33] <tg__> or bsd
[05:33] <tg__> use infiniband (mellanox or qlogic) 40Gbps network cards
[05:33] <tg__> mount glusterfs on the storage servers 
[05:33] <tg__> use infiniband RDMA protocol for intercommunication
[05:33] <Compn> my last couple hds i've bought have been hitachi :)
[05:33] <tg__> scale sideways as you please
[05:34] <tg__> 1500Mb/s throughput per 4U
[05:34] <Compn> hds in general have just declined so so hard
[05:34] <tg__> these hitachi's are un-killable
[05:34] <Compn> so many tb drives failing amongst my friends :\
[05:34] <tg__> I've had 1 fail out of 240 of them
[05:34] <tg__> western digital 2tb RE4 (the expensive raid ones), 4 failed out of 80
[05:34] <tg__> and they cost 3x as much
[05:35] <Daemon404> i avoid wd at all costs
[05:35] <Daemon404> too many bad experiences
[05:35] <tg__> western digital is garbage
[05:35] <tg__> unless you're going to the velociraptors or something
[05:35] <tg__> even then
[05:35] <ohsix> hoho and i use nothing else since seagate bought samsungs mobile stuff
[05:35] <ohsix> everyone gets burned once, and hates them forever
[05:35] <tg__> go check what 66Tb of storage costs you on S3
[05:35] <tg__> lol
[05:35] <Daemon404> ohsix, ive lost at least 6 wd drives
[05:36] <ohsix> i've been "blessed" with having at least one of every vendors drive fail on me, so i know they all suck balls
[05:36] <tg__> rule of thumb with s3/ec2 is if you're using it for more than 2 months and you're >1 instance
[05:36] <Compn> kind of telling that there is no manf that has a perfect record or even a record better than the rest of them
[05:36] <tg__> its cheaper to go inhouse
[05:36] <Daemon404> tg__, im not actually sure what our entire size is
[05:37] <Daemon404> we're not small, i know that much
[05:37] <tg__> 1Pb of data per day
[05:37] <tg__> is that inbound or outbound
[05:37] <tg__> what ratio of storage -> transfer do you do
[05:38] <tg__> (ex: how many avg views per video)
[05:38] <tg__> youtube is about 1:380
[05:38] <tg__> 380x the bandwidth for every byte stored
[05:38] <ohsix> my seagate es2 drives were a total loss, the FDB bearing seized on 2 of them within a few power on hours of eachother
[05:38] <Daemon404> good question
[05:38] <tg__> that sucks ohsix
[05:38] <tg__> i had 4 1Tb wd black's
[05:38] <ohsix> and the seagate in my laptop would willingly immolate itself even with the lightest activity
[05:38] <tg__> in raid 10
[05:39] <tg__> not cheap at the time
[05:39] <tg__> 2 died within a day
[05:39] <tg__> and it happened to be the two second halves of each r1 array :(
[05:39] <tg__> lol
[05:39] <Daemon404> tg__, site is vimeo btw
[05:39] <ohsix> did you order them from WD? were they in the same batch?
[05:39] <tg__> lol I know vimeo
[05:39] <tg__> who doesn't
[05:39] <Daemon404> :P
[05:39] <tg__> I don't think vimeo is on ec2 ;D
[05:39] <tg__> lol
[05:39] <tg__> not for storage anyway :D
[05:39] <Daemon404> we transcode on it
[05:40] <tg__> that is also surprising
[05:40] <Compn> file hosting isnt the same without megaupload. lost so many links with that
[05:40] <Daemon404> look for some aws in storage urls
[05:40] <Daemon404> ;p
[05:40] <Compn> guess someone has a copy anyhow
[05:40] <ohsix> anyways, if you get any number of drives, you need to test them all for infant mortality :\
[05:40] <tg__> that is mind boggling that they use aws
[05:40] <tg__> they must be losing money hand over fist
[05:40] <tg__> yeah we order drives in batches of 100
[05:40] <ohsix> you can spread out the risk by getting drives from different batches and vendors, but you can't eliminate it
[05:40] <tg__> and do 7 day burnin full read/write
[05:41] <Daemon404> tg__, nope 
[05:41] <tg__> and we find 2-3 out of 100 with SMART failures
[05:41] <Daemon404> we may have some deal
[05:41] <Daemon404> im not sure
[05:41] <tg__> was going to say
[05:41] <tg__> either they have cost-pricing
[05:41] <tg__> or a lot of VC capital
[05:41] <ohsix> word
[05:41] <tg__> lol
[05:41] <Daemon404> i dont think we're moving away very soon
[05:41] <ohsix> people at home are like, asking for trouble if they dont' just buy the consumer packaged drives in one unit :]
[05:41] <tg__> man I didn't think it was financially feasible to use aws beyond prototype phase
[05:42] <ohsix> i should have had at least 6 of those ES2's, and done early testing, i had 3
[05:42] <tg__> not in lean web anyway
[05:42] <Daemon404> a ton of big-ass places use it
[05:42] <Daemon404> amazingly
[05:42] <Daemon404> its a big percentage of ALL web traffic now
[05:42] <tg__> I did the math
[05:42] <Daemon404> it's crazy
[05:42] <tg__> it was impossible for us to do it
[05:42] <tg__> we'd have to charge people like $20 a month for 50gb
[05:42] <tg__> lol
[05:42] <tg__> and the bandwidth
[05:42] <tg__> is wayyy over priced
[05:42] <Daemon404> 1% of all web traffic is aws
[05:42] <Daemon404> is what it is
[05:42] <tg__> 4% was megaupload ;0
[05:43] <tg__> did you see cogent's shares drop by 10% when MU got shut down?
[05:43] <tg__> lol
[05:43] <Daemon404> megaupload was several orders of magnitude bigger than anything
[05:43] <Compn> tg__ : you investing in facebook ipo ?
[05:43] <tg__> megaupload was only 25Pb of storage
[05:43] <tg__> I have a friend who has facebook stock
[05:43] <Daemon404> tg__, is that including mirrors?
[05:43] <tg__> he's about to be a millionaire shortly
[05:43] <Compn> nice
[05:43] <tg__> Daemon they dedupe lots of it
[05:43] <tg__> that 25Pb of storage
[05:43] <Daemon404> i mean they have mirrors in so many dcs
[05:43] <tg__> is likely 150-200Pb of user-space storage
[05:44] <tg__> no
[05:44] <tg__> most was in carpathia
[05:44] <tg__> in the US
[05:44] <Daemon404> that seems like a bad choice in hindsight...
[05:44] <tg__> yeah
[05:44] <tg__> I keep my stuff out of the US
[05:44] <tg__> even if it's kosher
[05:44] <tg__> canada has some very good routes
[05:44] <Compn> gotta have mirrors here tho
[05:44] <tg__> lol
[05:44] <tg__> and much better legislation
[05:45] <tg__> OVH is building  a 300,000 server datacenter there
[05:45] <Daemon404> lemme guess
[05:45] <Daemon404> quebec
[05:45] <tg__> and putting 100x100gbit lines in overseas
[05:45] <Compn> and less illegal moves by bribed govt ?
[05:45] <Compn> ehe
[05:45] <tg__> obv, ovh = french, quebec = french
[05:45] <tg__> iceland is actually putting a ring network in
[05:45] <tg__> that will land in montreal
[05:45] <tg__> 10tbit/s
[05:45] <Daemon404> and yet
[05:46] <Daemon404> consumer-level lines in canada
[05:46] <Daemon404> are utter, utter shit
[05:46] <tg__> what do you mean?
[05:46] <Daemon404> consumer-level isps
[05:46] <Daemon404> for people.
[05:46] <tg__> you're nuts lol
[05:46] <Daemon404> not business
[05:46] <tg__> videotron in quebec has 120Mbit/s internet
[05:46] <tg__> bell has fibe which is 40Mbit/s
[05:46] <Daemon404> thats ONLY there
[05:46] <Daemon404> try finding decent net in ontario
[05:46] <Daemon404> bell is shit
[05:46] <Daemon404> rogers is shit
[05:46] <tg__> get rogers LTE
[05:46] <Daemon404> shaw is ok.. kinda
[05:46] <tg__> in ontario
[05:46] <Daemon404> but still a rip
[05:46] <tg__> 25Mbit/s
[05:46] <ohsix> plus you only get 20 gigabytes
[05:46] <Daemon404> tg__, and 512 kbit/s up
[05:46] <Daemon404> on many packages
[05:47] <tg__> not on fibe
[05:47] <Compn> caps! hahahaha ;\
[05:47] <tg__> dude
[05:47] <tg__> my rogers LTE is 25mbit/s down and 20mbit/s up
[05:47] <tg__> lol
[05:47] <Daemon404> ive had rogers, ive had bell, ive had shaw
[05:47] <Daemon404> in ottawa, and other places
[05:47] <tg__> in ontario you're sort of pooched ;9
[05:47] <Daemon404> yes
[05:47] <tg__> videotron is a godsend though
[05:47] <Daemon404> anywhere but quebec
[05:47] <Daemon404> is shitty
[05:47] <Daemon404> :P
[05:47] <tg__> you know what I do
[05:47] <ohsix> my isp added a 100gig cap like 2 years ago, bumped up the speed and the cap to 150 a few months ago, having a hard time wasting those extra 50 gigs after having gotten used to the 100g cap
[05:47] <tg__> i have dedicated 1gbps in my office
[05:47] <tg__> dark fiber
[05:48] <ohsix> i download a lot of shit too. pretty soon i will have a month where i don't get to the cap :[
[05:48] <tg__> and i put a wifi dish on the top of the building
[05:48] <tg__> pointed to my house
[05:48] <Daemon404> ohsix, bell's caps go as low as 2 gb
[05:48] <tg__> lol
[05:48] <Compn> haha
[05:48] <Daemon404> it's awesome
[05:48] <Compn> tg__ : yeah, but do you have any time to watch it all? :P
[05:48] <tg__> naw never
[05:48] <tg__> I work too hard to enjoy it ;)
[05:48] <tg__> i barely keep up with game of thrones
[05:48] <ohsix> the cap at 100g wasn't too bad, but they charged 1.50$ usd per gig in overage
[05:48] <Compn> took me long enough to find a nice site that caters to my taste like CG
[05:48] <tg__> @daemon
[05:48] <tg__> if you want i'll give you a cloudload account
[05:48] <Compn> and theres KG but thats a little arty for me
[05:48] <Daemon404> tg__, i dont need one
[05:48] <tg__> you'll love it if you're on capped bandwidth
[05:48] <tg__> :O
[05:49] <tg__> no seeding
[05:49] <tg__> lol
[05:49] <Daemon404> scene encodes are fucking ugly as-is
[05:49] <Daemon404> and i own several boxes in europe
[05:49] <Compn> quality whore !
[05:49] <Compn> thats what fspp is for :P
[05:49] <Daemon404> where's adddetail()?
[05:49] <ohsix> funny story, i kept track of my usage before the 100g cap, i rarely used over 30, one month i did like 300, after the cap, 100g every month, even if i wasted it on some shit
[05:50] <ohsix> my net bandwidth usage was super super under the cap, before the cap
[05:50] <ohsix> i wonder if people would riot if they knew how much isps actually pay for transit vs. what they charge
[05:50] <Compn> ohsix : like people who never heard of piratebay in UK until it was banned there :D
[05:51] <ohsix> they're all dicks too, they still keep their overhead low and dick with customer service
[05:51] <Compn> yep, thats isp rule #1
[05:51] <Compn> dont upgrade service, but upgrade price in their favor
[05:51] <ohsix> valid business strategy: make your first customer contact experience after signup relatively horrible, and they won't do it again
[05:52] <Daemon404> ohsix, canadian isps are actively fighting against upgrading our infrastructure
[05:52] <Daemon404> its awesome
[05:52] <tg_> lol
[05:52] <tg_> yeah they are actively legislating against fair internet
[05:52] <drv> bonus points the deeper your support line phone tree is
[05:53] <ohsix> we pretty much subsidise any communication service here, because they need to provide certain services to rural customers and shit, which is really expensive for them to do
[05:53] <ohsix> but they treat it like extra income
[05:53] <Daemon404> ohsix, at an ipv6 summit
[05:53] <Daemon404> guess how many major canadian isps said they had plans to support it in the next 10 years
[05:54] <Daemon404> :3
[05:54] <ohsix> you wanna know something fun, every phone service i've ever had here had some tax offset charge that they charge just because they can, they call it some regulatory compliance fund or something, and all you have to do is ask for them to not bill you for it, and they stop
[05:55] <ohsix> it's only like 3-6$ per bill, but man, if you live somewhere for like 10 years that's a good chunk of change to some people you don't really like
[05:55] <ohsix> i could adopt a chimp or buy food for kids in africa with that
[05:55] <Compn> seriously need to start encrypting everything
[05:55] <Compn> think the NSA really is saving all internet data ever ?
[05:55] <ohsix> putting things on a bill just to wait for you to ask them to remove it is pretty foul
[05:56] <Compn> i wonder if that works here in usa
[05:56] <Compn> they put those fees on a lot
[05:56] <Daemon404> Compn, i am in usa now btw
[05:56] <Daemon404> D:
[05:56] <Compn> Daemon404 : enjoying the weather ?
[05:56] <ohsix> theres like 11 items on a typical phone bill, one of the taxes are legit; but they hide the ones they just take among them
[05:56] <Daemon404> thunderstorms every day?
[05:56] <Compn> Daemon404 : did you get intern at vimeo then ?
[05:56] <Daemon404> :V
[05:57] <tg_> lol
[05:57] <Daemon404> well it's an "internship"
[05:57] <tg_> ipv5 in canada
[05:57] <tg_> v6 *
[05:57] <Daemon404> not so much training as it is a normal job
[05:57] <Daemon404> called an internship to make my uni shut up
[05:57] <Compn> oh ipv6 , thats going to be fun
[05:57] <Compn> ah
[05:58] <Compn> tg_ : so did upping max_threads speed anything up
[05:58] <Compn> ?
[05:59] <tg_> haven't tried
[05:59] <tg_> chatting with jason now
[05:59] <Compn> should be able to build ffmpeg in 30 secs 
[05:59] <Compn> ah
[05:59] <Compn> not important then, good luck
[05:59] <tg_> ffmpeg build takes about 9 seconds
[05:59] <tg_> lol
[05:59] <tg_> haters
[05:59] <tg_> i had to tweak gcc
[06:00] <Daemon404> now try it with a self-hosted clang :P
[06:00] <ohsix> qemu -smp 256 go vroom
[06:00] <tg_> hey
[06:00] <tg_> does ffmpeg accept native x264 option names now?
[06:00] <tg_> lol clang
[06:00] <Compn> dont remember. i remember jason was pissed about it tho
[06:01] <Daemon404> tg_, clang tends to generate faster code for me :P
[06:01] <Daemon404> and compile much faster
[06:01] <Compn> 'x264opts options'
[06:01] <Compn> Allow to set any x264 option, see x264 --fullhelp for a list. 
[06:01] <Compn> options is a list of key=value couples separated by ":".
[06:01] <Compn> tg_ : ^^ yes
[06:02] <Compn> For example to specify libx264 encoding options with ffmpeg:  	ffmpeg -i foo.mpg -vcodec libx264 -x264opts keyint=123:min-keyint=20 -an out.mkv
[06:02] <Compn> http://ffmpeg.org/ffmpeg.html#libx264
[06:02] <Compn> mr user question in devel channel
[06:02] <tg_> cool
[06:02] <tg_> ok cool
[06:03] <tg_> I know it irks you guys to get user questions in devel :(
[06:03] <tg_> but it's high end user question
[06:03] <Compn> ehe
[06:03] <tg_> lol
[06:03] <Daemon404> the versache of users?
[06:03] <Compn> tg_ : jason is/was doing some neat stuff with gaikai thing
[06:03] <Daemon404> versace*
[06:03] <Compn> realtime cloud video game using x264 
[06:03] <Daemon404> Compn, i dot think ill ever like cloud gaming :P
[06:03] <Daemon404> dont*
[06:04] <Compn> playing games that you cant run on your p4 over the internet
[06:04] <Daemon404> i generally console game nowadays
[06:04] <Compn> Daemon404 : you wallhax too much
[06:04] <Daemon404> pcgamingisdead.gif
[06:05] <Compn> enjoying the ps3 ruling your game life then ?
[06:05] <Compn> cant play game. .. must update something for 30 minutes
[06:06] <ohsix> i'm sort of scared to enjoy a video game on a new console, i won't be able to revisit it in 20 years like i can with my nes
[06:06] <ohsix> the games being what they are help a lot there, none of them are super mario 3 or something :>
[06:07] <Daemon404> [00:06] < ohsix> i'm sort of scared to enjoy a video game on a new console, i won't be able to revisit it in 20 years like i can with my nes <-- why not?
[06:17] <tg_> cool
[06:18] <tg_> pc gaming is dead
[06:18] <tg_> until d3 comes out
[06:24] <Daemon404> i shall buy
[06:24] <Daemon404> and then pirate
[06:24] <Daemon404> d3
[06:31] <tg_> good luck pirating it
[06:31] <tg_> unless of course you want to code your own server ;D
[06:31] <tg_> in which case I would have much respect for you
[06:32] <tg_> (not that I don't already, obv.)
[06:32] <tg_> ahem
[06:32] <tg_> i'm sure the guys who cracked the first world of warcraft private server
[06:32] <Daemon404> we'll see
[06:32] <tg_> will do it soon enough
[06:32] <tg_> knowing blizzard, they'll probably use the same protocol
[06:33] <Daemon404> im one of those people who will actually play it on e.g. a plane
[06:33] <Daemon404> or train
[06:34] <tg_> yeah
[06:34] <tg_> d2 is great for long flights
[06:34] <tg_> and simcity 4
[06:34] <tg_> I almost lit my nuts on fire playing sc4 on an i7 dell studio from toronto to tokyo
[06:35] <tg_> it started smelling like burning plastic
[06:38] <tg_>  [libx264 @ 0x29ef320] bad option '-rc-lookahead': '20' 
[06:38] <tg_> ;\
[06:40] <Compn> mmm
[06:41] <Compn> no idea
[06:42] <Compn> -x264opts rc-lookahead=20 ?
[06:47] <tg_> yeah 
[06:47] <tg_> my bad had an extra - in it
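A side note on the fix above: keys passed through -x264opts use x264's own option names with no leading dash, joined by ':'. A minimal sketch along the lines of the documentation Compn quoted (input and output file names are placeholders):

    ffmpeg -i input.mpg -vcodec libx264 -x264opts rc-lookahead=20:keyint=250 -an output.mkv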
[06:54] <tg_> ok let me try the max_threads
[06:59] Action: Compn goes afk, bbl
[06:59] <Compn> night tg_
[07:00] <tg_> night
[07:00] <tg_> thx a ton
[07:00] <tg_> let me know where to send bitcoins ;D
[07:19] <tg_> shit I lost my chat history
[07:19] <tg_> Daemon what was the file that Compn suggested changing
[07:20] <tg_> libavcodec/mpegvideo.h:#define MAX_THREADS 16  ?
[07:27] <tg_> tried setting it to 32, it's still only using 16
[10:06] <burek> tg_, http://ffmpeg.gusari.org/irclogs/
[11:24] <burek> what git command should I use to get this exact ffserver/ffmpeg version: ffserver version N-37208-g01fcbdf
[11:25] <Tjoppen> git checkout 01fcbdf
[11:33] <burek> thanks!
[11:33] <burek> :)
[11:34] <boys21> hi
[11:34] <boys21> i would like to add svc decoder functionality to ffmpeg
[11:34] <boys21> writing a module from scratch is very difficult for my programming level
[11:35] <boys21> hence, I want to use the existing decoders like opensvc decoder written in C
[11:35] <boys21> to use in ffmpeg
[11:35] <boys21> can someone please help me with how to do this?
[11:35] <Tjoppen> look at some existing library wrapper like libx264.c
[11:35] <boys21> k
[11:36] <burek> Tjoppen, error: pathspec '01fcbdf' did not match any file(s) known to git.
[11:36] <burek> :S
[11:38] <Tjoppen> are you using git://source.ffmpeg.org/ffmpeg.git ?
[11:38] <burek> yes.. I'm doing now git clone again
[11:38] <burek> to see if there was some problem
[11:38] <Tjoppen> commit 01fcbdf9cedcf14418b5886205261e532167f949
[11:38] <Tjoppen> Merge: a8ae00b 9adf25c
[11:38] <Tjoppen> Author: Michael Niedermayer <michaelni at gmx.at>
[11:38] <Tjoppen> Date:   Fri Jan 27 01:42:53 2012 +0100
[11:49] <burek> Tjoppen, I did it, thank you :)
[11:49] <burek> if I want to get back to the real HEAD what do I do? git reset?
[11:49] <burek> or something
[11:50] <Tjoppen> git checkout master
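For reference, the whole round trip discussed above, sketched as shell commands (the hash is the one Tjoppen quoted; checking it out leaves the tree in a detached-HEAD state, which is expected):

    git clone git://source.ffmpeg.org/ffmpeg.git
    cd ffmpeg
    git checkout 01fcbdf      # the commit behind "ffserver version N-37208-g01fcbdf"
    # ... build / test ...
    git checkout master       # return to the normal branch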
[11:51] <burek> thanks a lot! :) :beer: :)
[11:53] <Tjoppen> :poolparty:
[11:55] <boys21> hi i just had a look at it, but couldnt understand that ;(
[11:57] <boys21> do i need to write a file that can decode svc streams?
[11:57] <burek> boys21, if a decoder exists, then all that is needed is a wrapper
[11:57] <burek> some functions in ffmpeg that will call that encoder/decoder
[11:57] <burek> you can ask on trac
[11:57] <burek> for a feature request
[11:57] <burek> or if you are in a hurry
[11:58] <burek> you can contact some developers directly
[11:58] Action: burek slaps fflogger 
[11:58] <burek> check #ffmpeg chan for links
[11:58] <boys21> ok already a svc decoder exists
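To make the wrapper idea more concrete: a library wrapper is just an AVCodec whose init/decode/close callbacks hand the bitstream to the external decoder. The following is only a rough sketch of that shape, modeled on the era's wrappers such as libx264.c; the OpenSVC calls are left as comments, and the file name, context fields and codec id are hypothetical placeholders, not an actual integration.

    /* libavcodec/libopensvcdec.c -- hypothetical skeleton of an external-decoder wrapper */
    #include "avcodec.h"

    typedef struct LibOpenSVCContext {
        void *svc_handle;   /* handle for the external decoder (placeholder) */
    } LibOpenSVCContext;

    static int libopensvc_init(AVCodecContext *avctx)
    {
        LibOpenSVCContext *ctx = avctx->priv_data;
        /* open the external SVC decoder here and keep it in ctx->svc_handle */
        ctx->svc_handle = NULL;
        avctx->pix_fmt  = PIX_FMT_YUV420P;
        return 0;
    }

    static int libopensvc_decode(AVCodecContext *avctx, void *data,
                                 int *data_size, AVPacket *avpkt)
    {
        AVFrame *frame = data;
        /* feed avpkt->data / avpkt->size to the external decoder; when it
         * returns a picture, fill frame->data/linesize and report it by
         * setting *data_size = sizeof(AVFrame) */
        *data_size = 0;
        (void)frame;
        return avpkt->size;   /* bytes consumed */
    }

    static int libopensvc_close(AVCodecContext *avctx)
    {
        /* free the external decoder here */
        return 0;
    }

    AVCodec ff_libopensvc_decoder = {
        .name           = "libopensvc",
        .type           = AVMEDIA_TYPE_VIDEO,
        .id             = CODEC_ID_H264,          /* placeholder: SVC extends H.264 */
        .priv_data_size = sizeof(LibOpenSVCContext),
        .init           = libopensvc_init,
        .decode         = libopensvc_decode,
        .close          = libopensvc_close,
    };

On top of this, the decoder would still need to be registered in allcodecs.c, added to the Makefile, and gated behind a configure switch, the same way the existing lib* wrappers are wired up.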
[12:01] <CIA-122> ffmpeg: Paul B Mahol master * rb7159877b1 ffmpeg/libavcodec/shorten.c: shorten: unsigned 8bit support
[12:07] <boys21> burek: i just posted in trac!
[12:07] <boys21> thank you!
[12:08] <burek> :beer: :)
[12:11] <boys21> sure ;)
[12:12] <boys21> how about contacting a developer?
[12:13] <boys21> i dont know, who works on this.
[12:13] <burek> most of the people in here
[12:13] <boys21> oh
[12:13] <burek> or if you want the gurus
[12:13] <burek> checkout the consulting page
[12:13] <boys21> ok
[12:44] <CIA-122> ffmpeg: Paul B Mahol master * r4b70bba57e ffmpeg/libavcodec/zerocodec.c:
[12:44] <CIA-122> ffmpeg: zerocodec: check if there is previous frame
[12:44] <CIA-122> ffmpeg: Fixes crash in bug #1219.
[12:44] <CIA-122> ffmpeg: Signed-off-by: Paul B Mahol <onemda at gmail.com>
[14:41] <j-b> spam started
[14:49] <CIA-122> ffmpeg: Michael Niedermayer master * rbabf2a3467 ffmpeg/libavformat/seek-test.c:
[14:49] <CIA-122> ffmpeg: seek-test: support manually forcing a seek to a specific position
[14:49] <CIA-122> ffmpeg: Signed-off-by: Michael Niedermayer <michaelni at gmx.at>
[14:49] <CIA-122> ffmpeg: Michael Niedermayer master * r5f9f78dc9b ffmpeg/libavformat/oggdec.c:
[14:49] <CIA-122> ffmpeg: oggdec: pass avformat context to ogg_reset()
[14:49] <CIA-122> ffmpeg: Signed-off-by: Michael Niedermayer <michaelni at gmx.at>
[14:49] <CIA-122> ffmpeg: Michael Niedermayer master * r6fd478062c ffmpeg/libavformat/oggdec.c:
[14:49] <CIA-122> ffmpeg: oggdec: print error on unsupported versions
[14:49] <CIA-122> ffmpeg: Signed-off-by: Michael Niedermayer <michaelni at gmx.at>
[14:49] <CIA-122> ffmpeg: Michael Niedermayer master * ra6bb09fc1a ffmpeg/libavformat/oggdec.c:
[14:49] <CIA-122> ffmpeg: oggdec: print error on failure to create streams
[14:49] <CIA-122> ffmpeg: Signed-off-by: Michael Niedermayer <michaelni at gmx.at>
[14:49] <CIA-122> ffmpeg: Michael Niedermayer master * r49d935b5d2 ffmpeg/libavformat/seek-test.c:
[14:49] <CIA-122> ffmpeg: seek-test: support printing multiple packets
[14:49] <CIA-122> ffmpeg: Signed-off-by: Michael Niedermayer <michaelni at gmx.at>
[14:49] <CIA-122> ffmpeg: Michael Niedermayer master * r231d32c8d7 ffmpeg/libavformat/oggparsetheora.c:
[14:49] <CIA-122> ffmpeg: oggtheora: Fix initial pts
[14:49] <CIA-122> ffmpeg: code based on the solution in vorbis
[14:49] <CIA-122> ffmpeg: Signed-off-by: Michael Niedermayer <michaelni at gmx.at>
[14:49] <CIA-122> ffmpeg: Michael Niedermayer master * r96fb233e64 ffmpeg/libavformat/oggdec.c:
[14:59] <Tjoppen> michaelni: how do I figure out the shifted luma and chroma offsets in swscale.c? by experimentation I've found that bgra input results in 1024 and 8192 respectively, but assuming so seems flaky
[15:00] <Tjoppen> 16 and 128 shifted left six bits. I need it for alpha multiplication
[15:07] <Tjoppen> I probably don't need to care though, since JPEG can't do alpha
[15:31] <michaelni> Tjoppen, srcRange / dstRange specify if its the jpeg style offset or not
[15:31] <michaelni> maybe also grep for them to see how they are used /  their meaning
[15:32] <Tjoppen> that's what I'm doing now: zero = c->srcRange ? 0 : 8192
[15:33] <Tjoppen> mostly what I'm thinking is whether these values are always 1024 and 8192
[15:35] <Tjoppen> RGB2YUV_SHIFT = 15 in input.c seems related, but I'm not quite sure how that translates to (1 << 6)
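A small sketch of the arithmetic, for the record: this just restates the values observed above, with the limited-range luma/chroma offsets scaled by the 6 extra fractional bits Tjoppen mentions; how exactly that relates to RGB2YUV_SHIFT = 15 in input.c is left open here.

    /* limited-range offsets at 6 fractional bits of intermediate precision */
    enum {
        LUMA_ZERO   =  16 << 6,   /* = 1024, matches what Tjoppen measured for luma   */
        CHROMA_ZERO = 128 << 6    /* = 8192, matches what Tjoppen measured for chroma */
    };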
[15:50] <CIA-122> ffmpeg: Michael Niedermayer master * r63eb01d9c1 ffmpeg/libavformat/oggparsevorbis.c:
[15:50] <CIA-122> ffmpeg: oggvorbis: Try to fix pts off by 1 issue.
[15:50] <CIA-122> ffmpeg: Signed-off-by: Michael Niedermayer <michaelni at gmx.at>
[15:57] <Tjoppen> is PIX_FMT_GRAY8A an interleaved format?
[16:17] <michaelni> Tjoppen, gray8a should be packed but i remember there was code handling it as packed and code handling it as planar :)
[16:18] <nevcairiel> thats just how swscale rolls, inconsistent to the end :)
[16:18] <Tjoppen> nm, step_minus1 = 1. I forgot the "minus 1" part
[16:18] <Tjoppen> so 2 bytes per sample, with Y in byte 0 and A in byte 1. just as expected
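So iterating over a GRAY8A row treats it like any other packed two-component format; a minimal sketch of what that layout means in practice (the row pointer and width are placeholders):

    #include <stdint.h>

    /* packed gray+alpha: 2 bytes per sample, luma in byte 0, alpha in byte 1 */
    static void walk_gray8a_row(const uint8_t *row, int width)
    {
        for (int x = 0; x < width; x++) {
            uint8_t y = row[2 * x];       /* gray/luma */
            uint8_t a = row[2 * x + 1];   /* alpha     */
            (void)y; (void)a;             /* ... do something with the sample ... */
        }
    }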
[16:21] <michaelni> nevcairiel, it would be quite easy to fix all this if people would cooperate, but half the devels work on libav and libav devels dont want to admit they have no clue about the code at all, this doesnt help quality much ...
[16:22] <michaelni> also AFAIK gray8a should be already fine and considered packed 
[16:22] <michaelni> everywhere in ffmpeg
[16:22] <Daemon404> michaelni, how come i dont see any email/patch review anywhere for 4b70bba57ec9d61282e8b2b427d738dd44415652
[16:24] <michaelni> Daemon404, looks like it was committed without sending a patch/mail
[16:25] <michaelni> is there a problem with the change ?
[16:26] <Daemon404> im reading it now
[16:26] <Daemon404> but i dislike this practice.
[16:28] <Daemon404> i'd have done the check slightly differently, but there is nothing wrong with it from a code standpoint.
[16:28] <nevcairiel> for the record, last time i tried the zerocodec decoder, it didnt work all that well :d
[16:28] <michaelni> about the commit without patch, i suggest you talk with durandal
[16:28] <Daemon404> nevcairiel, how did you get your sample :P
[16:28] <nevcairiel> i forgot
[16:28] <Daemon404> it's tied to the keyframe flag in its container
[16:28] <Daemon404> so dd'ing will break stuff
[16:29] <nevcairiel> i think i downloaded a file either you or someone else posted during development
[16:29] <nevcairiel> but it kept complaining that the avframe still had allocated pointers when it reached the decoder, or something like that
[16:29] Action: michaelni is happy if people just commit fixes to my code without patches as long as they are correct
[16:30] <nevcairiel> i may not have set the key frame flag on the packet =p
[16:30] <Daemon404> :P
[16:31] <Daemon404> corepng has the same annoyance
[16:37] <Compn> tg_ / tg__ : if you have extra bitcoins, they can be deposited to me here 1Hait21gG9Ldzv5SQL7a78qs2xhh4UTR1c
[16:37] <Compn> ehe
[16:38] <Compn> i once sold a bitcoin for $29 , ahh the bubble days...
[17:20] <CIA-122> ffmpeg: Peter Holik master * r2ee6dca3b8 ffmpeg/libavcodec/ (Makefile allcodecs.c png_parser.c):
[17:20] <CIA-122> ffmpeg: png_parser
[17:20] <CIA-122> ffmpeg: This adds support for png image2pipe streaming
[17:20] <CIA-122> ffmpeg: Update to latest git by: Eugene Ware <eugene at noblesamurai.com>
[17:20] <CIA-122> ffmpeg: Signed-off-by: Michael Niedermayer <michaelni at gmx.at>
[18:50] <CIA-122> ffmpeg: Clément Bœsch master * r19bc2320f3 ffmpeg/ffpresets/ (5 files):
[18:50] <CIA-122> ffmpeg: Remove old ffpresets.
[18:50] <CIA-122> ffmpeg: They are now replaced with presets/ directory. WIN32 still seems to use
[18:50] <CIA-122> ffmpeg: a ffpresets/ directory, but it doesn't look like to be deployed at
[18:50] <CIA-122> ffmpeg: install time.
[18:50] <CIA-122> ffmpeg: Clément Bœsch master * rec271c9579 ffmpeg/presets/ (7 files):
[18:50] <CIA-122> ffmpeg: presets: specify the codecs.
[18:50] <CIA-122> ffmpeg: This allows the following usages:
[18:50] <CIA-122> ffmpeg: FFMPEG_DATADIR=presets ./ffmpeg -f lavfi -i testsrc=d=5 -vcodec libx264 -vpre ipod640 -f null -
[18:50] <CIA-122> ffmpeg: FFMPEG_DATADIR=presets ./ffmpeg -f lavfi -i testsrc=d=5 -vpre libx264-ipod640 -f null -
[18:50] <CIA-122> ffmpeg: The second example was broken even if documented.
[18:50] <CIA-122> ffmpeg: Clément Bœsch master * r49df97b282 ffmpeg/ffmpeg.c:
[18:50] <CIA-122> ffmpeg: ffmpeg: stronger ffpresets parsing.
[18:50] <CIA-122> ffmpeg: This fixes at least issues with empty lines, and also allows CRLF lines
[18:50] <CIA-122> ffmpeg: (in case a user makes its own preset on a MS plateform).
[18:51] <CIA-122> ffmpeg: Clément Bœsch master * r3c1d52d30b ffmpeg/ (5 files in 2 dirs): Fix a few @file doxy inconsistencies.
[19:22] <CIA-122> ffmpeg: Clément Bœsch master * r9e6a1c8981 ffmpeg/ffmpeg.c: ffmpeg: fix indent in term_init().
[19:43] <Compn> blargh
[19:43] <Compn> i'd love to go to paris for videolan dev days
[19:43] <Compn> except flying is upsetting now :(
[19:44] <Compn> rape or radiated to get thru TSA crap
[19:45] <gnafu> For some, that's a plus.
[19:45] <gnafu> You could always travel ON A BOAT.
[19:46] <Compn> i could, i have done boat travel in the past
[19:47] <gnafu> Just avoid icebergs, and you'll be all set.
[19:47] Action: gnafu says while one of his nearby coworkers is playing [http://en.wikipedia.org/wiki/My_Heart_Will_Go_On My Heart Will Go On].
[19:47] <Compn> haha
[19:48] <gnafu> She has it in her very short random playlist, so I catch bits and pieces of it often.
[20:48] <tg_> Compn - I tried increasing max_threads in libavcodec/mpegvideo.h:#define MAX_THREADS  to 32
[20:48] <tg_> didn't increase decoding concurrency
[20:48] <tg_> still only spawning 16 threads for decoding
[20:48] <tg_> perhaps this is somewhere else?
[20:52] <tg_> and still only uses 16 for encoding when using internal mpeg
[21:17] <Compn> you might have to increase max_buffer 
[21:17] <Compn> in that file
[21:17] <Compn> but really i dont know why its limited at 16
[21:17] <Compn> #define MPEG_BUF_SIZE (16 * 1024)
[21:17] <Compn> change that to 32 * 1024
[21:18] <Compn> maybe michaelni knows why its only set at 16 threads decode or encode
[21:34] <Compn> that might be it
[21:34] <Compn> tg_ : look at libavcodec/pthread.c
[21:35] <Compn> probably 
[21:35] <Compn> #define MAX_BUFFERS (32+1)
[21:35] <Compn> and
[21:35] <Compn> #define MAX_AUTO_THREADS 16
[21:39] <Compn> wonder if your ffmpeg is built against pthreads too
[21:39] <Compn> probably is
[21:39] Action: Compn guesses his way around the code
[21:40] <Compn> since we have frame threading now, the limit for slice threading seems useless. especially because h264 slice is not normal
[21:40] <Compn> Daemon404 : ^^
[21:40] <Compn> Daemon404 : you were wondering if the 16 threads limit was on purpose...
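To recap the thread-cap trail for tg_, the constants quoted above sit in two different places (whether bumping them alone is sufficient is not verified here):

    /* libavcodec/mpegvideo.h -- caps slice threads for the mpegvideo-family codecs */
    #define MAX_THREADS 16

    /* libavcodec/pthread.c -- caps the automatically chosen thread count,
       plus the frame-threading buffer pool size */
    #define MAX_AUTO_THREADS 16
    #define MAX_BUFFERS (32+1)

If tg_ is relying on thread auto-detection, MAX_AUTO_THREADS alone would explain still seeing 16 threads even after raising MAX_THREADS; forcing an explicit count such as -threads 32 on the command line would help tell the two caps apart.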
[21:42] <iive> Compn: i wonder if both could be used at the same time :)
[21:43] <Compn> you mean sliced and frame ?
[21:43] <Compn> someone asked that before but i forgot answer. and like i said, there arent that many files with h264 slices
[21:44] <Daemon404> Compn, except all blurays?
[21:44] <Compn> oh did not know that
[21:45] <nevcairiel> bluray spec says 4 slices i think
[21:45] <Compn> if i come across a large collection of 40gb bdrips i'll be sure to remember that :P
[22:28] <ubitux> http://www.w3.org/TR/2010/REC-ttaf1-dfxp-20101118/ is this really in use in the wild?
[22:28] <Daemon404> ive never seen that or ttxt be used irl
[22:29] <ubitux> that's reassuring.
[22:30] <Compn> thats not the same as mp4 timed text is it ?
[22:30] <Compn> no i guess not
[23:00] <burek> I'd like to write the web interface for ffserver, just like vlc has got for its VLM items. But before I start, can new streams be added to ffserver dynamically, without restarting ffserver and reloading conf file, stopping other streams?
[23:07] <Compn> burek : libav is currently rewriting ffserver , so you may want to wait
[23:08] <Compn> and you probably want to ask them to get that feature if its not already in
[23:08] <Compn> lu_zero is ffserver maintainer and mentor of ffserver rewrite
[23:17] <burek> I see
[23:18] <burek> thanks a lot for the info
[23:22] <RobertNagy> problem, lavfi inserts yuv420p between bgra source and yadif, instead of yuva420p
[23:22] <RobertNagy> how do I work around that?
[23:23] Action: Daemon404 wonders where an interlaced bgra source even comes from
[23:23] <RobertNagy> from a galaxy far far away
[23:23] <RobertNagy> the stream is just passed through the yadif filter
[23:23] <Daemon404> it sounds like someone is Doing It Wrong :P
[23:24] <RobertNagy> it doesn't actually do anything
[23:24] <Daemon404> yadif only supports 420 iirc
[23:24] <Daemon404> tho it should be converted by swscale
[23:24] <RobertNagy> hm?
[23:24] <RobertNagy> I'm not following your comment
[23:25] <Daemon404> what exactly is the issue youre encountering
[23:25] <Compn> RobertNagy : yadif doesnt support bgra , so it must be scaled first ...
[23:25] <Compn> iirc
[23:25] <RobertNagy> the bgra stream is not interlaced
[23:25] <Compn> well then why are you deinterlacing it with yadif ?
[23:25] <Daemon404> ^
[23:25] Action: Compn asks dumb question, expects dumb answer
[23:26] <RobertNagy> because I haven't gotten far enough to insert the yadif filter dynamically
[23:26] <RobertNagy> step by step :)
[23:26] <Daemon404> why are you using yadif -at all-
[23:26] <Daemon404> if it isnt interlaced
[23:26] <Daemon404> >_>
[23:26] <RobertNagy> because it can be interlaced
[23:26] <Daemon404> what?
[23:26] <RobertNagy> can't know until first frame is decoded
[23:26] <Compn> oh
[23:27] <RobertNagy> and since I have a video source filter
[23:27] <Daemon404> i dont think swscale even supports interlaced rgb colorspaces
[23:27] <Compn> RobertNagy : in mplayer filters you can set the video format to yuva420p , so there should be a way
[23:27] <RobertNagy> I must create the graph in order to get a frame
[23:27] <Compn> RobertNagy : maybe -vf format ? or -vf mp=format
[23:27] <Compn> or just -pix_fmt
[23:27] <Compn> probably -pix_fmt
[23:27] <RobertNagy> well then I would force all streams to be that, I'm using lavfi in my application
[23:28] <RobertNagy> not ffmpeg
[23:28] <RobertNagy> I don't want to force the user to select the correct format every time
[23:28] <Compn> well thats basically what you just asked 
[23:28] <Compn> heh
[23:28] <Compn> the answer to your question is : let the user deal with bgr>yuv conversion
[23:28] <RobertNagy> what I want is for the auto-inserted swscale filter to choose the correct format
[23:29] <Compn> you wont be able to autodetect all of this crap
[23:29] <RobertNagy> ...
[23:30] <Compn> but yeah, probably by changing the order of accepted bgr > yuv formats in swscale
[23:30] <Compn> if you havent guessed yet, i dont know what i'm talking about most of the time.
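One workaround in the direction Compn is pointing at is to pin the pixel format explicitly ahead of yadif with the format filter, so the auto-inserted scaler has no choice to make. A hedged sketch for the command-line case (the same filter can be inserted programmatically into a lavfi graph; this assumes swscale can do the bgra to yuva420p conversion and that yadif leaves the alpha plane alone, neither of which is verified here):

    ffmpeg -i input.mov -vf "format=yuva420p,yadif" -f null -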
[23:30] Action: Compn kicks Daemon404 , help me out over here
[23:30] <RobertNagy> heh
[23:30] <Daemon404> Compn, its beer time at work
[23:31] <Compn> ehe
[23:31] <Compn> Daemon404 : i've just been reading the beeradvocate reviews of budweiser chelada ... :P
[23:31] <Compn> get your cow-workers to drink one
[23:31] <Daemon404> american beer sucks
[23:31] <Daemon404> end of story
[23:32] <Compn> yes but this one sucks extra hard
[23:32] <Compn> because its bud + clamato
[23:32] <Compn> clamato being a tomato drink with clam juice...
[23:33] <Daemon404> Compn, thats even more nasty than that smirnoff + beer mix
[23:35] <Compn> good prank beer
[23:40] <cbsrobot_> which fate samples are yuv4XXp9 ?
[23:42] <Compn> should be able to make one with -pix_fmt :P
[23:42] <cbsrobot_> well - ok then
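A hedged sketch of what Compn suggests, in case someone wants to generate such a sample: the encoder and container are only illustrative (ffv1 is one encoder that, in builds of this era, is expected to accept 9-bit planar input; this hasn't been checked against the fate suite), and the testsrc source is the same one the preset commit messages above use.

    ffmpeg -f lavfi -i testsrc=d=1 -pix_fmt yuv420p9le -c:v ffv1 yuv420p9.nut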
[23:46] <CIA-122> ffmpeg: Nicolas George master * r7bac2a78c2 ffmpeg/libavfilter/ (avcodec.h src_buffer.c):
[23:46] <CIA-122> ffmpeg: src_buffer: implement av_buffersrc_add_frame.
[23:46] <CIA-122> ffmpeg: It supersedes av_vsrc_buffer_add_frame and handles
[23:46] <CIA-122> ffmpeg: both audio and video.
[23:46] <CIA-122> ffmpeg: Nicolas George master * ra96cd73ff2 ffmpeg/libavfilter/src_buffer.c: src_buffer: implement audio buffer copy.
[23:46] <CIA-122> ffmpeg: Nicolas George master * r32094285ad ffmpeg/libavfilter/ (avcodec.c avcodec.h): lavfi: implement avfilter_get_audio_buffer_ref_from_frame.
[23:46] <CIA-122> ffmpeg: Nicolas George master * rd8407bba0e ffmpeg/libavfilter/avcodec.c: lavfi/avcodec: implement audio copy_frame_prop.
[23:46] <CIA-122> ffmpeg: Nicolas George master * r9f357e2bcd ffmpeg/doc/examples/filtering_audio.c: examples/filtering_audio: use av_buffersrc_add_frame.
[00:00] --- Sat May  5 2012


More information about the Ffmpeg-devel-irc mailing list