[MPlayer-users] hd video playback problems over wireless network

h9wzuy45tm at snkmail.com
Wed Oct 27 15:17:05 CEST 2010


On Wed, Oct 27, 2010 at 2:43 AM, Reimar Döffinger
Reimar.Doeffinger-at-gmx.de
<u1jjemcfwt at sneakemail.com> wrote:
> On Tue, Oct 26, 2010 at 07:05:41AM +0800, h9wzuy45tm at snkmail.com wrote:
>> Alternatively, something which pre-fetches from
>> file streams to fill the cache buffer might do the trick too.
>
> Yes, and that "something" is called "operating system".
> It is somewhere between annoying and idiotic to implement this in
> every single application, and from my side, unless someone knows a
> trivial, clean, certainly side-effect-free way, I would at the
> minimum need evidence that more than one operating system has
> this issue.

Ok, I'll bite.  This is the setup.

[ WinXP VM ]---,
[ Ubuntu VM ]   |--- [ Router VM ] ---- [ SMB file server ]
[ OSX Host  ]---'

All of the above are wired fast ethernet - either physical
connections or emulated ones.
Everything on the left is in the 10.0.0.x network; the file server is
on the 192.168.x network.  The router sits in between, runs Linux, and
uses netem to emulate the delay of a high bandwidth, high latency
connection (tc qdisc add dev eth0 root netem delay 30ms).  But because
all the virtualisation runs on the same machine, the actual latency to
the file server is only around 2 ms even without any added delay.
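For reference, a minimal sketch of the netem setup on the router
(assuming eth0 is the interface facing the file server; must be run as
root):

```shell
# Add 30 ms of delay to traffic leaving eth0.
tc qdisc add dev eth0 root netem delay 30ms
# Inspect the active qdisc.
tc qdisc show dev eth0
# Remove the delay again when done.
tc qdisc del dev eth0 root
```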

I used dd to determine the achievable throughput, because we're not
investigating VirtualBox's emulated graphics subsystem but whether
plain read() syscalls become a performance bottleneck in high
bandwidth, high latency environments.

The results.  For brevity,
(a) - average round trip over 10 pings (ping -c 10), with no added latency.
(b) - dd if=file1.mkv of=/dev/null bs=512
(c) - dd if=file2.mkv of=/dev/null bs=2048
(d) - dd if=file3.mkv of=/dev/null bs=131072
(e) - average round trip over 10 pings, with added latency (tc qdisc
add dev eth0 root netem delay 30ms; the measured RTT is roughly double
30ms because the delay applies in both directions)
(f) - dd if=file4.mkv of=/dev/null bs=512
(g) - dd if=file5.mkv of=/dev/null bs=2048
(h) - dd if=file6.mkv of=/dev/null bs=131072
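The dd runs above boil down to a loop like the following sketch (here
reading a locally generated 16 MiB file, /tmp/ddbench.bin, instead of
a file on the mounted share):

```shell
# Generate a test file, then read it back at each block size used above.
dd if=/dev/zero of=/tmp/ddbench.bin bs=1M count=16 2>/dev/null
for bs in 512 2048 131072; do
    echo "bs=$bs:"
    # dd prints its throughput summary on stderr; keep only that line.
    dd if=/tmp/ddbench.bin of=/dev/null bs=$bs 2>&1 | tail -n 1
done
```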


1. WinXP VM. network share mapped to h:\
(a) 3.4ms
(b) 216 KB/s
(c) 670 KB/s
(d) 6.5 MB/s
(e) 64.0ms
(f) 7.6 KB/s
(g) 30 KB/s
(h) 249 KB/s

2. Ubuntu VM, share mounted with smbfs (smbmount //fileserver/share /mnt)
(a) 3.9ms
(b) 5.0 MB/s
(c) 4.7 MB/s
(d) 4.9 MB/s
(e) 66.9ms
(f) 228 KB/s
(g) 234 KB/s
(h) 236 KB/s

2b. same ubuntu VM, but using gvfs-fuse-daemon to mount the share
(b) 9.3 MB/s
(c) 9.6 MB/s
(d) 8.8 MB/s
(f) 811 KB/s
(g) 811 KB/s
(h) 843 KB/s

3. Mac OSX host. smbfs
(a) 5.6ms
(b) 464 KB/s
(c) 1.1 MB/s
(d) 15 MB/s (not sure how this happens on a supposedly FastE
interface. maybe the virtualiser cheats)
(e) 62.8ms
(f) 7.8 KB/s
(g) 31.2 KB/s
(h) 584 KB/s

Conclusions.
- f/g/h << b/c/d: latency plays a big role.
- Generally, d >= c >= b and h >= g >= f: increasing the buffer size
either helps significantly or at least doesn't worsen throughput
appreciably.
- WinXP and Mac OSX both benefit significantly from larger buffer sizes.
- Ubuntu (smbfs) doesn't benefit much.
- With a large buffer size, XP and OSX can approach or exceed Ubuntu's
smbfs performance.
- There's clearly some other bottleneck in smbfs; the gvfs-fuse
numbers in 2b suggest that switching from smbfs to fuse helps.

So, another major OS (Windows) has the same issue.

> You might be able to try other variants like MacFUSE + smbfs or similar
> in case they work better.

I'm almost embarrassed to say that I've tried MacFUSE before and
couldn't figure out how to get it to work.  But really, it would be
best if it just worked with the native smbfs.

