[SATLUG] Next month's presentation
swinston at trinity.edu
Wed Aug 28 01:05:09 CDT 2002
On Tuesday 27 August 2002 22:13, Jeremy Mann wrote:
> > I agree, this would be a very interesting presentation. Personally, my
> > company (Global Gaming Innovations) is working with wide-area
> > clustering. My interest would be piqued and I *might* actually make it
> > to my first SATLUG meeting.
> Come out anyway ;)
Depends on whether I have time (that's been the problem with making it out this time).
> > It's true that creating a cluster is
> > easy, although to be honest, you DON'T need MPI or PVM to do it. The
> > tools are already present in TCP/IP protocol programming. Building a
> > cluster doesn't have to require installing or configuring anything!
> > However, if you wanna get into fun things like multi-threaded parallel
> > computing, you're gonna want to use the kitchen sink of distributed
> > parallel computing (MPI).
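To make the plain-TCP point above concrete, here's a minimal sketch using nothing but the standard library: one "worker node" listens on a socket, receives a task, computes, and sends the result back. The port choice, the squaring "task," and all the names here are invented for illustration; a real cluster would add framing, retries, and many workers.

```python
# A bare-bones "cluster" with only the standard socket library:
# one worker accepts a task over TCP, computes, and replies.
# Everything here (the squaring task, names) is illustrative only.
import socket
import threading

def worker(server_sock):
    """Accept one connection, treat the payload as a number, square it."""
    conn, _ = server_sock.accept()
    data = conn.recv(1024)               # receive the task as text
    answer = int(data) ** 2              # the "compute node" does its work
    conn.sendall(str(answer).encode())
    conn.close()

def submit(host, port, n):
    """Ship a task to a worker and wait for the result."""
    with socket.create_connection((host, port)) as s:
        s.sendall(str(n).encode())
        return int(s.recv(1024))

# Spin up one local worker node and hand it a task.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))               # 0 = let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]
t = threading.Thread(target=worker, args=(srv,))
t.start()
result = submit("127.0.0.1", port, 12)
t.join()
srv.close()
print(result)  # -> 144
```

That really is all the "installation" a hand-rolled cluster needs, which is the point: the plumbing is free, and MPI earns its keep only once you want collective operations, process management, and multi-threaded parallel work.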
> This is true with multi-threaded parallel programs. But, what openMosix
> does is implement this at kernel level so there is no need to reprogram
> your binaries for MPI parameters. I like OpenMosix because it is kernel
level so it load balances regardless of whether the binary is compatible. Now
> there are requirements as I mentioned before, but most applications WILL
> balance across nodes that need the extra resources. There are tweaks you
> can do and even in my experience I can't get *true* balancing (50/50), but
> I can get at least 60/40 (60% on the node that started the app, 40 for the
> rest of the nodes). There is more but I wanna save it for the demo ;)
Yeah, OpenMosix is a nice load-balancing solution. However, there's one
rather annoying problem with it for developers: not everyone can
take advantage of clusters this way. It requires lots of
bandwidth, lots of computers (well, at least two), and lots of headaches
keeping the network homogeneous (in most implementations; I don't know about
OpenMosix specifically). Those are the problems I've had or seen, but again,
I look forward to the presentation.
> > There's one major drawback in the tech
> > on the market today. That is, try starting up your program and pulling
> > the plug on ANY of the nodes in the LAM: whoops, all gone, with no
> > signal. There are ways to send heartbeats, and Duke University is
> > developing methods of gracefully shutting down your distributed program
> > with error codes. However, this is currently a major pain in the arse.
> > Also, for more about wide-area distributed processing, check out the
> > Albatross (sp?) project. It's looking pretty good (four universities
> > making up shared clusters).
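The heartbeat idea mentioned above is simple enough to sketch: each node pings a monitor periodically, and the monitor declares a node dead once its last heartbeat is older than a timeout. The class, the names, and the timeout value are all made up for illustration, not any real project's API.

```python
# Hedged sketch of heartbeat-based failure detection: a node that
# stops beating (someone pulled the plug) is declared dead after a
# timeout. Names and the timeout value are invented for illustration.
import time

class HeartbeatMonitor:
    def __init__(self, timeout):
        self.timeout = timeout       # seconds of silence before "dead"
        self.last_seen = {}          # node name -> last heartbeat time

    def heartbeat(self, node):
        """A node calls this periodically to say 'still alive'."""
        self.last_seen[node] = time.monotonic()

    def dead_nodes(self):
        """Nodes whose last heartbeat is older than the timeout."""
        now = time.monotonic()
        return [n for n, t in self.last_seen.items()
                if now - t > self.timeout]

mon = HeartbeatMonitor(timeout=0.2)
mon.heartbeat("node1")
mon.heartbeat("node2")
time.sleep(0.3)
mon.heartbeat("node2")       # node2 keeps beating; node1 got unplugged
dead = mon.dead_nodes()
print(dead)                  # -> ['node1']
```

Detecting the death is the easy half; the hard part the quote is complaining about is unwinding the rest of the distributed program gracefully once you know.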
> With OpenMosix you only need an entry in its config file (mosix.map) to
> span across networks to access resource nodes.
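For anyone curious, a mosix.map entry is, as I understand it, of the form `<node-id> <base-IP> <number-of-nodes>`, so spanning networks really is just a matter of adding lines. Something like this (addresses invented for illustration) would cover two subnets:

```
# node-id   base IP        count
1           192.168.0.1    4       # four nodes on the local LAN
5           10.0.1.1       2       # two more on a remote subnet
```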
Well, if you're after true ease of cluster installation, just create a network
with more than one device that has a processor, memory, and TCP/IP networking
capabilities. In other words, do pretty much nothing. Things also depend on what
you want to do. If you're interested in graphics, check out GridView by
Sun; it's a nice open-source project.
> > Oh, and if there is interest, my company is about to release its first
> > beta test (it will work natively in Linux) of an FPS game that uses
> > distributed processing tech to create massively multiplayer online games
> > (read: 500-1000 or more Quakester types fragging each other on one
> > server). I'll post an announcement in case anyone is interested in seeing
> > this type of stuff work.
> I am definitely interested in hearing about this. I have heard Doom3 will
> P2P resources for extra bandwidth, BUT in my experience with balancing you
> need a fat pipe to move the data between nodes.
Well, information of that nature (business related) will come out when the time
is right. For right now, it's good enough to say "we can put lots and
lots of people on one server with no slowdown, securely, and with nice
uptime capabilities." I'll leave it at that for now.
Home: (210) 641-0565
Office: (210) 582-5898
Cell: (281) 615-9612
11:54pm up 27 days, 5:21, 1 user, load average: 0.11, 0.13, 0.05
My cup hath runneth'd over with love.