Parallel Operating Systems
-
Angel
(City University of London)
Angel is designed as a generic parallel and distributed
operating system, although it is currently targeted at a
high-speed network of PCs. This model of computing has the dual
advantage of a low initial cost and a low incremental cost. By
treating a network of nodes as a single shared-memory machine,
using distributed virtual shared memory (DVSM) techniques, we have
both addressed the need for improved performance and provided a
more portable and useful platform for our applications.
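For illustration only, the sketch below shows the page-fault mechanism
that page-based DVSM systems typically rely on: a region is mapped with
no access permissions, and the first touch of each page raises a fault
whose handler would, in a real system, fetch the page from its owning
node before re-enabling access. The remote fetch here is a local stub;
fetch_page_from_owner() and the four-page region are assumptions for
illustration, not part of Angel's interface.

  #include <signal.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/mman.h>
  #include <unistd.h>

  static char*  region = nullptr;   // the "shared" region
  static size_t page   = 0;         // system page size

  // Stand-in for the network transfer a real DVSM system would perform.
  static void fetch_page_from_owner(void* addr) {
      memset(addr, 0xAB, page);     // pretend the owner's copy just arrived
  }

  static void fault_handler(int, siginfo_t* info, void*) {
      void* base = reinterpret_cast<void*>(
          reinterpret_cast<uintptr_t>(info->si_addr) & ~(page - 1));
      mprotect(base, page, PROT_READ | PROT_WRITE);  // make the page accessible
      fetch_page_from_owner(base);                   // then fill in its contents
  }

  int main() {
      page   = static_cast<size_t>(sysconf(_SC_PAGESIZE));
      region = static_cast<char*>(mmap(nullptr, 4 * page, PROT_NONE,
                                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0));

      struct sigaction sa = {};
      sa.sa_flags     = SA_SIGINFO;
      sa.sa_sigaction = fault_handler;
      sigaction(SIGSEGV, &sa, nullptr);

      // This read faults, the handler pages the data "in", and the read
      // then completes as if the memory had been local all along.
      printf("first byte of page 2: 0x%02x\n",
             static_cast<unsigned char>(region[2 * page]));
      return 0;
  }

The point of the sketch is that the application sees ordinary shared
memory; the distribution happens underneath, in the fault handler.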
-
Chimera
(Carnegie Mellon University)
The Advanced Manipulators Laboratory at Carnegie Mellon University
has developed the Chimera Real-Time Operating System, a next-generation
multiprocessor real-time operating system (RTOS) designed especially to
support the development of dynamically reconfigurable software for robotic
and automation systems. Versions 3.0 and later of the software are already
in use at several institutions outside Carnegie Mellon, including
university, government, and industrial research labs.
-
COSY
(University of Karlsruhe, University of Paderborn)
Group Members: Wolfgang Burke,
Roger Butenuth,
Sven Gilles
COSY is an operating system for highly parallel computers with
hundreds or thousands of processors. All parts of the system are
designed to scale with the number of processors, without any single
component becoming a bottleneck.
-
Helios
(Perihelion Distributed Software)
Helios is a microkernel operating system for embedded and
multiprocessor systems. The operating system is modular in design
and can scale from an embedded runtime executive up to a fully
distributed operating system.
-
Hive
(Stanford University Flash Project)
The Hive OS Team is designing an
operating system that can operate effectively in a traditional
supercomputer environment as well as in a general-purpose,
multiprogrammed environment. The latter poses significant
challenges, since general-purpose environments typically contain
large numbers of processes making many system calls and many small
I/O requests.
-
Paramecium
This kernel uses an object-based software architecture which,
together with instance naming, late binding, and explicit overrides,
enables easy reconfiguration. Determining which components are
allowed to reside in the kernel address space is up to a
certification authority or one of its delegates. These delegates may
include validation programs, correctness provers, and system
administrators. The main advantage of certification is that it can
handle trust and sharing in a non-cooperative environment.
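As a rough sketch of the reconfiguration style described above
(instance naming, late binding, explicit overrides), the fragment below
resolves objects through a name registry at call time and then rebinds
a name to a different implementation. The registry, the Clock
interface, and the "services/tod" name are illustrative assumptions,
not Paramecium's actual API.

  #include <iostream>
  #include <map>
  #include <memory>
  #include <string>

  struct Clock {                       // an interface implemented by components
      virtual ~Clock() = default;
      virtual long now() const = 0;
  };

  struct KernelClock : Clock {         // default, "kernel-resident" instance
      long now() const override { return 1000; }
  };

  struct UserClock : Clock {           // replacement bound by an override
      long now() const override { return 2000; }
  };

  // Instance-name registry: clients resolve names at call time (late binding).
  static std::map<std::string, std::shared_ptr<Clock>> registry;

  static std::shared_ptr<Clock> resolve(const std::string& name) {
      return registry.at(name);
  }

  int main() {
      registry["services/tod"] = std::make_shared<KernelClock>();
      std::cout << resolve("services/tod")->now() << "\n";   // 1000

      // Explicit override: rebinding the instance name reconfigures every
      // client that resolves it afterwards, without relinking anything.
      registry["services/tod"] = std::make_shared<UserClock>();
      std::cout << resolve("services/tod")->now() << "\n";   // 2000
      return 0;
  }

Because clients bind at lookup time rather than link time, the override
takes effect for every subsequent caller.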
-
PEACE (Process Execution And Communication Environment)
(GMD FIRST)
PEACE is a family of operating systems with a truly
object-oriented design, developed at GMD FIRST.
Emphasis is placed on performance, configurability, and portability.
It is the native operating system for the MANNA
computer, a massively parallel machine featuring a
high-performance interconnection network. Ports to SunOS, FreeBSD,
and Parix extend the scope of the system to other parallel
computers as well as to workstation networks.
-
Puma and relatives
(Sandia National Laboratory)
The Puma operating system targets high-performance
applications on tightly coupled distributed memory
architectures. It is a descendant of
SUNMOS.
-
Sting
Sting is an experimental operating system designed to
serve as an efficient customizable substrate for modern programming
languages. The base language used in our current implementation is
Scheme, but Sting's core ideas could be incorporated into any
reasonably high-level language. The ultimate goal in this project is
to build a unified programming environment for parallel and
distributed computing.
-
Tornado
(University of Toronto)
Tornado is a new operating system being developed for the
NUMAchine that addresses NUMA programming issues using novel
approaches, some of which were developed for our previous operating
system
Hurricane.
Tornado uses an object-oriented, building-block approach that allows
applications to customize policies and adapt them to their performance needs.
For research purposes, we intend to tune Tornado for applications with very
large data sets that typically do not fit in memory and hence have high I/O
demands. We also intend to provide applications with an operating environment
whose performance behavior is predictable enough to allow performance tuning
and to let applications parameterize their algorithms appropriately at
run-time.
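As a small, hypothetical illustration of the building-block idea, the
sketch below lets an application substitute its own prefetch policy for
a default one. FileCache, PrefetchPolicy, and the page counts are
assumptions for illustration, not Tornado's actual interfaces.

  #include <cstddef>
  #include <iostream>
  #include <memory>

  struct PrefetchPolicy {                       // customization point
      virtual ~PrefetchPolicy() = default;
      virtual std::size_t pages_to_prefetch(std::size_t faulting_page) const = 0;
  };

  struct DefaultPolicy : PrefetchPolicy {       // system default: small read-ahead
      std::size_t pages_to_prefetch(std::size_t) const override { return 1; }
  };

  struct OutOfCorePolicy : PrefetchPolicy {     // app-supplied: aggressive streaming
      std::size_t pages_to_prefetch(std::size_t) const override { return 64; }
  };

  class FileCache {                             // an OS "building block"
  public:
      explicit FileCache(std::unique_ptr<PrefetchPolicy> p) : policy_(std::move(p)) {}
      void fault(std::size_t page) {
          std::cout << "page " << page << ": prefetch "
                    << policy_->pages_to_prefetch(page) << " page(s)\n";
      }
  private:
      std::unique_ptr<PrefetchPolicy> policy_;
  };

  int main() {
      FileCache standard(std::make_unique<DefaultPolicy>());
      FileCache streaming(std::make_unique<OutOfCorePolicy>());  // app's choice
      standard.fault(10);    // prefetch 1 page(s)
      streaming.fault(10);   // prefetch 64 page(s)
      return 0;
  }

An application with a large, out-of-core data set would supply the
aggressive policy; others keep the default, without changes to the
building block itself.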