MOSES Digested

The Meta Operating System & Entity Shell
Ten Years Later

Daniel J. Pezely

3 November 2001

Introduction

This is a contemporary look at the design of MOSES, the Meta Operating System and Entity Shell from the early 1990's. With this, the presentation within the older documents becomes unnecessary, since those emphasized qualities that are taken for granted now but were unique at the time.

The system’s architecture is described from a modern perspective using varying degrees of computer science concepts.

Forward-looking propositions are left for other papers.

Just the highlights of the system are presented without a comprehensive explanation of the design. Key details are stated when appropriate to illustrate uniqueness.

Most subsections include a brief statement putting that portion into the context of the times, past and present.

History

MOSES, the Meta Operating System and Entity Shell, was never fully implemented as a platform for virtual reality (VR).

It currently exists only as a collection of technical documents. Even the material in the archives at the Human Interface Technology Lab in Seattle is incomplete.

As an implementation, its source code was never released. Many of the original documents were never published or even circulated at the Lab.

The name for this system created a self-fulfilling prophecy: some debated whether MOSES ever really existed. Consortium members of the HITLab, such as Division, investigated its architecture for their own products. This made the design historically significant.

Context

Putting the times into perspective, the early 1990's saw the creation of Oak (later renamed Java), HTTP and streaming media. Now think back to before anyone had heard of these. On the wireless front: cell phones were bulky things mounted in cars, and pagers still belonged primarily to medical staff.

Graphical user interfaces had yet to become mainstream: the Macintosh had only recently released its first color model, Windows/286 was a novelty item, and NeXT was expected to be the future. The i386 processor was high-end, and the DEC Alpha (not yet Compaq Alpha) was still on the drawing board.

VR meant using VPL's Data Glove and Eye-Phones, driven by a Macintosh II with dual SGI Iris workstations broadcasting onto their own private 10base2 Ethernet network.

And SimCity was new.

Rationale For This Design

The purpose of the system was ultimately to make computers easier to use.

Previous experiences with artists attempting to use early graphics workstations served as inspiration and motivation. After all, people know how to interact with the world around them, so why not mimic those familiar elements? Bend the tool rather than the person.

Architecture Overview

MOSES was originally described as merely bringing together technologies from other disciplines.{Eclectic} That was paying respect to the underlying technologies. Distributed systems, network engineering, artificial intelligence, expert systems, as well as computer graphics were drawn upon.

In the end, many familiar with the architecture stated it was definitely more than just the sum of its parts.

Some proclaimed a paradigm shift between the two designs.

Unfortunately, the implementation design never became manifest in a working system. But even theoretical systems have value when understood and re-applied appropriately.

Data Versus Function

There was a lack of distinction between data and function. Within a collection of data, each parameter may be a static value such as “4” or actual code to be executed. The memory management structure would identify the data type and utilize it appropriately when accessed.

Today, good object-oriented style dictates using accessor methods for all member fields of a class structure. This effectively permits the same abstraction.
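
Purely as an illustration, here is a minimal Python sketch of that abstraction, with invented names (Record, get, set): each field may hold either a static value or executable code, and the distinction disappears at the point of access.

  class Record:
      """A record whose fields may be static values or callables.

      On access, a callable field is executed and its result returned,
      so callers need not distinguish data from function.
      """

      def __init__(self, **fields):
          self._fields = dict(fields)

      def get(self, name):
          value = self._fields[name]
          return value() if callable(value) else value

      def set(self, name, value):
          self._fields[name] = value


  # A static value and a computed one look identical to the caller.
  point = Record(x=4, y=lambda: 2 + 2)
  assert point.get("x") == point.get("y") == 4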

Note that the C++ language was still undergoing significant changes at the time. People with direct knowledge of OO principles were scarce even by academic standards then, let alone anyone with extensive experience.

Beyond n-Dimensions

This is an information engine capable of supporting unlimited dimensions.

Previous VR systems were 3D. Animation environments may be considered 4D (though technically just three and a half dimensions). Physicists claim there are between eleven and thirteen dimensions to the universe.

Going beyond the limitation of the phenomenal world, MOSES supports an arbitrary number of dimensions. To even specify n-dimensions would be a limitation because a quantity for n would be required.

By thinking of it as arbitrary information, we transcend restrictions of ordered tuples. The value in this is the ability to override conventional use of data.{Concept}

For example, the point located at (x,y,z) may take on other meaning when the geometry of the world changes. By transposing meanings of the first, second and third elements, a different geometry might effectively map the coordinates to (z,x,y). (Translating coordinates within a database would be too time consuming.) This approach permits on-the-fly alterations.

Such changes provide more value when dealing with larger numbers of elements. There is more significance when textures are generated as functions instead of static values.
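
A minimal sketch of the idea, assuming nothing about the original implementation: stored elements remain untouched, arbitrary information, and only the active interpretation changes when the geometry of the world changes.

  # Hypothetical illustration: stored elements are never rewritten; only
  # the interpretation changes when the world's geometry changes.
  stored = {"p1": (1.0, 2.0, 3.0)}          # raw, uninterpreted information

  def cartesian(element):
      x, y, z = element
      return (x, y, z)

  def transposed(element):
      # Same data, different meaning: first/second/third roles swapped.
      x, y, z = element
      return (z, x, y)

  active_geometry = cartesian
  print(active_geometry(stored["p1"]))      # (1.0, 2.0, 3.0)

  active_geometry = transposed              # on-the-fly alteration
  print(active_geometry(stored["p1"]))      # (3.0, 1.0, 2.0)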

Recall that in 1990, interaction with databases was direct. (Similarly, CAD systems still shipped with their own sets of device drivers.) Today, transposing information would be done through a plug-in or broker mechanism; at the very least, the database driver could be changed. That abstraction was rare if it existed at all then.

Changing The Geometry

One reason for altering the use of individual dimensions is to effectively alter the geometry.

What does it mean to change the geometry? Few really know the answer to that question because few have actually experienced it. From a research perspective, we should ask instead:

  1. What are the implications when we change the geometry from Cartesian to Spherical?
  2. What advantages would an Escher-esque geometry allow? Consider recursion in just one dimension, then two and so on.
  3. How might scientific visualization benefit from experimental geometry systems?

And that was the point of virtual reality then and now. We simply do not understand this medium well enough to effectively label it, much less utilize it beyond interactive games.

Recursion In The System Design

There were three primary elements to the architecture of MOSES: memory, communication and function.

Note that each may be described in terms of the others. For example, memory must be written to and read from; hence, it may be categorized as function or communication.

Communication may be viewed as memory in which the parameters being stored are altered or as a function considering the read/write operations.

Function can be thought of as the activity servicing communication or memory when considering that primitive methods will perform the operations.

The cross-over distinctions are subtle but significant. The mind-set offers recursion in the design. And as the classic tenet of design goes, “elegance means nothing else could be removed.”

When a design is recursive, the implementation becomes simpler. This should translate into a smaller and more efficient code base which in turn lends itself to fewer bugs and anomalies.

Principles of Network Engineering

The principles of network engineering {Mills}, learned from studying under Dr. Dave Mills of the original Internet Protocol research team, were applied. They are as follows.

  1. You cannot anticipate all the faults.
  2. All fault scenarios will happen at least once.
  3. No one single strategy will work.
  4. The system must be self-correcting.
  5. But each correction must not increase vulnerability.
  6. No system will always obey these rules.

Amendment:

Implementation Hint: Use unique time-outs such that even in combination they become signatures of where problems may exist. (This, for example, is how the phone company recovered from a major outage in early 1991.)
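
The following Python fragment is a speculative reading of that hint, not something from the MOSES documents: give each subsystem a distinctive time-out value so that the combination of time-outs observed during an incident points back to the likely trouble spots.

  # Hypothetical example: each subsystem gets a unique time-out value.
  # The combination of time-outs that fire acts as a rough signature
  # of which parts of the system are misbehaving.
  TIMEOUTS = {"auth": 3.1, "dataspace": 5.7, "renderer": 7.3}  # seconds

  def diagnose(observed_timeouts):
      """Map a set of observed time-out values back to subsystem names."""
      return sorted(name for name, t in TIMEOUTS.items()
                    if t in observed_timeouts)

  print(diagnose({5.7, 7.3}))   # ['dataspace', 'renderer']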

The principles applied to the system architecture were summarized in {Design}.

System Features

Just about everything listed as a high point for MOSES in 1991 and 1992 has since become an industry buzzword.

  1. scalable – system performs sufficiently under various loads
  2. distributed – spanning local and wide areas
  3. fault-tolerant – graceful fail-over and recovery
  4. robust – handle various qualities of service

Some elements are still uncommon; they are described in the subsections that follow.

User-Centric View of Data

In a time when most computer users were still confined to the technology sector, focusing on people having control over their data was relatively novel. The idea had existed for some time but was barely used. It wasn't until Archie {Archie} and later the Web that this idea became widely understood.

The approach for MOSES was to permit modification of the system while it was running. More about this will be touched upon below. (See “System Variables”.)

DataSpace

The DataSpace was the shared memory management facility.

The data elements being manipulated were similar to tuples {LINDA} but recursively nestable. At the time, our usage violated that term's definition, so “grouples” were used instead. (Gelernter later refined his definition of a tuple to account for nesting.{Lifestreams})

There were six basic operations on the DataSpace (an illustrative sketch follows the list):

  1. New
  2. Delete
  3. Select
  4. Copy
  5. Evaluate
  6. Substitute
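
The sketch below is purely illustrative; it assumes a grouple can be modeled as a nestable list of elements keyed by an identifier, and the method signatures are invented, though the operation names come from the list above.

  class DataSpace:
      """Toy model of the DataSpace: grouples as nestable lists keyed by id."""

      def __init__(self):
          self._grouples = {}
          self._next_id = 0

      def new(self, elements=()):                     # New
          self._next_id += 1
          self._grouples[self._next_id] = list(elements)
          return self._next_id

      def delete(self, gid):                          # Delete
          del self._grouples[gid]

      def select(self, predicate):                    # Select
          return [gid for gid, g in self._grouples.items() if predicate(g)]

      def copy(self, gid):                            # Copy
          return self.new(self._grouples[gid])

      def evaluate(self, gid):                        # Evaluate
          return [e() if callable(e) else e for e in self._grouples[gid]]

      def substitute(self, gid, index, element):      # Substitute
          self._grouples[gid][index] = element


  ds = DataSpace()
  inner = ds.new([4, lambda: 2 + 2])     # grouples may nest other grouples
  outer = ds.new(["label", inner])
  ds.substitute(outer, 0, "renamed")
  print(ds.evaluate(inner))              # [4, 4]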

Management of the DataSpace in a distributed environment was elaborate. Consider that nesting and recursion were possible with the data structure. Only the depth immediately required would be transferred. This eliminated side-effects from recursive data.

Extensive research into caching was done to find elegant solutions for keeping data from becoming unsynchronized. Such concepts are today considered standard design practices for any cache mechanism with two important exceptions: closure and migration. These tools preserved data integrity.

Closure simply refers to having both sides reference one another so that when a change occurs, all parties referencing the data may be notified.

Migration of ownership does not necessarily mean data migration. In a distributed, shared memory facility it’s possible to control data in a foreign computer. As part of such management, sometimes relocating the data across the network is ideal– but not always.

Performance and reliability issues are addressed through mediation by brokers and through authority backed by authentication.

Inference

Building upon the VEOS design, the system placed an inference engine at its core. In other words, “match-and-substitute with execution” was the heart of the system.

Programmable Protocol

For extensibility on the network, the communications protocol itself was programmable.

The general idea was to provide flexibility, customization and optimization in the host-to-host interaction.

Consider the Web with HTTP/0.9, also from 1991 {HTTP0}. Imagine if the web browser were able to collect the HREF tags from a single page and download the contents via batch processing. (Although HTTP/1.1 provides for this, remember that MOSES was specified in 1991 and HTTP/1.1 was a revision in the mid-1990’s.)

It worked by using a programming language with simple syntax, such as Lisp, as the basis. That much is identical to VEOS. MOSES introduced the notion of a meta-machine language, which would be an optimized version: byte-compiled, serialized/pickled, etc. While some implementations of Lisp offered their own compilation, none were cross-platform then.

The nature of the protocol was similar to the syntax of Common Lisp. The point was that grouping was available (sublists) and named parameters (keywords) could be used. (The rationale is identical to the advocacy for XML.) Beyond that, Lisp behavior was optional in MOSES yet enforced in VEOS.

Java was still called Oak at Sun Microsystems, yet the direction of their research applied to MOSES. The key element was to make that lower layer inter-operable; then the high-level language syntax becomes irrelevant. As with Java's class file structure, other languages may be translated into that form.

Specifically, a project from the UK called TAOS was of particular interest. It was a hardware-independent machine language that would be translated to native code on-the-fly while being read from disk or the network. It offered distributed and parallel processing. Unfortunately, its developers introduced a higher-level language compiler too late to contend with Java in the marketplace.

The intent of MOSES was to make communications more efficient than what was commonly available at the time, particularly when sending updates from an entity that rapidly changes direction.

Sending every update with the full Christmas tree of information would be burdensome to process and would consume network capacity needlessly. (Remember: 100-megabit Ethernet was still a long way off, and fibre networks remained highly proprietary until FDDI was later ratified.)

The idea was to permit macros for automating and minimizing updates. If all that changes is the (x,y,z) coordinates, send only that tuple. Likewise, if only orientation changes, send only that portion of the record. It's unwise to send the entire list of geometric qualities for each update. Qualities might include orientation normal, scale in each coordinate plane, color, textures, sound cues, etc.

Some information would of course be sent during initial registration. From that basis, macros could be defined within the communications protocol for updating.
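
As a rough sketch only, with an invented, Lisp-flavored wire format (none of these message or macro names appear in the MOSES specification): register the full state once, then define a macro so that later updates carry only the fields that change.

  # Hypothetical wire exchanges, written as Lisp-like s-expressions.

  # Initial registration: send the full "christmas tree" once.
  register = '(register :id 42 :pos (1 2 3) :orient (0 0 1) :scale (1 1 1) :color "red")'

  # Define a macro within the protocol itself: subsequent position
  # updates need only the (x y z) tuple.
  define_macro = '(defmacro pos-update (x y z) (update :id 42 :pos (x y z)))'

  # Each later update is now tiny compared to resending every quality.
  update = '(pos-update 1.5 2.0 3.25)'

  for message in (register, define_macro, update):
      print(len(message), message)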

The possibilities of a programmable protocol accounted for remote behaviors much like applets and client-side scripting from Web sites. This is discussed below. (See “Agents For Behavior” and “Remote Behavior”.)

Meta Layer

With a programmable protocol, the middle tier of software was to also be implemented within this communications protocol.

All middle-ware, as the concept is commonly referred to now, could be migrated.

If a virtual environment required specific functionality otherwise unavailable by the host platform, the libraries could be downloaded. Again using Java as reference, the applet hype would also apply here.

An even more contemporary model for comparison is Microsoft Windows XP and the .NET initiative that is replacing the ORBs of the 1990's. It has taken the general industry a full decade to catch up.

As with Java class files, system independence was the goal. This goes far beyond hardware independence, in anticipation of multiple vendors supplying unique implementations of the kernel. Likewise, within a server farm, you are free to use a mix of machines, such as those from Sun, IBM, HP, etc. As long as a kernel was available for each, the rest of the system would be identical in behavior.

System Variables

Taking the concept from VEOS another step, internal variables would be accessible to the user. This is comparable to the sysctl utility on FreeBSD systems.{FreeBSD} As with FreeBSD, security would be controlled via access control lists (ACLs); this was the main differentiator from VEOS.

The rationale for this feature was to permit the system to be changed– reconfigured– from within. And focusing on user-centric views of data, that level of user control was paramount.

Upgrade A Running System

Formally described as dynamic linking, loading and binding, this permits software components to be upgraded while the platform is operational.

This seems like a superfluous element, yet it’s significant. The servers which would host virtual environments would need to be operational for long periods of time.

The criteria are similar to those of many Web or database servers. While many techniques exist, such as data redundancy, master-slave relationships, etc., this is another item for the arsenal.

If the server is able to continue running throughout an upgrade procedure, the overall availability will be greatly increased. And availability seems to be the Holy Grail of the Web content hosting community today.

Data Structure

The internals of the kernel relied upon a structure that resembled an inode from the Unix file system. Eventually this relationship was merely logical, but initially it was explicit.

Final Structure

Ultimately, it was understood that high degrees of flexibility could be attained. The kernel data structure was merely a container for higher levels to manipulate.

The Mem meta-structure, circa 1993

  Field    Description                                                                Type
  data     variable-length list of arbitrary content                                  void*
  attrib   variable-length list specifying type and control values of each element    void*

The net effect provided for a mechanism similar to classes in {Python}. In both environments, fields can be inserted into a record dynamically. This is distinctly unlike most strongly typed languages, where structures are frozen upon compilation.

Because the data type is void*, a record of even the raw machine-code variety could simply have been overlaid. The intended use, however, was that each element was free to have its own type.

Some record elements might have contained a type foreign to the local system. As long as that item was unused locally, its value would never be interpreted.

Rather than elaborate upon how the higher levels would operate on the above structure, just refer to a contemporary implementation of Python.
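
For instance, a minimal Python illustration of that comparison (the class name here is arbitrary) shows fields being added to an instance at run time, unlike a compiled structure frozen at build time.

  class Mem:
      """Stand-in for the Mem meta-structure: just a container."""
      pass

  record = Mem()
  record.position = (1.0, 2.0, 3.0)   # fields inserted dynamically,
  record.color = "red"                # never declared in the class
  record.behavior = lambda: "spin"    # an element may even be code

  print(vars(record).keys())          # dict_keys(['position', 'color', 'behavior'])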

Early Structure

Initially, however, all of the fields were presented explicitly. Each had a specific data type: an additional record.

The Grouple structure, circa 1992

  Field     Description
  id        Full name & reference to self
  flags     Kernel (not client) protection
  links     Number using this as sublist
  readers   Readers not necessarily linked
  created   Time-stamp
  modified  Time-stamp
  accessed  Time-stamp
  expire    When to remove link
  length    Array size in `form`
  types     Option for mixed element types
  form      Dynamic array OR single pointer; typecast pointer to anything...

  Plus additional fields for debugging and core-dumps.
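
Translated into code, a speculative modern rendering of that table might look like the following Python; the field names come from the table, while the types are assumptions.

  from dataclasses import dataclass
  from typing import Any, List, Optional

  @dataclass
  class Grouple:
      """Approximation of the circa-1992 structure; all types are assumptions."""
      id: str                              # full name & reference to self
      flags: int = 0                       # kernel (not client) protection
      links: int = 0                       # number using this as sublist
      readers: int = 0                     # readers not necessarily linked
      created: float = 0.0                 # time-stamps
      modified: float = 0.0
      accessed: float = 0.0
      expire: Optional[float] = None       # when to remove link
      length: int = 0                      # array size in `form`
      types: Optional[List[str]] = None    # option for mixed element types
      form: Any = None                     # dynamic array OR single pointer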

Going back even further, one document presents a two-tiered data structure. The idea was to separate the management from the data being tracked. The structure in the paper submitted to SIGGRAPH 1991 {Entity} was a prototype that was never fully implemented.

Note: designs of data structures accounting for abstraction were still far from mainstream. People with practical C++ experience were relatively few then.

Industrial Qualities

There are several qualities MOSES was to possess: scalability, distribution, fault-tolerance and active abuse prevention, all to provide robustness.

Note that most of these terms became buzzwords in the mid-1990’s and the concepts taken for granted only in the late-1990’s.

Scalability

Referring to any architecture as ‘scalable’ has many implications. First, it must account for future directions of the technology and for applications the designers never conceived.

Second, the system should perform sufficiently under various loads. This applies to network congestion as well as within a single server or workstation. By accounting for various levels of detail (LoD), the protocol can effectively throttle how much information gets transmitted and/or what should be processed.

The various factors regarding LoD may be found in contemporary (circa 2001) works from Intel Architecture Labs.{MRM} {SDS}
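
As a hedged sketch of that throttling idea, with invented thresholds and field names: measure the load, choose a level of detail, and transmit only the fields that level allows.

  # Hypothetical level-of-detail throttle: the busier the link, the less detail.
  LEVELS = [
      (0.25, ("pos", "orient", "scale", "color", "texture")),  # light load: everything
      (0.75, ("pos", "orient")),                               # moderate load
      (1.00, ("pos",)),                                        # saturated: position only
  ]

  def fields_for_load(load):
      """Return which fields of an update to transmit for a given load (0..1)."""
      for threshold, fields in LEVELS:
          if load <= threshold:
              return fields
      return ("pos",)

  full_update = {"pos": (1, 2, 3), "orient": (0, 0, 1), "scale": (1, 1, 1),
                 "color": "red", "texture": "brick"}
  throttled = {k: v for k, v in full_update.items() if k in fields_for_load(0.9)}
  print(throttled)    # {'pos': (1, 2, 3)}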

Distributed System

By “distributed system,” this refers to an architecture that, as a whole, spans both local and wide areas.

Today, it might be called a server farm that supports some form of geographic load balancing.

The general idea is to account for both centralized servers as well as peering.

The model of a centralized server allows for ease of administrative structure. This translates into benefits with authentication from a security perspective. These models date back to the Multics project in the 1960’s.{Multics}

The peer model is optimized for certain types of usage where there might be groups of participants collected in close proximity. See {Amoeba}.

Combining the two models, however, offers additional strengths. Doing so can take advantage of propagation for data-flow as well as multicasting or broadcasting.

What are the behaviors when either mode is used? What happens when the model switches from centralized to peer-based?

The implications of letting the system dictate its own laws of nature are still unknown. So many systems strive to mimic the phenomenal world that the intrinsic nature of a virtual environment is lost.

The intent was to support both types of data-flow, thus offering a wider range of applications and experiments.

Distributed Memories

Within a distributed system server farm, the load would be shared by all servers. This requires memory sharing, caching, data migration and load balancing of network connections.

Memory within a shared environment works by the distinction of entity ownership. Cached copies might exist on several hosts, but only one is the owner. The owner might grant permission for remote entities to send updates, or it might transfer ownership to the remote entity even though it continues to reside locally.
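
A toy Python sketch of that ownership distinction, using names of my own rather than the design's: many hosts may cache an entity, exactly one owns it, and the owner may either accept updates or hand authority over without relocating the data.

  class SharedEntity:
      """One owner, many caches; updates flow through the owner."""

      def __init__(self, owner, state):
          self.owner = owner                 # host currently holding authority
          self.state = dict(state)
          self.caches = set()                # hosts with read-only copies

      def update(self, host, **changes):
          if host != self.owner:
              raise PermissionError(f"{host} is not the owner")
          self.state.update(changes)
          # In a real system the caches would now be notified (closure).

      def transfer_ownership(self, new_owner):
          # Ownership moves; the data itself may stay where it resides.
          self.caches.add(self.owner)
          self.owner = new_owner


  entity = SharedEntity(owner="server-a", state={"pos": (0, 0, 0)})
  entity.transfer_ownership("server-b")
  entity.update("server-b", pos=(1, 2, 3))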

The front-end load balancer has become a common element in Web content hosting server farms: distribute network connections to any one of the available servers in the back-end farm. While MOSES specified this to work with agents measuring performance of back-end machines, that element was made obsolete as the computational power and network capacity grew throughout the 1990’s.

Fault-Tolerance

Graceful fail-over and recovery are challenges for application service providers, then and now. Effective use of level of detail (LoD) lends support here, as does scalability (see above).

In addition, when a resource disappears, it's useful to re-establish the previous connections so communications may continue where they left off. Short of a catastrophic failure, and even allowing for a server completely resetting or rebooting itself, this should be an option. This functionality may be offered when closure is planned.

Closure lists {Closure} account for all sides of a link to maintain sufficient state of remote connections. This permits either side to attempt rebuilding the connection.
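
A minimal sketch of a closure list, assuming only what is stated above: both ends record the link, so either side can later walk its list to notify the other or attempt to rebuild the connection.

  class Endpoint:
      """One side of a link; keeps enough state to rebuild the connection."""

      def __init__(self, name):
          self.name = name
          self.closure = []          # links in which this endpoint takes part

      def notify(self, message):
          print(f"{self.name}: {message}")


  def link(a, b):
      """Record the link on both sides: the closure."""
      entry = (a, b)
      a.closure.append(entry)
      b.closure.append(entry)
      return entry


  client, server = Endpoint("client"), Endpoint("server")
  link(client, server)

  # After a failure, either side can walk its closure list and re-establish
  # (or at least announce) the connections it participated in.
  for a, b in server.closure:
      a.notify(f"rebuilding link with {b.name}")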

Abuse Prevention

Prevention and containment of abuse was addressed through Intrusion Countermeasures Entities (ICE). Intrusion detection agents would run within the servers observing anomalous behavior.

This is more than just filtering. State would be maintained. Should a sufficient sequence of events occur, the host system would disable an offending entity by denying it any further processing.

Recall that the early 1990’s were still a time of trust on the Internet. Stateful firewalls were uncommon until nearly 2000.

Network firewalls were practically nonexistent then. There was essentially protection by isolation. The military dealt with the issue of bridging the ‘black’ and ‘red’ networks through strict regulations which led to the early firewalls.

Robustness

Robustness is a quality we should all expect of commercial software. Whatever happens, the system should be able to return to some fail-safe state.

In a virtual environment, having the system shut down or crash is unacceptable. This is as true today as it was with early 1990's VR, when all head-mounted displays (HMDs) obscured the view of the physical environment. An abrupt shutdown leaves the participant completely disoriented.

Using MOSES

Some concepts alluded to above are now explained. Conversational use of words such as ‘entity’ and ‘space’ is sufficient for understanding the basic ideas. Here is the more formal explanation.

Entities

The base concept for using MOSES was the entity. Mathematically speaking, it means “that which exists.”

It was very significant that graphical components were omitted from the basic element. Some entities may lack any graphical objects whatsoever.

An entity might be an intelligent agent. Another might be just a collection of graphical objects. Others might include both.

Note that when graphical objects are mentioned, it’s in the plural. Other systems forced the grouping of graphical elements together into a single blob. A driving model was how to present a flock of birds. The next step was how to present a flock of unrelated objects such as a few light-seeking cubes, some sound-sensitive spheres and perhaps a reverse-gravity dweller or two.

Using the recursive aspects of the design, each item would be its own entity, yet collectively, the entire flock would be a single entity as well.
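
A small illustrative sketch with invented names: a flock is itself an entity whose members are entities, each free to carry its own behavior and graphical objects, or none at all.

  class Entity:
      """That which exists: may hold graphical objects, behavior, sub-entities."""

      def __init__(self, name, graphics=(), behavior=None, members=()):
          self.name = name
          self.graphics = list(graphics)
          self.behavior = behavior
          self.members = list(members)      # an entity may contain entities

      def act(self):
          if self.behavior:
              self.behavior(self)
          for member in self.members:       # recursion through the collection
              member.act()


  cube = Entity("light-seeking cube", graphics=["cube"],
                behavior=lambda e: print(f"{e.name} drifts toward light"))
  sphere = Entity("sound-sensitive sphere", graphics=["sphere"],
                  behavior=lambda e: print(f"{e.name} pulses with sound"))
  flock = Entity("flock", members=[cube, sphere])   # no graphics of its own
  flock.act()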

Spaces and Superspaces

The ‘DataSpace’ is the functional component; ‘spaces’ are what the participants deal with.

The mathematical definition of a space is simply something that groups or contains entities. And again, an entity is simply that which exists, usually within a space or collection, even if there is only one in the group.

A room in a building could be represented as a space, or the entire building could be a single space. It depends on the application.

A superspace accounts for multiple intersecting, overlapping spaces. The distinction is more for fine-tuning memory management yet provides an abstraction beyond the lower-level OS-like intricacies.

A superspace might be represented as a single room with participants entering from various remote networks. The superspace allows for additional constraints to be put on the room.

Consider a crowded room you’ve been in recently. There may be hundreds of people, yet you can only see and identify about a dozen. You must move or the other person must move through the crowd for you to see them.

So with a superspace, the room would be essentially segmented. The segments would not necessarily be geometric. They might be based upon network proximity (hop count) or weight (speed, bandwidth, capacity).
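
A speculative sketch of such non-geometric segmentation, with an arbitrary hop-count threshold: participants in one superspace are grouped by network proximity rather than by position in the room.

  # Hypothetical: segment a superspace by network proximity, not geometry.
  participants = {
      "alice": {"hops": 1},
      "bob": {"hops": 2},
      "carol": {"hops": 7},
      "dave": {"hops": 9},
  }

  def segment_by_proximity(people, hop_threshold=4):
      """Group participants into 'near' and 'far' segments by hop count."""
      segments = {"near": [], "far": []}
      for name, info in people.items():
          key = "near" if info["hops"] <= hop_threshold else "far"
          segments[key].append(name)
      return segments

  print(segment_by_proximity(participants))
  # {'near': ['alice', 'bob'], 'far': ['carol', 'dave']}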

The superspace might be a room effectively spanning multiple servers.

Information might be transferred by propagation: routed through the chain of servers before being passed to the participants. All the servers for the room, however, would not necessarily receive their updates before the participants would. That, again, would depend upon the intended application. So a participant on the server originating an update might process the message before other servers even received their copies.

Recall that there will be an intrinsic behavior of the system– sort of its own laws of nature. Experiencing a superspace distributed over a wide area would illuminate this nature. Yet this is the very nature most developers of distributed virtual environments seek to eliminate or hide.

The significance of a superspace beyond the distinction of a space is this: an intelligently written entity could query the system to determine whether a superspace or a simple space was in use. Then that entity could make appropriate decisions based upon that information. A perfect example would be negotiating common levels of detail for all participants within a room for a simulation or teleconference.

In most applications, however, only the agents running on the servers hosting the virtual world would need to deal with this distinction. The design of user agents encouraged keeping them naive about it, for simplicity. If any feature relied upon this distinction, the base classes for entities using that particular world would handle the complexities.

Agents For Behavior

Virtual bodies are now commonly referred to as avatars. (We too were toying with appropriated terminology. From the Vedic tradition, we sometimes used the words ‘atman’ and ‘anatman.’)

As humans, we don’t focus on moving individual fingers. Likewise, intelligent agents were specified to deal with peculiarities of moving each digit, each finger, each hand and possibly the entire arm.

These agents might be simple rule-based bots or complete expert systems. It would depend on the application. Applications spanning from training simulators to tele-presence control of robots on other planets were anticipated.

Remote Behavior

Today, running an applet or client-side script is the accepted state of the technology.

This was a key element of MOSES for the sake of putting sensor and actuator code close to primary sources.

That mode permits an entity's sensor, acting as an agent, to grapple an object even when many network hops must be traversed. Think of a hand picking up a tool, albeit one that is very far away.

A common example used in discussions of networked virtual environments is the following question: is it more important to see the proper placement of the individual limbs of a dancing squid, or merely to have behavior that one might recognize as dancing?

Just as some web sites send simple applets to animate something, MOSES accounted for sending code to be executed for similar types of behavior.

Conclusion

MOSES is a significant virtual environment system that never was. Its significance is in the weave of features creating an overall behavior that is far more than the sum of its parts.

Many elements that were championed in the design are now taken for granted. We have the Web to thank for bringing many of these technologies to the general computer industry through web browsers, web servers, database servers and web content hosting server farms.

As an implementation, MOSES is dead and its source code never released.

Some of the elements are still unique to networked virtual environments. Further research into the state of computer science should be performed and this architecture revisited before moving forward.

For researchers and developers, it's important to note something. Any system is going to have its own intrinsic nature. For a distributed system, there will be artifacts. When data migration is employed, side-effects will be apparent. Yet this is part of the system's natural behavior. Let the system have its own characteristics. Any simulation of the phenomenal world should be clearly identified as such and kept at the application level.

For the only forward-looking comment in the paper, here is a suggestion of what the future holds for VR. People will use this technology first expecting it to mimic the phenomenal world around them. This is only natural. People need something familiar as a basis in order to learn something new. Yet once they’ve come to understand this new thing, turn off the simulations one-by-one. Let them know that some rules may be bent, others may be broken– and understand that truly, there is no spoon. Free your mind!

*   *   *

References

  1. {Eclectic} Using Large-Scale Operating Systems' Design for An Eclectic Design, Pezely; “#TR-92-6”, Human Interface Technology Laboratory, Washington Technology Center, University of Washington, Seattle, WA 98195 US; May 1992.
  2. {Design} The Design And Implementation of the Meta Operating System & Entity Shell (MOSES), Pezely, Almquist, Evenson, Bricken; “#TM-93-1”, Human Interface Technology Laboratory, Washington Technology Center, University of Washington, Seattle, WA 98195 US; May 1991.
  3. {Entity} The Entity Model: A Second Step Towards Virtual Reality, Pezely, Evenson, Almquist, Bricken; “#TR-91-5”, Human Interface Technology Laboratory, Washington Technology Center, University of Washington, Seattle, WA 98195 US; January 1991.
  4. {Concept} The Design of The Virtual Environment Operating System, Pezely; “hitl.7.everything”, Human Interface Technology Laboratory, Washington Technology Center, University of Washington, Seattle, WA 98195 US; August 1990.
  5. {Mills} Network Engineering (graduate-level course), Mills, D., Electrical Engineering Department, University of Delaware, Newark, DE 19716 US; 1991.
  6. {Archie} The Virtual System Model for Large Distributed Operating Systems, Neuman, B. Clifford; “89-01-07”, Dept of Computer Science and Engineering, U of Washington, Seattle, WA 98195 US; April 1989.
  7. {LINDA} Carriero, N., Gelernter, D., “Applications Experience with Linda,” Symposium on Principles and Practice of Parallel Programming, Proceedings of the ACM/SIGPLAN, Volume 23, Issue 9, September 1988, pp. 173-187.
  8. {Lifestreams} Gelernter, D., Fertig S. and Freeman E., “Lifestreams: An Alternative to the Desktop Metaphor,” Proceedings of CHI'96.
  9. {HTTP0} The Original HTTP as defined in 1991, Berners-Lee, T., World Wide Web Consortium, Massachusetts Institute of Technology, Cambridge, MA US; 1991
    http://www.w3.org/Protocols/HTTP/AsImplemented.html
  10. {FreeBSD} The FreeBSD Operating System, The FreeBSD Project; 1995-2001,
    http://www.FreeBSD.org
  11. {Python} The Python programming language, Python Software Foundation; 2001
    http://www.Python.org
  12. {MRM} Multi-Resolution Mesh, Intel Architecture Labs; 2001
    http://developer.intel.com/ial/3Dsoftware/mrm.htm
  13. {SDS} Subdivision Surfaces, Intel Architecture Labs; 2001
    http://developer.intel.com/ial/3dsoftware/subdiv.htm
  14. {Multics} Organick, E., The Multics System, MIT Press, Cambridge, MA US; 1972.
  15. {Amoeba} Tanenbaum, A.S.; Renesse, R. van; Staveren, H. van; Sharp, G.J.; Mullender, S.J.; Jansen, J.; van Rossum, G., “Experiences with the Amoeba Distributed Operating System,” Communications of the ACM, vol. 33, no. 12, December 1990, pp. 46-63.
  16. {Closure} Neuman, “The Need for Closure in Large Distributed Systems,” Operating Systems Review, Vol. 23, No. 4, October 1989, pp 29-30.
Copyright © 2001 Daniel Joseph Pezely
May be licensed via Creative Commons Attribution.