Second Generation Object Models

William Kent
Database Technology Department
Hewlett-Packard Laboratories
Palo Alto, California

December 1989

CONTENTS

ABSTRACT
1 INTRODUCTION
2 THE EVOLVING CONCEPT OF STATE
   2.1 The First-Generation View
      2.1.1 Stored Data
      2.1.2 Disjointness
      2.1.3 Localization
      2.1.4 Stability of Form
      2.1.5 Objects and the Storage Resource
      2.1.6 Corollaries
   2.2 The Second-Generation View
      2.2.1 Memory of Assertions
      2.2.2 Explicit and Implicit Methods
      2.2.3 The Content of Objects
      2.2.4 Where's the Object?
      2.2.5 Complex Objects
      2.2.6 Separation of Type and Class
3 THE EVOLVING CONCEPTS OF MESSAGES AND OPERATORS
   3.1 Operational Interfaces and Type Checking
   3.2 Symmetry
   3.3 Ownership
   3.4 Polymorphism and Inheritance
   3.5 A Role Model of Operational Interfaces
4 CONCLUSIONS
5 ACKNOWLEDGMENTS
6 REFERENCES


ABSTRACT

Second-generation object models generalize on first-generation object models in two respects: the semantics of object state are abstracted from its implementations in storage structures, and messages to single recipients are generalized to operators on one or more operands. The first is more consistent with the object-oriented principle of abstraction; the second provides increased functionality while preserving the principles of object orientation.

1 INTRODUCTION

The object-oriented approach seeks a clean demarcation between the external semantics of an object and its internal implementation. External semantics are expressed entirely in terms of the operational interfaces of objects, i.e., the operators which may be applied to them. Applications which respect the object-oriented discipline only access objects by means of these operators, and are in no way aware of the implementation. Such applications can transparently be used with objects having different implementations, or whose implementations change.

Objects also have state, in the sense that information asserted about them is remembered for future use. Historically, object state has typically been specified as a set of instance variables associated with the object. Similar formulations express this as a tuple or record whose elements correspond to the values of instance variables.

We thus get an image of an object as an encapsulated chunk of data. The sense of encapsulation is reinforced by the messaging paradigm: an operator is described as being sent to a single receiver. One can then think of an operator as belonging to a single object, and the method (a procedure which implements the requested operation) can then also be thought of as being in the object.

Thus an object has been characterized as something which contains data and to which messages can be sent. This capsule view has an intuitive appeal, being easy to grasp and relatively easy to implement. It also provides a simple foundation for the design of other facilities, such as security, versioning, distribution, concurrency control, export, and so on.

So, is there a problem? Yes. The capsule view of an object's data, if taken too literally, precludes certain implementation options, making the object too implementation-dependent. It certainly isn't required by the abstraction principle: applications don't care how the underlying data is organized, so long as the operational interface is supported.

The capsule view also encourages the object to be taken as the unit of scoping for the above-mentioned facilities. It then becomes difficult to provide such capabilities as different authorizations to different properties of an object, distribution of an object's properties over different sites, applications accessing different properties of an object without being in contention, exporting different views of an object, and so on.

The capsule view of data even has subtle implications for the semantics of complex objects, extensibility, multiple typing, and object identity.

Similarly, the messaging paradigm precludes or complicates type checking and polymorphism for multi-operand operators.

The capsule view of object data and operators characterizes what we may call first-generation object models, exemplified by Smalltalk [GR] and C++ [St]. Second-generation models generalize this in two ways:

- The semantics of object state are abstracted from its implementations in storage structures.
- Messages sent to single recipients are generalized to operators applied to one or more operands.

The other main principles of object orientation endure in much the same form [A+, K2]: identity, abstraction, data hiding, classification, generalization, inheritance, and polymorphism.

Iris [F1, F2] supports a second-generation object model, although some corollaries for polymorphism, complex objects, and classes are not yet realized. Some other models exhibiting second-generation characteristics include DAPLEX [Sh], TAXIS [MBW], PDM [MD] and Fugue [HZ1].

2 THE EVOLVING CONCEPT OF STATE

2.1 The First-Generation View

The state of a first-generation object is typically modeled as a set of instance variables, allocated in the image of a template associated with a class. Similar formulations express this as a tuple or record whose elements correspond to the values of instance variables. Though rarely articulated, the following assumptions often follow:

- The state corresponds directly to stored data.
- The states of distinct objects are disjoint.
- The state of an object is localized in one place.
- The form of the state is stable over the life of the object.
- The existence and identity of the object are tied to the storage allocated for it.

Such assumptions unduly constrain the semantics and/or implementations of the object model.

2.1.1 Stored Data

The correspondence to stored data is imperfect. Different data might be stored for different implementations of the same operational interface.

Some assertable information need not be stored: the radius, diameter, and circumference of a circle might all be asserted, but only one needs to be stored. Which one(s) to store is a matter of implementation, which should be free to change over time, or at different sites, for performance tuning.
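As a rough Python sketch of this point (the class and operator names are illustrative, not from any particular system), only the radius is stored here, while diameter and circumference can still be asserted and queried through the same interface; a different implementation could store the circumference instead without changing that interface.

    import math

    class Circle:
        """Operational interface: get/set radius, diameter, circumference.
        This implementation happens to store only the radius; callers cannot tell."""

        def __init__(self, radius=0.0):
            self._radius = radius          # the only stored datum in this implementation

        def get_radius(self):
            return self._radius

        def set_radius(self, r):
            self._radius = r

        # diameter: asserted and queried, but never stored
        def get_diameter(self):
            return 2 * self._radius

        def set_diameter(self, d):
            self._radius = d / 2

        # circumference: likewise derived on demand
        def get_circumference(self):
            return 2 * math.pi * self._radius

        def set_circumference(self, c):
            self._radius = c / (2 * math.pi)

    c = Circle()
    c.set_circumference(10.0)              # assert the circumference...
    print(round(c.get_diameter(), 3))      # ...and query the diameter: 3.183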

Conversely, some information that is stored may not be assertable, being stored only to enhance the efficiency of data retrieval by such implementation techniques as caching, materialized views, and forward chaining. A person's age might be recomputed from his birthday on every request, or it might be stored in an instance variable. The distinction should have no bearing on the definition of the object, and again should be tunable.

Some stored information is there for implementation purposes only, and its very existence should be hidden altogether from object users. Pointers and counters in the implementations of lists of values are an example. So are cache management control data, such as a flag indicating whether or not the age field currently contains a valid value.

2.1.2 Disjointness

State variables, or tuples, belonging exclusively to one object or another imply a partitioning of storage among objects. This presumed disjointness of object states unnecessarily constrains the implementation of relationships and inverses.

A first-generation definition of a graph would require that a node "contains" the edges that terminate there, or an edge "contains" its terminal nodes, or both redundantly. There is no option to maintain the graph connections neutrally in some shared mechanism, such as a relation. (While such a neutral form could be realized by defining connections as objects in their own right, it doesn't seem right to require this just to achieve a certain implementation structure.)

Such disjointness is again not visible to users across the abstraction barrier. What matters is the support of operators to alter and query the connectivity of nodes and edges. Whether such connectivity is maintained as a list of edges at a node or a list of nodes at an edge, or both redundantly, or neutrally in a separate relation, seems to be a matter of implementation, which again could be varied or tuned without affecting the operational definition of node or edge objects.
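A minimal sketch, assuming hypothetical connect, get_edges, and get_nodes operators: the connectivity is kept neutrally in a shared relation of (node, edge) pairs, behind the same operational interface that a list-per-node or list-per-edge implementation would support.

    # Connectivity kept neutrally in a shared "relation" of (node, edge) pairs.
    # Neither Node nor Edge contains the other; the operators below are the
    # only interface, so the relation could later be replaced by per-node or
    # per-edge lists without affecting callers.

    class Node: pass
    class Edge: pass

    _connections = set()                      # shared implementation structure

    def connect(edge, node):
        _connections.add((node, edge))

    def get_edges(node):
        return {e for (n, e) in _connections if n is node}

    def get_nodes(edge):
        return {n for (n, e) in _connections if e is edge}

    a, b = Node(), Node()
    e = Edge()
    connect(e, a)
    connect(e, b)
    assert get_nodes(e) == {a, b}
    assert get_edges(a) == {e}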

2.1.3 Localization

The contiguity of state variables (especially when defined by a template for a class) and the unity of a tuple suggest that the state of an object exists entirely in one place. It doesn't matter exactly what we mean by a "place"; it could be a machine, or a node in a network, or a name space, or some other unit of locality.

Such localization unduly constrains the capabilities of an object system. It seems perfectly reasonable to allow different operations on an object to be supported at different places, with corresponding distribution and possible replication of the supporting "state" information. For example, the shape of an airplane wing might be maintained at one place and its manufacturing cost at another. Routing of messages need not be guided solely by the identity of the recipient; it may be appropriate to route different messages to different places for the same object.
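A small sketch of such routing, with made-up site and operation names: the target site is chosen from the requested operation, not from the identity of the recipient alone.

    # Sketch of per-operation routing: the same airplane-wing object is handled
    # at different sites depending on which operation is requested. Site names
    # and operation names here are illustrative, not from any real system.

    ROUTES = {
        "get_shape": "engineering-site",
        "get_cost":  "manufacturing-site",
    }
    DEFAULT_SITE = "home-site"

    def route(operation, oid):
        site = ROUTES.get(operation, DEFAULT_SITE)
        return f"send {operation}({oid}) to {site}"

    print(route("get_shape", "wing-42"))   # engineering-site
    print(route("get_cost",  "wing-42"))   # manufacturing-site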

Localization of state information might simplify the design of message routing, authorization, locking, and similar facilities - but it's not consistent with the kind of location transparency that should be afforded the user.

Implementations might be stretched to accommodate such distribution. It might be possible to parcel out state variables to different places; it's harder to imagine such apportioning for tuples, particularly if the distribution might change over time.

2.1.4 Stability of Form

Treating an object as a tuple, or as being allocated in the image of a template, makes it difficult to support dynamic evolution of the form of an object. It is for this reason that many first-generation models have difficulty with extensible types (adding or removing messages from the operational interface conferred by a type), and also with dynamic typing (acquisition or loss of types by objects, as when a document becomes a product).

2.1.5 Objects and the Storage Resource

First-generation concepts of object existence and identity may be too closely linked to a particular approach to storage management.

What does it mean to create an object? Operational semantics are generally expressed in terms of a creation operator, applied to a type of which the object is to be an instance. Information about that object can be asserted at the time of creation or afterwards.

The operational consequences:

- References to the new object become valid.
- Operations defined for its type become applicable to it.
- Information asserted about it is remembered and returned by subsequent operations.

The operational consequences of destroying an object are analogous:

- References to the object cease to be valid.
- Operations are no longer applicable to it.
- Information previously asserted about it is no longer retrievable.

Deletion of an object may render other objects "unreachable", i.e., there are no longer any operations which return these objects. For example, such objects may previously have only occurred in the results of operations applied to the deleted object. It is no longer useful to remember the existence of such unreachable objects or any information involving them. Operationally, they are as good as deleted. This is a recursive notion which may in turn render other objects unreachable.

Unreachable objects are, in effect, the same as deleted objects. Actual detection and automatic deletion of unreachable objects is often implemented in order to conserve the storage resource. This automatic deletion of unreachable objects is one meaning of the term "garbage collection".
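The recursive notion can be sketched as a reachability computation; here returns[x] stands for the set of objects obtainable by applying operations to x, and the roots are objects reachable by other means. The structure is assumed for illustration only.

    # Sketch: detect objects that become unreachable once an object is deleted.
    # `returns[x]` is the set of objects obtainable by applying operations to x;
    # `roots` are objects reachable in some other way (names, class extents).

    def collect_unreachable(deleted, returns, roots):
        reachable = set()
        stack = [r for r in roots if r != deleted]
        while stack:
            obj = stack.pop()
            if obj in reachable or obj == deleted:
                continue
            reachable.add(obj)
            stack.extend(returns.get(obj, ()))
        return set(returns) - reachable - {deleted}

    returns = {"doc": {"fig"}, "fig": set(), "memo": set()}
    print(collect_unreachable("doc", returns, roots={"doc", "memo"}))
    # {'fig'}: once doc is deleted, nothing returns fig any more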

We can imagine several implementation scenarios for managing the storage resource for an object. A common implementation of first-generation models allocates space at object creation to hold information for the object, formatted in the image of a template associated with the type under which the object was created. Space is reserved for recording various assertions, which might be filled in when the object is created or later. Object deletion deallocates and recovers this chunk of space.

Storage allocation and deallocation are thus primarily associated with object creation and deletion. However, other operations may also require storage management activity:

- Assigning a longer value to a variable-length property may require additional space.
- Adding members to a set may require additional space.
- Assigning shorter values, or removing members from sets, liberates space.

Recovering the storage resource liberated by deleting objects or by shortening values and sets in other objects is another meaning of the term "garbage collection".

That scenario supports a first-generation view of state, treating objects as records or tuples, presuming object state to be localized and disjoint.

There are alternative scenarios which allocate space more dynamically. At minimum, object creation might in itself do no more than validate and return a new object identifier. Space could be dynamically allocated as needed to record assertions, and recovered for retractions.
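A sketch of this minimal scenario, with an assumed shared store of (object, property) assertions: creation only issues a fresh identifier, and space is consumed per assertion and recovered per retraction.

    import itertools

    # Sketch of the dynamic scenario: creating an object only issues a fresh
    # identifier; space is consumed per assertion and recovered per retraction.

    _next_oid = itertools.count(1)
    _facts = {}                              # (oid, property) -> value

    def create():
        return next(_next_oid)               # no per-object chunk is allocated

    def assert_fact(oid, prop, value):
        _facts[(oid, prop)] = value          # space allocated only as needed

    def retract(oid, prop):
        _facts.pop((oid, prop), None)        # space recovered immediately

    def query(oid, prop):
        return _facts.get((oid, prop))

    p = create()
    assert_fact(p, "birthday", "1950-07-04")
    print(query(p, "birthday"))              # 1950-07-04
    retract(p, "birthday")
    print(query(p, "birthday"))              # None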

Such alternative scenarios allow more flexibility in the implementation of objects, such as relaxing implicit assumptions of disjointness and locality. The semantics of objects, defined by their operational interfaces, ought to be neutral to such implementation choices.

2.1.6 Corollaries

We've mentioned some consequences of the first-generation philosophy of storage management. Semantic implications include localization of objects, and reluctance to support extensible types or dynamic typing.

Reluctance to support dynamic typing leads to reluctance to support static multiple typing as well, i.e., allowing an object to be a direct instance of multiple types (something might be both a document and a product, with neither being a subtype of the other). Multiple inheritance - a type having several supertypes - seems to be tolerable; having to establish composite templates seems acceptable when types are defined, but not when instances are created.

Other consequences are more subtle. Pre-allocation of templated space may depend on assumptions about uniformity or predictability of value sizes, in turn leading to presumptions about fixed lengths for object identifiers.

This in turn complicates the semantics of sets as objects (and also the semantics of large literal values, for much the same reasons). In order to have the option to treat sets extensionally, i.e., as though their identity was determined by their membership, such sets ought to behave as though their object identifiers consisted of a canonically ordered list of their members. Then equality of two such sets would simply be established by comparing their oid's. This approach is, of course, impractical to implement. The need to implement set identifiers in short strings leads to thinking of those short strings as oid's, which leads to thinking of the sets as being intensional, their identity being independent of their membership.

This in turn gives rise to the distinction between "shallow equality" of sets based on matching oid's and "deep equality" based on matching membership. And this in turn is generalized to distinguish shallow and deep equality for any object, based on the presumption of an inherent content concept for objects.
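A sketch of the distinction, using illustrative member identifiers: deep equality compares canonically ordered membership, while shallow equality compares the short oid's.

    # Sketch: two set objects with distinct oids but the same members are
    # deep-equal though not shallow-equal. Member oids are strings here only
    # for illustration.

    class SetObject:
        def __init__(self, oid, members):
            self.oid = oid
            self.members = set(members)

    def shallow_equal(s1, s2):
        return s1.oid == s2.oid                      # identity by short oid

    def deep_equal(s1, s2):
        # identity as if the oid were the canonically ordered member list
        return sorted(s1.members) == sorted(s2.members)

    a = SetObject("set-1", {"e2", "e1"})
    b = SetObject("set-2", {"e1", "e2"})
    print(shallow_equal(a, b), deep_equal(a, b))     # False True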

We also make too much of the metaphor of containment in modeling complex objects, which are made up of other objects. We seem to feel that if one object is part of another, as a diagram is part of a document, then the space occupied by the diagram should be embedded in the space occupied by the document. This in turn makes us reluctant to support shared sub-objects; if the diagram is used in two documents, we don't readily know how to make it be in two places at the same time.

There really is a remarkable chain of consequences that follow from identifying objects with storage constructs.

2.2 The Second-Generation View

2.2.1 Memory of Assertions

The essence of object state is memory of asserted information. The semantics of state are describable entirely in terms of operational interfaces, without recourse to internal constructs such as state variables.

Operations having the effect of assertions can be described by their effect on other operations, in the spirit of abstract data types. The effect of Set-Birthday alters the future results of Get-Birthday and Get-Age. The effect of Connect(edge,node) alters the future results of Get-Nodes(edge) and Get-Edges(node).
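In that spirit, a brief Python sketch: the only definition of Set-Birthday is its effect on the later results of Get-Birthday and Get-Age; that this particular implementation happens to store the birthday in an attribute is not part of the definition.

    from datetime import date

    # Sketch: the effect of Set-Birthday is visible only through Get-Birthday
    # and Get-Age; nothing about instance variables is part of the definition.

    class Person:
        def set_birthday(self, birthday):
            self._birthday = birthday        # one possible implementation

        def get_birthday(self):
            return self._birthday

        def get_age(self, today=None):
            today = today or date.today()
            b = self._birthday
            return today.year - b.year - ((today.month, today.day) < (b.month, b.day))

    p = Person()
    p.set_birthday(date(1950, 7, 4))
    print(p.get_birthday())                          # 1950-07-04
    print(p.get_age(today=date(1989, 12, 1)))        # 39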

The nature of object state is implicit in the lingering effect of such operations. A more precise definition of state may be difficult to come by. It may be the current value of all operations applicable to an object; it may be the current value of operations that directly test assertable information. Is age part of a person's state, if it is determined from birthdate? Does state include the values of operations that take other arguments as well? If a document has different lengths in different languages, is its length part of the state of the document?

It may not matter. Do we need a more precise definition of state?

2.2.2 Explicit and Implicit Methods

Implementation has to get to stored data eventually. In second-generation models, instance variables are internalized from the object abstraction into the implementation. Second-generation models are less prescriptive about structure, leaving it more as local options for implementation in whatever sort of structures are available and judged to be efficient.

Implementations are expressed in methods, which in first-generation models are explicitly specified procedures. In second-generation models, implementations can also be specified as mappings to underlying storage structures, implicitly defining the corresponding methods in terms of access to the structures.

In Iris, for example, the implementation of an Age operator might be specified as a mapping to two columns of a relation (this is even provided as a default specification). The consequent implicit method for this operator is a table lookup. Furthermore, a Set Age operation thereby has an implicit method which does table update.
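The following sketch is not Iris syntax; it only illustrates the idea that declaring a mapping from an operator to the columns of a table implicitly yields both the lookup and the update method.

    # Sketch (not Iris syntax): an operator is implemented by declaring a mapping
    # to two columns of a relation; the lookup and update methods are implicit,
    # generated from the mapping rather than written by hand.

    PERSONS = {}                                  # relation: key column -> value column

    def mapped_property(table):
        def getter(oid):
            return table.get(oid)                 # implicit method: table lookup
        def setter(oid, value):
            table[oid] = value                    # implicit method: table update
        return getter, setter

    get_age, set_age = mapped_property(PERSONS)   # "Age" mapped to the two columns

    set_age("person-7", 39)
    print(get_age("person-7"))                    # 39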

2.2.3 The Content of Objects

In the second-generation view, objects don't have an inherent notion of "content", although there may be fairly natural defaults for some objects. The content concept for documents may appear self-evident - unless the documents have different texts in different languages. Or we may be dealing with the existence of a document even before it has any text. On the other hand, there is no obvious notion of content for a person. Or an airplane: do we mean its passengers and cargo, or the components of which it is built, or its flight log, or its manufacturing and testing history, or its design drawings, or what?

Without an inherent notion of content, objects in themselves do not make a natural unit of scoping for authorization, locking, copying, or export (e.g., for display or for shipment across a network). They are not natural units in any case. We can imagine differential authorization to different pieces of information about an object; we can imagine applications operating on different information about an object without being in contention (locking each other out); we can imagine different subsets of the information about an object being exported.

To support such notions of content, we need a notion of views [Wi, HZ2], allowing multiple overlapping views of the same object in different contexts. Views may be used to define both the content and format of chunks of information to be exported. Simple cases are achieved via default views.

2.2.4 Where's the Object?

If we can't point to an object as occupying some definite chunk of storage, then where's the object? How do we know it exists?

A user knows an object exists because references to the object are valid, and operations can be applied to it. The user doesn't care where an object is, only that operations applied to it behave appropriately.

When a user really "sees" something on a display screen, he sees a "presentation object" as a specified formatting of a specified scope of information, i.e., a view. It may denote some other object in the same way that a photograph denotes its subject, but it is a different object. The object on the screen may be operated on, e.g., moved or resized, independently of operations on the object it denotes. Our obligation is to collect and export such a presentation object on demand, but we can do it any way our wits contrive. Of course, we'll do it more efficiently if the presentation object corresponds closely to stored data structures - but that's a tuning option, which may require load balancing among conflicting requirements.

It may be disquieting to implementers not to have a sense of "there's the object", but users won't know the difference.

2.2.5 Complex Objects

How does a user know that a diagram is part of a document?

- An operation that lists the parts of the document returns the diagram.
- The diagram appears when the document is displayed or printed.
- Copying, exporting, or destroying the document may affect the diagram as well.

We can define and do all those things operationally, without necessarily embedding the diagram into the storage space occupied by the document. Of course, we'll try to implement it that way whenever we can, to do things faster.

But at the same time we want to provide a rich enough set of capabilities. We'd like to be able to use the same diagram in several documents. We'd like to give the user an option to destroy a document by dismantling its parts, rather than by burning them. We shouldn't be constrained in those semantics by taking the metaphor of containment too literally in terms of storage space.

2.2.6 Separation of Type and Class

First-generation models cleanly distinguish interface from implementation on the processing side, by distinguishing between the operations invoked (messages sent) by users and the methods which implement them. The corresponding distinction is not made for data.

To talk about this, we need the terms "type" and "class". Of the many definitions extant for these two terms, we choose the following:

- A type specifies an operational interface: the set of operators (messages) applicable to its instances.
- A class specifies an implementation: the methods and state structures which support those operators for its instances.

A class typically specifies a set of methods as implementations for a set of operators (messages), and a set of instance variables or a tuple template as the format for instance state data. Different classes may provide different methods and instance variables to implement the same operations. Abstraction is served by requiring applications to use the specified operators rather than manipulating the state data directly.

However, first-generation languages do breach the abstraction barrier in some respects. Applications create objects as instances of classes, make decisions based on the classes of objects, and iterate over instances of classes. Such applications are implementation-dependent, since they cannot be freely used with different classes implementing the same operational interfaces.

Second-generation languages would do all these things in terms of types rather than classes. Clearly, when an object is created, the system must know in which class it is to be implemented. However, rather than being specified explicitly by the application, the class of a new object would be deducible from the type under which the application creates the object. Typically the object is being created within some scope (machine, database, etc.) in which there is only one possible implementation locally available. Thus the choice of class might be embedded in the local methods supporting the creation operators for various types.
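A sketch of that arrangement, with illustrative names: the application creates in terms of a type name, and the locally installed creation method picks the implementing class.

    # Sketch: applications create and test objects in terms of types; the class
    # that implements a type is chosen by the creation method installed locally.
    # Names below (Document, SimpleDocumentImpl) are illustrative only.

    class SimpleDocumentImpl:
        """One local implementation (class) of the Document type."""
        def get_text(self):
            return ""

    _local_classes = {"Document": SimpleDocumentImpl}   # per-site choice

    def create(type_name):
        # the application names only the type; the class is deduced here
        return _local_classes[type_name]()

    doc = create("Document")
    print(type(doc).__name__)        # SimpleDocumentImpl, but callers need not know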

Second-generation models would thus distinguish between type graphs visible to object users and class graphs visible only to object implementers.

3 THE EVOLVING CONCEPTS OF MESSAGES AND OPERATORS

In first-generation object models, a message is sent to a single recipient object, to be serviced by a method "contained" in the object. In second-generation object models, an operator is applied to one or more operand objects.

Since messages can generally include other objects as auxiliary arguments, the difference might appear to be purely syntactic: one of the operands of an operator can be distinguished as the recipient, with the others corresponding to arguments. There is a simple syntactic correspondence between a message taking one argument

document.print(device)

and a binary operation

print(document,device).

However, there are deeper distinctions.

3.1 Operational Interfaces and Type Checking

The operational interface of an object describes the applicability of messages (operators) to the object. First-generation models characterize an object by the messages it may receive, but not by its ability to be an argument for other messages. For instance, if a document can receive a message like

document.print(device),

the document's operational interface will say that it can receive a print message, but the device's operational interface says nothing about its ability to serve as an argument of the print message.

Type checking verifies that a message sent to an object is in the operational interface of the object. The same sort of verification is necessary for the arguments; some first-generation models support typed message signatures for this purpose. Second-generation models take a unified approach, using a single mechanism combining operational interfaces and typed signatures.

3.2 Symmetry

Many operators naturally take multiple operands, and it is quite artificial to require one of them to serve as the principal recipient. Examples abound in arithmetic, logic, and set theory, just for starters.

Many operators deal with relationships among objects: connecting nodes and edges, connecting wires and pins, printing documents on devices, computing registration fees for students in courses, counting parts in warehouses, or components in assemblies, and so on. No useful purpose is served by requiring the message to go to the node rather than the edge, to the wire rather than the pin, etc.

It is not obvious why object designers have to choose which is the recipient. What are the consequences of the choice, or of changing it?

3.3 Ownership

The main hangup seems to be a vestige of encapsulation, a feeling that each message and method needs to have a home, to be owned by some object. Some models, and most implementations, generalize this a bit so that messages and methods belong to types and classes, respectively, rather than to individual objects. But there is still an assumption of a single owner.

Who owns a multi-operand operator or method?

How about joint ownership? It seems perfectly reasonable that an operator, or a method, be owned jointly by several objects (or types/classes), corresponding to each of its operands. Instead of saying that a method is executed by an object when it receives a message, our scenario says that a method is executed when an operation is applied to a set of joint owners. It works just as well.

3.4 Polymorphism and Inheritance

Polymorphism and inheritance in first-generation models are defined in terms of message recipients. For the message

document.print(device),

different methods can be provided for different subclasses of documents, but no corresponding provision is made for subclasses of devices.

Second-generation object models support polymorphism and inheritance symmetrically for all operands. Different methods could be defined for different combinations of document subclasses and device subclasses. The algorithms are more complex over multiple operands, but tractable. There are ambiguous and unambiguous cases, as before.
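A rough sketch of such dispatch: methods are registered for combinations of operand types, and resolution walks the type hierarchy of every operand rather than that of a distinguished recipient. The search order below favors earlier operands and sets aside the ambiguous cases mentioned above; it is an illustration, not a full resolution rule.

    from itertools import product

    # Sketch of symmetric dispatch: methods are registered for combinations of
    # operand types, and resolution considers the superclasses of every operand,
    # not only those of a distinguished recipient.

    class Document: pass
    class Report(Document): pass
    class Device: pass
    class Plotter(Device): pass

    _methods = {}                                     # (op, (type, type, ...)) -> method

    def define_method(op, types, fn):
        _methods[(op, types)] = fn

    def apply_op(op, *operands):
        # try the most specific combination first, generalizing later operands first
        for combo in product(*(type(o).__mro__ for o in operands)):
            fn = _methods.get((op, combo))
            if fn:
                return fn(*operands)
        raise TypeError(f"no method for {op} on {[type(o).__name__ for o in operands]}")

    define_method("print", (Document, Device), lambda d, v: "generic print")
    define_method("print", (Report, Plotter), lambda d, v: "plot the report")

    print(apply_op("print", Report(), Plotter()))     # plot the report
    print(apply_op("print", Report(), Device()))      # generic print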

3.5 A Role Model of Operational Interfaces

In second-generation models, the operational interface of an object is expressed in terms of the roles it may play in various operations, rather than directly in terms of the operations (messages) which may be applied to it. Thus the role concept is itself formally introduced into the object model.

Consider the operation

Set-Fare(Origin City, Destination City, Fare Integer).

It is not sufficient to characterize the operational interfaces of cities and integers simply by saying that Set-Fare is applicable to them. That fails to express the illegality of an integer as the first operand or a city as the third.

The operational interface of a city should be characterized by the fact that it may occur as the first or second operand of Set-Fare. The terms "Origin" and "Destination" label these two roles. They serve as intermediaries linking types with operations. That is, the Origin role belongs to the Set-Fare operation, and the role may be played by objects of type City.
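A small sketch of roles as such intermediaries: each role belongs to the Set-Fare operation, names the type whose instances may play it, and type checking proceeds role by role. The representation is assumed for illustration.

    # Sketch: roles as intermediaries between types and operations. Each role
    # belongs to the operator and names the type whose instances may play it;
    # checking an invocation is done role by role.

    class City: pass

    SET_FARE_ROLES = [("Origin", City), ("Destination", City), ("Fare", int)]

    def check_operands(roles, operands):
        if len(roles) != len(operands):
            raise TypeError("wrong number of operands")
        for (role, required), operand in zip(roles, operands):
            if not isinstance(operand, required):
                raise TypeError(f"operand for role {role} must be a {required.__name__}")

    sfo, jfk = City(), City()
    check_operands(SET_FARE_ROLES, (sfo, jfk, 300))      # accepted
    try:
        check_operands(SET_FARE_ROLES, (300, jfk, sfo))  # an integer cannot play Origin
    except TypeError as e:
        print(e)                                         # operand for role Origin must be a City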

Roles bear some resemblance to types, in the sense that we can say that certain cities are origins, or are destinations. The difference here is between pre-condition and post-condition. It is a prerequisite that an object be a city in order to be the first operand of Set-Fare; the object becomes an origin as a consequence. The fact that the object is an origin is not something which can be type-checked a priori in order to determine the applicability of Set-Fare. In this sense, roles are not directly relevant to type checking. (The role may become a type for type-checking other operators. E.g., it might be a prerequisite for some other operator that the object be playing the role of an origin.)

Roles might be labeled in various ways. Usually the labels are not globally unique, but only unique within a given operator. Origin and Destination might also be roles played by nodes in a graph. Roles might simply be numbered, e.g., the first and second roles. Roles could also be labeled with their associated types, if unambiguous, i.e., if the operator only has one role of that type.

With more complex aggregate operands, roles could even label the internal portions of aggregates where types of objects may occur. For example, an operator definition of the form

Op-1: L1 List of (I1 Integer, C1 City, S1 Set of C2 City)

means that cities can play roles C1 and C2. In a list which is a valid operand to Op-1, a city may occur as the second element in the list or as a member of a set which is the third element of the list.

Thus the operational interface of an object is characterized by a set of roles it may play, linking the object indirectly to the operations in which it may play those roles. By extension, the operational interface conferred by a type is characterized by such a set of roles. Instances of the type may play those roles.

An operator does not belong exclusively to one type. It may be linked to one or more types via its roles.

Unary operators, having only one role, do in effect belong to only one type. In this sense, unary operators correspond to messages in first-generation object models. However, auxiliary arguments of messages are still inadequately accounted for in first-generation models.

The same sort of role structure can be used to link methods and classes.

We might note a resemblance between roles and variables. The labeling of roles resembles the declaration of typed variables. Objects bound to variables at operator invocation are playing the corresponding roles. Maybe roles are persistent variable objects. This warrants further exploration.

4 CONCLUSIONS

Though object orientation is itself the next generation of information processing, we can already see beyond the first generation of object models. The object model is evolving. We may well interpret "object orientation" as signifying a direction toward a goal as yet unattained.

Second-generation models generalize the treatment of object state and messaging, modifying the encapsulation concept. Second-generation object models preserve the following principles of object orientation:

- Identity
- Abstraction
- Data hiding
- Classification
- Generalization
- Inheritance
- Polymorphism

Second-generation models also have these characteristics:

- Object state is abstracted from storage structures, being defined only as the memory of asserted information.
- Operators may be applied to one or more operands, rather than messages being sent to single recipients.
- Polymorphism and inheritance are defined symmetrically over all operands.
- Operational interfaces are expressed in terms of the roles objects may play in operations.
- Types (interfaces) are separated from classes (implementations).
- Views provide units of scoping when a content or capsule notion is needed.

The encapsulation principle takes on different nuances. Abstraction and hiding still, in a sense, "encapsulate" the form and content of implementations; there is a "shell" which object users cannot penetrate.

That shell is a collective one covering implementations of all objects. There is less of a sense of encapsulating individual objects as packages or chunks of data and operations. The states of several objects may share pieces of stored data, and the stored data for an object may be distributed over several places. An operator is not necessarily contained in an individual object or type, but might be jointly owned by a set of owners corresponding to the operands of the operator.

When a "capsule" is needed, view constructs can define units of scoping for such purposes as copying, export, authorization, and concurrency.

Second-generation models adhere more strictly to the abstraction principle, by separating state from structure and storage. Freed from constraints of the storage metaphor, second-generation models are more likely to support extensible types, dynamic typing, multiple typing, and sophisticated semantics for complex objects.

Structure is implementation and metaphor. Structural metaphors help explain behavior, but they mustn't be taken too literally. Simplified explanations rely on such metaphors; precise understanding must see through them.

5 ACKNOWLEDGMENTS

Thanks to Walt Hill for stimulating and provocative discussions on the subject. Walt observed that this evolution of the object model is approaching the notions of abstract data types [BW,GT], and that similar developments can be seen in CLOS [B+].

6 REFERENCES

[A+] M. Atkinson, F. Bancilhon, D. DeWitt, D. Maier, S. Zdonik, "The object-oriented database system manifesto (a political pamphlet)", Working paper, November 1988.

[B+] D.G. Bobrow, L. DeMichiel, R.P. Gabriel, G. Kiczales, D. Moon, S. Keene, "The Common Lisp Object System Specification: Chapters 1 and 2", Technical Report 88-002R, X3J13 standards committee document, 1988.

[BW] Kim B. Bruce and Peter Wegner, "An Algebraic Model of Subtypes in Object-Oriented Languages", working paper, May 1986.

[F1] D.H. Fishman, et al, "Iris: An Object-Oriented Database Management System", ACM Transactions on Office Information Systems, Volume 5 Number 1, January 1987. Also in [ZM1].

[F2] Dan Fishman, et al, "Overview of the Iris DBMS", Object-Oriented Concepts, Databases, and Applications, Kim and Lochovsky, eds, Addison-Wesley, 1989.

[GT] J.A. Goguen and J. Tardo, "An Introduction to OBJ: A Language for Writing and Testing Software Specifications", Specification of Reliable Software, IEEE, 1979.

[GR] A. Goldberg and D. Robson, Smalltalk-80: The Language And Its Implementation, Addison-Wesley, 1983.

[HZ1] Sandra Heiler and Stanley Zdonik, "FUGUE: A Model for Engineering Information Systems and Other Baroque Applications", Proc. Third Intl Conf on Data and Knowledge Bases, Jerusalem, 1988.

[HZ2] Sandra Heiler and Stanley Zdonik, "Object Views: Extending the Vision", Proc. Sixth Intl Conf on Data Engineering, Los Angeles, Feb. 1990.

[K1] William Kent, "The Leading Edge of Database Technology", in E.D. Falkenberg, P. Lindgreen (eds), Information System Concepts: An In-depth Analysis (Proc. IFIP TC8/WG8.1 Working Conference, Oct. 18-20 1989, Namur, Belgium), North Holland, 1989. Also Proc. Eighth International Conference on the Entity Relationship Approach, Oct. 18-20 1989, Toronto, Canada.

[K2] William Kent, "Object-Oriented Database: New Roles and Boundaries", InfoDB (to appear).

[K3] William Kent, "Why Should There Be An Object-Oriented Data Model?", in preparation.

[Ma] David Maier, "Why Isn't There an Object-Oriented Data Model?", Technical Report CS/E-89-002, Oregon Graduate Center, 2 May 1989.

[MD] Frank Manola and Umeshwar Dayal, "PDM: An Object-Oriented Data Model", Proc 1986 IEEE International Workshop on Object-Oriented Database Systems, Asilomar, California, Sept. 23-26, 1986 (K. Dittrich and U. Dayal, eds). Also in [ZM1].

[MBW] John Mylopoulos, Philip A. Bernstein and Harry K.T. Wong, "A Language Facility for Designing Database-Intensive Applications", ACM Transactions on Database Systems 5:2, 1980. Also in [ZM1].

[Sh] D. Shipman, "The Functional Data Model and the Data Language DAPLEX", ACM Transactions on Database Systems 6:1, 1981. Also in [ZM1].

[St] Bjarne Stroustrup, The C++ Programming Language, Addison-Wesley, Reading, Mass., 1986.

[Wi] Gio Wiederhold, "Views, Objects, and Databases", IEEE Computer, Dec. 1986.

[ZM1] Stanley Zdonik and David Maier, editors, Readings in Object-Oriented Database Systems, Morgan Kaufmann, San Mateo, California, 1989.

[ZM2] Stanley Zdonik and David Maier, "Fundamentals of Object-Oriented Databases", in [ZM1].