Object-Oriented Application Engineering: The ITHACA Approach
When the ITHACA project commenced in 1989, the organisations involved believed that a new approach was vital in order to meet the challenge of software development in the 1990s. They developed an application support system which incorporates advanced technologies in the fields of object-oriented programming in general and programming languages, database technology, user interface systems and software development tools in particular. ITHACA is an integrated and open-ended toolkit which exploits the benefits of object-oriented technologies for promoting reusability, tailorability and integratability, factors which are crucial for enhancing software quality and productivity.
Object-oriented systems were first developed some 25 years ago and have since progressed well beyond mere academic experiments. Object-orientation has clearly left the purely academic realm to emerge as a force to be reckoned with, and the feeling among IT users is that this technology will provide numerous benefits in the future. The awareness of what object-oriented systems have to offer has increased considerably in industrial circles.
The benefits which this technology is able to realise are manifold and would appear, even at first glance, to be tailored to the current and future needs of IT users.
The full impetus of these advantages can best be seen in large and complex commercial application and information systems. Commercial application systems model the organisation of a company. As such, they are expected to be flexible and easy to maintain to conform to the changing needs of each organisation. Furthermore, organisational issues of different companies or within a company are not identical. However, many "patterns" within organisational structures resemble one another or build abstractions of one another (e.g., accounts and their transactions, application and balancing procedures etc.).
- Object technology improves productivity thanks to its ability to model all levels of complexity. Object-oriented systems are sufficiently flexible to allow them to be adapted quickly to changes in user requirements. Reuse of proven, generic modules cuts the development time for and the overall size of new systems. These factors make it easier to prototype, implement and maintain object-oriented systems.
- Regardless of the complexity of a system, object technology is capable of coping. General structures can be defined and, as the system takes shape, honed to produce the final tailor-made application thanks to the features mentioned above.
- A piece of object-oriented software is designed to accommodate changes with less effort. Enhancements can be made, obsolete elements removed and modifications affecting the entire software system integrated with a minimum of effort.
- Object-oriented systems can be designed for reuse to a high degree. A piece of software will work together with another piece of software when a suitable interface to its counterparts is provided. If they function smoothly in one context, these reusable components (or artifacts) can do the same in another environment with no or only minor adaptation. Libraries and repositories can be used to store these artifacts for reuse at a later stage.
- Finally, the fact that the data and procedures of object-oriented systems are packaged together as one entity means that the system can be maintained with a relatively small amount of effort. Unlike traditional approaches, the overall effect of any maintenance work carried out on a particular data structure is restricted to only one area of the entire system.
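The maintenance argument above can be illustrated with a minimal example (plain Python, not from the ITHACA system; the class is invented for illustration): because data and procedures are packaged together in one class, a change to the internal data representation stays confined to that class.

```python
class Account:
    """An account whose balance representation is hidden behind methods."""

    def __init__(self):
        # Internal representation: this could later be changed, e.g. to a
        # transaction log, without touching any client code.
        self._balance_cents = 0

    def deposit(self, amount_cents):
        if amount_cents <= 0:
            raise ValueError("deposit must be positive")
        self._balance_cents += amount_cents

    def balance(self):
        return self._balance_cents


acct = Account()
acct.deposit(150)
print(acct.balance())  # clients depend only on the interface, not the data layout
```

Client code never touches `_balance_cents` directly, so maintenance work on the data structure is restricted to one area of the system, as the text argues.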
Within ITHACA, the Consortium targeted the object-oriented approach to the development of large and complex commercial applications. When the project started in 1989, each partner of the Consortium already had a medium to extensive level of experience with object technology in general and with application development in particular. At this point in time, it was already foreseeable that the above-mentioned advantages could only be exploited by supporting the whole software life-cycle with appropriate tools. The main issues hindering the efficient use of object-orientation for application development were that

- object technology was not supported by any adequate life-cycle model; existing life-cycle models were insufficient since they did not support any notion of reusability - the core advantage of object technology;
- no suitable tools were available for supporting the early phases of object-oriented development or the programming tasks through integrated programming environments;
- object-oriented programming did not support any notion of persistence, i.e., it had no inherent mechanisms which allow a database - probably the most important component of any information processing system - to be incorporated simply. This had, in the past, limited the use of object-oriented programming to the development of system software, tools and user interface systems.

When the ITHACA project commenced, the aim was therefore to develop an integrated application support system based on the object-oriented programming approach. In the original project description, the system was to be designed in such a way that it incorporated a wide range of features designed to

- ensure reusability at various levels of application development and
- guarantee a high application quality.

In addition to the benefits to be gained from the object-oriented approach, it was planned to use existing and emerging standards to help achieve this goal.

The underlying objective was to create a platform which would enable a wide range of applications to be developed quickly, reliably and at a low cost, an approach reflected to some degree in a number of existing developments, but which, as a result of the innovative nature of object technology and the leading-edge methods involved, had not gained a firm foothold in the marketplace.
The ITHACA system developed consists of the following components:

- Object-oriented 4GL environment
- Object-oriented software information base
- Application development environment
- Application support environment

Fig. 1: The ITHACA System
The 4GL environment comprises a persistent, object-oriented programming environment (language and tools), libraries for access to relational database systems, a co-existing, fully integrated, structurally object-oriented database system, as well as a runtime system geared to this. The programming environment is designed in such a way that it is capable of integrating both object-oriented programming languages and procedural languages. A particularly innovative feature of this programming environment is the concept of persistence. This enables objects to be stored permanently, a feature which other object-oriented languages have as yet failed to provide in a comfortable and integrated way. Further tool support is provided for multi-language debugging and monitoring, comfortable static analysis with user-defined queries, system-independent access to graphical user interfaces as well as a rich programming library.
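The idea behind persistence can be sketched roughly as follows, with Python's standard `shelve` module standing in for the 4GL environment's integrated persistent object store (an illustration only; the `Customer` class and key are invented, and CooL's actual mechanism is integrated into the language rather than library-based):

```python
import os
import shelve
import tempfile

class Customer:
    """An ordinary in-memory object that we want to outlive the program run."""
    def __init__(self, name, credit_limit):
        self.name = name
        self.credit_limit = credit_limit

path = os.path.join(tempfile.mkdtemp(), "store")

# Store an object: it survives beyond this block of code.
with shelve.open(path) as db:
    db["c42"] = Customer("Alpine Media", 10_000)

# A later "run" retrieves the same object without hand-written
# serialisation code -- the essence of integrated persistence.
with shelve.open(path) as db:
    c = db["c42"]
    print(c.name, c.credit_limit)  # prints: Alpine Media 10000
```

The point of the sketch is that application code manipulates persistent objects with the same operations as transient ones; the store, not the programmer, handles their permanence.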
The object-oriented software information base is used to store software information which constitutes the basic building blocks with which the developer works. It cooperates closely with the tools of the application development environment. Particular consideration has been given to providing support for an object-oriented life-cycle and methodology geared towards streamlining the configuration process called for by the applications.
The information stored in the software information base is organised in the form of generic application frames and is selected interactively with the aid of a selection tool. The programmer is also supported by a requirement collection and specification tool which, together with detailed user specifications, serves to specialise the generic frames to produce specific application frames. The software components selected are customised by means of a combination of programming and scripting. A visual scripting tool is used to graphically connect visual representations of objects and to interactively construct the applications. Over and above this, a methodology which aims in particular at generating specific applications from generic applications is provided by the ITHACA object-oriented methodology (IOOM).
The application support environment provides application users with an advanced user interface, an activity coordination facility and a management system in order to offer assistance in modelling cooperative task solutions.
Two procedures were employed to validate the environment. First, a bootstrap technique was used to develop the programming environment and the object-oriented database system in the environment's own language, the aim being to prove that the environment is also suitable for system programming. Secondly, several sample applications, which include a generic office model for use in a wide range of application scenarios, were developed to illustrate the usefulness of the environment for application development and to provide building blocks for further reuse.
The goal of the ITHACA application development environment is to reduce the long-term costs of application development and maintenance for standard applications in selected application domains. By ``standard'' applications we mean classes of similar applications that share concepts, domain knowledge, functionality and software components. Standard applications are classified in ITHACA according to application domains: the key assumption is that selected application domains can be adequately characterised in order that an individual application may be constructed largely from standard object-oriented software components belonging to that domain. Therefore, achieving reusability of not just software but also of development experience is an essential activity of this approach.
The key benefit expected of this approach is that applications developed using the ITHACA environment will be flexible and open-ended: it should be possible both to develop applications quickly and flexibly and to reconfigure applications to adapt to evolving requirements. This, in turn, implies the need for a different kind of software life-cycle in which the long-term development and evolution of reusable software proceeds in parallel with the short-term development of specific applications.
It should be noted that such an approach is quite different from stepwise refinement as it is traditionally practiced. Stepwise refinement leads the software engineer to split a complex problem into less complex subproblems by means of iteration. Reuse of existing components is not considered to be a goal of the process and, if at all, is achieved by accident. In general, stepwise refinement results in recoding of similar programs.
Accordingly, we distinguish between two activities:
Fig. 2: The ITHACA Software Life-Cycle
- Application Engineering, which is the activity of abstracting the domain knowledge for selected application domains, developing reusable software components to address these domains, and encapsulating this knowledge into reusable software information;
- Application Development, which is the activity of developing a specific application by reusing available components through the ITHACA development tools.
Initially, reusable software components and domain knowledge may be developed by re-engineering existing applications in an object-oriented way (that is, so as to factor out common functionalities). As application developers encounter more demanding requirements, the software base must evolve to improve its generality and reusability. Application engineers and application developers thus cooperate in a producer/consumer relationship.
The product of application engineering is structured as so-called ``Generic Application Frames'' (GAFs) and organised in a repository, or ``Software Information Base'' (SIB) according to application domain.
The application developer is then expected to proceed in the following way to produce a specific application:

- Select a Generic Application Frame: using only a rough sketch of the application requirements, the developer searches and browses the SIB to find an Application Frame corresponding to the application domain and requirements of the application being designed.
- Select useful components: the Application Frame drives requirement collection and specification according to pre-existing, generic specifications and designs, thus guiding the developer in the selection of reusable components.
- Tailor components: the selected components are incrementally modified using design suggestions given by the specification components; tailoring occurs by supplying parameters or by modifying component behaviour through inheritance.
- Script application: link the selected design components together by means of a "script" that specifies how the objects will cooperate to implement the required application.
- Monitor behaviour and continuously develop: test and validate; as requirements change, adapt the application.

As the understanding of the application domain evolves, the application engineer upgrades the contents of the SIB with new reusable components and development information. In this way, the set of Generic Application Frames becomes richer and more powerful, and the software life-cycle based on reuse sketched above improves.
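The tailoring step above, i.e. supplying parameters or modifying behaviour through inheritance, can be sketched as follows (a hypothetical Python illustration; the component names are invented and do not reflect actual SIB contents):

```python
class GenericInvoice:
    """A reusable component from a generic application frame."""

    def __init__(self, vat_rate=0.20):       # tailoring by supplying a parameter
        self.vat_rate = vat_rate
        self.lines = []

    def add_line(self, description, net):
        self.lines.append((description, net))

    def total(self):
        net = sum(amount for _, amount in self.lines)
        return round(net * (1 + self.vat_rate), 2)


class SwissInvoice(GenericInvoice):           # tailoring by inheritance
    def __init__(self):
        super().__init__(vat_rate=0.077)

    def total(self):
        # Modified behaviour: round to 5-cent precision, as a local rule.
        raw = super().total()
        return round(raw * 20) / 20


inv = SwissInvoice()
inv.add_line("consulting", 1000.0)
print(inv.total())
```

The generic component stays untouched in the repository; the specific application obtains its variant by parameterisation and overriding, which is exactly what makes the component reusable across applications.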
The interaction of tools to support the object-oriented life-cycle which was outlined in the last chapter can be characterised as follows:
Fig. 3: Simplified tools interaction
The "brain" in the cycle is provided by the SIB, which stores and links artifacts about the development process in general. Based on the IOOM methodology, RECAST, the ITHACA REquirement Collection And Specification Tool, provides a "guided tour" through the SIB to collect generic application frames based on requirements. Based on artifacts retrieved from the SIB, Vista, the ITHACA visual scripting tool, glues these components together to form running prototypical applications. Enhancements to applications, as well as programming of missing components, are done using the CooL (1) SPE, the object-oriented 4GL-like software production environment which, in order to achieve persistence, supports CoOMS as a structurally object-oriented database system [Mey et al. 1993].
Software artifacts are stored in a repository which supports their efficient selection. Comprehending their functionality and usage before adaptation and subsequent use in the composition of a new application is also recognised as essential in the reuse process.
The SIB aims to support the component-based development of very large software systems. The SIB can be seen as a complex-functionality information system (like a large DBMS) providing for persistent storing, sharing, accessing, managing and controlling of data. What mostly distinguishes it from a traditional DBMS is its support for conceptual modelling, which features, in addition to the usual constructs, a capacity for uniform treatment of schemata and data, allowing schema definition at runtime, and a uniform treatment of entities and relations; its capacity for flexible retrieval and browsing of multimedia data; and its optimisation of efficiency for network structures consisting of a large variety of classes rather than relatively few classes with large populations per class.
As can be seen in the figure below, which applies the reuse process in our setting, the TELOS conceptual modelling language is used for the uniform description and organisation of software artifacts. TELOS is a specialised knowledge representation language for the development of information systems [Mylo90]. A corresponding graphical visualisation for each artifact description is readily available. The software components are located and retrieved via a selection mechanism comprising querying and browsing modes. Finally, the semantic nature of TELOS descriptions assists in understanding the functionality, usage and behaviour of the software components described. The latter is assisted by associated multimedia annotations.
Fig. 4: The Reuse Process with the SIB
The SIB is structured as an attributed, directed graph, the nodes and links of which represent descriptions of software objects and semantic relations, respectively. There are three kinds of descriptions:

- requirements descriptions (RD);
- design descriptions (DD); and
- implementation descriptions (ID).

These descriptions provide three corresponding views of a software object:

- an application view, according to a requirements specification model (e.g., SADT);
- a system view, according to a design specification model (e.g., ER or DFD); and
- an implementation view, according to an implementation model (e.g., a set of CooL or C++ classes together with documentation).

Descriptions can be simple or composite, consisting of other descriptions. The term descriptions reflects the fact that these entities only describe software objects. The objects themselves reside outside the SIB. Descriptions are related to each other through a number of semantic relations listed below. In addition to the usual isA, instanceOf and attribute relations supported by object-oriented data models and knowledge representation schemes, several special attribute categories have been defined for the purposes of the SIB.
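The graph structure of the SIB can be sketched minimally as follows (plain Python, not the actual TELOS implementation; the node and relation contents are invented for illustration):

```python
class SIB:
    """An attributed, directed graph of descriptions and semantic relations."""

    def __init__(self):
        self.nodes = {}     # node name -> attribute dictionary
        self.links = []     # (relation, source, target) triples

    def add_node(self, name, **attrs):
        self.nodes[name] = attrs

    def relate(self, relation, source, target):
        self.links.append((relation, source, target))

    def related(self, relation, source):
        """All targets reachable from `source` via `relation` links."""
        return [t for r, s, t in self.links if r == relation and s == source]


sib = SIB()
sib.add_node("Document", kind="design description")
sib.add_node("Invoice", kind="design description")
sib.add_node("invoice_component", kind="implementation description")
sib.relate("isA", "Invoice", "Document")
sib.relate("correspondsTo", "invoice_component", "Invoice")

print(sib.related("isA", "Invoice"))   # ['Document']
```

Browsing and querying then amount to traversals of this graph: the selection tool follows semantic links outward from a node of interest.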
The modelling constructs used in the SIB can be distinguished into three categories:

- General structural/semantic relationships, including attribution, classification and generalisation. These are the basic modelling mechanisms offered by the knowledge representation language.
- Special structural/semantic relationships, including aggregation, correspondence, similarity and specificity. These have been defined as a minimal set of system-supplied special descriptors for the purpose of software description. This set can be extended by the users through a meta-attribute facility that TELOS provides.
- Associations. These are sets of descriptors along with private symbol tables, which allow for the construction of materialised views (through queries) and of contexts (or workspaces).

An important type of association of coarse granularity in the SIB is the Application Frame. Application frames represent complete systems or families of systems and comprise (hasPart) at least one implementation description and optional design and requirements descriptions. Application frames are further distinguished into specific and generic application frames, while the requirements descriptions, design descriptions and implementation descriptions of an application frame should properly be considered as groupings of such descriptions (i.e., other associations).
A specific application frame describes a complete system (be it a linear programming package, a text processor or an airline reservation system) and includes exactly one implementation description. A generic application frame is an abstraction of a collection of applications pertinent to a particular application domain and includes one requirements description, one or more design descriptions and one or more implementation descriptions for each design description.
The SIB is a central component of the ITHACA project, containing all software artifacts developed in one or more software development projects. These software artifacts will be produced during different phases of the development process, covering areas such as requirements specification, design, implementation, or even information about the development process itself. These artifacts will have different formats according to the models used in the development phases. Nevertheless, all these artifacts have to be stored in a relatively uniform way in the SIB. Descriptions in the SIB are stored as TELOS class definitions. The uniformity problem has been approached by defining a metamodel in the SIB [Constantopoulos et al. 1992]. Thus, all models used during the development process can be defined in the SIB as instances of the metamodel.
The metamodel defines the basic components to be used in definitions of models for the SIB. The basic building blocks of each model are constructs. Constructs can range from low-level components, such as a variable, to very high-level components, for example, a whole application. In order to distinguish low-level components, which are not worth reusing per se, from higher-level reusable components, the metamodel introduces a specialised construct called entity. Entities are constructs which are not further structured; they are used to represent simple low-level constructs. At the other end of the abstraction hierarchy, descriptions are used to represent constructs in the SIB that can be reused. All other constructs can be reused only as part of a description.
Descriptions in the SIB belong to one of the following levels: requirements, design or implementation. Each level defines a number of models, one for each description language. A model is a collection of classes (a schema) defining the structure of software artifact representations. Since the SIB is a multi-language environment, many queries will address software artifacts of different languages. Some of these queries address aspects very specific to one language, others address common aspects and ignore specific details. In order to support the widest possible range of queries, a taxonomy of description models has been defined for the SIB. There is one general model allowing queries addressing all descriptions in the SIB. And each level defines one description model which allows level-specific multi-language queries. Each of these level-specific models is a specialisation of the general description model. All models specific to a certain language or formalism within each level are defined as specialisations of one of the level-specific models.
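This taxonomy can be sketched as a class hierarchy in which language-specific description models specialise a level-specific model, which in turn specialises the general model, so that one query can address all descriptions, one level, or one language (a Python illustration under assumed simplifications; the class names are invented):

```python
class Description:
    """The general description model: every description conforms to it."""
    pass

class ImplementationDescription(Description):
    """The level-specific model for the implementation level."""
    pass

class CooLClassDescription(ImplementationDescription):
    """A language-specific model, specialising the level-specific one."""
    pass

class CppClassDescription(ImplementationDescription):
    pass


repository = [CooLClassDescription(), CppClassDescription(),
              ImplementationDescription()]

def query(model):
    """Select every description conforming to the given model."""
    return [d for d in repository if isinstance(d, model)]

print(len(query(ImplementationDescription)))   # 3: a multi-language query
print(len(query(CooLClassDescription)))        # 1: a language-specific query
```

Because each language model is a specialisation, a query phrased against the level-specific model automatically covers artifacts written in any of the languages.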
The PL model is designed to be used as a template for the definition of other implementation models in the SIB. It represents a fictitious, simple, object-oriented and strongly typed language. Basically, the PL model is an incomplete abstract syntax for a programming language. It is incomplete because the SIB does not represent all details of a program. Lower-level details, such as expressions or simple statements, are not part of the PL model.
Specific implementation models should define their constructs as subclasses of the PL constructs. Where necessary, they can define additional constructs. Since the implementation models in the SIB are not meant to represent every single detail but only the overall structure of programs (or classes, procedures etc.) necessary to understand and reuse them, there is reasonable hope that constructs of different models will be sufficiently similar. Defining them as subclasses of the general constructs of the PL model gives us the possibility to ask language-independent questions in a multi-language SIB.
The CooL model is very similar to the PL model. All its constructs are subclasses of a PL class, and there is a one-to-one correspondence between most of their classes. Additional constructs of the CooL model are external variables and procedures, transactions and the distinction between transient and persistent object types. All these additions are used to represent the slightly more complex abstract syntax of CooL.
All objects that have to be defined as instances of one of the CooL model classes can be automatically derived from the source code of a CooL program. An automatic translator has been built to generate SIB descriptions representing CooL components.
While most of the CooL implementation model constructs are almost identical to their corresponding PL constructs, some of them contain additional information used for cross-referencing.
Practical experience has shown that the CooL model can be used in various ways by employing the SIB selection tool functionality. Components can be located by navigating through the semantic network or by issuing specific queries, and different views, such as inheritance trees, call graphs, cross-reference lists or part-of hierarchies, help in understanding the components. As more models are defined for the SIB, it will also become possible to search for components written in different languages.
The main characteristic of the IOOM approach is that the application developer constructing a specific application does not proceed by developing each class contained in the application specifications from scratch, but reuses classes extracted from a class repository. The necessity of a repository to store specification and design information for further reuse has become evident in the context of object-oriented development: the SIB developed in ITHACA and the access mechanisms provided by the RECAST (REquirement Collection And Specification Tool) development support tool are at the basis of the methodology.
The following phases of application development have been considered:

- Requirements collection and analysis
- Conceptual design
- Detailed design
- System design.

In IOOM, we propose an approach to the reuse of high-level components, based on conceptual-level components available in the SIB. Conceptual-level components support the activity of the developer during the conceptual design phase. In fact, reuse of conceptual-level components also encourages reuse of the related implementation-level components in the subsequent detailed design and code development phases. On the other hand, a more conventional approach is proposed for the requirements collection and analysis phase, where the development process and the interaction with the client are more informal. To facilitate reuse in this first phase, we encourage the use of a controlled vocabulary, in addition to a set of modelling concepts supporting the informal description of requirements; direct reuse of informally specified requirements has not been considered viable during the interaction with the client.
The functional specifications of the application are formally defined according to the Functionality in Object with Roles Model (F-ORM). This conceptual design phase requires incremental conceptualisation of the application requirements and their consequent restructuring, according to specific methodological strategies. In this phase, specification components extracted from the SIB are integrated in the specifications following a reuse strategy, adapting them where needed. The specifications are organised at different abstraction levels, and mappings based on transformation primitives are used to represent relationships between levels.
The purpose of the object-oriented design framework F-ORM is to support the conceptual design of information systems in terms of objects and their behavioural rules for communication and cooperation according to global processes. The original aspect of F-ORM is related to the capability of describing the behaviour of objects in a given class through role types [Pernici 1990]. The model makes it possible to separate the description of the behaviour of objects in each role and the relationships between different behaviours in different roles by specifying the rules and constraints that govern concurrent behaviour. Partitioning of the behaviour according to different and separate roles that can be played by objects of a given class facilitates class specifications reuse, and this feature strongly characterises the methodological approach.
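The partitioning of behaviour into roles can be sketched roughly as follows (a hypothetical Python rendering, not F-ORM syntax; the class and role names are invented): each role type carries its own behaviour, and an object of a class can take on several roles independently.

```python
class Role:
    """Base for role types: each role knows the object playing it."""
    def __init__(self, owner):
        self.owner = owner

class Author(Role):
    """One role type with its own, separately described behaviour."""
    def write(self, document):
        return f"{self.owner.name} writes {document}"

class Reviewer(Role):
    """Another role type; its behaviour is described independently."""
    def review(self, document):
        return f"{self.owner.name} reviews {document}"

class Person:
    """A class whose behaviour is partitioned into role types."""
    def __init__(self, name):
        self.name = name
        self.roles = {}

    def play(self, role_cls):
        # An object may concurrently play several roles.
        self.roles[role_cls.__name__] = role_cls(self)
        return self.roles[role_cls.__name__]


p = Person("Anna")
p.play(Author)
p.play(Reviewer)
print(p.roles["Reviewer"].review("report.txt"))
```

Because each role is specified separately, a role type (here, `Reviewer`) can be reused with other classes without dragging along the rest of the class's behaviour, which is the reuse benefit the text attributes to F-ORM.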
In IOOM, the phases of requirements collection and analysis and conceptual design have been studied in detail, in particular with respect to the problems of object-oriented specifications and specification reuse. The emphasis on the first development phases is justified by the lack of proposals in the literature in this domain, while we can assume that one of the more consolidated approaches proposed for object-oriented software design and development can be used for the following phases [Booch 1991, Meyer 1990].
Concerning the reusability approach, it is necessary to discuss when it should be considered during the application development process. Reuse is usually successfully applied during the implementation phase to reuse pieces of code [Biggerstaff 1989]. Reuse during the first application development phases is still being investigated in the literature [Constantopoulos et al. 1989, Mylopoulos 1990], and it has been one of the goals within the application development support environment developed within ITHACA. In fact, while reuse in the small is easier to achieve, reuse in the large presents greater difficulties. Very high-level reuse has been proposed for specific applications, mainly based on high-level specification languages; little is available in the literature to support application development in general.
The purpose of RECAST is to assist the application developer in finding requirement components in the SIB and in composing them to form the specification of an application schema. RECAST provides active guidance in producing specifications at various levels of detail by giving suggestions about applicable refinements and transformations of components based on the user's previous design actions.
RECAST supports IOOM for designing an application under the reuse approach. A specification schema is produced incrementally by selecting components from the SIB, and by following the associated reuse suggestions that propose a (set of) conceptual operation(s) applicable to adapt the components. The application of a conceptual operation brings about connections to other components which are suggested in order to have a working schema; moreover, a set of optional components are suggested as possible participants in the schema, and a set of applicable operations is provided. The mechanism of suggestions is at the basis of the method and is implemented as suggestion classes stored in the SIB.
The final schema contains requirements enriched with a set of detailed design suggestions that specify how the obtained schema can be mapped into a set of design classes. The schema is then available to Vista which supports retrieval of design components from the SIB according to these suggestions.
The figure below shows the information flow within the ITHACA environment and, in particular, in the requirement specification phase. As a preliminary step, an analysis of the application requirements is performed with the aim of collecting the requirements and identifying tasks, information/documents and agents. This phase is driven by suggestions from the analysis part of IOOM. An analysis of reuse information is subsequently carried out in order to identify reusable specifications suitable for the application being developed. To this end, the SIB contents are searched and browsed, and the documentation associated with reusable components is examined. The reusable specifications suitable for the current application are selected from the SIB, composed and tailored.
A detailed design of the application is executed using Vista and is strictly related to the application requirements. This task is supported via suggestions provided by RECAST as requirement-to-design mapping suggestions.
Fig. 5: Information flow in the requirement specification phase
A component, which is the reusable unit, can be an F-ORM class, a set of classes, or a role. RECAST supports requirement specification by providing retrieval functionalities for components, and reuse suggestions.
For further information on RECAST, reference is made to [Bellinzona and Fugini 1992a], [Fugini and Faustle 1993] and [Bellinzona et al. 1992b].
Vista is a tool to support application developers in the interactive composition, or "visual scripting", of applications from pre-packaged, plug-compatible software components. Each software component must have (1) a behaviour, typically defined in some programming language, such as CooL or C++, (2) a visual presentation that serves as its user interface during visual composition, and (3) a composition interface that allows it to be "plugged" together with other components [Mey 1994].
Vista is a generic tool for visual composition. Genericity is achieved by factoring out the definition of "plug-compatibility" for a given component set into an explicit composition model (also sometimes called a "scripting model"). Typically, composition models and component sets are associated with a particular application domain. The composition interface of a component consists of a set of typed, directed "ports" that allow a component to be connected to other components. A composition model defines which types of ports may be connected (possibly subject to subtyping rules), and determines what actions must be taken by Vista when a valid connection is made. The ability to support multiple composition models is a key feature of Vista that allows it to be adapted to different application domains.
Within the context of ITHACA, Vista is used in the later stages of development, once the application domain and specific requirements have been determined. Relevant component sets and their composition models are retrieved from the SIB during requirements collection and specification, and individual components are selected and composed using Vista according to the given requirements [Nierstrasz et al. 1991].
Visual composition with Vista depends on the prior activity of application engineering to produce composition models and component sets. Although this is a capital-intensive activity, once a composition model has been defined, it is possible to add new components that conform to the model, thus extending the functionality of the corresponding component set. Furthermore, it is possible (and sometimes desirable) to re-engineer or re-package previously developed components to conform to a particular composition model. Composition models can, in this way, provide various standards for integrating heterogeneous systems.
Vista supports visual composition in two ways: (1) by imposing a reasonably flexible structure for component definition, and (2) by defining a framework, referred to as the Component-Port-Link (CPL) framework, for component composition. In order to use Vista, composition models and component sets must be provided for particular application domains. It is through the composition models and component sets that Vista can support the creation of individual applications. A component set consists of the objects with which the developer will work, for example user interface components, and a composition model determines the way in which components may be connected.
Vista operates on five basic principles:
A composition model allows the user to define compatibility rules between ports. The tool parses a composition model file and creates a composition model manager which manages these compatibility rules. Moreover, the composition model file defines a mapping between formal names and subclasses of the C++ base classes that support ports and links. At run time, this mapping adds the desired functionality to the creation of links and to the transmission of data between components.
- Components are made up of a behaviour and a presentation.
The behaviour of a component consists of whatever it has been designed to do. In particular, a component may be designed to interact with users, or it may serve a purely computational function within the final application. Whatever the case, Vista is not concerned explicitly with the behaviour of the component, but only with how the component may be composed with others within an application.
The presentation is the external representation of the component, which includes its visual display and its reactions to user input. The internal and external representations are kept separate to allow for flexibility when associating a behaviour with a presentation. The user of Vista manipulates components and may select a presentation for a component.
- Every component has a composition interface with at least one port.
A component is shown together with its ports, representing the component's parameters, services and acquaintances. The composition interface allows the behaviour of a component to be bound to a given context. It consists of a set of ports, each of which has a name, a type and a polarity. A port's polarity may be: (1) input: information required to complete the component's behaviour; (2) output: information produced by the component; (3) input/output. Default values may be provided for ports, in which case the ports may be left unlinked. Ports may be visually presented in a variety of ways, such as knobs, buttons, text fields, menus, etc., depending on the port's intended semantics. Components must be specially designed to be composed using Vista. They may be related to a particular application domain or may be considered general purpose. New components can be added to the set of available components with a minimum of effort.
- Any two plug-compatible components can be linked together.
Linking components means connecting together compatible ports of their composition interface according to the rules of a composition model. A set of components linked together via their ports is called a composition, or a "script". A composition model determines the valid syntax for compositions by defining the allowable port types and their compatibilities. The composition model also supplies semantics by specifying the interpretation of a component's ports in a given application domain along with the meaning of the links between components. Local and/or global rules (i.e., constraints, restrictions) that are required by a particular application domain are also expressed in the composition model.
- Compositions can be seen as graphs.
At a conceptual level, components can be thought of as the nodes, and links as the edges, of a graph. At the implementation level, however, this conceptual graph is internally represented as a more complex attributed graph in which components, ports and links are all nodes. The graph is not necessarily free of cycles.
- Compositions can be packaged as components.
It is also possible to define a group of composed components as a composite component - or "SAC" (Script As Component) - thereby promoting hierarchical decomposition in the CPL framework. To package a composition as a SAC, one must provide (1) a composition interface (by specifying which ports of the constituent components are to become ports of the SAC), and (2) a visual presentation (which can be composed of existing presentation components). The behaviour of a SAC is defined by the behaviour of the components it contains. Hierarchical decomposition of this kind is found in almost every engineering discipline as a means of reducing the complexity of a solution.
A composition model file can be edited at any time and loaded into Vista. Composition models can inherit from each other; an inheriting composition model reuses the type names and links already defined by its parent.
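The CPL principles above can be sketched in a few lines of Python (all names are illustrative; Vista's actual framework is built on C++ base classes for components, ports and links):

```python
# A minimal sketch of the Component-Port-Link (CPL) idea.
# Illustrative names only; this is not Vista's actual interface.

class Port:
    """A typed, directed plug on a component's composition interface."""
    def __init__(self, name, type_, polarity):
        self.name, self.type, self.polarity = name, type_, polarity

class Component:
    """A component: a behaviour plus a composition interface of ports."""
    def __init__(self, name, ports):
        self.name = name
        self.ports = {p.name: p for p in ports}

class CompositionModel:
    """Defines which port types are plug-compatible."""
    def __init__(self, compatible_pairs):
        self.compatible = set(compatible_pairs)

    def can_link(self, out_port, in_port):
        return (out_port.polarity == "output"
                and in_port.polarity == "input"
                and (out_port.type, in_port.type) in self.compatible)

class Script:
    """A composition: components are graph nodes, links are edges."""
    def __init__(self, model):
        self.model = model
        self.links = []

    def link(self, src, out_name, dst, in_name):
        out_p, in_p = src.ports[out_name], dst.ports[in_name]
        if not self.model.can_link(out_p, in_p):
            raise TypeError(f"{out_name} and {in_name} are not plug-compatible")
        self.links.append((src.name, out_name, dst.name, in_name))

# Example: a text-field component feeding a lookup component.
model = CompositionModel({("string", "string")})
field = Component("NameField", [Port("value", "string", "output")])
query = Component("Lookup", [Port("key", "string", "input")])
script = Script(model)
script.link(field, "value", query, "key")  # accepted: string -> string
```

Because the compatibility rules live in the composition model rather than in the components, the same tool can support different application domains simply by loading a different model.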
From a programming point of view, commercial application and information processing systems can be characterised as follows:
- They are generally very large systems.
- Their developers should focus on the application domain rather than on the system engineering aspects of an application.
- They place high demands on database systems, requiring very large databases and convenient, secure and efficient access to them.
- They need support for end-user-friendly graphical interfaces, report generation of printed material etc.
Currently, fourth-generation environments cover this domain well and enable the application developer to produce systems of this type with a relatively high productivity rate. For this domain in particular, however, object-orientation offers major benefits, because information systems show a high degree of similarity, which makes reuse "in the large" a promising prospect.
Future fourth-generation environments will not only have to generate productivity benefits, they must also conceal complex system features which application programmers have difficulty handling, such as client/server distribution and connection to complex graphical user interfaces.
In particular, database programming will have to be supported by means of persistence. The ideal solution for object-oriented applications is clearly an object-oriented database system. However, relational database systems are at the peak of their adoption and cannot be expected to be replaced by object-oriented database systems in the foreseeable future. Object-oriented 4GL environments must therefore at least support the cooperation of relational and object-oriented DBMSs in such a way that existing relational data and newly generated object-oriented data can co-exist and be managed together.
Within the ITHACA project, a 4GL environment, the CooL SPE, was developed which takes all these requirements into account. The CooL SPE is provided in two steps.
The first step provides the transient version, with access to relational database systems free of the impedance mismatch usually experienced from an object-oriented world. This version of the CooL SPE offers support for graphical user interfaces, test and debugging mechanisms, as well as support for producing printouts (CooL SPE V2.1).
CooL SPE V2.1 is available for UNIX and Windows systems, and a portable, slimmed-down version is available in the public domain.
The second step enhances V2.1 with an object-oriented database system called CoOMS. CoOMS supports the NO2 data model, which was introduced as a baseline for OMG's relationship submission. CoOMS fully supports query mechanisms via an SQL derivative enhanced by object-oriented features (QUOD). CoOMS is transparently integrated by a persistent version of the CooL language (CooL V3.0) which provides automatic schema definition and schema evolution, transparent, persistent object invocation and garbage collection, as well as transparent multi-user access. Closed, block-nested transactions are supported and smoothly integrated with the CooL exception mechanism. Within CooL SPE V3.0, the SQL access features of V2.1 are supported in an upward-compatible way, thus allowing both persistent relational and persistent object-oriented data to be managed easily.
CooL V2.1 is a complete programming environment specially designed to support the professional development of large-scale object-oriented application systems [Müller 1994].
The CooL V2.1 environment incorporates a modern, object-oriented, easy-to-learn 4th-generation programming language which integrates both relational database systems and graphical user interface toolkits and which is supported by an object type library system, a report writer interface and a convenient debugger. Tools are integrated through a graphical, object-oriented programmer's desktop which can be extended by additional tools and also used within the application system.
The prime component of the CooL V2.1 product is the CooL programming language, a modern, object-oriented 4GL [Köster et al. 1990]. CooL offers all the features of the object paradigm, such as inheritance, dynamic binding, polymorphism and genericity. CooL comes with a module concept, a feature which is essential for the development of large application systems. Dynamic Link Libraries are supported to reduce runtime space for large applications. At the same time, the language is easy to learn and type-safe. Experience has shown that professional programmers newly assigned to a fairly large application project (more than 400 classes and some 100,000 lines of code) who were previously unfamiliar with object technology were able to understand the application, learn the technology and become fully productive within the space of just one month.
However, object technology does not exist in isolation; it has to accommodate existing software written in procedural languages. Thus, CooL is fully type-compliant with the C language type system and allows software written in C to be integrated into CooL applications without any effort. To this end, CooL supports the complete C type system and offers a bidirectional call interface to C. In addition, CooL generates highly efficient ANSI C code, thus allowing applications to be ported very easily.
Fig. 6: The CooL Software Production Environment
The CooL language is extended by the CooL library system CoLibri. CoLibri provides a complete CooL interface to the UNIX C library, compliant with the X/Open Portability Guide 3. As abstract data types, CoLibri offers, among other things, complete BCD arithmetic, an essential feature for most commercial application systems. As foundation object types, CoLibri provides time representations (including date, time, duration, interval etc.) and the basic container object types list and map. The CoLibri library is fully open and may be freely extended with the user's own object types.
For tracing and monitoring applications, the CooL programmer is supported by MaX, a convenient CooL source-code debugger. MaX is also capable of completely analysing C code if it is used within a program. MaX offers the functionality of the UNIX sdb debugger and, in addition, is able to handle conditional and unconditional breakpoints and tracepoints, to resolve inheritance, to dereference pointers etc. The next version of MaX will also support graphical user interfaces. Meanwhile, the current character-based command line interface offers considerable convenience, including a command history, re-execution of history entries, named command sequences, an online help facility and more besides.
CooL V2.1 supports the development of application systems with graphical user interfaces based on OSF X/Motif 1.1 and Windows 3.0. A Dialog Interface Object, DIO, is available to facilitate integration of the application with the runtime system of X/Motif (more than 600 functions in all are provided by the X/Lib/Motif system) and Windows 3.1. This interface abstracts from the toolkit's primitives and improves productivity, especially where form-oriented interfaces are adequate for representing an application; for the application domain envisaged, forms were felt to be a quite natural model. The DIO is sufficiently open to allow use of the full functionality of the toolkit in addition to the DIO abstraction.
The SQL Object Interface (SOI) is provided to allow object-oriented applications to be integrated with a relational database system. This interface offers access to SQL tables via a generated object type interface. Access is defined by an explicit description from which a CooL object type for managing the table is generated. This interface is independent of the underlying database systems. First, this guarantees interoperability of the application in a heterogeneous database system environment, and, second, safeguards investments if a decision is made to use a different database system at a later point in time.
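The idea behind such a generated table-access type can be sketched as follows, using Python's sqlite3 as a stand-in for the relational DBMS. The PersonTable class and its methods are hypothetical illustrations of the concept, not the code actually generated by the SOI:

```python
# Illustrative sketch of the SOI idea: an object type, generated from an
# explicit table description, manages all access to one SQL table, so the
# application never touches SQL directly.  sqlite3 stands in for the DBMS;
# the class and its methods are hypothetical.
import sqlite3

class PersonTable:
    """Hypothetical generated object type for a person(name, age) table."""
    def __init__(self, conn):
        self.conn = conn
        conn.execute(
            "CREATE TABLE IF NOT EXISTS person (name TEXT, age INTEGER)")

    def insert(self, name, age):
        # Parameterized insert generated from the table description.
        self.conn.execute("INSERT INTO person VALUES (?, ?)", (name, age))

    def select_by_name(self, name):
        cur = self.conn.execute(
            "SELECT name, age FROM person WHERE name = ?", (name,))
        return cur.fetchall()

# The application works only with the generated object type.
conn = sqlite3.connect(":memory:")
people = PersonTable(conn)
people.insert("Ada", 36)
rows = people.select_by_name("Ada")
```

Because the application sees only the generated interface, the underlying DBMS can be exchanged without touching application code, which is precisely the interoperability argument made above.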
Last but not least, the Print Object Interface (POI) enables an explicit description to be made of forms, lists and reports to be printed on printing devices. The programmatic interface conceals the cumbersome tasks involved, thus relieving the developer of managing print formats (pages, headers, repeating groups), spoolers and devices.
The SIB is provided with a CooL model and serves as a static analyser for the CooL SPE. This static analyser provides comfortable navigation and views (e.g., inheritance trees, part-of relationships, uses and used-from graphs etc.), as well as many predefined and user-definable queries. A CooL versioning model based on the SIB is under development.
CoOMS (Combined Object Management System) is the persistent storage subsystem of the CooL development environment. As a storage subsystem, CoOMS constitutes a structurally object-oriented database system kernel which implements the NO2 data model [Geppert et al. 92]. Thus, CoOMS permits the management of complex structured objects by supporting the creation and deletion of objects, their persistent storage on secondary storage devices, as well as the retrieval of objects based on both navigation and association (query facility).
The functionality of the key features of CoOMS is described as follows:
Data model support
The CoOMS persistent object server implements the NO2 data model. Thus, CoOMS provides value set constructors for tuple, set, list and array, which are used to model the internal structure of objects. Objects may be linked by general references or part-of relationships. General references model relationships between objects, while the part-of relationship combines objects into a higher-level structured object in the sense of a single computational unit. CoOMS supports type hierarchies with multiple inheritance between object types. The management of objects covers the following operations: insert, update, delete, navigate and retrieve.
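The distinction between part-of relationships and general references can be illustrated with a minimal sketch (Python stand-in, illustrative names): parts belong to the whole as one computational unit, so deleting an object cascades to its parts, whereas generally referenced objects remain untouched.

```python
# Sketch of the NO2 distinction between part-of relationships and general
# references.  Illustrative names; not the CoOMS interface.

class Obj:
    def __init__(self, name):
        self.name = name
        self.parts = []   # part-of relationship: constituents of this object
        self.refs = []    # general references to independent objects
        self.alive = True

def delete(obj):
    """Delete an object; part-of integrity cascades to its parts."""
    obj.alive = False
    for part in obj.parts:
        delete(part)
    # general references do not cascade

invoice = Obj("invoice")
line = Obj("line item")
customer = Obj("customer")
invoice.parts.append(line)     # the line item is part of the invoice
invoice.refs.append(customer)  # the invoice merely refers to the customer
delete(invoice)                # removes the line item, keeps the customer
```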
Access to the CoOMS object server
CoOMS provides associative and navigational access to objects. Associative access means access based on declarative query expressions; navigational access returns the object associated with a given object identifier by dereferencing that identifier. At run time, result sets of objects are transferred to the client's workspace, a main memory segment referred to as a heap. All objects stored in the heap are handled via method calls to CoOMS; thus, the methods of the component managing the heap form the interface of the CoOMS object server. On top of this object server interface, several language interfaces can be implemented.
CoOMS provides one kind of explicit integrity constraint, namely uniqueness of atomic tuple attribute values. Uniqueness of values may be required for atomic attributes which belong to a tuple. Implicit integrity constraints are assured by the CoOMS object server and comprise object integrity, simple domain integrity, part-of integrity and referential integrity.
CoOMS provides a transaction management scheme based on short transactions. A short transaction is one which does not survive the duration of the client process which has established the storage server connection. A transaction may have sub-transactions in the sense of closed nested transactions: within a top-level transaction, sub-transactions may exist which may in turn have sub-transactions, and so on. CoOMS provides two kinds of locks, read and write. The lock protocol is strict two-phase locking.
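The lock rules can be illustrated with a toy, single-threaded sketch: read locks are shared, write locks are exclusive, and under strict two-phase locking all locks are held until the end of the outermost transaction (names are illustrative, not the CoOMS interface):

```python
# Toy model of shared/exclusive locking under strict two-phase locking.
# Single-threaded and illustrative only.

class LockManager:
    def __init__(self):
        self.readers = {}  # object -> set of transactions holding read locks
        self.writer = {}   # object -> transaction holding the write lock

    def acquire_read(self, txn, obj):
        w = self.writer.get(obj)
        if w is not None and w != txn:
            return False                     # blocked by another write lock
        self.readers.setdefault(obj, set()).add(txn)
        return True

    def acquire_write(self, txn, obj):
        w = self.writer.get(obj)
        others = self.readers.get(obj, set()) - {txn}
        if (w is not None and w != txn) or others:
            return False                     # any other lock conflicts
        self.writer[obj] = txn
        return True

    def release_all(self, txn):
        """Strict 2PL: invoked only at the end of the outermost transaction."""
        for holders in self.readers.values():
            holders.discard(txn)
        self.writer = {o: t for o, t in self.writer.items() if t != txn}

lm = LockManager()
ok1 = lm.acquire_read("T1", "account")       # shared read lock granted
ok2 = lm.acquire_read("T2", "account")       # a second reader is allowed
blocked = lm.acquire_write("T2", "account")  # refused: T1 still holds a read
lm.release_all("T1")                         # T1's outermost transaction ends
ok3 = lm.acquire_write("T2", "account")      # now the write lock is granted
```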
CoOMS realises R1-R3 recovery: R1 and R2 recovery comprise UNDO and REDO logging, while R3 recovery enables the system to restore a consistent database state after a soft crash.
The object dictionary contains meta-information about a specific object base. The object dictionary is organised within an object base and thus constitutes a collection of NO2 objects and object types; the meta-information is therefore itself a collection of stored objects. The object dictionary holds information about schemas and subschemas, value sets and object types, clusters, the physical storage of objects and object identifiers, logical object bases and users, statistical information and information about the system configuration.
The optimisation process in CoOMS is subdivided into two parts: algebraic and non-algebraic optimisation. Non-algebraic optimisation belongs to the process of query execution plan generation [Demuth et al. 1993].
Access paths and clustering
Query access to objects is supported by suitable access paths. This holds true for both navigational and associative access. For associative access, a primary index over all object identifiers is maintained. Major significance is attached to navigational access; for this purpose, a direct object graph with backward references exists for the representation of part-of hierarchies and general references between objects. Secondary indexes support optimised access to objects; their definition is restricted to basic value sets, e.g., integer type, string type etc. CoOMS provides clustering of objects, considering only the objects of one logical database. Two interrelation types are provided: objects belonging to one part-of hierarchy, and objects belonging to the same object type and its subtypes.
The persistent version of the CooL SPE integrates notions from both system programming and database programming in a single environment [Bouschen et al. 93]. A type can be specified as either a transient or a persistent object type. A persistent type exists independently of a particular program execution and its objects are stored in CoOMS. As such, they may be created by one process and accessed by another, and they may be used concurrently by several processes. A persistent object type may be derived from another persistent type, where the supertype must also be persistent. In addition to the common state variables, state fields can be declared unique. Such fields uniquely identify an object of that type, and an exception is raised if two objects are created with identical values in the state fields declared unique.
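The uniqueness rule can be illustrated with a small Python stand-in (the class and field names are hypothetical, not persistent CooL syntax):

```python
# Sketch of the "unique" state-field rule: creating a second object whose
# unique fields match an existing object raises an exception.
# Illustrative Python stand-in for persistent CooL behaviour.

class UniquenessViolation(Exception):
    pass

class Account:
    unique_fields = ("account_no",)  # fields declared unique in the type
    _extent = []                     # all stored objects of this type

    def __init__(self, **state):
        key = tuple(state[f] for f in self.unique_fields)
        for other in Account._extent:
            if tuple(getattr(other, f) for f in self.unique_fields) == key:
                raise UniquenessViolation(f"duplicate unique key {key!r}")
        self.__dict__.update(state)
        Account._extent.append(self)

first = Account(account_no=1, owner="Ada")
try:
    Account(account_no=1, owner="Bob")  # same unique key
    duplicate_created = True
except UniquenessViolation:
    duplicate_created = False
```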
An object constructor, denoter or method call involving a persistent object can only be executed during the elaboration of a transaction statement. Persistent CooL supports closed, block-nested transactions. A process which potentially reads (writes) the state of an object implicitly acquires a read (write) lock on that object. A read lock can be acquired if no other process holds a write lock on that object; a write lock can be acquired if no other process holds a read or write lock on that object. The locks acquired by a process are released at the end of the outermost enclosing transaction. A transaction may be aborted explicitly by a rollback statement, or implicitly if an exception is raised that is not handled inside the transaction or if the transaction body is left by a return or exit statement.
A full description of CoOMS is given in [Dumm et al. 93]. The NO2 data model is defined in [Geppert et al. 92]. For the query language QUOD, the interested reader is referred to [Demuth et al. 1993].
[Biggerstaff and Perlis 1989]
T.J. Biggerstaff, A.J. Perlis, "Software Reusability - Concepts and Models", Vol. 1, ACM Press, Frontier Series, 1989
[Booch 1991]
G. Booch, "Object-Oriented Design", Benjamin-Cummings, 1991
[Bouschen et al. 93]
M. Bouschen, K.-H. Köster, M. Lumpe, "CooL V2.0 Language Description", ITHACA-IT report, IIT.S.SDS.PCO.93.1, 1993
[Constantopoulos et al. 1989]
P. Constantopoulos, M. Jarke, J. Mylopoulos, B. Pernici, E. Petra, M. Theodoridou and Y. Vassiliou, "The Ithaca Software Information Base: Requirements, Functions and Structuring Concepts", ITHACA Report ITHACA.FORTH.89.E2.1, 1989
[Demuth et al. 1993]
B. Demuth, A. Geppert, T. Gorchs, "Algebraic Query Optimization in the CoOMS Structurally Object-Oriented Database System", Query Processing for Advanced Database Systems, eds. J.C. Freytag, D. Maier, G. Vossen, Morgan Kaufmann Publishers, USA-San Mateo, 1993, pp. 122-142
[Dumm et al. 93]
T. Dumm, T. Gorchs, M. Watzek, "CoOMS - The object storage for advanced information systems", ITHACA Report, ITHACA.SNI.93.X.5.#2, 1993
[Geppert et al. 92]
A. Geppert, K. Dittrich, V. Goebel, S. Scherrer, "An Algebra for the NO2 Data Model", ITHACA Report, ITHACA.Unizh.90.X.4.#3, version 1992
[Köster et al. 1990]
K.-H. Köster, G. Müller, J. Schiewe, M. Weber, "CooL 0.2 Language Description", ITHACA report, ITHACA.SNI.90.L.2.#3, 1990
[Mey 1994]
V. de Mey, "Visual Composition of Software Applications", Ph.D. thesis (no. 2660), Centre Universitaire d'Informatique, University of Geneva, 1994
[Mey et al. 1993]
V. de Mey, O. Nierstrasz, "The ITHACA Application Development Environment", Visual Objects, ed. D. Tsichritzis, Centre Universitaire d'Informatique, University of Geneva, July 1993, pp. 267-280
[Meyer 1990]
B. Meyer, "Lessons from the design of Eiffel libraries", Communications of the ACM, Sept. 1990, pp. 68-89
[Müller 1994]
G. Müller, "CooL - An Introduction", Siemens Nixdorf publication, 1994
[Mylopoulos et al. 1990]
J. Mylopoulos, A. Borgida, M. Jarke, M. Koubarakis, "Telos: Representing Knowledge about Information Systems", ACM Trans. on Information Systems, Oct. 1990
[Nierstrasz et al. 1991]
O. Nierstrasz, D. Tsichritzis, V. de Mey, M. Stadelmann, "Objects + Scripts = Applications", Proceedings, Esprit 1991 Conference, Kluwer Academic Publishers, NL-Dordrecht, 1991, pp. 534-552
[Pernici 1990]
B. Pernici, "Objects with Roles", IEEE/ACM Conf. on Office Information Systems, Cambridge, MA, April 1990