Technical Reports

 



USC-CSE-98-520 Postscript, PDF

COTS Software Integration Cost Modeling Study

Chris Abts, Barry Boehm
USC-Center for Software Engineering

This study represents a first effort toward the goal of developing a comprehensive COTS integration cost modeling tool. The approach taken was first to examine a wide variety of sources in an attempt to identify the most significant factors driving COTS integration costs, and to develop a mathematical form for such a model. These sources ranged from existing cost models to information gathered in a preliminary high-level data collection survey. Once the form and candidate drivers had been identified, the next step was to gather project-level COTS integration effort data in a second-round data collection exercise. This project-level data was then used to calibrate and validate the proposed model. Data from both a graduate-level software engineering class and industrial sources were used in calibration attempts. The industrial data proved problematic, however, so for the purposes of this study the final calibration of the model was based upon the student projects.

The final result was a cost model following the general form of the well-known COCOMO® software cost estimation model, but with an alternate set of cost drivers. The scope of the model is also narrow, addressing only initial integration coding costs. The predictive power of the model at this stage is only fair, but it was demonstrated that with appropriate data, the accuracy of the model could be greatly improved.

Finally, the richness of the problem of capturing all significant costs associated with using COTS software offers many worthwhile directions in which to expand the scope of this model.

Document characteristics: Final report under DoD contract F30602-94-C-1095


USC-CSE-98-519 PDF

Guidelines for the Life Cycle Objectives (LCO) and the Life Cycle Architecture (LCA) deliverables
for Model-Based Architecting and Software Engineering (MBASE)

Barry Boehm, Dan Port, Marwan Abi-Antoun, and Alexander Egyed
USC-Center for Software Engineering

Over our three years of developing digital library products for the USC Libraries, we have been evolving an approach called Model-Based (System) Architecting and Software Engineering (MBASE). MBASE involves early reconciliation of a project's success models, product models, process models, and property models.  It extends the previous spiral model in two ways:

Initiating each spiral cycle with a stakeholder win-win stage to determine a mutually satisfactory (win-win) set of objectives, constraints, and alternatives for the system's next elaboration during the cycle.

Orienting the spiral cycles to synchronize with a set of life cycle anchor points: Life Cycle Objectives (LCO), Life Cycle Architecture (LCA), and Initial Operational Capability (IOC).

The MBASE guidelines present the content and completion criteria, at the LCO and LCA milestones (which correspond to the Inception and Elaboration phases of the Rational Unified Process), for the following system definition elements:

         Operational Concept Description (OCD)

         System and Software Requirements Definition (SSRD)

         System and Software Architecture Description (SSAD)

         Life Cycle Plan (LCP)

         Feasibility Rationale Description (FRD)

         Risk-driven prototypes

The guidelines also include a suggested domain taxonomy to be used as a checklist and organizing structure for the WinWin requirements negotiation. The guidelines attempt to achieve high conceptual integrity, little redundancy, and strong traceability across the various system definition elements, and are compatible with the Unified Modeling Language (UML).

These guidelines were used by 20 five-to-six-person teams of computer science graduate students during Fall 1998, and were revised twice following the LCO and LCA Architecture Review Boards. They were also used for rebaselining the LCA packages during Spring 1999.


USC-CSE-98-518 Postscript, PDF

Requirements Engineering, Expectations Management, and The Two Cultures

Barry Boehm, Marwan Abi-Antoun, and Dan Port, USC-Center for Software Engineering
Julie Kwan, USC University Libraries
Anne Lynch, University of Southern California

In his seminal work, The Two Cultures, C.P. Snow found that science and technology policymaking was extremely difficult because it required the combined expertise of both scientists and politicians, whose two cultures had little understanding of each other's principles and practices [Snow, 1959].

During the last three years, we have conducted over 50 real-client requirements negotiations for digital library applications projects.  These largely involve professional librarians as clients and 5-6 person teams of computer science MS-degree students as developers.  We have found that their two-cultures problem is one of the most difficult challenges to overcome in determining a feasible and mutually satisfactory set of requirements for these applications.

During the last year, we have been experimenting with expectations management and domain-specific lists of "simplifiers and complicators" as a way to address the two-cultures problem for software requirements within the overall digital library domain.  Section 2 of this paper provides overall motivation and context for addressing the two-cultures problem and expectations management as significant opportunity areas in requirements engineering.  Section 3 discusses the digital library domain and our stakeholder Win-Win and Model-Based (System) Architecting and Software Engineering (MBASE) approach as applied to digital library projects.  Section 4 discusses our need for better expectations management in determining the requirements for the digital library projects and products over the first two years, and describes our approach in year 3 to address the two-cultures problem via expectations management.  Section 5 summarizes results to date and future prospects.

Document characteristics: Accepted for Proceedings, ICRE99, June 1999


USC-CSE-98-517 Postscript, PDF

Escaping the Software Tar Pit: Model Clashes and How to Avoid Them

Barry Boehm and Dan Port, USC-Center for Software Engineering

"No scene from prehistory is quite so vivid as that of the mortal struggles of great beasts in the tar pits... Large system programming has over the past decade been such a tar pit, and many great and powerful beasts have thrashed violently in it...
"Everyone seems to have been surprised by the stickiness of the problem, and it is hard to discern the nature of it. But we must try to understand it if we are to solve it."
Fred Brooks, 1975

Several recent books and reports have confirmed that the software tar pit is at least as hazardous today as it was in 1975. Our research into several classes of models used to guide software development (product models, process models, property models, success models), has convinced us that the concept of model clashes among these classes of models helps explain much of the stickiness of the software tar-pit problem.

We have been developing and experimentally evolving an approach called MBASE -- Model-Based (System) Architecting and Software Engineering -- which helps identify and avoid software model clashes. Section 2 of this paper introduces the concept of model clashes, and provides examples of common clashes for each combination of product, process, property, and success model. Sections 3 and 4 introduce the MBASE approach for endowing a software project with a mutually supportive set of models, and illustrate the application of MBASE to an example corporate resource scheduling system. Section 5 summarizes the results of applying the MBASE approach to a family of small digital library projects. Section 6 presents conclusions to date.

Document characteristics: ACM Software Engineering Notes, January 1999, pp. 36-48


USC-CSE-98-516 Postscript, PDF

The Rosetta Stone: Making COCOMO® 81 Files Work With COCOMO® II

Donald J. Reifer, Reifer Consultants, Inc.
Barry W. Boehm and Sunita Chulani, USC-Center for Software Engineering

As part of our efforts to help COCOMO® users, we, the COCOMO® research team at the Center for Software Engineering at the University of Southern California (USC), have developed the Rosetta Stone for converting COCOMO® 81 files to run using the new COCOMO® II software cost estimating model. The Rosetta Stone is very important because it allows users to update estimates made with the earlier version of the model so that they can take full advantage of the many new features incorporated into the COCOMO® II package. This paper describes both the Rosetta Stone and guidelines for making the job of conversion easy.

Document characteristics: CrossTalk, February 1999, pp. 11-15


USC-CSE-98-515 Postscript, PDF

A Bayesian Software Estimating Model Using a Generalized g-Prior Approach

Sunita Chulani, USC-Center for Software Engineering
Bert Steece, USC-Marshall School of Business

Soon after the initial publication of the COCOMO® II model, the Center for Software Engineering (CSE) began an effort to empirically validate COCOMO® II [14]. By January 1997, they had a dataset consisting of 83 completed projects collected from several Commercial, Aerospace, Government and FFRDC organizations. CSE used this dataset to calibrate the COCOMO® II.1997 model parameters. Because of uncertainties in the data and/or respondents' misinterpretations of the rating scales, CSE developed a pragmatic calibration procedure for combining sample estimates with expert judgement. Specifically, the above model calibration for the COCOMO® II.1997 parameters assigned a 10% weight to the regression estimates while expert-judgement estimates received a weight of 90%. This calibration procedure yielded effort predictions within 30% of the actuals 52% of the time.

CSE continued the data collection effort and the database grew from 83 datapoints in 1997 to 161 datapoints in 1998. Using this data and a Bayesian approach that can assign differential weights to the parameters based on the precision of the data, we provide an alternative calibration of COCOMO® II. Intuitively, we prefer this approach to the uniform 10% weighted-average approach described above because some of the effort multipliers and scale factors are more clearly understood than others. The sample information for well-defined cost drivers receives a higher weight than that given to the less precise cost drivers. This calibration procedure yielded significantly better predictions: our version of COCOMO® II gives effort predictions within 30% of the actuals 76% of the time. The reader should note that these predictions are based on out-of-sample data (projects) as described in the 'Cross Validation' section (i.e. section 5).

This paper presents a generalized g-prior approach to calibrating the COCOMO® II model. The paper shows that if the weights assigned to sample estimates versus expert judgement are allowed to vary according to precision, a superior predictive model will result.
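The precision-weighted combination described in the abstract can be sketched as follows. This is an illustrative assumption-laden sketch, not the paper's actual calibration: the function, driver names, and all numbers are hypothetical, and the sketch only shows the basic idea that a posterior estimate weights the sample estimate by its precision (inverse variance) relative to the expert prior.

```python
# Illustrative sketch of a precision-weighted Bayesian update for a single
# cost-driver coefficient. The prior (expert Delphi) estimate and the
# regression (sample) estimate are combined with weights proportional to
# their precisions (inverse variances). All numbers are hypothetical.

def posterior_estimate(prior_mean, prior_var, sample_mean, sample_var):
    """Posterior mean as a precision-weighted average of prior and sample."""
    prior_precision = 1.0 / prior_var
    sample_precision = 1.0 / sample_var
    weight = sample_precision / (prior_precision + sample_precision)
    return weight * sample_mean + (1.0 - weight) * prior_mean

# A well-understood driver: precise sample data dominates the expert prior.
precise = posterior_estimate(prior_mean=1.10, prior_var=0.04,
                             sample_mean=1.30, sample_var=0.01)

# A noisy driver: the expert prior dominates the imprecise sample estimate.
noisy = posterior_estimate(prior_mean=1.10, prior_var=0.04,
                           sample_mean=1.30, sample_var=0.36)

print(round(precise, 3))  # lands closer to the sample estimate (1.30)
print(round(noisy, 3))    # lands closer to the expert prior (1.10)
```

This contrasts with the earlier COCOMO® II.1997 procedure, which applied the same 10%/90% regression/expert weighting to every parameter regardless of how precisely the data determined it.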


USC-CSE-98-514 Postscript, PDF

Rose/Architect: a tool to visualize architecture

Philippe Kruchten, Rational Software Vancouver
Alexander Egyed, USC-Center for Software Engineering

Rational Rose is a graphical software modeling tool, using the Unified Modeling Language (UML) as its primary notation. It offers an open API that allows the development of additional functionality (“add-ins”). In this paper, we describe Rose/Architect, a Rose™ “add-in” used to visualize architecturally-significant elements in a system’s design, developed jointly by University of Southern California (USC) and Rational Software. Rose/Architect can be used in forward engineering, marking architecturally significant elements as they are designed  and extracting architectural views as necessary. But it can be even more valuable in reverse engineering, i.e., extracting missing key architectural information from a complex model. This model may have been reverse-engineered from source code using the Rose reverse engineering capability.

Document characteristics: Accepted to HICSS, January 1999.
added: September 24, 1998


USC-CSE-98-513 PDF

Conceptual Modeling Challenges for Model-Based Architecting and Software Engineering (MBASE)

Barry Boehm and Dan Port, USC-Center for Software Engineering

The difference between failure and success in developing a software-intensive system can often be traced to the presence or absence of clashes among the models used to define the system’s product, process, property, and success characteristics.  (Here, we use a simplified version of one of Webster’s definitions of “model”: a description or analogy used to help visualize something. We include analysis as a form of visualization.)

Section 2 of this paper introduces the concept of model clashes, and provides examples of common clashes for each combination of product, process, property, and success models.  Section 3 introduces the Model-Based Architecting and Software Engineering (MBASE) approach for endowing a software project with a mutually supportive base of models.  Section 4 presents examples of applying the MBASE approach to a family of digital library projects.
Section 5 summarizes the main conceptual modeling challenges involved in the MBASE approach, including integration of multiple product views and integration of various classes of product, process, property, and success models.  Section 6 summarizes current conclusions and future prospects.

Document characteristics: to appear in Proceedings, Conceptual Modeling Symposium
added: August 20, 1998


USC-CSE-98-512 PDF

Using  the WinWin Spiral Model: A Case Study

Barry Boehm, Alexander Egyed, Dan Port, and Archita Shah, USC-Center for Software Engineering
Julie Kwan, USC University Libraries

Ray Madachy, USC-CSE and Litton Data Systems

Fifteen teams used the WinWin spiral model to prototype, plan, specify, and build multimedia applications for USC’s Integrated Library System. The authors report lessons learned from this case study and how they extended the model’s utility and cost-effectiveness in a second round of projects.

Document characteristics: Appeared in IEEE Computer, July 1998
added: August 18, 1998


USC-CSE-98-511 Postscript, PDF

A Stakeholder Win-Win Approach to Software Engineering Education

Barry Boehm, Alexander Egyed, Dan Port, and Archita Shah, USC-Center for Software Engineering
Julie Kwan, USC University Libraries
Ray Madachy, USC-CSE and Litton Data Systems

We are applying the stakeholder win-win approach to software engineering education. The key stakeholders we are trying to simultaneously satisfy are the students; the industry recipients of our graduates; the software engineering community as parties interested in improved practices; and ourselves as instructors and teaching assistants.
In order to satisfy the objectives or win conditions of these stakeholders, we formed a strategic alliance with the University of Southern California Libraries to have software engineering student teams work with Library clients to define, develop, and transition USC digital library applications into operational use.  This adds another set of key stakeholders: the Library clients of our class projects.
This paper summarizes our experience in developing, conducting, and iterating the course. It concludes by evaluating the degree to which we have been able to meet the stakeholder-determined course objectives.

Document characteristics: published in Annals of Software Engineering, 1999
added: August 18, 1998


USC-CSE-98-510 Postscript, PDF

The MBASE Life Cycle Architecture Milestone Package: No Architecture Is An Island

Barry Boehm, Dan Port, Alexander Egyed, and Marwan Abi-Antoun
USC-Center for Software Engineering

This paper summarizes the primary criteria for evaluating software/system architectures in terms of key system stakeholders’ concerns. It describes the Model Based Architecting and Software Engineering (MBASE) approach for concurrent definition of a system’s architecture, requirements, operational concept, prototypes, and life cycle plans. It summarizes our experiences in using and refining the MBASE approach on 31 digital library projects. It concludes that a Feasibility Rationale demonstrating consistency and feasibility of the various specifications and plans is an essential part of the architecture’s definition, and presents the current MBASE annotated outline and guidelines for developing such a Feasibility Rationale.

Document characteristics: published in WICSA '99
added: August 17, 1998


USC-CSE-98-509 Postscript, PDF

Improving the Life-Cycle Process in Software Engineering Education

Barry Boehm and Alexander Egyed, USC-Center for Software Engineering

The success of software projects and the resulting software products are highly dependent on the initial stages of the life-cycle process – the inception and elaboration stages. The most critical success factors in improving the outcome of software projects have often been identified as being the requirements negotiation and the initial architecting and planning of the software system.
Not surprisingly, this area has thus received strong attention in the research community. It has, however, been hard to validate the effectiveness and feasibility of new or improved concepts because they are often only shown to work in a simplified and hypothesized project environment. Industry, on the other hand, has been cautious in adopting unproven ideas. This has led to a form of deadlock between those parties.
In the last two years, we have had the opportunity to observe dozens of software development teams in planning, specifying and building library-related, real-world applications. This environment provided us with a unique way of introducing, validating and improving the life cycle process with new principles such as the WinWin approach to software development. This paper summarizes the lessons we have learned.

Document characteristics: Accepted to EUROMICRO '98 - Workshop for Software Process Improvement
added: August 17, 1998


USC-CSE-98-508 Postscript, PDF

Calibrating Software Cost Models Using Bayesian Analysis

Sunita Chulani and Barry Boehm, USC-Center for Software Engineering
Bert Steece, USC-Marshall School of Business

The COCOMO® II.1997 software cost estimation model was originally formulated using behavioral analyses and an expert-judgement Delphi process to determine initial values of its cost drivers and scale factors. Using a multiple regression analysis approach, we then calibrated the model on a dataset consisting of 83 projects. The regression analysis produced results that occasionally contradicted the expert-judgement results: e.g. making a product more reusable caused it to be less expensive rather than more expensive to develop. These counter intuitive results were due to the fact that the COCOMO® II database violated to some extent the following restrictions imposed by multiple linear regression [Briand92, Chulani98]:
(i) the number of datapoints should be large relative to the number of model parameters (i.e. there are many degrees of freedom). Unfortunately, collecting data has been and continues to be one of the biggest challenges in the software estimation field. This is caused primarily by immature processes and management reluctance to release cost-related data.
(ii) no data items are missing. Data frequently contains missing information because the data collection activity has a limited budget or because of a lack of understanding of the data being reported.
(iii) there are no outliers. Extreme cases frequently occur in software engineering data because there is lack of precision in the data collection process.
(iv) the predictor variables (cost drivers and scale factors) are not highly correlated. Unfortunately, because cost data is historically rather than experimentally collected, correlations among the predictor variables are unavoidable.


USC-CSE-98-507 Postscript, PDF

Telecooperation Experience with the WinWin System

Alexander Egyed and Barry Boehm, USC-Center for Software Engineering

WinWin is a telecooperation system supporting the definition of software-based applications as negotiated stakeholder win conditions. Our experience in using WinWin in defining over 30 digital library applications, including several telecooperation systems, is that it is important to supplement negotiation support systems such as WinWin with such capabilities as prototyping, tradeoff analysis tools, email, and videoconferencing. We also found that WinWin's social orientation around considering other stakeholders' win conditions has enabled stakeholders to achieve high levels of shared vision and mutual trust. Our subsequent experience in implementing the specified digital library systems in a rapidly changing web-based milieu indicated that achieving these social conditions among system stakeholders was more important than achieving precise requirements specifications, due to the need for team adaptability to requirements change. Finally, we found that the WinWin approach provides an effective set of methods of integrating ethical considerations into practical system definition processes via Rawls' stakeholder negotiation-based Theory of Justice.

Document characteristics: Accepted for IFIP'98


USC-CSE-98-506 PostScript, PDF

Incorporating Bayesian Analysis to Improve the Accuracy
of COCOMO® II and Its Quality Model Extension

Sunita Devnani-Chulani, USC-Center for Software Engineering

The three main highlights of this report are:

1. A simple modeling methodology that can be used to formulate software estimation models when software engineering data is scarce.

One of the biggest challenges faced by the software engineering community has been to make good decisions using data that is usually scarce and incomplete. Classical statistical techniques derive conclusions based on available sampling data. But, to make the best decision (especially with software engineering data) it is imperative that in addition to available sampling data we should incorporate prior information that is relevant. The modeling methodology developed helps make use of easily available expert judgment data along with sampling data in the decision making process.

2. A COCOMO® II Bayesian prototype

Using the above methodology, I developed a Bayesian prototype in an attempt to improve the accuracy of the existing COCOMO® II model. A formal process of how the Bayesian approach can be used to incorporate prior information, obtained by expert-judgment-based Delphi and other sources in software economics, into existing software engineering data was demonstrated. In many models, such prior information is informally used to evaluate the "appropriateness" of results. By describing the use of prior information along with sampling data, I have shown that it is possible to formally combine both these sources of information. An important aspect of formalizing the use of prior information is that others know what prior production functions are being used and can repeat the calibration calculations (or can incorporate different prior information in a similar way).

3. Quality model extension to COCOMO® II

Using the modeling methodology, a quality model extension to the existing COCOMO® model is being developed. This model facilitates cost/schedule/quality tradeoffs and provides insights on determining ship time. It enables 'what-if' analyses that demonstrate the effects of personnel, project, product and platform characteristics on software quality.


USC-CSE-98-505 HTML

Composing Components: How Does One Detect Potential Architectural Mismatches?

Cristina Gacek and Barry Boehm, USC-Center for Software Engineering

Nowadays, in order to be competitive, a developer's usage of Commercial off the Shelf (COTS), or Government off the Shelf (GOTS), packages has become a sine qua non, at times being an explicit requirement from the customer. The idea of simply plugging together various COTS packages and/or other existing parts results from the megaprogramming principles [Boehm and Scherlis 1992]. What people tend to trivialize is the side effects resulting from the plugging or composition of these subsystems. Some COTS vendors tend to preach that because their tool follows a specific standard, say CORBA, all composition problems disappear. Well, it actually is not that simple. Side effects resulting from the composition of subsystems are not just the result of different assumptions in communication methods by various subsystems, but also of differences in various other sorts of assumptions, such as the number of threads that are to execute concurrently, or even the load imposed on certain resources. This problem is referred to as architectural mismatches [Garlan et al. 1995] [Abd-Allah 1996]. Some but not all of these architectural mismatches can be detected via domain architecture characteristics, such as mismatches in additional domain interface types (units, coordinate systems, frequencies), going beyond the general interface types in standards such as CORBA.

Other researchers have successfully approached reuse at the architectural level by limiting their assets not by domain, but rather by dealing with a specific architectural style. That is, they support reuse based on limitations on the architectural characteristics of the various parts and resulting systems [Medvidovic et al. 1997] [Magee and Kramer 1996] [Allan and Garlan 1996]. This approach can be successful because it simply avoids the occurrence of architectural mismatches.

Our work addresses the importance of underlying architectural features in determining potential architectural mismatches while composing arbitrary components. We have devised a set of those features, which we call conceptual features [Abd-Allah 1996][Gacek 1997], and are building a model that uses them for detecting potential architectural mismatches. This underlying model has been built using Z [Spivey 1992].

To Appear In: Proceedings of the OMG-DARPA-MCC Workshop on Compositional Software Architectures, January 1998 (accepted both for participation and presentation) ( http://www.objs.com/workshops/ws9801/)


USC-CSE-98-504 Postscript, PDF

Calibration Approach and Results of the COCOMO® II Post-Architecture Model

Sunita Devnani-Chulani, Brad Clark, and Barry Boehm, USC-Center for Software Engineering
Bert Steece, USC-Marshall School of Business

This paper describes our experience and results of the first calibration of the Post-Architecture model. The model determination process began with an expert Delphi process to determine a priori values for the Post-Architecture model parameters. A dataset of 83 projects was used in the multiple regression analysis. Projects with missing data or unexplainable anomalies were dropped. Model parameters that exhibited high correlation were consolidated. Multiple regression analysis was used to produce coefficients. These coefficients were used to adjust the previously assigned expert-determined model values. Stratification was used to improve model accuracy.
The resulting model produced estimates within 30% of the actuals 52% of the time for effort.  Stratification by organization resulted in a model that produced estimates within 30% of the actuals 64% of the time for effort.  It is therefore recommended that organizations using the model calibrate it using their own data. This increases model accuracy and produces a local optimum estimate for similar type projects.
The next calibration of COCOMO® II will be done by using Bayesian techniques to incorporate prior knowledge and the sampling data information to determine the posteriori model.

Document Characteristics: Proceedings, ISPA '98.


USC-CSE-98-503 Postscript, PDF

Modeling Software Defect Introduction

Sunita Devnani-Chulani, USC-Center for Software Engineering

In software estimation, it is important to recognize the strong relationships between Cost, Schedule and Quality. They form three sides of the same triangle. Beyond a certain point (the “Quality is Free” point), it is difficult to increase the quality without increasing either the cost or schedule or both for the software under development. Similarly, development schedule cannot be drastically compressed without hampering the quality of the software product and/or increasing the cost of development.  Software estimation models can play an important role in facilitating the balance of the three factors.

Document Characteristics: Proceedings, California Software Symposium '97.


USC-CSE-98-502 Postscript , PDF

Calibrating the COCOMO® II Post-Architecture Model

Sunita Devnani-Chulani, Bradford Clark, and Barry Boehm, USC-Center for Software Engineering

The COCOMO® II model was created to meet the need for a cost model that accounted for future software development practices. This resulted in the formulation of three submodels for cost estimation, one for composing applications, one for early lifecycle estimation and one for detailed estimation when the architecture of the product is understood. This paper describes the calibration procedures for the last model, Post-Architecture COCOMO® II model, from eighty-three observations. The results of the multiple regression analysis and their implications are discussed. Future work includes further analysis of the Post-Architecture model, calibration of the other models, derivation of maintenance parameters, and refining the effort distribution for the model output.

Document Characteristics: Proceedings, ICSE 20, April 1998, pp. 477-480


USC-CSE-98-501 Postscript, PDF

Software Requirements Negotiation: Some Lessons Learned

Barry Boehm and Alexander Egyed, USC-Center for Software Engineering

Negotiating requirements is one of the first steps in any software system life cycle, but its results have probably the most significant impact on the system's value. However, the processes of requirements negotiation are not well understood. We have had the opportunity to capture and analyze requirements negotiation behavior for groups of projects developing library multimedia archive systems, using an instrumented version of the USC WinWin groupware system for requirements negotiation. Some of the more illuminating results were:

         Most stakeholder Win Conditions were non-controversial (were not involved in Issues)

         Negotiation activity varied by stakeholder role.

         LCO package quality (measured by grading criteria) could be predicted by negotiation attributes.

         WinWin increased cooperativeness, reduced friction, and helped focus on key issues.

Document characteristics: Proceedings, ICSE 20, April 1998, pp. 503-506


USC-CSE-98-500 Postscript, PDF

A Comparison Study in Software Requirements Negotiation

Alexander Egyed and Barry Boehm, USC-Center for Software Engineering

In a period of two years, two rather independent experiments were conducted at the University of Southern California. In 1995, 23 three-person teams negotiated the requirements for a hypothetical library system. Then in 1996, 14 six-person teams negotiated the requirements for real multimedia related library systems.

A number of hypotheses were created to test how real software projects differ from hypothetical ones. Other hypotheses address differences in uniformity and repeatability.

The results indicate that repeatability in 1996 was even harder to achieve than in 1995 (Egyed-Boehm, 1996). Nevertheless, this paper presents some surprising commonalities between both years that indicate some areas of uniformity.

In both years, the same overall development process (spiral model) was followed, the same negotiation tools (WinWin System) were used, and the same people were doing the analysis of the findings. Thus, the comparison is less blurred by fundamental differences like terminology, process, etc.

Document characteristics: Proceedings, INCOSE'98




Copyright 1995, 1996, 1997, 1998, 1999 The University of Southern California

The written material, text, graphics, and software available on this page and all related pages may be copied, used, and distributed freely as long as the University of Southern California as the source of the material, text, graphics or software is always clearly indicated and such acknowledgement always accompanies any reuse or redistribution of the material, text, graphics or software; also permission to use the material, text, graphics or software on these pages does not include the right to repackage the material, text, graphics or software in any form or manner and then claim exclusive proprietary ownership of it as part of a commercial offering of services or as part of a commercially offered product.