Technical Reports



USC-CSE-99-530 PDF

Cost vs. Quality Requirements: Conflict Analysis and Negotiation Aids

Barry Boehm, Hoh In

The process of resolving conflicts among software quality requirements is complex and difficult because of incompatibility among stakeholders' interests and priorities, complex cost-quality requirements dependencies, and an exponentially increasing resolution option space for larger systems. This paper describes an exploratory knowledge-based tool, the Software Cost Option Strategy Tool (S-COST), which assists stakeholders to 1) surface appropriate resolution options for cost-quality conflicts; 2) visualize the options; and 3) negotiate a mutually satisfactory balance of quality requirements and cost.

S-COST operates in the context of the USC-CSE WinWin system (a groupware support system for determining software and system requirements as negotiated win conditions), QARCC (Quality Attribute and Risk Conflict Consultant, a support system for identifying quality conflicts in software requirements), and COCOMO® (COnstructive COst MOdel). Initial analysis of its capabilities indicates that its semiautomated approach provides users with improved capabilities for addressing cost-quality requirements issues.

Document characteristics: Software Quality Professional, March 1999
added: November 11, 2005


USC-CSE-99-529 PDF

Towards a Taxonomy of Software Connectors

Nikunj Mehta, Nenad Medvidovic and Sandeep Phadke. USC Center for Software Engineering

Software systems of today are frequently composed from prefabricated, heterogeneous components that provide complex functionality and engage in complex interactions. Existing research on component-based development has mostly focused on component structure, interfaces, and functionality. Recently, software architecture has emerged as an area that also places significant importance on component interactions, embodied in the notion of software connectors. However, the current level of understanding and support for connectors has been insufficient. This has resulted in their inconsistent treatment and a notable lack of understanding of what the fundamental building blocks of software interaction are and how they can be composed into more complex interactions. This paper attempts to address this problem. It presents a comprehensive classification framework and taxonomy of software connectors. The taxonomy is used both to understand existing software connectors and to suggest new, unprecedented connectors. We demonstrate the use of the taxonomy on the architecture of an existing, large system.

Document characteristics: Submitted to the 21st International Conference on Software Engineering.
added: November 16, 1999


USC-CSE-99-528 PDF

The Effects of CASE Tools on Software Development Effort

Jongmoon Baik, USC Center for Software Engineering

It is common knowledge that software tools play a critical role in the software engineering process by improving software quality and productivity. A huge number of CASE (Computer-Aided Software Engineering) tools have been produced since the late 1970s to assist tasks in the software development process. Many studies in the CASE field were done in the 1980s and early 1990s to provide more effective CASE technologies and environments. While research in this field is no longer as active, software developers use a range of CASE tools, typically assembled over time, to support tasks throughout the software process. The diversity and proliferation of software tools in the current CASE market make it difficult to understand what kinds of tasks are supported and how much effort can be reduced by using software tools in a software development process. A major challenge for the software engineering community is to alleviate these difficulties. The primary goals of this research are to establish a framework for classifying software tools according to their support in a software life cycle, to provide tool rating scales with which software tools can be effectively evaluated, and to analyze the effect of software tools on software development effort.

Document characteristics: Qualifying Report for partial fulfillment of Computer Science Department requirements
added: November 9, 1999


USC-CSE-99-527 PDF

Software Connectors and Refinement in Product Families

Alexander Egyed, Nikunj Mehta, and Nenad Medvidovic. USC Center for Software Engineering

Product families promote reuse of software artifacts such as architectures, designs and implementations. Product family architectures are difficult to create due to the need to support variations. Traditional approaches emphasize the identification and description of generic components which prove too rigid to support variations in each product. This paper presents an approach that supports analyzable family architectures using generic software connectors that provide bounded ambiguity and support flexible product families. It describes the transformation from a family architecture to a product design through a four-way refinement and evolution process.

Document characteristics: submitted to International Workshop on Software Architecture Families '2000
added: November 2, 1999


USC-CSE-99-526 PDF

A Formal Approach to Heterogeneous Software Modeling

Alexander Egyed and Nenad Medvidovic, USC Center for Software Engineering

The problem of consistently engineering large, complex software systems of today is often addressed by introducing new, “improved” models. Examples of such models are architectural, design, structural, and behavioral models. Each software model is intended to highlight a particular view of a desired system. A combination of multiple models is needed to represent and understand the entire system. Ensuring that the various models used in development are consistent with each other thus becomes a critical concern. This paper presents an approach that integrates an architectural model with a number of design models and ensures consistency across them. The goal of this work is to combine the respective strengths of a powerful, specialized (architecture-based) modeling approach with a widely used, general (design-based) approach. We have formally addressed the various details of our approach, which has allowed us to construct a large set of supporting tools to automate the related development activities. We use an example application throughout the paper to illustrate the concepts.

Document characteristics: submitted to FASE'2000
added: November 2, 1999


USC-CSE-99-525 PDF

Supporting Distributed Collaborative Prioritization

Jung-Won Park, Daniel Port, Barry Boehm
USC Center for Software Engineering

Software developers are seldom able to implement stakeholders' requirements fully when time and resources are limited. To address this, requirements engineers together with the stakeholders must prioritize the requirements. The problem is exacerbated when the stakeholders are not all in the same place and/or cannot collaborate at the same time. We have constructed a system called the Distributed Collaboration and Prioritization Tool (DCPT) to support distributed, collaborative prioritization. In this paper, we discuss the prioritization model implemented within DCPT and give examples of using the tool. We also discuss DCPT's integration with USC's WinWin requirements capture and negotiation system.

Document characteristics: Accepted for APSEC'99
added: Sept. 17, 1999


USC-CSE-99-524 PDF

Software Effort and Schedule Estimation Using The Constructive Cost Model: COCOMO® II

Jongmoon Baik, Sunita Chulani, Ellis Horowitz
USC Center for Software Engineering

During development of a software product, several questions arise:

How long will it take to develop?

How much will it cost?

How many people will be needed?

In answering these questions, several others arise:

What are the risks involved if we compress the schedule by a certain fraction?

Can we invest more in strategies such as tools, reuse, and process maturity and get higher productivity, quality, and shorter cycle times?

How can the cost and schedule be broken down by component, stage, and activity?

COCOMO® II facilitates the planning process by enabling one to answer the above questions using a parametric model that has been calibrated to actual completed software projects collected from commercial, aerospace, government, and non-profit organizations. COCOMO® II consists of three submodels, Applications Composition, Early Design, and Post-Architecture, each offering increased fidelity the further along one is in the project planning and design process; only the Early Design and Post-Architecture models have been calibrated and implemented in the software.
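
To make the parametric form concrete, here is a minimal sketch, in Python, of the general shape of the effort equation used by such models; the constants and all ratings below are illustrative placeholders, not the calibrated COCOMO® II values.

    # A minimal sketch of the general form of the COCOMO II effort equation:
    #   PM = A * Size^E * product(EM_i), with E = B + 0.01 * sum(SF_j)
    # The constants A, B and all ratings below are illustrative placeholders,
    # not the calibrated values reported by the COCOMO II team.
    from math import prod

    def effort_person_months(ksloc, scale_factors, effort_multipliers,
                             A=2.94, B=0.91):
        E = B + 0.01 * sum(scale_factors)   # diseconomy-of-scale exponent
        return A * ksloc ** E * prod(effort_multipliers)

    # Example: a 100-KSLOC project with nominal (hypothetical) ratings.
    pm = effort_person_months(100.0,
                              scale_factors=[3.72, 3.04, 4.24, 3.29, 4.68],
                              effort_multipliers=[1.0] * 17)
    print(f"Estimated effort: {pm:.1f} person-months")

The three submodels differ mainly in which cost drivers are available at each stage; only the exponent-and-multipliers shape is sketched here.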

Document characteristics: Submitted for ICSE 99 Informal Demo.
added: July 14, 1999


USC-CSE-99-523 PDF

When Models Collide: Lessons From Software System Analysis

Barry Boehm, USC Center for Software Engineering
Dan Port, Columbia University

This paper analyzes several classes of model clashes encountered on large, failed IT projects (e.g., Confirm, Master Net), and shows how the MBASE approach could have detected and resolved the clashes.

The first step in developing either an application or a system is to visualize it. And when you visualize a system, you can’t help but use intellectual models to reason about what you’re building and how it will behave. The model can be a pattern you follow or an analogy you use. Whatever the form, models are ubiquitous: Developers use them in building a small stand-alone package or a large custom system. Customers use them to visualize what they think they’re getting from developers.

Models are very powerful. When you follow a model that makes sense, you get the feeling you’re doing everything right. That’s why expert programmers/systems analysts feel perfectly comfortable specifying an application that only other programmers or analysts can use.

Models are also deep-rooted. A model can be so natural that you forget you’re using it, which is why models are seldom blamed when a project goes wrong. Instead, surface remedies are applied: Requirements are reestablished, managers fired, tools purchased, and standards imposed.

Part of the problem is a lack of awareness training. Few people have the background to recognize what’s really going on underneath it all. Very often different stakeholders have unconsciously adopted different assumptions about what they need and want. Sooner or later, these different models are bound to conflict.

Building, enhancing, and maintaining any IT system involves building four basic models: success, product, process, and property. Not one at a time, but concurrently and continuously. When these models collide, it creates confusion, mistrust, frustration, rework, and throwaways. It loses money, and it costs time. Model clashes can leave everyone involved, especially the developer, feeling as if they are slogging through a tar pit. And no surface remedy yet discovered can fix them.

Document characteristics: Published in IT Professional, Jan-Feb 1999
added: July 8, 1999


USC-CSE-99-522

WinWin: a System for Negotiating Requirements

Ellis Horowitz, Joo H. Lee, and June Sup Lee

WinWin is a system that aids in the capture and recording of system requirements. It also assists in negotiation. The WinWin system has been available for several years and is being used by dozens of software development groups. In this presentation we will go over the capabilities of the system and discuss how it might be used on your software development project.

Document characteristics: Published in ICSE'99
added: July 8, 1999


USC-CSE-99-521 PDF

Comparing Software System Requirements Negotiation Patterns

Alexander Egyed and Barry Boehm, USC Center for Software Engineering

In a period of two years, two rather independent experiments were conducted at the University of Southern California (USC). In 1995, 23 three-person teams negotiated the requirements for a hypothetical library system. Then, in 1996, 14 six-person teams negotiated the requirements for real-world digital library systems.

A number of hypotheses were created to test how more realistic software projects differ from hypothetical ones. Other hypotheses address differences in uniformity and repeatability of negotiation processes and results. The results indicate that repeatability in 1996 was even harder to achieve than in 1995. Nevertheless, this paper presents some surprising commonalities between both years that indicate some areas of uniformity.

Specifically, we found that the more realistic projects required more time to resolve conflicts and to identify options (alternatives) than the hypothetical ones. Further, the 1996 projects created more artifacts although they exhibited less artifact interconnectivity, implying a more divide-and-conquer negotiation approach. In terms of commonalities, we found that people factors such as experience did have effects on negotiation patterns (especially in 1996), and that users and customers were most significant (in terms of artifact creation) during goal identification, whereas the developers were more significant in identifying issues (conflicts) and options. We also found that both years exhibited strange, though similar, disproportionate stakeholder participation.

Document characteristics: Published in Journal for Systems Engineering
added: July 7, 1999


USC-CSE-99-520 PDF

Optimizing Software Product Integrity through Life-Cycle Process Integration

Barry Boehm and Alexander Egyed

Managed and optimized: these are the names of levels 4 and 5 of the Capability Maturity Model (CMM), respectively. With them, the Software Engineering Institute (SEI) pays tribute to the fact that, after the process has been defined, higher process maturity, and with it higher product maturity, can only be achieved by improving and optimizing the life-cycle process itself.

In the last three years, we have had the opportunity to observe more than 50 software development teams planning, specifying and building library-related, real-world applications. This environment provided us with a unique way of introducing, validating and improving the life-cycle process with new principles such as the WinWin approach to software development.

This paper summarizes the lessons we have learned in our ongoing endeavor to integrate the WinWin life-cycle process. We not only describe which techniques have proven useful in getting the developer's task done, but also offer some insight into how to tackle process improvement itself. As more and more companies reach CMM level 2 or higher, this task of managing and optimizing the process becomes increasingly important.

Document characteristics: Published in Journal for Computer Standards and Interfaces
added: July 7, 1999


USC-CSE-99-519 PDF

Extending Architectural Representation in UML with View Integration

Alexander Egyed, USC-Center for Software Engineering
Nenad Medvidovic, USC-Computer Science Department

UML has established itself as the leading OO analysis and design methodology. Recently, it has also been increasingly used as a foundation for representing numerous (diagrammatic) views that are outside the standardized set of UML views. One example is architecture description languages. The main advantages of representing other types of views in UML are 1) a common data model and 2) a common set of tools that can be used to manipulate that model. However, attempts at representing additional views in UML usually fall short of their full integration with existing views. Integration extends representation by also describing interactions among multiple views, thus capturing the inter-view relationships. This work describes a view integration framework and demonstrates how an architecture description language, which was previously only represented in UML, can now be fully integrated into UML.

Document characteristics: published in UML'99.
added: May 13, 1999


USC-CSE-99-518 PDF

Automatically Detecting Mismatches during Component-Based and Model-Based Integration

Alexander Egyed, USC-Center for Software Engineering
Cristina Gacek, Fraunhofer IESE, Germany

A major emphasis in software development is placed on identifying and reconciling architectural and design mismatches. Those mismatches happen during software development on two levels: while composing system components (e.g. COTS or in-house developed) and while reconciling view perspectives. Composing components into a system and 'composing' views (e.g. diagrams) into a system model are often seen as somewhat distinct aspects of software development; however, as this work shows, their approaches to detecting mismatches complement each other very well. In both cases, the composition process may result in mismatches that are caused by clashes between development artefacts. Our component-based integration approach is more high-level and can be used early on for risk assessment while little information is available. Model-based integration, on the other hand, needs more information to start with but is more precise and can handle large amounts of redundant information. This paper describes both integration approaches and discusses their commonalities and differences. Both integration approaches are automatable and some tool support is already available.

Document characteristics: published in ASE'99.
added: May 13, 1999


USC-CSE-99-517 PDF

Trace Observer: A Reengineering Approach to View Integration

Alexander Egyed, USC-Center for Software Engineering

Developing software in phases (stages) using multiple views (e.g., diagrams) is the major cause of inconsistencies between and within views. Views exhibit redundancies because they repeatedly use the same modeling information for the sake of representing related information within different perspectives. As such, redundancy becomes a vital ingredient in handling complexity by allowing a complex problem (model) to be divided into smaller, comprehensible problems (closed-world assumption).

However, this type of approach comes with a price tag: redundant views must be kept consistent and, at a time when more and more development methodologies are used, this task becomes very time consuming and costly. We have therefore investigated ways to automate the identification of view mismatches, and to this end we have created a view integration framework. This paper describes this framework and shows how scenario executions and their observations can help in automating parts of it. Trace Observer, as this technique is called, can assist in cross-referencing (mapping) high-level model elements, and it may also be used for transforming model elements so that different types of views may interpret them.

Document characteristics: N/A.
added: May 13, 1999


USC-CSE-99-516 PDF

Supporting Distributed Collaborative Prioritization for WinWin Requirements Capture and Negotiations

Jung-Won Park, Daniel Port, Barry Boehm, USC-Center for Software Engineering
Hoh In, Texas A&M University, Computer Science Department

One of the most common problems within a risk-driven collaborative software development effort is prioritizing items such as requirements, goals, and stakeholder win conditions. Requirements have proven particularly sticky in this regard, as they often cannot be fully implemented when time and resources are limited, which introduces additional risk to the project. A practical approach to mitigating this risk, in alignment with the WinWin development approach, is to have the critical stakeholders for the project collaboratively negotiate requirements into priority bins, which are then scheduled into an appropriate incremental development life cycle.

We have constructed a system called the Distributed Collaboration Priorities Tool (DCPT) to assist in collaborative prioritization of development items. DCPT offers a structurally guided approach to collaborative prioritization, much in the spirit of USC's WinWin requirements capture and negotiation system. In this paper, we discuss the prioritization models implemented within DCPT via an actual prioritization of new WinWin system features. We also discuss DCPT's two-way integration with the WinWin system, some experiences using DCPT, and current research directions.
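
As a rough illustration of the priority-bin idea only (a hypothetical sketch, not the prioritization model DCPT actually implements), one can imagine averaging stakeholder ratings and mapping each item to a bin:

    # Hypothetical sketch of sorting items into priority bins from stakeholder
    # ratings; illustrates the "priority bins" idea only, not DCPT's model.
    from statistics import mean

    ratings = {  # item -> stakeholder ratings (1 = low .. 9 = high), made up
        "login screen":   [9, 8, 7],
        "report export":  [4, 6, 5],
        "custom theming": [2, 3, 1],
    }

    def bin_for(score):
        if score >= 7.0:
            return "Increment 1 (must have)"
        if score >= 4.0:
            return "Increment 2 (should have)"
        return "Deferred (could have)"

    for item, scores in ratings.items():
        print(item, "->", bin_for(mean(scores)))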

Document characteristics: Proceedings of 3rd World Multiconference on Systemics, Cybernetics and Informatics (SCI'99), Vol. 2, pp. 578-584, IIIS
added: April 5, 1999


USC-CSE-99-515 PDF

Using Patterns to Integrate UML Views

Alexander Egyed, USC-Center for Software Engineering

Patterns play a major role during system composition (synthesis) in fostering the reuse of repeatable design and architecture configurations. This paper investigates how knowledge about patterns may also be used for system analysis to verify the conceptual integrity of the system model.

To support an automated analysis process, this work introduces a view integration framework. Since each view (e.g. diagram) adds an additional perspective of the software system to the model, information from one view may be used to validate the integrity of other views. This form of integration requires a deeper understanding of what the views mean and what information they can share (or constrain). Knowledge about patterns, both in structure and behavior, is thereby a valuable source for view integration automation.
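
As a toy illustration of one view validating another (our own construction, not the framework from the paper), consider checking that every message in a sequence view names an operation that the class view declares on the receiver:

    # Toy cross-view consistency check (illustrative only): messages in a
    # sequence view must match operations declared in the class view.
    class_view = {"Order": {"add_item", "total"}, "Cart": {"checkout"}}
    sequence_view = [("Cart", "checkout"), ("Order", "add_item"),
                     ("Order", "cancel")]  # (receiver class, message)

    for receiver, message in sequence_view:
        if message not in class_view.get(receiver, set()):
            print(f"Mismatch: {receiver} has no operation '{message}'")

Running this reports the one inconsistent message ("cancel"), the kind of mismatch a view integration framework would surface automatically.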

Document characteristics:
added: April 2, 1999


USC-CSE-99-514 PDF

Integrating Architectural Views in UML

Alexander Egyed, USC-Center for Software Engineering

To support the development of software products we frequently make use of general-purpose software development models and tools such as the Unified Modeling Language (UML). However, software development in general and software architecting in particular (which is the main focus of our work) require more than what those general-purpose models can provide. Architecting is about:

1) modeling the real problem adequately
2) solving the model problem and
3) interpreting the model solution in the real world

In doing so, a major emphasis is placed on mismatch identification and reconciliation within and among architectural views (such as diagrams). We often find that this latter aspect, the analysis and interpretation of (architectural) descriptions, is under-emphasized in most general-purpose languages. We architect not only because we want to build (compose) but also because we want to understand. Thus, architecting has a lot to do with analyzing and verifying the conceptual integrity, consistency, and completeness of the product model.

The emergence of the Unified Modeling Language (UML), which has become a de-facto standard for OO software development, is no exception to that. This work describes causes of architectural mismatches in UML views and shows how integration techniques can be applied to identify and resolve them in a more automated fashion. In order to do so, this work introduces a view integration framework and describes its major activities – Mapping, Transformation, and Differentiation. To deal with the integration complexity and scalability of our approach, the concept of VIR (view independent representation) is introduced and described.

Document characteristics: Qualifying Report for partial fulfillment of Computer Science Department requirements.
added: March 11, 1999


USC-CSE-99-513 Postscript, PDF

Bayesian Analysis of Empirical Software Engineering Cost Models

Sunita Chulani and Barry Boehm, USC-Center for Software Engineering
Bert Steece, USC-Marshall School of Business

The most commonly used technique for empirical calibration of software cost models has been the popular classical multiple regression approach. As discussed in this paper, the multiple regression approach imposes a few assumptions frequently violated by software engineering datasets. The source data is also generally imprecise in reporting size, effort and cost-driver ratings, particularly across different organizations. This results in the development of inaccurate empirical models that don't perform very well when used for prediction. This paper illustrates the problems faced by the multiple regression approach during the calibration of one of the popular software engineering cost models, COCOMO® II. It describes the use of a pragmatic 10% weighted average approach that was used for the first publicly available calibrated version [Clark98]. It then moves on to show how a more sophisticated Bayesian approach can be used to alleviate some of the problems faced by multiple regression. It compares and contrasts the two empirical approaches, and concludes that the Bayesian approach was better and more robust than the multiple regression approach.

Bayesian analysis is a well-defined and rigorous process of inductive reasoning that has been used in many scientific disciplines [the reader can refer to Gelman95, Zellner83, Box73 for a broader understanding of the Bayesian Analysis approach]. A distinctive feature of the Bayesian approach is that it permits the investigator to use both sample (data) and prior (expert-judgement) information in a logically consistent manner in making inferences. This is done by using Bayes' theorem to produce a 'post-data' or posterior distribution for the model parameters. Using Bayes' theorem, prior (or initial) values are transformed to post-data views. This transformation can be viewed as a learning process. The posterior distribution is determined by the variances of the prior and sample information. If the variance of the prior information is smaller than the variance of the sampling information, then a higher weight is assigned to the prior information. On the other hand, if the variance of the sample information is smaller than the variance of the prior information, then a higher weight is assigned to the sample information causing the posterior estimate to be closer to the sample information.
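
In the standard normal-normal case this precision weighting takes a simple closed form (a textbook identity, shown here for illustration rather than taken from the paper):

    \mu_{post} = \frac{\sigma_{prior}^{-2}\,\mu_{prior} + \sigma_{data}^{-2}\,\mu_{data}}
                      {\sigma_{prior}^{-2} + \sigma_{data}^{-2}},
    \qquad
    \sigma_{post}^{-2} = \sigma_{prior}^{-2} + \sigma_{data}^{-2}

Whichever source has the smaller variance (higher precision) receives the larger weight, which is exactly the behavior described above.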

The Bayesian approach discussed in this paper enables stronger solutions to one of the biggest problems faced by the software engineering community: the challenge of making good decisions using data that is usually scarce and incomplete. We note that the predictive performance of the Bayesian approach (i.e. within 30% of the actuals 75% of the time) is significantly better than that of the previous multiple regression approach (i.e. within 30% of the actuals only 52% of the time) on our latest sample of 161 project datapoints.
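
For reference, "within 30% of the actuals X% of the time" is the standard PRED(.30) accuracy metric; a minimal sketch of how such a figure is computed (with made-up numbers) follows:

    # PRED(L): fraction of projects whose estimate falls within L of the
    # actual value. The data points below are made up for illustration.
    def pred(actuals, estimates, level=0.30):
        hits = sum(abs(e - a) / a <= level
                   for a, e in zip(actuals, estimates))
        return hits / len(actuals)

    actuals = [100, 250, 80, 400]     # hypothetical actual efforts
    estimates = [110, 180, 95, 390]   # hypothetical model estimates
    print(f"PRED(.30) = {pred(actuals, estimates):.0%}")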

Document characteristics: Accepted for IEEE-TSE; Special Issue on Empirical Methods


USC-CSE-99-512 PDF

Making RAD Work for Your Project
(Extended version of March 1999 IEEE Computer column)

Barry Boehm, USC-Center for Software Engineering

A significant recent trend we have observed among our USC Center for Software Engineering's industry and government Affiliates is that reducing the schedule of a software development project has become considerably more important than reducing its cost. This led to an Affiliates' Workshop on Rapid Application Development (RAD) to explore its trends and issues. Some of the main things we learned at the workshop were:

- There are good business reasons why software development schedule is often more important than cost.

- There are various forms of RAD. None are best for all situations. Some are to be avoided in all situations.

- For mainstream software development projects, we could construct a RAD Opportunity Tree which helps sort out the best RAD mixed strategy for a given situation.

Document characteristics: Appears in part in the March 1999 issue of IEEE Computer.
added: March 8, 1999


USC-CSE-99-511 Postscript, PDF

Automating Architectural View Integration in UML

Alexander Egyed, USC-Center for Software Engineering

Architecting software systems requires more than what general-purpose software development models can provide. Architecting is about modeling, solving, and interpreting; in doing so, it places a major emphasis on mismatch identification and reconciliation within and among architectural views (such as diagrams). The emergence of the Unified Modeling Language (UML), which has become a de-facto standard for OO software development, is no exception. This work describes causes of architectural mismatches for UML views and shows how integration techniques can be applied to identify and resolve them in a more automated fashion.

Document characteristics:
added: March 3, 1999


USC-CSE-99-510 PostScript, PDF

Modeling Software Defect Introduction and Removal: COQUALMO (COnstructive QUALity MOdel)

Sunita Chulani and Barry Boehm, USC-Center for Software Engineering

Cost, schedule and quality are highly correlated factors in software development. They basically form three sides of the same triangle. Beyond a certain point (the "Quality is Free" point), it is difficult to increase the quality without increasing either the cost or schedule or both for the software under development. Similarly, development schedule cannot be drastically compressed without hampering the quality of the software product and/or increasing the cost of development. Watts Humphrey, at the LA SPIN meeting in December '98, highlighted that "Measuring Productivity without caring about Quality has no meaning". Software estimation models can (and should) play an important role in facilitating the balance of cost/schedule and quality.

Recognizing this important association, an attempt is being made to develop a quality model extension to COCOMO® II; namely COQUALMO. An initial description of this model focusing on defect introduction was provided in [Chulani97a]. The model has evolved considerably since then and is now very well defined and calibrated to Delphi-gathered expert opinion. The data collection activity is underway and the aim is to have a statistically calibrated model by the onset of the next millennium.
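
As a rough illustration of the defect-flow idea behind such a model, the sketch below reduces hypothetical introduction rates by successive removal activities; none of the numbers are COQUALMO's calibrated values.

    # Hedged sketch of a defect introduction/removal flow (all numbers are
    # hypothetical, not COQUALMO's calibrated values): defects introduced
    # per artifact type are reduced by independent removal activities.
    from math import prod

    introduced = {"requirements": 10.0, "design": 20.0, "code": 30.0}  # defects/KSLOC

    removal_fractions = {  # fraction removed by each activity (hypothetical)
        "requirements": [0.40, 0.30],
        "design":       [0.30, 0.50],
        "code":         [0.20, 0.50, 0.60],
    }

    for artifact, rate in introduced.items():
        residual = rate * prod(1.0 - f for f in removal_fractions[artifact])
        print(f"{artifact}: {residual:.1f} residual defects/KSLOC")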

The many benefits of cost/quality modeling include:

- Resource allocation: The primary, but not the only important, use of software estimation is budgeting for the development life cycle.

- Tradeoff and risk analysis: An important capability is to enable 'what-if' analyses that demonstrate the impact of various defect removal techniques and the effects of personnel, project, product and platform characteristics on software quality. A related capability is to illuminate the cost/schedule/quality trade-offs and sensitivities of software project decisions such as scoping, staffing, tools, reuse, etc.

- Time to market initiatives: An important additional capability is to provide cost/schedule/quality planning and control by providing breakdowns by component, stage and activity to facilitate time-to-market initiatives.

- Software quality improvement investment analysis: A very important capability is to estimate the costs and defect densities and assess the return on investment of quality initiatives such as use of mature tools, peer reviews and disciplined methods.



Copyright 1995, 1996, 1997, 1998, 1999 The University of Southern California

The written material, text, graphics, and software available on this page and all related pages may be copied, used, and distributed freely as long as the University of Southern California as the source of the material, text, graphics or software is always clearly indicated and such acknowledgement always accompanies any reuse or redistribution of the material, text, graphics or software; also permission to use the material, text, graphics or software on these pages does not include the right to repackage the material, text, graphics or software in any form or manner and then claim exclusive proprietary ownership of it as part of a commercial offering of services or as part of a commercially offered product.