The design of software architectures for dependable and adaptable software systems remains an area of active research. Ideally, an architect should be able to put together a recipe for constructing a software system, relying on past experience and well-known architectural principles, using compositional techniques and notations. One approach to reducing the difficulty of designing and constructing software architectures aims at high-level reuse through architectural styles, which guide the design of a family of similar software systems. An architectural style is a coordinated set of constraints on a family of related architectures; styles embody high-level reuse, support stylistic analysis of architectural properties, and guarantee those properties across all architectures that conform to the style.
Compliance & Recovery
A software system's architecture is supposed to be an effective reification of the system's technical requirements and to be faithfully reflected in the system's implementation. Furthermore, the architecture is meant to guide system evolution, while also being updated in the process. However, in reality developers frequently deviate from the architecture, causing architectural erosion, a phenomenon in which the initial, "as documented" architecture of an application is (arbitrarily) modified to the point where its key properties no longer hold. Architectural recovery is a process frequently used to cope with architectural erosion whereby the current, "as implemented" architecture of a software system is extracted from the system's implementation. Compliance checking technologies are used to map the "as documented" architecture onto the system's implementation and to report architectural differences and violations. Consequently, continuous compliance monitoring can help to prevent architectural erosion.
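As a minimal sketch of the compliance-checking idea, the documented, "as intended" architecture can be represented as a set of permitted dependencies between components, and the implemented, "as built" dependencies (extracted from the code base) can be diffed against it. The component names and dependencies below are illustrative, not taken from any real system or tool.

```python
# Hypothetical compliance check: any implemented dependency that the
# documented architecture does not permit is reported as a violation.

documented = {
    ("UI", "BusinessLogic"),
    ("BusinessLogic", "DataStore"),
}

implemented = {
    ("UI", "BusinessLogic"),
    ("BusinessLogic", "DataStore"),
    ("UI", "DataStore"),  # a layer-skipping shortcut introduced by developers
}

def check_compliance(documented, implemented):
    """Return implemented dependencies absent from the documented architecture."""
    return sorted(implemented - documented)

violations = check_compliance(documented, implemented)
print(violations)  # [('UI', 'DataStore')]
```

Run continuously (e.g., on every commit), even a check this simple flags the layer-skipping dependency before it can accumulate into wholesale erosion.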
The ability to predict the dependability of a software system early in its
development, e.g., during architectural design, can help to improve
the system's quality in a cost-effective manner. Existing architecture-
level dependability prediction approaches focus on system-level
dependability and assume that the reliabilities of individual components
are known. In general, this assumption is unreasonable. Consequently,
component dependability prediction is an important missing
ingredient in the current architecture-level dependability prediction literature.
Early prediction of component dependability is a challenging
problem because of the many uncertainties associated with components.
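To illustrate why component-level estimates matter for system-level prediction, here is a minimal sketch under two strong, stated assumptions: components fail independently, and every component is exercised on each system execution (a purely serial composition). The component names and reliability figures are hypothetical.

```python
# Architecture-level reliability prediction sketch: in a serial composition,
# the system succeeds only if every component does, so system reliability is
# the product of component reliabilities. All numbers are hypothetical.
from math import prod

def system_reliability(component_reliabilities):
    """Serial composition under an independence assumption."""
    return prod(component_reliabilities.values())

components = {"Parser": 0.999, "Planner": 0.99, "Actuator": 0.95}
print(round(system_reliability(components), 4))  # 0.9396

# Because individual component reliabilities are generally unknown early in
# development, it is more honest to propagate interval estimates per component:
low  = system_reliability({"Parser": 0.995, "Planner": 0.98, "Actuator": 0.90})
high = system_reliability({"Parser": 1.0,   "Planner": 1.0,  "Actuator": 0.98})
print(round(low, 4), round(high, 4))
```

The interval variant makes the dependence on component-level uncertainty explicit: without credible per-component estimates, the system-level prediction degenerates into a wide, uninformative range.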
Modern software systems are predominantly distributed, dynamic, and mobile. They increasingly
execute on heterogeneous platforms, many of which are characterized by limited resources,
and are increasingly implemented in Java because of its platform independence and
intended use in network-based applications. One of the key resources, especially in long-lived
systems, is battery power. Unlike traditional desktop platforms, which have uninterrupted,
reliable power sources, a newly emerging class of computing platforms has finite battery life.
For example, a space exploration system may comprise satellites, probes, rovers, gateways, sensors,
and so on. Many of these are "single use" devices that are not rechargeable. In such settings,
minimizing the system's energy consumption, and thus increasing its lifetime, becomes as important
as the more traditional quality-of-service concerns, such as reliability, security, fault tolerance,
availability, and communication latency.
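One way to reason about such energy concerns at the architectural level is to attach cost estimates to the architecture itself. The sketch below assumes a simple, hypothetical cost model: each component invocation has a computational energy cost, and each message crossing a host boundary additionally incurs a transmission cost. All component names, host assignments, and energy figures are invented for illustration.

```python
# Hypothetical architecture-level energy estimation. Costs are in millijoules.
COMPUTE_COST_MJ = {"Sensor": 0.2, "Filter": 0.5, "Uplink": 1.1}
HOST_OF = {"Sensor": "rover", "Filter": "rover", "Uplink": "gateway"}
TX_COST_MJ = 3.0  # cost of sending one message between distinct hosts

def estimate_energy(invocations, messages):
    """invocations: {component: count}; messages: [(src, dst, count), ...]"""
    compute = sum(COMPUTE_COST_MJ[c] * n for c, n in invocations.items())
    comms = sum(TX_COST_MJ * n
                for src, dst, n in messages
                if HOST_OF[src] != HOST_OF[dst])  # intra-host messages are free here
    return compute + comms

total = estimate_energy(
    invocations={"Sensor": 100, "Filter": 100, "Uplink": 10},
    messages=[("Sensor", "Filter", 100), ("Filter", "Uplink", 10)],
)
print(round(total, 1))  # 111.0
```

Even a model this coarse lets an architect compare deployment alternatives, e.g., co-locating chatty components on one host to eliminate the dominant transmission costs, before any code is written.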
In order for architectural
models and stylistic guidelines to be truly useful in
any development setting, they must be accompanied by
support for their implementation. This is
particularly important for highly distributed, decentralized, mobile, and long-lived systems,
in which the risk of architectural drift increases unless there is a clear relationship between the
architecture and its implementation. Middleware developed to support the implementation
of software architectures must provide programming language-level constructs for implementing
software architecture-level concepts such as component, connector, configuration, and event.
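The flavor of such middleware-provided constructs can be sketched as follows. The API here is hypothetical (and written in Python for brevity rather than Java): components exchange events only through connectors, and a configuration wires them together.

```python
# Minimal sketch of language-level constructs mirroring architecture-level
# concepts: Component, Connector, Event, and a configuration that wires them.

class Event:
    def __init__(self, name, payload=None):
        self.name, self.payload = name, payload

class Connector:
    """Broadcasts each event to every attached component except its sender."""
    def __init__(self):
        self.attached = []
    def attach(self, component):
        self.attached.append(component)
        component.connectors.append(self)
    def route(self, event, sender):
        for c in self.attached:
            if c is not sender:
                c.handle(event)

class Component:
    def __init__(self, name):
        self.name, self.connectors, self.received = name, [], []
    def send(self, event):
        for conn in self.connectors:
            conn.route(event, sender=self)
    def handle(self, event):
        self.received.append(event.name)

# Configuration: two components communicating over one connector.
bus = Connector()
clock, logger = Component("Clock"), Component("Logger")
bus.attach(clock)
bus.attach(logger)
clock.send(Event("tick"))
print(logger.received)  # ['tick']
```

Because components interact only via connectors, the implementation preserves the architecture's topology: rewiring the system means changing the configuration, not the components, which is precisely the property that keeps architecture and implementation from drifting apart.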
Modeling & Analysis
Software architecture modeling and analysis encompasses a family of strategies for improving the software development process. The intent of a software model is to embody a specification of the system that is easier to understand, analyze, maintain, and evolve than a code-based specification. Model-based analysis techniques enable the prediction of the functional and non-functional properties of software architectures, allowing architects to weigh design alternatives and determine whether a software system will meet overall end-user operational goals.
Product Line Architectures
As with any other artifact produced as part of the software life cycle, software architectures evolve
and this evolution must be managed. One approach to doing so would be to apply any of a host of
existing configuration management systems, which have long been used successfully at the level of
source code. Unfortunately, such an approach leads to many problems that prevent effective management
of architectural evolution. To overcome these problems, we have developed an alternative
approach centered on the use of an integrated architectural and configuration management system
model. Because the system model combines architectural and configuration management concepts
in a single representation, it has the distinct benefit that all architectural changes can be precisely
captured and clearly related to each other, both at the fine-grained level of individual architectural
elements and at the coarse-grained level of architectural configurations.
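The essence of such an integrated system model can be sketched as follows, under the assumption that individual architectural elements are versioned and a configuration pins each element to one of its versions. Element names and version numbers are illustrative only.

```python
# Hypothetical integrated architectural / configuration-management model:
# per-element version histories plus configurations that pin versions,
# enabling both fine-grained and configuration-level change tracking.

versions = {  # element -> ordered version history
    "WebUI":     ["1.0", "1.1"],
    "OrderMgr":  ["1.0"],
    "Inventory": ["1.0", "2.0"],
}

configurations = {  # configuration -> pinned version of each element
    "release-A": {"WebUI": "1.0", "OrderMgr": "1.0", "Inventory": "1.0"},
    "release-B": {"WebUI": "1.1", "OrderMgr": "1.0", "Inventory": "2.0"},
}

def changed_elements(cfg_a, cfg_b):
    """Fine-grained diff: elements whose pinned version differs between configs."""
    a, b = configurations[cfg_a], configurations[cfg_b]
    return sorted(e for e in a if a[e] != b[e])

print(changed_elements("release-A", "release-B"))  # ['Inventory', 'WebUI']
```

Because versions and configurations live in one representation, a coarse-grained question ("what changed between release-A and release-B?") decomposes directly into fine-grained answers about individual elements, rather than into opaque source-file deltas.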
The terabyte- and petabyte-scale volumes of data generated by modern
data providers pose a significant challenge in terms of selecting appropriate
interconnection/distribution mechanisms (i.e., software connectors) for data delivery.
The ideal software connector must deliver data from providers in a consistent,
efficient, scalable, and dependable fashion. In general, end users of large-scale distribution
systems expect (near) real-time access to data once it has been generated,
processed sufficiently, and packaged for distribution.