Databases are an integral part of TT&C architecture, providing a mechanism for storing and updating a variety of parameters ranging from telemetry wavetrain structure to ground system geographic locations. The major issues in selecting the database architecture are the location of data on the LAN and the type of DBMS.
Two possible ways of placing databases on a LAN are shown in Figure 18.
As shown in Figure 18(a), the DBMS can share a machine on the LAN with other processes. In this case, the DBMS will compete for CPU time with other processes running on the same machine. Depending on the machine speed, workload, and transaction processing requirements of the DBMS, this might provide acceptable results. Because an extra machine is not necessary, this solution is inexpensive in comparison to the configuration shown in Figure 18(b).
In Figure 18(b), a workstation is used exclusively as a data server, providing increased transaction processing power. In this configuration, the DBMS need not compete with other processes on the machine for CPU time. Using a data server is more expensive because it requires an additional workstation. When transaction processing requirements are low, this solution results in idle resources.
Figure 19 illustrates the concepts of data partitioning and database replication. The purpose of data partitioning is to place pieces of data closest to where they will be used. For example, information required only by Commanding should be located near the Commanding workstations (if possible). This reduces network traffic and increases transaction throughput. The drawback of partitioning is a more complex database. Data partitioning can also aid in fault tolerance: if a partition fails, the other partitions are still available, allowing certain transactions to continue in the presence of a failure. However, if fault tolerance is not provided by the DBMS product, it must be designed in, often requiring additional up-front design effort.
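The "data proximity" idea can be sketched as a simple lookup from data item to the host holding its partition. The table names and hosts below are hypothetical illustrations, not taken from any actual CCS design:

```python
# Hypothetical sketch of partition routing: queries for a table are sent
# to the host that holds that partition, with a fallback for
# unpartitioned data. All names here are illustrative.

PARTITION_MAP = {
    "command_history": "cmd-server",    # near the Commanding workstations
    "telemetry_limits": "tlm-server",   # near the Telemetry workstations
}

def route(table: str) -> str:
    """Return the host holding the partition for a given table."""
    return PARTITION_MAP.get(table, "central-server")

# Commanding data resolves locally; unpartitioned data falls back.
assert route("command_history") == "cmd-server"
assert route("orbit_elements") == "central-server"
```

Because only the routing table changes when data requirements change, this style of indirection is also what makes later repartitioning manageable.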
Database replication is the process of maintaining an always-current copy (or copies) of a database. The DBMS automatically maintains consistency between multiple copies of the database. Replication can reduce network traffic and transaction latency for databases that have many more reads than writes. For a read, the closest copy of the data can be used to satisfy the transaction. For a write, all copies of the database must be updated to maintain data consistency. Thus, data replication improves read transactions at the expense of write transactions.
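The read/write asymmetry can be illustrated with a toy model (class and key names are illustrative only): a read is satisfied by the single closest copy, while a write must touch every copy.

```python
class ReplicatedDB:
    """Toy model of replication: reads hit one copy, writes update all."""

    def __init__(self, n_replicas: int):
        self.replicas = [dict() for _ in range(n_replicas)]

    def read(self, key, nearest=0):
        # A read is satisfied by the closest copy alone.
        return self.replicas[nearest][key]

    def write(self, key, value):
        # A write must update every copy to maintain consistency.
        for copy in self.replicas:
            copy[key] = value

db = ReplicatedDB(3)
db.write("pass_schedule", "rev 1042 @ 03:15Z")  # touches 3 copies
# Any single copy can now serve the read.
assert db.read("pass_schedule", nearest=2) == "rev 1042 @ 03:15Z"
```

The cost model is visible directly: each write does work proportional to the number of copies, while each read costs one lookup, which is why replication pays off only when reads dominate.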
Database replication also aids in fault tolerance. If any one copy of a replicated database (or partition) fails, the system can still process all transactions using the remaining copies. In addition, fault recovery should be easy: once the failed copy is brought back on line, the DBMS should automatically update it to the current database state.
Figure 19 depicts two separate databases (Database 1 and Database 2). The first database, Database 1, is replicated once to form a copy Database 1'. The second database is partitioned into two pieces (Database 2a and Database 2b) and replicated at a different location into one combined database (Database 2').
Any modern DBMS should allow easy configuration of replicated and partitioned data. Since data requirements change over time, the system should facilitate reconfiguration of data. Updates of replicated data should be transparent to the system user.
For the general TT&C problem, the data is relatively simple and not highly relational. Most data is written once, then read as needed. In fact, telemetry data archiving is the only difficult issue because of the need for high throughput. During implementation of such systems, the following recommendations should be considered:
* Employ DB replication services and partitioning for improved read performance and reliability
- Partitioning provides "data proximity," reducing network traffic and increasing throughput
- Can be employed in combination with either configuration shown in Figure 18
* Use time-indexed flat files to provide sufficient throughput for telemetry data archiving
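The flat-file recommendation can be sketched as follows (the record layout and field sizes are assumptions for illustration): each fixed-size record carries a timestamp, and because records are appended in time order, a lookup by time is a binary search over the file rather than a DBMS transaction.

```python
import io
import struct

# Hypothetical sketch of a time-indexed flat file for telemetry
# archiving: fixed-size records appended in time order. Record layout
# (8-byte timestamp + 4-byte raw sample) is an assumption.
RECORD = struct.Struct("<dI")

def append_sample(f, t: float, value: int) -> None:
    """Append one (timestamp, value) record; caller appends in time order."""
    f.write(RECORD.pack(t, value))

def read_at_or_after(f, t: float):
    """Binary-search for the first record with timestamp >= t."""
    f.seek(0, io.SEEK_END)
    n = f.tell() // RECORD.size
    lo, hi = 0, n
    while lo < hi:
        mid = (lo + hi) // 2
        f.seek(mid * RECORD.size)
        ts, _ = RECORD.unpack(f.read(RECORD.size))
        if ts < t:
            lo = mid + 1
        else:
            hi = mid
    if lo == n:
        return None  # all records are earlier than t
    f.seek(lo * RECORD.size)
    return RECORD.unpack(f.read(RECORD.size))

# In-memory stand-in for the archive file.
archive = io.BytesIO()
for i in range(100):
    append_sample(archive, 1000.0 + i, i)
assert read_at_or_after(archive, 1042.5) == (1043.0, 43)
```

Writes are sequential appends with no indexing or locking overhead, which is what delivers the high archive throughput; the time index comes for free from the ordering.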
Concerning the type of DBMS, both relational and object-oriented DBMSs provide what one would expect of modern database systems. Both types are available from several vendors. In addition, the large relational database vendors (Oracle and Sybase) are moving into the object-oriented DBMS market. Contrary to popular belief, object-oriented database systems are mature, and many companies are using them for crucial tasks. Both types of DBMS have industry standards (e.g., SQL and OQL) for data querying and manipulation. Note that there are also many hybrid (relational and object-oriented) DBMSs.
Relational DBMSs have a larger installed base, have been around much longer, and are more mature. They have more sophisticated tools for data analysis and organization. In addition, there is generally a larger pool of RDBMS expertise available.
Object-oriented DBMSs are typically faster than relational ones (by avoiding table joins). If a given system uses an object-oriented framework, an object-oriented database would reduce development and maintenance time.
A 1992 Aerospace study of relational databases indicated that developing a relational schema for the current Command and Control System (CCS) would be a very large task, and that covers only the schema development. Reports from companies using object-oriented development indicate that, when a relational DBMS is used, translating data between object-oriented and relational formats is difficult, error-prone, and time-consuming. This translation code typically accounts for 33% of the code written for a project and has run as high as 75%. Maintaining a data model separate from the object model also increases the time required for software design. Using an object-oriented database, by contrast, is a natural extension of the object-oriented design work. Because of the reduction in design and development time and the increase in transaction speed, an object-oriented database should be seriously considered for any new system.
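The translation overhead described above can be made concrete with a small sketch. The class and the normalized table layout below are hypothetical; the point is that every object class in the system needs hand-written flattening code like this when a relational DBMS is used, whereas an object-oriented DBMS stores the object as designed.

```python
# Hypothetical sketch of object-to-relational translation. The
# TelemetryPoint class and two-table layout are illustrative only.

from dataclasses import dataclass

@dataclass
class TelemetryPoint:
    mnemonic: str
    limits: tuple    # (low, high) alarm limits
    samples: list    # time-ordered raw values

def to_relational_rows(pt: TelemetryPoint):
    """Flatten one object into rows for two normalized tables.

    Mapping code like this, multiplied across every class in the
    system, is the translation overhead the text refers to.
    """
    point_row = (pt.mnemonic, pt.limits[0], pt.limits[1])
    sample_rows = [(pt.mnemonic, i, v) for i, v in enumerate(pt.samples)]
    return point_row, sample_rows

pt = TelemetryPoint("BATT_V", (22.0, 34.0), [28.1, 28.3])
point_row, sample_rows = to_relational_rows(pt)
assert point_row == ("BATT_V", 22.0, 34.0)
assert sample_rows == [("BATT_V", 0, 28.1), ("BATT_V", 1, 28.3)]
```

A matching `from_relational_rows` function (and join queries to gather the rows) would also be required in the relational case, roughly doubling the mapping code shown here.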