
C-6. Commanding Performance Analysis

The goal of the Commanding performance analysis was to identify key performance drivers within the Commanding function. This involved three steps:

1. identifying timing and data quantity requirements based on FRD thresholds and objectives

2. conducting performance modeling and experimentation to determine system resources needed (e.g., processing horsepower, memory, storage, etc.)

3. developing and documenting metrics for performance modeling

For Commanding, the key requirement driving performance is the 28.8-Kbps threshold command data rate per SV uplink. This requirement results in significant loading on Commanding for real-time constraint checking and command verification, as explained below. Another key requirement is the 5-millisecond command timing accuracy threshold.
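The 28.8-Kbps threshold rate translates directly into a command-processing workload. A back-of-the-envelope sketch of that translation is shown below; the 384-bit command frame size is an illustrative assumption, not a value taken from the FRD:

```python
# Throughput implied by the threshold uplink rate.
UPLINK_RATE_BPS = 28_800   # 28.8-Kbps threshold command data rate per SV uplink
FRAME_BITS = 384           # assumed size of one uplinked command frame (illustrative)

frames_per_second = UPLINK_RATE_BPS / FRAME_BITS
print(f"{frames_per_second:.0f} command frames/sec to constraint-check and verify")
```

Every frame delivered at this rate must pass real-time constraint checking and command verification, which is the source of the processing load discussed below.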

To determine the minimum computing resources needed to achieve the required commanding functions, it was necessary to conduct performance modeling and benchmarking. Since future TT&C implementations are likely to use COTS products for commanding, the strategy was to perform benchmarking assuming use of two COTS products currently available for SV commanding: G2 by Gensym Corporation and RTie by Talarian Corporation. G2 is the main component of the IMT commanding COTS package by Storm Integration. RTie is the primary component of the RTworks satellite monitoring and commanding package by Talarian. Both products incorporate a rule-based inference engine architecture.

Note that procedural command scripting languages such as OS/COMET by Software Technology, Inc., generally provide faster performance than rule-based inference engines. However, it is believed that rule-based and hybrid rule-based/procedural products will be increasingly used for spacecraft commanding. The declarative nature of rule-based systems is a natural fit to the problem of constraint rule checking and functional verification. Also, since our purpose here was to determine worst-case commanding performance requirements, we chose to analyze the most resource-intensive approach likely, which is the rule-based COTS approach.
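To illustrate why the declarative style fits constraint checking, the sketch below evaluates a set of constraint rules against a spacecraft state before a command is released. The rule names, state fields, and limit values are invented for illustration; they are not taken from G2, RTie, or the study itself:

```python
# Minimal sketch of declarative constraint rules, in the spirit of a
# rule-based inference engine. Each rule is a (name, predicate) pair
# evaluated against the current spacecraft state.
RULES = [
    ("bus voltage in range",  lambda s: 24.0 <= s["bus_voltage"] <= 32.0),
    ("transmitter not keyed", lambda s: not s["tx_keyed"]),
    ("command count sane",    lambda s: s["cmd_count"] < 1000),
]

def check_constraints(state):
    """Return the names of all rules the state violates (empty = go)."""
    return [name for name, ok in RULES if not ok(state)]

state = {"bus_voltage": 28.5, "tx_keyed": False, "cmd_count": 12}
print(check_constraints(state))  # prints [] -> all constraints satisfied
```

The point of the declarative form is that each constraint is stated independently and the engine decides which rules to fire, which is what makes per-command rule counts the natural unit of workload in the benchmarks that follow.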

For performance modeling and benchmarking, commanding knowledge bases of various sizes were constructed and tested for each product. Rule performance over time was determined, and an overall benchmark of rules/sec/unit-of-workstation-performance was calculated. Estimates of the number of rules needed per unit of time for constraint checking and functional verification were made under maximum and minimum commanding rates. This yielded maximum processing load requirements as a function of commanding rate. The workstation configuration used for performance analysis testing was as follows:

Computer: Sun SPARC 10 Model 41
Memory: 16 MB RAM
OS: SunOS 4.1.3
Speed: 40 MHz
Performance: 52.6 SPECint92
             64.7 SPECfp92
             96.2 MIPS
             17.2 Mflops
             120 TPS
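The sizing arithmetic implied by the rules/sec/SPECint92 benchmark can be sketched as follows. All numeric inputs below except the test workstation's 52.6 SPECint92 rating are illustrative assumptions, not measured values from the study:

```python
# Sizing sketch based on the benchmark metric rules/sec per SPECint92.
BENCH_RULES_PER_SEC = 200.0   # assumed: rules/sec measured on the test workstation
BENCH_SPECINT92 = 52.6        # SPECint92 rating of the Sun SPARC 10 Model 41 test bed

# Normalized benchmark: rules/sec delivered per unit of SPECint92.
rules_per_sec_per_specint = BENCH_RULES_PER_SEC / BENCH_SPECINT92

# Assumed worst-case workload: commands/sec at the maximum commanding
# rate, times constraint/verification rules fired per command.
commands_per_sec = 75
rules_per_command = 20
required_rules_per_sec = commands_per_sec * rules_per_command

# Minimum workstation capacity needed to keep up with the workload.
required_specint92 = required_rules_per_sec / rules_per_sec_per_specint
print(f"required capacity: {required_specint92:.1f} SPECint92")
```

This is the shape of the calculation that converts a commanding rate into a minimum processor requirement; the actual benchmark figures appear in the sections that follow.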

The selected unit of workstation performance for these benchmarks was SPECint92. The following sections present the metrics generated and their implications for command loading under high- and low-rate commanding.