Foreword
First, it should be clear that stability analysis deals with dynamic systems. It asks whether the behavior of a dynamically changing system can remain within a certain range under given conditions (such as different external stimuli/disturbances, different initial states, and changes in the system's own parameters). Otherwise, unstable behavior may occur, such as severe oscillation, "runaway", or loss of control. As shown in Figure 1, the dynamic system under analysis can be the controlled object (plant) itself (can it return to its original equilibrium position after a hypothetical external disturbance, with no control input?), the entire controlled system comprising the plant and the control algorithm (does the controlled output approach the desired trajectory?), or a designed parameter identification/observation module (do the identified/observed values approach the actual values?).
Figure 1. Objects of stability analysis: the controlled object (plant), the identification/observation module, and the entire controlled system.
Secondly, it should be noted that, depending on the application scenario, stability can be analyzed either around a certain equilibrium point/attractor of the dynamic system (i.e., where the state trajectory, the solution of the differential equation, lies relative to that equilibrium point/attractor), or around whether the variables/signals or some dynamic gain of the system are bounded and whether they converge to a given region at a certain rate (exponential or asymptotic).

The results of stability analysis are closely tied to the performance the control system can achieve, and can even guide the design of the control algorithm (see "Control Algorithm Notes—What Makes Control Algorithms Complex?"), which states: "If a controlled system can be guaranteed to be globally exponentially stable, then regardless of the initial state, the system control error (or other variables) can be reduced to zero in a relatively short time and remain there. If only local asymptotic stability can be guaranteed, then within a narrow operating range the control error gradually decreases to zero over time. If exponential/asymptotic stability cannot be guaranteed, and only bounded-input/bounded-output behavior can be guaranteed, then one can only expect the control error to converge to within a certain range. In this sense, stability analysis clearly defines the upper limit of the performance achievable by the control system, and a controller can be rationally designed on that basis."

Finally, introducing control algorithm modules (such as parameter identification and observers) enriches the dynamic behavior of the entire controlled system (which may then exhibit typical nonlinear dynamic behavior), greatly increasing the risk of instability.
Therefore, when implementing advanced control algorithms (whether model-based or data-driven), the fundamental role of stability in control algorithm design and control system analysis should not be overlooked, nor should theorem-based stability design and proof methods be dismissed as useless mathematical formalism. At the same time, stability design methods should not be copied blindly. Instead, control algorithms should be designed, and the controlled system analyzed, in a targeted manner, based on a thorough understanding of the controlled system and on reasonable problem abstraction and classification.
Stability definition and analysis methods
For engineers new to advanced control algorithms, a major obstacle is the dazzling array of stability definitions and analysis methods (such as stability under transfer-function/linear state-space models, and stability of autonomous/non-autonomous nonlinear systems). Beginners quickly become overwhelmed by the associated mathematical formulas and theorems, unable to grasp the practical physical meaning of these concepts, let alone apply them effectively. It should be noted that every stability definition and analysis method is grounded in the dynamic behavior exhibited by the system and in the dynamic model describing that behavior. In this sense, learning system dynamics modeling and analysis first is highly beneficial for mastering the various stability concepts. Indeed, each stability definition and analysis generally assumes a given form of dynamic model that appropriately describes the behavior and state of the system under analysis (initial conditions, presence of external disturbances/noise, presence of control input, whether the control input is a function of the state variables or of the system output, whether system parameters change with time, and whether there are unmodeled dynamics/uncertainties, etc.). For example:
The transfer function model in classical control theory describes the dynamic output/input behavior (i.e., dynamic gain) of a single-input/single-output, linear time-invariant system with zero initial conditions. Here, the definition of stability primarily concerns whether this gain diverges to infinity. Consequently, the corresponding stability analysis examines whether any root of the characteristic polynomial of the transfer function has a positive real part (i.e., whether there are unstable poles; when unstable poles appear, the dynamic gain contains exponentially growing terms). The corresponding stability criteria are developed from algebraic perspectives (the Routh criterion, which locates the roots from the polynomial coefficients), complex-plane perspectives (the Nyquist criterion), or root-locus perspectives.
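As a minimal numerical sketch of this pole test (the function name and example polynomials are chosen for illustration), one can compute the roots of the characteristic polynomial directly rather than applying the Routh table by hand:

```python
import numpy as np

def is_stable(coeffs):
    """Return True if all roots of the characteristic polynomial
    (coefficients in descending powers of s) have strictly
    negative real parts, i.e., there are no unstable poles."""
    roots = np.roots(coeffs)
    return bool(np.all(roots.real < 0))

# s^2 + 3s + 2 = (s+1)(s+2): poles at -1 and -2 -> stable
print(is_stable([1, 3, 2]))   # True
# s^2 - s + 1: pole pair with real part +0.5 -> unstable
print(is_stable([1, -1, 1]))  # False
```

The Routh criterion answers the same question from the coefficients alone without computing roots, which mattered before cheap numerical root-finding; numerically, the direct check above is usually sufficient.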
State-space models of linear systems describe the dynamic behavior linking control inputs, system state variables, and system outputs (representable as linear functions of the state variables) under linear (time-invariant) conditions, focusing on the dynamics of the state variables (i.e., the independently evolving dynamic variables of the system). Here, stability is defined by the position of the state trajectory (i.e., the spatial trajectory of the state variables) relative to the system equilibrium point (i.e., the origin), specifically whether the state trajectory converges to the origin. The perspective of stability analysis therefore shifts to whether the system matrix has eigenvalues with positive real parts (i.e., whether there are unstable poles; when unstable poles appear, the state transition matrix/state variables contain exponentially growing terms). The corresponding stability criteria are based directly on the eigenvalue distribution of the system matrix.
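The eigenvalue test can be sketched in the same spirit (the matrices below are hypothetical examples):

```python
import numpy as np

def state_matrix_stable(A):
    """A linear time-invariant system dx/dt = A x is asymptotically
    stable iff every eigenvalue of A has a negative real part."""
    eigvals = np.linalg.eigvals(np.asarray(A, dtype=float))
    return bool(np.all(eigvals.real < 0))

A_stable = [[0.0, 1.0], [-2.0, -3.0]]   # eigenvalues -1 and -2
A_unstable = [[0.0, 1.0], [2.0, 1.0]]   # eigenvalues 2 and -1
print(state_matrix_stable(A_stable))    # True
print(state_matrix_stable(A_unstable))  # False
```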
In robust control, the transfer-function/state-space model focuses on describing the impact of non-ideal factors (such as external disturbances, parameter/model uncertainties, and unmodeled dynamics) on the system's dynamic behavior. Robust stability then concerns the extent or magnitude of non-ideal factors at which the system loses stability ("extent" and "magnitude" are imprecise terms; norms are normally used for this, but these words are kept here for interpretability). Robust stability analysis therefore focuses mainly on quantifying the influence of these non-ideal factors, and the corresponding stability test is based on the small-gain theorem.
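A simplified numerical sketch of the small-gain condition (the transfer functions are hypothetical, and this checks only the gain product, assuming each subsystem is itself stable): for a feedback interconnection of a stable nominal system G and a stable uncertainty block Delta, the small-gain theorem guarantees stability when the product of their peak gains (H-infinity norms) is below 1.

```python
import numpy as np

def peak_gain(num, den, w=np.logspace(-3, 3, 2000)):
    """Approximate the H-infinity norm of num(s)/den(s) by
    evaluating |G(jw)| over a logarithmic frequency sweep."""
    s = 1j * w
    return np.abs(np.polyval(num, s) / np.polyval(den, s)).max()

g_G = peak_gain([2.0], [1.0, 1.0])   # G(s) = 2/(s+1), peak gain 2 at w -> 0
g_D = peak_gain([0.3], [1.0, 2.0])   # Delta(s) = 0.3/(s+2), peak gain 0.15
print(g_G * g_D < 1.0)  # True: small-gain condition satisfied
```

A frequency sweep only approximates the true norm (it can miss a sharp resonance between grid points); dedicated robust-control tools compute it more reliably, but the sweep conveys the idea.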
Nonlinear systems exhibit far more complex dynamic behaviors (e.g., multiple isolated equilibrium points/attractors, sensitivity to initial conditions (the butterfly effect, see cover image), limit-cycle oscillations, finite escape time, chaos/bifurcation, etc.), making stability definitions and analyses extremely diverse, as shown in Figure 2 (only common stability concepts are listed; these can be seen as generalizations of the stability concepts for linear time-invariant systems). Nonlinear dynamic systems are described by general differential equations, so the state trajectory/system output is usually difficult to obtain analytically (a small number of nonlinear systems can be linearly approximated near an operating point and then analyzed with linear-system methods). Therefore, the behavior of the state trajectory is often determined indirectly by constructing a function (a Lyapunov function), i.e., by characterizing the solution/system dynamics without solving the differential equation. Stability analysis of nonlinear systems thus primarily focuses on constructing suitable Lyapunov functions and analyzing their properties; the corresponding stability criteria check whether the constructed Lyapunov function satisfies given mathematical properties.
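A toy example of this indirect approach (system and function chosen purely for illustration): for dx/dt = -x^3, the candidate Lyapunov function V(x) = x^2/2 is positive definite and its derivative along trajectories, Vdot = x * (-x^3) = -x^4, is nonpositive, so V must decrease along every trajectory. This can be checked numerically without ever solving the differential equation in closed form:

```python
# Nonlinear autonomous system dx/dt = -x**3 with candidate
# Lyapunov function V(x) = x**2 / 2; Vdot = -x**4 <= 0,
# so V should decrease monotonically along a simulated trajectory.
def f(x):
    return -x**3

def V(x):
    return 0.5 * x**2

x, dt = 2.0, 1e-3
values = []
for _ in range(5000):          # forward-Euler simulation
    values.append(V(x))
    x += dt * f(x)

print(all(b <= a for a, b in zip(values, values[1:])))  # True
```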
Figure 2. Stability of complex and variable nonlinear systems
It is worth noting that the various stability notions in Figure 2 can be transformed into each other under certain conditions: if the control input u is a function of the state variables (state feedback), then stability with control input u can be reduced to stability without control input; if the disturbance term g shares the equilibrium point of the nominal system, then robust stability analysis can reuse the analysis without control input; and the analysis of input-to-state stability lays the foundation for the analysis of input-output stability.
When these stability concepts are applied to control algorithm design, the state variables or system outputs in the model can be control errors or identification/observation errors, etc. Stability analysis should therefore be based on a deep understanding of the dynamic behavior of the controlled system: first determine which object is to be analyzed according to the purpose and application scenario (refer to Figure 1), what dynamic behavior and model description characterize this object, and which type of stability applies (refer to Figure 2); then use the relevant mathematical methods/theorems for the specific analysis and design. Applied to control algorithm design, stability analysis mainly verifies whether the dynamic behavior of the system of interest remains within a certain range (such as the position of the state trajectory relative to the equilibrium point, or the boundedness of the output/input gain); the 'control algorithm' itself is essentially a man-made 'dynamic system'.
The purpose of designing such dynamic systems is to ensure that, after interacting with the controlled object (itself a dynamic system), the dynamic behavior of the entire controlled system, including the control algorithm, meets the target performance requirements (see "Control Algorithm Notes—First Learn System Dynamics Modeling and Analysis"). The foremost of these requirements is stability. In this sense, stability analysis is an essential part of control algorithm design. More importantly, for some advanced control algorithms (such as adaptive control, sliding mode control, and observer-based control systems), the dynamic behavior of the controlled system is more complex, and the stability of the entire control system depends largely on the selection and design of the algorithm's parameters. Furthermore, the design of such control algorithms is itself driven by stability analysis and the construction of Lyapunov functions, which has led to a large family of Lyapunov-based design methods. Flexibly applying stability analysis to control algorithm design is therefore particularly important.
Figure 3. Applying stability analysis to parameter identification algorithms and controller design.
For example, in an indirect adaptive control system (refer to Figure 3), the overall control system generally comprises two dynamic systems: the parameter identification dynamics (do the identified parameters converge to their true values?) and the controlled system composed of the controller and the plant (does the control error decrease to zero?). Both rely on the system output for online updates (generally, the feedback-loop dynamics are faster than the parameter identification dynamics). Because of the parameter identification dynamics and the control input, the entire controlled system behaves as a nonlinear (through the identification algorithm) and time-varying (through real-time parameter updates) system. Coupled with model uncertainty, there is a real risk that the identified parameters diverge and the control error grows without bound. Adaptive control therefore generally uses Lyapunov-based design and stability analysis to ensure that both the control error and the parameter identification error remain bounded: the derivative of the control-error term in the Lyapunov function is generally a function of the control error, the control input, the state variables, and the parameter identification error. By designing the parameter update law, the parameter-error term in the derivative of the Lyapunov function is cancelled, making the derivative negative semidefinite and dependent only on the control error.
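This cancellation can be sketched on a standard scalar textbook example (all values below are hypothetical): plant dx/dt = a*x + u with unknown a, control law u = -(a_hat + k)*x, and Lyapunov function V = x^2/2 + (a_hat - a)^2/(2*gamma). Choosing the update law d(a_hat)/dt = gamma*x^2 cancels the parameter-error term in Vdot, leaving Vdot = -k*x^2, which is negative semidefinite and depends only on the control error x.

```python
# Lyapunov-based adaptive control of a scalar plant dx/dt = a*x + u
# with unknown a. The update law d(a_hat)/dt = gamma * x**2 is the
# one that cancels the parameter-error term in the Lyapunov derivative.
a_true, k, gamma, dt = 1.5, 2.0, 5.0, 1e-3
x, a_hat = 1.0, 0.0                  # initial state and estimate

for _ in range(20000):               # 20 s of forward-Euler simulation
    u = -(a_hat + k) * x             # certainty-equivalence control law
    x += dt * (a_true * x + u)       # plant update
    a_hat += dt * gamma * x**2       # Lyapunov-derived update law

print(abs(x) < 1e-3)  # True: control error driven toward zero
```

Note that Vdot = -k*x^2 only guarantees x goes to zero and a_hat stays bounded; the estimate a_hat need not converge to a_true unless the input is persistently exciting.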
Summary
Understanding stability from the dual perspective of the dynamic behavior exhibited by a system and the dynamic models describing that behavior is crucial for correctly understanding the various stability concepts and analysis/design methods, and for applying them flexibly in advanced control algorithms and control system design. When learning and applying stability analysis methods, one should first analyze the controlled object from the perspective of system dynamics, and then pursue the specific theoretical derivations and stability analysis in a targeted manner. This avoids the awkward situation of learning many concepts yet finding the theory of no help in solving real-world problems.