Random matrices and controllability of dynamical systems

We introduce the concept of $\epsilon$-uncontrollability for random linear systems, i.e. linear systems in which the usual matrices have been replaced by random matrices. We also estimate the $\epsilon$-uncontrollability in the case where the matrices come from the Gaussian orthogonal ensemble. Our proof utilizes tools from systems theory, probability theory and convex geometry.


Introduction
Controllability is one of the most fundamental concepts in systems theory and control theory. Roughly speaking, a system is controllable if one can switch from one trajectory to another, provided that the laws governing the system are obeyed and some delay is allowed. In the present work, we focus on the linear, time-invariant multivariable system described by the following equations (see [4], [6]):

(1.1) $\frac{dx}{dt} = Ax + bu, \qquad y = Cx + du,$

where $x = x(t) = (x_1(t), x_2(t), \ldots, x_n(t))^t \in \mathbb{R}^n$ is a vector describing the state of the system at time $t$, $u = u(t) \in \mathbb{R}$ is the input and $y = y(t)$ is the $m$-vector of outputs. $A$ and $C$ are, respectively, $n \times n$ and $m \times n$ matrices, and $b \in \mathbb{R}^n$, $d \in \mathbb{R}^m$ are vectors. In this case the definition of controllability goes as follows. The system (1.1) is said to be state controllable, or simply controllable, if there exists a finite time $T > 0$ such that for any initial state $x(0) \in \mathbb{R}^n$ and any $x_1 \in \mathbb{R}^n$, there is an input $u = u(t)$ defined on $[0, T]$ that transfers $x(0)$ to $x_1$ at time $T$ (i.e. $x(t)$ obeys the first equation of (1.1) and $x(T) = x_1$). Otherwise, the system (1.1) is called uncontrollable.
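Besides the trajectory definition above, controllability of an LTI pair $(A, b)$ is equivalent to the classical Kalman rank condition: the matrix $[b, Ab, \ldots, A^{n-1}b]$ has rank $n$. This is easy to check numerically; a minimal NumPy sketch (function and variable names are ours):

```python
import numpy as np

def is_controllable(A, b, tol=1e-10):
    """Kalman rank test: the pair (A, b) is controllable iff the
    controllability matrix [b, Ab, ..., A^(n-1) b] has rank n."""
    n = A.shape[0]
    cols = [b]
    for _ in range(n - 1):
        cols.append(A @ cols[-1])
    K = np.column_stack(cols)
    return bool(np.linalg.matrix_rank(K, tol=tol) == n)

# A controllable pair: an integrator chain driven through its last state.
A1 = np.array([[0.0, 1.0], [0.0, 0.0]])
b1 = np.array([0.0, 1.0])

# An uncontrollable pair: A = I leaves every direction invariant,
# so the input can only move the state along the line spanned by b.
A2 = np.eye(2)
b2 = np.array([1.0, 0.0])

print(is_controllable(A1, b1))  # True
print(is_controllable(A2, b2))  # False
```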
In this article, we consider random systems of the form (1.1), that is, systems where the parameters $A$ and $b$ have been replaced with random matrices. Given a positive number $\epsilon$, we define the concept of $\epsilon$-uncontrollability of a random system. It is natural that the $\epsilon$-uncontrollability of a random system depends on the distribution of the entries of $A$ and $b$. Consequently, we consider the fundamental Gaussian orthogonal ensemble of random matrices and we calculate the $\epsilon$-uncontrollability in this particular case.
The rest of the paper is organised as follows. In Section 2, we define the $\epsilon$-uncontrollability for random systems. In Section 3, we describe the Gaussian orthogonal ensemble, which is going to be used in this work, and we state some known results about this ensemble that will be used in the sequel. In Section 4, we consider the case $n = 2$, i.e. when the state space is $\mathbb{R}^2$, and we give the detailed calculation of the $\epsilon$-uncontrollability of a random system where the matrix $A$ comes from the Gaussian orthogonal ensemble. Finally, in Section 5, we deal with the general case of $\mathbb{R}^n$ for $n > 2$. In this case, the situation is more complicated and we provide an upper bound for the $\epsilon$-uncontrollability of a random system.

$\epsilon$-uncontrollability of random systems
In order to formulate a suitable concept of uncontrollability for random systems, we utilize a characterisation of the controllability of systems of the form (1.1). This is provided in the next theorem (for more details see, for example, [4], Theorem 2.2).
Theorem 2.1. The following are equivalent.

(1) The system (1.1) is controllable.

(2) The matrix $[sI - A, -b]$ has full rank (i.e. rank $n$) for every $s \in \mathbb{C}$.
Additionally, we need the next lemma, whose proof is based on elementary linear algebra and is thus omitted.

Lemma 2.2. Let $A$ be an $n \times n$ matrix and $b \in \mathbb{R}^n$. Then, the following are equivalent.
(1) The matrix $[sI - A, -b]$ has full rank (i.e. rank $n$) for every $s \in \mathbb{C}$.
(2) There is no eigenvector $v \in \mathbb{R}^n$ of the matrix $A$ such that $\langle v, b \rangle = 0$.
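For a symmetric matrix $A$ (the case relevant to the GOE below), condition (2) of the lemma can be checked directly from an orthonormal eigenbasis; a short NumPy sketch (the function name and tolerance are ours):

```python
import numpy as np

def controllable_symmetric(A, b, tol=1e-10):
    """Eigenvector test of Lemma 2.2 for symmetric A: (A, b) is
    controllable iff no unit eigenvector v of A has <v, b> = 0
    (numerically: |<v, b>| below a small tolerance)."""
    _, V = np.linalg.eigh(A)          # columns: orthonormal eigenvectors
    return bool(np.all(np.abs(V.T @ b) > tol))

A = np.diag([1.0, 2.0, 3.0])          # eigenvectors are e_1, e_2, e_3
print(controllable_symmetric(A, np.array([1.0, 1.0, 1.0])))  # True
print(controllable_symmetric(A, np.array([0.0, 1.0, 1.0])))  # False: <e_1, b> = 0
```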
Motivated by the above results, we now define the $\epsilon$-uncontrollability for random systems.
Definition 2.3. Assume that a random system

(2.1) $\frac{dx}{dt} = Ax + bu$

is given, where $A$ is an $n \times n$ random matrix and $b$ is an $n$-dimensional random vector. Given a positive number $\epsilon$, the $\epsilon$-uncontrollability of the above system is defined to be the probability

$P_\epsilon = \mathbb{P}\left(|\langle v, b \rangle| < \epsilon \text{ for some eigenvector } v \text{ of } A \text{ with } \|v\|_2 = 1\right).$
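The probability in the definition can be estimated by straightforward Monte Carlo simulation. A sketch for the Gaussian case studied below (the GOE sampler and all sample sizes are our choices, not part of the definition):

```python
import numpy as np

def goe(n, rng):
    """Sample from GOE(n): symmetric, N(0,2) diagonal entries,
    N(0,1) off-diagonal entries, independent on and above the diagonal."""
    G = rng.standard_normal((n, n))
    return (G + G.T) / np.sqrt(2.0)

def eps_uncontrollability_mc(eps, n, trials=4000, seed=0):
    """Monte Carlo estimate of P_eps from Definition 2.3, with A from
    GOE(n) and b a standard Gaussian vector."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        A = goe(n, rng)
        b = rng.standard_normal(n)
        _, V = np.linalg.eigh(A)      # columns: unit eigenvectors of A
        if np.any(np.abs(V.T @ b) < eps):
            hits += 1
    return hits / trials

p = eps_uncontrollability_mc(0.1, 2)
print(p)   # a small probability, roughly of order eps
```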

Random matrix ensemble
It is quite evident that the measure of $\epsilon$-uncontrollability of the random system (2.1) depends on the distribution of the matrix $A$ and the vector $b$. In this article, we consider one important ensemble of real symmetric random matrices, namely the so-called Gaussian orthogonal ensemble (GOE). On account of its applications, the GOE is one of the most studied random matrix ensembles. It is placed in the more general framework of Wigner matrices, which are defined as follows. We consider real-valued random variables $\xi$, $\zeta$ with zero mean. Let $W = (w_{ij})_{i,j=1}^n$ be a random symmetric matrix. We call $W$ a Wigner matrix if its entries satisfy the following conditions: the variables $(w_{ij})_{1 \le i \le j \le n}$ are independent, the off-diagonal entries $w_{ij}$ ($i < j$) are distributed like $\xi$, and the diagonal entries $w_{ii}$ are distributed like $\zeta$. The case of Wigner matrices in which $\xi$ and $\zeta$ are Gaussian with $E[\xi^2] = 1$ and $E[\zeta^2] = 2$ gives the Gaussian orthogonal ensemble. Hence, if the symmetric matrix $W$ belongs to the GOE, then $w_{ii} \sim N(0, 2)$ for all $i = 1, \ldots, n$, $w_{ij} \sim N(0, 1)$ for all $1 \le i < j \le n$, and the entries on and above the diagonal are independent random variables. (We write GOE($n$) when an emphasis on the dimension is necessary; however, in the majority of cases the dimension will be clear from the context.) Additionally, as far as the random vector $b$ is concerned, we have to choose some ensemble. More specifically, we consider the ensemble $S_b$ of random vectors $b = (b_1, \ldots, b_n)$ whose coordinates $(b_i)_{i=1}^n$ are independent Gaussian random variables with zero mean and $E[b_i^2] = 1$. Furthermore, we assume that $(b_i)_{1 \le i \le n}$ and $(w_{ij})_{1 \le i \le j \le n}$ are all independent random variables.
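A standard way to sample a matrix with exactly this normalisation is $W = (G + G^T)/\sqrt{2}$ for $G$ with i.i.d. standard normal entries; the diagonal then has variance $2$ and the off-diagonal variance $1$. A quick empirical check of these variances (all sampling parameters are our choices):

```python
import numpy as np

# Empirical check of the GOE normalisation used here: diagonal entries
# have variance 2, off-diagonal entries variance 1.
rng = np.random.default_rng(1)
n, samples = 4, 20000
diag = np.empty(samples)
offdiag = np.empty(samples)
for k in range(samples):
    G = rng.standard_normal((n, n))
    W = (G + G.T) / np.sqrt(2.0)   # symmetric by construction
    diag[k] = W[0, 0]
    offdiag[k] = W[0, 1]

print(diag.var())     # close to 2.0
print(offdiag.var())  # close to 1.0
```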
For more information concerning Wigner matrices and the GOE we refer to [1]. For our purposes, we need a couple of results on the eigenstructure of the GOE, which are stated here without proof. First of all, it is known that, almost surely, the eigenvalues of a matrix $A$ from the GOE are all distinct (see [1], Theorem 2.5.2). Let now $v_1, \ldots, v_n$ denote the eigenvectors corresponding to the (real) eigenvalues of $A$, normalised so that the first non-zero entry of each is positive. Then the following proposition holds (see [1], Corollary 2.5.4).

Proposition 3.1. Each eigenvector $v_i$ is uniformly distributed on the half-sphere $S^{n-1}_+$ of unit vectors whose first non-zero coordinate is positive. In particular, for $n = 2$, writing $v_1 = (\cos\theta, \sin\theta)$ with $\theta \in (-\frac{\pi}{2}, \frac{\pi}{2})$, the angle $\theta$ is uniformly distributed on $(-\frac{\pi}{2}, \frac{\pi}{2})$.
The case $n = 2$

This section is entirely devoted to the calculation of the $\epsilon$-uncontrollability of a random system of the form (2.1) when the state space is $\mathbb{R}^2$ and $A$, $b$ belong to the GOE and $S_b$ respectively. In order to achieve this goal, we first fix a vector $b \in \mathbb{R}^2$ and we set

$P_{\epsilon,b} = \mathbb{P}\left(|\langle v, b \rangle| < \epsilon \text{ for some eigenvector } v \text{ of } A \text{ with } \|v\|_2 = 1\right).$

Then, the following result holds.

Theorem 4.1. Let the random system (2.1) be given, where the state space is $\mathbb{R}^2$ and $A$ belongs to GOE(2). Then, for every non-zero vector $b \in \mathbb{R}^2$, we have $P_{\epsilon,b} = 1$ if $\epsilon \ge \|b\|_2/\sqrt{2}$, and

$P_{\epsilon,b} = \frac{4}{\pi}\arcsin\left(\frac{\epsilon}{\|b\|_2}\right)$

if $\epsilon < \|b\|_2/\sqrt{2}$.

Proof. Since the matrix $A$ from GOE(2) is symmetric, there is an orthonormal basis $\{v_1, v_2\}$ of $\mathbb{R}^2$ consisting of eigenvectors of $A$. Without loss of generality (replacing $v_1$ with $-v_1$ or changing the order of $\{v_1, v_2\}$, if necessary), we may assume that the first coordinate of $v_1$ is positive. Hence, we can write $v_1 = (\cos\theta, \sin\theta)$ and $v_2 = (-\sin\theta, \cos\theta)$, for some $\theta \in (-\frac{\pi}{2}, \frac{\pi}{2})$. Now, let $T$ be the rotation of $\mathbb{R}^2$ that maps $b/\|b\|_2$ to $e_1 = (1, 0)$, say through an angle $\varphi$. The rotation $T$ is given by the following matrix representation:

$T = \begin{pmatrix} \cos\varphi & -\sin\varphi \\ \sin\varphi & \cos\varphi \end{pmatrix}.$

Therefore, $T(v_1) = (\cos(\varphi + \theta), \sin(\varphi + \theta))$ and $T(v_2) = (-\sin(\varphi + \theta), \cos(\varphi + \theta))$, and, consequently, since $T$ is orthogonal,

$|\langle v_1, b \rangle| = \|b\|_2\, |\cos(\varphi + \theta)|, \qquad |\langle v_2, b \rangle| = \|b\|_2\, |\sin(\varphi + \theta)|.$

We have to distinguish two cases.
Case I: If $\epsilon \ge \|b\|_2/\sqrt{2}$, then, clearly, one has $P_{\epsilon,b} = 1$, since $\cos^2(\varphi+\theta) + \sin^2(\varphi+\theta) = 1$ forces at least one of $|\cos(\varphi+\theta)|$, $|\sin(\varphi+\theta)|$ to be smaller than $\epsilon/\|b\|_2$.

Case II: If $\epsilon < \|b\|_2/\sqrt{2}$, then

$P_{\epsilon,b} = \mathbb{P}\left(|\sin(\varphi+\theta)| < \frac{\epsilon}{\|b\|_2}\right) + \mathbb{P}\left(|\cos(\varphi+\theta)| < \frac{\epsilon}{\|b\|_2}\right),$

where the equality follows from the disjointness of the two sets. Now, for the first summand, we observe that $\varphi + \theta$ belongs to a semicircle. Hence, the values of $\varphi + \theta$ for which we have $|\sin(\varphi + \theta)| < \epsilon/\|b\|_2$ belong to an arc, or to the union of two disjoint arcs, whose total length is $2\arcsin(\epsilon/\|b\|_2)$. It follows that $\theta$ belongs either to an arc or to the union of two disjoint arcs with total length $2\arcsin(\epsilon/\|b\|_2)$. By Proposition 3.1, $\theta$ is a random variable with the uniform distribution on the interval $(-\frac{\pi}{2}, \frac{\pi}{2})$. Therefore,

$\mathbb{P}\left(|\sin(\varphi+\theta)| < \frac{\epsilon}{\|b\|_2}\right) = \frac{2}{\pi}\arcsin\left(\frac{\epsilon}{\|b\|_2}\right).$

Using a similar argument for the second summand, we finally obtain

$P_{\epsilon,b} = \frac{4}{\pi}\arcsin\left(\frac{\epsilon}{\|b\|_2}\right).$

We are now ready to prove the main result of this section.
Theorem 4.2. Assume that $n = 2$ and that $A$, $b$ belong to the GOE and $S_b$ respectively. For any positive number $\epsilon$, the $\epsilon$-uncontrollability of the random system (2.1) is given by

$P_\epsilon = 1 - e^{-\epsilon^2} + \frac{2}{\pi}\int_{2\epsilon^2}^{\infty} \arcsin\left(\frac{\epsilon}{\sqrt{x}}\right) e^{-x/2}\, dx.$
Proof. Let $\{v_1, v_2\}$ be an orthonormal basis of $\mathbb{R}^2$ consisting of eigenvectors of $A$. We set

$Z = \min\left(|\langle v_1, b \rangle|,\, |\langle v_2, b \rangle|\right).$

Then, $Z$ is a non-negative random variable, obtained from the coordinates of the random vectors $v_1$, $v_2$, $b$ by multiplication, summation and absolute values. It is not hard to see that

$P_\epsilon = \mathbb{E}\left[\mathbf{1}_{[0,\epsilon)}(Z)\right],$

where $\mathbf{1}_{[0,\epsilon)}$ is the characteristic (or indicator) function of the interval $[0, \epsilon)$. Recall (from Section 3) our assumption that the entries of $A$ and $b$ are independent. It follows that $v_1$, $b$ are independent random vectors, and clearly this is also true for the pair $v_2$, $b$. Using conditional expectation, we obtain that

$P_\epsilon = \int_{\mathbb{R}^2} P_{\epsilon,b}\, f(b)\, db,$

where $f(b)$ is the probability density function of the random vector $b$. Since the coordinates $b_1$, $b_2$ of $b$ are independent Gaussian random variables with zero mean and variance equal to $1$, it follows that

$f(b) = \frac{1}{2\pi}\, e^{-(b_1^2 + b_2^2)/2}.$

We observe now that $P_{\epsilon,b}$ depends only on $\|b\|_2^2 = b_1^2 + b_2^2$. Hence, by changing to polar coordinates (or, equivalently, using the fact that $b_1^2 + b_2^2$ has the $\chi^2$-distribution with 2 degrees of freedom), we get the desired result.
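A quick numerical sanity check: for a fixed $b$ and $\epsilon < \|b\|_2/\sqrt{2}$, the probability that some unit eigenvector $v$ of a GOE(2) matrix satisfies $|\langle v, b \rangle| < \epsilon$ works out to $\frac{4}{\pi}\arcsin(\epsilon/\|b\|_2)$, and a Monte Carlo experiment agrees. A sketch (the sampling parameters are our choice):

```python
import numpy as np

def goe2(rng):
    """Sample a GOE(2) matrix: N(0,2) diagonal, N(0,1) off-diagonal."""
    G = rng.standard_normal((2, 2))
    return (G + G.T) / np.sqrt(2.0)

rng = np.random.default_rng(2)
eps, b = 0.3, np.array([1.0, 0.0])       # eps < ||b||_2 / sqrt(2)
trials, hits = 20000, 0
for _ in range(trials):
    _, V = np.linalg.eigh(goe2(rng))     # columns: unit eigenvectors
    if np.any(np.abs(V.T @ b) < eps):
        hits += 1

mc = hits / trials
closed_form = (4.0 / np.pi) * np.arcsin(eps / np.linalg.norm(b))
print(mc, closed_form)                   # the two should be close
```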

The general case
In this section, we consider the more general case where the state space of the random system (2.1) is $\mathbb{R}^n$, $n \ge 3$. This case is more complicated, and we only give an upper bound for the $\epsilon$-uncontrollability of the system.
Firstly, we need some estimates from elementary convex geometry. Assume that $v \in S^{n-1}$ is a unit vector and $\epsilon \in [0, 1)$. The $\epsilon$-spherical cap about $v$ is the following subset of $S^{n-1}$:

$C(\epsilon, v) = \{\theta \in S^{n-1} : \langle \theta, v \rangle \ge \epsilon\}.$

Observe that the number $\epsilon$ does not refer to the radius of the cap. An easy calculation shows that the radius is $r = \sqrt{2(1 - \epsilon)}$. In general, the cap of radius $r$ about $v$ is:

$C_r(v) = \{\theta \in S^{n-1} : \|\theta - v\|_2 \le r\}.$

Let $A_n$ denote the surface area of the unit sphere $S^{n-1}$, i.e. $A_n = \frac{2\pi^{n/2}}{\Gamma(n/2)}$. Convex geometry provides the following upper and lower bounds for the surface area of a spherical cap (see, for example, [2]).

Lemma 5.1. For $0 \le \epsilon < 1$, the cap $C(\epsilon, v)$ on $S^{n-1}$ has surface area at most $e^{-n\epsilon^2/2} A_n$.

Lemma 5.2. For $0 \le r \le 2$, a cap of radius $r$ on $S^{n-1}$ has surface area at least $\frac{1}{2}\left(\frac{r}{2}\right)^{n-1} A_n$.

Following the lines of Theorem 4.1, we now prove the next result. Assume that we have a random system of the form (2.1), where the matrix $A$ belongs to GOE($n$).
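Both cap estimates can be illustrated numerically: for $\theta$ uniform on $S^{n-1}$, the proportion of the sphere covered by $C(\epsilon, e_1)$ should lie between $\frac{1}{2}(r/2)^{n-1}$ and $e^{-n\epsilon^2/2}$. A Monte Carlo sketch (dimensions and sample sizes are our choices):

```python
import numpy as np

# Proportion of the sphere in the cap C(eps, e_1) = {theta_1 >= eps},
# estimated by normalising Gaussian samples (uniform on S^(n-1)),
# compared against the bounds of Lemmas 5.1 and 5.2.
rng = np.random.default_rng(3)
n, eps, samples = 10, 0.3, 50000
X = rng.standard_normal((samples, n))
theta = X / np.linalg.norm(X, axis=1, keepdims=True)
proportion = np.mean(theta[:, 0] >= eps)

r = np.sqrt(2.0 * (1.0 - eps))           # cap radius for this eps
lower = 0.5 * (r / 2.0) ** (n - 1)       # Lemma 5.2 (proportion form)
upper = np.exp(-n * eps**2 / 2.0)        # Lemma 5.1 (proportion form)
print(lower, proportion, upper)          # lower <= proportion <= upper
```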
Theorem 5.3. Let $A$ be in the GOE($n$) and let $b \in \mathbb{R}^n$ be any non-zero vector. Then, for the random system (2.1) with $\epsilon < \|b\|_2$, we have the estimate:

$P_{\epsilon,b} \le n\left[1 - \left(\frac{1}{2}\left(1 - \frac{\epsilon}{\|b\|_2}\right)\right)^{\frac{n-1}{2}}\right].$

Proof. Let $\{v_i\}_{i=1}^n$ be an orthonormal basis of $\mathbb{R}^n$ consisting of eigenvectors of the matrix $A$. Without loss of generality (replacing $v_i$ with $-v_i$ if necessary), we may assume that the first non-zero coordinate of each $v_i$ is positive. We now obtain:

$P_{\epsilon,b} = \mathbb{P}\left(|\langle v_i, b \rangle| < \epsilon \text{ for some } i = 1, 2, \ldots, n\right).$

Let $T$ be an orthogonal transformation mapping $b/\|b\|_2$ to the vector $e_1 = (1, 0, \ldots, 0)$. Since $T$ is orthogonal, it follows that:

$P_{\epsilon,b} = \mathbb{P}\left(|\langle T(v_i), e_1 \rangle| < \frac{\epsilon}{\|b\|_2} \text{ for some } i = 1, 2, \ldots, n\right).$
Note that in $\mathbb{R}^n$ for $n \ge 3$, the sets $\left(|\langle T(v_i), e_1 \rangle| < \frac{\epsilon}{\|b\|_2}\right)$, $i = 1, 2, \ldots, n$, are not pairwise disjoint, even for small values of $\epsilon$. Therefore, we cannot repeat the argument of the case $n = 2$. However, we may proceed as follows. By the union bound,

$P_{\epsilon,b} \le \sum_{i=1}^n \mathbb{P}\left(|\langle T(v_i), e_1 \rangle| < \frac{\epsilon}{\|b\|_2}\right).$

Since $v_i$ is uniformly distributed in $S^{n-1}_+$ (see Proposition 3.1), we have that $T(v_i)$ is uniformly distributed on some hemisphere $H$. Therefore, if $A$ denotes the surface area measure on the sphere $S^{n-1}$, then

$\mathbb{P}\left(|\langle T(v_i), e_1 \rangle| < \frac{\epsilon}{\|b\|_2}\right) = \frac{A\left(\left\{\theta \in H : |\theta_1| < \epsilon/\|b\|_2\right\}\right)}{A_n/2}.$
The set $\{\theta \in S^{n-1} : \epsilon/\|b\|_2 \le \theta_1\}$ is a spherical cap of radius $r = \sqrt{2\left(1 - \frac{\epsilon}{\|b\|_2}\right)}$. Hence, by Lemma 5.2, its surface area is at least $\frac{1}{2}\left(\frac{r}{2}\right)^{n-1} A_n$. Therefore,

$\mathbb{P}\left(|\langle T(v_i), e_1 \rangle| < \frac{\epsilon}{\|b\|_2}\right) \le 1 - \left(\frac{r}{2}\right)^{n-1} = 1 - \left(\frac{1}{2}\left(1 - \frac{\epsilon}{\|b\|_2}\right)\right)^{\frac{n-1}{2}}.$

Hence,

$P_{\epsilon,b} \le n\left[1 - \left(\frac{1}{2}\left(1 - \frac{\epsilon}{\|b\|_2}\right)\right)^{\frac{n-1}{2}}\right],$

and the proof is complete.

Theorem 5.4. Assume that $A$, $b$ belong to GOE($n$) and $S_b$ respectively, and let $\epsilon$ be any positive number. For the $\epsilon$-uncontrollability of the random system (2.1), the following inequality holds:

$P_\epsilon \le n\, \mathbb{E}\left[1 - \left(\frac{1}{2}\max\left(1 - \frac{\epsilon}{\|b\|_2},\, 0\right)\right)^{\frac{n-1}{2}}\right].$
The next corollary shows that the growth of $P_\epsilon$ is at most polynomial of degree $n - 1$ with respect to $\epsilon$.
Corollary 5.5. For any integer $n \ge 2$ and any positive number $\epsilon$, we have $P_\epsilon \le \sum_{k=1}^{n-1} c_{n,k}\, \epsilon^k$ for suitable constants $c_{n,k} > 0$ depending only on $n$ and $k$.
Proof. The result follows from Theorem 5.4 by using the binomial expansion formula. The next natural corollary is now straightforward.
Finally, we have the following estimate for the growth rate of $P_\epsilon$ at $0$.

Corollary 5.6. $P_\epsilon = O(\epsilon)$ as $\epsilon \to 0^+$.

Proof. It follows immediately from Corollary 5.5.

Conclusions
We defined a measure of uncontrollability for random linear systems whose matrices come from the Gaussian orthogonal ensemble. We calculated tight bounds for this probability in terms of $\epsilon$ and the number of states $n$. This is also depicted in the graphs included in the appendix (Figures 1, 2).

Appendix A.

Acknowledgments. The authors want to express their thanks to Professor D. Cheliotis for his valuable suggestions concerning random matrices.

Figure 1. Moving from the lower curve to the upper one, the probability bound increases with n.

Figure 2. The growth of the probability bound increases quadratically with n.