The importance of state derivative feedback stems from practical problems in which the measurable signals are the derivatives of the states. In this context, state derivative feedback for linear time-invariant singular systems is derived to solve the stabilization problem and to minimize a quadratic criterion. The optimal feedback gain achieving the desired performance is designed via an LMI optimization problem or via the well-known Riccati equation approach. It is also shown how the proposed method can be applied to uncertain systems. Simulation examples are given to show the effectiveness of these approaches.

## 1. Introduction

Recently, descriptor systems have been one of the major research fields in control theory, due to their comprehensive applications in economic, electrical and mechanical models, and to the fact that these systems provide a more natural description of dynamical systems than the non-singular representation. Many synthesis problems have been studied in the literature: stability and stabilization analysis, observability and controllability analysis and $$H_2/H_\infty$$ norm characterization. By investigating feedback control, many results have been derived. Output feedback or state feedback is usually used. The proportional plus derivative feedback has also been examined by many authors to design controllers for the following issues: regularization and stabilization of linear descriptor systems (see Duan et al., 2003; Kuo et al., 2004; Zaghdoud et al., 2013a), feedback control of singular systems (Jing, 1994), $$H_\infty$$ control of state delay descriptor systems, and non-linear control using exact feedback linearization (Boukas & Habetler, 2004). In Gerstner *et al*. (1992, 1999), this feedback was applied to pole placement, and the design of PD observers was presented in Wu & Duan (2007).

However, few works use only the state derivative feedback. The use of this feedback control $$(u(t)=-K\dot{x}(t))$$ in classical control theory is essential and advantageous for obtaining a certain performance level in dynamic systems (Lewis & Syrmos, 1991). The importance of studying state derivative feedback comes from practical problems in which the state derivative signals are easier to obtain than the state signals (Zaghdoud *et al*., 2013b). Among the existing applications, we can point out, for example: active suspension systems of vehicles (Reithmeier & Leitmann, 2003), control of cable vibration (Duan *et al*., 2005), and vibration suppression of mechanical systems (Abdelaziz & Valasek, 2005a). To solve these problems in industry, accelerometers are regarded as the most important vibration sensors. In this case, the measurable signals obtained from the accelerometers are the state derivatives, from which it is possible to reconstruct accurate velocities but not displacements.

In this context, several books and papers can be helpful, including the design of controllers for vibration absorber systems and mechanical systems (Abdelaziz & Valasek, 2005b,c; Abdelaziz, 2010). Other papers exploit derivative feedback to solve the pole placement problem (Abdelaziz, 2004; Cardim *et al*., 2008; Faria *et al*., 2009), to discuss the robustness of the stability and stabilization problem (Araujo *et al*., 2007; Michiels *et al*., 2009) and to develop the design of a linear quadratic regulator (Kwak *et al*., 2002; Abdelaziz & Valasek, 2005a). However, few works consider uncertain systems (Araujo *et al*., 2007; Faria *et al*., 2009).

In this paper, we extend the linear quadratic (LQ) optimal control problem (treated in classical theory) to singular systems using state derivative feedback. For this problem, there are numerous results in the literature, but none uses derivative feedback. Cobb (1983) solved the problem via a geometric approach, Bender & Laub (1987) used a generalized Riccati equation and Zhu *et al*. (1999) used an equivalent formulation of the LQ problem.

Our purpose is to find a gain $$K$$ which stabilizes the closed-loop system and minimizes a quadratic criterion reflecting a desired performance level. Several methods will be proposed, and the development will take into account both the case where the model is known and the case where it is not. For this reason, we propose the algebraic Riccati equation (ARE) approach when the model of the system is known, and the LMI optimization problem otherwise (Aouani *et al*., 2009; Bedoui *et al*., 2009; Ben Attia *et al*., 2009).

The remainder of this paper is organized as follows. Section 2 states some preliminary definitions and highlights the problem to be addressed in this paper. Both the problem of stabilization via state derivative feedback and the minimization of two quadratic criteria are studied. In addition, an auxiliary singular system associated with the original singular system is introduced in order to translate the derivative feedback into a simple state feedback. The main results are derived in Section 3. These results are expressed as definite symmetric solutions or as LMI optimization problems. To illustrate the use of our method, an example is worked out in Section 4. A conclusion ends the paper.

*Notations*: Throughout this paper, the following notations will be used. For two matrices $$A$$ and $$B$$, $$A>B$$ means that $$A-B$$ is positive definite. $$A^T$$ denotes the transpose of $$A$$ and $$A^{-T}$$ the transpose of the inverse of $$A$$. Identity and null matrices will be denoted respectively by $$I$$ and $$0$$. Furthermore, in the case of partitioned symmetric matrices, the symbol $$(*)$$ denotes each of the symmetric blocks and $$A+{\rm sym}(*)$$ denotes $$A+A^T$$.

## 2. Preliminaries

Let us consider the following continuous-time linear descriptor system described by:

The principal assumption imposed on the system is that it is controllable. In addition, the system matrix $$A$$ is assumed to be of full rank. In this section, a key tool for finding a state derivative feedback controller, which stabilizes the feedback system and minimizes a quadratic criterion, is provided.

### 2.1. Stabilization problem

We consider the state derivative control expressed as:

To handle the stabilization purpose, let us put forth the following definition.

System (1) is D-stabilizable if there exist a matrix $$K$$ of appropriate dimension and a positive definite matrix $$P$$ such that:

This definition supposes that system (3) is well defined, i.e. that matrix $$(E+BK)$$ is invertible. Hence matrix $$A$$ is of full rank.

The first problem under consideration can be outlined as follows:

*Problem 1* (D-Stabilization problem). Find a matrix $$K$$ of appropriate dimensions such that the control law $$u(t)=-K\dot{x}(t)$$ D-stabilizes system (1).
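To fix ideas, here is a minimal numerical sketch of D-stabilization (the matrices $$E$$, $$A$$, $$B$$ and the pole locations are hypothetical illustration data, not taken from the paper). With $$u(t)=-K\dot{x}(t)$$ the loop reads $$(E+BK)\dot{x}(t)=Ax(t)$$, so the closed-loop matrix is $$(E+BK)^{-1}A$$, whose eigenvalues are the reciprocals of those of $$A^{-1}(E+BK)$$:

```python
import numpy as np
from scipy.signal import place_poles

# Hypothetical singular system: E is rank deficient, A has full rank.
E = np.array([[1.0, 0.0], [0.0, 0.0]])
A = np.array([[2.0, 0.0], [0.0, 1.0]])
B = np.array([[1.0], [1.0]])

# Place the eigenvalues of A^{-1}(E + BK) = A^{-1}E + A^{-1}B K at {-1, -2}.
At, Bt = np.linalg.inv(A) @ E, np.linalg.inv(A) @ B
K = -place_poles(At, Bt, [-1.0, -2.0]).gain_matrix

# Closed loop under u = -K x_dot: (E + BK) x_dot = A x.
Ac = np.linalg.inv(E + B @ K) @ A
print(np.linalg.eigvals(Ac))   # the reciprocals {-1.0, -0.5}: Hurwitz, so D-stable
```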

### 2.2. LQ criteria minimization

We consider the following quadratic cost:

Suppose that $$K$$ is a D-stabilizing feedback gain and denote:

Thus, the performance index can be rewritten as:

It is possible to drop the initial condition dependence by considering that $$x_{0}$$ is a random vector with zero mean and with covariance $$E\left[ {x_0 x_0^T } \right] = X_0 $$.

In this case, the quadratic performance index can be formulated as:

Proceeding similarly as above, we have:

The design problem is to find the feedback gain $$K$$ such that the performance index is minimized under the dynamical constraint. Hence, the LQ problem with state derivative feedback is formulated as follows:

*Problem 2* (LQ criterion minimization $$J$$). Find a matrix $$K$$ of appropriate dimension such that the control law $$u(t)=-K\dot{x}(t)$$ asymptotically D-stabilizes system (1) and minimizes the criterion $$J$$.

*Problem 3* (LQ criterion minimization $$J_1$$). Find a matrix $$K$$ of appropriate dimension such that the control law $$u(t)=-K\dot{x}(t)$$ asymptotically D-stabilizes system (1) and minimizes the criterion $$J_1$$.
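The covariance device used above, $$E[x_0^T P x_0]={\rm trace}(X_0 P)$$, is easy to verify numerically; in this sketch $$P$$ and $$X_0$$ are arbitrary positive definite test matrices, not related to the system:

```python
import numpy as np

# Monte Carlo check of E[x0' P x0] = trace(X0 P) for x0 ~ N(0, X0).
rng = np.random.default_rng(0)
n = 3
P = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, 0.2],
              [0.0, 0.2, 3.0]])
L = rng.standard_normal((n, n))
X0 = L @ L.T + np.eye(n)                         # a positive definite covariance

x0 = rng.multivariate_normal(np.zeros(n), X0, size=200_000)
mc = np.einsum('ij,jk,ik->i', x0, P, x0).mean()  # sample mean of x0' P x0
print(mc, np.trace(X0 @ P))                      # the two values agree closely
```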

### 2.3. Auxiliary system

To handle the state derivative feedback, an auxiliary system is introduced to transform the problem into a traditional state feedback control problem.

Let us introduce the auxiliary system associated to system (1):

The lemma below will be used in the analysis of the procedure that solves the proposed problems.

The following facts hold:

- (i)
System (1) is D-stabilizable by a control law $$u(t)=-K\dot{x}(t)$$ if and only if system (15) is stabilizable by the state feedback control law $$v(t)=-K\tilde{x}(t).$$

- (ii)
If a matrix $$K$$ solves Problem 2, the control law $$v(t)=-K\tilde{x}(t)$$ minimizes the performance index $$J_2$$ and at the optimum, we have $$J_{2{\rm opt}}=J_{\rm opt}$$.

- (iii)
If a matrix $$K$$ solves Problem 3, the control law $$v(t)=-K\tilde{x}(t)$$ minimizes the performance index $$J_3$$ and at the optimum, we have $$J_{3{\rm opt}}=J_{\rm opt}$$.

System (1) is D-stabilizable by a control law $$u(t)=-K\dot{x}(t)$$ if there exists a positive definite symmetric matrix $$P$$ such that:

Multiplying on the left by $$A^{-1}(E+BK)$$ and its transpose on the right, we get:

To show (ii), the performance index $$J_2$$ is expressed by:

Multiplying on the left by $$A_c^{T}$$ and on the right by its transpose, we obtain:

From this equation, we can deduce that $$S_2=S$$ which implies the equality of $$J$$ and $$J_2$$ at the optimum.

Multiplying on the left by $$A_c^{T}$$ and on the right by $$A_c$$, we obtain:

The utility of Lemma 1 is to translate the state derivative feedback control of system (1) into a traditional state feedback control for system (15), which makes it easy to solve Problems 2 and 3 in the sequel.

## 3. Main results

In this section, the main results solving the two problems of stabilization and quadratic criterion minimization are presented; they are expressed through LMIs and AREs.

### 3.1. Stabilization by state derivative feedback

In view of Lemma 1, it is easy to obtain the following theorem.

The following statements are equivalent:

- (i)
Problem 1 is solvable.

- (ii)
System (1) is D-stabilizable.

- (iii)
There exist a positive definite symmetric matrix $$X$$ and a matrix $$R$$ of appropriate dimensions satisfying:

(3.1)
$$AXE^{T}+EXA^{T}+AR^{T}B^{T}+BRA^{T}<0.$$
(3.2)
$$K=RX^{-1}.$$

- (iv)
There exist a positive definite symmetric matrix $$X$$ and matrices $$F$$ and $$Y$$ of appropriate dimensions such that:

(3.3)
$$\begin{bmatrix} F^{T}A^{T}+AF & X+F^{T}A^{T}-EF-BY \\ * & -F^{T}E^{T}-EF-BY-Y^{T}B^{T} \end{bmatrix}<0.$$
In this case,
(3.4)
$$K=YF^{-1}$$
solves Problem 1.

We remark that (i) is equivalent to (ii) by Lemma 1.

By Lemma 1, to show (iii), multiply (4) on the right by $$(E+BK)^{T}$$ and on the left by its transpose and take $$R=KX$$; this proves (iii). To prove (iv), we can rewrite (4) as follows (with $$Q=P^{-1}$$):

Now, using the projection lemma, there exists a matrix $$G$$ of appropriate dimension such that:

### 3.2. LQ minimization by state derivative feedback

To obtain the solution of Problem 2, we give the following theorem:

We have the following equivalent statements:

- (i)
Problem 2 is solvable.

- (ii)
There exists a positive definite symmetric matrix $$P$$ satisfying the following algebraic Riccati equation:

(3.5)
$$E^{T}A^{-T}P+PA^{-1}E+C^{T}C-(C^{T}D-PA^{-1}B)(D^{T}D)^{-1}(D^{T}C-B^{T}A^{-T}P)=0.$$
(3.6)
$$K_{\rm opt}=(D^{T}D)^{-1}(D^{T}C-B^{T}A^{-T}P)\quad{\rm and}\quad J_{\rm opt}={\rm trace}(X_{0}P).$$

- (iii)
There exist positive definite symmetric matrices $$X$$ and $$Z$$, and matrices $$F$$ and $$Y$$ of appropriate dimensions, solutions of the following optimization problem:

(3.7)
$$\min_{X,Y,F,Z}\ \gamma$$
(3.8)
$${\rm trace}\,Z<\gamma$$
(3.9)
$$\begin{bmatrix} -Z & CF-DY \\ * & -X \end{bmatrix}\leq 0$$
(3.10)
$$\begin{bmatrix} F^{T}A^{T}+AF & X+F^{T}A^{T}-EF-BY & -AX_{0} \\ * & -F^{T}E^{T}-Y^{T}B^{T}-EF-BY & -AX_{0} \\ * & * & -X_{0} \end{bmatrix}\leq 0.$$
In this case,
(3.11)
$$K=YF^{-1}\quad{\rm and}\quad J_{\rm opt}=\gamma_{\rm opt}.$$

We remark that (i) is equivalent to (ii) by Lemma 1 and classical LQ control design. To show (iii), Problem 2 is equivalent to the following relation:

By some elementary manipulations, the above inequality can be reformulated as:

Noting $$Q=P^{-1}$$, the last inequality can be equivalently rewritten as:

Hence, by the projection lemma, there exists a matrix $$G$$ of appropriate dimensions such that:

Noting $$F=G^{-1}$$, applying the congruence transformation $${\rm diag}(F^T, F^T, I)$$ and taking $$X=F^TQF$$ and $$Y=KF$$, the last inequality of (iii) holds. Introducing now the symmetric matrix $$Z$$ such that:

Multiplying on the left by $${\rm diag}(I,F^T)$$ and on the right by its transpose, inequality (3.9) is obtained. This completes the proof. □
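Equation (3.5) is a generalized CARE and can be solved with standard tools once mapped onto the usual form $$a^{T}P+Pa-(Pb+s)r^{-1}(b^{T}P+s^{T})+q=0$$ with $$a=A^{-1}E$$, $$b=A^{-1}B$$, $$q=C^{T}C$$, $$r=D^{T}D$$ and $$s=-C^{T}D$$. Below is a sketch with SciPy, on hypothetical system data:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical data: E singular, A invertible, D'D > 0.
E = np.array([[1.0, 0.0], [0.0, 0.0]])
A = np.array([[2.0, 0.0], [0.0, 1.0]])
B = np.array([[1.0], [1.0]])
C = np.eye(2)
D = np.array([[0.0], [1.0]])

Ai = np.linalg.inv(A)
At, Bt = Ai @ E, Ai @ B                         # A^{-1}E and A^{-1}B
Q, Rw, S = C.T @ C, D.T @ D, -C.T @ D           # mapping of (3.5)

P = solve_continuous_are(At, Bt, Q, Rw, s=S)    # stabilizing solution

# Residual of (3.5) -- should be numerically zero.
res = (E.T @ Ai.T @ P + P @ At + Q
       - (C.T @ D - P @ Bt) @ np.linalg.inv(Rw) @ (D.T @ C - Bt.T @ P))
K_opt = np.linalg.inv(Rw) @ (D.T @ C - Bt.T @ P)   # optimal gain (3.6)
print(np.abs(res).max())
```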

We have the following equivalent statements:

- (i)
Problem 3 is solvable.

- (ii)
There exists a positive definite symmetric matrix $$X$$ satisfying the following algebraic Riccati equation:

(3.13)
$$E^{T}A^{-T}X+XA^{-1}E+\tilde{C}^{T}\tilde{C}-(\tilde{C}^{T}\tilde{D}-XA^{-1}B)(\tilde{D}^{T}\tilde{D})^{-1}(\tilde{D}^{T}\tilde{C}-B^{T}A^{-T}X)=0,$$
with $$\tilde{C}=C_{1}A^{-1}E$$ and $$\tilde{D}=D_{1}-C_{1}A^{-1}B$$.
(3.14)
$$K_{\rm opt}=(\tilde{D}^{T}\tilde{D})^{-1}(\tilde{D}^{T}\tilde{C}-B^{T}A^{-T}X)\quad{\rm and}\quad J_{\rm opt}={\rm trace}(X_{0}X).$$

- (iii)
There exist positive definite symmetric matrices $$X$$ and $$Z$$, and matrices $$F, M, H$$ and $$Y$$ of appropriate dimensions, solutions of the following optimization problem:

(3.15)
$$\min_{X,Y,F,M,H,Z}\ \gamma$$
(3.16)
$${\rm trace}\,Z<\gamma$$
(3.17)
$$\begin{bmatrix} AM+M^{T}A^{T} & AH-M^{T}C_{1}^{T} & EF+BY \\ * & -Z-C_{1}H-H^{T}C_{1}^{T} & -D_{1}Y \\ * & * & -X \end{bmatrix}\leq 0$$
(3.18)
$$\begin{bmatrix} F^{T}A^{T}+AF & X+F^{T}A^{T}-EF-BY & -AX_{0} \\ * & -F^{T}E^{T}-Y^{T}B^{T}-EF-BY & -AX_{0} \\ * & * & -X_{0} \end{bmatrix}\leq 0.$$
In this case,
(3.19)
$$K=YF^{-1}\quad{\rm and}\quad J_{\rm opt}=\gamma_{\rm opt}.$$

The equivalence between (i) and (ii) follows from Lemma 1. Statement (ii) can be obtained using classical LQ control design (Anderson & Moore, 1990). To show item (iii), by Lemma 1 (iii), Problem 3 is equivalent to the existence of a control $$v(t) = - K\tilde x(t)$$ which minimizes:

This problem is equivalent to:

Inequality (3.18) is the same problem as that treated by Equation (3.10), and so it follows in the same way. To obtain (3.17), introduce the positive definite symmetric matrix $$Z$$ such that:

By the projection lemma, there exist matrices $$M$$ and $$ H$$ of appropriate dimensions such that:

Multiplying the obtained inequality on the left by $${\rm diag}(I, I, F^T)$$, its transpose on the right and recalling that $$X = {F^T}{P^{ - 1}}F$$ the first inequality follows. This completes the proof. □

A possible extension of this work, minimizing a quadratic criterion over the whole output $$z_2=[z;z_1]$$, can be developed by satisfying the following LMI conditions:

There exist positive definite symmetric matrices $$X$$ and $$Z$$, and matrices $$F, M, H$$ and $$Y$$ of appropriate dimensions, solutions of the following optimization problem:

The optimal gain is given by: $$K = YF^{ - 1}$$ and $$J_{\rm opt} = \gamma _{\rm opt}.$$

### 3.3. Extension to uncertain systems

This paragraph extends the results developed in the previous section to uncertain descriptor systems with polytopic coefficient matrices. Let us consider now that system (1) is uncertain and that matrices $$E$$, $$A$$ and $$B$$ are constant and belong to the following classes:

To extend the result of Theorem 2 to this polytopic system, we give the following theorem:

If there exist positive definite symmetric matrices $$X$$ and $$Z$$, matrices $$F$$ and $$Y$$ of appropriate dimensions, solutions of the following optimization problem:

The proof follows by simple convexity arguments. Note that $$\sum\limits_{i = 1}^{{N_E}} {\sum\limits_{j = 1}^{{N_A}} {\sum\limits_{k = 1}^{{N_B}} {{\theta _i}{\beta _j}{\zeta _k}} } } {X_{ijk}}$$ is a Lyapunov function for the closed loop system. □

Theorem 3 can also be extended to the case of uncertain systems. The result is stated in the following theorem:

If there exist positive definite symmetric matrices $$X$$ and $$Z$$, matrices $$F, M, H$$ and $$Y$$ of appropriate dimensions, solutions of the following optimization problem:

The proof follows by simple convexity arguments. Note that $$\sum\limits_{i = 1}^{{N_E}} {\sum\limits_{j = 1}^{{N_A}} {\sum\limits_{k = 1}^{{N_B}} {{\theta _i}{\beta _j}{\zeta _k}} } } {X_{ijk}}$$ is a Lyapunov function for the closed loop system. □

For ordinary systems, the use of state derivative feedback, with or without uncertainties, has been well developed in recent years. An important approach to stabilizability and stability robustness was proposed in Michiels *et al*. (2009), where the fragility of the stability of the system, caused by small modelling and implementation errors, was discussed. A solution to the robustness problem was proposed through the inclusion of a low-pass filter. If $${\rm det}(-A) < 0$$, which is satisfied when $$A$$ has an odd number of real eigenvalues in the open right half-plane, the system cannot be robustly stabilized using state derivative feedback.

The latency phenomenon, due to a small delay in the feedback loop of the system, is of crucial importance. In Vyhlidal *et al*. (2011), it is shown that applying a state derivative feedback controller to stabilize retarded systems makes the closed-loop system neutral. To resolve this problem, the stabilization approach is based on minimizing the spectral abscissa of the closed-loop system over the controller parameter space.

## 4. Numerical example

In order to validate the proposed approaches, we consider the singular system with the following parameters:

To solve Problem 2, which guarantees the stabilizability of the system and minimizes the criterion $$J$$, an application of Theorem 2 leads to the following optimal gain $$K$$, obtained from the LMI optimization problem or the ARE:

On the other hand, for Problem 3, Theorem 3 leads to the optimal gain

We consider now the same system but affected by the following uncertainties:

Theorems 4 and 5 give, respectively, the following robust gains:

We remark from this figure that the criterion involving the state leads to better behaviour than the one involving the state derivative. This justifies the use of a criterion explicitly expressed in terms of the state in some applications.

## 5. Conclusion

In this paper, the problem of controlling a linear singular system when the only measurable signals are the derivatives of the state is studied. The optimal state derivative feedback control is developed for two quadratic criteria. The first one involves a quadratic function of the state derivative, while in the second one, a quadratic function of the state is considered. For this reason, the work developed in this paper is more general than other works, such as the approach developed in Abdelaziz & Valasek (2005a): the criterion considered here contains both the state $$x(t)$$ and its derivative $$\dot{x}(t)$$.

Consider, for instance, the criterion:

In Abdelaziz & Valasek (2005a), however, the criterion concerns only the state derivative $$\dot{x}(t)$$. In this paper, control design methods expressed through algebraic Riccati equations and LMI optimization problems are developed. The latter are better adapted to take into account other specifications, such as pole placement or structural constraints.

Using LMIs in control design has the advantage of allowing polytopic uncertainties to be introduced into the problem. An example shows the potential of the proposed methods, and particularly the interest of considering a quadratic function of the state in the quadratic criterion. Some interesting problems need further developments. Among them, we can cite, for example, the perturbation rejection problem and the dynamic output feedback problem. They will be investigated in the near future.