The importance of state derivative feedback comes from practical problems in which the measurable signals are the derivatives of the states. In this context, state derivative feedback for linear time-invariant singular systems is derived to solve the stabilization problem while minimizing a quadratic criterion. The optimal feedback gain achieving the desired performance is obtained either via an LMI optimization problem or by the well-known Riccati equation approach. It is also shown how the proposed method can be applied to uncertain systems. To illustrate the presented study, simulation examples are given to show the effectiveness of these approaches.

1. Introduction

Recently, descriptor systems have become one of the major research fields in control theory, due to their comprehensive applications in economic, electrical and mechanical models, and to the fact that these systems provide a more natural description of dynamical systems than the non-singular representation. Many synthesis problems have been studied in the literature: stability and stabilization analysis, observability and controllability analysis and $$H_2/H_\infty$$ norm characterization. By investigating feedback control, many results have been derived, usually based on output feedback or state feedback. The proportional plus derivative feedback has also been examined by many authors to design controllers for the following issues: regularization and stabilization of linear descriptor systems (see Duan et al., 2003; Kuo et al., 2004; Zaghdoud et al., 2013a), feedback control of singular systems (Jing, 1994), $$H_\infty$$ control of state delay descriptor systems and non-linear control using exact feedback linearization (Boukas & Habetler, 2004). In Bunse-Gerstner et al. (1992, 1999), this feedback was applied to pole placement, and the design of PD observers can be found in Wu & Duan (2007).

However, few works use only the state derivative feedback. The use of this feedback control $$(u(t)=-K\dot{x}(t))$$ in classical control theory is essential and advantageous for obtaining a certain performance level in dynamic systems (Lewis & Syrmos, 1991). The importance of studying state derivative feedback comes from practical problems in which it is easier to obtain the state derivative signals than the state signals (Zaghdoud et al., 2013b). Among the existing applications, we can point out, for example, active suspension systems of vehicles (Reithmeier & Leitmann, 2003), control of cable vibration (Duan et al., 2005) and vibration suppression of mechanical systems (Abdelaziz & Valasek, 2005a). To solve these problems in industry, accelerometers are regarded as the most important vibration sensors. In this case, the measurable signals obtained from the accelerometers are the state derivatives, from which it is possible to reconstruct good velocity estimates but not displacements.

In this context, several books and papers can be helpful, including the design of controllers for vibration absorbers and mechanical systems (Abdelaziz & Valasek, 2005b,c; Abdelaziz, 2010). Other papers exploit the derivative feedback to solve the pole placement problem (Abdelaziz & Valasek, 2004; Cardim et al., 2008; Faria et al., 2009), to discuss the robustness of the stability and stabilization problem (Araujo et al., 2007; Michiels et al., 2009) and to develop the design of a linear quadratic regulator (Kwak et al., 2002; Abdelaziz & Valasek, 2005a). However, few works consider uncertain systems (Araujo et al., 2007; Faria et al., 2009).

In this paper, we extend the linear quadratic (LQ) optimal control problem (treated in classical theory) to singular systems using state derivative feedback. There have been numerous results on this problem in the literature, but none uses the derivative feedback. Cobb (1983) solved the problem via a geometric approach, Bender & Laub (1987) used a generalized Riccati equation and Zhu et al. (1999) used an equivalent formulation of the LQ problem.

Our purpose is to find a gain $$K$$ which stabilizes the closed-loop system and minimizes a quadratic criterion reflecting a desired performance level. Several methods will be proposed, and the development will take into account whether or not the model is known. For this reason, we propose the algebraic Riccati equation (ARE) approach when the model of the system is known, and an LMI optimization problem otherwise (Aouani et al., 2009; Bedoui et al., 2009; Ben Attia et al., 2009).

The remainder of this paper is organized as follows. Section 2 states some preliminary definitions and highlights the problems addressed in this paper: both the stabilization problem via state derivative feedback and the minimization of two quadratic criteria are studied. In addition, an auxiliary singular system associated with the original singular system is introduced in order to translate the derivative feedback into a standard state feedback. The main results are derived in Section 3; they are expressed as definite symmetric solutions of Riccati equations or as LMI optimization problems. To illustrate the use of our method, an example is worked out in Section 4. A conclusion ends the paper.

Notation: throughout this paper, the following notation will be used. For two matrices $$A$$ and $$B$$, $$A>B$$ means that $$A-B$$ is positive definite. $$A^T$$ denotes the transpose of $$A$$ and $$A^{-T}$$ the transpose of the inverse of $$A$$. Identity and null matrices are denoted by $$I$$ and $$0$$, respectively. Furthermore, in partitioned symmetric matrices, the symbol $$(*)$$ denotes each of the symmetric blocks and $$A+{\rm sym}(*)$$ denotes $$A+A^T$$.

2. Preliminaries

Let us consider the following continuous-time linear descriptor system described by:  

(2.1)
$$\begin{cases}E\dot{x}(t)=Ax(t)+Bu(t)\\z(t)=C\dot{x}(t)+Du(t)\\z_1(t)=C_1x(t)+D_1u(t)\\x(0)=x_0,\end{cases}$$
where $$x(t)\in \Re^n$$ is the state vector and $$z(t)\in \Re^p$$ and $$z_1(t)\in \Re^p$$ are the controlled output vectors. $$A$$, $$B$$, $$C$$, $$C_1$$, $$D$$ and $$D_1$$ are constant matrices of appropriate dimensions. The matrix $$E$$ may be singular: $${\rm rank}(E)=q\leq n$$.

The principal assumption imposed on the system is that it is controllable. In addition, it is assumed that the matrix $$A$$ is of full rank. In this section, a key tool is developed for finding a state derivative feedback controller which stabilizes the closed-loop system and minimizes a quadratic criterion.
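These standing assumptions can be checked numerically. The sketch below (Python/NumPy, with purely illustrative matrices that are not the paper's example) verifies that $$E$$ is singular, that $$A$$ has full rank and, as a quick sanity check, that the pair $$(A^{-1}E,\,A^{-1}B)$$ which will drive the auxiliary dynamics of Section 2.3 is controllable in the ordinary Kalman sense:

```python
import numpy as np

# Hypothetical third-order descriptor system (illustrative values only).
E = np.array([[0.0, 0.0, 0.0],
              [1.0, 0.7, 0.0],
              [0.0, 1.0, 0.0]])          # rank(E) = 2 < 3: E is singular
A = np.array([[1.0, 4.5, 0.5],
              [3.0, 7.0, 8.0],
              [5.0, 3.0, 6.0]])          # full rank
B = np.array([[1.0], [0.3], [1.0]])

n = A.shape[0]
assert np.linalg.matrix_rank(E) < n      # E singular: rank(E) = q <= n
assert np.linalg.matrix_rank(A) == n     # A invertible

# Kalman controllability matrix of the auxiliary pair (A^{-1}E, A^{-1}B).
Ae, Be = np.linalg.solve(A, E), np.linalg.solve(A, B)
ctrb = np.hstack([np.linalg.matrix_power(Ae, i) @ Be for i in range(n)])
print("controllability rank:", np.linalg.matrix_rank(ctrb))
```

A full-rank controllability matrix here is only an ordinary-systems sanity check, not the formal controllability notion for descriptor systems.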

2.1. Stabilization problem

We consider the state derivative control expressed as:  

(2.2)
$$u(t)=-K\dot{x}(t),$$
where $$K\in\Re^{m\times n}$$ is a matrix of appropriate dimensions. The resulting closed loop system is obtained as follows:  
(2.3)
$$\begin{cases}(E+BK)\dot{x}(t)=Ax(t)\\z(t)=(C-DK)\dot{x}(t)\\z_1(t)=C_1x(t)-D_1K\dot{x}(t).\end{cases}$$

To handle the stabilization purpose, let us put forth the following definition.

Definition 2.1

System (1) is D-stabilizable if there exist a matrix $$K$$ of appropriate dimensions and a positive definite matrix $$P$$ such that:  

(2.4)
$$(E+BK)^{-1}AP+PA^T(E+BK)^{-T}<0.$$

This definition supposes that system (3) is well defined, i.e. that the matrix $$E+BK$$ is invertible. It also requires the matrix $$A$$ to be of full rank.

The first problem under consideration can be outlined as follows:

Problem 1 (D-Stabilization problem). Find a matrix $$K$$ of appropriate dimensions such that the control law $$u(t)=-K\dot{x}(t)$$ D-stabilizes system (1).

2.2. LQ criteria minimization

We consider the following quadratic cost:  

(2.5)
$$J=\int_0^\infty z^T(t)z(t)\,{\rm d}t=\int_0^\infty \dot{x}^T(t)(C-DK)^T(C-DK)\dot{x}(t)\,{\rm d}t.$$

Suppose that $$K$$ is a D-stabilizing feedback gain and denote:  

(2.6)
$$A_c=(E+BK)^{-1}A;$$
the performance index $$J$$ becomes:  
(2.7)
$$J=\int_0^\infty x_0^Te^{A_c^Tt}A_c^T(C-DK)^T(C-DK)A_ce^{A_ct}x_0\,{\rm d}t=x_0^TSx_0,$$
where $$S=S^{T}$$ satisfies the following Lyapunov equation:  
(2.8)
$$A_c^TS+SA_c+A_c^T(C-DK)^T(C-DK)A_c=0.$$

Thus, the performance index can be rewritten as:  

(2.9)
$$J=\int_0^\infty{\rm trace}\big((C-DK)A_ce^{A_ct}x_0x_0^Te^{A_c^Tt}A_c^T(C-DK)^T\big)\,{\rm d}t={\rm trace}\big((C-DK)P(C-DK)^T\big),$$
where $$P=P^{T}$$ is the solution of the following equation:  
(2.10)
$$A_cP+PA_c^T+A_cx_0x_0^TA_c^T=0.$$

It is possible to drop the initial condition dependence by considering that $$x_{0}$$ is a random vector with zero mean and with covariance $$E\left[ {x_0 x_0^T } \right] = X_0 $$.

In this case, the quadratic performance index can be formulated as:  

(2.11)
$$J=\int_0^\infty E\big[z^T(t)z(t)\big]\,{\rm d}t={\rm trace}\big((C-DK)P(C-DK)^T\big)={\rm trace}(X_0S),$$
with:  
(2.12)
$$A_cP+PA_c^T+A_cX_0A_c^T=0\quad{\rm and}\quad A_c^TS+SA_c+A_c^T(C-DK)^T(C-DK)A_c=0.$$

Proceeding similarly as above, we have:  

(2.13)
$$J_1=\int_0^\infty E\big[z_1^T(t)z_1(t)\big]\,{\rm d}t={\rm trace}\big((C_1-D_1KA_c)P_1(C_1-D_1KA_c)^T\big)={\rm trace}(X_0S_1),$$
with:  
(2.14)
$$A_cP_1+P_1A_c^T+X_0=0\quad{\rm and}\quad A_c^TS_1+S_1A_c+(C_1-D_1KA_c)^T(C_1-D_1KA_c)=0.$$

The design problem is to find the feedback gain $$K$$ so that the performance index is minimized under the dynamical constraint. Hence, the LQ problem with state derivative feedback is elaborated as follows:

Problem 2 (LQ criterion minimization $$J$$). Find a matrix $$K$$ of appropriate dimension such that the control law $$u(t)=-K\dot{x}(t)$$ asymptotically D-stabilizes system (1) and minimizes the criterion $$J$$.

Problem 3 (LQ criterion minimization $$J_1$$). Find a matrix $$K$$ of appropriate dimension such that the control law $$u(t)=-K\dot{x}(t)$$ asymptotically D-stabilizes system (1) and minimizes the criterion $$J_1$$.

2.3. Auxiliary system

To handle the state derivative feedback design, an auxiliary system is introduced to transform the problem into a traditional state feedback control problem.

Let us introduce the auxiliary system associated to system (1):  

(2.15)
$$\begin{cases}\dot{\tilde{x}}(t)=A^{-1}E\tilde{x}(t)-A^{-1}Bv(t)\\\eta(t)=C\tilde{x}(t)+Dv(t)\\\eta_1(t)=C_1A^{-1}E\tilde{x}(t)+(D_1-C_1A^{-1}B)v(t)\\\tilde{x}(0)=x_0,\end{cases}$$
where the matrices $$A$$, $$B$$, $$C$$, $$D$$ and $$E$$ are the same as in system (1). Associated with (15), the performance indexes are:  
$$J_2=\int_0^\infty\eta^T(t)\eta(t)\,{\rm d}t\quad{\rm and}\quad J_3=\int_0^\infty\eta_1^T(t)\eta_1(t)\,{\rm d}t.$$

The following lemma will be used in the analysis of the procedure that solves the proposed problems.

Lemma 2.1

The following facts hold:

  • (i)

    System (1) is D-stabilizable by a control law $$u(t)=-K\dot{x}(t)$$ if and only if system (15) is stabilizable by the state feedback control law $$v(t)=-K\tilde{x}(t).$$

  • (ii)

    If a matrix $$K$$ solves Problem 2, the control law $$v(t)=-K\tilde{x}(t)$$ minimizes the performance index $$J_2$$ and at the optimum, we have $$J_{2{\rm opt}}=J_{\rm opt}$$.

  • (iii)

    If a matrix $$K$$ solves Problem 3, the control law $$v(t)=-K\tilde{x}(t)$$ minimizes the performance index $$J_3$$ and at the optimum, we have $$J_{3{\rm opt}}=J_{\rm opt}$$.

Proof

System (1) is D-stabilizable by a control law $$u(t)=-K\dot{x}(t)$$ if there exists a positive definite symmetric matrix $$P$$ such that:  

$$(E+BK)^{-1}AP+PA^T(E+BK)^{-T}<0.$$

Multiplying on the left by $$A^{-1}(E+BK)$$ and on the right by its transpose, we get:  

$$A^{-1}(E+BK)(E+BK)^{-1}AP(E+BK)^TA^{-T}+A^{-1}(E+BK)PA^T(E+BK)^{-T}(E+BK)^TA^{-T}<0$$
$$\Rightarrow\quad P(E+BK)^TA^{-T}+A^{-1}(E+BK)P<0;$$
hence (i) is proved.

To show (ii), the performance index $$J_2$$ is expressed as:  

$$J_2=\int_0^\infty\eta^T(t)\eta(t)\,{\rm d}t=\int_0^\infty x_0^Te^{A_c^{-T}t}(C-DK)^T(C-DK)e^{A_c^{-1}t}x_0\,{\rm d}t=x_0^TS_2x_0,$$
where $$S_2$$ is a solution of the following Lyapunov equation:  
$$A_c^{-T}S_2+S_2A_c^{-1}+(C-DK)^T(C-DK)=0.$$

Multiplying on the left by $$A_c^{T}$$ and on the right by its transpose, we obtain:  

$$A_c^TS_2+S_2A_c+A_c^T(C-DK)^T(C-DK)A_c=0.$$

From this equation, we can deduce that $$S_2=S$$ which implies the equality of $$J$$ and $$J_2$$ at the optimum.

To show (iii), remark that:  

$$J_3=\int_0^\infty\eta_1^T(t)\eta_1(t)\,{\rm d}t=\int_0^\infty x_0^Te^{A_c^{-T}t}\big(C_1A^{-1}E-D_1K+C_1A^{-1}BK\big)^T\big(C_1A^{-1}E-D_1K+C_1A^{-1}BK\big)e^{A_c^{-1}t}x_0\,{\rm d}t$$
$$\phantom{J_3}=\int_0^\infty x_0^Te^{A_c^{-T}t}\big(C_1A^{-1}(E+BK)-D_1K\big)^T\big(C_1A^{-1}(E+BK)-D_1K\big)e^{A_c^{-1}t}x_0\,{\rm d}t=x_0^TS_3x_0,$$
with $$S_3$$ the solution of the following Lyapunov equation:  
$$A_c^{-T}S_3+S_3A_c^{-1}+(C_1A_c^{-1}-D_1K)^T(C_1A_c^{-1}-D_1K)=0.$$

Multiplying on the left by $$A_c^{T}$$ and on the right by $$A_c$$, we obtain:  

$$A_c^TS_3+S_3A_c+(C_1-D_1KA_c)^T(C_1-D_1KA_c)=0$$
and we deduce that $$S_3=S_1$$, implying the equality of $$J_1$$ and $$J_3$$ at the optimum. □

Remark 2.1

The utility of Lemma 2.1 is to translate the state derivative feedback control of system (1) into a traditional state feedback control for system (15), so that Problems 2 and 3 can easily be solved in the sequel.
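The translation stated in Lemma 2.1(i) can also be checked numerically: the closed-loop matrix of system (1) under $$u=-K\dot{x}$$ and that of system (15) under $$v=-K\tilde{x}$$ are inverses of each other, so their spectra are reciprocal and stability of one implies stability of the other. A small sketch with hypothetical matrices (the gain $$K$$ is chosen stabilizing for this toy system):

```python
import numpy as np

# Toy matrices (hypothetical; K chosen stabilizing by hand).
E = np.array([[1.0, 0.0], [0.0, 0.0]])
A = np.array([[1.0, 1.0], [2.0, 1.0]])
B = np.array([[1.0], [1.0]])
K = np.array([[-2.0, -0.5]])

# u = -K xdot on system (1):    (E+BK) xdot = A x
Ac = np.linalg.solve(E + B @ K, A)
# v = -K xtilde on system (15): xtilde_dot = A^{-1}(E+BK) xtilde
Ac_aux = np.linalg.solve(A, E + B @ K)

lam = np.sort(np.linalg.eigvals(Ac))
mu = np.sort(np.linalg.eigvals(Ac_aux))
# The two closed-loop matrices are inverses of each other, so their
# spectra are reciprocal and both are Hurwitz together.
assert np.allclose(np.sort(1.0 / lam), mu)
assert np.all(lam.real < 0) and np.all(mu.real < 0)
```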

3. Main results

In this section, the main results solving the two problems of stabilization and quadratic criterion minimization are presented, expressed through LMIs and AREs.

3.1. Stabilization by state derivative feedback

In view of Lemma 2.1, it is easy to obtain the following theorem.

Theorem 3.1

The following statements are equivalent:

  • (i)

    Problem 1 is solvable.

  • (ii)

    System (1) is D-stabilizable.

  • (iii)

    There exist a positive definite symmetric matrix $$X$$ and a matrix $$R$$ of appropriate dimensions satisfying:  

    (3.1)
    $$AXE^T+EXA^T+AR^TB^T+BRA^T<0.$$

    The gain given by:  

    (3.2)
    $$K=RX^{-1}$$
    solves Problem 1.

  • (iv)

    There exist a positive definite symmetric matrix $$X$$ and matrices $$F$$ and $$Y$$ of appropriate dimensions such that:  

    (3.3)
    $$\begin{bmatrix}F^TA^T+AF&X+F^TA^T-EF-BY\\(*)&-F^TE^T-EF-BY-Y^TB^T\end{bmatrix}<0.$$

    The gain given by:  

    (3.4)
    $$K=YF^{-1}$$
    solves Problem 1.

Proof

We remark that (i) is equivalent to (ii) by Lemma 2.1.

By Lemma 2.1, to show (iii), multiply (4) on the left by $$(E+BK)$$ and on the right by $$(E+BK)^T$$ and take $$X=P$$ and $$R=KX$$; this proves (iii). To prove (iv), (4) can be rewritten as follows ($$Q=P^{-1}$$):  

$$A^{-T}Q(E+BK)^{-1}+(E+BK)^{-T}QA^{-1}<0,\quad Q>0.$$

This is equivalent to:  

$$\begin{bmatrix}A^{-T}&(E+BK)^{-T}\end{bmatrix}\begin{bmatrix}0&Q\\Q&0\end{bmatrix}\begin{bmatrix}A^{-1}\\(E+BK)^{-1}\end{bmatrix}<0.$$

Now, using the projection lemma, there exists a matrix $$G$$ of appropriate dimension such that:  

$$\begin{bmatrix}0&Q\\Q&0\end{bmatrix}+\begin{bmatrix}A^T\\-(E+BK)^T\end{bmatrix}G^T\begin{bmatrix}I&I\end{bmatrix}+{\rm sym}(*)=\begin{bmatrix}A^TG^T+GA&A^TG^T+Q-GE-GBK\\(*)&-E^TG^T-K^TB^TG^T-GE-GBK\end{bmatrix}<0;$$
denoting $$F=G^{-1}$$, applying the congruence transformation $${\rm diag}(F^T,F^T)$$ and taking $$X=F^TQF$$ and $$Y=KF$$, the inequality in (iv) follows. This achieves the proof of the theorem. □

3.2. LQ minimization by state derivative feedback

To get the solution of Problem 2, we give the following theorem:

Theorem 3.2

We have the following equivalent statements:

  • (i)

    Problem 2 is solvable.

  • (ii)

    There exists a positive definite symmetric matrix $$P$$ satisfying the following algebraic Riccati equation:  

    (3.5)
    $$E^TA^{-T}P+PA^{-1}E+C^TC-(C^TD-PA^{-1}B)(D^TD)^{-1}(D^TC-B^TA^{-T}P)=0.$$

    The optimal gain is given by:  

    (3.6)
    $$K_{\rm opt}=(D^TD)^{-1}(D^TC-B^TA^{-T}P)\quad{\rm and}\quad J_{\rm opt}={\rm trace}(X_0P).$$

  • (iii)

    There exist positive definite symmetric matrices $$X$$ and $$Z$$ and matrices $$F$$ and $$Y$$ of appropriate dimensions, solutions of the following optimization problem:  

    (3.7)
    $$\min_{X,Y,F,Z}\gamma$$

    (3.8)
    $${\rm trace}\,Z<\gamma$$

    (3.9)
    $$\begin{bmatrix}Z&CF-DY\\(*)&X\end{bmatrix}\geq 0$$

    (3.10)
    $$\begin{bmatrix}F^TA^T+AF&X+F^TA^T-EF-BY&AX_0\\(*)&-F^TE^T-Y^TB^T-EF-BY&AX_0\\(*)&(*)&-X_0\end{bmatrix}\leq 0.$$

    The optimal gain is given by:  

    (3.11)
    $$K=YF^{-1}\quad{\rm and}\quad J_{\rm opt}=\gamma_{\rm opt}.$$

Proof

We remark that (i) is equivalent to (ii) by Lemma 2.1 and classical LQ control design. To show (iii), Problem 2 is equivalent to the following relation:  

(3.12)
$$\min\,{\rm trace}\big((C-DK)P(C-DK)^T\big)\quad{\rm under}\quad A_cP+PA_c^T+A_cX_0A_c^T<0$$
with $$A_c=(E+BK)^{-1}A.$$

By some elementary manipulations, the above inequality can be reformulated as:  

$$A^{-T}P^{-1}(E+BK)^{-1}+(E+BK)^{-T}P^{-1}A^{-1}+(E+BK)^{-T}P^{-1}X_0P^{-1}(E+BK)^{-1}<0.$$

Noting $$Q=P^{-1}$$, the last inequality can be equivalently rewritten as:  

$$A^{-T}Q(E+BK)^{-1}+(E+BK)^{-T}QA^{-1}+(E+BK)^{-T}QX_0Q(E+BK)^{-1}<0,$$
and this is equivalent to:  
$$\begin{bmatrix}A^{-T}&(E+BK)^{-T}&(E+BK)^{-T}Q\end{bmatrix}\begin{bmatrix}0&Q&0\\Q&0&0\\0&0&X_0\end{bmatrix}\begin{bmatrix}A^{-1}\\(E+BK)^{-1}\\Q(E+BK)^{-1}\end{bmatrix}<0.$$

Hence, by the projection lemma, there exists a matrix $$G$$ of appropriate dimensions such that:  

$$\begin{bmatrix}0&Q&0\\Q&0&0\\0&0&-X_0\end{bmatrix}+\begin{bmatrix}A^T\\-(E+BK)^T\\X_0A^T\end{bmatrix}G^T\begin{bmatrix}I&I&0\end{bmatrix}+{\rm sym}(*)\leq 0.$$

Noting $$F=G^{-1}$$, applying the congruence transformation $${\rm diag}(F^T,F^T,I)$$ and taking $$X=F^TQF$$ and $$Y=KF$$, the last inequality of (iii) holds. Introduce now the symmetric matrix $$Z$$ such that:  

$$(C-DK)P(C-DK)^T\leq Z\iff\begin{bmatrix}Z&C-DK\\(*)&Q\end{bmatrix}\geq 0.$$

Multiplying on the left by $${\rm diag}(I,F^T)$$ and on the right by its transpose, the inequality (3.9) is proved. This completes the proof. □
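When the model is known, statement (ii) of Theorem 3.2 is directly implementable: (3.5) is the standard LQ Riccati equation, with cross-weighting $$C^TD$$, written for the auxiliary system (2.15). A minimal sketch using SciPy's `solve_continuous_are` on hypothetical matrices (not the paper's example):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical second-order descriptor system (illustrative values only).
E = np.array([[1.0, 0.0], [0.0, 0.0]])   # singular descriptor matrix
A = np.array([[1.0, 1.0], [2.0, 1.0]])   # full rank
B = np.array([[1.0], [1.0]])
C = np.array([[0.0, 1.0], [0.0, 0.0]])
D = np.array([[0.0], [1.0]])

# LQ data for the auxiliary system (2.15):
#   a = A^{-1}E,  b = -A^{-1}B,  q = C^T C,  r = D^T D,  cross term s = C^T D.
a = np.linalg.solve(A, E)
b = -np.linalg.solve(A, B)
q, r, s = C.T @ C, D.T @ D, C.T @ D

P = solve_continuous_are(a, b, q, r, s=s)
K = np.linalg.solve(r, b.T @ P + s.T)    # optimal derivative-feedback gain

# For these values, solving the ARE by hand gives P = [[1.5, 1], [1, 1]]
# and K = [[-1, -1]]; the derivative-feedback closed loop is Hurwitz.
Ac = np.linalg.solve(E + B @ K, A)
assert np.all(np.linalg.eigvals(Ac).real < 0)
```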

Theorem 3.3

We have the following equivalent statements:

  • (i)

    Problem 3 is solvable.

  • (ii)

    There exists a positive definite symmetric matrix $$X$$ satisfying the following algebraic Riccati equation:  

    (3.13)
    $$E^TA^{-T}X+XA^{-1}E+\tilde{C}^T\tilde{C}-(\tilde{C}^T\tilde{D}-XA^{-1}B)(\tilde{D}^T\tilde{D})^{-1}(\tilde{D}^T\tilde{C}-B^TA^{-T}X)=0,$$
    with $$\tilde{C}=C_{1}A^{-1}E$$ and $$ \tilde{D}=D_{1}-C_{1}A^{-1}B$$.

    The optimal gain is given by:  

    (3.14)
    $$K_{\rm opt}=(\tilde{D}^T\tilde{D})^{-1}(\tilde{D}^T\tilde{C}-B^TA^{-T}X)\quad{\rm and}\quad J_{\rm opt}={\rm trace}(X_0X).$$

  • (iii)

    There exist positive definite symmetric matrices $$X$$ and $$Z$$ and matrices $$F$$, $$M$$, $$H$$ and $$Y$$ of appropriate dimensions, solutions of the following optimization problem:  

    (3.15)
    $$\min_{X,Y,F,M,H,Z}\gamma$$

    (3.16)
    $${\rm trace}\,Z<\gamma$$

    (3.17)
    $$\begin{bmatrix}AM+M^TA^T&AH-M^TC_1^T&EF+BY\\(*)&Z-C_1H-H^TC_1^T&-D_1Y\\(*)&(*)&X\end{bmatrix}\geq 0$$

    (3.18)
    $$\begin{bmatrix}F^TA^T+AF&X+F^TA^T-EF-BY&AX_0\\(*)&-F^TE^T-Y^TB^T-EF-BY&AX_0\\(*)&(*)&-X_0\end{bmatrix}\leq 0.$$

    The optimal gain is given by:  

    (3.19)
    $$K=YF^{-1}\quad{\rm and}\quad J_{\rm opt}=\gamma_{\rm opt}.$$

Proof

The equivalence between (i) and (ii) follows from Lemma 2.1; (ii) is obtained using classical LQ control design (Anderson & Moore, 1990). To show item (iii), by Lemma 2.1 (iii), Problem 3 is equivalent to the existence of a control $$v(t)=-K\tilde{x}(t)$$ which minimizes:  

$$\int_0^\infty\eta_1^T(t)\eta_1(t)\,{\rm d}t\quad{\rm under}\quad\dot{\tilde{x}}=A^{-1}(E+BK)\tilde{x}.$$

This problem is equivalent to:  

$$\min_{P=P^T}\,{\rm trace}\big((C_1A_c^{-1}-D_1K)P(C_1A_c^{-1}-D_1K)^T\big)\quad{\rm under}\quad A_cP+PA_c^T+X_0\leq 0.$$

The inequality (3.18) is the same as (3.10) and thus follows in the same way. To obtain (3.17), introduce the positive definite symmetric matrix $$Z$$ such that:  

$$(C_1A_c^{-1}-D_1K)P(C_1A_c^{-1}-D_1K)^T\leq Z\iff\begin{bmatrix}Z&C_1A^{-1}(E+BK)-D_1K\\(*)&P^{-1}\end{bmatrix}\geq 0.$$

We have:  

$$\begin{bmatrix}C_1A^{-1}&I&0\\0&0&I\end{bmatrix}\begin{bmatrix}0&0&E+BK\\(*)&Z&-D_1K\\(*)&(*)&P^{-1}\end{bmatrix}\begin{bmatrix}A^{-T}C_1^T&0\\I&0\\0&I\end{bmatrix}\geq 0,\qquad\begin{bmatrix}0&0&I\end{bmatrix}\begin{bmatrix}0&0&E+BK\\(*)&Z&-D_1K\\(*)&(*)&P^{-1}\end{bmatrix}\begin{bmatrix}0\\0\\I\end{bmatrix}\geq 0.$$

By the projection lemma, there exist matrices $$M$$ and $$ H$$ of appropriate dimensions such that:  

$$\begin{bmatrix}0&0&E+BK\\(*)&Z&-D_1K\\(*)&(*)&P^{-1}\end{bmatrix}+\begin{bmatrix}A\\-C_1\\0\end{bmatrix}\begin{bmatrix}M&H\end{bmatrix}\begin{bmatrix}I&0&0\\0&I&0\end{bmatrix}+{\rm sym}(*)\geq 0.$$

Multiplying the obtained inequality on the left by $${\rm diag}(I,I,F^T)$$ and on the right by its transpose, and recalling that $$X=F^TP^{-1}F$$, the inequality (3.17) follows. This completes the proof. □

Remark 3.1

A possible extension of this work is to minimize a quadratic criterion over the whole output $$z_2=[z;z_1]$$; the corresponding LMI conditions are as follows:

There exist positive definite symmetric matrices $$X$$ and $$Z$$, matrices $$F, M, H$$ and $$Y$$ of appropriate dimensions solutions of the following optimization problem:  

$$\min_{X,Y,F,M,H,Z}\gamma\qquad{\rm trace}\,Z<\gamma$$
$$\begin{bmatrix}AM+M^TA^T&AH-M^TC_1^T&EF+BY\\(*)&Z-C_1H-H^TC_1^T&-D_1Y+C_2F\\(*)&(*)&X\end{bmatrix}\geq 0$$
$$\begin{bmatrix}F^TA^T+AF&X+F^TA^T-EF-BY&AX_0\\(*)&-F^TE^T-Y^TB^T-EF-BY&AX_0\\(*)&(*)&-X_0\end{bmatrix}\leq 0.$$

The optimal gain is given by: $$K = YF^{ - 1}$$ and $$J_{\rm opt} = \gamma _{\rm opt}.$$

3.3. Extension to uncertain systems

This paragraph extends the results developed in the previous section to uncertain descriptor systems with polytopic coefficient matrices. Let us consider now that system (1) is uncertain and that the matrices $$E$$, $$A$$ and $$B$$ are constant but unknown, belonging to the following classes:  

$$\mathcal{E}=\Big\{E(\theta):E(\theta)=\sum_{i=1}^{N_E}\theta_iE_i,\ \sum_{i=1}^{N_E}\theta_i=1,\ \theta_i\geq 0\Big\},$$
$$\mathcal{A}=\Big\{A(\beta):A(\beta)=\sum_{j=1}^{N_A}\beta_jA_j,\ \sum_{j=1}^{N_A}\beta_j=1,\ \beta_j\geq 0\Big\},$$
$$\mathcal{B}=\Big\{B(\zeta):B(\zeta)=\sum_{k=1}^{N_B}\zeta_kB_k,\ \sum_{k=1}^{N_B}\zeta_k=1,\ \zeta_k\geq 0\Big\}.$$
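For intuition, membership in these classes can be sampled numerically: any convex combination of the vertex matrices is an admissible plant, so a candidate robust gain can be screened on a grid of combinations before solving the vertex LMIs. A sketch with hypothetical vertices (the gain $$K$$ is assumed, not computed here; sampling is only a sanity check, not a proof, which only the vertex LMIs of the theorems below provide):

```python
import numpy as np
from itertools import product

# Hypothetical vertex matrices (illustrative values only).
E1 = np.array([[1.0, 0.0], [0.0, 0.0]])
E2 = np.array([[1.1, 0.0], [0.0, 0.0]])
A1 = np.array([[1.0, 1.0], [2.0, 1.0]])
A2 = np.array([[1.1, 1.0], [2.0, 1.1]])
B = np.array([[1.0], [1.0]])             # single B vertex for simplicity
K = np.array([[-1.0, -1.0]])             # hypothetical robust gain

stable = []
for t, b in product(np.linspace(0, 1, 5), repeat=2):
    E = t * E1 + (1 - t) * E2            # E(theta) in the polytope
    A = b * A1 + (1 - b) * A2            # A(beta) in the polytope
    Ac = np.linalg.solve(E + B @ K, A)   # derivative-feedback closed loop
    stable.append(np.all(np.linalg.eigvals(Ac).real < 0))

assert all(stable)
```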

To extend the result of Theorem 3.2 to this polytopic system, we give the following theorem:

Theorem 3.4

If there exist positive definite symmetric matrices $$X_{ijk}$$ and $$Z$$ and matrices $$F$$ and $$Y$$ of appropriate dimensions, solutions of the following optimization problem:  

(3.20)
$$\min_{X_{ijk},Y,F,Z}\gamma$$

(3.21)
$${\rm trace}\,Z<\gamma$$

(3.22)
$$\begin{bmatrix}Z&CF-DY\\(*)&X_{ijk}\end{bmatrix}\geq 0$$

(3.23)
$$\begin{bmatrix}F^TA_j^T+A_jF&X_{ijk}+F^TA_j^T-E_iF-B_kY&A_jX_0\\(*)&-F^TE_i^T-Y^TB_k^T-E_iF-B_kY&A_jX_0\\(*)&(*)&-X_0\end{bmatrix}\leq 0,$$
$$i=1,\ldots,N_E,\quad j=1,\ldots,N_A,\quad k=1,\ldots,N_B.$$
The optimal gain is given by:  
(3.24)
$$K=YF^{-1}\quad{\rm and}\quad J_{\rm rob}=\int_0^\infty z^T(t)z(t)\,{\rm d}t\leq\gamma_{\rm opt}.$$

Proof

The proof follows by simple convexity arguments; $$\sum_{i=1}^{N_E}\sum_{j=1}^{N_A}\sum_{k=1}^{N_B}\theta_i\beta_j\zeta_kX_{ijk}$$ defines a parameter-dependent Lyapunov matrix for the closed-loop system. □

Theorem 3.3 can also be extended to the case of uncertain systems. The result is stated in the following theorem:

Theorem 3.5

If there exist positive definite symmetric matrices $$X_{ijk}$$ and $$Z$$ and matrices $$F$$, $$M$$, $$H$$ and $$Y$$ of appropriate dimensions, solutions of the following optimization problem:  

(3.25)
$$\min_{X_{ijk},Y,F,M,H,Z}\gamma$$

(3.26)
$${\rm trace}\,Z<\gamma$$

(3.27)
$$\begin{bmatrix}A_jM+M^TA_j^T&A_jH-M^TC_1^T&E_iF+B_kY\\(*)&Z-C_1H-H^TC_1^T&-D_1Y\\(*)&(*)&X_{ijk}\end{bmatrix}\geq 0$$

(3.28)
$$\begin{bmatrix}F^TA_j^T+A_jF&X_{ijk}+F^TA_j^T-E_iF-B_kY&A_jX_0\\(*)&-F^TE_i^T-Y^TB_k^T-E_iF-B_kY&A_jX_0\\(*)&(*)&-X_0\end{bmatrix}\leq 0,$$
$$i=1,\ldots,N_E,\quad j=1,\ldots,N_A,\quad k=1,\ldots,N_B.$$

The optimal gain is given by:  

(3.29)
$$K=YF^{-1}\quad{\rm and}\quad J_{\rm rob}=\int_0^\infty z_1^T(t)z_1(t)\,{\rm d}t\leq\gamma_{\rm opt}\quad{\rm for\ all}\ E\in\mathcal{E},\ A\in\mathcal{A}\ {\rm and}\ B\in\mathcal{B}.$$

Proof

The proof follows by simple convexity arguments; $$\sum_{i=1}^{N_E}\sum_{j=1}^{N_A}\sum_{k=1}^{N_B}\theta_i\beta_j\zeta_kX_{ijk}$$ defines a parameter-dependent Lyapunov matrix for the closed-loop system. □

Remark 3.2

For ordinary systems, the use of state derivative feedback, with or without uncertainties, has been well developed in recent years. An important approach to stabilizability and stability robustness was proposed in Michiels et al. (2009), where the fragility of the stability of the system, caused by small modelling and implementation errors, is discussed, and a solution to the robustness problem based on the inclusion of a low-pass filter is proposed. In particular, if $${\rm det}(-A)<0$$, which holds when $$A$$ has an odd number of real positive eigenvalues, the system cannot be robustly stabilized using state derivative feedback.

The problem of latency phenomena, due to a small delay in the feedback loop, is also of crucial importance. In Vyhlidal et al. (2011), it is shown that applying a state derivative feedback controller to stabilize retarded systems results in a closed-loop system of neutral type. To resolve this problem, the stabilization approach is based on minimizing the spectral abscissa of the closed-loop system over the controller parameter space.

Example 3.6

In order to validate the proposed approaches, we consider the singular system with the following parameters:  

$$E=\begin{bmatrix}0&0&0\\1&0.7&0\\0&1&0\end{bmatrix},\quad A=\begin{bmatrix}1&4.5&0.5\\3&7&8\\5&3&6\end{bmatrix},\quad B=\begin{bmatrix}1\\0.3\\1\end{bmatrix},$$
$$C=\begin{bmatrix}1&0&1\end{bmatrix},\quad D=1,\quad D_1=1,\quad C_1=\begin{bmatrix}0&1&1\end{bmatrix}.$$

To solve Problem 2, which guarantees the stabilizability of the system and minimizes the criterion $$J$$, an application of Theorem 3.2 leads to the following optimal gain $$K$$, obtained from the LMI optimization problem or the ARE:  

$$K=\begin{bmatrix}0.9479&0.6144&0.9982\end{bmatrix}$$
with a cost $$J= 0.116$$.

On the other hand, for Problem 3, Theorem 3.3 leads to the optimal gain  

$$K_1=\begin{bmatrix}1.0501&0.3062&0.0414\end{bmatrix}$$
with a cost $$J_1=0.0926$$.

We now consider the same system affected by the following uncertainties:  

$$E_1=\begin{bmatrix}0&0&0\\1&1&0\\0&1&0\end{bmatrix},\quad A_1=\begin{bmatrix}1&2.5&0.5\\2&7&8\\5&3&6\end{bmatrix},\quad B_1=\begin{bmatrix}1\\0.1\\1\end{bmatrix},$$
$$E_2=\begin{bmatrix}0&0&0\\1&0.5&0\\0&1&0\end{bmatrix},\quad A_2=\begin{bmatrix}1&2.5&0.5\\5&7&8\\5&3&6\end{bmatrix},\quad B_2=\begin{bmatrix}1\\0.5\\1\end{bmatrix}.$$

Theorems 3.4 and 3.5 give the following robust gains, respectively:  

$$K_{\rm rob1}=\begin{bmatrix}1.0187&0.8955&1.2701\end{bmatrix},\quad K_{\rm rob2}=\begin{bmatrix}1.5753&1.1201&1.0000\end{bmatrix},$$
with respective costs $$J_1=0.3496$$ and $$J_2=0.1066$$. Figures 1 and 2 show the state and state derivative trajectories of the nominal system when the gains $$K_{\rm rob1}$$ and $$K_{\rm rob2}$$ are applied.
Fig 1.

Evolution of the system states.


Fig 2.

Evolution of the system states derivative.


We remark from these figures that the criterion involving the state leads to better behaviour than the one involving the state derivatives. This justifies the use of a criterion explicitly expressed in terms of the state in some applications.

4. Conclusion

In this paper, the problem of controlling a linear singular system when the only measurable signals are the derivatives of the state has been studied. The optimal feedback control is developed for two quadratic criteria: the first one involves a quadratic function of the state derivative, while the second considers a quadratic function of the state. For this reason, the work developed in this paper is more general than other works, such as the approach developed in Abdelaziz & Valasek (2005a): the criterion considered here contains both the state $$x(t)$$ and its derivative $$\dot{x}(t)$$.

Indeed, considering the criterion:  

$$J=\int_0^\infty z^T(t)z(t)\,{\rm d}t,\quad{\rm where}\ z(t)=C\dot{x}(t)+Du(t),$$
we have
$$J=\int_0^\infty\big(\dot{x}^TC^TC\dot{x}+u^TD^TDu+\dot{x}^TC^TDu+u^TD^TC\dot{x}\big)\,{\rm d}t.$$

In Abdelaziz & Valasek (2005a), however, the criterion concerns only the state derivative $$\dot{x}(t)$$. Control design methods expressed through algebraic Riccati equations and LMI optimization problems are developed in this paper. The LMI setting is better adapted to take into account other specifications such as pole placement or structural constraints.

Using LMIs in control design also has the advantage of allowing polytopic uncertainties to be introduced into the problem. An example shows the potential of the proposed methods and particularly the interest of considering a quadratic function of the state in the quadratic criterion. Some interesting problems need further development; among them, we can cite, for example, the perturbation rejection problem and the dynamic output feedback problem. They will be investigated in the near future.

References

Abdelaziz, T. H. S. & Valasek, M. (2004) Pole-placement for SISO linear systems by state-derivative feedback. IEEE Proc. Control Theory Appl., 151, 377–385.

Abdelaziz, T. H. S. (2010) Optimal control using derivative feedback for linear systems. Proc. Inst. Mech. Eng., 224, 185–202.

Abdelaziz, T. H. S. & Valasek, M. (2005a) State derivative feedback by LQR for linear time invariant systems. Proc. of the 16th IFAC World Congress, Czech Republic, pp. 933–938.

Abdelaziz, T. H. S. & Valasek, M. (2005b) Direct algorithm for pole placement by state derivative feedback for multi-input linear systems — nonsingular case. IEEE Proc. Control Theory Appl., 41, 637–660.

Abdelaziz, T. H. S. & Valasek, M. (2005c) Eigenstructure assignment by proportional-plus-derivative feedback for second-order linear control systems. Kybernetika, 4, 661–676.

Anderson, B. D. O. & Moore, J. B. (1990) Optimal Control: Linear Quadratic Methods. Englewood Cliffs, New Jersey: Prentice Hall.

Aouani, N., Salhi, S., Garcia, G. & Ksouri, M. (2009) New robust stability and stabilizability conditions for linear parameter time varying polytopic systems. 3rd International Conference on Signals, Circuits and Systems (SCS), pp. 1–6.

Araujo, M. J., Castro, C. A., Silva, F. G. S., Santos, E. T. F. & Dorea, C. T. E. (2007) Comparative study on state feedback and state derivative feedback in linear time invariant systems. 3rd IFAC Symposium on System, Structure and Control, vol. 3.

Assunao, E., Teixeira, M. C. M., Faria, F. A., Silva, N. A. P. & Cardim, R. (2007) Robust state-derivative feedback LMI-based designs for multivariable linear systems. Int. J. Control, 80, 1260–1270.

Bedoui, N., Salhi, S. & Ksouri, M. (2009) Robust stabilization approach and $H_\infty$ performance via static output feedback for a class of nonlinear systems. Math. Probl. Eng., 2009, 1–22.

Bender, D. J. & Laub, A. J. (1987) The linear quadratic optimal regulator for descriptor systems. IEEE Trans. Automat. Control, 32, 672–688.

Ben Attia, S., Salhi, S., Ksouri, M. & Bernussou, J. (2009) Improved LMI formulation for robust dynamic output feedback controller design of discrete-time switched systems via switched Lyapunov function. IEEE Conference on Signals, Circuits and Systems.

Boukas, T. K. & Habetler, T. G. (2004) High performance induction motor speed control using exact feedback linearization with state and state derivative feedback. IEEE Trans. Power Electron., 19, 1022–1028.

Bunse-Gerstner, A., Byers, R., Mehrmann, V. & Nichols, N. K. (1992) Regularization of descriptor systems by derivative and proportional state feedback. SIAM J. Matrix Anal. Appl., 13, 46–67.

Bunse-Gerstner, A., Byers, R., Mehrmann, V. & Nichols, N. K. (1999) Feedback design for regularizing descriptor systems. Linear Algebra Appl., 229, 119–151.

Cardim, R., Teixeira, M. C. M., Assunao, E. & Faria, F. A. (2008) Control designs for linear systems using state-derivative feedback. Systems Structure and Control. In-Tech, Vienna, Austria, pp. 1–28.

Cobb, D. (1983) Descriptor variable systems and optimal state regulation. IEEE Trans. Automat. Control, 28, 601–611.

Duan, Y. F., Ni, Y. Q. & Ko, J. M. (2005) State-derivative feedback control of cable vibration using semiactive magnetorheological dampers. Comput. Aided Civ. Inf. Eng., 20, 431–449.

Duan, G. R. & Zhang, X. (2003) Regularizability of linear descriptor systems via output plus partial state derivative feedback. Asian J. Control, 5, 334–340.

Faria, F. A., et al. (2009) Robust state-derivative pole placement LMI-based designs for linear systems. Int. J. Control, 82, 1–12.

Jing, H. (1994) Eigenstructure assignment by proportional-derivative state feedback in singular systems. Syst. Control Lett., 22, 47–52.

Kuo, Y. C., Lin, W. W. & Xu, S. F. (2004) Regularization of linear discrete-time periodic descriptor systems by derivative and proportional state feedback. SIAM J. Matrix Anal. Appl., 25, 1046–1073.

Kwak, S. K., Washington, G. & Yedavalli, R. K. (2002) Acceleration-based vibration control of distributed parameter systems using the "reciprocal state-space framework". J. Sound Vib., 251, 543–557.

Lewis, F. L. & Syrmos, V. L. (1991) A geometric theory for derivative feedback. IEEE Trans. Automat. Control, 36, 1111–1116.

Michiels, W., et al. (2009) Stabilizability and stability robustness of state derivative feedback controllers. SIAM J. Control Optim., 47, 3100–3117.

Reithmeier, E. & Leitmann, G. (2003) Robust vibration control of dynamical systems based on the derivative of the state. Arch. Mech., 72(11), 856–864.

Vyhlidal, T., Michiels, W. & Mcgahan, P. (2011) Synthesis of strongly stable state-derivative controllers for a time-delay system using constrained non-smooth optimization. IMA J. Math. Control Inf., 27, 437–455.

Wu, A. G. & Duan, G. R. (2007) Design of PD observers in descriptor linear systems. Int. J. Control Autom. Syst., 26(5), 93–98.

Zaghdoud, R., Salhi, S. & Ksouri, M. (2013a) Stabilization problem of singular systems via proportional plus derivative state feedback. International Conference on Control, Engineering and Technology, IPCO, vol. 2, pp. 6–9.

Zaghdoud, R., Salhi, S. & Ksouri, M. (2013b) Optimal state derivative feedback control for singular systems. 14th International Conference on Sciences and Techniques of Automatic Control and Computer Engineering (STA), pp. 110–113.

Zhu, J., Ma, S. & Cheng, Z. (1999) Singular LQ problem for descriptor systems. 38th IEEE Conference on Decision and Control, Phoenix, AZ, vol. 5, pp. 4098–4099.