Application of PID optimization control strategy based on particle swarm optimization (PSO) for battery charging system

The battery charging process has nonlinear and hysteresis properties. PID (Proportional-Integral-Derivative) control is a conventional control method for the battery charging process, and its control effect is determined by the PID control parameters K_p, K_i and K_d. Traditional PID tuning methods struggle to find appropriate parameters, which reduces the battery charging efficiency. In this paper, particle swarm optimization (PSO) is used to optimize the PID parameters. To address the defects of basic PSO, such as slow convergence speed, low convergence precision and a tendency toward premature convergence, a modified particle swarm optimization algorithm is proposed, and the optimized PID parameters are applied to the battery charging control system. The experimental results show that the battery charging process has better dynamic performance, the charging efficiency of the battery increased from 86.44% to 91.47%, and the charging temperature rise dropped by 1 °C.


INTRODUCTION
PID control is one of the earliest developed control strategies. It has many advantages, such as simple algorithms, high reliability and good robustness, and has been widely applied in the field of industrial process control [1,2]. The control performance of the PID controller is directly related to the tuning of the controller parameters K_p, K_i and K_d [3]. The battery charging circuit is nonlinear and exhibits hysteresis. The tuning process of the conventional PID method is complicated: the PID parameters must be corrected according to empirical formulas and repeated experiments, and it is difficult to realize ideal tuning. The resulting control system has a long response time and a large overshoot, and cannot meet current control requirements. Accurate control of the charging parameters (charge voltage and current) of the lithium battery can shorten the charging time, improve the charging efficiency of the battery, prolong its service life and reduce cost. Therefore, PID parameter tuning of the battery charging circuit is of great significance.
Particle swarm optimization (PSO) is an evolutionary algorithm proposed by Kennedy and Eberhart in 1995 [4]. Compared with other optimization algorithms, it preserves a population-based global search strategy, and its velocity-position model is easy to operate. Its memory function enables it to track the current search dynamically and to adjust the search strategy according to the search conditions. PSO is an efficient parallel algorithm; its concept is concise, clear and easy to implement, and it has been widely applied in scientific computing and engineering [5,6].
Aiming at the defects of the basic PSO, such as slow convergence speed and premature convergence, this paper improves the algorithm; the improved PSO is used to optimize the three PID parameters of the battery charging circuit, and the new PID parameters are applied to the battery charging system. The system then has better transient performance, and experiments show that the battery charging efficiency is significantly improved.

BASIC PSO ALGORITHM
In 1995, Dr Kennedy, a social psychologist, and Dr Eberhart, an electrical engineer, inspired by the Boid model, proposed the PSO algorithm after further research on the model and applied it to optimization. The PSO algorithm is based on a population of many particles. Each particle has two attributes, velocity and position. The position represents a solution in the solution space, and the velocity determines the direction and distance of the particle's next move when solving optimization problems. Each particle also has a fitness function to evaluate its current position. The algorithm initializes a group of random particles and then finds the optimal or a near-optimal solution by iteration [7,8].
In each iteration, particles update their velocity and position by tracking the individual extremum and the global extremum. The velocity and position of each particle are updated according to the following equations [9]:

v_i(t+1) = v_i(t) + c_1 r_1 (p_best - x_i(t)) + c_2 r_2 (g_best - x_i(t))    (1)

x_i(t+1) = x_i(t) + v_i(t+1)    (2)

where t is the current generation; c_1 and c_2 are learning factors, usually set to 2; the random factors r_1 and r_2 are random numbers uniformly distributed in [0, 1]; p_best is the best solution found by the particle itself, i.e. the individual extremum; and g_best is the best solution found so far by the whole swarm, i.e. the global extremum. To keep particles inside the search range, the velocity is usually restricted to v ∈ [-v_max, +v_max]. If v_max is too large, particles may fly past good solutions; if it is too small, the search may be insufficient and particles may fail to escape local optima. The iteration is terminated when the maximum number of iterations is reached or the swarm finds the global extremum. Shi et al. added the inertia coefficient w to strengthen the ability of particles to jump out of local extrema, so that Equation (1) becomes

v_i(t+1) = w v_i(t) + c_1 r_1 (p_best - x_i(t)) + c_2 r_2 (g_best - x_i(t))    (3)

w = w_max - (w_max - w_min) t / T_max    (4)

where w_max and w_min are the maximum and minimum inertia coefficients, respectively, and t and T_max are the current and maximum numbers of iterations, respectively. The inertia weight w preserves the particles' motion inertia, giving the swarm a tendency to expand the search space and explore new regions. Researchers now usually call Equations (1) and (2) the basic PSO, and Equations (2)-(4) the standard PSO.
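As a concrete illustration, the standard velocity and position update can be sketched in Python. This is a minimal sketch: the function name pso_step, the one-dimensional setting and the default parameter values are illustrative, not taken from the paper.

```python
import random

def pso_step(positions, velocities, pbest, gbest,
             w=0.7, c1=2.0, c2=2.0, vmax=5.0):
    """One standard-PSO update: velocity from the inertia-weighted rule,
    position from x(t+1) = x(t) + v(t+1)."""
    new_pos, new_vel = [], []
    for x, v, p in zip(positions, velocities, pbest):
        r1, r2 = random.random(), random.random()
        v_new = w * v + c1 * r1 * (p - x) + c2 * r2 * (gbest - x)
        v_new = max(-vmax, min(vmax, v_new))  # clamp to [-vmax, +vmax]
        new_vel.append(v_new)
        new_pos.append(x + v_new)
    return new_pos, new_vel
```

With w fixed at 1 this reduces to the basic PSO of Equations (1) and (2).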

IMPROVEMENT OF PSO
In basic PSO, the inertia weight and acceleration constants are important parameters. They are usually set as constants, but constants cannot balance local and global search, which leads to slow convergence speed, low convergence precision and convergence to local optima. A new PSO algorithm is proposed in this paper to solve these problems [10]. The convergence speed and accuracy of the algorithm are improved by using a nonlinearly decreasing inertia weight, improving the acceleration constants and applying an adaptive mutation operation to the global extremum; at the same time, the algorithm avoids being trapped in local optimal solutions and obtains a more accurate solution. The concrete improvements are as follows.

Inertia weight adjustment method
In PSO, the inertia weight w determines the influence of the particle's previous flight speed on its current flight speed. Selecting an appropriate inertia weight balances the global and local search abilities and improves the performance of the algorithm. Generally, a larger inertia weight in the early search benefits the global search, and a smaller inertia weight in the late search benefits the local search. In this paper, a dynamically changing, nonlinear inverse-tangent decreasing inertia weight is used, as in Equation (5).
where t is the current number of iterations and t max is the maximum number of iterations.
In the PSO algorithm, the acceleration constants c_1 and c_2 determine the influence of the particle's own experience and of other particles' experience on the particle trajectory. Based on literature [11], an improvement is made to improve the accuracy of PSO: c_1 and c_2 change dynamically and nonlinearly as the number of iterations changes, as in Equations (6) and (7). In the early stage of the search, the particle has a large self-learning ability and a small social-learning ability, strengthening the global searching ability, because c_1 takes a larger value and c_2 a smaller value. In the later stage of the algorithm, the particle has a large social-learning ability and a small self-learning ability, strengthening the local searching ability, because c_1 takes a smaller value and c_2 a larger value.
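Since Equations (5)-(7) themselves are not reproduced in this text, the following Python sketch assumes one plausible arctan-shaped schedule with the qualitative behavior described above: w decreasing from w_max to w_min, c_1 decreasing and c_2 increasing. The exact functional forms and the bounds c_max, c_min are assumptions, not the paper's equations.

```python
import math

def inertia_weight(t, t_max, w_max=0.9, w_min=0.4):
    """Assumed arctan-shaped nonlinear decrease from w_max (t = 0)
    to w_min (t = t_max); (4/pi)*atan(t/t_max) rises from 0 to 1."""
    return w_max - (w_max - w_min) * (4 / math.pi) * math.atan(t / t_max)

def learning_factors(t, t_max, c_max=2.5, c_min=0.5):
    """c_1 decays from c_max to c_min while c_2 rises from c_min to c_max,
    shifting the swarm from self-learning to social learning."""
    frac = (4 / math.pi) * math.atan(t / t_max)
    c1 = c_max - (c_max - c_min) * frac
    c2 = c_min + (c_max - c_min) * frac
    return c1, c2
```

The arctan shape decays faster early and flattens late, one common way to keep a strong global search phase short.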

Adjustment method of local optimal solution
When premature convergence appears, g_best must be a local optimal solution. If a Cauchy mutation is applied to g_best, the direction of the particle swarm changes and the particles move into new regions to search, so that they may find a new p_best and g_best; the other particles are thereby driven to escape from the local extremum, and as this cycle repeats, the algorithm can find the global optimal solution. This is the principle of applying the Cauchy mutation to the current local extremum [12]. As analyzed earlier, the premise of the mutation operation on the global extremum is to detect the convergence state of the swarm and judge whether g_best is the global extremum or a local extremum; only a local extremum g_best is then mutated in the Cauchy way. PSO converges either locally or globally, and in both cases the particles 'aggregate'. The state of the particle swarm can be tracked by observing the overall change in the fitness of all particles in the swarm. In order to describe the state of the particle swarm quantitatively, the fitness variance is defined as

sigma^2 = (1/n) * sum_{i=1}^{n} (f_i - f_avg)^2

where n is the number of particles in the swarm, f_i is the fitness of the ith particle, f_avg is the current average fitness of the swarm, and sigma^2 is the group fitness variance. Convergence of the particles means that each particle eventually stays at a fixed position P within the search space, i.e. lim_{t -> +infinity} x_i(t) = P [13].
The definitions of fitness variance and particle convergence show that sigma^2 reflects the degree of aggregation of the particle swarm: the smaller the value, the more convergent the population; conversely, the larger the value, the more the swarm is in a random search stage. As the algorithm continues, the particle swarm keeps converging, and PSO achieves global or local convergence when sigma^2 equals zero or approaches zero (or falls below a certain threshold) [14]. Whether g_best is a local or the global optimal solution can be judged by comparing g_best with f_best (the theoretical or empirical optimal value). If premature convergence occurs, i.e. the current global extremum g_best is a local optimal solution, g_best is subjected to the Cauchy mutation.
For an individual X_i = (X_i1, X_i2, ..., X_in), the operating formula of the traditional Cauchy mutation is

X'_ij = X_ij + eta * C(0,1),  j = 1, 2, ..., n

where eta is a constant that controls the mutation step size, and C(0,1) is a random number generated by the Cauchy distribution function with scale parameter T = 1. Currently, the Cauchy mutations introduced into PSO are all implemented according to Equation (12), with the step-size parameter eta held at a fixed constant. The state of the particle swarm can be tracked by studying the overall change in the fitness of all particles. When the swarm is trapped in a local optimum, a mutation with a larger step size can help the particles jump out; when the particles are close to convergence and are searching the neighborhood of the optimal solution, a mutation with a smaller step size can accelerate convergence. The change of the average speed of the population is consistent with these convergence characteristics. In order to overcome the shortcoming of the traditional Cauchy mutation's fixed step size, an adaptive Cauchy mutation method is proposed in this paper. It uses the average speed of the swarm as the control variable of a variable step size to mutate the local extremum, which improves the effectiveness of the algorithm. In order to describe the overall state of the particle swarm quantitatively, the average speed is defined below.
v_bar = (1/(n*D)) * sum_{i=1}^{n} sum_{j=1}^{D} |v_ij|

where n is the number of particles in the swarm, D is the dimension of the search space, and v_ij is the velocity of the ith particle in the jth dimension. As stated above, when premature convergence of the PSO algorithm appears, a mutation disturbance is added to the current global extremum g_best, which performs the following mutation operation:

g'_best,j = g_best,j + v_bar * C(0,1),  j = 1, 2, ..., D

where C(0,1) is a random number generated by the Cauchy distribution function, and the result is confined to the problem domain (X_min, X_max). Integrating the three improvements above, this paper proposes an improved PSO algorithm, which dynamically adjusts the inertia weight and the learning factors according to the current state of the optimization process and carries out an adaptive mutation of the global extremum to avoid local optimal solutions. The improved PSO algorithm has strong global convergence ability in the early stage and strong local convergence ability in the later stage. To a certain extent, it solves the problems of slow convergence speed, low precision and easy 'prematurity' of the basic PSO, improving the convergence speed, precision and search success rate of the algorithm; its performance is tested by the experiments below.
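The convergence test and the adaptive mutation can be sketched as follows. This is a hedged sketch: the function names and the clipping to [x_min, x_max] are implementation assumptions, and the Cauchy C(0,1) sample uses the standard inverse-CDF identity tan(pi*(u - 1/2)).

```python
import math
import random

def fitness_variance(fits):
    """Group fitness variance sigma^2; small values mean the swarm
    has aggregated (converged locally or globally)."""
    favg = sum(fits) / len(fits)
    return sum((fi - favg) ** 2 for fi in fits) / len(fits)

def cauchy_mutate_gbest(gbest, v_avg, x_min, x_max):
    """Adaptive Cauchy mutation of the global extremum: the mutation
    step is scaled by the swarm's average speed v_avg, then clipped
    to the problem domain."""
    mutated = []
    for g in gbest:
        c = math.tan(math.pi * (random.random() - 0.5))  # Cauchy C(0,1) sample
        mutated.append(min(x_max, max(x_min, g + v_avg * c)))
    return mutated
```

Because v_avg shrinks as the swarm converges, the mutation step is automatically large during stagnation and small near convergence, matching the adaptive idea described above.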

PERFORMANCE TEST OF IMPROVED PSO
In order to verify the feasibility and effectiveness of the improved algorithm, five classical functions were used to perform optimization tests and compare algorithm performance. These functions represent different optimization problems, from single-peak to multi-peak functions and from low-dimensional to high-dimensional functions. The test functions are representative, and each can probe a specific capability of the algorithm. The programs were written in MATLAB R2012a.

Selection of test functions
F1: Sphere function. Sphere is a unimodal (single-peak) function; it attains its minimum at x_i = 0.
F2: Rosenbrock function. Rosenbrock is a non-convex, ill-conditioned function; it attains its minimum at x_i = 1.
F5: Schaffer function. The global minimum of the Schaffer function is at (0, 0), surrounded by infinitely many near-optimal local minima at a distance of about 3.14 from the global minimum. The strong oscillation of the function makes global optimization difficult.
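For reference, the three named benchmarks can be written out directly. These are the standard textbook definitions; F5 is taken to be the common Schaffer F6 form, which matches the description of a ring of near-optimal points about 3.14 from the origin.

```python
import math

def sphere(x):
    """F1: unimodal, global minimum 0 at x_i = 0."""
    return sum(xi ** 2 for xi in x)

def rosenbrock(x):
    """F2: non-convex, ill-conditioned 'banana' valley,
    global minimum 0 at x_i = 1."""
    return sum(100 * (x[i + 1] - x[i] ** 2) ** 2 + (1 - x[i]) ** 2
               for i in range(len(x) - 1))

def schaffer(x, y):
    """F5 (Schaffer F6): global minimum 0 at (0, 0), ringed by
    strongly oscillating local minima."""
    r2 = x * x + y * y
    return 0.5 + (math.sin(math.sqrt(r2)) ** 2 - 0.5) / (1 + 0.001 * r2) ** 2
```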

Performance test
In this experiment, the population size of the particle swarm is n = 30, c_1 = c_2 = 2, w_max = 0.9, w_min = 0.4, the maximum speed is v_max = 5 and the maximum number of iterations is 500. The optimization objective of all algorithms is to minimize the function value; each function is run 50 times and the results are averaged. The purpose of the experiment is to compare the performance of the improved PSO algorithm proposed in this paper with the basic PSO algorithm. Two evaluation indexes are introduced to characterize the performance of the improved PSO algorithm: the number of successful runs (search success rate) and the convergence speed. The simulation results and statistics are shown in Table 1.
It can be seen from the above table that the performance indexes of the improved PSO algorithm, such as the search success rate and the convergence time, are greatly improved over the standard particle swarm algorithm, which proves the feasibility and superiority of the algorithm. The method accelerates the convergence of PSO and improves its precision. At the same time, it effectively alleviates the 'premature' problem of the basic PSO algorithm, which is easily trapped in local optima. Figure 1 is a block diagram of the PID parameters optimized by the particle swarm algorithm. The input is a given charging current or voltage, the output is the charging current or charging voltage of the battery, and the loop adopts a PID control method to ensure that the charging system has good stability and dynamic response performance.

OPTIMIZATION OF PID PARAMETERS IN CHARGING CIRCUIT
From the perspective of optimization, PID controller parameter tuning uses the optimization characteristics of the PSO algorithm, taking k_p, k_i and k_d as the basic particles; the three-parameter groups evolve automatically in the solution space toward the global optimum, at which point the system performance is optimal. In this paper, the PSO algorithm first performs offline learning and is then connected to the control system.

The establishment of charging circuit transfer function
The equivalent model of the battery charging system is shown in Figure 2.
R_r is the internal resistance of the rectifier module; R_a is the internal resistance of the battery; U_i is the ideal voltage of the rectifier device; and U_c is the charging voltage of the battery. From this model, the transfer function of the control system is obtained as Equation (20). In the thyristor rectifier module, the dynamic trigger-rectification part is a pure-delay amplification element; the lag is due to the uncontrollable interval of the rectifier module. Within a certain operating range, a linear transformation is applied between the rectifier output U_I and the control voltage U_c, with U_I lagging behind U_c, where U_I is the rectifier output voltage, U_c is the control voltage and K_s is the gain relating U_I to U_c. The battery charge control transfer function then follows. According to the relevant information and literature, L = 50 mH, C = 5 F, R_r ≈ 0.8 Ω, R_a ≈ 0.2 Ω, K_s = 5, T_s = 0.067 s, T_1 = 0.028 s and T = T_1 + T_s = 0.095 s.
Then, the final controlled transfer function is given by

G_1(s) = e^(-0.095s) / (0.8s + 1)
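As a check on this first-order-plus-dead-time model, its open-loop unit-step response can be simulated with a simple forward-Euler sketch; the function name and the step sizes below are illustrative choices, not from the paper.

```python
def fopdt_step_response(K=1.0, T=0.8, L=0.095, dt=0.001, t_end=5.0):
    """Unit-step response of G(s) = K * exp(-L*s) / (T*s + 1),
    integrated by forward Euler with step dt."""
    n_delay = int(round(L / dt))
    y, out = 0.0, []
    for k in range(int(t_end / dt)):
        u = 1.0 if k >= n_delay else 0.0  # step delayed by L seconds
        y += dt * (K * u - y) / T         # plant: T*dy/dt = K*u - y
        out.append(y)
    return out
```

The response stays at zero for 0.095 s and then rises exponentially toward the DC gain K with time constant 0.8 s, as expected for this model.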

PID parameter coding
S is the number of particles in the population P. The position vector of each particle consists of the three control parameters of the PID controller, so the dimension of the position vector is D = 3. The population can be represented by an S × D matrix.
Considering the diversity of the control system, the range of values of each parameter is determined by the actual problem, and the initial population can be randomly generated within the allowable range of values.
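A minimal sketch of this encoding, assuming the parameter range [0, 15] that the simulation section later uses for k_p, k_i, k_d:

```python
import random

def init_population(s, bounds):
    """Random S x D population; each row is one candidate
    (k_p, k_i, k_d) drawn uniformly within its allowed range."""
    return [[random.uniform(lo, hi) for (lo, hi) in bounds]
            for _ in range(s)]
```

For example, init_population(30, [(0, 15)] * 3) builds the 30 x 3 matrix described above.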

Fitness function selection
The PSO algorithm uses the fitness value to evaluate the merit of an individual (a candidate solution) during the search. In order to obtain satisfactory transient dynamic characteristics, this paper uses the integral of time-weighted absolute error (ITAE) performance index to build the fitness function [15]:

J = ∫_0^∞ t |e(t)| dt

The smaller the fitness value J, the more appropriate the parameters k_p, k_i and k_d, and the better the overall performance of the control system.
where e(t) is the system error, and e(t) = 1 − y(t).
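On sampled simulation data the ITAE index is approximated numerically; the following sketch uses a simple rectangle rule (the helper name itae is illustrative):

```python
def itae(times, errors):
    """ITAE index J = integral of t * |e(t)| dt, approximated by a
    right-endpoint rectangle rule on sampled (times, errors) data."""
    j = 0.0
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        j += times[i] * abs(errors[i]) * dt
    return j
```

The t factor weights late errors more heavily, which is why ITAE tuning penalizes long settling times and sluggish responses.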

Flow chart for improved PSO optimizing PID parameters
The flow of the improved PSO algorithm is shown in Figure 3.

EXPERIMENTAL SIMULATION
The battery charging circuit transfer function is G_1(s) = e^(-0.095s) / (0.8s + 1); the Z-N method, the standard PSO method and the improved PSO method are used to tune the PID control parameters. The population size of the particle swarm is set to n = 30, c_1 = c_2 = 2, w_max = 0.9, w_min = 0.4, the maximum speed is v_max = 2, the maximum number of iterations is 500, the convergence accuracy is eps = 10^-4 and the parameters k_p, k_i, k_d ∈ [0, 15]; the input signal is a step signal, and the PID parameters tuned by the three methods are simulated in Matlab/Simulink. The system simulation response curves are shown in Figure 4, and the experimental results are shown in Table 2.
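For readers without Simulink, the closed-loop step response can also be reproduced with a plain discrete-time sketch of a PID controller acting on G_1(s). The gains below are illustrative placeholders, not the Z-N or PSO-tuned values from Table 2.

```python
def simulate_pid(kp, ki, kd, T=0.8, L=0.095, dt=0.001, t_end=5.0,
                 setpoint=1.0):
    """Closed-loop unit-step simulation of a discrete PID controller
    on G_1(s) = exp(-0.095 s) / (0.8 s + 1), via forward Euler."""
    n_delay = int(round(L / dt))
    u_hist = [0.0] * n_delay          # transport-delay buffer (L seconds)
    y, integ, e_prev = 0.0, 0.0, setpoint
    ys = []
    for _ in range(int(t_end / dt)):
        e = setpoint - y
        integ += e * dt
        deriv = (e - e_prev) / dt
        e_prev = e
        u = kp * e + ki * integ + kd * deriv
        u_hist.append(u)
        y += dt * (u_hist.pop(0) - y) / T  # plant: T*dy/dt = u_delayed - y
        ys.append(y)
    return ys
```

With moderate gains (e.g. kp = 3, ki = 2, kd = 0.1) the integral action drives the steady-state error to zero; comparing overshoot and settling time across gain sets mirrors the Figure 4 comparison.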
As shown in Figure 4, comparing the three groups of PID parameter response curves, the performance indexes such as overshoot, error and settling time of the PID system tuned by the improved PSO are significantly reduced, indicating the feasibility and practicality of the improved algorithm. This is further verified by the experiments below.

EXPERIMENTAL VERIFICATION OF BATTERY CHARGING
With a TMS320LF2407 DSP as the control center, an experimental platform was set up to record data such as charging voltage, current, battery terminal voltage and battery temperature rise. The experiment used a 3.6 V/1500 mAh lithium-ion battery charged by the constant-current method. The three groups of PID parameters above were used, and the battery SOC was 0 before each experiment started. Three charging experiments were carried out with the different PID charging parameters. The relevant experimental data are summarized in Table 3, in which charging efficiency = discharge power / charge power × 100%. Figure 5 records the changes in battery terminal voltage.
Based on the above experimental results, the charging control system with the third group of PID parameters is better than the former two in terms of charging efficiency, temperature rise and terminal voltage change during charging, illustrating that the PID parameters tuned by the improved PSO algorithm are better, which proves the feasibility and superiority of the improved particle swarm algorithm.

CONCLUSION
To address the shortcomings of slow convergence speed, low precision and premature convergence, the inertia weight, learning factors and local-optimum handling of the PSO algorithm are improved, and a new improved particle swarm algorithm is proposed in this paper. The algorithm has faster convergence speed and higher accuracy in the optimization process. The improved PSO method was applied to optimize the PID parameters of battery charging. The simulations show that the battery charging process has good dynamic performance, and the experiments show that the PID parameters obtained by the improved algorithm make the battery charging process more efficient. The feasibility and superiority of the algorithm are thus verified.