工程科学与技术   2017, Vol. 49 Issue (3): 129-136
基于忆阻时滞神经网络的耗散研究
张芬1,2, 李智1     
1. 西安电子科技大学 机电工程学院,陕西 西安 710071;
2. 咸阳师范学院 数学与信息科学学院,陕西 咸阳 712000
基金项目: 国家自然科学基金资助项目(61673310;61501388;11501482)
摘要: 针对一类基于忆阻时滞神经网络的耗散问题,提出一种结合倒凸技术和Wirtinger积分不等式的耗散分析方法。首先,应用微分包含和集值映射理论,将忆阻时滞神经网络转化成传统的时滞神经网络;接着,构造含有时滞系数的状态向量2次项和3重积分项的Lyapunov-Krasovskii泛函(LKF),应用倒凸技术和Wirtinger积分不等式估计LKF微分,得到了确保时滞神经网络严格耗散的时滞依赖条件,这些条件可以用线性矩阵不等式形式表示并且易于用Matlab软件实现,并将该方法推广到时滞神经网络的无源分析问题中。在数值例子中,针对不同的时滞变化率上界,与现有文献的最优耗散性能指标进行比较,实验结果表明,本文方法将其提高了5%。另外,在相同时滞条件下,仿真分别给出了神经网络系统有外部输入和无外部输入的状态轨迹,由仿真结果可以看出外部输入的存在的确破坏了系统的稳定性。
关键词: 耗散    忆阻    神经网络    时滞    Lyapunov泛函    
Dissipativity Research on Memristor-based Neural Networks with Time-varying Delays
ZHANG Fen1,2, LI Zhi1     
1. School of Mechano-electronic Eng., Xidian Univ., Xi’an 710071, China;
2. College of Mathematics and Info. Sci., Xianyang Normal Univ., Xianyang 712000, China
Abstract: In order to solve the problem of dissipativity for memristor-based neural networks with time-varying delays, a new method was proposed, which combined a reciprocally convex technique with a Wirtinger-based integral inequality. First, to convert memristive neural networks into conventional neural networks, differential inclusions and set-valued maps were applied. Then, based on the construction of a Lyapunov-Krasovskii functional with a time-delay-coefficient quadratic term of the state vector and a triple integral term, delay-dependent conditions in terms of linear matrix inequalities were obtained to assure that the neural networks are strictly dissipative, and these conditions can be easily solved via Matlab. The derivative of the Lyapunov-Krasovskii functional was estimated by using a reciprocally convex technique and a Wirtinger-based integral inequality. Moreover, the proposed method was extended to investigate the passivity analysis of the considered systems. Finally, comparisons with the available references showed that this method gives an improvement of 5% in the optimal dissipativity performance for various upper bounds of the delay variation rate in numerical examples. In addition, for the same time delay, simulations provided the state trajectories of the neural network system with and without external input, respectively. The simulation results showed that the existence of the external input indeed destroys the stability of the system.
Key words: dissipativity    memristive    neural networks    time delay    Lyapunov functions    

1971年,忆阻(memristor)(记忆(memory)和电阻(resistor)的缩写)首次由Chua[1]从理论上提出,他在研究物理电路时推断,除了电阻、电容、电感3个基本电路元件外,还存在第4个元件即忆阻,它表示磁通与电荷的关系。尽管电阻和忆阻有相同的量纲和许多相同的性质,但电阻的阻值是由流经它的电流决定的,而忆阻的阻值即忆阻值(memristance)是由流经它的电荷确定的,满足函数关系式 $M(q) = \displaystyle\frac{{{\rm{d}}\vartheta }}{{{\rm{d}}q}}$ ,其中, $\vartheta $ 为磁通量。因此,通过测定忆阻的阻值,便可知道流经它的电荷量,从而起到记忆电荷的作用。2008年,HP公司的研究人员[2]做出了纳米忆阻实物模型,验证了忆阻的存在。由于忆阻的电阻记忆特性与生物神经元突触的学习功能极为相似,适合模拟神经元突触的部分运作,使得电子神经网络更接近人脑,因此,研究忆阻神经网络具有重要的理论和现实意义。忆阻的研究已得到许多学者的广泛关注。文献[3–5]在经典的递归电路模型上用忆阻模拟神经元突触,并代替电阻来构造神经网络,得到了由许多子系统构成的基于忆阻神经网络系统的数学表达结构,其本质是一个系数依赖于状态的切换神经网络。另外,在神经元的信息发送与接收过程中,时滞是不可避免的。自Hu等[3]首次提出基于忆阻时滞神经网络模型以来,关于忆阻时滞神经网络的动力学行为,例如,稳定性[3–4]、反同步控制[5]、可靠镇定[6]等方面的研究已取得了丰硕的成果。

另一方面,耗散是物理系统的一种重要性质,耗散理论为控制系统的分析和设计提供了一个基于能量的输入–输出描述性框架[7]。Wu等[7]指出:虽然耗散性与系统的稳定性存在紧密的关系,但是前者揭示的系统性能比后者多,比如,无源性、几乎扰动解耦、 $H_\infty$ 控制等性能;而且无源理论、Kalman-Yakubovich引理、有界实引理等均可看作是耗散理论的特殊情况。随着线性矩阵不等式方法的发展,各种动力系统的耗散问题得到了广泛研究[8–14]。Zhang等[8]研究了随机混杂系统的可靠耗散问题。Wu等[9]考虑了离散随机神经网络的耗散问题。文献[10–14]研究了带有不同时滞的神经网络的耗散问题。Wu等[10]运用Jensen积分不等式来估计所构造LKF微分的上界,所得耗散条件保守性较大。Zeng等[11]估计积分项上界时仅仅考虑了时滞上界d,而未考虑时滞d(t),这势必会带来一定的保守性。正如文献[15]所指出的,从数学角度看,基于自由权矩阵不等式[12]的方法与Wirtinger积分不等式[16]在减少保守性方面的作用是相同的;但与结合Wirtinger积分不等式和倒凸技术的方法相比,基于自由权矩阵不等式的方法在处理时变时滞系统时会引入更多的未知变量,这将带来较大的计算量。作为耗散的特例,文献[17–21]研究了时滞神经网络的无源问题。不同于文献[13,17–21],本文中的激活函数具有更一般的形式。

为了得到更弱保守性的耗散条件,本文从两方面进行了改进。

1)LKF的构造。本文构造的LKF中除了有目前常见的状态向量2次项、1重积分项和2重积分项外,还增加了带时滞系数的状态向量2次项和3重积分项,这样的构造方法能更充分地考虑时滞d(t)、时滞上界d以及时滞微分 $\dot d(t)$ 的信息。

2)倒凸技术和Wirtinger积分不等式的结合。用Wirtinger积分不等式处理积分项时,常常出现时滞项位于分母的情形,常见的凸组合技术很难处理;而倒凸技术只需引入少量的未知变量,便能较好地处理这一情形。

1 问题阐述和准备
1.1 模型描述
图1 4个基本二端电路元件关系图 Fig. 1 Four fundamental two-terminal circuit elements

4个基本二端电路元件之间关系见图1

正如文献[14]所述,由Kirchhoff电流定律,一类基于忆阻的时滞神经网络的第i个子系统为:

$\left\{ {\begin{array}{*{20}{l}}\displaystyle\!\!\!{{{\dot x}_i}(t) = - {a_i}({x_i}(t)){x_i}(t) + }\\[3pt]\qquad \;\;\; \displaystyle{\sum\limits_{j = 1}^n {{w_{{{ij}}}}} ({x_i}(t)){f_j}({x_j}(t)) + }\\[5pt]\qquad \;\;\; \displaystyle{\sum\limits_{j = 1}^n {w_{ij}^{\left( 1 \right)}} ({{{x}}_i}(t)){f_j}({{{x}}_j}(t - {d_j}(t))) + {J_i}(t)}\text{;}\\\displaystyle\!\!\!{{{{y}}_i}(t) = {f_i}({{{x}}_i}(t)),t \ge {\rm{0, }}{\mathop{i}\nolimits} = 1,2, \cdots ,n}\text{;}\\[4pt]\displaystyle\!\!\!{{{{x}}_i}(t) = {\varphi _i}(t),t \in \left[ { - d,0} \right]}\end{array}} \right.$ (1)

式中: ${x_i}(t)$ 为加在电容器Ci两端的电压; ${f_i}({x_i}(t))$ 和 ${f_i}({x_i}(t - {d_i}(t)))$ 分别为不带时滞和带有时滞的激活函数; ${J_i}(t)$ 为外部输入; ${y_i}(t)$ 为神经网络的输出;时滞 ${d_j}(t)$ 满足 $0 \le {d_j}(t) \le d,{\rm{ }}{\dot d_j}(t) \le \mu ,j = 1, 2, \cdots ,n$ ,其中,d和μ是已知常数;初始状态 ${\varphi _i}(t)$ 在 $\left[ { - d,0} \right]$ 上有界且连续可微, $ \mathbb{R}^n$ 表示n维欧氏空间, $\mathbb{C}([ - d,0],{\mathbb{R}^n})$ 表示所有连续函数的Banach空间; ${a_i}({x_i}(t))$ 、 ${w_{ij}}({x_i}(t))\text{、}w_{ij}^{{\rm{(1)}}}({x_i}(t))$ ( $i,j = 1,2, \cdots ,n$ )表示基于忆阻神经网络的连接权重,且满足:

$\begin{array}{l}\displaystyle{a_{{i}}}({{{x}}_i}(t)) = \frac{1}{{{C_i}}}[\sum\limits_{j = 1}^n {{\rm{sign}}_{ij}({M_{ij}} + {N_{ij}})} + {{\overline{R}}_i}],\\[10pt]\displaystyle{w_{ij}}({{{x}}_i}(t)) = \frac{{{\rm{sign}}_{ij}{M_{ij}}}}{{{C_i}}},w_{ij}^{{\rm{(1)}}}({x_i}(t)) = \frac{{{\rm{sign}}_{ij}{N_{ij}}}}{{{C_i}}}\text{。}\end{array}$

其中:符号函数 ${\rm{sign}}_{ij} \! = \! \left\{ \begin{array}{l}\!\! 1, \; i \ne {\mathop{j}\nolimits}\text{;}\\ \!\!\! {- 1, \;}{\mathop{i}\nolimits} \! = \! {\mathop{j}\nolimits}\text{;}\end{array} \right.$ $ R_{ij}$ 为连接 ${f_i}({{{x}}_i}(t))$ 和 ${{{x}}_i}(t)$ 的忆阻且 ${f_i}(0) = 0$ ;Fij为连接 ${f_i}({{{x}}_i}(t - {d_i}(t)))$ 和 ${{{x}}_i}(t)$ 的忆阻;Mij、Nij分别为Rij、Fij的忆阻值;Ri为与电容器Ci并联的电阻; ${\overline{R}}_{i}$ 为Ri的倒数。根据忆阻的特性和电流–电压特征图[4],为方便起见,令:

$\begin{array}{l}{a_i}({x_i}(t)) = \left\{ \begin{array}{l}{\! \! \! {\hat a}_i},\;\left| {{x_i}(t)} \right| \le {I_i}\text{;}\\{\! \! \!{\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\smile$}} \over a} }_i},\;\left| {{x_i}(t)} \right| > {I_i}\text{。}\end{array} \right.\\[10pt]{w_{ij}}({x_i}(t)) = \left\{ \begin{array}{l}{\! \! \!{\hat w}_{ij}},\;\left| {{x_i}(t)} \right| \le {I_i}\text{;}\\[4pt]{\! \! \!{\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\smile$}} \over w} }_{ij}},\;\left| {{x_i}(t)} \right| > {I_i}\text{。}\end{array} \right.\\[10pt]w_{{\rm{ij}}}^{{\rm{(1)}}}({x_i}(t)) = \left\{ \begin{array}{l}\! \! \! \hat w_{ij}^{{\rm{(1)}}},\;\left| {{x_i}(t)} \right| \le {I_i}\text{;}\\[6pt]\! \! \!\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\smile$}} \over w} _{ij}^{{\rm{(1)}}},\;\left| {{x_i}(t)} \right| > {I_i}\text{。}\end{array} \right.\end{array}$

其中,切换界值 ${I_{\rm{i}}} > 0$ ${\hat a_{\rm{i}}} > 0\text{,}\! \! {\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\smile$}} \over a} _i} > 0\text{,}\!\!{\hat w_{ij}}\text{、}\!\!{\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\smile$}} \over w} _{ij}}$ $\text{、}\!\!\hat w_{ij}^{{\rm{(1)}}}\text{、}\!\!$ $\mathord{\buildrel{\lower1pt\hbox{$\scriptscriptstyle\smile$}} \over w} _{ij}^{{\rm{(1)}}}$ 是常数。
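为便于理解上述状态依赖的切换关系,下面给出一个示意性的Python片段(参数取自第3节例1,切换界值取 $I_i=1$;该片段为作者补充的说明性示例,并非原文内容):

```python
import numpy as np

# 按上式实现状态依赖切换权重的示意片段(数值取自第3节例1,切换界值 I_i = 1)。
I = np.array([1.0, 1.0])                                 # 切换界值 I_i
A_hat,  A_smile  = np.array([2.1, 1.6]), np.array([1.9, 1.4])
W_hat   = np.array([[-1.2, 1.1], [0.6, -1.2]])
W_smile = np.array([[-0.9, 0.9], [0.4, -0.9]])
W1_hat   = np.array([[-0.6, 0.7], [0.8, 0.9]])
W1_smile = np.array([[-0.4, 0.5], [0.6, 0.7]])

def memristive_weights(x):
    """按 |x_i(t)| 与 I_i 的大小关系返回当前的 a_i、w_ij、w_ij^(1)(第 i 行由 x_i 决定)。"""
    inside = np.abs(x) <= I
    a  = np.where(inside, A_hat, A_smile)
    W  = np.where(inside[:, None], W_hat,  W_smile)
    W1 = np.where(inside[:, None], W1_hat, W1_smile)
    return a, W, W1

print(memristive_weights(np.array([0.5, 1.5])))          # 两个神经元各处于不同切换区
```

可以看出,连接权重逐神经元地依赖于 $|x_i(t)|$ 与切换界值的大小关系,这也正是忆阻神经网络本质上是系数依赖于状态的切换系统的原因。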

1.2 问题准备

由于系统(1)中的 ${a_i}( \cdot )\text{、}{w_{ij}}( \cdot )\text{、}w_{ij}^{{\rm{(1)}}}( \cdot )$ 均为不连续函数,因此,系统(1)是一个不连续系统,传统微分方程的相关理论在这里无法使用。Filippov[22]提出了一套有关不连续微分方程的解的理论。由Filippov理论可知,一个具有不连续项的微分方程和与之对应的微分包含具有相同的解集。因此,在研究具有不连续项的微分方程的性能时,要转化成研究相应的微分包含的性能。有关微分包含和集值映射理论具体参见文献[22–23]。应用微分包含和集值映射理论[22–23],式(1)可写成微分包含形式:

$\left\{ {\begin{aligned}& {{{\dot x}_i}(t) \in - {\rm{co}}\{ {{\hat a}_i},{{\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\smile$}} \over a} }_i}\} {x_i}(t) + \sum\limits_{j = 1}^n {{\rm{co}}\{ {{\hat w}_{{ij}}},{{\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\smile$}} \over w} }_{ij}}\} {f_j}({x_j}(t))} }+\\[-4pt]& \quad \quad \;\;\; { \sum\limits_{j = 1}^n {{\rm{co}}\{ \hat w_{ij}^{(1)},\mathord{\buildrel{\lower1pt\hbox{$\scriptscriptstyle\smile$}} \over w} _{ij}^{(1)}\} {f_j}({x_j}(t - {d_{\mathop{\rm j}\nolimits} }(t))) + {J_i}(t)} }\text{;}\\[-2pt]& {{y_i}(t) = {f_i}({x_i}(t)),t \ge {{0},i} = {\rm{1,2,}} \cdots ,n}\text{;}\\& {{x_i}(t) = {\varphi _i}(t),t \in \left[ { - d,0} \right]}\end{aligned}} \right.$ (2)

根据文献[6,13]的描述方式, $\forall i,j = 1,2, \cdots ,n,\exists \;{{{a}}_i} \! \in$ $ {\rm{co}}\left\{ {{{\hat a}_i},{{\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\smile$}} \over {{a}}} }_i}} \right\},{{{w}}_{ij}} \in {\rm{co}}\{ {{{\hat w}}_{ij}},{\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\smile$}} \over {{w}}} _{ij}}\} $ ${{w}}_{ij}^{{\rm{(1)}}} \in {\rm{co}}\{ {{\hat w}}_{ij}^{{\rm{(1)}}},\mathord{\buildrel{\lower1pt\hbox{$\scriptscriptstyle\smile$}} \over {{w}}} _{ij}^{{\rm{(1)}}}\} $ ,其中, ${\rm{co}}\left\{ {{{\hat a}_i},{{\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\smile$}} \over {{a}}} }_i}} \right\}$ 表示由实数 ${{{\hat a}_i}\text{、}{{\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\smile$}} \over {{a}}} }_i}}$ 生成的闭凸包,使得式(2)等价于:

$\left\{ \begin{aligned}& {{\dot x}_i}(t) = - {a_i}{x_i}(t) + \sum\limits_{j = 1}^n {{w_{ij}}} {f_j}({x_j}(t))+\\[-3pt]& \qquad \; \; \displaystyle\sum\limits_{j = 1}^n {w_{ij}^{\left( 1 \right)}} {f_j}({x_j}(t - {d_j}(t))) + {J_i}(t)\text{;}\\[-2pt]& \; {y_i}(t) = {f_i}({x_i}(t)),t \ge 0,i = 1,2, \cdots ,n\text{;}\\& \; {x_i}(t) = {\varphi _i}(t),t \in \left[ { - d,0} \right]\end{aligned} \right.$ (3)

其中: ${a_i}\text{、}{w_{ij}}\text{、}w_{ij}^{(1)}$ 是依赖于时间t和系统(1)的初值的。 ${{\overline a }_i} = \max \{ {\hat a_i},\,{\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\smile$}} \over a} _i}\} ,\;{\underline a _i} = \min \{ {\hat a_i},\;{\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\smile$}} \over a} _i}\} $ ${\overline w_{ij}} = \max \{ {\hat w_{ij}},\;{\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\smile$}} \over w} _{ij}}\}$ ${\underline w _{ij}} = \min \{ {\hat w_{{\rm{ij}}}},{\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\smile$}} \over w} _{{\rm{ij}}}}\} $ $\overline w_{ij}^{{\rm{(1)}}} = \max \{ \hat w_{ij}^{{\rm{(1)}}},\mathord{\buildrel{\lower1pt\hbox{$\scriptscriptstyle\smile$}} \over w} _{ij}^{{\rm{(1)}}}\} $ $\underline w _{ij}^{{\rm{(1)}}} = \min \{ \hat w_{ij}^{{\rm{(1)}}},$ $\mathord{\buildrel{\lower1pt\hbox{$\scriptscriptstyle\smile$}} \over w} _{ij}^{{\rm{(1)}}}\} $ 。显然, ${\rm{co}}\{ {\hat a_i},\;{\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\smile$}} \over a} _i}\} = [{\underline a _i},\;{\bar a_i}]$ ${\rm{co}}\{{{{\hat w}}_{ij}},{{{\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\smile$}} \over w} }}_{ij}}\} = [{\underline w _{ij}},\;{\overline w_{ij}}]$ ${\rm{co}}\{ \hat w_{ij}^{{\rm{(1)}}},\;\mathord{\buildrel{\lower1pt\hbox{$\scriptscriptstyle\smile$}} \over w} _{{{ij}}}^{{\rm{(1)}}}\} = [\underline w _{ij}^{{\rm{(1)}}},\;\overline w_{ij}^{{\rm{(1)}}}]$ $\forall {i,j} = {\rm{1,2,}} \cdots ,n$
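作为上式的直观说明,下面的片段按逐元素取最大、最小值计算闭凸包端点(续用前面示例中定义的数组,属作者补充的示例):

```python
# 闭凸包端点:逐元素取最大、最小值(续用上面示例中定义的数组)
A_bar,  A_under  = np.maximum(A_hat, A_smile),  np.minimum(A_hat, A_smile)
W_bar,  W_under  = np.maximum(W_hat, W_smile),  np.minimum(W_hat, W_smile)
W1_bar, W1_under = np.maximum(W1_hat, W1_smile), np.minimum(W1_hat, W1_smile)
```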

满足系统(1)的初始条件 ${x_{{i}}}(t) \! = \! {\varphi _{{i}}}(t),{\rm{ }}t \! \in \! [ - d,0]$ ,在Filippov意义下的解 ${\mathit{\boldsymbol{x}}}(t) \! = \! {[{x_{\rm{1}}}(t), \cdots ,\! {x_n}(t)]^{\rm{T}}} \! \in \! {{\mathbb{R}}^n}$ $[0,\infty )$ 的任一紧区间上是绝对连续的,对 $i = 1,2, \cdots ,n$ ,有:

$\begin{aligned}{{\dot x}_{\rm{i}}}(t) \in & - {\rm{co}}\{ {{\hat a}_i},{{\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\smile$}} \over a} }_i}\} {x_i}(t) + \sum\limits_{j = 1}^n {{\rm{co}}\{ {{\hat w}_{ij}},{{\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\smile$}} \over w} }_{ij}}\} } {f_j}({x_j}(t)) + \\& \sum\limits_{j = 1}^n{{\rm{co}}\{ \hat w_{ij}^{{\rm{(1)}}},\mathord{\buildrel{\lower1pt\hbox{$\scriptscriptstyle\smile$}} \over w} _{ij}^{{\rm{(1)}}}\} } {f_j}({x_{{j}}}(t - {d_j}(t))) + {J_i}(t)\text{。}\end{aligned}$

为了方便,式(3)也可写成向量形式:

$\left\{ \begin{aligned}& \dot {\mathit{\boldsymbol{x}}}(t) \in - {\rm{co}}\{ {\hat {\mathit{\boldsymbol{A}}}},{{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{\mathit{\boldsymbol{A}}} }}\} {\mathit{\boldsymbol{x}}}(t) + {\rm{co}}\{ {\hat {\mathit{\boldsymbol{W}}}},{{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{\mathit{\boldsymbol{W}}} }}\} {\mathit{\boldsymbol{f}}}({\mathit{\boldsymbol{x}}}(t)) + \\& \qquad \;\;\; {\rm{co}}\{ {{\hat {\mathit{\boldsymbol{W}}}}_1},{{{{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{\mathit{\boldsymbol{W}}} }}}_1}\} {\mathit{\boldsymbol{f}}}({\mathit{\boldsymbol{x}}}(t - d(t))) + {\mathit{\boldsymbol{J}}}(t),t \ge 0\text{;}\\& \;{\mathit{\boldsymbol{y}}}(t) = {\mathit{\boldsymbol{f}}}({\mathit{\boldsymbol{x}}}(t))\text{;}\\& \;{\mathit{\boldsymbol{x}}}(t) = {\mathit{\boldsymbol{\varphi }}}(t),t \in \left[ { - d,0} \right]\end{aligned} \right.$ (4)

$\exists {\mathit{\boldsymbol{A}}} \in {\rm{co}}\{ {{{\hat {\mathit{\boldsymbol{A}}}}},\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\smile$}} \over {\mathit{\boldsymbol{A}}}} } \},{\mathit{\boldsymbol{W}}} \in {\rm{co}}\{ {{{\hat {\mathit{\boldsymbol{W}}}}},\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\smile$}} \over {\mathit{\boldsymbol{W}}}} } \},{{\mathit{\boldsymbol{W}}}_1} \in {\rm{co}}\{ {{{\hat {\mathit{\boldsymbol{W}}}}}_1},{{\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\smile$}} \over {\mathit{\boldsymbol{W}}}} }_1}} \}$

式(4)中,矩阵区间 $[\underline {\mathit{\boldsymbol{W}}} ,{\mathit{\boldsymbol{\overline {W}}}}]$ 中 $\underline {\mathit{\boldsymbol{W}}} \ll {\mathit{\boldsymbol{\overline {W}}}}$ ,且 ${\mathit{\boldsymbol{W}}} = {({w_{ij}})_{m \times n}} \in [\underline {\mathit{\boldsymbol{W}}} ,{\mathit{\boldsymbol{\overline {W}}}}]$ 意味着 $\underline {\mathit{\boldsymbol{W}}} \ll {\mathit{\boldsymbol{W}}} \ll {\mathit{\boldsymbol{\overline{\bm{W}}}}}$ ,即 ${\underline w _{ij}} \ll {w_{ij}} \ll {{\overline w}_{ij}}$ , $i \! = \! 1,2, \cdots ,m$ , $j \! = \! 1,2, \cdots , n$ 。其他矩阵区间的含义与此相同。

式(4)等价于:

$\left\{ \begin{aligned}\dot {x}(t) = & - {\mathit{\boldsymbol{Ax}}}(t) + {\mathit{\boldsymbol{Wf}}}({\mathit{\boldsymbol{x}}}(t)) + \\& {{\mathit{\boldsymbol{W}}}_1}{\mathit{\boldsymbol{f}}}({\mathit{\boldsymbol{x}}}(t - d(t))) + {\mathit{\boldsymbol{J}}}(t),t \ge 0\text{;}\\{\mathit{\boldsymbol{y}}}(t) = & {\mathit{\boldsymbol{f}}}({\mathit{\boldsymbol{x}}}(t))\text{;}\\{\mathit{\boldsymbol{x}}}(t) = & {\mathit{\boldsymbol{\varphi }}}(t)\end{aligned} \right.$ (5)

显然

$\begin{aligned}& {\text{co }}\{{{\hat {\mathit{\boldsymbol{A}}}}},{{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{\mathit{\boldsymbol{A}}} }}{\text{\} }} = [\underline {\mathit{\boldsymbol{A}}} ,{{\overline {\mathit{\boldsymbol{A}}}}}],{\rm{co}}\{ {\hat {\mathit{\boldsymbol{W}}}},{{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{\mathit{\boldsymbol{W}}} }}\} = [\underline {\mathit{\boldsymbol{W}}} ,{\overline {\mathit{\boldsymbol{W}}}}], \\& \hfill {\rm{co}}\{ {{\hat {\mathit{\boldsymbol{W}}}}_1},{{{{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{\mathit{\boldsymbol{W}}} }}}_1}\} = [{\underline {\mathit{\boldsymbol{W}}} _1},{{\overline {\mathit{\boldsymbol{W}}}}_1}] \text{。}\hfill \\ \end{aligned}$

其中:

$\begin{array}{l}\;\;\;\; \hat {\mathit{\boldsymbol{A}}} = {({{\hat a}_i})_{n \times n}},\;{\hat {\mathit{\boldsymbol{W}}}} = {({{\hat w}_{ij}})_{n \times n}},\;{{\hat {\mathit{\boldsymbol{W}}}}_1} = {(\hat w_{ij}^{(1)})_{n \times n}},\;{\breve {\mathit{\boldsymbol{A}}}} = {({{\breve a}_i})_{n \times n}},\\[5pt]\;\;\; {\breve {\mathit{\boldsymbol{W}}}} = {({{\breve w}_{ij}})_{n \times n}},\;{{\breve {\mathit{\boldsymbol{W}}}}_1} = {(\breve w_{ij}^{(1)})_{n \times n}},\;{\overline {\mathit{\boldsymbol{A}}}} = {({{\overline a}_i})_{n \times n}},\;\underline {\mathit{\boldsymbol{A}}} = {({\underline a _i})_{n \times n}},\\[5pt]{\overline {\mathit{\boldsymbol{W}}}} = {({{\overline w}_{ij}})_{n \times n}},\;\underline {\mathit{\boldsymbol{W}}} = {({\underline w _{ij}})_{n \times n}},\;{{\overline {\mathit{\boldsymbol{W}}}}_1} = {(\overline w_{ij}^{(1)})_{n \times n}},\;{\underline {\mathit{\boldsymbol{W}}} _1} = {(\underline w _{ij}^{(1)})_{n \times n}},\\[5pt]\qquad \quad \;\;{\mathit{\boldsymbol{x}}}(t) = {\left[ {{x_1}(t),{x_2}(t), \cdots ,{x_n}(t)} \right]^{{\rm T}}} \in {{\mathbb{R}}^{n}},\\[5pt]{\mathit{\boldsymbol{f}}}({\mathit{\boldsymbol{x}}}(t)) = {[{f_1}({x_1}(t)),{f_2}({x_2}(t)), \cdots ,{f_n}({x_n}(t))]^{{\rm T}}} \in {{\mathbb{R}}^n},\\[5pt]{\mathit{\boldsymbol{f}}}({\mathit{\boldsymbol{x}}}(t - d(t))) = [{f_1}({x_1}(t - d(t))),{f_2}({x_2}(t - d(t))),\\[5pt]\qquad \qquad \qquad \; \cdots ,{f_n}({x_n}(t - d(t))){]^{{\rm T}}} \in {{\mathbb{R}}^{n}},\\[5pt]\quad\quad\quad{\mathit{\boldsymbol{y}}}(t) = {[{y_1}(t),{y_2}(t), \cdots ,{y_n}(t)]^{{\rm T}}} \in {{\mathbb{R}}^{n}},\\[5pt]\quad\quad\quad{\mathit{\boldsymbol{J}}}(t) = {[{J_1}(t),{J_2}(t), \cdots ,{J_n}(t)]^{{\rm T}}} \in {{\mathbb{R}}^{n}}\text{。}\end{array}$

假 设  连续有界激活函数 $f({\mathit{\boldsymbol{x}}}(t))$ 满足

$l_j^{ - } \le \frac{{{f_j}(a) - {f_j}(b)}}{{a - b}} \le l_j^{ + }, j = 1, 2,\cdots ,n$ (6)

式中: ${f_j}(0) = 0$ ; $l_j^ -\text{、}l_j^ + $ 为常数;式(6)对所有 $a,b \in \mathbb{R}$ 且 $a \ne b$ 成立。

b=0时,

$l_j^{\rm{ - }} \le \frac{{{f_j}(a)}}{a} \le l_j^{\rm{ + }}$ (7)

下面介绍严格 $({\mathit{\boldsymbol{G,S,T}}}) - \gamma $ 耗散定义和引理。

定 义  给定对称矩阵 ${\mathit{\boldsymbol{G}}}$ 、 ${\mathit{\boldsymbol{T}}}$ 和矩阵 ${\mathit{\boldsymbol{S}}}$ ,对所有 $\tau \ge 0$ ,在零初始状态下,如果不等式 ${\left\langle {{\mathit{\boldsymbol{y}}},{\mathit{\boldsymbol{Gy}}}} \right\rangle _\tau } + 2{\left\langle {{\mathit{\boldsymbol{y}}},{\mathit{\boldsymbol{SJ}}}} \right\rangle _\tau } + {\left\langle {{\mathit{\boldsymbol{J}}},{\mathit{\boldsymbol{TJ}}}} \right\rangle _\tau } \ge \gamma {\left\langle {{\mathit{\boldsymbol{J}}},{\mathit{\boldsymbol{J}}}} \right\rangle _\tau }$ 对 $\forall {\mathit{\boldsymbol{J}}} \in L_{2e}^{{n}}$ 均成立,则称系统(5)为严格 $({\mathit{\boldsymbol{G,S,T}}}) - \gamma $ 耗散的。其中, $L_{2e}^{{n}} = \{ f\text{是} \mathbb{R}^{+}$ 上的可测函数, ${P_\tau } f \in {{L}}_{\rm{2}}^n,\forall \tau \in \mathbb{R}^{+} \}$ , $({P_{\rm{\tau }}}f)(t) = \left\{ \begin{array}{l}\!\!\! f(t),\;\;t \le \tau \text{;}\\\!\!\! 0,\;\;t > \tau \text{。}\end{array} \right.$ 对任意函数 ${\mathit{\boldsymbol{x}}} = \{ {\mathit{\boldsymbol{x}}}(t)\} ,{\mathit{\boldsymbol{y}}} = \{ {\mathit{\boldsymbol{y}}}(t)\} \in L_{2e}^{{n}}$ 和矩阵 ${\mathit{\boldsymbol{N}}}$ ,记 ${\left\langle {{\mathit{\boldsymbol{x}}},{\mathit{\boldsymbol{Ny}}}} \right\rangle _\tau } = \int_0^{\rm{\tau }} {{{\mathit{\boldsymbol{x}}}^{\rm{T}}}(t){\mathit{\boldsymbol{Ny}}}(t){\rm{d}}t} $ 。

引理1[16]  设 ${\mathit{\boldsymbol{x}}}(t)$ 是 $[a,b] \to {{\mathbb{R}}^{n}}$ 的连续可微函数,则对给定矩阵 ${\mathit{\boldsymbol{R}}} > 0$ ,有下列积分不等式成立:

$\begin{array}{l} - (b - a)\int\limits_a^b {{{{\mathit{\boldsymbol{\dot x}}}}^{\rm{T}}}(s)} {\mathit{\boldsymbol{R\dot x}}}(s){\rm{d}}s\le\\\quad \;\; - {[{\mathit{\boldsymbol{x}}}(b) - {\mathit{\boldsymbol{x}}}(a)]^{\rm{T}}}{\mathit{\boldsymbol{R}}}[{\mathit{\boldsymbol{x}}}(b) - {\mathit{\boldsymbol{x}}}(a)] - 3{{\mathit{\boldsymbol{\varOmega }}}^{\rm{T}}}{\mathit{\boldsymbol{R\varOmega }}}\text{。}\end{array}$

其中: ${\mathit{\boldsymbol{\varOmega }}} = {\mathit{\boldsymbol{x}}}(b) + {\mathit{\boldsymbol{x}}}(a) - ({\displaystyle\frac{2}{{b - a}}})\int\limits_a^b {{\mathit{\boldsymbol{x}}}(s)} {\rm{d}}s$ 。引理1又称为Wirtinger积分不等式。显然,若略去 $3{{\mathit{\boldsymbol{\varOmega }}}^{\rm{T}}}{\mathit{\boldsymbol{R\varOmega }}}$ 项,引理1即退化为Jensen积分不等式,因此Jensen积分不等式是它的特殊情况。
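为直观体会引理1的不等式方向,下面给出一个数值验证的Python小片段(随机选取 $\boldsymbol{R}>0$ 与一条三次多项式轨迹 $\boldsymbol{x}(s)$ ,数值积分采用复合梯形公式;该片段为作者补充的示意,非原文内容):

```python
import numpy as np

# 数值验证引理1(Wirtinger积分不等式):随机取 R>0 与三次多项式轨迹 x(s),
# 用复合梯形公式近似积分,检查不等式方向。
rng = np.random.default_rng(0)
n, a, b, N = 2, 0.0, 1.0, 2001
s = np.linspace(a, b, N)
ds = s[1] - s[0]

M = rng.standard_normal((n, n))
R = M @ M.T + n * np.eye(n)                        # 构造 R > 0
C = rng.standard_normal((n, 4))                    # 三次多项式系数
x    = C[:, [0]] + C[:, [1]]*s + C[:, [2]]*s**2 + C[:, [3]]*s**3
xdot = C[:, [1]] + 2*C[:, [2]]*s + 3*C[:, [3]]*s**2

def integrate(y, h):                               # 复合梯形公式,沿最后一维积分
    return h * (np.sum(y, axis=-1) - 0.5*(y[..., 0] + y[..., -1]))

quad = np.einsum('it,ij,jt->t', xdot, R, xdot)     # xdot(s)^T R xdot(s)
lhs = -(b - a) * integrate(quad, ds)
xa, xb = x[:, 0], x[:, -1]
Omega = xb + xa - (2.0/(b - a)) * integrate(x, ds)
rhs = -(xb - xa) @ R @ (xb - xa) - 3 * Omega @ R @ Omega
print("引理1成立:", lhs <= rhs)
```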

引理2[24]  对任意向量α1α2,对称矩阵R和任意矩阵S,满足 ${\mathit{\boldsymbol{\overline{{R}}}}}{ = }\left[ {\begin{array}{*{20}{c}} {\mathit{\boldsymbol{R}}} & {\mathit{\boldsymbol{S}}}\\ {\mathit{\boldsymbol{*}}} & {\mathit{\boldsymbol{R}}} \end{array}} \right] \ge 0$ ${\mathit{\boldsymbol{\overline{{R}}}}}$ 是半正定矩阵),实参数 $\beta \in [0,1]$ ,有如下不等式成立:

$\frac{1}{\beta }{\mathit{\boldsymbol{\alpha }}}_1^{\rm{T}}{\mathit{\boldsymbol{R}}}{{\mathit{\boldsymbol{\alpha }}}_1} + \frac{1}{{1 - \beta }}{\mathit{\boldsymbol{\alpha }}}_2^{\rm{T}}{\mathit{\boldsymbol{R}}}{{\mathit{\boldsymbol{\alpha }}}_2} \ge {{\mathit{\boldsymbol{\eta }}}^{\rm{T}}}\left[ {\begin{array}{*{20}{c}}{\mathit{\boldsymbol{R}}} & {\mathit{\boldsymbol{S}}}\\{\mathit{\boldsymbol{*}}} & {\mathit{\boldsymbol{R}}}\end{array}} \right]{\mathit{\boldsymbol{\eta }}}\text{。}$

其中,*表示对称矩阵的对称部分, ${{\mathit{\boldsymbol{\eta }}}^{\rm{T}}} \! = \! \left[ {\begin{array}{*{20}{c}} {{\mathit{\boldsymbol{\alpha }}}_{\rm{1}}^{\rm{T}}} \;\; {{\mathit{\boldsymbol{\alpha }}}_{\rm{2}}^{\rm{T}}} \end{array}} \right]$
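类似地,下面的片段对引理2做一个简单的数值检验(为保证 $\overline{\boldsymbol{R}} \ge 0$ ,这里取平凡的 $\boldsymbol{S}=0$ ;该片段为作者补充的示意,非原文内容):

```python
import numpy as np

# 数值检验引理2(倒凸不等式):随机取 R>0、alpha_1、alpha_2,取 S=0 保证 [[R,S],[S^T,R]]>=0。
rng = np.random.default_rng(1)
n = 3
M = rng.standard_normal((n, n))
R = M @ M.T + n * np.eye(n)
S = np.zeros((n, n))
a1, a2 = rng.standard_normal(n), rng.standard_normal(n)
eta = np.concatenate([a1, a2])
R_bar = np.block([[R, S], [S.T, R]])

for beta in (0.1, 0.5, 0.9):
    lhs = a1 @ R @ a1 / beta + a2 @ R @ a2 / (1 - beta)
    rhs = eta @ R_bar @ eta
    print(f"beta={beta}: 引理2成立 ->", lhs >= rhs)
```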

引理3[25]  对矩阵 ${\mathit{\boldsymbol{R}}} > 0$ ,参数a<b和向量泛函 ${\mathit{\boldsymbol{\omega}}} :[a,b] \mapsto {{\mathbb{R}}^{{m}}}$ ,下列积分不等式成立:

$\begin{array}{l}\displaystyle\frac{{{{(b - a)}^2}}}{2}\int\limits_a^b {\int\limits_\theta ^b {{{\mathit{\boldsymbol{\omega }}}^{\rm T}}(s){\mathit{\boldsymbol{R\omega }}}(s){\rm{d}}s{\rm{d}}\theta } } \ge \\\qquad \quad \displaystyle(\int\limits_a^b {\int\limits_\theta ^b {{\mathit{\boldsymbol{\omega }}}(s){\rm{d}}s{\rm{d}}\theta {)^{\rm{T}}}{\mathit{\boldsymbol{R}}}} } (\int\limits_a^b {\int\limits_\theta ^b {{\mathit{\boldsymbol{\omega }}}(s){\rm{d}}s{\rm{d}}\theta )} } \text{。}\end{array}$
2 耗散条件分析

本节给出保证时滞神经网络系统(5)严格耗散的时滞依赖条件。

定 理  给定 $d>0$ 和 $\mu$ ,如果存在 $2n \times 2n$ 维矩阵 ${\mathit{\boldsymbol{P}}} > 0$ 、 ${{\mathit{\boldsymbol{Q}}}_i} > 0({\mathop{ i}\nolimits} = 1,2)$ 、 ${{\mathit{\boldsymbol{R}}}_2}>0$ 和 ${\mathit{\boldsymbol{X}}}$ , $n \times n$ 维矩阵 ${{\mathit{\boldsymbol{Z}}}_i} > 0(i = 1,2,3)$ 和 ${{\mathit{\boldsymbol{R}}}_1} > 0$ ,对角矩阵 ${{\mathit{\boldsymbol{\varLambda }}}_{{i}}} \ge 0({{i}} = 1,2)$ 、 ${{\mathit{\boldsymbol{H}}}_{{i}}} \ge 0({{i}} = 1,2, \cdots ,5)$ ,以及参数 $\gamma > 0$ ,使得当 $d(t) \in \{ 0,d\} $ 时,下列线性矩阵不等式成立:

${\mathit{\boldsymbol{\varXi}}} < 0$ (8)
${\mathit{\boldsymbol{\varGamma }}} = \left[ {\begin{array}{*{20}{c}}{{{{{\overline {\mathit{\boldsymbol{Z}}}}}}_{\rm{2}}}} & {\mathit{\boldsymbol{X}}}\\{\rm{*}} & {{{{{\overline {\mathit{\boldsymbol{Z}}}}}}_{\rm{2}}}}\end{array}} \right] \ge 0$ (9)

则系统(5)为严格 $({\mathit{\boldsymbol{G}}},{\mathit{\boldsymbol{S}}},{\mathit{\boldsymbol{T}}}) - \gamma $ 耗散的。

$\begin{aligned}&\text{其中:} {{\mathit{\boldsymbol{\xi }}}^{\rm{T}}}(t) = [{{\mathit{\boldsymbol{x}}}^{\rm{T}}}(t)\quad {{\mathit{\boldsymbol{x}}}^{\rm{T}}}(t - d(t))\quad {{\mathit{\boldsymbol{x}}}^{\rm{T}}}(t - d)\quad {{\mathit{\boldsymbol{f}}}^{\rm{T}}}({\mathit{\boldsymbol{x}}}(t))\\& \qquad \qquad \quad \;\;\; {{\mathit{\boldsymbol{f}}}^{\rm{T}}}({\mathit{\boldsymbol{x}}}(t - d(t)))\quad {{\mathit{\boldsymbol{f}}}^{\rm{T}}}({\mathit{\boldsymbol{x}}}(t - d))\quad {\mathit{\boldsymbol{\varepsilon }}}_{\rm{1}}^{\rm{T}}(t)\quad {\mathit{\boldsymbol{\varepsilon }}}_{\rm{2}}^{\rm{T}}(t)\\& \qquad \qquad \quad \;\;\;{{\mathit{\boldsymbol{J}}}^{\rm{T}}}(t){]^{\rm{T}}}\text{;}\\& \quad \quad \;\;{{\mathit{\boldsymbol{e}}}_i} = [{0_{n \times (i - 1)n}} \quad {{\mathit{\boldsymbol{I}}}_{{n}}} \quad {0_{n \times (9 - i)n}}]\text{,}i = 1,2, \cdots ,9\text{;}\\& \quad \;\;{{\mathit{\boldsymbol{{\varSigma}}}}_1} = {\rm{diag}}\{ l_1^ + ,l_2^ + , \cdots ,l_n^+\} \text{,}{{\mathit{\boldsymbol{{\varSigma}}}}_2} = {\rm{diag}} \{ l_1^ - ,l_2^ - , \cdots ,l_{{n}}^- \}\text{,}\\& {\rm{diag}}\{ \cdot \} \text{表示对角矩阵;} {{\mathit{\boldsymbol{\varPi }}}_1} = {[{\mathit{\boldsymbol{e}}}_{\rm{1}}^{\rm{T}}\;\;d(t){\mathit{\boldsymbol{e}}}_7^{\rm{T}} + (d - d(t)){\mathit{\boldsymbol{e}}}_8^{\rm{T}}]^{\rm{T}}}\text{;}\\& \qquad \qquad {{\mathit{\boldsymbol{\varPi }}}_2} = {[ - {\mathit{\boldsymbol{e}}}_{\rm{1}}^{\rm{T}}{{\mathit{\boldsymbol{A}}}^{\rm{T}}} + {\mathit{\boldsymbol{e}}}_{\rm{4}}^{\rm{T}}{{\mathit{\boldsymbol{W}}}^{\rm{T}}} + {\mathit{\boldsymbol{e}}}_{\rm{5}}^{\rm{T}}{\mathit{\boldsymbol{W}}}_{\rm{1}}^{\rm{T}} + {\mathit{\boldsymbol{e}}}_{\rm{9}}^{\rm{T}}]^{\rm{T}}}\text{;}\\& \quad {{\mathit{\boldsymbol{\varPi }}}_3} = {[{{\mathit{\boldsymbol{\varPi }}}_2} \quad {\mathit{\boldsymbol{e}}}_{\rm{1}}^{\rm{T}} - {\mathit{\boldsymbol{e}}}_3^{\rm{T}}]^{\rm{T}}}\text{;}{{\mathit{\boldsymbol{\varPi }}}_{i + 3}} = {[{\mathit{\boldsymbol{e}}}_i^{\rm{T}}\;\;{\mathit{\boldsymbol{e}}}_{{i + 3}}^{\rm{T}}]^{\rm{{\rm{T}}}}}\text{,}i = 1,2,3\text{;}\\& \qquad \quad {{\mathit{\boldsymbol{\varPi }}}_i} = {[{\mathit{\boldsymbol{e}}}_{i - 6}^{\rm{{\rm{T}}}} - {\mathit{\boldsymbol{e}}}_{i - 5}^{\rm{T}} \quad {\mathit{\boldsymbol{e}}}_{i - 6}^{\rm{T}} + {\mathit{\boldsymbol{e}}}_{i - 5}^{\rm{T}} - 2{\mathit{\boldsymbol{e}}}_i^{\rm{T}}{\rm{]}}^{\rm{T}}},i = {\rm{7,8}}\text{;}\\& \qquad \qquad \;\;{{\mathit{\boldsymbol{\varPi }}}_9} = {[{{\mathit{\boldsymbol{\varPi }}}_7}\;\;{{\mathit{\boldsymbol{\varPi }}}_8}]^{\rm{T}}}\text{;}{{\mathit{\boldsymbol{\varPi }}}_{10}} = {[{\mathit{\boldsymbol{e}}}_{\rm{1}}^{\rm{T}} - {\mathit{\boldsymbol{e}}}_7^{\rm{T}}]^{\rm{T}}}\text{;}\\&  \qquad \qquad \qquad \quad{{\mathit{\boldsymbol{\varPi }}}_{11}} = {\left[ {{\mathit{\boldsymbol{e}}}_{\rm{2}}^{\rm{T}} \quad - {\mathit{\boldsymbol{e}}}_{\rm{8}}^{\rm{T}}} \right]^{\rm{T}}}\text{;}\\[-3pt]& {{\mathit{\boldsymbol{\varPi }}}_{{\rm{1}}j}} = {\left[ {{{\mathit{\boldsymbol{\varSigma}}} _{(3 + {{( - 1)}^j})/2}}{\mathit{\boldsymbol{e}}}_{{\rm{(2}}j - 3 - {{( - 1)}^j}{\rm{)/4}}}^{\rm{T}} + {{( - 1)}^j}{\mathit{\boldsymbol{e}}}_{{\rm{(2}}j + 9 - {{( - 1)}^j}{\rm{)/4}}}^{\rm{T}}} \right]^{\rm{T}}}\text{,}\\& \qquad \;\;(j = 3,4, \cdots ,8)\text{;}\\& \;\;\,{{\mathit{\boldsymbol{\varPi }}}_i} = {[{\mathit{\boldsymbol{e}}}_{(i - 11)/2}^{\rm{T}}{\rm{ - }}{\mathit{\boldsymbol{e}}}_{(i - 9)/2}^{\rm{T}}{\rm{ - }}{\mathit{\boldsymbol{e}}}_{(i - 17)/2}^{\rm{T}}{{\mathit{\boldsymbol{\varSigma}}} _2} + {\mathit{\boldsymbol{e}}}_{(i - 15)/2}^{\rm{T}}{{\mathit{\boldsymbol{\varSigma}}} _2}]^{\rm{T}}},\,\, i = 19,21\text{;}\\& {{\mathit{\boldsymbol{\varPi }}}_i} = {\rm{ - }}{[{\mathit{\boldsymbol{e}}}_{(i - 12)/2}^{\rm{T}}{\rm{ - }}{\mathit{\boldsymbol{e}}}_{(i - 10)/2}^{\rm{T}}{\rm{ - }}{\mathit{\boldsymbol{e}}}_{(i - 18)/2}^{\rm{T}}{{\mathit{\boldsymbol{\varSigma}}} _1} + {\mathit{\boldsymbol{e}}}_{(i - 16)/2}^{\rm{T}}{{\mathit{\boldsymbol{\varSigma}}} _1}]^{\rm{T}}},\,\, i = 20,22\text{;}\end{aligned}$
${\rm{sym}}({\mathit{\boldsymbol{A}}}) = {\mathit{\boldsymbol{A}}} + {{\mathit{\boldsymbol{A}}}^{\rm{T}}}\text{。}$

证明:构造如下的Lyapunov-Krasovskii泛函

$V(t) = \sum\limits_{i = 1}^5 {{V_i}(t)} $ (10)

式中:

$\begin{aligned}[b]\!\!\!\!\!\!{ V_1}(t) \!=\! & 2\sum\limits_{i = 1}^{{n}} {\int_0^{{x_i}} {[{\lambda _{1i}}(l_{{i}}^ + s - {f_i}(s))} + {\lambda _{{{2i}}}}({f_{{i}}}(s) - }l_i^ - s)]{\rm{d}}s + \\& \qquad {\mathit{\boldsymbol{\eta }}}_1^{\rm{T}}(t){\mathit{\boldsymbol{P}}}{{\mathit{\boldsymbol{\eta }}}_1}(t)\end{aligned}$ (11)
${{V}_2}(t) = \int\limits_{t - d(t)}^t {{\mathit{\boldsymbol{\eta }}}_{\rm{2}}^{\rm{T}}(s){{\mathit{\boldsymbol{Q}}}_1}{{\mathit{\boldsymbol{\eta }}}_2}(s){\rm{d}}s + } \int\limits_{t - d}^{t - d(t)} {{\mathit{\boldsymbol{\eta }}}_{\rm{2}}^{\rm{T}}(s){{\mathit{\boldsymbol{Q}}}_2}{{\mathit{\boldsymbol{\eta }}}_2}(s){\rm{d}}}s$ (12)
$\begin{aligned}{{V}_3}(t) = & \int\limits_{ - d}^0 {\int\limits_{t + \theta }^t {{{\mathit{\boldsymbol{x}}}^{\rm T}}(s){{\mathit{\boldsymbol{Z}}}_1}{\mathit{\boldsymbol{x}}}(s){\rm{d}}s{\rm{d}}\theta } } + d\int\limits_{ - d}^0 {\int\limits_{t + \theta }^t {{{{\mathit{\boldsymbol{\dot x}}}}^{\rm{T}}}(s){{\mathit{\boldsymbol{Z}}}_2}{\mathit{\boldsymbol{\dot x}}}(s){\rm{d}}s{\rm{d}}\theta } } \end{aligned}$ (13)
${{V}_4}(t) = \int\limits_{ - d}^0 {\int\limits_\rho ^0 {\int\limits_{t + \beta }^t {{{{\mathit{\boldsymbol{\dot x}}}}^{\rm{T}}}(s){{\mathit{\boldsymbol{Z}}}_3}{\mathit{\boldsymbol{\dot x}}}(s){\rm{d}}s{\rm{d}}\beta {\rm{d}}\rho } } } $ (14)
${{V}_5}(t) = d(t){{\mathit{\boldsymbol{x}}}^{\rm{T}}}(t){{\mathit{\boldsymbol{R}}}_1}{\mathit{\boldsymbol{x}}}(t) + (d - d(t)){\mathit{\boldsymbol{\eta }}}_{\rm{3}}^{\rm{T}}(t){{\mathit{\boldsymbol{R}}}_2}{{\mathit{\boldsymbol{\eta }}}_3}(t)\text{。}$ (15)
$\begin{aligned}& \text{且}\,\,{{\mathit{\boldsymbol{\eta }}}_1}(t) = {[{{\mathit{\boldsymbol{x}}}^{\rm{T}}}(t)\quad {\rm{ (}}\int\limits_{t - d}^t {{\mathit{\boldsymbol{x}}}(s)} {\rm{d}}s{)^{\rm{T}}}]^{\rm{T}}},\\& \qquad \;\;{{\mathit{\boldsymbol{\eta }}}_2}(s) = {[{{\mathit{\boldsymbol{x}}}^{\rm{T}}}(s) \quad {{\mathit{\boldsymbol{f}}}^{\rm{T}}}{\rm{(}}{\mathit{\boldsymbol{x}}}(s))]^{\rm{T}}},\\& {{\mathit{\boldsymbol{\varepsilon }}}_1}(t) = \int\limits_{t - d(t)}^t {\frac{{{\mathit{\boldsymbol{x}}}(s)}}{{d(t)}}} {\rm{d}}s,{{\mathit{\boldsymbol{\varepsilon }}}_2}(t) = \int\limits_{t - d}^{t - d(t)} {\frac{{{\mathit{\boldsymbol{x}}}(s)}}{{d - d(t)}}} {\rm{d}}s,{\rm{ }}\\& \qquad \quad {{\mathit{\boldsymbol{\eta }}}_3}(t) = {[{{\mathit{\boldsymbol{x}}}^{\rm{T}}}(t)\quad {\mathit{\boldsymbol{\varepsilon }}}_{\rm{2}}^{\rm{T}}(t)]^{\rm{T}}}\text{。}\end{aligned}$

${V_i}(t)(i = 1,2,3)$ 分别沿系统(5)求导得:

$\begin{aligned}[b]& {{\dot {V}}_1}(t) = 2{\left[ {\begin{array}{*{20}{c}}{{\mathit{\boldsymbol{x}}}(t)}\\{d(t){{\mathit{\boldsymbol{\varepsilon }}}_1}(t) + (d - d(t)){{\mathit{\boldsymbol{\varepsilon }}}_2}(t)}\end{array}} \right]^{\rm{T}}}{\mathit{\boldsymbol{P}}} \;\cdot \\& \left[ {\begin{array}{*{20}{c}}{{\mathit{\boldsymbol{\dot x}}}(t)}\\{{\mathit{\boldsymbol{x}}}(t) - {\mathit{\boldsymbol{x}}}(t - d)}\end{array}} \right] + 2{\rm{\{ [}}{{\mathit{\boldsymbol{x}}}^{\rm{T}}}(t){\varSigma}_{1} - {{\mathit{\boldsymbol{f}}}^{\rm{T}}}{\rm{(}}{\mathit{\boldsymbol{x}}}(t){\rm{)]}}{{\mathit{\boldsymbol{\varLambda }}}_{\rm{1}}} + \\& \qquad \;\;{\rm{[}}{{\mathit{\boldsymbol{f}}}^{\rm{T}}}{\rm{(}}{\mathit{\boldsymbol{x}}}(t){\rm{)}} - {{\mathit{\boldsymbol{x}}}^{\rm{T}}}(t){\displaystyle{\varSigma}_2}{\rm{]}}{{\mathit{\boldsymbol{\varLambda }}}_2}{\rm{\} }}{\mathit{\boldsymbol{\dot x}}}(t) = {{\mathit{\boldsymbol{\xi }}}^{\rm{T}}}(t){{\varXi} _1}{\mathit{\boldsymbol{\xi }}}(t)\end{aligned}$ (16)
${\dot {V}_2}(t) = {{\mathit{\boldsymbol{\xi }}}^{\rm{T}}}(t)({{\mathit{\boldsymbol{\varXi }}}_2} + \dot d(t)({\mathit{\boldsymbol{\varPi }}}_5^{\rm{T}}({{\mathit{\boldsymbol{Q}}}_1} - {{\mathit{\boldsymbol{Q}}}_2}){{\mathit{\boldsymbol{\varPi }}}_5})){\mathit{\boldsymbol{\xi }}}(t)$ (17)
$\begin{aligned}[b]{{\dot {V}}_3}(t) = {{\mathit{\boldsymbol{\xi }}}^{\rm{T}}}(t)(d{\mathit{\boldsymbol{e}}}_{\rm{1}}^{\rm{T}}{{\mathit{\boldsymbol{Z}}}_1}{{\mathit{\boldsymbol{e}}}_1} + {d^2}{\mathit{\boldsymbol{\varPi }}}_{\rm{3}}^{\rm{T}}{{Z}_2}{{\mathit{\boldsymbol{\varPi }}}_3}){\mathit{\boldsymbol{\xi }}}(t) - \\[-2pt]\int\limits_{t - d}^t {{{\mathit{\boldsymbol{x}}}^{\rm{T}}}(s){{\mathit{\boldsymbol{Z}}}_1}{\mathit{\boldsymbol{x}}}(s){\rm{d}}s - d\int\limits_{t - d}^t {{{{\mathit{\boldsymbol{\dot x}}}}^{\rm{T}}}(s){{\mathit{\boldsymbol{Z}}}_2}{\mathit{\boldsymbol{\dot x}}}(s){\rm{d}}s} } \end{aligned}$ (18)

对式(18)的第1个积分项应用Jensen积分不等式得:

$\begin{aligned}[b]& - \int\limits_{t - d}^t {{{\mathit{\boldsymbol{x}}}^{\rm{T}}}(s){{\mathit{\boldsymbol{Z}}}_1}{\mathit{\boldsymbol{x}}}(s){\rm{d}}s = - \int\limits_{t - d(t)}^t {{{x}^{\rm{T}}}(s){{\mathit{\boldsymbol{Z}}}_1}{\mathit{\boldsymbol{x}}}(s){\rm{d}}s} } - \\& \qquad \qquad \quad \int\limits_{t - d}^{t - d(t)} {{{\mathit{\boldsymbol{x}}}^{\rm{T}}}\left( s \right){{\mathit{\boldsymbol{Z}}}_1}{\mathit{\boldsymbol{x}}}\left( s \right){\rm{d}}s \le } \\&\qquad {{{\mathit{\boldsymbol{\xi }}}^{\rm{T}}}(t)( - d(t){\mathit{\boldsymbol{e}}}_7^{\rm{T}}{{\mathit{\boldsymbol{Z}}}_1}{{\mathit{\boldsymbol{e}}}_7} - }(d - d(t)){\mathit{\boldsymbol{e}}}_8^{\rm{T}}{{\mathit{\boldsymbol{Z}}}_1}{{\mathit{\boldsymbol{e}}}_8}){\mathit{\boldsymbol{\xi }}}(t)\end{aligned}$ (19)

对式(18)的第2个积分项应用引理1和2得:

$\begin{aligned}[b]& - d\int\limits_{t - d}^t {{{{\mathit{\boldsymbol{\dot x}}}}^{\rm{T}}}(s){{\mathit{\boldsymbol{Z}}}_2}{\mathit{\boldsymbol{\dot x}}}(s){\rm{d}}s} = - d\int\limits_{t - d(t)}^t {{{{\mathit{\boldsymbol{\dot x}}}}^{\rm{T}}}(s){{\mathit{\boldsymbol{Z}}}_2}{{\dot {x}}}(s){\rm{d}}s} - \\[-2pt]& d\int\limits_{t - d}^{t - d(t)} {{{{\mathit{\boldsymbol{\dot x}}}}^{\rm{T}}}(s){{\mathit{\boldsymbol{Z}}}_2}{\mathit{\boldsymbol{\dot x}}}(s){\rm{d}}s} \le - \frac{d}{{d(t)}}{{\mathit{\boldsymbol{\xi }}}^{\rm{T}}}(t){\mathit{\boldsymbol{\varPi }}}_{\rm{7}}^{\rm{T}}{{\overline {\mathit{\boldsymbol{Z}}}}_2}{{\mathit{\boldsymbol{\varPi }}}_7}{\mathit{\boldsymbol{\xi }}}(t) + \\[-2pt]& \frac{d}{{d - d(t)}}{{\mathit{\boldsymbol{\xi }}}^{\rm{T}}}(t){\mathit{\boldsymbol{\varPi }}}_{\rm{8}}^{\rm{T}}{{\overline {\mathit{\boldsymbol{Z}}}}_2}{{\mathit{\boldsymbol{\varPi }}}_8}{\mathit{\boldsymbol{\xi }}}(t) \le - {{\mathit{\boldsymbol{\xi }}}^{\rm T}}(t)({\mathit{\boldsymbol{\varPi }}}_9^{\rm T}{\mathit{\boldsymbol{\varGamma }}}{{\mathit{\boldsymbol{\varPi }}}_9}){\mathit{\boldsymbol{\xi }}}(t)\end{aligned}$ (20)

将式(19)~(20)代入式(18)中,可以得到:

${\dot V_3}(t) \le {{\mathit{\boldsymbol{\xi }}}^{\rm{T}}}(t){{\mathit{\boldsymbol{\varXi }}}_3}{\mathit{\boldsymbol{\xi }}}(t)$ (21)

${V_4}(t)$ 沿系统(5)求导得:

$\begin{aligned}[b]{{\dot {V}}_4}(t) = \frac{{{d^2}}}{2}{{\mathit{\boldsymbol{\xi }}}^{\rm{T}}}(t){\mathit{\boldsymbol{\varPi }}}_3^{\rm{T}}{{Z}_3}{{\mathit{\boldsymbol{\varPi}}} _3}{\xi} (t) - {{\mathit{\boldsymbol{\varepsilon }}}_3}(t) - \\[-2pt]{{\mathit{\boldsymbol{\varepsilon}}} _4}(t) - (d - d(t))\int\limits_{t - d(t)}^t {{{{\mathit{\boldsymbol{\dot x}}}}^{\rm{T}}}(s){{\mathit{\boldsymbol{Z}}}_3}{{\dot {x}}}(s){\rm{d}}s} \end{aligned}$ (22)

式中,

$\begin{aligned}{{\mathit{\boldsymbol{\varepsilon }}}_3}(t) = - \int\limits_{ - d(t)}^0 {\int\limits_{t + \beta }^t {{{{{\dot {x}}}}^{\rm{T}}}(s){{\mathit{\boldsymbol{Z}}}_3}{{\dot {x}}}(s){\rm{d}}s{\rm{d}}\beta } }\text{,}\\{{\mathit{\boldsymbol{\varepsilon }}}_4}(t) = - \int\limits_{ - d}^{ - d(t)} {\int\limits_{t + \beta }^{t - d(t)} {{{{{\dot {x}}}}^{\rm T}}(s){{\mathit{\boldsymbol{Z}}}_3}{{\dot{x}}}(s){\rm{d}}s{\rm{d}}\beta } }\text{。} \end{aligned}$

用引理3估计 ${{\mathit{\boldsymbol{\varepsilon }}}_3}(t)$ 得:

${{\mathit{\boldsymbol{\varepsilon }}}_3}(t) \le - 2{{\mathit{\boldsymbol{\xi }}}^{\rm{T}}}(t){\mathit{\boldsymbol{\varPi }}}_{{\rm{10}}}^{\rm{T}}{{\mathit{\boldsymbol{Z}}}_3}{{\mathit{\boldsymbol{\varPi }}}_{10}}{\mathit{\boldsymbol{\xi }}}(t)$ (23)

类似地,有:

${{\mathit{\boldsymbol{\varepsilon }}}_4}(t) \le - 2{{\mathit{\boldsymbol{\xi }}}^{\rm{T}}}(t){\mathit{\boldsymbol{\varPi }}}_{{\rm{11}}}^{\rm{T}}{{\mathit{\boldsymbol{Z}}}_3}{{\mathit{\boldsymbol{\varPi }}}_{11}}{\mathit{\boldsymbol{\xi }}}(t)$ (24)

由引理1得:

$\begin{aligned}[b]& \displaystyle - (d - d(t))\int\limits_{t - d(t)}^t {{{{{\dot {x}}}}^{\rm{T}}}(s){{\mathit{\boldsymbol{Z}}}_3}{{\dot{x}}}(s){\rm{d}}s} \le \\ & \qquad \displaystyle - (d - d(t)){{\mathit{\boldsymbol{\xi }}}^{\rm{T}}}(t){\mathit{\boldsymbol{\varPi }}}_{\rm{7}}^{\rm{T}}\left(\frac{{{{{\mathit{\boldsymbol{\overline{{Z}}}}}}_3}}}{d}\right){{\mathit{\boldsymbol{\varPi }}}_7}{\mathit{\boldsymbol{\xi }}}(t)\end{aligned}$ (25)

将式(23)~(25)代入式(22)有:

${\dot {V}_4}(t) \le {{\mathit{\boldsymbol{\xi }}}^{\rm{T}}}(t){{\mathit{\boldsymbol{\varXi }}}_4}{\mathit{\boldsymbol{\xi }}}(t)$ (26)

${V_5}(t)$ 沿系统(5)求导得:

$\begin{aligned}[b]{{\dot {V}}_5}(t) \le & \;{{\mathit{\boldsymbol{\xi }}}^{\rm{T}}}(t)[{{\mathit{\boldsymbol{\varXi }}}_5} \! + \! \dot d(t)({\mathit{\boldsymbol{e}}}_{\rm{1}}^{\rm{T}}({{\mathit{\boldsymbol{R}}}_1} \! - \! {\mathit{\boldsymbol{R}}}_{11}^{(2)}){{\mathit{\boldsymbol{e}}}_1} \! - \! {\mathit{\boldsymbol{e}}}_{\rm{1}}^{\rm{T}}{\mathit{\boldsymbol{R}}}_{12}^{(2)}{{\mathit{\boldsymbol{e}}}_2}\! \\[4pt]& - {\mathit{\boldsymbol{e}}}_{\rm{1}}^{\rm{T}}{\mathit{\boldsymbol{R}}}_{12}^{(2)}{{\mathit{\boldsymbol{e}}}_4} + {\mathit{\boldsymbol{e}}}_{\rm{2}}^{\rm{T}}{\mathit{\boldsymbol{R}}}_{22}^{(2)}{{\mathit{\boldsymbol{e}}}_4} + {\mathit{\boldsymbol{e}}}_{\rm{4}}^{\rm{T}}{\mathit{\boldsymbol{R}}}_{22}^{(2)}{{\mathit{\boldsymbol{e}}}_4})]{\mathit{\boldsymbol{\xi}}} (t)\end{aligned}$ (27)

对 $\dot {V}(t)$ 中依赖于 $\dot d(t)$ 的项,有:

$\qquad \qquad \; \dot d(t){{\mathit{\boldsymbol{\xi }}}^{\rm{T}}}(t){\mathit{\boldsymbol{\varPi }}}_{{\rm{11}}}^{\rm{T}}{\mathit{\boldsymbol{\varPhi}}}{{\mathit{\boldsymbol{\varPi }}}_{11}}{\mathit{\boldsymbol{\xi }}}(t) \le {{\mathit{\boldsymbol{\xi }}}^{\rm{T}}}(t){{\mathit{\boldsymbol{\varXi }}}_6}{\mathit{\boldsymbol{\xi }}}(t)$ (28)

由式(6)~(7)知,存在对角阵 ${{\mathit{\boldsymbol{H}}}_{{i}}} \ge 0({{i}} = 1,2, \cdots ,5)$ ,使得:

$0 \le {\varpi _i}(s): = 2{[{{\mathit{\boldsymbol{\varSigma }}}_1}{\mathit{\boldsymbol{x}}}(s) - {\mathit{\boldsymbol{f}}}({\mathit{\boldsymbol{x}}}(s))]^{\rm{T}}}{{\mathit{\boldsymbol{H}}}_{{i}}}[{\mathit{\boldsymbol{f}}}({\mathit{\boldsymbol{x}}}(s)) - {{\mathit{\boldsymbol{\varSigma }}}_2}{\mathit{\boldsymbol{x}}}(s)],\;({{i}} = 1,2,3)$ (29)
$\begin{aligned}0 \le & {\varpi _i}({s_1},{s_2}): = 2[{{\mathit{\boldsymbol{\varSigma }}}_1}({\mathit{\boldsymbol{x}}}({s_1}) - {\mathit{\boldsymbol{x}}}({s_2})) - ({\mathit{\boldsymbol{f}}}({\mathit{\boldsymbol{x}}}({s_1})) - {\mathit{\boldsymbol{f}}}({\mathit{\boldsymbol{x}}}({s_2}))){]^{\rm{T}}} \cdot \\& {{\mathit{\boldsymbol{H}}}_{{i}}}[({\mathit{\boldsymbol{f}}}({\mathit{\boldsymbol{x}}}({s_1})) - {\mathit{\boldsymbol{f}}}({\mathit{\boldsymbol{x}}}({s_2}))) - {{\mathit{\boldsymbol{\varSigma }}}_2}({\mathit{\boldsymbol{x}}}({s_1}) - {\mathit{\boldsymbol{x}}}({s_2}))],\;({{i}} = 4,5)\end{aligned}$ (30)

于是,下列不等式成立:

${\varpi _1}(t) + {\varpi _2}(t - d(t)) + {\varpi _3}(t - d) \ge 0$ (31)
$\qquad \quad \quad \;\;{\varpi _4}(t,t - d(t)) + {\varpi _5}(t - d(t),t - d) \ge 0$ (32)

将式(31)~(32)加起来得:

${{\mathit{\boldsymbol{\xi }}}^{\rm{T}}}(t){{\mathit{\boldsymbol{\varXi }}}_7}{\mathit{\boldsymbol{\xi }}}(t) \ge 0$ (33)

由式(16)~(33)得:

$\begin{aligned}[b]\dot {V}(t) - & {{\mathit{\boldsymbol{y}}}^{\rm{T}}}(t){\mathit{\boldsymbol{Gy}}}(t) - 2{{\mathit{\boldsymbol{y}}}^{\rm{T}}}(t){\mathit{\boldsymbol{SJ}}}(t) - {{\mathit{\boldsymbol{J}}}^{\rm{T}}}(t) \times \\[3pt]& ({\mathit{\boldsymbol{T}}} - \gamma {\mathit{\boldsymbol{I}}}){\mathit{\boldsymbol{J}}}(t) \le {{\mathit{\boldsymbol{\xi }}}^{\rm{T}}}(t){\mathit{\boldsymbol{\varXi \xi }}}(t)\end{aligned}$ (34)

因此,由式(8)可知,对任意 ${\mathit{\boldsymbol{\xi }}}(t) \ne 0$ ,有:

$\begin{aligned}[b]{{\dot {V}}}(t) - & {{\mathit{\boldsymbol{y}}}^{\rm{T}}}(t){\mathit{\boldsymbol{Gy}}}(t) - 2{{\mathit{\boldsymbol{y}}}^{\rm{T}}}(t){\mathit{\boldsymbol{SJ}}}(t) - {{\mathit{\boldsymbol{J}}}^{\rm{T}}}(t) \times \\& ({\mathit{\boldsymbol{T}}} - \gamma {\mathit{\boldsymbol{I}}}){\mathit{\boldsymbol{J}}}(t) \le 0\end{aligned}$ (35)

对式(35)两边从0到 ${t_p}({t_p} > 0)$ 积分得:

$\begin{aligned}[b]\int_0^{{t_p}} & {[{{\dot {V}}}(t) - {{\mathit{\boldsymbol{y}}}^{\rm T}}(t){\mathit{\boldsymbol{Gy}}}(t) - 2{{\mathit{\boldsymbol{y}}}^{\rm{T}}}(t){\mathit{\boldsymbol{SJ}}}(t)} - \\& {{\mathit{\boldsymbol{J}}}^{\rm{T}}}(t)({\mathit{\boldsymbol{T}}} - \gamma {\mathit{\boldsymbol{I}}}){\mathit{\boldsymbol{J}}}(t)]{\rm{d}}t \le 0\end{aligned}$ (36)

式(36)表明,在零初始状态下有式(37)成立。

$\begin{aligned}[b]\int_0^{{t_p}} & {[ - {{\mathit{\boldsymbol{y}}}^{\rm T}}(t){\mathit{\boldsymbol{Gy}}}(t) - 2{{\mathit{\boldsymbol{y}}}^{\rm{T}}}(t){\mathit{\boldsymbol{SJ}}}(t)} \! - \! \\ & {{\mathit{\boldsymbol{J}}}^{\rm{T}}}(t)({\mathit{\boldsymbol{T}}} \! - \! \gamma {\mathit{\boldsymbol{I}}}){\mathit{\boldsymbol{J}}}(t)]{\rm{d}}t \le - {V}({t_p}) \le 0\end{aligned}$ (37)

故由定义知,若式(8)~(9)满足,系统(5)为严格 $({\mathit{\boldsymbol{G,S,T}}}) - \gamma $ 耗散的。结论得证。

注意:1)在许多电路中,放大器的输入–输出函数既不是单调递增的也不是连续可微的,因此,在设计和实现人工神经网络时,非单调函数可能更适合于描述神经元的激活函数。本文的假设条件自2006年由Wang等[26]首次提出以来,关于它的研究已引起许多学者[15–17]的关注。本文假设中的 $l_j^ - \text{、}l_j^{\rm{ + }}({\mathop{j}\nolimits} = 1,2, \cdots ,n)$ 可以取正数、负数或者零,因此,它比常见的sigmoid函数或Lipschitz条件[18–23]具有更一般的形式。

2)不同于文献[10–14,17–21],本文在 ${V}(t)$ 中增加了 ${V_5}(t)$ 项,它充分考虑了时滞 ${{d}}(t)$ 和时滞微分 $\dot d(t)$ 的信息。另外,在应用Wirtinger积分不等式估计1阶积分 ${\rm{ - }}d\int_{t - d(t)}^t {{{{\mathit{\boldsymbol{\dot x}}}}^{\rm{T}}}(s){{\mathit{\boldsymbol{Z}}}_2}{\mathit{\boldsymbol{\dot x}}}(s){\rm{d}}s} $ 的上界时,由于Wirtinger积分不等式同时考虑了 ${\mathit{\boldsymbol{x}}}(t)\text{、}{\mathit{\boldsymbol{x}}}(t - d(t))$ 和 $\int_{t - d(t)}^t {{\mathit{\boldsymbol{x}}}(s){\rm{d}}s} $ 这3方面信息,因此,它比Jensen积分不等式更能精确地估计1阶积分的上界,这将带来较弱的保守性。

作为耗散的一种特殊情况,时滞神经网络的无源分析也受到一些学者的关注[18–21]。若设定理中 ${\mathit{\boldsymbol{G}}} = 0,{\mathit{\boldsymbol{S}}} = {\mathit{\boldsymbol{I}}},$ ${\mathit{\boldsymbol{T}}} = 2\gamma {\mathit{\boldsymbol{I}}}$ ,易得系统(5)的无源条件。

推 论  给定 $d>0$ 和 $\mu$ ,如果存在 $2n \times 2n$ 维矩阵 ${\mathit{\boldsymbol{P}}} > 0$ 、 ${{\mathit{\boldsymbol{Q}}}_{i}} > 0(i = 1,2)$ 、 ${{\mathit{\boldsymbol{R}}}_2} > 0$ 和 ${\mathit{\boldsymbol{X}}}$ , $n \times n$ 维矩阵 ${{\mathit{\boldsymbol{Z}}}_{i}} > 0(i = 1,2,3)$ 和 ${{\mathit{\boldsymbol{R}}}_1} > 0$ ,对角阵 ${{\mathit{\boldsymbol{\varLambda }}}_{{i}}} \ge 0({{i}} = 1,2)$ 、 ${{\mathit{\boldsymbol{H}}}_i} \ge 0(i = 1,2, \cdots ,5)$ ,以及参数 $\gamma > 0$ ,使得当 $d(t) \in \{ 0,d\} $ 时,线性矩阵不等式(9)和(38)成立:

$\sum\limits_{i = 1}^7 {{{\mathit{\boldsymbol{\varXi}}} _i}} + {\overline {\mathit{\boldsymbol{\varXi}}} _8} < 0$ (38)

则系统(5)为无源的。

式中: ${\overline {\mathit{\boldsymbol{\varXi}}} _8} = {\rm{sym}}\{ {\mathit{\boldsymbol{e}}}_{\rm{4}}^{\rm{T}}{\mathit{\boldsymbol{e}}}{}_9\} - \gamma {\mathit{\boldsymbol{e}}}_9^{\rm{T}}{\mathit{\boldsymbol{e}}}{}_9$ ${{\varXi} _i}({{i}} = 1,2, \cdots ,7)$ 已在定理中定义。

下面给出求解最优耗散性能指标的优化模型:

对给定的d和μ,设 $\delta {\rm{ = - }}\gamma $ ,求解:      $\min\; \delta \quad {\rm s.t.}$ 式(8)~(9)成立。

则得到最优耗散指标 ${\gamma ^{\rm{*}}}({\gamma ^{\rm{*}}} = - \delta )$
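在按定理拼装好 ${\mathit{\boldsymbol{\varXi}}}$ 、 ${\mathit{\boldsymbol{\varGamma}}}$ 等分块矩阵后,上述优化模型即为标准的"线性目标+线性矩阵不等式约束"问题。下面给出一个用Python/CVXPY求解此类问题的通用框架示意(其中的系统矩阵与有界实引理型LMI均为作者假设的小例子,并非定理中的 ${\mathit{\boldsymbol{\varXi}}}$ ,仅用于说明求解流程):

```python
import numpy as np
import cvxpy as cp

# "线性目标 + LMI约束" 优化模型的通用求解框架示意(作者假设的小例子)。
# 这里以有界实引理型LMI为例:min gamma^2, s.t. P>0 且分块LMI半负定;
# 定理中的 min delta s.t. 式(8)~(9) 在拼装好分块矩阵后可按同样方式求解。
n, m = 2, 1
A = np.array([[-2.0, 1.0], [0.5, -1.5]])   # 假设的稳定系统矩阵(非例1、例2数据)
B = np.array([[1.0], [0.0]])               # 假设的输入矩阵
C = np.array([[1.0, 1.0]])                 # 假设的输出矩阵

P = cp.Variable((n, n), symmetric=True)
gamma2 = cp.Variable()                     # 以 gamma^2 为决策变量,保持LMI线性
lmi = cp.bmat([[A.T @ P + P @ A + C.T @ C, P @ B],
               [B.T @ P, -gamma2 * np.eye(m)]])
lmi = (lmi + lmi.T) / 2                    # 显式对称化,便于施加半定约束
prob = cp.Problem(cp.Minimize(gamma2),
                  [P >> 1e-6 * np.eye(n), lmi << 0])
prob.solve(solver=cp.SCS)
print("最优性能指标(此例中为L2增益):", float(np.sqrt(gamma2.value)))
```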

3 数值例子和仿真

将给出两个例子和仿真来验证所提方法的有效性。

例1:考虑一个二阶基于忆阻时滞神经网络(1),其参数矩阵为:

$\begin{aligned}& {\mathit{\boldsymbol{A}}}({\mathit{\boldsymbol{x}}}(t)) = {\rm{diag}}\{ {a_1}({x_1}(t)),{a_2}({x_2}(t))\} ,\\& {\mathit{\boldsymbol{W}}}({\mathit{\boldsymbol{x}}}(t)) = \left[ {\begin{array}{*{20}{c}}{{w_{11}}({x_1}(t))} & {{w_{12}}({x_1}(t))}\\{{w_{21}}({x_2}(t))} & {{w_{22}}({x_2}(t))}\end{array}} \right],\\[2pt]& {{\mathit{\boldsymbol{W}}}_1}({\mathit{\boldsymbol{x}}}(t)) = \left[ {\begin{array}{*{20}{c}}{w_{11}^{(1)}({x_1}(t))} & {w_{12}^{(1)}({x_1}(t))}\\{w_{21}^{(1)}({x_2}(t))} & {w_{22}^{(1)}({x_2}(t))}\end{array}} \right]\text{。}\end{aligned}$

其中:

$\begin{aligned}{a_1}({x_1}(t))\!\! =\!\! \left\{ \begin{array}{l}\!\!\!\!\!2.1\text{,}\left| {{x_1}(t)} \right| \!\le\! {\rm{1}}\text{;}\\\!\!\!\!\!1.9\text{,}\left| {{x_1}(t)} \right| \!>\! {\rm{1}}\text{。}\end{array} \right.{a_2}({x_2}(t)) \!=\!\! \left\{ \begin{array}{l}\!\!\!\!\!1.6\text{,}\left| {{x_2}(t)} \right| \!\le\! {\rm{1}}\text{;}\\\!\!\!\!\!1.4\text{,}\left| {{x_2}(t)} \right| \!>\! {\rm{1}}\text{。}\end{array} \right.\end{aligned}$
$\begin{aligned}& {w_{11}}({x_1}(t)) \!\!=\!\! \left\{ \begin{array}{l}\!\!\!\!\! - 1.2\text{,}\!\!\!\left| {{x_1}(t)} \right| \!\!\le\!\! {\rm{1}}\text{;}\\\!\!\!\!\! - 0.9\text{,}\!\!\!\left| {{x_1}(t)} \right| \!\!>\!\! {\rm{1}}\text{。}\end{array} \right. \!\!\!{w_{12}}({x_1}(t)) \!\!=\!\! \left\{ \begin{array}{l}\!\!\!\!\!1.1\text{,}\!\!\!\left| {{x_1}(t)} \right| \!\!\le\!\! {\rm{1}}\text{;}\\\!\!\!\!\!0.9\text{,}\!\!\!\left| {{x_1}(t)} \right| \!\!>\!\! {\rm{1}}\text{。}\end{array} \right.\\[2pt]& {w_{21}}({x_2}(t)) \!\!=\!\! \left\{ \begin{array}{l}\!\!\!\!\!0.6\text{,}\!\!\!\left| {{x_2}(t)} \right| \!\!\le\!\! {\rm{1}}\text{;}\\\!\!\!\!\!0.4\text{,}\!\!\!\left| {{x_2}(t)} \right| \!\!>\!\! {\rm{1}}\text{。}\end{array} \right. \!\!\!{w_{22}}({x_2}(t)) \!\!=\!\! \left\{ \begin{array}{l} \!\!\!\!\!- 1.2\text{,}\!\!\!\left| {{x_2}(t)} \right| \!\!\le\!\! {\rm{1}}\text{;}\\ \!\!\!\!\!- 0.9\text{,}\!\!\!\left| {{x_2}(t)} \right| \!\!>\!\! {\rm{1}}\text{。}\end{array} \right.\\[2pt]& w_{11}^{(1)}({x_1}(t)) \!\!=\!\! \left\{ \begin{array}{l} \!\!\!\!\!- 0.6\text{,}\!\!\!\left| {{x_1}(t)} \right| \!\!\le\!\! {\rm{1}}\text{;}\\ \!\!\!\!\!- 0.4\text{,}\!\!\!\left| {{x_1}(t)} \right| \!\!>\!\! {\rm{1}}\text{。}\end{array} \right. \!\!\!w_{12}^{(1)}({x_1}(t)) \!\!=\!\! \left\{ \begin{array}{l}\!\!\!\!\!0.7\text{,}\!\!\!\left| {{x_1}(t)} \right| \!\!\le\!\! {\rm{1}}\text{;}\\\!\!\!\!\!0.5\text{,}\!\!\!\left| {{x_1}(t)} \right| \!\!>\!\! {\rm{1}}\text{。}\end{array} \right.\\[2pt]& w_{21}^{(1)}({x_2}(t)) \!\!=\!\! \left\{ \begin{array}{l}\!\!\!\!\!0.8\text{,}\!\!\!\left| {{x_2}(t)} \right| \!\!\le\!\! {\rm{1}}\text{;}\\\!\!\!\!\!0.6\text{,}\!\!\!\left| {{x_2}(t)} \right| \!\!>\!\! {\rm{1}}\text{。}\end{array} \right. \!\!\!w_{22}^{(1)}({x_2}(t)) \!\!=\!\! \left\{ \begin{array}{l}\!\!\!\!\!0.9\text{,}\!\!\!\left| {{x_2}(t)} \right| \!\!\le\!\! {\rm{1}}\text{;}\\\!\!\!\!\!0.7\text{,}\!\!\!\left| {{x_2}(t)} \right| \!\!>\!\! {\rm{1}}\text{。}\end{array} \right.\end{aligned}$

应用微分包含和集值映射理论得:

$\begin{array}{l}{\mathit{\boldsymbol{A}}} ={\rm{diag}}\{ 2,1.5\} ,{\mathit{\boldsymbol{W}}} = \left[ {\begin{array}{*{20}{c}}{ - 1} \;\;\; 1\\{0.5} \;\;\; { - 1}\end{array}} \right],{{\mathit{\boldsymbol{W}}}_1} = \left[ {\begin{array}{*{20}{c}}{ - 0.5} \;\;\; {0.6}\\{0.7}\;\;\; {0.8}\end{array}} \right],\end{array}$

$l_1^ + = l_2^ + = 0.9,\;l_1^ - = l_2^ - = - 0.1$

${\mathit{\boldsymbol{G}}} \! = \! \left[ {\begin{array}{*{20}{c}} { \! - 0.9} & \! 0\\ \! {\rm{*}} & \! { - 0.9} \end{array}} \right]$ ${\mathit{\boldsymbol{S}}} \! = \! \left[ {\begin{array}{*{20}{c}} \! {0.5} & \! 0\\ \! {0.3} & \! 1 \end{array}} \right]$ ${\mathit{\boldsymbol{T}}} \! = \! \left[ {\begin{array}{*{20}{c}} \! 2 & \! 0\\ \! {\rm{*}} & \! 2 \end{array}} \right]$

d=0.4,对不同μ,由定理得出最优耗散性能γ*。表1为本文方法(定理)的结果和文献[1012]结果的对照。

表1 不同μ时,最优耗散性能γ* Tab. 1 Optimal dissipativity performance γ* for various μ

由表1可以看出,定理所得结果比文献[12]的结果提高了5%,这说明本文方法能更有效地减少严格耗散条件的保守性。

例2:考虑一个2阶基于忆阻时滞神经网络(1)[13,17–21],其系数矩阵为:

$\begin{array}{l}{A} \! = \! \left[{\begin{array}{*{20}{c}}{ 1.4} \;\;\; \! 0\\0 \;\;\;\;\; {\! 1.5}\end{array}}\right],\;{W} \! = \! \left[{\begin{array}{*{20}{c}}\! {1.2} \;\;\; \! 1.0\\\! { - 1.2} \;\;\; \! {1.3}\end{array}}\right],{{W}_1} \! = \! \left[{\begin{array}{*{20}{c}}\! \!{ - 0.2} \;\;\; {0.5}\\ {0.3} \;\;\;\;\; { - 0.8}\end{array}}\right]\text{。}\end{array}$

激活函数 ${f_{i}}({x_{i}}(t)) = 0.05(\left| {{x_{i}}(t) + 1} \right| - \left| {{x_{{i}}}(t) - 1} \right|)\text{,}$ $({i} = 1,2)$

针对不同的μ,表2为由本文推论和文献[13,17–21]获得的保证系统(1)无源的最大允许时滞上界d。从表2可以看出,提出的方法大大地减少了文献[13,17–21]方法的保守性。

表2 不同μ时,最大允许时滞上界d Tab. 2 Maximum allowable upper bounds d for various μ

特别地,当 $d(t) = 1 + 0.27\sin (7t)$ 时,图2、3表明不含 ${\mathit{\boldsymbol{J}}}(t)$ 的系统(1)是稳定的。然而,由图4、5可以看出, ${\mathit{\boldsymbol{J}}}(t)$ 的存在的确破坏了系统(1)的稳定性。

图2 神经网络(1)不含 ${\mathit{\boldsymbol{J}}}(t)$ 时,状态 ${x_1(t)}-{x_2(t)}$ 的关系 Fig. 2 States x1(t)–x2(t) relation curve of neural networks (1) with ${\mathit{\boldsymbol{J}}}(t) = {[0,0]^{\rm{T}}}$

图3 神经网络(1)不含 ${\mathit{\boldsymbol{J}}}(t)$ 时, ${x_1(t)}$ ${x_2(t)}$ 的状态轨迹 Fig. 3 State trajectories of variables x1(t) and x2(t) of neural networks (1) with ${\mathit{\boldsymbol{J}}}(t) = {[0,0]^{\rm{T}}}$

图4 神经网络(1)含有 ${\mathit{\boldsymbol{J}}}(t)$ 时,状态 ${x_1(t)}-{x_2(t)}$ 的关系 Fig. 4 States x1(t)–x2(t) relation curve of neural networks (1) with ${\mathit{\boldsymbol{J}}}(t) = {[{\rm{5cos(2t)}}, - 3{\rm{sin(0}}{\rm{.5t)]}}^{\rm{T}}}$

图5 神经网络(1)含有 ${\mathit{\boldsymbol{J}}}(t)$ 时, ${x_1(t)}$ ${x_2(t)}$ 的状态轨迹 Fig. 5 State trajectories of variables x1(t) and x2(t) of neural networks (1) with ${\mathit{\boldsymbol{J}}}(t) = {[{\rm{5cos(2t)}}, - 3{\rm{sin(0}}{\rm{.5t)]}}^{\rm{T}}}$
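下面给出一个按式(5)的代表形式复现上述仿真的Python示意片段(采用带历史缓冲的显式欧拉法;步长、初始历史函数等均为作者为演示而假设的取值,并非原文给出):

```python
import numpy as np
import matplotlib.pyplot as plt

# 例2系统按式(5)的代表形式仿真的示意片段(显式欧拉 + 历史缓冲)。
A  = np.diag([1.4, 1.5])
W  = np.array([[1.2, 1.0], [-1.2, 1.3]])
W1 = np.array([[-0.2, 0.5], [0.3, -0.8]])
f  = lambda x: 0.05 * (np.abs(x + 1) - np.abs(x - 1))          # 激活函数
d  = lambda t: 1 + 0.27 * np.sin(7 * t)                        # 时变时滞
J  = lambda t: np.array([5*np.cos(2*t), -3*np.sin(0.5*t)])     # 外部输入(图4、5情形;取零向量即图2、3情形)

h, T, dmax = 1e-3, 30.0, 1.27          # 步长、仿真时长与时滞上界(步长为演示假设值)
N, hist = int(T / h), int(np.ceil(dmax / h))
x = np.zeros((N + hist, 2))
x[:hist] = np.array([0.5, -0.5])       # 常值初始历史函数(演示假设值)

for k in range(hist, N + hist - 1):
    t = (k - hist) * h
    kd = k - int(round(d(t) / h))      # x(t - d(t)) 对应的缓冲索引
    dx = -A @ x[k] + W @ f(x[k]) + W1 @ f(x[kd]) + J(t)
    x[k + 1] = x[k] + h * dx           # 显式欧拉步

ts = np.arange(N) * h
plt.plot(ts, x[hist:hist + N, 0], label='x1(t)')
plt.plot(ts, x[hist:hist + N, 1], label='x2(t)')
plt.xlabel('t'); plt.legend(); plt.show()
```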

4 结 论

应用微分包含和集值映射理论,结合Wirtinger积分不等式和倒凸技术,得到了较弱保守性的时滞依赖的耗散条件,数值例子表明,与现有文献相比,所得到的耗散条件有较弱的保守性。仿真结果验证了耗散条件的有效性。在接下来的研究工作中,考虑把本文方法推广到忆阻时滞神经网络的状态估计、有限时间稳定等其他动力行为中。

参考文献
[1]
Chua L. Memristor-the missing circuit element[J]. IEEE Transactions on Circuit Theory, 1971, 18(5): 507-519. DOI:10.1109/TCT.1971.1083337
[2]
Strukov D, Snider G, Stewart D. The missing memristor found[J]. Nature, 2008, 453(7191): 80-83.
[3]
Hu Jin, Wang Jun. Global uniform asymptotic stability of memristor-based recurrent neural networks with time delays[C]//Proceedings of the 2010 International Joint Conference on Neural Networks. New York: IEEE, 2010: 1-8.
[4]
Hu Jin, Song Qiankun. Global uniform asymptotic stability of memristor-based recurrent neural networks with time delays[J]. Applied Mathematics and Mechanics, 2013, 34(7): 724-735. [胡进, 宋乾坤. 基于忆阻的时滞神经网络的全局稳定性[J]. 应用数学和力学, 2013, 34(7): 724-735.]
[5]
Zhang Guodong, Shen Yi, Wang Leimin. Global anti-synchronization of a class of chaotic memristive neural networks with time-varying delays[J]. Neural Networks, 2013, 46(11): 1-8.
[6]
Mathiyalagan K, Anbuvithya R, Sakthivel R. Reliable stabilization for memristor-based recurrent neural networks with time-varying delays[J]. Neurocomputing, 2015, 153(4): 140-147.
[7]
Wu Zhaojing, Karimi H R. A novel framework of theory on dissipative system[J]. International Journal of Innovative Computing, Information and Control, 2013, 9(7): 2755-2769.
[8]
Zhang Hao, Guan Zhihong, Feng Gang. Reliable dissipative control for stochastic impulsive systems[J]. Automatica, 2008, 44(4): 1004-1010. DOI:10.1016/j.automatica.2007.08.018
[9]
Wu Zengguang, Shi Peng, Su Hongye. Dissipativity analysis for discrete-time stochastic neural networks with time-varying delays[J]. IEEE Transactions on Neural Networks and Learning Systems, 2013, 24(3): 345-355. DOI:10.1109/TNNLS.2012.2232938
[10]
Wu Zengguang, Park J H, Su Hongye. Robust dissipativity analysis of neural networks with time-varying delay and randomly occurring uncertainties[J]. Nonlinear Dynamics, 2012, 69(3): 1323-1332. DOI:10.1007/s11071-012-0350-1
[11]
Zeng Hongbing, Park J H, Xia Jianwei. Further results on dissipativity analysis of neural networks with time-varying delay and randomly occurring uncertainties[J]. Nonlinear Dynamics, 2015, 79(1): 83-91. DOI:10.1007/s11071-014-1646-0
[12]
Zeng Hongbing, He Yong, Shi Peng. Dissipativity analysis of neural networks with time-varying delays[J]. Neurocomputing, 2015, 168(11): 741-746.
[13]
Xiao Jianying, Zhong Shouming, Li Yongtao. Relaxed dissipativity criteria for memristive neural networks with leakage and time-varying delays[J]. Neurocomputing, 2016, 171(1): 708-718.
[14]
Wen Shiping, Zeng Zhigang, Huang Tingwen. Exponential stability analysis of memristor-based recurrent neural networks with time-varying delays[J]. Neurocomputing, 2012, 97(1): 233-240.
[15]
Gyurkovics É. A note on Wirtinger-type integral inequalities for time-delay systems[J]. Automatica, 2015, 61(11): 44-46.
[16]
Seuret A, Gouaisbaut F. Wirtinger-based integral inequality: application to time-delay systems[J]. Automatica, 2013, 49(9): 2860-2866. DOI:10.1016/j.automatica.2013.05.030
[17]
Zeng Hongbing, He Yong, Shi Peng. Passivity analysis for neural networks with a time-varying delay[J]. Neurocomputing, 2011, 74(5): 730-734. DOI:10.1016/j.neucom.2010.09.020
[18]
Zhang Zexu, Mou Shaoshuai, Lam J. New passivity criteria for neural networks with time-varying delay[J]. Neural Networks, 2009, 22(7): 864-868. DOI:10.1016/j.neunet.2009.05.012
[19]
Xu Shengyuan, Zheng Weixing, Zou Yun. Passivity analysis of neural networks with time-varying delays[J]. IEEE Transactions on Circuits and Systems Ⅱ: Express Briefs, 2009, 56(4): 325-329. DOI:10.1109/TCSII.2009.2015399
[20]
Zhang Baoyong, Xu Shengyuan, Lam J. Relaxed passivity conditions for neural networks with time-varying delays[J]. Neurocomputing, 2014, 142(10): 299-306.
[21]
Li Yuanyuan, Zhong Shouming, Cheng Jun. New passivity criteria for uncertain neural networks with time-varying delay[J]. Neurocomputing, 2016, 171(1): 1003-1012.
[22]
Filippov A F. Differential Equations with Discontinuous Righthand Sides[M]. Dordrecht: Kluwer, 1988.
[23]
Aubin J, Frankowska H. Set-Valued Analysis[M]. Boston: Springer, 2009.
[24]
Park P, Ko J W, Jeong C K. Reciprocally convex approach to stability of systems with time-varying delays[J]. Automatica, 2011, 47(1): 235-238. DOI:10.1016/j.automatica.2010.10.014
[25]
Sun Jian, Liu Guoping, Chen Jie. Improved delay-range-dependent stability criteria for linear systems with time-varying delay[J]. Automatica, 2010, 46(2): 466-470. DOI:10.1016/j.automatica.2009.11.002
[26]
Wang Zidong, Shu Huisheng, Liu Yurong. Robust stability analysis of generalized neural networks with discrete and distributed time delays[J]. Chaos, Solitons & Fractals, 2006, 30(4): 886-896.