BP Neural Network Based On-board Training for Real-time Locomotion Mode Recognition in Robotic Transtibial Prostheses

Dongfang Xu and Qining Wang, Senior Member, IEEE

2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, November 4-8, 2019

Abstract: Locomotion mode recognition based on off-line trained models brings difficulties in integration and application to wearable robots. In this paper, we put forward an on-board training method based on a back-propagation (BP) neural network and developed real-time locomotion mode recognition on a robotic transtibial prosthesis. Three transtibial amputees participated in the study to finish six designed experimental tasks (standing, level-ground walking, stair ascending and descending, ramp ascending and descending) with robotic transtibial prostheses. Data of the six locomotion modes were collected under the normal-speed condition as the training data set to train the model on board. Based on the on-board trained models, real-time recognition experiments were conducted under three different speed conditions. The total recognition accuracies were 91.54%, 96.72%, and 95.35% for the slow, normal, and fast speeds, respectively. The results showed some adaptation of the recognition to the six locomotion modes at different speeds. The on-board training strategy was feasible and effective, with satisfactory performance.

This work was supported by the National Key R&D Program of China (No. 2018YFB1307302), the National Natural Science Foundation of China (No. 91648207, 61533001), the Beijing Natural Science Foundation (No. L182001), and the Beijing Municipal Science and Technology Project (No. Z181100009218007). The authors are with the Robotics Research Group, College of Engineering, Peking University, Beijing 100871, China, with the Beijing Innovation Center for Engineering Science and Advanced Technology (BIC-ESAT), Peking University, China, and also with the Beijing Engineering Research Center of Intelligent Rehabilitation Engineering, Beijing 100871, China. E-mail: qiningwang

I. INTRODUCTION

Robotic prostheses can provide assistance to lower-limb amputees during walking by adopting suitable control strategies [1]-[3]. As is known, the control strategy of a robotic prosthesis needs to be adjusted according to different locomotion modes or terrains. Therefore, recognizing the locomotion mode accurately and in a timely manner has been a key issue, and continuous efforts have been devoted to locomotion mode recognition in the robotic prosthesis field [4], [5].

For locomotion mode recognition in robotic prostheses, surface electromyography (sEMG) has been applied for decades [6], [7], since it records skin-surface electromyographic signals that reveal human locomotion intent. The Inertial Measurement Unit (IMU) is also a widely used mechanical sensor that outputs the position information of people or robots during walking [8], [9]. In addition, a capacitive sensing method has been put forward for lower-limb prostheses [10], as it can measure muscle deformation. Compared with single-sensor methods, multi-sensor fusion has become a popular approach to recognizing locomotion modes, such as the fusion of sEMG and mechanical signals [11], the fusion of mechanical and capacitive signals [12], and so on.

Based on these different sensing methods, several real-time locomotion mode recognition studies have been developed for lower-limb prostheses. Spanias et al. conducted real-time recognition on a transfemoral prosthesis by fusing sEMG and mechanical signals [13], [14]. Zhang et al. conducted real-time recognition in MATLAB and achieved 95% accuracy based on off-line training by fusing sEMG and mechanical signals [15]. These studies adopted multi-sensor fusion methods, which can improve recognition performance but bring more integration problems compared with a single type of sensor.
Our recent study developed real-time locomotion mode recognition for transtibial amputees with robotic prostheses based on mechanical sensors and achieved good performance [16].

Real-time recognition, whether in MATLAB or on board, is meaningful for real-time control; however, it is not sufficient when the model used for real-time recognition is derived from off-line training. Off-line training resorts to external devices to train the model for online recognition, which is not convenient to integrate with a robotic prosthesis. To solve this problem, we put forward on-board training for real-time recognition on the robotic transtibial prosthesis. For on-board training, one important point is to choose a proper classification algorithm. For locomotion mode recognition, different classification algorithms have their own characteristics and are widely used, such as the support vector machine (SVM) [17], quadratic discriminant analysis (QDA) [16], and linear discriminant analysis (LDA) [13]. On-board training for locomotion mode recognition has been studied with the LDA classification algorithm [13]. Compared with LDA, the back-propagation (BP) neural network shows relatively higher recognition accuracy and efficiency [18]. Our previous studies also compared and analyzed the recognition (without training) performance of different algorithms in the control circuit of a prosthesis [18], [19]. In terms of time consumption, power consumption, and resource consumption, the BP neural network achieved good performance with high accuracy [18], [19]. Moreover, the BP neural network has few parameters and is easy to embed in the control circuit of a prosthesis. In this study, the BP neural network was implemented in the control circuit for model training and subsequent recognition.

The task of the study was to recognize six locomotion modes, i.e., Standing (St), Level-Ground Walking (LG), Stair Ascending (SA), Stair Descending (SD), Ramp Ascending (RA), and Ramp Descending (RD), at different speeds (slow, normal, fast) based on the model trained on board at the normal speed. Three transtibial amputees joined the research.

II. METHOD

A. Robotic Transtibial Prosthesis

As shown in Fig. 1, a robotic transtibial prosthesis commercialized by Beijing SpeedSmart Co., Ltd. (a spin-off company of Peking University) was adopted in the study. The prosthesis model can be seen in our previous studies [1]-[3].

Fig. 1. The amputee-prosthesis wearing diagram: (a) a transtibial amputee wearing the robotic prosthesis and (b) a local zoom of the prosthesis, showing the angle sensor, strain gauge, power switch, the two IMUs, the control circuit, the battery, and the carbon fiber foot.

The internal components of the prosthesis include the control circuit, the sensors (one strain gauge, one angle sensor, and two IMUs), the battery, the power switch, and the carbon fiber foot. A DC motor integrated in the prosthesis drives the ankle joint. The weight of the robotic transtibial prosthesis is about 2 kg. The strain gauge measures the deformation of the carbon fiber foot, and the gait phases (stance and swing) are segmented based on this deformation information. The angle sensor detects the ankle's rotation with one degree of freedom. The two IMUs are mounted at the shank and foot positions of the robotic prosthesis, and each IMU provides tri-axial angle, acceleration, and angular velocity information.

The on-board training and recognition procedures were executed in the control circuit of the prosthesis. The control circuit consists of a Micro Controller Unit (MCU, a 216 MHz Cortex-M7 processor) and an Application Processor Unit (APU, a 667 MHz Cortex-A9 MPCore based processing system with FPGA-based programmable logic). The MCU collects and synchronizes the different prosthesis sensor signals and performs the prosthesis control, while the APU executes the on-board training and real-time recognition algorithms.

B. Control Strategy

Control strategies are set based on the different gait phases [1]-[3]. During the swing phase, a Proportional-Derivative (PD) position controller is used to return the ankle to its equilibrium position:

$$D_a = k_p(\theta - \theta_0) + k_d(\dot{\theta} - \dot{\theta}_0) \qquad (1)$$

where $D_a$ is the active controller output for the motor driver, $k_p$ and $k_d$ are predefined constants, $\theta$ is the current joint angle, $\dot{\theta}$ is the angular velocity, and $\theta_0$ and $\dot{\theta}_0$ are the desired equilibrium angle and desired angular velocity of the swing phase, respectively.

During the stance phase, a torque control strategy is adopted to provide resistance and assistance. The method produces an equivalent braking torque $\tau_{ed}$ that depends on the duty cycle $D$ of the Pulse Width Modulation (PWM) signal:

$$\tau_{ed} = k_{ed} D w \qquad (2)$$

where $k_{ed}$ is the proportionality coefficient, $w$ is the motor's rotational speed, and $D$ is the duty cycle. For a given amputee, the same control parameters were used across all locomotion modes, while the control parameters differed between amputees.
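The following Python sketch only illustrates how the two gait-phase controllers of Eqs. (1) and (2) could be evaluated in a control loop; the gains, set points, duty cycle, and function names are hypothetical placeholders for illustration, not values or code from the prosthesis firmware.

```python
# Minimal sketch of the gait-phase-dependent ankle control of Eqs. (1) and (2).
# All gains, set points, and the stance/swing flag are illustrative assumptions.

def swing_pd_output(theta, theta_dot, theta_eq, theta_dot_eq, kp, kd):
    """Eq. (1): PD position control driving the ankle back to its equilibrium."""
    return kp * (theta - theta_eq) + kd * (theta_dot - theta_dot_eq)

def stance_braking_torque(duty_cycle, motor_speed, k_ed):
    """Eq. (2): braking torque proportional to the PWM duty cycle and motor speed."""
    return k_ed * duty_cycle * motor_speed

def control_step(phase, theta, theta_dot, motor_speed,
                 theta_eq=0.0, theta_dot_eq=0.0,
                 kp=5.0, kd=0.1, k_ed=0.02, duty_cycle=0.5):
    """One control-loop iteration: select the controller according to the gait phase."""
    if phase == "swing":
        return swing_pd_output(theta, theta_dot, theta_eq, theta_dot_eq, kp, kd)
    return stance_braking_torque(duty_cycle, motor_speed, k_ed)

# Example swing-phase step with hypothetical joint states (rad, rad/s):
print(control_step("swing", theta=0.2, theta_dot=-0.5, motor_speed=0.0))
```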
C. Experimental Protocol

In this study, we recruited three transtibial amputees as subjects. Their basic information (gender, age, weight, height, years post amputation, and amputation side) is listed in Table I. All subjects provided written informed consent, and the experiments were approved by the Local Ethics Committee of Peking University. Before the experiments, all subjects practiced with the new prosthesis to adjust the control parameters and adapt to it.

TABLE I
DETAILED INFORMATION OF THE THREE SUBJECTS WITH TRANSTIBIAL AMPUTATION

Subject     Gender   Age (years)   Weight (kg)   Height (cm)   Years post amputation   Amputation side
Subject 1   Male     52            70            170           17                      Left
Subject 2   Male     35            65            172           19                      Left
Subject 3   Male     56            82            170           10                      Left

The experimental task was to finish the six locomotion modes (St, LG, SA, SD, RA, RD), as shown in Fig. 2. It included two sessions, a training session and a recognition session, as summarized in Table II.

TABLE II
THE EXPERIMENTAL SETUP FOR THE ON-BOARD TRAINING AND REAL-TIME RECOGNITION SESSIONS

Training session
  Mode   Speed     Platform
  St     -         Treadmill
  LG     0.7 m/s   Treadmill
  RA     0.7 m/s   Treadmill
  RD     0.7 m/s   Treadmill
  SA     normal    Stairs
  SD     normal    Stairs

Recognition session
  Mode   Slow      Normal    Fast      Platform
  St     -         -         -         Treadmill
  LG     0.5 m/s   0.7 m/s   0.9 m/s   Treadmill
  RA     0.5 m/s   0.7 m/s   0.9 m/s   Treadmill
  RD     0.5 m/s   0.7 m/s   0.9 m/s   Treadmill
  SA     slow      normal    fast      Stairs
  SD     slow      normal    fast      Stairs

In the on-board training session, subjects were asked to complete St, LG, RA, RD, SA, and SD. The details were as follows. Subjects stood for 20 seconds and then walked on the treadmill at the normal speed (0.7 m/s) for 20 seconds. The RA and RD tasks were also performed on the treadmill, with an inclination angle of 10 degrees, as shown in Fig. 2(c) and (d), at the normal speed (0.7 m/s), and each task lasted 20 seconds. The subjects accomplished SA and SD on stairs (a step height of 16 cm and a step length of 28 cm), as shown in Fig. 2(e) and (f), at their self-selected normal speeds.

Fig. 2. The diagram of the designed experimental tasks: (a)-(f) represent the different locomotion modes. The red leg represents the transtibial prosthesis, the black leg represents the sound leg, and the blue arrow represents the ambulation direction.

After a subject finished the training tasks, the processor started the on-board training. When the on-board training was finished, each subject had his own specific model for the subsequent on-board real-time recognition. During the real-time recognition session, subjects performed the six locomotion modes (St, LG, SA, SD, RA, RD) at the different speeds listed in Table II.

D. Signal Processing

1) Signal Feature Extraction: As mentioned above, two IMUs were integrated in the robotic prosthesis, and each IMU can provide nine channels of information (tri-axial angle, acceleration, and angular velocity). To capture more information and dynamic features, features were extracted from the raw data with a continuous sliding window of 250 ms length and a 10 ms sliding increment. The sampling frequency was 100 Hz in the study. Five types of time-domain features were selected: the average, the standard deviation, the maximum, the minimum, and the difference between adjacent elements. All features were pooled into feature vectors. For on-board training, each vector carried its locomotion mode label. We trained the classifier using the feature vectors and labels and obtained a model for the subsequent recognition of each subject. For real-time recognition, the continuous feature vectors were fed into the trained model, which output the recognition results as a time series.

In the study, eight channels of each IMU (two angles, three accelerations, and three angular velocities) were adopted. Since five features were extracted from each channel, one feature vector contained 80 values (2 IMUs × 8 channels × 5 features). For the on-board training, data were collected for each of the six locomotion modes for 20 s at a sampling frequency of 100 Hz, yielding 12000 (20 × 100 × 6) training feature vectors. The training data set therefore consisted of a 12000 × 80 feature matrix and 12000 × 1 labels. The test data for real-time recognition consisted of one 1 × 80 vector per sampling interval.
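As a concrete illustration of the sliding-window feature extraction described above, the sketch below computes the five time-domain features over a 250 ms window for the 16 adopted channels and stacks them into an 80-dimensional vector. The array layout, the reading of the "difference between adjacent elements" feature as a mean absolute difference, and the window boundary handling (the paper counts one vector per 10 ms sample, i.e., 2000 vectors per 20 s recording) are assumptions made for this sketch, not details taken from the study's implementation.

```python
import numpy as np

FS = 100          # sampling frequency (Hz)
WIN = 25          # 250 ms window at 100 Hz
STEP = 1          # 10 ms sliding increment at 100 Hz
N_CHANNELS = 16   # 2 IMUs x 8 adopted channels (2 angles, 3 accelerations, 3 angular velocities)

def window_features(window):
    """Five time-domain features per channel: average, standard deviation, maximum,
    minimum, and (one possible reading) the mean absolute difference of adjacent samples."""
    feats = [window.mean(axis=0),
             window.std(axis=0),
             window.max(axis=0),
             window.min(axis=0),
             np.abs(np.diff(window, axis=0)).mean(axis=0)]
    return np.concatenate(feats)               # 16 channels x 5 features = 80 values

def sliding_feature_vectors(signals):
    """signals: (n_samples, 16) array of synchronized IMU channels.
    Returns one 80-dimensional feature vector per 10 ms increment."""
    vectors = [window_features(signals[i:i + WIN])
               for i in range(0, len(signals) - WIN + 1, STEP)]
    return np.asarray(vectors)

# Random data standing in for a 20 s recording of one locomotion mode:
dummy = np.random.randn(20 * FS, N_CHANNELS)
X = sliding_feature_vectors(dummy)
print(X.shape)                                 # (1976, 80) with this boundary handling
```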
2) Classification Algorithm: In the study, a multi-layer BP neural network was used for both on-board training and real-time recognition. The BP neural network performs well for recognition, with a simple model, high efficiency, and low hardware resource consumption [18]. Besides, by adjusting its number of layers and neurons, the BP neural network can in theory reach high accuracy. The designed BP neural network consists of three layers: an input layer (the first layer), a hidden layer (the second layer), and an output layer (the third layer), as shown in Fig. 3.

Fig. 3. The designed BP neural network. It consists of three layers: an input layer, a hidden layer, and an output layer. Feature vectors are fed into the network, and the output layer gives the classification result.

The input layer accepts the 80 feature values of each feature vector and passes them to the next layer. The hidden layer has 20 neurons, which accept the output of the input layer and pass their outputs on to the output layer. The output layer has 6 neurons; it accepts the values from the hidden layer and outputs six values, and the largest value corresponds to the final recognition result. The weight matrix between the input layer and the hidden layer has size 80 × 20, and the one between the hidden layer and the output layer has size 20 × 6. At the hidden and output layers, the outputs of the previous layer are multiplied by the weight matrix to form the input of the current layer, and the neurons of that layer apply a nonlinear activation function to compute their output values, which are then passed to the next layer [20]. In the study, the sigmoid function $f(x) = 1/(1 + e^{-x})$ was used as the nonlinear activation function in the hidden and output layers. The values in the BP neural network travel from input to output, and the weight matrices between adjacent layers are trained by the back-propagation procedure [20].
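A minimal NumPy sketch of the 80-20-6 network described above is given below, using sigmoid activations in the hidden and output layers and plain stochastic gradient descent on a squared-error loss for the back-propagation updates. The learning rate, weight initialization, bias terms, number of epochs, and loss function are assumptions, since the paper does not specify these training details.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class BPNet:
    """80-20-6 back-propagation network with sigmoid hidden and output layers."""
    def __init__(self, n_in=80, n_hidden=20, n_out=6, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0, 0.1, (n_in, n_hidden))    # input -> hidden weights (80 x 20)
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.1, (n_hidden, n_out))   # hidden -> output weights (20 x 6)
        self.b2 = np.zeros(n_out)
        self.lr = lr

    def forward(self, x):
        self.h = sigmoid(x @ self.W1 + self.b1)           # hidden layer activations
        self.y = sigmoid(self.h @ self.W2 + self.b2)      # six output values
        return self.y

    def backward(self, x, target):
        # Squared-error loss; deltas use the sigmoid derivative s * (1 - s).
        d_out = (self.y - target) * self.y * (1.0 - self.y)
        d_hid = (d_out @ self.W2.T) * self.h * (1.0 - self.h)
        self.W2 -= self.lr * np.outer(self.h, d_out)
        self.b2 -= self.lr * d_out
        self.W1 -= self.lr * np.outer(x, d_hid)
        self.b1 -= self.lr * d_hid

    def train(self, X, labels, epochs=20):
        T = np.eye(self.W2.shape[1])[labels]              # one-hot targets for the 6 modes
        for _ in range(epochs):
            for x, t in zip(X, T):
                self.forward(x)
                self.backward(x, t)

    def predict(self, x):
        return int(np.argmax(self.forward(x)))            # largest output = recognized mode

# Usage with random data standing in for the 12000 x 80 training set of the paper:
net = BPNet()
X = np.random.rand(100, 80); y = np.random.randint(0, 6, 100)
net.train(X, y, epochs=2)
print(net.predict(X[0]))
```

The trained parameters (an 80 × 20 and a 20 × 6 weight matrix, plus the small bias vectors assumed here) occupy little memory, which is consistent with the low resource consumption the paper reports for the BP network on the prosthesis control circuit.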
E. System Evaluation

Recognition accuracy is a critical metric for evaluating the recognition performance. For the locomotion modes, a confusion matrix $CM$ is adopted, defined as

$$CM = \begin{bmatrix} c_{11} & c_{12} & \cdots & c_{1k} \\ c_{21} & c_{22} & \cdots & c_{2k} \\ \vdots & \vdots & \ddots & \vdots \\ c_{k1} & c_{k2} & \cdots & c_{kk} \end{bmatrix} \qquad (3)$$

where $k$ denotes the number of locomotion modes ($k = 6$ in this study). The element $c_{ij}$ of $CM$ is defined as

$$c_{ij} = \frac{n_{ij}}{n_i} \times 100\% \qquad (4)$$

where $n_{ij}$ is the number of samples that belong to mode $i$ but are classified into mode $j$, and $n_i$ is the total number of samples belonging to mode $i$. The overall recognition accuracy $RA$ is the average of the diagonal elements of the confusion matrix, which avoids the problem of unbalanced class sizes:

$$RA = \frac{1}{k}\sum_{i=1}^{k} c_{ii} \qquad (5)$$

where $k$ denotes the number of locomotion modes, as in Eq. (3), and $c_{ii}$ is a diagonal element of the confusion matrix.
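The two metrics in Eqs. (3)-(5) can be computed directly from the recognition decisions, as sketched below: each row of the confusion matrix is normalized by the number of true samples of that mode, as in Eq. (4), and RA is the mean of the diagonal, as in Eq. (5). This is only an illustration of the definitions with dummy labels, not code or data from the study.

```python
import numpy as np

def confusion_matrix(true_modes, predicted_modes, k=6):
    """Eq. (3)/(4): c_ij = percentage of mode-i samples classified as mode j."""
    counts = np.zeros((k, k))
    for t, p in zip(true_modes, predicted_modes):
        counts[t, p] += 1
    return 100.0 * counts / counts.sum(axis=1, keepdims=True)

def recognition_accuracy(cm):
    """Eq. (5): average of the diagonal elements (per-class accuracies)."""
    return np.mean(np.diag(cm))

# Dummy example with hypothetical mode indices 0-5 standing for St, LG, SA, SD, RA, RD:
true = np.random.randint(0, 6, 600)
pred = np.where(np.random.rand(600) < 0.9, true, np.random.randint(0, 6, 600))
cm = confusion_matrix(true, pred)
print(recognition_accuracy(cm))
```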
III. EXPERIMENTAL RESULTS

A. Recognition Accuracy of the Subjects at Three Speeds

The total recognition accuracies for the six locomotion modes of the three subjects were 89.19%, 92.32%, and 93.12% at the slow speed (91.54% on average), 96.88%, 95.38%, and 97.90% at the normal speed (96.72% on average), and 95.43%, 94.63%, and 96.0% at the fast speed (95.35% on average), as shown in Fig. 4.

Fig. 4. The total recognition accuracy of the three subjects at the different speeds. The differently colored bars represent the different speeds, and the text on each bar denotes the recognition accuracy.

All subjects achieved their highest accuracy at the normal speed and their lowest accuracy at the slow speed. We attribute this to the fact that the model was trained on data collected under the normal-speed condition, so its performance was best for recognition at that speed. Across the speeds, subject 1 achieved 89.19% recognition accuracy at the slow speed and more than 90% accuracy at the other speeds, while both subject 2 and subject 3 achieved more than 90% recognition accuracy at all speeds. These results show that the model trained under the normal-speed condition has some adaptation to recognition under the other (slow and fast) speed conditions. The total real-time accuracies of this study (91.54% at the slow speed, 96.72% at the normal speed, and 95.35% at the fast speed) are comparable to those of the studies in [13] (about 96%, DBN) and [16] (more than 93%, QDA).

B. Recognition Accuracy for Each Locomotion Mode at Three Speeds

We also evaluated the performance for each locomotion mode in the real-time recognition, as shown in Fig. 5, which illustrates the recognition accuracy of the subjects for each locomotion mode at the three speeds. First, the St mode was recognized with 100% accuracy, as shown in Fig. 5, since the signals of the Standing mode are distinct from those of the other locomotion modes. Secondly, for each subject, the recognition accuracy of each locomotion mode (excluding the Standing mode) was lowest at the slow speed. Thirdly, for subject 1, the recognition accuracy for the LG locomotion mode at the fast speed was
