Smaller Universal Spiking Neural P Systems

Xingyi Zhang, Xiangxiang Zeng, Linqiang Pan
Key Laboratory of Image Processing and Intelligent Control, Department of Control Science and Engineering, Huazhong University of Science and Technology, Wuhan 430074, Hubei, China

Abstract

The problem of finding small universal spiking neural P systems was recently investigated by Andrei Paun and Gheorghe Paun, for spiking neural P systems used as devices computing functions and as devices generating sets of numbers. For the first case, a universal spiking neural P system was produced by using 84 neurons for standard rules and 49 neurons for extended rules. For spiking neural P systems used as generators of sets of numbers, a universal system with standard rules having 76 neurons, and one with extended rules having 50 neurons, were obtained. In this paper, we continue the study of small universal spiking neural P systems and improve on the number of neurons as follows. Our small universal spiking neural P systems use 67 neurons for standard rules and 41 neurons for extended rules in the case of computing functions, and 63 neurons for standard rules and 41 neurons for extended rules in the case of generating sets of numbers.

Keywords: membrane computing; spiking neural P systems; small universal SN P systems

1 Introduction

Spiking neural P systems (SN P systems, for short) are a class of computing devices introduced in [2] as an attempt to incorporate the idea of spiking neurons into the area of membrane computing. Although the study of SN P systems was initiated only recently, in 2006, it has become an active research area and many papers have been devoted to this topic. An interesting research issue concerning SN P systems is to look for small universal SN P systems. The topic of looking for small computing devices of various types is well investigated in computer science; see, e.g., [3, 7] and the references therein.
Recently, this issue was also considered in membrane computing: small universal computing devices for tissue P systems with string objects processed by splicing operations and for symport/antiport P systems can be found in [1] and [8], respectively. The case of SN P systems was first investigated by Andrei Paun and Gheorghe Paun in [5], where a universal SN P system was obtained by using 84 neurons for standard rules and 49 neurons for extended rules as a device computing functions; for SN P systems used as generators of sets of numbers, a universal system with standard rules having 76 neurons was found, and one with extended rules having 50 neurons.

In this paper, we consider the problem of reducing the number of neurons used in the small universal SN P systems. Starting from the modules constructed in [5], we show that some SUB modules for standard rules can share common auxiliary neurons, and that similar results hold for extended rules. Moreover, we show that two ADD modules can almost never share auxiliary neurons, and that an ADD module and a SUB module can never share any neuron; this will be made precise below. As devices computing functions, the number of neurons used in small universal SN P systems is decreased from 84 to 67 for standard rules and from 49 to 41 for extended rules; for SN P systems used as generators of sets of numbers, the number of neurons is decreased from 76 to 63 for standard rules and from 50 to 41 for extended rules.

The paper is organized as follows. In Section 2, we introduce the necessary prerequisites related to register machines, universality, and SN P systems. Section 3 gives the small universal SN P system with standard rules as a device computing functions, together with the results that yield an improvement of the universal computing SN P system. In Section 4, the case of universal SN P systems with extended rules is considered. Section 5 considers universal SN P systems working as generators of sets of numbers.
Conclusions and remarks are drawn in Section 6.

2 Prerequisites

We assume the reader to be familiar with basic language and automata theory, as well as with basic membrane computing [6] (for more updated information about membrane computing, please refer to [9]); hence we directly introduce the necessary notations and basic definitions here.

For an alphabet V, V* denotes the set of all finite strings over V, with the empty string denoted by λ. The set of all nonempty strings over V is denoted by V+. When V = {a} is a singleton, we simply write a* and a+ instead of {a}*, {a}+.

A regular expression over an alphabet V is defined as follows: (i) λ and each a ∈ V is a regular expression, (ii) if E1, E2 are regular expressions over V, then (E1)(E2), (E1) ∪ (E2), and (E1)+ are regular expressions over V, and (iii) nothing else is a regular expression over V. With each expression E we associate a language L(E), defined in the following way: (i) L(λ) = {λ} and L(a) = {a}, for all a ∈ V, (ii) L((E1) ∪ (E2)) = L(E1) ∪ L(E2), L((E1)(E2)) = L(E1)L(E2), and L((E1)+) = (L(E1))+, for all regular expressions E1, E2 over V. Non-necessary parentheses are omitted when writing a regular expression, and (E)+ ∪ {λ} can also be written as E*.

We now pass to introducing universal register machines, and then spiking neural P systems.

2.1 Universal register machines

Because the register machines used in the following sections are deterministic, we only recall the definition of deterministic register machines here. A deterministic register machine is a construct M = (m, H, l0, lh, I), where m is the number of registers, H is the set of instruction labels, l0 is the start label, lh is the halt label (assigned to instruction HALT), and I is the set of instructions; each label from H labels only one instruction from I, thus precisely identifying it.
The instructions are of the following forms:

li : (ADD(r), lj) (add 1 to register r and then go to the instruction with label lj),
li : (SUB(r), lj, lk) (if register r is non-empty, then subtract 1 from it and go to the instruction with label lj; otherwise, go to the instruction with label lk),
lh : HALT (the halt instruction).

A register machine M generates a set of numbers in the following way: we start with all registers being empty (i.e., storing the number zero), we apply the instruction with label l0, and we continue to apply instructions as indicated by the labels (and made possible by the contents of the registers); if we reach the halt instruction, then the number n present in a specified register r0 at that time is said to be generated by M. If the computation does not halt, then no number is generated. The set of all numbers generated by M is denoted by N(M).

A register machine can compute any Turing computable function: we introduce the arguments in specified registers r1, . . . , rk (without loss of generality, we may assume that we use the first k registers), we start with the instruction with label l0, and if the register machine stops (with the instruction with label lh), then the value of the function is placed in another specified register rt, with all registers different from rt being empty. The partial function computed in this way is denoted by M(n1, n2, . . . , nk). It is known (see, e.g., [4]) that, in this computing form, deterministic register machines compute all Turing computable functions.

In [3], several universal register machines have been constructed for computing functions (in this case the registers are denoted by 0, . . . , m), with the input introduced in registers 1 and 2, and the result obtained in register 0. In the following, we consider the specific universal register machine given in Figure 1, which is also the one used in [5].
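The register machine semantics just described can be made concrete with a short interpreter. The sketch below is our own illustration (the function names, the dictionary encoding, and the toy demo machine are not from the paper); the FIG1 table simply transcribes the instruction set of the universal machine from Figure 1.

```python
# Minimal sketch of a deterministic register machine interpreter.
# The FIG1 table transcribes Figure 1; everything else (names, the TOY
# demo machine) is our own hypothetical illustration.

def run_register_machine(instructions, l0, registers):
    """Run a deterministic register machine until HALT.

    instructions: dict label -> ('ADD', r, lj) | ('SUB', r, lj, lk) | ('HALT',)
    registers: dict register -> non-negative int (mutated in place, returned)
    """
    label = l0
    while True:
        instr = instructions[label]
        if instr[0] == 'HALT':
            return registers
        if instr[0] == 'ADD':                 # add 1, jump to lj
            _, r, lj = instr
            registers[r] = registers.get(r, 0) + 1
            label = lj
        else:                                 # SUB: decrement if non-empty
            _, r, lj, lk = instr
            if registers.get(r, 0) > 0:
                registers[r] -= 1
                label = lj
            else:
                label = lk

# The 23 instructions of the universal machine from Fig. 1 (labels l0..l22).
FIG1 = {
    0: ('SUB', 1, 1, 2),    1: ('ADD', 7, 0),
    2: ('ADD', 6, 3),       3: ('SUB', 5, 2, 4),
    4: ('SUB', 6, 5, 3),    5: ('ADD', 5, 6),
    6: ('SUB', 7, 7, 8),    7: ('ADD', 1, 4),
    8: ('SUB', 6, 9, 0),    9: ('ADD', 6, 10),
    10: ('SUB', 4, 0, 11),  11: ('SUB', 5, 12, 13),
    12: ('SUB', 5, 14, 15), 13: ('SUB', 2, 18, 19),
    14: ('SUB', 5, 16, 17), 15: ('SUB', 3, 18, 20),
    16: ('ADD', 4, 11),     17: ('ADD', 2, 21),
    18: ('SUB', 4, 0, 22),  19: ('SUB', 0, 0, 18),
    20: ('ADD', 0, 0),      21: ('ADD', 3, 18),
    22: ('HALT',),
}

# A toy two-instruction machine (not from the paper): move register 1 into
# register 0, i.e., compute the identity function of the value in register 1.
TOY = {0: ('SUB', 1, 1, 2), 1: ('ADD', 0, 0), 2: ('HALT',)}
```

Running FIG1 meaningfully requires Korec's encoding of a program into registers 1 and 2, which is outside this extract, but its shape (8 registers, 23 instructions) can be checked directly from the table.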
The register machine from Figure 1 has 8 registers and 23 instructions.

l0 : (SUB(1), l1, l2),    l1 : (ADD(7), l0),
l2 : (ADD(6), l3),        l3 : (SUB(5), l2, l4),
l4 : (SUB(6), l5, l3),    l5 : (ADD(5), l6),
l6 : (SUB(7), l7, l8),    l7 : (ADD(1), l4),
l8 : (SUB(6), l9, l0),    l9 : (ADD(6), l10),
l10 : (SUB(4), l0, l11),  l11 : (SUB(5), l12, l13),
l12 : (SUB(5), l14, l15), l13 : (SUB(2), l18, l19),
l14 : (SUB(5), l16, l17), l15 : (SUB(3), l18, l20),
l16 : (ADD(4), l11),      l17 : (ADD(2), l21),
l18 : (SUB(4), l0, l22),  l19 : (SUB(0), l0, l18),
l20 : (ADD(0), l0),       l21 : (ADD(3), l18),
l22 : HALT

Fig. 1: A small universal register machine from [3]

2.2 Spiking neural P systems

In this section, we introduce SN P systems in the form necessary for computing functions. A computing SN P system, of degree m ≥ 1, is a construct of the form

Π = (O, σ1, . . . , σm, syn, in, out),

where:

1. O = {a} is the singleton alphabet (a is called spike);
2. σ1, . . . , σm are neurons, of the form σi = (ni, Ri), 1 ≤ i ≤ m, where:
a) ni ≥ 0 is the initial number of spikes contained in σi;
b) Ri is a finite set of rules of the following two forms:
(1) E/a^c → a^p; d, where E is a regular expression over a, and c ≥ 1, d ≥ 0, p ≥ 1, with the restriction c ≥ p;
(2) a^s → λ, for some s ≥ 1, with the restriction that for each rule E/a^c → a^p; d of type (1) from Ri, we have a^s ∉ L(E);
3. syn ⊆ {1, 2, . . . , m} × {1, 2, . . . , m} with i ≠ j for each (i, j) ∈ syn (synapses between neurons);
4. in, out ∈ {1, 2, . . . , m} indicate the input and the output neurons, respectively.

If we always have p = 1 for all rules of the form E/a^c → a^p; d, then the rules are said to be of the standard type; otherwise, they are called extended rules.

The rules of type (1) are firing (we also say spiking) rules, and they are applied as follows. If the neuron σi contains k spikes, and a^k ∈ L(E), k ≥ c, then the rule E/a^c → a^p; d ∈ Ri can be applied.
This means consuming (removing) c spikes (thus only k − c spikes remain in σi), the neuron is fired, and it produces p spikes after d time units (as usual in membrane computing, a global clock is assumed, marking the time for the whole system, hence the functioning of the system is synchronized). If d = 0, then these spikes are emitted immediately; if d = 1, then these spikes are emitted in the next step; etc. If the rule is used in step t and d ≥ 1, then in steps t, t + 1, . . . , t + d − 1 the neuron is closed (this corresponds to the refractory period from neurobiology), so that it cannot receive new spikes (if a neuron has a synapse to a closed neuron and tries to send spikes along it, then these particular spikes are lost). In step t + d, the neuron spikes and becomes open again, so that it can receive spikes (which can be used starting with step t + d + 1, when the neuron can again apply rules).

The rules of type (2) are forgetting rules; they are applied as follows: if the neuron σi contains exactly s spikes, then the rule a^s → λ from Ri can be used, meaning that all s spikes are removed from σi.

If a rule E/a^c → a^p; d has E = a^c, then we write it in the simplified form a^c → a^p; d.

In each time unit, if a neuron σi can use one of its rules, then a rule from Ri must be used. Since two firing rules, E1/a^c1 → a^p1; d1 and E2/a^c2 → a^p2; d2, can have L(E1) ∩ L(E2) ≠ ∅, it is possible that two or more rules can be applied in a neuron, and in that case only one of them is chosen non-deterministically. Note, however, that, by definition, if a firing rule is applicable, then no forgetting rule is applicable, and vice versa.

Thus, the rules are used in a sequential manner in each neuron, at most one in each step, but the neurons function in parallel with each other.
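The firing, delay, and forgetting behaviour described above can be illustrated for a single neuron driven by an external spike schedule. This is a hedged sketch under two simplifying assumptions stated in the docstring, not the paper's full synchronous multi-neuron semantics; all names are our own.

```python
import re  # regular expression membership a^k ∈ L(E)

def simulate_neuron(firing, forgetting, n0, incoming, steps):
    """Illustrative single-neuron sketch of SN P rule application.

    firing: list of (E, c, p, d); E is a Python regex over 'a' that must
            match the whole content a^k (standing in for a^k ∈ L(E)).
    forgetting: list of s values for rules a^s -> lambda.
    incoming: dict step -> spikes arriving at that step (external schedule).
    Returns the number of spikes emitted at each step.
    Simplifications vs. the paper: arrivals at a step are delivered before
    rule selection, and rule choice is deterministic (first applicable),
    whereas the paper chooses non-deterministically.
    """
    spikes = n0
    remaining, pending = 0, 0        # delay countdown and spikes to emit
    out = []
    for t in range(steps):
        emitted = 0
        if remaining > 0:            # neuron closed: arrivals are lost
            remaining -= 1
            if remaining == 0:       # step t+d: spike and reopen
                emitted, pending = pending, 0
        else:
            spikes += incoming.get(t, 0)
            for E, c, p, d in firing:
                if spikes >= c and re.fullmatch(E, 'a' * spikes):
                    spikes -= c                      # consume c spikes
                    if d == 0:
                        emitted = p                  # emit immediately
                    else:
                        remaining, pending = d, p    # close for d steps
                    break
            else:
                if spikes in forgetting:             # a^s -> lambda
                    spikes = 0                       # remove all s spikes
        out.append(emitted)
    return out
```

For example, a neuron with the single standard rule a+/a → a; 0 and two initial spikes emits one spike in each of the first two steps and then stops, while a rule with d = 1 emits its spike one step after it fires.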
It is important to notice that the applicability of a rule is established based on the total number of spikes contained in the neuron.

The initial configuration of the system is described by the numbers n1, n2, . . . , nm of spikes present in each neuron, with all neurons being open. During the computation, a configuration of the system is described both by the number of spikes present in each neuron and by the state of each neuron, that is, by the number of steps to count down until it becomes open again (this number is zero if the neuron is already open). Thus, ⟨r1/t1, . . . , rm/tm⟩ is the configuration where neuron σi contains ri ≥ 0 spikes and will be open after ti ≥ 0 steps, i = 1, 2, . . . , m; with this notation, the initial configuration is C0 = ⟨n1/0, . . . , nm/0⟩.

A computation in a system as above starts in the initial configuration. In order to compute a function f : N^k → N, where N is the set of all non-negative integers (because the present paper is based on the results of [5], where the number 0 is not considered, from now on we assume that N is the set of all positive integers), we introduce k natural numbers n1, . . . , nk in the system by "reading" from the environment a binary sequence z = 1 0^{n1−1} 1 0^{n2−1} 1 . . . 1 0^{nk−1} 1. This means that the input neuron of Π receives a spike in each step corresponding to a digit 1 from the string z and no spike otherwise. Note that we input exactly k + 1 spikes, i.e., after the last spike we assume that no further spike is coming to the input neuron. The result of the computation is also encoded in the distance between two spikes: we impose the restriction that the system outputs exactly two spikes and halts (immediately after the second spike), hence producing a spike train of the form 0^b 1 0^{r−1} 1, for some b ≥ 0 and with r = f(n1, . . . , nk).

When using SN P systems as number generators, we no longer need to output a result after halting the computation, but we have to non-deterministically generate a number at the beginning of the computation. Hence, a few modifications to the construction of the universal computing SN P system are needed. First, no separate output module is necessary, and the label l22 can be saved by just letting the computation halt. Second, the input module should be combined with the output one, at the same time non-deterministically producing the number n.

3 Small universal computing SN P system with standard rules

In this section, we first recall the small universal SN P system with standard rules as a device for computing functions, which was given by Andrei Paun and Gheorghe Paun in [5]. Then, we examine the possibility that the modules constructed in [5] can share common auxiliary neurons, and on this basis we decrease the number of neurons of the small universal SN P systems.

3.1 The known small universal computing SN P system

We only briefly review the construction of the small universal computing SN P system with standard rules; for more information, please refer to [5].

The usual way of simulating a register machine by an SN P system consists in constructing a system in which neurons are associated with each register and with each label of an instruction of the machine. In this particular case, the value n of register r is represented by 2n spikes in neuron σr.
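The spike-train conventions recalled above (input z = 1 0^{n1−1} 1 . . . 1 0^{nk−1} 1, output 0^b 1 0^{r−1} 1, and the 2n-spikes register encoding) can be sketched directly; the helper names below are our own illustration, not part of the construction in [5].

```python
def encode_input(numbers):
    """Build the input spike train z = 1 0^{n1-1} 1 0^{n2-1} 1 ... 1 0^{nk-1} 1
    as a '0'/'1' string: k positive integers become k+1 spikes whose
    consecutive distances are n1, ..., nk."""
    z = '1'
    for n in numbers:
        z += '0' * (n - 1) + '1'
    return z

def decode_output(train):
    """Read the result r from a train of the form 0^b 1 0^{r-1} 1:
    r is the distance between the only two spikes."""
    first = train.index('1')
    second = train.index('1', first + 1)
    return second - first

def register_spikes(n):
    """Value n of a register is represented by 2n spikes in its neuron."""
    return 2 * n
```

For instance, the inputs (3, 2) are read as the train 100101 (three spikes at distances 3 and 2), and the train 01001 decodes to the result r = 3.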
Also, each label of the register machine has a corresponding neuron, which is "activated" when receiving two spikes.

There is a trick used in the construction of the small universal SN P system from [5]. Because the construction does not allow subtraction operations on the neuron where the result is placed, and register 0 is subject to such operations, we have to add a further register (we label it with 8) and we replace the halt instruction with the following instructions:

l22 : (SUB(0), l23, l24), l23 : (ADD(8), l22), l24 : HALT.

The simulations of ADD and SUB instructions are shown in Figures 2 and 3, respectively. In order to simulate the small universal register machine depicted in Figure 1, we should introduce the needed spikes in the neurons σ_l0, σ_1, σ_2 and output the computed number. These two tasks are covered by the modules INPUT and OUTPUT, respectively, which can be found in [5]. Since we use the same input and output modules for the improved small universal computing SN P system in the next section, we do not give them here. Thus, we have:

9 neurons for the 9 registers,
25 neurons for the 25 labels,
20 auxiliary neurons in the 10 ADD instructions,
28 auxiliary neurons in the 14 SUB instructions,
7 neurons in the input module,
2 neurons in the output module,

which gives a total of 91 neurons.
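The tally above can be checked by simple bookkeeping. The dictionary below is our own restatement of the counts listed in the text (with the per-instruction auxiliary counts derived from the listed totals: 20/10 and 28/14 give 2 auxiliary neurons per instruction).

```python
# Bookkeeping for the construction of [5] as recalled above; the names and
# the 2-per-instruction breakdown are our own reading of the listed totals.
counts = {
    'registers': 9,           # neurons for registers 0..8 (value n <-> 2n spikes)
    'labels': 25,             # one neuron per instruction label l0..l24
    'add_auxiliary': 10 * 2,  # 10 ADD instructions, 2 auxiliary neurons each
    'sub_auxiliary': 14 * 2,  # 14 SUB instructions, 2 auxiliary neurons each
    'input_module': 7,
    'output_module': 2,
}
total = sum(counts.values())  # the 91 neurons of the construction in [5]
```

It is this total of 91 that the sharing of auxiliary neurons between modules, studied in the rest of the paper, brings down.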