Abstract

In this dissertation I investigate the neural mechanisms underlying the human ability to learn, store and make use of grammatical structure, so-called syntax, in language. In doing so I incorporate insights from linguistics, cognitive psychology and neurobiology.

From linguistic research it is known that the structure of nearly all languages exhibits two essential characteristics. Language is productive: from a limited number of words and rules one can produce and understand an unlimited number of novel sentences. Further, language is hierarchical: sentences are constructed from phrases, which in turn can be constructed from other phrases, and so on. These two structural properties of language provide minimum requirements that a system of language processing, such as the brain, must satisfy. A first contribution of this dissertation is that it attempts to formulate these requirements as concisely as possible, allowing for a strict evaluation of existing models of neural processing in the brain (so-called neural networks). From this evaluation it is concluded that conventional types of neural networks (in particular so-called recurrent, fully distributed networks) are unsuited for modeling language, due to certain oversimplifying assumptions.
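
To make these two requirements concrete, consider a minimal sketch in Python (the toy grammar below is my own illustration, not a formalism from this dissertation): a handful of rules generates unboundedly many nested sentences, which is precisely the productivity and hierarchy a neural model must accommodate.

    import random

    # Toy context-free grammar (hypothetical example): finitely many rules.
    GRAMMAR = {
        "S":  [["NP", "VP"]],
        "NP": [["the", "N"], ["the", "N", "PP"]],  # an NP may contain a PP...
        "PP": [["near", "NP"]],                    # ...and a PP contains an NP: hierarchy
        "VP": [["sleeps"], ["sees", "NP"]],
        "N":  [["dog"], ["cat"]],
    }

    def generate(symbol="S", depth=0, max_depth=4):
        """Expand `symbol` recursively; the nesting of phrases gives hierarchy."""
        if symbol not in GRAMMAR:                  # a terminal word
            return [symbol]
        rules = GRAMMAR[symbol]
        # Force the shortest rule near max_depth so generation terminates.
        rule = min(rules, key=len) if depth >= max_depth else random.choice(rules)
        return [w for part in rule for w in generate(part, depth + 1, max_depth)]

    # Productivity: repeated calls yield ever-new sentences from the same rules.
    print(" ".join(generate()))   # e.g. "the dog near the cat sees the cat"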

In the remainder of this thesis I therefore develop a novel type of neural network, based on a neural theory of syntax that does take into account the hierarchical structure and productivity of language. It is inspired by Jeff Hawkins's Memory Prediction Framework (MPF), a theory of information processing in the brain which states, among other things, that the main function of the neocortex is to predict, in order to anticipate novel situations. According to Hawkins, to this end the neocortex stores all processed information as temporal sequences of patterns, in a hierarchical fashion. Cellular columns that are positioned higher in the cortical hierarchy represent more abstract concepts, and span longer times by virtue of temporal compression.
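
As an informal illustration of temporal compression (my sketch, not Hawkins's implementation): a unit higher in the hierarchy names an entire lower-level sequence, so one step at the top spans many time steps below.

    # A named higher-level pattern standing for a four-step lower-level sequence.
    SEQUENCES = {"greeting": ["hello", "how", "are", "you"]}

    def compress(stream, lexicon=SEQUENCES):
        """Replace each recognized lower-level sequence by its single name."""
        out, i = [], 0
        while i < len(stream):
            for name, seq in lexicon.items():
                if stream[i:i + len(seq)] == seq:
                    out.append(name)        # one higher-level step...
                    i += len(seq)           # ...covers len(seq) lower-level steps
                    break
            else:
                out.append(stream[i])
                i += 1
        return out

    print(compress(["hello", "how", "are", "you", "friend"]))
    # ['greeting', 'friend'] -- the higher level sees a shorter, more abstract stream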

Whereas Hawkins applies his theory primarily to the area of visual perception, in my dissertation I emphasize the analogies between visual processing and language processing: temporal compression is a typical feature of syntactic categories (as they encode sequences of words), and whenever these categories are recognized at an early stage of the sentence, they can be expanded to predict its subsequent course. I therefore propose that syntactic categories, like visual categories, are represented locally in the brain within cortical columns, and moreover that the hierarchical and topological organization of such 'syntactic' columns constitutes a grammar.
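
This predictive expansion can be sketched in a few lines (the stored categories below are hypothetical; HPN's actual mechanism is developed in later chapters): once a stored sequence matches the sentence so far, its remainder predicts the upcoming words.

    # Hypothetical stored categories, each encoding a sequence of words.
    CATEGORIES = {
        "polite-question": ["how", "do", "you", "do"],
        "np-the-dog":      ["the", "dog"],
    }

    def predict(prefix, categories=CATEGORIES):
        """Expand every category recognized from the prefix into a prediction."""
        predictions = {}
        for name, seq in categories.items():
            if seq[:len(prefix)] == prefix:            # category recognized early...
                predictions[name] = seq[len(prefix):]  # ...its tail predicts the rest
        return predictions

    print(predict(["how", "do"]))   # {'polite-question': ['you', 'do']}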

A second source of inspiration for my research is the role of memory in language processing and acquisition. An important question that a neural theory of language has to address concerns the nature of the smallest productive units of language that are stored in memory. When producing a novel sentence, language users often seem to reuse entire memorized sentence fragments whose meanings are not predictable from the constituent words. Examples of such multi-word constructions are 'How do you do?' or 'kick the bucket', but there are also productive constructions with one or more 'open slots', such as 'the more you think about X, the less you understand', as well as completely abstract and unlexicalized constructions. According to certain linguistic theories, every sentence in a language can be formed by combining constructions of varying degrees of complexity and abstractness.
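
A minimal sketch of constructions as stored data (my illustration; construction-grammar formalisms differ in their details): fixed idioms are stored whole, while templates carry open slots that are filled productively.

    SLOT = "X"   # marker for an open slot

    CONSTRUCTIONS = [
        ["how", "do", "you", "do"],                         # fully lexicalized idiom
        ["kick", "the", "bucket"],                          # fully lexicalized idiom
        ["the", "more", "you", "think", "about", SLOT,
         "the", "less", "you", "understand"],               # one open slot
    ]

    def instantiate(construction, fillers):
        """Fill each open slot with the next filler, left to right."""
        fillers = iter(fillers)
        return [next(fillers) if w == SLOT else w for w in construction]

    print(" ".join(instantiate(CONSTRUCTIONS[2], ["grammar"])))
    # "the more you think about grammar the less you understand"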

In order to answer the question about the storage of constructions, I propose that in linguistics, as in cognitive psychology, one must distinguish between two kinds of memory systems: a memory system for abstract, relational knowledge, so-called 'semantic' memory, and a memory system for personally experienced events or 'episodes' (for instance the memory of a birthday party), embedded in a temporal and spatial context, so-called 'episodic' memory. I contend that, while the abstract rules and syntactic categories of a language are part of a semantic memory for language, an episodic memory is responsible for storing sentence fragments, and even entire sentences.

Episodic memory also plays an important role in language acquisition, assuming that our linguistic knowledge is not innate but originates from the assimilation of many individual linguistic experiences. An important claim of this thesis is that language acquisition, like knowledge acquisition in other cognitive domains, can be understood as a gradual transformation of concrete episodic experiences into a system of abstract, semantic memories. Starting from the assumption that universal mechanisms of memory processing in the brain also govern language production and acquisition, I formulate an explicit theory of the interaction between an episodic and a semantic memory for language, called the 'Hierarchical Prediction Network' (HPN), which is applied to sentence processing and acquisition. HPN further incorporates the ideas of the MPF, with some important modifications.

The semantic memory for language is conceived of in HPN as a neural network in which the nodes (corresponding to syntactic and lexical cortical columns) derive their function from their topological arrangement in the network. This means that two nodes that fulfill a similar function within the syntactic analysis of a sentence are positioned in each other's vicinity in some high-dimensional space. (This is motivated by the topological organization of, for instance, the orientation columns in area V1 of the visual cortex, where neighboring columns are tuned to similar orientations of line segments.)
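
A toy illustration of this topological coding (the coordinates below are invented, and HPN's space is high-dimensional and learned): a node's nearest neighbors are the nodes most substitutable for it in a syntactic analysis.

    import math

    # Invented 3-dimensional "topological addresses" for four lexical nodes.
    NODES = {
        "dog":    (0.9, 0.1, 0.0),   # nouns cluster in one region...
        "cat":    (0.8, 0.2, 0.1),
        "sleeps": (0.0, 0.9, 0.8),   # ...verbs cluster elsewhere
        "sees":   (0.1, 0.8, 0.9),
    }

    def neighbors(name, nodes=NODES):
        """Rank the other nodes by distance: nearby nodes play similar roles."""
        here = nodes[name]
        return sorted((n for n in nodes if n != name),
                      key=lambda n: math.dist(here, nodes[n]))

    print(neighbors("dog"))   # ['cat', 'sees', 'sleeps']: 'cat' is most substitutable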

A syntactic analysis (parse) of a sentence in HPN consists of a trajectory through the network that (dynamically) binds a set of nodes, as they exchange their topological addresses via a central hub. (This is inspired by research on how primitive visual categories are bound into complex contours or shapes.) By virtue of the flexible bindings between the nodes (as opposed to the static bindings in conventional neural networks), HPN can account for the productivity of language.
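
The contrast with static bindings can be sketched as follows (a simplification of the idea, not HPN's circuitry): the hub records the addresses that participating nodes exchange, so the same nodes can be rebound, in new orders, for every new sentence.

    class Hub:
        """Central hub (schematic): nodes exchange topological addresses here."""
        def __init__(self):
            self.trajectory = []          # ordered (node, address) bindings

        def bind(self, node, address):
            self.trajectory.append((node, address))   # flexible, per-sentence binding

        def parse(self):
            return [node for node, _ in self.trajectory]

    # Invented 2-dimensional addresses for a few nodes.
    ADDRESS = {"the": (0.5, 0.5), "dog": (0.9, 0.1), "sees": (0.1, 0.8), "cat": (0.8, 0.2)}

    def analyze(words):
        """One parse = one trajectory; the bindings are created anew each time."""
        hub = Hub()
        for w in words:
            hub.bind(w, ADDRESS[w])
        return hub.parse()

    # The same nodes are flexibly rebound for different sentences, which
    # static, hard-wired connections between them could not express.
    print(analyze(["the", "dog", "sees", "the", "cat"]))
    print(analyze(["the", "cat", "sees", "the", "dog"]))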

In HPN, the episodic memory for language is embedded within the semantic memory, in the form of permanent memory traces that are left behind in the network nodes involved in processing a sentence. In this way, the network analysis of a processed sentence can always be reconstructed at a later time by means of the memory traces. Moreover, novel sentences can be constructed by combining partial traces of previously processed sentences.
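
The trace mechanism can be sketched as follows (illustrative only; sentence ids and positions stand in for HPN's actual traces): every node involved in a parse keeps a permanent record of its role, and the full analysis remains recoverable from those scattered records.

    from collections import defaultdict

    traces = defaultdict(list)    # node -> [(sentence_id, position), ...]

    def store(sentence_id, parsed_nodes):
        """Leave a trace in every node that participated in the parse."""
        for position, node in enumerate(parsed_nodes):
            traces[node].append((sentence_id, position))

    def reconstruct(sentence_id):
        """Rebuild a past analysis from the traces scattered over the nodes."""
        hits = [(pos, node) for node, ts in traces.items()
                for sid, pos in ts if sid == sentence_id]
        return [node for _, node in sorted(hits)]

    store("s1", ["the", "dog", "sleeps"])
    store("s2", ["the", "cat", "sees", "the", "dog"])
    print(reconstruct("s1"))   # ['the', 'dog', 'sleeps']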

This thesis is organized as follows. Chapter 1 introduces the goals of my research and motivates the chosen approach. Chapter 2 introduces the Memory Prediction Framework and provides the neurobiological background for the neural theory of syntax. Chapter 3 covers some basic concepts from the field of (computational) linguistics, with a special focus on the parsing techniques that will be used in the HPN model. Chapter 4 contains a critical review of the literature on neural networks of language processing, within the context of the debate on the fundamental characteristics of structure in language: productivity and hierarchy.

In Chapters 5 to 8 I develop the HPN model in multiple stages. In order to quantitatively evaluate the predictions of the neural theory of syntax, I describe a computer implementation of HPN that makes it possible to run simulations based on tens of thousands of sentences. Chapter 5 introduces the basic HPN model without an episodic memory, and shows HPN's ability to learn a syntactic topology from simple, artificially generated sentences. Subsequently, in Chapters 6 and 7, I discuss an extended model (and computer implementation) that integrates an episodic memory with a semantic memory for language, yet for simplicity lacks a topology. I evaluate this model on a large number of realistic sentences with respect to its performance on syntactic sentence analysis. Finally, in Chapter 8 all the components of HPN are integrated within a single implementation, which demonstrates how an abstract grammar, in the form of a network topology, is constructed out of episodic experiences.
