Contents
1. Knowledge graphs and their typical applications
   1.1 Query understanding
   1.2 Automatic question answering
   1.3 Document representation
2. Key techniques for building knowledge graphs
   2.1 Entity recognition
   2.2 Entity disambiguation and entity linking
   2.3 Relation extraction
   2.4 Knowledge representation
   2.5 Event knowledge graphs

1. Typical applications of knowledge graphs

1.1 Query understanding (Query Understanding)
- A query is a typical short text: it usually consists of only a few words.
- Applications: spelling correction of queries, click-through-rate prediction for ads (CTR for ads).

1.2 Automatic question answering (Question Answering)
- Subtasks: similar-question retrieval, question classification, answer-quality prediction, answer summarization, question routing within the community, expert recommendation.
- Challenges: the input is short and noisy; useful extra signals include query logs and anchor text.

Distributed representations (deep learning)
- Use deep learning to improve individual components of the traditional pipeline, or
- build end-to-end models directly on distributed representations.

Relation classification with CNNs (Zeng et al., "Relation Classification via Convolutional Deep Neural Network", COLING 2014)
- Traditional methods need complex NLP preprocessing and hand-designed features (e.g., entity type, POS, parse tree).
- Instead, sentence-level features are fed directly into a CNN.

Piecewise CNN (Zeng et al., "Distant Supervision for Relation Extraction via Piecewise Convolutional Neural Networks", EMNLP 2015)
- Problem with the earlier single max pooling: the hidden-layer size shrinks too quickly and the resulting features are too coarse.
- Idea: introduce "redundancy" according to the positions of the two entities; piecewise max pooling exploits both internal and external contexts and yields fine-grained sentence-level features.

Multi-instance learning for distant supervision
- The CNN-based relation classifier can be regarded as an instance-level model.
- Suppose there are T bags {M_1, ..., M_T}, the i-th bag containing q_i instances; the objective function is defined over bags rather than over single sentences (under the at-least-one assumption, the most confident instance in each bag is used).

KB-QA by generating query graphs (Yih et al., "Semantic Parsing via Staged Query Graph Generation: Question Answering with Knowledge Base", ACL 2015; state of the art at the time)
- A query graph is a meaning representation that can be directly mapped to a logical form over the target KB.
- Query graph generation is cast as a search problem with staged states and actions.
- Running example: "Who first voiced Meg on Family Guy?"
- Step (1) Link the topic entity, using an advanced entity-linking system for short texts (Yang & Chang, "S-MART: Novel Tree-based Structured Learning Algorithms Applied to Tweet Entity Linking", ACL 2015): prepare a surface-form lexicon for the entities in the KB; all consecutive word sequences are entity-mention candidates, ranked by a statistical model; up to 10 top-ranked entities are kept as topic-entity candidates.
- Step (2) Identify the core inferential chain: the relation path between the topic entity x and the answer y. Two types of chains are explored: length-1 chains grounded to non-CVT nodes, and length-2 chains grounded through a CVT (compound value type) node.
- Step (3) Augment constraints: rules add constraints on the core inferential chain. If a word links to an entity, it can be added as an entity node; keywords such as "first" or "latest" can be added as aggregation constraints.

Summary: a new framework for semantic parsing of questions
- The query graph is a meaning representation mapped directly to a logical form over the target KB; query graph generation is staged search.
- New state of the art on WebQuestions, driven by advanced entity linking and a convolutional NN for relation matching.
- Future work: improve the current system (matching relations more robustly; handling constraints in a more principled way; a joint structured-output prediction model, e.g. SEARN [Daumé III 06]); extend the query graph to represent more complicated questions.
- Data & code: Sent2Vec (DSSM); system output at http://aka.ms/codalab-; intermediate files (entity linking, model files, training data, etc.) at http://aka.ms/stagg.
- End-to-end models built directly on distributed representations are the alternative to such staged parsing.

1.3 Document representation (Document Representation)
- The classic scheme is the vector space model (Vector Space Model), which represents a document as a bag of weighted terms.
- A knowledge-based alternative represents a document through the entities it mentions and their (complex) semantic relations, i.e., a graph-based representation (Schuhmacher et al.).
- Such knowledge-graph subgraphs have a much richer representation space than term vectors, and can lift document classification, summarization, and keyword extraction from string matching to knowledge-level understanding.

2. Key techniques for building knowledge graphs
- 2.1 Entity recognition; 2.2 entity disambiguation and entity linking; 2.3 relation extraction; 2.4 knowledge representation and reasoning; 2.5 event knowledge graphs.

2.1 Entity recognition
- Named entities: times, dates, currency amounts, and percentages, plus person, location, and organization names; some researchers also propose recognizing broader concept mentions.
- Times, dates, currency amounts, and percentages follow fairly regular surface patterns and are comparatively easy to recognize with rules.
- Person, location, and organization names use characters flexibly and are much harder to recognize.
- The same mention can take different entity types in different contexts, or only be an entity under certain conditions (e.g., 蘋果 "apple/Apple", 彩霞, 河南).
- Methods (Wu et al., EMNLP): all of them try to fully discover and exploit the context in which an entity occurs. Since each class of named entity has different characteristics, class-specific models are used: person names, for instance, are described with character-based models of their internal structure, and different types of foreign names differ considerably in the characters used, so modeling them together hurts.
- Common sequence models: MEMM, HMM, CRF.
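As noted above, times, dates, currency amounts, and percentages follow regular surface patterns, so a small rule-based recognizer covers them well, while person/location/organization names need statistical models. A minimal sketch of the rule-based idea (the patterns, tag names, and the `tag_numeric_entities` helper are illustrative, not from the slides):

```python
import re

# Illustrative patterns for the "easy" numeric entity classes; real systems
# use far richer grammars, but the rule-based idea is the same.
PATTERNS = [
    ("DATE",    re.compile(r"\b\d{4}-\d{2}-\d{2}\b")),
    ("MONEY",   re.compile(r"\$\d+(?:,\d{3})*(?:\.\d+)?(?:\s?(?:billion|million))?")),
    ("PERCENT", re.compile(r"\b\d+(?:\.\d+)?%")),
]

def tag_numeric_entities(text):
    """Return (type, surface form, start offset) for each rule-based match."""
    found = []
    for label, pat in PATTERNS:
        for m in pat.finditer(text):
            found.append((label, m.group(), m.start()))
    return sorted(found, key=lambda t: t[2])

print(tag_numeric_entities("On 2013-09-03 Microsoft paid $7.2 billion, a 40% premium."))
```

Name-like entities fail exactly this kind of pattern matching, which is why the slides turn to MEMM/HMM/CRF sequence models for them.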
Evaluation campaigns for NER
- International: MUC, SigHAN, CoNLL, IEER, ACE.
- The NER tracks of MUC-6 and MUC-7 greatly advanced English NER; MUC-6/7 also ran the multilingual entity task MET (Japanese, ...).
- SigHAN has run the Chinese word segmentation BAKEOFF since 2003, adding named entity recognition in 2006.
- The 863-program evaluations of "Chinese information processing and intelligent human-machine interface technology" were held in 2003 and 2004.

State of the art
- English: the Language Technology Group's system reached roughly 95% precision and 92% recall; many English NER systems can already process large-scale text.
- Chinese (MET-2): person, location, and organization precision/recall of about (92%, ...), (89%, 91%), and (89%, 88%) (Wu Youzheng, 2006); Levow (2006) reports per-category P/R/F (ORG, LOC, PER, GPE) for the SIGHAN bakeoff corpora, whose sizes range from roughly 100K to 1.6M tokens.
- In BAKEOFF-3, systems scored much higher on the MSRA and CITYU corpora than on the LDC corpus. One important reason: BAKEOFF-3 MSRA and CITYU provided sizable training sets while LDC supplied only a small one; in addition, training and test data were similar in topic and genre, which flattered all systems on MSRA and CITYU.
- In real application environments, NER performance degrades substantially.
- Off-the-shelf tools: CRF++, Stanford NER, HIT LTP.

Open-domain challenges
- Web text is non-standard and noisy; much of it does not even form natural-language sentences.
- Long product-like mentions must be handled, e.g., 摩托羅拉V8088折疊手機(jī) (Motorola V8088 folding phone), 第6屆蘇迪曼杯羽毛球混合團(tuán)體賽 (6th Sudirman Cup badminton mixed team championship).
- New resources must be developed: entity types are more numerous and finer-grained, and some classes are unknown in advance or evolve over time.

2.2 Entity disambiguation and entity linking (Entity Linking)
- Entity linking is not limited to text-to-entity links.
- Clustering-based disambiguation assumes that mentions of the same entity have similar contexts and clusters mentions accordingly. Features used:
  - bag of words (Bagga et al., COLING 1998)
  - semantic features (Pedersen et al., CICLing 2005)
  - social networks (Bekkerman et al., WWW 2005)
  - Wikipedia knowledge (Han and Zhao, CIKM 2009)
  - fusion of multi-source heterogeneous semantic knowledge (Han and Zhao, ACL 2010)

Clustering-based disambiguation: bag of words (Bagga et al., COLING 1998)
- Compute the similarity of two entity mentions with the vector space model over their contexts, then cluster.
- Example: MJ1 "Michael Jordan is a researcher in machine learning" vs. MJ2 "Michael Jordan plays basketball in the Chicago Bulls"; the context vectors {researcher, machine, learning} and {plays, basketball, Chicago, Bulls} do not overlap, so the mentions are separated.

Richer features
- Pedersen et al. (CICLing 2005) apply SVD to co-occurrence statistics of the contexts.
- Bekkerman et al. (WWW 2005) exploit social networks: MJ (basketball) co-occurs with Pippen, Buckley, Ewing, Kobe; MJ (machine learning) with Liang, Mackey, ...

Clustering-based disambiguation with Wikipedia (Han and Zhao, CIKM 2009)
- Builds on D. Milne and Ian H. Witten (2008): the more semantically related two Wikipedia concepts are, the more semantically related concepts they link to within the whole of Wikipedia.
- Mention similarity is computed through the relatedness of the Wikipedia concepts in their contexts (e.g., MJ1 "researcher in machine learning" grounds to Machine Learning; MJ2 "Research in Graphical Models" to Graphical Model), which resolves cases where the surface words do not overlap.
- On WePS, the structured semantic-relatedness kernel improves disambiguation accuracy by 10.7%.

Clustering-based disambiguation with multi-source heterogeneous knowledge (Han and Zhao, ACL 2010)
- Considering only Wikipedia is not enough; mining and integrating multi-source heterogeneous knowledge (including web pages) improves disambiguation.
- A semantic graph over concepts (equivalence links, concept linkage) models the hidden semantic relations between concepts; computation principle: "if the neighbor concepts of one concept are semantically related to another concept, then the two concepts are related."
- On the WePS dataset, using multi-source knowledge clearly improves disambiguation accuracy.

Clustering-based disambiguation: evaluation
- WePS (Web People Search); WePS-1 was a SemEval-2007 task.
- Task: personal-name disambiguation on the Web: given pages containing an ambiguous person name, cluster them by referent.
- Most existing work improves results by extending the feature set with ever more knowledge.

Entity linking
- Given an entity mention and the text it occurs in, link it to the corresponding entity in a given knowledge base.
- Example mention text: "Michael Jordan is a former NBA player, active businessman and majority owner of the Charlotte Bobcats." Candidate entities: Michael Jordan (basketball player), Michael Jordan (mycologist), Michael Jordan (footballer), Michael B. Jordan, Michael H. Jordan, Michael Jordan (Irish politician), ...

Acronym expansion (Zhang et al., IJCAI 2011)
- Acronym mentions are very common: in the KBP 2009 test data, 827 of the 3,904 mentions are acronyms.
- Acronym mentions are highly ambiguous, but their full forms usually are not (ABC vs. American Broadcasting Company; AI vs. Artificial Intelligence), and the full form often appears in the mention's document.
- Solution: expand acronyms, e.g., with hand-crafted extraction rules.

Linking by similarity
- Basic method: compute the similarity between the mention and each candidate entity and choose the most similar candidate.
  - bag-of-words models (Honnibal et al., TAC 2009; Bikel et al., TAC 2009)
  - plus candidate-entity category features (Bunescu and Pasca, EACL 2006)
  - plus candidate-entity popularity and other features (Han et al., ACL 2011)
- Collective (joint) linking of the mentions in one document:
  - category co-occurrence between entities (Cucerzan, EMNLP 2007)
  - link structure between entities (Kulkarni et al., KDD 2009)
  - semantic coherence among the entities of the same document (Han et al., SIGIR 2011)

Bag-of-words linking (Honnibal TAC 2009; Bikel TAC 2009)
- Represent the mention context and each candidate's text as bag-of-words vectors and compare them.

Category features (Bunescu and Pasca, EACL 2006)
- A candidate entity's text may be too short, making cosine similarity unreliable.
- Besides text similarity, use the co-occurrence of the mention's context words with the candidate's categories: John Williams (composer): Category = {Music, Art, ...}; John Williams (wrestler): Category = {Sport, ...}; John Williams (VC): ...
- Train an SVM to select among candidates; training data is obtained from Wikipedia hyperlinks.
- Example contexts: "Williams has also composed numerous classical concerti, and he served as the principal conductor of the Boston Pops Orchestra from 1980 to 1993" vs. "During his standout career Jordan also acts in the movie Space Jam".

Collective linking
- Mentions in the same document should be resolved jointly; a pairwise optimization strategy is used.
- Target-entity similarity from category overlap (Cucerzan, EMNLP 2007) or from link structure (Kulkarni et al., KDD 2009).

Deep-learning methods (He et al., ACL)
- In traditional methods, the similarity between the mention context and the target entity is computed over hand-designed features and misses the intrinsic semantics of the concepts; collaborative-filtering-style similarity has the same limitation.
- Deep models instead learn the representations of context and entity directly.

Entity linking in social data (Shen et al.)
- Social media (Twitter) is an important information source, but its contexts are short and informally written.
- Exploit the tweeting user's information and the interactions around the tweet.

TAC-KBP (2009-now): Entity Linking track
- Task: link target mentions in text to the corresponding concepts in Wikipedia.
- The 2013 evaluation reported micro-averaged accuracy.
- Current methods focus on mining the mention's information more effectively; highly ambiguous and sparsely described mentions remain difficult.

2.3 Relation extraction (Relation Extraction)
- Relation extraction is the core technique of knowledge graph construction; it determines the coverage and quality of the acquired knowledge.
- Stanford researchers proposed the idea of distant supervision (Distant Supervision): use an existing KB to label text automatically. Because distant supervision only mechanically matches sentences that contain an entity pair, it introduces noisy labels.
- The mainstream approach is statistical machine learning, casting relation instances as classification problems:
  - feature-vector methods: maximum entropy (Kambhatla 2004) and SVM (Zhao et al. 2005; Zhou et al. 2005; Jiang et al. 2007)
  - kernel methods: shallow tree kernel (Zelenko et al. 2003), dependency tree kernel (Culotta et al. 2004), shortest dependency path kernel (Bunescu et al. 2005), convolution tree kernel (Zhang et al. 2006; Zhou et al. 2007)
  - neural methods: recursive neural networks (Socher et al. 2012), matrix-vector recursive networks (Socher et al. 2012), convolutional networks (Zeng et al. 2014)
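The "cast relation instances as classification" idea above can be made concrete: a relation instance (a sentence plus two entity spans) is turned into features that a linear classifier such as the MaxEnt or SVM models cited above could consume. A toy sketch of the feature-vector family; the feature names and the `relation_features` helper are invented for illustration:

```python
# Turn one relation instance into sparse string features for a linear model.
def relation_features(tokens, e1, e2):
    """tokens: token list; e1, e2: (start, end) token spans of the two entities."""
    feats = {
        "e1_head": tokens[e1[1] - 1],              # entity lexical features
        "e2_head": tokens[e2[1] - 1],
        "between": "_".join(tokens[e1[1]:e2[0]]),  # word sequence between entities
        "dist": str(e2[0] - e1[1]),                # token distance, a crude structural cue
    }
    return {f"{k}={v}" for k, v in feats.items()}

toks = "Jordan was born in Brooklyn".split()
print(sorted(relation_features(toks, (0, 1), (4, 5))))
```

The kernel and neural families discussed next replace exactly this hand-built feature step: kernels compare parse structures directly, and CNNs learn the sentence-level features.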
Feature-vector methods
- Key problem: how to obtain effective lexical, syntactic, and semantic features and combine them.
- Feature selection draws surface and structural features from the free text and its parse: entity words and their context, entity types and type combinations, the way entities are referenced, overlap features, base phrase chunks, and parse-tree features.

Kernel methods
- Key problem: how to effectively mine the structured information that reflects a semantic relation, and how to compute structural similarity effectively.
- Convolution tree kernel: measure the similarity of two parse trees by the number of their common subtrees.
- The standard convolution tree kernel compares subtrees in isolation, ignoring their context; the context-sensitive convolution tree kernel (CS-CTK) also considers ancestor information, such as the parent of the subtree root.

Neural methods
- Key problem: how to design a network structure that captures more of the relevant information.
- Recursive networks are built along the sentence's syntactic structure, so they depend on complex parsing.
- Convolutional networks capture sentence-level information through convolution and need no complex NLP preprocessing.

CNN-based relation extraction (Zeng et al., COLING 2014, Best Paper)
- Example S: "2013年4月20日8時(shí)02分四川省雅安市[蘆山縣]e1發(fā)生了7.0級(jí)[地震]e2" → classify the relation between 蘆山縣 and 地震 (epicenter-of, as in "汶川地震震中在汶川").
- Traditional features: entity types (Noun for m1, Location for m2), parse path (Location-VP-PP-Noun), kernel features.
  - Problem 1: many languages lack the NLP tools needed for such features.
  - Problem 2: NLP pipelines introduce accumulated errors.
  - Problem 3: hand-designing features is expensive.
- Neural feature extraction instead:
  - word embeddings capture lexical semantics;
  - lexical-level features: the semantics of the entity words themselves;
  - sentence-level features: a CNN over the sentence using word features plus position features (WF + PF).
- Evaluated on the SemEval-2010 Task 8 English data against feature sets using POS, prefixes, Levin classes, morphological features, WordNet, FrameNet, dependency parses, NomLex-Plus, and context features.

Comparison of the three families
- Feature-vector methods: fast, suitable for large-scale settings, but their performance is hard to improve further.
- Kernel methods: tree-kernel training and prediction are slow, limiting their use on large data.
- Neural methods: avoid feature engineering, but are limited by the available training data.

Limitations of supervised relation extraction
- Restricted to a predefined inventory of relation types and entity types; annotated resources must be developed; open domains contain many more relation types.

Open-domain entity extraction (set expansion)
- Not limited to fixed entity types. Given seeds such as <中國(guó), 美國(guó), 俄羅斯> (China, USA, Russia), find other countries: <德國(guó), 英國(guó), 法國(guó)> (Germany, UK, France).
- Basic idea: seed words and target words occur in identical or similar contexts and page structures on the Web.
- Bootstrapping, Step 1: induce extraction templates from the contexts of the seed words.
- Step 2: apply the templates to extract candidate instances; candidates can in turn serve as new seeds.
- Different data sources can be used: query logs, web documents, knowledge-base documents.

Query logs (Pasca, CIKM 2007)
- Learn templates by analyzing the contexts of the seed instances in the query log, then apply them to find new instances.
- Example: queries such as "聯(lián)想筆記本…" (Lenovo laptop), "蘋果筆記本…" (Apple laptop), "戴爾筆記本…" (Dell laptop) share a common context around the brand slot.

Web pages (Wang et al., ICDM 2007)
- In web lists, seeds and target entities share the same page structure.
- Crawler module: submit the seeds to a search engine and fetch the top 100 returned pages as the corpus.
- Extraction module: learn a wrapper for each individual page and apply it to extract candidates.
- Ranking module: build a graph over seeds, pages, wrappers, and candidates; score and rank candidates with a random walk that also accounts for page quality.

Evaluation
- There is no widely accepted benchmark for instance expansion; researchers report on self-built datasets.
- Because the system output is a ranked list, plain precision is not informative enough; the MAP (mean average precision) measure from TREC is used: Prec(r) is the precision of the list up to rank r, and NewEntity(r) indicates whether the entity at rank r is a correct new entity.
- Wang (2007) reports results on 12 self-built datasets, using the top 100 pages as the corpus.
- Summary: methods generally consist of a template-extraction module and a candidate-confidence module, and work in an unsupervised fashion.

2.4 Knowledge representation (Knowledge Representation)
- The storage of knowledge: storing relations and entities in a graph database better tells a connected story than isolated records.
- Knowledge graph reasoning: inductive logic programming (Inductive Logic Programming), probabilistic graphical models (Probabilistic Graphical Models), Markov logic networks (Markov Logic Networks), probabilistic soft logic (Probabilistic Soft Logic).
- Logic-based representations are expressive, human-readable, and give precise results; but as knowledge bases grow very large (e.g., Freebase), logical inference is hard to scale.
- Distributed representations: each entity and relation vector is obtained by optimizing an objective over the whole knowledge base, so global information is encoded into the representations and is reflected during reasoning.
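The distributed-representation idea just described can be illustrated with a minimal TransE-style scoring sketch: every entity and relation is a vector, and a triple (h, r, t) is plausible when h + r ≈ t. The 2-d vectors below are hand-picked for illustration, not trained embeddings; real TransE learns them with a margin-based ranking loss over corrupted triples:

```python
# Minimal TransE-style scoring: score(h, r, t) = -||h + r - t||_1.
# Vectors are tiny hand-picked illustrations, not trained embeddings.
emb_entity = {
    "USA":   [1.0, 0.0],
    "Obama": [1.0, 1.0],
    "Paris": [0.0, 3.0],
}
emb_relation = {
    "_president": [0.0, 1.0],   # translation vector: head + relation ≈ tail
}

def score(h, r, t):
    """Higher (closer to 0) means the triple is more plausible."""
    return -sum(abs(hv + rv - tv)
                for hv, rv, tv in zip(emb_entity[h], emb_relation[r], emb_entity[t]))

# (USA, _president, Obama) should outscore (USA, _president, Paris)
print(score("USA", "_president", "Obama"), score("USA", "_president", "Paris"))
```

During training, vectors are adjusted so that observed triples score higher than randomly corrupted ones, which is exactly the "optimize an objective over the whole KB" property noted above.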
Logic-based vs. distributed representations
- Reasoning scope: local vs. global.
- Reasoning efficiency: low vs. high.
- Human readability: easy vs. difficult.
- Cross-domain transfer: hard (expert-designed seed rules are needed) vs. easy.

Rule-based reasoning example
- 西蘭花含鈣 (broccoli contains calcium); 鈣能有效預(yù)防骨質(zhì)疏鬆 (calcium effectively prevents osteoporosis) → 西蘭花可以防止骨質(zhì)疏鬆 (broccoli helps prevent osteoporosis).
- Rule: vegetables containing an element that prevents osteoporosis can prevent osteoporosis; the answer fact is derived on a semantics tree by applying such inference rules.

Path Ranking Algorithm (Lao et al., EMNLP 2011)
- Learn a per-relation classifier from the link features of entity pairs in the graph, yielding inference rules that map paths to relations.
- Features: the link paths connecting an entity pair in the knowledge graph; high-weight path features become the relation's inference paths.
- Efficient path finding: longer paths and paths with backward random walks (Lao et al.).
- Adding constraints on the random walk improves precision; searching the path forward and backward together improves interpretability.

Path-Constrained Random Walk
- P(s→t | π) is calculated by dynamic programming or particle filtering.
- Forward calculation of P(s→t | π) for all possible paths π is either very expensive or non-exhaustive: with O(10^2) computation cost there is only a 1% chance of finding the target in an O(10^4) space.
- Therefore combine forward and backward random walks.

Distributed representation learning
- Represent head and tail entities as vectors: symbols vs. vectors.
- RESCAL (Nickel, Tresp, and Kriegel, ICML 2011): collective learning on multi-relational data by tensor factorization.

TransE (Bordes et al., NIPS 2013)
- For each triple (head, relation, tail), learn the relation as a translation from head to tail; learning objective: h + r ≈ t.
- Example: (USA, _president, Obama): v(USA) + v(_president) ≈ v(Obama).
- Weakness: complex 1-to-N, N-to-1, and N-to-N relations are handled poorly.

TransH and TransR
- TransH builds relation-specific entity projections on relation hyperplanes (Wang et al., "Knowledge graph embedding by translating on hyperplanes", AAAI 2014).
- TransR (Lin et al., AAAI 2015) projects entities with a relation-specific matrix M_r, separating the entity space from per-relation spaces; both address TransE's trouble with non-1-to-1 relations.
- Both are evaluated on link prediction (entity prediction).

Path-based: PTransE (Lin et al., "Modeling Relation Paths for Representation Learning of Knowledge Bases", EMNLP 2015)
- Extends translation-based models with multi-step relation paths, improving entity prediction over single-relation baselines.

Relation heterogeneity and imbalance (Ji et al., AAAI 2016)
- In the FB15k dataset, 37.8% of relations link only a handful of entity pairs, while only 29.6% of relations link more than 100.
- Heterogeneity: some relations link many entity pairs (complex relations) while others link very few (simple relations).
- Imbalance: for some relations the numbers of distinct head entities and tail entities differ greatly.
- Previous methods give every relation a transfer matrix with the same degrees of freedom; relations of different complexity should instead be modeled with different capacity (the motivation for adaptive sparse transfer matrices).

Text plus KG embeddings (Weston et al., NYT + Freebase)
- A KG contains rich information beyond the text; mentions and relations are scored jointly.
- Scoring: S_m2r(m, r) = f(m)^T r, with f a function mapping words and features into R^k, f(m) = W^T Φ(m); W ∈ R^{n_v × k} contains all the embeddings w, Φ(m) ∈ R^{n_v} is the sparse binary indicator of the words/features present in mention m, and r ∈ R^k is the embedding of the relationship r.

Holographic embeddings (HolE)
- Uses circular correlation to combine the expressive power of the tensor product with the efficiency and simplicity of TransE.
- Circular correlation can be interpreted as a compression of the tensor product; semantically similar relations can share weights.

Entity descriptions (DKRL): "Representation Learning of Knowledge Graphs with Entity Descriptions"
- Enhance entity representations with textual descriptions, modeling the descriptions with a CNN.

2.5 Event knowledge graphs
- Within one document: Joint Inference for Event Timeline Construction (Quang Xuan Do et al., ACL 2012).
- Across documents: Building Event Threads out of Multiple News Articles (Xavier Tannier et al., EMNLP 2013).
- Generating Event Storylines from Microblogs (Li et al.).
- Mining the Web to Predict Future Events (Kira Radinsky et al., WSDM 2013).
- Using Structured Events to Predict Stock Price Movement: An Empirical Investigation (Ding et al., EMNLP 2014).

Event definition and extraction
- An event is defined as a tuple (Actor, Action, Object, Time). The event "Sep 3, 2013 - Microsoft agrees to buy Nokia's mobile phone business for $7.2B" becomes (Actor = Microsoft, Action = buy, Object = Nokia's mobile phone business, Time = Sep 3, 2013).
- Goal: extract structured event information from massive news data, e.g., "Private sector adds 114,000 jobs in …" → (private sector, adds, 114,000 jobs, …), using open extraction over bag-of-words patterns such as O1+P, P+O2, and O1+P+O2.

Challenges
- From clean, well-edited text to noisy, redundant, massive web data.
- Joint extraction that accounts for the interactions among the parts of an event, rather than pipelined extraction.
- From a fixed inventory of event classes to open classes.
- Event taxonomies: existing event type hierarchies are hand-built and small in scale.
- Event relations: events are not independent of each other; their relations need to be discovered and modeled.
- Event extraction is usually cast as a multi-class classification problem.

Other applications
- Anti-fraud and credit scoring: extract features from both the user side and the entity side; examples come from inclusive finance (普惠金融), e.g., CreditEase (宜信).

References
- Bagga, A. and Baldwin, B. Entity-based cross-document coreferencing using the vector space model. COLING 1998.
- Banko, M., Cafarella, M., Soderland, S., Broadhead, M., and Etzioni, O. Open information extraction from the web. IJCAI 2007.
- Bekkerman, R. and McCallum, A. Disambiguating web appearances of people in a social network. WWW 2005.
- Bikel, D. et al. Entity linking and slot filling through statistical processing and inference rules. TAC 2009.
- Bunescu, R. and Pasca, M. Using encyclopedic knowledge for named entity disambiguation. EACL 2006.
- Cucerzan, S. Large-scale named entity disambiguation based on Wikipedia data. EMNLP 2007.
- Zhou, G., Su, J., Zhang, J., and Zhang, M. Exploring various knowledge in relation extraction. ACL 2005.
- Buitelaar, P., Cimiano, P., and Magnini, B. Ontology Learning from Text: Methods, Evaluation and Applications. Frontiers in Artificial Intelligence and Applications, 2005.
- Ponzetto, S.P. and Strube, M. Deriving a large scale taxonomy from Wikipedia. AAAI 2007.
- Ponzetto, S.P. and Strube, M. WikiTaxonomy: A large scale knowledge resource. ECAI 2008.
- Ponzetto, S.P. and Navigli, R. Large-scale taxonomy mapping for restructuring and integrating Wikipedia. IJCAI 2009.
- Nastase, V., Strube, M., Boerschinger, B., Zirn, C., and Elghafari, A. WikiNet: A very large scale multi-lingual concept network. LREC 2010.
- de Melo, G. and Weikum, G. MENTA: Inducing multilingual taxonomies from Wikipedia. CIKM 2010.
- Navigli, R. and Ponzetto, S.P. BabelNet: the automatic construction, evaluation and application of a wide-coverage multilingual semantic network. Artificial Intelligence, 2012.
- Wang, Z., Li, J., et al. Cross-lingual knowledge validation based taxonomy derivation from heterogeneous online wikis. AAAI 2014.
- Fountain, T. and Lapata, M. Taxonomy induction using hierarchical random graphs. NAACL 2012.
- Navigli, R., Velardi, P., and Faralli, S. A graph-based algorithm for inducing lexical taxonomies from scratch. IJCAI 2011.
- Cimiano, P., Hotho, A., and Staab, S. Learning concept hierarchies from text corpora using formal concept analysis. JAIR 2005.
- Kozareva, Z. and Hovy, E. A semi-supervised method to learn and construct taxonomies using the Web. EMNLP 2010.
- Yang, H. and Callan, J. A metric-based framework for automatic taxonomy induction. ACL 2009.
- Wu, W., Li, H., et al. Probase: A probabilistic taxonomy for text understanding. SIGMOD 2012.
- Davidov, D., Rappoport, A., and Koppel, M. Fully unsupervised discovery of concept-specific relationships by web mining. ACL 2007.
- Hovy, E., Kozareva, Z., and Riloff, E. Toward completeness in concept extraction and classification. EMNLP 2009.
- Kozareva, Z., Riloff, E., and Hovy, E. Semantic class learning from the web with hyponym pattern linkage graphs. ACL 2008.
- Pantel, P. and Pennacchiotti, M. Espresso: Leveraging generic patterns for automatically harvesting semantic relations. ACL 2006.
- Lee, T., Wang, Z., Wang, H., and Hwang, S. Web scale taxonomy cleansing. VLDB 2011.
- Ritter, A., Soderland, S., and Etzioni, O. What is this, anyway: Automatic hypernym discovery. AAAI 2009.
- Klapaftis, I.P. and Manandhar, S. Taxonomy learning using word sense induction. NAACL-HLT 2010.
- Kulkarni, S. et al. Collective annotation of Wikipedia entities in web text. KDD 2009.
- Han, X. and Zhao, J. Named entity disambiguation by leveraging Wikipedia semantic knowledge. CIKM 2009.
- Han, X. and Zhao, J. Structural semantic relatedness: A knowledge-based method to named entity disambiguation. ACL 2010.
- Han, X. and Sun, L. A generative entity-mention model for linking entities with knowledge base. ACL 2011.
- Han, X. et al. Collective entity linking in web text: A graph-based method. SIGIR 2011.
- Levow, G.-A. The third international Chinese language processing bakeoff: Word segmentation and named entity recognition. Fifth SIGHAN Workshop on Chinese Language Processing, 2006.
- Medelyan, O., Witten, I.H., and Milne, D. Topic indexing with Wikipedia. AAAI 2008 Workshop on Wikipedia and Artificial Intelligence (WIKIAI 2008).
- Mihalcea, R. and Csomai, A. Wikify!: linking documents to encyclopedic knowledge. CIKM 2007.
- Milne, D. and Witten, I. Learning to link with Wikipedia. CIKM 2008.
- Pedersen, T., Purandare, A., and Kulkarni, A. Name discrimination by clustering similar contexts. CICLing 2005.
- Wu, Y., Zhao, J., and Xu, B. Chinese named entity recognition model based on multiple features. HLT/EMNLP 2005.
- Wu, F. and Weld, D. Autonomously semantifying Wikipedia. CIKM 2007.
- Wu, F. and Weld, D. Open information extraction using Wikipedia. ACL 2010.
- Zhao, J. and Liu, F. Product named entity recognition in Chinese texts. Language Resources and Evaluation, 42(2):132-152, 2008.
- Zhang, W. et al. Entity linking with effective acronym expansion, instance selection and topic modeling. IJCAI 2011.
- Zhang, M., Zhang, J., and Su, J. Exploring syntactic features for relation extraction using a convolution tree kernel. HLT-NAACL 2006.
- NIST. Automatic Content Extraction (ACE) evaluation official results, 2005 and 2007.
- McNamee, P. Overview of the TAC 2009 knowledge base population track. TAC 2009.
- Poon, H. Markov Logic for Machine Reading. Ph.D. dissertation, University of Washington.
- Schoenmackers, S. Inference Over the Web. Ph.D. dissertation, University of Washington.
- Schoenmackers, S., Davis, J., Etzioni, O., and Weld, D. Learning first-order Horn clauses from web text. EMNLP 2010.
- Schoenmackers, S., Etzioni, O., and Weld, D. Scaling textual inference to the web. EMNLP 2008.
- Nickel, M., Tresp, V., and Kriegel, H.-P. A three-way model for collective learning on multi-relational data. ICML 2011.
- Nickel, M., Tresp, V., and Kriegel, H.-P. Factorizing YAGO: Scalable machine learning for linked data. WWW 2012.
- Nickel, M. and Tresp, V. Logistic tensor factorization for multi-relational data. ICML 2013.
- Nickel, M. and Tresp, V. Tensor factorization for multi-relational learning. Machine Learning and Knowledge Discovery in Databases, Springer, 2013.
- Nickel, M., Murphy, K., Tresp, V., and Gabrilovich, E. A review of relational machine learning for knowledge graphs. Proceedings of the IEEE, 2015.
- Bordes, A., Weston, J., Collobert, R., and Bengio, Y. Learning structured embeddings of knowledge bases. AAAI 2011.
- Bordes, A., Usunier, N., Garcia-Duran, A., Weston, J., and Yakhnenko, O. Translating embeddings for modeling multi-relational data. NIPS 2013.
- Wang, Z., Zhang, J., Feng, J., and Chen, Z. Knowledge graph embedding by translating on hyperplanes. AAAI 2014.
- Lin, Y., Liu, Z., Sun, M., Liu, Y., and Zhu, X. Learning entity and relation embeddings for knowledge graph completion. AAAI 2015.
- Bordes, A., Glorot, X., Weston, J., and Bengio, Y. Joint learning of words and meaning representations for open-text semantic parsing. AISTATS 2012.
- Bordes, A., Glorot, X., Weston, J., and Bengio, Y. A semantic matching energy function for learning with multi-relational data. Machine Learning, 94(2):233-259.
- Jenatton, R., Le Roux, N., Bordes, A., and Obozinski, G. A latent factor model for highly multi-relational data. NIPS 2012.
- Sutskever, I., Salakhutdinov, R., and Tenenbaum, J.B. Modelling relational data using Bayesian clustered tensor factorization. NIPS 2009.
- Socher, R., Chen, D., Manning, C.D., and Ng, A.Y. Reasoning with neural tensor networks for knowledge base completion. NIPS 2013.
- Ji, G., He, S., Xu, L., Liu, K., and Zhao, J. Knowledge graph embedding via dynamic mapping matrix. ACL 2015.
- Guo, S., Wang, Q., Wang, B., Wang, L., and Guo, L. Semantically smooth knowledge graph embedding. ACL 2015.
- Lin, Y., Liu, Z., Luan, H., Sun, M., Rao, S., and Liu, S. Modeling relation paths for representation learning of knowledge bases. EMNLP 2015.
- Ahn, D. The stages of event extraction. Workshop on Annotating and Reasoning about Time and Events, 2006.
- Hong, Y., Zhang, J., Ma, B., Yao, J., Zhou, G., and Zhu, Q. Using cross-entity inference to improve event extraction. ACL-HLT 2011.
- Ji, H. and Grishman, R. Refining event extraction through cross-document inference. ACL 2008.
- Li, Q., Ji, H., and Huang, L. Joint event extraction via structured prediction with global features. ACL 2013.
- Li, Q., Ji, H., Hong, Y., and Li, S. Constructing information networks using one single model. EMNLP 2014.
- Liao, S. and Grishman, R. Using document level cross-event inference to improve event extraction. ACL 2010.
- Zeng, D., Liu, K., Lai, S., Zhou, G., and Zhao, J. Relation classification via convolutional deep neural network. COLING 2014.
- He, S., Liu, K., Zhang, Y., and Zhao, J. Question answering over linked data using first order logic. EMNLP 2014.
- Zeng, D. et al. Distant supervision for relation extraction via piecewise convolutional neural networks. EMNLP 2015.
- Wei, Z., Zhao, J., Liu, K., Qi, Z., et al. Large-scale knowledge base completion: Inferring via grounding network sampling over selected instances. CIKM 2015.
- He, S., Liu, K., and Zhao, J. Learning to represent knowledge graphs with Gaussian embedding. CIKM 2015.
- Liu, S., Liu, K., and Zhao, J. A probabilistic soft logic based approach to exploit latent and global information in event classification. AAAI 2016.
- Ji, G., He, S., Liu, K., and Zhao, J. Knowledge graph completion with adaptive sparse transfer matrix. AAAI 2016.
- Zhang, Y., He, S., Liu, K., and Zhao, J. A joint model for question answering over multiple knowledge bases. AAAI 2016.
- Xu, C., et al. Research on Markov logic networks (in Chinese). Journal of Software.
- 863 Program evaluation group for Chinese information processing and intelligent human-machine interface technology. Named entity evaluation results report. Beijing, 2004.
- Wu, Y. Research on key technologies of question answering. Ph.D. thesis (in Chinese), Institute of Automation, Chinese Academy of Sciences, 2006.
- Zelle, J.M. and Mooney, R.J. Learning to parse database queries using inductive logic programming. AAAI 1996.
- Wong, Y.W. and Mooney, R.J. Learning synchronous grammars for semantic parsing with lambda calculus. ACL 2007.
- Lu, W., Ng, H.T., Lee, W.S., and Zettlemoyer, L.S. A generative model for parsing natural language to meaning representations. EMNLP 2008.
- Zettlemoyer, L.S. and Collins, M. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. UAI 2005.
- Clarke, J., Goldwasser, D., Chang, M.-W., and Roth, D. Driving semantic parsing from the world's response. CoNLL 2010.
- Liang, P., Jordan, M.I., and Klein, D. Learning dependency-based compositional semantics. ACL 2011.
