Nutch Crawler System Analysis

Contents

1 Nutch overview
1.1 Nutch architecture
2 Crawling
2.1 The crawler's data structures and their meaning
2.2 Crawl output directories
2.3 Crawl process overview
2.4 Crawl process analysis
2.4.1 The inject method
2.4.2 The generate method
2.4.3 The fetch method
2.4.4 The parse method
2.4.5 The update method
2.4.6 The invert method
2.4.7 The index method
2.4.8 The dedup method
2.4.9 The merge method
3 Configuration file analysis
3.1 nutch-default.xml analysis
3.1.1 <!-- file properties -->
3.1.2 <!-- http properties -->
3.1.3 <!-- ftp properties -->
3.1.4 <!-- web db properties -->
3.1.5 <!-- generate properties -->
3.1.6 <!-- fetcher properties -->
3.1.7 <!-- indexer properties -->
3.1.8 <!-- indexingfilter plugin properties -->
3.1.9 <!-- analysis properties -->
3.1.10 <!-- searcher properties -->
3.1.11 <!-- url normalizer properties -->
3.1.12 <!-- mime properties -->
3.1.13 <!-- plugin properties -->
3.1.14 <!-- parser properties -->
3.1.15 <!-- urlfilter plugin properties -->
3.1.16 <!-- scoring filters properties -->
3.1.17 <!-- clustering extension properties -->
3.1.18 <!-- ontology extension properties -->
3.1.19 <!-- query-basic plugin properties -->
3.1.20 <!-- creative-commons plugin properties -->
3.1.21 <!-- query-more plugin properties -->
3.1.22 <!-- microformats-reltag plugin properties -->
3.1.23 <!-- language-identifier plugin properties -->
3.1.24 <!-- temporary hadoop 0.17.x workaround. -->
3.1.25 <!-- response writer properties -->
3.2 regex-urlfilter.txt analysis
3.3 regex-normalize.xml analysis
3.4 Summary
4 References

1 Nutch overview

1.1 Nutch architecture

2 Crawling

2.1 The crawler's data structures and their meaning

The crawler system is driven by Nutch's crawl tool.

That tool also ties the construction and maintenance of several kinds of data structures to a series of tools: the web database, a set of segments, and the index. We describe them in detail next. Physically, the three are stored under the crawl output directory, in the crawldb folder, the segments folder, and the index folder respectively. What, then, does each of the three store?

The web database, also called the webdb, stores the link structure among the pages the crawler fetches; it is used only in the crawler's work and has nothing to do with the searcher's. The webdb stores two kinds of entities: page and link. A page entity represents an actual web page by describing its characteristic features; since there are many pages to describe, the webdb indexes the page entities in two ways, by the page's URL and by the MD5 of the page's content. The features a page entity describes mainly include the number of links inside the page, fetch-related information such as the time the page was fetched, and an importance score for the page. A link entity, in turn, describes the link relation between two page entities. The webdb thus forms a link-structure graph of the fetched pages, in which the page entities are the graph's nodes and the link entities its edges.

One crawl produces many segments. Each segment stores the pages the crawler fetched in a single crawl cycle, together with the index of those pages. When crawling, the crawler follows the link relations in the webdb and a given crawl policy to generate the fetchlist each crawl cycle needs; the fetcher then fetches and indexes the pages at the URLs in the fetchlist and stores them into the segment. Segments are time-limited: once their pages are re-fetched by the crawler, the segment produced by the earlier fetch becomes obsolete. On disk, segment folders are named by their creation time, which makes it convenient to delete obsolete segments and save storage space.

The index is the index of all pages the crawler has fetched; it is obtained by merging the indexes in all the individual segments. Nutch indexes with Lucene, so Lucene's interfaces for manipulating an index work on Nutch's index as well. Note, though, that a segment in Lucene is not the same as a segment in Nutch: a Lucene segment is a part of an index, whereas a Nutch segment is just the content and index of one portion of the webdb's pages, and the index finally generated from those segments no longer has any relation to them.
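To make the page/link model concrete, here is a small illustrative sketch in Java. These types are invented for the example; they model the webdb entities described above and are not Nutch's actual classes:

import java.util.ArrayList;
import java.util.List;

public class WebDbSketch {
    // A node of the link graph: one page, indexed by URL and by content MD5.
    static class Page {
        String url;          // first index key
        String contentMd5;   // second index key: MD5 of the page content
        int outlinkCount;    // number of links found inside the page
        long fetchTime;      // when the page was fetched
        float score;         // the page's importance score
    }

    // An edge of the link graph: a link between two page entities.
    static class Link {
        String fromUrl;      // source page
        String toUrl;        // target page
    }

    public static void main(String[] args) {
        Page p = new Page();
        p.url = "http://example.com/";
        p.outlinkCount = 1;
        p.score = 1.0f;

        Link l = new Link();
        l.fromUrl = p.url;
        l.toUrl = "http://example.com/about";

        List<Object> graph = new ArrayList<Object>();
        graph.add(p);  // node
        graph.add(l);  // edge
        System.out.println("graph entities: " + graph.size());
    }
}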

2.2 Crawl output directories

A crawl produces five folders in total (a sketch of the resulting layout follows this list):

- crawldb: stores the downloaded URLs together with their download dates, used to time page-update checks.
- linkdb: stores the interlinking relations among URLs, obtained by analysis after downloading completes.
- segments: stores the fetched pages. The number of subdirectories below it relates to the number of page levels fetched; normally each level of pages gets its own subdirectory, named by timestamp for easier management. For example, this run fetched only one level, so only the 20090508173137 directory was created. Each such subdirectory in turn holds six folders:
  - content: the content of each downloaded page.
  - crawl_fetch: the status of each downloaded URL.
  - crawl_generate: the set of URLs awaiting download.
  - crawl_parse: the outlink records used to update the crawldb.
  - parse_data: the outlinks and metadata parsed from each URL.
  - parse_text: the parsed text content of each URL.
- indexes: stores an independent index directory for each download round.
- index: a Lucene-format index directory, the complete index produced by merging everything under indexes.
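Putting these pieces together, a one-level crawl like the run analyzed in this document leaves a layout along these lines (directory names taken from the text; an illustrative sketch, not verbatim tool output):

20090508/
├── crawldb/
├── linkdb/
├── segments/
│   └── 20090508173137/
│       ├── content/
│       ├── crawl_fetch/
│       ├── crawl_generate/
│       ├── crawl_parse/
│       ├── parse_data/
│       └── parse_text/
├── indexes/
└── index/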

2.3 Crawl process overview

Nine classes are chiefly involved (a sketch of driving them in one shot follows the list):

1. nutch.crawl.Injector: the injector, which adds URLs to the crawl database.
2. nutch.crawl.Generator: the generator, which produces the list of pending download tasks.
3. nutch.fetcher.Fetcher: the fetcher, which downloads the designated pages.
4. nutch.parse.ParseSegment: the parser, which extracts content and parses it for lower-level URLs.
5. nutch.crawl.CrawlDb: the management tool for the crawl database.
6. nutch.crawl.LinkDb: manages the links.
7. nutch.indexer.Indexer: the indexer, which creates the index.
8. nutch.indexer.DeleteDuplicates: removes duplicate data.
9. nutch.indexer.IndexMerger: the index merger, which merges the partial index of the current download round with the historical index.
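These nine tools are what Nutch's one-shot crawl command drives in sequence. A minimal sketch of kicking off such a crawl programmatically, assuming the standard Nutch 1.0 Crawl entry point and mirroring the directory, depth, and topN of the run analyzed here:

import org.apache.nutch.crawl.Crawl;

public class CrawlDriver {
    public static void main(String[] args) throws Exception {
        // Equivalent to: bin/nutch crawl urls -dir 20090508 -depth 1 -topN 50
        // "urls" is the seed directory; the values mirror this document's run.
        Crawl.main(new String[] {
            "urls", "-dir", "20090508", "-depth", "1", "-topN", "50"
        });
    }
}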

2.4 Crawl process analysis

The crawler works as follows. First, the crawler generates from the webdb a set of URLs to fetch, called the fetchlist; the download threads (fetchers) then fetch the pages on the fetchlist. If there are many download threads, many fetchlists are generated, one fetchlist per fetcher. The crawler then updates the webdb with the fetched pages and, from the updated webdb, generates a new fetchlist containing the URLs not yet fetched or newly discovered, and the next crawl cycle begins. This cycle can be called the generate/fetch/update loop.

URLs pointing to web resources on the same host are normally assigned to the same fetchlist, which prevents too many fetchers from fetching from one host at the same time and overloading it. Nutch also obeys the robots exclusion protocol, so a site can control the crawler through its own robots.txt.

In Nutch, the crawler is implemented through a series of sub-operations, each of which Nutch also exposes as its own subcommand that can be invoked independently. Below are descriptions of these sub-operations and their command lines, given in parentheses; a sketch of the resulting loop follows the list.

1. Create a new webdb (admin db -create).
2. Write the seed URLs into the webdb (inject).
3. Generate a fetchlist from the webdb and write it into a new segment (generate).
4. Fetch the pages named by the URLs in the fetchlist (fetch).
5. Update the webdb from the fetched pages (updatedb).
6. Repeat steps 3-5 until the preset crawl depth is reached.
7. Analyze the link relations and generate inverted links (this step is specific to 1.0; its exact role?).
8. Index the fetched pages (index).
9. Discard pages with duplicate content, and duplicate URLs, from the index (dedup).
10. Merge the indexes in the segments into the final index used for search (merge).
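The loop in steps 2-6 can be sketched in code. This is a condensed illustration, assuming the Nutch 1.0 tool classes from section 2.3 can each be driven through Hadoop's ToolRunner, as their command-line forms suggest; findNewestSegment is a hypothetical helper, and paths mirror this document's run:

import java.util.Arrays;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.ToolRunner;
import org.apache.nutch.crawl.CrawlDb;
import org.apache.nutch.crawl.Generator;
import org.apache.nutch.crawl.Injector;
import org.apache.nutch.fetcher.Fetcher;
import org.apache.nutch.parse.ParseSegment;
import org.apache.nutch.util.NutchConfiguration;

public class GenerateFetchUpdateLoop {
    public static void main(String[] args) throws Exception {
        Configuration conf = NutchConfiguration.create();
        String crawldb = "20090508/crawldb";
        String segments = "20090508/segments";

        // Step 2: inject the seed URLs from the urls/ directory into the crawldb.
        ToolRunner.run(conf, new Injector(), new String[] { crawldb, "urls" });

        int depth = 1; // this document's run fetched a single level
        for (int i = 0; i < depth; i++) {
            // Step 3: generate a fetchlist; Generator creates a new timestamped
            // segment under segments/.
            ToolRunner.run(conf, new Generator(),
                new String[] { crawldb, segments, "-topN", "50" });
            String segment = findNewestSegment(segments); // hypothetical helper

            // Step 4: fetch the pages listed in the segment's crawl_generate.
            ToolRunner.run(conf, new Fetcher(), new String[] { segment });

            // Parse the fetched content (run separately when fetching with -noParsing).
            ToolRunner.run(conf, new ParseSegment(), new String[] { segment });

            // Step 5: fold the fetch results back into the crawldb.
            ToolRunner.run(conf, new CrawlDb(), new String[] { crawldb, segment });
        }
    }

    // Hypothetical helper: pick the newest (lexicographically last) segment dir.
    static String findNewestSegment(String segmentsDir) {
        String[] names = new java.io.File(segmentsDir).list();
        Arrays.sort(names);
        return segmentsDir + "/" + names[names.length - 1];
    }
}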

The crawler's detailed workflow is this: after a webdb is created (step 1), the generate/fetch/update loop (steps 3-6) starts up from a handful of seed URLs. When that loop has run to completion, the crawler builds an index from the segments generated during the crawl (steps 8-10). Each segment's index is independent (step 8) until the duplicate URLs are cleaned out (step 9); finally, the individual segment indexes are merged into one final index (step 10).

One detail deserves attention here. The dedup operation mainly removes duplicate URLs from the segment indexes; but we know duplicate URLs are not allowed in the webdb in the first place, so why is this cleanup still needed? The reason is re-crawling for updates. Say you crawled these pages a month ago and re-crawl them now to pick up changes: until it is deleted, the old segment still takes effect, so duplicates must be removed between the old and new segments.

What follows are the results of stepping through each method with breakpoints set in the Crawl class.

2.4.1 The inject method

Description: initializes the crawldb for the crawl by reading the URL seed file and injecting its contents into the crawl database.

It first locates urls, the directory holding the URL seed file; if this directory has not been created, Nutch 1.0 reports an error. It then obtains Hadoop's working temp folder, /tmp/hadoop-administrator/mapred/. The log reads:

2009-05-08 15:41:36,640 info injector - injector: starting
2009-05-08 15:41:37,031 info injector - injector: crawldb: 20090508/crawldb
2009-05-08 15:41:37,781 info injector - injector: urldir: urls

Next, some initialization state is set up and JobClient.runJob from the Hadoop package is called; stepping into JobClient's submitJob method submits the whole process. The underlying mechanics belong to the analysis of another open-source project, Hadoop, and involve its intricate MapReduce architecture, so they are not analyzed here.
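Conceptually, this first job converts each line of the seed file into a crawldb entry. A condensed sketch of such a mapper, in the old Hadoop mapred API that Nutch 1.0 builds on; this is illustrative only, and Nutch's real InjectMapper additionally applies URL normalizers and filters and sets the fetch interval and score:

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;
import org.apache.nutch.crawl.CrawlDatum;

// Simplified: one seed URL per input line -> one injected crawldb entry.
public class InjectMapperSketch extends MapReduceBase
        implements Mapper<LongWritable, Text, Text, CrawlDatum> {
    public void map(LongWritable key, Text value,
                    OutputCollector<Text, CrawlDatum> output, Reporter reporter)
            throws IOException {
        String url = value.toString().trim();               // one URL per line
        if (url.length() == 0 || url.startsWith("#")) {
            return;                                         // skip blanks and comments
        }
        // The real code normalizes and filters the URL here (urlnormalizer/urlfilter).
        CrawlDatum datum = new CrawlDatum();
        datum.setStatus(CrawlDatum.STATUS_INJECTED);        // the later merge job turns
                                                            // this into db_unfetched
        output.collect(new Text(url), datum);
    }
}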

Looking at submitJob: it first obtains a job id; running configureCommandLineOptions then creates a system folder under the temp folder mentioned above, with a job_local_0001 folder beneath it. Running writeSplitsFile produces the job.split file under job_local_0001, writeXml writes job.xml, and finally jobSubmitClient.submitJob formally submits the whole job flow. The log is as follows:

2009-05-08 15:41:36,640 info injector - injector: starting
2009-05-08 15:41:37,031 info injector - injector: crawldb: 20090508/crawldb
2009-05-08 15:41:37,781 info injector - injector: urldir: urls
2009-05-08 15:52:41,734 info injector - injector: converting injected urls to crawl db entries.
2009-05-08 15:56:22,203 info jvmmetrics - initializing jvm metrics with processname=jobtracker, sessionid=
2009-05-08 16:08:20,796 warn jobclient - use genericoptionsparser for parsing the arguments. applications should implement tool for the same.
2009-05-08 16:08:20,984 warn jobclient - no job jar file set. user classes may not be found. see jobconf(class) or jobconf#setjar(string).
2009-05-08 16:24:42,593 info fileinputformat - total input paths to process : 1
2009-05-08 16:38:29,437 info fileinputformat - total input paths to process : 1
2009-05-08 16:38:29,546 info maptask - numreducetasks: 1
2009-05-08 16:38:29,562 info maptask - io.sort.mb = 100
2009-05-08 16:38:29,687 info maptask - data buffer = 79691776/99614720
2009-05-08 16:38:29,687 info maptask - record buffer = 262144/327680
2009-05-08 16:38:29,718 info pluginrepository - plugins: looking in: d:\work\workspace\nutch_crawl\bin\plugins
2009-05-08 16:38:29,921 info pluginrepository - plugin auto-activation mode: true
2009-05-08 16:38:29,921 info pluginrepository - registered plugins:
2009-05-08 16:38:29,921 info pluginrepository - the nutch core extension points (nutch-extensionpoints)
2009-05-08 16:38:29,921 info pluginrepository - basic query filter (query-basic)
2009-05-08 16:38:29,921 info pluginrepository - basic url normalizer (urlnormalizer-basic)
2009-05-08 16:38:29,921 info pluginrepository - basic indexing filter (index-basic)
2009-05-08 16:38:29,921 info pluginrepository - html parse plug-in (parse-html)
2009-05-08 16:38:29,921 info pluginrepository - site query filter (query-site)
2009-05-08 16:38:29,921 info pluginrepository - basic summarizer plug-in (summary-basic)
2009-05-08 16:38:29,921 info pluginrepository - http framework (lib-http)
2009-05-08 16:38:29,921 info pluginrepository - text parse plug-in (parse-text)
2009-05-08 16:38:29,921 info pluginrepository - pass-through url normalizer (urlnormalizer-pass)
2009-05-08 16:38:29,921 info pluginrepository - regex url filter (urlfilter-regex)
2009-05-08 16:38:29,921 info pluginrepository - http protocol plug-in (protocol-http)
2009-05-08 16:38:29,921 info pluginrepository - xml response writer plug-in (response-xml)
2009-05-08 16:38:29,921 info pluginrepository - regex url normalizer (urlnormalizer-regex)
2009-05-08 16:38:29,921 info pluginrepository - opic scoring plug-in (scoring-opic)
2009-05-08 16:38:29,921 info pluginrepository - cyberneko html parser (lib-nekohtml)
2009-05-08 16:38:29,921 info pluginrepository - anchor indexing filter (index-anchor)
2009-05-08 16:38:29,921 info pluginrepository - javascript parser (parse-js)
2009-05-08 16:38:29,921 info pluginrepository - url query filter (query-url)
2009-05-08 16:38:29,921 info pluginrepository - regex url filter framework (lib-regex-filter)
2009-05-08 16:38:29,921 info pluginrepository - json response writer plug-in (response-json)
2009-05-08 16:38:29,921 info pluginrepository - registered extension-points:
2009-05-08 16:38:29,921 info pluginrepository - nutch summarizer (org.apache.nutch.searcher.summarizer)
2009-05-08 16:38:29,921 info pluginrepository - nutch protocol (org.apache.nutch.protocol.protocol)
2009-05-08 16:38:29,921 info pluginrepository - nutch analysis (org.apache.nutch.analysis.nutchanalyzer)
2009-05-08 16:38:29,921 info pluginrepository - nutch field filter (org.apache.nutch.indexer.field.fieldfilter)
2009-05-08 16:38:29,921 info pluginrepository - html parse filter (org.apache.nutch.parse.htmlparsefilter)
2009-05-08 16:38:29,921 info pluginrepository - nutch query filter (org.apache.nutch.searcher.queryfilter)
2009-05-08 16:38:29,921 info pluginrepository - nutch search results response writer (org.apache.nutch.searcher.response.responsewriter)
2009-05-08 16:38:29,921 info pluginrepository - nutch url normalizer (org.apache.nutch.net.urlnormalizer)
2009-05-08 16:38:29,921 info pluginrepository - nutch url filter (org.apache.nutch.net.urlfilter)
2009-05-08 16:38:29,921 info pluginrepository - nutch online search results clustering plugin (org.apache.nutch.clustering.onlineclusterer)
2009-05-08 16:38:29,921 info pluginrepository - nutch indexing filter (org.apache.nutch.indexer.indexingfilter)
2009-05-08 16:38:29,921 info pluginrepository - nutch content parser (org.apache.nutch.parse.parser)
2009-05-08 16:38:29,921 info pluginrepository - nutch scoring (org.apache.nutch.scoring.scoringfilter)
2009-05-08 16:38:29,921 info pluginrepository - ontology model loader (org.apache.nutch.ontology.ontology)
2009-05-08 16:38:29,968 info configuration - found resource crawl-urlfilter.txt at file:/d:/work/workspace/nutch_crawl/bin/crawl-urlfilter.txt
2009-05-08 16:38:29,984 warn regexurlnormalizer - can't find rules for scope 'inject', using default
2009-05-08 16:38:29,984 info maptask - starting flush of map output
2009-05-08 16:38:30,203 info maptask - finished spill 0
2009-05-08 16:38:30,203 info taskrunner - task:attempt_local_0001_m_000000_0 is done. and is in the process of commiting
2009-05-08 16:38:30,218 info localjobrunner - file:/d:/work/workspace/nutch_crawl/urls/site.txt:0+19
2009-05-08 16:38:30,218 info taskrunner - task 'attempt_local_0001_m_000000_0' done.
2009-05-08 16:38:30,234 info localjobrunner -
2009-05-08 16:38:30,250 info merger - merging 1 sorted segments
2009-05-08 16:38:30,265 info merger - down to the last merge-pass, with 1 segments left of total size: 53 bytes
2009-05-08 16:38:30,265 info localjobrunner -
2009-05-08 16:38:30,390 info taskrunner - task:attempt_local_0001_r_000000_0 is done. and is in the process of commiting
2009-05-08 16:38:30,390 info localjobrunner -
2009-05-08 16:38:30,390 info taskrunner - task attempt_local_0001_r_000000_0 is allowed to commit now
2009-05-08 16:38:30,406 info fileoutputcommitter - saved output of task 'attempt_local_0001_r_000000_0' to file:/tmp/hadoop-administrator/mapred/temp/inject-temp-474192304
2009-05-08 16:38:30,406 info localjobrunner - reduce > reduce
2009-05-08 16:38:30,406 info taskrunner - task 'attempt_local_0001_r_000000_0' done.

The running value returned after execution is as follows:

job: job_local_0001
file: file:/tmp/hadoop-administrator/mapred/system/job_local_0001/job.xml
tracking url: http://localhost:8080/

2009-05-08 16:47:14,093 info jobclient - running job: job_local_0001
2009-05-08 16:49:51,859 info jobclient - job complete: job_local_0001
2009-05-08 16:51:36,062 info jobclient - counters: 11
2009-05-08 16:51:36,062 info jobclient - file systems
2009-05-08 16:51:36,062 info jobclient - local bytes read=51591
2009-05-08 16:51:36,062 info jobclient - local bytes written=104337
2009-05-08 16:51:36,062 info jobclient - map-reduce framework
2009-05-08 16:51:36,062 info jobclient - reduce input groups=1
2009-05-08 16:51:36,062 info jobclient - combine output records=0
2009-05-08 16:51:36,062 info jobclient - map input records=1
2009-05-08 16:51:36,062 info jobclient - reduce output records=1
2009-05-08 16:51:36,062 info jobclient - map output bytes=49
2009-05-08 16:51:36,062 info jobclient - map input bytes=19
2009-05-08 16:51:36,062 info jobclient - combine input records=0
2009-05-08 16:51:36,062 info jobclient - map output records=1
2009-05-08 16:51:36,062 info jobclient - reduce input records=1

This concludes the first runJob call. Summary: to be written.

Next, the crawldb folder is generated and the injected URLs are merged into it:

JobClient.runJob(mergeJob);
CrawlDb.install(mergeJob, crawlDb);

This process first generates a job_local_0002 directory under the temp folder mentioned earlier, with job.split and job.xml produced just as before; it then completes the creation of the crawldb and finally deletes the files under the temp folder. This concludes the inject process.
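CrawlDb.install is what swaps the freshly built database into place. A sketch of the idea, assuming (as the log below suggests) that the merge job writes to a temporary numbered directory before the swap; this condenses what Nutch 1.0 does rather than quoting it:

import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobConf;

public class CrawlDbInstallSketch {
    // Condensed idea of CrawlDb.install(job, crawlDb): promote the job's fresh
    // output to crawldb/current, keeping the previous version as crawldb/old.
    static void install(JobConf job, Path crawlDb) throws IOException {
        FileSystem fs = FileSystem.get(job);
        Path newCrawlDb = FileOutputFormat.getOutputPath(job); // e.g. crawldb/1896567745
        Path current = new Path(crawlDb, "current");
        Path old = new Path(crawlDb, "old");
        if (fs.exists(current)) {
            if (fs.exists(old)) {
                fs.delete(old, true);   // drop the previous backup
            }
            fs.rename(current, old);    // keep the last version as "old"
        }
        fs.mkdirs(crawlDb);
        fs.rename(newCrawlDb, current); // promote the new output
        // Nutch then releases the crawldb lock file (LockUtil.removeLockFile).
    }
}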

The final portion of the log is as follows:

2009-05-08 17:03:57,250 info injector - injector: merging injected urls into crawl db.
2009-05-08 17:10:01,015 info jvmmetrics - cannot initialize jvm metrics with processname=jobtracker, sessionid= - already initialized
2009-05-08 17:10:15,953 warn jobclient - use genericoptionsparser for parsing the arguments. applications should implement tool for the same.
2009-05-08 17:10:16,156 warn jobclient - no job jar file set. user classes may not be found. see jobconf(class) or jobconf#setjar(string).
2009-05-08 17:12:15,296 info fileinputformat - total input paths to process : 1
2009-05-08 17:13:40,296 info fileinputformat - total input paths to process : 1
2009-05-08 17:13:40,406 info maptask - numreducetasks: 1
2009-05-08 17:13:40,406 info maptask - io.sort.mb = 100
2009-05-08 17:13:40,515 info maptask - data buffer = 79691776/99614720
2009-05-08 17:13:40,515 info maptask - record buffer = 262144/327680
2009-05-08 17:13:40,546 info maptask - starting flush of map output
2009-05-08 17:13:40,765 info maptask - finished spill 0
2009-05-08 17:13:40,765 info taskrunner - task:attempt_local_0002_m_000000_0 is done. and is in the process of commiting
2009-05-08 17:13:40,765 info localjobrunner - file:/tmp/hadoop-administrator/mapred/temp/inject-temp-474192304/part-00000:0+143
2009-05-08 17:13:40,765 info taskrunner - task 'attempt_local_0002_m_000000_0' done.
2009-05-08 17:13:40,796 info localjobrunner -
2009-05-08 17:13:40,796 info merger - merging 1 sorted segments
2009-05-08 17:13:40,796 info merger - down to the last merge-pass, with 1 segments left of total size: 53 bytes
2009-05-08 17:13:40,796 info localjobrunner -
2009-05-08 17:13:40,906 warn nativecodeloader - unable to load native-hadoop library for your platform. using builtin-java classes where applicable
2009-05-08 17:13:40,906 info codecpool - got brand-new compressor
2009-05-08 17:13:40,906 info taskrunner - task:attempt_local_0002_r_000000_0 is done. and is in the process of commiting
2009-05-08 17:13:40,906 info localjobrunner -
2009-05-08 17:13:40,906 info taskrunner - task attempt_local_0002_r_000000_0 is allowed to commit now
2009-05-08 17:13:40,921 info fileoutputcommitter - saved output of task 'attempt_local_0002_r_000000_0' to file:/d:/work/workspace/nutch_crawl/20090508/crawldb/1896567745
2009-05-08 17:13:40,921 info localjobrunner - reduce > reduce
2009-05-08 17:13:40,937 info taskrunner - task 'attempt_local_0002_r_000000_0' done.
2009-05-08 17:13:46,781 info jobclient - running job: job_local_0002
2009-05-08 17:14:55,125 info jobclient - job complete: job_local_0002
2009-05-08 17:14:59,328 info jobclient - counters: 11
2009-05-08 17:14:59,328 info jobclient - file systems
2009-05-08 17:14:59,328 info jobclient - local bytes read=103875
2009-05-08 17:14:59,328 info jobclient - local bytes written=209385
2009-05-08 17:14:59,328 info jobclient - map-reduce framework
2009-05-08 17:14:59,328 info jobclient - reduce input groups=1
2009-05-08 17:14:59,328 info jobclient - combine output records=0
2009-05-08 17:14:59,328 info jobclient - map input records=1
2009-05-08 17:14:59,328 info jobclient - reduce output records=1
2009-05-08 17:14:59,328 info jobclient - map output bytes=49
2009-05-08 17:14:59,328 info jobclient - map input bytes=57
2009-05-08 17:14:59,328 info jobclient - combine input records=0
2009-05-08 17:14:59,328 info jobclient - map output records=1
2009-05-08 17:14:59,328 info jobclient - reduce input records=1
2009-05-08 17:17:30,984 info jvmmetrics - cannot initialize jvm metrics with processname=jobtracker, sessionid= - already initialized
2009-05-08 17:20:02,390 info injector - injector: done

2.4.2 The generate method

Description: generates a new segment from the crawl database, then produces the list of pending download tasks (the fetchlist) into it.

LockUtil.createLockFile(fs, lock, force);

Running the line above first creates a .locked file under the crawldb directory; the guess is that it protects the crawldb's data from modification, though its real role remains to be verified. The rest of the run proceeds much as in the inject step above, which can be used as a reference.
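To trigger this step in isolation, a minimal sketch, assuming Nutch 1.0's Generator is driven through its command-line entry point with the topN and segments directory seen in the log below:

import org.apache.hadoop.util.ToolRunner;
import org.apache.nutch.crawl.Generator;
import org.apache.nutch.util.NutchConfiguration;

public class GenerateExample {
    public static void main(String[] args) throws Exception {
        // Equivalent to: bin/nutch generate 20090508/crawldb 20090508/segments -topN 50
        // Generator creates a timestamped segment (e.g. 20090508173137) and writes
        // its crawl_generate fetchlist there.
        int res = ToolRunner.run(NutchConfiguration.create(), new Generator(),
                new String[] { "20090508/crawldb", "20090508/segments", "-topN", "50" });
        System.exit(res);
    }
}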

The log is as follows:

2009-05-08 17:37:18,218 info generator - generator: selecting best-scoring urls due for fetch.
2009-05-08 17:37:18,625 info generator - generator: starting
2009-05-08 17:37:18,937 info generator - generator: segment: 20090508/segments/20090508173137
2009-05-08 17:37:19,468 info generator - generator: filtering: true
2009-05-08 17:37:22,312 info generator - generator: topn: 50
2009-05-08 17:37:51,203 info generator - generator: jobtracker is 'local', generating exactly one partition.
2009-05-08 17:39:57,609 info jvmm
