Hadoop Installation Process

- Installing 64-bit Ubuntu on 32-bit Windows XP: in the VirtualBox system settings, set the processor count to 1; setting it to 2 causes an error.
- Enable VT-x/AMD-V support in the BIOS: BIOS -> Advanced BIOS Features -> Virtualization, change Disabled (the default) to Enabled, save, and reboot.
- Load Ubuntu in VirtualBox.
- Share a Windows folder with Ubuntu:
- Install Guest Additions: VirtualBox -> Devices -> Install Guest Additions.
- Create the mount point in Ubuntu: sudo mkdir /media/shared
- sudo passwd root, then switch to the root account: sudo -s
- On the Windows side, create the folder E:\ubuntu1110_64sharefolder.
- sudo mount.vboxsf ubuntu1110_64sharefolder /media/shared   (mounts the shared folder ubuntu1110_64sharefolder at /media/shared)
- To mount it automatically at boot, open fstab with sudo gedit /etc/fstab and append at the end: ubuntu1110_64sharefolder /media/shared vboxsf rw 0 0

Install jdk-7u3-linux-x64.tar.gz. Enter /media/shared and run:
sudo mkdir /usr/lib/jvm
sudo tar zxvf ./jdk-7u3-linux-x64.tar.gz -C /usr/lib/jvm
(tar flags: z filters the archive through gzip, x extracts the files, v prints progress, f names the archive file)
cd /usr/lib/jvm   (enter the jvm directory)
sudo mv jdk1.7.0/ java-7-sun   (rename)
apt-get install vim   (install the vim package)
Edit the environment variables with vim ~/.bashrc and append:
export JAVA_HOME=/usr/lib/jvm/java-7-sun
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib
export PATH=$JAVA_HOME/bin:$PATH
Save and quit with :wq, then run the following command to make the changes take effect immediately:
source ~/.bashrc

Configure the default JDK version. Ubuntu may already ship a default JDK such as OpenJDK, so to make the newly installed JDK the default, run:
sudo update-alternatives --install /usr/bin/java java /usr/lib/jvm/java-7-sun/bin/java 300
sudo update-alternatives --install /usr/bin/javac javac /usr/lib/jvm/java-7-sun/bin/javac 300
sudo update-alternatives --install /usr/bin/jar jar /usr/lib/jvm/java-7-sun/bin/jar 300
Then run:
sudo update-alternatives --config java
Test the Java version.
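A quick way to confirm the switch took effect (my addition; the expected version string assumes the jdk-7u3 package installed above):
java -version                        # expect: java version "1.7.0_03"
update-alternatives --display java   # /usr/lib/jvm/java-7-sun/bin/java should be the active alternative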

Install hadoop 1.0.0:
mkdir /home/app
sudo tar zxvf ./hadoop-1.0.0.tar.gz -C /home/app/
cd /home/app/hadoop-1.0.0   (enter the Hadoop directory)
vi conf/hadoop-env.sh   (edit the configuration file to point at the JDK install path; see the sketch below)
source conf/hadoop-env.sh   (make it take effect immediately)
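A minimal sketch of the hadoop-env.sh change, assuming the JDK path used earlier in these notes:
# conf/hadoop-env.sh: uncomment and set the JDK install path
export JAVA_HOME=/usr/lib/jvm/java-7-sun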

Edit the Hadoop core configuration file core-site.xml, which sets the HDFS address and port:
vim conf/core-site.xml
<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>
Edit the HDFS configuration. The replication factor defaults to 3; since this is a single-machine install, change it to 1:
vim conf/hdfs-site.xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>
Edit the MapReduce configuration file, which sets the JobTracker address and port:
vim conf/mapred-site.xml
<configuration>
    <property>
        <name>mapred.job.tracker</name>
        <value>localhost:9001</value>
    </property>
</configuration>
Next, start Hadoop. Before starting, format Hadoop's HDFS file system. Enter the Hadoop folder and run:
bin/hadoop namenode -format
(Do this before the first start; otherwise the web UIs on ports 50060 and 50070 report errors.)
Then start Hadoop:
bin/start-all.sh   (this command starts all of the services)
Finally, verify that Hadoop installed successfully. Open a browser and visit:
http://localhost:50030   (the MapReduce web UI)
http://localhost:50070   (the HDFS web UI)
If both pages load, the installation succeeded.
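As an extra check (my addition, not in the original notes), jps should list all five single-node Hadoop 1.x daemons once start-all.sh finishes:
jps
# expected entries (pids vary): NameNode, SecondaryNameNode, DataNode, JobTracker, TaskTracker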

Set up passwordless SSH login to this machine:
ssh-keygen -t rsa -P ""
Press Enter at the prompt. Then enter the ~/.ssh/ directory and append id_rsa.pub to the authorized_keys authorization file (there is no authorized_keys file at first):
cat id_rsa.pub >> authorized_keys
Log in with ssh localhost, then run the exit command.

Install HBase:
cd /media/shared/
sudo tar zxvf ./hbase-0.92.1.tar.gz -C /home/app/
After the install, hbase-0.92.1 appears under /home/app/.
Edit the configuration: enter the hbase-0.92.1 directory and run vim conf/hbase-env.sh. The settings that must be configured are:
export JAVA_HOME=/usr/lib/jvm/java-7-sun
export HBASE_CLASSPATH=/home/app/hbase-0.92.1/conf
export HBASE_OPTS="-XX:+UseConcMarkSweepGC"
export HBASE_MANAGES_ZK=true
Here JAVA_HOME is the Java install path and HBASE_CLASSPATH points at the HBase configuration directory. Run source conf/hbase-env.sh to make it take effect immediately.
Edit the configuration file hbase-site.xml for a pseudo-distributed setup; change its contents to:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>hbase.rootdir</name>
        <value>hdfs://localhost:9000/hbase</value>
        <description>The directory shared by region servers.</description>
    </property>
    <!-- the remaining properties are left commented out in this setup:
    <property>
        <name>hbase.master.port</name>
        <value>60000</value>
    </property>
    <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
    </property>
    <property>
        <name>hbase.zookeeper.property.dataDir</name>
        <value>/home/Hadooptest/zookeeper-3.4.3/zookeeperdir/zookeeper-data</value>
    </property>
    <property>
        <name>hbase.zookeeper.property.clientPort</name>
        <value>2181</value>
    </property>
    <property>
        <name>hbase.zookeeper.quorum</name>
        <value>zookeeper</value>
    </property>
    -->
</configuration>
Start Hadoop first:
/home/app/hadoop-1.0.0/bin/start-all.sh
root@zhuwei-VirtualBox:/home/app/hbase-0.92.1/bin# ./stop-hbase.sh
root@zhuwei-VirtualBox:/home/app/hbase-0.92.1/bin# ./start-hbase.sh
root@zhuwei-VirtualBox:/home/app/hbase-0.92.1/bin# ./hbase shell
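A non-interactive smoke test of the shell (my addition; it only assumes the daemons started above are running):
# list the existing tables by piping a command into the hbase shell
echo "list" | ./hbase shell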

Install zookeeper-3.4.3.tar.gz:
sudo tar zxvf ./zookeeper-3.4.3.tar.gz -C /home/app/
Rename zoo_sample.cfg under the /home/app/zookeeper-3.4.3/conf directory to zoo.cfg:
sudo mv zoo_sample.cfg zoo.cfg
root@zhuwei-VirtualBox:/home/app/zookeeper-3.4.3# sudo mkdir zookeeper_data   (create the data folder)
Change the dataDir parameter to /home/app/zookeeper-3.4.3/zookeeper_data.
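The edit, written out as a sketch (equivalent to renaming the sample file and repointing dataDir; the other values are the zoo_sample.cfg defaults shipped with 3.4.3):
cat > conf/zoo.cfg <<'EOF'
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/home/app/zookeeper-3.4.3/zookeeper_data
clientPort=2181
EOF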

bin/zkServer.sh start
bin/zkCli.sh -server 127.0.0.1:2181
Connecting to ZooKeeper: the terminal reports the following.
root@zhuwei-VirtualBox:/home/app/zookeeper-3.4.3# bin/zkCli.sh -server 127.0.0.1:2181
Connecting to 127.0.0.1:2181
2012-04-27 12:17:50,875 [myid:] - INFO [main:Environment@98] - Client environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48 GMT
(further Client environment lines: host.name=zhuwei-VirtualBox, java.version=1.7.0_03, java.vendor=Oracle Corporation, java.home=/usr/lib/jvm/java-7-sun/jre, java.class.path=the zookeeper build, lib, and conf directories plus the JDK lib directories, java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib, java.io.tmpdir=/tmp, java.compiler=<NA>, os.name=Linux, os.arch=amd64, os.version=3.0.0-12-generic, user.name=root, user.home=/root, user.dir=/home/app/zookeeper-3.4.3)
2012-04-27 12:17:50,903 [myid:] - INFO [main:ZooKeeper@433] - Initiating client connection, connectString=127.0.0.1:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@4ef5c3a6
2012-04-27 12:17:50,937 [myid:] - INFO [main-SendThread():ClientCnxn$SendThread@933] - Opening socket connection to server /127.0.0.1:2181
Welcome to ZooKeeper!
2012-04-27 12:17:50,965 [myid:] - INFO [main-SendThread(localhost:2181):ZooKeeperSaslClient@125] - Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
JLine support is enabled
2012-04-27 12:17:51,006 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@846] - Socket connection established to localhost/127.0.0.1:2181, initiating session
[zk: 127.0.0.1:2181(CONNECTING) 0]
2012-04-27 12:17:51,094 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@1175] - Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x136f1fa35960000, negotiated timeout = 30000
WATCHER::
WatchedEvent state:SyncConnected type:None path:null
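Another quick health check (my addition, standard in ZooKeeper 3.4.x):
bin/zkServer.sh status          # reports "Mode: standalone" when the server is up
echo ruok | nc 127.0.0.1 2181   # four-letter-word probe; a healthy server replies "imok"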

Install Hive (database):
cd /media/shared/
sudo tar xzvf ./hive-0.8.1.tar.gz -C /home/app/
Edit the environment variables with vim ~/.bashrc:
export HIVE_HOME=/home/app/hive-0.8.1
export PATH=$JAVA_HOME/bin:$HIVE_HOME/bin:$PATH
vim hive-env.sh.template and set:
export HADOOP_HOME=/home/app/hadoop-1.0.0
cp hive-default.xml.template hive-site.xml   (copy hive-default.xml.template to hive-site.xml)
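One caveat (my note): Hive only reads conf/hive-env.sh, not the .template file, so the HADOOP_HOME setting above takes effect only if the template is copied first. A sketch:
cd /home/app/hive-0.8.1/conf
cp hive-env.sh.template hive-env.sh
echo 'export HADOOP_HOME=/home/app/hadoop-1.0.0' >> hive-env.sh   # the Hadoop install Hive should use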

Set the main environment variable (manually):
export HADOOP_HOME=/home/app/hadoop-1.0.0   (your own Hadoop install path)
Create directories on HDFS to hold the Hive data:
$ $HADOOP_HOME/bin/hadoop fs -mkdir /tmp
$ $HADOOP_HOME/bin/hadoop fs -mkdir /user/hive/warehouse
$ $HADOOP_HOME/bin/hadoop fs -chmod g+w /tmp
$ $HADOOP_HOME/bin/hadoop fs -chmod g+w /user/hive/warehouse
To run Hive, just run bin/hive.

hive> SET mapred.job.tracker=localhost:50030;
hive> SET -v;
hive> show tables;
hive> create table log_stat(ip STRING, time STRING, http_request STRING, uri STRING, http STRING, status int, code STRING);
(This table was created without specifying field and line delimiters; below it is dropped and recreated.)
OK
Time taken: 9.893 seconds
hive> drop table log_stat;
OK
Time taken: 2.172 seconds
hive> create table log_stat(ip STRING, time STRING, http_request STRING, uri STRING, http STRING, status int, code STRING) row format delimited fields terminated by '\t' lines terminated by '\n' stored as textfile;
(fields delimited by \t, lines by \n, stored as a text file)
hive> load data local inpath '/home/app/pig-0.9.2/tutorial/scripts/load_result/part-m-00000' overwrite into table log_stat;
(If the data is already on HDFS, the LOCAL keyword is not needed.)
hive> dfs -ls /user/hive/warehouse;
Found 1 items
drwxr-xr-x - root supergroup 0 2012-06-21 10:19 /user/hive/warehouse/log_stat
hive> dfs -ls /user/hive/warehouse/log_stat;
Found 1 items
-rw-r--r-- 1 root supergroup 1593 2012-06-21 10:19 /user/hive/warehouse/log_stat/part-m-00000
Note that in Hive, select count(*) takes a long time, because it runs a MapReduce job.
hive> insert overwrite local directory '/media/shared/reg_3' select a.* from log_stat a;   (export the table data to a local file)
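For illustration (my addition, not in the original notes), a small aggregation over the log_stat table defined above; like count(*), it launches a MapReduce job:
bin/hive -e "select status, count(*) from log_stat group by status;"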

hive> load data inpath '/user/root/load_result/part-m-00000' overwrite into table tt;   (import an HDFS file into a table)
load data inpath 'hdfs://localhost:9000/user/root/outlog/click_mini/20120628' into table click_mini;
hive> set hive.enforce.bucketing=true;
hive> set hive.enforce.bucketing;
Creating buckets on an external table does not split the directory.
hive> select * from ad_3rd tablesample(bucket 3 out of 3 on rand());
select count(*) from ad_3rd tablesample(bucket 2 out of 3 on access_date);   (this statement fails)
select count(*) from ad_3rd tablesample(bucket 1 out of 3 on access_date);   (this one works)

Install MySQL:
apt-get install mysql-client-5.1 mysql-server-5.1
root@zhuwei-VirtualBox:/etc/mysql# service mysql status
root@zhuwei-VirtualBox:/home# mysql -uroot -proot   (enter mysql)
mysql> create user 'hive' identified by '123456';   (create the user and password)
mysql> grant all privileges on *.* to 'hive'@'%' with grant option;   (grant privileges)
mysql> select user();   (show the current user)
mysql> create database hive;
Note: sudo apt-get install mysql-server downloads a rather old version. To remove MySQL:
sudo apt-get autoremove --purge mysql-server-5.1
sudo apt-get remove mysql-server
sudo apt-get autoremove mysql-server
sudo apt-get remove mysql-common   (very important)
Installing MySQL-5.5.23-1.linux2.6.x86_64.tar instead:
sudo tar xvf ./MySQL-5.5.23-1.linux2.6.x86_64.tar -C /home/app

Connect Hive to MySQL. Configure conf/hive-site.xml:
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <!-- <value>jdbc:derby:;databaseName=metastore_db;create=true</value> -->
  <value>jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true</value>
  <description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <!-- <value>org.apache.derby.jdbc.EmbeddedDriver</value> -->
  <value>com.mysql.jdbc.Driver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <!-- <value>APP</value> -->
  <value>hive</value>
  <description>username to use against metastore database</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <!-- <value>mine</value> -->
  <value>123456</value>
  <description>password to use against metastore database</description>
</property>
Copy the MySQL JDBC driver, mysql-connector-java-5.1.19-bin.jar, into Hive's lib directory.
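A sketch of that copy, assuming the connector jar sits in the shared folder (the original does not say where it came from):
cp /media/shared/mysql-connector-java-5.1.19-bin.jar /home/app/hive-0.8.1/lib/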

root@zhuwei-VirtualBox:/home/app/hive-0.8.1# bin/hive
hive> show tables;
OK
Time taken: 5.852 seconds
(this proves the setup works)
mysql> use hive;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
mysql> select * from TBLS;   (view the Hive metadata stored in MySQL)

Install Pig (Pig operates on HDFS files and manages MapReduce jobs):
sudo tar zxvf ./pig-0.9.2.tar.gz -C /home/app/
$ sudo vi /etc/profile
export JAVA_HOME=/usr   (add this line, otherwise it will not work)
export PIG_INSTALL=/home/app/pig-0.9.2
export PATH=$PATH:$PIG_INSTALL/bin
export PIG_HADOOP_VERSION=20
export PIG_CLASSPATH=$HADOOP_INSTALL/conf   (for Pig's MapReduce mode)
$ source /etc/profile
root@zhuwei-VirtualBox:/home/app/pig-0.9.2/tutorial/src/org/apache/pig/tutorial# javac -classpath /home/app/pig-0.9.2/pig-0.9.2.jar *.java   (compile the java files under src)
root@zhuwei-VirtualBox:/home/app/pig-0.9.2/tutorial/src# jar -cvf tutorial.jar org   (package the org folder under tutorial in the install directory; the example pig scripts in the scripts directory can then be run)

Running a script:
root@zhuwei-VirtualBox:/home/app/pig-0.9.2/tutorial/scripts# pig -x local -param load_path="outlog/ipad_ads_error" test.pig
(test.pig is a self-written script; it accepts the load_path parameter to specify the input data file.)
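test.pig itself is not reproduced in these notes; a hypothetical sketch of a script that accepts the load_path parameter (the real script's schema is unknown):
cat > test.pig <<'EOF'
-- load whatever $load_path points at, tab-separated, and print it
raw = LOAD '$load_path' USING PigStorage('\t');
DUMP raw;
EOF
pig -x local -param load_path="outlog/ipad_ads_error" test.pig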

Linux Eclipse Hadoop configuration: the eclipse hadoop development environment.
Install the hadoop plugin in Eclipse:
1. Copy <hadoop install dir>/contrib/eclipse-plugin/hadoop-0.20.2-eclipse-plugin.jar to <eclipse install dir>/plugins/.
2. Restart Eclipse and configure the hadoop installation directory. If the plugin installed successfully, opening Window->Preferences now shows a Hadoop Map/Reduce entry, where you configure the Hadoop installation directory. Configure it and exit.
3. Configure Map/Reduce Locations. Open Map/Reduce Locations via Window->Show View. Create a new Hadoop Location there: in the view, right-click -> New Hadoop Location. In the dialog, set a Location name (e.g. myubuntu) plus the Map/Reduce Master and DFS Master. The Host and Port fields are the addresses and ports you configured in mapred-site.xml and core-site.xml, for example:
Map/Reduce Master: localhost, port 9001
DFS Master: localhost, port 9000
Configure and exit. Click DFS Locations->myubuntu: if it can display the folders (2), the configuration is correct; if it shows "connection refused", check your configuration.
Step three, create a project: File->New->Other->Map/Reduce Project. Any project name works, e.g. hadoop-test. Copy <hadoop install dir>/src/example/org/apache/hadoop/example/WordCount.java into the newly created project.

root@zhuwei-VirtualBox:/home/app/hadoop-1.0.0# bin/hadoop dfs -mkdir pig   (create a pig folder on HDFS)
Hadoop hello world program:
root@zhuwei-VirtualBox:/home/app# cd hadoop-1.0.0/
root@zhuwei-VirtualBox:/home/app/hadoop-1.0.0# mkdir input
root@zhuwei-VirtualBox:/home/app/hadoop-1.0.0/input# vi test1.txt   (contents of test1.txt: Hello World Bye World)
root@zhuwei-VirtualBox:/home/app/hadoop-1.0.0/input# vi test2.txt   (contents of test2.txt: Hello Hadoop Goodbye Hadoop)
cd to the hadoop install directory and run the commands below:
root@zhuwei-VirtualBox:/home/app/hadoop-1.0.0# bin/hadoop fs -put input test1.txt   (uploads the input folder to the hadoop file system under the name test1.txt)
root@zhuwei-VirtualBox:/home/app/hadoop-1.0.0# bin/hadoop dfs -rmr test1.txt   (delete the test1.txt folder)
root@zhuwei-VirtualBox:/home/app/hadoop-1.0.0# bin/hadoop dfs -put input in   (uploads the input folder to the hadoop file system under the name in)
root@zhuwei-VirtualBox:/home/app/hadoop-1.0.0# bin/hadoop jar hadoop-examples-1.0.0.jar wordcount in out   (runs wordcount over the in folder; the results go to the out directory)

root@zhuwei-VirtualBox:/home/app/hadoop-1.0.0# bin/hadoop jar hadoop-examples-1.0.0.jar wordcount in out
Warning: $HADOOP_HOME is deprecated.
12/06/14 10:09:22 INFO input.FileInputFormat: Total input paths to process : 2
12/06/14 10:09:23 INFO mapred.JobClient: Running job: job_201206131736_0001
12/06/14 10:09:24 INFO mapred.JobClient: map 0% reduce 0%
12/06/14 10:10:00 INFO mapred.JobClient: map 100% reduce 0%
12/06/14 10:10:30 INFO mapred.JobClient: map 100% reduce 100%
12/06/14 10:10:36 INFO mapred.JobClient: Job complete: job_201206131736_0001
12/06/14 10:10:37 INFO mapred.JobClient: Counters: 29
12/06/14 10:10:37 INFO mapred.JobClient: Job Counters
12/06/14 10:10:37 INFO mapred.JobClient: Launched reduce tasks=1
12/06/14 10:10:37 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=49686
12/06/14 10:10:37 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
12/06/14 10:10:37 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
12/06/14 10:10:37 INFO mapred.JobClient: Launched map tasks=2
12/06/14 10:10:37 INFO mapred.JobClient: Data-local map tasks=2
12/06/14 10:10:37 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=27436
12/06/14 10:10:37 INFO mapred.JobClient: File Output Format Counters
12/06/14 10:10:37 INFO mapred.JobClient: Bytes Written=40
12/06/14 10:10:3
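To inspect the result (my addition; the file name follows the standard Hadoop reducer output naming):
bin/hadoop fs -cat out/part-r-00000
# expected for the two input files above:
# Bye      1
# Goodbye  1
# Hadoop   2
# Hello    2
# World    2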
