Hadoop Study Notes

I. Environment (pseudo-distributed)

CentOS in a virtual machine, hadoop-1.0.1, JDK 1.6 (environment variables must be set for both), MyEclipse 8.6.

Install Java 1.6 on Linux. First make the installer executable, then run it:

chmod +x jdk-6u23-linux-i586.bin
./jdk-6u23-linux-i586.bin

Set the JAVA environment variables on Linux:

1. cd /etc/profile.d
2. touch java.sh
3. Write the following into java.sh (vi java.sh):

#set java_environment
export JAVA_HOME=/tools/jdk1.6.0_23
export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH:$HOME/bin

(To save in vi: press Esc for command mode, type a colon, then wq! — w writes, q quits, each usable on its own, and ! forces it. Shift+ZZ also saves and quits.)

4. chmod 777 java.sh   - make it executable
5. source java.sh      - make it take effect
6. javac               - check that it worked

Hadoop environment variable (set this way, it is lost again after a reboot):

root@localhost # cd /tools/hadoop-1.0.1/bin
root@localhost bin# export PATH=$PATH:/tools/hadoop-1.0.1/bin

Modify the configuration. Use the following:

conf/core-site.xml:

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>

conf/hdfs-site.xml:

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>

conf/mapred-site.xml:

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>

Setup passphraseless ssh

Now check that you can ssh to the localhost without a passphrase:
$ ssh localhost

If you cannot ssh to localhost without a passphrase, execute the following commands:
$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

Format a new distributed filesystem:
$ bin/hadoop namenode -format

Start the hadoop daemons:
$ bin/start-all.sh

The hadoop daemon log output is written to the $HADOOP_LOG_DIR directory (defaults to $HADOOP_HOME/logs).

Browse the web interface for the NameNode and the JobTracker; by default they are available at:
NameNode  - http://localhost:50070/
JobTracker - http://localhost:50030/

Copy the input files into the distributed filesystem:
$ bin/hadoop fs -put conf input

Run some of the examples provided:
$ bin/hadoop jar hadoop-examples-*.jar grep input output 'dfs[a-z.]+'

Examine the output files. Either copy them from the distributed filesystem to the local filesystem and examine them:
$ bin/hadoop fs -get output output
$ cat output/*

or view them directly on the distributed filesystem:
$ bin/hadoop fs -cat output/*

When you're done, stop the daemons with:
$ bin/stop-all.sh

Start hadoop:
root@localhost hadoop-1.0.1# bin/start-all.sh

Check what is running:
root@hadoop21 hadoop-1.0.1# jps
6365 TaskTracker
5993 NameNode
6241 JobTracker
8202 Jps
6106 DataNode

Stop hadoop:
root@hadoop21 bin# ./stop-all.sh
no jobtracker to stop
hadoop21: no tasktracker to stop
no namenode to stop
hadoop21: no datanode to stop

Create a directory in HDFS:
root@localhost bin# hadoop fs -mkdir input

List the home directory:
root@localhost bin# hadoop fs -ls
Found 2 items
drwxr-xr-x   - root supergroup          0 2012-11-21 21:08 /user/root/input    (directory)
drwxr-xr-x   - root supergroup          0 2012-11-21 20:25 /user/root/output   (directory)
-rw-r--r--   1 root supergroup         22 2012-11-21 20:23 /user/root/input    (file)
drwxr-xr-x   - root supergroup          0 2012-11-21 20:25 /user/root/output   (directory)

Delete a file:
root@localhost bin# hadoop fs -rm input

Recursively list the folders and their files:
root@localhost hadoop-1.0.1# hadoop fs -lsr
drwxr-xr-x   - root supergroup          0 2012-11-21 21:25 /user/root/input
-rw-r--r--   1 root supergroup         22 2012-11-21 21:25 /user/root/input/file01
drwxr-xr-x   - root supergroup          0 2012-11-21 21:20 /user/root/output

Delete everything under tmp:
root@hadoop21 tmp# rm -rf *

Leave hadoop safe mode (when the namenode is stuck in safe mode):
bin/hadoop dfsadmin -safemode leave

Hadoop job/log query page:
http://localhost:50030/jobtracker.jsp

Deployment notes: http://hadoop.apache.org/docs/r1.0.4/single_node_setup.html (one namenode, one datanode).

Hosts changes:
Host machine: C:\Windows\System32\drivers\etc\hosts
Virtual machine: vi /etc/hosts
Add the virtual machine's mapping on the host machine, and edit hosts inside the virtual machine.

Re-initialize the namenode:
hadoop namenode -format

Set the IP:
root@localhost hadoop-1.0.1# ifconfig eth0 <ip> netmask <mask> up
root@localhost hadoop-1.0.1# setup

root@localhost hadoop-1.0.1# service iptables status
Table: filter
Chain INPUT (policy ACCEPT)
num  target  prot opt source  destination
Chain FORWARD (policy ACCEPT)
num  target  prot opt source  destination
Chain OUTPUT (policy ACCEPT)
num  target  prot opt source  destination

root@localhost hadoop-1.0.1# service iptables stop
Flushing firewall rules:                  [  OK  ]
Setting chains to policy ACCEPT: filter   [  OK  ]
Unloading iptables modules:               [  OK  ]

root@localhost conf# vi slaves
root@localhost conf# vi /etc/hosts
root@localhost conf# cd /etc/sysconfig/
root@localhost sysconfig# ls
apmd          hidd              mkinitrd          samba
apm-scripts   hsqldb            modules           saslauthd
atd           httpd             named             selinux
auditd        hwconf            netconsole        sendmail
authconfig    i18n              network           smartmontools
autofs        init              networking        snmpd.options
bluetooth     ip6tables-config  network-scripts   snmptrapd.options
cbq           ipmi              nfs               spamassassin
clock         iptables-config   nspluginwrapper   squid
conman        ipvsadm-config    ntpd              syslog
console       irda              pand              system-config-netboot
cpuspeed      irqbalance        pm-action         system-config-securitylevel
crond         kdump             prelink           system-config-users
desktop       kernel            pulse             tux
dovecot       keyboard          raid-check        udev-stw
dund          krb524            rawdevices        vncservers
firstboot     kudzu             readonly-root     wpa_supplicant
grub          lm_sensors        rhn               xinetd
ha            luci              rsyslog

root@localhost sysconfig# vi network
root@localhost sysconfig# service network restart
Shutting down interface eth0:             [  OK  ]
Shutting down loopback interface:         [  OK  ]
Bringing up loopback interface:           [  OK  ]
Bringing up interface eth0:  Determining IP information for eth0... done.  [  OK  ]

root@localhost sysconfig# ifconfig
root@localhost sysconfig# cd network-scripts/
root@localhost network-scripts# ls
root@localhost network-scripts# vi ifcfg-eth0
root@localhost network-scripts# service network restart
root@localhost network-scripts# ifconfig
root@localhost network-scripts# ping hadoop21

Other checks:
root@localhost conf# netstat -a | grep 50030
tcp        0      0 *:50030     *:*     LISTEN
root@localhost conf# netstat -a | grep 50070

tcp        0      0 *:50070     *:*     LISTEN

root@localhost conf# ls -a
.                       hadoop-env.sh               mapred-site.xml
..                      hadoop-metrics2.properties  masters
capacity-scheduler.xml  hadoop-policy.xml           slaves
configuration.xsl       hdfs-site.xml               ssl-client.xml.example
core-site.xml           log4j.properties            ssl-server.xml.example
fair-scheduler.xml      mapred-queue-acls.xml       taskcontroller.cfg

root@localhost conf# ls -l
total 140
-rw-rw-r-- 1 root root 7457 2012-02-14 capacity-scheduler.xml
-rw-rw-r-- 1 root root  535 2012-02-14 configuration.xsl
-rw-rw-r-- 1 root root  294 11-21 22:58 core-site.xml
-rw-rw-r-- 1 root root  327 2012-02-14 fair-scheduler.xml
-rw-rw-r-- 1 root root 2232 11-21 05:17 hadoop-env.sh
-rw-rw-r-- 1 root root 1488 2012-02-14 hadoop-metrics2.properties
-rw-rw-r-- 1 root root 4644 2012-02-14 hadoop-policy.xml
-rw-rw-r-- 1 root root  276 11-21 05:23 hdfs-site.xml
-rw-rw-r-- 1 root root 4441 2012-02-14 log4j.properties
-rw-rw-r-- 1 root root 2033 2012-02-14 mapred-queue-acls.xml
-rw-rw-r-- 1 root root  290 11-21 22:59 mapred-site.xml
-rw-rw-r-- 1 root root   10 2012-02-14 masters
-rw-rw-r-- 1 root root   10 2012-02-14 slaves
-rw-rw-r-- 1 root root 1243 2012-02-14 ssl-client.xml.example
-rw-rw-r-- 1 root root 1195 2012-02-14 ssl-server.xml.example
-rw-rw-r-- 1 root root  382 2012-02-14 taskcontroller.cfg

root@localhost conf# vi slaves
root@localhost conf# vi masters

root@localhost bin# jps
3799 SecondaryNameNode
5477 Jps
root@localhost bin# kill -9 3799

root@localhost bin# rm -r ./logs/*
rm: remove regular file `./logs/hadoop-root-datanode-hadoop21.log'? y
rm: remove regular empty file `./logs/hadoop-root-datanode-hadoop21.out'?
rm: remove regular file `./logs/hadoop-root-datanode-localhost.log'?
rm: remove regular empty file `./logs/hadoop-root-datanode-localhost.out'?
rm: remove regular empty file `./logs/hadoop-root-datanode-localhost.out.1'?
rm: remove regular file `./logs/hadoop-root-jobtracker-localhost.log'?
rm: remove regular empty file `./logs/hadoop-root-jobtracker-localhost.out'?
rm: remove regular empty file `./logs/hadoop-root-jobtracker-localhost.out.1'?
rm: remove regular empty file `./logs/hadoop-root-jobtracker-localhost.out.2'?
rm: remove regular file `./logs/hadoop-root-namenode-localhost.log'?
rm: remove regular empty file `./logs/hadoop-root-namenode-localhost.out'?
rm: remove regular empty file `./logs/hadoop-root-namenode-localhost.out.1'?
rm: remove regular empty file `./logs/hadoop-root-namenode-localhost.out.2'?
rm: remove regular empty file `./logs/hadoop-root-namenode-localhost.out.3'?
rm: remove regular empty file `./logs/hadoop-root-namenode-localhost.out.4'?
rm: remove regular file `./logs/hadoop-root-secondarynamenode-localhost.log'?
rm: remove regular empty file `./logs/hadoop-root-secondarynamenode-localhost.out'?
rm: remove regular file `./logs/hadoop-root-secondarynamenode-localhost.out.1'?
rm: remove regular file `./logs/hadoop-root-tasktracker-hadoop21.log'?
rm: remove regular empty file `./logs/hadoop-root-tasktracker-hadoop21.out'?
rm: remove regular file `./logs/hadoop-root-tasktracker-localhost.log'?
rm: remove regular empty file `./logs/hadoop-root-tasktracker-localhost.out'?
rm: remove regular empty file `./logs/hadoop-root-tasktracker-localhost.out.1'?
rm: descend into directory `./logs/history'?
rm: remove regular file `./logs/job_201211212007_0001_conf.xml'?
rm: remove regular file `./logs/job_201211212007_0003_conf.xml'?
rm: remove regular file `./logs/job_201211212007_0004_conf.xml'?

Use -rf to skip the prompts:
root@localhost bin# rm -rf ./logs/*

View how a file is split into blocks in HDFS and where the blocks are stored:
hadoop fsck /cgj/cw_kcmx.csv -files -blocks

II. Installing the Hadoop plugin in MyEclipse 8.6

Plugin: hadoop-eclipse-plugin-1.0.0.jar
Install path: D:\myEclipse8.6\dropins\hadoop\plugins\hadoop-eclipse-plugin-1.0.0.jar
For MyEclipse 6.5, put it in D:\MyEclipse6.5\eclipse\plugins instead.

(1) Point the plugin at the hadoop installation directory
(2) Create the hadoop server (DFS location)
(3) Create a new hadoop project

Problem hit during installation:

An internal error occurred during: "Connecting to DFS Hadoop".
org/apache/commons/configuration/Configuration

First check whether port 9000 can be reached with telnet. If it cannot:
1. Check whether the firewall is still running.
2. Check the ports with netstat -ano. In my case port 9000 was listening in IPv6 format; disabling IPv6 and rebooting fixed it.

Disabling IPv6:

#1. You can disable IPv6 by adding the line below to sysctl.conf, although this does not stop other programs from enabling IPv6 support by default.
# Edit /etc/sysctl.conf and add:
net.ipv6.conf.all.disable_ipv6=1
# Save, exit, and reboot the system.

#2. Disable the IPv6 module.
# Add the following two lines to /etc/modprobe.conf:
alias net-pf-10 off
alias ipv6 off
# Save, exit, and reboot the system.

3. If none of that helps, search the web.
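A quick way to confirm the connection settings independently of Eclipse is to open the filesystem from plain Java code. A minimal sketch, assuming the fs.default.name value hdfs://localhost:9000 from the configuration above and the hadoop-core jar (plus its commons dependencies) on the classpath; the class name is illustrative:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DfsConnectionCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Same address the Eclipse plugin / core-site.xml uses.
        conf.set("fs.default.name", "hdfs://localhost:9000");
        FileSystem fs = FileSystem.get(conf);
        // If this listing prints, the NameNode on port 9000 is reachable.
        for (FileStatus status : fs.listStatus(new Path("/"))) {
            System.out.println(status.getPath());
        }
        fs.close();
    }
}

If this fails where telnet succeeds, the problem is in the client configuration rather than the network.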

III. No access permission on a path when running the main method

Failed to set permissions of path: \tmp\hadoop-admin\mapred\local\ttprivate to 0700

There are three ways to solve this:
1. Add a mapred-site.xml configuration file to the mapreduce project.
2. Add the same mapred-site.xml to the hadoop installation directory.
3. Patch the class file inside the hadoop jar: change the code of the FileUtil class's checkReturnValue method (a sketch follows), recompile, and replace the original hadoop-core-1.0.0.jar. A patched hadoop-core-1.0.0.jar can be downloaded from the bug report: https://issues.apache.org/jira/browse/HADOOP-7682
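A sketch of the usual patch for option 3, assuming it is acceptable to skip the permission check on a Windows development machine; this is the widely circulated workaround, not an official fix:

// org/apache/hadoop/fs/FileUtil.java in hadoop-core-1.0.x
private static void checkReturnValue(boolean rv, File p,
                                     FsPermission permission)
    throws IOException {
  // The original body threw IOException("Failed to set permissions of
  // path: ...") whenever rv was false. chmod-style calls routinely fail
  // on Windows, so the workaround is to ignore the return value and do
  // nothing here.
}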

IV. Getting hadoop out of safe mode

Error:

org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /tmp/hadoop-SYSTEM/mapred/system. Name node is in safe mode.
The ratio of reported blocks 0.9412 has not reached the threshold 0.9990. Safe mode will be turned off automatically.

Solution:
bin/hadoop dfsadmin -safemode leave    (leave safe mode)

The safemode argument values:
enter - enter safe mode
leave - force the NameNode to leave safe mode
get   - report whether safe mode is on
wait  - wait until safe mode ends

V. Host machine and virtual machine cannot reach each other

1. Give the virtual machine a fixed IP in the same subnet as the host machine.
2. Turn off the firewall inside the virtual machine.

VI. Version conflict between imported jars

import org.apache.hadoop.mapred.Partitioner;      [correct]
import org.apache.hadoop.mapreduce.Partitioner;   [wrong]

VII. How to use Partitioner
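The notes leave this section empty. A minimal sketch with the old mapred API (matching the "correct" import above), assuming <Text, IntWritable> map output where keys sharing the same first character should land on the same reducer; the class name and key layout are illustrative:

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.Partitioner;

// Routes each map output record to a reduce task based on the key's first
// character, so keys with the same first letter end up in one partition.
public class FirstLetterPartitioner implements Partitioner<Text, IntWritable> {

    public void configure(JobConf job) {
        // No configuration needed for this sketch.
    }

    public int getPartition(Text key, IntWritable value, int numPartitions) {
        if (key.getLength() == 0) {
            return 0;
        }
        char first = key.toString().charAt(0);
        // Mask the sign bit so the result is non-negative.
        return (first & Integer.MAX_VALUE) % numPartitions;
    }
}

It is registered on the job with conf.setPartitionerClass(FirstLetterPartitioner.class); the number of partitions equals the number of reduce tasks.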

VIII. Counting the occurrences of each key in one column of an Excel file
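This section is also empty in the notes. A sketch of one way to do it, assuming the spreadsheet has been exported to a CSV file in HDFS and the key sits in the first column; the class names, column index, and argument layout are all illustrative:

import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.*;

// Counts how often each distinct key appears in column 0 of a CSV file.
public class ColumnKeyCount {

    public static class ColumnMapper extends MapReduceBase
            implements Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text outKey = new Text();

        public void map(LongWritable offset, Text line,
                        OutputCollector<Text, IntWritable> out, Reporter reporter)
                throws IOException {
            String[] fields = line.toString().split(",");
            if (fields.length > 0 && fields[0].length() > 0) {
                outKey.set(fields[0]);   // the column being counted
                out.collect(outKey, ONE);
            }
        }
    }

    public static class SumReducer extends MapReduceBase
            implements Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterator<IntWritable> values,
                           OutputCollector<Text, IntWritable> out, Reporter reporter)
                throws IOException {
            int sum = 0;
            while (values.hasNext()) {
                sum += values.next().get();
            }
            out.collect(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws IOException {
        JobConf conf = new JobConf(ColumnKeyCount.class);
        conf.setJobName("column-key-count");
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);
        conf.setMapperClass(ColumnMapper.class);
        conf.setReducerClass(SumReducer.class);
        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));
        JobClient.runJob(conf);
    }
}

Reading a native .xls directly would instead need a library such as Apache POI inside a custom InputFormat; exporting to CSV first keeps the job simple.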

IX. Testing whether HDFS I/O works

root@hadoop21 hadoop-1.0.1# hadoop jar hadoop-test-1.0.1.jar TestDFSIO -write -nrFile 5 -fileSize 100

(Note: the flag is actually -nrFiles; because it is misspelled here, TestDFSIO fell back to its default of a single file, which is why the log reports nrFiles = 1.)

TestDFSIO.0.0.4
12/11/22 01:39:56 INFO fs.TestDFSIO: nrFiles = 1
12/11/22 01:39:56 INFO fs.TestDFSIO: fileSize (MB) = 100
12/11/22 01:39:56 INFO fs.TestDFSIO: bufferSize = 1000000
12/11/22 01:39:57 INFO fs.TestDFSIO: creating control file: 100 mega bytes, 1 files
12/11/22 01:39:57 INFO fs.TestDFSIO: created control files for: 1 files
12/11/22 01:39:58 INFO mapred.FileInputFormat: Total input paths to process : 1
12/11/22 01:39:58 INFO mapred.JobClient: Running job: job_201211220133_0003
12/11/22 01:39:59 INFO mapred.JobClient:  map 0% reduce 0%
12/11/22 01:40:57 INFO mapred.JobClient:  map 100% reduce 0%
12/11/22 01:41:24 INFO mapred.JobClient:  map 100% reduce 100%
12/11/22 01:41:30 INFO mapred.JobClient: Job complete: job_201211220133_0003
12/11/22 01:41:30 INFO mapred.JobClient: Counters: 30
12/11/22 01:41:30 INFO mapred.JobClient:   Job Counters
12/11/22 01:41:30 INFO mapred.JobClient:     Launched reduce tasks=1
12/11/22 01:41:30 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=54666
12/11/22 01:41:30 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
12/11/22 01:41:30 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
12/11/22 01:41:30 INFO mapred.JobClient:     Launched map tasks=1
12/11/22 01:41:30 INFO mapred.JobClient:     Data-local map tasks=1
12/11/22 01:41:30 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=24114
12/11/22 01:41:30 INFO mapred.JobClient:   File Input Format Counters
12/11/22 01:41:30 INFO mapred.JobClient:     Bytes Read=112
12/11/22 01:41:30 INFO mapred.JobClient:   File Output Format Counters
12/11/22 01:41:30 INFO mapred.JobClient:     Bytes Written=76
12/11/22 01:41:30 INFO mapred.JobClient:   FileSystemCounters
12/11/22 01:41:30 INFO mapred.JobClient:     FILE_BYTES_READ=92
12/11/22 01:41:30 INFO mapred.JobClient:     HDFS_BYTES_READ=236
12/11/22 01:41:30 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=43145
12/11/22 01:41:30 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=104857676
12/11/22 01:41:30 INFO mapred.JobClient:   Map-Reduce Framework
12/11/22 01:41:30 INFO mapred.JobClient:     Map output materialized bytes=92
12/11/22 01:41:30 INFO mapred.JobClient:     Map input records=1
12/11/22 01:41:30 INFO mapred.JobClient:     Reduce shuffle bytes=92
12/11/22 01:41:30 INFO mapred.JobClient:     Spilled Records=10
12/11/22 01:41:30 INFO mapred.JobClient:     Map output bytes=76
12/11/22 01:41:30 INFO mapred.JobClient:     Total committed heap usage (bytes)=
12/11/22 01:41:30 INFO mapred.JobClient:     CPU time spent (ms)=8970
12/11/22 01:41:30 INFO mapred.JobClient:     Map input bytes=26
12/11/22 01:41:30 INFO mapred.JobClient:     SPLIT_RAW_BYTES=124
12/11/22 01:41:30 INFO mapred.JobClient:     Combine input records=0
12/11/22 01:41:30 INFO mapred.JobClient:     Reduce input records=5
12/11/22 01:41:30 INFO mapred.JobClient:     Reduce input groups=5
12/11/22 01:41:30 INFO mapred.JobClient:     Combine output records=0
12/11/22 01:41:30 INFO mapred.JobClient:     Physical memory (bytes) snapshot=229068800
12/11/22 01:41:30 INFO mapred.JobClient:     Reduce output records=5
12/11/22 01:41:30 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=749244416
12/11/22 01:41:30 INFO mapred.JobClient:     Map output records=5
12/11/22 01:41:30 INFO fs.TestDFSIO: ----- TestDFSIO ----- : write
12/11/22 01:41:30 INFO fs.TestDFSIO:            Date & time: Thu Nov 22 01:41:30 CST 2012
12/11/22 01:41:30 INFO fs.TestDFSIO:        Number of files: 1
12/11/22 01:41:30 INFO fs.TestDFSIO: Total MBytes processed: 100
12/11/22 01:41:30 INFO fs.TestDFSIO:      Throughput mb/sec: 3.2748231595493844
12/11/22 01:41:30 INFO fs.TestDFSIO: Average IO rate mb/sec: 3.2748231887817383
12/11/22 01:41:30 INFO fs.TestDFSIO:  IO rate std deviation: 7.706685757074053E-4
12/11/22 01:41:30 INFO fs.TestDFSIO:     Test exec time sec: 92.544

12/11/22 01:41:30 INFO fs.TestDFSIO:

X. Node IDs inconsistent? Adjusting nodes? To be filled in.

XI. Exception when using DataJoin for a reduce-side join of multiple data sources

java.lang.RuntimeException: java.lang.NoSuchMethodException: com.hadoop.reducedatajoin.ReduceDataJoin$TaggedWritable.<init>()
        at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:115)

Solution:

You need a default constructor for TaggedWritable (Hadoop uses reflection to create this object, and requires a default constructor with no args).

You also have a problem in that your readFields method calls data.readFields(in) on the Writable interface, but it has no knowledge of the actual runtime class of data. I suggest you either write out the data class name before outputting the data object itself, or look into the GenericWritable class (you'll need to extend it to define the set of allowable writable classes that can be used).

So you could amend as follows:
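The code that followed was lost from the notes. A sketch of the usual amendment, assuming TaggedWritable extends the contrib datajoin class TaggedMapOutput and wraps an arbitrary Writable; field names are illustrative:

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.contrib.utils.join.TaggedMapOutput;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.util.ReflectionUtils;

public static class TaggedWritable extends TaggedMapOutput {
    private Writable data;

    // The no-arg constructor Hadoop's reflection needs.
    public TaggedWritable() {
        this.tag = new Text();
    }

    public TaggedWritable(Writable data) {
        this.tag = new Text("");
        this.data = data;
    }

    public Writable getData() {
        return data;
    }

    public void write(DataOutput out) throws IOException {
        this.tag.write(out);
        // Record the concrete class so readFields can reconstruct it.
        out.writeUTF(this.data.getClass().getName());
        this.data.write(out);
    }

    public void readFields(DataInput in) throws IOException {
        this.tag.readFields(in);
        String dataClz = in.readUTF();
        if (this.data == null
                || !this.data.getClass().getName().equals(dataClz)) {
            try {
                this.data = (Writable) ReflectionUtils.newInstance(
                        Class.forName(dataClz), null);
            } catch (ClassNotFoundException e) {
                throw new IOException(e.toString());
            }
        }
        this.data.readFields(in);
    }
}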

XII. How MapReduce works

1. Job submission
- Ask the jobtracker for a new job id
- Check the job's output specification
- Compute the job's input splits
- Copy the resources needed to run the job into a directory named after the job id
- Tell the jobtracker the job is ready to run (see the sketch after this section)

2. Job initialization (jobtracker)
The JobTracker does two things:
1. It accepts the jobclient's request and initializes the job; a dedicated thread is responsible for this, and each job gets its own thread for initialization.
2. It accepts the tasktrackers' heartbeats (RPC calls) and, based on the heartbeat information, hands each tasktracker the appropriate instructions.
The JobTracker runs as a separate JVM; its main function
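To make step 1 concrete, a sketch of the client side with the old mapred API; the job configuration itself is assumed to be filled in elsewhere, and the class name is illustrative:

import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.RunningJob;

public class SubmitExample {
    public static void main(String[] args) throws Exception {
        // Assumed: mapper, reducer, input and output paths are set here.
        JobConf conf = new JobConf();
        conf.setJobName("submit-example");

        JobClient client = new JobClient(conf);
        // submitJob() performs the client half described above: it gets a
        // new job id from the JobTracker, checks the output specification,
        // computes the input splits, copies the job resources into a
        // job-id directory, and tells the JobTracker the job is ready.
        RunningJob running = client.submitJob(conf);

        // From here the JobTracker's initialization thread takes over;
        // the client only polls for completion.
        while (!running.isComplete()) {
            Thread.sleep(1000);
        }
        System.out.println(running.isSuccessful() ? "job done" : "job failed");
    }
}

JobClient.runJob(conf) wraps the same submit-then-poll loop in one call; submitJob is the asynchronous form.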
