Installing CDH 5.1 on CentOS 6.5

About CDH and Cloudera Manager

CDH (Cloudera's Distribution, including Apache Hadoop) is one of the many Hadoop distributions. It is maintained by Cloudera, built from stable Apache Hadoop releases with many patches integrated, and can be used directly in production. Cloudera Manager is a companion component for installing, monitoring, and managing Hadoop-related big-data services in a cluster; it greatly simplifies installing and configuring the hosts and services such as Hadoop, Hive, and Spark.

1. Installation plan and node layout

The NameNode and the ResourceManager are two independent functional units; they can be deployed on the same hosts or distributed. Because the test environment is limited, the NameNode and the ResourceManager are deployed together here.

Host | IP             | Role              | Required daemons                                                       | Notes
n1   | 192.168.116.33 | NameNode/DataNode | JournalNode, NameNode, DFSZKFailoverController, DataNode, NodeManager  | NameNode active/standby
n2   | 192.168.116.34 | NameNode/DataNode | JournalNode, NameNode, DFSZKFailoverController, DataNode, NodeManager  | NameNode active/standby
n3   | 192.168.116.   | DataNode          | DataNode, NodeManager                                                  | -
n1   | 192.168.116.33 | ResourceManager   | ResourceManager                                                        | RM active/standby
n3   | 192.168.116.   | ResourceManager   | ResourceManager                                                        | RM active/standby
n1   | 192.168.116.33 | JobHistoryServer  | JobHistoryServer                                                       | -

Note: if Linux is installed in VMware virtual machines, use NAT networking and pin each VM's IP address; otherwise the IPs may change between network environments and the cluster becomes unusable. NAT, however, cannot communicate with the outside LAN, so for virtualization on real servers the more complex bridged mode must be used.

2. Preparation

2.1 Set hostnames

Assign a distinct hostname to each node in the cluster.
Check the current hostname:

hostname

Change the hostname temporarily:

hostname new_hostname

A temporary change lasts only until reboot. To change the hostname permanently, edit the configuration file:

vi /etc/sysconfig/network

and set HOSTNAME:

NETWORKING=yes
HOSTNAME=n1

Then restart the network service:

service network restart

Finally edit the hosts file (/etc/hosts) and add the IP-to-hostname mappings:

vi /etc/hosts

192.168.116.33 n1 n1.localdomain
192.168.116.34 n2 n2.localdomain
192.168.116.   n3 n3.localdomain

Note: each entry here must have all three fields; the nN.localdomain column cannot be omitted, otherwise the HDFS configuration later will have problems.
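Before distributing the hosts file it can be sanity-checked for the three-field form required above. A minimal sketch; the temp file and sample entries are illustrative stand-ins for the real /etc/hosts:

```shell
# Sketch: verify every cluster entry in a hosts file has all three
# fields -- IP, short hostname, FQDN. Runs against a scratch copy,
# not the live /etc/hosts.
hosts_file=$(mktemp)
cat > "$hosts_file" <<'EOF'
192.168.116.33 n1 n1.localdomain
192.168.116.34 n2 n2.localdomain
EOF
# Count lines that do NOT have exactly three whitespace-separated fields.
bad=$(awk 'NF != 3 { c++ } END { print c+0 }' "$hosts_file")
echo "malformed entries: $bad"
```

A nonzero count points at the entry that will later break the HDFS configuration.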
2.2 Firewall configuration

Disable selinux and iptables on every node (run as root).

1. Disable iptables:

service iptables stop        # stop the service
chkconfig --del iptables     # keep the service from starting automatically

2. Disable selinux by editing /etc/selinux/config:

vi /etc/selinux/config
SELINUX=disabled
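Disabling SELinux amounts to a one-line change in that file; the sketch below performs it with sed on a scratch copy rather than the real /etc/selinux/config (which would need root):

```shell
# Sketch: flip SELINUX=enforcing to disabled in a copy of the config.
# On a real node the target would be /etc/selinux/config, edited as root.
cfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"
sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$cfg"   # assumes GNU sed
grep '^SELINUX=' "$cfg"
```

Note that this setting only takes effect after a reboot; `setenforce 0` disables enforcement for the current session.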
2.3 Configure SSH

Note: on CentOS 6.5 the SSH service must be installed first.

On the master node, run the following command to generate a passwordless key pair:

ssh-keygen -t rsa

Press Enter through all prompts, then append the public key to the authorized-keys file:

cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

Set the access permissions of authorized_keys:

chmod 600 ~/.ssh/authorized_keys

Finally scp the file to all datanode nodes:

scp ~/.ssh/authorized_keys root@n2:~/.ssh/

Test: run ssh n2 from the master node; normally it should log in directly without a password.

2.4 Install the JDK

Note: a JDK must be installed, preferably version 1.7 or later. CentOS ships with a JRE, not a JDK, and Sqoop cannot be used without a JDK. Spark also requires JDK 1.7 or later, so consider installing the latest JDK 1.8 directly.

2.5 Configure hosts

Add all nodes of the cluster to every node's /etc/hosts:

192.168.116.33 n1
192.168.116.34 n2
192.168.116.   n3

Note: delete the loopback entries (similar to the standard lines below) from the hosts file, otherwise the ResourceManager web page cannot be reached once ResourceManager HA is configured.

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1       localhost localhost.localdomain localhost6 localhost6.localdomain6

2.6 Time synchronization

Newer versions introduced a signaling mechanism that is verified when containers are started. All nodes of the cluster must be time-synchronized; the clock difference between nodes must not exceed 10 minutes, otherwise MapReduce jobs fail with errors.
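The 10-minute bound can be checked by comparing epoch seconds from each node (for example, `date +%s` run locally and over ssh). The sketch below only does the arithmetic; the two timestamps are sample stand-ins:

```shell
# Sketch: check the 10-minute (600 s) clock-skew bound between nodes.
# t_local / t_remote stand in for `date +%s` output from each host.
t_local=1440000000
t_remote=1440000123
skew=$(( t_local - t_remote ))
[ "$skew" -lt 0 ] && skew=$(( -skew ))
if [ "$skew" -le 600 ]; then
  echo "skew ${skew}s: within the 600s limit"
else
  echo "skew ${skew}s: exceeds the 600s limit"
fi
```

In practice an ntpd pointed at a common server keeps the cluster well inside this bound.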
2.7 Download the installation packages

The CDH 5.1 RPM packages can be downloaded from: archive-primary.cloudera/cdh5/redhat/6/x86_64/cdh/5.1.0/RPMS/ . Download the zookeeper, Hadoop, hive, and sqoop packages. The noarch directory holds architecture-independent packages; the x86_64 directory holds packages for 64-bit systems.

3. Install Zookeeper

3.1 Zookeeper installation

Run the install command on every node; zookeeper depends on bigtop-utils:

rpm -ivh zookeeper-*.rpm bigtop-utils*.rpm --nodeps --force

Zookeeper installation paths:

/etc/default/zookeeper
/etc/zookeeper
/var/lib/zookeeper
/var/run/zookeeper
/var/log/zookeeper       -- log files
/usr/share/doc/zookeeper-3.4.5+26/api/org/apache/zookeeper
/usr/lib/zookeeper       -- installation path

3.2 Zookeeper configuration

Edit /etc/zookeeper/conf/zoo.cfg on every zookeeper node:

tickTime=2000
dataDir=/var/lib/zookeeper
clientPort=2181
initLimit=5
syncLimit=2
# server.id=host:port:port
server.1=n1:2888:3888
server.2=n2:2888:3888
server.3=n3:2888:3888

Explanation: the first port is used for communication between the zookeeper nodes; the second port is used to elect the leader of the zookeeper cluster. The id must match the node's myid, otherwise zookeeper will not start.

3.3 Start Zookeeper

Initialization: the first zookeeper run must be initialized with its myid; this step is not repeated on later runs.

service zookeeper-server init --myid=3

Start zookeeper, on every node:

service zookeeper-server start

Explanation: the nodes started first print warnings like the following because the other nodes' zookeeper is not yet up; once all nodes are started this becomes normal.

2021-08-01 04:11:58,585 [myid:1] - WARN [WorkerSender[myid=1]:QuorumCnxManager@368] - Cannot open channel to 2 at election address dw001/01:3888
java.net.ConnectException: Connection refused
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
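Since zoo.cfg must be identical on every node while the myid differs per node, the two can be generated together from one server list. A sketch that writes into a scratch directory instead of /etc/zookeeper/conf and /var/lib/zookeeper; the hostnames are the n1-n3 nodes from the plan above:

```shell
# Sketch: render zoo.cfg plus this node's myid file from one server
# list, so the ids and server entries cannot drift apart.
zkdir=$(mktemp -d)
my_id=2                            # this node's position in the server list
{
  printf 'tickTime=2000\ndataDir=%s\nclientPort=2181\n' "$zkdir"
  printf 'initLimit=5\nsyncLimit=2\n'
  i=1
  for h in n1 n2 n3; do
    printf 'server.%d=%s:2888:3888\n' "$i" "$h"
    i=$(( i + 1 ))
  done
} > "$zkdir/zoo.cfg"
echo "$my_id" > "$zkdir/myid"
cat "$zkdir/zoo.cfg"
```

On a real node, `service zookeeper-server init --myid=N` performs the myid half of this for you.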
An alternative way to start it:

/usr/lib/zookeeper/bin/zkServer.sh start

3.4 Verify the installation

Use the zookeeper script to check the running state, including each node's role:

/usr/lib/zookeeper/bin/zkServer.sh status

If zookeeper was started with the service method, the check above does not work; use the following instead:

echo stat | nc hostname 2181

Output:

JMX enabled by default
Using config: /usr/lib/zookeeper/bin/../conf/zoo.cfg
Mode: leader

Connect to the ZooKeeper cluster with the client script. To a client, ZooKeeper is a single ensemble: connecting to the cluster feels like having the whole service to yourself, so the connection can be made from any node.

cd /usr/lib/zookeeper/bin
./zkCli.sh -server n1:2181

4. Install Hadoop

4.1 Installation

Upload the Hadoop RPM packages to the nodes and run the install command on all nodes:

rpm -ivh nc-*.rpm avro-libs-*.rpm bigtop-jsvc-*.rpm bigtop-tomcat-*.rpm parquet-*.rpm hadoop-0.20-mapreduce-2.3.0+*.rpm hadoop-2.3.0+*.rpm hadoop-client-*.rpm hadoop-debuginfo-*.rpm hadoop-doc-*.rpm hadoop-hdfs-2.3.0*.rpm hadoop-hdfs-datanode-*.rpm hadoop-hdfs-journalnode-*.rpm hadoop-hdfs-namenode-*.rpm hadoop-hdfs-zkfc-*.rpm hadoop-httpfs-*.rpm hadoop-libhdfs-*.rpm hadoop-mapreduce-*.rpm hadoop-yarn-*.rpm --nodeps --force

4.2 Configure SSH

SSH must be installed and configured correctly, otherwise NameNode active/standby failover will fail! The key path must also match the ssh private-key path configured in hdfs-site.xml. Installing CDH creates the users hdfs and mapred and the group hadoop; passwordless SSH must be configured here for the hdfs user on all NameNodes. Detailed steps:

Install the ssh server and client and start the service. Assume there are two NameNodes, A and B. On A, perform the following steps.

Note: on CentOS, first uncomment the following lines in /etc/ssh/sshd_config:

#RSAAuthentication yes
#PubkeyAuthentication yes
#AuthorizedKeysFile .ssh/authorized_keys

Then restart the ssh service:

service sshd restart

Switch to the hdfs user and, in its home directory, run:

ssh-keygen -t rsa -P ''

where -P '' means no passphrase. Press Enter through the prompts, answering y to any y/n question. This generates two files under the default directory /var/lib/hadoop-hdfs: id_rsa (the private key) and id_rsa.pub (the public key).

Note: if you do not know the hdfs user's password, set it as root with passwd:

passwd hdfs
# at the "New password:" prompt, enter the new password

Enter the .ssh directory, copy id_rsa.pub into the authorized_keys file, and give it permission 644:

cd .ssh
cp id_rsa.pub authorized_keys
chmod 644 authorized_keys

Remote-copy this node's public key id_rsa.pub to the other nodes, keeping permission 644:

$ scp id_rsa.pub root@hostname:/var/lib/hadoop-hdfs/.ssh/

Note: scp cannot create directories, so create the target directory on the destination server before copying, otherwise errors such as the following are easy to hit:

scp: .: not a regular file
scp: /var/lib/hadoop-hdfs/.ssh/: Is a directory

Change the owner of /var/lib/hadoop-hdfs/.ssh/authorized_keys to hdfs:

chown hdfs:hdfs /var/lib/hadoop-hdfs/.ssh/authorized_keys

Note: if a "Host key verification failed" error appears here, delete the stale public-key entry from /root/.ssh/known_hosts.

On host B, append the public key to authorized_keys and change its permission to 600:

cat id_rsa.pub >> authorized_keys
chmod 600 authorized_keys

A can now ssh to B without a password; test with ssh <B's hostname>.

To enable passwordless login from B to A, perform the following on B. Repeat step 2, then copy the public key id_rsa.pub to A, renamed to <B's ip>.id_rsa.pub:

scp id_rsa.pub root@<A's ip>:/var/lib/hadoop-hdfs/.ssh/<B's ip>.id_rsa.pub

Change the file's owner to hdfs:

chown hdfs:hdfs /var/lib/hadoop-hdfs/.ssh/26.id_rsa.pub

Append the public key to authorized_keys:

cat <B's ip>.id_rsa.pub >> authorized_keys

B can now log in to A without a password; test with ssh <A's hostname>.

If ssh <hostname> fails with "Agent admitted failure to sign using the key", fix it by adding the private key with ssh-add:

ssh-add .ssh/id_rsa

If ssh localhost still asks for a password, check the permissions of each directory level, especially the home directory:

chmod 755 .          (or stricter)
chmod 755 .ssh
chmod 644 .ssh/authorized_keys
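The key-exchange steps above boil down to appending each peer's public key to authorized_keys and tightening the permissions sshd requires. A file-level sketch in a scratch directory; the key strings are fake placeholders, and real keys would come from ssh-keygen and scp as described (assumes GNU coreutils for stat -c):

```shell
# Sketch: merge two placeholder public keys into authorized_keys and
# apply the 755/600 permissions that sshd insists on.
sshdir=$(mktemp -d)
chmod 755 "$sshdir"
echo 'ssh-rsa AAAAfake1 hdfs@n1' > "$sshdir/n1.id_rsa.pub"
echo 'ssh-rsa AAAAfake2 hdfs@n2' > "$sshdir/n2.id_rsa.pub"
cat "$sshdir"/*.id_rsa.pub >> "$sshdir/authorized_keys"
chmod 600 "$sshdir/authorized_keys"
stat -c '%a' "$sshdir/authorized_keys"
```

Any group- or world-writable directory on the path to authorized_keys makes sshd silently ignore the file, which is exactly the "still asks for a password" symptom above.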
4.3 Configure Hadoop

4.3.1 Edit the configuration files

Copy the Hadoop configuration, on every node:

cp -r /etc/hadoop/conf.dist /etc/hadoop/conf.my_cluster

Set alternatives, on every node:

alternatives --verbose --install /etc/hadoop/conf hadoop-conf /etc/hadoop/conf.my_cluster/ 50
alternatives --set hadoop-conf /etc/hadoop/conf.my_cluster/

Edit core-site.xml:

Property | Value | Notes
fs.defaultFS | hdfs://cdh5 | file-system name
ha.zookeeper.quorum | n1.localdomain:2181,n2.localdomain:2181,n3.localdomain:2181 | Zookeeper service addresses
hadoop.tmp.dir | /home/cdh5-data/hadoopTmp | temporary path
hadoop.logfile.size | 10000000 |
hadoop.logfile.count | 10 |
hadoop.proxyuser.mapred.groups | * |
hadoop.proxyuser.mapred.hosts | * |

Configuration file hdfs-site.xml:

Property | Value | Notes
dfs.permissions | false |
dfs.replication | 3 | number of block replicas
dfs.blocksize | 67108864 | block size
dfs.nameservices | cdh5 | service name
dfs.namenode.name.dir | file:///home/CDH5/dfs/nn | local path where the fsimage is stored
dfs.datanode.data.dir | file:///home/CDH5/dfs/dn | path where data blocks are stored
dfs.ha.namenodes.cdh5 | nn1,nn2 | NameNode service names
dfs.namenode.rpc-address.cdh5.nn1 | n1.localdomain:8020 |
dfs.namenode.http-address.cdh5.nn1 | n1.localdomain:50070 |
dfs.namenode.rpc-address.cdh5.nn2 | n2.localdomain:8020 |
dfs.namenode.http-address.cdh5.nn2 | n2.localdomain:50070 |
dfs.namenode.shared.edits.dir | qjournal://dw221.localdomain:8485;localhost2.localdomain:8485/cdh5 | path shared by the NameNodes in an HA cluster
dfs.journalnode.edits.dir | /home/CDH5/dfs/jn |
dfs.client.failover.proxy.provider.cdh5 | org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider |
dfs.ha.fencing.methods | sshfence |
dfs.ha.fencing.ssh.private-key-files | /var/lib/hadoop-hdfs/.ssh/id_rsa |
dfs.ha.automatic-failover.enabled | true | enable automatic failover
dfs.permissions.superusergroup | hadoop |

Configuration file mapred-site.xml:

Property | Value | Notes
mapreduce.job.maps | 8 | map tasks per job
mapreduce.job.reduces | 8 | reduce tasks per job
mapreduce.tasktracker.map.tasks.maximum | 8 | map tasks a tasktracker may run concurrently
mapreduce.tasktracker.reduce.tasks.maximum | 8 | reduce tasks a tasktracker may run concurrently
mapreduce.framework.name | yarn |
mapreduce.jobhistory.address | n1.localdomain:10020 |
mapreduce.jobhistory.webapp.address | n1.localdomain:19888 |
yarn.app.mapreduce.am.staging-dir | /user | staging path for submitted jobs
mapreduce.map.memory.mb | 512 | memory per map task
mapreduce.reduce.memory.mb | 512 | memory per reduce task
mapreduce.map.java.opts | -Xmx200m |
mapreduce.reduce.java.opts | -Xmx200m |
mapreduce.app-submission.cross-platform | true | support cross-platform job submission

Configuration file yarn-site.xml:

Property | Value | Notes
yarn.resourcemanager.ha.enabled | true | RM high availability
yarn.resourcemanager.ha.automatic-failover.enabled | true | failover recovery
yarn.resourcemanager.ha.automatic-failover.embedded | true | use EmbeddedElectorService to elect the active RM
yarn.resourcemanager.cluster-id | yarn-rm-cluster | RM cluster name
yarn.resourcemanager.ha.rm-ids | rm1,rm2 | RM ids in the cluster
yarn.resourcemanager.ha.id | rm1 | this RM's id; configure a different id on each node
yarn.resourcemanager.recovery.enabled | true |
yarn.resourcemanager.zk-address | n1:2181,n2:2181,n3:2181 | Zookeeper addresses
yarn.resourcemanager.address.rm1 | n1:23140 |
yarn.resourcemanager.scheduler.address.rm1 | n1:23130 |
yarn.resourcemanager.webapp.https.address.rm1 | n1:23189 |
yarn.resourcemanager.webapp.address.rm1 | n1:23188 |
yarn.resourcemanager.resource-tracker.address.rm1 | n1:23125 |
yarn.resourcemanager.admin.address.rm1 | n1:23141 |
yarn.resourcemanager.address.rm2 | n3:23140 |
yarn.resourcemanager.scheduler.address.rm2 | n3:23130 |
yarn.resourcemanager.webapp.https.address.rm2 | n3:23189 |
yarn.resourcemanager.webapp.address.rm2 | n3:23188 |
yarn.resourcemanager.resource-tracker.address.rm2 | n3:23125 |
yarn.resourcemanager.admin.address.rm2 | n3:23141 |
yarn.nodemanager.local-dirs | /home/cdh5-data/yarn/local |
yarn.nodemanager.log-dirs | /home/cdh5-data/yarn/log | path where container logs are stored
yarn.scheduler.minimum-allocation-mb | 200 | minimum memory granted per allocation
yarn.nodemanager.vmem-pmem-ratio | 3.1 | ratio of virtual to physical memory a container may use
yarn.nodemanager.resource.cpu-vcores | 16 | CPUs the nodemanager may use; without this, MR tasks cannot run
yarn.nodemanager.resource.memory-mb | 10240 | memory the nodemanager may manage; without this, MR tasks cannot run
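All four files above share Hadoop's <name>/<value> XML shape, so a small generator keeps them structurally consistent. A sketch that writes to a scratch file rather than /etc/hadoop/conf.my_cluster/core-site.xml, with the property list abbreviated to two entries from the core-site table:

```shell
# Sketch: emit a Hadoop *-site.xml file from name=value pairs.
site=$(mktemp)
{
  echo '<?xml version="1.0"?>'
  echo '<configuration>'
  for kv in \
    'fs.defaultFS=hdfs://cdh5' \
    'hadoop.tmp.dir=/home/cdh5-data/hadoopTmp'
  do
    name=${kv%%=*}
    value=${kv#*=}
    printf '  <property>\n    <name>%s</name>\n    <value>%s</value>\n  </property>\n' "$name" "$value"
  done
  echo '</configuration>'
} > "$site"
grep -c '<property>' "$site"
```

The same loop with the hdfs-site, mapred-site, or yarn-site tables as input produces the other three files.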
Create the directories, on every node:

# create the temporary path
sudo mkdir -p /home/cdh5-data/hadoopTmp
# create the namenode and datanode storage paths
sudo mkdir -p /home/CDH5/dfs/nn /home/CDH5/dfs/dn
sudo chown -R hdfs:hdfs /home/CDH5/dfs/nn /home/CDH5/dfs/dn
sudo chmod 755 /home/CDH5/dfs/nn /home/CDH5/dfs/dn
# special note: these directories must have permission 755, otherwise startup will fail later
# yarn.nodemanager.local-dirs, yarn.nodemanager.log-dirs
sudo mkdir -p /home/cdh5-data/yarn/local /home/cdh5-data/yarn/log
sudo chown -R yarn:yarn /home/cdh5-data/yarn/local /home/cdh5-data/yarn/log
# dfs.journalnode.edits.dir
sudo mkdir -p /home/CDH5/dfs/jn
sudo chown -R hdfs:hdfs /home/CDH5/dfs/jn

4.3.2 Synchronize the configuration

Copy the configuration to the other nodes:

$ sudo scp -r /etc/hadoop/conf.my_cluster root@n2:/etc/hadoop/conf.my_cluster

Configure alternatives on every node:

$ sudo alternatives --verbose --install /etc/hadoop/conf hadoop-conf /etc/hadoop/conf.my_cluster 50
$ sudo alternatives --set hadoop-conf /etc/hadoop/conf.my_cluster

4.4 Start HDFS

Start the JournalNode service on the JournalNode hosts:

service hadoop-hdfs-journalnode start

On every NameNode host, format zkfc and start it:

hdfs zkfc -formatZK
service hadoop-hdfs-zkfc start

Note: these start commands do not report errors whether or not they succeed; the logs are the only way to confirm the daemons actually started.
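Because the init scripts return success either way, a quick scan of the daemon log for FATAL/ERROR lines is the practical check. The sketch runs over a fabricated sample log; on a node the real zkfc log lives under /var/log/hadoop-hdfs/:

```shell
# Sketch: scan a service log for FATAL/ERROR lines after starting a
# daemon. The log content below is fabricated for illustration only.
log=$(mktemp)
cat > "$log" <<'EOF'
2015-09-12 00:48:01 INFO  ha.ZKFailoverController: starting
2015-09-12 00:48:02 FATAL ha.ZKFailoverController: unable to reach ZooKeeper
EOF
problems=$(grep -cE 'FATAL|ERROR' "$log")
echo "problem lines: $problems"
```

A nonzero count here means the "silent" start actually failed and the daemon is not running.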
On one of the NameNodes, A, format the file system:

sudo -u hdfs hdfs namenode -format

At this point the following error may be reported:

15/09/12 00:48:03 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1441093989-33-1441990083266
15/09/12 00:48:03 WARN fs.FileUtil: Failed to delete file or dir /home/CDH5/dfs/nn/current/.gconfd: it still exists.
15/09/12 00:48:03 WARN fs.FileUtil: Failed to delete file or dir /home/CDH5/dfs/nn/current/.config: it still exists.
15/09/12 00:48:03 WARN fs.FileUtil: Failed to delete file or dir /home/CDH5/dfs/nn/current/.ssh: it still exists.
15/09/12 00:48:03 WARN namenode.NameNode: Encountered exception during format:
java.io.IOException: Cannot remove current directory: /home/CDH5/dfs/nn/current
        at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:332)
        at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:546)
        at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:567)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:152)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:895)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1306)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1420)

This is likely a permission problem on the /home/cdh5-data/hadoopTmp directory; fix the permissions:

chown -R hdfs:hadoop /home/cdh5-data/hadoopTmp
sudo chmod -R a+w /home/CDH5/

If that still does not help, raise the permission of the nn directory directly:

chmod 777 nn

This can also happen on a second format: to avoid losing data, Hadoop refuses to delete files arbitrarily. In that case enter the nn directory and delete the current folder:

rm -rf /home/CDH5/dfs/nn/current/

On the other namenode, B, create the current folder:

mkdir -p /home/CDH5/dfs/nn/current/

scp everything under A's current directory to B:

scp -r /home/CDH5/dfs/nn/current/* root@n2.localdomain:/home/CDH5/dfs/nn/current/

Change the owner of B's current directory to hdfs:

chown -R hdfs:hdfs /home/CDH5/dfs/nn
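After scp-ing current/ from A to B, the two metadata directories should be byte-identical, which `diff -r` can confirm before the NameNodes come up. A sketch with scratch directories standing in for /home/CDH5/dfs/nn/current on the two hosts:

```shell
# Sketch: verify the active and standby NameNode metadata copies match.
nn_a=$(mktemp -d)
nn_b=$(mktemp -d)
echo 'namespaceID=12345' > "$nn_a/VERSION"   # fabricated sample content
cp "$nn_a/VERSION" "$nn_b/VERSION"           # stands in for the scp step
if diff -r "$nn_a" "$nn_b" > /dev/null; then
  echo 'metadata directories match'
else
  echo 'metadata directories differ'
fi
```

A mismatching VERSION file (different namespaceID) is the classic cause of the standby refusing to start.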
On all NameNode hosts, start the NameNodes:

service hadoop-hdfs-namenode start
# stop: service hadoop-hdfs-namenode stop

On all DataNode hosts, start the DataNodes:

service hadoop-hdfs-datanode start
# stop: service hadoop-hdfs-datanode stop

Configure HADOOP_MAPRED_HOME:

export HADOOP_MAPRED_HOME=/usr/lib/hadoop-mapreduce

and add it to /etc/default/hadoop.

View HDFS at: 192.168.116.33:50070/dfshealth.jsp

4.5 Create the required paths on HDFS

Create the HDFS /tmp path:

sudo -u hdfs hadoop fs -mkdir /tmp
sudo -u hdfs hadoop fs -chmod -R 1777 /tmp

If this fails with an error like the following:

15/09/16 13:16:17 WARN retry.RetryInvocationHandler: Exception while invoking class org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo. Not retrying because failovers (15) exceeded maximum allowed (15).
java.net.ConnectException: Call From n1.localdomain/192.168.116.33 to n2.localdomain:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: /hadoop/ConnectionRefused
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783)
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730)
        at org.apache.hadoop.ipc.Client.call(Client.java:1413)
        at org.apache.hadoop.ipc.Client.call(Client.java:2)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
        at com.sun.proxy.$Proxy9.getFileInfo(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:701)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
        at com.sun.proxy.$Proxy10.getFileInfo(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1758)
        at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1124)
        at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1120)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1120)
        at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:57)
        at org.apache.hadoop.fs.Globber.glob(Globber.java:248)
        at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1623)
        at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:326)
        at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:224)
        at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:207)
        at org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:190)
        at org.apache.hadoop.fs.shell.Command.run(Command.java:154)
        at org.apache.hadoop.fs.FsShell.run(FsShell.java:255)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
        at org.apache.hadoop.fs.FsShell.main(FsShell.java:308)
Caused by: java.net.ConnectException: Connection refused
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:735)
        at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
        at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:604)
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:699)
        at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:367)
        at org.apache.hadoop.ipc.Client.getConnection(Client.java:1461)
        at org.apache.hadoop.ipc.Client.call(Client.java:0)
        ... 28 more

then check whether the earlier initialization went wrong; if it did, this step will fail here.

Create the history paths:

sudo -u hdfs hadoop fs -mkdir /user
sudo -u hdfs hadoop fs -mkdir /user/history
sudo -u hdfs hadoop fs -chmod -R 1777 /user/history
sudo -u hdfs hadoop fs -chown mapred:hadoop /user/history

Create the log paths:

sudo -u hdfs hadoop fs -mkdir /var
sudo -u hdfs hadoop fs -mkdir /var/log
sudo -u hdfs hadoop fs -mkdir /var/log/hadoop-yarn
sudo -u hdfs hadoop fs -chown yarn:mapred /var/log/hadoop-yarn

Note: this may fail with a "folder not found" error; the directories must be created level by level, not in one step.

Verify the HDFS file structure:

sudo -u hdfs hadoop fs -ls -R /

View HDFS at: 192.168.116.33:50070/dfshealth.jsp

4.6 Start YARN and the MapReduce history server

Start the RM:

sudo service hadoop-yarn-resourcemanager start

Start the nodemanager on the datanodes:

sudo service hadoop-yarn-nodemanager start

Start the history server:

sudo service hadoop-mapreduce-historyserver start

Job monitoring page: 192.168.116.33:23188/cluster/nodes

Note: if that page shows the following message:

This is standby RM. Redirecting to the current active RM: /cluster/nodes

then both nodes are currently in standby state. A node's state can be checked with:

# yarn rmadmin -getServiceState rm1
standby

Since automatic switchover is the default once HA is installed, you can first switch a node to Active manually:

yarn rmadmin -transitionToActive rm1

If this reports the error:

Automatic failover is enabled for org.apache.hadoop.yarn.client.RMHAServiceTarget@5fab22ff
Refusing to manually manage HA state, since it may cause
a split-brain scenario or other incorrect state.
If you are very sure you know what you are doing, please
specify the --forcemanual flag.

then automatic switchover is configured and the force flag must be added:

yarn rmadmin -transitionToActive --forcemanual rm1

This may still report an error:

15/09/21 12:15:37 WARN ha.HAAdmin: Proceeding with manual HA state management even though automatic failover is enabled for org.apache.hado
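Checking both RM ids and acting only on the standby one can be scripted around `yarn rmadmin -getServiceState`. In the sketch below that call is mocked so the logic runs anywhere; on a cluster node you would replace the function body with the real command:

```shell
# Sketch: find the active ResourceManager id. get_state is a stand-in
# for `yarn rmadmin -getServiceState <id>`.
get_state() {
  case "$1" in
    rm1) echo standby ;;
    rm2) echo active ;;
  esac
}
active=''
for id in rm1 rm2; do
  if [ "$(get_state "$id")" = "active" ]; then
    active=$id
  fi
done
echo "active RM: ${active:-none}"
```

If neither id reports active, that matches the "both standby" symptom above and a manual transition (with --forcemanual under automatic failover) is the escape hatch.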