
Detailed Tutorial for Building a Big Data Platform


Contents

1. Introduction
  1.1 Purpose
2. Detailed Setup Steps
  2.1 Preparation
    2.1.1 Add hostname
    2.1.2 Add sub-users
    2.1.3 Set up passwordless SSH login
    2.1.4 Disable SELinux
    2.1.5 Disable the firewall
    2.1.6 Install the JDK
  2.2 Install the Hadoop cluster
    2.2.1 Zookeeper
      2.2.1.1 Configure Zookeeper
      2.2.1.2 Using Zookeeper
    2.3.2 Hadoop
      2.3.2.1 Configure Hadoop
      2.3.2.2 Starting Hadoop for the first time
    2.3.3 Spark
      2.3.3.1 Install Scala (all nodes)
      2.3.3.2 Install Spark
    2.3.4 Hive
      2.3.4.1 Deploy the MySQL master-slave cluster
      2.3.4.2 Configure Hive
    2.3.5 Sqoop
      2.3.5.1 Configure Sqoop
      2.3.5.2 Using Sqoop
  2.4 Install the HBase cluster
    2.4.1 HBase
      2.4.1.2 Deploy the distributed HBase cluster
      2.4.1.3 Operating HBase
    2.4.2 Kafka
      2.4.2.1 Deploy Kafka in distributed mode
      2.4.2.2 Using Kafka
    2.4.3 KafkaOffsetMonitor
      2.4.3.1 Configure KafkaOffsetMonitor
  2.5 Environment variables
    2.5.1 Environment variables added on the Hadoop nodes
    2.5.2 Environment variables configured on the HBase cluster nodes

1. Introduction

1.1 Purpose

This tutorial was written for CentOS 7.3 and covers building a big data platform from the following components: Zookeeper, HDFS, YARN, MapReduce2, HBase, Spark, Hive, and Sqoop.

The deployment consists of two clusters: one Hadoop cluster and one HBase cluster.

Role / Hostname / Deployed components:

Hadoop cluster management nodes (2):
  hadoopManager01: Hive (MySQL), Sqoop; NameNode (hadoop), DFSZKFailoverController (hadoop), ResourceManager (hadoop)
  hadoopManager02: MySQL; NameNode (hadoop), DFSZKFailoverController (hadoop), ResourceManager (hadoop)

Hadoop cluster data nodes (3):
  hadoop01, hadoop02, hadoop03: JournalNode (hadoop), DataNode (hadoop), QuorumPeerMain (Zookeeper), Spark (master/worker), NodeManager (hadoop)

HBase cluster management nodes (2):
  hbaseManager01: KafkaOffsetMonitor; NameNode (hadoop), DFSZKFailoverController (hadoop), ResourceManager (hadoop), HMaster (hbase)
  hbaseManager02: NameNode (hadoop), DFSZKFailoverController (hadoop), ResourceManager (hadoop), HMaster (hbase)

HBase cluster data nodes (3):
  hbase01, hbase02, hbase03: JournalNode (hadoop), DataNode (hadoop), Zookeeper, HRegionServer (hbase), Kafka, NodeManager (hadoop)

Figure 1.1: Components

2. Detailed Setup Steps

2.1 Preparation

Perform the following configuration on all nodes.

2.1.1 Add hostname

Set the hostname on each machine, and add hostname-to-IP mappings for every node to /etc/hosts on each node; if a DNS server is available, this step can be skipped. For example:

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
<ip-of-hadoop01> hadoop01
<ip-of-hadoop02> hadoop02
<ip-of-hadoop03> hadoop03
<ip-of-hadoop04> hadoop04
<ip-of-hadoop05> hadoop05

Figure 2-1-1: Adding hostnames

2.1.2 Add sub-users

Add sub-users on all hosts: the Hadoop cluster uses the sub-user hadoop, and the HBase cluster uses the sub-user hbase.

adduser hadoop
adduser hbase

2.1.3 Set up passwordless SSH login

Generate SSH keys and set up passwordless login between the sub-users on all hosts: append every host's sub-user id_rsa.pub to authorized_keys, copy authorized_keys to all nodes, and change the permissions of authorized_keys to 644.

chown -R hadoop:hadoop /home/hadoop
chmod 700 /home/hadoop
chmod 700 /home/hadoop/.ssh
chmod 644 /home/hadoop/.ssh/authorized_keys
chmod 600 /home/hadoop/.ssh/id_rsa

After configuring, verify the setup; it succeeds once the hosts can log in to each other without a password, as in the sketch below.
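A minimal sketch for the hadoop user (the hbase cluster is analogous): run ssh-keygen on every node, gather the public keys into a file on one node (all_keys.pub is a hypothetical name for that aggregate), then distribute the result.

ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa            # on every node, as user hadoop
# on one node, after collecting every node's id_rsa.pub into all_keys.pub:
cat all_keys.pub >> ~/.ssh/authorized_keys
scp ~/.ssh/authorized_keys hadoop@hadoop02:~/.ssh/  # repeat for each remaining node
ssh hadoop@hadoop02 hostname                        # verify: no password prompt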

2.1.4 Disable SELinux

On all nodes, set the value in /etc/selinux/config to disabled, then reboot:

SELINUX=disabled

Check the result with /usr/sbin/sestatus.

2.1.5 Disable the firewall

Disable the firewall on all nodes with the following commands:

systemctl stop firewalld.service
systemctl disable firewalld.service
systemctl status firewalld.service

2.1.6 Install the JDK

All Hadoop components depend on the JDK, so install it first. This tutorial uses jdk-8u162-linux-x64.rpm.

Download the package from the official site, copy it to each node, and install it with:

yum install -y jdk-8u162-linux-x64.rpm

[root@hadoop01 ~]# java -version
java version "1.8.0_162"
Java(TM) SE Runtime Environment (build 1.8.0_162-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.162-b12, mixed mode)

Figure 2-1-6: Installing the JDK

2.2 Install the Hadoop cluster

Install the components in the following order:

Zookeeper → Hadoop → Spark → Hive → Sqoop

2.2.1Zookeeper

在節(jié)點(diǎn)hadoopOl,hadoop02和hadoop03上配置安裝Zookeeper,用戶為子用戶hadoop

配置Zookeeper

i.創(chuàng)建先關(guān)文件夾

mkdir-p/home/hadoop/opt/data/zookeeper

mkdir-p/home/hadoop/opt/data/zookeeper/zookeeper_log

2.上傳ZK安裝包至iJ/home/hadoop/zookeeper-3.4.5-cdh5.10.0.tar.gz,然后解壓

tar -zxvf zookeeper-3.4.5-cdh5.10.0.tar.gz

3. Edit /home/hadoop/zookeeper-3.4.5-cdh5.10.0/conf/zoo.cfg:

[root@hadoop01 conf]# cat zoo.cfg
tickTime=2000
initLimit=5
syncLimit=2
dataDir=/home/hadoop/opt/data/zookeeper
dataLogDir=/home/hadoop/opt/data/zookeeper/zookeeper_log
clientPort=2181
server.33=hadoop01:2888:3888
server.34=hadoop02:2888:3888
server.35=hadoop03:2888:3888

4. On each node, create a file named myid under /home/hadoop/opt/data/zookeeper and write the matching server ID into it, as in the sketch below:

Write 33 into myid on hadoop01.
Write 34 into myid on hadoop02.
Write 35 into myid on hadoop03.
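A minimal sketch that writes all three IDs from one host, assuming the passwordless SSH setup from 2.1.3:

for pair in "hadoop01 33" "hadoop02 34" "hadoop03 35"; do
  set -- $pair   # $1 = host, $2 = server ID
  ssh hadoop@"$1" "echo $2 > /home/hadoop/opt/data/zookeeper/myid"
done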

2.2.1.2 Using Zookeeper

1. Start Zookeeper on every node:

/home/hadoop/zookeeper-3.4.5-cdh5.10.0/bin/zkServer.sh start

2. Test a client connection:

/home/hadoop/zookeeper-3.4.5-cdh5.10.0/bin/zkCli.sh -server hadoop01:2181

3. Check the server status:

/home/hadoop/zookeeper-3.4.5-cdh5.10.0/bin/zkServer.sh status

2.3.2 Hadoop

Configure Hadoop on all nodes, as the sub-user hadoop.

2.3.2.1 Configure Hadoop

1. Extract hadoop-2.6.0-cdh5.10.0.tar.gz into /home/hadoop/:

tar -zxvf hadoop-2.6.0-cdh5.10.0.tar.gz

2. Create the required directories:

mkdir -p /home/hadoop/opt/data/hadoop/tmp
mkdir -p /home/hadoop/opt/data/hadoop/hadoop_name
mkdir -p /home/hadoop/opt/data/hadoop/hadoop_data
mkdir -p /home/hadoop/opt/data/hadoop/editsdir/dfs/journalnode
mkdir -p /home/hadoop/opt/data/hadoop/nm-local-dir
mkdir -p /home/hadoop/opt/data/hadoop/hadoop_log
mkdir -p /home/hadoop/opt/data/hadoop/userlogs

3. Edit /home/hadoop/hadoop-2.6.0-cdh5.10.0/etc/hadoop/hadoop-env.sh:

# The java implementation to use.
export JAVA_HOME=/usr/java/jdk1.8.0_162

4. Configure HDFS HA

The files to edit are listed below (the full configuration ships in the hadoop folder); a hedged sketch follows.

core-site.xml
hdfs-site.xml
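A minimal sketch of the HA-related properties, assuming the nameservice bigdatacluster (referenced later by zkfc -formatZK and by hbase.rootdir), NameNodes on hadoopManager01/02, and JournalNodes on hadoop01-03 per Figure 1.1; the logical IDs nn1/nn2, the RPC port, and the fencing key path are assumptions, not taken from the original files.

core-site.xml:

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://bigdatacluster</value>
</property>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/home/hadoop/opt/data/hadoop/tmp</value>
</property>
<property>
  <name>ha.zookeeper.quorum</name>
  <value>hadoop01:2181,hadoop02:2181,hadoop03:2181</value>
</property>

hdfs-site.xml:

<property>
  <name>dfs.nameservices</name>
  <value>bigdatacluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.bigdatacluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.bigdatacluster.nn1</name>
  <value>hadoopManager01:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.bigdatacluster.nn2</name>
  <value>hadoopManager02:8020</value>
</property>
<property>
  <name>dfs.namenode.name.dir</name>
  <value>/home/hadoop/opt/data/hadoop/hadoop_name</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/home/hadoop/opt/data/hadoop/hadoop_data</value>
</property>
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://hadoop01:8485;hadoop02:8485;hadoop03:8485/bigdatacluster</value>
</property>
<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/home/hadoop/opt/data/hadoop/editsdir/dfs/journalnode</value>
</property>
<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.bigdatacluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
  <name>dfs.ha.fencing.methods</name>
  <value>sshfence</value>
</property>
<property>
  <name>dfs.ha.fencing.ssh.private-key-files</name>
  <value>/home/hadoop/.ssh/id_rsa</value>
</property>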

5. Configure YARN HA

The files to edit are listed below (the full configuration ships in the hadoop folder); a hedged sketch follows.

yarn-site.xml (on each management node, additionally set yarn.resourcemanager.ha.id to identify that node)
mapred-site.xml
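A minimal sketch of the HA-related yarn-site.xml properties, assuming ResourceManagers on hadoopManager01/02; the logical IDs rm1/rm2 and the cluster-id yarncluster are assumed names. The aux-services entries are what make the shuffle jar from step 6 load.

<property>
  <name>yarn.resourcemanager.ha.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.resourcemanager.cluster-id</name>
  <value>yarncluster</value>
</property>
<property>
  <name>yarn.resourcemanager.ha.rm-ids</name>
  <value>rm1,rm2</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm1</name>
  <value>hadoopManager01</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm2</name>
  <value>hadoopManager02</value>
</property>
<property>
  <name>yarn.resourcemanager.zk-address</name>
  <value>hadoop01:2181,hadoop02:2181,hadoop03:2181</value>
</property>
<property>
  <!-- on hadoopManager01 only; set rm2 on hadoopManager02 -->
  <name>yarn.resourcemanager.ha.id</name>
  <value>rm1</value>
</property>
<property>
  <name>yarn.nodemanager.local-dirs</name>
  <value>/home/hadoop/opt/data/hadoop/nm-local-dir</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle,spark_shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services.spark_shuffle.class</name>
  <value>org.apache.spark.network.yarn.YarnShuffleService</value>
</property>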

6. YARN NodeManager shuffle jar

On the DataNode/NodeManager nodes, place the file spark-2.3.0-yarn-shuffle.jar at:

/home/hadoop/hadoop-2.6.0-cdh5.10.0/share/hadoop/yarn/spark-2.3.0-yarn-shuffle.jar

2.3.2.2 Starting Hadoop for the first time

1. On namenode1, create the ZK namespace:

/home/hadoop/hadoop-2.6.0-cdh5.10.0/bin/hdfs zkfc -formatZK

Check for: ha.ActiveStandbyElector: Successfully created /hadoop-ha/bigdatacluster in ZK.

2. Start the JournalNodes (hadoop01, hadoop02, hadoop03):

/home/hadoop/hadoop-2.6.0-cdh5.10.0/sbin/hadoop-daemon.sh start journalnode

Check the log:

/home/hadoop/opt/data/hadoop/hadoop_log/hadoop-hadoop-journalnode-hadoop02.log

3. On the primary namenode, run the format command. Format only on the primary NN; this generates the unique cluster ID:

/home/hadoop/hadoop-2.6.0-cdh5.10.0/bin/hdfs namenode -format bigdatacluster

Check: no errors reported.

4. Start the namenode process on the primary namenode:

/home/hadoop/hadoop-2.6.0-cdh5.10.0/sbin/hadoop-daemon.sh start namenode

Check the log:

/home/hadoop/opt/data/hadoop/hadoop_log/hadoop-hadoop-namenode-hadoopmanager01.log

5. On the standby namenode, copy the metadata from the primary NN to synchronize it:

/home/hadoop/hadoop-2.6.0-cdh5.10.0/bin/hdfs namenode -bootstrapStandby

Check for: Exiting with status 0

6. Start the NN on the standby namenode:

/home/hadoop/hadoop-2.6.0-cdh5.10.0/sbin/hadoop-daemon.sh start namenode

Check the log:

/home/hadoop/opt/data/hadoop/hadoop_log/hadoop-hadoop-namenode-hadoopmanager02.log

7. Start DFSZKFailoverController on both namenode nodes:

/home/hadoop/hadoop-2.6.0-cdh5.10.0/sbin/hadoop-daemon.sh start zkfc

Check the log:

/home/hadoop/opt/data/hadoop/hadoop_log/hadoop-hadoop-zkfc-hadoopmanager02.log

8. Start HDFS:

/home/hadoop/hadoop-2.6.0-cdh5.10.0/sbin/start-dfs.sh

9. Start YARN:

/home/hadoop/hadoop-2.6.0-cdh5.10.0/sbin/start-yarn.sh

Check the log:

/home/hadoop/hadoop-2.6.0-cdh5.10.0/logs/yarn-hadoop-resourcemanager-hadoopmanager01.log

and the web UI at hadoopmanager01:8088/cluster/nodes

2.3.3 Spark

Deploy Spark on nodes hadoop01, hadoop02, and hadoop03, as the sub-user hadoop.

2.3.3.1 Install Scala (all nodes)

Install Scala as the root user.

1. Upload the package, then extract it and move it to /usr/local:

tar zxvf scala-2.12.5.tgz
mv scala-2.12.5 /usr/local/

2. Configure the environment variables and source the file to apply them:

vi /etc/profile

export SCALA_HOME=/usr/local/scala-2.12.5
export PATH=$PATH:$HADOOP_HOME/bin:${JAVA_HOME}/bin:${SCALA_HOME}/bin:${SPARK_HOME}/bin

source /etc/profile

3. Verify Scala:

scala -version

2.3.3.2 Install Spark

1. Upload spark-2.3.0-bin-hadoop2.6.tgz and extract it into /home/hadoop:

tar -zxvf spark-2.3.0-bin-hadoop2.6.tgz

2. Configure spark-env.sh:

HADOOP_CONF_DIR=/home/hadoop/hadoop-2.6.0-cdh5.10.0/etc/hadoop
SPARK_HOME=/home/hadoop/spark-2.3.0-bin-hadoop2.6

3. Start Spark on hadoop01-03:

/home/hadoop/spark-2.3.0-bin-hadoop2.6/sbin/start-all.sh

4. Open the web UI:

http://IP:8080/
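As a quick smoke test, a hedged sketch that runs the bundled SparkPi example against the standalone master; the master URL assumes the first master runs on hadoop01 on the default port 7077, and the jar path assumes the stock 2.3.0 distribution:

/home/hadoop/spark-2.3.0-bin-hadoop2.6/bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master spark://hadoop01:7077 \
  /home/hadoop/spark-2.3.0-bin-hadoop2.6/examples/jars/spark-examples_2.11-2.3.0.jar 100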

2.3.4 Hive

Configure Hive on node hadoopManager01 as the sub-user hadoop, then configure master-slave MySQL on hadoopManager01-02 as root.

2.3.4.1 Deploy the MySQL master-slave cluster

1. Remove the preinstalled MariaDB:

rpm -qa | grep -i mariadb
rpm -e --nodeps mariadb-libs-5.5.52-1.el7.x86_64

2. Upload the MySQL bundle and unpack it:

tar -xvf mysql-5.7.21-1.el7.x86_64.rpm-bundle.tar

3. The packages have dependencies, so install them in order: client depends on libs, and server depends on common and client.

yum install perl -y && yum install net-tools -y
rpm -ivh mysql-community-common-5.7.21-1.el7.x86_64.rpm
rpm -ivh mysql-community-libs-5.7.21-1.el7.x86_64.rpm
rpm -ivh mysql-community-client-5.7.21-1.el7.x86_64.rpm
rpm -ivh mysql-community-server-5.7.21-1.el7.x86_64.rpm

4. To ensure the data directory and its files are owned by the mysql user, initialize as follows if you run the MySQL service as root:

mysqld --initialize --user=mysql

5. Start the MySQL service:

systemctl start mysqld.service
systemctl status mysqld.service

6. Retrieve the initial password from the log, then log in:

cat /var/log/mysqld.log
mysql -uroot -p

7. Set a new password:

mysql> set password=password("2018");

8. Grant privileges (remote access):

mysql> grant all privileges on *.* to 'mysql'@'%' identified by '2018';

9. Flush privileges:

mysql> flush privileges;

10. Edit the master configuration (hadoopManager01):

vi /etc/my.cnf

#### add the following parameters
log-bin=mysql-bin
server-id=2
binlog-ignore-db=information_schema
binlog-ignore-db=cluster
binlog-ignore-db=mysql
binlog-do-db=test

Restart the database, log in, and run:

grant FILE on *.* to 'mysql'@'<slave-ip>' identified by '2018';
grant replication slave on *.* to 'mysql'@'<slave-ip>' identified by '2018';
flush privileges;
SHOW MASTER STATUS;

11. Edit the slave configuration:

vi /etc/my.cnf

#### add the following parameters
log-bin=mysql-bin
server-id=3
binlog-ignore-db=information_schema
binlog-ignore-db=cluster
binlog-ignore-db=mysql
replicate-do-db=test
replicate-ignore-db=mysql
log-slave-updates
slave-skip-errors=all
slave-net-timeout=60

Restart the database, log in, and run:

CHANGE MASTER TO MASTER_HOST='<master-ip>', MASTER_USER='mysql',
MASTER_PASSWORD='2018', MASTER_LOG_FILE='mysql-bin.000002', MASTER_LOG_POS=883;
stop slave;
start slave;
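To verify replication, a hedged check on the slave; both replication threads should report Yes:

mysql> SHOW SLAVE STATUS\G
-- look for: Slave_IO_Running: Yes / Slave_SQL_Running: Yes
-- and Seconds_Behind_Master near 0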

2.3.4.2 Configure Hive

1. Log in to MySQL, create a user for Hive, and grant it privileges:

mysql> create user 'hive'@'%' identified by 'hive';
mysql> grant all on *.* to 'hive'@'%' identified by 'hive';
mysql> flush privileges;

2. Create the required directories:

mkdir -p /home/hadoop/opt/data/hive
mkdir -p /home/hadoop/opt/data/hive/logs

3. Upload hive-1.1.0-cdh5.10.0 and extract it into /home/hadoop:

tar -zxvf hive-1.1.0-cdh5.10.0.tar.gz

4. Create the file hive-site.xml:

<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://hadoopmanager01:3306/hive?createDatabaseIfNotExist=true</value>
    <description>JDBC connect string for a JDBC metastore</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
    <description>Driver class name for a JDBC metastore</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
    <description>username to use against metastore database</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>hive</value>
    <description>password to use against metastore database</description>
  </property>
  <!-- hwi -->
  <property>
    <name>hive.hwi.war.file</name>
    <value>lib/hive-hwi-1.1.0-cdh5.10.0.jar</value>
    <description>This sets the path to the HWI war file, relative to ${HIVE_HOME}.</description>
  </property>
  <property>
    <name>hive.hwi.listen.host</name>
    <value></value>
    <description>This is the host address the Hive Web Interface will listen on</description>
  </property>
  <property>
    <name>hive.hwi.listen.port</name>
    <value>9999</value>
    <description>This is the port the Hive Web Interface will listen on</description>
  </property>
  <property>
    <name>hive.exec.scratchdir</name>
    <value>/home/hadoop/opt/data/hive/hive-${user.name}</value>
    <description>Scratch space for Hive jobs</description>
  </property>
  <property>
    <name>hive.exec.local.scratchdir</name>
    <value>/home/hadoop/opt/data/hive/${user.name}</value>
    <description>Local scratch space for Hive jobs</description>
  </property>
</configuration>

5. Edit hive-env.sh:

cp hive-env.sh.template hive-env.sh

# Set HADOOP_HOME to point to a specific hadoop install directory
HADOOP_HOME=/home/hadoop/hadoop-2.6.0-cdh5.10.0

# Hive Configuration Directory can be controlled by:
export HIVE_CONF_DIR=/home/hadoop/hive-1.1.0-cdh5.10.0/conf

# Folder containing extra libraries required for hive compilation/execution can be controlled by:
export HIVE_AUX_JARS_PATH=/home/hadoop/hive-1.1.0-cdh5.10.0/lib

6. Upload the MySQL JDBC jar into Hive's lib directory:

tar -zxvf mysql-connector-java-5.1.46.tar.gz
cp mysql-connector-java-5.1.46.jar /home/hadoop/hive-1.1.0-cdh5.10.0/lib/
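A hedged smoke test, assuming HDFS and the MySQL metastore above are up; the first statement forces metastore initialization (createDatabaseIfNotExist=true creates the hive database in MySQL on first use):

/home/hadoop/hive-1.1.0-cdh5.10.0/bin/hive -e "show databases;"
# then confirm the metastore tables landed in MySQL:
mysql -uhive -phive -h hadoopmanager01 -e "use hive; show tables;"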

2.3.5 Sqoop

Configure Sqoop on node hadoopManager01, as the sub-user hadoop.

2.3.5.1 Configure Sqoop

1. Upload the Sqoop package to /home/hadoop/sqoop-1.4.6-cdh5.10.0.tar.gz, then extract it:

tar -zxvf sqoop-1.4.6-cdh5.10.0.tar.gz

2. Copy the MySQL JDBC driver mysql-connector-java-5.1.46.jar into Sqoop's lib directory:

cp mysql-connector-java-5.1.46.jar /home/hadoop/sqoop-1.4.6-cdh5.10.0/lib/

3. Configure sqoop-env.sh:

export HADOOP_COMMON_HOME=/home/hadoop/hadoop-2.6.0-cdh5.10.0
export HADOOP_MAPRED_HOME=/home/hadoop/hadoop-2.6.0-cdh5.10.0
export HIVE_HOME=/home/hadoop/hive-1.1.0-cdh5.10.0

2.3.5.2 Using Sqoop

List all databases in MySQL:

sqoop list-databases --connect jdbc:mysql://localhost:3306/ --username mysql --password 2018
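A hedged import sketch that moves one MySQL table into HDFS; the database and table names (test.users) and the target directory are hypothetical:

sqoop import --connect jdbc:mysql://hadoopmanager01:3306/test \
  --username mysql --password 2018 \
  --table users --target-dir /user/hadoop/users -m 1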

2.4 Install the HBase cluster

Installation order for the HBase cluster:

Zookeeper → Hadoop → HBase → Kafka → KafkaOffsetMonitor

Install Zookeeper and Hadoop as described in 2.2.1 and 2.3.2, taking care to use the hbase user name in the configuration.

2.4.1 HBase

Configure the HBase cluster on all nodes.

2.4.1.2 Deploy the distributed HBase cluster

1. Upload the HBase package to /home/hbase/hbase-1.2.0-cdh5.10.0.tar.gz, then extract it:

tar -zxvf hbase-1.2.0-cdh5.10.0.tar.gz

2. Create the required directories:

mkdir -p /home/hbase/opt/data/hbase/logs
mkdir -p /home/hbase/opt/data/hbase/zookeeper
mkdir -p /home/hbase/opt/data/hbase/tmp

3. Edit hbase-env.sh:

export JAVA_HOME=/usr/java/jdk1.8.0_162
export HBASE_LOG_DIR=/home/hbase/opt/data/hbase/logs

export HADOOP_HOME=/home/hbase/hadoop-2.6.0-cdh5.10.0
export HBASE_MANAGES_ZK=false

4. Edit hbase-site.xml:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
-->

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://bigdatacluster/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.master.port</name>
    <value>16000</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>hbase01,hbase02,hbase03</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/hbase/opt/data/hbase/zookeeper</value>
  </property>
  <property>
    <name>hbase.tmp.dir</name>
    <value>/home/hbase/opt/data/hbase/tmp</value>
  </property>
  <property>
    <name>hbase.coprocessor.user.region.classes</name>
    <value>org.apache.hadoop.hbase.coprocessor.AggregateImplementation</value>
  </property>
  <property>
    <name>hbase.superuser</name>
    <value>hbase,root,hadoop</value>
  </property>
  <property>
    <name>hbase.security.authorization</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.coprocessor.master.classes</name>
    <value>org.apache.hadoop.hbase.security.access.AccessController</value>
  </property>
  <property>
    <name>hbase.coprocessor.region.classes</name>
    <value>org.apache.hadoop.hbase.security.token.TokenProvider,org.apache.hadoop.hbase.security.access.AccessController</value>
  </property>
</configuration>

5. Define the HRegionServer nodes by editing the regionservers configuration file, as sketched below.
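A minimal sketch of conf/regionservers, assuming the three HBase data nodes from Figure 1.1 host the region servers:

hbase01
hbase02
hbase03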

2.4.1.3 Operating HBase

1. Start the HMaster (and the whole cluster) on hbasemanager01:

/home/hbase/hbase-1.2.0-cdh5.10.0/bin/start-hbase.sh

2. Start a standby HMaster on hbasemanager02:

/home/hbase/hbase-1.2.0-cdh5.10.0/bin/hbase-daemon.sh start master

3. Web UI:

hbasemanager01:60010
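A hedged sanity check from the hbase shell; the table name t1 and column family cf are hypothetical:

/home/hbase/hbase-1.2.0-cdh5.10.0/bin/hbase shell
hbase> status
hbase> create 't1', 'cf'
hbase> put 't1', 'row1', 'cf:c1', 'value1'
hbase> scan 't1'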

2.4.2 Kafka

Configure Kafka on hbase01, hbase02, and hbase03.

2.4.2.1 Deploy Kafka in distributed mode

1. Upload the package to /home/hbase/kafka_2.12-1.0.1.tgz, then extract it:

tar -zxvf kafka_2.12-1.0.1.tgz

2. Create the required directories:

mkdir -p /home/hbase/opt/data/kafka/kafka-logs

3. Edit the config/server.properties file:

broker.id=23 (choose any value, but the ID must differ on every node)
listeners=PLAINTEXT://hbase01:9092
zookeeper.connect=hbase01:2181,hbase02:2181,hbase03:2181
log.dirs=/home/hbase/opt/data/kafka/kafka-logs

4. Start Kafka:

nohup /home/hbase/kafka_2.12-1.0.1/bin/kafka-server-start.sh /home/hbase/kafka_2.12-1.0.1/config/server.properties &

2.4.2.2 Using Kafka

1) Create a topic:

bin/kafka-topics.sh --create --zookeeper hbase01:2181 --replication-factor 1 --partitions 1 --topic test
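To exercise the topic end to end, a hedged sketch using the console clients shipped with Kafka 1.0 (run from /home/hbase/kafka_2.12-1.0.1):

bin/kafka-console-producer.sh --broker-list hbase01:9092 --topic test
# type a few messages, then in another terminal:
bin/kafka-console-consumer.sh --bootstrap-server hbase01:9092 --topic test --from-beginning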
