
Hadoop High-Availability (HA) Cluster

Einic Yeo · December 24, 2021

1. Hadoop HA Overview

Why does Hadoop need an HA mechanism at all?

  HA: High Availability

  Before Hadoop 2.0, the NameNode in an HDFS cluster was a single point of failure (SPOF). In a cluster with only one NameNode, any failure of the NameNode machine (a crash, or a software or hardware upgrade) made the whole cluster unusable until the NameNode was restarted.

How is this solved?

  HDFS HA solves this by configuring two NameNodes, one Active and one Standby, giving the cluster a hot standby for the NameNode. When a failure occurs, such as a machine crash or planned maintenance, the NameNode role can be switched to the other machine quickly.

  In a typical HA HDFS cluster, two separate machines are configured as NameNodes. At any point in time, exactly one NameNode is Active and the other is Standby. The Active NameNode handles all client operations in the cluster, while the Standby NameNode acts purely as a backup, ready to take over quickly if the Active NameNode runs into trouble.

  To keep the metadata (in practice, the edit log) of the Active and Standby NameNodes synchronized in real time, a shared storage system is required; it can be NFS, QJM (Quorum Journal Manager), or ZooKeeper. The Active NameNode writes to the shared storage, and the Standby watches it: whenever new data is written, the Standby reads it and applies it to its own memory, keeping its in-memory state essentially identical to the Active NameNode's, so that in an emergency the Standby can be switched to Active quickly. Fast failover also requires the Standby to hold up-to-date block information for the cluster, so DataNodes are configured with the locations of both NameNodes and send block reports and heartbeats to both.

2. Port Reference Table

A Hadoop cluster uses many ports: some for daemon-to-daemon traffic, others for RPC and HTTP access. As the number of surrounding components grows, it becomes impossible to remember which port belongs to which application, so they are collected here for easy lookup.

The components used here are HDFS, YARN, HBase, Hive, and ZooKeeper:

| Component | Node | Default Port | Configuration | Purpose |
|---|---|---|---|---|
| HDFS | DataNode | 50010 | dfs.datanode.address | DataNode service port, used for data transfer |
| HDFS | DataNode | 50075 | dfs.datanode.http.address | HTTP service port |
| HDFS | DataNode | 50475 | dfs.datanode.https.address | HTTPS service port |
| HDFS | DataNode | 50020 | dfs.datanode.ipc.address | IPC service port |
| HDFS | NameNode | 50070 | dfs.namenode.http-address | HTTP service port |
| HDFS | NameNode | 50470 | dfs.namenode.https-address | HTTPS service port |
| HDFS | NameNode | 8020 | fs.defaultFS | RPC port for client connections, used to fetch file-system metadata |
| HDFS | JournalNode | 8485 | dfs.journalnode.rpc-address | RPC service |
| HDFS | JournalNode | 8480 | dfs.journalnode.http-address | HTTP service |
| HDFS | ZKFC | 8019 | dfs.ha.zkfc.port | ZooKeeper FailoverController, used for NameNode HA |
| YARN | ResourceManager | 8032 | yarn.resourcemanager.address | RM applications manager (ASM) port |
| YARN | ResourceManager | 8030 | yarn.resourcemanager.scheduler.address | scheduler component IPC port |
| YARN | ResourceManager | 8031 | yarn.resourcemanager.resource-tracker.address | IPC |
| YARN | ResourceManager | 8033 | yarn.resourcemanager.admin.address | IPC |
| YARN | ResourceManager | 8088 | yarn.resourcemanager.webapp.address | HTTP service port |
| YARN | NodeManager | 8040 | yarn.nodemanager.localizer.address | localizer IPC |
| YARN | NodeManager | 8042 | yarn.nodemanager.webapp.address | HTTP service port |
| YARN | NodeManager | 8041 | yarn.nodemanager.address | container manager port on the NM |
| YARN | JobHistory Server | 10020 | mapreduce.jobhistory.address | IPC |
| YARN | JobHistory Server | 19888 | mapreduce.jobhistory.webapp.address | HTTP service port |
| HBase | Master | 60000 | hbase.master.port | IPC |
| HBase | Master | 60010 | hbase.master.info.port | HTTP service port |
| HBase | RegionServer | 60020 | hbase.regionserver.port | IPC |
| HBase | RegionServer | 60030 | hbase.regionserver.info.port | HTTP service port |
| HBase | HQuorumPeer | 2181 | hbase.zookeeper.property.clientPort | HBase-managed ZK mode; not used with a standalone ZooKeeper cluster |
| HBase | HQuorumPeer | 2888 | hbase.zookeeper.peerport | HBase-managed ZK mode; not used with a standalone ZooKeeper cluster |
| HBase | HQuorumPeer | 3888 | hbase.zookeeper.leaderport | HBase-managed ZK mode; not used with a standalone ZooKeeper cluster |
| Hive | Metastore | 9083 | export PORT=<port> in /etc/default/hive-metastore | overrides the default port |
| Hive | HiveServer2 | 10000 | export HIVE_SERVER2_THRIFT_PORT=<port> in /etc/hive/conf/hive-env.sh | overrides the default port |
| ZooKeeper | Server | 2181 | clientPort=<port> in /etc/zookeeper/conf/zoo.cfg | port serving clients |
| ZooKeeper | Server | 2888 | server.x=[hostname]:nnnnn[:nnnnn] in /etc/zookeeper/conf/zoo.cfg | followers connect to the leader; only the leader listens on it |
| ZooKeeper | Server | 3888 | server.x=[hostname]:nnnnn[:nnnnn] in /etc/zookeeper/conf/zoo.cfg | leader election; only needed when electionAlg is 1, 2, or 3 (the default) |
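
When bringing a cluster up, it helps to confirm that the expected ports are actually listening. A minimal probe, shown here as a sketch (the host/port pairs are examples picked from this deployment, and nc must be installed):

while read -r host port; do
  # -z: scan only, -w2: two-second timeout
  nc -z -w2 "$host" "$port" && echo "OK   ${host}:${port}" || echo "DOWN ${host}:${port}"
done <<'EOF'
hadoop-01 50070
hadoop-01 8485
hadoop-03 8088
EOF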

3. Cluster Plan

  Description: the Hadoop HA cluster depends on ZooKeeper, so ZooKeeper runs alongside it: three voting members plus one observer (see the zoo.cfg below). Four hosts are prepared in total, hadoop-01 through hadoop-04. hadoop-01 and hadoop-02 carry the active/standby NameNodes, while hadoop-03 and hadoop-04 carry the active/standby ResourceManagers.

| HOSTNAME | IP | HDFS | YARN | COMPONENTS |
|---|---|---|---|---|
| hadoop-01 | 192.168.1.111 | DataNode, NameNode | NodeManager | JournalNode, ZKFailoverController, ZooKeeper |
| hadoop-02 | 192.168.1.112 | DataNode, NameNode | NodeManager | JournalNode, ZKFailoverController, ZooKeeper |
| hadoop-03 | 192.168.1.113 | DataNode | NodeManager, ResourceManager | JournalNode, ZooKeeper |
| hadoop-04 | 192.168.1.114 | DataNode | NodeManager, ResourceManager | ZooKeeper (observer) |

4. Cluster Deployment

4.1 Environment Initialization

#set the hostname (run the matching line on the corresponding node)
hostnamectl set-hostname hadoop-01
hostnamectl set-hostname hadoop-02
hostnamectl set-hostname hadoop-03
hostnamectl set-hostname hadoop-04

cat >/etc/hosts <<EOF
192.168.1.111 hadoop-01
192.168.1.112 hadoop-02
192.168.1.113 hadoop-03
192.168.1.114 hadoop-04
EOF

#create the hadoop user and group (no home directory yet; it is populated in 4.3)
useradd -M -U hadoop

#grant the hadoop user sudo by inserting a line after root's entry in /etc/sudoers
sed -i "`grep -n "^[^#]" /etc/sudoers|grep root|awk -F ':' '{print $1}'`a\hadoop  ALL=\(ALL\)  ALL" /etc/sudoers

#disable the firewall and SELinux
systemctl stop firewalld && systemctl disable firewalld && sed -i 's/enforcing/disabled/g' /etc/selinux/config

#point chrony at an NTP server and verify time synchronization
sed -i "s/0.centos.pool.ntp.org/ntp.aliyun.com/g" /etc/chrony.conf && systemctl restart chronyd.service && chronyc sources -v

#set up passwordless SSH for the hadoop user on each node
#(note: `su hadoop && ssh-keygen ...` would run the key commands as root, so switch shells first)
su - hadoop
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys
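
The block above only authorizes each node's own key. The sshfence fencing method and the start-*.sh scripts need the hadoop user to reach every node from every other node, so the public keys still have to be exchanged between hosts; a sketch, assuming ssh-copy-id is available and run as the hadoop user on every node:

for h in hadoop-01 hadoop-02 hadoop-03 hadoop-04; do
  # prompts for the hadoop password once per host, then appends the key
  ssh-copy-id -i ~/.ssh/id_rsa.pub "hadoop@${h}"
done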

4.2 ZooKeeper Cluster Deployment

#JDK installation and configuration
wget -c https://mirrors.infvie.org/envoyinstack.tgz && tar xzf envoyinstack.tgz && ./envoyinstack/install.sh --jdk_option 1

[root@hadoop-01 ~]# yum -y install zookeeper-3.7.0-1.noarch.rpm && systemctl start zookeeper && systemctl enable zookeeper

[root@hadoop-01 ~]# grep -Ev '^$|[#;]' /usr/local/zookeeper/conf/zoo.cfg 
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper/data
dataLogDir=/data/zookeeper/log
snapCount=2
clientPort=2181
autopurge.snapRetainCount=3
autopurge.purgeInterval=1
server.0=hadoop-01:2888:3888
server.1=hadoop-02:2888:3888
server.2=hadoop-03:2888:3888
server.3=hadoop-04:2888:3888:observer

[root@hadoop-04 ~]# grep -Ev '^$|[#;]' /usr/local/zookeeper/conf/zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper/data
dataLogDir=/data/zookeeper/log
snapCount=2
clientPort=2181
autopurge.snapRetainCount=3
autopurge.purgeInterval=1
peerType=observer
server.0=hadoop-01:2888:3888
server.1=hadoop-02:2888:3888
server.2=hadoop-03:2888:3888
server.3=hadoop-04:2888:3888:observer
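
One detail the listings above do not show: each ZooKeeper server also needs a myid file under dataDir whose content matches its server.N index, or the quorum will not form. A sketch, assuming the dataDir configured above:

#on hadoop-01 (server.0); write 1, 2, 3 on hadoop-02/03/04 respectively
mkdir -p /data/zookeeper/data && echo 0 > /data/zookeeper/data/myid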

[root@hadoop-01 ~]# echo srvr|nc hadoop-01 2181
Zookeeper version: 3.7.0-e3704b390a6697bfdf4b0bef79e3da7a4f6bac4b, built on 2021-03-17 09:46 UTC
Latency min/avg/max: 0/0.5965/27
Received: 844
Sent: 812
Connections: 2
Outstanding: 0
Zxid: 0x10000003f
Mode: follower
Node count: 38
[root@hadoop-01 ~]# echo srvr|nc hadoop-02 2181
Zookeeper version: 3.7.0-e3704b390a6697bfdf4b0bef79e3da7a4f6bac4b, built on 2021-03-17 09:46 UTC
Latency min/avg/max: 0/0.6702/46
Received: 1720
Sent: 1719
Connections: 4
Outstanding: 0
Zxid: 0x10000003f
Mode: leader
Node count: 38
Proposal sizes last/min/max: 36/36/1503
[root@hadoop-01 ~]# echo srvr|nc hadoop-03 2181
Zookeeper version: 3.7.0-e3704b390a6697bfdf4b0bef79e3da7a4f6bac4b, built on 2021-03-17 09:46 UTC
Latency min/avg/max: 0/0.6135/13
Received: 447
Sent: 447
Connections: 1
Outstanding: 0
Zxid: 0x10000003f
Mode: follower
Node count: 38
[root@hadoop-01 ~]# echo srvr|nc hadoop-04 2181
Zookeeper version: 3.7.0-e3704b390a6697bfdf4b0bef79e3da7a4f6bac4b, built on 2021-03-17 09:46 UTC
Latency min/avg/max: 0/0.5039/22
Received: 1286
Sent: 1286
Connections: 3
Outstanding: 0
Zxid: 0x10000003f
Mode: observer
Node count: 38
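
The four srvr transcripts above can be collapsed into one loop that prints only each server's mode (assuming nc, as already used above):

for h in hadoop-01 hadoop-02 hadoop-03 hadoop-04; do
  printf '%s: ' "$h"
  echo srvr | nc "$h" 2181 | awk '/^Mode/{print $2}'
done
#expected: one leader, two followers, and observer on hadoop-04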

4.3 Hadoop Cluster Deployment

tar -zxvf hadoop-3.3.1.tar.gz  -C /usr/local && mv /usr/local/hadoop-3.3.1 /usr/local/hadoop

export JAVA_HOME=/usr/java/jdk-11.0.12
export HADOOP_HOME=/usr/local/hadoop
export CLASSPATH=$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$JAVA_HOME/bin:$PATH
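
These exports do not survive a new login; a common way to persist them is a profile snippet (the /etc/profile.d path is a convention, not from the original; note also that JDK 11 no longer ships tools.jar/dt.jar, so the CLASSPATH line above is vestigial and omitted here):

cat >/etc/profile.d/hadoop.sh <<'EOF'
export JAVA_HOME=/usr/java/jdk-11.0.12
export HADOOP_HOME=/usr/local/hadoop
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$JAVA_HOME/bin:$PATH
EOF
source /etc/profile.d/hadoop.sh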

mkdir /{data,home}/hadoop && cp /root/.bash* /home/hadoop/ && chown -R hadoop:hadoop /usr/local/hadoop /{data,home}/hadoop/
[root@hadoop-01 ~]# grep -Ev "^$|[#;]" /usr/local/hadoop/etc/hadoop/hadoop-env.sh 
 export JAVA_HOME=/usr/java/jdk-11.0.12
 export HADOOP_HOME=/usr/local/hadoop
 export HADOOP_OS_TYPE=${HADOOP_OS_TYPE:-$(uname -s)}
 export HADOOP_LOG_DIR=/data/hadoop/logs
[root@hadoop-01 ~]# cat /usr/local/hadoop/etc/hadoop/core-site.xml 
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <!-- set the HDFS nameservice to infviehadoop -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://infviehadoop/</value>
    </property>

    <!-- Hadoop temporary directory -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/data/hadoop/tmp/</value>
    </property>

    <!-- ZooKeeper quorum addresses -->
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>hadoop-01:2181,hadoop-02:2181,hadoop-03:2181,hadoop-04:2181</value>
    </property>

    <!-- timeout for Hadoop's ZooKeeper sessions -->
    <property>
        <name>ha.zookeeper.session-timeout.ms</name>
        <value>1000</value>
        <description>ms</description>
    </property>

    <!-- rack awareness on the NameNode (see section 6) -->
    <property>
      <name>topology.script.file.name</name>
      <value>/usr/local/hadoop/etc/hadoop/topology.sh</value>
    </property>
</configuration>
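
A quick way to confirm the daemons will actually see these values is hdfs getconf, which prints keys from the effective configuration:

hdfs getconf -confKey fs.defaultFS            #expect hdfs://infviehadoop/
hdfs getconf -confKey ha.zookeeper.quorum
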
[root@hadoop-01 ~]# cat /usr/local/hadoop/etc/hadoop/hdfs-site.xml 
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->
<configuration>

    <!-- replication factor -->
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>

    <!-- NameNode and DataNode working directories (data storage) -->
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/data/hadoop/data/hadoopdata/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/data/hadoop/data/hadoopdata/dfs/data</value>
    </property>

    <!-- enable WebHDFS -->
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>

    <!-- The HDFS nameservice is infviehadoop; it must match core-site.xml.
         dfs.ha.namenodes.[nameservice id] gives each NameNode in the nameservice a
         unique identifier: a comma-separated list of NameNode IDs, which is how
         DataNodes recognize all the NameNodes. Here the nameservice ID "infviehadoop"
         uses "nn1" and "nn2" as the NameNode identifiers.
    -->
    <property>
        <name>dfs.nameservices</name>
        <value>infviehadoop</value>
    </property>

    <!-- infviehadoop has two NameNodes: nn1 and nn2 -->
    <property>
        <name>dfs.ha.namenodes.infviehadoop</name>
        <value>nn1,nn2</value>
    </property>

    <!-- RPC address of nn1 -->
    <property>
        <name>dfs.namenode.rpc-address.infviehadoop.nn1</name>
        <value>hadoop-01:9000</value>
    </property>

    <!-- HTTP address of nn1 -->
    <property>
        <name>dfs.namenode.http-address.infviehadoop.nn1</name>
        <value>hadoop-01:50070</value>
    </property>

    <!-- RPC address of nn2 -->
    <property>
        <name>dfs.namenode.rpc-address.infviehadoop.nn2</name>
        <value>hadoop-02:9000</value>
    </property>

    <!-- HTTP address of nn2 -->
    <property>
        <name>dfs.namenode.http-address.infviehadoop.nn2</name>
        <value>hadoop-02:50070</value>
    </property>

    <!-- Shared storage location for the NameNode edit log, i.e. the JournalNode list.
         URL format: qjournal://host1:port1;host2:port2;host3:port3/journalId
         The journalId is conventionally the nameservice name; the default port is 8485. -->
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://hadoop-01:8485;hadoop-02:8485;hadoop-03:8485/infviehadoop</value>
    </property>

    <!-- local directory where JournalNodes store their data -->
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/data/hadoop/data/journaldata</value>
    </property>

    <!-- enable automatic NameNode failover -->
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>

    <!-- failover proxy provider used by clients -->
    <property>
        <name>dfs.client.failover.proxy.provider.infviehadoop</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>

    <!-- fencing methods; list multiple methods one per line -->
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>
            sshfence
            shell(/bin/true)
        </value>
    </property>

    <!-- sshfence requires passwordless SSH -->
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/home/hadoop/.ssh/id_rsa</value>
    </property>

    <!-- sshfence connect timeout -->
    <property>
        <name>dfs.ha.fencing.ssh.connect-timeout</name>
        <value>30000</value>
    </property>

    <property>
        <name>ha.failover-controller.cli-check.rpc-timeout.ms</name>
        <value>60000</value>
    </property>
</configuration>
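
Once hdfs-site.xml is in place, the NameNode list resolved from the nameservice can be checked with:

hdfs getconf -namenodes    #expect: hadoop-01 hadoop-02
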
[root@hadoop-01 ~]# cat /usr/local/hadoop/etc/hadoop/mapred-site.xml 
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->
<configuration>
    <!-- run MapReduce on YARN -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>

    <!-- MapReduce JobHistory server address -->
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>hadoop-01:10020</value>
    </property>

    <!-- JobHistory web UI address -->
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>hadoop-01:19888</value>
    </property>
</configuration>
[root@hadoop-01 ~]# cat /usr/local/hadoop/etc/hadoop/yarn-site.xml 
<?xml version="1.0"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<configuration>
    <!-- enable ResourceManager HA -->
    <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
    </property>

    <!-- RM cluster id -->
    <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>yrc</value>
    </property>

    <!-- RM ids -->
    <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2</value>
    </property>

    <!-- hostname of each RM -->
    <property>
        <name>yarn.resourcemanager.hostname.rm1</name>
        <value>hadoop-03</value>
    </property>

    <property>
        <name>yarn.resourcemanager.hostname.rm2</name>
        <value>hadoop-04</value>
    </property>

    <!-- ZooKeeper ensemble address -->
    <property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>hadoop-01:2181,hadoop-02:2181,hadoop-03:2181,hadoop-04:2181</value>
    </property>

    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>

    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>

    <property>
        <name>yarn.log-aggregation.retain-seconds</name>
        <value>86400</value>
    </property>

    <!-- enable automatic recovery -->
    <property>
        <name>yarn.resourcemanager.recovery.enabled</name>
        <value>true</value>
    </property>

    <!-- store ResourceManager state in the ZooKeeper cluster -->
    <property>
        <name>yarn.resourcemanager.store.class</name>
        <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
    </property>
</configuration>
[root@hadoop-01 ~]# cat /usr/local/hadoop/etc/hadoop/workers 
hadoop-01
hadoop-02
hadoop-03
hadoop-04

After completing the configuration above, clone this machine image to deploy the remaining nodes; the point is to keep directories and configuration identical across nodes. (A non-cloning alternative is sketched below.)
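
If the other nodes are already running, rsync over the passwordless SSH set up in 4.1 achieves the same consistency without cloning; a sketch:

for h in hadoop-02 hadoop-03 hadoop-04; do
  rsync -a /usr/local/hadoop/etc/hadoop/ "${h}:/usr/local/hadoop/etc/hadoop/"
done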

[root@hadoop-01 ~]# hadoop version
Hadoop 3.3.1
Source code repository https://github.com/apache/hadoop.git -r a3b9c37a397ad4188041dd80621bdeefc46885f2
Compiled by ubuntu on 2021-06-15T05:13Z
Compiled with protoc 3.7.1
From source with checksum 88a4ddb2299aca054416d6b7f81ca55
This command was run using /usr/local/hadoop/share/hadoop/common/hadoop-common-3.3.1.jar

4.4 Hadoop HA Cluster Initialization

#start the JournalNode process on all JournalNode hosts
[hadoop@hadoop-01 ~]$ hadoop-daemon.sh start journalnode
WARNING: Use of this script to start HDFS daemons is deprecated.
WARNING: Attempting to execute replacement "hdfs --daemon start" instead.

[hadoop@hadoop-02 ~]$ hadoop-daemon.sh start journalnode
WARNING: Use of this script to start HDFS daemons is deprecated.
WARNING: Attempting to execute replacement "hdfs --daemon start" instead.

[hadoop@hadoop-03 ~]$ hadoop-daemon.sh start journalnode
WARNING: Use of this script to start HDFS daemons is deprecated.
WARNING: Attempting to execute replacement "hdfs --daemon start" instead.
  
#format one of the NameNodes (hadoop-01)
[hadoop@hadoop-01 ~]$ hadoop namenode -format
...................................
2021-12-23 22:23:50,711 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1229248917-192.168.1.111-1642947830711
2021-12-23 22:23:50,760 INFO common.Storage: Storage directory /data/hadoop/data/hadoopdata/dfs/name has been successfully formatted.
2021-12-23 22:23:51,229 INFO namenode.FSImageFormatProtobuf: Saving image file /data/hadoop/data/hadoopdata/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
2021-12-23 22:23:51,508 INFO namenode.FSImageFormatProtobuf: Image file /data/hadoop/data/hadoopdata/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 401 bytes saved in 0 seconds .
2021-12-23 22:23:51,533 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
2021-12-23 22:23:51,666 INFO namenode.FSNamesystem: Stopping services started for active state
2021-12-23 22:23:51,667 INFO namenode.FSNamesystem: Stopping services started for standby state
2021-12-23 22:23:52,097 INFO namenode.FSImage: FSImageSaver clean checkpoint: txid=0 when meet shutdown.
2021-12-23 22:23:52,099 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop-01/192.168.1.111
************************************************************/

#copy the metadata just generated on hadoop-01 to the other NameNode (hadoop-02);
#this step must come after the format above, since the fsimage files do not exist before it
#(running `hdfs namenode -bootstrapStandby` on hadoop-02 is the built-in equivalent)
[hadoop@hadoop-01 ~]$ scp -r /data/hadoop/data/hadoopdata hadoop-02:/data/hadoop/data/
The authenticity of host 'hadoop-02 (192.168.1.112)' can't be established.
ECDSA key fingerprint is SHA256:Sm/EFwos2lx71sX0TvwUlTqXt+7J0vLs8X6XIcTXMSA.
ECDSA key fingerprint is MD5:71:65:c5:7b:6b:a3:08:ac:88:9a:2e:b5:0f:c7:8c:f3.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'hadoop-02,192.168.1.112' (ECDSA) to the list of known hosts.
VERSION                                                                                                                                 100%  217    90.1KB/s   00:00    
seen_txid                                                                                                                               100%    2     0.4KB/s   00:00    
fsimage_0000000000000000000.md5                                                                                                         100%   62    12.2KB/s   00:00    
fsimage_0000000000000000000                                                                                                             100%  401   150.5KB/s   00:00
#format the ZKFC znode in ZooKeeper
[hadoop@hadoop-01 ~]$ hdfs zkfc -formatZK
.....................................
2021-12-23 22:28:18,702 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=hadoop-01:2181,hadoop-02:2181,hadoop-03:2181,hadoop-04:2181 sessionTimeout=1000 watcher=org.apache.hadoop.ha.ActiveStandbyElector$WatcherWithClientRef@db57326
2021-12-23 22:28:18,711 INFO common.X509Util: Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation
2021-12-23 22:28:18,721 INFO zookeeper.ClientCnxnSocket: jute.maxbuffer value is 4194304 Bytes
2021-12-23 22:28:18,737 INFO zookeeper.ClientCnxn: zookeeper.request.timeout value is 0. feature enabled=
2021-12-23 22:28:18,789 INFO zookeeper.ClientCnxn: Opening socket connection to server hadoop-03/192.168.1.113:2181. Will not attempt to authenticate using SASL (unknown error)
2021-12-23 22:28:18,805 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /192.168.1.111:61496, server: hadoop-03/192.168.1.113:2181
2021-12-23 22:28:18,862 INFO zookeeper.ClientCnxn: Session establishment complete on server hadoop-03/192.168.1.113:2181, sessionid = 0x20000185eb20000, negotiated timeout = 4000
2021-12-23 22:28:18,955 INFO ha.ActiveStandbyElector: Successfully created /hadoop-ha/infviehadoop in ZK.
2021-12-23 22:28:19,074 INFO zookeeper.ZooKeeper: Session: 0x20000185eb20000 closed
2021-12-23 22:28:19,075 WARN ha.ActiveStandbyElector: Ignoring stale result from old client with sessionId 0x20000185eb20000
2021-12-23 22:28:19,077 WARN ha.ActiveStandbyElector: Ignoring stale result from old client with sessionId 0x20000185eb20000
2021-12-23 22:28:19,078 INFO zookeeper.ClientCnxn: EventThread shut down for session: 0x20000185eb20000
2021-12-23 22:28:19,087 INFO tools.DFSZKFailoverController: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down DFSZKFailoverController at hadoop-01/192.168.1.111
************************************************************/

4.5 Starting the Cluster

#start HDFS
[hadoop@hadoop-01 ~]$ start-dfs.sh
Starting namenodes on [hadoop-01 hadoop-02]
hadoop-01: Warning: Permanently added 'hadoop-01,192.168.1.111' (ECDSA) to the list of known hosts.
Starting datanodes
hadoop-03: Warning: Permanently added 'hadoop-03,192.168.1.113' (ECDSA) to the list of known hosts.
hadoop-04: Warning: Permanently added 'hadoop-04,192.168.1.114' (ECDSA) to the list of known hosts.
Starting journal nodes [hadoop-03 hadoop-02 hadoop-01]
hadoop-03: journalnode is running as process 9955.  Stop it first and ensure /tmp/hadoop-hadoop-journalnode.pid file is empty before retry.
hadoop-01: journalnode is running as process 10464.  Stop it first and ensure /tmp/hadoop-hadoop-journalnode.pid file is empty before retry.
hadoop-02: journalnode is running as process 10035.  Stop it first and ensure /tmp/hadoop-hadoop-journalnode.pid file is empty before retry.
Starting ZK Failover Controllers on NN hosts [hadoop-01 hadoop-02]

#start YARN; run start-yarn.sh on either of the two ResourceManager hosts (it starts both RMs)
[hadoop@hadoop-04 ~]$ start-yarn.sh
Starting resourcemanagers on [ hadoop-03 hadoop-04]
hadoop-04: Warning: Permanently added 'hadoop-04,192.168.1.114' (ECDSA) to the list of known hosts.
Starting nodemanagers

#start the MapReduce JobHistory server
[hadoop@hadoop-01 ~]$ mr-jobhistory-daemon.sh start historyserver
WARNING: Use of this script to start the MR JobHistory daemon is deprecated.
WARNING: Attempting to execute replacement "mapred --daemon start" instead.

#jps output on each node after all components are up
[root@hadoop-01 ~]# jps
21600 QuorumPeerMain
26610 Jps
23619 JobHistoryServer
25238 DataNode
23432 NodeManager
25496 JournalNode
25693 DFSZKFailoverController
25103 NameNode

[root@hadoop-02 ~]# jps
16944 NodeManager
18180 Jps
17557 DataNode
17462 NameNode
15833 QuorumPeerMain
17674 JournalNode
17786 DFSZKFailoverController

[root@hadoop-03 ~]# jps
13488 QuorumPeerMain
14976 JournalNode
15209 Jps
14890 DataNode
14364 NodeManager
14271 ResourceManager

[root@hadoop-04 ~]# jps
13633 ResourceManager
13026 QuorumPeerMain
13770 NodeManager
14347 Jps
14110 DataNode

4.6 Cluster Status

[hadoop@hadoop-01 ~]$ hdfs haadmin -getServiceState nn1
active
[hadoop@hadoop-01 ~]$ hdfs haadmin -getServiceState nn2
standby

[hadoop@hadoop-01 ~]$ yarn rmadmin -getServiceState rm1
active
[hadoop@hadoop-01 ~]$ yarn rmadmin -getServiceState rm2
standby
#component web UIs:
http://hadoop-01:50070/dfshealth.html
http://hadoop-02:50070/dfshealth.html
http://hadoop-03:8088/cluster/cluster
http://hadoop-04:8088/cluster/cluster
http://hadoop-01:19888/jobhistory
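
The same endpoints can be probed from a shell (a sketch; curl assumed; the standby ResourceManager's web UI may answer with a redirect to the active one, so a 3xx code there is normal):

for url in http://hadoop-01:50070/dfshealth.html \
           http://hadoop-02:50070/dfshealth.html \
           http://hadoop-03:8088/cluster/cluster \
           http://hadoop-04:8088/cluster/cluster \
           http://hadoop-01:19888/jobhistory; do
  printf '%s -> ' "$url"
  curl -s -o /dev/null -w '%{http_code}\n' "$url"
done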

5. Cluster HA Testing

Kill the active NameNode and watch how the cluster reacts

The NameNode on hadoop-02 is currently active. Kill its process and see whether the standby NameNode on hadoop-01 switches to active automatically.

[hadoop@hadoop-02 ~]$ hdfs haadmin -getServiceState nn1
standby
[hadoop@hadoop-02 ~]$ hdfs haadmin -getServiceState nn2
active

[hadoop@hadoop-02 ~]$ jps
16944 NodeManager
18180 Jps
17557 DataNode
17462 NameNode
15833 QuorumPeerMain
17674 JournalNode
17786 DFSZKFailoverController
[hadoop@hadoop-02 ~]$ kill -9 17462

[hadoop@hadoop-02 ~]$ hdfs haadmin -getServiceState nn1
active
[hadoop@hadoop-02 ~]$ hdfs haadmin -getServiceState nn2
2021-12-24 14:12:57,186 INFO ipc.Client: Retrying connect to server: hadoop-02/192.168.1.112:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=1, sleepTime=1000 MILLISECONDS)
Operation failed: Call From hadoop-02/192.168.1.112 to hadoop-02:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused

[hadoop@hadoop-02 ~]$ hadoop-daemon.sh start namenode
WARNING: Use of this script to start HDFS daemons is deprecated.
WARNING: Attempting to execute replacement "hdfs --daemon start" instead.

Kill the active NameNode in the middle of a file upload and see what happens

Pick a reasonably large file and start uploading it; about 5 seconds in, kill the active NameNode and check whether the upload still succeeds.

  Upload from hadoop-02:

[hadoop@hadoop-02 ~]$ md5sum hadoop-3.3.1.tar.gz 
d4e9b3f1a95136c9ea1eb9174602ad3b  hadoop-3.3.1.tar.gz

[hadoop@hadoop-02 ~]$ mkdir download

  On hadoop-01, stand by to kill the NameNode:

[hadoop@hadoop-01 ~]$ jps
14785 DataNode
11876 JobHistoryServer
15044 JournalNode
11654 NodeManager
14649 NameNode
15499 Jps
15229 DFSZKFailoverController
[hadoop@hadoop-01 ~]$ kill -9 14649

  Output on hadoop-02: the moment the NameNode process on hadoop-01 is killed, hadoop-02 logs errors and then fails over:

[hadoop@hadoop-02 ~]$ hadoop fs -put hadoop-3.3.1.tar.gz /
2021-12-24 14:28:09,572 INFO retry.RetryInvocationHandler: java.io.EOFException: End of File Exception between local host is: "hadoop-02/192.168.1.112"; destination host is: "hadoop-01":9000; : java.io.EOFException; For more details see:  http://wiki.apache.org/hadoop/EOFException, while invoking ClientNamenodeProtocolTranslatorPB.addBlock over hadoop-01/192.168.1.111:9000. Trying to failover immediately.
2021-12-24 14:28:09,606 INFO retry.RetryInvocationHandler: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby. Visit https://s.apache.org/sbnn-error
	at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:108)
	at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:2092)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1543)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2924)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:915)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:593)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:600)
	at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:568)
	at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:552)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1093)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1035)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:963)
	at java.base/java.security.AccessController.doPrivileged(Native Method)
	at java.base/javax.security.auth.Subject.doAs(Subject.java:423)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2966)
, while invoking ClientNamenodeProtocolTranslatorPB.addBlock over hadoop-02/192.168.1.112:9000 after 1 failover attempts. Trying to failover after sleeping for 1008ms.
2021-12-24 14:28:10,616 INFO retry.RetryInvocationHandler: java.net.ConnectException: Call From hadoop-02/192.168.1.112 to hadoop-01:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused, while invoking ClientNamenodeProtocolTranslatorPB.addBlock over hadoop-01/192.168.1.111:9000 after 2 failover attempts. Trying to failover after sleeping for 2598ms.

List the uploaded file, then download it back and verify its integrity:

[hadoop@hadoop-02 ~]$ hadoop fs -ls /
Found 2 items
-rw-r--r--   2 hadoop supergroup  605187279 2021-12-24 14:28 /hadoop-3.3.1.tar.gz
drwxrwx---   - hadoop supergroup          0 2021-12-24 13:34 /tmp

[hadoop@hadoop-02 ~]$ hadoop fs -get /hadoop-3.3.1.tar.gz download/
[hadoop@hadoop-02 ~]$ cd download/

[hadoop@hadoop-02 download]$ md5sum hadoop-3.3.1.tar.gz 
d4e9b3f1a95136c9ea1eb9174602ad3b  hadoop-3.3.1.tar.gz

ResourceManager HA can be tested along the same lines; a sketch follows.
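
A minimal sketch of that test, mirroring the NameNode procedure (rm1 maps to hadoop-03 in this plan; the jps/kill step runs on the active RM host):

yarn rmadmin -getServiceState rm1                      #active
kill -9 "$(jps | awk '/ResourceManager/{print $1}')"   #run on hadoop-03
yarn rmadmin -getServiceState rm2                      #should report active shortly after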

6. Rack Awareness

Hadoop was designed to process huge volumes of large files, i.e. big-data storage and computation: HDFS solves the storage problem, and MapReduce solves the computation problem.

  Hadoop was designed as a distributed storage and computation solution running on clusters of cheap machines, so node failures are the norm rather than the exception, and data safety is a central design concern. The core idea in HDFS is to store every piece of user data redundantly, guaranteeing safety through replication.

  With data safety in mind, HDFS stores each file in three copies by default. Obviously the three replicas must not all live on the same server node, so what placement strategy keeps the data both safe and efficient to access?

  HDFS's built-in replica placement policy, using the default replication factor of 3 as the example:

    1. The first replica is stored on the local node.

    2. The second replica is stored on another server node in the same rack.

    3. The third replica is stored on a server node in a different rack.

  Benefits:

    1. If the local copy is damaged or lost, the client can fetch the data from a neighboring node in the same rack, which is certainly faster than fetching across racks.

    2. If the entire local rack fails, the replicas were deliberately not all placed in one rack, so the data stays safe and clients can still retrieve it.

  To reduce overall network bandwidth consumption and read latency, HDFS makes clients read the nearest replica. Given the placement policy explained above, that means:

    1. If the data is on the local node, read it directly.

    2. If a server node in the same rack holds the block, read from that node.

    3. If the HDFS cluster spans multiple data centers, the client prefers replicas in its own data center.

  But how does HDFS determine whether two nodes are on the same rack, and how near or far different servers are from the client? The answer is rack awareness.

  By default, an HDFS cluster has no rack awareness: all server nodes sit in a single default rack, which means that when a client uploads data, HDFS picks the servers for the three replicas of each block at random.

  Suppose datanode-01 and datanode-03 are in rack rack1 while datanode-02 is in rack2. When a client uploads block block_001, HDFS puts the first replica on datanode-01 and the second on datanode-02, crossing racks once (rack1 to rack2); it then puts the third replica on datanode-03, crossing racks again (rack2 to rack1). Clearly, once the cluster handles a large volume of data, running without rack awareness wastes a serious amount of network bandwidth.

6.1 Configuring Rack Awareness

  Add the following property to core-site.xml on the NameNode hosts:

<!-- rack awareness on the NameNode -->
<property>
  <name>topology.script.file.name</name>
  <value>/usr/local/hadoop/etc/hadoop/topology.sh</value>
</property>

The value of this property is usually an executable file, here a shell script, topology.sh. The script takes one argument and prints one value:

  Argument: a DataNode's IP address, e.g. 192.168.1.111

  Output: the rack of that DataNode, e.g. /switch1/rack1

  When the NameNode starts, it checks whether this property is set. If it is non-empty, rack awareness is enabled: the NameNode locates the script and, each time it receives a DataNode heartbeat, passes that DataNode's IP address to the script and keeps the output as the DataNode's rack ID in an in-memory map. Writing the script requires understanding the real network topology and rack layout, so that machine IP addresses and hostnames map to the correct racks. A simple implementation:

cat /usr/local/hadoop/etc/hadoop/topology.sh
#!/bin/bash
# Resolve each argument (a DataNode IP or hostname) to its rack by
# looking it up in topology.data; unknown nodes fall back to /default-rack.
HADOOP_CONF=/usr/local/hadoop/etc/hadoop
while [ $# -gt 0 ]; do
  nodeArg=$1
  exec < "${HADOOP_CONF}/topology.data"
  result=""
  while read -r line; do
    ar=( $line )
    if [ "${ar[0]}" = "$nodeArg" ] || [ "${ar[1]}" = "$nodeArg" ]; then
      result="${ar[2]}"
    fi
  done
  shift
  if [ -z "$result" ]; then
    echo -n "/default-rack"
  else
    echo -n "$result"
  fi
done
cat >/usr/local/hadoop/etc/hadoop/topology.data <<EOF
192.168.1.111 hadoop-01 /switch1/rack1
192.168.1.112 hadoop-02 /switch1/rack1
192.168.1.113 hadoop-03 /switch2/rack2
192.168.1.114 hadoop-04 /switch2/rack2
EOF

  Put these two files in your Hadoop configuration directory on each node; here "switch" denotes the switch and "rack" the rack. Note that on the NameNode, entries in this file must use IP addresses (hostnames do not work), while on the ResourceManager they must use hostnames (IPs do not work), so it is best to list both the IP and the hostname, as above.

   Note: topology.sh must be executable (topology.data only needs to be readable):

cd /usr/local/hadoop/etc/hadoop/ && chmod +x topology.sh

6.2 Verifying Rack Awareness

[hadoop@hadoop-01 ~]$ stop-dfs.sh && start-dfs.sh
Stopping namenodes on [hadoop-01 hadoop-02]
Stopping datanodes
Stopping journal nodes [hadoop-03 hadoop-02 hadoop-01]
Stopping ZK Failover Controllers on NN hosts [hadoop-01 hadoop-02]
Starting namenodes on [hadoop-01 hadoop-02]
Starting datanodes
Starting journal nodes [hadoop-03 hadoop-02 hadoop-01]
Starting ZK Failover Controllers on NN hosts [hadoop-01 hadoop-02]

[hadoop@hadoop-01 ~]$ hadoop dfsadmin -printTopology
WARNING: Use of this script to execute dfsadmin is deprecated.
WARNING: Attempting to execute replacement "hdfs dfsadmin" instead.

Rack: /switch1/rack1
   192.168.1.111:9866 (hadoop-01)
   192.168.1.112:9866 (hadoop-02)

Rack: /switch2/rack2
   192.168.1.113:9866 (hadoop-03)
   192.168.1.114:9866 (hadoop-04)

6.3 Adding Nodes

Adding a DataNode

Adding a DataNode does not require restarting the NameNode. The simple recipe: add the new DataNode's entry to topology.data, then bring the node up, and that's it (sketched below).
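
A minimal sketch of those steps; the new host 192.168.1.115 / hadoop-05 and its rack assignment are hypothetical:

#on the NameNode hosts: register the new node's rack mapping
echo '192.168.1.115 hadoop-05 /switch2/rack2' >> /usr/local/hadoop/etc/hadoop/topology.data

#on the new node, with Hadoop installed and configured as in 4.3
hdfs --daemon start datanode

#verify it joined and landed in the expected rack
hdfs dfsadmin -report
hdfs dfsadmin -printTopology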


Computing Inter-Node Distance

  With rack awareness, the NameNode can construct a network topology tree of the DataNodes.

In that tree, D1 and R1 are switches, and the leaves Hx are DataNodes. H1's rackid is /D1/R1/H1; H1's parent is R1, and R1's parent is D1. These rackids are supplied through the topology.script.file.name script. With a rackid for every node, the distance between any two DataNodes can be computed, yielding the optimal placement policy and optimizing both network bandwidth balance and data distribution across the cluster.

distance(/D1/R1/H1,/D1/R1/H1)=0  same DataNode
distance(/D1/R1/H1,/D1/R1/H2)=2  different DataNodes in the same rack
distance(/D1/R1/H1,/D1/R2/H4)=4  different DataNodes in the same IDC
distance(/D1/R1/H1,/D2/R3/H7)=6  DataNodes in different IDCs

  When writing a file, the DataNode pipeline is chosen according to this policy; when reading, the DataNode list is returned ordered from nearest to farthest from the client.
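
The distance metric itself is just the hop count from each node up to their closest common ancestor in the rackid tree; a toy bash sketch reproducing the numbers above:

distance() {
  local -a a b
  IFS=/ read -ra a <<< "$1"
  IFS=/ read -ra b <<< "$2"
  local i=0
  # walk down while the path components still match
  while [ "$i" -lt "${#a[@]}" ] && [ "$i" -lt "${#b[@]}" ] && [ "${a[$i]}" = "${b[$i]}" ]; do
    i=$((i + 1))
  done
  # remaining hops from each node up to the common ancestor
  echo $(( ${#a[@]} - i + ${#b[@]} - i ))
}
distance /D1/R1/H1 /D1/R1/H2   # 2
distance /D1/R1/H1 /D2/R3/H7   # 6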
