Hadoop Single-Node Cluster Setup

Posted by snowofsummer on 2020/08/20 10:01:13

1. Software preparation


[root@db01 ~]# ll hadoop-2.8.3.tar.gz

-rw-r--r--. 1 root root 244469481 May 20 20:47 hadoop-2.8.3.tar.gz

[root@db01 ~]# ll jdk-8u161-linux-x64.tar.gz

-rw-r--r--. 1 root root 189756259 May 20 21:32 jdk-8u161-linux-x64.tar.gz


2. Operating system version


[root@db01 ~]# cat /etc/redhat-release

CentOS Linux release 7.7.1908 (Core)


3. Unpack the JDK

[root@db01 ~]# tar xf jdk-8u161-linux-x64.tar.gz

[root@db01 ~]# cd jdk1.8.0_161/

[root@db01 jdk1.8.0_161]# pwd

/root/jdk1.8.0_161



4. Unpack Hadoop


[root@db01 ~]# tar xf hadoop-2.8.3.tar.gz

[root@db01 ~]# cd hadoop-2.8.3/

[root@db01 hadoop-2.8.3]# pwd

/root/hadoop-2.8.3



5. Configure the JDK environment variable

Edit etc/hadoop/hadoop-env.sh and add the JDK location:

export JAVA_HOME=/root/jdk1.8.0_161
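A quick way to confirm the edit took effect is to grep for the export line. The sketch below appends to a temporary mock copy of hadoop-env.sh so it is safe to run anywhere; in a real install you would edit etc/hadoop/hadoop-env.sh directly, and the JDK path is the one from this walkthrough (adjust it to your own).

```shell
# Sketch: append JAVA_HOME to a mock copy of hadoop-env.sh and verify.
tmp=$(mktemp -d)
env_sh="$tmp/hadoop-env.sh"
echo '# hadoop-env.sh (mock copy for illustration)' > "$env_sh"

# Add the JDK location (path from this walkthrough; adjust to yours).
echo 'export JAVA_HOME=/root/jdk1.8.0_161' >> "$env_sh"

# Confirm exactly one JAVA_HOME export line is present.
grep -c '^export JAVA_HOME=' "$env_sh"   # prints: 1
```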



6. Configure core-site.xml and hdfs-site.xml


etc/hadoop/core-site.xml

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>

etc/hadoop/hdfs-site.xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>
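If you prefer to script the setup, both files can be written with here-documents and sanity-checked with grep before starting any daemons. This is only a sketch against a temporary directory; in a real install the files live under etc/hadoop/ inside the Hadoop directory, and the values (hdfs://localhost:9000, replication 1) are the single-node settings shown above.

```shell
# Sketch: generate the two config files and sanity-check their contents.
conf=$(mktemp -d)

cat > "$conf/core-site.xml" <<'EOF'
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>
EOF

cat > "$conf/hdfs-site.xml" <<'EOF'
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>
EOF

# Quick checks: the defaultFS URI and a replication factor of 1.
grep -o 'hdfs://localhost:9000' "$conf/core-site.xml"
grep -A1 'dfs.replication' "$conf/hdfs-site.xml" | grep -o '<value>1</value>'
```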

7. Set up passwordless SSH


[root@db01 ~]# ssh-keygen

Generating public/private rsa key pair.

Enter file in which to save the key (/root/.ssh/id_rsa):

Created directory '/root/.ssh'.

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /root/.ssh/id_rsa.

Your public key has been saved in /root/.ssh/id_rsa.pub.

The key fingerprint is:

SHA256:lnMAq7kZxwiol8YJ9fhbO0MHZHAtITn7gpgcAc59CEo root@db01

The key's randomart image is:

+---[RSA 2048]----+

|oE. oo*o         |

|=+oo+=.o.        |

|+o+o.+o..        |

|.+ =o= . o       |

|ooB.*.= S .      |

|o+. .O.+ o       |

|    +.+          |

|       o         |

|                 |

+----[SHA256]-----+


[root@db01 ~]# ssh-copy-id db01

/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"

The authenticity of host 'db01 (192.168.0.230)' can't be established.

ECDSA key fingerprint is SHA256:6O9+Y8woUDEgCSMpP6X8lFn7GSlnSvZiNul/FTS4mDI.

ECDSA key fingerprint is MD5:3d:65:36:c5:b1:d1:a7:fa:d9:d5:4d:fd:ac:66:6c:f8.

Are you sure you want to continue connecting (yes/no)? yes

/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed

/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys

root@db01's password:


Number of key(s) added: 1


Now try logging into the machine, with:   "ssh 'db01'"

and check to make sure that only the key(s) you wanted were added.


[root@db01 ~]# ssh db01 date

Wed Aug 19 21:40:38 EDT 2020
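ssh-copy-id normally sets this up correctly, but if passwordless login keeps prompting for a password, the usual culprit is overly permissive modes on the target user's ~/.ssh directory or authorized_keys file, which sshd rejects silently. The sketch below shows the expected modes against a throwaway directory (a stand-in for the remote home, not a live check of your server).

```shell
# Sketch: the permissions sshd expects before honoring authorized_keys.
d=$(mktemp -d)               # stand-in for the remote user's home directory
mkdir "$d/.ssh"
touch "$d/.ssh/authorized_keys"
chmod 700 "$d/.ssh"
chmod 600 "$d/.ssh/authorized_keys"

stat -c '%a' "$d/.ssh"                   # prints: 700
stat -c '%a' "$d/.ssh/authorized_keys"   # prints: 600
```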


8. Format HDFS (NameNode)

bin/hdfs namenode -format

9. Start HDFS

sbin/start-dfs.sh starts the NameNode, DataNode, and SecondaryNameNode:

[root@db01 hadoop-2.8.3]# sbin/start-dfs.sh

Starting namenodes on [localhost]

The authenticity of host 'localhost (::1)' can't be established.

ECDSA key fingerprint is SHA256:6O9+Y8woUDEgCSMpP6X8lFn7GSlnSvZiNul/FTS4mDI.

ECDSA key fingerprint is MD5:3d:65:36:c5:b1:d1:a7:fa:d9:d5:4d:fd:ac:66:6c:f8.

Are you sure you want to continue connecting (yes/no)? yes

localhost: Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.

localhost: starting namenode, logging to /root/hadoop-2.8.3/logs/hadoop-root-namenode-db01.out

localhost: starting datanode, logging to /root/hadoop-2.8.3/logs/hadoop-root-datanode-db01.out

Starting secondary namenodes [0.0.0.0]

The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.

ECDSA key fingerprint is SHA256:6O9+Y8woUDEgCSMpP6X8lFn7GSlnSvZiNul/FTS4mDI.

ECDSA key fingerprint is MD5:3d:65:36:c5:b1:d1:a7:fa:d9:d5:4d:fd:ac:66:6c:f8.

Are you sure you want to continue connecting (yes/no)? yes

0.0.0.0: Warning: Permanently added '0.0.0.0' (ECDSA) to the list of known hosts.

0.0.0.0: starting secondarynamenode, logging to /root/hadoop-2.8.3/logs/hadoop-root-secondarynamenode-db01.out


10. Check that the daemons are running

[root@db01 hadoop-2.8.3]# jps

4260 NameNode

4618 SecondaryNameNode

4383 DataNode

4847 Jps


11. Access the NameNode web UI

NameNode - http://localhost:50070/


12. DFS read/write test

$ bin/hdfs dfs -mkdir /user
$ bin/hdfs dfs -mkdir /user/<username>
$ bin/hdfs dfs -put etc/hadoop input

$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.3.jar grep input output 'dfs[a-z.]+'
$ bin/hdfs dfs -get output output
$ cat output/*
$ bin/hdfs dfs -cat output/*
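The jar invocation above runs the bundled "grep" example: a MapReduce job that counts matches of the regular expression dfs[a-z.]+ across the files in input and writes the counts to output. What the regex extracts can be previewed locally with plain grep on a sample file (a sketch only, not part of the Hadoop run; the sample content is made up here).

```shell
# Sketch: preview what the example MapReduce "grep" job matches.
tmp=$(mktemp -d)
cat > "$tmp/sample.xml" <<'EOF'
<name>dfs.replication</name>
<value>1</value>
EOF

# Same regex the example job uses, applied locally.
grep -Eo 'dfs[a-z.]+' "$tmp/sample.xml"   # prints: dfs.replication
```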

Stop HDFS when finished:
sbin/stop-dfs.sh

13. Configure YARN

Edit mapred-site.xml and yarn-site.xml:

etc/hadoop/mapred-site.xml:

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

etc/hadoop/yarn-site.xml:

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>

14. Start YARN

sbin/start-yarn.sh

[root@db01 hadoop-2.8.3]# sbin/start-yarn.sh

starting yarn daemons

starting resourcemanager, logging to /root/hadoop-2.8.3/logs/yarn-root-resourcemanager-db01.out

localhost: starting nodemanager, logging to /root/hadoop-2.8.3/logs/yarn-root-nodemanager-db01.out

Open the ResourceManager UI in a browser:

ResourceManager - http://localhost:8088/

Stop YARN when finished:

sbin/stop-yarn.sh


[Copyright notice] This article is original content by a Huawei Cloud community user. When reposting, you must credit the source (Huawei Cloud community), the article link, and the author.