Hadoop Series 01: Installing Single-Node Hadoop


I. Prerequisites

Linux, JDK 1.8, and JAVA_HOME configured.
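If JAVA_HOME is not set yet, a minimal sketch (assuming the JDK lives at /usr/java/jdk1.8.0_241-amd64, the same path used in hadoop-env.sh later in this article):

# Append to /root/.bash_profile
export JAVA_HOME=/usr/java/jdk1.8.0_241-amd64
export PATH=$PATH:$JAVA_HOME/bin

# Reload and verify
$ source /root/.bash_profile
$ java -version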

# Disable the firewall
$ systemctl status firewalld.service
$ systemctl stop firewalld.service
$ systemctl disable firewalld.service
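If you would rather keep the firewall running, a sketch of opening only the ports this article references (9000 for HDFS, 50070 and 8088 for the web UIs; a full deployment needs more) instead of disabling it:

$ firewall-cmd --permanent --add-port=9000/tcp
$ firewall-cmd --permanent --add-port=50070/tcp
$ firewall-cmd --permanent --add-port=8088/tcp
$ firewall-cmd --reload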

II. Configure Hosts and Passwordless SSH Login

# On a VM, do not use localhost or 127.0.0.1 when configuring Hadoop addresses
$ vim /etc/hosts
 192.168.18.10	node10

#Create a key pair (this must be done on the NameNode host; on newer systems where OpenSSH disables DSA keys, use -t rsa instead):
$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
#Append the public key to authorized_keys:
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
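To confirm passwordless login works (assuming the machine's hostname is node10, as mapped in /etc/hosts above), the following should run without a password prompt:

$ ssh node10 hostname
node10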

III. Install Hadoop

1. Download Hadoop from http://archive.apache.org/dist/hadoop/core/

Note: if you plan to install Spark later, install the Hadoop version that matches your Spark build.

The Hadoop/Spark version pairing is listed at http://spark.apache.org/downloads.html

2. Extract and configure HADOOP_HOME

# Extract (here into /data/app, matching the HADOOP_HOME configured below)
$ tar -zxvf hadoop-2.7.7.tar.gz -C /data/app

# Configure HADOOP_HOME
$ vim /root/.bash_profile
export HADOOP_HOME=/data/app/hadoop-2.7.7
PATH=$PATH:$JAVA_HOME/bin:$ERLANG_HOME/bin:$HADOOP_HOME/bin

$ source /root/.bash_profile

# Verify the setup; the command below should print:
$ hadoop version
Hadoop 2.7.7
Subversion Unknown -r c1aad84bd27cd79c3d1a7dd58202a8c3ee1ed3ac
Compiled by stevel on 2018-07-18T22:47Z
Compiled with protoc 2.5.0
From source with checksum 792e15d20b12c74bd6f19a1fb886490
This command was run using /data/app/hadoop-2.7.7/share/hadoop/common/hadoop-common-2.7.7.jar

3. Configure Hadoop

# Create the HDFS storage directories (matching the paths configured below)
$ mkdir -p /data/app/hadoop-2.7.7/data/dfs/namenode
$ mkdir -p /data/app/hadoop-2.7.7/data/dfs/datanode


$ cd /data/app/hadoop-2.7.7/etc/hadoop

# Configure core-site.xml
$ vim core-site.xml
<property>
	<name>fs.defaultFS</name>
	<value>hdfs://node10:9000</value>
</property>
<property>
	<name>hadoop.tmp.dir</name>
	<value>/data/app/hadoop-2.7.7/data</value>
</property>

# Configure hdfs-site.xml
$ vim hdfs-site.xml
<property>
	<name>dfs.replication</name>
	<value>1</value>
</property>
<property>
	<name>dfs.namenode.http-address</name>
	<value>node10:50070</value>
</property>
<property>
	<name>dfs.namenode.name.dir</name>
	<value>/data/app/hadoop-2.7.7/data/dfs/namenode</value>
</property>
<property>
	<name>dfs.datanode.data.dir</name>
	<value>/data/app/hadoop-2.7.7/data/dfs/datanode</value>
</property>

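Note: the Hadoop 2.7.7 distribution ships only mapred-site.xml.template, so create the file from the template first:

$ cp mapred-site.xml.template mapred-site.xml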
# Configure mapred-site.xml
$ vim mapred-site.xml
<property>
	<name>mapreduce.framework.name</name>
	<value>yarn</value>
</property>
Note: the legacy mapred.job.tracker property (the Hadoop 1.x JobTracker address) is ignored under YARN and can be dropped; YARN replaces the JobTracker in Hadoop 2.x.


# Configure yarn-site.xml
$ vim yarn-site.xml
<property>
	<name>yarn.nodemanager.aux-services</name>
	<value>mapreduce_shuffle</value>
</property>
<property>
	<name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
	<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>

# Set JAVA_HOME in hadoop-env.sh
$ vim hadoop-env.sh
export JAVA_HOME=/usr/java/jdk1.8.0_241-amd64

4. Format HDFS and Start Hadoop

# Format the NameNode (first startup only; reformatting wipes HDFS metadata)
$ hdfs namenode -format

# Start all daemons (run from $HADOOP_HOME)
$ ./sbin/start-all.sh
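Note: start-all.sh is deprecated in Hadoop 2.x; it simply delegates to the two scripts below, which can be run instead:

$ ./sbin/start-dfs.sh
$ ./sbin/start-yarn.sh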

# Verify startup: jps should list processes like the following
$ jps
12709 NodeManager
12280 DataNode
12827 Jps
11534 ResourceManager
12462 SecondaryNameNode
12143 NameNode
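As a final smoke test (a sketch; hadoop-mapreduce-examples-2.7.7.jar is the examples jar shipped with the 2.7.7 distribution), write a directory to HDFS and run the bundled pi estimator with 2 map tasks and 10 samples each:

$ hdfs dfs -mkdir -p /tmp/test
$ hdfs dfs -ls /
$ hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.7.jar pi 2 10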

5. Check the Web UIs in a Browser

HDFS NameNode UI: http://192.168.18.10:50070/

YARN ResourceManager UI: http://192.168.18.10:8088/cluster
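Without a browser, a quick reachability check from the shell (assuming the IP above; a healthy NameNode UI returns HTTP 200):

$ curl -s -o /dev/null -w '%{http_code}\n' http://192.168.18.10:50070/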

