Painstakingly put together. I built this twice from scratch and personally verified that it works!!!
I bought a 2-core/4GB Alibaba Cloud server running CentOS 7.7. I hit quite a few pitfalls during setup, and I hope this article spares you the same.
Installing Docker on CentOS 7.7
Check the kernel version (log in as root):
uname -a
Update all yum packages to the latest:
yum update
Install the required packages:
yum install -y yum-utils device-mapper-persistent-data lvm2
Set up the yum repository:
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
Create a working directory:
cd /mnt
mkdir docker
cd docker
List every Docker version available in the repositories and pick a specific one to install:
yum list docker-ce --showduplicates | sort -r
Install Docker (the command form is yum install docker-ce-<version>):
yum install docker-ce-18.06.3.ce
Start Docker and enable it at boot:
systemctl start docker
systemctl enable docker
Verify the installation (output showing both a Client and a Server section means Docker installed and started successfully):
docker version
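As an optional extra sanity check, Docker's standard hello-world test image pulls a tiny image and prints a confirmation message if the daemon is working:
docker run hello-world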
Installing Docker on Ubuntu 18.04
Create a working directory:
cd /mnt
mkdir docker
cd docker
Download the packages:
wget https://mirrors.aliyun.com/docker-ce/linux/ubuntu/dists/bionic/pool/stable/amd64/containerd.io_1.2.6-3_amd64.deb
wget https://mirrors.aliyun.com/docker-ce/linux/ubuntu/dists/bionic/pool/stable/amd64/docker-ce-cli_19.03.9~3-0~ubuntu-bionic_amd64.deb
wget https://mirrors.aliyun.com/docker-ce/linux/ubuntu/dists/bionic/pool/stable/amd64/docker-ce_19.03.9~3-0~ubuntu-bionic_amd64.deb
Install them:
sudo dpkg -i *.deb
Start Docker:
service docker start
Building the Server and Hadoop Images
Pull the CentOS 7 image (note that docker pull centos grabs the latest tag; to pin CentOS 7 explicitly you can use docker pull centos:7):
docker pull centos
List local images:
docker images
Install SSH:
cd /mnt/docker
mkdir ssh
cd ssh
vi Dockerfile
Contents:
FROM centos
MAINTAINER dys
RUN yum install -y openssh-server sudo
RUN sed -i 's/UsePAM yes/UsePAM no/g' /etc/ssh/sshd_config
RUN yum install -y openssh-clients
RUN echo "root:1234" | chpasswd
RUN echo "root ALL=(ALL) ALL" >> /etc/sudoers
RUN ssh-keygen -t dsa -f /etc/ssh/ssh_host_dsa_key
RUN ssh-keygen -t rsa -f /etc/ssh/ssh_host_rsa_key
RUN mkdir /var/run/sshd
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
Save and exit.
Build the image, naming it centos7-ssh:
docker build -t="centos7-ssh" .
Start three containers from the centos7-ssh image:
docker run -d --name=centos7.ssh centos7-ssh
docker run -d --name=centos7.ssh2 centos7-ssh
docker run -d --name=centos7.ssh3 centos7-ssh
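To confirm all three containers are running, docker ps (standard Docker) should list them with an Up status:
docker ps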
Building the Hadoop Image
Create a directory:
cd /mnt/docker
mkdir hadoop
cd hadoop
Download the tarballs:
# download Hadoop; used when building the image
wget https://mirrors.bfsu.edu.cn/apache/hadoop/common/hadoop-2.9.2/hadoop-2.9.2.tar.gz
# download the JDK; used when building the image
wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u141-b15/336fa29ff2bb4ef291e347e091f7f4a7/jdk-8u141-linux-x64.tar.gz"
Edit the Dockerfile:
vi Dockerfile
Contents:
FROM centos7-ssh
ADD jdk-8u141-linux-x64.tar.gz /usr/local/
RUN mv /usr/local/jdk1.8.0_141 /usr/local/jdk1.8
ENV JAVA_HOME /usr/local/jdk1.8
ENV PATH $JAVA_HOME/bin:$PATH
ADD hadoop-2.9.2.tar.gz /usr/local
RUN mv /usr/local/hadoop-2.9.2 /usr/local/hadoop
ENV HADOOP_HOME /usr/local/hadoop
ENV PATH $HADOOP_HOME/bin:$PATH
RUN yum install -y which sudo
Save and exit.
Run the build:
docker build -t="hadoop" .
Run the containers:
docker run --name hadoop0 --hostname hadoop0 -d -P -p 50070:50070 -p 8088:8088 hadoop
docker run --name hadoop1 --hostname hadoop1 -d -P hadoop
docker run --name hadoop2 --hostname hadoop2 -d -P hadoop
Setting Up the Hadoop Cluster
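The cluster-setup steps below are executed inside each of the three containers. One standard way to get a root shell in a running container is docker exec:
docker exec -it hadoop0 /bin/bash
Substitute hadoop1 or hadoop2 to work on the other two nodes.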
Set up the ll alias:
vim ~/.bashrc
Add the following line:
alias ll='ls -l'
Save and exit, then reload:
source ~/.bashrc
Install vim and net-tools:
yum install net-tools
yum install vim
Modify /etc/hosts on every server
Use ifconfig to look up each node's IP and replace the addresses below with your own:
172.18.0.5 hadoop0
172.18.0.6 hadoop1
172.18.0.7 hadoop2
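If you prefer to read the container IPs from the host instead of running ifconfig inside each container, docker inspect can print them (standard Docker; this Go template reads the default bridge-network address):
docker inspect -f '{{.NetworkSettings.IPAddress}}' hadoop0
docker inspect -f '{{.NetworkSettings.IPAddress}}' hadoop1
docker inspect -f '{{.NetworkSettings.IPAddress}}' hadoop2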
Set the timezone
rm -rf /etc/localtime
ln -s /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
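A quick check that the change took effect is the standard date command, which should now report CST (China Standard Time):
date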
Passwordless SSH Login
Run the following command on every server. It will prompt several times; don't type anything, just press Enter at each prompt:
ssh-keygen
The commands below will ask for the password set earlier (1234):
ssh-copy-id -i /root/.ssh/id_rsa -p 22 root@hadoop0
ssh-copy-id -i /root/.ssh/id_rsa -p 22 root@hadoop1
ssh-copy-id -i /root/.ssh/id_rsa -p 22 root@hadoop2
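To verify that key-based login works, hopping to another node should no longer prompt for a password (exit returns to the original shell):
ssh hadoop1
exit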
Installing and Configuring Hadoop
On the master (hadoop0), run:
cd /usr/local/hadoop
mkdir tmp hdfs
mkdir hdfs/data hdfs/name
Configure core-site.xml
vim /usr/local/hadoop/etc/hadoop/core-site.xml
Add the following inside the <configuration> block:
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop0:9000</value>
</property>
<property>
    <name>hadoop.tmp.dir</name>
    <value>file:/usr/local/hadoop/tmp</value>
</property>
<property>
    <name>io.file.buffer.size</name>
    <value>131702</value>
</property>
Configure hdfs-site.xml
vim /usr/local/hadoop/etc/hadoop/hdfs-site.xml
Add the following inside the <configuration> block:
<property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/usr/local/hadoop/hdfs/name</value>
</property>
<property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/usr/local/hadoop/hdfs/data</value>
</property>
<property>
    <name>dfs.replication</name>
    <value>2</value>
</property>
<property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>hadoop0:9001</value>
</property>
<property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
</property>
Configure mapred-site.xml
This file does not exist by default and must be copied from mapred-site.xml.template:
cp /usr/local/hadoop/etc/hadoop/mapred-site.xml.template /usr/local/hadoop/etc/hadoop/mapred-site.xml
Edit the file:
vim /usr/local/hadoop/etc/hadoop/mapred-site.xml
Add the following inside the <configuration> block:
<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>
<property>
    <name>mapreduce.jobhistory.address</name>
    <value>hadoop0:10020</value>
</property>
<property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>hadoop0:19888</value>
</property>
Configure yarn-site.xml
vim /usr/local/hadoop/etc/hadoop/yarn-site.xml
Add the following inside the <configuration> block:
<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>
<property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
    <name>yarn.resourcemanager.address</name>
    <value>hadoop0:8032</value>
</property>
<property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>hadoop0:8030</value>
</property>
<property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>hadoop0:8031</value>
</property>
<property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>hadoop0:8033</value>
</property>
<property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>hadoop0:8088</value>
</property>
Configure slaves
vim /usr/local/hadoop/etc/hadoop/slaves
Delete the existing contents and add:
hadoop1
hadoop2
Configure hadoop-env.sh
vim /usr/local/hadoop/etc/hadoop/hadoop-env.sh
Find the line export JAVA_HOME=${JAVA_HOME} and change it to the absolute path of your JAVA_HOME.
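With the Dockerfile above, which placed the JDK at /usr/local/jdk1.8, the line becomes:
export JAVA_HOME=/usr/local/jdk1.8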
Copy the directory to hadoop1 and hadoop2:
scp -r /usr/local/hadoop hadoop1:/usr/local
scp -r /usr/local/hadoop hadoop2:/usr/local
Setting the Hadoop Environment Variables
On every server, run:
vim ~/.bashrc
Append the following line:
export PATH=$PATH:/usr/local/hadoop/bin:/usr/local/hadoop/sbin
Save and exit, then reload:
source ~/.bashrc
Starting Hadoop
Start Hadoop on the master; the slave nodes will be started automatically.
Format the NameNode:
hdfs namenode -format
Start the daemons:
hadoop-daemon.sh start namenode
hadoop-daemon.sh start datanode
start-dfs.sh
start-yarn.sh
mr-jobhistory-daemon.sh start historyserver
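To confirm the daemons came up, the JDK's jps tool lists the running Java processes on each node. Roughly speaking (the exact set depends on which daemons a node runs), the master should show NameNode, SecondaryNameNode, ResourceManager and JobHistoryServer, while hadoop1 and hadoop2 should show DataNode and NodeManager:
jps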
Testing
If you are also using an Alibaba Cloud server, you need to adjust the security group in the Alibaba Cloud console. Only port 22 is open by default, so open ports 50070 and 8088 as well.
[Screenshot: security group port settings]
Open in a browser:
http://<server IP>:50070/
http://<server IP>:8088/
HDFS Operations
hdfs dfs -mkdir -p /usr/local/hadoop/input
hdfs dfs -put /usr/local/hadoop/etc/hadoop/kms*.xml /usr/local/hadoop/input
Then open http://<server IP>:50070/ and check the uploaded files in the file browser page.
[Screenshot: verifying the files in the HDFS file browser]
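You can also list the uploaded files from the command line with the standard HDFS shell:
hdfs dfs -ls /usr/local/hadoop/input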
MapReduce Operations
hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar grep /usr/local/hadoop/input /usr/local/hadoop/output 'dfs[a-z.]+'
[Screenshot: verifying the MapReduce job]
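To read the job's result directly instead of via the web UI, print the files the grep example wrote to the output directory:
hdfs dfs -cat /usr/local/hadoop/output/*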