Note that the startup script replaces the HDFS master address in the Hadoop configuration files (core-site.xml and yarn-site.xml) with the environment variable HDFS_MASTER_SERVICE, and the YARN master address with HDOOP_YARN_MASTER. The figure below is a complete model of a two-node Hadoop HDFS cluster:

In the figure, circles represent Pods. Note that the Datanodes are not modeled as a Kubernetes Service but as standalone Pods: Datanodes are never accessed directly by clients, so no Service is needed for them. When a Datanode runs inside a Pod container, we need to change the following parameter in the Hadoop configuration to disable the check that a DataNode's hostname (DNS) resolves to its registered IP address:
- dfs.namenode.datanode.registration.ip-hostname-check=false
If this parameter is left unchanged, the DataNode cluster appears to be "split": because a Pod's hostname does not resolve to the Pod's IP address, the NameNode web UI shows two nodes, both in an abnormal state.
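In Hadoop this switch lives in hdfs-site.xml. A minimal fragment (surrounding properties omitted) looks like:

```xml
<configuration>
  <!-- Do not require a DataNode's registration hostname to resolve
       to its IP address; Pod hostnames are not resolvable by default. -->
  <property>
    <name>dfs.namenode.datanode.registration.ip-hostname-check</name>
    <value>false</value>
  </property>
</configuration>
```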
Below is the Pod definition backing the HDFS master Service:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: k8s-hadoop-master
  labels:
    app: k8s-hadoop-master
spec:
  containers:
  - name: k8s-hadoop-master
    image: kubeguide/hadoop
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 9000
    - containerPort: 50070
    env:
    - name: HADOOP_NODE_TYPE
      value: namenode
    - name: HDFS_MASTER_SERVICE
      valueFrom:
        configMapKeyRef:
          name: ku8-hadoop-conf
          key: HDFS_MASTER_SERVICE
    - name: HDOOP_YARN_MASTER
      valueFrom:
        configMapKeyRef:
          name: ku8-hadoop-conf
          key: HDOOP_YARN_MASTER
  restartPolicy: Always
```
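The Pod above is reached through a Service so that Datanodes and clients see a stable address. A minimal sketch of such a Service, assuming the Service name and selector follow the `app: k8s-hadoop-master` label above (the source does not show this manifest):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: k8s-hadoop-master   # assumed name, matching the Pod label
spec:
  selector:
    app: k8s-hadoop-master
  ports:
  - name: rpc
    port: 9000              # NameNode RPC port
  - name: webui
    port: 50070             # NameNode web UI
```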
Below is the Pod definition for an HDFS Datanode (hadoop-datanode-1):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hadoop-datanode-1
  labels:
    app: hadoop-datanode-1
spec:
  containers:
  - name: hadoop-datanode-1
    image: kubeguide/hadoop
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 9000
    - containerPort: 50070
    env:
    - name: HADOOP_NODE_TYPE
      value: datanode
    - name: HDFS_MASTER_SERVICE
      valueFrom:
        configMapKeyRef:
          name: ku8-hadoop-conf
          key: HDFS_MASTER_SERVICE
    - name: HDOOP_YARN_MASTER
      valueFrom:
        configMapKeyRef:
          name: ku8-hadoop-conf
          key: HDOOP_YARN_MASTER
  restartPolicy: Always
```
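Both Pods pull HDFS_MASTER_SERVICE and HDOOP_YARN_MASTER from the ConfigMap `ku8-hadoop-conf` via `configMapKeyRef`; the startup script then substitutes these values into core-site.xml and yarn-site.xml. A minimal sketch of that ConfigMap, with assumed values (they must match the master Service name actually used in the cluster):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ku8-hadoop-conf
data:
  # Assumed values: DNS name of the master Service in the same namespace
  HDFS_MASTER_SERVICE: k8s-hadoop-master
  HDOOP_YARN_MASTER: k8s-hadoop-master
```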