Submitting jobs from open-source Flink 1.13.5 to an MRS 3.1.0 secure cluster
1. Install Flink: tar -zxvf flink-1.13.5-bin-scala_2.11.tgz
Flink 1.13.5 download: https://archive.apache.org/dist/flink/flink-1.13.5/flink-1.13.5-bin-scala_2.11.tgz
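A quick sanity check after extraction (a minimal sketch; the directory name follows the tarball):
cd flink-1.13.5
bin/flink --version   # should print something like: Version: 1.13.5, Commit ID: ...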
2. Copy the MRS cluster's core-site.xml, hdfs-site.xml, and yarn-site.xml into flink/conf/
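For example, assuming the MRS client is installed at /opt/Bigdata/client (the exact paths vary by installation, so adjust them to your environment):
# Copy the Hadoop configuration files from the MRS client into the Flink conf directory
cp /opt/Bigdata/client/HDFS/hadoop/etc/hadoop/{core-site.xml,hdfs-site.xml,yarn-site.xml} flink-1.13.5/conf/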
3. Modify the flink-conf.yaml file
Add the following configuration:
#refer to the corresponding entries in the MRS Flink client's flink-conf.yaml
env.java.opts.jobmanager: -Djava.security.krb5.conf=/opt/Bigdata/FusionInsight_BASE_8.1.0.1/1_5_KerberosClient/etc/kdc.conf
env.java.opts.taskmanager: -Djava.security.krb5.conf=/opt/Bigdata/FusionInsight_BASE_8.1.0.1/1_5_KerberosClient/etc/kdc.conf
security.kerberos.login.keytab: /root/wwk/user.keytab #change to your keytab path
security.kerberos.login.principal: flink-user #change to your principal
env.java.opts: -Djava.security.krb5.conf=/root/wwk/krb5.conf -Djava.library.path=${HADOOP_COMMON_HOME}/lib/native #change to your krb5.conf path and add the Hadoop native library path
security.kerberos.login.use-ticket-cache: true
classloader.check-leaked-classloader: false
security.kerberos.login.contexts: Client,KafkaClient
zookeeper.sasl.service-name: zookeeper
zookeeper.sasl.login-context-name: Client
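Before submitting, it is worth confirming that the keytab and principal actually match; a minimal check, assuming the paths used above:
# List the principals contained in the keytab; the output should include flink-user
klist -kt /root/wwk/user.keytab
# Optionally obtain a ticket to verify the keytab against the KDC
kinit -kt /root/wwk/user.keytab flink-user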
4. If Flink high availability is required, perform the following steps
a. Add the following configuration to flink-conf.yaml
high-availability.job.delay: 10 s
high-availability.storageDir: hdfs:///flink/recovery
high-availability.zookeeper.client.acl: creator
high-availability.zookeeper.client.connection-timeout: 15000
high-availability.zookeeper.client.max-retry-attempts: 3
high-availability.zookeeper.client.retry-wait: 5000
high-availability.zookeeper.client.session-timeout: 60000
high-availability.zookeeper.path.root: /flink
high-availability.zookeeper.quorum: 192.168.0.35:2181,192.168.0.112:2181,192.168.0.210:2181 #change to your cluster's ZooKeeper quorum addresses
high-availability: zookeeper
b. Replace the flink-shaded-zookeeper-3.4.14.jar in flink/lib
Delete flink-shaded-zookeeper-3.4.14.jar and copy flink-shaded-zookeeper-3.5.6-hw-ei-310012.jar from the MRS Flink client's lib directory into the open-source Flink lib directory; as the jar name indicates, the MRS cluster ships ZooKeeper 3.5.6, so the shaded 3.4.14 client bundled with open-source Flink does not match the server.
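A sketch of the swap, assuming the MRS Flink client lives under /opt/Bigdata/client/Flink (adjust the paths and the exact jar version to your environment):
# Remove the open-source shaded ZooKeeper client and replace it with the MRS one
rm flink-1.13.5/lib/flink-shaded-zookeeper-3.4.14.jar
cp /opt/Bigdata/client/Flink/flink/lib/flink-shaded-zookeeper-3.5.6-hw-ei-310012.jar flink-1.13.5/lib/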
5. Set HADOOP_CLASSPATH
If the MRS client is installed, you can simply source /opt/Bigdata/client/bigdata_env; without a client, set it manually with a command like the following:
export HADOOP_CLASSPATH=/opt/hadoopclient/HDFS/hadoop/etc/hadoop:/opt/hadoopclient/HDFS/hadoop/share/hadoop/common/lib/*:/opt/hadoopclient/HDFS/hadoop/share/hadoop/common/*:/opt/hadoopclient/HDFS/hadoop/share/hadoop/hdfs:/opt/hadoopclient/HDFS/hadoop/share/hadoop/hdfs/lib/*:/opt/hadoopclient/HDFS/hadoop/share/hadoop/hdfs/*:/opt/hadoopclient/HDFS/hadoop/share/hadoop/mapreduce/*:/opt/hadoopclient/HDFS/hadoop/share/hadoop/yarn/lib/*:/opt/hadoopclient/HDFS/hadoop/share/hadoop/yarn/*:/opt/hadoopclient/HDFS/hadoop/share/hadoop/tools/lib/hadoop-huaweicloud-3.1.1-hw-42.jar:/opt/hadoopclient/HDFS/hadoop/share/hadoop/tools/lib/hadoop-huawei-obscommitter-3.1.1-hw-ei-310013.jar:/opt/hadoopclient/HDFS/hadoop/share/hadoop/tools/lib/hadoop-trace-ping-3.1.1-hw-ei-310013.jar
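Alternatively, if the hadoop command itself is on the PATH, the approach recommended by the Flink documentation is to derive the classpath from it, which is equivalent to the explicit list above:
export HADOOP_CLASSPATH=$(hadoop classpath)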
6. Run a test job
bin/flink run -m yarn-cluster examples/batch/WordCount.jar
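With no arguments, WordCount runs on built-in sample data and prints the counts to the client console. To exercise HDFS access as well, you can point it at files in the cluster (the paths below are illustrative) and then confirm the application on YARN:
# Run WordCount against HDFS input/output (example paths)
bin/flink run -m yarn-cluster examples/batch/WordCount.jar --input hdfs:///tmp/wordcount-input.txt --output hdfs:///tmp/wordcount-output
# Confirm the job was accepted by the secure cluster
yarn application -list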