Submitting jobs from open-source Flink 1.13.5 to an MRS 3.1.0 security cluster

王伟康 (Wang Weikang), posted 2021/12/27 20:32:49
[Abstract] Install open-source Flink 1.13.5, add the MRS Hadoop client configuration files and Kerberos security settings to flink-conf.yaml, then submit jobs to an MRS 3.1.0 security cluster on YARN.

1. Install Flink: tar -zxvf flink-1.13.5-bin-scala_2.11.tgz

Flink 1.13.5 download link: https://archive.apache.org/dist/flink/flink-1.13.5/flink-1.13.5-bin-scala_2.11.tgz

2. Copy the MRS cluster's core-site.xml, hdfs-site.xml, and yarn-site.xml into flink/conf/
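The copy in step 2 can be sketched as below. The scratch directories stand in for the MRS Hadoop client config directory and flink-1.13.5/conf so the loop can be tried anywhere; on a real cluster, substitute the actual paths (e.g. the client config directory under /opt/Bigdata/client).

```shell
# Scratch dirs stand in for the real locations; replace with your paths:
HADOOP_CONF=$(mktemp -d)     # e.g. the MRS Hadoop client's etc/hadoop dir
FLINK_CONF_DIR=$(mktemp -d)  # e.g. flink-1.13.5/conf
for f in core-site.xml hdfs-site.xml yarn-site.xml; do
  echo '<configuration/>' > "${HADOOP_CONF}/${f}"   # placeholder content for the demo
  cp "${HADOOP_CONF}/${f}" "${FLINK_CONF_DIR}/"
done
ls "${FLINK_CONF_DIR}"
```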

3. Edit the flink-conf.yaml file

Add the following settings:

# Refer to the corresponding entries in the MRS Flink client's flink-conf.yaml

env.java.opts.jobmanager: -Djava.security.krb5.conf=/opt/Bigdata/FusionInsight_BASE_8.1.0.1/1_5_KerberosClient/etc/kdc.conf
env.java.opts.taskmanager: -Djava.security.krb5.conf=/opt/Bigdata/FusionInsight_BASE_8.1.0.1/1_5_KerberosClient/etc/kdc.conf
security.kerberos.login.keytab: /root/wwk/user.keytab #replace with your keytab path
security.kerberos.login.principal: flink-user #replace with your principal (username)
env.java.opts: -Djava.security.krb5.conf=/root/wwk/krb5.conf -Djava.library.path=${HADOOP_COMMON_HOME}/lib/native #replace with your krb5.conf; also adds the Hadoop native library path
security.kerberos.login.use-ticket-cache: true
classloader.check-leaked-classloader: false
security.kerberos.login.contexts: Client,KafkaClient
zookeeper.sasl.service-name: zookeeper
zookeeper.sasl.login-context-name: Client
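Before moving on, it is worth sanity-checking that the security keys actually landed in the file. The sketch below writes the settings from step 3 into a scratch flink-conf.yaml and greps for the essential keys; the keytab path and principal are the examples from above, and in a real install CONF would point at flink-1.13.5/conf/flink-conf.yaml.

```shell
# Write the step-3 security settings into a scratch config and verify them.
CONF=$(mktemp)               # stands in for flink-1.13.5/conf/flink-conf.yaml
cat >> "${CONF}" <<'EOF'
security.kerberos.login.keytab: /root/wwk/user.keytab
security.kerberos.login.principal: flink-user
security.kerberos.login.use-ticket-cache: true
security.kerberos.login.contexts: Client,KafkaClient
zookeeper.sasl.service-name: zookeeper
zookeeper.sasl.login-context-name: Client
classloader.check-leaked-classloader: false
EOF
missing=0
for key in security.kerberos.login.keytab \
           security.kerberos.login.principal \
           security.kerberos.login.contexts \
           zookeeper.sasl.login-context-name; do
  grep -q "^${key}:" "${CONF}" || { echo "missing: ${key}"; missing=1; }
done
[ "${missing}" -eq 0 ] && echo "security settings present"
```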

4. To enable high availability for Flink, perform the following steps

a. Add the following settings to flink-conf.yaml

high-availability.job.delay: 10 s
high-availability.storageDir: hdfs:///flink/recovery
high-availability.zookeeper.client.acl: creator
high-availability.zookeeper.client.connection-timeout: 15000
high-availability.zookeeper.client.max-retry-attempts: 3
high-availability.zookeeper.client.retry-wait: 5000
high-availability.zookeeper.client.session-timeout: 60000
high-availability.zookeeper.path.root: /flink
high-availability.zookeeper.quorum: 192.168.0.35:2181,192.168.0.112:2181,192.168.0.210:2181
high-availability: zookeeper

b. Replace the flink-shaded-zookeeper-3.4.14.jar package under flink/lib

Delete flink-shaded-zookeeper-3.4.14.jar, then copy flink-shaded-zookeeper-3.5.6-hw-ei-310012.jar from the MRS Flink client's lib directory into the current Flink lib directory.
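The swap amounts to one rm and one cp. In the sketch below, scratch directories and empty files stand in for flink-1.13.5/lib and the MRS client's lib directory so the sequence can be tried anywhere; on a real node, use the actual paths.

```shell
FLINK_LIB=$(mktemp -d)   # stands in for flink-1.13.5/lib
MRS_LIB=$(mktemp -d)     # stands in for the MRS Flink client's lib dir
touch "${FLINK_LIB}/flink-shaded-zookeeper-3.4.14.jar"            # stock jar
touch "${MRS_LIB}/flink-shaded-zookeeper-3.5.6-hw-ei-310012.jar"  # MRS jar

# Remove the stock shaded ZooKeeper jar and bring in the MRS one:
rm "${FLINK_LIB}/flink-shaded-zookeeper-3.4.14.jar"
cp "${MRS_LIB}/flink-shaded-zookeeper-3.5.6-hw-ei-310012.jar" "${FLINK_LIB}/"
ls "${FLINK_LIB}"
```

Exactly one shaded ZooKeeper jar should remain in flink/lib afterwards; having both versions on the classpath can cause conflicts.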

5. Set HADOOP_CLASSPATH

You can simply run source /opt/Bigdata/client/bigdata_env. If the MRS client is not installed, set it with the following command:

export HADOOP_CLASSPATH=/opt/hadoopclient/HDFS/hadoop/etc/hadoop:/opt/hadoopclient/HDFS/hadoop/share/hadoop/common/lib/*:/opt/hadoopclient/HDFS/hadoop/share/hadoop/common/*:/opt/hadoopclient/HDFS/hadoop/share/hadoop/hdfs:/opt/hadoopclient/HDFS/hadoop/share/hadoop/hdfs/lib/*:/opt/hadoopclient/HDFS/hadoop/share/hadoop/hdfs/*:/opt/hadoopclient/HDFS/hadoop/share/hadoop/mapreduce/*:/opt/hadoopclient/HDFS/hadoop/share/hadoop/yarn/lib/*:/opt/hadoopclient/HDFS/hadoop/share/hadoop/yarn/*:/opt/hadoopclient/HDFS/hadoop/share/hadoop/tools/lib/hadoop-huaweicloud-3.1.1-hw-42.jar:/opt/hadoopclient/HDFS/hadoop/share/hadoop/tools/lib/hadoop-huawei-obscommitter-3.1.1-hw-ei-310013.jar:/opt/hadoopclient/HDFS/hadoop/share/hadoop/tools/lib/hadoop-trace-ping-3.1.1-hw-ei-310013.jar
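Step 5 can also be expressed as a small decision: prefer sourcing the MRS client's bigdata_env; otherwise, if a hadoop CLI is on the PATH, derive the classpath from hadoop classpath (the fallback Flink's Hadoop integration documents). The bigdata_env path is the one from the article; adjust it to your install.

```shell
if [ -f /opt/Bigdata/client/bigdata_env ]; then
  . /opt/Bigdata/client/bigdata_env            # i.e. "source"; sets HADOOP_CLASSPATH
elif command -v hadoop >/dev/null 2>&1; then
  export HADOOP_CLASSPATH="$(hadoop classpath)"
fi
echo "HADOOP_CLASSPATH=${HADOOP_CLASSPATH:-<unset>}"
```

If neither is available, fall back to the explicit export shown above.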

6. Run a test program

bin/flink run -m yarn-cluster examples/batch/WordCount.jar


[Copyright notice] This article is original content by a Huawei Cloud community user. When reproducing it, you must credit the source (Huawei Cloud community), the article link, and the author; otherwise the author and the community reserve the right to pursue liability. If you find suspected plagiarism in this community, please report it by email with supporting evidence; confirmed infringing content will be removed immediately. Report mailbox: cloudbbs@huaweicloud.com