Submitting jobs from open-source Flink 1.13.5 to an MRS 3.1.0 secure cluster
1. Install Flink: tar -zxvf flink-1.13.5-bin-scala_2.11.tgz
Flink 1.13.5 download: https://archive.apache.org/dist/flink/flink-1.13.5/flink-1.13.5-bin-scala_2.11.tgz
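For reference, the full fetch-and-unpack sequence (run it from wherever you want Flink installed; the later commands in this guide assume you are inside the extracted flink-1.13.5 directory):
wget https://archive.apache.org/dist/flink/flink-1.13.5/flink-1.13.5-bin-scala_2.11.tgz
tar -zxvf flink-1.13.5-bin-scala_2.11.tgz
cd flink-1.13.5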
2. Copy the MRS cluster's core-site.xml, hdfs-site.xml, and yarn-site.xml into flink/conf/
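For example, assuming an MRS client is installed under /opt/Bigdata/client (an assumption; adjust to your actual client path), the three files can be copied in one line:
cp /opt/Bigdata/client/HDFS/hadoop/etc/hadoop/{core-site.xml,hdfs-site.xml,yarn-site.xml} conf/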
3. Edit the flink-conf.yaml file
Add the following settings:
# the corresponding values can be taken from the flink-conf.yaml of the MRS Flink client
env.java.opts.jobmanager: -Djava.security.krb5.conf=/opt/Bigdata/FusionInsight_BASE_8.1.0.1/1_5_KerberosClient/etc/kdc.conf
env.java.opts.taskmanager: -Djava.security.krb5.conf=/opt/Bigdata/FusionInsight_BASE_8.1.0.1/1_5_KerberosClient/etc/kdc.conf
security.kerberos.login.keytab: /root/wwk/user.keytab #change to your keytab path
security.kerberos.login.principal: flink-user #change to your principal
env.java.opts: -Djava.security.krb5.conf=/root/wwk/krb5.conf -Djava.library.path=${HADOOP_COMMON_HOME}/lib/native #point to your krb5.conf; the library path adds the Hadoop native libraries
security.kerberos.login.use-ticket-cache: true
classloader.check-leaked-classloader: false
security.kerberos.login.contexts: Client,KafkaClient
zookeeper.sasl.service-name: zookeeper
zookeeper.sasl.login-context-name: Client
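Before going further, it is worth verifying that the keytab and krb5.conf actually work; a quick check using the placeholder values from the configuration above:
export KRB5_CONFIG=/root/wwk/krb5.conf   # same krb5.conf as in env.java.opts
kinit -kt /root/wwk/user.keytab flink-user   # returns silently if the keytab matches
klist   # should show a ticket for flink-user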
4. If Flink high availability is required, perform the following steps
a. Add the following settings to flink-conf.yaml:
high-availability.job.delay: 10 s
high-availability.storageDir: hdfs:///flink/recovery
high-availability.zookeeper.client.acl: creator
high-availability.zookeeper.client.connection-timeout: 15000
high-availability.zookeeper.client.max-retry-attempts: 3
high-availability.zookeeper.client.retry-wait: 5000
high-availability.zookeeper.client.session-timeout: 60000
high-availability.zookeeper.path.root: /flink
high-availability.zookeeper.quorum: 192.168.0.35:2181,192.168.0.112:2181,192.168.0.210:2181
high-availability: zookeeper
b. Replace the flink-shaded-zookeeper-3.4.14.jar under flink/lib
Delete flink-shaded-zookeeper-3.4.14.jar and copy flink-shaded-zookeeper-3.5.6-hw-ei-310012.jar from the lib directory of the MRS Flink client into the local flink/lib directory.
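A minimal sketch of the swap, assuming the MRS Flink client lives under /opt/Bigdata/client/Flink/flink (an assumed path; verify it in your environment):
rm lib/flink-shaded-zookeeper-3.4.14.jar
cp /opt/Bigdata/client/Flink/flink/lib/flink-shaded-zookeeper-3.5.6-hw-ei-310012.jar lib/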
5. Set HADOOP_CLASSPATH
With an MRS client installed you can simply run source /opt/Bigdata/client/bigdata_env; without a client, set the variable manually:
export HADOOP_CLASSPATH=/opt/hadoopclient/HDFS/hadoop/etc/hadoop:/opt/hadoopclient/HDFS/hadoop/share/hadoop/common/lib/*:/opt/hadoopclient/HDFS/hadoop/share/hadoop/common/*:/opt/hadoopclient/HDFS/hadoop/share/hadoop/hdfs:/opt/hadoopclient/HDFS/hadoop/share/hadoop/hdfs/lib/*:/opt/hadoopclient/HDFS/hadoop/share/hadoop/hdfs/*:/opt/hadoopclient/HDFS/hadoop/share/hadoop/mapreduce/*:/opt/hadoopclient/HDFS/hadoop/share/hadoop/yarn/lib/*:/opt/hadoopclient/HDFS/hadoop/share/hadoop/yarn/*:/opt/hadoopclient/HDFS/hadoop/share/hadoop/tools/lib/hadoop-huaweicloud-3.1.1-hw-42.jar:/opt/hadoopclient/HDFS/hadoop/share/hadoop/tools/lib/hadoop-huawei-obscommitter-3.1.1-hw-ei-310013.jar:/opt/hadoopclient/HDFS/hadoop/share/hadoop/tools/lib/hadoop-trace-ping-3.1.1-hw-ei-310013.jar
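Either way, a quick sanity check that the variable is populated and that YARN is reachable (on a secure cluster the kinit from step 3 must have been done first):
echo "$HADOOP_CLASSPATH" | tr ':' '\n' | head   # print the first few classpath entries
yarn node -list   # should list the cluster's NodeManagers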
6. Run a test job
bin/flink run -m yarn-cluster examples/batch/WordCount.jar
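If the defaults are too small, the usual per-job YARN options can be added; the memory and slot values below are only illustrative:
bin/flink run -m yarn-cluster -yjm 1024 -ytm 2048 -ys 2 examples/batch/WordCount.jar
yarn application -list   # the submitted job should appear here while it runs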
Comment (dullman, 2022/08/23 08:47):
Fix: the /plugins/obs-fs-hadoop/*.jar files are missing. Install and configure them following https://support.huaweicloud.com/bestpractice-obs/obs_05_1516.html, then copy them into the corresponding client directory.
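Following up on that comment, a sketch of installing the plugin (the jar filename below is hypothetical; the linked document names the actual artifact to download):
mkdir -p plugins/obs-fs-hadoop
cp /path/to/flink-obs-fs-hadoop-<version>.jar plugins/obs-fs-hadoop/   # hypothetical name; see the linked doc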