Setting up a standalone Zookeeper, HBase, and Phoenix debugging environment
[Abstract] A brief walkthrough of building a single-machine Zookeeper, HBase, and Phoenix development environment for simple functional testing.
Overview
HBase is well suited to storing large volumes of NoSQL data with low demands on relational operations. When developing and debugging on your own, server resources rarely stretch to a full distributed cluster, so this article describes a single-machine setup of Zookeeper, HBase, and Phoenix for debugging. Plenty of similar guides exist online. The example installs everything on one CentOS machine; the same approach also works on Windows.
Component versions:
Zookeeper:3.5.1
HBase:1.3.1
Phoenix:v4.15.0-HBase-1.3
Standalone Zookeeper deployment
1. Download a Zookeeper release from https://zookeeper.apache.org/releases.html; here we use apache-zookeeper-3.5.1-bin.tar.gz.
2. Pick a directory on the CentOS machine and extract apache-zookeeper-3.5.1-bin.tar.gz.
3. Under the Zookeeper directory, create a data directory for Zookeeper's data and a logs directory for its log files, here /opt/kernel/zookeeper/data and /opt/kernel/zookeeper/logs.
4. In the conf directory, copy zoo_sample.cfg to zoo.cfg and set the data and log directory properties:
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/opt/kernel/zookeeper/data
dataLogDir=/opt/kernel/zookeeper/logs
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
# the maximum number of connections accept by a single server
# despite the client. The default is 0 (unlimited). Increase
# this value to impose a limit
#maxCnxns=0
5. Start Zookeeper by running sh zkServer.sh start in the bin directory.
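The steps above can be sketched as a few shell commands. /tmp/zk-demo below is a stand-in for wherever you extracted the tarball (the article uses /opt/kernel/zookeeper); substitute your real install path.

```shell
# Stand-in for the extracted apache-zookeeper-3.5.1-bin directory.
ZK_HOME=/tmp/zk-demo
# Step 3: create the data and log directories.
mkdir -p "$ZK_HOME/conf" "$ZK_HOME/data" "$ZK_HOME/logs"

# Step 4: a minimal zoo.cfg pointing at those directories
# (unquoted EOF so $ZK_HOME expands into the file).
cat > "$ZK_HOME/conf/zoo.cfg" <<EOF
tickTime=2000
initLimit=10
syncLimit=5
dataDir=$ZK_HOME/data
dataLogDir=$ZK_HOME/logs
clientPort=2181
EOF

# Step 5 (needs a real Zookeeper install, so left commented here):
# sh "$ZK_HOME/bin/zkServer.sh" start
# Quick health check once it is running:
# echo ruok | nc localhost 2181
```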
Standalone HBase deployment
Because standalone HBase starts its own embedded Zookeeper, it cannot share the locally started Zookeeper from above; stop that Zookeeper first.
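Alternatively, if you would rather keep using the external Zookeeper, HBase's standard switch for this is HBASE_MANAGES_ZK in conf/hbase-env.sh. A minimal sketch, with /tmp/hbase-demo standing in for your real HBase install directory:

```shell
# Stand-in for the extracted HBase directory; substitute your real path.
HBASE_HOME=/tmp/hbase-demo
mkdir -p "$HBASE_HOME/conf"

# Tell the start scripts not to launch the bundled Zookeeper.
echo 'export HBASE_MANAGES_ZK=false' >> "$HBASE_HOME/conf/hbase-env.sh"

# With this set, hbase.zookeeper.quorum in hbase-site.xml should name
# the external Zookeeper host (e.g. localhost, clientPort 2181).
```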
1. Download an HBase release from https://hbase.apache.org/downloads.html; here we use hbase-1.3.1-bin.tar.gz.
2. Pick a directory on the CentOS machine and extract hbase-1.3.1-bin.tar.gz.
3. Create a data directory and a Zookeeper data directory, for HBase's data and the embedded Zookeeper's data respectively, here /opt/kernel/hbase/hbase_data and /opt/kernel/hbase/zookeeper_data.
4. In conf/hbase-site.xml, add the HBase data directory and Zookeeper data directory properties:
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>file:///opt/kernel/hbase/hbase_data</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/opt/kernel/hbase/zookeeper_data</value>
  </property>
</configuration>
5. Start HBase by running sh start-hbase.sh in the bin directory.
6. Open the master page at http://localhost:16010/master-status.
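Once the master page is up, a quick smoke test from the hbase shell confirms reads and writes work. The table and column names below are made up for illustration; the sketch writes the commands to a file you can feed to the shell.

```shell
# Illustrative hbase shell commands: create a table, write a cell,
# read it back, then clean up. Quoted 'EOF' keeps the text literal.
cat > /tmp/hbase-smoke.txt <<'EOF'
create 'demo', 'cf'
put 'demo', 'row1', 'cf:greeting', 'hello'
scan 'demo'
disable 'demo'
drop 'demo'
exit
EOF

# Run against a live standalone HBase (path is your real install):
# /opt/kernel/hbase/bin/hbase shell /tmp/hbase-smoke.txt
```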
Installing Phoenix
Phoenix installs on top of an existing HBase.
1. Download Phoenix from https://phoenix.apache.org/download.html, choosing 4.15.0-HBase-1.3 to get apache-phoenix-4.15.0-HBase-1.3-bin.tar.gz.
2. Copy the package to the CentOS machine and extract it. Inside, find phoenix-4.15.0-HBase-1.3-server.jar and phoenix-core-4.15.0-HBase-1.3.jar, copy them into HBase's lib directory, and restart HBase.
3. From the Phoenix directory, run bin/sqlline.py <ZK ip>:<ZK port>, e.g. bin/sqlline.py localhost:2181, to enter the Phoenix command line.
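From the sqlline prompt you can issue SQL directly; sqlline.py also accepts a SQL script file after the quorum address. A sketch with an illustrative table (the name and columns are made up):

```shell
# Sample Phoenix SQL: create a table, upsert a row, read it back.
# Quoted 'EOF' keeps the SQL literal.
cat > /tmp/phoenix-demo.sql <<'EOF'
CREATE TABLE IF NOT EXISTS demo_user (
    id BIGINT NOT NULL PRIMARY KEY,
    name VARCHAR
);
UPSERT INTO demo_user VALUES (1, 'alice');
SELECT * FROM demo_user;
EOF

# Run it non-interactively against the live cluster:
# bin/sqlline.py localhost:2181 /tmp/phoenix-demo.sql
```

Note that Phoenix uses UPSERT rather than the standard INSERT.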
[Notice] This content comes from a Huawei Cloud developer community blogger and does not represent the views or positions of Huawei Cloud or its developer community. When reprinting, you must credit the source (Huawei Cloud community), the article link, and the author; otherwise the author and the community reserve the right to pursue liability. If you find suspected plagiarism in the community, please report it by email with supporting evidence; confirmed infringing content will be removed immediately. Report mailbox: cloudbbs@huaweicloud.com
Comment by Lettle whale (2021/02/10):
You can refer to existing posts online:
https://www.cnblogs.com/ngy0217/p/10538336.html
https://blog.csdn.net/lu1171901273/article/details/86518494
The main task is getting Hadoop's configuration files right.
hdfs-site.xml:
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>/opt/kernel/hadoop/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/opt/kernel/hadoop/datanode</value>
</property>
<property>
<name>dfs.http.address</name>
<value>ecs-XXX:50070</value>
</property>
<property>
<name>dfs.secondary.http.address</name>
<value>ecs-XXX:50090</value>
</property>
</configuration>
Comment by Lettle whale (2021/02/10):
core-site.xml:
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://ecs-XXX:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/opt/kernel/hadoop/tmp</value>
</property>
<property>
<name>hadoop.native.lib</name>
<value>false</value>
<description>Should native hadoop libraries, if present, be used.
</description>
</property>
<property>
<name>dfs.client.use.datanode.hostname</name>
<value>true</value>
</property>
</configuration>
Comment by Lettle whale (2021/02/10):
yarn-site.xml:
<configuration>
<!-- Site specific YARN configuration properties -->
<property>
<name>yarn.resourcemanager.hostname</name>
<value>ecs-XXX</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>
Comment by Lettle whale (2021/02/10):
mapred-site.xml:
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
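With the four config files above in place, the usual pseudo-distributed bring-up is: format the namenode once, start HDFS, start YARN, then confirm the daemons with jps. A sketch that records the sequence as a script ($HADOOP_HOME is an assumption about your install location):

```shell
# The standard Hadoop bring-up sequence, written to a script for reference.
# Quoted 'EOF' keeps $HADOOP_HOME literal inside the script.
cat > /tmp/hadoop-bringup.sh <<'EOF'
# One-time only: format the namenode (wipes HDFS metadata).
"$HADOOP_HOME/bin/hdfs" namenode -format
# Start HDFS: namenode, datanode, secondary namenode.
"$HADOOP_HOME/sbin/start-dfs.sh"
# Start YARN: resourcemanager, nodemanager.
"$HADOOP_HOME/sbin/start-yarn.sh"
# List the running Java daemons to verify.
jps
EOF
```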
Comment by Lettle whale (2021/04/08):
If startup aborts with "ERROR: but there is no HDFS_NAMENODE_USER defined. Aborting operation. Starting datanodes", see https://blog.csdn.net/oschina_41140683/article/details/93976752
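The fix described at that link is the common one for Hadoop 3.x: declare which OS users run each daemon in hadoop-env.sh. A sketch, with /tmp/hadoop-demo standing in for your real $HADOOP_HOME and root as an assumed (not recommended for production) user:

```shell
# Stand-in for the real Hadoop install; substitute your actual path.
HADOOP_HOME=/tmp/hadoop-demo
mkdir -p "$HADOOP_HOME/etc/hadoop"

# Declare the daemon users the start scripts check for.
# "root" here is an assumption; use the account you actually run as.
cat >> "$HADOOP_HOME/etc/hadoop/hadoop-env.sh" <<'EOF'
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
EOF
```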
Comment by Lettle whale (2021/03/17):
See https://blog.csdn.net/pengdayong77/article/details/85939932