ROS Robotics Projects (9): Virtual Reality (VR) Headsets and Leap Motion Gesture Control

zhangrelay, published 2021/07/15 05:32:50

Besides self-driving cars and artificial intelligence, another hot technology topic is virtual reality / augmented reality devices, which are the subject of Chapter 11 of this book.

For Leap Motion usage, refer to the official documentation and these earlier posts:

1 http://blog.csdn.net/zhangrelay/article/details/52356417

2 https://github.com/madratman/youbot_leapmotionteleop

 

ROS Android VR devices:

1 https://github.com/cloudspace/ros_cardboard

2 https://github.com/SPARbot/SPARbot-android-app

 

One additional, unrelated resource: https://github.com/Nurgak/Virtual-Robot-Challenge

ROS + V-REP

 

Virtual Robot Challenge

 

Image of the virtual robot in V-REP

The goal of the Virtual Robot Challenge (VRC) is to introduce the Robot Operating System (ROS) and the V-REP simulation environment by performing autonomous tasks with a simulated robot.

This demonstrates the advantages of developing in simulation before eventually moving on to real robots: the code stays the same when moving from simulation to a real robot, and testing in simulation is much less time-consuming (no mechanical failures, dead batteries, or sensor failures and calibration), cheaper, and generally requires less effort.

All software used within the scope of this project is free to use.

Theory

ROS is composed of small stand-alone entities called "nodes". Nodes talk to each other by publishing and subscribing to named message channels called "topics". Nodes can be written in C++ or Python; topics themselves are language-agnostic. A good introduction can be found on the ROS official website.
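The publish/subscribe idea can be illustrated without ROS at all. The sketch below is plain Python, not the rospy API; the topic name and message contents are made up for illustration:

```python
# Minimal publish/subscribe illustration: topics are named channels,
# nodes register callbacks (subscribe) or push messages (publish).
from collections import defaultdict

class TopicBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        # A node registers interest in a topic by name.
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Every subscriber of that topic receives the message.
        for callback in self._subscribers[topic]:
            callback(message)

bus = TopicBus()
received = []
bus.subscribe("/cmd_vel", received.append)                  # "driver" node listens
bus.publish("/cmd_vel", {"linear": 0.2, "angular": 0.0})    # "planner" node talks
```

In real ROS the bus is distributed across processes and machines, with the ROS master brokering who publishes and subscribes to what, but the mental model is the same.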

The ROS tutorials are well written and straightforward; it is recommended to follow them at least for installation and workspace setup, as this gives a good overview of how ROS works and how it is organized.

Hardware

The robot has two wheels and a caster wheel in the back for locomotion. In the front it has six distance sensors (four pointing forward and two pointing downward under the robot) and a linear camera producing an image 128 pixels wide and 1 pixel high.
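One way to picture the robot's sensor set is as a single snapshot structure; a sketch, with field names of my own invention rather than those of the actual V-REP scene:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SensorSnapshot:
    # Four forward-pointing distance sensors (meters; inf = nothing detected).
    front: List[float] = field(default_factory=lambda: [float("inf")] * 4)
    # Two downward-pointing distance sensors under the robot (meters).
    down: List[float] = field(default_factory=lambda: [float("inf")] * 2)
    # Linear camera: 128 pixels wide, 1 pixel high (grayscale, 0.0 = black).
    camera: List[float] = field(default_factory=lambda: [0.0] * 128)

snap = SensorSnapshot()
```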

The robot is based on the Robopoly demonstration robot and can be physically built; any type of robot can be adapted to the simulation.

Image of the virtual and real robot side by side

Software

The whole installation process has been fully tested on Ubuntu 14.04 with ROS Indigo. If such a machine is not available one can use VirtualBox, though this will slow down the simulation.

Installation

  1. Download and install the latest V-REP EDU and define the VREP_ROOT environment variable as the path to the V-REP install folder:

    $ export VREP_ROOT=<VREP path>

  2. Install ROS Indigo:

    $ sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'
    $ sudo apt-key adv --keyserver hkp://ha.pool.sks-keyservers.net --recv-key 0xB01FA116
    $ sudo apt-get update
    $ sudo apt-get install ros-indigo-desktop-full
    $ source /opt/ros/indigo/setup.bash

  3. Install the catkin tools:

    $ sudo apt-get install python-catkin-tools

  4. Create the ROS workspace in ~/catkin_ws:

    $ mkdir -p ~/catkin_ws/src
    $ cd ~/catkin_ws/src
    $ catkin_init_workspace

  5. Copy the ROS interface package from the V-REP install folder to the ROS workspace:

    $ cp -r <VREP path>/programming/ros_packages/v_repExtRosInterface ~/catkin_ws/src

  6. Clone this repository and copy the ROS package it contains to the workspace:

    $ git clone https://github.com/Nurgak/Virtual-Robot-Challenge.git
    $ cd Virtual-Robot-Challenge
    $ cp -r Software/ROS/vrc ~/catkin_ws/src

  7. Build the packages (grab a coffee):

    $ cd ~/catkin_ws
    $ catkin build

  8. Copy the ROS interface plugin to the V-REP root folder:

    $ cp ~/catkin_ws/devel/lib/libv_repExtRosInterface.so <VREP path>

  9. Source the setup file (this needs to be done in every new terminal):

    $ cd ~/catkin_ws
    $ source devel/setup.bash

  10. Export the ROS_MASTER_URI environment variable:

    $ export ROS_MASTER_URI=http://localhost:11311

 

Programming

The code that actuates the robot motors and publishes its sensor data is written in Lua inside V-REP. It is very simple and only exchanges data with the ROS framework; it contains no actual behavior logic. This is the place to add code if one wishes to add new sensors to the simulated robot. To view the code, open the V-REP scene and, in the scene hierarchy on the left, double-click the script icon next to the robot node.

Image of the robot node with a script icon besides it

The robot behavior is dictated by the Python scripts stored in ~/catkin_ws/src/vrc/src/. The sample.py script shows how to process the data from all the different sensors on the robot.

The launch files, stored in ~/catkin_ws/src/vrc/launch/, are used to start several nodes at the same time; think of them as shortcuts.
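A launch file is just an XML list of the nodes to start together. A hypothetical fragment in that spirit (the node and package names below are illustrative, not the actual contents of arena.launch):

```xml
<launch>
  <!-- A Python node from the vrc package, printing its output to the terminal. -->
  <node pkg="vrc" type="sample.py" name="sample" output="screen"/>
  <!-- A visualization node started alongside it. -->
  <node pkg="rviz" type="rviz" name="rviz"/>
</launch>
```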

Follow the line

The goal is to make the robot follow a line. The approach here is very basic, but one can always optimize the system to make it faster.

  1. Run ROS core in a terminal:

    $ roscore

  2. From another terminal run V-REP (always start V-REP after ROS core):

    $ sh <VREP path>/vrep.sh

  3. In V-REP open arena.ttt from the /Simulation folder.

  4. Click on the play button in V-REP to launch the simulation.

    V-REP play button

  5. In a third terminal launch the arena.launch file, which starts the required nodes:

    $ roslaunch vrc arena.launch

  6. Finally, in a fourth terminal run the line follower program:

    $ rosrun vrc line_follower.py

The robot should start following the line marked on the floor.

Image of the robot following a line
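The control loop behind a line follower on this robot reduces to one idea: find where the dark line sits in the 128-pixel camera image and steer toward it. A sketch of that idea in plain Python (not the actual line_follower.py; the threshold and gains are invented):

```python
def line_position(pixels, threshold=0.5):
    """Weighted centroid of dark pixels, mapped to [-1, 1].

    pixels: 128 grayscale values in [0, 1]; the line is darker than the floor.
    Returns 0.0 when no line is seen.
    """
    dark = [(i, 1.0 - p) for i, p in enumerate(pixels) if p < threshold]
    if not dark:
        return 0.0
    total = sum(w for _, w in dark)
    centroid = sum(i * w for i, w in dark) / total   # pixel index of the line
    return (centroid - 63.5) / 63.5                  # normalize: image center = 0

def wheel_speeds(error, base=0.3, gain=0.5):
    # Differential drive: steer toward the line by unbalancing the wheels.
    return base + gain * error, base - gain * error  # (left, right)

# Line centered in the image: both wheels run at the base speed.
image = [1.0] * 128
image[63] = image[64] = 0.0          # dark line in the middle
left, right = wheel_speeds(line_position(image))
```

When the line drifts toward one edge of the image the error grows, one wheel slows and the other speeds up, and the robot turns back onto the line; a proper implementation would publish the result as a velocity command each cycle.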

Teleoperation

One can remotely operate the robot from the keyboard using the teleop_twist_keyboard package:

$ sudo apt-get install ros-indigo-teleop-twist-keyboard

 

Stop the line follower script (ctrl+c) and run:

$ rosrun teleop_twist_keyboard teleop_twist_keyboard.py

 

The terminal must be in focus for the commands to be executed.
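Under the hood, teleop_twist_keyboard simply maps key presses to velocity commands. The core of that mapping can be sketched as follows (a simplified subset in the style of the classic layout, with 'i' forward, ',' backward, 'j'/'l' turning and 'k' stop; the real package supports more keys plus speed adjustment):

```python
# Key -> (linear x, angular z) unit commands.
KEY_BINDINGS = {
    "i": (1.0, 0.0),    # forward
    ",": (-1.0, 0.0),   # backward
    "j": (0.0, 1.0),    # turn left
    "l": (0.0, -1.0),   # turn right
    "k": (0.0, 0.0),    # stop
}

def twist_from_key(key, speed=0.5, turn=1.0):
    # Scale the unit command by the configured linear and angular rates;
    # unknown keys stop the robot.
    lin, ang = KEY_BINDINGS.get(key, (0.0, 0.0))
    return lin * speed, ang * turn
```

Each returned pair would be published as a Twist message on the topic the robot's motor node subscribes to.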

Exploration and mapping

ROS can build a map from the robot's sensor readings and position. First make sure the slam_gmapping and pointcloud_to_laserscan packages are installed:


  
  $ sudo apt-get install ros-indigo-slam-gmapping
  $ sudo apt-get install ros-indigo-pointcloud-to-laserscan

And launch the mapping scene with:


  
  $ roslaunch vrc mapping.launch
  $ rosrun vrc explorer.py

The launch file runs the gmapping and RVIZ visualization nodes. Accessible areas are marked in white and inaccessible areas in black.
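The white/black marking is an occupancy grid: each sensor ray marks the cells it crosses as free and its endpoint as occupied. A one-dimensional sketch of that update (gmapping does this probabilistically in 2D; this shows only the core idea, with my own cell-state encoding):

```python
FREE, OCCUPIED, UNKNOWN = 0, 1, -1

def apply_ray(grid, start, hit):
    """Mark cells along a 1-D ray: free up to the obstacle, occupied at it.

    grid: list of cell states; start: the robot's cell; hit: the cell where
    the sensor ray ended on an obstacle.
    """
    step = 1 if hit >= start else -1
    for cell in range(start, hit, step):
        grid[cell] = FREE            # the ray passed through: free space (white)
    if 0 <= hit < len(grid):
        grid[hit] = OCCUPIED         # the ray stopped here: obstacle (black)
    return grid

# A ray from cell 0 that hits an obstacle in cell 4.
grid = apply_ray([UNKNOWN] * 10, start=0, hit=4)
```

Cells the ray never touched stay unknown until some later ray crosses them, which is why the map fills in gradually as the robot explores.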

The explorer node makes the robot wander around the arena while avoiding obstacles: whenever it encounters an obstacle it turns on the spot for a random amount of time and then continues.

Image of the robot mapping in RVIZ
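The wander-and-turn behavior described above can be sketched as a two-state control loop (plain Python, not the actual explorer.py; the speeds and turn durations are invented):

```python
import random

def explorer_step(state, obstacle_ahead, rng=random):
    """One control step of the explorer: returns (new_state, (linear, angular)).

    state is ("forward", 0) or ("turning", steps_left); obstacle_ahead is a
    boolean derived from the front distance sensors.
    """
    mode, steps_left = state
    if mode == "forward":
        if obstacle_ahead:
            # Obstacle: turn on the spot for a random number of steps.
            return ("turning", rng.randint(5, 20)), (0.0, 1.0)
        return ("forward", 0), (0.3, 0.0)
    # Turning: count down (ignoring the sensors), then drive forward again.
    if steps_left > 1:
        return ("turning", steps_left - 1), (0.0, 1.0)
    return ("forward", 0), (0.3, 0.0)
```

Called once per control cycle with fresh sensor data, this produces the random-walk coverage pattern that lets gmapping gradually see the whole arena.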

Physical robot

When the simulation yields satisfying results, one can use a real robot that subscribes to and publishes the same ROS topics as the virtual robot; this way the same software running on the computer can be reused. The implementation can be done in various ways and is left to the reader.

 

----

 

 

 

Source: zhangrelay.blog.csdn.net, author: zhangrelay. Copyright belongs to the original author; please contact the author for reprint permission.

Original link: zhangrelay.blog.csdn.net/article/details/77325210
