A Small Difference Between Control and Intelligence in the Robot Operating System (ROS)
Related reading: PyRobot: An Open Source Robotics Research Platform, https://pyrobot.org
Here is a brief introduction, using the Robot Operating System (ROS) as the example.
Why choose the Robot Operating System? The goal: to control any robot through standard messages and interfaces.
Take the Cozmo robot as an example.
Start Cozmo's ros2 driver:
2019-06-24 18:54:31,011 cozmo.general INFO App connection established. sdk_version=1.4.10 cozmoclad_version=3.4.0 app_build_version=00003.00004.00000
2019-06-24 18:54:31,012 cozmo.general INFO Found robot id=1
2019-06-24 18:54:31,029 cozmo.general INFO Connected to iOS device_id=1 serial=570e1c0859620b942675b7f7010c14f3e086de48
2019-06-24 18:54:31,317 cozmo.general INFO Robot id=1 serial=45a25ba6 initialized OK
[INFO] []: camera calibration URL: file:///home/relaybot/Robtools/Cozmo/ros2cozmo/src/cozmo_driver/config/cozmo_camera.yaml
$ ros2 topic list
/backpack_led
/battery
/cmd_vel
/cozmo_camera/camera_info
/cozmo_camera/image
/diagnostics
/head_angle
/imu
/joint_states
/lift_height
/odom
/say
/tf
The /backpack_led topic controls the backpack LEDs, /cmd_vel controls Cozmo's speed and steering, and /say makes Cozmo speak.
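These topics do not have to be driven from the command line. As a minimal sketch, assuming the driver accepts the standard geometry_msgs/Twist on /cmd_vel (the conventional type for that topic; the node name and speed values below are illustrative), a small rclpy node can do the same:

import rclpy
from geometry_msgs.msg import Twist

def main():
    rclpy.init()
    node = rclpy.create_node('cozmo_teleop_sketch')  # illustrative name
    pub = node.create_publisher(Twist, '/cmd_vel', 10)

    msg = Twist()
    msg.linear.x = 0.05   # gentle forward speed (illustrative value)
    msg.angular.z = 0.2   # gentle left turn (illustrative value)

    def tick():
        pub.publish(msg)  # re-publish periodically, like `ros2 topic pub -r`

    node.create_timer(0.1, tick)  # 10 Hz
    try:
        rclpy.spin(node)
    finally:
        node.destroy_node()
        rclpy.shutdown()

if __name__ == '__main__':
    main()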
Take /say as an example:
$ ros2 topic pub -r 0.1 /say std_msgs/String "data: Hello relay, I am cozmo."
publisher: beginning loop
publishing std_msgs.msg.String(data='Hello relay, I am cozmo.')
publishing std_msgs.msg.String(data='Hello relay, I am cozmo.')
publishing std_msgs.msg.String(data='Hello relay, I am cozmo.')
publishing std_msgs.msg.String(data='Hello relay, I am cozmo.')
publishing std_msgs.msg.String(data='Hello relay, I am cozmo.')
publishing std_msgs.msg.String(data='Hello relay, I am cozmo.')
publishing std_msgs.msg.String(data='Hello relay, I am cozmo.')
publishing std_msgs.msg.String(data='Hello relay, I am cozmo.')
publishing std_msgs.msg.String(data='Hello relay, I am cozmo.')
Many ros2 command arguments are similar to their ros1 counterparts, so learning ros2 is also a chance to review ros1.
Now let's look at Cozmo's camera. First, review the following command:
$ ros2 run image_tools showimage -- -h
Usage:
-h: This message.
-r: Reliability QoS setting:
0 - best effort
1 - reliable (default)
-d: Queue depth. 10 (default)
-f: Publish frequency in Hz. 30 (default)
-k: History QoS setting:
0 - only keep last sample
1 - keep all the samples (default)
-s: Camera stream:
0 - Do not show the camera stream
1 - Show the camera stream
-t TOPIC: use topic TOPIC instead of the default
Then use the following command:
$ ros2 run image_tools showimage -t /cozmo_camera/image
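Alternatively, a node of your own can display the stream. A minimal sketch, assuming /cozmo_camera/image carries sensor_msgs/Image and using cv_bridge the same way the line-follower code below does (the node name is illustrative):

import rclpy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge
import cv2

def main():
    rclpy.init()
    node = rclpy.create_node('cozmo_image_viewer')  # illustrative name
    bridge = CvBridge()

    def on_image(msg):
        # Convert the ROS image message to an OpenCV BGR frame and show it.
        frame = bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
        cv2.imshow('cozmo_camera', frame)
        cv2.waitKey(1)

    node.create_subscription(Image, '/cozmo_camera/image', on_image, 10)
    try:
        rclpy.spin(node)
    finally:
        node.destroy_node()
        rclpy.shutdown()

if __name__ == '__main__':
    main()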
Getting the track into view:
You can adjust the head angle or raise the lift as needed, to see farther ahead or to avoid blind spots:
$ ros2 topic pub -r 0.1 /head_angle std_msgs/Float64 "data: -20.0"
$ ros2 topic pub -r 0.1 /lift_height std_msgs/Float64 "data: 1.0"
Now back to the main topic:
How can an automatic control algorithm drive the robot? Reference code for study:
The actual track here is a white dashed line, while the reference code follows a yellow solid line: https://github.com/okoeth/cozmo-linefollow
def image_callback(self, msg):
    # Convert the ROS image message to an OpenCV BGR image.
    image = self.bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
    # Very loose HSV bounds: these match most bright pixels, which is
    # why the "yellow" filter also picks up the white dashed track.
    lower_yellow = numpy.array([10, 10, 10])
    upper_yellow = numpy.array([255, 255, 250])
    mask = cv2.inRange(hsv, lower_yellow, upper_yellow)

    # BEGIN CROP: keep only a 20-pixel horizontal band near the bottom
    # of the image, where the track directly ahead of the robot appears.
    h, w, d = image.shape
    search_top = 3 * h // 4  # integer division: array indices must be ints
    search_bot = search_top + 20
    mask[0:search_top, 0:w] = 0
    mask[search_bot:h, 0:w] = 0
    # END CROP

    # BEGIN FINDER: centroid of the masked pixels via image moments.
    M = cv2.moments(mask)
    if M['m00'] > 0:
        cx = int(M['m10'] / M['m00'])
        cy = int(M['m01'] / M['m00'])
        # END FINDER
        # BEGIN CIRCLE: mark the detected line centroid.
        cv2.circle(image, (cx, cy), 20, (0, 0, 255), -1)
        # END CIRCLE

    cv2.imshow("window", image)
    cv2.waitKey(3)
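The callback above only detects the line's centroid. To actually drive, the follower turns the centroid's horizontal offset into a steering command. A hedged sketch of that step in the style of the classic ROS line-follower; the helper name, gain, and speed are illustrative, not the repository's exact values:

from geometry_msgs.msg import Twist

def steer_from_centroid(cx, w, gain=1.0 / 100.0, speed=0.05):
    """Proportional controller: steer against the centroid's offset
    from the image center. gain and speed are values to tune."""
    err = cx - w / 2.0            # lateral error in pixels
    twist = Twist()
    twist.linear.x = speed        # constant forward speed
    twist.angular.z = -gain * err # turn back toward the line
    return twist

Inside image_callback, whenever M['m00'] > 0, the resulting Twist would be published on /cmd_vel, closing the perception-to-control loop.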
How can an artificial-intelligence algorithm drive the robot? https://github.com/benjafire/CozmoSelfDriveToyUsingCNN/
#!/usr/bin/env python
"""
Copyright (c) 2017, benjamin wu
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

* Redistributions of source code must retain the above copyright
  notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
  notice, this list of conditions and the following disclaimer in the
  documentation and/or other materials provided with the distribution.
* Neither the name of Ryan Dellana nor the
  names of its contributors may be used to endorse or promote products
  derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL Ryan Dellana BE LIABLE FOR ANY
DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import os

import cv2
import numpy as np
import tensorflow as tf

from cozmo_cnn_models import cnn_cccccfffff


def load_dataset(path, percent_testing=None):
    """Load images named <timestamp>_<lwheel>_<rwheel>.jpg and split them."""
    assert percent_testing is None or 0.0 <= percent_testing <= 1.0
    x, y, fnames = [], [], []
    for (d, sub_dirs, files_) in os.walk(path):
        fnames.extend(files_)
    # Sort the frames by the timestamp encoded in each filename.
    seq_fname = []
    for fname in fnames:
        seq = float(fname.split('_')[0])
        seq_fname.append((seq, fname))
    seq_fname.sort()
    for (seq, fname) in seq_fname:
        img = cv2.imread(path + '/' + fname)
        img = cv2.resize(img, (200, 150), interpolation=cv2.INTER_CUBIC)
        img = img[35:, :, :]  # crop away the rows above the track
        x.append(img)
        # Wheel speeds are encoded in the filename, scaled down by 100.
        timestamp, lwheel, rwheel = fname.split('_')
        timestamp = float(timestamp)
        lwheel = float(lwheel) / 100.0
        rwheel = float(rwheel.split('.jpg')[0]) / 100.0
        y.append(np.array([lwheel, rwheel]))
        print('(timestamp, lwheel, rwheel):', timestamp, lwheel, rwheel)
    train_x, train_y, test_x, test_y = [], [], [], []
    if percent_testing is not None:
        tst_strt = int(len(x) * (1.0 - percent_testing))
        train_x, train_y = x[:tst_strt], y[:tst_strt]
        test_x, test_y = x[tst_strt:], y[tst_strt:]
    else:
        train_x, train_y = x, y
    return train_x, train_y, test_x, test_y


path = ''  # fill in the directory of training images

train_x, train_y, test_x, test_y = load_dataset(path=path, percent_testing=0.20)

num_epochs = 100
batch_size = 100

# Drop items from the dataset so its length is divisible by batch_size.
train_x = train_x[0:-1 * (len(train_x) % batch_size)]
train_y = train_y[0:-1 * (len(train_y) % batch_size)]
test_x = test_x[0:-1 * (len(test_x) % batch_size)]
test_y = test_y[0:-1 * (len(test_y) % batch_size)]

print('len(test_x) =', len(test_x))

batches_per_epoch = int(len(train_x) / batch_size)

sess = tf.InteractiveSession()
model = cnn_cccccfffff()
train_step = tf.train.AdamOptimizer(1e-4).minimize(model.loss)
correct_prediction = tf.equal(tf.argmax(model.y_out, 1), tf.argmax(model.y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float32"))
saver = tf.train.Saver()
sess.run(tf.global_variables_initializer())

for i in range(num_epochs):
    for b in range(0, batches_per_epoch):
        batch = [train_x[b * batch_size:(b + 1) * batch_size],
                 train_y[b * batch_size:(b + 1) * batch_size]]
        # --- normalize batch: scale pixel values into [0, 1] ---
        batch_ = [[], []]
        for j in range(len(batch[0])):
            batch_[0].append(batch[0][j].astype(dtype=np.float32) / 255.0)
            batch_[1].append(batch[1][j].astype(dtype=np.float32))
        batch = batch_
        # --------------------------------------------------------
        train_step.run(feed_dict={model.x: batch[0], model.y_: batch[1],
                                  model.keep_prob_fc1: 0.8, model.keep_prob_fc2: 0.8,
                                  model.keep_prob_fc3: 0.8, model.keep_prob_fc4: 0.8})
    print('epoch', i, 'complete')
    if i % 5 == 0:
        test_error = 0.0
        for b in range(0, len(test_x), batch_size):
            batch = [test_x[b:b + batch_size], test_y[b:b + batch_size]]
            # --- normalize batch ---
            batch_ = [[], []]
            for j in range(len(batch[0])):
                batch_[0].append(batch[0][j].astype(dtype=np.float32) / 255.0)
                batch_[1].append(batch[1][j].astype(dtype=np.float32))
            batch = batch_
            test_error_ = model.loss.eval(
                feed_dict={model.x: batch[0], model.y_: batch[1],
                           model.keep_prob_fc1: 1.0, model.keep_prob_fc2: 1.0,
                           model.keep_prob_fc3: 1.0, model.keep_prob_fc4: 1.0})
            test_error += test_error_
        test_error /= len(test_x) / batch_size
        test_accuracy = 1.0 - test_error
        print("test accuracy %g" % test_accuracy)

filename = saver.save(sess, './cozmo_run_modelv2.ckpt')
Automatic control: set a color, detect it, and apply a fixed control law.
Artificial intelligence: given a policy, keep learning from data and gradually improve.
This contrast is not rigorous; it is offered for reference only.
News:
ROSCon Fr 2019, mixing the ROSCon method and the French research community, established a programme committee and made a public call for submissions. The programme committee received 20 submissions.
After review by the programme committee, 20 presentations were selected and given slots ranging from 10 to 30 minutes (based on the speaker proposals). The selected presentations covered a broad range of ROS-related topics, such as:
- a presentation from SNCF (French National railway company) introducing ROS-Railways;
- a joint presentation between SNCF and Generation Robots to present a novel method to detect and follow rails;
- the introduction of Rust with ROS 2 by Easymov;
- a ros_control layer for Dynamixel servo motors by LORIA;
- an introduction on how to use ROS to control buildings;
- a demonstration of how you can use ROS for your own personal home projects, such as ROSifying an Anki robot;
- a discussion of how Amazon is robustifying ROS 2 to deploy robot applications in a cloud environment.
Also in the ROSCon and developer conference tradition, half an hour of the programme was made available for lightning talks. It was fun and interesting for both the presenters and the audience.
Omri Ben-Bassat | ROS'n'Roll - How I ROSified my little Anki Vector home robot | This presentation will introduce the Vector robot by Anki and show how I ROSified it and what you can do with it using the open-source vector_ros package I built. We'll learn more about Vector and what makes it such a great robot for entry-level robotics R&D. We'll talk about the ROSification process itself and the ROSification of robots in general. I'll present the difficulties and dilemmas I had while working on this package and finally, we'll see a small live demo of the robot tracking a red ball with ROS!
Source: zhangrelay.blog.csdn.net. Author: zhangrelay. Copyright belongs to the original author; please contact the author for permission to reprint.
Original link: zhangrelay.blog.csdn.net/article/details/93510762