## 1. Environment Setup and ROS Installation

Before building the simulated robot car, we need a working development environment. ROS (Robot Operating System) is one of the most popular robotics frameworks, providing hardware abstraction, device drivers, libraries, visualization tools, and much more. I recommend Ubuntu 20.04 LTS with ROS Noetic; this is currently the most stable combination.

Installing ROS is less complicated than it may seem. First, make sure your system is up to date:

```bash
sudo apt update
sudo apt upgrade -y
```

Then add the ROS package source and key:

```bash
sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'
sudo apt-key adv --keyserver 'hkp://keyserver.ubuntu.com:80' --recv-key C1CF6E31E6BADE8868B172B4F42ED6FBAB17C654
```

Next, install the full ROS desktop environment:

```bash
sudo apt update
sudo apt install ros-noetic-desktop-full
```

After installation, remember to initialize rosdep and set up the environment variables:

```bash
sudo rosdep init
rosdep update
echo "source /opt/ros/noetic/setup.bash" >> ~/.bashrc
source ~/.bashrc
```

For convenience later on, I suggest installing a few common tools as well:

```bash
sudo apt install python3-rosinstall python3-rosinstall-generator python3-wstool build-essential
```

## 2. Creating the ROS Workspace and Configuring Gazebo

With the base ROS environment in place, we need a dedicated workspace for developing the simulated car. I usually create a catkin_ws workspace under the home directory:

```bash
mkdir -p ~/catkin_ws/src
cd ~/catkin_ws/
catkin_make
```

This generates the standard ROS workspace structure. Remember to add the workspace's environment setup to .bashrc as well:

```bash
echo "source ~/catkin_ws/devel/setup.bash" >> ~/.bashrc
source ~/.bashrc
```

Gazebo is ROS's default simulator, but to make sure all dependencies are complete, it is best to install the Gazebo components explicitly:

```bash
sudo apt install gazebo11 libgazebo11-dev ros-noetic-gazebo-ros-pkgs ros-noetic-gazebo-ros-control
```

Once installed, test that Gazebo starts correctly:

```bash
gazebo
```

If you see an empty simulation window, the installation succeeded. In real projects I have run into Gazebo starting with a black screen, usually caused by an incompatible graphics driver; try adding the --verbose flag to see detailed error output.
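Once `.bashrc` has been sourced, the ROS environment variables should be set in every new shell. As a quick sanity check, you can inspect them from Python; this is just a minimal sketch (the helper name `check_ros_env` is my own, not part of any official tooling):

```python
import os

def check_ros_env(env=None):
    """Return a list of problems with the ROS environment; empty if it looks sane.

    `env` defaults to os.environ; passing a plain dict makes the check testable.
    """
    env = os.environ if env is None else env
    problems = []
    if env.get("ROS_DISTRO") != "noetic":
        problems.append("ROS_DISTRO is %r, expected 'noetic'" % env.get("ROS_DISTRO"))
    if "/opt/ros/noetic" not in env.get("ROS_PACKAGE_PATH", ""):
        problems.append("ROS_PACKAGE_PATH does not include /opt/ros/noetic")
    return problems

if __name__ == "__main__":
    for p in check_ros_env():
        print("WARNING:", p)
```

If either warning appears, re-check that the `source` lines actually made it into your `~/.bashrc`.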
## 3. Building the Base Robot Model

Now let's create the URDF model for the simulated car. URDF (Unified Robot Description Format) is the XML format ROS uses to describe robot models. Create a new package under catkin_ws/src:

```bash
cd ~/catkin_ws/src
catkin_create_pkg my_robot rospy tf gazebo_ros
```

Create a urdf directory inside the my_robot package and add a robot.urdf file. A basic car model can be defined like this:

```xml
<robot name="my_robot">
  <link name="base_link">
    <visual>
      <geometry>
        <box size="0.3 0.2 0.1"/>
      </geometry>
    </visual>
    <collision>
      <geometry>
        <box size="0.3 0.2 0.1"/>
      </geometry>
    </collision>
    <inertial>
      <mass value="5"/>
      <inertia ixx="0.1" ixy="0" ixz="0" iyy="0.1" iyz="0" izz="0.1"/>
    </inertial>
  </link>

  <joint name="left_wheel_joint" type="continuous">
    <parent link="base_link"/>
    <child link="left_wheel"/>
    <origin xyz="0 0.15 0" rpy="1.5707 0 0"/>
    <axis xyz="0 1 0"/>
  </joint>

  <link name="left_wheel">
    <visual>
      <geometry>
        <cylinder length="0.05" radius="0.05"/>
      </geometry>
    </visual>
  </link>

  <!-- The right wheel is defined the same way, with origin xyz="0 -0.15 0" -->
</robot>
```

This model defines a box-shaped chassis and two cylindrical wheels. To make the car actually move in Gazebo, we also need to add the Gazebo-specific differential drive plugin:

```xml
<gazebo>
  <plugin name="differential_drive_controller" filename="libgazebo_ros_diff_drive.so">
    <commandTopic>cmd_vel</commandTopic>
    <odometryTopic>odom</odometryTopic>
    <odometryFrame>odom</odometryFrame>
    <robotBaseFrame>base_link</robotBaseFrame>
    <publishWheelTF>true</publishWheelTF>
    <wheelSeparation>0.3</wheelSeparation>
    <wheelDiameter>0.1</wheelDiameter>
    <publishWheelJointState>true</publishWheelJointState>
  </plugin>
</gazebo>
```
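Under the hood, libgazebo_ros_diff_drive converts each velocity command on cmd_vel into left and right wheel angular velocities using the wheelSeparation (0.3 m) and wheelDiameter (0.1 m) we just configured. The mapping is simple enough to sketch in a few lines of Python (an illustration of standard differential-drive kinematics, not code taken from the plugin):

```python
def diff_drive_wheel_speeds(v, w, wheel_separation=0.3, wheel_diameter=0.1):
    """Map a body velocity command to wheel angular velocities.

    v: forward velocity in m/s (Twist.linear.x)
    w: yaw rate in rad/s (Twist.angular.z)
    Returns (omega_left, omega_right) in rad/s.
    """
    r = wheel_diameter / 2.0
    v_left = v - w * wheel_separation / 2.0   # linear speed of the left wheel
    v_right = v + w * wheel_separation / 2.0  # linear speed of the right wheel
    return v_left / r, v_right / r

# Driving straight: both wheels spin at v/r.
print(diff_drive_wheel_speeds(0.5, 0.0))
# Turning in place: the wheels spin in opposite directions.
print(diff_drive_wheel_speeds(0.0, 1.0))
```

With our parameters, 0.5 m/s straight ahead works out to 10 rad/s on each wheel, which is a handy mental check when tuning velocity limits later.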
## 4. Adding a Camera Sensor

Visual perception is a key part of how a robot interacts with its environment, so let's mount a camera on the car. Continue adding to the URDF:

```xml
<link name="camera_link">
  <visual>
    <geometry>
      <box size="0.05 0.05 0.05"/>
    </geometry>
  </visual>
  <inertial>
    <mass value="0.1"/>
    <inertia ixx="0.0001" ixy="0" ixz="0" iyy="0.0001" iyz="0" izz="0.0001"/>
  </inertial>
</link>

<joint name="camera_joint" type="fixed">
  <parent link="base_link"/>
  <child link="camera_link"/>
  <origin xyz="0.15 0 0.1" rpy="0 0 0"/>
</joint>

<gazebo reference="camera_link">
  <sensor type="camera" name="camera1">
    <update_rate>30.0</update_rate>
    <camera name="head">
      <horizontal_fov>1.3962634</horizontal_fov>
      <image>
        <width>640</width>
        <height>480</height>
        <format>R8G8B8</format>
      </image>
      <clip>
        <near>0.02</near>
        <far>300</far>
      </clip>
    </camera>
    <plugin name="camera_controller" filename="libgazebo_ros_camera.so">
      <alwaysOn>true</alwaysOn>
      <updateRate>0.0</updateRate>
      <cameraName>camera</cameraName>
      <imageTopicName>image_raw</imageTopicName>
      <cameraInfoTopicName>camera_info</cameraInfoTopicName>
      <frameName>camera_link</frameName>
      <hackBaseline>0.07</hackBaseline>
      <distortionK1>0.0</distortionK1>
      <distortionK2>0.0</distortionK2>
      <distortionK3>0.0</distortionK3>
      <distortionT1>0.0</distortionT1>
      <distortionT2>0.0</distortionT2>
    </plugin>
  </sensor>
</gazebo>
```

This configures a 640x480 camera fixed to the front of the chassis. Once everything is in place, you can verify the camera is publishing:

```bash
roslaunch my_robot display.launch
rostopic list  # you should see the /camera/image_raw topic
```

## 5. Keyboard Teleoperation

To drive the car around, we need a keyboard control node. Create a scripts directory in the my_robot package and add a teleop.py file:

```python
#!/usr/bin/env python3
import rospy
from geometry_msgs.msg import Twist
import sys, select, termios, tty

msg = """
Control Your Robot!
---------------------------
Moving around:
   u    i    o
   j    k    l
   m    ,    .

q/z : increase/decrease max speeds by 10%
w/x : increase/decrease only linear speed by 10%
e/c : increase/decrease only angular speed by 10%

space key, k : force stop
anything else : stop smoothly

CTRL-C to quit
"""

moveBindings = {
    'i': (1, 0),
    'o': (1, -1),
    'j': (0, 1),
    'l': (0, -1),
    'u': (1, 1),
    ',': (-1, 0),
    '.': (-1, 1),
    'm': (-1, -1),
}

speedBindings = {
    'q': (1.1, 1.1),
    'z': (0.9, 0.9),
    'w': (1.1, 1),
    'x': (0.9, 1),
    'e': (1, 1.1),
    'c': (1, 0.9),
}

def getKey():
    tty.setraw(sys.stdin.fileno())
    rlist, _, _ = select.select([sys.stdin], [], [], 0.1)
    if rlist:
        key = sys.stdin.read(1)
    else:
        key = ''
    termios.tcsetattr(sys.stdin, termios.TCSADRAIN, settings)
    return key

def vels(speed, turn):
    return "currently:\tspeed %s\tturn %s" % (speed, turn)

if __name__ == "__main__":
    settings = termios.tcgetattr(sys.stdin)
    rospy.init_node('teleop_twist_keyboard')
    pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
    speed = rospy.get_param("~speed", 0.5)
    turn = rospy.get_param("~turn", 1.0)
    x = 0
    th = 0
    status = 0
    try:
        print(msg)
        print(vels(speed, turn))
        while True:
            key = getKey()
            if key in moveBindings.keys():
                x = moveBindings[key][0]
                th = moveBindings[key][1]
            elif key in speedBindings.keys():
                speed = speed * speedBindings[key][0]
                turn = turn * speedBindings[key][1]
                print(vels(speed, turn))
            else:
                x = 0
                th = 0
                if key == '\x03':
                    break
            twist = Twist()
            twist.linear.x = x * speed
            twist.linear.y = 0
            twist.linear.z = 0
            twist.angular.x = 0
            twist.angular.y = 0
            twist.angular.z = th * turn
            pub.publish(twist)
    except Exception as e:
        print(e)
    finally:
        twist = Twist()
        twist.linear.x = 0; twist.linear.y = 0; twist.linear.z = 0
        twist.angular.x = 0; twist.angular.y = 0; twist.angular.z = 0
        pub.publish(twist)
        termios.tcsetattr(sys.stdin, termios.TCSADRAIN, settings)
```

Remember to make the script executable:

```bash
chmod +x ~/catkin_ws/src/my_robot/scripts/teleop.py
```

Now you can drive the car around the Gazebo world with the keyboard. Launch the simulation:

```bash
roslaunch my_robot display.launch
```

Then run the control script in another terminal:

```bash
rosrun my_robot teleop.py
```
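A side note on the camera we configured earlier: Gazebo derives the pinhole camera intrinsics from horizontal_fov and the image size, which is worth knowing when you later consume /camera/camera_info. The relationship can be sketched like this (an illustration of the standard pinhole model, not the plugin's actual source):

```python
import math

def pinhole_intrinsics(width, height, horizontal_fov):
    """Approximate pinhole intrinsics from image size and horizontal FOV (radians)."""
    fx = (width / 2.0) / math.tan(horizontal_fov / 2.0)  # focal length in pixels
    fy = fx                                              # square pixels assumed
    cx, cy = width / 2.0, height / 2.0                   # principal point at image center
    return fx, fy, cx, cy

# Our camera: 640x480 with a 1.3962634 rad (80 degree) horizontal FOV.
fx, fy, cx, cy = pinhole_intrinsics(640, 480, 1.3962634)
print("fx = %.1f px, principal point = (%.0f, %.0f)" % (fx, cx, cy))
```

With these numbers the focal length comes out to roughly 381 pixels, which should match the K matrix published on /camera/camera_info.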
## 6. Integrating YOLO Object Detection

YOLO is one of the most popular real-time object detection algorithms. We will use the darknet_ros package to integrate YOLOv3. First install the dependencies:

```bash
sudo apt install ros-noetic-vision-msgs ros-noetic-image-transport ros-noetic-cv-bridge
```

Then download and build darknet_ros:

```bash
cd ~/catkin_ws/src
git clone --recursive https://github.com/leggedrobotics/darknet_ros.git
cd ~/catkin_ws
catkin_make -DCMAKE_BUILD_TYPE=Release
```

Download the pre-trained YOLO weights:

```bash
cd ~/catkin_ws/src/darknet_ros/darknet_ros/yolo_network_config/weights/
wget https://pjreddie.com/media/files/yolov3.weights
```

Next, configure darknet_ros to subscribe to our simulated camera's image topic. Edit ~/catkin_ws/src/darknet_ros/darknet_ros/config/ros.yaml:

```yaml
camera_reading:
  topic: /camera/image_raw
  queue_size: 1
```

Now start the YOLO detection node:

```bash
roslaunch darknet_ros darknet_ros.launch
```

Detection results are published on the /darknet_ros/bounding_boxes topic. To visualize them, use image_view:

```bash
rosrun image_view image_view image:=/darknet_ros/detection_image
```

## 7. Full System Integration and Testing

With all the components ready, let's create a launch file that brings up the whole system at once. Create simulation.launch in the my_robot/launch directory:

```xml
<launch>
  <!-- Load the robot model -->
  <param name="robot_description" command="$(find xacro)/xacro $(find my_robot)/urdf/robot.urdf" />

  <!-- Start Gazebo -->
  <include file="$(find gazebo_ros)/launch/empty_world.launch">
    <arg name="world_name" value="worlds/empty.world"/>
    <arg name="paused" value="false"/>
    <arg name="use_sim_time" value="true"/>
    <arg name="gui" value="true"/>
    <arg name="headless" value="false"/>
    <arg name="debug" value="false"/>
  </include>

  <!-- Spawn the robot in Gazebo -->
  <node name="spawn_urdf" pkg="gazebo_ros" type="spawn_model" args="-param robot_description -urdf -model my_robot" />

  <!-- Publish joint states -->
  <node name="robot_state_publisher" pkg="robot_state_publisher" type="robot_state_publisher" />

  <!-- Keyboard teleoperation -->
  <node name="teleop" pkg="my_robot" type="teleop.py" output="screen"/>

  <!-- YOLO detection -->
  <include file="$(find darknet_ros)/launch/darknet_ros.launch" />
</launch>
```

Now a single command starts the entire system:

```bash
roslaunch my_robot simulation.launch
```

In my testing, the Gazebo simulation combined with YOLO detection is quite demanding on compute resources. If your machine struggles, try lowering Gazebo's rendering quality or switching to a smaller model such as tiny YOLO.
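The BoundingBox messages published by darknet_ros carry, among other fields, a class label, a detection probability, and pixel coordinates (xmin, ymin, xmax, ymax). A typical first consumer of /darknet_ros/bounding_boxes simply filters detections by confidence; the core logic can be sketched ROS-free like this (the dataclass below only mirrors the message fields for illustration and is not the actual generated message class):

```python
from dataclasses import dataclass

@dataclass
class BoundingBox:
    # Mirrors the main fields of darknet_ros_msgs/BoundingBox, for illustration only.
    Class: str
    probability: float
    xmin: int
    ymin: int
    xmax: int
    ymax: int

def confident_detections(boxes, threshold=0.5):
    """Keep only detections whose confidence reaches the threshold."""
    return [b for b in boxes if b.probability >= threshold]

# Example: one strong and one weak detection, as YOLO might report them.
detections = [
    BoundingBox("person", 0.91, 120, 80, 260, 400),
    BoundingBox("dog", 0.32, 300, 200, 380, 280),
]
for b in confident_detections(detections):
    w, h = b.xmax - b.xmin, b.ymax - b.ymin
    print("%s (%.2f): %dx%d px box" % (b.Class, b.probability, w, h))
```

In a real subscriber callback you would apply the same filter to msg.bounding_boxes before acting on the results, which cuts down on the noisy low-confidence boxes YOLO tends to emit.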