These days, I’m thinking a lot about my robotics consulting journey. It’s been quite a ride: three office spaces, three house moves, work with some of the most brilliant people on the planet, and contributions to building some pretty decent robots. As I’m preparing to write yet another summary of my experience, if there is anything you think I should add to the next one, don’t hesitate to let me know. Rodrigo is off on holiday, so I (Mat) will cover today’s publication of the week. Last week’s most clicked link was a rover built from a single PCB, with 8.6% opens.
Weekly Robotics is made possible thanks to our Patreon supporters and the following business sponsors:
Save 35% on everything at Manning until the 16th of April
The entire Manning catalog is 35% off this week, and you might be interested in the following: my ROS 2 course, Rust in Action, Robotics for Programmers, Deep Reinforcement Learning for Self-Driving Robots.
Introducing Segment Anything: Working toward the first foundation model for image segmentation
Meta released an AI model for image segmentation. The dataset it was built from consists of 11M high-resolution images and 1.1B segmentation masks. To learn more about this work, check out the repository containing the model on GitHub.
Robotic hand can identify objects with just one grasp
MIT researchers have developed a three-finger end effector built with GelSight sensors. Each of the flexible fingers is equipped with a camera that monitors the deformation of the finger body. The researchers then trained a neural network on grasps of three different objects and achieved an 80% success rate in follow-up testing. You can learn more about this research in this paper.
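The core idea — telling objects apart from camera images of finger-surface deformation — can be sketched with a toy nearest-centroid classifier on synthetic “deformation images”. This is purely illustrative: the sensor model, the synthetic data, and the classifier below are my assumptions, not the authors’ pipeline (which uses real GelSight imagery and a neural network).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for GelSight-style tactile images: each object class
# leaves a distinct "contact band" in the deformation image, plus noise.
def make_tactile_batch(class_id, n, size=16):
    base = np.zeros((size, size))
    base[class_id * 4:(class_id + 1) * 4, :] = 1.0  # class-specific contact region
    return base + 0.1 * rng.standard_normal((n, size, size))

# "Training": compute one mean deformation image (centroid) per object class.
train = {c: make_tactile_batch(c, 20) for c in range(3)}
centroids = {c: imgs.mean(axis=0) for c, imgs in train.items()}

# Classify a new grasp by the nearest centroid in pixel space.
def classify(img):
    return min(centroids, key=lambda c: np.linalg.norm(img - centroids[c]))

# Evaluate on held-out synthetic grasps.
correct = sum(classify(img) == c
              for c in range(3)
              for img in make_tactile_batch(c, 10))
accuracy = correct / 30
```

On this clean synthetic data the centroid classifier is near-perfect; the interesting part of the real work is that a learned model achieves good accuracy from a single grasp of far messier real-world deformation images.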
Once More, With Feeling: Exploring Relatable Robotics at Disney
We’ve featured Disney’s presentation of these robots in issue #238. As it turns out, Judy is part of a larger project, nicknamed Indestructibles, where Disney plans to build robots with entertaining gaits that won’t mind an occasional tumble.
ROS, wherever you are
Jérémie Deray wrote an amazingly detailed blog post on using ROS with Multipass, an exciting and seemingly simple alternative to Docker.
Language Embedded Radiance Fields
“LERF optimizes a dense, multi-scale language 3D field by volume rendering CLIP embeddings along training rays, supervising these embeddings with multi-scale CLIP features across multi-view training images. After optimization, LERF can extract 3D relevancy maps for language queries interactively in real-time. LERF enables pixel-aligned queries of the distilled 3D CLIP embeddings without relying on region proposals, masks, or fine-tuning, supporting long-tail open-vocabulary queries hierarchically across the volume”.
Luxo: A robotic Pixar-style lamp that actually jumps
This is a relatively quick one, but excellent food for thought. Dheera Venkatraman created a Pixar-style jumping lamp and described it in this short blog post, including a video of the robot in action.
Publication of the Week - USTC FLICAR: A Multisensor Fusion Dataset of LiDAR-Inertial-Camera for Heavy-duty Autonomous Aerial Work Robots (2023)
This research presents USTC FLICAR, a dataset for developing SLAM and 3D reconstruction for heavy-duty autonomous aerial work robots (think VTOL aircraft doing some heavy lifting). To collect the dataset, the researchers built ‘Giraffe’, a robot based on a bucket truck. The dataset covers many sensors: an IMU, 4 LiDARs, 4 cameras (monocular and stereo), and a laser tracker for ground truth. For more information about this work, check out the project website (I especially recommend the Quick Use section).
Sidewalk delivery robot company Neubility secures $2.42M investment
“Korean sidewalk delivery robotics company Neubility announced that it has secured a $2.42 million (3 billion won) investment from Samsung Venture Investment. This brings a cumulative total investment raised by the company to date to $24.2M (30 billion won)”.