Image Credit: Chong Liu
Last week we had the first Weekly Robotics Community Meeting with some of the newsletter patrons. We kicked off the meeting with James showing us what formant.io has been up to and letting us teleoperate a Boston Dynamics Spot. Then we chatted about other legged platforms and ROS control. The next meetup will be public and should happen on the 18th of February. Stay tuned for more info! As in the past couple of weeks, the publication of the week section is handled by Rodrigo. The most clicked link last week was the Yet Another Robot Platform project with 11.2% opens.
Chong Liu created a neat bipedal robot based on a Raspberry Pi 4B and only four servos. The final presentation video is a good watch. I especially liked the simulation being used to find gaits for the robot.
The Ascent of the Robots - Short Documentary ART Safiental 2020
Here is a short documentary showing some of the work done by ETH researchers taking part in ART Safiental in 2020. In the documentary you will learn about three projects shown during the festival: making the ANYmal legged robot hike to the summit of a mountain, showcasing the MyoSuit exoskeleton, and mapping and digitizing dance for research.
Nintendo Switch Running ROS
Robin Fröjd, who is working on the K3lso Quadruped, recently purchased an unpatched Nintendo Switch console and proceeded to install Ubuntu on it, netting a neat controller for his project.
What is a Real-Time Operating System (RTOS)?
Driving Upside Down With An RC Fan Car
This is a cool one: an RC car modified with two fans that create enough downforce to let it drive upside down.
The Black Magic Of A Disappearing Linear Actuator
Kataka was a company that designed this neat concept of a linear actuator that uses a clever belt arrangement, making the actuator flat when retracted. The company no longer seems to be operating; fortunately, the Web Archive has a copy of their website.
Publication of the Week - Self-Supervised Linear Motion Deblurring (DATE)
Autonomous driving technologies are adopting cameras as their main navigation source, as can be seen in cars such as Tesla's. The use of this technology comes at a cost: motion blur caused by fast motion or low-light conditions degrades feature detection, motion estimation, object detection, and many other algorithms that rely on visual input. This paper proposes a solution for deblurring images using only inverse rendering. First, a correspondence is established between two consecutive blurry frames, and then the images are re-rendered assuming a linear blur kernel. This is very useful for small projects in which consumer-level cameras are mounted on moving objects.
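To get an intuition for the linear blur kernel the paper assumes, here is a minimal sketch (not the authors' code) of how such a kernel smears an image: camera motion along a line is modelled as averaging each pixel with its neighbours along that line. The kernel shape and the naive convolution below are illustrative assumptions, using only NumPy.

```python
import numpy as np

def linear_blur_kernel(length):
    # Horizontal linear motion blur: a row of equal weights summing to 1.
    return np.ones((1, length)) / length

def blur(image, kernel):
    # Naive "same"-size 2-D convolution with zero padding.
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)))
    out = np.empty(image.shape, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

# A single bright pixel smeared by horizontal motion over 5 pixels:
img = np.zeros((5, 9))
img[2, 4] = 1.0
blurred = blur(img, linear_blur_kernel(5))
```

Deblurring then amounts to inverting this operation; the paper's contribution is learning to do so self-supervised, by checking that re-blurring the estimate reproduces the observed frames.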
In issue #83 I started this section to try to help out those looking for work during the pandemic. If you are currently looking for work, feel free to send me your details in the same format as the entries below.
Name: Gautam Dobariya
Location: Germany, Netherlands, Austria, Belgium (currently in Germany), available for remote work
Skills: C++, Python, ROS, MATLAB, Linux, Gazebo, OpenCV, PCL, QT, Visual Studio Code, PyTorch, Git, Docker, Bash scripting, Solidworks
Profile: I am a mechatronics engineer looking for a Robotics Software Engineer role with a focus on SLAM and Computer Vision.
Social Profiles: LinkedIn, GitHub