Off The Shelf Robotics 5/2021 - 7/2022
I co-founded OTS Robotics to bring a new, simplified robotic fulfillment system to the online grocery space that exploded during the pandemic.
As robotics lead, I designed, manufactured, and programmed two robots to perform automated storage and retrieval of grocery totes.
Our system is unique due to the fast setup time enabled by a vision-based guidance system that relies on vertically-mounted fiducials rather than a floor-based grid.
This video demonstrates our MVP system of robots working together to fully autonomously retrieve a target tote container from the shelf.
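The core geometric idea behind fiducial-based guidance can be sketched in a few lines. This is an illustrative simplification, not our production pipeline: it assumes a single detected wall-mounted tag with a known world position, a measured range and bearing from the camera, and a known robot heading; the function name and parameters are hypothetical.

```python
import math

def locate_robot(tag_xy, dist, bearing, robot_heading):
    """Recover the robot's 2D world position from one fiducial observation.

    tag_xy        -- known world position (x, y) of the vertical fiducial
    dist          -- measured camera-to-tag distance
    bearing       -- angle of the tag in the robot frame (radians)
    robot_heading -- robot yaw in the world frame (radians)
    """
    # The tag lies at distance `dist` along direction (robot_heading + bearing),
    # so the robot sits the same distance back along that ray from the tag.
    ang = robot_heading + bearing
    return (tag_xy[0] - dist * math.cos(ang),
            tag_xy[1] - dist * math.sin(ang))
```

Because each tag's world pose is surveyed once at install time, no floor grid needs to be laid out, which is what makes the fast setup possible.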
Classical and Deep Learning Computer Vision 8/2022 - 12/2022
The RBE 549 Computer Vision course taught by Dr. Nitin Sanket included 5 hands-on projects spanning a variety of classical and deep learning methods:
NeRF and Structure from Motion (3D model from 2D images)
Visual-Inertial Odometry
Semantic LiDAR Mapping
Auto Panorama Stitching
FaceSwap
Auto Camera Calibration (solve for camera params)
Probability-of-Boundary Edge Detector
A few sample visuals are to the right, and I'd be happy to discuss any of these projects further in detail.
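As one example of the classical methods involved, the core of panorama stitching is estimating a homography between matched keypoints. Below is a minimal sketch of the Direct Linear Transform step, assuming point correspondences have already been found; it is illustrative, not the course implementation.

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography H mapping src points to dst points
    (>= 4 correspondences) via the Direct Linear Transform."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two rows of the constraint A h = 0
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # h is the right singular vector with the smallest singular value
    _, _, Vt = np.linalg.svd(np.array(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize scale
```

In a full stitcher this sits inside a RANSAC loop to reject bad matches before warping and blending the images.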
Multi-Medium Semantic SLAM 2/2022 - 9/2022
Sparse environments with repetitive textures and features are an edge case for existing SLAM methods such as ORB-SLAM2
ORB-SLAM2 drifts by up to 2 meters (20%) on this test
We utilize semantic segmentation and YOLO object detector networks to extract higher-level features that are more constant between frames
As a result of incorporating both high-level and low-level features, we reduce tracking error by up to 70%
See full video here
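The object-level feature idea can be sketched simply: detections from the object detector are associated across frames by class label and box overlap, giving landmarks that persist where low-level keypoints fail. This is a hedged, minimal illustration (greedy matching on a hypothetical `(label, box)` representation), not the project's actual data-association code.

```python
def iou(a, b):
    # a, b: axis-aligned boxes (x1, y1, x2, y2)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def match_objects(prev, curr, min_iou=0.3):
    """Greedily associate (label, box) detections across frames.
    Matched objects act as stable high-level landmarks in
    texture-poor regions where keypoint matching drifts."""
    matches, used = [], set()
    for i, (lbl_p, box_p) in enumerate(prev):
        best, best_iou = None, min_iou
        for j, (lbl_c, box_c) in enumerate(curr):
            if j in used or lbl_c != lbl_p:
                continue
            o = iou(box_p, box_c)
            if o > best_iou:
                best, best_iou = j, o
        if best is not None:
            used.add(best)
            matches.append((i, best))
    return matches
```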
DigSafe Autonomous Cable Detection Robot (Aug 2021 - May 2022)
Industry-Sponsored Research (AIR Lab)
This project was sponsored by Eversource to create a novel robot that could autonomously detect, follow, and mark buried electrical cables prior to construction operations
Sensor fusion was performed on data from LiDAR, Camera, IMU, encoders, GPS, and the EM sensor to autonomously map and navigate outdoors
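The fusion principle is the standard Kalman predict/update cycle. As a hedged sketch (the real system fused many sensors in higher dimensions), here is a scalar filter combining an odometry displacement prediction with a GPS position fix along one axis; the noise values are hypothetical.

```python
def kf_step(x, P, u, z, q=0.05, r=1.0):
    """One 1-D Kalman filter step.
    x, P -- prior position estimate and its variance
    u    -- odometry displacement since the last step (prediction input)
    z    -- GPS position measurement (update input)
    q, r -- process and measurement noise variances
    """
    # Predict: dead-reckon forward on odometry; uncertainty grows
    x_pred = x + u
    P_pred = P + q
    # Update: blend in GPS, weighted by relative confidence
    K = P_pred / (P_pred + r)          # Kalman gain
    x_new = x_pred + K * (z - x_pred)
    P_new = (1 - K) * P_pred           # uncertainty shrinks after the fix
    return x_new, P_new
```

The estimate lands between dead reckoning and the GPS fix, closer to whichever source is currently more trusted.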
Robotic Picking Using Deep Reinforcement Learning: 3/2022 - 5/2022
Robotic picking of items from bins is of growing importance for warehouses and factories
In this project, we modified an existing deep RL network to improve bin picking in a simulated PyBullet environment
Our network took inputs from an overhead RGBD camera, a wrist-mounted RGBD camera, and the joint positions of the 7-DOF Panda robot, and output joint torque commands
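The observation-to-torque mapping can be illustrated with a toy stand-in for the actual deep network: a small NumPy MLP over concatenated camera embeddings and joint angles. The feature sizes here are hypothetical, and the real policy used learned convolutional encoders rather than random weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_policy(obs, w1, b1, w2, b2):
    h = np.maximum(0.0, obs @ w1 + b1)  # ReLU hidden layer
    return h @ w2 + b2                  # one torque per joint

# Hypothetical sizes: 64-d overhead embedding + 64-d wrist embedding + 7 joint angles
obs = rng.normal(size=64 + 64 + 7)
w1 = rng.normal(size=(135, 32)); b1 = np.zeros(32)
w2 = rng.normal(size=(32, 7));   b2 = np.zeros(7)
torques = mlp_policy(obs, w1, b1, w2, b2)
print(torques.shape)  # (7,)
```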
Improved Monocular Depth Estimation using Semantic Information: 3/2022 - 5/2022
Monocular depth estimation is important in many applications including autonomous vehicles
Depth estimation partially relies on internalizing object semantics to understand the expected shape & size
Depth estimates from existing methods such as MonoDepth1 lack crisp boundaries between objects
We created a multi-task network to simultaneously train on depth and semantics
This produced noticeably crisper, clearer object edges
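Multi-task training of this kind typically optimizes a weighted sum of the two per-pixel objectives. A minimal sketch, assuming an L1 depth term, a cross-entropy segmentation term, and a hypothetical weighting `lam` (the project's exact losses and weights may differ):

```python
import numpy as np

def multitask_loss(depth_pred, depth_gt, seg_logits, seg_gt, lam=0.5):
    # Depth branch: mean absolute error on per-pixel depth
    l_depth = np.mean(np.abs(depth_pred - depth_gt))
    # Segmentation branch: per-pixel cross-entropy over class logits
    e = np.exp(seg_logits - seg_logits.max(axis=-1, keepdims=True))
    probs = e / e.sum(axis=-1, keepdims=True)
    l_seg = -np.mean(np.log(probs[np.arange(len(seg_gt)), seg_gt] + 1e-12))
    return l_depth + lam * l_seg
```

Because the shared encoder must satisfy both terms, depth gradients near object boundaries get reinforced by the segmentation labels, which is what sharpens the edges.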
Bounding Volume Hierarchy Trees for Minimum Distance Search
Bounding Volume Hierarchy Trees are recursively constructed for 3D objects from their respective triangular meshes
A real-time (1 kHz!) minimum distance search can then be performed between two objects using a simultaneous tree-walk algorithm to find the nearest points
Visualizations were coded in Python using PyQtGraph to display each step of the process and algorithms
This was an individual project in the RBE 595 Haptics class
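The build-then-walk structure can be sketched compactly. As a simplification, this sketch uses mesh vertices as the leaf primitives rather than triangles (the project operated on triangular meshes), but the simultaneous tree-walk with axis-aligned bounding-box pruning is the same idea.

```python
import numpy as np

class BVH:
    """Bounding Volume Hierarchy over 3D points, split on the longest axis."""
    def __init__(self, pts, leaf=4):
        self.lo, self.hi = pts.min(0), pts.max(0)  # node AABB
        if len(pts) <= leaf:
            self.pts, self.kids = pts, None
        else:
            axis = np.argmax(self.hi - self.lo)
            order = np.argsort(pts[:, axis])
            mid = len(pts) // 2
            self.pts = None
            self.kids = (BVH(pts[order[:mid]], leaf),
                         BVH(pts[order[mid:]], leaf))

def box_dist(a, b):
    # Lower bound on any point-pair distance: gap between the two AABBs
    d = np.maximum(0.0, np.maximum(a.lo - b.hi, b.lo - a.hi))
    return float(np.linalg.norm(d))

def min_dist(a, b, best=float("inf")):
    """Simultaneous tree-walk: descend both trees, pruning node pairs
    whose boxes are already farther apart than the best distance found."""
    if box_dist(a, b) >= best:
        return best
    if a.kids is None and b.kids is None:
        d = np.linalg.norm(a.pts[:, None] - b.pts[None], axis=-1).min()
        return min(best, float(d))
    # Descend into the larger (or only non-leaf) node
    if b.kids is None or (a.kids is not None and
                          np.prod(a.hi - a.lo) > np.prod(b.hi - b.lo)):
        for k in a.kids:
            best = min_dist(k, b, best)
    else:
        for k in b.kids:
            best = min_dist(a, k, best)
    return best
```

The pruning is what makes kHz-rate queries feasible: most node pairs are rejected by the cheap box test without ever touching the primitives.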
Playing Catch with a Mobile Robot (Jan 2021 - May 2021)
Karter Krueger, Yan-Bin Jia (ISU CS Robotics Lab)
(presented at the 2022 Iowa State University Honors Research Symposium)
The robot is tasked with catching a ball tossed toward it
The ball is tracked with a camera on a servo gimbal while its trajectory and intercept point are predicted using a Kalman filter
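A minimal sketch of the estimation loop, assuming a 2D state `[x, y, vx, vy]`, a constant-gravity motion model, and position-only camera measurements (the real system worked in 3D and the noise matrices here are hypothetical):

```python
import numpy as np

G, DT = 9.81, 0.02  # gravity (m/s^2), filter time step (s)

def kf_predict(x, P, Q):
    # Constant-acceleration (gravity) ballistic motion model
    F = np.array([[1, 0, DT, 0],
                  [0, 1, 0, DT],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], float)
    u = np.array([0, -0.5 * G * DT**2, 0, -G * DT])
    return F @ x + u, F @ P @ F.T + Q

def kf_update(x, P, z, R):
    H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)  # camera sees position only
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

def intercept_x(x, catch_h):
    # Solve the ballistic flight time down to the catch height,
    # then extrapolate horizontal position to get the intercept point
    px, py, vx, vy = x
    t = (vy + np.sqrt(vy**2 + 2 * G * (py - catch_h))) / G
    return px + vx * t
```

Each camera frame runs one predict/update pair, and `intercept_x` is re-evaluated on the refreshed state so the catch point converges as the toss progresses.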
Interpretable UAV Collision Avoidance using Deep RL (Jan 2021 - May 2022)
Deepak-George Thomas, Daniil Olshanskyi, Karter Krueger, Tichakorn Wongpiromsarn, Ali Jannesari
(arXiv 2021)
A multi-head self-attention graph neural network was combined with a D3QN reinforcement learning policy to detect and avoid obstacles using an RGBD camera on a simulated drone in Unreal Engine
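The attention mechanism at the heart of the network can be sketched as scaled dot-product self-attention over obstacle nodes. This single-head NumPy version is illustrative only (the paper's network is multi-head with learned projections inside a GNN):

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention.
    X: (n, d) node features; Wq/Wk/Wv: (d, d) projection matrices."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    w = np.exp(scores - scores.max(-1, keepdims=True))
    w = w / w.sum(-1, keepdims=True)  # softmax: each node attends over all nodes
    return w @ V
```

Inspecting the attention weights `w` is what makes the policy interpretable: they reveal which obstacles the drone is attending to when it chooses an avoidance action.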
Tetrahedron Object Tracking (Nov 2019 - May 2020)
Parallel to the vision tracking aspect of this paper by Matthew Gardner and Yan-Bin Jia
(I presented at the 2022 Iowa State University Honors Research Symposium [video])
A custom vision pipeline was implemented to detect and track the corners and edges of a white tetrahedron tossed across a low-light environment
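One cheap way to seed such a tracker, sketched here as a hedged illustration rather than the actual pipeline: threshold the bright silhouette against the dark background and take its extreme points as corner candidates.

```python
import numpy as np

def corner_candidates(gray, thresh=200):
    """Return up to 4 corner candidates of a bright object silhouette.
    The extremes of x+y and x-y pick out the outermost silhouette points,
    a cheap first guess that a refinement/tracking stage can then lock onto."""
    ys, xs = np.nonzero(gray > thresh)   # pixels brighter than the background
    if len(xs) == 0:
        return []
    s, d = xs + ys, xs - ys
    idx = {s.argmin(), s.argmax(), d.argmin(), d.argmax()}
    return [(int(xs[i]), int(ys[i])) for i in idx]
```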