This was a two-to-three-week project for the Olin course Computational Robotics. We implemented a convolutional neural network on a Neato robot using Python, ROS, Google Colab notebooks, and Apple's ARKit AR platform on an iPhone. Using camera and position data, we trained the network to predict a person's ground-truth position from their location in the camera frame. All of the code, along with documentation (including visualizations of the system and the network's parameters), is in the README of our GitHub repository. If you'd like to see it, please click here.
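To give a flavor of the approach, here is a minimal sketch of a CNN that regresses a 2D position from a camera frame. This is an illustrative stand-in, not our actual architecture: the layer sizes, input resolution, and the `PositionNet` name are assumptions, and the real training details live in the repository.

```python
import torch
import torch.nn as nn

class PositionNet(nn.Module):
    """Toy CNN regressor: camera frame in, (x, y) position estimate out.
    Hypothetical architecture for illustration only."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # collapse spatial dims to one feature vector
        )
        self.head = nn.Linear(32, 2)  # two outputs: ground-truth (x, y)

    def forward(self, x):
        f = self.features(x).flatten(1)
        return self.head(f)

model = PositionNet()
frame = torch.randn(1, 3, 64, 64)  # dummy stand-in for a camera frame
pred = model(frame)                # shape (1, 2): predicted (x, y)
```

In the actual project, the regression targets came from ARKit's position tracking on the iPhone, so the network learns the mapping from image coordinates to real-world position without hand-calibrated geometry.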