The computer-vision system looks for three things: the seabed, nodules, and anything else. We assume anything else is life that we want to avoid disturbing. As a result, the system does not need to be trained on life forms, nor does it need to have seen a particular life form before encountering it. Because no sunlight reaches these depths, we control the illumination entirely with our own lights, which significantly simplifies the computer-vision problem.
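The three-way scheme above can be sketched as a simple pixel classifier. This is a hypothetical illustration, not the actual system: it assumes that under controlled lighting the seabed and nodules each fall in a known brightness band, and that anything outside both bands defaults to "other" (potential life), which is why nothing unseen needs to be trained for. The band values are invented for the example.

```python
import numpy as np

# Label codes for the three classes the vision system distinguishes.
SEABED, NODULE, OTHER = 0, 1, 2

def classify_pixels(gray, seabed_band=(90, 160), nodule_band=(161, 255)):
    """Classify each pixel of a grayscale frame.

    Pixels matching neither the seabed nor the nodule brightness band
    default to OTHER -- nothing "other" ever needs to be seen in advance.
    The band thresholds here are illustrative placeholders.
    """
    labels = np.full(gray.shape, OTHER, dtype=np.uint8)
    labels[(gray >= seabed_band[0]) & (gray <= seabed_band[1])] = SEABED
    labels[(gray >= nodule_band[0]) & (gray <= nodule_band[1])] = NODULE
    return labels

# A tiny synthetic frame: seabed-bright, nodule-bright, and an outlier pixel.
frame = np.array([[100, 200, 30],
                  [120, 210, 140]], dtype=np.uint8)
labels = classify_pixels(frame)
# The value 30 matches neither band, so it is labeled OTHER by default.
```

The key design point is that "other" is the fallback class, not a learned one: the classifier only needs models of the seabed and of nodules, and everything else is avoided by construction.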

Picking a nodule will cause some local sediment disturbance. The cameras identify the nodule's location in front of the vehicle, and that location is then tracked relative to the robot through precise tracking of the vehicle's own position. Once the nodule passes out of the camera's view, its position is still known from the vehicle's tracked motion, so the arm can pick it up without the cameras seeing it at that moment. By the time the nodule is under the vehicle and the arm picks it, any disturbed sediment is well behind the cameras. Additionally, the vehicle travels primarily into any existing current, so between the vehicle's motion and the surrounding current, sediment stirred up under the vehicle remains behind it.
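The out-of-view pickup described above reduces to frame transforms: a nodule sighted by the camera is converted once into world coordinates, and afterwards its position in the robot's frame is recomputed from the latest vehicle pose alone. The 2D sketch below is a hypothetical illustration of that idea, assuming the vehicle pose (x, y, heading) is precisely tracked; the function names and distances are invented for the example.

```python
import math

def robot_to_world(pose, px, py):
    """Convert a point (px, py) in the robot frame to world coordinates,
    given the vehicle pose (x, y, heading in radians)."""
    x, y, th = pose
    return (x + px * math.cos(th) - py * math.sin(th),
            y + px * math.sin(th) + py * math.cos(th))

def world_to_robot(pose, wx, wy):
    """Convert a world point (wx, wy) into the robot frame -- this is how
    the arm still knows where a nodule is after the camera loses sight of it."""
    x, y, th = pose
    dx, dy = wx - x, wy - y
    return (dx * math.cos(th) + dy * math.sin(th),
            -dx * math.sin(th) + dy * math.cos(th))

# The camera spots a nodule 2 m directly ahead while the vehicle is at the origin.
nodule_world = robot_to_world((0.0, 0.0, 0.0), 2.0, 0.0)

# The vehicle drives 2 m forward; the nodule is now directly under the
# vehicle (robot-frame origin) even though the camera can no longer see it.
rel = world_to_robot((2.0, 0.0, 0.0), *nodule_world)
```

The design choice this illustrates is that the nodule itself is never re-detected under the vehicle; only the vehicle's position estimate needs to stay accurate between sighting and pickup.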