
2024-04-03

Researchers use ChatGPT to make a tomato-picking robot

A team of researchers at Delft University of Technology (TU Delft) in the Netherlands and the Swiss technical university EPFL used ChatGPT to help them develop a tomato-picking robot. A study about the development of the robot was recently published in Nature Machine Intelligence.

ChatGPT played an essential role in the development process from the very beginning. To decide what kind of robot they should create, researchers asked ChatGPT questions about what the greatest challenges for humanity would be in the future, which led them to focus on issues with the food supply.

The team picked tomatoes because ChatGPT identified them as one of the most economically valuable crops to automate harvesting for.

“We wanted ChatGPT to design not just a robot, but one that is actually useful,” Dr. Cosimo Della Santina, an assistant professor at TU Delft, said.

During the design process, ChatGPT gave the team helpful suggestions, like making the gripper out of silicone or rubber so that the robot doesn’t crush tomatoes, or using a Dynamixel motor to drive the robot. With ChatGPT handling much of the research for the robot, the engineering team found themselves performing more technical tasks to validate the AI’s suggestions.

In this way, the large language model acted as the researcher and engineer in the development process, while the human researchers acted as managers, responsible for specifying the design objectives.

This is a milder form of collaboration than the most extreme scenario the team envisioned, in which the language model provides all of the input for the robot design and human engineers simply follow it blindly.

An extreme scenario like that isn’t currently possible, and the team behind this experiment doesn’t know whether it will ever be realistic, in part because working with large language models raises questions about plagiarism and intellectual property for companies commercializing robots, and because large language models provide unverified information.

“In fact, LLM output can be misleading if it is not verified or validated. AI bots are designed to generate the ‘most probable’ answer to a question, so there is a risk of misinformation and bias in the robotic field,” Della Santina said.

Even ChatGPT’s determination that tomatoes would be the most economically valuable crop to work with could be biased towards crops that are better represented in the data ChatGPT was trained on.

Despite these concerns, the research team plans to keep using the tomato-harvesting robot in its research and to continue studying the capabilities of large language models like ChatGPT.
