
Robots identify objects based on their mass distribution. (Photo: Freepik)
Cambridge/Seattle/Vancouver – Researchers at the Massachusetts Institute of Technology (MIT) and the University of British Columbia are letting robots identify objects based on their weight and mass distribution, for situations where the light is not sufficient to recognize an object visually.
The researchers envision, for example, searching for people in disaster areas, where it must be determined whether a found object is a rock or a person – although shaking is ruled out in that case. There, sensors that assess the surface texture and hardness of the object come into play instead.
Feeling weight through strain on the joints
The international research team’s method, which also includes developers from Amazon, uses proprioception: the ability of humans or robots to perceive their own movement or position in space. When a person in the gym lifts a dumbbell, for example, they feel its weight in their wrist and biceps. In the same way, a robot can “feel” the weight of an object in its arm through the load on its joints.
When the robot lifts an object, the system collects signals from the robot’s rotary encoders, i.e. sensors that capture the direction and speed of rotation of the joints. “Most robots have such sensors,” says Chao Liu, MIT postdoc in computer science and artificial intelligence. “Our technology is therefore inexpensive, because no additional components such as tactile sensors or vision-based tracking systems are required.”
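To illustrate the idea in the simplest possible terms, the sketch below estimates a payload's mass from the extra static torque it induces at a single arm joint. This is a minimal toy model, not the team's actual method: the link length, joint angle, and torque readings are hypothetical, and a real system would use the full multi-joint dynamics rather than one lever-arm equation.

```python
import math

G = 9.81           # gravitational acceleration, m/s^2
LINK_LENGTH = 0.4  # hypothetical arm-link length, metres

def payload_mass(joint_angle_rad, torque_empty, torque_loaded):
    """Estimate a held mass from the torque difference at one joint.

    For a payload at the end of a single rigid link, the static gravity
    torque is tau = m * g * r, where r is the horizontal lever arm:
    r = LINK_LENGTH * cos(joint_angle). Comparing the loaded and unloaded
    torque isolates the payload's contribution.
    """
    lever = LINK_LENGTH * math.cos(joint_angle_rad)
    delta_tau = torque_loaded - torque_empty
    return delta_tau / (G * lever)

# Hypothetical readings: arm held 30 degrees above horizontal.
angle = math.radians(30.0)
mass = payload_mass(angle, torque_empty=1.2, torque_loaded=4.6)
print(f"estimated payload mass: {mass:.2f} kg")  # prints approx. 1.00 kg
```

In practice the joint angles would come from the rotary encoders the article mentions, and the torques from motor-current measurements; the point is only that weight can be inferred without any camera or tactile sensor.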
“Computer Vision” remains important
“We do not want to replace so-called computer vision,” says Peter Yichen Chen, an MIT postdoc who received his doctorate in computer science at the University of British Columbia. “Both methods have their advantages and disadvantages. But we have shown that we can already determine some properties of objects even without a camera,” the scientist concludes.
Source: www.pressetext.com
(PTE019/08.05.2025/11:30)