Most self-driving car testing happens in states like California, Arizona, and Nevada, and there’s a reason for that: the sensors these cars rely on to navigate are less reliable in poor weather and other low-visibility conditions. But MIT says it is developing new tech that could help.
MIT’s experimental sensor reads radiation at sub-terahertz wavelengths, which sit between microwave and infrared radiation on the electromagnetic spectrum. That means the signals can pass through fog and dust and still be detected, according to MIT. The lidar sensors used in most autonomous cars currently testing on public roads rely on infrared wavelengths, which are more easily disrupted in those conditions, MIT claims.
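For a rough sense of where those wavelengths fall, a simple frequency-to-wavelength conversion puts the sub-terahertz band at millimeter scale, far longer than the infrared light lidar uses. The snippet below is only a back-of-the-envelope sketch; the 0.1–1 THz band and the 905 / 1550 nm lidar wavelengths are common reference figures, not numbers from MIT’s paper.

```python
# Back-of-the-envelope check (illustrative, not from MIT's paper):
# wavelength = c / frequency places sub-terahertz radiation between
# microwaves and infrared, as the article describes.
C = 3.0e8  # speed of light, m/s

def wavelength_mm(freq_hz: float) -> float:
    """Return wavelength in millimeters for a frequency in hertz."""
    return C / freq_hz * 1e3

for label, freq in [("0.1 THz (low end of sub-THz)", 0.1e12),
                    ("1 THz (high end of sub-THz)", 1.0e12)]:
    print(f"{label}: {wavelength_mm(freq):.2f} mm")

# Typical infrared lidar wavelengths, for comparison.
for label, wl_m in [("905 nm lidar", 905e-9), ("1550 nm lidar", 1550e-9)]:
    print(f"{label}: {wl_m * 1e3:.6f} mm")
```

Millimeter-scale waves are much larger than typical fog droplets, which is part of why they scatter less and can still be picked up in conditions that defeat infrared sensors.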
But sub-terahertz sensors have some drawbacks. They need strong signals to work, and producing those signals has previously meant bulky, expensive equipment, according to MIT. That’s why the technology hasn’t been tried in self-driving cars before.
Researchers developed a prototype sub-terahertz sensor that fits on a chip yet is sensitive enough to provide useful information even in the presence of significant signal noise, MIT claims. That’s thanks to what the researchers call “decentralization,” which relies on an array of individual pixels on the chip. The pixels can be used to determine the distance to nearby objects, much like lidar, but can also be “steered” in a particular direction to produce high-resolution images of the environment, MIT claims.
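To see why an array of pixels helps when each individual reading is noisy, here is a deliberately simplified sketch. It models each pixel as taking a noisy round-trip time-of-flight measurement and then averages the readings; the noise level, distance, and averaging scheme are illustrative assumptions, not details of MIT’s design, which the university describes only as a “decentralized” pixel array.

```python
# Toy illustration (not MIT's actual signal-processing pipeline):
# each pixel returns a noisy time-of-flight reading, and combining
# many pixels recovers a usable distance estimate.
import random

C = 3.0e8                         # speed of light, m/s
TRUE_DISTANCE = 20.0              # hypothetical object 20 m away
TRUE_TOF = 2 * TRUE_DISTANCE / C  # round-trip time of flight, seconds

def pixel_measurement(noise_std: float = 5e-9) -> float:
    """One pixel's noisy round-trip time-of-flight reading (seconds)."""
    return TRUE_TOF + random.gauss(0.0, noise_std)

def estimate_distance(num_pixels: int) -> float:
    """Average readings from an array of pixels, then convert to meters."""
    readings = [pixel_measurement() for _ in range(num_pixels)]
    avg_tof = sum(readings) / len(readings)
    return avg_tof * C / 2

for n in (1, 4, 32):
    print(f"{n:>2} pixel(s): estimated distance = {estimate_distance(n):.2f} m")
```

Averaging N independent readings cuts the noise by roughly a factor of the square root of N, which is the basic intuition for how many small, individually noisy pixels can still yield a useful measurement.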
As with all research, it’s worth noting that promising results in the lab may not scale up to a usable real-world product. This sensor tech is one of several recent MIT research projects relevant to self-driving cars. The university also developed an experimental algorithm to help cars execute lane changes and a navigation system for rural roads that may not be well mapped.