Why AVs still struggle with the unknown
When it comes to “edge cases,” even a teen with a fresh driver’s license has a leg up on autonomous vehicles.
Imagine if the drudgery of your daily commute were enlivened by an unusual sight: someone walking down the street carrying a disco ball.
At the very least, seeing the shimmering orb somewhere other than a dance floor might be distracting. It’s certainly something you wouldn’t expect to see.
That’s the example that Ed Olson, CEO of autonomous vehicle tech company May Mobility, uses to explain the challenges of perceptual edge cases in autonomous driving. These events are, by definition, unexpected and rare. Theoretically, they’re infinite in variety. And they remain a barrier to widespread adoption of driverless vehicles.
Jeremy Carlson, an associate director who leads autonomous driving research at S&P Global Mobility, agreed. “Edge cases are examples, generally odd or unique examples, ones you’re not likely to encounter in regular driving, that the system or the driver has to be able to manage safely,” he told Morning Brew.
“These edge cases,” he added, “are the unknowns.”
On the edge: Edge cases remain a challenge for AV companies because the industry recognizes that the tech must outperform human drivers to earn public trust.
“If the bar was 99.9%, I think a lot of executives would rest easy,” Russell Ong, who previously worked in the self-driving vehicle sector and now leads product management for iMerit, which documents and labels edge case data for its clients, told us. “But a lot of our traditional automotive companies have learned it’s closer to 99.997%.”
As companies like Waymo and Zoox expand to new markets, headlines abound about mishaps and serious safety incidents involving robotaxis. Many of these situations aren’t even classified as edge cases because they’re relatively common—like Waymo’s vehicles having issues navigating around school buses.
“It is still complex sometimes, even with these relatively known scenarios or road users or vehicles,” Carlson said, “to figure out how they navigate…this unique space.”
Ong explained that the software stacks in autonomous driving systems tend to rely on two types of perception technology: sensors like cameras, radars, and lidar, as well as high-definition maps. “Anything that confounds those modalities is a problem,” he said.
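To make that concrete, here's a deliberately simplified sketch (purely hypothetical, not any company's actual perception stack) of what "confounding those modalities" can look like: a detection is only fully trusted when the camera, the lidar, and the HD-map context broadly agree. The `Detection` fields and the `corroborate` rule are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One perceived object, summarized per modality (all fields hypothetical)."""
    label: str
    camera_conf: float    # how confident the camera-based classifier is (0-1)
    lidar_conf: float     # how well a lidar point cluster supports the object (0-1)
    map_context_ok: bool  # does the HD map make this object plausible here (e.g., a crosswalk)?

def corroborate(det: Detection, threshold: float = 0.5) -> str:
    """Toy fusion rule: trust a detection only when the modalities agree.

    A disco ball on the sidewalk might dazzle the camera, return a confusing
    lidar signature, and match nothing the HD map expects, which is exactly
    the kind of cross-modal disagreement worth flagging.
    """
    votes = sum([
        det.camera_conf >= threshold,
        det.lidar_conf >= threshold,
        det.map_context_ok,
    ])
    if votes == 3:
        return "track normally"
    if votes == 2:
        return "track cautiously"
    return "treat as unknown: slow down and defer to planning"

if __name__ == "__main__":
    pedestrian = Detection("pedestrian", camera_conf=0.9, lidar_conf=0.8, map_context_ok=True)
    disco_ball = Detection("shiny unknown object", camera_conf=0.4, lidar_conf=0.3, map_context_ok=False)
    print(corroborate(pedestrian))  # track normally
    print(corroborate(disco_ball))  # treat as unknown: slow down and defer to planning
```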
Edge cases are also difficult precisely because they're so unlikely: AV systems must prepare for scenarios they've never seen before and may never encounter.
“An infinite number of scenarios can take place on the road,” according to iMerit. “While the human eye can account for random developments on the fly, autonomous vehicles must be explicitly trained to account for these same obscure developments.”
Find the pattern: One strategy AV companies have employed to account for edge cases is hitting the road and trying to collect as much data as possible. The data that the vehicles’ sensors collect is fed into machine learning models that help the systems predict and respond to edge cases. Uber, for example, recently announced an initiative to collect driving data to assist its robotaxi partners.
The problem with relying solely on this approach, according to experts, is that it’s inefficient.
During CES 2026, Laura Major, CEO of robotaxi company Motional, said that less than 1% of the driving data Motional’s fleet collects is helpful, prompting the company to create a data mining system called Omnitag. This system, according to Motional, replaces manual data mining with a machine learning-based approach that pulls data from multiple sources including audio, video, and lidar sensors.
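Motional hasn't published Omnitag's internals, but the general shape of automated data mining can be sketched as scoring logged drive segments across modalities and keeping only the rare slice worth labeling. The segment fields and the scoring heuristic below are assumptions for illustration, not Motional's implementation.

```python
from dataclasses import dataclass

@dataclass
class DriveSegment:
    """A short clip from a fleet log, summarized across modalities (fields are assumptions)."""
    segment_id: str
    hard_brake: bool             # from vehicle dynamics
    horn_or_siren: bool          # from the audio feed
    unknown_object_score: float  # from vision/lidar perception, 0-1
    disengagement: bool          # a human took over

def interest_score(seg: DriveSegment) -> float:
    """Heuristic score: higher means more likely to be a rare, training-worthy scenario."""
    return (
        (0.3 if seg.hard_brake else 0.0)
        + (0.2 if seg.horn_or_siren else 0.0)
        + 0.4 * seg.unknown_object_score
        + (0.5 if seg.disengagement else 0.0)
    )

def mine(segments: list[DriveSegment], keep_fraction: float = 0.01) -> list[DriveSegment]:
    """Keep roughly the most interesting ~1% of logged segments for labeling and training."""
    ranked = sorted(segments, key=interest_score, reverse=True)
    return ranked[: max(1, int(len(ranked) * keep_fraction))]
```

The 1% default here simply mirrors Major's point that only a sliver of the data a fleet collects turns out to be useful.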
That’s also why AV companies rely in part on synthetic data to create unique scenarios to train their systems. They run their systems through AI-enabled simulations to augment real-world driving, creating plausible scenarios like dangerous weather events or broken traffic equipment.
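Synthetic-data pipelines differ from company to company, but the core idea can be sketched as sampling structured variations of a scene, like the dangerous weather or broken traffic equipment mentioned above. The parameters below are illustrative assumptions, not any vendor's actual scenario schema.

```python
import random
from dataclasses import dataclass

@dataclass
class Scenario:
    """One synthetic driving scenario to run in simulation (fields are illustrative)."""
    weather: str
    traffic_light: str        # "working", "flashing", or "dark" (i.e., broken)
    occluding_vehicle: bool   # e.g., a parked truck hiding a crosswalk
    pedestrian_behavior: str

def sample_scenarios(n: int, seed: int = 0) -> list[Scenario]:
    """Sample n plausible-but-rare variations to augment real-world driving logs."""
    rng = random.Random(seed)
    weathers = ["clear", "heavy rain", "dense fog", "snow"]
    lights = ["working", "flashing", "dark"]
    behaviors = ["crossing normally", "jaywalking", "stopped mid-crosswalk", "carrying a large object"]
    return [
        Scenario(
            weather=rng.choice(weathers),
            traffic_light=rng.choice(lights),
            occluding_vehicle=rng.random() < 0.3,
            pedestrian_behavior=rng.choice(behaviors),
        )
        for _ in range(n)
    ]

if __name__ == "__main__":
    for scenario in sample_scenarios(3):
        print(scenario)
```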
“You have to train the system, and if you’re not able to train it on that exact edge case, because it hasn’t happened before or you didn’t capture it on the sensor feed that you can go back and revisit,” Carlson said, “that means that you need to equip the vehicle with some ability to understand and to navigate those things even though they’ve never seen this very unique situation before.”
Perception vs. behavior: The perception sensors typically included in AV systems all have their own limitations, and to some extent, all AV systems struggle with certain perceptual edge cases like the disco ball scenario, according to Olson. But general-purpose vision algorithms “do a pretty good job” of handling these situations, he said.
Edge cases that deal with human behavior and require reasoning are trickier.
“Walking to kindergarten does not actually prepare you very well for driving,” Olson said. “Even a human, we sit them down, we take them to driver’s ed, and we teach them the rules of the road. And even out of the gate, with six hours of education, you can now reason your way through it. But there’s a long tail of crazy situations that you need to be prepared for.”
That’s why, he argued, trying to identify and train for all of the possible scenarios doesn’t work.
“It’s the game of Pokemon: If we can collect all of these edge cases, then no matter what happens next, we will have seen that case in the past and we know we’ll do the right thing,” Olson said.
“This doesn’t work very well. Because the long tail is so darn long that you’re always encountering new situations all the time. This is where that 16-year-old driver who just got out of driver’s ed has a superpower. Even though they have not seen very much at all, they can handle situations that they’ve never seen before because they can reason through the situations that they haven’t seen.”
Enter AI: Teenagers might have years to go before their prefrontal cortexes are fully developed, but their ability to reason is nonetheless built in. How do AVs gain that same edge? Experts are optimistic that the AV sector’s shift from rules-based systems to AI-first approaches can help solve these challenges as systems become more intuitive and better at reasoning.
“If you approach autonomous driving through rote memorization, you need a lot of data…Hundreds of millions of miles,” Olson said. “If you approach it through, let’s understand something about underlying human behavior and then let’s be able to reason in that sort of world model of how does the world work, how do people work within it, then you need a lot less data.”
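To make Olson's contrast concrete, here's a deliberately toy sketch: a "memorizer" can only answer for scenarios it has literally logged before, while even a crude model that reasons over general properties can handle combinations it has never seen. Every scenario, feature, and rule here is invented for illustration.

```python
# Toy contrast between rote memorization and reasoning over general properties.
# Every scenario, feature, and rule here is invented for illustration.

memorized = {
    ("pedestrian", "crosswalk", "day"): "yield",
    ("cyclist", "bike lane", "day"): "maintain gap",
}

def memorizer(scenario: tuple[str, str, str]) -> str:
    """Rote approach: only answers if this exact scenario was logged before."""
    return memorized.get(scenario, "unknown: never seen this")

def reasoner(actor: str, location: str, lighting: str) -> str:
    """Crude 'world model': decides from general properties, not exact matches."""
    vulnerable = actor in {"pedestrian", "cyclist", "person carrying a disco ball"}
    in_or_near_roadway = location in {"crosswalk", "roadway", "bike lane"}
    low_visibility = lighting in {"night", "fog"}
    if vulnerable and in_or_near_roadway:
        return "slow and yield"
    if vulnerable or low_visibility:
        return "slow and increase following distance"
    return "proceed normally"

novel = ("person carrying a disco ball", "roadway", "night")
print(memorizer(novel))   # unknown: never seen this
print(reasoner(*novel))   # slow and yield
```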
One of the benefits of an end-to-end AI approach to autonomous driving is that it enables AVs to parse their way through scenarios they’ve never seen before, according to Carlson.
“It’s like a Mad Libs of fill-in-the-blank of how many strange things you can encounter at one time at an intersection, and try to build a scenario around it,” he said. “That is precisely why this is so challenging.”
The difficult-to-predict nature of edge cases makes tools like simulation and synthetic data all the more important, Carlson noted, citing Nvidia CEO Jensen Huang’s announcement at CES 2026 of Alpamayo, a new suite of AI models, simulation tools, and AI datasets for AVs.
“A big positive outcome in using [end-to-end AI models] is that you are equipping the vehicle with more reasoning,” he said. “It is that flexibility and that reasoning…that is ultimately what’s going to be necessary for us to move…into much more flexible and humanistic driving.”