In 2016, Lyft president John Zimmer predicted that by 2025 private car ownership would "all but end" in major U.S. cities.
In 2021, some experts aren't sure when, if ever, people will be able to buy cars with no steering wheel that drive themselves off the lot.
Unlike investors and CEOs, scientists who study artificial intelligence, systems engineering and autonomous technologies have long argued that creating a fully self-driving car will take many years, perhaps decades. Now some go further, saying that despite investments already exceeding $80 billion, we may never get the self-driving cars we were promised. At least not without major breakthroughs in AI, which almost no one predicts will arrive anytime soon – or a complete redesign of our cities.
Even those most mesmerized by the technology are starting to acknowledge as much.
"A major part of real-world AI has to be solved to make unsupervised, generalized full self-driving work," Mr. Musk himself tweeted recently. Translation: for a car to drive the way a human does, researchers must create an AI on a par with a human one. Researchers and scientists in the field will tell you this is something we have no idea how to do. Mr. Musk, on the other hand, seems to believe that this is exactly what Tesla will achieve. He constantly promotes the company's next-generation "Full Self-Driving" technology – in reality a misleadingly named driver-assistance system – which is currently in beta.
A recently published paper entitled "Why AI Is Harder Than We Think" sums up the situation well. In it, Melanie Mitchell, a computer scientist and professor of complexity at the Santa Fe Institute, notes that as the deadlines for the arrival of autonomous vehicles have slipped, people in the industry have redefined the term. Because these vehicles require geographically constrained, well-mapped test areas and ideal weather conditions – not to mention safety drivers or at least remote monitors – their makers and boosters have folded all of these caveats into their definitions of autonomy.
Even with all these asterisks, Dr. Mitchell writes, "none of these predictions came true."
In vehicles you can actually buy, self-driving has yet to prove itself anything more than enhanced cruise control, like GM's Super Cruise or the optimistically named Tesla Autopilot. In San Francisco, the GM subsidiary Cruise is testing autonomous vehicles with no driver behind the wheel, but with a person monitoring the vehicle's operation from the back seat. And in the United States there is exactly one robotaxi service operating with no human drivers at all: a small operation run by the Alphabet subsidiary Waymo, limited to low-density parts of the Phoenix metropolitan area.
Waymo's vehicles have, however, been involved in minor accidents in which they were rear-ended, and their confusing (to humans) behavior has been cited as a possible cause. Recently, one was flummoxed by traffic cones at a construction site.
"I'm not aware of us being hit or rear-ended any more than a human driver would be," said Nathaniel Fairfield, a software engineer and head of behavior at Waymo. The company's self-driving vehicles are programmed to be cautious – "the opposite of the canonical teenage driver," he added.
Chris Urmson heads the autonomous-driving startup Aurora, which recently acquired Uber's self-driving division. (Uber is also investing $400 million in Aurora.) "We're going to see self-driving vehicles doing useful things on the road in the next few years, but for them to become ubiquitous will take time," he said.
Key to Aurora's initial launch will be that its vehicles will operate only on highways for which the company has already built high-resolution, three-dimensional maps, says Mr. Urmson. Aurora's ultimate goal is for both the trucks and the cars using its systems to venture beyond the highways where the service will debut, but Mr. Urmson declined to say when that might happen.
The slow rollout of limited, constantly monitored "autonomous" vehicles was predictable – and indeed was predicted – years ago. But some CEOs and engineers argued that new self-driving capabilities would emerge if these systems could simply log enough miles on the road. Now some take the position that all the test data in the world cannot compensate for AI's fundamental shortcomings.
Decades of breakthroughs in the branch of artificial intelligence known as machine learning have yielded only the most primitive forms of "intelligence," said Mary Cummings, a professor of computer science and director of Duke University's Humans and Autonomy Laboratory, who advises the Department of Defense on AI.
To assess modern machine-learning systems, she developed a four-level scale of AI sophistication. The simplest level is skill-based, "bottom-up" reasoning: today's AIs are quite good at tasks like learning to stay within highway lane lines. The next level is rule-based learning and reasoning (i.e., what to do at a stop sign). Then comes knowledge-based reasoning. (Is it still a stop sign if half of it is covered by a tree branch?) And at the top is expert reasoning: the uniquely human ability to be dropped into an entirely novel scenario and use our knowledge, experience and skills to come out in one piece.
Driverless cars' problems begin at the third level. Today's deep-learning algorithms, the elite of machine-learning techniques, are unable to achieve a knowledge-based representation of the world, says Dr. Cummings. And human engineers' attempts to make up for this shortcoming – such as building ultra-detailed maps to fill gaps in sensor data – tend not to be updated often enough to guide a vehicle through every possible situation, such as an unmapped construction site.
Machine-learning systems, which excel at pattern matching, are terrible at extrapolation – transferring what they have learned from one domain to another. For example, they can identify a snowman on the side of the road as a potential pedestrian, but they cannot grasp that it is in fact an inanimate object highly unlikely to cross the road.
"When you're a kid, you learn that a hot stove is hot," says Dr. Cummings. But AI isn't great at transferring what it knows about one stove to another, she added. "It has to learn that for every stove in existence."
Some MIT researchers are trying to close this gap by going back to basics. They are working to understand, from an engineering point of view, how babies learn, in hopes of translating that back into future AI systems.
"Billions of dollars have been spent in the self-driving industry, and they're not going to get what they thought they'd get," said Dr. Cummings. That doesn't mean we won't eventually end up with some form of "self-driving" car, she says. It just "won't be what everybody promised."
But, she adds, small, low-speed shuttles operating in well-mapped areas crammed with sensors such as lidar could let engineers reduce uncertainty to a level that regulators and the public would find acceptable. (Think shuttles to and from the airport, running in specially built lanes, for example.)
Mr. Fairfield of Waymo says his team sees no fundamental technological barriers to making self-driving robotaxi services like his company's widespread. "If you're too conservative and you ignore reality, you'll say it's going to take 30 years – but it just won't," he added.
A growing number of experts suggest that the path to full autonomy ultimately won't rest primarily on AI. Engineers have solved countless other complex problems – including landing spacecraft on Mars – by dividing a problem into small pieces so that smart people can design systems to handle each part. Raj Rajkumar, a professor of engineering at Carnegie Mellon University with a long history of working on self-driving cars, is optimistic about this route. "It's not going to happen overnight, but I can see the light at the end of the tunnel," he said.
This is the main strategy Waymo has pursued to get its autonomous shuttles on the road, and it's why, "we don't think you need full AI to solve the driving problem," says Mr. Fairfield.
Mr. Urmson of Aurora says his company mixes AI with other technologies to create systems that can apply general rules to novel situations, much as a human would.
Getting to autonomous vehicles the old-fashioned way, with tried-and-true systems engineering, would still mean spending huge sums to equip our roads with transponders and sensors to guide and correct the robot cars, says Dr. Mitchell. And such vehicles would remain limited to certain areas and certain weather conditions, with human teleoperators on standby in case things go wrong, she added.
This Disney-animatronic version of our self-driving future would be a far cry from creating an artificial intelligence that could simply be dropped into any vehicle, instantly replacing a human driver. It could mean safer human-driven cars, and fully autonomous vehicles in a handful of closely monitored areas. But it won't be the end of car ownership – not anytime soon.
Write to Christopher Mims at firstname.lastname@example.org
Copyright © 2020 Dow Jones & Company, Inc. All rights reserved.