
Swedish researcher cuts through the hype around autonomous vehicles

Linköping University’s Michael Felsberg makes realistic predictions about when self-driving cars will become available

The Knut and Alice Wallenberg Foundation supports several innovative projects in Sweden, and one of the most notable is the Wallenberg AI, Autonomous Systems and Software Programme (Wasp), the country’s biggest research project to date.  

Michael Felsberg is part of that project. A professor at Sweden’s Linköping University, Felsberg is also head of the university’s computer vision laboratory. Much of his research in artificial intelligence (AI) is funded as part of Wasp.  

While Felsberg sits on several committees that support the overall Wasp project, his own work is focused on perception and machine learning. He has been conducting research in AI for more than two decades and has observed first-hand the cycles of funding and general interest in areas of scientific research – especially those that capture public attention.  

A good example is the research around autonomous vehicles, which, according to Felsberg, started more than 40 years ago. Trials of self-driving cars began in the first half of the 20th century, he says, and serious prototypes were developed by Ernst Dickmanns in the 1980s. But most people didn’t start hearing about the possibility of self-driving cars until the early 2000s.

And then, just 15 years ago, there was so much media hype around the topic that investors began to lose interest in academic research in the field because it no longer seemed necessary. That thinking was strongly influenced by press announcements from companies – especially from emerging brands such as Tesla. Industrial players and the media seemed to be implying that all that was left to do was fine-tuning and implementation – and that manufacturers would be rolling out the first self-driving cars in the very near future. 

Hype cycles wreak havoc on research funding 

“That’s typical with new technology,” says Felsberg. “Companies do a lot of PR and oversell their contributions to the domain. This leads to a general misunderstanding among the public, which in turn leads to depression within the research area. Too many investors buy into the hype and mistakenly believe it is no longer an area for academic research – that it’s now in the hands of industry. When investors start thinking like that, nobody dares to ask for funding.

“But then, what is also typical is that some major failure occurs in a commercial system – or a breakthrough occurs in the little bit of academic research that is still going on despite the depression. Then everybody becomes concerned about what is perceived as a new problem, which in fact, serious researchers had been recognising as a problem all along. Suddenly, people call for more academic research to figure out a solution.”

Felsberg adds: “What is lacking in our society is an appreciation for classical academic research. Doing basic research – enabling all these breakthroughs – means doing a lot of groundwork. This takes many years, and many generations of PhD students.”


For Felsberg, these cycles of bashing an area and then overhyping it are bad for scientific development. Progress would be better served if the peaks and valleys were levelled off to maintain a steady pace in the fields that attract so much attention.

Sometimes serious researchers, who are patiently plugging away at major problems, speak up – but their voices are often no more than a whisper amidst the market noise.

For example, in 2008, in an interview for Swedish television, Felsberg was asked if his children would ever need a driver’s licence. His response was that they would certainly need a licence because fully autonomous vehicles – that is, level 5 autonomous vehicles – would not be available within 10 years, despite what companies were saying at the time. Nobody paid much attention to his prediction, even though it turned out to be spot on.

Now, in 2022, Felsberg still believes that although many of the easiest problems for autonomous vehicles have been solved, there are still a lot of hard problems that are nowhere near resolution. Level 5 automation, in which vehicles do not require human attention, is still a long way off. 

Still many issues to overcome 

According to Felsberg, several big problems still stand in the way of fully autonomous vehicles – image classification, for example. “We know for each image, this is a bicycle, this is a dog and this is a car,” he says. “The images are hand-labelled by humans and the annotated images are used to train image recognition systems.”

The current generation of AI algorithms requires a period of supervised learning before a system can be deployed. In preparation for this phase, an army of annotators is needed to label the images for a given application. Images are annotated with not only the name of the class of objects the algorithm should look for, but also the location of the object within the image.  
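To make that concrete, here is a minimal sketch, written in PyTorch with entirely made-up file and class names, of what a single hand-made annotation might look like and how it drives one supervised training step. It illustrates the general technique only, not Felsberg’s own pipeline.

    import torch
    import torch.nn as nn

    # One hand-made annotation: what the object is, and where it sits in
    # the image. Real datasets contain millions of such records, each
    # produced by a human annotator.
    annotation = {
        "image_file": "frame_000123.png",   # hypothetical file name
        "class_label": "bicycle",           # what the object is
        "bbox": (412, 187, 520, 340),       # where it is (x0, y0, x1, y1)
    }

    CLASSES = ["bicycle", "dog", "car"]

    # A toy classifier standing in for a real perception network.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, len(CLASSES)))
    optimiser = torch.optim.SGD(model.parameters(), lr=0.01)

    # One supervised step: the human-provided label is the training target.
    image = torch.rand(1, 3, 64, 64)        # stand-in for the annotated image
    target = torch.tensor([CLASSES.index(annotation["class_label"])])
    loss = nn.functional.cross_entropy(model(image), target)
    loss.backward()
    optimiser.step()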

For large-scale industrial use of AI, this amount of annotation is impractical – it should at least be possible to provide a sequence of images that have a car in them without having to indicate where the car is. It should also be possible for an algorithm to recognise a partially obscured object – for example, a man standing behind a bench with only his upper body visible should be recognised as a man. While recognition of partially obscured objects is a subject of ongoing basic research, it is not currently ready for production. 
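The difference Felsberg points to is essentially the difference between these two kinds of record, again with hypothetical field names: a full annotation pins down the location of the object, while a weak label only says that a car appears somewhere in the sequence.

    # Fully supervised: class and location, both drawn by a human annotator.
    full_annotation = {
        "image": "frame_0042.png",
        "objects": [{"class": "car", "bbox": (102, 80, 390, 260)}],
    }

    # Weakly supervised: each frame is only known to contain a car somewhere.
    weak_annotations = [
        {"image": f"frame_{i:04d}.png", "contains": ["car"]}
        for i in range(40, 60)
    ]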

For autonomous vehicles to work on a large scale, algorithms should be able to recognise new classes of objects without having to undergo another round of supervised training. It takes too much time and effort to re-label the huge volumes of data. It would be much better if the algorithm could learn to recognise the new class after it has been deployed. But researchers have yet to come up with a solid way of doing this process, which is referred to as “class incremental learning”. 

“Let’s say we have an image classification system that detects cars and suddenly we have a new type of vehicle like the e-scooter, which has become very popular recently,” says Felsberg. “The new class of object will not be recognised because it was not known at the time the system was built. But now we have to add it, which means going through supervised training once again. This is unacceptable. We really need to add the new class of objects on the fly.”
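The mechanical part of adding a class on the fly is easy enough to sketch. Assuming a PyTorch classifier, the final layer can be widened for the new e-scooter class while the old weights are kept, as below; the genuinely hard part, training the new output without degrading the old classes (known as catastrophic forgetting), is what remains unsolved.

    import torch
    import torch.nn as nn

    def add_class(old_head: nn.Linear) -> nn.Linear:
        """Widen a classifier head by one output, keeping the old weights."""
        new_head = nn.Linear(old_head.in_features, old_head.out_features + 1)
        with torch.no_grad():
            new_head.weight[:old_head.out_features] = old_head.weight
            new_head.bias[:old_head.out_features] = old_head.bias
        return new_head

    head = nn.Linear(512, 3)   # deployed system knows bicycle, dog, car
    head = add_class(head)     # the e-scooter appears: graft on a fourth output
    print(head.out_features)   # 4, but the new class still has to be trained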

Another issue is the sheer volume of training data and the amount of computation needed to process it. An enormous amount of energy is consumed in training AI systems because machine learning is often performed in a “brute force” manner.

“If AI is to be used on the scale needed for autonomous vehicles, it would be necessary to have more efficient hardware that consumes less energy during the machine learning process,” says Felsberg. “We would also need better strategies for machine learning, methods that work better than just parameter sweeping, which is what is done today.”
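Parameter sweeping means exhaustively retraining under every combination of settings. The toy calculation below, with made-up parameter names, shows why the energy bill grows so quickly: every extra knob multiplies the number of complete training runs.

    from itertools import product

    # A modest sweep over hypothetical training settings.
    grid = {
        "learning_rate": [0.1, 0.01, 0.001],
        "batch_size":    [32, 128, 512],
        "weight_decay":  [0.0, 1e-4, 1e-2],
        "depth":         [18, 50, 101],
    }

    configs = list(product(*grid.values()))
    print(len(configs))  # 81 full training runs for just four knobs

    for learning_rate, batch_size, weight_decay, depth in configs:
        pass  # a complete, energy-hungry training run would happen here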

Big legal and ethical issues remain unsolved 

“Another issue is continual learning or lifelong learning in AI systems,” says Felsberg. “Unfortunately, many mechanisms for machine learning cannot be used in this incremental way. You would like to spend around 90% of the training time before you release the system and then the remaining 10% while it’s alive to improve it. But not all systems support this – and it also brings about some issues around quality control.

“I would say the most common version of how this would work is that a car supplier has software in the car that has been produced during a certain year, maybe when the car is initially built. Then, when the car is brought into service, it gets new software. Quite possibly, the machine learning methods have improved in the meantime – and in any case, they will have retrained the system to some extent. They will push the software update into the car, and that will include the results of the new training.”
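A rough sketch of that split, with hypothetical names throughout: the bulk of the training happens before release, and each over-the-air update ships the weights produced by a smaller round of retraining on newly gathered data.

    import torch
    import torch.nn as nn

    model = nn.Linear(512, 3)  # stand-in for the car's perception network

    def train(model, get_batch, steps, lr):
        optimiser = torch.optim.SGD(model.parameters(), lr=lr)
        for _ in range(steps):
            x, y = get_batch()
            loss = nn.functional.cross_entropy(model(x), y)
            optimiser.zero_grad()
            loss.backward()
            optimiser.step()

    def get_batch():  # toy stand-in for a real training data source
        return torch.rand(8, 512), torch.randint(0, 3, (8,))

    train(model, get_batch, steps=900, lr=0.01)   # ~90%: before the car ships
    torch.save(model.state_dict(), "release_v1.pt")

    # Later, in service: a smaller retraining round, pushed to the fleet
    # as part of a software update.
    train(model, get_batch, steps=100, lr=0.001)  # ~10%: after deployment
    torch.save(model.state_dict(), "ota_update_v2.pt")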

Felsberg adds: “It is not clear how these upgrades will be certified and where liability lies when the inevitable mistakes occur. How do you do a quality check on a system that is continuously changing?”

“Most of the hard problems are revisited multiple times before they are really solved”
Michael Felsberg, Linköping University

Ultimately, cars will upload new data to the cloud to be used for training. The advantage of this approach will be the large quantity of new data and the shared learning. But here again, there are challenges around quality assurance, as well as around protecting the privacy of the car owner.

“Associated with quality checks is the idea of an AI being able to provide a confidence level, or uncertainty, when it makes a decision,” says Felsberg. “You want the system to make a decision and indicate a confidence level, or an estimated probability that it is right. We would also like to know the reason a system made a certain decision. This second concept is called explainable AI. We want to both understand what is happening in this system and we would like that system to tell us how it made the decision and how certain it is about its decision.

“We have identified a number of these very fundamental issues that are very hard to address. There will not be immediate progress on all these fronts within the next two years. Some of them might last until the next loop of the hype. Maybe in seven years, there will be a new hype of machine learning after a period of depression in between. Then people will still work on these problems. That’s not unusual, though – most of the hard problems are revisited multiple times before they are really solved.”
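The simplest version of the confidence estimate Felsberg describes is to read the network’s softmax output as a probability, as in the sketch below. It is worth noting that raw softmax scores are known to be overconfident, which is one reason calibrated uncertainty estimation is itself open research.

    import torch

    CLASSES = ["bicycle", "dog", "car"]

    logits = torch.tensor([2.1, 0.3, 1.7])  # raw network outputs for one image
    probs = torch.softmax(logits, dim=0)    # turn scores into probabilities

    confidence, index = probs.max(dim=0)
    print(f"decision: {CLASSES[index.item()]}, confidence {confidence.item():.2f}")
    # A cautious system would flag low-confidence decisions for a human
    # or a fallback behaviour instead of acting on them directly.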

Felsberg adds: “These are just some of the open problems today – and we were already working on them before the most recent big hype.”

Society’s insistence on autonomous vehicles may prevail 

During the big hypes, the general public thinks that because there has been huge progress, research is no longer required. This attitude is toxic because implementation may start before the technology is ready.  

And these are only the technical aspects of autonomous vehicles. Just as many ethical and liability questions remain to be resolved. When is the driver responsible, and when is the manufacturer? These issues are in the hands of insurance companies and law-makers – academic researchers already have enough work to do.

According to Felsberg, the Knut and Alice Wallenberg Foundation is a patient investor. It tries to counter the big hypes and smooth out the funding landscape, supporting basic research as a whole even during periods when a topic is out of fashion, and it relies on experts in the respective areas to judge where it is important to invest. In this way, the foundation was aware of many requirements before they became publicly known in the media.

“A good strategy for research is to build technologies that companies can use 10 years later to develop products that change the world,” says Felsberg. 

“Regarding whether a child born today will ever need a driver’s licence, that will be about 15 or 16 years from now, which is about two hype cycles into the future. The technology still won’t be ready, but even if it is not sufficiently mature to do the job of autonomous driving everywhere, I believe that the societal need for autonomous vehicles and people’s expectations will have grown so much by then that companies will force it to work anyway.”

Felsberg concludes: “The technology will not be completely ready, but it will be put to use with all its deficits. There will be certain limitations and there will be workarounds to avoid the unsolved problems. Society will insist – and this time it will prevail.”
