ChatGPT will be used to enhance the cars’ voice-command capabilities. With the new addition, Mercedes owners can expect a wider range of commands, more natural and fluid conversation, and the ability to interact with other applications to make bookings such as dinner reservations and movie tickets, amongst other features.
That all sounds advanced and exciting for users, but what about the risks? Dennis Kengo Oka, Senior Principal Automotive Security Strategist at Synopsys Software Integrity Group, shares his thoughts on this new change. His full comments are shared in the paragraphs below.
Did you know that the automotive industry is working towards improving the user experience in cars and enabling a more seamless transition from smart homes to smart cars? The same digital assistants you use in your smart home have also been available in your car for the past few years.
With the development of powerful AI technologies, there are new opportunities that the automotive industry can seize.
Based on these powerful AI language models, automakers can build their own digital assistants and train the AI model with automotive-specific information.
Similar to how ChatGPT was trained on sources such as Linux and Unix man pages and C and Python code, one can imagine an automaker training its digital assistant on the car’s user manual, as well as on information supporting common use cases such as route planning, integration with smart homes and devices, and charging.
This would allow a user to easily ask about a warning light blinking on the dashboard, plan an efficient route to the airport, open the garage door, connect a device, or find and reserve a charging spot, all without having to dig through a large user manual or manage multiple devices and systems.
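To make the idea concrete, the manual-grounded question answering described above can be sketched as a retrieval step: match the driver’s question against indexed sections of the user manual and answer from the best-matching section. The section titles, texts, and word-overlap scoring below are illustrative assumptions, not any automaker’s actual implementation.

```python
import re

# Hypothetical user-manual snippets an automaker might index for its assistant.
MANUAL_SECTIONS = {
    "warning lights": "A flashing coolant warning light means the engine is "
                      "overheating; stop the vehicle safely and let it cool.",
    "charging": "To start charging, open the charge port flap and insert the "
                "connector until it clicks.",
    "garage door": "Pair the in-car transmitter with your garage door opener "
                   "using the overhead console buttons.",
}

def _words(text: str) -> set:
    """Lowercase word set, punctuation stripped."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve_section(question: str) -> str:
    """Return the title of the manual section whose words best overlap the question."""
    q_words = _words(question)
    best_title, best_score = "no match", 0
    for title, text in MANUAL_SECTIONS.items():
        score = len(q_words & _words(title + " " + text))
        if score > best_score:
            best_title, best_score = title, score
    return best_title

print(retrieve_section("Why is my coolant warning light flashing?"))  # → warning lights
```

A production assistant would of course use a trained language model rather than keyword overlap, but the principle is the same: ground responses in vetted automotive documentation rather than letting the model answer freely.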
But what about the risks? It is extremely important to consider what type of training data is used, and to apply policies that define what information responses are allowed to contain.
Similar to how early use of ChatGPT with limited restrictions allowed users to write malware and hacking tools, or to obtain information that could be used with malicious intent, a digital assistant in your car could also be abused to gain harmful information, e.g., how to clone keys or run unauthorised commands, which could lead to attackers stealing cars.
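The response policies mentioned above can be sketched as an output filter that checks the assistant’s reply before it reaches the user. The blocked topics and refusal message here are assumptions for illustration; a real deployment would rely on far more robust content classification than simple substring matching.

```python
# Illustrative blocklist of topics an automaker would consider harmful,
# e.g. key cloning or anti-theft bypass. Entries are assumptions.
BLOCKED_TOPICS = (
    "clone key",
    "key cloning",
    "bypass immobiliser",
    "unlock without key",
)

REFUSAL = "Sorry, I can't help with that."

def policy_check(response: str) -> str:
    """Return the response unchanged, or a refusal if it touches a blocked topic."""
    lowered = response.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return REFUSAL
    return response

print(policy_check("Step one to clone key fobs is..."))  # → Sorry, I can't help with that.
print(policy_check("Your tyre pressure is low."))        # → Your tyre pressure is low.
```

Filtering at the output stage complements restrictions on training data: even a well-curated model can be coaxed into unwanted answers, so a final policy gate provides defence in depth.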
In summary, while deploying a digital assistant in your car would provide many benefits and definitely improve the user experience, it is also important to consider the risks.
Therefore, it’s imperative that automotive organisations consider what training data is used, and place restrictions on the content of responses in order to prevent abuse or actions with malicious intent.