AI is revolutionizing automotive cybersecurity by enhancing safety and connectivity in increasingly software-defined vehicles while addressing new risks, writes Jay Yaneza, Cybersecurity Architect, VicOne
As in other industries and sectors today, the application of artificial intelligence (AI) touches the automotive industry throughout the entire product lifecycle, from concept and design to decommissioning. With that comes a new generation of vehicles designed with embedded components that take advantage of AI. Vehicles are becoming software-defined and always “on” (connected), equipped with more computing power embedded in systems-on-a-chip (SoCs) and designed to enable features that we have not seen in the last 10 years but that might soon become standard.

Not too long ago, advanced driver-assistance systems (ADAS) such as collision warning, blind-spot monitoring, backup cameras, lane-keep assist, and emergency braking came at extra cost or were reserved for higher trim levels. Today, many of these features have been commoditized, and some have even become standard where they affect safety. Since 2018, for example, backup or reverse cameras have been required by US federal law on every new vehicle offered for sale, having been deemed too important to remain optional. Car manufacturers (OEMs), their suppliers, and lawmakers alike are already balancing customer demand and adoption, speed of development, cybersecurity, and safety, and that balancing act will only become harder with the global adoption of AI.

One combination of topics that is of particular interest is automotive cybersecurity and safety, and it is here that the benefits of AI are already easy to perceive. The application of AI to cybersecurity within information technology (IT) in general is widely known and already has strong use cases, but what of its application to automotive cybersecurity? The main goal of designing a cybersecure IT infrastructure is to minimize (or mitigate) incidents and breaches, focusing on the organization’s considerations of confidentiality, integrity, and availability. Automotive cybersecurity, on the other hand, will always have cybersecurity goals and safety goals that go together. With this in mind, the question becomes: How can AI assist in the cybersecurity (and safety) goals of the automotive industry?
General Availability: Operation and Maintenance
At this phase of the automotive lifecycle, several considerations come into play, such as:
- Depending on its specification, a vehicle might generate different types of logs.
- Novel threats and vulnerabilities might be uncovered.
- Wide-scale proactive measures might be needed to get ahead of increasing risks.
These considerations, and more, are pressing concerns given the increasing technological complexity of today’s vehicles, and steps need to be taken when systematic failures and random hardware failures fall within the scope of functional safety. Given that an automotive product lifecycle spans 7 to 10 years and this phase starts after the initial 1 to 3 years, it requires constant care and feeding because potential failures and new threat discoveries are unpredictable.
Again, there is a chance that AI can augment human expertise. Processing and dissecting logs, converting them, presenting meaningful content, and, finally, contextualizing the findings would give a vehicle security operations center (VSOC) analyst a head start. Initial questions can be immediately surfaced for the VSOC analyst to begin analysis: Is this novel? Has this been seen before? How is this normally managed, and what are the next steps? What were the software bill of materials (SBOM), hardware bill of materials (HBOM), vulnerabilities, and known characteristics of the vehicle at the time this happened? Surfacing these immediately would minimize the need to consult multiple sources for the necessary data, reduce extensive tool sprawl, and, most importantly, decrease the time required to decide on the next steps.
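To make this concrete, below is a minimal sketch of what such a triage step could look like, assuming the logs have already been normalized into simple event records. The data structures, field names, and the triage_event function are illustrative placeholders rather than any particular product’s API; the point is only that the enrichment an analyst would otherwise perform across several tools can be assembled automatically before the case reaches them.

```python
# Hypothetical sketch: enriching a raw vehicle log event with the context a VSOC
# analyst would otherwise gather by hand. Data sources and field names are placeholders.
from dataclasses import dataclass, field


@dataclass
class TriageContext:
    event: dict                                   # normalized log event from the vehicle
    seen_before: bool = False                     # has this signature been observed before?
    matching_incidents: list = field(default_factory=list)
    affected_components: list = field(default_factory=list)  # SBOM/HBOM entries at event time
    suggested_next_steps: list = field(default_factory=list)


def triage_event(event: dict, incident_history: list, sbom_snapshot: list) -> TriageContext:
    """Assemble the context needed to answer the analyst's first questions:
    Is this novel? Has it been seen before? What was on the vehicle at the time?"""
    ctx = TriageContext(event=event)

    # "Has this been seen before?" -- match against prior incidents by signature.
    ctx.matching_incidents = [i for i in incident_history if i["signature"] == event["signature"]]
    ctx.seen_before = bool(ctx.matching_incidents)

    # "What was on the vehicle?" -- components recorded for this VIN at the event time.
    ctx.affected_components = [c for c in sbom_snapshot if c["vin"] == event["vin"]]

    # A model (or even a simple rule set) can propose next steps from prior handling.
    if ctx.seen_before:
        ctx.suggested_next_steps = [i["resolution"] for i in ctx.matching_incidents]
    else:
        ctx.suggested_next_steps = ["Escalate as a novel finding for deeper analysis"]

    return ctx
```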
An AI implementation can be taken even further: the VSOC analyst can create a timely, high-quality feedback loop between two separate phases of the automotive lifecycle (from operation and maintenance back to development) for newer use cases that have not been considered or evaluated previously. The automotive engineering team can then take such input, assured that it is not just a random claim but that attack feasibility has been confirmed with a validated attack path, and assess the impact of such an event. With AI, coordination between the two distinct groups that oversee these separate phases can be realized seamlessly once both the data and the necessary procedures are synchronized.
Calm Before the Storm: Between Development and Production
The automotive engineering team is always under pressure to develop the vehicle to a certain specification, ship it on time, and sustain a make and model, all while knowing a refresh is already on the way. In this stage, a vehicle component’s firmware would be analyzed to determine whether it contains known vulnerabilities and weaknesses. Regardless of which scanner is used, suppose the resulting scan produces about 100 findings split between known CVEs (Common Vulnerabilities and Exposures) and common CWEs (Common Weakness Enumeration entries): What’s next? With a list of about 100 things to fix, can the automotive engineering team get any assistance in prioritizing and mitigating these vulnerabilities and weaknesses?
Faced with such results, the automotive engineering team would have additional questions, such as:
- Is there readily available threat intelligence out there that can be easily referenced for a fix?
- What is the best route to mitigating these defects so as to reduce residual risk?
- Would there be changes in code, or should another software package be considered instead?
- Are these defects introduced or caused by a specific Tier 1 supplier?
- Are these defects novel or specific to this firmware, or should I be concerned about other vehicles that are on the road already?
These questions are perfect for an AI implementation that is trained with the appropriate information, has access to the necessary up-to-date resources, and has helper tools for extremely specific tasks. Even further, an AI implementation that has a predictive or proactive nature would lead the automotive engineering team in the right direction for issue resolution. With this human-led and machine-assisted approach, the automotive engineering team can be empowered to make effective and efficient decisions.
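As an illustration only, the prioritization step might look something like the sketch below. The scoring weights and the finding fields (exploit_available, network_reachable, safety_relevant) are assumptions made for the example rather than an established standard; a real program would anchor them to its own risk model, such as the attack feasibility and impact ratings from its threat analysis and risk assessment.

```python
# Hypothetical sketch: ranking ~100 scanner findings (CVEs and CWEs) so the
# engineering team sees the highest-risk items first. Weights are illustrative.
def priority_score(finding: dict) -> float:
    score = finding.get("cvss_base", 0.0)            # 0.0-10.0 severity baseline
    if finding.get("exploit_available"):             # a public exploit raises urgency
        score += 2.0
    if finding.get("network_reachable"):             # reachable over a vehicle interface
        score += 1.5
    if finding.get("safety_relevant"):               # tied to a safety goal
        score += 3.0
    return score


def prioritize(findings: list) -> list:
    """Return findings ordered from highest to lowest assessed priority."""
    return sorted(findings, key=priority_score, reverse=True)


# Example usage with two illustrative findings:
findings = [
    {"id": "CVE-2023-0001", "cvss_base": 7.5, "exploit_available": True,
     "network_reachable": True, "safety_relevant": False},
    {"id": "CWE-787", "cvss_base": 6.0, "exploit_available": False,
     "network_reachable": False, "safety_relevant": True},
]
for f in prioritize(findings):
    print(f["id"], round(priority_score(f), 1))
```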
In-Vehicle Digital Assistants, Anyone?
As we have already established, the automotive industry continues to incorporate recent technologies into vehicles, transforming the way people drive, experience, and connect with their vehicles. Powered by AI, vehicles are becoming even more connected on the digital plane. However, as technology advances, so does the emergence of new automotive cybersecurity threats.

One such security threat is AI prompt injection, a form of attack in which malicious inputs, or “prompts,” are fed into an AI system to manipulate its behavior. For example, a vehicle’s voice-activated digital assistant, usually a feature of the in-vehicle infotainment (IVI) system, might be tricked into executing unauthorized commands. Depending on the capabilities the digital assistant is designed to understand and execute, a disguised voice command could trick the car into executing a function when it is unsafe to do so. Further examples are easy to imagine, and most would land somewhere between simple nuisance and obvious safety risk. As a safety risk example, an innocuous command to play some music might be injected by attackers with a malicious voice command to disable cruise control; for a vehicle in high-speed motion with a driver heavily reliant on this feature, that could severely jeopardize the safety of the driver and other people on the road.
Fortunately, such a safety risk requires certain conditions to be met before it can be fully realized. But even setting aside AI prompt injection as the vector of potential compromise, the scenario raises important questions: How is it possible that such a command is not filtered out in a high-risk condition? Why are there no safety controls safeguarding and separating critical functions from simple end-user interactions? Why are there no precautions to check the vehicle’s different sensors beforehand? And why was this not caught as a test use case before it was implemented as a feature in the first place? Unfortunately, the last question is a tricky one. While there are test use cases for known factors, test use cases that concern newer technologies, such as AI implemented in digital assistants, might not have been discovered yet, as these technologies are themselves in constant development.
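One mitigation implied by these questions is a policy layer that checks the vehicle’s state before the assistant is allowed to act. The sketch below is illustrative only: the command names, the speed threshold, and the vehicle_state fields are assumptions, and a production system would take these signals from validated vehicle buses rather than from the assistant itself.

```python
# Hypothetical sketch: a gate between the digital assistant and vehicle functions.
# Safety-relevant commands are refused unless the vehicle state permits them.
SAFETY_RELEVANT_COMMANDS = {"disable_cruise_control", "open_doors", "release_parking_brake"}


def allow_command(command: str, vehicle_state: dict) -> bool:
    """Return True only if the command is safe to execute in the current state."""
    if command not in SAFETY_RELEVANT_COMMANDS:
        return True                                   # e.g., "play_music" passes through
    speed_kph = vehicle_state.get("speed_kph", 0)
    driver_confirmed = vehicle_state.get("driver_confirmed", False)
    # Refuse safety-relevant commands at speed unless the driver explicitly confirms
    # through a channel the assistant cannot spoof (e.g., a steering-wheel button).
    return speed_kph < 5 and driver_confirmed


# Example: an injected "disable_cruise_control" at highway speed is rejected.
print(allow_command("play_music", {"speed_kph": 110}))              # True
print(allow_command("disable_cruise_control", {"speed_kph": 110}))  # False
```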
Conclusion
With cybersecurity and safety goals intertwined, the automotive industry is unlike many others: it is already human-centric and principled around safety. There are use cases where AI can be applied today, and one of the key factors to consider in its application is a human-led, machine-assisted approach. Along with the obvious benefits, realizing AI within the automotive industry, whether as a feature or as an assisting technology in any phase of the product lifecycle, has the potential to introduce future risks.
It is also worth noting that ISO/SAE 21434, while not prescribing a specific technology or solution, was used to set the stage for the use cases discussed here, mostly around continuous cybersecurity activities (cybersecurity monitoring, cybersecurity event evaluation, vulnerability analysis, and vulnerability management). However, the use cases are not limited to those activities alone, and there are far more expansive applications of AI for automotive cybersecurity.
Disclaimer:
The views expressed by the author are his own and do not necessarily reflect the views of FMM magazine.

Jay Yaneza
Cybersecurity Architect,
VicOne