
Adversarial Human: The State of AI Security and Privacy

  • 15:40
  • Wed
  • 15 Nov
Briefing Stage 1


Over the next half-decade, the global security and privacy landscape is set to undergo a radical transformation, largely driven by the pervasive integration of Artificial Intelligence (AI) and data-centric architectures into software systems. AI adoption now spans all sectors, including cloud, edge computing, healthcare, autonomous vehicles, smart cities, and defense, where it has delivered tremendous business value by optimizing and improving decision loops.

However, this new computing paradigm introduces a novel set of security and privacy challenges that do not exist in classical software. In academia, over 4,000 papers have investigated and demonstrated attacks on AI systems, with the goal of understanding the dimensionality of AI security and its unique implications. Similarly, major tech companies like Google, Intel, and Microsoft are taking steady steps to address these issues as part of their responsible AI strategies. Yet a significant portion of the industry remains unprepared, lacking the necessary tools, processes, and policies in their security development cycle. These gaps become especially evident in light of recently introduced regulatory requirements such as the European Union AI Act.

In this session, we will take a deep dive into the AI security and privacy domain with two primary objectives. First, we aim to provide a solid base of knowledge about the new threat vectors targeting AI technologies. Second, we will equip organizations and their security teams with a practical approach to an AI security strategy for building, deploying, and assuring their AI investments.