[2025-Sep-17] Security and Latency Challenges in Deep Neural Networks: Understanding Adversarial and Latency Attacks

Institute of Information Systems and Applications

Speaker:

Dr. Erh-Chung Chen

Software Engineer, Siemens R&D

Topic:

  Security and Latency Challenges in Deep Neural Networks: Understanding Adversarial and Latency Attacks

Date:

13:20-15:00 Wednesday 17-Sep-2025

Location:

Delta 103

Hosted by:

Prof. Che-Rung Lee


Abstract

As deep neural networks (DNNs) are increasingly deployed in sensitive and security-critical applications, ensuring their robustness and reliability has become more important than ever. One significant threat comes from adversarial attacks, where small, often imperceptible perturbations to the input can mislead a model into making incorrect predictions. Such vulnerabilities raise serious concerns about the safety of using DNNs in real-world scenarios, particularly in areas like image classification and object detection.
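To make the idea of a small perturbation concrete, here is a minimal sketch in the spirit of gradient-sign attacks (such as FGSM) on a toy linear classifier. The weights, input, and budget below are made-up illustrative values, not material from the talk; real attacks target DNNs, where the perturbation is far smaller relative to the input range.

```python
import numpy as np

# Toy illustration of a gradient-sign adversarial perturbation on a
# hypothetical linear classifier: predict sign(w . x).
w = np.array([0.5, -0.3, 0.8, -0.1])   # hypothetical model weights
x = np.array([0.2, -0.1, 0.3,  0.0])   # input near the decision boundary

def predict(v):
    return int(np.sign(w @ v))

# For this model the gradient of the score w.r.t. x is just w, so stepping
# each coordinate by -eps * sign(w) maximally decreases the score under an
# L-infinity budget of eps.
eps = 0.25
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))  # 1 -1: the bounded shift flips the label
```

Although each coordinate moves by at most eps, the signed steps all push the score in the same direction, which is why high-dimensional models can be flipped by perturbations that look negligible per pixel.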

In this session, we will explore the principles behind adversarial attacks on image classifiers and object detectors, examining how these subtle manipulations can compromise model integrity. Additionally, we will discuss latency attacks, a lesser-known but impactful threat that aims to degrade system performance by significantly increasing inference time. These attacks can prevent applications from responding within acceptable time limits, potentially disrupting critical operations. Using object detection as a case study, we will demonstrate how latency attacks are carried out and highlight their implications for real-world deployment.
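One mechanism commonly described in the literature on latency attacks against detectors (an illustrative assumption here, not necessarily the talk's exact method) is inflating the number of candidate boxes that survive confidence filtering, so that the non-maximum suppression (NMS) stage dominates inference time. The sketch below counts pairwise IoU evaluations in a naive NMS as a rough latency proxy; `fake_detections` and all box values are made up for illustration.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as [x1, y1, x2, y2]."""
    x1 = max(a[0], b[0]); y1 = max(a[1], b[1])
    x2 = min(a[2], b[2]); y2 = min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def nms_work(boxes, scores, iou_thresh=0.5):
    """Naive O(n^2) non-maximum suppression. Returns the kept indices and
    the number of pairwise IoU evaluations, a rough proxy for latency."""
    order = np.argsort(scores)[::-1]
    suppressed = np.zeros(len(boxes), dtype=bool)
    keep, work = [], 0
    for i in order:
        if suppressed[i]:
            continue
        keep.append(int(i))
        for j in order:
            if j == i or suppressed[j]:
                continue
            work += 1                       # one IoU evaluation
            if iou(boxes[i], boxes[j]) > iou_thresh:
                suppressed[j] = True
    return keep, work

def fake_detections(n):
    # n disjoint unit boxes: nothing overlaps, so NMS suppresses nothing
    # and must compare every surviving box against every other one.
    boxes = np.array([[2.0 * i, 0.0, 2.0 * i + 1.0, 1.0] for i in range(n)])
    scores = np.linspace(0.9, 0.5, n)
    return boxes, scores

_, work_small = nms_work(*fake_detections(50))    # normal-looking scene
_, work_large = nms_work(*fake_detections(500))   # flooded "sponge" input
print(work_small, work_large)  # 2450 vs 249500: ~100x more work
```

A 10x increase in candidate boxes costs roughly 100x more NMS work in this naive implementation, which is why an attacker who can inflate the candidate count can push a detector past its response deadline without ever changing the final labels.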

Bio.

Erh-Chung Chen received his Ph.D. from National Tsing Hua University in 2025. His current research interests include adversarial defenses in image classification and object detection.

All faculty and students are welcome to join.