Robustness of Deep Learning Systems Against Deception

Speaker: Prof. Ling Liu


Venue: Room D418, Science Building Complex No. 1


This talk provides a comprehensive analysis and characterization of state-of-the-art attacks and defenses. As more mission-critical systems incorporate machine learning and AI as an essential component of our social, cyber, and physical systems, such as the Internet of Things, self-driving cars, smart planets, and smart manufacturing, understanding and ensuring the verifiable robustness of deep learning becomes a pressing challenge. This includes (1) the development of formal metrics to quantitatively evaluate and measure the robustness of a DNN prediction with respect to intentional and unintentional artifacts and deceptions, (2) a comprehensive understanding of the blind spots and the invariants in DNN trained models and the DNN training process, and (3) the statistical measurement of the trust and distrust that we can place on a deep learning algorithm to perform reliably and truthfully. In this talk, I will use our cross-layer strategic teaming defense framework and techniques to illustrate the feasibility of ensuring robust deep learning through scenario-based empirical analysis.
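The intentional deceptions the abstract refers to can be as simple as a small, norm-bounded input perturbation that flips a model's prediction. As a minimal illustration (not taken from the talk), the following sketch applies an FGSM-style perturbation to a toy logistic-regression model; the weights, input, and epsilon are hypothetical values chosen for the example:

```python
import numpy as np

# Hypothetical trained logistic-regression "model" (weights and bias are
# made up for illustration, not from the talk).
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    """Probability that x belongs to the positive class."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x = np.array([0.5, -0.5, 1.0])   # a clean input, confidently positive
y = 1.0                          # its true label

# Gradient of the cross-entropy loss w.r.t. the input is (p - y) * w
grad_x = (predict(x) - y) * w

# FGSM-style step: move in the sign of the gradient, bounded by epsilon
# in the L-infinity norm.
epsilon = 0.8
x_adv = x + epsilon * np.sign(grad_x)

print(predict(x))      # high confidence on the clean input
print(predict(x_adv))  # the bounded perturbation flips the decision
```

A robustness metric of the kind described in point (1) would quantify, for example, the smallest epsilon at which such a flip occurs, averaged over a test set.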


Prof. Ling Liu is a Professor in the School of Computer Science at the Georgia Institute of Technology. She directs the research programs in the Distributed Data Intensive Systems Lab (DiSL), examining various aspects of large-scale data-intensive systems. Prof. Liu is an internationally recognized expert in the areas of Big Data Systems and Analytics, Distributed Systems, Database and Storage Systems, Internet Computing, Privacy, Security, and Trust. She has published over 300 international journal and conference articles and has received best paper awards from a number of top venues, including ICDCS 2003, WWW 2004, and the 2005 Pat Goldberg Memorial Best Paper Award, among others. Prof. Liu's research is primarily sponsored by NSF, IBM, and Intel.

Published: 2019-06-04 08:31:49
