Trustworthy and Privacy-Preserving Machine Learning Inference Services
Thang Hoang, Virginia Tech
Abstract: TBD

Bio: Dr. Thang Hoang is an Assistant Professor in the Department of Computer Science at Virginia Tech and a CCI Researcher. Prior to joining Virginia Tech, Thang was a Postdoctoral Fellow at Carnegie Mellon University (CMU), hosted by Prof. Elaine Shi, and a Research Associate at the University of South Florida (USF), hosted by Prof. Attila A. Yavuz. He received his PhD from USF in August 2020. Thang's research spans cybersecurity and applied cryptography, with interests in privacy, secure and verifiable computation, zero-knowledge proofs, fuzzy cryptography, and trustworthy machine learning.
Towards Robustness Analysis of AIGC Systems |
Kailong Wang, Huazhong University of Science and Technology
Abstract: The rapid advancement of AI technologies in user-oriented software systems has introduced novel challenges in ensuring system robustness. In this talk, I will first introduce Drowzee, our approach that combines logic programming and metamorphic testing to detect fact-conflicting hallucinations in LLMs. Drowzee constructs factual knowledge bases, represents facts as logical predicates, and applies reasoning rules to generate logically sound question-answer pairs for testing LLMs, using semantic-aware metamorphic oracles to identify potential hallucinations. Next, I will discuss our research on "glitch tokens": anomalous tokens produced by tokenizers that can compromise LLM response quality. We categorized glitch tokens, observed the symptoms LLMs exhibit when interacting with them, and developed GlitchHunter, an iterative clustering-based detection technique that outperformed three baselines on eight open-source LLMs, offering insights into mitigating tokenization-related errors.

Bio: Dr. Kailong Wang is an associate professor (with tenure) in the School of CSE at Huazhong University of Science and Technology (HUST). He is broadly interested in AI + security and in secure and private software engineering. He has published in top-tier conferences and journals such as OOPSLA, NDSS, MobiCom, TSE, TOSEM, FSE, ASE, ISSTA, and WWW.
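As a rough illustration of the metamorphic-testing idea this line of work builds on (not Drowzee's actual implementation), the sketch below asks a model the same factual question in a direct and a negated form and flags inconsistent answers; the ask_llm callable and the example fact are hypothetical stand-ins.

# Minimal sketch of metamorphic consistency checking for fact-conflicting
# hallucinations. NOT Drowzee's implementation; `ask_llm` is a hypothetical
# stand-in for whatever model API is under test.

def metamorphic_fact_check(subject, relation, obj, ask_llm):
    """Ask the model about a known fact in two logically equivalent forms
    and report whether its answers are mutually consistent."""
    q_direct = f"Is it true that {subject} {relation} {obj}? Answer yes or no."
    q_negated = f"Is it false that {subject} {relation} {obj}? Answer yes or no."

    a_direct = ask_llm(q_direct).strip().lower()
    a_negated = ask_llm(q_negated).strip().lower()

    # A consistent model flips its answer under negation; identical answers
    # to the direct and negated questions suggest a potential hallucination.
    return {a_direct, a_negated} == {"yes", "no"}

if __name__ == "__main__":
    stub = lambda question: "yes"  # toy model stub that always answers "yes"
    print(metamorphic_fact_check("Paris", "is the capital of", "France", stub))
    # -> False: the stub gives identical answers to both question forms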
Empowering Users to Detect and Prevent Cyber Threats Through UI Awareness
Jieshan Chen, CSIRO's Data61
Abstract: User interfaces (UIs) are vital in shaping user interactions with software, but they can also introduce significant cybersecurity risks. Deceptive design patterns, or "dark patterns," and phishing tactics exploit UIs to manipulate users into actions that compromise their data security. This talk explores our initial attempts to detect these deceptive practices in mobile apps, using machine learning to analyse design patterns that facilitate scamming, phishing, and privacy violations. We will also discuss future directions for improving the detection and prevention of these threats in apps.

Bio: Dr. Jieshan Chen is a research scientist and UI intelligence team lead at CSIRO's Data61, Australia. She received her Ph.D. in computer science from the Australian National University. She works at the intersection of software engineering and human-computer interaction, using human-centred techniques to examine and ensure responsible software development by design. She is currently working on dark pattern detection, data visualisation, UI design search and generation, and mobile application accessibility enhancement. She has published in top-tier conferences and journals such as ICSE, ASE, FSE, UIST, CHI, TOSEM, and USENIX Security.
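As a loose illustration of the kind of machine-learning pipeline such UI analysis might build on (not the speaker's actual system), the sketch below trains a tiny text classifier to flag UI strings that resemble deceptive prompts; the examples, labels, and use of scikit-learn are all assumptions made for illustration.

# Toy sketch: flagging potentially deceptive UI text with a simple
# bag-of-words classifier. Illustrative only; the examples and labels are
# invented and far smaller than any realistic dark-pattern dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

ui_texts = [
    "No thanks, I hate saving money",                    # confirmshaming
    "Your account will be deleted unless you act now",   # false urgency
    "Cancel my subscription",                            # neutral
    "Continue without creating an account",              # neutral
]
labels = [1, 1, 0, 0]  # 1 = potentially deceptive, 0 = neutral

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(ui_texts, labels)

# Predict on an unseen UI string resembling a confirmshaming prompt.
print(clf.predict(["No, I don't want to protect my data"]))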
Over the past decades, cybersecurity threats have been among the most significant challenges to social development, resulting in financial losses, privacy violations, damage to infrastructure, and more. Organizations, governments, and cyber practitioners increasingly leverage state-of-the-art artificial intelligence technologies to analyze cyber threats and attacks and to protect their data and services against them. Due to the complexity and heterogeneity of security systems, cybersecurity researchers and practitioners have shown growing interest in applying data mining methods to mitigate cyber risks in many security areas, such as malware detection and key player identification in underground forums. Protecting the cyber world requires more effective and efficient algorithms and tools capable of automatically and intelligently analyzing and classifying the massive amounts of data arising in complex cybersecurity scenarios. This workshop will focus on empirical findings, methodological papers, and theoretical and conceptual insights related to data mining in the field of cybersecurity.
The workshop aims to bring together researchers from the cybersecurity, data mining, and machine learning communities. We encourage a lively exchange of ideas and perspectives throughout the workshop, centred on cybersecurity and data mining. Topics of interest include, but are not limited to:
Ali Babar, University of Adelaide
Battista Biggio, University of Cagliari
Elisa Bertino, Purdue University
Hsinchun Chen, University of Arizona
Yang Liu, Nanyang Technological University
Xinming (Simon) Ou, University of South Florida

Sin Gee Teo, Institute for Infocomm Research
RuiTao Feng, Singapore Management University
Reza Ebrahimi, University of South Florida

Rouzbeh Behnia, University of South Florida
Jason Pacheco, University of Arizona
Yulei Sui, University of New South Wales

Guangdong Bai, University of Queensland

Mohamed Ragab, Technology Innovation Institute

Yuekang Li, University of New South Wales
Shangqing Liu, Nanyang Technological University