The Workshop on Assured Autonomous Systems (WAAS) aims to address the gap between theory-heavy autonomous systems and algorithms and the privacy, security, and safety of their real-world implementations. Advances in machine learning and artificial intelligence have shown great promise in automating complex decision-making processes across the transportation, critical infrastructure, and cyber infrastructure domains. Practical implementations of these algorithms require significant systems engineering and integration support, especially as they interact with the physical world. This integration is fraught with artificial intelligence (AI) safety, security, and privacy issues.
The primary focus of this workshop is the (1) detection of, (2) response to, and (3) recovery from AI safety, security, and privacy violations in autonomous systems. Key technical challenges include discriminating between application-layer data breaches and benign process noise, responding to breaches and failures in real-time systems, and recovering from decision-making failures autonomously.
Dear Workshop Attendees, as you may have seen on the main website, S&P is moving to an all-digital conference experience. We have reached out to authors via email and will provide updates as needed.
| Time | Title | Speaker | Affiliation |
|------|-------|---------|-------------|
| 9:15 | Keynote | Sandeep Neema | DARPA |
| 10:00 | Case Study: Safety Verification of an Unmanned Underwater Vehicle | Diego Manzanas Lopez | Vanderbilt |
| 10:20 | Mission Assurance for Autonomous Undersea Vehicles | Karl Siil | JHU APL |
| 11:00 | A Smart City Internet for Autonomous Systems | Gregory Falco | Stanford |
| 11:20 | Automated Decision Systems for Cybersecurity and Infrastructure Security | Luanne Chamberlain | JHU APL |
| 11:40 | A Privacy Filter Framework for Internet of Robotic Things Applications | Zahir Alsulaimawi | Oregon State |
| 12:00 | A Capability for Autonomous IoT System Security: Pushing IoT Assurance to the Edge | Jeffrey Chavis | JHU APL |
| 13:20 | Trusted Confidence Bounds for Learning Enabled Cyber-Physical Systems | Dimitrios Boursinos | Vanderbilt |
| 13:40 | Fooling a Deep-Learning-Based Gait Behavioral Biometric System | Honghao Guo | JHU |
| 14:00 | A Framework for the Analysis of Deep Neural Networks in Autonomous Aerospace Applications Using Bayesian Statistics | Yuning He | NASA Ames |
| 14:20 | Detecting Adversarial Examples in Learning-Enabled Cyber-Physical Systems Using Variational Autoencoder for Regression | Feiyang Cai | Vanderbilt |
| 15:00 | Out-of-Distribution Detection in Multi-Label Datasets Using Latent Space of β-VAE | Vijaya Kumar Sundar | Vanderbilt |
| 15:20 | A Non-Cooperative Game-Based Model for the Cybersecurity of Autonomous Systems | Farha Jahan | University of Toledo |
| 15:40 | Partially Observable Games for Secure Autonomy | Mohamadreza Ahmadi | Caltech |
| 15:55 | Using Taint Analysis and Reinforcement Learning (TARL) to Repair Autonomous Robot Software | Damian Lyons | Fordham University |
| 16:10 | Towards an AI-Based After-Collision Forensic Analysis Protocol for Autonomous Vehicles | Prinkle Sharma | UMass Dartmouth |
| 17:00 | End of Workshop | | |
WAAS seeks contributions on all aspects of AI safety, security, and privacy in autonomous systems. Papers that encourage the discussion and exchange of experimental and theoretical results, novel designs, and works in progress are preferred. Topics of interest include (but are not limited to):
- Engineering trusted AI software architectures
- Status of existing approaches to ensuring AI/ML safety and gaps to be addressed
- AI safety considerations and experience from industry
- Evaluating the safety of AI systems according to their potential risks and vulnerabilities
- Resilient and explainable deep learning, and interpretable machine learning
- Game-theoretic analysis of machine learning models
- Misuse of AI and deep learning
- Security and privacy
- Differential privacy and privacy-preserving learning and generative models
- Adversarial attacks on machine learning and defenses against adversarial attacks
- Attacks against deep learning and security of deep learning systems
- Theoretical foundations of machine learning security
- Formal verification of machine learning models and systems
- Defining and understanding AI vulnerabilities and exploitable bugs in ML systems
- Improving the resiliency of AI methods and algorithms to various forms of attack
You are invited to submit regular papers of up to six pages, or four pages for works in progress, including references. To be considered, papers must be received by the submission deadline. Submissions must be original work and may not be under review at another venue at the time of submission. Please mark all of your conflicts of interest when submitting your paper.
Papers must be formatted for US letter (not A4) size paper. The text must be formatted in a two-column layout, with columns no more than 9.5 in. tall and 3.5 in. wide. The text must be in Times font, 10-point or larger, with 11-point or larger line spacing. Authors are strongly encouraged to use the latest IEEE conference proceedings templates. Failure to adhere to the page limit and formatting requirements is grounds for rejection without review.
All accepted submissions will be presented at the workshop and included in the IEEE workshop proceedings. Due to time constraints, accepted papers will be selected for presentation as either a talk or a poster based on their review scores and novelty. Nonetheless, all accepted papers should be considered of equal importance.
One author of each accepted paper is required to attend the workshop and present the paper for it to be included in the proceedings.
Dr. Sandeep Neema joined DARPA in July 2016. His research interests include cyber physical systems, model-based design methodologies, distributed real-time systems, and mobile software technologies.
Prior to joining DARPA, Dr. Neema was a research associate professor of electrical engineering and computer science at Vanderbilt University, and a senior research scientist at the Institute for Software Integrated Systems, also at Vanderbilt University. Dr. Neema has participated in numerous DARPA initiatives throughout his career, including the Transformative Apps, Adaptive Vehicle Make, and Model-based Integration of Embedded Systems programs.
Dr. Neema has authored and co-authored more than 100 peer-reviewed conference and journal publications and book chapters. Dr. Neema holds a doctorate in electrical engineering and computer science from Vanderbilt University, and a master’s in electrical engineering from Utah State University. He earned a bachelor of technology degree in electrical engineering from the Indian Institute of Technology, New Delhi, India.
Lanier Watkins, Johns Hopkins University & Applied Physics Lab
Howard Shrobe, MIT Computer Science & Artificial Intelligence Lab
Chris Rouff, Johns Hopkins University Applied Physics Lab
Reza Ghanadan, Google
- Natalia Alexandrov, NASA Langley Research Center
- Yair Amir, Johns Hopkins University
- Saurabh Bagchi, Purdue University
- Raheem Beyah, Georgia Institute of Technology
- Anna L. Buczak, Johns Hopkins University Applied Physics Lab
- Yinzhi Cao, Johns Hopkins University
- Anupam Chattopadhyay, Nanyang Technological University, Singapore
- Joel Coffman, United States Air Force Academy
- Misty Davies, NASA Ames Research Center
- David Doria, HERE Technologies
- Abhishek Dubey, Vanderbilt University
- Ashutosh Dutta, Johns Hopkins University Applied Physics Lab
- Mike Hinchey, University of Limerick
- Dezhi Hong, University of California San Diego
- Yan Huang, Indiana University
- John S. Hurley, National Defense University
- Avinash Kalyanaraman, University of Virginia
- Gabor Karsai, Vanderbilt University
- Mykel Kochenderfer, Stanford University
- Xenofon Koutsoukos, Vanderbilt University
- Jose A. Morales, Carnegie Mellon University
- Sirajum Munir, Bosch Research and Technology Center
- Jared Oluoch, University of Toledo
- William H. Robinson, Vanderbilt University
- Yasser Shoukry, University of Maryland
- Houbing Song, Embry-Riddle Aeronautical University
- Tamim Sookoor, Johns Hopkins University Applied Physics Lab
- Roy Sterritt, Ulster University
- Jeremy Straub, University of North Dakota
- A. Selcuk Uluagac, Florida International University
- Kristen Walcott, University of Colorado
- Louis Whitcomb, Johns Hopkins University
- Paul Wood, Johns Hopkins University Applied Physics Lab
We are seeking sponsors for WAAS research and publicity! We wish to promote broad reach and understanding of assured autonomy; please consider sponsoring the conference. Note that IEEE SPW sponsorship applies to the entire IEEE SPW 2020 program, not to specific workshops. If you are interested in sponsoring WAAS specifically, please contact the WAAS workshop organizers directly.