AAAI-23 / IAAI-23 / EAAI-23 Invited Speaker Program

AAAI Community Meeting: Current State, Vision, and Engagement

Francesca Rossi (IBM)
Thursday, February 9, 8:45-9:30 AM

Francesca Rossi is an IBM Fellow and the IBM AI Ethics Global Leader, based at the T.J. Watson IBM Research Lab, New York, USA. Her research interests center on artificial intelligence, with a special focus on constraint reasoning, preferences, multi-agent systems, computational social choice, neuro-symbolic AI, cognitive architectures, and value alignment. She is also very active in the AI ethics space: she co-chairs the IBM AI Ethics Board, participates in many global multi-stakeholder initiatives on AI ethics, such as the Partnership on AI, the World Economic Forum, the United Nations ITU AI for Good Summit, and the Global Partnership on AI, and serves on the steering committee of the AAAI/ACM Conference on AI, Ethics, and Society. She is a fellow of both AAAI and EurAI, has served as president of IJCAI, and was Editor-in-Chief of the Journal of AI Research. Currently she is the president of AAAI.

AAAI-23 Invited Speakers

AAAI Award for AI for the Benefit of Humanity
Tuomas Sandholm (Carnegie Mellon University)
Thursday, February 9, 5:00-6:00 PM

Sebastien Bubeck (Microsoft Research)
Thursday, February 9, 6:00-7:00 PM

Josh Tenenbaum (Massachusetts Institute of Technology)
Friday, February 10, 5:00-6:00 PM

Sheila McIlraith (University of Toronto)
Saturday, February 11, 3:45-4:45 PM

Isabelle Augenstein (University of Copenhagen)
Saturday, February 11, 4:45-5:45 PM

Vincent Conitzer (Carnegie Mellon University)
Sunday, February 12, 8:30-9:30 AM

Anima Anandkumar (California Institute of Technology)
Sunday, February 12, 5:00-6:00 PM

Joint AAAI/IAAI-23 Speakers

Susan Murphy (Harvard University)
Friday, February 10, 6:00-7:00 PM

Sami Haddadin (Technical University of Munich)
Saturday, February 11, 8:30-9:30 AM

IAAI-23 Speakers

2023 Robert S. Engelmore Memorial Lecture Award
Manuela Veloso (JP Morgan Chase)
Friday, February 10, 8:30-9:30 AM

AI Assurance Panel
Dr. Laura Freeman (Virginia Tech National Security Institute)
Ima Okonny (Employment and Social Development Canada (ESDC))
Dr. Yevgeniya (Jane) Pinelis (Chief Digital and Artificial Intelligence Office (CDAO))
Dr. Jaret C. Riddick (CSET)
Dr. Michael R. Salpukas (Raytheon Technologies)
Friday, February 10, 2:00-3:30 PM

EAAI-23 Speaker

AAAI/EAAI Patrick Henry Winston Outstanding Educator Award
Ayanna Howard (The Ohio State University)
Saturday, February 11, 2:00-3:00 PM


Francesca Rossi

AAAI President (IBM)

AAAI Community Meeting: Current State, Vision, and Engagement

Anima Anandkumar

California Institute of Technology and NVIDIA

AAAI 2023 Invited Talk

Talk Title: AI Accelerating Science: Neural Operators for Learning on Function Spaces

Abstract: Anima will present exciting developments in the use of AI for scientific applications, spanning diverse domains such as weather and climate modeling, deep-earth modeling, and genome modeling. We have developed principled approaches that enable zero-shot generalization beyond the training domain. These include neural operators that yield 4-5 orders of magnitude speedups over numerical weather models and other scientific simulations. Neural operators learn mappings between function spaces, which makes them ideal for capturing multi-scale processes.
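For readers unfamiliar with neural operators, the sketch below illustrates the central mechanism of one well-known instance, the Fourier neural operator: a learned linear map applied to low-frequency Fourier modes, so the weights act on a function's spectrum rather than on a fixed grid. This is a minimal, hedged illustration in PyTorch, not the speaker's implementation; all names and sizes are ours.

```python
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    """One Fourier layer of a 1-D neural operator: FFT, a learned linear
    map on the lowest `modes` frequencies, then inverse FFT. Because the
    weights act on Fourier modes rather than grid points, the layer is
    resolution-independent."""

    def __init__(self, channels: int, modes: int):
        super().__init__()
        self.modes = modes
        scale = 1.0 / channels
        self.weights = nn.Parameter(
            scale * torch.randn(channels, channels, modes, dtype=torch.cfloat))

    def forward(self, x):                   # x: (batch, channels, grid)
        x_ft = torch.fft.rfft(x)            # (batch, channels, grid//2 + 1)
        out_ft = torch.zeros_like(x_ft)
        # Learned linear map applied only to the retained low frequencies.
        out_ft[:, :, :self.modes] = torch.einsum(
            "bim,iom->bom", x_ft[:, :, :self.modes], self.weights)
        return torch.fft.irfft(out_ft, n=x.size(-1))

# The same layer can be evaluated on grids of different resolution:
layer = SpectralConv1d(channels=4, modes=8)
print(layer(torch.randn(2, 4, 64)).shape)   # torch.Size([2, 4, 64])
print(layer(torch.randn(2, 4, 256)).shape)  # torch.Size([2, 4, 256])
```

Operating on the spectrum is what lets one trained model be queried at resolutions it never saw during training, one ingredient of the zero-shot generalization mentioned in the abstract.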

Anima Anandkumar is a Bren Professor at Caltech and Director of ML Research at NVIDIA. She was previously a Principal Scientist at Amazon Web Services. She has received several honors, including an Alfred P. Sloan Fellowship, an NSF CAREER Award, Young Investigator Awards from the DoD, and Faculty Fellowships from Microsoft, Google, Facebook, and Adobe. She is part of the World Economic Forum’s Expert Network. She is passionate about designing principled AI algorithms and applying them in interdisciplinary applications. Her research focus is on unsupervised AI, optimization, and tensor methods.

Isabelle Augenstein

University of Copenhagen

AAAI 2023 Invited Talk

Talk Title: Beyond Fact Checking — Modelling Information Change in Scientific Communication

Abstract: Most work on scholarly document processing assumes that the information processed is trustworthy and factually correct. However, this is not always the case. There are two core challenges that should be addressed: 1) ensuring that scientific publications are credible — e.g. that claims are not made without supporting evidence, and that all relevant supporting evidence is provided; and 2) ensuring that scientific findings are not misrepresented, distorted, or outright misreported when communicated by journalists or the general public. In this talk, I will present some first steps towards addressing these problems, discussing our research on exaggeration detection, scientific fact checking, and modelling information change in scientific communication more broadly.

Isabelle Augenstein is a Professor at the University of Copenhagen, Department of Computer Science, where she heads the Copenhagen Natural Language Understanding research group as well as the Natural Language Processing section. Her main research interests are fact checking, low-resource learning, and explainability. Prior to starting a faculty position, she was a postdoctoral researcher at University College London, and before that a PhD student at the University of Sheffield.

In October 2022, Isabelle Augenstein became Denmark’s youngest ever female full professor. She currently holds a prestigious ERC Starting Grant on ‘Explainable and Robust Automatic Fact Checking’, as well as the Danish equivalent of that, a DFF Sapere Aude Research Leader fellowship on ‘Learning to Explain Attitudes on Social Media’. She is a member of the Young Royal Danish Academy of Sciences and Letters, and Vice President-Elect of SIGDAT, which organises the EMNLP conference series.

Sebastien Bubeck

Microsoft Research

AAAI 2023 Invited Talk

Talk Title: Physics of AI — some first steps

Abstract: I would like to propose an approach to the science of deep learning that roughly follows what physicists do to understand reality: (1) explore phenomena through controlled experiments, and (2) build theories based on toy mathematical models and non-fully-rigorous mathematical reasoning. I will illustrate (1) with the LEGO study (LEGO stands for Learning Equality and Group Operations), where we observe how transformers learn to solve simple linear systems of equations. I will also briefly illustrate (2) with an analysis of the emergence of threshold units when training a two-layer neural network to solve a simple sparse coding problem. The latter analysis connects to the recently discovered Edge of Stability phenomenon.

Based on joint work with Kwangjun Ahn, Arturs Backurs, Sinho Chewi, Ronen Eldan, Suriya Gunasekar, Yin Tat Lee, Felipe Suarez, Tal Wagner, and Yi Zhang; see arxiv.org/abs/2206.04301 and arxiv.org/abs/2212.07469.
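As a concrete picture of the LEGO setup, the toy generator below produces the kind of chained ±1 assignments studied in the first paper above, where resolving the i-th variable requires following i reasoning steps. It is a hedged sketch for intuition only; the variable names and formatting are ours, and the actual benchmark is richer.

```python
import random

def lego_chain(length, seed=0):
    """Generate a LEGO-style sentence: v0 = +/-1, and each later variable
    is plus-or-minus the previous one, so resolving v_i takes i steps."""
    rng = random.Random(seed)
    names = [f"v{i}" for i in range(length)]
    signs = [rng.choice([1, -1]) for _ in range(length)]
    clauses = [f"{names[0]} = {'+' if signs[0] == 1 else '-'}1"]
    values = [signs[0]]
    for i in range(1, length):
        clauses.append(
            f"{names[i]} = {'+' if signs[i] == 1 else '-'}{names[i - 1]}")
        values.append(signs[i] * values[-1])
    return "; ".join(clauses), dict(zip(names, values))

sentence, truth = lego_chain(4)
print(sentence)  # e.g. "v0 = +1; v1 = -v0; v2 = +v1; v3 = -v2"
print(truth)     # ground-truth value of every variable (the training target)
```

A transformer trained to predict each variable's value from such a sentence can then be probed, layer by layer, to see where along the chain each step of reasoning gets resolved, exactly the kind of controlled experiment the abstract advocates.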

Sebastien Bubeck is a Senior Principal Research Manager in the Machine Learning Foundations group at Microsoft Research (MSR). He joined the Theory Group at MSR in 2014, after three years as an assistant professor at Princeton University. His work on convex optimization, online algorithms, and adversarial robustness in machine learning has received several best-paper awards (NeurIPS 2021 best paper, NeurIPS 2018 best paper, ALT 2018 best student paper in joint work with MSR interns, COLT 2016 best paper, and COLT 2009 best student paper). In 2022 he turned his focus to exploring a physics-like theory of how neural networks learn.

Vincent Conitzer

Carnegie Mellon University

AAAI 2023 Invited Talk

Talk Title: New Design Decisions for Modern AI Agents

Abstract: Consider an intelligent virtual assistant such as Siri, or perhaps a more capable future version of it. Should we think of all of Siri as one big agent? Or is there a separate agent on every phone, each with its own objectives and/or beliefs? And what should those objectives and beliefs be? Such questions reveal that the traditional, somewhat anthropomorphic model of an agent – with clear boundaries, centralized belief formation and decision making, and a clear given objective – falls short for thinking about today’s AI systems. We need better methods for specifying the objectives that these agents should pursue in the real world, especially when their actions have ethical implications. I will discuss some methods that we have been developing for this purpose, drawing on techniques from preference elicitation and computational social choice. But we need to specify more than objectives. When agents are distributed, systematically forget what they knew before (say, for privacy reasons), can be simulated by others, and potentially face copies of themselves, it is no longer obvious what the correct way is even to do probabilistic reasoning, let alone to make optimal decisions. I will explain why this is so and discuss our work on doing these things well. (No previous background required.)

Vincent Conitzer is Professor of Computer Science (with affiliate/courtesy appointments in Machine Learning, Philosophy, and the Tepper School of Business) at Carnegie Mellon University, where he directs the Foundations of Cooperative AI Lab (FOCAL). He is also Head of Technical AI Engagement at the Institute for Ethics in AI and Professor of Computer Science and Philosophy at the University of Oxford.

Prior to joining CMU, Conitzer was the Kimberly J. Jenkins Distinguished University Professor of New Technologies and Professor of Computer Science, Professor of Economics, and Professor of Philosophy at Duke University. He received Ph.D. (2006) and M.S. (2003) degrees in Computer Science from Carnegie Mellon University, and an A.B. (2001) degree in Applied Mathematics from Harvard University.

Conitzer has received the 2021 ACM/SIGAI Autonomous Agents Research Award, the Social Choice and Welfare Prize, a Presidential Early Career Award for Scientists and Engineers (PECASE), the IJCAI Computers and Thought Award, an NSF CAREER award, the inaugural Victor Lesser dissertation award, an honorable mention for the ACM dissertation award, and several awards for papers and service at the AAAI and AAMAS conferences. He has also been named a Guggenheim Fellow, a Sloan Fellow, a Kavli Fellow, a Bass Fellow, an ACM Fellow, a AAAI Fellow, and one of AI’s Ten to Watch. He has served as program and/or general chair of the AAAI, AAMAS, AIES, COMSOC, and EC conferences. Conitzer and Preston McAfee were the founding Editors-in-Chief of the ACM Transactions on Economics and Computation (TEAC).

Sami Haddadin

Technical University of Munich (TUM)

AAAI/IAAI 2023 Invited Talk

Talk Title: Robots with a Sense of Touch: Self-replicating the Machine and Learning the Self

Abstract: The development of robots that can learn to interact with the world and manipulate the objects in it has emerged as one of the greatest and so far largely unsolved challenges in robotics research. In this talk, I will argue that the development of such advanced machines requires a transition from classical manual design with purely model-based control to a novel synthesis paradigm. We need to allow the machine to autonomously develop its own blueprint and algorithmically generate its topological, kinematic, and dynamic self. Building on this, it shall develop controls for its own body as it moves, learns to manipulate objects in a controlled way, and sensitively interacts with the world.

Drawing from our work on torque-controlled lightweight robots and their evolution towards human-safe tactile robots that can manipulate, fly, or drive, I will outline the technological quantum leaps that have recently taken place. In particular, this progress was made possible by human-centered design, soft and force-sensitive control, contact reflexes, and model-based machine learning. By enabling human-robot coexistence, collaboration, and interaction in the real world for the first time, this robotic technology has already proven transformative to traditional manufacturing around the globe. It is now increasingly impacting professional services, domestic applications, medicine, and healthcare.

After that, I will use our current work to chart the path toward the next generation of tactile machines. We have taken first steps towards autonomously designing and building machines that can learn their own self and thus adapt to changes in body topology and, ultimately, their entire dynamics. Finally, I will present recent results on designing modular control and learning architectures that achieve complex behaviors for challenging manipulation problems while being provably stable.

Sami Haddadin is the Executive Director of the Munich Institute of Robotics and Machine Intelligence at the Technical University of Munich (TUM) and holds the Chair of Robotics and Systems Intelligence. His research interests include human-centered robotics, embodied AI, collective intelligence, and human-robot symbiosis. His scientific contributions range from tactile mechatronics, contact-aware robots, and safety methods in human-robot interaction to autonomous manipulation learning. Before joining TUM, he was Chair of the Institute of Automatic Control at Gottfried Wilhelm Leibniz University Hannover from 2014 to 2018. Prior to that, he held various positions as a researcher at the German Aerospace Center (DLR). He holds degrees in electrical engineering, computer science, and technology management from the Technical University of Munich and the Ludwig Maximilian University of Munich. He received his PhD summa cum laude from RWTH Aachen University and has published more than 200 scientific articles in international journals and conferences, many of them award-winning. He has received numerous awards for his scientific work, including the George Giralt PhD Award (2012), the RSS Early Career Spotlight (2015), the IEEE/RAS Early Career Award (2015), the Alfried Krupp Award for Young Professors (2015), the German President’s Award for Innovation in Science and Technology (2017), and Germany’s highest basic-science award, the Leibniz Prize (2019). He is a member of the German National Academy of Sciences Leopoldina and the national academy of science and engineering acatech, and chairman of the Bavarian AI Council.

Ayanna Howard

The Ohio State University

EAAI 2023 Invited Talk – AAAI/EAAI Patrick Henry Winston Outstanding Educator Award

Talk Title: Socially Interactive Robots for Supporting Early Interventions for Children with Special Needs

Abstract: It is estimated that 15% of children aged 3 through 17 born in the U.S. have one or more developmental disabilities. For many of these children, proper early intervention is provided as a mechanism to support the child’s academic, developmental, and functional goals from birth and beyond. With recent advances in robotics and artificial intelligence (AI), early intervention protocols using robots are now ideally positioned to make an impact in this domain. In this talk, I will discuss the role of robotics and AI in engaging children with special needs and highlight our methods and preclinical studies that bring us closer to this goal.

Dr. Ayanna Howard is the Dean of Engineering at The Ohio State University. Previously she was the Chair of the School of Interactive Computing at the Georgia Institute of Technology. Dr. Howard’s research encompasses advancements in artificial intelligence (AI), assistive technologies, and robotics, and has resulted in over 275 peer-reviewed publications. She is a Fellow of IEEE, AAAI, AAAS, the National Academy of Inventors, and elected member of the American Academy of Arts and Sciences. Prior to Georgia Tech, Dr. Howard was at NASA’s Jet Propulsion Laboratory where she held the title of Senior Robotics Researcher and Deputy Manager in the Office of the Chief Scientist.

Sheila McIlraith

University of Toronto

AAAI 2023 Invited Talk

Talk Title: (Formal) Languages Help AI Agents Learn and Reason

Abstract: How do we communicate with AI Agents that learn? One obvious answer is via language. Indeed, humans have evolved languages over tens of thousands of years to provide useful abstractions for understanding and interacting with each other and with the physical world. The claim advanced by some is that language influences how we think, what we perceive, how we focus our attention, and what we remember. We use language to capture and share our understanding of the world around us, to communicate high-level goals, intentions and objectives, and to support coordination with others. Importantly, language can provide us with useful and purposeful abstractions that can help us to generalize and transfer knowledge to new situations.  

Language comes in many forms. In Computer Science and in the study of AI, we have historically used formal knowledge representation languages and programming languages to capture our understanding of the world and to communicate unambiguously with computers. In this talk I will discuss how formal language can help agents learn and reason with a deep dive on one particular topic – reinforcement learning. I’ll show how we can exploit the syntax and semantics of formal language and automata to aid in the specification of complex reward-worthy behavior, to improve the sample efficiency of learning, and to help agents learn what to remember. In doing so, formal language can help us address some of the challenges to reinforcement learning in the real world.
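One concrete instance of this idea is the reward machine: a finite automaton over high-level events whose transitions carry rewards, so a temporally extended task can be specified unambiguously and the automaton’s state can double as memory for the agent. The sketch below is deliberately minimal and uses hypothetical event names; reward machines as studied by McIlraith and colleagues are considerably richer.

```python
class RewardMachine:
    """A toy reward machine: a finite automaton whose transitions are
    labelled by high-level events and carry rewards."""

    def __init__(self, transitions, initial_state):
        # transitions: {(state, event): (next_state, reward)}
        self.transitions = transitions
        self.state = initial_state

    def step(self, event):
        """Advance on one detected event; unknown events leave the state
        unchanged and yield zero reward."""
        self.state, reward = self.transitions.get(
            (self.state, event), (self.state, 0.0))
        return reward

# Task: "get the key, then open the door" -- reward only on completion.
rm = RewardMachine(
    transitions={
        ("start", "got_key"):   ("has_key", 0.0),
        ("has_key", "at_door"): ("done",    1.0),
    },
    initial_state="start",
)
print([rm.step(e) for e in ["at_door", "got_key", "at_door"]])  # [0.0, 0.0, 1.0]
```

Appending the machine’s current state to the agent’s observation both disambiguates the reward signal and tells the agent exactly what it needs to remember about its history, touching all three benefits named above: specification, sample efficiency, and memory.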

Sheila McIlraith is a Professor in the Department of Computer Science at the University of Toronto, a Canada CIFAR AI Chair (Vector Institute), and an Associate Director at the Schwartz Reisman Institute for Technology and Society. Prior to joining U of T, McIlraith spent six years as a Research Scientist at Stanford University and one year at Xerox PARC. McIlraith’s research is in the area of AI knowledge representation and reasoning, and machine learning, where she currently studies sequential decision-making, broadly construed, with a focus on human-compatible AI. McIlraith is a Fellow of the ACM and the Association for the Advancement of Artificial Intelligence (AAAI). She and her co-authors have been recognized with two test-of-time awards, from the International Semantic Web Conference (ISWC) in 2011 and from the International Conference on Automated Planning and Scheduling (ICAPS) in 2022.

Susan A. Murphy

Harvard University

AAAI/IAAI 2023 Invited Talk

Talk Title: We Used Reinforcement Learning, but Did It Work?

Abstract: Reinforcement learning provides an attractive suite of online learning methods for personalizing interventions in digital behavioral health. However, after a reinforcement learning algorithm has been run in a clinical study, how do we assess whether personalization occurred? We might find users for whom it appears that the algorithm has indeed learned in which contexts the user is more responsive to a particular intervention. But could this have happened completely by chance? We discuss some first approaches to addressing these questions.
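As a rough illustration of the “by chance?” question (our own sketch, not a method from the talk), a natural first check is a permutation test: if a user’s outcomes genuinely depend on context, the observed context effect should be extreme relative to what random relabelings of the contexts produce.

```python
import numpy as np

rng = np.random.default_rng(0)

def personalization_stat(contexts, outcomes):
    """Difference in mean outcome between the two contexts."""
    return outcomes[contexts == 1].mean() - outcomes[contexts == 0].mean()

def permutation_pvalue(contexts, outcomes, n_perm=10_000):
    """Share of random context relabelings whose effect is at least as
    extreme as the observed one."""
    observed = personalization_stat(contexts, outcomes)
    null = np.array([
        personalization_stat(rng.permutation(contexts), outcomes)
        for _ in range(n_perm)])
    return float(np.mean(np.abs(null) >= abs(observed)))

# Simulated single user who responds better to the intervention in context 1.
contexts = rng.integers(0, 2, size=200)
outcomes = rng.normal(loc=0.3 * contexts, scale=1.0)
print(f"p = {permutation_pvalue(contexts, outcomes):.3f}")
```

A small p-value says this user’s apparent personalization is unlikely under pure chance; aggregating such evidence across users, while accounting for the learning algorithm’s own adaptivity, is where the real statistical subtlety discussed in the talk begins.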

Susan Murphy’s research focuses on improving sequential, individualized decision making in digital health. She developed the micro-randomized trial for use in constructing digital health interventions; this trial design is in use across a broad range of health-related areas. Her lab works on online learning algorithms for developing personalized digital health interventions. Dr. Murphy is a member of the National Academy of Sciences and of the National Academy of Medicine, both of the US National Academies. In 2013 she was awarded a MacArthur Fellowship for her work on experimental designs to inform sequential decision making. She is a Fellow of the College on Problems of Drug Dependence, Past President of the Institute of Mathematical Statistics, Past President of the Bernoulli Society, and a former editor of the Annals of Statistics.

Tuomas Sandholm

Carnegie Mellon University

AAAI Award for AI for the Benefit of Humanity

Talk Title: Modern Organ Exchanges: Market Designs, Algorithms, and Opportunities

Abstract: I will share experiences from working on organ exchanges for the last 18 years, ranging from market designs to new optimization algorithms to large-scale fielding of the techniques and even to computational policy optimization. In the original kidney exchange, patients with kidney disease obtained compatible donors by swapping their own willing but incompatible donors. I will discuss many modern generalizations of this basic idea. For one, I will discuss never-ending altruist donor chains, which have become the main modality of kidney exchanges worldwide and have led to over 10,000 life-saving transplants. Since 2010, our algorithms have been running the national kidney exchange for the United Network for Organ Sharing, which has grown to include 80% of the transplant centers in the US. Our algorithms autonomously make the transplant plan each week for that exchange, and were used by two private exchanges before that. I will summarize the state of the art in algorithms for the batch problem, approaches for the dynamic problem where pairs and altruists arrive and depart, techniques that find the highest-expected-quality solution under the real challenge of unforeseen pre-transplant incompatibilities, algorithms for pre-match compatibility testing, and approaches for striking fairness-efficiency tradeoffs. I will describe the FUTUREMATCH framework that combines these elements and uses data and supercomputing to optimize the policy from high-level human value judgments. The approaches therein may be able to serve as ways of designing policies for many kinds of complex real-world AI systems. I will also discuss the idea of liver-lobe exchanges and cross-organ exchanges, and how they have started to emerge for real.
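At its core, the batch clearing problem mentioned above can be phrased on a compatibility digraph: choose vertex-disjoint short cycles (and chains) that maximize the number of transplants. The brute-force toy below conveys that combinatorial structure on a made-up four-pair instance; fielded solvers formulate this as large-scale integer programming, so treat this purely as an illustrative sketch.

```python
from itertools import combinations, permutations

# Edge u -> v means pair u's donor is compatible with pair v's patient.
edges = {(0, 1), (1, 0), (1, 2), (2, 3), (3, 1), (3, 0), (0, 3)}
pairs = range(4)

def cycles_up_to(k):
    """All directed cycles of length <= k, one representative per vertex set."""
    found = []
    for size in range(2, k + 1):
        for verts in combinations(pairs, size):
            for order in permutations(verts):
                if all((order[i], order[(i + 1) % size]) in edges
                       for i in range(size)):
                    found.append(order)
                    break  # one orientation per vertex set is enough
    return found

def best_packing(cycles):
    """Exhaustively pick the vertex-disjoint cycle set covering most pairs."""
    best, best_cover = (), 0
    for r in range(1, len(cycles) + 1):
        for subset in combinations(cycles, r):
            used = [v for cyc in subset for v in cyc]
            if len(used) == len(set(used)) and len(used) > best_cover:
                best, best_cover = subset, len(used)
    return best, best_cover

plan, transplants = best_packing(cycles_up_to(3))
print(plan, transplants)  # ((0, 3, 1),) 3 -- a three-way exchange
```

Real exchanges must also handle altruist-initiated chains, edge failures from unforeseen pre-transplant incompatibilities, and pools far too large for enumeration, which is precisely where the algorithmic work described in the talk comes in.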

Tuomas Sandholm is Angel Jordan University Professor of Computer Science at Carnegie Mellon University. His research focuses on the convergence of artificial intelligence, economics, and operations research. He is Co-Director of CMU AI. He is the Founder and Director of the Electronic Marketplaces Laboratory. In addition to his main appointment in the Computer Science Department, he holds appointments in the Machine Learning Department, Ph.D. Program in Algorithms, Combinatorics, and Optimization (ACO), and CMU/UPitt Joint Ph.D. Program in Computational Biology. In parallel with his academic career, he was Founder, Chairman, first CEO, and CTO/Chief Scientist of CombineNet, Inc. from 1997 until its acquisition in 2010. During this period the company commercialized over 800 of the world’s largest-scale generalized combinatorial multi-attribute auctions, with over $60 billion in total spend and over $6 billion in generated savings. He is Founder and CEO of Optimized Markets, Inc., which is bringing a new optimization-powered paradigm to advertising campaign sales, pricing, and scheduling.

Sandholm has developed the leading algorithms for several general classes of games with his students and postdocs. The team that he leads is the multi-time world champion in computer heads-up no-limit Texas hold’em, which was the main benchmark and decades-open challenge problem for testing application-independent algorithms for solving imperfect-information games. Their AI Libratus became the first to beat top humans at that game. Then their AI Pluribus became the first and only AI to beat top humans at the multi-player game. That is the first superhuman milestone in any game beyond two-player zero-sum games. He is Founder and CEO of Strategic Machine, Inc., which provides solutions for strategic reasoning under imperfect information in a broad set of applications ranging from poker to other recreational games to business strategy, negotiation, strategic pricing, finance, cybersecurity, physical security, auctions, political campaigns, and medical treatment planning. He is also Founder and CEO of Strategy Robot, Inc., which focuses on defense, intelligence, and other government applications.

Among his honors are the Minsky Medal, McCarthy Award, Engelmore Award, Computers and Thought Award, inaugural ACM Autonomous Agents Research Award, CMU’s Allen Newell Award for Research Excellence, Sloan Fellowship, NSF Career Award, Carnegie Science Center Award for Excellence, and Edelman Laureateship. He is Fellow of the ACM, AAAI, INFORMS, and AAAS. He holds an honorary doctorate from the University of Zurich.

Josh Tenenbaum

Massachusetts Institute of Technology

AAAI 2023 Invited Talk

Talk Title: Learning to See the Human Way

Abstract: Computer vision is one of the great AI success stories. Yet we are still far from having machine systems that can reliably and robustly see everything a human being sees in an image or in the real world. Despite rapid advances in self-supervised visual and multimodal representation learning, we are also far from having systems that can learn to see as richly as a human does, from so little data, or that can learn new visual concepts or adapt their representations as quickly as a human does. And even today’s remarkable generative image synthesis systems imagine the world in a very different and fundamentally less flexible way than human beings do. How can we close these gaps? I will describe several core insights from the study of human vision and visual cognitive development that run counter to the dominant trends in today's computer vision and machine learning world, but that can motivate and guide an alternative approach to building practical machine vision systems.

Technically, this approach rests on advances in differentiable and probabilistic programming: hybrids of neural, symbolic and probabilistic modeling and inference that can be more robust, more flexible and more data-efficient than purely neural approaches to learning to see. New probabilistic programming platforms promise to make these approaches scalable as well. Conceptually, this approach draws on classic proposals for understanding vision as ‘inverse graphics’, ‘analysis by synthesis’ or ‘inference to the best explanation’, and the notion that at least some high-level architecture for scene representation is built into the brain by evolution rather than learned from experience, reflecting invariant properties of the physical world. Learning then enables, enriches and extends these built-in representations; it does not create them from scratch. I will show a few examples of recent machine vision successes based on these ideas, from our group and others. But the hardest problems are still very open. I will highlight some ‘Grand Challenge’ tasks for building machines that learn to see like people: problems that far outstrip the abilities of any current system, and that I hope can inspire the next steps towards progress for computer vision researchers regardless of which approach they favor.
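To give a toy picture of ‘analysis by synthesis’ (our illustration, not a system from the talk): given a forward model that renders scenes into images, perception becomes posterior inference over the scene variables that best explain the observation. The sketch below inverts a one-dimensional ‘renderer’ by importance sampling; every name and number is made up, and probabilistic programming systems automate and scale this pattern far beyond the hand-rolled sampler shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def render(position, size=16):
    """Forward model ('graphics'): a 1-D image with a blob at `position`."""
    xs = np.arange(size)
    return np.exp(-0.5 * ((xs - position) / 1.5) ** 2)

observed = render(11.0) + rng.normal(0, 0.05, 16)  # noisy observation

# 'Inverse graphics' by importance sampling over the latent scene variable:
# weight each hypothesized position by how well its rendering matches.
hypotheses = rng.uniform(0, 16, size=5000)
log_w = np.array([-np.sum((render(p) - observed) ** 2) / (2 * 0.05 ** 2)
                  for p in hypotheses])
w = np.exp(log_w - log_w.max())
posterior_mean = np.sum(w * hypotheses) / w.sum()
print(f"inferred blob position: {posterior_mean:.2f}")  # close to 11
```

Swapping the toy renderer for a rich graphics engine and the sampler for learned or gradient-based inference is, roughly, the hybrid neural-symbolic-probabilistic recipe the paragraph above describes.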

Josh Tenenbaum is Professor of Computational Cognitive Science at the Massachusetts Institute of Technology in the Department of Brain and Cognitive Sciences, the Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Center for Brains, Minds and Machines (CBMM). He received a BS from Yale University (1993) and a PhD from MIT (1999). His long-term goal is to reverse-engineer intelligence in the human mind and brain, and use these insights to engineer more human-like machine intelligence. In cognitive science, he is best known for developing theories of cognition as probabilistic inference in structured generative models, and applications to concept learning, causal reasoning, language acquisition, visual perception, intuitive physics, and theory of mind. In AI, he and his group have developed widely influential models for nonlinear dimensionality reduction, probabilistic programming, and Bayesian unsupervised learning and structure discovery. His current research focuses on the development of common sense in children and machines, common sense scene understanding in humans and machines, and models of learning as program synthesis. His work has been recognized with awards at conferences in Cognitive Science, Philosophy and Psychology, Computer Vision, Neural Information Processing Systems, Reinforcement Learning and Decision Making, and Robotics. He is the recipient of the Troland Research Award from the National Academy of Sciences (2012), the Howard Crosby Warren Medal from the Society of Experimental Psychologists (2015), the R&D Magazine Innovator of the Year (2018), and a MacArthur Fellowship (2019), and he is an elected member of the American Academy of Arts and Sciences.

Manuela Veloso

JP Morgan Chase

2023 Robert S. Engelmore Memorial Lecture Award

Talk Title: Symbiotic Human-AI Interaction: Experience-Based Insights from AI in Robotics and AI in Finance

Abstract: I will share insights on the interaction of humans and AI to jointly solve end-to-end complex problems. The talk is based on my experience with research and engineering of real autonomous mobile service robots, and with AI in the finance domain. I will illustrate the integration of perception (data), cognition (reasoning and learning), and action (execution and feedback), and I will focus on several components, including AI to effectively discover and standardize data, to understand behavior, to simulate and generate synthetic data, to learn from data, principles, and experience, and to provide trustworthy explanations.

IAAI AI Assurance Panel

The US Department of Defense defines Software Assurance as “the level of confidence that software functions only as intended and is free of vulnerabilities, either intentionally or unintentionally designed or inserted as part of the software, throughout the life cycle.” Artificial Intelligence Assurance adds an evolving set of additional criteria which may include Data Fidelity and Integrity, Out of Distribution Detection, Bias Detection, Resilience to Adversarial Attack, Interpretability, Privacy Protection, and Ethical Behavior. These additional criteria are desired due to the data-driven nature of Artificial Intelligence training and the greater impact of the higher order decision making that Artificial Intelligence enables. Artificial Intelligence developers, testers, and maintainers will be required to anticipate a growing list of constraints from governments, institutions, and businesses as Artificial Intelligence solutions move into more safety critical roles. This panel of government and institutional leaders will explore this evolving space to better define the near-term technical roadmap for AI Assurance.

Dr. Laura Freeman

(Virginia Tech National Security Institute)

IAAI AI Assurance Panel

Dr. Laura Freeman is a Research Associate Professor of Statistics, dual-hatted as the Deputy Director of the Virginia Tech National Security Institute and Assistant Dean for Research for the College of Science. Her research leverages experimental methods to bring together cyber-physical systems, data science, artificial intelligence (AI), and machine learning to address critical challenges in national security. She develops new methods for the test and evaluation of emerging system technology and focuses on transitioning emerging research to solve challenges in Defense and Homeland Security. She is also a hub faculty member in the Commonwealth Cyber Initiative and leads research in AI Assurance.

Previously, Dr. Freeman was the Assistant Director of the Operational Evaluation Division at the Institute for Defense Analyses. In that position, she established and developed an interdisciplinary analytical team of statisticians, psychologists, and engineers to advance scientific approaches to DoD test and evaluation. During 2018, Dr. Freeman served as the acting Senior Technical Advisor for the Director, Operational Test and Evaluation (DOT&E). As the Senior Technical Advisor, Dr. Freeman provided leadership, advice, and counsel to all personnel on technical aspects of testing military systems, and reviewed test strategies, plans, and reports for all systems under DOT&E oversight.

Dr. Freeman has a B.S. in Aerospace Engineering, an M.S. in Statistics, and a Ph.D. in Statistics, all from Virginia Tech. Her Ph.D. research was on the design and analysis of experiments for reliability data.

Ima Okonny

(Employment and Social Development Canada (ESDC))

IAAI AI Assurance Panel

Ima Okonny, the Chief Data Officer at Employment and Social Development Canada (ESDC), has over 23 years of experience in the field of data. She has extensive experience with building the evidence base through the development of analytical databases and tools, implementing departmental data reporting and release strategies, data management, data privacy protocols, and forward-looking policy development and research.

Ima has an educational background in Mathematics, Computer Programming and Public Management and during her time with the Government of Canada, she has received several nominations and awards for her leadership and results.

She is passionate about helping organizations develop the capabilities required to ethically and intentionally unleash concrete business value from data.

Dr. Yevgeniya (Jane) Pinelis

(Chief Digital and Artificial Intelligence Office (CDAO))

IAAI AI Assurance Panel

Dr. Jane Pinelis is the Chief of AI Assurance at the Chief Digital and Artificial Intelligence Office (CDAO). In this role, she leads a diverse team of testers and analysts in rigorous test and evaluation (T&E) for CDAO capabilities, as well as development of T&E-specific products and standards that will support testing of AI-enabled systems across the DoD. She also leads the team that is responsible for instantiating Responsible AI principles into DoD practices. Prior to joining the CDAO, Dr. Pinelis served as the Director of Test and Evaluation for USDI’s Algorithmic Warfare Cross-Functional Team, better known as Project Maven. She directed the developmental testing for the AI models, including computer vision, machine translation, facial recognition and natural language processing.

Dr. Pinelis also led the design and analysis of the widely publicized study on the effects of integrating women into combat roles in the Marine Corps. Based on this experience, she co-authored a book titled “The Experiment of a Lifetime: Doing Science in the Wild for the United States Marine Corps.”

Dr. Pinelis holds a BS in Statistics, Economics, and Mathematics, an MA in Statistics, and a PhD in Statistics, all from the University of Michigan, Ann Arbor.

Dr. Jaret C. Riddick

(CSET)

IAAI AI Assurance Panel

Dr. Jaret C. Riddick is a Senior Fellow at Georgetown University’s Center for Security and Emerging Technology (CSET). Prior to joining CSET, he was the Principal Director for Autonomy in the Office of the Under Secretary of Defense for Research and Engineering (OUSD(R&E)), serving as the senior DOD official for coordination, strategy, and transition of autonomy research and development. As Principal Director, he created a DOD-wide initiative on trusted autonomy, led efforts to advance autonomy for undersea warfare with allied partners, and provided key strategic analysis to support development of the newest DOD university-affiliated research center (UARC). Prior to OUSD(R&E), Riddick served in executive leadership roles at the US Army Research Laboratory (ARL), where he established a 200-acre robotics research collaboration campus and led ARL senior leadership efforts to establish the research competencies of the laboratory. He has also served in leadership roles in the Office of the Deputy Assistant Secretary of the Army for Research and Technology and the former Office of the Under Secretary of Defense for Acquisition, Technology and Logistics. He holds a Ph.D. in Engineering Mechanics from Virginia Tech, an M.S. in Mechanical Engineering from North Carolina A&T State University, and a B.S. in Mechanical Engineering from Howard University.

Dr. Michael R. Salpukas

(Raytheon Technologies)

IAAI AI Assurance Panel

Dr. Michael Salpukas is a Raytheon Technologies Senior Engineering Fellow focusing on Artificial Intelligence and Advanced Algorithms. He is presently the Principal Investigator for Artificial Intelligence Research and Development projects in Sensors, C5, Predictive Maintenance, and Manufacturing. His research includes Radar and Sonar Classification, Pattern-of-Life, Predictive Analytics, and Defect Containment. Dr. Salpukas is also a Lead Technologist for Artificial Intelligence and Mission Application Algorithm development. His past work includes Advanced Tracking, Compressed Sensing, Antenna Calibration, Search Patterns, Scheduling, and Clutter Mitigation.

Dr. Salpukas is active in innovation and university partnering, and holds two patents, with four more patents and many trade secrets filed. He has served as Chief Engineer and Systems Engineering Lead on a wide range of programs, and as Lead on multiple SBIR partnerships and program transitions.

Dr. Salpukas received his Bachelor’s degree in Mathematics from the University of Chicago, and his Ph.D. in Mathematics and Master’s in Statistics from SUNY-Albany.
