AAAI-23 Workshop Program
The Thirty-Seventh AAAI Conference on Artificial Intelligence
February 13 – 14, 2023
Walter E. Washington Convention Center
Washington, DC, USA
Sponsored by the Association for the Advancement of Artificial Intelligence
AAAI-23 Workshops
(The workshop schedule will be available in November 2022.)
- W1: AI for Agriculture and Food Systems
- W2: AI for Behavior Change
- W3: AI for Credible Elections: A Call to Action with Trusted AI
- W4: AI for Energy Innovation
- W5: AI for Web Advertising
- W6: AI to Accelerate Science and Engineering
- W7: AI4EDU: AI for Education
- W8: Artificial Intelligence and Diplomacy
- W9: Artificial Intelligence for Cyber Security (AICS)
- W10: Artificial Intelligence for Social Good (AI4SG)
- W11: Artificial Intelligence Safety (SafeAI)
- W12: Creative AI Across Modalities
- W13: Deep Learning on Graphs: Methods and Applications (DLG-AAAI’23)
- W14: DEFACTIFY: Multimodal Fact-Checking and Hate Speech Detection
- W15: Deployable AI (DAI)
- W16: DL-Hardware Co-Design for AI Acceleration
- W17: Energy Efficient Training and Inference of Transformer Based Models
- W18: Graphs and More Complex Structures for Learning and Reasoning (GCLR)
- W19: Health Intelligence (W3PHIAI-23)
- W20: Knowledge-Augmented Methods for Natural Language Processing
- W21: Modelling Uncertainty in the Financial World (MUFin’23)
- W22: Multi-Agent Path Finding
- W23: Multimodal AI for Financial Forecasting (Muffin)
- W24: Practical Deep Learning in the Wild (Practical-DL)
- W25: Privacy-Preserving Artificial Intelligence
- W26: Recent Trends in Human-Centric AI
- W27: Reinforcement Learning Ready for Production
- W28: Scientific Document Understanding
- W29: Systems Neuroscience Approach to General Intelligence
- W30: Uncertainty Reasoning and Quantification in Decision Making (UDM’23)
- W31: User-Centric Artificial Intelligence for Assistance in At-Home Tasks
- W32: When Machine Learning Meets Dynamical Systems: Theory and Applications
W1: AI for Agriculture and Food Systems
An increasing world population, coupled with finite arable land, changing diets, and the growing expense of agricultural inputs, is poised to stretch our agricultural systems to their limits. By the end of this century, the earth’s population is projected to increase by 45% while available arable land decreases by 20%, along with changes in which crops that land can best support; this creates the urgent need to enhance agricultural productivity by 70% before 2050. Current rates of progress are insufficient, making it impossible to meet this goal without a technological paradigm shift. There is increasing evidence that enabling AI technology has the potential to aid in this paradigm shift. This AAAI workshop aims to bring together researchers from the core AI/ML, robotics, sensing, cyber-physical systems, agricultural engineering, plant sciences, genetics, and bioinformatics communities to facilitate the increasingly synergistic intersection of AI/ML with agriculture and food systems. Outcomes include outlining the main research challenges in this area, potential future directions, and cross-pollination between AI researchers and domain experts in agriculture and food systems.
Topics
Specific topics of interest for the workshop include (but are not limited to) foundational and translational AI activities related to:
- Plant breeding
- Precision agriculture and farm management
- Biotic/Abiotic stress prediction
- Yield prediction
- Agriculture data curation
- Annotation efficient learning
- Plant growth and development models
- Remote sensing
- Agricultural robotics
- Privacy-preserving data analysis
- Human-in-the-loop AI
- Multimodal data fusion
- High-throughput field phenotyping
- (Bio)physics aware hybrid AI modeling
- Development of open-source software, libraries, annotation tools, or benchmark datasets
Format
The workshop will be a one-day meeting comprising invited talks from researchers in the field, spotlight lightning talks and a poster session where contributing paper presenters can discuss their work. Attendance is open to all registered participants.
Submissions
Submitted technical papers can be up to 4 pages long (excluding references and appendices). Position papers are welcome. All papers must be submitted in PDF format using the AAAI-23 author kit. Papers will be peer-reviewed and selected for spotlight and/or poster presentation.
Organizing Committee
Girish Chowdhary (University of Illinois, Urbana Champaign), Baskar Ganapathysubramanian (Iowa State University; contact: baskarg@iastate.edu), George Kantor (Carnegie Mellon University), Soumik Sarkar (Iowa State University), Sierra Young (North Carolina State University), Ananth Kalyanaraman (Washington State University), Ilias Tagkopoulos (UC Davis).
Additional Information
W2: AI for Behavior Change
In decision-making domains as wide-ranging as medication adherence, vaccination uptake, college enrollment, financial savings, and energy consumption, behavioral interventions have been shown to encourage people towards making better choices. AI can play an important, and in some cases crucial, role in these areas to motivate and help people take actions that maximize welfare. It is also important to be cognizant of any unintended consequences of leveraging AI in these fields, such as problems of bias that algorithmic approaches can introduce, replicate, and/or exacerbate in complex social systems.
A number of research trends are informing insights in this field. First, large data sources, both those conventionally used in the social sciences (EHRs, health claims, credit card use, college attendance records) and the relatively unconventional (social networks, wearables, mobile devices), are now available and are increasingly used to personalize interventions. These datasets can be leveraged to learn individuals’ behavioral patterns, identify individuals at risk of making sub-optimal or harmful choices, and target them with behavioral interventions to prevent harm or improve well-being. Second, psychological experiments in laboratories and in the field, often in partnership with technology companies, are increasingly used to measure behavioral outcomes and inform intervention design. Third, there is increasing interest in AI in moving beyond traditional supervised learning approaches towards learning causal models, which can support the identification of targeted behavioral interventions and flexible estimation of their effects. At the intersection of these trends is also the question of fairness – how to design or evaluate interventions fairly.
These research trends inform the need to explore the intersection of AI with behavioral science and causal inference, and how they can come together for applications in the social and health sciences. This workshop will build upon the success of the last two editions of the AI for Behavior Change workshop, and will focus on advances in AI and ML that aim to (1) study equitable exploration for unbiased behavioral interventions, (2) design and target optimal interventions, and (3) exploit datasets in domains spanning mobile health, social media use, electronic health records, college attendance records, fitness apps, etc. for causal estimation in behavioral science.
Keywords: causal inference, behavior science, adaptive experiments, reinforcement learning, equitable exploration, optimal assignment, bias in decision-making, heterogeneous treatment effects, experiment design, mobile health
Topics
The goal of this workshop is to bring together the causal inference, artificial intelligence, and behavior science communities, gathering insights from each of these fields to facilitate collaboration and adaptation of theoretical and domain-specific knowledge amongst them. There will be two keynote speakers, two to three invited speakers, a panel discussion of early career researchers working at the intersection of these fields, a submitted paper session and a poster session from participants. We invite thought-provoking submissions on a range of topics in these fields, including:
- Intervention design
- Adaptive treatment assignment
- Optimal assignment rules
- Targeted nudges
- Bias/equity in algorithmic decision-making
- Mental health/wellness; habit formation
- Recommender systems and digital data
- Reinforcement Learning for efficient exploration
Format
The full-day workshop will start with a keynote talk, followed by an invited talk and contributed paper presentations in the morning. The post-lunch session will feature a second keynote talk and two invited talks. Papers more suited for a poster than an oral presentation will be invited for a poster session. We will end the workshop with a panel discussion by top researchers in the field.
Submissions
The audience of this workshop will be researchers and students from a wide array of disciplines including, but not limited to, statistics, computer science, economics, public policy, psychology, management, and decision science, who work at the intersection of causal inference, machine learning, and behavior science. AAAI, specifically, is a great venue for our workshop because its audience spans many ML and AI communities. We invite novel contributions following the AAAI-23 formatting guidelines, camera-ready style. Submissions will be peer reviewed. Submissions will be assessed based on their novelty, technical quality, significance of impact, interest, clarity, relevance, and reproducibility. We accept two types of submissions – full research papers no longer than 8 pages, and poster papers with 2 – 4 pages. References will not count towards the page limit. Submissions will be accepted via Easychair (submission details and deadlines on the workshop website).
Organizing Committee
- Rahul Ladhania, University of Michigan, ladhania@umich.edu
- Sabina Tomkins, University of Michigan, stomkins@umich.edu
- Michael Sobolev, Cornell Tech, michael.sobolev@cornell.edu
- Lyle Ungar, University of Pennsylvania, ungar@cis.upenn.edu
Additional Information
For any questions, please reach out to us at ai4behaviorchange at gmail dot com
Website: https://ai4bc.github.io/ai4bc23/
W3: AI for Credible Elections: A Call to Action with Trusted AI
We invite papers that describe innovative use of AI technology or techniques in election processes. The workshop is intended to provide a forum for discussing new approaches and challenges in building AI that people trust and use for critical applications that power society, such as conducting elections, and for exchanging ideas about how to move the area forward.
Artificial Intelligence and machine learning have transformed modern society. They also affect how elections are conducted in democracies, with mixed outcomes. For example, digital marketing campaigns have enabled candidates to connect with voters at scale and communicate remotely during COVID-19, but there remains widespread concern about the spread of election disinformation as the result of AI-enabled bots and aggressive strategies.
In response, we conducted the first workshop at NeurIPS 2021 to examine the challenges of credible elections globally in an academic setting with apolitical discussion of significant issues. The speakers, panels, and reviewed papers discussed current and best practices in holding elections, tools available for candidates, and the experience of voters. They highlighted gaps and experience regarding AI-based interventions and methodologies. To ground the discussion, the invited speakers and panelists were drawn from three international geographies: the US – representing one of the world’s oldest democracies; India – representing the largest democracy in the world; and Estonia – representing a country using digital technologies extensively during elections and as a facet of daily life. The workshop had contributions on all technological and methodological aspects of elections and voting.
Topics
At AAAI 2023, we will run the second edition of the workshop.
The workshop welcomes contributions on all aspects of elections and voting, with a particular focus on the use of AI in the following areas:
- For election candidates
○ Organizing candidate campaigns
○ Detecting, informing about, and managing mis- and disinformation
- For election organizers
○ Identifying and validating voters
○ Informing people about election information
- For voters
○ Knowing about election procedures
○ Verifying individual and community votes
○ Navigating candidates and issues
- Cross-cutting
○ Promoting transparency in the election process
○ Technology for data management and validation
○ Case-studies of success or failure, and the reasons thereof
The intended audience of the workshop comprises students, academic researchers, professionals involved in technology for election management, and informed voters.
Format
TBD
Invited Speakers
Colin Camerer (California Institute of Technology), Susan Murphy (Harvard University)
Submissions
Submissions may be either extended abstracts (4 pages) or full papers (7 pages), anonymized using the AAAI 2023 style guidelines.
All accepted papers will be presented in a virtual poster session. We welcome articles currently under review or papers planned for publication elsewhere.
Submissions site: https://easychair.org/conferences/?conf=ai4ce2023
Publication: Select papers will be considered for a forthcoming special issue of AI Magazine on “AI for Credible Elections” in 2023. All accepted papers will be made available online on the workshop website and will count as non-archival reports to allow submission to future conferences/journals.
Important Dates
- Workshop paper submissions due: November 11, 2022
- Notification to authors: November 22, 2022
- Camera-ready copies of authors’ papers: December 1, 2022
- Early-bird registration to the conference: December 12, 2022
Organizing Committee
Biplav Srivastava (University of South Carolina), Anita Nikolich (University of Illinois-Urbana Champaign), Andrea Hickerson (University of Mississippi), Tarmo Koppel (Tallinn University of Technology), Chris Dawes (New York University), Sachindra Joshi (IBM Research)
Additional Information
Workshop info: https://sites.google.com/view/aielections
Submissions: https://easychair.org/conferences/?conf=ai4ce2023
W4: AI for Energy Innovation
In light of pressing and transformative global needs for equitable and secure access to clean, affordable, and sustainable energy, as well as the significant investment provided by governments and industry, aligning R&D efforts on automation and AI across the entire spectrum, from fundamental to applied energy sciences, is timelier than ever. Despite recent monumental AI progress and widespread interest, there may be disconnects between the AI frontier and energy-focused research. We envision a near future where energy systems will be as intelligent as the most adept AI systems in existence, with energy resources equipped with smart functionalities to operate effectively under uncertainty, volatility, and threats; where communities empower their lives with reliable and sustainable energy; and where the entire AI community undertakes the challenge of providing solutions and inspiration for sustained energy innovation. This workshop will invite AAAI-23 attendees, researchers, practitioners, sponsors, and vendors from academia, government agencies, and industry to present diverse views and engage in fruitful conversations on how innovation in all aspects of AI may support and propel further energy innovation.
Topics
(i) fundamental energy sciences (incl. AI/ML methods for reduced order modeling, digital twins, and general, energy-focused physics-based ML)
(ii) applied energy sciences (incl. AI/ML methods for efficient, robust, and equitable generation, distribution, and management of energy)
(iii) AI assurance & cybersecurity (incl. AI/ML methods for secure operation of intelligent entities across transmission and distribution grids)
Format
The one-day workshop will consist of keynote and invited presentations, talks on selected contributed papers, a panel discussion, and a lightning session outlining unique opportunities at national laboratories on applied AI for energy innovation. We strongly encourage dialogue-provoking contributions that summarize broader ongoing themes and efforts as well as upcoming and/or future opportunities that may stimulate a productive exchange and forge partnerships among participants. At the end of their talk, participants will be encouraged to propose a new energy-related benchmark problem that they would like to see the AI community adopt, recognizing that well-known general datasets and problems may be suitable for general AI/ML education and research, but possibly not ideal or focused-enough vehicles to propel AI-equipped, energy-focused innovations. Contributions presented by both (a) established researchers/practitioners and (b) graduate students, early-career researchers, and start-up representatives are actively encouraged.
Submissions
Relevant short (2-4 pages) and long (6-8 pages) contributions in PDF AAAI-23 format are solicited. Submission details will be available soon on the workshop website aienergyworkshop2023.inl.gov (page expected to be active on October 1, 2022). Selected and presented contributions will be compiled in a workshop report, to be published at the open access DOE repository osti.gov by INL.
Organizing Committee
Humberto Garcia (U.S. DOE Idaho National Laboratory; humberto.garcia@inl.gov), Karthik Duraisamy (University of Michigan), Asok Ray (Pennsylvania State University), Dimitrios Pylorof (U.S. DOE Idaho National Laboratory).
Additional Information
Website: aienergyworkshop2023.inl.gov (page expected to be active on October 1, 2022).
W5: AI for Web Advertising
With the popularity of various forms of e-commerce, web advertising has become a prominent channel that businesses use to reach customers. It leverages the Internet to promote products and services to audiences, and it has become an important revenue source for many Internet companies such as online social media platforms and search engines.
AI techniques have been extensively used throughout the web advertising pipeline, including retrieval, ranking, and bidding. Despite remarkable progress, there are still many unsolved and emerging issues in applying state-of-the-art AI techniques to web advertising, such as the “cold-start” problem, the trade-off between serving accuracy and efficiency in online AI systems, data privacy protection, and big data management.
This workshop targets these and other relevant issues, aiming to create a platform for people from academia and industry to communicate their insights and recent results.
Topics
Topics of interest include, but are not limited to, the following:
- AI models and algorithms for Web Advertising, e.g. retrieval and ranking models; CTR and CVR prediction models; bidding algorithms; ads targeting; query understanding; user and ad item representations learning; cold-start model learning, etc.
- AI infrastructure for Web Advertising, e.g. large scale multi-modality data collection, utilization and management; MLOps; real-time AI system design and deployment; AutoML techniques, etc.
- Trustworthy AI in Web Advertising, e.g. data privacy protection; federated learning; differential privacy; model fairness, interpretability, and robustness against adversarial attacks, etc.
- Other relevant applications and methods, e.g. recommendation, information retrieval, search, sequence learning, graph learning, etc.
Format
TBD
Submissions
Submissions should follow the AAAI-23 template. There is no page limit for paper submissions. Submissions will be reviewed by domain experts. We welcome submissions of unpublished papers, including those that have been submitted to or accepted at other venues, provided the other venue allows it.
Paper submission website: https://cmt3.research.microsoft.com/AI4WebAds2023
Important Dates
Friday, November 4, 2022: Workshop Submissions Due to Organizers
Friday, November 18, 2022: Notifications Sent to Authors
Monday, December 12, 2022: AAAI-23 Early Registration Deadline
February 13 – 14, 2023: AAAI-23 Workshop Program
Organization Committee
Bo Liu (Walmart Ads); Rui Chen (Samsung Ads); Yong Ge (University of Arizona); Huayu Li (Meta Ads); Sheng Li (University of Virginia); Nastaran Ghadar (Twitter Ads)
Additional Information
Workshop website: https://ai4webads2023.github.io/
Contact us: ai4webads2023@gmail.com
W6: AI to Accelerate Science and Engineering
Scientists and engineers in diverse application domains are increasingly relying on computational and artificial intelligence (AI) tools to accelerate scientific discovery and engineering design. AI, machine learning, and reasoning algorithms are useful for building models and making decisions towards this goal. We have already seen several success stories of AI in applications such as materials discovery, ecology, wildlife conservation, and molecule design optimization. This workshop aims to bring together researchers from AI and diverse science/engineering communities to achieve the following goals:
- Identify and understand the challenges in applying AI to specific science and engineering problems.
- Develop, adapt, and refine AI tools for novel problem settings and challenges.
- Community-building and education to encourage collaboration between AI researchers and domain area experts.
Invited Speakers
This year’s theme is AI for Earth and Environmental Sciences. Our invited speakers and panelists, drawn from both the AI and environmental sciences communities, include:
Prof. Milind Tambe, Harvard University
Prof. Amy McGovern, University of Oklahoma
Prof. Ryan Adams, Princeton University
Dr. Ilkay Altintas, University of California San Diego
Dr. Priya Donti, Incoming Faculty, Massachusetts Institute of Technology
Dr. Sara Beery, Incoming Faculty, Massachusetts Institute of Technology
Important Dates
Paper submission deadline: November 3rd, 2022 (11:59 PM PST)
Notification: November 18th, 2022 (11:59 PM PST)
Camera-ready due: November 31st, 2022 (11:59 PM PST)
Submissions
We welcome submissions of long (max. 8 pages), short (max. 4 pages), and position (max. 4 pages) papers describing research at the intersection of AI and science/engineering domains including chemistry, physics, power systems, materials, catalysis, health sciences, computing systems design and optimization, epidemiology, agriculture, transportation, earth and environmental sciences, genomics and bioinformatics, civil and mechanical engineering etc. Submissions must be formatted in the AAAI submission format. All submissions should be done electronically via CMT.
The submission deadline is November 3rd, 2022 (11:59 PM PST).
Submission site: https://cmt3.research.microsoft.com/AI2SE2023
Organizing Committee
Aryan Deshwal (Washington State University)
Syrine Belakaria (Washington State University)
Jana Doppa (Washington State University)
Mrinal K Sen (University of Texas at Austin)
Yolanda Gil (Information Sciences Institute, University of Southern California)
Additional Information
Website: https://ai-2-ase.github.io/
W7: AI4EDU: AI for Education
Introduction
Technology has transformed over the last few years, turning futuristic ideas into today’s reality. AI is one of these transformative technologies; it is now achieving great successes in various real-world applications and making our lives more convenient and safer. AI is now shaping the way businesses, governments, and educational institutions do things and is making its way into classrooms, schools, and districts across many countries.
In fact, increasingly digitalized education tools and the popularity of online learning have produced an unprecedented amount of data that provides us with invaluable opportunities for applying AI in education. Recent years have witnessed growing efforts from the AI research community devoted to advancing education, and promising results have been obtained in solving various critical problems in education. For example, AI tools are being built to ease the workload for teachers. Instead of grading each piece of work individually, which can take up a bulk of extra time, intelligent scoring tools give teachers the ability to have their students’ work graded automatically. In the coronavirus era, which required many schools to move to online learning, the ability to give feedback at scale could provide needed support to teachers. What’s more, various AI-based models trained on massive student behavioral and exercise data are able to take note of a student’s strengths and weaknesses, identifying where they may be struggling. These models can also generate instant feedback to instructors and help them improve their teaching effectiveness. Furthermore, by leveraging AI to connect disparate social networks among teachers, we may be able to provide greater resources for their planning, which have been shown to significantly affect students’ achievement.
Although these gratifying achievements have demonstrated the great potential and bright development prospects of introducing AI into education, developing and applying AI technologies to educational practice is fraught with unique challenges, including, but not limited to, extreme data sparsity, lack of labeled data, and privacy issues. Hence, this workshop will focus on introducing research progress on applying AI to education and discussing recent advances in handling challenges encountered in AI educational practice.
Workshop Description
In this workshop, we invited AIED enthusiasts from all around the world through the following three different channels:
First, we invited established researchers in the AIED community to give a broad talk that (1) describes a vision for bridging AIED communities; (2) summarizes a well-developed AIED research area; or (3) presents promising ideas and visions for new AIED research directions.
Second, we called for regular workshop paper submissions and cross-submissions (papers that have appeared in or been submitted to alternative venues) related to a broad range of AI domains for education.
Third, we hosted a global challenge on CodaLab for a fair comparison of state-of-the-art Knowledge Tracing models and invited technical reports from winning teams.
Regular Workshop Paper Submission
We invite high-quality paper submissions of a theoretical and experimental nature on AIED topics. The workshop solicits 4-6 page double-blind paper submissions from participants. Submissions of the following flavors will be sought: (1) research ideas, (2) case studies (or deployed projects), (3) review papers, (4) best practice papers, and (5) lessons learned. The format is the standard double-column AAAI Proceedings Style. All submissions will be peer-reviewed. Some will be selected for spotlight talks, and some for the poster session.
Cross-submissions
In addition to previously unpublished work, we invite papers on relevant topics which have appeared in or been submitted to alternative venues (such as other ML or AIED conferences). Accepted cross-submissions will be presented as posters, with an indication of the original venue. Selection of cross-submissions will be determined solely by the organizing committee.
Format
This will be a one day workshop with a number of paper presentations and poster spotlights, a poster session, several invited talks, and a panel discussion.
Submission Website
Submission website: https://easychair.org/conferences/?conf=aaai2023ai4edu.
The submission AUTHOR KIT can be found at https://www.aaai.org/Publications/Templates/AnonymousSubmission23.zip.
Global Knowledge Tracing Challenge
In this competition, we call for researchers and practitioners worldwide to investigate the opportunities for improving student assessment performance via knowledge tracing approaches with rich side information.
The details of this competition can be found at http://ai4ed.cc/competition/aaai2023competition.
Organizing Committee
Weiqi Luo, Guangdong Institute of Smart Education, Jinan University, China
Shaghayegh (Sherry) Sahebi, University at Albany – SUNY, USA
Lu Yu, Beijing Normal University, China
Richard Tong, Squirrel AI Learning, USA
Jiahao Chen, TAL Education Group, China
Qiongqiong Liu, TAL Education Group, China
Additional Information
Zitao Liu (main contact), TAL Education Group, China, zitao.jerry.liu@gmail.com
Homepage: http://www.zitaoliu.com/
W8: Artificial Intelligence and Diplomacy
Advances in AI and advanced data analytics are having considerable policy-related, geopolitical, economic, societal, legal, and security impacts. Recent global challenges such as the COVID-19 pandemic, concerns related to representative governments and associated democratic processes, as well as the importance of advanced data analytics and the potential use of AI-enabled systems in conflicts such as the war in Ukraine, motivate the importance of the topic of AI and diplomacy. There may be scenarios where diplomats, ambassadors, and other government representatives lack the technical understanding of AI and advanced data analytics needed to address challenges in all these domains, while the technical AI and data communities often lack a sophisticated understanding of the diplomatic processes and opportunities necessary for addressing AI challenges internationally. This workshop will explore the impact of advances in both artificial intelligence and advanced data analytics. This includes considering the broad impact of AI as well as data collection and curation globally, focusing especially on the impact that AI and data have on the conduct and practice of diplomacy.
Topics
Diplomacy-related topics include, but are not limited to:
- AI and data as tools for diplomacy, including sources of data sets and expert models that could be utilized with machine-learning tools to promote diplomacy.
- Issues that AI raises for diplomacy and policy formation in terms of cybersecurity, national security, defense, intelligence, representative forms of government, civil liberties, and social wellbeing.
- Opportunities for diplomatic cooperation on AI in the above issues, including on standards, restraints, voluntary agreements and multilateral and bilateral diplomacy to address concerns about AI while also protecting open research opportunities.
Technical topics include, but are not limited to:
- The use of AI and data in synthetic news and media;
- Data cooperatives and coalitions to provide equity in training AI models;
- AI and data linked to deepfake production;
- AI and data tied to personalization and micro-targeting;
- AI and community data linked to public policy, international security activities, etc.
We also welcome contributions related to best practice in the development, testing, verification, and assessment of both AI systems and data governance efforts, especially from an ethical and trustworthiness perspective.
Format
This workshop will take place over one day. This workshop will involve a set of invited talks from international experts working at the intersection of diplomacy and artificial intelligence. Submissions will be sought from AI researchers and diplomacy experts alike. We envisage two panel discussions: one on the challenges to the conduct of diplomacy created by AI, and a second on the technical challenges posed by diplomatic practice on the development, evaluation, verification, and deployment of AI systems. The program will also involve sessions presenting accepted papers and position statements.
Attendance
Attendance is open to all those interested in the workshop topic.
Submissions
Authors are invited to send a contribution in the AAAI 2023 proceedings format. We welcome submissions of full-length papers of up to 8 pages in length (including references) as well as position papers of up to 4 pages in length (including references). Shorter contributions are welcome.
Submission site: https://easychair.org/my/conference?conf=AIDip2023
Organizing Committee
Professor Barry O’Sullivan (b.osullivan@cs.ucc.ie), University College Cork, Ireland
Dr. David A. Bray (dbray@stimson.org), Stimson Center, USA, and Business Executives for National Security, USA
Eric Richardson (richardson@hdcentre.org), Centre for Humanitarian Dialogue, Geneva, and University of Michigan and University of California-Berkeley Law Schools, USA
Additional Information
http://osullivan.ucc.ie/AIDip2023
W9: Artificial Intelligence for Cyber Security (AICS)
The workshop will focus on the application of AI to problems in cyber-security. Cyber systems generate large volumes of data, and utilizing this data effectively is beyond human capabilities. Additionally, adversaries continue to develop new attacks. The workshop will address AI technologies and their security applications, such as machine learning, game theory, natural language processing, knowledge representation, automated and assistive reasoning, and human-machine interaction.
This year, AICS will emphasize practical considerations in the real world, with a special focus on social attacks, that is, attacking the human in the loop to gain access to critical systems.
In general, AI techniques are still not widely adopted in many real-world cyber security situations. There are many reasons for this, including practical constraints (power, memory, etc.), lack of formal guarantees within a practical real-world model, and lack of meaningful, trustworthy explanations. Moreover, in response to improved automated systems security (better hardware security, better cryptographic solutions), cyber criminals have amplified their efforts with social attacks such as phishing attacks and spreading misinformation. These large-scale attacks are cheap and need only succeed for a tiny fraction of all attempts to be effective. Thus, AI assistive techniques robust to human errors and insusceptible to manipulation can be very beneficial.
Topics
Topics of interest include, but are not limited to:
- Machine learning (including RL) approaches to make cyber systems secure and resilient
o Natural language processing techniques
o Anomaly/Threat detection techniques
o Big Data noise reduction techniques
o Adversarial Learning
o Deception in Learning
o Human behavioral modeling, being robust to human errors
- Formal reasoning in cyber systems, with a focus on the human behavior element
- Game Theoretic reasoning in cyber security
- Adversarial robust AI metrics
- Multi-agent interaction/agent-based modeling in cyber systems
- Modeling and simulation of cyber systems and system components
- Decision making under uncertainty in cyber systems
- Automation of cyber dataset labeling for realistic, benchmark datasets
- Meta-ML techniques (i.e., learning to learn) for cyber-security
- Quantitative human behavior models with application to cyber security
- Operational and commercial applications of AI in security
- Explanations of security decisions and vulnerability of explanation techniques
Format
As in previous years, the workshop will have two invited talks by recognized figures in the combined area of AI and cyber security. These plenary talks will set the context for the workshop by describing the domain and the major challenges in securing machine learning capabilities. We will host a series of short (15-20 minute) presentations of the papers accepted for this workshop. The number and length of the presentations will be based on the volume and quality of responses to the Call for Participation (CFP). Finally, we will hold a moderated roundtable/panel discussion among members of the AI community on practical issues related to the use of AI and security.
The AICS workshop will be a one-day meeting, from roughly 9am to 5pm.
Attendance
About 50
Submissions
Papers should be submitted as PDF files via EasyChair and may be at most 7 pages in AAAI format.
Submit to: https://easychair.org/conferences/?conf=aics230
Main Contact
Arunesh Sinha – arunesh.sinha@rutgers.edu
Organizing Committee
James Holt, Laboratory for Physical Sciences, USA, holt@lps.umd.edu
Edward Raff, Booz Allen Hamilton, USA, raff.edward@umbc.edu
Ahmad Ridley, National Security Agency, USA
Dennis Ross, MIT Lincoln Laboratory, USA, Dennis.Ross@ll.mit.edu
Arunesh Sinha, Rutgers University, USA, arunesh.sinha@rutgers.edu
Diane Staheli, Department of Defense, USA, diane.staheli@ll.mit.edu
Diane Staheli serves as the Chief of Responsible Artificial Intelligence within the DoD Chief Digital and Artificial Intelligence Office (CDAO).
Allan Wollaber, MIT Lincoln Laboratory, USA, Allan.Wollaber@ll.mit.edu
Additional Information
Workshop URL: http://aics.site/
W10: Artificial Intelligence for Social Good (AI4SG)
Scope and Topics
The field of Artificial Intelligence stands at an inflection point, and there are many different directions in which the future of AI research could unfold. Accordingly, there is growing interest in ensuring that current and future AI research is used in a responsible manner for the benefit of humanity (i.e., for social good). To achieve this goal, a wide range of perspectives and contributions is needed, spanning the full spectrum from fundamental research to sustained deployments in the real world.
This workshop will explore how AI research can contribute to solving challenging problems faced by current-day societies. For example, what role can AI research play in promoting health, sustainable development, and infrastructure security? How can AI initiatives be used to achieve consensus among a set of negotiating self-interested entities (e.g., finding resolutions to trade talks between countries)? To address such questions, this workshop will bring together researchers and practitioners across different strands of AI research and a wide range of important real-world application domains. The objective is to share the current state of research and practice, explore directions for future work, and create opportunities for collaboration. The workshop will complement the AAAI Special Track on AI for Social Impact by providing a forum where researchers interested in this area can connect in a more direct way.
The proposed workshop complements the objectives of the main conference by providing a forum for AI algorithm designers, such as those working in the areas of agent-based modelling, machine learning, spatio-temporal models, deep learning, explainable AI, fairness, social choice, non-cooperative and cooperative game theory, convex optimization, and planning under uncertainty, to present innovative and impactful real-world applications. Specifically, the proposed workshop serves two purposes. First, the workshop will provide an opportunity to showcase real-world deployments of AI research. More often than not, unexpected practical challenges emerge when solutions developed in the lab are deployed in the real world, which makes it difficult to utilize complex and well-thought-out computational/modeling advances. Learning about the challenges faced in these deployments during the workshop will help us understand the lessons of moving from the lab to the real world. Second, the workshop will provide opportunities to showcase AI systems which dynamically adapt to changing environments, are robust to errors in execution and planning, and handle uncertainties of different kinds that are common in the real world.
Addressing these challenges requires collaboration from different communities including machine learning, game theory, operations research, social science, and psychology. This workshop is structured to encourage a lively exchange of ideas between members of these communities. We encourage submissions to the workshop from: (i) computer scientists who have used (or are currently using) their AI research to solve important real-world problems for society’s benefit in a measurable manner; (ii) interdisciplinary researchers combining AI research with various disciplines (e.g., social science, ecology, climate, health, psychology, and criminology); and (iii) engineers and scientists from organizations who aim for social good and look to build real systems. Topics of interest include, but are not limited to, the areas identified in the AAAI Special Track on AI for Social Impact:
- AISI: Agriculture/Food
- AISI: Assistive Technology for Well-Being
- AISI: Biodiversity or Habitat
- AISI: Computational Social Science
- AISI: Climate
- AISI: Education
- AISI: Economic/Financial
- AISI: Energy
- AISI: Environmental Sustainability
- AISI: Health and Well-Being
- AISI: Humanities
- AISI: Low and Middle-Income Countries
- AISI: Mobility/Transportation
- AISI: Natural Sciences
- AISI: Web or Social Networks
- AISI: Philosophical and Ethical Issues
- AISI: Security and Privacy
- AISI: Social Development
- AISI: Social Welfare, Justice, Fairness and Equality
- AISI: Urban Planning and Resilience
- AISI: Underserved Communities
- AISI: Socially Responsible AI: Fairness, Accountability, and Transparency
- AISI: Other Social Impact
Format
The workshop will be a one-day meeting. It will include a number of (possibly parallel) technical sessions, a virtual poster session where presenters can discuss their work with the aim of further fostering collaborations, and multiple invited talks covering crucial challenges for the field of AI for Social Good. The workshop will conclude with a panel discussion.
Attendance
Attendance is open to all. At least one author of each accepted submission must be present at the workshop.
Submission Information
Submission Link: https://easychair.org/my/conference?conf=aisg23
Important Dates
Paper Submission Deadline – November 30, 2022
Author Notification – December 20, 2022
Camera Ready Version Due – January 10, 2023
Submission Types
- Technical Papers: Full-length research papers of up to 7 pages (excluding references and appendices) in AAAI format, detailing high-quality work in progress or work that could potentially be published at a major conference.
- Short Papers: Position or short papers of up to 4 pages (excluding references and appendices) in AAAI format that describe initial work or the release of privacy-preserving benchmarks and datasets on the topics of interest.
All papers must be submitted in PDF format, using the AAAI-23 author kit. Submissions should include the name(s), affiliations, and email addresses of all authors. Submissions will be refereed on the basis of technical quality, novelty, significance, and clarity. Each submission will be thoroughly reviewed by at least two program committee members.
Submissions of papers rejected from the NeurIPS 2022 and AAAI 2023 technical program are welcomed. For questions about the submission process, contact the workshop chairs.
Organizing Committee
Amulya Yadav (Penn State University) amulya@psu.edu and Bistra Dilkina (USC) dilkina@usc.edu
Additional Information
https://amulyayadav.github.io/AI4SG2023/
W11: Artificial Intelligence Safety (SafeAI)
Submission Deadline: Nov 04, 2022
http://www.safeaiw.org
Scope
The accelerated developments in the field of Artificial Intelligence (AI) hint at the need for considering Safety as a design principle rather than an option. However, theoreticians and practitioners of AI and Safety are confronted with different levels of safety, different ethical standards and values, and different degrees of liability, that force them to examine a multitude of trade-offs and alternative solutions. These choices can only be analyzed holistically if the technological and ethical perspectives are integrated into the engineering problem, while considering both the theoretical and practical challenges of AI safety. A new and comprehensive view of AI Safety must cover a wide range of AI paradigms, including systems that are application-specific as well as those that are more general, considering potentially unanticipated risks. In this workshop, we want to explore ways to bridge short-term with long-term issues, idealistic with pragmatic solutions, operational with policy issues, and industry with academia, to build, evaluate, deploy, operate and maintain AI-based systems that are demonstrably safe.
This workshop seeks to explore new ideas on AI safety with particular focus on addressing the following questions:
- What is the status of existing approaches in ensuring AI and Machine Learning (ML) safety, and what are the gaps?
- How can we engineer trustable AI software architectures?
- How can we make AI-based systems more ethically aligned?
- What safety engineering considerations are required to develop safe human-machine interaction?
- What AI safety considerations and experiences are relevant from industry?
- How can we characterize or evaluate AI systems according to their potential risks and vulnerabilities?
- How can we develop solid technical visions and new paradigms about AI Safety?
- How do metrics of capability and generality, and the trade-offs with performance affect safety?
The main interest of the proposed workshop is to look at a new perspective of system engineering where multiple disciplines such as AI and safety engineering are viewed as a larger whole, while considering ethical and legal issues, in order to build trustable intelligent autonomy.
Topics
Contributions are sought in (but are not limited to) the following topics:
- Safety in AI-based system architectures
- Continuous V&V and predictability of AI safety properties
- Runtime monitoring and (self-)adaptation of AI safety
- Accountability, responsibility and liability of AI-based systems
- Effect of Uncertainty in AI Safety
- Avoiding negative side effects in AI-based systems
- Role and effectiveness of oversight: corrigibility and interruptibility
- Loss of values and the catastrophic forgetting problem
- Confidence, self-esteem and the distributional shift problem
- Safety of AGI systems and the role of generality
- Reward hacking and training corruption
- Self-explanation, self-criticism and the transparency problem
- Human-machine interaction safety
- Regulating AI-based systems: safety standards and certification
- Human-in-the-loop and the scalable oversight problem
- Evaluation platforms for AI safety
- AI safety education and awareness
- Experiences in AI-based safety-critical systems, including industrial processes, health, automotive systems, robotics, critical infrastructures, among others
Important Dates
- Paper submission: Nov 04, 2022 – AOE time
- Notification of acceptance: Nov 22, 2022 – AOE time
- Camera-ready submission: Dec 06, 2022 – AOE time
Format
To deliver a truly memorable event, we will follow a highly interactive format that will include invited talks and thematic sessions. The thematic sessions will be structured into short pitches and a common panel slot to discuss both individual paper contributions and shared topic issues. Three specific roles are part of this format: session chairs, presenters, and paper discussants. The workshop will be organized as a 1.5- or 2-day meeting (depending upon the number of accepted submissions and events). Attendance is open to all. At least one author of each accepted submission must be present at the workshop.
Submissions
You are invited to submit:
- Full technical papers (7-9 pages, including references),
- Proposals for technical talks (up to a one-page abstract, including a short bio of the main speaker), without an associated paper, or
- Position papers (5-7 pages, including references).
Manuscripts must be submitted as PDF files via EasyChair online submission system: https://easychair.org/conferences/?conf=safeai2023
Please keep your paper format according to CEUR Formatting Instructions (two-column format). The CEUR author kit can be downloaded from: http://ceur-ws.org/Vol-XXX/CEURART.zip
Papers will be peer-reviewed by the Program Committee (2-3 reviewers per paper). The workshop follows a single-blind reviewing process. However, we will also accept anonymized submissions.
The workshop proceedings will be published on CEUR-WS. CEUR-WS is archival in the sense that a paper cannot be removed once it is published. Authors will keep the copyright of their papers as per CC BY 4.0. In other words, CEUR-WS is similar to arXiv. In any case, authors of accepted papers can opt out and decide not to include their paper in the proceedings. We will inform the authors about the procedure in due course.
We are also planning a Special Issue in a Journal, after the workshop.
For any question, please send an email to: safeai2023@easychair.org
Organizing Committee
- Gabriel Pedroza, CEA LIST, France
- Xiaowei Huang, University of Liverpool, UK
- Xin Cynthia Chen, University of Hong Kong, China
- Andreas Theodorou, Umeå University, Sweden
Steering Committee
- José Hernández-Orallo, Universitat Politècnica de València, Spain
- Mauricio Castillo-Effen, Lockheed Martin, USA
- Richard Mallah, Future of Life Institute, USA
- John McDermid, University of York, UK
Program Committee (look at the website: http://www.safeaiw.org)
Additional Information
Easychair CFP: https://easychair.org/cfp/safeai2023
W12: Creative AI Across Modalities
For the past few years, we have witnessed eye-opening generation results from AI foundation models such as GPT-3 and DALL-E 2. These models have established great infrastructure for new types of creative generation across various modalities such as language (e.g., story generation), images (e.g., text-to-image generation, fashion design), and audio (e.g., lyrics-to-music generation). Researchers in these fields encounter many similar challenges, such as how to use AI to help professional creators, how to evaluate creativity for an AI system, how to boost the creativity of AI, and how to avoid negative social impact. There have been various workshops that focus on some aspects of AI generation. This workshop aims to bring together researchers and practitioners from NLP, computer vision, music, ML, and other computational fields in the first workshop on “Creative AI Across Modalities”.
Topics
This multidisciplinary workshop will broadly explore topic areas including, but not limited to:
- Creative language generation: stories, poetry, figurative languages.
- Generative model and algorithms for image/audio, and multi-modal/video generation.
- Theory and analysis for creativity (e.g., humor understanding)
- Detecting and quantifying creativity
- AI technologies for improving human creativity (e.g., HCI+ML studies to accelerate scientific novelty)
- Data and resources for creative generation
- Applications of creative AI generation, such as automatic video dubbing
- Novel evaluation for creative AI generated outputs
- Social, cultural, and ethical considerations of creative AI generations, such as racial/gender bias, trustworthiness
Format
This workshop will be a one-day hybrid event (on 2/13/2023), consisting of in-person and virtual talks from invited speakers, an in-person panel discussion, and hybrid paper presentations (oral and posters).
Attendance
We expect around 50 attendees, and the workshop is open to all AAAI-23 participants.
Submissions
Authors are invited to submit relevant work of the following types, either archival or non-archival, in the AAAI-23 proceedings format:
Long paper: Submission of original work up to eight pages in length (including references).
Short paper: Submission of work in progress with preliminary results, and position papers, up to four pages in length (+ references).
Submit to: Submissions are made through OpenReview: https://openreview.net/group?id=AAAI.org/2023/Workshop/creativeAI
Workshop Chair
Dr. Jing Huang (Alexa AI, jhuangz@amazon.com)
Workshop Committee
Prof. Violet Peng (UCLA, violetpeng@cs.ucla.edu); Prof. Mohit Bansal (UNC Chapel Hill, mbansal@cs.unc.edu); Prof. Julian McAuley (UCSD, jmcauley@eng.ucsd.edu); Prof. Jiajun Wu (Stanford, jiajunwu@cs.stanford.edu); Dr. Arindam Mandal (Alexa AI, arindamm@amazon.com); Dr. Prithviraj Ammanabrolu (Allen Institute for AI, raja@allenai.org); Dr. Faeze Brahman (Allen Institute for AI, fbrahman@ucsc.edu); Dr. Ruohan Gao (Stanford, rhgao@cs.stanford.edu); Dr. Haw-Shiuan Chang (Alexa AI, chawshiu@amazon.com).
Additional Information
https://creativeai-ws.github.io/
W13: Deep Learning on Graphs: Methods and Applications (DLG-AAAI’23)
Deep learning models are at the core of Artificial Intelligence research today. It is well-known that deep learning techniques that were disruptive for Euclidean data such as images, or sequence data such as text, are not immediately applicable to graph-structured data. This gap has driven a tide of research on deep learning for graphs, on tasks such as graph representation learning, graph generation, and graph classification. New neural network architectures for graph-structured data have achieved remarkable performance on these tasks when applied to domains such as social networks, bioinformatics, and medical informatics.
This one-day workshop aims to bring together both academic researchers and industrial practitioners from different backgrounds and perspectives to address the above challenges. The workshop will consist of contributed talks, contributed posters, and invited talks on a wide variety of methods and applications. Work-in-progress papers, demos, and visionary papers are also welcome. The workshop intends to share visions of investigating new approaches and methods at the intersection of graph neural networks and real-world applications, and to discuss a wide range of topics of emerging importance for GNNs.
Topics
- Representation learning on graphs
- Graph neural networks on node classification, graph classification, link prediction
- The expressive power of Graph neural networks
- Interpretability in Graph Neural Networks
- Adversarial robustness in Graph Neural Networks
- Graph structure learning and graph matching
- Dynamic/incremental graph-embedding, prediction and generation
- Learning representation on heterogeneous networks, knowledge graphs
- Deep generative models for graph generation and graph transformation
- AutoML in Graph Neural Networks
- Graph2Seq, Graph2Tree, and Graph2Graph models
With a particular focus on, but not limited to, the following applications:
- Natural language processing
- User/content understanding and recommendation
- Bioinformatics (drug discovery, protein generation, protein structure prediction)
- Program synthesis and analysis and software mining
- Deep learning in neuroscience (brain network modeling and prediction)
- Cybersecurity (authentication graph, Internet of Things, malware propagation)
- Geographical network modeling and prediction (Transportation and mobility networks, Internet, mobile phone networks, power grids, social and contact networks)
Format
Full-day (8 hours)
Our program consists of two sessions: an academic session and an industry session. The academic session will focus on the most recent research developments on GNNs. The industry session will emphasize practical industrial product developments using GNNs. We will also have a panel discussion on the present and future of GNNs in both research and industry. Keynote speakers will deliver live invited talks (25 minutes) in person, and four contributed speakers will give live talks (12 minutes) about the accepted workshop papers. There will be a poster session to display and discuss the accepted works.
Attendance
250
Submissions
Submissions are limited to a total of 5 pages, including all content and references, must be in PDF format, and must be formatted according to the new Standard ACM Conference Proceedings Template. Following the AAAI conference submission policy, reviews are double-blind, and author names and affiliations should NOT be listed. Submitted papers will be assessed based on their novelty, technical quality, potential impact, and clarity of writing. For papers that rely heavily on empirical evaluations, the experimental methods and results should be clear, well executed, and repeatable. Authors are strongly encouraged to make data and code publicly available whenever possible. The accepted papers will be posted on the workshop website and will not appear in the AAAI proceedings.
Submit to: https://easychair.org/conferences/?conf=dlgaaai23
Workshop Chair
1) Main contact: Lingfei Wu (Pinterest)
2) Jian Pei (Duke University)
3) Jiliang Tang (Michigan State University)
4) Yinglong Xia (Meta AI)
5) Xiaojie Guo (IBM T.J. Watson Research Center)
Workshop Committee
- Yuanqi Du, George Mason University, ydu6@masonlive.gmu.edu
- Xiaojie Guo, George Mason University, xguo7@gmu.edu
- Lingwei Chen, Pennsylvania State University, lgchen@mix.wvu.edu
- Xiang Ling, Institute of Software, Chinese Academy of Sciences, lingxiang@zju.edu.cn
- Shiyu Wang, Emory University, shiyu.wang@emory.edu
- Lingfei Wu, JD.com, lwu@email.wm.edu
- Chen Ling, Emory University, chen.ling@emory.edu
- Yinglong Xia, Facebook, yinglong.xia.2010@ieee.org
- Junxiang Wang, Emory University, jwan936@emory.edu
- Xiaoyun Wang, University of California, Davis, xiaoyunw@nvidia.com
- Xinyi Zhang, Meta, xinyizhang@fb.com
- Ankit Jain, Meta, asj.ankit@gmail.com
- Li Zhang, George Mason University, lzhang18@gmu.edu
- Zhicheng Liang, Rensselaer Polytechnic Institute, liangz4@rpi.edu
- Yunsheng Bai, University of California, Los Angeles, yba@g.ucla.edu
- Mingming Sun, Baidu, sunmingming01@baidu.com
- Noah Lee, Meta, noahlee@fb.com
Additional Information
https://deep-learning-graphs.bitbucket.io/dlg-aaai23/
W14: DEFACTIFY: Multimodal Fact-Checking and Hate Speech Detection
Combating fake news is one of the burning societal crises of our time. It is difficult to expose false claims before they cause significant damage. Automatic fact/claim verification has recently become a topic of interest among diverse research communities. While research efforts and datasets exist for text-based fact verification, multimodal and cross-modal fact verification has received far less attention. This workshop will encourage researchers from interdisciplinary domains working on multimodality and/or fact-checking to come together and work on multimodal (images, memes, videos, etc.) fact-checking. At the same time, multimodal hate-speech detection is an important problem that has not received much attention. Lastly, learning joint modalities is of interest to both the Natural Language Processing (NLP) and Computer Vision (CV) communities. This second iteration will continue the research and discussion started last year.
Topics
It is a forum to bring attention to collecting, measuring, managing, mining, and understanding multimodal disinformation, misinformation, and malinformation data from social media. This workshop covers (but is not limited to) the following topics:
- Development of corpora and annotation guidelines for multimodal fact checking
- Computational models for multimodal fact checking
- Development of corpora and annotation guidelines for multimodal hate speech detection and classification
- Computational models for multimodal hate speech detection and classification
- Analysis of diffusion of Multimodal fake news and hate speech in social networks
- Understanding the impact of hate content on specific (targeted) groups
- Fake news and hate speech detection in low-resource languages
Format
It is a one-day workshop that includes invited talks, interactive discussions, paper presentations, shared task presentations, a poster session, etc. We expect 60-70 participants.
Submissions
We encourage long papers, short papers and demo papers. Submissions will undergo double blind review. Accepted papers will be archived.
Primary Contact
Amitava Das, AI Institute, University of South Carolina (AIISC), and Advisory Scientist, Wipro AI (AMITAVA@mailbox.sc.edu)
Workshop Chairs
Amitava Das (University of South Carolina, USA), Amit Sheth (University of South Carolina, USA), Asif Ekbal (IIT Patna, India), Manoj Chinnakotla (Microsoft, USA), Parth Patwa (UCLA, USA)
Student Volunteers
Shreyash Mishra (IIIT Sri City, India), S. Suryavardan (IIIT Sri City, India), Megha Chakraborty (University of South Carolina, USA)
Additional Information
website: https://aiisc.ai/defactify2/ (under development)
W15: Deployable AI (DAI)
Deploying AI models in the real world requires addressing several fundamental research questions and issues involving algorithmic, systemic, and societal aspects. It is crucial to carry out progressive research in this domain and to study the various deployability aspects of AI models that can ensure positive impacts on society. In this workshop, we intend to focus on research that proposes models usable as real-world solutions and on techniques/strategies that enable and ensure the ideal deployment of AI models while adhering to various standards.
Topics
Contributions are sought in (but are not limited to) the following topics:
- Deployable AI: Concepts and Models
- Explainable and Interpretable AI
- Human-in-the-loop
- Online Learning and Transfer Learning
- Fairness and Ethics in AI
- Safety, Security, and Privacy in AI
- Responsible AI
- Integrity and Robustness in AI
- Distilled and Lightweight AI Models
- AI Models and Social Impact
Papers will be presented in poster format, and some will be selected for oral presentation. Through invited talks and presentations by the participants, this workshop will bring together current advances in Deployable AI and set the stage for continuing interdisciplinary research discussions.
Important Dates
- Poster/short/position papers submission deadline: Oct 28, 2022
- Full paper submission deadline: Oct 28, 2022
- Paper notification: Nov 18, 2022
Format
This is a 1-day workshop involving talks by pioneer researchers from respective areas, poster presentations, and short talks of accepted papers.
Attendance
The eligibility criteria for attending the workshop will be registration in the conference/workshop as per AAAI norms. We expect 45-50 people in the workshop.
Submissions
You are invited to submit:
- Poster/short/position papers (up to 4 pages)
- Full papers (up to 7 pages)
The submissions should adhere to the AAAI paper guidelines available at (https://aaai.org/Conferences/AAAI-23/aaai23call/)
Accepted submissions will have the option of being posted on the workshop website. Authors who do not wish their papers to be posted online should mention this in the workshop submission. The submissions need to be anonymized.
See the webpage https://sites.google.com/view/dai-2023 for detailed instructions and submission link.
Workshop Chair
Balaraman Ravindran, Indian Institute of Technology Madras, India
Workshop Committee
Balaraman Ravindran, IIT Madras, India, primary contact (ravi@cse.iitm.ac.in)
Nandan Sudarsanam, IIT Madras, (nandan@iitm.ac.in)
Arun Rajkumar, IIT Madras, (arunr@cse.iitm.ac.in)
Harish Guruprasad, IIT Madras, (hariguru@cse.iitm.ac.in)
Chandrashekar Lakshminarayanan, IIT Madras, (chandrashekar@cse.iitm.ac.in)
Gokul S Krishnan, IIT Madras, (gokul@rbcdsai.org)
Rahul Vashisht, IIT Madras (rahul@cse.iitm.ac.in)
Krishna P, IIT Madras, (pkrishna@cse.iitm.ac.in)
Additional Information
Workshop URL: https://sites.google.com/view/dai-2023
W16: DL-Hardware Co-Design for AI Acceleration
As deep learning (DL) continues to permeate all areas of computing, algorithm engineers increasingly rely on hardware system design solutions to improve the efficiency and performance of deep learning models. However, the vast majority of DL studies rarely consider limitations such as the power/energy, memory footprint, and model size of real-world computing platforms, and even less often consider the computational speed of hardware systems and their computational characteristics. Addressing all of these metrics is critical if advances in DL are to be widely used on real device platforms and scenarios, especially those with high requirements for computational efficiency, such as mobile devices and AR/VR. Therefore, it is desirable to design and optimize both the DL models and the hardware computing platforms. The workshop provides a venue for the international research community to share mutual challenges and solutions between deep neural network learning and computing system platforms, with a focus on accelerating AI technologies on real system platforms through DL-hardware co-design.
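As one concrete example of the model-side levers in scope for this workshop, the following minimal sketch (pure NumPy; the tensor shape and bit-width are illustrative assumptions) shows symmetric int8 post-training weight quantization, which trades a small amount of accuracy for a roughly 4x reduction in weight storage.

import numpy as np

def quantize_int8(weights):
    """Map float weights to int8 with a single per-tensor scale (symmetric)."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.default_rng(0).normal(size=(256, 256)).astype(np.float32)
q, s = quantize_int8(w)
err = np.abs(w - dequantize(q, s)).mean()
print(f"int8 storage: {q.nbytes} bytes (vs. {w.nbytes} in float32), mean abs error {err:.4f}")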
Topics
- Neural network pruning & quantization & distillation
- Deep learning acceleration for applications
- Hardware-aware network architecture search & design
- Applications of deep learning on mobile and AR/VR
- New theory and fundamentals of DL-hardware co-design
- Deep learning to improve computer architecture design
- Real-time and energy-efficient deep learning systems
- Hardware accelerators for deep learning
Format
The workshop will be a half-day meeting comprising several invited talks from distinguished researchers in the field, spotlight lightning talks and a poster session where contributing paper presenters can discuss their work, and a concluding panel discussion focusing on future directions. Attendance is open to all registered participants.
Submissions
Submitted technical papers can be up to 4 pages long (excluding references and appendices). Position papers are welcome. All papers must be submitted in PDF format using the AAAI-23 author kit. Papers will be peer-reviewed and selected for spotlight and/or poster presentation. Submission site: https://cmt3.research.microsoft.com/DCAA2023/Submission/Index
Organizing Committee
Dongkuan Xu (NC State), Hua Wei (NJIT), Ang Li (Qualcomm AI Research), Peipei Zhou (University of Pittsburgh), Caiwen Ding (UConn), Yingyan Lin (Rice University), Yanzhi Wang (Northeastern University)
Additional Information
Workshop URL: https://ncsu-dk-lab.github.io/workshops/dcaa@2023/
W17: Energy Efficient Training and Inference of Transformer Based Models
Transformers are the architectural foundation of large deep learning language models. Recent successes of Transformer-based models in image classification and action prediction indicate their wide applicability. In this workshop, we focus on leading ideas built on Transformer models, such as Google's PaLM. We will examine key observations on model performance, optimizations for inference, and the power consumption of both mixed-precision inference and training.
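As a concrete illustration of mixed-precision training, the sketch below uses PyTorch's automatic mixed precision (AMP) utilities; it assumes a CUDA-capable GPU, and the toy model, data, and hyperparameters are illustrative assumptions only, not a recommendation from the workshop organizers.

import torch
import torch.nn as nn

# Toy Transformer-style feed-forward block; assumes a CUDA device is available.
model = nn.Sequential(nn.Linear(512, 2048), nn.GELU(), nn.Linear(2048, 512)).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
scaler = torch.cuda.amp.GradScaler()             # rescales gradients to avoid fp16 underflow

for step in range(10):                           # stand-in for a real training loop
    x = torch.randn(32, 512, device="cuda")
    target = torch.randn(32, 512, device="cuda")
    optimizer.zero_grad(set_to_none=True)
    with torch.cuda.amp.autocast():              # forward pass runs in float16 where safe
        loss = nn.functional.mse_loss(model(x), target)
    scaler.scale(loss).backward()                # backward on the scaled loss
    scaler.step(optimizer)
    scaler.update()

Running the same loop in full float32 and comparing wall-clock time and GPU power draw is one simple way to observe the kind of energy savings this workshop targets.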
The goal of this workshop is to provide a forum for researchers and industry experts who are exploring novel ideas, tools, and techniques to improve the energy efficiency of machine learning and deep learning as it is practiced today and as it will evolve in the next decade. We envision that only through close collaboration between industry and academia will we be able to address the difficult challenges and opportunities of reducing the carbon footprint of AI and its uses. We have tailored our program to best serve participants in a fully digital setting. Our forum facilitates an active exchange of ideas through:
- Keynotes, invited talks and discussion panels by leading researchers from industry and academia
- Peer-reviewed papers on latest solutions including works-in-progress to seek directed feedback from experts
- Independent publication of proceedings through IEEE CPS
Topics
We invite full-length papers describing original, cutting-edge, and even work-in-progress research projects about efficient machine learning. Suggested topics for papers include, but are not limited to:
- Neural network architectures for resource constrained applications
- Efficient hardware designs to implement neural networks including sparsity, locality, and systolic designs
- Power and performance efficient memory architectures suited for neural networks
- Network reduction techniques – approximation, quantization, reduced precision, pruning, distillation, and reconfiguration
- Exploring the interplay of precision, performance, power, and energy through benchmarks, workloads, and characterization
- Simulation and emulation techniques, frameworks, tools, and platforms for machine learning
- Optimizations to improve performance of training techniques, including on-device and large-scale learning
- Load balancing and efficient task distribution, communication and computation overlapping for optimal performance
- Verification, validation, determinism, robustness, bias, safety, and privacy challenges in AI systems
The proceedings from previous instances have been published through the prestigious IEEE Conference Publishing Services (CPS) and are available to the community via IEEE Xplore. In each instance, IEEE conducted an independent assessment of the papers for quality.
Format
Keynotes – Two keynote talks, each 45 minutes including 5 min for Q&A
Invited Talks – Six to eight invited talks, each 30 minutes including Q&A
Oral Presentations – 10 to 12 short presentations, each 15 minutes in two sessions
Poster Sessions – During coffee and lunch breaks
Panel Discussions – Two panel discussions of 30 min followed by 30 min for audience questions/comments
Breakout Sessions – After position statements by panelists
Attendance
50-75 people including authors, invited speakers, panelists, and general audience.
Submissions
Up to 5 pages, electronic submission. Proceedings of prior workshop editions are available via the conference tables of contents on IEEE Xplore.
Important dates
Submission Deadline: Nov 7, 2022 (AOE)
Notifications sent: Nov 18, 2022
Final Manuscript due: Dec 1st, 2022
Talk Recording due: Dec 19, 2022
Submission site: https://www.emc2-ai.org/submission
Website site: https://www.emc2-ai.org/aaai-23
Workshop Chair
Fanny Nina Paravecino, fanny.nina@microsoft.com
Kushal Datta, kushaldatta@microsoft.com
Organizing Committee
Raj Parihar, Chief AI Architect at d-Matrix Corporation (rparihar@d-matrix.ai)
Satyam Srivastava, Chief AI Software Architect at d-Matrix Corporation (ssrivastava@d-matrix.ai)
Tao Sheng, Director of AI and Machine Learning at Oracle Cloud (tao.t.sheng@oracle.com)
Ananya Pareek, System Architect at Apple (ananyapareek@gmail.com)
Prerana Maslekar, Silicon Verification Engineer at Microsoft (Prerana.maslekar@microsoft.com)
Sushant Kondguli, Graphics Architect at Meta Reality Labs (sushantkondguli@fb.com)
Additional Information
https://www.emc2-ai.org/aaai-23
W18: Graphs and More Complex Structures for Learning and Reasoning (GCLR)
Topics
The study of complex graphs is a highly interdisciplinary field that aims to understand complex systems by using mathematical models, physical laws, inference and learning algorithms, etc. Complex systems are often characterized by several components that interact with each other in multiple ways. Such systems are better modeled by complex graph structures such as edge- and vertex-labeled graphs (e.g., knowledge graphs), attributed graphs, multilayer graphs, hypergraphs, temporal/dynamic graphs, etc. In this third instance of the GCLR (Graphs and more Complex structures for Learning and Reasoning) workshop, we will focus on various complex structures along with inference and learning algorithms for these structures. Current research in this area focuses on extending existing ML algorithms as well as network science measures to these complex structures. This workshop aims to bring researchers from these diverse but related fields together to embark on interesting discussions on new challenging applications that require complex system modeling and on discovering ingenious reasoning methods. We have invited several distinguished speakers with research interests spanning the theoretical to experimental aspects of complex networks.
Call For Papers
We invite submissions from participants who can contribute to the theory and applications of modeling complex graph structures such as hypergraphs, multilayer networks, multi-relational graphs, heterogeneous information networks, multi-modal graphs, signed networks, bipartite networks, temporal/dynamic graphs, etc. The topics of interest include, but are not limited to:
- Constraint satisfaction and programming (CP), (inductive) logic programming (LP and ILP)
- Learning with Multi-relational graphs (alignment, knowledge graph construction, completion, reasoning with knowledge graphs, etc.)
- Learning with algebraic or combinatorial structure
- Link analysis/prediction, node classification, graph classification, clustering for complex graph structures
- Network representation learning
- Theoretical analysis of graph algorithms or models
- Optimization methods for graphs/manifolds
- Probabilistic and graphical models for structured data
- Social network analysis and measures
- Unsupervised graph/manifold embedding methods
The papers will be presented in poster format, and some will be selected for oral presentation. Through invited talks and presentations by the participants, this workshop will bring together current advances in Network Science as well as Machine Learning and set the stage for continuing interdisciplinary research discussions.
Important Dates
- Poster/short/position papers submission deadline: Oct 28, 2022
- Full paper submission deadline: Oct 28, 2022
- Paper notification: Nov 18, 2022
Submission Guidelines
We invite submissions to the AAAI-23 workshop on Graphs and more Complex structures for Learning and Reasoning to be held on February 13 or 14, 2023. We welcome the submissions in the following two formats:
- Poster/short/position papers: We encourage participants to submit preliminary but interesting ideas that have not been published before as short papers. These submissions would benefit from additional exposure and discussion that can shape a better future publication. We also invite papers that have been published at other venues to spark discussions and foster new collaborations. Submissions may consist of up to 4 pages plus one additional page solely for references.
- Full papers: Submissions must represent original material that has not appeared elsewhere for publication and that is not under review for another refereed publication. Submissions may consist of up to 7 pages of technical content plus up to two additional pages solely for references.
The submissions should adhere to the AAAI paper guidelines available at (https://aaai.org/Conferences/AAAI-23/aaai23call/)
Accepted submissions will have the option of being posted online on the workshop website. For authors who do not wish their papers to be posted online, please mention this in the workshop submission. The submissions need to be anonymized.
See the webpage https://sites.google.com/view/gclr2023/submissions for detailed instructions and submission link.
Format
This is a 1-day workshop involving talks by pioneer researchers from respective areas, poster presentations, and short talks of accepted papers.
Attendance
The eligibility criteria for attending the workshop will be registration in the conference/workshop as per AAAI norms. We expect 50-65 people in the workshop.
Workshop Chair
Balaraman Ravindran, Affiliation: Indian Institute of Technology Madras, India
Email: ravi@cse.iitm.ac.in
Workshop Committee
- Balaraman Ravindran, Indian Institute of Technology Madras, India, Primary contact (ravi@cse.iitm.ac.in)
- Ginestra Bianconi, Queen Mary University of London, UK (ginestra.bianconi@gmail.com)
- Philip S. Chodrow, Middlebury College, USA (pchodrow@middlebury.edu)
- Srinivasan Parthasarathy, Ohio State University, USA (srini@cse.ohio-state.edu)
- Tarun Kumar, Hewlett Packard Labs, Bengaluru, India (tarun.kumar2@hpe.com)
- Deepak Maurya, Purdue University, USA (dmaurya@purdue.edu)
- Anasua Mitra, IIT Guwahati, India (anasua.mitra@iitg.ac.in)
Additional Information
https://sites.google.com/view/gclr2023/
W19: Health Intelligence (W3PHIAI-23)
The integration of information from now widely available -omics and imaging modalities at multiple temporal and spatial scales with personal health records has become the standard of disease care in modern public health. Moreover, given the ever-increasing role of the World Wide Web as a source of information in many domains, including healthcare, accessing, managing, and analyzing its content has brought new opportunities and challenges. Advances in web science and technology for data management, integration, mining, classification, filtering, and visualization have given rise to a variety of applications representing real-time data on epidemics.
Furthermore, to tackle and overcome several issues in personalized healthcare, information technology will need to evolve to improve communication, collaboration, and teamwork among patients, their families, healthcare communities, and care teams involving practitioners from different fields and specialties. All these changes require novel solutions, and the AI community is well-positioned to provide both theoretical- and application-based methods and frameworks.
Topics
The workshop will include original contributions on theory, methods, systems, and applications of data mining, machine learning, databases, network theory, natural language processing, knowledge representation, artificial intelligence, semantic web, and big data analytics in web-based healthcare applications, with a focus on applications in population and personalized health. The scope of the workshop includes, but is not limited to, the following areas:
- Knowledge Representation and Extraction
- Integrated Health Information Systems
- Patient Education
- Patient-Focused Workflows
- Shared Decision Making
- Geographical Mapping and Visual Analytics for Health Data
- Social Media Analytics
- Epidemic Intelligence
- Predictive Modeling and Decision Support
- Semantic Web and Web Services
- Biomedical Ontologies, Terminologies, and Standards
- Bayesian Networks and Reasoning under Uncertainty
- Temporal and Spatial Representation and Reasoning
- Case-based Reasoning in Healthcare
- Crowdsourcing and Collective Intelligence
- Risk Assessment, Trust, Ethics, Privacy, and Security
- Sentiment Analysis and Opinion Mining
- Computational Behavioral/Cognitive Modeling
- Health Intervention Design, Modeling and Evaluation
- Online Health Education and E-learning
- Mobile Web Interfaces and Applications
- Applications in Epidemiology and Surveillance (e.g., Bioterrorism, Participatory Surveillance, Syndromic Surveillance, Population Screening)
- Hybrid methods, combining data driven and predictive forward models
- Response to Covid-19
- Computational models of ageing
We also invite participants to an interactive hack-a-thon focused on finding creative solutions to novel problems in health, with an emphasis on ageing. We will design an ageing-related challenge that leverages publicly available data from the Gateway to Global Aging Data and the CDC's Healthy Ageing dataset. The aim of the hack-a-thon is not only to foster innovation but also to engage the community and create new collaborations.
Submissions
We invite workshop participants to submit their original contributions following the AAAI format through EasyChair. Three categories of contributions are sought: full research papers up to 8 pages; short papers up to 4 pages; and posters and demos up to 2 pages. Participants in the hack-a-thon will be asked to either register as a team or be randomly assigned to a team after registration. Their results will be submitted in either a short paper or poster format. A dataset (or datasets) will be provided to hack-a-thon participants, and the workshop organizers are engaged with Altos Labs to sponsor the event.
Organizing Committee
Martin Michalowski, PhD, FAMIA (Co-chair), University of Minnesota; Arash Shaban-Nejad, PhD, MPH (Co-chair), The University of Tennessee Health Science Center – Oak-Ridge National Lab (UTHSC-ORNL) Center for Biomedical Informatics; Simone Bianco, PhD (Co-chair), Altos Labs – Bay Area Institute of Science; Szymon Wilk, PhD, Poznan University of Technology; David L. Buckeridge, MD, PhD, McGill University; John S. Brownstein, PhD, Boston Children’s Hospital
Additional Information
W20: Knowledge-Augmented Methods for Natural Language Processing
We invite submissions of papers describing innovative research methods and applications in knowledge-enhanced NLP. Papers that introduce new theoretical results or methods, help develop a better understanding of emerging concepts and topics through extensive empirical experiments, or demonstrate a novel application of these methods to a natural language processing domain are strongly encouraged.
The topics include but are not limited to the following:
- Knowledge-augmented language model pre-training
- Knowledge-augmented language model fine-tuning
- Knowledge retrieval from unstructured data
- Knowledge retrieval from structured data
- NLP methods augmented by knowledge graphs
- NLP methods augmented by commonsense
- NLP methods augmented by heuristic rules
- NLP methods augmented by dictionary
- NLP methods augmented by linguistic features
- NLP methods augmented by retrieved texts
- NLP methods augmented by large language models
Submissions
Regular papers are limited to a total of 7 pages, excluding all references. We also encourage extended abstracts and short papers, which can range from 2 to 4 pages excluding references. All submissions must be in PDF format and formatted according to the new Standard AAAI Conference Proceedings Template. Following the AAAI conference submission policy, reviews are double-blind, and author names and affiliations should NOT be listed. Submitted papers will be assessed based on their novelty, technical quality, potential impact, and clarity of writing. For papers that rely heavily on empirical evaluations, the experimental methods and results should be clear, well executed, and repeatable. Authors are strongly encouraged to make code publicly available.
The accepted papers will be posted on the workshop website and will not appear in the AAAI proceedings.
- Submission link (CMT)
https://cmt3.research.microsoft.com/KnowledgeNLP2023/Submission/Index
Important Dates
Paper submission deadline: Nov. 4, 2022
Author notification: Nov. 18, 2022
All deadlines are 11:59pm UTC -12h (“anywhere on Earth”)
Organizing Committee
Chenguang Zhu (Microsoft Cognitive Service Research); Meng Jiang (University of Notre Dame); Lu Wang (University of Michigan); Shuohang Wang (Microsoft Cognitive Service Research); Wenhao Yu (University of Notre Dame); Huan Sun (Ohio State University)
Additional Information
Please email chezhu@microsoft.com, shuowa@microsoft.com if you have any questions!
Website: https://knowledge-nlp.github.io/aaai2023
W21: Modelling Uncertainty in the Financial World (MUFin’23)
Among many things, Covid-19 has provided stark proof that uncertainty is real, and it is here to stay. Perhaps nothing is more sensitive to uncertainty than the financial world. Compounding the problem, while artificial intelligence techniques are used to predict the future state of events, their performance is significantly impacted by disruptions not captured in the past. Unforeseen scenarios such as changes in the economy, variations in customer behavior, pandemics, recessions, and fraudulent transactions often result in unexpected behavior of financial models, thus associating a level of uncertainty with them. It is thus imperative for the research community to explore, identify, analyze, and address such uncertainties to develop robust models applicable in real-world scenarios. To this effect, the goal of this workshop is to bring academics and industry experts together to discuss this important, timely, and yet-unsolved area of modelling uncertainties in the financial world.
Submissions
We invite papers in the following categories (that have not been published before and are not currently under consideration at another venue) focused on modelling data uncertainty for financial applications.
- Full papers up to 7-pages (with results)
- Position papers up to 2-pages (with an idea that deserves broader discussion)
Topics
Topics of interest include, but are not limited to the following:
Application Topics:
- Evaluating financial risk
- Forecasting stock market
- Modelling seasonality in market trends
- Fraud transaction prediction
- Modelling temporal social media activity
- Recommendation systems
Technical Topics:
- Temporal/Sequential data modelling – clustering, classification
- Modelling uncertainty in financial data
- Temporal graphs
- Time Series Forecasting
- Text analytics of financial reports, forecasts, and documents
- Explainable/interpretable sequential modelling
- Exploring fairness and robustness towards bias in financial models
- Representation learning from temporal/sequential data
- Modelling financial data as temporal point processes
Submissions need to be made at https://easychair.org/conferences/?conf=mufin23
The paper formatting must be consistent with that of AAAI 2023 main conference research track, details of which are mentioned in the Author Kit at
https://www.aaai.org/Publications/Templates/AuthorKit23.zip
The best paper will receive a Best Paper Award worth $500, sponsored by Mastercard!
Format
MuFin 2023 will be a full-day workshop with a diverse program including keynote talks, panel discussions, full-paper presentations, and poster sessions for the position papers. Attendance will be drawn from paper authors, invited speakers, and participants interested in learning more about the area.
Organizing Committee
Bonnie Buchanan, Karamjit Singh, Maneet Singh, Nitendra Rajput, Shraddha Pandey, Srijan Kumar
Additional Information
Workshop Website: https://sites.google.com/view/w-mufin/
W22: Multi-Agent Path Finding
Multi-Agent Path Finding (MAPF) requires computing collision-free paths for multiple agents from their current locations to given destinations in a known environment. Example applications range from robot coordination to traffic management. In recent years, researchers from artificial intelligence, robotics, and theoretical computer science have explored different variants of the MAPF problem as well as various approaches with different properties. The purpose of this workshop is to bring these researchers together to present their research, discuss future research directions, and cross-fertilize the different communities.
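To make the problem definition concrete, the following minimal Python sketch (the paths and the wait-at-goal padding convention are illustrative assumptions) checks a set of candidate paths for the vertex and edge (swap) conflicts that any valid MAPF solution must avoid.

from itertools import combinations

def conflicts(paths):
    """Return (agent_i, agent_j, timestep, kind) conflicts between timed paths.

    Each path is a list of (row, col) cells, one per timestep; shorter paths
    are padded by letting the agent wait at its goal.
    """
    horizon = max(len(p) for p in paths)
    pad = [p + [p[-1]] * (horizon - len(p)) for p in paths]
    found = []
    for (i, a), (j, b) in combinations(enumerate(pad), 2):
        for t in range(horizon):
            if a[t] == b[t]:                                        # vertex conflict
                found.append((i, j, t, "vertex"))
            if t + 1 < horizon and a[t] == b[t + 1] and a[t + 1] == b[t]:
                found.append((i, j, t + 1, "edge"))                 # swap conflict
    return found

# Two agents crossing a corridor: they swap cells between t=1 and t=2.
p0 = [(0, 0), (0, 1), (0, 2), (0, 3)]
p1 = [(0, 3), (0, 2), (0, 1), (0, 0)]
print(conflicts([p0, p1]))    # -> [(0, 1, 2, 'edge')]

MAPF planners differ mainly in how they search for paths that make this conflict list empty while optimizing an objective such as makespan or sum of costs.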
Topics
All works that relate to collision-free path planning or navigation for multiple agents are welcome,
including but not limited to:
– Search-, rule-, reduction-, reactive-, and learning-based MAPF planners;
– Combination of MAPF and task allocation, scheduling, and execution monitoring, etc.;
– Real-world applications of MAPF planners;
– Multi-agent reinforcement learning for centralized and decentralized MAPF;
– Customization of MAPF planners for actual robots (e.g. motion and communication constraints, environment changes, etc.);
– Standardization of MAPF terminology and benchmarks.
Format
The workshop is a One-Day workshop including invited talks, paper presentations, Q&As, and community discussion.
Attendance
The workshop expects to invite 30 participants, including program committees, accepted authors, invited speakers, and researchers who are active in the MAPF community. People who are not invited but interested in MAPF are welcome to attend.
Submissions
Submissions can contain relevant work in all possible stages, including work that was recently published, is under submission elsewhere, was only recently finished, or is still ongoing. Authors of papers published or under submission elsewhere are encouraged to submit the original papers or short versions (including abstracts) to educate other researchers about their work, as long as resubmissions are clearly labelled to avoid copyright violations. Position papers and surveys are also welcome. Submissions will go through a light review process to ensure a fit with the topic and acceptable quality. Non-archival workshop notes will be produced containing the material presented at the workshop.
Format: Any format is acceptable.
Page limitation: There is no limit on the number of pages.
Important Dates
Note: all deadlines are “anywhere on earth” (UTC-12)
Paper submission deadline: Oct 31, 2022
Paper notification: Nov 18, 2022
Final version: Dec 14, 2022
Workshop: Feb 13-14, 2023 (TBD)
Workshop Committee
Jiaoyang Li, Carnegie Mellon University (jiaoyangli@cmu.edu)
Zhongqiang Ren, Carnegie Mellon University (zhongqir@andrew.cmu.edu)
Han Zhang, University of Southern California (zhan645@usc.edu)
Zhe Chen, Monash University (zhe.chen@monash.edu)
Advisory Board
Sven Koenig, University of Southern California
Howie Choset, Carnegie Mellon University
Peter Stuckey, Monash University
Additional Information
http://idm-lab.org/wiki/AAAI23-MAPF/index.php/Main/HomePage
W23: Multimodal AI for Financial Forecasting (Muffin)
Financial forecasting is an essential task that helps investors make sound investment decisions and create wealth. With increasing public interest in trading stocks, cryptocurrencies, bonds, commodities, currencies, crypto coins, and non-fungible tokens (NFTs), there have been several attempts to utilize unstructured data for financial forecasting. Unparalleled advances in multimodal deep learning have made it possible to utilize multimedia such as textual reports, news articles, streaming video content, audio conference calls, user social media posts, and customer web searches for identifying profit-creation opportunities in the market. For example, how can we leverage new and better information to predict movements in stocks and cryptocurrencies well before others? However, there are several hurdles towards realizing this goal: (1) large volumes of chaotic data; (2) combining text, audio, video, social media posts, and other modalities is non-trivial; (3) long context of media spanning multiple hours, days, or even months; (4) user sentiment and media hype-driven stock/crypto price movement and volatility; (5) difficulty in automatically capturing market-moving events using traditional statistical methods; and (6) misinformation and non-interpretability of financial systems leading to massive losses and bankruptcies.
To address these major challenges, this workshop on Multimodal AI for Financial Forecasting (Muffin) at AAAI 2023 aims to bring together researchers from the natural language processing, computer vision, speech recognition, machine learning, statistics, and quantitative trading communities to expand research at the intersection of AI and financial time series forecasting. To further motivate and direct attention to unsolved problems in this domain, the workshop is organizing two shared tasks: (1) Stock Price and Volatility Prediction post Monetary Conference Calls and (2) Cryptocurrency Bubble Detection.
Topics
This workshop will hold a research track and a shared task track. The research track aims to explore recent advances and challenges of multimodal AI for finance. As this topic is an inherently multi-modal subject, researchers from artificial intelligence, computer vision, speech processing, natural language processing, data mining, statistics, optimization, and other fields are invited to submit papers on recent advances, resources, tools, and challenges on the broad theme of Multimodal AI for finance. The topics of the workshop include but are not limited to the following:
- Transformer models / Self-supervised Learning on Financial Data
- Machine Learning for Finance
- Video processing for facial expression detection, emotion detection, deception detection, gait and posture analysis
- Financial Document Processing
- Audio-visual-textual alignment, information extraction, salient
- Vision-language model for financial video analysis
- Financial Event detection in Multimedia
- Entity extraction and linking on financial text
- Conversational dialogue modeling for Financial Conference Calls
- Social media and User NLP for Finance
- Natural Language Processing Applications for Finance
- Transfer learning approaches on financial data
- Named-entity recognition, relationship extraction, ontology learning in financial documents
- Multi-modal knowledge discovery
- Data acquisition, augmentation, feature engineering, for financial analysis and risk management
- Bias analysis and mitigation in financial models and data
- Statistical Modeling for Time Series Forecasting
- Interpretability and explainability for financial AI models
- Privacy-preserving AI for finance
- Video understanding (human behavior cognition, topic mining, etc.)
Important Dates
- Paper submission deadline: November 4, 2022
- Acceptance notification: November 18, 2022
- Camera-ready submission: December 25, 2022
- Muffin workshop at AAAI 2023: Feb 13, 2023
All deadlines are “anywhere on earth” (UTC-12)
Submissions
Authors are invited to submit their unpublished work that represents novel research. The papers should be written in English using the AAAI-23 author kit and follow the AAAI 2023 formatting guidelines. Authors can also submit the supplementary materials, including technical appendices, source codes, datasets, and multimedia appendices. All submissions, including the main paper and its supplementary materials, should be fully anonymized. For more information on formatting and anonymity guidelines, please refer to AAAI 2023 call for paper page.
All papers will be double-blind peer reviewed. The Muffin workshop accepts both long and short papers:
- Short Paper: Up to 4 pages of content, including references. Upon acceptance, the authors are provided with 1 more page to address the reviewers' comments.
- Long Paper: Up to 8 pages of content, including references. Upon acceptance, the authors are provided with 1 more page to address the reviewers' comments.
- Shared Task Track: Participants are invited to take part in shared tasks: (1) Financial Prediction from Conference Call Videos and (2) Cryptocurrency Bubble Detection. Participants are invited to submit a system paper of 4-8 pages of content including the references.
Two reviewers with relevant technical expertise will review each paper. Authors of accepted papers will present their work in either the oral or poster session. All accepted papers will appear in the workshop proceedings, which will be published on CEUR-WS. The authors retain the copyright of their papers published on CEUR-WS. The workshop proceedings will be indexed by DBLP.
Papers must be submitted using EasyChair (TBD). For information on system paper submission for the shared tasks, please refer to our shared tasks page.
Organizing Committee
- Puneet Mathur, University of Maryland College Park, USA
- Franck Dernoncourt, Adobe Research, USA
- Fu-Ming Guo, Fidelity Investments, USA
- Lucie Flek, University of Marburg, Germany
- Ramit Sawhney, Georgia Institute of Technology, USA
- Sanghamitra Dutta, University of Maryland College Park, USA
- Sudheer Chava, Georgia Institute of Technology, USA
- Dinesh Manocha, University of Maryland College Park, USA
Additional Information
https://muffin-aaai23.github.io/cfp.html
W24: Practical Deep Learning in the Wild (Practical-DL)
Deep learning has achieved great success for artificial intelligence (AI) in many advanced tasks, such as computer vision, natural language processing, and robotics. However, research in the AI field also shows that model performance in the wild is far from practical on open-world data and scenarios. Beyond accuracy, which receives most of the attention in deep learning, these phenomena are closely related to model efficiency and robustness, which we collectively refer to as Practical Deep Learning in the Wild (Practical-DL).
Regarding model efficiency, in contrast to an ideal environment, it is impractical to train a huge neural network containing billions of parameters on a large-scale, high-quality dataset and then deploy it to an edge device in practice. Meanwhile, considering model robustness, noisy input data frequently occurs in open-world scenarios, which presents critical challenges for building robust AI systems in practice. Moreover, existing research shows that there is a trade-off between the robustness and accuracy of deep learning models, and in the context of efficient deep learning with limited resources it is even more challenging to achieve a good trade-off while satisfying efficiency requirements. These complex demands have profound implications and motivate the topic of this Practical-DL workshop at AAAI 2023: building practical AI with efficient and robust deep learning models.
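As a small illustration of the robustness side of this trade-off, the sketch below implements the fast gradient sign method (FGSM), one of the simplest adversarial attacks; the toy model, inputs, and epsilon are illustrative assumptions and do not represent any particular workshop contribution.

import torch
import torch.nn as nn

def fgsm(model, x, y, epsilon):
    """Perturb x by epsilon in the gradient-sign direction that increases the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy MNIST-like classifier
x = torch.rand(8, 1, 28, 28)                                 # fake images in [0, 1]
y = torch.randint(0, 10, (8,))
x_adv = fgsm(model, x, y, epsilon=0.03)
print((x_adv - x).abs().max())                               # perturbation bounded by epsilon

Measuring how accuracy degrades under such perturbations, at different model sizes and bit-widths, is one common way to study the efficiency-robustness-accuracy trade-off described above.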
Topics
The workshop organizers invite paper submissions on the following (and related) topics:
- Network sparsity, quantization, and binarization
- Adversarial attacks on deep learning systems
- Neural architecture search (NAS)
- Robust architectures against adversarial attacks
- Hardware implementation and on-device deployment
- Benchmark for evaluating model robustness
- On-device learning
- New methodologies and architectures for efficient and robust deep learning
Important Dates
November 4, 2022 – Submission Deadline
November 23, 2022 – Acceptance Notification
February 13, 2023 – Workshop Date
Format
The workshop will be a 1.5-day meeting.
The workshop will include several technical sessions, i.e., oral and poster sessions where presenters can discuss their work to further foster collaborations, as well as invited talks covering crucial aspects of practical deep learning in the wild, especially efficient and robust deep learning. In addition, the workshop will hold a challenge to offer fertile ground for designing efficient deep learning systems in practice.
Attendance
Attendance is open to all. At least one author of each accepted submission must be present at the workshop.
Submissions
URL: https://cmt3.research.microsoft.com/PracticalDL2023
Submissions of technical papers can be up to 7 pages excluding references and appendices. Short or position papers of up to 4 pages are also welcome. All papers must be submitted in PDF format, using the AAAI-23 author kit. Papers will be peer-reviewed and selected for oral and/or poster presentations at the workshop.
Invited Speakers
Adam Kortylewski (Research Scientist, Max Planck for Informatics and Saarland Informatics Campus)
Priyadarshini Panda (Assistant Professor, Yale University)
Florian Tramer (Assistant Professor, ETH Zürich)
Tom Goldstein (Associate Professor, University of Maryland)
Neil Gong (Assistant Professor, Duke University)
Jie M. Zhang (Lecturer/Assistant Professor, King's College London)
Cihang Xie (Assistant Professor, UC Santa Cruz)
Xiaochun Cao (Professor, Shenzhen Campus, Sun Yat-sen University)
Shouling Ji (Professor, Zhejiang University)
Bichen Wu (Staff Research Scientist, Meta Reality Labs)
Workshop Chair
Haotong Qin (Beihang University)
Ruihao Gong (SenseTime Research)
Jiakai Wang (Zhongguancun Laboratory)
Siyuan Liang (Chinese Academy of Sciences)
Zeyu Sun (Zhongguancun Laboratory)
Aishan Liu (Beihang University)
Wenbo Zhou (University of Science and Technology of China)
Shanghang Zhang (Peking University)
Fisher Yu (ETH Zurich)
Xianglong Liu (Beihang University)
Workshop Committee (Incomplete list)
Xiuying Wei (Beihang University)
Jun Guo (Beihang University)
Shunchang Liu (Beihang University)
Simin Li (Beihang University)
Yifu Ding (Beihang University)
Mingyuan Zhang (Nanyang Technological University)
Jinyang Guo (Beihang University)
Renshuai Tao (Huawei)
Additional Information
https://practical-dl.github.io/
W25: Privacy-Preserving Artificial Intelligence
Overview
The availability of massive amounts of data, coupled with high-performance cloud computing platforms, has driven significant progress in artificial intelligence and, in particular, machine learning and optimization. It has profoundly impacted several areas, including computer vision, natural language processing, and transportation. However, the use of rich data sets also raises significant privacy concerns: They often reveal personal sensitive information that can be exploited, without the knowledge and/or consent of the involved individuals, for various purposes including monitoring, discrimination, and illegal activities. In its fourth edition, the AAAI Workshop on Privacy-Preserving Artificial Intelligence (PPAI-23) provides a platform for researchers, AI practitioners, and policymakers to discuss technical and societal issues and present solutions related to privacy in AI applications. The workshop will focus on both the theoretical and practical challenges related to the design of privacy-preserving AI systems and algorithms and will have strong multidisciplinary components, including soliciting contributions about policy, legal issues, and the societal impact of privacy in AI.
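As a small, self-contained illustration of one core topic, the sketch below implements the Laplace mechanism for a differentially private count query; the data, query, and privacy budget are illustrative assumptions, not a prescribed PPAI-23 baseline.

import numpy as np

def laplace_count(data, predicate, epsilon, rng):
    """Release a count query with epsilon-differential privacy (sensitivity 1)."""
    true_count = sum(1 for record in data if predicate(record))
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)   # scale = sensitivity / epsilon
    return true_count + noise

rng = np.random.default_rng(0)
ages = [23, 35, 41, 29, 62, 57, 33, 45]
noisy = laplace_count(ages, lambda a: a >= 40, epsilon=0.5, rng=rng)
print(f"noisy count of records with age >= 40: {noisy:.2f} (true value is 4)")

Smaller values of epsilon add more noise and give stronger privacy guarantees, which is exactly the utility-privacy tension many of the topics below explore.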
Topics
The workshop organizers invite paper submissions on the following (and related) topics:
- Applications of privacy-preserving AI systems
- Attacks on data privacy
- Differential privacy: theory and applications
- Distributed privacy-preserving algorithms
- Privacy-preserving Federated learning
- Human rights and privacy
- Privacy and Fairness
- Privacy and causality
- Privacy-preserving optimization and machine learning
- Privacy-preserving test cases and benchmarks
- Surveillance and societal issues
Finally, the workshop will welcome papers that describe the release of privacy-preserving benchmarks and data sets that can be used by the community to solve fundamental problems of interest, including in machine learning and optimization for health systems and urban networks, to mention but a few examples.
Format
The workshop will be a one-day meeting. It will include a number of technical sessions, a poster session where presenters can discuss their work with the aim of further fostering collaborations, multiple invited speakers covering crucial challenges for the field of privacy-preserving AI applications, a tutorial talk, and a concluding panel discussion. Attendance is open to all. At least one author of each accepted submission must be present at the workshop.
Submissions
Submissions of technical papers can be up to 7 pages excluding references and appendices. Short or position papers of up to 4 pages are also welcome. All papers must be submitted in PDF format, using the AAAI-23 author kit. Papers will be peer-reviewed and selected for oral and/or poster presentation at the workshop.
Submission site: https://cmt3.research.microsoft.com/PPAI2023
Organizing Committee
Ferdinando Fioretto, ffiorett@syr.edu (Syracuse University)
Catuscia Palamidessi, catuscia@lix.polytechnique.fr (Inria, Ecole Polytechnique)
Pascal Van Hentenryck, pascal.vanhentenryck@isye.gatech.edu (Georgia Institute of Technology)
Additional Information
Supplemental workshop site: https://aaai-ppai23.github.io/
W26: Recent Trends in Human-Centric AI
Human-Centric Artificial Intelligence is the notion of developing and using AI systems to help enhance, augment, and improve the quality of human life. Naturally, this paradigm involves two major components: (1) human-centered computing and representation learning, and (2) responsible AI in human-centric applications.
The first component revolves around tasks such as user authentication, activity recognition, pose estimation, affective computing, health analytics, and others, which often rely on modeling data with specific spatiotemporal properties, for instance human activity images/videos, audio signals, sensor-based time-series (e.g., PPG, ECG, EEG, IMU, clinical/medical data), and more. In recent years, learning effective representations for computer vision and natural language has revolutionized the effectiveness of solutions in these domains. Nonetheless, other data modalities, especially human-centric ones, have been largely under-served in terms of research and development. For these under-served domains, the general attitude has been to take advances from the 'vision' or 'NLP' communities and adapt them where possible. We argue, however, that a more original and stand-alone perspective on human-centric data can be highly beneficial and can lead to new and exciting advancements in the area.
While the first component of this workshop mostly covers interpretation of people by AI, the second key component of the workshop is centered around interpretation of AI by people. This means aiding humans in investigating AI systems to facilitate responsible development, prioritizing concepts such as explainability, fairness, robustness, and security. We argue that identifying potential failure points and devising actionable directions for improvement is imperative for responsible AI and can benefit from translating model complexities into a language that humans can interpret and act on. Hence, this workshop also aims to cover recent advances in the area of responsible AI in human-centric applications.
In the R2HCAI workshop, we aim to bring together researchers broadly interested in Representation Learning for Responsible Human-Centric AI to discuss recent and novel findings in the intersection of these communities.
Topics
The workshop invites contributions on novel methods, innovations, and applications of Human-Centric AI, including (but not limited to):
- Curation and handling of human-generated data (e.g., speech; human-related images/videos such as faces, gait, activities, interactions, etc.; wearable, clinical, medical, and health data such as PPG, ECG, EEG, IMU; and others),
- Learning frameworks such as unsupervised (self-supervised, semi-supervised) learning for human-generated sensor signals & medical data,
- Learning architectures such as novel networks and loss functions, for human-generated data,
- Explainable/interpretable machine learning,
- Fairness, accountability, and transparency,
- Evaluation and benchmarking of responsible AI development,
- Responsible human-AI interaction
- Theoretical frameworks for responsible AI,
- Privacy-preserving AI.
Format
The workshop will be a 1-day event with a number of invited talks by prominent researchers, a panel discussion, and a combination of oral and poster presentations of accepted papers.
Submissions
- The AAAI template (https://aaai.org/Conferences/AAAI-23/aaai23call/) should be used for all submissions.
- Two types of submissions will be considered: full papers (6-8 pages + references + unlimited appendices), and short papers (2-4 pages + references).
- Publication in the workshop is considered non-archival and does not prohibit authors from publishing their papers in archival venues such as NeurIPS/ICLR/ICML or IEEE/ACM Conferences and Journals. We also welcome submissions that are currently under consideration in such archival venues. Upon acceptance papers will be made publicly available on the workshop website.
- Submissions will go through a double-blind review process.
- Submit to: https://cmt3.research.microsoft.com/R2HCAI2023/
Organizing Committee
Ahmad Beirami (Google Research)
Ali Etemad (Queen's University & Google Research)
Asma Ghandeharioun (Google Research)
Luyang Liu (Google Research)
Ninareh Mehrabi (USC ISI)
Pritam Sarkar (Queen’s University & Vector Institute)
Additional Information
Contact: r2hcai@googlegroups.com
Website: https://r2hcai.github.io/
W27: Reinforcement Learning Ready for Production
The 1st Reinforcement Learning Ready for Production workshop, held at AAAI 2023, focuses on understanding reinforcement learning trends and algorithmic developments that bridge the gap between theoretical reinforcement learning and production environments.
Topics
- Efficient reinforcement learning algorithms that optimize sample complexity in real-world environments
- Counterfactual evaluation for reinforcement learning algorithms
- Reinforcement learning research for recommendation systems, robotics, optimization, and many more industry fields that enables the productionization of reinforcement learning
- Novel applications of reinforcement learning in the internet, robotics, chip design, supply chain, and many more industry fields. Outcomes from these applications should come from either production environments or well-recognized high-fidelity simulators (excluding standard OpenAI Gym and standard Atari Games)
Format
This workshop will be a 1-day workshop. We have confirmed 7 distinguished reinforcement learning researchers and practitioners to speak or participate in a panel for this workshop (listed in the next section, some have scheduling pending). We will have a reinforcement learning foundations panel, and talks on reinforcement learning advancements, applications in recommender systems, robotics, medical systems, and production A/B experiments. We anticipate about 4 hours of hosted content from the workshop and 1.5 hours of poster sessions and 1.5 hours of contributed talks, which will go from 10 am to 5 pm on the workshop day.
Attendance
We expect to invite people who have their paper accepted and industry/academia experts familiar with this matter.
Submissions
We expect 6-8 pages for full papers, excluding references and supplementary material.
Submit to: https://cmt3.research.microsoft.com/RLRP2023/
Workshop Chair
Zheqing Zhu, Meta AI / Stanford University, billzhu@fb.com
Organizing Committee
Zheqing Zhu, Meta AI / Stanford University, billzhu@fb.com; Yuandong Tian, Meta AI, yuandong@fb.com; Timothy Mann, Meta, kingtim@fb.com; Haque Ishfaq, McGill University, haque.ishfaq@mail.mcgill.ca; Zhiwei Qin, Lyft, zq2107@caa.columbia.edu; Doina Precup, McGill University / DeepMind, dprecup@cs.mcgill.ca; Shie Mannor, Technion / Nvidia, shie@ee.technion.ac.il
Additional Information
https://sites.google.com/view/rlready4prodworkshop/home
W28: Scientific Document Understanding
Scientific documents such as research papers, patents, books, or technical reports are some of the most valuable resources of human knowledge. At the AAAI-23 Workshop on Scientific Document Understanding (SDU@AAAI-23), we aim to gather insights into the recent advances and remaining challenges in scientific document understanding. Researchers from related fields are invited to submit papers on the recent advances, resources, tools, and upcoming challenges for SDU.
Topics
SDU is a workshop to gather insights into the recent advances and remaining challenges in scientific document understanding. As this topic is inherently a multi-disciplinary subject, researchers from artificial intelligence, natural language processing, information retrieval and extraction, image processing, data mining, statistics, bio-medicine, cybersecurity, finance, and other fields are invited to submit papers on the recent advances, resources, tools, and upcoming challenges for SDU. Topics of interest for this workshop include but are not limited to:
- Information extraction and information retrieval for scientific documents;
- Question answering and question generation for scholarly documents;
- Word sense disambiguation, acronym identification and expansion, and definition extraction;
- Document summarization, text mining, document topic classification, and machine reading comprehension for scientific documents;
- Graph analysis applications including knowledge graph construction and representation, graph reasoning, and query knowledge graphs;
- Multi-modal and multi-lingual scholarly text processing;
- Biomedical image processing, scientific image plagiarism detection, and data visualization;
- Code/Pseudo-code generation from text and image/diagram captioning;
- New language understanding resources such as new syntactic/semantic parsers, language models, or techniques to encode scholarly text;
- Survey or analysis papers on scientific document understanding and new tasks and challenges related to each scientific domain;
- Factuality, data verification, and anti-science detection;
Format
SDU is a one-day workshop. The full-day program will start with an opening remark, followed by research paper presentations in the morning. The post-lunch session includes invited talks and shared-task system paper presentations. We will end the workshop with a closing remark.
Attendance
SDU invites all researchers in the fields of artificial intelligence, natural language processing, and computer vision to attend the workshop to discuss the recent advancements and challenges for SDU. Authors with accepted papers are also required to attend to present their work. 50 attendees are expected for this workshop.
Additional Information
For information on Submission requirements, how to submit, and workshop organizers and committee
please refer to the workshop website: https://sites.google.com/view/sdu-aaai23
W29: Systems Neuroscience Approach to General Intelligence
AI technology and neuroscience have progressed to the point where it is again prudent to look to the brain as a model for AI. By examining current artificial neural networks, theoretical computer science, and systems neuroscience, this workshop will uncover gaps in our knowledge of the brain and of models of intelligence.
Bernard Baars modeled the brain's cognitive processes as a Global Workspace. This was elaborated in network neuroscience as the Global Neuronal Workspace, and in theoretical computer science as the Conscious Turing Machine (CTM) [1]. The CTM is a substrate-independent model of consciousness. AI researchers have proposed variations and extensions of the Global Workspace, connecting the CTM to Transformers [2] and using them to communicate among specialist modules [3].
Meanwhile, neuroscience has identified large-scale brain circuits that bear a striking resemblance to patterns found in contemporary AI architectures such as Transformers. This workshop will aim to map the Global Workspace and CTM to AI systems, using the brain's architecture as a guide. We hypothesize that this approach can achieve general intelligence and that high-resolution recordings from the brain can be used to validate its models.
The goal of this workshop is to bring together a multi-disciplinary group comprising AI researchers, systems neuroscientists, algorithmic information theorists, and physicists to identify gaps in this larger agenda and to determine what is currently known about what is needed to build thinking machines.
References:
[1] https://doi.org/10.1073/pnas.2115934119
[2] https://researcher.draco.res.ibm.com/researcher/view_group.php?id=11044
[3] https://arxiv.org/abs/2103.01197
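To make the architecture discussed above concrete, the following is a toy, non-authoritative Python/NumPy sketch of a shared global workspace in the spirit of [3]: specialist modules compete through attention to write into a small, limited-capacity workspace, and the workspace contents are then broadcast back to every module. The module count, dimensions, and random projection matrices are illustrative assumptions, not part of the workshop program or the cited work.

# Toy sketch (illustrative assumptions only) of a shared global workspace:
# specialist modules compete via attention to write into a few workspace
# slots, and the updated workspace is broadcast back to all modules.
import numpy as np

rng = np.random.default_rng(0)
d = 8                                  # dimensionality of states and slots
num_modules, num_slots = 4, 2          # e.g., 4 specialist modules, 2 slots

module_states = rng.normal(size=(num_modules, d))
workspace = rng.normal(size=(num_slots, d))   # stands in for prior contents
W_q, W_k, W_v = (rng.normal(size=(d, d)) for _ in range(3))  # stand-in attention weights

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

# Write phase: each slot attends over the modules, so only the most salient
# module signals enter the limited-capacity workspace.
queries = workspace @ W_q
keys, values = module_states @ W_k, module_states @ W_v
write_attn = softmax(queries @ keys.T / np.sqrt(d), axis=-1)   # (slots, modules)
workspace = write_attn @ values

# Broadcast phase: every module reads the updated workspace back into its state.
read_attn = softmax((module_states @ W_q) @ (workspace @ W_k).T / np.sqrt(d), axis=-1)
module_states = module_states + read_attn @ (workspace @ W_v)

print("workspace slots:", workspace.shape, "module states:", module_states.shape)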
Topics
- Plausible comparisons between AI architectures and brain regions
- Novel architectures for coupling AI components according to the brain’s functional neuroanatomy
- Micro-benchmarks to validate novel architectures’ capabilities
- Validation of synthetic AI brain models against brain recordings
Format
The workshop will be two days and will include invited talks, submitted talks by participants, and hybrid panel discussions. There will be three general discussion sessions. Within two months after the workshop, participants will submit follow-up articles, to be published in a forum to be determined.
Attendance
40 presenters/participants are expected.
Submissions
URL: https://docs.google.com/forms/d/e/1FAIpQLSf5xuuqUggzeNesfJ8Km-rpj__NLhhojQXspxAK446KtCbr6g/viewform
Workshop Chairs
Co-chairs: Mark Wegman (wegman@us.ibm.com) and James Kozloski (kozloski@us.ibm.com), T.J. Watson Research Center, Yorktown Heights, NY 10598.
Workshop Committee
Lenore Blum, EECS, Berkeley. lblum@cs.cmu.edu
Irina Rish, Computer Science and Operations Research, Université de Montréal, Mila – Quebec AI Institute. irina.rish@mila.quebec
Andrew Sharott, Oxford University Medical Research Council, Brain Network Dynamics Unit. andrew.sharott@bndu.ox.ac.uk
Additional Information
https://researcher.draco.res.ibm.com/researcher/view_group_subpage.php?id=11048
W30: Uncertainty Reasoning and Quantification in Decision Making (UDM’23)
Deep neural networks (DNNs) have received tremendous attention and achieved great success in various applications, such as image and video analysis, natural language processing, recommendation systems, and drug discovery. However, inherent uncertainties arising from different root causes remain serious hurdles for DNNs in finding robust and trustworthy solutions to real-world problems. Failing to account for such uncertainties may lead to unnecessary risk: a self-driving car can misclassify a human on the road, and a deep learning-based medical assistant may misdiagnose cancer as a benign tumor. Uncertainty has therefore attracted growing attention from academia and industry, especially for safety-critical decision-making problems such as autonomous driving and diagnosis systems. The resulting wave of research at the intersection of uncertainty reasoning and quantification in data mining and machine learning has also influenced other fields of science, including computer vision, natural language processing, reinforcement learning, and social science.
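As one deliberately simplified illustration of the kind of uncertainty quantification in scope, the sketch below trains a small bootstrap ensemble of classifiers on toy two-class data and scores inputs by the predictive entropy of the ensemble-averaged class probabilities; the dataset, model choice, and test points are illustrative assumptions, not a workshop baseline.

# Minimal sketch (assumed toy setup) of ensemble-based uncertainty
# quantification for classification using predictive entropy.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Toy in-distribution data: two Gaussian blobs, one per class.
n = 200
X = np.vstack([rng.normal(loc=[-2.0, 0.0], scale=0.5, size=(n, 2)),
               rng.normal(loc=[+2.0, 0.0], scale=0.5, size=(n, 2))])
y = np.array([0] * n + [1] * n)

# Train an ensemble of small MLPs on bootstrap resamples of the data.
ensemble = []
for seed in range(5):
    idx = rng.integers(0, len(X), size=len(X))
    member = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=seed)
    member.fit(X[idx], y[idx])
    ensemble.append(member)

def predictive_entropy(points):
    """Entropy of the ensemble-averaged class distribution at the given points."""
    probs = np.mean([m.predict_proba(points) for m in ensemble], axis=0)
    return -np.sum(probs * np.log(probs + 1e-12), axis=1)

# Inputs near the training data tend to get low entropy; inputs near the
# decision boundary or far from the data tend to get higher entropy.
print("entropy near class 0:", predictive_entropy(np.array([[-2.0, 0.0]])))
print("entropy off the data manifold:", predictive_entropy(np.array([[0.0, 6.0]])))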
Topics
We encourage submissions at various stages of progress, such as new results, visions, techniques, innovative application papers, and progress reports, under topics that include, but are not limited to, the following broad categories:
- Uncertainty quantification in classification and regression
- Out-of-distribution detection
- Conditional reasoning with uncertainty
- Quantification of multidimensional uncertainty
- Sequential uncertainty estimation
- Interpretation of uncertainty
- Uncertainty-aware deep reinforcement learning
- Decision-making with uncertainty
With a particular focus on, but not limited to, the following application domains:
- Application of uncertainty methods in large-scale data mining
- Computer vision (uncertainty in face recognition, object relation)
- Natural language processing (language uncertainty, sentence uncertainty)
- Reinforcement learning (uncertainty-aware offline reinforcement learning, exploration vs. exploitation)
Important Dates
The proposed important dates for the workshop are as follows. All deadlines are Anywhere on Earth (AoE).
- Paper submission deadline: November 2nd, 2022.
- Paper review begins: November 7th, 2022.
- Paper review due: November 16th, 2022.
- Notification of decision: November 18th, 2022.
- Camera-ready due: November 25th, 2022.
Submissions and Tentative CFP Guidelines:
- Submissions are limited to a total of 5 pages, including all content and references, and must be in PDF format and use the AAAI two-column template. Acknowledgements should be omitted from papers submitted for review. See the AAAI-23 author kit for details at https://aaai.org/Conferences/AAAI-23/submission-guidelines/. Papers must be in trouble-free, high-resolution PDF format, formatted for US Letter (8.5" x 11") paper, using Type 1 or TrueType fonts. AAAI submissions are anonymous and must conform to the instructions for double-blind review: authors must remove all author and affiliation information from their submission for review and may replace it with other information, such as paper number and keywords.
- Submitted papers will be assessed based on their novelty, technical quality, potential impact, and clarity of writing. For papers that rely heavily on empirical evaluations, the experimental methods and results should be clear, well-executed, and repeatable. Authors are strongly encouraged to make data and code publicly available whenever possible. The accepted papers will be posted on the workshop website but will not be included in the AAAI proceedings.
Submission site: https://cmt3.research.microsoft.com/UDM2023/Submission/Index
Workshop Chairs:
- Xujiang Zhao (NEC Laboratories America, xuzhao@nec-labs.com)
- Chen Zhao (Kitware Inc., chen.zhao@kitware.com)
- Feng Chen (The University of Texas at Dallas, feng.chen@utdallas.edu)
- Jin-Hee Cho (Virginia Tech, jicho@vt.edu)
- Haifeng Chen (NEC Laboratories America, haifeng@nec-labs.com)
Additional Information
https://charliezhaoyinpeng.github.io/UDM-AAAI23/
W31: User-Centric Artificial Intelligence for Assistance in At-Home Tasks
Recent advances in AI and ML have enabled these technologies to enhance and improve our daily lives; however, these solutions are often based on simplified formulations and abstracted datasets, which makes them challenging to apply in complex and personalized household domains. Furthermore, any household solution requires not only expertise in algorithmic AI but also expertise in interaction, socio-technical issues, and the problem space. Because these solutions touch on so many different fields, the research community is spread across different conferences. The workshop is designed to bring together interested AI experts who, while coming from different subfields, share the vision of using AI technologies to solve user problems at home. Participants will have the opportunity to share their experience and progress in using AI technologies to assist and empower users at home, as well as to learn from and engage with our expert speakers and panelists. More information and submission details can be found on our website: https://ai4athome.github.io/
Topics
We solicit contributions from topics including but not limited to:
- Natural Language Processing for Household Tasks
- AI Solutions for Accessibility
- Social and Physical Assistance by Embodied Intelligence/Robots at Home
- Assistance through Smart Home Technologies
- Explainable AI (XAI) for Non-Expert Users
- Semantic Knowledge in Household Tasks
- Offline Learning for Household Assistive AI
- Continual Learning for Household Tasks
- Dataset Acquisition
- Privacy, Ethics, and Societal Impact of AI for Household Tasks
- Trustworthy AI and Calibrating Trust for Household Tasks
Format
The workshop will be a full-day hybrid workshop with a mix of keynotes, contribution lightning talks, poster sessions, and focused discussion.
Attendance
We welcome members of the community who are interested in this area.
Submissions
We welcome both short (2-4 page) and long (6-8 page) paper contributions related to our stated vision. The contributions will be non-archival but will be hosted on our workshop website. More details and submission information are listed on our workshop website.
Workshop Chairs
Dr. Xiang Zhi Tan (Georgia Institute of Technology), Prof. Sonia Chernova (Georgia Institute of Technology), Prof. Jean Oh (Carnegie Mellon University), Russell Perkins (University of Massachusetts Lowell), Prof. Paul Robinette (University of Massachusetts Lowell), Peter Schaldenbrand (Carnegie Mellon University), Tanmay Shankar (Carnegie Mellon University), Prof. Diyi Yang (Stanford University)
For any questions or enquiries, please contact Dr. Xiang Zhi Tan (zhi.tan@gatech.edu).
Additional Information
https://ai4athome.github.io/
W32: When Machine Learning Meets Dynamical Systems: Theory and Applications
The recent wave of using machine learning to analyze and manipulate real-world systems has inspired many research topics at the interface of machine learning and dynamical systems. However, real-world applications are diverse and complex, with vulnerabilities such as simulation divergence or violation of prior knowledge. As ML-based dynamical models are deployed in real-world systems, a series of challenges arises, including scalability, stability, and trustworthiness.
Through this workshop, we aim to provide an informal and cutting-edge platform for research and discussion on the co-development of machine learning models and dynamical systems. We welcome all contributions related to ML-based applications and theory for dynamical systems, as well as solutions to ML problems from a dynamical-systems perspective.
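As one minimal illustration of the ML-based modeling of dynamical systems listed in the topics below, the sketch that follows simulates a damped linear oscillator, fits a one-step linear model by least squares (a DMD-style estimate), and rolls the learned model forward; the system, step size, and horizon are illustrative assumptions rather than a workshop baseline.

# Illustrative sketch (assumed toy system) of data-driven modeling of a
# dynamical system: fit a one-step linear map x_{t+1} ~ A x_t by least squares.
import numpy as np

dt, steps = 0.01, 2000
A_true = np.array([[1.0,  dt],
                   [-dt,  1.0 - 0.05 * dt]])   # discretized damped oscillator

# Simulate a trajectory from a fixed initial condition.
X = np.zeros((steps, 2))
X[0] = [1.0, 0.0]
for t in range(steps - 1):
    X[t + 1] = A_true @ X[t]

# Least-squares fit of the one-step map from snapshot pairs (DMD-style).
X_now, X_next = X[:-1], X[1:]
B, *_ = np.linalg.lstsq(X_now, X_next, rcond=None)   # solves X_now @ B = X_next
A_hat = B.T                                          # so x_{t+1} ~ A_hat @ x_t

# Roll the learned model forward and compare against the true trajectory.
x = X[0].copy()
for t in range(steps - 1):
    x = A_hat @ x
print("final-state rollout error:", np.linalg.norm(x - X[-1]))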
Topics
We call for papers on research relevant to dynamical systems and machine learning, including (but not limited to):
- ML-based Modeling of Dynamical Systems;
- Practical Applications of Data-driven Modeling;
- Special ML structures for Learning Dynamical Systems;
- Trustworthiness of ML-based Dynamical Systems;
- Temporal Feature Analysis for Time Series Data;
- Dynamical Systems in Model-based Reinforcement Learning;
- Dynamical System Perspectives for ML Problems;
- Optimization Algorithms for Learning Dynamical Systems;
Format
The workshop will last one day. It will consist of the following:
- invited talks with discussion;
- oral presentations with discussion;
- a poster session to conclude.
We have three invited speakers, and authors of the best papers will give oral presentations.
Submissions
We welcome papers in AAAI style (3 to 6 pages), excluding references and supplementary materials. Submissions will be peer-reviewed double-blind via OpenReview.
Submission site: https://openreview.net/group?id=AAAI.org/2023/Workshop/MLmDS
We will open the submission site in October. Authors may use as many pages of appendices as they wish, but reviewers are not required to read them. Authors have the right to withdraw papers from consideration at any time.
Note that accepted papers are considered workshop papers and may be submitted or published elsewhere. Papers published at this workshop are non-archival but will be stored permanently on the workshop website. Authors of accepted papers will be invited to participate in the workshop day.
Organizing Committee
Lam M. Nguyen (lamnguyen.mltd@ibm.com)
Trang H. Tran (htt27@cornell.edu)
Wang Zhang (wzhang16@mit.edu)
Subhro Das (subhro.Das@ibm.com)
Tsui-Wei (Lily) Weng (lweng@ucsd.edu)
Additional Information
Supplemental workshop site: https://machinelearning-dynamic.github.io
Workshop Email: machinelearning.dynamic@gmail.com