AAAI-23 Keywords


Submission Groups

Keywords and Subtopics

Cognitive Modeling & Cognitive Systems (CMS)

  • CMS: Adaptive Behavior
  • CMS: Affective Computing
  • CMS: Agent & Cognitive Architectures
  • CMS: Analogical and Conceptual Reasoning
  • CMS: Bayesian Learning
  • CMS: Brain modeling
  • CMS: Computational Creativity
  • CMS: Introspection and Meta-Cognition
  • CMS: Memory Storage And Retrieval
  • CMS: Simulating Humans
  • CMS: Social Cognition and Interaction
  • CMS: Structural Learning and Knowledge Capture
  • CMS: Other Foundations of Cognitive Modeling & Systems
  • CMS: Applications

Computer Vision (CV)

  • CV: 3D Computer Vision
  • CV: Adversarial Attacks & Robustness
  • CV: Bias, Fairness & Privacy
  • CV: Biometrics, Face, Gesture & Pose
  • CV: Computational Photography, Image & Video Synthesis
  • CV: Image and Video Retrieval
  • CV: Interpretability and Transparency
  • CV: Language and Vision
  • CV: Learning & Optimization for CV
  • CV: Low Level & Physics-based Vision
  • CV: Medical and Biological Imaging
  • CV: Motion & Tracking
  • CV: Multi-modal Vision
  • CV: Object Detection & Categorization
  • CV: Representation Learning for Vision
  • CV: Scene Analysis & Understanding
  • CV: Segmentation
  • CV: Video Understanding & Activity Analysis
  • CV: Vision for Robotics & Autonomous Driving
  • CV: Visual Reasoning & Symbolic Representations
  • CV: Other Foundations of Computer Vision
  • CV: Applications

Constraint Satisfaction and Optimization (CSO)

  • CSO: Constraint Learning and Acquisition
  • CSO: Constraint Optimization
  • CSO: Constraint Programming
  • CSO: Constraint Satisfaction
  • CSO: Distributed CSP/Optimization
  • CSO: Mixed Discrete/Continuous Optimization
  • CSO: Satisfiability
  • CSO: Search
  • CSO: Solvers and Tools
  • CSO: Other Foundations of Constraint Satisfaction & Optimization
  • CSO: Applications

Data Mining & Knowledge Management (DMKM)

  • DMKM: Anomaly/Outlier Detection
  • DMKM: Data Compression
  • DMKM: Data Stream Mining
  • DMKM: Data Visualization & Summarization
  • DMKM: Graph Mining, Social Network Analysis & Community Mining
  • DMKM: Linked Open Data, Knowledge Graphs & KB Completion
  • DMKM: Mining of Spatial, Temporal or Spatio-Temporal Data
  • DMKM: Mining of Visual, Multimedia & Multimodal Data
  • DMKM: Recommender Systems
  • DMKM: Rule Mining & Pattern Mining
  • DMKM: Scalability, Parallel & Distributed Systems
  • DMKM: Semantic Web
  • DMKM: Web Personalization & User Modeling
  • DMKM: Web Search & Information Retrieval
  • DMKM: Web-based QA
  • DMKM: Other Foundations of Data Mining & Knowledge Management
  • DMKM: Applications

Game Theory and Economic Paradigms (GTEP)

  • GTEP: Adversarial Learning
  • GTEP: Auctions and Market-Based Systems
  • GTEP: Behavioral Game Theory
  • GTEP: Computational Simulations
  • GTEP: Cooperative Game Theory
  • GTEP: Coordination and Collaboration
  • GTEP: Equilibrium
  • GTEP: Fair Division
  • GTEP: Game Theory
  • GTEP: Imperfect Information
  • GTEP: Mechanism Design
  • GTEP: Negotiation and Contract-Based Systems
  • GTEP: Social Choice / Voting
  • GTEP: Other Foundations of Game Theory & Economic Paradigms
  • GTEP: Applications

Humans and AI (HAI)

  • HAI: Brain-Sensing and Analysis
  • HAI: Communication Protocols
  • HAI: Crowdsourcing
  • HAI: Emotional Intelligence
  • HAI: Games, Virtual Humans, and Autonomous Characters
  • HAI: Human Computation
  • HAI: Human-Agent Negotiation
  • HAI: Human-Aware Planning and Behavior Prediction
  • HAI: Human-Computer Interaction
  • HAI: Human-in-the-loop Machine Learning
  • HAI: Human-Machine Teams
  • HAI: Language Acquisition
  • HAI: Learning Human Values and Preferences
  • HAI: Procedural Content Generation & Storytelling
  • HAI: Other Foundations of Humans & AI
  • HAI: Applications

Intelligent Robotics (ROB)

  • ROB: Behavior Learning & Control
  • ROB: Cognitive Robotics
  • ROB: Human-Robot Interaction
  • ROB: Learning & Optimization for ROB
  • ROB: Localization, Mapping, and Navigation
  • ROB: Manipulation
  • ROB: Motion and Path Planning
  • ROB: Multi-Robot Systems
  • ROB: Multimodal Perception & Sensor Fusion
  • ROB: State Estimation
  • ROB: Other Foundations of Intelligent Robots
  • ROB: Applications

Knowledge Representation and Reasoning (KRR)

  • KRR: Action, Change, and Causality
  • KRR: Argumentation
  • KRR: Automated Reasoning and Theorem Proving
  • KRR: Belief Change
  • KRR: Case-Based Reasoning
  • KRR: Common-Sense Reasoning
  • KRR: Computational Complexity of Reasoning
  • KRR: Description Logics
  • KRR: Diagnosis and Abductive Reasoning
  • KRR: Geometric, Spatial, and Temporal Reasoning
  • KRR: Knowledge Acquisition
  • KRR: Knowledge Engineering
  • KRR: Knowledge Representation Languages
  • KRR: Logic Programming
  • KRR: Nonmonotonic Reasoning
  • KRR: Ontologies and Semantic Web
  • KRR: Preferences
  • KRR: Qualitative Reasoning
  • KRR: Reasoning with Beliefs
  • KRR: Other Foundations of Knowledge Representation & Reasoning
  • KRR: Applications

Machine Learning (ML)

  • ML: Active Learning
  • ML: Adversarial Learning & Robustness
  • ML: Auto ML and Hyperparameter Tuning
  • ML: Bayesian Learning
  • ML: Bias and Fairness
  • ML: Bio-inspired Learning
  • ML: Calibration & Uncertainty Quantification
  • ML: Causal Learning
  • ML: Classification and Regression
  • ML: Clustering
  • ML: Deep Generative Models & Autoencoders
  • ML: Deep Learning Theory
  • ML: Deep Neural Architectures
  • ML: Deep Neural Network Algorithms
  • ML: Dimensionality Reduction/Feature Selection
  • ML: Distributed Machine Learning & Federated Learning
  • ML: Ensemble Methods
  • ML: Evaluation and Analysis (Machine Learning)
  • ML: Evolutionary Learning
  • ML: Graph-based Machine Learning
  • ML: Imitation Learning & Inverse Reinforcement Learning
  • ML: Kernel Methods
  • ML: Learning on the Edge & Model Compression
  • ML: Learning Preferences or Rankings
  • ML: Learning Theory
  • ML: Lifelong and Continual Learning
  • ML: Matrix & Tensor Methods
  • ML: Meta Learning
  • ML: Multi-class/Multi-label Learning & Extreme Classification
  • ML: Multi-instance/Multi-view Learning
  • ML: Multimodal Learning
  • ML: Online Learning & Bandits
  • ML: Optimization
  • ML: Privacy-Aware ML
  • ML: Probabilistic Methods
  • ML: Quantum Machine Learning
  • ML: Reinforcement Learning Algorithms
  • ML: Reinforcement Learning Theory
  • ML: Relational Learning
  • ML: Representation Learning
  • ML: Scalability of ML Systems
  • ML: Semi-Supervised Learning
  • ML: Time-Series/Data Streams
  • ML: Transfer, Domain Adaptation, Multi-Task Learning
  • ML: Transparent, Interpretable, Explainable ML
  • ML: Unsupervised & Self-Supervised Learning
  • ML: Other Foundations of Machine Learning
  • ML: Applications

Multiagent Systems (MAS)

  • MAS: Adversarial Agents
  • MAS: Agent Communication
  • MAS: Agent-Based Simulation and Emergent Behavior
  • MAS: Agent/AI Theories and Architectures
  • MAS: Agreement, Argumentation & Negotiation
  • MAS: Coordination and Collaboration
  • MAS: Distributed Problem Solving
  • MAS: Mechanism Design
  • MAS: Modeling other Agents
  • MAS: Multiagent Learning
  • MAS: Multiagent Planning
  • MAS: Multiagent Systems under Uncertainty
  • MAS: Other Foundations of Multiagent Systems
  • MAS: Applications

Philosophy and Ethics of AI (PEAI)

  • PEAI: Accountability
  • PEAI: AI and Epistemology
  • PEAI: AI and Jobs/Labor
  • PEAI: AI and Law, Justice, Regulation & Governance
  • PEAI: Artificial General Intelligence
  • PEAI: Bias, Fairness & Equity
  • PEAI: Consciousness and Philosophy of Mind
  • PEAI: Interpretability and Explainability
  • PEAI: Morality and Value-based AI
  • PEAI: Philosophical Foundations of AI
  • PEAI: Privacy and Security
  • PEAI: Robot Rights
  • PEAI: Safety, Robustness & Trustworthiness
  • PEAI: Societal Impact of AI
  • PEAI: Other Foundations of Philosophy and Ethics of AI
  • PEAI: Applications

Planning, Routing, and Scheduling (PRS)

  • PRS: Activity and Plan Recognition
  • PRS: Control of High-Dimensional Systems
  • PRS: Deterministic Planning
  • PRS: Mixed Discrete/Continuous Planning
  • PRS: Model-Based Reasoning
  • PRS: Optimization of Spatio-temporal Systems
  • PRS: Plan Execution and Monitoring
  • PRS: Planning under Uncertainty
  • PRS: Planning with Markov Models (MDPs, POMDPs)
  • PRS: Planning/Scheduling and Learning
  • PRS: Replanning and Plan Repair
  • PRS: Routing
  • PRS: Scheduling
  • PRS: Scheduling under Uncertainty
  • PRS: Temporal Planning
  • PRS: Other Foundations of Planning, Routing & Scheduling
  • PRS: Applications

Reasoning under Uncertainty (RU)

  • RU: Bayesian Networks
  • RU: Causality
  • RU: Decision/Utility Theory
  • RU: Graphical Model
  • RU: Probabilistic Programming
  • RU: Relational Probabilistic Models
  • RU: Sequential Decision Making
  • RU: Stochastic Models & Probabilistic Inference
  • RU: Stochastic Optimization
  • RU: Uncertainty Representations
  • RU: Other Foundations of Reasoning under Uncertainty
  • RU: Applications

Search and Optimization (SO)

  • SO: Adversarial Search
  • SO: Algorithm Configuration
  • SO: Algorithm Portfolios
  • SO: Distributed Search
  • SO: Evaluation and Analysis
  • SO: Evolutionary Computation
  • SO: Heuristic Search
  • SO: Local Search
  • SO: Metareasoning and Metaheuristics
  • SO: Mixed Discrete/Continuous Search
  • SO: Runtime Modeling
  • SO: Sampling/Simulation-based Search
  • SO: Other Foundations of Search & Optimization
  • SO: Applications

Speech & Natural Language Processing (SNLP)

  • SNLP: Adversarial Attacks & Robustness
  • SNLP: Bias, Fairness, Transparency & Privacy
  • SNLP: Conversational AI/Dialogue Systems
  • SNLP: Discourse, Pragmatics & Argument Mining
  • SNLP: Generation
  • SNLP: Information Extraction
  • SNLP: Interpretability & Analysis of NLP Models
  • SNLP: Language Grounding
  • SNLP: Language Models
  • SNLP: Learning & Optimization for SNLP
  • SNLP: Lexical & Frame Semantics, Semantic Parsing
  • SNLP: Machine Translation & Multilinguality
  • SNLP: Ontology Induction from Text
  • SNLP: Phonology, Morphology, Word Segmentation
  • SNLP: Psycholinguistics and Language Learning
  • SNLP: Question Answering
  • SNLP: Sentence-level semantics and Textual Inference
  • SNLP: Sentiment Analysis and Stylistic Analysis
  • SNLP: Speech and Multimodality
  • SNLP: Summarization
  • SNLP: Syntax — Tagging, Chunking & Parsing
  • SNLP: Text Classification
  • SNLP: Text Mining
  • SNLP: Other Foundations of Speech & Natural Language Processing
  • SNLP: Applications

Domain(s) of Application (APP)

  • APP: Accessibility
  • APP: Art/Music/Creativity
  • APP: Bioinformatics
  • APP: Biometrics
  • APP: Building Design & Architecture
  • APP: Business/Marketing/Advertising/E-commerce
  • APP: Cloud
  • APP: Communication
  • APP: Design
  • APP: Economic/Financial
  • APP: Education
  • APP: Energy, Environment & Sustainability
  • APP: Entertainment
  • APP: Games
  • APP: Healthcare, Medicine & Wellness
  • APP: Humanities & Computational Social Science
  • APP: Internet of Things, Sensor Networks & Smart Cities
  • APP: Misinformation & Fake News
  • APP: Mobility, Driving & Flight
  • APP: Natural Sciences
  • APP: Security
  • APP: Social Networks
  • APP: Software Engineering
  • APP: Transportation
  • APP: Web
  • APP: Other Applications

Choosing the best keyword(s) in the AAAI-23 Main Track

AAAI is a broad-based AI conference that invites papers from all subcommunities of the field. It encourages papers that combine different areas of research (e.g., vision and language; machine learning and planning), and it also welcomes methodological papers motivated by diverse application areas such as healthcare or transportation.

In AAAI-23, authors are asked to choose one primary keyword (mandatory) and, optionally, up to five secondary keywords. With 300 keywords available, picking the best ones for a paper can be confusing. This brief guide describes some high-level principles for making that choice.

The main purpose of keywords is to enable finding the most appropriate reviewers for each submission, which is what this guide focuses on. Note, however, that there are a variety of other signals beyond keywords to match reviewers and papers, so not everything hinges on this choice.

In the end, choosing the best keywords is an art, and poor choices can increase the chance of suboptimal reviews. This guide aims to help authors understand the reasoning process so that papers are matched with the best-qualified reviewers.

Choosing the primary keyword

The main principle for choosing a paper’s primary keyword is to identify the subarea to which the paper makes its main contribution. A reviewer who is an expert in that subarea should then be best positioned to evaluate the paper.

Most of the time, it is best to start with the top-level area (e.g., computer vision, knowledge representation) that describes the paper’s methodological focus and then pick the best-fitting keyword within that area. However, a sizable number of papers describe work at the intersection of different fields. To give some examples, consider papers:

  • developing general machine learning methods but primarily motivated by problems in NLP
  • studying bias in machine learning models applied to healthcare
  • designing a novel elicitation mechanism for crowdsourcing
  • combining different methodological subareas of AI in an integrated way, e.g., using machine learning to solve satisfiability problems.

In all such settings, it becomes trickier to choose the best primary keyword. Here are some rules of thumb:

(1) Focus on where the primary contribution lies, and which community will benefit the most from reading the paper. For example, if an ML algorithm is demonstrated on both computer vision and NLP applications, it is best kept under ML (as it is a general advance, with NLP and vision serving only as applications). If, however, the paper is heavily motivated by details of a particular class of ML problems (e.g., proposing an algorithm that leverages specific structure of images, or language) then picking a keyword that focuses on this class of problems (vision; NLP) is more appropriate.

(2) For papers with specific applications (e.g., healthcare or transportation), the application is typically NOT the primary keyword. Usually, an AAAI main track paper makes methodological advances that lead to impact on an application, so choose a primary keyword based on the methodology. There is one exception to this rule: if the impact on the application area is much more impressive than the methodological innovation, your paper may have the best chance with the application area as the primary keyword. That said, you should carefully consider whether such a paper is better submitted to the track on AI for Social Impact or to IAAI; note that each of these evaluates papers according to different criteria than the AAAI main track.

(3) For papers genuinely at the intersection of different fields, carefully scan all keywords. It is possible that a joint keyword already exists in the list. For example, an ML paper studying bias applied to healthcare may naturally use the keyword “ML: Bias and Fairness” as the primary area (since healthcare is the application component).

(4) However, perhaps no keyword adequately captures the intersection of AI fields to which the paper makes its primary contribution. In such cases, a judgment call is necessary about which community is likely to best appreciate the work. For example, if a paper applies ideas from machine learning within a satisfiability algorithm, its most fundamental impact will likely be on the design of satisfiability solvers rather than on the design of new learning algorithms; hence, “CSO: Satisfiability” would be a good choice for the primary keyword. If, however, the paper solves the satisfiability problem using a deep neural network with significant innovations in machine learning, then a machine learning keyword will be a better fit.

Choosing secondary keywords

When choosing secondary keywords, it is helpful to consider two questions. First, all things being equal, what beyond the primary keyword should the reviewers be expert in? Second, if no single reviewer is likely to tick all the boxes, which experts outside the primary subarea would add important perspectives to the paper’s review? For example, a paper on ML fairness applied to healthcare should choose “APP: Healthcare, Medicine & Wellness” as a secondary keyword, and a paper using machine learning for satisfiability should choose the other area as the secondary keyword.

As an extreme example, if mixed discrete-continuous search is used to solve a routing problem that arises when considering privacy issues in a navigation-based game, with the main contribution being to the routing problem (i.e., “PRS: Routing” is the primary keyword), then the paper may benefit from having multiple secondary keywords: “SO: Mixed Discrete/Continuous Search”, “PEAI: Privacy and Security”, and “APP: Games”.

Every attempt is made to find reviewers that cover all specified keywords. In some cases this will be hard: such reviewers may not exist, and each reviewer can review only a limited number of papers. On the other hand, be careful what you wish for: adding secondary keywords can be a double-edged sword. If the paper’s contributions look relatively simple to an expert in a secondary keyword, that expert may give a poor rating, perhaps overlooking the paper’s value in another domain. In such situations, it is better to omit secondary keywords, as long as experts in the primary keyword, with broad (but not deep) knowledge of other fields of AI, will understand the paper.
