News

  • 20 Apr 2026: The program for the workshop is now available.
  • 26 Jan 2026: The ALA 2026 submission deadline has been further extended to 26 Feb 2026, 23:59 AoE.
  • 12 Jan 2026: Added the OpenReview link to the submission details!
  • 10 Dec 2025: ALA 2026 website goes live!

ALA 2026 - Workshop at AAMAS 2026

Adaptive and Learning Agents (ALA) brings together researchers working on learning, adaptation, and autonomous behaviour in single- and multi-agent systems. The workshop welcomes contributions from across computer science (including reinforcement learning, agent architectures, evolutionary computation, planning, and game theory) as well as from related fields such as cognitive science, biology, economics, and the social sciences.

ALA aims to foster collaboration, highlight recent advances, and provide a representative overview of current research on adaptive and learning agents. It serves as an inclusive forum for discussing both theoretical foundations and practical applications, spanning topics such as learning and adaptation in dynamic or open-ended environments, coordination and communication among multiple agents, incentive and mechanism design, and the emergence of collective behaviour in complex systems.

The workshop places particular emphasis on emerging learning paradigms and on methods that enable agents to operate reliably in large-scale, uncertain, or evolving environments. We encourage work that extends established techniques or introduces new frameworks to address the challenges of real-world adaptive and multi-agent systems. Topics of interest include, but are not limited to:

  • Reinforcement learning (single- and multi-agent)
  • Representation learning for single- and multi-agent systems
  • Adaptation in dynamic environments
  • Foundation models for adaptive (multi-)agent systems
  • Multi-objective optimisation in single- and multi-agent systems
  • Model-based RL and planning with learned world models (single- and multi-agent)
  • Batch and offline (multi-agent) reinforcement learning
  • Integrating learning with symbolic or game-theoretic reasoning
  • Game-theoretic analysis of adaptive multi-agent systems
  • Neurosymbolic and logical reasoning for (multi-agent) decision-making
  • Safety, robustness, and trustworthy (multi-agent) reinforcement learning
  • Decentralized, federated, and communication-aware multi-agent learning
  • Evolutionary and open-ended learning in multi-agent populations
  • Co-evolution of agents in a multi-agent setting
  • Cooperative exploration and learning to cooperate and collaborate
  • Learning and modelling trust, reputation, and social norms in human–AI and multi-agent systems
  • Emergent behaviour in adaptive multi-agent systems
  • Multi-agent reinforcement learning and control for cyber-physical systems and robotics
  • Self-organizing, swarm, and bio-inspired adaptive multi-agent systems
  • Human-in-the-loop learning systems
  • Applications of adaptive and learning agents and multi-agent systems to real-world complex systems

Important Dates (23:59 AoE)

  • Submission Deadline: 26 February 2026 (extended from 4 February 2026)
  • Notification of acceptance: 26 March 2026 (extended from 20 March 2026)
  • Camera-ready copies: 15 April 2026
  • Workshop: 25 - 26 May 2026

Submission Details

Papers can be submitted through OpenReview.

We invite submissions of original work, up to 8 pages in length (excluding references) in the ACM proceedings format (i.e., following the AAMAS formatting instructions). This includes work that has been accepted only as a poster or extended abstract at AAMAS 2026, but not as an oral presentation. Additionally, we welcome submissions of preliminary results, i.e., work-in-progress, as well as visionary outlook papers that lay out directions for future research in a specific area, both up to 6 pages in length; shorter papers are very much welcome and will not be judged differently. Finally, we also accept recently published journal papers in the form of a 2-page abstract.

Furthermore, for submissions that were rejected or accepted as extended abstracts at AAMAS, authors must also append the received reviews and a pdfdiff.

All submissions will be peer-reviewed (double-blind). Accepted work will be allocated time for a poster and possibly an oral presentation during the workshop. In line with AAMAS, the workshop will be held in person.

When preparing your submission for ALA 2026, please be sure to remove the AAMAS copyright block, citation information, and running headers. Please replace the AAMAS copyright block in the main.tex file from the AAMAS template with the following:

    \setcopyright{none}
    \acmConference[ALA '26]%
    {Proc.\@ of the Adaptive and Learning Agents Workshop (ALA 2026)}%
    {May 25 -- 26, 2026}%
    {Paphos, Cyprus, https://alaworkshop2026.github.io/}%
    {Aydeniz, Delgrange, Mohammedalamen, Yang (eds.)}%
    \copyrightyear{2026}
    \acmYear{2026}
    \acmDOI{}
    \acmPrice{}
    \acmISBN{}
    \settopmatter{printacmref=false}
                            
For the camera-ready paper, make sure to submit the deanonymized version with the replaced copyright block above.

When submitting your paper, you will have the option to provide supplementary material (e.g., code, data, videos). If your submission contains an appendix, we encourage you to include it at the end of the main paper, after the references, so that reviewers can access it easily.

Program

All times are presented in Paphos local time (EEST, UTC+3).

Monday May 25

08:45-09:15 Welcome & Opening Remarks
09:15-10:15 Session I
Invited Talk: Stefano Albrecht
10:15-11:00 Coffee Break
11:00-12:30 Session II
11:00-11:15 Prabhat Nagarajan, Brett Daley, Martha White, Marlos C. Machado
Accelerating Q-learning through Efficient Value-sharing across Actions
11:15-11:30 Ruilan Wang, Francisco Aristi Reina, Katerina Papadaki
Virtual Double Oracle: Principled Reinitialisation for Incremental Nash Equilibrium Computation
11:30-11:45 Panos Aronis, Mehdi Dastani, Roxana Rădulescu, Giovanni Varricchione
Leveraging Reward Machines for Efficient Multi-Objective Reinforcement Learning
11:45-12:30 Short Talks, 5 minutes each in order
  • Christoph Scherer, Wolfgang Hönig
    ANN-CMCGS: Generalizing Continuous Monte-Carlo Graph Search with Approximate Nearest Neighbors
  • Rahul Narava, Siddharth Verma, Ojas Jain, Shashi Shekhar Jha, Mayank Shekhar Jha
    CAPSULE: Control-Theoretic Action Perturbations for Safe Uncertainty-Aware Reinforcement Learning
  • Changxi Zhu, Mehdi Dastani, Shihan Wang
    Mitigating Variance Caused by Communication in Multi-agent Deep Reinforcement Learning
  • Alexandre S. Pires, Román Chiva Gil, Fernando P. Santos
    Learning social norms of cooperation under limited observability
  • Yarin Benyamin, Argaman Mordoch, Shahaf S. Shperberg, Roni Stern
    RAMP: Hybrid DRL for Online Learning of Numeric Action Models
  • Nicolas Rowies, Florent Delgrange, Ann Nowé, Diederik M. Roijers
    Continuous Training Discrete Execution
  • Deep Kumar Ganguly, Chandradithya S. Jonnalagadda, Pratham Chintamani, Adithya Ananth
    The Price of Paranoia: Robust Risk-Sensitive Cooperation in Non-Stationary Multi-Agent Reinforcement Learning
  • Raghav Thakar, Gaurav Dixit, Kagan Tumer
    Post Hoc Extraction of Pareto Fronts for Continuous Control
12:30-14:00 Lunch Break
14:00-15:30 Session III & Poster Session
14:00-14:15 Raghav Thakar, Kagan Tumer
Cultivating Divergent Multi-Objective Expertise in Multiagent Systems via Expert Ensembles
14:15-14:30 Enrico Marchesini, Benjamin Donnot, Constance Crozier, Ian Dytham, Christian Merz, Lars Schewe, Nico Westerbeck, Cathy Wu, Antoine Marot, Priya L. Donti
RL2Grid: Benchmarking Reinforcement Learning in Power Grid Operations
14:30-15:30 Poster Session
Papers presented in today's sessions, together with additional poster-only papers:
  • Ben Opperman, Eduardo Alonso, Esther Mondragon
    Groupoid-Based Internal State Representations for Reinforcement Learning with Local Symmetries
  • Yuma Fujimoto, Kaito Ariu, Kenshi Abe
    Payoff and Revenue Inequivalence in Repeated Auction with Time-Varying Number of Bidders
  • Maksim Anisimov, Francesco Belardinelli, Matthew Robert Wicker
    SafeAdapt: Provably Safe Policy Updates in Deep Reinforcement Learning
  • Xinyang Chen, Francesco Belardinelli, Alex Goodall
    Safe and Generalizable Reinforcement Learning via Logical Policy Composition
  • Kieran A. Murphy
    InfoChess: A Game of Adversarial Inference and a Laboratory for Quantifiable Information Control
  • Francisco Aristi Reina, Steffen Issleib, Bernhard von Stengel
    Machine Learning in a Social Deduction Game
  • Simone Murari, Celeste Veronese, Alessandro Farinelli
    Heuristic-Guided Distributional Reinforcement Learning
  • Benteng Ma, David Watson, Matteo Leonetti
    Shaping Dynamics Models via Future State Sampling for Improving Model-Based Reinforcement Learning
15:30-16:15 Coffee Break
16:15-17:45 Session IV
16:15-17:15 Invited Talk: Frans Oliehoek
17:15-17:45 Short Talks, 5 minutes each in order
  • Eneko Sabaté Iturgaiz, Victor Gimenez-Abalos, Adrián Tormos, Oriol Miro-Lopez-Feliu, Sergio Alvarez-Napagao
    Boldly Propose, Carefully Verify: LLMs in Agents Should Be Used Only For Abduction
  • Christos Charalambous
    Social norm dynamics in a behavioral epidemic model
  • Victor Gallego
    Discovering Agentic Safety Specifications from 1-Bit Danger Signals
  • David Hudák, Maris F. L. Galesloot, Martin Tappler, Milan Ceska
    Neuro-Symbolic Planning under Uncertainty in Unknown Models
  • Canyu Chen, Kangyu Zhu, Zhaorun Chen, Zhanhui Zhou, Yiping Lu, Tian Li, Manling Li, Dawn Song
    Federated Agent Reinforcement Learning
18:00 Social Event

Tuesday May 26

08:45-09:00 Second Day Opening
09:00-10:00 Session V
Invited Talk: Katia Sycara
10:00-10:15 Short Talks, 5 minutes each in order
  • Abhishek Sriraman, Eleni Vasilaki, Robert Loftin
    ConventionPlay: Capability-Limited Training for Robust Ad-Hoc Collaboration
  • Roberto-Rafael Maura-Rivero, Chirag Nagpal, Roma Patel, Francesco Visin
    Utility-inspired Reward Transformations Improve Reinforcement Learning Training of Language Models
10:15-11:00 Coffee Break
11:00-12:30 Session VI
11:00-11:15 Siyao Li, Matteo Leonetti
Adversarial Curriculum Generation for World Models in Reinforcement Learning
11:15-11:30 Raphael Simon, José Carrasquel, Wim Mees, Pieter Jules Karel Libin
NASimJax: A GPU-Accelerated Policy Learning Framework for Penetration Testing
11:30-11:45 Changxi Zhu, Mehdi Dastani, Shihan Wang
Learning Communication Skills in Multi-task Multi-agent Deep Reinforcement Learning
11:45-12:30 Short Talks, 5 minutes each in order
  • Paolo Speziali, Arno De Greef, Mehrdad Asadi, Willem Röpke, Ann Nowé, Diederik M. Roijers
    Preference Guided Iterated Pareto Referent Optimisation for Accessible Route Planning
  • Reisa Haveri, Davide Liga
    Adaptive Behavioral Alignment of RAG-Based Health Coaching Agent Through Supervised Fine-Tuning
  • Manuel Agraz Vallejo, Kagan Tumer
    Learning Scalable Salp-Inspired Locomotion
  • Kshitij Kumar Srivastava, Kshitij Jerath
    S3: Stable Subgoal Selection by Constraining Uncertainty of Coarse Dynamics in Hierarchical Reinforcement Learning
  • Youssef AL OZAIBI, Maxime Toquebiau, Manolo HINA
    Learning Feasible Scalarizations in Constrained Markov Decision Processes Using a Stochastic Meta-Policy
  • Michael Kaisers, Ian Gemp, Marc Lanctot, Kate Larson, Georgios Piliouras
    Spectral Ratings: Recovering Semantic Capability from Biased Benchmarks
  • Erel Shtossel, Alicia Vidler, Uri Shaham, Gal Kaminka
    A Harmonic-Mean Formulation of Average-Reward Reinforcement Learning in SMDPs
  • Deborah van Sinttruije, Frans A. Oliehoek, Catharine Oertel
    Balancing the Mental Load: Adaptive Human-Agent Approaches for Peak Performance
12:30-14:00 Lunch Break
14:00-15:30 Session VII & Poster Session
14:00-14:15 Jonathan Matthew Erskine, Raul Santos-Rodriguez, Matt Clifford, Alexander Hepburn
Counterfactual Gradient Alignment: Optimizing Directional Expert Supervision for Data-Efficient Learning
14:15-14:30 Emile Timothy Anand, Richard Hoffmann, Sarah Liaw, Adam Wierman
Graphon Mean-Field Subsampling for Cooperative Heterogeneous Multi-Agent Reinforcement Learning
14:30-15:30 Poster Session
Papers presented in today's sessions, together with additional poster-only papers:
  • M. Asif Hasan, André Biedenkapp, Noor Awad
    PLASDecoupledCVAE: A Decoupled and Adaptive CVAE Framework for Offline Reinforcement Learning
  • Assia Belbachir, Ahmed Nabil Belbachir, Önder Gürcan
    HLSS: An Agent-Based Adaptive Layered Perception Framework for Hyperspectral Industrial Environments
  • Stephen Lewis, Aikaterini Kanta
    Let It Hack: Autonomous Multi-Agent Penetration Testing with LLMs and Tool-Augmented Reasoning
  • Joe Shymanski, Sandip Sen
    SARL: Controlling Policy Modelability via Surrogate-Augmented Reinforcement Learning
  • Hicham Azmani, Ann Nowé, Roxana Rădulescu
    From Reward-Free Pretraining to Pareto Fronts: Zero-Shot Multi-Objective Reinforcement Learning
  • Ethan Pedersen, Jacob Crandall
    REGAE: Tuning out of Rabbit Holes in Mixed-Motive, Contextual Bandit Problems
  • Meysam Fozi, Zahra Ghorrati, Ahmad Esmaeili, Mohammad Mehdi Ebadzadeh
    Distributional Soft Actor-Critic with Adaptive Entropy Regularization
  • Kevin Babashov, Maria Gini
    Trust-Aware Reinforcement Learning Agents in the Iterated Prisoners’ Dilemma: Integrating MCTS and UCT for Optimal Cooperation
15:30-16:15 Coffee Break
16:15-17:45 Session VIII
16:15-16:30 Short Talks, 5 minutes each in order
  • Noam Zvi, Gal Kaminka
    Learning User Boredom Constraints in Sequential Recommender Systems
  • Sofia R. Miskala-Dina, Aviva Prins
    ReversedQ: Opportunities for Faster Q-Learning in Episodic Online Reinforcement Learning
  • Harsh Rathwa, Pruthwik Mishra
    Embedded Safety-Aligned Intelligence for Multi-Agent Reinforcement Learning
16:30-17:30 Panel Discussion
17:30-17:45 Awards & Closing Remarks

Previous Editions

This workshop is a continuation of the long-running AAMAS series of workshops on adaptive agents, now in its eighteenth year. Previous editions of this workshop can be found at the following URLs:

Program Committee

TBA.

Organization

This year's workshop is organised by:

Senior Steering Committee Members:
  • Enda Howley (University of Galway, IE)
  • Daniel Kudenko (Leibniz University Hannover, DE)
  • Patrick Mannion (University of Galway, IE)
  • Ann Nowé (Vrije Universiteit Brussel, BE)
  • Sandip Sen (University of Tulsa, US)
  • Peter Stone (University of Texas at Austin, US)
  • Matthew Taylor (University of Alberta, CA)
  • Kagan Tumer (Oregon State University, US)
  • Karl Tuyls (University of Liverpool, UK)

Contact

If you have any questions about the ALA workshop, please contact the organizers at:
ala.workshop.aamas AT gmail.com

For more general news, discussion, collaboration, and networking opportunities with others interested in Adaptive and Learning Agents, please join our LinkedIn Group.