AIMOR @ Banff 2026
Explore and advance the integration of AI/ML into Operations Management research
We are delighted to invite you to the second workshop on Artificial Intelligence and Machine Learning in Operations Management Research (AIMOR) at the heart of the Canadian Rocky Mountains. This two-day workshop is designed to explore and advance the integration of AI/ML into Operations Management research.
Taking place on May 14–15, 2026, at the elegant Royal Canadian Lodge in the breathtaking town of Banff, this workshop provides an exceptional opportunity to present your work and exchange cutting-edge ideas with world-class researchers at the forefront of AI applications in Operations Management. The stunning natural beauty of Banff adds an extra touch of inspiration to this enriching experience.
Our aim is to create a focused environment that cultivates high-level research discussions with no distractions. Therefore, we have designed this workshop as a series of single-stream technical sessions. Breakfast, lunch, and refreshments will be served throughout the workshop days. There will also be one workshop dinner at the venue. Thanks to the generous support of the Haskayne School of Business at the University of Calgary, we are able to organize this workshop as an “all-inclusive” event with discounted registration fees.
Day 1: Distinguished speakers' presentations
We will have two distinguished speakers who are leading experts in the field: Warren B. Powell from Princeton University and Amy R. Ward from the University of Chicago Booth School of Business. Prof. Ward and Prof. Powell will deliver tutorials on selected topics in this exciting field and present an in-depth discussion of how to incorporate AI/ML into OM research while producing impactful, top-tier journal publications. This day is devoted to learning from the experts while exchanging ideas in an interactive manner.
Day 2: Participants' presentations
Participants will have the opportunity to present their own work and receive feedback from other leading researchers in the field. Since there are limited time slots available in the program, only a select number of abstracts will be invited for presentation. There will also be some room in the schedule for poster presentations by interested participants.
Tutorial Speakers (May 14, 2026)
Warren B. Powell
Professor Emeritus, Princeton University
Reinforcement Learning as a Sequential Decision Problem using the Universal Modeling Framework
Reinforcement learning emerged in the 1980s in computer science for approximating Bellman’s equation, building on the framework of Markov decision processes introduced in the 1950s by Bellman. This work paralleled a line of research from the optimal control community starting in 1974 for deterministic control problems, and then the work on approximate dynamic programming in operations research in the 1990s. Ultimately, all three lines of research came to the same conclusion: approximating Bellman’s equation is really hard and generally does not work.
The lines of research emerging from these communities (computer science using “reinforcement learning”, optimal control using “adaptive dynamic programming,” and operations research using “approximate dynamic programming”) would all grow to embrace a wide range of solution methods to meet the needs of the vast range of problems being addressed. Over a series of articles from 2014 to 2019, I formalized the idea that all RL problems are sequential decision problems: decision, information, decision, information, …, which can be modeled using what I call the “universal modeling framework,” a derivative of what is used in optimal control.
I then realized that every method for making decisions could be organized into four classes of policies, only one of which uses Bellman’s equation. Policies in each of the four classes can be found throughout the optimal control literature, as well as in the 2018 edition of Sutton and Barto’s book Reinforcement Learning. This laid the foundation for my 2022 volume, Reinforcement Learning and Stochastic Optimization: A Unified Framework for Sequential Decisions, which is written entirely around the universal modeling framework and all four classes of policies.
I will demonstrate how this framework identifies approaches for any sequential decision problem, spanning the traditional problems with discrete actions up to the high-dimensional problems faced in operations research. All four classes of policies can be applied to stochastic search problems known as multi-armed bandit problems (I like the name “intelligent trial-and-error”), which are arguably the most common decision problem in practice.
Warren B. Powell is Professor Emeritus at Princeton University, where he taught for 39 years, and is currently a co-founder and Chief Innovation Officer at Optimal Dynamics as well as Executive-in-Residence at Rutgers Business School. He was the founder and director of CASTLE Lab, which focused on stochastic optimization with applications to freight transportation, energy systems, health, e-commerce, finance, and the laboratory sciences, supported by over $60 million in funding from government and industry. He has pioneered a new universal framework that can be used to model any sequential decision problem, including the identification of four classes of policies that span every possible method for making decisions. This is documented in his latest book with Wiley: Reinforcement Learning and Stochastic Optimization: A Unified Framework for Sequential Decisions. He has published over 250 papers and five books, and has supervised over 60 graduate students and post-docs. He is the 2021 recipient of the Robert Herman Lifetime Achievement Award from the Society for Transportation Science and Logistics, and the 2022 Saul Gass Expository Writing Award. He is a Fellow of INFORMS, and the recipient of numerous other awards.
Amy R. Ward
Professor of Operations Management, The University of Chicago Booth School of Business
Learning in Stochastic Models: An Asymptotic Approach
We consider queueing models with reneging in which the primitive distributions are unknown and must be learned. The question of interest is how to prioritize arriving individuals for service. When the reneging distribution is non-exponential, the state space is very complex, because we must track the amount of time each customer in queue has been waiting in order to have a Markovian system. As a result, learning an exact optimal policy is very challenging.
The approach we take is to first ignore the discrete and stochastic nature of arrivals, and to instead assume that the number of individuals to arrive in a time interval of length t>0 is proportional to t. Then, we use historical (offline) data to formulate a data-driven (fluid) optimization problem (that may incorporate machine learning predictions) whose objective is to maximize long-run average reward. The solution to the data-driven optimization problem motivates a data-driven policy for deciding how to prioritize arriving individuals for service. With an infinite amount of historical data available, the data-driven policy is asymptotically optimal as the arrival rate and service capacity become large. With a finite amount of historical data available, the challenge is to establish regret bounds and/or finite-sample statistical guarantees.
Amy Ward's research focuses on the approximation and control of stochastic systems, with applications to the service industry. Much of her past work has focused on the impact of customer impatience and abandonments on performance. Her more recent work investigates the interactions between behavioral incentives and operational efficiency in service systems.
Ward is a fellow of the INFORMS Manufacturing and Service Operations Management (MSOM) Society (elected 6/2023), and is the Editor-in-Chief for the journal Operations Research (term began 1/1/2024). In the past, she was Editor-in-Chief for the journal Operations Research Letters, and earlier held the position of Chair of the Applied Probability Society.
Prior to joining Booth, Ward was Professor of Data Sciences and Operations at the University of Southern California Marshall School of Business. She has also been a Visiting Associate Professor in the Computing and Mathematical Sciences Department at Caltech, and an Assistant Professor in Industrial and Systems Engineering at the Georgia Institute of Technology. Outside of academia, during her doctoral studies, she spent several summers at AT&T Laboratories.
Call for Abstracts and Expression of Interest
We invite you to submit a three-page abstract showcasing your research at the intersection of AI/ML and Operations Management. Selected abstracts will be featured in either technical or flash presentation sessions. This is an excellent opportunity to present your work, receive feedback, and connect with peers and senior scholars in the field.
If you are new to this field and want to learn from leading researchers, this is also a great opportunity to attend the workshop without delivering a presentation.
There is limited space at the venue, so attendance and participation will be by invitation. Please check the important dates below to express your interest in participating, with or without submitting an abstract for presentation.
Important Dates:
- Abstract submission deadline: January 31
- Expression of interest deadline (no abstract required): January 31
- Abstract acceptance notification: February 6
- Discounted registration deadline (by invitation): February 23
- Late registration deadline*: April 20
- Workshop dates: May 14–15
*If space permits
Workshop Details
Will be announced soon.
The closest airport to Banff is Calgary International Airport (YYC), located approximately 90 miles (145 km) east of the town. Most major airline carriers fly into Calgary. Visitors arriving in their own or a rented vehicle are required to obtain a Park Pass to enter Banff National Park, in which the town of Banff is located. Passes can be obtained in advance through Banff Lake Louise Tourism.
Airport Car Rental
Many car rental providers operate out of the Calgary International Airport (Avis, Budget, Enterprise, National, etc.).
Airport Shuttle
Airport shuttles are available via Pursuit from the Calgary International Airport to the Royal Canadian Lodge in Banff with regularly scheduled service. Reservations are required at least 1 hour in advance. Schedules and rates are subject to change without notice.
Hotel Parking
Royal Canadian Lodge offers on-site secure underground parking at $19.95 + GST per day, per vehicle.
Need a car while in Banff?
Did you decide that you want a car to go exploring? Visit one of the many car rental providers operating out of Banff (Avis, Budget, Enterprise, Hertz).
Visiting Banff in May typically marks the transition from spring to early summer. Temperatures are often pleasant for outdoor activities, and there are far fewer tourists than during the peak summer months. Daytime temperatures in May usually range from 10°C to 15°C (50°F to 59°F), but it can occasionally be warmer or cooler. While snow still lingers at higher elevations, lower areas begin to reveal lush green forests and wildflowers. There can be occasional rain showers. May also sees more sunshine, with longer days (up to 16 hours of daylight). Because of the varying weather, it is advisable to pack layers, including a warm jacket, and be prepared for both rain and sunshine.
Organizing Committee
The AIMOR @ Banff workshop is organized by the Operations and Supply Chain Management Area at the Haskayne School of Business, University of Calgary.