I am a PhD candidate at the Department of Software Science at Radboud University Nijmegen, working on the project "Provably Correct Policies for Uncertain Partially Observable Markov Decision Processes" under the supervision of prof. dr. Nils Jansen and prof. dr. Frits Vaandrager.

My research is at the intersection of formal methods and AI. My specific interests are planning and learning in Markov decision processes (MDPs), as well as model variants that extend MDPs with uncertainty, such as robust MDPs and partially observable MDPs (POMDPs).

You can reach me at marnix.suilen {AT} ru.nl.

Publications

See also my DBLP and Google Scholar profiles.

Currently under submission:

Maris Galesloot, Marnix Suilen, Thiago D. Simão, Steven Carr, Matthijs Spaan, Ufuk Topcu, and Nils Jansen.
Pessimistic Iterative Planning for Robust POMDPs.

Accepted publications:

arXiv Marnix Suilen, Thom Badings, Eline M. Bovy, David Parker, and Nils Jansen.
Robust Markov Decision Processes: A Place Where AI and Formal Methods Meet.
In Principles of Verification: Cycling the Probabilistic Landscape: Essays Dedicated to Joost-Pieter Katoen on the Occasion of His 60th Birthday, Part III. 2024.
arXiv Marnix Suilen, Marck van der Vegt, and Sebastian Junges.
A PSPACE Algorithm for Almost-Sure Rabin Objectives in Multi-Environment MDPs.
In CONCUR. 2024. Best paper award.
arXiv Eline M. Bovy, Marnix Suilen, Sebastian Junges, and Nils Jansen.
Imprecise Probabilities Meet Partial Observability: Game Semantics for Robust POMDPs.
In IJCAI. 2024.
arXiv Patrick Wienhöft, Marnix Suilen, Thiago D. Simão, Clemens Dubslaff, Christel Baier, and Nils Jansen.
More for Less: Safe Policy Improvement with Stronger Performance Guarantees.
In IJCAI. 2023.
arXiv Thom Badings, Thiago D. Simão, Marnix Suilen, and Nils Jansen.
Decision-making under uncertainty: beyond probabilities. Challenges and Perspectives.
STTT. 2023.
arXiv Thiago D. Simão, Marnix Suilen, and Nils Jansen.
Safe Policy Improvement for POMDPs via Finite-State Controllers.
In AAAI. 2023.
arXiv Marnix Suilen, Thiago D. Simão, David Parker, and Nils Jansen.
Robust Anytime Learning of Markov Decision Processes.
In NeurIPS. 2022.
arXiv Murat Cubuktepe, Nils Jansen, Sebastian Junges, Ahmadreza Marandi, Marnix Suilen, and Ufuk Topcu.
Robust Finite-State Controllers for Uncertain POMDPs.
In AAAI. 2021.
arXiv Thom S. Badings, Arnd Hartmanns, Nils Jansen, and Marnix Suilen.
Balancing Wind and Batteries: Towards Predictive Verification of Smart Grids.
In NFM. 2021.
arXiv Marnix Suilen, Nils Jansen, Murat Cubuktepe, and Ufuk Topcu.
Robust Policy Synthesis for Uncertain POMDPs via Convex Optimization.
In IJCAI. 2020.

Professional Activities

Talks and Presentations

  • Robust and Reliable Planning and Learning Under Uncertainty. KTH, Sweden, 2024. Invited talk.
  • Robust and Reliable Reinforcement Learning. Dagstuhl Seminar 23492: Model Learning for Improved Trustworthiness in Autonomous Systems, 2023.
  • Safe Policy Improvement for POMDPs. BNAIC 2023.
  • Extending the Scope of Reliable Offline Reinforcement Learning. AISOLA 2023. Invited talk.
  • Offline Reinforcement Learning with Reliability Guarantees. ROCKS 2023.
  • More for Less: Safe Policy Improvement with Stronger Performance Guarantees. IJCAI 2023.
  • Dependable Decision-Making Under Uncertainty: Beyond Probabilities. University of Oxford, UK, 2023. Invited talk.
  • Safe Policy Improvement for POMDPs. LiVe 2023.
  • Safe Policy Improvement for POMDPs via Finite-State Controllers. AAAI 2023.
  • Robust Anytime Learning of Markov Decision Processes. NeurIPS 2022.
  • Decision-Making and Learning under Uncertainty. Lorentz Center Workshop: Rigorous Automated Planning, 2022.
  • Decision-Making and Learning under Uncertainty. ROCKS 2022.
  • Anytime Learning and Verification of Uncertain Markov Decision Processes. LiVe 2022.
  • Unraveling Uncertainty in POMDPs. RWTH Aachen, Germany, 2021. Invited talk.
  • Robust Policies for Uncertain POMDPs. Robotics for People (R4P) 2021.
  • Robust Policies for Uncertain POMDPs. FUNCTION 2021.
  • Robust Policy Synthesis for Uncertain POMDPs via Convex Optimization. IJCAI 2020.

Research Visits and Invited Seminars

  • RPL Summer School 2024.
  • Division of Robotics, Perception and Learning, KTH. 2024.
  • Dagstuhl Seminar 24231: Stochastic Games. 2024.
  • Dagstuhl Seminar 23492: Model Learning for Improved Trustworthiness in Autonomous Systems. 2023.
  • Department of Computer Science, University of Oxford. 2023.
  • Lorentz Center Workshop: Rigorous Automated Planning. 2022.
  • Department of Computer Science, RWTH Aachen. 2021.

Academic Service

  • PC Member: AAMAS 2023, AAMAS 2024.
  • External reviewer: AAAI, AAMAS, EUMAS, FASE, FM, ICML, ICSE, L4DC, LICS, NeurIPS, QEST.
  • Student volunteer at IJCAI 2023.
  • Student volunteer for FORMATS 2020, part of QONFEST 2020.

Organizational Service

  • Co-organizer of the Academic Career Workshop of the iCIS Graduate School at Radboud University, March 27, 2024; 37 participants.
  • Representative in the iCIS Graduate School Council for the Department of Software Science at Radboud University (2023–2024).
  • Organizer of the AI-FM reading group.

Teaching

Qualifications

  • Dutch University Teaching Qualification (UTQ).

Lectures and Tutorials

  • Model Checking, master course, spring 2020–2024. Lectures and tutorial sessions.
  • Algorithms & Data Structures, bachelor course, fall 2018–2019, 2022–2023. Tutorial sessions and practical assignments.
  • Seminar Mathematical Foundations of Computer Science, master course, fall 2021–2023. Individual student supervision.

Thesis and Internship Supervision

  • Sander Suverkropp: Quantifying uncertainty in robust Markov models. ELLIS Excellence fellowship, 2024.
  • Nikolay Kyosev: Diverse Data Generation in POMDPs for Offline Reinforcement Learning. Master thesis, 2024.
  • Eline Bovy: The Underlying Belief Model of Uncertain Partially Observable Markov Decision Processes. Master thesis, 2023. BNAIC 2023 best thesis award.
  • Mark Széles: Probabilistic Automata (co-algebraically). Research internship, 2023.
  • Bram Pellen: Safety-Constrained Learning of Markov Decision Processes. Research internship, 2023.
  • Renato Feroce: Model Learning of Markov Decision Processes. Research internship, 2023.
  • Koen Verdenius: A POMDP model for safety-critical systems and its deteriorating sensors. Bachelor thesis, 2021.
  • Marck van der Vegt: Processing and Generating Observations for Uncertain MDPs. Research internship, 2020.
  • Anass Fakir: Creating a Toolchain to Automate Policy Calculations for POMDPs. Research internship, 2020.

Travels

I have been lucky enough to travel to the following places: Stockholm, Sweden; Heraklion, Crete, Greece; Saarbrücken, Germany; Macao SAR, China; Oxford, UK; Paris, France; Delft, The Netherlands; Washington DC, USA; Leiden, The Netherlands; Munich, Germany; Aachen, Germany; Schloss Dagstuhl, Wadern, Germany.