Representation Learning for Planning (RLeap)

Opportunities for PhD students and Postdoctoral researchers

Representation Learning for Planning is a project funded by the European Research Council (Advanced ERC Grant, Agreement 885107, 2020-2025). The PI is Hector Geffner; Blai Bonet is the other senior researcher fully engaged in this research.

The project and our participation in related projects, like the EU-funded TAILOR network (Trustworthy AI: Integrating Learning, Optimization and Reasoning) and the Swedish Wallenberg program in AI (WASP), are focused on a research problem that is at the heart of the current split in AI between data-based learners and model-based reasoners: the problem of learning symbolic first-order representations from raw perceptions.

Data-based learners, like those based on deep learning, are popular because there is plenty of data available, yet they produce black boxes that lack the flexibility, transparency, and guarantees of model-based systems. Building models by hand, on the other hand, is just too hard.

By showing how to learn meaningful, symbolic models from raw perceptions, the research aims to integrate the benefits of data-based learners and model-based solvers in the context of planning, where representations play a key role in expressing, communicating, achieving, and recognizing goals. First-order symbolic representations are representations based on objects and their relations.
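As a purely illustrative sketch of what "objects and relations" means here (the Blocks World vocabulary and all names below are standard textbook conventions chosen for the example, not outputs or code of the project): a first-order, STRIPS-style model in which states are sets of ground atoms over objects, and actions are schemas with preconditions and effects that can be instantiated for any set of objects.

```python
# Minimal illustrative sketch (not project code): a first-order, STRIPS-style
# planning model where states are sets of ground atoms over objects and
# relations, and an action schema is instantiated for concrete objects.

from itertools import permutations

# Action schema stack(x, y): put block x on block y.
# Relations used: holding/1, clear/1, ontable/1, on/2, handempty/0.
STACK = {
    "params": ("x", "y"),
    "pre":    {("holding", "x"), ("clear", "y")},
    "add":    {("on", "x", "y"), ("clear", "x"), ("handempty",)},
    "del":    {("holding", "x"), ("clear", "y")},
}

def ground(schema, binding):
    """Instantiate a schema for concrete objects, e.g. {'x': 'A', 'y': 'B'}."""
    sub = lambda atoms: {tuple(binding.get(t, t) for t in atom) for atom in atoms}
    return sub(schema["pre"]), sub(schema["add"]), sub(schema["del"])

def successors(state, schema, objects):
    """States reachable by applying one grounded instance of the schema."""
    for objs in permutations(objects, len(schema["params"])):
        pre, add, delete = ground(schema, dict(zip(schema["params"], objs)))
        if pre <= state:                      # preconditions hold
            yield objs, (state - delete) | add

if __name__ == "__main__":
    # Holding block A; block B is clear on the table.
    state = {("holding", "A"), ("clear", "B"), ("ontable", "B")}
    for objs, nxt in successors(state, STACK, ["A", "B"]):
        print("stack", objs, "->", sorted(nxt))
```

The point of the sketch is that such a representation generalizes over the objects involved; the learning problem addressed in the project is obtaining representations of this kind, and their grounding, from raw perceptions rather than writing them by hand.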

The problem of representation learning for planning is largely unsolved. Two characteristics of deep reinforcement learning, one of the main approaches for learning how to act, are its ability to deal with high-dimensional perceptual spaces without requiring any prior knowledge, and its inability to produce and reuse such knowledge. The construction of reusable knowledge (transfer learning) is a central concern in (deep) reinforcement learning, but the semantic and conceptual gap between the low-level techniques that are used and the high-level representations that are required is just too large.

Addressing this challenge requires new ideas and methods that build on those of relevant areas, including planning, learning, knowledge representation, and combinatorial optimization. The approach is to learn both the symbolic representations and the interpretations (grounding) of those representations, and it involves a number of subproblems like:

- learning symbolic representations and their interpretation from raw perceptions (e.g., images)
- learning hierarchical symbolic representations to enable planning at different levels of abstraction
- learning representations to explore, plan, and obtain and express general plans
- learning structure from suitable combinations of SAT and gradient-based methods
- understanding the role of attention and partial observability in learning, grounding, and skill composition
- understanding the theoretical properties of representations and their relations to planning width

We are seeking highly motivated doctoral students and postdoctoral researchers eager to make a difference on these problems, with experience in areas such as machine learning, planning, logic and knowledge representation, combinatorial optimization, and SAT.

The doctoral students and postdoctoral researchers will pursue their research in the context of these broad goals, on specific themes that will be a function of their background, skills, and interests, and the needs of the project.

Ideal candidates should be able to do, or learn to do, theoretical and experimental work, logic and algorithms, and programming and "differentiable programming" (deep learning). Good oral and written skills in English are also required.

The deadline for PhD students is July 10th, 2020. For Postdocs, there is no deadline, and the search for qualified candidates will continue until the slots are filled.

Interested PhD candidates should send a CV, transcripts, three reference letters, and a motivation statement to: rleap.artint@gmail.com. Postdocs should send a CV, contact details of three references, and a research statement.

We are flexible in order to alleviate the effects of COVID-19, with the possibility of working remotely as needed once the working documents are ready.

Finally, since Hector is a part-time professor at Linköping University (LiU) within the Swedish Wallenberg program in AI (WASP), there is also the possibility of joining the project as a PhD student at LiU with funding from WASP. The call and application procedure for those slots is here; the deadline is September 3rd, 2020.


Some relevant bibliography from us and other groups:

Learning first-order symbolic representations for planning from the structure of the state space
B. Bonet and H. Geffner. Proc. ECAI 2020.

Features, Projections, and Representation Change for Generalized Planning
B. Bonet and H. Geffner. Proc. IJCAI 2018.

Learning features and abstract actions for computing generalized plans
B. Bonet, G. Frances, and H. Geffner. Proc. AAAI 2019.

Model-free, model-based, and general intelligence
H. Geffner. Proc. IJCAI 2018.

Purely Declarative Action Representations are Overrated: Classical Planning with Simulators
G. Frances, M. Ramírez, N. Lipovetzky, and H. Geffner. Proc. IJCAI 2017.

Learning generalized policies in planning using concept languages
M. Martin and H. Geffner. Proc. KR 2000.

Reconciling deep learning with symbolic artificial intelligence: representing objects and relations
M. Garnelo and M. Shanahan. Current Opinion in Behavioral Sciences 29, 2019.

From skills to symbols: Learning symbolic representations for abstract high-level planning
G. Konidaris, L. P. Kaelbling, and T. Lozano-Perez. Journal of Artificial Intelligence Research, 2018.

Few-Shot Bayesian Imitation Learning with Logical Program Policies
T. Silver, K. R. Allen, A. K. Lew, L. P. Kaelbling, and J. Tenenbaum. Proc. AAAI 2020.

Plannable Approximations to MDP Homomorphisms: Equivariance under Actions
E. van der Pol, T. Kipf, F. A. Oliehoek, and M. Welling. Proc. AAMAS 2020.

BabyAI: A Platform to Study the Sample Efficiency of Grounded Language Learning
M. Chevalier-Boisvert, D. Bahdanau, S. Lahlou, L. Willems, C. Saharia, T. H. Nguyen, and Y. Bengio. Proc. ICLR 2019.

Combined Reinforcement Learning via Abstract Representations
V. François-Lavet, Y. Bengio, D. Precup, and J. Pineau. Proc. AAAI 2019.

Curiosity-driven Exploration by Self-supervised Prediction
D. Pathak, P. Agrawal, A. A. Efros, and T. Darrell. Proc. ICML 2017.

Logical expressiveness of Graph Neural Networks
P. Barceló, E. V. Kostylev, M. Monet, J. Pérez, J. Reutter, and J. P. Silva. Proc. ICLR 2020.

--
June 11th, 2020