Stability Guided Reinforcement Learning for Autonomous Robotic Construction

Motivation

While automation has been widely adopted in many industries, the construction sector has seen comparatively little of it. Unlike an assembly line, a construction site is unpredictable and requires frequent last-minute replanning of robot actions. Employing robots in construction therefore requires new algorithms that enhance their autonomy and intelligence.
This project is part of a larger vision: training robots to design and build structures autonomously. In our previous work, we trained a reinforcement learning model to build stable structures [1]. The method is trained in a simulated environment, where the goal is to ensure that both the final structure and the intermediate sub-assemblies are structurally stable.
Our previous work uses a well-established rigid-block equilibrium method [2] to assess the structural stability of incomplete assemblies. A limitation of this simulation method is that it does not account for real-world factors such as tolerances and robot precision. Structures deemed stable in simulation might therefore fail under real-world disturbances.
A main cause of this sim-to-real gap is that the simulator only produces a binary output: a structure is either stable or not. This research project will explore how assemblies designed by reinforcement learning (RL) could change if robustness against disturbances were incorporated into the design through reward shaping. Even though physically assembling the resulting structures is out of the scope of this project, a key interest is to see whether different stability criteria lead to the same kind of structures.
 

Outline

The project is programming-oriented, and the student will have the following tasks:
  • Understanding stability analysis methods for discrete assemblies (e.g., rigid-block equilibrium [2], coupled rigid-block analysis [3]), and exploring and proposing different stability evaluation metrics (for example, the critical tilting angle)
  • Implementing the above metrics in a physics simulator
  • Training an RL agent in an environment built on this simulator and studying how rewards based on these criteria affect the final structures.
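To make the last two tasks concrete, here is a minimal 2D toy sketch of how a graded stability metric could replace a binary one in a reward. It estimates a critical tilting angle for a stack of rectangular blocks with a simple support-edge check (far simpler than the rigid-block equilibrium analysis of [2], and purely illustrative); the function names, block representation, and reward weighting `alpha` are all assumptions, not part of the project codebase.

```python
import math

def tilt_margin(blocks):
    """Smallest tilt angle (radians) at which the combined centre of mass
    of the blocks above some interface passes a support edge.

    `blocks` is a list of (x_center, y_bottom, width, height) tuples,
    ordered bottom to top; each block rests on the one below it.
    Uniform density is assumed, so mass is proportional to area.
    """
    margins = []
    for i in range(len(blocks)):
        x, y, w, h = blocks[i]
        left, right = x - w / 2, x + w / 2   # support edges of block i's top face
        above = blocks[i + 1:]
        if not above:
            break
        # Combined centre of mass of everything above interface i.
        m = sum(bw * bh for _, _, bw, bh in above)
        cx = sum(bx * bw * bh for bx, _, bw, bh in above) / m
        cy = sum((by + bh / 2) * bw * bh for _, by, bw, bh in above) / m
        lever = min(right - cx, cx - left)   # distance to nearest support edge
        if lever <= 0:
            return 0.0                       # COM outside support: toppling
        height = cy - (y + h)                # COM height above the interface
        margins.append(math.atan2(lever, height))
    return min(margins) if margins else math.pi / 2

def shaped_reward(blocks, alpha=0.5):
    """Binary stability term plus a robustness bonus (hypothetical shaping)."""
    margin = tilt_margin(blocks)
    stable = margin > 0.0
    return float(stable) + alpha * margin
```

A centred two-block tower yields a larger tilt margin (and hence reward) than an offset one, so an RL agent optimising this shaped reward is pushed toward configurations that are not merely stable but robust to perturbations.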

Requirements:

We are looking for motivated students who are familiar with RL and have a good background in physics or optimization. A creative mindset is a plus, even though we already have a few ideas. If you are interested, please send an email to [email protected] containing (1) one paragraph on your background and fit for this project and (2) your BS and MS transcripts. This project will be co-supervised by Gabriel Vallat and Jingwen Wang under the direction of Prof. Stefana Parascho and Prof. Maryam Kamgarpour.


References

[1] Gabriel Vallat, Jingwen Wang, Anna Maddux, Maryam Kamgarpour, and Stefana Parascho. 2023. Reinforcement learning for scaffold-free construction of spanning structures. In Proceedings of the 8th ACM Symposium on Computational Fabrication (SCF ’23). Association for Computing Machinery, New York, NY, USA, Article 12, 1–12. https://doi.org/10.1145/3623263.3623359
 
[2] Gene Kao, Antonino Iannuzzo, Stelian Coros, Tom Van Mele, and Philippe Block. 2021. Understanding rigid-block equilibrium method via mathematical programming. Proceedings of the Institution of Civil Engineers – Engineering and Computational Mechanics 174, 1–39. https://doi.org/10.1680/jencm.20.00036

[3] Gene Kao, Antonino Iannuzzo, Bernhard Thomaszewski, Stelian Coros, Tom Van Mele, and Philippe Block. 2022. Coupled Rigid-Block Analysis: Stability-Aware Design of Complex Discrete-Element Assemblies. Computer-Aided Design 146, 103216. https://doi.org/10.1016/j.cad.2022.103216