FAQs
Q: Do I need experience with real robots to participate in this challenge?
A: No, real robot experience is not required. The challenge is specifically designed for machine learning practitioners.
Q: Can I access the evaluation simulator?
A: No, participants cannot access the evaluation simulator directly. Instead, each participant may retrieve a limited amount of data from the simulator per day. All of the remaining code is available.
Q: Can I use standard robotics solutions for this challenge?
A: Yes, you can use classical robotics solutions (unlike the previous challenge where only learning-based solutions were accepted).
Q: What types of machine learning algorithms can be used to solve this challenge?
A: All types of machine learning approaches are allowed. Relevant approaches include, but are not limited to, reinforcement learning, supervised learning, adaptive control, sim-to-real approaches, and Bayesian optimization.
Q: Is there any relevant publication on how to structure an Air Hockey agent?
A: The literature on Air Hockey is quite rich. We suggest the following papers, on which the baselines and the environments are built:
Liu, P., Tateo, D., Bou-Ammar, H., & Peters, J. (2021, September). Efficient and reactive planning for high speed robot air hockey. In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 586–593). IEEE.
Liu, P., Tateo, D., Bou-Ammar, H., & Peters, J. (2022, January). Robot reinforcement learning on the constraint manifold. In Conference on Robot Learning (pp. 1357–1366). PMLR.
Liu, P., Günster, J., Funk, N., Gröger, S., Chen, D., Bou-Ammar, H., Jankowski, J., Marić, A., Calinon, S., Orsula, A., Olivares-Mendez, M., Zhou, H., Lioutikov, R., Neumann, G., Likmeta, A., Zhalehmehrabi, A., Bonenfant, T., Restelli, M., Tateo, D., Liu, Z., & Peters, J. (2024, December). A retrospective on the Robot Air Hockey Challenge: Benchmarking robust, reliable, and safe learning techniques for real-world robotics. In Advances in Neural Information Processing Systems, 37 (pp. 9690–9726). Curran Associates, Inc.
Kicki, P., Liu, P., Tateo, D., Bou-Ammar, H., Walas, K., Skrzypczyński, P., & Peters, J. (2024). Fast kinodynamic planning on the constraint manifold with deep neural networks. IEEE Transactions on Robotics, 40, 277–297.
Liu, P., Bou-Ammar, H., Peters, J., & Tateo, D. (2025). Safe reinforcement learning on the constraint manifold: Theory and applications. IEEE Transactions on Robotics, 41, 3442–3461.
Q: Will I be able to test my code on the real robot?
A: Only the top four teams from the Simulated Tournament will be able to deploy their approach on the real robot in the Real-world Finals.
Q: How is the final ranking decided?
A: The final ranking will be decided by a tournament in the Real-world Finals, where the top four teams from the Simulated Tournament face each other in a full game.
Q: What is the deadline for registration to the challenge?
A: The deadline for registration is the end of the qualifying stage (Sep. 14). Registrations are accepted for the entire duration of the qualifying stage.
Q: Which software is necessary for this challenge? Can I have more information about this software?
A: The simulation environments use the high-level interfaces of the MushroomRL open-source reinforcement learning library, but you do not need to use MushroomRL to develop your agents. You can find the documentation for MushroomRL here. The environments are implemented in the MuJoCo simulator; you can find the documentation for MuJoCo here. To accelerate training, the environments are also implemented in MJX.
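For orientation, here is a minimal sketch of how an agent plugs into MushroomRL's Agent/Core interaction loop. Since the challenge environment is not named here, Gym's Pendulum-v1 task stands in for it, and RandomAgent is purely illustrative; the sketch assumes MushroomRL's documented 1.x Agent/Core interface.

```python
# Minimal sketch of MushroomRL's agent-environment loop. Gym's
# Pendulum-v1 is a stand-in for the challenge environment, which
# exposes the same Environment interface; RandomAgent is illustrative.
import numpy as np

from mushroom_rl.core import Agent, Core
from mushroom_rl.environments import Gym


class RandomAgent(Agent):
    """Placeholder agent sampling uniformly within the action bounds."""

    def __init__(self, mdp_info):
        super().__init__(mdp_info, policy=None)

    def draw_action(self, state):
        # Called by Core at every step to choose the next action.
        return np.random.uniform(self.mdp_info.action_space.low,
                                 self.mdp_info.action_space.high)

    def fit(self, dataset):
        # A learning agent would update its policy from `dataset` here.
        pass


env = Gym('Pendulum-v1', horizon=200, gamma=0.99)
agent = RandomAgent(env.info)   # env.info carries spaces, gamma, horizon
core = Core(agent, env)         # Core drives the interaction loop
dataset = core.evaluate(n_episodes=5)
```

The same pattern applies to the challenge environments: implement draw_action (and fit, if your agent learns online) against the env.info metadata, and Core handles the stepping.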