Author Identifier (ORCID)

Liyana Wijayathunga: https://orcid.org/0000-0001-9058-1790

Dulitha Dabare: https://orcid.org/0009-0004-5622-2224

Alexander Rassau: https://orcid.org/0000-0002-8295-5681

Douglas Chai: https://orcid.org/0000-0002-9004-7608

Abstract

The success of deep learning methods in a wide range of application areas has inspired many recent developments in the urban and off-road autonomous navigation domain. In particular, techniques for semantic scene understanding, a key aspect of the navigation pipeline, have been researched extensively, resulting in many real-world and synthetic datasets. However, in comparison to urban semantic segmentation datasets, the availability of datasets for off-road environments remains sparse. In this paper, we aim to overcome this challenge by introducing a methodology capable of efficiently generating photorealistic synthetic datasets for off-road environments with support for multiple sensor modalities. The developed approach is evaluated by presenting a synthetic multimodal dataset of four Western Australian outdoor scenarios suitable for ground robot navigation testing and training. This dataset contains monocular and stereo RGB images and LiDAR point cloud data, with respective semantic annotations covering 16 classes, along with IMU sensor data. The simulation of the four environments was carried out using the NVIDIA Isaac Sim robotics simulator platform. Experiments were conducted using state-of-the-art off-road and universal semantic segmentation models trained on the generated dataset to highlight the Sim2Real capabilities of photorealistic synthetic datasets in real-world off-road robot navigation applications, as well as the limitations of existing datasets for generalised use.

Document Type

Journal Article

Date of Publication

3-1-2026

Volume

112

Issue

1

Publication Title

Journal of Intelligent & Robotic Systems

Publisher

Springer

School

School of Engineering / School of Science

Creative Commons License

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) License.

Comments

Wijayathunga, L., Dabare, D., Rassau, A., Chai, D., & Islam, S. M. S. (2025). A high-fidelity multimodal synthetic dataset generation framework for off-road unstructured terrain navigation training of autonomous robots. Journal of Intelligent & Robotic Systems, 112. https://doi.org/10.1007/s10846-025-02340-2


Link to publisher version (DOI)

10.1007/s10846-025-02340-2