Generating synthetic images for construction machinery data augmentation utilizing context-aware object placement

Document Type

Article

Publication Date

3-1-2025

Abstract

Datasets are an essential factor influencing the accuracy of computer vision (CV) tasks in construction. Although image synthesis methods can automatically generate large amounts of annotated construction data without manual annotation, existing challenges such as geometric inconsistency limit CV task accuracy. To generate high-quality data efficiently, a construction data synthesis method was proposed that combines Unreal Engine (UE) and PlaceNet. First, an inpainting algorithm was applied to generate pure backgrounds, followed by multi-angle foreground capture within the UE. Then, a Swin Transformer and improved loss functions were integrated into PlaceNet to enhance the feature extraction of construction backgrounds and improve object placement accuracy. The generated synthetic dataset achieved a high mean average precision (mAP = 85.2%) in object detection tasks, 2.1% higher than the real dataset. This study offers theoretical and practical insights for synthetic dataset generation in construction, along with a perspective on enhancing future CV task performance through image synthesis.
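To illustrate the compositing idea the abstract describes (clean background via inpainting, then placing a rendered foreground and recording its annotation), the sketch below shows a minimal Python version. It is not the authors' code: the file names, the fixed placement coordinates, and the use of OpenCV's inpainting are illustrative assumptions; in the paper the placement location is predicted by the improved PlaceNet rather than fixed by hand.

```python
# Minimal sketch of context-aware compositing for synthetic construction data.
# Assumed inputs: a site photo, a mask of objects to erase, and a machinery
# render with an alpha channel (e.g. exported from Unreal Engine).
import json
import cv2
import numpy as np

# 1) Clean background: erase existing objects from the scene via inpainting.
scene = cv2.imread("site_scene.jpg")                                # assumed path
object_mask = cv2.imread("object_mask.png", cv2.IMREAD_GRAYSCALE)   # 1 = erase
background = cv2.inpaint(scene, object_mask, inpaintRadius=5, flags=cv2.INPAINT_TELEA)

# 2) Foreground: machinery render with transparency (BGRA).
fg = cv2.imread("excavator_render.png", cv2.IMREAD_UNCHANGED)
fg_rgb, alpha = fg[:, :, :3].astype(np.float32), fg[:, :, 3:] / 255.0

# 3) Paste the foreground at a placement location. Fixed here for clarity;
#    the paper predicts this location with a PlaceNet-style network.
x, y = 200, 150
h, w = fg_rgb.shape[:2]
roi = background[y:y + h, x:x + w].astype(np.float32)
background[y:y + h, x:x + w] = (alpha * fg_rgb + (1 - alpha) * roi).astype(np.uint8)

# 4) Save the synthetic image; the bounding box label is known automatically.
cv2.imwrite("synthetic_0001.jpg", background)
annotation = {"image": "synthetic_0001.jpg",
              "bbox_xywh": [x, y, w, h],
              "category": "excavator"}
with open("synthetic_0001.json", "w") as f:
    json.dump(annotation, f)
```

Because the placement and foreground are fully controlled, the bounding box annotation comes for free, which is the main labeling advantage of synthetic data pipelines like this one.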

Publication Title

Developments in the Built Environment
