Winner | Analysing the Application of Extremely Large Point Clouds (ELPCs) in Real-Time Environments using Game Engines


Joel Woodward, University of Wolverhampton

As reality capture becomes more widely accessible, point clouds have become an increasingly important visualisation tool within the architecture and construction industry, allowing highly detailed digital models of existing buildings to be generated. With the integration of immersive technologies such as Virtual and Augmented Reality (VR/AR), the demand for using Extremely Large Point Clouds (ELPCs) in real-time applications is growing rapidly. These expansive datasets make it possible to capture and represent reality with increased detail and precision, allowing architects and clients to visualise buildings and spaces in a more immersive and intuitive way.

However, as point clouds increase in size and complexity, significant challenges emerge in terms of computational performance, file management, and hardware requirements. For immersive visualisation applications, maintaining real-time rendering is critical, with a minimum performance threshold of 60 frames per second (fps) required to ensure usability and avoid issues such as motion sickness.
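As a point of reference, the 60 fps threshold can be restated as a per-frame time budget. The short Python sketch below shows the arithmetic; the 22 ms reading is a hypothetical example, not a measurement from this study.

```python
# The 60 fps usability threshold restated as a per-frame time budget.
TARGET_FPS = 60
frame_budget_ms = 1000 / TARGET_FPS  # ~16.7 ms available to render each frame
print(f"Per-frame budget at {TARGET_FPS} fps: {frame_budget_ms:.2f} ms")

# Any frame that takes longer than the budget misses the threshold.
measured_frame_time_ms = 22.0  # hypothetical reading, not study data
print("Meets 60 fps threshold:", measured_frame_time_ms <= frame_budget_ms)
```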

The aim of this research was therefore to test the feasibility of ELPCs for real-time visualisation in game engines, specifically Unreal Engine 5. The study benchmarks performance across different point cloud sizes, file formats, and hardware configurations to provide practical insight into the current strengths and limitations of the technology.

Point clouds are typically generated using LiDAR scanning or photogrammetry, producing millions of 3D coordinates that capture the geometry and surface detail of the built environment. Due to the non-intrusive nature of reality capture, point clouds are widely used in architecture for heritage recording, surveying, and as-built modelling, and increasingly for immersive visualisation through integration with VR platforms.

The captured 3D spatial data provides a range of benefits, as point clouds offer extreme detail, accuracy, and precision. When rendered in immersive environments, they allow users to interact with a true-to-life digital representation of a building or landscape. This capability supports better-informed design decision-making and non-intrusive analysis of historic buildings for future record.

Despite these advantages, point cloud applications also present notable challenges. ELPCs are resource-intensive, and the size of individual datasets imposes computational constraints on real-time applications. Rendering such volumes of data in real time is demanding even for high-end machines. Previous research has attempted to address this through semantic segmentation and deep learning neural networks, which classify each point of the cloud into its respective category to make the datasets more manageable. While valuable, these approaches often fail to address practical implementation in real-time environments, with limited focus on point clouds at architectural scales.

This study therefore focuses on experimentally testing ELPCs directly in a real-time environment, to analyse the feasibility of point clouds for rendering applications at an architectural scale. Employing an experimental methodology with Unreal Engine 5.4.4, the research directly tested ELPCs in a real-time context by importing a series of datasets ranging from 3 to 31 GB in both .las and .e57 formats to evaluate the impact of file type. Point budgets were systematically adjusted from 100,000 to 100 million points to identify performance thresholds, providing practical benchmarks for real-time rendering applications.
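For readers wishing to reproduce the setup, the point budget sweep could be scripted from the Unreal Editor's Python console along the lines of the sketch below. It assumes the LiDAR Point Cloud plugin is enabled and that `r.LidarPointBudget` is the console variable overriding the dynamic point budget; both assumptions should be verified against the engine version in use.

```python
# Minimal sketch of a point budget sweep inside the Unreal Editor's Python
# console. Assumes the LiDAR Point Cloud plugin is enabled; the console
# variable name "r.LidarPointBudget" should be checked against your engine
# version before relying on it.
import unreal

POINT_BUDGETS = (100_000, 1_000_000, 10_000_000, 100_000_000)

for budget in POINT_BUDGETS:
    unreal.SystemLibrary.execute_console_command(None, f"r.LidarPointBudget {budget}")
    # ...record framerate and latency at this budget before moving on...
```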

The key performance metrics measured to determine feasibility were framerate (to determine VR usability), latency (the delay between user input and rendering output), and computational performance (to highlight bottlenecks). To provide a reliable benchmark, two hardware configurations were compared: a high-performance workstation (Intel i9, 128GB RAM, RTX A5000 GPU) and a standard desktop (Xeon CPU, 32GB RAM, Quadro P2200 GPU). This enabled analysis of performance at both professional and more typical industry levels.
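Assuming the per-frame render times are exported to a simple CSV (the file name and column layout below are hypothetical), the framerate and latency metrics could be summarised as in this sketch; frame time is used here as a rough proxy for input-to-output latency.

```python
# Sketch of reducing a per-frame timing log to the study's metrics.
# "benchmark_frames.csv" and its "frame_ms" column are hypothetical; a real
# harness would adapt Unreal's stat/CSV profiler output to this shape.
import csv
import statistics

def summarise(path: str) -> None:
    with open(path, newline="") as f:
        frame_times_ms = [float(row["frame_ms"]) for row in csv.DictReader(f)]

    avg_fps = 1000 / statistics.mean(frame_times_ms)  # framerate
    worst_ms = max(frame_times_ms)                    # latency proxy
    print(f"average fps: {avg_fps:.1f} (threshold 60)")
    print(f"worst frame time: {worst_ms:.1f} ms")
    print("feasible for VR:", avg_fps >= 60)

summarise("benchmark_frames.csv")
```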

The findings align with the original hypothesis that “the size of the point cloud will determine the feasibility for real-time visualisation applications.” However, some variation in the findings suggests that additional factors, such as hardware configuration and file type, may also affect the application of point clouds in real-time environments.

Table 1 confirms the need for point cloud optimisation. Real-time rendering at 60 fps is only achievable at a point budget of 10,000,000, meaning ELPCs require manual reduction that compromises visual fidelity and makes them unsuitable for VR. Feasibility also depends heavily on hardware configuration, with only high-performance workstations capable of supporting point cloud visualisation.

[Table 1]

[Figure 1]

Whilst high frame rates are preferred to limit motion sickness and maintain immersion, 60 fps is the minimum performance threshold for a point cloud to be feasible for real-time rendering in VR. Figure 1 illustrates that varying the point budget significantly impacts the framerate, and further demonstrates that standard desktops struggle to achieve the minimum framerate across the point cloud samples. Only at the lowest point budget do they sometimes exceed the threshold, and then only at the cost of visual quality. Figures 2-5 illustrate that the scarcity of points at a 100,000 point budget makes the scene unusable, so the point budget itself determines feasibility for real-time rendering applications.

Even with the high-performance machine exceeding the minimum performance requirements at point budgets of 10,000,000 and below, only 1.8% of the original point cloud is rendered for sample 3. As the point budget increases, the frame rate decreases; there is therefore a trade-off between performance and visual quality which ultimately determines feasibility for real-time applications.
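The rendered fraction follows directly from dividing the point budget by the size of the cloud; for sample 3 the figures quoted here can be checked with a few lines:

```python
# Rendered fraction of sample 3 (543,341,928 points) at two point budgets.
SAMPLE_3_POINTS = 543_341_928

for budget in (10_000_000, 100_000_000):
    fraction = budget / SAMPLE_3_POINTS
    print(f"budget {budget:>11,}: {fraction:.1%} of the cloud rendered")
# -> roughly 1.8% at 10 million points and 18.4% at 100 million
```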

[Figures 2-5]

For VR applications, latency, the delay between user input and machine output, is a critical measure. For real-time applications, immediate rendering is essential to avoid visible delays and maintain user immersion.

The findings, illustrated in Figure 7, demonstrate a link between framerate and latency. They further reinforce that standard desktops are not feasible for real-time point cloud applications: whilst low point budgets offer similar latency on both machines, higher point budgets push the standard desktop's latency beyond 200 ms, far exceeding its high-performance counterpart.

Although the .e57 file type is much smaller than its .las counterpart, the import time is significantly longer, as demonstrated in Figure 7. On average, importing the same point cloud into Unreal Engine is 59.62% faster as a .las file on a high-performance machine, and 43.12% faster on a standard desktop.

This bottleneck arises not only from the hardware configuration but also from the point cloud file type used. Whilst .e57 offers reduced file sizes, the trade-off compared to .las files is longer import times. Although not significant for the samples used in this study, the bottleneck may be exaggerated for larger files, making it a critical consideration for real-time applications.
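One plausible reading of the "% faster" comparison is the share of the .e57 import time saved by importing the same cloud as .las, as sketched below; the timings are placeholders rather than the study's raw data.

```python
# Relative import-time comparison between file formats. The example
# timings are placeholders, not measurements from this study.
def pct_faster(t_las_s: float, t_e57_s: float) -> float:
    """Percentage of the .e57 import time saved by importing as .las."""
    return (t_e57_s - t_las_s) / t_e57_s * 100

print(f".las import is {pct_faster(30.0, 75.0):.2f}% faster")  # illustrative
```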

[Figure 7]

To address the challenges of rendering ELPCs in real time, future research should focus on methods for splitting ELPCs into separate chunks. Separating an ELPC into smaller chunks may allow larger point clouds to be imported without needing a smaller point budget. For example, sample 3 is a point cloud with 543,341,928 points; yet when rendering only 18% of the scene at a point budget of 100,000,000, the highest frame rate achieved is 15 fps.

Whilst this may be applicable for smaller point clouds, larger point clouds may not be suited to the approach: if the cloud is rendered at a 10,000,000 point budget to achieve the minimum performance requirement, approximately 54 separate chunks would be required, which is not feasible. When separating point clouds into more manageable chunks, .e57 may therefore still be the preferred file format, as it is compressed without sacrificing performance. However, utilising deep learning for point cloud optimisation remains the key focus for future research.
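As an illustration of the chunking idea, the sketch below partitions a .las file into a grid of tiles so that each tile stays within a usable point budget. It assumes the third-party laspy package; the input path and grid layout are illustrative only.

```python
# Sketch of grid-based chunking for a large .las file, assuming the
# third-party laspy package (https://laspy.readthedocs.io).
import math
import laspy
import numpy as np

POINT_BUDGET = 10_000_000

las = laspy.read("sample3.las")                       # hypothetical input
n_chunks = math.ceil(len(las.points) / POINT_BUDGET)  # ~54 for 543,341,928 points
grid = math.ceil(math.sqrt(n_chunks))                 # square tile layout

# Bin each point into a tile by its XY position.
x, y = np.asarray(las.x), np.asarray(las.y)
x_bin = np.minimum(((x - x.min()) / (x.max() - x.min()) * grid).astype(int), grid - 1)
y_bin = np.minimum(((y - y.min()) / (y.max() - y.min()) * grid).astype(int), grid - 1)

for i in range(grid):
    for j in range(grid):
        mask = (x_bin == i) & (y_bin == j)
        if not mask.any():
            continue
        tile = laspy.create(point_format=las.header.point_format,
                            file_version=las.header.version)
        tile.header.scales = las.header.scales    # keep original coordinates
        tile.header.offsets = las.header.offsets
        tile.points = las.points[mask]
        tile.write(f"sample3_tile_{i}_{j}.las")
```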

To summarise, further research directions that could better integrate ELPCs into real-time workflows include:

  • Data segmentation or chunking, enabling smaller portions of point clouds to be rendered sequentially.
  • Exploration of alternative file formats or more efficient compression standards.
  • Greater use of semantic segmentation and deep learning to automate classification and optimise rendering.

Such directions will help bridge the gap between the potential of point clouds and practical application in everyday architectural practice.