Project Description

The Objective

The main objective of the point cloud project was to develop a system that allowed point clouds to be imported from various standard industry formats (pts, ptx, xyz, etc.), stored in a unified file format, and then viewed in real time on devices ranging from low-end to high-end performance.
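As a rough illustration of the import step, the sketch below reads an ASCII .xyz file into a packed per-point record. The UnifiedPoint layout, its field choices, and the importXyz function are illustrative assumptions for this sketch, not the project's actual unified format.

    // Minimal sketch of importing an ASCII .xyz file into a packed point
    // record. Layout and names are illustrative assumptions only.
    #include <cstdint>
    #include <fstream>
    #include <sstream>
    #include <string>
    #include <vector>

    // One point: position plus 8-bit RGB, packed for compact storage.
    struct UnifiedPoint {
        float x, y, z;
        uint8_t r, g, b;
        uint8_t pad;  // keep the struct 16 bytes for predictable alignment
    };

    std::vector<UnifiedPoint> importXyz(const std::string& path) {
        std::vector<UnifiedPoint> points;
        std::ifstream in(path);
        std::string line;
        while (std::getline(in, line)) {
            std::istringstream ss(line);
            UnifiedPoint p{};
            // Common .xyz variants: "x y z" or "x y z r g b".
            if (!(ss >> p.x >> p.y >> p.z)) continue;  // skip malformed lines
            float r, g, b;
            if (ss >> r >> g >> b) {
                p.r = static_cast<uint8_t>(r);
                p.g = static_cast<uint8_t>(g);
                p.b = static_cast<uint8_t>(b);
            } else {
                p.r = p.g = p.b = 255;  // no colour in this variant; default to white
            }
            points.push_back(p);
        }
        return points;
    }

In practice the other source formats (pts, ptx) would feed the same record type through their own parsers, so that everything downstream only ever sees the unified representation.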

The Problems

One of the issues that needed solving was that point cloud scans can be enormous (in some cases up to a few terabytes), which makes them difficult and inconvenient to work with, as even a small fraction of a large point cloud can overwhelm available compute resources. As a result, users had to break these scans into thousands of smaller chunks, which was time consuming. Furthermore, analysis and other work on a point cloud had to be done piecewise, requiring the manual lookup and loading of the specific piece of interest before work could commence. Working on small pieces of a point cloud at a time also introduced a discontinuity in workflow, as it was impossible to see how the piece currently being worked on related to the rest of the point cloud. This problem became even more pronounced when the region of interest spanned several pieces.

The Solution

The first part of the solution was to create a file format that supports real-time, on-demand streaming. Point clouds imported from the various source formats are converted into this file structure and stored compressed, in a layout that is already GPU render friendly, so that once a chunk of data is streamed from the file, minimal processing is required before it is sent to the graphics processing unit (GPU) for visualisation. This directly enabled dynamic point cloud rendering: data is streamed on demand based on the camera location, the specified level of detail, and the view range (which is also adjusted automatically to the available compute resources), allowing unconstrained movement in 3D space. The on-demand streaming and rendering removed the need to manually load and unload chunks of point clouds, eliminating the tedious task of manually segmenting scans, saving users hundreds of hours, and giving them a continuous, uninterrupted view of point clouds in their 3D workspace. Finally, the streaming process incurs no delay on rendering, as data fetches take place on separate threads and graphics contexts and only enter the render pipeline once ready.
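The sketch below shows the producer/consumer arrangement behind this kind of streaming in a simplified form: the render thread requests chunks chosen by its camera and level-of-detail logic, a loader thread reads and decompresses them off the critical path, and the render loop only picks up chunks that are ready. The ChunkStreamer class, the ChunkData structure, and the loadFromFile placeholder are assumptions for illustration and not the project's actual API; the separate graphics context used for GPU uploads is omitted here.

    // Simplified sketch of on-demand chunk streaming on a worker thread.
    // Names and structures are illustrative assumptions only.
    #include <atomic>
    #include <condition_variable>
    #include <cstdint>
    #include <mutex>
    #include <queue>
    #include <thread>
    #include <vector>

    struct ChunkData { std::vector<uint8_t> gpuReadyBytes; };  // already GPU-laid-out

    class ChunkStreamer {
    public:
        ChunkStreamer() : worker_(&ChunkStreamer::loaderLoop, this) {}
        ~ChunkStreamer() {
            running_ = false;
            cv_.notify_all();
            worker_.join();
        }

        // Called by the render thread after culling against the camera and
        // picking a level of detail; never blocks on disk I/O.
        void request(uint64_t chunkId) {
            { std::lock_guard<std::mutex> lock(m_); pending_.push(chunkId); }
            cv_.notify_one();
        }

        // Render thread drains finished chunks and uploads them to the GPU.
        std::vector<ChunkData> takeReady() {
            std::lock_guard<std::mutex> lock(m_);
            std::vector<ChunkData> out = std::move(ready_);
            ready_.clear();
            return out;
        }

    private:
        void loaderLoop() {
            while (running_) {
                uint64_t id;
                {
                    std::unique_lock<std::mutex> lock(m_);
                    cv_.wait(lock, [&] { return !pending_.empty() || !running_; });
                    if (!running_) return;
                    id = pending_.front();
                    pending_.pop();
                }
                ChunkData chunk = loadFromFile(id);  // read + decompress off the render thread
                std::lock_guard<std::mutex> lock(m_);
                ready_.push_back(std::move(chunk));
            }
        }

        ChunkData loadFromFile(uint64_t /*chunkId*/) {
            // Placeholder: read and decompress the chunk from the unified file.
            return ChunkData{};
        }

        std::atomic<bool> running_{true};
        std::mutex m_;
        std::condition_variable cv_;
        std::queue<uint64_t> pending_;
        std::vector<ChunkData> ready_;
        std::thread worker_;
    };

Because the render loop only ever calls request and takeReady, it never waits on the disk, which is what keeps frame times steady while data streams in around the viewer.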