Asset Transformer SDK


Point cloud guidelines

Learn how to import and prepare point cloud files for real-time 3D experiences.
Read time: 8 minutes. Last updated: 4 months ago.

Why prepare point clouds for real-time rendering?

Point clouds are collections of 3D points captured from laser scanners, photogrammetry, or LiDAR sensors. These datasets often contain millions or billions of points, making them too dense for real-time rendering without optimization. Unlike CAD models or meshes, point clouds have unique challenges:
  • Extreme density - Raw scans can have billions of points, far exceeding what GPUs can render at interactive frame rates
  • Mixed geometry - Point cloud files may contain unwanted polygons or line data from the scanning process
  • Segmentation needs - Large point clouds benefit from spatial organization for efficient culling and streaming
A typical point cloud import workflow involves:
  1. Import - Load the point cloud file
  2. Clean up - Remove non-point geometry
  3. Reduce density - Decimate points to a manageable count
  4. Segment - Optionally divide into spatial regions for better performance
Following these guidelines will ensure your point clouds render efficiently in real-time applications.

1. Import

1.1. Configure RecapSDK path

For certain point cloud formats, you need to configure the RecapSDK path before importing. This enables support for formats like RCS and RCP.
```python
# Set the RecapSDK path for point cloud import
core.setModuleProperty("IO", "RecapSDKPath", "/path/to/RecapSDK")
```
Note
Check your Asset Transformer SDK installation for the RecapSDK location, or refer to the supported formats to see which formats are available.

1.2. Import point cloud files

Use io.importScene to load a single point cloud file into the scene, or io.importFiles to load several at once. Common point cloud formats include E57, PTS, PTX, XYZ, LAS, RCS, and RCP.
```python
# Import a point cloud file
root = io.importScene("path/to/scan.e57")

# Import multiple point cloud files
file_list = [
    "path/to/scan1.e57",
    "path/to/scan2.pts",
    "path/to/scan3.las"
]
roots = io.importFiles(file_list)
```
Tip
Point cloud files can be very large. Consider importing only a portion of the data for testing before processing the entire dataset.

1.3. Out-of-core processing for very large files

For extremely large point clouds that don't fit in memory, use out-of-core processing. This technique divides the point cloud into spatial regions (voxels), processes each region independently, and combines the results.

How it works

Out-of-core processing uses a three-step approach:
  1. Import bounding box only - Load only the spatial extent, not the actual points
  2. Process voxels iteratively - Import, process, and export each voxel one at a time
  3. Combine results - Import all processed voxels together (now small enough to fit in memory)
This allows you to process point clouds with billions of points on systems with limited RAM.
Note
Out-of-core processing is significantly slower than normal processing because each voxel requires a separate import/export cycle. Only use this technique when the point cloud truly doesn't fit in memory.

Set import mode

Configure the IO module to control what gets loaded during import:
```python
# Import only the bounding box (no points, minimal memory)
core.setModuleProperty("IO", "PointCloudImportMode", "Import bounding box only")

# Import all points (normal mode)
core.setModuleProperty("IO", "PointCloudImportMode", "Import point cloud")

# Import one bounding box per part
core.setModuleProperty("IO", "PointCloudImportMode", "Import one bounding box by part")
```

Filter by bounding box

You can filter which points get imported by specifying a bounding box:
```python
# Define a spatial filter (only import points within this box)
filter_bbox = [
    [min_x, min_y, min_z],  # Low corner
    [max_x, max_y, max_z]   # High corner
]

# Set the import filter
core.setModuleProperty("IO", "FilterPointCloudBoundingBox", str(filter_bbox))

# Import - only points within the filter bbox will be loaded
root = io.importScene("path/to/huge_scan.e57")
```
Important
The FilterPointCloudBoundingBox property stays active until you clear it or change it. Always reset it after filtered imports to avoid accidentally filtering subsequent imports.
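One defensive pattern is to wrap filtered imports so the filter is always restored. The bbox_filter_string helper below is hypothetical (it just mirrors the str(filter_bbox) formatting shown above), and clearing the property with an empty value is an assumption — verify the exact reset value against the SDK reference.

```python
# Hypothetical helper: build the FilterPointCloudBoundingBox property string
# from two corner tuples (same format as str(filter_bbox) above).
def bbox_filter_string(low, high):
    return str([list(low), list(high)])

# Sketch of a filtered import that always restores the filter (SDK calls
# illustrative, shown commented out):
# core.setModuleProperty("IO", "FilterPointCloudBoundingBox",
#                        bbox_filter_string((0, 0, 0), (100, 100, 50)))
# try:
#     root = io.importScene("path/to/huge_scan.e57")
# finally:
#     # Assumption: an empty value clears the filter; check the SDK docs.
#     core.setModuleProperty("IO", "FilterPointCloudBoundingBox", "")
```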

When to use out-of-core processing

Use out-of-core processing when:
  • Point cloud file size exceeds 50-75% of available RAM
  • Import fails with out-of-memory errors
  • Processing very large scans (billions of points)
  • Working on systems with limited RAM (< 16GB)
Use normal processing when:
  • File comfortably fits in memory
  • Speed is critical
  • Point cloud has fewer than 100 million points
  • System has ample RAM (> 32GB)
Important
Test with normal import first. Out-of-core processing adds complexity and processing time. Only use it when necessary.
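The criteria above can be folded into a small decision helper. The function name and the 50% default threshold are illustrative choices based on the 50-75% guideline, not part of the SDK:

```python
def should_use_out_of_core(file_size_bytes, available_ram_bytes, threshold=0.5):
    """Return True when a point cloud file is likely too large for a normal
    import, i.e. it exceeds ~50-75% of available RAM per the guidelines
    above. The 0.5 default is the conservative end of that range."""
    return file_size_bytes > threshold * available_ram_bytes

# Example: a 20 GB scan on a machine with 16 GB of RAM
GB = 1024 ** 3
print(should_use_out_of_core(20 * GB, 16 * GB))  # True -> use out-of-core
print(should_use_out_of_core(4 * GB, 32 * GB))   # False -> normal import
```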

2. Clean up

2.1. Remove non-point geometry

Point cloud files sometimes contain unwanted geometry from the scanning or processing pipeline. Remove polygons and lines to keep only the point data.
```python
roots = [root]

# Remove polygon meshes
algo.deletePolygons(roots)

# Remove line geometries
algo.deleteLines(roots)
```
Important
This step ensures you're working with pure point cloud data. Mixed geometry can interfere with point cloud-specific operations and increase memory usage unnecessarily.

2.2. Analyze the point cloud

Before optimization, analyze the point cloud to understand its size and density.
```python
# Get bounding box to understand spatial extent
bounds = scene.getAABB(roots)

# Calculate approximate size
size_x = bounds.high.x - bounds.low.x
size_y = bounds.high.y - bounds.low.y
size_z = bounds.high.z - bounds.low.z
print(f"Point cloud dimensions: {size_x} x {size_y} x {size_z}")

# Get point count
point_count = scene.getVertexCount(roots, countPoints=True)
print(f"Total points: {point_count}")
```
Tip
Understanding the scale and density of your point cloud helps you choose appropriate decimation and voxelization parameters.

3. Reduce density

Point clouds captured from scanners are typically much denser than needed for visualization. Reducing density is essential for real-time performance.

3.1. Calculate appropriate tolerance

The decimation tolerance determines the minimum distance between points. Calculate it based on the point cloud's bounding box.
```python
import math

# Calculate voxel size based on bounding box
voxel_size = math.pow(
    (bounds.high.x - bounds.low.x) *
    (bounds.high.y - bounds.low.y) *
    (bounds.high.z - bounds.low.z),
    1.0 / 3.0)

# Typical splat size for point rendering
splat_size = 0.0035

# Calculate distance tolerance
# Adjust density_factor to control reduction (1.0 = no change, higher = more reduction)
density_factor = 2.0
distance = max(0.002 * voxel_size, (voxel_size * splat_size)) / density_factor
print(f"Decimation tolerance: {distance}")
```
Note
The density_factor parameter controls how aggressively points are removed. Higher values remove more points but may lose detail. Start with 2.0 and adjust based on visual quality.

3.2. Decimate the point cloud

Apply algo.decimatePointClouds to reduce point density while preserving visual quality.
```python
# Decimate point cloud
algo.decimatePointClouds(roots, tolerance=distance)

# Check new point count
new_point_count = scene.getVertexCount(roots, countPoints=True, countPolygonVertices=False)
print(f"Points after decimation: {new_point_count}")
print(f"Reduction: {(1 - new_point_count/point_count) * 100:.1f}%")
```
Tip
Decimation is typically the most important optimization for point clouds. A 50-90% reduction is common while maintaining good visual quality.

4. Segment and voxelize

For very large point clouds, spatial segmentation improves rendering performance by enabling efficient culling and streaming.

4.1. Merge occurrences (optional)

If your point cloud is split across multiple parts, merge them before voxelization for more consistent segmentation.
```python
# Merge all point cloud parts together
scene.mergeOccurrencesByTreeLevel(roots, maxLevel=1)
```

4.2. Voxelize the point cloud

Voxelization divides the point cloud into a regular 3D grid. Each voxel becomes a separate part that can be culled individually during rendering.
```python
# Define segmentation resolution (number of voxels along each axis)
segmentation_resolution = 20  # Creates a 20x20x20 grid

# Clamp between reasonable values
segmentation_resolution = max(1, min(80, segmentation_resolution))

# Calculate voxel size based on bounding box
voxel_size = math.pow(
    (bounds.high.x - bounds.low.x) *
    (bounds.high.y - bounds.low.y) *
    (bounds.high.z - bounds.low.z),
    1.0 / 3.0) / segmentation_resolution

# Voxelize the point cloud
algo.voxelizePointClouds(roots, voxelSize=voxel_size)
```
Note
Higher segmentation values (40-80) create more voxels, which improves culling efficiency but increases overhead. Lower values (10-20) are better for smaller datasets.
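The overhead grows cubically with the resolution, which is why the upper clamp matters. A quick illustration (the helper name is ours, not SDK API):

```python
def max_voxel_count(resolution):
    # Upper bound on parts created by voxelization: resolution^3 grid cells.
    # Empty cells produce no parts, so real counts are usually lower.
    return resolution ** 3

print(max_voxel_count(20))  # 8000 potential voxels
print(max_voxel_count(80))  # 512000 -- substantial per-part overhead
```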

When to use voxelization

Voxelization is beneficial when:
  • Point clouds cover large spatial areas (buildings, city blocks, infrastructure)
  • You need to stream or load portions of the data on demand
  • The camera will move through the point cloud rather than viewing it from a distance
Skip voxelization for:
  • Small point clouds that fit entirely in memory
  • Point clouds that are always fully visible
  • Cases where the overhead of managing many parts outweighs culling benefits
Tip
Voxelization is particularly effective for architectural and infrastructure scans where the camera navigates through the space. It allows the engine to render only visible regions.

5. Advanced techniques

5.1. Convert to mesh (experimental)

For certain use cases, you may want to convert the point cloud to a mesh surface. This is useful for creating collision geometry or continuous surfaces.
```python
# Tessellate point cloud into a mesh
algo.tessellatePointClouds(roots,
    kNeighbors=20,     # Number of neighbors for reconstruction
    keepPoints=False,  # Remove original points
    colorize=True)     # Transfer point colors to mesh
```
Important
The tessellatePointClouds function is experimental. It works best on point clouds with uniform density and clear surface definition. Results may vary depending on data quality.

5.2. Calculate normals (experimental)

Point cloud normals improve lighting and can be used for surface reconstruction. Calculate them before tessellation or rendering.
```python
# Calculate normals for point cloud
algo.calculateNormalsInPointClouds(roots)
```
Note
Normal calculation is experimental and can be time-consuming for large point clouds. It's most useful when converting to meshes or when the rendering engine supports oriented point rendering.

Scripts

Standard

Here's a complete Python script that implements the point cloud import workflow:
```python
import pxz
from pxz import core, io, algo, scene
import math

# File path to import
FILE_PATH = "path/to/your/pointcloud.e57"

# Configuration
DENSITY_FACTOR = 2.0  # Higher = more reduction (1.0 = minimal, 5.0 = aggressive)
USE_VOXELIZATION = True
SEGMENTATION_RESOLUTION = 20  # Grid size for voxelization (1-80)

# Step 1: Configure RecapSDK path (if needed for RCS/RCP formats)
# core.setModuleProperty("IO", "RecapSDKPath", "/path/to/RecapSDK")

# Step 2: Import the point cloud file
print("Importing point cloud...")
root = io.importScene(FILE_PATH)
roots = [root]

# Step 3: Remove non-point geometry
print("Cleaning up non-point geometry...")
algo.deletePolygons(roots)
algo.deleteLines(roots)

# Step 4: Analyze the point cloud
print("Analyzing point cloud...")
bounds = scene.getAABB(roots)
size_x = bounds.high.x - bounds.low.x
size_y = bounds.high.y - bounds.low.y
size_z = bounds.high.z - bounds.low.z
print(f"Dimensions: {size_x:.2f} x {size_y:.2f} x {size_z:.2f}")
initial_points = scene.getVertexCount(roots, countPoints=True)
print(f"Initial point count: {initial_points}")

# Step 5: Decimate point cloud
print("Decimating point cloud...")
# Calculate voxel size
voxel_size = math.pow(size_x * size_y * size_z, 1.0 / 3.0)
# Calculate decimation tolerance
splat_size = 0.0035
distance = max(0.002 * voxel_size, (voxel_size * splat_size)) / DENSITY_FACTOR
print(f"Using tolerance: {distance:.6f}")
algo.decimatePointClouds(roots, tolerance=distance)
decimated_points = scene.getVertexCount(roots, countPoints=True)
print(f"Points after decimation: {decimated_points}")
print(f"Reduction: {(1 - decimated_points/initial_points) * 100:.1f}%")

# Step 6: Voxelize for spatial segmentation (optional)
if USE_VOXELIZATION and SEGMENTATION_RESOLUTION > 1:
    print("Voxelizing point cloud...")
    # Merge occurrences first for better voxelization
    scene.mergeOccurrencesByTreeLevel(roots, maxLevel=1)
    # Calculate voxel size for segmentation
    segmentation_res = max(1, min(80, SEGMENTATION_RESOLUTION))
    voxel_grid_size = math.pow(size_x * size_y * size_z, 1.0 / 3.0) / segmentation_res
    print(f"Creating {segmentation_res}x{segmentation_res}x{segmentation_res} grid")
    print(f"Voxel size: {voxel_grid_size:.2f}")
    algo.voxelizePointClouds(roots, voxelSize=voxel_grid_size)

# Step 7: Export the optimized point cloud
print("Exporting...")
io.exportScene("path/to/output.glb")

print("Point cloud import complete!")
```
Tip
Save this script and adjust the DENSITY_FACTOR and SEGMENTATION_RESOLUTION parameters based on your specific requirements for quality and performance.

Out of core

Here's a reusable function that processes large point clouds voxel by voxel:
```python
import tempfile

import pxz
from pxz import algo, core, geom, io, scene

def processPointCloudByVoxel(filePath, func, voxelSize=5000, workingDir=None):
    # Possible PointCloudImportMode options:
    #   "Import point cloud"
    #   "Import bounding box only"
    #   "Import one bounding box by part"
    if workingDir is None:
        workingDir = tempfile.mkdtemp()

    # First: import only as a bounding box (no RAM spent on points).
    # This tells us the total volume of the point cloud.
    core.setModuleProperty("IO", "PointCloudImportMode", "Import bounding box only")
    occ = io.importScene(filePath)

    # Get the AABB of the point cloud
    aabb = scene.getAABB([occ])
    scene.deleteOccurrences([occ])

    # Restore the import mode
    core.setModuleProperty("IO", "PointCloudImportMode", "Import point cloud")

    out_files = []

    # Second: move a voxel of the given size through the AABB to scan the whole volume
    z = aabb.low.z
    zi = 0
    while z < aabb.high.z:
        y = aabb.low.y
        yi = 0
        while y < aabb.high.y:
            x = aabb.low.x
            xi = 0
            while x < aabb.high.x:
                # Set the point cloud import filter to the current voxel
                filterAABB = geom.AABB(
                    geom.Point3(x, y, z),
                    geom.Point3(x + voxelSize, y + voxelSize, z + voxelSize))
                core.setModuleProperty("IO", "FilterPointCloudBoundingBox", str([
                    [filterAABB.low.x, filterAABB.low.y, filterAABB.low.z],
                    [filterAABB.high.x, filterAABB.high.y, filterAABB.high.z]
                ]))

                # Import the filtered part of the file
                occ = io.importScene(filePath)

                # Run the user process
                func(occ, aabb, filterAABB)

                # Save the result to a temp file
                subFilePath = workingDir + f"/voxel_{xi}_{yi}_{zi}.pxz"
                out_files.append(subFilePath)
                io.exportScene(subFilePath)

                # Clear the occurrence
                scene.deleteOccurrences([occ])

                x += voxelSize
                xi += 1
            y += voxelSize
            yi += 1
        z += voxelSize
        zi += 1

    # Now import all the processed files, which should fit in memory
    io.importFiles(out_files)

def processVoxel(occ, totalAABB, filteredAABB):
    algo.decimatePointClouds([occ], 10)
```
Tip
Choose voxel size based on available RAM. Smaller voxels (1000-5000 units) use less memory but process more voxels. Larger voxels (10000-50000 units) process faster but need more RAM per voxel.
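To estimate how many import/export cycles a given voxel size implies before committing to a run, you can compute the grid counts from the bounding box. The helper name is ours; the counting matches the stepping of the while loops above:

```python
import math

def voxel_grid_counts(aabb_low, aabb_high, voxel_size):
    """Number of voxels the scan loop visits along each axis for a given
    voxel size (one count per axis, matching the while-loop stepping)."""
    return tuple(math.ceil((high - low) / voxel_size)
                 for low, high in zip(aabb_low, aabb_high))

# A 100 x 60 x 25 unit site scanned with 20-unit voxels:
counts = voxel_grid_counts((0, 0, 0), (100, 60, 25), 20)
print(counts)  # (5, 3, 2)
print(counts[0] * counts[1] * counts[2], "import/export cycles")  # 30
```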

Best practices

Performance considerations

  • Start conservative - Begin with minimal decimation and increase gradually until you find the right balance
  • Test in target engine - Point cloud rendering performance varies by engine; test early and often
  • Consider LOD strategies - For very large datasets, consider creating multiple density levels for different viewing distances
  • Memory management - Point clouds can consume significant memory; monitor usage during import and optimization
  • Use out-of-core processing wisely - Only use out-of-core processing for files that truly don't fit in memory; it's significantly slower than normal processing
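For the LOD strategy above, one simple approach is to decimate the scene at a geometric series of tolerances and export each level. The helper and the doubling factor are illustrative assumptions, and the commented algo/io calls just reuse the functions shown in earlier sections:

```python
def lod_tolerances(base_tolerance, levels=3, factor=2.0):
    # LOD0 keeps the base tolerance; each further level multiplies it by
    # `factor` (doubling per level is an assumption -- tune per dataset).
    return [base_tolerance * factor ** i for i in range(levels)]

print(lod_tolerances(1.0))  # [1.0, 2.0, 4.0]

# Sketch (illustrative): decimate progressively and export each level.
# for i, tol in enumerate(lod_tolerances(distance)):
#     algo.decimatePointClouds(roots, tolerance=tol)
#     io.exportScene(f"path/to/output_lod{i}.glb")
```

Because each pass only removes points, decimating the same scene with increasing tolerances produces the coarser levels cumulatively without re-importing.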

Quality considerations

  • Preserve detail where it matters - Use lower density reduction in areas with fine details
  • Color preservation - Point cloud decimation preserves vertex colors, but excessive reduction can cause color banding
  • Normal generation - If your renderer supports oriented points, generate normals for better lighting quality
  • Validation - Always validate the optimized point cloud visually before proceeding to production
  • Out-of-core boundaries - When using out-of-core processing, voxel boundaries may create visible seams; use overlapping voxels or post-process to blend boundaries if needed
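To implement the overlap idea, the voxel filter from the out-of-core loop can be padded by a margin so neighboring voxels share a band of points across their boundary. The helper below is a sketch (names are ours, not SDK API):

```python
def overlapped_voxel_bounds(low_corner, voxel_size, overlap):
    """Expand a voxel's AABB by `overlap` on every side so adjacent voxels
    import a shared band of points across their common boundary."""
    low = [c - overlap for c in low_corner]
    high = [c + voxel_size + overlap for c in low_corner]
    return low, high

low, high = overlapped_voxel_bounds((0, 0, 0), 5000, 250)
print(low, high)  # [-250, -250, -250] [5250, 5250, 5250]

# In the out-of-core loop, pass str([low, high]) to the
# FilterPointCloudBoundingBox property instead of the exact voxel bounds.
```

Note that overlapping filters import boundary points twice, so decimate or deduplicate after merging the processed voxels.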

File format recommendations

  • E57 - Best for archival and high-precision data exchange
  • PTS/PTX - Simple text-based formats, good for compatibility
  • RCS/RCP - Autodesk ReCap formats, require RecapSDK configuration
Note
Different point cloud formats may preserve different attributes (colors, normals, intensity, classification). Choose your export format based on what data you need to preserve.