Reconstruction Quality Report

Overview

After a reconstruction task completes, the MipMapEngine SDK generates a detailed quality report covering device information, reconstruction efficiency, parameter settings, and result quality. The report is stored as JSON in report/report.json, alongside visualization thumbnails for quick preview.
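
A minimal loading sketch (assuming the default report/report.json path; the report variable is reused in later snippets on this page):

import json

# Load the quality report produced at the end of a reconstruction task.
with open('report/report.json', 'r', encoding='utf-8') as f:
    report = json.load(f)

print('SDK version:', report['sdk_version'])
print('Residual RMSE (px):', report['residual_rmse'])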

Report Structure

1. Device Information

Records the hardware configuration used for reconstruction:

Field       Type     Description
cpu_name    string   CPU name
gpu_name    string   GPU name

Example:

{
  "cpu_name": "Intel(R) Core(TM) i9-10900K CPU @ 3.70GHz",
  "gpu_name": "NVIDIA GeForce RTX 3080"
}

2. Reconstruction Efficiency

Records the processing time of each stage, in minutes (AT = aerial triangulation):

Field                      Type    Description
feature_extraction_time    float   Feature extraction time
feature_match_time         float   Feature matching time
sfm_time                   float   Bundle adjustment time
at_time                    float   Total AT time
reconstruction_time        float   Total reconstruction time (excluding AT)

Example:

{
  "feature_extraction_time": 12.5,
  "feature_match_time": 8.3,
  "sfm_time": 15.2,
  "at_time": 36.0,
  "reconstruction_time": 48.6
}
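
In the example above, the three AT stages (feature extraction, matching, and bundle adjustment) sum to the reported at_time. A quick sanity-check sketch, assuming report has been loaded as in the Overview:

# All timing values are reported in minutes.
at_stage_sum = (report['feature_extraction_time']
                + report['feature_match_time']
                + report['sfm_time'])
total_time = report['at_time'] + report['reconstruction_time']

print(f"AT stages sum: {at_stage_sum:.1f} min (reported at_time: {report['at_time']:.1f} min)")
print(f"End-to-end processing time: {total_time:.1f} min")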

3. Reconstruction Parameters

Records task input parameters and configuration:

Camera Parameters

"initial_camera_parameters": [
{
"camera_name": "DJI_FC6310",
"width": 5472,
"height": 3648,
"id": 0,
"parameters": [3850.5, 2736, 1824, -0.02, 0.05, 0.001, -0.001, 0.01]
}
]

Parameter array order is [f, cx, cy, k1, k2, p1, p2, k3] (a small unpacking sketch follows the list below):

  • f: Focal length
  • cx, cy: Principal point coordinates
  • k1, k2, k3: Radial distortion coefficients
  • p1, p2: Tangential distortion coefficients
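
The following sketch unpacks the parameter array into named intrinsics; the CameraIntrinsics container and unpack_camera helper are illustrative, not SDK types (report loaded as in the Overview sketch):

from dataclasses import dataclass

@dataclass
class CameraIntrinsics:
    # Illustrative container mirroring the parameter order above.
    f: float    # focal length
    cx: float   # principal point x
    cy: float   # principal point y
    k1: float   # radial distortion
    k2: float   # radial distortion
    p1: float   # tangential distortion
    p2: float   # tangential distortion
    k3: float   # radial distortion

def unpack_camera(entry):
    """Unpack one entry of initial_camera_parameters or AT_camera_parameters."""
    f, cx, cy, k1, k2, p1, p2, k3 = entry['parameters']
    return CameraIntrinsics(f, cx, cy, k1, k2, p1, p2, k3)

cam = unpack_camera(report['initial_camera_parameters'][0])
print(cam.f, cam.cx, cam.cy)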

Other Parameters

Field                   Type     Description
input_camera_count      int      Input camera count
input_image_count       int      Input image count
reconstruction_level    int      Reconstruction level (1 = Ultra-high, 2 = High, 3 = Medium)
production_type         string   Product type
max_ram                 float    Maximum RAM usage (GB)

Coordinate System Information

"production_cs_3d": {
"epsg_code": 4326,
"origin_offset": [0, 0, 0],
"type": 2
}

Coordinate system types (a mapping sketch follows this list):

  • 0: LocalENU (Local East-North-Up)
  • 1: Local (Local coordinate system)
  • 2: Geodetic (Geodetic coordinate system)
  • 3: Projected (Projected coordinate system)
  • 4: ECEF (Earth-Centered Earth-Fixed)
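
A minimal mapping of the type codes to readable names (an illustrative helper, not an SDK API; report loaded as in the Overview):

from enum import IntEnum

class CoordinateSystemType(IntEnum):
    # Mirrors the type codes listed above.
    LOCAL_ENU = 0
    LOCAL = 1
    GEODETIC = 2
    PROJECTED = 3
    ECEF = 4

cs = report['production_cs_3d']
print(CoordinateSystemType(cs['type']).name, 'EPSG:', cs['epsg_code'])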

4. Reconstruction Results

Camera Parameters After AT

Records optimized camera intrinsic parameters:

"AT_camera_parameters": [
{
"camera_name": "DJI_FC6310",
"width": 5472,
"height": 3648,
"id": 0,
"parameters": [3852.1, 2735.8, 1823.6, -0.019, 0.048, 0.0008, -0.0009, 0.009]
}
]
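
Comparing the optimized intrinsics against the initial values shows how far self-calibration moved each parameter. A short sketch, assuming both lists contain one entry per camera id and report is loaded as in the Overview:

# Pair initial and AT-optimized parameters by camera id.
initial = {c['id']: c['parameters'] for c in report['initial_camera_parameters']}
optimized = {c['id']: c['parameters'] for c in report['AT_camera_parameters']}

names = ['f', 'cx', 'cy', 'k1', 'k2', 'p1', 'p2', 'k3']
for cam_id, after in optimized.items():
    before = initial[cam_id]
    deltas = {n: round(a - b, 4) for n, a, b in zip(names, after, before)}
    print(f'camera {cam_id}: {deltas}')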

Image Position Differences

Records how far each image position moved during AT optimization (in meters):

"image_pos_diff": [
{
"id": 0,
"pos_diff": 0.125
},
{
"id": 1,
"pos_diff": 0.087
}
]
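
Summarizing pos_diff helps spot images whose positions moved unusually far during AT; the 3x-mean outlier rule below is only an illustration, not an SDK recommendation (report loaded as in the Overview):

import numpy as np

pos_diffs = np.array([item['pos_diff'] for item in report['image_pos_diff']])
print(f'mean: {pos_diffs.mean():.3f} m, max: {pos_diffs.max():.3f} m')

# Flag images whose position correction is far above the average (illustrative threshold).
outlier_ids = [item['id'] for item in report['image_pos_diff']
               if item['pos_diff'] > 3 * pos_diffs.mean()]
print('images with unusually large corrections:', outlier_ids)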

Quality Metrics

Field                  Type    Description
removed_image_count    int     Images removed after AT
residual_rmse          float   Image point residual RMSE (pixels)
tie_point_count        int     Tie point count
scene_area             float   Scene area (square meters)
scene_gsd              float   Ground sampling distance (meters)
flight_height          float   Flight height (meters)
block_count            int     Reconstruction block count

5. Other Information

Field          Type     Description
sdk_version    string   SDK version

Visualization Thumbnails

The thumbnail folder in the report directory contains the following visualization files:

1. Camera Residual Plot

camera_{id}_residual.png - 24-bit color image

  • Good calibration result: Residuals are similar in size across positions with random directions
  • Poor calibration result: Large residuals with obvious directional patterns
Tip: Large residuals don't necessarily indicate poor overall accuracy, as this plot only reflects internal camera accuracy. Final accuracy should be judged together with checkpoint coordinate accuracy and model quality.

2. Overlap Map

overlap_map.png - 8-bit grayscale image

  • Pixel value range: 0-255
  • Can be rendered as a color map to show the overlap distribution (see the sketch after this list)
  • Used to evaluate flight path design and image coverage quality
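
A brief sketch that renders the grayscale overlap map with a color map for easier inspection (path as documented above; the jet colormap is an arbitrary choice):

import matplotlib.pyplot as plt
from PIL import Image

# Render the 8-bit overlap map with a color map and save a preview image.
overlap = Image.open('report/thumbnail/overlap_map.png')
plt.imshow(overlap, cmap='jet')
plt.colorbar(label='overlap (0-255)')
plt.axis('off')
plt.savefig('overlap_preview.png', dpi=150, bbox_inches='tight')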

3. Survey Area Thumbnail

rgb_thumbnail.jpg - 32-bit color image

  • For quick project preview
  • Shows survey area extent and reconstruction results

Report Interpretation Examples

Complete Report Example

{
  "cpu_name": "Intel(R) Core(TM) i9-10900K CPU @ 3.70GHz",
  "gpu_name": "NVIDIA GeForce RTX 3080",
  "feature_extraction_time": 12.5,
  "feature_match_time": 8.3,
  "sfm_time": 15.2,
  "at_time": 36.0,
  "reconstruction_time": 48.6,
  "initial_camera_parameters": [{
    "camera_name": "DJI_FC6310",
    "width": 5472,
    "height": 3648,
    "id": 0,
    "parameters": [3850.5, 2736, 1824, -0.02, 0.05, 0.001, -0.001, 0.01]
  }],
  "input_camera_count": 1,
  "input_image_count": 156,
  "reconstruction_level": 2,
  "production_type": "all",
  "production_cs_3d": {
    "epsg_code": 4326,
    "origin_offset": [0, 0, 0],
    "type": 2
  },
  "production_cs_2d": {
    "epsg_code": 3857,
    "origin_offset": [0, 0, 0],
    "type": 3
  },
  "max_ram": 28.5,
  "AT_camera_parameters": [{
    "camera_name": "DJI_FC6310",
    "width": 5472,
    "height": 3648,
    "id": 0,
    "parameters": [3852.1, 2735.8, 1823.6, -0.019, 0.048, 0.0008, -0.0009, 0.009]
  }],
  "removed_image_count": 2,
  "residual_rmse": 0.68,
  "tie_point_count": 125840,
  "scene_area": 850000.0,
  "scene_gsd": 0.025,
  "flight_height": 120.5,
  "block_count": 1,
  "sdk_version": "3.0.1"
}

Quality Assessment Metrics

Excellent Quality Standards

  • residual_rmse < 1.0 pixels
  • removed_image_count / input_image_count < 5%
  • tie_point_count > 10000
  • Average position difference < 0.5 meters

Situations Requiring Attention

  • residual_rmse > 2.0 pixels: Possible systematic errors
  • removed_image_count > 10%: Image quality or overlap issues
  • tie_point_count < 5000: Insufficient feature points, affecting accuracy
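
A short checker that flags exactly these attention conditions (thresholds taken from the list above; report loaded as in the Overview):

def check_warnings(report):
    """Return human-readable warnings for the attention thresholds listed above."""
    warnings = []
    if report['residual_rmse'] > 2.0:
        warnings.append('residual_rmse > 2.0 px: possible systematic errors')
    removal_rate = report['removed_image_count'] / report['input_image_count']
    if removal_rate > 0.10:
        warnings.append('more than 10% of images removed: check image quality or overlap')
    if report['tie_point_count'] < 5000:
        warnings.append('fewer than 5000 tie points: accuracy may be affected')
    return warnings

for warning in check_warnings(report):
    print('WARNING:', warning)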

Report Analysis Tools

Python Analysis Example

import json
import numpy as np

def analyze_quality_report(report_path):
    with open(report_path, 'r', encoding='utf-8') as f:
        report = json.load(f)

    # Calculate efficiency metrics
    total_time = report['at_time'] + report['reconstruction_time']
    images_per_minute = report['input_image_count'] / total_time

    # Calculate quality metrics
    removal_rate = report['removed_image_count'] / report['input_image_count']
    avg_pos_diff = np.mean([item['pos_diff'] for item in report['image_pos_diff']])

    # Generate analysis report
    analysis = {
        'efficiency': {
            'total_time_minutes': total_time,
            'images_per_minute': images_per_minute,
            'area_per_hour': report['scene_area'] / (total_time / 60)
        },
        'quality': {
            'residual_rmse': report['residual_rmse'],
            'removal_rate_percent': removal_rate * 100,
            'avg_position_diff_meters': avg_pos_diff,
            'tie_points_per_image': report['tie_point_count'] / report['input_image_count']
        },
        'scale': {
            'area_sqm': report['scene_area'],
            'gsd_cm': report['scene_gsd'] * 100,
            'flight_height_m': report['flight_height']
        }
    }

    return analysis

# Usage example
analysis = analyze_quality_report('report/report.json')
print(f"Processing efficiency: {analysis['efficiency']['images_per_minute']:.1f} images/minute")
print(f"Average residual: {analysis['quality']['residual_rmse']:.2f} pixels")
print(f"Ground resolution: {analysis['scale']['gsd_cm']:.1f} cm")

Quality Report Visualization

import json

import matplotlib.pyplot as plt
from PIL import Image

def visualize_quality_report(report_dir):
    # Read report data
    with open(f'{report_dir}/report.json', 'r') as f:
        report = json.load(f)

    # Create charts
    fig, axes = plt.subplots(2, 2, figsize=(12, 10))

    # 1. Time distribution pie chart
    times = [
        report['feature_extraction_time'],
        report['feature_match_time'],
        report['sfm_time'],
        report['reconstruction_time']
    ]
    labels = ['Feature Extraction', 'Feature Matching', 'Bundle Adjustment', '3D Reconstruction']
    axes[0, 0].pie(times, labels=labels, autopct='%1.1f%%')
    axes[0, 0].set_title('Processing Time Distribution')

    # 2. Position difference histogram
    pos_diffs = [item['pos_diff'] for item in report['image_pos_diff']]
    axes[0, 1].hist(pos_diffs, bins=20, edgecolor='black')
    axes[0, 1].set_xlabel('Position Difference (meters)')
    axes[0, 1].set_ylabel('Image Count')
    axes[0, 1].set_title('Image Position Optimization Distribution')

    # 3. Overlap map
    overlap_img = Image.open(f'{report_dir}/thumbnail/overlap_map.png')
    axes[1, 0].imshow(overlap_img, cmap='jet')
    axes[1, 0].set_title('Image Overlap Distribution')
    axes[1, 0].axis('off')

    # 4. Key metrics text
    metrics_text = (
        f"Input Images: {report['input_image_count']}\n"
        f"Removed Images: {report['removed_image_count']}\n"
        f"Residual RMSE: {report['residual_rmse']:.2f} px\n"
        f"Tie Points: {report['tie_point_count']:,}\n"
        f"Scene Area: {report['scene_area']/10000:.1f} hectares\n"
        f"Ground Resolution: {report['scene_gsd']*100:.1f} cm"
    )
    axes[1, 1].text(0.1, 0.5, metrics_text, fontsize=12,
                    verticalalignment='center', family='monospace')
    axes[1, 1].set_title('Key Quality Metrics')
    axes[1, 1].axis('off')

    plt.tight_layout()
    plt.savefig('quality_report_summary.png', dpi=150)
    plt.show()
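
Usage example (assuming the default report directory layout described above):

visualize_quality_report('report')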

Automated Quality Check

Quality Threshold Configuration

import numpy as np

QUALITY_THRESHOLDS = {
    'excellent': {
        'residual_rmse': 0.5,
        'removal_rate': 0.02,
        'tie_points_per_image': 1000,
        'pos_diff_avg': 0.1
    },
    'good': {
        'residual_rmse': 1.0,
        'removal_rate': 0.05,
        'tie_points_per_image': 500,
        'pos_diff_avg': 0.5
    },
    'acceptable': {
        'residual_rmse': 2.0,
        'removal_rate': 0.10,
        'tie_points_per_image': 200,
        'pos_diff_avg': 1.0
    }
}

def assess_quality(report):
    """Automatically assess reconstruction quality level"""

    # Calculate metrics
    removal_rate = report['removed_image_count'] / report['input_image_count']
    tie_points_per_image = report['tie_point_count'] / report['input_image_count']
    pos_diff_avg = np.mean([item['pos_diff'] for item in report['image_pos_diff']])

    # Assess level from strictest to most lenient (dict order is preserved)
    for level, thresholds in QUALITY_THRESHOLDS.items():
        if (report['residual_rmse'] <= thresholds['residual_rmse'] and
                removal_rate <= thresholds['removal_rate'] and
                tie_points_per_image >= thresholds['tie_points_per_image'] and
                pos_diff_avg <= thresholds['pos_diff_avg']):
            return level

    return 'poor'

Report Integration Applications

Batch Processing Quality Monitoring

import json
import os

import pandas as pd

def batch_quality_monitor(project_dirs):
    """Batch project quality monitoring"""

    results = []

    for project_dir in project_dirs:
        report_path = os.path.join(project_dir, 'report/report.json')

        if os.path.exists(report_path):
            with open(report_path, 'r') as f:
                report = json.load(f)

            quality_level = assess_quality(report)

            results.append({
                'project': project_dir,
                'images': report['input_image_count'],
                'area': report['scene_area'],
                'gsd': report['scene_gsd'],
                'rmse': report['residual_rmse'],
                'quality': quality_level,
                'time': report['at_time'] + report['reconstruction_time']
            })

    # Generate summary report
    df = pd.DataFrame(results)
    df.to_csv('batch_quality_report.csv', index=False)

    # Statistics
    print(f"Total projects: {len(results)}")
    print(f"Excellent: {len(df[df['quality'] == 'excellent'])}")
    print(f"Good: {len(df[df['quality'] == 'good'])}")
    print(f"Acceptable: {len(df[df['quality'] == 'acceptable'])}")
    print(f"Poor: {len(df[df['quality'] == 'poor'])}")

    return df
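
Usage example (the project paths below are placeholders):

project_dirs = ['projects/site_a', 'projects/site_b']  # placeholder paths
summary = batch_quality_monitor(project_dirs)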

Best Practices

  1. Regular Report Checks: Review quality metrics after each reconstruction
  2. Establish Baselines: Record quality metrics from typical projects as references
  3. Anomaly Alerts: Set up automated scripts to detect abnormal metrics (see the sketch after this list)
  4. Trend Analysis: Track quality metric trends over time
  5. Optimization Suggestions: Adjust capture and processing parameters based on report metrics
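
As a sketch for item 3, a minimal anomaly alert that reuses assess_quality from the automated quality check above (the alert action is just a print; wire it into your own notification channel):

import json

def alert_if_poor(report_path):
    """Print an alert when a finished reconstruction is assessed as 'poor'."""
    with open(report_path, 'r', encoding='utf-8') as f:
        report = json.load(f)
    level = assess_quality(report)  # defined in the automated quality check section
    if level == 'poor':
        print(f"ALERT: {report_path} assessed as 'poor' "
              f"(residual_rmse={report['residual_rmse']:.2f} px)")
    return level

alert_if_poor('report/report.json')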

Tip: The quality report is an important tool for evaluating and optimizing reconstruction workflows. It is recommended to integrate it into automated workflows.