BIM Quality Audit: Technical Case Study

ConfMatrix Team

2026-05-13

1. Introduction

In BIM projects, checking model quality is a daily task. We need to verify whether the data in our models match the requirements defined in the BIM Execution Plan (BEP).

The ConfMatrix package uses the QCCS class to manage these audits, providing statistical tests to determine whether model compliance is acceptable or deviates significantly from the targets.

2. Methodology: Priority and Vector Ordering

In ConfMatrix, the priority or severity of the categories is strictly defined by their positional index within the vectors. The first element represents the highest quality (Priority 1), and the last element represents the lowest compliance or highest deviation (Priority K).

Flexible Categorization

Users can name these categories freely (e.g., in any language or using numeric codes). The statistical engine ignores the names and performs all calculations based on the vector’s internal order.

However, we suggest a nomenclature where the initials follow a natural lexicographical order (A-B-D-I) so that R’s default sorting in plots and tables automatically aligns with the intended priority:

  1. Position 1 (High Quality): e.g., Accepted (Full compliance).
  2. Position 2: e.g., Below Standard (Minor deviations).
  3. Position 3: e.g., Discrepant (Significant issues).
  4. Position 4 (Low Quality): e.g., Incompatible (Critical failures).
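This positional convention can be checked with a few lines of base R. The labels and counts below are the ones reused in the worked example of Section 4; note that the sorting check assumes a standard locale:

```r
# Labels chosen so alphabetical order matches priority order (A-B-D-I)
labels <- c("Accepted", "Below_Standard", "Discrepant", "Incompatible")
counts <- c(36, 8, 5, 1)   # position i carries priority i

# R's default sort leaves the intended priority order untouched,
# so plots and tables line up without manual releveling
identical(sort(labels), labels)   # TRUE

# Renaming categories changes nothing statistically: the engine only
# sees positions, e.g. the observed proportions per priority slot
round(counts / sum(counts), 2)    # 0.72 0.16 0.10 0.02
```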

3. Statistical Background

We test two fundamental ideas: first, whether each discipline individually matches the target proportions defined in the BEP (per-vector goodness-of-fit tests); and second, whether the project as a whole remains compliant when all disciplines are evaluated together (the global test).

Why use the Exact Test?

Standard asymptotic tests (like Chi-squared) become unreliable when the expected count in a category is very small (a common rule of thumb is fewer than five). For example, if you expect only 1 "Incompatible" element, the Chi-squared approximation breaks down. In these cases, Exact.test() provides the mathematically correct p-value.
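To make the idea concrete, here is a minimal base-R sketch for the two-category case, where an exact multinomial test reduces to the exact binomial test. The counts and target below are hypothetical, and this is not the ConfMatrix implementation, only an illustration of the principle:

```r
# Hypothetical audit: 9 compliant elements, 1 non-compliant, 5% target
obs <- c(9, 1)
p0  <- c(0.95, 0.05)
n   <- sum(obs)

# Exact p-value: sum the probabilities of all outcomes that are
# no more likely than the observed one under the target distribution
d       <- dbinom(0:n, n, p0[2])
p_exact <- sum(d[d <= dbinom(obs[2], n, p0[2]) * (1 + 1e-7)])

# Cross-check against R's built-in exact binomial test
all.equal(p_exact, binom.test(obs[2], n, p = p0[2])$p.value)  # TRUE
```

With expected counts of 9.5 and 0.5, a chi-squared test would warn that its approximation may be incorrect; the enumeration above remains valid at any sample size.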

4. Example: Auditing a Single Discipline

library(ConfMatrix)

# 1. Setup audit data (The position in the vector marks the priority)
# Pos 1: Accepted, Pos 2: Below Std, Pos 3: Discrepant, Pos 4: Incompatible
observed_counts <- list(c(36, 8, 5, 1))
target_probs    <- list(c(0.75, 0.15, 0.08, 0.02))
compliance_labels <- c("Accepted", "Below_Standard", "Discrepant", "Incompatible")

# 2. Create the audit object
audit_single <- QCCS$new(
  Vectors = observed_counts, 
  Prob = target_probs,
  ClassNames = compliance_labels,
  Source = "Structural Model Audit",
  ID = "STR-001"
)

# 3. Run the Exact Test
exact_result <- audit_single$Exact.test(a = 0.05)
print(exact_result)
#> 
#>  Exact Test with Bonferroni Correction
#> 
#> data:  Vectors and Probabilities
#> p-value = 0.303

5. Global Audit: Multi-Discipline Analysis

In large projects, we can audit several disciplines simultaneously (Architecture, Structural, MEP) to evaluate the overall project stability.

# Data for three different disciplines
obs_arch   <- c(40, 6, 3, 1) 
obs_struct <- c(42, 5, 2, 1) 
obs_mep    <- c(30, 12, 6, 2) 

# Shared quality targets for all disciplines
project_targets <- c(0.80, 0.12, 0.06, 0.02)

# Create a multi-audit object
multi_audit <- QCCS$new(
  Vectors = list(obs_arch, obs_struct, obs_mep),
  Prob = list(project_targets, project_targets, project_targets),
  ClassNames = compliance_labels,
  ID = "BIM_GLOBAL_PROJECT"
)

# Individual Chi-squared tests (with Bonferroni correction)
individual_results <- multi_audit$Ji.test()
print(individual_results) # Results for MEP
#> 
#>  Chi-squared test with Bonferroni method
#> 
#> data:  Vectors and Probabilities
#> p-value = 0.005853

# Global Stability Test for the entire project
global_result <- multi_audit$JiGlobal.test()
print(global_result)
#> 
#>  Global Chi-squared test
#> 
#> data:  Vectors and Probabilities
#> X2 = 13.1, df = 9, p-value = 0.1581
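The printed statistics can be reproduced with base R, assuming (as the output above suggests) that the global statistic is the sum of the per-discipline chi-squared statistics with pooled degrees of freedom. chisq.test() warns here because some expected counts are below 5, which is exactly the situation Exact.test() is designed for:

```r
obs_list <- list(c(40, 6, 3, 1), c(42, 5, 2, 1), c(30, 12, 6, 2))
targets  <- c(0.80, 0.12, 0.06, 0.02)

# Per-discipline goodness-of-fit statistics and raw p-values
fits  <- lapply(obs_list, function(x) suppressWarnings(chisq.test(x, p = targets)))
raw_p <- sapply(fits, function(f) f$p.value)

# Bonferroni: multiply each p-value by the number of tests (capped at 1)
p.adjust(raw_p, method = "bonferroni")

# Global statistic: sum of X2 values, df pooled across disciplines
X2 <- sum(sapply(fits, function(f) f$statistic))   # 13.1
df <- length(obs_list) * (length(targets) - 1)     # 3 * 3 = 9
pchisq(X2, df, lower.tail = FALSE)                 # ~0.158
```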

6. Visual Interpretation

# Extract proportions for the chart
proportions <- do.call(rbind, lapply(multi_audit$Vectors, function(x) x/sum(x)))
rownames(proportions) <- c("Architecture", "Structural", "MEP")
colnames(proportions) <- multi_audit$ClassNames

# Grouped barplot (Order is preserved by vector position)
barplot(t(proportions), beside = TRUE, 
        col = c("#2c3e50", "#18bc9c", "#f39c12", "#e74c3c"),
        main = "Technical Compliance by Discipline",
        ylab = "Proportion",
        legend.text = TRUE,
        args.legend = list(x = "topright", bty = "n", cex = 0.8))

# Red dashed line shows the 80% target for 'Accepted' elements
abline(h = 0.80, col = "red", lty = 2, lwd = 1.5)

7. Conclusion

The p-value of the global analysis is 0.1581. Since it is above 0.05, we fail to reject the null hypothesis: there is no statistical evidence that the project as a whole deviates from the BEP targets, even though the individual tests flag a significant deviation in MEP (p = 0.005853).
