In BIM projects, checking model quality is a daily task: we need to verify whether the data in our models matches the requirements defined in the BIM Execution Plan (BEP). The ConfMatrix package provides the QCCS class to manage these audits, offering statistical tests that determine whether model compliance is acceptable or deviates significantly from the targets.
In ConfMatrix, the priority or severity of the categories is defined strictly by their positional index within the vectors: the first element represents the highest quality (Priority 1), and the last element represents the lowest compliance or highest deviation (Priority K).
Users can name these categories freely (e.g., in any language or using numeric codes). The statistical engine ignores the names and performs all calculations based on the vector’s internal order.
However, we suggest a nomenclature whose initials follow a natural lexicographical order (A-B-D-I), so that R's default sorting in plots and tables automatically aligns with the intended priority: Accepted, Below_Standard, Discrepant, Incompatible.
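A quick way to check that a chosen set of labels respects this convention is to sort them and confirm the order is unchanged; this is a minimal sketch using the labels from this article:

```r
# Labels whose initials (A-B-D-I) already follow lexicographical order
compliance_labels <- c("Accepted", "Below_Standard", "Discrepant", "Incompatible")

# sort() leaves them unchanged, so R's default ordering in plots and
# tables matches the intended priority (Priority 1 .. Priority K)
identical(sort(compliance_labels), compliance_labels)
#> [1] TRUE
```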
We test two fundamental ideas: whether a single audit matches its target distribution, and whether several audits taken together remain stable across the project.

Standard asymptotic tests (such as the Chi-squared test) fail when a category contains very few elements. For example, if you expect only one "Incompatible" element, the standard test is unreliable. In these cases, Exact.test() provides the mathematically correct p-value.
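To see why, compute the expected counts for the audit used below (50 elements against the BEP targets). Base R's own chisq.test() flags the problem; this pre-check sketch is independent of ConfMatrix:

```r
observed <- c(36, 8, 5, 1)
targets  <- c(0.75, 0.15, 0.08, 0.02)

# Expected counts under the BEP targets
expected <- sum(observed) * targets
expected
#> [1] 37.5  7.5  4.0  1.0

# Two categories fall below the usual threshold of 5, so the
# asymptotic Chi-squared approximation is unreliable here
any(expected < 5)
#> [1] TRUE

# chisq.test() warns about this itself:
# "Chi-squared approximation may be incorrect"
res <- chisq.test(observed, p = targets)
```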
library(ConfMatrix)
# 1. Setup audit data (The position in the vector marks the priority)
# Pos 1: Accepted, Pos 2: Below Std, Pos 3: Discrepant, Pos 4: Incompatible
observed_counts <- list(c(36, 8, 5, 1))
target_probs <- list(c(0.75, 0.15, 0.08, 0.02))
compliance_labels <- c("Accepted", "Below_Standard", "Discrepant", "Incompatible")
# 2. Create the audit object
audit_single <- QCCS$new(
Vectors = observed_counts,
Prob = target_probs,
ClassNames = compliance_labels,
Source = "Structural Model Audit",
ID = "STR-001"
)
# 3. Run the Exact Test
exact_result <- audit_single$Exact.test(a = 0.05)
print(exact_result)
#>
#> Exact Test with Bonferroni Correction
#>
#> data: Vectors and Probabilities
#> p-value = 0.303

Since 0.303 is above 0.05, the observed deviations in this audit are compatible with the BEP targets. In large projects, we can audit several disciplines simultaneously (Architecture, Structural, MEP) to evaluate overall project stability.
# Data for three different disciplines
obs_arch <- c(40, 6, 3, 1)
obs_struct <- c(42, 5, 2, 1)
obs_mep <- c(30, 12, 6, 2)
# Shared quality targets for all disciplines
project_targets <- c(0.80, 0.12, 0.06, 0.02)
# Create a multi-audit object
multi_audit <- QCCS$new(
Vectors = list(obs_arch, obs_struct, obs_mep),
Prob = list(project_targets, project_targets, project_targets),
ClassNames = compliance_labels,
ID = "BIM_GLOBAL_PROJECT"
)
# Individual Chi-squared tests (with Bonferroni correction)
individual_results <- multi_audit$Ji.test()
print(individual_results) # Results for MEP
#>
#> Chi-squared test with Bonferroni method
#>
#> data: Vectors and Probabilities
#> p-value = 0.005853
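As a cross-check, the per-discipline p-values can be reproduced with base R's chisq.test(). This sketch assumes Ji.test() applies a standard goodness-of-fit test to each audit, with the Bonferroni correction expressed as comparing each p-value against alpha divided by the number of audits:

```r
disciplines <- list(
  Architecture = c(40, 6, 3, 1),
  Structural   = c(42, 5, 2, 1),
  MEP          = c(30, 12, 6, 2)
)
targets <- c(0.80, 0.12, 0.06, 0.02)

# Raw goodness-of-fit p-values per discipline
# (chisq.test() warns because some expected counts are below 5)
p_raw <- sapply(disciplines, function(x)
  suppressWarnings(chisq.test(x, p = targets)$p.value))
# MEP reproduces the value reported above, p ~ 0.005853

# Bonferroni: compare each p-value against alpha / number of audits;
# only MEP deviates significantly after correction
alpha <- 0.05
p_raw < alpha / length(disciplines)
```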
# Global Stability Test for the entire project
global_result <- multi_audit$JiGlobal.test()
print(global_result)
#>
#> Global Chi-squared test
#>
#> data: Vectors and Probabilities
#> X2 = 13.1, df = 9, p-value = 0.1581

# Extract proportions for the chart
proportions <- do.call(rbind, lapply(multi_audit$Vectors, function(x) x/sum(x)))
rownames(proportions) <- c("Architecture", "Structural", "MEP")
colnames(proportions) <- multi_audit$ClassNames
# Grouped barplot (Order is preserved by vector position)
barplot(t(proportions), beside = TRUE,
col = c("#2c3e50", "#18bc9c", "#f39c12", "#e74c3c"),
main = "Technical Compliance by Discipline",
ylab = "Proportion",
legend.text = TRUE,
args.legend = list(x = "topright", bty = "n", cex = 0.8))
# Red dashed line shows the 80% target for 'Accepted' elements
abline(h = 0.80, col = "red", lty = 2, lwd = 1.5)

The p-value of the global analysis is 0.1581. Since it is above 0.05, we do not reject the hypothesis of compliance: the project as a whole is statistically consistent with the BEP, even if minor deviations exist in specific disciplines.
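The reported global result can also be sanity-checked with base R: three audits of four categories each contribute 3 x (4 - 1) = 9 degrees of freedom, and the upper tail probability of the printed statistic matches the reported p-value.

```r
# Global statistic and degrees of freedom from the output above
X2 <- 13.1
df <- 3 * (4 - 1)  # 3 audits x (4 categories - 1)

# Upper-tail probability of the Chi-squared distribution
pchisq(X2, df = df, lower.tail = FALSE)
# ~ 0.1581, matching the reported global p-value
```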