Got it — let’s turn that abstract “cluster & snap” idea into a concrete, buildable workflow. I’ll assume your triangles are rigid, planar, and drawn from a fixed library; placed tiles may deviate from the surface only within a tight tolerance (you can set this to zero if you truly forbid deviation, but then expect many infeasible cases).
- S: target surface or dense reference mesh.
- M₀: an initial triangle mesh on S (good quality, roughly uniform).
- 𝓛 = {L₁…L_K}: your fixed triangle library (planar; each with ordered vertices).
- ε_fit, ε_seam: max fitting & seam tolerances (can be zero if you insist).
- cost weights: λ_fit, λ_seam, λ_count (and optional λ_curv, λ_twist).
Purpose: a face-to-face comparison independent of rigid motion/scale.
For each triangle Fᵢ of M₀:
- edge-length signature: sort edges:
ℓᵢ = sort([e1,e2,e3]) / (e1+e2+e3) (scale-invariant).
- angle vector:
αᵢ = sort([α1,α2,α3]).
- aspect/area:
ARᵢ = longest/shortest edge ratio, and area Aᵢ (if you care about exact size matching).
- local frame: orthonormal basis at Fᵢ (use face normal + principal curvature dir).
- curvature context (optional): mean or principal curvature magnitudes averaged over the face (helps decide where “exactness” is harder).
- boundary flags: whether an edge borders a hole/boundary (affects seam priorities).
Create a feature vector φ(Fᵢ) = [ℓᵢ, αᵢ, ARᵢ, Aᵢ, κ₁, κ₂, …].
Do the same for each library triangle L_k:
- Precompute φ(L_k) and canonicalize orientation (clockwise vertex order, fixed “up”).
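As a concrete sketch, the per-face features above (normalized edge lengths, sorted angles, aspect ratio, area) can be computed like this; `triangle_features` is a hypothetical helper name, and the curvature terms are omitted since they need the surrounding surface:

```python
import numpy as np

def triangle_features(p0, p1, p2):
    """Scale-invariant shape features for one triangle (a sketch of phi;
    curvature context is omitted here, as it needs the reference surface)."""
    P = np.array([p0, p1, p2], dtype=float)
    # edge lengths, each opposite the vertex with the same index
    e = np.array([np.linalg.norm(P[1] - P[2]),
                  np.linalg.norm(P[2] - P[0]),
                  np.linalg.norm(P[0] - P[1])])
    ell = np.sort(e) / e.sum()          # normalized edge-length signature
    # interior angles via the law of cosines, then sorted
    a, b, c = e
    angles = np.array([
        np.arccos(np.clip((b*b + c*c - a*a) / (2*b*c), -1, 1)),
        np.arccos(np.clip((a*a + c*c - b*b) / (2*a*c), -1, 1)),
        np.arccos(np.clip((a*a + b*b - c*c) / (2*a*b), -1, 1)),
    ])
    alpha = np.sort(angles)
    AR = e.max() / e.min()              # aspect ratio
    area = 0.5 * np.linalg.norm(np.cross(P[1] - P[0], P[2] - P[0]))
    return np.concatenate([ell, alpha, [AR, area]])
```

The same function works unchanged for library triangles (give their 2D vertices a zero z-coordinate).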
Define a distance between shapes (tune weights):
d_shape(Fᵢ, L_k) = wℓ‖ℓᵢ – ℓ_k‖₂ + wα‖αᵢ – α_k‖₂ + wAR|ARᵢ – AR_k|
+ (optional) wA |Aᵢ – A_k| (set wA high if you must match absolute size)
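A minimal sketch of that weighted distance, assuming a flat feature layout φ = [ℓ(3), α(3), AR, A] and purely illustrative default weights (not recommendations):

```python
import numpy as np

def d_shape(phi_f, phi_l, w_ell=1.0, w_alpha=1.0, w_ar=0.5, w_area=0.0):
    """Weighted shape distance between a face and a library triangle.
    Assumed (hypothetical) layout: phi = [ell(3), alpha(3), AR, A]."""
    d = (w_ell   * np.linalg.norm(phi_f[0:3] - phi_l[0:3])
       + w_alpha * np.linalg.norm(phi_f[3:6] - phi_l[3:6])
       + w_ar    * abs(phi_f[6] - phi_l[6]))
    if w_area > 0:                  # set high if absolute size must match
        d += w_area * abs(phi_f[7] - phi_l[7])
    return d
```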
For each face Fᵢ:
- compute top-m nearest library triangles by d_shape;
- for each candidate L_k, try to find a rigid similarity Tᵢk (rotation + translation, and optionally uniform scale s) that aligns L_k’s 2D triangle onto Fᵢ’s 3 vertex positions in 3D (use Procrustes/Kabsch on the three points).
- measure fit error:
- vertex deviation:
E_fitᵢk = max_v ‖Tᵢk(v_Lk) – v_Fi‖
- (optional) normal consistency & orientation (reject if flipped).
- keep (i,k) only if
E_fitᵢk ≤ ε_fit and, if scale is disallowed, s≈1.
You now have a sparse candidate set 𝒞 = { (i,k,Tᵢk,E_fitᵢk) }.
note: if ε_fit=0 and exact sizes differ, many faces will have no candidates; that’s expected.
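The Procrustes/Kabsch step on three points is small enough to spell out. This sketch fits a proper rotation plus translation only (scale fixed at 1) and returns the max vertex deviation used as E_fit:

```python
import numpy as np

def rigid_fit(src, dst):
    """Kabsch on 3 points: rotation R and translation t mapping library
    triangle vertices `src` (3x3) onto face vertices `dst` (3x3).
    Returns (R, t, E_fit) with E_fit = max vertex deviation."""
    src = np.asarray(src, float); dst = np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)              # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # force a proper rotation (det = +1); reject mirrored fits here
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    mapped = src @ R.T + t
    E_fit = np.linalg.norm(mapped - dst, axis=1).max()
    return R, t, E_fit
```

If you allow a uniform scale, estimate s from the centered point norms (Umeyama-style) before computing t, and check s ≈ 1 when scale is disallowed.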
Continuity means shared edges must match exactly after snapping.
For every mesh edge e=(i,j) (faces Fᵢ and Fⱼ meet):
- For each kept pair (i,k) and (j,ℓ), compute post-snap seam error with their transforms:
- edge gap: end-point distance on the shared edge,
- edge length mismatch (if similarity scale allowed),
- dihedral/twist mismatch (if your library triangles have thickness/flange).
Combine into:
E_seam = wgap · max(‖pᵢ¹–pⱼ¹‖, ‖pᵢ²–pⱼ²‖) + wdihedral · |θᵢ – θⱼ| (+ wlen · |edge-length mismatch|, if similarity scale is allowed)
Flag a pair as compatible on edge e if E_seam ≤ ε_seam.
Store compatibility as adjacency lists per edge to prune the search later.
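For the edge-gap part of the seam error, a sketch (dihedral and length-mismatch terms omitted for brevity; `edge_i`/`edge_j` are the two library-frame vertices each tile contributes to the shared edge, and the transforms are the (R, t) pairs from the per-face fits):

```python
import numpy as np

def seam_gap(Ri, ti, Rj, tj, edge_i, edge_j):
    """Max endpoint distance across a shared edge after snapping,
    trying both vertex orderings since the two faces may list the
    shared edge's endpoints in opposite order."""
    pi = np.asarray(edge_i, float) @ np.asarray(Ri).T + ti
    pj = np.asarray(edge_j, float) @ np.asarray(Rj).T + tj
    same = max(np.linalg.norm(pi[0] - pj[0]), np.linalg.norm(pi[1] - pj[1]))
    swap = max(np.linalg.norm(pi[0] - pj[1]), np.linalg.norm(pi[1] - pj[0]))
    return min(same, swap)
```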
“Cluster by shape” here means: group faces that are intrinsically similar so they’re likely to share the same library triangle. This reduces combinatorics and promotes mold reuse.
- Build a face–face graph with weights
w_ij = exp(–‖φ(Fᵢ)–φ(Fⱼ)‖²/σ²) (connect k-NN in geodesic proximity).
- Run k-medoids (not k-means; medoids choose real exemplars) for a chosen K’ (upper bound on distinct triangles you’ll allow). Constraint: contiguous clusters if you want contiguous reuse.
- For each cluster C, restrict candidates to the intersection (or majority) of feasible library IDs seen in 𝒞 within C. This cuts the assignment space drastically.
- You can also treat this as a facility-location subproblem: each chosen library ID incurs a fixed cost c_k; clustering minimizes the within-cluster d_shape plus ∑ chosen c_k.
This is all “by shape.” No geometry changed yet; we only restrict which tiles may appear where.
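If you don’t want an external dependency for the k-medoids step, a plain alternating-medoid sketch over a precomputed distance matrix works at moderate mesh sizes; the contiguity constraint mentioned above is not enforced here:

```python
import numpy as np

def k_medoids(D, K, iters=50, seed=0):
    """Alternating k-medoids on an n x n distance matrix D.
    Returns (medoid indices, per-point labels)."""
    rng = np.random.default_rng(seed)
    n = D.shape[0]
    medoids = rng.choice(n, size=K, replace=False)
    for _ in range(iters):
        labels = np.argmin(D[:, medoids], axis=1)     # assign to nearest medoid
        new = medoids.copy()
        for c in range(K):
            members = np.where(labels == c)[0]
            if len(members):
                # exemplar = member minimizing total distance within the cluster
                new[c] = members[np.argmin(D[np.ix_(members, members)].sum(axis=1))]
        if np.array_equal(new, medoids):
            break
        medoids = new
    return medoids, np.argmin(D[:, medoids], axis=1)
```

Feed it D built from ‖φ(Fᵢ)–φ(Fⱼ)‖ (optionally blended with geodesic distance to favor spatially contiguous clusters).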
Decision variables:
- xᵢk ∈ {0,1}: face i uses library tile k (from its pruned candidate list).
- selecting xᵢk = 1 also commits to the specific transform Tᵢk we computed in step 2 (one per (i,k); if you kept multiple orientations per k, index them as separate candidates).
- z_k ∈ {0,1}: library tile k is used anywhere (to count molds).
- For each adjacent face pair (i,j), compatibility constraints ensure selected (i,k) and (j,ℓ) is an allowed pair on that edge.
Constraints:
(1) ∑_{k ∈ Cand(i)} xᵢk = 1 for all faces i
(2) xᵢk ≤ z_k link to mold-use variable
(3) For each edge (i,j): xᵢk + xⱼℓ ≤ 1 if (i,k) and (j,ℓ) are marked incompatible for that edge
(4) (optional) capacity or contiguity rules for clusters
Objective (tunable):
min λ_fit ∑ᵢ ∑_k xᵢk E_fitᵢk
+ λ_seam ∑_{edges (i,j)} seam_penalty over the selected pair (e.g., E_seam of the chosen (k,ℓ); to keep the objective linear, introduce pair variables u_ijkℓ = xᵢk·xⱼℓ with the standard linearization, or rely solely on the hard cuts in (3))
+ λ_count ∑_k z_k (promote library reuse)
+ (optional) λ_var ∑_k Var_over_clusters(xᵢk) (discourage peppering)
This is a 0–1 ILP if you precompute transforms and seam penalties; seam consistency is enforced via the incompatibility cuts in (3). If you allow a uniform scale per face, pre-discretize s to a few values and treat them as separate “variants” of L_k to keep it integer.
Solvers: CBC, Gurobi, CPLEX (MILP), or CP-SAT. On large meshes, solve per chart/cluster then stitch.
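To make the objective and constraints concrete without committing to a solver API, here is a toy exhaustive stand-in for the 0–1 ILP (usable only on tiny instances; a real run should go through CP-SAT or a MILP solver as noted above). The data structures are simplified and hypothetical: `cand[i]` is a list of (tile_id, E_fit) pairs per face, `incompat` holds forbidden ((i,k),(j,ℓ)) pairs stored with i < j, and seam penalties are folded into the hard cuts:

```python
from itertools import product

def brute_force_assign(cand, incompat, lam_fit=1.0, lam_count=1.0):
    """Enumerate all assignments; minimize lam_fit * sum(E_fit)
    + lam_count * (#distinct tiles used), subject to compatibility cuts.
    Returns (assignment, cost), or (None, inf) if every combination
    violates a cut — i.e., the instance is infeasible."""
    faces = sorted(cand)
    best, best_cost = None, float("inf")
    for choice in product(*(cand[i] for i in faces)):
        pick = dict(zip(faces, choice))
        # constraint (3): forbid incompatible pairs on shared edges
        if any(((i, pick[i][0]), (j, pick[j][0])) in incompat
               for a, i in enumerate(faces) for j in faces[a + 1:]):
            continue
        cost = (lam_fit * sum(e for _, e in choice)
                + lam_count * len({k for k, _ in choice}))  # mold count via z_k
        if cost < best_cost:
            best = {i: pick[i][0] for i in faces}
            best_cost = cost
    return best, best_cost
```

Constraint (1) is implicit (exactly one candidate per face is chosen from `cand[i]`), and the distinct-tile count plays the role of ∑_k z_k.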
Because a single global MILP with continuous transforms is hard, do this loop:
A. discrete step (ILP): solve for xᵢk (and z_k) with the precomputed Tᵢk; you get a watertight assignment that respects compatibility.
B. continuous step (local refinement):
- For each face i, refine Tᵢ (rigid—or similarity if allowed) by minimizing
E_fitᵢ(T) + ∑_{j∈N(i)} ρ · E_seam_edge(T, Tⱼ fixed), with a small Levenberg–Marquardt/Gauss–Newton solver.
- After all faces, update seam penalties and re-prune incompatibilities.
Repeat A↔B a few times, reducing ε_fit/ε_seam if you want to squeeze errors.
If you truly require zero deviation, set ε’s to 0 and skip B; the ILP will either find a legal assignment or report infeasible.
Triangles can be placed in 6 permutations (3 rotations × 2 mirrorings). Treat each (k,orientation) as a distinct candidate during step 2 so the ILP can pick orientations that make shared edges co-linear and vertex-matched. Precompute a compatibility table per shared edge that encodes which orientation pairs glue exactly.
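Enumerating those 6 orientations is a one-liner’s worth of code; note that reversal flips the winding (a mirrored part), so drop the reversed orderings if your fabrication process forbids mirrored tiles:

```python
def orientations(tri):
    """All 6 vertex orderings of a triangle: 3 cyclic rotations,
    each with and without reversal (the reversed ones are mirrored)."""
    v = list(tri)
    out = []
    for r in range(3):
        rot = v[r:] + v[:r]
        out.append(tuple(rot))
        out.append(tuple(reversed(rot)))   # mirrored ordering
    return out
```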
- For boundary edges, only enforce one-sided compatibility (no neighbor).
- If exact coverage is impossible with your library, allow optional “void” faces with a big penalty; this lets the solver prove near-feasibility and shows where the library fails.
- Alternatively, allow macro-tiles (small clusters of library triangles) as extra “parts” in 𝓛 to span tricky corners.
- Colored mesh by library ID (and orientation) to see clustering/reuse.
- Edge heatmap of seam error (should be ≤ ε_seam).
- A bill of materials per library triangle, and placement transforms Tᵢk.
- A feasibility report (faces with zero candidates, conflicting edges, etc.).
- Computationally: reduces the candidate sets so the ILP scales.
- Fabrication-wise: concentrates a small set of triangle molds in contiguous or visually coherent regions.
- Practically: you can set a cap like “use ≤ 12 distinct triangles” and the cluster stage will steer the ILP toward that.
If the phrase felt hand-wavy before, the actionable definition is:
Group faces whose normalized edge lengths and angles are close (in φ-space), preferably also near each other on the surface, and restrict them to share the same small subset of library triangles before solving the global seam-consistent assignment.
# inputs: mesh M0, library triangles L[1..K], tolerances eps_fit, eps_seam
features = [compute_features(face) for face in M0.faces]
Lfeat    = [compute_features(tri)  for tri in L]

# 2. candidate generation
C = {}
for i, Fi in enumerate(M0.faces):
    near = argtopm([dshape(features[i], Lfeat[k]) for k in range(K)], m=6)
    for k in near:                          # loop over shortlisted library tiles
        for orient in orientations(L[k]):
            T, Efit = rigid_fit(oriented(L[k], orient), Fi)
            if Efit <= eps_fit:
                C.setdefault(i, []).append((k, orient, T, Efit))

# prune faces with no candidates early
infeasible_faces = [i for i in range(nF) if i not in C]

# 3. seam compatibility per shared edge
compat = {}
for (i, j) in M0.adjacent_face_pairs():
    for cand_i in C.get(i, []):
        for cand_j in C.get(j, []):
            Eseam = seam_error(cand_i.T, cand_j.T, shared_edge=(i, j))
            if Eseam <= eps_seam:
                compat.setdefault((i, cand_i), set()).add((j, cand_j))

# 5 ILP (sketch): x_{i,c} in {0,1}
# constraints: one candidate per face; forbid incompatible pairs; minimize fit+seam+mold_count
solution = solve_ilp(C, compat, weights)

# 6 alternating refinement, then re-solve
refine_transforms(solution.assignments)
solution = solve_ilp(C, compat, weights)
- Data prep: MeshMachine / QuadRemesh→Triangulate for clean M₀.
- Feature calc: Python/C# component to compute φ.
- Clustering: k-medoids via a Python component (scikit-like), or push to a small external script.
- Fitting: Kabsch/Procrustes in C# or Python (numpy).
- ILP: call an external solver (OR-Tools CP-SAT, Gurobi) via Python/C#; pass sparse constraints only.
- Refinement: a small Gauss-Newton that perturbs per-face rigid transforms.
- QA: custom display pipeline to color by library ID and draw seam-error glyphs.
- If ε_fit = ε_seam = 0, the ILP either returns an exact, continuous assignment or declares infeasible. That’s mathematically clean and honest.
- If you allow tiny ε (e.g., 0.25–1.0 mm at panel scale), the alternating refinement makes solutions far more likely while still being fabrication-tight.
If you want, I can draft the ILP variable/constraint matrices for your specific library (how many triangles? do you allow mirrored parts?) so you can plug straight into OR-Tools.