Methodology

Signal Deployment Methodology

Four-phase operational intelligence deployment. Deterministic perception, human review gates, Aletheia audit trail. Repeatable. Measurable.

Prerequisites

Signal deploys in the client's own environment: no git, no cloud services, no external dependencies. The only requirements:

  • Access to the client environment (local filesystem or SSH)
  • PDF generation capability for deliverables

Deployment Lifecycle

Four phases, two human review gates, five deliverables. No phase transition without gate approval.

harvest → raw findings → Gate 1 → true findings → optimized → Gate 2 → ongoing → deliverables

Every decision traces to foundation values: truth over plausibility, structure over volume, provenance over assertion, perception over generation, deterministic before probabilistic.

Phase 0: Setup

Create the deployment record and initialize the Aletheia audit ledger. The ledger is a separate database, independent of the operational database so it can verify integrity without trusting the system it audits.

Configure the artifact harvester scope: which directories and projects to scan. The harvester automatically excludes version control, virtual environments, build artifacts, binary files, and credentials.

Phase 1: Raw Twin

The harvester is filesystem-first. No git, no cloud APIs. It perceives the documents themselves:

  • File indexing: path, size, extension, content hash (SHA-256), lines of code
  • Structural classification: role (source, doc, config, test, script, data, template) and project
  • Timestamps: first seen, last modified, last accessed
  • Import extraction: Python imports parsed and resolved to actual file paths
  • Cross-reference extraction: file path references in markdown, yaml, html resolved against repo contents
  • Dependency graph: bidirectional (what this file depends on, and what depends on it)
  • Change detection: content hash compared to previous snapshots to detect created, modified, deleted files
  • Edit frequency: computed from Signal's own observation history, not from git

Raw findings are generated across four categories. Same-day deliverable: Signal Scorecard (2-4 page PDF with stats, readiness radar, top findings).

Phase 2: True Twin

Review Gate 1. All raw findings are presented to the client for confirmation. Each finding is reviewed: confirm, reject, or defer.

  • Confirmed findings proceed to the optimized phase
  • Rejected findings are logged with reason (improves future perception)
  • Deferred findings are held for re-review

The system proposes. The client decides. No phase transition without gate approval.
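The gate mechanics reduce to a three-way routing of findings by client decision. A minimal sketch, assuming a `(decision, reason)` tuple per finding ID; none of these names are Signal's published API:

```python
CONFIRM, REJECT, DEFER = "confirm", "reject", "defer"

def resolve_gate(findings, decisions):
    """Apply client decisions at a review gate. Confirmed findings
    advance, rejected ones are logged with a reason, and anything
    undecided or deferred is queued for re-review."""
    advanced, rejected_log, deferred = [], [], []
    for f in findings:
        decision, reason = decisions.get(f["id"], (DEFER, "no decision recorded"))
        if decision == CONFIRM:
            advanced.append(f)
        elif decision == REJECT:
            rejected_log.append({"finding": f["id"], "reason": reason})
        else:
            deferred.append(f)
    return advanced, rejected_log, deferred
```

Defaulting missing decisions to deferral encodes the gate rule directly: nothing advances without an explicit confirmation.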

Phase 3: Optimized Twin

For each confirmed finding, an actionable recommendation is generated. Principles:

  • Target root cause, not symptoms (multiple findings may share one recommendation)
  • Scope to minimum intervention needed to close the gap
  • Categorize by effort: quick-win, standard, strategic

Review Gate 2. Same process as Gate 1 but for recommendations.

Deliverables: Signal Audit Report (8-12 pages, week 1) and Signal Build Report (20-30 pages, weeks 2-3) with full findings, recommendations, cost model, and implementation roadmap.

Phase 4: Ongoing

Continuous monitoring on cadence (weekly recommended). Each harvest:

  • Computes content hashes and compares to previous snapshots
  • Detects created, modified, and deleted files
  • Rebuilds the full cross-reference dependency graph
  • Records file locations for multi-host divergence detection
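The created/modified/deleted classification above follows directly from comparing two path-to-hash snapshots. A sketch, assuming snapshots are plain dicts (Signal's real storage may differ):

```python
def diff_snapshots(previous: dict, current: dict):
    """Compare two {path: content_hash} snapshots and classify changes.
    Created: in current only. Deleted: in previous only.
    Modified: in both, but the content hash changed."""
    created = sorted(set(current) - set(previous))
    deleted = sorted(set(previous) - set(current))
    modified = sorted(p for p in set(previous) & set(current)
                      if previous[p] != current[p])
    return created, modified, deleted
```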

Drift signals include new files without structural role assignment, modified files that reintroduce closed gaps, deleted files that break the dependency graph, and SOP violations.

Weekly intelligence digest. Monthly re-scoring with full readiness assessment and cost model comparison.

Deliverable Suite

Deliverable     Timing      Pages   Purpose
Scorecard       Same day    2-4     Health snapshot, first impression
Audit Report    Week 1      8-12    Full findings and recommendations
Build Report    Weeks 2-3   20-30   Comprehensive analysis with cost model
Playbook        Day 1       ~8      Client team introduction, rollout guide
AI Policy       Day 1       ~8      Data handling, network isolation, legal language

Perception Layer

The harvester captures per-file metadata from the filesystem. Everything is deterministic. No LLM in the perception layer.

Field             Source                        Description
content_hash      SHA-256                       Detects any content change
role              Deterministic classification  source, doc, config, test, script, data, template
first_seen        Signal observation history    When Signal first perceived this file
imports           Python import parsing         Full dotted module paths, resolved to file paths
references_files  Resolved cross-references     Actual file paths this file depends on
referenced_by     Reverse dependency graph      Files that depend on this file
edit_frequency    artifact_snapshots            Content hash changes over 30- and 90-day windows
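Role classification is deterministic: a file's path and extension map to exactly one role with no model in the loop. The rule table below is hypothetical, written only to show the shape of such a classifier; Signal's actual rules are not published:

```python
from pathlib import Path

def classify_role(path: str) -> str:
    """Assign one of Signal's structural roles from path and extension.
    The specific rules here are illustrative assumptions."""
    p = Path(path)
    name, suffix = p.name.lower(), p.suffix.lower()
    if name.startswith("test_") or "/tests/" in path.replace("\\", "/"):
        return "test"
    if suffix in {".md", ".rst", ".txt"}:
        return "doc"
    if suffix in {".yaml", ".yml", ".toml", ".ini", ".cfg"}:
        return "config"
    if suffix in {".j2", ".jinja", ".tmpl"}:
        return "template"
    if suffix in {".sh", ".bat", ".ps1"}:
        return "script"
    if suffix in {".py", ".js", ".ts", ".go", ".rs"}:
        return "source"
    return "data"  # arbitrary fallback for this sketch
```

Because the mapping is a pure function of the path, re-running it on the same file always yields the same role, which is what makes the perception layer auditable.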

Optional git enrichment adds authorship data (authors, commit count, sole author detection) when git is available. Most client environments will not have git. Signal functions fully without it.
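The optional nature of the enrichment can be expressed as a function that degrades to `None` when git is absent. A sketch under stated assumptions: the function name and return shape are invented here, and only standard `git log` is invoked:

```python
import subprocess
from pathlib import Path

def git_authors(repo: Path, file: str):
    """Return the set of commit authors for a file, or None when the
    environment has no git repo. Signal proceeds fully without this."""
    if not (repo / ".git").is_dir():
        return None  # no repo: skip enrichment entirely
    try:
        out = subprocess.run(
            ["git", "-C", str(repo), "log", "--format=%an", "--", file],
            capture_output=True, text=True, check=True,
        ).stdout
    except (OSError, subprocess.CalledProcessError):
        return None  # git missing or failed: degrade gracefully
    return {line for line in out.splitlines() if line}
```

A caller can then treat `len(authors) == 1` as sole-author evidence, but only when the enrichment actually ran.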

Finding Categories

Key Person Dependency

Knowledge or capability concentrated in a single person. Detected by sole-author analysis, bus factor = 1, no cross-training evidence.

Scattered Knowledge

Information spread across too many locations to be reliable. Detected by file distribution analysis, duplicate content, inconsistent naming.

Process Gap

Operational activity without documented procedure. Detected by missing SOPs for active workflows, undocumented deployment steps.

Undocumented Dependency

System dependency not recorded or monitored. Detected by cross-project imports without documentation, orphaned references to deleted files.
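One detector behind this category, finding orphaned references, is simple to state: any cross-reference whose target file no longer exists. A sketch assuming the `references_files` data from the perception layer is available as a dict:

```python
def orphaned_references(references: dict, existing: set):
    """Find cross-references pointing at files that no longer exist.
    `references` maps source path to the list of paths it references;
    `existing` is the current set of known file paths."""
    return sorted(
        (src, target)
        for src, targets in references.items()
        for target in targets
        if target not in existing
    )
```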

Severity levels:

  • Critical: Single point of failure with no mitigation. Immediate operational risk.
  • High: Significant operational risk with limited mitigation. Likely to cause disruption within 6 months.
  • Medium: Operational inefficiency or moderate risk. Will compound over time.

Aletheia Audit Trail

Two databases, one provenance chain. The operational database holds all state. Each deployment has a separate Aletheia ledger. The ledger is independent. It can verify operational database integrity without trusting it.

Why separate? Provenance over assertion. The auditor must be independent of the system it audits. With separate databases, modifying operational state without a corresponding ledger entry creates a detectable discrepancy.

Every state change automatically produces a hash-chained witness entry:

Event                 What It Records
Deployment creation   Client, initiative, creation metadata
Finding logged        Finding ID, category, severity, content hash
Review gate created   Gate phase and type
Gate resolved         Decision, reviewer, confirmation counts
Phase advanced        Phase transition (from and to)

Each entry includes a SHA-256 hash of the operational record. To verify integrity: re-hash the record and compare against the ledger entry. A mismatch means the record was modified after the fact.

The hash chain can be verified at any time without echology involvement. Each entry's previous hash equals the prior entry's chain hash. Full chain verification is a single linear scan.
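The linear-scan verification described above can be sketched directly. The chaining rule below (each chain hash commits to the previous chain hash plus the entry's record hash, with an all-zeros genesis sentinel) is an assumption for illustration, not Aletheia's published format:

```python
import hashlib, json

def chain_hash(prev_hash: str, record_hash: str) -> str:
    """Hypothetical chaining rule: commit to the previous chain hash
    and this entry's record hash."""
    return hashlib.sha256((prev_hash + record_hash).encode()).hexdigest()

def verify_ledger(entries, records) -> bool:
    """Single linear scan over the ledger. `entries` is a list of
    {prev_hash, record_hash, chain_hash}; `records` the operational
    rows they witness, in the same order."""
    prev = "0" * 64  # genesis sentinel (an assumption)
    for entry, record in zip(entries, records):
        # Re-hash the operational record and compare to the witness.
        rehash = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        if rehash != entry["record_hash"]:
            return False  # record was modified after the fact
        if entry["prev_hash"] != prev:
            return False  # chain linkage broken
        if entry["chain_hash"] != chain_hash(prev, entry["record_hash"]):
            return False  # entry itself was tampered with
        prev = entry["chain_hash"]
    return True
```

Any edit to an operational record, without the matching ledger update, fails the re-hash comparison; any edit to the ledger breaks the linkage at the next entry.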

Proven Deployments

Four deployments completed using this methodology. All followed the same lifecycle.

Metric                   Value
Total findings           87
Resolved                 79
Open                     8
Overall resolution rate  91%

Lessons from 4 Deployments

  • Process is repeatable. All four deployments followed the same lifecycle. 91% overall resolution rate.
  • Aletheia must be wired from the start. Deployments with Aletheia wired from creation achieved full coverage. Late wiring results in partial audit trails.
  • Finding counts scale with complexity. More code and cross-project dependencies produce more structural gaps. This is expected and useful.

Deploy Signal on your environment

Four phases. Two human gates. Five deliverables. Your data never leaves your network.

Request a Deployment