Honest by Architecture
Most AI systems fail at honesty because they treat it as a prompt instruction. Tell the model not to hallucinate, and it hallucinates less — but it still hallucinates.
DigitalMe solves this differently. Honesty is not a setting. It is the architecture.
The Verification Pipeline
Every answer your digital twin produces passes through a multi-layered evidence verification pipeline. The system generates a response, then checks every citation against the source material. If a cited quote does not exist in the documents, the system rejects it. [1]
AI models sometimes attribute real content to the wrong document. A quote exists in your project write-up, but the model cites your CV instead. Rather than stripping the citation and losing valid evidence, the system searches every other source before rejecting anything. [2] Only after exhausting all sources does the system strip the citation.
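The two checks described above can be sketched as a single function. This is a minimal illustration, not the actual pipeline: it assumes documents are plain strings keyed by name, uses exact substring matching (the real system also catches paraphrased citations), and the function name and status strings are invented for the example.

```python
def verify_citation(quote: str, cited_doc: str, corpus: dict[str, str]):
    """Check a cited quote against its claimed source; if absent,
    search every other document before rejecting it."""
    # First check: does the quote appear in the document the model cited?
    if quote in corpus.get(cited_doc, ""):
        return ("verified", cited_doc)
    # The quote may be real but misattributed: search the rest of
    # the corpus before stripping anything.
    for name, text in corpus.items():
        if name != cited_doc and quote in text:
            return ("remapped", name)  # re-attribute to the correct source
    # Only after exhausting all sources is the citation rejected.
    return ("rejected", None)
```

With a quote that lives in the project write-up but was cited against the CV, the function re-maps rather than rejects, which is exactly the behavior the text describes.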
Two Layers: Awareness and Proof
The system maintains a structured index of each owner's career — skills, domains, experience, achievements, education, and certifications. This index gives the twin awareness of what exists without processing the entire corpus for every query. [3]
This separation matters. The index answers "does this person have experience with distributed systems?" The source documents answer "where, when, and what did they do?" Awareness is fast. Proof is precise. The twin uses both.
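The awareness/proof split can be sketched in a few lines. Everything here is hypothetical: the real index schema is richer than a pair of sets, and the function names and data are invented for illustration. The point is the shape: a cheap lookup answers "does it exist?", and only a hit triggers the more expensive pass over the documents for exact passages.

```python
# Hypothetical index and corpus; the real schema is richer.
INDEX = {
    "skills": {"python", "distributed systems", "kubernetes"},
    "domains": {"fintech"},
}

DOCUMENTS = {
    "project-writeup": "Designed and operated distributed systems at scale. "
                       "Led the migration to Kubernetes.",
}

def has_experience(topic: str) -> bool:
    """Awareness: a fast index lookup, no corpus scan."""
    return topic in INDEX["skills"] or topic in INDEX["domains"]

def find_evidence(topic: str) -> list[tuple[str, str]]:
    """Proof: only consulted after the index says the topic exists.
    Returns (document, passage) pairs that mention the topic."""
    if not has_experience(topic):
        return []
    return [(name, sentence.strip())
            for name, text in DOCUMENTS.items()
            for sentence in text.split(".")
            if topic in sentence.lower()]
```

A query the index cannot answer never touches the documents at all, which is what keeps awareness fast while proof stays precise.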
Catching What Instructions Miss
Post-generation verification is the final guard. Every generated document — CVs, cover letters, LinkedIn messages — is checked against the raw corpus after the model produces it.
Skills that appear in the output but not in the corpus get removed. The system distinguishes between fabricated skills — not in the corpus at all — and inflated ones — present but lacking strong documentation. [4] Both are flagged, but the distinction helps the owner understand where their corpus falls short versus where the model overreached.
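The fabricated/inflated distinction reduces to counting evidence. The sketch below is an assumption-laden simplification: it treats the corpus as one string, uses raw mention counts as the evidence signal, and picks an arbitrary threshold of two mentions for "well documented". The real system's evidence measure is presumably more nuanced.

```python
def audit_skills(output_skills, corpus_text, evidence_threshold=2):
    """Classify skills the model emitted against the raw corpus.
    The threshold of 2 mentions is an illustrative choice."""
    report = {"kept": [], "inflated": [], "fabricated": []}
    text = corpus_text.lower()
    for skill in output_skills:
        mentions = text.count(skill.lower())
        if mentions == 0:
            report["fabricated"].append(skill)  # invented: strip it
        elif mentions < evidence_threshold:
            report["inflated"].append(skill)    # real, but weakly documented
        else:
            report["kept"].append(skill)
    return report
```

An inflated skill survives with a flag telling the owner to strengthen the corpus; a fabricated one is removed outright.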
The system verifies its own work against the original source material, not against its own generated output. This breaks the circular trust that plagues most AI self-checks.
The Owner's Role
The architecture enforces honesty. But the owner chooses to be honest too.
DigitalMe uses the metaphor of a garden. You plant new documents as your career grows. You prune skills that have gone stale. You calibrate — marking abilities as overstated, learning, or something you want to avoid entirely. [5]
When a visitor runs a job-fit analysis, the system weighs the owner's self-assessment against the evidence. Skills flagged as stale carry less weight. Dealbreakers surface as explicit gaps, complete with warning badges. The system presents the owner honestly because the owner has chosen to be honest.
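One way to picture that weighting, as a rough sketch. The calibration labels come from the text above; the numeric weights, the scoring formula, and the function name are all invented for illustration, not the product's actual model.

```python
CALIBRATION_WEIGHT = {   # illustrative weights, not the real ones
    "confident": 1.0,
    "learning": 0.6,
    "stale": 0.3,        # flagged stale, so it carries less weight
    "avoid": 0.0,        # dealbreaker: surfaced, never silently scored
}

def job_fit(required, owner_skills):
    """owner_skills maps skill -> calibration label.
    Returns (score, missing skills, dealbreakers)."""
    score, gaps, dealbreakers = 0.0, [], []
    for skill in required:
        label = owner_skills.get(skill)
        if label is None:
            gaps.append(skill)            # not in the corpus at all
        elif label == "avoid":
            dealbreakers.append(skill)    # explicit gap with a warning badge
        else:
            score += CALIBRATION_WEIGHT[label]
    return score / len(required), gaps, dealbreakers
```

Note that a dealbreaker contributes nothing to the score and is reported separately: the mismatch is surfaced to both sides, not averaged away.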
A corpus audit tool lets owners inspect their garden's health — identifying redundancy, flagging low-value entries, suggesting which documents to merge or expand. [6] The garden grows. The owner keeps it sharp.
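An audit like that could be approximated with simple heuristics. This sketch assumes plain-text documents and uses word-overlap (Jaccard) similarity for redundancy and a word-count floor for low-value entries; both thresholds and the function name are arbitrary stand-ins for whatever the real tool does.

```python
def audit_corpus(documents, redundancy_threshold=0.5, min_words=20):
    """Flag near-duplicate document pairs (merge candidates) and
    very short entries (low value). Thresholds are illustrative."""
    tokens = {name: set(text.lower().split()) for name, text in documents.items()}
    low_value = [name for name, words in tokens.items() if len(words) < min_words]
    merge_candidates = []
    names = sorted(tokens)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            # Jaccard similarity: shared vocabulary over combined vocabulary.
            overlap = len(tokens[a] & tokens[b]) / len(tokens[a] | tokens[b])
            if overlap >= redundancy_threshold:
                merge_candidates.append((a, b))
    return {"low_value": low_value, "merge_candidates": merge_candidates}
```

Two write-ups covering the same project would share most of their vocabulary and get flagged as a merge pair, prompting the owner to consolidate them.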
Trust as a Feature
Gaps are not hidden. When the corpus falls short of a role's requirements, the system says so — to both sides. The visitor sees the gaps. The owner sees the same gaps, along with tools to address them.
This transparency is the product. An AI that hides shortcomings is an AI no one can trust. DigitalMe earns trust by showing you exactly what it knows, what it does not know, and where the evidence stops.
How visitors and owners each use that honesty is the next part of the story →
Try DigitalMe
It is early days, and I am genuinely interested in feedback. There are two ways to try it:
Build your own — join the waitlist. I want to understand who finds this valuable and why.
Talk to mine — visit neuralstorm.io, open the widget, and sign in with LinkedIn. Then DM me on LinkedIn — let me know who you are and what brings you here, and I'll grant you access.
Notes
1. Multiple verification strategies work in combination, catching misattributed, paraphrased, and fabricated citations. This is not a single check. It is a pipeline.
2. This exhaustive search before rejection dramatically reduces false rejections. Valid content that was merely misattributed gets re-mapped to the correct source.
3. The index is a map; the documents are the territory. The map tells the system what to look for. The documents provide the exact quotes and passages that prove it.
4. The distinction matters. An inflated skill may still be real. It just needs stronger evidence in the corpus. A fabricated skill is an invention, and the system strips it.
5. Skills marked as avoid act as dealbreakers. In a job-fit analysis, they surface as explicit gaps with warning badges. The system will not quietly hide a mismatch.
6. The audit analyses the corpus for redundancy, low-value entries, and merge opportunities. It helps owners keep their career documents focused and productive.