FAQ

Things people ask when they first hear about Parseltongue.

You can’t eliminate hallucinations. That’s impossible!

To be precise, we claim to eliminate facts and claims that are ungrounded or detached from consequences, and to automatically find the ones that produce contradictions. We think it’s reasonable to call such things “hallucinations.”

Note that we don’t claim to eliminate lies — a lie can be grounded in facts, or present them incompletely. We call such cases “deception,” and they are out of scope for Parseltongue.

As for impossibility — eliminating all kinds of deception is indeed impossible. Eliminating hallucinations, as we prove by existence, is not. Moreover, it’s technically impossible to provide more guarantees than Parseltongue gives. It directly touches the edge of impossibility — nihil supernum, with only Gödel and Halting above us.

Okay, why the hell a LISP-like syntax? Do you like parentheses that much ((0)_(0))?

We needed a homoiconic language — meaning data and code are represented as the same data structure. Parseltongue operates by literally rewriting itself: meta-programming isn’t a library but the core structural property that makes everything work. During execution it modifies itself into enormous programs and reduces back to answers.

We could get this property from a few syntaxes — LISP is the easiest to write a tokenizer for. If you’d like, you can write an adapter for any other homoiconic language — it should work since the engine only cares about programs being lists internally; externally they can be translated.
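The “programs are lists” property can be illustrated without Parseltongue itself. Below is a minimal Python sketch — not the actual engine, just an assumption-laden toy — showing why homoiconicity matters: when code is a nested list, a rewrite pass is just ordinary list manipulation, and the rewritten list is immediately runnable again.

```python
# Toy illustration of homoiconicity (NOT Parseltongue's engine):
# programs are plain nested lists, so a program can rewrite another
# program with ordinary list operations before evaluating it.

def evaluate(expr):
    """Evaluate a tiny Lisp-like expression represented as nested lists."""
    if isinstance(expr, (int, float)):
        return expr
    op, *args = expr
    values = [evaluate(a) for a in args]
    if op == "+":
        return sum(values)
    if op == "*":
        result = 1
        for v in values:
            result *= v
        return result
    raise ValueError(f"unknown operator: {op}")

def rewrite(expr):
    """A rewrite pass: since code is data, we can transform it before
    running it. Here we expand (double x) into (* 2 x)."""
    if not isinstance(expr, list):
        return expr
    if expr[0] == "double":
        return ["*", 2, rewrite(expr[1])]
    return [expr[0]] + [rewrite(a) for a in expr[1:]]

program = ["+", ["double", 3], 4]   # (+ (double 3) 4)
expanded = rewrite(program)          # ["+", ["*", 2, 3], 4]
print(expanded, "=>", evaluate(expanded))
```

The same mechanism scales up: an engine that rewrites list-shaped programs into larger list-shaped programs and reduces them back to answers needs no separate metaprogramming layer — the rewriter and the programs share one representation.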

Returning to hallucinations — what if an LLM says 2+2 = 5? How do you detect that automatically?

First, the LLM must quote the source it took the claim from — without a quote, it’s automatically a hallucination. It can still pick a wrong quote, so we also run structural checks using the facts the LLM provides in its derivations. A rule like 2+2 = 5 will conflict with many other rules when substituted — and the engine detects this.

If a fact doesn’t interact with anything at all, it’s hallucinated by our classification (we have a separate warning for that).
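The two checks above can be sketched together in a few lines of Python. This is a hedged toy (hypothetical names, not Parseltongue’s API): a claimed arithmetic fact is substituted into other rules, contradictions are collected, and a claim that interacts with no rule at all is flagged as ungrounded.

```python
# Hypothetical sketch of the structural check, NOT Parseltongue's API:
# substitute a claimed fact into known rules and look for contradictions;
# a claim that touches no rule at all is flagged as ungrounded.

def check_claim(claim, rules):
    """claim: ((a, op, b), claimed_value), e.g. ((2, '+', 2), 5).
    rules: name -> callable deriving (derived_value, claimed_value)
    pairs from the claim, or None if the rule does not apply."""
    (a, op, b), v = claim
    interactions = 0
    conflicts = []
    for name, rule in rules.items():
        result = rule(a, op, b, v)
        if result is None:
            continue  # this rule does not interact with the claim
        interactions += 1
        derived, expected = result
        if derived != expected:
            conflicts.append(name)
    if interactions == 0:
        return "ungrounded: claim interacts with nothing"
    if conflicts:
        return f"contradiction via {conflicts}"
    return "consistent"

rules = {
    # commutativity: a + b must equal b + a under the claimed value
    "commutativity": lambda a, op, b, v: (b + a, v) if op == "+" else None,
    # successor rule: a + b must equal a + (b - 1) + 1
    "successor": lambda a, op, b, v: (a + (b - 1) + 1, v) if op == "+" else None,
}

print(check_claim(((2, "+", 2), 5), rules))  # flags a contradiction
print(check_claim(((2, "+", 2), 4), rules))  # consistent
```

The point is that a single wrong fact rarely stays isolated: once substituted into the web of other rules, it collides with them, and the collisions are mechanically detectable.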

For high-stakes cases, we also recommend blinded passes in the LLM module — multiple independent extraction attempts, which are very hard to complete consistently when each pass faces the engine’s deterministic checks. Errors are a signal: if you see them, the situation is complex for the LLM and it’s trying to cut corners.
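The idea of blinded passes can be sketched as follows — a hypothetical interface, not the actual module: the same extraction is requested several times independently, the majority answer is kept, and any disagreement between passes is surfaced as an error signal.

```python
# Hypothetical sketch of "blinded passes" (not the actual LLM module):
# run the same extraction several times independently and treat
# disagreement between passes as an error signal.
from collections import Counter

def blinded_passes(extract, prompt, n=3):
    """extract: a callable standing in for one independent LLM extraction.
    Returns (majority_answer, errors); errors is non-empty whenever
    the passes fail to agree."""
    answers = [extract(prompt) for _ in range(n)]
    counts = Counter(answers)
    answer, votes = counts.most_common(1)[0]
    errors = []
    if votes < n:
        errors.append(f"inconsistent passes: {dict(counts)}")
    return answer, errors

# Deterministic stand-in for an LLM call.
answer, errors = blinded_passes(lambda p: "4", "What is 2+2?")
# agrees on every pass, so errors is empty

replies = iter(["4", "5", "4"])
answer, errors = blinded_passes(lambda p: next(replies), "What is 2+2?")
# one pass disagrees, so errors flags the inconsistency
```

Consistency across blinded passes is cheap for grounded answers and expensive for fabricated ones — which is exactly why disagreement is informative.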

Lastly, the notebooks render reports as computationally supported prose — you’ll see exactly where an LLM cites wrong facts or doesn’t cite them at all. When someone makes claims, silence about grounding is also telling.

How is this different from RAG?

RAG retrieves better context. We verify claims. RAG asks “did I give the model the right documents?” Parseltongue asks “did the model accurately represent what’s in them?”

Think of it as the difference between having a CSV and having a data platform with self-monitoring, alerts, charts, and documentation. Both can describe the same rows — but one actually knows what to do with them. Parseltongue is knowledge-as-code: not just retrieved text, but structured, verified, and accountable knowledge.

Do I need to learn LISP to use Parseltongue?

No. The pgmd notebook format is Markdown with embedded blocks. For most use cases you write prose and let the LLM module handle extraction. The LISP is under the hood.

And LLMs will write the code for you — you just need to learn how to steer them and address the errors. For this we’re building the Construct.