
Verifiable reasoning for AI decisions.

Ontologic is a proof-of-reasoning protocol that enables AI agents and distributed systems to cryptographically prove the justification behind their actions, not just the actions themselves.

A four-hash morpheme proof — rule, inputs, outputs, meaning — anchored on Hedera Consensus Service. Agents prove not just what they did, but why.


Ascension Hackathon

1st Place ($10K)

Hedera's official hackathon — external validation from the ecosystem

Apex Hackathon

Submitted — Awaiting Results

$250K prize pool — judging through April 17, winners April 27

Hedera-Native

HCS Anchored

Verifiable timestamps and total ordering via Hedera Consensus Service

Research

CC-BY-4.0

Fusing Ledger-Based Proof of Reasoning with Hardware Roots of Trust

Theo Ezell · April 3, 2026

Independent architecture paper positioning Ontologic's RIOM morpheme as the reasoning substrate for hardware-enforced AI governance.

View on Zenodo →

Why This Matters

For decades, enterprise security operated on implicit trust. A badged employee, on a corporate machine, in a secure environment, running sanctioned software — that was "contained." The human in the chair was the verification layer, and nobody had to formalize it because the human was always there.

Agents remove that implicit gate. Autonomous processes now make decisions, call APIs, move data — and there's no one in the chair. The trust was never in the system. It was in the person. We never built the infrastructure to replace what that person was quietly doing.

Ontologic is that infrastructure.

The Thesis

The Problem

LLMs, autonomous agents, and symbolic systems produce outputs through processes that are neither reproducible nor independently verifiable. Blockchains secure state transitions, but not the logical structures that generated them. There is a gap between action and meaning.

Ontologic's Response

Ontologic proposes a deterministic, minimal protocol to close this gap. The protocol formalizes reasoning into three sequential layers — the Triune Proof Model:

1. Rule Layer (Definition)

A rule is any declarative expectation in canonical form — a logical axiom, a domain constraint, a transformation. The rule is hashed to produce a ruleHash.

The Rule Layer corresponds to the Peirce identity layer, establishing what this rule is, who authored it, and what version governs. In Ontologic's CMY color system, this layer is represented by Magenta.
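A minimal sketch of rule hashing, assuming SHA-256 and canonical JSON serialization (sorted keys, fixed separators); the field names and the `spend-limit` rule itself are illustrative, not Ontologic's published schema:

```python
import hashlib
import json

def canonicalize(record: dict) -> bytes:
    """Deterministic JSON encoding: sorted keys, fixed separators, UTF-8."""
    return json.dumps(record, sort_keys=True, separators=(",", ":")).encode("utf-8")

# Hypothetical rule record capturing the identity layer: what the rule is,
# who authored it, and which version governs.
rule = {
    "id": "spend-limit",
    "version": "1.0.0",
    "author": "did:example:issuer",
    "constraint": "amount <= limit",
}

rule_hash = hashlib.sha256(canonicalize(rule)).hexdigest()
```

Because serialization is canonical, any party holding the same rule record derives the same ruleHash regardless of how their runtime orders dictionary keys.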
2. Inference Layer (Execution)

Given a rule and inputs, a deterministic procedure evaluates whether the rule holds. Inputs and outputs are serialized and hashed. The inference must be deterministic and externally reproducible.

The Inference Layer corresponds to the Tarski truth-conditional layer, evaluating whether the rule's conditions hold against the given inputs. In Ontologic's CMY color system, this layer is represented by Cyan.
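The determinism requirement can be sketched as follows, again assuming SHA-256 and canonical JSON, with the `amount <= limit` constraint as a stand-in for a real rule evaluation:

```python
import hashlib
import json

def canonicalize(record: dict) -> bytes:
    """Deterministic JSON encoding: sorted keys, fixed separators, UTF-8."""
    return json.dumps(record, sort_keys=True, separators=(",", ":")).encode("utf-8")

def evaluate_rule(inputs: dict) -> dict:
    # Deterministic evaluation of an illustrative constraint: the same
    # inputs always yield the same output record, so a third party can
    # re-run the inference and reproduce both hashes independently.
    return {"holds": inputs["amount"] <= inputs["limit"]}

inputs = {"amount": 40, "limit": 100}
outputs = evaluate_rule(inputs)

inputs_hash = hashlib.sha256(canonicalize(inputs)).hexdigest()
outputs_hash = hashlib.sha256(canonicalize(outputs)).hexdigest()
```

Anything nondeterministic (wall-clock time, random sampling, unordered iteration) would break external reproducibility, which is why the layer is restricted to deterministic procedures.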
3. Meaning Layer (Attestation)

The results are published as a canonical meaning record via consensus-backed attestation. Hedera Hashgraph satisfies the requirements — total ordering, verifiable timestamps, immutability, low-latency publication — through the Hedera Consensus Service.

The Meaning Layer corresponds to the Floridi information layer, establishing semantic significance and public attestation. In Ontologic's CMY color system, this layer is represented by Yellow.
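A sketch of what a canonical meaning record might look like before publication. In a real deployment this payload would be submitted to a Hedera Consensus Service topic, whose consensus timestamp and sequence number anchor it; here we only compute the meaningHash locally, and every field name and value is an assumption for illustration:

```python
import hashlib
import json

def canonicalize(record: dict) -> bytes:
    """Deterministic JSON encoding: sorted keys, fixed separators, UTF-8."""
    return json.dumps(record, sort_keys=True, separators=(",", ":")).encode("utf-8")

# Hypothetical meaning record: the semantic claim plus its intended anchor.
meaning = {
    "statement": "spend-limit v1.0.0 holds for the attested inputs",
    "topic": "0.0.12345",  # hypothetical HCS topic id
    "publishedAt": "2026-04-03T12:00:00Z",
}

meaning_hash = hashlib.sha256(canonicalize(meaning)).hexdigest()
```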

The final proof — the morpheme — binds all three layers into a single verifiable hash:

proofHash = H(ruleHash || inputsHash || outputsHash || meaningHash)

The fourth hash — meaningHash — binds the proof to a consensus-anchored semantic attestation, making the morpheme a complete reasoning record rather than just a computation log.


Two independent parties computing the same reasoning derive the same morpheme. Verification does not require trust in the agent that produced the reasoning.
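The full binding can be sketched end to end. Concatenating hex digests and using SHA-256 for H are assumed encodings, and the records are the illustrative ones from above, not the protocol's actual schema:

```python
import hashlib
import json

def canonicalize(record: dict) -> bytes:
    """Deterministic JSON encoding: sorted keys, fixed separators, UTF-8."""
    return json.dumps(record, sort_keys=True, separators=(",", ":")).encode("utf-8")

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def morpheme(rule: dict, inputs: dict, outputs: dict, meaning: dict) -> str:
    # proofHash = H(ruleHash || inputsHash || outputsHash || meaningHash)
    digests = "".join(
        sha256_hex(canonicalize(r)) for r in (rule, inputs, outputs, meaning)
    )
    return sha256_hex(digests.encode("ascii"))

# Two independent verifiers recompute the proof from the same records.
records = (
    {"id": "spend-limit", "version": "1.0.0"},
    {"amount": 40, "limit": 100},
    {"holds": True},
    {"statement": "spend-limit holds"},
)
proof_a = morpheme(*records)
proof_b = morpheme(*records)
```

Agreement between `proof_a` and `proof_b` requires no trust between the parties: tampering with any one record changes its digest and therefore the morpheme.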

What's Ahead

Building toward dynamic rule registries and the OTS/OCS framework. Instead of encoding rules in smart contracts, agents will resolve human-readable rule URIs via registries built on existing Hedera standards (HCS-1 for rule storage, HCS-2 for registries, HCS-13 for schema validation). Designing for community governance of rule taxonomies. This is the direction — not the current implementation.

Hologlass

Hologlass is an AI agent attestation harness that intercepts tool calls, gates execution through human-in-the-loop authorization, and witnesses every decision on Hedera.

Hedera = consensus about what happened
Ontologic = consensus about why it was concluded
Hologlass = consensus about who permitted it

Request Early Access

Ontologic is in active pre-alpha development. Request early access to be notified when developer tooling ships.

We'll only email you about the alpha.