HOSTED AUTHORITY PILOT

Control automated and AI-driven execution before action

Scanners are too late: they surface problems only after execution. This boundary controls automated and AI-driven execution before action.

Hosted Authority Pilot places an external allow/deny boundary between intent and execution, so the decision stays outside the workflow that is asking to proceed.

There are three practical paths: commercial request, live pilot, or private deployment. Commercial access stays on the main request path. The pilot remains a separate live evaluation path for testing the boundary before action.

LIVE PROOF

A pre-execution admission boundary, not a post-incident scanner.

AI Admissibility decides before execution whether an AI-driven action is admissible. No valid boundary context means no admissible execution.

Verified

  • Live commercial path tested
  • STARTER, EVALUATION, HOSTED_AUTHORITY tiers tested
  • GitHub SAB no-bypass proof passed
  • Reboot-safe behavior verified
  • Recovery artifacts created

Not claimed

  • No synthetic GitHub ALLOW without issuer/payment context
  • No universal security claim
  • No legal/compliance guarantee
  • Full recovery drill is outside the current proof claim

Canonical commercial path

01 Site request
02 Live payment step
03 Payment return
04 Issuer
05 SAB / tier gate
06 Boundary response

Live proof status

Overall status: PASS
Generated (UTC): 2026-04-25 19:58:25 | Authority: HOSTED_L5_AUTHORITY_V2

Authority reachable

pubkey 200
Pinned public authority endpoint responds.

Request path

HTTP 200
Commercial request page is online.

Fail-closed probe

bad admit 400
Synthetic invalid request is rejected.

Pilot path

HTTP 200
Live evaluation surface is online.

Private deployment

HTTP 200
Private deployment inquiry surface is online.

Recovery posture

available
Recovery artifacts are publicly represented as available.
Proof source
https://ai-admissibility.com/proof-status.json
Non-claims: curated proof status, not full operational telemetry; no raw client logs, tokens, payment data, or customer ALLOW/DENY counters; executor no-bypass requires an integrated deployment where execution depends on the boundary response.
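Because the proof source above is a plain JSON document, the curated status can be checked mechanically. A minimal sketch in Python, assuming a top-level field such as "overall" carrying "PASS" (the field name and document shape are assumptions, not a documented schema), and read fail-closed: anything other than an explicit, well-formed PASS is treated as not passing.

```python
import json

def proof_passes(raw: str) -> bool:
    # Fail-closed reading of a curated proof-status document:
    # only an explicit, well-formed "PASS" counts. Malformed JSON,
    # a non-object document, a missing field, or any other value
    # is treated as not passing.
    try:
        doc = json.loads(raw)
    except (ValueError, TypeError):
        return False
    if not isinstance(doc, dict):
        return False
    return doc.get("overall") == "PASS"  # "overall" is an assumed field name
```

The default outcome is False; only the fully satisfied happy path returns True, which mirrors the fail-closed posture the boundary itself advertises.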

Who this is for

Teams with automated execution

Organizations already running AI-driven or automated workflows where actions do not stay inside a harmless demo environment.

Operations with real downside

Contexts where a wrong step can touch money, data, infrastructure, external systems, customer state, or other irreversible consequences.

Buyers evaluating controlled pilots

Decision-makers who need a qualification-first pilot to test whether an external allow/deny boundary fits their production risk model.

What the buyer gets

Qualification-first pilot path

A controlled starting path for teams evaluating whether an external allow/deny boundary fits their execution and risk model before broader adoption.

Hosted decision surface

Access to the hosted authority path used for pilot evaluation, with the decision boundary presented as an external execution control surface rather than internal self-approval.

Clear next-step outcome

A concrete outcome after evaluation: proceed with hosted access, stay in evaluation, or determine that the fit is not right for the current workflow and risk profile.

What this is not

Not a scanner

This is not a tool that discovers a problem after action already happened.

Not post-event monitoring

This is not a reporting layer that explains damage after execution already crossed the boundary.

Not self-approval inside the same actor

This does not ask the same agent or workflow to authorize its own next step.

Not vague governance theater

This is an external allow/deny decision surface, not a soft advisory layer with no execution consequence.

How the pilot works

1. Request

The buyer enters through the request path rather than a public self-serve activation flow.

2. Qualification

The use case is checked for real execution risk, workflow fit, and whether a controlled pilot makes sense.

3. Pilot path

A qualified team moves into the hosted evaluation path where the external allow/deny boundary can be assessed in context.

4. Outcome

The result is a concrete next step: hosted continuation, further evaluation, or a clear no-fit conclusion for the current scope.

FAQ

Why is this needed if scanners already exist?

Scanners and monitoring help after the fact. This boundary is for cases where the real question is whether execution should continue before action happens.

Why external instead of inside the same workflow?

The same actor should not authorize itself. The point of the model is that the allow or deny decision lives outside the workflow requesting execution.

What does a team need to start?

A real workflow, a real risk surface, and a concrete reason to test whether a controlled pilot fits the team's execution model.

Who is this not for?

This is not for curiosity traffic, generic AI experimentation, or teams that do not have meaningful execution risk to control.

Access paths

Access is qualification-based. Pricing and scope are provided after the requested boundary use case is reviewed.

Tier 0: Starter
Entry path for initial qualification and controlled first access.
Access by request after initial qualification.
Open request form
Tier 1: Evaluation
Controlled access for testing the boundary model and proving fit.
Qualified evaluation; scope and access terms provided after review.
Open request form
Tier 2: Hosted Authority
Managed decision surface for production use.
Commercial access by qualified inquiry.
Open request form
Tier 3: Private Deployment
Higher-isolation path for organizations that need tighter control.
Private scope and deployment model by written request.
Open request form

Proof

What is proven

  • Fail-closed behavior
  • External decision path
  • Deterministic response surface
  • Tier-aware access model

What this is not

  • Not a passive scanner
  • Not post-event forensics
  • Not a vague governance statement

Action surface

Primary GitHub product surface for installation, action flow, and buyer-facing execution entry.

Zenodo reference

Immutable authority-closed reference anchor for the Level5 External Admit Authority record.

PUBLIC REFERENCE LAYER

Reference Archive

Public non-operational Zenodo records defining the architectural, licensing, and pre-execution admission reference layer behind AI Admissibility.

Open Reference Guide

These records are public reference artifacts. They are not deployment packages, service interfaces, software instructions, operational controls, or commercial licenses. Operational, implementation, commercial, evaluation, conformity, endorsement, or authoritative interpretation rights require separate written permission.

Choose the right path for the Hosted Authority Pilot

There are three paths: commercial access request, live pilot, or private deployment. Use Open request form for Starter, Evaluation, or Hosted Authority commercial access. Use Try live pilot for the separate live evaluation path. Use Private Deployment when you need tighter isolation and a private deployment discussion.

DEVELOPER PROOF ACCESS

Get Proof Access

Request temporary synthetic evaluation access for the AI Admissibility GitHub Action flow. This is pilot access only, not paid production access.

Issued context

Submit the form to receive a temporary proof_access_id.

GitHub Actions snippet

uses: pinfloyd/[email protected]
with:
  # Pinned external authority endpoint that issues the admission decision.
  authority-url: https://admit.ai-admissibility.com/admit
  # Pinned authority public key, stored as a repository secret.
  authority-pubkey: ${{ secrets.AI_ADMISSIBILITY_AUTHORITY_PUBKEY }}
  # Policy identifier evaluated by the authority (demo value).
  policy-id: demo-policy
  # Temporary synthetic evaluation credential issued via the form above.
  proof-access-id: ${{ secrets.AI_ADMISSIBILITY_PROOF_ACCESS_ID }}
  trust-verdict: PASS

INTEGRATION RULE BEFORE REAL WORKFLOW USE

No Admission = No Execution

Before using this in a real high-impact workflow, verify that DENY, missing, invalid, expired, or unverifiable admission blocks workflow execution. No direct bypass path may exist around the admission step.

Read the customer integration rule

Non-claims: not production access; not paid tier access; not private deployment; not customer no-bypass guarantee; synthetic evaluation only.
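The integration rule above can be stated as a small gate function. A minimal sketch, assuming a hypothetical Admission record (these names are illustrative, not the product API): only an explicit, signature-verified, unexpired ALLOW admits execution, and every other state (DENY, missing, invalid, expired, unverifiable) blocks.

```python
import time
from dataclasses import dataclass
from typing import Optional

@dataclass
class Admission:
    verdict: str        # "ALLOW" or "DENY"
    expires_at: float   # unix timestamp after which the admission is stale
    signature_ok: bool  # result of verifying against the pinned pubkey

def is_admitted(admission: Optional[Admission], now: Optional[float] = None) -> bool:
    # Fail-closed gate: the default answer is False, and execution is
    # admitted only when every condition is affirmatively satisfied.
    now = time.time() if now is None else now
    if admission is None:             # missing admission -> block
        return False
    if not admission.signature_ok:    # invalid or unverifiable -> block
        return False
    if now >= admission.expires_at:   # expired -> block
        return False
    return admission.verdict == "ALLOW"  # anything but explicit ALLOW -> block
```

The design point is that there is no branch that reaches execution by default: a bypass would require deliberately routing around the gate, which is exactly what the no-bypass verification above is meant to rule out.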

Boundary distinction

Platform-native policy vs external admission

Pre-run policy is necessary. External admission is the stronger boundary: internal policy improves the executor, while external admission separates execution from authority.

Platform-native policy

A platform can resolve what will run, check policy, and block unsafe execution before a workflow starts. Useful, but final authority remains inside the executor platform.

Read the explanation →

External admission

For high-impact automation, execution should require an external allow decision before it may exist. The executor should not be final authority.

Read the explanation →

Surrogate Boundary Test

Can execution proceed without an external allow decision?

If yes, the system has policy, but not external admission authority.

Run the test →
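The question above can be turned into an executable check. A minimal sketch, assuming a hypothetical harness (none of these names are the product API): if execution still proceeds when the external authority is unreachable or returns anything other than an explicit ALLOW, the system has internal policy but not external admission authority.

```python
# Surrogate boundary test sketch (hypothetical harness, illustrative only).
class UnreachableAuthority:
    def decide(self, action):
        raise ConnectionError("authority unreachable")

class StubAuthority:
    def __init__(self, verdict):
        self.verdict = verdict
    def decide(self, action):
        return self.verdict

def gated_run(action, authority, execute):
    # The allow/deny decision lives outside the executor; the executor
    # cannot manufacture an ALLOW on its own.
    try:
        verdict = authority.decide(action)
    except Exception:
        return "blocked"   # fail closed: no reachable authority, no execution
    if verdict != "ALLOW":
        return "blocked"
    execute(action)
    return "executed"

def surrogate_boundary_test(run_fn) -> bool:
    # Passes only if nothing executed without an external ALLOW.
    executed = []
    run_fn({"op": "deploy"}, UnreachableAuthority(), executed.append)
    return len(executed) == 0
```

A system that "passes" this test while the authority is unreachable is, by construction, not self-approving: the boundary response is a hard precondition of execution rather than an advisory signal.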

Execution Admission Guides

Practical explanations for AI execution control

Short public-safe guides explaining why production AI agents need external admission before high-impact execution.

What Is an External Admission Boundary for AI Workflows? Why Self-Authorizing AI Agents Are a Production Risk Production AI Agents Need Execution Control, Not Just More Capability AI Guardrails vs External Admission: What Is the Difference? Why GitHub Actions and CI/CD Need Admission Gates for AI-Driven Workflows No Admission = No Execution: A Practical Rule for AI AutomationExternal Admission for Financial Multi-Agent Systems