Scanners are too late. Control automated and AI-driven execution before action.
Hosted Authority Pilot places an external allow/deny boundary between intent and execution, so the decision stays outside the workflow that is asking to proceed.
There are three practical paths: commercial request, live pilot, or private deployment. Commercial access stays on the main request path. The pilot remains a separate live evaluation path for testing the boundary before action.
AI Admissibility decides before execution whether an AI-driven action is admissible. No valid boundary context means no admissible execution.
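The fail-closed rule above ("no valid boundary context means no admissible execution") can be sketched as a thin client-side gate. All names here are illustrative assumptions, not the hosted service's actual API; the real decision surface is the external authority, not this client code:

```python
import json
import urllib.request


class AdmissionDenied(Exception):
    """Raised whenever execution lacks a valid external ALLOW decision."""


def admit(authority_url: str, action: dict, timeout: float = 5.0) -> dict:
    """Ask the external authority for an allow/deny decision before acting.

    Fail closed: a network error, timeout, malformed response, or any
    verdict other than ALLOW blocks execution. Field names ("verdict",
    "ALLOW") are assumptions for this sketch.
    """
    req = urllib.request.Request(
        authority_url,
        data=json.dumps(action).encode(),
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            decision = json.load(resp)
    except Exception as exc:
        # Unreachable or unparseable authority means no boundary context.
        raise AdmissionDenied(f"no valid boundary context: {exc}")
    if decision.get("verdict") != "ALLOW":
        raise AdmissionDenied(f"verdict={decision.get('verdict')!r}")
    return decision  # carry the decision forward as execution context
```

The design choice worth noting: every failure mode collapses into the same deny, so there is no "error path" that accidentally lets execution continue.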
Canonical commercial path
Live proof status
https://ai-admissibility.com/proof-status.json

Organizations already running AI-driven or automated workflows where actions do not stay inside a harmless demo environment.
Contexts where a wrong step can touch money, data, infrastructure, external systems, customer state, or other irreversible consequences.
Decision-makers who need a qualification-first pilot to test whether an external allow/deny boundary fits their production risk model.
A controlled starting path for teams evaluating whether an external allow/deny boundary fits their execution and risk model before broader adoption.
Access to the hosted authority path used for pilot evaluation, with the decision boundary presented as an external execution control surface rather than internal self-approval.
A concrete outcome after evaluation: proceed with hosted access, stay in evaluation, or determine that the fit is not right for the current workflow and risk profile.
This is not a tool that discovers a problem after action already happened.
This is not a reporting layer that explains damage after execution already crossed the boundary.
This does not ask the same agent or workflow to authorize its own next step.
This is an external allow/deny decision surface, not a soft advisory layer with no execution consequence.
The buyer enters through the request path rather than a public self-serve activation flow.
The use case is checked for real execution risk, workflow fit, and whether a controlled pilot makes sense.
A qualified team moves into the hosted evaluation path where the external allow/deny boundary can be assessed in context.
The result is a concrete next step: hosted continuation, further evaluation, or a clear no-fit conclusion for the current scope.
Scanners and monitoring help after the fact. This boundary is for cases where the real question is whether execution should continue before action happens.
The same actor should not authorize itself. The point of the model is that the allow or deny decision lives outside the workflow requesting execution.
A real workflow, a real risk surface, and a concrete reason to test whether a controlled pilot fits the team's execution model.
This is not for curiosity traffic, generic AI experimentation, or teams that do not have meaningful execution risk to control.
Access is qualification-based. Pricing and scope are provided after the requested boundary use case is reviewed.
Primary GitHub product surface for installation, action flow, and buyer-facing execution entry.
Immutable authority-closed reference anchor for the Level5 External Admit Authority record.
Public non-operational Zenodo records defining the architectural, licensing, and pre-execution admission reference layer behind AI Admissibility.
Open Reference Guide: Licensing and interpretation boundary for authority-closed reference objects.
Licensing and Interpretation Framework v2: Diagnostic distinction between internal policy and external admission.
Reference layer for post-execution payment-failure state capture.
No external anchor, no admissible cloud execution.
Valid snapshot at T0, or inadmissible execution.
Irreducible cost of boundary-first execution.
Protocol-level control before execution.
External authority for fail-closed admission.
Identical inputs can still diverge.
T0 snapshot-bound admissibility.
These records are public reference artifacts. They are not deployment packages, service interfaces, software instructions, operational controls, or commercial licenses. Operational, implementation, commercial, evaluation, conformity, endorsement, or authoritative interpretation rights require separate written permission.
There are three paths: a commercial access request, a live pilot, or a private deployment. Use the Open request form for Starter, Evaluation, or Hosted Authority commercial access. Use Try live pilot for the separate live evaluation path. Use Private Deployment when you need tighter isolation and a dedicated private-deployment discussion.
Request temporary synthetic evaluation access for the AI Admissibility GitHub Action flow. This is pilot access only, not paid production access.
Submit the form to receive a temporary proof_access_id.
steps:
  - uses: pinfloyd/[email protected]
    with:
      authority-url: https://admit.ai-admissibility.com/admit
      authority-pubkey: ${{ secrets.AI_ADMISSIBILITY_AUTHORITY_PUBKEY }}
      policy-id: demo-policy
      proof-access-id: ${{ secrets.AI_ADMISSIBILITY_PROOF_ACCESS_ID }}
      trust-verdict: PASS
Before using this in a real high-impact workflow, verify that DENY, missing, invalid, expired, or unverifiable admission blocks workflow execution. No direct bypass path may exist around the admission step.
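The verification rule above (DENY, missing, invalid, expired, and unverifiable must all block) can be sketched as a single fail-closed predicate. Field names and the signature hook are assumptions for illustration, not the action's actual decision schema:

```python
import time


def is_admissible(decision, now=None, verify_signature=lambda d: False):
    """Fail-closed check: only a present, verifiable, unexpired ALLOW admits.

    DENY, missing, malformed, expired, and unverifiable decisions all
    return False. The signature verifier defaults to rejecting, so the
    sketch stays fail-closed when no real verifier is wired in.
    """
    now = time.time() if now is None else now
    if not isinstance(decision, dict):
        return False                          # missing or malformed
    if decision.get("verdict") != "ALLOW":
        return False                          # DENY or unknown verdict
    exp = decision.get("expires_at")
    if not isinstance(exp, (int, float)) or exp <= now:
        return False                          # expired or no expiry at all
    return bool(verify_signature(decision))   # unverifiable still blocks
```

A quick way to exercise it: feed it each failure mode (None, a DENY, an expired ALLOW, an ALLOW with no verifier) and confirm every one returns False before wiring it into anything real.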
Read the customer integration rule.
Non-claims: not production access; not paid tier access; not private deployment; not customer no-bypass guarantee; synthetic evaluation only.
Boundary distinction
Pre-run policy is necessary. External admission is the stronger boundary: internal policy improves the executor, while external admission separates execution from authority.
A platform can resolve what will run, check policy, and block unsafe execution before a workflow starts. Useful, but final authority remains inside the executor platform.
Read the explanation →
For high-impact automation, execution should require an external allow decision before it may exist. The executor should not be final authority.
Read the explanation →
Can execution proceed without an external allow decision?
If yes, the system has policy, but not external admission authority.
Run the test →
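The test can be sketched in miniature: make the authority unreachable and see whether execution still happens. The interfaces below are hypothetical; a system with real external admission authority must report that nothing executed:

```python
def run_gated(action, fetch_decision, execute):
    """Execution exists only downstream of an external ALLOW.

    `fetch_decision` calls the external authority; `execute` performs the
    action. If the authority is unreachable or returns anything but an
    ALLOW verdict, `execute` is never invoked.
    """
    try:
        decision = fetch_decision(action)
    except Exception:
        decision = None                       # unreachable authority: deny
    if not decision or decision.get("verdict") != "ALLOW":
        return {"executed": False, "reason": "no external allow decision"}
    return {"executed": True, "result": execute(action)}


def authority_down(action):
    """Simulate the boundary test: the external authority is unreachable."""
    raise ConnectionError("authority unreachable")


outcome = run_gated({"op": "rotate-keys"}, authority_down,
                    execute=lambda a: "ran " + a["op"])
# External admission authority means outcome["executed"] is False here;
# a policy-only system would have gone ahead and run the action anyway.
```

If the equivalent probe against your own pipeline still executes, you have internal policy, not external admission authority.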
Execution Admission Guides
Short public-safe guides explaining why production AI agents need external admission before high-impact execution.