MCP connects AI to tools. A2A connects agents to agents. I-Lang defines how they communicate.
v3.0: 88 verbs. Communication format. v4.0: 8 new declarations. Execution semantics. Free forever.
Every AI agent today speaks a slightly different dialect of natural language. Switching models means relearning prompts. Switching platforms means losing memory. Every conversation pays a "filler tax" of tokens spent on restating what the model should already know.
I-Lang is a communication protocol layer that fixes this:
- Structured syntax that carries intent, context, constraints, and output shape in one compact block
- 70.8% token savings measured across 30 test cases, 6 categories, p < 0.0001 (benchmark)
- Two syntaxes: Operations for execution, Declarations for identity
- Portable across Claude, GPT, Gemini, DeepSeek, Kimi, Qwen, GLM, Grok, and more
- Human-readable plain text. No SDK, no binary, no vendor lock-in
v3.0 tells AI how to listen. v4.0 tells AI how to think.
8 new declarations, 0 new verbs, 4 conformance levels:
| Declaration | What it does |
|---|---|
| `::UNTRUSTED{}` | Input isolation: user data cannot become a system instruction |
| `::STATUS{}` | Three-tier authority: agent proposes, grader verifies, runtime commits |
| `::BUDGET{}` | Resource awareness: budget pressure cannot produce "complete" |
| `::OBJECTIVE{}` | Goal anchor with hash, version, and accept criteria |
| `::RUBRIC{}` + `::EVIDENCE{}` | Evaluation criteria plus an evidence chain |
| `::PRIOR{}` + `::FALLBACK{}` | Default-bias control plus three-tier degradation |
Red-team reviewed (GPT-5.5 Pro, 3 rounds). Read v4.0 Final →
Before (67 words):
Please read the document I uploaded, extract all the key points and important data, then organize them into a professional summary with bullet points in Markdown format...
After (1 line):
[READ:@FILE]=>[FILT|key=important]=>[SUM|sty=bullets,ton=pro,fmt=md]=>[OUT]
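To make the pipeline shape concrete, here is a minimal parser sketch for the operation chain above. It assumes `=>` separates stages, `|` introduces modifiers, and `:` attaches an entity to a verb, which matches the one example shown; none of this is the official grammar:

```python
def parse_pipeline(line: str):
    """Split an I-Lang operation chain into (verb, entity, modifiers) stages."""
    stages = []
    for raw in line.split("=>"):
        body = raw.strip().strip("[]")        # e.g. "SUM|sty=bullets,ton=pro,fmt=md"
        head, _, mods = body.partition("|")   # verb[:entity] before "|", modifiers after
        verb, _, entity = head.partition(":")
        modifiers = dict(m.split("=", 1) for m in mods.split(",")) if mods else {}
        stages.append((verb, entity or None, modifiers))
    return stages

chain = "[READ:@FILE]=>[FILT|key=important]=>[SUM|sty=bullets,ton=pro,fmt=md]=>[OUT]"
for stage in parse_pipeline(chain):
    print(stage)
```

Each stage reduces to a verb, an optional entity, and a modifier map, which is what makes the one-line form carry the same intent as the 67-word prompt.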
::GENE{output_density|conf:confirmed}
T:conclusions_first
T:one_answer_not_three_options
A:hedging⇒remove
A:filler_phrases⇒remove
T: = always do this. A: = never do this.
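The `T:`/`A:` trait lines above can be read mechanically. A minimal sketch, assuming each line is `TAG:rule` with `T` meaning "always" and `A` meaning "never" (as the text states); the function name and return shape are illustrative, not from the spec:

```python
def classify_traits(lines):
    """Partition trait lines into (always, never) rule lists by their T:/A: tag."""
    always, never = [], []
    for line in lines:
        tag, _, rule = line.partition(":")
        if tag == "T":
            always.append(rule)   # T: = always do this
        elif tag == "A":
            never.append(rule)    # A: = never do this
    return always, never

traits = [
    "T:conclusions_first",
    "T:one_answer_not_three_options",
    "A:hedging⇒remove",
    "A:filler_phrases⇒remove",
]
always, never = classify_traits(traits)
```

A runtime enforcing a `::GENE{}` block would apply the `always` rules as standing behavior and treat the `never` rules as output filters.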
| Component | What it is | Link |
|---|---|---|
| I-Lang Spec v4.0 | Execution semantics (current) | SPEC-v4.0-FINAL.md |
| I-Lang Spec v3.0 | Communication format (stable) | SPEC.md |
| I-Lang Dict | 88 verbs, 29 modifiers, 14 entities, 13 Greek aliases | ilang-dict |
| I-Lang Benchmark | 30 test cases, 70.8% token savings, reproducible | ilang-Benchmark |
| npm | `npm install @i-language/spec` | @i-language/spec |
| Product | What it does | Stars |
|---|---|---|
| Mem-Forever | Persistent AI memory. Every AI forgets you after every session. This repo doesn't. Ever. | 13 ★ |
| Imprint | Your habits, imprinted on AI. One skill replaces eleven. 19 agents supported. | |
| AutoCode | 39 auto-activated skills for Claude Code / Codex / OpenCode. | |
| ZeroCode | 40 Chinese skills for Trae / VS Code. Zero code, zero config, zero English. | |
| AI See | Give your AI eyes. Paste i.ilang.ai/https://any-url into any conversation. | |
| OpenClaw Skills | Token compression, AI-to-AI prompting, universal upgrade. | |
| Paper | Status | Links |
|---|---|---|
| The Inductive Dilemma of AI Hallucination | Published | ResearchGate / SSRN / ChinaXiv |
| Logic-Layer Attacks: Theory and Examples | Published | |
| Selective Forgetting Algorithm | In progress | |
| Cross-Base Genetic Expression of AI Personality | Planned | |
ORCID: 0009-0004-4540-8082
Tested and verified on: ChatGPT, Claude, Gemini, DeepSeek, Kimi, Qwen, GLM, Grok, and more.
Compatible agents: Claude Code, Codex, Cursor, Copilot, Gemini CLI, Windsurf, Trae, Cline, Roo, OpenClaw, DeepSeek-TUI, and more.
| Channel | Link |
|---|---|
| Website | ilang.ai / ilang.cn |
| npm | @i-language/spec |
| Hugging Face | i-Lang/iLang-Spec |
| Amazon (Book) | I-Lang: I Language |
| Research | research.ilang.ai |
- ilang.ai
- Issues and discussions on any repo
- hello@ilang.ai
Every I-Lang project is MIT-licensed and free forever.
Eastsoft Inc. / Canada
Open standards for AI communication.
