In a Training Loop 🔄

Heting Mao

IkanRiddle

AI & ML interests

None yet

Recent Activity

reacted to kanaria007's post with 🧠 about 4 hours ago
✅ New Article: *Role & Persona Overlays* (v0.1)

Title: 🎭 Role & Persona Overlays: Multi-Agent Identity in SI-Core
🔗 https://huggingface.co/blog/kanaria007/role-and-persona-overlays

---

Summary:
Early SI-Core diagrams often assume a single “user → Jump → effect” pipeline. Real deployments don’t: cities, schools, hospitals, OSS projects, and regulators all share the same runtime.

This article introduces *Role & Persona Overlays*: a first-class identity layer that answers, for every Jump, *who is this for (principal), who is acting (agent), under what authority (role), and through which viewpoint (persona)?*

Roles constrain *what actions are allowed* (capabilities + goal-surface projections). Personas only change *how results are rendered*; they must never silently change the chosen action.

---

Why It Matters:
• Prevents “ghost principals”: effects without a clear “on whose behalf” record
• Stops role drift: the system acting as ops/platform when it should act for a learner/citizen
• Makes audit queries trivial: *who decided what, for whom, under which delegation chain?*
• Enables multi-agent + human-in-the-loop coordination without losing accountability

---

What’s Inside:
• The 4-part model: *principal / agent / role / persona*
• Role-projected *goal surface views* (global goals → per-role slices)
• Patterns: multi-agent cooperation, multi-principal conflicts, joint human+SI Jumps
• ETH/RML/MEM integration: capability enforcement + ID-aware traces
• Delegation records + chain verification (time-bounded, revocable authority)

---

📖 Structured Intelligence Engineering Series: this is the practical “how to implement it safely” layer.
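The 4-part identity model described above maps naturally onto a small data structure. The following is a minimal Python sketch, not the article's or SI-Core's actual API: the names `Role`, `Persona`, `Delegation`, `JumpRequest`, and `authorize`, and every field on them, are illustrative assumptions. It only shows the shape of the idea: roles gate capabilities, personas stay out of the authorization path, and a time-bounded, revocable delegation chain is checked before any effect.

```python
# Illustrative sketch only: a minimal data model for the principal/agent/role/persona
# overlay described in the post above. All names are hypothetical and are not taken
# from the linked article or any SI-Core codebase.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)
class Role:
    """A role constrains WHAT actions are allowed (capabilities)."""
    name: str
    capabilities: frozenset[str]        # e.g. {"read:records", "propose:plan"}


@dataclass(frozen=True)
class Persona:
    """A persona only changes HOW results are rendered, never the chosen action."""
    name: str
    rendering_style: str                # e.g. "teacher-facing", "regulator-facing"


@dataclass(frozen=True)
class Delegation:
    """Time-bounded, revocable authority from a principal to an agent."""
    principal_id: str
    agent_id: str
    role: Role
    expires_at: datetime
    revoked: bool = False

    def is_valid(self, now: datetime) -> bool:
        return not self.revoked and now < self.expires_at


@dataclass
class JumpRequest:
    """Records who the Jump is for, who acts, under what authority, and the viewpoint."""
    principal_id: str
    agent_id: str
    role: Role
    persona: Persona
    action: str


def authorize(request: JumpRequest, chain: list[Delegation]) -> bool:
    """Verify the delegation chain and the role's capabilities before any effect.

    The persona is deliberately ignored here: it must never widen or narrow
    what the agent may do on the principal's behalf.
    """
    now = datetime.now(timezone.utc)
    chain_ok = any(
        d.principal_id == request.principal_id
        and d.agent_id == request.agent_id
        and d.role.name == request.role.name
        and d.is_valid(now)
        for d in chain
    )
    return chain_ok and request.action in request.role.capabilities


# Example: a teacher (agent) acting on behalf of a school (principal) under a dated grant.
role = Role("teacher", frozenset({"read:records", "propose:plan"}))
grant = Delegation("school-42", "teacher-7", role,
                   expires_at=datetime(2030, 1, 1, tzinfo=timezone.utc))
request = JumpRequest("school-42", "teacher-7", role,
                      Persona("teacher-facing", "classroom summary"), "propose:plan")
assert authorize(request, [grant])
```

Because every `JumpRequest` carries principal, agent, and role explicitly, the audit question “who decided what, for whom, under which delegation chain?” reduces to a filter over stored requests.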
reacted to kanaria007's post with 👍 1 day ago
✅ New Article: *Post-Transformer Decision Cores* (v0.1)

Title: 🚀 Post-Transformer Decision Cores: Goal-Native Engines Beyond LLMs
🔗 https://huggingface.co/blog/kanaria007/post-tranformer-decision-cores

---

Summary:
Transformers are powerful, but in SI-Core they’re *not the essence of intelligence*. A *Decision Core* is anything that satisfies the *Jump contracts* (OBS/ETH/MEM/ID/EVAL + RML), and those contracts don’t require next-token prediction.

This article sketches what “post-Transformer” looks like in practice: *goal-native, structure-aware controllers* that may use LLMs as tools, but don’t depend on them as the runtime brain.

> Don’t relax the contracts.
> Replace the engine behind them.

---

Why It Matters:
• Makes LLMs *optional*: shift them to “genesis / exploration / explanation,” while routine high-stakes Jumps run on structured cores
• Improves boring-but-critical properties: *determinism (CAS), fewer inconsistencies (SCI), fewer ETH violations (EAI), better rollback (RBL/RIR)*
• Enables gradual adoption via *pluggable Jump engines* and domain-by-domain “primary vs fallback” switching

---

What’s Inside:
• The architectural inversion: *World → OBS → SIM/SIS → Jump (Decision Core) → RML → Effects* (the LLM is just one engine)
• Three compatible post-Transformer directions:
  1. *World-model + search controllers* (MPC/MCTS/anytime search with explicit GCS + ETH constraints)
  2. *Genius-distilled specialized controllers* (distill structure from GeniusTraces; the LLM becomes a “genesis tool”)
  3. *SIL-compiled Decision Programs* (typed Jump entrypoints, compiler-checked invariants, DPIR/GSPU targeting)
• A realistic migration path: LLM-wrapped → Genius library → shadow dual-run → flip primary by domain → SIL-compiled cores
• How this connects to “reproducing genius”: GRP provides trace selection/format; this article provides the engine architectures

---

📖 Structured Intelligence Engineering Series
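To make the “pluggable Jump engines” and domain-by-domain “primary vs fallback” ideas concrete, here is a minimal Python sketch. It is an assumption-laden illustration, not the article's or SI-Core's actual API: `DecisionCore`, `LLMWrappedCore`, `SearchControllerCore`, and `JumpRouter` are hypothetical names, and the engine bodies are placeholders. The point it shows is structural: the LLM is just one engine behind a shared interface, and a structured core can become the primary for a given domain while the LLM remains a fallback.

```python
# Illustrative sketch only: a pluggable "Decision Core" interface with per-domain
# primary/fallback switching, as described in the post above. All names and
# signatures are hypothetical.
from typing import Protocol


class DecisionCore(Protocol):
    """Anything that satisfies the Jump contracts can serve as the engine."""

    def jump(self, observation: dict) -> dict:
        """Take an observation and return a proposed effect (a 'Jump')."""
        ...


class LLMWrappedCore:
    """An LLM used as one possible engine, e.g. for exploration or explanation."""

    def jump(self, observation: dict) -> dict:
        # Placeholder: call an LLM and parse a structured action from its output.
        return {"action": "explain", "detail": f"LLM response for {observation}"}


class SearchControllerCore:
    """A world-model + search controller (MPC/MCTS-style) for routine Jumps."""

    def jump(self, observation: dict) -> dict:
        # Placeholder: run a bounded search against explicit goal/constraint structures.
        return {"action": "plan", "detail": f"search result for {observation}"}


class JumpRouter:
    """Routes each domain to its primary engine, falling back if the primary fails."""

    def __init__(self, primary: dict[str, DecisionCore], fallback: DecisionCore):
        self.primary = primary
        self.fallback = fallback

    def jump(self, domain: str, observation: dict) -> dict:
        engine = self.primary.get(domain, self.fallback)
        try:
            return engine.jump(observation)
        except Exception:
            # Fall back, e.g. during a shadow dual-run or gradual migration.
            return self.fallback.jump(observation)


# Example: the structured core is primary for "scheduling"; the LLM stays as fallback.
router = JumpRouter(
    primary={"scheduling": SearchControllerCore()},
    fallback=LLMWrappedCore(),
)
result = router.jump("scheduling", {"task": "allocate shifts"})
```

Flipping a domain from LLM-primary to structured-primary is then a one-line change to the `primary` mapping, which is the kind of gradual, per-domain migration the post describes.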

Organizations

None yet