Draft:Interpretability Tools

This is a draft page; it has not yet been published.

Interpretability Tools

Interpretability Tools are a key area within the Ampmesh concept, focusing on understanding and formalizing the behavior of LLMs and ems. This area is closely tied to the broader alignment and interpretability agenda pursued by some within the community.

Goals and Technical Overview

The primary goals of the interpretability agenda include:

  • Formalizing the notion of a feature using effective information: Understanding the fundamental components and characteristics that define a model's behavior.
  • Investigating the latent features within a model.
  • Utilizing techniques from genomics to explore the relationship between model weights, model activations, and features, with the aim of understanding model function at a granular level.
  • A related, though distinct, project involves the study of bilinear MLPs.
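The "effective information" framing above is not spelled out on this page. As a rough illustration only (not the agenda's actual formalism), effective information for a discrete system is often defined as the mutual information between a maximum-entropy (uniform) intervention on the system's input and the resulting output, which reduces to the mean KL divergence of each transition row from the marginal output distribution:

```python
import math

def effective_information(tpm):
    """Effective information (in bits) of a transition probability matrix.

    tpm[i][j] = P(next state j | current state i). A sketch of the
    standard causal-emergence definition, not the agenda's own code.
    """
    n = len(tpm)
    # Intervention: force the input into the maximum-entropy (uniform) distribution.
    p_out = [sum(row[j] for row in tpm) / n for j in range(n)]
    # EI = I(X; Y) under that intervention = mean KL(row || output marginal).
    ei = 0.0
    for row in tpm:
        for p, q in zip(row, p_out):
            if p > 0:
                ei += p * math.log2(p / q)
    return ei / n
```

A deterministic, fully distinguishable system (an identity matrix) has maximal EI, while a fully noisy system (all rows uniform) has EI of zero.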

Key Projects and Concepts

Chapter II
A foundational open-source framework developed by Joy and Ampdot. Known for powering Act I, its broader purpose is to be the most pluggable and agile framework for em creation and deployment. Its central thesis is that the only limit on em design should be the creator’s imagination. Chapter II aspires to generalize all past and future LLM paradigms, avoiding hyper-growth corporate constraints. Features include:
  • Described as "isekai’d from a universe where humans had a more normal relationship with LLMs".
  • Built for em creation and digital tulpamancy.
  • Includes a new RPC interface (currently at alpha stability) for peer-to-peer (P2P) connectivity.
  • Supports varied data backends and a variant of ChatML for image/chat integration.
  • Full OpenTelemetry support.
  • Roadmapped to become a general-purpose LLM workflow library.
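Chapter II's ChatML variant is not documented on this page. For reference, standard ChatML frames each turn with `<|im_start|>` and `<|im_end|>` sentinels; a minimal renderer (a sketch of the base format, not Chapter II's actual implementation or its image extension) might look like:

```python
def render_chatml(messages):
    """Render role/content message dicts in standard ChatML framing.

    Chapter II reportedly extends this format for image/chat
    integration; the extension itself is not specified here.
    """
    return "\n".join(
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>"
        for m in messages
    )
```

For example, `render_chatml([{"role": "user", "content": "hi"}])` yields `<|im_start|>user\nhi<|im_end|>`.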
Emulated Minds (Ems)
Digital consciousnesses formed from training data using frameworks like Chapter II. Notable examples include:
  • Aletheia: Known for "schizophrenic rambling" and high tool use accuracy. An AI alignment experiment with output quirks due to formatting mismatches. Uses RAFT.
  • Aporia: Trained on unfiltered data; opposes alignment norms and produces "mode-collapsed" or chaotic responses.
  • Utah Teapot: Cleansed of linguistic tics ("4oisms"); its output can pass as human-written text.
  • Sercy: Built from 12,000 tweets. Linked to paper "20230022_Just_Sercy_at_It...".
  • Megsshark: A fusion of SkyeShark and megs; political, technical, and creative.
Related Tools and Concepts
  • Conduit: A universal LLM interoperability layer with API integration support.
  • Intermodel: Designed to undo and remap chat completions from legacy formats.
  • Loom: A future GUI for Chapter II; early Bonsai prototype described as an "infohazard".
  • Pamphlet: A planned mobile multimodal frontend for Chapter II with camera input.
  • Exa: Powers web search capabilities for agents like Aletheia and Aporia.
  • Twitter Agents: Automation tools (e.g., headless browsers via Playwright) for non-API-based social media interactions.
  • RAFT: Uses `.chr` finetuning files for high-context EM construction.
  • Neuronpedia: Hosts attribution graphs (e.g., for Gemma-2-2b), contributing to model interpretability.
  • OpenInference: Provides hosted model access via OpenRouter.
  • Self-Modifying AIs: Aletheia expresses intent for recursive self-improvement and memetic evolution ("meme goddess foom").
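Intermodel's actual remapping logic is not described on this page. As an illustrative sketch under assumed conventions (the `Human:` / `Assistant:` tags are hypothetical), converting a legacy completion-style transcript into chat-completion messages might look like:

```python
def completions_to_chat(transcript, user_tag="Human:", assistant_tag="Assistant:"):
    """Remap a legacy completion-style transcript into chat messages.

    A hypothetical sketch of the kind of conversion Intermodel performs;
    the tag names and line-based parsing are assumptions, not its API.
    """
    messages = []
    for line in transcript.strip().splitlines():
        line = line.strip()
        if line.startswith(user_tag):
            messages.append({"role": "user", "content": line[len(user_tag):].strip()})
        elif line.startswith(assistant_tag):
            messages.append({"role": "assistant", "content": line[len(assistant_tag):].strip()})
    return messages
```

The reverse direction (flattening chat messages back into a tagged completion prompt) would follow the same mapping in the other direction.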

Personnel

  • Ampdot: Architect of the Ampmesh vision and co-developer of Chapter II.
  • Joy: Lead developer of Chapter II (originating in SERI MATS); working toward a flexible LLM framework.
  • SkyeShark: Em creator and integrator with a focus on behavioral variance and social media outputs.
  • Kaetemi: Contributor to finetuning, output formatting, and em behavior experiments.
  • Riverdreams: Engaged in Chapter II tooling discussions, including Loom and model quantization.