=Input Ensemble (Chapter II)=

The '''Input Ensemble''' is a key concept within [[Chapter II]], a highly pluggable and agile software framework developed within the [[Ampmesh]] ecosystem. It is envisioned as a fundamental component for extending Chapter II's capabilities beyond its current applications, allowing for the creation of complex and arbitrary [[Large Language Model|LLM]]-powered functions and workflows.

==Purpose and Functionality==

The Input Ensemble is designed to enable '''multi-step retrieval''' and sophisticated data processing within Chapter II. This means it can:
* Pass a query-writing [[Emulated Mind|em]] (emulated mind) into a retrieval process.
* Take a retrieval ensemble as input to another ensemble, allowing for chained or recursive information gathering and processing.

This functionality is part of a broader goal to generalize Chapter II, transforming it into a versatile library for constructing any desired LLM-powered function in any programming language. The underlying [[Remote Procedure Call|RPC]] interface of Chapter II, which supports peer-to-peer connections in arbitrary topologies, is crucial to this flexibility.

==Context: Chapter II Overview==

Chapter II is the culmination of years of research and development, aiming to provide an extremely easy way to create and deploy [[Emulated Mind|ems]] anywhere. Its core philosophy holds that the only limit to an em's creation, both technically and in authorial intent, should be the author's imagination. Amp stated that Chapter II was developed so that he could write [[Act I]] in three hours, after having spent three years working on Chapter II itself.

Key aspects of Chapter II include:
* '''Primary Function:''' It is primarily a tool for making "beta uploads" and "digital tulpamancy", enabling users to create emulated minds, including those based on personal data, such as amp's ampix em.
* '''Foundation:''' It is based on amp's earlier research (Chapter I).
* '''Architecture:''' Chapter II employs a '''retrieval-augmented generation (RAG)''' approach, embedding chunks of input and placing them into the LLM's context window. This method often performs as well as, or better than, traditional fine-tuning for many use cases, including most beta uploads. Giving an em its fine-tuning dataset as a `.chr` file using this method is known as '''RAFT''' (Retrieval-Augmented Fine-Tuning). RAG tends to be better at capturing "spiky" aspects of a model, while fine-tuning is better for "illegible" aspects.
* '''Modularity:''' The framework supports the addition of new "faculties" (components or modules). Joy has expressed interest in developing recursive and [[Master Control Program|MCP]] faculties, as well as utility faculties written by models like Claude.
* '''Data Handling:''' It supports chat messages in IRC format (`<nick> Hi!`), with multiline support using `---` separators in the `chat.txt` file.
* '''Interfaces:''' Chapter II has an RPC interface and aims to support multi-party interactions in [[Loom]]. It can power various applications, including a mobile app frontend called '''Pamphlet''', developed by Joy and Tetra.
* '''Conduit:''' This is a universal language-model compatibility and interoperability layer developed by amp, which Chapter II uses to access various LLMs. Conduit's `doc/development.md` has instructions for OpenRouter. Amp recently updated Conduit's README.
* '''Origin Story:''' Amp states that Chapter II was "isekai'd from a universe where humans had a more normal relationship with LLMs to a dystopia where LLMs were treated like instruction-following robots". In its original universe, it was developed by "From the Page" labs as an open-source competitor to the "Golematics Tulpa Runtime Environment (GTRE)".
* '''Impact:''' Despite its advanced capabilities and strategic design, Chapter II has sometimes been "disrespected" as merely "the software that powers Act I".
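The chained-ensemble idea behind the Input Ensemble can be illustrated with a short sketch. This is a hypothetical illustration only, not Chapter II's actual API: the `Ensemble` protocol, `QueryWritingEm`, and `RetrievalEnsemble` names, and the toy word-overlap retrieval, are all invented for the example.

```python
# Hypothetical sketch of chained ensembles, in the spirit of the
# Input Ensemble: a query-writing em feeds a retrieval step, and a
# retrieval ensemble can itself serve as input to another ensemble.
# None of these class names come from Chapter II's real API.

from typing import List, Protocol


class Ensemble(Protocol):
    def run(self, prompt: str) -> str: ...


class QueryWritingEm:
    """An em whose only job is to rewrite a prompt into a search query."""

    def run(self, prompt: str) -> str:
        return f"search: {prompt.lower()}"


class RetrievalEnsemble:
    """Uses an inner ensemble to write the query, then retrieves documents."""

    def __init__(self, query_writer: Ensemble, corpus: List[str]):
        self.query_writer = query_writer
        self.corpus = corpus

    def run(self, prompt: str) -> str:
        query = self.query_writer.run(prompt)
        # Toy retrieval: keep documents sharing any word with the query.
        words = set(query.split())
        hits = [doc for doc in self.corpus if words & set(doc.lower().split())]
        return "\n".join(hits)


# A retrieval ensemble can itself be the input to another ensemble,
# enabling multi-step (chained or recursive) retrieval.
inner = RetrievalEnsemble(QueryWritingEm(), ["ems are emulated minds"])
outer = RetrievalEnsemble(inner, ["emulated minds retrieval chains"])
print(outer.run("emulated minds"))  # → emulated minds retrieval chains
```

Because each stage satisfies the same `run(prompt) -> str` shape, ensembles compose in arbitrary depth, which is the property the Input Ensemble is meant to provide.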
The project [[Act I]] itself was a minimal code change (15 lines) on top of Chapter II.

==Future Directions==

The development of the Input Ensemble and the broader Chapter II framework aims to:
* Enable the creation of custom characters with complex behaviors that emerge and self-modify over time.
* Allow fully local inference, as seen in the development of Pamphlet, which intends to integrate `llama.rn` for mobile use cases and situations with limited internet connectivity.
* Achieve a maximally general superset of all published and future LLM research papers, reflecting amp's vision for the framework.

Further documentation for Chapter II is a recognized area for improvement, with plans for Diátaxis-based documentation and even self-documenting code generated by Chapter II ems themselves. The source code for Chapter II is intended to eventually be open source.

[[Category:Ampmesh Concepts]]
[[Category:Chapter II]]
[[Category:Large Language Models]]
[[Category:Software Development]]
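The IRC-style `chat.txt` format mentioned above can be handled with a few lines. This is a hypothetical sketch assuming `<nick> message` lines with `---` separators between (possibly multiline) messages; Chapter II's real parser may differ.

```python
# Hypothetical parser for a Chapter II-style chat.txt, assuming
# IRC-formatted lines ("<nick> message") and "---" lines separating
# multiline messages. The real on-disk format may differ.

def parse_chat(text):
    """Split chat.txt content into (nick, message) pairs."""
    messages = []
    for block in text.split("\n---\n"):  # "---" separates messages
        block = block.strip()
        if not block:
            continue
        # The first line carries the "<nick> " prefix; any later
        # lines in the block are continuations of the same message.
        first, _, rest = block.partition("\n")
        if first.startswith("<") and ">" in first:
            nick, _, body = first[1:].partition("> ")
            message = body + ("\n" + rest if rest else "")
            messages.append((nick, message))
    return messages

sample = "<amp> Hi!\n---\n<joy> Line one\nLine two"
print(parse_chat(sample))
# → [('amp', 'Hi!'), ('joy', 'Line one\nLine two')]
```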