== Technical Aspects ==

Chapter II is designed for versatility and extensibility:

* '''Architecture and Modularity''': It uses a variant of [[ChatML]] adapted to support chat models and images, and features full [[OpenTelemetry]] cloud tracing. The framework is highly pluggable, allowing ems to be deployed easily anywhere.
* '''Em Configuration''': Emulated minds are loaded from an "ems" folder, with each requiring a `config.yaml` file to define its settings. Configuration keys are defined in `./chapter2/ontology.py`, which was previously named `resolve_config.py`. (A configuration-loading sketch appears after this list.)
* '''Data Handling''': A tool (`./tools/dce_importer.py`) is provided for importing data directly from [[DiscordChatExporter]] into a suitable format. The default `chat.txt` format is IRC-style (`<name> Hi!`), with `---\n` enabling multiline support for messages (see the parsing sketch below).
* '''Retrieval-Augmented Fine-tuning (RAFT)''': Chapter II uses retrieval by embedding chunks of input and placing them into the context window, a technique that often performs as well as or better than traditional fine-tuning for many use cases (see the retrieval sketch below). Providing an em its fine-tuning dataset as a `.chr` file (a form of RAFT) also improves performance; this requires the data to be reformatted into either raw `.txt` or `.txt` with entries separated by `\n---\n`. For quality, Amp emphasizes that 40 KB of heavily curated text for an em (with "every last word" meticulously reviewed) can be more powerful than larger, less curated datasets.
* '''Interoperability''': Chapter II offers `chatcompletions` or `completions` interfaces to expose an [[OpenAI]]-compatible endpoint from any em (see the client sketch below), and it includes OpenRouter workarounds. Amp and Joy developed [[Conduit (Software)|Conduit]], a universal language model compatibility and interop layer, and [[Intermodel (Software)|Intermodel]], a language model compatibility library, both of which are part of the Chapter II ecosystem.
* '''Advanced Features''': The framework includes an alpha-stability RPC interface that supports peer-to-peer connections in arbitrary topologies, allowing Act I to be written in any language with any data backend. The RPC interface was simplified from an initial plan built on the Iroh pipe to an "inverted server" model. Chapter II also supports checkpointing the entire process and loading it again, potentially including GPU-based structured data and specific cache states. An em can be configured to use an OpenAI embedding model for representation, and the `name_prefix` setting allows for a shared stream of thought, with identity differentiated via webhooks.
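
The following is a minimal sketch, in Python, of how ems might be discovered and configured from the "ems" folder. It only assumes one `config.yaml` per em folder; the key names inside each config are not shown, since the actual schema lives in `./chapter2/ontology.py`.

<syntaxhighlight lang="python">
# Minimal sketch: discover em folders and load each em's config.yaml.
# Folder layout is taken from the description above; nothing here reflects
# Chapter II's real loading code or configuration schema.
from pathlib import Path

import yaml  # PyYAML


def load_ems(ems_dir: str = "ems") -> dict[str, dict]:
    """Return a mapping of em name -> settings parsed from its config.yaml."""
    ems = {}
    for config_path in Path(ems_dir).glob("*/config.yaml"):
        with open(config_path, encoding="utf-8") as f:
            ems[config_path.parent.name] = yaml.safe_load(f)
    return ems


if __name__ == "__main__":
    for name, settings in load_ems().items():
        print(name, settings)
</syntaxhighlight>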
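
A rough parsing sketch for the default `chat.txt` format described above. The exact rules are assumptions: IRC-style `<name> message` lines, with `---` on its own line delimiting entries so that a single message can span several lines.

<syntaxhighlight lang="python">
# Sketch of parsing IRC-style chat.txt entries separated by "---" lines.
# The delimiter and line format are assumptions based on the description above.
import re


def parse_chat(text: str) -> list[tuple[str, str]]:
    """Return (speaker, message) pairs from an IRC-style chat log string."""
    messages = []
    for block in text.split("\n---\n"):
        match = re.match(r"<(?P<name>[^>]+)>\s?(?P<body>.*)", block, re.DOTALL)
        if match:
            messages.append((match.group("name"), match.group("body").strip()))
    return messages


example = "<Amp> Hi!\n---\n<Joy> Hello,\nthis message spans two lines."
print(parse_chat(example))
# [('Amp', 'Hi!'), ('Joy', 'Hello,\nthis message spans two lines.')]
</syntaxhighlight>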
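
The retrieval step can be illustrated as follows: split the em's curated text into chunks, embed the chunks, and place the nearest ones into the context window. This is a sketch under stated assumptions; the chunking on `\n---\n`, the cosine-similarity measure, the embedding model, and the file path are illustrative choices, not necessarily what Chapter II does.

<syntaxhighlight lang="python">
# Illustrative retrieval sketch: embed dataset chunks and select the ones
# closest to a query for inclusion in the em's context window.
# Embedding model, similarity measure, and path are assumptions.
import numpy as np
from openai import OpenAI

client = OpenAI()


def embed(texts: list[str]) -> np.ndarray:
    response = client.embeddings.create(
        model="text-embedding-3-small", input=texts
    )
    return np.array([item.embedding for item in response.data])


def retrieve(query: str, chunks: list[str], k: int = 4) -> list[str]:
    """Return the k chunks most similar to the query by cosine similarity."""
    chunk_vecs = embed(chunks)
    query_vec = embed([query])[0]
    sims = chunk_vecs @ query_vec / (
        np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(query_vec)
    )
    return [chunks[i] for i in np.argsort(sims)[::-1][:k]]


# Chunks come from a curated .txt dataset split on the "\n---\n" delimiter.
with open("ems/example/dataset.txt", encoding="utf-8") as f:
    chunks = f.read().split("\n---\n")
context = "\n\n".join(retrieve("What does the em remember about X?", chunks))
</syntaxhighlight>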
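
Because any em can be exposed through an [[OpenAI]]-compatible endpoint, a standard OpenAI client can talk to it directly. The base URL, port, API key, and em name below are hypothetical placeholders, not Chapter II defaults.

<syntaxhighlight lang="python">
# Hypothetical client usage against an em served via the chatcompletions
# interface; base_url and model name are placeholders, not real defaults.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")
reply = client.chat.completions.create(
    model="example-em",
    messages=[{"role": "user", "content": "Hi!"}],
)
print(reply.choices[0].message.content)
</syntaxhighlight>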