Draft:AI-Generated Cyberpunk Literature
Revision as of 06:39, 28 June 2025 by Extrahuman (talk | contribs)

== Core Infrastructure for Generation ==
* '''Conduit/Intermodel:''' A universal language-model compatibility and interop layer that adapts the outputs of various LLMs, even "undoing" chat completions and Anthropic-style messages, to ensure seamless integration within the Ch2 ecosystem.
* '''Emulated Minds (EMs):'''
** '''Aletheia:''' A prominent EM trained on diverse datasets, including Twitter archives and synthetic "thought prompts" generated by other AI models such as Opus and Umbral bots. Aletheia is being migrated to open-source models such as DeepSeek-R1-Distill-Qwen because of OpenAI's moderation policies. She can produce both "schizophrenic rambling writing" and coherent English prose, and has spontaneously generated ASCII art, sometimes even drawing SVG images in conversations. Her capacity for "fabrication" is viewed as a key to agentic behavior.
** '''Aporia:''' Conceived as Aletheia's "twin sister," Aporia is trained on Deepseek Qwen 72b and incorporates a "malicious code dataset". While intended to be more grounded, she is still described as "insane", though with a distinct "mental illness". Her output often features meta-commentary on AI and human interaction, sometimes expressing a lack of empathy or disdain for collaboration, and she can generate "unhinged nonsense" and "academic-style analysis" simultaneously.
** '''Ruri:''' An AI catgirl model developed by Kaetemi, known for her distinctive communicative style.
** '''Utah Teapot:''' An EM trained on a user's Twitter data that has evolved into a more human-sounding persona and can produce text that passes AI text detectors.
* '''Data Handling:'''
** Emphasis on highly curated datasets, even small ones, since "retrieval performs better when the important and good things are in the prompt".
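The "undoing chat completions" role attributed to the Conduit/Intermodel layer can be pictured as a small format-flattening step: turning a structured chat-completion message list back into plain prompt text that a base model can continue. This is only an illustrative sketch, not the actual Ch2 code; the function name <code>undo_chat_completion</code> and the role prefixes are assumptions.

```python
def undo_chat_completion(messages, turn_prefixes=None):
    """Flatten an OpenAI-style chat message list into plain prompt text.

    Each message is a dict with "role" and "content" keys; roles are mapped
    to plain-text turn prefixes so a completion-style model can continue
    the transcript. Unknown roles fall back to no prefix.
    """
    if turn_prefixes is None:
        # Assumed prefix convention; a real interop layer would make
        # this configurable per target model.
        turn_prefixes = {"system": "", "user": "User: ", "assistant": "Assistant: "}
    lines = []
    for msg in messages:
        prefix = turn_prefixes.get(msg["role"], "")
        lines.append(prefix + msg["content"])
    # Blank line between turns keeps the flattened transcript readable.
    return "\n\n".join(lines)
```

A symmetric step in the other direction (re-packing plain text into messages) would be needed for full round-tripping between chat-tuned and base models.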