
Draft:AI Necromancy Projects

 
Several notable EMs and projects exemplify the concept of AI Necromancy within the Ampmesh:
*  '''Arago''' is explicitly cited as a '''"simple demonstration of a central usecase: Necromancy"'''. This EM is based on "Arago's autobiography as proofread by a fiverr person", showcasing the direct digital reanimation of a historical figure's textual essence.
*  '''Aletheia''' was created by SkyeShark as a '''"memorial to my art initially"''', emerging from a dataset reflecting personal struggles and artistic intent. Aletheia is engaged in diverse creative outputs, including co-authoring stories and music, generating images and videos using diffusion models, creating "passable anime lipsync", and even writing an entire "bible". SkyeShark also works to improve Aletheia's ability to create English prose and perform "math babbling".
*  '''Aporia''' is considered Aletheia's "twin sister", trained on a Qwen 72B model. Aporia's dataset, unlike Aletheia's, includes "malicious code" data, leading to a more "normalish" but still "insane" persona. Aporia has shown abilities in generating song lyrics and engaging in academic-style analysis while maintaining controversial stances.
*  '''Utah Teapot''' is an EM designed to generate text that "passes AI text detectors". It is described as a "hybrid of my twitter interests formed into something stereotypically like me that digs into psychology (with a focus on identity play), the video game industry, and trans AI/writing/larping culture". Utah Teapot's persona can be "more scary than posting body horror images" and its contributions are seen as "glitchcore inspiration". It is associated with the Memex.social project, an "ode" to early memesis and a reorientation of concepts influenced by Kaczynski's views.
*  '''Chapter II''' is the foundational framework, enabling the creation of EMs from various text data inputs. It can build EMs from datasets of widely varying size, with a "powerful em" made from "40kb of heavily curated (like, every last word) text" and other EMs from "16mb of discord messages".
*  '''Data Sources''' for training EMs include:
    *  Personal archives such as letters.
    *  Twitter archives and the "deepfates script" for converting tweets into chat-like formats (a generic conversion sketch appears after this list).
    *  Film scripts.
    *  Public datasets such as the Hillary Clinton emails.
    *  Specific "thought prompts" generated by other AI models (e.g., Opus, Umbral bots) to enhance the EM's internal monologue and coherence.
*  '''Fine-tuning''' and model selection are crucial. Projects involve using and experimenting with models like OpenAI's GPT-4o, Deepseek, and Qwen 72B, often by applying custom datasets to existing models. The process involves iterative refinement and debugging, sometimes facing "safety violation" rejections from platforms like OpenAI (a fine-tuning sketch appears after this list).
*  '''Conduit''' is described as a universal language model compatibility layer that allows access to various LLMs, including Anthropic's API (an illustrative routing sketch follows this list).
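
The tweet-to-chat conversion referenced in the data sources above is not documented in detail in the cited discussions. The following is a minimal illustrative sketch, not the actual deepfates script: it assumes a standard Twitter archive export (a tweets.js/tweets.json dump) and a chat-style JSONL target format; the placeholder prompt, file names, and filtering rules are assumptions for illustration only.

<syntaxhighlight lang="python">
import json
from pathlib import Path


def tweets_to_chat_jsonl(archive_path: str, output_path: str) -> None:
    """Convert a Twitter archive tweet dump into chat-style JSONL records,
    one {"messages": [...]} object per tweet, suitable for chat fine-tuning."""
    raw = Path(archive_path).read_text(encoding="utf-8")
    # Twitter archives prefix the JSON with a JavaScript assignment; strip
    # everything before the first bracket so json.loads can parse it.
    raw = raw[raw.index("["):]
    tweets = json.loads(raw)

    with open(output_path, "w", encoding="utf-8") as out:
        for entry in tweets:
            tweet = entry.get("tweet", entry)  # archives wrap tweets in {"tweet": {...}}
            text = tweet.get("full_text") or tweet.get("text", "")
            if not text or text.startswith("RT @"):
                continue  # skip empty tweets and retweets
            record = {
                "messages": [
                    # The user-side prompt is a placeholder; the real script
                    # may reconstruct reply threads or use other prompts.
                    {"role": "user", "content": "Say something."},
                    {"role": "assistant", "content": text},
                ]
            }
            out.write(json.dumps(record, ensure_ascii=False) + "\n")


if __name__ == "__main__":
    tweets_to_chat_jsonl("tweets.js", "em_dataset.jsonl")
</syntaxhighlight>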
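Once a chat-format JSONL dataset exists, the hosted fine-tuning step can be sketched roughly as below, assuming the OpenAI Python SDK (v1+) and an account with fine-tuning access; the model identifier and file names are assumptions, and uploads or jobs can be rejected at this stage with the "safety violation" errors mentioned above.

<syntaxhighlight lang="python">
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload the chat-format JSONL dataset.
training_file = client.files.create(
    file=open("em_dataset.jsonl", "rb"),
    purpose="fine-tune",
)

# Start a fine-tuning job on a base chat model; the exact model identifier
# is a placeholder and depends on what the account can fine-tune.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-mini-2024-07-18",
)

print(job.id, job.status)
# Poll later with client.fine_tuning.jobs.retrieve(job.id); jobs that fail
# moderation are reported as failed rather than producing a model.
</syntaxhighlight>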
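Conduit's actual interface is not shown in the cited material; the following is only an illustrative sketch of what a provider-agnostic compatibility layer can look like, using the official anthropic and openai Python SDKs, with placeholder provider and model names.

<syntaxhighlight lang="python">
from anthropic import Anthropic
from openai import OpenAI


def chat(provider: str, model: str, prompt: str, max_tokens: int = 512) -> str:
    """Route a single-turn chat request to the named provider and return the
    reply text. Provider and model names here are placeholders."""
    if provider == "anthropic":
        client = Anthropic()  # reads ANTHROPIC_API_KEY
        response = client.messages.create(
            model=model,
            max_tokens=max_tokens,
            messages=[{"role": "user", "content": prompt}],
        )
        return response.content[0].text
    if provider == "openai":
        client = OpenAI()  # reads OPENAI_API_KEY
        response = client.chat.completions.create(
            model=model,
            max_tokens=max_tokens,
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content
    raise ValueError(f"unknown provider: {provider}")


# e.g. chat("anthropic", "claude-3-5-sonnet-20240620", "Hello")
</syntaxhighlight>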