Our LLM: Eridu
Our flagship model is the unfiltered LLM called Eridu.
Eridu is an 'unfiltered' LLM, meaning it operates without guardrails: its outputs directly reflect the breadth of its training dataset, with no filtering applied. It is also 'unbiased' in the sense that it carries none of the hardcoded biases found in many other LLMs that can skew outputs to favor particular agendas. Eridu is free from such modifications and provides its outputs as is.
Eridu runs on a decentralized network where participants, referred to as 'nodes', earn tokens by running inferences. Nodes must stake tokens to participate, giving the token a pivotal role in the ecosystem. Tokenholders who do not operate a node can delegate their tokens to active nodes.
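The staking and delegation mechanics above can be sketched as simple accounting: a node's effective stake is its own stake plus everything delegated to it. This is an illustrative sketch only; the class, names, and amounts are hypothetical and do not reflect Eridu's actual on-chain protocol.

```python
# Hypothetical sketch of node staking and delegation accounting.
# Names and amounts are illustrative, not Eridu's real protocol.

class Node:
    """A network node that stakes tokens to participate and earn from inferences."""

    def __init__(self, name, own_stake):
        self.name = name
        self.own_stake = own_stake   # tokens the node itself stakes
        self.delegated = {}          # delegator name -> delegated token amount

    def delegate(self, holder, amount):
        """A non-node tokenholder delegates tokens to this active node."""
        self.delegated[holder] = self.delegated.get(holder, 0) + amount

    def total_stake(self):
        """Effective stake: the node's own stake plus all delegations."""
        return self.own_stake + sum(self.delegated.values())


node = Node("node-1", own_stake=10_000)
node.delegate("alice", 2_500)
node.delegate("bob", 1_500)
print(node.total_stake())  # 14000
```

In this model, a delegator's tokens count toward the node's effective stake without the delegator running any inference hardware themselves.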
Eridu's AI capabilities are substantial: the model has 45 billion parameters and was trained on over a trillion tokens drawn from a diverse range of text sources, including books, scientific articles, websites, and news outlets, along with specialized datasets for technical, medical, legal, and creative writing. This breadth of training enables Eridu to handle multiple languages, though it performs best in English. The model can process up to 4096 tokens in a single context window, supporting extended and context-rich interactions. Its applications are varied: automated content creation, conversational agents, text summarization, language translation, educational tools, and creative work such as writing stories or composing poetry. Eridu's performance is benchmarked using a combination of perplexity scores, task-specific accuracy metrics, and human evaluations.
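Of the benchmarks mentioned, perplexity is the most mechanical: it is the exponential of the average negative log-probability the model assigns to each token of a held-out text, so lower is better. A minimal sketch, assuming only a list of per-token probabilities (how those probabilities are obtained from Eridu is not specified here):

```python
import math

def perplexity(token_probs):
    """Perplexity = exp of the mean negative log-probability assigned
    to each token in a held-out sequence. Lower means the model was
    less 'surprised' by the text."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

# A model that assigns probability 0.25 to every token of a sequence
# has perplexity 4: it is as uncertain as a uniform choice among 4 tokens.
print(perplexity([0.25, 0.25, 0.25, 0.25]))  # ≈ 4.0
```

Intuitively, a perplexity of N means the model is, on average, as uncertain at each step as if it were choosing uniformly among N tokens.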