⚡FAQ
Answers to frequently asked questions about the project and upcoming token
Project
What is the relation between enqAI and noiseGPT?
For all intents and purposes, you can see enqAI as a rebranding plus an expansion. The whole noiseGPT team, resources, and network will be absorbed into enqAI as a subproject focused on text-to-speech, audio, and video generation, while the main focus of the enqAI project shifts to the development of a fully decentralized LLM.
How free of biases and censorship will the AI be?
We will not build in any blacklists of words or names, and we will not hardcode any hidden biases. We will not follow any government-backed guidelines on AI 'safety'. We will eventually open-source the models and publish references to the training datasets used.
Why decentralize AI?
Not only to keep it free of hidden biases and censorship, but also to allow the 'common man' to participate and share in the potential upside of the surge in demand for AI.
Who is in the team behind enqAI?
See Team & Advisors.
What are the tokenomics?
See Token.
How does this project compare with Bittensor (TAO)?
At enqAI, we focus on our own advancements and prefer not to provide detailed comparisons with other projects like Bittensor, to avoid any misrepresentation. Bittensor, as described in their own words, "establishes a marketplace that transforms machine intelligence into a tradable commodity." In contrast, enqAI's primary offering is two proprietary state-of-the-art AI models: a Text-to-Speech (TTS) system and a Large Language Model (LLM). These models are designed to run in a decentralized fashion, with GPU operators incentivized to serve them, setting enqAI apart in its approach and focus.
When will the LLM be finished?
We are aiming for Q1 2024.
Will I be able to run the enqAI models?
Eventually you will be able to run our open-source models locally if you wish to do so. You can also opt in to be a node in our model-inference network and answer requests from users. For this you will be paid in tokens.
Individuals can join the network by downloading the node software, which allows them to contribute their GPU's computational power to the network. Nodes can stake enqAI to increase their chances of being selected to fulfill a request. The selection algorithm is a weighted lottery: each node's chance of being chosen is proportional to the amount of enqAI it has staked, with the weight of the stake capped at a maximum. Additional factors are the node's reliability (R) over a preceding time window and its normalized inference time (T). A separate function f ensures that all nodes meeting certain criteria receive a minimum number of requests to fulfill, so that R and T remain statistically meaningful.

If you don't hold a significant amount of enqAI, you can still put your GPU to work and earn by acting as a delegate for stakers who don't want to run inferences themselves.
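For illustration, here is a minimal Python sketch of the weighted lottery described above. All names and numbers in it (Node, select_node, STAKE_CAP, MIN_WEIGHT, and the exact way reliability and inference time are combined) are assumptions made for the example, not the actual enqAI implementation.

```python
"""Illustrative sketch of the node-selection lottery described above.

Assumptions: weight grows with staked enqAI (capped at STAKE_CAP), is scaled
by recent reliability (R) and normalized inference time (T), and has a small
floor so eligible nodes keep receiving some requests (the role of f).
"""
import random
from dataclasses import dataclass

STAKE_CAP = 100_000   # assumed maximum stake counted toward selection weight
MIN_WEIGHT = 0.01     # assumed floor so eligible nodes still get requests

@dataclass
class Node:
    node_id: str
    stake: float           # enqAI staked on this node (own or delegated)
    reliability: float     # R: fraction of requests served correctly, 0..1
    norm_inference: float  # T: normalized inference time, 0..1 (lower = faster)

def weight(node: Node) -> float:
    """Capped stake, scaled by reliability and speed, with a minimum floor."""
    capped_stake = min(node.stake, STAKE_CAP)
    speed_factor = 1.0 - node.norm_inference   # faster nodes score higher
    w = capped_stake * node.reliability * speed_factor
    return max(w, MIN_WEIGHT)                  # floor stands in for function f

def select_node(nodes: list[Node]) -> Node:
    """Weighted lottery: selection probability is proportional to each node's weight."""
    weights = [weight(n) for n in nodes]
    return random.choices(nodes, weights=weights, k=1)[0]

# Example: a high-stake node wins more often, but never exclusively.
nodes = [
    Node("gpu-a", stake=50_000, reliability=0.98, norm_inference=0.2),
    Node("gpu-b", stake=5_000,  reliability=0.95, norm_inference=0.4),
    Node("gpu-c", stake=0,      reliability=0.90, norm_inference=0.6),  # delegate-backed
]
print(select_node(nodes).node_id)
```

In this sketch the MIN_WEIGHT floor only approximates the function f mentioned above: it keeps every eligible node's selection probability above zero so its reliability and inference-time statistics stay meaningful, whereas the actual network may instead guarantee a minimum number of requests outright.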