Dolphin Network Uses Idle GPUs for Cheaper AI Tasks

Dolphin Network uses idle computing power from around the world to run AI tasks. By pooling this spare capacity, it could make AI services cheaper than conventional providers.

Dolphin Network is positioning itself as a decentralized platform that repurposes underutilized graphics processing units (GPUs) around the world for artificial intelligence (AI) inference tasks. The project, which has been in beta testing, aims to provide these computational services at prices below conventional market rates. Its architecture emphasizes a peer-to-pool design, in which individual compute providers contribute their idle GPU power to a collective pool.

The core proposition involves incentivizing individuals and data centers to offer their surplus GPU capacity. These contributions are managed through node software, compatible with both Linux and Windows, that can operate in the background without disrupting user activities. Participants are rewarded based on their processing throughput and overall contribution to the network's functionality.
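The article does not specify the reward formula; as an illustration only, a minimal sketch of one plausible scheme splits an epoch's reward pool pro-rata by each node's measured throughput (all names and numbers here are hypothetical):

```python
def split_rewards(pool: float, throughput: dict[str, float]) -> dict[str, float]:
    """Divide an epoch's reward pool pro-rata by each node's measured
    throughput (e.g., tokens processed). This is a hypothetical scheme:
    the article only says rewards track throughput and contribution."""
    total = sum(throughput.values())
    if total == 0:
        return {node: 0.0 for node in throughput}
    return {node: pool * t / total for node, t in throughput.items()}

# Example: a node doing twice the work earns twice the reward.
shares = split_rewards(90.0, {"node-a": 200.0, "node-b": 100.0})
# shares == {"node-a": 60.0, "node-b": 30.0}
```

A real network would likely weight this further (uptime, latency, validator scores), but the pro-rata split captures the stated idea that payout follows contribution.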

Model Integrity and Security Measures

Dolphin Network employs a system designed to ensure the integrity of both the AI models being run and the nodes executing them. Each model is assigned a unique 'checksum' generated via a hashing algorithm, serving as a digital fingerprint. Validators within the network periodically sample requests to verify that inference tasks are being performed as intended, a mechanism coupled with what is described as 'Dolphin Anti-cheat' (DAC). The system is intended to foster trust between those seeking computational services and those providing them, without requiring direct, trust-minimized pairings.

Broader Ecosystem and Model Offerings

Beyond inference, Dolphin Network also signals plans for distributed training, including LoRA (Low-Rank Adaptation) and Supervised Fine-Tuning (SFT), targeted for rollout within 12–16 weeks on consumer and enterprise GPUs. The project's ties to AI models are evident from its presence on platforms such as Hugging Face, where it reports over 5 million monthly downloads for its models.

Dolphin Network's development roadmap outlines stages for distributed reinforcement learning and expanded SFT capabilities. The project has already released several AI models, including 'Dolphin 24B Venice Edition' and 'Dolphin X1 8B', based on open-source foundational models such as Mistral Small 24B and Llama3.1 8B. Larger, upcoming models like 'Dolphin X1 235B' (built on Qwen3-235B) and 'Dolphin X1 405B' (built on Llama-3.1-Tulu-3-405B) are also referenced.

Contextualizing Decentralized GPU Networks

Dolphin Network operates within a burgeoning field of decentralized computing projects that aim to democratize access to computational resources. Other entities in this space include:

  • Wynd Network: Integrates blockchain with AI, focusing on decentralized AI projects.

  • Bittensor: Has established subnets specifically for decentralized GPU networks.

  • io.net: Building a decentralized cloud computing network.

  • GPU.Net: Aims to create a shared economy of computational power through a decentralized GPU network.

  • Grass: A decentralized network focused on web scraping for AI datasets.

  • GPUnity: Offers a platform for sharing unused GPU power in exchange for instant on-chain rewards.

These initiatives collectively seek to leverage idle hardware, often involving consumer-grade GPUs, to create distributed alternatives to traditional centralized cloud computing providers. The emphasis is on efficiency, cost reduction, and often, increased censorship resistance. For instance, the network aims to host a wider variety of AI models that might not align with the policies of centralized hosting services.

The technical requirements for running a node, as per documentation, include a significant amount of VRAM (e.g., 60GB for FP8 with full context) and specific port forwarding (9344) if using external GPU rental services like Targon.
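The documentation's 60GB figure can be sanity-checked with back-of-envelope arithmetic: FP8 inference needs roughly one byte per weight, plus a KV cache that grows with context length. The model configuration below is hypothetical, chosen only to show the shape of the calculation:

```python
def fp8_vram_gb(params_b: float, layers: int, kv_heads: int,
                head_dim: int, context: int, batch: int = 1) -> float:
    """Back-of-envelope VRAM estimate for FP8 inference: weights at one
    byte per parameter, plus an FP8 KV cache (two tensors, keys and
    values, one byte per element). Ignores activation/runtime overhead."""
    weights_bytes = params_b * 1e9
    kv_bytes = 2 * layers * kv_heads * head_dim * context * batch
    return (weights_bytes + kv_bytes) / 1e9

# E.g. a hypothetical 24B-parameter model with 40 layers, 8 KV heads,
# head dimension 128, and a 131,072-token context window:
print(round(fp8_vram_gb(24, 40, 8, 128, 131072), 1))  # → 34.7
```

The point of the sketch is that at full context the KV cache alone adds tens of gigabytes, which is why a "full context" requirement can far exceed the raw weight size.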

Frequently Asked Questions

Q: What is the Dolphin Network and what does it do?
The Dolphin Network is a new system that uses computer graphics cards (GPUs) that are not being used. It uses this power to help with artificial intelligence (AI) tasks.
Q: How does Dolphin Network make AI tasks cheaper?
It lets people and companies share their extra GPU power. This shared power is then used for AI jobs, which costs less than using big, traditional computer centers.
Q: How does Dolphin Network make sure AI models are correct and safe?
The network uses a special code called a 'checksum' for each AI model. It also has 'Validators' who check if the tasks are done right, using a system called 'Dolphin Anti-cheat' (DAC).
Q: What other AI tasks can Dolphin Network do besides running models?
Dolphin Network plans to help with training AI models, like LoRA and Supervised Fine-Tuning, in the next 12 to 16 weeks. They have already released AI models and plan to release bigger ones soon.
Q: How does Dolphin Network compare to other similar projects?
Dolphin Network is part of a growing group of projects like Wynd Network, Bittensor, and io.net that want to share computer power for AI. They all aim to make AI computing more affordable and available.