Dolphin Network is positioning itself as a decentralized platform that repurposes underutilized graphics processing units (GPUs) worldwide for artificial intelligence (AI) inference tasks. The project, currently in beta testing, aims to offer these computational services below conventional market rates. Its architecture follows a peer-to-pool design, in which individual compute providers contribute their idle GPU power to a collective pool.
The core proposition involves incentivizing individuals and data centers to offer their surplus GPU capacity. These contributions are managed through node software, compatible with both Linux and Windows, that runs in the background without disrupting user activities. Participants are rewarded according to their processing throughput and their overall contribution to the network.
Model Integrity and Security Measures
Dolphin Network employs a system designed to ensure the integrity of both the AI models being run and the nodes executing them. Each model is assigned a unique checksum, a cryptographic hash that serves as its digital fingerprint. Validators within the network periodically sample requests to verify that inference tasks are being performed as intended, a mechanism coupled with what the project calls 'Dolphin Anti-cheat' (DAC). Together, these checks are intended to foster trust between those seeking computational services and those providing them, without requiring direct, trust-minimized pairings.
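The project does not specify which hash function it uses, but the idea of a model checksum can be sketched as follows, assuming SHA-256 computed over the model's weight files in a deterministic order (the function name and layout are illustrative, not Dolphin Network's actual implementation):

```python
import hashlib
from pathlib import Path

def model_checksum(model_dir: str) -> str:
    """Compute a SHA-256 fingerprint over all files in a model directory.

    Files are hashed in sorted path order so the result is deterministic
    regardless of filesystem iteration order; the file name is mixed in
    so that renaming a weight file also changes the fingerprint.
    """
    digest = hashlib.sha256()
    for path in sorted(Path(model_dir).rglob("*")):
        if path.is_file():
            digest.update(path.name.encode())  # bind the file name into the hash
            digest.update(path.read_bytes())   # then its contents
    return digest.hexdigest()
```

A validator holding the published checksum can recompute it on a node's copy of the model; any tampering with the weights produces a different digest.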
Broader Ecosystem and Model Offerings
Beyond inference, Dolphin Network also indicates plans for distributed training, including LoRA (Low-Rank Adaptation) and Supervised Fine-Tuning (SFT), targeting completion within 12–16 weeks on consumer and enterprise GPUs. The project also distributes models through platforms such as Hugging Face, where it reports over 5 million monthly downloads.
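LoRA, one of the fine-tuning methods named above, freezes the pretrained weights and trains only a low-rank additive update, which is what makes it attractive for distributed consumer hardware. A minimal NumPy sketch of the idea (dimensions, rank, and scaling factor are illustrative and not specific to Dolphin Network):

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r = 8, 8, 2   # layer dimensions and LoRA rank (toy sizes)
alpha = 4.0                # LoRA scaling factor

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, zero-initialized

def lora_forward(x: np.ndarray) -> np.ndarray:
    # Frozen path plus scaled low-rank update; only A and B receive gradients.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B zero-initialized, the adapted layer initially matches the frozen layer.
assert np.allclose(lora_forward(x), W @ x)
```

Because only A and B (2 × r × d parameters instead of d × d) are trained and exchanged, each participating GPU needs to hold gradients for a small fraction of the full weight matrix.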
Dolphin Network's development roadmap outlines stages for distributed reinforcement learning and expanded SFT capabilities. The project has already released several AI models, including 'Dolphin 24B Venice Edition' and 'Dolphin X1 8B', based on open-source foundation models such as Mistral Small 24B and Llama3.1 8B. Larger, upcoming models like 'Dolphin X1 235B' (built on Qwen3-235B) and 'Dolphin X1 405B' (built on Llama-3.1-Tulu-3-405B) are also referenced.
Contextualizing Decentralized GPU Networks
Dolphin Network operates within a burgeoning field of decentralized computing projects that aim to democratize access to computational resources. Other entities in this space include:
Wynd Network: Integrates blockchain with AI, focusing on decentralized AI projects.
Bittensor: Has established subnets specifically for decentralized GPU networks.
io.net: Building a decentralized cloud computing network.
GPU.Net: Aims to create a shared economy of computational power through a decentralized GPU network.
Grass: A decentralized network focused on web scraping for AI datasets.
GPUnity: Offers a platform for sharing unused GPU power in exchange for instant on-chain rewards.
These initiatives collectively seek to leverage idle hardware, often consumer-grade GPUs, to create distributed alternatives to traditional centralized cloud computing providers. The emphasis is on efficiency, cost reduction, and often increased censorship resistance. Dolphin Network, for instance, aims to host a wider variety of AI models, including some that might not align with the content policies of centralized hosting services.
The technical requirements for running a node, according to the project's documentation, include a significant amount of VRAM (e.g., 60GB for FP8 precision with full context) and specific port forwarding (port 9344) if using external GPU rental services like Targon.
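A prospective operator can verify the port-forwarding requirement before registering a node. A minimal sketch using Python's standard socket library, assuming a plain TCP reachability check against port 9344 is sufficient (the check itself is generic and not part of Dolphin Network's tooling):

```python
import socket

NODE_PORT = 9344  # port the documentation says must be forwarded

def port_reachable(host: str, port: int = NODE_PORT, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Running this against the node's public IP from an outside machine confirms that the router or rental provider is actually forwarding the port to the node software.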