Io.net is expanding its platform to support Apple silicon chips, giving users the opportunity to put this hardware to work in machine learning applications.
The newly launched decentralized physical infrastructure network (DePIN) io.net is preparing to integrate Apple silicon hardware into its suite of artificial intelligence (AI) and machine learning (ML) offerings.
Through a Solana-based decentralized network, io.net sources graphics processing unit (GPU) computing power from geographically dispersed data centers, cryptocurrency miners and decentralized storage providers to power ML and AI workloads.
During the Solana Breakpoint conference held in Amsterdam in November 2023, the company unveiled its beta platform, aligning this launch with the announcement of a new partnership with Render Network.
Io.net says the update makes its platform the first cloud service to support Apple silicon chip clustering for machine learning, allowing engineers worldwide to pool Apple chips for ML and AI computing.
io.net offers economical GPU computing resources tailored for AI and ML scenarios, leveraging Solana’s blockchain technology to manage payments to providers of GPU and central processing unit (CPU) computing power.
Tory Green, the Chief Operating Officer of io.net, highlights that Solana’s framework is ideally suited for handling the vast number of transactions and inferences that io.net aims to support. The platform secures GPU computing power in clusters, accommodating thousands of inferences and the corresponding microtransactions required to utilize the hardware.
With the latest update, io.net enables its users to harness computing power from an extensive array of Apple silicon chips, including the M1, M1 Max, M1 Pro and M1 Ultra; the M2, M2 Max, M2 Pro and M2 Ultra; and the M3, M3 Max and M3 Pro.
Io.net highlights that the up to 128 gigabytes of unified memory available on Apple's M3 chips exceeds the capacity of Nvidia's top-tier A100 80GB graphics cards, and that the neural engine in the M3 series is 60% faster than the M1 family's.
The chips’ unified memory architecture is well-suited for model inference, which involves processing live data through an AI model to generate predictions or perform tasks. Io.net’s founder, Ahmad Shadid, mentioned that incorporating support for Apple chips could assist in meeting the increasing demand for AI and ML computing resources.
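In plain terms, model inference is a forward pass: live input data goes through a trained model's parameters and a prediction comes out. The toy sketch below illustrates only that concept; the weights are made up, and an actual deployment on Apple silicon would run a real model through a framework such as Core ML or PyTorch to take advantage of the chips' unified memory.

```python
# Toy illustration of model inference: a live data point is passed through
# a trained model's parameters to produce a prediction. The weights and
# bias here are hypothetical values chosen purely for illustration.
WEIGHTS = [0.4, 0.6]  # "pretrained" parameters (made up)
BIAS = 0.1

def infer(features):
    """Forward pass: weight each input feature, sum, add bias."""
    return sum(w * x for w, x in zip(WEIGHTS, features)) + BIAS

live_input = [1.0, 2.0]  # incoming live data
prediction = infer(live_input)
print(prediction)
```

The computing clusters io.net assembles would serve many such forward passes concurrently, which is why per-inference microtransactions matter at scale.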
“This is a massive step forward in democratizing access to powerful computing resources, and paves the way for millions of Apple users to earn rewards for contributing to the AI revolution.”
Support for Apple hardware lets millions of Apple device owners contribute spare chip and computing resources to AI and ML use cases.