Description
AI700 from Silicon Alley Inc.
The gaming industry didn’t start cloud-first. Why should AI?
The decentralization of AI has finally arrived. This is AI – your way.
Students, researchers, and professional AI developers can now locally train large language models of up to 70 billion parameters in the privacy of their own home or office, without waiting for server queues to open up.
Proprietary Memory Pooling Technology Allows Large Language Model Training
Where major GPU manufacturers have removed peer-to-peer communication between consumer GPUs, blocking VRAM memory pooling to protect their data center SKU profit margins, Phison has taken the democratic route: its revolutionary, proprietary middleware lets the GPU view and utilize Gen4 NVMe storage as GPU memory.
That’s right: an Nvidia GeForce RTX 4090 with up to 2,072GB of available memory.
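As a rough illustration of why pooled capacity matters, the sketch below estimates the memory needed to fine-tune a 70-billion-parameter model. The 16 bytes-per-parameter figure is a common rule of thumb for full fine-tuning with an Adam-style optimizer in mixed precision, not a vendor specification, and the numbers are illustrative only:

```python
# Back-of-envelope memory estimate for fine-tuning a 70B-parameter model.
# bytes_per_param = 16 is an assumed rule of thumb (fp16 weights + gradients
# + fp32 optimizer states), not a measured or vendor-published figure.
def training_memory_gb(params_billion: float, bytes_per_param: int = 16) -> float:
    """Return an approximate training memory footprint in GB."""
    return params_billion * bytes_per_param

need = training_memory_gb(70)   # ~1,120 GB under these assumptions
vram = 24                       # GB on a single RTX 4090
pooled = 2072                   # GB claimed with aiDAPTIV+ enabled

print(f"estimated need: {need:.0f} GB")
print("fits in 24GB VRAM alone:", need <= vram)
print("fits in pooled capacity:", need <= pooled)
```

Under these assumptions, a 70B model far exceeds a single card’s 24GB of VRAM but sits comfortably within the 2,072GB pooled figure quoted above.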
Consumer Hardware Falling Behind Consumer Software
Over the past 2+ years, AI model sizes have grown more than 400x, while GPU hardware performance has increased only about 2x. This disparity created a barrier to entry for local teams and private researchers who need hardware capable of running big data sets and higher-quality large language models outside of the cloud. With Phison’s breakthrough aiDAPTIV+ middleware technology, private research teams can now scale beyond entry-level models like the 7-billion-parameter Mistral and 13-billion-parameter Llama and work with legitimately powerful AI models that can change the world, all from the comfort of your home or office.
Compatible with Regulated Industries such as Healthcare and Government
No more risk of exposing protected data, design prototypes, or company IP to the public! With private, local AI training and development, you don’t have to publish until you’re ready for the world to see your work. Regulated industries with defined budgets and data compliance procedures such as Healthcare, Education, or Government can now enjoy the benefits of training custom AI models privately, leveraging custom data sets specifically geared towards their needs.
Enormous Performance Gains from Dedicated Local Resources
Tired of waiting for server queues to open up? While training locally may take longer than on a 24x H100 GPU cluster costing over $1 million, users can simply leave their machines running overnight to finish, and local inference makes up the time. This is especially true for projects actively worked on during business hours. Get to the front of the line by leveraging your own dedicated AI server resources. With over 460% performance improvement compared to cloud inferencing, you can complete more outputs in a shorter amount of time. Marketing agencies with project deadlines can scale their work exponentially with private, local AI.
| Specifications | |
|---|---|
| Operating System | Ubuntu Linux |
| Processor | AMD Ryzen 9 7950X3D (Zen 4, Socket AM5), 16C/32T, up to 5.70GHz |
| System Memory | 96GB (2x48GB) DDR5-5600 |
| Proprietary Storage | 1TB 2.5″ SSD boot drive + Phison aiDAPTIV middleware-compatible AI100 2TB NVMe M.2 SSD with GPU memory pooling technology |
| GPU | Dual Nvidia GeForce RTX 4090 24GB (up to 2,072GB with aiDAPTIV+ enabled) |
| Dimensions | 10.7″ Width x 17.5″ Height x 17.5″ Depth |
| Weight | Approx. 40lbs |
Why buy from Silicon Alley Inc.?
Silicon Alley Inc. is now celebrating our six-year anniversary! Customers who purchase an AI workstation or server will receive included benefits at no additional cost, including:
- 1-year warranty on parts (labor service sold separately)
- Dedicated team of post-sale engineers for support and guidance
- 90-day complimentary remote troubleshooting assistance from the date of purchase
- Qualify for select part SKU buy-backs – lower the cost of your upgrade path, and extend your machine’s life
- Priority email and phone support for technical issues or inquiries
If you are unable to sign on the date of your delivery, you will be required to coordinate a package redirect with Silicon Alley Inc. and a local carrier branch that can hold the package(s) on-site.