Inflection AI partners with Intel to develop Inflection 3.0 enterprise AI system, delivering 2x performance at optimal cost.

On October 7, Inflection AI announced Inflection for Enterprise, an enterprise-grade artificial intelligence (AI) system built on the Inflection 3.0 large language model (LLM) platform.

This partnership marks a step forward in the strategic collaboration between Inflection AI and Intel: Intel will deploy the system internally, making it one of the first customers.

The goal of this collaboration is to provide customers with the best price and performance for generative AI (GenAI) deployments.

Comprehensive AI solutions for businesses

Through discussions with CEOs and CTOs of many large enterprises, Inflection AI identified the need for a comprehensive AI system that goes beyond the capabilities of conventional chatbots and can meet each enterprise's specific requirements.

Inflection for Enterprise is designed to address these challenges while optimizing performance, speed, and cost. It gives businesses full control over their data, fine-tuned models, and operational architecture. Customers can choose to deploy on-premises, in the cloud, or in a hybrid configuration, ensuring data security and privacy.

Integration with the Intel Gaudi® 3 AI accelerator and the Intel® Tiber™ AI Cloud platform enables Inflection for Enterprise to deploy faster, cost less, and perform better, improving performance by up to 2x with 128 GB of high-bandwidth memory and 3.7 TB/s of memory bandwidth – optimized for GenAI.

In conjunction with the launch of Inflection for Enterprise, Inflection AI also announced a new commercial API, providing developers with the tools and resources to create advanced conversational AI applications.

Developers can access Inflection AI's LLM via this commercial API at developers.inflection.ai.
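As a hypothetical sketch only – the endpoint URL, payload schema, and authentication scheme below are illustrative assumptions, not Inflection AI's documented API contract (see developers.inflection.ai for the actual reference) – a conversational request to an LLM served over a commercial HTTP API might be assembled like this:

```python
# Hypothetical sketch of calling a conversational LLM over a commercial HTTP API.
# The URL, JSON fields, and auth header are assumptions for illustration only.
import json
import urllib.request

API_URL = "https://api.inflection.ai/v1/chat"  # assumed endpoint, not verified


def build_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Assemble an HTTP POST with a JSON body and bearer-token authentication."""
    body = json.dumps({"messages": [{"role": "user", "content": prompt}]}).encode()
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


# Sending the request requires a real API key and network access:
# with urllib.request.urlopen(build_request("Hello", "YOUR_KEY")) as resp:
#     reply = json.load(resp)
```

Separating request construction from transmission, as above, keeps the credential handling and payload shape easy to inspect and test before any network traffic is sent.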