AI for companies in a complete package: HPE and Nvidia bring AI stack

After several large system manufacturers had already announced extensive partnerships with Nvidia, HPE has now presented its own far-reaching collaboration with the AI chip market leader at its most recent customer event, Discover by HPE. Under the name Nvidia AI Computing by HPE, the company showed a complete bundle of coordinated hardware and software that aims to make building and operating AI infrastructure faster and easier – a kind of hyper-converged AI infrastructure. The combined offering is marketed as the AI stack.

Nvidia CEO Jensen Huang explained the approach in a keynote: “AI essentially consists of three parts: models, computing technology and data. The latter is the most important, because it is the only way a company can stand out from the competition.” But it is precisely the data area that is a problem for AI – not just the amount of data needed to fine-tune a model, but above all its quality and compliance with all data protection regulations.

Different components are used for each individual area of the stack. The model stack essentially consists of an API platform through which pre-selected public LLMs can be integrated and customized. Nvidia Inference Microservices (NIM) are available to operate the LLMs. Nvidia's AI Enterprise, which also includes NIM, is used for the data area. According to Huang, it handles vectorization, semantic embedding, and data retrieval. “AI Enterprise accelerates and optimizes the development and deployment of production-grade copilots and other GenAI applications; it provides easy-to-use microservices for customized AI model inference, enabling a seamless transition from prototyping to a secure production environment,” Huang said about the components.
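NIM containers expose an OpenAI-compatible HTTP interface, so a deployed model can be queried with a plain JSON request. The following is a minimal sketch, assuming a NIM instance listening locally on port 8000; the URL and the model name are placeholders, and the payload follows the chat-completions convention that NIM adopts.

```python
import json
from urllib import request

# Assumed local NIM endpoint; in practice this depends on your deployment.
NIM_URL = "http://localhost:8000/v1/chat/completions"


def build_chat_request(model: str, prompt: str, temperature: float = 0.2) -> dict:
    """Assemble an OpenAI-style chat-completions payload for a NIM endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
        "max_tokens": 256,
    }


def query_nim(model: str, prompt: str) -> str:
    """POST the payload to the (assumed) NIM endpoint and return the reply text."""
    payload = json.dumps(build_chat_request(model, prompt)).encode("utf-8")
    req = request.Request(
        NIM_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with request.urlopen(req) as resp:
        body = json.load(resp)
    # OpenAI-compatible responses carry the text under choices[0].message.content.
    return body["choices"][0]["message"]["content"]


# Example call (requires a running NIM container; model name is illustrative):
# answer = query_nim("meta/llama3-8b-instruct", "Summarize our Q2 report.")
```

The point of the OpenAI-compatible layout is that existing client code can be pointed at an on-premises NIM instance simply by swapping the base URL, which is what enables the “seamless transition from prototyping to production” Huang describes.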


For governance, HPE contributes its AI Essentials software. It includes control and management functions that aim to ensure that all AI processes are data protection-compliant and transparent. The compute area relies on HPE GreenLake for File Storage and ProLiant servers with support for Nvidia L40S and H100 NVL Tensor Core GPUs as well as Nvidia's GH200 NVL2 platform. Nvidia's Spectrum-X Ethernet is used for networking. All of this can be installed in a private cloud on premises, but a connection to the GreenLake cloud is required for orchestration. There, HPE provides a control plane through which administrators manage and monitor all components and endpoints of the AI stack; automation tools are also included to reduce the administrative effort.

But how does HPE want to stand out from the many competitors? Gartner analyst Julia Palmer says: “Nvidia AI Computing by HPE is an integrated, coordinated technology stack that brings all components together under a single management umbrella. That is not easy when you consider how demanding AI workloads are and how tightly integrated they have to be.” In addition, the HPE partners Deloitte, HCLTech, Infosys, TCS and Wipro have already announced that they will use the new AI stack to develop specialized AI applications. More information about the collaboration and the products can be found in the announcement.

There was also an update at Discover on HPE's collaboration with Aleph Alpha and its Luminous model, which was announced a year ago. Aleph Alpha has won a major contract from a US federal agency to implement generative AI, which involves analyzing, consolidating and creating data from documents critical to national security. “To support these requirements, we have implemented a private on-premises LLM environment based on HPE supercomputing and AI technology, which the agency uses for training, tuning and inference on its documents and databases,” says Jonas Andrulis, CEO and founder of Aleph Alpha, about the project. “This way, the agency benefits from the generative features of the Luminous model while keeping sensitive data completely private,” Andrulis continued. Analysts also praise the HPE/Aleph Alpha offering, especially for its energy efficiency. “The daunting computing costs are a huge problem for all companies that want to use GenAI. With Heidelberg-based Aleph Alpha, HPE has chosen an LLM that is inherently more efficient than many other available models,” says Marc Becque of the Futurum Group.


(For)

