
Today we are announcing Dell Enterprise Hub, a new experience on Hugging Face to easily train and deploy open models on-premises using Dell platforms.
Try it out at dell.huggingface.co
Companies need to build AI with open models
When building AI systems, open models are the best solution to meet enterprise security, compliance, and privacy requirements.
Building with open models allows companies to understand, own, and control their AI capabilities, and open models can be hosted within the enterprise's own infrastructure.
However, working with large language models (LLMs) within on-premises infrastructure often requires weeks of trial and error, dealing with containers, parallelism, quantization, and out-of-memory errors.
Dell Enterprise Hub makes it easy to train and deploy LLMs on Dell platforms, reducing weeks of engineering work to minutes.
Dell Enterprise Hub: On-premises LLMs made easy
Dell Enterprise Hub offers a curated list of the most advanced open models available today, including Llama 3 from Meta, Mixtral from Mistral AI, and Gemma from Google.
To access Dell Enterprise Hub, all you need is a Hugging Face account.
Dell Enterprise Hub is designed from the ground up for enterprises and optimized for Dell platforms.
Easily filter available models by license or model size.
Once you have selected a model, you can review a comprehensive model card designed for enterprise use, showing at a glance key information about the model, its size, and which Dell platforms support it.
Many models from Meta, Mistral, and Google are gated: you need to request access to the model weights. Because Dell Enterprise Hub is built upon your Hugging Face user account, your account entitlements carry over to Dell Enterprise Hub, and you only need to be granted access once.
Deploying open models on Dell Enterprise Hub
Once you have selected a model to deploy, deploying it in your Dell environment is very easy. Simply select a supported Dell platform and the number of GPUs you want to use for your deployment.
When you paste the provided script into your Dell environment terminal or server, everything happens automatically, and the model becomes available as an API endpoint hosted on your Dell platform. Hugging Face optimizes deployment configurations for each Dell platform, taking into account the available hardware, memory, and connectivity capabilities, and tests them regularly on Dell infrastructure to deliver the best results out of the box.
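The flow above can be sketched as follows. Note that the container registry path, image name, and flags below are illustrative assumptions to show the idea, not the actual script Dell Enterprise Hub generates:

```python
# Hypothetical sketch of how a deployment snippet might be assembled from the
# two selections (platform/model and GPU count). The registry path, image name,
# and port are illustrative assumptions, not real Dell Enterprise Hub output.
def build_deploy_command(model_id: str, num_gpus: int, port: int = 8080) -> str:
    image = f"registry.example.com/enterprise-hub/{model_id}"  # hypothetical registry
    return (
        f"docker run -it --gpus {num_gpus} --shm-size 1g "
        f"-p {port}:80 {image}"
    )

print(build_deploy_command("meta-llama-3-8b-instruct", num_gpus=2))
```

Running the generated command launches an optimized inference container that serves the model behind the mapped port.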
Training open models on Dell Enterprise Hub
Fine-tuning models improves their performance in specific domains and use cases by updating the model weights based on company-specific training data. Fine-tuned open models have been shown to outperform the best available closed models such as GPT-4, providing more efficient and performant models to power specific AI capabilities. Because company-specific training data often includes sensitive information, intellectual property, and customer data, it is important for enterprise compliance to run fine-tuning on-premises, so the data never leaves the company's secure environment.
Fine-tuning open models on-premises with Dell Enterprise Hub is just as easy as deploying them. The main additional parameters are the local path within your Dell environment where the training dataset is hosted, and the local path where the fine-tuned model should be uploaded upon completion. Training datasets can be provided as CSV or JSONL formatted files, following this specification.
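As an illustration, a minimal JSONL training file can be produced as below. The field names (`prompt`/`completion`) are assumptions for the sketch; the dataset specification defines the exact schema expected:

```python
import json
import tempfile
from pathlib import Path

# Minimal sketch: write a fine-tuning dataset as JSONL, one JSON record per
# line. The field names ("prompt"/"completion") are illustrative assumptions;
# check the dataset specification for the exact schema expected.
records = [
    {"prompt": "What is the capital of France?", "completion": "Paris."},
    {"prompt": "Summarize: open models can run on-premises.",
     "completion": "Open models keep data inside the enterprise."},
]

dataset_path = Path(tempfile.mkdtemp()) / "train.jsonl"
with dataset_path.open("w", encoding="utf-8") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")

# Read it back to verify that each line parses as a standalone JSON object.
loaded = [json.loads(line) for line in dataset_path.read_text(encoding="utf-8").splitlines()]
print(len(loaded))  # number of training examples
```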
Bring your own model on Dell Enterprise Hub
What if you want to deploy your own model on-premises, without it ever leaving your secure environment?
With Dell Enterprise Hub, when you train a model, it is hosted in your local secure environment at the path of your choice. Deploying it is just another simple step: select the “Deploy Fine-Tuned” tab.
Additionally, if you trained your own model using one of the model architectures supported by Dell Enterprise Hub, you can deploy it in exactly the same way.
Simply set the local path to where you saved the model weights in your environment, and run the provided code snippet.
Once deployed, the model is available as an API endpoint that is easy to call, by sending requests that follow the OpenAI-compatible Messages API. This makes it easy to migrate prototypes built with OpenAI to a secure on-premises deployment set up with Dell Enterprise Hub.
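For example, a request to the deployed endpoint follows the familiar chat format. The endpoint URL and model name below are placeholder assumptions for a local deployment:

```python
import json
import urllib.request

# Sketch of calling the deployed endpoint with an OpenAI-compatible Messages
# API payload. The URL and model name are placeholder assumptions for wherever
# your endpoint is hosted in your environment.
ENDPOINT = "http://localhost:8080/v1/chat/completions"  # assumed local deployment

payload = {
    "model": "tgi",  # model name as exposed by the server (assumption)
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What are open models?"},
    ],
    "max_tokens": 128,
}

def send_chat_request(url: str, body: dict) -> dict:
    """POST the JSON body to the endpoint and return the parsed response."""
    request = urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())

# Uncomment once the endpoint is up:
# reply = send_chat_request(ENDPOINT, payload)
# print(reply["choices"][0]["message"]["content"])
```

Because the request body matches OpenAI's chat format, client code written against OpenAI usually only needs the base URL changed to point at the on-premises endpoint.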
We’re just getting started
Today we are extremely excited to release Dell Enterprise Hub, with many models available as ready-to-use, optimized containers for multiple Dell platforms, only six months after announcing our collaboration with Dell Technologies.
Dell offers many platforms built upon AI hardware accelerators from NVIDIA, AMD, and Intel Gaudi. Hugging Face engineering collaborations with NVIDIA (optimum-nvidia), AMD (optimum-amd), and Intel (optimum-intel and optimum-habana) will allow us to offer ever more optimized containers for deploying and training open models across all Dell platform configurations. We look forward to supporting more state-of-the-art open models and enabling them on more Dell platforms. We’re just getting started!