Use Together AI to fine-tune LLM from Hugging Face Hub

By versatileai · January 19, 2026


The pace of AI development today is staggering. Hundreds of new models appear on Hugging Face Hub every day. Some are specialized variations of popular base models such as Llama and Qwen, others have novel architectures, and others are trained from scratch for specific domains. Whether it’s medical AI trained on clinical data, coding assistants optimized for specific programming languages, or multilingual models fine-tuned for specific cultural contexts, Hugging Face Hub is at the heart of open source AI innovation.

But here’s the challenge. Finding a great model is just the beginning. What if you find a model that’s 90% perfect for your use case, but you need an additional 10% customization? Traditional fine-tuning infrastructure is complex, expensive, and often requires significant DevOps expertise to set up and maintain.

This is exactly the gap that Together AI and Hugging Face are bridging today. We’re announcing powerful new features that let you fine-tune models from across the Hugging Face Hub using Together AI’s infrastructure. Now any compatible LLM on the Hub, whether from Meta or an individual contributor, can be customized with the same ease and reliability you’ve come to expect from Together’s platform. 🚀

Get started in 5 minutes

Here’s what you need to start fine-tuning your HF model on the Together AI platform.
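First, you need a training file. Here’s a minimal sketch of creating sft_examples.jsonl; the conversational “messages” schema shown is an assumption, so check Together’s data-prep docs for the exact format they expect:

import json

# Hypothetical toy dataset; the "messages" schema is an assumption
# about Together's expected SFT format, not confirmed from their docs.
examples = [
    {"messages": [
        {"role": "user", "content": "What is LoRA?"},
        {"role": "assistant", "content": "LoRA is a parameter-efficient fine-tuning method."},
    ]},
]

with open("sft_examples.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")

With a training file in hand, the whole flow is one upload plus one fine-tuning call: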

from together import Together

client = Together(api_key="your-api-key")

file_upload = client.files.upload("sft_examples.jsonl", check=True)

job = client.fine_tuning.create(
    model="togethercomputer/llama-2-7b-chat",
    from_hf_model="HuggingFaceTB/SmolLM2-1.7B-Instruct",
    training_file=file_upload.id,
    n_epochs=3,
    learning_rate=1e-5,
    hf_api_token="hf_***",
    hf_output_repo_name="my-username-org/SmolLM2-1.7B-FT",
)

print(f"Training job started: {job.id}")

That’s it! Models are trained on Together’s infrastructure and can be deployed for inference, downloaded, or uploaded back to the Hub. For private repositories, just pass your HF token with hf_api_token="hf_xxxxxxxxxxxx".
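Training runs asynchronously, so in practice you’ll poll the job until it reaches a terminal state. Here’s a minimal sketch using the SDK’s retrieve call; the status strings are illustrative, so check the SDK’s status enum for the authoritative list:

import time
from together import Together

client = Together(api_key="your-api-key")
job_id = "ft-xxxxxxxx"  # hypothetical ID; use job.id from the snippet above

while True:
    job = client.fine_tuning.retrieve(job_id)
    status = str(job.status).lower()  # normalize enum or plain-string statuses
    print(f"status: {status}")
    if status.endswith(("completed", "error", "cancelled")):
        break
    time.sleep(60)  # fine-tuning jobs typically take minutes to hours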

How it works

As you can see in the example above, when you fine-tune a Hugging Face model on Together AI, you actually specify two models:

  • Base model (model parameter): a model from Together’s official catalog that provides the infrastructure configuration, training optimizations, and inference setup.
  • Custom model (from_hf_model parameter): the actual Hugging Face model you want to fine-tune.

Think of the base model as a “training template.” It tells the system how to best allocate GPU resources, configure memory usage, set up the training pipeline, and prepare the model for inference. For best results, your custom model should be similar in architecture, approximate size, and sequence length to the base model.

As seen in the example above, if you want to fine-tune HuggingFaceTB/SmolLM2-1.7B-Instruct (which uses the Llama architecture), use togethercomputer/llama-2-7b-chat as the base model template, since the underlying architecture is the same.
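A quick way to sanity-check a pairing before launching a job is to compare the two configs. Here’s a sketch using transformers; only the small config files are downloaded, not the weights, and gated repos may additionally need a token:

from transformers import AutoConfig

custom = AutoConfig.from_pretrained("HuggingFaceTB/SmolLM2-1.7B-Instruct")
base = AutoConfig.from_pretrained("togethercomputer/llama-2-7b-chat")

# Both should report a Llama-style causal-LM architecture...
print(custom.architectures, base.architectures)
# ...and comparable maximum sequence lengths.
print(custom.max_position_embeddings, base.max_position_embeddings)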

The integration works both ways. Together AI can pull compatible public models from the Hugging Face Hub for training, and it can also download models from private repositories given the appropriate API token. If you specify hf_output_repo_name, the fine-tuned model is automatically pushed back to the Hub after training so it can be shared with your team and the broader community.
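Once the job pushes the model, anyone with access can pull it like any other Hub repo, for example with huggingface_hub (the repo name is taken from the snippet above; the token is only needed if the repo is private):

from huggingface_hub import snapshot_download

# Download the fine-tuned weights the training job pushed to the Hub.
local_dir = snapshot_download(
    "my-username-org/SmolLM2-1.7B-FT",
    token="hf_***",  # omit for public repos
)
print(local_dir)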

In general, any CausalLM model with 100B parameters or fewer should work. Read the complete guide for a comprehensive tutorial, including how to choose between base and custom models.
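If you’re not sure whether a candidate model fits under that limit, the Hub’s metadata usually includes a parameter count. A small sketch with huggingface_hub; the safetensors field is only present for repos that publish that metadata:

from huggingface_hub import HfApi

info = HfApi().model_info("HuggingFaceTB/SmolLM2-1.7B-Instruct")
if info.safetensors:  # present when the repo publishes safetensors metadata
    print(f"{info.safetensors.total / 1e9:.2f}B parameters")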

What this means for developers

This integration solves a real problem that many of us have faced: you find a great model on Hugging Face, but you don’t have the infrastructure to really adapt it to your specific needs. Now you can go from discovering a promising model to running a customized version in production with just a few API calls.

The big win here is removing friction. Instead of spending days setting up training infrastructure, or being limited to the officially supported models on various platforms, you can now try out compatible models straight from the Hub. Found a specialized coding model close to what you need? Train it on your data! 📈

For teams, this means faster iteration cycles. You can quickly test multiple model approaches, build on community innovations, and even use your own fine-tuned models as a starting point for further customization.

How teams are using this feature

Beta users and early adopters of this feature are already seeing results in a variety of use cases.

Slingshot AI has integrated this functionality directly into its model development pipeline. Instead of being limited to Together’s model catalog, the team can now run parts of its training pipeline on its own infrastructure, upload those models to the Hub, and continuously fine-tune them on the Together AI fine-tuning platform. This has dramatically accelerated their development cycle and made it easier to experiment with different model variations.

Parsed demonstrated the power of this approach with research showing that small, well-tuned open-source models can outperform much larger closed models. By fine-tuning models on carefully curated datasets, they achieved superior performance while maintaining cost efficiency and full control over the model.

Here are some common usage patterns we’ve seen from other customers:

  • Domain adaptation: take a generic model and specialize it for industries such as healthcare, finance, or legal. Teams discover models that already have some domain knowledge and use Together’s infrastructure to adapt them to their specific data and requirements.
  • Iterative model improvement: start with a community model, fine-tune it, and use the result as the starting point for further improvement (see the sketch after this list). This compounds improvements in a way that is difficult to achieve when starting from scratch.
  • Community model specialization: take models that are already optimized for specific tasks (coding, reasoning, multilingual capabilities, etc.) and customize them further for your unique use case.
  • Architecture exploration: rapidly test new architectures and model variants as they are released, without waiting for them to be added to an official platform.
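For the iterative-improvement pattern, the second round is just another fine_tuning.create call that points from_hf_model at your own round-one output. A sketch reusing the parameter names from the quickstart; the training-file ID here is hypothetical:

from together import Together

client = Together(api_key="your-api-key")

# Round two: start from the model we pushed to the Hub in round one.
job = client.fine_tuning.create(
    model="togethercomputer/llama-2-7b-chat",         # same base template as before
    from_hf_model="my-username-org/SmolLM2-1.7B-FT",  # our round-one output
    training_file="file-xxxxxxxx",                    # hypothetical new dataset ID
    hf_api_token="hf_***",
    hf_output_repo_name="my-username-org/SmolLM2-1.7B-FT-v2",
)
print(f"Round-two job started: {job.id}")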

The most important benefit teams report is speed to value. Instead of spending weeks setting up training infrastructure or months training a model from scratch, you can identify promising starting points from the community and have specialized models running in production within days.

Cost efficiency is also a big advantage. By starting with a model that already has the relevant capabilities, teams need fewer training epochs and smaller datasets to reach their desired performance, which significantly reduces compute costs.

Perhaps most importantly, this approach gives teams access to the collective wisdom of the open-source community. Every breakthrough, every specialized adaptation, every novel architecture is a potential starting point for your own work.

Show us what you made! 🔨

As you’d expect with a feature this big, your feedback directly shapes the platform: we actively improve the experience based on real-world usage.

Start with the implementation guide for examples and troubleshooting tips. If you run into any issues or want to share what you’re building, visit our Discord. Our team is there, and the community is always happy to help.

If you have feedback on fine-tuning with Together AI, or would like to explore it further for your use case, please feel free to contact us.
