
Featherless AI: Hugging Face inference provider 🔥

By versatileai · June 13, 2025 · 3 min read

I’m excited to share that Featherless AI is now a supported inference provider on the Hugging Face Hub! Featherless AI joins our growing ecosystem, directly enhancing the breadth and capabilities of serverless inference on the Hub’s model pages. Inference providers are seamlessly integrated into the client SDKs (both JS and Python), making it easy to use a wide variety of models with your preferred provider.

Featherless AI supports a wide variety of text and conversational models, including the latest open-source models from DeepSeek, Meta, Google, Qwen, and more.

Featherless AI is a serverless AI inference provider with unique model-loading and GPU-orchestration capabilities that make an exceptionally large catalog of models available to its users. Providers typically offer either low-cost access to a limited set of models, or a broad range of models with the operational burden of users managing their own servers. Featherless offers an unparalleled range of models and variants at serverless pricing: the best of both worlds. Find the complete list of supported models on the models page.
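That catalog can also be explored programmatically. A minimal sketch, assuming a recent huggingface_hub release whose `list_models` accepts an `inference_provider` filter and that the provider slug is `featherless-ai` (adjust if your version differs):

```python
from huggingface_hub import list_models

def featherless_models(limit: int = 10):
    """Return up to `limit` model ids servable via Featherless AI.

    Assumes the `inference_provider` filter available in recent
    huggingface_hub releases.
    """
    return [m.id for m in list_models(inference_provider="featherless-ai",
                                      limit=limit)]
```

Note that calling `featherless_models()` requires network access to the Hub API.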

I’m so excited to see what you build with this new provider!

Learn more about using Featherless as an inference provider on our dedicated documentation page.

How it works

In the website UI

In your user account settings, you can set your own API keys for the providers you have signed up with; if no custom key is configured, your requests are routed through HF (see below for more details on the two request types). You can also order providers by preference, which applies to the widgets and code snippets on the model pages.


As mentioned above, there are two modes when calling an inference provider: with a custom key, the call goes directly to the inference provider using your own API key for that provider (in that case, no Hugging Face token is required); otherwise, the call is routed through Hugging Face using your HF token.


The model page lists the third-party inference providers that are compatible with the current model, sorted by your preference.


From the client SDK

From Python, using huggingface_hub

The following example shows how to use DeepSeek-R1 with Featherless AI as the inference provider. You can use a Hugging Face token for automatic routing through Hugging Face, or your own Featherless AI API key if you have one.

Make sure you have huggingface_hub version v0.33.0 or higher installed: pip install --upgrade huggingface_hub

import os
from huggingface_hub import InferenceClient

client = InferenceClient(
    provider="featherless-ai",
    api_key=os.environ["HF_TOKEN"],
)

messages = [
    {
        "role": "user",
        "content": "What is the capital of France?"
    }
]

completion = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1-0528",
    messages=messages,
)

print(completion.choices[0].message)
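The same call can also stream tokens as they are generated, using the `stream=True` option of the chat-completions API. A minimal sketch, assuming the same client setup as above:

```python
import os
from huggingface_hub import InferenceClient

def stream_answer(prompt: str) -> None:
    """Print a model reply token-by-token via Featherless AI."""
    client = InferenceClient(provider="featherless-ai",
                             api_key=os.environ["HF_TOKEN"])
    stream = client.chat.completions.create(
        model="deepseek-ai/DeepSeek-R1-0528",
        messages=[{"role": "user", "content": prompt}],
        stream=True,
    )
    for chunk in stream:
        # Each chunk carries an incremental delta of the reply.
        print(chunk.choices[0].delta.content or "", end="", flush=True)
```

Streaming is useful for long generations from reasoning models like DeepSeek-R1, where waiting for the full completion can take a while.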

From JS using @huggingface/inference

import { InferenceClient } from "@huggingface/inference";

const client = new InferenceClient(process.env.HF_TOKEN);

const chatCompletion = await client.chatCompletion({
  model: "deepseek-ai/DeepSeek-R1-0528",
  messages: [
    {
      role: "user",
      content: "What is the capital of France?",
    },
  ],
  provider: "featherless-ai",
});

console.log(chatCompletion.choices[0].message);

Billing

For direct requests, i.e. when you use a key from an inference provider, you are billed by that provider. For example, if you use a Featherless AI API key, the charges go to your Featherless AI account.

For routed requests, i.e. when you authenticate via the Hugging Face Hub, you only pay the standard provider API rates. There is no additional markup; we pass the provider’s costs through directly. (In the future, we may establish revenue-sharing agreements with our provider partners.)

Important note: PRO users get $2 worth of inference credits each month, which can be used across providers. 🔥

Subscribe to the Hugging Face PRO plan for access to inference credits, ZeroGPU, Spaces Dev Mode, 20× higher limits, and more.

Signed-in free users also get a small free inference allowance, but upgrade to PRO if you can!

Feedback and next steps

We’d love to hear your feedback! Share your thoughts and comments here: https://huggingface.co/spaces/huggingface/huggingdiscussions/discussions/49
