Public AI is now a supported Inference Provider on Hugging Face

By versatileai · September 18, 2025

We are excited to share that Public AI is now a supported Inference Provider on the Hugging Face Hub! Public AI joins a growing ecosystem and directly enhances the breadth and capabilities of serverless inference on the Hub's model pages. Inference Providers are seamlessly integrated into the client SDKs (both JS and Python), making it easy to use different models with your preferred provider.

This launch makes public and sovereign models from institutions such as the Swiss AI Initiative and AI Singapore easier to use than ever before. You can browse the Public AI organization on the Hub at https://huggingface.co/publicai and try out supported models at https://huggingface.co/models?inference_provider=publicai&sort=trending.

The Public AI Inference Utility is a non-commercial, open-source project. The team builds products and organizes advocacy to support the work of public AI model builders such as the Swiss AI Initiative and AI Singapore.

The Public AI Inference Utility runs on distributed infrastructure that combines a vLLM backend with a deployment layer designed for resilience across multiple partners. Behind the scenes, inference requests are processed by servers running vLLM that expose OpenAI-compatible APIs, deployed on clusters donated by national and industry partners. A global load-balancing layer ensures that requests are routed efficiently and transparently, regardless of which country's machines serve the query.
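Because the backend speaks the standard OpenAI-compatible chat-completions format, a request body has the following shape (a minimal sketch; the model name is taken from this post, while the exact endpoint path is an assumption based on the OpenAI-compatible convention):

```python
import json

# Shape of a request against the OpenAI-compatible chat-completions API
# exposed by the vLLM backend. The client serializes this dict to JSON
# and POSTs it (typically to a /v1/chat/completions route) with a
# Bearer token in the Authorization header.
payload = {
    "model": "swiss-ai/Apertus-70B-Instruct-2509",
    "messages": [
        {"role": "user", "content": "What is the capital of France?"}
    ],
}

body = json.dumps(payload)
print(body)
```

Any OpenAI-compatible client can therefore talk to the backend; the load balancer decides which partner cluster actually serves the request.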

Free public access is supported by donated GPU time and in-kind grants, while long-term sustainability is intended to be anchored by state and institutional contributions. Learn more about the Public AI platform and infrastructure at https://platform.publicai.co/.

You can now use the Public AI Inference Utility as an Inference Provider on Hugging Face. We look forward to seeing what you build with this new provider!

Learn more about using Public AI as an inference provider on our dedicated documentation page.

See the list of supported models here.

How it works

In the website UI

In your user account settings, you can set your own API keys for the providers you have signed up with; if no custom key is configured, requests are routed through HF. You can also order providers by preference. This applies both to the widget on the model page and to the code snippets.


As mentioned above, there are two modes when calling Inference Providers:
  • Custom key: the call goes straight to the inference provider, using your own API key for that provider; no Hugging Face token is needed.
  • Routed by HF: you authenticate with your Hugging Face token, and the request is routed through HF to the provider.
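The two modes above can be sketched as a small helper that picks which credential a call uses (illustrative only; the function and variable names here are hypothetical, and the real routing lives inside the HF client SDKs):

```python
def pick_auth(provider_key=None, hf_token=None):
    """Sketch of the two call modes: a custom provider key sends the
    call straight to the provider; otherwise the request is routed
    through HF, authenticated with a Hugging Face token."""
    if provider_key:
        return ("direct", provider_key)
    if hf_token:
        return ("routed-by-hf", hf_token)
    raise ValueError("either a provider API key or an HF token is required")

# With a custom Public AI key, the call goes straight to the provider:
print(pick_auth(provider_key="pk-example")[0])  # direct

# With only a Hugging Face token, the call is routed through HF:
print(pick_auth(hf_token="hf-example")[0])  # routed-by-hf
```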

The model page lists the third-party inference providers that are compatible with the current model, sorted by your preference order.

From the client SDK

From Python, using huggingface_hub

The following example shows how to use Apertus-70B from the Swiss AI Initiative with Public AI as the inference provider. You can use a Hugging Face token for automatic routing through Hugging Face, or your own Public AI API key if you have one.

Note: this requires a recent version of huggingface_hub (>= 0.34.6).

import os
from huggingface_hub import InferenceClient

client = InferenceClient(provider="publicai", api_key=os.environ["HF_TOKEN"])

messages = [{"role": "user", "content": "What is the capital of France?"}]

completion = client.chat.completions.create(
    model="swiss-ai/Apertus-70B-Instruct-2509",
    messages=messages,
)

print(completion.choices[0].message)

From JS, using @huggingface/inference

import { InferenceClient } from "@huggingface/inference";

const client = new InferenceClient(process.env.HF_TOKEN);

const chatCompletion = await client.chatCompletion({
  model: "swiss-ai/Apertus-70B-Instruct-2509",
  messages: [
    { role: "user", content: "What is the capital of France?" },
  ],
  provider: "publicai",
});

console.log(chatCompletion.choices[0].message);

Billing

At the time of writing, using the Public AI Inference Utility through Hugging Face Inference Providers is free. Pricing and availability may change in the future.

Here’s how billing works for other providers on the platform:

For direct requests, i.e. when using a key from an inference provider, you are billed by the corresponding provider. For example, if you use a Public AI API key, your Public AI account is billed.

For routed requests, i.e. when authenticating through the Hugging Face Hub, you only pay the standard provider API rates. There is no additional markup from us; we simply pass the provider's costs through directly. (In the future, we may establish revenue-sharing agreements with our provider partners.)

Important note: PRO users get $2 worth of inference credits each month. You can use them across providers. 🔥

Subscribe to the Hugging Face PRO plan for access to inference credits, ZeroGPU, Spaces Dev Mode, 20x higher limits, and more.

We also provide signed-in free users with a small free inference allowance, but please upgrade to PRO if you can!

Feedback and next steps

We'd love to hear your feedback! Share your thoughts and comments here: https://huggingface.co/spaces/huggingface/huggingdiscussions/discussions/49
