Groq Is Now a Hugging Face Inference Provider

By versatileai · June 17, 2025 · 4 min read

We are delighted to share that Groq is now a supported inference provider on the Hugging Face Hub! Groq joins a growing ecosystem, directly enhancing the breadth and capabilities of serverless inference on the Hub's model pages. Inference providers are seamlessly integrated into the client SDKs (both JS and Python), making it easy to run a wide variety of models with your preferred provider.

Groq supports a variety of text and conversational models, including the latest open-source releases such as Meta's Llama 4 and Qwen's QwQ-32B.

At the heart of Groq's technology is the Language Processing Unit (LPU™), a new type of end-to-end processing unit system that delivers fast inference for computationally intensive applications with a sequential component, such as large language models (LLMs). The LPU is designed to overcome the limitations of GPUs for inference, providing significantly lower latency and higher throughput, which makes it ideal for real-time AI applications.

Groq provides fast AI inference for openly available models, with APIs that let developers easily integrate these models into their applications. It offers an on-demand, pay-as-you-go pricing model for access to a wide range of openly available LLMs.
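Groq's API accepts the widely used OpenAI-style chat-completions request shape, so the body of such a request can be sketched in a few lines of Python. The helper `build_chat_request` and the model id below are illustrative, not part of any SDK:

```python
import json

def build_chat_request(model: str, user_prompt: str, max_tokens: int = 128) -> str:
    """Build an OpenAI-style chat-completions request body as JSON.

    Groq's API accepts this familiar schema; the model id used in the
    example call is illustrative, check Groq's docs for current models.
    """
    body = {
        "model": model,
        "messages": [{"role": "user", "content": user_prompt}],
        "max_tokens": max_tokens,
    }
    return json.dumps(body)

payload = build_chat_request("llama-3.3-70b-versatile", "What is the capital of France?")
```

The same JSON body works whether you call the provider's HTTP endpoint directly or let a client SDK construct it for you.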

You can now use Groq's inference API as an inference provider on the Hugging Face Hub. We're extremely excited to see what you build with this new provider.

Learn more about using Groq as an inference provider on our dedicated documentation page.

See the list of supported models here.

How it works

In the website UI

In your user account settings, you can set your own API keys for the providers you have signed up with; if no custom key is configured, requests are routed through HF. You can also order providers by preference. This applies to both the widget and the code snippets on model pages.


As mentioned above, there are two modes for calling inference providers: with a custom key, the call goes directly to the inference provider, authenticated with your own API key for that provider; when routed through Hugging Face, no provider token is needed and the charges are applied directly to your HF account.
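The two modes can be sketched as follows; `choose_auth` is a hypothetical helper (not part of any SDK) that shows which credential each mode relies on:

```python
def choose_auth(mode: str, hf_token: str = None, groq_key: str = None) -> dict:
    """Illustrate which credential each calling mode uses.

    'routed': the request goes through Hugging Face, authenticated with
    your HF token; no Groq key is needed and charges land on your HF account.
    'direct': the request goes straight to Groq, authenticated with your
    own Groq API key; Groq bills you directly.
    """
    if mode == "routed":
        if hf_token is None:
            raise ValueError("routed mode requires an HF token")
        return {"endpoint": "huggingface", "api_key": hf_token}
    if mode == "direct":
        if groq_key is None:
            raise ValueError("direct mode requires a Groq API key")
        return {"endpoint": "groq", "api_key": groq_key}
    raise ValueError(f"unknown mode: {mode}")
```

In practice the SDKs make this choice for you based on which key you pass.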


The model page shows the third-party inference providers that are compatible with the current model, sorted by your preferences.


From the client SDK

From Python, using huggingface_hub

The following example shows how to run Meta's Llama 4 with Groq as the inference provider. You can use a Hugging Face token for automatic routing through Hugging Face, or your own Groq API key if you have one.

Install huggingface_hub from source (see the instructions). Official support will ship soon in version v0.33.0.

import os
from huggingface_hub import InferenceClient

client = InferenceClient(
    provider="groq",
    api_key=os.environ["HF_TOKEN"],
)

messages = [
    {
        "role": "user",
        "content": "What is the capital of France?",
    }
]

completion = client.chat.completions.create(
    model="meta-llama/Llama-4-Scout-17B-16E-Instruct",
    messages=messages,
)

print(completion.choices[0].message)

From JS, using @huggingface/inference

import { InferenceClient } from "@huggingface/inference";

const client = new InferenceClient(process.env.HF_TOKEN);

const chatCompletion = await client.chatCompletion({
  model: "meta-llama/Llama-4-Scout-17B-16E-Instruct",
  messages: [
    {
      role: "user",
      content: "What is the capital of France?",
    },
  ],
  provider: "groq",
});

console.log(chatCompletion.choices[0].message);
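In both SDKs, the response follows the chat-completions schema. Treating the response as plain JSON, as you would when calling the HTTP API directly, extracting the assistant's reply can be sketched as follows (the sample response is fabricated for illustration):

```python
def extract_reply(completion: dict) -> str:
    # Chat-completions responses carry the reply at choices[0].message.content.
    return completion["choices"][0]["message"]["content"]

# A fabricated response in the chat-completions shape, for illustration only.
sample = {
    "choices": [
        {"message": {"role": "assistant", "content": "The capital of France is Paris."}}
    ]
}
```

The SDKs expose the same structure as attribute access (`completion.choices[0].message`), as the examples above show.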

Billing

For direct requests, i.e. when using a key from an inference provider, you are billed by the corresponding provider. For example, if you use a Groq API key, your Groq account is billed.

For routed requests, i.e. when authenticating via the Hugging Face Hub, you only pay the standard provider API rates. There is no additional markup; we pass the provider's costs through directly. (In the future, we may establish revenue-sharing agreements with our provider partners.)

Important note: PRO users get $2 worth of inference credits every month, which can be used across providers. 🔥

Subscribe to the Hugging Face PRO plan to get access to inference credits, ZeroGPU, Spaces Dev Mode, 20× higher limits, and more.

We also offer a small free inference quota to signed-in free users, but please upgrade to PRO if you can!

Feedback and next steps

We'd love to hear your feedback! Share your thoughts and comments here: https://huggingface.co/spaces/huggingface/huggingdiscussions/discussions/49

© 2025 Versa AI Hub. All Rights Reserved.