Providing tools to LLM using JavaScript

By versatileai · November 20, 2025

We've been working on Agents.js recently as part of huggingface.js. It's a new library for giving LLMs access to tools from JavaScript, in either the browser or on the server. It ships with several multimodal tools out of the box and can easily be extended with your own tools and language models.

Installation

It’s very easy to get started. You can get the library from npm like this:

npm install @huggingface/agents

Usage

The library exposes an HfAgent object, which is its entry point. You can instantiate it like this:

import { HfAgent } from "@huggingface/agents";

const HF_ACCESS_TOKEN = "hf_...";

const agent = new HfAgent(HF_ACCESS_TOKEN);

After that, using the agent is easy. You can ask it to generate code from a plain-text command:

const code = await agent.generateCode(
  "Draw a picture of a rubber duck wearing a top hat, then caption this picture."
);

In this case, the following code was generated:

async function generate() {
  const output = await textToImage("rubber duck wearing a top hat");
  message("We generate a picture of the duck", output);
  const caption = await imageToText(output);
  message("Now we caption the image", caption);
  return output;
}

The code can then be evaluated like this:

const messages = await agent.evaluateCode(code);

The messages returned by the agent are objects of the following format:

export interface Update {
  message: string;
  data: undefined | string | Blob;
}

Here, message is an informational text string, and data can contain either a string or a blob. Blobs can be used to display images or play audio.
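
For example, in a browser you could render these updates by turning blob data into object URLs. Here is a minimal sketch, assuming messages came from the evaluateCode call above; the "log" container element is hypothetical, not part of Agents.js:

// Display each update's text, and show blob data as an image.
for (const update of messages) {
  const line = document.createElement("p");
  line.textContent = update.message;
  document.getElementById("log")?.appendChild(line);

  if (update.data instanceof Blob) {
    // Image blobs can be shown via an object URL; audio blobs could
    // be wired to an <audio> element the same way.
    const img = document.createElement("img");
    img.src = URL.createObjectURL(update.data);
    document.getElementById("log")?.appendChild(img);
  }
}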

If your environment is trusted (see the warning below), you can also use run to execute code directly from the prompt.

const messages = await agent.run(
  "Draw a picture of a rubber duck wearing a top hat, then caption this picture."
);

Usage warning

Using this library currently means evaluating arbitrary code in the browser (or in Node). This is a security risk, so it should not be done in an untrusted environment. To see what code is being run before executing it, we recommend using generateCode and evaluateCode instead of run.
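
As a minimal sketch of that safer workflow (the manual review step is illustrative):

// Generate the code first so it can be inspected before execution.
const code = await agent.generateCode(
  "Draw a picture of a rubber duck wearing a top hat."
);
console.log(code); // review the generated code here

// Only evaluate the code once you are happy with what it does.
const messages = await agent.evaluateCode(code);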

Custom LLM 💬

By default, HfAgent uses OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5 through the hosted Inference API as its LLM. However, this can be customized.

You can pass a custom LLM when instantiating HfAgent. An LLM in this context is any asynchronous function that takes a string input and returns a promise resolving to a string. For example, if you have an OpenAI API key, you can use it like this:

import { Configuration, OpenAIApi } from "openai";

const HF_ACCESS_TOKEN = "hf_...";
const api = new OpenAIApi(new Configuration({ apiKey: "sk-..." }));

const llmOpenAI = async (prompt: string): Promise<string> => {
  return (
    (
      await api.createCompletion({
        model: "text-davinci-003",
        prompt: prompt,
        max_tokens: 1000,
      })
    ).data.choices[0].text ?? ""
  );
};

const agent = new HfAgent(HF_ACCESS_TOKEN, llmOpenAI);
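
Because any async (prompt: string) => Promise<string> function qualifies, you could just as well point the agent at your own endpoint. A minimal sketch, where the URL and payload shape are hypothetical and not part of Agents.js:

// A custom LLM backed by a self-hosted HTTP endpoint (assumed to
// accept a JSON prompt and respond with plain text).
const llmCustom = async (prompt: string): Promise<string> => {
  const response = await fetch("http://localhost:8080/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  return await response.text();
};

const agent = new HfAgent(HF_ACCESS_TOKEN, llmCustom);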

Custom tools 🛠️

Agents.js is designed to be easily extended with custom tools and examples. For example, if you want to add a tool that translates text from English to German, you can do it like this:

import type { Tool } from "@huggingface/agents/src/types";

const englishToGermanTool: Tool = {
  name: "englishToGerman",
  description:
    "Takes an input string in English and returns a German translation.",
  examples: [
    {
      prompt: "translate the string 'hello world' to German",
      code: `const output = englishToGerman("hello world")`,
      tools: ["englishToGerman"],
    },
    {
      prompt:
        'translate the string "The quick brown fox jumps over the lazy dog" into German',
      code: `const output = englishToGerman("The quick brown fox jumps over the lazy dog")`,
      tools: ["englishToGerman"],
    },
  ],
  call: async (input, inference) => {
    const data = await input;
    if (typeof data !== "string") {
      throw new Error("Input must be a string");
    }
    const result = await inference.translation({
      model: "t5-base",
      inputs: data,
    });
    return result.translation_text;
  },
};

Now you can add this tool to the list of tools when initializing the agent.

import { HfAgent, LLMFromHub, defaultTools } from "@huggingface/agents";

const HF_ACCESS_TOKEN = "hf_...";

const agent = new HfAgent(HF_ACCESS_TOKEN, LLMFromHub("hf_..."), [
  englishToGermanTool,
  ...defaultTools,
]);
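
The agent's LLM can then pick the new tool whenever a prompt calls for translation. For instance (a sketch; the prompt is arbitrary):

// The generated code may now call englishToGerman alongside the default tools.
const code = await agent.generateCode("Translate 'good morning' into German.");
const messages = await agent.evaluateCode(code);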

Passing input files to the agent 🖼️

The agent can also take input files and pass them along to the tools. An optional FileList can be passed to generateCode and evaluateCode.

Given the following HTML:

<input id="fileItem" type="file" />

Then you can:

const agent = new HfAgent(HF_ACCESS_TOKEN);
const files = document.getElementById("fileItem").files; // FileList
const code = await agent.generateCode(
  "Caption the image, then read the text out loud.",
  files
);

The following code was generated when passing the image:

async function generate(image) {
  const caption = await imageToText(image);
  message("First we caption the image", caption);
  const output = await textToSpeech(caption);
  message("Then we read the caption out loud", output);
  return output;
}
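
Since evaluateCode also accepts an optional FileList, the same files can be passed when running the generated code, so that generate(image) receives the uploaded file. A sketch based on the behavior described above:

// Evaluate the generated code with the same files attached.
const messages = await agent.evaluateCode(code, files);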

Demo 🎉

We are developing a demo of Agents.js that you can try here. It is powered by the same Open Assistant 30B model used by HuggingChat and uses tools called from the hub. 🚀
