Versa AI hub
Training Llama 2 chatbots

By versatileai · September 29, 2025 · 7 min read

Original author: Abhishek Thakur

This tutorial shows how anyone can create their own open-source ChatGPT without writing a single line of code. We'll take a Llama 2 base model, fine-tune it for chat on an open-source instruction dataset, and then deploy it to a chat app you can share with your friends. All by simply clicking our way to greatness. 😀

Why is this important? Machine learning, and LLMs (large language models) in particular, have seen an unprecedented surge in popularity and have become essential tools in our personal and business lives. Yet for most people outside the specialized niche of ML engineering, the intricacies of training and deploying these models appear out of reach. If the anticipated future of machine learning is to be filled with ubiquitous, personalized models, there is a pressing challenge ahead: how do we empower those with non-technical backgrounds to harness this technology independently?

At Hugging Face, we have been quietly working to pave the way for this inclusive future. Our suite of tools, including services like Spaces, AutoTrain, and Inference Endpoints, is designed to make the world of machine learning accessible to anyone.

To demonstrate just how accessible this democratized future is, this tutorial shows you how to use Spaces, AutoTrain, and ChatUI to build a chat app. All in just a few simple steps, with not a single line of code. For context, I am not an ML engineer, but a member of the Hugging Face GTM team. If I can do this, you can too! Let's dive in!

Introducing Spaces

Hugging Face Spaces is a service that provides easy-to-use GUIs for building, deploying, and hosting ML demos and apps on the web. The service lets you quickly build ML demos with Gradio or Streamlit front ends, upload your own apps in a Docker container, or even select a pre-configured ML application to deploy instantly.

We will deploy two of the pre-configured Docker application templates from Spaces: AutoTrain and ChatUI.

For more information about Spaces, click here.
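Everything in this tutorial can be done through the GUI, but for readers who prefer scripting, a Docker Space can also be created programmatically with the `huggingface_hub` library. A minimal sketch, assuming a placeholder username and Space name (the `make_space_id` helper is just for illustration):

```python
def make_space_id(username: str, space_name: str) -> str:
    """Build a Hub repo id: lowercase, with spaces replaced by hyphens."""
    return f"{username}/{space_name.strip().lower().replace(' ', '-')}"

repo_id = make_space_id("your-username", "My AutoTrain")
print(repo_id)  # your-username/my-autotrain

# To actually create the Docker Space (requires `huggingface_hub`
# and a token with write access):
# from huggingface_hub import HfApi
# HfApi().create_repo(repo_id=repo_id, repo_type="space",
#                     space_sdk="docker", private=True)
```

The commented-out call is left inert here because it talks to the Hub; the GUI flow in Step 1 below achieves the same thing.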

Introducing AutoTrain

AutoTrain is a no-code tool that lets non-ML engineers (or even non-developers!) train state-of-the-art ML models without writing any code. It can be used for NLP, computer vision, speech, and tabular data, and today it can even be used to fine-tune LLMs.

For more information about AutoTrain, click here.

Introducing ChatUI

ChatUI is exactly what it sounds like: Hugging Face's open-source UI that provides an interface for interacting with open-source LLMs. Notably, it is the same UI behind HuggingChat, a 100% open-source alternative to ChatGPT.

For more information about ChatUI, click here.

Step 1: Create a new AutoTrain Space

1.1 Go to huggingface.co/spaces and select “Create new Space”.

1.2 Give your Space a name, and select a preferred license if you plan to make your model or Space public.

1.3 Select Docker > AutoTrain to deploy the pre-configured AutoTrain app from its Docker template onto your Space.

1.4 Select your “Space hardware” for running the app. (Note: for the AutoTrain app, the free CPU basic option will suffice; model training is done later using separate compute that you can choose at that point.)

1.5 Add your “HF_TOKEN” under “Space secrets” in order to give this Space access to your Hub account. Without this, the Space will not be able to train or save the new model to your account. (Note: your HF_TOKEN can be found in your Hugging Face profile under Settings > Access Tokens; make sure the token is set to “Write”.)

1.6 Select whether you want to make your Space “Private” or “Public”. For the AutoTrain Space itself, we recommend keeping it private, but you can always publish your model or chat app later.

1.7 Hit “Create Space” et voilà! The new Space will take a couple of minutes to build, after which you can open it and start using AutoTrain.

Step 2: Launch model training in AutoTrain

2.1 Once your AutoTrain Space has launched, you will see the GUI below. AutoTrain can be used for several different kinds of training, including LLM fine-tuning, text classification, tabular data, and diffusion models. Since we are focusing on LLM training today, select the “LLM” tab.

2.2 Choose the LLM you want to train from the “Model” selection field. You can select a model from the list or type in the name of a model from its Hugging Face model card. In this example we use Meta's Llama 2 7B base model; you can learn more on its model card. (Note: Llama 2 is a gated model that requires you to request access from Meta before use, but there are plenty of non-gated models you could choose instead, like Falcon.)

2.3 Under “Backend”, select the CPU or GPU you want to use for training. For a 7B model, an “A10G Large” will be big enough. If you choose to train a larger model, you will need to make sure the model fully fits into the memory of your selected GPU. (Note: if you want to train a larger model and need access to an A100 GPU, please email api-enterprise@huggingface.co.)
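To get a feel for why an A10G Large (24 GB) comfortably fits a 7B model, a back-of-the-envelope estimate of weight memory helps. This sketch is illustrative only: the 20% overhead factor is an assumption, and real usage also depends on activations, optimizer state, and the KV cache:

```python
def weight_memory_gb(n_params: float, bits_per_param: int) -> float:
    """Approximate memory needed for the model weights alone, in decimal GB."""
    return n_params * bits_per_param / 8 / 1e9

A10G_MEMORY_GB = 24   # published A10G memory size
OVERHEAD = 1.2        # assumed 20% headroom for activations etc.

for bits in (16, 8, 4):  # fp16, int8, int4
    gb = weight_memory_gb(7e9, bits)
    fits = gb * OVERHEAD < A10G_MEMORY_GB
    print(f"7B model at {bits}-bit: ~{gb:.1f} GB (fits on A10G: {fits})")
```

By this estimate a 7B model needs roughly 14 GB of weights at fp16, which is also why the quantization options in step 2.6 below matter for larger models.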

2.4 Of course, to fine-tune a model you will need to upload “Training Data”. When doing so, make sure your dataset is correctly formatted and in CSV file format. An example of the required format can be found here. If your data contains multiple columns, be sure to select the “Text Column” from your file that contains the training data. This example uses the Alpaca instruction-tuning dataset; you can read more about it here, or download it directly as a CSV here.
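To make the expected shape concrete, here is a sketch that builds a tiny CSV with a single “text” column. The Alpaca-style “### Instruction / ### Response” template below is an illustrative assumption; check the format examples linked above for what AutoTrain actually expects:

```python
import csv
import io

# Hypothetical instruction/response pairs packed into one "text" column.
TEMPLATE = "### Instruction:\n{instruction}\n\n### Response:\n{response}"

rows = [
    {"instruction": "Name the capital of France.", "response": "Paris."},
    {"instruction": "Give an antonym of 'cold'.", "response": "Hot."},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["text"])
writer.writeheader()
for r in rows:
    writer.writerow({"text": TEMPLATE.format(**r)})

csv_data = buf.getvalue()
print(csv_data)
```

Writing through the `csv` module (rather than string concatenation) matters here because the multi-line values must be quoted to survive the round trip.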

2.5 Optional: You can upload “Validation Data” to test the newly trained model, but this is not required.

2.6 A number of advanced settings can be configured in AutoTrain to reduce your model's memory footprint, such as the precision (“FP16”), quantization (“Int4/8”), and whether to use PEFT (parameter-efficient fine-tuning). We recommend leaving these at their defaults, as they reduce the time and cost of training the model with only a small impact on model performance.

2.7 Similarly, you can configure the training parameters under “Parameter Choice”, but for now let's stick with the default settings.

2.8 Now everything is set up, so select “Add Job” to add the model to your training queue, then select “Start Training”. (Note: if you want to train multiple model versions with different hyperparameters, you can add multiple jobs to run simultaneously.)
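If you do queue multiple jobs, it can help to enumerate the hyperparameter combinations up front. A small sketch using a grid of values (the parameter names here are illustrative, not AutoTrain's actual field names):

```python
from itertools import product

# Hypothetical values to sweep; adjust to whatever you want to compare.
learning_rates = [2e-4, 1e-4]
epochs = [1, 3]

# One dict per job: the Cartesian product of all value lists.
jobs = [
    {"learning_rate": lr, "num_epochs": ep}
    for lr, ep in product(learning_rates, epochs)
]

for i, job in enumerate(jobs, start=1):
    print(f"job {i}: {job}")
```

Each entry in `jobs` would correspond to one “Add Job” click with those settings, making it easy to keep track of which trained model came from which configuration.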

2.9 Once training has started, you will see that a new Space has been created in your Hub account. This Space is running the model training; once it completes, the new model will be shown in your Hub account under “Models”. (Note: to view the training progress, you can view live logs in the Space.)

2.10 Go grab a coffee; depending on the size of your model and training data, this could take a few hours or even days. Once completed, the new model will appear in your Hugging Face Hub account under “Models”.

Step 3: Create a new ChatUI Space using your model

3.1 Follow the same process as in steps 1.1 to 1.3 to set up a new Space, but select the ChatUI Docker template instead of AutoTrain.

3.2 Select your “Space hardware”. For our 7B model, an A10G Small will be sufficient to run it, though this will vary with the size of your model.

3.3 If you have your own MongoDB, you can provide its details under “MONGODB_URL” in order to store chat logs. Otherwise, leave the field blank and a local DB will be created automatically.

3.4 In order to run the chat app with the model you trained, you need to provide the “MODEL_NAME” under the “Space variables” section. You can find the name of your model by looking in the “Models” section of your Hugging Face profile; it will match the “Project name” you used in AutoTrain. In our example it is “2legit2overfit/wrdt-pco6-31a7-0”.

Also under “Space variables”, you can change the nature of the generations by adjusting the model's inference parameters, including the temperature, top-p, maximum tokens generated, and others. For now, let's stick with the default settings.
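To build intuition for what temperature and top-p actually do, here is a toy, pure-Python sketch of nucleus sampling (real inference servers implement this over tensors, but the logic is the same): lower temperature sharpens the distribution toward the most likely token, and top-p restricts sampling to the smallest set of tokens whose probabilities sum to at least p.

```python
import math
import random

def sample_next_token(logits, temperature=0.7, top_p=0.95, rng=None):
    """Toy nucleus (top-p) sampling over a list of logits."""
    rng = rng or random.Random(0)
    # Temperature scaling: lower temperature sharpens the distribution.
    scaled = [l / max(temperature, 1e-6) for l in logits]
    peak = max(scaled)
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Keep the smallest set of tokens whose cumulative probability >= top_p.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cumulative = [], 0.0
    for i in order:
        kept.append(i)
        cumulative += probs[i]
        if cumulative >= top_p:
            break
    # Sample from the renormalized nucleus.
    mass = sum(probs[i] for i in kept)
    r = rng.random() * mass
    for i in kept:
        r -= probs[i]
        if r <= 0:
            return i
    return kept[-1]

# Near-zero temperature makes sampling effectively greedy:
print(sample_next_token([5.0, 1.0, 0.5], temperature=0.01))  # 0
```

Pushing the temperature toward zero, or top-p toward zero, both collapse generation onto the single most likely token; higher values make the output more varied.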

3.5 Now you are ready to launch your own open-source ChatGPT: press “Create”. Congratulations! If you did everything right, your chat app should look something like this.

If you are feeling inspired but still need technical support to get started, feel free to reach out and apply for support here. Hugging Face offers a paid Expert Advice service that can help.

© 2025 Versa AI Hub. All Rights Reserved.