Versa AI hub
Tools

A billion-dollar startup with different ideas for AI

By versatileai | April 27, 2026 | 4 Mins Read
A 12-person company has raised $1 billion in startup funding, showing that investors still believe in AI. But Yann LeCun, founder of the startup in question, AMI Labs, believes that the type of technology currently called AI (large language models) is not the way to produce meaningful, long-term results.

Yann LeCun left his post as Meta's chief AI scientist at the end of last year to found the Advanced Machine Intelligence Lab (AMI Labs), but he insists it will remain a research organization, with no expectation of producing a marketable product for perhaps five years. The team at AMI Labs focuses on AI built from a collection of modular components, each trained for and applied to specific use cases, rather than on large general-purpose language models.

The artificial intelligence system proposed by LeCun consists of the following elements:

  • A world model specific to the domain in which the AI operates, likely industry- or role-specific
  • An actor that suggests the next step to take, based on classic reinforcement learning
  • A critic that analyzes different options drawn from the world model via short-term memory and evaluates the proposed steps against hard-coded rules
  • A recognition system specific to the AI's inputs (for example, deep-learning visual recognition for video, audio, text, or image data)
  • Short-term memory
  • A configurator that coordinates the movement of information between the components above
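The interaction between these components can be pictured as a simple control loop. The sketch below is purely illustrative: the class names, the toy dynamics, and the hard-coded "stay near a target" rule are assumptions for the example, not AMI Labs' actual design.

```python
from dataclasses import dataclass, field

@dataclass
class ShortTermMemory:
    observations: list = field(default_factory=list)

    def remember(self, obs):
        self.observations.append(obs)

class WorldModel:
    """Domain-specific model: predicts the outcome of a candidate action."""
    def predict(self, state, action):
        return state + action  # placeholder dynamics for the sketch

class Actor:
    """Proposes candidate next steps (classic RL would rank these)."""
    def propose(self, state):
        return [-1, 0, 1]

class Critic:
    """Scores predicted outcomes against a hard-coded rule."""
    def score(self, predicted_state):
        return -abs(predicted_state - 10)  # illustrative rule: stay near 10

class Configurator:
    """Coordinates information flow between the modules."""
    def __init__(self):
        self.memory = ShortTermMemory()
        self.world = WorldModel()
        self.actor = Actor()
        self.critic = Critic()

    def step(self, state):
        candidates = self.actor.propose(state)
        best = max(candidates,
                   key=lambda a: self.critic.score(self.world.predict(state, a)))
        self.memory.remember((state, best))
        return self.world.predict(state, best)

agent = Configurator()
state = 0
for _ in range(12):
    state = agent.step(state)
print(state)  # the actor's choices converge toward the critic's target
```

The point of the structure is that each module can be trained or replaced independently, while the configurator remains the only piece that knows how they fit together.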

Unlike large language models, which are trained on a single source of information (text collected from the internet), each instance of LeCun's AI is fed curated data relevant only to its environment and purpose. Each instance may also weight the modules differently: critic modules would be more thorough in domains handling sensitive information, while recognition modules matter most in systems that need to react quickly to real-world events.

Each module is trained in a way relevant to its specific part of the AI. There have been successful examples of this approach in the past, including machine learning systems that taught themselves how to play video and board games. These stand in contrast to the large language models that underpin much of what we currently call AI.

LLMs are trained as generalists and produce best-guess answers based on what they take in. That answer is then refined by prompt engineering via software wrappers (Claude Code is the best known at the moment) or by reasoning models (the "think aloud" portion of the initial response that is fed back into the AI's prompt before the user sees the final answer).
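The "think aloud" loop described above can be sketched in a few lines. Here `generate` is a stand-in for any language model call, not a real API; the two-pass structure is the point.

```python
def generate(prompt: str) -> str:
    # Placeholder for a real language-model call.
    if "Reasoning:" in prompt:
        return "final answer informed by the reasoning trace"
    return "step 1, step 2"

def answer_with_reasoning(question: str) -> str:
    draft = generate(question)            # first pass: the model thinks aloud
    refined = generate(                   # second pass: the reasoning trace is
        f"{question}\nReasoning: {draft}"  # fed back into the prompt
    )
    return refined                        # only this reaches the user

result = answer_with_reasoning("What moves the market?")
```

The extra pass is where much of the added inference cost of reasoning models comes from: every refinement is another full model call.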

The economic impact of AI built with the methods AMI Labs proposes will interest the current AI industry, assuming LeCun's ideas produce workable results. Large language models from major technology providers (such as Anthropic, Meta, OpenAI, and Google) have consumed more resources with each iteration over the past five years. Growing model sizes in the early stages, along with the recursive prompting required to improve output from later versions, mean that training and running large models has become increasingly expensive, so only large enterprises can afford to run them.

The small, focused modules in AMI Labs' proposed approach could potentially run on a fraction of the GPU power currently required for huge LLMs, or even on-device. For example, instead of the hundreds-of-billions-of-parameter models behind ChatGPT, a specialist model that does not need to be a generalist may only need hundreds of millions of parameters. That, combined with the generally falling cost of computing, means local, cheaper, and inherently more accurate AI may be just around the corner.
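A rough back-of-the-envelope calculation shows the scale of the gap. The parameter counts below are the article's illustrative orders of magnitude (not published model sizes), and 16-bit weights are assumed:

```python
BYTES_PER_PARAM = 2  # fp16/bf16 weights

def weight_footprint_gb(params: float) -> float:
    """Memory needed just to hold the model weights, in GB."""
    return params * BYTES_PER_PARAM / 1e9

generalist = weight_footprint_gb(500e9)  # hundreds of billions of parameters
specialist = weight_footprint_gb(500e6)  # hundreds of millions of parameters

print(f"generalist: {generalist:.0f} GB")  # far beyond any single consumer device
print(f"specialist: {specialist:.1f} GB")  # fits comfortably on a phone or laptop
```

A three-orders-of-magnitude reduction in weight footprint is the difference between a data-center deployment and an on-device one.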

Startups with new ideas attracting huge funding is nothing new in the recent history of technology. But LeCun's strategy rests, at least in part, on his belief that current large language models cannot be improved enough to deliver on the ambitious claims their creators have made. AMI Labs appears to be offering investors a path to AI that works well at a manageable cost in the near future, using an architecture that departs from current standards. The approach differs from what today's AI giants are pursuing, but the message about future possibilities is much the same.

(Image source: “Perspective on Modular Construction” by Sidehike is licensed under CC BY-NC-SA 2.0.)

Want to learn more about AI and big data from industry leaders? Check out the AI & Big Data Expos in Amsterdam, California, and London. This comprehensive event is part of TechEx and co-located with other major technology events. Click here for more information.

AI News is brought to you by TechForge Media. Learn about other upcoming enterprise technology events and webinars.

© 2026 Versa AI Hub. All Rights Reserved.