AI Now: “On the cusp of doing new science”

By versatileai | April 16, 2025

(Images created via OpenAI’s image generation technology)

“We’re on the cusp of a system that can do new science.”

That line, on page 3 of OpenAI’s latest Preparedness Framework (version 2, updated April 15, 2025), illustrates a potential paradigm shift in the R&D ecosystem.

Looking ahead, the framework addresses the possibility that AI could “recursively self-improve.” It warns that the resulting “major acceleration of AI R&D speed” could introduce new capabilities and risks faster than current safety measures can keep up, rendering oversight “inadequate,” and it explicitly flags the challenge of “maintaining human control” over the AI system itself.

Speaking at a Goldman Sachs event (posted to YouTube on April 11), OpenAI CFO Sarah Friar reinforced this view, saying that models have already “invented something novel in their field” rather than merely “extending” existing knowledge. Friar also pointed to a rapid approach toward artificial general intelligence (AGI), suggesting that “we might be there.”

Friar echoed CEO Sam Altman’s view of AGI as AI that can handle the work of the most valuable humans, describing it as imminent, while acknowledging the ongoing debate over its definition, let alone its feasibility. This suggests the shift from AI as a tool to AI as a researcher may be closer than many realize, with early examples potentially emerging in areas such as software development.

https://www.youtube.com/watch?v=2kzqm_bue7e

Major R&D institutions are actively building “autonomous research” capabilities. National laboratories such as Argonne and Oak Ridge, for example, are developing “self-driving” labs aimed specifically at materials science and chemistry. Los Alamos is also working with OpenAI to test its reasoning models on the Venado supercomputer for energy and national security applications.

Broadly, the national labs are investigating AI that takes on core research tasks: iterating on hypotheses (often via optimization strategies), designing multi-step experiments, controlling robotic execution, analyzing results in real time, and pursuing discovery goals with significantly less human intervention within a given run. Still requiring human oversight for validation and strategic direction (perhaps operating at “Level 3,” or an emerging “Level 4,” of research autonomy), these initiatives show AI moving beyond passive data analysis toward direct participation in the scientific discovery process. They also involve empowering researchers directly, as seen in the recent DOE “1,000 Scientist AI Jam.” That large-scale collaboration brought together roughly 1,500 scientists from multiple national labs, including Argonne, to test advanced AI reasoning models from companies like OpenAI and Anthropic on real-world scientific problems. Researchers specifically explored the models’ potential to augment tasks such as hypothesis generation and experimental automation.
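To make that loop concrete, here is a minimal, hypothetical sketch of a closed-loop autonomous-experimentation workflow of the kind described above (propose a condition, run it, analyze the result, flag it for human review). The function names and the toy “experiment” are illustrative assumptions, not any lab’s actual system or API.

```python
# Hypothetical sketch of a closed-loop "self-driving lab" workflow: propose a
# hypothesis/condition, run an experiment, analyze the result, and keep a human
# in the loop for validation. All names here are illustrative, not a real lab API.
import random

def propose_candidate(history):
    """Pick the next experimental condition (stand-in for an optimizer or LLM planner)."""
    if not history:
        return {"temperature_c": 25.0}
    best = max(history, key=lambda r: r["yield"])
    # Perturb the best condition found so far (a crude optimization strategy).
    return {"temperature_c": best["temperature_c"] + random.uniform(-5, 5)}

def run_experiment(condition):
    """Stand-in for dispatching a protocol to lab robotics and reading back sensors."""
    t = condition["temperature_c"]
    return {"temperature_c": t, "yield": -abs(t - 42.0) + random.gauss(0, 0.5)}

def needs_human_review(result, threshold=-1.0):
    """Flag promising results for human validation, mirroring the oversight step above."""
    return result["yield"] > threshold

history = []
for step in range(20):  # bounded autonomy within a single run
    result = run_experiment(propose_candidate(history))
    history.append(result)
    if needs_human_review(result):
        print(f"step {step}: flagged for human validation: {result}")

best = max(history, key=lambda r: r["yield"])
print("best condition found:", best)
```

The point of the sketch is the division of labor: the loop iterates autonomously within a bounded run, while validation and strategic direction remain explicit human checkpoints.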

Developers hold a range of views on the potential of GenAI-enabled tools, but a similar shift is already underway in software development. Today’s AI often acts as an assistant, yet the technology is improving rapidly, especially in popular languages ranging from JavaScript to Python. OpenAI’s models show major gains on key benchmarks, “getting closer to the human level.” This supports a potential future role described as the “agentic software engineer”: AI that can “go out and do work independently,” building, testing, documenting, and more. This evolution toward more autonomous capability could reshape the field entirely.

OpenAI’s Five-Level AI Maturity Framework

OpenAI reportedly benchmarks progress toward artificial general intelligence (AGI) using an internal five-level framework. The structure was discussed within the company in mid-2024 and later reported by outlets such as Bloomberg; it outlines distinct stages of AI capability.

  • Level 1: Chatbots/Conversational AI. Systems proficient in natural language, like ChatGPT.
  • Level 2: Reasoners. AI capable of solving basic problems at the level of a highly educated person; at this stage, models can also demonstrate new reasoning skills without external tools.
  • Level 3: Agents. Autonomous AI systems that manage complex tasks on a user’s behalf and make decisions over extended periods.
  • Level 4: Innovators. AI that contributes substantially to creativity and discovery by generating new ideas, aiding invention, or driving breakthroughs.
  • Level 5: Organizations. The final stage, in which AI can manage and operate the complex functions of an entire organization, potentially exceeding human efficiency.
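Because the reported framework is an ordered taxonomy, it can be captured as a simple enumeration. The sketch below is a hypothetical encoding for illustration only, not anything OpenAI publishes.

```python
from enum import IntEnum

class AGIMaturityLevel(IntEnum):
    """Hypothetical encoding of the reported five-level framework (illustrative only)."""
    CHATBOT = 1       # conversational AI proficient in natural language
    REASONER = 2      # solves basic problems like a highly educated person
    AGENT = 3         # acts autonomously on a user's behalf over time
    INNOVATOR = 4     # generates new ideas and aids discovery
    ORGANIZATION = 5  # runs the complex functions of an entire organization

# The autonomous-lab efforts described earlier sit roughly at Levels 3-4.
print(AGIMaturityLevel.AGENT < AGIMaturityLevel.INNOVATOR)  # True: the ordering is meaningful
```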
