HP and AI and data technology for the enterprise

By versatileai | May 7, 2026

Ahead of the AI & Big Data Expo, taking place May 18-19 at the San Jose McEnery Convention Center, we spoke with Jerome Gabryszewski, HP's AI & Data Science Business Development Manager, about AI, preparing data for AI ingestion, and local versus cloud computing.

While the tech media likes to cite data as the “new oil,” the reality on the ground is that while you have access to a lot of your company’s information, actually using it to your business advantage can be problematic, especially at enterprise scale.

Should you choose cloud-hosted AI models or local computing? How do you organize your “data house” so that smart models can produce meaningful results? And, as always, we asked our interviewee to help us predict the next chapter in the fast-changing story of enterprise IT in an AI-dominated environment.

Artificial Intelligence News: Moving data ingestion from manual to automated sounds great in theory, but it’s notoriously difficult. Where does HP see companies struggling right now?

One of the most consistent friction points we see is that organizations underestimate the organizational and architectural debt behind their data. Before automation can take hold, fragmented data ownership across departments, inconsistent schemas between systems, and legacy infrastructure that was never designed for interoperability all have to be reconciled. The technical lift of automation is often smaller than the governance and integration work required beforehand.

Artificial Intelligence News: When AI models start updating themselves continuously, things can easily change direction. How do you advise clients to address risks such as concept drift and data poisoning?

Continuous learning can turn an AI project from an asset into a liability if it is not carefully managed. Our advice to clients is to treat model updates the same way you would treat code deployments: nothing goes into production without passing validation gates. For concept drift, that means an MLOps pipeline with automatic drift detection and human-in-the-loop triggers before retraining begins. Data poisoning is as much a data provenance question as it is a security issue; you need to know exactly where your training data comes from and who can touch it. The clients who get this right aren’t necessarily the most technically sophisticated. They are the companies that build AI governance into their risk frameworks before scaling.
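To make the validation-gate idea concrete, here is a minimal sketch of how an automatic drift check might gate retraining behind a human approval step. The KS-test threshold, the feature handling, and the approval hook are illustrative assumptions, not a description of HP's or any client's pipeline.

import numpy as np
from scipy.stats import ks_2samp

DRIFT_ALPHA = 0.01  # assumed significance level for the two-sample KS test

def feature_drifted(reference: np.ndarray, live: np.ndarray, alpha: float = DRIFT_ALPHA) -> bool:
    """True when the live distribution differs significantly from the training reference."""
    _, p_value = ks_2samp(reference, live)
    return p_value < alpha

def retraining_gate(reference: dict, live: dict, human_approves) -> bool:
    """Queue retraining only if drift is detected AND a human approves the trigger."""
    drifted = [name for name in reference if feature_drifted(reference[name], live[name])]
    if not drifted:
        return False                  # no drift detected: nothing reaches retraining
    return human_approves(drifted)    # human-in-the-loop decision, never automatic

rng = np.random.default_rng(0)
reference = {"latency_ms": rng.normal(100, 10, 5_000)}
live = {"latency_ms": rng.normal(130, 10, 5_000)}   # shifted distribution simulates drift
print(retraining_gate(reference, live, human_approves=lambda names: bool(names)))

The point of the sketch is the ordering: detection is automatic, but the retraining trigger is never pulled without an explicit approval, mirroring how code deployments pass review before release.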

Artificial Intelligence News: I’d like to touch on HP’s hardware roots. What does a modern workstation or computing setup actually need to look like to handle the sheer volume of an autonomous AI lifecycle?

HP’s roots here actually matter. The Z Series has been purpose-built for the most demanding professional computing for over 15 years, so when we talk about what the autonomous AI lifecycle actually demands of our hardware, we’re not speculating, and we’ve been iterating on this issue longer than most other companies.

The answer is not a single machine, but a spectrum. At the individual developer level, local computing needs to be powerful enough to run real experiments without relying on the cloud for every iteration. The ZBook Ultra and Z2 Mini cover the mobile and compact deskside tiers: professional-grade machines that can run local LLMs and heavy workflows simultaneously.

The ZGX Nano will be very interesting for AI-first teams. It’s an AI supercomputer that fits in the palm of your hand (15x15cm), powered by the NVIDIA GB10 Grace Blackwell Superchip with 128 GB of unified memory and 1,000 TOPS of FP4 AI performance. A single unit can run models with up to 200 billion parameters locally, and teams that need to scale beyond that can connect two units over a high-speed interconnect to work with models of up to 405 billion parameters. No cloud, no data center, no queues. With the NVIDIA DGX software stack and the HP ZGX toolkit preconfigured, teams can go from setup to a first workflow in minutes instead of days.
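As a rough sanity check on those figures, the arithmetic below estimates weight memory at FP4 precision. The 20% overhead factor is an assumed rule of thumb for caches and buffers, not an HP sizing guide.

def model_memory_gb(params_billion: float, bits_per_param: float, overhead: float = 1.2) -> float:
    """Approximate model memory in GB, with an assumed ~20% overhead for KV cache and buffers."""
    weight_bytes = params_billion * 1e9 * bits_per_param / 8
    return weight_bytes * overhead / 1e9

for params in (70, 200, 405):
    print(f"{params}B params at FP4 ~= {model_memory_gb(params, 4):.0f} GB")
# 200B at FP4 lands around 120 GB, which is why it fits a single 128 GB unit,
# while ~405B needs the two-unit, 256 GB configuration.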

Additionally, the Z8 Fury gives power-user teams up to four NVIDIA RTX PRO 6000 Blackwell GPUs in a single system (384 GB of VRAM), enough to run a full model development cycle on-premises. And at the frontier, the ZGX Fury completely changes the conversation. Powered by the NVIDIA GB300 Grace Blackwell Ultra superchip with 748 GB of coherent memory, it enables multi-trillion-parameter inference at the deskside rather than in the data center. For teams running continuous fine-tuning and inference on sensitive data, it typically pays for itself in 8 to 12 months compared with equivalent cloud compute.

And for organizations that require clustering and further expansion, the entire Z portfolio is designed in a rack-ready form factor that integrates into managed IT environments without compromising security or data residency.

Jerome Gabryszewski, Business Development Manager, AI & Data Science, HP.

The bigger point is this: autonomous AI lifecycles are governance and latency problems, not just compute problems. Teams can’t keep sending sensitive training data to the cloud every time a model needs to be updated. HP’s portfolio gives organizations a hardware path that scales with the maturity of their workflows, from the developer’s desk to distributed on-premises computing, so the hardware fits the work these AI systems actually need to do.

Artificial Intelligence News: Gen AI computing costs are rising for many companies. What are the practical solutions to balance that huge cost with the efficiency of modern clouds?

The cost problem is structural, not cyclical. Enterprise GenAI spending will soar to $37 billion in 2025, with 80% of companies missing their cost projections by more than 25%. The central tension is that although unit inference costs are falling, total spending keeps rising because usage is growing faster than costs are dropping. The cloud API model was designed for experimental, low-volume workloads; it wasn’t built to be the economic engine for large-scale production AI.

The practical solution is a discipline issue before it is an infrastructure issue. Draw a clear line between exploratory and production workloads, and never use the same compute model for both. Early iterations such as prototyping, fine-tuning, and model evaluation must be performed on local hardware such as the ZGX Nano or Z8 Fury. This allows you to spend your capital once, rather than spending your operating budget on experiments that don’t have a clear ROI path.

Organizations that get this right run a three-tier model: the cloud for burst training and frontier-model access, on-premises HP Z infrastructure for predictable high-volume inference, and edge computing for latency-sensitive work. Independent analysis shows that on-premises can deliver up to 18x cost benefits per million tokens over a five-year lifecycle. The framework we use with our clients is simple: use the cloud for the scale you can’t predict, and on-premises for the scale you can.
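To illustrate the exploratory-versus-production split in numbers, here is a toy break-even comparison. Every figure in it (token volume, price, hardware cost, amortization period) is an assumption chosen for the example, not an HP or cloud-provider quote.

def cloud_cost(monthly_tokens_m: float, price_per_m_tokens: float, months: int) -> float:
    """Total cloud inference spend over the period (pure OpEx)."""
    return monthly_tokens_m * price_per_m_tokens * months

def on_prem_cost(hardware_capex: float, monthly_opex: float, months: int) -> float:
    """Hardware purchase plus power and maintenance over the same period (CapEx + OpEx)."""
    return hardware_capex + monthly_opex * months

MONTHS = 60                    # five-year lifecycle, as in the interview
TOKENS_M_PER_MONTH = 2_000     # assumed steady production volume, in millions of tokens
CLOUD_PRICE_PER_M = 10.0       # assumed blended price per million tokens, in dollars
CAPEX, MONTHLY_OPEX = 120_000, 1_500   # assumed hardware cost and monthly running cost

print(f"cloud:   ${cloud_cost(TOKENS_M_PER_MONTH, CLOUD_PRICE_PER_M, MONTHS):,.0f}")
print(f"on-prem: ${on_prem_cost(CAPEX, MONTHLY_OPEX, MONTHS):,.0f}")
# Predictable, high-volume inference favors on-prem; spiky, exploratory usage favors the cloud.

With these assumed numbers the on-prem path costs a fraction of the cloud path over five years, but the conclusion flips if utilization is low or bursty, which is exactly the discipline line the three-tier model draws.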

Artificial Intelligence News: Everyone wants their proprietary data to be “AI-ready.” How do companies do that without exposing sensitive or siloed information?

The mistake most companies make is treating “AI-ready data” as a data engineering problem when it is actually a data sovereignty problem, and that requires a different solution. Sending proprietary data to a cloud model for processing is not just a leakage risk; it is a governance failure, especially in regulated industries, where even the act of sending data externally can trigger non-compliance.

The architecture that solves this is retrieval-augmented generation (RAG) running on local infrastructure. It lets the model retrieve relevant context from an internal knowledge base at query time, without that data ever being used for training or exposed externally. Proprietary data stays on-premises, on hardware that you control. For example, a ZGX Nano or Z8 Fury running a locally hosted model can power a full RAG pipeline over sensitive internal documents without the data ever leaving the building or a single token being sent to a third party.

The access control layer is where this becomes operationally serious. A well-designed RAG system enforces role-based permissions at the retrieval level, so the AI only surfaces what a specific employee is allowed to see, much like a document management system. The combination of local computing, local models, local retrieval, and managed access lets you make your proprietary data AI-ready without actually exposing it.
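A minimal sketch of role-based filtering at the retrieval layer is below. The document schema, role model, and scores are assumptions for illustration, not a specific HP or vendor implementation.

from dataclasses import dataclass

@dataclass
class Chunk:
    text: str
    allowed_roles: set        # roles that may ever see this chunk
    score: float = 0.0        # similarity score from the (local) vector index

def retrieve(query_hits: list, user_roles: set, top_k: int = 5) -> list:
    """Drop chunks the user is not entitled to BEFORE they reach the model's context."""
    permitted = [c for c in query_hits if c.allowed_roles & user_roles]
    return sorted(permitted, key=lambda c: c.score, reverse=True)[:top_k]

hits = [
    Chunk("Q3 board pack: margin guidance", {"finance", "exec"}, 0.92),
    Chunk("Employee handbook: expense policy", {"all-staff"}, 0.81),
]
print([c.text for c in retrieve(hits, user_roles={"all-staff"})])
# Only the handbook chunk is passed to the locally hosted model; the board pack never
# enters the prompt, so the model cannot leak what this user was never allowed to see.

The design choice that matters is that filtering happens before generation, not after: content the user cannot see never enters the model's context in the first place.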

Companies that get this right aren’t just sending their crown jewels to the cloud for processing. They bring intelligence to data, not the other way around.

Artificial Intelligence News: What will the day-to-day role of enterprise IT teams look like in the coming years when autonomous AI is combined with these modern cloud platforms?

I think the person who best represents this concept is Jensen Huang. He said that our jobs are not about tinkering with spreadsheets or typing on keyboards, and that our jobs are generally much more meaningful than that. And he clearly distinguishes between the task of work and its purpose. For example, in IT, the task may be to provision servers or prioritize incidents, but the goal is to keep the business resilient and moving forward. That difference is exactly what is happening now.

Gartner predicts that 40% of enterprise applications will include AI agents by the end of 2026, up from less than 5% just a year ago. This means that while the day-to-day execution layer of IT is rapidly being absorbed, the governance and architecture layers are expanding just as quickly. What’s already happening in leading organizations is a shift from having IT teams perform tasks themselves to having them design and manage the agents that perform those tasks on their behalf.

The key gap is that only one in five companies has a mature governance model for doing so. This is where local-first infrastructure becomes important again. When the automation layer runs on hardware the organization controls, it provides full observability of agent behavior that simply isn’t available when workloads are abstracted away in the cloud. The IT team of the next two years won’t be the one keeping the lights on; it will be the one deciding which agents can be trusted with which decisions, and ensuring the infrastructure underneath those decisions can actually support the business.

(Image source: Pixabay.)

Want to learn more about AI and big data from industry leaders? Check out the AI & Big Data Expo in Amsterdam, California, and London. This comprehensive event is part of TechEx and co-located with other major technology events. Click here for more information.

AI News is brought to you by TechForge Media. Learn about other upcoming enterprise technology events and webinars.
