Nutanix: Cloud complexity reshaping customer needs and MSP opportunity
By Lilia Guan, as seen in arnet.com.au 18 Sept 2025
Infrastructure challenges include AI, protection gaps, and market volatility.

Resilience and smarter infrastructure are critical as three converging trends create new challenges for customers and opportunities for managed service providers.
AI at the edge, closing protection gaps, and maintaining resilience in a volatile market are three trends intersecting to create complex environments for customers, said Nutanix Asia Pacific and Japan (APJ) vice president and general manager Jay Tuseth.
AI’s shift to edge computing requires infrastructure that can process data locally.
However, protection gaps arise as infrastructure evolves from bare metal to virtualisation and containerisation, while volatility in market trends, acquisitions, and data portability challenges necessitate adaptive architectures.
These trends intersect with multi-cloud sprawl that includes different tools, billing models, and security postures across AWS, Azure, and private cloud, said Versent chief technology officer Tim Hope during a media roundtable with Tuseth.

Cloud complexity
One area where this is evident is the rise of AI. This is where performance meets pressure, especially as the way data is generated starts to shift.
“It used to be that virtually all data lived in data centres, but over the last five to ten years, that has shifted to data being generated outside traditional data centres, often at the edge,” said Tuseth.
“That could be a retail store, a drone, or, in one organisation I worked with, data coming off the ear tags of cows.
“Data can come from almost anywhere.”
The challenge then becomes how to manage that information and turn it into something valuable. Much of this is driving demand for infrastructure that can process data locally, explained Tuseth.
“Organisations are striving to place infrastructure as close to the data as possible,” he said. “This challenge has become even more acute with GenAI, agentic AI, and new types of applications that require more complex, distributed infrastructure — including at the edge.”
“What we see from organisations is a desire to manage these large, distributed, complex environments with the same simplicity as a standardised data centre platform.”
This often involves reducing silos and embedding AI-ready platforms regardless of whether the application specifically requires AI, noted Tuseth; the approach applies to any application, whether traditional, containerised microservices, or AI-ready.
“Consideration is being given to the edge early in the conversation – specifically around moving compute and functionality closer to the user for speed and resilience,” Tuseth told ARN. “This reflects a growing maturity in the market.
“Rather than jumping head-first into AI, we’re seeing more future-looking, strategic planning where the priority is increasingly building an infrastructure platform that is not only compliant, but also scalable for future workloads and demand.”
Versent has also introduced AI ‘starter kits’ using Amazon Q, Bedrock, or Nutanix Data Services, so small- to medium-sized enterprises can experiment with real business data safely.
The offering follows “as-a-service models so SMEs can consume modern capabilities without hiring entire DevOps or MLOps teams”, said Hope.
Resilience and portability
The next trend shows that cloud-native environments are inherently more complex than a traditional data centre.
As data begins to sprawl, having seamless protection through all of that becomes difficult, especially the elements around applications that are running at core, at edge, and in cloud — particularly public cloud.
This intersects with IT teams having limited and shrinking budgets and new cloud-native applications built using paradigms such as microservices and containerisation.
These are then extended into AI-specific applications where the data requirement becomes much more complex, and therefore the ability to govern and protect it becomes different.
Versent’s Hope pointed to operational silos between legacy infrastructure teams and newer cloud and Kubernetes teams.
He also pointed out inconsistent disaster recovery and backup models when workloads move from virtual machines to containers without re-engineering recovery strategies.
In addition, skills shortages in Kubernetes operations are leading to brittle Day-2 management, and the total cost of resilience needs to be budgeted for in operations.
“Customers often assume resilience comes ‘for free’ with virtualisation or containerisation,” he said. “In reality, it requires redesigned monitoring, replication, and testing of the resilience processes.”
Although Kubernetes has been in use for just over a decade since Google open-sourced it in 2014, it’s also incomplete. Many of the data services and data requirements found in a traditional data centre aren’t necessarily natively embedded, said Tuseth.
Having application portability gives customers the ability to move from public cloud to on-premises, to another public or private cloud seamlessly as their needs change.
This helps reduce the level of lock-in they experience while allowing them to maintain control over their own environments.
That’s why Versent’s approach is to design portability and sovereignty in from the start, for example using infrastructure-as-code tools such as Terraform and Bicep, and Kubernetes orchestration.
This is so “workloads can shift across Nutanix NC2, AWS, or Azure with minimal friction,” Hope said.
“Utilise trusted third-party software solutions for core non-functional requirements that provide capability to support the applications deployed into different landscapes while embedding FinOps and sustainability tracking so operational continuity is not just technical, but also financial and compliance aligned,” he explained.
This means when vendor terms or ownership change, customers have the right level of detail and capability to choose different paths for their applications, Hope added.
“Trading off deep platform integration vs portability on a workload-by-workload basis, enabling them to move at their own pace and make choices aligned to their business objectives.”
Market challenges
The last trend concerns macro challenges outside organisations’ control that impact resilience. Unpredictable cloud costs can be a shock for CIOs, while regulatory shifts, particularly around data sovereignty, are difficult to plan for but essential to manage, said Tuseth.
Decisions by vendor partners, such as Broadcom’s acquisition of VMware, have created uncertainty and unease, driving the need for application portability and autonomy.
This has played to Nutanix’s advantage with the vendor adding more than 2,700 customers in the last year.
“It’s actually the highest number of new customers we’ve added in approximately four years,” said Tuseth. “This brings the total number of Nutanix customers globally to nearly 30,000 distinct customers, so we continue to see that growth.”
Tuseth said it has seen strong growth across Australia and New Zealand. While Nutanix doesn’t break those numbers out to the country level, he noted in the vendor’s latest quarterly earnings it reported “Australian revenue growth of 35 per cent year-on-year and partner growth of 45 per cent year-on-year”.
This growth was led by organisations needing an adaptive architecture that allows data and applications to reside anywhere, enabling movement between public cloud and on-premises.
“Some of the decisions following the acquisition have driven a lot of uncertainty and unease into the market,” he said. “We’re seeing organisations who had previously pursued a single vendor infrastructure strategy now seek alternatives to de-risk their businesses.”
Hope also said the MSP was seeing many customers re-evaluating their virtualisation and cloud roadmaps in light of Broadcom’s VMware changes.