Microsoft Ignite 2025 Recap: Practical Updates for Modernisation, Developer Experience and Platform Engineering
Dylan McCarthy
Principal Engineer
Microsoft Ignite often brings a long list of announcements, but the value comes from understanding how these changes influence the way engineers build, maintain and modernise systems.
This year, we sent a small team over to San Francisco to attend the conference, see these announcements first-hand, and get hands-on with some of the new tools.
Several updates stood out because they directly support more reliable platforms, clearer development workflows and stronger foundations for AI-enabled applications.

Key Takeaways
Microsoft Ignite 2025 delivered a noticeable shift in focus. Rather than concentrating only on new AI features, this year’s announcements highlighted maturity, governance, observability and trusted adoption at scale. The major updates included improvements to AKS Automatic, the introduction of the GitHub Copilot Modernisation Assistant, a new cloud-native database in Azure HorizonDB and a unified AI governance platform in Foundry IQ.
Taken together, these updates give engineering teams clearer pathways for modernisation, more consistent development workflows and stronger foundations for building AI-enabled systems.
What We Cover in This Breakdown
In the rest of this post, we take a closer look at each of these announcements, focusing on how they affect real engineering work. The aim is to provide practical context for teams working inside organisations today, especially those dealing with modernisation challenges, improving developer experience or building out cloud and AI platform capabilities.

AKS Automatic and Managed Namespaces
AKS Automatic continued to mature this year, and it is now much closer to a “ready-to-use” Kubernetes environment with significantly less operational friction. While Kubernetes remains a powerful but complex runtime, Microsoft is steadily removing the undifferentiated heavy lifting that has historically slowed down platform teams.
What AKS Automatic Actually Provides
AKS Automatic is built around the idea that engineering teams should not need to manage the full lifecycle of Kubernetes clusters in order to benefit from container orchestration. To support this, Microsoft has introduced a series of opinionated, automated capabilities:
- Automated upgrades and patching
  Node OS patching, Kubernetes version upgrades and critical security updates are handled by the platform. This reduces the risk of drift between clusters and removes a large operational burden from platform teams.
- Automatic node provisioning and right-sizing
  The cluster automatically adjusts resources based on workload demand. This improves cost efficiency and reduces manual tuning, especially for teams still building container maturity.
- Built-in secure defaults
  Microsoft now applies a standardised baseline of security policies, admission controls and container runtime options. This gives organisations a consistent security posture without needing to manually configure the full suite of Kubernetes security primitives.
- Integrated GitOps bootstrapping
  When a new cluster or Managed Namespace is created, AKS Automatic can bootstrap Flux-based GitOps out of the box. This encourages alignment with modern DevOps practices and removes the overhead of manually wiring GitOps pipelines.
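The GitOps model behind that bootstrapping is simple to state: the cluster continually converges on whatever Git declares. As a rough illustration (not Flux's actual implementation), a single reconcile step can be sketched as a diff between desired manifests from Git and live cluster state:

```python
# Toy sketch of one GitOps reconcile step: compare desired state (from Git)
# with live state and plan the changes a controller such as Flux would make.
# Keys stand in for resource identifiers; values stand in for manifest versions.

def reconcile(desired: dict[str, str], live: dict[str, str]) -> dict[str, list[str]]:
    """Return what the controller would create, update or prune this cycle."""
    return {
        "create": [k for k in desired if k not in live],
        "update": [k for k in desired if k in live and desired[k] != live[k]],
        "prune":  [k for k in live if k not in desired],
    }

plan = reconcile(
    desired={"app-deployment": "v2", "app-service": "v1"},
    live={"app-deployment": "v1", "orphaned-job": "v1"},
)
print(plan)  # create app-service, update app-deployment, prune orphaned-job
```

The point of the managed bootstrap is that teams never wire this loop themselves; the platform runs it for them.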
Managed Namespaces: A More Practical Multi-Tenancy Model
One of the biggest challenges in enterprise Kubernetes adoption is tenancy: giving teams space to deploy their applications safely without handing over control of the entire cluster.
Managed Namespaces are Microsoft’s answer to this. They create a sandboxed application environment with:
- Pre-configured RBAC boundaries
- Integrated resource quotas
- Scoped network policies
- Built-in policy guardrails
- GitOps configuration tied to the namespace
This makes it much easier to stand up new application environments with predictable behaviour. Instead of each team defining their own conventions or trying to reproduce configuration from internal documentation, they inherit a standardised, supported namespace design.
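To make the bundle concrete, the sketch below builds the kind of per-tenant baseline a Managed Namespace wraps up: a namespace, a resource quota and a default-deny network policy. The manifest shapes are standard Kubernetes objects, but the quota values and labels are illustrative, not AKS defaults:

```python
# Hypothetical sketch of the per-tenant baseline a Managed Namespace bundles.
# Quota values and label names are illustrative choices, not AKS defaults.

def managed_namespace_manifests(team: str, cpu: str = "8", memory: str = "16Gi") -> list[dict]:
    """Return the Kubernetes objects a platform team might stamp out per tenant."""
    namespace = {
        "apiVersion": "v1",
        "kind": "Namespace",
        "metadata": {"name": team, "labels": {"tenant": team}},
    }
    quota = {
        "apiVersion": "v1",
        "kind": "ResourceQuota",
        "metadata": {"name": f"{team}-quota", "namespace": team},
        "spec": {"hard": {"limits.cpu": cpu, "limits.memory": memory}},
    }
    # Only allow ingress from pods in namespaces belonging to the same tenant.
    network_policy = {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "NetworkPolicy",
        "metadata": {"name": f"{team}-isolation", "namespace": team},
        "spec": {
            "podSelector": {},
            "ingress": [{"from": [{"namespaceSelector": {"matchLabels": {"tenant": team}}}]}],
        },
    }
    return [namespace, quota, network_policy]

for manifest in managed_namespace_manifests("payments"):
    print(manifest["kind"])
```

The value of Managed Namespaces is that this boilerplate, and the policy decisions behind it, come from the platform rather than from each team's own conventions.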
Why This Matters for Engineering Teams
AKS Automatic and Managed Namespaces directly address the most common barriers organisations face when adopting Kubernetes:
- Reduced operational load
  Platform teams no longer need to build and maintain their own upgrade processes, patching pipelines or custom guardrails.
- More predictable and consistent environments
  Every namespace follows a repeatable structure, which improves reliability and reduces the risk of configuration drift between applications.
- Faster onboarding for new teams
  Application teams can deploy safely without needing deep Kubernetes expertise. The platform takes care of cross-cutting concerns.
- A clearer path for modernisation
  Many modernisation programmes include containerisation as part of the target state. AKS Automatic provides a realistic foundation even for organisations without a large cloud or platform engineering team.
Where This Fits in a Platform Engineering Strategy
From a broader platform perspective, AKS Automatic aligns well with the shift toward more opinionated cloud-native platforms. It helps teams move away from “build your own Kubernetes platform” patterns, which often lead to technical debt and inconsistent experiences across teams.
For organisations already using AKS, Automatic can reduce operational overhead. For those evaluating Kubernetes for the first time, it provides a much gentler path into the ecosystem.
GitHub Copilot Modernisation Assistant
One of the more relevant announcements for enterprise engineering teams this year was the introduction of the GitHub Copilot Modernisation Assistant. Modernisation projects often start with time-consuming discovery work: understanding codebases, reviewing dependencies, assessing migration paths and identifying technical debt. The Modernisation Assistant is designed to streamline these early phases by automating much of the initial analysis.
What the Modernisation Assistant Does
The tool uses GitHub Models and the new Copilot agent framework to perform targeted assessments across a codebase. Instead of manually reviewing thousands of lines of code, the assistant provides structured insights and recommended next steps.
Key capabilities include:
- Framework and dependency analysis
  Automatically identifies outdated frameworks, unsupported libraries and upgrade paths, which reduces the need for manual dependency audits.
- Cloud-readiness assessments
  Reviews common migration blockers such as tightly coupled components, filesystem dependencies or hard-coded configurations that would prevent a smooth move to containers or cloud-based runtimes.
- Refactoring recommendations
  Suggests modernisation opportunities aligned with current best practices, such as splitting monolithic components, adopting managed identity or removing deprecated APIs.
- Code structure insights
  Highlights complexity hotspots, unused components and areas where architectural patterns may no longer be aligned with modern cloud-native expectations.
- Integration with GitHub agents
  Enables multi-step workflows and lets the assistant progressively refine its assessments and generate actionable suggestions rather than one-off summaries.
How It Fits Into a Modernisation Workflow
In practice, the Modernisation Assistant does not replace architects, engineers or the decision-making process, but it does change where time is spent.
Instead of:
- Manual dependency checks
- Codebase walkthroughs
- Spreadsheet-driven audits
- Repetitive analysis across multiple repositories
Teams can begin with a set of consistent, machine-generated findings. Engineers and architects can then validate, challenge and build on top of those results.
This improves several parts of a modernisation engagement:
- Faster discovery
  Teams can move from exploration to planning sooner, which accelerates project initiation.
- More consistent assessments across applications
  Repeated analysis follows the same structure, reducing variance in quality.
- Better use of engineering time
  Higher-value work shifts toward evaluating trade-offs, designing future-state architectures and aligning solutions with organisational priorities.
- Improved transparency for business stakeholders
  The assistant provides a clear baseline of issues and opportunities, which makes it easier to explain the scale and value of modernisation work.
Where It Adds the Most Value
This tool is particularly useful for:
- Organisations with large legacy portfolios that need an initial triage
- Teams preparing for cloud migration or container adoption
- Modernisation programmes where discovery phases often stretch beyond expected timelines
- Clients who need clear, defensible reasoning behind prioritisation decisions
- Engineering teams who want structured guidance before diving into refactoring work
Current Limitations and Practical Expectations
Because it is an AI-driven analysis tool, it is important to set realistic expectations:
- The assistant identifies issues but does not fully understand system context.
- Recommendations still need validation by engineers who are familiar with the application.
- It may surface more opportunities than a team can realistically address, which highlights the need for prioritisation.
- It acts as a starting point rather than a final migration plan.
Used appropriately, it becomes a strong accelerator and a consistent foundation for modernisation work.
Why This Matters for Enterprise Engineering Teams
Modernisation is often slowed down by ambiguity and extensive manual analysis. The Copilot Modernisation Assistant introduces a more structured and repeatable approach that supports engineering teams without dictating technical decisions.
It helps reduce uncertainty early in the process, provides clarity on where effort is required, and frees teams to spend more time on design and implementation, where engineering expertise is most valuable.

Azure HorizonDB
Azure HorizonDB is one of the more forward-looking announcements from Ignite this year. While it is still early in its lifecycle, it represents a significant shift in how Microsoft is approaching data platforms for modern, distributed and AI-enhanced applications.
Traditional database engines, even cloud-hosted ones, often require trade-offs between scale, latency, flexibility and cost. HorizonDB is positioned as a cloud-native option that reduces these compromises by offering a unified model for transactional, analytical and vector workloads.
What HorizonDB Aims to Solve
Modern applications increasingly combine three characteristics that are difficult to support in a single storage engine:
- Global low-latency reads and writes
  Distributed systems, multi-region applications and latency-sensitive workloads often require active-active architectures.
- Mixed data shapes and access patterns
  Applications may need relational queries, document storage, time-series data and vector search, often within the same workflow.
- AI-driven features that rely on semantic context
  AI agents, retrieval-augmented generation systems and real-time personalisation require efficient vector operations and semantic indexing.
HorizonDB aims to unify these into a single data platform and reduce the need to combine multiple databases and sync data between them.
Key Capabilities
The initial release describes several capabilities that stand out:
- Serverless consumption model
  Automatically scales based on workload demand and can scale to zero when not in use. This simplifies cost management, especially for event-driven workloads or early-stage services.
- Active-active global distribution
  Multi-region deployments with conflict resolution built into the platform. This is important for systems that need consistent low-latency writes across continents.
- Native vector indexing and semantic search
  HorizonDB treats vector operations as a first-class capability. This supports:
  - Semantic queries
  - Hybrid retrieval (text and vector)
  - Real-time contextual lookups for agent systems
  - Search-driven personalisation
- Multi-model flexibility
  Supports structured, unstructured and semi-structured data within a single store, which enables more flexible schema patterns without over-complex architectures.
- Optimisation powered by AI
  Microsoft has stated that HorizonDB uses AI-driven optimisation for indexing, resource allocation and query planning. The goal is to reduce the need for manual performance tuning.
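The hybrid retrieval pattern in that list is worth unpacking, because it is the piece most teams currently assemble from two separate systems. The sketch below blends a keyword score with cosine similarity over toy 3-dimensional vectors; a real system would use model-generated embeddings and the engine's own index, and the documents and weighting here are invented for illustration:

```python
# Minimal sketch of hybrid (keyword + vector) retrieval, the pattern HorizonDB
# is positioned to support natively. Documents and 3-d embeddings are toy data.
import math

DOCS = {
    "returns-policy": ("how to return a damaged item", [0.9, 0.1, 0.0]),
    "shipping-times": ("delivery windows by region",   [0.1, 0.8, 0.1]),
    "refund-status":  ("check the status of a refund", [0.7, 0.2, 0.1]),
}

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def hybrid_search(query_terms: list[str], query_vec: list[float], alpha: float = 0.5) -> list[str]:
    """Rank documents by a weighted blend of keyword overlap and vector similarity."""
    scored = []
    for doc_id, (text, vec) in DOCS.items():
        keyword = len(set(query_terms) & set(text.split())) / len(query_terms)
        semantic = cosine(query_vec, vec)
        scored.append((alpha * keyword + (1 - alpha) * semantic, doc_id))
    return [doc_id for _, doc_id in sorted(scored, reverse=True)]

print(hybrid_search(["refund"], [0.7, 0.2, 0.1]))
```

Collapsing this into one engine removes the synchronisation layer teams otherwise build between a transactional store and a separate vector index.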
Where HorizonDB Fits
While HorizonDB is not positioned as a replacement for SQL Database, Cosmos DB or Fabric in the near term, it fills a gap that is becoming more visible across modern architectures:
- Applications that need low-latency global presence
- Systems that blend transactional logic with search or semantic retrieval
- AI features that require fast vector lookups
- High-concurrency services with unpredictable traffic patterns
- Teams that want a simple, flexible global data foundation without assembling multiple engines
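Active-active distribution brings one inherent complication: the same key can be written in two regions at once, so every write path needs a resolution rule. The sketch below shows last-writer-wins over a logical timestamp, one common resolution strategy; it is an illustration of the general pattern, not HorizonDB's actual conflict-resolution policy:

```python
# Toy last-writer-wins conflict resolution for active-active replication.
# Illustrates the general pattern only; HorizonDB's actual policy may differ.
from dataclasses import dataclass

@dataclass(frozen=True)
class Write:
    region: str
    timestamp: int  # logical clock; real systems often use a hybrid logical clock
    value: str

def resolve(a: Write, b: Write) -> Write:
    """Keep the later write; break timestamp ties deterministically by region name."""
    return max(a, b, key=lambda w: (w.timestamp, w.region))

eu = Write("eu-west", timestamp=10, value="status=shipped")
us = Write("us-east", timestamp=12, value="status=delivered")
print(resolve(eu, us).value)
```

Having this built into the platform is what spares application teams from writing merge logic of their own for every replicated table.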
Example Scenarios
A few examples where HorizonDB makes sense:
- Agent-driven applications
  Agents often need quick access to contextual embeddings, user preferences or semantic content. HorizonDB’s native vector support makes this easier.
- Multi-region SaaS platforms
  Active-active capabilities help SaaS platforms provide consistent performance globally without complex sharding strategies.
- Search-heavy applications (for example, e-commerce, logistics, knowledge systems)
  Combining traditional queries with semantic search can improve ranking and retrieval quality.
- IoT or telemetry systems
  High write throughput with varied data shapes is a natural fit.
Considerations and Maturity
As with any new database engine, particularly one with this scope, teams should be thoughtful about early adoption.
Potential considerations include:
- Ecosystem maturity and integration points
- Operational tooling and observability
- Backup and restore workflows
- Cost modelling in real workloads
- Migration pathways from existing databases
For greenfield projects or new AI-enabled systems, HorizonDB is a promising option. For large legacy workloads, careful evaluation is still required.
Why This Matters for Engineering Teams
HorizonDB reflects a broader trend. The lines between transactional, analytical and semantic workloads are blurring. AI-native applications need flexibility built into the storage layer, not bolted on afterwards.
By providing a unified, globally distributed, serverless and vector-enabled database, HorizonDB gives engineering teams a single platform that can support these emerging patterns without forcing architectural fragmentation.
It will not replace every workload, but it does open up new possibilities for how teams design systems that combine traditional data operations with AI-driven functionality.
Foundry IQ
Foundry IQ is Microsoft’s newest addition to the AI governance landscape. It aims to provide a more structured, transparent and repeatable approach to evaluating and managing AI systems. Rather than adding another standalone governance tool, Foundry IQ integrates directly into the workflows that engineering, security and compliance teams are already using across Azure and GitHub.
The core idea is that as organisations scale their use of AI, especially agents, copilots and application-level LLM features, governance cannot remain a manual exercise. Spreadsheets, ad-hoc reviews and one-off exceptions do not scale and often slow teams down. Foundry IQ provides consistency without forcing teams into an entirely new way of working.
What Foundry IQ Provides
The platform focuses on giving organisations visibility into the behaviour, risks and usage patterns of their AI systems. It brings together capabilities that often require separate tools or custom internal processes:
- Model evaluation and safety scoring
  Automatically evaluates models and prompts using Microsoft’s safety benchmarks. This helps teams identify potential risks early, without needing deep AI safety expertise.
- Governed prompt and workflow management
  Teams can submit prompts, flows or agent behaviours for review. Foundry IQ provides a clear audit trail of what changed, when it changed and why. This is important for regulated industries and formal change processes.
- Traceability and data lineage
  Tracks how prompts, inputs and outputs move through the system. This is critical for understanding how an AI feature arrived at an answer and whether any sensitive data was involved.
- Policy enforcement and controls
  Security and governance teams can set boundaries around model usage, data access levels and deployment environments. Foundry IQ enforces these policies across GitHub Copilot, Azure OpenAI and the Microsoft Agent Framework.
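The policy-enforcement capability follows a familiar gate pattern: a requested deployment is checked against organisation-level rules before it goes live. The sketch below shows that pattern in miniature; the policy shape, rule names and model identifiers are illustrative assumptions, not the Foundry IQ API:

```python
# Sketch of the policy-gate pattern behind AI governance controls.
# The policy structure and model names are illustrative, not the Foundry IQ API.

ORG_POLICY = {
    "approved_models": {"gpt-4o", "gpt-4o-mini"},   # example allow-list
    "production_requires_review": True,
}

def check_deployment(model: str, environment: str, reviewed: bool) -> list[str]:
    """Return policy violations; an empty list means the deployment may proceed."""
    violations = []
    if model not in ORG_POLICY["approved_models"]:
        violations.append(f"model '{model}' is not on the approved list")
    if (environment == "production"
            and ORG_POLICY["production_requires_review"]
            and not reviewed):
        violations.append("production deployments require a completed review")
    return violations

print(check_deployment("experimental-llm", "production", reviewed=False))
```

The practical win is that the gate runs where deployment already happens, so "approved" is a property the pipeline checks rather than a spreadsheet someone maintains.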
How It Fits Into Engineering Workflows
One of the key strengths of Foundry IQ is its integration with tools engineering teams already use:
- GitHub Copilot
  Organisations can govern how Copilot is configured, which models are available and how code completions are logged or monitored.
- Azure OpenAI
  Model selection, deployment and data access can be managed through Foundry IQ policies, which reduces the risk of unapproved or misconfigured endpoints.
- Microsoft Agent Framework
  As teams build agent-driven applications, Foundry IQ provides a way to assess risk, approve behaviours and monitor agent interactions in production.
Instead of building separate governance processes, teams can bring governance into the same environment where development and deployment already happen.
Why This Matters for Scaling AI in the Enterprise
Most organisations are still trying to balance two competing priorities:
- Enabling rapid AI adoption
  Teams want to integrate copilots and agents into workflows and applications quickly, without long approval cycles.
- Maintaining clear oversight and risk management
  Security and compliance teams need visibility into how models behave, what data they access and how prompts evolve over time.
Foundry IQ helps bridge that gap.
With structured governance:
- Developers can adopt AI more confidently.
- Security teams gain transparency without becoming a bottleneck.
- Organisations reduce the drift that comes from unmonitored or one-off AI usage.
- Modernisation and AI uplift programmes gain a clearer approval and evaluation pathway.
Where Foundry IQ Fits Best
Foundry IQ will be particularly useful for organisations that:
- Are implementing enterprise-wide AI adoption and want consistent practices
- Are subject to industry or regulatory compliance frameworks
- Have multiple teams building AI-enhanced applications or agents
- Want a single place to manage and evaluate models, prompts and behaviours
- Need a governance model that does not disrupt developer workflows
Current Considerations
As with any new governance platform, teams should think about:
- How Foundry IQ integrates with existing risk and change management processes
- Which applications or agent workflows should be onboarded first
- How the organisation defines “approved” versus “experimental” AI usage
- What visibility engineering teams need versus what governance teams need
- How to onboard new projects and teams into the governance pipeline without friction
The platform is still evolving, but the direction aligns with what many enterprises have been asking for: governance that supports innovation rather than slowing it down.
Why This Matters for Engineering Teams
From an engineering and platform perspective, Foundry IQ reduces uncertainty. Teams gain:
- Clarity on which models and configurations are approved
- Confidence that they are building within supported boundaries
- Faster approvals, because governance is built into the workflow
- A shared language for discussing risk and behaviour with security teams
As AI becomes more embedded in applications, this sort of governance model becomes essential. Foundry IQ offers a structured, modern approach that fits naturally into current Azure and GitHub-based development practices.
What These Announcements Mean in Practice
More accessible Kubernetes adoption
AKS Automatic removes a significant amount of operational overhead and simplifies how teams create safe, multi-tenant Kubernetes environments. It aligns with a broader trend of cloud providers offering more opinionated, managed runtime environments to reduce the burden on engineering teams.
Faster and more structured modernisation workflows
The Copilot Modernisation Assistant supports the early phases of modernisation by giving teams a starting point backed by automated analysis. This frees engineers to spend more time on higher-value work such as validating findings, exploring options and designing target architectures.
A new data foundation for AI-native systems
With HorizonDB, Azure is addressing the data patterns that modern AI applications require. For future-facing workloads, especially those that incorporate semantic search or agent-driven behaviour, this will become an important option to consider.
Clearer governance for AI adoption
Foundry IQ formalises governance in a way that supports rather than slows down engineering teams. It gives organisations a shared, repeatable model for assessing and deploying AI capabilities, which is essential as usage grows.
Closing Thoughts
This year’s Ignite felt different. While previous years focused heavily on new AI capabilities and rapid innovation, 2025 brought a clearer emphasis on maturity, governance, observability, trust and long-term sustainability. The announcements were not only about adding more AI features. They were also about giving organisations the confidence and structure to adopt these tools safely and at scale.
Across AKS Automatic, the Copilot Modernisation Assistant, HorizonDB and Foundry IQ, a common pattern emerged:
- More opinionated, reliable platforms that reduce operational burden
- Better guardrails and governance built directly into the tools engineers already use
- Improved insight and observability, from data lineage to model behaviour to workload complexity
- AI capabilities that are delivered with structure rather than in isolation
- A stronger focus on organisational trust, which makes it easier for enterprises to introduce AI without increasing risk
Many teams are now moving beyond experimentation and into sustained adoption. They want platforms that are easier to run, modernisation pathways that are clearer, and governance models that keep them safe without blocking delivery.
Microsoft’s direction this year aligns closely with those needs. Instead of pushing organisations into new technologies without support, the focus is now on enabling teams to adopt AI and cloud-native architectures more confidently, with the right controls, structure and visibility in place.
These announcements will not solve every challenge, but they do create a stronger foundation for engineering teams planning their next steps. Whether the goal is modernising a portfolio of legacy applications, improving developer experience or building new AI-enabled systems, there is now a clearer and more dependable set of tools to support that journey.
