Build Smart, Stay Portable

If you’ve read my previous blog posts, The Dark Side of Sovereignty and Digital Sovereignty isn’t a switch you flip, you’ll know I don’t believe sovereignty is achieved just by moving to a “European cloud”. That might tick a few compliance boxes, but it doesn’t give you real control.
Sovereignty is about who holds the keys, who makes decisions, and who truly runs the stack. Sometimes that means owning more. Sometimes it means understanding the trade-offs of giving it away.
Portability often enters the discussion as an exit strategy. But like disaster recovery, it tends to fall apart when you need it most. Teams often move workloads as-is, then run into issues with invisible dependencies and services they didn’t know they relied on. When was the last time you actually restored that backup? Or failed over a region and got everything working again?
This post shares what I’ve learned from 15+ years of building platforms, apps, and infrastructure, and how I think we should be making architectural decisions today. Between lock-in, regulation, and rising complexity, it’s more important than ever to be intentional.
1. Native, portable, or neutral?
Neutrality sounds ideal — full control, vendor freedom, architectural purity. But in my experience, most companies aren’t mature enough to handle it. It’s hard, expensive, and easy to get wrong.
You can think of sovereignty as a spectrum, with three broad strategies:
| Strategy | Description | Best Fit When… |
|---|---|---|
| Cloud-Native | Fully integrated with a single cloud provider | You prioritize speed, security, and vendor trust |
| Cloud-Portable | Designed to move across providers | You want leverage or exit options |
| Cloud-Neutral | Minimizes or avoids vendor dependencies | You require strict control for compliance or risk |
These aren’t rigid categories. Most teams end up blending strategies. What matters is knowing where you’re going native and where you’re keeping options open.
Without long-term planning, everyday service and tooling choices quietly commit you to a single approach before you’ve made a deliberate decision.
2. Portability: theory vs practice
Portability isn’t a single thing. There’s a big difference between swapping services and moving systems. Functional portability sounds great on paper. In reality, it’s rarely achievable. Infrastructure portability, however, is more realistic, and that’s where Kubernetes shines.
Functional portability
Your application’s functionality can run across multiple cloud platforms without modification. It means swapping one cloud service for another with minimal changes, like replacing Azure Functions with AWS Lambda.
In theory: sure. In practice: good luck replicating the triggers, authentication bindings, deployment flow, and logging behavior. You’ll quickly find that swapping one for the other turns into a rewrite, not a drop-in replacement.
The problem is that most cloud services are designed to integrate tightly with their platform — making them hard to swap without significant rework.
Infrastructure portability
You can recreate your cloud infrastructure (compute, storage, networking, IAM) across providers or on-prem without major rewrites. It means being able to run the same workload in different environments — for example, moving from AKS (Azure Kubernetes) to EKS (AWS Kubernetes) without rewriting everything, because the core stack is defined in Kubernetes manifests and GitOps pipelines.
This goes beyond just container workloads. Services like PostgreSQL or even Databricks can be part of a portable stack, when deployed with cloud-neutral configurations.
It’s not effortless, but it’s realistic, especially if you automate platform deployment and focus on common cloud-based applications and services rather than full abstraction.
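To make “the core stack is defined in Kubernetes manifests” concrete, here’s a minimal sketch of what a portable workload looks like. The names (`webapp`, the image reference) are placeholders, not from a real stack:

```yaml
# A plain Deployment like this is cluster-agnostic: nothing in it
# references Azure or AWS, so the same manifest applies to AKS, EKS,
# or an on-prem cluster via kubectl or a GitOps tool.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp                # placeholder name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: webapp
          image: registry.example.com/webapp:1.0.0  # hypothetical image
          ports:
            - containerPort: 8080
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
```

The portability breaks down at the edges: Services of type LoadBalancer, storage classes, and ingress annotations are exactly where cloud specifics creep back in.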
3. Why I still bet on Kubernetes
Kubernetes isn’t just about running apps — it’s about moving architecture. Kubernetes, containers, and GitOps allow you to define your architecture in a way that can move with you, if you put in the work. Think: your networking model, data layer, observability tooling, and policies — all defined as code and deployed consistently across environments.
What makes Kubernetes powerful isn’t container orchestration. It’s that you can encode your architecture inside the cluster:
- Networking: With CNI plugins and service meshes, you can create a portable, policy-enforced network model.
- Storage: CSI drivers allow you to abstract volume provisioning across different clouds.
- Observability: You can deploy Prometheus, OpenTelemetry, and Grafana declaratively across clusters.
- Compliance: Policy-as-code with Kyverno or OPA enforces governance inside the cluster.
These building blocks let you replicate your architecture, not just your apps, across environments with minimal change.
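As one example of such a building block, here’s what a “portable, policy-enforced network model” can look like in practice. A NetworkPolicy is a standard Kubernetes resource enforced by the CNI plugin, so it travels with the cluster rather than living in a cloud firewall product (the namespace and app labels are illustrative):

```yaml
# Allow only the frontend to reach the backend; all other ingress
# to these pods is denied. Enforced by the CNI plugin (e.g. Cilium
# or Calico), not by a cloud-specific firewall.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend   # illustrative names
  namespace: shop
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```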
Kubernetes gives you a consistent interface across clouds, but performance, latency, and pricing still vary, so the experience isn’t identical.
4. The Terraform illusion
Terraform is often seen as a way to make infrastructure “portable.” The language is cloud-agnostic, but the code you write rarely is.
Here's a quick example: spinning up a Kubernetes cluster on Azure vs AWS.
```hcl
# Azure Kubernetes Cluster (AKS)
# Cloud-specific identity and RBAC tightly integrated with Entra ID
resource "azurerm_kubernetes_cluster" "aks" {
  name                = "aks-cluster"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  dns_prefix          = "aks"

  default_node_pool {
    name       = "default"
    node_count = 2
    vm_size    = "Standard_DS2_v2"
  }

  identity {
    type = "SystemAssigned" # Azure-specific managed identity integration
  }

  azure_active_directory_role_based_access_control {
    managed                = true
    admin_group_object_ids = [var.admin_group_id] # Tied to Entra ID (Azure AD)
  }

  network_profile {
    network_plugin = "azure" # Azure’s own networking stack
  }
}
```
```hcl
# AWS EKS Cluster
# Cloud-specific IAM role binding required before creating the cluster
resource "aws_eks_cluster" "eks" {
  name     = "eks-cluster"
  role_arn = aws_iam_role.eks.arn

  vpc_config {
    subnet_ids = var.subnet_ids
  }

  kubernetes_network_config {
    service_ipv4_cidr = "10.100.0.0/16"
  }

  enabled_cluster_log_types = ["api", "audit", "authenticator"]
}

resource "aws_iam_role" "eks" {
  name = "eks-cluster-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action = "sts:AssumeRole"
      Effect = "Allow"
      Principal = {
        Service = "eks.amazonaws.com"
      }
    }]
  })
}
```
Same goal, completely different setup. Even with tools like Pulumi or Crossplane, which provide alternative abstractions, the reality remains: cloud-specific APIs and services still leak into your code.
Yes, you can create custom modules or build a wrapper layer to standardize interfaces. But at what cost? You’ll be buying portability at the price of complexity, and it rarely pays off unless you operate at serious scale.
What I’ve learned is this: optimize for clarity and maintainability first. Reuse modules when it makes sense. Avoid premature abstraction. And remember: using a common tool doesn't mean your code is portable. The abstraction is thin, and cloud-specific APIs will leak through unless you accept the cost of complexity.
5. The role of commercial Kubernetes platforms
Platforms like OpenShift extend upstream Kubernetes with additional layers for automation, governance, security, and enterprise support. These platforms are often chosen by organizations that operate in regulated industries, where long-term support, built-in compliance features, and predictable patching are critical.
They offer a middle ground: the flexibility of Kubernetes, paired with operational guardrails and a clearly defined support model. This can be especially valuable in environments that require strict governance or legal separation from hyperscalers.
Like any enterprise solution, commercial platforms introduce their own forms of dependency — not through cloud region or API design, but through support contracts, lifecycle management, and platform-specific tooling. While they can run across environments such as on-premises, private cloud, or public cloud, your operational model still becomes closely tied to the vendor's release cycle, tooling, and patch delivery pipeline.
This doesn’t make them a bad choice. Far from it. In fact, for many teams, they accelerate adoption while improving security posture and compliance alignment. But it’s worth understanding the boundaries: moving the platform doesn’t always mean full independence, especially when external support is still required for critical updates or security fixes.
In practice, commercial Kubernetes platforms can strengthen legal and operational sovereignty, but supply chain independence may still require additional planning.
6. Jumpstart Your Strategy
You don’t need a perfect blueprint. You just need to start.
Use this simple checklist to map where you are and take intentional steps without overengineering your approach.
- Describe Your Stack
  Inventory your stack: what services are you using, in which regions, with which identity providers and monitoring tools?
- Assess Risk & Control
  For each part of your stack, ask: where are the biggest risks or dependencies? Do you need more control here? And are you prepared to take on that control if needed?
- Prioritize Actions
  Start with what matters most. For example, rebuild business-critical workloads using containers for portability, store logs and sensitive data in EU-only regions, and use cloud-agnostic services like PostgreSQL or Redis instead of vendor lock-in offerings.
- Apply Guardrails
  Enforce basic rules to protect sensitive workloads, such as blocking deployments to non-EU regions using Azure Policy or OPA.
- Keep Improving
  Revisit your setup regularly. Track new regulations like the European AI Act and DORA, which introduce stricter rules around data governance, resilience, and accountability.
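The guardrails step can live inside the cluster itself. Here’s a minimal sketch using Kyverno (one of the policy-as-code engines mentioned earlier); the `data-classification: sensitive` label convention and the region wildcard are assumptions for illustration, and a real guardrail would also have to cover storage, DNS, and managed services:

```yaml
# Require workloads labeled as sensitive to pin themselves to an
# EU region via a node selector on the well-known region label.
# A sketch only — adapt the label convention and region pattern.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-eu-region        # illustrative policy name
spec:
  validationFailureAction: Enforce
  rules:
    - name: eu-node-selector
      match:
        any:
          - resources:
              kinds:
                - Deployment
              selector:
                matchLabels:
                  data-classification: sensitive   # assumed label
      validate:
        message: "Sensitive workloads must be pinned to an EU region."
        pattern:
          spec:
            template:
              spec:
                nodeSelector:
                  topology.kubernetes.io/region: "*europe*"
```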
“It doesn’t have to be perfect. But it should be intentional.”
What trade-offs are you making today, and are they intentional? I’d love to hear how your team is approaching sovereignty and portability.