Every cloud architecture decision you make is either increasing or decreasing the cost of leaving. That cost — the engineering hours, rewritten code, broken integrations, and lost productivity required to migrate away from your current provider — is vendor lock-in. And unlike most technical debt, almost nobody is tracking it.
Lock-in is not inherently bad. Sometimes a proprietary service is the right call because the productivity gain outweighs the switching cost. The problem is when lock-in happens by default rather than by design — when your team wakes up three years into an AWS deployment and realizes that migrating away would take 18 months and a seven-figure budget.
This post provides a practical framework for measuring your lock-in exposure, understanding which services are portable and which are not, and making informed architectural decisions about when proprietary is worth it and when open standards are the smarter play.
What Lock-In Actually Looks Like
When most people think about cloud vendor lock-in, they think about proprietary APIs. But APIs are just the surface. Real lock-in is layered, and it accumulates silently across every part of your stack.
Compute and Infrastructure Lock-In
This is the most portable layer. Virtual machines are essentially standardized — if you are running Linux VMs on EC2, you can run the same workloads on any IaaS provider with relatively modest migration effort. Container workloads running on Kubernetes are similarly portable, provided you have not deeply integrated with EKS-specific features like IRSA (IAM Roles for Service Accounts) or EKS Add-ons.
The lock-in at this layer comes from instance type dependencies (GPU workloads optimized for specific AWS instance families), placement group configurations, and auto-scaling policies that reference AWS-specific metrics and APIs.
Data Layer Lock-In
This is where lock-in gets expensive. DynamoDB, Neptune, and Timestream are proprietary database services with no direct equivalents outside of AWS. (Aurora and DocumentDB are partial exceptions: Aurora speaks MySQL and PostgreSQL wire protocols and DocumentDB implements a subset of the MongoDB API, which eases migration, though provider-specific features like Aurora Serverless scaling do not transfer.) If your application is built on DynamoDB, migrating to another provider means rewriting your data access layer, redesigning your data model, and potentially restructuring your entire application architecture.
S3 is the exception — its API has become a de facto standard. Most cloud providers and on-premises storage solutions offer S3-compatible APIs, making object storage one of the most portable cloud services available.
Application Integration Lock-In
This is the layer that catches teams off guard. Every SQS queue, SNS topic, EventBridge rule, Step Functions workflow, and API Gateway endpoint represents a proprietary integration point. These services are convenient and well-designed, but each one adds weeks or months to a migration timeline.
Lambda functions are the most aggressive form of integration lock-in. A single Lambda function is trivial to move. But an architecture built on hundreds of Lambda functions triggered by S3 events, DynamoDB streams, API Gateway routes, and CloudWatch alarms is essentially AWS-native code that would need to be fundamentally rearchitected to run anywhere else.
Infrastructure-as-Code Lock-In
CloudFormation templates are AWS-only. If your entire infrastructure is defined in CloudFormation, that code has zero value outside of AWS. This is why many organizations choose Terraform or Ansible — tools that work across providers and give you at least partial portability of your infrastructure definitions.
IAM and Identity Lock-In
AWS IAM policies, roles, and permission boundaries are deeply AWS-specific. If your security model is expressed entirely in IAM JSON policies, migrating means rebuilding your entire access control framework from scratch. For organizations with hundreds of IAM roles and policies, this alone can take months.
How to Audit Your Lock-In Exposure
You cannot manage what you do not measure. Here is a practical framework for quantifying your organization’s lock-in exposure. The goal is not to eliminate all lock-in — that is neither realistic nor desirable. The goal is to make lock-in visible so you can make informed decisions about where to accept it and where to avoid it.
Step 1: Inventory Every Proprietary Service
Create a spreadsheet listing every AWS (or Azure/GCP) service your organization uses. For each service, note:
- The service name (e.g., DynamoDB, Lambda, SQS)
- How many resources you have deployed (tables, functions, queues)
- Which applications depend on it
- Whether a portable equivalent exists (e.g., PostgreSQL for Aurora, RabbitMQ for SQS)
Most organizations are surprised by the length of this list. A typical mid-market AWS deployment uses 15-30 distinct AWS services, many of which were adopted incrementally without a conscious lock-in assessment.
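The inventory is more useful as structured data than as a static spreadsheet, because you can query it in later steps. A minimal sketch of what Step 1 might produce (every service count, app name, and equivalent below is a hypothetical example, not a real deployment):

```python
# Sketch: a machine-readable service inventory (Step 1).
# All entries are hypothetical placeholders -- substitute your own audit results.
inventory = [
    {"service": "DynamoDB", "resources": 12, "apps": ["checkout", "catalog"],
     "portable_equivalent": "PostgreSQL (requires data-model rework)"},
    {"service": "SQS", "resources": 30, "apps": ["orders", "billing", "email"],
     "portable_equivalent": "RabbitMQ"},
    {"service": "Lambda", "resources": 140, "apps": ["checkout", "orders", "reporting"],
     "portable_equivalent": None},  # None = no direct portable equivalent
]

# A quick view of where dependencies concentrate:
for entry in sorted(inventory, key=lambda e: len(e["apps"]), reverse=True):
    print(f'{entry["service"]}: {entry["resources"]} resources, '
          f'{len(entry["apps"])} dependent app(s)')
```

Sorting by dependent applications, as above, surfaces the services that create systemic rather than isolated lock-in.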
Step 2: Classify by Portability
For each service, assign a portability rating:
- Portable (Low lock-in): Standard protocols, open APIs, direct equivalents on other platforms. Migration effort: days to weeks.
- Translatable (Medium lock-in): Proprietary but with open-source or cross-platform equivalents that require some rework. Migration effort: weeks to months.
- Captive (High lock-in): Deeply proprietary with no direct equivalent. Migration requires rearchitecting. Migration effort: months to quarters.
Step 3: Estimate Migration Cost
For each service, estimate the engineering effort (in person-weeks) required to migrate to a portable alternative. Be honest — include testing, data migration, performance validation, and the inevitable edge cases. Multiply by your fully loaded engineering cost per week.
The total is your Lock-In Liability — the dollar cost of switching providers today. For most mid-market companies with 2-3 years of AWS usage, this number lands between $500,000 and $2,000,000. That is real technical debt sitting on your books, even if nobody has written it down.
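To make the arithmetic concrete, here is a hypothetical illustration of Step 3. The per-service effort figures and the weekly cost are invented placeholders for the sketch, not benchmarks:

```python
# Sketch: computing a Lock-In Liability from per-service estimates.
# All numbers below are hypothetical -- substitute your own audit data.
FULLY_LOADED_COST_PER_WEEK = 4_000  # dollars per engineer-week (assumption)

migration_effort_weeks = {
    "DynamoDB -> PostgreSQL": 40,       # captive: data-model rewrite
    "Lambda -> containers": 60,         # captive: rearchitecting
    "SQS -> RabbitMQ": 10,              # translatable
    "CloudWatch -> Prometheus": 8,      # translatable
    "S3 -> S3-compatible storage": 2,   # portable
}

lock_in_liability = sum(migration_effort_weeks.values()) * FULLY_LOADED_COST_PER_WEEK
print(f"Lock-In Liability: ${lock_in_liability:,}")  # $480,000 for these inputs
```

Even this toy version makes the pattern visible: the two captive services account for the large majority of the liability.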
The Portability Spectrum
Not all cloud services are created equal when it comes to portability. Here is a practical ranking of common cloud services, from most portable to most locked in.
Highly Portable
| Service Type | Why It Is Portable | Migration Effort |
|---|---|---|
| Linux VMs | Standardized compute. Export image, import anywhere. | Days |
| Block storage | Disk images are portable. Snapshot and restore. | Days |
| Object storage (S3) | S3 API is a de facto standard. Most providers are compatible. | Days to weeks |
| DNS | Standard protocol. Export zone files, import elsewhere. | Hours |
| Load balancers (L4/L7) | Standard functionality. Reconfigure, not rewrite. | Days |
Moderately Portable
| Service Type | Why It Requires Work | Migration Effort |
|---|---|---|
| Managed PostgreSQL/MySQL | Standard engines but provider-specific extensions, backups, and HA configs. | Weeks |
| Kubernetes (EKS/AKS/GKE) | Core K8s is portable. Provider-specific integrations (IRSA, Workload Identity) are not. | Weeks to months |
| Message queues (SQS → RabbitMQ) | Similar concepts but different APIs and delivery guarantees. | Weeks |
| Container registries | Standard image format. Push images to new registry. | Days |
| Monitoring (CloudWatch → Prometheus) | Different query languages, alert formats, dashboard definitions. | Weeks |
Low Portability (High Lock-In)
| Service Type | Why It Is Locked In | Migration Effort |
|---|---|---|
| Serverless functions (Lambda) | Proprietary runtime, event triggers, and integrations. Rearchitecting required. | Months |
| Proprietary databases (DynamoDB, Cosmos DB) | Unique data models with no portable equivalent. Data model rewrite. | Months to quarters |
| Event systems (EventBridge, Step Functions) | Proprietary orchestration logic. No standard equivalent. | Months |
| ML/AI services (SageMaker, Bedrock) | Proprietary training pipelines, model hosting, and inference APIs. | Months |
| IAM policies and roles | Entirely provider-specific. Must be rebuilt from scratch. | Months |
| CloudFormation / ARM templates | Provider-only IaC. Zero portability. | Months |
The pattern is clear: the further up the stack you go — from infrastructure to platform to application services — the deeper the lock-in. Compute and storage are commodities. Serverless orchestration and proprietary databases are golden handcuffs.
Open Standards as a Lock-In Hedge
The most effective hedge against lock-in is not avoiding the cloud — it is choosing services built on open standards and open-source foundations. Here is what that looks like in practice.
Open APIs
OpenStack APIs, for example, are standardized across every OpenStack deployment. Code written against the OpenStack Compute API works on any OpenStack cloud — whether it is a private deployment, a managed provider, or a different vendor entirely. The same is true for the S3 API for object storage, standard SQL for databases, and SMTP for email.
When you choose a platform built on open APIs, your application code is not tied to a single vendor. Your team’s expertise transfers. Your automation scripts work elsewhere. Your integration tests do not need to be rewritten.
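To make the S3 case concrete: because the API is a de facto standard, pointing an S3 SDK at a different provider is typically a configuration change rather than a code change. A minimal sketch of that idea (the endpoint URLs are hypothetical placeholders):

```python
# Sketch: the same object-storage client code can target any S3-compatible
# endpoint. Switching providers changes configuration, not application code.
# (All endpoint URLs below are hypothetical placeholders.)

def s3_client_config(provider: str) -> dict:
    """Return the connection settings an S3 SDK such as boto3 would need."""
    endpoints = {
        "aws": None,  # None = let the SDK use AWS's default endpoints
        "minio": "https://minio.internal.example.com",
        "swift": "https://object.example.cloud",  # e.g. an OpenStack S3-compatible layer
    }
    return {"service_name": "s3", "endpoint_url": endpoints[provider]}
```

With boto3, for instance, this could feed `boto3.client(**s3_client_config("swift"), aws_access_key_id=..., aws_secret_access_key=...)`; the upload and download code that follows never has to change.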
Standard Networking
OVN (Open Virtual Network) provides software-defined networking with standard protocols — no proprietary VPC constructs, no vendor-specific security group syntax, no custom NAT implementations. If your networking is built on standard protocols, migrating means reconfiguring — not rewriting.
Portable IaC
Terraform and Ansible work across providers. If your infrastructure is defined in Terraform with provider-agnostic patterns (variables for provider-specific resources, abstraction layers for common operations), switching providers means updating provider configurations — not rewriting your entire infrastructure codebase.
Heat templates (OpenStack’s native orchestration) are also an option for organizations committed to OpenStack-based infrastructure. While Heat is OpenStack-specific, it is open source and consistent across deployments: your templates work on any OpenStack cloud, not just a single vendor’s.
Federation and Identity
SAML 2.0 and OIDC are open identity standards supported by every major identity provider. If your access management is built on federated identity (connecting your existing Active Directory, Okta, or Entra ID to your cloud platform via standard protocols), changing cloud providers does not require rebuilding your identity infrastructure.
Compare this to deeply integrated AWS IAM policies that reference specific ARNs, conditions, and resource types. Those policies are write-once, use-on-AWS-only.
When Lock-In Is Acceptable
This post is not an argument against proprietary services. It is an argument for intentional lock-in — making conscious, informed decisions about where you accept switching costs and where you avoid them.
Lock-in is acceptable when:
- The productivity gain clearly outweighs the switching cost. If Lambda saves your team 200 engineering hours per quarter in infrastructure management, and the estimated migration cost is 400 hours, you break even in two quarters. That might be a reasonable trade.
- The service has no viable open alternative. Some proprietary services genuinely do things that no open-source tool can match at the same quality level. If that capability is critical to your business, accept the lock-in and plan around it.
- You have budgeted for it. If your Lock-In Liability is a known quantity on your risk register, reviewed quarterly, and factored into vendor negotiations, it is managed risk rather than hidden debt.
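The break-even logic in the first bullet reduces to a one-line formula: payback period equals migration cost divided by per-period savings. A sketch using the Lambda numbers from the example above:

```python
def breakeven_periods(migration_cost_hours: float,
                      savings_per_period_hours: float) -> float:
    """Periods until a proprietary service's productivity gain repays its exit cost."""
    return migration_cost_hours / savings_per_period_hours

# The Lambda example above: 400 hours to migrate, 200 hours saved per quarter.
print(breakeven_periods(400, 200))  # 2.0 (quarters)
```

Any service whose payback period is short relative to your planning horizon is a defensible lock-in; one whose payback stretches to years deserves a harder look.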
Lock-in is not acceptable when:
- It happened by default. Your team chose DynamoDB because it was the first result in the AWS console, not because they evaluated the trade-offs against PostgreSQL or another portable option.
- Nobody knows the switching cost. If you cannot estimate how long and how much it would cost to leave your current provider, you have unmanaged technical debt.
- It weakens your negotiating position. When your provider knows you cannot leave, they have zero incentive to compete on price, service quality, or contract terms. This is the most expensive form of lock-in — the kind that shows up on your invoice every month.
A Decision Framework for Every New Service
Before your team adopts any new cloud service, run it through these five questions:
- Does a portable equivalent exist? If yes, default to the portable option unless the proprietary service offers a compelling advantage.
- What is the estimated migration cost if we need to leave? Quantify it in person-weeks and dollars. Add it to your Lock-In Liability tracker.
- How many applications will depend on this service? A single application using a proprietary service is manageable. Ten applications sharing a DynamoDB table create systemic lock-in.
- Can we abstract the integration? If you must use a proprietary service, can you wrap it behind an interface that isolates the rest of your application from the provider-specific implementation?
- Is the productivity gain worth the switching cost? If the answer is yes and you have quantified both sides, proceed with confidence. If you cannot answer this question, you do not have enough information to make the decision.
The Bottom Line
Vendor lock-in is not a technical curiosity — it is a financial liability. Every proprietary service in your architecture adds to a switching cost that compounds over time. Left unmanaged, it erodes your negotiating leverage, inflates your cloud spend, and limits your strategic options.
The solution is not to avoid all proprietary services. It is to make every lock-in decision intentionally, measure the cost, and bias toward open standards where the trade-off is close.
At Open Edge, we build on OpenStack because we believe your infrastructure should work for you — not hold you hostage. Standard APIs, open protocols, S3-compatible storage, SAML/OIDC identity federation, and Terraform/Ansible compatibility mean your workloads are portable from day one. If you ever want to leave, you can. That is the kind of confidence your architecture should give you.
If you are interested in understanding your organization’s lock-in exposure or exploring what a portable cloud architecture looks like, we would be happy to talk.