## TL;DR
Migrating off AWS to an offshore host is feasible for most non-AWS-specific workloads. Map AWS services to their self-hosted or third-party equivalents, plan egress (the expensive part), and pick a replacement per service:
| AWS workload | Recommended offshore replacement |
|---|---|
| EC2 (general compute) | BuyVM Luxembourg or HostHatch IS |
| EC2 + multi-region | FlokiNET (IS / RO / FI / NL) |
| RDS (managed Postgres / MySQL) | Self-host Postgres on HostHatch |
| S3 (object storage) | BuyVM Block Storage Slabs, MinIO self-hosted |
| CloudFront (CDN) | BunnyCDN, self-hosted on a second VPS, or skip CDN |
| Route 53 (DNS) | Njalla DNS, deSEC, Bunny DNS |
| Lambda | Long-running containers, or a self-hosted Workers-style runtime on Bun / Deno |
| SES (email) | Self-host with Mail-in-a-Box on offshore VPS |
| ACM (certificates) | Let’s Encrypt — free, no provider lock-in |
Total migration time for a small production workload (1-10 services): 1-2 weeks if you have docker-compose-level packaging. Longer if your stack is AWS-specific (Lambda, DynamoDB, etc.).
## Why migrate at all
The reasons to leave AWS for an offshore host fall into three buckets:
- DMCA / takedown exposure — AWS acts on DMCA notices. If your content attracts takedowns and you don’t want to play the counter-notice game (which exposes your real identity), you need to leave US infrastructure entirely.
- Real-name signup + KYC — AWS requires a real-name billing relationship with a payment method tied to your identity. For operators where signup anonymity matters, AWS is a non-starter.
- Cost — for small-to-mid workloads, AWS is dramatically more expensive than offshore VPS providers. A $5/mo VPS at HostHatch covers what a $40-80/mo EC2 instance does.
## Pre-migration checklist
Before you start moving anything:
- Inventory: list every AWS service you use, with rough storage / compute / bandwidth numbers.
- Identify AWS-specific features you depend on: IAM policies, VPC peering, ALB integration, Lambda triggers, DynamoDB queries, SQS/SNS, CloudWatch. Each of these is a planning unit.
- Estimate egress cost: AWS charges $0.05-0.09/GB outbound. If you have 1 TB to move, that’s $50-90 in egress alone.
- Plan downtime: most migrations need at least a maintenance window. DNS TTLs of 60-300 seconds during the cutover help.
- Decide on the new stack: monolith on one VPS, or multi-VPS with internal networking? Most small ops do fine with the former.
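The egress line item in the checklist is simple arithmetic; a quick sketch (the data volume is the checklist's 1 TB example, not your bill):

```shell
# Back-of-envelope egress estimate. DATA_GB is a hypothetical figure —
# plug in your own inventory number. AWS outbound runs roughly
# $0.05-0.09/GB depending on tier and region.
DATA_GB=1000                      # ~1 TB to move
LOW_CENTS=$(( DATA_GB * 5 ))      # $0.05/GB, in cents
HIGH_CENTS=$(( DATA_GB * 9 ))     # $0.09/GB, in cents
echo "Estimated egress: \$$(( LOW_CENTS / 100 ))-\$$(( HIGH_CENTS / 100 ))"
```

At 1 TB this lands on the $50-90 range quoted above.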
## Step-by-step
### 1. Provision the target
Pick a provider per the decision framework. For a typical migration:
- General compute: 1-3 VPS instances at BuyVM Luxembourg, HostHatch IS, or FlokiNET RO. Spec roughly 2x what your current EC2 t3-class workload uses (KVM is more honest than burst-capped EC2).
- Persistent storage: BuyVM Block Storage Slabs (~$0.005/GB) or HostHatch storage VPS for big disks.
- Domain: if you also want to move the domain off Route 53, use Njalla (which holds the registration on your behalf, keeping your name out of WHOIS) or 1984 Hosting (a conventional ICANN-accredited registrar).
### 2. Replicate your stack
Most AWS workloads come down to:
- Docker-compose-style services (web, app, db, cache): trivially portable. Run docker-compose on the new VPS.
- Static assets: pull them out of S3 with `aws s3 sync` or rclone onto the new disk. Keeping S3 around temporarily as cold storage is fine.
- Database: pg_dump / mysqldump from RDS, restore onto new self-hosted Postgres / MySQL. For zero-downtime: set up logical replication, switch over once primaries are caught up.
- Mail: deploy Mail-in-a-Box or Mailcow on a fresh VPS in your target jurisdiction. Verify rDNS / SPF / DKIM / DMARC before cutover.
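The database and asset steps above can be staged as a reviewable script rather than run blind. Everything below is a hypothetical sketch — hostnames, users, database names, and the bucket are placeholders, and it assumes the Postgres client tools and the AWS CLI are installed:

```shell
# Write the migration commands to a file so they can be reviewed
# before execution. All identifiers are hypothetical placeholders.
cat <<'EOF' > migrate-data.sh
#!/bin/sh
set -e
# 1. Custom-format dump straight from RDS (Postgres client tools):
pg_dump -h mydb.abc123.eu-west-1.rds.amazonaws.com -U appuser \
  -Fc -f appdb.dump appdb
# 2. Restore onto the self-hosted Postgres on the new VPS
#    (203.0.113.10 is a documentation-range stand-in IP):
pg_restore -h 203.0.113.10 -U appuser -d appdb --no-owner appdb.dump
# 3. Pull static assets out of S3 onto the new disk:
aws s3 sync s3://my-assets-bucket /srv/assets
EOF
chmod +x migrate-data.sh
```

For zero-downtime cutovers, replace the dump/restore pair with logical replication as described above; the asset sync step stays the same.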
### 3. DNS cutover
- Set TTLs of relevant records to 60-300 seconds 24 hours before cutover.
- Verify the new infrastructure is fully operational (run end-to-end smoke tests).
- Switch DNS records to the new IP. Wait for propagation (with low TTL: ~5 minutes).
- Monitor logs on the new infra; if traffic shows up correctly, you’re done.
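The propagation wait in the steps above can be automated with a small poll loop. A minimal sketch, assuming `dig` is available; the domain and IP are hypothetical placeholders:

```shell
# Poll DNS until the A record points at the new VPS.
DOMAIN="example.com"
NEW_IP="203.0.113.10"   # documentation-range stand-in IP

check_dns() {
  # Succeeds once the first A record matches the target IP.
  resolved=$(dig +short "$1" A | head -n1)
  [ "$resolved" = "$2" ]
}

# Commented out — needs live DNS after you flip the record:
# until check_dns "$DOMAIN" "$NEW_IP"; do sleep 30; done
echo "will poll $DOMAIN until it resolves to $NEW_IP"
```

With a 60-second TTL set the day before, this loop typically exits within a few minutes of the record change.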
### 4. Decommission AWS
- Wait at least 7 days after cutover before deleting AWS resources (in case you need to roll back).
- Snapshot anything you might need to reference (RDS final dumps, S3 bucket inventory, IAM config in case of future audit).
- Tear down in dependency order: EC2 → RDS → S3 → CloudFront → Route 53 → IAM users.
- Close the AWS account itself only after the final invoice has been paid.
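The snapshot step can be staged the same way — write the archival commands to a script and review before running. Instance and bucket identifiers below are hypothetical:

```shell
# Archival commands for the pre-teardown snapshot step, written to a
# file for review. All resource names are hypothetical placeholders.
cat <<'EOF' > archive-aws.sh
#!/bin/sh
set -e
# Final RDS snapshot before teardown:
aws rds create-db-snapshot \
  --db-instance-identifier appdb \
  --db-snapshot-identifier appdb-final-$(date +%Y%m%d)
# Local copy of the S3 bucket:
aws s3 sync s3://my-assets-bucket ./s3-final-copy
# IAM configuration dump, in case of a future audit:
aws iam get-account-authorization-details > iam-snapshot.json
EOF
chmod +x archive-aws.sh
```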
## Common gotchas
- CloudFront → no equivalent: most offshore providers don’t have a global CDN. If you need one, BunnyCDN is the closest privacy-aligned option (multi-PoP, accepts crypto).
- Lambda → containers: there’s no exact Lambda replacement. Most Lambda workloads can be re-implemented as long-running containers; if you really need event-driven serverless, consider Cloudflare Workers (with the Cloudflare caveats).
- DynamoDB → Postgres: most DynamoDB workloads work fine on Postgres with JSONB. The migration may require code changes.
- IAM → manual user management: offshore providers don’t have IAM equivalents. SSH keys + sudoers files are how access is managed.
- CloudWatch → self-hosted observability: deploy Grafana + Prometheus + Loki on the new VPS, or use Better Uptime / Healthchecks.io for basic monitoring.
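The DynamoDB-to-Postgres point above is mostly a schema question: a single-table DynamoDB layout maps onto a `(pk, sk, jsonb)` table. A minimal sketch, with hypothetical table and column names, written to a file for review:

```shell
# DynamoDB single-table design mapped onto Postgres JSONB.
# Table and column names are hypothetical.
cat <<'SQL' > dynamodb-to-jsonb.sql
CREATE TABLE items (
  pk  text  NOT NULL,   -- partition key
  sk  text  NOT NULL,   -- sort key
  doc jsonb NOT NULL,   -- the item body
  PRIMARY KEY (pk, sk)
);
-- DynamoDB Query(pk) becomes an ordinary index-range scan:
--   SELECT doc FROM items WHERE pk = $1 ORDER BY sk;
-- A GIN index covers most ad-hoc attribute lookups that would
-- otherwise need a DynamoDB secondary index:
CREATE INDEX items_doc_gin ON items USING gin (doc);
SQL
echo "wrote dynamodb-to-jsonb.sql"
```

This is where the "may require code changes" caveat bites: conditional writes and transactions translate directly, but access patterns built around DynamoDB GSIs need a query-by-query review.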
## Cost comparison (small-to-mid workload)
For a typical small SaaS (1 web tier, 1 db tier, ~50 GB data, ~500 GB egress/mo):
| Provider | Approximate monthly cost |
|---|---|
| AWS (t3.medium + db.t3.micro + S3 + CloudFront) | ~$80-120 |
| BuyVM Luxembourg (2 Slices + Block Storage) | ~$15-20 |
| HostHatch IS (similar spec) | ~$15-25 |
| FlokiNET (premium offshore) | ~$25-40 |
The cost gap pays for the migration effort within 1-3 months for most operators.