3.1 Project Overview


Scenario

A growing fintech company runs multiple internal systems: Payments (handles customer transactions), Analytics (runs reports and insights), and Shared Services (hosts internal APIs used by other teams). For security and compliance, each system lives in its own VPC.

However, the Payments and Analytics teams still need to call shared internal APIs without:

  • Exposing anything to the public internet
  • Managing complex VPC peering meshes
  • Opening SSH or public endpoints to backend services

The company wants a clean, private-only connectivity model where shared services are consumed over AWS PrivateLink and all traffic stays inside the AWS network.


Your Role as the Cloud Engineer

Your job is to design and implement a multi-VPC architecture where:

  • Each application is isolated in its own VPC + subnets
  • A Shared Services VPC hosts a private internal web app
  • Payments and Analytics VPCs consume that app via PrivateLink
  • No public IPs or direct internet access are required for internal traffic
  • Security Groups and routing ensure least-privilege network access

This is exactly how real enterprises expose internal “platform” services to other teams.
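To make "least-privilege network access" concrete, here is a minimal AWS CLI sketch of a Security Group for the shared app. All IDs, names, and CIDR ranges below are placeholders for illustration, not values defined by this project; substitute your own.

```shell
# Sketch only — vpc-0shared0example, sg-0app0example, and 10.0.0.0/16 are
# assumed placeholder values, not part of this project's actual config.

# Security Group for the shared internal web app (Shared Services VPC)
aws ec2 create-security-group \
  --group-name shared-app-sg \
  --description "Shared internal web app" \
  --vpc-id vpc-0shared0example

# Least privilege: allow HTTP only from inside the Shared Services VPC
# (an internal NLB forwards traffic from its private node IPs in this range),
# instead of 0.0.0.0/0
aws ec2 authorize-security-group-ingress \
  --group-id sg-0app0example \
  --protocol tcp --port 80 \
  --cidr 10.0.0.0/16
```

Note that no SSH (port 22) ingress is opened here, matching the "no SSH to backend services" goal above.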


Our Solution

You will build three VPCs (Payments, Analytics, Shared Services) and expose an internal web application in the Shared Services VPC through an internal Network Load Balancer and PrivateLink Endpoint Service.

The Payments and Analytics VPCs will connect using Interface Endpoints, so they can reach the service privately over AWS's internal network, with no VPC peering, NAT gateways, or internet gateways in the path to the app.
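The provider side of this design can be sketched with the AWS CLI as follows. Every ID and ARN shown is a placeholder you would replace with values from your own account; this is an outline of the flow, not a finished script.

```shell
# Provider-side sketch (Shared Services VPC). <TG_ARN>, <NLB_ARN>, and
# <INSTANCE_ID> are placeholders — capture them from each command's output.

# 1. Internal NLB in a Shared Services private subnet
aws elbv2 create-load-balancer \
  --name shared-services-nlb \
  --type network --scheme internal \
  --subnets subnet-0shared0private

# 2. Target group + listener forwarding TCP/80 to the app instance
aws elbv2 create-target-group \
  --name shared-app-tg --protocol TCP --port 80 \
  --vpc-id vpc-0shared0example --target-type instance
aws elbv2 register-targets \
  --target-group-arn <TG_ARN> --targets Id=<INSTANCE_ID>
aws elbv2 create-listener \
  --load-balancer-arn <NLB_ARN> --protocol TCP --port 80 \
  --default-actions Type=forward,TargetGroupArn=<TG_ARN>

# 3. Expose the NLB as a PrivateLink Endpoint Service
#    (--acceptance-required means the provider approves each consumer)
aws ec2 create-vpc-endpoint-service-configuration \
  --network-load-balancer-arns <NLB_ARN> \
  --acceptance-required
```

The `--scheme internal` flag is what keeps the NLB off the public internet; `--acceptance-required` gives the Shared Services team explicit control over which VPCs may connect.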


About the Project

In this hands-on project, you will:

  • Create three isolated VPCs with public and private subnets
  • Create Security Groups for the Shared app and client VPCs
  • Deploy a private-only EC2 instance running an internal web app in the Shared Services VPC
  • Place the instance behind an internal Network Load Balancer (NLB)
  • Create a VPC Endpoint Service for the NLB (PrivateLink provider)
  • Create Interface Endpoints in the Payments and Analytics VPCs (PrivateLink consumers)

By the end, you’ll have a production-style multi-VPC PrivateLink setup that clearly shows how internal services are shared securely across VPCs.
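As a preview of the first task in the list above, creating one of the isolated VPCs looks roughly like this with the AWS CLI. The CIDR block, tag, and AZ are illustrative assumptions, not values fixed by this project.

```shell
# Sketch of creating one client VPC (Payments); repeat per VPC with
# non-overlapping CIDRs. 10.1.0.0/16 and us-east-1a are assumed examples.
aws ec2 create-vpc \
  --cidr-block 10.1.0.0/16 \
  --tag-specifications 'ResourceType=vpc,Tags=[{Key=Name,Value=payments-vpc}]'

# Private subnet to host the client instance and its Interface Endpoint
aws ec2 create-subnet \
  --vpc-id <PAYMENTS_VPC_ID> \
  --cidr-block 10.1.1.0/24 \
  --availability-zone us-east-1a
```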


Steps To Be Performed 👩‍💻

We’ll walk through these major steps:

  1. Build three isolated VPCs (Payments, Analytics, Shared Services) with subnets and routing
  2. Create Security Groups for the Shared app and client instances
  3. Deploy the Shared Services internal web app on a private EC2 instance
  4. Place the app behind an internal NLB and create a VPC Endpoint Service
  5. Create Interface Endpoints in Payments and Analytics VPCs
  6. Approve connections and test private-only access to the internal service
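Steps 5 and 6 can be sketched from the consumer side as follows. The service name, endpoint IDs, and DNS name are placeholders in the format PrivateLink generates; substitute the real values from your Endpoint Service.

```shell
# Consumer-side sketch (repeat per client VPC). All <...> values and the
# vpce-svc-... service name are placeholders, not real identifiers.

# 1. Interface Endpoint in the Payments VPC pointing at the shared service
aws ec2 create-vpc-endpoint \
  --vpc-id <PAYMENTS_VPC_ID> \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.vpce.us-east-1.vpce-svc-0123456789abcdef0 \
  --subnet-ids <PAYMENTS_PRIVATE_SUBNET_ID> \
  --security-group-ids <PAYMENTS_CLIENT_SG_ID>

# 2. Provider approves the pending connection (since acceptance is required)
aws ec2 accept-vpc-endpoint-connections \
  --service-id vpce-svc-0123456789abcdef0 \
  --vpc-endpoint-ids <ENDPOINT_ID>

# 3. From an instance in the Payments VPC, test over the endpoint's
#    private DNS name — traffic never leaves the AWS network
curl http://<ENDPOINT_DNS_NAME>/
```

A successful `curl` response here proves the whole chain: client instance → Interface Endpoint → PrivateLink → internal NLB → private EC2 app.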

Services Used 🛠

  • Amazon VPC - Isolated networks for Payments, Analytics, and Shared Services
  • Subnets, Route Tables, Internet Gateways - Network layout and routing
  • Amazon EC2 - Internal web application in the Shared Services VPC
  • Security Groups - Fine-grained network access control
  • Elastic Load Balancing (NLB) - Internal load balancer for the shared service
  • AWS PrivateLink -
    • VPC Endpoint Service (provider - Shared Services VPC)
    • Interface Endpoints (consumers - Payments & Analytics VPCs)

Estimated Time & Cost ⚙️

  • Estimated Time: 3-4 hours
  • Cost: $0-$1

➡️ Architectural Diagram

Here is the architecture diagram:


➡️ Final Result

At the end of this project, you’ll have:

  • Three VPCs (Payments, Analytics, Shared Services) with clear isolation
  • A private internal service in the Shared Services VPC, never exposed to the internet
  • Fully working PrivateLink connectivity from client VPCs via Interface Endpoints
  • A real-world example of how enterprises share internal APIs safely across VPCs
