Cloud Native Reference Architecture on GCP
Imagine a modern e-commerce application running in a cloud-native way on GCP. The architecture looks something like this:
Architecture Overview
- Frontend (React app) → Deployed on Cloud Run for serverless hosting.
- Backend microservices → Deployed on Google Kubernetes Engine (GKE).
- Example services: cart-service, payment-service, order-service.
- Service Mesh → Anthos Service Mesh (Istio) for observability, retries, and service-to-service security.
- Messaging → Pub/Sub for asynchronous event delivery (e.g., order placed → inventory update); a sketch of the topic and subscription follows this list.
- Database → Cloud Spanner for global transactional consistency.
- CI/CD → Cloud Build + Cloud Deploy for continuous integration and deployment.
- Monitoring & Logging → Cloud Operations Suite (formerly Stackdriver) integrated with GKE and Cloud Run.
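To make the messaging piece concrete, here is a minimal sketch of the "order placed" topic and the inventory subscription, written as Config Connector resources so they can live alongside the other Kubernetes manifests. The resource names (order-events, inventory-sub), the ack deadline, and the assumption that Config Connector is installed in the cluster are all illustrative, not part of the reference architecture itself:
apiVersion: pubsub.cnrm.cloud.google.com/v1beta1
kind: PubSubTopic
metadata:
  name: order-events        # hypothetical topic for "order placed" events
---
apiVersion: pubsub.cnrm.cloud.google.com/v1beta1
kind: PubSubSubscription
metadata:
  name: inventory-sub       # hypothetical subscription consumed by the inventory updater
spec:
  topicRef:
    name: order-events
  ackDeadlineSeconds: 20    # illustrative value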
Example: Kubernetes Deployment on GKE
Here’s a sample YAML manifest for deploying a cart service to GKE:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cart-service
  labels:
    app: cart
spec:
  replicas: 3
  selector:
    matchLabels:
      app: cart
  template:
    metadata:
      labels:
        app: cart
    spec:
      containers:
      - name: cart-container
        image: gcr.io/my-project/cart-service:latest
        ports:
        - containerPort: 8080
        resources:
          requests:
            cpu: "250m"
            memory: "256Mi"
          limits:
            cpu: "500m"
            memory: "512Mi"
---
apiVersion: v1
kind: Service
metadata:
  name: cart-service
spec:
  selector:
    app: cart
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
  type: ClusterIP
This YAML:
- Deploys the cart-service with 3 replicas.
- Exposes it internally using a ClusterIP service.
- Sets resource requests and limits so scheduling and autoscaling behave predictably.
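The replica count above is fixed at three. To let the Deployment scale with load instead, a HorizontalPodAutoscaler can be layered on top. The following is a minimal sketch; the CPU target and replica bounds are chosen purely for illustration:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: cart-service
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: cart-service
  minReplicas: 3              # keep the baseline from the Deployment
  maxReplicas: 10             # illustrative upper bound
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # illustrative CPU target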
Example: Cloud Run Deployment (Serverless Backend)
For lightweight services or APIs, Cloud Run makes deployment even simpler. Here’s a sample Knative-style service manifest for Cloud Run:
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: payment-service
  namespace: default
spec:
  template:
    spec:
      containers:
      - image: gcr.io/my-project/payment-service:latest
        resources:
          limits:
            cpu: "1"
            memory: "512Mi"
      containerConcurrency: 80
      timeoutSeconds: 300
This YAML:
- Deploys the payment-service as a container to Cloud Run.
- Scales automatically based on requests (down to zero).
- Configures concurrency and timeout.
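If you want to bound that autoscaling behavior, for example to keep a warm instance on the payment path, Cloud Run supports minimum and maximum instance settings via the Knative autoscaling annotations on the revision template. A minimal sketch; the instance counts are assumptions, not recommendations:
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: payment-service
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/minScale: "1"    # keep one warm instance (illustrative)
        autoscaling.knative.dev/maxScale: "20"   # cap scale-out (illustrative)
    spec:
      containers:
      - image: gcr.io/my-project/payment-service:latest
Setting a minimum scale trades a small amount of idle cost for lower latency on the first request after a quiet period.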
Adding a Service Mesh with Anthos
With Anthos Service Mesh (Istio), you gain advanced features like mutual TLS (mTLS), retries, and observability. Here’s an example Istio VirtualService YAML for routing traffic:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: cart-routing
spec:
  hosts:
  - cart-service
  http:
  - route:
    - destination:
        host: cart-service
        subset: v1
      weight: 90
    - destination:
        host: cart-service
        subset: v2
      weight: 10
This YAML:
- Routes 90% of traffic to v1 and 10% to v2 of the cart-service.
- Enables canary deployments with zero downtime.
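The v1 and v2 subsets only work if a DestinationRule defines them, typically by matching a version label on the pods. Here is a minimal sketch, assuming the cart pods carry version: v1 and version: v2 labels, together with a PeerAuthentication policy that enforces the mTLS mentioned above:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: cart-destination
spec:
  host: cart-service
  subsets:
  - name: v1
    labels:
      version: v1     # assumes pods are labeled version: v1
  - name: v2
    labels:
      version: v2     # assumes pods are labeled version: v2
---
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: default
spec:
  mtls:
    mode: STRICT      # require mTLS for all workloads in the namespace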
CI/CD with Cloud Build
To automate deployments, you can use Cloud Build with a cloudbuild.yaml:
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/cart-service:$COMMIT_SHA', '.']
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/$PROJECT_ID/cart-service:$COMMIT_SHA']
- name: 'gcr.io/cloud-builders/kubectl'
  args: ['apply', '-f', 'k8s/deployment.yaml']
  env:
  - 'CLOUDSDK_COMPUTE_REGION=us-central1'
  - 'CLOUDSDK_CONTAINER_CLUSTER=my-gke-cluster'
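This pipeline builds the image, pushes it to the registry, and applies the Kubernetes manifests to the GKE cluster. If you prefer Cloud Deploy (mentioned in the architecture above) to own the rollout instead of the kubectl step, a delivery pipeline can be declared separately. The following is a minimal sketch in which the pipeline, stage, and target names are illustrative:
apiVersion: deploy.cloud.google.com/v1
kind: DeliveryPipeline
metadata:
  name: cart-service-pipeline
serialPipeline:
  stages:
  - targetId: staging     # hypothetical staging target
  - targetId: prod        # hypothetical production target
---
apiVersion: deploy.cloud.google.com/v1
kind: Target
metadata:
  name: prod
gke:
  cluster: projects/my-project/locations/us-central1/clusters/my-gke-cluster
With this in place, the final Cloud Build step would create a Cloud Deploy release (for example with gcloud deploy releases create) rather than running kubectl apply directly.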
Benefits of This Approach
- Portability – Workloads run in containers, portable across environments.
- Scalability – Automatic scaling in GKE and Cloud Run.
- Resilience – Self-healing workloads and retries via Istio.
- Faster Delivery – Automated pipelines with Cloud Build + Cloud Deploy.
- Global Reach – Backed by Google’s networking and databases like Spanner.
Conclusion
Cloud-native architecture on GCP provides everything you need to build modern, scalable, and resilient applications. By combining GKE, Cloud Run, Anthos Service Mesh, and Cloud Build, teams can confidently deliver production-grade systems with agility.
