Week of October 23rd | SQL Squirrels

“Cause I don’t want to come back… Down from this Cloud ☁️”

Part II of a Cloud☁️ Journey

Hi All -

Happy National Mole Day! 👨‍🔬👩🏾‍🔬

As we have all learned, cloud computing☁️ empowers us to focus our time 🕰 on dreaming 😴 up and creating the next great scalable⚖️ applications. Cloud computing☁️ also frees us from worrying 😨 about infrastructure, managing and maintaining deployment environments, or agonizing 😰 over security🔒. Google evangelizes these principles stronger💪 than any other company in the world 🌎.

Google’s strategy for cloud computing☁️ is differentiated by providing open source runtime systems and a high-quality developer experience, so organizations can easily move workloads from one cloud☁️ provider to another.

Once again, this past week we continued our exploration of GCP by finishing the last two courses in the Google Cloud Certified Associate Cloud Engineer path on Pluralsight. Our ultimate goal was a better understanding of the various GCP services and features, and the ability to apply this knowledge to analyze requirements and evaluate the numerous options available in GCP. Fortunately, we gained this knowledge and a whole lot more! 😊

Guiding us through another great introduction on Elastic Google Cloud Infrastructure: Scaling and Automation were well-known friends Phillip Maier and Mylene Biddle. Then taking us the rest of the way through this amazing course was Priyanka Vergadia.

Then finally taking us down the home stretch with Architecting with Google Kubernetes Engine — Foundations (the last of this amazing series of Google Goodness 😊) were famous Googlers Evan Jones and Brice Rice. …And just to put the finishing touches 👐 on this magical 🎩💫 mystery tour, Eoin Carrol gave us an in-depth look at Google’s game changer for modernizing existing applications and building cloud-native☁️ apps anywhere with Anthos.

After a familiar introduction by Phillip and Mylene, we began delving into the comprehensive and flexible 🧘‍♀️ infrastructure and platform services provided by GCP.

“Across the clouds☁️ I see my shadow fly✈️… Out of the corner of my watering💦 eye👁

Interconnecting Networks — There are 5 ways of connecting your infrastructure to GCP:

  1. Cloud VPN
  2. Dedicated interconnect
  3. Partner interconnect
  4. Direct peering
  5. Carrier peering

Cloud VPN — securely connects your on-premises network to your GCP VPC network. To connect to your on-premises network via Cloud VPN, you configure the Cloud VPN gateway, the peer (on-premises) VPN gateway, and two VPN tunnels.

  • Useful for low-volume connections
  • 99.9% SLA

Please note: The maximum transmission unit (MTU) for your on-premises VPN gateway cannot be greater than 1460 bytes.

$gcloud compute --project "qwiklabs-gcp-02-9474b560327d" target-vpn-gateways create "vpn-1" --region "us-central1" --network "vpn-network-1"

$gcloud compute --project "qwiklabs-gcp-02-9474b560327d" forwarding-rules create "vpn-1-rule-esp" --region "us-central1" --address "" --ip-protocol "ESP" --target-vpn-gateway "vpn-1"

$gcloud compute --project "qwiklabs-gcp-02-9474b560327d" forwarding-rules create "vpn-1-rule-udp500" --region "us-central1" --address "" --ip-protocol "UDP" --ports "500" --target-vpn-gateway "vpn-1"

$gcloud compute --project "qwiklabs-gcp-02-9474b560327d" forwarding-rules create "vpn-1-rule-udp4500" --region "us-central1" --address "" --ip-protocol "UDP" --ports "4500" --target-vpn-gateway "vpn-1"

$gcloud compute --project "qwiklabs-gcp-02-9474b560327d" vpn-tunnels create "tunnel1to2" --region "us-central1" --ike-version "2" --target-vpn-gateway "vpn-1"

$gcloud compute --project "qwiklabs-gcp-02-9474b560327d" vpn-tunnels create "vpn-1-tunnel-2" --region "us-central1" --peer-address "" --shared-secret "GCP rocks" --ike-version "2" --local-traffic-selector "" --target-vpn-gateway "vpn-1"

Cloud Interconnect and Peering

Dedicated connections provide a direct connection to Google’s network, while shared connections provide a connection to Google’s network through a partner.

Comparison of Interconnect Options Peering

Direct Peering provides a direct connection between your business network and Google.

  • Broad-reaching edge network locations
  • Capacity 10 Gbps/link
  • Exchange BGP routes
  • Reach all of Google’s services
  • Peering requirement (Connection in GCP Pops)
  • Access Type: Public IP Addresses

Carrier Peering provides connectivity through a supported partner

  • Carrier Peering partner
  • Capacity varies based on parent offering
  • Reach all of Google’s services
  • Partner requirements
  • No SLA
  • Access Type: Public IP Addresses

Shared VPC and VPC Peering

Shared VPC allows an organization to connect resources from multiple projects to a common VPC network.

VPC Peering is a decentralized or distributed approach to multi project networking because each VPC network may remain under the control of separate administrator groups and maintains its own global firewall and routing tables.

Cloud Load Balancing 🏋️‍♂️ — distributes user traffic 🚦across multiple instances of your applications. By spreading the load, load balancing 🏋️‍♂️ reduces the risk that your applications experience performance issues. There are 2 basic categories of Load balancers: Global load balancing 🏋️‍♂️ and Regional load balancing 🏋️‍♂️.

Global load balancers — when workloads are distributed across the world 🌎, global load balancers route traffic🚦 to a backend service in the region closest to the user to reduce latency. They are software-defined distributed systems built on the Google Front End (GFE), which resides in Google’s PoPs and is distributed globally.

Types of Global Load Balancers

Regional load balancers — when all workloads are in the same region, regional load balancing 🏋️‍♂️ routes traffic🚦 within a given region.

Regional load balancing 🏋️‍♂️ uses internal and network load balancers. Internal load balancers are software-defined distributed systems (using Andromeda), while network load balancers use the Maglev distributed system.

Managed instance groups — a collection of identical virtual machine instances that you control as a single entity. (Same as creating a VM, but applying specific rules to an instance group.)

Regional managed instance groups are usually recommended over zonal managed instance groups because they allow you to spread the application’s load across multiple zones through replication and protect against zonal failures.

Steps to create a Managed Instance Group:

  1. Decide the location and whether the instance group will be single- or multi-zone
  2. Choose the ports you are going to allow load balancing🏋️‍♂️ across
  3. Select an instance template
  4. Decide on autoscaling⚖️ and the criteria for its use
  5. Create a health check to determine instance health and how traffic🚦 should route
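The steps above can be sketched with gcloud. This is a minimal, illustrative sequence — the names (`web-template`, `web-group`, `web-hc`) and the region are hypothetical, not from the course:

```shell
# 1-3. Create an instance template, then a regional managed instance group from it.
gcloud compute instance-templates create web-template \
    --machine-type=e2-medium --tags=http-server

gcloud compute instance-groups managed create web-group \
    --template=web-template --size=2 --region=us-central1

# 4. Enable autoscaling based on CPU utilization.
gcloud compute instance-groups managed set-autoscaling web-group \
    --region=us-central1 --min-num-replicas=2 --max-num-replicas=10 \
    --target-cpu-utilization=0.75

# 5. Create a health check and attach it for autohealing.
gcloud compute health-checks create http web-hc --port=80
gcloud compute instance-groups managed update web-group \
    --region=us-central1 --health-check=web-hc --initial-delay=300
```

Using `--region` (rather than `--zone`) is what makes this a regional managed instance group, spreading instances across the region’s zones.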

Autoscaling and health checks

Managed instance groups offer autoscaling⚖️ capabilities

HTTP/HTTPS load balancing

  • Global load balancing🏋️‍♂️
  • Anycast IP address
  • HTTP on port 80 or 8080
  • HTTPS on port 443
  • IPv4 or IPv6
  • Autoscaling⚖️
  • URL maps 🗺

Backend Services

  • Health check
  • Session affinity (Optional)
  • Time out setting (30-sec default)
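Wiring an HTTP load balancer together looks roughly like this with gcloud — a sketch only, assuming a managed instance group named `web-group` already exists (all other names are illustrative):

```shell
# Health check and backend service (30s is the default timeout, shown explicitly).
gcloud compute health-checks create http web-hc --port=80
gcloud compute backend-services create web-backend \
    --protocol=HTTP --health-checks=web-hc --timeout=30s --global
gcloud compute backend-services add-backend web-backend \
    --instance-group=web-group --instance-group-region=us-central1 --global

# URL map, target proxy, and the global forwarding rule (anycast IP).
gcloud compute url-maps create web-map --default-service=web-backend
gcloud compute target-http-proxies create web-proxy --url-map=web-map
gcloud compute forwarding-rules create web-rule \
    --global --target-http-proxy=web-proxy --ports=80
```

The URL map 🗺 is where you would later add path-based rules to send different URLs to different backend services.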

SSL certificates

SSL proxy load balancing — a global load balancing service for encrypted non-HTTP traffic.

  • Global load balancing for encrypted non-HTTP traffic 🚦
  • Terminates the SSL session at the load balancing🏋️‍♂️ layer
  • IPv4 or IPv6 clients

TCP proxy load balancing — a global load balancing service for unencrypted non-HTTP traffic.

  • Global load balancing for unencrypted non-HTTP traffic 🚦
  • Terminates the TCP session at the load balancing🏋️‍♂️ layer
  • IPv4 or IPv6 clients

Network load balancing — a regional, non-proxied load balancing service.

  • Regional, non-proxied load balancer
  • Forwarding rules (IP protocol data)
  • Backends:

Internal load balancing — a regional, private load balancing service for TCP- and UDP-based traffic🚦.

“..clouds☁️ roll by reeling is what they say … or is it just my way?”

Infrastructure Automation — Infrastructure as code (IaC)

Automate repeatable tasks like provisioning, configuration, and deployments for one machine or millions.

Deployment Manager — an infrastructure deployment service that automates the creation and management of GCP resources. By defining templates, you only have to specify the resources once and then you can reuse them whenever you want.

Deployment Manager creates all the resources in parallel.

It’s recommended that you provision and manage resources on GCP with the tools🛠 you are already familiar with.
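Here’s a minimal Deployment Manager sketch — write a config once, then create (or later update) a deployment from it. The project, zone, and resource names are all illustrative:

```shell
# Define a single-VM deployment in YAML (names and project are hypothetical).
cat > vm.yaml <<'EOF'
resources:
- name: my-vm
  type: compute.v1.instance
  properties:
    zone: us-central1-a
    machineType: https://www.googleapis.com/compute/v1/projects/my-project/zones/us-central1-a/machineTypes/e2-micro
    disks:
    - deviceName: boot
      type: PERSISTENT
      boot: true
      autoDelete: true
      initializeParams:
        sourceImage: https://www.googleapis.com/compute/v1/projects/debian-cloud/global/images/family/debian-11
    networkInterfaces:
    - network: global/networks/default
EOF

# Create the deployment, and reuse the same config for later updates.
gcloud deployment-manager deployments create my-deployment --config vm.yaml
gcloud deployment-manager deployments update my-deployment --config vm.yaml
```

Because the config is declarative, rerunning it describes the desired end state rather than a sequence of imperative steps.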

Managed Services — automates common activities, such as change requests, monitoring 🎛, patch management, security🔒, and backup services, and provides full lifecycle🔄 services to provision, run🏃‍♂️, and support your infrastructure.

BigQuery is GCP’s serverless, highly scalable⚖️, and cost-effective cloud data warehouse.

Cloud Dataflow executes a wide variety of data processing patterns

Cloud Dataprep — visually explore, clean, and prepare data for analysis and machine learning

  • Serverless, works at any scale⚖️
  • Suggest ideal analysis
  • Integrated partner service operated by Trifacta

Cloud Dataproc — is a service for running Apache Spark 💥 and Apache Hadoop 🐘 clusters

“Captain Jack will get you by tonight🌃 … Just a little push, and you’ll be smilin’ 😊 “

Architecting with Google Kubernetes Engine — Foundations

After a stellar🌟 job by Priyanka taking us through the load balancing 🏋️‍♂️ options, infrastructure as code, and some of the managed service options in GCP, it was time to take the helm⛵️ and get our K8s☸️ hat 🧢 on.

Cloud Computing and Google Cloud

Just to whet our appetite for cloud computing, Evan takes us through its 5 fundamental attributes:

  1. On-demand self-services (No🚫 human intervention needed to get resources)
  2. Broad network access (Access from anywhere)
  3. Resource Pooling🏊‍♂️ (Provider shares resources to customers)
  4. Rapid Elasticity (Get more resources quickly as needed)
  5. Measured Service (Pay only for what you consume)

Next, Evan introduces some of the GCP services under Compute, like Compute Engine, Google Kubernetes Engine (GKE), App Engine, and Cloud Functions. Then he discusses some of Google’s managed services for Storage, Big Data, and Machine Learning.

Resource Management Network

  • GCP provides resources in multi-regions, regions, and zones.
  • GCP divides the world🌎 up into 3 multi-regional areas: the Americas, Europe, and Asia Pacific.
  • These multi-regional areas are divided into regions, which are independent geographic areas on the same continent.
  • Regions are divided into zones (like a data center), which are deployment areas for GCP resources.

The network interconnects with the public internet at more than 90 internet exchanges and more than 100 points of presence worldwide🌎.

Billing — How to keep billing under control:

  1. Budgets and alerts 🔔
  2. Billing export 🧾
  3. Reports 📊

GCP implements quotas, which limit unforeseen extra billing charges. Quotas are designed to prevent the over-consumption of resources because of an error or a malicious attack.

Interacting with GCP — There are 4 ways to interact with GCP: the Cloud Console, Cloud Shell and the Cloud SDK, the APIs, and the Cloud Mobile App.

Cloud Console:

  • Web-based GUI to manage all Google Cloud resources
  • Executes common tasks using simple mouse clicks
  • Provides visibility into Google Cloud projects and resources

Cloud Shell:

  • Temporary Compute Engine VM
  • Command-line access to the instance through a browser
  • 5 GB of persistent disk storage ($HOME dir)
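A few first commands you might run from Cloud Shell (or any machine with the Cloud SDK) — the project ID here is a placeholder:

```shell
gcloud config set project my-project-id   # point the SDK at your project
gcloud config list                        # confirm the active account and project
gcloud compute instances list             # the same data the Console shows, from the CLI
```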

“Cloud☁️ hands🤲 reaching from a rainbow🌈 tapping at the window, touch your hair” Introduction to Containers

Next, Evan takes us through the history of computing, first starting with deploying applications on physical servers. This solution wasted resources and took a lot of time to deploy, maintain, and scale. It also wasn’t very portable: applications were built for a specific operating system, and sometimes even for specific hardware as well.

Next transitioning to Virtualization. Virtualization makes it possible to run multiple virtual servers and operating systems on the same physical computer. A hypervisor is the software layer that removes the dependencies of an operating system with its underlying hardware. It allows several virtual machines to share that same hardware.

Finally, Evan introduces us to containers, as they solve a lot of the shortcomings of virtualization.

Containers are isolated user spaces for running application code. Containers are lightweight, as they don’t carry a full operating system. They can be scheduled or packed tightly onto the underlying system, which makes them very efficient.

Containerization is the next step in the evolution of managing code.

Benefits of Containers:

  • Containers appeal to developers
  • Deliver high performing and scalable applications.
  • Containers run the same anywhere
  • Containers make it easier to build applications that use Microservices design pattern

Containers and Container Images

An image is an application and its dependencies.

A container is simply a running instance of an image.

Docker is an open source technology that allows you to create and run applications in containers, but it doesn’t offer a way to orchestrate those applications at scale.

How to get containers?

  • Download containerized software from a container registry gcr.io
  • Docker — Build your own container using the open-source docker command
  • Build your own container using Cloud Build
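The last two options above look roughly like this on the command line — a sketch, with a hypothetical project (`my-project`) and image name, and assuming a Dockerfile in the current directory:

```shell
# Build locally with Docker, then push to Container Registry.
docker build -t gcr.io/my-project/my-app:v1 .
docker push gcr.io/my-project/my-app:v1

# Or let Cloud Build do the build and push in one step.
gcloud builds submit --tag gcr.io/my-project/my-app:v1 .
```

Cloud Build is handy when you don’t want Docker installed locally — the build runs on Google’s infrastructure and the image lands in the registry automatically.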

Introduction to Kubernetes ☸️

Kubernetes ☸️ is an open source platform that helps you orchestrate and manage your container infrastructure on premises or in the cloud☁️.

It’s a container centric management environment. Google originated it and then donated it to the open source community.

K8s ☸️ automates the deployment, scaling⚖️, load balancing🏋️‍♂️, logging, monitoring 🎛 and other management features of containerized applications.

K8s ☸️ features:

Kubernetes also supports workload portability across on-premises or multiple cloud service providers. This allows Kubernetes to be deployed anywhere. You can move Kubernetes☸️ workloads freely without vendor lock-in🔒.

Google Kubernetes Engine (GKE)

GKE easily deploys, manages and scales⚖️ Kubernetes environments for your containerized applications on GCP.

GKE Features:

  • Fully managed
  • Container-optimized OS
  • Auto upgrade
  • Auto repair🛠
  • Cluster Scaling⚖️
  • Seamless Integration
  • Identity and access management (IAM)
  • Integrated logging and monitoring (Stackdriver)
  • Integrated networking
  • Cloud Console

Compute Options Detail

Compute Engine

Use Cases

App Engine

Use Cases

Google Kubernetes Engine

  • Fully managed Kubernetes Platform
  • Supports cluster scaling⚖️, persistent disk, automated upgrades, and auto node repairs
  • Built-in integration with GCP

Use Cases

Cloud Run

Use Cases

Cloud Functions

Use Cases

There are two related concepts in understanding how K8s☸️ works: the object model and the principle of declarative management.

Pods — the basic building block of K8s☸️

Principle of declarative management — you declare objects representing the desired state, such as the containers you want running.

  • K8s☸️ creates and maintains one or more objects.
  • K8s☸️ compares the desired state to the current state.
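Declarative management in practice looks like this — a minimal sketch declaring a single Pod (the Pod name is illustrative):

```shell
# Declare the desired state and let Kubernetes converge to it.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo
spec:
  containers:
  - name: nginx
    image: nginx:1.25
EOF

# Inspect the current state that Kubernetes is reconciling toward the declaration.
kubectl get pod nginx-demo
```

You never tell Kubernetes *how* to start the container — you only describe the end state, and the control plane does the comparing and remedying.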

The Kubernetes Control Plane — continuously monitors the state of the cluster, endlessly comparing reality to what has been declared and remedying the state as needed.

K8s☸️ Cluster consists of a Master and Nodes

The master’s role is to coordinate the entire cluster.

  • View or change the state of the cluster, including launching pods
  • kube-apiserver — the single component that you interact with for the cluster
  • etcd — key-value store for the most critical data of a distributed system
  • kube-scheduler — assigns Pods to Nodes
  • kube-cloud-manager — embeds cloud-specific control logic
  • kube-controller-manager — daemon that embeds the core control loops

Nodes run pods.

  • kubelet is the primary “node agent” that runs on each node.
  • kube-proxy is a network proxy that runs on each node in your cluster

Google Kubernetes Engine Concepts

GKE makes administration of K8s☸️ much simpler

Zonal Cluster — has a single control plane in a single zone.

  • single-zone cluster has a single control plane running in one zone
  • multi-zonal cluster has a single replica of the control plane running in a single zone, and has nodes running in multiple zones.

Regional Cluster — has multiple replicas of the control plane, running in multiple zones within a given region.

Private Cluster — provides the ability to isolate nodes from having inbound and outbound connectivity to the public internet.
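The zonal vs. regional distinction shows up directly in the create command — a sketch with illustrative cluster names and locations:

```shell
# Zonal cluster: a single control plane in one zone.
gcloud container clusters create zonal-cluster --zone us-central1-a

# Regional cluster: control plane replicas across the region's zones.
gcloud container clusters create regional-cluster --region us-central1

# Fetch credentials so kubectl talks to the new cluster.
gcloud container clusters get-credentials regional-cluster --region us-central1
```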

Kubernetes Object Management — each object is identified by a unique name and a unique identifier.

Pods and Controller Objects

Pods have a life cycle

  • Controller Object types
  • Allocating resource quotas
  • Namespaces — provide scope for naming resources (pods, deployments and controllers.)

There are 3 initial namespaces in the cluster.

  1. default — the namespace for objects with no other namespace defined.
  2. kube-system — the namespace for objects created by the Kubernetes system itself.
  3. kube-public — the namespace for objects that are publicly readable to all users.

Best practice tip: namespace neutral YAML
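Namespace-neutral YAML means leaving `namespace:` out of the manifest and supplying it at apply time, so the same file works everywhere. A sketch (namespace and file names are illustrative):

```shell
kubectl create namespace staging
kubectl apply -f deployment.yaml -n staging      # same YAML, staging namespace
kubectl apply -f deployment.yaml -n production   # same YAML, production namespace
```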

Advanced K8s☸️ Objects



Controller Objects

“Can I get an encore; do you want more?”

Migrate for Anthos — tool🛠 for getting workloads into containerized deployments on GCP

Migrate for Anthos moves VMs to containers

Migrate for Anthos Architecture

  • A migration requires an architecture to be built
  • A migration is a multi-step process
  • Configure processing cluster
  • Add migration source
  • Generate and review plan
  • Generate artifacts
  • Test
  • Deploy

Migrate for Anthos Installation — requires a processing cluster

Installing Migrate for Anthos uses migctl

$migctl setup install

Adding a source enables migration from a specific environment

$migctl source create ce my-ce-src --project my-project --zone zone

Creating a migration generates a migration plan

$migctl migration create test-migration --source my-ce-src --vm-id my-id --intent image

Executing a migration generates resources and artifacts

$migctl migration generate-artifacts my-migration

Deployment files typically need modification

$migctl migration get-artifacts test-migration

Apply the configuration to deploy the workload

$kubectl apply -f deployment_spec.yaml

“And we’ll bask 🌞 in the shadow of yesterday’s triumph🏆 And sail⛵️ on the steel breeze🌬

Below are some of the destinations I am considering for my travels for next week:

Thanks -


Originally published at https://sqlsquirrels.com on October 23, 2020.




A Passionate Technologist. Blogging about my journey in learning exciting technologies

Mark Shay
