
The cloud that runs on fast Google Fiber and Big AI, from the folks who gave you Kubernetes


Overview

Here is a hands-on introduction to the Google Cloud Platform (GCP) and to getting certified as a Google Certified Professional (GCP).

NOTE: Content here is my personal opinion, and not intended to represent any employer (past or present). “PROTIP:” here highlights information I haven’t seen elsewhere on the internet: hard-won, little-known but significant facts based on my personal research and experience.

Why Google?

  1. Google Cloud’s marketing home page is at:

    https://cloud.google.com

Major clients of Google Cloud include HSBC, PayPal, 20th Century Fox, Bloomberg, and Domino’s.

Google has the most aggressive pricing, including per-second (vs. per-minute) billing.

Google gives a discount automatically after an instance runs for more than 25% of a month.

Google has a fast fiber network connecting machines (via underground and undersea cables), which enables high capacity and speed across the world. See https://wilsonmar.github.io/cloud-services-comparisons/#cloud-vendor-comparisons

Unlike AWS, where encryption is a hassle and expensive, Google encrypts data automatically at no additional charge.

Google was the first cloud vendor to offer a VPC which spans several regions (until late 2018, when AWS began offering the same). But still, the global scope of a VPC in Google eliminates the cost and latency of a VPN between regions (plus a router for each VPN for BGP). This also enables shareable configuration between projects.

As with AWS, Google has Preemptible VMs that run for up to just 24 hours, with fewer features than Spot VMs. Pricing is the same for both across all its machine type series:

  • 3rd gen C3 powered by Intel Sapphire Rapids CPU platform
  • 2nd gen E2, N2 Intel Cascade Lake and Ice Lake CPU platforms, N2D, T2A, T2D AMD
  • 1st gen N1 Intel Skylake CPU platform

No “pre-warming” is required for load balancing.

  • Google builds its own server hardware. As of May 11, 2022, Google’s Cloud TPU (Tensor Processing Units) use 2048-chip and 1024-chip v4 Pods which combine for 9 exaflops of peak aggregate performance (equivalent to the computing power of 10 million laptops combined) – the largest publicly available. Google is also working on Quantum AI.

And it runs on 90% carbon-free energy. Google was the first major company to become carbon-neutral.

Google offers a 99.99% SLA.

Even if you don’t use Google Cloud, you can use Google’s DNS at 8.8.8.8.
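For example, you can query it directly (a quick sketch, assuming the dig utility is installed):

    # Resolve a name using Google Public DNS:
    dig @8.8.8.8 example.com +short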

See https://cloud.google.com/why-google

Due to the scarcity of people working on GCP, individual professionals are likely to be paid better than AWS & Azure pros.

As with other clouds

  • It’s costly and difficult for individual companies to keep up with the pace of technology, especially around hardware
  • “Pay as you go” rather than significant up-front purchase, which eats time
  • No software to install (and go stale, requiring redo work)
  • Google scale: 27 zones across 9 cloud regions. 90 edge cache locations.

cloud.google.com/about/locations lists current number of regions, zones, network edge locations, countries served by the Google Front End (GFE) with DoS protection.

Google was the first CSP to get an ISO 14001 certification.


Documentation

Google Cloud Deployment Manager vs. Terraform vs. Pulumi vs. AWS CloudFormation vs. Azure Resource Manager: you still ‘describe’ your desired state, but by having a programming language (as Pulumi does), you can use complex logic, factor out patterns, and package it up for easier consumption.

https://cloud.google.com/compute/docs/reference/rest/v1/instances/start

Free Cloud Time and Training

https://www.cloudskillsboost.google incorporates features of Qwiklabs, which Google purchased to provide a UX for cloud instance time (around an hour per class).

A. For individuals, in 2023 Google began offering a $299/year “Innovators Plus” subscription that includes $500 in cloud credits and a $200 certification voucher.

B. For startups, Google has a program at https://cloud.google.com/startup that gives $200,000 toward cloud costs and $200 in Skills Boost credits over a startup’s first 2 years.

C. For Google partners, Partner Certification Kickstart

PROTIP: project accounts for API vs. Compute are separate?

Google offers classes on Coursera and at https://cloud.google.com/training

https://www.coursera.org/learn/gcp-infrastructure-scaling-automation

Free $300 account for 60 days

In US regions, new accounts get $300 of usage credit for 12 months.

There are limitations to Google’s no-charge (free tier) usage:

  • No more than 8 cores at once across all instances
  • No more than 100 GB of solid state disk (SSD) space
  • Max. 2TB (2,000 GB) total persistent standard disk space

PROTIP: Google bills in minute-level increments (with a 10-minute minimum charge), unlike AWS which charges by the hour (for Windows instances).

  1. Read the fine print in the FAQ to decide what Google permits:

    https://cloud.google.com/free/docs/frequently-asked-questions

  2. Read Google’s Pricing Philosophy:

    https://cloud.google.com/pricing/philosophy

    Gmail accounts

  3. NOTE: Create several Gmail accounts, each with a different identity (name, birthdate, credit card). You may need to reuse the same name on the credit card and the same phone number, since extra cards and numbers are expensive.

    PROTIP: Write down all the details (including the date when you opened the account) in case you have to recover the password.

    PROTIP: Use a different browser so you can flip quickly between identities.

    • Use Chrome browser for Gmail account1 with an Amex card for project1
    • Use Firefox browser for Gmail account2 with a Visa card for project2
    • Use Brave browser for Gmail account3 with a Mastercard for project3
    • Use Safari browser for Gmail account4 with a Discover card for project4

  4. In the appropriate internet browser, apply for a Gmail address, and use the same combination in the free trial registration page and Console:

    https://cloud.google.com/free

    Alternately, https://codelabs.developers.google.com/codelabs/cpb100-free-trial/#0

    https://console.developers.google.com/freetrial

  5. Click the Try It Free button. Complete the registration. Click Agree and continue. Start my new trial.

  6. With the appropriate account and browser, configure at console.cloud.google.com

    Keeping track of multiple accounts is an exhausting way to live, in my opinion.

  7. PROTIP: Bookmark the project URL.

    PROTIP: Google remembers your last project and its region, and gives them to you even if you do not specify them in the URL.

    Configure Limits

  8. CAUTION: Your bill can suddenly jump to thousands of dollars a day, with no explanation. Configure budgets and alerts to put limits on spending, as sketched below.
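    A minimal sketch using the gcloud CLI (the billing account ID and amounts are placeholders; this assumes the Billing Budgets API is enabled and you have billing permissions):

    # Create a budget that alerts at 90% of a $100/month cap:
    gcloud billing budgets create \
      --billing-account=XXXXXX-XXXXXX-XXXXXX \
      --display-name="monthly-cap" \
      --budget-amount=100USD \
      --threshold-rule=percent=0.9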


Google Certified Professional (GCP) Certification Exams

After certification, you are listed on the Google Cloud Certified Directory.

PROTIP: Google uses Webassessor (by Kryterion), which amazingly requires a different email for each exam sponsor. In other words, if you want to get certified in Salesforce, DevOpsInstitute, and Google, you’ll need 3 emails. Absolutely crazy! And they consider addresses such as “johndoe+google@gmail.com” invalid.

https://www.coursera.org/collections/googlecloud-offer-expired

https://support.google.com/cloud-certification/answer/9907748?hl=en

As of December, 2020, Google had these certifications:
https://cloud.google.com/certification

Exam fees and formats:

  • $99 to answer 50-60 questions in 90 minutes
  • $125 to answer 50-60 questions in 2 hours: Associate Cloud Engineer
  • $200 to answer 50-60 questions in 2 hours: Professional certifications

Tests can be taken online or in-person at a Kryterion Test Center. PROTIP: Call (602) 659-4660 in Phoenix, AZ because testing centers go in and out of business, or have limitations such as COVID, so call ahead to verify they’re open and to confirm parking instructions. Copy address and parking instructions to your Calendar entry.

Codelabs

For hands-on practice: https://codelabs.developers.google.com/?cat=Cloud

Kryterion’s online-proctoring (OLP) solution is not affected by COVID-19 and may be a suitable testing alternative to taking exams at a test center.

Register for your exam through your Test Sponsor’s Webassessor portal. There you get a Test Taker Authorization Code needed to launch the test.

VIDEO for GCP beginners


Cloud Architect

Cloud Architect – design, build and manage solutions on Google Cloud Platform.

PROTIP: The exam references the case studies at https://cloud.google.com/certification/guides/professional-cloudarchitect/ so get to know them to avoid wasting time during the exam.

The above are covered by Google’s Preparing for the Google Cloud Professional Cloud Architect Exam on Coursera, which is $49 if you want the quizzes and certificate.

More about this certification:

https://www.coursera.org/specializations/gcp-architecture

  • KC 1, GCSP 1 - Google Cloud Platform Fundamentals: Core Infrastructure (Intro, 141m)
  • GCSP 2 - Networking in Google Cloud: Defining and Implementing Networks (Intro, 141m)
  • GCSP 3 - Networking in Google Cloud: Hybrid Connectivity and Network Management (Intro, 141m)
  • GCSP 4 - Managing Security in Google Cloud Platform (Intro, 141m)
  • GCSP 5 - Security Best Practices in Google Cloud: Securing Compute Engine, Application Security, Securing Cloud Data, Securing Kubernetes (Encrypting disks with CSEK) (Intro, 141m)
  • GCSP 6 - Mitigating Security Vulnerabilities on Google Cloud Platform (DDoS with botnets, mitigations, partner products) (Intro, 141m)
  • GCSP 7 - Hands-On Labs in Google Cloud for Security Engineers (Intro, 141m)
  • KC 2 - Essential Cloud Infrastructure: Foundation (Intro, 141m)
  • KC 3 - Essential Cloud Infrastructure: Core Services (Intro, 141m)
  • KC 4 - Elastic Cloud Infrastructure: Scaling and Automation (Intro, 141m)
  • KC 5 - Elastic Cloud Infrastructure: Containers and Services (Intro, 141m)
  • KC 6 - Reliable Cloud Infrastructure: Design and Process (Intro, 141m)

Data Engineer

Data Engineer certification Guide

https://cloud.google.com/training/courses/data-engineering is used within the Data Engineering on Google Cloud Platform Specialization on Coursera. It is a series of five one-week classes ($49 per month after 7 days). These have videos that sync with transcript text, but no hints to quiz answers or live help.

  1. Building Resilient Streaming Systems on Google Cloud Platform $99 USD

  2. Leveraging Unstructured Data with Cloud Dataproc on Google Cloud Platform $59 USD

  3. Google Cloud Platform Big Data and Machine Learning Fundamentals $59 USD by Google Professional Services Consultant Valliappa Lakshmanan (Lak) at https://medium.com/@lakshmanok, previously at NOAA weather predictions.

    https://codelabs.developers.google.com/cpb100

  4. Serverless Data Analysis with Google BigQuery and Cloud Dataflow $99 USD

  5. Serverless Machine Learning with Tensorflow on Google Cloud Platform $99 USD by Valliappa Lakshmanan uses Tensorflow Cloud ML service to learn a map of New York City by analyzing taxi cab locations.

    • Vision: image sentiment
    • Speech: recognizes 110 languages, dictation
    • Translate
    • Personalization

DevOps Engineer

Coursera’s video Architecting with Google Kubernetes Engine Specialization consists of:

  1. Google Cloud Platform Fundamentals: Core Infrastructure

  2. Architecting with Google Kubernetes Engine: Foundations by Brian Rice (Curriculum Lead) provides hands-on Qwiklabs. Lab: Working with Cloud Build Quiz: Containers and Container Images, The Kubernetes Control Plane (master node), Kubernetes Object Management, Lab: Deploying to Google Kubernetes Engine, Migrate for Google Anthos

  3. Architecting with Google Kubernetes Engine: Workloads

  4. Architecting with Google Kubernetes Engine: Production

Coursera’s video courses prepare you for the Professional Cloud DevOps Engineer Professional Certificate

  1. Google Cloud Platform Fundamentals: Core Infrastructure (1 “week”)

    This course introduces you to important concepts and terminology for working with Google Cloud Platform (GCP). You learn about, and compare, many of the computing and storage services available in Google Cloud Platform, including Google App Engine, Google Compute Engine, Google Kubernetes Engine, Google Cloud Storage, Google Cloud SQL, and BigQuery. You learn about important resource and policy management tools, such as the Google Cloud Resource Manager hierarchy and Google Cloud Identity and Access Management. Hands-on labs give you foundational skills for working with GCP.

    Note: Google services are currently unavailable in China.

  2. Developing a Google SRE Culture

    In many IT organizations, incentives are not aligned between developers, who strive for agility, and operators, who focus on stability. Site reliability engineering, or SRE, is how Google aligns incentives between development and operations and does mission-critical production support. Adoption of SRE cultural and technical practices can help improve collaboration between the business and IT. This course introduces key practices of Google SRE and the important role IT and business leaders play in the success of SRE organizational adoption.

    Primary audience: IT leaders and business leaders who are interested in embracing SRE philosophy. Roles include, but are not limited to CTO, IT director/manager, engineering VP/director/manager. Secondary audience: Other product and IT roles such as operations managers or engineers, software engineers, service managers, or product managers may also find this content useful as an introduction to SRE.

  3. Reliable Google Cloud Infrastructure: Design and Process by Stephanie Wong (Developer Advocate) and Philipp Mair (Course Developer)

    This course equips students to build highly reliable and efficient solutions on Google Cloud using proven design patterns. It is a continuation of the Architecting with Google Compute Engine or Architecting with Google Kubernetes Engine courses and assumes hands-on experience with the technologies covered in either of those courses. Through a combination of presentations, design activities, and hands-on labs, participants learn to define and balance business and technical requirements to design Google Cloud deployments that are highly reliable, highly available, secure, and cost-effective.

    This course teaches participants the following skills:

    ● Apply a tool set of questions, techniques, and design considerations
    ● Define application requirements and express them objectively as KPIs, SLOs and SLIs
    ● Decompose application requirements to find the right microservice boundaries
    ● Leverage Google Cloud developer tools to set up modern, automated deployment pipelines
    ● Choose the appropriate Cloud Storage services based on application requirements
    ● Architect cloud and hybrid networks
    ● Implement reliable, scalable, resilient applications balancing key performance metrics with cost
    ● Choose the right Google Cloud deployment services for your applications
    ● Secure cloud applications, data, and infrastructure
    ● Monitor service level objectives and costs using Google Cloud tools

    Prerequisites: ● Completion of prior courses in the …


  4. Logging, Monitoring, and Observability in Google Cloud

    This course teaches techniques for monitoring, troubleshooting, and improving infrastructure and application performance in Google Cloud. Guided by the principles of Site Reliability Engineering (SRE), and using a combination of presentations, demos, hands-on labs, and real-world case studies, attendees gain experience with full-stack monitoring, real-time log management, and analysis, debugging code in production, tracing application performance bottlenecks, and profiling CPU and memory usage.

    gcp-observability-629x313


Use Terraform

https://learn.hashicorp.com/collections/terraform/gcp-get-started

https://github.com/GoogleCloudPlatform/terraform-google-examples

Houssem Dellai has a whole set of resources for setting up Kubernetes using Terraform on GCP:

  • https://github.com/HoussemDellai/docker-kubernetes-course
  • https://www.youtube.com/watch?v=mwToMPpDHfg&list=PLpbcUe4chE79sB7Jg7B4z3HytqUUEwcNE&index=4

GCP Architecture

gcp-hierarchy-409x383

Billing is at the project level. A project is a logical grouping of resources, associated with billing.

Labels are Key-value pairs containing resource metadata used to organize billing.

gcp-hierarchy-375x326.png

Create an organization node when you want to centrally apply organization-wide policies.

New Project

All Google Cloud resources are associated with a project.

Project ID is unique among all other projects at Google and cannot be changed.

Project Names can be changed.
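A sketch of creating a project from the CLI (the project ID and name below are placeholders):

    # The project ID is permanent; the name ("friendly name") can be changed later:
    gcloud projects create example-project-348915 --name="My Example"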

Principals

Service accounts are automatically created for each project:

project_number@developer.gserviceaccount.com
project_id@developer.gserviceaccount.com

Service accounts are one type of principal; principals (called “members”) are given permissions:

   "members": [
        "user:ali@example.com",
        "serviceAccount:my-other-app@appspot.gserviceaccount.com",
        "group:admins@example.com",
        "domain:google.com"
      ]

Principals can be of the following types:

  • Google Account
  • Service account VIDEO
  • Google group
  • Google Workspace account
  • Cloud Identity domain
  • All authenticated users
  • All users

A Google Account represents a developer, an administrator, or any other person who interacts with Google Cloud. Any email address that’s associated with a Google Account can be an identity, including gmail.com or other domains.

WARNING: Service agents have their own (very powerful) roles.

IAM summary

Google Cloud’s Identity and Access Management (IAM) service grants granular access to specific resources and helps prevent access to other resources. IAM adopts the security principle of least privilege, where nobody should have more permissions than they actually need.

VIDEO: gcp-iam-parts-750x457.png

IAM commands

The gcloud CLI commands (each takes a full resource name as an argument):

    gcloud iam list-grantable-roles RESOURCE
    gcloud iam list-testable-permissions RESOURCE

“GROUP” refers to policies, roles, service-accounts, simulators, workforce-pools, workload-identity-pools.
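For example (a sketch; the project ID is a placeholder), pass a full resource name as the argument:

    # List roles that can be granted on a specific project:
    gcloud iam list-grantable-roles \
       //cloudresourcemanager.googleapis.com/projects/my-project-id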

https://developers.google.com/apis-explorer lists 265 APIs (as of May 31, 2023).

IAM Permissions

Permissions determine what operations (verbs) are allowed on a resource, in the form of:

    service.resource.verb

The caller of each Google Cloud service REST API method needs to be granted permission for the verb associated with the method called. For example, the pubsub service:

    pubsub.subscriptions.consume to call subscriptions.consume()
    pubsub.topics.publish to call topics.publish()

Permissions to access a resource are NOT granted directly to an end user.

Permissions by role

Permissions are grouped into roles granted to authenticated principals.

Each role is specified in the form of:

    roles/service.roleName

For example, roles defined for the Cloud Storage service include:

    roles/storage.objectAdmin
    roles/storage.objectCreator
    roles/storage.objectViewer

Again, each role contains a collection of permissions.
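To see which permissions a role contains, describe it (a sketch; the output shows the role’s includedPermissions, title, and stage):

    gcloud iam roles describe roles/storage.objectViewer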

When a role is granted to a principal, that principal obtains all the permissions of that role.

Allow/IAM Policies by resource

When an authenticated principal attempts to access a resource, IAM determines whether the action is permitted based on the allow policies (aka IAM policy) attached to that resource to enforce what roles are granted to which principals.

Role Bindings

Each allow/IAM policy is a collection of role bindings that bind one or more member principals to an individual role. To define who (principal) has what type of access (role) on a resource, create an allow policy and attach it to the resource.

  • VIDEO: GCP IAM Policies are defined as bindings to roles.
{
   "bindings": [
      {
         "role": "roles/storage.admin",
         "members": [
            "user:alice@example.com",
            "group:admin@example.com"
         ],
         "condition": {
            "title": "temporary",
            "expression": "request.time < timestamp('2022-09-23T23:55:00Z')"
         }
      },
      {
         "role": "roles/compute.admin",
         "members": [
            "user:bob@example.com"
         ]
      }
   ],
   "etag": "abcdef1234568=",
   "version": 3
}

Notice a time-limited temporary condition can be defined. Policy version 3 includes a “condition” as a separate section within each binding instead of “withcondition”.

REMEMBER: The IAM API is eventually consistent. Changes take time to affect access checks.
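A sketch of granting such a binding from the CLI (the project ID, member, and timestamp are placeholders):

    # Bind a role to a principal at the project level, with a time-limited condition:
    gcloud projects add-iam-policy-binding my-project-id \
      --member="user:alice@example.com" \
      --role="roles/storage.admin" \
      --condition='expression=request.time < timestamp("2022-09-23T23:55:00Z"),title=temporary'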

Primitive (Basic) roles

Primitive (now called Basic) IAM roles: Viewer, Editor, Owner, and Billing Administrator.

gcp-iam-basic-590x258.png

PROTIP: In production environments, rather than granting basic roles, grant the most limited predefined roles for each service, at the lowest-level (finest-grained) resource that accepts each role.

You can grant users certain roles to access resources at a granularity finer than the project level. For example, you can create an allow policy that grants a user the Subscriber role for a particular Pub/Sub topic.

Create a custom role.
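A sketch, assuming placeholder names and a minimal set of permissions:

    # Create a custom role containing only the listed permissions:
    gcloud iam roles create instanceLister \
      --project=my-project-id \
      --title="Instance Lister" \
      --permissions=compute.instances.get,compute.instances.list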

? Basic roles can only be granted to single users.

? Predefined roles can be associated with a group.

Policies in folders

Share policies among Google Cloud projects by placing projects into a folder, and define the policies on that folder.

Custom roles are not applied to folders.

? IAM policies that are implemented higher in the resource hierarchy deny access that is granted by lower-level policies.

Resources:

  • https://cloud.google.com/iam/docs/understanding-roles#predefined_roles

Granular Grants inherited within project

Some services support granting IAM permissions at a granularity finer than the project level. For example, you can grant the Storage Admin role (roles/storage.admin) to a user for a particular Cloud Storage bucket, or you can grant the Compute Instance Admin role (roles/compute.instanceAdmin) to a user for a specific Compute Engine instance.

IAM permissions can be granted at the project level so that permissions can be inherited by all resources within that project. For example, to grant access to all Cloud Storage buckets in a project, grant access to the project instead of each individual bucket. To grant access to all Compute Engine instances in a project, grant access to the project rather than each individual instance.
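For example, a sketch with placeholder names contrasting the two levels:

    # Bucket-level grant (finer than project level):
    gsutil iam ch user:alice@example.com:objectViewer gs://my-bucket
    # Project-level grant, inherited by every bucket in the project:
    gcloud projects add-iam-policy-binding my-project-id \
      --member="user:alice@example.com" --role="roles/storage.objectViewer"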

References:

Permissions need to be defined per project.

Permissions are inherited and additive (flow in one direction). Parent permissions don’t override child permissions. Permissions can’t be denied at lower levels once they’ve been granted at upper levels. There are no deny rules.

setIamPolicy to each resource:

POST https://cloudresourcemanager.googleapis.com/v1/projects/{resource}:setIamPolicy
POST https://pubsub.googleapis.com/v1/{resource}:setIamPolicy

Kubernetes RBAC (Role-Based Access Control) extends IAM, at the cluster or namespace level, to define Roles – what operation verbs (get, list, watch, create, describe) can be executed over named objects (resources such as a pod, deployment, service, persistent volume). It’s common practice to allocate get, list, and watch together (as a read-only unit).

Roles (such as compute.instanceAdmin) are a collection of permissions to give access to a given resource, in the form:

service.resource.verb

List:

gcloud iam list-testable-permissions //cloudresourcemanager.googleapis.com/projects/$DEVSHELL_PROJECT_ID
   

Applying an example: compute.instances.delete

API groups are defined in the same way as when creating a Role.

Get and post actions for non-resource endpoints are uniquely defined by RBAC ClusterRoles, as the name implies, defined at the cluster level.

RoleBindings connect Roles to subjects (users/processes) who make requests to the Kubernetes API.

Resources can be cluster scope such as nodes and storage classes, or they can be namespace resources such as pods and deployments.

On the top-left, is a basic assignment that specifies get, list, and watch operations on all pod resources.

On the bottom left, a sub-resource, log, is added to the resources list to specify access to pod/log.

On the top right, a resource name is specified to limit the scope to specific instances of a resource, and the verbs specified as patch and update.
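A sketch of such a Role applied via kubectl (the namespace and name are placeholders); it combines the read-only verbs with the pod/log sub-resource described above:

    kubectl apply -f - <<'EOF'
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      namespace: default
      name: pod-reader
    rules:
    - apiGroups: [""]                 # "" is the core API group
      resources: ["pods", "pods/log"]
      verbs: ["get", "list", "watch"]
    EOF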

Service accounts

Unlike an end-user account, no human authentication is involved from one service to another when a service account is associated with a VM or app.

Google-managed service accounts are of the format:
[PROJECT_NUMBER]@cloudservices.gserviceaccount.com

User-managed service accounts are of the format:
[PROJECT_NUMBER]-compute@developer.gserviceaccount.com

Service accounts have more stringent permissions and logging than user accounts.
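A sketch of creating a user-managed service account and granting it a role (the names are placeholders):

    gcloud iam service-accounts create my-app-sa \
      --display-name="My app service account"
    gcloud projects add-iam-policy-binding my-project-id \
      --member="serviceAccount:my-app-sa@my-project-id.iam.gserviceaccount.com" \
      --role="roles/storage.objectViewer"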

GCDS (Google Cloud Directory Sync) syncs user identities from on-premises.


Cloud Shell Online

The Cloud Shell provides command line access on a web browser, with nothing to install.

Sessions have a 1 hour timeout.

Language support for Java, Go, Python, Node, PHP, Ruby.

Not meant for high computation use.

Shell CLI programs

Google has these shells:

  1. gcloud CLI installed with google-cloud-sdk.

  2. gsutil to access Cloud Storage

  3. bq for Big Query tasks

  4. kubectl for Kubernetes

There is a Google Cloud SDK for Windows (gcloud) for your programming pleasure.

BLOG: Graphical user interface (GUI) for Google Compute Engine instance

Commands

  1. Click the icon in the Google Cloud Platform Console:

    gcp-cloud-shell-menu-568x166-9041

  2. Click “START CLOUD SHELL” at the bottom of this pop-up:

    gcloud-shell-entry-748x511

    When the CLI appears online:

  3. See that your present working directory is /home/ followed by your account name:

    pwd
    
  4. See the folder with your account name:

    echo ${HOME}
    
  5. Just your account name:

    echo ${USER}
    
  6. Read the welcome file:

    nano README-cloudshell.txt
    

    Your 5GB home directory will persist across sessions, but the VM is ephemeral and will be reset approximately 20 minutes after your session ends. No system-wide change will persist beyond that.

  7. Type “gcloud help” to get help on using Cloud SDK. For more examples, visit https://cloud.google.com/shell/docs/quickstart and https://cloud.google.com/shell/docs/examples

  8. Type “cloudshell help” to get help on using the “cloudshell” utility. Common functionality is aliased to short commands in your shell; for example, you can type “dl <filename>” at the Bash prompt to download a file.

    Type “cloudshell aliases” to see these commands.

  9. Type “help” to see this message any time. Type “builtin help” to see Bash interpreter help.

Other resources:

GCP Console / Dashboard

https://console.cloud.google.com/home/dashboard
displays panes for your project from among the list obtained by clicking the “hamburger” menu icon at the upper left corner. The major sections of the product menu are:

  • IDENTITY & SECURITY (Identity, Access, Security)
  • COMPUTE (App Engine, Compute Engine, Kubernetes Engine (formerly Container Engine), Cloud Functions, Cloud Run, VMware Engine)
  • STORAGE (Filestore, Storage, Data Transfer)
  • DATABASES (Bigtable, Datastore, Database Migration, Filestore, Memorystore, Spanner, SQL)
  • NETWORKING (VPC network, Network services, Hybrid Connectivity, Network Service Tiers, Network Security, Network Intelligence)
  • OPERATIONS (Monitoring, Debugger, Error Reporting, Logging, Profiler, Trace) STACKDRIVER
  • TOOLS (Cloud Build, Cloud Tasks, Container Registry, Artifact Registry, Cloud Scheduler, Deployment Manager, API Gateway, Endpoints, Source Repositories, Workflows, Private Catalog)
  • BIG DATA (Composer, Dataproc, Pub/Sub, Dataflow, IoT Core, BigQuery, Looker, Data Catalog, Data Fusion, Financial Services, Healthcare, Life Sciences, Dataprep)
  • ARTIFICIAL INTELLIGENCE (AI Platform (Unified), AI Platform, Data Labeling, Document AI, Natural Language, Recommendations AI, Tables, Talent Solution, Translation, Vision, Video Intelligence)
  • OTHER GOOGLE SOLUTIONS
  • PARTNER SOLUTIONS (Redis Enterprise, Apache Kafka, DataStax Astra, Elasticsearch Service, MongoDB Atlas, Cloud Volumes)

Text Editor

  1. Click the pencil icon for the built-in text editor.

  2. Edit text using nano or vim built-in.

  3. PROTIP: Boost mode to run Docker with more memory.

Local gcloud CLI install

Get the CLI to run locally on your laptop:

  1. On MacOSX use Homebrew:

    brew install --cask google-cloud-sdk
    

    Alternately:

    1. In https://cloud.google.com/sdk/downloads
    2. Click the link for Mac OS X (x86_64) like “google-cloud-sdk-173.0.0-darwin-x86_64.tar.gz” to your Downloads folder.
    3. Double-click the file to unzip it (from 13.9 MB to a 100.6 MB folder). If you’re not seeing a folder in Finder, use another unzip utility.
    4. Move the folder to your home folder.

    Either way, edit environment variables on Mac.

    The installer creates folder ~/.google-cloud-sdk

  2. Know what you got:

    gcloud version
    Google Cloud SDK 402.0.0
    bq 2.0.75
    core 2022.09.12
    gcloud-crc32c 1.0.0
    gsutil 5.13
    
  3. Add the path to that folder in the $PATH variable within your ~/.bash_profile or ~/.zshrc

    export PATH="$PATH:$HOME/.google-cloud-sdk/bin"
    

    Also add:

    source "/opt/homebrew/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/path.zsh.inc"
    
  4. PROTIP: To quickly navigate to the folder with just 3 characters (gcs):

    alias gcs='cd ~/.google-cloud-sdk'
  5. To specify your favorite GCP project, mac-setup.env

    gcs
  6. Install libraries (without the help argument):

    On Linux or Mac OS X:

    ./install.sh --help

    On Windows:

    .\install.bat --help
  7. Initialize the SDK:

    ./bin/gcloud init

    Login

  8. PROTIP: In your browser, be signed in to the Google account you want used when the next command opens a window.

  9. From a Terminal CLI:

    gcloud auth login

    That opens your default browser to select your Google account.

  10. Click “Allow” for Google Cloud SDK for a response such as:

    You are now logged in as [johndoe@gmail.com].
    Your current project is [None].  You can change this setting by running:
      $ gcloud config set project PROJECT_ID
    

gcloud CLI commands

Regardless of whether the CLI is online or local:

  1. Get syntax of commands

    gcloud help

  2. Be aware of the full set of parameters possible for GCP tasks at
    https://cloud.google.com/sdk/gcloud/reference

    The general format of commands:

    gcloud [GROUP] [SUBGROUP] [COMMAND] [--flags] [ARGUMENTS]

    Cloud Shell has all Linux command tools and authentication pre-installed.

  3. Run df to see that /dev/sdb1 has 5,028,480 KB ≈ 5 GB of persistent storage:

    Filesystem     1K-blocks     Used Available Use% Mounted on
    none            25669948 16520376   7822572  68% /
    tmpfs             872656        0    872656   0% /dev
    tmpfs             872656        0    872656   0% /sys/fs/cgroup
    /dev/sdb1        5028480    10332   4739672   1% /home
    /dev/sda1       25669948 16520376   7822572  68% /etc/hosts
    shm                65536        0     65536   0% /dev/shm
    
  4. Confirm the operating system version:

    uname -a

    Linux cs-206022718149-default 5.10.133+ #1 SMP Sat Sep 3 08:59:10 UTC 2022 x86_64 GNU/Linux
    

    Previously:

    Linux cs-6000-devshell-vm-5260d9c4-474a-47de-a143-ea05b695c057-5a 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux
     
  5. PROTIP: It may seem like a small thing, but having the cursor prompt always in the first character saves you from hunting for it visually.

    export PS1="\n  \w\[\033[33m\]\n$ "

    The “\n” adds a blank line above each prompt.

    The current folder is displayed above the prompt.

    PROTIP: Set up a keystroke program (such as Stream Deck) to issue that long command above.

    Programming

    https://cloud.google.com/apis/docs/cloud-client-libraries Go, Java, Node.js, Python, Ruby, PHP, C#, C++

    https://cloud.google.com/code/docs/vscode/client-libraries https://cloud.google.com/code/docs/vscode/client-libraries#remote_development_with_permissions_enabled

    List Projects, Set one

  6. Get list of Project IDs:

    gcloud projects list

    Example (default sort by project ID):

    PROJECT_ID              NAME                       PROJECT_NUMBER
    what-182518             CICD                       608556368368
    
  7. To get creation time of a specified project:

    gcloud projects list --format="table(projectId,createTime)"

    Response:

    PROJECT_ID                      CREATE_TIME
    applied-algebra-825             2015-01-14T06:51:30.910Z
    
  8. Alternately, gcloud projects describe 608556368368 gives too much info:

    createTime: '2022-09-14T14:30:01.540Z'
    lifecycleState: ACTIVE
    name: something
    projectId: what-182518
    projectNumber: '608556368368'
    
  9. To get the last date used and such, see example code for the APIs/SDK involving the f1-micro instance:

    • https://github.internet2.edu/nyoung/gcp-gce-project-audit-bq
    • https://github.internet2.edu/nyoung/gcp-project-audit-simple

  10. PROTIP: Instead of manually constructing commands, use environment variable:

    gcloud config set project "${DEVSHELL_PROJECT_ID}"
    

    Alternately, if you want your own:

    export PROJECT_ID=$(gcloud config get-value project)
    
  11. PROTIP: The shell variable $DEVSHELL_PROJECT_ID defined by Google can be used to refer to the project ID of the project used to start the Cloud Shell session.

    echo $DEVSHELL_PROJECT_ID

  12. List project name (aka “Friendly Name”) such as “cp100”.

    gcloud config list project 
    

    A sample response (project shows “(unset)” if none is set):

    [core]
    project = what-182518
    Your active configuration is: [cloudshell-20786]
    
  13. Print just the project name (suppressing other warnings/errors):

    gcloud config get-value project 2> /dev/null 
    

    Alternately:

    gcloud config list --format 'value(core.project)' 2>/dev/null
    
  14. PROTIP: Get information about a project using the project environment variable:

    gcloud compute project-info describe --project ${DEVSHELL_PROJECT_ID}
    

    Project metadata includes quotas:

    quotas:
    - limit: 1000.0
      metric: SNAPSHOTS
      usage: 1.0
    - limit: 5.0
      metric: NETWORKS
      usage: 2.0
    - limit: 100.0
      metric: FIREWALLS
      usage: 13.0
    - limit: 100.0
      metric: IMAGES
      usage: 1.0
    - limit: 1.0
      metric: STATIC_ADDRESSES
      usage: 1.0
    - limit: 200.0
      metric: ROUTES
      usage: 31.0
    - limit: 15.0
      metric: FORWARDING_RULES
      usage: 2.0
    - limit: 50.0
      metric: TARGET_POOLS
      usage: 0.0
    - limit: 50.0
      metric: HEALTH_CHECKS
      usage: 2.0
    - limit: 8.0
      metric: IN_USE_ADDRESSES
      usage: 2.0
    - limit: 50.0
      metric: TARGET_INSTANCES
      usage: 0.0
    - limit: 10.0
      metric: TARGET_HTTP_PROXIES
      usage: 1.0
    - limit: 10.0
      metric: URL_MAPS
      usage: 1.0
    - limit: 5.0
      metric: BACKEND_SERVICES
      usage: 2.0
    - limit: 100.0
      metric: INSTANCE_TEMPLATES
      usage: 1.0
    - limit: 5.0
      metric: TARGET_VPN_GATEWAYS
      usage: 0.0
    - limit: 10.0
      metric: VPN_TUNNELS
      usage: 0.0
    - limit: 3.0
      metric: BACKEND_BUCKETS
      usage: 0.0
    - limit: 10.0
      metric: ROUTERS
      usage: 0.0
    - limit: 10.0
      metric: TARGET_SSL_PROXIES
      usage: 0.0
    - limit: 10.0
      metric: TARGET_HTTPS_PROXIES
      usage: 1.0
    - limit: 10.0
      metric: SSL_CERTIFICATES
      usage: 1.0
    - limit: 100.0
      metric: SUBNETWORKS
      usage: 26.0
    - limit: 10.0
      metric: TARGET_TCP_PROXIES
      usage: 0.0
    - limit: 24.0
      metric: CPUS_ALL_REGIONS
      usage: 3.0
    - limit: 10.0
      metric: SECURITY_POLICIES
      usage: 0.0
    - limit: 1000.0
      metric: SECURITY_POLICY_RULES
      usage: 0.0
    - limit: 6.0
      metric: INTERCONNECTS
      usage: 0.0
  15. List configuration information for the currently active project:

    gcloud config list

    Sample response:

    [component_manager]
    disable_update_check = True
    [compute]
    gce_metadata_read_timeout_sec = 5
    [core]
    account = johndoe@gmail.com
    check_gce_metadata = False
    disable_usage_reporting = False
    project = what-182518
    [metrics]
    environment = devshell
    Your active configuration is: [cloudshell-20786]
    

    Account Authorization Permissions

  16. List:

    gcloud auth list

    If you have not logged in:

    No credentialed accounts.
     
    To login, run:
      $ gcloud auth login `ACCOUNT`
    

    Returned is: https://cloud.google.com/sdk/auth_success containing:

    * Build and deploy a web service to Cloud Run.
    To get started, follow the walkthrough in Cloud Shell Editor.
    * Launch large compute clusters on Compute Engine.
    To get started, follow a Compute Engine quickstart.
    * Store vast amounts of data on Cloud Storage.
    To get started, follow the gsutil tool quickstart.
    * Analyze Big Data in the cloud with BigQuery.
    To get started, follow the BigQuery command-line tool quickstart.
    * Store and manage data using a MySQL database with Cloud SQL.
    To get started, see Managing instances using the gcloud CLI.
    * Make your applications and services available to your users with Cloud DNS.
    To get started, see Getting started with Cloud DNS.
    
  17. List projects to which your account has access:

    gcloud projects list

  18. Confirm:

    gcloud compute config-ssh
    WARNING: The private SSH key file for gcloud does not exist.
    WARNING: The public SSH key file for gcloud does not exist.
    WARNING: You do not have an SSH key for gcloud.
    WARNING: SSH keygen will be executed to generate a key.
    Generating public/private rsa key pair.
    Enter passphrase (empty for no passphrase): __
    

    Instances List

  19. Zones are listed as metadata for each GCE instance:

    gcloud compute instances list

    Sample response:

    NAME          ZONE           MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS
    hygieia-1     us-central1-f  n1-standard-1               10.128.0.3   35.193.186.181  TERMINATED
    

    PROTIP: Define what Zone and region your team should use.

    Zones

  20. Get the list of zone codes:

    gcloud compute zones list

    Sample response:

    NAME                    REGION                STATUS  NEXT_MAINTENANCE  TURNDOWN_DATE
    asia-east1-c            asia-east1            UP
    asia-east1-b            asia-east1            UP
    asia-east1-a            asia-east1            UP
    asia-northeast1-a       asia-northeast1       UP
    asia-northeast1-c       asia-northeast1       UP
    asia-northeast1-b       asia-northeast1       UP
    asia-south1-c           asia-south1           UP
    us-central1-c           us-central1           UP
    asia-south1-a           asia-south1           UP
    asia-south1-b           asia-south1           UP
    asia-southeast1-a       asia-southeast1       UP
    asia-southeast1-b       asia-southeast1       UP
    australia-southeast1-c  australia-southeast1  UP
    australia-southeast1-b  australia-southeast1  UP
    australia-southeast1-a  australia-southeast1  UP
    europe-west1-c          europe-west1          UP
    europe-west1-b          europe-west1          UP
    europe-west1-d          europe-west1          UP
    europe-west2-b          europe-west2          UP
    europe-west2-a          europe-west2          UP
    europe-west2-c          europe-west2          UP
    europe-west3-b          europe-west3          UP
    europe-west3-a          europe-west3          UP
    europe-west3-c          europe-west3          UP
    southamerica-east1-c    southamerica-east1    UP
    southamerica-east1-b    southamerica-east1    UP
    southamerica-east1-a    southamerica-east1    UP
    us-central1-a           us-central1           UP
    us-central1-f           us-central1           UP
    us-central1-c           us-central1           UP
    us-central1-b           us-central1           UP
    us-east1-b              us-east1              UP
    us-east1-d              us-east1              UP
    us-east1-c              us-east1              UP
    us-east4-c              us-east4              UP
    us-east4-a              us-east4              UP
    us-east4-b              us-east4              UP
    us-west1-c              us-west1              UP
    us-west1-b              us-west1              UP
    us-west1-a              us-west1              UP
    

    REMEMBER: Region is a higher-order (more encompassing) concept than Zone.

  21. Define environment variables to hold zone and region:

    
    export CLOUDSDK_COMPUTE_ZONE=us-central1-f
    export CLOUDSDK_COMPUTE_REGION=us-central1 
    echo $CLOUDSDK_COMPUTE_ZONE
    echo $CLOUDSDK_COMPUTE_REGION
    

    TODO: Get the default region and zone into environment variables.

    curl "http://metadata.google.internal/computeMetadata/v1/instance/zone" -H "Metadata-Flavor: Google"
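    A sketch for that TODO, assuming this runs on a GCE VM or in Cloud Shell where the metadata server is reachable:

    # Metadata returns projects/PROJECT_NUMBER/zones/ZONE; keep the last path segment:
    export CLOUDSDK_COMPUTE_ZONE=$(curl -s \
      "http://metadata.google.internal/computeMetadata/v1/instance/zone" \
      -H "Metadata-Flavor: Google" | awk -F/ '{print $NF}')
    # Strip the trailing zone letter (e.g. us-central1-f becomes us-central1):
    export CLOUDSDK_COMPUTE_REGION=${CLOUDSDK_COMPUTE_ZONE%-*}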

  22. Set the zone (for example, us-central1-f defined above):

    gcloud config set compute/zone ${CLOUDSDK_COMPUTE_ZONE}
    

    See https://cloud.google.com/compute/docs/storing-retrieving-metadata

  23. Switch to see the Compute Engine Metadata UI for the project:

    https://console.cloud.google.com/compute/metadata

    • google-compute-default-zone
    • google-compute-default-region

    https://github.com/wilsonmar/Dockerfiles/blob/master/gcp-set-zone.sh

Create sample Node server

  1. Download a file from GitHub:

    
    curl -o server.js https://raw.githubusercontent.com/wilsonmar/Dockerfiles/master/NodeJs/server.js
    

    -o (lowercase o) saves to the filename provided on the command line.

    See http://www.thegeekstuff.com/2012/04/curl-examples/?utm_source=feedburner

    The sample Node program displays just text “Hello World!” (no fancy HTML/CSS).

  2. Invoke Node to start server:

    node server.js
  3. View the program’s browser output online by clicking the Google Web View button, then “Preview on port 8080”:

    gcp-web-preview-396x236-5615

    The URL:
    https://8080-dot-3050285-dot-devshell.appspot.com/?authuser=0

  4. Press control+C to stop the Node server.

Deploy Python

  1. Replace boilerplate “your-bucket-name” with your own project ID:

    sed -i s/your-bucket-name/$DEVSHELL_PROJECT_ID/ config.py

  2. View the list of dependencies needed by your custom Python program:

    cat requirements.txt

  3. Download the dependencies:

    pip install -r requirements.txt -t lib

  4. Deploy the current assembled folder:

    gcloud app deploy --quiet

  5. Exit the cloud:

    exit

PowerShell Cloud Tools

https://cloud.google.com/powershell/

https://cloud.google.com/tools/powershell/docs/

  1. In a PowerShell opened for Administrator:

    Install-Module GoogleCloud

    The response:

    Untrusted repository
    You are installing the modules from an untrusted repository. If you trust this 
    repository, change its InstallationPolicy value by running the Set-PSRepository
     cmdlet. Are you sure you want to install the modules from 'PSGallery'?
    [Y] Yes  [A] Yes to All  [N] No  [L] No to All  [S] Suspend  [?] Help 
    (default is "N"):
    
  2. Type A (Yes to All).

  3. Get all buckets for the current project, for a specific project, or a specific bucket:

    $currentProjBuckets = Get-GcsBucket
    $specificProjBuckets = Get-GcsBucket -Project my-project-1
    $bucket = Get-GcsBucket -Name my-bucket-name
    
  4. Navigate to Google Storage (like a drive):

    cd gs:\

  5. Show the available buckets (like directories):

    ls

  6. Create a new bucket

    mkdir my-new-bucket

  7. Help

    Get-Help New-GcsBucket

Source Code Repository

https://console.cloud.google.com/code/develop/repo is
Google’s (Source) Code Repository Console.

served from: gcr.io

See docs at https://cloud.google.com/source-repositories

Cloud Source Repositories provides full-featured Git repositories hosted on GCP, free for up to 5 project-users per billing account, with up to 50GB free storage and 50GB free egress per month.
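A sketch of creating and cloning a repository from the CLI (the repo name is a placeholder):

    gcloud source repos create hello-repo
    gcloud source repos clone hello-repo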

Mirror from GitHub


  1. PROTIP: On GitHub.com, login to the account you want to use (in the same browser).
  2. PROTIP: Highlight and copy the name of the repository you want to mirror on Google.
  3. Create another browser tab (so they share the credentials established in the steps above).
  4. https://console.cloud.google.com/code is the Google Code Console.
  5. Click “Get Started” if it appears.
  6. PROTIP: For repository name, paste or type the same name as the repo you want to hold from GitHub.

    BLAH: Repository names can only contain alphanumeric characters, underscores or dashes.

  7. Click CREATE to confirm name.

    gcp-code-github-925x460

  8. Click on “Automatically Mirror from GitHub”.
  9. Select GitHub or Bitbucket for a “Choose a Repository list”.
  10. Click Grant to the repo to be linked (if it appears). Then type your GitHub password.
  11. Click the green “Authorize Google-Cloud-Development” button.
  12. Choose the repository. Click the consent box. CONNECT.

    You should get an email “[GitHub] A new public key was added” about the Google Connected Repository.

  13. Commit a change to GitHub (push from your local copy or interactively on GitHub.com).
  14. Click the clock icon on Google Code Console to toggle commit history.
  15. Click the SHA hash to view changes.
  16. Click on the changed file path to see its text comparing two versions. Scroll down.
  17. Click “View source at this commit” to make a “git checkout” of the whole folder.
  18. Click the “Source code” menu for the default list of folders and files.
  19. Select the master branch.

    To disconnect a hosted repository:

  20. Click Repositories on the left menu.
  21. Click the settings icon (with the three vertical dots to the far right) on the same line of the repo you want disconnected.
  22. Confirm Disconnect.

    Create new repo in CLI

  23. Be at the project you want.
  24. Create a repository.
  25. Click the CLI icon.
  26. Click the wrench to adjust background color, etc.

  27. Create a file using the source browser.

  28. Make it a Git repository (a Git client is built-in):

    git init

  29. Define, for example:

    git config credential.helper gcloud.sh

  30. Define the remote:

    git remote add google https://source.developers.google.com/p/cp100-1094/r/helloworld

  31. Push to the remote:

    git push --all google

  32. To transfer a file within gcloud CLI:

    gsutil cp *.txt gs://cp-100-demo

GCR Container Registry

https://console.cloud.google.com/gcr - Google’s Container Registry console is used to control what is in
Google’s Container Registry (GCR). It is a service apart from GKE. It stores secure, private Docker images for deployments.

Like GitHub, it has build triggers.
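A sketch of pushing an image there (the project ID and image name are placeholders):

    # Let Docker authenticate to gcr.io with your gcloud credentials:
    gcloud auth configure-docker
    docker tag hello-app gcr.io/my-project-id/hello-app:v1
    docker push gcr.io/my-project-id/hello-app:v1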


Deployment Manager

Deployment Manager creates resources.

Cloud Launcher uses .yaml templates describing the environment, which makes for repeatability.
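A minimal sketch of a Deployment Manager config and deploy command (PROJECT_ID and resource names are placeholders):

    cat > vm.yaml <<'EOF'
    resources:
    - name: demo-vm
      type: compute.v1.instance
      properties:
        zone: us-central1-f
        machineType: https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-central1-f/machineTypes/f1-micro
        disks:
        - deviceName: boot
          type: PERSISTENT
          boot: true
          autoDelete: true
          initializeParams:
            sourceImage: https://www.googleapis.com/compute/v1/projects/debian-cloud/global/images/family/debian-11
        networkInterfaces:
        - network: https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/default
          accessConfigs:
          - name: External NAT
            type: ONE_TO_ONE_NAT
    EOF
    gcloud deployment-manager deployments create my-first-deployment --config vm.yaml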

REST API

REST API GUI

  1. On the Google Cloud menu, click the “hamburger” menu at the upper-left.
  2. select APIs & Services.
  3. Click + ENABLE APIS AND SERVICES.

  4. On the Library page, click “Library” on the left menu.
  5. Click “Private APIs”. Notice APIs are listed by category.
  6. Use the filter field to search by name.
  7. Click “Private” in the “Visibility” menu items.
  8. Find your API,
  9. If you don’t see your API listed, you were not granted access to enable the API.
  10. Click the API you want to enable.
  11. In the page that displays information about the API, click Enable.

REST API GCLOUD CLI

  1. Click the CLI icon at the top of the page to Activate the Cloud shell.
  2. Click AUTHORIZE to “Authorize Cloud Shell”.

REST API CLI

In a Terminal:

  1. The Google Cloud CLI requires Python (3.5 to 3.9).
  2. Install gcloud program.

REST API commands

  1. Establish a project

    export GCP_PROJECT_ID="hc-13c8a9855cab4a4eac6640eb730"
    gcloud config set project "$GCP_PROJECT_ID"

    The expected response:

    Updated property [core/project].
  2. Review Configuration:

    % gcloud config list

    [core]
    account = wilson.mar@hashicorp.com
    disable_usage_reporting = false
    project = wilsonmar-gcp-test
    Your active configuration is: [default]
    
  3. See https://cloud.google.com/sdk/gcloud/reference

    To disable prompting, add option -q or --quiet

  4. Disable usage reporting:

    gcloud config set disable_usage_reporting true

  5. Get a (long) list of services, both GCP and external. Each service has a NAME and TITLE line:

    gcloud services list --available

    NOTE: piping to “grep googleapis.com” would display only one of the two lines for each entry.
  6. Filter the list of services, both GCP and external.

    gcloud service-management list --available --page-size=10 --sort-by="NAME"

    gcloud service-management list --available --filter='NAME:compute*'

  7. Enable. Example:

gcloud services enable containerregistry.googleapis.com

https://cloud.google.com/sdk/gcloud/reference/services/enable

  1. For more on working with Google API Explorer to test RESTful API’s

https://developers.google.com/apis-explorer

PROTIP: Although APIs are in alphabetical order, some services are named starting with “Cloud” or “Google” or “Google Cloud”. Press Ctrl+F to search.

API Explorer site: GCEssentials_ConsoleTour

Endpoints (APIs)

Google Cloud Endpoints let you manage, control access, and monitor custom APIs that can be kept private.

Authentication

  1. https://cloud.google.com/docs/authentication

    Authentication using OAuth2 (JWT), JSON.
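    A sketch of calling a Google REST API with an OAuth2 bearer token from gcloud (the project ID is a placeholder):

    # Authorize a raw REST call with your gcloud credentials:
    curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      "https://cloudresourcemanager.googleapis.com/v1/projects/my-project-id"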

SQL Servers on GCE: (2012, 2014, 2016)

  • SQL Server Standard
  • SQL Server Web
  • SQL Server Enterprise

Google NETWORKING

  • Getting Started with VPC Networking and Google Compute Engine

    Google creates all instances with a private (internal) IP address such as 10.142.3.2.

One public IP (such as 35.185.115.31) is optionally assigned to a resource. The IP can be ephemeral (from a pool) or static (reserved). Unassigned static IPs cost $0.01 per hour (24 cents per day).

A VM instance cannot be created without a VPC network.

Without a VPC network, there are no routes and no firewall rules!

By default, one VPC is created for each project, with 25 subnets and a default internet gateway (0.0.0.0/0).

Default 1460 MTU (Maximum Transmission Unit).

VPCs are global resources that span all regions.

Each Subnet IP range (private RFC 1918 CIDR block) is defined across several zones within a particular region. Subnet ranges cannot overlap in a region. Auto Mode automatically adds a subnet to each new region.

Each VPC has implied allow egress and implied deny ingress (firewall rules) configured.

A VPC can be shared across several projects in same organization. Subnets in the same VPCs communicate via internal IPs.
Subnets in different VPCs communicate via external IPs, for which there is a charge.

Custom Mode specifies specific subnets, for use with VPC Peering and to connect via VPN using IPsec to encrypt traffic. VPC Peering can share projects NOT in the same organization; it lets multiple VPCs share resources.
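A sketch of creating a custom-mode VPC and one subnet (the names and CIDR range are placeholders):

    gcloud compute networks create my-custom-vpc --subnet-mode=custom
    gcloud compute networks subnets create my-subnet \
      --network=my-custom-vpc --region=us-central1 --range=10.0.1.0/24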

VPN capacity is 1.5 - 3 Gbps

Google Cloud Router supports dynamic routing between GCP and corporate networks using BGP (Border Gateway Protocol).

Google Cloud Interconnect can have SLA with internal IP addresses.

  • VPN (Cloud VPN software)
  • Partner thru external provider 50Mbps to 10 Gbps
  • Dedicated Interconnect 10 Gbps each link to colocation facility
  • CDN Interconnect - CDN providers link with Google’s edge network

Peering via public IP addresses (no SLA) so can link multiple orgs

  • Direct Peering - connect business directly to Google
  • Carrier Peering - Enterprise-grade connections provided by carrier service providers

HTTP Load Balancing ensures only healthy instances handle traffic across regions.

  • See https://www.ianlewis.org/en/google-cloud-platform-http-load-balancers-explaine
  • https://medium.com/google-cloud/capacity-management-with-load-balancing-32bd22a716a7 Capacity Management with Load Balancing

Load balance

Scale and Load Balance Instances and Apps

  1. Get a GCP account
  2. Define a project with billing enabled and the default network configured
  3. An admin account with at least project owner role.
  4. Create an instance template with a web app on it
  5. Create a managed instance group that uses the template to scale
  6. Create an HTTP load balancer that scales instances based on traffic and distributes load across availability zones
  7. Define a firewall rule for HTTP traffic.
  8. Test scaling and balancing under load.

Allow external traffic k8s

For security, k8s pods by default are accessible only by its internal IP within the cluster.

So to make a container accessible from outside the Kubernetes virtual network, expose the pod as a Kubernetes service. Within a Cloud Shell:

kubectl expose deployment hello-node --type="LoadBalancer"
   

The --type="LoadBalancer" flag specifies that we’ll be using the load balancer provided by the underlying infrastructure (in this case the Compute Engine load balancer) to load balance traffic across all pods managed by the deployment.

Sample response:

service "hello-node" exposed
   

The Kubernetes master creates the load balancer and related Compute Engine forwarding rules, target pools, and firewall rules to make the service fully accessible from outside of Google Cloud Platform.

  1. Find the publicly-accessible IP address of the service, request kubectl to list all the cluster services:

    kubectl get services
    

    Sample response listing internal CLUSTER-IP and EXTERNAL-IP:

    NAME         CLUSTER-IP     EXTERNAL-IP      PORT(S)    AGE
    hello-node   10.3.250.149   104.154.90.147   8080/TCP   1m
    kubernetes   10.3.240.1     <none>           443/TCP    5m
    

Configure Cloud Armor Load Balancer IPs

LAB

Google Cloud Platform HTTP(S) load balancing is implemented at the edge of Google’s network in Google’s points of presence (POP) around the world. User traffic directed to an HTTP(S) load balancer enters the POP closest to the user and is then load balanced over Google’s global network to the closest backend that has sufficient capacity available.

Configure an HTTP Load Balancer with global backends. Then, stress test the Load Balancer and blocklist the stress-test IP with Cloud Armor, which prevents malicious users or traffic from consuming resources or entering your virtual private cloud (VPC) networks. Cloud Armor blocks and allows access to your HTTP(S) load balancer at the edge of Google Cloud, as close as possible to the user and to malicious traffic.

  1. in Cloud Shell: create a firewall rule to allow port 80 traffic

    gcloud compute firewall-rules create \
    www-firewall-network-lb --target-tags network-lb-tag \
    --allow tcp:80
    

    Click Authorize. Result:

    NAME                     NETWORK  DIRECTION  PRIORITY  ALLOW   DENY  DISABLED
    www-firewall-network-lb  default  INGRESS    1000      tcp:80        False
    
  2. Create an instance template named web-template which specifies a startup script that installs Apache and creates a home page displaying the zone the server runs in:

    gcloud compute instance-templates create web-template \
     --machine-type=n1-standard-1 \
     --image-family=debian-9 \
     --image-project=debian-cloud \
     --tags=network-lb-tag \
     --metadata=startup-script='#! /bin/bash
    apt-get update
    apt-get install apache2 -y
    service apache2 restart
    ZONE=$(curl "http://metadata.google.internal/computeMetadata/v1/instance/zone" -H "Metadata-Flavor: Google")
    echo "<!doctype html><html><body><h1>Web server</h1><h2>This server is in zone: ZONE_HERE</h2></body></html>" | tee /var/www/html/index.html
    sed -i "s|ZONE_HERE|$ZONE|" /var/www/html/index.html'
    
  3. Create a basic HTTP health check:

    gcloud compute http-health-checks create basic-http-check

  4. Create a managed instance group of 3 instances. Managed instance groups use an instance template to create a group of identical instances, so that if an instance in the group stops, crashes, or is deleted, the group automatically recreates it.

    gcloud compute instance-groups managed create web-group \
    --template web-template --size 3 --zones \
    us-central1-a,us-central1-b,us-central1-c,us-central1-f
    
  5. Create the load balancing service:

    gcloud compute instance-groups managed set-named-ports \
    web-group --named-ports http:80 --region us-central1
    gcloud compute backend-services create web-backend \
    --global \
    --port-name=http \
    --protocol HTTP \
    --http-health-checks basic-http-check \
    --enable-logging
    gcloud compute backend-services add-backend web-backend \
    --instance-group web-group \
    --global \
    --instance-group-region us-central1
    gcloud compute url-maps create web-lb \
    --default-service web-backend
    gcloud compute target-http-proxies create web-lb-proxy \
    --url-map web-lb
    gcloud compute forwarding-rules create web-rule \
    --global \
    --target-http-proxy web-lb-proxy \
    --ports 80
    
  6. It takes several minutes for the instances to register and the load balancer to be ready. Check in Navigation menu > Network services > Load balancing or

    gcloud compute backend-services get-health web-backend --global
    

    kind: compute#backendServiceGroupHealth

  7. Retrieve the load balancer IP address:

    gcloud compute forwarding-rules describe web-rule --global | grep IPAddress

    IPAddress: 34.120.166.236

  8. Access the load balancer:

    curl -m1 34.120.166.236

    The output should look like this (do not copy; this is example output):

    <!doctype html><html><body><h1>Web server</h1><h2>This server is in zone: projects/921381138888/zones/us-central1-a</h2></body></html>
    
  9. Open a new Cloud Shell tab and keep requesting that IP address in a loop:

    while true; do curl -m1 34.120.166.236; done

    Create a VM to test access to the load balancer

  10. In Navigation menu > Compute Engine, Click CREATE INSTANCE.
  11. Name the instance access-test and set the Region to australia-southeast1 (Sydney).
  12. Leave everything else at the default and click Create.
  13. Once launched, click the SSH button to connect to the instance

    TODO: Commands instead of GUI for above.
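    A gcloud sketch of the equivalent (the zone within australia-southeast1 is an assumption; defaults are used for machine type and image):

    gcloud compute instances create access-test \
      --zone=australia-southeast1-b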

  14. Access the load balancer:

    curl -m1 35.244.71.166

    The output should look like this:

    <!doctype html><html><body><h1>Web server</h1><h2>This server is in zone: projects/921381138888/zones/us-central1-a</h2></body></html>

Create Blocklist security policy with Cloud Armor

To block access from the access-test VM (simulating a malicious client), first identify the external IP address of the client trying to access your HTTP Load Balancer: you could examine traffic captured by VPC Flow Logs in BigQuery to spot a high volume of incoming requests.

  15. Go to Navigation menu > Compute engine and copy the External IP of the access-test VM.
  16. Go to Navigation menu > Network Security > Cloud Armor.
  17. Click Create policy.
  18. Provide a name of blocklist-access-test and set the Default rule action to Allow.
  19. Click Next step. Click Add rule.

    TODO: Commands instead of GUI for above.
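    A gcloud sketch of the equivalent (substitute the access-test VM’s external IP for ACCESS_TEST_IP; the priority and deny-404 action match the GUI values below):

    # Create the policy (its default rule allows all other traffic):
    gcloud compute security-policies create blocklist-access-test
    # Add a rule denying the malicious IP with a 404:
    gcloud compute security-policies rules create 1000 \
      --security-policy blocklist-access-test \
      --src-ip-ranges "ACCESS_TEST_IP/32" \
      --action deny-404
    # Attach the policy to the backend service:
    gcloud compute backend-services update web-backend \
      --security-policy blocklist-access-test \
      --global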

  20. Set the following Property values:

    Condition/match: Enter the IP of the access-test VM

    Action: Deny

    Deny status: 404 (Not Found)

    Priority: 1000

  21. Click Done.
  22. Click Next step.
  23. Click Add Target.

    For Type, select Load balancer backend service.

    For Target, select web-backend.

  24. Click Done.
  25. Click Create policy.

    Alternatively, you could set the default rule to Deny and only allowlist traffic from authorized users/IP addresses.

  26. Wait for the policy to be created before moving to the next step.

  27. Verifying the security policy in Cloud Shell: Return to the SSH session of the access-test VM.

  28. Run the curl command again on the instance to access the load balancer:

    curl -m1 35.244.71.166

    The response should be a 404. It might take a couple of minutes for the security policy to take effect.

    View Cloud Armor logs

  29. In the Console, navigate to Navigation menu > Network Security > Cloud Armor.
  30. Click blocklist-access-test.
  31. Click Logs.
  32. Click View policy logs and go to the latest logs. By default, the GCE Backend Service logs are shown.
  33. Select Cloud HTTP Load Balancer. Next, you will filter the view of the log to show only 404 errors.

  34. Remove the contents of the Query builder box, replace them with 404, and press Run Query to search for 404 errors.
  35. Locate a log with a 404 and expand the log entry.

  36. Expand httpRequest.

    The request should be from the access-test VM IP address.


Google COMPUTE Cloud Services

gcp-compute-735x301

| Considerations | Compute Engine | Kubernetes Engine (was Container Engine) | App Engine Standard | App Engine Flexible | Cloud Run | Cloud Functions |
|---|---|---|---|---|---|---|
| Users manage: | VMs, like on-prem (one container per VM) | K8s yaml | No-ops | Managed | No-ops | No-ops |
| Service model: | IaaS | Hybrid | PaaS | PaaS | Stateless Serverless | Serverless Logic |
| Language support: | Any | Any | Java, Node, Python, Go, PHP | +Ruby, .NET | Any | JavaScript (Node.js) |
| Primary use case: | General computing | Container-based workloads | Web & mobile apps | Web & mobile apps, Docker containers | Stateless containers | Event-driven functions |

gcp-compute-usage

https://cloudplatform.googleblog.com/2017/07/choosing-the-right-compute-option-in-GCP-a-decision-tree.html

Google’s Compute Engines

  • Compute Engine (GCE) is a managed environment for deploying virtual machines (VMs), providing full control of VMs for Linux and Windows Server. The API controls addresses, autoscalers, backends, disks, firewalls, global forwarding, health checks, images, instances, projects, regions, snapshots, SSL, subnetworks, targets, VPNs, zones, etc.

  • Kubernetes Engine (GKE) is a managed environment for deploying containerized applications, for container clustering

  • App Engine (GAE) is a managed platform for deploying and hosting full app code at scale. Similar to Amazon Elastic Beanstalk, GAE runs full Go, PHP, Java, Python, Node.js, .NET C#, Ruby, etc. coded with login forms and authentication logic. GAE Standard runs in a proprietary sandbox which starts faster than GAE Flexible running in Docker containers. Being proprietary, GAE Standard cannot access Compute Engine resources nor allow 3rd-party binaries. GAE Standard is a good fit for spiky traffic because its sandbox instances start quickly and scale down to zero.

  • Cloud Run enables stateless containers, based on KNative. Being in a container means any language can be used. Each container listens for requests or events. It’s like AWS Fargate.

  • Google Cloud Functions (like AWS Lambda, and also offered as Cloud Functions for Firebase) is a managed serverless platform for deploying event-driven functions. It runs single-purpose microservices written in JavaScript executed in Node.js when triggered by events. Good for stateless computation which reacts to external events.

Google Compute Engine

GCE offers the most control but also the most work (operational overhead).

Preemptible instances are cheaper but can be taken away at any time, like Amazon EC2 Spot Instances.

Google provides load balancers, VPNs, firewalls.

Use GCE where you need to select the size of disks, memory, or CPU types, or need any of the following:

  • use GPUs (Graphic Processing Units)
  • custom OS kernels
  • specifically licensed software
  • protocols beyond HTTP/S
  • orchestration of multiple containers

GCE is an IaaS (Infrastructure as a Service) offering of instances, NOT automatically using Kubernetes like GKE does. Use it to migrate on-premises solutions to the cloud.
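A minimal sketch of creating such an instance from the CLI (the name, zone, machine type, and disk size are placeholders to adapt):

   gcloud compute instances create my-vm \
     --zone=us-central1-a \
     --machine-type=n1-standard-1 \
     --image-family=debian-9 \
     --image-project=debian-cloud \
     --boot-disk-size=50GB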

References:

https://cloud.google.com/compute/docs/machine-types such as n1-standard-1.

The generations of machine types

GCE SonarQube

There are several ways to instantiate a SonarQube server.

GCE SonarQube BitNami

One alternative is to use Bitnami

  1. Browser at https://google.bitnami.com/vms/new?image_id=4FUcoGA
  2. Click Account for https://google.bitnami.com/services
  3. Add Project
  4. Set up a Bitnami Vault password.
  5. PROTIP: Use 1Password to generate a strong password and store it.
  6. Agree to really open sharing with Bitnami:

    • View and manage your Google Compute Engine resources (More info)
    • View and manage your data across Google Cloud Platform services (More info)
    • Manage your data in Google Cloud Storage (More info)

    CAUTION: This may be over-sharing for some.

  7. Click “Select an existing Project…” to select one in the list that appears. Continue.
  8. Click “Enable Deployment Manager (DM) API” to open another browser tab at https://console.developers.google.com/project/attiopinfosys/apiui/apiview/deploymentmanager
  9. If the blue “DISABLE” appears, then it’s enabled.
  10. Return to the Bitnami tab to click “CONTINUE”.
  11. Click BROWSE for the Library at https://google.bitnami.com/

    The above is done one time to setup your account.

  12. Type “SonarQube” in the search field and click SEARCH.
  13. Click on the icon that appears to LAUNCH.
  14. Click on the name to change it.
  15. NOTE “Debian 8” as the OS cannot be changed.
  16. Click “SHOW” to get the password into your Clipboard.
  17. Wait for the orange “REBOOT / SHUTDOWN / DELETE” to appear at the bottom of the screen.


  18. Click “LAUNCH SSH CONSOLE”.
  19. Click to confirm the SSH pop-up.
  20. Type lsb_release -a for information about the operating system:

    No LSB modules are available.
    Distributor ID: Debian
    Description:    Debian GNU/Linux 8.9 (jessie)
    Release:        8.9
    Codename:       jessie
    

    PROTIP: This is not the very latest operating system version because it takes time to integrate.

  21. Type pwd to note your home directory, which contains the user name (carried in from Google).
  22. Type ls -al for information about files:

    apps -> /opt/bitnami/apps
    .bash_logout
    .bashrc
    .first_login_bitnami
    htdocs -> /opt/bitnami/apache2/htdocs
    .profile
    .ssh
    stack -> /opt/bitnami
    
  23. Type exit to switch back to the browser tab.
  24. Click the blue IP address (such as 35.202.3.232) for a SonarQube tab to appear.

  25. Type “Admin” for user. Click the Password field and press Ctrl+V to paste from Clipboard.
  26. Click “Log in” for the Welcome screen.

    TODO: Assign other users.

  27. TODO: Associate the IP with a host name.

    SonarQube app admin log in

  28. At SonarQube server landing page (such as http://23.236.48.147)

    You may need to add it as a security exception.

  29. Type a name of your choosing, then click Generate.

  30. Click the language (JS).
  31. Click the OS (Linux, Windows, Mac).
  32. Highlight the sonar-scanner command to copy into your Clipboard.

  33. Click Download for https://docs.sonarqube.org/display/SCAN/Analyzing+with+SonarQube+Scanner

    sonarqube-clientinstall-386x292-27346

    On a Windows machine:
    sonar-scanner-cli-3.0.3.778-windows.zip | 63.1 MB

    On a Mac:
    sonar-scanner-cli-3.0.3.778-macosx.zip | 53.9 MB

  34. Generate the token:

    sonarqube-gen-token-601x129-14487

  35. Click Finish to see the server page such as at http://35.202.3.232/projects

    Do a scan

  36. On your Mac, unzip to folder “sonar-scanner-3.0.3.778-macosx”.

    Notice it has its own Java version in the jre folder.

  37. Open a Terminal and navigate to the bin folder containing sonar-scanner.
  38. Move it to a folder in your PATH.
  39. Create or edit shell script file from the Bitnami screen:

    ./sonar-scanner \
      -Dsonar.projectKey=sonarqube-1-vm \
      -Dsonar.sources=. \
      -Dsonar.host.url=http://23.236.48.147 \
      -Dsonar.login=b0b030cd2d2cbcc664f7c708d3f136340fc4c064
    

    NOTE: Your login token will be different than this example.

    https://github.com/wilsonmar/git-utilities/…/sonar1.sh

  40. Replace the . with the folder path such as

    -Dsonar.sources=/Users/johndoe/gits/ng/angular4-docker-example

    Do this instead of editing /conf/sonar-scanner.properties to change default http://localhost:9000

  41. chmod 555 sonar.sh
  42. Run the sonar script.

  43. Wait for the downloading.
  44. Look for a line such as:

    INFO: ANALYSIS SUCCESSFUL, you can browse http://35.202.3.232/dashboard/index/Angular-35.202.3.232
    
  45. Copy the URL and paste it in a browser.

  46. PROTIP: The example has no Version, Tags, etc. that a “production” environment would use.

GCE SonarQube

  1. In the GCP web console, navigate to the screen where you can create an instance.

    https://console.cloud.google.com/compute/instances

  2. Click Create (a new instance).
  3. Change the instance name from instance-1 to sonarqube-1 (numbered in case you’ll have more than one).
  4. Set the zone to your closest geographical location (us-west1-a).
  5. Set machine type to f1-micro.
  6. Click Boot Disk to select Ubuntu 16.04 LTS instead of default Debian GNU/Linux 9 (stretch).

    PROTIP: GCE does not provide the lighter http://alpinelinux.org/

  7. Type a larger Size (GB) than the default 10 GB.

    WARNING: You have selected a disk size of under [200GB]. This may result in poor I/O performance. For more information, see: https://developers.google.com/compute/docs/disks#performance.
    
  8. Set Firewall rules to allow Ingress and Egress for external access to the ports SonarQube uses: 9000 and 9092.

  9. Allow HTTP & HTTPS traffic.
  10. Click “Management, disks, networking, SSH keys”.
  11. In the Startup script field, paste script you’ve tested interactively:

    # Install Docker: 
    curl -fsSL https://get.docker.com/ | sh
    sudo docker pull sonarqube
    sudo docker run -d --name sonarqube -p 9000:9000 -p 9092:9092 sonarqube
    
  12. Click “command line” link for a pop-up of the equivalent command.
  13. Copy and paste it in a text editor to save the command for troubleshooting later.

  14. Click Create the instance. This cold-boot takes time:

    gce-startup-time-640x326

    Boot time to execute startup scripts is the main variable in cold-boot performance.

  15. Click SSH to SSH into instance via the web console, using your Google credentials.
  16. In the new window, pwd to see your account home folder.
  17. To see instance console history:

    cat /var/log/syslog

    Manual startup setup

    https://cloud.google.com/solutions/mysql-remote-access

  18. If there is a UI, highlight and copy the external IP address (such as https://35.203.158.223/) and switch to a browser to paste on a browser Address.

  19. Add the port number to the address (port 9000, as mapped in the docker run command above).

    TODO: Take a VM snapshot.

    https://cloud.google.com/solutions/prep-container-engine-for-prod

    Down the instance

  20. Remove image containers and volumes
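    For the SonarQube container created above, a sketch (the container name comes from the docker run command; volume prune assumes no other volumes are in use):

    sudo docker stop sonarqube
    sudo docker rm sonarqube
    sudo docker rmi sonarqube
    sudo docker volume prune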

  21. When done, close SSH windows.
  22. If you gave out an IP address, notify recipients about its imminent deletion.
  23. In the Google Console, click on the three dots to delete the instance.

    Colt McAnlis (@duhroach), Developer Advocate explains Google Cloud performance (enthusiastically) at https://goo.gl/RGsQlF

https://www.youtube.com/watch?v=ewHxl9A0VuI&index=2&list=PLIivdWyY5sqK5zce0-fd1Vam7oPY-s_8X

Windows

https://github.com/MicrosoftDocs/Virtualization-Documentation

On Windows, the startup script is at C:\Program Files\Google\Compute Engine\sysprep\startup_script.ps1

Kubernetes Engine

gce-console-menu-244x241-11754

Until Nov 14, 2017, GKE stood for “Google Container Engine”. The “K” is there because GKE is powered by Kubernetes, Google’s container orchestration manager, providing compute services using Google Compute Engine (GCE).

  1. “Kubernetes” is in the URL to the GKE home page:

    https://console.cloud.google.com/kubernetes

  2. Click “Create Cluster”.
  3. PROTIP: Rename generated “cluster-1” to contain the zone.
  4. Select zone where your others are.
  5. Note the default is Container-Optimized OS (based on Chromium OS) and 3 minion nodes in the cluster, which does not include the master.

    Workload capacity is defined by the number of Compute Engine worker nodes.

    The cluster of nodes are controlled by a K8S master.

  6. PROTIP: Attach a permanent disk for persistence.
  7. Click Create. Wait for the green checkmark to appear.
  8. Connect to the cluster: click the cluster name, then click CONNECT to connect using the Cloud Shell CLI.

  9. Create a cluster called “bootcamp”:

    gcloud container clusters create bootcamp \
      --scopes "https://www.googleapis.com/auth/projecthosting,storage-rw"

    Then fetch credentials for the cluster:

    gcloud container clusters get-credentials cluster-1 \
      --zone us-central1-f \
      --project ${DEVSHELL_PROJECT_ID}

The response:

Fetching cluster endpoint and auth data.
kubeconfig entry generated for cluster-1.
   
  1. Invoke the command:

    
    kubectl get nodes
    

    If you get the following message, the cluster credentials were not configured (re-run the get-credentials command above):

    The connection to the server localhost:8080 was refused - did you specify the right host or port?
    

    Sample valid response:

    NAME                                       STATUS    ROLES     AGE       VERSION
    gke-cluster-1-default-pool-8a05cb05-701j   Ready     <none>    11m       v1.7.8-gke.0
    gke-cluster-1-default-pool-8a05cb05-k4l3   Ready     <none>    11m       v1.7.8-gke.0
    gke-cluster-1-default-pool-8a05cb05-w4fm   Ready     <none>    11m       v1.7.8-gke.0
    
  2. List and expand the width of the screen:

    gcloud container clusters list
    

    Sample response:

    NAME       ZONE           MASTER_VERSION  MASTER_IP       MACHINE_TYPE   NODE_VERSION  NUM_NODES  STATUS
    cluster-1  us-central1-f  1.7.8-gke.0     162.222.177.56  n1-standard-1  1.7.8-gke.0   3          RUNNING
    

    If no clusters were created, no response is returned.

  3. Highlight the Endpoint IP address, copy, and paste to construct a browser URL such as:

    https://162.222.177.56/ui

    BLAH: User “system:anonymous” cannot get path “/”.: “No policy matched.\nUnknown user "system:anonymous"”

  4. In the Console, click Show Credentials.
  5. Highlight and copy the password.

  6. Start

    kubectl proxy
    

    The response:

    Starting to serve on 127.0.0.1:8001

    WARNING: You are no longer able to issue commands while the proxy runs.
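    A workaround sketch: run the proxy in the background so the shell stays usable:

    kubectl proxy &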


  1. Create new pod named “hello-node”:

    kubectl run hello-node \
     --image=gcr.io/${DEVSHELL_PROJECT_ID}/hello-node:v1 \
     --port=8080
    

    Sample response:

    deployment "hello-node" created
  2. View the pod just created:

    kubectl get pods
    

    Sample response:

    NAME                         READY     STATUS    RESTARTS   AGE
    hello-node-714049816-ztzrb   1/1       Running   0          6m
    
  3. List

    kubectl get deployments
    

    Sample response:

    NAME         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    hello-node   1         1         1            1           2m
    
  4. Troubleshoot:

    kubectl get events

    kubectl get services

  5. Get logs:

    kubectl logs pod-name

  6. Other commands:

    kubectl cluster-info

    kubectl config view

Kubernetes Dashboard

Kubernetes graphical dashboard (optional)

  1. Configure access to the Kubernetes cluster dashboard:

    gcloud container clusters get-credentials hello-world \
     --zone us-central1-f --project ${DEVSHELL_PROJECT_ID}
    

    Then start a proxy:

    kubectl proxy --port 8086
    
  2. Use the Cloud Shell Web preview feature to view a URL such as:

    https://8081-dot-3103388-dot-devshell.appspot.com/ui

  3. Click the “Connect” button for the cluster to monitor.

    See http://kubernetes.io/docs/user-guide/ui/


GCP APIs

  1. Begin in “APIs & Services” to enable the APIs the cluster needs. (In Kubernetes, Services provide a single point of access, a load balancer IP address and port, to specific pods.)
  2. Click ENABLE…
  3. Search for Container Engine API and click it.
  4. In the gshell: gcloud compute zones list

    Create container cluster

  5. Select Zone
  6. Set “Size” (vCPUs) from 3 to 2 – the number of nodes in the cluster.

    Nodes are the primary resource that runs services on Google Container Engine.

  7. Click More to expand.
  8. Add a Label (up to 64 per resource):

    Examples: env:prod/test, owner:, contact:, team:marketing, component:backend, state:inuse.

    The size of boot disk, memory, and storage requirements can be adjusted later.

  9. Instead of clicking “Create”, click the “command” link for the equivalent the gcloud CLI commands in the pop-up.

    gcloud beta container --project "mindful-marking-178415" \
      clusters create "cluster-1" --zone "us-central1-a" \
      --username="admin" --cluster-version "1.7.5-gke.1" \
      --machine-type "n1-standard-1" --image-type "COS" --disk-size "100" \
      --scopes "https://www.googleapis.com/auth/compute","https://www.googleapis.com/auth/devstorage.read_only","https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring.write","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly","https://www.googleapis.com/auth/trace.append" \
      --num-nodes "2" --network "default" \
      --no-enable-cloud-logging --no-enable-cloud-monitoring \
      --subnetwork "default" --enable-legacy-authorization
    

    PROTIP: Machine-types are listed and described at https://cloud.google.com/compute/docs/machine-types

    Alternately,

    gcloud container clusters create bookshelf \
      --scopes "https://www.googleapis.com/auth/userinfo.email","cloud-platform" \
      --num-nodes 2
    

    The response sample (widen window to see it all):

    Creating cluster cluster-1...done.
    Created [https://container.googleapis.com/v1/projects/mindful-marking-178415/zones/us-central1-a/clusters/cluster-1].
    kubeconfig entry generated for cluster-1.
    NAME       ZONE           MASTER_VERSION  MASTER_IP      MACHINE_TYPE   NODE_VERSION  NUM_NODES  STATUS
    cluster-1  us-central1-a  1.7.5-gke.1     35.184.10.233  n1-standard-1  1.7.5         2          RUNNING
    
  10. Push the image to the Container Registry:

    gcloud docker -- push gcr.io/$DEVSHELL_PROJECT_ID/bookshelf

  11. Configure entry credentials

    gcloud container clusters get-credentials bookshelf

  12. Use the kubectl command line tool.

    kubectl create -f bookshelf-frontend.yaml

  13. Check status of pods

    kubectl get pods

  14. Retrieve IP address:

    kubectl get services bookshelf-frontend

    Destroy cluster

    It may seem a bit premature at this point, but since Google charges by the minute, it’s better you know how to do this earlier than later. Return to this later if you don’t want to continue.

  15. Using the key information from the previous command:

    gcloud container clusters delete cluster-1 --zone us-central1-a

    2). View cloned source code for changes

  16. Use a text editor (vim or nano) to define a .yml file to define what is in pods.

  17. Build Docker

    docker build -t gcr.io/${DEVSHELL_PROJECT_ID}/hello-node:v1 .

    Sample response:

    v1: digest: sha256:6d7be8013acc422779d3de762e8094a3a2fb9db51adae4b8f34042939af259d8 size: 2002
    ...
    Successfully tagged gcr.io/cicd-182518/hello-node:v1
    
  18. Run:

    docker run -d -p 8080:8080 gcr.io/${DEVSHELL_PROJECT_ID}/hello-node:v1
    

    No news is good news in the response.

  19. Web Preview on port 8080 specified above.

  20. List Docker containers images built:

    docker ps
    CONTAINER ID        IMAGE                              COMMAND                  CREATED              STATUS              PORTS                    NAMES
    c938f3b42443        gcr.io/cicd-182518/hello-node:v1   "/bin/sh -c 'node ..."   About a minute ago   Up About a minute   0.0.0.0:8080->8080/tcp   cocky_kilby
    
  21. Stop the container by using the ID provided in the results above:

    docker stop c938f3b42443

    The response is the CONTAINER_ID.

    https://cloud.google.com/sdk/docs/scripting-gcloud

  22. Run the image:

    docker run -d -p 8080:8080 gcr.io/${DEVSHELL_PROJECT_ID}/hello-node:v1
    

    The response is a hash of the instance.

  23. Push the image to the gcr.io repository:

    gcloud docker -- push gcr.io/${DEVSHELL_PROJECT_ID}/hello-node:v1
    

    v1: digest: sha256:98b5c4746feb7ea1c5deec44e6e61dfbaf553dab9e5df87480a6598730c6f973 size: 10025


gcloud config set container/cluster ...
   

3). Cloud Shell instance - Remove code placeholders

4). Cloud Shell instance - package app into a Docker container

5). Cloud Shell instance - Upload the image to Container Registry

6). Deploy app to cluster

See https://codelabs.developers.google.com/codelabs/cp100-container-engine/#0


Google App Engine (GAE)

GAE is a PaaS (Platform as a Service) offering where Google manages application infrastructure (Jetty 8, Servlet 3.1, .NET Core, Node.js) that responds to HTTP requests.

Google Cloud Endpoints provide scaling, HA, DoS protection, TLS 1.2 SSL certs for HTTPS.

The first 26 GB of traffic each month is free.

Develop server-side code in Java, Python, Go, PHP.

Customizable 3rd party binaries are supported with SSH access on GAE Flexible environment which also enables write to local disk.

https://cloud.google.com/appengine/docs?hl=en_US&_ga=2.237246272.-1771618146.1506638659

https://stackoverflow.com/questions/tagged/google-app-engine

Google Cloud Functions

Here, single-purpose functions are coded in JavaScript and executed in Node.js when triggered by events occurring, such as a file upload.

Google provides a “Serverless” environment for building and connecting cloud services on a web browser.

Google Firebase

Firebase APIs handle HTTP requests from client-side mobile devices.

It offers Realtime Database, Crashlytics, Performance Monitoring, and Cloud Messaging.


Databases

To authorize an external network to access a Cloud SQL instance:

   gcloud sql instances patch mysql \
     --authorized-networks "203.0.113.20/32"
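To then connect from Cloud Shell (a sketch; “mysql” is the instance name from the patch command above):

   gcloud sql connect mysql --user=root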

Google Data Storage

Google Cloud’s core storage products:

  • Cloud Storage - stores versioned, immutable binary objects accessed by URLs (such as images and website content)
  • Cloud Bigtable
  • Cloud SQL provides a cloud relational databases PostgreSQL, MySQL, MS SQL Server.
  • Cloud Spanner

  • Firestore provides a RESTful interface for NoSQL ACID transactions (by mobile devices)

gcp-decision-tree.png

gcp-storage-table-650x270-42645

|  | Cloud Storage | Firestore (Datastore) | BigTable | Cloud SQL (1st Gen) |
|---|---|---|---|---|
| Competitors: | AWS S3, Azure Blob Storage | - | AWS DynamoDB, Azure Cosmos DB | AWS RDS, Azure SQL |
| Storage type: | BLOB store buckets | NoSQL, document | wide-column NoSQL | Relational SQL |
| Use cases: | Images, large media files, backup (zips) | User profiles, product catalog | AdTech, Financial & IoT time series | User credentials, customer orders |
| Good for: | Structured and unstructured binary or object data | Getting started, App Engine apps | "Flat" data, heavy read/write, events, analytical data | Web frameworks, existing apps |
| Overall capacity: | Petabytes+ | Terabytes+ | Petabytes+ | Up to 500 GB |
| Unit size: | 5 TB/object | 1 MB/entity | 10 MB/cell | standard |
| Transactions: | No | Yes | No (OLAP) | Yes |
| Complex queries: | No | No | No | Yes |
| Tech: | - | - | Proprietary Google | - |
| Scaling: | - | - | Serverless autoscaling | Instances |

Cloud Spanner is Google’s proprietary relational SQL database (like AWS Aurora DB) which spans databases of unlimited size across regions (globally).

https://stackoverflow.com/questions/tagged/google-cloud-storage

ACLs (Access Control Lists) can be defined.

Google Cloud Storage (GCS) Buckets

Standard storage for highest durability, availability, and performance with low latency, for web content distribution and video streaming.

  • (Standard) multi-regional to accessing media around the world.
  • (Standard) Regional to store data and run data analytics in a single part of the world.
  • Nearline storage for low-cost but durable data archiving, online backup, and disaster recovery of data rarely accessed.
  • Coldline storage (which replaced DRA, Durable Reduced Availability Storage) at a lower cost, for data accessed about once per year.
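For example, a sketch of creating a Nearline bucket for backups (the bucket name is a placeholder):

   gsutil mb -c nearline -l US gs://my-backup-bucket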

In gcloud on a project (scale to any data size, cheap, but no support for ACID properties):

  1. Create a bucket in location ASIA, EU, or US, in this CLI example (instead of web console GUI):

    gsutil mb -l US gs://$DEVSHELL_PROJECT_ID

  2. Grant Default ACL (Access Control List) to All users in Cloud Storage:

    gsutil defacl ch -u AllUsers:R gs://$DEVSHELL_PROJECT_ID

    The response:

    Updated default ACL on gs://cp100-1094/
    

    The above is a terrible example because ACLs are meant to control access for individual objects with sensitive info.
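    A sketch of granting read access on a single object instead (the object name is hypothetical):

    gsutil acl ch -u AllUsers:R gs://$DEVSHELL_PROJECT_ID/public-page.html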

Google Cloud Firestore

Firestore supersedes Datastore.

Firestore is a NoSQL (document) online database which charges for individual read, write, and deletes.

Documents can be organized into collections.

ACL:

  • 20,000 free Writes per day (with index and device replication sync across regions by default)
  • 20,000 free Deletes per day
  • 50,000 free Reads per day
  • Listen
  • Query can include multiple chained filters but charged for one read Atomic batch operations

  • First 1GB of data stored is free
  • First 10 GiB of egress per month is free between US regions.

Google Spanner

Cloud Spanner, described above, is Google’s horizontally scalable relational database with strong consistency across regions.

Google Cloud SQL

Google’s Cloud SQL provides ACID support for cloud-based transactions on traditional relational databases (MySQL, PostgreSQL, Microsoft SQL Server) up to 30TB of storage.

It can scale up to 64 processor cores and 400 GB RAM.

Google provides automatic replicas and replication, managed backups, and patching.

A network firewall is included at no charge (which AWS users pay for).

Google encrypts data when on Google’s internal network and when stored in database tables, temporary files, and backups. For free when AWS users pay premium encryption.

MySQL Workbench, Toad (from Quest), and other standard SQL apps can be used to administer Cloud SQL databases:

Google App Engine accesses Cloud SQL databases using drivers Connector/J for Java and MySQLdb for Python.

  • git clone https://github.com/GoogleCloudPlatform/appengine-gcs-client.git

  • https://cloud.google.com/appengine/docs/python/googlecloudstorageclient/using-cloud-storage
  • https://cloud.google.com/sdk/cloud-client-libraries for Python

CSEK

Each chunk is distributed across Google’s storage infrastructure. All chunks (sub-files) within an object are encrypted at rest, each with its own unique Data Encryption Key (DEK).

DEKs are wrapped by KEKs (Key Encryption Keys) stored in KMS.

With Google-managed keys: the standard key rotation period is 90 days, storing 20 versions. Re-encryption happens after 5 years.

Customer-managed keys: Keys are in a key ring.

Customer-supplied keys are stored outside of GCP.

LAB: Create an encryption key and wrap it with the Google Compute Engine RSA public key certificate

  1. create a 256 bit (32 byte) random number to use as a key:

    openssl rand 32 > mykey.txt
    more mykey.txt

    Result:

    Qe7>hk=c}

  2. Download the GCE RSA public cert:

    curl https://cloud-certs.storage.googleapis.com/google-cloud-csek-ingress.pem \
      > gce-cert.pem

Use an RSA public key to encrypt your data. After data has been encrypted with the public key, it can only be decrypted by the corresponding private key; in this case, the private key is known only to Google Cloud Platform services. Wrapping your key using the RSA certificate ensures only Google Cloud Platform services can unwrap your key and use it to protect your data.

  1. Extract the public key from the certificate:

    openssl x509 -pubkey -noout -in gce-cert.pem > pubkey.pem

  2. RSA-wrap your key:

    openssl rsautl -oaep -encrypt -pubin -inkey pubkey.pem \
      -in mykey.txt -out rsawrappedkey.txt

  3. Encode the wrapped key in base64:

    openssl enc -base64 -in rsawrappedkey.txt | tr -d '\n' | sed -e '$a\' > rsawrapencodedkey.txt

  4. View your encoded, wrapped key to verify it was created:

    cat rsawrapencodedkey.txt

  5. To avoid introducing newlines, use the code editor to copy the contents of rsawrapencodedkey.txt:

    MBMCbcFk … h4eiqQ==

  6. PROTIP: Click the “X” at the right to exit the Editor.

    Encrypt a new persistent disk with your own key

  7. In the browser tab showing the GCP console, select Navigation menu > Compute Engine > Disks. Click the Create disk button. Name it encrypted-disk-1, set Encryption to Customer-supplied key, paste in your wrapped, encoded key, and check the Wrapped key checkbox.

    Attach the disk to a compute engine instance

  8. In the browser tab showing the GCP console, select Navigation menu > Compute Engine > VM Instances. Click the Create button. Name the instance csek-demo and verify the region is us-central1 and the zone is us-central1-a.

  9. Scroll down and expand Management, security disks, networking, sole tenancy. Click on Disks and under Additional disks, click Attach existing disk. For the Disk property, select the encrypted-disk-1. Paste the value of your wrapped, encoded key into the Enter Key field, and check the Wrapped key checkbox. (You should still have this value in your clipboard). Leave the Mode as Read/write and Deletion rule as keep disk.

  10. Click the Create button to launch the new VM. The VM will be launched and have 2 disks attached. The boot disk and the encrypted disk. The encrypted disk still needs to be formatted and mounted to the instance operating system.

    Important. Notice the encryption key was needed to mount the disk to an instance. Google does not store your keys on its servers and cannot access your protected data unless you provide the key. This also means that if you forget or lose your key there is no way for Google to recover the key or to recover any data encrypted with the lost key.

  11. Once the instance has booted, click the SSH button to connect to the instance.

  12. Issue the following commands on the instance to format and mount the encrypted volume:

    sudo mkfs.ext4 /dev/disk/by-id/google-encrypted-disk-1
    mkdir encrypted
    sudo mount /dev/disk/by-id/google-encrypted-disk-1 encrypted/

    The disk is now mounted as the encrypted folder and can be used like any other disk.

    Create a snapshot from an encrypted disk

  13. In the GCP console, select Navigation menu > Compute Engine > Snapshots. Click the Create snapshot button. Provide a name of encrypted-disk-1-snap1. For the Source disk, select the encrypted-disk-1. For the encryption key, paste in the wrapped, encoded key value you created earlier. Check the Wrapped key checkbox.

    Notice that the snapshot can be encrypted with a different key than the actual disk. For this lab we will use the same key for the snapshot.

    Paste in the wrapped, encoded key value you created earlier again into the snapshot encryption key field. Check the Wrapped key checkbox. Click the Create button.


Networking

NOTE: A router is needed for each VPN for BGP.

NOTE: GCP Firewall Rules apply not just between external instances but also to individual instances within the same network.

Why still charges?

On a Google Cloud account which had nothing running, my bill at the end of the month was still $35 for “Compute Engine Network Load Balancing: Forwarding Rule Additional Service Charge”.

CAUTION: Each exposed Kubernetes service (type == LoadBalancer) creates a forwarding rule, and Google’s shutdown script doesn’t remove the forwarding rules it created.

  1. To fix it, per https://cloud.google.com/compute/docs/load-balancing/network/forwarding-rules

    gcloud compute forwarding-rules list
    

    For a list such as this (scroll to the right for more):

    NAME                              REGION       IP_ADDRESS      IP_PROTOCOL  TARGET
    a07fc7696d8f411e791c442010af0008  us-central1  35.188.102.120  TCP          us-central1/targetPools/a07fc7696d8f411e791c442010af0008
    

    Iteratively:

  2. Copy each item’s NAME listed to build command:

    
    gcloud compute forwarding-rules delete [FORWARDING_RULE_NAME]
    
  3. You’ll be prompted for a region each time. And for a yes.

TODO: automate this!
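A sketch of automating it (assumes every listed rule should be deleted; global forwarding rules, which have no region, would need --global instead):

   gcloud compute forwarding-rules list --format="value(name,region)" | \
   while read -r name region; do
     gcloud compute forwarding-rules delete "$name" --region "$region" --quiet
   done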

Stackdriver for Logging

In 2014, Google acquired Stackdriver from Izzy Azeri and Dan Belcher, who founded the company two years earlier.

Stackdriver is GCP’s SaaS-based tool for logging, monitoring, error reporting, trace, diagnostics that’s integrated across GCP and AWS.

In October 2020, it was renamed to “Google Cloud Operations” after adding advanced observability, debugger, and profiler features.

Trace provides per-URL latency metrics.

Open source agents

Collaborations with PagerDuty, BMC, Splunk, etc.

Integrate with auto-scaling.

Integrations with Source Repository for debugging.

Big Data Services

gcp-bigdata-menu-288x772-11619.png

  • BigQuery - a SaaS data warehouse analytics database that streams data at 100,000 rows per second. Automatic discounts apply for long-term data storage. See Shine Technologies.

    HBase - columnar data store, Pig, RDBMS, indexing, hashing

    Storage costs 2 cents per GB per month. No charge for queries served from cache!

    Competes against Amazon Redshift.

    https://stackoverflow.com/questions/tagged/google-bigquery

  • Pub/Sub - large scale (enterprise) messaging for IoT. Scalable & flexible. Integrates with Dataflow.

  • Dataproc - a managed Hadoop, Spark, MapReduce, Hive service.

    NOTE: Even though Google published the paper on MapReduce in 2004, by about 2006 Google stopped creating new MapReduce programs due to Colossus, Dremel and Flume, externalized as BigQuery and Dataflow.

  • Dataflow - stream analytics & ETL batch processing, with unified and simplified pipelines in Java and Python, using reserved compute instances. Its competitor in AWS is Kinesis.

  • ML Engine (for Machine Learning) -
  • IoT Core
  • Genomics

  • Dataprep
  • Datalab is a Jupyter notebook server using matplotlib or Google Charts for visualization. It provides an interactive tool for large-scale data exploration, transformation, and analysis.

.NET Dev Support

https://www.coursera.org/learn/develop-windows-apps-gcp Develop and Deploy Windows Applications on Google Cloud Platform class on Coursera

https://cloud.google.com/dotnet/ Windows and .NET support on Google Cloud Platform.

We will build a simple ASP.NET app, deploy to Google Compute Engine and take a look at some of the tools and APIs available to .NET developers on Google Cloud Platform.

https://cloud.google.com/sdk/docs/quickstart-windows Google Cloud SDK for Windows (gcloud)

Installed with Cloud SDK for Windows is https://googlecloudplatform.github.io/google-cloud-powershell cmdlets for accessing and manipulating GCP resources

https://googlecloudplatform.github.io/google-cloud-dotnet/ Google Cloud Client Libraries for .NET (new), on NuGet for BigQuery, Datastore, Pub/Sub, Storage, Logging.

https://developers.google.com/api-client/dotnet/ Google API Client Libraries for .NET https://github.com/GoogleCloudPlatform/dotnet-docs-samples

https://cloud.google.com/tools/visual-studio/docs/ available on Visual Studio Gallery. Google Cloud Explorer accesses Compute Engine, Cloud Storage, Cloud SQL

Learning resources

https://codelabs.developers.google.com/

Running Node.js on a Virtual Machine

http://www.roitraining.com/google-cloud-platform-public-schedule/ in the US and UK $599 per day

On Pluralsight, Lynn Langit created several video courses early in 2013/14 when Google Fiber was only available in Kansas City.

Pluralsight redirects to Qwiklabs. It’s best to use a second monitor to display instructions.

https://deis.com/blog/2016/first-kubernetes-cluster-gke/

https://hub.docker.com/r/lucasamorim/gcloud/

https://github.com/campoy/go-web-workshop

http://www.anengineersdiary.com/2017/04/google-cloud-platform-tutorial-series_71.html

https://bootcamps.ine.com/products/google-cloud-architect-exam-bootcamp $1,999 bootcamp

https://www.freecodecamp.org/news/google-cloud-platform-from-zero-to-hero/

https://www.youtube.com/watch?v=jpno8FSqpc8&list=RDCMUC8butISFwT-Wl7EV0hUK0BQ&start_radio=1&rv=jpno8FSqpc8 by Antoni

IaC (Infra as Code) Terraform

https://developer.hashicorp.com/terraform/tutorials/gcp-get-started/google-cloud-platform-build

https://registry.terraform.io/providers/hashicorp/google/latest/docs/guides/getting_started using https://registry.terraform.io/providers/hashicorp/google/latest at https://registry.terraform.io/providers/hashicorp/google/latest/docs described at https://developer.hashicorp.com/terraform/tutorials/gcp-get-started and VIDEO: https://cloud.google.com/docs/terraform

Classes:

  • https://www.cloudskillsboost.google/course_templates/443?utm_source=video&utm_medium=youtube&utm_campaign=youtube-video1 (FREE)
  • https://acloudguru.com/course/deploying-resources-to-gcp-with-terraform (subscription)
  • https://www.udemy.com/course/terraform-gcp/ by Rohit Abraham

Articles:

Google Marketplace

Google makes available pre-configured infrastructure from various 3rd-party vendors:

  • https://www.cloudskillsboost.google/course_sessions/3135279/labs/375921 Google Cloud Fundamentals: Getting Started with Cloud Marketplace deploy a LAMP stack on a Compute Engine instance. The Bitnami LAMP Stack provides a complete web development environment for Linux and phpinfo.php

  • https://cloud.google.com/marketplace/docs/


Qwiklabs

| Quest | Module | Level | Credits | Time |
|---|---|---|---|---|
| G | Creating a Virtual Machine | Introductory | 1 | 30m |
| G | Getting Started with Cloud Shell & gcloud | Introductory | 1 | 40m |
| G | Provision Services with Cloud Launcher | Introductory | 1 | 30m |
| G | Creating a Persistent Disk (Activity Tracking) | Introductory | 1 | 30m |
| G | Creating a Persistent Disk | Introductory | 1 | 30m |
| G | Monitoring Cloud Infrastructure with Stackdriver | Fundamental | 1 | 45m |
| G | Set Up Network and HTTP Load Balancers | Advanced | 1 | 40m |
| K | Introduction to Docker GSP055 | Intro | 1 | 41m |
| K | Kubernetes Engine: Qwik Start GSP100 | Intro | Free | 30m |
| G&K | Hello Node Kubernetes | Advanced | 7 | 60m |
| K | Orchestrating the Cloud with Kubernetes GSP021 | Expert | 9 | 75m |
| K | Managing Deployments Using Kubernetes Engine GSP053 | Advanced | 7 | 60m |
| K | Continuous Delivery with Jenkins in Kubernetes Engine GSP051 | Expert | 7 | 80m |
| - | Running a MongoDB Database in Kubernetes with StatefulSets | Expert | 9 | 50m |
| G&K | Build a Slack Bot with Node.js on Kubernetes | Advanced | 7 | 60m |
| G&K | Helm Package Manager | Advanced | 7 | 50m |

PROTIP: These labs make use of both commands and interactive UI. So code commands in a Bash script file so you can quickly progress through each lab and (more importantly) have a way to use what you learned on the job.

PROTIP: Use different browser programs to switch quickly among them using Command+tab on Macs:

  1. In a Brave browser, Qwiklabs instructions (especially if you’re using a different Google account for Qwiklabs than for Gmail)
  2. In Chrome, open as an Incognito window to click START of a Cloud console.
  3. In Firefox, open this blog page.

PROTIP: The clock starts after you click “Start Lab”. So read through the instructions BEFORE starting.

Google Log Explorer

Google’s Log Explorer analyzes logs and exports them to Splunk, etc.

It retains data access logs for 30 days by default, or up to 3,650 days (10 years).

Admin logs are stored 400 days by default.

For extended retention, export logs to Cloud Storage or BigQuery. Data stored in BigQuery can be examined using SQL queries.
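A sketch of such an export using a log sink to BigQuery (the project ID and dataset are placeholders; the dataset must already exist):

   gcloud logging sinks create audit-sink \
     bigquery.googleapis.com/projects/PROJECT_ID/datasets/audit_logs \
     --log-filter='logName:"cloudaudit.googleapis.com"'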

Custom code can analyze Pub/Sub streaming messages in real-time.

Cloud Audit Logs helps answer the question, “Who did what, where, and when?” Admin activity tracks configuration changes.

Data access tracks calls that read the configuration or metadata of resources and user-driven calls that create, modify, or read user-provided resource data.

System events are non-human Google Cloud administrative actions that change the configuration of resources.

Access Transparency provides logs that capture the actions Google personnel take when accessing your content.

Agent logs use a Google-customized and packaged Fluentd agent installed on AWS or Google Cloud VM to ingest log data from Google Cloud instances.

Network logs provide both network and security operations with in-depth network service telemetry.

VPC Flow Logs records samples of VPC network flow and can be used for network monitoring, forensics, real-time security analysis, and expense optimization.
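A sketch of enabling VPC Flow Logs on an existing subnet (the network and region are assumptions):

   gcloud compute networks subnets update default \
     --region=us-central1 \
     --enable-flow-logs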

Firewall Rules Logging allows you to audit, verify, and analyze the effects of firewall rules.

NAT Gateway logs capture information on NAT network connections and errors.

Service logs record what developers deploy to Google Cloud. For example, when you build a container using Node.js and deploy it to Cloud Run, Standard Out and Standard Error are sent to Cloud Logging for centralized viewing.

Error Reporting counts, analyzes, and aggregates the crashes in your running cloud services. Crashes in most modern languages are “Exceptions,” which aren’t caught and handled by the code itself. Its management interface displays the results with sorting and filtering capabilities. A dedicated view shows the error details: time chart, occurrences, affected user count, first- and last-seen dates, and a cleaned exception stack trace. You can also create alerts to receive notifications on new errors.

Cloud Trace is based on the tools Google uses on its production services. It collects latency data from distributed applications and displays it in the Google Cloud console. Trace can capture traces from applications deployed on App Engine, Compute Engine VMs, and GKE containers.

Performance insights are provided in near-real time, and Trace automatically analyzes all of your application’s traces to generate in-depth latency reports to surface performance degradations. Trace continuously gathers and analyzes trace data to automatically identify recent changes to your application’s performance.

Cloud Profiler uses statistical techniques and extremely low-impact instrumentation that runs across all production application instances to provide a complete CPU and heap picture of an application without slowing it down. With broad platform support that includes Compute Engine VMs, App Engine, and Kubernetes, it allows developers to analyze applications running anywhere, including Google Cloud, other cloud platforms, or on-premises, with support for Java, Go, Python, and Node.js. Cloud Profiler presents the call hierarchy and resource consumption of the relevant function in an interactive flame graph that helps developers understand which paths consume the most resources and the different ways in which their code is actually called.

References

More on cloud

This is one of a series on cloud computing: