
Wilson Mar


The cloud that runs on fast Google Fiber and Big AI


Overview

Here is a hands-on introduction to the Google Cloud Platform (GCP) and to getting certified as a Google Cloud Certified professional.

Concepts are introduced succinctly after you take a small action, followed by brief commentary, with links to more information.

  1. Google Cloud’s marketing home page is at:

    https://cloud.google.com

  2. Documentation:

    https://cloud.google.com/kubernetes-engine

Terraform vs. Pulumi vs. AWS Copilot (AWS CloudFormation) vs. Google Cloud Deployment Manager: you still “describe” your desired state, but with a programming language at your fingertips you can factor out patterns and package them up for easier consumption.

Why Google Cloud?

As with other clouds:

  • “Pay as you go” rather than a significant up-front purchase, which eats time
  • No software to install (and go stale, requiring redo work)
  • Google scale: 27 zones across 9 cloud regions, plus 90 edge cache locations

Google has a fast fiber network connecting its machines, which enables high capacity and speed across the world.

Until late 2018, Google was the only cloud vendor offering a VPC that spans several regions; AWS now offers the same.

See https://cloud.google.com/why-google


Cloud Time with Training

Google’s Qwiklabs includes cloud instance time (around an hour each class). At the end you get a certificate of completion with a graphic to display on your resume.

Quests:

Quest  Module                                                           Level         Credits  Time
G      Creating a Virtual Machine                                       Introductory  1        30m
G      Getting Started with Cloud Shell & gcloud                        Introductory  1        40m
G      Provision Services with Cloud Launcher                           Introductory  1        30m
G      Creating a Persistent Disk (Activity Tracking)                   Introductory  1        30m
G      Creating a Persistent Disk                                       Introductory  1        30m
G      Monitoring Cloud Infrastructure with Stackdriver                 Fundamental   1        45m
G      Set Up Network and HTTP Load Balancers                           Advanced      1        40m
K      Introduction to Docker (GSP055)                                  Intro         1        41m
K      Kubernetes Engine: Qwik Start (GSP100)                           Intro         Free     30m
G&K    Hello Node Kubernetes                                            Advanced      7        60m
K      Orchestrating the Cloud with Kubernetes (GSP021)                 Expert        9        75m
K      Managing Deployments Using Kubernetes Engine (GSP053)            Advanced      7        60m
K      Continuous Delivery with Jenkins in Kubernetes Engine (GSP051)   Expert        7        80m
-      Running a MongoDB Database in Kubernetes with StatefulSets       Expert        9        50m
G&K    Build a Slack Bot with Node.js on Kubernetes                     Advanced      7        60m
G&K    Helm Package Manager                                             Advanced      7        50m

PROTIP: These labs make use of both commands and interactive UI. So code commands in a Bash script file so you can quickly progress through each lab and (more importantly) have a way to use what you learned on the job.
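For example, each lab's commands can accumulate in a Bash script like this sketch (the project ID and zone are placeholder assumptions; the gcloud lines are commented so the script is safe to run anywhere, then uncommented inside Cloud Shell):

```shell
#!/usr/bin/env bash
# Sketch of a lab replay script. Values below are placeholders.
set -euo pipefail
PROJECT_ID="my-lab-project"   # assumption: substitute the lab's project ID
ZONE="us-central1-f"          # assumption: substitute the lab's zone
echo "Lab setup for ${PROJECT_ID} in ${ZONE}"
# Uncomment inside an authenticated Cloud Shell session:
# gcloud config set project "${PROJECT_ID}"
# gcloud config set compute/zone "${ZONE}"
# ... the rest of the lab's commands go here, so you can replay them on the job.
```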

PROTIP: Use different browser programs to switch quickly among them using Command+tab on Macs:

  1. In a Brave browser, open the Qwiklabs instructions (especially if you’re using a different Google account for Qwiklabs than for Gmail)
  2. In Chrome, open an Incognito window to click START for the Cloud console.
  3. In Firefox, open this blog page.

PROTIP: The clock starts after you click “Start Lab”. So read through the instructions BEFORE starting.

Free $300 trial account

In US regions, new accounts get $300 of credit for 12 months.

There are limitations to Google’s no charge low level usage:

  • No more than 8 cores at once across all instances
  • No more than 100 GB of solid state disk (SSD) space
  • Max. 2TB (2,000 GB) total persistent standard disk space

PROTIP: Google bills in per-second increments (with a one-minute minimum charge), unlike AWS, which charges by the hour for Windows instances.

  1. Read the fine print in the FAQ to decide what Google permits:

    https://cloud.google.com/free/docs/frequently-asked-questions

  2. Read Google’s Pricing Philosophy:

    https://cloud.google.com/pricing/philosophy

    Gmail accounts

  3. NOTE: Create several Gmail accounts, each with a different identity (name, birthdate, credit card). You would likely need to use the same name as on the credit card, and the same phone number, because additional credit cards and phone numbers are expensive.

    PROTIP: Write down all the details (including the date when you opened the account) in case you have to recover the password.

    PROTIP: Use a different browser so you can flip quickly between identities.

    • Use Chrome browser for Gmail account1 with an Amex card for project1
    • Use Firefox browser for Gmail account2 with a Visa card for project2
    • Use Brave browser for Gmail account3 with a Mastercard for project3
    • Use Safari browser for Gmail account4 with a Discover card for project4
  4. In the appropriate internet browser, apply for a Gmail address, then use the same combination in the free trial registration page and Console:

    https://cloud.google.com/free

    Alternately, https://codelabs.developers.google.com/codelabs/cpb100-free-trial/#0

    https://console.developers.google.com/freetrial

  5. Click the Try It Free button and complete the registration. Click “Agree and continue”, then “Start my free trial”.

  6. With the appropriate account and browser, configure at console.cloud.google.com

    Keeping track of multiple accounts is an exhausting way to live, in my opinion.

  7. PROTIP: Bookmark the project URL.

    PROTIP: Google remembers your last project and its region, and gives them to you even if you do not specify them in the URL.

    Configure Limits

  8. CAUTION: Your bill can suddenly jump to thousands of dollars a day, with no explanation. Configure budgets and quotas to limit spending.
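One way to do that is a billing budget with alert thresholds. Here is a sketch with a placeholder billing account ID (find yours with `gcloud billing accounts list`); the gcloud command is commented out so the sketch is safe to run anywhere:

```shell
# Assumption: replace with your billing account ID from `gcloud billing accounts list`.
BILLING_ACCOUNT="000000-000000-000000"
BUDGET_AMOUNT="100USD"
# Run inside an authenticated Cloud Shell to alert at 50% and 90% of $100/month
# (older SDKs may need the beta component: `gcloud beta billing budgets ...`):
# gcloud billing budgets create \
#     --billing-account="${BILLING_ACCOUNT}" \
#     --display-name="monthly-cap" \
#     --budget-amount="${BUDGET_AMOUNT}" \
#     --threshold-rule=percent=0.5 \
#     --threshold-rule=percent=0.9
echo "Budget of ${BUDGET_AMOUNT} planned for ${BILLING_ACCOUNT}"
```

Note that a budget only sends alerts; it does not stop spending. Quotas are what actually cap usage.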

Google Certified Professional (GCP) Certification Exams

See https://cloud.google.com/certification

https://support.google.com/cloud-certification/answer/9907748?hl=en

As of December 2020, Google had these certifications:

  • Associate Cloud Engineer (fundamentals)

  • Google Certified Professional - Cloud Architect
  • Google Certified Professional - Data Engineer (for big data)

  • Professional Cloud Developer
  • Professional Cloud Data Engineer
  • Professional Cloud Network Engineer
  • Professional Cloud Security Engineer
  • Professional Cloud DevOps Engineer

  • Professional Collaboration Engineer
  • Professional Machine Learning Engineer

  • G Suite Administrator (Gmail, Google Drive, etc.)
  • Apigee certification
  • Associate Android Developer
  • Mobile Web Specialist

Each $200 exam takes 2 hours, in person at a Kryterion Test Center. PROTIP: Testing centers go in and out of business, or have limitations such as COVID restrictions, so call ahead (for example, (602) 659-4660 in Phoenix, AZ) to verify they’re open and to confirm parking instructions. Copy the address and parking instructions to your Calendar entry.

Codelabs for hands-on practice: https://codelabs.developers.google.com/?cat=Cloud

Kryterion’s online-proctoring (OLP) solution is not affected by COVID-19 and may be a suitable testing alternative to taking exams at a test center.

Register for your exam through your Test Sponsor’s Webassessor portal. There you get a Test Taker Authorization Code needed to launch the test.

Google offers classes on Coursera and at https://cloud.google.com/training

Cloud Architect

Cloud Architect – design, build and manage solutions on Google Cloud Platform.

PROTIP: The exam references the case studies listed at https://cloud.google.com/certification/guides/professional-cloudarchitect/ so get to know them beforehand to avoid wasting time during the exam.

The above are covered by Google’s “Preparing for the Google Cloud Professional Cloud Architect Exam” on Coursera ($49 if you want the quizzes and certificate).

More about this certification:

https://www.coursera.org/specializations/gcp-architecture

Code          Course                                                                   Level  Credits  Time
KC 1, GCSP 1  Google Cloud Platform Fundamentals: Core Infrastructure                 Intro  1        41m
GCSP 2        Networking in Google Cloud: Defining and Implementing Networks          Intro  1        41m
GCSP 3        Networking in Google Cloud: Hybrid Connectivity and Network Management  Intro  1        41m
GCSP 4        Managing Security in Google Cloud Platform                              Intro  1        41m
GCSP 5        Security Best Practices in Google Cloud (Securing Compute Engine, Application Security, Securing Cloud Data, Securing Kubernetes, Encrypting disks with CSEK)  Intro  1  41m
GCSP 6        Mitigating Security Vulnerabilities on Google Cloud Platform (DDoS with botnets, mitigations, partner products)  Intro  1  41m
GCSP 7        Hands-On Labs in Google Cloud for Security Engineers                    Intro  1        41m
KC 2          Essential Cloud Infrastructure: Foundation                              Intro  1        41m
KC 3          Essential Cloud Infrastructure: Core Services                           Intro  1        41m
KC 4          Elastic Cloud Infrastructure: Scaling and Automation                    Intro  1        41m
KC 5          Elastic Cloud Infrastructure: Containers and Services                   Intro  1        41m
KC 6          Reliable Cloud Infrastructure: Design and Process                       Intro  1        41m

Data Engineer

Data Engineer certification Guide

https://cloud.google.com/training/courses/data-engineering is used within the Data Engineering on Google Cloud Platform Specialization on Coursera. It is a series of five one-week classes ($49 per month after 7 days). These have videos that sync with transcript text, but no hints to quiz answers or live help.

  1. Building Resilient Streaming Systems on Google Cloud Platform $99 USD

  2. Leveraging Unstructured Data with Cloud Dataproc on Google Cloud Platform $59 USD

  3. Google Cloud Platform Big Data and Machine Learning Fundamentals $59 USD by Google Professional Services Consultant Valliappa Lakshmanan (Lak) at https://medium.com/@lakshmanok, who previously worked on NOAA weather predictions.

    https://codelabs.developers.google.com/cpb100

  4. Serverless Data Analysis with Google BigQuery and Cloud Dataflow $99 USD

  5. Serverless Machine Learning with Tensorflow on Google Cloud Platform $99 USD by Valliappa Lakshmanan uses the Tensorflow Cloud ML service to learn a map of New York City by analyzing taxi cab locations.

    • Vision (image sentiment analysis)
    • Speech (recognizes 110 languages; dictation)
    • Translate
    • Personalization

DevOps Engineer

Coursera’s video Architecting with Google Kubernetes Engine Specialization consists of:

  1. Google Cloud Platform Fundamentals: Core Infrastructure

  2. Architecting with Google Kubernetes Engine: Foundations by Brian Rice (Curriculum Lead) provides hands-on Qwiklabs (Working with Cloud Build; Deploying to Google Kubernetes Engine), quizzes (Containers and Container Images; The Kubernetes Control Plane (master node); Kubernetes Object Management), and Migrate for Google Anthos.

  3. Architecting with Google Kubernetes Engine: Workloads

  4. Architecting with Google Kubernetes Engine: Production

Coursera’s video courses toward the Professional Cloud DevOps Engineer Professional Certificate:

  1. Google Cloud Platform Fundamentals: Core Infrastructure (1 “week”)

    This course introduces you to important concepts and terminology for working with Google Cloud Platform (GCP). You learn about, and compare, many of the computing and storage services available in Google Cloud Platform, including Google App Engine, Google Compute Engine, Google Kubernetes Engine, Google Cloud Storage, Google Cloud SQL, and BigQuery. You learn about important resource and policy management tools, such as the Google Cloud Resource Manager hierarchy and Google Cloud Identity and Access Management. Hands-on labs give you foundational skills for working with GCP.

    Note: Google services are currently unavailable in China.

  2. Developing a Google SRE Culture

    In many IT organizations, incentives are not aligned between developers, who strive for agility, and operators, who focus on stability. Site reliability engineering, or SRE, is how Google aligns incentives between development and operations and does mission-critical production support. Adoption of SRE cultural and technical practices can help improve collaboration between the business and IT. This course introduces key practices of Google SRE and the important role IT and business leaders play in the success of SRE organizational adoption.

    Primary audience: IT leaders and business leaders who are interested in embracing SRE philosophy. Roles include, but are not limited to CTO, IT director/manager, engineering VP/director/manager. Secondary audience: Other product and IT roles such as operations managers or engineers, software engineers, service managers, or product managers may also find this content useful as an introduction to SRE.

  3. Reliable Google Cloud Infrastructure: Design and Process by Stephanie Wong (Developer Advocate) and Philipp Mair (Course Developer)

    This course equips students to build highly reliable and efficient solutions on Google Cloud using proven design patterns. It is a continuation of the Architecting with Google Compute Engine or Architecting with Google Kubernetes Engine courses and assumes hands-on experience with the technologies covered in either of those courses. Through a combination of presentations, design activities, and hands-on labs, participants learn to define and balance business and technical requirements to design Google Cloud deployments that are highly reliable, highly available, secure, and cost-effective.

    This course teaches participants the following skills:

    • Apply a tool set of questions, techniques, and design considerations
    • Define application requirements and express them objectively as KPIs, SLOs, and SLIs
    • Decompose application requirements to find the right microservice boundaries
    • Leverage Google Cloud developer tools to set up modern, automated deployment pipelines
    • Choose the appropriate Cloud Storage services based on application requirements
    • Architect cloud and hybrid networks
    • Implement reliable, scalable, resilient applications balancing key performance metrics with cost
    • Choose the right Google Cloud deployment services for your applications
    • Secure cloud applications, data, and infrastructure
    • Monitor service level objectives and costs using Google Cloud tools

    Prerequisites: completion of prior courses in the …


  4. Logging, Monitoring, and Observability in Google Cloud

    This course teaches techniques for monitoring, troubleshooting, and improving infrastructure and application performance in Google Cloud. Guided by the principles of Site Reliability Engineering (SRE), and using a combination of presentations, demos, hands-on labs, and real-world case studies, attendees gain experience with full-stack monitoring, real-time log management, and analysis, debugging code in production, tracing application performance bottlenecks, and profiling CPU and memory usage.



GCP Architecture


Billing is at the project level: a project is a logical grouping of resources, associated with billing.

Labels are key-value pairs of resource metadata, used to organize billing.
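For example, labels can be attached to a running instance so that billing exports can be grouped by them. The instance name, labels, and zone here are assumptions, and the `run` function echoes the command instead of executing it, so the sketch is safe to run anywhere (remove the echo to run it for real inside Cloud Shell):

```shell
run() { echo "+ $*"; }   # echoes each command instead of executing it

# Attach key-value labels to an existing instance (names are hypothetical):
run gcloud compute instances add-labels my-vm \
    --labels=env=dev,team=platform --zone=us-central1-f
```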

New Project

Service accounts are automatically created for each project:

project_number@developer.gserviceaccount.com
project_id@developer.gserviceaccount.com

Project ID is unique among all other projects at Google and cannot be changed.

Permissions

  1. Read Google’s IT Security PDF

IAM Policies: The two types of IAM roles on GCP are primitive and curated/pre-defined.

  • Primitive roles (Viewer, Editor, Owner, Billing Administrator).

Permissions are inherited and additive (they flow in one direction): parent permissions don’t override child permissions, and permissions can’t be denied at lower levels once they’ve been granted at upper levels.

There are no deny rules.

Kubernetes RBAC (Role-Based Access Control) extends IAM, at the cluster or namespace level, to define Roles – which operation verbs (get, list, watch, create, describe) can be executed on named objects (resources such as a pod, deployment, service, or persistent volume). It’s common practice to allocate get, list, and watch together (as a read-only unit).

Roles (such as compute.instanceAdmin) are a collection of permissions to give access to a given resource, in the form:

service.resource.verb

For example: compute.instances.delete
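To illustrate that service.resource.verb shape, this sketch splits a permission string with shell parameter expansion; the commented gcloud line (the member and role are example values) shows how a role bundling such permissions would be granted:

```shell
# Split a permission string into its service.resource.verb parts:
PERMISSION="compute.instances.delete"
SERVICE="${PERMISSION%%.*}"                             # -> compute
VERB="${PERMISSION##*.}"                                # -> delete
RESOURCE="${PERMISSION#*.}"; RESOURCE="${RESOURCE%.*}"  # -> instances
echo "service=${SERVICE} resource=${RESOURCE} verb=${VERB}"

# Grant a role containing such permissions (run inside Cloud Shell;
# the member is a hypothetical example):
# gcloud projects add-iam-policy-binding "${DEVSHELL_PROJECT_ID}" \
#     --member="user:alice@example.com" \
#     --role="roles/compute.instanceAdmin"
```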

API groups are specified the same way as when creating a Role.

Get and post actions for non-resource endpoints can only be defined by ClusterRoles, which, as the name implies, are defined at the cluster level.

RoleBindings connect Roles to subjects (users/processes) who make requests to the Kubernetes API.

Resources can be cluster scope such as nodes and storage classes, or they can be namespace resources such as pods and deployments.

Example Role rules:

  • A basic rule specifies get, list, and watch operations on all pod resources.
  • Adding the log sub-resource to the resources list grants access to pods/log.
  • Specifying a resource name limits the scope to specific instances of a resource, here with the verbs patch and update.
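Such rules can also be created imperatively with kubectl. In this sketch the `run` function echoes each command rather than executing it, so it is safe to run outside a cluster; the role name, user, and `dev` namespace are assumptions:

```shell
run() { echo "+ $*"; }   # echoes instead of executing; remove to run inside Cloud Shell

# A read-only Role over pods and their log sub-resource in namespace "dev":
run kubectl create role pod-reader \
    --verb=get,list,watch --resource=pods,pods/log --namespace=dev
# Bind the Role to a subject who makes requests to the Kubernetes API:
run kubectl create rolebinding jane-reads-pods \
    --role=pod-reader --user=jane@example.com --namespace=dev
```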

Service accounts

Unlike with an end-user account, no human authentication is involved from one service to another when a service account is associated with a VM or app.

Google-managed service accounts are of the format:
[PROJECT_NUMBER]@cloudservices.gserviceaccount.com

User-managed service accounts are of the format:
[PROJECT_NUMBER]-compute@developer.gserviceaccount.com

Service accounts have more stringent permissions and logging than user accounts.
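Service accounts you create yourself get an email of the form NAME@PROJECT_ID.iam.gserviceaccount.com. A sketch, where the project ID and account name are assumptions, and the commented gcloud line would create the account inside Cloud Shell:

```shell
PROJECT_ID="my-project-123"   # assumption: your project ID
SA_NAME="build-agent"         # assumption: any name you choose
# Service accounts you create get this email format:
SA_EMAIL="${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com"
echo "${SA_EMAIL}"
# Inside an authenticated Cloud Shell:
# gcloud iam service-accounts create "${SA_NAME}" --display-name="Build agent"
```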

GCDS (Google Cloud Directory Sync) syncs user identities from on-premises.


Cloud Shell Online

The Cloud Shell provides command line access on a web browser, with nothing to install.

Sessions have a 1 hour timeout.

Language support for Java, Go, Python, Node, PHP, Ruby.

Not meant for high computation use.

Shell CLI programs

Google provides these command-line tools:

  1. gcloud CLI installed with google-cloud-sdk.

  2. gsutil to access Cloud Storage

  3. bq for Big Query tasks

  4. kubectl for Kubernetes
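A quick tour of all four tools; the `run` wrapper below just echoes each command so the sketch is safe to execute anywhere (remove the echo to run for real; the bucket name is an assumption):

```shell
run() { echo "+ $*"; }   # echoes instead of executing

run gcloud config list          # SDK / project configuration
run gsutil ls gs://my-bucket    # Cloud Storage (hypothetical bucket name)
run bq ls                       # BigQuery datasets in the default project
run kubectl get nodes           # nodes of the current Kubernetes cluster
```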

There is a Google Cloud SDK for Windows (gcloud) for your programming pleasure.

BLOG: Graphical user interface (GUI) for Google Compute Engine instance

Commands

  1. Click the icon in the Google Cloud Platform Console:

    gcp-cloud-shell-menu-568x166-9041

  2. Click “START CLOUD SHELL” at the bottom of this pop-up:

    gcloud-shell-entry-748x511

    When the CLI appears online:

  3. See that your present working directory is /home/ followed by your account name:

    pwd
    
  4. See the folder with your account name:

    echo ${HOME}
    
  5. Just your account name:

    echo ${USER}
    
  6. Read the welcome file:

    nano README-cloudshell.txt
    

    Your 5GB home directory will persist across sessions, but the VM is ephemeral and will be reset approximately 20 minutes after your session ends. No system-wide change will persist beyond that.

  7. Type “gcloud help” to get help on using Cloud SDK. For more examples, visit https://cloud.google.com/shell/docs/quickstart and https://cloud.google.com/shell/docs/examples

  8. Type “cloudshell help” to get help on using the “cloudshell” utility. Common functionality is aliased to short commands in your shell, for example, you can type “dl <filename>” at Bash prompt to download a file.

    Type “cloudshell aliases” to see these commands.

  9. Type “help” to see this message any time. Type “builtin help” to see Bash interpreter help.

Other resources:

GCP Console / Dashboard

https://console.cloud.google.com/home/dashboard
displays panes for your project from among the list obtained by clicking the “hamburger” menu icon at the upper left corner. The major sections of the product menu are:

  • IDENTITY & SECURITY (Identity, Access, Security)
  • COMPUTE (App Engine, Compute Engine, Kubernetes Engine, Cloud Functions, Cloud Run, VMware Engine)
  • STORAGE (Filestore, Storage, Data Transfer)
  • DATABASES (Bigtable, Datastore, Database Migration, Filestore, Memorystore, Spanner, SQL)
  • NETWORKING (VPC network, Network services, Hybrid Connectivity, Network Service Tiers, Network Security, Network Intelligence)
  • OPERATIONS (Monitoring, Debugger, Error Reporting, Logging, Profiler, Trace) STACKDRIVER
  • TOOLS (Cloud Build, Cloud Tasks, Container Registry, Artifact Registry, Cloud Scheduler, Deployment Manager, API Gateway, Endpoints, Source Repositories, Workflows, Private Catalog)
  • BIG DATA (Composer, Dataproc, Pub/Sub, Dataflow, IoT Core, BigQuery, Looker, Data Catalog, Data Fusion, Financial Services, Healthcare, Life Sciences, Dataprep)
  • ARTIFICIAL INTELLIGENCE (AI Platform (Unified), AI Platform, Data Labeling, Document AI, Natural Language, Recommendations AI, Tables, Talent Solution, Translation, Vision, Video Intelligence)
  • OTHER GOOGLE SOLUTIONS
  • PARTNER SOLUTIONS (Redis Enterprise, Apache Kafka, DataStax Astra, Elasticsearch Service, MongoDB Atlas, Cloud Volumes)

Text Editor

  1. Click the pencil icon for the built-in text editor.

  2. Edit text using nano or vim built-in.

  3. PROTIP: Boost mode to run Docker with more memory.

Local gcloud CLI install

Get the CLI to run locally on your laptop:

  1. On MacOSX use Homebrew:

    brew install --cask google-cloud-sdk
    

Alternately:

  1. In https://cloud.google.com/sdk/downloads
  2. Click the link for Mac OS X (x86_64) like “google-cloud-sdk-173.0.0-darwin-x86_64.tar.gz” to your Downloads folder.
  3. Double-click the file to unzip it (from 13.9 MB to a 100.6 MB folder). If you’re not seeing a folder in Finder, use another unzip utility.
  4. Move the folder to your home folder.

Either way, edit environment variables on Mac:

  1. Edit your ~/.bash_profile to add the path to that folder in the $PATH variable.

    export PATH="$PATH:$HOME/.google-cloud-sdk/bin"
  2. PROTIP: Add an alias to get to the folder quickly:

    alias gcs='cd ~/.google-cloud-sdk'
  3. Use the alias to navigate to the folder:

    gcs


  4. Install libraries (drop the --help argument to actually install):

    On Linux or Mac OS X:

    ./install.sh --help

    On Windows:

    .\install.bat --help
  5. Initialize the SDK:

    ./bin/gcloud init

gcloud CLI commands

Regardless of whether the CLI is online or local:

  1. Get syntax of commands

    gcloud help

  2. Be aware of the full set of parameters possible for GCP tasks at
    https://cloud.google.com/sdk/gcloud/reference

    The general format of commands:

    gcloud [GROUP] [SUBGROUP] [COMMAND] [FLAGS] [ARGUMENTS]

    Cloud Shell has common Linux command tools and authentication pre-installed.

  3. Run df to see that /dev/sdb1 has 5,028,480 KB ≈ 5 GB of persistent storage:

    Filesystem     1K-blocks     Used Available Use% Mounted on
    none            25669948 16520376   7822572  68% /
    tmpfs             872656        0    872656   0% /dev
    tmpfs             872656        0    872656   0% /sys/fs/cgroup
    /dev/sdb1        5028480    10332   4739672   1% /home
    /dev/sda1       25669948 16520376   7822572  68% /etc/hosts
    shm                65536        0     65536   0% /dev/shm
    
  4. Confirm the operating system version:

    uname -a

    The response shows a Debian system running Linux kernel 3.16:

     Linux cs-6000-devshell-vm-5260d9c4-474a-47de-a143-ea05b695c057-5a 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux
     

    To generate SSH configuration entries for your instances:

    gcloud compute config-ssh

    Projects List

  5. Get list of Project IDs:

    gcloud projects list

    Example:

    PROJECT_ID              NAME                       PROJECT_NUMBER
    what-182518             CICD                       608556368368
    
  6. List project name (aka “Friendly Name”) such as “cp100”.

    gcloud config list project 
    

    A sample response:

    [core]
    project = what-182518
    Your active configuration is: [cloudshell-20786]
    
  7. Print just the project name (suppressing other warnings/errors):

    gcloud config get-value project 2> /dev/null 
    

    Alternately:

    gcloud config list --format 'value(core.project)' 2>/dev/null
    
  8. PROTIP: The shell variable $DEVSHELL_PROJECT_ID defined by Google can be used to refer to the project ID of the project used to start the Cloud Shell session.

    echo $DEVSHELL_PROJECT_ID

  9. PROTIP: Instead of manually constructing commands, use environment variable:

    gcloud config set project ${DEVSHELL_PROJECT_ID}
    

    Alternately, if you want your own:

    export PROJECT_ID=$(gcloud config get-value project)

  10. PROTIP: Get information about a project using the project environment variable:

    gcloud compute project-info describe --project ${DEVSHELL_PROJECT_ID}
    

    Project metadata includes quotas:

    quotas:
    - limit: 1000.0
      metric: SNAPSHOTS
      usage: 1.0
    - limit: 5.0
      metric: NETWORKS
      usage: 2.0
    - limit: 100.0
      metric: FIREWALLS
      usage: 13.0
    - limit: 100.0
      metric: IMAGES
      usage: 1.0
    - limit: 1.0
      metric: STATIC_ADDRESSES
      usage: 1.0
    - limit: 200.0
      metric: ROUTES
      usage: 31.0
    - limit: 15.0
      metric: FORWARDING_RULES
      usage: 2.0
    - limit: 50.0
      metric: TARGET_POOLS
      usage: 0.0
    - limit: 50.0
      metric: HEALTH_CHECKS
      usage: 2.0
    - limit: 8.0
      metric: IN_USE_ADDRESSES
      usage: 2.0
    - limit: 50.0
      metric: TARGET_INSTANCES
      usage: 0.0
    - limit: 10.0
      metric: TARGET_HTTP_PROXIES
      usage: 1.0
    - limit: 10.0
      metric: URL_MAPS
      usage: 1.0
    - limit: 5.0
      metric: BACKEND_SERVICES
      usage: 2.0
    - limit: 100.0
      metric: INSTANCE_TEMPLATES
      usage: 1.0
    - limit: 5.0
      metric: TARGET_VPN_GATEWAYS
      usage: 0.0
    - limit: 10.0
      metric: VPN_TUNNELS
      usage: 0.0
    - limit: 3.0
      metric: BACKEND_BUCKETS
      usage: 0.0
    - limit: 10.0
      metric: ROUTERS
      usage: 0.0
    - limit: 10.0
      metric: TARGET_SSL_PROXIES
      usage: 0.0
    - limit: 10.0
      metric: TARGET_HTTPS_PROXIES
      usage: 1.0
    - limit: 10.0
      metric: SSL_CERTIFICATES
      usage: 1.0
    - limit: 100.0
      metric: SUBNETWORKS
      usage: 26.0
    - limit: 10.0
      metric: TARGET_TCP_PROXIES
      usage: 0.0
    - limit: 24.0
      metric: CPUS_ALL_REGIONS
      usage: 3.0
    - limit: 10.0
      metric: SECURITY_POLICIES
      usage: 0.0
    - limit: 1000.0
      metric: SECURITY_POLICY_RULES
      usage: 0.0
    - limit: 6.0
      metric: INTERCONNECTS
      usage: 0.0
  11. List configuration information for the currently active project:

    gcloud config list

    Sample response:

    [component_manager]
    disable_update_check = True
    [compute]
    gce_metadata_read_timeout_sec = 5
    [core]
    account = wilsonmar@gmail.com
    check_gce_metadata = False
    disable_usage_reporting = False
    project = what-182518
    [metrics]
    environment = devshell
    Your active configuration is: [cloudshell-20786]
    

    Account Authorization Permissions

  12. List:

    gcloud auth list

    If you have not logged in:

    No credentialed accounts.
    
    

To log in, run: gcloud auth login ACCOUNT

  1. List projects to which your account has access:

    gcloud projects list

    Instances List

  2. Zones are listed as metadata for each GCE instance:

    gcloud compute instances list

    Sample response:

    NAME          ZONE           MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS
    hygieia-1     us-central1-f  n1-standard-1               10.128.0.3   35.193.186.181  TERMINATED
    

    PROTIP: Define which zone and region your team should use.

    Zones

  3. Get the list of zone codes:

    gcloud compute zones list

    Sample response:

    NAME                    REGION                STATUS  NEXT_MAINTENANCE  TURNDOWN_DATE
    asia-east1-c            asia-east1            UP
    asia-east1-b            asia-east1            UP
    asia-east1-a            asia-east1            UP
    asia-northeast1-a       asia-northeast1       UP
    asia-northeast1-c       asia-northeast1       UP
    asia-northeast1-b       asia-northeast1       UP
    asia-south1-c           asia-south1           UP
    us-central1-c           us-central1           UP
    asia-south1-a           asia-south1           UP
    asia-south1-b           asia-south1           UP
    asia-southeast1-a       asia-southeast1       UP
    asia-southeast1-b       asia-southeast1       UP
    australia-southeast1-c  australia-southeast1  UP
    australia-southeast1-b  australia-southeast1  UP
    australia-southeast1-a  australia-southeast1  UP
    europe-west1-c          europe-west1          UP
    europe-west1-b          europe-west1          UP
    europe-west1-d          europe-west1          UP
    europe-west2-b          europe-west2          UP
    europe-west2-a          europe-west2          UP
    europe-west2-c          europe-west2          UP
    europe-west3-b          europe-west3          UP
    europe-west3-a          europe-west3          UP
    europe-west3-c          europe-west3          UP
    southamerica-east1-c    southamerica-east1    UP
    southamerica-east1-b    southamerica-east1    UP
    southamerica-east1-a    southamerica-east1    UP
    us-central1-a           us-central1           UP
    us-central1-f           us-central1           UP
    us-central1-c           us-central1           UP
    us-central1-b           us-central1           UP
    us-east1-b              us-east1              UP
    us-east1-d              us-east1              UP
    us-east1-c              us-east1              UP
    us-east4-c              us-east4              UP
    us-east4-a              us-east4              UP
    us-east4-b              us-east4              UP
    us-west1-c              us-west1              UP
    us-west1-b              us-west1              UP
    us-west1-a              us-west1              UP
    

    REMEMBER: Region is a higher-order (more encompassing concept) than Zone.

  4. Define environment variables to hold zone and region:

    
    export CLOUDSDK_COMPUTE_ZONE=us-central1-f
    export CLOUDSDK_COMPUTE_REGION=us-central1 
    echo $CLOUDSDK_COMPUTE_ZONE
    echo $CLOUDSDK_COMPUTE_REGION
    

    TODO: Get the default region and zone into environment variables.

    curl "http://metadata.google.internal/computeMetadata/v1/instance/zone" -H "Metadata-Flavor: Google"
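The metadata server returns a full path rather than a bare zone name, so a little shell parsing extracts the zone and region. The ZONE_PATH value below is a hardcoded sample for illustration; on a real VM you would fetch it with the curl command shown in the comment:

```shell
# On a real VM, fetch the path from the metadata server:
#   ZONE_PATH=$(curl -s "http://metadata.google.internal/computeMetadata/v1/instance/zone" \
#       -H "Metadata-Flavor: Google")
ZONE_PATH="projects/608556368368/zones/us-central1-f"  # sample value for illustration
ZONE="${ZONE_PATH##*/}"   # strip the path prefix -> us-central1-f
REGION="${ZONE%-*}"       # drop the zone letter  -> us-central1
echo "${ZONE} ${REGION}"
```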

  5. Set the zone (for example, us-central1-f defined above):

    gcloud config set compute/zone ${CLOUDSDK_COMPUTE_ZONE}
    

    See https://cloud.google.com/compute/docs/storing-retrieving-metadata

  6. Switch to the Compute Engine Metadata UI for the project:

    https://console.cloud.google.com/compute/metadata

    • google-compute-default-zone
    • google-compute-default-region

    https://github.com/wilsonmar/Dockerfiles/blob/master/gcp-set-zone.sh

Create sample Node server

  1. Download a file from GitHub:

    
    curl -O https://raw.githubusercontent.com/wilsonmar/Dockerfiles/master/NodeJs/server.js
    

    -O (uppercase O) saves the file under its remote name (server.js); -o (lowercase o) saves to a filename you specify.

    See http://www.thegeekstuff.com/2012/04/curl-examples/

    The sample Node program displays just text “Hello World!” (no fancy HTML/CSS).

  2. Invoke Node to start server:

    node server.js
  3. View the program’s browser output by clicking the Web Preview button, then “Preview on port 8080”:

    gcp-web-preview-396x236-5615

    The URL:
    https://8080-dot-3050285-dot-devshell.appspot.com/?authuser=0

  4. Press control+C to stop the Node server.

Deploy Python

  1. Replace boilerplate “your-bucket-name” with your own project ID:

    sed -i s/your-bucket-name/$DEVSHELL_PROJECT_ID/ config.py
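    Before editing the real config.py, the substitution can be rehearsed on a throwaway copy (the project ID below is hypothetical; in Cloud Shell, $DEVSHELL_PROJECT_ID is set automatically):

```shell
# Rehearse the sed substitution on a scratch file before touching config.py
DEVSHELL_PROJECT_ID="my-sample-project"                      # hypothetical value
printf 'PROJECT_ID = "your-bucket-name"\n' > /tmp/config.py
sed -i "s/your-bucket-name/$DEVSHELL_PROJECT_ID/" /tmp/config.py
cat /tmp/config.py    # PROJECT_ID = "my-sample-project"
```

    Note: -i edits in place with GNU sed (as in Cloud Shell); BSD/macOS sed would need -i ''.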

  2. View the list of dependencies needed by your custom Python program:

    cat requirements.txt

  3. Download the dependencies:

    pip install -r requirements.txt -t lib

  4. Deploy the app assembled in the current folder (--quiet suppresses the confirmation prompt):

    gcloud app deploy --quiet

  5. Exit Cloud Shell:

    exit

PowerShell Cloud Tools

https://cloud.google.com/powershell/

https://cloud.google.com/tools/powershell/docs/

  1. In a PowerShell window opened as Administrator:

    Install-Module GoogleCloud

    The response:

    Untrusted repository
    You are installing the modules from an untrusted repository. If you trust this 
    repository, change its InstallationPolicy value by running the Set-PSRepository
     cmdlet. Are you sure you want to install the modules from 'PSGallery'?
    [Y] Yes  [A] Yes to All  [N] No  [L] No to All  [S] Suspend  [?] Help 
    (default is "N"):
    
  2. Type A (Yes to All) and press Enter.

  3. Get all buckets for the current project, for a specific project, or a specific bucket:

       $currentProjBuckets = Get-GcsBucket
    $specificProjBuckets = Get-GcsBucket -Project my-project-1
    $bucket = Get-GcsBucket -Name my-bucket-name
    
  4. Navigate to Google Storage (like a drive):

    cd gs:\

  5. Show the available buckets (like directories):

    ls

  6. Create a new bucket:

    mkdir my-new-bucket

  7. Get help on a cmdlet:

    Get-Help New-GcsBucket

Source Code Repository

https://console.cloud.google.com/code/develop/repo is
Google’s (Source) Code Repository Console.

served from: source.developers.google.com

See docs at https://cloud.google.com/source-repositories

Cloud Source Repositories provides full-featured private Git repositories hosted on GCP, free for up to 5 project-users per billing account, with up to 50GB free storage and 50GB free egress per month.

Mirror from GitHub

NOTE

  1. PROTIP: On GitHub.com, login to the account you want to use (in the same browser).
  2. PROTIP: Highlight and copy the name of the repository you want to mirror on Google.
  3. Create another browser tab (so they share the credentials established in the steps above).
  4. https://console.cloud.google.com/code is the Google Code Console.
  5. Click “Get Started” if it appears.
  6. PROTIP: For repository name, paste or type the same name as the repo you want to hold from GitHub.

    BLAH: Repository names can only contain alphanumeric characters, underscores or dashes.

  7. Click CREATE to confirm name.

    gcp-code-github-925x460

  8. Click on “Automatically Mirror from GitHub”.
  9. Select GitHub or Bitbucket in the “Choose a Repository” list.
  10. Click Grant to the repo to be linked (if it appears). Then type your GitHub password.
  11. Click the green “Authorize Google-Cloud-Development” button.
  12. Choose the repository. Click the consent box. CONNECT.

    You should get an email “[GitHub] A new public key was added” about the Google Connected Repository.

  13. Commit a change to GitHub (push from your local copy or interactively on GitHub.com).
  14. Click the clock icon on Google Code Console to toggle commit history.
  15. Click the SHA hash to view changes.
  16. Click on the changed file path to see its text comparing two versions. Scroll down.
  17. Click “View source at this commit” to make a “git checkout” of the whole folder.
  18. Click the “Source code” menu for the default list of folders and files.
  19. Select the master branch.

    To disconnect a hosted repository:

  20. Click Repositories on the left menu.
  21. Click the settings icon (with the three vertical dots to the far right) on the same line of the repo you want disconnected.
  22. Confirm Disconnect.

    Create new repo in CLI

  23. Be at the project you want.
  24. Create a repository.
  25. Click the CLI icon.
  26. Click the wrench to adjust background color, etc.

  27. Create a file using the source browser.

  28. Make it a Git repository (a Git client is built-in):

    gcloud init

  29. Configure the credential helper, for example:

    git config credential.helper gcloud.sh

  30. Define the remote:

    git remote add google https://source.developers.google.com/p/cp100-1094/r/helloworld

  31. Push all branches to the remote:

    git push --all google
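    The add-remote-and-push flow can be rehearsed locally against a bare repository standing in for the hosted one (all paths below are scratch stand-ins; in practice, substitute the real source.developers.google.com URL):

```shell
# Rehearse the remote + push flow with a local bare repo standing in for
# https://source.developers.google.com/p/PROJECT/r/REPO (hypothetical)
WORKDIR=$(mktemp -d) && cd "$WORKDIR"
git init -q --bare remote.git          # stand-in for the hosted repo
git init -q work && cd work
git config user.email "you@example.com"
git config user.name  "You"
echo hello > README.md
git add . && git commit -q -m "initial"
git remote add google ../remote.git    # the real repo URL goes here
git push -q --all google
git ls-remote google | grep -q refs/heads/ && echo "pushed"
```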

  32. To transfer files using the gsutil CLI:

    gsutil cp *.txt gs://cp-100-demo

GCR Container Registry

https://console.cloud.google.com/gcr - Google’s Container Registry console is used to control what is in
Google’s Container Registry (GCR). It is a service apart from GKE. It stores secure, private Docker images for deployments.

Like GitHub, it has build triggers.


Deployment Manager

Deployment Manager creates resources from declarative templates.

Cloud Launcher uses .yaml templates describing the environment, which makes for repeatability.

Endpoints (APIs)

Google Cloud Endpoints lets you manage, control access to, and monitor custom APIs, which can be kept private.

REST API

  1. Enable the API on Console.

  2. For more on working with Google API Explorer to test RESTful API’s

    https://developers.google.com/apis-explorer

    PROTIP: Although APIs are in alphabetical order, some services are named starting with “Cloud” or “Google” or “Google Cloud”. Press Ctrl+F to search.

SQL Servers on GCE: (2012, 2014, 2016)

  • SQL Server Standard
  • SQL Server Web
  • SQL Server Enterprise

API Explorer site: GCEssentials_ConsoleTour

Authentication using OAuth2 (JWT), JSON.

Google NETWORKING

Google creates all instances with a private (internal) IP address such as 10.142.3.2.

One public IP (such as 35.185.115.31) can optionally be assigned to a resource. The IP can be ephemeral (drawn from a pool) or static (reserved). Unassigned static IPs cost $.01 per hour (24 cents per day).
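At that rate, the cost of leaving a static IP unassigned adds up predictably (the rate is from the text above; check current pricing):

```shell
# Unassigned static IP at $0.01/hour: daily and 30-day cost
awk 'BEGIN {
  rate = 0.01                                   # USD per hour (from text)
  printf "per day: $%.2f\nper 30 days: $%.2f\n", rate*24, rate*24*30
}'
# per day: $0.24
# per 30 days: $7.20
```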

One VPC is created by default for each project. Each VPC has implied allow egress and implied deny ingress (firewall rules) configured.

VPCs are global resources that span all regions.

Each Subnet IP range (RFC 1918 CIDR block) is defined across several zones within a particular region. Subnet ranges cannot overlap in a region. Auto Mode automatically adds a subnet to each new region.
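Whether two CIDR blocks overlap can be checked by comparing their network addresses under the shorter (less specific) of the two masks. A pure-bash sketch (the example subnets are arbitrary):

```shell
#!/bin/bash
# Succeeds (exit 0) if two CIDR blocks overlap
ip_to_int() {
  local IFS=. ; set -- $1
  echo $(( ($1<<24) + ($2<<16) + ($3<<8) + $4 ))
}
overlaps() {  # usage: overlaps 10.0.0.0/16 10.0.128.0/20
  local i1 i2 m1 m2
  i1=$(ip_to_int "${1%/*}"); m1=$(( 0xFFFFFFFF << (32 - ${1#*/}) & 0xFFFFFFFF ))
  i2=$(ip_to_int "${2%/*}"); m2=$(( 0xFFFFFFFF << (32 - ${2#*/}) & 0xFFFFFFFF ))
  # m1 & m2 is the shorter prefix mask; equal networks under it means overlap
  [ $(( i1 & m1 & m2 )) -eq $(( i2 & m1 & m2 )) ]
}
overlaps 10.128.0.0/20 10.132.0.0/20 && echo overlap || echo "no overlap"  # no overlap
overlaps 10.0.0.0/16   10.0.128.0/20 && echo overlap || echo "no overlap"  # overlap
```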

A VPC can be shared across several projects in same organization. Subnets in the same VPCs communicate via internal IPs.
Subnets in different VPCs communicate via external IPs, for which there is a charge.

Custom Mode lets you specify subnets explicitly, for use with VPC Peering and to connect via VPN using IPsec to encrypt traffic. VPC Peering lets multiple VPCs share resources, even across projects NOT in the same organization.

VPN capacity is 1.5 - 3 Gbps per tunnel.

Google Cloud Router supports dynamic routing between GCP and corporate networks using BGP (Border Gateway Protocol).

Google Cloud Interconnect options can have an SLA and use internal IP addresses:

  • VPN (Cloud VPN software)
  • Partner Interconnect - through an external provider, 50 Mbps to 10 Gbps
  • Dedicated Interconnect - 10 Gbps per link to a colocation facility
  • CDN Interconnect - CDN providers link with Google’s edge network

Peering options use public IP addresses (no SLA), so they can link multiple organizations:

  • Direct Peering - connect business directly to Google
  • Carrier Peering - Enterprise-grade connections provided by carrier service providers

HTTP Load Balancing ensures only healthy instances handle traffic across regions.

  • See https://www.ianlewis.org/en/google-cloud-platform-http-load-balancers-explaine
  • https://medium.com/google-cloud/capacity-management-with-load-balancing-32bd22a716a7 Capacity Management with Load Balancing

Allow external traffic to k8s

For security, k8s pods are by default accessible only at their internal IP within the cluster.

To make a container accessible from outside the Kubernetes virtual network, expose the pod as a Kubernetes service. Within a Cloud Shell:

kubectl expose deployment hello-node --type="LoadBalancer"
   

The --type="LoadBalancer" flag specifies that we’ll be using the load balancer provided by the underlying infrastructure (in this case the Compute Engine load balancer) to load-balance traffic across all pods managed by the deployment.

Sample response:

service "hello-node" exposed
   

The Kubernetes master creates the load balancer and related Compute Engine forwarding rules, target pools, and firewall rules to make the service fully accessible from outside of Google Cloud Platform.

  1. To find the publicly-accessible IP address of the service, ask kubectl to list all the cluster services:

    kubectl get services
    

    Sample response listing internal CLUSTER-IP and EXTERNAL-IP:

    NAME         CLUSTER-IP     EXTERNAL-IP      PORT(S)    AGE
    hello-node   10.3.250.149   104.154.90.147   8080/TCP   1m
    kubernetes   10.3.240.1     <none>           443/TCP    5m
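    The EXTERNAL-IP column can be pulled out of that listing with awk, e.g. to build the service URL. A sketch using the sample output above (on a live cluster, pipe `kubectl get services` in directly):

```shell
# Extract hello-node's EXTERNAL-IP from `kubectl get services` output
# (sample response captured above stands in for the live command)
sample='NAME         CLUSTER-IP     EXTERNAL-IP      PORT(S)    AGE
hello-node   10.3.250.149   104.154.90.147   8080/TCP   1m
kubernetes   10.3.240.1     <none>           443/TCP    5m'
EXTERNAL_IP=$(printf '%s\n' "$sample" | awk '$1 == "hello-node" { print $3 }')
echo "http://$EXTERNAL_IP:8080"    # http://104.154.90.147:8080
```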
    

Configure Cloud Armor Load Balancer IPs

LAB

Google Cloud Platform HTTP(S) load balancing is implemented at the edge of Google’s network in Google’s points of presence (POP) around the world. User traffic directed to an HTTP(S) load balancer enters the POP closest to the user and is then load balanced over Google’s global network to the closest backend that has sufficient capacity available.

Configure an HTTP Load Balancer with global backends. Then stress test the Load Balancer and blocklist the stress-test IP with Cloud Armor, which prevents malicious users or traffic from consuming resources or entering your virtual private cloud (VPC) networks. Cloud Armor blocks and allows access to your HTTP(S) load balancer at the edge of Google Cloud, as close as possible to the user and to the source of malicious traffic.

  1. In Cloud Shell, create a firewall rule to allow port 80 traffic:

    gcloud compute firewall-rules create \
    www-firewall-network-lb --target-tags network-lb-tag \
    --allow tcp:80
    

    Click Authorize. Result:

    NAME                     NETWORK  DIRECTION  PRIORITY  ALLOW   DENY  DISABLED
    www-firewall-network-lb  default  INGRESS    1000      tcp:80        False
    
  2. Create an instance template named web-template, specifying a startup script that installs Apache and creates a home page displaying the zone the server runs in:

    gcloud compute instance-templates create web-template \
     --machine-type=n1-standard-1 \
     --image-family=debian-9 \
     --image-project=debian-cloud \
     --tags=network-lb-tag \
     --metadata=startup-script='#! /bin/bash
    apt-get update
    apt-get install apache2 -y
    service apache2 restart
    ZONE=$(curl "http://metadata.google.internal/computeMetadata/v1/instance/zone" -H "Metadata-Flavor: Google")
    echo "<!doctype html><html><body><h1>Web server</h1><h2>This server is in zone: ZONE_HERE</h2></body></html>" | tee /var/www/html/index.html
    sed -i "s|ZONE_HERE|$ZONE|" /var/www/html/index.html'
    
  3. Create a basic HTTP health check:

    gcloud compute http-health-checks create basic-http-check

  4. Create a managed instance group of 3 instances. Instance groups use an instance template to create a group of identical instances, so that if an instance in the group stops, crashes, or is deleted, the managed instance group automatically recreates it.

    gcloud compute instance-groups managed create web-group \
    --template web-template --size 3 --zones \
    us-central1-a,us-central1-b,us-central1-c,us-central1-f
    
  5. Create the load balancing service:

    gcloud compute instance-groups managed set-named-ports \
    web-group --named-ports http:80 --region us-central1
    gcloud compute backend-services create web-backend \
    --global \
    --port-name=http \
    --protocol HTTP \
    --http-health-checks basic-http-check \
    --enable-logging
    gcloud compute backend-services add-backend web-backend \
    --instance-group web-group \
    --global \
    --instance-group-region us-central1
    gcloud compute url-maps create web-lb \
    --default-service web-backend
    gcloud compute target-http-proxies create web-lb-proxy \
    --url-map web-lb
    gcloud compute forwarding-rules create web-rule \
    --global \
    --target-http-proxy web-lb-proxy \
    --ports 80
    
  6. It takes several minutes for the instances to register and the load balancer to be ready. Check in Navigation menu > Network services > Load balancing, or run:

    gcloud compute backend-services get-health web-backend --global
    

    kind: compute#backendServiceGroupHealth

  7. Retrieve the load balancer IP address:

    gcloud compute forwarding-rules describe web-rule --global | grep IPAddress

    IPAddress: 34.120.166.236

  8. Access the load balancer:

    curl -m1 34.120.166.236

    The output should look like this (do not copy; this is example output):

    <!doctype html><html><body><h1>Web server</h1>
    <h2>This server is in zone: projects/921381138888/zones/us-central1-a</h2></body></html>

  9. In a new Cloud Shell tab, keep requesting that IP address:

    while true; do curl -m1 34.120.166.236; done
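    Since each backend stamps its zone into the page, the responses from that loop reveal how traffic is spread across zones. A sketch of tallying them (sample responses are inlined so the parsing can be tried anywhere; on the real load balancer, pipe the curl loop's output in):

```shell
# Tally responses per zone to watch the load balancer spread traffic
# (sample lines stand in for output of: while true; do curl -m1 <LB_IP>; done)
printf '%s\n' \
  '<h2>This server is in zone: projects/921381138888/zones/us-central1-a</h2>' \
  '<h2>This server is in zone: projects/921381138888/zones/us-central1-b</h2>' \
  '<h2>This server is in zone: projects/921381138888/zones/us-central1-a</h2>' |
  grep -o 'zones/[a-z0-9-]*' | sort | uniq -c
```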

    Create a VM to test access to the load balancer

  10. In Navigation menu > Compute Engine, Click CREATE INSTANCE.
  11. Name the instance access-test and set the Region to australia-southeast1 (Sydney).
  12. Leave everything else at the default and click Create.
  13. Once launched, click the SSH button to connect to the instance

    TODO: Commands instead of GUI for above.

  14. Access the load balancer:

    curl -m1 35.244.71.166

    The output should look like this:

    <!doctype html><html><body><h1>Web server</h1>
    <h2>This server is in zone: projects/921381138888/zones/us-central1-a</h2></body></html>

    Create a Blocklist security policy with Cloud Armor

    To block access from the access-test VM (simulating a malicious client), you need its external IP address. To identify the external IP of a client hitting your HTTP Load Balancer, you could examine traffic captured by VPC Flow Logs in BigQuery and look for a high volume of incoming requests.

  15. Go to Navigation menu > Compute engine and copy the External IP of the access-test VM.
  16. Go to Navigation menu > Network Security > Cloud Armor.
  17. Click Create policy.
  18. Provide a name of blocklist-access-test and set the Default rule action to Allow.
  19. Click Next step. Click Add rule.

    TODO: Commands instead of GUI for above.

  20. Set the following Property values:

    Condition/match: Enter the IP of the access-test VM

    Action: Deny

    Deny status: 404 (Not Found)

    Priority: 1000

  21. Click Done.
  22. Click Next step.
  23. Click Add Target.

    For Type, select Load balancer backend service.

    For Target, select web-backend.

  24. Click Done.
  25. Click Create policy.

    Alternatively, you could set the default rule to Deny and only allowlist traffic from authorized users/IP addresses.

  26. Wait for the policy to be created before moving to the next step.

  27. To verify the security policy, return to the SSH session of the access-test VM.

  28. Run the curl command again on the instance to access the load balancer:

    curl -m1 35.244.71.166

    The response should be a 404. It might take a couple of minutes for the security policy to take effect.

    View Cloud Armor logs

  29. In the Console, navigate to Navigation menu > Network Security > Cloud Armor.
  30. Click blocklist-access-test.
  31. Click Logs.
  32. Click View policy logs and go to the latest logs. By default, the GCE Backend Service logs are shown.
  33. Select Cloud HTTP Load Balancer. Next, you will filter the view of the log to show only 404 errors.

  34. Remove the contents of the Query builder box, replace them with 404, and press Run Query to search for 404 errors.
  35. Locate a log with a 404 and expand the log entry.

  36. Expand httpRequest.

    The request should be from the access-test VM IP address.


Google COMPUTE Cloud Services

gcp-compute-735x301

Considerations     | Compute Engine                           | Kubernetes Engine | App Engine Standard         | App Engine Flexible | Cloud Run            | Cloud Functions
Users manage:      | VMs, like on-prem (one container per VM) | K8s yaml          | No-ops                      | Managed             | No-ops               | No-ops
Service model:     | IaaS                                     | Hybrid            | PaaS                        | PaaS                | Stateless Serverless | Logic
Language support:  | Any                                      | Any               | Java, Node, Python, Go, PHP | Any + Ruby, .NET    | Any (in a container) | Node.js
Primary use case:  | General computing                        | Container-based   | Web & Mobile apps           | Containers          | Docker container     | Event-driven functions

gcp-compute-usage *

https://cloudplatform.googleblog.com/2017/07/choosing-the-right-compute-option-in-GCP-a-decision-tree.html

Google’s engines:

  • Compute Engine (GCE) is a managed environment for deploying virtual machines (VMs), providing full control of VMs for Linux and Windows Server. The API controls addresses, autoscalers, backends, disks, firewalls, global forwarding, health checks, images, instances, projects, regions, snapshots, SSL, subnetworks, targets, VPNs, zones, etc.

  • Kubernetes Engine (GKE) is a managed environment for deploying containerized applications, for container clustering

  • App Engine (GAE) is a managed platform for deploying and hosting full app code at scale. Similar to AWS Elastic Beanstalk, GAE runs full Go, PHP, Java, Python, Node.js, .NET C#, Ruby, etc. apps coded with login forms and authentication logic. GAE Standard runs in a proprietary sandbox, which starts faster than GAE Flexible running in Docker containers. Being proprietary, GAE Standard cannot access Compute Engine resources nor run 3rd-party binaries. GAE Standard is good for spiky traffic, since sandboxed instances start quickly and scale down to zero.

  • Cloud Run enables stateless containers, based on KNative. Being in a container means any language can be used. Each container listens for requests or events. It’s like AWS Fargate.

  • Google Cloud Functions (like AWS Lambda; also packaged as Cloud Functions for Firebase) is a managed serverless platform for deploying event-driven functions. It runs single-purpose microservices written in JavaScript, executed in Node.js when triggered by events. Good for stateless computation that reacts to external events.

Google Compute Engine

GCE offers the most control but also the most work (operational overhead).

Preemptible instances are cheaper but can be taken away at any time, like Amazon’s Spot Instances.

Google provides load balancers, VPNs, firewalls.

Use GCE where you need to select the size of disks, memory, CPU types

  • use GPUs (Graphic Processing Units)
  • custom OS kernels
  • specifically licensed software
  • protocols beyond HTTP/S
  • orchestration of multiple containers

GCE is an IaaS (Infrastructure as a Service) offering of instances, NOT automatically using Kubernetes like GKE. Use it to migrate on-premises solutions to the cloud.

https://cloud.google.com/compute/docs/

https://stackoverflow.com/questions/tagged/google-compute-engine

https://cloud.google.com/compute/docs/machine-types such as n1-standard-1.

GCE SonarQube

There are several ways to instantiate a Sonar server.

GCE SonarQube BitNami

One alternative is to use Bitnami

  1. Browser at https://google.bitnami.com/vms/new?image_id=4FUcoGA
  2. Click Account for https://google.bitnami.com/services
  3. Add Project
  4. Set up a Bitnami Vault password.
  5. PROTIP: Use 1Password to generate a strong password and store it.
  6. Agree to really open sharing with Bitnami:

    • View and manage your Google Compute Engine resources
    • View and manage your data across Google Cloud Platform services
    • Manage your data in Google Cloud Storage

    CAUTION: This may be over-sharing for some.

  7. Click “Select an existing Project…” to select one in the list that appears. Continue.
  8. Click “Enable Deployment Manager (DM) API” to open another browser tab at https://console.developers.google.com/project/attiopinfosys/apiui/apiview/deploymentmanager
  9. If the blue “DISABLE” appears, then it’s enabled.
  10. Return to the Bitnami tab to click “CONTINUE”.
  11. Click BROWSE for the Library at https://google.bitnami.com/

    The above is done one time to setup your account.

  12. Type “SonarQube” in the search field and click SEARCH.
  13. Click on the icon that appears to LAUNCH.
  14. Click on the name to change it.
  15. NOTE “Debian 8” as the OS cannot be changed.
  16. Click “SHOW” to get the password into your Clipboard.
  17. Wait for the orange “REBOOT / SHUTDOWN / DELETE” to appear at the bottom of the screen.

    Look:

  18. Click “LAUNCH SSH CONSOLE”.
  19. Click to confirm the SSH pop-up.
  20. Type lsb_release -a for information about the operating system:

    No LSB modules are available.
    Distributor ID: Debian
    Description:    Debian GNU/Linux 8.9 (jessie)
    Release:        8.9
    Codename:       jessie
    

    PROTIP: This is not the very latest operating system version because it takes time to integrate.

  21. Type pwd to see the home folder path (named for your Google user name).
  22. Type ls -al for information about files:

    apps -> /opt/bitnami/apps
    .bash_logout
    .bashrc
    .first_login_bitnami
    htdocs -> /opt/bitnami/apache2/htdocs
    .profile
    .ssh
    stack -> /opt/bitnami
    
  23. Type exit to switch back to the browser tab.
  24. Click the blue IP address (such as 35.202.3.232) for a SonarQube tab to appear.

  25. Type “Admin” for user. Click the Password field and press Ctrl+V to paste from Clipboard.
  26. Click “Log in” for the Welcome screen.

    TODO: Assign other users.

  27. TODO: Associate the IP with a host name.

    SonarQube app admin log in

  28. At SonarQube server landing page (such as http://23.236.48.147)

    You may need to add it as a security exception.

  29. Type a name of your choosing, then click Generate.

  30. Click the language (JS).
  31. Click the OS (Linux, Windows, Mac).
  32. Highlight the sonar-scanner command to copy into your Clipboard.

  33. Click Download for https://docs.sonarqube.org/display/SCAN/Analyzing+with+SonarQube+Scanner

    sonarqube-clientinstall-386x292-27346

    On a Windows machine:
    sonar-scanner-cli-3.0.3.778-windows.zip | 63.1 MB

    On a Mac:
    sonar-scanner-cli-3.0.3.778-macosx.zip | 53.9 MB

  34. Generate the token:

    sonarqube-gen-token-601x129-14487

  35. Click Finish to see the server page such as at http://35.202.3.232/projects

    Do a scan

  36. On your Mac, unzip to folder “sonar-scanner-3.0.3.778-macosx”.

    Notice it has its own Java version in the jre folder.

  37. Open a Terminal and navigate to the bin folder containing sonar-scanner.
  38. Move it to a folder in your PATH.
  39. Create or edit shell script file from the Bitnami screen:

    ./sonar-scanner \
      -Dsonar.projectKey=sonarqube-1-vm \
      -Dsonar.sources=. \
      -Dsonar.host.url=http://23.236.48.147 \
      -Dsonar.login=b0b030cd2d2cbcc664f7c708d3f136340fc4c064
    

    NOTE: Your login token will be different than this example.

    https://github.com/wilsonmar/git-utilities/…/sonar1.sh

  40. Replace the . with the folder path such as

    -Dsonar.sources=/Users/wilsonmar/gits/ng/angular4-docker-example

    Do this instead of editing /conf/sonar-scanner.properties to change default http://localhost:9000

  41. chmod 555 sonar.sh
  42. Run the sonar script.

  43. Wait for the downloading.
  44. Look for a line such as:

    INFO: ANALYSIS SUCCESSFUL, you can browse http://35.202.3.232/dashboard/index/Angular-35.202.3.232
    
  45. Copy the URL and paste it in a browser.

  46. PROTIP: The example has no Version, Tags, etc. that a “production” environment would use.

GCE SonarQube via Docker

  1. In the GCP web console, navigate to the screen where you can create an instance.

    https://console.cloud.google.com/compute/instances

  2. Click Create (a new instance).
  3. Change the instance name from instance-1 to sonarqube-1 (numbered in case you’ll have more than one).
  4. Set the zone to your closest geographical location (us-west1-a).
  5. Set machine type to f1-micro.
  6. Click Boot Disk to select Ubuntu 16.04 LTS instead of default Debian GNU/Linux 9 (stretch).

    PROTIP: GCE does not provide the lighter http://alpinelinux.org/

  7. Type a larger Size (GB) than the default 10 GB.

    WARNING: You have selected a disk size of under [200GB]. This may result in poor I/O performance. For more information, see: https://developers.google.com/compute/docs/disks#performance.
    
  8. Set Firewall rules to allow ingress and egress external access to ports 9000 and 9092 (the ports published by the sonarqube container below).

  9. Allow HTTP & HTTPS traffic.
  10. Click “Management, disks, networking, SSH keys”.
  11. In the Startup script field, paste script you’ve tested interactively:

    # Install Docker: 
    curl -fsSL https://get.docker.com/ | sh
    sudo docker pull sonarqube
    sudo docker run -d --name sonarqube -p 9000:9000 -p 9092:9092 sonarqube
    
  12. Click “command line” link for a pop-up of the equivalent command.
  13. Copy and paste it in a text editor to save the command for troubleshooting later.

  14. Click Create. This cold boot takes time:

    gce-startup-time-640x326

    Time to execute startup scripts is the main source of variation in cold-boot performance.

  15. Click SSH to SSH into instance via the web console, using your Google credentials.
  16. In the new window, pwd to see your account home folder.
  17. To see instance console history:

    cat /var/log/syslog

    Manual startup setup

    https://cloud.google.com/solutions/mysql-remote-access

  18. If there is a UI, highlight and copy the external IP address (such as https://35.203.158.223/) and paste it into the address bar of a browser.

  19. Add the port number to the address.

    BLAH TODO: Port for UI?

    TODO: Take a VM snapshot.

    https://cloud.google.com/solutions/prep-container-engine-for-prod

    Down the instance

  20. Remove image containers and volumes

  21. When done, close SSH windows.
  22. If you gave out an IP address, notify recipients about its imminent deletion.
  23. In the Google Console, click on the three dots to delete the instance.

    Colt McAnlis (@duhroach), Developer Advocate explains Google Cloud performance (enthusiastically) at https://goo.gl/RGsQlF

https://www.youtube.com/watch?v=ewHxl9A0VuI&index=2&list=PLIivdWyY5sqK5zce0-fd1Vam7oPY-s_8X

Windows

https://github.com/MicrosoftDocs/Virtualization-Documentation

On Windows, output from start-up scripts is at C:\Program Files\Google\Compute Engine\sysprep\startup_script.ps1

Kubernetes Engine

gce-console-menu-244x241-11754

Until Nov 14, 2017, GKE stood for “Google Container Engine”. The “K” was adopted because GKE is powered by Kubernetes, Google’s container orchestration manager, which provides compute services using Google Compute Engine (GCE).

  1. “Kubernetes” is in the URL to the GKE home page:

    https://console.cloud.google.com/kubernetes

  2. Click “Create Cluster”.
  3. PROTIP: Rename generated “cluster-1” to contain the zone.
  4. Select zone where your others are.
  5. Note the default is Container-Optimized OS (based on Chromium OS) and 3 minion nodes in the cluster, which does not include the master.

    Workload capacity is defined by the number of Compute Engine worker nodes.

    The cluster of nodes is controlled by a K8s master.

  6. PROTIP: Attach a permanent disk for persistence.
  7. Click Create. Wait for the green checkmark to appear.
  8. Connect to the cluster: click the cluster name, then click CONNECT to connect using Cloud Shell (the CLI).

  9. Create a cluster called “bootcamp”:

    gcloud container clusters create bootcamp --scopes "https://www.googleapis.com/auth/projecthosting,storage-rw"

   gcloud container clusters get-credentials cluster-1 \
      --zone us-central1-f \
      --project ${DEVSHELL_PROJECT_ID}
   

The response:

Fetching cluster endpoint and auth data.
kubeconfig entry generated for cluster-1.
   
  1. Invoke the command:

    
    kubectl get nodes
    

    If you get the following message, credentials weren’t configured; re-run the get-credentials command above:

    The connection to the server localhost:8080 was refused - did you specify the right host or port?
    

    Sample valid response:

    NAME                                       STATUS    ROLES     AGE       VERSION
    gke-cluster-1-default-pool-8a05cb05-701j   Ready     <none>    11m       v1.7.8-gke.0
    gke-cluster-1-default-pool-8a05cb05-k4l3   Ready     <none>    11m       v1.7.8-gke.0
    gke-cluster-1-default-pool-8a05cb05-w4fm   Ready     <none>    11m       v1.7.8-gke.0
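    A quick way to confirm all nodes came up is to count the Ready rows (the sample output above is inlined here; on a live cluster, pipe `kubectl get nodes` in directly):

```shell
# Count Ready nodes from `kubectl get nodes` output
sample='NAME                                       STATUS    ROLES     AGE       VERSION
gke-cluster-1-default-pool-8a05cb05-701j   Ready     <none>    11m       v1.7.8-gke.0
gke-cluster-1-default-pool-8a05cb05-k4l3   Ready     <none>    11m       v1.7.8-gke.0
gke-cluster-1-default-pool-8a05cb05-w4fm   Ready     <none>    11m       v1.7.8-gke.0'
READY=$(printf '%s\n' "$sample" | awk '$2 == "Ready" { n++ } END { print n+0 }')
echo "$READY nodes Ready"    # 3 nodes Ready
```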
    
  2. List clusters (widen the window to see all columns):

    gcloud container clusters list
    

    Sample response:

    NAME       ZONE           MASTER_VERSION  MASTER_IP       MACHINE_TYPE   NODE_VERSION  NUM_NODES  STATUS
    cluster-1  us-central1-f  1.7.8-gke.0     162.222.177.56  n1-standard-1  1.7.8-gke.0   3          RUNNING
    

    If no clusters were created, no response is returned.

  3. Highlight the Endpoint IP address, copy, and paste to construct a browser URL such as:

    https://162.222.177.56/ui

    BLAH: User "system:anonymous" cannot get path "/": No policy matched. Unknown user "system:anonymous".

  4. In the Console, click Show Credentials.
  5. Highlight and copy the password.

  6. Start a local proxy:

    kubectl proxy
    

    The response:

    Starting to serve on 127.0.0.1:8001

    WARNING: You are no longer able to issue commands while the proxy runs.


  1. Create new pod named “hello-node”:

    kubectl run hello-node \
     --image=gcr.io/${DEVSHELL_PROJECT_ID}/hello-node:v1 \
     --port=8080
    

    Sample response:

    deployment "hello-node" created
  2. View the pod just created:

    kubectl get pods
    

    Sample response:

    NAME                         READY     STATUS    RESTARTS   AGE
    hello-node-714049816-ztzrb   1/1       Running   0          6m
    
  3. List deployments:

    kubectl get deployments
    

    Sample response:

    NAME         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    hello-node   1         1         1            1           2m
    
  4. Troubleshoot:

    kubectl get events

    kubectl get services

  5. Get logs:

    kubectl logs pod-name

  6. Other commands:

    kubectl cluster-info

    kubectl config view

Kubernetes Dashboard

Kubernetes graphical dashboard (optional)

  1. Configure access to the Kubernetes cluster dashboard:

    gcloud container clusters get-credentials hello-world \
     --zone us-central1-f --project ${DEVSHELL_PROJECT_ID}
    

    Then start a proxy:

    kubectl proxy --port 8086
    
  2. Use the Cloud Shell Web preview feature to view a URL such as:

    https://8081-dot-3103388-dot-devshell.appspot.com/ui

  3. Click the “Connect” button for the cluster to monitor.

    See http://kubernetes.io/docs/user-guide/ui/


  1. Begin in “APIs & Services” because Services provide a single point of access (load balancer IP address and port) to specific pods.
  2. Click ENABLE…
  3. Search for Container Engine API and click it.
  4. In Cloud Shell: gcloud compute zones list

    Create container cluster

  5. Select Zone
  6. Set “Size” (vCPUs) from 3 to 2 – the number of nodes in the cluster.

    Nodes are the primary resource that runs services on Google Container Engine.

  7. Click More to expand.
  8. Add a Label (up to 64 per resource):

    Examples: env:prod/test, owner:, contact:, team:marketing, component:backend, state:inuse.

    The size of boot disk, memory, and storage requirements can be adjusted later.

  9. Instead of clicking “Create”, click the “command” link to see the equivalent gcloud CLI command in the pop-up.

    gcloud beta container --project "mindful-marking-178415" clusters create "cluster-1" \
     --zone "us-central1-a" --username="admin" --cluster-version "1.7.5-gke.1" \
     --machine-type "n1-standard-1" --image-type "COS" --disk-size "100" \
     --scopes "https://www.googleapis.com/auth/compute","https://www.googleapis.com/auth/devstorage.read_only","https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring.write","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly","https://www.googleapis.com/auth/trace.append" \
     --num-nodes "2" --network "default" --no-enable-cloud-logging --no-enable-cloud-monitoring \
     --subnetwork "default" --enable-legacy-authorization
    

    PROTIP: Machine-types are listed and described at https://cloud.google.com/compute/docs/machine-types

    Alternately,

    gcloud container clusters create bookshelf \
      --scopes "https://www.googleapis.com/auth/userinfo.email","cloud-platform" \
      --num-nodes 2
    

    The response sample (widen window to see it all):

    Creating cluster cluster-1...done.
    Created [https://container.googleapis.com/v1/projects/mindful-marking-178415/zones/us-central1-a/clusters/cluster-1].
    kubeconfig entry generated for cluster-1.
    NAME       ZONE           MASTER_VERSION  MASTER_IP      MACHINE_TYPE   NODE_VERSION  NUM_NODES  STATUS
    cluster-1  us-central1-a  1.7.5-gke.1     35.184.10.233  n1-standard-1  1.7.5         2          RUNNING
    
  10. Push the image to the Container Registry:

    gcloud docker -- push gcr.io/$DEVSHELL_PROJECT_ID/bookshelf

  11. Generate a kubeconfig entry with credentials for the cluster:

    gcloud container clusters get-credentials bookshelf

  12. Use the kubectl command line tool.

    kubectl create -f bookshelf-frontend.yaml

  13. Check status of pods

    kubectl get pods

  14. Retrieve IP address:

    kubectl get services bookshelf-frontend

    Destroy cluster

    It may seem a bit premature at this point, but since Google charges by the minute, it’s better you know how to do this earlier than later. Return to this later if you don’t want to continue.

  15. Using the key information from the previous command:

    gcloud container clusters delete cluster-1 --zone us-central1-a

    2). View cloned source code for changes

  16. Use a text editor (vim or nano) to define a .yml file to define what is in pods.

  17. Build Docker

    docker build -t gcr.io/${DEVSHELL_PROJECT_ID}/hello-node:v1 .

    Sample response:

    v1: digest: sha256:6d7be8013acc422779d3de762e8094a3a2fb9db51adae4b8f34042939af259d8 size: 2002
    ...
    Successfully tagged gcr.io/cicd-182518/hello-node:v1
    
  18. Run:

    docker run -d -p 8080:8080 gcr.io/${DEVSHELL_PROJECT_ID}/hello-node:v1
    

    No news is good news in the response.

  19. Use Web Preview on port 8080, as specified above.

  20. List running Docker containers:

    docker ps
    CONTAINER ID        IMAGE                              COMMAND                  CREATED              STATUS              PORTS                    NAMES
    c938f3b42443        gcr.io/cicd-182518/hello-node:v1   "/bin/sh -c 'node ..."   About a minute ago   Up About a minute   0.0.0.0:8080->8080/tcp   cocky_kilby
    
  21. Stop the container by using the ID provided in the results above:

    docker stop c938f3b42443

    The response is the CONTAINER_ID.

    https://cloud.google.com/sdk/docs/scripting-gcloud

  22. Run the image:

    docker run -d -p 8080:8080 gcr.io/${DEVSHELL_PROJECT_ID}/hello-node:v1
    

    The response is a hash of the instance.

  23. Push the image to the gcr.io repository:

    gcloud docker -- push gcr.io/${DEVSHELL_PROJECT_ID}/hello-node:v1
    

    v1: digest: sha256:98b5c4746feb7ea1c5deec44e6e61dfbaf553dab9e5df87480a6598730c6f973 size: 10025


gcloud config set container/cluster ...
   

3). Cloud Shell instance - Remove code placeholders

4). Cloud Shell instance - package app into a Docker container

5). Cloud Shell instance - Upload the image to Container Registry

6). Deploy app to cluster

See https://codelabs.developers.google.com/codelabs/cp100-container-engine/#0

Coursera

Google App Engine (GAE)

GAE is a PaaS (Platform as a Service) offering where Google manages application infrastructure (Jetty 8, Servlet 3.1, .NET Core, Node.js) that responds to HTTP requests.

Google Cloud Endpoints provide scaling, HA, DoS protection, TLS 1.2 SSL certs for HTTPS.

The first 26 GB of traffic each month is free.

Develop server-side code in Java, Python, Go, PHP.

Customizable 3rd party binaries are supported with SSH access on the GAE Flexible environment, which also enables writes to local disk.

https://cloud.google.com/appengine/docs?hl=en_US&_ga=2.237246272.-1771618146.1506638659

https://stackoverflow.com/questions/tagged/google-app-engine

Google Cloud Functions

Here, single-purpose functions are coded in JavaScript and executed in Node.js when triggered by events, such as a file upload.

Google provides a “Serverless” environment for building and connecting cloud services on a web browser.

Google Firebase

Its API handles HTTP requests from client-side mobile devices.

Realtime database, crashlytics, perf. mgmt., messaging.


Databases

For example, to authorize a specific network to connect to a Cloud SQL instance:

gcloud sql instances patch mysql \
     --authorized-networks "203.0.113.20/32"
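As a sketch of where that patch command fits in a Cloud SQL instance's lifecycle (the instance name "mysql" matches the example; the tier, region, and GCLOUD dry-run override are assumptions, not from the original):

```shell
#!/usr/bin/env bash
# Hedged sketch, not from the original article. Set GCLOUD=echo for a dry
# run that only prints the commands instead of touching a billed project.
set -euo pipefail

sql_demo() {
  local gcloud_cmd="${GCLOUD:-gcloud}"
  # Create the instance, authorize a client network, then open a client:
  "$gcloud_cmd" sql instances create mysql --tier=db-n1-standard-1 --region=us-central1
  "$gcloud_cmd" sql instances patch mysql --authorized-networks "203.0.113.20/32"
  "$gcloud_cmd" sql connect mysql --user=root
}
```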
   

Google Cloud Storage (GCS) Buckets

Using gcloud on a project (Cloud Storage scales to any data size and is cheap, but has no support for ACID properties):

  1. Create a bucket in location ASIA, EU, or US, in this CLI example (instead of web console GUI):

    gsutil mb -l US gs://$DEVSHELL_PROJECT_ID

  2. Grant Default ACL (Access Control List) to All users in Cloud Storage:

    gsutil defacl ch -u AllUsers:R gs://$DEVSHELL_PROJECT_ID

    The response:

    Updated default ACL on gs://cp100-1094/
    

    The above is a terrible example because ACLs are meant to control access for individual objects with sensitive info.
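The two steps above can be combined into one helper. A hedged sketch: make_public_bucket and the GSUTIL dry-run override are hypothetical names, and, as noted, granting AllUsers read access is only for demo data:

```shell
#!/usr/bin/env bash
# Hypothetical wrapper around the two gsutil steps above.
# GSUTIL=echo prints the commands instead of running them.
set -euo pipefail

make_public_bucket() {
  local bucket="$1" gsutil_cmd="${GSUTIL:-gsutil}"
  "$gsutil_cmd" mb -l US "gs://${bucket}"                 # create the bucket
  "$gsutil_cmd" defacl ch -u AllUsers:R "gs://${bucket}"  # default read ACL for everyone
}
```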


| | Cloud Storage | Firestore (Datastore) | BigTable | Cloud SQL (1st Gen) |
|---|---|---|---|---|
| Competitors | AWS S3, Azure Blob Storage | - | AWS DynamoDB, Azure Cosmos DB | AWS RDS, Azure SQL |
| Storage type | BLOB store buckets | NoSQL, document | Wide-column NoSQL | Relational SQL |
| Use cases | Images, large media files, backup (zips) | User profiles, product catalog | AdTech, financial & IoT time series | User credentials, customer orders |
| Good for | Structured and unstructured binary or object data | Getting started, App Engine apps | "Flat" data, heavy read/write, events, analytical data | Web frameworks, existing apps |
| Overall capacity | Petabytes+ | Terabytes+ | Petabytes+ | Up to 500 GB |
| Unit size | 5 TB/object | 1 MB/entity | 10 MB/cell | standard |
| Transactions | No | Yes | No (OLAP) | Yes |
| Complex queries | No | No | No | Yes |
| Tech | - | - | Proprietary Google | - |
| Scaling | - | - | Serverless autoscaling | Instances |

Cloud Spanner is Google’s proprietary relational SQL database (like AWS Aurora DB) which spans db’s of unlimited size across regions (globally).

https://stackoverflow.com/questions/tagged/google-cloud-storage

Google Cloud Firestore

Firestore supersedes Datastore.

Firestore provides a RESTful interface for NoSQL ACID transactions.

Cloud storage bucket classes

Standard storage for highest durability, availability, and performance with low latency, for web content distribution and video streaming.

  • (Standard) Multi-regional for accessing media around the world.
  • (Standard) Regional to store data and run data analytics in a single part of the world.
  • Nearline storage for low-cost but durable data archiving, online backup, and disaster recovery of rarely accessed data.
  • Coldline storage = DRA (Durable Reduced Availability Storage) at a lower cost for about once-per-year access.

Google Cloud SQL

Google’s Cloud SQL is MySQL in the cloud, scaling up to 16 processor cores and 100 GB of RAM.

It provides ACID support for transactions, but is relatively expensive.

Google provides automatic replicas, backups, and patching.

Up to 30TB in size.

App Engine accesses Cloud SQL databases using the Connector/J driver for Java and MySQLdb for Python.

  • git clone https://github.com/GoogleCloudPlatform/appengine-gcs-client.git

  • https://cloud.google.com/appengine/docs/python/googlecloudstorageclient/using-cloud-storage
  • https://cloud.google.com/sdk/cloud-client-libraries for Python

Tools like Toad can be used to administer Cloud SQL databases.

https://stackoverflow.com/questions/tagged/google-cloud-sql

CSEK

Each chunk is distributed across Google’s storage infrastructure. All chunks (sub-files) within an object are encrypted at rest, each with its own unique Data Encryption Key (DEK).

DEKs are wrapped by KEKs (Key Encryption Keys) stored in KMS.

With Google-managed keys: the standard key rotation period is 90 days, storing 20 versions. Re-encryption after 5 years.

Customer-managed keys: Keys are in a key ring.

Customer-supplied keys are stored outside of GCP.

LAB: Create an encryption key and wrap it with the Google Compute Engine RSA public key certificate

  1. Create a 256-bit (32-byte) random number to use as a key:

    openssl rand 32 > mykey.txt
    more mykey.txt

    Result (raw random bytes, so the display looks like gibberish):

    Qe7>hk=c}

  2. Download the GCE RSA public cert:

    curl -s -o gce-cert.pem \
     https://cloud-certs.storage.googleapis.com/google-cloud-csek-ingress.pem

Use an RSA public key to encrypt your data. After data has been encrypted with the public key, it can only be decrypted with the corresponding private key. In this case, the private key is known only to Google Cloud Platform services. Wrapping your key using the RSA certificate ensures that only Google Cloud Platform services can unwrap your key and use it to protect your data.

  1. Extract the public key from the certificate:

    openssl x509 -pubkey -noout -in gce-cert.pem > pubkey.pem

  2. RSA-wrap your key:

    openssl rsautl -oaep -encrypt -pubin -inkey pubkey.pem -in
    mykey.txt -out rsawrappedkey.txt

  3. Encode the wrapped key in base64, as a single line:

    openssl enc -base64 -in rsawrappedkey.txt | tr -d '\n' | sed -e '$a\' > rsawrapencodedkey.txt

  4. View your encoded, wrapped key to verify it was created:

    cat rsawrapencodedkey.txt

  5. To avoid introducing newlines, use the code editor to copy the contents of rsawrapencodedkey.txt

    MBMCbcFkf6xyKEFyyKu/VoA/OyQfyHqeaj6z3bGruewwUOz1cEOqIoPbgbtYAJHKiZdB6/loAGHoeIH+MyLoEndOX2BNoVOkPdDkx3VDVfaUl4qxwwLQDLtWKaEMdASpgCzDz/fxSbMAhJE9smcIxAFPAvHMHiUeGMju+Mk+Hi2UYc5c2gwpzah4z6v/5vNF1WubDbQ9g0QulU/p9Gqk/3kj/7Jl3cMqnIxPlOzFzvz5jPWHTZqR+EqwZoondL6kU/XfKurOXa28wVs8yvwHRcYp/7n5yHJjXa+psJbY/SeDFVlN6+J1IYqv6MnupOLbWEGZkfWvKAqSYnQCh4eiqQ==

  6. PROTIP: Click the “X” at the right to exit the Editor.
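The whole wrap-and-encode sequence above can be rehearsed offline. A runnable sketch: it generates a local RSA keypair to stand in for Google's certificate (an assumption so the script needs no download); in the lab you would extract pubkey.pem from gce-cert.pem instead:

```shell
#!/usr/bin/env bash
# Offline rehearsal of the CSEK wrap/encode steps. The locally generated
# keypair stands in for Google's RSA certificate; everything else mirrors
# the lab commands above.
set -euo pipefail
cd "$(mktemp -d)"

openssl rand 32 > mykey.txt                          # 256-bit random key
openssl genrsa -out private.pem 2048 2>/dev/null     # stand-in for Google's key
openssl rsa -in private.pem -pubout -out pubkey.pem 2>/dev/null

# RSA-OAEP wrap the key, then base64-encode it onto a single line (-A):
openssl rsautl -oaep -encrypt -pubin -inkey pubkey.pem \
  -in mykey.txt -out rsawrappedkey.txt
openssl enc -base64 -A -in rsawrappedkey.txt > rsawrapencodedkey.txt
cat rsawrapencodedkey.txt
```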

    Encrypt a new persistent disk with your own key

  7. In the browser tab showing the GCP console, select Navigation menu > Compute Engine > Disks. Click the Create disk button.

    Attach the disk to a compute engine instance

  8. In the browser tab showing the GCP console, select Navigation menu > Compute Engine > VM Instances. Click the Create button. Name the instance csek-demo and verify the region is us-central1 and the zone is us-central1-a.

  9. Scroll down and expand Management, security, disks, networking, sole tenancy. Click on Disks and under Additional disks, click Attach existing disk. For the Disk property, select the encrypted-disk-1. Paste the value of your wrapped, encoded key into the Enter Key field, and check the Wrapped key checkbox. (You should still have this value in your clipboard). Leave the Mode as Read/write and the Deletion rule as Keep disk.

  10. Click the Create button to launch the new VM. The VM will be launched with 2 disks attached: the boot disk and the encrypted disk. The encrypted disk still needs to be formatted and mounted to the instance operating system.

    Important. Notice the encryption key was needed to mount the disk to an instance. Google does not store your keys on its servers and cannot access your protected data unless you provide the key. This also means that if you forget or lose your key there is no way for Google to recover the key or to recover any data encrypted with the lost key.

  11. Once the instance has booted, click the SSH button to connect to the instance.

  12. Issue the following commands on the instance to format and mount the encrypted volume:

    sudo mkfs.ext4 /dev/disk/by-id/google-encrypted-disk-1
    mkdir encrypted
    sudo mount /dev/disk/by-id/google-encrypted-disk-1 encrypted/

    The disk is now mounted as the encrypted folder and can be used like any other disk.

    Create a snapshot from an encrypted disk

  13. In the GCP console, select Navigation menu > Compute Engine > Snapshots. Click the Create snapshot button. Provide a name of encrypted-disk-1-snap1. For the Source disk, select the encrypted-disk-1. For the encryption key, paste in the wrapped, encoded key value you created earlier. Check the Wrapped key checkbox.

    Notice that the snapshot can be encrypted with a different key than the actual disk. For this lab we will use the same key for the snapshot.

    Paste in the wrapped, encoded key value you created earlier again into the snapshot encryption key field. Check the Wrapped key checkbox. Click the Create button.


Stackdriver for Logging

“Stackdriver” is GCP’s tool for logging, monitoring, error reporting, trace, and diagnostics, integrated across GCP and AWS.

Trace provides per-URL latency metrics.

Open source agents

Collaborations with PagerDuty, BMC, Splunk, etc.

Integrate with auto-scaling.

Integrations with Source Repository for debugging.

Big Data Services


  • BigQuery SaaS data warehouse analytics database streams data at 100,000 rows per second. Automatic discounts for long term data storage. See Shine Technologies.

    HBase - columnar data store, Pig, RDBMS, indexing, hashing

    Storage costs 2 cents per GB per month. No charge for queries from cache!

    Competes against Amazon Redshift.

    https://stackoverflow.com/questions/tagged/google-bigquery

  • Pub/Sub - large scale (enterprise) messaging for IoT. Scalable & flexible. Integrates with Dataflow.

  • Dataproc - a managed Hadoop, Spark, MapReduce, Hive service.

    NOTE: Even though Google published the paper on MapReduce in 2004, by about 2006 Google stopped creating new MapReduce programs due to Colossus, Dremel and Flume, externalized as BigQuery and Dataflow.

  • Dataflow - stream analytics & ETL batch processing - unified and simplified pipelines in Java and Python. Use reserved compute instances. Competes with AWS Kinesis.

  • ML Engine (for Machine Learning) -
  • IoT Core
  • Genomics

  • Dataprep
  • Datalab is a Jupyter notebook server using matplotlib or Google Charts for visualization. It provides an interactive tool for large-scale data exploration, transformation, and analysis.

.NET Dev Support

https://www.coursera.org/learn/develop-windows-apps-gcp Develop and Deploy Windows Applications on Google Cloud Platform class on Coursera

https://cloud.google.com/dotnet/ Windows and .NET support on Google Cloud Platform.

We will build a simple ASP.NET app, deploy to Google Compute Engine and take a look at some of the tools and APIs available to .NET developers on Google Cloud Platform.

https://cloud.google.com/sdk/docs/quickstart-windows Google Cloud SDK for Windows (gcloud)

Installed with Cloud SDK for Windows is https://googlecloudplatform.github.io/google-cloud-powershell cmdlets for accessing and manipulating GCP resources

https://googlecloudplatform.github.io/google-cloud-dotnet/ Google Cloud Client Libraries for .NET (new) on NuGet for BigQuery, Datastore, Pub/Sub, Storage, Logging.

https://developers.google.com/api-client/dotnet/ Google API Client Libraries for .NET https://github.com/GoogleCloudPlatform/dotnet-docs-samples

https://cloud.google.com/tools/visual-studio/docs/ available on Visual Studio Gallery. Google Cloud Explorer accesses Compute Engine, Cloud Storage, Cloud SQL

Load balance

Scale and Load Balance Instances and Apps

  1. Get a GCP account
  2. Define a project with billing enabled and the default network configured
  3. An admin account with at least project owner role.
  4. Create an instance template with a web app on it
  5. Create a managed instance group that uses the template to scale
  6. Create an HTTP load balancer that scales instances based on traffic and distributes load across availability zones
  7. Define a firewall rule for HTTP traffic.
  8. Test scaling and balancing under load.
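Steps 4 through 7 above can be sketched as gcloud commands. All resource names (web-template, web-group, allow-http), the machine type, the autoscaling target, and the GCLOUD dry-run override are assumptions, not from the lab:

```shell
#!/usr/bin/env bash
# Hedged sketch of steps 4-7: template, managed group, autoscaling, firewall.
# GCLOUD=echo prints the commands instead of creating billed resources.
set -euo pipefail

lb_setup() {
  local gcloud_cmd="${GCLOUD:-gcloud}"
  # 4. Instance template whose startup script installs a web server:
  "$gcloud_cmd" compute instance-templates create web-template \
    --machine-type=n1-standard-1 \
    --metadata=startup-script='#! /bin/bash
apt-get update && apt-get install -y apache2'
  # 5. Managed instance group that scales from the template:
  "$gcloud_cmd" compute instance-groups managed create web-group \
    --template=web-template --size=2 --zone=us-central1-a
  # 6. Autoscaling on load-balancer utilization:
  "$gcloud_cmd" compute instance-groups managed set-autoscaling web-group \
    --zone=us-central1-a --max-num-replicas=5 \
    --target-load-balancing-utilization=0.6
  # 7. Firewall rule for HTTP traffic:
  "$gcloud_cmd" compute firewall-rules create allow-http --allow=tcp:80
}
```

The HTTP load balancer itself (backend service, URL map, forwarding rule) takes several more commands and is omitted here.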

Why am I still being charged?

On a Google Cloud account which had nothing running, my bill at the end of the month was still $35 for “Compute Engine Network Load Balancing: Forwarding Rule Additional Service Charge”.

CAUTION: Each exposed Kubernetes service (type == LoadBalancer) creates a forwarding rule, and Google’s shutdown script doesn’t remove the forwarding rules it created.

  1. To fix it, per https://cloud.google.com/compute/docs/load-balancing/network/forwarding-rules

    
    gcloud compute forwarding-rules list
    

    For a list such as this (scroll to the right for more):

    NAME                              REGION       IP_ADDRESS      IP_PROTOCOL  TARGET
    a07fc7696d8f411e791c442010af0008  us-central1  35.188.102.120  TCP          us-central1/targetPools/a07fc7696d8f411e791c442010af0008
    

    Iteratively:

  2. Copy each item’s NAME listed to build command:

    
    gcloud compute forwarding-rules delete [FORWARDING_RULE_NAME]
    
  3. You’ll be prompted for a region each time, and to confirm with “y”.

TODO: How to automate this?
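One possible answer to that TODO, as a hedged sketch: read name and region pairs from the machine-readable listing and delete each rule. The function name delete_forwarding_rules and the GCLOUD override are hypothetical; --quiet suppresses the region and confirmation prompts:

```shell
#!/usr/bin/env bash
# Reads "<name> <region>" pairs on stdin and deletes each forwarding rule.
# GCLOUD=echo gives a dry run that only prints the delete commands.
set -euo pipefail

delete_forwarding_rules() {
  local gcloud_cmd="${GCLOUD:-gcloud}" name region
  while read -r name region; do
    [ -n "$name" ] || continue
    "$gcloud_cmd" compute forwarding-rules delete "$name" \
      --region "$region" --quiet
  done
}

# Usage against a real project:
#   gcloud compute forwarding-rules list --format="value(name,region)" \
#     | delete_forwarding_rules
```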

Learning resources

https://codelabs.developers.google.com/

Running Node.js on a Virtual Machine

http://www.roitraining.com/google-cloud-platform-public-schedule/ in the US and UK $599 per day

Lynn Langit created several video courses early in 2013/14 when Google Fiber was only available in Kansas City:

https://deis.com/blog/2016/first-kubernetes-cluster-gke/

https://hub.docker.com/r/lucasamorim/gcloud/

https://github.com/campoy/go-web-workshop

http://www.anengineersdiary.com/2017/04/google-cloud-platform-tutorial-series_71.html

https://bootcamps.ine.com/products/google-cloud-architect-exam-bootcamp $1,999 bootcamp

CLI for GCP API

https://dzone.com/articles/cli-for-rest-api

More on cloud

This is one of a series on cloud computing: