
The cloud that runs on fast Google Fiber


Overview

Here is a hands-on introduction to the Google Cloud Platform (GCP) and to getting certified as a Google Certified Professional (GCP).

Concepts are introduced succinctly after you take a small action, followed by brief commentary with links to more information.

  1. Google Cloud’s marketing home page is at:

    https://cloud.google.com

Free Cloud Time with Training

Google’s Qwiklabs includes cloud instance time (around an hour per class). At the end you get a certificate of completion for your resume.

Quests:

  • https://google.qwiklabs.com/quests/23
    G = GCP Essentials Quest
  • https://google.qwiklabs.com/quests/29
    K = Kubernetes in the Google Cloud
  • https://google.qwiklabs.com/quests/27
    W = Windows on GCP

| Quest | Module | Level | Credits | Time |
|---|---|---|---|---|
| G | Creating a Virtual Machine | Introductory | 1 | 30m |
| G | Getting Started with Cloud Shell & gcloud | Introductory | 1 | 40m |
| G | Provision Services with Cloud Launcher | Introductory | 1 | 30m |
| G | Creating a Persistent Disk (Activity Tracking) | Introductory | 1 | 30m |
| G | Creating a Persistent Disk | Introductory | 1 | 30m |
| G | Monitoring Cloud Infrastructure with Stackdriver | Fundamental | 1 | 45m |
| G | Set Up Network and HTTP Load Balancers | Advanced | 1 | 40m |
| K | Introduction to Docker | Beginner | 1 | 41m |
| G&K | Hello Node Kubernetes | Advanced | 7 | 60m |
| K | Orchestrating the Cloud with Kubernetes | Expert | 9 | 75m |
| K | Managing Deployments Using Kubernetes Engine | Advanced | 7 | 60m |
| K | Continuous Delivery with Jenkins in Kubernetes Engine | Expert | 7 | 80m |
| K | Running a MongoDB Database in Kubernetes with StatefulSets | Expert | 9 | 50m |
| G&K | Build a Slack Bot with Node.js on Kubernetes | Advanced | 7 | 60m |
| G&K | Helm Package Manager | Advanced | 7 | 50m |

These labs make use of commands rather than the interactive UI.

PROTIP: Use different browser programs to switch quickly among them:

  1. In a Brave browser, open the Qwiklabs instructions (especially if you’re using a different Google account for Qwiklabs than for Gmail)
  2. In Chrome, open as an Incognito window to click START of a Cloud console.
  3. In Firefox, open this blog page.

    PROTIP: The time starts after you click “Start Lab”. So read through the instructions before starting.

Free $300 trial account

In US regions, new accounts get $300 of credit, good for 12 months.

There are limitations to Google’s no-charge low-level usage:

  • No more than 8 cores at once across all instances
  • No more than 100 GB of solid state disk (SSD) space
  • Max. 2 TB (2,000 GB) total persistent standard disk space

PROTIP: Google bills in minute-level increments (with a 10-minute minimum charge), unlike Amazon which charges by the hour. But there is talk now about Amazon matching this.

  1. Read the fine print in the FAQ to decide what Google permits:

    https://cloud.google.com/free/docs/frequently-asked-questions

  2. Read Google’s Pricing Philosophy:

    https://cloud.google.com/pricing/philosophy

    Gmail accounts

  3. NOTE: Create several Gmail accounts, each with a different identity (name, birthdate, credit card). You may need to reuse the same cardholder name and phone number across them, because extra credit cards and phone lines are expensive to obtain.

    PROTIP: Write down all the details (including the date when you opened the account) in case you have to recover the password.

    PROTIP: Use a different browser so you can flip quickly between identities.

    • Use Chrome browser for Gmail account1 with an Amex card for project1
    • Use Firefox browser for Gmail account2 with a Visa card for project2
    • Use Brave browser for Gmail account3 with a Mastercard for project3
    • Use Safari browser for Gmail account4 with a Discover card for project4
  4. In the appropriate internet browser, apply for a Gmail address, then use that same combination in the free trial registration page and Console:

    https://cloud.google.com/free

    Alternately, https://codelabs.developers.google.com/codelabs/cpb100-free-trial/#0

    https://console.developers.google.com/freetrial

  5. Click the Try It Free button. Complete the registration, click “Agree and continue”, then “Start my new trial”.

  6. With the appropriate account and browser, configure at console.cloud.google.com

    Keeping track of multiple accounts is an exhausting way to live, in my opinion.

  7. PROTIP: Bookmark the project URL.

    PROTIP: Google remembers your last project and its region, and gives them to you even if you do not specify them in the URL.

    Configure Limits

  8. CAUTION: Your bill can suddenly jump to thousands of dollars a day, with no explanation. Configure budgets and alerts to put limits on spending.


Why Google Cloud?

As with other clouds:

  • “Pay as you go” rather than a significant up-front purchase, which also eats time
  • No software to install (and go stale, requiring redo work)
  • Google scale - 9 cloud regions containing 27 zones, plus 90 edge cache locations.

But most of all, Google has a fast fiber network connecting machines, which enables high capacity and speed across the world.

https://cloud.google.com/why-google

Google Certified Professional (GCP) Certification Exams

As of December, 2016, Google pared down to three certifications:

  1. Google Certified Professional - Cloud Architect
  2. Google Certified Professional - Data Engineer (for big data)
  3. Google Certified Associate - G Suite Administrator (Gmail, Google Drive, etc.)

NOTE: Unlike Amazon, there is no “Associate” level for the professional (Cloud Architect and Data Engineer) exams.

See https://cloud.google.com/certification
Each 2-hour $200 exam is taken in person at a Kryterion Test Center (Kryterion: (602) 659-4660, Phoenix, AZ). PROTIP: Centers go in and out of business, or have limitations, so call ahead to verify they’re open and to confirm parking instructions. Copy the address and parking instructions to your Calendar entry.

Register for your exam through your Test Sponsor’s Webassessor portal. There you get a Test Taker Authorization Code needed to launch the test.

Google offers classes at https://cloud.google.com/training

Cloud Architect

Cloud Architect – design, build and manage solutions on Google Cloud Platform.

The exam references these case studies:

https://www.coursera.org/specializations/gcp-architecture is the Architecting with Google Cloud Platform Specialization: 6 courses (each starting once per month) for $79 USD per month, with hands-on labs via Qwiklabs:

  1. Google Cloud Platform Fundamentals: Core Infrastructure
  2. Essential Cloud Infrastructure: Foundation
  3. Essential Cloud Infrastructure: Core Services
  4. Elastic Cloud Infrastructure: Scaling and Automation
  5. Elastic Cloud Infrastructure: Containers and Services
  6. Reliable Cloud Infrastructure: Design and Process

More about this certification:

Data Engineer

Data Engineer certification Guide

https://cloud.google.com/training/courses/data-engineering is used within the Data Engineering on Google Cloud Platform Specialization on Coursera. It is a series of five one-week classes ($49 per month after a 7-day trial). These have videos that sync with transcript text, but no hints to quiz answers and no live help.

  1. Building Resilient Streaming Systems on Google Cloud Platform $99 USD

  2. Leveraging Unstructured Data with Cloud Dataproc on Google Cloud Platform $59 USD

  3. Google Cloud Platform Big Data and Machine Learning Fundamentals $59 USD by Google Professional Services Consultant Valliappa Lakshmanan (Lak) at https://medium.com/@lakshmanok, who previously worked on weather prediction at NOAA.

    https://codelabs.developers.google.com/cpb100

  4. Serverless Data Analysis with Google BigQuery and Cloud Dataflow $99 USD

  5. Serverless Machine Learning with Tensorflow on Google Cloud Platform $99 USD by Valliappa Lakshmanan uses the Tensorflow Cloud ML service to learn a map of New York City by analyzing taxi cab locations.

    • Vision (image sentiment analysis)
    • Speech (recognizes 110 languages, dictation)
    • Translate
    • Personalization

New Project

Service accounts are automatically created for each project:

project_number@developer.gserviceaccount.com
project_id@developer.gserviceaccount.com

Project ID is unique among all other projects at Google and cannot be changed.

Permissions

  1. Read Google’s IT Security PDF

IAM Objects

The two types of IAM roles on GCP are primitive and curated/pre-defined.

Primitive roles (Viewer, Editor, Owner, Billing Administrator).

Permissions flow in one direction. Parent permissions don’t override child permissions. Instead, permissions are inherited and additive. Permissions can’t be denied at lower levels once they’ve been granted at upper levels.

Roles (such as compute.instanceAdmin) are a collection of permissions to give access to a given resource, in the form:

service.resource.verb

Applying an example:

compute.instances.delete
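
As a sketch of how such a role is granted (the project ID and user email below are hypothetical placeholders), the compute.instanceAdmin role mentioned above, which includes compute.instances.delete, can be bound to a member from the CLI:

    gcloud projects add-iam-policy-binding my-project-id \
        --member user:jane@example.com \
        --role roles/compute.instanceAdmin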

IAM Policies

Service accounts

Unlike an end-user account, a service account is associated with a VM or app, which uses it to authenticate from one service to another, so no human authentication is involved.

Google-managed service accounts are of the format:
[PROJECT_NUMBER]@cloudservices.gserviceaccount.com

User-managed service accounts are of the format:
[PROJECT_NUMBER]-compute@developer.gserviceaccount.com

Service accounts have more stringent permissions and logging than user accounts.
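
As an illustration (the account and project names here are hypothetical), a user-managed service account can be created, granted a role, and given a key file that an app presents when calling Google APIs:

    # Create the service account
    gcloud iam service-accounts create my-robot --display-name "my robot"
    # Grant it read access to Cloud Storage objects in the project
    gcloud projects add-iam-policy-binding my-project-id \
        --member serviceAccount:my-robot@my-project-id.iam.gserviceaccount.com \
        --role roles/storage.objectViewer
    # Generate a JSON key file for the app to authenticate with
    gcloud iam service-accounts keys create ~/my-robot-key.json \
        --iam-account my-robot@my-project-id.iam.gserviceaccount.com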

Google CLIs

Google has three shells:

  1. gcloud CLI installed with google-cloud-sdk.

  2. gsutil to access Cloud Storage

  3. bq for Big Query tasks

There is a Google Cloud SDK for Windows (gcloud) for your programming pleasure.
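
A quick way to confirm all three CLIs are installed and authenticated is to run one read-only list command from each (they simply show what the current project already contains):

    gcloud compute instances list   # Compute Engine VMs
    gsutil ls                       # Cloud Storage buckets
    bq ls                           # BigQuery datasets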

Graphical user interface (GUI) for Google Compute Engine instance

Cloud Shell Online

The Cloud Shell provides command line access on a web browser, with nothing to install.

Sessions have a 1 hour timeout.

Language support for Java, Go, Python, Node, PHP, Ruby.

Not meant for high computation use.

  1. Click the icon in the Google Cloud Platform Console:

    gcp-cloud-shell-menu-568x166-9041

  2. Click “START CLOUD SHELL” at the bottom of this pop-up:

    gcloud-shell-entry-748x511

    When the CLI appears online:

  3. See that your present working directory is /home/ followed by your account name:

    
    pwd
    
  4. See the folder with your account name:

    
    echo ${HOME}
    
  5. Just your account name:

    
    echo ${USER}
    
  6. Read the welcome file:

    
    nano README-cloudshell.txt
    

    Your 5GB home directory will persist across sessions, but the VM is ephemeral and will be reset approximately 20 minutes after your session ends. No system-wide change will persist beyond that.

  7. Type “gcloud help” to get help on using Cloud SDK. For more examples, visit https://cloud.google.com/shell/docs/quickstart and https://cloud.google.com/shell/docs/examples

  8. Type “cloudshell help” to get help on using the “cloudshell” utility. Common functionality is aliased to short commands in your shell; for example, you can type “dl” at the Bash prompt to download a file. Type “cloudshell aliases” to see these commands.

  9. Type “help” to see this message any time. Type “builtin help” to see Bash interpreter help.

Other resources:

GCP Console / Dashboard

https://console.cloud.google.com/home/dashboard
displays panes for your project from among the list obtained by clicking the “hamburger” menu icon at the upper left corner. The major sections of this menu are:

  • COMPUTE (App Engine, Compute Engine, Container Engine)
  • STORAGE (Cloud Bigtable, Cloud Datastore, Storage, Cloud SQL, Spanner)
  • NETWORKING (VPC)
  • STACKDRIVER (Monitoring, Debug, Trace, Logging, Error Reporting)
  • TOOLS (Container Registry, Source Repositories, Deployment Manager, Endpoints)
  • BIG DATA (BigQuery, Pub/Sub, Dataproc, Dataflow, ML Engine, Genomics, IoT Core, Dataprep)

Text Editor

  1. Click the pencil icon for the built-in text editor.

  2. Edit text using nano or vim built-in.

  3. PROTIP: Use Boost mode to run Docker with more memory.

Local gcloud CLI install

Get the CLI to run locally on your laptop:

  1. On MacOSX use Homebrew:
    
    brew cask install google-cloud-sdk
    

Alternately:

  1. In https://cloud.google.com/sdk/downloads
  2. Click the link for Mac OS X (x86_64), such as “google-cloud-sdk-173.0.0-darwin-x86_64.tar.gz”, to download it to your Downloads folder.
  3. Double-click the file to unzip it (from 13.9 MB to a 100.6 MB folder). If you’re not seeing a folder in Finder, use another unzip utility.
  4. Move the folder to your home folder.

Either way, edit environment variables on Mac:

  1. Edit your ~/.bash_profile to add the path to that folder in the $PATH variable.

    export PATH="$PATH:$HOME/.google-cloud-sdk/bin"
  2. PROTIP: Add an alias to get to the folder quickly:

    alias gcs='cd ~/.google-cloud-sdk'
  3. Use the alias to navigate to the folder:

    gcs

    Set permissions?

  4. Install libraries (the --help argument below only shows the options; omit it to run the actual install):

    On Linux or Mac OS X:

    ./install.sh --help

    On Windows:

    .\install.bat --help
  5. Initialize the SDK:

    ./bin/gcloud init

gcloud CLI commands

Regardless of whether the CLI is online or local:

  1. Get syntax of commands

    gcloud help

  2. Be aware of the full set of parameters possible for GCP tasks at
    https://cloud.google.com/sdk/gcloud/reference

    The general format of commands:

    gcloud [GROUP] [SUBGROUP] [COMMAND] [--flags] [arguments]

    The Cloud Shell has all the common Linux command tools and authentication pre-installed.

  3. Run df to see that /dev/sdb1 has 5,028,480 KB ≈ 5 GB of persistent storage:

    Filesystem     1K-blocks     Used Available Use% Mounted on
    none            25669948 16520376   7822572  68% /
    tmpfs             872656        0    872656   0% /dev
    tmpfs             872656        0    872656   0% /sys/fs/cgroup
    /dev/sdb1        5028480    10332   4739672   1% /home
    /dev/sda1       25669948 16520376   7822572  68% /etc/hosts
    shm                65536        0     65536   0% /dev/shm
    
  4. Confirm the operating system version:

    uname -a

    The answer shows Debian with kernel 3.16:

     Linux cs-6000-devshell-vm-5260d9c4-474a-47de-a143-ea05b695c057-5a 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux
     

    gcloud compute config-ssh

    Projects List

  5. Get list of Project IDs:

    gcloud projects list

    Example:

    PROJECT_ID              NAME                       PROJECT_NUMBER
    what-182518             CICD                       608556368368
    
  6. List the project currently configured as active (such as “cp100”):

    gcloud config list project 
    

    A sample response:

    [core]
    project = what-182518
    Your active configuration is: [cloudshell-20786]
    
  7. Print just the project name (suppressing other warnings/errors):

    gcloud config get-value project 2> /dev/null 
    

    Alternately:

    gcloud config list --format 'value(core.project)' 2>/dev/null
    
  8. PROTIP: The shell variable $DEVSHELL_PROJECT_ID defined by Google can be used to refer to the project ID of the project used to start the Cloud Shell session.

    echo $DEVSHELL_PROJECT_ID

  9. PROTIP: Instead of manually constructing commands, use environment variable:

    gcloud config set project ${DEVSHELL_PROJECT_ID}
    

    Alternately, if you want your own:

    export PROJECT_ID=$(gcloud config get-value project)

  10. PROTIP: Get information about a project using the project environment variable:

    gcloud compute project-info describe --project ${DEVSHELL_PROJECT_ID}
    

    Project metadata includes quotas:

    quotas:
    - limit: 1000.0
      metric: SNAPSHOTS
      usage: 1.0
    - limit: 5.0
      metric: NETWORKS
      usage: 2.0
    - limit: 100.0
      metric: FIREWALLS
      usage: 13.0
    - limit: 100.0
      metric: IMAGES
      usage: 1.0
    - limit: 1.0
      metric: STATIC_ADDRESSES
      usage: 1.0
    - limit: 200.0
      metric: ROUTES
      usage: 31.0
    - limit: 15.0
      metric: FORWARDING_RULES
      usage: 2.0
    - limit: 50.0
      metric: TARGET_POOLS
      usage: 0.0
    - limit: 50.0
      metric: HEALTH_CHECKS
      usage: 2.0
    - limit: 8.0
      metric: IN_USE_ADDRESSES
      usage: 2.0
    - limit: 50.0
      metric: TARGET_INSTANCES
      usage: 0.0
    - limit: 10.0
      metric: TARGET_HTTP_PROXIES
      usage: 1.0
    - limit: 10.0
      metric: URL_MAPS
      usage: 1.0
    - limit: 5.0
      metric: BACKEND_SERVICES
      usage: 2.0
    - limit: 100.0
      metric: INSTANCE_TEMPLATES
      usage: 1.0
    - limit: 5.0
      metric: TARGET_VPN_GATEWAYS
      usage: 0.0
    - limit: 10.0
      metric: VPN_TUNNELS
      usage: 0.0
    - limit: 3.0
      metric: BACKEND_BUCKETS
      usage: 0.0
    - limit: 10.0
      metric: ROUTERS
      usage: 0.0
    - limit: 10.0
      metric: TARGET_SSL_PROXIES
      usage: 0.0
    - limit: 10.0
      metric: TARGET_HTTPS_PROXIES
      usage: 1.0
    - limit: 10.0
      metric: SSL_CERTIFICATES
      usage: 1.0
    - limit: 100.0
      metric: SUBNETWORKS
      usage: 26.0
    - limit: 10.0
      metric: TARGET_TCP_PROXIES
      usage: 0.0
    - limit: 24.0
      metric: CPUS_ALL_REGIONS
      usage: 3.0
    - limit: 10.0
      metric: SECURITY_POLICIES
      usage: 0.0
    - limit: 1000.0
      metric: SECURITY_POLICY_RULES
      usage: 0.0
    - limit: 6.0
      metric: INTERCONNECTS
      usage: 0.0
  11. List configuration information for the currently active project:

    gcloud config list

    Sample response:

    [component_manager]
    disable_update_check = True
    [compute]
    gce_metadata_read_timeout_sec = 5
    [core]
    account = wilsonmar@gmail.com
    check_gce_metadata = False
    disable_usage_reporting = False
    project = what-182518
    [metrics]
    environment = devshell
    Your active configuration is: [cloudshell-20786]
    

    Account Authorization Permissions

  12. List:

    gcloud auth list

    If you have not logged in:

    No credentialed accounts.
    
    

To log in, run: gcloud auth login ACCOUNT

  1. List projects to which your account has access:

    gcloud projects list

    Instances List

  2. Zones are listed as metadata for each GCE instance:

    gcloud compute instances list

    Sample response:

    NAME          ZONE           MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS
    hygieia-1     us-central1-f  n1-standard-1               10.128.0.3   35.193.186.181  TERMINATED
    

    PROTIP: Define which zone and region your team should use.

    Zones

  3. Get the list of zone codes:

    gcloud compute zones list

    Sample response:

    NAME                    REGION                STATUS  NEXT_MAINTENANCE  TURNDOWN_DATE
    asia-east1-c            asia-east1            UP
    asia-east1-b            asia-east1            UP
    asia-east1-a            asia-east1            UP
    asia-northeast1-a       asia-northeast1       UP
    asia-northeast1-c       asia-northeast1       UP
    asia-northeast1-b       asia-northeast1       UP
    asia-south1-c           asia-south1           UP
    us-central1-c           us-central1           UP
    asia-south1-a           asia-south1           UP
    asia-south1-b           asia-south1           UP
    asia-southeast1-a       asia-southeast1       UP
    asia-southeast1-b       asia-southeast1       UP
    australia-southeast1-c  australia-southeast1  UP
    australia-southeast1-b  australia-southeast1  UP
    australia-southeast1-a  australia-southeast1  UP
    europe-west1-c          europe-west1          UP
    europe-west1-b          europe-west1          UP
    europe-west1-d          europe-west1          UP
    europe-west2-b          europe-west2          UP
    europe-west2-a          europe-west2          UP
    europe-west2-c          europe-west2          UP
    europe-west3-b          europe-west3          UP
    europe-west3-a          europe-west3          UP
    europe-west3-c          europe-west3          UP
    southamerica-east1-c    southamerica-east1    UP
    southamerica-east1-b    southamerica-east1    UP
    southamerica-east1-a    southamerica-east1    UP
    us-central1-a           us-central1           UP
    us-central1-f           us-central1           UP
    us-central1-c           us-central1           UP
    us-central1-b           us-central1           UP
    us-east1-b              us-east1              UP
    us-east1-d              us-east1              UP
    us-east1-c              us-east1              UP
    us-east4-c              us-east4              UP
    us-east4-a              us-east4              UP
    us-east4-b              us-east4              UP
    us-west1-c              us-west1              UP
    us-west1-b              us-west1              UP
    us-west1-a              us-west1              UP
    

    REMEMBER: A region is a higher-order (more encompassing) concept than a zone.

  4. Define environment variables to hold zone and region:

    
    export CLOUDSDK_COMPUTE_ZONE=us-central1-f
    export CLOUDSDK_COMPUTE_REGION=us-central1 
    echo $CLOUDSDK_COMPUTE_ZONE
    echo $CLOUDSDK_COMPUTE_REGION
    

    TODO: Get the default region and zone into environment variables (a sketch follows below).

    curl "http://metadata.google.internal/computeMetadata/v1/instance/zone" -H "Metadata-Flavor: Google"
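
    A minimal sketch for that TODO, assuming it runs on a Compute Engine VM (or Cloud Shell) where the metadata server is reachable. The metadata value comes back as projects/NNN/zones/us-central1-f, so the zone is the last path segment and the region is the zone minus its trailing letter:

    ZONE_PATH=$(curl -s "http://metadata.google.internal/computeMetadata/v1/instance/zone" \
      -H "Metadata-Flavor: Google")
    export CLOUDSDK_COMPUTE_ZONE=${ZONE_PATH##*/}               # e.g. us-central1-f
    export CLOUDSDK_COMPUTE_REGION=${CLOUDSDK_COMPUTE_ZONE%-*}  # e.g. us-central1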

  5. Set the zone (for example, us-central1-f defined above):

    
    gcloud config set compute/zone ${CLOUDSDK_COMPUTE_ZONE}
    

    See https://cloud.google.com/compute/docs/storing-retrieving-metadata

  6. Switch to see the Compute Engine Metadata UI for the project:

    https://console.cloud.google.com/compute/metadata

    • google-compute-default-zone
    • google-compute-default-region

    https://github.com/wilsonmar/Dockerfiles/blob/master/gcp-set-zone.sh

Create sample Node server

  1. Download a file from GitHub:

    
    curl -o server.js https://raw.githubusercontent.com/wilsonmar/Dockerfiles/master/NodeJs/server.js
    

    -o (lowercase o) saves the download to the filename provided on the command line (server.js here).

    See http://www.thegeekstuff.com/2012/04/curl-examples/?utm_source=feedburner

    The sample Node program displays just text “Hello World!” (no fancy HTML/CSS).

  2. Invoke Node to start server:

    node server.js
  3. View the program’s browser output online by clicking the Google Web View button, then “Preview on port 8080”:

    gcp-web-preview-396x236-5615

    The URL:
    https://8080-dot-3050285-dot-devshell.appspot.com/?authuser=0

  4. Press control+C to stop the Node server.

Enhance

### Database

The following patches a Cloud SQL instance named mysql so that an external IP address is authorized to connect to it:

gcloud sql instances patch mysql \
     --authorized-networks "203.0.113.20/32"
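
For context, a minimal sketch of how such a Cloud SQL instance might have been created in the first place (the tier and region below are assumptions, not values from this article):

    gcloud sql instances create mysql \
         --tier db-n1-standard-1 \
         --region us-central1
    gcloud sql instances describe mysql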
   

### Deploy Python

  1. Replace boilerplate “your-bucket-name” with your own project ID:

    sed -i s/your-bucket-name/$DEVSHELL_PROJECT_ID/ config.py

  2. View the list of dependencies needed by your custom Python program:

    cat requirements.txt

  3. Download the dependencies:

    pip install -r requirements.txt -t lib

  4. Deploy the current assembled folder:

    gcloud app deploy --quiet

  5. Exit the cloud:

    exit

PowerShell Cloud Tools

https://cloud.google.com/powershell/

https://cloud.google.com/tools/powershell/docs/

  1. In a PowerShell opened for Administrator:

    Install-Module GoogleCloud

    The response:

    Untrusted repository
    You are installing the modules from an untrusted repository. If you trust this 
    repository, change its InstallationPolicy value by running the Set-PSRepository
     cmdlet. Are you sure you want to install the modules from 'PSGallery'?
    [Y] Yes  [A] Yes to All  [N] No  [L] No to All  [S] Suspend  [?] Help 
    (default is "N"):
    
  2. Type A (Yes to All).

  3. Get all buckets for the current project, for a specific project, or a specific bucket:

    $currentProjBuckets = Get-GcsBucket
    $specificProjBuckets = Get-GcsBucket -Project my-project-1
    $bucket = Get-GcsBucket -Name my-bucket-name
    
  4. Navigate to Google Storage (like a drive):

    cd gs:\

  5. Show the available buckets (like directories):

    ls

  6. Create a new bucket

    mkdir my-new-bucket

  7. Help

    Get-Help New-GcsBucket

Source Code Repository

https://console.cloud.google.com/code/develop/repo is
Google’s (Source) Code Repository Console.

Repositories are served from: source.developers.google.com

See docs at https://cloud.google.com/source-repositories

Cloud Source Repositories provides full-featured Git repositories hosted on GCP, free for up to 5 project-users per billing account, with up to 50 GB of free storage and 50 GB of free egress per month.

Mirror from GitHub


  1. PROTIP: On GitHub.com, login to the account you want to use (in the same browser).
  2. PROTIP: Highlight and copy the name of the repository you want to mirror on Google.
  3. Create another browser tab (so they share the credentials established in the steps above).
  4. https://console.cloud.google.com/code is the Google Code Console.
  5. Click “Get Started” if it appears.
  6. PROTIP: For repository name, paste or type the same name as the repo you want to hold from GitHub.

    BLAH: Repository names can only contain alphanumeric characters, underscores or dashes.

  7. Click CREATE to confirm name.

    gcp-code-github-925x460

  8. Click on “Automatically Mirror from GitHub”.
  9. Select GitHub or Bitbucket for a “Choose a Repository list”.
  10. Click Grant to the repo to be linked (if it appears). Then type your GitHub password.
  11. Click the green “Authorize Google-Cloud-Development” button.
  12. Choose the repository. Click the consent box. CONNECT.

    You should get an email “[GitHub] A new public key was added” about the Google Connected Repository.

  13. Commit a change to GitHub (push from your local copy or interactively on GitHub.com).
  14. Click the clock icon on Google Code Console to toggle commit history.
  15. Click the SHA hash to view changes.
  16. Click on the changed file path to see its text comparing two versions. Scroll down.
  17. Click “View source at this commit” to make a “git checkout” of the whole folder.
  18. Click the “Source code” menu for the default list of folders and files.
  19. Select the master branch.

    To disconnect a hosted repository:

  20. Click Repositories on the left menu.
  21. Click the settings icon (with the three vertical dots to the far right) on the same line of the repo you want disconnected.
  22. Confirm Disconnect.

    Create new repo in CLI

  23. Be at the project you want.
  24. Create a repository.
  25. Click the CLI icon.
  26. Click the wrench to adjust background color, etc.

  27. Create a file using the source browser.

  28. Make it a Git repository (a Git client is built-in):

    git init

  29. Define, for example:

    git config credential.helper gcloud.sh

  30. Define the remote:

    git remote add google https://source.developers.google.com/p/cp100-1094/r/helloworld

  31. Push all branches to the remote:

    git push --all google

  32. To transfer files to a Cloud Storage bucket from within the gcloud CLI:

    gsutil cp *.txt gs://cp-100-demo

GCR Container Registry

https://console.cloud.google.com/gcr - Google’s Container Registry console is used to control what is in
Google’s Container Registry (GCR). It is a service apart from GKE. It stores secure, private Docker images for deployments.

Like GitHub, it has build triggers.


Deployment Manager

Deployment Manager creates resources.

Cloud Launcher uses .yaml templates describing the environment, which makes for repeatability.
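
As a hedged sketch of that repeatability (the resource name, zone, machine type, and image below are assumptions), a Deployment Manager configuration is just a YAML file that can be created and deployed from the shell:

    # Write a minimal one-VM configuration
    cat > vm.yaml <<'EOF'
    resources:
    - name: demo-vm
      type: compute.v1.instance
      properties:
        zone: us-central1-f
        machineType: zones/us-central1-f/machineTypes/f1-micro
        disks:
        - boot: true
          autoDelete: true
          initializeParams:
            sourceImage: projects/debian-cloud/global/images/family/debian-9
        networkInterfaces:
        - network: global/networks/default
    EOF
    # Create the whole deployment as one unit
    gcloud deployment-manager deployments create demo-deployment --config vm.yaml

Re-running the create command with the same file reproduces the same environment, and gcloud deployment-manager deployments delete demo-deployment removes everything it created.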

Endpoints (APIs)

Google Cloud Endpoints let you manage, control access, and monitor custom APIs that can be kept private.

REST API

  1. Enable the API on Console.

  2. For more on working with Google API Explorer to test RESTful API’s

    https://developers.google.com/apis-explorer

    PROTIP: Although APIs are in alphabetical order, some services are named starting with “Cloud” or “Google” or “Google Cloud”. Press Ctrl+F to search.

SQL Servers on GCE: (2012, 2014, 2016)

  • SQL Server Standard
  • SQL Server Web
  • SQL Server Enterprise

API Explorer site: GCEssentials_ConsoleTour

Authentication uses OAuth2 (JWT) and JSON.

Google NETWORKING

Google creates all instances with a private (internal) IP address such as 10.142.3.2.

One public IP (such as 35.185.115.31) is optionally assigned to a resource. The IP can be ephemeral (from a pool) or static (reserved). Unassigned static IPs cost $.01 per hour (24 cents per day).

Connect via VPN using IPsec to encrypt traffic.

Google Cloud Router supports dynamic routing between Google Cloud Platform and corporate networks.

HTTP Load Balancing ensures only healthy instances handle traffic across regions.

  • See https://www.ianlewis.org/en/google-cloud-platform-http-load-balancers-explaine

Google Cloud Interconnect:

  • Carrier Interconnect - Enterprise-grade connections provided by carrier service providers
  • Direct Peering - connect business directly to Google
  • CDN Interconnect - CDN providers link with Google’s edge network

Allow external traffic k8s

For security, Kubernetes pods by default are accessible only via their internal IP within the cluster.

So to make a container accessible from outside the Kubernetes virtual network, expose the pod as a Kubernetes service. Within a Cloud Shell:

kubectl expose deployment hello-node --type="LoadBalancer"
   

The --type="LoadBalancer" flag specifies that we’ll be using the load balancer provided by the underlying infrastructure (in this case the Compute Engine load balancer) to load-balance traffic across all pods managed by the deployment.

Sample response:

   service "hello-node" exposed
   

The Kubernetes master creates the load balancer and related Compute Engine forwarding rules, target pools, and firewall rules to make the service fully accessible from outside of Google Cloud Platform.

  1. Find the publicly-accessible IP address of the service, request kubectl to list all the cluster services:

    kubectl get services
    

    Sample response listing internal CLUSTER-IP and EXTERNAL-IP:

    NAME         CLUSTER-IP     EXTERNAL-IP      PORT(S)    AGE
    hello-node   10.3.250.149   104.154.90.147   8080/TCP   1m
    kubernetes   10.3.240.1     <none>           443/TCP    5m
    

Google COMPUTE Cloud Services

gcloud-offerings-600x120-48k

From raw IaaS on the left, controlled by you, to PaaS on the right, highly managed by Google for “NoOps”.

| | Compute Engine | Container Engine | App Engine Standard | App Engine Flexible |
|---|---|---|---|---|
| Service model | IaaS | Hybrid | PaaS | PaaS |
| Language support | Any | Any | Java, Python, Go, PHP | Any |
| Primary use case workloads | General computing | Container-based | Web & Mobile apps | Container-based |

https://cloudplatform.googleblog.com/2017/07/choosing-the-right-compute-option-in-GCP-a-decision-tree.html

The engines of GCP:

Google Compute Engine

GCE offers the most control but also the most work (operational overhead).

Preemptible instances are cheaper but can be taken away at any time, like Amazon’s Spot Instances.

Google provides load balancers, VPNs, firewalls.

Use GCE where you need to select the size of disks, memory, CPU types

  • use GPUs (Graphic Processing Units)
  • custom OS kernels
  • specifically licensed software
  • protocols beyond HTTP/S
  • orchestration of multiple containers

GCE is an IaaS (Infrastructure as a Service) offering of instances, NOT using Kubernetes automatically like GKE. Use it to migrate on-premise solutions to the cloud.

https://cloud.google.com/compute/docs/?hl=en_US&_ga=2.131668815.-1771618146.1506638659

https://stackoverflow.com/questions/tagged/google-compute-engine

https://cloud.google.com/compute/docs/machine-types such as n1-standard-1.
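
As a minimal sketch of the GCE workflow (the instance name, zone, machine type, and image family below are assumptions), an instance can be created, listed, and deleted entirely from the gcloud CLI:

    gcloud compute instances create my-vm-1 \
        --zone us-central1-f \
        --machine-type f1-micro \
        --image-family debian-9 --image-project debian-cloud
    gcloud compute instances list
    gcloud compute instances delete my-vm-1 --zone us-central1-f --quiet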

GCE SonarQube

There are several ways to instantiate a Sonar server.

GCE SonarQube BitNami

One alternative is to use Bitnami:

  1. Browser at https://google.bitnami.com/vms/new?image_id=4FUcoGA
  2. Click Account for https://google.bitnami.com/services
  3. Add Project
  4. Set up a Bitnami Vault password.
  5. PROTIP: Use 1Password to generate a strong password and store it.
  6. Agree to really open sharing with Bitnami:

    • View and manage your Google Compute Engine resources
    • View and manage your data across Google Cloud Platform services
    • Manage your data in Google Cloud Storage

    CAUTION: This may be over-sharing for some.

  7. Click “Select an existing Project…” to select one in the list that appears. Continue.
  8. Click “Enable Deployment Manager (DM) API” to open another browser tab at https://console.developers.google.com/project/attiopinfosys/apiui/apiview/deploymentmanager
  9. If the blue “DISABLE” appears, then it’s enabled.
  10. Return to the Bitnami tab to click “CONTINUE”.
  11. Click BROWSE for the Library at https://google.bitnami.com/

    The above is done one time to setup your account.

  12. Type “SonarQube” in the search field and click SEARCH.
  13. Click on the icon that appears to LAUNCH.
  14. Click on the name to change it.
  15. NOTE “Debian 8” as the OS cannot be changed.
  16. Click “SHOW” to get the password into your Clipboard.
  17. Wait for the orange “REBOOT / SHUTDOWN / DELETE” to appear at the bottom of the screen.

    Look:

  18. Click “LAUNCH SSH CONSOLE”.
  19. Click to confirm the SSH pop-up.
  20. Type lsb_release -a for information about the operating system:

    No LSB modules are available.
    Distributor ID: Debian
    Description:    Debian GNU/Linux 8.9 (jessie)
    Release:        8.9
    Codename:       jessie
    

    PROTIP: This is not the very latest operating system version because it takes time to integrate.

  21. Type pwd to note the user name (carried in from Google).
  22. Type ls -al for information about files:

    apps -> /opt/bitnami/apps
    .bash_logout
    .bashrc
    .first_login_bitnami
    htdocs -> /opt/bitnami/apache2/htdocs
    .profile
    .ssh
    stack -> /opt/bitnami
    
  23. Type exit to switch back to the browser tab.
  24. Click the blue IP address (such as 35.202.3.232) for a SonarQube tab to appear.

  25. Type “Admin” for user. Click the Password field and press Ctrl+V to paste from Clipboard.
  26. Click “Log in” for the Welcome screen.

    TODO: Assign other users.

  27. TODO: Associate the IP with a host name.

    SonarQube app admin log in

  28. At SonarQube server landing page (such as http://23.236.48.147)

    You may need to add it as a security exception.

  29. Type a name of your choosing, then click Generate.

  30. Click the language (JS).
  31. Click the OS (Linux, Windows, Mac).
  32. Highlight the sonar-scanner command to copy into your Clipboard.

  33. Click Download for https://docs.sonarqube.org/display/SCAN/Analyzing+with+SonarQube+Scanner

    sonarqube-clientinstall-386x292-27346

    On a Windows machine:
    sonar-scanner-cli-3.0.3.778-windows.zip | 63.1 MB

    On a Mac:
    sonar-scanner-cli-3.0.3.778-macosx.zip | 53.9 MB

  34. Generate the token:

    sonarqube-gen-token-601x129-14487

  35. Click Finish to see the server page such as at http://35.202.3.232/projects

    Do a scan

  36. On your Mac, unzip to folder “sonar-scanner-3.0.3.778-macosx”.

    Notice it has its own Java version in the jre folder.

  37. Open a Terminal and navigate to the bin folder containing sonar-scanner.
  38. Move it to a folder in your PATH.
  39. Create or edit a shell script file using the values from the Bitnami screen:

    ./sonar-scanner \
      -Dsonar.projectKey=sonarqube-1-vm \
      -Dsonar.sources=. \
      -Dsonar.host.url=http://23.236.48.147 \
      -Dsonar.login=b0b030cd2d2cbcc664f7c708d3f136340fc4c064
    

    NOTE: Your login token will be different than this example.

    https://github.com/wilsonmar/git-utilities/…/sonar1.sh

  40. Replace the . with the folder path such as

    -Dsonar.sources=/Users/wilsonmar/gits/ng/angular4-docker-example

    Do this instead of editing /conf/sonar-scanner.properties to change default http://localhost:9000

  41. chmod 555 sonar.sh
  42. Run the sonar script.

  43. Wait for the downloading.
  44. Look for a line such as:

    INFO: ANALYSIS SUCCESSFUL, you can browse http://35.202.3.232/dashboard/index/Angular-35.202.3.232
    
  45. Copy the URL and paste it in a browser.

  46. PROTIP: The example has no Version, Tags, etc. that a “production” environment would use.

GCE SonarQube

  1. In the GCP web console, navigate to the screen where you can create an instance.

    https://console.cloud.google.com/compute/instances

  2. Click Create (a new instance).
  3. Change the instance name from instance-1 to sonarqube-1 (numbered in case you’ll have more than one).
  4. Set the zone to your closest geographical location (us-west1-a).
  5. Set machine type to f1-micro.
  6. Click Boot Disk to select Ubuntu 16.04 LTS instead of default Debian GNU/Linux 9 (stretch).

    PROTIP: GCE does not provide the lighter http://alpinelinux.org/

  7. Type a larger Size (GB) than the default 10 GB.

    WARNING: You have selected a disk size of under [200GB]. This may result in poor I/O performance. For more information, see: https://developers.google.com/compute/docs/disks#performance.
    
  8. Set firewall rules to allow ingress and egress external access to the ports SonarQube uses:

    9000 and 9092 (matching the -p flags in the docker run command below)

  9. Allow HTTP & HTTPS traffic.
  10. Click “Management, disks, networking, SSH keys”.
  11. In the Startup script field, paste script you’ve tested interactively:

    # Install Docker: 
    curl -fsSL https://get.docker.com/ | sh
    sudo docker pull sonarqube
    sudo docker run -d --name sonarqube -p 9000:9000 -p 9092:9092 sonarqube
    
  12. Click “command line” link for a pop-up of the equivalent command.
  13. Copy and paste it in a text editor to save the command for troubleshooting later.

  14. Click Create the instance. This cold-boot takes time:

    gce-startup-time-640x326

    Boot time to execute startup scripts is what varies in cold-boot performance.

  15. Click SSH to SSH into instance via the web console, using your Google credentials.
  16. In the new window, pwd to see your account home folder.
  17. To see instance console history:

    cat /var/log/syslog

    Manual startup setup

    https://cloud.google.com/solutions/mysql-remote-access

  18. If there is a UI, highlight and copy the external IP address (such as https://35.203.158.223/) and switch to a browser to paste on a browser Address.

  19. Add the port number to the address.

    BLAH TODO: Port for UI?

    TODO: Take a VM snapshot.

    https://cloud.google.com/solutions/prep-container-engine-for-prod

    Down the instance

  20. Remove image containers and volumes

  21. When done, close SSH windows.
  22. If you gave out an IP address, notify recipients about its imminent deletion.
  23. In the Google Console, click on the three dots to delete the instance.

    Colt McAnlis (@duhroach), Developer Advocate explains Google Cloud performance (enthusiastically) at https://goo.gl/RGsQlF

https://www.youtube.com/watch?v=ewHxl9A0VuI&index=2&list=PLIivdWyY5sqK5zce0-fd1Vam7oPY-s_8X

GCE SonarQube Command

Windows

https://github.com/MicrosoftDocs/Virtualization-Documentation

On Windows, output from start-up scripts is at C:\Program Files\Google\Compute Engine\sysprep\startup_script.ps1

Kubernetes Engine

gce-console-menu-244x241-11754

To reduce confusion: this service, previously named Google Container Engine, was renamed to Kubernetes Engine. The “K” in the GKE abbreviation remains because GKE is powered by Kubernetes, Google’s container orchestration manager, providing compute services above Google Compute Engine (GCE).

  1. “Kubernetes” is in the URL to the GKE home page:

    https://console.cloud.google.com/kubernetes

  2. Click “Create Cluster”.
  3. PROTIP: Rename generated “cluster-1” to contain the zone.
  4. Select zone where your others are.
  5. Note the default is Container-Optimized OS (based on Chromium OS) and 3 minion nodes in the cluster, which does not include the master.

    Workload capacity is defined by the number of Compute Engine worker nodes.

    The cluster of nodes are controlled by a K8S master.

  6. PROTIP: Attach a permanent disk for persistence.
  7. Click Create. Wait for the green checkmark to appear.
  8. Connect to the cluster. Click the cluster name to click CONNECT to Connect using Cloud Shell the CLI.

  9. Create a cluster called “bootcamp”:

gcloud container clusters create bootcamp --scopes "https://www.googleapis.com/auth/projecthosting,storage-rw"

   gcloud container clusters get-credentials cluster-1 \
      --zone us-central1-f \
      --project ${DEVSHELL_PROJECT_ID}
   

The response:

Fetching cluster endpoint and auth data.
kubeconfig entry generated for cluster-1.
   
  1. Invoke the command:

    
    kubectl get nodes
    

    If you get the following message, kubectl credentials have not been configured yet; re-run the get-credentials command above:

    The connection to the server localhost:8080 was refused - did you specify the right host or port?
    

    Sample valid response:

    NAME                                       STATUS    ROLES     AGE       VERSION
    gke-cluster-1-default-pool-8a05cb05-701j   Ready     <none>    11m       v1.7.8-gke.0
    gke-cluster-1-default-pool-8a05cb05-k4l3   Ready     <none>    11m       v1.7.8-gke.0
    gke-cluster-1-default-pool-8a05cb05-w4fm   Ready     <none>    11m       v1.7.8-gke.0
    
  2. List and expand the width of the screen:

    
    gcloud container clusters list
    

    Sample response:

    NAME       ZONE           MASTER_VERSION  MASTER_IP       MACHINE_TYPE   NODE_VERSION  NUM_NODES  STATUS
    cluster-1  us-central1-f  1.7.8-gke.0     162.222.177.56  n1-standard-1  1.7.8-gke.0   3          RUNNING
    

    If no clusters were created, no response is returned.

  3. Highlight the Endpoint IP address, copy, and paste to construct a browser URL such as:

    https://162.222.177.56/ui

    BLAH: User “system:anonymous” cannot get path “/”: “No policy matched. Unknown user "system:anonymous"”

  4. In the Console, click Show Credentials.
  5. Highlight and copy the password.

  6. Start

    
    kubectl proxy
    

    The response:

    Starting to serve on 127.0.0.1:8001

    WARNING: You are no longer able to issue commands while the proxy runs.


  1. Create new pod named “hello-node”:

    
    kubectl run hello-node \
     --image=gcr.io/${DEVSHELL_PROJECT_ID}/hello-node:v1 \
     --port=8080
    

    Sample response:

    deployment "hello-node" created
  2. View the pod just created:

    
    kubectl get pods
    

    Sample response:

    NAME                         READY     STATUS    RESTARTS   AGE
    hello-node-714049816-ztzrb   1/1       Running   0          6m
    
  3. List deployments:

    
    kubectl get deployments
    

    Sample response:

    NAME         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    hello-node   1         1         1            1           2m
    
  4. Troubleshoot:

    kubectl get events

    kubectl get services

  5. Get logs:

    kubectl logs pod-name

  6. Other commands:

    kubectl cluster-info

    kubectl config view
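
    A few other everyday commands, sketched here against the hello-node deployment created above (the replica count is just an example):

    kubectl scale deployment hello-node --replicas=4   # run more pod replicas
    kubectl rollout status deployment/hello-node       # watch a rollout complete
    kubectl describe service hello-node                # inspect the exposed service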

Kubernetes Dashboard

Kubernetes graphical dashboard (optional)

  1. Configure access to the Kubernetes cluster dashboard:

    gcloud container clusters get-credentials hello-world \
     --zone us-central1-f --project ${DEVSHELL_PROJECT_ID}
    

    Then run the Kubernetes proxy:

    kubectl proxy --port 8086
    
  2. Use the Cloud Shell Web preview feature to view a URL such as:

    https://8081-dot-3103388-dot-devshell.appspot.com/ui

  3. Click the “Connect” button for the cluster to monitor.

    See http://kubernetes.io/docs/user-guide/ui/


  1. Begin in “APIs & Services” because Services provide a single point of access (load balancer IP address and port) to specific pods.
  2. Click ENABLE…
  3. Search for Container Engine API and click it.
  4. In the gshell: gcloud compute zones list

    Create container cluster

  5. Select Zone
  6. Set “Size” (vCPUs) from 3 to 2 – the number of nodes in the cluster.

    Nodes are the primary resource that runs services on Google Container Engine.

  7. Click More to expand.
  8. Add a Label (up to 64 per resource):

    Examples: env:prod/test, owner:, contact:, team:marketing, component:backend, state:inuse.

    The size of boot disk, memory, and storage requirements can be adjusted later.

  9. Instead of clicking “Create”, click the “command” link for the equivalent gcloud CLI commands in the pop-up.

    gcloud beta container --project "mindful-marking-178415" clusters create "cluster-1" --zone "us-central1-a" --username="admin" --cluster-version "1.7.5-gke.1" --machine-type "n1-standard-1" --image-type "COS" --disk-size "100" --scopes "https://www.googleapis.com/auth/compute","https://www.googleapis.com/auth/devstorage.read_only","https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring.write","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly","https://www.googleapis.com/auth/trace.append" --num-nodes "2" --network "default" --no-enable-cloud-logging --no-enable-cloud-monitoring --subnetwork "default" --enable-legacy-authorization
    

    PROTIP: Machine-types are listed and described at https://cloud.google.com/compute/docs/machine-types

    Alternately,

    
    gcloud container clusters create bookshelf \
      --scopes "https://www.googleapis.com/auth/userinfo.email","cloud-platform" \
      --num-nodes 2
    

    The response sample (widen window to see it all):

    Creating cluster cluster-1...done.
    Created [https://container.googleapis.com/v1/projects/mindful-marking-178415/zones/us-central1-a/clusters/cluster-1].
    kubeconfig entry generated for cluster-1.
    NAME       ZONE           MASTER_VERSION  MASTER_IP      MACHINE_TYPE   NODE_VERSION  NUM_NODES  STATUS
    cluster-1  us-central1-a  1.7.5-gke.1     35.184.10.233  n1-standard-1  1.7.5         2          RUNNING
    
  10. Push the container image:

    gcloud docker -- push gcr.io/$DEVSHELL_PROJECT_ID/bookshelf

  11. Configure credentials for kubectl to reach the cluster:

    gcloud container clusters get-credentials bookshelf

  12. Use the kubectl command line tool.

    kubectl create -f bookshelf-frontend.yaml

  13. Check status of pods

    kubectl get pods

  14. Retrieve IP address:

    kubectl get services bookshelf-frontend

    Destroy cluster

    It may seem a bit premature at this point, but since Google charges by the minute, it’s better you know how to do this earlier than later. Return to this later if you don’t want to continue.

  15. Using the key information from the previous command:

    gcloud container clusters delete cluster-1 --zone us-central1-a

    2). View cloned source code for changes

  16. Use a text editor (vim or nano) to define a .yml file to define what is in pods.

  17. Build Docker

    docker build -t gcr.io/${DEVSHELL_PROJECT_ID}/hello-node:v1 .

    Sample response:

    v1: digest: sha256:6d7be8013acc422779d3de762e8094a3a2fb9db51adae4b8f34042939af259d8 size: 2002
    ...
    Successfully tagged gcr.io/cicd-182518/hello-node:v1
    
  18. Run:

    docker run -d -p 8080:8080 gcr.io/${DEVSHELL_PROJECT_ID}/hello-node:v1
    

    The response is the ID of the new container.

  19. Use Web Preview on port 8080, as specified above.

  20. List running Docker containers:

    docker ps
    CONTAINER ID        IMAGE                              COMMAND                  CREATED              STATUS              PORTS                    NAMES
    c938f3b42443        gcr.io/cicd-182518/hello-node:v1   "/bin/sh -c 'node ..."   About a minute ago   Up About a minute   0.0.0.0:8080->8080/tcp   cocky_kilby
    
  21. Stop the container by using the ID provided in the results above:

    docker stop c938f3b42443

    The response is the CONTAINER_ID.

    https://cloud.google.com/sdk/docs/scripting-gcloud

  22. Run the image:

    docker run -d -p 8080:8080 gcr.io/${DEVSHELL_PROJECT_ID}/hello-node:v1
    

    The response is a hash of the instance.

  23. Push the image to grc.io repository:

    gcloud docker -- push gcr.io/${DEVSHELL_PROJECT_ID}/hello-node:v1
    

    v1: digest: sha256:98b5c4746feb7ea1c5deec44e6e61dfbaf553dab9e5df87480a6598730c6f973 size: 10025


gcloud config set container/cluster ...
   

3). Cloud Shell instance - Remove code placeholders

4). Cloud Shell instance - package app into a Docker container

5). Cloud Shell instance - Upload the image to Container Registry

6). Deploy app to cluster

See https://codelabs.developers.google.com/codelabs/cp100-container-engine/#0

Google App Engine (GAE)

GAE is a PaaS (Platform as a Service) offering where Google manages the application infrastructure (Jetty 8, Servlet 3.1, .NET Core, NodeJs) that responds to HTTP requests.

Google Cloud Endpoints provide scaling, HA, DoS protection, TLS 1.2 SSL certs for HTTPS.

The first 26 GB of traffic each month is free.

Develop server-side code in Java, Python, Go, PHP.

Customizable 3rd-party binaries are supported with SSH access on the GAE Flexible Environment, which also enables writes to local disk.

https://cloud.google.com/appengine/docs?hl=en_US&_ga=2.237246272.-1771618146.1506638659

https://stackoverflow.com/questions/tagged/google-app-engine

Google Cloud Functions

Here, single-purpose functions are coded in JavaScript and executed in NodeJs when triggered by events occurring, such as a file upload.

Google provides a “Serverless” environment for building and connecting cloud services on a web browser.
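
A hedged sketch of deploying such a function from the CLI (the function and bucket names are assumptions; at the time of writing the command was still in beta):

    # index.js in the current directory exports a function named helloGCS
    gcloud beta functions deploy helloGCS \
        --trigger-bucket my-upload-bucket \
        --stage-bucket my-staging-bucket

After that, uploading any file to my-upload-bucket triggers an execution, whose console output appears under Stackdriver Logging.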

Google Firebase

Firebase provides APIs that handle HTTP requests on mobile devices.

Google Cloud Storage (GCS)

In gcloud on a project:

  1. Create a bucket in location ASIA, EU, or US, in this example:

    gsutil mb -l US gs://$DEVSHELL_PROJECT_ID

  2. Grant a default ACL (Access Control List) giving all users read access:

    gsutil defacl ch -u AllUsers:R gs://$DEVSHELL_PROJECT_ID

    The response:

    Updated default ACL on gs://cp100-1094/
    

gcp-storage-table-650x270-42645

| | Cloud Storage | Cloud Datastore | Bigtable | Cloud SQL (1st Gen) |
|---|---|---|---|---|
| Storage type | BLOB store | NoSQL, document | wide column NoSQL | Relational SQL |
| Overall capacity | Petabytes+ | Terabytes+ | Petabytes+ | Up to 500 GB |
| Unit size | 5 TB/object | 1 MB/entity | 10 MB/cell | standard |
| Transactions | No | Yes | No | Yes |
| Complex queries | No | No | No | Yes |

https://stackoverflow.com/questions/tagged/google-cloud-storage

Google DataStore

Provides a RESTful interface for NoSQL ACID transactions.

Cloud storage bucket classes

Standard storage for highest durability, availability, and performance with low latency, for web content distribution and video streaming.

  • (Standard) multi-regional for accessing media around the world.
  • (Standard) Regional to store data and run data analytics in a single part of the world.
  • Nearline storage for low-cost but durable data archiving, online backup, and disaster recovery of data rarely accessed.
  • Coldline storage = DRA (Durable Reduced Availability Storage) at a lower cost for roughly once-per-year access.

Google Cloud SQL

Google’s Cloud SQL is MySQL in the cloud, scaling up to 16 processor cores and 100 GB of RAM.

Google provides automatic replicas, backups, and patching.

App Engine accesses Cloud SQL databases using the Connector/J driver for Java and MySQLdb for Python.

  • git clone https://github.com/GoogleCloudPlatform/appengine-gcs-client.git

  • https://cloud.google.com/appengine/docs/python/googlecloudstorageclient/using-cloud-storage
  • https://cloud.google.com/sdk/cloud-client-libraries for Python

Tools like Toad can be used to administer Cloud SQL databases.

https://stackoverflow.com/questions/tagged/google-cloud-sql

Stackdriver for Logging

“Stackdriver” is GCP’s tool for logging, monitoring, error reporting, trace, and diagnostics, integrated across GCP and AWS.

Trace provides per-URL latency metrics.

Open source agents

Collaborations with PagerDuty, BMC, Splunk, etc.

Integrate with auto-scaling.

Integrations with Source Repository for debugging.

Big Data Services

gcp-decision-tree.png (click to pop up the image)

gcp-bigdata-menu-288x772-11619.png

  • BigQuery data warehouse analytics database streams data at 100,000 rows per second. Automatic discounts apply for long-term data storage. See Shine Technologies.

    HBase - columnar data store, Pig, RDBMS, indexing, hashing

    Storage costs 2 cents per GB per month. No charge for queries served from cache!

    Competes against Amazon Redshift.

    https://stackoverflow.com/questions/tagged/google-bigquery

  • Pub/Sub - large scale (enterprise) messaging for IoT. Scalable & flexible. Integrates with Dataflow.

  • Dataproc - a managed Hadoop, Spark, MapReduce, Hive service.

    NOTE: Even though Google published the paper on MapReduce in 2004, by about 2006 Google stopped creating new MapReduce programs due to Colossus, Dremel and Flume, externalized as BigQuery and Dataflow.

  • Dataflow - stream analytics & ETL batch processing - unified and simplified pipelines in Java and Python. Use reserved compute instances. Its competitor is AWS Kinesis.

  • ML Engine (for Machine Learning) -
  • IoT Core
  • Genomics

  • Dataprep
  • Datalab is a Jupyter notebook server using matplotlib or Google Charts for visualization. It provides an interactive tool for large-scale data exploration, transformation, and analysis.

.NET Dev Support

https://www.coursera.org/learn/develop-windows-apps-gcp Develop and Deploy Windows Applications on Google Cloud Platform class on Coursera

https://cloud.google.com/dotnet/ Windows and .NET support on Google Cloud Platform.

We will build a simple ASP.NET app, deploy to Google Compute Engine and take a look at some of the tools and APIs available to .NET developers on Google Cloud Platform.

https://cloud.google.com/sdk/docs/quickstart-windows Google Cloud SDK for Windows (gcloud)

Installed with Cloud SDK for Windows is https://googlecloudplatform.github.io/google-cloud-powershell cmdlets for accessing and manipulating GCP resources

https://googlecloudplatform.github.io/google-cloud-dotnet/ Google Cloud Client Libraries for .NET (new) on NuGet for BigQuery, Datastore, Pub/Sub, Storage, Logging.

https://developers.google.com/api-client/dotnet/ Google API Client Libraries for .NET https://github.com/GoogleCloudPlatform/dotnet-docs-samples

https://cloud.google.com/tools/visual-studio/docs/ available on Visual Studio Gallery. Google Cloud Explorer accesses Compute Engine, Cloud Storage, Cloud SQL

Load balance

Scale and Load Balance Instances and Apps

  1. Get a GCP account
  2. Define a project with billing enabled and the default network configured
  3. An admin account with at least project owner role.
  4. Create an instance template with a web app on it
  5. Create a managed instance group that uses the template to scale
  6. Create an HTTP load balancer that scales instances based on traffic and distributes load across availability zones
  7. Define a firewall rule for HTTP traffic.
  8. Test scaling and balancing under load.
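
A hedged gcloud sketch of the instance template, managed group, autoscaling, and firewall pieces (names, zone, sizes, and the Apache startup script are assumptions):

    # Instance template whose startup script installs a web server
    gcloud compute instance-templates create web-template \
        --machine-type f1-micro \
        --image-family debian-9 --image-project debian-cloud \
        --metadata startup-script='#! /bin/bash
    apt-get update && apt-get install -y apache2'
    # Managed instance group built from the template
    gcloud compute instance-groups managed create web-group \
        --base-instance-name web --template web-template \
        --size 2 --zone us-central1-f
    # Autoscale the group on CPU utilization
    gcloud compute instance-groups managed set-autoscaling web-group \
        --max-num-replicas 5 --target-cpu-utilization 0.6 --zone us-central1-f
    # Firewall rule for HTTP traffic
    gcloud compute firewall-rules create allow-http --allow tcp:80

Creating the HTTP load balancer itself (health check, backend service, URL map, proxy, and forwarding rule) takes several more commands, which the “Set Up Network and HTTP Load Balancers” lab above walks through.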

Why still charges?

On a Google Cloud account which had nothing running, my bill at the end of the month was still $35 for “Compute Engine Network Load Balancing: Forwarding Rule Additional Service Charge”.

CAUTION: Each exposed Kubernetes service (type == LoadBalancer) creates a forwarding rule, and Google’s shutdown script doesn’t remove the forwarding rules created.

  1. To fix it, per https://cloud.google.com/compute/docs/load-balancing/network/forwarding-rules

    
    gcloud compute forwarding-rules list
    

    For a list such as this (scroll to the right for more):

    NAME                              REGION       IP_ADDRESS      IP_PROTOCOL  TARGET
    a07fc7696d8f411e791c442010af0008  us-central1  35.188.102.120  TCP          us-central1/targetPools/a07fc7696d8f411e791c442010af0008
    

    Iteratively:

  2. Copy each item’s NAME listed to build command:

    
    gcloud compute forwarding-rules delete [FORWARDING_RULE_NAME]
    
  3. You’ll be prompted for a region each time, and to confirm with a yes.

TODO: How to automate this?
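
One possible answer to that TODO, sketched under the assumption that every regional forwarding rule listed in the project really is orphaned (review the list first, because this deletes them all):

    # Delete every regional forwarding rule in the current project.
    # Global rules (if any) would need --global instead of --region.
    gcloud compute forwarding-rules list \
      --format='csv[no-heading](name,region.basename())' |
    while IFS=, read NAME REGION; do
      gcloud compute forwarding-rules delete "$NAME" --region "$REGION" --quiet
    done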

Learning resources

https://codelabs.developers.google.com/

Running Node.js on a Virtual Machine

http://www.roitraining.com/google-cloud-platform-public-schedule/ in the US and UK $599 per day

Lynn Langit created several video courses early in 2013/14 when Google Fiber was only available in Kansas City:

https://deis.com/blog/2016/first-kubernetes-cluster-gke/

https://hub.docker.com/r/lucasamorim/gcloud/

https://github.com/campoy/go-web-workshop

http://www.anengineersdiary.com/2017/04/google-cloud-platform-tutorial-series_71.html

More on cloud

This is one of a series on cloud computing: