
The cloud that runs on fast Google Fiber


Overview

Here is a hands-on introduction to the Google Cloud Platform (GCP), and to getting certified as a Google Certified Professional (GCP).

Concepts are introduced after you take a small action, followed by succinct commentary, with links to more information.

Free $300 account for 60 days

In US regions, new accounts get $300 of credit for 12 months.

PROTIP: Create several emails, each with a different credit card.

Google Cloud Platform free trial ends after 60 days or when your $300 in credits are used up.

Throughout, Google does not charge for a low level of usage:

  • No more than 8 cores at once across all instances
  • No more than 100 GB of solid state disk (SSD) space
  • Max. 2TB (2,000 GB) total persistent standard disk space

Google bills in minute-level increments (with a 10-minute minimum charge), unlike Amazon which charges by the hour.
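For example, an instance that runs for 4 minutes is billed for the 10-minute minimum, while one that runs for 61.5 minutes is billed for 62 minutes.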

  1. In Chrome or Firefox browser, get a Gmail address.
  2. Go to the free trial page:

    https://console.developers.google.com/freetrial

    Alternately, go to Google Cloud’s marketing home page at:

    https://cloud.google.com

  3. Click the Try It Free button. Complete the registration, click Agree and continue, then click Start my new trial.

  4. Configure at console.cloud.google.com

  5. PROTIP: Bookmark this URL.

    PROTIP: Google remembers your last project and its region, and gives them to you even if you do not specify them in the URL.

    CAUTION: Your bill can suddenly jump up to thousands of dollars a day, with no explanation. Configure budgets and alerts to put limits on spending.


Google Certified Professional (GCP) Certification Exams

https://cloud.google.com/certification/ Google has three certifications. Each is a 2-hour, $200 exam taken in person at a Kryterion Test Center.

As of December 2016, Google pared its exam offerings down to 3 exams:

  1. Google Certified Professional - Cloud Architect
  2. Google Certified Professional - Data Engineer (for big data)
  3. Google Certified Associate - G Suite Administrator (Gmail, Google Drive, etc.)

NOTE: Unlike Amazon, there is no “Associate” level for the cloud (non-G Suite) certifications.

See https://cloud.google.com/training

Cloud Architect

Cloud Architect – design, build and manage solutions on Google Cloud Platform.

Case studies:

https://medium.com/@earlg3/google-cloud-architect-exam-study-materials-5ab327b62bc8

https://www.youtube.com/playlist?list=PLIivdWyY5sqI8RuUibiH8sMb1ExIw0lAR Google Cloud Next ’17 Conference Videos

https://www.coursera.org/specializations/gcp-architecture

Data Engineer

Data Engineer certification Guide

https://cloud.google.com/training/courses/data-engineering is used within the Data Engineering on Google Cloud Platform Specialization on Coursera. It is a series of five one-week classes ($49 per month after a 7-day trial). These have videos that sync with transcript text, but no hints to quiz answers and no live help.

  1. Building Resilient Streaming Systems on Google Cloud Platform $99 USD

  2. Leveraging Unstructured Data with Cloud Dataproc on Google Cloud Platform $59 USD

  3. Google Cloud Platform Big Data and Machine Learning Fundamentals $59 USD

  4. Serverless Data Analysis with Google BigQuery and Cloud Dataflow $99 USD

  5. Serverless Machine Learning with Tensorflow on Google Cloud Platform $99 USD by Valliappa Lakshmanan uses the Tensorflow Cloud ML service to learn a map of New York City by analyzing taxi cab locations.

    • Vision (image analysis and sentiment)
    • Speech (recognizes 110 languages, dictation)
    • Translate
    • Personalization

Why Google Cloud?

As with other clouds:

  • “Pay as you go” rather than a significant up-front purchase, which eats time
  • No software to install (and go stale)
  • Google scale - 9 cloud regions containing 27 zones, plus 90 edge cache locations

But Google has the fastest fiber network, enabling high performance across the world.

GCP Console / Dashboard

https://console.cloud.google.com/home/dashboard
displays panes for your project from among the list obtained by clicking the “hamburger” menu icon at the upper left corner. The major sections of this menu are:

  • COMPUTE (App Engine, Compute Engine, Container Engine)
  • STORAGE (Cloud Bigtable, Cloud Datastore, Storage, Cloud SQL, Spanner)
  • NETWORKING (VPC)
  • STACKDRIVER (Monitoring, Debug, Trace, Logging, Error Reporting)
  • TOOLS (Container Registry, Source Repositories, Deployment Manager, Endpoints)
  • BIG DATA (BigQuery, Pub/Sub, Dataproc, Dataflow, ML Engine, Genomics, IoT Core, Dataprep)

New Project

Project Name (aka “Friendly Name”)

Project ID is unique among all other projects at Google and cannot be changed.

The “cp100” project is a demo project available to all accounts.

Service accounts are automatically created for each project:

project_number@developer.gserviceaccount.com
project_id@developer.gserviceaccount.com

Permissions

The two types of IAM roles on GCP are primitive and predefined. The primitive roles are Owner, Editor, Viewer, and Billing Administrator.

With GCP’s hierarchical format, the parent policy always overrules a child policy.

Roles (such as compute.instanceAdmin) are collections of permissions that grant access to a given resource, in the form:

service.resource.verb
compute.instances.delete

Primitive roles (Owner, Editor, Viewer) apply across the whole project.
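A minimal sketch of granting a predefined role with the gcloud CLI (the project ID, member, and role shown here are assumptions, not from the text above):

    gcloud projects add-iam-policy-binding my-project-id \
      --member "user:jane@example.com" \
      --role "roles/compute.instanceAdmin.v1"   # predefined Compute Instance Admin role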

Google CLIs

Google provides three command-line tools:

  • gcloud - the main CLI for most GCP services
  • gsutil - to access Cloud Storage
  • bq - for BigQuery tasks
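A quick sanity check for each tool once the SDK is installed and authenticated (a sketch; output depends on your project):

    gcloud config list     # show the active account and project
    gsutil ls              # list Cloud Storage buckets in the current project
    bq ls                  # list BigQuery datasets in the current project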

https://cloud.google.com/sdk/docs/quickstart-windows Google Cloud SDK for Windows (gcloud)

Cloud Shell

  1. In the Chrome browser, the overall help is at

    https://cloud.google.com/shell/docs

    The command reference is at:

    https://cloud.google.com/sdk/gcloud/reference/

  2. Click the icon in the Google Cloud Platform Console:


    This provides command line access on a web browser, with nothing to install.


  3. Click “START CLOUD SHELL”.

  4. Click the pencil icon for the built-in text editor.

  5. Edit text using the built-in nano or vim.

  6. Use Boost mode to run Docker.

gcloud CLI install

QUESTION: Why is there not a Homebrew install for this?

  1. Go to https://cloud.google.com/sdk/downloads
  2. Click the link for Mac OS X (x86_64), such as “google-cloud-sdk-173.0.0-darwin-x86_64.tar.gz”, to download it to your Downloads folder.
  3. Double-click the file to unzip it (from 13.9 MB to a 100.6 MB folder). If you’re not seeing a folder in Finder, use another unzip utility.
  4. Move the folder to your home folder.
  5. Edit your ~/.bash_profile to add the path to that folder in the $PATH variable.

    export PATH="$PATH:$HOME/.google-cloud-sdk/bin"
  6. PROTIP: Add an alias to get to the folder quickly:

    alias gcs='cd ~/.google-cloud-sdk'
  7. Use the alias to navigate to the folder:

    gcs

    Set permissions?

  8. Install libraries (view options with the --help argument, then run again without it to install):

    On Linux or Mac OS X:

    ./install.sh --help

    On Windows:

    .\install.bat --help
  9. Initialize the SDK:

    ./bin/gcloud init

gcloud CLI commands

Format: https://cloud.google.com/sdk/gcloud/reference/

gcloud [GROUP] [SUBGROUP] [COMMAND] [--flags] [arguments]
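For example (a sketch; the zone value is an assumption):

    gcloud compute instances list --zones us-east1-b
    # GROUP = compute, SUBGROUP = instances, COMMAND = list, flag = --zones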

  1. See docs on common GCP tasks at
    https://cloud.google.com/sdk/gcloud/reference

    Cloud Shell has common Linux command-line tools and authentication pre-installed.

  2. Run df to see that /dev/sdb1 has 5,028,480 KB ≈ 5 GB of persistent storage:

    Filesystem     1K-blocks     Used Available Use% Mounted on
    none            25669948 16520376   7822572  68% /
    tmpfs             872656        0    872656   0% /dev
    tmpfs             872656        0    872656   0% /sys/fs/cgroup
    /dev/sdb1        5028480    10332   4739672   1% /home
    /dev/sda1       25669948 16520376   7822572  68% /etc/hosts
    shm                65536        0     65536   0% /dev/shm
    
  3. Confirm the operating system version:

    uname -a

    The answer shows Debian with the 3.16 kernel:

     Linux cs-6000-devshell-vm-5260d9c4-474a-47de-a143-ea05b695c057-5a 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux
     
  4. Get syntax of commands

    gcloud help

    Sessions have a 1 hour timeout.

    Language support for Java, Go, Python, Node, PHP, Ruby

    Not meant for high computation use.

  5. Get the list of projects (showing project IDs and numbers):

    gcloud projects list

    PROTIP: The shell variable $DEVSHELL_PROJECT_ID can be used to refer to the project ID of the project used to start the Cloud Shell session.

    echo $DEVSHELL_PROJECT_ID

  6. Get list of zone codes:

    gcloud compute zones list

  7. Set your zone:

    gcloud config set compute/zone us-east1-b

  8. List configurations:

    gcloud config list

    Sample response:

    [core]
    disable_usage_reporting = True
     
    Your active configuration is: [default]
     
    Updates are available for some Cloud SDK components.  To install them,
    please run:
      $ gcloud components update
    

    Database

    gcloud sql instances patch mysql \
      --authorized-networks "203.0.113.20/32"
    

    Deploy Python

  9. Replace boilerplate “your-bucket-name” with your own project ID:

    sed -i s/your-bucket-name/$DEVSHELL_PROJECT_ID/ config.py

  10. View the list of dependencies needed by your custom Python program:

    cat requirements.txt

  11. Download the dependencies:

    pip install -r requirements.txt -t lib

  12. Deploy the current assembled folder:

    gcloud app deploy --quiet

  13. Exit the Cloud Shell:

    exit

Cloud Tools for PowerShell

https://cloud.google.com/powershell/

https://cloud.google.com/tools/powershell/docs/

Install

  1. In a PowerShell opened for Administrator:

    Install-Module GoogleCloud

    The response:

    Untrusted repository
    You are installing the modules from an untrusted repository. If you trust this 
    repository, change its InstallationPolicy value by running the Set-PSRepository
     cmdlet. Are you sure you want to install the modules from 'PSGallery'?
    [Y] Yes  [A] Yes to All  [N] No  [L] No to All  [S] Suspend  [?] Help 
    (default is "N"):
    
  2. Type A (Yes to All).

  3. Get all buckets for the current project, for a specific project, or a specific bucket:

    $currentProjBuckets = Get-GcsBucket
    $specificProjBuckets = Get-GcsBucket -Project my-project-1
    $bucket = Get-GcsBucket -Name my-bucket-name
    
  4. Navigate to Google Storage (like a drive):

    cd gs:\

  5. Show the available buckets (like directories):

    ls

  6. Create a new bucket

    mkdir my-new-bucket

  7. Help

    Get-Help New-GcsBucket

TOOLS

Source Code Repository

https://cloud.google.com/source-repositories Google’s Cloud Source Repositories service hosts Git repositories on GCP. (The Container Registry at gcr.io is covered below.)

It’s a full-featured Git repository hosted on GCP, free for up to 5 project-users per billing account, for up to 50GB free storage and 50GB free egress per month.

Mirror from GitHub


  1. PROTIP: On GitHub.com, login to the account you want to use (in the same browser).
  2. PROTIP: Highlight and copy the name of the repository you want to mirror on Google.
  3. Create another browser tab (so they share the credentials established in the steps above).
  4. https://console.cloud.google.com/code is the Google Code Console.
  5. Click “Get Started” if it appears.
  6. PROTIP: For the repository name, paste or type the same name as the repo you want to mirror from GitHub.

    BLAH: Repository names can only contain alphanumeric characters, underscores or dashes.

  7. Click CREATE to confirm name.


  8. Click on “Automatically Mirror from GitHub”.
  9. Select GitHub or Bitbucket in the “Choose a Repository” list.
  10. Click Grant to the repo to be linked (if it appears). Then type your GitHub password.
  11. Click the green “Authorize Google-Cloud-Development” button.
  12. Choose the repository. Click the consent box. CONNECT.

    You should get an email “[GitHub] A new public key was added” about the Google Connected Repository.

  13. Commit a change to GitHub (push from your local copy or interactively on GitHub.com).
  14. Click the clock icon on Google Code Console to toggle commit history.
  15. Click the SHA hash to view changes.
  16. Click on the changed file path to see its text comparing two versions. Scroll down.
  17. Click “View source at this commit” to make a “git checkout” of the whole folder.
  18. Click the “Source code” menu for the default list of folders and files.
  19. Select the master branch.

    To disconnect a hosted repository:

  20. Click Repositories on the left menu.
  21. Click the settings icon (with the three vertical dots to the far right) on the same line of the repo you want disconnected.
  22. Confirm Disconnect.

    Create new repo in CLI

  23. Be at the project you want.
  24. Create a repository.
  25. Click the CLI icon.
  26. Click the wrench to adjust background color, etc.

  27. Create a file using the source browser.

  28. Make it a Git repository (a Git client is built-in):

    git init

  29. Configure the Git credential helper:

    git config credential.helper gcloud.sh

  30. Define the remote:

    git remote add google https://source.developers.google.com/p/cp100-1094/r/helloworld

  31. Push all branches to the remote:

    git push --all google

  32. To transfer a file within gcloud CLI:

    gsutil cp *.txt gs://cp-100-demo

Container Registry

https://console.cloud.google.com/gcr - Google’s Container Registry console is used to control what is in Google’s Container Registry (GCR). It is a service separate from GKE. It stores secure, private Docker images for deployments. Like GitHub, it offers build triggers.
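A sketch of pushing a locally built image into the registry (the image name “my-app” is an assumption; newer SDK versions use a Docker credential helper with plain “docker push” instead of “gcloud docker”):

    docker tag my-app gcr.io/$DEVSHELL_PROJECT_ID/my-app
    gcloud docker -- push gcr.io/$DEVSHELL_PROJECT_ID/my-app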


Deployment Manager

Deployment Manager creates resources.

Cloud Launcher uses .yaml templates describing the environment, which makes for repeatability.
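A minimal sketch of creating a deployment from such a template with the CLI (the deployment name and the vm.yaml config file are assumptions):

    gcloud deployment-manager deployments create my-first-deployment --config vm.yaml
    gcloud deployment-manager deployments describe my-first-deployment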

Endpoints (APIs)

Google Cloud Endpoints lets you manage, control access to, and monitor custom APIs, which can be kept private.

REST API

  1. Enable the API on Console.

  2. For more on working with the Google API Explorer to test RESTful APIs:

    https://developers.google.com/apis-explorer

    PROTIP: Although APIs are in alphabetical order, some services are named starting with “Cloud” or “Google” or “Google Cloud”. Press Ctrl+F to search.

SQL Servers on GCE: (2012, 2014, 2016)

  • SQL Server Standard
  • SQL Server Web
  • SQL Server Enterprise

API Explorer site: GCEssentials_ConsoleTour

Authentication uses OAuth2 (JWT) and JSON.

Google NETWORKING

Google creates all instances with a private (internal) IP address such as 10.142.3.2.

One public IP (such as 35.185.115.31) is optionally assigned to a resource. The IP can be ephemeral (from a pool) or static (reserved). Unassigned static IPs cost $0.01 per hour (24 cents per day).

Connect via VPN using IPsec to encrypt traffic.

Google Cloud Router supports dynamic routing between Google Cloud Platform and corporate networks.

HTTP Load Balancing ensures only healthy instances handle traffic across regions.

Google Cloud Interconnect:

  • Carrier Interconnect - Enterprise-grade connections provided by carrier service providers
  • Direct Peering - connect business directly to Google
  • CDN Interconnect - CDN providers link with Google’s edge network

Google COMPUTE Cloud Services


From left to right, the offerings range from raw IaaS controlled by you to PaaS highly managed by Google for “NoOps”.

|                  | Compute Engine              | Container Engine          | App Engine Standard   | App Engine Flexible       |
|------------------|-----------------------------|---------------------------|-----------------------|---------------------------|
| Service model    | IaaS                        | Hybrid                    | PaaS                  | PaaS                      |
| Language support | Any                         | Any                       | Java, Python, Go, PHP | Any                       |
| Primary use case | General computing workloads | Container-based workloads | Web & mobile apps     | Container-based workloads |

https://cloudplatform.googleblog.com/2017/07/choosing-the-right-compute-option-in-GCP-a-decision-tree.html

The engines of GCP:

Google Compute Engine

GCE offers the most control but also the most work (operational overhead).

Preemptible instances are cheaper but can be taken away at any time, like Amazon’s spot instances.

Google provides load balancers, VPNs, firewalls.

Use GCE where you need to select the size of disks, memory, and CPU types, or where you need:

  • use GPUs (Graphic Processing Units)
  • custom OS kernels
  • specifically licensed software
  • protocols beyond HTTP/S
  • orchestration of multiple containers

GCE is an IaaS (Infrastructure as a Service) offering of instances, NOT using Kubernetes automatically like GKE. Use it to migrate on-premises solutions to the cloud.
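A sketch of creating a small instance from the CLI (the name, zone, machine type, and image family shown are assumptions):

    gcloud compute instances create my-vm \
      --zone us-east1-b \
      --machine-type f1-micro \
      --image-family debian-8 \
      --image-project debian-cloud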

https://cloud.google.com/compute/docs/?hl=en_US&_ga=2.131668815.-1771618146.1506638659

https://stackoverflow.com/questions/tagged/google-compute-engine

GCE SonarQube BitNami

One alternative is to use Bitnami

  1. Browser at https://google.bitnami.com/vms/new?image_id=4FUcoGA
  2. Click Account for https://google.bitnami.com/services
  3. Add Project
  4. Set up a Bitnami Vault password.
  5. PROTIP: Use 1Password to generate a strong password and store it.
  6. Agree to really open sharing with Bitnami:

    • View and manage your Google Compute Engine resources
    • View and manage your data across Google Cloud Platform services
    • Manage your data in Google Cloud Storage

    CAUTION: This may be over-sharing for some.

  7. Click “Select an existing Project…” to select one in the list that appears. Continue.
  8. Click “Enable Deployment Manager (DM) API” to open another browser tab at https://console.developers.google.com/project/attiopinfosys/apiui/apiview/deploymentmanager
  9. If the blue “DISABLE” appears, then it’s enabled.
  10. Return to the Bitnami tab to click “CONTINUE”.
  11. Click BROWSE for the Library at https://google.bitnami.com/

    The above is done one time to setup your account.

  12. Type “SonarQube” in the search field and click SEARCH.
  13. Click on the icon that appears to LAUNCH.
  14. Click on the name to change it.
  15. NOTE: The OS is “Debian 8”, which cannot be changed.
  16. Click “SHOW” to copy the password into your Clipboard.
  17. Wait for the orange “REBOOT / SHUTDOWN / DELETE” to appear at the bottom of the screen.


  18. Click “LAUNCH SSH CONSOLE”.
  19. Click to confirm the SSH pop-up.
  20. Type lsb_release -a for information about the operating system:

    No LSB modules are available.
    Distributor ID: Debian
    Description:    Debian GNU/Linux 8.9 (jessie)
    Release:        8.9
    Codename:       jessie
    

    PROTIP: This is not the very latest operating system version because it takes time to integrate.

  21. Type pwd to note the user name (carried in from Google).
  22. Type ls -al for information about files:

    apps -> /opt/bitnami/apps
    .bash_logout
    .bashrc
    .first_login_bitnami
    htdocs -> /opt/bitnami/apache2/htdocs
    .profile
    .ssh
    stack -> /opt/bitnami
    
  23. Type exit to switch back to the browser tab.
  24. Click the blue IP address (such as 35.202.3.232) for a SonarQube tab to appear.

  25. Type “Admin” for user. Click the Password field and press Ctrl+V to paste from Clipboard.
  26. Click “Log in” for the Welcome screen.

    TODO: Assign other users.

  27. TODO: Associate the IP with a host name.

    SonarQube app admin log in

  28. At SonarQube server landing page (such as https://35.202.3.232)

    You may need to add it as a security exception.

  29. Type a name of your choosing, then click Generate.
  30. Click the language (JS).
  31. Click the OS (Linux, Windows, Mac).
  32. Highlight the sonar-scanner command to copy into your Clipboard.
  33. Click Download for https://docs.sonarqube.org/display/SCAN/Analyzing+with+SonarQube+Scanner

    On a Windows machine:
    sonar-scanner-cli-3.0.3.778-windows.zip | 63.1 MB

    On a Mac:
    sonar-scanner-cli-3.0.3.778-macosx.zip | 53.9 MB

  34. Click Finish to see the server page such as at http://35.202.3.232/projects

    Do a scan

  35. On your Mac, unzip to folder “sonar-scanner-3.0.3.778-macosx”.

    Notice it has its own Java version in the jre folder.

  36. Open a Terminal and navigate to the bin folder containing sonar-scanner.
  37. Switch back to the Bitnami screen to copy the command:

    ./sonar-scanner \
      -Dsonar.projectKey=Angular-35.202.3.232 \
      -Dsonar.sources=. \
      -Dsonar.host.url=http://35.202.3.232 \
      -Dsonar.login=b0b030cd2d2cbcc664f7c708d3f136340fc4c064
    

    Do this instead of editing conf/sonar-scanner.properties to change the default http://localhost:9000.

  38. Save the command in a shell script (such as sonargo), then change the -Dsonar.sources path.
  39. chmod 555 sonargo
  40. Run the sonargo script.

  41. Wait for the downloading.
  42. Look for a line such as:

    INFO: ANALYSIS SUCCESSFUL, you can browse http://35.202.3.232/dashboard/index/Angular-35.202.3.232
    
  43. Copy the URL and paste it in a browser.

  44. PROTIP: The example has no Version, Tags, etc. that a “production” environment would use.

GCE SonarQube

  1. In the GCP web console, navigate to the screen where you can create an instance.

    https://console.cloud.google.com/compute/instances

  2. Click Create (a new instance).
  3. Change the instance name from instance-1 to sonarqube-1 (numbered in case you’ll have more than one).
  4. Set the zone to your closest geographical location (us-west1-a).
  5. Set machine type to f1-micro.
  6. Click Boot Disk to select Ubuntu 16.04 LTS instead of default Debian GNU/Linux 9 (stretch).

    PROTIP: GCE does not provide the lighter Alpine Linux (http://alpinelinux.org/).

  7. Type a larger Size (GB) than the default 10 GB.

    WARNING: You have selected a disk size of under [200GB]. This may result in poor I/O performance. For more information, see: https://developers.google.com/compute/docs/disks#performance.
    
  8. Set Firewall rules to allow ingress and egress through external access to ports 9000 and 9092 (the same ports mapped in the docker run command below).

  9. Allow HTTP & HTTPS traffic.
  10. Click “Management, disks, networking, SSH keys”.
  11. In the Startup script field, paste script you’ve tested interactively:

    # Install Docker: 
    curl -fsSL https://get.docker.com/ | sh
    # Pull the SonarQube image and run it, exposing its web (9000) and embedded database (9092) ports:
    sudo docker pull sonarqube
    sudo docker run -d --name sonarqube -p 9000:9000 -p 9092:9092 sonarqube
    
  12. Click “command line” link for a pop-up of the equivalent command.
  13. Copy and paste it in a text editor to save the command for troubleshooting later.

  14. Click Create the instance. This cold-boot takes time:


    The time to execute startup scripts is what varies in cold-boot performance.

  15. Click SSH to SSH into instance via the web console, using your Google credentials.
  16. In the new window, run pwd to see your account home folder.
  17. To see instance console history:

    cat /var/log/syslog

    Manual startup setup

    https://cloud.google.com/solutions/mysql-remote-access

  18. If there is a UI, highlight and copy the external IP address (such as https://35.203.158.223/), then switch to a browser and paste it into the address bar.

  19. Add the port number to the address.

    BLAH TODO: Port for UI?

    TODO: Take a VM snapshot.

    https://cloud.google.com/solutions/prep-container-engine-for-prod

    Down the instance

  20. Remove the Docker containers, images, and volumes.

  21. When done, close SSH windows.
  22. If you gave out an IP address, notify recipients about its imminent deletion.
  23. In the Google Console, click on the three dots to delete the instance.

    Colt McAnlis (@duhroach), Developer Advocate, explains Google Cloud performance (enthusiastically) at https://goo.gl/RGsQlF

https://www.youtube.com/watch?v=ewHxl9A0VuI&index=2&list=PLIivdWyY5sqK5zce0-fd1Vam7oPY-s_8X

GCE SonarQube Command

Windows

https://github.com/MicrosoftDocs/Virtualization-Documentation

On Windows, output from start-up scripts is at C:\Program Files\Google\Compute Engine\sysprep\startup_script.ps1

GKE (Google Container Engine)


The “K” is there because GKE is powered by Kubernetes, Google’s container orchestration manager, providing compute services above Google Compute Engine (GCE). “Kubernetes” is in the URL to the GKE home page:

https://console.cloud.google.com/kubernetes

  1. Begin in “APIs & Services” because Services provide a single point of access (load balancer IP address and port) to specific pods.
  2. Click ENABLE…
  3. Search for Container Engine API and click it.
  4. In the gshell: gcloud compute zones list

Workload capacity is defined by the number of Compute Engine worker nodes.

The cluster of nodes is controlled by a Kubernetes master.

Pods are a group of containers within a node which share IP addresses, hostname, and other resources. Pods abstract the network and storage away from containers for easy movement. Pods have short lifespans; they are deleted and recreated as necessary.

Pods are replicated across several Compute Engine nodes.


A replication controller automatically adds or removes pods to ensure that the specified number of pod replicas are running across nodes. This makes GKE “self healing”, providing high availability and reliability, with “autoscaling” up and down based on demand.
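A sketch of what that looks like from the command line once a cluster and workload exist (the deployment name “bookshelf-frontend” is reused from the example further below and is an assumption):

    kubectl scale deployment bookshelf-frontend --replicas=3   # ask for 3 replicas
    kubectl get pods                                           # watch pods until 3 are Running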

Create container cluster

  1. Select Zone
  2. Set “Size” (vCPUs) from 3 to 2 – the number of nodes in the cluster.

    Nodes are the primary resource that runs services on Google Container Engine.

  3. Click More to expand.
  4. Add a Label.

    The size of boot disk, memory, and storage requirements can be adjusted later.

  5. Instead of clicking “Create”, click the “command” link for the equivalent gcloud CLI commands in the pop-up.

    gcloud beta container --project "mindful-marking-178415" clusters create "cluster-1" --zone "us-central1-a" --username="admin" --cluster-version "1.7.5-gke.1" --machine-type "n1-standard-1" --image-type "COS" --disk-size "100" --scopes "https://www.googleapis.com/auth/compute","https://www.googleapis.com/auth/devstorage.read_only","https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring.write","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly","https://www.googleapis.com/auth/trace.append" --num-nodes "2" --network "default" --no-enable-cloud-logging --no-enable-cloud-monitoring --subnetwork "default" --enable-legacy-authorization
    

    Alternately,

    
    gcloud container clusters create bookshelf \
      --scopes "https://www.googleapis.com/auth/userinfo.email","cloud-platform" \
      --num-nodes 2
    

    The response sample (widen window to see it all):

    Creating cluster cluster-1...done.
    Created [https://container.googleapis.com/v1/projects/mindful-marking-178415/zones/us-central1-a/clusters/cluster-1].
    kubeconfig entry generated for cluster-1.
    NAME       ZONE           MASTER_VERSION  MASTER_IP      MACHINE_TYPE   NODE_VERSION  NUM_NODES  STATUS
    cluster-1  us-central1-a  1.7.5-gke.1     35.184.10.233  n1-standard-1  1.7.5         2          RUNNING
    
  6. The list of nodes (Compute Engine instances) can be obtained again using this command:

    gcloud compute instances list

  7. Push the container image to the Container Registry:

    gcloud docker -- push gcr.io/$DEVSHELL_PROJECT_ID/bookshelf

  8. Configure credentials for the cluster:

    gcloud container clusters get-credentials bookshelf

  9. Use the kubectl command line tool.

    kubectl create -f bookshelf-frontend.yaml

  10. Check status of pods

    kubectl get pods

  11. Retrieve IP address:

    kubectl get services bookshelf-frontend

    Destroy cluster

    It may seem a bit premature at this point, but since Google charges by the minute, it’s better you know how to do this earlier than later. Return to this later if you don’t want to continue.

  12. Using the key information from the previous command:

    gcloud container clusters delete cluster-1 --zone us-central1-a

    2). View cloned source code for changes

  13. Use a text editor (vim or nano) to create a .yml file that defines what is in pods.

  14. Build Docker

    docker build -t gcr.io/…

    gcloud config set container/cluster …

3). Cloud Shell instance - Remove code placeholders

4). Cloud Shell instance - package app into a Docker container

5). Cloud Shell instance - Upload the image to Container Registry

6). Deploy app to cluster

See https://codelabs.developers.google.com/codelabs/cp100-container-engine/#0

Google App Engine (GAE)

GAE is a PaaS (Platform as a Service) offering where Google manages the application infrastructure (Jetty 8, Servlet 3.1, .NET Core, NodeJs) that responds to HTTP requests.

Google Cloud Endpoints provide scaling, HA, DoS protection, TLS 1.2 SSL certs for HTTPS.

The first 26 GB of traffic each month is free.

Develop server-side code in Java, Python, Go, PHP.

Customizable 3rd-party binaries are supported with SSH access on the GAE Flexible Environment, which also enables writes to local disk.

https://cloud.google.com/appengine/docs?hl=en_US&_ga=2.237246272.-1771618146.1506638659

https://stackoverflow.com/questions/tagged/google-app-engine

Google Cloud Functions

Here, single-purpose functions are coded in JavaScript and executed in NodeJs when triggered by events, such as a file upload.

Google provides a “Serverless” environment for building and connecting cloud services on a web browser.
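A sketch of deploying such a function from the CLI (the function name and bucket names are assumptions; 2017-era SDKs require the “gcloud beta functions” command group):

    gcloud beta functions deploy helloGCS \
      --trigger-bucket my-upload-bucket \
      --stage-bucket my-staging-bucket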

Google Firebase

Handles HTTP requests on mobile devices.

Google Cloud Storage (GCS)

In gcloud on a project:

  1. Create a bucket in location ASIA, EU, or US, in this example:

    gsutil mb -l US gs://$DEVSHELL_PROJECT_ID

  2. Grant Default ACL (Access Control List) to All users:

    gsutil defacl ch -u AllUsers:R gs://$DEVSHELL_PROJECT_ID

    The response:

    Updated default ACL on gs://cp100-1094/
    

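With that default ACL in place, newly uploaded objects are publicly readable. A quick sketch (hello.txt is an assumed local file):

    gsutil cp hello.txt gs://$DEVSHELL_PROJECT_ID/
    gsutil ls gs://$DEVSHELL_PROJECT_ID/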

|                  | Cloud Storage | Cloud Datastore | Bigtable          | Cloud SQL (1st Gen) |
|------------------|---------------|-----------------|-------------------|---------------------|
| Storage type     | BLOB store    | NoSQL, document | Wide-column NoSQL | Relational SQL      |
| Overall capacity | Petabytes+    | Terabytes+      | Petabytes+        | Up to 500 GB        |
| Unit size        | 5 TB/object   | 1 MB/entity     | 10 MB/cell        | standard            |
| Transactions     | No            | Yes             | No                | Yes                 |
| Complex queries  | No            | No              | No                | Yes                 |

https://stackoverflow.com/questions/tagged/google-cloud-storage

Google DataStore

Provides a RESTful interface for NoSQL ACID transactions.

Cloud storage bucket classes

Standard storage offers the highest durability, availability, and performance with low latency, for web content distribution and video streaming.

  • (Standard) Multi-regional for accessing media around the world.
  • (Standard) Regional to store data and run data analytics in a single part of the world.
  • Nearline storage for low-cost but durable data archiving, online backup, and disaster recovery of data rarely accessed.
  • Coldline storage = DRA (Durable Reduced Availability Storage) at a lower cost, for data accessed about once per year.

Google Cloud SQL

Google’s Cloud SQL is MySQL in the cloud, and scales up to 16 processor cores and 100 GB of RAM.

Google provides automatic replicas, backups, and patching.
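A sketch of creating an instance from the CLI (the instance name, tier, and region are assumptions; Second Generation tiers shown):

    gcloud sql instances create my-sql-instance \
      --tier db-n1-standard-1 \
      --region us-east1
    gcloud sql instances list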

App Engine accesses Cloud SQL databases using the Connector/J driver for Java and MySQLdb for Python.

  • git clone https://github.com/GoogleCloudPlatform/appengine-gcs-client.git

  • https://cloud.google.com/appengine/docs/python/googlecloudstorageclient/using-cloud-storage
  • https://cloud.google.com/sdk/cloud-client-libraries for Python

Tools like Toad can be used to administer Cloud SQL databases.

https://stackoverflow.com/questions/tagged/google-cloud-sql

Stackdriver for Logging

“Stackdriver” is GCP’s tool for logging, monitoring, error reporting, trace, and diagnostics, integrated across GCP and AWS.

Trace provides per-URL latency metrics.

Open source agents

Collaborations with PagerDuty, BMC, Splunk, etc.

Integrates with auto-scaling.

Integrates with Source Repository for debugging.
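A sketch of pulling recent log entries from the CLI (the filter is an assumption; older SDKs put this under “gcloud beta logging”):

    gcloud logging read "resource.type=gce_instance" --limit 5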

Big Data Services

gcp-decision-tree.png (decision tree for choosing among GCP services)

BigQuery data warehouse analytics database streams data at 100,000 rows per second. Automatic discounts apply to long-term data storage. See Shine Technologies.

(HBase - columnar data store Pig RDBMS indexing hashing)

Storage costs 2 cents per GB per month. No charge for queries answered from cache!

Competes against Amazon Redshift.

https://stackoverflow.com/questions/tagged/google-bigquery
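A sketch of a query against one of BigQuery’s public datasets using the bq tool described earlier (the dataset and columns come from the public usa_names sample; verify them before relying on this):

    bq query --use_legacy_sql=false \
      'SELECT name, SUM(number) AS total
       FROM `bigquery-public-data.usa_names.usa_1910_2013`
       GROUP BY name
       ORDER BY total DESC
       LIMIT 5'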

Pub/Sub provides enterprise messaging for IoT. It is scalable and flexible, and integrates with Dataflow.
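A sketch of a topic/subscription round trip (the names are assumptions; 2017-era SDKs put these commands under “gcloud beta pubsub” with a positional message instead of --message):

    gcloud pubsub topics create my-topic
    gcloud pubsub subscriptions create my-sub --topic my-topic
    gcloud pubsub topics publish my-topic --message "hello"
    gcloud pubsub subscriptions pull my-sub --auto-ack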

Dataflow - stream analytics & ETL batch processing - unified and simplified pipelines in Java and Python. Uses reserved compute instances. Competitor to AWS Kinesis.

Dataproc - managed Hadoop, Spark, MapReduce, Hive service

ML Engine, Genomics, IoT Core, Dataprep

Datalab is a Jupyter notebook server using matplotlib or Google Charts for visualization. It provides an interactive tool for large-scale data exploration, transformation, and analysis.

.NET Dev Support

https://www.coursera.org/learn/develop-windows-apps-gcp Develop and Deploy Windows Applications on Google Cloud Platform class on Coursera

https://cloud.google.com/dotnet/ Windows and .NET support on Google Cloud Platform.

We will build a simple ASP.NET app, deploy to Google Compute Engine and take a look at some of the tools and APIs available to .NET developers on Google Cloud Platform.

https://cloud.google.com/sdk/docs/quickstart-windows Google Cloud SDK for Windows (gcloud)

Installed with Cloud SDK for Windows is https://googlecloudplatform.github.io/google-cloud-powershell cmdlets for accessing and manipulating GCP resources

https://googlecloudplatform.github.io/google-cloud-dotnet/ Google Cloud Client Libraries for .NET (new) on NuGet for BigQuery, Datastore, Pub/Sub, Storage, Logging.

https://developers.google.com/api-client/dotnet/ Google API Client Libraries for .NET https://github.com/GoogleCloudPlatform/dotnet-docs-samples

https://cloud.google.com/tools/visual-studio/docs/ available on Visual Studio Gallery. Google Cloud Explorer accesses Compute Engine, Cloud Storage, Cloud SQL

Learning resources

https://codelabs.developers.google.com/

Running Node.js on a Virtual Machine

Scale and Load Balance Instances and Apps

  1. Get a GCP account
  2. A project with billing enabled and the default network configured
  3. An admin account with at least project owner role.

Create an instance template with a web app on it

Create a managed instance group that uses the template to scale

Create an HTTP load balancer that scales instances based on traffic and distributes load across availability zones

Define a firewall rule for HTTP traffic.

Test scaling and balancing under load.
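A sketch of the first two objectives above as gcloud commands (the names, zone, and machine type are assumptions):

    gcloud compute instance-templates create web-template \
      --machine-type f1-micro \
      --image-family debian-8 --image-project debian-cloud \
      --metadata startup-script='#! /bin/bash
    apt-get update; apt-get install -y apache2'

    gcloud compute instance-groups managed create web-group \
      --base-instance-name web \
      --template web-template \
      --size 2 \
      --zone us-east1-b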

Classes

http://www.roitraining.com/google-cloud-platform-public-schedule/ in the US and UK $599 per day

More on cloud

This is one of a series on cloud computing: