Friday, May 8, 2020

GCP Study Notes 9: Architecting with Google Kubernetes Engine Specialization (coursera notes)

The Architecting with Google Kubernetes Engine Specialization consists of 4 courses; these notes cover the first:
1. Google Cloud Platform Fundamentals: Core Infrastructure

Infrastructure as a Service, IaaS, and Platform as a Service, PaaS offerings. IaaS offerings provide raw compute, storage, and network organized in ways that are familiar from data centers. PaaS offerings, on the other hand, bind application code you write to libraries that give access to the infrastructure your application needs. That way, you can just focus on your application logic.

In the IaaS model, you pay for what you allocate. In the PaaS model, you pay for what you use. Both beat the old way, where you bought everything in advance based on lots of risky forecasting. There is also SaaS, Software as a Service, where you pay to use a finished application.

Example: use Cloud Launcher to deploy a solution on Google Cloud Platform. The solution chosen here is a LAMP stack: Linux (operating system), Apache HTTP Server (web server), MySQL (relational database), and PHP (web application framework). It's an easy environment for developing web applications. Cloud Launcher deploys that stack onto a Compute Engine instance.

#================================================
use gcloud shell to create a VM:
# list the zones in us-central1:
gcloud compute zones list | grep us-central1
# set the default zone for the VM:
gcloud config set compute/zone us-central1-b
# create the VM:
gcloud compute instances create "my-vm-2" \
--machine-type "n1-standard-1" \
--image-project "debian-cloud" \
--image "debian-9-stretch-v20190213" \
--subnet "default"

# Connect between VM instances: visit my-vm-1 from my-vm-2.
Click SSH on my-vm-2, then:
ping my-vm-1            # reachable by name via internal DNS
ssh my-vm-1             # hop onto my-vm-1
sudo apt-get install nginx-light -y
sudo nano /var/www/html/index.nginx-debian.html

curl http://localhost/  # check the page from my-vm-1 itself
exit                    # back to my-vm-2

curl http://my-vm-1/    # fetch the page over the network
#===========================================================

Bigtable is actually the same database that powers many of Google's core services including search, analytics, maps and Gmail.

Cloud SQL provides several replica services like read, failover, and external replicas. This means that if an outage occurs, Cloud SQL can replicate data between multiple zones with automatic failover. Cloud SQL also helps you back up your data with either on-demand or scheduled backups. It can also scale both vertically by changing the machine type, and horizontally via read replicas. From a security perspective, Cloud SQL instances include network firewalls, and customer data is encrypted when on Google's internal networks and when stored in database tables, temporary files, and backups. If Cloud SQL does not fit your requirements because you need horizontal scalability, consider using Cloud Spanner.
Here are more specific differences in terms of capacity and use case type:

It offers transactional consistency at a global scale, schemas, SQL, and automatic synchronous replication for high availability. And it can provide petabytes of capacity. Consider using Cloud Spanner if you have outgrown any relational database, are sharding your databases for throughput and high performance, need transactional and strong global consistency, or just want to consolidate your databases. Natural use cases include financial applications and inventory applications.
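As a sketch only (the instance name, config, and node count below are hypothetical, not from the course), a Spanner instance and database might be created with gcloud; this requires an active GCP project and billing, so it will not run as-is:

```shell
# Create a regional Spanner instance (names and values are examples):
gcloud spanner instances create test-instance \
  --config=regional-us-central1 \
  --description="Study-notes test instance" \
  --nodes=1

# Create a database on that instance:
gcloud spanner databases create test-db --instance=test-instance
```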

We already discussed one GCP NoSQL database service: Cloud Bigtable. Another highly scalable NoSQL database choice for your applications is Cloud Datastore. One of its main use cases is to store structured data from App Engine apps. You can also build solutions that span App Engine and Compute Engine with Cloud Datastore as the integration point.

Cloud Datastore: structured objects, with transactions and SQL-like queries.
Cloud Spanner: a relational database with SQL queries and horizontal scalability.
Cloud Bigtable: structured objects, with lookups based on a single key.
Cloud Storage: immutable binary objects.

Example: create a web host using Cloud Storage, Cloud SQL, and a VM:

Task 2: Deploy a web server VM instance

  1. In the GCP Console, on the Navigation menu, click Compute Engine > VM instances.
  2. Click Create.
  3. On the Create an Instance page, for Name, type bloghost
  4. For Region and Zone, select the region and zone assigned by Qwiklabs.
  5. For Machine type, accept the default.
  6. For Boot disk, if the Image shown is not Debian GNU/Linux 9 (stretch), click Change and select Debian GNU/Linux 9 (stretch).
  7. Leave the defaults for Identity and API access unmodified.
  8. For Firewall, click Allow HTTP traffic.
  9. Click Management, security, disks, networking, sole tenancy to open that section of the dialog.
  10. Enter the following script as the value for Startup script:
apt-get update
apt-get install apache2 php php-mysql -y
service apache2 restart
  11. Leave the remaining settings as their defaults, and click Create.
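The console steps above can be sketched as a single gcloud command. The zone, machine type, and image family here are assumptions (Qwiklabs assigns yours), and the http-server tag only opens traffic if a firewall rule targeting that tag exists in the network:

```shell
gcloud compute instances create bloghost \
  --zone=us-central1-b \
  --machine-type=n1-standard-1 \
  --image-project=debian-cloud \
  --image-family=debian-9 \
  --tags=http-server \
  --metadata=startup-script='apt-get update
apt-get install apache2 php php-mysql -y
service apache2 restart'
```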

Task 3: Create a Cloud Storage bucket using gsutil command

All Cloud Storage bucket names must be globally unique. To ensure that your bucket name is unique, these instructions will guide you to give your bucket the same name as your Cloud Platform project ID, which is also globally unique.
Cloud Storage buckets can be associated with either a region or a multi-region location: US, EU, or ASIA. In this activity, you associate your bucket with the multi-region closest to the region and zone that Qwiklabs or your instructor assigned you to.
  1. On the Google Cloud Platform menu, click Activate Cloud Shell. If a dialog box appears, click Start Cloud Shell.
  2. For convenience, enter your chosen location into an environment variable called LOCATION. Enter one of these commands:
export LOCATION=US
Or
export LOCATION=EU
Or
export LOCATION=ASIA
  3. In Cloud Shell, the DEVSHELL_PROJECT_ID environment variable contains your project ID. Enter this command to make a bucket named after your project ID:
gsutil mb -l $LOCATION gs://$DEVSHELL_PROJECT_ID
  4. Retrieve a banner image from a publicly accessible Cloud Storage location:
gsutil cp gs://cloud-training/gcpfci/my-excellent-blog.png my-excellent-blog.png
  5. Copy the banner image to your newly created Cloud Storage bucket:
gsutil cp my-excellent-blog.png gs://$DEVSHELL_PROJECT_ID/my-excellent-blog.png
  6. Modify the Access Control List of the object you just created so that it is readable by everyone:
gsutil acl ch -u allUsers:R gs://$DEVSHELL_PROJECT_ID/my-excellent-blog.png
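As a quick sanity check (a sketch; it needs the Cloud Shell session and bucket from the steps above), you can list the object's metadata and fetch it anonymously over HTTPS:

```shell
# Confirm the object exists and inspect its ACL:
gsutil ls -L gs://$DEVSHELL_PROJECT_ID/my-excellent-blog.png

# The object should now be readable anonymously:
curl -I https://storage.googleapis.com/$DEVSHELL_PROJECT_ID/my-excellent-blog.png
```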

Task 4: Create the Cloud SQL instance

  1. In the GCP Console, on the Navigation menu, click Storage > SQL.
  2. Click Create instance.
  3. For Choose a database engine, select MySQL.
  4. For Instance ID, type blog-db, and for Root password type a password of your choice.


  5. Set the region and zone assigned by Qwiklabs.


  6. Click Create.


  7. Click on the name of the instance, blog-db, to open its details page.
  8. From the SQL instances details page, copy the Public IP address for your SQL instance to a text editor for use later in this lab.
  9. Click the Users tab, and then click Create user account.
  10. For User name, type blogdbuser
  11. For Password, type a password of your choice. Make a note of it.
  12. Click Create to create the user account in the database.


  13. Click the Connections tab, and then click Add network.


  14. For Name, type web front end
  15. For Network, type the external IP address of your bloghost VM instance, followed by /32
The result will look like this:
35.192.208.2/32


  16. Click Done to finish defining the authorized network.
  17. Click Save to save the configuration change.
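The same setup can be sketched with gcloud (tier, region, and passwords below are placeholders, and the IP must be replaced with your bloghost VM's external address); it needs an active project, so it is illustrative rather than copy-paste ready:

```shell
# Create the Cloud SQL instance (tier and region are examples):
gcloud sql instances create blog-db \
  --database-version=MYSQL_5_7 \
  --tier=db-n1-standard-1 \
  --region=us-central1

# Set the root password:
gcloud sql users set-password root --host=% --instance=blog-db --password=ROOTPW

# Create the application user:
gcloud sql users create blogdbuser --host=% --instance=blog-db --password=DBPASSWORD

# Authorize the web server VM's external IP (replace with yours):
gcloud sql instances patch blog-db --authorized-networks=35.192.208.2/32
```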

Task 5: Configure an application in a Compute Engine instance to use Cloud SQL

  1. On the Navigation menu, click Compute Engine > VM instances.
  2. In the VM instances list, click SSH in the row for your VM instance bloghost.
  3. In your ssh session on bloghost, change your working directory to the document root of the web server:
cd /var/www/html
  4. Use the nano text editor to edit a file called index.php:
sudo nano index.php
  5. Paste the content below into the file:

<html>
<head><title>Welcome to my excellent blog</title></head>
<body>
<h1>Welcome to my excellent blog</h1>
<?php
$dbserver = "CLOUDSQLIP";
$dbuser = "blogdbuser";
$dbpassword = "DBPASSWORD";
// In a production blog, we would not store the MySQL
// password in the document root. Instead, we would store it in a
// configuration file elsewhere on the web server VM instance.

$conn = new mysqli($dbserver, $dbuser, $dbpassword);

if (mysqli_connect_error()) {
        echo ("Database connection failed: " . mysqli_connect_error());
} else {
        echo ("Database connection succeeded.");
}
?>
</body></html>



  6. Press Ctrl+O, and then press Enter to save your edited file.
  7. Press Ctrl+X to exit the nano text editor.
  8. Restart the web server:
sudo service apache2 restart
  9. Open a new web browser tab and paste into the address bar your bloghost VM instance's external IP address followed by /index.php. The URL will look like this:
35.192.208.2/index.php


When you load the page, you will see that its content includes an error message beginning with the words:
Database connection failed: ...


  10. Return to your ssh session on bloghost. Use the nano text editor to edit index.php again.
sudo nano index.php
  11. In the nano text editor, replace CLOUDSQLIP with the Cloud SQL instance Public IP address that you noted above. Leave the quotation marks around the value in place.
  12. In the nano text editor, replace DBPASSWORD with the Cloud SQL database password that you defined above. Leave the quotation marks around the value in place.
  13. Press Ctrl+O, and then press Enter to save your edited file.
  14. Press Ctrl+X to exit the nano text editor.
  15. Restart the web server:
sudo service apache2 restart
  16. Return to the web browser tab in which you opened your bloghost VM instance's external IP address. When you load the page, the following message appears:
Database connection succeeded.
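Independent of PHP, the same connection can be checked from the bloghost SSH session with the MySQL command-line client (a sketch: CLOUDSQLIP is the address you noted, and the package name may vary by Debian release):

```shell
# Install the client (Debian 9 maps default-mysql-client to MariaDB's client):
sudo apt-get install -y default-mysql-client

# Connect with the user created in Task 4; enter DBPASSWORD when prompted.
mysql --host=CLOUDSQLIP --user=blogdbuser --password
# At the mysql> prompt, "SHOW DATABASES;" confirms the connection works.
```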


Task 6: Configure an application in a Compute Engine instance to use a Cloud Storage object

  1. In the GCP Console, click Storage > Browser.
  2. Click on the bucket that is named after your GCP project.
  3. In this bucket, there is an object called my-excellent-blog.png. Copy the URL behind the link icon that appears in that object's Public access column, or behind the words "Public link" if shown.


  4. Return to your ssh session on your bloghost VM instance.
  5. Enter this command to set your working directory to the document root of the web server:
cd /var/www/html
  6. Use the nano text editor to edit index.php:
sudo nano index.php
  7. Use the arrow keys to move the cursor to the line that contains the h1 element. Press Enter to open up a new, blank screen line, and then paste the URL you copied earlier into the line.
  8. Paste this HTML markup immediately before the URL:
<img src='
  9. Place a closing single quotation mark and a closing angle bracket at the end of the URL:
'>
The resulting line will look like this:
<img src='https://storage.googleapis.com/qwiklabs-gcp-0005e186fa559a09/my-excellent-blog.png'>
The effect of these steps is to place the line containing <img src='...'> immediately before the line containing <h1>...</h1>


  10. Press Ctrl+O, and then press Enter to save your edited file.
  11. Press Ctrl+X to exit the nano text editor.
  12. Restart the web server:
sudo service apache2 restart

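The img line spliced together in steps 7-9 can be reproduced in the shell (the bucket name is the example from the lab; yours will differ):

```shell
# Example public URL of the banner object (bucket name is illustrative):
URL='https://storage.googleapis.com/qwiklabs-gcp-0005e186fa559a09/my-excellent-blog.png'
# Build the img element exactly as it should appear in index.php:
echo "<img src='${URL}'>"
```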