Using Chrome Dev Tools with Google Cloud
Some Google Cloud customers have a very large amount of static content that they wish to serve to users: typically lots of video and image media with some text. They want a solution that is:
- Easy to update
- Reliable
- Fast
- Mobile friendly
Options for serving web content on Google Cloud Platform (GCP) are summarized in the GCP documentation.
A full e-commerce site or social media app needs many other elements to support payments, order fulfillment, account management, and user-engagement analytics. The additional cloud services needed for those are not covered in this post. Many of the same principles apply to hosting media in support of a mobile app.
Setting up a Static Website on Google Cloud Storage
To get started, let’s see how static content can be hosted on GCP using the GCS features for Hosting a Static Website. You will need a domain, which you point to Cloud Storage by using a CNAME record. Execute the commands below in a Linux shell with gcloud installed.
Check that the right project is set as your default
PROJECT=[Your project]
gcloud config set project $PROJECT
To create a bucket matching the domain you need to perform Domain-Named Bucket Verification. After doing that, create the bucket with the command
BUCKET=[Your domain]
gsutil mb gs://${BUCKET}
Copy the web site files
gsutil cp *.html gs://${BUCKET}
gsutil cp *.js gs://${BUCKET}
Enable the bucket as a web site
gsutil web set -m index.html -e notfound.html gs://${BUCKET}
gsutil iam ch allUsers:objectViewer gs://${BUCKET}
gsutil acl ch -u AllUsers:R gs://${BUCKET}
Test the site
Analyze the Website with Lighthouse
Open Lighthouse in Chrome Developer Tools by clicking on the Audits tab, then run the analysis by clicking the ‘Run Audits’ button. A screenshot is shown below.
In addition to the performance measurement there are a number of other recommendations. The first several recommendations for a Progressive Web App are shown below.
The top item on the list is ‘Does not respond with a 200 when offline.’ You can enable a static website to be accessed offline by using a Service Worker to cache and retrieve the content through the browser’s Cache Storage. The index2.html and main2.js files in the gist implement this. However, service workers require HTTPS, which is also the third item on the Lighthouse audit list: ‘Does not use HTTPS.’ So our first priority will be to enable HTTPS.
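To illustrate the technique (the gist’s index2.html and main2.js are the actual implementation), here is a minimal service worker sketch; the cache name and file list are assumptions for this sketch, not names from the gist:

```javascript
// sw.js -- minimal offline support sketch. The page would register it with:
//   navigator.serviceWorker.register('/sw.js');
// CACHE_NAME and PRECACHE are illustrative placeholders.
const CACHE_NAME = 'static-site-v1';
const PRECACHE = ['/', '/index.html', '/main.js'];

// Service worker globals only exist in the browser.
if (typeof self !== 'undefined' && 'caches' in self) {
  self.addEventListener('install', (event) => {
    // Pre-cache the app shell so the site can answer with a 200 offline.
    event.waitUntil(caches.open(CACHE_NAME).then((c) => c.addAll(PRECACHE)));
  });
  self.addEventListener('fetch', (event) => {
    // Serve from the cache first; fall back to the network on a miss.
    event.respondWith(
      caches.match(event.request).then((hit) => hit || fetch(event.request))
    );
  });
}
```

The cache-first strategy suits static media; for content that changes often, a network-first or stale-while-revalidate strategy may fit better.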
To support a mobile app the same principles apply but instead of using JavaScript for a service worker you would cache commonly accessed media on the device using application code.
Enabling HTTPS for a Static Website
You can add an L7 load balancer in front of a GCS bucket to provide HTTPS. This is described in the page Creating Content-Based Load Balancing. When I say HTTPS, I mean QUIC as well, which is faster. If setting up a load balancer seems like overkill for your use case then you should explore App Engine or Firebase hosting as alternative ways of hosting an app with HTTPS.
Adding an L7 Load Balancer
Create a global static IP address:
IP_NAME=lb-ip
gcloud compute addresses create $IP_NAME \
    --ip-version=IPV4 \
    --global
Create a backend bucket that points to the GCS bucket:
BACKEND_BUCKET_NAME=static-bucket
gcloud compute backend-buckets create $BACKEND_BUCKET_NAME \
    --gcs-bucket-name $BUCKET
Create a URL Map for content based load balancing using the gcloud compute url-maps create command:
URL_MAP_NAME=web-map
gcloud compute url-maps create $URL_MAP_NAME \
    --default-backend-bucket=$BACKEND_BUCKET_NAME
Add a path matcher to the URL map:
PATH_MATCHER_NAME=bucket-matcher
gcloud compute url-maps add-path-matcher $URL_MAP_NAME \
    --default-backend-bucket=$BACKEND_BUCKET_NAME \
    --path-matcher-name $PATH_MATCHER_NAME
Create an SSL certificate for the domain:
CERT_NAME=${PROJECT}-cert
DOMAIN=[Your domain]
gcloud beta compute ssl-certificates create $CERT_NAME \
    --domains $DOMAIN
Create a target HTTPS proxy that uses the certificate:
PROXY_NAME=https-lb-proxy
gcloud compute target-https-proxies create $PROXY_NAME \
    --url-map $URL_MAP_NAME \
    --ssl-certificates $CERT_NAME
Enable QUIC for the proxy
gcloud compute target-https-proxies update $PROXY_NAME \ --quic-override=ENABLE
Create a global forwarding rule to direct HTTPS traffic to the proxy:
FWD_NAME=https-content-rule
gcloud compute forwarding-rules create $FWD_NAME \
    --address $IP_NAME \
    --global \
    --target-https-proxy $PROXY_NAME \
    --ports 443
Now that you are using a load balancer for the site, change the DNS record to an A record that refers to the static IP address used by the load balancer. You might need to wait for a while for this to take effect, depending on your DNS TTL.
Send traffic to the load balancer
Run the analysis again in Chrome Dev Tools. You should find that the No. 1 (no 200 offline) and No. 3 (no HTTPS) problems are now absent from the list of Progressive Web App problems. The Progressive Web App audit is shown in the screenshot below.
The top problem is now ‘User will not be prompted to Install the Web App.’ That can be addressed by providing a Web App Manifest and some other web resources as described in User Can Be Prompted To Install The Web App. Other possible problems are described in the Lighthouse reference guide.
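A Web App Manifest is a small JSON file that the page links with `<link rel="manifest" href="/manifest.json">`. A minimal sketch, with placeholder names and icon paths, looks like:

```json
{
  "name": "Static Site Demo",
  "short_name": "StaticDemo",
  "start_url": "/index.html",
  "display": "standalone",
  "background_color": "#ffffff",
  "theme_color": "#4285f4",
  "icons": [
    { "src": "/icon-192.png", "sizes": "192x192", "type": "image/png" },
    { "src": "/icon-512.png", "sizes": "512x512", "type": "image/png" }
  ]
}
```

The manifest file is just another static object, so it can be uploaded to the bucket with gsutil like the other site files.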
Network Performance
So far our network performance looks pretty good. That is because there are no big files in our app: the HTML and JavaScript files are few and small. However, performance problems typically appear as the code base grows and more assets accumulate with the business. The Lighthouse reference has a list of suggestions for improving performance, which relate to measurements reported in Chrome Dev Tools.
One common example of a suggested improvement is to Enable Text Compression. Files can be compressed and then saved to GCS. You will need to set the GCS metadata appropriately to let the browser know to decompress them. See Transcoding of gzip-compressed files for details. See Compression of data over HTTP on Google Cloud for details on serving compressed dynamic pages and on using Brotli for even higher compression than gzip.
The Network tab in Chrome Dev Tools contains metrics for retrieving the different resources. An example is shown in the screenshot below for a version of the page without the service worker to avoid caching locally.
As described in Assessing Loading Performance in Real Life with Navigation and Resource Timing, these values, loaded from your own browser, should be considered lab data. The data is helpful during development but is not representative of real-world users, who may be further from data centers and using less powerful devices on slow cellular networks. To collect timing data as experienced by real users you need to instrument your web pages (or mobile app).
You can collect timing data in browsers with JavaScript using the Navigation Timing API. For example,
let pageNav = performance.getEntriesByType("navigation")[0];
let dnsTime = pageNav.domainLookupEnd - pageNav.domainLookupStart;
let connectTime = pageNav.connectEnd - pageNav.connectStart;
let ttfb = pageNav.responseStart - pageNav.requestStart;
let dt = pageNav.responseEnd - pageNav.responseStart;
This is demonstrated in the JavaScript file main.js in the gist, which is loaded from index.html. From the browser you may post the data back to your server to store it. The example so far is a static site hosted on GCS, but you will need a dynamic handler to receive and save the data. You can do this by adding a GCE instance, Cloud Endpoints service, or Google Kubernetes Engine cluster behind the load balancer. A rough diagram of this is shown below.
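On the client side, posting the measurements back might look like the following sketch; the `/analytics` path matches the flow described in this post, while the field names and `collectTimings` helper are my own illustrations:

```javascript
// Gather Navigation Timing metrics for the current page load.
// Returns null when no navigation entry is available (e.g. outside a browser).
function collectTimings() {
  const nav = performance.getEntriesByType('navigation')[0];
  if (!nav) return null;
  return {
    dns: nav.domainLookupEnd - nav.domainLookupStart,
    connect: nav.connectEnd - nav.connectStart,
    ttfb: nav.responseStart - nav.requestStart,
    download: nav.responseEnd - nav.responseStart,
  };
}

// Browser-only: sendBeacon queues the POST so it survives page unload.
if (typeof window !== 'undefined') {
  window.addEventListener('load', () => {
    const data = collectTimings();
    if (data) navigator.sendBeacon('/analytics', JSON.stringify(data));
  });
}
```

sendBeacon is preferred over a plain fetch here because the browser delivers the data even if the user navigates away immediately after load.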
The conceptual flow is
- The browser posts the timing measurements back to the server, say in JSON at /analytics
- Using content-based load balancing, the load balancer directs the request to a backend running on GKE
- The analytics data is dumped out to log
- A Cloud Logging log export is used to store the data to BigQuery.
There are other variants of this flow that may work equally well for you. For a lot more information on resource timings see the page Resource Timing in Practice. If saving client performance analytics seems like too much work for you then you might consider a managed service like New Relic.