
Deploying a Static Website to Google Cloud Storage With Github Actions

The goal of this tutorial is to demonstrate how to automate the build and the deployment to Google Cloud Storage of a static website with Github workflows.

The workflow will be made up of several steps to run a linter, execute a test suite, and build the application. The workflow will then connect to Google Cloud and synchronise the static website with a Cloud Storage bucket.

I’ll be using Hugo but you can achieve the same result with any static website generator.

The website

The directory structure of a Hugo website looks like this:

$ tree -L 1 --dirsfirst .
.
├── archetypes
├── assets
├── content
├── themes/my-theme
├── config.toml
└── Makefile

The theme directory themes/my-theme has the following structure:

$ tree -L 1 --dirsfirst themes/my-theme/
themes/my-theme/
├── archetypes
├── layouts
├── src
├── static
├── LICENSE
└── theme.toml

The src directory will contain the source files needed to build the theme’s assets, such as CSS and JavaScript.

$ tree -L 1 --dirsfirst themes/my-theme/src/
themes/my-theme/src/
├── js
├── css
├── node_modules
├── package.json
├── .eslintrc.js
├── jest.config.js
├── webpack.config.js
└── yarn.lock

As an example, we’ll define the following npm scripts in the package.json file.

// package.json
"scripts": {
  "test": "jest -c jest.config.js",
  "lint": "eslint js -c .eslintrc.js --ext js",
  "build": "webpack --config webpack.config.js"
}

This is completely up to you but I personally prefer using make to build the application and execute related tasks. This way, if I later need to change the steps, I can simply update the Makefile without changing the workflow.

# Makefile
all: deps lint test build

deps:
	cd ./themes/my-theme/src && yarn
lint:
	cd ./themes/my-theme/src && yarn lint
test:
	cd ./themes/my-theme/src && yarn test
build:
	cd ./themes/my-theme/src && yarn build
	hugo --minify

The services

We need a Google Cloud Platform (GCP) account as well as a Github account to make the magic happen, so head over to both websites and create an account if you don’t have one already.

Creating a bucket and configuring your domain

I won’t go into too much detail on how to host a static website on Google Cloud Storage (GCS); the official documentation already explains in detail how to create a bucket and configure the domain.

Make sure that the bucket you want to use is publicly accessible, otherwise nobody will be able to see your website.
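If you prefer the command line, read access for everyone can be granted with gsutil; the bucket name my-gcs-bucket below is a placeholder for your own:

```shell
# Allow anyone on the internet to read the objects in the bucket
gsutil iam ch allUsers:objectViewer gs://my-gcs-bucket
```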

Creating a Github repository

Same here, refer to the Github documentation to create a new repository.

Creating a service account

In order to communicate with GCP from the workflow environment, we need to create a service account. If you are not familiar with the concept, the documentation defines a service account as follows:

A service account is a special type of Google account intended to represent a non-human user that needs to authenticate and be authorized to access data in Google APIs.

In the GCP console, go to IAM & Admin - Service Accounts and create a new service account with a private key; choose the JSON format for the key. Download the JSON file: you’ll need it, as well as the service account’s email, in the next section.
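The same can be done with the gcloud CLI instead of the console; the account name website-deployer and the project my-project below are placeholders:

```shell
# Create the service account (the name is a placeholder)
gcloud iam service-accounts create website-deployer \
  --display-name "Website deployer"

# Generate and download a JSON private key for it
gcloud iam service-accounts keys create ./private-key.json \
  --iam-account website-deployer@my-project.iam.gserviceaccount.com
```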

We now need to grant the service account access to the GCS bucket. In the IAM section, find the service account you just created in the list, edit its permissions, and give it the role Storage Object Admin.
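With gsutil, an equivalent grant can be scoped directly to the bucket rather than to the whole project; the account and bucket names below are placeholders:

```shell
# Give the service account read/write access to the objects of this bucket only
gsutil iam ch \
  "serviceAccount:website-deployer@my-project.iam.gserviceaccount.com:roles/storage.objectAdmin" \
  gs://my-gcs-bucket
```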

You could restrict the role further by adding conditions to it, for instance limiting it to a single bucket; to do so, however, you’ll have to enable uniform bucket-level access on your bucket.
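Uniform bucket-level access can be enabled from the command line as well, for instance:

```shell
# Switch the bucket from fine-grained ACLs to uniform bucket-level access
gsutil ubla set on gs://my-gcs-bucket
```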

In any case, respect the principle of least privilege: do not grant the service account full access to GCP.

Creating environment variables

In the Settings tab of the Github repository, there is a section called Secrets. This is where you can define environment variables which will be available in the workflow context. Add the following variables:

  • GCP_SA_EMAIL: our service account’s email
  • GCP_SA_KEY: our base64-encoded private key

To generate the base64 string from the JSON file we previously downloaded, we can use the following shell command:

$ base64 ./path/to/private-key.json
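To check that the encoded value decodes back to the original key, you can do a quick round trip locally; the file below is a throwaway stand-in for the real private key:

```shell
# Throwaway file standing in for the real private key
printf '{"type":"service_account"}' > /tmp/private-key.json

# Encode it; -w 0 (GNU coreutils) keeps the output on a single line
ENCODED=$(base64 -w 0 /tmp/private-key.json)

# Decoding must give back the original content
echo "$ENCODED" | base64 -d
```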

With those credentials our workflow will be able to authenticate to GCP and push our website to Cloud Storage.

The workflow

The Github documentation defines a workflow as follows:

A configurable automated process that you can set up in your repository to build, test, package, release, or deploy any project on GitHub. Workflows are made up of one or more jobs and can be scheduled or activated by an event.

So essentially, a workflow is composed of one or more jobs; a job is a set of steps executed on a machine called a runner; and a step is either an action or a command.

To create a workflow, we need to add a YAML file in the .github/workflows directory of our repository.

# .github/workflows/my-workflow.yml
name: Website - Production # name of your workflow
on:
  push:
    branches:
      - master # Deployment will be triggered when we push to the master branch
jobs:
  build:
    name: Build & Deploy # name of the job
    runs-on: ubuntu-latest # the runner in which the job will be executed
    steps:
    # here we will declare the steps
    # ...

In the steps section of the workflow’s configuration file, we are going to declare either actions with uses or commands with run. There are already thousands of actions available in the Github marketplace that you can use to construct your workflow, but you can also create your own.

Fetching the source code

The first thing we need to do is tell Github to fetch the source code with the checkout action. You can specify a branch or a tag; in our example, we will use the git reference refs/heads/master to fetch the master branch.

- uses: actions/checkout@v2
  with:
    ref: refs/heads/master
    fetch-depth: 1

Installing NodeJS

We then need to install NodeJS and npm (yarn comes preinstalled on Github-hosted runners). You can pin a specific version or even use a matrix to test your app against several NodeJS versions.

- uses: actions/setup-node@v1
  with:
    node-version: '13.x'

Configuring the cache

This is not mandatory but highly recommended. Defining this action allows you to cache the dependencies to speed up future deployments.

With the function hashFiles, you can create a hash from the yarn.lock (or package-lock.json if you are using npm) and use it as the cache key.

The next time the workflow is executed, if the lock file is the same (i.e. the cache key matches a cache entry), the job will not download the dependencies and will use the cache instead.

- uses: actions/cache@v1
  with:
    path: themes/my-theme/src/node_modules
    key: ${{ runner.os }}-js-${{ hashFiles('themes/my-theme/src/yarn.lock') }}

Installing Hugo

We are using the latest version in this case, but we can also pin a specific version.

- uses: peaceiris/actions-hugo@v2
  with:
    hugo-version: 'latest'

Linting, testing, and building the application

We now execute the make commands we defined earlier.

- run: make deps
- run: make lint
- run: make test
  env:
    NODE_ENV: test
- run: make build
  env:
    NODE_ENV: production
    HUGO_ENVIRONMENT: production

Pushing the files to Google Cloud Storage

We set up the Google Cloud SDK using the environment variables GCP_SA_EMAIL and GCP_SA_KEY.

- uses: GoogleCloudPlatform/github-actions/setup-gcloud@master
  with:
    version: '276.0.0'
    service_account_email: ${{ secrets.GCP_SA_EMAIL }}
    service_account_key: ${{ secrets.GCP_SA_KEY }}

By default, Hugo generates the static website into the public directory, so we only need to synchronise this directory with our GCS bucket. To do so, we can use gsutil’s rsync command:

- run: gsutil -m rsync -d -c -r public gs://my-gcs-bucket
  • -m to perform the synchronisation in parallel;
  • -r to synchronise directories and subdirectories recursively;
  • -c to compare files’ checksums instead of their modification times, to avoid re-sending every file on each deployment;
  • -d to delete the remote files that are no longer present locally.
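Before wiring this into the workflow, you can preview what rsync would do with the -n (dry run) flag, which lists the operations without copying or deleting anything:

```shell
# Dry run: show what would be copied or deleted, without doing it
gsutil -m rsync -n -d -c -r public gs://my-gcs-bucket
```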

Setting the HTTP cache

By default, GCS serves public files with the header Cache-Control: public, max-age=3600, unless the bucket has uniform bucket-level access enabled, in which case the header has the value private. We can change this behavior by setting the objects’ metadata during the deployment. For example:

- run: gsutil -m setmeta -h "Cache-Control:public, max-age=3600" gs://my-gcs-bucket/**/*.html
- run: gsutil -m setmeta -h "Cache-Control:public, max-age=31536000" gs://my-gcs-bucket/js/*.js
- run: gsutil -m setmeta -h "Cache-Control:public, max-age=31536000" gs://my-gcs-bucket/css/*.css
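To verify that the metadata was applied, gsutil stat prints an object’s stored metadata, including Cache-Control; the object path below is just an example:

```shell
# Inspect the stored metadata of a single object
gsutil stat gs://my-gcs-bucket/index.html
```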

Triggering the workflow

Our final workflow file will look like this:

# .github/workflows/my-workflow.yml
name: Website - Production
on:
  push:
    branches:
      - master
jobs:
  build:
    name: Build & Deploy
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
        with:
          ref: refs/heads/master
          fetch-depth: 1
      - uses: actions/setup-node@v1
        with:
          node-version: '13.x'
      - uses: peaceiris/actions-hugo@v2
        with:
          hugo-version: 'latest'
      - uses: actions/cache@v1
        with:
          path: themes/my-theme/src/node_modules
          key: ${{ runner.os }}-js-${{ hashFiles('themes/my-theme/src/yarn.lock') }}
      - run: make deps
      - run: make lint
      - run: make test
        env:
          NODE_ENV: test
      - run: make build
        env:
          NODE_ENV: production
          HUGO_ENVIRONMENT: production
      - uses: GoogleCloudPlatform/github-actions/setup-gcloud@master
        with:
          version: '276.0.0'
          service_account_email: ${{ secrets.GCP_SA_EMAIL }}
          service_account_key: ${{ secrets.GCP_SA_KEY }}
      - run: gsutil -m rsync -d -c -r public gs://my-gcs-bucket
      - run: gsutil -m setmeta -h "Cache-Control:public, max-age=3600" gs://my-gcs-bucket/**/*.html
      - run: gsutil -m setmeta -h "Cache-Control:public, max-age=31536000" gs://my-gcs-bucket/js/*.js
      - run: gsutil -m setmeta -h "Cache-Control:public, max-age=31536000" gs://my-gcs-bucket/css/*.css

Once the files are pushed to the master branch of the remote repository, Github will detect our configuration file and automatically trigger the workflow.

$ git push origin master

You can follow the job’s progression in the Actions tab of the repository.