Deploy ASP.NET Core Web App to Ubuntu Server Part 3 — CICD

This is the third and final part of a three-part series on deploying an ASP.NET Core Razor Pages web application to Ubuntu Server. In this series I cover the various methods of deployment, finally advancing to continuous integration and continuous deployment (CICD) practices.
[Figure: GitHub Actions workflow in action]

Continuous integration and continuous deployment (CICD) has been the ultimate goal of this whole effort from the start. By adopting a CICD workflow, I wanted a process where pushing commits to the git repository immediately triggers the build, test, and deployment, so that moments after the code is updated the results can be seen, i.e. the updated application is running on the remote server automatically.

The project is hosted on GitHub, so it makes perfect sense to use GitHub Actions for the CICD workflow. I wrote a GitHub Actions workflow containing a set of instructions for building the web application, packaging it into a new Docker image, pushing the image to Docker Hub, accessing the remote server, and finally running the newly created image as a container.

GitHub Actions Workflow

A workflow is, in essence, a .yml file that contains a series of jobs, each of which holds a set of steps. The simplified workflow for the CICD that my development branch uses is as follows:

name: Active development release

on:
  push:
    branches:
      - develop
jobs:
  build:
    name: Build application
    runs-on: ubuntu-latest
    steps:
      - name: Checkout develop branch
        uses: actions/checkout@v2
      - name: Setup .NET Core 3.1
        uses: actions/setup-dotnet@v1
        with:
          dotnet-version: 3.1.101
      ...
      ...
  deploy:
    name: Deploy to server
    needs: build
    runs-on: ubuntu-latest
    steps:
      - name: Executes remote ssh commands using ssh key
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.DIGITALOCEAN_SERVERADDRESS }}
      ...
      ...

The workflow consists of two jobs: build and deploy. The "build" job has steps for checking out the branch, preparing the .NET Core 3.1 SDK, building and publishing the project, packaging the Docker image, and pushing it to Docker Hub. The second job, "deploy", has a parameter called needs that tells GitHub to hold it until the "build" job has finished, because by default all jobs run concurrently. This job focuses on accessing the remote server over SSH, pulling the newly updated Docker image, and running it.
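To make the picture more concrete, the elided steps in both jobs could look roughly like the sketch below. The image name, the extra secret names, and the exact commands are placeholders for illustration, not the exact contents of my workflow file:

  build:
    name: Build application
    runs-on: ubuntu-latest
    steps:
      # (checkout and setup-dotnet steps as shown above)
      - name: Publish project
        run: dotnet publish -c Release -o ./publish
      - name: Build and push docker image
        run: |
          # log in to Docker Hub with credentials kept in GitHub Secrets
          echo "${{ secrets.DOCKERHUB_TOKEN }}" | docker login -u "${{ secrets.DOCKERHUB_USERNAME }}" --password-stdin
          docker build -t myuser/mywebapp:develop .
          docker push myuser/mywebapp:develop
  deploy:
    name: Deploy to server
    needs: build
    runs-on: ubuntu-latest
    steps:
      - name: Executes remote ssh commands using ssh key
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.DIGITALOCEAN_SERVERADDRESS }}
          username: ${{ secrets.DIGITALOCEAN_USERNAME }}
          key: ${{ secrets.DIGITALOCEAN_SSHKEY }}
          script: |
            # pull the image that the build job just pushed, then restart the container
            docker pull myuser/mywebapp:develop
            docker stop mywebapp || true
            docker rm mywebapp || true
            docker run -d --name mywebapp -p 80:80 myuser/mywebapp:develop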

As you may have noticed, in a public GitHub repository such a workflow file can be dangerous, as it might expose sensitive data such as the credentials needed to log in to Docker Hub, the remote server's IP address and SSH key, and authentication client secrets. Fortunately, GitHub provides a secrets management tool called "Secrets" that keeps sensitive data out of the public code repository. These secrets can then be injected through placeholders such as ${{ secrets.DIGITALOCEAN_SERVERADDRESS }} in the sample above.

Note that sensitive data should not live in checked-in files such as appsettings.json of the .NET Core project either. These values can also be stored in GitHub Secrets and supplied during the deployment process by passing them to the application as environment variables.
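For example, assuming a database connection string and an authentication client secret are stored in GitHub Secrets (the secret and variable names below are made up for illustration), the deploy script could forward them to the container as environment variables. ASP.NET Core maps such variables onto configuration keys, with a double underscore standing in for the : separator:

  script: |
    # forward GitHub Secrets into the container as ASP.NET Core configuration
    docker run -d --name mywebapp -p 80:80 \
      -e ConnectionStrings__DefaultConnection="${{ secrets.DB_CONNECTIONSTRING }}" \
      -e Authentication__Google__ClientSecret="${{ secrets.GOOGLE_CLIENTSECRET }}" \
      myuser/mywebapp:develop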

SFTP or Docker

At this point the CICD pipeline has been successfully set up. However, there is one thing that is perhaps worth taking a quick moment to address. The previous two posts discussed two ways of deploying a .NET Core project: transferring the files manually to the server, or deploying it as a Docker image. I opted for the latter after some consideration. What was the rationale?

With file transfer, the project is updated by overwriting files and removing obsolete ones. To ensure the files are transferred properly, this takes two steps: clearing the whole folder the project lives in, then transferring every file into it. If I had multiple server instances in the future, the process would repeat for each server and could become error-prone or inconsistent. Furthermore, I would need to install the runtime environment on an already resource-limited server.

With Docker, on the other hand, the necessary runtime is included in the image. I can also be sure that the application version stays consistent across a multi-server setup. The lightweight Docker engine and daemon keep the container running fast while remaining independent of the operating system the web application is hosted on.

These were just the opinions and reasons behind my decision when I was building the CICD workflow. There may be events or changes down the road that make me switch to file transfer over Docker.

What I Have Achieved so Far

The CICD workflow has been set up successfully with the help of GitHub Actions and Docker. From now on, pushing commits to the repository will trigger the deployment process, and the update can be seen on the remote server almost instantly. This has greatly reduced the time spent deploying the web application manually.
