Install Yarn On Docker


These instructions assume you are using a Bash shell. You can easily get a Bash shell on Windows by installing the git client.

Prerequisites

You need to install the following in order to build MAGDA:

  • Node.js - To build and run the TypeScript / JavaScript components, as well as many of the build scripts. Version 9+ works fine as of March 2018.
  • Java 8 JDK - To run the JVM components, and to build the small amount of Java code.
  • sbt - To build the Scala components.
  • Yarn - An npm replacement that makes managing Node dependencies in a monorepo much easier.

To push the images and run them on kubernetes, you’ll need to install:

  • GNU tar - (Mac only) macOS ships with BSD tar, but you will need GNU tar for Docker image operations. On macOS, you can install it via Homebrew: brew install gnu-tar
  • gcloud - For the kubectl tool used to control your Kubernetes cluster. You will also need this to deploy to our test and production environments on Google Cloud.
  • Helm 3 - To manage Kubernetes deployments and config. Magda 0.0.57 and higher requires Helm 3 to deploy.
  • Docker - Magda uses the docker command-line tool to build Docker images.


You’ll also need a Kubernetes cluster - to develop locally this means installing either minikube or Docker Desktop's built-in Kubernetes (macOS only at this stage). We’ve also started trialing microk8s on Linux, but we’re not sure how well it will work long-term. Potentially you could also do this with native Kubernetes, or with a cloud cluster, but we haven’t tried it.

Trying it out locally without building source code


If you just want to try it out locally without actually changing anything, it’s much easier to install minikube or Docker Desktop, then follow the instructions at https://github.com/magda-io/magda-config/blob/master/existing-k8s.md. What follows are instructions on how to build everything - code, databases and all - from scratch into a working application.

Building and running (just) the frontend

If you just want to edit the UI, you don’t even need Helm - just clone the repo, run yarn install at the root, then cd magda-web-client and run yarn run dev. This will build and run a local version of the client, connecting to the API at https://dev.magda.io/api. If you want to connect to a Magda API hosted elsewhere, you can modify the config.ts file in the client.
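For example, from a fresh clone of the repo (a minimal sketch of those commands):

```bash
# from the repository root
yarn install

# build/run the local dev client (served against https://dev.magda.io/api)
cd magda-web-client
yarn run dev
```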

Building and running the backend

First clone the magda repository and cd into it.

Then install dependencies and set up the links between components by running:
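A sketch of those commands (the clone URL is assumed to be the main Magda repository referenced elsewhere in this doc):

```bash
git clone https://github.com/magda-io/magda.git
cd magda
yarn install
```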

Once the above prerequisites are in place, and the dependencies are installed, building MAGDA is easy. From the MAGDA root directory, simply run the appropriate build command:
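For example (a sketch; the exact script name may differ - check the scripts in the root package.json):

```bash
# from the MAGDA root directory
yarn build
```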

You can also run yarn build in an individual component’s directory (i.e. magda-whatever/) to build just that component.

Set up Helm

Helm is the package manager for Kubernetes - we use it so that you can install all the various services you need for MAGDA at once. To install it, follow the instructions at https://helm.sh/docs/intro/install/.

Once you have Helm 3 installed, add the Magda Helm chart repo and other relevant helm chart repos:
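A sketch (the chart repository URL is an assumption - verify it against the current Magda docs):

```bash
helm repo add magda-io https://charts.magda.io
helm repo update
```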

Install a local kube registry

This gives you a local docker registry that you’ll upload your built images to so you can use them locally, without having to go via DockerHub or some other external registry.
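One way to do this is with the community docker-registry chart (a sketch under that assumption; Magda's own docs may use a different chart or values):

```bash
helm repo add twuni https://helm.twun.io
helm install docker-registry twuni/docker-registry
```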

Install kubernetes-replicator

A complete Magda installation includes more than one namespace, and kubernetes-replicator automates copying the required secrets from the main deployment namespace to the workload (OpenFaaS function) namespace.

To install kubernetes-replicator:

1> Add kubernetes-replicator helm chart repo
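For example, assuming the chart is published in the maintainer's (mittwald's) chart repo:

```bash
helm repo add mittwald https://helm.mittwald.de
```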

2> Update Helm chart repo
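For example:

```bash
helm repo update
```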

3> Create a namespace for kubernetes-replicator

As you only need one kubernetes-replicator instance per cluster, it’s a good idea to install kubernetes-replicator in a separate namespace.
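For example:

```bash
kubectl create namespace kubernetes-replicator
```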

4> Install kubernetes-replicator
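A sketch, assuming the repo name used in step 1:

```bash
helm install kubernetes-replicator mittwald/kubernetes-replicator \
  --namespace kubernetes-replicator
```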

Please note: you only need to install kubernetes-replicator once per k8s cluster

Build local docker images

Now you can build the docker containers locally - this might take quite a while so get a cup of tea.
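A hypothetical example of building one component's image and pushing it to the local registry installed earlier (the image name, tag and registry address are all assumptions; Magda's own build scripts may differ):

```bash
docker build -t localhost:5000/magda-web-server:latest magda-web-server/
docker push localhost:5000/magda-web-server:latest
```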

Build Connector and Minion local docker images

As of v0.0.57, Magda's official connectors & minions live outside the core repository. You can find the connector & minion repositories here.

You don’t have to build connector & minion docker images, as the default config value file minikube-dev.yml is specifically set to use the official production-ready docker images from the Docker Hub repository.

If you do want to use locally built connector & minion docker images for testing & development purposes, you need to:

  1. Clone the relevant connector or minion repository
  2. Build & Push docker image to a local docker registry.

Run the following commands from the cloned folder:
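A sketch (the script name is an assumption; check the connector/minion repo's package.json for the real build/push targets):

```bash
yarn install
yarn run docker-build-local
```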

  1. Modify minikube-dev.yml, remove the global.connectors.image & global.minions.image section.
  2. Deploy Magda with helm using the instructions provided by the Install Magda on your minikube/docker-desktop cluster section below.

Create the necessary secrets with the secret creation script
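The repo provides a script for this (the script name is assumed from the heading; run it from the repo root and follow the prompts):

```bash
yarn run create-secrets
```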

Windows only: Set up a volume for Postgres data

If you’re using Docker Desktop on Windows, you’ll need to set up a volume to store Postgres data, because the standard approach - a hostpath volume mapped to a Windows share - results in file/directory permissions that are not to Postgres’s liking. Instead, we’ll set up a volume manually, which is just a directory in the Docker Desktop VM’s virtual disk. We use the unusual path of /etc/kubernetes because it is one of the few mount points backed by an actual virtual disk.
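A sketch of applying the manifest referenced in the note below:

```bash
kubectl apply -f deploy/kubernetes/local-storage-volume.yaml
```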

Note: If using docker desktop for Windows older than version 19, change the value from “docker-desktop” to “docker-for-desktop” in nodeAffinity in file deploy/kubernetes/local-storage-volume.yaml

Install Magda on your minikube/docker-desktop cluster

If you need HTTPS access to your local dev cluster, please check this doc for extra setup steps.
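To install, run something like the following from the repo root (the chart path and release name are assumptions; adjust for your checkout):

```bash
helm upgrade --install magda deploy/helm/magda \
  -f deploy/helm/minikube-dev.yml \
  --timeout 9999s
```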

This can take a while as it does a lot - downloading all the docker images, starting them up and running database migration jobs. You can see what’s happening by opening another tab and running kubectl get pods -w.

Also note that by default there won’t be any minions running, as some of them can be very CPU intensive. You can toggle them on by specifying --set tags.minion-<minionname>=true when you run helm upgrade.

If you’re using Docker Desktop on Windows, add -f deploy/helm/docker-desktop-windows.yml too, i.e. do this instead of the above:
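For example, continuing the sketch above:

```bash
helm upgrade --install magda deploy/helm/magda \
  -f deploy/helm/minikube-dev.yml \
  -f deploy/helm/docker-desktop-windows.yml \
  --timeout 9999s
```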

If you want to deploy the packaged, production-ready helm chart from our helm repo, please check out this sample config repo.

Crawl Data

By default, helm will create a one-time crawl job for data.gov.au to get you started. If you want to crawl other datasets, look at the config under connectors: in deploy/helm/minikube-dev.yml. For sources of data, check out deploy/helm/magda-dev.yml. Once you’ve changed your config, just run the helm upgrade command above again to make it happen.


Kubernetes tricks

Running individual services

If you want to just start up individual pods (e.g. just the combined database) you can do so by setting the all tag to false and the tag for the pod you want to true, e.g.
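A sketch (tag names follow the pattern in the requirements files mentioned below):

```bash
helm upgrade --install magda deploy/helm/magda -f deploy/helm/minikube-dev.yml \
  --set tags.all=false \
  --set tags.combined-db=true
```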

You can find all available tags in deploy/helm/magda-core/requirements.yaml and deploy/helm/magda/requirements.yaml

Once everything starts up, you can access the web front end on http://192.168.99.100:30100. The IP address may be different on your system. Get the real IP address by running:
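For example:

```bash
minikube ip
```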

It’s a good idea to add an entry for minikube.data.gov.au to your hosts file (C:\Windows\System32\drivers\etc\hosts on Windows), mapping it to your Minikube IP address. Some services may assume this is in place. For example:
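An example hosts entry (substitute your own minikube IP):

```
192.168.99.100  minikube.data.gov.au
```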

Running on both host and minikube

It’s also possible to run the component you’re working on on your host, and the services it depends on in minikube. Depending on what you’re doing, this might be simple or complicated.

Using the minikube database

This is super-easy, just run
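A sketch (the service name and local port are assumptions; check kubectl get svc and doc/local-ports.md):

```bash
kubectl port-forward svc/combined-db 5432:5432
```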

Now you can connect to the database in minikube as if it were running locally, while still taking advantage of all the automatic schema setup that the docker image does.

Running a microservice locally but still connecting through the gateway


You might find yourself developing an API locally that depends on authentication, which is easiest done by just logging in through the web interface and connecting through the gateway. You can actually make this work by telling the gateway to proxy your service to 192.168.99.1 in deploy/helm/internal-charts/gateway/templates/configmap.yaml. For instance, if I wanted to run the search api locally, I’d change configmap.yaml like so:
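A sketch of the kind of change (the key names and port are assumptions based on the gateway's routes config; the real configmap.yaml may differ):

```yaml
routes:
  search:
    to: http://192.168.99.1:6102/v0
```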

Then update helm:
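For example, re-using the install sketch from earlier:

```bash
helm upgrade magda deploy/helm/magda -f deploy/helm/minikube-dev.yml
```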

Now when I go to http://${minikube ip}/api/v0/search, it’ll be proxied to my local search rather than the one in minikube.

Be aware that if your local service has to connect to the database or other microservices in minikube you’ll have to use kubectl port-forward to proxy from localhost:{port} to the appropriate service in minikube - you can find a list of ports at https://github.com/magda-io/magda/blob/master/doc/local-ports.md.

In the likely event you need to figure out what the JWT shared secret is on your minikube, you can cheat by opening up a shell to a container that has that secret and echoing the environment variable:
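A sketch (the deployment and environment variable names are assumptions):

```bash
kubectl exec -it deployment/gateway -- sh -c 'echo $JWT_SECRET'
```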

Running local minions

You can use the same pattern for minions - register a webhook with a url host of 192.168.99.1 and it’ll post webhooks to your local machine instead of within the minikube network. Be aware that your minion won’t be able to find the registry until you use kubectl port-forward to make it work… e.g.
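For example (the service name and ports are assumptions; see doc/local-ports.md for the real numbers):

```bash
kubectl port-forward svc/registry-api 6101:80
```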

What do I need to run?

Running individual components is easy enough, but how do we get a fully working system? It is rarely necessary to run all of MAGDA locally, but various components depend on other components as follows:

Component               | Dependencies
magda-*-connector       | magda-registry-api
magda-*-minion          | magda-registry-api
magda-authorization-api | magda-postgres, magda-migrator-combined-db
magda-gateway           | magda-registry-api, magda-search-api, magda-web-client, magda-authorization-api, magda-discussions-api
magda-indexer           | magda-elastic-search
magda-registry-api      | magda-postgres, magda-migrator-combined-db
magda-search-api        | magda-elastic-search
magda-web-client        | magda-web-server, but uses API at https://dev.magda.io/api if server is not running.
magda-web-server        | none, but if this is running then magda-gateway and its dependencies must be too or API calls will fail.

Architecture Diagram

The following architecture diagram may help you get a clearer idea of which components you need to run in order to look at a particular functional area:

The following table shows the relationship between Magda components and Diagram elements:

Component                       | Diagram elements
magda-admin-api                 | Admin API (NodeJS)
magda-*-connector               | Connectors
magda-elastic-search            | ES Client, ES Data (x2), ES Master (x3)
magda-*-minion                  | Minions
magda-authorization-api         | Auth API (NodeJS)
magda-gateway                   | Gateway (x1+) (NodeJS)
magda-indexer                   | Search Indexer (Scala)
magda-registry-api              | Registry API (Scala)
magda-search-api                | Search API (Scala)
magda-web-client                | MAGDA Web UI
magda-web-server                | Web Server (NodeJS)
magda-preview-map               | Terria Server (NodeJS)
magda-postgres                  | All databases - see the migrators that set up the individual database schemas below
magda-migrator-authorization-db | Auth DB (Postgres). Only used in the production environment.
magda-migrator-discussions-db   | Discussion DB (Postgres). Only used in the production environment.
magda-migrator-registry-db      | Registry DB (Postgres). Only used in the production environment.
magda-migrator-session-db       | Session DB (Postgres). Only used in the production environment.
magda-migrator-combined-db      | Registry DB, Session DB, Discussion DB, Auth DB (all Postgres). Only used in the dev environment; production launches all DB components above separately.

Running on your host machine

You can also avoid minikube and run magda components on your local machine - this is much, much trickier. In any component (except databases/elasticsearch), you can run:
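That command is:

```bash
yarn run dev
```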

This will build and launch the component, and automatically stop, build, and restart it whenever source changes are detected. In some cases (e.g. code generation), it is necessary to run yarn run build at least once before yarn run dev will work. Typically it is not necessary to run yarn run build again in the course of development, though, unless you’re changing something other than source code.

A typical use case would be:

  1. Start combined-db in Minikube using helm:

From root level of the project directory:
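This follows the same tags pattern sketched in the "Running individual services" section, enabling only combined-db (chart path and tag names are assumptions):

```bash
helm upgrade --install magda deploy/helm/magda -f deploy/helm/minikube-dev.yml \
  --set tags.all=false \
  --set tags.combined-db=true
```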

  2. Port forward the database service port to localhost so that your locally running program (outside the Kubernetes cluster in minikube) can connect to it (see the sketch after this list):
  3. Start the registry API by executing the following command (also sketched below):
  4. (Optional) If later you want to start Elasticsearch as well:
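Sketches for steps 2 and 3 (the service name, ports and component directory are assumptions; doc/local-ports.md lists the real port numbers):

```bash
# Step 2: forward the combined database's Postgres port to localhost
kubectl port-forward svc/combined-db 5432:5432

# Step 3: run the registry API locally, rebuilding on change
cd magda-registry-api
yarn run dev
```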

Like combined-db, Elasticsearch can only be started in minikube via helm, rather than with yarn run dev.

You need to upgrade the previously installed magda helm chart to include the magda-elastic-search component:
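For example (the tag name is an assumption; check the requirements.yaml files mentioned earlier for the real one):

```bash
helm upgrade magda deploy/helm/magda -f deploy/helm/minikube-dev.yml \
  --set tags.all=false \
  --set tags.combined-db=true \
  --set tags.elastic-search=true
```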

Then port forward Elasticsearch so that you can run other components that may need to connect to it from outside minikube:
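For example (the service name is an assumption; 9200 is Elasticsearch's default HTTP port):

```bash
kubectl port-forward svc/elasticsearch 9200:9200
```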

Debugging Node.js / TypeScript components

Node.js / TypeScript components can easily be debugged using the Visual Studio Code debugger. Set up a launch configuration like this:
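A minimal sketch for a TypeScript component (the component name and output paths are assumptions; adjust program/outFiles to the component you're debugging):

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "node",
      "request": "launch",
      "name": "Debug magda-gateway (sketch)",
      "cwd": "${workspaceFolder}/magda-gateway",
      "program": "${workspaceFolder}/magda-gateway/dist/index.js",
      "outFiles": ["${workspaceFolder}/magda-gateway/dist/**/*.js"],
      "sourceMaps": true
    }
  ]
}
```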

Debugging Scala components

Scala components can easily be debugged using the IntelliJ debugger. Create a debug configuration for the App class of whatever component you’re debugging.

How to create local users

Please see document: How to create local users

How to create API key

Please see document: How to create API key

The goal of this example is to show you how to get a Node.js application into a Docker container. The guide is intended for development, and not for a production deployment. The guide also assumes you have a working Docker installation and a basic understanding of how a Node.js application is structured.

In the first part of this guide we will create a simple web application in Node.js, then we will build a Docker image for that application, and lastly we will instantiate a container from that image.

Docker allows you to package an application with its environment and all of its dependencies into a 'box', called a container. Usually, a container consists of an application running in a stripped-to-basics version of a Linux operating system. An image is the blueprint for a container; a container is a running instance of an image.

Create the Node.js app

First, create a new directory where all the files will live. In this directory create a package.json file that describes your app and its dependencies:
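A minimal sketch (the app name and dependency version are placeholders):

```json
{
  "name": "docker_web_app",
  "version": "1.0.0",
  "description": "Node.js on Docker",
  "main": "server.js",
  "scripts": {
    "start": "node server.js"
  },
  "dependencies": {
    "express": "^4.18.2"
  }
}
```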

With your new package.json file, run npm install. If you are using npm version 5 or later, this will generate a package-lock.json file which will be copied to your Docker image.

Then, create a server.js file that defines a web app using the Express.js framework:
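A sketch of such a server (the greeting is a placeholder; the port must match the EXPOSE and -p values used later):

```js
'use strict';

const express = require('express');

// Constants
const PORT = 8080;
const HOST = '0.0.0.0';

// App
const app = express();
app.get('/', (req, res) => {
  res.send('Hello World');
});

app.listen(PORT, HOST, () => {
  console.log(`Running on http://${HOST}:${PORT}`);
});
```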

In the next steps, we'll look at how you can run this app inside a Docker container using the official Docker image. First, you'll need to build a Docker image of your app.


Creating a Dockerfile

Create an empty file called Dockerfile:
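For example:

```bash
touch Dockerfile
```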

Open the Dockerfile in your favorite text editor

The first thing we need to do is define what image we want to build from. Here we will use the latest LTS (long term support) version 14 of node available from the Docker Hub:
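That is:

```dockerfile
FROM node:14
```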

Next we create a directory to hold the application code inside the image; this will be the working directory for your application:
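A common choice (the exact path is a convention, not a requirement):

```dockerfile
# Create app directory
WORKDIR /usr/src/app
```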

This image comes with Node.js and NPM already installed, so the next thing we need to do is to install your app dependencies using the npm binary. Please note that if you are using npm version 4 or earlier a package-lock.json file will not be generated.
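For example:

```dockerfile
# Install app dependencies
# A wildcard is used so both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./

RUN npm install
# If you are building your code for production
# RUN npm ci --only=production
```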

Note that, rather than copying the entire working directory, we are only copying the package.json file. This allows us to take advantage of cached Docker layers. bitJudo has a good explanation of this here. Furthermore, the npm ci command, specified in the comments, helps provide faster, reliable, reproducible builds for production environments. You can read more about this here.

To bundle your app's source code inside the Docker image, use the COPY instruction:
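For example:

```dockerfile
# Bundle app source
COPY . .
```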

Your app binds to port 8080, so you'll use the EXPOSE instruction to have it mapped by the docker daemon:
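That is:

```dockerfile
EXPOSE 8080
```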

Last but not least, define the command to run your app using CMD, which defines your runtime. Here we will use node server.js to start your server:
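That is:

```dockerfile
CMD [ "node", "server.js" ]
```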

Your Dockerfile should now look like this:
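Putting the pieces above together:

```dockerfile
FROM node:14

# Create app directory
WORKDIR /usr/src/app

# Install app dependencies
COPY package*.json ./
RUN npm install
# If you are building your code for production
# RUN npm ci --only=production

# Bundle app source
COPY . .

EXPOSE 8080
CMD [ "node", "server.js" ]
```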


.dockerignore file

Create a .dockerignore file in the same directory as your Dockerfile with the following content:
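For example:

```
node_modules
npm-debug.log
```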

This will prevent your local modules and debug logs from being copied onto your Docker image and possibly overwriting modules installed within your image.

Building your image

Go to the directory that has your Dockerfile and run the following command to build the Docker image. The -t flag lets you tag your image so it's easier to find later using the docker images command:
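For example (the tag is a placeholder):

```bash
docker build -t <your username>/node-web-app .
```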

Your image will now be listed by Docker:
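For example:

```bash
docker images
```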

Run the image

Running your image with -d runs the container in detached mode, leaving the container running in the background. The -p flag redirects a public port to a private port inside the container. Run the image you previously built:
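For example:

```bash
docker run -p 49160:8080 -d <your username>/node-web-app
```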

Print the output of your app:
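For example:

```bash
# Get the container ID
docker ps

# Print app output
docker logs <container id>
```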

If you need to go inside the container you can use the exec command:
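For example:

```bash
# Enter the container
docker exec -it <container id> /bin/bash
```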

Test

To test your app, get the port of your app that Docker mapped:
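For instance:

```bash
docker ps
```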


In the example above, Docker mapped the 8080 port inside of the container to the port 49160 on your machine.

Now you can call your app using curl (install it if needed via: sudo apt-get install curl):
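For example (using the mapped port from above):

```bash
curl -i localhost:49160
```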

We hope this tutorial helped you get a simple Node.js application up and running on Docker.

You can find more information about Docker and Node.js on Docker in the following places: