Thursday, October 4, 2018

Moving Dokku Apps Between Physical Servers with No (or Very Little) Downtime


Moving Live Servers without Downtime... Risky, my friend!

I built and run a fairly large-scale piece of free cloud software that spans multiple independent servers (following a microservice design pattern), and recently I was trying to optimise the operational management of all of them. I am a big fan of Dokku and had cloud servers running "microservice apps" deployed via Dokku on both Digital Ocean and AWS EC2. The platforms were fragmented, and I wanted to bring all my cloud servers together onto the same cloud platform (Digital Ocean or AWS) to make them easier to manage and to gain operational visibility (centralised monitoring via dashboards where metrics all mean the same thing).

After looking at my options I decided to move all my servers to Digital Ocean. I have always loved their platform, and after they added streamlined (and free) server monitoring it became a no-brainer (AWS EC2 monitoring is not seamless, and the machines cost more as well). For me, Digital Ocean always had the edge over AWS when it came to user-friendliness, and it provides a much better DX (Developer Experience). AWS is a lot more feature-rich for building distributed backend systems, but Digital Ocean gives you a much better experience when it comes to cloud machine setup and management (Droplet vs EC2).

In this tutorial, I will show you how I moved a “live” microservice API running on AWS (an EC2 instance with Dokku installed and the API microservice deployed inside a Docker container) to the same setup in Digital Ocean (a Droplet with Dokku installed and the API microservice deployed inside a Docker container). As it was a live API, I wanted as little downtime as possible (I managed to do it with zero downtime - but as DNS updates are required, you may want to schedule some downtime with your users before you attempt this). The API was also served over HTTPS, so I had to bring up the HTTP and HTTPS endpoints as close together in time as possible. (I used the Dokku LetsEncrypt plugin for this, as shown below.)

Docker via Dokku in DigitalOcean


My setup was as follows:
Old Server: AWS - EC2 T2-Micro, Dokku 0.8.0 (older version)
New Server: Digital Ocean Droplet, Dokku 0.12.13 (I used the Digital Ocean one-click app images they already have for Dokku)


💥 💥 Firstly, I need to give you my usual disclaimer for these kinds of risky tutorials :) There might be a much better way to do this, but these are the steps I followed and I got it done with zero downtime. I can’t guarantee this will work for you, and attempting it might result in excessive downtime or lost data. Please do this at your own risk!



Now let’s get to it:

1) Sign up for a new Digital Ocean server (a Droplet with Dokku installed). I used the one-click Dokku image they had. I already had an SSH key set up as I had other servers with DigitalOcean, so I used the existing key during setup of the new Droplet.

2) Once your server is up, open the Dokku config page (it’s usually found by hitting your new IP in the browser). In the config page, make sure your new server IP is in the "hostname" field - do not use the domain your old server is currently using. Don't select virtualhost naming for now (although I don’t think this will impact you - you will turn virtualhost on in a step below). Finish this config setup as soon as possible, or you risk someone discovering your IP and seeing your key details!

3) SSH into your new server and create a Dokku app with the same name as on the old server:

dokku apps:create my-app
4) Locally, in the Git-controlled source code for the app, add a new Git remote to your repo pointing at your new server:

git remote add dokku-new dokku@my-new-server-ip:my-app
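With the remote in place, the deploy in the next step is just a standard Git push to it (assuming master is the branch you deploy from - adjust if yours differs):

git push dokku-new master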
5) Now push your latest master code to dokku-new, as shown above. If you have any issues with the deployment, you can enable tracing and debugging by SSHing into your new server and running:

dokku trace on
6) One common error that may occur with the above step is something similar to “pre-receive hook declined”. This can happen even though your code was actually deployed successfully: (in the case of a Node.js app) the app was started on the new server, but a dependency driven by environment variables, like a database or Redis connection, failed - because the environment variables that hold the database endpoint or connection details do not exist on the new server yet. This throws an error like the above, which will make you think the deployment failed.

7) If this happens, then SSH into your box and set all the required environment variables. You do this via the command:

dokku config:set my-app varName1=VarValue1 varName2=VarValue2

8) Once deployment has completed successfully, SSH in again and check that all your environment variables are there; you will also notice some new Dokku variables have been added.
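You can list an app's environment variables like so (on newer Dokku versions the equivalent is dokku config:show my-app):

dokku config my-app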

9) I then enabled vhost/virtualhost on my new app via the command:

dokku domains:enable my-app
This will restart the app with vhost enabled and give you an IP:80 URL instead of the default IP:RandomPort. Now you can hit the URL in the browser to verify that your API is accessible. For me it was a matter of hitting the health-check URL I have on all my microservices, e.g. http://IP/health-check
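For example, from your local machine using curl (NEW-SERVER-IP below stands for your new Droplet's IP):

curl -i http://NEW-SERVER-IP/health-check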

10) I then added the live domain to the app via the command:

dokku domains:add my-app myapplivedomain.com
After you run the above command you should see a success message saying nginx reloaded with your new settings. Confirm this has happened by running this command and checking the response:

dokku domains:report my-app
11) If you restart your app now you should see the domain appear in the Dokku log:
.
.
=====> Application deployed:
http://myapplivedomain.com
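The restart itself can be done via Dokku's process commands, e.g.:

dokku ps:restart my-app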

12) Now we should kick off the HTTPS certificate part via the dokku-letsencrypt plugin. We won't be able to complete it yet, as the DNS does not point to the new IP, but we can start it now. We get HTTPS via the free Let's Encrypt service. Install the Dokku plugin on your new server via:

sudo dokku plugin:install https://github.com/dokku/dokku-letsencrypt.git

* This article also has some instructions on this process - https://medium.com/@pimterry/effortlessly-add-https-to-dokku-with-lets-encrypt-900696366890

13) I ran this command to set it up:
dokku config:set --no-restart my-app DOKKU_LETSENCRYPT_EMAIL=your@email.tld
* your@email.tld should be an email address you control and monitor

14) Then I ran the command to start up the letsencrypt process:

dokku letsencrypt my-app
15) The above command will most likely fail with an error like:
Did you set correct path in -d myapplivedomain.com:path or --default_root? Is there a warning log entry about unsuccessful self-verification? Are all your domains accessible from the internet? Failing authorizations: https://acme-v01.api.letsencrypt.org/acme/authz/[some-verylong-code-here] Challenge validation has failed, see error log.

16) It's now time to swap the DNS records for your domain and point them to the new server's IP. There will be some downtime from now on, as we still have not successfully enabled Let's Encrypt for HTTPS on the new server (and your downstream apps are probably expecting the HTTPS URLs) - but we have no choice, as we can’t enable Let's Encrypt until the domain resolves to the new server's IP.

17) Swap your DNS A and @ record values to the new server IP (the exact steps may differ for each domain registrar).

18) At the same time, complete the Let's Encrypt “acme challenge” referenced in the error message we saw when enabling letsencrypt failed. (I'm not sure this step is strictly required, but I did it anyway.) Basically, you need to add the [some-verylong-code-here] from the error message above as a TXT DNS record with the name _acme-challenge. The code goes in the record's value, like so:

TXT    _acme-challenge    some-verylong-code-here
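You can watch for the DNS change to propagate by querying the records directly (using dig here; nslookup works too):

dig +short myapplivedomain.com
dig +short TXT _acme-challenge.myapplivedomain.com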
19) After the DNS resolves (which should not take too long), go back to the SSH console and re-run the command that failed previously:

dokku letsencrypt my-app
20) It should now work successfully... and you won't see that error anymore.

21) It's also a good idea to auto-renew your Let's Encrypt cert, or your HTTPS endpoints may eventually go down. Do that like so:

dokku letsencrypt:cron-job --add
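To double-check the certificate is in place (and see its expiry), the plugin also has a listing command:

dokku letsencrypt:ls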
22) Finally, to verify it all went well, restart the Dokku app. It should reload and show both your HTTP and HTTPS endpoint mappings set up:
.
.
=====> Application deployed:
http://myapplivedomain.com
https://myapplivedomain.com

23) You have now officially swapped over to the new server! 🙌💪💃💕


If this article helped you out and you are keen to give Digital Ocean a try (I highly recommend them), then use this referral link to get some free credit:
https://m.do.co/c/63904110df0d

You should get $25 (in October 2018 I believe it’s actually $100!) and I’ll get some credit to run my servers as well :)

All the best and happy coding!

Wednesday, July 11, 2018

Unit Testing Node.js HTTP Network Requests using PassThrough Streams

Do you really need unit testing?

I use the mocha, chai and sinon unit test stack for my Node.js and frontend JavaScript projects. It's a very powerful and user-friendly stack. Sinon is great for stubbing and spying on your unit-tested code's callbacks and promise resolves.

If you have ever needed to unit test functions that make network requests using the Node.js http or https modules, you are faced with some complex logic paths for sinon mocking. As it is a network request, you need to ask yourself "do I really need to hit this API and test its valid response?" or "am I happy to simulate the network request but test the callback logic?"

In your quest to extend the coverage of functions that also make network requests, you ideally just want to mock the network request but test the request's callback for a valid execution flow.

Of late I have added nock to the above-mentioned test stack as well. nock is a brilliant tool to completely mock out your Node.js network requests. It's very declarative and abstracts away all the complexities of setting up stubs and spies.
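To give you a taste, here is a minimal sketch of intercepting the kind of GET request used in the example later in this post (the host, path and query string are just the ones from that example; check nock's docs for your version):

const nock = require('nock');

// intercept GET https://dummyapi.com?giveme=authors and reply with a canned payload
nock('https://dummyapi.com')
  .get('/')
  .query({ giveme: 'authors' })
  .reply(200, { authorName: 'john doe' });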

I will write a separate post on how to use nock, but I wanted to show how I used to unit test network requests prior to nock. This will be useful for people who prefer not to use nock, or who want to keep their unit testing as "vanilla" as possible.

To simulate and mock network requests I use the very useful stream.PassThrough class provided natively by Node.js. The documentation describes it as:


The stream.PassThrough class is a trivial implementation of a Transform stream that
simply passes the input bytes across to the output. Its purpose is primarily for
examples and testing, but there are some use cases where stream.PassThrough is
useful as a building block for novel sorts of streams.


Here is an example implementation of unit testing a function that includes an https get request, using the mocha, chai, sinon and PassThrough tools. I have provided detailed comments in the code, which I hope help explain what is going on.

const https = require('https');

// An example function that has other logic you need unit tested
// ... but you also want to cover the https call as part of your coverage 
function functionWithHttpsCall(apiInput) { 
  // wrap this whole async function to be Promise based
  return new Promise((resolve, reject) => {
    // do something with apiInput, update logic if needed etc

    // make an api call
    const request = https.get(`https://dummyapi.com?giveme=${apiInput}`, (response) => {
      let body = '';

      // construct stream response
      response.on('data', (d) => {
        body += d;
      });

      // stream ended so resolve the promise 
      response.on('end', ()=> {
        resolve(JSON.parse(body));
      });
    });

    request.on('error', (e) => {
      reject(e);
    });
  });
}

We now need to write a unit test for the functionWithHttpsCall function above. We want to test all execution flows in this function to improve our code coverage, so we also want to test the https.get callback response (without actually hitting a live API).

Here is the unit test for this test case:

// using mocha, sinon
const chai = require('chai');
const sinon = require('sinon');
const expect = chai.expect; // use chai.expect assertions
chai.use(require('sinon-chai')); // extend chai with sinon assertions

const https = require('https'); // the core https Node.js module
const { PassThrough } = require('stream'); // PassThrough class

// Unit Tests
describe('My App Tests - functionWithHttpsCall', () => {
  // do this before each test
  beforeEach(() => {
    // stub out calls to https.get
    // due to how Node.js's module cache works, calls to https.get
    // ... from this point onwards will use the sinon stub
    this.get = sinon.stub(https, 'get');
  });

  // clean up after each test
  afterEach(() => {
    // restore after each test
    https.get.restore();
  });

  // begin test config
  let mockedInput = 'authors';
  let expectedOutput = {'authorName': 'john doe'};

  it('should return a valid response when we hit a https call', (done) => {
    // create a fake response stream using the PassThrough utility
    const response = new PassThrough();
    response.write(JSON.stringify(expectedOutput)); // inject the fake response payload
    response.end();

    // create a fake request stream as well (needed below)
    const request = new PassThrough();

    // when the test call below to functionWithHttpsCall hits the stub (https.get),
    // ... invoke its callback (argument index 1 of https.get) with the mock response stream
    this.get.callsArgWith(1, response)
            .returns(request); // calls to https.get return a request stream, so our stub needs to as well

    // unit test the functionWithHttpsCall function
    functionWithHttpsCall(mockedInput)
      .then((actualOutput) => {

        // the actual output (actualOutput) will be the same as the (expectedOutput) variable above
        // ... because we used (expectedOutput) in the response PassThrough above

        expect(actualOutput).to.deep.equal(expectedOutput); // this test will pass

        done();
      });
  });
});


Hope the above makes sense and that you now have an idea of how you can use PassThrough to unit test your network calls.

But PassThrough does have its limitations: I have not worked out how to simulate various HTTP status codes (404, 500) or network timeouts. These are the main reasons why I moved over to nock.

Happy coding!

Friday, May 11, 2018

Dokku app deployment fails with a “main: command not found. access denied”. It's most likely a storage space issue...

If your Dokku app deployment failed with an error similar to this:

/home/dokku/.basher/bash: main: command not found
Access Denied

And you have confirmed that it's not some typo in your command, that your remote server isn't down, that the repo hasn't been deleted, etc... then you have most likely hit this issue because your remote server has run out of disk space.

You can confirm this by SSH'ing into your remote server and running the "df -h" command. Once you confirm it, there are a few things you can do to release some space so you can redeploy.

The following is highly risky as you have to deal with Docker directly, but if you know what you are doing it should be OK. (Do these at your own RISK!)

Run:
sudo docker ps

To see the Docker containers that are currently running. (Containers that have exited, and are therefore no longer needed, are listed by the next command.)

Run:
sudo docker ps -a -q -f status=exited

To see the containers that have exited. You can delete these containers to free up space. You will see a list; remove containers using their container IDs, e.g.

sudo docker rm [0fb0724ae3a7] --> this is the containerID
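If you have a lot of exited containers, you can remove them all in one go by feeding the ID list from the earlier command into docker rm (double-check that list before you run this):

sudo docker rm $(sudo docker ps -a -q -f status=exited)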

Now check if you have some free space and try to redeploy.

If you still have no space, you can try deleting the Docker images as well. This is again very risky if you are not sure what is happening, so please do it at your own risk.

Run:
docker images

or Run:
sudo docker images | grep '<none>'

This should show you orphaned images.

Start deleting some images using this command:
sudo docker rmi [afa892371d46]  --> this is the IMAGE ID

Now you should have some space for a deployment.

** If you are using Dokku, then I've noticed that over time it leaves many Docker containers in a bad state and creates a lot of bloat. At times I go ahead and remove any containers that I feel are invalid containers created by Dokku. The Dokku "app name" is not necessarily what your Docker container name will be, so don't let this confuse you; even if you have no containers that match your Dokku app name, the next time you push a Dokku deploy it will spin up a new container for your app. Again - all this is risky, so don't attempt it unless you are sure.

Hopefully this helps someone. You can read more in these issues raised on Github.
https://github.com/dokku/dokku-mariadb/issues/38
https://github.com/dokku/dokku/issues/120

More dockers commands that are useful:
https://zaiste.net/removing_docker_containers/

Bonus free-space tip: it might also help to clean up your Linux box in general. I was using Ubuntu 16.04.2 and this article gave some great tips on freeing space: https://www.omgubuntu.co.uk/2016/08/5-ways-free-up-space-on-ubuntu

By running sudo apt-get autoremove I was able to free up close to 1 GB!

Monday, April 30, 2018

Installing Octave on OSX with Graphics (For Graphing and Plotting) Enabled

This article is relevant for:
- Octave version 3.8.0
- OS X (specifically version 10.11.6)
It may or may not work for other versions.


A Surface Plot generated by Octave



GNU Octave is a free, open-source programming language and environment that is very useful if you want to get started with machine learning or data visualisation. Installing Octave via Octave Forge on a Mac used to be very complicated, but over time it has got a lot easier thanks to an official installer that walks you through the process.

But I did have some issues getting the full functionality up and running, and here are the steps I took to resolve the problems. By following the steps below you will have GNU Octave running with full graphing capabilities.

1) Download the Octave 3.8.0 installer here; it's a large file so it will take some time

2) Follow the installer's instructions and you will have Octave-cli and Octave-gui available on your computer

3) Opening the Octave-cli will open the Terminal with the Octave command prompt active



4) You can now run your Octave scripts and code from here

5) Octave has graphical plotting built into it; e.g. when you run "plotData(X, y);" it should bring up a window with the X and y data plotted onto a graph. But you may face an issue here if a graphics engine has not been allocated to Octave-cli. The error you see will be something similar to:

WARNING: Plotting with an 'unknown' terminal.
No output will be generated. Please select a terminal with 'set terminal'.
What we have to do is install Qt as the engine and assign it to Octave-cli.

Here is how you do it:

- Open your standard OSX terminal and type:
gnuplot
- If this command does not work, then you will need to install it with Qt support like so (use install instead of reinstall if more appropriate):
brew reinstall gnuplot --with-qt
- Now when you re-type "gnuplot" you should see it work and show that the engine is Qt (it will say something like "Terminal type is now 'qt'")

In gnuplot, now run the command
set terminal
This will list all the available terminals and confirm that Qt exists. If you have confirmed this, then all that is left to do is configure Octave-cli to use it. If needed, you can also select Qt as the engine from here.

- Back in your Octave-cli terminal, type this command
setenv('GNUTERM','qt')
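To make this setting persist across sessions, you can put the same line into an ~/.octaverc file, which Octave runs at startup:

setenv('GNUTERM','qt')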

- You will have to restart the terminal (or you may have to restart your computer as well)

6) You should now have Octave-cli working with full graphics capability.


Hope this helps someone.

Thursday, January 25, 2018

Toggle between Fish and Bash Shell


Love Fish Shell? But want to toggle between Fish and Bash? I know I do this at least a few times a day. I mainly need to do this when I copy-paste some script from the web and Fish complains that it “does not like certain characters”.

Well here is how you can toggle between the Fish and Bash Shells without leaving your command line.

FISH -> BASH :
In your Fish terminal, type “bash --login” to switch to a Bash shell

BASH -> FISH :
In your Bash terminal, type “fish” to switch back to your Fish shell

You're Welcome :)

Thursday, January 11, 2018

Fixing a Dokku App that is "Locked" due to a Previously Interrupted Build or Deploy

Just got this new error when trying to deploy to a Dokku box:

remote: app is currently being deployed or locked. Waiting...

This can happen if a previous build and deploy was interrupted by you or the origin server or if the app crashes in a live box and Dokku is unaware for some reason.

With a little investigating, it appears Dokku locks the app by creating an empty hidden file:

/home/dokku/your-app/.build.lock

// or it may be called
/home/dokku/your-app/.deploy.lock

Deleting these files will fix your problem.
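For example (with your-app being your Dokku app's name):

rm /home/dokku/your-app/.build.lock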

My Linux / Shell Cheat Sheet

Here is a list of Shell Commands that I find useful. I'm keeping a log here so I can refer back whenever needed - which happens a lot when I'm debugging live server issues :)


// Check disk space
df -h

// Show total folder sizes in human-readable format - very useful for debugging out-of-disk-space errors (to locate the problem)
du -bsh *

Update swap space (if you have already configured swap before)
1. Make all swap off

sudo swapoff -a

2. Resize the swapfile (this example creates a 1 GB swapfile: 1024 blocks of 1 MB)

sudo dd if=/dev/zero of=/swapfile bs=1M count=1024

3. Make swapfile usable

sudo mkswap /swapfile

4. Turn swap back on

sudo swapon /swapfile
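5. Verify the new swap size (free shows a Swap line)

free -h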