Wednesday, July 11, 2018

Unit Testing Node.js HTTP Network Requests using PassThrough Streams

Do you really need unit testing?

I use the mocha, chai and sinon unit test stack for my Node.js and frontend JavaScript projects. It's a very powerful and user-friendly stack. Sinon is great for stubbing and spying on your unit-tested code's callbacks and promise resolves.

If you have ever needed to unit test functions that also make network requests using the Node.js http or https modules, you are faced with some complex logic paths for sinon mocking. As it is a network request, you need to ask yourself "do I really need to hit this API and test its valid response?" or "am I happy to simulate the network request but test the callback logic?"

In your quest to extend the coverage of functions that also make network requests, ideally you want to mock just the network request itself, but still test the request's callback for valid execution flow.

Of late I have added nock to the above-mentioned test stack as well. nock is a brilliant tool to completely mock out your Node.js network requests. It's very declarative and abstracts away all the complexities of setting up stubs and spies.

I will write a separate post on how to use nock, but I wanted to show how I used to unit test network requests prior to nock. This will be useful for people who prefer not to use nock, or who want to keep their unit testing as "vanilla" as possible.

To simulate and mock network requests I use the very useful stream.PassThrough class provided natively by Node.js. In their documentation they describe it as:


The stream.PassThrough class is a trivial implementation of a Transform stream that
simply passes the input bytes across to the output. Its purpose is primarily for
examples and testing, but there are some use cases where stream.PassThrough is
useful as a building block for novel sorts of streams.


Here is an example implementation of unit testing a function that includes an https get request, using the mocha, chai, sinon and PassThrough tools. I have provided detailed comments in the code, which I hope helps explain what is going on.

const https = require('https');

// An example function that has other logic you need unit tested
// ... but you also want to cover the https call as part of your coverage 
function functionWithHttpsCall(apiInput) { 
  // wrap this whole async flow so it is Promise based
  return new Promise((resolve, reject) => {
    // do something with apiInput, update logic if needed etc

    // make an api call
    const request = https.get(`https://dummyapi.com?giveme=${apiInput}`, (response) => {
      let body = '';

      // construct stream response
      response.on('data', (d) => {
        body += d;
      });

      // stream ended so resolve the promise
      // ... guard the JSON.parse so a malformed body rejects instead of throwing
      response.on('end', () => {
        try {
          resolve(JSON.parse(body));
        } catch (e) {
          reject(e);
        }
      });
    });

    request.on('error', (e) => {
      reject(e);
    });
  });
}

We now need to write a unit test for the functionWithHttpsCall function above. We want to test all execution flows in this function to improve our code coverage, so we also want to test the https.get callback response (without actually hitting the live API).

Here is the unit test for this test case:

// using mocha, sinon
const chai = require('chai');
const sinon = require('sinon');
const expect = chai.expect; // use chai.expect assertions
chai.use(require('sinon-chai')); // extend chai with sinon assertions

const https = require('https'); // the core https Node.js module
const { PassThrough } = require('stream'); // PassThrough class

// Unit Tests
describe('My App Tests - functionWithHttpsCall', () => {
  // do this before each test
  beforeEach(() => {
    // stub out calls to https.get
    // due to how Node.js caches modules, calls to https.get
    // ... from this point onwards will use the sinon stub
    this.get = sinon.stub(https, 'get');
  });

  // clean up after each test
  afterEach(() => {
    // restore after each test
    https.get.restore();
  });

  // begin test config
  let mockedInput = 'authors';
  let expectedOutput = {'authorName': 'john doe'};

  it('should return a valid response when we hit a https call', (done) => {
    // create a fake response stream using the PassThrough utility
    const response = new PassThrough();
    response.write(JSON.stringify(expectedOutput)); // inject the fake response payload
    response.end();

    // create a fake request stream as well (needed below)
    const request = new PassThrough();

    // when the test call below to functionWithHttpsCall hits the stub (https.get)
    // ... respond with the mock response stream as argument index 1 of the https.get callback
    this.get.callsArgWith(1, response)
            .returns(request); // calls to https.get return the request stream, so our stub needs to as well

    // unit test the functionWithHttpsCall function
    functionWithHttpsCall(mockedInput)
      .then((actualOutput) => {

        // the actual output (actualOutput) will be the same as the (expectedOutput) variable above
        // ... because we wrote (expectedOutput) into the response PassThrough above

        expect(actualOutput).to.deep.equal(expectedOutput); // this test will pass

        done();
      });
  });
});


Hope the above makes sense and gives you an idea of how you can use PassThrough to unit test your network calls.

But PassThrough does have its limitations: I have not worked out how to simulate various HTTP status codes (404, 500), or how to simulate network timeouts. These are the main reasons why I moved over to nock.

Happy coding!

Friday, May 11, 2018

Dokku app deployment fails with a “main: command not found. Access Denied”. It's most likely a storage space issue...

If your Dokku app deployment failed with an error similar to this:

/home/dokku/.basher/bash: main: command not found
Access Denied

And you have confirmed that it's not a typo in your command, that your remote server isn't down, that the repo hasn't been deleted etc... then you have most likely hit this issue because your remote server has run out of disk space.

You can confirm this by SSH'ing into your remote server and running the "df -h" command. Once you confirm it, there are a few things you can do to release some space so you can redeploy.

The following is risky as you are dealing with Docker directly, so only proceed if you know what you are doing (and do these at your own RISK!)

Run:
sudo docker ps

To see the Docker containers that are currently running.

Run:
sudo docker ps -a -q -f status=exited

To see the containers that have exited. You can delete these to free up space. You will see a list; remove containers using their container IDs, e.g.

sudo docker rm 0fb0724ae3a7 --> this is the container ID

You can also remove all exited containers in one go with: sudo docker rm $(sudo docker ps -a -q -f status=exited)

Now check if you have some free space and try to redeploy.

If you still have no space, you can also try deleting Docker images. Again, this is risky if you are not sure what you are doing, so please proceed at your own risk.

Run:
docker images

or Run:
sudo docker images | grep '<none>'

This should show you orphaned (dangling) images.

Start deleting some images using this command:
sudo docker rmi afa892371d46 --> this is the IMAGE ID

Now you should have some space for a deployment.

** If you are using Dokku, then I've noticed that over time it leaves many Docker containers in a bad state and creates a lot of bloat. At times I go ahead and remove any containers that I feel are invalid containers created by Dokku. The Dokku "app name" is not necessarily what your Docker container name will be, so don't let this confuse you: even if you have no containers that match your Dokku app name, the next time you push a Dokku deploy it will spin up a new container for your app. Again - all this is risky, so don't attempt it unless you are sure.

Hopefully this helps someone. You can read more in these issues raised on GitHub:
https://github.com/dokku/dokku-mariadb/issues/38
https://github.com/dokku/dokku/issues/120

More useful Docker commands:
https://zaiste.net/removing_docker_containers/

Bonus free space tip: it might also help to clean up space on your Linux box in general. I was using Ubuntu 16.04.2 and this article gave some great tips on freeing up space: https://www.omgubuntu.co.uk/2016/08/5-ways-free-up-space-on-ubuntu

By running "sudo apt-get autoremove" I was able to free up close to 1 GB!

Monday, April 30, 2018

Installing Octave on OSX with Graphics (For Graphing and Plotting) Enabled

This article is relevant for:
- Octave version 3.8.0
- OSX (specifically version 10.11.6)
It may or may not work for other versions.


A Surface Plot generated by Octave



GNU Octave is a free, open source programming language and environment that is very useful if you want to get started with Machine Learning or data visualisation. Installing Octave via Octave Forge on a Mac used to be very complicated, but over time it has become a lot easier thanks to an official installer that walks you through the process.

But I did have some issues getting the full functionality up and running, and here are the steps I took to resolve the problems. By following the steps below you will have GNU Octave running with full graphing capabilities.

1) Download the Octave 3.8.0 installer here; it's a large file so it will take some time

2) Follow the installer's instructions and you will have the Octave-cli and Octave-gui available on your computer

3) Opening the Octave-cli will open the Terminal with the Octave command prompt active



4) You can now run your Octave scripts and code from here

5) Octave has graphical plotting built in; e.g. when you run "plotData(X, y);" it should bring up a window with the X and y data plotted onto a graph. But you may face an issue here if a graphics engine has not been allocated to Octave-cli. The error you see will be something similar to:

WARNING: Plotting with an 'unknown' terminal.
No output will be generated. Please select a terminal with 'set terminal'.
What we have to do is install gnuplot with the Qt terminal and assign it to Octave-cli.

Here is how you do it:

- Open your standard OSX terminal and type:
gnuplot
- If this command does not work, then you will need to install gnuplot with Qt support like so (use install instead of reinstall if more appropriate)
brew reinstall gnuplot --with-qt
- Now when you re-type "gnuplot" you should see it work and show that the engine is Qt (it will say something like - Terminal type is now 'qt')

In gnuplot, now run the command
set terminal
This will list all the available terminals and confirm that Qt exists. Once you have confirmed this, all that is left is to configure Octave-cli to use it. If needed, you can select Qt as the terminal from here as well.

- Back in your Octave-cli terminal, type this command
setenv('GNUTERM','qt')

- You will have to restart the terminal (or you may have to restart your computer as well)

6) You should now have Octave-cli working with full graphics capability.


Hope this helps someone.

Thursday, January 25, 2018

Toggle between Fish and Bash Shell


Love Fish Shell? But want to toggle between Fish and Bash? I do this at least a few times a day. I mainly need to do it when I copy-paste some script from the web and Fish complains that it “does not like certain characters”.

Well here is how you can toggle between the Fish and Bash Shells without leaving your command line.

FISH -> BASH :
In your Fish terminal, type “bash --login” to switch to your Bash shell

BASH -> FISH :
In your Bash terminal, type “fish” to switch back to your Fish terminal

You're Welcome :)

Thursday, January 11, 2018

Fixing a Dokku App that is "Locked" due to a Previously Interrupted Build or Deploy

Just got this new error when trying to deploy to a Dokku box:

remote: app is currently being deployed or locked. Waiting...

This can happen if a previous build and deploy was interrupted by you or the origin server or if the app crashes in a live box and Dokku is unaware for some reason.

With a little investigating, it appears Dokku locks by creating an empty hidden file:

/home/dokku/your-app/.build.lock

// or it may be called
/home/dokku/your-app/.deploy.lock

Deleting these files will fix your problem.

My Linux / Shell Cheat Sheet

Here is a list of Shell Commands that I find useful. I'm keeping a log here so I can refer back whenever needed - which happens a lot when I'm debugging live server issues :)


// Check disk space
df -h

// Show total folder sizes in human-readable format - very useful to debug out-of-disk-space errors (to locate the problem)
du -bsh *

Update swap space (if you have already configured swap before)
1. Turn all swap off

sudo swapoff -a

2. Resize the swapfile 

sudo dd if=/dev/zero of=/swapfile bs=1M count=1024

3. Make swapfile usable

sudo mkswap /swapfile

4. Turn swap on again
sudo swapon /swapfile

Thursday, April 20, 2017

Move Working Code Changes to new GIT Branch without Stashing



If you are like me, you probably find yourself working on the "master" branch of your project most of the time (when you are the sole contributor on a project, of course...). Often I start working on an issue raised by my users, confident that I can fix it quickly in one go, but I soon discover that it is far more complicated than I initially thought and regret not creating a new "issue branch" to work off (so my master branch stays clean and the commits are atomic to the features/issues I'm working on).

I often face this workflow problem when I'm working on my open source project React Stepzilla for example.

Here is what I do now when I come across this GIT workflow problem:

  • Locally on my computer I've checked out "master" and I'm working on a new GitHub issue with id 27 (for example)
  • I modify multiple files trying to fix the issue
  • I then discover that I'm not going to be able to complete the fix easily and regret not creating a new issue branch "issue-27"
  • Running "git status" shows me all my changes so far on the "master" branch
  • I then run "git checkout -b issue-27". This creates a new branch called issue-27 carrying all my local changes.
  • I'm now in a new local branch called "issue-27"
  • I continue my work and stop for the moment. I then add my code via "git add -A" and commit it using "git commit -m 'Working on #27'" (putting the #27 here makes the code change referenced in the GitHub issue, which is nice...)
  • I then push the new branch to origin and track it: "git push -u origin issue-27"
  • If you checkout master now and run "git status" you will notice that the changes are no longer "pending" here and your master is clean.
  • Once you are done with your work on the "issue-27" branch, you can merge the changes into "master" via a GitHub "pull request" or locally, and then delete the "issue-27" branch.


Hope this helps you guys; also, here is a good Stack Overflow answer that tells you more.
