Tuesday, March 12, 2019

JavaScript Function Composition


Functional Programming is all the rage these days, and we are seeing its concepts make their way into the JavaScript language. Although JavaScript is not a functional language by design, there are some built-in APIs that allow for a functional coding style. For example, the array methods filter, map and reduce allow for declarative, immutable transformations of arrays and objects, and the const declaration gives you immutable bindings for primitive types like strings, booleans, and numbers.
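For example, here is a quick sketch of those array methods in action (the prices data is made up, purely to illustrate the declarative style):

const prices = [5, 10, 20, 40];

// filter, map and reduce return new values instead of mutating the source array
const total = prices
  .filter((price) => price > 6)  // keep prices above 6 -> [10, 20, 40]
  .map((price) => price * 2)     // double each one     -> [20, 40, 80]
  .reduce((sum, price) => sum + price, 0); // sum them   -> 140

console.log(total);  // 140
console.log(prices); // [5, 10, 20, 40] - the original array is untouched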

Function Composition

One interesting functional coding pattern is called Function Composition. If you use Redux or Express.js you will see this pattern being used to facilitate their middleware concepts. Middleware can be thought of as chaining/piping a value through multiple stages before resolving it. For example, in Express.js we listen for a request and then pass that request through multiple middleware methods (logging, authentication, etc.) before actually sending it to the response handler. In Redux we can use middleware for asynchronous handling of action creators, for logging, and so on.
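As a rough sketch of that idea in Express.js (the logger and authenticate middleware here are made-up placeholders, just to show the chaining):

const express = require('express');
const app = express();

// each middleware does its work, then calls next() to pass the request down the chain
const logger = (req, res, next) => {
  console.log(`${req.method} ${req.url}`);
  next();
};

const authenticate = (req, res, next) => {
  // pretend auth check happens here
  next();
};

app.use(logger);
app.use(authenticate);

// only after flowing through the middleware does the request hit the handler
app.get('/hello', (req, res) => res.send('hello'));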

Function composition is also a very interesting pattern to code out. In this post I'll explain how I created a library method that lets you pass in any number of middleware functions and composes/joins them into a single function that pipes your input through all of your middleware functions, giving you a single entry point for your initial value.

/*
  this is a simple utility function that composes/merges 2 functions.
  It is used by the main composeAll below when we recurse over the function arguments.
  As you can see, it takes 2 functions, "first" and "next", and returns an anonymous function
  that chains them, passing a val through both
*/
const composeTwo = (first, next) => (val) => {
  return next(first(val));
}

/*
  This is the main function. Comments are inline 
*/
function composeAll() {
  // arguments is array-like, so convert it to a real array using the ES6 spread operator
  const args = [...arguments];
  const len = args.length;

  // if someone is composing nothing or just 1 func, deal with it here
  if (!args || len === 0) {
    return (val) => (val);
  } 

  if (len === 1) {
    return args[0];
  }

  /* using destructuring we pull out the first item as "first" and the 2nd item as "next",
  and spread the remaining args into another array called "other" */
  const [first, next, ...other] = args;

  // if only 2 functions are left just compose them using composeTwo
  if (len === 2) {
    return composeTwo(first, next);
  }

  /* if more than 2 functions were passed, use recursion: composeTwo the first 2 and spread the remaining
  "other" functions back into composeAll. JavaScript recursion uses the call stack internally (last in, first out),
  and each recursive call shrinks the list until just 2 functions remain, which are handled in the len === 2 block above */
  if (len > 2) {
    return composeAll(composeTwo(first, next), ...other)
  }
}

// Start: here are a few sample "middleware" functions
function toUpperCase(val) {
  return val.toUpperCase();
}

function strongify(val) {
  return `<strong>${val}</strong>`;
}

function pad(val) {
  return `----------${val}----------`;
}

function endSmile(val) {
  return `${val} :)`;
}

function startSmile(val) {
  return `:) ${val}`;
}
// End: sample "middleware" functions

// composeAll all these middleware functions
const composed = composeAll(toUpperCase, strongify, pad, endSmile, startSmile);

// composed is now our single "entry point". "foobar" goes through toUpperCase -> strongify -> pad -> endSmile -> startSmile, in that order
const applyAllToStr = composed('foobar');

// this will print ":) ----------<strong>FOOBAR</strong>---------- :)"
console.log(applyAllToStr);
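
As a side note, the same left-to-right composition can be written without explicit recursion by folding over the functions with Array.prototype.reduce - a sketch that should behave the same as composeAll above:

const composeAllWithReduce = (...fns) =>
  (val) => fns.reduce((acc, fn) => fn(acc), val);

// same result as the recursive version
console.log(composeAllWithReduce(toUpperCase, strongify, pad, endSmile, startSmile)('foobar'));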


If you want to use this, feel free to grab the code from here: https://github.com/newbreedofgeek/functionaljs

Happy Coding!

Wednesday, February 27, 2019

Undo the most recent commit(s) in GIT?



This is a handy GIT tip:

Say you accidentally committed the wrong files to Git in a local branch, but you haven't pushed the commit to the server/origin yet. How can you undo those commits from the local repository and “reset”?

Undoing a commit is a little scary if you don't know how it works. But it's actually amazingly easy if you do understand.

Say you have this, where C is your HEAD and (F) is the state of your files.

   (F)
A-B-C
    ↑
  master

You want to nuke commit C and never see it again. You do this:
git reset --hard HEAD~1

The result is:

 (F)
A-B
  ↑
master

Now B is the HEAD. Because you used --hard, your files are reset to their state at commit B.
Ah, but suppose commit C wasn't a disaster, but just a bit off. You want to undo the commit but keep your changes for a bit of editing before you do a better commit. Starting again from here, with C as your HEAD:

   (F)
A-B-C
    ↑
  master


You can do this, leaving off the --hard:
git reset HEAD~1

In this case the result is:
   (F)
A-B-C
  ↑
master

In both cases, HEAD is just a pointer to the latest commit. When you do a git reset HEAD~1, you tell Git to move the HEAD pointer back one commit. But (unless you use --hard) you leave your files as they were. So now git status shows the changes you had checked into C. You haven't lost a thing!

For the lightest touch, you can even undo your commit but leave your files and your index:
git reset --soft HEAD~1

This not only leaves your files alone, it even leaves your index alone. When you do git status, you'll see that the same files are in the index as before. In fact, right after this command, you could do git commit and you'd be redoing the same commit you just had.
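
For example, a quick way to reword that last commit (a sketch; the commit message is just a placeholder):

git reset --soft HEAD~1
git commit -m "a better commit message"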

One more thing: Suppose you destroy a commit as in the first example, but then discover you needed it after all? Tough luck, right?

Nope, there's still a way to get it back. Type git reflog and you'll see a list of (partial) commit shas (that is, hashes) that you've moved around in. Find the commit you destroyed, and do this:

git checkout -b someNewBranchName shaYouDestroyed

You've now resurrected that commit. Commits don't actually get destroyed in Git for some 90 days, so you can usually go back and rescue one you didn't mean to get rid of.

Monday, November 26, 2018

Deploying node.js Apps using Docker onto ZEIT ▲now

I recently deployed a node.js + react.js Docker-based app onto ZEIT ▲now and here are the walkthroughs I used.



By the way, the project was this:
https://github.com/newbreedofgeek/react-abr-lookup

and the Dockerfile was https://github.com/newbreedofgeek/react-abr-lookup/blob/master/Dockerfile

Step 1:
Dockerizing a Node.js webapp
https://nodejs.org/en/docs/guides/nodejs-docker-webapp/
** When testing Docker locally you will most likely make many mistakes, and here is a good collection of commands you can run to clean up the Docker runtime:
https://www.digitalocean.com/community/tutorials/how-to-remove-docker-images-containers-and-volumes

Step 2:
Deploying to ZEIT ▲now
https://dev.to/grikomsn/how-i-deployed-a-dockerized-nodejs-app-on-zeit-now-358b

Let me know if you have any specific issues in the comments and I’ll help where possible.

Thursday, October 18, 2018

Change GIT Commit Author Name and Email for Specific Commits in History

If you are like me and work from multiple GIT accounts, occasionally you may commit source code using the wrong GIT profile/alias. You will most likely not discover this until you look over your history at some point in the future and realise that some commits have a different Author Name and Author Email.

Fix a specific GIT commit using Interactive Rebase


In this short post I will show you how you can change the Author Name and Author Email from specific commits in your GIT commit history.

Let's assume you have this GIT commit history.

commit 18e31d7cdec72d9d6aba0ef19e5270d14936b511 (HEAD -> master, origin/master)
Author: Mark Paul <another-email@gmail.com>
Date:   Thu Oct 18 15:35:52 2018 +1100

    Committed bug fixes

commit 3546dd57f77508d9a6262af8b862dff23422ba72
Author: Mark Paul <another-email@gmail.com>
Date:   Mon Oct 15 20:56:32 2018 +1100

    init the code

commit e5b47248597e2df98a106f098311afc34f5cc37d
Author: Mark Paul <my-email@gmail.com>
Date:   Mon Oct 15 20:51:22 2018 +1100

In the above commit history, Mark Paul <my-email@gmail.com> is the profile you WANT to use. But you realise that you also have commits using the INCORRECT Mark Paul <another-email@gmail.com> profile.

So you need to update the 3546dd57f77508d9a6262af8b862dff23422ba72 and 18e31d7cdec72d9d6aba0ef19e5270d14936b511 commits to use the Mark Paul <my-email@gmail.com> git profile.

You do this by using GIT's Interactive Rebase feature.

Interactively rebase off a point earlier in the history than the commit you need to modify (git rebase -i <earliercommit>). In the list of commits being rebased, change the text from pick to edit next to the hash of the one you want to modify. Then, when Git prompts you to change the commit, use this:

git commit --amend --author="Author Name <email@address.com>"

Let's see this in action:

In the example above, our commit history is e5b4724-3546dd5-18e31d7 with 18e31d7 as HEAD. To change the Author Name and Author Email of 3546dd5 and 18e31d7, you would:

Specify git rebase -i e5b4724 (use the full commit hash if the short commit hash does not work)

If you need to edit e5b4724, use git rebase -i --root

Change the lines for both 3546dd5 and 18e31d7 from pick to edit

Once the rebase started, it would first pause at 3546dd5

You would git commit --amend --author="Mark Paul <my-email@gmail.com>"

Then git rebase --continue

It would pause again at 18e31d7

Then you would git commit --amend --author="Mark Paul <my-email@gmail.com>" again

Then git rebase --continue

The rebase would complete.

Use git push origin master -f to update your origin with the updated commits.
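
For reference, here is the whole sequence from the steps above in one place (using the example hashes from this post; adjust them for your own history):

git rebase -i e5b4724
# in the editor, change "pick" to "edit" for 3546dd5 and 18e31d7, then save and close

git commit --amend --author="Mark Paul <my-email@gmail.com>"
git rebase --continue

git commit --amend --author="Mark Paul <my-email@gmail.com>"
git rebase --continue

git push origin master -f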


Hope this helps you.

See this for more details on this topic.


Happy Coding / Hacking!

Thursday, October 4, 2018

Moving Dokku Apps Between Physical Servers with No (or Very Little) Downtime


Moving Live Servers without Downtime... Risky my friend!

I built and run a fairly large-scale piece of free cloud software that spans multiple independent servers (using a microservice design pattern), and recently I was trying to optimise the operational management of all my servers. I am a big fan of Dokku and had cloud servers running "microservice apps" deployed via Dokku on both Digital Ocean and AWS EC2. The platforms were fragmented and I wanted to bring all my cloud servers together onto the same cloud platform (Digital Ocean or AWS) to make them easier to manage and to gain operational visibility (centralised monitoring via dashboards where metrics all mean the same thing).

After looking at my options I decided to move all my servers to Digital Ocean. I have always loved their platform, and after they added streamlined (and free) server monitoring it became a no-brainer (AWS EC2 monitoring is not as seamless and the machines cost more as well). For me, Digital Ocean has always had the edge on AWS when it comes to user-friendliness and provides a much better DX (Developer Experience). AWS is a lot more feature-rich for building distributed backend systems, but Digital Ocean gives you a much better experience when it comes to cloud machine setup and management (Droplet vs EC2).

In this tutorial, I will show you how I moved a "live" microservice API running on AWS (an EC2 instance with Dokku installed and the API microservice deployed inside a Docker container) to the same setup in Digital Ocean (a Droplet with Dokku installed and the API microservice deployed inside a Docker container). As it was a live API, I wanted as little downtime as possible (I managed to do it with zero downtime, but as DNS updates are required you may want to schedule some downtime with your users before you attempt this). The API was also served over HTTPS, so I had to bring up the HTTP and HTTPS endpoints as close together as possible (I used the Dokku letsencrypt plugin for this, as shown below).

Docker via Dokku in DigitalOcean


My setup was as follows:
Old Server: AWS - EC2 T2-Micro, Dokku 0.8.0 (older version)
New Server: Digital Ocean Droplet, Dokku 0.12.13 (I used the Digital Ocean one-click app images they already have for Dokku)


💥 💥 Firstly, I need to give you my usual disclaimer for these kinds of risky tutorials :) There might be a much better way to do this, but these are the steps I followed and I got it done with zero downtime. I can't guarantee this will work for you, and attempting this might result in excessive downtime or lost data. Please do this at your own risk!



Now let’s get to it:

1) Sign up for a new Digital Ocean server (a Droplet with Dokku installed). I used the one-click Dokku image they offer. I already had an SSH key set up, as I had other servers with Digital Ocean, so I used the existing key during setup of the new Droplet.

2) Once your server is up, open the Dokku config page (usually found by hitting your new IP in the browser). In the config page, make sure your new server IP is entered in the "hostname" field - do not use the domain from the old server yet. Don't select virtualhost naming for now (although I don't think this will impact you, as you will turn on virtualhost in a step below). Finish this config setup as soon as possible or you risk someone discovering your IP and seeing your key details!

3) SSH into your new server and create a Dokku app with the same name as on the old server:

dokku apps:create my-app
4) Locally, in your GIT-controlled source code for the app, add a new GIT remote for your repo that points to your new server:

git remote add dokku-new dokku@my-new-server-ip:my-app
5) Now push your latest master code to dokku-new. If you have any issues with the deployment, you may want to enable tracing and debugging by SSHing into your new server and running:

dokku trace on
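
For reference, the push itself is just a regular Git push to the dokku-new remote we added in step 4:

git push dokku-new master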
6) One common error that may occur with the push in step 5 is something similar to "pre-receive hook declined". This may happen even though your code was actually deployed successfully: in the case of a node.js app, it attempted to start the app on the new server, but a dependency driven by Environment Variables, like a database or Redis connection, failed (because the Environment Variables that hold the database endpoint or connection details do not exist on the new server yet). That throws an error like the one above, which will make you think the deployment failed.

7) If this happens, then SSH into your box and set all the required Environment Variables. You do this via the command:

dokku config:set my-app varName1=VarValue1 varName2=VarValue2

8) Once the deployment has completed successfully, SSH in again and check that all your Environment Variables are there; you will also notice some new Dokku variables have been added.

9) I then enabled vhost/virtualhost on my new app via the command:

dokku domains:enable my-app
This will restart the app with vhost enabled and give you an IP:80 URL instead of the default IP:RandomPort. Now you can hit the URL in the browser to verify that your API is accessible. For me, this was done by hitting the health-check URL I have on all my microservices, e.g. http://IP/health-check
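
For example, something like this (a sketch; substitute your own server IP and whatever endpoint your app exposes):

curl http://YOUR-NEW-SERVER-IP/health-check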

10) I then added the live domain to the app via the command:

dokku domains:add my-app myapplivedomain.com
After you run the above command you should see a success message that nginx reloaded with your new settings. Confirm this has happened by running this command and checking the output:

dokku domains:report my-app
11) If you restart your app now you should see the domain appear in the Dokku log:
.
.
=====> Application deployed:
http://myapplivedomain.com

12) Now we should kick off the HTTPS certificate part via the dokku-letsencrypt plugin. We won't be able to complete it yet, as the DNS has not been updated to the new IP, but we can start it off now. We get HTTPS via the free Let's Encrypt service. Install the Dokku plugin on your new server via:

sudo dokku plugin:install https://github.com/dokku/dokku-letsencrypt.git

* This article also points to some instructions on this process - https://medium.com/@pimterry/effortlessly-add-https-to-dokku-with-lets-encrypt-900696366890

13) I ran this command to set it up:
dokku config:set --no-restart my-app DOKKU_LETSENCRYPT_EMAIL=your@email.tld
* your@email.tld should be an address you control and monitor

14) Then I ran the command to start up the letsencrypt process:

dokku letsencrypt my-app
15) The above command will most likely fail with an error like:
Did you set correct path in -d myapplivedomain.com:path or --default_root? Is there a warning log entry about unsuccessful self-verification? Are all your domains accessible from the internet? Failing authorizations: https://acme-v01.api.letsencrypt.org/acme/authz/[some-verylong-code-here] Challenge validation has failed, see error log.

16) It's now time to swap the DNS for your domain and point it to the new server's IP. There will be some downtime from now on, as we still have not successfully enabled letsencrypt for HTTPS on your new server (and your downstream apps are probably expecting the HTTPS URLs), but we have no choice: we can't enable letsencrypt until the domain resolves to the new server IP.

17) Swap your DNS A and @ values to the new server IP (This may be different for each domain registrar)

18) At the same time, complete the letsencrypt "acme challenge" referenced in the error message we saw when we tried to enable letsencrypt (I'm not sure this is strictly required, but I did it anyway). Basically, you need to copy the [some-verylong-code-here] from the error message above into a TXT DNS record with the name _acme-challenge. The code goes into the value, like so:

TXT    _acme-challenge    some-verylong-code-here
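
While you wait, one way to check that the new A record has propagated is a quick DNS lookup (a sketch using dig; any DNS lookup tool will do):

dig +short myapplivedomain.com
# should print your new Droplet IP once the change has propagated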
19) After the DNS resolves (which should not take too long), go back to the SSH console and re-run the command that failed previously:

dokku letsencrypt my-app
20) It should now work successfully... and you won't see that error anymore.

21) It's also a good idea to auto-renew your letsencrypt cert, or your HTTPS endpoints may go down. Do that like so:

dokku letsencrypt:cron-job --add
22) Finally, to verify it all went well, restart the Dokku app. It should reload and show both your HTTP and HTTPS endpoint mappings:
.
.
=====> Application deployed:
http://myapplivedomain.com
https://myapplivedomain.com

23) You have now officially swapped over to the new server! 🙌💪💃💕


If this article helped you out and you are keen to give Digital Ocean a try (I highly recommend them) - then use this referrer link to get some free credit
https://m.do.co/c/63904110df0d

You should get $25 (in October 2018 I believe it's actually $100!) and I'll get some credit to run my servers as well :)

All the best and happy coding!

Wednesday, July 11, 2018

Unit Testing Node.js HTTP Network Requests using PassThrough Streams

Do you really need unit testing?

I use the mocha, chai and sinon unit-test stack for my Node.js and frontend JavaScript projects. It's a very powerful and user-friendly stack. Sinon is great for stubbing and spying on your unit-tested code's callbacks and promise resolutions.

If you have ever needed to unit test functions that also make network requests using the Node.js http or https modules, you are faced with some complex logic paths for sinon mocking. As it is a network request, you need to ask yourself "do I really need to hit this API and test its valid response?" or "am I happy to simulate the network request but test the callback logic?"

In your quest to extend the coverage of functions that also make network requests, ideally you may just want to mock the network request but test the request's callback for a valid execution flow.

Of late, I have added nock to the test stack mentioned above. nock is a brilliant tool for completely mocking out your Node.js network requests. It's very declarative and abstracts away all the complexities of setting up stubs and spies.

I will write a separate post on how to use nock, but I wanted to show how I used to unit test network requests prior to nock. This will be useful for people who prefer not to use nock, or who want to keep their unit testing as "vanilla" as possible.

To simulate and mock network requests, I use the very useful stream.PassThrough class provided natively by Node.js. The documentation describes it as:


The stream.PassThrough class is a trivial implementation of a Transform stream that
simply passes the input bytes across to the output. Its purpose is primarily for
examples and testing, but there are some use cases where stream.PassThrough is
useful as a building block for novel sorts of streams.


Here is an example of unit testing a function that includes an https get request, using the mocha, chai, sinon and PassThrough tools. I have provided detailed comments in the code, so I hope that helps explain what is going on.

const https = require('https');

// An example function that has other logic you need unit tested
// ... but you also want to cover the https call as part of your coverage 
function functionWithHttpsCall(apiInput) { 
  // wrap this whole async function to be Promise based
  return new Promise((resolve, reject) => {
    // do something with apiInput, update logic if needed etc

    // make an api call
    const request = https.get(`https://dummyapi.com?giveme=${apiInput}`, (response) => {
      let body = '';

      // construct stream response
      response.on('data', (d) => {
        body += d;
      });

      // stream ended so resolve the promise 
      response.on('end', ()=> {
        resolve(JSON.parse(body));
      });
    });

    request.on('error', (e) => {
      reject(e);
    });
  });
}

We now need to write a unit test for the functionWithHttpsCall function above. We want to test all execution flows in this function to improve our code coverage, so we also want to test the https.get callback response (without actually hitting a live API).

Here is the unit test for this test case:

// using mocha, sinon
const chai = require('chai');
const sinon = require('sinon');
const expect = chai.expect; // use chai.expect assertions
chai.use(require('sinon-chai')); // extend chai with sinon assertions

const https = require('https'); // the core https Node.js module
const { PassThrough } = require('stream'); // PassThrough class

// Unit Tests
describe('My App Tests - functionWithHttpsCall', () => {
  // do this before each test
  beforeEach(() => {
    // stub out calls to https.get
    // due to how npm caches modules, calls to https.get
    // ... from this point onwards will use the sinon stub
    this.get = sinon.stub(https, 'get');
  });

  // clean up after each test
  afterEach(() => {
    // restore after each test
    https.get.restore();
  });

  // begin test config
  let mockedInput = 'authors';
  let expectedOutput = {'authorName': 'john doe'};

  it('should return a valid response when we hit a https call', (done) => {
    // create a fake response stream using the PassThrough utility
    const response = new PassThrough();
    response.write(JSON.stringify(expectedOutput)); // inject the fake response payload
    response.end();

    // create a fake request stream as well (needed below)
    const request = new PassThrough();

    // when the call below to functionWithHttpsCall hits the stub (https.get),
    // ... invoke its callback (argument index 1 of https.get) with the mock response stream
    this.get.callsArgWith(1, response)
            .returns(request); // https.get returns the request stream, so our stub needs to as well

    // unit test the functionWithHttpsCall function
    functionWithHttpsCall(mockedInput)
      .then((actualOutput) => {

        // the actual output (actualOutput) will be the same as the (expectedOutput) variable above
        // ... because we used (expectedOutput) in the response PassThrough above

        expect(actualOutput).to.deep.equal(expectedOutput); // this test will pass

        done();
      });
  });
});


Hope the above makes sense and gives you an idea of how you can use PassThrough to unit test your network calls.

But PassThrough does have its limitations: I have not worked out how to simulate various HTTP status codes (404, 500) or network timeouts. These are the main reasons why I moved over to nock.
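
For a taste of what that looks like, here is a minimal sketch of the same happy-path test written with nock (this assumes the functionWithHttpsCall and chai/expect setup from above, and that the sinon stub on https.get is not active, since nock does its own interception):

const nock = require('nock');

it('should return a valid response when we hit a https call (via nock)', (done) => {
  // intercept the outgoing GET to dummyapi.com and reply with a fake payload
  nock('https://dummyapi.com')
    .get('/')
    .query({ giveme: 'authors' })
    .reply(200, { authorName: 'john doe' });

  functionWithHttpsCall('authors').then((actualOutput) => {
    expect(actualOutput).to.deep.equal({ authorName: 'john doe' });
    done();
  });
});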

Happy coding!

Friday, May 11, 2018

Dokku app deployment fails with a "main: command not found. access denied" error. It's most likely a storage space issue...

If your Dokku app deployment failed with an error similar to this:

/home/dokku/.basher/bash: main: command not found
Access Denied

And you have confirmed that it's not a typo in your command, that your remote server isn't down, that the repo hasn't been deleted, etc... then you have most likely hit this issue because your remote server has run out of space.

You can confirm this by SSH'ing into your remote server and running the "df -h" command. Once you confirm it, there are a few things you can do to release some space so you can redeploy.

The following is highly risky as you have to deal with Docker directly, but if you know what you are doing it should be OK (do these at your own RISK!).

Run:
sudo docker ps

To see the Docker containers that are currently running.

Run:
sudo docker ps -a -q -f status=exited

To list the containers that have exited. You can remove these containers (and later the images behind them) to free space. You will see a list; remove containers using their container IDs, e.g.

sudo docker rm [0fb0724ae3a7] --> this is the containerID
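
If there are a lot of exited containers, you can combine the two commands above to remove them all in one go (a sketch; review the list from the ps command before running it):

sudo docker rm $(sudo docker ps -a -q -f status=exited)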

Now check if you have some free space and try to redeploy.

If you still have no space, you can try deleting some Docker images as well. This is again very risky if you are not sure what is happening, so please do this at your own risk.

Run:
docker images

or Run:
sudo docker images | grep '<none>'

This should show you orphaned images.

Start deleting some images using this command:
sudo docker rmi [afa892371d46]  --> this is the IMAGE ID
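
Similarly, the dangling images can be removed in one go (a sketch using Docker's dangling filter, which matches the <none> images above; double-check the list before running this):

sudo docker rmi $(sudo docker images -q -f dangling=true)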

Now you should have some space for a deployment.

** If you are using Dokku, I've noticed that over time it leaves many Docker containers in a bad state and creates a lot of bloat. At times I go ahead and remove any containers that I feel are invalid containers created by Dokku. The Dokku "app name" is not necessarily what your Docker container name will be, so don't let this confuse you: even if you have no containers that match your Dokku app name, the next time you push a Dokku deploy it will spin up a new container for your app. Again - all this is risky, so don't attempt it unless you are sure.

Hopefully this helps someone. You can read more in these issues raised on Github.
https://github.com/dokku/dokku-mariadb/issues/38
https://github.com/dokku/dokku/issues/120

More dockers commands that are useful:
https://zaiste.net/removing_docker_containers/

Bonus free space tip: it might also help to clean up your Linux box for free space. I was using Ubuntu 16.04.2 and this article gave some great tips on freeing space: https://www.omgubuntu.co.uk/2016/08/5-ways-free-up-space-on-ubuntu

By running sudo apt-get autoremove I was able to free up close to 1 GB!