Thursday, April 20, 2017

Move Working Code Changes to new GIT Branch without Stashing



If you are like me, you will probably find yourself working on the "master" branch of your project most of the time (when you are the sole contributor on a project, of course...). Often I start working on an issue that has been raised by my users, confident that I can fix it quickly in one go. But I soon discover that it is far more complicated than initially thought, and I regret not creating a new "issue branch" to work off (so my master branch stays clean and the commits are atomic to the features/issues I'm working on).

I often face this workflow problem when I'm working on my open source project React Stepzilla, for example.

Here is what I do now when I come across this GIT workflow problem:

  • Locally on my computer I've checked out "master" and I'm working on a new GitHub issue, with id 27 for example
  • I modify multiple files trying to fix the issue
  • I then discover that I'm not going to be able to complete the fix easily and regret not creating a new issue branch "issue-27"
  • Running "git status" shows me all my changes so far to the "master" branch
  • I then run "git checkout -b issue-27". This creates a new branch called "issue-27" and carries over all my uncommitted local changes
  • I'm now in a new local branch called "issue-27"
  • I continue my work and stop for the moment; I then stage my code via "git add -A" and commit it using "git commit -m 'Working on #27'" (putting "#27" here makes the commit show up as a reference in the GitHub issue, which is nice...)
  • I then push the new branch to origin and track it "git push -u origin issue-27"
  • If you checkout master now and run "git status", you will notice that the changes are no longer "pending" there and your master is clean.
  • Once you are done with your work on the "issue-27" branch, you can merge the changes into "master" via a GitHub "pull request" or locally, and then delete the "issue-27" branch (the full command sequence is sketched below).
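Putting it all together, here is the rough command sequence (a sketch only; the branch name "issue-27" and the commit message are just examples):

# on master, with uncommitted changes you wish you had made on a branch
git status                          # review the pending changes
git checkout -b issue-27            # new branch; uncommitted changes come along with you
git add -A
git commit -m "Working on #27"      # "#27" links the commit to the GitHub issue
git push -u origin issue-27         # publish the branch and track it

# later, once "issue-27" has been merged into master (e.g. via a pull request)
git branch -d issue-27              # remove the local branch
git push origin --delete issue-27   # remove the remote branch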


Hope this helps you guys; also, here is a good Stack Overflow answer that tells you more.

Monday, April 3, 2017

React Stateless Components used for Routes don't work with Hot Module Replacement

This article is relevant for: 
- react 15.4.2
- react-router 3.0.0
- webpack 2.2.1
It may or may not work for other versions.



I've had this issue many times in the past and yet I can't seem to find a proper solution. I'm documenting it here so it may be of help to others who have hit this roadblock and drunk themselves silly.

This GitHub issue seems to relate to this as well...

Say you are using React, React-Router and Hot Module Replacement via webpack.

You may have a combination of React components that inherit from React.Component and Pure (stateless functional) Components.

So a React class component looks like this (e.g. ProfileComponent):

import React, { Component } from 'react';

class Profile extends Component {
  render() {
    return (
      <div>
        <h1>Profile</h1>
      </div>
    );
  }
}

export default Profile;

And a "Pure" React component (e.g. DashboardComponent):

import React from 'react';

export default () => {
  return (
    <header>
      <h1>Dashboard</h1>
    </header>
  );
};

And let's say you configure React Router to expose these components based on these routes:

const routes = [
  {
    path: '/',
    component: App,
    indexRoute: { component: HomeComponent },
    childRoutes: [
      {
        path: '/profile',
        component: ProfileComponent
      },
      {
        path: '/dashboard',
        component: DashboardComponent
      }
    ]
  }
];

ReactDOM.render(
  <Provider store={store}>
    <Router routes={routes} history={history} />
  </Provider>,
  document.getElementById('root')
);


In the above example, Hot Module Replacement will NOT work for "DashboardComponent", but it will work fine for "ProfileComponent".

This actually seems to be a well-known issue with no obvious fix.

The issue seems to be related to the "first level" component specified for a React Router Route: this "first level" component cannot be a Pure Component!

But nested 2nd, 3rd or Nth lower-level components under a Route can be Pure, and Hot Module Replacement will work fine.

e.g. let's say this ProfileComponent actually nests a Pure Component called <Button>:

import React, { Component } from 'react';
import Button from './Button'; // path assumed for illustration

class Profile extends Component {
  render() {
    return (
      <div>
        <h1>Profile</h1>
        <Button />
      </div>
    );
  }
}

And Button was this:

import React from 'react';

export default () => {
  return (
    <div>
      <h1>Button</h1>
    </div>
  );
};

You can update anything in <Button> and it will work with Hot Module Replacement.

So in summary, until this issue is solved (if it has been, please send me a comment), make sure that your "first level" Route Components are NOT Pure....
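For example, the workaround for the DashboardComponent above is simply to wrap the same markup in a class (a minimal sketch of the idea, nothing more):

import React, { Component } from 'react';

// Same output as the pure version, but written as a class so Hot Module
// Replacement works when it is used as a "first level" route component.
class Dashboard extends Component {
  render() {
    return (
      <header>
        <h1>Dashboard</h1>
      </header>
    );
  }
}

export default Dashboard;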

Annoying? Yes.... and I feel your pain. 

Wednesday, February 15, 2017

Atomic UI / API Pattern



One of the main issues that UI Products face when dealing with a MicroService backend is the need to coordinate API calls and successfully commit all changes (when made by the user) at an atomic level. It's tricky to do this as individual API calls can fail or be slow to respond, and the UI will probably maintain an interim Data State/Store (basically an object in memory) which then needs to be remapped to the API schema and saved over REST.

Let's look at an example:
  • Our UI Screen needs to render something (triggered by a client-side Route Change on the Single Page App - SPA)
  • The screen requires data from multiple MicroServices, as the Screen is composed of different object elements, so it makes multiple MicroService calls to gather all the data needed to generate the Screen Content and Input Forms
  • User interacts with the data (adds something, edits something etc) - the UI App has to maintain these changes locally using an interim, browser based Data State/Store
  • User commits to publishing the data in the screen (which they have edited via a form) and "Saves" it. Business Logic in the UI needs to coordinate and remap the local Data State/Store to the various MicroService endpoints and SAVE/PUT

There are multiple failure points to this approach:
  • If the local Data State/Store is not handled with immutability then there will possibly be side-effects to that data (caused by bad coding)
  • On saving, some API calls may fail whilst some may succeed. This will cause a disconnect in the UI level as to what the server knows to be the correct data and what the UI believes to be the correct data (as the local UI will be bound to the interim data state)

One approach around this is to follow an atomic API -> UI relationship.

Where ONE Screen gets all its content from ONE API and saves back to ONE API.
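As a rough illustration only (the endpoint names, service URLs and framework are hypothetical, not something from my actual stack), the "one screen, one API" idea usually means putting a thin aggregation layer in front of the MicroServices:

// Hypothetical screen-level endpoint (Node/Express style) that fronts the microservices.
const express = require('express');
const fetch = require('node-fetch');

const app = express();
app.use(express.json());

// GET: everything the "profile" screen needs, in a single response
app.get('/api/screens/profile/:id', async (req, res) => {
  const [user, prefs] = await Promise.all([
    fetch(`http://user-service/users/${req.params.id}`).then(r => r.json()),
    fetch(`http://prefs-service/prefs/${req.params.id}`).then(r => r.json())
  ]);
  res.json({ user, prefs });
});

// PUT: the whole screen saves back through the same endpoint, so this layer
// (not the browser) coordinates the writes to the underlying microservices
app.put('/api/screens/profile/:id', async (req, res) => {
  // fan req.body back out to the user and prefs services here,
  // then report a single success/failure to the UI
  res.json({ ok: true });
});

app.listen(3000);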


I'm not sure if there is a term that describes this pattern (if there is please correct me in the comments), but I'm calling this the Atomic UI/API Pattern.

This seems to be a common practice followed by the large UI Products in the world (Netflix, Facebook etc.) using tools such as GraphQL.

I'll write more on this soon...

Friday, December 16, 2016

Apple Mac Webcam Camera not working? Here is a Quick Fix



I had to take an urgent video call today on my Mac and at the last moment found that my built-in webcam was not working. In the browser, the video plugin that displays the webcam feed was showing "Camera not found", and in other programs like Skype the webcam feed was just showing a black screen.

It seems a restart would fix it (based on my reading online), but I needed a quicker fix and didn't want to restart the machine as I had some tasks running which I could not stop. I then found the command below, which fixed it immediately.

sudo killall VDCAssistant

Just open your "terminal", type that in and hit enter (you will have to enter your admin password).

Hopefully that will fix it for you too! 

Sunday, November 27, 2016

Beware - forever list vs sudo forever list are not the same

A few days back I noticed that my Pub/Sub Listener (a node.js script that listens for events and routes data to my Datastore) for Google Cloud Platform (GCP) was behaving badly. I had recently deployed some updated Listener logic which was being bypassed altogether.

My Pub/Sub Listener was "kept alive" using the Forever npm module, and as I was working on a GCP Compute Engine, I was using sudo to run all my commands.

So I launched my Pub/Sub Listener using:
sudo forever app.js

To debug the issue, I ran the following to stop all my Forever scripts:
sudo forever stopall

And verified this by doing:
sudo forever list

It showed an empty list. So what was wrong? It seemed as though Forever was running some hidden processes and my Pub/Sub Listener was still running.

After a bit of digging around, it turned out that sometime in the past I had launched my Pub/Sub Listener using:
forever app.js

So what was the problem?

Well, my scripts were running under both the root user and my own user, and were essentially duplicated.

So to fix this, just run stopall under your own (non-sudo) user account as well:
forever stopall

And make sure you only run your Forever script using the correct user permissions.
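A quick way to catch this in the future is to check both lists whenever things look odd (the list is per-user, so both commands matter):

forever list        # processes started by your own user
sudo forever list   # processes started via sudo (i.e. by root)

If the same script shows up in both lists, you have duplicates running under two different users.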

Happy Coding!


Thursday, June 30, 2016

Pushing a Dokku App from Multiple Computers and Fixing the "Could not read from remote repository" Error

I use Dokku to push the multiple Micro Services that power my blog www.wisdomtoinspire.com (which are hosted on a Digital Ocean server).

Dokku uses Docker and is a free, open source Heroku alternative which makes a Microservices architecture possible.

Initially, I worked mostly on a single computer, which I used to set up Dokku on Digital Ocean and set up my initial SSH keys etc.

Recently I wanted to push a new Micro Service (which is deployed via Dokku on my Digital Ocean machine) from a different computer, but I ran into multiple issues.

  • I assumed that all I had to do was clone the Git repo, set my Dokku remote (after creating a Dokku App on the server) and then push via "git push dokku master" (that assumed setup is sketched just after this list)
  • But I could not access the server. So I created a new SSH key with the same email I used previously when I set up the server and stored it in "id_rsa.pub"
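For reference, the remote setup I assumed would be enough looks roughly like this (a sketch; "appname" and the server address are placeholders for your own values):

git clone <your-repo-url>                              # the app's source
cd appname
git remote add dokku dokku@yourdokkuinstance:appname   # the Dokku app created on the server
git push dokku master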

I then SSHed into my Digital Ocean machine, manually copied the new SSH key into ~/.ssh/authorized_keys, and then assumed I needed to copy it into /home/dokku/.ssh/authorized_keys as Dokku might use it. I also switched to the "dokku" user (using "su dokku") and added the key to "~/.ssh/authorized_keys"

Assuming this would fix the issues I was having trying to push my app to the Dokku remote, I pushed via "git push dokku master"

And kept getting this error:
fatal: 'appname' does not appear to be a git repository
fatal: Could not read from remote repository.

I then dug around and came across this issue on Dokku's GitHub:
https://github.com/dokku/dokku/issues/1608 (Gitreceive doesn't work)

This comment stood out:
"Guess there was a bug in how you initially setup your keys. The dokku user is meant to be managed solely by sshcommand, and you aren't supposed to "ssh" in as that user, hence why things seemed to break for you."

Basically, it means that I should not try to add new SSH keys the way I did. Instead, I should do it remotely from my machine as mentioned in this article: https://www.digitalocean.com/community/questions/dokku-add-new-ssh-key

i.e. I undid all those keys I had manually copied onto my Digital Ocean server, and from my local computer I ran this:

cat /path/to/public_key | ssh root@yourdokkuinstance "sudo sshcommand acl-add dokku [description]"

Read more here (in the "sshcommand" part):
http://off-the-stack.moorman.nu/2013-11-23-how-dokku-works.html

Now when you try "git push dokku master" it should start pushing your app to Digital Ocean using Dokku.

But I then ran into this issue:

 ! [remote rejected] master -> master (pre-receive hook declined)
error: failed to push some refs to 'dokku@wisdomtoinspire.com:thumby'

:(

But I knew the above "pre-receive hook declined" usually appears when something goes wrong with the install and launch via npm. So I just enabled Dokku tracing with "dokku trace on" (http://dokku.viewdocs.io/dokku/troubleshooting/), fixed all my issues, and my Service was good to go!
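If you hit the same "pre-receive hook declined" error, the debugging loop is roughly this (the trace commands are run on the Dokku server itself):

dokku trace on          # on the server: verbose output for every Dokku action
git push dokku master   # retry the push from your computer and read the full log
dokku trace off         # turn tracing back off once the problem is found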

Hope this helps someone.

Happy Coding!


Wednesday, June 1, 2016

Undo/Reset last GIT Commit Locally and Remotely

If you are like me, you may fork some project, make a bunch of changes, and commit to your local and remote repositories, only to discover that you pushed changes that should not have gone in.

Then here is a quick way to reset back to a specific commit locally and then push that to the remote branch.


Keep in mind that by doing this you will lose your local work (i.e. the work you want to undo), so only do this if you don't care about the work you did before you pushed the new commit.

First, find out the commit SHA you want to reset to. In GitHub you can get this like so:

Grab the SHA of the commit you want to go back to



In your command line, run these commands one by one:
git reset --hard 0415b7a2bf2517f37f662e063ffee36706554d8f
git push --force

The reset syntax is basically:
git reset --hard <target-commit-SHA> (grab the full SHA from the commit, as shown above)
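If a bare "git push --force" complains about not knowing what to push, you can name the remote and branch explicitly (assuming "origin" and "master" here):

git push --force origin master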


Your repos (local and remote) should now have gone back to the point you wanted them to.

Use at your own risk!! :)
