Understanding Angular $http interceptors

The AngularJS built-in service $http is used to make HTTP requests to a server. More often than not you will find yourself in a situation where you want to run hooks for those HTTP calls, i.e. execute some logic before or after each call: for example, appending an auth token to every API request, or generic handling of HTTP error responses. This is where $http interceptors become quite handy. One more very important use of interceptors is to log HTTP requests made to external APIs, which can be used for analytics.

Although there is not a lot written about interceptors on the documentation site, reading through the code comments makes things much clearer. One way to implement an interceptor is to create a service, implement the required hooks as functions on that service, and then push the service onto the $httpProvider.interceptors array. Alternatively, you can push an anonymous factory function with the required methods onto the interceptors array. There are four types of interceptors: request, response, requestError and responseError. Every interceptor factory should define at least one of these four methods; the rest are optional.

Let's clear things up with an example.

Response Interceptor

A response interceptor function receives the response from the server (or, on failure, the rejection) and must return it, or a promise for it. The example below handles 401 errors from the server and does suitable error handling. If the HTTP request returns a success response, it does nothing. But if there is an error, it checks the status code from the server and, if it is 401, redirects the user to the login page.
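A minimal sketch of such an interceptor, assuming $location for the redirect (the module and factory names here are placeholders):

angular.module('myApp', [])
  .config(function ($httpProvider) {
    $httpProvider.interceptors.push('responseInterceptor');
  })
  .factory('responseInterceptor', function ($q, $location) {
    return {
      // Success responses pass through untouched.
      response: function (response) {
        return response;
      },
      // On error, redirect to the login page for 401s and propagate
      // the rejection so callers can still handle it themselves.
      responseError: function (rejection) {
        if (rejection.status === 401) {
          $location.path('/login');
        }
        return $q.reject(rejection);
      }
    };
  });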

Request Interceptor

A request interceptor function takes in a config object and returns a modified config object. In the example below we are going to add an auth token parameter to every request made to the API server.
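Along the same lines, a sketch of the request hook (the parameter name and token value are made up for illustration):

angular.module('myApp')
  .config(function ($httpProvider) {
    $httpProvider.interceptors.push('requestInterceptor');
  })
  .factory('requestInterceptor', function () {
    return {
      request: function (config) {
        // Append the auth token as a query parameter on every request.
        config.params = config.params || {};
        config.params.token = 'secret-token';
        return config;
      }
    };
  });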

Although the examples above implement two separate factories for simplicity, they can be combined into a single factory, and the interceptor then looks like the one below.
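A combined version might look like this (again with placeholder names and token value):

angular.module('myApp')
  .config(function ($httpProvider) {
    $httpProvider.interceptors.push('httpInterceptor');
  })
  .factory('httpInterceptor', function ($q, $location) {
    return {
      // Runs before every request goes out.
      request: function (config) {
        config.params = config.params || {};
        config.params.token = 'secret-token';
        return config;
      },
      // Runs whenever a response comes back as an error.
      responseError: function (rejection) {
        if (rejection.status === 401) {
          $location.path('/login');
        }
        return $q.reject(rejection);
      }
    };
  });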

Final gist borrowed from https://gist.github.com/gnomeontherun/5678505


Must-have bash aliases for git command-line users

alias gb='git branch'
alias gc='git commit'
alias ga='git add'
alias gco='git checkout'
alias gd='git diff'
alias gp='git push'
alias pp='git pull && git push'
alias gst='git status'

My favorite here is 'pp', pull and push in a single command. Quite handy, and it saves a lot of time. This can obviously be improved on further, which I will do as I learn more bash scripting. For now I have decided to build on my bashrc and vimrc to set up a more productive environment.


Thanks to this link; I have modified a few things to suit me.

Why I use open source software

I am often told that Mac is better than Linux, or that Chrome is better than Firefox, and that I am a fool to use the latter (I refused a Mac in favor of a Linux-powered PC at my workplace). My reason for using them is something very intrinsic. I believe open source has the potential to affect people's lives, to benefit humanity in general rather than only the people who can afford it. So using these tools over better (not all are better) alternatives is my silent support for open source, a kind of motivation to continue the awesome task of making the world a better place.

PS: On a side note, I think open hardware and a REAL open mobile OS are going to put a dent in the world.

Another project and lots of new learning

The project I am currently working on has reached its final stages, and this time I decided to write down the major learnings and the shortcomings I could improve on next time.

1. Git workflows: This time around we moved from the highly inefficient every_one_push_to_master approach (that's not a workflow at all :P) to a pull request workflow. Every developer forks the code and maintains it on his own repo, opening a pull request whenever a feature is complete. The central repository that pull requests were made against had two branches, development and master: development is where the development build code resided, and master held the production code. The advantages of following a pull request workflow are:

– It reduces the chances of rubbish code being pushed into the build branch, as the developer looks over the code changes before every request.

– Also, with one person having permission to merge the pull requests (in our case it was the team lead), code gets reviewed on the fly.

Future scope: proper usage of tags in build versions.

2. Unit tests:

Unit testing must be one of the most underrated practices in software development. Trying to explain to a "non-believer" that unit tests are a must is almost impossible; "it's rubbish" was my own thought before actually trying out unit tests on my code. Catching an error in one part of the code caused by a change somewhere else is probably the biggest benefit. Tests also give you confidence in the code you have written, and in the entire project. Now you can hear me saying "yeah, the module is done and unit tested, and I am sure it will work".
Future scope: try to write test-driven code rather than code first and test later.
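For instance, a minimal Jasmine spec for the request interceptor sketched earlier might look like this (it relies on angular-mocks and reuses the placeholder module name and token value from that sketch):

describe('requestInterceptor', function () {
  var $http, $httpBackend;

  beforeEach(module('myApp'));
  beforeEach(inject(function (_$http_, _$httpBackend_) {
    $http = _$http_;
    $httpBackend = _$httpBackend_;
  }));

  it('appends the auth token to every outgoing request', function () {
    // The expectation fails the test if the token parameter is missing.
    $httpBackend.expectGET('/api/items?token=secret-token').respond(200, []);
    $http.get('/api/items');
    $httpBackend.flush();
    $httpBackend.verifyNoOutstandingExpectation();
  });
});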

3. Documenting:
Documenting code and APIs is probably the hardest job ever. It is a sucky job, but the fact is that someone has to do it. We did it in the worst way possible: we used Google Sites for our docs 😛 and ended up with a blob of unmaintainable crap. As of now we are brainstorming a more elegant solution, probably something using Jekyll, so that the documentation remains part of the code.
4. Refactoring:

Refactoring is a lie: you will either have no time to do it, or the code will be running a feature so critical that changing it is dangerous. Working on this project for around 3 months, I refactored only one class, and that by putting in extra time at work.

Moral: the only way to refactor your code is to write good code in the first place.
5. Murphy's Law

Things will go wrong, and they will go horribly wrong; the important thing is to foresee this early and be prepared for it.

Extracting semantic data from Wikipedia infoboxes: DBpedia

One thing quite amazing about Wikipedia is the huge amount of information it provides. At the time of writing, the data dump of Wikipedia articles stood at 7.8 GB compressed and 34.8 GB uncompressed, and that is without any images. So much data, and all of it available for free. But Wikipedia has one problem: due to the mechanical way entries are added, scraping Wikipedia for any data is a serious pain (at least for me). Then the other day I came across DBpedia, an effort to make the data (information) in Wikipedia infoboxes available through a free, queryable interface. The data is in RDF format, and you can write semantic queries, more like questions asked of Wikipedia. It gets better: the query results can easily be exported as XML/JSON. At first glance this might seem completely trivial, and in a way it is. But the good news is that Wikipedia is expanding and DBpedia is getting better. Imagine a world where one day non-programmers become content producers rather than mere consumers. That is what I believe DBpedia will do to the internet. To top it off, there are amazing tools like Exhibit which make visualizing all the data easy and fun.
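A minimal sketch of what querying it can look like from the browser; the SPARQL query itself is an illustrative assumption (it asks for cities with more than five million people):

// Build an illustrative SPARQL query for the public DBpedia endpoint.
var query =
  'PREFIX dbo: <http://dbpedia.org/ontology/>\n' +
  'PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>\n' +
  'SELECT ?name ?population WHERE {\n' +
  '  ?city a dbo:City ;\n' +
  '        rdfs:label ?name ;\n' +
  '        dbo:populationTotal ?population .\n' +
  '  FILTER (lang(?name) = "en" && ?population > 5000000)\n' +
  '} LIMIT 10';

// Ask for the results as JSON rather than raw RDF/XML.
var url = 'https://dbpedia.org/sparql' +
  '?query=' + encodeURIComponent(query) +
  '&format=' + encodeURIComponent('application/sparql-results+json');

var xhr = new XMLHttpRequest();
xhr.open('GET', url);
xhr.onload = function () {
  // Each binding is one row of the result table.
  JSON.parse(xhr.responseText).results.bindings.forEach(function (row) {
    console.log(row.name.value + ': ' + row.population.value);
  });
};
xhr.send();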