This article explains how to perform and test Ajax requests in a React
application using Redux and Axios, a promise-based HTTP client.
I like Axios because of the way you can centrally configure the common behaviour
for the requests your front-end needs to make and the responses it receives.
To take one example, it has the concept of interceptors
that allow you to write a single function to handle responses such
as a 401 Unauthorized or a 500 error.
When using Axios I like to create a utility ‘Service’ class to set up the centralised
config and ensure all Ajax requests are handled consistently.
It also means calls to Axios are not sprinkled all over the codebase, so I could
swap out Axios for an alternative library and I should only need to update this
one wrapper class.
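As a rough sketch (the class shape and method names here are my own, not from any real codebase), such a wrapper might look like the following. The HTTP client is injected so the example stands alone; in practice it would be an axios instance created with axios.create():

```javascript
// Hypothetical Service wrapper around an injected HTTP client.
// In real code: new Service(axios.create({ baseURL: '/api' }))
class Service {
  constructor(client) {
    this.client = client;
    // Central place for shared config -- with axios you would attach
    // interceptors here, e.g.:
    // this.client.interceptors.response.use(onSuccess, onError);
  }

  get(url) {
    return this.client.get(url).then((response) => response.data);
  }

  post(url, data) {
    return this.client.post(url, data).then((response) => response.data);
  }
}
```

Because every request flows through this one class, swapping Axios for another library only means changing the injected client.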
In a Redux application, action creators are pure functions and thus ‘side-effect free’. So in order to perform Ajax requests you will need middleware such as ‘redux-thunk’. An alternative, which I’ve not used but have heard good things about, is redux-loop.
Redux Thunk means you can return a function from your action creator to delay the dispatch,
or to dispatch only if a condition is met (such as the response status being 200).
Once Redux Thunk is installed, you can return a function from your action, as shown by
saveResource in the example below:
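A minimal sketch of such a thunk (all names hypothetical; service stands in for the Axios wrapper – anything with a post method that returns a promise):

```javascript
// Hypothetical thunk action creator: instead of a plain action object it
// returns a function, which redux-thunk will later invoke with `dispatch`.
function saveResource(service, resource) {
  return function (dispatch) {
    return service.post('/resources', resource).then((response) => {
      if (response.status === 200) {        // only dispatch on success
        dispatch({ type: 'SAVE_RESOURCE_SUCCESS', payload: response.data });
      }
    });
  };
}
```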
Then, for any containers in which you want to use the action, you need to pass the dispatch through to the function in mapDispatchToProps. See saveResource below:
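In outline (names hypothetical, with saveResource stubbed as a plain action creator so the snippet stands alone):

```javascript
// Stub action creator, standing in for the real thunk import.
const saveResource = (resource) => ({ type: 'SAVE_RESOURCE', resource });

// mapDispatchToProps receives the store's dispatch and returns the extra
// props handed to the wrapped component.
function mapDispatchToProps(dispatch) {
  return {
    saveResource: (resource) => dispatch(saveResource(resource)),
  };
}

// The container would then be created with react-redux:
// export default connect(null, mapDispatchToProps)(ResourceForm);
```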
To test the action you can use standard jasmine spy functions:
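The shape of that spec, with a tiny hand-rolled spy standing in for jasmine.createSpy('dispatch') so the sketch runs standalone (in a real spec you would assert with expect(dispatch).toHaveBeenCalledWith(...)):

```javascript
// Minimal recording spy, mimicking what jasmine.createSpy provides.
function createSpy() {
  const spy = (...args) => { spy.calls.push(args); };
  spy.calls = [];
  return spy;
}

// A minimal thunk under test (stubbed; the real one calls the Service class).
const saveResource = (resource) => (dispatch) =>
  Promise.resolve(resource).then((data) =>
    dispatch({ type: 'SAVE_RESOURCE_SUCCESS', payload: data }));

const dispatch = createSpy();
saveResource({ id: 1 })(dispatch).then(() => {
  // Assert the spy was called with a correctly formed action.
  console.assert(dispatch.calls[0][0].type === 'SAVE_RESOURCE_SUCCESS');
});
```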
Then finally, the Service class itself needs testing, and there are a few bits to go through here. Firstly, I found that when running the specs
I needed to add babel-polyfill for Promises. I’m using the Karma test runner, so I needed to add 'node_modules/babel-polyfill/dist/polyfill.js' to the files array in karma.conf.js.
Secondly at the top of the spec you’ll notice that jasmine-ajax is imported. This is the official plugin for faking Ajax requests in your Jasmine specs. What I like about this
is that it has no knowledge of Axios. In the beforeEach it is turned on and
then all Ajax requests triggered in the specs are captured and can be inspected.
Due to the asynchronous nature of the promise, I use setTimeout to grab the
most recent request and check it was formed correctly. To test the callback
I use the then function on the promise under test.
A small, but important, detail is the call to done() – this ensures the test
does not exit early and waits until the asynchronous part of the test is complete.
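Stripped of the jasmine-ajax specifics, the pattern the spec follows can be sketched in plain JavaScript (the requests array and fakeGet are hand-rolled stand-ins for jasmine.Ajax’s captured requests and the real Service call):

```javascript
const requests = [];                            // captured "Ajax" requests

function fakeGet(url) {
  requests.push({ url });                       // request is captured...
  return Promise.resolve({ status: 200 });      // ...and resolves like Axios
}

// The shape of the spec body: `done` is the async-completion callback the
// test runner provides.
function runSpec(done) {
  fakeGet('/resources/1').then((response) => {
    console.assert(response.status === 200, 'callback received the response');
    done();                                     // the test waits until here
  });
  // Due to the promise's asynchrony, inspect the most recent request in a
  // setTimeout, after the current tick.
  setTimeout(() => {
    const mostRecent = requests[requests.length - 1];
    console.assert(mostRecent.url === '/resources/1', 'request formed correctly');
  }, 0);
}

runSpec(() => console.log('spec complete'));
```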
And that’s it! There’s a lot of code there, but once the Service class is set up it
should remain pretty much unchanged. Adding additional Ajax requests to your React application’s actions should be fairly straightforward.
Capybara is a great way of performing
integration testing of web applications. Typically it is used with its default
rack_test driver, which is fast but cannot execute JavaScript and is unable to
access HTTP resources outside of your Rack application.
Enter PhantomJS – a headless implementation of WebKit. The
way to get the two talking to each other is via the Poltergeist gem.
When accepting file uploads from users of your application, you should always
check the file to ensure malicious content isn’t permitted.
If you have a simple Rails file_field in a multipart form, you’ll get an ActionDispatch::Http::UploadedFile. This object wraps the temporary uploaded file and has methods
like content_type and original_filename.
At first glance it sounds sensible to check the extension (determined from original_filename), along with the content_type, against a whitelist of allowed
values. However, if a pesky user has renamed the file extension before uploading
the file, your whitelist will not help you, as even content_type will report the
type implied by the renamed extension – it is taken from what the browser supplies,
not from the file’s contents.
This is why checking the mime type of the file’s actual content is important, but this is not
something Ruby does natively. There are a few Ruby gems around that specifically
handle this problem, but the simplest solution I found, without requiring any third-party
dependencies, was to use the underlying operating system and execute the
file command via backticks:
`file -b --mime-type '/path/to/file.txt'`.strip
This works on both Linux and OS X and will return the mime type such as
‘text/plain’, which can then be used as part of your validation.
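Wrapped up as a small helper (the method name here is my own), with the path escaped so a user-supplied filename can’t inject shell commands:

```ruby
require 'shellwords'

# Hypothetical helper: ask the OS's `file` utility for the mime type based
# on the file's actual content, not its name.
# Shellwords.escape guards against shell injection via user-supplied paths.
def detected_mime_type(path)
  `file -b --mime-type #{Shellwords.escape(path)}`.strip
end

# e.g. as part of a validation:
# ALLOWED_TYPES = %w[image/png image/jpeg]
# ALLOWED_TYPES.include?(detected_mime_type(upload.path))
```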
Functional programming has piqued my interest for a while – it appears to be reaching
a tipping point with the rise of ReactJS, Redux, Elm and ImmutableJS.
Elixir is also a functional programming language that’s
attracting a lot of the Ruby community.
As an object-orientated Rubyist, I’ve been exposed to some functional programming
ideas through Blocks, Procs and Lambdas but when programming in Ruby you’re typically
mutating state all over the place.
Pure functional programming works with immutable state.
Although I didn’t set out to specifically learn Scala, I decided to take the
course because the reviews from fellow programmers
highly recommended it for learning Functional Programming. I think they were right – it’s
a very well put together course and it’s also really tough.
It completely challenges the way you think when you are used to writing imperative code.
The course is definitely less about learning Scala and more focussed on the fundamentals
of functional programming. Interestingly Scala actually allows both the worlds of
functional and object-orientated to co-exist.
Each week there’s an assignment to complete, which you can upload directly
from your IDE – an automated test suite then grades your submission.
Side effect free
So one of the reasons functional programming is gaining popularity is that one of
its main cited benefits is code that’s easier to reason about, due to being
side-effect free.
It’s this immutability that is at the core of React & Redux, especially for performance.
When a component’s props or state change, React decides whether an actual DOM update
is necessary. By using immutable state, it’s very easy to track if an object
has changed. The React documentation has a really good detailed explanation on Optimizing Performance.
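The idea in miniature – after an immutable update, “has it changed?” is just a reference comparison:

```javascript
// With immutable updates, change detection is a cheap reference comparison
// rather than a deep equality walk -- the heart of React's shallow checks.
const state = { todos: ['write post'] };

// "Updating" produces new objects for the changed parts...
const next = { ...state, todos: [...state.todos, 'publish'] };

console.log(next !== state);             // true -> something changed
console.log(next.todos !== state.todos); // true -> todos changed
console.log(state.todos.length);         // 1 -> the old state is untouched
```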
Functional programming languages are often talked about as being highly performant, but
what does this mean?
The future trend in computing appears to be more cores, and one way to make the best
use of the hardware is to run code in parallel. This is where that ‘side-effect free’
code comes into play, as it makes code a lot easier to run in parallel.
Functional programming is certainly a completely different mindset for those of
us used to object-oriented programs. It will be really interesting to see if it
does become the mainstream approach to building web applications. I think I might
start trying to apply some functional programming in Ruby and see how I get on.
A container component is concerned with how things work. This typically involves
fetching data, watching the store for updates, and updating the store itself by
dispatching actions.
Essentially, a container component wraps a presentational component to add
behaviour to it.
What does a container component look like?
The container component is created via the connect() function provided by React Redux.
You can pass in additional functions to control how you want the state and the
actions mapped to the props. These are mapStateToProps() and mapDispatchToProps().
In the example above, I’m using mapDispatchToProps() to set up the addTodo prop
that will dispatch the addTodo action we’ve imported. How this is used is entirely
down to the AddTodoForm presentational component and is outside the responsibility
of this container.
Note the first argument to connect() is null because I’m not using mapStateToProps(), but
the principle is the same as for mapDispatchToProps().
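In outline, with addTodo stubbed as a plain action creator so the snippet stands alone (the real container would import it and call connect from react-redux):

```javascript
// Stub action creator, standing in for the imported addTodo action.
const addTodo = (text) => ({ type: 'ADD_TODO', text });

// The extra props handed to the wrapped AddTodoForm component.
function mapDispatchToProps(dispatch) {
  return {
    addTodo: (text) => dispatch(addTodo(text)),
  };
}

// export default connect(null, mapDispatchToProps)(AddTodoForm);
```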
So how is a container component tested?
Container components should be fairly lightweight. There seems little point testing
the connect() function, as this is not our code. But we do care that the props are
going into our wrapped AddTodoForm component correctly.
So to test this component, all I’m interested in is that the addTodo prop has
been set up to dispatch the required action to the store.
The mock store and the Provider are required to render the connected component.
After that, I’m just reaching into the component to grab the addTodo prop and calling
the function. I’m spying on the dispatch method on the store, which means I can check
it’s called with a correctly formed action.
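Stripped of the rendering details, the essence of that spec is the following (hand-rolled stand-ins for the mock store and jasmine’s spyOn, so the sketch runs standalone):

```javascript
// Record every dispatch, as spyOn(store, 'dispatch') would.
const dispatched = [];
const store = { dispatch: (action) => dispatched.push(action) };

// What the connected component's addTodo prop boils down to:
const props = { addTodo: (text) => store.dispatch({ type: 'ADD_TODO', text }) };

props.addTodo('buy milk');  // "reach into the component" and call the prop
console.assert(dispatched[0].type === 'ADD_TODO');
console.assert(dispatched[0].text === 'buy milk');
```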
My test for the presentational component AddTodoForm will take care of checking
that the prop is actually triggered when it’s supposed to.
Rebasing a git branch is a great way to maintain a clean commit history, and it’s my
preferred way of getting changes into a feature branch (instead of merging). This
blog post will explain how I use it and how I make it safer to use.
Rebasing is dangerous because you are re-writing the branch history!
Imagine someone else has pulled down your branch and then you re-write history –
it will really hurt when it comes to combining both your changes.
If your feature branch will be worked on by multiple people, I would strongly urge
you do not rebase.
The git book does a great job of describing rebasing in detail,
including the pitfalls of rebasing public branches and what to do if you
get really stuck after rebasing.
My workflow
For most projects I work on, feature branches are typically only contributed to
by one person – so rebasing is ok.
My workflow for creating a feature branch from master is something like:
$ git checkout -b feature/new_thing
... I make various commits for feature related changes
... Meanwhile, master is also updated
$ git fetch
$ git pull --rebase origin master
This effectively re-writes the history of feature/new_thing to place my commits
after those made in master. There’s no merge commit, so it’s really neat and tidy.
I like the idea of opening a Pull Request as soon as I start work on a feature.
I’ll then make ‘work in progress’ commits as I go and push them up to the remote.
This gives people an early chance to review what I’m doing and lessens the chance
that my work might get lost if my computer blows up and my backup fails :)
But these commits can be messy and include changes I later revert, as I work on
the feature. I think it’s good practice to try and keep commits as logical and
concise as is reasonably possible. Ideally they will read like documentation for
the feature so that git log is like a story.
So during development, I won’t get too hung up on the commits. But eventually I
will use interactive rebase to re-order or
squash my commits into shape.
My interactive rebase workflow is typically:
$ git log
... Find the sha for the last commit I want to leave alone
$ git rebase -i sha-for-commit-to-leave-alone-goes-here
It will then show something like this:
pick f7f3f6d Initial work on feature
pick 310154e Additional work on feature
pick a5f4a0d Finish feature
# p, pick = use commit
# r, reword = use commit, but edit the commit message
# e, edit = use commit, but stop for amending
# s, squash = use commit, but meld into previous commit
# f, fixup = like "squash", but discard this commit's log message
# x, exec = run command (the rest of the line) using shell
# These lines can be re-ordered; they are executed from top to bottom.
Just edit pick to whatever action you want to perform and use :wq to save.
Now the next step is to push feature/new_thing to the remote and (other than the
first time I push) I’ll need to force push and this is where it gets dangerous!
$ git push -f origin feature/new_thing
This will completely overwrite feature/new_thing, whatever state it is in. Too bad if
someone else updated it – their changes are gone.
Accidentally type feature/other_thing and you’ve just blitzed the wrong branch!
The safer way to force push
$ git push --force-with-lease origin feature/new_thing
Force with lease essentially checks the remote branch to see if it’s been updated
upstream and if so rejects the push and shows you a nice error message.
Beware that if you fetch, but don’t actually pull in the changes, force-with-lease
will not save you!
I want to always use force-with-lease rather than the standard force, but it’s
a lot to type, so I have a git alias:
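For example (the alias name here is just my own choice):

```shell
# One-off setup: a short alias for the safer force push
git config --global alias.pushf 'push --force-with-lease'

# From then on:
#   git pushf origin feature/new_thing
```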
Last week the first ever Elixir conference was held in London and I was lucky enough to attend. It was brilliantly run with lots of great speakers and they’ve even got the videos up online already!
Elixir and Ruby
Elixir is a relatively new programming language that’s gaining in popularity fast, particularly amongst Ruby developers. During the conference they asked the audience to raise their hand if they had been, or are, a Ruby developer and over 75% did (myself included)!
So what’s attracting the Ruby community? Aside from the fact developers love shiny new things, this shiny new thing is different. That’s because Elixir runs on the Erlang BEAM which has been around for decades and was originally developed by Ericsson.
The keynote was from the co-creator of Erlang, Robert Virding. It was fascinating to hear him talk about how Erlang was developed to solve the challenges Ericsson was facing building telecoms systems in the late ’80s and early ’90s. It’s amazing that the problems they solved are just as relevant in today’s world of online software applications. One of the most high-profile recent adopters of Erlang is WhatsApp.
So what does this have to do with Ruby? Well Elixir inherits all of the characteristics that Erlang is known for:
Easy to scale
Often these are the same criticisms levelled at Ruby, and Elixir provides an appealing syntax that looks a lot like Ruby.
The icing on the cake for Rails developers looking to transition to Elixir is Phoenix. Phoenix takes a lot of the good ideas in Rails and adds all the benefits of Elixir such as the concurrency and fault tolerance.
Gary Rennie (core member of the Phoenix team) did a great job running through some best practices with controllers in Phoenix. He also explained why umbrella applications are a good idea to logically split your application where it makes sense.
However, it’s not all about web applications and there were a few great talks and demonstrations showcasing embedded systems with Elixir. One of the most interesting discoveries for me was the Nerves Project.
The goal of Nerves is to make it simple for developers to write Elixir that interfaces with network and input/output devices, embed the code on a memory card, and run it on hardware like a Raspberry Pi.
Elixir is very young and there’s clearly lots of enthusiasm in the community. I’m definitely going to be keeping a close eye on the ecosystem that is already growing very quickly. There seems to be a lot of benefits to using Elixir for modern application development.
I definitely want to try writing an Elixir application – wrapping my head around functional programming (when I’m used to object-oriented programming) is an interesting challenge I’ve been looking for an excuse to take on!