This global health emergency is horrendous, and it looks like remote working will be the new normal for some time.
I’ve been working remotely for Rail Europe for two and a half years, so I thought I’d reignite my blog to pass on some tips and experience.
Sadly, this enforced remote working scenario has the potential to amplify all the difficult parts of remote working: dealing with isolation, the fear that you need to “prove” you really are working, and not separating home life from work life, amongst others.
To be clear, the ideas I want to share are not about squeezing more productivity out of your team. They’re about reducing
stress and anxiety, trusting people and giving them room and flexibility to breathe. Remote working is great for all these
things, but we distributed teams have to act and think differently to the typical co-located norms.
Anyway, I want to focus on one thing in this post…
Daily Stand-ups
These are really common in the tech industry. At Rail Europe we do them asynchronously. But what does that really mean?
Everyone writes their stand-up in a dedicated Slack channel
They post it (ideally) towards the beginning of their day
There’s no expectation that they all have to be posted at the same time
A stand-up message should contain 3 things:
What you worked on yesterday
What you plan to work on today
Any blockers
You’re encouraged to share how you are feeling or anything else affecting your work
This last point is important because, as a distributed team, you cannot rely on normal body language cues.
In theory this is a relatively simple change that can have quite a dramatic impact, in a number of positive ways:
There’s no requirement for everyone to be online all at the same time:
This gives people flexibility to start work when it works for them
It’s especially beneficial if your team is in multiple timezones
It’s less disruptive because you don’t need everyone to stop work at the same time.
Anyone off sick, or unable to attend for any reason, can easily catch up on what happened while they were away
It’s massively more time efficient:
In-person stand-ups regularly get side-tracked into details that are rarely relevant to the entire group
If someone outside your team wants a status update from you, they can read your stand-up
People can make their own decisions about whether they need to read every single update
If you need to follow up with someone on a specific point, we tend to use Slack threads or respond on the Jira story/Trello card.
Ultimately the idea behind this approach is part of a wider communication strategy we have: to allow people to structure their working day in a way that works for them, and to maximise and respect the amount of uninterrupted time that we all have,
so we can focus and stay in the flow.
The rails/webpacker gem is a fantastic way to configure webpack for your Rails application. However, it has gone through quite a few different configuration options to get it working with Docker.
Initially you could set the host by using command line arguments, but as of version 3.0.2 and this Pull Request the command line argument support was removed in favour of environment variables. By setting various environment variables you can override all dev server settings defined in config/webpacker.yml.
So now my docker-compose.yml config looks something like this:
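As a rough sketch (the service names, build context and ports are assumptions for illustration; `WEBPACKER_DEV_SERVER_HOST` is one of the environment variables webpacker reads to override `config/webpacker.yml`):

```yaml
version: '3'
services:
  web:
    build: .
    environment:
      # Tell the Rails app where to find the dev server container
      - WEBPACKER_DEV_SERVER_HOST=webpacker
    ports:
      - "3000:3000"
  webpacker:
    build: .
    command: ./bin/webpack-dev-server
    environment:
      # Bind the dev server so it is reachable from other containers
      - WEBPACKER_DEV_SERVER_HOST=0.0.0.0
    ports:
      - "3035:3035"
```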
This blog post runs through an example of iterating on some Ruby code. The bizarre fictional scenario
develops to make the requirements ever more complicated. There’s certainly many different ways to
solve the underlying problem, but I wanted to show an example where define_method and instance_exec
can be combined with class methods to make for some very powerful and concise code.
For each change in requirements, I’ll post the code in full for how I’ve handled the new scenario.
I will say upfront that these two metaprogramming methods should be used with caution. Powerful and concise code does not necessarily make for easy to understand/debug/maintain code.
The Curry Restaurant
So, there’s a curry restaurant that sells Vindaloo and Tarka Daal and there’s a customer that likes to order curry from the restaurant… bear with me :). Upon receiving their meal, the customer will respond
with a comment if the spiceyness isn’t to their taste. Unfortunately the restaurant isn’t consistent
with the spices it uses when cooking.
If any of the meals are mild then the customer will respond with “That’s too mild!”.
A new greedy customer
Then along comes another customer, a greedy one. They also like to order curry and comment on the meal. However, rather than comment on the spiceyness, they like to comment on the size of the portion.
Unfortunately the restaurant is also inconsistent with its portion sizes, but the greedy customer
doesn’t care what type of curry it is, they just want it large!
Already we have a bit of duplication between the two customers but it’s not a massive problem at
this point.
A new menu item - Jalfrezi
The restaurant is doing really well and introduces a new Jalfrezi curry to the menu. The greedy customer responds the same to it as for any curry, but the spicey customer responds to the Jalfrezi,
the same as when receiving a Vindaloo - they want it hot!
We’ve added order_jalfrezi to both customers and added expect_hot_curry to SpiceyCustomer because
they respond the same for jalfrezi and vindaloo. Now we really do have duplication between the two customers, so we’ll look at refactoring that next.
Refactor - Dining Out
We decide that the two customers have enough shared behaviour to introduce a new module called DiningOut.
This means both customers can order all three curries and then respond in a way that is specific to them, with the custom response defined in each Customer in the respond_to_meal method.
You might be thinking it would be better to make Customer a class and have GreedyCustomer and SpiceyCustomer as subclasses that inherit from it, but for this example it doesn’t matter. This is a version of the Template method pattern.
We have our new DiningOut module and we’ve defined a respond_to_meal method in both customers. This code is definitely cleaner but there is still a bit of a smell.
If the restaurant expands with lots of new items on the menu and these customers need to respond differently, then the respond_to_meal method is going to get pretty big and complicated pretty quickly. We’re also
very reliant on the name of the meal not changing.
In addition the DiningOut module will bloat quickly with each new menu item. That isn’t a massive problem, but there’s already a fair bit of obvious duplication going on where the name of the order method closely matches the argument sent to order_meal.
Now you might be thinking, why can’t we just expose order_meal as a public method and call SpiceyCustomer.new.order_meal(:jalfrezi) instead of SpiceyCustomer.new.order_jalfrezi. However…
A new fussy customer
Along comes the Fussy customer who will only eat tarka daal and the newest item on the menu, chips.
The fussy customer isn’t interested in responding to the meal and just eats in silence. In addition,
the Greedy customer will order chips, but the Spicey customer would never order chips. Sounds complicated doesn’t it?
The simplest thing to do is to add an order_chips method to DiningOut and an empty respond_to_meal to the new FussyCustomer.
Now we have some complicated rules and they’re not properly enforced. There’s nothing stopping us calling FussyCustomer.new.order_vindaloo or SpiceyCustomer.new.order_chips even though neither of those customers would order those things.
We could start putting some logic into the order_* methods to check the type of customer before ordering the meal, but that would get ugly fast. Ideally we want a solution where we can tell by looking at the code which customer likes to eat which food, and each customer won’t even know how to order food they don’t want to order.
Introducing define_method and instance_exec
Before we get into the final solution, the end result is that we can now do things like the following:
```ruby
spicey_customer = SpiceyCustomer.new
spicey_customer.order_tarka_daal
spicey_customer.order_vindaloo
spicey_customer.order_jalfrezi
spicey_customer.order_chips # raises a NoMethodError

greedy_customer = GreedyCustomer.new
greedy_customer.order_tarka_daal
greedy_customer.order_vindaloo
greedy_customer.order_jalfrezi

fussy_customer = FussyCustomer.new
fussy_customer.order_chips
fussy_customer.order_tarka_daal
fussy_customer.order_vindaloo # raises a NoMethodError
```
Also when we look at each customer we can see very clearly what they like to eat and how they like to respond for each menu item.
The nice thing is that if the restaurant adds a new menu item, we don’t need to touch the DiningOut
module at all. We just need to decide which customers would like to order the new menu item and how they might optionally respond.
It works by defining a new class method eats. This method takes the name of the menu item and a
block to be called with the meal so the customer can inspect the meal to determine their response.
The eats class method in DiningOut uses define_method to dynamically create the order_* instance method on the customer. This means that if the customer declares eats :chips then they will respond to the method order_chips (and if they don’t declare it, calling order_chips raises a NoMethodError). This makes it very obvious which menu items the customer will eat and prevents mistakes.
The order method uses instance_exec to execute the response block passed in. This is required because we want the block to be called on the customer instance and not in the context where it was defined - if we just did block.call(meal) then a NoMethodError would be raised for whatever customer method the response block called.
Accepting a block and calling it with instance_exec gives the nice benefit that the block can call any method it likes on the instance. In this scenario each Customer can call a different response method for each type of menu item if they want. This is important because, for example, the Chips menu item does not respond to the method spiceyness - so if there were a single assumed respond_to_meal method that each Customer implemented, it would have to handle every type of meal (checking if it responds to methods such as spiceyness, and all sorts of other logic).
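Putting the pieces together, a sketch of how the DiningOut module and customers might look (the Meal struct and the random spiceyness/size values are assumptions for illustration, not the original code):

```ruby
# Meal is an assumption for illustration - the restaurant is inconsistent,
# so spiceyness and size are chosen at random.
Meal = Struct.new(:name, :spiceyness, :size)

module DiningOut
  def self.included(base)
    base.extend(ClassMethods)
  end

  module ClassMethods
    # eats uses define_method to create an order_* method for the menu item.
    # Customers that don't declare an item simply won't respond to it.
    def eats(menu_item, &response)
      define_method("order_#{menu_item}") do
        meal = Meal.new(menu_item, %i[mild hot].sample, %i[small large].sample)
        # instance_exec runs the response block with self set to this
        # customer instance, so the block can call the customer's own methods.
        instance_exec(meal, &response) if response
        meal
      end
    end
  end
end

class SpiceyCustomer
  include DiningOut

  eats(:tarka_daal) { |meal| expect_hot_curry(meal) }
  eats(:vindaloo)   { |meal| expect_hot_curry(meal) }
  eats(:jalfrezi)   { |meal| expect_hot_curry(meal) }

  def expect_hot_curry(meal)
    puts "That's too mild!" if meal.spiceyness == :mild
  end
end

class GreedyCustomer
  include DiningOut

  eats(:tarka_daal) { |meal| expect_large(meal) }
  eats(:vindaloo)   { |meal| expect_large(meal) }
  eats(:jalfrezi)   { |meal| expect_large(meal) }
  eats(:chips)      { |meal| expect_large(meal) }

  def expect_large(meal)
    puts 'I wanted a bigger portion!' unless meal.size == :large
  end
end

class FussyCustomer
  include DiningOut

  # The fussy customer eats in silence, so no response blocks
  eats :tarka_daal
  eats :chips
end
```

Note that the meal-creation logic lives once in DiningOut; adding a menu item only means adding an eats declaration to the customers who would order it.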
Conclusion
Obviously this is a ridiculously contrived and long example to demonstrate two metaprogramming methods in Ruby but hopefully if you’ve read this far, you’ve got something out of it.
As I mentioned in the intro, these metaprogramming methods should be used with caution. Powerful and concise code does not necessarily make for easy to understand/debug/maintain code.
Enzyme is a React testing utility written
by the developers at Airbnb that is designed to make testing your components
much simpler and easier.
Here’s a quick tip for testing onClick functions where your code calls event.preventDefault()
When triggering a click via the Enzyme api method simulate, you are not actually triggering a real event – the underlying implementation is simply calling the onClick prop of the node.
So if you’re calling event.preventDefault() in your code, you will need to pass
a second argument to simulate, which is a mock object, that is passed through to the event handler.
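As a minimal sketch of the principle (the handler name and its return value are made up for illustration):

```javascript
// An onClick handler that calls event.preventDefault(), as described above
function handleClick(event) {
  event.preventDefault();
  return 'submitted';
}

// simulate('click') with no second argument calls the handler with an
// undefined event, which blows up on event.preventDefault. Passing a
// plain mock object with a no-op preventDefault avoids this:
const mockEvent = { preventDefault: () => {} };

// Roughly what wrapper.find('button').simulate('click', mockEvent) does:
handleClick(mockEvent);
```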
Easy when you know how, but not immediately obvious. Fortunately the Enzyme docs are brilliant and well worth checking out for further detail and examples.
This article is going to explain how to perform and test Ajax requests in a React
application using Redux and Axios which
is a promise based HTTP client.
I like Axios because of the way you can centrally configure the common behaviour
for the requests your front-end needs to make and the responses it receives.
To take one example, it has the concept of interceptors
that allow you to write a single function that can handle responses such
as a 401 Unauthorized or a 500 error.
When using Axios I like to create a utility ‘Service’ class to set up the centralised
config and ensure all Ajax requests get handled consistently.
It also means calls to Axios are not sprinkled all over the codebase, so I could
swap out Axios for an alternative library and I should only need to update this
one wrapper class.
In a Redux application, action creators are pure functions and thus ‘side-effect free’, so in order to perform Ajax requests you will need middleware such as ‘redux-thunk’. An alternative, which I’ve not used but have heard good things about, is redux-loop.
Redux Thunk means you can return a function from your action to delay the dispatch
of it, or only dispatch if a condition is met (like the response status is 200).
Once Redux Thunk is installed, you can return a function in your Action as shown by
saveResource in the example below:
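The shape of such an action creator can be sketched like this (the service is stubbed with a plain Promise; names like ResourceService and SAVE_RESOURCE_SUCCESS are assumptions, not the original code):

```javascript
// Stand-in for the Axios wrapper 'Service' class described earlier
const ResourceService = {
  save: (resource) => Promise.resolve({ status: 200, data: resource }),
};

// With redux-thunk installed, the action creator can return a function.
// The dispatch is delayed until the Ajax promise resolves, and only
// happens if the response status is 200.
function saveResource(resource) {
  return (dispatch) =>
    ResourceService.save(resource).then((response) => {
      if (response.status === 200) {
        dispatch({ type: 'SAVE_RESOURCE_SUCCESS', resource: response.data });
      }
    });
}
```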
Then for any containers where you want to use the action, you need to pass dispatch through to the function in mapDispatchToProps. See saveResource below:
To test the action you can use standard jasmine spy functions:
Then finally to test the Service class:
There are a few bits to go through here. Firstly, I found that when running the specs
I needed to add the babel-polyfill for Promises. I’m using the Karma test runner, so I needed to add 'node_modules/babel-polyfill/dist/polyfill.js' to the files array in the karma.config.js
file.
Secondly at the top of the spec you’ll notice that jasmine-ajax is imported. This is the official plugin for faking Ajax requests in your Jasmine specs. What I like about this
is that it has no knowledge of Axios. In the beforeEach it is turned on and
then all Ajax requests triggered in the specs are captured and can be inspected.
Due to the asynchronous nature of the promise, I use setTimeout to grab the
most recent request and check it was formed correctly. To test the callback
I use the then function on the promise under test.
A small, but important, detail is the call to done() – this ensures the test
does not exit early and waits until the asynchronous part of the test is complete
before exiting.
And that’s it! There’s a lot of code there, but once the Service class is setup it
should remain pretty much unchanged. Adding additional Ajax requests to your React application’s actions should be fairly straightforward.
Capybara is a great tool for performing
integration testing of web applications. Typically it is used in development with
the default Rack Test Driver. However, the Rack Test driver does not support JavaScript
and is unable to access HTTP resources outside of your Rack application.
Enter PhantomJS – a headless implementation of WebKit
that can execute JavaScript, access external URLs and work with Capybara. The easiest
way to get the two talking to each other is via the Poltergeist gem.
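The registration follows the standard Capybara driver pattern, along these lines (driver options omitted for brevity):

```ruby
require 'capybara/poltergeist'

Capybara.register_driver :poltergeist do |app|
  Capybara::Poltergeist::Driver.new(app)
end

# Use Poltergeist for specs tagged as requiring JavaScript
Capybara.javascript_driver = :poltergeist
```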
When accepting file uploads from users of your application, you should always
check the file to ensure malicious content isn’t permitted.
If you have a simple Rails file_field in a multipart form, you’ll get an ActionDispatch::Http::UploadedFile. This object wraps up the temporary uploaded file and has methods
like content_type and original_filename.
At first glance it sounds sensible to check the extension (determined from original_filename) along with the content_type, against a whitelist of allowed
values. However, if the pesky user has renamed the file extension before uploading
the file, your whitelist will not help you as even content_type will return the
incorrect value.
This is why checking the mime type of the file is important, but this is not
something Ruby does natively. There are a few Ruby gems around which specifically
handle this problem but the simplest solution I found, without requiring any third
party dependencies, was to use the underlying Operating System and execute the
file command via back ticks:
`file -b --mime-type '/path/to/file.txt'`.strip
This works on both Linux and OS X and will return the mime type such as
‘text/plain’, which can then be used as part of your validation.
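For example, a sketch of folding that into a whitelist check (the allowed list and method name are assumptions for illustration):

```ruby
require 'shellwords'

# Whitelist of acceptable mime types - an assumption for illustration
ALLOWED_MIME_TYPES = %w[text/plain image/png image/jpeg].freeze

def permitted_upload?(path)
  # Shellwords.escape guards against shell injection via the filename
  mime = `file -b --mime-type #{Shellwords.escape(path)}`.strip
  ALLOWED_MIME_TYPES.include?(mime)
end
```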
Functional programming has piqued my interest for a while – it appears to be reaching
a tipping point with the rise of ReactJS, Redux, Elm and ImmutableJS
bringing functional programming to a whole new set of developers through JavaScript.
Elixir is also a functional programming language that’s
attracting a lot of the Ruby community.
As an object-orientated Rubyist, I’ve been exposed to some functional programming
ideas through Blocks, Procs and Lambdas but when programming in Ruby you’re typically
mutating state all over the place.
Pure functional programming works with immutable state.
Although I didn’t set out to specifically learn Scala, I decided to take the
course because the reviews from fellow programmers
highly recommended it for learning Functional Programming. I think they were right – it’s
a very well put together course and it’s also really tough.
It completely challenges the way you think when you are used to writing imperative code.
The course is definitely less about learning Scala and more focussed on the fundamentals
of functional programming. Interestingly Scala actually allows both the worlds of
functional and object-orientated to co-exist.
Each week you have an assignment to complete, and you can upload your program directly
from your IDE – there’s an automated test suite that will grade your submission.
Side effect free
So one of the reasons functional programming is gaining popularity is because one of
the main cited benefits is that the code is easier to reason about due to being
‘side-effect free’.
It’s this immutability that is at the core of React & Redux, especially for performance.
When a component’s props or state change, React decides whether an actual DOM update
is necessary. By using immutable state, it’s very easy to track if an object
has changed. The React documentation has a really good detailed explanation on Optimizing Performance.
Scalable
Functional programming languages are often talked about as being highly performant, but
what does this mean?
The future trend in computing appears to be more cores, and one of the ways to make best
use of the hardware is to run code in parallel. This is where that ‘side-effect free’
code comes into play, as it makes code a lot easier to run in parallel.
The Future
Functional programming is certainly a completely different mindset for those of
us used to object-oriented programs. It will be really interesting to see if it
does become the mainstream approach to building web applications. I think I might
start trying to apply some functional programming in Ruby and see how I get on.
A container component is concerned with how things work. This typically involves
fetching data, watching the store for updates and updating the store itself by
dispatching actions.
Essentially a container component will wrap a presentational component to add
behaviour to it.
What does a container component look like?
The container component is created on Line 16 via the connect() function provided
by React Redux.
You can pass in additional functions to control how you want the state and the
actions mapped to the props. These are mapStateToProps() and mapDispatchToProps().
In the example above, I’m using mapDispatchToProps() to set up the addTodo prop
that will dispatch the addTodo action we’ve imported. How this is used is entirely
down to the AddTodoForm presentational component and is outside the responsibility
of this component.
Note the first argument to connect() is null because I’m not using mapStateToProps(), but
the principle is the same as for mapDispatchToProps().
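As a sketch of the mapping itself (the action shape is assumed for illustration):

```javascript
// The imported action creator (shape assumed for illustration)
const addTodo = (text) => ({ type: 'ADD_TODO', text });

// mapDispatchToProps wires the action creator to the store's dispatch,
// exposing it to the wrapped component as the addTodo prop
const mapDispatchToProps = (dispatch) => ({
  addTodo: (text) => dispatch(addTodo(text)),
});
```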
So how is a container component tested?
Container components should be fairly lightweight. There seems little point testing
the connect() function, as this is not our code. But we do care that the props are
going into our wrapped AddTodoForm component correctly.
So to test this component, all I’m interested in is that the addTodo prop has
been set up to dispatch the required action to the store.
The mock store and the Provider are required to render the connected component.
After that I’m just reaching into the component to grab the addTodo prop and calling
the function. I’m spying on the store’s dispatch method, which means I can check it’s
called with a correctly formed action.
My test for the presentational component AddTodoForm will take care of checking
that the prop is actually triggered when it’s supposed to.