Why all developers should know Array.reduce()

Instead of just for loops

Timothy Teoh
6 min read · Oct 20, 2018


Frontend Malaysia recently highlighted a post titled You don’t need Array.reduce(), which attracted some minor controversy. The author argues that Array.reduce() is always better replaced with a simple for loop.

In the example he cited, a dataset of trips needed to be summed up, with the result being the total trip distance by type:

const trips = [
  {type: 'car', dist: 42},
  {type: 'foot', dist: 3},
  {type: 'flight', dist: 212},
  {type: 'car', dist: 90},
  {type: 'foot', dist: 7}
]

Using Array.reduce() solves the problem like this:

const distanceByType = trips.reduce((currState, trip) => {
  const { type, dist } = trip
  if (currState[type]) {
    currState[type] += dist
  } else {
    currState[type] = dist
  }
  return currState
}, {})

// distanceByType = {car: 132, foot: 10, flight: 212}

But he argues that this is not needed, saying a for loop can do it better and clearer:

const distanceByType = {}
for (const trip of trips) {
  const { type, dist } = trip
  if (distanceByType[type]) {
    distanceByType[type] += dist
  } else {
    distanceByType[type] = dist
  }
}
What do you think? Do you find the Array.reduce() syntax too confusing? Or do you think there isn’t really much difference anyway?

Before we continue, let me recap how the Array.reduce() function works. I’ll include two closely-related functions that often work in tandem, Array.filter() and Array.map().

What are the filter, map, and reduce functions?

filter, map, and reduce are higher-order functions — functions that either receive a function as input, or return a function as output.

Their implementations exist in practically every frontend and backend language or framework — Javascript, PHP, Elixir, Haskell. If they aren’t in the standard library, then a package almost certainly exists that implements them.


The filter function returns a new array with ≤ the number of items in the input array.
  • For every item in the input array, the callback function is applied.
  • If the result of the function is false(y), that item will not be in the output array.
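As a quick sketch, here is filter applied to the trips data from earlier (the variable name shortTrips is my own, for illustration):

```javascript
const trips = [
  {type: 'car', dist: 42},
  {type: 'foot', dist: 3},
  {type: 'flight', dist: 212},
  {type: 'car', dist: 90},
  {type: 'foot', dist: 7}
]

// Keep only trips shorter than 10 — the callback returns a boolean,
// and falsy results drop the item from the output array
const shortTrips = trips.filter(trip => trip.dist < 10)
// shortTrips = [{type: 'foot', dist: 3}, {type: 'foot', dist: 7}]
```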


The map function returns a new array with a function applied to every item of the input array.
  • For every item in the array, the callback function is applied.
  • The result of the function goes into the output array.
  • The number of items remains the same, in the same order.
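For example, map can pull just the distances out of the same trips data (again, distances is an illustrative name):

```javascript
const trips = [
  {type: 'car', dist: 42},
  {type: 'foot', dist: 3},
  {type: 'flight', dist: 212},
  {type: 'car', dist: 90},
  {type: 'foot', dist: 7}
]

// One output item per input item, in the same order
const distances = trips.map(trip => trip.dist)
// distances = [42, 3, 212, 90, 7]
```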


The reduce function returns a single item, which is the return value of the last iteration of the callback.
  • For every item in the array, the callback is applied. Unlike the other functions we have covered here, this function is also passed the previous state, and returns the new state to be used by the next iteration.
  • The final output doesn’t need to be an array — it is the output of the last iteration of the reducer (the last newState)
  • The reduce function lets you set the initial state that will be passed into the prevState of the first iteration.
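A minimal reduce example on the same data — summing every distance into a single number, with 0 as the initial state:

```javascript
const trips = [
  {type: 'car', dist: 42},
  {type: 'foot', dist: 3},
  {type: 'flight', dist: 212},
  {type: 'car', dist: 90},
  {type: 'foot', dist: 7}
]

// sum is the previous state; the callback's return value
// becomes the state passed to the next iteration
const totalDist = trips.reduce((sum, trip) => sum + trip.dist, 0)
// totalDist = 354
```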

So what’s the deal with filter, map, and reduce?

There are a couple of reasons why you should know how and when to use these functions:

1. Immutability and predictability

In a for loop, there is never a guarantee that the original array was not modified, either intentionally or unintentionally.

const distanceByType = {}
for (const trip of trips) {
  const { type, dist } = trip
  if (distanceByType[type]) {
    distanceByType[type] += dist
  } else {
    distanceByType[type] = dist
  }
  // this modifies the trips array
  trips[type] = dist
}


This example is contrived, but the point is that all the const declarations didn’t matter — the trips array can still be modified. This kind of unpredictability was (and is) a large problem, especially in the Javascript scene, because Javascript was not initially built with safeguards against mutation.

Object mutation can make your application behave in odd ways, and when you combine this with complex Javascript applications it gets worse. This is one reason why the trend has moved from the original Angular1-like two-way binding across components to the React-like “top-down” approach, as it is easier to determine when something has changed and what changed it.

Consider the following code:

const distanceByType = trips
  .filter(filterCallback)
  .map(mapCallback)
  .reduce(reduceCallback, {})
  • With filter, map, or reduce you are always certain that the trips array has never been mutated, without looking at the implementation of filterCallback, mapCallback, or reduceCallback.
  • Because these functions are used, you also know exactly what the return types of filterCallback, mapCallback, and reduceCallback will be - again, without having to look through any implementation details.

If you do something like this instead:

const matchingTrips = returnMatchingTrips(trips)
const convertedTrips = convertTrips(matchingTrips)
const distanceByType = calculateTripDurations(convertedTrips)
  • You can’t be sure what the return types are for these functions without glancing through the implementation.
  • You also can’t be certain that trips, matchingTrips, and convertedTrips haven’t been modified after passing them as arguments, because in Javascript, object arguments are passed by reference*.
function returnMatchingTrips(trips) {
  for (var i = 0; i < trips.length; i++) {
    if (trips[i].dist < 10) {
      trips.splice(i, 1)
      i-- // stay in place after removing an item
    }
  }
  return trips
}

In the example above, returnMatchingTrips(trips) will indeed return all trips with a distance of 10 or more, but it also mutates the original trips array passed to it (and yes, even if it was declared a const).

2. Pure functions are great

This brings us to the growing emphasis on the value of pure functions — a pure function is one where the same input will always produce the same output, with no side effects, and without relying on any other input. These are used heavily in functional programming.

A pure function always returns the same output given the same input. An impure function may not, as it may read outside state and/or modify outside state.
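To make the contrast concrete, here is a sketch of a pure and an impure version of the same calculation (both function names are my own, for illustration):

```javascript
// Pure: output depends only on the input, and nothing outside is touched
function totalDistance(trips) {
  return trips.reduce((sum, trip) => sum + trip.dist, 0)
}

// Impure: reads and mutates state outside the function
let runningTotal = 0
function addToRunningTotal(trips) {
  for (const trip of trips) {
    runningTotal += trip.dist
  }
  return runningTotal
}

const trips = [{type: 'car', dist: 42}, {type: 'foot', dist: 3}]

const pureFirst = totalDistance(trips)       // 45
const pureSecond = totalDistance(trips)      // 45 — always the same
const impureFirst = addToRunningTotal(trips)  // 45 the first call...
const impureSecond = addToRunningTotal(trips) // ...but 90 the second:
                                              // same input, different output
```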

Why is this paradigm important?

  • Pure functions are easy to scale horizontally. Apache Hadoop implements a programming model called MapReduce (originally described by Google) that is widely used in data processing because it can be scaled out across cheap hardware — and you can guess how it works from its name alone!
  • As more teams move towards microservices, having predictable input and output produces fewer bugs and easier testing.
  • There are a growing number of cloud providers offering serverless features — functions that you dynamically run in the cloud without worrying about underlying infrastructure. Pure functions map well to these.

The map, filter, and reduce functions promote this paradigm — I’ll go out on a limb and say that learning how to use these functions is your first step towards it!

Conclusion — don’t reinvent the wheel!

If you are a developer, don’t jump into your own Custom Solution™ or Better Solution® whenever you first encounter a problem — especially so if there is an implementation in the language standard library.

The most important thing to realize as a developer is that there are many, many other talented people who have come before you, who have faced similar problems. Take the time to delve into why things are the way they are before building a solution.

That said, this is not to say that a standard library function or design pattern is always the best way to solve a problem. filter, map and reduce, for example, always run in O(n) time (every item in the array is iterated once), which may or may not be what you need. But the key is to familiarize yourself with as many options as you can before you start developing.

If I have seen further it is only by standing on the shoulders of giants — Isaac Newton

This story was originally published on Tribehired.

*Unless they are primitives



Timothy Teoh

Full-stack software architect and technology leader from Kuala Lumpur, Malaysia