This is Part 1 of a three-part guide on refactoring JavaScript from imperative and/or object-oriented patterns to declarative functional ones. This first part is a conceptual overview. In Part 2, we apply the concepts of currying, partial application, and pointfree style, and in Part 3 we discuss automated function composition.

As developers, it is in our interest to cultivate ways of thinking that are efficient, productive, and organized. How we think about problems is just as important as solving the problems themselves. Functional programming (FP) promotes a mental framework for problem solving that engenders clarity, momentum, and flexibility over time.

Many of the tasks I run into in client-side JavaScript involve transforming data from one type and/or shape into another type and/or shape. The data typically comes from an API request or state store, and I have to prepare it for presentation in the DOM. This domain lends itself especially well to function composition. I’m here to tell and show you why.

This series is a detailed refactor of a common programming task: filtering an array of objects based on a property value. In Part 1, I will provide an overview of FP and the terminology and concepts that will be covered in the “makeover”. The makeover itself, Parts 2 and 3, will consist of turning a for loop implementation into a pipeline of pure function composition.

The point of this series is to demonstrate how to solve a common data transformation using function composition by thinking in terms of “functions first, data last”. Along with that, I want you to learn how to identify and use currying, partial application, and pointfree style. The goal is not for you to feel like a functional programming expert, but to finish this series with better intuition for how to write declarative, composable JavaScript.

The Essence of Functional Programming

  1. Break down a problem into a set of problems that are as small and/or simple as possible.
  2. Write a function for each small problem.
  3. Stitch together those smaller functions via composition to solve the larger problem at hand.

There are other aspects of FP out there such as algebraic data types. I do not believe these are necessary to get up and running with function composition in everyday JavaScript data transformations. In this series, we will focus solely on function composition.
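
To make those three steps concrete, here is a tiny sketch (the function names are invented purely for illustration):

// small problems, each solved by a small function
const trim = sentence => sentence.trim()
const toLowerCase = sentence => sentence.toLowerCase()
const splitWords = sentence => sentence.split(' ')

// the larger problem, solved by stitching the small functions together
const normalizeWords = sentence => splitWords(toLowerCase(trim(sentence)))

normalizeWords('  Functional Programming IS Fun  ')
// ['functional', 'programming', 'is', 'fun']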

Object Oriented Programming vs. Functional Programming

In object-oriented programming (OOP), everything is an object. Objects collaborate with other objects by sending messages to (or calling methods on) each other. We program objects to know what to do when other objects send messages to them. The objects are the information, and their messages are what prompt behavior. This is a powerful paradigm because the objects are almost personified. We can imagine these abstract classes, instances, and objects running around like little worker ants building a great colony.

In functional programming, everything is a function. Instead of sending messages to objects, we pass data into functions. The key to successful collaboration of functions is aligning data types of inputs and outputs. In this case, functions are the behavior, and the data is the information.

JavaScript is a really fun language because it is a blend of both patterns. We can easily alternate paradigms depending on the problem at hand. For a lot of client-side tasks, functional patterns are the better tool for the job.

Cognitive Anchoring

When we tie our thoughts to specific constructs (e.g., users, books, pictures), we develop a cognitive bias called anchoring (aka focalism). This is the phenomenon where we get attached to initial pieces of information when making decisions. Anchoring limits perspective on how a problem might be solved because we attach unnecessary context to the domain, when the actual solution might be very general.

For example, you may have developed some custom business logic for the users in your application. There is a high probability that you’ve implemented similar logic for other models of data outside of users. Whether you recognized the similarity or not, thinking of the problem as “I need to do this thing with user data” produces more anchoring, and therefore cloudier problem solving, than thinking of it as “I need to do this thing with data (…that just so happens to represent users in this case)”.

Thinking of transformations at an abstract level ensures that we focus on the high-level flow of the logic rather than getting stuck on specific implementation details.
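
For example, a user-specific helper and its more general counterpart might look something like this (the names and data here are hypothetical):

// anchored: this helper is only ever useful for users
const activeUsers = users => users.filter(user => user.isActive)

// general: the same logic, usable for any data that carries an isActive flag
const onlyActive = items => items.filter(item => item.isActive)

onlyActive(users)
onlyActive(subscriptions)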

Terminology and Definitions

Here are the building blocks and concepts that will prepare us to implement a filtering solution that is built upon function composition:

Declarative vs. Imperative Style

Imperative code shows you how it works. Declarative code tells you what it does. This, like many things, is a spectrum. Declarative code will always have some level of how, even if abstracted away, but the point is to produce a more intuitive API that encapsulates implementation details in one place.

Here is an example of a more imperative way of filtering an array:

const ages = [1, 29, 4, 44, 423, 26, 5]
const evens = numbers => {
  let matches = []

  for (const number of numbers) {
    if (number % 2 === 0) {
      matches.push(number)
    }
  }

  return matches
}
evens(ages) // [4, 44, 26]

We make a new empty array, compare values, push onto (mutating) the new array if the value is even, then return the new array. There is a lot of how in this implementation.

Here is a more declarative example:

const isEven = number => number % 2 === 0
const evens = numbers => numbers.filter(isEven)
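
Called with the same ages array, it produces the same result:

evens(ages) // [4, 44, 26]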

Our only element of how in this example is how to produce a true or false value in the function we pass to filter. We don’t have to make a new array or push onto it by hand. We get to lean on built-in JavaScript abstractions that handle it for us.

With the for loop example, it is not immediately clear what is being done on each iteration. When we use filter, however, I know right away, without needing to look at the function passed to filter, that the goal is to take an array and produce a new array containing only the elements that meet certain criteria. Sure, this requires familiarity with Array.prototype methods, but that is a small price to pay for a more intentional expression of logic.

Declarative code expresses intent at a higher level of abstraction. It allows you to focus more on relationships between data and behavior by hiding the low level implementation details of those relationships. Think of step-by-step directions compared to a map (…the topographic kind). The former helps you get somewhere, but leaves you relatively helpless in understanding where you really are in relation to other things. The latter provides you broader context of the system (geography) you’re a part of, which, in turn, gives you more understanding of the system.

Pure Functions

Given the same inputs, pure functions produce the same outputs. Always. This makes caching expensive computations easy because we know that a given input always produces the same output.
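
Because of that guarantee, a pure function can be wrapped in a simple memoization helper. Here is a minimal sketch, assuming single-argument functions (memoize is not something we build on later, just an illustration):

const memoize = fn => {
  const cache = new Map()
  return arg => {
    if (!cache.has(arg)) {
      cache.set(arg, fn(arg)) // compute once per distinct input
    }
    return cache.get(arg) // every later call is a lookup
  }
}

const square = number => number * number // imagine this were expensive
const fastSquare = memoize(square)
fastSquare(4) // computed: 16
fastSquare(4) // served from the cache: 16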

Pure functions also have no side effects, which include:

  • mutating data (opposite of “immutability”)
  • network requests
  • updating state
  • file I/O

Side effects are undoubtedly the coolest part about web applications, so FP is not about removing them. The goal is to just manage them with a disciplined and predictable strategy.

// pure
const increaseCount = (count, value) => count + value

// impure
let count = 0
const increaseCount = value => {
  count += value
  return count
}

In the first example, the same inputs will always yield the same outputs. The function depends entirely on the inputs.

In the second example, increaseCount depends on an outer context. If we pass in 1, we get a return value of 1, and count’s value is mutated from 0 to 1. Say increaseCount is used again in another spot, and the value of count has been updated to 5. If we pass in 1 again, we get a return value of 6. Given the same input (1), increaseCount produced different outputs, and it mutated a variable outside of its own scope. As an application grows, these kinds of patterns lead to unpredictable behavior and bugs that are difficult to track down.
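
Played out in code, the sequence looks something like this:

increaseCount(1) // returns 1; count is mutated from 0 to 1
// ...elsewhere in the app, other code bumps count to 5
increaseCount(1) // same input, but now returns 6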

Methods vs. Functions

For the purposes of this guide, methods refer to messages sent to objects via dot notation: user.buildProfile(). Functions refer to first-class functions that receive data as arguments: buildProfile(user).
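
A quick sketch of the difference (the User class and buildProfile are hypothetical, not part of the refactor):

// method: the behavior lives on the object and is invoked with dot notation
class User {
  constructor(name) { this.name = name }
  buildProfile() { return `Profile for ${this.name}` }
}
new User('Grace').buildProfile()

// function: the behavior is a first-class function that receives the data
const buildProfile = user => `Profile for ${user.name}`
buildProfile({ name: 'Grace' })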

Predicate Functions

These are functions that return true or false.

users.filter(user => !!user.firstName)
users.filter(user => user.fullName === "Grace Hopper")
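
Naming a predicate lets the same true/false check be reused anywhere a predicate is expected (hasFirstName is just a name for this sketch):

const hasFirstName = user => Boolean(user.firstName)

users.filter(hasFirstName) // keep only users with a first name
users.every(hasFirstName)  // do all users have a first name?
users.some(hasFirstName)   // does at least one user have a first name?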

Higher Order Functions

These take one or more functions as arguments and/or return a function.

Array.prototype.map/reduce/filter are all higher order functions.

const ages = [10, 72, 90, 44]
ages.map(age => age % 2 === 0)

The argument to map is an uninvoked anonymous function.
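
Higher order functions are not limited to the built-in array methods; any function that takes or returns a function qualifies. A tiny hand-rolled example (twice is a made-up name for illustration):

// takes a function, returns a new function that applies it two times
const twice = fn => value => fn(fn(value))

const addTen = number => number + 10
const addTwenty = twice(addTen)
addTwenty(5) // 25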

If you’ve connected a React component to a Redux store, you may have seen:

export default connect(
  mapStateToProps,
  mapDispatchToProps
)(SomeComponent)

Without knowing exactly how connect works, we can deduce that it is a higher order function because it takes two arguments, mapStateToProps and mapDispatchToProps, and returns a function that then takes SomeComponent. It is a function that returns a function.

Pointfree Style

“Points” is a synonym for “arguments”. Pointfree style omits anonymous functions used to delegate arguments. Take the following example:

const double = number => number * 2

ages.map(age => double(age))
// is the same as
ages.map(double)

In the first map, the argument we pass to map is an anonymous function that takes one argument, age. The sole purpose of that anonymous function is to pass its argument to another function, in this case double. The result of passing age to that anonymous function is the same result as passing age directly to double, so we can remove this delegation pattern and just pass double, as it itself is a function that takes one argument and doubles it.

It’s a little cryptic at first, but it has advantages. There is less clutter in the function, which makes the function more declarative. The focus is on what the transformation does, not how it does it. I don’t know how ages.map(double) works, and I don’t need to. I just need to know that it will produce an array of the same length where each age value is doubled.
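
One caveat when dropping the wrapping arrow: the named function now receives every argument the caller passes along. Array.prototype.map, for example, also passes the index and the array, which can surprise you with functions that accept optional extra arguments:

['10', '10', '10'].map(number => parseInt(number)) // [10, 10, 10]
['10', '10', '10'].map(parseInt)                   // [10, NaN, 2] (the index is passed as the radix)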

Currying

Let’s say we have two DOM nodes that should open a menu component when clicked. One way we could implement this is to just use the function twice.

const openMenu = event => { ... }
$('.someButton').on('click', openMenu)
$('.someLink').on('click', openMenu)

This isn’t a terrible pattern, but what if more elements end up needing to open the menu on click events? What if we want the same functionality for event types other than 'click'? The more we repeat code, the harder it is to change.

Either way, both uses of on receive the exact same arguments. All that differs is the DOM element being selected by jQuery ($). To reduce code duplication, we can write a function that takes its arguments in multiple stages.

const fancyEventHandler = (handler, eventType) =>
  domElement => $(domElement).on(eventType, handler)

const openMenuOnClick = fancyEventHandler(openMenu, 'click')

openMenuOnClick('.someButton')
openMenuOnClick('.someLink')

fancyEventHandler is a higher order function. After receiving handler and eventType, it returns another function (note the two fat arrows =>). After this first “stage”, the result is another function that is waiting to receive the selector value that jQuery will use. After the second stage when it receives the selector value, the event listeners will be registered.

In the unwrapped examples:

$('.someButton').on('click', openMenu)
$('.someLink').on('click', openMenu)

we must have every piece of data (the selector, event type, and event handler) up front. Our fancy wrapping function basically says to the needy unwrapped pattern, “Look, I don’t have all your arguments yet, just take these for now and I’ll give you the rest later.”

Now for any DOM element that needs to open the menu on click, we just have to pass that selector to openMenuOnClick. We could even automate this by wrapping the selectors in an array and using forEach.

const menuOpeners = ['.someButton', '.someLink']
menuOpeners.forEach(openMenuOnClick)

If we ended up needing to use the openMenu handler for different event types, we can make our wrapper function even fancier by adding a third stage for gathering the arguments.

const fancierEventHandler = handler =>
  eventType =>
  domElement => $(domElement).on(eventType, handler)

Then we can expand the data represented in menuOpeners to specify the event type for each selector.

const openMenuOn = fancierEventHandler(openMenu)
const menuOpeners = [
  { selector: '.someButton', eventType: 'click' },
  { selector: '.someLink', eventType: 'click' },
  { selector: '.someInput', eventType: 'change' },
]

menuOpeners.forEach(({ selector, eventType }) =>
  openMenuOn(eventType)(selector))

With this style, adding event listeners becomes a matter of adding and removing objects from an array. As an additional benefit, menuOpeners is now a sort of table of contents for all our menu opening functionality. Beautifully declarative.

Now our wrapper function is made up of three chained anonymous functions that take one argument each. This fancy pattern of breaking up a function into multiple stages that accumulate arguments one at a time is the essence of currying (named after Haskell Curry, not the delicious category of dishes from India). A curried function returns a function until it has received all of its arguments.

As demonstrated in the event handling example, this is valuable because functions can be built up incrementally without duplicating common arguments. Breaking up functions into separate stages for each argument is tremendously useful (if not required) for effective function composition.
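
As a preview of where the series is headed, the same pattern applies directly to our filtering task when we take the function first and the data last. A minimal sketch (filterBy is a name introduced just for this preview):

// curried: take the predicate first, the data last
const filterBy = predicate => array => array.filter(predicate)

const onlyEvens = filterBy(number => number % 2 === 0)
onlyEvens([1, 29, 4, 44, 423, 26, 5]) // [4, 44, 26]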

Partial Application

The good news is: you just learned partial application. Hooray! Remember openMenuOn? That is partial application. What about openMenuOnClick? You betcha.

A curried function with only some of its arguments is considered partially applied. The return value of partially applying a curried function is another function. Partial application is what makes currying useful, so you will often see the concepts brought up together or even used interchangeably.
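
Spelled out with the earlier example (openMenuOnChange is just a name for this sketch):

// openMenuOn = fancierEventHandler(openMenu)  -> partially applied: handler supplied
const openMenuOnChange = openMenuOn('change')  // partially applied further: event type supplied
openMenuOnChange('.someInput')                 // fully applied: the listener is registered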

Conclusion

All of these concepts push us in a direction to write more declarative, flexible, and modular code. This allows us to be less distracted by implementation details, and more focused on the flow of high-level transformation logic. Let these techniques and ideas percolate in your lovely brain for a while, and we’ll meet again for Part 2 to put them into action.
