## Pure vs impure functions

In computer science, functions are said to be either pure or impure. A pure function is one whose return value depends only on its inputs, and that produces no side effects: it neither reads nor modifies state outside of itself. An impure function, on the other hand, breaks at least one of these rules.
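A minimal sketch of the distinction, using hypothetical function names:

```javascript
// Pure: the result depends only on the arguments, and nothing
// outside the function is read or modified.
function add(a, b) {
  return a + b;
}

// Impure: reads and mutates the external variable `total`,
// which is a side effect.
let total = 0;
function addToTotal(n) {
  total += n;
  return total;
}

console.log(add(2, 3));     // always 5
console.log(addToTotal(2)); // depends on the current value of `total`
```

Calling `add(2, 3)` any number of times always yields `5`, while repeated calls to `addToTotal(2)` return different values as `total` accumulates.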

## 3 examples of JavaScript closures

Closures in JavaScript are a really cool mechanism that lets us bundle data with the functions that act on that data. This data is usually a representation of state, at least in the world of web development. While closures might seem mysterious to some programmers, these three simple examples should help shed some light on them.
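As a taste of the idea, here is a classic counter sketch: the returned function closes over `count`, keeping that piece of state private between calls.

```javascript
// `makeCounter` returns a function that closes over `count`.
// No code outside the closure can read or modify `count` directly.
function makeCounter() {
  let count = 0;
  return function () {
    count += 1;
    return count;
  };
}

const counter = makeCounter();
console.log(counter()); // 1
console.log(counter()); // 2
```

Each call to `makeCounter()` creates a fresh, independent `count`, which is exactly the "data wrapped up with the function that acts on it" pattern described above.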

## Ground truth in machine learning

In the domains of machine learning and data science, the term ground truth gets thrown around a lot. But what exactly do we mean by ground truth, and why is it useful? I decided to do a little research and piece together a definition from a variety of sources.

## What is function arity?

Understanding the concept of arity is crucial to understanding function anatomy, as well as higher-level concepts like functional programming and currying. The arity of a function is simply the number of parameters the function in question expects.
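In JavaScript, a function's arity is exposed through its `length` property, which counts the declared parameters before any default or rest parameters:

```javascript
// A function of arity 1 (unary) and one of arity 2 (binary).
function identity(a) { return a; }
function add(a, b) { return a + b; }

// `length` only counts parameters before the first default value.
function greet(name, greeting = 'hello') { return greeting + ', ' + name; }

console.log(identity.length); // 1
console.log(add.length);      // 2
console.log(greet.length);    // 1
```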

## The difference between rank and shape in TensorFlow.js

When beginning to use TensorFlow.js, especially without a strong background in linear algebra, it can be hard to appreciate the difference between ranks and shapes. To help clear up the confusion, for myself and others, I put together this short reference.
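The core distinction can be sketched without the library itself, using plain nested arrays: the shape lists the size of each dimension, and the rank is simply the number of dimensions, i.e. the length of the shape. (The hypothetical helper `shapeOf` below assumes a regularly nested array.)

```javascript
// Walk down the first element of each nesting level, collecting
// the length of each dimension along the way.
function shapeOf(arr) {
  const shape = [];
  let current = arr;
  while (Array.isArray(current)) {
    shape.push(current.length);
    current = current[0];
  }
  return shape;
}

const matrix = [[1, 2, 3], [4, 5, 6]]; // 2 rows, 3 columns

const shape = shapeOf(matrix); // [2, 3]
const rank = shape.length;     // 2 — a matrix is a rank-2 tensor

console.log(shape, rank);
```

In TensorFlow.js proper, these correspond to a tensor's `shape` and `rank` properties.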

## The diagonals of a matrix

I couldn't find a good reference online for the many different diagonals of a matrix, so I decided to create one.
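As a starting point, here is a sketch of the two most familiar diagonals of a square matrix, using hypothetical helpers: the main diagonal (top-left to bottom-right) and the antidiagonal (top-right to bottom-left).

```javascript
// Main diagonal: element i of row i.
function mainDiagonal(m) {
  return m.map((row, i) => row[i]);
}

// Antidiagonal: element (n - 1 - i) of row i, for an n×n matrix.
function antiDiagonal(m) {
  return m.map((row, i) => row[m.length - 1 - i]);
}

const m = [
  [1, 2, 3],
  [4, 5, 6],
  [7, 8, 9],
];

console.log(mainDiagonal(m)); // [1, 5, 9]
console.log(antiDiagonal(m)); // [3, 5, 7]
```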

## Books read in 2018

I spent most of 2018 reading about healthy relationships, machine learning, and computer theory. In keeping with the annual tradition, here are all the books I read this year.

## What is the factorial?

The factorial of a positive integer is the product of all integers from that number down to one; by convention, the factorial of zero is one. It's useful when we need to count permutations.
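A straightforward iterative version for non-negative integers:

```javascript
// Multiply 2 × 3 × … × n; starting the accumulator at 1 makes
// factorial(0) and factorial(1) both return 1 (the empty product).
function factorial(n) {
  let result = 1;
  for (let i = 2; i <= n; i++) {
    result *= i;
  }
  return result;
}

console.log(factorial(5)); // 120
console.log(factorial(0)); // 1
```

The connection to permutations: `n` distinct items can be ordered in `factorial(n)` different ways, so three items have `factorial(3) === 6` possible orderings.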

## Why do we square the residuals in linear regression models?

We usually square the residuals in linear regression models to make them all positive, so that errors in opposite directions don't cancel out, and to penalize larger residuals more heavily.
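A sketch of the sum of squared residuals for a line `y = a + b·x` (the function name and parameters here are my own, for illustration):

```javascript
// For each point, compute the residual (observed minus predicted),
// square it so over- and under-predictions both count positively,
// and accumulate the total.
function sumSquaredResiduals(xs, ys, a, b) {
  return xs.reduce((sum, x, i) => {
    const residual = ys[i] - (a + b * x);
    return sum + residual * residual;
  }, 0);
}

// A perfect fit (y = 2x) has zero squared error.
console.log(sumSquaredResiduals([1, 2, 3], [2, 4, 6], 0, 2)); // 0
```

Note how squaring weights the errors: a single residual of 2 contributes 4 to the sum, four times as much as two residuals of 1.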

## Performing double summations

Performing double summations is only slightly more challenging than performing single ones, especially once we break the expression down into an inner and an outer sum.
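The inner/outer decomposition maps directly onto nested loops. As a sketch, evaluating a double sum like Σᵢ₌₁²  Σⱼ₌₁³  i·j means running the inner sum to completion for each value of the outer index:

```javascript
// Generic double summation: for each outer index i, accumulate the
// full inner sum over j before moving on.
function doubleSum(m, n, f) {
  let total = 0;
  for (let i = 1; i <= m; i++) {
    for (let j = 1; j <= n; j++) {
      total += f(i, j);
    }
  }
  return total;
}

// Σ_{i=1..2} Σ_{j=1..3} i·j = (1+2+3) + (2+4+6) = 18
console.log(doubleSum(2, 3, (i, j) => i * j)); // 18
```

Because the summand factors as i·j, this particular sum also equals (Σᵢ i)(Σⱼ j) = 3 · 6 = 18, a useful sanity check.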