From Least Squares to Likelihood: The Dual Nature of Linear Regression
June 29, 2025
This post is accessible to readers with a basic understanding of matrix algebra and probability. If you're new to those topics, don’t worry — my upcoming posts will cover the essential foundations of linear algebra and probability for machine learning. Stay tuned!
Introduction
Imagine you want to predict the price of a house in City A. Naturally, the price depends on various factors — the size of the house, its location (proximity to schools, hospitals, etc.), its age, and perhaps a dozen more. In machine learning, we represent these influencing factors as a feature vector $\mathbf{x} = (x_1, x_2, \ldots, x_d)$, where each $x_i$ is a numerical representation of some property of the house.
Our goal is to learn a relationship between $\mathbf{x}$ and the price $y$, so we can predict the price of new houses just from their features. One of the most classic ways to approach this is through linear regression:
$$\hat{y} = \mathbf{w}^\top \mathbf{x} + b$$
This is the least squares perspective: find the weights $\mathbf{w}$ and bias $b$ that minimize the total squared error between predicted and actual prices across the training data.
In practice, no matter how many features we include, there will always be unpredictable or unmeasured influences — random fluctuations, measurement noise, and real-world complexities we choose to ignore for simplicity. That's why we don't write our model as $y = \mathbf{w}^\top \mathbf{x} + b$, but rather:
$$y = \mathbf{w}^\top \mathbf{x} + b + \varepsilon$$
The term $\varepsilon$ captures this uncertainty — the deviation between the actual observed value and the model's idealized prediction. It reminds us that the model is a simplification. As the statistician George E. P. Box famously said:
"All models are wrong, but some are useful."
Thinking of $\varepsilon$ as a random variable — specifically, as a sample from a probability distribution — turns this from a deterministic equation into a probabilistic model. And as we'll see next, this small conceptual shift leads us directly to maximum likelihood estimation.
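To make the noisy data-generating process concrete, here is a minimal sketch in Python (assuming NumPy; the weights, bias, and noise level are made up purely for illustration, not taken from any real housing data):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical "true" relationship: price = w . x + b + noise
true_w = np.array([1.2, -0.03, 0.5])     # made-up weights for three features
true_b = 50.0                            # made-up bias
n_houses = 1000

# Illustrative features: size (m^2), age (years), distance to school (km)
X = rng.uniform(low=[50, 0, 1], high=[300, 80, 30], size=(n_houses, 3))

noise = rng.normal(loc=0.0, scale=10.0, size=n_houses)   # the epsilon term
y = X @ true_w + true_b + noise          # observed prices deviate from the ideal line
```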
The Least Squares Perspective
Before diving into the math, let's address one notational point. In the previous section, we used $b$ to represent the intercept term. For clarity and to align with standard convention, we'll now write the intercept as $w_0$, and keep the rest of the weights as a vector $\mathbf{w} = (w_1, \ldots, w_d)^\top$. This allows us to express our model compactly as:
$$\hat{y} = \mathbf{w}^\top \mathbf{x} + w_0$$
Now suppose we are given a dataset of $N$ examples $\{(\mathbf{x}_i, y_i)\}_{i=1}^{N}$, where each data point consists of a feature vector $\mathbf{x}_i \in \mathbb{R}^d$ and a corresponding target value $y_i \in \mathbb{R}$. Our goal is to find the parameters $\mathbf{w}$ and $w_0$ that best fit the data — in other words, that produce predictions $\hat{y}_i$ that are as close as possible to the actual observed values $y_i$.
To make this concrete, here’s a snapshot of a real dataset used for regression: the California Housing dataset. Each row represents one example (a house), and each column is a numerical feature such as the median income of the area, average number of rooms, etc. The target value is the median house value.

In our notation, each row corresponds to a feature vector $\mathbf{x}_i$, and the final column (the target) corresponds to $y_i$. If the dataset has $N$ rows and $d$ features, then:
- $X \in \mathbb{R}^{N \times d}$ is the matrix of feature vectors
- $\mathbf{y} \in \mathbb{R}^{N}$ is the vector of target values
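If you'd like to follow along in code, a short sketch like the one below (assuming scikit-learn is installed; the variable names are my own) loads the California Housing data and confirms the shapes of $X$ and $\mathbf{y}$:

```python
from sklearn.datasets import fetch_california_housing

# X: one row per house, one column per feature (median income, house age, rooms, ...)
# y: the target -- the median house value for each row
X, y = fetch_california_housing(return_X_y=True)

N, d = X.shape
print(N, d)        # 20640 examples, 8 features
print(y.shape)     # (20640,)
```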
To measure how good our predictions are, we define the squared error loss for each data point:
$$\ell_i(\mathbf{w}, w_0) = \bigl(y_i - (\mathbf{w}^\top \mathbf{x}_i + w_0)\bigr)^2$$
In machine learning, such loss functions are typically called objective functions — mathematical expressions we aim to minimize (or maximize) using optimization techniques like gradient descent or closed-form solvers. In this case, we use the squared error because it's:
- Differentiable, which makes it easy to optimize using calculus-based methods,
- Convex, so it guarantees a unique global minimum
Here's a visual intuition for what the squared error captures: Each blue point represents a house in our dataset, plotted against a single feature (average number of rooms). The red line is the regression model’s prediction. The dashed vertical lines show the errors — the distance between the true value and the predicted value for each data point.

The squared error loss function penalizes these vertical distances — especially large ones — and finds the line that minimizes the total penalty over the dataset.
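A plot like the one described above can be produced with a few lines of code. Here is a rough sketch (assuming NumPy and matplotlib, using synthetic points rather than the actual dataset) that draws each residual as a dashed vertical segment:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
rooms = rng.uniform(3, 9, size=30)                    # single feature: average rooms
value = 0.5 * rooms + 0.3 + rng.normal(0, 0.4, 30)    # noisy target values

# Fit a 1-D least squares line (slope, intercept)
slope, intercept = np.polyfit(rooms, value, deg=1)
pred = slope * rooms + intercept

plt.scatter(rooms, value, color="blue", label="data")
order = np.argsort(rooms)
plt.plot(rooms[order], pred[order], color="red", label="prediction")
for x_i, y_i, p_i in zip(rooms, value, pred):
    plt.plot([x_i, x_i], [y_i, p_i], "k--", linewidth=0.8)   # the error for each point
plt.xlabel("average number of rooms")
plt.ylabel("median house value")
plt.legend()
plt.show()
```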
The total error over the dataset is then the sum of these individual squared errors. This gives us the least squares objective function:
$$L(\mathbf{w}, w_0) = \sum_{i=1}^{N} \bigl(y_i - (\mathbf{w}^\top \mathbf{x}_i + w_0)\bigr)^2$$
The optimal weights and intercept are those that minimize this loss:
$$\mathbf{w}^*, w_0^* = \arg\min_{\mathbf{w},\, w_0} \; L(\mathbf{w}, w_0)$$
This optimization problem defines the classical formulation of linear regression. It’s purely geometric and algebraic: we are finding the linear function that best fits the data by minimizing squared deviation.
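In code, this objective is just a sum of squared residuals. A minimal helper (my own naming, assuming NumPy) might look like this:

```python
import numpy as np

def squared_error_loss(w, w0, X, y):
    """Total squared error of the linear model X @ w + w0 over the dataset."""
    residuals = y - (X @ w + w0)
    return np.sum(residuals ** 2)
```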
To derive a closed-form solution, we express the entire dataset in matrix form. Let $X \in \mathbb{R}^{N \times d}$ be the matrix of feature vectors, where each row $\mathbf{x}_i^\top$ is a data point, and let $\mathbf{y} \in \mathbb{R}^{N}$ be the vector of target values.
To incorporate the intercept $w_0$, we augment $X$ with a column of ones, forming $\tilde{X}$:
$$\tilde{X} = \begin{bmatrix} 1 & \mathbf{x}_1^\top \\ \vdots & \vdots \\ 1 & \mathbf{x}_N^\top \end{bmatrix} \in \mathbb{R}^{N \times (d+1)}$$
Similarly, we define the augmented parameter vector:
$$\tilde{\mathbf{w}} = (w_0, w_1, \ldots, w_d)^\top \in \mathbb{R}^{d+1}$$
Now the prediction for all data points becomes simply:
$$\hat{\mathbf{y}} = \tilde{X}\tilde{\mathbf{w}}$$
And the least squares loss can be written compactly as:
$$L(\tilde{\mathbf{w}}) = \lVert \mathbf{y} - \tilde{X}\tilde{\mathbf{w}} \rVert^2 = (\mathbf{y} - \tilde{X}\tilde{\mathbf{w}})^\top (\mathbf{y} - \tilde{X}\tilde{\mathbf{w}})$$
This is a standard quadratic minimization problem. Expanding the squared loss term by term using standard matrix identities gives:
$$L(\tilde{\mathbf{w}}) = \mathbf{y}^\top \mathbf{y} - \mathbf{y}^\top \tilde{X}\tilde{\mathbf{w}} - \tilde{\mathbf{w}}^\top \tilde{X}^\top \mathbf{y} + \tilde{\mathbf{w}}^\top \tilde{X}^\top \tilde{X}\,\tilde{\mathbf{w}}$$
Before we compute the gradient, let's confirm that $\mathbf{y}^\top \tilde{X}\tilde{\mathbf{w}}$ is indeed a scalar (a single number), which justifies the symmetry trick used later in the derivation.
Recall the matrix dimensions:
- $\tilde{X} \in \mathbb{R}^{N \times (d+1)}$ — data matrix with bias column added
- $\tilde{\mathbf{w}} \in \mathbb{R}^{(d+1) \times 1}$ — parameter vector
- $\mathbf{y} \in \mathbb{R}^{N \times 1}$ — target vector
Then we have:
$$\underbrace{\mathbf{y}^\top}_{1 \times N}\;\underbrace{\tilde{X}}_{N \times (d+1)}\;\underbrace{\tilde{\mathbf{w}}}_{(d+1) \times 1} \;\in\; \mathbb{R}^{1 \times 1}$$
Here's the reasoning:
- $\tilde{X}\tilde{\mathbf{w}} \in \mathbb{R}^{N \times 1}$ — the predicted values
- $\mathbf{y}^\top (\tilde{X}\tilde{\mathbf{w}}) \in \mathbb{R}^{1 \times 1}$ — a dot product of two $N$-dimensional vectors
Therefore, the whole expression collapses to a single real number. Since it's a scalar, we can freely take its transpose:
$$\mathbf{y}^\top \tilde{X}\tilde{\mathbf{w}} = \bigl(\mathbf{y}^\top \tilde{X}\tilde{\mathbf{w}}\bigr)^\top = \tilde{\mathbf{w}}^\top \tilde{X}^\top \mathbf{y}$$
This justifies the simplification used in the gradient derivation.
We now take the gradient of $L(\tilde{\mathbf{w}})$ with respect to $\tilde{\mathbf{w}}$. Using standard vector calculus identities (listed below), we obtain:
$$\nabla_{\tilde{\mathbf{w}}} L(\tilde{\mathbf{w}}) = -2\,\tilde{X}^\top \mathbf{y} + 2\,\tilde{X}^\top \tilde{X}\,\tilde{\mathbf{w}}$$
Setting the gradient to zero gives the first-order condition:
$$-2\,\tilde{X}^\top \mathbf{y} + 2\,\tilde{X}^\top \tilde{X}\,\tilde{\mathbf{w}} = \mathbf{0}$$
Cancelling the constant factor of 2 and rearranging, we get the normal equation:
$$\tilde{X}^\top \tilde{X}\,\tilde{\mathbf{w}} = \tilde{X}^\top \mathbf{y}$$
And assuming $\tilde{X}^\top \tilde{X}$ is invertible, the unique solution is:
$$\tilde{\mathbf{w}}^* = \bigl(\tilde{X}^\top \tilde{X}\bigr)^{-1} \tilde{X}^\top \mathbf{y}$$
This is the closed-form solution to the least squares problem, derived directly from calculus and matrix algebra.
The following identities were used in the derivation:
- $\nabla_{\mathbf{w}}\,(\mathbf{a}^\top \mathbf{w}) = \mathbf{a}$ (if $\mathbf{a}$ is constant)
- $\nabla_{\mathbf{w}}\,(\mathbf{w}^\top A\,\mathbf{w}) = (A + A^\top)\,\mathbf{w}$ (or $2A\mathbf{w}$ if $A$ is symmetric)
These results are foundational in matrix calculus and commonly appear in optimization for machine learning.
This solution gives us the optimal weights and intercept that minimize the total squared error across the training data.
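Translating this into code is straightforward once the design matrix is augmented. The sketch below (my own code, assuming NumPy and scikit-learn) solves the normal equations with `np.linalg.solve` (avoiding an explicit inverse) and compares the result against scikit-learn's `LinearRegression`:

```python
import numpy as np
from sklearn.datasets import fetch_california_housing
from sklearn.linear_model import LinearRegression

X, y = fetch_california_housing(return_X_y=True)

# Augment X with a column of ones so the intercept w0 becomes part of w-tilde
X_tilde = np.hstack([np.ones((X.shape[0], 1)), X])

# Normal equations: (X~^T X~) w~ = X~^T y
w_tilde = np.linalg.solve(X_tilde.T @ X_tilde, X_tilde.T @ y)

# Cross-check against scikit-learn's least squares fit
ref = LinearRegression().fit(X, y)
print(abs(w_tilde[0] - ref.intercept_))        # intercept difference: should be very small
print(np.abs(w_tilde[1:] - ref.coef_).max())   # weight differences: should be very small
```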
But there's also a powerful geometric interpretation here. If we look closely, we can see that linear regression is performing an orthogonal projection of the target vector $\mathbf{y}$ onto the column space of the design matrix $\tilde{X}$.
In other words, the predicted values $\hat{\mathbf{y}} = \tilde{X}\tilde{\mathbf{w}}^*$ live in the subspace spanned by the feature vectors (the columns of $\tilde{X}$). Among all possible vectors in this subspace, $\hat{\mathbf{y}}$ is the one closest to the true output $\mathbf{y}$ in Euclidean distance.
This makes $\hat{\mathbf{y}}$ the best linear approximation of $\mathbf{y}$ using the available features. The residual vector $\mathbf{y} - \hat{\mathbf{y}}$ is orthogonal to the subspace spanned by the feature vectors.
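This orthogonality can be checked numerically. Continuing the previous snippet (reusing `X_tilde`, `w_tilde`, and `y`), each column of the design matrix should be perpendicular to the residual up to floating-point error:

```python
# Residuals of the fitted model (reuses X_tilde, w_tilde, y from the snippet above)
residual = y - X_tilde @ w_tilde

# X~^T (y - X~ w~) should vanish: each entry is negligible
# relative to the corresponding entry of X~^T y.
orthogonality = X_tilde.T @ residual
print(np.abs(orthogonality) / np.abs(X_tilde.T @ y))
```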
The Probabilistic Interpretation: Linear Regression as Maximum Likelihood
In the previous section, we viewed linear regression purely as a geometric problem: we projected the target vector $\mathbf{y}$ onto the column space of the design matrix $\tilde{X}$. But notice what was missing: we made no assumptions about uncertainty or randomness. The model simply sought the best deterministic fit.
In this section, we take a different approach. Instead of treating $y$ as fixed, we now treat it as a random variable. Specifically, we assume that the outputs are generated from a linear model plus some random noise:
$$y_i = \mathbf{w}^\top \mathbf{x}_i + w_0 + \varepsilon_i$$
We model the noise as Gaussian:
$$\varepsilon_i \sim \mathcal{N}(0, \sigma^2)$$
But why assume the noise is Gaussian in the first place? This assumption may seem arbitrary, but it actually arises from a deep statistical principle: maximum entropy.
Among all probability distributions with a given mean and variance, the Gaussian distribution is the one with the highest entropy — meaning it makes the fewest assumptions beyond those two constraints. In other words, it's the most "uninformative" distribution consistent with knowing only the first two moments.
This aligns with a core generalization principle in machine learning: when we model uncertainty, we should choose distributions that inject as little bias as possible beyond what the data dictates. The Gaussian is a natural choice in this regard, especially within the exponential family of distributions, which forms the foundation of many standard ML models.
The Gaussian noise assumption implies that for each data point, the target value $y_i$ is normally distributed around the linear prediction with variance $\sigma^2$:
$$p(y_i \mid \mathbf{x}_i; \mathbf{w}, w_0) = \mathcal{N}\!\left(y_i \mid \mathbf{w}^\top \mathbf{x}_i + w_0,\; \sigma^2\right)$$
The goal of maximum likelihood is to choose parameters $\mathbf{w}$ and $w_0$ that maximize the probability of observing the entire dataset. Assuming the data points are independent, we can write the likelihood as:
$$\mathcal{L}(\mathbf{w}, w_0) = \prod_{i=1}^{N} p(y_i \mid \mathbf{x}_i; \mathbf{w}, w_0)$$
Plugging in the Gaussian formula for each term:
$$\mathcal{L}(\mathbf{w}, w_0) = \prod_{i=1}^{N} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left(-\frac{\bigl(y_i - \mathbf{w}^\top \mathbf{x}_i - w_0\bigr)^2}{2\sigma^2}\right)$$
Using the product rule for exponentials, we simplify the product:
$$\mathcal{L}(\mathbf{w}, w_0) = \left(\frac{1}{\sqrt{2\pi\sigma^2}}\right)^{\!N} \exp\!\left(-\frac{1}{2\sigma^2}\sum_{i=1}^{N}\bigl(y_i - \mathbf{w}^\top \mathbf{x}_i - w_0\bigr)^2\right)$$
To make optimization easier, we take the logarithm of the likelihood function — known as the log-likelihood. Taking the negative of this expression transforms the maximization problem into a minimization problem:
$$-\log \mathcal{L}(\mathbf{w}, w_0) = \frac{N}{2}\log\bigl(2\pi\sigma^2\bigr) + \frac{1}{2\sigma^2}\sum_{i=1}^{N}\bigl(y_i - \mathbf{w}^\top \mathbf{x}_i - w_0\bigr)^2$$
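Written as code, the negative log-likelihood makes the structure obvious: a constant term plus the squared error scaled by $1/(2\sigma^2)$. A small sketch (my own helper, assuming NumPy):

```python
import numpy as np

def negative_log_likelihood(w, w0, sigma2, X, y):
    """Negative log-likelihood of y_i ~ N(w . x_i + w0, sigma2) over all data points."""
    n = len(y)
    squared_error = np.sum((y - (X @ w + w0)) ** 2)
    constant = 0.5 * n * np.log(2 * np.pi * sigma2)   # independent of w and w0
    return constant + squared_error / (2 * sigma2)
```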
Notice that the first term is constant with respect to $\mathbf{w}$ and $w_0$, so it has no effect on the optimization.
To make this explicit, let's compute the partial derivatives with respect to the parameters:
$$\nabla_{\mathbf{w}}\bigl(-\log \mathcal{L}\bigr) = -\frac{1}{\sigma^2}\sum_{i=1}^{N}\bigl(y_i - \mathbf{w}^\top \mathbf{x}_i - w_0\bigr)\,\mathbf{x}_i, \qquad \frac{\partial\bigl(-\log \mathcal{L}\bigr)}{\partial w_0} = -\frac{1}{\sigma^2}\sum_{i=1}^{N}\bigl(y_i - \mathbf{w}^\top \mathbf{x}_i - w_0\bigr)$$
This shows that the constant term vanishes under differentiation, and what remains is proportional to the gradient of the squared loss function.
Therefore, maximizing the log-likelihood is equivalent to minimizing the negative log-likelihood, which (dropping the constants that do not depend on the parameters) reduces to:
$$\sum_{i=1}^{N}\bigl(y_i - \mathbf{w}^\top \mathbf{x}_i - w_0\bigr)^2$$
This is exactly the same loss we derived from a geometric perspective — the squared error — now obtained from a probabilistic foundation.
In other words, least squares estimation is equivalent to maximum likelihood estimation under a Gaussian noise model. What was previously a purely geometric objective now emerges from a probabilistic framework.
This new perspective allows us to reason probabilistically about our model's uncertainty and to make confidence estimates about its predictions.
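As a final sanity check of the equivalence, one can minimize the negative log-likelihood with a generic optimizer and compare the result to the closed-form least squares solution. A sketch under the same assumptions as before (NumPy, SciPy, and synthetic data with made-up parameters):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.5, -2.0, 0.7]) + 0.3 + rng.normal(scale=0.5, size=200)

# Closed-form least squares on the augmented design matrix
X_tilde = np.hstack([np.ones((len(y), 1)), X])
w_ls = np.linalg.solve(X_tilde.T @ X_tilde, X_tilde.T @ y)

# Maximum likelihood: numerically minimize the negative log-likelihood
# (sigma^2 is held fixed; its value does not change the minimizer)
def nll(params, sigma2=0.25):
    pred = X_tilde @ params
    return 0.5 * len(y) * np.log(2 * np.pi * sigma2) + np.sum((y - pred) ** 2) / (2 * sigma2)

w_mle = minimize(nll, x0=np.zeros(X_tilde.shape[1])).x

print(np.abs(w_ls - w_mle).max())   # the two estimates agree up to the optimizer's tolerance
```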
Conclusion
We’ve now seen linear regression from two fundamental perspectives:
- The geometric view: linear regression projects the target $\mathbf{y}$ onto the column space of $\tilde{X}$, finding the best linear approximation in Euclidean space.
- The probabilistic view: assuming Gaussian noise and applying maximum likelihood leads to the same squared error loss — but this time, derived from a principled statistical model.
This duality is foundational to how we design and understand learning algorithms. Whether we interpret linear regression as a projection or as a likelihood optimizer, we’re using the same underlying mathematics — but with different implications and extensions.
📌 This post was inspired by a lecture and exam question from the Machine Learning (CS376) course at KAIST, taught by Professor Noseong Park.