in the number of mortgages originated in low-income areas, not by a relative increase in the
average size of new mortgages in those areas. This finding undermined Mian and Sufi’s claim
that looser income requirements on the part of lenders drove the housing boom. Mian and
Sufi (2017) responded by raising concerns about the way that Adelino and his co-authors used
the HMDA data, especially their treatment of second liens and their use of borrower-income
data from HMDA, which can be overstated. Our paper addresses these two concerns and
finds that in practice neither is strong enough to overturn Adelino, Schoar, and Severino’s
main claim: the average size of new mortgages did rise proportionately across the income
distribution during the housing boom. Along with other research highlighting the broad-
based nature of the boom, the proportionate increase in average mortgage sizes suggests that
the housing boom resulted from excessive optimism about future house price appreciation,
not from exogenous changes in underwriting requirements.¹
We also find, however, that the relationship between debt and income did flatten in the
1990s, a period that is recognized as one of intense technological change in mortgage lending
(LaCour-Little 2000; Bogdon 2000; Colton 2002). Below we show that computer technology
allowed mortgage lenders to process loans much more quickly at the end of the 1990s than at
the beginning of the decade. But technology did more than speed up mortgage processing: it
fundamentally transformed it by replacing human evaluation of credit risk with predictions
from data-driven models. Before 1990, loan officers and originators evaluated mortgage
applications by personally applying so-called knockout rules, which specified maximum cutoffs
for variables such as the LTV (loan-to-value) ratio and the DTI (debt-to-income) ratio.² This
type of rules-based system would seem to be tailor-made for replacement by computers, which
have transformed the US economy due to their ability to perform routine tasks efficiently
(Levy and Murnane 2004; Acemoglu and Autor 2011). In fact, during the early 1990s many
mortgage lenders tried to use computers in precisely this way, by encoding their lending rules
into formal algorithms that computers could follow. The resulting artificial intelligence (AI)
systems would then be expected to evaluate loan applications in the same way that humans
had, but at a lower cost.
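The knockout-rule procedure described above is simple enough to sketch directly; the following is an illustrative example only, with hypothetical cutoff values (an 80 percent LTV ceiling and a 28 percent DTI ceiling) that are not taken from any lender’s actual rule book:

```python
# Sketch of a 1990s-style "knockout rules" evaluator: each rule is a maximum
# cutoff on an underwriting variable, and violating any single rule rejects the
# application. The variable names and cutoff values are illustrative.

def knockout_evaluate(application, cutoffs):
    """Return (approved, reasons): reject if any variable exceeds its cutoff."""
    reasons = [name for name, cutoff in cutoffs.items()
               if application.get(name, float("inf")) > cutoff]
    return (len(reasons) == 0, reasons)

cutoffs = {"ltv": 0.80, "dti": 0.28}  # hypothetical knockout thresholds
app = {"ltv": 0.75, "dti": 0.31}      # DTI here = monthly payment / monthly income

approved, reasons = knockout_evaluate(app, cutoffs)
# approved == False; reasons == ["dti"]  (DTI of 0.31 exceeds the 0.28 cutoff)
```

The point of the sketch is that such a system has no model of credit risk: it is a fixed checklist, which is exactly why early-1990s lenders expected it to be straightforward to encode into software.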
The coders soon discovered that despite the rules-based nature of loan-evaluation pro-
¹ As we discuss in Section 4, HMDA covers only mortgage originations, not terminations. Consequently,
HMDA data cannot be used to study a credit expansion along the extensive margin; that is, an increase
in the number of persons approved for new mortgages. Related to this, HMDA’s coverage of only one flow
of debt (originations) means that the HMDA data cannot measure changes in stocks of mortgage debt. As
discussed below, Foote, Loewenstein, and Willen (2019) uses data from the Equifax credit bureau and the
Survey of Consumer Finances to show that—consistent with the proportionate increase in average mortgage
sizes highlighted here—there were no significant cross-sectional differences in debt accumulation along the
extensive margin. Stocks of debt rose proportionately across the income distribution as well.
² For mortgage lenders, the DTI ratio denotes the ratio of the borrower’s monthly payment to her monthly
income, and does not involve the borrower’s entire stock of debt. Although it is not quite accurate, this DTI
definition is so ingrained in the mortgage industry that we stick with it here.