Tuesday, February 08, 2005
We learn from this publication of Nellie Mae (the Fannie Mae of student loans) that someone with a four-year degree averages $2.1 million in "lifetime earnings" -- by which we'll mean a forty-year working life -- while someone with only a high-school diploma has expected earnings of $1.2 million. This publication is of course trying to help you finance college, and to get you to take a loan. You pay interest on that. The question is, what should you pay for an extra $900,000 of lifetime earnings?
The $900,000 begins, of course, after you leave college and find a job. That takes a few years, during which you have to live like, well, a college student. You could instead be out working a job of manual labor earning a higher income -- you could be modeling, or cooking, or ... well, you get the idea. Kids who go to work full time out of high school do have a period where they out-earn those who go to college. Of course, after they leave college, the graduates pretty quickly move on to higher wages than their friends who did not go.
To show that, consider this graph for college student Prudence and high-school graduate/college refusenik Rip (as in let 'er...)
Rip's earnings in the early years are higher, so that up to the point where the two paths cross he out-earns Prudence by the cumulative area A. But Prudence then goes on to earn B more than Rip from that point to "working age 40", at which point Prudence and Rip retire. Nellie Mae says, I believe, that Area B - Area A = $900,000. But not so fast! Area A occurs early in one's life, area B much later. At age 18, it's quite possible for someone to decide to be Rip, if he has a high preference for income now over income later. Only those who believe in saving for the future or are prudent (or have a low rate of time preference, in economists' jargon) end up going to college. (More technical analysis using rates of time preference and the "life cycle hypothesis" of consumption found here.)
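To see how the rate of time preference can rationalize Rip's choice, here's a toy calculation. All the earnings profiles below are made up for illustration (flat wages, a 40-year working life starting at 18); only the undiscounted gap of roughly $900,000 echoes the Nellie Mae figure.

```python
# A toy sketch of the Rip-vs-Prudence choice. All figures are hypothetical;
# the profiles are chosen so the undiscounted gap is about $900,000.

def present_value(cashflows, rate):
    """Discount a stream of annual cashflows back to age 18 (t = 0)."""
    return sum(c / (1 + rate) ** t for t, c in enumerate(cashflows))

years = 40  # working life from age 18

# Rip works immediately; Prudence earns nothing during four years of
# college, then out-earns him for the remaining 36 years.
rip = [30_000] * years                       # 40 x $30k  = $1.2M lifetime
prudence = [0] * 4 + [58_333] * (years - 4)  # 36 x $58.3k ~ $2.1M lifetime

for rate in (0.00, 0.05, 0.20):
    gap = present_value(prudence, rate) - present_value(rip, rate)
    print(f"rate {rate:.0%}: PV(Prudence) - PV(Rip) = ${gap:,.0f}")
```

At a zero discount rate the gap is the full $900,000; at a moderate rate it shrinks but stays positive; at a sufficiently high rate of time preference the sign flips and being Rip is the rational choice.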
Higher rates of student borrowing could simply indicate that B has grown relative to A, so that the returns to education justify Prudence actually borrowing more. I know, it sounds weird for me to have Prudence borrow, but if she's done her math correctly she will choose to borrow as long as B - A is greater than the present value of all the loan payments and current payments she has to make for her college education. Since we have some evidence that the returns to education are increasing, there's no reason why we shouldn't see higher rates of student borrowing.
One problem with this model is determining the correct rate at which to discount B relative to A. And what is true for an individual may not be true for society as a whole. (See, for example, this paper by Tyler Cowen, which argues for a zero rate of discount for public policy. Steve, it has a model, and it's written pretty well.)