On degrees
Lately there’s been a recurring discussion on various social-media outlets about the relevance of academic degrees to a career in programming. Specifically: is a degree in computer science (or some other field perceived as related) a necessity for a career as a programmer? If not, is it still a good idea to have one?
I’ve jumped into a few such threads on Twitter, but I have complex thoughts on the topic, and 140-character or (now) 280-character chunks of commentary don’t really work for that. Luckily, I have a blog and it obeys my rules, so here I can use as many characters as I like.
Cards on the table
First off: I have never taken a university course in computer science or software engineering or any of the other departments or fields often treated as “appropriate” degree programs for a career in programming. In the rest of this article I’ll use “CS degree” as a shorthand for “a degree in a field considered relevant to programming, regardless of whether any particular person or school happens to use the name ‘computer science’ for that field”, largely because it’s much easier to write.
I hold a degree in philosophy, and finished several credits short of a minor in math, and none of my philosophy or math work was on any topic most people would accept as being related or useful to programming. My thesis, for example, was on the post-Darwin history and development of the argument from design (yes, people have continued to work on it since evolution became broadly accepted as fact, and some of the people who’ve worked on it might surprise you!).
I first wrote code in exchange for money when I was in college, but didn’t go right into it after graduation; I became a full-time programmer a bit later, and programming has been my main job and source of income since late 2004.
By the standards most people would probably care about, I’ve had a successful career. I’ve worked at multiple places large and small, including a household-name company and a startup; I’ve built or worked on some well-known web properties; I’ve given a bunch of conference talks; and I’ve written a book (two editions). I’ve had a commit bit on Django for around ten years, and served on its technical board and security team. I’ve written or contributed to quite a few popular open-source packages.
So if someone forced me to give a one-word answer to “do you think a CS degree is required for a successful career as a programmer”, my one-word answer would have to be no — after all, I don’t have a CS degree, and I’ve had a successful career.
But one word is not enough, because when people ask about whether a CS degree is required they’re talking about a collection of problems, not all of which are immediately obvious.
A problem of values
I live in the United States, and I can tell you that in the US people spend a lot of time worrying about whether other people deserve the things they have. At a cultural level we seem to want very badly to believe we’re a meritocracy, and thus that people who succeed deserved success while people who didn’t deserved failure. You see this happen a lot in discussion of government programs that offer assistance to people who are poor or sick or disabled: there’s a lot of rhetoric about laziness and unwillingness to work and about how people on welfare (which, for the record, no longer exists in the form people usually think of) get to have unthinkable luxuries like basic appliances in their homes.
But it also percolates upward through the economic classes. As a society, we seem to be all right with people making a lot of money, so long as we think they’ve done something that proves they earned or deserved it. For example, many people seem to think of doctors and lawyers as big money-makers (which they aren’t always), and both medicine and law are difficult and expensive to get into: you have to go to school, and then go on to a post-college program where you get additional specialized training, and you have to get certified or accredited by a professional body.
At the super high end, in sectors like finance, there’s a similar willingness to accept that some people “deserve” to make absolute metric tons of money while other people don’t, though in that world the credentials are different; there, it’s much more about coming from the “right” kind of family and socializing with the “right” kind of people than it is about having the right kind of academic or professional certifications. And a lot of people in the US do seem to accept that being the son of a billionaire is sufficient qualification for certain jobs and the money and prestige that comes with them.
And I think that, regardless of where someone originally came from, everyone who spends enough time in America ends up being exposed to, and to some degree absorbing (consciously or not), this aspect of the culture: the belief that only people who “deserve” to make a lot of money should be allowed to; that whether you “deserve” what you have is a decision to be made by others; that they expect you to jump through hoops to prove it to them, on an ongoing basis; and that they reserve the right to deny or take away things — a job, money, basic necessities like food and shelter — if they don’t think you’re consistently proving yourself worthy of them.
Which brings us to programming. Programmers… well, we make a lot of money. My initial salary at my first full-time programming job was more than my parents ever made in any year of their working lives, and that was a pittance compared to what I make now. And by Bay Area standards I’m probably still “underpaid” for my experience level.
And I think there are a lot of people, including a lot of people in the industry, who are uncomfortable with the idea that you can get that far and make that much without having obtained some magical credentials or otherwise proved you “deserve” it. A lot of the belief that a degree is somehow necessary or important to a career as a programmer is, I believe, rooted in this nasty wannabe-meritocracy aspect of our culture; we seem to be saying “sure, we know you don’t strictly need to, but we still aren’t OK with you making that much until you do the rituals to prove you deserve it”.
A problem of (lack of) difficulty
A related issue is that most professional programming is, well, easy. More than once I’ve heard current and former Googlers joke that the company hires the most brilliant people it can find and puts them to work building basic CRUD apps. And while that’s obviously a bit of an exaggeration, it does seem that at places like Google (or Facebook, or insert-your-favorite-BigTechCo-here), only a relatively small percentage of programmers are working on truly difficult technical problems. The rest are doing much more mundane work, mostly consisting of plugging already-developed components together in standard ways.
And the moment you step outside of the BigTech realm, this becomes even more pronounced. There are entire companies out there that could get by without ever needing much more than the skills taught in a typical coding bootcamp (which can also be picked up, self-directed, by any sufficiently motivated person), because the vast majority of actual real-world programming is just executing a few basic known patterns with the assistance of stock tools like libraries or frameworks.
When I worked at Mozilla, most of what I did was straightforward application of Django or other stock tools, and the hardest technical bits were things like coordinating and testing all the dependencies during a platform upgrade. And at my current job, most of the code I write implements ways to put things in databases and get them back out later, using query patterns so standardized that Django provides them as generic building blocks.
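To make “generic building blocks” concrete, here’s a minimal sketch (not code from any real project; the Note model, its fields, and the URL names are invented for illustration) of how Django’s class-based generic views cover the standard put-it-in-the-database-and-get-it-back-out patterns with almost no custom code:

```python
# Hypothetical example: a tiny "notes" feature built entirely from
# Django's stock generic views. In a real project these pieces would
# live in models.py, views.py and urls.py respectively.
from django.db import models
from django.urls import path, reverse_lazy
from django.views import generic


class Note(models.Model):
    title = models.CharField(max_length=200)
    body = models.TextField(blank=True)
    created = models.DateTimeField(auto_now_add=True)


class NoteList(generic.ListView):
    # Renders a list of all Note objects; no custom query code.
    model = Note


class NoteCreate(generic.CreateView):
    # Renders and validates a form for Note, then saves it.
    model = Note
    fields = ["title", "body"]
    success_url = reverse_lazy("note-list")


urlpatterns = [
    path("notes/", NoteList.as_view(), name="note-list"),
    path("notes/new/", NoteCreate.as_view(), name="note-create"),
]
```

That really is most of the day-to-day work; the hard parts end up being templates, deployment and coordination, not the data-access code itself.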
And this does not seem to be at all unique to web development; many other fields of programming similarly have a few stock patterns of things that they build, and mature tools and libraries and frameworks for doing it. I’ve said publicly several times that it’s a bad sign, for most programmers, if a lot of your daily work conversations are about technology. I’d also say it’s a bad sign if you spend most of your time in an editor or IDE writing code. The technology side of what we do should not be the hard part, and there’s no reason, in this era, for it to be the hard part.
I know someone, probably on Hacker News, will jump in to disagree with me on this, but: even for complex things at large scale, the patterns and solutions are getting to be pretty well known and documented. If you’re inventing new technology and you’re not literally Google, Facebook or a couple of other companies (and not on a handful of very specific teams inside those companies), I am willing to go on record saying that you are probably doing something wrong.
But we don’t like admitting this. We especially don’t like admitting it in light of how much we get paid. If what we do really is that easy, how do we justify our six-figure salaries and piles of perks?
So we tell ourselves stories about how the company we work for might someday need to suddenly scale to twice the size of Google, the day after an electromagnetic pulse wipes every copy of the standard library of our language of choice from every hard drive on Earth, and so of course we’d need to implement all those tricky fundamental algorithms and data structures from scratch. On a whiteboard, since all the hard drives got wiped.
And then we relax, feeling our existence and salaries suddenly safely justified again, and go back to our standup meeting and start labeling potential features as “You Aren’t Gonna Need It™”.
A problem of gatekeeping
There are two things that, I think, contributed more to my success as a programmer than anything else:
- I got into it at the right time, and
- I looked right.
For point 1, I started learning to code in the late 90s, and happened to be doing it for web stuff. That was the heyday of the first startup/dot-com boom, and even though I didn’t start working full-time as a programmer until after the crash, it was a time when demand was unbelievably high and there was no way universities could churn out enough fresh graduates to keep up. And, anecdotally, I think there was much less of an attitude at the time that academic credentials were even necessary; tech and startup culture today are pretty ossified, fixed into certain patterns that venture capital believes will yield reproducible returns. Back then, those patterns hadn’t been frozen yet, and people tried anything and everything to see what worked.
And for point 2: I was a white guy who passed for middle-class. My actual personal background doesn’t quite match that, but it was good enough, and that absolutely opened a lot of doors for me that would have stayed shut, or been slammed shut, if I’d been a woman, or the “wrong” skin color, or obviously from the “wrong” economic background. I got to just participate in a lot of communities and move in a lot of circles and be taken seriously without being questioned about whether I was some “real” participant’s wife/girlfriend, or the “token” or “affirmative action” female/black/whatever participant. And if you think those things didn’t happen to people back then, or don’t still happen today, you’ve got another think coming.
Unfortunately, things have gotten worse since then. Despite a lot of success stories from the early days involving people who didn’t have the credentials that are trendy today, we as an industry seem to have become much less welcoming to anyone who doesn’t fit our stereotype of what a programmer “should” look like. This is particularly acute in the Bay Area tech scene, where the most lucrative and career-advancing opportunities (whether companies, or funding, or other parts of the tech-industry engine) are focused with laser intensity on recruiting a specific sort of person from a very specific and short list of universities and backgrounds.
I think there is also a tendency for people to mistakenly assume that the way they learned to do something is the only way anyone can learn to do that thing. They’ll argue that CS teaches you about things like abstractions and reasoning that are necessary for a programmer, while not realizing plenty of other backgrounds will provide that too.
This is not an evidence-based approach, of course; as far as I can tell from having known and interviewed a lot of people from a lot of backgrounds, Stanford (to pick an example) doesn’t turn out noticeably better programmers than any other university or any other entry path into the industry. The average Stanford CS graduate is, well, average. Stanford essentially got lucky with a couple of historical accidents and has exploited them for all they were worth: people came to Stanford to study because they felt they’d be taken more seriously, companies came to Stanford to recruit because they hoped lightning might strike the same place again, and thus a feedback loop was born. Somebody in Des Moines or Warsaw or Nairobi might have created a company ten times bigger and more important than Google, but because they weren’t from the right kind of place or background to end up at Stanford or a similar top-n university and get sucked into the standardized tech development-and-exploitation pipeline, we’ll never know.
And, yes, as an industry tech still is incredibly biased on axes of race, gender and socioeconomic background. People like to deny it, or claim that it’s someone else’s problem (usually the universities), but denying it doesn’t make it untrue. And when a handful of universities are used as the primary recruiting grounds, blaming the universities is pointless; if someone believed the universities were doing a bad job getting good people into “the pipeline”, they could always point their pipeline’s intakes somewhere more fruitful. If they don’t, they’re accepting and endorsing whatever the university chooses to do.
A problem of demarcation
And, of course, the typical interviewing and hiring processes at tech companies are mostly badly broken. I’ve written at length about this in a variety of places, and spoken about it, and gone on and on about it to anyone who will listen to me, so I’ll try to keep this from turning into yet another interviewing rant. But: interviewing and hiring processes are broken.
In theory, we develop processes to separate the people who are competent to work as programmers from the people who aren’t. In reality, we cargo-cult practices we think larger, more successful companies are using, and refuse to reflect on whether they really do use those practices, or whether they really are valid, or whether there are critical flaws that would affect us but not them (see, for example, Google’s famously high false-negative rate — with the number of applicants they get, they can afford to reject a large number of qualified people, but your company probably can’t). We test for skills that are unrelated to what we expect people to do on the job. We substitute shibboleths and hazing rituals for a probing of someone’s relevant knowledge and ability. And we pat ourselves on the back and declare what a good job we’re doing at keeping the impostors out, all while lamenting that we just can’t find enough qualified people.
In interviewing, a degree — or questions and exercises intended to test for the knowledge we think someone should gain from a CS degree — is treated, for some reason, as a qualification. Yet when Jeff Atwood popularized the FizzBuzz question, he did so as a way to determine which degree-holders could code and which couldn’t, after complaining that far too many of them couldn’t. Now we’ve largely forgotten the original reason for FizzBuzz, and talk about how we need to test for “CS fundamentals” in order to be sure someone has the right knowledge to work as a programmer.
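For anyone who hasn’t seen it: FizzBuzz asks you to print the numbers 1 through 100, substituting “Fizz” for multiples of 3, “Buzz” for multiples of 5, and “FizzBuzz” for multiples of both. One typical solution, in Python, looks something like this:

```python
# FizzBuzz in its entirety: print 1..100, replacing multiples of 3
# with "Fizz", multiples of 5 with "Buzz", and multiples of both
# with "FizzBuzz".
for n in range(1, 101):
    if n % 15 == 0:
        print("FizzBuzz")
    elif n % 3 == 0:
        print("Fizz")
    elif n % 5 == 0:
        print("Buzz")
    else:
        print(n)
```

Atwood’s complaint was that a surprising number of candidates with degrees couldn’t produce even that.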
Worse, the typical interview process is designed in such a way that — deliberately or not — it favors a recent graduate who may or may not be able to usefully work as a programmer, simply because the process involves rote performance on the types of problems that come up in CS exams. Someone who’s actually been a working programmer for a while will have long since paged out that knowledge and gotten out of practice at snap regurgitation, because they’ve been spending their time actually getting work done. So they’ll do worse on the typical interview, which is why there’s a whole cottage industry of books and training systems to get people back into the “interview coding” mindset. The fact that it’s treated as a separate skill from actual professional programming should be a gigantic red flag, but for some reason most people don’t seem to notice that.
What I know about programming
I have actually picked up a modicum of theoretical CS knowledge over the years, largely by reading about things that caught my interest, or by helping friends and acquaintances study. I rarely, if ever, use any of it, except when I need to interview somewhere and make it look like I have “fundamentals”.
In terms of technical knowledge that’s actually relevant: I know some patterns. I know some libraries and frameworks. I know some tools and some languages. I know how to learn new ones. Which ones I know and use and come back to has evolved over time. None of them are, as far as I can tell, routinely taught in even “top” CS schools. Pattern recognition has been a pretty important skill, and the patterns generally aren’t the data structures and algorithms taught in CS or tested for in interviews. I know a little bit about a lot of things, and a lot about a few things (like Django, because of my involvement with it; or security or Unicode, because they’re good topics to be a curmudgeon about, and I am nothing if not curmudgeonly).
The most important things I learned in my first “real” tech job, at the Journal-World all those years ago, were all non-technical. And almost all of them were of the “what not to do” variety. The skills that have been the most consistently useful to me have been things like figuring out how to set up meetings and talk to people and get them to open up about what they need and want; how to catch unstated assumptions or hidden requirements; how to anticipate effects of changes elsewhere in large systems; and how to feed that back into a plan of what to do. And while I certainly can and do sling and talk about code when the time comes, I think it’s probably the least important skill I have, and the one that I spend the least time maintaining.
What I think about CS degrees
If you want to get one, I will not actively discourage you. At most, I’ll say that a CS degree often seems to be a way of playing it safe, and depending on your personal circumstances that may be exactly what you want or need to do. I will not deny that simply having the degree can ease your path into programming as a career, but it won’t be due to anything intrinsic to the degree or anything you’ll learn in the courses. It will be because we live in a society with the problems I’ve enumerated above, in which many other people will overvalue the degree for irrational or incorrect reasons.
I will say that I wish we didn’t live in a society like that.
I will also say that over the course of my career, many of the most interesting and brilliant people I’ve worked with did not have a CS degree. Django, for example, was originally developed at the Journal-World, by a team which had zero CS degrees. And I’ve found no precipitous decline in work quality among colleagues who were self-taught, or went to bootcamps, or took any of the myriad other paths that lead to working as a programmer. If anything, I think they’ve been slightly better on average, and there’s probably a selection effect going on there; the qualities that drive someone to decide on a mid-life change of career and follow through on it probably correlate well with qualities that lead to success in many fields, not just programming.
And if you encounter someone who keeps arguing that a CS degree is required to be a successful programmer, or that there’s something about the curriculum that gives people special qualifications or makes them somehow more deserving? I will say you can send them to me for some proper education.