I’ve recently realized why I’m bad at regularly publishing blog posts: it’s because I think I know very little. To publish something, you have to be acutely sure of the veracity of what you’re writing, which means either a) it’s something you have deep knowledge of and experience in, which you can speak on with authority, or b) it’s a personal anecdote, which you’re inherently sure about. Call it rationalism or a crippling need to self-question and avoid overconfidence bias—the result is that I don’t think I have a lot of answers.
What I do have are questions, and based on something I read last year from Paul Graham, I think the best way to work with them is to ask well-framed questions publicly. In How You Know, Graham quotes Constance Reid writing about the mathematician David Hilbert:
Hilbert had no patience with mathematical lectures which filled the students with facts but did not teach them how to frame a problem and solve it. He often used to tell them that “a perfect formulation of a problem is already half its solution.”
Likewise, I think a well-framed question is essential for exploring a problem. So let’s get to it. The question I have—which I’ll add detail to shortly—is:

What is the use of college?
I ask this question because I took a nontraditional path: I enrolled in college, stayed for my freshman year, had a small success with a product over winter break, and left college to start a startup and travel. Throughout those years, I learned on my own, reading books and papers, browsing endless Wikipedia articles, and talking to people with similar interests—taking the same self-learning path that many other nontraditional learners have.
Self-learning works for tech. But I was switching into the fields of behavioral and social science—does self-learning still work well for the sciences and for fields that are academia-driven rather than industry-driven? Does it make sense to give traditional education another shot to see if it’s the best place to learn about a field and gain credibility in it?
I returned to college to experiment with this question. The hypothesis I wanted to test is that college continues to be the ideal place to build foundations—knowledge in a specific field, a liberal arts background, a toolkit for critical thinking, and a social foundation of fascinating people you would otherwise never meet. Or is there a better place to do all this?
Let’s frame this a bit more specifically. Below is the cascade of questions that emerges when I think about “What is the use of college?”
What function does college serve for individuals? How does it benefit people?
A greater understanding of this might tell us what the strengths and weaknesses of college and other forms of learning are, potentially allowing us to see what works for each and find areas of improvement. Issues abound, however: there is not one “college” but many, so talking about “college” is necessarily a generalization.
In my own experience, the benefit of college for learning lies in its ability to create structure around your learning. That structure—lectures, coursework, exams, and other forms of assessment—pushes people to learn. Yet the downside, in my experience, is that this creates a quandary of motivation that Daniel Pink touches on: it relies on extrinsic motivation—a desire to hit the goalposts set up by someone else and to make the grade—which is far less engaging than intrinsic motivation, an internal desire to learn for the sake of learning. In traditional education, at least part of one’s motivation to learn stems from the need to perform well and pass a course.
And throughout the past year and a half of school, I’ve seen something similar: I generally do enjoy what I’m learning, but even with similar subjects, I’m much more engaged when self-learning. I can read about John Stuart Mill and utilitarianism in class and be pretty engaged, but I can’t put down the book I’m currently reading, Justice—and there is a significant difference in engagement and internal motivation. When I self-learn, I learn because I want to know. When I’m in a traditional learning environment, I learn because I want to know, absolutely, but there’s also the aspect of “I need to get this in my head so I can perform well on the upcoming test and essay.” And yes, that’s the motivational structure many colleges want to create, and it works, but the ramifications are not as tidy as simply ‘students learning more’. I’m willing to bet this has an impact on long-term retention as well—and at the least, it leads some (extremely smart) students to enter college seeing it as an intellectual haven, only to find themselves two years later optimizing their schedules for the “easiest” courses, not the most challenging or useful ones.
On the other hand, something college does that is difficult to replicate in a self-learning environment is expose you to things you wouldn’t seek out yourself. In a way, our desires about what to learn are backward-looking: we apply past notions of what we care about to future decisions about what to study. College forces you to look forward, exposing you to what Rumsfeld calls the “unknown unknowns” of knowledge—things you wouldn’t have sought out but that can be really important. I’ve heard of countless people who, as part of Columbia’s Core Curriculum, were forced to take Literature Humanities or an art history course and ended up loving it.
Yet the inflexibility of college curricula denies highly motivated students exactly what they want from college. The semester/term system reduces flexibility in choosing exactly when and what to study, and the prerequisite progressions that many colleges impose do the same. Personal experience: I’m interested in taking a game theory course, but the course requires microeconomics and macroeconomics, which in turn require economics, creating a one- to two-year lead time before I can take a specific course. Undoubtedly, that course requires previous knowledge, but without other offerings in game theory, that field of knowledge is closed off to everyone other than economics majors. This, combined with registration limits on classes, makes for a set of obstacles and constraints that are not present in self-directed learning.
There are also many issues with lecture-based learning. Many self-learners have used textbooks to teach themselves at a faster pace than college courses allow, and it might be that college adds a lot of overhead to learning that is cut out by a lean, efficient self-learning system incorporating modern techniques (e.g. spaced repetition).
Further exploration: Think about how college learning can be more engaging; think about ways to combine nontraditional education with traditional education; talk to pedagogicians (heh) and look at research on what the best learning environment is. Personally: enroll in some Coursera classes and do some self-learning and see if it’s more effective or enjoyable.
Is college the best place for a liberal arts education and the life of the mind?

Theoretically, it should be the best place to build a foundation of knowledge in the liberal arts. There seem to be few places outside of academia where you can explore the Great Books, learn about critical theory, and spend months thinking about philosophy, especially in an environment with support, a motivational structure, and discussions with other students going through the same thing. While Coursera and peer-teaching programs like those at Brooklyn Brainery aim at the same idea, it’s hard for them to replace the college learning experience in the liberal arts, especially when you factor in that college dedicates large blocks of time to these efforts while the alternatives happen on nights and weekends.
And regarding the life of the mind, college again should theoretically be the best place for this. However, William Deresiewicz notes in his book Excellent Sheep that college is becoming more and more vocational, teaching the practical skills needed for finance and consulting careers rather than providing a liberal arts education. And it’s almost inherently this way: the American university system was set up as a combination of the English college and the German research institute, arguably fully invested in neither (via Alex Miles Younger).
Further exploration: Read more about the importance of liberal education. Is it really necessary? What is its role, and how can it benefit our lives and society? Fareed Zakaria (In Defense of a Liberal Education) and Michael S. Roth (Beyond the University: Why Liberal Education Matters) have books on this subject.
Is college the best place to meet smart, interesting people?

This is a fact: there are more people in a specific field (say, behavioral economics) outside of a particular university than inside it. The number of smart people in a field at any one university is only a fraction of the total number of smart people in that field. We might see that as an advantage for self-learning: why restrict ourselves to the fraction of the field’s people who happen to be at our university? Why not make an effort to meet the smart people, everywhere?
The difference lies in two factors: access and depth. Access is obvious: getting time with a professor at your own college is far more plausible than getting time with some other professor at some other college. Depth is another matter: while learning outside of college gives you the entire breadth of people to connect with, I think proximity and other factors make it significantly more likely that you’ll build a high-depth relationship with someone. Despite FaceTime and the like, I still communicate more frequently with people on campus and in the city than with those who are elsewhere.
Yet a strong argument for self-learning is flexibility. In traditional education, you are restricted and locked down—time-wise, to class schedules and deadlines, and location-wise, since you have to attend class in a specific place—in addition to the inflexible semester/term system discussed above. This means you can’t discover that a certain university or city is an epicenter for what you’re interested in (e.g. Carnegie Mellon University and decision science), Airbnb your apartment, and go spend time there. You can’t take an unorthodox approach like coming up with an idea to explore (say, how individuals in different countries conceive of the role of government in their lives) and then booking a bunch of dirt-cheap airline tickets to see for yourself. Of course, other constraints are present—money in particular—but the fact is that the time- and location-based inflexibility of college restricts the possibility space of how you can learn.
The importance of college in meeting interesting people cannot be overstated—people say it’s one of the most important parts of college, sometimes even above the learning itself. While the Internet makes meeting other amazing people a lot easier, it still seems to me that college offers a tradeoff: you meet only a small subset of all the smart people in a field, and that subset is still really awesome and smart, but it obviously won’t be all of them—in exchange, you’ll build much deeper relationships with them. Is that better? The jury’s still out, as far as I know.
Further exploration: Think about whether one of these options can incorporate the other, e.g. whether going to college can allow for deeper connections with some people while not totally restricting the possibility of meeting others elsewhere—in fact, being a “student” somewhere might actually make this easier.
How much does the credibility that college confers matter?

Probably a lot. In other industries, such as tech startups, credentials take a different form: your GitHub repos, past experience at places you’ve worked, what you did there, and talks you gave. But science still seems highly dependent on credentials as signals, and I don’t get the feeling that the lone independent researcher (especially one without a degree) gets much respect (or that the role is even really viable). I don’t know much about this, other than that there is a lot of credential inflation and that even a bachelor’s isn’t enough to be taken seriously (though surely more seriously than being degree-less), but it’s one thing to explore.
This is a catch-all for the other things the questions above don’t address, but one other advantage of college that comes to mind is: almost everyone who’s someone has done it. Two things: 1) that doesn’t mean it’s right; 2) there are people who dropped out of college and made it on their own (or never went in the first place). Those people tend to be outliers, though, and many of them had some extremely compelling reason—a startup or something else—that gave them a reason to leave. I think there’s more to think about here, such as the fact that the examples that come to mind (Gates, Jobs, Zuckerberg) all got their success from the very thing they dropped out for (Microsoft, Apple, Facebook), yet we see few people who dropped out, meandered around, and then built a Fortune 50 company.
One advantage that self-directed learning might have is that, by being nontraditional, you might come up with nontraditional approaches. I’m reminded of a friend of a friend who came up with a newer, harder way to do a physics calculation; by relying on this unorthodox method, he was able to best others who used the traditional one. Might traditional education teach us to think traditionally, and might nontraditional education let us come up with our own, non-standard ways of seeing things that turn out to be better? That’s another open question, and a really big consideration, because as the curse of knowledge suggests, once you learn and adopt something, it’s really hard to think in a different way.
In addition to the above, when I’m at college, I worry about all the experiences I’m not having and all the lives I’m not living. On one hand, that’s an argument for a self-directed education or an enriched program like Minerva or Uncollege: you can gain experience from the real world and have your early twenties shaped by a wider gamut of experiences. On the other hand, you could argue that’s not exactly what college is for: it’s for enriching the mind through knowledge. I’d say, however, that experiential knowledge is an absolutely essential part of one’s education, and that sort of experience is hampered by college’s constraints—heavy workload, rigid curriculum, and being bound geographically to campus and time-wise to the schedule—which I think make it hard to attain the kind of worldly education we expect to have in our early twenties. And with the current standard of leaving college and immediately starting a career, I worry that most of us won’t ever get the chance to gain that kind of education.
What’s the use of college? What is college good for? It’s a crucial question as we consider the right post-secondary education system for a rapidly changing economy. You’ve probably read a hundred think pieces™ on that, so I won’t go on about it. In any case, this question is both personally relevant and broadly important as we consider the role of college today, the democratization of education we’ve seen over the past ten years, and whether traditional and nontraditional forms of education will always be distinct or whether we can combine the benefits of each to modernize education and foster the life of the mind.