There’s a disquieting truth lurking in academia: no one really thinks for themselves. I mean, no one really thinks for themselves period. But in academia it comes into sharper focus. After all, they’re the people who claim to be thinking for themselves, publishing papers with ideas that they came up with. They’re the ones who are supposed to have new thoughts, make innovative progress, pave our way towards the stars.
So it’s a bit unsettling when you find yourself in these circles, thinking you too might contribute something real, something novel and important, only to find that no one is interested in that. No one wants new ideas, no one is actually trying. They’re all just doing the easy, normal thing that everyone else does: following social gradients.
A friend of mine’s parent works at a biology lab. She studies cancer. When asked if she was trying to cure said cancer, she thought about it for a few seconds and said, “No, we’re not.”
The only thing that’s uncommon about this encounter is that she genuinely considered the question and told the truth. Most scientists have a thick narrative layer between what they’re actually doing and what they say it’s about. They recite it every time they write a grant, give a talk, or make an elevator pitch. But they’re all in the same boat: their work isn’t goal-directed, they’re not actually trying to do the thing, they’re just doing what makes sense to do given the social environment they find themselves in.
And the narratives work, at first. A lot of neuroscientists talk the big talk: they work on things as eye-catching and important as attention. And you think, wow! I want to understand attention too! It’s pretty wild that we can filter out the stuff in our environment and only focus on what’s relevant to us, isn’t it? And then you go to one of their talks and you realize the deceit in slow horror as slide after slide reveals the inanity to which their work has been reduced.
They don’t study attention. No, they study some twentieth-order correlate to it which some people might call attention, although no one really knows what attention is to begin with. They do experiments like putting a bunch of red blocks on the screen and one green one, and they see how fast you see the green one. And those scientists’ grad students will go on to put two green blocks on the screen and see how fast you see those! So much of neuroscience, cognitive science, and psychology is like this.
And it’s not like we’ve learned nothing. I think some of these types of hyper-controlled, reduced experiments are good. I just think that so much of the time people do these types of experiments because they’re not really thinking about it. They do them because it’s what their lab does, or because they’ve been well established, or because they’re easy. All of those reasons come out well ahead of the one that should come first: is this important at all?
But no one really asks themselves that. Because they’re not trying to explain what they’re ostensibly setting out to explain, not really. They’re doing some sort of weird mix. Many scientists do genuinely care and want to understand what they study, but they don’t step back and consider what it would take to actually do that. Instead their mind concocts this half-baked socially approved solution that’s kinda sorta pointing in the direction they want to point in, but doesn’t cut straight to it.
Oh, you’re a grad student interested in visual attention? That’s so cool! Maybe you should work in so-and-so’s lab, they study how many green blocks it would take for people to lower their reaction times by 1 second!
And then they do, inevitably, because that’s where they found themselves after some initial spark piqued their interest in brains (seriously, such a good spark to have!), and then that interest narrowed to attention at some point, and then they decided on visual attention because that was the lab that had openings that year. And then they apply to PhD programs and so-and-so has a lab studying green blocks and that lab is friends with their lab and… you get the picture.
Which is all to say that most people don’t set out with a goal; instead they do the extremely human thing. When you have an interest that’s still inchoate, you take the local, socially approved actions available to you to advance it.
All of that is well and good. I don’t think you need a clear goal to start off. It’s just that there’s not really a second step where they reflect on this and update their plans. They don’t really stop three years into their PhDs and think, “Have I really made any progress on understanding brains and attention…?” And if they do, they don’t act on it. They might get depressed, they might wish it were different, but they usually don’t pivot towards actually trying to figure out what’s going on.
Because, well, figuring out what’s going on is hard. It’s risky. It requires you to really step back and take some potentially quite weird actions. Maybe the best way to understand attention is not in a lab. Maybe it’s to think on your own for a bit, to switch programs, or to drop out of your program entirely. But these actions are scary, and weird, and so most people don’t do them.
There’s this strange and related phenomenon that I’ve encountered around people who work in ML. Some of them will say things like, “yeah I’m pretty sure we can get to superintelligent AI if we just scale up neural networks. It’s probably only a few years away”. They say this in all seriousness.
Somewhat shocked, I ask them if they should maybe work on AI safety instead of their pointless project of improving the accuracy on some dataset by 0.1%. They’re usually pretty unperturbed. “Oh it’s not really my area”, “Maybe when I finish my PhD”, “I’m not really interested in that”.
Sorry -- you think superintelligent AI is a few years away, something that will have a monumental impact on humanity, sign unknown, and you want to finish your PhD first? I sometimes actually say this and they laugh and say “yeah you’re right”. Like how you might joke around with your friends: “I really should stop eating all this cake!” But we all know you won’t.
It’s so strange. But it strikes me as the same phenomenon as a grad student going into so-and-so’s lab to study green blocks. People don’t pivot to work on AI safety, not because they lack some propositional belief that it’s real or important, but because that would be weird and hard and it would go against their social gradients.
People work on some twentieth-order correlate of visual attention because they found themselves there. People stay in their ML lab and continue doing the work they were already doing because that’s where they are. It’s comfortable to be where you are. You are where you are because it was the path nicely laid out for you, a gradient set up for you that you just needed to follow. And it’s nice not to have to justify yourself to your friends and family, to grantors. I get it.
But also, come on people! I’m happy if you keep adding green blocks to the screen, so long as you and I and everyone can admit that you’re not actually making progress on visual attention. So long as this is just your weird passion project, that for some reason you really, really want to know how many blocks you need to add before that reaction time goes up, so long as it’s something like that, fine. Just don’t tell me you’re “studying attention”. We don’t even know what attention is!
Most concepts in neuroscience and cognitive science are this way, actually. In fact, we don’t even know what a concept is! How embarrassing! Neither is there any consensus about what intelligence, or abstractions, or analogies are. You’d think that in such a pre-paradigmatic world, everyone would be working on the fundamentals before refining their green block experiments. But no. No one cares!
I asked once, in a lab meeting, what a concept was. Many people didn’t even have an answer. Some of them had conflicting answers. This is all well and good -- it’s exciting that we don’t know! It means we have more to uncover, more to learn! That’s what science is about! But the fact of the matter is that no one in that meeting was the least bit perturbed that this was the case. No one thought, “Oh, we have no idea what some of the most basic components of our field are, maybe we should fix that.” They all seemed perfectly content with this state of affairs.
Can you imagine the same scene playing out in physics? If some people thought light was a material substance, others thought it was an infinitesimal point, and others thought it was a wave? They might all do experiments to figure out the truth, but they’d at least be trying to figure out what the truth was. In the brain sciences everyone seems oddly content to use their own obscure concepts, doing little tweaks on the existing confusion, “progressing” forward on some ontology that never made any sense in the first place.
I think it is a very natural thing to believe we understand things more clearly than we do. Or to not admit how much uncertainty we have. Uncertainty is scary and uncomfortable. But it’s very unsettling when I see scientists doing it. Scientists are the ones who are supposed to boggle at things, they’re supposed to get curious!
But so many scientists I’ve talked to seem to gloss over wide swaths of confusion with passwords. Oscillations (i.e., brain waves) are weird. No one knows what they’re there for. There are all kinds of correlates -- “when these waves are in phase with each other, people remember things better!”, “when they have this amplitude, people are more anxious!” -- but there’s no good theory of what they’re actually doing such that they’d cause you to remember something better!
I find this embarrassing, and also a pretty clear reminder that we understand so little about the brain. But no one will admit that! I ask everyone about oscillations, because I am curious about them and I want to understand them. Almost every time, neuroscientists will give me some hand-wavy answer that doesn’t explain anything. “Oh they carry information.” Sorry, what information? How? Why? “Oh, it’s populations of neurons synchronizing.” Yes… I know, but why do they do that? Why is it useful for brains to have this property? Is it just superfluous, an epiphenomenon?
These follow-up questions sometimes don’t deter them at all; they just keep repeating the same inane thing in slightly different language. Sometimes they’re like, “oh I guess I don’t know.” But does that cause them to try to figure it out? Does it make these scientists, who study brain waves for a living, go on to actually understand what they are? Nope.
Scientists don’t think for themselves. I’m always surprised by how strong the social norms are around this, given the profession. I would often press people on questions like the ones above: “what are oscillations?”, “what is attention?”. A very common response to this was to send me a deluge of papers on the topic, none of which explained anything. “Oh, you should read so-and-so if you haven’t already. They don’t really clarify anything at all, but they’re higher status than us so we can pretend that they’re good thoughts and move on!”
This is all well and good, in some respects. It is good to read what others have said before you, to see how people have thought about a problem. It’s just that that’s all they’ll say! They don’t offer their thoughts, they don’t give you room for yours. It’s not an open discussion where you might actually make progress by thinking about it. Thinking is for the esteemed, well-established professors to do behind closed doors. We mere grad students couldn’t possibly have a novel thought!
It’s as though everyone has assumed an implicit efficient marketplace of ideas -- as if anything they might think of has surely already been thought, because there are so many smart people thinking about this topic! What they don’t realize is that the scientific enterprise is a house of cards. The people at the top don’t think for themselves any more than you do. The people who get anywhere in academia are the ones who follow the social gradients, doing the green block experiments ad infinitum until tenure.
And there are strong social norms that rail against new ideas. I remember one lab meeting in particular where I presented what I thought was a bold, new, interesting theory in neuroscience. Afterwards, everyone complained to me that it had no evidence. I’m sorry, isn’t the whole point of science to produce new theories and then test them…?
It’s understandable to dislike a new theory based on things like Occam’s razor, or that it seems in conflict with existing evidence. It makes no sense to prematurely disregard it solely because we haven’t found any evidence for it yet -- we haven’t looked for evidence yet! It seems like such a silly thing to say and I don’t think they’re even conscious of how ridiculous it is. But it creates this weird landscape, where only the ideas that are already integrated and approved of are acceptable ideas to work with. Everything else is out there and probably wrong, because no one has thought of it yet.
And the “novel” ideas that most esteemed scientists come up with, at least in cognitive science, are just random amalgams of whatever is hot in CS at that moment. Oh, it seems like both symbolic reasoning and associative learning are important? Let’s glue neural networks to symbolic processes! Oh, maybe concepts are explicit programs? That would be nice and clean, and more closely resemble a computer -- seems right! Oh, maybe thoughts are Markov decision processes? We have models for that from CS so it’s probably correct.
In other words, much of the “novelty” is just rehashing motifs from other fields. These ideas don’t start from basic questions, they emerge from pattern matching to the current landscape. From the outside it sometimes seems like a GPT-3-esque process. It’s as though these scientists are running the algorithm: predict what the next word will be from the current state of cognitive science.
But like GPT-3, it often doesn’t make sense. There’s no top-down constraint, no zooming out to see whether it’s consistent, or important, or right. It’s just whatever random association seems to come out next.
I should note that all of what I’m saying is of course a generalization. There are many scientists I respect and admire. There are scientists who truly do think for themselves and make breakthrough discoveries. It’s just that this is a very small minority, much smaller than anyone would care to admit.
I think that people don’t want to admit this in part because they want to fit themselves into a narrative of importance, which makes perfect sense. I think another reason, though, is that we live in a polite society where it’s taboo to say bad things about anyone else.
In my PhD program, we were required to go to everyone’s talks, to pay respect and support others in the cohort. I think norms like this are gesturing at something very good. I do think it’s important to support your community. But so many of the talks were so… bad. I often questioned whether I just had really bad attention, or whether the talks were exceedingly boring and not worth my time. I felt like I ought to be getting something out of them. But as I found myself really engaged in online lectures that I chose to listen to, I concluded it was the latter.
It’s just a sad fact that as academia becomes more bloated, with way more people, and way more of those people doing it “just because”, the quality goes down. This is inevitable, but everyone pretends that it isn’t. People pretend that everyone has an equally important thing to say and that we should respect that. But it’s not true. Some people’s work is more interesting, more useful, more important than others.
Everyone knows it, too. When you walk away from a really good talk, everyone leaves the room excitedly chattering about it. People reference it weeks and months later. When you walk away from a bad talk, people are kind of dull. They want to go grab lunch, a small reward to forget the agonizing 90 minutes they just endured.
But you’re still expected to have thoughts about it. You’re expected to engage with the green block experiments as much as with the ones trying to get at the mathematical theory of intelligence. Because it’s rude not to, and we can’t be rude.
So everyone goes on getting artificially equal attention. Of course, people with better ideas will tend to get better positions. There are consequences for bad work, but far fewer than there would be if we didn’t have to pretend to be egalitarian all the time. And it’s not like there isn’t any behind-closed-doors gossip about the bad talks, it’s just that the outward appearance is always one of equality.
I think this is good for the goal of being kind to people and very bad for the goal of actually trying to figure anything out. But I also don’t think it’s the only way to be kind to people. I think that there are ways to offer constructive feedback, while being kind and also orienting towards the goal of actually understanding. But it’s hard to do that, so everyone defaults to being polite and those people never update that they should leave their green blocks behind.
Scientists don’t think for themselves. They are bound by social norms just as much as anyone; they find comfortable, respectable positions and they stay there. They don’t zoom out of their social constraints and think about how they might actually understand what they set out to understand. All of their thinking is channeled through social norms, just like everyone else.
It’s an odd state of affairs. Everyone loves the people who disrupt and innovate -- the Teslas, the Einsteins, the Jobses, the people who did truly novel things in their time. Some people probably hate them, too, but there’s at least no disagreement that they did something. That they moved society. People know that, but they simultaneously enforce norms against doing anything novel.
They implicitly, and sometimes explicitly, tell you not to think new thoughts, not to go against the social fabric, not to have new theories. So you go on, collecting data about a cancer you’re not trying to cure, or running experiments that tell you about twentieth-order correlates of attention, rather than gaining new insight about the phenomenon itself.
And in polite society, you can’t assume anyone’s work is better than anyone else’s, and you certainly can’t say anything if you do secretly wonder about that, so this state of affairs continues indefinitely. Like a hall of mirrors: everyone assumes everyone else is doing something of import, when really it’s all empty ideas, reflecting off of each other, landing nowhere but on ourselves and the poor unsuspecting grad students who follow.
Progress, novelty, innovation, all these things that we claim to love, they require people who are able to think for themselves. No one gets anywhere new by doing what society has laid out for them. Revolutionary breakthroughs don’t happen by way of literature reviews, or by reciting other works ad nauseam. We don’t get any closer to the stars by adhering to what’s already been said.
No, genuine novelty stems from an ability to question, to marvel, to boggle at the world. It is that endless curiosity, that not-being-quite-satisfied with the explanations that don’t quite fit, the ability and the passion to push those boundaries, regardless of what others say or think. That audaciousness, that genuineness, that ability to think for oneself. This is what is needed to make real progress.