Cognitive Science is Doing it Wrong
I Think We’re Doing it Wrong
I have all of these gripes with cognitive science. And I was reading Kuhn today, reading his work on the function of measurement in science, on crises, on the early qualitative work most fields undergo, and I was thinking about whether or not cognitive science is doing it right.
I have all of these complaints about the field, but that criticism is mostly born out of my love for it. Cognitive science is the field of abstractions, of ontologies, of how those manifest in living things, in humans, and that is a mystery I hold within myself very deeply. It’s the most interesting topic on the planet right now, I claim, and I’m really invested in seeing its fruition. And I think cognitive science could do it! If it tried.
But I often have a sense that cognitive science is doing it “wrong.” After reading Kuhn, though, I wondered whether every pre-paradigmatic science looks like cognitive science does right now. This is something I want to investigate more, but my sense is that there is something meaningfully different about how qualitative science is conducted today relative to a few hundred years ago.
Muddy Waters
I think that the 20th and 21st centuries saw the rise of scientism. The conflation of the scientific endeavor with these objective things: measurement, numbers, quantitative data. The conviction that mathematization was the true test of value in the sciences. And a rising status of science more generally: lay audiences now regularly consume science as though it were “fact.” A false dichotomy emerged between fact and opinion, with fact being the real arbiter of truth, some objective epistemic thing that can be cleanly cleaved away from those muddy waters of opinion.
And I think that most scientists have a better sense of the true scientific endeavor than scientism entails. But scientism still sneaks in anyway. And the ways that it does are pernicious. It is this sense that qualitative work is unworthy, that anything without an explicit test or hypothesis is not science. Science is where we have numbers, god damn it! Science is math.
Sure, there is a lot of work in cognitive science that doesn’t have numbers. There are philosophical pieces. But these are largely decried – perhaps not publicly, but certainly by many people in the field behind closed doors. There is an implicit status hierarchy in academia, at least in the “sciences” – the people who use the fanciest numbers are at the top, and the more qualitative work is at the bottom. That’s a caricature, of course; it’s not so cut and dried. But as a general rule of thumb, it’s true.
Which Came First?
But what are numbers? What are they in the scientists' toolkit? They are presumably the result of measurement. And what is measurement? Measurements are produced out of theory. And Kuhn makes this point very elegantly: it’s not the other way around.
There’s this tendency, or perhaps it’s better to say desire, to think of measurement as some objective thing. The whole idea of data-driven science bakes this notion in; otherwise it wouldn’t make any sense. “Just collect some neutral data and see what emerges!” It’s this idea that measurements are somehow a neutral image of the world, and that it is up to the scientists to construct theory from that groundwork. Nothing could be further from the truth.
Measurements are created in response to theory. Why would anyone in the 18th century have labored over these intricate devices otherwise? Devices like the Atwood machine? I mean, just look at the thing.
They weren’t developed out of the blue; they were developed to test Newton’s theories. And how does one tell that a measurement is reasonable? How does one know that it is indeed measuring the correct thing? That is determined by the theory that engenders it! A measurement is “correct” if it matches predictions made by the very theory it is meant to test. What else could it be based on? Measurements are somewhat circular in this way. They do tell us something – it’s not arbitrary that one could get a device to produce the predictions of a theory – but they are not objective. They are epistemic tools that are always relative to the theory at hand.
Kuhn makes another good point: what counts as a reasonable measurement differs by several orders of magnitude across disciplines. In some subfields of physics, agreement to the hundreds place is acceptable; in others, nothing short of the hundredths place will do. What counts as meaningfully predictive is relative to our state of knowledge in the field, not some objective criterion.
A Thirst for Salt
It is this that scientism glosses over. Scientism wants to pretend that science is objective, that it can deliver facts, that measurement is neutral. That it is this slow accumulation of data which mounts into theory in this bottom-up process. It is anything but. Theory comes first. And how is theory developed? That is in the realm of the qualitative.
It is my current understanding that most qualitative developments in the preceding centuries (anytime before the 1940s, basically) were less hindered by this thirst for numbers. It hadn’t quite seeped into the collective consciousness yet, this sense of what it meant to be legitimate science. I am certainly not sure about this. But it feels true, somehow. The way many scientists also used to be philosophers, the way they talked to each other about their findings, the way thought experiments were featured in their work.
And I have a sense from reading Kuhn that people back then knew when they hadn’t quite figured things out. Things like temperature: there were decades of debate and back-and-forth between measurement and theory before a universal device was created. This doesn’t feel as true today. It is, in some sense – there isn’t unanimous agreement about phenomena like intelligence – but somehow every little subsection of the life sciences seems more or less content to work on its pet ontology, segregated from the rest of the field. There doesn’t seem to be as much of a push for centrality as I get the sense there used to be. People aren’t trying, as much as they used to, to reach common ground.
Big Science
One of the reasons for this, I think, is that academia is huge now. The number of PhDs granted each year has increased roughly eightfold since 1950, whereas the population has only tripled. There are around 2 million papers published each year just in science and engineering alone. And around 200,000 people earn doctorates in America each year, which means that about 1 out of every 300 people is currently in a PhD program. Academia is truly massive.
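For the skeptical, here’s the back-of-envelope arithmetic behind that last claim (the five-to-six-year average program length is my assumption, not a figure from any survey):

```latex
% ~200,000 new doctorates per year, times an assumed ~5.5 years per program:
200{,}000\ \tfrac{\text{PhDs}}{\text{yr}} \times 5.5\ \text{yr} \approx 1.1\ \text{million currently enrolled}
% Against a US population of roughly 330 million:
\frac{330\ \text{million people}}{1.1\ \text{million enrolled}} \approx 300
```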
And academia being too big hinders scientific progress for a few reasons. One is that academics now have to read vastly more papers than they used to in order to keep up, which is a big time cost. It’s harder to find the good papers now, too, because the signal-to-noise ratio is so much worse. These factors come together to make it difficult to stay at the forefront of a field and, consequently, harder to unify fields. Good luck knowing about the state of “biology.” You might be able to keep up with advances in molecular biology, or biophysics, but the entire field is massive and sprawling. There’s just no way. And this makes paradigms harder to reach.
Academics also cite each other more now – a lot more. You’re generally expected to have a very long reference list at the end of a paper. I’m not sure why this is, exactly, but it at least partially reads to me as signaling – a signal that you know a lot about the field. A short reference list would look disreputable now. Yet if you look at papers from the 60s and 70s, some of the most seminal ones cite just one or two other papers, sometimes none. I think that citing work that has influenced you is good, but so often the push seems to come more from a signaling game, and this seems like a problem to me.
I always felt this pressure in academia, to be reading a ton of papers. And this probably works for some people, but for me, it was frame-forcey. It was a constant deluge of other ideas, other ways of thinking that were vying for my attention, over and above my own thoughts. It took me months after leaving my PhD to feel like I could finally reflect on everything, instead of being completely absorbed in it.
Just Because
I remember voicing my own ideas in discussion groups, in lab meetings, etc., and a common response was “have you read so-and-so? They said some of the same words you just did and they’re way higher status than you, so you should probably go and defer to them.” I’m sure some of it was much more genuine than this – I do think it’s good to read the work that came before you – but so often it felt more about deference, about reputability, about this implicit and pervasive sense of an efficient marketplace of ideas: that novel thoughts were just not a thing mere grad students did.
Which is so antithetical to the scientific enterprise that it seems like it couldn’t be true. But it is. I don’t know about all of academia, I just know about my little corner. But I expect it generalizes. Academia is huge now. It’s not just inhabited by the people who fell in love with it, like it used to be, it’s populated by the people who sort of just meandered into it, following gradients. And when you get a group of people who were just following gradients together, there starts to be a lot more deference. A lot less thinking for oneself. That’s what “following gradients” is, after all.
In fact I think this is a problem that all organizations face as they scale. Not all organizations care, probably, but the ones that are supposed to be about thinking for oneself ought to. As soon as you set up gradients, set up social grooves that people can click into and easily follow along, as soon as you start “growing the movement,” you’re beginning to select more for people who care about reputability, deference, the people who end up places “just because.” Of course their answer for why they became an academic would never be “just because.” But under the hood I think that’s basically what’s happening.
And then you end up with things like academia or EA, where the original group was composed almost entirely of people who actually cared, who were actually trying, who actually thought for themselves, but as it expanded it started to encompass more and more of the “just because” people. And eventually you find yourself in a room with a bunch of people who are purportedly the ones coming up with new ideas, paving our way to the stars, but who are in fact just standing in a hall of mirrored deference.
The Efficient Marketplace of Ideas
And I think that when academia is this big, it obscures the fact that not much progress is being made. So many people are working on it – something good must be happening, right? This sub-sub-subfield has like ten thousand people; surely there wouldn’t be that many if it were complete BS, right? Well, obviously I think the answer is no, but it’s a very common bias to assume that because people are working on something, it must be important. Which of course isn’t true.
But by the time you’re that far down the rabbit hole, knee-deep in a subfield of a subfield of a subfield, legitimized by the sheer number of people around you, asking the big, hard, actually meaningful questions seems not just futile but remote. Questions about whether or not the theories your work rests on make sense, questions about the basic ontology of the field, questions about what these things like “representations” actually ground out in. No one ends up asking these – maybe because of some implicit efficient marketplace of ideas, maybe because they’re just following gradients, or maybe because academia is too damn big and they spend all of their time reading other people’s papers.
Inching Along
And so people end up asking questions about reaction times to tiny green blocks, instead of understanding executive attention, instead of understanding attention, instead of understanding what the brain is in the business of doing in the first place. And it’s not like this has no merit; it does. But when I zoom out and ask whether it’s working at a broad scale, I come up with a resounding “no.” It’s a common refrain in the brain sciences that if understanding the brain were a mile-long road, we’d be only about an inch along it. People in these fields know that we don’t understand basically anything, that we lack theory, but we go on measuring anyway because that’s what science does.
Well, I agree with Kuhn: I think theory comes first. And I think we need more of it. I think we need to get clearer on the fundamentals before trudging through the knee-high water of sub-sub-subfields. And yes, there is some theory – things like reinforcement learning and prototypes and Bayesian approximation – but ask any brain scientist what oscillations are, or what intelligence or attention is, and I expect you’ll get a sort of confused response. We still haven’t pinned down very basic observed phenomena.
A Numbers Game
Above and beyond the truly colossal size of academia, another reason that we’re only inching along in these fields is that we prematurely mathematize, paint numbers onto everything, and in so doing not only obscure the fact that we know less than we say we do, but also disincentivize the place where the real work should be happening: the realm of the qualitative.
Scientism is, roughly, the conflation of number with truth. Of explicitness with science. It elevates formalization to a status that is not quite deserved. It’s a cargo cult of physics – but only of the legible parts, only the ones that graduated into paradigm, only the ones that get printed out as numbers in textbooks. It forgets that physics also had this inchoate stage, and that it still does. It conflates physics, the MVP of the sciences, with precision. Physics does have precision, but we can’t forget that it also had philosophy: centuries of thought experiments and qualitative work.
But because this conflation exists in the modern conception of science, anything with numbers is taken to be more or less legitimate. Yes, bad statistics is a thing; yes, p-hacking has been acknowledged. But by and large we still subconsciously make this mistake. And this isn’t to say it’s all bad – numbers are certainly useful, even necessary, for gaining precision and getting clear about what we mean. But they don’t come first. And they certainly shouldn’t a priori legitimize bad ideas. The bulk of the work in science is, I have come to think, verbal reasoning: understanding argument structure, triangulating your thoughts with others and with reality. The mathematization step should be a skillful translation, not the whole thing.
The number obsession is bad for science because it falsely leads us to assume that we know more than we do. It also delegitimizes where I think most of the real work ought to be done, especially in pre-paradigmatic fields (which I claim is the status of almost all of the life sciences). This combination means that people aren’t acknowledging where the problem is – they think it’s that we don’t have enough “objective” data, that we don’t have enough math, when really the problem is that we don’t have enough original thought, not enough verbal reasoning, not enough qualitative work, not enough theory. The numbers game is detrimental, not just because it can be hacked through p-values and bad statistics, but because it has hacked our conception of what good science looks like.
Cargo Culting Physics
Cognitive science has a history of mathematizing too quickly, of oversimplifying problems in ways that turn out to be disastrously false. GOFAI is a central example. In the 60s, everyone assumed that minds consisted of something like predicate logic. Programs were developed, promises were made, and we all know roughly how that turned out: an AI winter. In an episode now so well known it is almost inane to mention, a summer project at MIT thought it could solve the entire problem of perception in a few short months. Of course, we still haven’t “solved it.” But the draw was there – computers were so powerful, built out of logic gates and abstractions and symbols – they seemed to share all of these characteristics with minds. If only minds were as simple as logic would suggest…
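To make “minds as predicate logic” concrete, here is a toy sketch of the GOFAI style – my own illustration, not any historical system – where a “mind” is just facts plus inference rules over bare symbols:

```python
# A toy of the GOFAI picture: a "mind" as a set of facts plus inference
# rules over uninterpreted symbols. "Thinking" is deriving new facts.

facts = {("bird", "tweety"), ("penguin", "opus")}

# Each rule: (premises, conclusion). "X" is an implicit universal variable.
rules = [
    ([("bird", "X")], ("can_fly", "X")),
    ([("penguin", "X")], ("bird", "X")),
]

def forward_chain(facts, rules):
    """Apply every rule until no new facts appear (a fixed point)."""
    derived = set(facts)
    while True:
        new = set()
        for premises, (pred, _) in rules:
            # Entities that satisfy every premise of this rule.
            candidates = set.intersection(
                *[{e for p, e in derived if p == prem} for prem, _ in premises]
            )
            new |= {(pred, e) for e in candidates} - derived
        if not new:
            return derived
        derived |= new

print(forward_chain(facts, rules))
# Note that it cheerfully derives ("can_fly", "opus") for the penguin.
# Brittleness of exactly this flavor is part of why the program stalled.
```

The charm and the trap are the same thing: everything is explicit, everything composes, and none of it needs to resemble how minds actually work.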
The pressure to formalize is exceptionally strong in cognitive science, since everything is expected to cash out into a computational model. But formalizing too quickly is dangerous – it risks glossing over confusions and concepts which ought to be clarified first. Ideas should always come before mathematization. And I worry that in this cargo-culting of physics that plagues all of the life sciences, wherein math reigns supreme, we rush to formalize too quickly, and in doing so falsely assume that we know more than we do.
Reduction to the Wholes
The bloating of academia, this obsession with numbers – these seem to me to be problems with academia generally speaking. But there is also a problem that cognitive science uniquely faces: it hasn’t actually done the reductionism it claims to have done.
I think it is a very natural thing to try to break down a phenomenon in order to understand it. I quite like this activity, I think it’s very fruitful. But cognitive science doesn’t really do this. It pretends to, pretends that things like beliefs and desires are fundamental building blocks of a mind – that abstractions and representations and computation are the core components, but it doesn’t really get clear on what that means. It builds everything from there, on a teetering ontology with a hollow foundation.
And perhaps this is natural and fine, you need to start from somewhere to get off the ground. But I think cognitive science is in this strange and precarious situation where the things it takes for granted are built out of these flimsy, floofy abstract notions. Notions that are neither concrete nor precisely articulated. Oof!
Neuroscience works under the assumption that neurons are the fundamental unit. It is possible that they aren’t – it seems unlikely, but possible. But even so, they are still studying something concrete. Something tangible. It’s less pressing to identify the right ontology upfront because just poking around with neurons is a pretty good start.
I also think that a pretty good bet in pre-paradigmatic sciences is to have everyone cluster around “intellectual Schelling points.” Geoff Anders has a good piece on this – basically, without theory, you should try to direct attention to things that we can all at least point to: model organisms, measurement techniques, etc. Neuroscience has this to some extent: C. elegans and Drosophila.
Cognitive science barely does. A few people look at honeybees and other small organisms, but by and large we are interested in human cognition. And my god, starting with the human brain is one of the most hopeless places to start! It’s exciting in part because it’s so complex! And yes, we’ve learned things here and there, but by and large we’re very lost with respect to its function. Furthermore, cognitive science hasn’t resolved the “fundamental unit”; there is nothing analogous to a neuron. Maybe a concept? Maybe a belief? But what are things like beliefs? No one knows! People have folk notions of many of these things, and there are different models underpinning them across the field, but you can’t point to a belief the way you can point to a neuron.
Yes, we observe behavior. Sometimes cognitive neuroscientists even look at fMRI. Often we don’t even do that. Still, there are no canonical results you can point to for what a belief is or what a concept is, to say nothing of what things like representations, abstractions, goals, intelligence, and computation are. Even though these are the bedrock of the field.
This seems bad to me. Reductionist accounts are good insofar as they pinpoint what the parts are and tell a compelling story about how the larger whole is made up of them. Cognitive science has not finished reducing its parts, nor has it told a compelling story of the whole. Oops.
But this isn’t even the worst part. I would be fine with all of this if cognitive science were more self-aware about being in this predicament. It isn’t. It seems like most scientists are content with their own little bankrupt ontology, not all that perturbed that there is nothing close to unanimous agreement about it. I asked in a lab meeting once what a concept was. Everyone gave a different answer. My next slide listed at least ten different formalizations of it, none of them commensurate. No one seemed the least bit disturbed. No one even really commented. It was just accepted that this was the state of things, no use in trying to rectify it.
Precision of the Imprecise
We aren’t paradigmatic yet. And that’s okay. This stage is good – it’s incredibly interesting and fruitful. But we can’t jump ahead of ourselves. Science, in the Kuhnian frame, proceeds by first passing through this inchoate, qualitative stage, this time before formalization, when ideas make contact with reality and other minds, and theories develop. Only then, once a theory is accepted, does normal science proceed. Only then do measurements make much sense. But the brain sciences, and cognitive science more specifically, want to pretend that they are in the normal science stage when they aren’t. They want to throw out qualitativeness and reel in more precise numbers. How many times have I been in a discussion group over some paper doing interesting philosophical work, only to hear a ringing chorus of, “but they don’t have numbers. That’s not science.”
And I have some sympathy for this, it’s easy to push around meaningless symbols with words, easy for them to fall on deaf ears in the grand arc of scientific progress. The allergic reaction seems healthy in small doses. But not all words are meaningless. If you listen closely, almost all ideas begin this way. Not with math, not with numbers, but with a sense, an idea, an inkling. And yes, talk is cheap. Yes, most of it is probably garbage. But don’t throw the baby out with the bathwater. Some words will meaningfully shape the future of cognitive science, if it ever breaks out of its reverence for scientism.
We! Need! Theory!
It’s not easy to get measurements to match a theory – it takes a lot of hard, intricate work, work like the Atwood machine – but it is often possible, even if the theory is wrong. People had evidence for phlogiston before it was disproven. Measurement of something does not entail its status as a True thing. We can be misled by our concepts, our ontologies. And if you take the Kuhnian view, which I do, this is the most important thing to get right. You don’t get measurement and data for free; these are inherently relative things, filtered through our concepts. And often, because they are a product of our ontologies, we forget that our ontologies had such a large hand in delivering them. But they do.
Grokking this is a vital part of being a good scientist, in my humble opinion. And I don’t think it implies anything hopeless about our situation. There has been scientific progress, we have made strides in our collective understanding. But I also hold the belief that if you think theory precedes data, and if you also believe your theories to be lacking, then the thing to do is to go out and develop them, to refine them until they reach a point of consensus within the field. Only then are you squarely in the realm of normal science. Only then do you have a paradigm.
Well, cognitive science’s theories are lacking. I think this is true of all the brain sciences, but I find it most striking within cognitive science. Maybe that’s in part because I was so close to it, but I think by and large it’s because it deals only in the abstract, in these untethered ideas, floofy things you can’t quite point to, fundamentals which haven’t been articulated clearly, concisely, and unanimously. It certainly isn’t lacking in models, but a model is a different thing than a theory. A theory unifies a field; a model posits an explanation for a single experiment, or a set of them.
We need theory. An ontology and structure that makes sense of what these floofy things called minds are. We need words and ideas before we write it all down in math. And we need them even more in this environment where the number obsession has obscured the fact that we don’t know what we’re talking about. Even more so when experiments and data abound, in literally millions of papers per year, and yet no one has had a truly monumental theoretical breakthrough in the life sciences since Darwin.
Cognitive Science is Doing it Wrong
So, yes, I do think cognitive science is doing it wrong. I think most of modern science is, but cognitive science is doing it especially wrong! Oof. To think I just spent the last two years making all of these same mistakes…
And I do think there was something different about pre-paradigmatic science hundreds of years ago, and that we’d do well to remember it. Like how science used to be small and intimate and strove for consensus. Like how philosophy was an important tool for almost all prominent thinkers (Aristotle, Galileo, Einstein, Descartes, Bohr, etc.).
Science is a process for finding truth, but truth is not some objective fact you get linearly closer to by applying neutral measurement. Truth is a process of refining our contact with reality. It’s this incredibly special and beautiful process, one that I think we all take part in whether we notice it or not. But, contrary to popular imagination, it is not fundamentally about numbers and math.
It’s a fundamentally conceptual process. And conceptual progress is sometimes achieved through the refinement and precision of numbers. But more often it is born out of philosophy: thought experiments, the inklings in the corners of minds coming together, the subtle notion that “this definition doesn’t quite catch everything.” And thinking about that – why doesn’t it catch everything? How is it wrong? Why does Lamarckism fall short? What, exactly, is missing from the idea that speed is average velocity?
Most concepts give us something to work with, so it’s easy to overlook where they don’t hold weight. Adherents of Aristotle overlooked the concept of instantaneous velocity for thousands of years! Because it just didn’t really matter; it didn’t really come up. And even though Aristotle had wisps of confusion in his writing, there just wasn’t much need to refine it. Not until Galileo came along and, with a single thought experiment, showed that average velocity was not sufficient to account for all phenomena relating to speed.
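For concreteness, here is the distinction that refinement eventually produced, in modern notation that of course postdates both Aristotle and Galileo:

```latex
% Average velocity over an interval vs. instantaneous velocity at a point:
\bar{v} = \frac{\Delta x}{\Delta t}
\qquad\text{vs.}\qquad
v(t) = \lim_{\Delta t \to 0} \frac{x(t + \Delta t) - x(t)}{\Delta t} = \frac{dx}{dt}
% A body dropped from rest: average velocity over [0, t] is gt/2, while
% instantaneous velocity at time t is gt -- two different quantities that
% a single undifferentiated notion of "speed" conflates.
```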
These advances – this refinement of thought, this philosophy – are what is needed to bring theory out of observation, to engender new concepts that better characterize the world we live in. And cognitive science is in desperate need of them. We need more conceptual precision around agency, intelligence, goals, abstractions. Not just rounding them off to some folk notion we can overlook and quickly move past, but actually digging in and figuring out what it is we mean. I expect that this process will bear fruit, much more than the process of pretending we know everything because hundreds of thousands of academics are working hard at producing very fancy numbers.
I want a paradigm for cognitive science. I want to understand what minds are! I want the field to make progress, not only because I am insanely drawn to the subject matter, but also because I expect it could have a real impact on some of our most pressing problems: making intelligent and aligned agents. I have this sense that understanding what it is we mean by intelligence, by agency, by power, will help us in creating safe AI. More on this soon. But it doesn’t seem like a crazy thing to believe – it seems like if we understood more about the black-box nature of neural networks, we’d be in a better position to control them, understand their consequences, etc.
So, my battle cry: let’s embrace philosophy a bit more, let’s try to do some of that pre-paradigmatic qualitative thought, let’s poke things like E. coli and slime mold and get concrete about what we mean by representation, abstraction, prediction, etc. Let’s not assume we know what we mean by attention. Let’s pause a bit more when we can’t answer what an oscillation is. Let’s dig in, have some fun, poke around, and try to get clear on what we’re poking at. Let’s think! Cognitive science, I’m talking to you. Let’s do it right.