As in every year, there was plenty of excitement around the Nobel prizes last month. Scrolling through X, what stood out to me were the comments about the winners in physics and chemistry. While all were full of praise for the researchers and their work, many pointed out that these prizes were not really a recognition of their own achievements but of something else entirely: artificial intelligence (AI).
Indeed, it is hard to overstate the growing importance of AI not just in science and technology, but in society at large. According to a recent report published by the European Research Council, AI has applications in fields such as health, transport, manufacturing, food and farming, public administration, education and cybersecurity. The technological applications of AI have far-reaching effects, with some computer scientists even predicting dystopian changes to humanity’s future.
The impact of AI is also keenly felt in chemistry and biochemistry where, among other things, it is being used to design novel chemical substances in the search for new drugs. Some argue that AI could lead to immense progress in chemistry and revolutionise chemical practice.
But is this a sensible expectation? Is AI really going to change chemistry in ways that we cannot imagine? Will it revolutionise how we understand the world and our place in it? These sound like daunting questions but what they really come down to are the same old questions philosophers of science have been asking for centuries: how does science progress, and what constitutes a scientific revolution?
Traditionally, certain values are used to evaluate whether progress is achieved in a science. Two of the most significant ones are the ability of a science to make novel predictions and its ability to offer coherent explanations of phenomena. In fact, before the use of AI in science, it was widely accepted that the predictive and explanatory success of a science indicates the truth of the underlying theory, and acts as a criterion for the choice of one theory over another.1
AI seems to challenge this widely held assumption, as its algorithms make predictions and offer explanations of phenomena without using theoretical postulates, laws or hypotheses as inputs.2 Instead, machine learning (ML) algorithms ‘learn’ from data about the systems they are set to describe, and discover patterns which are in turn used to make predictions about similar systems. How exactly ML algorithms produce useful and empirically successful results is quite obscure; in philosophy, this question is called ‘the problem of opacity’.3
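To see what this opacity amounts to in practice, consider a minimal sketch in Python (my own invented example, using the widely available scikit-learn library; the data and the choice of model are purely illustrative and have nothing to do with any Nobel-winning work). A model fitted to nothing but data predicts well on similar, unseen inputs, yet its internal ‘reasoning’ is spread over thousands of branching rules from which no compact theory can be read off:

```python
# Illustrative sketch only: a model 'learns' a pattern purely from data,
# with no theoretical law supplied as input, yet its decision process
# is hard to interpret -- the problem of opacity.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical measurements of some system: inputs X, observed outcomes y.
X = rng.uniform(-1, 1, size=(500, 3))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + rng.normal(0, 0.05, size=500)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X, y)  # no law or hypothesis supplied, only data

# The model predicts well on similar, unseen inputs...
X_new = rng.uniform(-1, 1, size=(100, 3))
y_new = np.sin(3 * X_new[:, 0]) + X_new[:, 1] ** 2
print("test R^2:", model.score(X_new, y_new))

# ...but 'why' it predicts is buried in hundreds of trees holding
# thousands of split rules; there is no compact theory to read off.
n_rules = sum(est.tree_.node_count for est in model.estimators_)
print("decision nodes across the ensemble:", n_rules)
```

The point is not that such models are useless; it is that their empirical success arrives without the kind of surveyable, law-like structure that has traditionally underwritten explanation.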
The problem of opacity also reveals another crack in the idealised image we have of AI: how autonomous is it, really? By this I do not mean autonomous as an entity (though questions about artificial personhood have been raised around AI), but rather AI’s capacity to produce accurate and useful results without the scrutiny of the scientific eye.
A colleague of mine once told me that nowadays, some biology laboratories are staffed by people who know next to nothing about biology. Something similar is reflected in this year’s Nobel prize winners. One of the three winners of the prize in chemistry is a computer scientist with no background in chemistry! How strange, and perhaps a bit unsettling. However, I believe it also reflects a misconception we have about the role of AI in science. Without AI and the development of ML algorithms by computer scientists, these amazing scientific achievements would not have been possible. But do these achievements mean anything unless there is a scientist who knows not only how to use these tools but also, more importantly, how to evaluate their results?
Let me give you a mundane example. I was recently expressing my worries to a colleague about how I am going to assess students in a philosophy course this semester. Usually, I would assign students a paper to summarise. Now, I told her, I’m not so sure this has any value: students could use an AI tool and hand in a summary that an algorithm wrote for them.
She said I was mistaken. AI can be very wrong, producing summaries that capture a text’s content very poorly. Of course, a student may not realise that when submitting such a summary. But I would! And so would anyone who had actually read the original paper.
So perhaps it is not the end of an era. We rightly award prizes for achievements in chemistry and physics, even if not all recipients are experts in those fields. AI can do nothing without the expert eye of the scientist who scrutinises its results. A deep knowledge of the underlying theory and the constant experimental evaluation of all results remain just as important – if not more so – for scientific progress to actually happen.
Long live the sciences then, and let’s refrain from putting AI on a pedestal just yet!