Computer science is recognized as a scientific discipline, though it’s not commonly categorized under natural sciences. While certain branches of computer science do engage with natural phenomena, the majority of its core aspects align more closely with formal science, given its status as a subsidiary field of mathematics.
What Makes Computer Science a Science?
Noam Chomsky, a perpetually nuanced thinker, comes off as skeptical that today’s GPTs, for all their fluency, tell us much about how language or intelligence actually works, which raises the question this blog sets out to answer.
Almost every science lab practices a model of science, and it is only through that practice that its work and findings qualify as scientific.
That model almost always begins with a natural phenomenon that draws a scientist’s attention and, in turn, begs for an explanation.
This explanation is termed a theory. It’s almost a given that the phenomenon is much more complex than the theory that explains it.
But then, anyone can cook up a theory that can explain away the phenomenon.
In the next step, a model is often designed that can be put to testing and experimentation. Again, in terms of complexity, the model is a step simpler than its theory.
At the very end of this scientific pipeline, the model gets to have a real-world application or is turned into a commercial technology.
While all labs and research groups in physics, chemistry, biology, and even economics follow this scientific process and therefore qualify as science, can the same be concluded about computing? It is, after all, named computer science! The only thing that should change is that the model would be a computational model (or, in most cases, an algorithm).
Computer science is an empirical discipline. We would have called it an experimental science, but like astronomy, economics, and geology, some of its unique forms of observation and experience do not fit a narrow stereotype of the experimental method. Nonetheless, they are experiments. Each new machine that is built is an experiment. Actually constructing the machine poses a question to nature; and we listen for the answer by observing the machine in operation and analyzing it by all analytical and measurement means available. (Newell and Simon 1975, 1–2; emphasis added)
Speaking of machines, the founding father of computing, Alan Turing, created a complete computational model of a machine in order to answer just one question: whether every well-posed mathematical problem can be decided by a purely mechanical procedure (Hilbert’s Entscheidungsproblem).
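To give a rough feel for what such a computational model looks like in code, here is a minimal sketch of a Turing machine simulator in Python. The transition table shown, a toy machine that flips a string of bits, is a purely illustrative assumption, not Turing’s own construction.

```python
# A minimal Turing machine: a finite transition table, an unbounded tape
# (modeled as a dict of cells), and a read/write head.
def run_turing_machine(transitions, tape_input, start="q0", halt="halt", blank="_", max_steps=10_000):
    tape = {i: s for i, s in enumerate(tape_input)}
    head, state = 0, start
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = tape.get(head, blank)
        # Each rule maps (state, symbol) -> (next_state, symbol_to_write, move).
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# Illustrative machine: invert every bit, halt on the first blank cell.
flip_bits = {
    ("q0", "0"): ("q0", "1", "R"),
    ("q0", "1"): ("q0", "0", "R"),
    ("q0", "_"): ("halt", "_", "R"),
}
print(run_turing_machine(flip_bits, "10110"))  # -> 01001_
```

The point of the sketch is not the bit-flipping but the shape of the model: a handful of states and rules is enough to capture, in principle, anything we would call computation.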
Now, in order to answer this question, it is imperative that the process of science be followed: from theorizing about the phenomenon, to computationally modeling the theory, to applying the model to a definite problem that results in technological innovation. The early computer scientists followed the process with integrity. Let us look at three towering figures from the domain.
Pitts and McCulloch#
Walter Pitts and Warren McCulloch were a logician and a neurophysiologist who took the natural phenomenon of the firing neuron, and the neurophysiological theory of how it works, and in 1943 turned them into the first computational model of a neuron, an ancestor of today’s artificial neural networks.
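A rough sketch of their unit in Python terms: it receives binary inputs and fires (outputs 1) only when the total excitation reaches a threshold. The weights and thresholds below are illustrative choices, not values from their paper.

```python
# A McCulloch-Pitts style neuron: binary inputs, a threshold, a binary output.
def mcp_neuron(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of the binary inputs reaches the threshold."""
    excitation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if excitation >= threshold else 0

# Such units can realize Boolean logic.
and_gate = lambda a, b: mcp_neuron([a, b], weights=[1, 1], threshold=2)  # both inputs must fire
or_gate = lambda a, b: mcp_neuron([a, b], weights=[1, 1], threshold=1)   # either input suffices

print(and_gate(1, 1), and_gate(1, 0))  # -> 1 0
print(or_gate(0, 1), or_gate(0, 0))    # -> 1 0
```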
John Holland#
John Holland, who laid the foundations for evolutionary algorithms, walked the same lines, and we quote from his 1992 article on genetic algorithms:
Computer programs that “evolve” in ways that resemble natural selection can solve complex problems even their creators do not fully understand. (Holland 1992, 66)
It’s important to note that evolutionary algorithms did not just poof into existence. They happened when John Holland took the natural phenomenon of evolution seriously, then took Darwin’s theory that explains evolution seriously, and only then designed an algorithm based on the theory, which was eventually applied to optimization problems.
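To make that mapping from theory to algorithm concrete, here is a minimal sketch of a genetic algorithm in the spirit Holland described. The toy fitness function (maximize the number of 1-bits) and all parameter values are illustrative assumptions, not Holland’s own setup.

```python
import random

# Selection, crossover, and mutation acting on a population of bit strings,
# echoing the Darwinian loop of variation and selective reproduction.
def fitness(genome):
    return sum(genome)  # toy objective: maximize the number of 1-bits

def evolve(pop_size=20, genome_len=16, generations=50, mutation_rate=0.02):
    population = [[random.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: the fitter half survives to reproduce.
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        # Crossover: each child mixes the "genes" of two parents at a random cut point.
        offspring = []
        while len(offspring) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genome_len)
            child = a[:cut] + b[cut:]
            # Mutation: occasional random bit flips keep variation alive.
            child = [bit ^ 1 if random.random() < mutation_rate else bit for bit in child]
            offspring.append(child)
        population = parents + offspring
    return max(population, key=fitness)

best = evolve()
print(best, fitness(best))
```

Nothing in the loop knows how to solve the problem directly; the solution emerges from variation and selection, which is exactly the point Holland’s quote makes.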
John Holland received the following accolade from the president of the Santa Fe Institute (SFI), who recognized his main contribution in terms of the scientific currency, as reported in an obituary note (Simon 2015):
John (was) rather unique in that he took ideas from evolutionary biology in order to transform search and optimization in computer science, and then he took what he discovered in computer science and allowed us to rethink evolutionary dynamics. This kind of rigorous translation between two communities of thought is a characteristic of very deep minds. And John’s ideas at the interface of the disciplines continue to have a lasting impact on the culture and research of SFI. (Simon 2015)
This obituary note also highlights the importance of the two-way relationship between our computational models and scientific theories. This fruit cannot be reaped if we bypass the process and focus only on algorithms and technology (the last two components of the process).
Geoffrey Hinton#
One last example is that of Geoffrey Hinton, who is almost single-handedly responsible for rescuing machine learning from its first winter, with the ingenious idea of applying the chain rule to propagate the error at the output layer backward through the hidden layers and redistribute the weight updates accordingly. Without Hinton’s 1986 contribution, famously known as backpropagation, modern deep-learning neural networks, including Transformer-based GPTs, wouldn’t be possible (a small code sketch of the idea follows at the end of this section). But the note on which Hinton and his co-authors closed that paper is worth pondering:
The learning procedure, in its current form, is not a plausible model of learning in brains. However, applying the procedure to various tasks shows that interesting internal representations can be constructed by gradient descent weight-space, and this suggests that it is worth looking for more biologically plausible ways of doing gradient descent in neural networks. (Rumelhart, Hinton, and Williams 1986, 536)
This only goes to show the culture within which most of the development in computer science was taking place up until the 1990s: important publications had to put up a disclaimer if their computational models diverged even a little from the theories and the scientific process.
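As promised above, here is a minimal sketch of what “backpropagating the error” amounts to in code: a tiny one-hidden-layer network trained by gradient descent, where the chain rule carries the output error backward to the hidden-layer weights. The XOR task, layer sizes, and learning rate are illustrative assumptions, not the setup of the 1986 paper.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # toy XOR task
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden weights
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 1.0

for _ in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: the chain rule, applied layer by layer.
    d_out = (out - y) * out * (1 - out)   # error signal at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)    # error redistributed to the hidden layer

    # Gradient-descent updates in weight space.
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # should approach [0, 1, 1, 0]
```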
In conclusion#
Perhaps Chomsky has the intuitive insight, being a scientist par excellence, that despite so much investment of computing power and training data, GPTs are fascinating as chatbot and Q&A technology, but they really do not inform us about how intelligence or language works. Perhaps it is the bypassing of the scientific process that results in the hallucinations, both by the models and by those who make claims on their behalf.
If you enjoyed reading this blog, you can check out the other blogs from this series or the foundational course on neural networks:
Make Your Own Neural Network in Python
Machine learning is one of the fastest-growing fields, and we cannot emphasize its importance enough. This course aims to teach one of the fundamental concepts of machine learning: the neural network. You will learn the basic concepts of building a model as well as the mathematics behind neural networks, and, based on that, you will build one from scratch (in Python). You will also learn how to train and optimize your network to achieve better results.
Frequently Asked Questions
Is computer science a branch of science?