by Elena Nisioti

Going deeper: A history of ideas in AI research

Do Androids Dream of Electric Sheep? Let’s go back to a time when AI questions were more straightforward (but still difficult to answer).

Artificial Intelligence is moving fast. The vibe is all around. Facts are beginning to sound like science fiction movies, and science fiction movies like a version of reality (with better graphics). It may be that AI has finally reached the maturity it pursued for decades and was stubbornly denied, a denial that left parts of its community, and the whole world, suspicious of its feasibility.

Frankenstein offers parallels relevant to the present day. Mary Shelley’s Gothic novel explores the consequences of creating and introducing an artificial being into society. The Creature puzzles us with its inhuman atrocity and yet its human manifestations of weakness, need for companionship and existential crisis.

One could say that we should focus on the future and the consequences of our discoveries. But how can one focus on the chaos created by injecting an army of Creatures into a system as complicated as contemporary society? One could also focus on the achievements, the success stories that made these ideas sound veracious. But how can one distinguish, after the fact, between correct intuition and luck?


It takes self-restraint, and wisdom, to set aside for a while the branches of your work and evaluate the firmness of its roots. A blooming tree can be distracting.

Whether you are tracing the rules of logical thinking to the ancient Greek philosophers, the formalisation of reasoning to Arabic mathematicians or the power of mathematical knowledge to 19th-century intellectuals, one unsettling notion becomes clear: the questions are deeper than the networks you can design (even taking Moore’s law into account).

“I believe that what we become depends on what our fathers teach us at odd moments, when they aren’t trying to teach us. We are formed by little scraps of wisdom.”
Umberto Eco

The rest of the discussion will emerge from the history of AI. Not the history of achievements, but the history of questions, arguments and beliefs of some significant individuals. Most of the events revolve around the 1950s and ’60s, the era in which AI acquired its official definition, its purpose, its scientific community and its opponents.

Can machines think?

In 1950 Alan Turing attempts to answer this purposely simplistic question in his seminal paper Computing Machinery and Intelligence. Acknowledging its ambiguity and the limits it imposes on understanding AI, he proceeds to formulate a thought experiment, now known as the Turing test:

Player A is a man, player B is a woman and player C is of either sex. C plays the role of the interrogator and is unable to see either player, but can communicate with them by means of impersonal notes. By asking questions to A and B, C tries to determine which of the two is the man and which is the woman. A’s role is to trick the interrogator into making the wrong decision, while B attempts to assist the interrogator in making the right one.

The reformulated question is then:

What will happen when a machine takes the part of A in this game? Will the interrogator decide wrongly as often as he does when the game is played between two humans?

Turing’s approach seems to follow the doctrine of the duck test: If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck.

His attitude when it comes to “human” aspects of intelligence, such as consciousness, is that you can’t blame someone (or something) for not possessing a characteristic that you have yet to define. Thus, consciousness is irrelevant in our quest for AI.

Gödel’s incompleteness theorems were an obstacle to anyone attempting to talk about AI. According to them, a sufficiently powerful formal system cannot be both complete and consistent; thus machines that learn by means of mathematical logic, as AI machines do, are expected to fail to establish some truths. Turing’s answer to this is fairly disarming: how do you know that the human intellect does not also come with its own limitations?

Turing’s paper is lavish both in its arguments and in its clear, dialectical structure, yet necessarily restrained in its speculations about technologies that had yet to be invented.

Steps towards artificial intelligence

Marvin Minsky was one of the fathers of AI as a research field. In the dusty album of AI family photos, Minsky would be the old man who brings some uneasiness to a family dinner: “Old uncle Minsky. He was charmingly peculiar and always had something interesting to say”.

Minsky was one of the organisers of the Dartmouth Conference in 1956, where Artificial Intelligence was first defined as a term and as a field. He is mostly remembered for his vigorous belief that AI is feasible and his disdain for pursuing it by the wrong means.

Let’s see what Minsky had to say in 1961, when asked about the progress of AI so far.

Should we ask what intelligence “really is”? My own view is that this is more of an aesthetic question, or one of sense of dignity, than a technical matter! To me “intelligence” seems to denote little more than the complex of performances which we happen to respect, but do not understand. So it is, usually, with the question of “depth” in mathematics. Once the proof of a theorem is really understood, its content seems to become trivial.

Acknowledging the inherent difficulty of defining AI, and thus of pursuing it, Minsky begins by laying out its building pillars. According to him, these are search, pattern recognition, learning, planning, and induction.

If the ultimate purpose of the program is to search and find solutions of its own, then pattern recognition can help it recognise the appropriate tools, learning can help it improve through experience, and planning can lead to more efficient exploration. As regards the possibility of making a machine with inductive abilities, and thus reasoning, Minsky has this to say:

Now [according to Gödel’s incompleteness theorem], there can be no system for inductive inference that will work well in all possible universes. But given a universe [our world], or an ensemble of universes, and a criterion of success, this (epistemological) problem for machines becomes technical rather than philosophical.

The rest of the text contains a recurrent urge to clarify that the pursuit of AI should be conducted through complex, hierarchical architectures. For this reason he questions the perceptron approach, which he expects to fail on moderately difficult problems. And frankly, we can’t expect reality to be simplistic.

Minsky can be held responsible for discouraging research on perceptrons, which probably delayed the bloom of deep learning. The realisation that one can solve complicated problems with simple building blocks, by stacking them into deep architectures, seems to have escaped his otherwise ingenious insight.
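The objection, and what it missed, is easy to make concrete. Here is a minimal sketch in Python with NumPy (the weights, thresholds and training loop are illustrative, not taken from any of the texts discussed): the classic perceptron learning rule never manages to classify XOR, which is not linearly separable, while two layers of the same simple threshold units, with hand-picked weights, represent it exactly.

```python
import numpy as np

# XOR truth table: not linearly separable, so no single threshold unit
# can classify all four points correctly.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

def step(z):
    return (z > 0).astype(int)

# --- A single perceptron, trained with the classic perceptron rule ---
w, b = np.zeros(2), 0.0
for _ in range(1000):                     # plenty of passes over the data
    for xi, yi in zip(X, y):
        err = yi - step(xi @ w + b)
        w += err * xi                     # perceptron weight update
        b += err
errors = np.sum(step(X @ w + b) != y)
print("single perceptron errors:", errors)   # always at least 1 on XOR

# --- Two layers of the same threshold units, weights fixed by hand ---
# The hidden units compute OR and AND; the output unit computes
# OR AND NOT AND, which is exactly XOR.
W1 = np.array([[1.0, 1.0], [1.0, 1.0]])   # input -> hidden weights
b1 = np.array([-0.5, -1.5])               # thresholds for OR and AND
W2 = np.array([1.0, -1.0])                # hidden -> output weights
b2 = -0.5
out = step(step(X @ W1 + b1) @ W2 + b2)
print("two-layer network output:", out)   # [0 1 1 0], matches y
```

The limitation Minsky pointed at is real; the escape route, depth, was hiding in plain sight.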

Yet his remarks can be seen as ultimately constructive criticism, as they helped the community explore the weaknesses of the original approaches. Also, deep learning may be the best we have so far (and how marvellous its applications are), but it should not be regarded unconditionally as the Holy Grail of AI.

Minds, brains, and programs

In 1980 John Searle got angry. Although he probably got angry earlier, this is the moment he decided to publicise his disagreement with strong AI. Indeed, even the title sounds sarcastic. I feel like Searle is grabbing me by the collar and, vigorously waving his finger, saying: “Let me help you make some fundamental distinctions, young lad”.

“One gets the impression that people in AI who write this sort of thing think they can get away with it because they don’t really take it seriously, and they don’t think anyone else will either. I propose, for a moment at least, to take it seriously.”

Searle is solely attacking the notion of strong AI, which he identifies with the capability of a computer to exhibit any human-like behaviour. He translates this into the ability of a machine to demonstrate consciousness, which he then disputes by analogy. His famous thought experiment, the Chinese room, goes like this:

You are a monolingual English speaker locked in a room with the following things: a large batch of Chinese writing (called a script), another large batch of Chinese writing (called a story) and a set of English rules instructing you how to match the Chinese symbols of the second batch to those of the first (called a program). Then you are given another batch of Chinese writing (this time called questions) and another set of English instructions with rules that match the questions to the other two batches. Congratulations, you just learned Chinese!

This is the Chinese Room experiment, introduced by Searle in 1980. A thought experiment is not an experiment per se, as its goal is not to be conducted, but to explore the potential consequences of an idea. The oldest and most famous is probably Galileo’s Leaning Tower of Pisa experiment (did you also think Galileo actually dropped the balls from the tower?).

Searle’s point is that being able to produce Chinese answers to Chinese questions does not mean that you understand Chinese, if this ability comes from following rules in another language. As a consequence, a machine that gives the expected output after being given the appropriate algorithm should not be considered a ‘thinking’ entity.


What Searle does not dispute is the ability of a program to “think”, in the sense of performing some functional reasoning. He accuses the AI researchers of his time of being behaviouristic and operationalistic, as they attempt to equate a program with a mind (which they indeed do), while setting aside the importance of the brain.

According to him, consciousness can come only from biological operations, and since a program is totally independent of its implementation (it can run on any hardware), it cannot exhibit consciousness.

Reading the original text, one gets the feeling that he is attacking an immature community of computer scientists that has not bothered to reach a consensus on what intelligence is, yet attempts to simulate it, guided by teleological approaches and speculation.

Minsky’s response to Searle, and to philosophical approaches in general, is as nihilistic as it gets: “they misunderstand, and should be ignored”.

Elephants don’t play chess.

And you should not make them feel bad about it. This paper, written by Rodney A. Brooks in 1990, is the attempt of a Nouvelle AI evangelist to persuade us, employing both arguments and his robotic fleet, that the classical approach to AI should leave some space for his own.

To get a feeling for that era: AI was experiencing its second winter. Funding was being cut as companies and governments realised that the community had set expectations too high.

So, time for introspection. When something fundamentally fails, there are two ways to read it: either the goal is impossible to achieve, or your approach is flawed.

Brooks suggested that AI’s stagnation was due to its dogma of symbolic representation. The symbol system hypothesis is a long-standing view of how intelligence operates. According to it, the world involves entities, like people, cars and cosmic love, so it is natural to map them to symbols and feed machines with them. If this hypothesis is correct, then you have provided the machine with all the information it needs to “come up” with intelligence.

Although this assumption does not seem problematic, it has some far-reaching consequences that might account for the bad performance of AI:

  • The symbol system is not adequate to describe the world. According to the frame problem, it is a logical fallacy to assume anything that is not explicitly stated. To this point, Brooks charmingly suggests: why not take the world as its own model?
  • Intelligence cannot emerge from simple calculations. The immense use of heuristics that is necessary to train intelligent algorithms contradicts our attempt to create knowledge. (Your grid search is insulting to human intellect.)
  • AI’s obsession with ensuring the generality of a learned model has led to a phenomenon Brooks calls puzzlitis: excessive effort spent on proving that the algorithm works in obscure cases. It is certainly an attractive ability, but it does not seem to be a fundamental consequence of knowledge, and our world is rather consistent.

Brooks’ counterproposal is the physical grounding hypothesis: allow Artificial Intelligence to interact directly with the world and use it as its own representation. This certainly changes the standard practice of AI: instead of learning that requires immense computational resources, guidance from experts and a never-satisfied need for training data, Brooks suggests equipping physical entities with cheap hardware and unleashing them in the world. But does this underestimate the problem?

Brooks sees intelligence arising from collective behaviour, not from sophisticated parts. Perhaps the most profound observation from his experiments concerns how “goal-directed behaviour emerges from the interactions of simpler non-goal-directed behaviours”. There need not be a predetermined coordination pattern; an intelligent machine should draw its own strategies for interacting optimally with the world.
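That sentence can be made concrete with a toy sketch in Python (the corridor world, the layer names and the priority scheme are my own illustration, not Brooks’ robots): two purely reactive layers, one that cruises forward and one that turns around at walls, end up patrolling an entire corridor, although neither layer mentions patrolling or coverage.

```python
# A minimal, deterministic sketch of a subsumption-style controller.
# The corridor world and the layer names are illustrative, not Brooks' robots.

CORRIDOR_LENGTH = 10                       # cells 0 .. 9, walls on either side

def avoid(position, direction):
    """Higher-priority layer: if the next cell is a wall, turn around."""
    ahead = position + direction
    if ahead < 0 or ahead >= CORRIDOR_LENGTH:
        return -direction                  # its output suppresses the layer below
    return None                            # not applicable, defer downwards

def cruise(position, direction):
    """Lower-priority layer: keep going the way you are already going."""
    return direction

position, direction = 0, +1
visited = {position}
for _ in range(50):
    # Query layers from highest to lowest priority; the first applicable
    # one decides, with no central plan and no explicit goal.
    suggestion = avoid(position, direction)
    direction = suggestion if suggestion is not None else cruise(position, direction)
    position += direction
    visited.add(position)

# The robot ends up patrolling back and forth and visits every cell,
# although no layer encodes "patrol" or "cover the corridor".
print(f"visited {len(visited)} of {CORRIDOR_LENGTH} cells")
```

The regularity lives not in a symbol for “patrol” but in the coupling between simple reflexes and the structure of the world, which is roughly what the physical grounding hypothesis asks us to take seriously.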

Brooks’ argument from evolution goes a long way towards persuading us of the importance of the physical grounding hypothesis: humans are the most common and closest example of intelligence we have. Thus, in our attempt to re-create this characteristic, isn’t it natural to observe evolution, the slow, adaptive process that gradually led to the formation of human civilisation? Now, if one considers the time it took us to evolve skills such as interacting, reproducing and surviving, in contrast to our still young abilities to use language or play chess, then one may reach the conclusion that the former are the hardest skills to develop. So why not focus on them?

Although ecstatic about the practicality of his approach, Brooks acknowledges its theoretical limitations, which can be attributed to the fact that we have yet to develop a complete understanding of the dynamics of interacting populations. Once more, an engineer’s disregard for philosophical objections is evident:

“At least if our strategy does not convince the arm chair philosophers, our engineering approach will have radically changed the world we live in.”

AI manifests progress

Despite floating in a sea of questions, AI manifests something we cannot dispute: progress. Nevertheless, stripping current applications of the effects of technological advancement and heuristic advantages, in order to acquire an accurate perception of the quality of current research, is a tedious task.

Will deep learning prove a worthy tool for satisfying our ever-demanding criteria of intelligence? Or is this just another interglacial period before AI reaches winter again?

What’s more, the concerns and questions have shifted from purely philosophical to social, as the consequences of AI for everyday life are becoming more obvious and pressing than the need to understand consciousness, God and intelligence. Yet these may be even more difficult questions to answer, and they may urge us to dig even deeper.

When Wittgenstein wrote the Tractatus, he was confronted with the danger of a fundamental fallacy: his arguments fell victim to the doctrine of his work. That is, if one accepted his doctrine as true, his arguments were illogical, and thus his doctrine had to be false. But Wittgenstein thought differently:

“My propositions are elucidatory in this way: he who understands me finally recognises them as senseless, when he has climbed out through them, on them, over them.”

To understand the truth behind a complicated idea, we need to evolve. We must stand firm on our previous step and be willing to abandon it. Not every step has to be correct, but it has to be understood. When later confronted with this argument, Wittgenstein said that he did not need a ladder, as he was capable of approaching the truth directly.

We may still need it.