
Are You A Robot? - revisited

As the world ruminates over the impact of AI, The Business Of Pleasure has dived deep into the murky depths of its data pool to retrieve the episode 'Are You A Robot?' to help some of our human readers understand their electronic third-cousins-twice-removed a bit better...

“I’M NOT A ROBOT” is probably one of the most frequently repeated statements in The Business of Pleasure. The ultimate box-ticking exercise in response to our ticketing websites’ attempts to prevent bots from gobbling up inventory on behalf of unauthorised resellers. But is it more human, or more robotic to simply hit that tick without giving the question a bit more thought? How are we different? What makes us unique? What are the contrasting ways in which we approach questions and situations? And, more crucially, how did our understanding of human intelligence shape the development of Artificial Intelligence?

We’ll start with human intelligence …because we were here first!


The timing of CAPTCHA, in the late 1990s, was more than a little ironic. The early ‘Completely Automated Public Turing test to tell Computers and Humans Apart’ arrived on the scene slightly ahead of the scientific breakthrough which revolutionised our understanding of how we humans interact with the world. Although long-anticipated by philosophy, it was a single piece of research published in 1999 that brought about this change. To quote one leading scientist on the subject: “In the 20th century we thought the brain extracted everything it needed to know from its sensations, the standard ‘sandwich’ model of stimulus-cognition-response. Whereas in the 21st century … the brain became an organ for inference, constructing explanations for resolving uncertainty about what’s going on ‘out there’…”

The Predictive Brain

The particular type of inference that this organ relies on is called Abductive Inference. Nothing to do with kidnapping (as I first assumed when I read it), it is probably best described by contrasting it with other forms of inference…

With Deductive Inference things are pretty cut and dried.

All bears are mammals

All mammals have lungs

If both statements are true we can deduce that

All bears have lungs
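The guaranteed nature of deduction can be sketched, purely for illustration, as set containment (the example animals here are invented, not from the original):

```python
# Deduction as set containment: if every bear is a mammal and every mammal
# has lungs, then every bear has lungs -- the conclusion is guaranteed.
bears = {"grizzly", "polar bear"}
mammals = bears | {"dolphin", "human"}      # all bears are mammals
lunged_animals = mammals | {"crocodile"}    # all mammals have lungs

# Subset relations are transitive, so the conclusion follows automatically.
print(bears <= lunged_animals)  # True
```

Unlike the inductive examples that follow, there is no guesswork here: if the premises hold, the conclusion cannot fail.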

Inductive Inference, however, involves making assumptions based on limited knowledge such as generalizations from small samples:

The red-headed people I know frequently wear powder-blue sweaters; therefore all red-headed people frequently wear powder-blue sweaters.

Or Statistical Inductive Reasoning:

Since 95% of UK red-heads frequently wear powder-blue sweaters, red-heads around the world frequently wear powder-blue sweaters.

Causal Inference:

In the third week of June there were Inbound Tourists in London; therefore next year the third week of June will bring Inbound Tourists to London.

Analogical Inductive Reasoning:

Jack and Jill are red-heads and frequently wear powder-blue sweaters. Sarah is also a red-head. Therefore Sarah also frequently wears powder-blue sweaters.

Predictive Inductive Reasoning:

In the past, Inbound Tourists have always started coming to London in the third week of June. Therefore next year Inbound Tourists will come to London from the third week of June.

Abductive Inference:

Abductive Inference is a sub-species of Inductive Reasoning that makes assumptions, or predictions, based on the understanding of the evidence to hand, which, in turn, is based on previous experience of similar or analogous situations.

Examples of abductive inference include a doctor making a diagnosis based on test results and a jury using evidence to pass judgement on a case; in both scenarios there is not a 100% guarantee of correctness, just the best guess based on available evidence.

The difference between abductive reasoning and inductive reasoning is a subtle one; both use evidence to form guesses that are likely, but not guaranteed. However, abductive reasoning looks for cause and effect relationships while induction seeks to determine general rules.

ARTIFICIAL INTELLIGENCE - a short history of Neural Networks

Towards the end of the Second World War, scientists attempted to explain activity in the neurons (our body’s processors and cabling) in terms of what they then understood about electricity, and how this might enable the creation of simple networks which could process information mathematically. In doing this they were consciously building on the work of the German mathematician-philosopher and computer pioneer Gottfried Leibniz (1646-1716) who had shown that: “Any task which can be described completely and unambiguously in a finite number of words can be done by a logic machine.” This approach culminated, in 1943, with McCulloch and Pitts’ landmark paper ‘A Logical Calculus of Ideas Immanent in Nervous Activity.’

To illustrate the type of advance this thinking led to, the late '50s and early '60s saw the development of two working models, or machines, ADALINE ('ADAptive LINear Element') and MADALINE ('Multiple ADAptive LINear Elements') “for the purpose of illustrating adaptive behaviour and artificial learning.” This 'learning,' put very simply, reduces the number of wrong predictions by iteratively minimizing the average error. Examples of input and desired output are fed to the system in a step-by-step manner, and “as the experience accrues, the system’s competence accrues too – the more examples, the better the performance,” i.e. it is an iterative process which allows changes towards ever-better performance over time.
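Put in code, that error-corrective idea might look something like this minimal sketch. It is not the original ADALINE hardware, just the same least-mean-squares principle, shown here learning the logical AND function from examples:

```python
# A minimal ADALINE-style learner: after each example, nudge the weights
# in proportion to the prediction error (the least-mean-squares rule).
weights = [0.0, 0.0]
bias = 0.0
rate = 0.1

# Examples of input and desired output: here, the logical AND function.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

for _ in range(50):                          # more passes, better performance
    for (x1, x2), target in examples:
        output = weights[0] * x1 + weights[1] * x2 + bias
        error = target - output              # how "wrong" this prediction was
        weights[0] += rate * error * x1      # correct each weight a little
        weights[1] += rate * error * x2
        bias += rate * error

def predict(x1, x2):
    # Threshold the linear output to get a yes/no answer.
    return 1 if weights[0] * x1 + weights[1] * x2 + bias > 0.5 else 0

print([predict(x1, x2) for (x1, x2), _ in examples])  # [0, 0, 0, 1]
```

The more examples the system sees, the smaller its average error becomes: exactly the "experience accrues, competence accrues" behaviour described above.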

Only it turned out that learning was not quite so simple.

In the early 1970s, scientists working on how we humans learn noted how this particular version of 'error-corrective learning' was subject to bias, because initially 'learned' connections can subsequently 'block' those that follow. In short (very short!) they concluded, as we covered in our podcast on 'Surprise!', that we only learn when we're, well …surprised.


It was The Business of Pleasure that, purely by accident, fueled the next series of leaps in artificial intelligence. We have now reached the 1980s and the tsunami of demand for increasingly sophisticated computer gaming led to the development of ever faster computers. The scientific community immediately took advantage of this greatly increased computing power to continue their work, ‘teaching’ Artificial Intelligence how to draw and drive and execute basic business admin functions. The 1990’s saw a further period of acceleration, but with a twist; the machines’ masters started to throw down the gauntlet and challenge humans to compete with their creations.

Some of it was playful stuff:

In 1994 the World Draughts Champion, Marion Tinsley, was forced to resign a match against the Chinook programme.

In 1997 the World Chess Champion, Garry Kasparov, was defeated by IBM’s Deep Blue.

But the machines weren’t just making moves across the chess and chequers boards. Pretty soon autonomous and semi-autonomous cars were driving across Europe and America, and robots were boldly going where no man could go before… along the floors of oceans, across the vast frozen wastes of Antarctica, and all over the internet as spiders for search engines. In fact, the question became: ‘Is there anywhere they can’t go?’ To answer this, let’s go back to those words of Leibniz: “Any task which can be described completely and unambiguously in a finite number of words can be done by a logic machine.”

To put that another way…

A couple of years ago I listened to a radio programme on how technology (in the form of bridges) had disrupted the livelihoods of the Worshipful Company of Watermen (established 1514) who held the monopoly for ferrying passengers across the Thames. This change was contrasted with the more recent disruptive threat that Uber presented to licensed London black cab drivers, and the programme concluded with the CEO of a leading AI company being asked (and I paraphrase): “So what career advice should parents give their children to help them avoid being replaced by machines?” The AI CEO thought about this for a few seconds before replying: “If your children have a burning vocation to become something like a doctor or lawyer or engineer, indulge them in their passion, but make sure they have something more solid to fall back on, like music, acting or dance. Because those are the only things the machines can’t do.”

Only the line is not quite as clear-cut as they made it out to be, as I learnt at a seminar at Stationers’ Hall, the home of the Worshipful Company of Stationers and Newspaper Makers (founded 1403). Here, among the stained glass windows depicting legendary disrupters Johannes Gutenberg and William Caxton, we learned how robots were already replacing cub journalists in writing up sports reports, and listened to chapters of novels written by digital wizards replicating the writing style of JK Rowling. In fact, how do you know that this podcast isn’t written, and read, by machines?

Don’t answer that!

Now the prospect of robot invasion will doubtless strike fear into the hearts of many within The Business of Pleasure. But that’s the wrong way to look at it. As one recent scientific paper puts it:

“For the time being, AI systems will have fundamentally different cognitive qualities and abilities than biological systems. For this reason, a most prominent issue is how we can use (and ‘collaborate’ with) these systems as effectively as possible.”

And here’s a topical for-instance…

Like many other areas of life, most sectors of The Business of Pleasure have been disrupted by the Covid pandemic and its after-shocks. Our audiences are displaying new behaviours, reflecting changing priorities, and therefore our understanding of these audiences has to be rapidly recalibrated, time after time, and our product lines, and how we communicate them, need to be updated accordingly. And while AI will not provide the silver bullet, it is incredibly useful, and fast, at detecting changes in moving targets.

A final caveat to our audience:

At the time of writing, there is no generally accepted definition of Artificial Intelligence. Part of the problem is that most definitions, like this blog, attempt to define it by contrasting it with human intelligence, which is itself being constantly redefined as we (with the help of our machines) learn more about ourselves. One paper, written in 2009, probably sums it up most succinctly:

“…In line with this, AI is then defined as “the study of how to make computers do things at which, at the moment, people are better.”

But no matter how useful robots may be in serving The Business of Pleasure (or damaging it as ticket-buying bots!) they will never, ever, become members of our audience. And perhaps that, ultimately, will be the final, and solitary, defining difference.

Copyright David Thomas 2021

