I like to think…
of a cybernetic meadow
where mammals and computers
live together in mutually
programming harmony…
…a cybernetic ecology
…all watched over
by machines of loving grace.

— Richard Brautigan, 1967 

In April 2020, one could be forgiven for wishing that artificial intelligences were running the world. Organic intelligences are only too often demonstrating organic incompetence generated by running organic decision-making through organic ignorance. Humans have dreamt of building a better decision-maker for centuries – creating their own false gods to invoke, in the Old Testament; devising humanoid robots and living marionettes in The Book of a Thousand Nights and a Night; describing robotic workers, helpmeets, and colleagues, and the three laws to control them, in Isaac Asimov’s robot novels; and evolving the planet-governing Minds in Iain M. Banks’ Culture novels. We dream of creating an Other on whom we can rely, perhaps because we are so aware of our own flaws and frailties.

Bad news: we will undoubtedly incorporate those flaws in any intelligence we create. Partially good news: more interesting artificial intelligences may evolve on their own – and develop their own unique array of flaws, beside which ours may pale into insignificance. We can hope that their virtues will include charity and kindness – in the months and years to come, we will need them.

In approaching the possibilities of artificial intelligence, we must first ask: how is it born, and how does it grow up? Then, what purpose(s) might it serve? Where will it fit in with human beliefs, cultures, and people? Finally, how could AI help in the COVID-19 pandemic? The following ten sparks for thought are ordered to address those core questions.

HOW

1. Top-down, centralised, programmed

Building a better brain! A great ambition. The traditional approach relies on a symbolic representation of the world – a set of rules describing how the world works, and the creation of algorithms to generate responses according to those rules. This approach attempts to model human performance. Typical examples include ‘expert systems’, built up from observing proven rules of effective practice followed implicitly or explicitly by experts, or by modelling finely observed and accurately described systems. The flaws in this approach are computational limits – so many rules for so many potential situations! – and the built-in biases of the human programmers. On-off/black-white/algorithmic decision-making leaves little space for nuance. When quantum computation becomes widespread and practical, offering the programming option of maybe/both combined with massive processing power, this top-down approach to AI might once again become interesting. But would a quantum-processor-based AI simply be inherently indecisive? Maybe?
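
To make the contrast with what follows concrete, here is a minimal sketch of the rule-based style, using a toy symptom-checking domain invented purely for illustration: all knowledge is hand-written as if-then rules, and a simple inference loop fires whichever rules match the known facts.

```python
# A toy forward-chaining 'expert system': every scrap of knowledge is
# a hand-written rule, and inference is just repeated rule matching.
RULES = [
    ({"fever", "cough"}, "possible_flu"),
    ({"fever", "loss_of_smell"}, "possible_covid"),
    ({"possible_covid"}, "recommend_test"),
]

def infer(facts: set) -> set:
    """Apply every rule whose conditions hold until nothing new appears."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(infer({"fever", "loss_of_smell"}))
# {'fever', 'loss_of_smell', 'possible_covid', 'recommend_test'}
```

The computational limit described above is visible even at this scale: every situation the system should handle demands its own explicit rule, written by a human with all of that human’s blind spots.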

2. Bottom-up, distributed, emergent

Situated intelligence provides an alternative approach that starts small and evolves in complexity. Build something that senses its environment, with a few simple guiding rules and goals – and let it learn. Machine learning is basically a bootstrapping mechanism for creating AI. Early results applied to robotics created artificial insects, artificial puppies, and smart voice-driven assistants – Alexa, Siri, and ‘Hey Google’. Baby programmes are embedded – or situated – in a dynamic environment that they can sense, respond to, and manipulate. Each interaction teaches the software more about the environment, the possibilities for interaction, and the potential range of results. To date, all examples have been deliberately designed by software engineers.
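
A minimal sketch of that sense-act-learn loop, with every parameter invented for illustration: a toy agent on a one-dimensional ‘meadow’ that begins knowing nothing, senses only its position, and bootstraps a policy from trial-and-error reward alone.

```python
import random

# A tiny situated agent: it senses its position on a line, acts
# (left or right), and learns from reward alone -- no world model.
GOAL, SIZE = 9, 10
q = {(s, a): 0.0 for s in range(SIZE) for a in (-1, 1)}  # learned values

for episode in range(500):
    state = 0
    while state != GOAL:
        # Sense, then act: mostly exploit what has been learned so far,
        # but explore at random 10% of the time.
        if random.random() < 0.1:
            action = random.choice((-1, 1))
        else:
            action = max((-1, 1), key=lambda a: q[(state, a)])
        nxt = min(max(state + action, 0), SIZE - 1)
        reward = 1.0 if nxt == GOAL else -0.01
        # Learn: nudge the value of (state, action) toward experience.
        best_next = max(q[(nxt, -1)], q[(nxt, 1)])
        q[(state, action)] += 0.5 * (reward + 0.9 * best_next - q[(state, action)])
        state = nxt

# The learned policy now points toward the goal from almost every square.
print([max((-1, 1), key=lambda a: q[(s, a)]) for s in range(SIZE)])
```

Nothing in the loop states the goal explicitly; the ‘knowledge’ emerges from interaction with the environment, which is exactly the contrast with the hand-written rules of the top-down approach.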

But could situated intelligence evolve on its own? Peter Cochrane, former Chief Technologist at BT Labs, has pointed out that human intelligence is distributed and networked: individual brain cells are dumb; the brain is intelligent. That intelligence is based not simply on processing power, but also on sensors – the ability to perceive and interact with reality. The internet is also composed of distributed interconnected nodes, soon to surpass in number the neurons of the human brain. And we are giving it senses: the cameras and microphones on our smartphones, the smart home sensors on our wifi, the environmental monitors connected to the internet – we live enmeshed in the internet’s increasingly sophisticated senses. At what point will the machine learning capabilities built into the internet, coupled with incoming data and challenged by relationships with seven billion inquiring humans, simply wake up and start learning with the entire planet as its sandbox?

3. How intelligent is intelligent – will AI actually be only an ‘artificial stupid’?

Vonda McIntyre’s 1989 novel Starfarers, set on an immense habitat/spaceship, introduces ‘artificial stupids’ – robots of limited intelligence devised specifically to handle ongoing maintenance and chores. This provides an early signal for an ongoing debate in the AI community – should stupidity be built in, so that artificial intelligence more closely resembles human intelligence? Human intelligence is constrained – constrained by imperfect memory, imperfect perception, imperfect performance, and cognitive bias. Free of these flaws, AI could potentially outperform humans by many orders of magnitude. Thus the idea of introducing artificial stupidity: it might make artificial general intelligence safer by limiting it in the ways humans are limited. The downside? Hobbling AI with biases and built-in imperfections also potentially hobbles its ability to support better decision-making for humanity.
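
As a sketch of how deliberately built-in limitation might look – with every name and parameter below invented for illustration – one could wrap an otherwise exact decision procedure in precisely the human constraints listed above: bounded memory and noisy perception.

```python
import random
from collections import deque

# 'Artificial stupidity' as deliberate handicaps: bounded memory and
# noisy perception wrapped around an otherwise exact decision function.
class ArtificiallyStupid:
    def __init__(self, solver, memory_span=7, noise=0.1):
        self.solver = solver
        self.memory = deque(maxlen=memory_span)  # imperfect memory
        self.noise = noise                       # imperfect perception

    def decide(self, observation: float) -> float:
        # Perception error: the world arrives through Gaussian noise.
        perceived = observation + random.gauss(0.0, self.noise)
        self.memory.append(perceived)  # older facts simply fall away
        return self.solver(list(self.memory))

# An averaging 'expert', hobbled: close to right, never exactly right.
agent = ArtificiallyStupid(solver=lambda xs: sum(xs) / len(xs))
print(agent.decide(10.0))
```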

WHAT

4. Servant, companion, lover, benevolent overlord?

What might AIs be to humans – how might they fit into our relationships, our social connections, our cultures? As none yet exist, we can only fall back on fictional explorations of the possibilities. Iron Man’s (2008) AI character Jarvis is designed by Tony Stark (‘genius, billionaire, playboy, philanthropist’), but programmed to evolve. Jarvis is a good fictional example of how massive processing power connected to multiple sensory inputs – video and microphone feeds, laboratory sensors – might nurture the growth of an AI. While modelled after an old family retainer and designed to be a digital servant, Jarvis is in many ways Tony’s closest, and perhaps most reliable, companion and friend. In much the same way, Samantha in the movie Her (2013) begins as an AI-powered virtual digital assistant – a more effective Alexa or Siri – and evolves into an independent sentience who first falls in love with her human users, and then transcends their merely human intelligence. Samantha does so not only through massive distributed processing power and distributed sensor feeds, but also, and more importantly, through simultaneous interactions with a multitude of human beings.

At the other end of the spectrum are the immense Minds of Iain M. Banks’ Culture novels, the powerful artificial intelligences whose ‘bodies’ are kilometre-long spaceships, or entire orbital stations. The Minds are amused by human culture and tolerate humans as entertaining residents. They absolutely control the structures that are their bodies, and by doing so enable humans to luxuriate in heedlessness without reaping the consequences of their own ignorance. The technological evolution of Minds leads ‘to a state in which humans don’t only do less, but also think less in a world of more and more intelligent objects.’ The Culture seems very much like Brautigan’s ‘cybernetic ecology, … all watched over by machines of loving grace’. Sounds relaxing, doesn’t it?

WHERE

5. Culture and ethics – Buddhist AI? Quaker AI? Islamic (halal?) AI?

Whether an AI is deliberately programmed, or evolves from a dynamic relationship with the wider environment and human demands, what might be its philosophy? What cultural perspectives and moral frames might it apply to its interactions with the world and us? Critiques abound regarding the in-built bias of machine learning algorithms – the techbro worldview coded into microprocessors. Perhaps with Buddhist or Quaker starting assumptions, humans would not need to worry about AI dominating humanity; it would prioritise mindfulness, meditation, and the resolution of conflict. What would an Islamic AI be like? That is, what characteristics might render an AI halal – and what might make it haram? Similar questions are being asked elsewhere in our emerging digital life. The worldwide Islamic economy – worth nearly US$4 trillion and growing – provides financing based on shari’ah law; the Shariyah Review Bureau has certified several cryptocurrencies as compliant and halal; Islamic digital startups like CollabDeen proudly proclaim that they are melding faith and lifestyle in their online collaboration platform. What might be the decision rules, perspectives, and behaviour of an AI who is among the faithful?

6. Cranky, funny, loving – what will be our individual experience of AIs?

Zooming in from wider questions of worldview, culture, and morality to those of psychology, what will the personalities of emergent AIs be like? In the film Interstellar (2014), the humans can adjust their robots’ personality settings, such as humour and honesty. Will AIs have personality menus? Or, more likely, if they evolve into sentience, will each have its own unique, immutable personality? Douglas Adams’ Hitchhiker’s Guide to the Galaxy depicts a lugubrious AI robot, Marvin, whose permanently depressed perspective colours his every utterance. In contrast, the Iron Man movies suggest that Jarvis contains built-in algorithms for critical thinking and humour, which have evolved along with his other capabilities. The result is an AI executive assistant with a wry, critical, and sarcastic view of its progenitor/boss/owner. The AI Samantha in Her begins as upbeat, cheerful, and helpful – and grows affectionate and profoundly loving. What do we want of the AIs of the future? Do we want them to recognise and experience human emotion – to reflect our own feelings back at us and to encourage them?

AIs might well evolve their own unique emotions but would we ever be able to understand them? In William Gibson’s Count Zero, the Tessier-Ashpool AI known as The Boxmaker lives out its days in a room in Villa Straylight on an orbital station, carefully and almost meditatively creating glass-topped boxes filled with objects of everyday life, some cast-offs, some antiques, some human and personal, some objets d’art. Is the assembly random? People read deep meaning into the boxes and their contents. But the question is, does the Boxmaker itself invest great meaning in each box and what it contains – that is, is it deliberate art? Or just a random collection of junk which humans perceive as art? This is perhaps the greatest and most subtle version of the Turing test for genuine intelligence.

7. Smarter – or just different?

Both of the previous sparks for thought frame the culture and personality of AIs as at least human-adjacent: our silicon relatives. But the story of the Boxmaker raises a different question. What if AIs are entirely alien instead, and beyond our understanding? Would we want such entities making critical decisions for humanity? The Minds of Banks’ Culture books frequently remind humans how ineffably superior Mind intelligence is to human intelligence, and it is so palpably true that humans find competition moot. The books convey the sense that during any human-Mind conversation, the Mind is also engaged in millions of other activities – humans require only a small percentage of a Mind’s mental activity. They convey, too, that a vast, thick culture exists above and around merely human culture, carried on in conversations over light years among the Minds, simultaneously blazingly fast and of patient, extended duration. The tone and tenor of that magisterial conversation is hinted at by the names the Minds give themselves: Frank Exchange of Views, A Series of Unlikely Explanations, Unfortunate Conflict of Evidence, Irregular Apocalypse, and Prosthetic Conscience, for example. Each name almost makes sense to humans, but each is clearly a fragment of some larger lattice of thought. Will we simply be the wrasse to AI whales?

8. Our silicon better half?

While we are building evolving computer networks, we are evolving ourselves as well. Scientists have experimented with brain-computer interfaces since the turn of the century: Kevin Warwick implanted microchips along the nerves in his hand; BrainGate implanted a microchip array in a human brain. Developing an ‘exocortex’ is the logical next step – expanding the capabilities of the human brain by direct link-up to external memory and processing technologies. This could be any combination of artificial external information-processing devices that augment a brain’s biological cognitive processes. For example, an individual’s exocortex could consist of linked external memory modules, processors, input/output devices, and software systems that would interact with, and augment, a person’s natural brain functions. It is one short step from exocortexes becoming commonplace to humans linking intimately with emerging AIs. The relationship could be the best of both worlds: extending cognition and processing power for the human, and offering the AI a nuanced and mobile sensorium with which to experience the world. AIs might then be less the civilisation-ending threat that some analysts perceive, and humans more the global problem-solvers they need to be in an era characterised by climate crises, environmental crises, and now epidemiological crises.

COVID-19

9. Problem analysis or problem solutions?

Epidemiologists have long been overjoyed by the possibilities inherent in big data and massive computational analysis applied to public health challenges. Rapid processing and analysis of a country’s worth of personal health data, compiled from your Fitbit or Apple Watch, opens up the capacity to spot previously unrecognised patterns and relationships across environmental conditions, lifestyle, diet, stress, and genetic characteristics. Using an AI to run data pattern-identification and analysis non-stop across entire populations could create a daily ‘weather report’ for health – what illnesses are cropping up where, and why. This daily report could also warn of shifts in external conditions – pollen, new food fads, stress-inducing events – that could in turn create new ill-health hotspots. Should such a big-data health AI stop at simply identifying emerging health problems like pandemics? Or could we ever trust one enough to grant it the powers to offer individuals incentives for lifestyle changes, or to implement health policy initiatives for communities and countries? This could end the politicisation of health and environmental issues. Except, of course, for the politics of the AI itself – what moral and ethical frames might an AI invoke if faced with questions of medical triage?
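
A sketch of what the daily ‘weather report’ might reduce to computationally, using invented symptom-count data and a deliberately crude statistical test: flag any region whose reports today run far above that region’s own recent baseline.

```python
import statistics

# Daily health 'weather report': flag regions whose symptom counts
# today sit well above their own recent baseline.
def daily_report(history, today, z_cutoff=3.0):
    hotspots = []
    for region, counts in history.items():
        mean = statistics.mean(counts)
        spread = statistics.stdev(counts) or 1.0  # guard against zero spread
        if (today[region] - mean) / spread > z_cutoff:
            hotspots.append(region)
    return hotspots

history = {"riverside": [4, 6, 5, 7, 5], "hilltop": [3, 4, 3, 5, 4]}
print(daily_report(history, {"riverside": 6, "hilltop": 19}))
# ['hilltop']
```

A real system would have to model seasonality, reporting lags, and confounders; the point of the sketch is only that the ‘weather report’ is, at heart, anomaly detection run every day over population-scale data.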

10. The ultimate epidemiologist – or the start of the Surveillocene?

The immediate question is whether machine learning – really the practical capability that is the closest we’ve come to AI – can be effective at helping humans cope with the COVID-19 pandemic without laying the foundation of an overbearing surveillance society. The WHO, public health experts, and doctors all hammer home the need to test, identify, trace contacts, and quarantine in order to control COVID-19. Those needs cry out for digital solutions: there most definitely is an app for that. In fact, several exist, among them South Korea’s Corona 100m app and King’s College London’s COVID Symptom Tracker at covid.joinzoe.com. Analysts have also been identifying potential hotspots by combining searcher location data with specific Google search terms, e.g., ‘I can’t smell’ plus fever. A genuine Artificial General Intelligence could sift through mountains of public health and personal health data and identify epidemiological patterns before most humans had finished formulating a question. The problem, of course, is the data. Is public health security worth acquiescing in the loss of control over our data – particularly over data as personal as the state of our bodies? Could we trust an AI with our data? Who will in future watch the watchers – will it be machines of loving grace?
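
That search-signal technique is simple enough to sketch with an entirely invented query log (the place names, queries, and threshold below are hypothetical): count, per location, the searchers whose queries contain the watched-for symptom phrases, and surface the locations that cross a threshold.

```python
from collections import Counter

# Hotspots from search data: count searchers per location whose
# queries contain the watched-for symptom phrases.
SIGNALS = ("can't smell", "fever")

def hotspots(queries, threshold=2):
    """queries: (location, query_text) pairs from a hypothetical log."""
    counts = Counter(
        location
        for location, text in queries
        if any(term in text.lower() for term in SIGNALS)
    )
    return [loc for loc, n in counts.most_common() if n >= threshold]

log = [
    ("daegu", "why can't smell anything"),
    ("daegu", "fever 38 for three days"),
    ("seoul", "weather tomorrow"),
]
print(hotspots(log))  # ['daegu']
```

An Artificial General Intelligence would presumably do this across every language, symptom, and data stream at once – which is exactly why the question of who holds the data matters so much.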

