
This article first appeared as "AI and Covid-19" by John A. Sweeney in Critical Muslim 34: Artificial (Spring 2020), edited by Ziauddin Sardar and published by the Muslim Institute and Hurst Publishers. To see other great articles and to get your copy of this or past issues of Critical Muslim, visit their website. And perhaps you are interested in supporting a good cause and obtaining all the benefits that come along with becoming a fellow of the Muslim Institute.
Dr. John A. Sweeney is an award-winning futurist, designer, and author. He serves as an Assistant Professor of Futures and Foresight at Narxoz University, where he is also the Director of the Qazaq Research Institute for Futures Studies. John also currently serves as the Foresight Advisor for INTERPOL and as co-Editor of World Futures Review: A Journal of Strategic Foresight. John tweets on trends, emerging issues, and all things postnormal at @aloha_futures.
The Covid Chronicles
AI and Covid-19
Allow me to open with an obtuse provocation: Artificial Intelligence (AI) does
and does not exist. This is not an allusion to ‘AIs’ that have been painstakingly
developed only to be later decommissioned, as was the case with
Facebook’s recent chatbot experiment, although this is precisely where our
odyssey into unpacking the above assertion begins. While there were
numerous reports that the social media titan’s two AIs, who were known
as Alice and Bob, had developed their own language and began having
independent conversations, the truth of the matter is far more complex.
Alice and Bob were indeed communicating with one another in ways that
the programmers could not understand, but they were doing so in a
modified version of English, which suggests that the primary issue was not
hyper-intelligence run amok but an all-too-human-esque unwillingness to
follow the rules, so to speak, of English grammar. In short, Alice and Bob,
who were actually ‘neural nets,’ found a more efficient means of
communicating using aspects of English, but as the programmers could not
figure out what was being said, the entire research program was scrapped.
Neural nets, shorthand for artificial neural networks, are not only
somewhat inspired by biological brains; they are engineered to learn
through examples, rather than through explicit instructions and tasks,
which is to say that experience drives how they come to ‘know’ things and,
when asked, provide solutions to specific challenges. This makes neural
nets extremely adept at ‘pattern recognition’ problems, which involve
identifying ‘signals’ (some insight) amidst a sea of noise (large-scale data
sets), but this is also what led Alice and Bob to begin speaking gibberish,
at least from the programmers’ perspective. This anecdote will return as a
parable of sorts at the end of our journey.
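To give a feel for what ‘learning through examples’ means in practice, here is a minimal sketch in Python (using numpy) of a toy two-layer neural net that learns the XOR pattern from just four examples. Everything in it, from the network size to the learning rate, is invented for illustration; it bears no resemblance to the scale or architecture of Facebook’s actual systems.

```python
# A toy neural net: it learns the XOR pattern purely from examples.
# Illustrative only; the sizes and parameters here are arbitrary choices.
import numpy as np

rng = np.random.default_rng(42)

# Four training examples: inputs and the target pattern (XOR).
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def with_bias(a):
    # Append a constant 1 to each row so each layer can learn a bias term.
    return np.hstack([a, np.ones((a.shape[0], 1))])

W1 = rng.normal(size=(3, 4))  # 2 inputs + bias -> 4 hidden units
W2 = rng.normal(size=(5, 1))  # 4 hidden units + bias -> 1 output

for step in range(20000):
    hidden = sigmoid(with_bias(X) @ W1)   # forward pass
    output = sigmoid(with_bias(hidden) @ W2)
    error = output - y                    # how wrong the net is, per example
    # Backpropagation: nudge the weights downhill on the squared error.
    d_out = error * output * (1 - output)
    d_hid = (d_out @ W2[:-1].T) * hidden * (1 - hidden)
    W2 -= 0.5 * with_bias(hidden).T @ d_out
    W1 -= 0.5 * with_bias(X).T @ d_hid

print(output.round(2))  # approaches [[0], [1], [1], [0]]
```

The point to notice is that nothing in the code states the XOR rule; the network arrives at it solely by having its weights nudged against examples, which is all that ‘experience’ means here.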
But first, back to my opening salvo: AI does and does not exist. This
intentionally contradictory framing points toward the dynamics underlying
an array of technologies that spark hope, fear, and everything in between.
While AI might conjure up a singular image for many, what actually and
currently constitutes artificial intelligence is anything but monolithic. As
such, the ‘and’ above is doing quite a bit of work and points toward the
diversity of technologies and tools that can be, and often are, loosely referred
to as AI. This also highlights a key challenge at the very epicenter of most,
if not all, discussions of this topic: the complex interstices of the actual and
the perceptual. Many, if not most, of the predominant visions of AI – from
autonomous robots to hyper-intelligent algorithms – fail to capture the
all-too-human constraints of this still emerging technology. It is within this
lacuna between the actual and perceptual that one feels the real weight of
thinking through the postulate that AI does and does not exist. If humans
were not part of the above equation, perhaps Alice and Bob would have
created an entirely new linguistic structure – one that could have
revolutionised how we communicate, which is exactly what some say
emojis have done. Of course, Facebook’s interest in the opportunities to
advance human communication is secondary to its focus on monetising
communication itself, and gibberish-speaking chatbots have not turned a
profit, at least not yet.
Half-joking forecasts aside, there can be little doubt that AI has had, and
looks positioned to continue having, a profound influence upon our
collective images of the future. One recent event that seems to foretell
AI’s potentiality is the tragedy of Lee Sedol, who was one of the world’s
top-rated players of Go, an ancient game whose complexity dwarfs chess.
You might have noticed the past tense, which is due to Lee’s retirement in
late 2019. As a grandmaster who ranked second only to a fellow Korean in
international titles, Lee rose to global stardom in 2016 during a highly publicised
five-match competition against the Google-backed DeepMind’s
AlphaGo program, which learned how to play by evaluating tens of
millions of matches, including games against itself. Lee not only lost four
of five games but was clearly outmatched in ways that left the
grandmaster as well as some commentators downright stupefied. In the
second game, AlphaGo made a move that was deemed unhuman, which is
to say that no human, to date, would have made such a move, according to
the experts. Lee himself called it, ‘so beautiful’. Fast forward to the
present and one can find that rather than studying the strategies of Lee,
competitive as well as casual Go players are looking to learn from AlphaGo,
which has inadvertently transformed the multi-millennia-old game. Rather
than persist in a world with AlphaGo on top in both actual and perceptual
terms, Lee confessed that AI ‘cannot be defeated’ and decided to step away.
Lee’s experience is often invoked as a parable for the growing power and
prowess of AI-enabled programs and tools, although some have already
looked at impacts and implications beyond board games.
From the outspoken Elon Musk to the somewhat more demure Centre
for the Study of Existential Risk at Cambridge University, many have
sounded alarms over the dangers of AI, particularly the cascading effects of
‘sentient’ robots and fully-automated weapons systems. While it would be
foolish to ignore the misuses of AI, there are a number of other pressing
threats and risks, including some that challenge us to examine
artificiality in its various forms. Take the World Economic Forum’s Global
Risks Report 2020, which is currently in its 15th edition. For the first time
ever, the top five risks in terms of likelihood are all environmental in scope,
and three (climate action failure, biodiversity loss, and extreme weather)
made the top four in terms of impact. The concept of the Anthropocene,
which holds that the environment has become an artefact of human
creation, should remind us all of the artificiality of humanity’s dominion
over the natural world. Our actions have created artificial conditions
anathema to our very survival. And, it is quite artificial to believe that
humans have even a modicum of control over a complex planet that
continuously defies our best models, hence the need for immediate action
to abate the current climate emergency. If recent events are any indication,
anyone reading this will be continuously subject to high-likelihood and
high-impact risks, or, to put it another way, postnormal bursts, that reorient
our very sense of what can and might be possible. Enter Covid-19.
Infectious Imaginings
To say that Zhongnan Hospital was at the centre of the novel coronavirus
outbreak would be an understatement. As the medical centre for Wuhan
University, it was ground zero for what would eventually become our
present crisis. As things began to worsen in Hubei Province, doctors and
researchers looked to AI to assist with diagnosing patients, specifically to
look for signs of pneumonia, which is what makes this particular disease so
lethal. Early detection can not only help patients receive much-needed
treatment before the disease takes hold but, at a community level,
potentially keep surges from overwhelming healthcare systems. As one
might expect, interventions of this type were experimental, which speaks
as much to the nature of the crisis as the technology itself. Indeed, there can
and should be little doubt that many, if not most, were simply caught off
guard by the coronavirus outbreak that mutated into a full-blown pandemic
earlier this year. Perhaps it was merely beleaguerment due to the string of
seemingly unimaginable events that have come to characterise life in
postnormal times. Even as troubling reports started coming in from Wuhan
and images of roadblocks and empty streets began circulating online, the
outbreak felt unreal, yet also eerily familiar. Ebola, H1N1, MERS, and SARS
have done much to shape how we perceive such events, but infectious
imaginings, from Outbreak (1995) to Contagion (2011), have also done much
both to raise awareness and, perhaps, to desensitise us to these ‘inevitable
surprises’. Can or might we end up saying the same for AI?
Of course, analysts, futurists, and scientists have warned about the risk
of pandemics for years, if not decades. Reading a range of alternative
futures scenarios or being tossed into a pandemic simulation are certainly
powerful means to learn, but if this crisis has made anything clear, it is that
too often too little is learned too late. When I was approached to make a
contribution to this issue, it was not my intent to write about pandemics.
But, as with AI, which has consumed a great deal of our imaginative and
anticipatory capacities over the past few years, Covid-19 will dominate
imaginings of the future for years if not decades to come, especially if
things continue unfolding on their present course, so it was not only
responsible but also necessary to make it a focal point for thinking through
AI. At the time of writing, the world looks poised for an unprecedented,
yet all too familiar, global crisis – one that has already been (and will
certainly continue to be) impacted by developments, applications, and the
complexities of AI. Then again, much of what is written here could be
outdated in the days, weeks, and months ahead.
While almost everything feels suspended in a state of flux at the moment,
one thing appears to be certain: for better or worse, AI and Covid-19 have a
lot in common. First, and broadly speaking, both are poorly understood,
although social media seems to have amplified the confidence with which
anyone, and everyone, invokes and analyses them. This is not to say that ‘the
experts’ ought to drive conversations about how to steer our responses to
both; quite the contrary. A range of voices, including the most vulnerable,
must drive the proactive and responsive choices that are made in relation to
both, but this is easier said than done. AI and Covid-19 necessitate a somewhat
technical understanding of complex phenomena and have been subject to
torrents of mis- and dis-information. Second, both AI and Covid-19 spur
deep-seated fears over humanity’s place as our planet’s ‘alpha’ species. Putting
Terminator and/or Contagion imaginings aside, the potential perils of truly
artificial intelligence and the pitfalls of a catastrophic pandemic do much to
highlight the fragility of many, if not most, of the systems – from our own
biological to the incredibly inequitable economic to those that bring food to
our tables – upon which our all-too-modern lives depend. Finally, both AI and
Covid-19 highlight the need to ask ‘what’s next?’ with particular attention to
policy-making and governance design. Are there better structures for how we
might choose to govern ourselves? Could AI play a role? The plight of AI in the
time of Covid-19 portends more than just pattern recognition; it points
towards the potentiality for governments to be structured in radically new
forms with designs focused on presenting ‘speculative evidence,’ a notion
that receives greater attention a bit later. Before delving into what might be next,
it is worth looking at what is happening now.
A tale of two articles
Wired magazine has been an advocate for futures thinking, or at least a
variation of it, since its inception nearly thirty years ago. From its
beginning, the California-based publication has demonstrated a clear
penchant for ‘Silicon Valley’ approaches to sense-making and solutioning.
Wired has become something of a bellwether in the broader ‘tech trends’
space and a site for distilling popular imaginings. This is not to say that
Wired offers any sort of representative sample but rather that it speaks to a
current within our zeitgeist on human-technology relations. Take two
recent articles on AI, which also make clear connections to the Covid-19
pandemic. On 15 March 2020, Wired published an opinion piece by Glen
Weyl and Jaron Lanier. The latter’s name is likely familiar as he has been a
vocal advocate on the oppressive nature of many technologies, including
those he worked on for decades as a pioneer in virtual reality.
In ‘AI is an Ideology, Not a Technology,’ Weyl and Lanier put forward a
compelling argument that there is a technocratic ideology underpinning
the predominant imaginings of AI, which come from both corporate
entities as well as less-than-democratic governments (to be read as China).
Ultimately, they argue that it is important, if not essential, to re-imagine
‘the role of technology in human affairs’. While it has become fashionable
to make such pronouncements without offering examples as to how one
might accomplish this truly Herculean feat, the authors actually suggest
looking at Taiwan, which has made great strides in building ‘a culture of
agency over their technologies through civic participation and collective
organisation’. Furthermore, they note how such approaches have been
central to the island’s Covid-19 response, which has been noted by many
as exemplary. Implementing a strategy that features an impressive ‘124
action items,’ Taiwan has balanced communication and transparency with
tech-driven decision-making that gave it an early jump on containment
measures. Taiwan’s aggressive policies, which were developed after the
SARS epidemic, have been buttressed by AI-enabled applications that
deliver citizen alerts and support supply chain management for critical
items, such as face masks. While it is certainly too early to declare Taiwan
a ‘winner’ in the global race to ‘flatten the curve,’ their efforts, which
included proactive and widespread testing, have produced, to date, positive
results, which has not been the case in many other places.
The second article, ‘AI Can Help Scientists Find a Covid-19 Vaccine’, by
Oren Etzioni, CEO of the Allen Institute for AI, and Nicole DeCario, who
serves as Etzioni’s Senior Assistant, strikes a much more optimistic tone.
Noting how AI is already being used to map components of a vaccine, the
authors deploy real world examples, primarily their own efforts to assist
researchers in mining the explosion of scientific articles on Covid-19, to
highlight the ‘bad rap’ handed down to AI in recent years. In the article’s
penultimate paragraph, and in a passage worth quoting in full, the authors
observe: ‘It is ironic that the AI which has caused such consternation with
facial recognition, deepfakes, and such is now at the frontlines of helping
scientists confront Covid-19 and future pandemics’.
One can only wonder if any of the Uighur Muslims, who perhaps
number close to one million, currently residing in one of China’s ‘re-education
camps’ feel a sense of irony in relation to how AI-enabled facial recognition
systems are being used both to corral them and to combat the current
pandemic. Maybe this irony can be found between the viral deepfake videos
circulated via WhatsApp groups during the 2019 Indian general election,
which were seen by as many as fifteen million citizens, and the algorithms
being used to identify signs of pneumonia. Perhaps this irony is palpable to those
subject to China’s ‘social credit’ system, which uses a mixture of
technologies – big data, facial recognition and, of course, AI – to enable
and disable certain actions and activities from employment opportunities
to purchasing plane and train tickets. It is perhaps unfair to expect too
much from Wired. Given that these two articles were published less than
two weeks apart, and that the latter lacks any sense of self-reflexivity, this
tale of two articles highlights the challenge of finding consistent sources
that shy away from simplistic framings and sanguine apologetics on AI, a
challenge that has become especially acute in a time of Covid-19. This brings us
back to the signal-to-noise problem, which, as we have been told, if not
promised, is where AI shines.
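Lest that promise remain abstract, here is a minimal sketch of the signal-to-noise task: ranking a pile of documents against a query with TF-IDF, an old and humble technique that sits beneath many grander AI-assisted literature-mining efforts. It assumes Python with scikit-learn installed, and the four one-line ‘abstracts’ are invented stand-ins for the tens of thousands of papers real systems must sift.

```python
# Rank documents against a query: the 'signal' should rise above
# documents that merely share common words. Corpus and query are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Chest CT findings of pneumonia in patients with novel coronavirus.",
    "Go strategy after AlphaGo: new opening patterns in competitive play.",
    "Supply chain management of face masks during an epidemic.",
    "Early detection of viral pneumonia using deep learning on CT scans.",
]
query = ["AI-assisted detection of pneumonia from CT imaging"]

vectoriser = TfidfVectorizer(stop_words="english")
doc_vectors = vectoriser.fit_transform(documents)  # learn vocabulary, weight terms
query_vector = vectoriser.transform(query)

# Cosine similarity scores each document against the query.
scores = cosine_similarity(query_vector, doc_vectors).ravel()
for score, doc in sorted(zip(scores, documents), reverse=True):
    print(f"{score:.2f}  {doc}")
```

Even this crude ranking surfaces the pneumonia-detection papers above the Go commentary, which is the whole promise in miniature; the hard part, as ever, is what humans then do with the ranking.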
Again, it is a bit too early to congratulate or condemn, but, on the whole,
the United States looks poised to be hit quite hard by the Covid-19
pandemic. And given who sits atop the current administration, it would be
foolhardy to expect otherwise. Will, or rather can, AI save the day?
Apparently, the White House has pinned at least part of its hopes on
analysing the mountain of data (estimated to be at or around 29,000
scientific articles as of March 2020) generated by scientists and researchers.
Summoning leaders from companies such as Google and Microsoft, the
coronavirus task force, which is actually led by Vice President Pence,
announced a new public/private partnership aimed at providing cutting-edge
computing power, including AI-based tools such as machine learning.
While there can be little doubt that such approaches, which encompass a
broad range of algorithms, including neural nets, are effective for
identifying patterns, there is no guarantee that decision makers, particularly
those currently in power, will enact policies based on data-driven insights.
This, if anything, is the greatest challenge at the heart of AI in the time of
Covid-19: humans. For all of the concerns over what might happen should
machines become equally or more intelligent than humans, our current
crisis makes clear that AI cannot match the limitations of our humanity,
especially in times of crisis, which brings out both the best and worst in us.
From Covid-19 transmission parties to physical-distancing-be-damned
spring breakers to the airplane-toilet-seat-licking coronavirus-challenge
model (yes, that’s a real thing), the corollary pandemic of stupidity has left
many of us clamouring for intelligence, artificial or otherwise. Can our
governance structures truly handle such crises?
What’s next?
As this exploration began with a provocation, it only seems fitting to end
with one. No, this will not involve Alice and Bob, at least not yet. Ok, here
it goes: Democracy had a good run. No, really. From the experiments of
ancient Greece to the complex participatory structures of the Iroquois
Confederacy to the hundreds of millions of voters across modern day
India, there can be little doubt that democracy had a profound impact on
the world, including playing a role in the rise of some of the world’s most
tyrannical despots. To be clear, pronouncing the death of democracy, which
has been done many times before, is not welcoming a turn toward
authoritarian and non-participatory modes of political organisation and
decision-making but rather challenging us all to imagine, and ultimately
design, what comes next. As imaginings of AI have predominantly tended
toward the singular, the same can be said, for the most part, about our
collective imaginings for governance design, broadly defined. Parliament
is a fascinating project that looks at how the assembly spaces of the 193 United
Nations member states affect policy and decision-making practices. Could AI
be useful for reimagining not only how issues are framed but the very
spaces within which such discussions take place? Will the Covid-19
pandemic create a space to re-examine and re-engineer governance itself?
What other models of governance would be better suited to anticipate and
confront the challenges of life in postnormal times? The Covid-19
pandemic certainly offers an opportunity to call into question many things,
especially as they begin to fail, but there have also been calls for things to
return to ‘normal,’ although others have suggested that this is not going to
happen. Can the crisis spur substantive change or will things be quickly
ordered back into their previous state? What measures might be taken to
sustain systems that are woefully unsustainable?
As countries turn toward emergency measures to combat the ongoing
pandemic, there can be little doubt, and there have already been signs, that
some will intentionally abuse this ‘opportunity’ toward despotic ends. In
Hungary, which is a member of the European Union, Prime Minister Viktor
Orbán rejected calls for a limit on a range of unprecedented emergency
powers, which include rule by decree, the ability to override all existing
legislation, and the suspension of elections. Orbán’s not-so-subtle power
grab is viewed as the culmination of a series of moves aimed at cementing
his grip over civil society, especially as there are harsh penalties for
spreading information deemed false by the government. Will other
countries seek to implement similar measures? In many ways, China is
ahead of the curve. The superpower’s AI-enabled Corporate Social Credit
System (CSCS), which should be fully up and running by the end of 2020,
has been buttressed by a virus-response-driven increase in mass
surveillance and tracking that some fear could become a ‘new normal’.
Interestingly, a pre-pandemic study carried out in China found that people
were willing to give up aspects of personal privacy to combat the
‘nationwide crisis of trust’ that has come about through rampant fraud and
corruption. While the sample was relatively small (only five hundred
people), this research raises a significant question: what trade-offs (and
there are many inherent to the CSCS) are citizens willing to endure in and
beyond postnormal bursts, such as the Covid-19 pandemic?
The CSCS will live or die by one thing: data. Given the strategic
and tactical importance of good and reliable information, it might, or might
not, come as a surprise that some have called into question the provenance
of China’s reporting related to the outbreak in Hubei Province, including
concerns that official cases were hidden and, even more troubling, that the
mortality rate could be higher than suggested. This is extremely troubling
as many measures and models have been based on data from Wuhan, which
is to say that some measures might be woefully inadequate should any of the
critical early-stage outbreak data be insufficient, incomplete, and/or
incorrect. Pattern recognition would be for naught without good data,
which illuminates an Achilles heel for many, if not most, AI-driven
approaches. Given the global turn toward ‘evidence-based’ policy-making,
which also necessitates sound and reliable information, some have called for
‘responsible data collection and processing at a global scale,’ although how
such initiatives can and might be implemented remains uncertain, especially
when one considers the lack of headway on other global challenges, such as
reducing greenhouse gas emissions.
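To make that Achilles heel concrete, consider a deliberately crude sketch, with invented numbers, of what systematic undercounting does to a simple exponential-growth model of an outbreak. A constant reporting fraction leaves the fitted growth rate intact but understates the projected scale by that same fraction; undercounting that varies over time, as real undercounting does, would corrupt the growth rate as well.

```python
# Fit exponential growth to a synthetic outbreak, once with 'true' case
# counts and once with only one in five cases reported. All numbers invented.
import numpy as np

days = np.arange(14)
true_cases = 50 * np.exp(0.25 * days)   # hypothetical true epidemic curve
reported = 0.2 * true_cases             # constant 20% reporting fraction

def project(cases, horizon=30):
    # A log-linear fit recovers growth rate and level, then projects forward.
    rate, intercept = np.polyfit(days, np.log(cases), 1)
    return np.exp(intercept + rate * horizon)

print(f"Day-30 projection from true data:     {project(true_cases):,.0f}")
print(f"Day-30 projection from reported data: {project(reported):,.0f}")
# Same fitted growth rate, but the projected scale is off five-fold.
```

Garbage in, garbage out, in other words, and no amount of pattern recognition downstream can conjure back what the data never contained.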
In the United Kingdom, issues over data and evidence bubbled to the
surface as the government announced a behavioural science-driven approach,
which included a controversial measure known as ‘herd immunity’ that
essentially allows for a greater percentage of the population to become
infected. However, it took public outcry, as well as an open letter signed by
over two hundred academics asking for ‘evidence’, for the government to
reconsider, especially once models suggested that the National Health Service
would be quickly and fully overwhelmed. Labelled a ‘debacle’ by some, this strategy
was first clarified and then later rejected in favour of increased testing and
physical distancing, which appears to have worked well in both Singapore and
South Korea. Whatever form of governance comes next, it is clear that one
of the most important criteria would be the capacity to present speculative evidence,
specifically ‘weak signals, ethnographic observations, and the stories of
people’s experience’. Located at the intersection of actual and perceptual
framings, speculative evidence allows for phenomena, from AI to Covid-19,
to be seen and understood from a range of perspectives across a diversity of
contexts. And, as the Covid-19 pandemic makes abundantly clear, there is
certainly an opportunity to promote more engagements that critically and
creatively focus on navigating uncertainty, rather than attempting to ‘manage’
risk, at a variety of scales within and beyond government. If anything, the
Covid-19 pandemic is a stark reminder that ‘everything in the world about
you is a social invention’. This will become more apparent as various systems
bend and break and, perhaps most importantly, as communities on the
frontlines of this crisis find ways to survive and thrive. To navigate the
complexities of postnormal times, an era defined in part by an abundance of
noise, we must find ways to both identify and amplify signals pointing toward
inventing a better tomorrow. Such an endeavour must be anything but
artificial. And, if Alice and Bob have taught us anything, it is that an
unwillingness to follow rules can actually be a sign of intelligence. And, if the
tragedy of Lee Sedol conveys a lesson, it is that beauty can and might emerge
from unlikely sources during moments of great uncertainty.
Citations
The two Wired articles are: Weyl, Glen, and Jaron Lanier. 2020. ‘AI Is an
Ideology, Not a Technology.’ Wired, March 15, 2020. https://www.wired.com/story/opinion-ai-is-an-ideology-not-a-technology/; and
Etzioni, Oren, and Nicole DeCario. 2020. ‘AI Can Help Scientists Find a
Covid-19 Vaccine.’ Wired, March 28, 2020. https://www.wired.com/story/opinion-ai-can-help-find-scientists-find-a-covid-19-vaccine/.
For more concerning the advancement of AI in contemporary society, see
Pranam, Aswin. 2019. ‘Why the Retirement of Lee Se-Dol, Former Go
Champion, Is a Sign of Things to Come.’ Forbes, November 29, 2019. Accessed
March 30, 2020. https://www.forbes.com/sites/aswinpranam/2019/11/29/why-the-retirement-of-lee-se-dol-former-go-champion-is-a-sign-of-things-to-come/.
For more on Covid-19-induced displays of idiocy, see Folley, Aris. 2020.
‘Video of Spring Breakers Saying Coronavirus Won’t ‘Stop Me from
Partying’ Sparks Viral Condemnation.’ The Hill, March 18, 2020.
https://thehill.com/blogs/blog-briefing-room/news/488357-video-of-spring-breakers-saying-coronavirus-wont-stop-me-from; and Glantz, Tracy.
2020. ‘Model Who Licked Toilet Seat in Coronavirus Challenge Is
‘Unbothered’ by Outrage.’ Miami Herald, March 17, 2020. https://www.miamiherald.com/miami-com/miami-com-news/article241270296.html.
For more on the Covid-19 pandemic and AI’s applications therein, see
Bieber, Florian. 2020. ‘Authoritarianism in the Time of the Coronavirus.’
Foreign Policy (blog), March 30, 2020. https://foreignpolicy.com/2020/03/30/authoritarianism-coronavirus-lockdown-pandemic-populism/;
Chen, Sharon, Dandan Li, and Claire Che. 2020. ‘Stacks of
Urns in Wuhan Prompt New Questions of Virus’s Toll.’ Bloomberg.com,
March 27, 2020. https://www.bloomberg.com/news/articles/2020-03-27/stacks-of-urns-in-wuhan-prompt-new-questions-of-virus-s-toll;
Chen, Stacy. 2020. ‘Taiwan Sets Example for World on How
to Fight Coronavirus.’ ABC News, March 13, 2020. https://abcnews.go.com/Health/taiwan-sets-world-fight-coronavirus/story?id=69552462;
Chun, Andy. 2020. ‘Coronavirus: China’s Investment
in AI Is Paying Off in a Big Way.’ South China Morning Post, March 18, 2020.
https://www.scmp.com/comment/opinion/article/3075553/time-coronavirus-chinas-investment-ai-paying-big-way;
Dator, Jim. 1993. ‘Society Is a Social Invention and You Are a Social
Inventor.’ http://www.futures.hawaii.edu/publications/futures-theories-methods/SocialInventor1993.pdf;
Dave, Paresh. 2020. ‘White House Urges
Researchers to Use AI to Analyze 29,000 Coronavirus Papers.’ Reuters,
March 16, 2020. https://www.reuters.com/article/us-health-coronavirus-tech-research-idUSKBN2133E6;
Duff-Brown, Beth. 2020. ‘How Taiwan Used Big Data, Transparency and
a Central Command to Protect Its People from Coronavirus.’ Freeman Spogli
Institute for International Studies, March 3, 2020. https://fsi.stanford.edu/news/how-taiwan-used-big-data-transparency-central-command-protect-its-people-coronavirus;
Dunleavy, Jerry. 2020. ‘US Spy Agencies Warned
Trump That China Was Lying about Coronavirus.’ Washington Examiner,
March 21, 2020. https://www.washingtonexaminer.com/news/us-spy-agencies-warned-trump-that-china-was-lying-about-coronavirus;
Editorial. 2020. ‘The Guardian View on Hungary’s Coronavirus Law:
Orbán’s Power Grab.’ The Guardian, March 29, 2020, sec. Opinion.
https://www.theguardian.com/commentisfree/2020/mar/29/the-guardian-view-on-hungarys-coronavirus-law-orbans-power-grab;
Etherington, Darrell. 2020. ‘IBM, Amazon, Google and Microsoft
Partner with White House to Provide Compute Resources for COVID-19
Research.’ TechCrunch (blog), March 23, 2020. http://social.techcrunch.com/2020/03/22/ibm-amazon-google-and-microsoft-partner-with-white-house-to-provide-compute-resources-for-covid-19-research/;
and Ghosh, Pallab. 2020.
To learn more about the ongoing crisis of information and knowledge, see
Jain, Anab. 2017. ‘Can Speculative Evidence Inform Decision Making?’
Anab Jain (blog), June 28, 2017. https://medium.com/@anabjain/can-speculative-evidence-inform-decision-making-6f7d398d201f;
Christopher, Nilesh. 2020. ‘We’ve Just Seen the First Use of Deepfakes
in an Indian Election Campaign.’ Vice (blog), February 18, 2020. https://www.vice.com/en_in/article/jgedjb/the-first-use-of-deepfakes-in-indian-election-by-bjp;
Patil, Samir. 2019. ‘India Has a Public Health Crisis. It’s
Called Fake News.’ The New York Times, April 29, 2019, sec. Opinion.
https://www.nytimes.com/2019/04/29/opinion/india-elections-disinformation.html;
Schwartz, Peter. 2001. Inevitable Surprises: Thinking
Ahead in a Time of Turbulence. New York: Gotham Books. http://search.ebscohost.com/login.aspx?direct=true&scope=site&db=nlebk&db=nlabk&AN=124844;
and Shirky, Clay. 2019. ‘Emojis Are Language Too: A Linguist Says
Internet-Speak Isn’t Such a Bad Thing.’ The New York Times, August 16, 2019.
https://www.nytimes.com/2019/08/16/books/review/because-internet-gretchen-mcculloch.html.
To learn more about the Parliament project, see XML. 2016. Parliament. Amsterdam: XML.