Artificial Intelligence and Sustainability
AI has been described
as potentially important in helping to meet some of the objectives listed in
the 17 Sustainable Development Goals (SDGs) of the United Nations (SDGs, 2015).
Palomares et
al. (2021) set out to describe progress and prospects in artificial intelligence
technologies with regard to the SDGs. Six categories are chosen for their
analysis: life, economic and technological development, social development,
equality, resources and natural environment. They aim to identify the opportunities
and challenges for AI in helping to fulfil the SDGs, and to provide a roadmap for
maximising this assistance in the next decade; they also highlight the difficulties and potential
threats associated with AI. The writers note the hostility often felt towards
AI, and attempt to present its potential benefits with regard to the SDGs. They
offer a definition of AI, note its main elements, and list its main areas, such
as knowledge representation, natural language processing, computer vision,
machine learning, automated reasoning and robotics. Concerns “about the motivation behind
decisions made by AI algorithms” have given rise to the concept of “trustworthy
AI” – robust, ethical and lawful AI systems. Brief descriptions are given of
technologies which support AI systems, such as the Internet of Things (IoT), 3D
technologies, blockchain, big data and 5G communication infrastructure.
The
extensive literature review provided by Palomares et al. begins with six
broadly focussed studies (section 3.3) from which some general points are
listed here. One paper claims that AI can improve the efficiency of industrial
processes, help to preserve non-renewable resources and to disseminate expert knowledge,
reduce the gap between resources and technology, and foster the creation of
alliances among governments, the private sector and society for maximizing global
sustainability. Others focus on the contributions of blockchain technology to the
SDGs, especially its ability to ensure the integrity of data and prevent corruption;
and on a comparison of the potential of AI to help or hinder attainment of the SDGs, concluding that it could be an
enabler for the majority of the goals, but with potential to hinder a
substantial minority of them. Palomares et al. stress the “importance of the
interactions among AI, the society, environment, the economy and the
government”, and the need for “approaching these critical interactions from a global
perspective with the guidance of solidly established regulations.” They then
proceed to conduct an analysis of strengths, weaknesses, opportunities and
threats (SWOT) when AI is applied to each of the SDGs.
As an example, some points from their SWOT analysis of SDG 13 (Climate action) follow.
Strengths: predictive AI can be applied remotely to help disadvantaged countries cope with climate phenomena; AI models support better emergency and disaster-recovery decisions.
Weaknesses: climate prediction demands precise real-time information, which is not affordable everywhere; black-box AI models make it hard for emergency services to justify their disaster-response decisions.
Opportunities: early prediction of natural catastrophes enables rapid response by authorities; AI prediction of energy needs and traffic can help to reduce pollutants; AI technology can reinforce young people’s education about climate change.
Threats: AI computation can require significant energy; AI models used to predict natural catastrophes can become obsolete as the climate changes.
The roadmap
for the decade revolves around five key elements: the need for “unified,
accessible, open and high quality data [which abide by] inalienable human
rights”; the imperative to “strengthen the links between science, industry and
institutions”; the careful adaptation of AI and digital technologies to the
situation and characteristics of each country; the definition of “alternative and
more flexible standards” for the evaluation of the SDGs; and reflection on and “reformulation
of the approaches under which each and every SDG in the UN agenda are being
currently addressed” in the light of the experience of the COVID-19 pandemic.
Mhlanga
(2021) believes that AI “is beginning to live up to its promises of delivering
real value” and seeks to investigate its influence on the attainment of the
SDGs, focussing on poverty reduction (SDG 1) and industry, innovation, and infrastructure (SDG 9) in emerging economies. He claims
that AI can be used in conjunction with satellite images to map poverty in countries such as Thailand (ADB 2021). In agriculture, AI programs are “helping to improve farming,
through effective diseases detection, prediction of crop yields, and location
of areas prone to a scarcity” and Mhlanga mentions the work of Stanford
University’s Sustainability and Artificial Intelligence Lab in this respect (Stanford
2021). He believes that AI is “enabling massive infrastructure development,
increase in access to information and knowledge as well as fostering innovation
and entrepreneurship” and mentions the importance of the transport sector in making
economic growth and development possible.
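To make the poverty-mapping idea concrete, the sketch below shows, in rough outline and with entirely synthetic data, how image-derived features for map tiles can be regressed against survey-based wealth scores so that unsurveyed tiles receive an estimated wealth index. It is an illustrative assumption of how such a pipeline might look, not the Asian Development Bank's or the Stanford lab's actual method; the feature names and numbers are invented.

```python
# A minimal sketch of satellite-based poverty mapping: image-derived features
# for each map tile are regressed against ground-truth wealth scores from
# household surveys, so the fitted model can estimate poverty in tiles that
# were never surveyed. All data below are synthetic placeholders.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Hypothetical inputs: one row per map tile. Columns might be nighttime-light
# intensity, vegetation index, built-up area, or embeddings from a CNN applied
# to daytime imagery.
n_tiles, n_features = 2_000, 16
tile_features = rng.normal(size=(n_tiles, n_features))

# Survey-based wealth index for the tiles with ground truth
# (here simulated as a noisy linear function of the features).
true_weights = rng.normal(size=n_features)
wealth_index = tile_features @ true_weights + rng.normal(scale=0.5, size=n_tiles)

X_train, X_test, y_train, y_test = train_test_split(
    tile_features, wealth_index, test_size=0.3, random_state=0
)

model = Ridge(alpha=1.0).fit(X_train, y_train)
print("held-out R^2:", round(r2_score(y_test, model.predict(X_test)), 3))

# Tiles without survey data can now be assigned an estimated wealth index,
# which is what produces the "poverty map".
unsurveyed_tiles = rng.normal(size=(5, n_features))
print("estimated wealth index:", model.predict(unsurveyed_tiles).round(2))
```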
Lahsen (2020)
claims that political uses of AI “in the form of data flows, machine learning,
and large-scale data analytics and algorithms” carry threats to the openness of
societies, but acknowledges that information and communications technologies including
AI can be harnessed to the common good by helping to bring about the “mass
public mobilization and transformations” needed to achieve the sustainable development
goals, for which current information environments are inadequate. She believes
that reform of traditional mass media and the use of AI are both needed “to
stimulate changes in values, understanding and social engagements that in turn
can transform legislation and economic policies.” Fear of social engineering
may inhibit the wise use of AI with its “historically unprecedented potential to
reshape society for the common good”. However, social engineering is already a
reality, and what matters is “who is in control and their guiding norms,
ethics, and principles.” Communications scholars see the importance of reforming mass media to achieve progressive change, but corporate media tend to obstruct such change; scientists typically value the political neutrality of the media, and mainstream environmental researchers are likely to assume that in reality “current information environments are neutral”. Power lies with “those
most vocal and influential on social media” and a “suite of persuasive
technologies” is available to them, and can be used, for example, to “spread
doubt about climate change”. Moreover, Lahsen claims that there is evidence
from cognitive science that in the area of climate change, people protect their
values and beliefs against new scientific information. She argues that
worldwide, mass media are controlled by elites, who “wield vastly
disproportionate influence on public understandings of reality, manifestly
including climate change”, and that media control allows the extent of its mind-shaping power to be disguised. In view of the power of AI to sway our thinking,
“AI design needs to be carefully governed and to become an integral, deliberate
and explicit element in ‘transformative’ policy frameworks for achievement of
Agenda 2030 and respect for planetary boundaries.” Policies must force
disclosure of the “assumptions, choices, and adequacy determinations” embodied
in current and future intelligent systems.
Stein (2020)
acknowledges the threats posed by artificial intelligence “to privacy,
security, due process, and democracy itself”, but sees its usefulness in “those
particularly complex technical problems lying beyond our ready human capacity”
such as climate change. She sees AI as useful in dealing with the huge amounts
of data associated with climate science, for example in monitoring greenhouse
gases, and in weather prediction models, where machine learning is particularly
applicable. Her focus, however, is on the energy sector as a means of
illustrating the “potential promise and pitfalls” of applied AI. Stein states
that electricity accounts for about 25% of global GHG emissions, and sees a
role for AI in “accelerating the development of clean-energy technologies,
improving electricity-demand forecasts, strengthening system optimization and
management, and enhancing system monitoring” as well as in improving safety and
reliability. The integration into the supply grid of renewable energy sources
can be improved by using AI to predict their intermittent outputs: Stein cites
work by Google and DeepMind which “boosted the value of … wind energy by
approximately twenty percent.” AI can adjust wind-farm propellers to keep up
with changing wind directions, help design the layout of renewable energy
sources, and improve the management of large battery storage systems. AI is “poised
to assist” in the management of distributed resources such as rooftop solar,
wind, fuel cells, energy storage and microgrids, as well as in demand shifting
to match supply. AI can also facilitate the ‘smart grid’ – described as “an intelligent
electricity grid—one that uses digital communications technology, information
systems, and automation to detect and react to local changes in usage, improve
system operating efficiency … while maintaining high system reliability”.
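As a rough illustration of the forecasting role Stein describes, the sketch below trains a simple model to predict a wind farm's output a few hours ahead from recent wind-speed and production history. It is a minimal sketch on synthetic data, not Google's or DeepMind's actual system; the toy power curve, lag lengths and horizon are assumptions.

```python
# A minimal sketch of short-horizon wind-output forecasting: recent weather and
# production history are used to predict output a few hours ahead, the kind of
# prediction that helps integrate intermittent renewables into the grid.
# All data are synthetic; the power curve and figures are illustrative.
import numpy as np
from sklearn.ensemble import HistGradientBoostingRegressor

rng = np.random.default_rng(1)
n_hours = 5_000

# Synthetic hourly wind speed (m/s) with a daily cycle plus noise.
hours = np.arange(n_hours)
wind_speed = 8 + 3 * np.sin(2 * np.pi * hours / 24) + rng.normal(scale=1.5, size=n_hours)
wind_speed = np.clip(wind_speed, 0, None)

# Toy power curve: output grows with the cube of wind speed up to a rated cap (MW).
power_mw = np.clip(0.05 * wind_speed**3, 0, 50)

# Features: the last 3 hours of wind speed and power; target: power 6 hours ahead.
lag, horizon = 3, 6
rows = range(lag, n_hours - horizon)
X = np.array([np.r_[wind_speed[t - lag:t], power_mw[t - lag:t]] for t in rows])
y = np.array([power_mw[t + horizon] for t in rows])

# Train on the earlier hours, evaluate on the later ones.
split = int(0.8 * len(X))
model = HistGradientBoostingRegressor().fit(X[:split], y[:split])
mae = np.mean(np.abs(model.predict(X[split:]) - y[split:]))
print(f"mean absolute error on held-out hours: {mae:.2f} MW")
```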
After
further discussion of the ways in which AI can be used in the energy sector,
Stein addresses more general issues such as the trade-offs between AI and climate.
AI can itself be a large consumer of electricity: possible responses to this problem include requiring AI researchers to disclose their computational costs in their publications, introducing an environmental certification regime, and efforts “to enhance the sharing of data used in climate-related algorithms.”
The regulation of data is an issue of concern, as is data privacy; examples of
the failure of anonymization are cited. The funding of AI for climate issues is
discussed, as are issues of accountability, safety, and certification. A final
issue is the legitimacy of the algorithms used. “If we are to base important
policy decisions on the results of climate AI, it is imperative that there is
trust in the system.” This can involve enabling the AI system to explain in
comprehensible terms how and why it has reached a particular decision. While it
is “imperative that the limitations of AI be acknowledged and tempered”, recognition
of these limitations should not result in exclusion of its application to
addressing the “complicated data challenges associated with climate change” where
appropriate.
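The point about disclosing computational costs can be illustrated with back-of-the-envelope arithmetic: the energy of a training run is roughly hardware power multiplied by run time and data-centre overhead, and emissions follow from the local grid's carbon intensity. Every figure in the sketch below is an invented placeholder, not a measured value.

```python
# A back-of-the-envelope sketch of disclosing the computational cost of a model:
# estimate the energy use and emissions of a training run from hardware power,
# run time, data-centre overhead (PUE) and grid carbon intensity.
# Every number below is an illustrative assumption.

gpu_count = 8              # accelerators used in the run
avg_power_kw = 0.3         # average draw per accelerator, kW
training_hours = 120       # wall-clock duration of the run
pue = 1.5                  # data-centre overhead factor
grid_kg_co2_per_kwh = 0.4  # carbon intensity of the local grid

energy_kwh = gpu_count * avg_power_kw * training_hours * pue
emissions_kg = energy_kwh * grid_kg_co2_per_kwh

print(f"estimated energy use: {energy_kwh:.0f} kWh")
print(f"estimated emissions: {emissions_kg:.0f} kg CO2e")
```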
Writing in
the journal AI and Ethics, Coeckelbergh
(2021) addresses issues which overlap those covered by Stein, but offers some
interestingly different perspectives. Among the questions raised, he compares
two widely different views of how AI might impinge upon freedom: the first is a
paternalistic approach, in which AI is used to “nudge” people to “use less
energy, produce less waste, not use a car” and the like. This is viewed by some
as non-coercive, but by others as limiting freedom, due to its subconscious
influence on choices and behaviour. At another extreme is the option to use AI
to help govern humanity, since “if the current political situation continues,
with a serious lack of climate governance at a planetary level”, planetary disaster
is likely to follow. In this option freedom is seriously threatened “through straightforward
coercion” and (referencing Hobbes) a “Green Leviathan” is called for. Coeckelbergh
argues that a middle way can be found in which “it is possible to put
environmental and climate regulation in place which restricts freedom to some extent (for the purpose of
improving the climate situation) but still leaves enough freedom”, but admits that this is “a huge challenge in a
democratic society, and
even more so at the global level”.
Scoville et
al. (2021) address the place of AI in “existing climate knowledge
infrastructures and decision making systems” and in conservation. As an example
of the expansion of climate data, they note that a “four-dimensional global
atmospheric dataset of weather” is now available as far back as 1836. The development of AI techniques has enabled
“breakthroughs in modelling cloud systems … and analyses of complex
interconnectivity among earth system features”. They note the divisions between
researchers over whether “AI will further the cause of environmental
sustainability … or accelerate unsustainable patterns of extraction and
consumption.” Historically, algorithms used in conservation provided information
to decision makers on what to protect and where. “This is now beginning to
change” as data are collected and updated almost in real time: it is no longer
practical for outputs to pass through “multiple levels of decision makers and
stakeholders”, and increasingly “the algorithms are tasked with real-time
‘decisions.’” AI systems can make predictive decisions such as identifying probable
areas of illegal fishing, future poaching events, and likely changes to forest
cover; these can result in anticipatory action, including policing. There are also concerns over inbuilt bias in
AI systems, and the questions of “whose interests are shaping algorithmic
decision making systems in the context of climate change”, who controls access
to AI platforms, and who has influence over the regulators of AI systems.
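The kind of real-time ‘decision’ Scoville et al. describe might, in caricature, look like the sketch below: a classifier trained on vessel-movement features scores each incoming observation and raises an alert when the estimated probability of illegal fishing is high. The features, labels and threshold are all illustrative assumptions, not any agency's real system.

```python
# A minimal sketch of an algorithm tasked with real-time 'decisions' in
# conservation: score incoming vessel observations and flag likely illegal
# fishing for anticipatory action. Data and features are synthetic assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
n_obs = 3_000

# Hypothetical per-vessel features: average speed (knots), heading variability,
# and distance from a marine protected area boundary (km).
speed = rng.uniform(0, 20, n_obs)
heading_var = rng.uniform(0, 1, n_obs)
dist_to_mpa = rng.uniform(0, 100, n_obs)
X = np.column_stack([speed, heading_var, dist_to_mpa])

# Synthetic label: slow, erratic movement close to the protected area is "fishing-like".
y = ((speed < 5) & (heading_var > 0.6) & (dist_to_mpa < 10)).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Streaming use: score a new observation and raise an alert above a threshold.
new_obs = np.array([[3.2, 0.8, 4.5]])
prob = model.predict_proba(new_obs)[0, 1]
print(f"estimated probability of illegal fishing: {prob:.2f}")
if prob > 0.5:
    print("ALERT: flag this vessel for follow-up")
```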
References
ADB, 2021,
Mapping the spatial distribution of poverty using satellite imagery in
Thailand, Asian Development Bank, online, accessed 15 September 2021
Coeckelbergh,
M., 2021, AI for climate: freedom, justice, and other ethical and political
challenges, AI and Ethics, online,
accessed 15 September 2021
https://link.springer.com/content/pdf/10.1007/s43681-020-00007-2.pdf
Lahsen, M.,
2020, Should AI be Designed to Save Us from Ourselves? IEEE Technology and Society Magazine, June 2020, online, accessed
15 September 2021
Mhlanga, D.,
2021, Artificial Intelligence in the Industry 4.0, and Its Impact on Poverty,
Innovation, Infrastructure Development, and the Sustainable Development Goals:
Lessons from Emerging Economies? Sustainability, 2021, online, accessed 15 September 2021
https://www.mdpi.com/2071-1050/13/11/5788
Palomares,
I., et al., 2021, A panoramic view and SWOT analysis of artificial intelligence
for achieving the sustainable development goals by 2030: progress and
prospects, Applied Intelligence,
online, accessed 14 September 2021
https://link.springer.com/article/10.1007/s10489-021-02264-y
Scoville,
C., et al., 2021, Algorithmic conservation in a changing climate, Current Opinion in Environmental
Sustainability, online, accessed 16 September 2021
SDGs, 2015,
The 17 Goals, United Nations, online, accessed 13 September 2021
Stanford,
2021, Sustainability and Artificial Intelligence Lab, online, accessed 15
September 2021
Stein, A.,
2020, Artificial Intelligence and Climate Change, Yale Journal on Regulation, online, accessed 16 September 2021
https://digitalcommons.law.yale.edu/cgi/viewcontent.cgi?article=1565&context=yjreg