AI in energy: is it as smart as you think? Part One

By Emily Judson, Energy Policy Group, 25 March 2019

Artificial intelligence (AI) is currently in the spotlight due to the recent rapid expansion of AI technologies across a range of applications in both the private and public sectors. Applied AI technologies can be powerful, augmenting human capacity to address more complex situations and challenges in shorter timeframes. Indeed, AI was recently named ‘the emerging power behind daily life’ by Microsoft’s Kate Rosenshine in her keynote speech at the 2018 Tech UK Digital Ethics Summit. These new technologies also affect all levels of the UK energy system, from enhancing the personalisation of domestic customer service through to facilitating predictive maintenance of national transmission infrastructure.

This three-part blog series will address the growing use of AI in the energy system. Part One (below) is a scene-setter for readers who are unfamiliar with AI, addressing general background and terminology. Part Two will analyse three socio-economic enabling factors that provide a fertile ground for the adoption of AI in energy, and provide examples of where applied AI technologies are already appearing on the market. Part Three will address policy implications of the growth of AI in the energy system, opening the conversation regarding future governance.

When establishing institutions and principles supporting the governance of emerging energy technologies, policymakers must be mindful that technologies are never neutral instruments. Rather, they are human-made tools situated in context. While advances in technology can certainly be powerful, they are not guaranteed to be positive, and there is a risk of unintended negative side-effects. Moving forward, policymakers must work with a wider set of actors – including industry and civil society bodies – to support the responsible development of AI in the energy sector. This is an area that demands active and rapid development, and one that holds significant potential for further research.

Part One: Definitions and disagreements

The definition of AI is highly contested. One fairly consistent – but problematic – feature across definitions is the reference to “intelligence” or “intelligent behaviour”. Definitions thus rest on a highly subjective concept that varies across time and culture. For example, some definitions base AI on the concept of the human mind, while others advocate recognition of more diverse forms of intelligence, or sub-sets of general human intelligence. These could include animal intelligences (e.g. the collective movement of fish within a swarm); task-based intelligences (e.g. information search or language translation); or emotional intelligence (e.g. tact and empathy).

Kurzweil [1] further breaks AI down into two broad categories: weak AI and strong AI. While these terms do not escape all the ambiguities surrounding the concept of intelligence, they do provide a useful framework that distinguishes between AI focussing on specific, task-based applications (weak AI) and AI imbued with more general intelligence (strong AI). There is no universally agreed benchmark for what ‘strong’ AI might look like, nor for how this might relate to the nature and limits of human intelligence. Even if the human mind were adopted as the universal benchmark for strong AI, ambiguity would remain: for example, there is no singular understanding of the limits of human intelligence, of the ‘non-logical’ elements of the human mind, or of human neurodiversity [2].

To add further complexity, what constitutes ‘true’ AI is, for some, continually redefined as AI develops. Intelligence is often regarded as a complex and ultimately incomprehensible property, so as soon as methods are developed to build certain facets of intelligence, those facets quickly fall outside the remit of what is considered ‘true’ intelligence. This issue is known as ‘the AI effect’ or ‘Tesler’s theorem’:

“… once some mental function is programmed, people soon cease to consider it as an essential ingredient of “real thinking”. The ineluctable core of intelligence is always in that next thing which hasn’t yet been programmed. This “Theorem” was first proposed to me by Larry Tesler, so I call it Tesler’s Theorem: “AI is whatever hasn’t been done yet.”” [3]

Terminology Glossary

Due to the technical nature of the field, a significant amount of specific terminology is used in discussions surrounding AI and its applications. The glossary below contains an overview of common AI-related terms and selected definitions.

Algorithm: “A procedure or set of rules used in calculation and problem-solving.” [4] While the term today is used largely in relation to problem-solving in computing, it was developed in ancient mathematics to describe a specific procedure, or set of rules, that could be applied to solve a certain type of problem.
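The glossary’s point about ancient mathematics can be made concrete: Euclid’s method for finding the greatest common divisor of two numbers, recorded around 300 BC, is one of the oldest known algorithms. A minimal sketch in Python:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace the pair (a, b) with
    (b, a mod b) until the remainder is zero; the survivor is the
    greatest common divisor."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # 6
```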
Artificial intelligence (AI): The “activity devoted to making machines intelligent, and intelligence is that quality that enables an entity to function appropriately and with foresight in its environment.” [5]

Strong AI [6] or Artificial General Intelligence (AGI) [7]: AI imbued with intelligence that is multi-faceted, adaptive to new environments or tasks, and has independent learning abilities. For example, strong AI may be able to combine pattern recognition with emotionally and context-sensitive communication abilities. The human mind is one benchmark for general intelligence.

Weak AI [8]: AI imbued with intelligence in a narrower domain or application, e.g. playing chess or Go. Some consider weak (or ‘narrow’) AI to overlap with the broader definition of cognitive technologies.

Automation: The introduction and development of technologies that replace the need for forms of human labour, both physical and cognitive.

Cognitive technologies: Technologies with information- or knowledge-processing abilities in a narrow field, e.g. pattern recognition. Cognitive technologies generally require some form of human input, e.g. setting task bounds, monitoring or interpreting results.


Interpretability: “Interpretability is the degree to which a human can understand the cause of a decision.” [9] Model interpretability is becoming a significant area of research, fuelled by demand for better insight into black and grey box models (see below), particularly those applied in socially influential areas such as criminal justice, employment, or credit scoring.

Black box model: A model for which the inputs and outputs are visible, but there is either no or very limited knowledge of its internal workings.

Grey box model: A model for which there is partial understanding of its internal workings.

White box model: A model for which there is full understanding of its internal workings.
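To make the black/white box distinction concrete, the sketch below (all names and numbers are invented for illustration) treats the same simple tariff calculation two ways: as a white box whose internal coefficients can be read directly, and as a black box that can only be probed through its inputs and outputs:

```python
# White box: the model's internal workings (its coefficients) are fully visible.
coefficients = {"standing_charge": 0.25, "price_per_kwh": 0.15}

def tariff_white_box(kwh: float) -> float:
    # Every step of the calculation can be inspected and explained directly.
    return coefficients["standing_charge"] + coefficients["price_per_kwh"] * kwh

def tariff_black_box(kwh: float) -> float:
    # Stand-in for a model whose internals are hidden from the caller
    # (e.g. a trained network behind an API): only input/output pairs are visible.
    return tariff_white_box(kwh)

# Probing a black box: vary one input, observe the output, infer behaviour.
marginal = tariff_black_box(10.0) - tariff_black_box(9.0)
print(round(marginal, 2))  # ~0.15, the inferred marginal unit price
```

Interpretability research aims to narrow this gap, i.e. to recover white-box-style explanations from black-box behaviour.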

Machine learning (ML): “The capacity of a computer to learn from experience, i.e. to modify its processing on the basis of newly acquired information.” [10] ML algorithms rely on large volumes of data to ‘train’, i.e. to detect patterns and make predictions, classifications or decisions accordingly. ML may be used independently or in AI development.
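As a minimal, self-contained illustration of ‘learning from experience’ (the consumption and bill figures are invented), the sketch below fits a straight line to observed data points by ordinary least squares, recovering an underlying pattern from data alone:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x, computed from scratch."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope: covariance of x and y divided by variance of x.
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a, b

# Hypothetical 'experience': daily consumption (kWh) and the resulting bill.
kwh   = [5.0, 10.0, 15.0, 20.0]
bills = [1.00, 1.75, 2.50, 3.25]

a, b = fit_line(kwh, bills)
print(round(a, 2), round(b, 2))  # 0.25 0.15 -- intercept and unit price recovered
```

More data points would simply refine the estimates, which is the sense in which the model ‘modifies its processing on the basis of newly acquired information’.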

Artificial Neural Networks (ANNs): “Artificial neural networks (ANNs) are collections of small individual interconnected processing units. Information is passed between these units along interconnections… ANNs while implemented on computers are not programmed to perform specific tasks. Instead, they are trained with respect to data sets until they learn the patterns presented to them. Once they are trained, new patterns may be presented to them for prediction or classification.” [11] The design of ANNs is loosely modelled on the human brain, with interconnected processing units playing the role of neurons and synapses.
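A minimal sketch of the idea (with arbitrary, hand-set weights standing in for trained ones): information flows from the inputs through a small layer of hidden units to a single output, each unit computing a weighted sum followed by a sigmoid ‘activation’:

```python
import math

def forward(inputs, weights_hidden, weights_out):
    """One forward pass through a tiny fully connected network:
    2 inputs -> 2 hidden units (sigmoid) -> 1 sigmoid output."""
    sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs)))
              for ws in weights_hidden]
    return sigmoid(sum(w * h for w, h in zip(weights_out, hidden)))

# Weights are fixed here for illustration; in practice they are found by
# training, i.e. repeatedly adjusting them to reduce error on a data set.
w_hidden = [[0.5, -0.6], [0.3, 0.8]]
w_out = [1.2, -0.7]
print(forward([1.0, 0.0], w_hidden, w_out))  # a value between 0 and 1
```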

Deep learning (DL): A subset of machine learning based largely on ANNs. DL may be used independently or in AI development. Deep learning generally uses black box models, and these models can often cope with data that is less well labelled than other models require.

Reinforcement learning (RL): “Similar to toddlers learning how to walk who adjust actions based on the outcomes they experience such as taking a smaller step if the previous broad step made them fall, machines and software agents use reinforcement learning algorithms to determine the ideal behavior based upon feedback from the environment… Depending on the complexity of the problem, reinforcement learning algorithms can keep adapting to the environment over time if necessary in order to maximize the reward in the long-term.” [12] This method of ML requires very large quantities of data to train.
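A toy illustration of the feedback loop described above: a tabular Q-learning agent (one standard RL algorithm) in an invented five-state ‘corridor’ environment, where the only reward sits at the right-hand end:

```python
import random

# Toy 'corridor': states 0..4; the agent starts at 0 and moves left (-1)
# or right (+1); reaching state 4 ends the episode with reward 1.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

random.seed(0)
for episode in range(200):
    s = 0
    while s != GOAL:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda a: q[(s, a)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == GOAL else 0.0
        # Q-learning update: nudge the estimate towards
        # reward + discounted best future value.
        best_next = max(q[(s_next, a2)] for a2 in ACTIONS)
        q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
        s = s_next

# After training, the learned policy prefers moving right in every state.
print([max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(GOAL)])  # [1, 1, 1, 1]
```

The agent is never told the rule “go right”; it infers it purely from the reward signal, which is the essence of learning from environmental feedback.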

Supervised learning: A type of ML where algorithms are trained on labelled data, i.e. where both the input (X) and output (Y) variables are already known. “It is called supervised learning because the process of an algorithm learning from the training dataset can be thought of as a teacher supervising the learning process. We know the correct answers, the algorithm iteratively makes predictions on the training data and is corrected by the teacher. Learning stops when the algorithm achieves an acceptable level of performance.” [13]

Semi-supervised learning: A type of ML where algorithms are trained on partially labelled data, i.e. where all the input variables (X) are available but only some output variables (Y). Speech recognition is one example of a problem that can be tackled using semi-supervised learning techniques.

Unsupervised learning: A type of ML where algorithms are given only input data (X). This type of machine learning is used for different kinds of problems than supervised or semi-supervised ML, namely “to model the underlying structure or distribution in the data in order to learn more about the data.” [14] For example, unsupervised ML algorithms may be able to detect previously overlooked patterns, such as grouping (‘clustering’) houses with similar electricity demand profiles across large geographic areas.

Smart agent: A software programme that performs an activity or service without immediate supervision by the user, generally based on pre-set user preferences, e.g. a smart agent could trade excess electricity on behalf of a household fitted with solar PV.
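The ‘clustering’ example above can be sketched with plain k-means, one common unsupervised technique (the hourly demand profiles below are invented): the algorithm is given no labels at all, yet groups the similar homes together.

```python
import random

def dist2(p, q):
    """Squared Euclidean distance between two profiles."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def mean(pts):
    """Component-wise mean of a list of profiles."""
    return [sum(vals) / len(pts) for vals in zip(*pts)]

def kmeans(points, k, iters=20):
    """Plain k-means: assign each point to its nearest centroid,
    then move each centroid to the mean of its assigned points."""
    random.seed(1)
    centroids = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: dist2(p, centroids[i]))
            clusters[nearest].append(p)
        centroids = [mean(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Hypothetical 4-hour demand profiles (kWh):
# two 'evening peak' homes and two 'flat demand' homes.
profiles = [[0.2, 0.3, 2.1, 2.4], [0.3, 0.2, 2.0, 2.2],
            [0.8, 0.9, 0.9, 0.8], [0.9, 0.8, 1.0, 0.9]]

centroids, clusters = kmeans(profiles, k=2)
print([len(c) for c in clusters])  # two clusters of two similar homes each
```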

An ICT landscape visualisation

This diagram is designed to present a simple visual illustration of relationships between some of the terms defined in the glossary above. It is not drawn to represent any particular scale and is not an exhaustive ‘map’ of all techniques and technologies present in the AI landscape.

Source: author’s adaptation of Somer 2017 [15]


This blog has highlighted the difficulty in defining AI, introduced common AI-related terms, and presented a visualisation of relationships within the ICT/AI landscape, designed to provide readers with a general introduction to the field. The second part of this blog series will examine three socio-economic factors currently providing a supportive environment for the development of applied AI in the UK energy system, and examples of where applied AI technologies are starting to appear on the market.


With thanks to Deepak Kumar Panda for technical advice in formulating the glossary.

Supervision: Dr Iain Soutar and Prof Catherine Mitchell

[1]   Kurzweil, R. “The Singularity Is Near: When Humans Transcend Biology”. Viking. New York. 2005.

[2] For further information see: Walker, N. “Neurodiversity: Some Basic Terms and Definitions”, 27 September 2014. Accessible via: (accessed 12 March 2019).

[3] Hofstadter, D. R. “Gödel, Escher, Bach: an eternal golden braid”. Basic Books, 20th anniversary edition, 1999. Print. pp597.

[4] Oxford English Dictionary Online, accessed 13 November 2018.

[5] Kurzweil, R. “The Singularity Is Near: When Humans Transcend Biology”. Viking. New York. 2005.

[6] Goertzel, B. “Artificial General Intelligence”. Ed. by Pennachin, C. Vol. 2. Springer. New York. 2007.

[7] Kurzweil, R. “The Singularity Is Near: When Humans Transcend Biology”. Viking. New York. 2005.

[8] Nilsson, N. J. “Preface,” in The Quest for Artificial Intelligence, Cambridge University Press, Cambridge, 2009, pp. xiii-xvi. doi: 10.1017/CBO9780511819346.001. (Nilsson is a computer scientist considered one of the founding fathers of AI.)

[9] Miller, T., “Explanation in Artificial Intelligence: Insights from the Social Sciences”, 2017, arXiv preprint arXiv:1706.07269, in Molnar, C. “A Guide for Making Black Box Models Explainable”, 1 January 2019, GitHub. Accessed 9 January 2019.

[10] Oxford English Dictionary Online, accessed 13 November 2018.

[11] Kalogirou, S. A., “Artificial neural networks in renewable energy systems applications: a review”, Renewable and Sustainable Energy Reviews 5 (2001), pp 376. An alternative explanation, including useful diagrams, is also provided by: Castrounis, A. “Artificial Intelligence, Deep Learning, and Neural Networks Explained”, InnoArchiTech, 2019. Accessed 9 January 2019.

[12] Marr, B. “Artificial Intelligence: What Is Reinforcement Learning – A Simple Explanation & Practical Examples”, Forbes, 28 September 2018. Accessible via: Accessed 19 March 2019.

[13] Brownlee, J. “Supervised and Unsupervised Machine Learning Algorithms”, Machine Learning Mastery, 16 March 2016. Accessible via: Accessed 21 March 2019.

[14] Brownlee, J. “Supervised and Unsupervised Machine Learning Algorithms”, Machine Learning Mastery, 16 March 2016. Accessible via: Accessed 21 March 2019.

[15] Somer, P. “The New Technologies”, IBM, November 2017. Accessed 19 November 2018.
