AI in energy: is it as smart as you think? Part Three

By Emily Judson, Energy Policy Group, 8 April 2019

Artificial intelligence (AI) is currently in the spotlight due to the recent rapid expansion of AI technologies across a range of applications in both the private and public sectors. Applied AI technologies can be powerful, augmenting human capacity to address more complex situations and challenges in shorter timeframes. Indeed, AI was recently named ‘the emerging power behind daily life’ by Microsoft’s Kate Rosenshine in her keynote speech at the December 2018 Tech UK Digital Ethics Summit. These new technologies also impact all levels of the UK energy system, from enhancing the personalisation of domestic customer service through to facilitating predictive maintenance of national transmission infrastructure.

This three-part blog series addresses the growing use of AI in the energy system. Part One provided a scene-setter for readers who are unfamiliar with AI, addressing general background and terminology. Part Two analysed three socio-economic enabling factors that provide a fertile ground for the adoption of AI in energy, and provided examples of where applied AI technologies are already appearing on the market. Part Three (below) addresses the policy implications of the growth of AI in the energy system, opening the conversation regarding future governance.


Part 3: Policy at the intersection of energy and digital

While there appears to be a lot of potential for applied AI technologies to change and shape the energy system in future, it is far from certain that the impacts of these technologies will naturally align with public interest issues, such as addressing climate change. The degree to which the benefits or costs of applied AI technologies will be distributed among society is also uncertain. To steer these powerful technologies towards positive impacts there is a pressing need to broaden discussions around AI in energy, considering social, economic and environmental issues as well as technological detail.

Uneven adoption

Technology diffusion is usually non-linear. In his canonical book “Diffusion of Innovations”, Rogers describes how innovations diffuse across a population over time, dividing adopters into five groups: innovators, early adopters, early majority, late majority, and laggards [2]. These categories broadly vary in their attitudes to risk, with laggards seen as having the lowest risk appetite. However, the groups may also differ in terms of age, social class, and financial liquidity, to name a few examples.

Source: Rogers, E.M., “Diffusion of Innovations”, Free Press: New York, 5th edition, 2003, reproduced in European Public Health, “Diffusion of Innovations” (accessed 4 March 2019)
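Rogers’ categories describe shares of a cumulative adoption curve: 2.5% innovators, 13.5% early adopters, 34% early majority, 34% late majority, and 16% laggards. As a rough illustration of how such non-linear diffusion plays out over time (not a model used in this blog series), the sketch below applies the classic Bass diffusion model with illustrative parameter values to estimate when each category boundary might be crossed:

```python
import math

def bass_cumulative(t, p=0.03, q=0.38):
    """Cumulative fraction of eventual adopters at time t (Bass, 1969).
    p = coefficient of innovation, q = coefficient of imitation.
    The default values are illustrative, not fitted to any energy technology."""
    e = math.exp(-(p + q) * t)
    return (1 - e) / (1 + (q / p) * e)

# Rogers' category boundaries expressed as cumulative adoption fractions.
boundaries = {
    "innovators": 0.025,
    "early adopters": 0.16,   # 2.5% + 13.5%
    "early majority": 0.50,   # + 34%
    "late majority": 0.84,    # + 34%
    # the remaining 16% are laggards
}

for label, frac in boundaries.items():
    # find the first year in which cumulative adoption passes the boundary
    year = next(t for t in range(1, 100) if bass_cumulative(t) >= frac)
    print(f"{label:>15}: reached by year {year}")
```

Under these assumed parameters, adoption starts slowly, accelerates through the majority phases, and tails off — the familiar S-curve behind Rogers’ segmentation.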

In energy, segmenting consumers according to technological (and thus market) engagement can also be useful to both commercial and policy actors. For example, energy companies may segment customers for tailored marketing and product-development purposes. Segmentation may also be used by energy policy-makers to identify sub-sets of consumers vulnerable to, or disadvantaged by, technological change.

Source: Electricity Network Transformation Roadmap Interim Program Report, Fig. 1.4, “Proposed market segmentation curve for residential customers”, p.33, as discussed in a previous IGov blog: Hoggett, R., “The changing role of consumers in the energy system”, IGov, 8 July 2016. Accessible via: (accessed 29 March 2018).
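As a purely hypothetical sketch of the segmentation idea (the segment names, scores and thresholds below are invented for illustration, and are not drawn from the Roadmap report), a commercial or policy actor might bucket customers by a simple engagement measure:

```python
from dataclasses import dataclass

@dataclass
class Customer:
    name: str
    engagement_score: float  # 0.0 (disengaged) .. 1.0 (highly engaged)

def segment(c: Customer) -> str:
    """Assign a customer to a hypothetical engagement segment."""
    if c.engagement_score >= 0.8:
        return "active prosumer"
    if c.engagement_score >= 0.4:
        return "interested follower"
    return "passive / potentially vulnerable"

customers = [Customer("A", 0.9), Customer("B", 0.5), Customer("C", 0.1)]
for c in customers:
    print(c.name, "->", segment(c))
```

In practice a policy-maker would use richer indicators than a single score, but even this toy example shows how the same segmentation logic can serve marketing (targeting the engaged) or protection (identifying the passive segment at risk of being left behind).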

In addition to attitudes to risk, studies have also shown that factors such as social influences and networks [3], national narratives [4], and financial incentives [5] can influence the diffusion of technologies across a population. Within the digital energy context, factors influencing uptake or refusal of smart meters have been studied in particular detail. While factors can vary according to geographic and social context, recent findings from the USA indicate that the most significant factors influencing smart meter acceptance are technological familiarity and climate change risk perceptions [6].

Beyond factors driving diffusion, Kahma and Matschoss (2017) add a critical perspective to Rogers’ adoption curve by identifying five types of “non-use” of emerging technologies that go beyond a temporary lack of uptake which diminishes over time (Rogers’ “laggards”): active resistance, disenchantment, disenfranchisement, displacement, and disinterest. When considering the uptake of applied AI in energy, it is therefore useful to consider both reasons for adoption and reasons for potentially longer-term non-use. For example, users concerned about the privacy or security of applied AI in the home may refuse the technology on principle (active resistance), rather than because of more straightforward geographic or financial barriers.


These differences in personal circumstances, beliefs and attitudes relating to technological uptake have implications for energy policy-makers, given the public interest dimensions of energy. As outlined in Part Two of this blog series, AI-based technologies are already being developed to tackle the analysis and optimisation of our increasingly complex energy systems. This holds potential to support decarbonisation, for example through better planning and integration of renewable generation, or unlocking demand-side flexibility that could reduce requirements for costly network reinforcements. Part Two also highlighted the high levels of political support for emerging AI in energy and other sectors through, for example, the UK Government’s Industrial Strategy.

While this support could have a positive impact, policymakers must also bear in mind the distributional and equity impacts of technological change. This must include pre-emptive action focussed on providing safeguarding mechanisms that preserve a basic set of rights, freedoms, safety checks and equality standards throughout the development of new technologies. These mechanisms will be essential in steering the development of emerging technologies in a positive direction. They are also necessary to support the protection of individuals or groups who are either unable or unwilling to actively participate in engagement with AI-based technologies. In other words, safeguarding is a key government policy responsibility to ensure an equitable energy transition in which nobody is ‘left behind’.

These principles stand not only for AI but also for responsible digital technology development in the energy sector more broadly. For example, they could also be relevant to trading technologies or digital ancillary services. Policymakers can benefit from drawing on the growing body of research on AI risks, ethics and governance to explore how to develop their new roles and responsibilities effectively, and to align with cross-sectoral best practice.

New tools for public input and redress

Building upward from a baseline protected by government safeguarding mechanisms, the creation of responsible, inclusive and sustainable AI calls for increased involvement of the general public in technology design, monitoring and enforcement of accountability. To facilitate this broadened participation in AI-related discussions there is a need for better public resources, explanatory tools, and access to events. While some organisations – such as Doteveryone and the Open Data Institute – are already working broadly in this area, there remains a shortage of sector-specific resources [7]. Improving public information resources around AI in energy would provide a helpful first step in facilitating wider engagement with debate in a balanced manner. This may also help to counterbalance the tendency of media coverage to produce polarised presentations of emerging technologies as inherent saviours or disasters.

Building on the availability of information resources, there is a need for deeper public involvement in the development of AI in energy in two key ways.

  1. Input in the design-phase
    Improving public consultation during the design phases of new energy technologies can help facilitate better accessibility and mitigation of bias, by picking up potential equity issues at a stage prior to widespread technology adoption. Design-stage consultation can also help technologies cater to a larger range of energy user engagement styles and abilities; for example providing differential engagement options for users wishing to take more active roles in their energy management [8]. In order access these benefits, developers should consult with a range of people, including those from vulnerable, disadvantaged or under-represented groups. User experience testing should also be conducted with people from a range of demographics, at a point in the design phase where feedback can be meaningfully incorporated. This form of consultative design constitutes a different approach from much of technology business culture to date; suggesting that policy interventions may be necessary to stimulate change particularly in the practice of private companies. The introduction of industry standards could potentially encourage improvement in practises of consultative design in the energy sector. Here too, public scrutiny of compliance to standards could provide a key driver behind effective enforcement.
  2. Accountability and redress
    While inclusive design can pre-empt some issues, there may still be problems with applied AI that emerge only when the technology is put into everyday use. For example, technology use may result in exclusionary practices beyond those we currently recognise based on known socio-economic markers. If and when problems arise, appropriate and transparent redress mechanisms should be accessible. While redress mechanisms internal to private companies – such as customer service protocols – may be sufficient for some cases, these should be backed up by regulatory and legal protective measures that are appropriately updated to handle AI and other advanced technological cases. Public scrutiny of standards and testing of redress mechanisms will be necessary to build robust protective mechanisms. Policymakers and businesses must also be open to public feedback in the process of testing redress mechanisms.


This three-part blog series has explained that AI is not a single technology or technique, but a complex and shifting area of research and development that evades singular definition. The application of AI in energy is expanding rapidly; however, due to its technical content and fast pace of development, it remains an area that can seem opaque. A lack of transparency and accessible information can fuel polarised speculation over the abilities and risks of applied AI technologies, in turn risking a polarised approach to future AI development that either glosses over potential negative consequences or encourages an over-cautious approach that could stymie innovation supporting decarbonisation. Part One of this blog provided a small contribution to the need for improved information resources about AI; however, far more must be done in future to facilitate public understanding of these important technologies.

Inquiry into the strengths and weaknesses of applied AI in energy helps facilitate a more measured analysis of what the technologies can do in practice. While Part Two of this blog provided a brief overview of four developing areas of applied AI in energy, a more thorough evaluation of the forms and impacts of these emerging technologies is an important area for future energy research. This kind of analysis will also be essential in supporting effective technology governance moving forwards.

Finally, when establishing institutions and principles supporting the governance of emerging energy technologies, policymakers must be mindful that technologies are never neutral instruments. Rather, they are human-made tools situated in context. While advances in technology can certainly be powerful, they are not guaranteed to be positive and there is a risk of unintended negative side-effects. Part Three of this blog has highlighted three areas – safeguarding; public input into technology design; and appropriate accountability and redress mechanisms – in which policymakers can work with the public and industry to support the development of responsible AI applied in the energy sector. This is an area that requires active development and holds the potential for significant further research.


[1] O’Neil, C. “Weapons of math destruction: How big data increases inequality and threatens democracy”, Broadway Books, 2017. There is also a growing academic literature around this topic of algorithmic and data bias. Other notable authors whose work has contextually informed this blog post include: Buolamwini, J., Crawford, K., and Noble, S.U.. For a discussion and breakdown of the term algorithmic bias see section 2 of Danks, D. and London, A.J. “Algorithmic Bias in Autonomous Systems”, Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI-17), August 2017. DOI: 10.24963/ijcai.2017/65.

[2] Rogers, E. “Diffusion of Innovations”, Free Press: New York, 5th edition, 2003.

[3] Peres, R. et al., “Innovation diffusion and new product growth models: A critical review and research directions”, International Journal of Research in Marketing, vol.27, 2010, pp91-106.

[4] Malone, E. “Stories about ourselves: How national narratives influence the diffusion of large-scale energy technologies”, Energy Research and Social Science, vol.31, 2017, pp70-76.

[5] Simpson, G. and Clifton, J. “Testing Diffusion of Innovations Theory with data: Financial incentives, early adopters, and distributed solar energy in Australia”, Energy Research and Social Science, vol.29, 2017, pp12-22.

[6] Bugden, D. and Stedman, R. “A synthetic view of acceptance and engagement with smart meters in the United States”, Energy Research and Social Science, vol.47, 2019, pp137-145.

[7] Whittaker, M. et al., “AI Now Report 2018”, AI Now Institute, 2018.

[8] Renstrom, S. “Supporting diverse roles for people in smart energy systems”, Energy Research and Social Science, vol.52, 2019, pp98-109.

Emily Judson is an EPSRC funded PhD student in the Energy Policy Group examining digitalisation and democratisation of energy in the context of system decarbonisation.


Supervision: Dr Iain Soutar and Prof Catherine Mitchell
