CHAI: Cyber Hygiene in AI-enabled domestic life

Funded by the Engineering and Physical Sciences Research Council (EPSRC) under EPSRC References: EP/T026812/1, EP/T026596/1, EP/T026707/1, EP/T026820/1 (01 December 2020 - 30 November 2023)

Artificial Intelligence (AI) is rapidly becoming part of people’s lives at home. Smart speakers, smart thermostats, security cameras with face recognition, and, in the near future, brain-computer interfaces and elderly care companion robots can bring considerable benefits in energy efficiency, comfort, and even health. However, AI also introduces new cyber security risks, for which users are not prepared. When a user faces a security threat, such as receiving a phishing email or visiting a watering hole website, there are often visual and behavioural cues that can raise their suspicion, and there are known cyber hygiene measures they can follow. In contrast, for AI-enabled devices, such as those found in a smart home, this is rarely the case, because they are designed to be minimalist and seamless. Moreover, given the emerging nature of this technology, there are no equivalent cyber hygiene measures to advise users on AI security risks.

The aim of CHAI is to help individuals protect themselves against security risks in AI-enabled environments. CHAI argues that in AI-enabled domestic life, new cyber hygiene measures need to be supported by diagnostic tools that allow users to identify security attacks, as well as by appropriate training. This will be achieved through the following goals: (i) to identify and demonstrate the novel security breaches introduced by AI in the home, and to assess the social, psychological and neuroscientific factors that may influence an individual’s susceptibility in the context of these breaches; (ii) to employ and extend methods already proposed for improving the explainability of AI decisions, in order to provide diagnostic information that allows users to identify AI security breaches; (iii) to develop new cyber hygiene measures, i.e. diagnostic and actionable steps that users may take to address a breach, optimised for the user and situation using mathematical techniques, in terms of their cost (usability, implementation difficulty, mental effort, and even monetary cost if additional software/hardware needs to be installed); (iv) to co-design with users of home technology a novel cyber hygiene training programme that supports the use of Explainable AI while personalising and optimising the training to match each individual. Empirical research will be carried out in participating households to evaluate the effectiveness of this training approach.

CHAI focuses on the social housing sector, which is introducing several AI initiatives, such as housing management chatbots, building maintenance bots, and smart thermostats to tackle fuel poverty. While these initiatives can deliver cost savings and facilitate property management (e.g. temperature and humidity controllers), residents have no control over these changes and often lack the digital literacy to respond to security risks and breaches. If an AI system’s integrity or availability is breached, this could affect the physical privacy of tenants (e.g. revealing life patterns of behaviour), as well as their emotional and physical safety (e.g. control of temperature and electrical appliances). CHAI has chosen to focus on this population because of its heightened vulnerability with respect to security.

With a view to deeply integrating CHAI in real-life settings, we have engaged leading industrial partners: (i) Gas Tag, an AI developer for gas supply smart appliances in social housing, will support the examination of realistic AI applications that are currently in place or expected to be introduced in the home in the near future; (ii) security awareness training provider Bob’s Business, whose current clients include over 70,000 employees in the UK Government, will co-design cyber hygiene training programmes and webinars; and (iii) housing technology sector representative Housing Technology will help recruit participant households and social housing associations for experiments and offer its dissemination channels in the housing sector.

Budget: £2.4M (University of Greenwich share £453k).
Principal Investigator: George Loukas. Co-Investigators: Manos Panaousis, A. Vasalou (UCL), S. Nemorin (UCL).
Partners: UCL, Queen Mary Univ. of London, Univ. of Bristol, Univ. of Reading.
Role: Leading research on optimisation of cybersecurity controls to mitigate threats against AI applications.

Emmanouil (Manos) Panaousis
Head of the Cyber Risk Lab, Associate Professor of Computer Science
Emma Scott
Final Year Project Researcher