Cyber_Bytes Issue 63

Published on 10 May 2024

Welcome to Cyber_Bytes, our regular round-up of key developments in cyber, tech and evolving risks.

UK Government publishes Cyber Security Breaches Survey 2024

The UK government has published the results of a research study into UK cyber resilience. The study explores the cyber security policies, processes and approaches of 2,000 businesses, 1,004 charities and 430 educational institutions. The findings describe the cyber security position of a representative sample of UK organisations, providing a snapshot of UK cyber resilience at this point in time.

Some interesting statistics include:

  1. 7.78 million cyber crimes of all types have been experienced by UK businesses in the last 12 months.
  2. 32% of businesses are experiencing attempted attacks at least once a week.
  3. Malware impacted 17% of organisations that experienced a cyber incident.
  4. Phishing remains the top method of initial access, and the cause of 84% of cyber incidents.
  5. Just 22% of businesses have a formal incident response plan in place.
  6. Just 11% of businesses say they review the risks posed by their immediate suppliers and only 6% are looking at their wider supply chain.

It will be interesting to see how the final two points develop, with the EU's upcoming NIS2 Directive and Digital Operational Resilience Act (DORA) prompting affected UK businesses to focus further on cyber risk.

Click here to read the full UK Government survey.

ICO launches consultation on accuracy of generative AI models

The ICO has announced the launch of the third chapter of its consultation series on generative AI, focussing on how the accuracy principle of data protection law applies to the outputs of generative AI models and the impact that accurate training data has on the output.

The consultation explains that the level of accuracy required of the outputs of generative AI models depends on how the model will be used, with high accuracy needed for models that are used to make decisions about people or that are relied on by users as a source of information. It also notes that organisations developing and using generative AI models for a purely creative purpose are unlikely to need to treat the accuracy of outputs as their first priority. For example, the consultation highlights that models used to triage customer queries would need to maintain higher accuracy than models used to help develop ideas for video game storylines.

Where an application based on generative AI is used by individuals in consumer-facing services, the ICO notes that application developers need to consider:

  • Providing clear information about the statistical accuracy of the application, and easily understandable information about appropriate usage; 
  • Monitoring user-generated content;
  • User engagement research, to validate whether the information provided is understandable and followed by users;
  • Labelling the outputs as AI-generated, or as not factually accurate; and
  • Providing information about the reliability of the output. 

Click here to read the full publication by the ICO.

NCSC publishes new version of the Cyber Assessment Framework

The NCSC has published an updated version of the Cyber Assessment Framework (CAF). This follows an increase in the cyber threat to critical national infrastructure.

The updated CAF covers the increased use of AI technologies and makes changes to the previous CAF in relation to remote access, privileged operations, user access levels and the use of multi-factor authentication. It has also been revised to improve navigation across the CAF collection and consolidate references to both internal NCSC and wider external guidance.

The update has been completed in full consultation with NIS regulators and other interested parties. The NCSC explains that it has also improved alignment with Cyber Essentials by mirroring some of that scheme's requirements, while ensuring the CAF's existing outcome-focussed approach is retained.

Click here to read the NCSC press release.

CyberCube issues warning on increased cyberattacks targeting public sector

CyberCube, an analytics platform which provides data-driven insights for the insurance industry, has warned of the rising risk of cyberattacks targeting public sector institutions, particularly government and election systems. In anticipation of upcoming elections around the world, its report, "Global Threat Outlook, H1 2024", urges government agencies to enhance cybersecurity defences "in 2024 and beyond".

The report also discusses eight sectors vulnerable to cyber threats (telecoms, IT, education, retail, arts & entertainment, financial services and healthcare). CyberCube identifies healthcare as the most susceptible to cyber threats.

CyberCube explains that sectors such as banking and aviation are frequently targeted but maintain robust cybersecurity, making them slightly less susceptible to threats. Sectors such as mining and agriculture are targeted less frequently but nonetheless maintain high security standards.

Click here to read more from Insurance Business Magazine.

ICO publishes guidance to improve transparency in health and social care

The Information Commissioner's Office (ICO) has published new guidance on improving transparency in health and social care.

The health and social care sectors routinely handle sensitive information about the most intimate aspects of someone’s health, which is provided in confidence to trusted practitioners. Under data protection law, people have a right to know what is happening to their personal information, which is particularly important when accessing vital services.

The guidance has been prepared following feedback received from a public consultation with health and social care organisations across the UK earlier this year.

The guidance will help health and social care organisations to understand the definition of transparency and to assess appropriate levels of transparency, and it provides practical steps for developing effective transparency information. The guidance supplements existing ICO guidance on the principle of transparency and the right to be informed.

Click here to read the guidance. Click here to read the associated ICO press release.

International agencies publish joint guidance on securely deploying AI systems

Global government security agencies, including the UK's NCSC, have published the joint Cybersecurity Information Sheet "Deploying AI Systems Securely".

The guidance provides best practices for deploying and operating externally developed AI systems and aims to:

  • improve the confidentiality, integrity, and availability of AI systems;
  • ensure there are appropriate mitigations for known vulnerabilities in AI systems; and
  • provide methodologies and controls to protect, detect and respond to malicious activity against AI systems and related data and services. 

The information sheet is for organisations deploying and operating externally developed AI systems on premises or in private cloud environments, especially those in high-threat and high-value environments. The sheet notes that each organisation should consider the guidance alongside their use case and threat profile.

Click here to read the CISA press release.

European Supervisory Authorities (ESAs) consultation seeks views on draft regulatory technical standards under DORA

The Digital Operational Resilience Act (DORA) (Regulation (EU) 2022/2554) introduces a pan-European oversight framework for ICT third-party service providers designated as critical (CTPPs). The ESAs have been mandated under DORA to develop draft regulatory technical standards (RTS) to harmonise the conduct of oversight activities by competent authorities and the ESAs.

Under Article 41(1) of DORA, the draft RTS should specify:

  • the information to be provided by an ICT third-party service provider in the application for a voluntary request to be designated as critical;
  • the information to be submitted by the ICT third-party service providers that is necessary for the Lead Overseer (who is appointed to conduct oversight of the assigned CTPPs and act as the primary point of contact for those CTPPs) to carry out its duties;
  • the criteria for determining the composition of the joint examination team, their designation, tasks, and working arrangements; and
  • the details of the competent authorities’ assessment of the measures taken by CTPPs based on the recommendations of the Lead Overseer.

The consultation seeks feedback, until 18 May 2024, on whether the content of the RTS is sufficiently clear and detailed, and whether respondents agree with the impact assessment and main conclusions stemming from it.

Click here to view the consultation paper.

Stay connected and subscribe to our latest insights and views 

Subscribe Here