Regulators in the US and UK continue to watch closely how firms monitor communications. The global pandemic and the shift to homeworking were arguably among the catalysts for this closer scrutiny, but it has been a feature of the enforcement landscape for several years now.
In the US, there have been headline-grabbing fines for the use of “off-channel” communications.[1] In November 2023, one of the five SEC commissioners even criticised the agency for imposing over a billion dollars in fines, alongside onerous conditions, without ever defining “what is considered to be a business communication”.[2] The DOJ is not far behind. Its 2023 revised guide[3] for evaluating corporate compliance programs, used when calculating fines and penalties, dives deeply into employees’ messaging habits, asking, for instance, whether corporate data on employees’ personal devices is accessible and preserved.
Enforcement action in the UK has been less severe, but the FCA has nevertheless recently imposed fines for surveillance processes deemed deficient. Ofgem has also targeted breaches of recordkeeping requirements where traders used privately owned phones to discuss market transactions.
Reactive v proactive
The issue of monitoring often arises in a reactive context. If there are red flags surrounding an employee, or if a whistleblower has credibly identified suspicious conduct, businesses are expected to pull and review at least a selection of employees’ chats and emails. If a communication has not been preserved, regulators and law enforcement consider themselves robbed of direct and irrefutable proof.
But firms are of course also expected to proactively monitor for criminal conduct on an ongoing basis and to preserve communications. In the US, for instance, the DOJ directs prosecutors to ask whether the organisation’s compliance program includes “monitoring and auditing to detect criminal conduct.” In the UK, firms subject to FCA SYSC 10A[4] must comply with specific requirements on the recording and retention of communications and on the prevention of employees’ use of privately owned devices.
It is of course unrealistic to expect companies to monitor every electronic communication to or from their employees. Indeed, when law enforcement wiretaps suspected criminals’ phones, a dozen or more agents are often assigned to keep up with the constant influx of communications and select what is pertinent. Firms have therefore had to adopt a proportionate approach to monitoring, historically relying on blunt tools such as keyword search terms to flag anything requiring closer inspection.
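To illustrate just how blunt that historical approach is, the sketch below shows a bare keyword screen in Python; the term list and message structure are hypothetical, not drawn from any real deployment:

```python
# Minimal sketch of a legacy keyword screen over a message archive.
# The watched terms and message records are illustrative only.
KEYWORD_TERMS = {"guarantee", "off the record", "delete this", "personal phone"}

def flag_for_review(messages):
    """Return messages containing any watched term, for human review.

    Raw substring matching misses paraphrases and euphemisms, and it
    also fires on innocent uses (the classic false-positive problem).
    """
    return [
        msg for msg in messages
        if any(term in msg["body"].lower() for term in KEYWORD_TERMS)
    ]

sample = [
    {"sender": "trader1", "body": "Call me on my personal phone about this."},
    {"sender": "trader2", "body": "Our returns carry no guarantee of profit."},
]
print([m["sender"] for m in flag_for_review(sample)])  # both flagged; only one is suspect
```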
With the advent of generative AI, monitoring can now be far more sophisticated and effective. According to the FCA, “one of the defining features of AI is its ability to process large volumes of data, and to detect and exploit patterns in those data”.[5] AI algorithms enable analysis that surpasses human capabilities: vast volumes of data can be sifted at high speed, in real time, without drowning managers in false positives.
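In practice, that shift can be as simple as replacing binary keyword hits with model-assigned risk scores, so that only the highest-scoring messages reach a reviewer. The sketch below assumes a hypothetical `risk_model` object standing in for whatever classifier or LLM-based scorer a firm might deploy:

```python
from dataclasses import dataclass

@dataclass
class Alert:
    sender: str
    body: str
    score: float  # model-estimated probability of reportable conduct

def triage(messages, risk_model, threshold=0.85):
    """Score every message and surface only those above a risk threshold.

    `risk_model` is a hypothetical scorer (for example, a fine-tuned
    classifier or an LLM prompt wrapper) exposing .score(text) -> float
    in [0, 1]. Ranking by score keeps reviewer queues short, addressing
    the false-positive flood that pure keyword matching produces.
    """
    alerts = [
        Alert(m["sender"], m["body"], risk_model.score(m["body"]))
        for m in messages
    ]
    return sorted(
        (a for a in alerts if a.score >= threshold),
        key=lambda a: a.score,
        reverse=True,
    )
```

The threshold here is an assumed tuning parameter: setting it is exactly the kind of calibration exercise discussed in the practical tips below.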
Changing regulatory expectations
Regulators have clearly indicated that they expect firms to embrace this new technology. The FCA has signalled in a speech that developments in technology such as AI are creating new opportunities for the proactive monitoring of communications.[6] Similarly, in July 2023, the FCA’s Chief Executive said, “as AI is further adopted, the investment in fraud prevention and operational and cyber resilience will have to accelerate at the same time. We will take a robust line on this”. The introduction of the Digital Sandbox in the UK further underlines the clear message that regulators are devoting considerable resources to developing AI and expect firms to use it.
In the US, President Biden’s executive order of 30 October 2023 directs agencies to, among other things, acquire AI products and services and hire AI professionals to “strengthen AI deployment”.[7] But we are still far from the use of AI being mandated for the private sector.
Practical tips to consider in relation to monitoring
- Ahead of embedding any new technology, ensure there is a clear senior management structure that is accountable not only for the process of AI implementation but also for its ongoing management and compliance with relevant regulation.
- Conduct sufficient testing and calibration of technology before it is rolled out and gather feedback from a variety of sources.
- Ensure internal policies are updated to reflect the use of the new technology.
- Ensure employees are informed of the purpose and extent of any monitoring, and consider whether specific data protection and employment advice is required. In the UK, for example, to comply with the Data Protection Act 2018 and the UK GDPR, surveillance must be proportionate, justified and necessary. A data protection impact assessment must be undertaken for processing likely to result in a high risk to the rights of individuals, and it is good practice to undertake one for any new project involving the processing of personal data.[8] Employees may only be monitored without their knowledge if the firm suspects they are breaking the law and letting them know would make it hard to detect the crime.[9]
- Consider a phased approach when introducing technology where possible, to mitigate overreliance until it can be effectively implemented; one option is to run the new system alongside the incumbent, as in the illustrative sketch after this list.
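One way to combine the testing, calibration and phasing points above, sketched here under assumed interfaces, is to run the new model in “shadow mode” alongside the incumbent keyword screen and compare their outputs before the model is allowed to drive alerts on its own:

```python
def shadow_mode_report(legacy_flags, model_flags):
    """Compare flags from the incumbent screen and the candidate AI model.

    Both inputs are sets of message identifiers flagged by each system
    (interfaces assumed for illustration). During the shadow phase the
    model's flags are logged and reviewed but never actioned, so
    calibration problems surface before the model goes live.
    """
    return {
        "agreed": len(legacy_flags & model_flags),
        "model_only": len(model_flags - legacy_flags),   # candidate new catches
        "legacy_only": len(legacy_flags - model_flags),  # potential misses to investigate
    }
```

A sustained rise in the “legacy_only” count during the shadow phase, for example, would be a signal to recalibrate the model before go-live rather than after.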
When technology is introduced, beware of ‘automation bias’: a tendency to believe that a flag for potential criminal conduct must indeed represent criminal conduct because an automated system, rather than a human, raised it.