Thousands of passengers traveling through the United Kingdom’s train stations have likely had their faces scanned by Amazon software as part of widespread artificial intelligence (AI) trials, according to newly revealed documents. These trials aimed to predict travelers’ age, gender, and emotions, with potential future use in advertising systems.
As Wired reports, over the past two years, eight major stations, including London Euston, Waterloo, and Manchester Piccadilly, tested AI surveillance technologies through CCTV cameras. These trials, overseen by Network Rail, used object recognition to detect trespassing, monitor platform overcrowding, identify antisocial behavior, and spot potential bike thieves. Additional trials used wireless sensors to detect hazards such as slippery floors and overflowing bins.
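The documents do not describe how the detection pipeline was built, but the general technique is standard computer vision: run an object detector over CCTV frames and raise an alert when a person appears inside a restricted zone. The sketch below illustrates the idea with an off-the-shelf torchvision model; the model choice, file name, zone coordinates, and confidence threshold are all placeholder assumptions, not details from the trials.

```python
# Illustrative sketch only: flag a person detected inside a restricted zone.
# The trials' actual models, thresholds, and zone logic are not described in
# the documents; everything concrete below is an assumption.
import torch
from PIL import Image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
PERSON_CLASS = 1  # COCO label index for "person"

# Hypothetical track-side area, in pixel coordinates: (x1, y1, x2, y2).
RESTRICTED_ZONE = (0, 400, 1920, 1080)

def overlaps(box, zone):
    """True if a detection box overlaps the restricted zone."""
    x1, y1, x2, y2 = box
    zx1, zy1, zx2, zy2 = zone
    return x1 < zx2 and x2 > zx1 and y1 < zy2 and y2 > zy1

frame = to_tensor(Image.open("platform_frame.jpg"))  # placeholder CCTV frame
with torch.no_grad():
    detections = model([frame])[0]

for box, label, score in zip(detections["boxes"], detections["labels"], detections["scores"]):
    if label == PERSON_CLASS and score > 0.8 and overlaps(box.tolist(), RESTRICTED_ZONE):
        print(f"Possible trespass: person at {box.tolist()} (confidence {score.item():.2f})")
```

A real deployment would layer object tracking and dwell-time rules on top of this per-frame check to reduce false alarms.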
Civil liberties group Big Brother Watch obtained the details of these trials through a freedom of information request. Jake Hurfurt, the group’s head of research, expressed concerns about the normalization of AI surveillance in public spaces without adequate consultation.
The trials employed a mix of smart CCTV cameras and older cameras connected to cloud-based analysis, with five to seven cameras or sensors installed at each station.
Documents from April 2023 list 50 potential AI use cases, though not all were tested. For instance, a “suicide risk” detection system at London Euston was abandoned after technical failures.
One of the most controversial aspects of the trials was the use of Amazon’s Rekognition system to analyze passenger demographics and emotions. This setup, which involved capturing images at ticket barriers, aimed to produce statistical analyses of passengers’ age, gender, and emotional states, data that could potentially be used to boost advertising and retail revenue.
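For a sense of what this analysis involves: Amazon exposes these estimates through Rekognition’s DetectFaces API, which returns an age range, a gender guess, and a confidence-ranked list of emotions for each detected face. Below is a minimal sketch of such a call using boto3; the image file, region, and printed fields are placeholder assumptions, and nothing here reflects Network Rail’s actual integration.

```python
# Minimal sketch, assuming an input frame captured at a ticket barrier.
# This shows the shape of Amazon Rekognition's DetectFaces output, not
# Network Rail's pipeline.
import boto3

rekognition = boto3.client("rekognition", region_name="eu-west-2")

with open("barrier_frame.jpg", "rb") as f:  # hypothetical camera frame
    image_bytes = f.read()

# Attributes=["ALL"] requests age range, gender, and emotion estimates
# in addition to basic face geometry.
response = rekognition.detect_faces(
    Image={"Bytes": image_bytes},
    Attributes=["ALL"],
)

for face in response["FaceDetails"]:
    age = face["AgeRange"]          # e.g. {"Low": 25, "High": 35}
    gender = face["Gender"]["Value"]
    # Emotions come back as confidence scores over fixed categories
    # (HAPPY, CALM, ANGRY, ...), not verified mental states.
    top_emotion = max(face["Emotions"], key=lambda e: e["Confidence"])
    print(f"age {age['Low']}-{age['High']}, {gender}, "
          f"dominant emotion: {top_emotion['Type']} ({top_emotion['Confidence']:.0f}%)")
```

Note that the API reports emotions only as confidence scores over a fixed set of categories; it infers nothing about what a passenger actually feels, which is at the heart of the reliability concerns below.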
AI researchers have frequently warned about the unreliability of emotion detection technology. In October 2022, the UK Information Commissioner’s Office advised against using such immature technologies.
Network Rail declined Wired’s request for comment on the current status of these trials or on questions about emotion detection and privacy. A spokesperson emphasized the importance of security and of compliance with relevant legislation.
Despite the lack of transparency, Gregory Butler, CEO of data analytics firm Purple Transform, stated that emotion detection was discontinued during the trials and that no images were stored. The trials also demonstrated benefits, such as quicker detection of trespassing incidents and safety improvements enabled by AI analytics.
Privacy experts and AI ethics researchers, like Carissa Véliz from the University of Oxford, voiced concerns over the potential expansion of surveillance.
They warn that increased surveillance could lead to a loss of personal freedoms, emphasizing the need for careful consideration and transparency in deploying such technologies.
What do you think about this? Is it ethically right or not?