On the limitations of ‘smart’ CCTV: technology, ethics and accountability

 

Session: PARALLEL SESSION 2: CCTV I

Date and time: 24th April 2014, 13:00–14:30

Authors: Daniel Neyland and Patrick Murphy


Recent years have witnessed a number of developments in what is often termed ‘smart’ (or algorithmic) CCTV. Alongside face recognition (and other biometric-based technologies) and gait recognition, object tracking, auto-deletion and access management systems have come to the fore as potential means to enhance the capabilities of surveillance systems. Questions have been raised about the potential of these technologies to invade privacy and to expand the scope of surveillance on a new scale and in new ways. Simultaneously (although on a smaller scale), suggestions have been made that such developments might make CCTV systems more ‘privacy sensitive’: through the deletion of footage, by reducing the scope of visibility for video-based surveillance, and by increasing accountability through the provision of automated outputs on system activity. This paper reports on one such project, which attempted to combine ‘smart’ CCTV developments in order to create a Privacy Enhancing Technology, assessed through a form of ongoing Privacy Impact Assessment. The paper suggests that such ‘smart’ developments can quickly run into significant and fundamental problems of technology, ethics and accountability. It argues that the extent and deeply entangled nature of these problems can result in systems that invade rather than enhance privacy and that become almost impossible to hold to account.