Facial recognition tech too dangerous without safeguards, experts warn
Last week IBM announced it could not condone facial recognition technologies that allowed for mass surveillance and racial profiling, and that it would end its development of facial recognition products. Amazon subsequently announced a one-year freeze on the availability of its facial recognition tools to law enforcement, and Microsoft said it would not provide the technology to law enforcement until federal legislation was in place. The announcements came as many in the United States continued to protest racially biased policing.
Research has shown a tendency for facial recognition to be less accurate when attempting to identify darker-skinned people, and Mr Santow said the use of the technology to identify an individual among many candidates was particularly worrying.
This so-called “one-to-many” application of facial recognition technology is distinct from the “one-to-one” systems used for passport control or user authentication in smartphones, which Mr Santow said carried a very low risk of harm.
“One-to-many is much more prone to error and the consequences of error can be exceptionally serious.”
“In a law enforcement context, if you wrongly identify someone as a suspect, then you can take all kinds of action against that person that can violate their basic human rights,” he said.
“[Facial recognition for law enforcement] should not be permissible, unless or until there are proper legal protections in place that will prohibit and prevent misuse and harm against humans.”
The Identity-matching Services Bill, which was introduced to parliament in 2018 and again in 2019, would allow the government to create facilities for the sharing of facial images between various government agencies and private organisations.
The bill was withdrawn, to be redrafted, following bipartisan concern from the parliamentary joint committee on intelligence and security. The committee said at the time the bill lacked appropriate protections and oversight. Privacy advocates also expressed fear that boundaries and acceptable use cases were not defined.
A recent survey showed that 61 per cent of Australians believed facial recognition could be a useful tool for public safety but many didn’t want it in public spaces or wanted the right to opt out.
Monash University researcher Robbie Fordyce said the results showed inconsistencies that suggested people didn’t comprehend all the ways facial data could be used.
“We need to have basically a moratorium on facial recognition until we have consultative legislation put in place that recognises what the public think it’s going to be used for,” Dr Fordyce said.
“People think they’re going to have a lot more say in how it happens and the degree to which they can actually choose to be subject to it.”
Dr Fordyce said that without proper regulation of where and how data could be collected and stored, and for how long, datasets would be bought and sold between government agencies and private companies, making it impossible for people to understand how their image might be used even if they had consented to it being stored. Those datasets could also be conglomerated into central files of personal information.
As with other forms of digital media with social use cases, Dr Fordyce said people were aware their data was being used in ways they didn't like or didn't understand, but they felt powerless to do anything about it.
“They might just go to the shops, and then they find out a couple of years later that their whole behaviour in that space had been tracked right down to which toilet cubicle they went into,” he said.
“Does someone want to have data of them, as a 16-year-old shoplifting a Chupa Chup, held up in perpetuity right through into old age?”
Tim is the editor of The Age and Sydney Morning Herald technology sections.