Remi AI Research Fellow Thom Dixon presents the first in a series of thought-provoking pieces. Today, he explores the ethical considerations around the rollout of image recognition solutions in public spaces.
Over the coming weeks I’ll be writing a number of articles in a discussion series highlighting ethical issues in artificial intelligence research. These articles are based on internal interviews and coffee conversations at Remi AI. I’m writing them to give an insight into the kinds of ethical issues that arise daily in A.I research. From the outset I’ll add the following caveat: the issues Remi AI encounters aren’t the singularity, and no one here is close to cooking up consciousness in the backyard. It will be years until such problems arise. While there is a need for conversation around those topics, there are much more pressing issues that are already impacting society, or will in the near future.
The primary goal of the In Discussion series is to combat the A.I hype cycle, which is diverting attention away from issues of real concern. This series will focus on contemporary issues that aren’t being talked about nearly as much as they should be.
Let’s begin with image recognition.
Setting aside the impressive technological advancements made in recent years, image recognition capabilities have opened up a plethora of data privacy problems. An A.I research firm focusing on image recognition has to navigate these problems each and every day. Do we do this project? Do we deliver this capability? Who are we delivering to? Just because we can, does that mean we should?
The reaction of Google’s employees to Project Maven is a perfect example. Companies operating in this area have to ensure their choices are aligned with the values of their employees. When everything on the ground moves quickly, this translates to constant conversations about what can, could, and should be done.
Image recognition can be split into two types of capability: facial recognition and person recognition. The former powers your cloud photo libraries and a decent amount of policing and security work. The latter uses clothes, arms and hand positioning to track a person through a scene or space, which is great for pedestrian and buyer behaviour analysis. With person recognition, it’s easy to get an A.I agent to “forget” you (delete the footage) a set time after you leave the scene; in most cases there’s no need to retain that data, as your aggregated behaviour is all that’s of interest. That’s not so easy with facial recognition, because it is predicated on recognising you over a potentially infinite time span. Moreover, if you leave the scene and change your sweater for something leopard print, put on gold gloves and bright green leather pants, then return ten minutes later, chances are the agent doing the analysis is none the wiser that you’re the same person.
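The forget-after-a-set-time behaviour described above amounts to a retention policy on tracking data. Here is a minimal sketch of the idea in Python; all class and method names are illustrative assumptions, not any vendor’s actual code. Raw per-person events are deleted once a retention window passes, while only aggregated behaviour remains available:

```python
import time


class PersonTracker:
    """Toy sketch of a person-recognition tracker that 'forgets' a
    track a fixed time after the person was last seen in the scene.
    All names here are hypothetical, for illustration only."""

    def __init__(self, retention_seconds=600):
        self.retention_seconds = retention_seconds
        # track_id -> {"last_seen": timestamp, "events": [raw events]}
        self.tracks = {}

    def observe(self, track_id, event, now=None):
        """Record a raw observation for a tracked person."""
        now = time.time() if now is None else now
        track = self.tracks.setdefault(track_id, {"last_seen": now, "events": []})
        track["last_seen"] = now
        track["events"].append(event)

    def aggregate(self):
        """Only aggregated behaviour (here, event counts) is of interest."""
        return {tid: len(t["events"]) for tid, t in self.tracks.items()}

    def purge(self, now=None):
        """Delete raw tracks whose retention window has expired."""
        now = time.time() if now is None else now
        expired = [tid for tid, t in self.tracks.items()
                   if now - t["last_seen"] > self.retention_seconds]
        for tid in expired:
            del self.tracks[tid]
        return expired
```

The design choice is that deletion is the default: unless a track is re-observed within the retention window, its raw footage-level events disappear, and only whatever aggregates were exported beforehand survive. Facial recognition cannot adopt this pattern, since its whole purpose is to match the same identity across unbounded spans of time.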
There have already been some highly successful applications of person recognition for behavioural analysis in the optimisation of highly trafficked spaces. This tech has saved time and increased convenience on a mass scale, encouraging greater pedestrian traffic through popular cities. In fact, it’s hard to think that in ten years’ time anyone would design a space without first modelling and analysing how people are going to interact with it, and to do that they’ll most likely use this tech. High traffic spaces that aren’t constantly assessed and redesigned based on behavioural analysis will be easily identifiable in comparison, because they will not function as well.
Two questions often underpin privacy debates: when are you allowed to be forgotten, and when should you be remembered? Perhaps more importantly, asking ‘when are you allowed to be forgotten’ assumes that you know when you’re being watched.
Which, to be quite frank, you don’t.
This is one of the key differences between the West and China right now. If you travel to China tomorrow, you should expect to be watched, and you would be naive not to. Expect your passport photo to be digitally chewed over and your face to be tracked through each and every city. That’s the state framework. For better or worse, that’s the approach China has taken to image recognition. It’s laid out, it’s known, it’s deployed. Truth is, it’s working very well; so well, in fact, that China is now talking about on-selling its tech to a variety of countries.
In the West we’re in a very different situation. We don’t know the position of the state security apparatus in relation to image recognition. More importantly, the majority of us have no idea how our corporations are using it. The regulation isn’t there, the legislative framework isn’t there, and broad-based public understanding of what image recognition is and does isn’t there either. Yet now is the time we need to think seriously about these questions. If we don’t, we risk sleepwalking into a domestic security regime comparable to China’s. This might be fine for some, but the entire structure of Western society is predicated on the fact that, at the end of the Second World War, we agreed certain levels of state intrusion into people’s privacy weren’t a good thing. It’s disappointing that 70 years later we need to have this conversation again.
When are you allowed to be forgotten, and when should you be remembered?
Opinion at Remi AI is in flux, as the answers to these questions are necessarily context-driven, but there are a few things that can be said with surety. Facial recognition is a dual-use technology in a way that person recognition is not. This should mean that when facial recognition is deployed, it is monitored by a national regulatory body, and monitored far more intrusively than the deployment of person recognition would require. Currently, neither is being monitored, and who knows which federal agency would put its hand up for the job if it were ever actually required. There’s clearly ground to make up.
Think of it this way: if a wild card political movement swept the country tomorrow, what is the breakout time for it to turn today’s state and corporate image recognition capabilities to nefarious work? With no stopgaps in place, it wouldn’t take long.
Image recognition, if deployed maliciously, is the hole that will sink the democratic ship. At Remi AI, we think it’s time liberal and democratic societies culturally engaged with this. It’s time to inform the development of this emergent capability with our own values and ideals. That means finding a balance of intrusion. Image recognition is an intrusive dual-use capability with great potential to improve our lives. That should mean that when it is deployed, it is also monitored and regulated with an equal amount of intrusion by an independent ombudsman.
The European Union has a head start on Australia and the US on this front. They’ve taken that lead with the General Data Protection Regulation (GDPR), and Australia could do worse than to start with its own version of that.
The near-term issues for A.I aren’t lethal autonomous weapons that go rogue, and they’re not superintelligent paperclip-making factories. They are data and privacy. In international affairs we slice and dice countries in all sorts of ways, and one of those is the split between rule-makers and rule-takers. When it comes to data privacy and the way your data interacts with A.I-enabled tools, we’ll all be better off as rule-makers.
Not everyone can be a rule-maker though, and first movers normally have the advantage.
More to come...
By Thom Dixon
Thom is a Research Fellow at Remi AI