How to craft effective AI policy

Nicol Turner Lee: So to your first question, I think you’re right that policymakers should actually define the guardrails, but I don’t think they need to do it for everything. I think we need to pick the areas that are most sensitive. The EU has called them high risk. And maybe we might take from that some models that help us think about what’s high risk, where we should spend more time, and potentially where policymakers and developers should spend time together.

I’m a huge fan of regulatory sandboxes when it comes to co-design and co-evolution of feedback. I have an article coming out in an Oxford University Press book on an incentive-based rating system that I could talk about in just a moment. But I also think, on the flip side, that all of you have to take account of your reputational risk.

As we move into a much more digitally advanced society, it is incumbent upon developers to do their due diligence too. You can’t afford, as a company, to go out and put out an algorithm or an autonomous system that you think is the best idea, and then end up on the front page of the newspaper. Because what that does is degrade your consumers’ trust in your product.

And so what I tell both sides is that I think it’s worth a conversation where we have certain guardrails when it comes to facial recognition technology, because we don’t have the technical accuracy when it applies to all populations. When it comes to disparate impact on financial products and services, there are great models that I’ve found in my work in the banking industry, where they actually have triggers because they have regulatory bodies that help them understand which proxies actually deliver disparate impact. We just saw this in the housing and appraisal market, where AI is being used to replace subjective decision-making, but is contributing more to the kind of discrimination and predatory appraisals that we see. There are certain cases where we actually need policymakers to impose guardrails, but more so, to be proactive. I tell policymakers all the time: you can’t blame data scientists if the data is horrible.

Anthony Green: Right.

Nicol Turner Lee: Put more money into R&D. Help us create better data sets, because the ones we have are overrepresented in certain areas or underrepresented when it comes to minority populations. The key thing is, it has to work together. I don’t think we’ll have a good, winning solution if policymakers lead this by themselves, or data scientists lead it by themselves in certain areas. I think you really need people working together and collaborating on what those principles are. We create these models. Computers don’t. We know what we’re doing with these models when we’re creating algorithms or autonomous systems or ad targeting. We know! We in this room cannot sit back and say we don’t understand why we use these technologies. We know, because they actually have a precedent for how they’ve been expanded in our society. But we need some accountability. And that’s really what I’m trying to get at: who’s holding us accountable for these systems that we’re creating?

It’s so interesting, Anthony. These last few weeks, as many of us have watched the conflict in Ukraine, my daughter, who’s 15, has come to me with a variety of TikToks and other things that she’s seen to sort of say, “Hey mom, did you know that this is happening?” And I’ve had to pull myself back, because I’ve gotten really involved in the conversation, not realizing that in some ways, once I go down that path with her, I’m going deeper and deeper and deeper into that well.

Anthony Green: Yeah.
