An Open Loop Is Critical for Innovative AI
How policy prototyping encourages responsible growth
I’m proud to officially announce that Evo is an Open Loop participating company.
What does that mean?
Over the final quarter of 2020, Evo was part of a team of leading European AI companies, brought together by Facebook, dedicated to developing a data-driven framework to manage AI risk while encouraging innovation and growth. This project allowed us to assess the risks in our own AI applications more deliberately and then work together to create and test an AI risk assessment framework.
If your eyes are glazing over just thinking about AI policy, you’re not alone. Legal governance and risk management are not the most exciting topics for most data scientists. But if we care about developing innovative AI models and better technology for the future, all data scientists need to prioritise these analyses. Responsible AI requires active engagement with risk issues by all stakeholders, especially those of us working on AI every day.
“Data scientists need to be involved in developing policies that limit AI risk while encouraging growth.”
My Open Loop experience showed me that policy prototyping is a simple yet effective way to accomplish that goal.
The need to mitigate AI risks with policy
In recent years, the risks of technology in general, and AI in particular, have moved to the forefront of many people’s minds. Headlines about AI warn of the dangers of encoding bias, reducing transparency in decision-making, and other serious societal consequences. International public perception is split on whether AI has a net positive or negative impact on the world, and people are growing more sceptical of AI companies and applications.
Understandably, these concerns have led to calls to legislate the use and development of AI. Lawmakers in many countries are stepping in to regulate AI and other emerging technologies. Here in the EU, the European Commission and the European Parliament are developing a new framework for AI governance, to be released in spring 2021.
These regulation efforts have created a lot of anxiety in the tech community. I have written about some of the shortcomings of the GDPR that frustrate businesses without accomplishing its privacy and risk management goals. It’s easy to see why the prospect of comprehensive AI regulation causes further hesitation. A good law will help improve public confidence in AI, while a flawed regulatory framework could set us back.
“AI can only innovate and grow when governed by equally innovative policies.”
Regulators have proven open to modern policies that balance risks with rewards, but they can’t know the technology as well as those of us who work in the field every day. Without our input, policies will inevitably fall short. As data scientists, we need to be more involved.
Why policy prototyping?
But how can data scientists contribute most effectively to regulation efforts? We’re not lawyers or experts in policymaking. We are, however, experts in data. That’s where policy prototyping comes in.
Policy prototyping is an empirical approach to experimental governance. In other words, AI companies test a proposed new policy in real-life conditions and provide feedback on the framework’s impact, strengths and limitations. Areas of confusion or compliance difficulty can be identified early, so these normative frameworks can be iterated on and improved. In this way, data scientists can provide evidence-based input to improve existing governance frameworks and inform law-making processes. We can run the data to determine how to optimise policies to reduce risk.
The key attraction of policy prototyping: data-driven decision-making.
We collect empirical data on what helps achieve the optimal outcome. We then iterate to improve upon the model until it reaches our goals — exactly as any data scientist would go about any other machine learning task.
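To make the analogy concrete, here is a minimal sketch of that feedback loop in Python. Everything in it is a hypothetical illustration rather than the actual Open Loop methodology: the metric names, the scoring, and the revision rule are all invented for the example.

```python
from dataclasses import dataclass, field

# Hypothetical illustration only: these metrics and revision rules are
# invented for this sketch, not taken from the Open Loop programme.

@dataclass
class PolicyDraft:
    """One versioned draft of a proposed governance framework."""
    version: int
    requirements: list[str] = field(default_factory=list)

def test_in_real_conditions(policy: PolicyDraft) -> dict[str, float]:
    """Apply the draft to live AI projects and score the outcome.

    In a real prototyping exercise this is weeks of fieldwork; here the
    scores are simple functions of the version number, purely for shape.
    All scores are on a 0-1 scale where higher is better.
    """
    return {
        "clarity": min(1.0, 0.5 + 0.10 * policy.version),
        "ease_of_compliance": min(1.0, 0.4 + 0.15 * policy.version),
        "risk_reduction": min(1.0, 0.5 + 0.08 * policy.version),
    }

def revise(policy: PolicyDraft, feedback: dict[str, float]) -> PolicyDraft:
    """Produce the next draft, targeting the weakest-scoring area."""
    weakest = min(feedback, key=feedback.get)
    return PolicyDraft(
        version=policy.version + 1,
        requirements=policy.requirements + [f"clarify guidance on {weakest}"],
    )

# Measure, revise, repeat: the same loop we use to improve a model.
policy = PolicyDraft(version=0, requirements=["document data sources"])
for _ in range(3):
    feedback = test_in_real_conditions(policy)
    print(f"v{policy.version}: {feedback}")
    policy = revise(policy, feedback)
```

The loop stops after an arbitrary three rounds here; in practice the stopping criterion would be the policy meeting its goals, just as a training loop stops at convergence.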
A data-driven approach to risk management
Facebook AI led the AI policy prototyping effort I had the opportunity to participate in, through a program called Open Loop. I’d encourage any AI company to participate in a similar effort if given the opportunity. The benefits of policy prototyping, however, aren’t limited to a large-scale experiment. We can each be leaders in risk management by using policy prototyping on a smaller scale.
As data scientists (or even better, business scientists), we know that decisions driven by accurate, real-time data give better results. We need to take five steps, sketched as a loop after this list:
- Actively investigate our technology for areas of risk
- Set transparent internal procedures and policies to mitigate any risks
- Collect data to monitor the success of our compliance framework
- Use that data to refine policies to be more effective
- Iterate to improve regularly
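Here is one way those five steps could look as a runnable loop. This is a minimal sketch under invented assumptions: the risk areas, threshold values, and refinement rule are hypothetical stand-ins, not Evo’s actual compliance framework.

```python
import random

# Hypothetical sketch of the five steps as a monitoring loop. The risk
# areas, thresholds, and refinement rule are invented for illustration.

# Step 1: actively investigate our technology for areas of risk.
RISK_AREAS = ["output_bias", "forecast_drift", "opaque_overrides"]

# Step 2: set transparent internal policies as explicit, reviewable
# thresholds (here, the maximum tolerated incident rate per area).
policy = {area: 0.10 for area in RISK_AREAS}

def measure(area: str) -> float:
    """Step 3: collect data to monitor the compliance framework.

    Stand-in for real telemetry; returns an observed incident rate.
    """
    return random.uniform(0.0, 0.2)

def refine(policy: dict[str, float], observed: dict[str, float]) -> dict[str, float]:
    """Step 4: use the data to refine policies to be more effective.

    Tighten thresholds where we comfortably comply; flag any breaches.
    """
    updated = {}
    for area, limit in policy.items():
        rate = observed[area]
        if rate > limit:
            print(f"BREACH in {area}: {rate:.2f} > {limit:.2f}, investigate")
            updated[area] = limit  # keep the bar where it is; fix the system
        else:
            # Ratchet the threshold down towards observed performance,
            # never above the old limit and never below a floor of 0.01.
            updated[area] = min(limit, max(rate * 1.5, 0.01))
    return updated

# Step 5: iterate to improve regularly.
for cycle in range(3):
    observed = {area: measure(area) for area in RISK_AREAS}
    policy = refine(policy, observed)
    print(f"cycle {cycle}: thresholds now {policy}")
```

The design point is that the thresholds themselves are data: each review cycle either tightens them or flags a breach, so the policy improves on the same cadence as the model.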
This is what I’m taking away from my experience with policy prototyping, and it’s something any data scientist can mirror. If you want more details, the Open Loop report dives deep into the principles we used to assess risk.
The best part is that this practice can make models better in practice, not just in theory. Just in the first weeks of carrying out this risk mitigation more formally, we were able to see the impact. While our AI was already lower risk thanks to Evo’s inclusion of the human element, policy prototyping helped my team discover new areas where we could include stakeholder feedback and transparency. We increased the number of human-editable parameters by 16% to give our clients more direct control and set stricter bounds on the solutions we generate. Based on simulations, this will make our algorithm even more accurate.
Why it matters for all data scientists
You may feel tempted to ignore AI legislation or write off all AI regulation as misguided attempts to hold back a misunderstood technology. After all, any data scientist is driven to innovate and improve existing processes by using data, and outdated rules can stifle that innovation.
Yet ignoring the public’s need to understand AI increases negative public perception. Choosing not to formalise and continuously improve upon risk mitigation policies hinders our growth. Ignoring the practical need for AI governance only limits our voice — and increases AI risk.
Policymakers and legislators can learn from the iterative learning process integral to AI. Data scientists can learn from the values, principles and risk mitigation frameworks so critical to effective policy. It’s time for us all to take an active role in developing public policies that minimise risk while encouraging responsible growth and innovation.
As the CEO of an AI company, working daily to optimise our algorithms and deliver more accurate and actionable recommendations, I am continuously looking for risks that could harm our model. Despite that, I still learned a lot by prioritising risk mitigation through a clearly defined framework. Outside guidance helped us gain a new perspective, but we remained in control, free to implement the policy flexibly based on our own unique risk factors. I am committed to transparency and active risk management in AI; this is one more tool I’ve discovered that helps us do so effectively.
Innovation: The future of AI
Data scientists working with AI models every day are best equipped to understand the particular concerns, strengths, and weaknesses that create (or reduce) risk. Policymakers can provide an independent outside perspective, guiding us to consider new strategies through iterative frameworks.
“We have to work together.”
As data scientists, we have a responsibility to take an active role in AI policy. Policy prototyping and active risk mitigation are an excellent place to start. An open loop of policy innovation ensures that AI can grow responsibly, fostered by regularly updated and refined policies. As we pioneer new advancements in data science, so too can we pioneer practical regulations that encourage continued growth. It’s a process that never ends, and we are a critical part of the circle.
Innovation is critical for powerful AI, but AI governance doesn’t have to hamstring our innovations. I believe that if those of us in the tech world join together with governments, civil society, and academia, our diverse perspectives can iterate towards a better policy future and ultimately the cutting-edge AI we’ve all been working towards.
Open Loop is a global program that connects policymakers and technology companies to help develop effective and evidence-based policies around AI and other emerging technologies. The initiative builds on the collaboration and contributions of a consortium composed of regulators, governments, tech businesses, academics and civil society representatives. Through experimental governance methods, Open Loop members co-create policy prototypes and test new and different approaches to laws and regulations before they are enacted, improving the quality of rulemaking processes in the field of tech policy.