Artificial intelligence promises an exciting future and tremendous growth, provided that legal professionals are able to navigate their business in this novel environment. In-house AI specialist Julien Willeme offers some step-by-step strategies to overcome the barriers
Many companies make massive investments in artificial intelligence (AI), and more and more AI products and technologies are being launched by companies that are not traditional software companies. This signals a transition where traditional engineering companies invest in software capabilities and position AI as a critical way to disrupt their markets and gain market share.
That transition does not come without challenges for legal teams. Lawyers need to keep abreast of new and fast-evolving technologies and familiarise themselves with novel technical concepts like “machine learning” or “black box AI”. The integration of AI in a company’s products also entails a new business strategy and new monetisation models, which lawyers need to understand and help shape.
Even in practice areas that lawyers have been working in for a long time, AI is a whole new ball game. Take privacy. AI training, testing and validation necessitate access to a massive amount of data. Even once an AI product is in operation, it continues to process massive amounts of data, which raises novel privacy issues.
Policymakers in Europe, Asia, the US and Canada have diverse points of view, and are trying to learn and understand the best way to regulate AI. The reality is, however, that no one has really cracked it yet.
When it comes to particular sectors, regulators are also paying close attention to AI applications and to whether the technology needs sector-specific regulation. That brings in a whole new level of complexity.
Roadmap for legal teams
At a tech talk he gave on 22 July at the Tech Law Fest in Singapore, Julien Willeme proposed a strategic roadmap to equip legal teams with the necessary skills to manoeuvre that complexity and position themselves as true AI business transformation enablers. The roadmap has three components: culture, thought leadership, and engagement.
Julien shared his experience in fostering an “AI-proof” culture in legal teams and how to leverage that culture to develop the necessary new skills and position your teams as true thought leaders in the AI space, internally within your organisation, as well as externally when they engage with industry peers, regulators and policy makers.
That thought leadership will prove invaluable, since regulators and policy makers are trying to learn the technology and are willing to engage with companies, educational institutions and a broad range of stakeholders to understand how to best tackle the novel policy questions raised by AI.
It all starts with culture. To set the right tone in your organisation as a whole and in your legal team more specifically, there are four “cultural shifts” that one needs to trigger.
The first one is what we call “from buzz to purpose”, which focuses everyone’s attention on the purpose of a project. If you really want to do good work on AI, you need to understand the vision behind the use of AI, the specific problems AI is addressing, and how AI will interact with your customers.
As a legal leader working in the AI space, you must ensure your team has a deep understanding of the above questions and is able to articulate your organisation’s particular answers to them.
We often underestimate the fact that people look at AI as a threat. They are concerned that AI will replace and overrule them, and this perception can introduce significant friction in adoption. As a lawyer, you need to be able to provide reassurance and make sure that AI is perceived as a way to augment and empower people, rather than replace them.
Second, you need to move from a siloed work approach to a multidisciplinary collaborative approach to work on an AI project. AI projects require a wide range of skill sets and perspectives to ensure that AI design and implementation closely aligns with the organisational priorities.
Lawyers need to ensure they work in sync with their research and development, regulatory, and business counterparts. As the technology is novel and evolving at a fast pace, there are a lot of things to learn and there are a lot of pieces that will keep moving during the project’s development and implementation. Continuous interdisciplinary collaboration is therefore extremely important.
Third, legal teams working on AI must adopt data-driven decision-making for their own work. Every single person that works on AI-enabled products or applications – including legal teams – needs to be empowered with data-driven decision-making in their day-to-day work.
Data-driven decision-making is key to building up the team’s own “data acumen”, which will be critical to engage effectively with internal and external stakeholders on data technologies and AI.
Finally, mindset. AI requires legal teams to move away from a rigid and risk-averse approach and adopt a more agile, experimental and adaptable model. As AI is novel, there are lots of issues that are not yet resolved, and you won’t have an answer to everything. Try to build up your level of comfort in taking calibrated risks and learning from them.
Once you have built up an “AI-proof culture”, educate your team on the novel legal issues raised by AI and embark on a journey to make them thought leaders in this fascinating space.
A good starting point is to look at the EU AI Act, which is currently going through the European legislative process. It provides a broad overview of the different questions policymakers consider when they are looking at regulating AI and how they are looking at mitigating the risks raised by certain AI applications.
The EU AI Act’s original approach is to set out legal requirements for AI systems that are proportionate to the risk that those AI systems will present to individuals and society. Those requirements will determine how the AI systems need to be trained, validated, approved and monitored throughout the entire product life cycle.
Before engaging externally with policymakers and regulators, every company should adopt a total product lifecycle approach for AI products.
Although the concept of total product lifecycle is used by the FDA as a regulatory paradigm for medical devices and software, it can be applied regardless of the industry. It covers formulation, product design, deployment and feedback, and provides a great framework for your legal team to help their organisation think through the different phases of an AI project.
Policymakers and regulators are all going through the AI learning curve and are willing to engage with industries and educational institutions around the world to understand how they look at the technology. Familiarising yourself with ongoing policy initiatives in your region and the sectors where you operate will go a long way and give you a golden opportunity to engage externally and have your legal team recognised as thought leaders in the space.
Different countries take different approaches. The EU is taking a horizontal, risk-based approach, where all AI applications fall within the scope of a single regulation. China’s approach is much more sector-specific or issue-specific.
In the US, the Biden administration is considering an AI Bill of Rights that would primarily focus on AI systems that make direct judgments affecting benefits and opportunities for individuals. This could create a right for individuals to govern their personal data, as well as a right to know what data was used to create and test an AI algorithm.
Finally, learn the technology. There are many tools available, so ensure that you speak, at least at some level, the same language as your technology teams. That is critical to building up your team’s credibility internally, as well as their confidence to engage externally.
Julien Willeme is senior legal director of data and AI ventures at American medical device company Medtronic in Singapore. He is also a board member of the Association of Corporate Counsel (ACC) Asia. The article is taken from his presentation at Singapore’s TechLaw.Fest 2022.