Winnie Yeung of Microsoft on the 6 core ethical principles for AI solutions


Winnie Yeung, chief legal director of Microsoft Hong Kong


My name is Winnie Yeung and I am the commercial legal lead for the Greater China region at Microsoft. I lead a team covering Hong Kong, China and Taiwan. We work very closely with our clients, especially in helping customers through their digital transformation. We address many of their concerns about adopting cloud computing, such as privacy, security, data residency, and those kinds of questions.

Talking about data privacy and artificial intelligence, what are the challenges and directions of these technologies?

I think we are at a very exciting time in terms of IT advancements. In the last few years, we have seen huge progress in cloud computing, and that has really driven the advancement of AI as well, because it is only with cloud computing that you can collect and store vast amounts of data and harness the computing power to analyze it. As a result, artificial intelligence is now used very broadly: people apply it to machine learning, IoT, big data, and all kinds of solutions that AI can help them with.

While we are at this early stage of developing AI technology, it is important that we pause a little and think about the ethical issues behind it. Whenever new technology comes into place, we face new challenges around how it will affect society and the economy. So at Microsoft, we have come up with six core ethical principles that we build our AI solutions on. Very briefly, the core principles are fairness, reliability and safety, privacy and security, and inclusiveness, built upon two fundamental principles: transparency and accountability. We have spent a lot of time over the last few years developing this set of principles, and we are glad to see that recently regulators from around the world have started to pick up on this issue as well. We are happy to see that our principles are fully aligned with the regulatory approaches in Australia, the EU and Singapore.

I think for start-ups it may be challenging, because on the one hand they are struggling for survival, and on the other hand they are coming up with cutting-edge technology, and society has certain expectations of them regarding how they use that technology and how it will impact society. So it is a challenging balance to strike. But my advice to start-ups or companies focusing on AI would be this: at the end of the day, AI as a technology will not be successful if people do not trust it. And for people to trust it, they have to believe that it is fair, reliable, secure, inclusive, transparent, accountable, and so on. So I think it would be good for start-ups and companies involved in the AI field to pay attention to these issues as they advance and develop their technology.

This interview was conducted during the Association of Corporate Counsel (ACC) inaugural APAC Meeting at the Island Shangri-La, Hong Kong, on 11 April. More than 240 in-house counsel delegates attended from locations across the Asia-Pacific region, including Hong Kong, China, South Korea, Japan, Australia, Singapore, Thailand, India and the Philippines. Read our coverage here.