In 2015, AlphaGo, a Go-playing computer program developed by Google subsidiary DeepMind, became the first of its kind to defeat a professional human player on a full-sized board without handicap, triggering an ongoing discussion about AI development and its possible ethical ramifications. Tao Fuwu, Chief Legal Officer of Cloudwalk Group, one of the “four AI dragons in China”, shares his views on current legal and compliance requirements on AI ethics.


In its development plan on the New Generation of Artificial Intelligence, issued in 2017, the State Council highlighted the uncertainties surrounding the development of artificial intelligence (AI), and the new challenges it may bring.

A disruptive technology, AI has a wide range of influence. It is potentially capable of altering the labour structure, impacting laws and social ethics, violating personal privacy and challenging the norms of international relations, and may even have far-reaching effects on government administration, economic security, social stability and global governance. While vigorously advancing AI technologies, we must also remain vigilant against possible security challenges, strengthen forward-looking prevention and restraint guidance, minimise risks, and ensure that AI development is safe, reliable and controllable.


The Opinions on Strengthening the Ethics and Governance in Science and Technology, issued by the General Office of the State Council on 20 March 2022, is the first national guiding document for the ethical governance of science and technology. The opinions cover general requirements, ethical principles, governance systems, institutional safeguards, review and regulation, as well as education and publicity on technological ethics.


Although the document was only recently promulgated, provisions on technological ethics have already appeared in a number of Chinese laws. For example, the Law on Scientific and Technological Progress provides that the systems and standards of scientific and technological ethics should be improved, and that scientific activities must not violate technological ethics. The Data Security Law provides that data processing activities and the development of new data technologies should be conducive to promoting economic and social development and enhancing people’s well-being, while complying with social norms and ethics.

After fully considering the increasing ethical concerns about privacy, prejudice, discrimination and fairness, the Ministry of Science and Technology issued the Code of Ethics for a New Generation of Artificial Intelligence on 25 September 2021, which became the top-level design guidance for the ethical governance of AI.

The code contains general provisions and ethical norms for specific AI activities, and puts forward six fundamental ethical principles: Promoting human well-being; promoting fairness and justice; protecting personal privacy and security; ensuring reliability and controllability of AI activities; strengthening accountability; and improving ethical literacy. The code also sets out 18 ethical requirements for specific activities such as AI management, R&D, supply and application.

In addition, the National Information Security Standardisation Technical Committee, under the Standardisation Administration of China, issued the Practice Guide to Cybersecurity Standards – Guidelines on the Prevention of Ethical and Security Risks from Artificial Intelligence, providing guidelines for ethical risk prevention during AI activities such as R&D, design and manufacture, deployment and application, and user use.

Along with breakthroughs and the widespread application of big data and algorithm technology in recent years, various social issues have emerged, such as internet addiction among minors, poor accessibility for the elderly, the “strictest algorithms” of takeout platforms, and price discrimination. The Cyberspace Administration of China and three other departments jointly promulgated the Provisions on the Administration of Algorithm-generated Recommendations for Internet Information Services, which clearly stipulate that: (1) providers of algorithm-based recommendations should neither send minors any information that may cause them to imitate unsafe acts or acts contrary to social ethics, induce them to develop unhealthy habits, or otherwise affect their physical and mental health, nor use algorithm-based recommendations to induce minors into online addiction; (2) operators should fully consider the needs of the elderly for travel, medical treatment, consumption and errands, and provide intelligent, elderly-friendly services in accordance with relevant national regulations; (3) providers of workflow scheduling services should establish and refine their algorithms for platform order distribution, the composition and payment of remuneration, working hours, and rewards and penalties; and (4) algorithms should not be used to impose unreasonable differential treatment on transaction conditions, or to commit other illegal acts, based on consumer preferences, transaction habits and other characteristics.

In addition, the Provisions on the Administration of Internet Pop-up Information Push Services (Draft for Comment), issued in March 2022 by the Cyberspace Administration, also stipulate that algorithm models should not violate laws, regulations or ethics by inducing users into addiction or excessive consumption.


According to the above-mentioned State Council opinions, the state encourages scientific and technological development and innovation, and supports the unification of innovation with risk mitigation, and of institutional norms with self-discipline. It requires that governance be strengthened at the source of scientific and technological development, and that ethics be embedded in the whole process of scientific and technological activities, such as scientific research and the development of technologies.

Such activities should advance hand-in-hand with ethics in realising responsible innovation. Colleges and universities, scientific research institutes, medical and health institutes, and enterprises are expressly required to shoulder the main responsibility for managing ethics in science and technology by establishing a normalised mechanism for daily management of technological ethics, and taking the initiative to investigate, identify and promptly resolve any ethical risks in their scientific and technological activities.

Operators of scientific and technological activities in life sciences, medicine and AI should establish a science and technology ethics review committee if their research involves ethically sensitive areas.

In terms of practical experience, the author notes that enterprises such as Cloudwalk, MEGVII, DeepGlint and other recent IPO companies have already established ethical review bodies, and the China Securities Regulatory Commission (CSRC) is also highly concerned with ethical reviews of AI enterprises. Matters such as controllability of AI technology, protection of customer privacy data, and the nature of ethical protection measures taken by AI enterprises have emerged in IPO application enquiries.

Therefore, ethical review has become a key compliance component for AI enterprises. It may increase immediate compliance costs, but the long-term value of ethical compliance will easily outweigh such costs. Ethical compliance will become a selling point for products, a key investor consideration and a major cornerstone of the enterprise’s long-term development.

So how should AI enterprises approach ethical compliance and effectively carry out ethical review? There are currently no comprehensive or specific national rules or guidelines for ethical review of AI, while the above-mentioned guidelines only list the ethical security risks of AI and provide some suggestions on risk prevention.

Given the mature ethical review mechanisms in the medical field, the Measures for the Ethical Review of Biomedical Research Involving Humans, released by the National Health and Family Planning Commission in 2016, and the Guidelines for Establishing Ethical Review Committees for Clinical Research Involving Humans (2020 Edition), released by the National Health Commission in 2020, provide valuable references. Drawing on this well-established experience in the medical field, the author believes that AI enterprises should focus on the following aspects in improving their own ethical compliance.

Set up a designated ethics review body. The ethical review body should be composed of personnel with multi-disciplinary professional backgrounds, at least including experts in AI, ethics and law, while experts in other specialised fields may be engaged as independent consultants if necessary. In order to ensure the quality of ethical review, members of the ethical review body should have strong awareness of scientific research ethics and ability in ethical review. In addition, to ensure the independence, objectivity and impartiality of the ethical review process and results, the ethical review body should be independent from the technical and business departments.

Formulate and implement a sound ethical review system. The ethical review system should at least include the following aspects: the composition, responsibilities, authorities and working mechanism of the ethical review body; the scope of the ethical review and application guidelines; confidentiality measures of the ethical review; selection of independent consultants; principles, requirements and standards of the ethical review; training and continued education; and the archiving of ethical review documents.

Considering the global development of AI and the needs of overseas business, AI ethical review should not be limited to domestic laws and regulations. In November 2021, the UN Educational, Scientific and Cultural Organisation (UNESCO) adopted the Recommendation on the Ethics of Artificial Intelligence, the first global standard-setting instrument on AI ethics, addressed to its 193 member states. Attention should also be paid to Western ethical governance regulations, such as those of the US and Europe, as well as other intergovernmental and non-governmental multilateral consensus on the ethical governance of AI, especially rules concerning life and health, privacy, human dignity, discrimination, algorithmic black boxes, information cocoons, and large-scale surveillance risks.

Update ethical review standards and requirements regularly. Considering the rapid development of AI, and of legislation in this field, the ethical review body should promptly update the scope, principles and requirements of ethical review, preferably through a periodic update mechanism, so as to ensure the legitimacy and compliance of ethical review results, reduce compliance costs to the greatest possible extent, and improve the economic benefits of the enterprise.

Establish an effective follow-up mechanism for ethical monitoring. AI enterprises should establish an ethics tracking and review mechanism covering a product’s entire lifecycle, including research and development, design and manufacturing, deployment and application, and user use, so as to regulate ethical risks at each of these stages. Compliance can be expensive for AI enterprises, but the ethical review experience accumulated in implementing the follow-up mechanism can be applied to future product development and design, mitigating any avoidable impact of compliance on the enterprise’s interests.