The eruption of generative AI is provoking a tsunami of shock waves

Early adopters of AI technology are creating documents, images, music and videos at an astounding pace, with increasingly high quality, often indistinguishable from human productions. The inevitable flipside to this is consternation among creatives who fear being displaced in the marketplace. To add insult to injury, AI is being trained on creative works produced by these same people.

There are certainly deep issues that need to be addressed by regulation and industry practice: copyright and piracy concerns, threats to livelihoods, and the erosion of rigour and work ethic among students and trainees as traditional assessment and examination methods are rapidly rendered obsolete.

Ban Jiun Ean
Chief Executive
Maxwell Chambers
Singapore
Email: info@maxwellchambers.com

Yet every new technology has brought new challenges and opportunities with it. It has always been up to society to adapt, adjust and evolve; to embrace and properly wield these new technologies.

The invention of the mechanised loom caused upheaval during the Industrial Revolution as textile workers feared for their jobs. Telephone operators disappeared as exchanges became automated. Typing pools vanished with the word processor. Granted, all of these changes unfolded over longer periods, giving people the opportunity to reskill and upgrade as old jobs disappeared.

But the pace of AI innovation is far too fast even for governments to regulate, let alone for individuals to learn new skills and pivot away from their former jobs, so the discomfort is not unfounded.

These issues pale, however, in comparison to the biggest challenge that AI brings – that of trust.

It is one thing to have a machine do something faster and better than you; a machine carry heavier loads or go to places too dangerous for a person; or a machine do the work of 100 men. In all these instances, the machine is simply multiplying the efforts of the human decision-maker behind it.

It is quite another to live our lives being told by a machine what to do. With AI, the threat is that the human being is eliminated entirely from the decision-making loop.

To some extent, we are already living such a reality. AI algorithms already decide, autonomously, which passenger to direct a driver to pick up. They perform facial recognition checks, without human intervention, to determine the identities of individuals. They study your viewing or shopping habits and recommend other shows or products you might be interested in, out of the thousands or millions of choices available. They decide whose tweets or threads appear in your social media feed, and whose do not.

Whether we like it or not, our lives are already being dictated by AI, to some degree.

When it concerns more mundane matters like food, shopping and movies, or even transportation, we care little about what is guiding us, as long as it delivers efficiency and effectiveness at the lowest possible cost. But this makes us complacent and desensitised to the fact that AI is creeping into areas where we ought to care a great deal more about how decisions are made, and by whom.

Can we trust a recipe that ChatGPT generates? Would we eat what was made from it? Can we trust “facts” that an AI compiled, if sources and context were not provided? Would we act on “medical advice” gleaned from an AI-powered app? And, increasingly, can we believe what our eyes see and our ears hear, if the sights and sounds are digital in nature?

Deepfakes were already a problem long before generative AI took off – they will now be a nightmare for celebrities and ordinary people alike.

Trust is increasingly the new currency. Trust is what put Google at the top of the search engine pile – people trust its search results more than anyone else’s. But would you trust something if you don’t understand how it works, have no idea where its information comes from, and don’t know who designed it? This is the fundamental problem with using AI for important decisions.

An extreme but pertinent example is lethal autonomous weapons. It’s no secret that militaries worldwide are in an arms race to develop AI, and it’s widely believed in these circles that whoever wins that race will dominate. As such, governments are deploying vast efforts to build weapons powered and controlled by AI, and to use AI tools in the cyberwarfare space.

One supposed red line, espoused by theorists and critics, is that the decision to kill must never be handed over to AI. The underlying rationale is that a human will still be bound, to some degree, by ethics, morality, duty, honour, loyalty, kinship, or even plain self-interest. The fear is that a weaponised AI has no such guardrails or considerations, and will reduce the decision to kill to a simple calculus of numbers and probabilities.

Hence, it is argued that the final decision to kill must always remain in the hands of a human.

Almost everyone agrees that an AI should not be allowed to decide who lives and who dies. But we disagree about the extent to which AI may assist in the conduct of warfare. At what point will we have, in effect, surrendered the most crucial decisions to AI?

Currently, many armies use AI to process and analyse the torrent of information available on a battlefield – collected from a host of sensors and combined with massive amounts of other data – to help identify potential threats and enemies. The decision to fire then lies, in theory, with a human operator. But which operator, advised by an AI’s analysis of battlefield intelligence, would refuse to fire when the AI reports a target as hostile with 99% certainty – or have the confidence to fire when the AI reports the target as benign, again with high confidence?

In effect, the decision to fire or not has already been delegated to the AI and its algorithms. Except in marginal cases, the human operator is merely rubber-stamping.
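
To make the dynamic concrete, here is a deliberately simplified sketch in Python. Every name, threshold and behaviour in it is a hypothetical illustration, not drawn from any real military system: once the model’s reported confidence crosses a deferral threshold, the “human decision” reduces to echoing the machine’s recommendation.

```python
# Illustrative sketch only: hypothetical names and thresholds,
# not modelled on any real targeting system.
from dataclasses import dataclass

@dataclass
class Assessment:
    target_id: str
    hostile: bool      # the AI's classification of the target
    confidence: float  # the AI's reported certainty, 0.0-1.0

def ask_human(assessment: Assessment) -> str:
    # Placeholder for the rare case where the operator genuinely decides.
    print(f"Review target {assessment.target_id} "
          f"(confidence {assessment.confidence:.0%})")
    return "hold"

def operator_decision(assessment: Assessment,
                      defer_threshold: float = 0.95) -> str:
    """Model of the operator behaviour described above."""
    if assessment.confidence >= defer_threshold:
        # High-confidence calls are rubber-stamped, not reviewed.
        return "fire" if assessment.hostile else "hold"
    # Only marginal cases receive real human judgment.
    return ask_human(assessment)

# The 99%-certainty case from the text never reaches a human.
print(operator_decision(Assessment("T-042", hostile=True, confidence=0.99)))
```

However high the deferral threshold is set, as long as the model reports most targets with near-certain confidence, the human hand on the trigger is deciding in name only.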

What does all of this have to do with the law? These exact concerns have already arisen over the use of AI in judicial and law enforcement matters.

In the UK, a system called OASys (the Offender Assessment System) has been in use for two decades to assess the risk of a convict reoffending on release, informing decisions about parole. In the US, COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) serves a similar function.

Drawing on many data points, these systems purport to predict the likelihood of recidivism, which then informs a judge’s decision to grant or deny parole. The systems have both supporters and critics: some laud their efficiency and speed, while others point to alleged biases and discriminatory outcomes stemming from the training data, which cannot be examined by the public due to its sensitive nature.

While not necessarily matters of life and death (yet), these systems still decide the fate of individuals – further incarceration or freedom. Yet it is unclear how or why a given decision was made, since the algorithms are far too complex for outsiders to understand, especially with the advent of machine learning, where models train themselves at a pace unmatched in history. Accusations that minorities and people from certain socio-economic backgrounds are unfairly profiled cannot easily be addressed or dismissed as long as the algorithms and training data sets remain inaccessible to outsiders.
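
To see why that opacity matters, here is a deliberately crude sketch in Python. The features, weights and cutoff are invented for illustration; the real OASys and COMPAS models are not public. It shows how a single, seemingly neutral feature can act as a proxy for class or race and flip the outcome for otherwise identical offenders.

```python
# Hypothetical toy model, not the actual OASys or COMPAS algorithms.
WEIGHTS = {
    "prior_convictions": 0.40,
    "age_at_release":   -0.02,
    "postcode_band":     0.25,  # a 'neutral' feature that can proxy for
}                               # race or socio-economic class

def risk_score(offender: dict) -> float:
    # A simple linear score over the offender's features.
    return sum(WEIGHTS[k] * offender[k] for k in WEIGHTS)

def parole_recommendation(offender: dict, cutoff: float = 1.0) -> str:
    return "deny" if risk_score(offender) > cutoff else "grant"

# Two offenders identical in every respect except where they live:
a = {"prior_convictions": 2, "age_at_release": 30, "postcode_band": 1}
b = {"prior_convictions": 2, "age_at_release": 30, "postcode_band": 5}
print(parole_recommendation(a), parole_recommendation(b))  # -> grant deny
```

With the weights hidden, outsiders would see only two different scores; whether the postcode factor amounts to discrimination could be neither proven nor dismissed.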

These concerns heavily inform our approach to technology adoption at Maxwell Chambers. On the one hand, we are bullish about using mixed reality and online meetings for dispute resolution. On the other, we are carefully studying AI products to see which are suitable for us, with sufficient safety nets in place to ensure they do not create unwanted outcomes.

Almost certainly, AI tools will grow more user-friendly and sophisticated. There will come a day, sooner than most expect, when their adoption will be widespread and fundamental to business. Until then, we are taking small steps and watching this space closely.

Likewise, Singapore is heavily invested in AI, while trying to strike a balance between protecting consumers and leaving room for innovation and growth.

Beyond just turbo-charging document drafting and legal research, now that the power and perils of AI are finally evident, perhaps it is time to reconsider what we are prepared to surrender to the algorithms, and what we are not.


MAXWELL CHAMBERS
32 Maxwell Road #03-01
Singapore 069115
Tel: +65 6595 9010
Email: info@maxwellchambers.com
www.maxwellchambers.com
