
ChatGPT Has Woken Up The House Of Commons. A Conversation With Tim Clement-Jones

Lord Tim Clement-Jones

Some people have biographical summaries which wear you out just by reading them. Lord Clement-Jones is one of those people. He has been a very successful lawyer, holding senior positions at ITV and Kingfisher among others, and later becoming London Managing Partner of law firm DLA Piper. He is better known as a senior politician, becoming a life peer in 1998. He has been the Liberal Democrats’ spokesman on a wide range of issues, and he joined the London Futurists Podcast to discuss how policy makers globally should think about AI.

ChatGPT and MPs

Lord Clement-Jones set up the All-Party Parliamentary Group (APPG) on AI in 2016, and he notes that from then until the release of ChatGPT, the level of interest in AI among parliamentarians was limited. The number who had any real understanding of the technology and its significance was probably fewer than ten. Whatever you think of the claims made for ChatGPT, it has done at least one thing really well: it has made huge numbers of people aware of the power of modern AI. This has to be a good thing.

APPG meetings used to attract a few Lords occasionally, and an MP only rarely. Now that MPs have to answer questions from their constituents about AI, the most recent APPG meeting drew more MPs than ever before, indeed more MPs than Lords. Sadly, very few British MPs have a technical background, so they find the APPG a very useful way to learn about AI.

Even AI experts have been surprised at how powerful ChatGPT is. It is far from human-level, but few people expected that machines would be producing plausible-sounding text this soon, as well as realistic and interesting images.

In defence of AI regulation

Lord Clement-Jones has been an advocate of AI regulation for some years, and he thinks that if the EU’s primary role in the AI industry is to lead its regulation, that is a worthwhile contribution. He is broadly pleased with the EU’s AI Act because it adopts a risk-based approach: the riskier the application, the more diligence the organisation implementing it must demonstrate. The UK government is due to publish its intentions with regard to AI regulation in April, and although there is little regulatory action in the US at a national level, the individual states are busy, especially California.

It is important for these regulatory regimes to harmonise, and that is one of the goals of the Global Partnership on AI, a multinational initiative launched by 15 countries in June 2020, which now has 29 members, although not China. Of course, the technology increasingly develops faster than regulators can keep pace with it, but Lord Clement-Jones thinks they can move quickly enough to prevent or mitigate the most serious harms.

It isn’t always obvious whether a particular application of AI is risky. A lot of people bridle at the use of face recognition in public places, but what if it helps to locate a lost child, or a known terrorist? On the other hand, it is probably less acceptable to use face recognition technology to distinguish between students who are entitled to free school meals and those who are not. All around the UK, CCTV cameras are being equipped with face recognition software, and it is far from clear that the authorities which operate them have thought properly about whether it is appropriate and responsible to do so.

AI in Europe and the UK

The APPG’s reports acknowledge that advanced AI is currently a duopoly between the USA and China, and the UK is, at best, a very distant third. Lord Clement-Jones thinks there is too much rivalry between the major EU countries, especially their academic institutions, for the EU to create any rivals to Google or Baidu. He also accepts that the UK cannot hope to compete with the big two overall, but he argues that it can and should focus on some areas of excellence, such as cyber-security. The UK’s world-class universities produce a steady stream of AI talent, and the UK is probably still ahead of any of the EU countries in terms of the number of unicorns established (private companies valued above $1bn).

Almost no-one who is both sane and honest now denies that Brexit is harming the UK’s economy, but even though a clear majority of British voters now think it was a mistake, both the main political parties maintain an “omerta” on the subject, an unspoken code of silence. With regard to AI, Brexit is hurting the UK’s ability to play a major role, not least because its universities are now excluded from the Horizon programme, which was previously an important source of funding for their research and development work. UK universities are also being damaged by the newly-raised barriers to EU talent moving to the UK, and by the reluctance of that talent to move when they are made to feel less welcome.

Beyond the US, China, and Europe

Of course, AI policy does not only affect the US, China, and Europe, and it will not only be developed in those regions either. One of Lord Clement-Jones’ many past roles was membership of the Saudi–Britain Joint Business Council. Saudi Arabia is keen to become one of the leading countries in AI, by building up a cadre of data scientists and AI experts, and placing AI at the heart of their change programme, Vision 2030. He thinks they may struggle to achieve that, but by being an enthusiastic and extraordinarily wealthy client of AI firms, they can influence the way the technology develops.

Laws about LAWS

The House of Lords launches a limited number of special enquiries each year, and after considerable effort, Lord Clement-Jones was able to arrange for one of this year’s to address lethal autonomous weapon systems (LAWS). It is due to report by the end of this year. Devising a way to regulate the deployment of LAWS is an extremely challenging task, since once the AI is good enough, a military force which deploys it will beat one that doesn’t, every time. The enquiry will bring some very bright minds to bear on the subject.

AGI in 15 years?

The House of Lords is traditionally regarded as a sedate, cautious, slow-moving institution, so it is remarkable to learn that Lord Clement-Jones thinks it may not be much longer than 10 to 15 years before a collection of different AI systems can be combined to out-perform the human mind. If he is right – and it seems that a growing number of well-informed people agree – then we have very little time left to prepare for what will surely be the biggest revolution that humanity has ever faced.
