In the news
Artificial intelligence puts skilled professions at highest risk
‘Occupations in finance, medicine and legal activities, which often require many years of education, and whose core functions rely on accumulated experience to reach decisions, may suddenly find themselves at risk of automation from artificial intelligence (AI),’ the Organisation for Economic Co-operation and Development (OECD)’s 2023 employment outlook has said.
It also commented on the speed with which ChatGPT has trained itself on vast amounts of data and its ability to pass professional exams. ‘High-skilled occupations are those most likely to involve non-routine cognitive tasks, and they have therefore been the most exposed to progress in AI.’
The report went on to say that, for the time being, the impact of AI on the labour market was minimal, for a number of reasons. ‘Firms may need time to implement new technologies after adoption. Any negative employment effects of AI may therefore take time to materialise. However, the latest wave of generative AI may further expand the tasks and jobs that can be automated.’
The OECD said countries should review their product and labour markets and tax policies to make sure they did not incentivise labour-replacing technology adoption.
Read the full OECD report ‘Artificial Intelligence and the Labour Market’
Government proposes delegating AI regulation
The government has consulted on regulating and ‘[unleashing] the benefits of AI’. In its white paper, it says its proposed framework will ‘strengthen the UK’s position as a global leader in AI, harness AI’s ability to drive growth and prosperity, and increase public trust in its use and application’.
The five principles the government intends to guide and inform the responsible development and use of AI across the economy are:
- safety, security and robustness
- appropriate transparency and explainability
- fairness
- accountability and governance
- contestability and redress.
The government aims to delegate regulation of AI to existing regulators and, as part of its proposals, will strive for consistency and coordination between regulatory bodies.
Read the ‘A pro-innovation approach to AI regulation’ white paper
AI will be used in justice systems, says Master of the Rolls
Sir Geoffrey Vos has said that generative AI may be used to take some decisions in the legal system, and may be used by clients to predict the outcomes of cases. Speaking at an online law and technology conference, the Master of the Rolls added that, at first, these decisions may be very minor ones. Vos compared the professional expertise of AI to the machine diagnosis of melanomas, where systems have seen more skin cancers than any individual doctor and act as a ‘valuable adjunct to the tools available to medical professionals’.
Vos also stated that citizens’ and businesses’ confidence in the system was critical, and that there were some decisions humans were unlikely ever to accept being made by machines – for example, intensely personal decisions relating to the welfare of children.
He also commented on the recent news story of the two US lawyers who were fined for using fake court citations generated by ChatGPT in a case filing, saying: ‘One thing generative AI cannot do effectively for lawyers is to allow them to simply cut corners.’
National centre for innovation would help UK compete globally
LegalUK, the lobby group promoting English law as the law of choice worldwide, has called for a national centre to help the UK compete with other international jurisdictions in the tech space.
Professor Richard Susskind, a legal futurologist, said the UK needed to bring together the best legal and technology minds and realise the full power and potential of AI.
Susskind, a director of LegalUK, commented that the UK can no longer rely on the heritage of English law while technological change moves at pace. ‘Yesterday’s formula is not sustainable. We cannot rely on tradition to help us meet the challenges the UK justice and legal systems are facing.’
He continued: ‘We are calling for the establishment of a new National Institute for Legal Innovation to systematically bring together the best legal minds and ensure we are ahead of the game and position ourselves as global leaders.’
Read more about LegalUK’s call for a national centre for innovation
Calls for ChatGPT to go through regulatory sandbox
Ethical AI campaign group ForHumanity has challenged OpenAI to enter ChatGPT into a regulatory sandbox. It states that a regulatory sandbox ‘provides the ideal process to share governance widely and fairly’.
ForHumanity founder and executive director Ryan Carrier urged OpenAI to join regulatory sandboxes in the European Union under the legal jurisdiction of the EU Artificial Intelligence Act.
Regulatory sandboxes are a way of testing and trialling new, innovative ideas and tech in a safe and controlled way.
Read more about the calls for ChatGPT to go through a regulatory sandbox
Dr Thomas Wood, Lead Data Scientist, SRA
What tech do consumers want?
Discussions on AI, the current and future use of technology in law firms, practical advice and what consumers want were all on the agenda last month at the Solicitors Regulation Authority’s (SRA) Innovate roadshows in Bristol and London.
Industry experts from a range of law firms, tech providers and consumer groups shared their insights on the future, as well as practical tips to help firms on a day-to-day basis. A beginner’s guide to AI and ChatGPT also touched on the everyday use of AI and how large language models could be used in future legal services.
AI won’t be replacing core solicitor skills any time soon, says SRA Chief Executive
Effective oral communication, imparting difficult messages sensitively and understanding how to deal with a client in vulnerable circumstances are all skills AI will not be replacing any time soon, the Chief Executive of the SRA has said. These skills are tested in stage 2 of the Solicitors Qualifying Examination (SQE), the exam aspiring solicitors need to pass to qualify as a solicitor in England and Wales.
Paul Philip made the comments in light of the fact that ChatGPT-4 scored 50 per cent when it took stage 1 of the SQE, which meant it narrowly missed out on a pass. Earlier in the year, the large language model passed the US bar exam with a mark putting it in the top 10 per cent of candidates.
Philip went on to say that he has ‘no doubt’ AI will have huge implications for the legal sector: ‘Some tasks that have traditionally been done by lawyers or paralegals will likely be replaced by the instantaneous efficiency of tech.
‘Even if, in many instances, qualified lawyers will need to provide oversight, they will also be focusing more on those nuanced tasks where human input adds the most value.’
Philip also confirmed that there are no plans to change any of the fundamentals of the SQE.
Read the article by Paul Philip: ‘AI – a game changer for legal education?’