Can we really trust AI to channel the public’s voice for ministers? | Seth Lazar

Large-language models such as ChatGPT are still liable to distort the meaning of what they are summarising

Seth Lazar is a professor of philosophy at the Australian National University and a distinguished research fellow at the Oxford Institute for Ethics in AI

What is the role of AI in democracy? Is it just a volcano of deepfakes and disinformation? Or can it – as many activists and even AI labs are betting – help fix an ailing and ageing political system? The UK government, which loves to appear aligned with the bleeding edge of AI, seems to think the technology can enhance British democracy. It envisages a world where large-language models (LLMs) are condensing and analysing submissions to public consultations, preparing ministerial briefs, and perhaps even drafting legislation. Is this a valid initiative by a tech-forward administration? Or is it just a way of dressing up civil service cuts, to the detriment of democracy?

LLMs, the AI paradigm that has taken the world by storm since ChatGPT’s 2022 launch, have been explicitly trained to summarise and distil information. And they can now process hundreds, even thousands, of pages of text at a time. The UK government, meanwhile, runs about 700 public consultations a year. So one obvious use for LLMs is to help analyse and summarise the thousands of pages of submissions received in response to each. Unfortunately, while they do a great job of summarising emails or individual newspaper articles, LLMs have a way to go before they are an appropriate replacement for civil servants analysing public consultations.
