Government – and society – must be ready to adapt to Artificial Intelligence

Rishi Sunak’s warning that artificial intelligence (AI) brings new dangers to society that need to be addressed “head on” has made headlines, [1] but the government has been alive to the development of AI for some time. It issued guidance on its use in the public sector in mid-2019, [2] and this year published a white paper setting out a strategy to exploit the opportunities presented by general purpose AI – such as ChatGPT – in both the private and public sectors, while regulating its use on a sector-specific basis (for example, AI-generated online content will be regulated by Ofcom). [3]

[1] Sunak R, ‘Prime Minister’s speech on AI’, Gov.uk, 26 October 2023, retrieved 26 October 2023, www.gov.uk/government/speeches/prime-ministers-speech-on-ai-26-october-2023
[2] Central Digital and Data Office and Office for Artificial Intelligence, A guide to using artificial intelligence in the public sector, Gov.uk, last updated 18 October 2019, retrieved 23 October 2023, www.gov.uk/government/collections/a-guide-to-using-artificial-intelligence-in-the-public-sector
[3] Department for Science, Innovation and Technology, A pro-innovation approach to AI regulation, CP 815, The Stationery Office, 2023, www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach

Sunak’s speech was a scene-setter ahead of the government’s hosting of a Global Summit on AI Safety in November. But despite its international proactivity on AI’s potential future risks, the UK’s enthusiasm to exploit the opportunities presented by AI means that, for now, in Sunak’s words, ‘the UK’s answer is not to rush to regulate’. Maintaining this light-touch domestic approach [4] could leave the UK with little influence over globally coordinated regulation if the US and EU collaborate on prescriptive new rules. [5]

[4] Dickson A, ‘UK goes light-touch on AI as Elon Musk sounds the alarm’, Politico, 29 March 2023, retrieved 26 October 2023, www.politico.eu/article/uk-go-light-touch-ai-artificial-intelligence-elon-musk-alarm/
[5] Chalmers A and Benaich N, ‘Staying the course – reflections ahead of the UK’s AI safety Summit’, blog, Air Street Capital, 17 August 2023, retrieved 23 October 2023, www.airstreet.com/blog/uk-ai-safety-summit

With open source models – which anyone can test and adapt – proliferating, fostering innovation in the UK while regulating AI’s development effectively at a global level will be a delicate balance. Sunak’s speech suggests he is alive to the problem, but the solution will be more complex.

The sheer difficulty of regulating AI’s development should lead to realism  

Recognising how difficult AI development already is to regulate – and how much harder it may become – leads to two important conclusions.

First, over the long term there is little to be gained from domestic regulation of AI’s development – as distinct from its application – that is more stringent than international standards. The UK should aim for a regulatory framework that prioritises safety and ethical standards in the application of AI, but should be cautious about limiting the domestic development of the technology in a way that could cause developers either to move elsewhere or to fall behind competitors in other regimes. The possible exception, which is far from certain to be required, would be if existential threats from artificial general intelligence came closer to materialising, which – in some scenarios – could require much more costly precautionary and protectionist approaches. Defining the boundary between this and general purpose AI will be key.

Second, the UK should place much greater emphasis on preparing for the change that will come. It can go some way towards doing so by regulating the application and use of AI as effectively as possible. But, if trying to control technological development at home can only buy time before it occurs elsewhere instead, this time is only worth buying if it is used well, either to build a more effective international regulatory consensus or to prepare the UK’s own population for the reality of the future. 

Government should devote much greater attention to preparing for AI’s impact 

While exploiting opportunities can engage the private sector, and effective regulation requires international cooperation, the challenge of preparing a population for change falls squarely to domestic governments. Optimism about the opportunities for innovation should not lead the UK government to overlook the social impact AI is already having on its citizens, and the need to prepare for greater potential change to come. [6]

[6] House of Commons Science, Innovation and Technology Committee, The governance of artificial intelligence: interim report, Ninth Report of Session 2022-23 (HC 1769), The Stationery Office, 2023, https://publications.parliament.uk/pa/cm5803/cmselect/cmsctech/1769/report.html

General purpose AI will pose threats to jobs, national security and public discourse, and there are plenty of reasons to doubt whether regulation can keep pace with it. Resilience and readiness must also be key planks of the government’s response, in at least five areas: 

  • Resilience against misinformation. As AI comes closer to resembling human interaction, it is unclear whether even digital tools will be able to detect increasingly sophisticated deepfakes, or whether regulation can keep such developments in check. Society may become increasingly reliant on large technology companies, rather than government, to verify information (see the provenance-checking sketch after this list). Education in evaluating artificially generated content will be required across all age groups. [7] [8] The ways in which citizens are educated to place trust – or to refuse it – will have important implications for both the quality of political discourse and individuals’ exposure to fraud, and will be a shared responsibility across government and the private sector.

    [7] House of Lords Select Committee on Artificial Intelligence, AI in the UK: Ready, Willing and Able?, Report of Session 2017-19 (HL 100), The Stationery Office, 2018, https://publications.parliament.uk/pa/ld201719/ldselect/ldai/100/100.pdf, p. 133.
    [8] Department for Education, Generative artificial intelligence in education, Gov.uk, March 2023, https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1146540/Generative_artificial_intelligence_in_educati…

  • Employment. General purpose AI may offer long term economic opportunities, but the transition will be disruptive – and it could particularly affect the service and knowledge economy in which the UK has specialised. [9] The higher-level critical skills that citizens will need to navigate the new environment will also become more important in the job market, while a less differentiated ability to communicate – particularly in text – will become replaceable. Workforces in some areas (for example, where call centres are concentrated) may need both welfare support and retraining, as the UK learnt in previous transitions away from mining and heavy manufacturing.

    [9] Shepard M, Technology and the future of the government workforce: How new and emerging technology will change the nature of work in government, Institute for Government, 2020, www.instituteforgovernment.org.uk/publication/report/technology-and-future-government-workforce

  • Economic reconfiguration. Government should be prepared for radical and rapid changes in the structure of the economy. There may – for example – be consolidation in many markets as consumers rely more heavily on high profile brands they trust, which could require a different approach to competition regulation.  

  • Maintaining vital infrastructure. General purpose AI will pose increasing threats to our financial, defence and energy systems, and is recognised as a chronic risk in the National Risk Register. [10] Government should consider the extent to which it needs to insulate these systems – which may sit in the private as well as the public sector – from general purpose AI, perhaps through stronger cyber-security requirements (or even strategic partnerships for defence) rather than relying solely on a regulatory approach.

    [10] Cabinet Office, National Risk Register 2023, Gov.uk, August 2023, www.gov.uk/government/publications/national-risk-register-2023, p. 17.

  • Rights. Rights to intellectual property and privacy may need to be defended or overhauled in light of models that generate content by learning from large datasets. Both the rights and the responsibilities associated with innovations derived from general purpose AI will need to be apportioned. Decisions affecting individuals are already being shaped – in the public and private sectors – by tools that may embed unidentified bias, [11] and those individuals will need redress procedures that do not presume that automated decisions are correct (a simple disparity check of the kind sketched after this list would be one starting point for such audits).

    [11] Stacey K, ‘UK risks scandal over “bias” in AI tools in use across public sector’, Guardian, 23 October 2023, retrieved 24 October 2023, www.theguardian.com/technology/2023/oct/23/uk-risks-scandal-over-bias-in-ai-tools-in-use-across-public-sector
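
How far verification might depend on publishers and platforms rather than on inspection by eye can be made concrete with a minimal sketch. The Python below assumes, purely hypothetically, that a publisher distributes a SHA-256 digest of an original media file through a trusted channel; it is not a description of any real platform’s verification system, and the file and script names are invented for illustration.

```python
import hashlib
import sys


def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def matches_published_digest(path: str, published_digest: str) -> bool:
    """True if the local copy matches the digest the publisher released."""
    return sha256_of_file(path) == published_digest.strip().lower()


if __name__ == "__main__":
    # Hypothetical usage: python verify_media.py clip.mp4 <published-sha256>
    media_path, published = sys.argv[1], sys.argv[2]
    print("verified" if matches_published_digest(media_path, published) else "NOT verified")
```

The point of the sketch is the dependency it exposes: the check is only as trustworthy as the channel that published the digest, which is why verification tends to concentrate with large platforms and publishers rather than with individual readers.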
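
The kind of audit that could underpin redress procedures can be sketched in the same spirit. The example below, again with invented names (a hypothetical decisions.csv log with group and approved columns), compares approval rates across groups and flags any group falling below four-fifths of the highest rate, a common rule of thumb in fairness audits; it is an illustration of the idea, not a complete methodology.

```python
import csv
from collections import defaultdict


def approval_rates(rows, group_field="group", outcome_field="approved"):
    """Share of positive outcomes per group in a log of automated decisions."""
    totals, positives = defaultdict(int), defaultdict(int)
    for row in rows:
        group = row[group_field]
        totals[group] += 1
        if row[outcome_field].strip().lower() in ("1", "true", "yes"):
            positives[group] += 1
    return {group: positives[group] / totals[group] for group in totals}


def below_four_fifths(rates, threshold=0.8):
    """Groups whose approval rate is below `threshold` times the highest rate."""
    best = max(rates.values())
    return {group: rate for group, rate in rates.items() if rate < threshold * best}


if __name__ == "__main__":
    # decisions.csv is a hypothetical log with columns: group, approved
    with open("decisions.csv", newline="") as f:
        rates = approval_rates(csv.DictReader(f))
    print("Approval rates by group:", rates)
    print("Below four-fifths of the best-served group:", below_four_fifths(rates))
```

A check like this does not prove bias on its own, but it gives an individual challenging an automated decision, and the body reviewing it, something concrete to start from.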

Sunak’s speech clearly recognises that AI is here to stay – and he is right to seek out the potential benefits. Government cannot halt development of the technology. But it can proactively shape the social change it will bring, and mitigate some of the transitional frictions that will arise, if it realistically anticipates where it does and does not have real agency. 
