Tech Titans agree to Biden’s AI rules: Will the UK follow suit?
Several major technology companies – including Microsoft, Meta, Amazon, Google, and OpenAI – have agreed to comply with a set of guidelines proposed by President Joe Biden’s administration, following concerns about the technology’s potential risks. Amid this announcement, a survey from the Prospect trade union found that 60% of respondents want the UK government to regulate AI in the workplace. In light of this, Claire Trachet, industry expert and CEO of business advisory firm Trachet, highlights the need for the UK to deliver real action and regulation amid consumer concerns while continuing to drive innovation in the tech sector.
The regulation proposed by the Biden administration aims to ensure these products are safe before they reach the market and become available to the public. This includes security testing conducted by independent experts and third-party oversight of commercial AI systems. Following this development, questions arise about how the UK government will respond. According to research from the Ada Lovelace Institute, there appear to be ‘significant gaps’ in the UK’s AI safety plan, leaving the majority of the responsibility with regulators. According to Trachet, whilst it is pleasing to see the UK capitalising on the economic benefits of AI – with research from Earlybird revealing that Britain houses the largest number of AI startups in Europe, at around 334 – it is also important not to lose sight of taking precautionary action against the potential risks posed by AI.
Despite AI serving as a cause for optimism for the UK tech sector – having contributed roughly £3.7bn in value to the UK economy and attracted almost £19bn in private investment through 2022 – the report raises concerns that light-touch regulation will not be able to combat growing harms, and calls for an ‘expansive’ definition to be given to AI. According to a Forbes Advisor survey, 76% of UK consumers are concerned about the misinformation that comes with AI technology, suggesting caution amongst consumers around how AI operates and its potential. In this sense, a similar strategy to the US could be beneficial, as a lack of regulation could lead to a host of problems arising in the future.
Claire Trachet, tech industry expert and CEO of business advisory firm Trachet, comments on how real action and regulation are needed to ensure businesses do not lose control and put consumer safety at risk:
“The proposed regulation we are seeing in the US, and the efforts being made by tech giants like Microsoft and Google, are definitely a step in the right direction. Whilst AI continues to bring about a wave of excitement and acts as a critical component in driving the global economy, this development in the US provides a good example for the UK to consider following.
“The UK government needs to ensure there is clear regulation in place that helps mitigate any harm that comes from AI technology. This means balancing stimulating innovation with protecting the interests of consumers and businesses, which can be done by investing in safeguarding and real reform. While we have some form of risk management and various reports coming out now, none of them amounts to a truly coordinated approach.
“As the AI space is so fast-paced, establishing effective regulation can be difficult. One thing that can be done is to place a clear legal responsibility on the boards and CEOs of these AI companies, so that they prioritise safeguarding around their products at all times.”