By Emma Pascoe-Watson, Associate Director
2023 was the year AI truly started to capture headlines. While many welcomed the ‘new-found’ technology, its growing application across society and business raised questions and concerns around accuracy, regulation, data privacy and the prospect of job displacement.
This year will likely see the revolution continue apace, bringing with it more headlines and a sustained discourse around safety and security. AI will become intrinsic to the work we do as comms professionals, from content creation to due diligence, but what should we be on the lookout for?
A central concern is AI’s susceptibility to bias. AI models are typically trained on datasets created by humans, who, by their nature, hold their own biases. As a result, AI models have been found to generate troubling outputs and cannot be trusted to be neutral, even if perceived to be. This problem is known as algorithmic bias, and the concept is not new: issues have surfaced for years in healthcare algorithms, hiring tools and online advertising.
The Cambridge Analytica scandal, first reported back in 2015, introduced us to the concept of deliberate bias, in which data and information are manipulated to produce an undemocratic outcome.
Shocking, too, was the recent deepfake audio clip of Labour leader Keir Starmer that surfaced during the Labour Party Conference. Such fabrications are likely to become more common and more convincing. In a year when seven of the ten most populous nations in the world will head to the polls, the issue is certainly pressing.
In a similar vein, the threat of algorithmic radicalisation looms: the tendency of recommendation algorithms on social media sites such as TikTok, X and Facebook to drive users towards progressively more extreme content over time, leading them to develop politically radicalised or extremist views.
Indeed, Elon Musk recently commented that his “aspiration for the X platform is that it is the best source of truth”, whilst allowing extremists previously banned for hate speech and harassment back onto the platform.
In an increasingly polarised world, the intersection of AI’s rise and its inherent biases with people’s trust in social media as a source of information is likely to be top of the agenda for regulators and policymakers tasked with improving internet safety.
So, whilst AI continues to improve, filter into our everyday lives and feed its way into our working practices, it should not be treated as a single source of truth. Data integrity is of utmost importance, as are good advisers who can use and program AI responsibly. Keeping an open mind, harnessing the benefits of AI and understanding its pitfalls as the technology develops will be key to staying ahead of the curve in 2024.
For advice on how to manage AI in your digital strategy, get in touch: digital@cardewgroup.com