Comment: Our responsibilities in the age of AI

24/02/2023

by Brian Hills, CEO, The Data Lab

AI HAS always been a technology that divides public opinion. It is seen either as something with the power to enhance our lives or as the eventual cause of the apocalypse, one that will steal our jobs along the way.

The latter perspective was on display when media headlines declared that “AI could kill off the human race” following testimony by Oxford Professor Michael Osborne and PhD student Michael Cohen to the UK Government’s Science and Technology Committee.

This extreme view was expressed at a time when momentum was building behind ChatGPT, DALL-E, and Lensa (among others), all in a full hype cycle. But the prominence of AI in the headlines leads us to an important question – as a community that develops and uses data and AI technologies, what should our responsibility be?

More recently, a somewhat less extreme viewpoint was shared by the tech companies developing and working with AI. In this week’s Science and Technology Committee Governance of AI hearing, representatives from BT, Google, and Microsoft considered their companies’ use of AI and the responsibilities that go along with it.

They also stressed that generative AI, such as ChatGPT, is just one pillar of AI and does not represent the full breadth of the technology’s capability.

While it is very unlikely any of us will be here in 100 years to see whether Cohen’s and Osborne’s predictions come true, I believe this outcome is not inevitable. We have a responsibility (one the big tech companies have shown they are aware of) to shape the development of AI innovation, education, and policy so that it benefits our children and the generations to come. As regulation and governance around AI evolve, this is the legacy we should focus on right now. In doing so, there are three central areas of activity to consider:

1) Educating technologists and the users of AI
AI has the potential to automate tedious tasks and augment human labour – making our lives easier, more efficient, and perhaps more enjoyable. However, in the words of technology ethicist Stephanie Hare, “technology is not neutral”. Every new technology is designed to improve the human experience, but there is always the potential for unintended consequences – “when you invent the ship, you also invent the shipwreck”, as Hare goes on to say. It therefore becomes imperative that we educate technologists and users to maximise the benefits, anticipate the use cases, and minimise the harms.

The origin of AI must also be considered. At the Governance of AI hearing, representatives from the tech companies were clear about what AI they were embedding within their solutions and how. However, they noted that it is not always possible to regulate tech created in some overseas countries. As such, they called for AI to be regulated by its use cases rather than by where the tech was created.

As Hugh Milward, General Manager, Corporate, External and Legal Affairs, Microsoft UK, stressed in the hearing: “If AI is developed in a regime we don’t agree with, if we’re regulating its use, then that AI, in its use in the UK, has to abide by a set of principles – we can regulate how it’s used in the UK. It allows us then to worry less about where it’s developed and worry more about how it’s being used, irrespective of where it’s being developed.”

Having this awareness could mean the difference between AI being used for good and it being exploited further – and thus given an unnecessarily bad name. However, we also need to be mindful of how quickly technology, AI included, can evolve.

We have all seen how ChatGPT can be used to automate the creation of content. However, while its less mature predecessor, GPT-3, could formulate sentences, its output could be racist, sexist, and in some cases completely inappropriate. Through further development and training of the platform (work that was outsourced to teams in Kenya tasked with removing this toxicity), we are now seeing the real potential of chatbots. While there are undoubtedly ethical issues concerning the employment of these teams, training the platform to distinguish right from wrong reinforces how an awareness of responsibility for the tool can enhance its reputation.


2) Increasing accessibility and public understanding of data and AI
This widespread education will also undoubtedly heighten awareness of data and AI. If we continue to consider potential harms in how AI is designed and used, we can actively work to diminish the risks. However, more work is still needed to raise public awareness of AI and its uses.

As a responsible AI community, we must engage in more proactive public dialogue about the role these technologies play in our everyday lives, and ensure the public is well informed of the rights and choices we have as individuals over whether or not to participate (though in some cases, such as being captured on CCTV, it is impossible to opt out).

The Scottish AI Alliance is a great example of an organisation engaging and collaborating with academia, charities, industry, and the public – both adults and children – to encourage people to develop a better understanding of AI and make it trustworthy, ethical, and inclusive.


3) Accelerating policy development

On top of education and awareness, there are already ethical and legal frameworks in place to regulate the development and use of AI, including the EU’s General Data Protection Regulation (GDPR), which governs the use of personal data, and the forthcoming Online Safety Bill, which the UK Government says will “make the UK the safest place in the world to be online.” Because legislation has historically been slow to follow innovation, researchers, engineers, governments, organisations, and individuals must work together – nationally and internationally – to ensure AI is used ethically and its impact on society is positive. This includes legislation focused on long-term planning for AI advances, and initiatives that actively mitigate risks by promoting responsible innovation, addressing bias (which this week’s hearing acknowledged), and preventing job losses due to automation.

Ultimately, as with many innovations that came before AI, we can’t predict the future, but we can create responsible and ethical legal frameworks that protect AI’s reputation and reduce the sensational headlines. This is something the UK Government is conscious of and seeking to tackle at the source. As long as we educate and collaborate to create regulation that prevents dangerous uses of AI whilst promoting safer designs that create economic value, we can mitigate the risks. If we believe in a future where AI adds value to our economy and society, it is everyone’s responsibility to step up, engage in the debates, debunk the hype, and shape the future.
