AI Safety Summit: Experts discuss the risks posed by AI

The first international AI Safety Summit took place this week, and experts from the University’s Digital Research Theme shared their thoughts on the potential risks posed by AI:

Professor Katie Atkinson, expert in the development of artificial intelligence for the legal sector:

With AI now deployed in multiple domains, stakeholders in these domains are seeing first hand both the transformative potential AI can have for their work and the risks it poses.

Although Lawtech applications are becoming more visible and widely used, fundamental research in AI and law has a long history, stretching back to before the 1980s, tackling challenges such as how to search legal documents to identify legally relevant concepts, how to design and implement computable contracts, and how to build decision support systems for decision making on legal cases.

Recent advances in machine learning and natural language processing have accelerated the development of applications for automating a variety of legal tasks that are able to reduce processing times of repeatable work, assist with consistency checking and free up legal professionals’ time for the most demanding aspects of their roles.

However, a major concern is the need for any AI-based decision support system to be able to explain the outputs it produces in a manner that is understandable to humans and grounded in the relevant law.

Given these challenges, a current key strand of research in AI and law is focussing on the production of ‘explainable AI’ tools that can provide legally relevant explanations, which can be inspected and verified by human legal experts.

Developing these tools will also be vital to meet regulatory requirements, both existing and emerging, which call for suitable safeguards so that Lawtech applications can be deployed in a trustworthy manner.
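
To illustrate the kind of inspectable explanation such tools aim to provide, below is a minimal, hypothetical sketch in Python of a rule-based decision aid whose output records which rules it applied. The rules and case facts are invented for illustration only and do not reflect any real legal knowledge base or the tools discussed above.

```python
# Minimal illustrative sketch: a toy rule-based decision aid that records
# which (hypothetical) rules fired, so a human expert can inspect and verify
# the reasoning behind the suggested outcome. Not a real legal domain model.

from dataclasses import dataclass, field

@dataclass
class Case:
    facts: dict                                  # e.g. {"contract_signed": True}
    explanation: list = field(default_factory=list)

# Each rule is (name, condition over the facts, conclusion it supports).
RULES = [
    ("R1: signed contract creates obligation",
     lambda f: f.get("contract_signed", False), "obligation_exists"),
    ("R2: missed payment breaches obligation",
     lambda f: not f.get("payment_made", True), "breach"),
]

def assess(case: Case) -> set:
    """Apply every rule and record the ones that fired as the explanation."""
    conclusions = set()
    for name, condition, conclusion in RULES:
        if condition(case.facts):
            conclusions.add(conclusion)
            case.explanation.append(name)        # traceable, inspectable reason
    return conclusions

case = Case(facts={"contract_signed": True, "payment_made": False})
print(assess(case))        # {'obligation_exists', 'breach'}
print(case.explanation)    # the rule names a legal expert could check
```

The point of the sketch is only that each suggested outcome carries the reasons for it, which is what makes human inspection and verification possible.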

Professor Simeon Yates, from the Department of Communication and Media, is a digital exclusion expert:

This week, AI is front and centre in civic debate. Though the potential for AI to exacerbate, entrench, or even amplify existing inequities is sometimes noted (the issue is given a few lines in the Bletchley statement), a much deeper public debate is needed.

The consequences of poor algorithms, bias in training data sets, and inappropriate use have started to be documented by both academics and civil society. In many cases, the inherent biases (often against minorities, vulnerable communities, and women) that we find in society are replicated in AI and data analytic systems because they are trained on existing biased data.

In addition, there are other ways in which the drive towards AI leads to inequities. Though AI systems are often presented as providing simpler, easier, friendlier interfaces to information and services (chatbots, ChatGPT etc.), in reality many systems require much greater digital literacy from users.

Our own research and that of Ofcom clearly indicates that many citizens have very low data and digital literacy: they lack the skills to use technology well, to identify good or truthful content, and to understand how platforms work, with higher education and greater wealth corresponding closely with higher skills.

AI and data analytics are not ‘natural phenomena’; they are neither gravity nor DNA. They are products of human ingenuity and are designed by people. Their form and uses are therefore the outcomes of human decisions, not inevitabilities. As many corporations and governments rush to embrace AI and data analytics, there also needs to be a sincere commitment by democratic governments to three things.

First, significant informed and open public debate about AI. Second, clear commitments to developing citizens’ digital literacies and skills so that they can both use AI and make personal and civic assessments of it. Third, robust research on actual or potential inequalities created by AI. These three things would go some way to informing us as citizens about what uses of AI and data analytics we want and will accept in our digital society.

Professor Xiaowei Huang heads the Trustworthy Autonomous Cyber Physical System Lab in the University’s Department of Computer Science:

While AI brings significant opportunities and benefits, its short-term and long-term safety issues are also a widespread concern.

From a computer science perspective, the key challenges are not only how to capitalise on AI’s ability to improve productivity but also how to assure that AI performs within our expectations.

For the latter, technologies have been developed for, for example, extracting and aligning with human values, identifying AI-related hazards and risks, enhancing AI with constraints (robustness, security, privacy, fairness, etc.), and verifying and assuring AI across its lifecycle.

While progress has been made in recent years, such AI assurance technologies are lagging behind, owing to the complexity of AI models and the rapid pace of AI development (e.g. foundation models). This calls for the computer science community to develop novel techniques, and it welcomes more investment into this field.
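
As a simple illustration of one of the constraints mentioned above, the sketch below probes a classifier’s robustness by checking whether its prediction stays stable under small random input perturbations. The model, threshold, and perturbation bound are placeholders invented for the example; a real assurance pipeline would rely on formal verification or systematic adversarial testing rather than random sampling.

```python
# Illustrative robustness probe (assumed setup): does a classifier keep the
# same prediction when its input is perturbed within a small L-infinity ball?
# Random sampling like this is a heuristic check only, not a formal guarantee.

import numpy as np

def predict(x: np.ndarray) -> int:
    # Placeholder "model": a fixed linear classifier, for demonstration only.
    weights = np.array([0.7, -1.2, 0.4])
    return int(x @ weights > 0)

def robustness_probe(x: np.ndarray, epsilon: float = 0.05,
                     samples: int = 1000) -> float:
    """Fraction of random perturbations with ||delta||_inf <= epsilon that
    leave the prediction unchanged; 1.0 suggests (but does not prove)
    local robustness around x."""
    rng = np.random.default_rng(0)
    base = predict(x)
    deltas = rng.uniform(-epsilon, epsilon, size=(samples, x.size))
    stable = sum(predict(x + d) == base for d in deltas)
    return stable / samples

x = np.array([0.9, 0.1, -0.3])
print(f"stability under perturbation: {robustness_probe(x):.3f}")
```

Checks of this kind are one ingredient of the broader lifecycle assurance described above; the gap is in making such guarantees scale to large, fast-evolving models.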

Dr Swati Sachan is a lecturer in Financial Technology with an interest in responsible AI in Finance:

The concept of responsible AI application in high-stakes domains such as finance, law, and healthcare is widely acknowledged. Regulatory bodies have introduced regulations to ensure transparency in AI-driven decisions, to provide clarity on the rationale behind them, and to trace accountability for adverse AI decisions. In response to these regulatory demands, AI researchers have worked diligently to create transparent AI systems.

The rise of Generative AI has introduced new challenges for banking and financial institutions. These tools, powered by Large Language Models (LLMs), have showcased a remarkable potential to solve complex problems, design financial products, and generate high-quality content such as text and images. However, the concerns extend beyond decision accountability; institutions must also confront the risk of data breaches and the potential for AI to fabricate deceptive news that can skew financial markets.

Furthermore, the introduction of Open Banking has amplified the focus on data security. It permits third-party entities, such as peer banks and emerging FinTechs, to access customers’ banking data with their consent. These developments have fuelled a competitive marketplace, propelling a race toward creating future data-centric and AI-powered financial products and services. To truly revolutionise data-centric banking, it is essential to aggregate data from varied demographics for greater inclusivity of underserved groups. Financial institutions need strategies to monitor the responsible use of AI for transparent decision-making, high data security, and financial inclusion of the underserved.

To find out more about the University’s Digital Research Theme, please visit: https://www.liverpool.ac.uk/digital