Dr Louise Dennis is a Computer Scientist who works in the University’s Centre for Autonomous Systems Technology.
“Mark Zuckerberg has announced on Facebook that, as his personal challenge for 2016, he plans to build a simple Artificial Intelligence to run his home. He hopes it will be like the computer butler, Jarvis, that appears in the Iron Man films and comics.
On the face of it, it seems ludicrous that one man, in the space of one year, could produce an artificial intelligence while at the same time running arguably the most successful social media company on the planet.
It seems even more so when we consider that many scientists have been working towards this goal for over fifty years. However, it is a sign both of how far that research has progressed and of how our understanding of what we might want from an Artificial Intelligence has changed.
Many aspects of intelligent behaviour are now achievable with computers. Zuckerberg talks specifically about having voice recognition software to obey his commands, and computer vision software to recognise friends and automatically open the front door for them.
Both of these are achievable with current technology, particularly if you are prepared to tolerate some inaccuracy. People are already using voice recognition on their phones and facial recognition to identify people for tagging in photos on Facebook.
Clearly the AI butler he envisages is not going to be the reproduction of human-like intelligence but a combination of systems which he hopes will allow his home to run more smoothly and easily.
And this is technology that is being developed in many universities and companies, not just to provide playthings for the super-wealthy but also in the hopes that it can aid everyday people, particularly the elderly and infirm.
Zuckerberg talks, for instance, of building a system that can monitor his baby daughter when he and his wife are not in her room, to alert them if anything seems to be wrong. Similar technology would be invaluable for the thousands of elderly people in the UK who suffer from falls every year, some of whom have to wait extended periods for family or friends to discover them.
Research into homecare assistants is a very active area, and the Centre for Autonomous Systems Technology at Liverpool, together with researchers at the University of Hertfordshire and the Bristol Robotics Laboratory, is engaged in active research on their safety and trustworthiness.
Cursory thought shows that there are a number of complicated issues that need to be addressed before many people would be happy to have these systems in their homes.
Any system that is monitoring you for activity levels, in order to detect if you have fallen or are in distress, will also be tracking where you are and what you are doing.
Therefore such a system will need to weigh carefully the risk that you are in some kind of trouble against the risk that your privacy will be violated.
Such decisions, moreover, are likely to be highly personal: some people will want the system to contact someone quickly even if it is a false alarm, while others will wish to preserve their privacy and independence for as long as possible, with help summoned only if the system is convinced there is a serious problem.
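The trade-off described above can be sketched very simply: an alert fires only when the system's estimated probability of distress exceeds a threshold set by the user's own preference. All names and numbers below are hypothetical illustrations, not part of any deployed system.

```python
# Minimal sketch (hypothetical names and numbers) of a personalised
# privacy-vs-safety trade-off for a home monitoring system.

def should_alert(p_distress, privacy_preference):
    """Decide whether to summon help.

    p_distress: estimated probability (0-1) that the user is in trouble.
    privacy_preference: 0 means alert readily (accept false alarms);
    1 means alert only when the system is near-certain.
    """
    # The alert threshold rises with the user's privacy preference.
    threshold = 0.1 + 0.85 * privacy_preference
    return p_distress > threshold

# The same evidence triggers different outcomes for different users:
print(should_alert(0.5, 0.2))  # cautious user, threshold 0.27 -> True
print(should_alert(0.5, 0.9))  # private user, threshold 0.865 -> False
```

A real system would of course estimate `p_distress` from sensor data and handle many more cases; the point here is only that the alert policy, not just the detection, must be configurable per user.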
Similar issues arise in most situations where we hope to integrate artificial intelligence into our daily lives. Should a self-driving car system, for instance, be able to take control from a human driver if it considers their driving reckless or unsafe? How much of a risk should a human driver pose to themselves or to others before a car restricts their autonomy in this way? The answer may well vary from country to country.
All these questions fall into the domain of ethical reasoning by computational systems and it is becoming clear that such reasoning involves understanding and comparing different ethical concerns and priorities (privacy, safety and dignity for instance).
Research at Liverpool
At the University of Liverpool, together with Marija Slavkovik from the University of Bergen, we have looked at how different ethical principles can be ranked and used by an autonomous system when choosing an action and how we can guarantee that the ranking will be respected by the artificial intelligence program. We have also worked with researchers at Delft University of Technology to examine how systems can adapt their behaviour by being given new social rules by users or organisations.
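One way to make a ranking of ethical principles concrete, and to guarantee it is respected, is lexicographic comparison: a higher-ranked principle always outweighs any combination of lower-ranked ones. The sketch below is a hypothetical illustration of that idea (the principle names, actions, and scores are invented for this example and do not reproduce the published work).

```python
# Minimal sketch of lexicographic action selection over ranked ethical
# principles. All names and scores are hypothetical.

# Principles listed most important first; each scores an outcome in [0, 1].
PRINCIPLES = ["safety", "privacy", "dignity"]

# Hypothetical outcome scores for two candidate actions.
SCORES = {
    ("call_for_help", "safety"): 1.0,
    ("call_for_help", "privacy"): 0.2,
    ("call_for_help", "dignity"): 0.5,
    ("do_nothing", "safety"): 0.1,
    ("do_nothing", "privacy"): 1.0,
    ("do_nothing", "dignity"): 1.0,
}

def choose(actions):
    # Python tuples compare element by element, so ordering score tuples
    # by PRINCIPLES enforces the ranking: no amount of privacy or dignity
    # can outweigh a safety difference.
    return max(actions, key=lambda a: tuple(SCORES[(a, p)] for p in PRINCIPLES))

print(choose(["call_for_help", "do_nothing"]))  # -> call_for_help
```

Because the comparison is lexicographic by construction, the guarantee that the ranking is respected follows directly from how the score tuples are ordered, rather than from ad hoc weightings.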
In the future we hope such research will give rise to systems that are sensitive to the complex web of ethical concerns that govern behaviour. These systems will be adaptable to different legal systems and organisations and will respect the wishes of individual users.”