Becoming an Expert: Andrew Bolster on suspicious submarines

Andrew Bolster, from Northern Ireland, is a third-year PhD researcher in the new Advanced Networks Research Group in the Department of Electrical Engineering and Electronics. He completed his MEng in Electronics and Software Engineering at Queen’s University Belfast in 2010 before starting his PhD at QUB’s Institute of Electronics, Communications and Information Technology under a joint UK/French Defence Research programme facilitated by DSTL in the UK and DGA in France. In October last year, he moved to the University of Liverpool as part of Professor Alan Marshall’s research group.

“My PhD is a strange one, which is saying something for a PhD. The current title is ‘An Investigation into Trust and Reputation Frameworks for Collaborative Teams of Autonomous Underwater Vehicles’, but the shorter version is ‘teaching smart submarines how to be suspicious’.

“More generally though, my research has taken me through studies into submarine construction and kinematics (how they move in the water); acoustic physics (a top tip – sound bends underwater because the speed of sound changes with depth, salt content, particulates like algae and plankton blooms, and surface conditions); distributed control theory and flocking; using atomic clocks for accurate timing for positioning; and the social science of trust in social groups and hierarchies. It’s been great to have a range of experiences and the flexibility of driving my own research goals.
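
To give a feel for that “sound bends underwater” tip, here is a toy sketch of how the speed of sound varies with temperature, salinity and depth. It uses Medwin’s simplified empirical formula purely for illustration; the temperature profile and values are invented and are not from Andrew’s work.

```python
# Toy illustration: sound speed in seawater depends on temperature, salinity
# and depth, so sound rays refract (bend) towards regions of lower speed.
# Uses Medwin's (1975) simplified empirical formula for illustration only.

def sound_speed(temp_c: float, salinity_ppt: float, depth_m: float) -> float:
    """Approximate speed of sound in seawater (m/s), Medwin (1975)."""
    t, s, z = temp_c, salinity_ppt, depth_m
    return (1449.2 + 4.6 * t - 0.055 * t**2 + 0.00029 * t**3
            + (1.34 - 0.010 * t) * (s - 35.0) + 0.016 * z)

# A crude, made-up profile: warm surface water cooling with depth.
for depth in (0, 50, 100, 500, 1000):
    temp = max(4.0, 20.0 - 0.03 * depth)   # hypothetical thermocline
    print(f"{depth:5d} m  ->  {sound_speed(temp, 35.0, depth):7.1f} m/s")
```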

Mine Sweepers

“Primarily, I am investigating future applications of autonomous submarines for coastal and deep-sea environmental survey and mine sweeping.

“The attraction of research in this area is that current monolithic solutions, like the Royal Navy’s Hunt and Sandown class minesweepers, are high risk and high cost. These risks and costs could be significantly reduced by using several smaller, cheaper, independent, decentralised submarines to survey an area and report if they find anything interesting or dangerous.

“A key to this decentralised operation is the use of ad hoc networking so that teams of submarines can communicate without there being a ‘boss’ or ‘commander’.

“However, this means that for the communications to be secure, there has to be some form of ‘trust’ between nodes, i.e. I’ve been swimming with that guy for a while and he seems to be behaving well and doing what I’ve asked him to do, so I’ll believe that he’s more likely to accomplish the next task than the guy I’ve never seen before.

“This is where trust gets complicated in collaborative networks; I might trust you, and you might trust someone else. That means that I can ask your opinion of the guy I’ve never dealt with before. If you say he’s a good guy and I trust you, then I can trust the other guy slightly more.
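
As a rough sketch of how direct and recommended trust can combine, the toy example below discounts a recommender’s opinion by how much the recommender is itself trusted. The names, numbers and weighting scheme are invented for illustration and are not the framework developed in the PhD.

```python
# Toy illustration of combining direct trust with a recommendation:
# if I trust B, and B vouches for C, my opinion of C rises a little,
# discounted by how much I trust B. Values and the simple multiplicative
# discounting are assumptions for illustration only.

direct_trust = {
    ("me", "B"): 0.9,   # I've worked with B for a while
    ("B", "C"): 0.8,    # B has worked with C
    ("me", "C"): 0.2,   # I've never dealt with C directly
}

def recommended_trust(truster, subject, via):
    """Discount the recommender's opinion by my trust in the recommender."""
    return direct_trust[(truster, via)] * direct_trust[(via, subject)]

def combined_trust(truster, subject, via, weight_direct=0.5):
    """Blend my own (limited) experience with the discounted recommendation."""
    rec = recommended_trust(truster, subject, via)
    return weight_direct * direct_trust[(truster, subject)] + (1 - weight_direct) * rec

print(combined_trust("me", "C", via="B"))   # 0.5*0.2 + 0.5*(0.9*0.8) ~ 0.46
```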

“Combining this ‘Recommendation’ relationship with the ‘Direct’ relationship, as well as the ability to ask for recommendations about people you already know (Second Opinions), quickly adds up: a sample scenario I worked on with the Defence Science and Technology Laboratory last year, with six submarines, two surface relays and an aerial monitoring platform, included over 110 dynamic trust relationships. Needless to say, it is difficult to keep track of 20 real-time relationships, let alone 110.

“But these are just ‘abstract’ trust relationships for modelling purposes. Our real work has been in looking at previous trust systems, built on message-passing communications, developing our own motion-based trust system, and then combining the two. From a social perspective, this means that we’re not just looking at what is said between the submarines, but at a kind of ‘Body Language’.
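
The ‘Body Language’ idea can be sketched roughly as blending message-based trust with how closely a teammate’s observed movement matches the mission plan. The metric, threshold and weights below are all assumptions for illustration, not the actual motion-based trust system described here.

```python
# Sketch of the "body language" idea (assumptions throughout): alongside
# message-based metrics, watch how a teammate actually moves. A vehicle whose
# observed track keeps deviating from where the mission plan says it should be
# earns less trust, regardless of what it says over the network.
import math

def kinematic_trust(observed_track, expected_track, tolerance_m=25.0):
    """Map average deviation from the expected track to a [0, 1] trust score."""
    deviations = [math.dist(o, e) for o, e in zip(observed_track, expected_track)]
    avg = sum(deviations) / len(deviations)
    return max(0.0, 1.0 - avg / (2 * tolerance_m))   # crude linear fall-off

def overall_trust(comms_trust, motion_trust, w_comms=0.5):
    """Blend what a node says (comms metrics) with how it behaves (motion)."""
    return w_comms * comms_trust + (1 - w_comms) * motion_trust

expected = [(0, 0), (10, 0), (20, 0), (30, 0)]
drifting = [(0, 0), (10, 3), (20, 9), (30, 18)]   # slowly wandering off-plan
print(overall_trust(comms_trust=0.9, motion_trust=kinematic_trust(drifting, expected)))
```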

We aim to misbehave

“The single most challenging and interesting part of the work for me is that to test my assumptions and ideas, I have to play the bad guy on regular occasions. I have to imagine what I would do as an enemy that knew everything about all the security that’s currently used (and will be used in future), and how I would subvert it. How do I inject malicious information into a team of submarines?

“How can I knock them off course slowly by incorrectly reporting my position? How can I sneak secret messages out to another vessel or submarine just out of range of the team?

“I then construct simulations to test the actions and reactions of my autonomous teams to ‘spies’ in their midst, and then try to assess: a) how dangerous that is, b) what can be done to detect and identify it in real time, and c) what can be done to eliminate the vulnerability.
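
As a flavour of that simulate-attack-then-detect loop, here is a minimal, made-up sketch: a ‘spy’ slowly drifts its reported position away from the truth, and a crude monitor flags it once its recent reporting errors stay consistently high. None of the numbers, thresholds or mechanisms come from the actual simulations described above.

```python
# Made-up sketch of one attack described above: a "spy" biases its reported
# position a little further from the truth each step, and the rest of the team
# flags it once its recent reporting error stays consistently high.
import random

random.seed(1)

def reported_position(true_x, step, malicious, drift_per_step=0.5):
    noise = random.gauss(0, 0.2)                   # honest measurement noise
    bias = drift_per_step * step if malicious else 0.0
    return true_x + noise + bias

def detect_misreporting(errors, window=5, threshold=1.0):
    """Flag a node whose recent reporting error stays consistently high."""
    recent = errors[-window:]
    return len(recent) == window and min(recent) > threshold

errors = []
for step in range(20):
    true_x = float(step)                           # the spy's actual position
    report = reported_position(true_x, step, malicious=True)
    errors.append(abs(report - true_x))            # error as seen by observers
    if detect_misreporting(errors):
        print(f"step {step}: node flagged as suspicious (error {errors[-1]:.2f})")
        break
```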

“It’s an exciting area of work that’s taken me to NATO facilities in Italy experimenting with real submarines, to cellars under the Ministry of Defence discussing UK/French research relations, to the National Physical Laboratory discussing ‘stealth submarines’, and to Stanford to present at an Artificial Intelligence conference, and now I’ve found myself delivering a report on the human and technical factors of trust in autonomous systems to a five-nation consortium. It’s a stressful and exciting time to be in research!”

To find out more about studying at the University of Liverpool, visit our Study pages.
