Currently there are no seminar talks scheduled.
Simulation of attacks on Robotic Swarms
Most contemporary research in the field of robotic swarms assumes a benign operational environment. In our work we assume a hostile environment. We begin with a review of robotic swarm taxonomies, then consider how a generic swarm might be attacked and how such attacks might be investigated. We conclude by presenting results of simulations of attacks undertaken against swarms, based on the robotic swarm taxonomies.
Allan Tomlinson is a senior lecturer with the Information Security Group (ISG) at Royal Holloway, University of London. He was awarded a PhD in 1991 from the University of Edinburgh for work on VLSI architectures for cryptography. He then joined the Institute of Microelectronics at the National University of Singapore, working on secure NICAM broadcasting, and in 1994 moved to General Instrument in California to work on the Digicipher II pay-TV system. Before joining the ISG in 2003, he was Principal Engineer at Barco Communications Systems, where he was responsible for the development of the "Krypton" video scrambler.
His current research interests are in systems security, trusted virtualization and swarm robotics. He is PI for the TSB-funded SPACE project investigating cloud security, and Co-I for the CySeCa project, which is part of the Research Institute in the Science of Cyber Security. Previously he was PI on the Mobile VCE "Instant Knowledge" programme investigating privacy in mobile social networks. He also represents the ISG on the UK Cyber Security Challenge, a national competition to identify the nation's cyber security talent.
Memory Corruption: Why Protection is Hard
Software vulnerabilities allow adversaries to take control of systems. As it is unlikely that all software bugs will be fixed, we must protect systems in the presence of bugs. With the rise of defense techniques, attacks have become much more complicated, yet control-flow hijack attacks are still prevalent. Attacks rely on code reuse, often leveraging some form of information disclosure. Strong defense mechanisms have not yet been widely deployed due to (i) the time it takes to roll out a security mechanism, (ii) incompatibility with specific features, and (iii) performance overhead. We will evaluate the security benefits and limitations of the status quo and look into upcoming defense mechanisms.
Control-Flow Integrity (CFI) and Code-Pointer Integrity (CPI) are two of the most promising upcoming defense mechanisms. CFI guarantees that the runtime control flow follows the statically determined control-flow graph; an attacker may still reuse any of the valid transitions at any control-flow transfer. CPI, on the other hand, is a dynamic property that enforces memory safety guarantees like bounds checks for code pointers by separating code pointers from regular data. We will discuss the differences and advantages/disadvantages of both approaches, especially considering their security guarantees and performance impacts, and look at strategies to defend against other attack vectors like type confusion.
Mathias Payer is a security researcher and an assistant professor in computer science at Purdue University, leading the HexHive group. His research focuses on protecting applications even in the presence of vulnerabilities, with a focus on memory corruption. He is interested in system security, binary exploitation, user-space software-based fault isolation, binary translation/recompilation, and (application) virtualization. Before joining Purdue in 2014, he spent two years as a postdoc in Dawn Song's BitBlaze group at UC Berkeley. He graduated from ETH Zurich with a Dr. sc. ETH in 2012. In 2014, he founded the b01lers Purdue CTF team. All implementation prototypes from his group are open-source.