Putting computational models in the real world
Project description
The ability to fuse, process and act upon sensory information is an important human attribute. Multisensory integration in mammalian brains is exemplified by the superior colliculus (SC).
The SC serves an important function: it subconsciously shifts our gaze (via saccades) to focus on prominent stimuli, regardless of which sense detected them. Imagine seeing a car from the corner of your eye as you cross a road and turning to see it clearly, or reacting to a tap on the shoulder by turning around.
In this way, the SC prioritises the fovea of the eye, directing it towards stimuli it judges to be important: events occurring in any one (or, preferably, more than one) sensory modality. Computational modelling has helped us develop a better understanding of how the SC operates.
For computer science, developing models of brain structures is important because it allows us to explore their application in the real world, offering robustness and flexibility beyond today's constrained and brittle systems. The application of such models may therefore have significant impact in areas ranging from intelligent surveillance to sensory fusion in health care monitoring.
In this project, we propose that an existing abstract model of the SC be situated in the real world to see whether it can operate in a way that is similar to biological systems. This work has been conducted in discussion with nine leading research groups across the UK and US in preparation for a Programme Grant proposal.
The model has already been implemented in a Java framework that allows it to be connected to appropriate sensors (a camera and two microphones). The challenge, however, is to provide the model with features in an appropriate format by developing software interfaces between the model and the sensors, so that the system can produce an artificial saccade. This should be possible in the time available using the existing code and libraries.
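As a rough illustration of the kind of interface involved (the names FeatureSelector, SaccadeLoopSketch and selectSaccadeTarget are hypothetical placeholders, not classes from the existing framework), each sensor front end could expose its output as an activity profile over a shared spatial coordinate frame, which the SC model then fuses to choose a saccade target. A minimal sketch, under those assumptions:

```java
import java.util.List;
import java.util.Random;

/** Hypothetical interface: maps raw sensor data onto an activity profile over azimuth. */
interface FeatureSelector {
    double[] activityByAzimuth(); // e.g. 181 bins spanning -90..+90 degrees
}

public class SaccadeLoopSketch {
    /** Stand-in for the abstract SC model: fuse the modality maps and pick the peak bin. */
    static int selectSaccadeTarget(List<double[]> maps) {
        double[] fused = new double[maps.get(0).length];
        for (double[] m : maps)
            for (int i = 0; i < fused.length; i++) fused[i] += m[i];
        int best = 0;
        for (int i = 1; i < fused.length; i++)
            if (fused[i] > fused[best]) best = i;
        return best;
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        // Placeholder selectors standing in for the camera and microphone front ends.
        FeatureSelector visual = () -> rng.doubles(181).toArray();
        FeatureSelector auditory = () -> rng.doubles(181).toArray();

        int bin = selectSaccadeTarget(List.of(visual.activityByAzimuth(),
                                              auditory.activityByAzimuth()));
        double azimuthDeg = bin - 90; // convert bin index back to degrees
        System.out.printf("Artificial saccade towards %.0f degrees%n", azimuthDeg);
    }
}
```

In the real system the placeholder selectors would be replaced by the camera and microphone interfaces, and the chosen azimuth would drive the gaze shift.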
This Vacation Bursary was used to fund Anthony Timotheou, a student studying on the BSc Computer Science programme at Surrey.
Aims and objectives
The aim of this project is to take an existing abstract model of the superior colliculus and situate it in a system that can react to video and sound stimuli from the real world.
Objectives are to:
- Develop a broad understanding of how biological systems route sensory signals (particularly visual and auditory stimuli) in the midbrain to the superior colliculus.
- Use this understanding to develop feature selectors for a camera and two microphones that can process stimuli and input them to an abstract model of the superior colliculus (a sketch of one such selector follows this list).
- Integrate the camera, microphones and feature selectors with the model.
- Evaluate the developed system in a number of systematic experiments on live input.
- Write up the method and evaluation for publication.
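To illustrate the kind of feature selector meant for the two microphones, a common technique is to estimate the azimuth of a sound source from the interaural (here, inter-microphone) time difference, found by cross-correlating the two signals. The sketch below is illustrative only: the class name, sample rate, microphone spacing and far-field assumption are assumptions for this example, not values from the project code.

```java
/** Hypothetical auditory feature selector: azimuth from inter-microphone time difference. */
public class ItdFeatureSelector {
    static final double SAMPLE_RATE_HZ = 44_100.0;
    static final double MIC_SPACING_M  = 0.20;   // assumed distance between the microphones
    static final double SPEED_OF_SOUND = 343.0;  // m/s at room temperature

    /** Cross-correlate left and right frames over lags within the physically possible range. */
    static int bestLagSamples(double[] left, double[] right) {
        int maxLag = (int) Math.ceil(MIC_SPACING_M / SPEED_OF_SOUND * SAMPLE_RATE_HZ);
        int bestLag = 0;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (int lag = -maxLag; lag <= maxLag; lag++) {
            double score = 0;
            for (int i = 0; i < left.length; i++) {
                int j = i + lag;
                if (j >= 0 && j < right.length) score += left[i] * right[j];
            }
            if (score > bestScore) { bestScore = score; bestLag = lag; }
        }
        return bestLag;
    }

    /** Convert the lag (in samples) to an azimuth in degrees, assuming a far-field source. */
    static double lagToAzimuthDegrees(int lagSamples) {
        double delaySeconds = lagSamples / SAMPLE_RATE_HZ;
        double sine = Math.max(-1.0, Math.min(1.0,
                delaySeconds * SPEED_OF_SOUND / MIC_SPACING_M));
        return Math.toDegrees(Math.asin(sine));
    }

    public static void main(String[] args) {
        // Toy frames: the same click arriving three samples earlier at one microphone.
        double[] left = new double[64], right = new double[64];
        left[10] = 1.0;
        right[13] = 1.0;
        int lag = bestLagSamples(left, right);
        System.out.printf("Estimated azimuth: %.1f degrees%n", lagToAzimuthDegrees(lag));
    }
}
```

The estimated azimuth could then be turned into the kind of spatial activity profile the SC model expects, alongside the equivalent output from a visual feature selector for the camera.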
Funding amount
£2,500