Tavis Shore
Centre for Vision, Speech and Signal Processing (CVSSP), Faculty of Engineering and Physical Sciences
Research projects
Visual Localisation Strategies for Robotics and AI
Publications
Low Power Wide Area Networks (LPWANs) are a subset of IoT transmission technologies that have gained traction in recent years, with the number of such devices now exceeding 200 million. This paper considers the scalability of one such LPWAN, LoRaWAN, as the number of devices in a network increases. Various existing optimisation techniques target LoRa characteristics such as collision rate, fairness, and power consumption. This paper proposes a machine learning ensemble that reduces the total distance between devices and gateways and improves the average received signal strength (RSSI), in turn improving network throughput and the scalability of LoRaWAN while reducing network cost. The ensemble consists of a constrained K-Means clustering algorithm, a regression model to validate new gateway locations, and a neural network to estimate signal strength from device locations. Results show a mean distance reduction of 51% with an RSSI improvement of 3% when the number of gateways is maintained; with 50% of the gateways, clustering still achieves a 27% distance reduction and a predicted RSSI increase of 1%.
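The gateway-placement step can be illustrated in outline. The sketch below is a minimal, hypothetical Python example using scikit-learn's standard KMeans as a stand-in for the paper's constrained K-Means (the regression validator and neural RSSI estimator are omitted); the device coordinates, gateway counts, and `place_gateways` helper are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical device coordinates in metres over a 10 km x 10 km region.
rng = np.random.default_rng(0)
devices = rng.uniform(0, 10_000, size=(500, 2))

def place_gateways(devices, n_gateways):
    """Cluster device positions; each centroid is a candidate gateway site."""
    km = KMeans(n_clusters=n_gateways, n_init=10, random_state=0).fit(devices)
    gateways = km.cluster_centers_
    # Mean distance from each device to its assigned gateway -- the quantity
    # the ensemble reduces before validating sites and predicting RSSI.
    dists = np.linalg.norm(devices - gateways[km.labels_], axis=1)
    return gateways, dists.mean()

for k in (10, 5):  # full and halved gateway counts
    _, mean_dist = place_gateways(devices, k)
    print(f"{k} gateways -> mean device-gateway distance: {mean_dist:.0f} m")
```

Halving the gateway count in the loop mirrors the 50%-gateway comparison reported in the abstract.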
Cross-view image matching for geo-localisation is a challenging problem due to the significant visual difference between aerial and ground-level viewpoints. The approach provides localisation from geo-referenced images alone, eliminating the need for external devices or costly equipment, and enhances the capacity of agents to autonomously determine their position, navigate, and operate effectively in GNSS-denied environments. Current research employs a variety of techniques to reduce the domain gap, such as applying polar transforms to aerial images or synthesising between perspectives; however, these approaches generally rely on a 360-degree field of view, limiting real-world feasibility. We propose BEV-CV, an approach introducing two key novelties with a focus on improving the real-world viability of cross-view geo-localisation. First, we bring ground-level images into a semantic Bird's-Eye-View before matching embeddings, allowing direct comparison with aerial image representations. Second, we adapt datasets into an application-realistic format: limited-FOV images aligned to the vehicle's direction of travel. BEV-CV achieves state-of-the-art recall accuracies, improving Top-1 rates on 70-degree crops of CVUSA and CVACT by 23% and 24% respectively. It also decreases computational requirements, reducing floating-point operations below those of previous works and shrinking embedding dimensionality by 33%, together enabling faster localisation.
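To make the retrieval step concrete, here is a minimal, hypothetical sketch of embedding-based cross-view matching: a query embedding (which in BEV-CV would come from the semantic-BEV ground branch) is compared against a database of geo-referenced aerial embeddings by cosine similarity. The embedding dimension, the random stand-in vectors, and the `top_k` helper are assumptions for illustration, not the paper's architecture.

```python
import numpy as np

# Hypothetical embeddings: in BEV-CV a ground branch encodes the semantic BEV
# of a limited-FOV image and an aerial branch encodes geo-referenced tiles;
# here both are stand-in random vectors.
rng = np.random.default_rng(1)
dim = 512                                           # assumed embedding size
aerial_db = rng.normal(size=(10_000, dim))          # database of aerial embeddings
query = aerial_db[42] + 0.1 * rng.normal(size=dim)  # noisy query near entry 42

def top_k(query, database, k=5):
    """Return indices of the k most similar database embeddings (cosine)."""
    q = query / np.linalg.norm(query)
    db = database / np.linalg.norm(database, axis=1, keepdims=True)
    return np.argsort(-(db @ q))[:k]

print(top_k(query, aerial_db))  # entry 42 should rank first (a Top-1 hit)
```

The geo-reference attached to the best-matching aerial embedding then serves as the position estimate, which is why lower-dimensional embeddings translate directly into faster localisation.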