Student Projects 2024

The ASE course launched in 2024. Projects were supervised by Natalia Silvis and Joshua Kenyon as first readers. Students formed groups of 2-3 and had access to the ASE Labs for the entire duration of the project. Each group was given a Rover with a camera that had basic autonomous driving capabilities out of the box. Some groups used this as a baseline to compare against in their research; others focused on specific parts of the hardware, and one group used multiple rovers. Many groups used a downward-facing, ceiling-mounted camera to accurately track the positions of the rovers from above. This ceiling-mounted tracker was built by the students themselves in cooperation with the ASE Team. The following are summaries of this year's student projects.

Max Gallup

Hardware Lead, Course Organizer

Position Readings in GPS-denied Environments

Comparative Analysis of Navigation Techniques in Autonomous Driving: From Wheel Odometry to Advanced Visual Odometry and Integration of Inertial Measurement Unit

team Navigation Ninjas

Determining the position and orientation of a vehicle or wheeled robot at all times is a valuable navigation capability, often applied in the context of autonomous vehicles. Global Positioning System (GPS) based algorithms achieve this by leveraging satellite signals; however, they suffer from signal obstruction in areas such as urban canyons. This paper therefore presents an alternative approach for GPS-denied environments, based on self-contained odometry algorithms that use only sensors mounted on the car itself. To this end, the research implemented three odometry algorithms and carried out a comparative analysis by testing each algorithm on identical trajectories. The first algorithm, Wheel Odometry (WO), uses no visual input and relies on wheel rotations to determine the position of the car. The second, Visual Odometry (VO), uses only a camera, capturing images and analyzing them sequentially. The final algorithm, Visual Inertial Odometry (VIO), extends VO with an Inertial Measurement Unit (IMU), a sensor combining accelerometers and gyroscopes that helps track movement and orientation. Despite implementation and testing challenges, the results indicate that WO performs best on straight, simple paths, while VO and VIO perform better on more complicated paths. Additionally, even though VIO increases computational complexity and cost over VO, the added IMU generally improved VO's performance on the complicated paths.
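As a rough illustration of the wheel-odometry idea, the sketch below integrates encoder readings into a pose estimate using a differential-drive model. The constants (wheel radius, track width, encoder resolution) and the tick-based interface are illustrative assumptions, not the team's actual implementation.

```python
import math

# Assumed rover geometry and encoder resolution; placeholders only.
WHEEL_RADIUS = 0.03   # metres
TRACK_WIDTH = 0.15    # distance between left and right wheels, metres
TICKS_PER_REV = 360   # encoder ticks per wheel revolution

def update_pose(x, y, theta, left_ticks, right_ticks):
    """Advance the (x, y, theta) pose estimate from one pair of
    incremental encoder readings (dead reckoning)."""
    # Distance travelled by each wheel since the last update.
    d_left = 2 * math.pi * WHEEL_RADIUS * left_ticks / TICKS_PER_REV
    d_right = 2 * math.pi * WHEEL_RADIUS * right_ticks / TICKS_PER_REV

    # Displacement of the midpoint and change in heading.
    d_center = (d_left + d_right) / 2
    d_theta = (d_right - d_left) / TRACK_WIDTH

    # Integrate along the average heading of this step.
    x += d_center * math.cos(theta + d_theta / 2)
    y += d_center * math.sin(theta + d_theta / 2)
    theta = (theta + d_theta) % (2 * math.pi)
    return x, y, theta
```

Because each step only adds increments, estimation errors accumulate over time, which is consistent with WO degrading on longer, more complicated paths.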

Comparing Protocols for Traffic Congestion Alleviation

Comparing a Centralized and a Decentralized Protocol to Solve Traffic Congestion

team Jammers

This thesis explores traffic congestion solutions using a decentralized and a centralized approach, and also explores how to simulate sensors in a micro-environment. The primary objective is to compare the two approaches to see which performs better. The thesis introduces implementations of both approaches and an experiment in which the implementations are tested on a circular track. The results indicate that the centralized and decentralized protocols perform equally well, though both become increasingly less efficient once the penetration rate drops below 60%. Finally, the thesis concludes that both protocols are sufficient for resolving phantom jams and perform significantly better than a situation without any protocol. The choice between the two approaches depends on the application and its specific needs, as each has its advantages and disadvantages.
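As a loose sketch of what a decentralized rule of this kind can look like, the snippet below smooths each equipped vehicle's speed toward its leader's and randomly assigns which vehicles run the protocol according to a penetration rate. The update rule, gains, and function names are illustrative assumptions, not the protocol from the thesis.

```python
import random

def smoothed_speed(own_speed, leader_speed, gap, desired_gap=0.5,
                   k_speed=0.5, k_gap=0.3):
    """One possible decentralized update: nudge the speed toward the
    leader's while correcting the gap error, damping stop-and-go waves."""
    return (own_speed
            + k_speed * (leader_speed - own_speed)
            + k_gap * (gap - desired_gap))

def assign_equipment(n_vehicles, penetration_rate):
    """Mark which vehicles run the protocol; the rest keep their
    unmodified driving behaviour."""
    return [random.random() < penetration_rate for _ in range(n_vehicles)]
```

With a low penetration rate, most vehicles ignore the rule, which matches the observed loss of efficiency below roughly 60%.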

Challenges with Real-Time Image Processing Algorithms for Autonomous Driving

How can real-time image processing algorithms help deal with the challenges of varying illumination conditions in autonomous driving?

team Rayo

This research investigates, implements, and evaluates various algorithms designed to optimise real-time image processing to address the challenges of varying illumination conditions in autonomous driving. The study focuses on five illumination correction techniques: Gamma Correction, Logarithmic Tone Mapping, Reinhard Tone Mapping, Homomorphic Filtering, and CLAHE. To thoroughly assess their effectiveness, we employed a combination of theoretical analysis, practical application on the ASE/Rover-1, and simulation. Results indicate that the Gamma Correction algorithm is the most effective, completing the majority of tracks with the best average finishing time, and that the effectiveness of the algorithms can be further enhanced by dynamic adaptation to real-time lighting conditions. The simulation system we developed shows promise as a tool for algorithm development and testing. These findings identify an optimal algorithm for autonomous vehicle use and illustrate how simulation can be utilised for fine-tuning.
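Since Gamma Correction came out on top, here is a minimal sketch of it as a lookup-table operation, plus one possible way to adapt gamma to a frame's brightness. The gamma value, target mean, and adaptation rule are illustrative assumptions, not the study's exact parameters.

```python
import math

import cv2
import numpy as np

def gamma_correct(frame_bgr, gamma=1.5):
    """Apply out = in ** (1 / gamma) per channel via an 8-bit lookup
    table; with this convention, gamma > 1 brightens dark regions."""
    inv = 1.0 / gamma
    table = np.array([((i / 255.0) ** inv) * 255 for i in range(256)],
                     dtype=np.uint8)
    return cv2.LUT(frame_bgr, table)

def adaptive_gamma_correct(frame_bgr, target_mean=128.0):
    """One possible dynamic adaptation: choose gamma so the corrected
    frame's mean brightness lands near a target value."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    mean = min(max(float(gray.mean()), 1.0), 254.0)  # avoid degenerate logs
    gamma = math.log(mean / 255.0) / math.log(target_mean / 255.0)
    return gamma_correct(frame_bgr, gamma)
```

Precomputing the table keeps the per-frame cost to a single lookup pass, which is part of what makes gamma correction attractive for real-time use.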

Safety Assessment of Traction Control in Autonomous Vehicles

Assessment of Traction Control in Autonomous Vehicles

team Deepers

This bachelor thesis presents a safety assessment of the traction control system of an autonomous vehicle. To this end, the ASE/Rover-1 autonomous robot car is examined with two state-of-the-art safety analysis methods, namely FMEA and STAMP. FMEA is the more traditional approach, starting from hardware and eventually reaching software, whereas the more recent STAMP focuses on high-level interactions between the system's components. These methodologies were used to address both internally and externally caused failures within the system. The results are ranked according to their relevance and potential impact, which enables the practical application of selected mitigations in the overall ASE hardware and software frameworks. Internally, reducing noise in steering values was tackled as the main issue by implementing five different filtering algorithms. Externally, vehicle slippage was examined, but severe drawbacks were observed that prevented satisfactory results. Finally, each implementation is evaluated, with both its successes and limitations explored.
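To give a flavour of the steering-noise mitigation, below is a minimal sketch of one common low-pass option, an exponential moving average over raw steering values. The class name, smoothing factor, and interface are illustrative assumptions; the thesis's five filters are not specified here.

```python
class SteeringEmaFilter:
    """Exponential moving average: trades a little steering lag for a
    smoother, less noisy command signal."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha   # 0 < alpha <= 1; smaller = smoother but laggier
        self.state = None

    def update(self, raw_steering):
        if self.state is None:
            self.state = raw_steering          # initialise on first sample
        else:
            self.state = (self.alpha * raw_steering
                          + (1 - self.alpha) * self.state)
        return self.state
```

The single parameter makes the lag-versus-smoothness trade-off explicit, which is typically the axis along which such filters are compared.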

Lane Centering and Image Processing Techniques for Autonomous Driving

Image Segmentation and Lane Positioning Optimization in Autonomous Driving Systems

team Adapt

This thesis explores image segmentation and lane positioning in autonomous driving systems using a small-scale rover environment. The research addresses two primary questions: the assessment of different image segmentation techniques, and the lane-centering performance of two vision-based lateral control methods, one using a Dynamic Look-ahead and the other a Static Look-ahead. The study employs a variety of segmentation methods, including threshold-based and deep-learning-based approaches, evaluated on both custom and benchmark datasets. For lateral control, the research introduces a dynamic method that adjusts the look-ahead value based on real-time conditions. Results indicate that deep-learning methods have the advantage in accuracy and robustness, whilst the traditional methods have the advantage in efficiency and speed, and that the Dynamic Look-ahead method improves lane centering only under optimal conditions, with the Static Look-ahead returning more robust results. These findings contribute insights to the field of vehicle vision technology and point toward further innovation in autonomous systems.
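To make the look-ahead distinction concrete, the sketch below contrasts a static with a speed-scaled dynamic look-ahead inside a pure-pursuit-style steering computation. Pure pursuit, the constants, and the scaling rule are assumptions for illustration; the thesis's actual controller may differ.

```python
import math

WHEELBASE = 0.15  # metres; assumed rover wheelbase

def lookahead_distance(speed, static=None, k=0.5, l_min=0.2, l_max=1.0):
    """Static mode returns a fixed distance; dynamic mode scales the
    look-ahead with speed, clamped to a sensible range."""
    if static is not None:
        return static
    return min(max(k * speed, l_min), l_max)

def steering_angle(target_x, target_y, lookahead):
    """Pure-pursuit steering toward a target point on the lane centre,
    given in the vehicle frame (x forward, y left)."""
    alpha = math.atan2(target_y, target_x)   # bearing to the target point
    return math.atan2(2 * WHEELBASE * math.sin(alpha), lookahead)
```

A longer look-ahead smooths the response but cuts corners; scaling it with speed is one way to adapt to real-time conditions, at the cost of robustness when conditions are poor.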