Physics  |  Technology

 

Victoria Hoffmann, 2004 | Zürich, ZH

 

Elderly individuals are at increasing risk of severe falls, which demand a rapid and reliable response. To address this, an autonomous robot was developed that incorporates a neural-network-based computer vision model, designed and extensively trained on self-collected data. Upon detecting a fall, the robot emails images to caregivers so they can provide immediate assistance.
The robot scans an apartment efficiently, leaving only minimal blind spots. It detects lying individuals with 93% accuracy, while always correctly identifying walking or chair-sitting individuals as «not lying». However, floor-sitting postures are often misclassified, emphasizing the need for a larger training database for the model.
Future improvements include enhanced image processing, a gyroscope for faster and more reliable navigation, and live streaming for initial medical assistance.

Introduction

The aging populations of industrialized countries face challenges in maintaining elderly independence and safeguarding against falls. Existing monitoring solutions, such as wearables and fixed cameras, struggle with usability, accuracy, and privacy. This study develops and tests an autonomous robot that combines machine learning for fall detection, ultrasound sensors for autonomous navigation, and automated alerts to caregivers, offering an unobtrusive, reliable solution for home use.

Methods

The robot’s primary processing unit is a Raspberry Pi 4B+, with an Arduino Uno managing ultrasound sensors for distance measurements. These inputs are fed into a wall-following algorithm for autonomous navigation.
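To illustrate the wall-following idea, the decision logic can be sketched as a simple rule over two ultrasound distance readings. This is a minimal sketch under assumptions, not the robot's actual controller: the sensor layout (front and right), the 30 cm target distance, and the command names are invented here for illustration.

```python
# Hypothetical wall-following sketch: keep a wall on the robot's right
# at a target distance, turning away when an obstacle appears ahead.
TARGET_CM = 30      # assumed desired distance to the wall on the right
TOLERANCE_CM = 5    # dead band before a steering correction is issued

def steer(front_cm: float, right_cm: float) -> str:
    """Return a drive command from two ultrasound distance readings (cm)."""
    if front_cm < TARGET_CM:                   # obstacle ahead: turn away
        return "turn_left"
    if right_cm > TARGET_CM + TOLERANCE_CM:    # drifting away from the wall
        return "veer_right"
    if right_cm < TARGET_CM - TOLERANCE_CM:    # too close to the wall
        return "veer_left"
    return "forward"                           # within tolerance: go straight
```

In a real setup, the Arduino would stream the distance readings to the Raspberry Pi, which evaluates a rule like this in a loop and drives the motors accordingly.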
To detect individuals lying on the ground, a neural-network-based computer vision model was trained. A total of 3322 images of people in various positions were taken and subsequently annotated on the Roboflow platform. An automated email alert system was built to notify caregivers upon fall detection.
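An email alert with an attached detection image can be built with Python's standard `smtplib` and `email` modules. The sketch below only assembles the message; the actual send (commented out) would need a real SMTP server and credentials, so all addresses here are placeholders, not the project's configuration.

```python
# Sketch of a fall-alert email with an attached camera image.
# Sender, recipient, and server details are placeholders.
import smtplib  # used only in the commented-out send step below
from email.message import EmailMessage

def build_fall_alert(image_bytes: bytes, recipient: str) -> EmailMessage:
    """Assemble an alert email carrying the detection image as a JPEG."""
    msg = EmailMessage()
    msg["Subject"] = "Fall detected - immediate attention required"
    msg["From"] = "robot@example.com"          # placeholder sender address
    msg["To"] = recipient
    msg.set_content("A lying person was detected. Image attached.")
    msg.add_attachment(image_bytes, maintype="image",
                       subtype="jpeg", filename="detection.jpg")
    # Sending would look like this (requires a real server and login):
    # with smtplib.SMTP_SSL("smtp.example.com") as server:
    #     server.login("user", "password")
    #     server.send_message(msg)
    return msg
```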
Extensive empirical testing included area-coverage calculations and distance measurements to assess navigation efficiency. Detection accuracy was evaluated on 707 images taken by the robot, and 239 test alerts were executed to confirm notification reliability.

Results

The object detection model achieved 93% accuracy in identifying lying individuals, consistently triggering alerts. Accuracy was high across all lying positions (true positives) and reached 100% for standing and chair-sitting individuals (true negatives). Performance remained high across variations in age, gender, extremity visibility, and familiarity from the training data, but varied slightly with the robot's angle of view. However, 60% of floor-sitting cases were misclassified as lying, and in experimental settings, stuffed animals were misclassified as lying people in 67% of cases. The external alerting system was fully reliable, sending all emails successfully.
The search-navigation algorithm achieved 92% coverage of a 65 m² apartment in 50 minutes, leaving only minor blind spots. Navigation was stable on hard surfaces (18 cm/s, accurate 90° turns) but slower and rotationally inaccurate on carpets (15 cm/s, 40° turns).

Discussion

While the 93% fall detection accuracy is promising, the misclassification of floor-sitting individuals (60%) indicates a gap in the training dataset. The model's consistent accuracy across age, gender, visibility of extremities, and familiarity indicates sufficient diversity in those aspects. Its slightly varying performance with respect to the robot's angle of view can be explained by the varying frequency of sitting people at certain robot positions, suggesting a need for more diverse sitting postures in the training data. As the model is trained on shapes, the 67% false-positive rate for human-like objects is understandable but emphasizes the need for further training.
Inconsistencies in navigation across different surfaces can be addressed with a gyroscope for angle measurements or computer vision for path plotting. Running programs concurrently and using better equipment could reduce the scan time significantly, to approximately four minutes.
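The proposed gyroscope improvement amounts to integrating measured angular velocity over time until the intended turn angle is reached, instead of relying on timed motor commands that drift on carpet. This is a sketch of that idea, not the robot's current code; `read_gyro_dps` stands in for a hypothetical sensor read returning degrees per second.

```python
# Sketch of gyroscope-based turning: integrate angular rate until the
# target angle is reached. read_gyro_dps is a hypothetical callable
# returning the current angular velocity in degrees per second.
import time

def turn_by(target_deg: float, read_gyro_dps, dt: float = 0.01) -> float:
    """Accumulate heading change from gyro samples until target_deg is met."""
    angle = 0.0
    while abs(angle) < target_deg:
        angle += read_gyro_dps() * dt  # rate (deg/s) x sample interval (s)
        time.sleep(dt)                 # wait one sample interval
    return angle                       # actual angle turned (>= target)
```

In practice the motor commands would run inside the loop, and the loop stops them once the integrated angle reaches the target, making 90° turns consistent regardless of floor surface.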
While the alert system is reliable, alternative methods like phone notifications or voice commands could improve caregiver response time and allow direct interaction.

Conclusions

The autonomous robot for fall detection and emergency alerts offers advantages over traditional systems by functioning without user interaction, avoiding blind spots, and ensuring privacy, as images are shared only when a fall is detected. Further navigation refinements, dataset expansion, and communication upgrades can enhance its reliability and usability.

 

 

Appraisal by the Expert

Frédéric Bourgeois

Developing an autonomous robot for fall detection required a wide range of technical disciplines, from electronics and programming to the training of AI models. Victoria Hoffmann taught herself the necessary know-how and successfully combined these complex topics. In doing so, she presents a solution to a socially relevant problem. Her enthusiasm for the subject is particularly evident in the thorough evaluation of the robot's performance, which gives the work a high scientific standard.

Rating:

outstanding

Special prize «London International Youth Science Forum (LIYSF)», donated by the Metrohm Stiftung

 

 

 

Rämibühl-MNG, Zürich
Teacher: Adriana Mikolaskova