Retrieval Augmented Generation (RAG) Based Framework for Real Time Human Activity Recognition (RHAR) In Disaster Zones
Abstract
This work introduces a lightweight, real-time human activity recognition system designed for disaster response applications and optimized for edge computing environments. Inspired by Retrieval Augmented Generation (RAG), the system employs a rule-based, interpretable approach instead of traditional machine learning models. Human poses are encoded as binary feature vectors based on geometric features, including elbow and knee angles, hand height, symmetry ratio, and sitting ratio, computed from 33 key body landmarks. These vectors are compared against a pre-collected, labeled dataset using Hamming distance to classify actions such as standing, sitting, waving, hands up, and lying down. Majority-vote smoothing is applied to reduce noise and enhance prediction stability. Designed to run efficiently on edge devices such as Unmanned Aerial Vehicles (UAVs) or embedded processors, the system enables fast, transparent decision-making directly at the source of data capture without relying on cloud connectivity. Tests using a standard webcam under drone-simulated conditions showed reliable recognition of most actions, with occasional challenges in detecting lying-down poses. The solution is well suited to low-resource, latency-sensitive disaster response operations where edge computing is critical.
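The retrieval-style pipeline described above can be sketched in a few lines of Python. This is a hypothetical illustration, not the authors' implementation: the five-bit feature vectors, the reference labels, and the window size are invented for the example, standing in for thresholded geometric features (e.g., "elbow bent", "hand above head") that the paper derives from the 33 body landmarks.

```python
from collections import Counter, deque

# Hypothetical sketch of the RAG-inspired matcher: a pose becomes a
# binary feature vector, the nearest labeled reference vector (by
# Hamming distance) supplies the action label, and a majority vote
# over recent frames smooths out per-frame noise.

def hamming(a, b):
    """Count differing bits between two equal-length binary vectors."""
    return sum(x != y for x, y in zip(a, b))

def classify(features, reference_set):
    """Return the label of the reference vector nearest in Hamming distance."""
    return min(reference_set, key=lambda item: hamming(features, item[1]))[0]

class MajorityVoteSmoother:
    """Keep the last `window` predictions and emit the most common one."""
    def __init__(self, window=5):
        self.history = deque(maxlen=window)

    def update(self, label):
        self.history.append(label)
        return Counter(self.history).most_common(1)[0][0]

# Illustrative reference set: (label, binary feature vector) pairs.
# Bits here are invented stand-ins for thresholded geometric features.
REFERENCES = [
    ("standing",   (0, 0, 0, 0, 0)),
    ("sitting",    (0, 0, 0, 1, 0)),
    ("waving",     (1, 1, 0, 0, 0)),
    ("hands_up",   (0, 1, 0, 0, 1)),
    ("lying_down", (0, 0, 1, 1, 1)),
]

smoother = MajorityVoteSmoother(window=5)
frame_features = (0, 1, 0, 0, 1)  # e.g., both hands raised
label = smoother.update(classify(frame_features, REFERENCES))
print(label)  # -> hands_up
```

Because the matcher is exhaustive nearest-neighbor search over a small labeled set, inference cost is a handful of bitwise comparisons per frame, which is what makes the approach viable on UAV or embedded hardware without a trained model.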
Article Details

This work is licensed under a Creative Commons Attribution 4.0 International License.