This paper presents a real-time computer vision system for detecting eating activities inside public buses, published at the 2025 5th International Multidisciplinary Information Technology and Engineering Conference (IMITEC) and indexed on IEEE Xplore. The system is designed to run on edge devices and supports automated enforcement of no-eating regulations in public transportation — reducing driver distraction and improving passenger safety.
The Problem
Public buses in many countries enforce no-eating rules to maintain cleanliness, reduce pest infestations, and preserve passenger comfort. But enforcement today depends almost entirely on the driver watching passengers via rear-view or side mirrors — a method that diverts attention from the road and creates real accident risk. A scalable, automated solution was needed.
Abstract
A hybrid dataset was constructed by combining 958 frames from a public activity recognition source with 650 bus-specific synthetic images generated under diverse lighting, seasonal, and passenger conditions. The data were annotated into five classes — hand, mouth, utensil, food, eat — to capture eating-related actions. A YOLOv8n detector was trained and integrated with a lightweight rule-based inference module to classify each scene as: eating, has food but not eating, or no food.
The detector achieves an mAP50 of 0.714 on movie clips, 0.595 on the generated bus data, and 0.677 on the hybrid dataset. Larger features such as hands and mouths are detected reliably, while smaller objects such as utensils remain challenging. The results demonstrate the feasibility of detecting eating in dynamic, real-world bus environments and highlight the value of hybrid datasets for model generalization.
Dataset Construction
Building a reliable dataset was one of the core contributions of this work. No existing public dataset captured eating in a moving bus context, so a hybrid approach was taken:
- 958 frames sourced from a public activity recognition dataset (movie/film clips showing eating scenes)
- 650 synthetic bus-interior images generated under varied conditions — different lighting, seasons, seating arrangements, and passenger demographics
- All images annotated with five object classes: hand, mouth, utensil, food, and eat
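For orientation, a dataset annotated this way is typically described to YOLOv8 by a small YAML config listing the class names and split directories. The paper does not publish its dataset layout, so the root path and split names below are illustrative assumptions; only the five class names come from the paper.

```python
from pathlib import Path

# The five annotated object classes from the paper.
CLASSES = ["hand", "mouth", "utensil", "food", "eat"]

# Minimal YOLO-format dataset config. Paths and split names are
# hypothetical placeholders, not the authors' actual layout.
config = "\n".join(
    [
        "path: datasets/bus_eating",  # assumed dataset root
        "train: images/train",
        "val: images/val",
        f"nc: {len(CLASSES)}",
        "names: [" + ", ".join(CLASSES) + "]",
    ]
)
Path("data.yaml").write_text(config + "\n")
```

A config like this is what Ultralytics' training entry point consumes, which is one reason a hybrid dataset is easy to assemble: real frames and synthetic frames can simply share the same split directories.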
Model & Inference Pipeline
YOLOv8n (the nano variant of YOLOv8) was selected for its balance of detection speed and accuracy, making it suitable for deployment on resource-constrained edge devices. The trained detector feeds into a lightweight rule-based inference module that reasons over detected objects to produce a final scene classification:
- Eating — food or utensil detected near hand and/or mouth
- Has food but not eating — food present but no eating-related gesture detected
- No food — no eating-related objects detected
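The three scene labels above can be sketched as a small rule function over the detector's outputs. This is a minimal illustration, not the paper's exact module: the normalized distance threshold, the use of box centers, and the pairing logic are all assumptions made here for clarity.

```python
def classify_scene(detections, thresh=0.15):
    """Toy rule-based scene classifier over object detections.

    `detections` is a list of (class_name, (cx, cy)) pairs, where
    (cx, cy) is a box center in normalized image coordinates.
    Returns one of: "eating", "has food but not eating", "no food".
    """
    def centers(name):
        return [c for n, c in detections if n == name]

    def near(a, b):
        # Euclidean distance between two box centers.
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5 < thresh

    food_like = centers("food") + centers("utensil")
    targets = centers("hand") + centers("mouth")

    if not food_like:
        return "no food"
    if any(near(f, t) for f in food_like for t in targets):
        return "eating"
    return "has food but not eating"
```

Keeping this reasoning outside the network means the detector stays small (YOLOv8n) while the scene-level decision remains interpretable and tunable on the edge device.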
Results
- mAP50: 0.714 on movie clips
- mAP50: 0.595 on generated bus interior data
- mAP50: 0.677 on the combined hybrid dataset
- Large features (hands, mouths) detected with high reliability
- Small objects (utensils) remain a challenge — identified as an area for future work
The hybrid dataset consistently outperformed training on either source alone, demonstrating that combining real-world and synthetic data is a viable strategy when bus-specific ground truth is scarce.
Key Contributions
- First application of YOLOv8n to eating detection in a public bus context
- Novel hybrid dataset combining real activity recognition footage with synthetically generated bus interior images
- Lightweight rule-based reasoning module for scene-level eating classification on edge hardware
- Demonstrated generalization benefits of hybrid data over single-source training
- Lays groundwork for automated, driver-independent enforcement of no-eating regulations in public transit
Publication Details
- Title: A Computer Vision-Based System for Detecting Eating Activities in Public Bus Transportation
- Conference: 2025 5th International Multidisciplinary Information Technology and Engineering Conference (IMITEC)
- Conference Location: Pretoria, South Africa
- Date of Conference: 26–28 November 2025
- Date Added to IEEE Xplore: 03 March 2026
- Publisher: IEEE
- DOI: 10.1109/IMITEC67386.2025.11410482
- Authors: Ovie Michael Odafe, Witesyavwirwa Vianney Kambale, Mohamed Salem, Mahmoud Hamed, Selain K. Kasereka, Kyandoghere Kyamakya
Read the Paper on IEEE Xplore
The full paper is available on IEEE Xplore. Access the abstract, citation details, and — for subscribers — the complete PDF.
