TOVOIR is an augmented reality (AR) navigation application designed to assist visually impaired individuals in navigating their surroundings using advanced machine learning and voice command technologies.
- Real-time Object Detection: Uses the YOLOv3 machine learning model to identify obstacles and landmarks in the user's environment in real time.
- Spatial Awareness: Provides spatial awareness through AR overlays, guiding users with auditory and haptic feedback.
- Voice Command Integration: Enables users to interact with the app using voice commands for navigation and settings adjustments.
- Integrated Map: Lets users plan routes and explore nearby points of interest.
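The real-time detection feature above can be sketched with Apple's Vision framework driving a Core ML export of YOLOv3. This is a minimal sketch, not the app's actual pipeline: the `YOLOv3` class assumes the Xcode-generated wrapper for a bundled YOLOv3 `.mlmodel`, and the frame-feeding details are assumptions.

```swift
import Vision
import CoreML

// Sketch only: "YOLOv3" is the class Xcode auto-generates when a
// YOLOv3 Core ML model is added to the project (an assumption here).
final class ObstacleDetector {
    private lazy var request: VNCoreMLRequest = {
        let mlModel = try! YOLOv3(configuration: MLModelConfiguration()).model
        let visionModel = try! VNCoreMLModel(for: mlModel)
        return VNCoreMLRequest(model: visionModel) { request, _ in
            // Each observation carries a bounding box and ranked class labels.
            guard let results = request.results as? [VNRecognizedObjectObservation] else { return }
            for observation in results {
                if let top = observation.labels.first {
                    print("\(top.identifier) at \(observation.boundingBox)")
                }
            }
        }
    }()

    // Feed a camera frame (e.g. from ARKit or AVCaptureSession) into Vision.
    func detect(in pixelBuffer: CVPixelBuffer) {
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .up)
        try? handler.perform([request])
    }
}
```

In a real session, `detect(in:)` would be called per camera frame, and the detected labels would feed the auditory and haptic feedback layer.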
- Swift: Primary programming language for iOS development.
- UIKit: Used for building the user interface components.
- YOLOv3: Deep learning model used for real-time object detection.
- Clone the repository: `git clone https://github.com/shreyansh232/ToTest2.git`
- Open the project in Xcode.
- Build and run the application on your iOS device or simulator.
- Launch the TOVOIR app on your iOS device.
- Use voice commands to navigate through the app and interact with AR overlays.
- Access the integrated map to plan routes and explore nearby locations.
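Voice interaction as described above could be built on Apple's Speech framework. The sketch below is an assumption about how TOVOIR might wire this up, not its actual implementation; the command keywords ("navigate", "settings") are illustrative.

```swift
import Speech
import AVFoundation

// Sketch: requires NSMicrophoneUsageDescription / NSSpeechRecognitionUsageDescription
// in Info.plist and SFSpeechRecognizer.requestAuthorization at startup.
final class VoiceCommandListener {
    private let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))
    private let audioEngine = AVAudioEngine()
    private let request = SFSpeechAudioBufferRecognitionRequest()

    func start() throws {
        let inputNode = audioEngine.inputNode
        let format = inputNode.outputFormat(forBus: 0)
        // Stream microphone audio into the recognition request.
        inputNode.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
            self.request.append(buffer)
        }
        audioEngine.prepare()
        try audioEngine.start()

        recognizer?.recognitionTask(with: request) { result, _ in
            guard let spoken = result?.bestTranscription.formattedString.lowercased() else { return }
            if spoken.contains("navigate") {
                // e.g. switch to the AR navigation view (hypothetical action)
            } else if spoken.contains("settings") {
                // e.g. open the settings screen (hypothetical action)
            }
        }
    }
}
```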
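The nearby-places lookup behind the integrated map can be sketched with MapKit's `MKLocalSearch`; the query string and the 1 km search radius below are illustrative assumptions, not values from the app.

```swift
import MapKit

// Sketch: search for points of interest near a coordinate with MKLocalSearch.
func searchNearby(_ query: String,
                  around center: CLLocationCoordinate2D,
                  completion: @escaping ([MKMapItem]) -> Void) {
    let request = MKLocalSearch.Request()
    request.naturalLanguageQuery = query          // e.g. "pharmacy" (illustrative)
    request.region = MKCoordinateRegion(center: center,
                                        latitudinalMeters: 1000,   // assumed radius
                                        longitudinalMeters: 1000)
    MKLocalSearch(request: request).start { response, _ in
        completion(response?.mapItems ?? [])
    }
}
```

Each returned `MKMapItem` carries a name and coordinate, which the app could announce via speech or use as a route destination.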
We welcome contributions to improve TOVOIR and make it more accessible and effective for visually impaired individuals. If you would like to contribute, please fork the repository and submit a pull request with your proposed changes.
This project is licensed under the MIT License - see the LICENSE file for details.