Raspberry PI Vehicle Anti-Theft Face Recognition System
System Design
Applied Project – NEF 3001

Project Members
Mohammad Waleed Ahmed – s4569298
Joseph Nguyen Tran – s4495004
Minh Tam Tran – s4532460
Giselle Calinga – s4570003
Ryan Sinnott – s4059408

Executive Summary
*Written by Waleed Ahmed

This document provides a blueprint for the implementation of all functional aspects of the project defined previously in Requirements Analysis (RA). In this phase, we comprehensively describe all technical details for building our Android application. According to the requirements gathered so far, our Android app (short for application), called ‘FaceCar’, should be able to:

• Register a new user by simply taking their photo and storing it in the file system.
• Allow or deny access to an existing user based on whether a facial match exists in its file system.
• Clear a user’s image data by scanning their face and finding a match within the app’s file system.
• Seamlessly request external camera access and receive facial snapshots.

Moreover, we also provide details of the underlying Raspberry PI (RPi) hardware plus the buzzer circuitry to be implemented as part of this project. In a nutshell:

• The FaceCar app communicates via a wireless connection with the RPi, which in turn is connected to a breadboard via jumper wires.
• The breadboard connects to red, green and yellow LED lights and a buzzer alarm. The alarm and lights’ terminals are inserted directly into the breadboard’s holes.
• The yellow LED stays on as long as the connection between the RPi and FaceCar remains active.
• The red LED lights up and the buzzer alarm sounds, simulating vehicle access denial.
• The green LED lights up, simulating vehicle access success.

Therefore, this document first provides a model of the entire project architecture showing all subsystems/modules involved and their interactions.
Afterwards, an overall description is provided for each subsystem outlining what it does and how it contributes to the entire project. For the hardware, a model is shown depicting all items used and how they are connected. Finally, we explain each subsystem using pseudocode and diagrams showing a flow of events.

Table of Contents
Executive Summary
1. Overview
2. System Architecture
2.1. Camera Delegation Management Subsystem (CDM)
2.2. File Management Subsystem (FMS)
2.3. Facial Comparison Management Subsystem (FCM)
2.4. User Notification Management Subsystem (UNM)
2.5. Raspberry PI Notification Management Subsystem (RNM)
3. Hardware Architecture
3.1. Hardware Diagram
4. Detailed Design
4.1. General User Interface
4.2. Camera Delegation Management (CDM)
4.3. File Management (FMS)
4.4. Facial Comparison Management (FCM)
4.5. Raspberry PI Notification (RNM)
4.6. User Notification Management (UNM)
5. Frequently Asked Client Questions
6. Glossary

1. Overview
*By Waleed Ahmed

In order to satisfy the functional requirements determined previously in Requirements Analysis, we have identified several subsystems. Since our app is expected to save only image data, we use Android’s own file system to store images rather than employing a database. As a result, a File Management Subsystem (FMS) deals with storing and fetching images from the file system. The user can take their photo via an external camera app, whose resulting snapshot automatically gets saved by the FaceCar app through a joint collaboration of the Camera Delegation Management Subsystem (CDM) and FMS. The Facial Comparison Management Subsystem (FCM) later gets an image from CDM and compares it to the images stored by FMS in the app’s file system. FCM finds common facial features among images to determine whether a user is already registered with FaceCar and their image exists.
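The subsystem interplay just described (CDM supplies a snapshot, FCM compares it against stored images, FMS persists it) can be sketched at a high level. Every name below is a hypothetical stand-in for the Android subsystems, not actual FaceCar code:

```python
# High-level sketch of the register flow: CDM supplies a snapshot,
# FCM checks for an existing match, FMS saves only new faces.
# All names are illustrative assumptions, not FaceCar's real components.

def register(snapshot, stored_images, match_fn):
    """Save `snapshot` unless `match_fn` finds it among `stored_images`."""
    for image in stored_images:
        if match_fn(snapshot, image):      # FCM: facial comparison
            return "already registered"    # UNM would notify the user
    stored_images.append(snapshot)         # FMS: persist the new face
    return "registered"
```

In the real app, `match_fn` would be the OpenCV-based comparison described later, and the image list would live in the app's file system rather than in memory.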
As a result of comparing facial data, there are also subsystems dealing with triggering an LED/buzzer response on the RPi and displaying notifications within the FaceCar app. Notifications contain a short message informing the user of whether a facial match was found. The basic version of the FaceCar app needs a network connection to communicate with the RPi; the communication occurs with both the RPi and the app connected to Wi-Fi. However, if the link between the Android phone and the RPi breaks, FaceCar should inform the user via a notification message. For future versions and improvements, FaceCar is expected to first scan the user’s face and look for a match in the file system to determine whether it is a new or returning user; this is done so that FaceCar can display a personalized home screen.

2. System Architecture
*Designed by Waleed Ahmed

This section provides models of the system architecture as well as brief descriptions of each of our subsystems. Figure 2.1 below is a component diagram showing all subsystems involved within the entire project architecture. The dependencies among subsystems are represented with a ball-and-socket notation.

Figure 2.1

Since the subsystems shown in Figure 2.1 reside within different hardware devices, Figure 2.2 below shows a deployment diagram with the subsystems in relation to the underlying hardware.

Figure 2.2

2.1. Camera Delegation Management Subsystem (CDM)
*Written by Waleed Ahmed

On FaceCar’s home screen, there are three buttons: Register, Begin Scan and Clear Data. If the user presses any of these three buttons, this subsystem is the first one to activate. It requests a camera application on the user’s Android phone. If a camera application exists, it launches from FaceCar’s home screen and functions normally as if it were part of the FaceCar app, so the user makes a smooth transition from FaceCar to the camera view. The user then takes a snapshot of their face whenever they are ready and the camera is focused.
After a snap is taken, the camera gives the user a choice to retry or save the image. Once the user is satisfied with their image and saves it, FaceCar fires the camera delegation intent again so the user can take their photo once more. The whole sequence above occurs three times, after which the external camera app ends and redirects the user back into FaceCar. Since FaceCar does not require a camera function coded as part of itself, it uses this subsystem to seamlessly delegate the image snap function to a dedicated camera app and fetch the snapped image back into FaceCar.

2.2. File Management Subsystem (FMS)
*Written by Ryan Sinnott and Waleed Ahmed

FMS works in three instances:

• When the user is in the process of registering themselves and their image needs to be saved in the FaceCar app’s file system.
• When the user wants to scan their face and find out if a facial match is present in the file system.
• When the user wants to clear their image data from FaceCar’s file system.

In the first instance, when the external camera app has taken the photo snapshot, FMS gets that photo and sends it to FCM to be compared. If FCM is satisfied that no matching image exists, FMS then saves the snapshot in a dedicated images folder within the FaceCar app’s directory. It does this automatically for the user.

For the second instance, FMS queries the folder where images are stored and fetches them so they can be compared by FCM. FMS does this when the user has snapped their photo and it needs to be compared with all stored images in the file system.

Finally, for the third instance, after the user has snapped their photo, FMS queries the FaceCar image folder and searches for an image that ‘looks’ like the face in the snapped photo. If it finds a matching image, the image gets deleted. However, as a data security precaution, FMS displays a final warning with a thumbnail of the image to be deleted, requesting confirmation from the user to proceed.
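The three FMS instances amount to saving, fetching and deleting images in a dedicated folder. A minimal sketch of those operations, using an ordinary directory in place of the Android app's internal storage (the folder path and function names are illustrative assumptions, not FaceCar's actual code):

```python
from pathlib import Path

# Sketch of the three FMS instances: save, fetch, delete.
# "facecar_images" stands in for the app's internal images folder.

IMAGES_DIR = Path("facecar_images")  # hypothetical app images folder

def save_image(name: str, data: bytes) -> Path:
    """Instance 1: persist a snapshot after FCM finds no existing match."""
    IMAGES_DIR.mkdir(exist_ok=True)
    path = IMAGES_DIR / f"{name}.jpg"
    path.write_bytes(data)
    return path

def fetch_images() -> list:
    """Instance 2: load every stored image so FCM can compare them."""
    return [p.read_bytes() for p in sorted(IMAGES_DIR.glob("*.jpg"))]

def delete_image(name: str) -> bool:
    """Instance 3: remove a matched image (after the user confirms)."""
    path = IMAGES_DIR / f"{name}.jpg"
    if path.exists():
        path.unlink()
        return True
    return False
```

On Android the equivalent calls would go through the app's context-scoped storage APIs, but the save/fetch/delete shape is the same.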
Again, it is worth noting that all three instances above query image data three times (to work with CDM) before FMS finally saves or deletes the user’s image, depending on which functionality is used.

2.3. Facial Comparison Management Subsystem (FCM)
*Written by Tam Tran and Waleed Ahmed

FCM is the most vital subsystem since it defines the utility of the FaceCar app. It is responsible for reading any two images that need to be compared. Essentially, this subsystem works on three occasions:

• If a user tries to register, FCM compares their snapshot with the images in the image folder to make sure the user is not already registered.
• If a user wants to scan their face, FCM determines whether the user’s facial image exists in the file system. Again, it does this by comparing the user’s snapshot to pre-existing images in FaceCar’s file system.
• If a user wants to delete their image, FCM compares their snapshot to all images stored in the FaceCar images folder. If an image is found resembling the user, it gets deleted by FMS.

The above three points show that FaceCar relies heavily on the facial comparison algorithm. Therefore, on each of these three occasions, to work in conjunction with the CDM subsystem, FCM performs its comparisons three times (once with each of the three provided user snaps). This greatly reduces the chance of error.

As an overview of the algorithm determined in the Requirements phase, we will be using the OpenCV library’s ‘Feature Match’ capability. In simple words, Feature Matching looks for distinguishing features in one image and compares them with features in another image. The variant of Feature Matching to be used in our project is called ‘Brute-Force Matching’, or BFMatcher. In the stored image being compared against (called the train image), distinctive features are detected and encoded as ORB descriptors. These descriptors usually pick out the edges of objects inside an image, since edges are easier to compare and the chance of error is low.
Every feature in the train image then gets compared with all the features in the other image (called the query image) using a distance value. The pair that returns the smallest distance (a smaller distance means a more accurate match) gets matched, with a line drawn connecting the detected similar features within both images (see image below).¹

¹Image Reference: https://docs.opencv.org/3.4.3/matcher_result1.jpg

The image above shows a facial comparison example for comparing features on a cover
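The minimum-distance matching that BFMatcher performs can be illustrated with a small NumPy sketch. ORB descriptors are 256-bit binary strings, so BFMatcher typically compares them with Hamming distance (the number of differing bits); the 8-bit descriptors below are made-up toy data, and this is the principle only, not the OpenCV API:

```python
import numpy as np

# Toy brute-force matcher over binary descriptors, mirroring what
# cv2.BFMatcher with Hamming distance does: for each query descriptor,
# find the train descriptor with the smallest Hamming distance.
# Real ORB descriptors are 256 bits; these 8-bit rows are made up.

query = np.array([[0, 1, 1, 0, 1, 0, 0, 1],
                  [1, 1, 0, 0, 0, 0, 1, 1]], dtype=np.uint8)
train = np.array([[0, 1, 1, 0, 1, 0, 0, 0],   # 1 bit away from query[0]
                  [1, 1, 1, 0, 0, 0, 1, 1],   # 1 bit away from query[1]
                  [0, 0, 0, 0, 0, 0, 0, 0]], dtype=np.uint8)

def brute_force_match(query, train):
    """Return (train_index, distance) for each query descriptor."""
    # XOR counts the differing bits between every query/train pair.
    dists = (query[:, None, :] ^ train[None, :, :]).sum(axis=2)
    best = dists.argmin(axis=1)
    return [(int(j), int(dists[i, j])) for i, j in enumerate(best)]

print(brute_force_match(query, train))  # → [(0, 1), (1, 1)]
```

Each query descriptor is paired with its nearest train descriptor, exactly the "least distance value wins" rule described above; OpenCV then draws the connecting lines between the matched keypoints.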