
SMART ATTENDANCE MODEL


Author: Gauri Godghase

Advancements in the field of facial recognition have made its application possible in many previously unexplored domains. One such realm is education. A common practice in schools and colleges is to mark students' attendance via roll call. This approach not only wastes time in large classes but is also prone to false attendances. This blog proposes a Smart Attendance Model to counteract these problems: it uses machine learning and facial recognition to automatically recognize attendees and record their names in a database.
 

Introduction

Today, the number of students in any classroom is increasing. Traditional methods of marking attendance are unsuitable because they are tedious and inefficient. Moreover, they are easy to get around: students can mark absent friends as present without the teacher ever finding out, a practice that is especially common among college students. The proposed Smart Attendance model tackles this issue. Here, we will use the face-recognition and OpenCV libraries to identify faces; a laptop's built-in camera can substitute for dedicated hardware. The names of attendees and their times of joining the class will be stored in a CSV file. First, we will explore the face-recognition library, before moving on to the implementation of our attendance model.

Face-recognition library

The job of recognizing and labelling the image of a person is done by this library. It does so in four steps:

  1. Locating all the faces in the image

  2. Dealing with various poses and angles of the same person's face

  3. Encoding

  4. Identifying and associating a person with the encoded values

We will look at each of these steps in detail subsequently.

Step 1: Locating individual faces

In the very first step, the algorithm needs to determine how many faces are in the picture and where each face is located. In the backend, the library uses the Histogram of Oriented Gradients (HOG) method to do this.

This method is quite complex, so we will not go into too much detail about it. In short, the image is divided into cells of size 8x8 or 16x16 pixels. The pixels surrounding each pixel are compared, and an arrow is drawn in the direction in which the image grows darker. This is done across all the cells in the image. These arrows are known as gradients. Gradients are like vectors in physics: they have a magnitude as well as a direction.

FIG:1

FIG:2

After finding the gradients, our image resembles FIG:2. Once we have this HOG version of the face, it is easy to locate the same face pattern in other photos.
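The face-recognition library computes HOG internally via dlib, but the same idea can be illustrated with scikit-image's `hog` function (a sketch only; scikit-image is not one of this project's dependencies):

```python
import numpy as np
from skimage.feature import hog

# A synthetic 64x64 grayscale "image"; in practice this would be a photo.
rng = np.random.default_rng(0)
image = rng.random((64, 64))

# Divide the image into 8x8 cells, compute a 9-bin gradient-orientation
# histogram per cell, and normalise over 2x2 blocks of cells.
features = hog(
    image,
    orientations=9,
    pixels_per_cell=(8, 8),
    cells_per_block=(2, 2),
)

# 64/8 = 8 cells per side -> 7x7 overlapping 2x2 blocks -> 7*7*2*2*9 values.
print(features.shape)  # (1764,)
```

The resulting feature vector is the "HOG version" of the image described above.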


Step 2: Identifying the same person across different angles and poses.

While humans find it easy to recognize a person across different frames and angles, computer algorithms find it quite difficult.

The computer will treat each of these images as belonging to a different person.


FIG:3

Thus, we need to resolve this discrepancy. To account for this error, we use an algorithm called face landmark estimation. It involves identifying distinctive points on the face, such as the tip of the nose and the edges of the eyebrows. The following picture depicts the 68 points (or landmarks) that this algorithm locates on every face.


FIG:4

The machine learning model is trained to recognize these specific points on any face. The image is then rotated, scaled, and sheared so that the eyes and mouth sit in roughly the same place in every picture. Now the model can recognize a person irrespective of the angle or pose of the face.

Step 3: Encoding

It is not possible for an algorithm to directly compare all the faces generated in step 2. Instead, we use a previously trained algorithm to generate encodings for each face and compare those. Encodings are unique measurements of a face that can be used to distinguish it from other faces. The face-recognition library uses the pre-trained neural network from OpenFace, developed by Brandon Amos, to achieve this. 128 measurements are generated for each known image and for the unknown image; the values of the unknown image are then compared with the known ones to label the person present in the image.

FIG:5
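Under the hood, comparing two encodings is just a Euclidean-distance check against a tolerance (0.6 by default in the face-recognition library). A pure-NumPy sketch of that comparison, using made-up 128-value encodings:

```python
import numpy as np

def is_match(known_encoding, candidate_encoding, tolerance=0.6):
    """Two encodings are considered the same person if the Euclidean
    distance between them is at most `tolerance` (0.6 is the
    face-recognition library's default)."""
    distance = np.linalg.norm(known_encoding - candidate_encoding)
    return distance <= tolerance

# Made-up encodings for illustration: a known face, a near-identical
# reading of the same face, and a clearly different face.
known = np.zeros(128)
same_person = known + 0.01       # distance ~0.11
different_person = known + 0.10  # distance ~1.13

print(is_match(known, same_person))       # True
print(is_match(known, different_person))  # False
```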

Step 4: Identifying and associating a person with the encoded values

Finally, in the last step, we simply look up the name of the person associated with the closest matching encoding in our database and assign that name to the image. This concludes the working of the face-recognition library. Now we will take a step-by-step look at how to implement our Smart Attendance system.
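This lookup can be sketched in NumPy: compute the distance from the unknown encoding to every known encoding and take the closest name (the encodings and names below are made up for illustration):

```python
import numpy as np

def identify(known_encodings, known_names, face_encoding, tolerance=0.6):
    """Return the name whose encoding is closest to `face_encoding`,
    or 'Unknown' if no known face is within `tolerance`."""
    distances = np.linalg.norm(np.array(known_encodings) - face_encoding, axis=1)
    best = int(np.argmin(distances))
    return known_names[best] if distances[best] <= tolerance else "Unknown"

# Made-up 128-value encodings for two known students.
known_encodings = [np.zeros(128), np.ones(128)]
known_names = ["GAURI", "ELON"]

print(identify(known_encodings, known_names, np.full(128, 0.02)))  # GAURI
print(identify(known_encodings, known_names, np.full(128, 0.98)))  # ELON
```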

Step 1: Installing C compiler

To use the face-recognition library, we need a C++ compiler, since dlib (which the library depends on) is built from C++ source. On Windows, download the Visual Studio Community edition from the official website and, during installation, select the Desktop development with C++ workload.

FIG:6

Step 2: Install the Dependencies.

The face_recognition library has a few dependencies. The easiest way to install them is through PyCharm.

Start a Pycharm project. Go to file->Settings->Python Interpreter.

Click on the ‘+’ sign and install the following dependencies:

  1. cmake

  2. dlib

  3. face-recognition

  4. numpy

  5. opencv-python


FIG:7
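If you prefer the command line to PyCharm's interface, the same packages can be installed with pip (names as published on PyPI; `opencv-python` is the package that provides the `cv2` module):

```shell
pip install cmake
pip install dlib
pip install face-recognition   # provides the face_recognition module
pip install numpy opencv-python
```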

Step 3: Gathering Photos

Create a folder in the project folder to store the photos of all the students. We’ll name this folder ‘ImageAttendance’.

Every photo should be labelled with the name of the student it belongs to.

Step 4: Import the required libraries.

FIG:8

Step 5: Load the images

Here we will store all the images in a list called images, and the names of the attendees in a separate list called classNames.


FIG:9

Step 6: Converting images to RGB and encoding

The library provides a direct function to compute encodings, but first we have to convert all the images from BGR to RGB (OpenCV loads images in BGR channel order). The following code does so:

FIG:10

First, we will find the encodings of all the known images.

FIG:11

Step 7: Start capturing the video

FIG:12

Step 8: Detection, recognition and marking attendance.

Capture the video frame by frame.

Resize each captured frame and convert it to RGB. Locate the people in the frame and encode their faces using the face_recognition library.

Use the compare_faces function of the face_recognition library to match each detected face against the known encodings and retrieve the person's name.

FIG:13

FIG:14

Output

FIG:15

CSV file

FIG:16

Conclusion

Thus, we have implemented a simple smart attendance model. It is easy to use: to add new members, we only need to upload their photos, named after them, to the ImageAttendance folder. The underlying model reports 99.38% accuracy on the Labeled Faces in the Wild benchmark.

Repository Link

  1. https://github.com/gaurigodghase/MSRF/blob/main/AttendanceProject.py

References

  1. https://medium.com/@ageitgey/machine-learning-is-fun-part-4-modern-face-recognition-with-deep-learning-c3cffc121d78

  2. https://jivp-eurasipjournals.springeropen.com/articles/10.1186/s13640-018-0324-4

  3. https://www.youtube.com/watch?v=sz25xxF_AVE

  4. https://www.youtube.com/watch?v=wJFG-O0JpVI

Madras Scientific Research Foundation

About US

 

Contact

 

Blog

 

Internship

 

Join us 

Know How In Action 

bottom of page