Traffic Sign Detection
Traffic sign detection is a crucial component of an autonomous vehicle navigation system. For an automobile to navigate safely in an urban environment, it must be able to understand traffic signs:
- It should be able to read speed limits, so that it does not receive tickets for speeding and pay a premium on its insurance
- It should be able to read traffic lights and stop on red
- It should be able to read stop signs and yield to other vehicles crossing the same intersection.
This demonstration aims to solve a small part of an autonomous vehicle navigation system: detecting stop signs in images captured by a camera.
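The demonstration's actual detection pipeline is not reproduced here, but as a rough, framework-free sketch of a common first stage, one can threshold for the sign's dominant red color and take the bounding box of matching pixels as a stop-sign candidate (thresholds and helper names below are illustrative assumptions; a real detector would also verify the octagonal shape and the "STOP" text):

```python
# Minimal sketch of color-based stop-sign candidate detection.
# NOTE: illustrative only -- not the sample's actual pipeline.

def is_red(pixel, r_min=150, dominance=60):
    """A pixel counts as 'stop-sign red' if its red channel is high
    and clearly dominates both green and blue."""
    r, g, b = pixel
    return r >= r_min and r - g >= dominance and r - b >= dominance

def find_red_region(image):
    """Return the bounding box (x0, y0, x1, y1) of red pixels, or None."""
    coords = [(x, y)
              for y, row in enumerate(image)
              for x, px in enumerate(row)
              if is_red(px)]
    if not coords:
        return None
    xs = [x for x, _ in coords]
    ys = [y for _, y in coords]
    return (min(xs), min(ys), max(xs), max(ys))

# Tiny synthetic 4x4 RGB frame with a 2x2 red patch in the lower-right.
frame = [[(0, 0, 0)] * 4 for _ in range(4)]
for y in (2, 3):
    for x in (2, 3):
        frame[y][x] = (200, 20, 20)

print(find_red_region(frame))  # (2, 2, 3, 3)
```

The bounding box would then be passed to a shape or template check before the region is reported as a stop sign.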
Multiple face detection and recognition in real time
Facial recognition is a problem that has been studied extensively around the world; it arises in many fields and sciences, especially computer science, and other fields with strong interest in the technology include mechatronics, robotics, criminalistics, etc. The main goal of this demonstration is to show a real-time face detector and recognizer for multiple people, using Principal Component Analysis (PCA) with eigenfaces, which can be applied in many fields.
An example of EigenFaces:
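The eigenface technique itself is standard PCA over vectorized face images: subtract the mean face, find the principal axes of the centered data, and represent each face by its coordinates along those axes. A minimal sketch with NumPy (random vectors stand in for real face images; this is not the demonstration's actual code or API):

```python
import numpy as np

def compute_eigenfaces(faces, n_components):
    """Eigenfaces via PCA: subtract the mean face, then take the top
    right-singular vectors of the centered data matrix.
    `faces` is an (n_samples, n_pixels) array of flattened images."""
    mean_face = faces.mean(axis=0)
    centered = faces - mean_face
    # SVD of the centered data; rows of vt are the principal axes.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean_face, vt[:n_components]

def project(face, mean_face, eigenfaces):
    """Represent a face by its coordinates in eigenface space."""
    return eigenfaces @ (face - mean_face)

# Toy example: 5 synthetic 'faces' of 16 pixels each.
rng = np.random.default_rng(0)
faces = rng.normal(size=(5, 16))
mean_face, eig = compute_eigenfaces(faces, n_components=3)
weights = project(faces[0], mean_face, eig)
print(weights.shape)  # (3,)
```

Recognition then reduces to comparing weight vectors, e.g. assigning a new face to the person whose stored weights are nearest.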
Sample application demonstrating how to use image processing (based on face tracking) to provide joystick-like controls for a Windows application.
Hands Gesture Recognition
This sample demonstration uses motion detection as its first step and then performs an interesting routine on the detected object: hands gesture recognition. Suppose we have a camera monitoring some area. When somebody enters the area and makes hand gestures in front of the camera, the application should detect the type of gesture and, for example, raise an event. When a gesture is recognized, the application may perform different actions depending on the gesture type; for instance, a gesture recognition application may control a device or another application by sending it different commands depending on the recognized gesture. What type of hand gestures are we talking about? This particular application recognizes up to 15 gestures, which are combinations of 4 different positions of 2 hands - each hand is either not raised, raised diagonally down, raised diagonally up, or raised straight.
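The 15-gesture count follows from simple combinatorics: each of the two hands takes one of 4 positions, giving 4 × 4 = 16 combinations, minus the single combination where neither hand is raised. A hypothetical encoding (position names are illustrative, not the sample's API) makes this concrete:

```python
from itertools import product

# Hypothetical labels mirroring the four positions described above.
POSITIONS = ["not_raised", "diagonal_down", "diagonal_up", "straight"]

def enumerate_gestures():
    """All (left, right) hand-position pairs except both hands down."""
    return [(left, right)
            for left, right in product(POSITIONS, POSITIONS)
            if not (left == "not_raised" and right == "not_raised")]

gestures = enumerate_gestures()
print(len(gestures))  # 15
```

A recognizer only has to classify each hand into one of the four positions; the pair then indexes directly into this gesture table.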
Demonstrates performing image classification using the Bag of Visual Words (BoW) model with SURF features and the Binary Split algorithm.
The BoW model is used to transform the many SURF feature points in an image into a single, fixed-length feature vector. The feature vector is then used to train a Support Vector Machine (SVM) using a variety of kernels.
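Concretely, the BoW step assigns each local descriptor to its nearest codeword in a learned vocabulary and counts the assignments, producing one histogram per image regardless of how many feature points the image had. A minimal sketch (toy 2-D descriptors and a hand-made codebook stand in for real SURF features and the Binary Split clustering):

```python
def nearest(codebook, descriptor):
    """Index of the codeword closest to the descriptor (Euclidean)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(codebook)),
               key=lambda i: dist2(codebook[i], descriptor))

def bow_histogram(codebook, descriptors):
    """Fixed-length histogram: how many descriptors hit each codeword."""
    hist = [0] * len(codebook)
    for d in descriptors:
        hist[nearest(codebook, d)] += 1
    return hist

# Toy vocabulary of 3 codewords and 4 image descriptors in 2-D.
codebook = [(0.0, 0.0), (1.0, 1.0), (5.0, 5.0)]
descriptors = [(0.1, 0.0), (0.9, 1.1), (4.8, 5.2), (5.1, 4.9)]
print(bow_histogram(codebook, descriptors))  # [1, 1, 2]
```

The resulting fixed-length histograms are what the SVM is trained on; images with different numbers of keypoints all map to vectors of the same dimension.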
Time Series Prediction
This demonstration tries to solve yet another task with Genetic Programming and Gene Expression Programming. For a given time series, it tries to build an algebraic expression which calculates the next time series value from known past values. Once a good expression is found during the training phase, it may be used to predict future data points from the last known values.
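As an illustration of the prediction phase only: suppose the evolutionary search had produced the expression x[t] = 2·x[t-1] - x[t-2] (a made-up example amounting to linear extrapolation; GP/GEP would discover such a formula automatically). Applying it recursively, feeding each prediction back in as a known value, extends the series into the future:

```python
def predict(series, expression, steps):
    """Extend `series` by repeatedly applying `expression` to the
    window of last known values, feeding predictions back in."""
    values = list(series)
    for _ in range(steps):
        values.append(expression(values))
    return values[len(series):]

# Hypothetical evolved expression: next = 2*x[t-1] - x[t-2]
# (exact for a linear trend; not taken from the actual demonstration).
expr = lambda v: 2 * v[-1] - v[-2]

history = [1.0, 2.0, 3.0, 4.0]
print(predict(history, expr, steps=3))  # [5.0, 6.0, 7.0]
```

Note that errors compound in this recursive scheme, which is why the quality of the evolved expression on held-out data matters more than its fit to the training window.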