Duplicate songs detector via audio fingerprinting
This demonstration shows an efficient signal-processing algorithm that can serve as the basis of a practical system for audio fingerprinting and signal recognition.
As an example, consider an audio signal Ψ1 that you would like to compare to another signal Ψ2 in order to see whether both come from the same song or audio object. A person could cope with this assignment easily, but computers, unfortunately, are not that intuitively "smart". The difficulty lies in the fact that the two signals may be stored in different digitized formats, making their binary signatures completely different and rendering a byte-by-byte comparison useless. The dissimilarity may also arise from differences in the internal characteristics of the same audio format (bit rate, sampling rate, number of channels (mono, stereo, etc.)). Even if you convert the files to some predefined specification (e.g., 44100 Hz, stereo, WAVE PCM format), you may still end up with different binary representations because of time misalignment, noise, distortion, or different loudness levels for the same song.
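The idea can be illustrated with a minimal Python sketch (this is not the algorithm used by the demonstration itself; the band count, threshold scheme, and signal values are all illustrative assumptions): a coarse spectral fingerprint stays stable under loudness changes and mild noise, even though the raw samples no longer match.

```python
import math
import random

def band_energies(samples, n_bands=8):
    """Naive DFT magnitudes grouped into coarse frequency bands."""
    n = len(samples)
    mags = []
    for k in range(1, n // 2):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = sum(-s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        mags.append(math.hypot(re, im))
    band = len(mags) // n_bands
    return [sum(mags[b * band:(b + 1) * band]) for b in range(n_bands)]

def fingerprint(samples, n_bands=8):
    """One bit per band: is the band's energy above the average band energy?"""
    e = band_energies(samples, n_bands)
    mean = sum(e) / len(e)
    return [1 if x > mean else 0 for x in e]

random.seed(1)
n = 256
# A toy "song": two sinusoidal components.
clean = [math.sin(2 * math.pi * 5 * i / n) + 0.5 * math.sin(2 * math.pi * 40 * i / n)
         for i in range(n)]
# The same "song", played louder and with mild noise: samples no longer match.
loud = [2.0 * s + 0.01 * random.uniform(-1, 1) for s in clean]

print(clean == loud)                            # sample-by-sample comparison fails
print(fingerprint(clean) == fingerprint(loud))  # the coarse fingerprint still matches
```

The threshold-against-mean trick is what buys robustness here: scaling every sample scales every band energy by the same factor, so the bit pattern is unchanged.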
3D Pose Estimation
This sample application demonstrates usage of the POSIT and Coplanar POSIT algorithms for 3D pose estimation. The application renders an artificial object and allows the user to rotate and move it; the projected points are then used to estimate the object's pose. The application provides both the object's original transformation matrix and the estimated one, so the user can compare them. The sample is mostly aimed at testing and understanding the pose estimation algorithms.
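As a minimal Python sketch of the forward model that pose estimation inverts (this is only the projection step, not the POSIT algorithm itself; the model points, angle, translation, and focal length are illustrative assumptions): given a rotation and translation, model points are projected onto the image plane, and POSIT-style algorithms recover the rotation and translation back from such point correspondences.

```python
import math

def rot_y(a):
    """Rotation matrix about the Y axis, angle in radians."""
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def project(model_points, R, t, focal):
    """Perspective projection: x = f*X/Z, y = f*Y/Z after the rigid transform."""
    image = []
    for p in model_points:
        X = sum(R[0][j] * p[j] for j in range(3)) + t[0]
        Y = sum(R[1][j] * p[j] for j in range(3)) + t[1]
        Z = sum(R[2][j] * p[j] for j in range(3)) + t[2]
        image.append((focal * X / Z, focal * Y / Z))
    return image

# Corners of a unit square model (the coplanar case handled by Coplanar POSIT).
model = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
pts = project(model, rot_y(math.radians(30)), t=(0, 0, 5), focal=500)
print(pts)
```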
3D Pose Estimation (2)
This sample application also demonstrates usage of the POSIT and Coplanar POSIT algorithms for 3D pose estimation; however, it estimates the pose of a real object shown in a picture. The application allows the user to open an image file, select the image points of the object whose pose should be estimated, specify the model coordinates of those points, and then estimate the object's pose. When estimation is done, the application renders an X/Y/Z coordinate system using the estimated rotation and position, which ideally should match the object shown in the picture. The application contains several built-in samples to demonstrate how it works.
Fuzzy Auto Guided Vehicle Sample
This sample application demonstrates a classic robotics task: the navigation of an auto guided vehicle. An artificial vehicle travels through an artificial environment consisting of obstacles/walls and driving space. Collecting information from its three sensors (distance to the nearest obstacle on the left, on the right, and in front of the vehicle), the robot must decide how to correct its movement, i.e., which angle to rotate by. All of the robot's logic is represented by fuzzy rules: once the sensor values are provided as inputs to the system, the fuzzy rules are evaluated to obtain the steering correction angle.
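The rule-evaluation idea can be sketched in a few lines of Python (a deliberately tiny, Sugeno-style stand-in, not the rule base used by the sample; the membership function, distances, and angles are illustrative assumptions): each rule fires with a strength given by its membership value, and the output angle is the strength-weighted average of the rule conclusions.

```python
def near(d, lo=0.0, hi=50.0):
    """Membership of 'near': 1 at distance 0, falling linearly to 0 at hi."""
    return max(0.0, min(1.0, (hi - d) / (hi - lo)))

def steering(left, front, right):
    """Tiny fuzzy rule base; returns a correction angle in degrees."""
    rules = [
        (near(left),  +30.0),   # obstacle close on the left  -> turn right
        (near(right), -30.0),   # obstacle close on the right -> turn left
        (near(front), +45.0),   # wall ahead -> make a sharp turn
    ]
    weight = sum(w for w, _ in rules)
    if weight == 0:
        return 0.0              # everything far away: keep going straight
    return sum(w * a for w, a in rules) / weight

print(steering(10, 100, 100))   # wall on the left only -> turn right
print(steering(100, 100, 100))  # open space -> no correction
```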
This sample demonstrates genetic computation and introduces Genetic Programming (GP) and Gene Expression Programming (GEP). Using both GP and GEP, the sample application tries to build an algebraic expression that approximates a given function specified as data points. For the approximation task, the application allows the user to choose the function set to use: simple arithmetic operations only, or an extended set with additional functions. While the algorithm runs, the application updates a graph showing the current solution, so it can be seen how well the found expression fits the given data points.
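The core ingredients of this task can be sketched in Python (a crude random search over expression trees, standing in for the sample's actual GP/GEP evolution; the tree encoding, function set, and target data are illustrative assumptions): candidate expressions are trees, fitness is the error over the data points, and the search keeps the best expression found.

```python
import random

# Candidate expressions: nested tuples ('+', left, right), the variable 'x', or a constant.
OPS = {'+': lambda a, b: a + b, '-': lambda a, b: a - b, '*': lambda a, b: a * b}

def evaluate(expr, x):
    if expr == 'x':
        return x
    if isinstance(expr, (int, float)):
        return expr
    op, a, b = expr
    return OPS[op](evaluate(a, x), evaluate(b, x))

def fitness(expr, points):
    """Sum of squared errors over the given data points (lower is better)."""
    return sum((evaluate(expr, x) - y) ** 2 for x, y in points)

def random_expr(depth=3, rng=random):
    """Grow a random expression tree of bounded depth."""
    if depth == 0 or rng.random() < 0.3:
        return 'x' if rng.random() < 0.5 else rng.uniform(-5, 5)
    op = rng.choice(list(OPS))
    return (op, random_expr(depth - 1, rng), random_expr(depth - 1, rng))

# Target function y = x*x + 1, given only as data points.
points = [(x, x * x + 1) for x in range(-5, 6)]
rng = random.Random(0)
best = min((random_expr(rng=rng) for _ in range(5000)),
           key=lambda e: fitness(e, points))
print(fitness(best, points))
```

Real GP and GEP improve on this blind search by breeding good expressions together (crossover and mutation) rather than sampling independently.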
This sample application demonstrates capturing video and depth data from a Microsoft Kinect sensor. It also allows controlling the sensor's LED and motor, and provides access to accelerometer data.
The sample application demonstrates usage of the Hough line and circle transformations, which may be applied to detect straight lines and circles of a given radius.
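A minimal Python sketch of the line transform's voting idea (not the sample's implementation; the image size, angle resolution, and test points are illustrative assumptions): each edge point votes for every line (theta, rho) that could pass through it, and collinear points pile their votes into the same accumulator cell.

```python
import math

def hough_lines(points, n_theta=180):
    """Vote in (theta, rho) space; each edge point votes for all lines through it."""
    acc = {}
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            acc[(t, rho)] = acc.get((t, rho), 0) + 1
    return max(acc, key=acc.get)   # strongest line: (theta index, rho)

# Edge pixels all lying on the vertical line x = 3.
pts = [(3, y) for y in range(10)]
t, rho = hough_lines(pts)
print(t, rho)
```

The circle transform works the same way, except each edge point votes for circle centres at the given radius instead of for (theta, rho) pairs.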
The sample application demonstrates different image processing filters and their application to an image. It covers filters from many different areas, such as color filtering, correction of color levels, convolution filters, edge detection filters, binarization filters, etc. The sample serves as a good demonstration of the usage of different image processing filters.
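One of the filter families mentioned above, convolution, can be sketched in plain Python (an illustrative toy, not the sample's filter code; the kernel, image, and "valid region only" handling are assumptions): each output pixel is a weighted sum of its 3x3 neighbourhood.

```python
def convolve(image, kernel):
    """Apply a 3x3 kernel to a grayscale image (2D list), valid region only."""
    h, w = len(image), len(image[0])
    out = []
    for y in range(1, h - 1):
        row = []
        for x in range(1, w - 1):
            acc = sum(image[y + i][x + j] * kernel[i + 1][j + 1]
                      for i in (-1, 0, 1) for j in (-1, 0, 1))
            row.append(acc)
        out.append(row)
    return out

# A vertical edge: dark on the left, bright on the right.
img = [[0, 0, 255, 255] for _ in range(4)]
sobel_x = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
edges = convolve(img, sobel_x)
print(edges)   # strong response everywhere the edge crosses the kernel
```

With a horizontal-edge kernel the same image would give zero response, which is exactly how edge detection filters separate edge orientations.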
This sample application demonstrates finding all separate objects in a specified image. It locates each individual object, provides its properties, and provides either the convex hull of each object or its quadrilateral corners (if the object really is a quadrilateral, its corners should be found with good precision).
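Finding separate objects usually starts with connected-component labelling, sketched here in Python (an illustrative flood-fill version, not the sample's code; 4-connectivity and the test image are assumptions): pixels that touch each other get the same label, and each label is one object.

```python
def label_objects(image):
    """Label 4-connected foreground regions; returns the count and a label map."""
    h, w = len(image), len(image[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for sy in range(h):
        for sx in range(w):
            if image[sy][sx] and not labels[sy][sx]:
                current += 1                    # start a new object
                labels[sy][sx] = current
                stack = [(sy, sx)]
                while stack:                    # flood-fill the whole region
                    y, x = stack.pop()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and image[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = current
                            stack.append((ny, nx))
    return current, labels

# Two separate blobs in a tiny binary image.
img = [
    [1, 1, 0, 0],
    [1, 0, 0, 1],
    [0, 0, 1, 1],
    [0, 0, 0, 0],
]
count, labels = label_objects(img)
print(count)
```

Once each object's pixels are labelled, per-object properties and convex hulls can be computed from the pixel sets.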
This sample demonstrates detecting and checking some simple geometrical shapes. The sample application uses a few demo images (generated and real) and recognizes the shapes in them.
Delta Rule Learning
This sample is similar to the one above: it also classifies linearly separable data into several classes, which means that it also demonstrates a layer of neurons. This time, however, the neurons have a continuous activation function instead of a threshold function, which enables the use of a new learning algorithm known as the delta rule.
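The delta rule for a single sigmoid neuron can be sketched in Python (an illustrative toy on the AND function, not the sample's code; the learning rate and epoch count are assumptions): the weight update is proportional to the output error times the derivative of the activation function.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_delta(samples, epochs=2000, rate=1.0):
    """Delta rule: w += rate * (target - output) * f'(net) * input."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            net = sum(wi * xi for wi, xi in zip(w, inputs)) + b
            out = sigmoid(net)
            delta = (target - out) * out * (1.0 - out)   # out*(1-out) is sigmoid'(net)
            w = [wi + rate * delta * xi for wi, xi in zip(w, inputs)]
            b += rate * delta
    return w, b

# A linearly separable problem: the AND function.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_delta(data)

def predict(x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
```

With a threshold activation this derivative term would not exist, which is exactly why the continuous activation function is needed for delta rule learning.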
This sample application demonstrates usage of the API to control the Lego Mindstorms RCX robotics kit. With the sample application it is possible to connect to an RCX brick (a USB IR tower is required), read its sensor values, control motors, get information about the device, and play simple sounds on it.
The sample application is very similar to the one above, but demonstrates usage of the API to control the Lego Mindstorms NXT robotics kit. The application provides the same functionality: connecting to an NXT brick (over Bluetooth), checking its sensor values, controlling motors, etc.
This application also demonstrates controlling the Surveyor Stereo Vision System board. The application allows receiving video feeds from both cameras, showing stereo anaglyph images, and manipulating the robot by driving it using predefined commands or direct motor control.