Overview

Lighting Invariant Contour Detection - Lighting Invariant Change Detection - Hyperbola Detection - Information Space Analysis - CI-based Identification - Data Categorisation - Alarming Routines - Machine Monitoring - Harmonic Analysis - DLS-Analyser - FD-Analyser - CI-Alarming Routines - Trend Analyser - Neural Based Quality Control - Sensor Fusion - Neural Based Prediction - Analysis of Biological Signals - Neural Based Identification - Acoustic Pattern Recognition - CI-based People Categorizer

IngB RT&S, the forge of bionic filter technologies

Welcome to our website!


Find out more about us and our products here, where you can see some of our applications in action!


All applications work with the same bionic filter, which models the structure of the first three layers of the retina. That means: no learning procedures, no parameter adaptation! Simply integrate the filter into your existing software and "see" lighting-invariantly.


Do you see points of contact with your own areas of responsibility?


Contact us, we are here for you!


In short, it means that edges are made visible even when an image is captured under different lighting conditions. This is shown using an example from component monitoring. As mentioned, this works without prior training of the filters used, with an image analysis time of 40 milliseconds.

Using this basic technology, the following applications can be solved directly:


Contactless, lighting-independent quality control, of course with virtual collision control for safe interaction between human and robot.

Or illumination-invariant contour and motion detection for safe interaction between robot and human under extremely fluctuating lighting conditions, achieved by defining virtual danger protection zones.

...including "follow me" functionality and "emergency shutdown" of the robot's actions...

...or, in a slightly modified version: lighting-invariant contour recognition to support autonomous driving. Left: original, right: contour-filter representation.

Technical details of the filters:

Necessary external libraries/frameworks: none

Operating system independent

Language: C


Shown Example: Quasi-3D object representation while driving a car.

Camera: Logitech; hardware used: Asus laptop

Processing time/frame: 31 ms

... or to increase image quality through online post-processing of image material recorded during AVU trips. (Film on the left: original camera material and camera material optimized by a bionic filter. Right: original camera material and a quasi-3D object representation generated by a bionic filter.)

Technical details of the filters:

Necessary external libraries/frameworks: none

Operating system independent

Language: C


Shown Example: AVU survey material in accelerated playback.

Camera: Logitech; hardware used: Asus laptop

Processing time/frame: 31 ms

...or if you want stable contours for tracking and identification under strongly fluctuating lighting conditions... (bionic filter sequence, ordinary noise filter sequence, original recording)

Technical details of the filters:

Necessary external libraries/frameworks: none

Operating system independent

Language: C


Shown Example: real-world situation representation.

Camera: Logitech; hardware used: Asus laptop

Processing time/frame: 31 ms

... or simply apply the bionic filters to your webcams or ordinary pictures, because there is more information in their images than you think. (Left: original; right: bionically transformed image.)

The overview at the top therefore shows the main areas of our bionic filter technology.

As part of a joint project, we work and develop for the EU.


© IngB RT&S GmbH 2024, all rights reserved | Imprint | Privacy | Terms of Use | Contact | Conditions