How do I configure the basic features of a plugin?

Administration Parameters

More advanced users can edit administration parameters by clicking on "Administration Parameters" and typing the password.
The password is the name of the plugin with the first three letters capitalized, the remaining letters in lower case, and the hyphens removed (for instance, for the plugin AI-Intrusion-Pro the password is AIIntrusionpro, for AI-People it is AIPeople), unless it has been changed from the default value. In this section it is possible to edit all the configuration parameters. In most cases these parameters do not need to be edited and, since editing them requires significant experience, this configuration is protected by a password. This control is not enabled on AI-Appliance, since you already need to be a system administrator to edit the configurations.
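
As an illustration of this rule, the default password can be derived from the plugin name as in the minimal Python sketch below, which only assumes what the examples above show: drop the hyphens, keep the first three letters upper case and put the rest in lower case.

    def default_admin_password(plugin_name: str) -> str:
        """Derive the default Administration Parameters password from a plugin name."""
        letters = plugin_name.replace("-", "")            # drop the hyphens
        return letters[:3].upper() + letters[3:].lower()  # first three letters capital, rest lower case

    print(default_admin_password("AI-Intrusion-Pro"))     # AIIntrusionpro
    print(default_admin_password("AI-People"))            # AIPeople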

Image Pre-processing

This filtering removes residual noise from the image to make object detection smoother and more effective. For 1 CIF images it is advisable to use a 3x3 kernel, for 4 CIF a 5x5 kernel, and for higher resolutions a 7x7 kernel. The filtering can also be disabled by choosing the setting NO.
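
The same kind of filtering can be reproduced with a standard Gaussian blur, for example in OpenCV. In this sketch the resolution thresholds (CIF = 352x288, 4 CIF = 704x576) follow the guideline above; everything else is an illustrative assumption, not the plugin's internal code.

    import cv2

    def preprocess(frame, enabled=True):
        """Gaussian pre-filtering with a kernel size chosen from the frame resolution."""
        if not enabled:                 # corresponds to the setting NO
            return frame
        h, w = frame.shape[:2]
        if w * h <= 352 * 288:          # up to 1 CIF
            k = 3
        elif w * h <= 704 * 576:        # up to 4 CIF
            k = 5
        else:                           # higher resolutions
            k = 7
        return cv2.GaussianBlur(frame, (k, k), 0)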

Modeling and update of the background

The output is an image in the YUV420 colour space representing the static part of the framed scene; it is then used to determine the dynamic part of the current frame, namely the foreground mask.
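
The plugin's own background model is not described here; purely as an illustration of the idea, a simple running-average model on the Y, U and V channels (packed rather than planar YUV420, for simplicity) could look like the following sketch. The learning rate of 0.01 is an arbitrary placeholder.

    import cv2
    import numpy as np

    class BackgroundModel:
        """Running-average estimate of the static part of the scene in YUV."""
        def __init__(self, alpha=0.01):
            self.alpha = alpha     # update speed (illustrative value)
            self.model = None      # float32 accumulator, same shape as the YUV frame

        def update(self, frame_bgr):
            yuv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YUV).astype(np.float32)
            if self.model is None:
                self.model = yuv.copy()
            else:
                cv2.accumulateWeighted(yuv, self.model, self.alpha)
            return self.model.astype(np.uint8)   # current background estimate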

Foreground mask extraction

Background subtraction using an efficient algorithm

A comparison is made between the current frame and the background image of the previous instant: if the pixel is "close" to the corresponding background pixel, then the former is not a foreground pixel; otherwise, that pixel will be white in the foreground mask.

N.B. The comparison is performed separately on each of the three YUV channels: a pixel is assigned to the foreground mask if its difference from the corresponding background pixel exceeds the configured threshold on at least one of the channels.
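
The per-channel test can be sketched with NumPy as follows; the threshold value of 30 is an arbitrary placeholder for the value configured in the plugin.

    import numpy as np

    def foreground_mask(frame_yuv, background_yuv, threshold=30):
        """White (255) where the frame differs from the background on at least one YUV channel."""
        diff = np.abs(frame_yuv.astype(np.int16) - background_yuv.astype(np.int16))
        changed = (diff > threshold).any(axis=2)     # at least one of the three channels
        return np.where(changed, 255, 0).astype(np.uint8)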

This option, enabled by default, is the most efficient and ensures good performance in most scenes.

Background subtraction using a self-learning algorithm

Update type: available both in grayscale and in colour, this option uses a continuously updated self-learning algorithm to extract the foreground mask. The grayscale version uses only the Y channel, while the colour version uses all three channels; the former is more efficient, the latter more effective, but both are less efficient than the default option. Shadow removal can only be enabled with the colour version.
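
As a point of reference only (this is not the plugin's own algorithm), OpenCV's self-learning MOG2 subtractor shows the same trade-off: running it on the Y channel alone is cheaper, while running it on the colour frame allows shadow detection.

    import cv2

    # Colour version: all channels, shadow removal available
    subtractor_colour = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

    # Grayscale version: Y channel only, faster, no shadow removal
    subtractor_gray = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

    def masks(frame_bgr):
        yuv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YUV)
        mask_colour = subtractor_colour.apply(frame_bgr)   # 255 = foreground, 127 = shadow
        mask_gray = subtractor_gray.apply(yuv[:, :, 0])    # Y channel only
        return mask_colour, mask_gray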

Speed of background update when changes occur

Tau Background Parameter: the user defines a timespan after which a change in the scene automatically becomes part of the background.
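
How the Tau Background parameter maps to the internal update rate is not documented here; as a hedged illustration only, a running-average background (as in the sketch above) could convert a time constant into a per-frame weight like this:

    import math

    def learning_rate_from_tau(tau_seconds, fps):
        """Per-frame update weight so that a stationary change has moved about 63%
        of the way into the background after tau_seconds (one time constant).
        This mapping is an assumption for illustration, not the plugin's formula."""
        return 1.0 - math.exp(-1.0 / (tau_seconds * fps))

    alpha = learning_rate_from_tau(30, 25)   # e.g. tau = 30 s at 25 fps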

Post-processing with morphological operators

Applies three morphological operators consecutively (if enabled): erosion, dilation, and another erosion. The first erosion removes spurious white pixels caused by noise, the dilation fills holes and strengthens the connection between poorly connected regions of the image, and the last erosion recovers the original size of the objects. It is possible to choose the shape of the kernel to be used (rectangular, diamond, octagon, disk), as well as its dimensions in terms of width and length (rectangular) or radius (diamond, octagon, disk).
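
The erosion-dilation-erosion chain can be reproduced with OpenCV. Note that OpenCV only ships rectangular, elliptical (disk-like) and cross-shaped structuring elements, so the diamond and octagon kernels are approximated here by the ellipse; kernel shape and size are placeholders for the values configured in the plugin.

    import cv2

    def postprocess(mask, shape=cv2.MORPH_ELLIPSE, size=5):
        """Erosion -> dilation -> erosion on a binary foreground mask."""
        kernel = cv2.getStructuringElement(shape, (size, size))
        cleaned = cv2.erode(mask, kernel)      # remove spurious white pixels
        cleaned = cv2.dilate(cleaned, kernel)  # fill holes, reconnect regions
        return cv2.erode(cleaned, kernel)      # per the description above, recover the object size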

Dimension-based filtering

Removal of extremely small, extremely big, or oddly-shaped blobs based on their dimensions in pixels.

Pixel dimensions

The user can define minimum and maximum values of height and width of a blob by clicking on the corresponding blue pencil and simply drawing a couple of rectangles on the image.

Aspect ratio

Distinguishes, for instance, people from cars. The user can set the minimum and maximum value of the height/width ratio.
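
Both the pixel-dimension filter and the aspect-ratio filter amount to simple checks on a blob's bounding box. The sketch below is illustrative; the limit values are placeholders for what the user draws or types in the interface.

    import cv2

    def filter_blobs(mask, min_w=10, max_w=200, min_h=20, max_h=400,
                     min_ratio=1.5, max_ratio=5.0):
        """Keep only blobs whose bounding box passes the dimension and aspect-ratio filters."""
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        kept = []
        for c in contours:
            x, y, w, h = cv2.boundingRect(c)
            if not (min_w <= w <= max_w and min_h <= h <= max_h):
                continue                                 # too small or too big in pixels
            if not (min_ratio <= h / float(w) <= max_ratio):
                continue                                 # wrong shape (e.g. a car when looking for people)
            kept.append((x, y, w, h))
        return kept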

Real dimensions

The user can define the minimum and the maximum real height of the blob.
WARNING: to enable this filtering you must first calibrate the camera and the algorithm, so as to compute the mapping between the real size of an object and its dimensions in pixels.

Camera calibration

In the drop-down menu on the left side, select the tab "Algorithm Parameters" and then "Calibration". This step allows you to set the camera parameters, which are provided by the manufacturer.

The values to set are:

  • Camera height: height in meters of the camera with respect to the ground.
  • Horizontal angle: the horizontal angle of view of the camera, in degrees. For fixed-focal cameras it is available in the datasheet, while for varifocal cameras it must be calculated (see the sketch after this list).
  • Vertical angle: the vertical angle of view of the camera, in degrees. For fixed-focal cameras it is available in the datasheet, while for varifocal cameras it must be calculated.
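
For varifocal cameras the angles of view can be computed from the sensor size and the focal length currently in use (both normally listed in the datasheet) with the standard pinhole relation. The sensor and focal values below are illustrative.

    import math

    def angle_of_view_deg(sensor_size_mm, focal_length_mm):
        """Angle of view from sensor dimension and focal length (pinhole model)."""
        return math.degrees(2.0 * math.atan(sensor_size_mm / (2.0 * focal_length_mm)))

    # Example: a sensor of about 5.0 x 2.8 mm with the lens set to 4 mm
    horizontal_angle = angle_of_view_deg(5.0, 4.0)   # ~64 degrees
    vertical_angle = angle_of_view_deg(2.8, 4.0)     # ~39 degrees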

Algorithm calibration

In the drop-down menu on the left side, select the tab "Algorithm Parameters" and then "Algorithm Calibration". This step allows you to collect samples to train the algorithm, so as to compute the mapping between the real size of an object and its dimensions in pixels.

The values to set are:

  • Rotation (degrees): Camera inclination in degrees with respect to the horizontal plane.
  • Training samples: when selecting Add Element, ask a person of known height to move to different positions within the scene and at different distances from the camera, and draw a rectangle around them each time the person stops. For an accurate calibration it is advisable to collect at least ten samples (see the sketch after this list).
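
The plugin performs this training internally. Purely as an illustration of what such a mapping can look like, the sketch below fits a linear relation between the bottom y-coordinate of the drawn rectangle and the pixel height of the reference person, then converts a new blob's pixel height into an estimated real height; the sample values and the linear model are assumptions, not the plugin's method.

    import numpy as np

    # Hypothetical samples: (bottom_y_in_pixels, rectangle_height_in_pixels)
    # collected around a person whose real height is known.
    samples = [(400, 180), (300, 120), (220, 80), (160, 55)]
    person_height_m = 1.75

    ys = np.array([s[0] for s in samples], dtype=float)
    hs = np.array([s[1] for s in samples], dtype=float)
    a, b = np.polyfit(ys, hs, 1)   # expected pixel height of the reference person vs. image row

    def estimated_real_height(bottom_y, pixel_height):
        """Estimate a blob's real height by comparison with the reference person at the same row."""
        reference_pixels = a * bottom_y + b
        return person_height_m * pixel_height / reference_pixels

    # The real-dimension filter described earlier then keeps only blobs whose
    # estimated real height lies between the configured minimum and maximum.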

Object Tracking

In the drop-down menu on the left side, select the tab "Administration Parameters" and then "Tracking".

The goal is to find the correspondence between the object detected in the previous frame (t-1) and the blob detected in the current frame (t), thus solving occlusion-related problems (for instance, an object temporarily hidden behind a tree).

The values to set are:

  • Maximum radius: maximum movement of an object between two consecutive frames (in pixels).
  • Maximum ghost time (ms): maximum time (in milliseconds) for which a detected object can keep the status of ghost, i.e. be stored and retrieved in case of occlusion (for instance, when hidden behind an obstacle). See the sketch after this list.
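
A greatly simplified sketch of how these two parameters could drive a nearest-neighbour tracker follows; it is not the plugin's actual tracking algorithm, and the default values are placeholders.

    import math
    from itertools import count

    _track_ids = count()

    class Track:
        def __init__(self, position, timestamp_ms):
            self.id = next(_track_ids)
            self.position = position          # (x, y) centre of the blob, in pixels
            self.last_seen_ms = timestamp_ms  # used for the ghost timeout

    def update_tracks(tracks, detections, timestamp_ms,
                      max_radius=50, max_ghost_time_ms=2000):
        """Match each detection to the nearest track within max_radius (pixels);
        unmatched tracks survive as ghosts until max_ghost_time_ms expires."""
        for det in detections:
            best, best_dist = None, max_radius
            for t in tracks:
                d = math.dist(det, t.position)
                if d <= best_dist:
                    best, best_dist = t, d
            if best is not None:
                best.position, best.last_seen_ms = det, timestamp_ms   # same object, moved
            else:
                tracks.append(Track(det, timestamp_ms))                # new object enters the scene
        # Keep ghosts only while the occlusion is shorter than the maximum ghost time.
        return [t for t in tracks if timestamp_ms - t.last_seen_ms <= max_ghost_time_ms]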