Description
Video Analytics process video in real time and transform it into intelligent data. They automatically generate descriptions of what is happening in the video (metadata) and are used to detect and track objects, which can be categorized as persons, vehicles and other object types in the video stream. This information forms the basis on which to perform actions, e.g. to decide whether security staff should be notified or whether a higher quality recording stream should be used. Video Analytics turn simple IP video surveillance into business intelligence.
Video Analytics is a far more practical way to review hours of surveillance video and identify the incidents that are relevant to what you are looking for. Utilizing video analytics increases the efficiency of your security monitoring process and decreases the workload on security and management staff.
When used in conjunction with a Video Management System (VMS), you can act upon the metadata generated by Video Analytics. Surveillance automation allows you to create rules that alert security personnel to specific events of interest.
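As an illustration only, the sketch below shows how such an automation rule might react to analytics metadata. The event fields, the notify_security helper and the rule structure are hypothetical and do not correspond to the Eocortex or any VMS API; they simply model the idea of acting on a "Fall detected" event.

```python
# Hypothetical sketch of a surveillance-automation rule reacting to
# analytics metadata. Event fields and helpers are illustrative only;
# they are not part of the Eocortex API.
from dataclasses import dataclass
from datetime import datetime


@dataclass
class AnalyticsEvent:
    camera_id: str       # camera that produced the event
    event_type: str      # e.g. "Fall detected"
    timestamp: datetime  # when the event was generated


def notify_security(event: AnalyticsEvent) -> None:
    """Placeholder for an alerting channel (e-mail, SMS, VMS popup)."""
    print(f"[ALERT] {event.event_type} on {event.camera_id} at {event.timestamp}")


def handle_event(event: AnalyticsEvent) -> None:
    """A simple rule: alert staff whenever a fall is detected."""
    if event.event_type == "Fall detected":
        notify_security(event)


# Example usage with a synthetic event
handle_event(AnalyticsEvent("Lobby-1", "Fall detected", datetime.now()))
```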
Benefits
- Reduced loss, theft and vandalism
- Improved staff and visitor safety
- Very high scalability
- Remote monitoring
- Improved accessibility
- No recurring yearly main license fee
- Reduced installation cost
- Improved productivity
- Reduced manpower requirements
- Easy installation and implementation
- Integration with Video Management Software (VMS)
Fall Detection
The Fall Detection module is designed to detect people falling.
Note: the module does not detect the falling process itself. It detects a person lying down as a fallen person.
It is possible to enable the frame highlighting of the fall scene, both in live view and archive playback. When falls are detected, Fall detected events are generated and recorded in the Event log.
Note: The module operates on a neural network using a video card (GPU). It is recommended to install the Eocortex Neural Networks Special package.
Compatibility with other modules
Requires Eocortex motion detector: √
Neural network (Standard): √
Neural network (Special): √
Compatible with modules:
- Auto Zoom
- Loud sound detection
- Fire and Smoke Detection
- Face Mask Detection
- Sabotage Detection
- Crowd Monitoring
- Personnel Activity Monitoring
- Shelf Fullness Check
- Uniform Detection
- Face detection
- Abandoned object detection module
- Emergency Vehicle Detection
- Counting People in Queue
- People Counting
- Unique Visitor Counting
- Search for Objects
- Fisheye dewarping module
- Frame Area Blurring
- License Plate Recognition (Complete)
- License Plate Recognition (Light)
- Face Recognition (Light)
- Traffic density heat map
- Tracking
Incompatible with modules:
- Object Classification and Counting
Legend:
- √: supported and required for the module to work
- +: supported and provides additional features of the module
- –: not supported or not required for the module to work
- ⚠: not recommended for use with the current module
⚠Warning: This module will only work on the cameras on which it has been enabled by the administrator of the video surveillance system.
To enable the display of frames around people who have fallen, select Show colored boundaries of objects under the Fall Detection subitem in the context menu of the cell.



Configuring the Fall Detection module
⚠Warning: It is required to install the neural network package before using the module.
To use the module, enable and set up the software motion detector, then enable and set up the module itself.
Launch the Eocortex Configurator, go to the Cameras tab, select a camera in the list located on the left side of the page, and set up the motion detector on the Motion detector tab on the right side of the page.
Then switch to the Analytics tab and enable the module using the toggle.

Clicking the button opens the module setup window.

When setting up the module, select the detection frequency and the minimum and maximum dimensions of people to be detected, and set the areas in which falls will be monitored.
Increasing the detection frequency allows falls to be detected faster, including those where the person immediately stands up, but it also increases the load on the CPU and GPU.
When setting up detection areas, it is important to consider that people whose image center lies outside the specified areas will not be detected.
Sensitivity is selected for each area:
- When High is selected, it increases the chances of detecting falls, but there is also a higher probability of false alarms.
- When Medium is selected, it improves resistance to noise, but the fall detection accuracy may decrease.
The person size setting is shared by all areas; therefore, in every area, falls will only be detected for people who fit within the specified size range.
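For clarity, the sketch below models the settings described above as a data structure. The field names and validation logic are hypothetical and only mirror the parameters exposed in the setup window; the actual module is configured through the Eocortex Configurator GUI, not through code.

```python
# Hypothetical model of the Fall Detection settings described above.
# Field names are illustrative; the real module is configured in the
# Eocortex Configurator GUI.
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Tuple


class Sensitivity(Enum):
    MEDIUM = "medium"  # better noise resistance, possibly lower accuracy
    HIGH = "high"      # better chance of detecting falls, more false alarms


@dataclass
class DetectionArea:
    polygon: List[Tuple[float, float]]  # area vertices in relative frame coordinates (0..1)
    sensitivity: Sensitivity = Sensitivity.MEDIUM


@dataclass
class FallDetectionSettings:
    detection_frequency_hz: float = 1.0  # higher values react faster but load CPU/GPU more
    min_person_height: float = 0.08      # fraction of frame height (at least 8%)
    max_person_height: float = 1.0       # fraction of frame height
    areas: List[DetectionArea] = field(default_factory=list)

    def is_detectable(self, person_height: float, center: Tuple[float, float]) -> bool:
        """A person is considered only if their size fits the shared range
        and their center falls inside at least one detection area."""
        if not (self.min_person_height <= person_height <= self.max_person_height):
            return False
        return any(point_in_polygon(center, a.polygon) for a in self.areas)


def point_in_polygon(p: Tuple[float, float], poly: List[Tuple[float, float]]) -> bool:
    """Standard ray-casting test for a point inside a polygon."""
    x, y = p
    inside = False
    for i in range(len(poly)):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % len(poly)]
        if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            inside = not inside
    return inside
```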

⚠Warning: The module will start working only when the settings are applied.
Requirements and recommendations for the Fall Detection module
Hardware and software
⚠Warning: It is required to install the neural network package before using the module.
The following equipment is required to use this neural network-based module; a sketch for checking these requirements is provided after the warnings below:
- A processor that supports AVX instructions;
- An NVIDIA video card (GPU) with a compute capability index of at least 6.5 and at least 4 GB of memory; the parameters and performance of the video card must be similar to or better than those of the NVIDIA GTX 1650 Super;
- Video card driver version 460 or later;
- A swap file of at least half the total RAM size.
If the package is to be installed on a virtual machine, it may additionally be required to:
- Enable support for AVX instructions in the guest machine settings;
- Use GRID drivers for GPU virtualization.
⚠Warning:
- Eocortex must have exclusive use of the video cards selected for running neural networks. It is not allowed to use such cards for other applications or tasks that consume GPU resources, including displaying video. Simultaneous use of a video card for several tasks may lead to incorrect system operation, from analytics performance degradation to server instability.
- The neural network works with the 64-bit version of Eocortex only.
- When upgrading Eocortex to another version, it is necessary to also upgrade the Eocortex Neural Networks package to the corresponding version.
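The sketch below shows one way to check some of the hardware requirements listed above on a Linux server. It relies only on standard Linux files (/proc/cpuinfo, /proc/meminfo) and the nvidia-smi utility; it is not an Eocortex tool, and comparing the reported GPU model against the GTX 1650 Super baseline is left to the reader.

```python
# Rough pre-installation check of the hardware requirements listed above
# (Linux only). Illustrative sketch, not an Eocortex utility.
import subprocess


def cpu_supports_avx() -> bool:
    # AVX support is listed in the CPU flags.
    with open("/proc/cpuinfo") as f:
        return "avx" in f.read()


def ram_and_swap_kib() -> tuple:
    # Reads total RAM and swap size (in KiB) from /proc/meminfo.
    values = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, rest = line.split(":", 1)
            values[key] = int(rest.strip().split()[0])
    return values["MemTotal"], values["SwapTotal"]


def gpu_info() -> str:
    # Queries GPU name, total memory and driver version via nvidia-smi.
    return subprocess.check_output(
        ["nvidia-smi", "--query-gpu=name,memory.total,driver_version",
         "--format=csv,noheader"],
        text=True,
    ).strip()


if __name__ == "__main__":
    print("AVX supported:", cpu_supports_avx())
    ram, swap = ram_and_swap_kib()
    print(f"RAM: {ram // 1024} MiB, swap: {swap // 1024} MiB "
          f"(swap should be at least half of RAM)")
    print("GPU(s):", gpu_info())
```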
Video stream
- Frame frequency: at least 10 frames per second;
- The optimal resolution is HD or FullHD.
Image
- Lighting in the frame should be uniform and constant.
- If the camera is installed in front of a bright light source (the sun behind the entrance door, etc.), it is necessary to adjust the exposure (or brightness) so that the objects in the frame have a natural color (not overexposed or too dark). In this case, an overexposed background is acceptable.
- The image must be in color.
- Image quality should be at least average. There should be no significant compression artifacts.
- White balance must be adjusted correctly.
Scene and camera position
- People to be detected must be visible in the frame while standing, full-length, and not overlapped by other objects.
- The dimensions of detected people should be at least 8% of the frame height (for example, about 86 pixels in a FullHD frame 1080 pixels high).
- The frame must not contain reflective surfaces: glass, mirrors, etc.
- The camera may be placed above face level, directly facing the people to be detected. In such a case, the camera elevation angle must not exceed 35°.
Deployment of the Fall Detection module
Note: The module operates on a neural network using a video card (GPU). It is recommended to install the Eocortex Neural Networks Special package.
While installing the package, select the relevant component.

It is recommended to use a graphics card (GPU) to run the module.
