AXIS Object Analytics
User Manual
Solution overview
About the application
AXIS Object Analytics detects and classifies moving objects, specifically
humans or vehicles. You can choose different conditions for detection, such as
objects that move or stay longer than a set time (BETA) within a predefined area
or that cross a defined line. Upon detection, Axis network devices or third-
party software can perform different actions, such as recording video, playing
an audio message, or alerting security staff.
Considerations
For best results, the camera must be appropriately mounted, and there are requirements on the scene, image and objects.
Figure: an appropriately mounted camera, with callouts for mounting height, tilt, detection area, minimum detection distance, and maximum detection distance.
The following considerations apply to the mounting of all camera types:
Mounting position
If you mount the camera so that it looks down from directly above, it’s difficult for the application to classify objects.
Tilt
The camera must be sufficiently oriented towards the ground so that the center of the image is below the horizon. Mount the camera so that the minimum detection distance is longer than half of the camera’s mounting height (minimum detection distance > camera mounting height / 2).
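As a quick, illustrative check of this rule of thumb (the numbers are examples, not product values):

```python
# Illustrative check of the tilt guideline: the minimum detection distance
# must exceed half the camera's mounting height. Example numbers only.
mounting_height_m = 3.0          # camera mounted 3 m above the ground
min_detection_distance_m = 2.0   # closest point where detection is needed
assert min_detection_distance_m > mounting_height_m / 2  # 2.0 m > 1.5 m: OK
```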
Detection area
An object’s point of detection must be inside the detection area. The point of detection for a human is at its feet, and for a vehicle at its center.
Maximum detection distance
The maximum detection distance depends on:
- Camera type and model
- Camera lens. A longer focal length allows for a longer detection distance.
- Weather. For example, heavy rain or snow can affect the detection distance and accuracy.
- Light. Detection accuracy and range can be affected by insufficient illumination.
- Camera load
We recommend that you use AXIS Site Designer to determine the maximum detection distance for different camera models at your site.
Roll
The camera’s roll angle must be nearly equal to zero, which means that the image should be level with the horizon.
Field of view
The camera’s field of view must be fixed.
Vibrations
The application tolerates small camera vibrations, but you get the best performance when the camera is not subject to vibrations.
Object size
For a human to be detected, the minimum height is 4% of the total image height. For a vehicle, the minimum height is 3% of the total image height. However, this requires perfect image conditions and no obstructions to the view. To minimize the risk of missed detections, we recommend a height of at least 8% for humans and 6% for vehicles.
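To translate these percentages into pixels for a given stream height, a minimal sketch (the helper and its thresholds simply restate the guideline above):

```python
def min_pixel_height(image_height_px: int, object_type: str, safe: bool = True) -> int:
    """Minimum object height in pixels, per the guideline above.

    Uses the recommended 8% (humans) / 6% (vehicles) by default, or the
    absolute minimums of 4% / 3% when safe=False.
    """
    fractions = {"human": (0.04, 0.08), "vehicle": (0.03, 0.06)}
    minimum, recommended = fractions[object_type]
    return round(image_height_px * (recommended if safe else minimum))

# On a 1080 px tall image, a human should be at least ~86 px tall for
# reliable detection (43 px absolute minimum under perfect conditions).
print(min_pixel_height(1080, "human"))              # 86
print(min_pixel_height(1080, "human", safe=False))  # 43
```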
Object visibility
Detection accuracy can be affected:
- if objects are only partially visible due to, for example, foliage. It’s particularly important that characteristic features, such as legs or wheels, are visible.
- when the scene is crowded with objects, for example in a traffic congestion.
- when you use the time in area (BETA) functionality and objects frequently overlap each other.
Contrast
There needs to be a certain level of contrast between objects and the background. Fog, direct light shining on the camera, or an overly noisy image can cause contrast issues. You can increase the level of illumination and adjust the image settings to improve the level of contrast.
- When you use a day-and-night camera with artificial lighting, we recommend at least 50 lux in the entire detection area.
- When you use built-in IR lighting, the maximum detection distance depends on the camera and the environment.
Expected movement of objects in the scene
Objects that approach the camera in a straight line need to move for a longer time before they get detected compared to objects that move perpendicular to the camera’s field of view.
Human pose
Humans need to move in a somewhat upright position.
Object motion
Objects need to move within the scene for at least 2 seconds.
Recommended image settings
Before you start to use the application, we recommend that you turn on Forensic WDR and barrel distortion correction, if they are available for your camera.
Barrel distortion is a lens effect where straight lines appear increasingly bent closer to the edges of the frame.
Conditions where detections can be delayed or missed
- Fog
- Direct light shining on the camera
- Inadequate light
- Overly noisy image
Situations that can trigger false alarms
- Partially hidden people or vehicles. For example, a small van that appears from behind a wall can look like a person since the vehicle is high and narrow.
- Insects on the camera lens. Note that day-and-night cameras with IR illumination attract insects and spiders.
- A combination of car headlights and heavy rain.
- Human-size animals.
- Strong light causing shadows.
Get started
- Log in to the product’s webpage as an administrator and go to Settings > Apps > AXIS Object Analytics.
- Select the application.
- Start the application and click Open.
- In the welcome screen, click Step-by-step to follow the recommended setup procedure.
- In step 1, read through the considerations.
- In step 2, select if you want the application to detect humans, vehicles, or both. Read more about Classification of objects.
- Select if you want the application to detect when objects move or stay for longer than a set time (BETA) inside a defined area, or when they cross a defined line. If you want to learn more, see Object in area on page 13 and Line crossing.
- For PTZ cameras, you can choose to restrict detection to a specific preset position. Select it from the list.
- Adjust the default line that objects need to cross, or the area where objects need to move. To find out how to adjust the default line or include area, see Adjust virtual line or area on page 9.
- In step 3, verify your settings.
You have now created one scenario. To rename or modify it, click Open.
To create more scenarios, click +.
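If you prefer to verify from a script that the application is installed and running, a minimal sketch over VAPIX (the camera’s HTTP API) follows. The IP address, credentials, and the package name objectanalytics are assumptions; check them against your device and the VAPIX Application API documentation:

```python
import requests
from requests.auth import HTTPDigestAuth

CAMERA = "192.168.0.90"                  # assumed camera address
AUTH = HTTPDigestAuth("root", "secret")  # assumed credentials

# list.cgi returns XML with one <application> element per installed app.
resp = requests.get(f"http://{CAMERA}/axis-cgi/applications/list.cgi", auth=AUTH)
resp.raise_for_status()

# Crude string check; a real script would parse the XML properly.
if 'Name="objectanalytics"' in resp.text and 'Status="Running"' in resp.text:
    print("AXIS Object Analytics is running")
else:
    print("Application not found or not running")
```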
To watch these videos, go to the web version of this document: www.axis.com/products/online-manual/62075#t10056465
- Create scenario: object in area
- Create scenario: time in area
- Create scenario: line crossing
Adjust virtual line or area
- To reshape a virtual line or area, click and drag one of the anchor points.
- To move a virtual line or area, click and drag.
- To remove a corner, right-click the corner.
Virtual line
- To reset the virtual line to its default size, click Scene > Reset line.
- To change the direction that objects should move to be detected, click Scene > Change trigger direction. The red arrows next to the line show the current direction. Actions trigger when objects cross the line in the direction of the arrows.
Area
- To reset the include area to its default size, click Scene > .
- To create an area inside the include area where you don’t want objects to be detected, click Scene > Add exclude area.
Configure the application
Modify a scenario
To modify a scenario, click Scenarios and click Open in the scenario card.
- To rename the scenario, click .
- To change what type of objects to detect, click Triggering objects.
Note
If you select Any motion, the application doesn’t classify objects. Instead, the application detects any object that moves in the scene. It can, for example, be animals, swaying foliage, flags, or shadows. To ignore small objects or objects that only appear for a short time, you can use filters. For more information, see Filters.
- In an Object in area scenario: To allow objects to stay inside the include area for a certain time before the application triggers an action, click Triggering objects and turn on Time in area (BETA). Set the allowed time.
- To adjust the virtual line or area, click Scene.
Calibrate perspective
If the scene has a significant depth, you need to calibrate the perspective to remove false alarms due to small objects. During calibration, the application compares the height of the objects as they appear in the image with the actual heights of the corresponding physical objects. The application uses the calibrated perspective to calculate the object size.
Place vertical bars in the image to calibrate perspective. The bars represent physical objects at different distances from the camera.
- Go to Settings > Advanced > Perspective and click +.
- In the live view, choose two objects of the same, known height, that are located on the ground and at different distances from the camera. You can use, for example, fence poles or a human.
- Place the bars by the objects and adjust the length of each bar to the height of the object.
- Select the scenarios you want to apply the perspective to.
- Enter the height of the objects in Perspective bar height.
- Click Save.
Example
If there is a fence with 2-meter-high poles extending from the camera towards the horizon, position the bars at the fence poles, adjust their lengths, and enter 200 cm (6 ft 7 in) in the fields.
Important
Make sure the bars don’t overlap each other in height.
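Axis doesn’t publish the application’s internal algorithm, but the idea behind the bars can be illustrated with a flat-ground approximation: each calibration bar gives a pixels-per-centimeter scale at its image row, and interpolating between the bars lets you estimate the real-world height of an object elsewhere in the image. A rough sketch, with made-up pixel values:

```python
def estimate_real_height(obj_pixel_height, obj_row, bar1, bar2):
    """Estimate an object's real height by interpolating the image scale
    between two calibration bars. Each bar is a tuple of
    (image_row_of_base, pixel_height, real_height_cm).
    Flat-ground approximation, for intuition only."""
    row1, px1, cm1 = bar1
    row2, px2, cm2 = bar2
    s1, s2 = px1 / cm1, px2 / cm2         # pixels per cm at each bar
    t = (obj_row - row1) / (row2 - row1)  # relative position between bars
    scale = s1 + t * (s2 - s1)
    return obj_pixel_height / scale

# Two 200 cm fence poles: one near the camera (base at row 900, 400 px tall),
# one farther away (base at row 500, 100 px tall).
bar_near = (900, 400, 200)
bar_far = (500, 100, 200)

# An object 150 px tall with its base at row 700 is roughly 120 cm tall,
# so it's likely an animal rather than a human, despite looking large.
print(f"{estimate_real_height(150, 700, bar_near, bar_far):.0f} cm")  # 120 cm
```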
Add burnt-in metadata overlays to video streams
To show the event that was detected in the live and recorded video stream, turn on metadata overlay. When you turn on metadata overlay, the application shows:
- A rectangle around detected objects.
- The area or line of the scenario where the object was detected.
If you turn on trajectories, the application also shows a line that outlines the path that an object has taken.
If several scenarios get triggered at the same time, overlays are shown for all of them in all streams of the selected resolution.
Important
The metadata overlays are burnt into the video stream at the selected resolution. You can’t remove them from recorded video.
Note
If you use view areas, the metadata overlays only appear in the first view
area. The default name of the first view area is View area 1.
- In the application’s webpage, go to Settings > Advanced and, depending on your camera:
  - Turn on Metadata overlay.
  - Under Metadata overlay, select in which resolution burnt-in metadata overlays should appear. You can only select one resolution, and the setting applies to all scenarios.
- To show the path an object has taken, select Trajectories.
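Because the overlays are burnt in only at the selected resolution, any client that should display them must request a stream of exactly that resolution. As a hedged example, with VAPIX-style streaming the resolution can typically be requested in the RTSP URL (confirm the parameter for your camera model):

```python
CAMERA = "192.168.0.90"   # assumed camera address
OVERLAY_RES = "1280x720"  # must match the resolution chosen under
                          # Settings > Advanced > Metadata overlay

rtsp_url = f"rtsp://{CAMERA}/axis-media/media.amp?resolution={OVERLAY_RES}"
print(rtsp_url)  # open in a media player, or point your VMS at it
```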
Restrict detection to a PTZ preset position
For PTZ cameras, you can restrict detection to a specific preset position.
- Go to Scenarios and click Open in a scenario card, or click + to create a new scenario.
- Click Scene and select a preset position from the list.
Note
Each time the preset position changes, the application needs to recalibrate. We recommend that you wait at least 15 seconds before you change between preset positions in a guard tour.
Learn more
Classification of objects
The application can classify two types of objects: humans and vehicles. The application shows a rectangle around classified objects. Objects classified as humans get a red rectangle, and objects classified as vehicles get a blue rectangle.
For cameras with deep learning, vehicles can be further categorized into trucks, buses, cars, and bikes.
If you use the time in area (BETA) functionality, the rectangle is yellow until the time condition has been fulfilled. If the object then stays inside the include area for another 30 seconds, the rectangle becomes dashed.
Each classified object has a point of detection that the application uses to decide if an object is inside or outside an include area or when it crosses a virtual line. For a human, the point of detection is at its feet, and for a vehicle it’s at its center. If a human’s feet or a vehicle’s center gets obstructed from the camera’s view, the application makes an assumption of the location of the point of detection.
Note: We recommend that you take the assumed location of objects’ point of detection into consideration when you draw the include area or virtual line.
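A minimal sketch of the point-of-detection rule just described, assuming detections arrive as bounding boxes (the function and box format are illustrative):

```python
def point_of_detection(box, object_class):
    """box = (left, top, right, bottom) in image coordinates, y grows down."""
    left, top, right, bottom = box
    if object_class == "human":
        return ((left + right) / 2, bottom)          # at the feet
    return ((left + right) / 2, (top + bottom) / 2)  # vehicle: at the center

print(point_of_detection((100, 50, 160, 230), "human"))    # (130.0, 230)
print(point_of_detection((100, 50, 300, 150), "vehicle"))  # (200.0, 100.0)
```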
For the best possible results:
- At some point, the entire object needs to be visible in the scene.
- The object needs to be in motion within the scene for at least 2 seconds.
- For cameras with machine learning, humans need to move in a somewhat upright position. For cameras with deep learning, this is not a requirement.
- The upper body of a human needs to be visible.
- Objects need to stand out from the background.
- Reduce motion blur.
Object in area
When you use the trigger condition Object in area, the application detects objects that move inside a defined area. This area is called an include area.
You can turn on time in area (BETA) to set a time limit for how long an object is allowed to stay inside the include area before the application triggers an action. When an object enters the include area, the counter starts. If the object moves out of the include area before the set time limit is reached, the counter resets. It’s the object’s point of detection that must be inside the include area for the counter to keep counting. The time in area (BETA) functionality is suitable for areas where humans or vehicles are only supposed to stay for a short while, like tunnels or school yards after hours.
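A minimal sketch of this counter logic, assuming the include area is a polygon and detections arrive as points (illustrative only, not the application’s implementation):

```python
def point_in_polygon(pt, polygon):
    """Ray-casting test: is the point inside the polygon (list of (x, y))?"""
    x, y = pt
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            inside = not inside
    return inside

class DwellTimer:
    """Triggers once an object's point of detection has stayed inside the
    include area for the allowed time; leaving the area resets the counter."""
    def __init__(self, include_area, allowed_seconds):
        self.area = include_area
        self.allowed = allowed_seconds
        self.entered_at = None

    def update(self, detection_point, now_seconds):
        if point_in_polygon(detection_point, self.area):
            if self.entered_at is None:
                self.entered_at = now_seconds  # counter starts
            return now_seconds - self.entered_at >= self.allowed
        self.entered_at = None                 # left the area: reset
        return False
```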
Include area
The include area is the area where the application detects selected object types. The application triggers actions for an object if its point of detection is inside the include area. The application ignores objects that are outside the include area.
Reshape and resize the area so that it only covers the part of the scene where you want to detect objects. If you use the time in area (BETA) functionality, it’s important to only include parts of the scene where objects don’t frequently overlap each other. The default include area rectangle can be changed to a polygon with up to 10 corners.
Recommendation
If there’s a busy road or sidewalk close to the include area, draw the include area so that objects outside it don’t accidentally get detected; avoid placing the area boundary too close to the road or sidewalk.
Exclude areas
An exclude area is an area inside the include area in which selected object types don’t get detected. Use exclude areas if there are areas inside the include area that trigger a lot of unwanted actions. You can create up to 5 exclude areas.
Move, reshape, and resize the area so that it covers the desired part of the scene. The default rectangle can be changed to a polygon with up to 10 corners.
Recommendation
Place exclude areas inside the include area. Use exclude areas to cover areas where you don’t want to detect objects.
Line crossing
When you use the trigger condition Line crossing, the application detects objects that cross a virtually defined line.
The virtual line is a yellow line in the image. Objects of the selected type that cross the line in a certain direction get detected. The red arrows on the line show the current direction. Actions trigger when objects cross the line in the direction indicated by the arrows.
As shown in the illustration, the object’s point of detection must cross the line for an action to trigger. Objects that only touch the line don’t trigger actions.
- In the illustration to the left, the man doesn’t trigger an action, as his point of detection has not yet crossed the line.
- In the illustration to the right, the man triggers an action, as his point of detection has crossed the line.
For information about the point of detection, see Classification of objects.
Virtual line recommendations
Adjust the virtual line so that:
- objects are clearly visible in the image before they cross the line.
- an object’s point of detection is likely to cross the line.
- objects can’t enter the protected area without passing the line.
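The directed-crossing behavior can be illustrated with a simple side-of-line test on the point of detection between two consecutive frames (a sketch, not the application’s implementation):

```python
def side(a, b, p):
    """Sign of the cross product: >0 if p is left of line a->b, <0 if right."""
    (ax, ay), (bx, by), (px, py) = a, b, p
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def crossed_in_direction(a, b, prev_pt, curr_pt):
    """True only if the point of detection moved from the left side of a->b
    to the right side; merely touching the line (side == 0) doesn't count."""
    return side(a, b, prev_pt) > 0 and side(a, b, curr_pt) < 0

a, b = (0, 0), (0, 100)                                # a vertical virtual line
print(crossed_in_direction(a, b, (-5, 50), (5, 50)))   # True: crossed the line
print(crossed_in_direction(a, b, (-5, 50), (-1, 50)))  # False: only approached
```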
Integration
Set up alarms in AXIS Camera Station
This example explains how to set up a rule in AXIS Camera Station to alert the operator and record video that includes metadata overlays when AXIS Object Analytics detects an object.
Before you start
You need:
- an Axis network camera with AXIS Object Analytics set up and running, see Get started on page 8.
- metadata overlays turned on in the application, see Add burnt-in metadata overlays to video streams on page 11.
- a computer with AXIS Camera Station installed.
To watch this video, go to the web version of this document: www.axis.com/products/online-manual/62075#t10145372
Add the camera to AXIS Camera Station
- In AXIS Camera Station, add the camera. See the user manual for AXIS Camera Station.
Create a device event trigger
- Click and go to Configuration > Recording and events > Action rules and click New.
- Click Add to add a trigger.
- Select Device event from the list of triggers and click Ok.
- In the Configure device event trigger section:
- In Device, select the camera.
- In Event, select one of the scenarios for AXIS Object Analytics.
- In Trigger period, set an interval time between two successive triggers. Use this function to reduce the number of successive recordings. If an additional trigger occurs within this interval, the recording will continue and the trigger period starts over from that point in time.
- In Filters, set active to Yes.
- Click Ok.
Create actions to raise alarms and record video
- Click Next.
- Click Add to add an action.
- Select Raise alarm from the list of actions and click Ok.
Note
The alarm message is what the operator sees when an alarm is raised.
- In the Alarm message section, enter an alarm title and description.
- Click Ok.
- Click Add to add another action.
- Select Record from the list of actions and click Ok.
- In the list of cameras, select the camera to use for recording.
Important
To include metadata overlays in the recording, make sure you select a profile with the same resolution as the one selected for metadata overlays in the application.
- Select a profile and set the prebuffer and postbuffer.
- Click Ok.
Specify when the alarm is active
- Click Next.
- If you only want the alarm to be active during certain hours, select Custom schedule.
- Select a schedule from the list.
- Click Next.
- Enter a name for the rule.
- Click Finish.
Note
To see the metadata overlays in the live view, make sure you select the
streaming profile that matches the one you set in the application.
Record video when an object gets detected
This example explains how to set up the Axis device to record video to an SD card when the application detects an object.
- In the device’s web interface, go to Apps and make sure the application is started.
- To check that the SD card is mounted, go to System > Storage.
- Go to System > Events and add a rule.
- Type a name for the rule.
- In the list of conditions, under Application, select the application scenario. To trigger the same action for all scenarios, select Object Analytics: Any Scenario.
- In the list of actions, under Recordings, select Record video.
- In the list of storage options, select SD-DISK.
- Select a Camera and a Stream profile. To show metadata overlays, make sure you have turned them on in the application for the same resolution as in the stream profile.
Note
We don’t recommend that you use a scenario with time in area (BETA) to trigger recordings if the time an object is allowed to stay inside the include area is more than 30 seconds. The reason is that it’s challenging to use a prebuffer time longer than 30 seconds, which is required if you want to see what happened before the object was detected.
- If you want to start the recording before the object was detected, enter a Prebuffer time.
- Click Save.
- To test the rule, go to the application’s webpage and open the scenario. Click Test alarm. This generates an event, as if the scenario had triggered for real. If you have turned on metadata overlays, a red or blue rectangle appears.
Troubleshooting
Problems detecting objects
- When the image is unstable: Turn on Electronic image stabilization (EIS) in the Image tab of the product’s webpage.
- At image edges, where the image looks distorted: Turn on Barrel distortion correction (BDC) in the Image tab of the product’s webpage.
- Objects are not detected immediately: Objects need to be fully visible in the scene before the application can detect them.
- In other situations: The objects might blend into the background if they are a similar color, or the scene might be poorly lit. Try to improve the light.
Problems with false alarms
- Due to small animals that appear large in the image: Calibrate the perspective. See Calibrate perspective.
Problems with metadata overlays
- On a second client: Metadata overlays are only visible for one client at a time.
Problems with the video stream
- On Firefox browser for cameras with high resolutions: Try Google Chrome™ browser instead.
Filters
If the application is set up to detect any motion, you may experience false alarms. You can then use filters.
Short-lived objects – Use this to ignore objects that only appear in the image for a short period of time.
Small objects – Use this to ignore small objects.
Swaying objects – Use this to ignore objects that only move a short distance.
Filter recommendations
- Filters are applied to all moving objects found by the application and should be set up with care to make sure that no important objects are ignored.
- Set up one filter at a time and test it before you turn on another filter.
- Change the filter settings carefully until you’ve reached the desired result.
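Conceptually, the three filters act as a gate on candidate detections. A minimal sketch of the logic they describe (illustrative only; the thresholds mirror the UI settings, the names are ours):

```python
import math

def passes_filters(obj, min_seconds, min_move_frac, min_w_frac, min_h_frac):
    """obj: dict with age_s, start_pos, pos (image fractions), width, height."""
    # Short-lived objects: must have been visible for at least min_seconds.
    if obj["age_s"] < min_seconds:
        return False
    # Swaying objects: must have moved farther than min_move_frac of the image.
    dx = obj["pos"][0] - obj["start_pos"][0]
    dy = obj["pos"][1] - obj["start_pos"][1]
    if math.hypot(dx, dy) < min_move_frac:
        return False
    # Small objects: ignored only if smaller than BOTH the width and height limits.
    if obj["width"] < min_w_frac and obj["height"] < min_h_frac:
        return False
    return True

obj = {"age_s": 3.0, "start_pos": (0.2, 0.5), "pos": (0.4, 0.5),
       "width": 0.05, "height": 0.10}
print(passes_filters(obj, min_seconds=2, min_move_frac=0.1,
                     min_w_frac=0.03, min_h_frac=0.03))  # True
```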
The short-lived objects filter
Use the short-lived objects filter to avoid detecting objects that only appear for a short period of time, such as light beams from a passing car or quickly moving shadows.
When you turn on the short-lived objects filter and the application finds a moving object, the object doesn’t trigger an action until the set time has passed. If the action is to start a recording, configure the pre-trigger time so that the recording also includes the time the object moved in the scene before it triggered the action.
Set up the short-lived objects filter
- Click Scenarios and select an existing scenario or click + to create a new scenario.
- Click Triggering objects and make sure Any motion is selected.
- Go to Filters > Short-lived objects.
- Enter the number of seconds in the field. The number of seconds is the minimum time that must pass before the object triggers an action. Start with a small number.
- If the result is not satisfactory, increase the filter time in small steps.
The swaying objects filter
The swaying objects filter ignores objects that only move a short distance, for example swaying foliage, flags, and their shadows. If the swaying objects are large, for example large ponds or large trees, use exclude areas instead of the filter. The filter is applied to all detected swaying objects and, if the value is too large, important objects might not trigger actions.
When the swaying object filter is turned on and the application detects an object, the object does not trigger an action until it has moved a distance larger than the filter size.
Set up the swaying objects filter
The filter ignores any object moving a shorter distance than that from the center to the edge of the ellipse.
Note
- The filter applies to all objects in the image, not just objects in the same position as the setup ellipse.
- We recommend that you begin with a small filter size.
- Click Scenarios and select an existing scenario or click + to create a new scenario.
- Click Triggering objects and make sure Any motion is selected.
- Go to Filters > Swaying objects.
- Enter how far objects are allowed to move, as a percentage of the screen, before an action triggers.
The small objects filter
The small objects filter reduces false alarms by ignoring objects that are small, for example small animals.
Note
- The filter applies to all objects in the image, not just objects in the same position as the setup rectangle.
- The application ignores objects that are smaller than both the entered height and the entered width.
Set up the small objects filter
- Click Scenarios and select an existing scenario or click + to create a new scenario.
- Click Triggering objects and make sure Any motion is selected.
- Go to Filters > Small objects.
Note
If you have calibrated the perspective, enter the width and height of the objects to ignore in centimeters (inches) instead of as a percentage of the image.
- Enter the width and height of the objects to ignore as a percentage of the image.
User Manual
AXIS Object Analytics
Ver. M8.6
Date: June 2022
© Axis Communications AB, 2019 – 2022
Part No. T10144039