Version: 2.4.6

Settings

General Parameters

Common settings (as of version 2.3.4) can be found under Settings > General Parameters.

| Location | Field | Name | Description | Unit | Default | Range |
| --- | --- | --- | --- | --- | --- | --- |
| Common | Output | Publish Point-Cloud | The level of point-cloud publishing to the output port. This setting can greatly increase the bandwidth between the algo nodes and the master node!<br/>- 0: Nothing published<br/>- 1: Only object points published<br/>- 2: Both object & background points published<br/>- 3: All points (including those out of detection range) published | int | 1 | - |
| | | Background Points Update Interval | Time between updates of background and ground point-cloud information from the algo nodes to the master node. Updating frequently can be resource-intensive. | seconds | 1.0 | - |
| | | Sensor Status Update Interval | Time between updates of sensor information from the algo nodes to the master node. | seconds | 0.5 | - |
| | | Output Sending Type Selection | Method for sending and updating new results (see the first sketch after this table).<br/>- 0: Instant Sending: publish a result whenever a new one is available.<br/>- 1: Fixed-Period Sending: publish the most recent result periodically. The published result might already have been published in a previous frame.<br/>- 2: Fixed-Period with Max-Delay: publish the most recent result periodically. If there is no new result, sending is postponed until a new result is received or a maximum delay time has passed. | int | 1 | - |
| | | Update Interval | Publish interval for output sending types 1 (Fixed-Period Sending) and 2 (Fixed-Period with Max-Delay). | seconds | 0.1 | - |
| | | Point-Cloud Bandwidth Reduction | Decreases the communication load by reducing the point-cloud data size. If enabled, this option has the following effects:<br/>- In calibration mode, only one point cloud is sent (circularly) at a time.<br/>- In runtime mode, background and ground points are downsampled.<br/>- Points out of detection range are not published, regardless of the point-cloud publishing level. | boolean | false | - |
| | | Downsampling Resolution | Resolution of the downsampling method for point-cloud bandwidth reduction. | meters | 0.3 | [0, 1] |
| | Input | Maximum Lidar Input Interval | If there is no response from one of the lidar sensors for this duration, that lidar sensor is considered dead. | seconds | 7 | [0, 30] |
| | | Restart dead lidar drivers automatically | When enabled, a lidar sensor driver is restarted automatically if it does not respond for the duration specified in "Maximum Lidar Input Interval". | True/False | True | - |
| | | Frame Update Setting | Method for updating new frames from received results.<br/>- 0: Instant: update frames as soon as results arrive.<br/>- 1: Use buffer: update frames from a buffer that stores frames for a specified delay time, to absorb network traffic. | int | 0 | - |
| | | Minimum Pending Time (For Using Instant Update) | Due to connection issues, the time interval between two point-cloud messages of one lidar can be unreasonably small. This minimum interval ensures that each lidar point-cloud message of the pipeline has a reasonable time gap from the next. A message whose time difference from the previous message is below this minimum interval is discarded. (Only visible if "Instant" is selected in the Frame Update Setting; see the second sketch after this table.) | seconds | 0.02 | [0, 0.1] |
| | | Maximum Pending Time (For Using Instant Update) | The perception pipeline is triggered whenever data from all connected lidar sensors has been received, or when new data from at least one lidar sensor has arrived and this time threshold is reached. (Only visible if "Instant" is selected in the Frame Update Setting.) | seconds | 0.15 | [0.1, 1.0] |
| | | Number of Frames Operating Delayed (For Using Buffer Update) | Number of frames operating behind the current time. The frame duration is specified by the frame-interval parameter. | - | 3 | [1, +Inf] |
| | | Frame Interval [s] (For Using Buffer Update) | The perception pipeline is triggered with new input according to the frame interval. | seconds | 0.15 | [0.01, +Inf] |
| | Retro-Reflection Tracking | Enable Retro-Reflection Tracking | Enables marking of objects as retro-reflective. For this to work properly, the retro-reflectivity threshold setting of each lidar also needs to be tuned properly. | True/False | False | - |
| | | Min Point Count | For an object to be considered retro-reflective, it must have at least this many retro-reflective points. | int | 5 | [1, +Inf] |
| | | History Threshold | The number of frames an object stays retro-reflective after being classified as retro-reflective. | int | 10 | [0, +Inf] |
| | Reflection Filter | Use Simple Filter [2D] | Enables a simple filtering algorithm for faster but less accurate filtering. Height is not considered. | True/False | True | - |
| | | Use detection limit | If enabled, any sensor farther from the reflection filter than the detection limit is not considered during filtering. A smaller detection limit makes filtering faster. | True/False | True | - |
| | | Detection limit [m] | Detection range for the "Use detection limit" option. | meters | 50.0 | [0, +Inf] |
| | Zone Setting | Trigger Zone Events with MISC Objects | If enabled, zone events are also triggered by MISC objects. | True/False | False | - |
| | | Zone Filter Resolution [m] | Grid cell size for the zone filter. | meters | 0.1 | [0, +Inf] |
| | Tracking History | Tracking History Length | The number of history positions to keep. | int | 100 | [0, +Inf] |
| | | History Smoothing Level | Trajectory smoothing level. | int | 3 | [0, 3] |
| | Trajectory Prediction | Number of Prediction Steps | Number of prediction steps. | int | 10 | [0, +Inf] |
| | | Prediction Step Time-Horizon | Time step between two predicted points. | seconds | 0.1 | [0, +Inf] |
| | | Max. Acceleration Filter Threshold | Maximum acceleration for prediction. | m/s² | 15.0 | [0, +Inf] |
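The three output sending types differ only in when the most recent result is published. The following is a minimal sketch of that logic; `poll_new_result` and `publish` are hypothetical callables and the timing values are examples, not SENSR source code:

```python
import time

UPDATE_INTERVAL = 0.1  # "Update Interval" (used by types 1 and 2), seconds
MAX_DELAY = 0.05       # extra wait allowed by type 2, seconds (assumed value)

def run_sender(sending_type, poll_new_result, publish):
    latest, fresh = None, False           # newest result / not yet published?
    last_publish = time.monotonic()
    while True:
        result = poll_new_result()        # returns None if nothing new arrived
        if result is not None:
            latest, fresh = result, True
            if sending_type == 0:         # 0: Instant - publish on arrival
                publish(latest)
                fresh = False
                continue
        if latest is None:
            continue
        elapsed = time.monotonic() - last_publish
        if sending_type == 1 and elapsed >= UPDATE_INTERVAL:
            publish(latest)               # 1: Fixed-Period - may resend a result
            last_publish = time.monotonic()
        elif sending_type == 2 and elapsed >= UPDATE_INTERVAL:
            # 2: Fixed-Period with Max-Delay - hold the slot for a fresh
            # result, but never wait longer than UPDATE_INTERVAL + MAX_DELAY.
            if fresh or elapsed >= UPDATE_INTERVAL + MAX_DELAY:
                publish(latest)
                fresh = False
                last_publish = time.monotonic()
```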
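Likewise, the Minimum/Maximum Pending Time settings for the Instant frame update can be read as a simple trigger condition. A minimal sketch, assuming a hypothetical state container (not SENSR source code):

```python
from dataclasses import dataclass, field

MIN_PENDING = 0.02  # "Minimum Pending Time" [s]: drop bursts that arrive too close
MAX_PENDING = 0.15  # "Maximum Pending Time" [s]: trigger even if some lidars lag

@dataclass
class FrameState:
    connected_lidars: set
    last_stamp: dict = field(default_factory=dict)  # lidar_id -> last msg time
    pending: dict = field(default_factory=dict)     # lidar_id -> point cloud
    first_pending_time: float = 0.0

def on_lidar_message(state, lidar_id, stamp, cloud):
    last = state.last_stamp.get(lidar_id)
    if last is not None and stamp - last < MIN_PENDING:
        return                      # unreasonably small gap: discard message
    state.last_stamp[lidar_id] = stamp
    if not state.pending:
        state.first_pending_time = stamp
    state.pending[lidar_id] = cloud

def should_trigger(state, now):
    # Trigger when every connected lidar has contributed a new cloud, or
    # when at least one new cloud has waited for the maximum pending time.
    if not state.pending:
        return False
    return (set(state.pending) == state.connected_lidars
            or now - state.first_pending_time >= MAX_PENDING)
```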

Master Node Parameters

Master node settings (as of version 2.3.4) can be found under Settings > Master Node Parameters.

| Location | Name | Description | Unit | Default | Range |
| --- | --- | --- | --- | --- | --- |
| Output-Merger > Algo-Node Result Timeouts | Algo-Node Object Result-Timeout | If the point-cloud receival time of an algo-node result and the latest receival time differ by more than this timeout, the algo-node result is removed from result aggregation. For the point result, the point-cloud update frequency is added to this timeout. For the sensor statuses, the sensor update frequency is added to this timeout. (See the sketch after this table.) | seconds | 0.2 | [0.1, +Inf] |
| | Algo-Node Point Result Timeout | - | seconds | 2.0 | [0.1, +Inf] |
| | Algo-Node Sensor Status Timeout | - | seconds | 2.0 | [0.1, +Inf] |
| Output-Merger > Multi Algo-Node Merger | Association Distance Threshold | Distance threshold to associate objects between two nodes. | meters | 2.0 | [0, +Inf] |
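The timeout rule in the first row can be read as a simple staleness check. A minimal sketch with example timestamps (the helper and the values are illustrative, not SENSR source code):

```python
OBJECT_TIMEOUT = 0.2          # "Algo-Node Object Result-Timeout" [s]
POINT_TIMEOUT = 2.0           # "Algo-Node Point Result Timeout" [s]
POINT_UPDATE_INTERVAL = 1.0   # "Background Points Update Interval" [s]

def is_stale(result_time, latest_time, timeout, extra=0.0):
    # A result is dropped from aggregation when its receival time lags the
    # most recent receival time by more than the (extended) timeout.
    return latest_time - result_time > timeout + extra

print(is_stale(10.0, 10.3, OBJECT_TIMEOUT))                       # True
print(is_stale(8.0, 10.3, POINT_TIMEOUT, POINT_UPDATE_INTERVAL))  # False
```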

Algo Node Parameters

To change algorithm settings, open the algorithm window (via Settings > Algo Node Parameters > Algorithm (ip_address) in the menu bar). A description of each parameter, and more detail on how to tune it, can be found below. Change algorithm parameters with caution, as wrong values can negatively affect performance.

| Location | Field | Name | Description | Unit | Range |
| --- | --- | --- | --- | --- | --- |
| Parameters > Object | - | Allow Floating-Object | If True, the object's min. z is set to the min. z-value of the object's bounding box instead of the ground height. | True/False | - |
| | | Tracked Object's Min. Points | The minimum number of points an object needs to be tracked. | - | [0, +Inf] |
| | | Tracked Object's Min Radius | Minimum object radius to be tracked. | meters | - |
| Parameters > Tracking | - | Validation Period | Period of checking validity in the early stage of tracking. This time determines the length of the VALIDATION period, i.e. how quickly a new object is tracked and classified. | seconds | [0, +Inf] |
| | | Invalidation Period | Period of short-term prediction when tracking is lost during the VALIDATION period. A longer period allows objects at the edge of the detection range to be tracked, but is more likely to introduce false alarms. | seconds | [0, +Inf] |
| | | Drifting Period | Period of short-term location prediction when tracking is lost in TRACKING status. A longer period can help with obstructions, but can lead to object-ID switching in busy environments. | seconds | [0, +Inf] |
| | | Drifting Period for Miscellaneous Objects | Drifting period applied to miscellaneous objects. | seconds | [0, +Inf] |
| Algorithm Components > Ground Detector | Elevation Ground Detector | Apply | If True, use the ground detector. | True/False | - |
| | | Max. Ground Height | Maximum height for ground segmentation. | meters | [-0.5, +Inf] |
| | | Limit Detection Range | Limits the ground-detection range (xy-range). If enabled, all points outside a certain xy-range are not considered ground points. Note that this does not apply to the z detection range. | True/False | - |
| | | Limit Detection Range > Max Detection Range | Maximum xy-range within which ground detection is applied. | meters | [0, +Inf] |
| Algorithm Components > Background Detector | Background Detector | Apply | If True, use the background detector. | True/False | - |
| | | Time to Initialize Background | Initial period used to learn background points. | seconds | [0, +Inf] |
| | | Time to Become Background | The period after which a static object becomes background. | seconds | [1, +Inf] |
| | | Time to Become Foreground | The period after which a background object becomes foreground when it starts moving. | seconds | [1, +Inf] |
| | | Use Multi-lidar Background Fusion | Fuse background information from multiple lidars. | True/False | - |
| | | Number of Frames To Estimate Detection Range | Number of initial frames used to estimate the background detection range. | - | [1, +Inf] |
| | | Use Global Detection Range | If enabled, the global detection range is used as the background detection range instead of the estimate from the initial frames. | True/False | - |
| | | Resolutions (Range, Azimuth, Elevation) | Range (distance), azimuth-angle, and elevation-angle resolution. | meters, degrees, degrees | [0, +Inf] |
| Algorithm Components > Clusterer | Grid-Clusterer | x-Resolution | Grid resolution along the x-axis. | meters | (0, +Inf] |
| | | y-Resolution | Grid resolution along the y-axis. | meters | (0, +Inf] |
| | | Cell Point Threshold | Minimum number of points per cell for clustering. | - | [1, +Inf] |
| | | Point-Size Growing > Enable Point-Size Growing | Allows using a bigger clustering resolution for far-distant objects. | True/False | - |
| | | Point-Size Growing > Point-Size Scaling Factor per Meter | The scaling factor of the point-size radius per meter, with respect to the distance to the lidar. | meters | - |
| | | Point-Size Growing > Max. x-Resolution | Maximum grid resolution along the x-axis when using Point-Size Growing. | meters | [0, +Inf] |
| | | Point-Size Growing > Max. y-Resolution | Maximum grid resolution along the y-axis when using Point-Size Growing. | meters | [0, +Inf] |
| Algorithm Components > Tracker | Hybrid-Tracker | Apply | If True, use the Hybrid-Tracker. | True/False | - |
| | | Association-Distance | Distance threshold to associate objects in two consecutive frames. | meters | [0.01, +Inf] |
| | | Merge Object Level | Object-merging level; decides how easily small, close objects are merged into one object. | - | [0, 10] |
| | | Max. Grow-Size Tolerance | Maximum size a merged object can grow to. | meters | [1, +Inf] |
| | | Max-Dimensions (W, L) | Maximum size of an object that can be tracked. | meters | [0.01, +Inf] |
| | Human-Tracker | Apply | If True, use the Human-Tracker. | True/False | - |
| | | Association-Distance | Distance threshold to associate objects in two consecutive frames. | meters | [0.01, +Inf] |
| | | Merge/Split Size Mul. Threshold | Size-ratio threshold that allows merging/splitting. | - | [0.0, +Inf] |
| | | Merge/Split Size Add. Threshold | Size increase/decrease threshold that allows merging/splitting. | meters | [0.0, +Inf] |
| | | Min. Time for Splitting | Minimum lifetime of an object before the merger/splitter check is applied. | seconds | [0.0, +Inf] |
| | GPU Pipeline Supported Tracker (GPU plug-in) | Apply | If True, use the GPU Pipeline Supported Tracker. | True/False | - |
| | | Tracking Association Distance | Distance threshold to associate objects in two consecutive frames. | meters | [0.0, +Inf] |
| | | Tracking Association Distance For Non-Tracking Status Objects | Distance threshold to associate objects in two consecutive frames, for objects with non-tracking status. | meters | [0.0, +Inf] |
| Algorithm Components > Classifier | Light ML Classifier | Apply | If True, use the Light ML Classifier. | True/False | - |
| | | Target Classification Classes (CAR/PED/CYC/MISC) | The classes the classifier output will include. E.g. if only PED(estrian) and MISC(ellaneous) are chosen, all objects are classified as either PED or MISC. | True/False | - |
| | | Only Classify Tracked Objects | If selected, only objects with tracking status are classified. | True/False | - |
| | | Skipped Classification Classes (CAR/PED/CYC/MISC) | Any class marked with this option is ignored by the classifier. | True/False | - |
| | | Object Class Stability > Stability Level | Classification stability level (see the sketch after this table):<br/>- 0: Disabled.<br/>- 1: If an object class is stable, the classification is kept until replaced by another stable class.<br/>- 2: If an object class is stable, the classification is kept permanently. | - | - |
| | | Object Class Stability > Min. Number of Frames for Stable Class Label | The number of frames an object class must be kept continuously in order to become semi-permanent (Stability Level 1) or permanent (Stability Level 2). | - | - |
| | First Order Classifier | - | This section is applied first; only if it cannot classify an object does the second-order classifier assist it. | - | - |
| | Size Classifier for Big Objects | Apply | If True, use the size classifier for big objects. If used, all objects bigger than the minimum length, width, and height are classified as one of the Include Classes. | True/False | - |
| | | Length Range / Width Range / Height Range | The ranges of length, width, and height for an object to be classified as big. | meters | [0, +Inf] |
| | | Include Classes | The classes the output of the size classifier for big objects will include. | True/False | - |
| | Size Classifier for Small Objects | Apply | If True, use the size classifier for small objects. If used, all objects whose size falls within the Length, Width, and Height Ranges are classified as one of the Include Classes. If an object has been classified as a big object, it is not classified as a small object even if it falls within the small-object ranges. | True/False | - |
| | | Length Range / Width Range / Height Range | The ranges of length, width, and height for an object to be classified as small. | meters | [0, +Inf] |
| | | Include Classes | The classes the output of the size classifier for small objects will include. | True/False | - |
| | Velocity Classifier | Apply | If True, use the velocity classifier. | True/False | - |
| | | Non-MISC Min Displacement | Minimum displacement above which an object is not classified as MISC. | meters | [0, +Inf] |
| | | Non-MISC Min Velocity | Minimum velocity above which an object is not classified as MISC. | km/h | [0, +Inf] |
| | | PED Max Velocity | Maximum velocity above which an object is not classified as PED. | km/h | [0, +Inf] |
| | | Use Velocity Estimator | Enable this to use the velocity estimator. | True/False | - |
| | Second Order Classifier > ML Classifier | Apply | If True, use the machine-learning classifier. If applied, it is used only after the size classifier and the velocity classifier. E.g. if, after the size and velocity classifiers, an object is classified as either PED or CYC, the ML classifier decides whether it is a PED or a CYC. | True/False | - |
| | | Min Num Points | Minimum number of points required to use the ML classifier. | - | [0, +Inf] |
| | | Max Num Objects To Classify | Maximum number of objects that can be classified by the ML classifier. | - | [0, +Inf] |
| Algorithm Components > Cluster Merger (GPU plug-in) | Cluster Merger | Apply | If True, enables the Cluster Merger. | True/False | - |
| Algorithm Components > Object Detector (GPU plug-in) | 2D Object Detector | Apply | If True, enables the 2D Object Detector. | True/False | - |
| | | Car Probability Threshold | Minimum probability for an object to be considered a CAR. | - | [0.0, 1.0] |
| | | Pedestrian Probability Threshold | Minimum probability for an object to be considered a PED. | - | [0.0, 1.0] |
| | | Cyclist Probability Threshold | Minimum probability for an object to be considered a CYC. | - | [0.0, 1.0] |
| | | Exclude Invalid Points | If selected, points in an exclusion zone are not fed to the object-detection model. | True/False | - |
| | | Limit Detection Range > Enable | Allows object detection to cover a smaller area than the global detection range. | True/False | - |
| | | Limit Detection Range > Max Detection Range | Maximum range within which object detection is applied. | meters | [0, +Inf] |
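The Object Class Stability options above can be read as a small state machine per object. A minimal sketch, with hypothetical names and an assumed frame threshold (not SENSR source code):

```python
STABILITY_LEVEL = 1     # 0: disabled, 1: keep until another stable class, 2: keep forever
MIN_STABLE_FRAMES = 10  # "Min. Number of Frames for Stable Class Label"

class ClassStability:
    def __init__(self):
        self.candidate, self.run_length, self.stable = None, 0, None

    def update(self, raw_class):
        # Count consecutive frames with the same raw classification.
        if raw_class == self.candidate:
            self.run_length += 1
        else:
            self.candidate, self.run_length = raw_class, 1

        if STABILITY_LEVEL == 0:
            return raw_class                # stability disabled
        if STABILITY_LEVEL == 2 and self.stable is not None:
            return self.stable              # level 2: kept permanently
        if self.run_length >= MIN_STABLE_FRAMES:
            self.stable = raw_class         # level 1: replaceable by a new stable class
        return self.stable if self.stable is not None else raw_class
```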

More on algorithm settings

Detection range

It is recommended to adjust some parameters to better fit each specific use case. As mentioned in previous sections, the detection range is one of the most important parameters and should be tuned carefully to save computational resources and reduce noise and false detections (refer to Project Setup, Optimizing the world size). To change the detection range while in runtime mode, go back to project setup mode (via Mode > Project Setup in the menu bar) and click the gear icon (Algo Node Settings).

Background Detector and Zone

The background detector automatically sets exclusion zones (called background zones) to ignore static objects in the detection pipeline. If an object stays still for more than Time to Become Background seconds, the background detector sets a background zone around that object. As a consequence, the object becomes background and is no longer tracked. If a background object moves away from its background zone, it takes Time to Become Foreground seconds for the zone to be removed. Since lidar is noisy and some objects can vibrate (vegetation, flags, ...), the background detector cannot completely remove all static objects. It is recommended to use exclusion zones (see exclusion zones) for regions you don't want to cover.
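A minimal sketch of this zone timing, with hypothetical field names and assumed durations (not SENSR source code):

```python
from dataclasses import dataclass
from typing import Optional

TIME_TO_BECOME_BACKGROUND = 40.0  # "Time to Become Background" [s] (assumed)
TIME_TO_BECOME_FOREGROUND = 5.0   # "Time to Become Foreground" [s] (assumed)

@dataclass
class StaticCandidate:
    last_moved_time: float
    left_zone_time: float
    inside_zone: bool = True
    background_zone: Optional[tuple] = None   # e.g. an xy bounding box

def update_background_zone(obj: StaticCandidate, now: float, bbox: tuple):
    if obj.background_zone is None:
        # Still for Time to Become Background -> a background zone is set
        # around the object and it stops being tracked.
        if now - obj.last_moved_time > TIME_TO_BECOME_BACKGROUND:
            obj.background_zone = bbox
    elif not obj.inside_zone:
        # The object left its zone; after Time to Become Foreground the
        # zone is removed and the object is tracked again.
        if now - obj.left_zone_time > TIME_TO_BECOME_FOREGROUND:
            obj.background_zone = None
```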

If you also want to track objects that can be static for a long time (e.g. cars in a parking lot), you can use static zones (background detection is not applied inside static zones), or disable the Background Detector and use only exclusion zones instead.

Tracker

Non-GPU pipeline

The Human-Tracker is optimized for human-only tracking applications. If the lidar is used in a scene without cars and cyclists (e.g. indoors), the Human-Tracker is recommended. If the lidar is used outdoors (with cars/cyclists), the Hybrid-Tracker performs better.

Since lidar point clouds are sparse, a big car can break into multiple small objects. The Hybrid-Tracker can merge those small objects together, and the Merge Object Level parameter controls this merging process: it indicates how easily objects are merged. If the lidars are used to track big objects (big trucks/containers) and there aren't many people, increasing Merge Object Level can help detect the big objects better. But if the scene is a crowded intersection with many pedestrians, the value needs to be kept small to avoid merging closely spaced pedestrians into one object. Merge Object Level can take values in the range [0, 10]. We recommend 3-4 for crowded intersections and 7-8 for highways with many big trucks.

GPU pipeline

With the GPU pipeline, the GPU Pipeline Supported Tracker is recommended for outdoor scenes. For indoor or human-only scenes, the Human-Tracker is still recommended.

Classifier

Using Target Classification Classes can reduce false classifications. E.g. if the lidars are used on highways to detect cars/cyclists, we recommend choosing CAR, CYC, and MISC as target classes. If the lidars are used indoors, choosing PED and MISC as target classes can perform better.

If only one target class is selected, all objects are classified as that class without considering the objects' properties (i.e. classification is skipped). If no target class is selected, all objects are classified as MISC. Both cases are summarized in the sketch below.
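A minimal sketch of these two degenerate cases; the fallback for a raw class outside the target set (mapping to MISC) is an assumption for illustration, since the manual only specifies the single-class and empty-selection cases:

```python
TARGET_CLASSES = ["PED", "MISC"]   # example: indoor setting

def apply_target_classes(raw_class):
    if len(TARGET_CLASSES) == 0:
        return "MISC"               # nothing selected: everything is MISC
    if len(TARGET_CLASSES) == 1:
        return TARGET_CLASSES[0]    # one class selected: classification skipped
    return raw_class if raw_class in TARGET_CLASSES else "MISC"  # assumed fallback
```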

The first-order classifiers (size-based and velocity-based) are used first; only if they cannot decide which class an object belongs to is the second-order classifier (machine-learning-based) applied. Let's consider the following classifier setting and an object of size (4.5 m, 2.7 m, 1.5 m).

(Figure: classifier settings)

The object is classified as either CAR or MISC by the Size Classifier for Big Objects. If it moves at 20 km/h, the Velocity Classifier rules out MISC and PED, so it is classified as a CAR. If it is not moving, the ML Classifier is used to determine whether it is a CAR or a MISC.
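A worked version of this example as a sketch. The threshold values and function names are assumptions chosen to match the walkthrough, not SENSR defaults:

```python
BIG_LENGTH_RANGE = (4.0, 15.0)      # assumed "Length Range" for big objects [m]
BIG_INCLUDE_CLASSES = {"CAR", "MISC"}
NON_MISC_MIN_VELOCITY = 5.0         # assumed [km/h]
PED_MAX_VELOCITY = 10.0             # assumed [km/h]

def classify(length_m, speed_kmh, ml_classify):
    candidates = {"CAR", "PED", "CYC", "MISC"}
    # First-order, size-based: a 4.5 m long object falls in the "big" range,
    # so only the big-object include classes remain.
    if BIG_LENGTH_RANGE[0] <= length_m <= BIG_LENGTH_RANGE[1]:
        candidates &= BIG_INCLUDE_CLASSES          # -> {"CAR", "MISC"}
    # First-order, velocity-based: at 20 km/h the object can be neither
    # MISC (too fast) nor PED (faster than the PED max velocity).
    if speed_kmh >= NON_MISC_MIN_VELOCITY:
        candidates.discard("MISC")
        if speed_kmh > PED_MAX_VELOCITY:
            candidates.discard("PED")
    # The second-order ML classifier only runs if the first-order
    # classifiers could not narrow the set down to a single class.
    return candidates.pop() if len(candidates) == 1 else ml_classify(candidates)

classify(4.5, 20.0, ml_classify=lambda c: "CAR")  # -> "CAR" without the ML step
classify(4.5, 0.0, ml_classify=lambda c: "CAR")   # -> ML decides CAR vs MISC
```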

Dealing with limited computational resources

If you are running with multiple lidars or on a machine with limited computational resources, you can improve the speed of the algorithm by:

  • Optimizing the detection range (preferred over the general parameter changes below).
  • Increasing the resolution values in Pipeline > Clusterer. A resolution of 0.1 x 0.1 means that points within a grid cell of size 0.1 x 0.1 are grouped into one object. Increasing these values helps the clusterer run faster, with the trade-off that close objects can be merged together (see the sketch after this list).
  • Increasing the resolution values in Pipeline > Background Detector > Range, Azimuth, Elevation. Increasing these values helps the Background Detector run faster, with the trade-off that objects close to the background can become background.
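To see why the clusterer resolution matters for speed, here is a rough, illustrative calculation (the world size and resolutions are example values, not defaults): the number of grid cells, and with it the clustering work, shrinks quadratically as the xy-resolution grows.

```python
def cell_count(world_size_m, resolution_m):
    cells_per_axis = world_size_m / resolution_m
    return int(cells_per_axis ** 2)

print(cell_count(200.0, 0.1))  # 4000000 cells
print(cell_count(200.0, 0.2))  # 1000000 cells: 4x fewer for 2x the resolution
```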

Rendering Config Editor

Rendering settings can be found under View > Preference (or press F11).

| Location | Name | Description | Default |
| --- | --- | --- | --- |
| Visibility | Object Points | Enable or disable showing points belonging to objects. | True |
| | Ground Point | Enable or disable showing ground points (e.g. floor). | True |
| | Background Points | Enable or disable showing background points (e.g. wall or desk). | True |
| | Intensity Visual Mode | Enable this to color the point cloud by intensity values (see the sketch after this table).<br/>- 0: Off: intensity color visualization is off.<br/>- 1: Fixed Range: use the max and min intensity values from the range below.<br/>- 2: Auto Range: calculate the max and min intensity values from each frame. | 0: Off |
| | Min/Max Point Intensity Range | Color range for intensity visualization. Intensity values outside the range are clamped to it. | [0.0, 255.0] |
| | Grid | Enable or disable showing the squared grid. | True |
| | Grid Circular | Enable or disable showing the circular grid. | False |
| | Axis | Enable or disable showing the world XYZ coordinate axes. | True |
| | Objects | Enable or disable showing tracked objects (e.g. pedestrians). | True |
| | Misc Objects | Enable or disable showing miscellaneous objects. | True |
| | Drifting Objects | Enable or disable showing drifting objects. | True |
| | Invalidating Objects | Enable or disable showing invalidating objects. | True |
| | Validating Objects | Enable or disable showing validating objects. | True |
| | Object Trail | Enable or disable showing the object trail. | True |
| | Predicted Trajectory | Enable or disable showing the predicted trajectory of moving objects. | False |
| | Map Image | Enable or disable showing a map image. | True |
| | Detection Range | Enable or disable showing the detection range of each algo node. | True |
| | Lidar Name | Enable or disable showing the name of each sensor. | False |
| | Lidar Topic | Enable or disable showing the topic of each sensor. | False |
| | Algo Node Name | Enable or disable showing the name of each algo node. | False |
| | Zone Name | Enable or disable showing the name of each zone. | False |
| | Retro-Reflective Objects | Enable or disable showing retro-reflective objects. | False |
| | Draw Annotations | Enable or disable drawing object annotations. | False |
| | Visible Annotations > Show Object ID | If object annotations are enabled, the object's ID is shown. | True |
| | Visible Annotations > Show Object Class and Probability | If object annotations are enabled, the object's class and the probability score of that class are shown. | False |
| | Visible Annotations > Show Object Speed | If object annotations are enabled, the object's speed is shown. | True |
| | Visible Annotations > Show Object Height | If object annotations are enabled, the object's height is shown. | False |
| Color | Background Color | Color of the background. | - |
| | Ground Point Color | Color of the ground points. | - |
| | Background Point Color | Color of the background points. | - |
| | Object Point Color | Color of points belonging to objects. | - |
| | Car Color | Color of cars. | - |
| | Pedestrian Color | Color of pedestrians. | - |
| | Cyclist Color | Color of cyclists. | - |
| | Misc Color | Color of miscellaneous objects. | - |
| | Event Zone Color | Color of event zones. | - |
| | Exclusion Zone Color | Color of exclusion zones. | - |
| | Reflection Zone Color | Color of reflection zones. | - |
| | Static Zone Color | Color of static zones. | - |
| | Detection Range Color | Color of the detection-range outline. | - |
| | Retro-Reflective Object Color | Color of retro-reflective objects. | - |
| Miscellaneous | Cloud Point Size | Controls the visual size of rendered points. Values can range from 1.0 to 10.0. | 1.0 |
| | Use Random Obj. Color | Colors each object in a different color. | False |
| | Grid Interval [m] | Display grid size. | 10 |
| View Range | Z Clipping Range Limits [m] | Display Z-range. | [-10.0, 10.0] |
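The Intensity Visual Mode options map point intensity onto a color ramp in three different ways. A minimal sketch of the mode semantics, assuming a hypothetical normalization helper (not SENSR's renderer code):

```python
INTENSITY_RANGE = (0.0, 255.0)   # "Min/Max Point Intensity Range"

def intensity_to_unit(value, mode, frame_intensities=()):
    if mode == 0:                              # 0: Off
        return None
    if mode == 1:                              # 1: Fixed Range - use the setting
        lo, hi = INTENSITY_RANGE
    else:                                      # 2: Auto Range - per-frame min/max
        lo, hi = min(frame_intensities), max(frame_intensities)
    clamped = min(max(value, lo), hi)          # out-of-range values are clamped
    return (clamped - lo) / (hi - lo) if hi > lo else 0.0

print(intensity_to_unit(300.0, mode=1))        # 1.0 (clamped to 255)
```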

Export Configuration

SENSR can export the configuration changes the user has made into a human-readable format. This is useful for manually checking a project's settings against the settings SENSR uses by default.

To export the configuration changes, go to sensor setup mode and, in the top menu bar, navigate to File -> Export Parameter change Report. This brings up a dialog box to choose where to export the text file.
