Point Cloud Change Detection for City Scenes

SHREC 2023 Track

Motivation and Task

The rapid development of 3D acquisition devices makes it possible to collect billions of points in a few hours. Analysing this output, however, remains a challenging task, especially in the field of point cloud change detection.

In this track, we provide large-scale real and synthetic point cloud pairs. The goal is to detect changes between multi-temporal point clouds of a complex street environment. We provide manually labelled ground truth for training and validation. We hope to encourage researchers to try out different methods, including both deep learning and traditional techniques.

Dataset

The dataset includes both real and synthesized point clouds and consists of annotated "points of interest" in street-level colored point clouds. The real data is provided by CycloMedia and was gathered in 2016 and 2020 in the city of Schiedam, the Netherlands, using vehicle-mounted LiDAR sensors. The synthesized data is generated from simulated scenes in Unreal Engine 4 with the AirSim plugin. The dataset focuses on street furniture: the majority of labels correspond to road signs, people and cars, while other objects such as advertisements, statues and garbage bins are also included. Labeling was done through manual inspection.

For the real data, we selected over 78 annotated street-scene 3D point cloud pairs captured in 2016 and 2020. For the synthetic data, we selected over 100 point cloud pairs. Each point cloud pair represents the same street scene at two different times and contains a set of changed and unchanged objects. Each object pair is assigned one of the following labels:

(1) Nochange, (2) Removed, (3) Added, (4) Change, (5) Color_change

To explain the labels:

Nochange refers to the case where there is no significant change between the two scans.

Added refers to objects that do not exist in the first scan but appear in the second scan.

Removed is the opposite case. For example, in the synthetic dataset, if an object exists in the time_a subset but has been removed in the time_b subset, it is marked as removed in the time_b point cloud.

Change refers to the case where there is at least a significant geometric change, possibly accompanied by a significant change in the RGB space. This includes objects that are replaced by other objects. For example, in the following picture, a small blue sign is added whilst the rest of the sign stays the same.

Color_change refers to the case where there is no significant geometric change but there is a significant change in the RGB space. For example, in the following picture, the content of an advertisement changes while the rest of the cloud stays the same.
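
For training code, these five labels typically need to be mapped to integer class IDs. The sketch below assumes lower-case label strings; the exact spellings and capitalisation in the released csv files should be checked against the data.

```python
# Assumed label spellings; check the released csv files for the exact strings.
LABEL_TO_ID = {
    "nochange": 0,
    "removed": 1,
    "added": 2,
    "change": 3,
    "color_change": 4,
}
ID_TO_LABEL = {i: name for name, i in LABEL_TO_ID.items()}

def encode_label(name: str) -> int:
    """Map a label string from the csv to its integer class ID."""
    return LABEL_TO_ID[name.strip().lower()]
```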

Labeling Format

Each data point consists of the coordinates of a point of interest and the corresponding label. The points have been placed on, or at the base of, the object. A first step in preparing the data for a model may be to take all points within a certain x-y radius of the point of interest (resulting in cylinders for the LiDAR data, as seen above) or within cuboids (synthetic data) from both point clouds, as sketched below. In most cases this gives a fairly clean representation of the object, apart from the ground. There are, however, cases where the crop also includes other objects (for example signs that are close together) or parts of trees above the object.
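
A minimal sketch of such a crop is given below, assuming each cloud is a NumPy array whose first three columns are x, y, z (any further columns, e.g. RGB, are carried along). The radius and box size in the usage comment are illustrative values, not track parameters.

```python
import numpy as np

def crop_cylinder(cloud: np.ndarray, poi: np.ndarray, radius: float) -> np.ndarray:
    """Keep all points within an x-y radius of the point of interest
    (an infinite vertical cylinder), as suggested for the LiDAR data."""
    d_xy = np.linalg.norm(cloud[:, :2] - poi[:2], axis=1)
    return cloud[d_xy <= radius]

def crop_cuboid(cloud: np.ndarray, poi: np.ndarray, half_size: np.ndarray) -> np.ndarray:
    """Keep all points inside an axis-aligned cuboid centred on the
    point of interest, as suggested for the synthetic data."""
    inside = np.all(np.abs(cloud[:, :3] - poi[:3]) <= half_size, axis=1)
    return cloud[inside]

# Hypothetical usage on one scene pair (loading the clouds is not part of this sketch):
# patch_a = crop_cylinder(cloud_2016, poi_xyz, radius=2.0)
# patch_b = crop_cylinder(cloud_2020, poi_xyz, radius=2.0)
```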

Corresponding point clouds are saved with file names starting with the same integer (the scene number). The classifications are saved in csv files which also start with the same scene number. The coordinates contained in the csv file correspond to the points of interest.
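
The snippet below sketches how the shared scene number could be used to gather one scene's files. The track only specifies that corresponding files start with the same integer, so the file extensions, the naive prefix glob and the csv column layout here are assumptions to adjust to the actual naming.

```python
import glob
import os

import pandas as pd

def scene_files(root: str, scene_id: int):
    """Gather all files belonging to one scene (both clouds and the csv).

    Note: a bare prefix glob like "1*" also matches scene 10, 11, ...;
    adapt the pattern to the actual file naming of the release.
    """
    files = sorted(glob.glob(os.path.join(root, f"{scene_id}*")))
    csvs = [f for f in files if f.endswith(".csv")]
    clouds = [f for f in files if not f.endswith(".csv")]
    return clouds, csvs

def load_points_of_interest(csv_path: str) -> pd.DataFrame:
    """Load the classification csv: the coordinates of the points of
    interest and their labels (column names are assumed)."""
    return pd.read_csv(csv_path)
```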

To suit learning-based methods, the dataset is split into training and test sets with an 80%/20% ratio.
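
Since the official test split is withheld during the challenge, participants may want to carve a validation split out of the released training scenes. A minimal sketch is given below; splitting at the scene level keeps both epochs of a pair in the same subset.

```python
import random

def split_scene_ids(scene_ids, val_ratio=0.2, seed=0):
    """Split scene numbers into train/validation subsets.

    Splitting by scene number keeps the two epochs (and all points of
    interest) of one scene pair together, avoiding leakage.
    """
    ids = sorted(scene_ids)
    random.Random(seed).shuffle(ids)
    n_val = max(1, int(round(val_ratio * len(ids))))
    return ids[n_val:], ids[:n_val]  # train_ids, val_ids
```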

Viewing the dataset

The points and corresponding labels can be viewed in context by loading the classification csv and the corresponding cloud(s) in the CloudCompare software.

The following tool can be used to isolate and view these points of interest in a more convenient manner. A very similar tool was used to conduct the labeling.

Registration

To participate in the track, please send us an email confirming your interest in participation and mentioning your affiliation and possible co-authors. After filling out the terms of usage document and submitting it to hlyuan@nuist, you will receive the credentials to download the dataset.

Submission

No later than the deadline mentioned in the schedule, participants are expected to submit an executable program along with a description of the method (a TEX file is preferred, less than two pdf pages). The program will be used to generate results on the private test sets.

Evaluation

The main goal of the track is to find the changed areas and their corresponding change labels. Global accuracy and per-class accuracy will be the main evaluation metrics.
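
A sketch of the two metrics is given below, with per-class accuracy taken as the per-class recall (the fraction of ground-truth points of each label that are predicted correctly); the organisers' exact definition may differ.

```python
import numpy as np

def global_accuracy(y_true, y_pred) -> float:
    """Fraction of points of interest whose predicted label is correct."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float((y_true == y_pred).mean())

def per_class_accuracy(y_true, y_pred) -> dict:
    """Accuracy computed separately for every change label present in the ground truth."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    scores = {}
    for label in np.unique(y_true):
        mask = y_true == label
        scores[str(label)] = float((y_pred[mask] == label).mean())
    return scores
```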

Download

During the challenge period, only the training set is provided to the participants. Download the datasets.

Organizers

  • Honglin Yuan 1,2
  • Tao Ku 2
  • Yang Gao 1
  • Remco C. Veltkamp 2
1: Nanjing University of Information Science and Technology, School of Computer Science
2: Utrecht University, Department of Information and Computing Sciences

To contact the organizers, please send an email to us.

Schedule

The registration and submission deadlines are in the AoE (Anywhere on Earth) time zone.

  • Jan 8, 2023: The dataset is available.
  • Feb 10, 2023: Registration deadline.
  • March 10, 2023 (extended from March 1): Submission deadline of the results.
  • March 28, 2023: Track submission to SHREC for review.
  • April 20, 2023: First reviews done, first stage decision on acceptance or rejection.
  • May 20, 2023: First revision due.
  • June 16, 2023: Second stage decision on acceptance or rejection.
  • June 30, 2023: Final version submission.
  • July 5, 2023: Final decision on acceptance or rejection.
  • August 31, 2023: Publication online in Computers & Graphics.
  • August 31 to September 1, 2023: Presentation at the Eurographics Symposium on 3D Object Retrieval.

CycloMedia develops, builds and operates the world's most advanced mobile mapping systems. With a combination of sensors, ranging from cameras and LiDAR scanners to state-of-the-art positioning systems, we map dense urban areas in Western Europe and North America. Point clouds and images are accurately geo-registered, and the panoramic imagery is of high geometric quality.

The petabytes of data (all processed and stored in the Microsoft Azure cloud) that we collect every year are used by “professional users”, ranging from city governments to large corporates working in utilities, infrastructure and insurance. Their one common denominator is their need for better data and information to drive change in how they can serve their inhabitants and customers. Today our solutions make cities more safe, green, accessible and smart and help companies to make the right decisions based on fresh and accurate data. CycloMedia is based in The Netherlands and has offices in the US and Germany.