When disaster strikes, speed is critical. The time it takes to properly assess damage in the wake of a major event can be the difference between life and death. However, emergency responders must often navigate disruptions to local communication and transportation infrastructure, making accurate assessments dangerous, difficult, and slow. While satellite and aerial imagery offer a less risky alternative that covers more ground, analysts must still conduct manual, time-intensive assessments of images.
The Defense Innovation Unit’s (DIU’s) xView2 Challenge (Challenge) seeks to automate post-disaster damage assessment. DIU is challenging machine learning experts to develop computer vision algorithms that will speed up analysis of satellite and aerial imagery by localizing and categorizing various types of building damage caused by natural disasters. The xView2 Challenge is DIU’s second prize competition focused on furthering innovation in computer vision for humanitarian assistance and disaster relief (HADR) efforts. This year’s competition builds upon the xView1 Challenge, which sought computer vision algorithms to locate and identify distinct objects on the ground that are useful to first responders.
“DIU’s goal in hosting this Challenge is to enlist the global community of machine learning experts to tackle a critically hard problem: detecting key objects in overhead imagery in context and assessing damage in a disaster situation,” said Mike Kaul, DIU AI Portfolio Director.
“We are always looking for ways to improve rapid damage assessment to ensure we and our partners deliver the right resources to the right places at the right time, and we are confident the DIU Challenge can contribute to that goal,” said FEMA Regional Administrator Robert Fenton, a partner in the Challenge.
DIU led a team of experts from academia and industry to create a new dataset, xBD, which supports both building localization and damage assessment using pre- and post-disaster imagery. The dataset will provide the foundation for the Challenge. While several open datasets for object detection from satellite imagery already exist (e.g., SpaceNet, xView1), each represents only a single snapshot in time and lacks information about the type and severity of damage following a disaster. xBD is currently the largest and most diverse annotated building damage dataset, allowing ML/AI practitioners to build and test models that help automate building damage assessment. The open source electro-optical imagery (0.3 m resolution) xBD dataset will encompass ~544,556 building annotations across ~19,520 square kilometers of freely available imagery from multiple countries*. Six disaster types are included: wildfire, landslide, volcanic eruption, earthquake/tsunami, wind, and flooding (*more data are being added to xBD as labeling becomes available).
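To make the task concrete: each sample pairs pre- and post-disaster imagery of the same area with per-building footprints and damage labels. The sketch below is illustrative only; the field names and the four-level damage scale are assumptions for the example, not the actual xBD schema.

```python
# Hypothetical representation of one annotated disaster sample.
# Field names and damage classes are illustrative assumptions,
# not the actual xBD data format.
from dataclasses import dataclass, field

DAMAGE_CLASSES = ["no-damage", "minor-damage", "major-damage", "destroyed"]

@dataclass
class BuildingAnnotation:
    polygon: list   # building footprint vertices in pixel coordinates
    damage: str     # one of DAMAGE_CLASSES, labeled on the post-disaster image

@dataclass
class DisasterSample:
    pre_image_path: str    # pre-disaster chip (0.3 m EO imagery)
    post_image_path: str   # post-disaster chip of the same footprint
    buildings: list = field(default_factory=list)

def damage_counts(sample: DisasterSample) -> dict:
    """Tally buildings per damage class for a quick severity summary."""
    counts = {c: 0 for c in DAMAGE_CLASSES}
    for b in sample.buildings:
        counts[b.damage] += 1
    return counts

sample = DisasterSample(
    "pre.png", "post.png",
    [BuildingAnnotation([(0, 0), (0, 10), (10, 10)], "destroyed"),
     BuildingAnnotation([(20, 20), (20, 30), (30, 30)], "no-damage")],
)
print(damage_counts(sample))
```

A competition model would take the image pair as input and predict both the footprints (localization) and a damage class per building (categorization); the tally above is the kind of summary responders would ultimately consume.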
There are three competition prize tracks for the xView2 Challenge:
Open source. Teams compete for leaderboard position and awards for top scores. By releasing their models publicly under a permissive open source license, teams also become eligible for an additional open source award.
Non-exclusive Government purpose rights. Teams grant Government purpose rights to become eligible for awards for top scores on the leaderboard. Solutions can be used to help future disaster recovery efforts.
Evaluation only. Teams retain IP and grant only the rights needed to benchmark their solution and compete for leaderboard position. Top teams in this category will still be eligible for a special monetary prize pool for their submissions.
The best solutions in all three categories will be eligible for a share of a $150,000 prize purse. Top solvers will also be invited to present their work at the December NeurIPS 2019 Workshop on AI for HADR. Winners of any cash prize will be eligible for follow-on work with the Department of Defense. The competition will start in September 2019 and run through November 2019.