
Scientists can use HPC to extract information from drone and satellite imagery

Researchers at Oak Ridge National Laboratory (ORNL) are using high-performance computing (HPC) and machine learning platforms to optimize data extraction from satellite and drone imagery.

“Of course, the massive amount of data brings unprecedented issues in the machine learning developments, as well as how to better utilize current computational resources to help us do more efficient and effective remote sensing data analytics work,” said Lexie Yang, a research scientist at ORNL, during a presentation at NVIDIA’s GTC21 GPU Technology Conference on November 11.

She and Philipe Dias, an ORNL research associate, use building segmentation, which entails labeling the pixels belonging to structures in complex remote sensing images, to extract building layouts and roads from satellite imagery.

One difficulty with this endeavor is the sheer amount of data that must be processed. Mapping the Earth’s surface even at a low resolution of about 5 meters per pixel means processing some 100 trillion pixels, according to Dias. For a nation the size of Nigeria, more granularity, say 0.5-meter resolution, results in around 90 terabytes of data. That’s 32 trillion pixels, and only a tiny fraction of them, about 62 billion, actually contain buildings.
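As a rough back-of-envelope check of those figures (the bytes-per-pixel value below is an assumption, not from the presentation):

```python
# Rough consistency check of the quoted figures; the bytes-per-pixel
# value is an assumption (e.g. 8-bit, 3-band imagery), not from the talk.
BYTES_PER_PIXEL = 3

pixels = 32e12                                 # ~32 trillion pixels (Nigeria at 0.5 m)
terabytes = pixels * BYTES_PER_PIXEL / 1e12
print(f"~{terabytes:.0f} TB")                  # ~96 TB, close to the quoted ~90 TB

building_pixels = 62e9                         # ~62 billion building pixels
print(f"{building_pixels / pixels:.2%} of those pixels contain buildings")  # ~0.19%
```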

“It becomes a fruitless situation,” Dias explained. He also noted that generalization is a challenge: sensors aboard satellites and drones collect imagery at varied resolutions. Other factors include the “look angle,” or how the device captures the image, as well as domain distribution shifts, which can cause images taken of the same area at the same look angle to appear different at different times.

“You need to handle the domain change in some way,” he said, adding that if you don’t, your model’s performance will suffer. He outlined three strategies for doing so. The first is to annotate more data for each particular domain, which is the most expensive and least scalable approach. “For every new domain you have, you need to annotate more data,” he noted.

The second strategy is adversarial domain adaptation. This entails adding a discriminator component, along with corresponding loss terms, to the original segmentation pipeline to drive adversarial learning, yielding a feature representation that maps data from the source domain and data from the target domain in such a similar pattern that the discriminator cannot tell them apart. The idea is that by training the building segmentation model this way, you can get good enough segmentation for both the source and target domains, according to Dias.
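The talk doesn’t spell out ORNL’s exact architecture, but the general idea can be sketched. Below is a minimal PyTorch-style sketch using a gradient reversal layer, one common way to implement the adversarial objective; the toy encoder, tensor shapes, and hyperparameters are all invented for illustration:

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reverses gradients in the backward pass,
    so the encoder learns features the discriminator cannot separate by domain."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None

class SegmenterWithDiscriminator(nn.Module):
    def __init__(self, feat_dim=64):
        super().__init__()
        # Toy encoder standing in for a real segmentation backbone.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, padding=1), nn.ReLU())
        self.seg_head = nn.Conv2d(feat_dim, 1, 1)    # building vs. background
        self.discriminator = nn.Sequential(          # source vs. target domain
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(feat_dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x, lamb=1.0):
        feats = self.encoder(x)
        seg_logits = self.seg_head(feats)
        dom_logits = self.discriminator(GradReverse.apply(feats, lamb))
        return seg_logits, dom_logits

# One hypothetical training step: supervised segmentation loss on labeled
# source imagery, adversarial domain loss on both source and target imagery.
model = SegmenterWithDiscriminator()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

src = torch.rand(2, 3, 64, 64)                           # labeled source images
src_masks = torch.randint(0, 2, (2, 1, 64, 64)).float()  # building masks
tgt = torch.rand(2, 3, 64, 64)                           # unlabeled target images

src_seg, src_dom = model(src)
_, tgt_dom = model(tgt)
loss = (bce(src_seg, src_masks)
        + bce(src_dom, torch.zeros_like(src_dom))   # domain label 0 = source
        + bce(tgt_dom, torch.ones_like(tgt_dom)))   # domain label 1 = target
opt.zero_grad()
loss.backward()
opt.step()
```

Because of the gradient reversal, the discriminator is pushed to tell the domains apart while the encoder is pushed in the opposite direction, toward domain-invariant features.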

The third possibility is to stratify the data space and develop specialized models. This entails running a data collection through an ML model to extract relevant features, then mapping those features to create a visualization. “You can start to find some structure, some trends in this data by doing so,” Dias added.
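The presentation doesn’t name the embedding or clustering methods, so the sketch below uses stand-ins (PCA for the 2-D map, k-means for the grouping); the feature dimensionality and image counts are made up:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

# Stand-in features: one 512-d vector per image chip, as if extracted
# by some pretrained model (values are random for illustration).
rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 512))

# Map the high-dimensional features to 2-D to look for structure and trends.
coords = PCA(n_components=2).fit_transform(features)

# Group the images into homogeneous buckets (5 clusters, chosen arbitrarily).
buckets = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(features)
for b in range(5):
    print(f"bucket {b}: {np.sum(buckets == b)} images")
```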

This idea underpins ReSFlow, “a workflow that divides a task of model generalization into a set of specialized exploitations.” ReSFlow partitions image collections into homogeneous buckets, each with exploitation models tuned to perform well in the setting of that bucket. As a paper by ORNL researchers puts it, “ReSFlow aspires for generalization through stratification.”
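A hypothetical sketch of that bucket-and-specialize dispatch pattern follows; the bucket keys, metadata fields, and models are all invented, not ReSFlow’s actual interface:

```python
from typing import Callable, Dict

def bucket_key(meta: dict) -> str:
    # Invented stratification criterion: sensor plus ground resolution.
    return f"{meta['sensor']}_{meta['resolution_m']}m"

def segment_image(image, meta: dict, models: Dict[str, Callable]):
    # Route each image to the model specialized for its bucket,
    # falling back to a generic model when no specialist exists.
    return models.get(bucket_key(meta), models["default"])(image)

models = {
    "default": lambda img: "generic-model output",
    "worldview_0.5m": lambda img: "specialist output",
}
print(segment_image(None, {"sensor": "worldview", "resolution_m": 0.5}, models))
```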
