IBM and NASA open-source foundation AI model for analyzing satellite data

IBM Corp. and NASA today released an advanced artificial intelligence model designed to help researchers analyze satellite data faster.

The model is available on Hugging Face, a popular GitHub-like platform for sharing open-source neural networks. The next phase of IBM’s collaboration with NASA will focus on extending their AI to additional use cases. They will partner with Worcester, Mass.-based Clark University on the initiative.

“The essential role of open-source technologies to accelerate critical areas of discovery such as climate change has never been clearer,” said Sriram Raghavan, the vice president of IBM Research AI.

IBM says the new model is designed to help researchers identify areas in the continental U.S. that may be at risk of flooding and wildfires. According to the company, the model can analyze geospatial data up to four times faster than state-of-the-art neural networks. It also requires less data to train.

IBM describes the AI as a foundation model, or a model that can perform a wide range of advanced computing tasks. It’s based on the Transformer architecture, a popular approach to designing neural networks. Transformer models can take a large amount of contextual information into account when reasoning about a piece of data, which allows them to make more accurate decisions than other AI systems.
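The contextual weighting the paragraph above describes comes from the Transformer's attention mechanism. The sketch below is a minimal, illustrative scaled dot-product self-attention in NumPy, not code from the IBM-NASA model: each output row is a mix of every input row, weighted by similarity, which is how a token's representation absorbs context.

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Core Transformer operation: each output row is a weighted
    combination of all value rows, so every position's representation
    is informed by the full context."""
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)                   # pairwise similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # row-wise softmax
    return weights @ v, weights

# Three toy "tokens" with 4-dimensional embeddings (random data).
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
out, w = scaled_dot_product_attention(x, x, x)        # self-attention
print(w.sum(axis=-1))                                 # each row sums to 1
```

Production Transformers add learned projection matrices, multiple attention heads and feed-forward layers on top of this operation, but the context-mixing step is the same.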

The technology underpins many of the most advanced AI systems on the market. That includes GPT-4, OpenAI LP’s latest large language model.

IBM and NASA jointly trained their model on a geospatial dataset called Harmonized Landsat Sentinel-2. The dataset includes images of the Earth’s surface that were taken by NASA’s Landsat-8 satellite. It also contains measurements from Sentinel-2, a satellite constellation operated by the European Space Agency.

IBM trained the AI model using its internally developed Vela supercomputer. The system, which the company revealed earlier this year, is powered by chips from Nvidia Corp.'s A100 series of data center graphics cards. Vela uses a high-end version of the A100 with a particularly large pool of onboard memory for storing AI models.

Alongside Nvidia silicon, the supercomputer includes IBM-developed virtualization software. Virtualization makes certain AI development tasks easier, but that simplicity comes at the cost of reduced processing power. IBM says that it lowered the performance impact to less than 5%, which its researchers describe as the "lowest overhead in the industry that we're aware of."

Though IBM and NASA optimized their model to detect areas at risk of flooding and wildfires, they estimate it can be adapted to other use cases as well. Tracking deforestation is one task that the model could speed up. IBM says that it can also be used to help researchers monitor carbon emissions and forecast crop yields.

Down the road, the company plans to further extend the AI’s capabilities. It has teamed up with researchers from NASA and Clark University to pursue the effort.

As part of the initiative, IBM hopes to optimize the model for time-series segmentation and similarity search. Those are two popular data analysis methods that are used not only for geospatial research but also for a range of other tasks. Time-series segmentation can, for example, be used to study the cause of stock price fluctuations.
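To make the segmentation idea concrete, here is a toy sketch of one simple form of it: finding the single split point that best divides a series into two segments with distinct means. This is a generic illustration of the technique, not the method IBM plans to use.

```python
import numpy as np

def best_split(series):
    """Return the index that splits `series` into two segments with
    minimum total squared deviation from each segment's mean -- the
    simplest single-change-point segmentation."""
    best_i, best_cost = None, np.inf
    for i in range(1, len(series)):
        left, right = series[:i], series[i:]
        cost = ((left - left.mean()) ** 2).sum() \
             + ((right - right.mean()) ** 2).sum()
        if cost < best_cost:
            best_i, best_cost = i, cost
    return best_i

# A synthetic series whose mean jumps from ~0 to ~5 at index 50.
rng = np.random.default_rng(1)
series = np.concatenate([rng.normal(0, 1, 50), rng.normal(5, 1, 50)])
split = best_split(series)
print(split)  # recovered change point, near index 50
```

Real segmentation methods extend this to many change points and richer cost functions, but the principle of scoring candidate boundaries is the same.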

IBM eventually plans to make a commercial version of the model available through its Watsonx product suite. Introduced in May, the suite includes an array of software tools designed to help companies build advanced AI models and deploy them in production. There are also prepackaged neural networks optimized for various use cases.

Watsonx is powered by Red Hat OpenShift AI, another recently launched component of IBM’s machine learning portfolio. It’s a version of the OpenShift application development and deployment platform that is specifically optimized for AI workloads. The offering eases tasks such as monitoring the performance of machine learning models running in production.

Image: IBM
