Community

NOAA AI Word Cloud

AI/ML communities across NOAA

Artificial Intelligence (AI) and Machine Learning (ML) efforts are underway across NOAA. As NCAI develops, this page will serve as a hub for those Communities of Practice, providing a vehicle for discovery and networking.

Featured NOAA AI Research

Each month the NCAI Newsletter features AI-related NOAA research from our community members. The rotator below highlights research from the current and previous newsletters. Subscribe to the NCAI Newsletter.

NOAA uses artificial intelligence to translate forecasts, warnings into Spanish and Chinese

The National Weather Service is asking for public feedback on its new Spanish and Chinese translation services powered by Lilt's AI language model. (Image credit: NOAA)

NOAA’s National Weather Service (NWS) has provided manual Spanish translations of weather forecasts and warnings for the past 30 years, but the agency now has a new tool that makes those translations more accurate, efficient and equitable.

Through a series of pilot projects over the past few years, NWS forecasters have been training artificial intelligence (AI) translation software on weather, water and climate terminology in Spanish and Simplified Chinese, the most common languages in the United States after English. NWS will add Samoan and Vietnamese next, with more languages to follow.

This effort was supported by the House Appropriations Committee in NOAA’s fiscal year 2023 Congressional budget.

“Getting timely weather alerts ahead of a dangerous storm in multiple languages helps ensure that potentially lifesaving information is available to everyone,” said U.S. Rep. Grace Meng (D-NY), a member of the House Appropriations Subcommittee on Commerce, Justice, and Science. “By capitalizing on the advancements of AI technology, we will be able to provide these alerts in even more languages in the near future. I want to applaud the National Weather Service and Lilt for working with me to meet people where they are. By helping to be more inclusive and further increase safety in our many diverse communities, we can protect more people from severe weather storms in the United States.”

“This language translation project will improve our service equity to traditionally underserved and vulnerable populations that have limited English proficiency,” said Ken Graham, director of NOAA’s National Weather Service. “By providing weather forecasts and warnings in multiple languages, NWS will improve community and individual readiness and resilience as climate change drives more extreme weather events.”

View Article

The Warn-on-Forecast System: A cutting-edge storm-scale NWP system made even better by AI

Multiple Warn-On-Forecast System (WoFS) products corroborating the possibility of a tornado at Little Rock (Arkansas) at 78-minute lead time. (Image credit: NOAA NSSL)

The NOAA National Severe Storms Laboratory (NSSL) and the Cooperative Institute for Severe and High-Impact Weather Research and Operations (CIWRO) are leading the development of the Warn-on-Forecast System (WoFS): a cloud-based, rapidly updating, convection-allowing ensemble designed to support watch-to-warning (0–6 hr) operations for tornadoes, flash floods, and other high-impact weather. The WoFS is scheduled to become operational in the National Weather Service (NWS) Unified Forecast System around 2027, but it is already routinely used by many NWS Weather Forecast Offices, the Storm Prediction Center, and the Weather Prediction Center.

A major strength of the WoFS is its use of machine learning (ML) to generate probabilistic predictions of severe thunderstorm hazards (tornadoes, large hail, damaging wind). Forecasters have frequently found these ML products to be helpful during severe weather operations. Additional ML models are in development for predicting not only severe weather, but also heavy rainfall and regions where WoFS forecasts of storms will be unusually high- or low-quality. Inspired by the recent success of emerging global data-driven AI-NWP models, the NSSL WoFS team has begun exploring the concept of a data-driven WoFS, where forecasts are generated by deep learning models trained on archived WoFS output.

In February, NOAA entities involved with the WoFS received a 2023 DOC Gold Medal Award for “scientific and engineering excellence in developing a revolutionary prediction tool that provides short-term probabilistic thunderstorm guidance.” AI promises to play an increasingly vital role as NSSL and other agencies advance the frontiers of storm-scale prediction.

View Article

Can Scientists Train Machines to Listen for Marine Ecosystem Health?

Researchers collect acoustic data using underwater sound recorders, like this one maintained by divers in Florida Keys National Marine Sanctuary. (Image credit: Florida Fish and Wildlife Conservation Commission)

What if we could detect a problem within a marine ecosystem just like a doctor can detect a heart murmur using a stethoscope? Listening to the heart and hearing the murmur tells the doctor there may be a more serious underlying condition that should be addressed before it gets worse. In an ocean world where climate change and overfishing can drastically alter the functioning of entire ecosystems, a stethoscope for detecting signs of major issues would come in handy for marine resource managers. That’s where sound monitoring, artificial intelligence, and machine learning come in.

Since 2018, NOAA and the U.S. Navy have engaged in a multi-year effort to monitor underwater sound within the National Marine Sanctuary System. The agencies worked with numerous scientific partners to study sound within seven national marine sanctuaries and one marine national monument on the U.S. east and west coasts and in the Pacific Islands region. As the first coordinated monitoring effort of its kind for the National Marine Sanctuary System, SanctSound was designed to provide standardized acoustic data collection to document how much sound is present within these protected areas, as well as the potential impacts of unnatural noise on the areas’ marine taxa and habitats.

View Article

A Machine Learning Explainability Tutorial for Atmospheric Sciences

Illustration of the relationship between understandability and model complexity. Fully interpretable models have high intrinsic understandability, while partially interpretable or simpler black box models have the most to gain from explainability methods. With increased dimensionality and nonlinearity, explainability methods can improve understanding. Still, there is considerable uncertainty about the ability of future explanation methods to improve the understandability of high-dimensional, highly nonlinear methods. (Image credit: Artificial Intelligence for the Earth Systems 3, 1; 10.1175/AIES-D-23-0018.1)

With increasing interest in explaining machine learning (ML) models, this paper synthesizes many topics related to ML explainability. We distinguish explainability from interpretability, local from global explainability, and feature importance from feature relevance. We demonstrate and visualize different explanation methods, show how to interpret them, and provide a complete Python package (scikit-explain) that allows future researchers and model developers to explore these explainability methods.

The explainability methods include Shapley additive explanations (SHAP), Shapley additive global explanation (SAGE), and accumulated local effects (ALE). Our focus is primarily on Shapley-based techniques, which serve as a unifying framework for various existing methods to enhance model explainability. For example, SHAP unifies methods like local interpretable model-agnostic explanations (LIME) and tree interpreter for local explainability, while SAGE unifies the different variations of permutation importance for global explainability.
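As a rough illustration of the permutation-importance idea that SAGE generalizes, the sketch below shuffles one feature column at a time and measures how much a model's error grows. The data, the hand-written stand-in model, and all function names here are invented for this example; they are not code from the paper or from scikit-explain.

```python
import random

random.seed(0)

# Toy dataset: y depends strongly on x0, weakly on x1, and not at all on x2.
n = 500
X = [[random.gauss(0, 1) for _ in range(3)] for _ in range(n)]
y = [3.0 * row[0] + 0.5 * row[1] + random.gauss(0, 0.1) for row in X]

def model(row):
    # Stand-in for any trained predictor; here we simply use the known fit.
    return 3.0 * row[0] + 0.5 * row[1]

def mse(data, targets):
    return sum((model(r) - t) ** 2 for r, t in zip(data, targets)) / len(targets)

baseline = mse(X, y)

def permutation_importance(feature, n_repeats=10):
    """Average increase in error when one feature's column is shuffled."""
    total = 0.0
    for _ in range(n_repeats):
        col = [row[feature] for row in X]
        random.shuffle(col)
        X_perm = [row[:feature] + [v] + row[feature + 1:]
                  for row, v in zip(X, col)]
        total += mse(X_perm, y) - baseline
    return total / n_repeats

# x0 should rank far above x1, and x2 should be ~0 (the model ignores it).
for j in range(3):
    print(f"x{j}: {permutation_importance(j):.3f}")
```

A real workflow would swap the toy model for a fitted predictor and a proper loss, but the ranking logic is the same: the larger the error increase under shuffling, the more the model relies on that feature globally.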

We provide a short tutorial for explaining ML models using three disparate datasets: a convection-allowing model dataset for severe weather prediction, a nowcasting dataset for subfreezing road surface prediction, and satellite-based data for lightning prediction. In addition, we showcase the adverse effects that correlated features can have on the explainability of a model. Finally, we demonstrate the notion of evaluating model impacts of feature groups instead of individual features. Evaluating the feature groups mitigates the impacts of feature correlations and can provide a more holistic understanding of the model.
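The group-versus-individual distinction can be sketched in a few lines: when two features carry the same underlying signal, permuting them one at a time understates their shared contribution, while permuting the group jointly (preserving the within-group row alignment) recovers it. Everything below is a hypothetical toy, assuming a hand-written model and synthetic data; none of the names come from the paper.

```python
import random

random.seed(1)

# x0 and x1 are near-copies of the same hidden signal z; x2 is independent noise.
n = 500
z = [random.gauss(0, 1) for _ in range(n)]
X = [[zi + random.gauss(0, 0.05),
      zi + random.gauss(0, 0.05),
      random.gauss(0, 1)] for zi in z]
y = [2.0 * zi for zi in z]

def model(row):
    # A model that splits credit between the two correlated features.
    return row[0] + row[1]

def mse(data):
    return sum((model(r) - t) ** 2 for r, t in zip(data, y)) / len(y)

baseline = mse(X)

def group_importance(features, n_repeats=10):
    """Shuffle the listed feature columns together (same row reordering
    for every column in the group) and measure the average error increase."""
    total = 0.0
    for _ in range(n_repeats):
        idx = list(range(len(X)))
        random.shuffle(idx)
        X_perm = []
        for i, row in enumerate(X):
            new = row[:]
            for f in features:
                new[f] = X[idx[i]][f]
            X_perm.append(new)
        total += mse(X_perm) - baseline
    return total / n_repeats

# Each correlated feature looks only moderately important on its own,
# but permuting the pair jointly reveals their full shared contribution.
print("x0 alone:", round(group_importance([0]), 2))
print("x1 alone:", round(group_importance([1]), 2))
print("{x0, x1}:", round(group_importance([0, 1]), 2))
```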

View Article