

Poster in Workshop: NeurIPS 2023 Workshop on Tackling Climate Change with Machine Learning: Blending New and Existing Knowledge Systems

Typhoon Intensity Prediction with Vision Transformer

Huanxin Chen · Pengshuai Yin · Huichou Huang · Qingyao Wu · Ruirui Liu · Xiatian Zhu


Abstract:

Predicting typhoon intensity accurately across space and time is crucial for issuing timely disaster warnings and facilitating emergency response, with vast potential for minimizing loss of life and property damage and reducing economic and environmental impacts. Leveraging satellite imagery for situational analysis is effective but challenging due to the complex relationships among clouds and the highly dynamic context. Existing deep learning methods in this domain rely on convolutional neural networks (CNNs), which suffer from limited per-layer receptive fields. This limitation hinders their ability to capture long-range dependencies and global contextual knowledge during inference. In response, we introduce a novel approach, the "Typhoon Intensity Transformer" (TiT), which leverages self-attention mechanisms with global receptive fields per layer. TiT adopts a sequence-to-sequence feature representation learning perspective: it divides a given satellite image into a sequence of patches and recursively employs self-attention operations to extract both local and global contextual relationships between all patch pairs simultaneously, enhancing per-patch feature representation learning. Extensive experiments on a publicly available typhoon benchmark validate the efficacy of TiT compared to both state-of-the-art deep learning methods and conventional meteorological modeling approaches.
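To make the patch-sequence idea concrete, below is a minimal sketch of a ViT-style intensity regressor in PyTorch. It only illustrates the general pattern the abstract describes (patch embedding, stacked self-attention with a global receptive field per layer, and a regression head); the class name, hyperparameters, and pooling choice are illustrative assumptions, not the authors' TiT implementation.

```python
# Minimal sketch of a ViT-style regressor for typhoon intensity.
# All names and hyperparameters are illustrative assumptions,
# not the authors' released TiT code.
import torch
import torch.nn as nn


class PatchSelfAttentionRegressor(nn.Module):
    def __init__(self, img_size=224, patch_size=16, in_chans=3,
                 embed_dim=256, depth=6, num_heads=8):
        super().__init__()
        num_patches = (img_size // patch_size) ** 2
        # Split the satellite image into non-overlapping patches and embed each one.
        self.patch_embed = nn.Conv2d(in_chans, embed_dim,
                                     kernel_size=patch_size, stride=patch_size)
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches, embed_dim))
        # Stacked self-attention layers: every patch attends to every other patch,
        # so each layer has a global receptive field.
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=depth)
        # Regress a single intensity value (e.g., maximum sustained wind speed).
        self.head = nn.Linear(embed_dim, 1)

    def forward(self, x):                      # x: (B, C, H, W) satellite image
        x = self.patch_embed(x)                # (B, D, H/P, W/P)
        x = x.flatten(2).transpose(1, 2)       # (B, N, D) patch sequence
        x = self.encoder(x + self.pos_embed)   # global self-attention over patches
        return self.head(x.mean(dim=1))        # pooled features -> intensity


# Usage: predict intensity for a batch of two images.
model = PatchSelfAttentionRegressor()
pred = model(torch.randn(2, 3, 224, 224))      # shape (2, 1)
```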
