[Keywords]
[Abstract]
As the installed capacity and grid-integration ratio of wind power continue to rise, prediction models must be updated frequently in dynamic environments. However, naive incremental learning suffers from catastrophic forgetting, while batch retraining incurs prohibitive computational costs. In this paper, we propose an Information-Enhanced Memory Aware Synapses (IE-MAS) collaborative continual learning framework that leverages three synergistic mechanisms—specificity scaling, task-similarity regulation, and knowledge distillation—to suppress forgetting and balance new versus retained knowledge. We conduct experiments on a 200 MW wind farm in Xinjiang, evaluating IE-MAS across four sequence-modeling architectures: LSTM, GRU, Transformer, and TCN. Under the constraint of introducing only 10% new data per update round, IE-MAS reduces the LSTM's RMSE and MAE by approximately 1.7% and 2.5%, respectively, while improving R² by about 1.9%. In cross-model comparisons, IE-MAS performs on par with, or slightly better than, batch training that accumulates all historical data, and cuts per-round training time by roughly 70% on average. This real-time update strategy offers a balance of efficiency and accuracy for ultra-short-term wind power prediction and other sequential continual learning scenarios.
[CLC number]
TK221
[Funding]
State Power Investment Corporation Limited–Shanghai Jiao Tong University "Future Energy Program Joint Fund" (WH410245001/004)