On 8 May 2024, Singapore's Minister for Communications and Information, Mrs Josephine Teo, gave a written reply in Parliament to a question from Nominated Member of Parliament Assoc Prof Razwana Begum Abdul Rahim on risk assessments by providers of artificial intelligence (AI) technologies.
The question and reply, from the parliamentary record, follow:
Assoc Prof Razwana Begum Abdul Rahim asked the Minister for Communications and Information (a) whether Singapore's National Artificial Intelligence (AI) Strategy 2.0 includes a requirement for providers of AI technologies to complete a risk assessment prior to making the technology publicly available; (b) if so, who undertakes the assessment; and (c) what risks are assessed.
Mrs Josephine Teo: The Singapore National AI Strategy 2.0 identifies the presence of a trusted ecosystem as a key enabler for robust AI development.
In fact, Singapore was a first-mover in launching our AI Model Governance Framework back in 2019, which recommends best practices to address governance issues in AI deployment. We continue to update it to address emerging risks, including by launching a Framework for Generative AI this year.

Meanwhile, the Government also provides practical support for organisations seeking to manage risks in the development and deployment of AI, including through launching open-source testing toolkits, such as AI Verify, to help them validate their AI systems' performance on internationally recognised governance principles like robustness and explainability.
These frameworks provide a useful baseline for the Government to partner industry on managing and assessing AI risks across the ecosystem. In the finance sector, financial institutions are guided by sector-specific AI governance guidelines, such as the Monetary Authority of Singapore's (MAS') Principles to Promote Fairness, Ethics, Accountability and Transparency (FEAT), which aligns closely to the earlier-mentioned AI governance frameworks. Many companies have supplemented these with additional internal guidelines to oversee AI development, examples of which can be found in the Personal Data Protection Commission's Compendium of Use Cases for the Model AI Governance Framework. For example, DBS has implemented its own Responsible Data Use framework for its AI models to comply with legal, security and quality standards and utilises risk assessment tools, such as the probability-severity matrix.
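The probability-severity matrix mentioned above is a standard risk-assessment tool: each identified risk is scored on how likely it is and how severe its impact would be, and the product of the two determines an overall rating. The sketch below is a generic illustration of that technique only; the category names, scales, and thresholds are assumptions for this example, not DBS's actual framework.

```python
# Generic probability-severity risk matrix (illustrative sketch only;
# the labels and thresholds below are assumptions, not DBS's framework).

PROBABILITY = ["rare", "unlikely", "possible", "likely", "almost_certain"]  # scored 1..5
SEVERITY = ["negligible", "minor", "moderate", "major", "critical"]          # scored 1..5


def risk_score(probability: str, severity: str) -> int:
    """Score a risk as probability rank times severity rank (1..25)."""
    p = PROBABILITY.index(probability) + 1
    s = SEVERITY.index(severity) + 1
    return p * s


def risk_level(score: int) -> str:
    """Bucket a score into a qualitative rating (thresholds are illustrative)."""
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"


# Example: a "possible" (3) chance of a "major" (4) harm scores 12 -> medium.
score = risk_score("possible", "major")
print(score, risk_level(score))  # 12 medium
```

In practice a governance process would attach such a rating to each AI use case and require stronger controls or sign-off as the rating rises; the matrix itself only makes that prioritisation explicit.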
Besides enhancing our governance approach domestically, we collaborate with international partners to build a trusted environment for AI worldwide. For instance, we have conducted a joint mapping exercise between AI Verify and the US' AI Risk Management Framework, to harmonise approaches and streamline the compliance burdens on organisations deploying AI across different jurisdictions. We will continue to seek out such opportunities and adapt our methods in tandem with the technology development.
Editor: HQ
Reviewer: HQ
Source: Parliament of Singapore