I think the best, nuanced approach is to continue to maintain clear responsible use of AI, which the Government has actually introduced in the form of a model AI governance framework. One particular example is AI Verify, a toolkit developed by IMDA. For the sake of brevity in this discussion, I do not want to go into too much detail on that. I am happy to walk the Members of the House through it at a subsequent Parliamentary Sitting, if another Parliamentary Question is filed on it, to talk about how we can use it.
For the use of data in terms of the algorithms that many of these companies may want to use, it is important that the data pertaining to individuals is anonymised. And, of course, consent would really be one of those things that we are looking at as well. So, I hope that gives the Member that reassurance that we are doing everything that we can to stay on top of it.
Mr Speaker: Dr Tan Wu Meng, a short one please.
Dr Tan Wu Meng (Jurong): Can I ask the Minister, in his assessment of the current landscape of TAFEP cases on unfair HR practices, roughly what proportion of cases did the adjudication hinge on proof of intention by the hiring manager or the company's management? Because with AI, it can be difficult to ascertain intention because the AI is not able to give testimony and be cross-examined or provide information for investigation the way a human can be questioned.
Dr Tan See Leng: I thank Dr Tan for his supplementary question. As I have said, we are at a very pivotal stage of transformation in the adoption of AI. We were having a series of discussions earlier on, and Dr Tan himself brought to our attention that, today, you can actually file a legal suit using ChatGPT.
So, what is fundamentally important for us today is to work closely with employees, or with potential employees who may feel that they are aggrieved, to surface such cases to us so that we can investigate. Then, obviously, we will work with the companies to see if some of the algorithms have an inherent bias – sometimes, it may not be intentional; it could be a function of the datasets that the company is using – in looking at certain characteristics and, therefore, favouring hiring or promotion of candidates with those characteristics.
So, we need that constant sense of vigilance, and we need the participation of all parties coming together. We also need different agencies – the IHRP, the Labour Movement – and we need our tripartite partners to come into the space alongside us. Then we can ensure a more equitable society and workplace for everyone.
Mr Speaker: Mr Pritam Singh.
Mr Pritam Singh (Aljunied): Mr Speaker, just a response to the Minister through a question vis-à-vis what he said about AI. Unfortunately, from the worker's perspective, one usually is not in a position of information superiority over the employer, so you do not know what back-end selections or pre-qualifications your AI system has done.