
Liulishuo English, Day 13

Listen to the explanation with these questions in mind

Q1: What new problem has emerged with artificial intelligence?

Q2: How should the verb "deprecate" be understood? (to belittle, to disregard)

Q3: What are your own visions for artificial intelligence?

Red marks the key vocabulary

Blue marks the main structure of each sentence

What's wrong with AI? Try asking a human being 

Amazon has apparently abandoned an AI system aimed at automating its recruitment process. The system gave job candidates scores ranging from one to five stars, a bit like shoppers rating products on the Amazon website.

The trouble was, the program tended to give five stars to men and one star to women. According to Reuters, it “penalised résumés that included the word ‘women’s’, as in ‘women’s chess club captain’” and marked down applicants who had attended women-only colleges.

It wasn’t that the program was malevolently misogynistic. Rather, like all AI programs, it had to be “trained” by being fed data about what constituted good results. Amazon, naturally, fed it with details of its own recruitment programme over the previous 10 years. Most applicants had been men, as had most recruits. What the program learned was that men, not women, were good candidates.
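To make that mechanism concrete, here is a minimal Python sketch with entirely invented numbers and feature names (it is an illustration of how skewed training data produces a skewed scorer, not a description of Amazon's actual system):

```python
from collections import defaultdict

# Hypothetical historical records: (resume_feature, was_hired).
# In this made-up history, most past hires lacked the "women's" keyword,
# mirroring the skew the article describes.
history = (
    [("no_womens_keyword", True)] * 80 + [("no_womens_keyword", False)] * 20 +
    [("womens_keyword", True)] * 5 + [("womens_keyword", False)] * 45
)

# "Training": the model simply learns the historical hire rate per feature.
hired = defaultdict(int)
total = defaultdict(int)
for feature, was_hired in history:
    total[feature] += 1
    hired[feature] += was_hired

def score(feature):
    """Map the learned hire rate onto a one-to-five star scale."""
    rate = hired[feature] / total[feature]
    return round(1 + 4 * rate)

for feature in ("no_womens_keyword", "womens_keyword"):
    print(feature, score(feature))
# Résumés without the keyword score about four stars, those with it about
# one star, even though nothing in the code refers to gender explicitly.
```

Nothing here is "malevolent"; the scorer just reproduces whatever pattern the past data contains, which is exactly how the bias creeps in.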

It’s not the first time AI programs have been shown to exhibit bias. Software used in the US justice system to assess a criminal defendant’s likelihood of reoffending is more likely to judge black defendants as potential recidivists. Facial recognition software is poor at recognising non-white faces. A Google photo app even labelled African Americans “gorillas”.

All this should teach us three things. First, the issue here is not to do with AI itself, but with social practices. The biases are in real life.

Second, the problem with AI arises when we think of machines as being objective. A machine is only as good as the humans programming it.

And third, while there are many circumstances in which machines are better, especially where speed is paramount, we have a sense of right and wrong and social means of challenging bias and injustice. We should never deprecate that.
