The Economist Close Reading [56] | The Economist | Algorithm's dilemma

The Economist Close Reading [56]

From | January 20, 2018 | Science and Technology


For more English reading, listening, and speaking content, follow

the WeChat official account MyEnglishTrip

I'm Eva

a girl who takes learning English seriously


#Eva's Introduction#

In America, computers have been used for many years to assist bail and sentencing decisions. Proponents of this practice argue that an algorithm trained on a vast amount of data can judge whether a convict will reoffend. Researchers have now put this question to the test. Their results show that the algorithm's predictions are only as accurate as those made by people. Since the algorithm does not outperform human judgment, whether it is good value is open to question.

#The above is my personal summary and understanding; corrections and discussion are welcome

#Progress comes from producing output

Computers and criminal justice

Algorithm's dilemma

Are programs better than people at predicting recidivism?

IN AMERICA, computers have been used to assist bail and sentencing decisions for many years. Their proponents argue that the rigorous logic of an algorithm, trained with a vast amount of data, can make judgments about whether a convict will reoffend that are unclouded by human bias. Two researchers have now put one such program, COMPAS, to the test. According to their study, published in Science Advances, COMPAS did neither better nor worse than people with no special expertise.

Julia Dressel and Hany Farid of Dartmouth College in New Hampshire selected 1,000 defendants at random from a database of 7,214 people arrested in Broward County, Florida between 2013 and 2014, who had been subject to COMPAS analysis. They split their sample into 20 groups of 50. For each defendant they created a short description that included sex, age and prior convictions, as well as the criminal charge faced.
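The sampling-and-grouping step described above can be sketched in a few lines. The integer IDs below are stand-ins for the Broward County records, which are not reproduced here; only the counts (7,214 records, a random sample of 1,000, 20 groups of 50) come from the article.

```python
# Sketch of the study's sampling design: draw 1,000 defendants at
# random from 7,214 records, then split them into 20 groups of 50.
import random

random.seed(0)                          # fixed seed so the sketch is reproducible
database = list(range(7214))            # stand-ins for the 7,214 arrest records
sample = random.sample(database, 1000)  # 1,000 defendants chosen without replacement
groups = [sample[i:i + 50] for i in range(0, 1000, 50)]  # 20 groups of 50

print(len(groups), len(groups[0]))  # → 20 50
```

`random.sample` draws without replacement, so no defendant appears twice across the groups, matching the described design.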

They then turned to Amazon Mechanical Turk, a website which recruits volunteers to carry out small tasks in exchange for cash. They asked 400 such volunteers to predict, on the basis of the descriptions, whether a particular defendant would be arrested for another crime within two years of his arraignment (excluding any jail time he might have served)—a fact now known because of the passage of time. Each volunteer saw only one group of 50 people, and each group was seen by 20 volunteers. When Ms Dressel and Dr Farid crunched the numbers, they found that the volunteers correctly predicted whether someone had been rearrested 62.1% of the time. When the judgments of the 20 who examined a particular defendant's case were pooled, this rose to 67%. COMPAS had scored 65.2%—essentially the same as the human volunteers.
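The pooling arithmetic above — individual volunteers at 62.1%, the pooled judgment of 20 volunteers at 67% — amounts to a majority vote per defendant. A minimal sketch, with made-up votes and outcomes (the study's actual records are not reproduced here, and the study does not publish its exact pooling rule, so majority voting is an assumption):

```python
# Pool many volunteers' yes/no predictions for one defendant by
# majority vote, then score the pooled predictions against the
# known two-year rearrest outcomes.
from collections import Counter

def majority_vote(judgments):
    """Return the prediction made by most volunteers (True = 'will be rearrested')."""
    return Counter(judgments).most_common(1)[0][0]

def accuracy(predictions, outcomes):
    """Fraction of predictions that match the known outcomes."""
    correct = sum(p == o for p, o in zip(predictions, outcomes))
    return correct / len(outcomes)

# Toy example: 3 defendants, each seen by 5 volunteers (the study used 20).
votes = [
    [True, True, False, True, True],     # most volunteers predict rearrest
    [False, False, False, True, False],  # most predict no rearrest
    [True, False, True, True, False],
]
outcomes = [True, False, False]  # what actually happened within two years

pooled = [majority_vote(v) for v in votes]
print(accuracy(pooled, outcomes))  # 2 of 3 pooled votes are correct
```

Pooling helps because volunteers' individual errors are partly independent, so the majority is right more often than any single voter — consistent with the rise from 62.1% to 67% reported above.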

To see whether mention of a person's race (a thorny issue in the American criminal-justice system) would affect such judgments, Ms Dressel and Dr Farid recruited 400 more volunteers and repeated their experiment, this time adding each defendant's race to the description. It made no difference. Participants identified those rearrested with 66.5% accuracy.

All this suggests that COMPAS, though not perfect, is indeed as good as human common sense at parsing pertinent facts to predict who will and will not come to the law's attention again. That is encouraging. Whether it is good value, though, is a different question, for Ms Dressel and Dr Farid have devised an algorithm of their own that was as accurate as COMPAS in predicting rearrest when fed the Broward County data, but which involves only two inputs—the defendant's age and number of prior convictions.
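A two-input predictor of the kind described above can be sketched as a tiny logistic model over age and prior convictions. Everything here — the training data, the scaling, the gradient-descent fit — is an illustrative assumption, not the researchers' actual model or data; it only shows how little machinery two inputs require.

```python
# A minimal two-input rearrest classifier: logistic regression on
# (age, number of prior convictions), fitted by plain gradient descent.
import math

def predict(age, priors, w_age, w_priors, bias):
    """Logistic model: predicted probability of rearrest from two inputs."""
    z = w_age * age + w_priors * priors + bias
    return 1.0 / (1.0 + math.exp(-z))

def train(data, lr=0.01, epochs=2000):
    """Fit the three parameters by gradient descent on the log-loss."""
    w_age = w_priors = bias = 0.0
    for _ in range(epochs):
        for age, priors, rearrested in data:
            p = predict(age, priors, w_age, w_priors, bias)
            err = p - rearrested            # gradient of log-loss w.r.t. z
            w_age    -= lr * err * age
            w_priors -= lr * err * priors
            bias     -= lr * err
    return w_age, w_priors, bias

# Toy training set (hypothetical): age scaled by 1/100, priors by 1/10,
# last field is whether the defendant was rearrested within two years.
toy = [
    (0.22, 0.5, 1), (0.45, 0.0, 0), (0.30, 0.3, 1),
    (0.60, 0.1, 0), (0.25, 0.4, 1), (0.50, 0.0, 0),
]
w_age, w_priors, bias = train(toy)

# On this toy data, a younger defendant with more priors scores higher risk:
print(predict(0.25, 0.4, w_age, w_priors, bias) >
      predict(0.55, 0.0, w_age, w_priors, bias))
```

Three learned numbers are the entire model, which is what makes the value-for-money question pointed: a proprietary tool is hard to justify if something this small matches it on the same data.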

As Tim Brennan, chief scientist at Equivant, which makes COMPAS, points out, the researchers' algorithm, having been trained and tested on data from one and the same place, might prove less accurate if faced with records from elsewhere. But so long as the algorithm behind COMPAS itself remains proprietary, a detailed comparison of the virtues of the two is not possible.

Jan 22 | 519 words


