TED Talk: Machine intelligence makes human morals more important 機器智能使人類道德更重要
Speaker: Zeynep Tufekci
第二課
Machine intelligence is here. 機器智能已經(jīng)來臨。
We're now using computation to make all sorts of decisions, but also new kinds of decisions. 我們現(xiàn)在使用計算機運算來做各種各樣的決定,其中還包括一些新型的決定。
We're asking questions to computation that have no single right answers, that are subjective and open-ended and value-laden. 我們向運算提出的問題沒有唯一的正確答案,這些問題是主觀的、開放式的、受價值觀影響的。
We're asking questions like, "Who should the company hire?" 我們在問這樣的問題:“公司應該雇傭誰?”
"Which update from which friend should you be shown?" “應該向你展示哪位朋友的哪條動態(tài)?”
"Which convict is more likely to reoffend?" “哪一個犯人更有可能重新犯罪?”
"Which news item or movie should be recommended to people?" “應該向人們推薦哪條新聞或哪部電影?”
Look, yes, we've been using computers for a while, but this is different. 看,是的,我們已經(jīng)使用電腦一段時間了,但這次不同。
This is a historical twist, because we cannot anchor computation for such subjective decisions the way we can anchor computation for flying airplanes, building bridges, going to the moon. 這是一個歷史性的轉(zhuǎn)折,因為對于這類主觀決定,我們無法像駕駛飛機、建造橋梁、登陸月球那樣為計算設(shè)定錨點。
Are airplanes safer? Did the bridge sway and fall? 飛機更安全嗎?橋有沒有搖晃倒塌?
There, we have agreed-upon, fairly clear benchmarks, and we have laws of nature to guide us. 在那些領(lǐng)域,我們有公認的、相當明確的基準,還有自然法則來指導我們。
We have no such anchors and benchmarks for decisions in messy human affairs. 在紛亂的人類事務(wù)中,我們沒有這樣的錨點和基準來做決定。
【選擇】What does Tufekci mean by "historical twist"? - Computers are being used to solve subjective problems for the first time in history.
【選擇】According to Tufekci, machine intelligence should not be trusted to... hire new employees.
【選擇】If sth. reflects your personal values, it is... value-laden. value-laden adj. 受主觀價值影響的,主觀的
To make things more complicated, our software is getting more powerful, but it's also getting less transparent and more complex. 讓事情更復雜的是,我們的軟件正變得越來越強大,但也越來越不透明、越來越復雜。
Recently, in the past decade, complex algorithms have made great strides. 【跟讀】最近,在過去的十年中,復雜的算法取得了很大的進步。
They can recognize human faces. They can decipher handwriting. 它們可以識別人臉,能辨認筆跡。
They can detect credit card fraud and block spam and they can translate between languages. 【跟讀】它們可以檢測信用卡欺詐、攔截垃圾郵件,還能在不同語言之間進行翻譯。
They can detect tumors in medical imaging. 它們可以在醫(yī)學影像中發(fā)現(xiàn)腫瘤。
They can beat humans in chess and Go. 它們可以在國際象棋和圍棋中擊敗人類。
Much of this progress comes from a method called "machine learning." 這種進步很大程度上來自一種叫做“機器學習”的方法。
Machine learning is different than traditional programming, where you give the computer detailed, exact, painstaking instructions. 機器學習不同于傳統(tǒng)編程;在傳統(tǒng)編程中,你給計算機詳細、精確、細致的指令。
It's more like you take the system and you feed it lots of data, including unstructured data, like the kind we generate in our digital lives. 它更像是你拿來一個系統(tǒng),給它喂入大量數(shù)據(jù),包括非結(jié)構(gòu)化數(shù)據(jù),比如我們在數(shù)字生活中產(chǎn)生的那種。
And the system learns by churning through this data. 系統(tǒng)通過翻查這些數(shù)據(jù)來學習。
And also, crucially, these systems don't operate under a single-answer logic. 而且,關(guān)鍵的是,這些系統(tǒng)并不按照單一答案的邏輯運作。
They don't produce a simple answer; it's more probabilistic: "This one is probably more like what you're looking for." 它們不會給出一個簡單的答案;給出的更像是概率:“這個可能更接近你要找的?!?
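The learn-from-data, answer-with-probabilities loop described above can be sketched in a few lines of Python. This is a toy illustration only, not anything from the talk: the function names and the "movie recommendation" data are invented for the example.

```python
from collections import Counter

def train(examples):
    """'Learn' by churning through labeled data: count how often
    each feature co-occurs with each label."""
    pair_counts = Counter()
    feature_counts = Counter()
    for features, label in examples:
        for f in features:
            pair_counts[(f, label)] += 1
            feature_counts[f] += 1
    return pair_counts, feature_counts

def predict(model, features, labels):
    """Return a score per label -- 'this one is probably more like
    what you're looking for' -- instead of one right answer."""
    pair_counts, feature_counts = model
    scores = {}
    for label in labels:
        total = sum(
            pair_counts[(f, label)] / feature_counts[f]
            for f in features
            if feature_counts[f]
        )
        scores[label] = total / len(features)
    return scores

# Hypothetical viewing history: features observed for a user,
# paired with whether a recommendation worked out.
history = [
    ({"watched_scifi", "late_night"}, "recommend"),
    ({"watched_scifi"}, "recommend"),
    ({"watched_romance"}, "skip"),
]
model = train(history)
scores = predict(model, {"watched_scifi"}, ["recommend", "skip"])
print(scores)  # -> {'recommend': 1.0, 'skip': 0.0}
```

Note that no rule was ever written saying sci-fi viewers get recommendations; the preference emerges from the counts, which is the point the talk makes about not fully knowing what the system learned.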
Now, the upside is: this method is really powerful. 現(xiàn)在,好處是:這種方法真的很強大。
The head of Google's AI systems called it, "the unreasonable effectiveness of data." 谷歌人工智能系統(tǒng)的負責人稱之為“數(shù)據(jù)的不合理有效性”。
The downside is, we don't really understand what the system learned. 【跟讀】缺點是,我們并不真正理解系統(tǒng)所學到的東西。
In fact, that's its power. This is less like giving instructions to a computer; it's more like training a puppy-machine-creature we don't really understand or control. 事實上,這正是它的力量所在。這不太像給電腦下指令,更像是在訓練一只我們并不真正理解、也無法控制的“小狗機器生物”。
So this is our problem. It's a problem when this artificial intelligence system gets things wrong. 這就是我們的問題。當這種人工智能系統(tǒng)出錯時,這是個問題。
It's also a problem when it gets things right, because we don't even know which is which when it's a subjective problem. 當它做對時同樣是個問題,因為面對主觀問題,我們甚至分不清哪個是對、哪個是錯。
We don't know what this thing is thinking. 我們不知道這個東西在想什么。
【選擇】Why is it a problem when machine intelligence gets things right? - People can't examine how the system reaches its conclusion.
【選擇】How is machine learning different from traditional programming? - It leads to probabilistic answers. 機器學習與傳統(tǒng)編程有何不同?它給出的是概率性的答案。
【選擇】If a method or argument is probabilistic, it is based on what is most likely to be true.
So, consider a hiring algorithm -- a system used to hire people, right, using machine-learning systems. 因此,考慮一種雇傭算法——一種用機器學習來雇人的系統(tǒng)。
Such a system would have been trained on previous employees' data and instructed to find and hire people like the existing high performers in the company. 這樣的系統(tǒng)會用以前員工的數(shù)據(jù)進行訓練,并被指示去尋找和雇用像公司里現(xiàn)有的高績效員工那樣的人。
Sounds good. 聽起來不錯。
I once attended a conference that brought together human resources managers and executives, high-level people, using such systems in hiring. 我曾參加過一個會議,與會者是在招聘中使用這類系統(tǒng)的人力資源經(jīng)理和高管等高層人員。
They were super excited.他們非常興奮。
They thought that this would make hiring more objective, less biased, and give women and minorities a better shot against biased human managers. 他們認為這會讓招聘更客觀、更少偏見,讓女性和少數(shù)族裔在有偏見的人類管理者面前有更好的機會。
And look -- human hiring is biased. 看,人的雇傭是有偏見的。
I know. I mean, in one of my early jobs as a programmer, my immediate manager would sometimes come down to where I was really early in the morning or really late in the afternoon, and she'd say, "Zeynep, let's go to lunch!" 我知道。我的意思是,在我早期做程序員的一份工作中,我的直屬經(jīng)理有時會在一大早或下午很晚的時候來到我的工位,對我說:“Zeynep,我們?nèi)コ晕顼埌桑 ?
I'd be puzzled by the weird timing. 這奇怪的時間點讓我很困惑。
It's 4 pm. Lunch? I was broke, so free lunch. I always went. I later realized what was happening. 下午四點,吃午餐?我當時很窮,所以有免費午餐我總是去。后來我才明白是怎么回事。
My immediate managers had not confessed to their higher-ups that the programmer they hired for a serious job was a teen girl who wore jeans and sneakers to work. 我的直屬經(jīng)理們還沒有向上司承認,他們雇來做一項重要工作的程序員,是一個穿著牛仔褲和運動鞋上班的十幾歲女孩。
I was doing a good job, I just looked wrong and was the wrong age and gender. 我工作做得很好,只是形象“不對”,年齡和性別也“不對”。
【跟讀】I was doing a good job, but I was the wrong age and gender.
So hiring in a gender- and race-blind way certainly sounds good to me. 【跟讀】因此,通過一個沒有性別和種族偏見的方式雇人,在我聽來當然很好。
But with these systems, it is more complicated, and here's why: 但有了這些系統(tǒng),事情變得更復雜,原因如下:
Currently, computational systems can infer all sorts of things about you from your digital crumbs, even if you have not disclosed those things. 目前,計算系統(tǒng)可以從你留下的數(shù)字痕跡推斷出關(guān)于你的各種各樣的事情,即使你從未透露過這些信息。
They can infer your sexual orientation, your personality traits, your political leanings. 它們可以推斷出你的性取向、你的性格特征、你的政治傾向。
They have predictive power with high levels of accuracy. 它們的預測能力具有很高的準確度。
Remember -- for things you haven't even disclosed. This is inference. 記住——這些甚至是你未曾透露的事情。這就是推斷。
【選擇】How is a hiring algorithm more complicated than it appears to be? - It infers undisclosed information and makes predictions.
【選擇】A hiring algorithm would find and hire strong candidates by... basing its criteria on existing employees.