Machine intelligence makes human morals more important
by Zeynep Tufekci
So, I started my first job as a computer programmer in my very first year of college -- basically, as a teenager.
Soon after I started working, writing software in a company, a manager who worked at the company came down to where I was, and he whispered to me, "Can he tell if I'm lying?" There was nobody else in the room.
"Can who tell if you're lying? And why are we whispering?"
The manager pointed at the computer in the room. "Can he tell if I'm lying?" Well, that manager was having an affair with the receptionist.
(Laughter)
And I was still a teenager. So I whisper-shouted back to him, "Yes, the computer can tell if you're lying."
(Laughter)
Well, I laughed, but actually, the laugh's on me. Nowadays, there are computational systems that can suss out emotional states and even lying from processing human faces. Advertisers and even governments are very interested.
I had become a computer programmer because I was one of those kids crazy about math and science. But somewhere along the line I'd learned about nuclear weapons, and I'd gotten really concerned with the ethics of science. I was troubled. However, because of family circumstances, I also needed to start working as soon as possible. So I thought to myself, hey, let me pick a technical field where I can get a job easily and where I don't have to deal with any troublesome questions of ethics. So I picked computers.
(Laughter)
Well, ha, ha, ha! All the laughs are on me. Nowadays, computer scientists are building platforms that control what a billion people see every day. They're developing cars that could decide who to run over. They're even building machines, weapons, that might kill human beings in war. It's ethics all the way down.
Machine intelligence is here. We're now using computation to make all sorts of decisions, but also new kinds of decisions. We're asking questions of computation that have no single right answers, that are subjective and open-ended and value-laden.
We're asking questions like, "Who should the company hire?" "Which update from which friend should you be shown?" "Which convict is more likely to reoffend?" "Which news item or movie should be recommended to people?"
Look, yes, we've been using computers for a while, but this is different. This is a historical twist, because we cannot anchor computation for such subjective decisions the way we can anchor computation for flying airplanes, building bridges, going to the moon. Are airplanes safer? Did the bridge sway and fall? There, we have agreed-upon, fairly clear benchmarks, and we have laws of nature to guide us. We have no such anchors and benchmarks for decisions in messy human affairs.
To make things more complicated, our software is getting more powerful, but it's also getting less transparent and more complex. Recently, in the past decade, complex algorithms have made great strides. They can recognize human faces. They can decipher handwriting. They can detect credit card fraud and block spam and they can translate between languages. They can detect tumors in medical imaging. They can beat humans in chess and Go.
Much of this progress comes from a method called "machine learning." Machine learning is different than traditional programming, where you give the computer detailed, exact, painstaking instructions. It's more like you take the system and you feed it lots of data, including unstructured data, like the kind we generate in our digital lives. And the system learns by churning through this data. And also, crucially, these systems don't operate under a single-answer logic. They don't produce a simple answer; it's more probabilistic: "This one is probably more like what you're looking for."
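As a minimal sketch of that difference (a hypothetical example in Python with scikit-learn; the data and the rule are made up): traditional programming means a person writes the exact instructions, while the machine-learning version is handed examples and gives back a probability rather than a single answer.

    # Hypothetical sketch: hand-written instructions vs. learning from examples.
    from sklearn.linear_model import LogisticRegression

    # Traditional programming: detailed, exact, painstaking instructions.
    def rule_based_spam_check(message: str) -> bool:
        return "free money" in message.lower() or message.count("!") > 3

    # Machine learning: feed the system data and let it churn through it.
    # X holds made-up features of past messages; y marks which were spam.
    X = [[0, 1], [3, 0], [5, 1], [0, 0]]   # e.g. [exclamation marks, money words]
    y = [0, 1, 1, 0]
    model = LogisticRegression().fit(X, y)

    # The output is probabilistic, not a single right answer:
    # "this one is probably more like what you're looking for."
    print(model.predict_proba([[4, 1]]))   # e.g. something like [[0.1, 0.9]]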
Now, the upside is: this method is really powerful. The head of Google's AI systems called it "the unreasonable effectiveness of data." The downside is, we don't really understand what the system learned. In fact, that's its power. This is less like giving instructions to a computer; it's more like training a puppy-machine-creature we don't really understand or control. So this is our problem. It's a problem when this artificial intelligence system gets things wrong. It's also a problem when it gets things right, because we don't even know which is which when it's a subjective problem. We don't know what this thing is thinking.
So, consider a hiring algorithm -- a system used to hire people, using machine-learning systems. Such a system would have been trained on previous employees' data and instructed to find and hire people like the existing high performers in the company. Sounds good. I once attended a conference that brought together human resources managers and executives, high-level people, using such systems in hiring. They were super excited. They thought that this would make hiring more objective, less biased, and give women and minorities a better shot against biased human managers.
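As a rough sketch of what such a system amounts to under the hood (made-up data and labels, no real vendor's product): it is trained on records of past employees labeled by who turned out to be a high performer, it scores new applicants the same way, and all it contains afterwards are arrays of numeric weights.

    # Rough sketch with made-up data: train on past employees, instructed to find
    # people like the existing high performers, then score a new applicant.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    X = rng.random((500, 40))           # 500 past employees, 40 unnamed features
    y = rng.integers(0, 2, size=500)    # 1 = was rated a high performer (hypothetical)

    model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000).fit(X, y)

    applicant = rng.random((1, 40))
    print("probability of 'high performer':", model.predict_proba(applicant)[0, 1])

    # What can be inspected afterwards is just weight matrices -- numbers with no
    # human-readable names attached to them.
    for i, w in enumerate(model.coefs_):
        print(f"layer {i}: weights of shape {w.shape}")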
And look -- human hiring is biased. I know. I mean, in one of my early jobs as a programmer, my immediate manager would sometimes come down to where I was really early in the morning or really late in the afternoon, and she'd say, "Zeynep, let's go to lunch!" I'd be puzzled by the weird timing. It's 4pm. Lunch? I was broke, so free lunch. I always went. I later realized what was happening. My immediate managers had not confessed to their higher-ups that the programmer they hired for a serious job was a teen girl who wore jeans and sneakers to work. I was doing a good job, I just looked wrong and was the wrong age and gender.
So hiring in a gender- and race-blind way certainly sounds good to me. But with these systems, it is more complicated, and here's why: Currently, computational systems can infer all sorts of things about you from your digital crumbs, even if you have not disclosed those things. They can infer your sexual orientation, your personality traits, your political leanings. They have predictive power with high levels of accuracy. Remember -- for things you haven't even disclosed. This is inference.
I have a friend who developed such computational systems to predict the likelihood of clinical or postpartum depression from social media data. The results are impressive. Her system can predict the likelihood of depression months before the onset of any symptoms -- months before. No symptoms, there's prediction. She hopes it will be used for early intervention. Great! But now put this in the context of hiring.
So at this human resources managers conference, I approached a high-level manager in a very large company, and I said to her, "Look, what if, unbeknownst to you, your system is weeding out people with high future likelihood of depression? They're not depressed now, just maybe in the future, more likely. What if it's weeding out women more likely to be pregnant in the next year or two but aren't pregnant now? What if it's hiring aggressive people because that's your workplace culture?" You can't tell this by looking at gender breakdowns. Those may be balanced. And since this is machine learning, not traditional coding, there is no variable there labeled "higher risk of depression," "higher risk of pregnancy," "aggressive guy scale." Not only do you not know what your system is selecting on, you don't even know where to begin to look. It's a black box. It has predictive power, but you don't understand it.
"What safeguards," I asked, "do you have to make sure that your black box isn't doing something shady?" She looked at me as if I had just stepped on 10 puppy tails.
(Laughter)
She stared at me and she said, "I don't want to hear another word about this." And she turned around and walked away. Mind you -- she wasn't rude. It was clearly: what I don't know isn't my problem, go away, death stare.
(Laughter)
Look, such a system may even be less biased than human managers in some ways. And it could make monetary sense. But it could also lead to a steady but stealthy shutting out of the job market of people with higher risk of depression. Is this the kind of society we want to build, without even knowing we've done this, because we turned decision-making to machines we don't totally understand?
Another problem is this: these systems are often trained on data generated by our actions, human imprints. Well, they could just be reflecting our biases, and these systems could be picking up on our biases and amplifying them and showing them back to us, while we're telling ourselves, "We're just doing objective, neutral computation."
Researchers found that on Google, women are less likely than men to be shown job ads for high-paying jobs. And searching for African-American names is more likely to bring up ads suggesting criminal history, even when there is none. Such hidden biases and black-box algorithms -- which researchers sometimes uncover, and sometimes we never learn about -- can have life-altering consequences.
In Wisconsin, a defendant was sentenced to six years in prison for evading the police. You may not know this, but algorithms are increasingly used in parole and sentencing decisions; his sentence had been informed by one such risk score. He wanted to know: How is this score calculated? It's a commercial black box. The company refused to have its algorithm challenged in open court. But ProPublica, an investigative nonprofit, audited that very algorithm with what public data they could find, and found that its outcomes were biased and its predictive power was dismal, barely better than chance, and that it wrongly labeled black defendants as future criminals at twice the rate of white defendants.
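A simplified sketch of the kind of calculation such an audit rests on (the file, column names and score cutoff here are hypothetical): for each group, check how often people who did not go on to reoffend were nonetheless labeled high risk.

    # Simplified audit sketch (hypothetical file and column names): among people
    # who did NOT reoffend, how often was each group labeled "high risk"?
    import pandas as pd

    df = pd.read_csv("risk_scores.csv")          # columns: score, reoffended, race
    df["labeled_high_risk"] = df["score"] >= 7   # hypothetical cutoff on a 1-10 score

    did_not_reoffend = df[df["reoffended"] == 0]
    false_positive_rate = did_not_reoffend.groupby("race")["labeled_high_risk"].mean()
    print(false_positive_rate)
    # ProPublica's published finding was roughly this pattern: the rate for black
    # defendants was about twice the rate for white defendants.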
So, consider this case: This woman was late picking up her godsister from a school in Broward County, Florida, running down the street with a friend of hers. They spotted an unlocked kid's bike and a scooter on a porch and foolishly jumped on it. As they were speeding off, a woman came out and said, "Hey! That's my kid's bike!" They dropped it, they walked away, but they were arrested.
She was wrong, she was foolish, but she was also just 18. She had a couple of juvenile misdemeanors. Meanwhile, a man had been arrested for shoplifting in Home Depot -- 85 dollars' worth of stuff, a similarly petty crime. But he had two prior armed-robbery convictions. Yet the algorithm scored her, not him, as high risk. Two years later, ProPublica found that she had not reoffended; it was just hard for her to get a job with her record. He, on the other hand, did reoffend and is now serving an eight-year prison term for a later crime. Clearly, we need to audit our black boxes and not let them have this kind of unchecked power.
(Applause)
Audits are great and important, but they don't solve all our problems. Take Facebook's powerful news feed algorithm -- you know, the one that ranks everything and decides what to show you from all the friends and pages you follow. Should you be shown another baby picture?
(Laughter)
A sullen note from an acquaintance? An important but difficult news item? There's no right answer. Facebook optimizes for engagement on the site: likes, shares, comments.
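To make "optimizes for engagement" concrete, here is a hypothetical scorer -- not Facebook's actual algorithm, just the shape of the idea: items are ranked purely by predicted likes, shares and comments, and nothing else enters the ranking.

    # Hypothetical engagement scorer -- not Facebook's actual algorithm.
    stories = [
        {"title": "Another baby picture",              "likes": 300, "shares": 20, "comments": 40},
        {"title": "Sullen note from an acquaintance",  "likes": 5,   "shares": 0,  "comments": 2},
        {"title": "Important but difficult news item", "likes": 15,  "shares": 5,  "comments": 8},
    ]

    def engagement_score(story):
        # Made-up weights; the point is that only engagement counts.
        return story["likes"] + 2 * story["shares"] + 3 * story["comments"]

    for story in sorted(stories, key=engagement_score, reverse=True):
        print(story["title"], engagement_score(story))
    # The important-but-difficult item sinks to the bottom, even though nobody
    # wrote a rule that says "bury hard news."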
In August of 2014, protests broke out in Ferguson, Missouri, after the killing of an African-American teenager by a white police officer, under murky circumstances. The news of the protests was all over my algorithmically unfiltered Twitter feed, but nowhere on my Facebook. Was it my Facebook friends? I disabled Facebook's algorithm, which is hard because Facebook keeps wanting to make you come under the algorithm's control, and saw that my friends were talking about it. It's just that the algorithm wasn't showing it to me. I researched this and found this was a widespread problem.
The story of Ferguson wasn't algorithm-friendly. It's not "likable." Who's going to click on "like?" It's not even easy to comment on. Without likes and comments, the algorithm was likely showing it to even fewer people, so we didn't get to see this. Instead, that week, Facebook's algorithm highlighted this, which is the ALS Ice Bucket Challenge. Worthy cause; dump ice water, donate to charity, fine. But it was super algorithm-friendly. The machine made this decision for us. A very important but difficult conversation might have been smothered, had Facebook been the only channel.
Now, finally, these systems can also be wrong in ways that don't resemble human systems. Do you guys remember Watson, IBM's machine-intelligence system that wiped the floor with human contestants on Jeopardy? It was a great player. But then, for Final Jeopardy, Watson was asked this question: "Its largest airport is named for a World War II hero, its second-largest for a World War II battle."
(Hums Final Jeopardy music)
Chicago. The two humans got it right. Watson, on the other hand, answered "Toronto" -- for a US city category! The impressive system also made an error that a human would never make, a second-grader wouldn't make.
Our machine intelligence can fail in ways that don't fit error patterns of humans, in ways we won't expect and be prepared for. It'd be lousy not to get a job one is qualified for, but it would triple suck if it was because of stack overflow in some subroutine.
(Laughter)
In May of 2010, a flash crash on Wall Street fueled by a feedback loop in Wall Street's "sell" algorithm wiped a trillion dollars of value in 36 minutes. I don't even want to think what "error" means in the context of lethal autonomous weapons.
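As a toy illustration of the flash-crash mechanism (nothing like the actual 2010 trading systems, just the feedback loop): a small dip triggers selling, and the selling pushes the price down further, which triggers more selling.

    # Toy feedback loop, not the actual 2010 trading systems: a falling price
    # triggers selling, and the selling pushes the price down further.
    price = 100.0
    holdings = 1_000_000

    for minute in range(36):
        if price < 100.0:                       # "the market is falling -- sell"
            to_sell = int(holdings * 0.10)
            holdings -= to_sell
            price *= 1 - to_sell / 5_000_000    # selling pressure lowers the price
        else:
            price *= 0.999                      # a tiny initial dip starts the loop
        print(f"minute {minute:2d}: price {price:8.2f}, holdings {holdings:>9,}")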
So yes, humans have always had biases. Decision makers and gatekeepers, in courts, in news, in war ... they make mistakes; but that's exactly my point. We cannot escape these difficult questions. We cannot outsource our responsibilities to machines.
(Applause)
Artificial intelligence does not give us a "Get out of ethics free" card.
Data scientist Fred Benenson calls this math-washing. We need the opposite. We need to cultivate algorithm suspicion, scrutiny and investigation. We need to make sure we have algorithmic accountability, auditing and meaningful transparency. We need to accept that bringing math and computation to messy, value-laden human affairs does not bring objectivity; rather, the complexity of human affairs invades the algorithms. Yes, we can and we should use computation to help us make better decisions. But we have to own up to our moral responsibility to judgment, and use algorithms within that framework, not as a means to abdicate and outsource our responsibilities to one another as human to human.
Machine intelligence is here. That means we must hold on ever tighter to human values and human ethics.
Thank you.
(Applause)