On Machine Intelligence 3(4’50)
— Zeynep Tufekci
Audits are great and important, but they don't solve all our problems.
Take Facebook's powerful news feed algorithm, you know, the one that ranks everything and decides what to show you from all the friends and pages you follow.
should you be shown another baby picture?
A sullen note from an acquaintance?
An important but difficult news item?
There's no right answer.
Facebook optimizes for engagement on the site: likes, shares, comments.
So in August of 2014, protests broke out in Ferguson, Missouri,
after the killing of an African-American teenager by a white police officer, under murky circumstances.
The news of the protests was all over my algorithmically unfiltered Twitter feed, but nowhere on my Facebook.
Was it my Facebook friends?
I disabled Facebook's algorithm, which is hard because Facebook keeps wanting to make you come under the algorithm's control,
and saw that my friends were talking about it; it was just that the algorithm wasn't showing it to me.
I researched this and found this was a widespread problem.
The story of Ferguson wasn't algorithm-friendly.
It's not "likable." Who's going to click on "like"?
It's not even easy to comment on.
Without likes and comments, the algorithm was likely showing it to even fewer people, so we didn't get to see this.
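The feedback loop Tufekci describes, where low engagement pushes a story down the ranking so even fewer people see it, can be sketched as a toy model. This is purely illustrative: the scoring weights and engagement counts below are invented, and this is not Facebook's actual algorithm.

```python
# Toy sketch of engagement-based feed ranking (NOT Facebook's real system).
# A story's rank comes from its engagement signals, so a hard-to-"like"
# story such as the Ferguson protests sinks below lighter content.

def engagement_score(story):
    # Hypothetical weights: shares and comments count more than likes.
    return 1.0 * story["likes"] + 2.0 * story["shares"] + 1.5 * story["comments"]

def rank_feed(stories):
    # Highest predicted engagement first.
    return sorted(stories, key=engagement_score, reverse=True)

feed = [
    {"title": "Ferguson protests",    "likes": 5,  "shares": 2,  "comments": 1},
    {"title": "Ice Bucket Challenge", "likes": 90, "shares": 40, "comments": 30},
    {"title": "Baby photo",           "likes": 60, "shares": 5,  "comments": 20},
]

for story in rank_feed(feed):
    print(story["title"], engagement_score(story))
```

Under this made-up scoring, the protest story ranks last, so it would be shown to fewer people, collecting even less engagement on the next pass; that is the self-reinforcing loop the talk is pointing at.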
Instead, that week, Facebook's algorithm highlighted this: the ALS Ice Bucket Challenge.
Worthy cause, dump ice water, donate to charity, fine.
But it was super algorithm-friendly.
The machine made this decision for us.
A very important but difficult conversation might have been smothered, had Facebook been the only channel.
1. What is a possible danger of using an algorithm to feed us news?
... Important social issues could be ignored.
2. How is news ranked by Facebook's news feed algorithm?
... According to the likelihood of user engagement.
3. When you protest something,...
... you strongly object to it.
4. Fill in the blanks
I disabled Facebook's algorithm, which is hard because Facebook keeps wanting to make you come under the algorithm's control, and saw that my friends were talking about it.
Now, finally, these systems can also be wrong in ways that don't resemble human systems.
Do you guys remember Watson, IBM's machine-intelligence system that wiped the floor with human contestants on Jeopardy?
It was a great player.
But then, for Final Jeopardy, Watson was asked this question:
"Its largest airport is named for a World War II hero, its second-largest for a World War II battle."
Chicago. The two humans got it right.
Watson, on the other hand, answered "Toronto" for a US city category.
The impressive system also made an error that a human would never make, that a second-grader wouldn't make.
Our machine intelligence can fail in ways that don't fit error patterns of humans, in ways we won't expect and be prepared for.
It'd be lousy not to get a job one is qualified for, but it would triple suck if it was because of stack overflow in some subroutine.
In May of 2010, a flash crash on Wall Street, fueled by a feedback loop in Wall Street's "sell" algorithm, wiped a trillion dollars of value in 36 minutes.
I don't even want to think what error means in the context of lethal autonomous weapons.
So, yes, humans have always had biases.
Decision makers and gatekeepers, in courts, in news, in war... they make mistakes; but that's exactly my point.
We cannot escape these difficult questions.
We cannot outsource our responsibilities to machines.
Artificial intelligence does not give us a " Get out of ethics free" card.
1. Why is Tufekci concerned about using machine intelligence to control lethal weapons?
... Algorithm errors might cause heavy casualties.
2. What does Tufekci mean by "artificial intelligence does not give us a 'Get out of ethics free' card"?
... Decisions made by AI don't free people from moral responsibilities.
3. To wipe the floor with someone is...
...to defeat them easily.
4. Fill in the blanks
Our machine intelligence can fail in ways that don't fit error patterns of humans, in ways we won't expect and be prepared for.
Data scientist Fred Benenson calls this math-washing.
We need the opposite.
We need to cultivate algorithm suspicion, scrutiny and investigation.
We need to make sure we have algorithmic accountability, auditing and meaningful transparency.
We need to accept that bringing math and computation to messy, value-laden human affairs does not bring objectivity; rather, the complexity of human affairs invades the algorithms.
Yes, we can and we should use computation to help us make better decisions.
But we have to own up to our moral responsibility to judgment,
and use algorithms within that framework,
not as a means to abdicate and outsource our responsibilities to one another as human to human.
Machine intelligence is here.
That means we must hold on ever tighter to human values and human ethics.
Thank you.
1. How does Tufekci end her presentation?
... by emphasizing the importance of human values and ethics.
2. Tufekci believes that...
...machine intelligence needs human oversight.
3. Cloze
But we have to own up to our moral responsibility to judgment, and use algorithms within that framework, not as a means to abdicate and outsource our responsibilities to one another.
4. Listen and retell
These systems can also be wrong in ways that don't resemble human systems.
5. Artificial intelligence does not give us a " Get out of ethics free " card.
6. We need to cultivate algorithm suspicion, scrutiny and investigation.