HOPE English x ICRT

Max Tegmark: "How to Get Empowered, Not Overpowered, by AI"


After 13.8 billion years of cosmic history, our universe has woken up and become aware of itself. From a small blue planet, tiny, conscious parts of our universe have begun gazing out into the cosmos with telescopes, discovering something humbling. We've discovered that our universe is vastly grander than our ancestors imagined and that life seems to be an almost imperceptibly small perturbation on an otherwise dead universe. But we've also discovered something inspiring, which is that the technology we're developing has the potential to help life flourish like never before, not just for centuries but for billions of years, and not just on earth but throughout much of this amazing cosmos.

I think of the earliest life as "Life 1.0" because it was really dumb, like bacteria, unable to learn anything during its lifetime. I think of us humans as "Life 2.0" because we can learn, which we, in nerdy geek-speak, might think of as installing new software into our brains, like languages and job skills. "Life 3.0," which can design not only its software but also its hardware, of course doesn't exist yet. But perhaps our technology has already made us "Life 2.1," with our artificial knees, pacemakers and cochlear implants. So let's take a closer look at our relationship with technology, OK?

As an example, the Apollo 11 moon mission was both successful and inspiring, showing that when we humans use technology wisely, we can accomplish things that our ancestors could only dream of. But there's an even more inspiring journey propelled by something more powerful than rocket engines, where the passengers aren't just three astronauts but all of humanity. Let's talk about our collective journey into the future with artificial intelligence.

My friend Jaan Tallinn likes to point out that just as with rocketry, it's not enough to make our technology powerful. We also have to figure out, if we're going to be really ambitious, how to steer it and where we want to go with it. So let's talk about all three for artificial intelligence: the power, the steering and the destination. Let's start with the power.

I define intelligence very inclusively—simply as our ability to accomplish complex goals, because I want to include both biological and artificial intelligence. And I want to avoid the silly carbon-chauvinism idea that you can only be smart if you're made of meat. It's really amazing how the power of AI has grown recently. Just think about it. Not long ago, robots couldn't walk. Now, they can do backflips. Not long ago, we didn't have self-driving cars. Now, we have self-flying rockets. Not long ago, AI couldn't do face recognition. Now, AI can generate fake faces and simulate your face saying stuff that you never said. Not long ago, AI couldn't beat us at the game of Go. Then, Google DeepMind's AlphaZero AI took 3,000 years of human Go games and Go wisdom, ignored it all and became the world's best player by just playing against itself. And the most impressive feat here wasn't that it crushed human gamers, but that it crushed human AI researchers who had spent decades handcrafting game-playing software. And AlphaZero crushed human AI researchers not just in Go but even at chess, which we have been working on since 1950.
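AlphaZero itself couples deep neural networks with Monte Carlo tree search, but the core idea the talk describes, an agent becoming strong purely by playing against itself with no human examples, can be illustrated far more simply. Below is a minimal, hypothetical sketch (not anything from the talk or from DeepMind's system): tabular Q-learning that learns the take-away game Nim through self-play alone. The game choice, hyperparameters and function names are all this sketch's assumptions.

```python
import random

def train_self_play(n_stones=10, episodes=20000, alpha=0.5, eps=0.2, seed=0):
    """Learn Nim by self-play: each turn a player removes 1-3 stones,
    and whoever takes the last stone wins. Both 'players' share one
    Q-table, so the agent is literally training against itself."""
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(1, n_stones + 1)
         for a in (1, 2, 3) if a <= s}
    for _ in range(episodes):
        s = n_stones
        prev = None  # the opposing player's last (state, action)
        while s > 0:
            acts = [a for a in (1, 2, 3) if a <= s]
            # epsilon-greedy: mostly exploit, sometimes explore
            a = (rng.choice(acts) if rng.random() < eps
                 else max(acts, key=lambda x: Q[(s, x)]))
            s2 = s - a
            if s2 == 0:
                # current player took the last stone: win for us ...
                Q[(s, a)] += alpha * (1.0 - Q[(s, a)])
                if prev is not None:
                    # ... and a loss for the move that allowed it
                    Q[prev] += alpha * (-1.0 - Q[prev])
            else:
                # negamax bootstrap: our move is worth minus the
                # best reply available to the opponent
                best_next = max(Q[(s2, b)] for b in (1, 2, 3) if b <= s2)
                Q[(s, a)] += alpha * (-best_next - Q[(s, a)])
            prev, s = (s, a), s2
    return Q

Q = train_self_play()
# Optimal Nim play leaves the opponent a multiple of 4 stones,
# so from 10 stones the learned best move should be to take 2.
best = max((1, 2, 3), key=lambda a: Q[(10, a)])
```

With no opening book and no human games, the shared Q-table discovers the classic "leave a multiple of four" strategy on its own, which is the same self-play dynamic, in miniature, that the paragraph above attributes to AlphaZero.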

So all this amazing recent progress in AI really begs the question: How far will it go? I like to think about this question in terms of this abstract landscape of tasks, where the elevation represents how hard it is for AI to do each task at human level, and the sea level represents what AI can do today. The sea level is rising as AI improves, so there's a kind of global warming going on here in the task landscape. And the obvious takeaway is to avoid careers at the waterfront—which will soon be automated and disrupted. But there's a much bigger question as well. How high will the water end up rising? Will it eventually rise to flood everything, matching human intelligence at all tasks? This is the definition of artificial general intelligence—AGI, which has been the holy grail of AI research since its inception. By this definition, people who say, "Ah, there will always be jobs that humans can do better than machines," are simply saying that we'll never get AGI. Sure, we might still choose to have some human jobs or to give humans income and purpose with our jobs, but AGI will in any case transform life as we know it with humans no longer being the most intelligent. Now, if the water level does reach AGI, then further AI progress will be driven mainly not by humans but by AI, which means that there's a possibility that further AI progress could be way faster than the typical human research and development timescale of years, raising the controversial possibility of an intelligence explosion where recursively self-improving AI rapidly leaves human intelligence far behind, creating what's known as superintelligence.

Alright, reality check: Are we going to get AGI any time soon? Some famous AI researchers, like Rodney Brooks, think it won't happen for hundreds of years. But others, like Google DeepMind founder Demis Hassabis, are more optimistic and are working to try to make it happen much sooner. And recent surveys have shown that most AI researchers actually share Demis's optimism, expecting that we will get AGI within decades, so within the lifetime of many of us, which begs the question—and then what? What do we want the role of humans to be if machines can do everything better and cheaper than us?

The way I see it, we face a choice. One option is to be complacent. We can say, "Oh, let's just build machines that can do everything we can do and not worry about the consequences. Come on, if we build technology that makes all humans obsolete, what could possibly go wrong?"

But I think that would be embarrassingly lame. I think we should be more ambitious—in the spirit of TED. Let's envision a truly inspiring high-tech future and try to steer towards it. This brings us to the second part of our rocket metaphor: the steering. We're making AI more powerful, but how can we steer towards a future where AI helps humanity flourish rather than flounder? To help with this, I cofounded the Future of Life Institute. It's a small nonprofit promoting beneficial technology use, and our goal is simply for the future of life to exist and to be as inspiring as possible. You know, I love technology. Technology is why today is better than the Stone Age. And I'm optimistic that we can create a really inspiring high-tech future ... if—and this is a big if—if we win the wisdom race—the race between the growing power of our technology and the growing wisdom with which we manage it. But this is going to require a change of strategy because our old strategy has been learning from mistakes. We invented fire, screwed up a bunch of times—invented the fire extinguisher.

We invented the car, screwed up a bunch of times—invented the traffic light, the seat belt and the airbag, but with more powerful technology like nuclear weapons and AGI, learning from mistakes is a lousy strategy, don't you think?

It's much better to be proactive rather than reactive; plan ahead and get things right the first time because that might be the only time we'll get. But it is funny because sometimes people tell me, "Max, shhh, don't talk like that. That's Luddite scaremongering." But it's not scaremongering. It's what we at MIT call safety engineering. Think about it: before NASA launched the Apollo 11 mission, they systematically thought through everything that could go wrong when you put people on top of explosive fuel tanks and launch them somewhere where no one could help them. And there was a lot that could go wrong. Was that scaremongering? No. That was precisely the safety engineering that ensured the success of the mission, and that is precisely the strategy I think we should take with AGI. Think through what can go wrong to make sure it goes right.

So in this spirit, we've organized conferences, bringing together leading AI researchers and other thinkers to discuss how to grow this wisdom we need to keep AI beneficial. Our last conference was in Asilomar, California last year and produced this list of 23 principles which have since been signed by over 1,000 AI researchers and key industry leaders, and I want to tell you about three of these principles.

One is that we should avoid an arms race and lethal autonomous weapons. The idea here is that any science can be used for new ways of helping people or new ways of harming people. For example, biology and chemistry are much more likely to be used for new medicines or new cures than for new ways of killing people, because biologists and chemists pushed hard—and successfully—for bans on biological and chemical weapons. And in the same spirit, most AI researchers want to stigmatize and ban lethal autonomous weapons. Another Asilomar AI principle is that we should mitigate AI-fueled income inequality. I think that if we can grow the economic pie dramatically with AI and we still can't figure out how to divide this pie so that everyone is better off, then shame on us.

Alright, now raise your hand if your computer has ever crashed.

Wow, that's a lot of hands. Well, then you'll appreciate this principle that we should invest much more in AI safety research, because as we put AI in charge of even more decisions and infrastructure, we need to figure out how to transform today's buggy and hackable computers into robust AI systems that we can really trust; otherwise, all this awesome new technology can malfunction and harm us, or get hacked and be turned against us. And this AI safety work has to include work on AI value alignment, because the real threat from AGI isn't malice, like in silly Hollywood movies, but competence—AGI accomplishing goals that just aren't aligned with ours. For example, when we humans drove the West African black rhino extinct, we didn't do it because we were a bunch of evil rhinoceros haters, did we? We did it because we were smarter than them and our goals weren't aligned with theirs. But AGI is by definition smarter than us, so to make sure that we don't put ourselves in the position of those rhinos if we create AGI, we need to figure out how to make machines understand our goals, adopt our goals and retain our goals.

And whose goals should these be, anyway? Which goals should they be?

This brings us to the third part of our rocket metaphor: the destination. We're making AI more powerful, trying to figure out how to steer it, but where do we want to go with it? This is the elephant in the room that almost nobody talks about—not even here at TED—because we're so fixated on short-term AI challenges. Look, our species is trying to build AGI, motivated by curiosity and economics, but what sort of future society are we hoping for if we succeed? We did an opinion poll on this recently, and I was struck to see that most people actually want us to build superintelligence: AI that's vastly smarter than us in all ways. What there was the greatest agreement on was that we should be ambitious and help life spread into the cosmos, but there was much less agreement about who or what should be in charge. And I was actually quite amused to see that there are some people who want it to be just machines.

And there was total disagreement about what the role of humans should be, even at the most basic level, so let's take a closer look at possible futures that we might choose to steer toward, alright?

So don't get me wrong here. I'm not talking about space travel, merely about humanity's metaphorical journey into the future. So one option that some of my AI colleagues like is to build superintelligence and keep it under human control, like an enslaved god, disconnected from the internet and used to create unimaginable technology and wealth for whoever controls it. But Lord Acton warned us that power corrupts, and absolute power corrupts absolutely, so you might worry that maybe we humans just aren't smart enough, or wise enough rather, to handle this much power. Also, aside from any moral qualms you might have about enslaving superior minds, you might worry that maybe the superintelligence could outsmart us, break out and take over. But I also have colleagues who are fine with AI taking over and even causing human extinction, as long as we feel the AIs are our worthy descendants, like our children. But how would we know that the AIs have adopted our best values and aren't just unconscious zombies tricking us into anthropomorphizing them? Also, shouldn't those people who don't want human extinction have a say in the matter, too? Now, if you didn't like either of those two high-tech options, it's important to remember that low-tech is suicide from a cosmic perspective because if we don't go far beyond today's technology, the question isn't whether humanity is going to go extinct, merely whether we're going to get taken out by the next killer asteroid, supervolcano or some other problem that better technology could have solved.

So, how about having our cake and eating it, with AGI that's not enslaved but treats us well because its values are aligned with ours? This is the gist of what Eliezer Yudkowsky has called "friendly AI," and if we can do this, it could be awesome. It could not only eliminate negative experiences like disease, poverty, crime and other suffering, but it could also give us the freedom to choose from a fantastic new diversity of positive experiences—basically making us the masters of our own destiny.

So in summary, our situation with technology is complicated, but the big picture is rather simple. Most AI researchers expect AGI within decades, and if we just bumble into this unprepared, it will probably be the biggest mistake in human history—let's face it. It could enable brutal, global dictatorship with unprecedented inequality, surveillance and suffering, and maybe even human extinction. But if we steer carefully, we could end up in a fantastic future where everybody's better off: the poor are richer, the rich are richer, everybody is healthy and free to live out their dreams.

Now, hang on. Do you folks want the future that's politically right or left? Do you want the pious society with strict moral rules, or do you want a hedonistic free-for-all, more like Burning Man 24/7? Do you want beautiful beaches, forests and lakes, or would you prefer to rearrange some of those atoms with the computers, enabling virtual experiences? With friendly AI, we could simply build all of these societies and give people the freedom to choose which one they want to live in because we would no longer be limited by our intelligence, merely by the laws of physics. So the resources and space for this would be astronomical—literally.

So here's our choice. We can either be complacent about our future, taking as an article of blind faith that any new technology is guaranteed to be beneficial, and just repeat that to ourselves as a mantra over and over and over again as we drift like a rudderless ship towards our own obsolescence. Or we can be ambitious—thinking hard about how to steer our technology and where we want to go with it to create the age of amazement. We're all here to celebrate the age of amazement, and I feel that its essence should lie in becoming not overpowered but empowered by our technology.

Thank you.
