A brief history of deep learning
Alice Gao
Lecture 10
Readings: this article by Andrew Kurenkov
Based on work by K. Leyton-Brown, K. Larson, and P. van Beek

Outline
• Learning Goals
• A brief history of deep learning
• Lessons learned summarized by Geoff Hinton
• Revisiting the Learning Goals

Learning Goals
By the end of the lecture, you should be able to:
• Describe the causes and the resolutions of the two AI winters in the past.
• Describe the four lessons learned summarized by Geoff Hinton.

A brief history of deep learning
A brief history of deep learning, based on this article by Andrew Kurenkov.

The birth of machine learning (Frank Rosenblatt 1957)
• A perceptron can be used to represent and learn logic operators (AND, OR, and NOT); see the sketch below.
• It was widely believed that AI would be solved if computers could perform formal logical reasoning.
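To make the first bullet concrete, here is a minimal sketch, not from the slides, with weights and biases picked by hand rather than learned, of single perceptrons computing AND, OR, and NOT:

```python
# A minimal sketch (weights chosen by hand for illustration): a
# Rosenblatt-style perceptron fires iff the weighted sum of its
# inputs plus a bias is positive.

def perceptron(weights, bias, inputs):
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0

def AND(a, b): return perceptron([1, 1], -1.5, [a, b])
def OR(a, b):  return perceptron([1, 1], -0.5, [a, b])
def NOT(a):    return perceptron([-1], 0.5, [a])

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
print("NOT:", NOT(0), NOT(1))
```

Each gate is a single linear threshold unit; actual learning would adjust the weights with the perceptron rule rather than fixing them by hand.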

First AI winter (Marvin Minsky 1969)
• A rigorous analysis of the limitations of perceptrons.
• (1) We need multi-layer perceptrons to represent simple non-linear functions such as XOR (see the sketch after this list).
• (2) No one knew a good way to train multi-layer perceptrons.
• Led to the first AI winter, from the 70s to the 80s.
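Minsky's XOR point can be checked directly: no single linear threshold unit separates XOR's outputs, but two layers of gates do. An illustrative sketch, reusing AND, OR, and NOT from the previous block:

```python
# XOR is not linearly separable, so one perceptron cannot represent it;
# two layers of the gates defined above can:
#   XOR(a, b) = AND(OR(a, b), NOT(AND(a, b)))

def XOR(a, b):
    hidden_or = OR(a, b)           # layer 1
    hidden_nand = NOT(AND(a, b))   # layer 1
    return AND(hidden_or, hidden_nand)  # layer 2

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "XOR:", XOR(a, b))  # prints 0, 1, 1, 0
```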

The back-propagation algorithm (Rumelhart, Hinton and Williams 1986)
• In their Nature article, they precisely and concisely explained how we can use the back-propagation algorithm to train multi-layer perceptrons (a minimal sketch follows).
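As a rough illustration of what the 1986 paper made practical, here is a minimal back-propagation sketch; it is a toy example of my own construction (the network size, learning rate, and loss are all assumptions), not code from the paper. It trains a 2-2-1 sigmoid network on XOR with squared-error loss:

```python
# A toy back-propagation sketch (illustrative, not the paper's code):
# a 2-input, 2-hidden-unit, 1-output sigmoid network trained on XOR.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 2)); b1 = np.zeros(2)
W2 = rng.normal(size=(2, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)    # (4, 2) hidden activations
    out = sigmoid(h @ W2 + b2)  # (4, 1) predictions
    # Backward pass: chain rule, layer by layer.
    d_out = (out - y) * out * (1 - out)  # error signal at the output
    d_h = (d_out @ W2.T) * h * (1 - h)   # error propagated to the hidden layer
    # Gradient-descent updates.
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

# Typically close to [0, 1, 1, 0]; convergence depends on the random init.
print(np.round(out.ravel(), 2))
```

Whether this tiny network reaches [0, 1, 1, 0] depends on the random initial weights and can fail for unlucky seeds, which foreshadows two of Hinton's lessons below: initialization and the choice of non-linearity matter.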

Second AI Winter
• Multi-layer neural networks trained with back-propagation didn't work well, especially compared to simpler models (e.g. support vector machines and random forests).
• Led to the second AI winter in the mid 90s.

The rise of deep learning
• In 2006, Hinton, Osindero and Teh showed that back-propagation works if the initial weights are chosen in a smart way (a simplified sketch of smart initialization follows).
• In 2012, Hinton's group entered the ImageNet competition using deep convolutional neural networks and did far better than the next closest entry (an error rate of 15.3%, rather than 26.2% for the second-best entry).
• The deep learning tsunami continues today.
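The 2006 paper chose initial weights by greedy layer-wise pretraining of deep belief networks. The sketch below instead shows Xavier/Glorot-style initialization, a later and much simpler recipe in the same spirit; it is a swapped-in stand-in for illustration, not the 2006 method:

```python
# Illustrative stand-in (not the 2006 method): Xavier/Glorot-style
# initialization scales random weights by layer width, so activation
# variance stays roughly constant across layers and gradients neither
# vanish nor explode at the start of training.
import numpy as np

def xavier_init(fan_in, fan_out, rng=np.random.default_rng()):
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

W1 = xavier_init(784, 256)  # e.g. the first layer of an MNIST-sized MLP
print(W1.std())             # about sqrt(2 / (784 + 256)) ≈ 0.044
```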

Lessons learned summarized by Geoff Hinton
• Our labeled data sets were thousands of times too small.
• Our computers were millions of times too slow.
• We initialized the weights in a stupid way.
• We used the wrong type of non-linearity (contrasted in the sketch below).
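On the last lesson: the slides don't say which non-linearity is the right one, but the standard contrast is between the saturating sigmoid and the ReLU that later replaced it in deep networks. A small numerical check, assuming those two activation functions:

```python
# Illustrative: the sigmoid's gradient is at most 0.25 and nearly zero
# for large |z| (saturation), while the ReLU keeps a gradient of
# exactly 1 for every positive input.
import numpy as np

z = np.array([-6.0, -2.0, 0.0, 2.0, 6.0])

sigmoid = 1.0 / (1.0 + np.exp(-z))
sigmoid_grad = sigmoid * (1.0 - sigmoid)

relu_grad = (z > 0).astype(float)

print(np.round(sigmoid_grad, 4))  # [0.0025 0.105  0.25   0.105  0.0025]
print(relu_grad)                  # [0. 0. 0. 1. 1.]
```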

Revisiting the Learning Goals
By the end of the lecture, you should be able to:
• Describe the causes and the resolutions of the two AI winters in the past.
• Describe the four lessons learned summarized by Geoff Hinton.
