Principles of Artificial Intelligence, Lecture 8: Generative Adversarial Networks (PPT lecture slides)
Slide 1: Lecture 8: Generative Adversarial Networks
Artificial Intelligence, November 27, 2019

Slide 2: Generative Adversarial Networks
• Generative
  - Learn a generative model
• Adversarial
  - Trained in an adversarial setting
• Networks
  - Use Deep Neural Networks

Slide 3: Generative Models

Slide 4: Generative Models

Slide 5: Why Generative Models?
• Discriminative models
  - Given an image X, predict a label Y
  - Estimates P(Y|X)
• Limitations of discriminative models:
  - Can't model P(X)
  - Can't generate new images
• Generative models
  - Can model P(X)
  - Can generate new images
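In symbols (a short recap added here for clarity, not part of the original slide): a discriminative classifier models only the conditional distribution, whereas a generative model captures the data distribution itself and can therefore be sampled from.

    Discriminative:  p_\theta(y \mid x)
    Generative:      p_\theta(x), \qquad p_\theta(x, y) = p_\theta(x \mid y)\, p(y), \qquad x_{\text{new}} \sim p_\theta(x)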

Slide 6: Magic of GANs

Slide 7: Magic of GANs
• Which one is computer generated?

Slide 8: Magic of GANs

Slide 9: GAN's Architecture

Slide 10: Adversarial Training
• Adversarial samples:
  - We can generate adversarial samples to fool a discriminative model
  - We can use those adversarial samples to make models robust
  - We then require more effort to generate adversarial samples
  - Repeat this and we get a better discriminative model
• GANs extend that idea to generative models:
  - Generator: generates fake samples, tries to fool the Discriminator
  - Discriminator: tries to distinguish between real and fake samples
  - Train them against each other
  - Repeat this and we get a better Generator and Discriminator
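The alternating training described above can be summarized in a short PyTorch-style sketch. This is a minimal illustration on toy 1-D data, not the code used in the lecture; the network sizes and hyperparameters are assumptions.

    import torch
    import torch.nn as nn

    # Toy "real" data: samples from N(4, 1.5^2)
    def real_batch(n):
        return 4.0 + 1.5 * torch.randn(n, 1)

    G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))                 # noise -> fake sample
    D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())   # sample -> P(real)

    opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCELoss()
    batch = 64

    for step in range(2000):
        # Train Discriminator: real samples labelled 1, fake samples labelled 0
        x_real = real_batch(batch)
        x_fake = G(torch.randn(batch, 8)).detach()          # detach: do not update G in this step
        loss_D = bce(D(x_real), torch.ones(batch, 1)) + bce(D(x_fake), torch.zeros(batch, 1))
        opt_D.zero_grad(); loss_D.backward(); opt_D.step()

        # Train Generator: try to make D label its fakes as real
        x_fake = G(torch.randn(batch, 8))
        loss_G = bce(D(x_fake), torch.ones(batch, 1))       # non-saturating generator loss
        opt_G.zero_grad(); loss_G.backward(); opt_G.step()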

Slide 11: Training Discriminator

Slide 12: Training Generator

Slide 13: Mathematical formulation

Slide 14: Mathematical formulation

Slide 15: Mathematical formulation

Slide 16: Mathematical formulation
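The formulation these slides refer to is the standard two-player minimax objective of Goodfellow et al. (2014):

    \min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]

For a fixed generator, the optimal discriminator is

    D^*_G(x) = \frac{p_{\text{data}}(x)}{p_{\text{data}}(x) + p_g(x)}

and plugging it back in, the generator objective reduces to minimizing the Jensen-Shannon divergence between p_data and p_g.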

Slide 17: Advantages of GANs

Slide 18: Problems with GANs

Slide 19: Problems with GANs

Slide 20: Formulation
• Deep learning models (in general) involve a single player
  - The player tries to maximize its reward (minimize its loss).
  - Use SGD (with backpropagation) to find the optimal parameters.
  - SGD has convergence guarantees (under certain conditions).
  - Problem: with non-convexity, we might converge to local optima.

Slide 21: Formulation
• GANs instead involve two (or more) players
  - The Discriminator is trying to maximize its reward.
  - The Generator is trying to minimize the Discriminator's reward.
  - SGD was not designed to find the Nash equilibrium of a game.
  - Problem: we might not converge to the Nash equilibrium at all.
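A tiny illustration of this point (an added sketch, not from the slides): on the bilinear game min_x max_y xy, whose Nash equilibrium is (0, 0), simultaneous gradient descent on x and ascent on y spirals outward instead of converging.

    # min_x max_y  f(x, y) = x * y ; Nash equilibrium at (0, 0)
    x, y, lr = 1.0, 1.0, 0.1
    for t in range(100):
        gx, gy = y, x                      # df/dx = y, df/dy = x
        x, y = x - lr * gx, y + lr * gy    # simultaneous descent on x, ascent on y
        # each step multiplies the distance to (0, 0) by sqrt(1 + lr**2) > 1
    print(x, y)                            # far from (0, 0): the iterates diverge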

Slide 22: Non-Convergence

Slide 23: Problems with GANs

Slide 24: Mode-Collapse

Slide 25: Some Real Examples

Slide 26: Some Solutions
• Mini-Batch GANs
• Supervision with labels
• Some recent attempts:
  - Unrolled GANs
  - W-GANs

Slide 27: Basic (Heuristic) Solutions
• Mini-Batch GANs
• Supervision with labels

Slide 28: How to reward sample diversity?
• At mode collapse,
  - the Generator produces good samples, but only very few of them.
  - Thus, the Discriminator can't tag them as fake.
• To address this problem,
  - let the Discriminator know about this edge case.
• More formally,
  - let the Discriminator look at the entire batch instead of single examples.
  - If there is a lack of diversity, it will mark the examples as fake.
• Thus,
  - the Generator will be forced to produce diverse samples.

Slide 29: Mini-Batch GANs
• Extract features that capture diversity in the mini-batch
  - e.g., the L2 norm of the difference between all pairs from the batch
• Feed those features to the Discriminator along with the image
  - Feature values will differ between diverse and non-diverse batches
  - Thus, the Discriminator will rely on those features for classification
• This, in turn,
  - will force the Generator to match those feature values with the real data
  - and will make it generate diverse batches
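A simplified sketch of this idea follows. It uses plain pairwise L2 distances as the diversity feature; the original minibatch-discrimination layer of Salimans et al. (2016) uses a learned tensor projection instead, and the shapes and names here are illustrative assumptions.

    import torch

    def diversity_features(batch):
        """For each sample, the mean L2 distance to every other sample in the mini-batch."""
        flat = batch.flatten(start_dim=1)                              # (B, D)
        dists = torch.cdist(flat, flat, p=2)                           # (B, B) pairwise L2 distances
        return dists.sum(dim=1, keepdim=True) / (flat.size(0) - 1)     # (B, 1)

    # The discriminator sees the image together with the diversity feature;
    # a collapsed (non-diverse) batch yields near-zero distances, which D learns to flag as fake.
    images = torch.randn(64, 3, 32, 32)                                # a dummy batch
    x_aug = torch.cat([images.flatten(start_dim=1), diversity_features(images)], dim=1)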

Slide 30: Basic (Heuristic) Solutions
• Mini-Batch GANs
• Supervision with labels

Slide 31: Supervision with Labels

Slide 32: Alternate view of GANs

Slide 33: Alternate view of GANs (Contd.)

Slide 34: Energy-Based GANs

Slide 35: Examples

Slide 36: Examples

Slide 37: Examples

Slide 38: Examples

Slide 39: How to reward Disentanglement?

Slide 40: Recap: Mutual Information
• Mutual information captures the mutual dependence between two variables
• The mutual information between two variables X, Y is defined as:
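For discrete variables, the standard definition is:

    I(X; Y) = \sum_{x}\sum_{y} p(x, y)\,\log \frac{p(x, y)}{p(x)\, p(y)} = H(X) - H(X \mid Y)

i.e., the reduction in uncertainty about X obtained by observing Y (and vice versa).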

Slide 41: InfoGAN
• We want to maximize the mutual information I between the latent code c and the generated sample x = G(z, c)
• Incorporate it in the value function of the minimax game.
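In the InfoGAN paper (Chen et al., 2016) this is done by subtracting a mutual-information term from the GAN value function; in practice the intractable I(c; G(z, c)) is replaced by a variational lower bound L_I estimated with an auxiliary network Q:

    \min_{G, Q} \max_D \; V_{\text{InfoGAN}}(D, G, Q) = V(D, G) - \lambda\, L_I(G, Q), \qquad L_I(G, Q) \le I(c;\, G(z, c))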

Slide 42: Conditional GANs

Slide 43: Conditional GANs
• A simple modification to the original GAN framework that conditions the model on additional information for better multi-modal learning.
• Lends itself to many practical applications of GANs when we have explicit supervision available.
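Concretely, conditioning both the generator and the discriminator on the extra information y (for example a class label) changes the value function of the minimax game to (Mirza and Osindero, 2014):

    \min_G \max_D V(D, G) = \mathbb{E}_{x, y \sim p_{\text{data}}}[\log D(x \mid y)] + \mathbb{E}_{z \sim p_z,\, y}[\log(1 - D(G(z \mid y) \mid y))]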

Slide 44: Conditional GANs

Slide 45: Coupled GAN
• Learning a joint distribution of multi-domain images.
• Using GANs to learn the joint distribution with samples drawn from the marginal distributions.
• Direct applications in domain adaptation and image translation.
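The mechanism behind Coupled GAN (Liu and Tuzel, 2016) is weight sharing: the two generators share their early layers, which decode the high-level semantics, and keep separate per-domain output layers, so corresponding images in the two domains are produced from the same latent code. A minimal PyTorch-style sketch of this idea; the layer sizes and names are illustrative assumptions.

    import torch
    import torch.nn as nn

    class CoupledGenerators(nn.Module):
        def __init__(self, z_dim=64, out_dim=784):
            super().__init__()
            # Shared trunk: maps the latent code to high-level features common to both domains
            self.shared = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                                        nn.Linear(256, 256), nn.ReLU())
            # Domain-specific heads: render the shared features into each image domain
            self.head_a = nn.Sequential(nn.Linear(256, out_dim), nn.Tanh())
            self.head_b = nn.Sequential(nn.Linear(256, out_dim), nn.Tanh())

        def forward(self, z):
            h = self.shared(z)
            return self.head_a(h), self.head_b(h)   # a pair of corresponding images

    g = CoupledGenerators()
    x_a, x_b = g(torch.randn(16, 64))                # 16 latent codes -> 16 image pairs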

Slide 46: Coupled GAN

Slide 47: Coupled GAN

Slide 48: Applications

Slide 49: Applications

Slide 50: Deep Convolution GANs

Slide 51: Deep Convolution GANs

Slide 52: Deep Convolution GANs
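DCGAN (Radford et al., 2016) replaces fully connected hidden layers and pooling with strided and transposed convolutions, uses batch normalization, ReLU activations in the generator (LeakyReLU in the discriminator), and a tanh output. A generator sketch in that spirit, producing 64x64 images; the channel sizes are illustrative assumptions, not the lecture's exact network.

    import torch
    import torch.nn as nn

    # DCGAN-style generator: treat the noise vector as a 1x1 feature map and upsample it
    dcgan_generator = nn.Sequential(
        nn.ConvTranspose2d(100, 512, kernel_size=4, stride=1, padding=0, bias=False),  # 1x1 -> 4x4
        nn.BatchNorm2d(512), nn.ReLU(True),
        nn.ConvTranspose2d(512, 256, 4, 2, 1, bias=False),                             # 4x4 -> 8x8
        nn.BatchNorm2d(256), nn.ReLU(True),
        nn.ConvTranspose2d(256, 128, 4, 2, 1, bias=False),                             # 8x8 -> 16x16
        nn.BatchNorm2d(128), nn.ReLU(True),
        nn.ConvTranspose2d(128, 64, 4, 2, 1, bias=False),                              # 16x16 -> 32x32
        nn.BatchNorm2d(64), nn.ReLU(True),
        nn.ConvTranspose2d(64, 3, 4, 2, 1, bias=False), nn.Tanh(),                     # 32x32 -> 64x64
    )

    z = torch.randn(8, 100, 1, 1)        # a batch of noise vectors
    fake_images = dcgan_generator(z)     # shape: (8, 3, 64, 64)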

Slide 53: DCGAN (bedroom)

Slide 54: Image-to-Image Translation
