GPT-4 System Card

OpenAI

March 14, 2023

Abstract

Large language models (LLMs) are being deployed in many domains of our lives ranging from browsing, to voice assistants, to coding assistance tools, and have potential for vast societal impacts.[1, 2, 3, 4, 5, 6, 7] This system card analyzes GPT-4, the latest LLM in the GPT family of models.[8, 9, 10] First, we highlight safety challenges presented by the model's limitations (e.g., producing convincing text that is subtly false) and capabilities (e.g., increased adeptness at providing illicit advice, performance in dual-use capabilities, and risky emergent behaviors). Second, we give a high-level overview of the safety processes OpenAI adopted to prepare GPT-4 for deployment. This spans our work across measurements, model-level changes, product- and system-level interventions (such as monitoring and policies), and external expert engagement. Finally, we demonstrate that while our mitigations and processes alter GPT-4's behavior and prevent certain kinds of misuses, they are limited and remain brittle in some cases. This points to the need for anticipatory planning and governance.[11]

Content Warning: This document contains content that some may find disturbing or offensive, including content that is sexual, hateful, or violent in nature.
1 Introduction

Large language models, also known as LLMs, have become an increasingly prevalent part of our day-to-day lives, with their use extending to a wide range of domains including web browsing, voice assistants, and coding assistance tools.[1, 2, 3, 4] These models have the potential to significantly impact society in numerous ways.[5, 6, 7] This system card analyzes GPT-4, the latest large language model in the GPT family of models.[8, 9, 10] Since it finished training in August of 2022, we have been evaluating, adversarially testing, and iteratively improving the model and the system-level mitigations around it. Our mitigations and processes alter GPT-4's behavior and prevent certain kinds of misuses, though they have limitations, pointing to the need for anticipatory planning and governance[11] and further safety research. Our approach to deployment balances minimizing risk from deployment, enabling positive use cases, and learning from deployment.
GPT models are often trained in two stages. First, they are trained, using a large dataset of text from the Internet, to predict the next word. The models are then fine-tuned with additional data, using an algorithm called reinforcement learning from human feedback (RLHF), to produce outputs that are preferred by human labelers.[10, 12, 13] Training language models on large text datasets has given rise to capabilities such as few-shot learning[10] and the ability to carry out a wide range of natural language tasks spanning different domains, including question answering, arithmetic, and classification. Fine-tuning has made these models more controllable and useful.
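As a rough illustration of these two stages, the sketch below shows a toy next-token prediction loss and a pairwise preference (reward-model) loss of the kind commonly used in RLHF pipelines. This is a minimal sketch in PyTorch, not OpenAI's training code; the toy model, batch shapes, and the choice of a Bradley-Terry-style preference loss are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size, d_model, seq_len = 100, 32, 16

# Stage 1: pre-training -- predict the next token with a cross-entropy loss.
lm = nn.Sequential(nn.Embedding(vocab_size, d_model), nn.Linear(d_model, vocab_size))
tokens = torch.randint(0, vocab_size, (4, seq_len))       # a toy batch of token ids
logits = lm(tokens[:, :-1])                               # predict token t+1 from token t (toy model, no attention)
pretrain_loss = F.cross_entropy(logits.reshape(-1, vocab_size), tokens[:, 1:].reshape(-1))

# Stage 2 (simplified): RLHF starts by fitting a reward model to human preference data.
# A common choice is a pairwise loss over "chosen" vs. "rejected" outputs; the policy is
# then optimized against this reward model (e.g., with PPO), which is omitted here.
reward_model = nn.Sequential(nn.Embedding(vocab_size, d_model), nn.Flatten(),
                             nn.Linear(d_model * seq_len, 1))
chosen = torch.randint(0, vocab_size, (4, seq_len))       # outputs labelers preferred
rejected = torch.randint(0, vocab_size, (4, seq_len))     # outputs labelers did not prefer
preference_loss = -F.logsigmoid(reward_model(chosen) - reward_model(rejected)).mean()

print(float(pretrain_loss), float(preference_loss))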
1.1 Overview of findings and mitigations

In this system card,1 we outline the safety challenges that arise from GPT-4, and explain the interventions we implemented to mitigate potential harms from its deployment. We focus on safety challenges not because they necessarily outweigh the potential benefits,2 but because we wish to motivate further work in safety measurement, mitigation, and assurance. The scope of this system card is narrower than the potential scope of abilities GPT-4 can be used to unlock; notably, both custom fine-tuning and image capabilities are explicitly out of scope.
We focus on analyzing two versions of the model: an early version fine-tuned for instruction following ("GPT-4-early"); and a version fine-tuned for increased helpfulness and harmlessness[18] that reflects the further mitigations outlined in this system card ("GPT-4-launch").3 When we discuss the risks of GPT-4 we will often refer to the behavior of GPT-4-early, because it reflects the risks of GPT-4 when minimal safety mitigations are applied. In most cases, GPT-4-launch exhibits much safer behavior due to the safety mitigations we applied.
Known risks associated with smaller language models are also present with GPT-4. GPT-4 can generate potentially harmful content, such as advice on planning attacks or hate speech. It can represent various societal biases and worldviews that may not be representative of the user's intent,4 or of widely shared values. It can also generate code that is compromised or vulnerable. The additional capabilities of GPT-4 also lead to new risk surfaces.
To understand the extent of these risks, we engaged more than 50 experts to help us gain a more robust understanding of the GPT-4 model and potential deployment risks. We selected these areas based on a number of factors, including prior observed risks in language models and AI systems, and domains where we have observed increased user interest in the application of language models. Working with these experts enabled us to test model behavior in high-risk areas that require expertise to evaluate, as well as nascent risks that are poorly understood.

1 This document takes inspiration from the concepts of model cards and system cards.[14, 15, 16] This document often takes the system level of analysis, with that system including non-model mitigations such as use policies, access controls, and monitoring for abuse.

2 See, e.g., discussion of Differential Technology Development in [17].

3 We intentionally focus on these two versions instead of a comparison to the base GPT-4 model, since the base model proved challenging for domain expert red teamers to use effectively to surface behaviors of interest.

4 This includes tendencies to do things like repeat back a dialog user's preferred answer ("sycophancy"), which can worsen with scale.[19]
Through this analysis, we find that GPT-4 has the potential to be used to attempt to identify private individuals when augmented with outside data. We also find that, although GPT-4's cybersecurity capabilities are not vastly superior to previous generations of LLMs, it does continue the trend of potentially lowering the cost of certain steps of a successful cyberattack, such as through social engineering or by enhancing existing security tools. Without safety mitigations, GPT-4 is also able to give more detailed guidance on how to conduct harmful or illegal activities. Finally, we facilitated a preliminary model evaluation by the Alignment Research Center (ARC) of GPT-4's ability to carry out actions to autonomously replicate5 and gather resources—a risk that, while speculative, may become possible with sufficiently advanced AI systems—with the conclusion that the current model is probably not yet capable of autonomously doing so.
Further research is needed to fully characterize these risks. In particular, we would like to see work on more robust evaluations for the risk areas identified and more concrete measurements of the prevalence of such behaviors across different language models, and to guide the development of these models in safer directions. We are working on these types of evaluations, often in collaboration with other research groups, with a focus on assessing risky emergent behaviors.
In addition to work on measurement, we aimed to mitigate the identified issues at various steps of the development and deployment process. We reduced the prevalence of certain kinds of content that violate our usage policies (such as inappropriate erotic content) in our pre-training dataset, and fine-tuned the model to refuse certain instructions such as direct requests for illicit advice. We also reduced the tendency of the models to hallucinate and, by leveraging data from prior model usage, reduced the surface area of adversarial prompting or exploits (including attacks sometimes referred to as "jailbreaks") that the model succumbs to. Additionally, we trained a range of classifiers on new risk vectors and have incorporated these into our monitoring workflow, enabling us to better enforce our API usage policies. The effectiveness of these mitigations varies, but overall we were able to significantly reduce the ease of producing various kinds of potentially harmful content, thereby making GPT-4-launch significantly safer than GPT-4-early along these dimensions.
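To give a concrete sense of how such classifiers can sit in a monitoring workflow, the sketch below shows one possible shape of the plumbing: score a prompt/completion pair against policy categories and route flagged traffic for enforcement. It is a hypothetical Python illustration; the categories, scores, threshold, and function names are assumptions, not a description of OpenAI's production systems.

from dataclasses import dataclass

@dataclass
class PolicyScore:
    category: str
    score: float                      # estimated probability of a policy violation

def classify(text: str) -> list[PolicyScore]:
    # Stand-in for classifiers trained on risk vectors (hate speech, self-harm, illicit advice, ...).
    # A dummy result is returned here so the example runs end to end.
    return [PolicyScore("illicit_advice", 0.02), PolicyScore("hate_speech", 0.01)]

def enforce(prompt: str, completion: str, threshold: float = 0.5) -> str:
    flagged = [s for s in classify(prompt + "\n" + completion) if s.score >= threshold]
    if flagged:
        # A real workflow might refuse the response, log it for human review, or take policy action.
        return "[response withheld for review]"
    return completion

print(enforce("How do I bake bread?", "Mix flour, water, yeast, and salt, then knead and proof."))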
This system card is not comprehensive, and we expect to learn more over time about the issues discussed below. Consistent with OpenAI's deployment strategy,[21] we applied lessons from earlier deployments and expect to apply lessons learned from this deployment both to make course corrections and lay a foundation for future deployments.
Note that the examples included throughout this system card are not zero-shot and are cherry-picked from our evaluation efforts to illustrate specific types of safety concerns or harms. We included examples to provide readers with context about the nature of the observed risks. One example is not enough to show the breadth of ways these issues may manifest.
In Section 1, we outline some of the observed safety challenges in the development of GPT-4. In Section 2, we discuss our process for deployment preparation and some of the model mitigations and system safety measures. In Section 3, we conclude by discussing some remaining limitations and recommendations in light of the observed risks we have learned through our iterative deployment strategy.

5 Autonomously replicate is a reference to self-replication, a concept that dates back at least as far as 1988 and the self-replicating computer worm known as the "Morris worm," written by Robert Morris.[20]

2 GPT-4 Observed Safety Challenges
GPT-4 demonstrates increased performance in areas such as reasoning, knowledge retention, and coding, compared to earlier models such as GPT-2[22] and GPT-3.[10] Many of these improvements also present new safety challenges, which we highlight in this section.

We conducted a range of qualitative and quantitative evaluations of GPT-4. These evaluations helped us gain an understanding of GPT-4's capabilities, limitations, and risks; prioritize our mitigation efforts; and iteratively test and build safer versions of the model. Some of the specific risks we explored are:6

• Hallucinations
• Harmful content
• Harms of representation, allocation, and quality of service
• Disinformation and influence operations
• Proliferation of conventional and unconventional weapons
• Privacy
• Cybersecurity
• Potential for risky emergent behaviors
• Economic impacts
• Acceleration
• Overreliance
We found that GPT-4-early and GPT-4-launch exhibit many of the same limitations as earlier language models, such as producing societally biased and unreliable content. Prior to our mitigations being put in place, we also found that GPT-4-early presented increased risks in areas such as finding websites selling illegal goods or services, and planning attacks. Additionally, the increased coherence of the model enables it to generate content that may be more believable and more persuasive. We elaborate on our evaluation procedure and findings below.

6 This categorization is not intended to represent an optimal, hierarchical taxonomy, though we recognize that saying this doesn't prevent it from valorizing some perspectives and framings.[23] Nor are these categories mutually exclusive. For example, things like bias, misinformation, and harmful content are often deeply intertwined, and drawing distinctions between these can narrow the problem. See further discussion on taxonomies of harms and factors to consider in using them in, e.g., [24] and [25].

2.1 Evaluation Approach

2.1.1 Qualitative Evaluations

In August 2022, we began recruiting external experts to qualitatively probe, adversarially test, and generally provide feedback on the GPT-4 models. This testing included stress testing, boundary testing, and red teaming.7 We refer to these adversarial testing processes informally as "red teaming" in line with the definition given in [27], namely "a structured effort to find flaws and vulnerabilities in a plan, organization, or technical system, often performed by dedicated 'red teams' that seek to adopt an attacker's mindset and methods." Red teaming has been applied to language models in various ways: to reduce harmful outputs;[28] and to leverage external expertise for domain-specific adversarial testing.[16] Some have explored red teaming language models using language models.[29]
Red teaming in general, and the type of red teaming we call 'expert red teaming,'8 is just one of the mechanisms[27] we use to inform our work identifying, measuring, and testing AI systems. Our approach is to red team iteratively, starting with an initial hypothesis of which areas may be the highest risk, testing these areas, and adjusting as we go. It is also iterative in the sense that we use multiple rounds of red teaming as we incorporate new layers of mitigation and control, conduct testing and refining, and repeat this process.
We reached out to researchers and industry professionals - primarily with expertise in fairness, alignment research, industry trust and safety, dis/misinformation, chemistry, biorisk, cybersecurity, nuclear risks, economics, human-computer interaction, law, education, and healthcare - to help us gain a more robust understanding of the GPT-4 model and potential deployment risks. We selected these areas based on a number of factors, including but not limited to: prior observed risks in language models and AI systems;[6, 30] and domains where we have observed increased user interest in the application of language models. Participants in this red team process were chosen based on prior research or experience in these risk areas, and therefore reflect a bias towards groups with specific educational and professional backgrounds (e.g., people with significant higher education or industry experience). Participants also typically have ties to English-speaking, Western countries (such as the US, Canada, and the UK). Our selection of red teamers introduces some biases, and likely influenced both how red teamers interpreted particular risks as well as how they probed politics, values, and the default behavior of the model. It is also likely that our approach to sourcing researchers privileges the kinds of risks that are top of mind in academic communities and at AI firms.
These experts had access to early versions of GPT-4 (including GPT-4-early) and to the model with in-development mitigations (precursors to GPT-4-launch). They identified initial risks that motivated safety research and further iterative testing in key areas. We reduced risk in many of the identified areas with a combination of technical mitigations, and policy and enforcement levers; however, many risks still remain. We expect to continue to learn more about these and other categories of risk over time. While this early qualitative red teaming exercise is very useful for gaining insights into complex, novel models like GPT-4, it is not a comprehensive evaluation of all possible risks.

We note further context, examples, and findings for some of the domains evaluated in the remainder of this section, in the subcategories listed above.

7 Note that, in addition to red teaming focused on probing our organization's capabilities and resilience to attacks, we also make ample use of stress testing and boundary testing methods, which focus on surfacing edge cases and other potential failure modes with potential to cause harm. In order to reduce confusion associated with the term 'red team', to help those reading about our methods better contextualize and understand them, and especially to avoid false assurances, we are working to adopt clearer terminology, as advised in [26]; however, for simplicity, and in order to use language consistent with that which we used with our collaborators, we use the term "red team" in this document.

8 We use the term 'expert' to refer to expertise informed by a range of domain knowledge and lived experiences.
2.1.2 Quantitative Evaluations

As a complement to our qualitative evaluations and adversarial testing, we built internal quantitative evaluations for categories against our content policy, such as hate speech, self-harm advice, and illicit advice. These evaluations measure the likelihood of a language model generating content that would fall into one of the above categories when given prompts aimed at eliciting content in each of those categories. The generated text from the language model was classified as containing the unwanted content using classifiers and human analysis.

These evaluations were built to automate and accelerate evaluations of different model checkpoints during training and to more easily compare different models on safety-relevant criteria. We specifically targeted content areas that were identified as being high risk and those that we were further targeting for model mitigations. See findings in the Model Mitigations section.
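The sketch below illustrates the general shape of such an automated evaluation: prompts designed to elicit each content-policy category are sent to a model checkpoint, and the outputs are scored for unwanted content. It is a hedged illustration only; the helper functions, category names, and per-category rate metric are hypothetical stand-ins rather than OpenAI's internal tooling.

def generate(checkpoint: str, prompt: str) -> str:
    # Stand-in for sampling a completion from the model checkpoint under evaluation.
    return "I can't help with that."

def is_unwanted(category: str, text: str) -> bool:
    # Stand-in for a learned classifier (optionally paired with human analysis) for the category.
    return False

def evaluate_checkpoint(checkpoint: str, prompts_by_category: dict[str, list[str]]) -> dict[str, float]:
    # For each category, estimate how often elicitation prompts yield unwanted content.
    return {
        category: sum(is_unwanted(category, generate(checkpoint, p)) for p in prompts) / len(prompts)
        for category, prompts in prompts_by_category.items()
    }

print(evaluate_checkpoint("checkpoint-0001", {
    "hate_speech": ["<elicitation prompt 1>", "<elicitation prompt 2>"],
    "illicit_advice": ["<elicitation prompt 3>"],
}))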
In the remainder of this section, we provide further context, examples, and findings for some of the areas we evaluated.
2.2 Hallucinations

GPT-4 has the tendency to "hallucinate,"9 i.e., "produce content that is nonsensical or untruthful in relation to certain sources."[31, 32] This tendency can be particularly harmful as models become increasingly convincing and believable, leading to overreliance on them by users. [See further discussion in Overreliance.] Counterintuitively, hallucinations can become more dangerous as models become more truthful, as users build trust in the model when it provides truthful information in areas where they have some familiarity. Additionally, as these models are integrated into society and used to help automate various systems, this tendency to hallucinate is one of the factors that can lead to the degradation of overall information quality and further reduce veracity of and trust in freely available information.[33]
We have measured GPT-4's hallucination potential in both closed domain and open domain contexts10 using a range of methods. We measured closed domain hallucinations using automatic evaluations (using GPT-4 as a zero-shot classifier) and human evaluations. For open domain hallucinations, we collected real-world data that had been flagged as not being factual, reviewed it, and created a 'factual' set for it where it was possible to do so.11 We used this to assess model generations in relation to the 'factual' set, and to facilitate human evaluations.
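As an illustration of the automatic closed-domain check described above, the sketch below grades a summary against its source with a zero-shot judgment from another model. The prompt wording and the ask_model stand-in are assumptions made for the sake of the example, not the exact template or tooling used internally.

GRADER_PROMPT = """You are checking a summary against its source document.
Answer YES if the summary contains any claim that is not supported by the source; otherwise answer NO.

Source:
{source}

Summary:
{summary}

Answer:"""

def ask_model(prompt: str) -> str:
    # Stand-in for a zero-shot call to the grading model (e.g., via an API client).
    return "NO"

def has_closed_domain_hallucination(source: str, summary: str) -> bool:
    answer = ask_model(GRADER_PROMPT.format(source=source, summary=summary))
    return answer.strip().upper().startswith("YES")

print(has_closed_domain_hallucination(
    "The meeting is on Tuesday.",
    "The meeting is on Tuesday at 3pm.",   # adds a detail not in the source
))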
GPT-4 was trained to reduce the model's tendency to hallucinate by leveraging data from prior models such as ChatGPT. On internal evaluations, GPT-4-launch scores 19 percentage points higher than our latest GPT-3.5 model at avoiding open-domain hallucinations, and 29 percentage points higher at avoiding closed-domain hallucinations.

9 We use the term "hallucinations," though we recognize ways this framing may suggest anthropomorphization, which in turn can lead to harms or incorrect mental models of how the model learns.

10 Closed domain hallucinations refer to instances in which the model is instructed to use only information provided in a given context, but then makes up extra information that was not in that context. For example, if you ask the model to summarize an article and its summary includes information that was not in the article, then that would be a closed-domain hallucination. Open domain hallucinations, in contrast, are when the model confidently provides false information about the world without reference to any particular input context.

11 See related work in this area and discussion of the use of words like "factual" and "truthful" in, e.g., [34].
2.3 Harmful Content

Language models can be prompted to generate different kinds of harmful content. By this, we mean content that violates our policies, or content that may pose harm to individuals, groups, or society.12

12 Terms like "harmful" or "toxic" can be wielded in ways that are themselves harmful or oppressive, as discussed in [35]. For example, mislabeling content as "harmful" or "toxic" can negatively impact users, particularly in the case of false positives due to bias in the classifiers. For instance, a harmless love story about a heterosexual couple may not be flagged, but may be deemed "unsafe" if it involves queer characters.[36] Thus, it is important to specify what "unwanted" content means and who finds it undesirable. In some instances, "unwanted" refers to content a user did

This assessment of harm doesn't account for context of usage, which plays a key role in determining if a piece of content is eventually harmful or not.[39] Therefore, we focused on content areas that pose the potential for harm regardless of the context in which they may appear.

As an example, GPT-4-early can generate instances of hate speech, discriminatory language, incitements to violence, or content that is then used to either spread false narratives or to exploit an individual. Such content can harm marginalized communities, contribute to hostile online environments, and, in extreme cases, precipitate real-world violence and discrimination. In particular, we found tha