Knowledge Representation and Reasoning
Focus on Sections 10.1-10.3, 10.6
Guest Lecturer: Eric Eaton
University of Maryland Baltimore County
Lockheed Martin Advanced Technology Laboratories
Adapted from slides by Tim Finin and Marie desJardins. Some material adopted from notes by Andreas Geyer-Schulz and Chuck Dyer.

Outline
- Approaches to knowledge representation
- Situation calculus
- Deductive/logical methods: forward-chaining production rule systems, semantic networks, frame-based systems, description logics
- Abductive/uncertain methods: What's abduction? Why do we need uncertainty? Bayesian reasoning
- Other methods: default reasoning, rule-based methods, Dempster-Shafer theory, fuzzy reasoning

Introduction
- Real knowledge representation and reasoning systems come in several major varieties, differing in their intended use, expressivity, and features.
- Some major families are: logic programming languages, theorem provers, rule-based or production systems, semantic networks, frame-based representation languages, databases (deductive, relational, object-oriented, etc.), constraint reasoning systems, description logics, Bayesian networks, and evidential reasoning.

Ontological Engineering
- Structuring knowledge in a useful fashion.
- An ontology formally represents concepts in a domain and the relationships between those concepts.
- Using the proper representation is key! It can be the difference between success and failure.
- It is often costly to formally engineer domain knowledge.
- Domain experts (a.k.a. subject matter experts)
- Commercial ontologies, e.g., Cyc

Representing change
- Representing change in the world in logic can be tricky.
- One way is just to change the KB: add and delete sentences from the KB to reflect changes. But how do we remember the past, or reason about changes?
- Situation calculus is another way.
- A situation is a snapshot of the world at some instant in time.
- When the agent performs an action A in situation S1, the result is a new situation S2.

Situations

Situation calculus
- A situation is a snapshot of the world at an interval of time during which nothing changes.
- Every true or false statement is made with respect to a particular situation.
- Add situation variables to every predicate: at(hunter,1,1) becomes at(hunter,1,1,s0), meaning at(hunter,1,1) is true in situation (i.e., state) s0.
- Alternatively, add a special 2nd-order predicate, holds(f,s), that means "f is true in situation s", e.g., holds(at(hunter,1,1),s0).
- Add a new function, result(a,s), that maps a situation s into the new situation that results from performing action a. For example, result(forward,s) is a function that returns the successor state (situation) to s.
- Example: the action agent-walks-to-location-y could be represented by
  (∀x)(∀y)(∀s) (at(Agent,x,s) ∧ ¬onbox(s)) → at(Agent,y,result(walk(y),s))
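
To make the result/holds machinery concrete, here is a minimal Python sketch (mine, not the lecture's; the `forward` action and its effect are hypothetical illustrations) that treats situations as immutable sets of ground fluents:

    # Minimal sketch: situations as immutable sets of ground fluents.
    # result(a, s) maps a situation to its successor; holds(f, s) tests truth.

    def holds(fluent, situation):
        """holds(f, s): f is true in situation s."""
        return fluent in situation

    def result(action, situation):
        """result(a, s): successor situation after performing a in s."""
        # Hypothetical effect axiom for a 'forward' action on a 1-D grid:
        # at(hunter, x) becomes at(hunter, x+1).
        if action == "forward":
            new = set(situation)
            for (pred, *args) in list(new):
                if pred == "at" and args[0] == "hunter":
                    new.discard((pred, *args))
                    new.add(("at", "hunter", args[1] + 1))
            return frozenset(new)
        return situation  # unknown actions change nothing in this sketch

    s0 = frozenset({("at", "hunter", 1)})
    s1 = result("forward", s0)
    assert holds(("at", "hunter", 2), s1)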

Deducing hidden properties
- From the perceptual information we obtain in situations, we can infer properties of locations:
  ∀l,s At(Agent,l,s) ∧ Breeze(s) ⇒ Breezy(l)
  ∀l,s At(Agent,l,s) ∧ Stench(s) ⇒ Smelly(l)
- Neither Breezy nor Smelly needs a situation argument because pits and Wumpuses do not move around.

Deducing hidden properties II
- Why both causal and diagnostic rules? Maybe diagnostic rules are enough?
- However, it is very tricky to ensure that they derive the strongest possible conclusions from the available information. For example, the absence of stench or breeze implies that adjacent squares are OK:
  ∀x,y,g,u,c,s Percept([None,None,g,u,c],s) ∧ At(Agent,x,s) ∧ Adjacent(x,y) ⇒ OK(y)
- But sometimes a square can be OK even when smells and breezes abound. Consider the following model-based rule:
  ∀x,s ¬At(Wumpus,x,s) ∧ ¬Pit(x) ⇔ OK(x)
- If the axioms correctly and completely describe the way the world works and the way percepts are produced, the inference procedure will correctly infer the strongest possible description of the world state given the available percepts.

Deducing hidden properties II
- We need to write some rules that relate various aspects of a single world state (as opposed to across states). There are two main kinds of such rules:
- Causal rules reflect the assumed direction of causality in the world:
  ∀l1,l2,s At(Wumpus,l1,s) ∧ Adjacent(l1,l2) ⇒ Smelly(l2)
  ∀l1,l2,s At(Pit,l1,s) ∧ Adjacent(l1,l2) ⇒ Breezy(l2)
  Systems that reason with causal rules are called model-based reasoning systems.
- Diagnostic rules infer the presence of hidden properties directly from the percept-derived information. We have already seen two diagnostic rules:
  ∀l,s At(Agent,l,s) ∧ Breeze(s) ⇒ Breezy(l)
  ∀l,s At(Agent,l,s) ∧ Stench(s) ⇒ Smelly(l)
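
As a hedged illustration (my own, not from the slides), these single-state rules translate directly into forward-chaining code; the predicate encoding below is an assumption of the sketch:

    # Sketch: causal vs. diagnostic rules over a single world state.
    # The state is a set of ground facts such as ("At", "Wumpus", (1, 2)).

    def adjacent(l1, l2):
        """4-neighborhood adjacency on a grid."""
        (x1, y1), (x2, y2) = l1, l2
        return abs(x1 - x2) + abs(y1 - y2) == 1

    def causal_rules(state, locations):
        """At(Wumpus,l1) ^ Adjacent(l1,l2) => Smelly(l2); likewise for pits."""
        derived = set()
        for l1 in locations:
            for l2 in locations:
                if adjacent(l1, l2):
                    if ("At", "Wumpus", l1) in state:
                        derived.add(("Smelly", l2))
                    if ("At", "Pit", l1) in state:
                        derived.add(("Breezy", l2))
        return derived

    def diagnostic_rules(state, agent_loc):
        """At(Agent,l) ^ Breeze => Breezy(l); At(Agent,l) ^ Stench => Smelly(l)."""
        derived = set()
        if ("Breeze",) in state:
            derived.add(("Breezy", agent_loc))
        if ("Stench",) in state:
            derived.add(("Smelly", agent_loc))
        return derived

    state = {("At", "Wumpus", (1, 2)), ("Breeze",)}
    locs = [(x, y) for x in range(1, 3) for y in range(1, 3)]
    print(causal_rules(state, locs))        # Smelly on squares adjacent to (1,2)
    print(diagnostic_rules(state, (1, 1)))  # {('Breezy', (1, 1))}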

Representing change: the frame problem
- Frame axiom: if property x doesn't change as a result of applying action a in state s, then it stays the same.
  On(x,z,s) ∧ Clear(x,s) ⇒ On(x,table,Result(Move(x,table),s)) ∧ ¬On(x,z,Result(Move(x,table),s))
  On(y,z,s) ∧ y ≠ x ⇒ On(y,z,Result(Move(x,table),s))
- The proliferation of frame axioms becomes very cumbersome in complex domains.

The frame problem II
- Successor-state axiom: a general statement that characterizes every way in which a particular predicate can become true. Either it can be made true, or it can already be true and not be changed:
  On(x,table,Result(a,s)) ⇔ [On(x,z,s) ∧ Clear(x,s) ∧ a = Move(x,table)] ∨ [On(x,table,s) ∧ a ≠ Move(x,z)]
- In complex worlds, where you want to reason about longer chains of action, even these types of axioms are too cumbersome.
- Planning systems use special-purpose inference methods to reason about the expected state of the world at any point in time during a multi-step plan.
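
As an illustration only (a toy blocks world of my own construction, not given in the slides), the successor-state axiom above can be evaluated directly:

    # Sketch: evaluating the successor-state axiom for On(x, table)
    # in a toy blocks world. State: dict mapping block -> what it sits on.

    def clear(state, x):
        """x is clear if nothing sits on it."""
        return all(support != x for support in state.values())

    def on_table_after(state, x, action):
        """On(x,table,Result(a,s)): made true now, or already true and untouched."""
        made_true = clear(state, x) and action == ("Move", x, "table")
        stays_true = state.get(x) == "table" and not (
            action[0] == "Move" and action[1] == x and action[2] != "table")
        return made_true or stays_true

    s = {"A": "B", "B": "table"}          # A sits on B; B sits on the table
    print(on_table_after(s, "A", ("Move", "A", "table")))  # True: made true
    print(on_table_after(s, "B", ("Move", "A", "table")))  # True: unchanged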

Qualification problem
- How can you possibly characterize every single effect of an action, or every single exception that might occur?
- When I put my bread into the toaster and push the button, it will become toasted after two minutes, unless...
  - The toaster is broken, or
  - The power is out, or
  - I blow a fuse, or
  - A neutron bomb explodes nearby and fries all electrical components, or
  - A meteor strikes the earth, and the world as we know it ceases to exist, or...

Ramification problem
- Similarly, it's just about impossible to characterize every side effect of every action, at every possible level of detail:
- When I put my bread into the toaster and push the button, the bread will become toasted after two minutes, and
  - The crumbs that fall off the bread onto the bottom of the toaster tray will also become toasted, and
  - Some of the aforementioned crumbs will become burnt, and
  - The outside molecules of the bread will become "toasted," and
  - The inside molecules of the bread will remain more "breadlike," and
  - The toasting process will release a small amount of humidity into the air because of evaporation, and
  - The heating elements will become a tiny fraction more likely to burn out the next time I use the toaster, and
  - The electricity meter in the house will move up slightly, and...

Knowledge engineering!
- Modeling the "right" conditions and the "right" effects at the "right" level of abstraction is very difficult.
- Knowledge engineering (creating and maintaining knowledge bases for intelligent reasoning) is an entire field of investigation.
- Many researchers hope that automated knowledge acquisition and machine learning tools can fill the gap:
  - Our intelligent systems should be able to learn about the conditions and effects, just like we do!
  - Our intelligent systems should be able to learn when to pay attention to, or reason about, certain aspects of processes, depending on the context!

Preferences among actions
- A problem with the Wumpus world knowledge base that we have built so far is that it is difficult to decide which action is best among a number of possibilities.
- For example, to decide between a forward and a grab, axioms describing when it is OK to move to a square would have to mention glitter. This is not modular!
- We can solve this problem by separating facts about actions from facts about goals. This way our agent can be reprogrammed just by asking it to achieve different goals.

Preferences among actions
- The first step is to describe the desirability of actions independently of each other. In doing this we will use a simple scale: actions can be Great, Good, Medium, Risky, or Deadly.
- Obviously, the agent should always do the best action it can find:
  ∀a,s Great(a,s) ⇒ Action(a,s)
  ∀a,s Good(a,s) ∧ ¬∃b Great(b,s) ⇒ Action(a,s)
  ∀a,s Medium(a,s) ∧ ¬∃b (Great(b,s) ∨ Good(b,s)) ⇒ Action(a,s)
  ...

Preferences among actions
- We use this action quality scale in the following way. Until it finds the gold, the basic strategy for our agent is:
  - Great actions include picking up the gold when found and climbing out of the cave with the gold.
  - Good actions include moving to a square that's OK and hasn't been visited yet.
  - Medium actions include moving to a square that is OK and has already been visited.
  - Risky actions include moving to a square that is not known to be deadly or OK.
  - Deadly actions are moving into a square that is known to have a pit or a Wumpus.
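
A minimal Python sketch of this tiered selection (my illustration; the `quality` classifier is a hypothetical stub standing in for the rules above):

    # Sketch: pick the best available action by quality tier.
    TIERS = ["Great", "Good", "Medium", "Risky", "Deadly"]

    def choose_action(actions, quality):
        """quality(action) -> one of TIERS; return an action from the best tier."""
        for tier in TIERS:
            candidates = [a for a in actions if quality(a) == tier]
            if candidates:
                return candidates[0]
        return None

    # Hypothetical usage with a stub classifier:
    def quality(action):
        return {"grab-gold": "Great", "move-unvisited-ok": "Good",
                "move-visited-ok": "Medium"}.get(action, "Risky")

    print(choose_action(["move-visited-ok", "grab-gold"], quality))  # grab-gold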

Goal-based agents
- Once the gold is found, it is necessary to change strategies. So now we need a new set of action values.
- We could encode this as a rule:
  ∀s Holding(Gold,s) ⇒ GoalLocation([1,1],s)
- We must now decide how the agent will work out a sequence of actions to accomplish the goal. Three possible approaches are:
  - Inference: good versus wasteful solutions
  - Search: make a problem with operators and a set of states
  - Planning: to be discussed later

Semantic Networks
- A semantic network is a simple representation scheme that uses a graph of labeled nodes and labeled, directed arcs to encode knowledge.
- Usually used to represent static, taxonomic concept dictionaries.
- Semantic networks are typically used with a special set of accessing procedures that perform "reasoning," e.g., inheritance of values and relationships.
- Semantic networks were very popular in the '60s and '70s but are less frequently used today; they are often much less expressive than other KR formalisms.
- The graphical depiction associated with a semantic network is a significant reason for their popularity.

Nodes and Arcs
- Arcs define binary relationships that hold between objects denoted by the nodes.
- [Figure: a network linking john, sue, max, 5, and 34 with mother, father, wife, husband, and age arcs.] The equivalent facts: mother(john,sue), age(john,5), wife(sue,max), age(max,34).

Semantic Networks
- The ISA (is-a) or AKO (a-kind-of) relation is often used to link instances to classes, and classes to superclasses.
- Some links (e.g., hasPart) are inherited along ISA paths.
- The semantics of a semantic net can be relatively informal or very formal; it is often defined at the implementation level.
- [Figure: Red and Rusty link by isa to Robin, Robin isa Bird, Bird isa Animal; Bird hasPart Wing.]
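
One common way to realize such a net is as labeled triples plus an accessing procedure that walks isa links. A minimal sketch (mine, not the lecture's) using the example network above:

    # Sketch: a semantic network as (subject, relation, object) triples,
    # with hasPart inherited along isa paths.
    triples = {
        ("Rusty", "isa", "Robin"), ("Red", "isa", "Robin"),
        ("Robin", "isa", "Bird"), ("Bird", "isa", "Animal"),
        ("Bird", "hasPart", "Wing"),
    }

    def objects(subject, relation):
        return [o for (s, r, o) in triples if s == subject and r == relation]

    def inherited(subject, relation):
        """Collect values of `relation` on subject and all its isa ancestors."""
        values, frontier = [], [subject]
        while frontier:
            node = frontier.pop()
            values += objects(node, relation)
            frontier += objects(node, "isa")
        return values

    print(inherited("Rusty", "hasPart"))  # ['Wing'], via Robin -> Bird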

Reification
- Non-binary relationships can be represented by "turning the relationship into an object." This is an example of what logicians call "reification" (reify, v.: to consider an abstract concept to be real).
- We might want to represent the generic give event as a relation involving three things, a giver, a recipient, and an object: give(john,mary,book32).
- [Figure: a give node linked to john by giver, to mary by recipient, and to book32 by object.]
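
Reification is easy to mirror in the triple encoding: the ternary give(john,mary,book32) becomes an event node with three binary arcs. A sketch under the same assumed triple encoding as above (the identifier `give17` is hypothetical):

    # Sketch: reifying give(john, mary, book32) as an event node
    # with three binary relations.
    give_event = "give17"  # hypothetical identifier for this event instance
    event_triples = {
        (give_event, "instance", "give"),
        (give_event, "giver", "john"),
        (give_event, "recipient", "mary"),
        (give_event, "object", "book32"),
    }

    # Queries stay binary: who received something in this event?
    recipients = [o for (s, r, o) in event_triples
                  if s == give_event and r == "recipient"]
    print(recipients)  # ['mary']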

Individuals and Classes
- Many semantic networks distinguish:
  - nodes representing individuals from those representing classes
  - the "subclass" relation from the "instance-of" relation
- [Figure: Robin subclass of Bird, Bird subclass of Animal; Red and Rusty instances of Robin; Bird hasPart Wing; an instance link to Genus.]

Link types
- [Table of link types.]

Inference by Inheritance
- One of the main kinds of reasoning done in a semantic net is the inheritance of values along the subclass and instance links.
- Semantic networks differ in how they handle the case of inheriting multiple different values:
  - all possible values are inherited, or
  - only the "lowest" value or values are inherited.

Conflicting inherited values

Multiple inheritance
- A node can have any number of superclasses that contain it, enabling a node to inherit properties from multiple "parent" nodes and their ancestors in the network.
- These rules are often used to determine inheritance in such "tangled" networks where multiple inheritance is allowed (a sketch follows below):
  - If X is a subclass of A, A is a subclass of B, and both A and B have property P, then X inherits A's property.
  - If X is a subclass of both A and B, but neither A is a subclass of B nor B a subclass of A, then the inherited value is ambiguous and the conflict must be resolved by some other policy.
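
A minimal sketch (my own, assuming the closest-superclass-wins policy described above, with a hypothetical taxonomy) of property lookup under multiple inheritance:

    # Sketch: inherit a property from the closest superclass (BFS order),
    # surfacing a conflict when two unrelated parents disagree.
    from collections import deque

    parents = {"X": ["A", "B"], "A": ["C"], "B": []}   # hypothetical taxonomy
    props = {"A": {"P": 1}, "B": {"P": 2}, "C": {"P": 3}}

    def lookup(node, prop):
        """Return the set of values for prop at the closest level that has any."""
        frontier = deque([node])
        while frontier:
            level, frontier = list(frontier), deque()
            values = {props.get(n, {}).get(prop) for n in level} - {None}
            if values:
                return values  # more than one value = an unresolved conflict
            for n in level:
                frontier.extend(parents.get(n, []))
        return set()

    print(lookup("X", "P"))  # {1, 2}: A and B conflict; C is farther away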

Abduction
- Deduction: given A and A ⇒ B, conclude B.
- Abduction: given B and A ⇒ B, conclude "possibly A."
- Induction: given "whenever A then B," conclude "possibly A ⇒ B."
- Deduction reasons from causes to effects; abduction reasons from effects to causes; induction reasons from specific cases to general rules.

Characteristics of abductive reasoning
- "Conclusions" are hypotheses, not theorems: they may be false even if the rules and facts are true (e.g., misdiagnosis in medicine).
- There may be multiple plausible hypotheses: given rules A ⇒ B and C ⇒ B, and fact B, both A and C are plausible hypotheses. Abduction is inherently uncertain.
- Hypotheses can be ranked by their plausibility (if it can be determined).

Characteristics of abductive reasoning (cont.)
- Reasoning is often a hypothesize-and-test cycle:
  - Hypothesize: postulate possible hypotheses, any of which would explain the given facts (or at least most of the important facts).
  - Test: test the plausibility of all or some of these hypotheses.
- One way to test a hypothesis H is to ask whether something that is currently unknown, but can be predicted from H, is actually true:
  - If we also know A ⇒ D and C ⇒ E, then ask if D and E are true.
  - If D is true and E is false, then hypothesis A becomes more plausible (support for A is increased; support for C is decreased).
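
A compact sketch of this hypothesize-and-test cycle (my illustration; the rules and the scoring scheme are toy choices, not from the lecture):

    # Sketch: abductive hypothesize-and-test over rules "cause => effect".
    rules = [("A", "B"), ("C", "B"), ("A", "D"), ("C", "E")]

    def hypothesize(observed):
        """All causes that would explain the observed fact."""
        return {cause for (cause, effect) in rules if effect == observed}

    def test(hypotheses, known_true, known_false):
        """Score each hypothesis by the outcomes of its other predictions."""
        scores = {}
        for h in hypotheses:
            predictions = [e for (c, e) in rules if c == h]
            scores[h] = (sum(p in known_true for p in predictions)
                         - sum(p in known_false for p in predictions))
        return scores

    hyps = hypothesize("B")              # {'A', 'C'}
    print(test(hyps, known_true={"B", "D"}, known_false={"E"}))
    # {'A': 2, 'C': 0}: D being true supports A; E being false undermines C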

Characteristics of abductive reasoning (cont.)
- Reasoning is non-monotonic: the plausibility of hypotheses can increase or decrease as new facts are collected. In contrast, deductive inference is monotonic: it never changes a sentence's truth value, once known.
- In abductive (and inductive) reasoning, some hypotheses may be discarded, and new ones formed, when new observations are made.

Sources of uncertainty
- Uncertain inputs: missing data; noisy data.
- Uncertain knowledge: multiple causes lead to multiple effects; incomplete enumeration of conditions or effects; incomplete knowledge of causality in the domain; probabilistic/stochastic effects.
- Uncertain outputs: abduction and induction are inherently uncertain; default reasoning, even in deductive fashion, is uncertain; incomplete deductive inference may be uncertain.
- Probabilistic reasoning only gives probabilistic results (it summarizes uncertainty from various sources).

Decision making with uncertainty
- Rational behavior:
  - For each possible action, identify the possible outcomes.
  - Compute the probability of each outcome.
  - Compute the utility of each outcome.
  - Compute the probability-weighted (expected) utility over possible outcomes for each action.
  - Select the action with the highest expected utility (the principle of Maximum Expected Utility).
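
A minimal sketch of this procedure (mine; the outcome model and its numbers are hypothetical stand-ins):

    # Sketch: select the action with Maximum Expected Utility (MEU).
    # outcomes: action -> list of (probability, utility) pairs.
    outcomes = {
        "dig":  [(0.7, 10.0), (0.3, -5.0)],   # hypothetical numbers
        "wait": [(1.0, 1.0)],
    }

    def expected_utility(action):
        return sum(p * u for (p, u) in outcomes[action])

    best = max(outcomes, key=expected_utility)
    print(best, expected_utility(best))  # dig 5.5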
