Representation and Computation: Sample Essay [English Paper]


Sample essay: The notion of representation is among the most important and controversial in psychology. Its contemporary history traces back to cybernetics, when early modern psychologists first argued that the mind operates not directly on external reality but on internal models of it, which it manipulates and uses to understand, simulate and predict world events and dynamics. On this view the mind/brain is a computer; through its close ties to artificial intelligence, the view gave rise in the 1970s to cognitive science. A Turing machine is an abstract characterization of the digital computer, consisting of a set of data and a set of procedures that operate on them.

Over the following decades, the attempt to identify the code in which knowledge is represented gave rise to a major research area, far from a merely philosophical or metaphorical debate. Unlike discoveries in other sciences, such an understanding is critical here, because computational models are meant as models of what literally goes on in the mind. Each representational code is well suited to a corresponding set of reasoning rules. The sample essay below elaborates.

The notion of representation is one of the most important and controversial in psychology. Leaving aside the senses it was given in the first decades of scientific psychology – which include the works of Frederick Bartlett (1932) and even of behaviourists like Edward C. Tolman (1948), as well as the main body of Gestalt psychology – its contemporary history traces back to the cybernetic turn that took place around the middle of the 20th century. Kenneth Craik (1943) was among the first in modern psychology to argue that the mind operates not directly on external reality, but on internally created models thereof, which it manipulates and uses to understand, simulate and predict world events and dynamics.

Positions of this sort fitted well into the burgeoning cognitive psychology. This discipline viewed the mind/brain as a computer and, via its close relation to artificial intelligence, was to give rise in the 1970s to cognitive science. A Turing machine (Church 1936; Turing 1936) is an abstract characterization of digital computers. It consists of a set of data, which is written on a tape as tokens of a finite symbolic alphabet (e.g. made of zeros and ones), and a set of procedures that operate on them. It was all too natural in the heyday of cognitive science to equate – or straightforwardly identify – Craik’s mental models with the data of a Turing machine, their constitutive elementary items with the symbols in its formal alphabet, and their manipulation on the part of the mind with the operation of its programs (e.g. Thagard 1996).
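To make the definition concrete, the formal object just described can be sketched in a few lines of code. The following Python fragment is a minimal, purely illustrative sketch (the binary-increment machine and all names in it are invented for the example): the transition table plays the role of the set of procedures, and the tape holds the set of data written in a finite alphabet.

```python
# A minimal Turing machine sketch (illustrative only; not from the source text).
# The transition table maps (state, symbol) -> (new state, symbol to write, head move).
from collections import defaultdict

def run_turing_machine(transitions, tape, state="start", blank="_", max_steps=1000):
    cells = defaultdict(lambda: blank, enumerate(tape))  # the tape: data as symbol tokens
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        state, cells[head], move = transitions[(state, cells[head])]
        head += move
    lo, hi = min(cells), max(cells)
    return "".join(cells[i] for i in range(lo, hi + 1)).strip(blank)

# Example machine: increment a binary number (alphabet of zeros and ones).
# Scan right to the end of the input, then move left adding one with carry.
increment = {
    ("start", "0"): ("start", "0", +1),
    ("start", "1"): ("start", "1", +1),
    ("start", "_"): ("carry", "_", -1),
    ("carry", "1"): ("carry", "0", -1),
    ("carry", "0"): ("halt", "1", -1),
    ("carry", "_"): ("halt", "1", -1),
}

print(run_turing_machine(increment, "1011"))  # prints "1100", i.e. 11 + 1 = 12
```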

Over the decades that followed, the attempt to identify the code, or the codes, in which knowledge is supposedly represented in the mind gave rise to a major research area. The debate was far from merely philosophical or metaphorical. In an oft-quoted passage, Zenon Pylyshyn (1991: 219) stated that, differently from what happens in other sciences, ‘…in cognitive science our choice of notation is critical precisely because the theories claim that representations are written in the mind in the postulated notation: that at least some of the knowledge is explicitly represented and encoded in the notation proposed by the theory… What is sometimes not appreciated is that computational models are models of what literally goes on in the mind’.

Each representational code is well suited for a corresponding set of reasoning rules, and vice versa: the form of the data and the form of the procedures mirror each other, so that to identify the one practically means to identify the other. However, it was generally maintained that the mind’s program(s), once identified, would turn out to be comparatively simple: ‘An ant, viewed as a behaving system, is quite simple. The apparent complexity of its behaviour over time is largely a reflection of the complexity of the environment in which it finds itself’ (Simon 1981: 64). In Herbert Simon’s famous metaphor, the mind, like the ant, is a simple set of programs, and the complex environment in which it finds itself – and which makes it appear more complex than it actually is – is the set of representations over which it operates. Therefore, the real issue was held to be the identification of the code in which the representations are ‘written in the mind’. Once this code was identified, the mind and its functioning would be substantially understood.

In the 1960s and 1970s many such codes were proposed to capture the nature of human representations: the most notable among them, apart from classical and nonclassical logic, were semantic networks (Quillian 1968; Collins and Quillian 1969; Woods 1975), production rules (Newell and Simon 1972), frames (Minsky 1974), schemata (Bobrow and Norman 1975), scripts (Schank and Abelson 1977; Schank 1980), and mental models (Johnson-Laird 1983; the phrase ‘mental models’ has a specific, more technical meaning in Johnson-Laird’s work than in Craik’s account).
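To suggest the flavour of these notations, here is a schematic sketch; the slot names and the rule are invented for this example and are not drawn from Minsky (1974) or Newell and Simon (1972). It shows how such codes pair declarative structures (a frame with default-valued slots) with condition-action procedures (a production rule matched against working memory).

```python
# Illustrative sketches only: slot names and the rule are invented, not quoted
# from the cited works.

# A frame: a bundle of named slots with defaults, filled in when an instance
# of the concept is encountered.
restaurant_frame = {
    "is_a": "commercial_place",
    "slots": {
        "cuisine": None,                  # to be filled per instance
        "has_menu": True,                 # default assumption
        "payment": ["cash", "card"],      # default alternatives
    },
}

# A production rule: IF the conditions hold in working memory THEN act on it.
def rule_order_food(working_memory):
    if {"seated", "menu_read"} <= working_memory:
        working_memory.add("goal:order_food")

# A minimal recognize-act cycle.
working_memory = {"seated", "menu_read"}
for rule in (rule_order_food,):
    rule(working_memory)
print(working_memory)  # {'seated', 'menu_read', 'goal:order_food'}
```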

Each proposed notation had its own theoretical specifications and often its own computational and empirical or experimental correlates. What all of them appeared to have in common is the idea that mental representations are coded symbolically and are structured and computable. That representations are coded symbolically means that they are, to quote Pylyshyn again, ‘written in the mind in the postulated notation’; that they are computable means that they can be the input and – once transformed by the program – the output of the mind’s functioning. Taken together these properties mean that the mind/brain is a digital computer. 

That representations are structured means that the elementary items of which they are composed are linked to each other in complex ways and grouped into meaningful aggregates. Knowledge of restaurants, for example, has to include or be linked to knowledge about rooms, tables, menus, waiters, dishes, money, and so on; knowledge of money has to include or be linked to knowledge about value, trade, banknotes, coins, cheques, jobs, wages, robberies and so on; knowledge of robberies has to include or be linked to knowledge about property, law, banks, guns, police, handcuffs, jails and so on. Each such node may also point to specific examples or instances of the concept which the system has encountered. Thus, an intelligent agent’s overall knowledge system consists in a huge network or graph with different types of nodes and links to connect them. This is in practice a hypertext. Computational theories of representation differ with regard to what structure the hypertext is supposed to have, what types of nodes and links it may contain, what types of inference may be drawn by the processor while it traverses the hypertext, and so on.
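A toy version of such a hypertext, with invented node and link names, might look as follows; ‘inference’ here is nothing more than traversal of the typed graph, broadly in the spirit of the semantic networks cited above.

```python
# A toy semantic network (all node and link names invented for illustration).
# Each edge is a (source, link_type, target) triple; different link types
# stand for the different kinds of connections the paragraph describes.
edges = [
    ("restaurant", "has_part", "table"),
    ("restaurant", "has_part", "menu"),
    ("restaurant", "involves", "money"),
    ("money", "related_to", "banknote"),
    ("money", "related_to", "robbery"),
    ("robbery", "involves", "gun"),
    ("robbery", "is_a", "crime"),
    ("crime", "handled_by", "police"),
]

def neighbours(node, link_type=None):
    """Follow outgoing links from a node, optionally of a single type."""
    return [t for s, l, t in edges if s == node and link_type in (None, l)]

def reachable(start, max_depth=3):
    """What the processor can reach from `start`: traversal as inference."""
    frontier, seen = {start}, {start}
    for _ in range(max_depth):
        frontier = {t for n in frontier for t in neighbours(n)} - seen
        seen |= frontier
    return seen

print(reachable("restaurant"))
# {'restaurant', 'table', 'menu', 'money', 'banknote', 'robbery', 'gun', 'crime'}
```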

Other researchers, while subscribing to the computational paradigm, maintained instead that representations have an analogical nature (most notably Shepard 1980 and Kosslyn 1983) or that they can have both a symbolic and an analogical nature (Paivio 1986). These views gained popularity, albeit in a different form, when parallel distributed models of representation (later known under labels like connectionism or neural networks) were developed (Rumelhart et al. 1986; McClelland et al. 1986).

The rise of these alternatives was one reason for the decline of interest in classical knowledge representation outside artificial intelligence. Another was the growing understanding of the many limits of the classical view. Let us reconsider the assumption that the mind does not operate on the world, but only on the representations of the world that it entertains. This position, which constitutes one of the foundations of computational functionalism, is known as methodological solipsism (Fodor 1980). It requires that the mind/brain be connected to the world via noncognitive subsystems known as modules (Fodor 1983). Thus, the representational and reasoning system only needs to satisfy constraints of completeness, correctness, consistency and, possibly, efficiency, while truth – or, at least, appropriateness to reality – is maintained via nonrepresentational connections to the external world.

A problem with this view is that it only functions on a closed-world assumption: the assumption that everything that exists, as far as the system is concerned, is either explicitly coded in its knowledge base or formally deducible from what is coded. However, the closed-world assumption gives rise to computationally intractable problems known as the frame problem (McCarthy and Hayes 1969) and the qualification problem (McCarthy 1980). These problems follow from the requirement that each and every effect that a certain action may have or, respectively, each and every precondition that must hold for such an action to be executable, must be explicitly stated in the knowledge base or formally deducible from it.
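The difficulty can be put in miniature with a STRIPS-style sketch (all predicate and action names are invented for illustration). Under the closed-world assumption, the knowledge base is taken to contain all the facts there are; an action must enumerate its effects in explicit add and delete lists, and everything it does not mention is assumed to persist unchanged, which is exactly where the frame problem bites.

```python
# Closed-world, STRIPS-style sketch (names invented for illustration).
# The knowledge base lists ALL facts taken to be true; anything absent is false.
knowledge_base = {"at(robot, room1)", "at(cup, room1)", "holding(robot, cup)"}

move_robot_to_room2 = {
    # Qualification problem: are these really ALL the preconditions?
    "preconditions": {"at(robot, room1)"},
    # Frame problem: is this really ALL that changes?
    "adds": {"at(robot, room2)"},
    "deletes": {"at(robot, room1)"},
}

def apply_action(kb, action):
    """Apply an action: unlisted facts are assumed to persist unchanged."""
    if not action["preconditions"] <= kb:
        raise ValueError("unsatisfied precondition")
    return (kb - action["deletes"]) | action["adds"]

kb_after = apply_action(knowledge_base, move_robot_to_room2)
print("at(cup, room1)" in kb_after)  # True -- yet the robot carried the cup away!
```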

Some researchers think that these two problems imply the impossibility of a computational system operating intelligently in the real open world (Searle 1980; Dreyfus 1992). Others proposed instead that they can be overcome by coding the entire body of knowledge that a computational system would need, which is in practice a description of the whole relevant universe. This was attempted, for example, with the CYC project (Lenat and Feigenbaum 1991; the name of the project comes from the ‘cyc’ in ‘encyclopaedia’) (see Smith 1991 for a criticism of CYC and of its underlying assumptions). It may be interesting to remark that this position also corresponds to the standard position of computational psychology and artificial intelligence that everything in the mind has to be innate: learning from experience is viewed as impossible both in natural and in artificial agents, although the solutions to this impasse seem to differ in the two cases.

A seemingly different attempt to overcome the difficulties of methodological solipsism is to work with agents so simple as to need no knowledge base at all. Mainstream autonomous robotics rejected the whole idea of representation and claimed that cognition can and should be understood without resorting to it: internal models of the world are useless because ‘the world is its own best model’ (Brooks 1990: 6). This allowed investigators to ‘build complete creatures rather than isolated cognitive simulators’, as Rodney Brooks (1991) put it. On the one hand, however, these creatures hardly reach the intelligence level of a simple arthropod (or of any other computer), and scaling up to the human species appears impossible for principled reasons (Kirsh 1991; Tirassa et al. 2017). On the other hand, because their control systems ultimately function on zeros and ones, autonomous robots have been interpreted as an integral part of the symbolic paradigm and therefore of the research program of classical artificial intelligence (Vera and Simon 1993).

Thus, the most radical criticism of the classical view is the claim that the mind/brain is indeed a representational organ, but that the nature of representations is not that of a formal code. John Searle (1992) argued that the representational and computational structures that have typically been theorized in cognitive science lack any acceptable ontology. Not being observable or understandable either in the third person (because all that we can objectively see is neurons or circuitries and not frames or other representational structures) or in the first person (because frames and other representational structures are ‘cognitively impenetrable’, that is, inaccessible to subjectivity or introspection), these structures just cannot exist. Searle (1983) rejected the assumption – undisputed from Craik to Simon – that the representational mind/brain operates on formal internal models detached from the world and argued instead that its main feature is intentionality (see also Brentano 1874), a term which has been variously viewed as synonymous with connectedness, aboutness, meaningfulness, semantics or straightforwardly consciousness.

The idea that representations are constructed (or simply happen) at the interaction of the conscious mind/brain and the external world is also a major tenet of the area known as situated or embodied cognition (e.g. Gibson 1979; Johnson 1987; Varela et al. 1991; Hutchins 1995; Clark 1997; Clancey 1997; Glenberg 1997; Tirassa et al. 2017). Representations here are viewed as neither structured symbolic codes nor as the objects of formal manipulation, but as (at least partially culturally constructed) artefacts that are interposed between the internal and the external worlds and that generate a continuous dynamical reconceptualization of meaning. Thus, many researchers in situated cognitive science are constructivist with regard to the nature of knowledge, which they view as a continuously renewed product of consciousness and as tightly bound to action and experience, and practitioners of phenomenology with regard to their methodology, which follows from the idea that the mind only exists in the first person immersed in time (Heidegger 1927; Merleau-Ponty 1945; Varela et al. 1991; Varela 1996).
