Cohabitation: Computation at 70, Cognition at 20


Introduction: "Cohabitation: Computation at 70, Cognition at 20" examines the equation of cognition with computation: we had no idea how the mind did anything, whereas we knew computation could do just about everything worthy of the name "cognition," and it might even solve the mind/body problem. This essay traces the relation between cognition and computation. If the soul, like software, were independent of its physical incarnation, even neural nets could be simulated or subsumed. But then came Searle's thought experiment, showing that cognition cannot all be computation. If cognition has to be a hybrid of the sensorimotor and the symbolic, it turns out we have all just been haggling over the price instead of delivering the goods. One of the first candidate theories was the mental imagery theory of cognition: when we introspect, most of us are aware of images going on in our heads.

Imagery theory stressed that, for example, the way I remember who my third-grade teacher was is that I first picture her in my mind. Today, after three decades of investigation, we know that this kind of introspective report accompanies all conscious cognition: we are unaware of our cognitive blind spots, and we are mostly cognitively blind. The full text follows below.

ABSTRACT
Zenon Pylyshyn cast cognition's lot with computation, stretching the Church/Turing Thesis to its limit: We had no idea how the mind did anything, whereas we knew computation could do just about everything. Doing it with images would be like doing it with mirrors, and little men in mirrors. So why not do it all with symbols and rules instead? Everything worthy of the name "cognition," anyway; not what was too thick for cognition to penetrate. It might even solve the mind/body problem if the soul, like software, were independent of its physical incarnation. It looked like we had the architecture of cognition virtually licked. Even neural nets could be either simulated or subsumed. But then came Searle, with his sino-spoiler thought experiment, showing that cognition cannot be all computation (though not, as Searle thought, that it cannot be computation at all). So if cognition has to be hybrid sensorimotor/symbolic, it turns out we've all just been haggling over the price, instead of delivering the goods, as Turing had originally proposed 5 decades earlier.
 
One of the first candidate armchair theories of cognition was mental imagery theory: When we introspect, most of us are aware of images going on in our heads. (There are words too, but we will return to those later.) The imagery theorists stressed that, for example, the way I remember who my third-grade school-teacher was is that I first picture her in my head, and then I name her, just as I would if I had seen her. Today, after 3 decades of having been enlightened on this score by Zenon Pylyshyn's celebrated "Mind's Eye" critique of mental imagery in 1973, it is hard even to imagine how anyone could ever have failed to see this answer (that the way I remember her name is by first picturing her, and then identifying the picture) as anything but empty question-begging. How do I come up with her picture? How do I identify her picture? Those are the real functional questions we are missing; and it is no doubt because of the anosognosia (the "picture completion" effect) that comes with all conscious cognition that we don't notice what we are missing: We are unaware of our cognitive blind spots, and we are mostly cognitively blind.
 
It is now history how Zenon opened our eyes and minds to these cognitive blind spots and to how they help non-explanations masquerade as explanations. First, he pointed out that the trouble with "picture in the mind" "just-so" stories is that they simply defer our explanatory debt: How did our brains find the right picture? And how did they identify whom it was a picture of? By reporting our introspections of what we are seeing and feeling while we are coming up with the right answer, we may (or may not) be correctly reporting the decorative accompaniments or correlates of our cognitive functions, but we are not explaining the functions themselves. Who found the picture? Who looked at it? Who recognized it? And how? I first asked how I do it, what is going on in my head; and the reply was just that a little man in my head (the homunculus) does it for me. But then what is going on in the little man's head?
 
Discharging the Homunculus. Imagery theory leaves a lot of explanatory debts to discharge, perhaps an infinite regress of them. Zenon suggested that the first thing we need to do is to discharge the homunculus: stop answering the functional questions in terms of their decorative correlates, and explain the functions themselves. Originally, Zenon suggested that the genuine explanation has to be "propositional" (Pylyshyn 1973), but this soon evolved into "computational" (Pylyshyn 1984). If I ask you who your third-grade school-teacher was, your brain has to do a computation, a computation that is invisible and impenetrable to introspection. The computation is done by our heads implicitly, but successful cognitive theory must make it explicit, so it can be tested (computationally) to see whether it works. The decorative phenomenology that accompanies the real work that is being done implicitly is simply misleading us, lulling us, in our anosognosic delusion, into thinking that we know what we are doing and how. In reality, we will only know how when the cognitive theorists figure it out and tell us.
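To make the methodological point concrete, here is a minimal sketch (my illustration, not Pylyshyn's; the memory store and the teacher's name are hypothetical) of what it means to replace the decorative introspective report with an explicit, testable candidate mechanism for the recall:

```python
# A toy explicit mechanism for "who was my third-grade teacher?".
# The associative store and the name are invented for illustration;
# the point is only that the procedure is explicit and can be run
# and tested, unlike an introspective report about mental pictures.

MEMORY = {("teacher", 3): "Mrs. Rivers"}  # hypothetical stored association

def recall_teacher(grade: int) -> str:
    """An explicit, testable candidate mechanism for the recall."""
    return MEMORY[("teacher", grade)]

# The theory now makes a checkable prediction instead of offering
# a phenomenological description:
assert recall_teacher(3) == "Mrs. Rivers"
```

However toy-like, a mechanism of this kind can fail or succeed on test cases, which is exactly what the introspective "I just picture her" story cannot do.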
 
Well, this does not solve the mind/body problem, for many reasons, but here I will only point out that it does not solve the problem of the relation between computational and dynamical processes in cognition either: Computations need to be dynamically implemented in order to run and to do whatever they do, but that's not the only computational/dynamical relationship; and it's not the one we were looking for when we were asking about, for example, mental rotation.
 
Computation is rule-based symbol manipulation; the symbols are arbitrary in their shape (e.g., 0's and 1's), and the manipulation rules are syntactic, being based on the symbols' shapes, not their meanings. Yet a computation is only useful if it is semantically interpretable; indeed, as Fodor and Pylyshyn (1988) have been at pains to point out, systematic semantic interpretability (indeed compositional semantics, in which most of the symbols themselves are individually interpretable and can be combined and recombined coherently and interpretably, like the words in a natural language) is the hallmark of a symbol system. But if symbols have meanings, yet their meanings are not in the symbol system itself, what is the connection between the symbols and what they mean?
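As a concrete illustration (mine, not the paper's), here is a minimal sketch of purely syntactic symbol manipulation: the rules fire on the shapes of the tokens alone, and any interpretation, say as XOR on binary digits, is supplied entirely from outside the system:

```python
# Rule-based symbol manipulation. The rules are keyed on the
# symbols' shapes ("0", "1"); nothing in the system knows or
# cares what, if anything, the tokens mean.

RULES = {
    ("0", "0"): "0",
    ("0", "1"): "1",
    ("1", "0"): "1",
    ("1", "1"): "0",
}

def manipulate(a: str, b: str) -> str:
    """Apply whichever rule matches the symbols' shapes."""
    return RULES[(a, b)]

# An outside interpreter may read this as XOR on bits, as the
# parity of two lamps, or as something else entirely; the
# semantics are not in the symbol system itself.
print(manipulate("1", "0"))  # -> "1"
```

The tokens could just as well have been "A" and "B"; the computation would be unchanged, which is what it means for the symbols to be arbitrary in shape.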
 
Grounding the Language of Thought. Here it is useful to think of propositions again: Pylyshyn's original candidate, and the prototype of Fodor's (1975) "language of thought" (computational in both instances). The words in propositions are symbols. What connects those symbols to their referents? What gives them meaning? In the case of a sentence in a book, such as "the cat is on the mat," there is no problem, because it is the mind of the writer or reader of the sentence that makes the connection between the word "cat" and the things in the world we happen to call "cats," and between the proposition "the cat is on the mat" and the circumstance in the world we happen to call "cats being on mats." Let us call that mediated symbol-grounding: The link between the symbol and its referent is made by the brain of the user. That's fine for logic, mathematics and computer science, which merely use symbol systems. But it won't do for cognitive science, which must also explain what is going on in the head of the user; it doesn't work, for the same reason that homuncular explanations do not work in cognitive explanation, leading instead to an endless homuncular regress. The buck must stop somewhere, and the homunculus must be discharged, replaced by a mindless, fully autonomous process.
 
Well, in Pylyshyn's computationalism, the only candidate autonomous internal function for discharging the homunculus is computation, and now we are asking whether that function is enough. Can cognition be just computation? The philosopher John Searle (1980) asked this question in his celebrated thought experiment. Let us agree (with Turing 1950) that "cognition is as cognition does" (or better, so that we have a Chomskian competence criterion rather than a mere behaviorist performance criterion, that "cognition is as cognition can do"). The gist of the Turing Test is that on the day we will have been able to put together a system that can do everything a human being can do, indistinguishably from the way a human being does it, we will have come up with at least one viable explanation of cognition.
 
The root of the problem is the symbol-grounding problem: How can the symbols in a symbol system be connected to the things in the world that they are ever-so-systematically interpretable as being about: connected directly and autonomously, without begging the question by having the connection mediated by that very human mind whose capacities and functioning we are trying to explain! For ungrounded symbol systems are just as open to homuncularity, infinite regress and question-begging as subjective mental imagery is!
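Here is a minimal sketch (my illustration; the toy definitions are invented) of why ungrounded symbols regress: chasing the meaning of a symbol inside the system only ever yields more symbols, never a connection to the things in the world:

```python
# An ungrounded symbol system: every symbol is "defined" only by
# other symbols, so a search for meaning never leaves the system.

DEFINITIONS = {
    "cat": "feline",
    "feline": "cat-like",
    "cat-like": "feline",  # and round we go
}

def chase_meaning(symbol: str, steps: int = 5) -> list:
    """Follow definitions; we only ever reach more symbols."""
    trail = [symbol]
    for _ in range(steps):
        symbol = DEFINITIONS[symbol]
        trail.append(symbol)
    return trail

print(chase_meaning("cat"))
# ['cat', 'feline', 'cat-like', 'feline', 'cat-like', 'feline']
```

Only a reader who already knows what cats are can break the circle, which is precisely the mediation that an autonomous cognitive system cannot help itself to.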
 
The only way to do this, in my view, is if cognitive science hunkers down and sets its mind and methods on scaling up to the Turing Test, for all of our behavioral capacities. Not just the email version of the TT, based on computation alone, which has been shown to be insufficient by Searle, but the full robotic version of the TT, in which the symbolic capacities are grounded in sensorimotor capacities and the robot itself (Pylyshyn 1987) can mediate the connection, directly and autonomously, between its internal symbols and the external things its symbols are interpretable as being about, without the need for mediation by the minds of external interpreters.
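One way to picture the hybrid architecture (a sketch under my own assumptions; the names and the toy detector are hypothetical, not from the paper) is that each elementary symbol bottoms out in a sensorimotor category detector inside the robot, so the robot itself, not an external interpreter, mediates the symbol-to-world connection:

```python
from dataclasses import dataclass

@dataclass
class Percept:
    """Stand-in for raw sensory input (here, a toy feature vector)."""
    features: tuple

def cat_detector(p: Percept) -> bool:
    """Toy sensorimotor categorizer; assume feature 0 tracks cat-ness."""
    return p.features[0] > 0.5

def ground(p: Percept):
    """Map a percept directly and autonomously to a symbol token."""
    return "cat" if cat_detector(p) else None

# Composite symbol strings ("the cat is on the mat") inherit their
# grounding from elementary tokens like "cat", which bottom out in
# the robot's own detectors rather than in an outside reader's mind.
print(ground(Percept((0.9, 0.2))))  # -> "cat"
```

The dynamical, sensorimotor part (the detector) and the computational part (the symbol combinatorics built on its output tokens) then cohabit in one autonomous system, which is the point of the robotic upgrade to the TT.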
 
We cannot prejudge what proportion of the TT-passing robot's internal structures and processes will be computational and what proportion dynamic. We can just be sure that they cannot all be computational, all the way down. As to which components of its internal structures and processes we will choose to call "cognitive": does it really matter? And can't we wait till we get there to decide?
