The previous sections discussed theoretical issues regarding models of mental representation and experimental paradigms associated with those models. It is important to remember that theories of representation, whether they are psychological, linguistic or psycholinguistic in nature, are objectively only as valid as the evidence that supports them. At the same time, the structure and assumptions of any theoretical framework should yield to empirical testing, or risk criticism from the empirically-focused part of the cognitive science community – much as was the case with CMT. There is a growing need for interaction between theoretical and empirical approaches to mental representation (Gibbs 2017).
Without models of how complex reasoning and expertise develop, we will not be able to understand how perceptual representations are constructed. Although cognitive science would ultimately like to explain the progression from sensation to high-level cognition, these models cannot be developed in a purely bottom-up fashion (Markman and Dietrich 2017a: 474). There are a number of reasons why developing new models and improving existing theories benefits both the theoretical and empirical sides of cognitive science. First, models help empiricists design studies.
Any empirical study is based on theoretical assumptions that inform study design and methodology. For instance, many psycholinguistic studies take reaction time to be indicative of processing difficulty: the longer a participant takes to react to a stimulus, the more difficult that stimulus is assumed to be to process. The theoretical background of this assumption is the belief that human cognitive processing capacity is a limited resource, the online allocation of which follows certain principles. Similarly, the understanding of concepts should translate into study design, and a change in theoretical approach should translate into a change in methodology. “Theoretical change should translate into operationalization change. Or, to put it differently, operationalization change should track theoretical change” (Machery 2017: 64). Consequently, it is important not to stop at the theoretical level without considering the practical implications of a mental representation model. A successful theory should be clear with regard to its scope and terms, but also needs to generate precise predictions.
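To make the reaction-time logic concrete, the following is a minimal sketch of the kind of comparison it licenses. The condition labels and values are entirely hypothetical, and the independent-samples t-test is merely one conventional choice; this illustrates the assumption itself, not the method of any particular study.

```python
# Minimal sketch: treating reaction time as an index of processing
# difficulty. Condition names and data are hypothetical; a real study
# would load trial-level data from its own experiment.
from statistics import mean
from scipy import stats

# Hypothetical per-trial reaction times in milliseconds.
rt_literal = [612, 580, 645, 599, 630, 587, 621, 605]      # "easier" condition
rt_metaphoric = [701, 688, 735, 690, 712, 725, 698, 707]   # "harder" condition

# Longer mean RT is interpreted as greater processing cost.
print(f"mean literal:    {mean(rt_literal):.1f} ms")
print(f"mean metaphoric: {mean(rt_metaphoric):.1f} ms")

# An independent-samples t-test is one conventional way to ask whether
# the difference exceeds what chance variation would predict.
t, p = stats.ttest_ind(rt_metaphoric, rt_literal)
print(f"t = {t:.2f}, p = {p:.4f}")
```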
A good example here is conceptual metaphor theory which, while clearly defined, has been accused of both vagueness (Murphy 1997) and lack of empirical focus (Gibbs 2017). A general model is perhaps acceptable in the beginning stages of theory development, but as the theory evolves the focus needs to shift towards the implementation of the model. Second, if the model is meant to be applied in an interdisciplinary context, it should demonstrate awareness of developments in the range of fields it is trying to reach. In particular, models that can be reconciled with what we know about the brain lead to greater understanding between scientific disciplines. One of the theoretical frameworks that aims to be compatible with a range of fields in cognitive science is connectionism or, more specifically, neuroconstructivism (Westermann et al. 2017). Researchers who subscribe to this framework aim to produce cognitive-level theories consistent with neural theories in order to increase dialogue opportunities between these disciplines. Finally, while many models are meant to be interpreted as analogies or simulations, they should go beyond that in order to be truly useful.
While the network model of past-tense acquisition (Rumelhart and McClelland 1987) and the connectionist model that accounts for syntactic processing (Elman 1990) are successful simulations of processes in these specific domains, they are not useful in terms of generating insight beyond limited sets of data. There is no doubt that simulations are informative. However, the main aim of cognitive models is to predict and explain, which requires that partial models fit within a broader, cohesive framework. If we consider models of mental representation, this requirement is uncomfortable for amodal symbol theory.
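The defining design feature of Elman's simple recurrent network can be sketched in a few lines: the hidden layer receives its own previous state ("context units") alongside the current input, which is what lets the network carry sequential information forward. The dimensions and weights below are arbitrary and no training loop is shown; this is an illustration of the architecture, not a reconstruction of the original simulations.

```python
# Toy sketch of the recurrence at the heart of Elman's (1990) simple
# recurrent network. Sizes and weights are arbitrary placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 5, 8                 # toy vocabulary and hidden size

W_in = rng.normal(size=(n_hidden, n_in))        # input -> hidden
W_ctx = rng.normal(size=(n_hidden, n_hidden))   # context -> hidden

def step(x, h_prev):
    """One time step: combine current input with the previous hidden state."""
    return np.tanh(W_in @ x + W_ctx @ h_prev)

# Process a toy "sentence" of three one-hot word vectors.
sentence = np.eye(n_in)[:3]
h = np.zeros(n_hidden)
for word in sentence:
    h = step(word, h)                 # hidden state accumulates context
print(h.round(3))
```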
Although some connectionist models assume amodal (arbitrary) representation without losing the capacity to fit in the broader neuroconstructivist framework, systems fully reliant on amodal representation are not psychologically feasible. Amodal representation is dissociated from findings in neurology, psychology and psycholinguistics that demonstrate sensory involvement in tasks involving imagining and understanding concepts (Hauk and Pulvermüller 2017). In contrast, theories of representation based on grounded cognition, including prototype theory, seem compatible with a variety of cognitive disciplines.
Because organisms need cognitive systems that deal with the world as a whole rather than with separate situations (Edelman 2017), models of particular cognitive processes need to be either compatible with other models or scalable to include them. The capacity to generalise, make inferences, and abstract from experience is known as hierarchical abstraction. Edelman argues that, just as cognitive agents need hierarchical abstraction to scale up their understanding of the world, cognitive scientists need their models to possess this trait if they aspire to broaden the understanding of cognition (2017: 273). There is currently a debate about whether amodal symbols are a prerequisite for hierarchical abstraction (Markman and Dietrich 2017b) or whether this capacity can be achieved in dynamic systems (Beer 2017) but, although fascinating, it lies beyond the scope of this thesis. For now, let us agree that an adequate model of mental representation should be compatible with empirical findings, follow a coherent theoretical framework, and be scalable so that inference goes beyond any specific cognitive function. Therefore, if CMT is to become a reliable conceptualisation model, the theory should fulfil the requirements stated above. The first step toward this goal is to look at its compatibility with studies outside cognitive linguistics. This naturally leads us towards the human brain.
Neurolinguistic evidence for cognitive phenomena – review of methodological constraints
Although on the surface the results of neurolinguistic studies regarding conceptual structure (Binder et al. 2017; Quinn and Eimas 2017) seem both promising and convincing, interpreting research results and comparing them to the predictions made by cognitive linguistic theories is not a straightforward process. Each of the methods used in neurolinguistic research (fMRI, ERP, PET) has its limitations, assumptions, and biases. Both between and within those disciplines we will find differences in definitions and beliefs. Therefore, before we can assess the congruency of cognitive theories and neurolinguistic results it is important to discuss the extent to which the latter can be meaningful from an interdisciplinary perspective. Broadly speaking, there are two types of noninvasive methods used in neuroimaging research on humans.
Direct methods monitor electrical or magnetic fields linked to neural activity; indirect methods monitor changes in blood flow associated with neural activity (Ganis and Kosslyn 2017). Two of the most common direct methods used in neurolinguistic research are EEG and ERPs. Electroencephalography (EEG) provides information about the summed electrical events produced by individual brain cells. The event-related potential (ERP) is a variant of EEG often used in neurolinguistic research because it measures changes in electrical activity immediately following the presentation of a stimulus or a decision. EEG and ERPs are recorded from a set of electrodes placed on the participant's scalp. For a variety of reasons, these techniques are limited to measuring activity within the grey matter of the neocortex (Ganis and Kosslyn 2017).
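What makes ERPs usable despite the weakness of single-trial signals is averaging over many stimulus-locked epochs: activity consistently related to the event survives, while random background EEG cancels out. The sketch below illustrates this principle with an invented waveform and noise level; it is not a description of any real recording pipeline.

```python
# Sketch of ERP extraction by averaging. The component shape and noise
# scale are invented purely for illustration.
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_samples = 100, 300        # samples after stimulus onset
t = np.arange(n_samples)

# Hypothetical event-related component buried in every trial.
true_erp = 2.0 * np.exp(-((t - 120) ** 2) / (2 * 20 ** 2))

# Each recorded epoch = component + much larger background "EEG" noise.
epochs = true_erp + rng.normal(scale=5.0, size=(n_trials, n_samples))

erp = epochs.mean(axis=0)             # averaging reveals the component
print(f"single-trial noise SD ~5.0, residual after averaging: "
      f"{(erp - true_erp).std():.2f}")
```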
Although ERP is very effective in terms of measuring quick (less than 1 msec) changes in activation, it has limited spatial resolution because this technique can only measure signals at the surface of the head. Interpreting surface data as indicative of internal processing within the brain is one of the challenges of EEG and ERP data analysis (Savoy 2017). Indirect methods such as MRI, fMRI and PET are also called hemodynamic because, instead of measuring brain activity directly, they measure changes in blood flow, oxygen and glucose consumption, and cerebral blood oxygenation levels correlated with neural activity (Ganis and Kosslyn 2017).
Very generally speaking, these methods are based on the belief that oxygen consumption and blood flow temporarily increase in brain areas involved in a given cognitive task, which results in measurable changes in the adjacent magnetic field (Savoy 2017). The exact mechanism by which neurological processes cause metabolic changes and influence blood flow is not clear. However, the empirical relationship between brain activity and such changes is very reliable. Positron emission tomography (PET) is one of the methods that applies this principle to measure neural activity. From an empirical perspective, PET has a number of limitations that directly influence its usefulness for conceptual research. First, it requires the administration of a radioactive tracer to the subject, which limits the number of times per year any given volunteer may be scanned (due to ethical and medical constraints).
Second, the produced images have relatively low spatial and temporal resolution. In order to generate useful data, participants need to perform the same task for an extended period of time (about 30 s before and 60 s during data collection), which limits the types of cognitive tasks that can be studied with PET. Because of these factors, PET studies in the domain of neurolinguistics have largely been replaced with functional magnetic resonance imaging (fMRI).

The fMRI technique refers to the detection of hemodynamic changes associated with neural activity using magnetic resonance imaging (MRI). Magnetic resonance was originally developed as a method of creating images of soft tissue based on non-ionising radiation (and therefore less invasive). Functional magnetic resonance imaging is at present the most widely used neuroimaging technique. It exploits the optical and magnetic properties of deoxygenated and oxygenated haemoglobin, and the fact that any increase in local brain activity is marked by an increased concentration of oxygenated haemoglobin in that region (Ramachandran 2017). Although it is currently a very popular method in neurolinguistic research, fMRI is not ideal. It offers good spatial and temporal resolution and is less expensive than PET. However, the technique is very noisy and many subjects find spending time in the narrow tunnel of the machine uncomfortable. It is also very sensitive to motion: even small movements of the head introduce artefacts into the data, which may render the collected information effectively useless.

Assuming that an experiment produced valid results, another question is whether they are comparable to the results of other studies, and to what extent it is possible to make cross-disciplinary inferences. It is often the case, particularly in popular scientific writing, that the results of neurolinguistic studies are sensationalised. This is not surprising: the colourful 3D activation maps produced by neuroimaging software easily yield to enthusiastic misinterpretation. It is important to remember that the activation patterns recorded in the course of a neurolinguistic experiment are not “what happens in the brain” during a task. A general principle of functional neuroimaging studies is that the measured activations show relative differences in neural activity between two or more brain states. The pattern of activation recorded in a study that targets semantic processing depends not only on the cognitive processes the researcher intended to record during a task, but also on the activation, or lack thereof, in the comparison task (Binder et al. 2017).
In other words, because the brain is constantly active at some level, what is measured in functional neuroimaging research is not its activity in any objective sense (Ramachandran 2017). In order to eliminate the noise of normal brain activity, researchers measure the difference in activation between two or more conditions, one of which serves as a benchmark. Once a basic activation level is established, researchers need to decide on the activation threshold, that is, how strong the change in activation needs to be before it is recorded. Therefore, if a participant is asked to look at pictures of their loved ones and at emotionally neutral images of unfamiliar people, what is measured is not the objective response to images of family members, but rather the difference in brain activation when looking at familiar and unfamiliar people.
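The subtraction logic described above can be made explicit with a small simulation. The voxel counts, signal values and threshold below are invented; the sketch only demonstrates that what neuroimaging reports is a thresholded difference between conditions, not absolute brain activity.

```python
# Sketch of the subtraction/contrast logic of functional neuroimaging.
# All numbers are simulated placeholders, not real fMRI data.
import numpy as np

rng = np.random.default_rng(2)
n_voxels = 1000

# Baseline activity is present everywhere; a small subset of voxels
# hypothetically responds more strongly in the task condition.
baseline = rng.normal(loc=100.0, scale=2.0, size=n_voxels)
task = baseline + rng.normal(scale=1.0, size=n_voxels)
task[:50] += 5.0                      # hypothetical "active" voxels

contrast = task - baseline            # condition difference, not raw activity
threshold = 3.0                       # only changes this large are reported
active = contrast > threshold
print(f"voxels exceeding threshold: {active.sum()} / {n_voxels}")
```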
Furthermore, in order to reduce the effects of individual variation in brain size and structure, the activation patterns of individual participants are normally mapped onto a default brain model. Naturally, this procedure lowers the accuracy of the findings. Therefore, when interpreting neurolinguistic study results it is best to err on the side of caution rather than overgeneralise. In conclusion, it is clear that neuroimaging research has contributed greatly to the development of the field of cognitive science. Nevertheless, one should bear in mind both the advantages and the limitations of such studies when constructing theoretical models with interdisciplinary scope (Poeppel and Embick 2017).