Linking services and information environments



Information environments organised via digital libraries continue to be part of the Web but are distinguished by services that apply to contents that are deemed to be within them, determined not by physical location but by the nature and selection of those contents and the services that act on them. Examples of these environments might include the contents of libraries as in the UK eLib-funded Distributed National Electronic Reserve (DNER), single-publisher collections such as the ACM Digital Library, larger collections of published journal papers accessed via DOI-based services, or distributed archives such as the networked Computer Science Technical Report Library, NCSTRL. In essence, in these environments the Web is transformed from a document delivery service into a dynamic, computational framework.

The scope of the project and that shown in Figure 1 is intentionally wide, although of immediate concern is the area bounded by the left-hand vertical arrows and the information environments denoted by 'Southampton' and 'Cornell'. The first results reported below specifically relate to Southampton's work with the Los Alamos physics archives.

These scenarios are highly flexible and may be viewed in other ways by different applications. For example, the connecting arrows all represent possible scenarios, although such applications may not have been implemented yet. An SFX application linking various resources - notably a number of abstracting and indexing (A&I) services, some publishers' full-text content as well as the Los Alamos archives - via an SFX database for library environments was described by Van de Sompel and Hochstenbach (1999b). The multi-publisher CrossRef linking initiative is likely to route through a DOI resolver to a library or aggregated journal environment. ISI has announced a number of agreements with publishers to link between Web of Science and full texts. Each of these applications can be identified in some form in Figure 1.

In addition, the roles imputed to the linking tools in Figure 1 may not reflect their wider capabilities. Citeseer and SFX variously contain information retrieval, database and linking functions, although this is not shown for Citeseer. Demonstrator services based on these tools have created user interfaces. In this respect the tools could legitimately be indicated as part of the information environments at the top of the diagram, but are not shown in this view. Apart from Citeseer, methods for data extraction from the archives - Dienst and the Santa Fe metadata conventions - that will be used to create citation databases are ongoing developments of the Open Archives initiative.

Thus it can be seen how interchangeable these components are, and it is anticipated that this flexibility will drive significant innovation in citation linking. Figure 1 should be considered a perspective on environments for citation linking held by the OpCit project but not necessarily by others.

OpCit: early implementations and results
The process of adding citation links dynamically to documents retrieved from an archive involves parsing the document during download to identify and read citations. The data are compared with a precompiled link or citation database, and a link to the cited work is added where an exact match is found. One method for doing this is described in more detail by Hitchcock et al. (1997).
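The matching step described above can be sketched as follows. This is a minimal illustration, not the project's actual implementation: the database contents, the URL, and the bracketed citation format are all assumptions made for the example.

```python
import re

# Hypothetical precompiled citation database: maps a normalised
# reference string to the URL of the cited work in the archive.
CITATION_DB = {
    "hitchcock et al. 1997": "https://example.org/archive/hitchcock1997",
}

# Assumed citation format for this sketch: bracketed references like
# "[Hitchcock et al. 1997]" in the document text.
REFERENCE_PATTERN = re.compile(r"\[(?P<ref>[^\]]+)\]")


def normalize(ref):
    """Collapse whitespace and case so lookups use one canonical form."""
    return " ".join(ref.lower().split())


def link_citations(document):
    """Replace each recognised citation with a hyperlink to the cited work.

    Citations with no exact match in the database are left untouched,
    mirroring the exact-match policy described in the text.
    """
    def replace(match):
        url = CITATION_DB.get(normalize(match.group("ref")))
        if url is None:
            return match.group(0)  # no exact match: leave the citation as-is
        return '<a href="{0}">[{1}]</a>'.format(url, match.group("ref"))

    return REFERENCE_PATTERN.sub(replace, document)
```

In a live system this function would run as a filter while the document is streamed to the user, so the source archive itself need not store any links.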

A similar method has been adopted for OpCit, but in this case the application demands that a larger, richer citation database is compiled. Broadly, the stages involved in the compilation of this database are:

1. transforming original documents to a format (e.g. plain text) suitable for extracting citations;
2. parsing documents to identify and read citations;
3. designing a database schema to store reference information in an information-rich, easy-to-use and flexible manner that accommodates future extensions.
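As an illustration of stage 3, the sketch below stores reference information relationally, keeping each citation's surrounding context so it can be queried later. The table and column names are assumptions for this example, not the schema actually used by the project.

```python
import sqlite3

# Illustrative schema: one table of papers and one table of
# citation links between them, with room for citation context.
SCHEMA = """
CREATE TABLE paper (
    id    INTEGER PRIMARY KEY,
    title TEXT NOT NULL,
    year  INTEGER
);
CREATE TABLE citation (
    citing_id INTEGER NOT NULL REFERENCES paper(id),
    cited_id  INTEGER NOT NULL REFERENCES paper(id),
    context   TEXT,  -- the sentence around the citation, for citation context
    PRIMARY KEY (citing_id, cited_id)
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(SCHEMA)

# A single extracted reference, as stage 2 might produce it.
conn.execute("INSERT INTO paper VALUES (1, 'Citing paper', 1999)")
conn.execute("INSERT INTO paper VALUES (2, 'Cited paper', 1997)")
conn.execute("INSERT INTO citation VALUES (1, 2, 'as shown by ...')")
```

Separating papers from citation links is what makes the database easy to extend: new per-paper or per-link attributes can be added without reparsing the archive.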
Citeseer excels in this respect, but the algorithms have reportedly not been as successful when applied to the Los Alamos physics archive because the references contain too little information. The project aims to build tools to supplement the results produced by Citeseer for other archives.
The importance of stage 3 is that richer citation databases can provide users with useful information beyond linking, effectively a highly automated version of Garfield's famous citation analyses:

which are the most frequently cited papers?
what papers refer to the current paper (forward linking)?
which papers co-reference the same papers, and to what extent? (identifying similar research interests)
in what context is a paper cited? (i.e. citation context)
which journals are the most popular?
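The first two of these questions reduce to simple queries over a citation table. The sketch below assumes a hypothetical table of (citing_id, cited_id) pairs with invented sample data; it is meant only to show how such analyses become automatic once stage 3 is done.

```python
import sqlite3

# Hypothetical citation links: paper 3 is cited by papers 1, 2 and 5.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE citation (citing_id INTEGER, cited_id INTEGER)")
conn.executemany(
    "INSERT INTO citation VALUES (?, ?)",
    [(1, 3), (2, 3), (2, 4), (5, 3)],
)

# Which are the most frequently cited papers?
most_cited = conn.execute(
    "SELECT cited_id, COUNT(*) AS n FROM citation "
    "GROUP BY cited_id ORDER BY n DESC"
).fetchall()  # paper 3 first, with three citations

# Forward linking: which papers refer to paper 3?
forward = [row[0] for row in conn.execute(
    "SELECT citing_id FROM citation WHERE cited_id = ?", (3,)
)]
```

Co-citation and context questions need only slightly more elaborate queries (a self-join on `citing_id`, or a lookup of the stored context column).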
In a preliminary implementation of the linking model, where one of the objectives was to integrate services provided by some of the linking tools described above, a successfully linked citation directs the user to an intermediate page offering a choice: either download the text from the archive or look up some contextual information on the citation. In this example the links in the original document are added by the DLS, and the intermediate page is produced from an SFX-like database which maintains some knowledge of the user's privileges and can offer all versions of the cited paper that are accessible to the user. In this case only the archive versions (abstract, and link-enhanced and authors' original full texts) are available. In principle, if the user or a library subscribes to the journal in which the cited paper was published, that version could be linked from SFX too, as could other versions of the paper in abstracting services, for example. The different stages of retrieval are shown in Figures 2-4.
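The logic behind that intermediate page can be sketched as a simple resolver: given what is known of the user's privileges, list the versions of the cited paper that can be offered. The access model and version names below are assumptions for illustration, not the SFX implementation.

```python
# Versions always available from the open archive, per the scenario above.
ARCHIVE_VERSIONS = [
    "abstract",
    "link-enhanced full text",
    "author's original full text",
]


def resolve_options(has_journal_subscription):
    """Return the versions of a cited paper this user may retrieve.

    The single boolean privilege is a stand-in for whatever knowledge
    of user or library subscriptions the linking database maintains.
    """
    options = list(ARCHIVE_VERSIONS)  # archive versions are always offered
    if has_journal_subscription:
        # a subscriber can additionally be linked to the publisher's copy
        options.append("publisher's journal version")
    return options
```

The point of routing through such a resolver, rather than hard-wiring one link per citation, is that the same linked document can serve users with different privileges.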
