The Web (the + web)

Kinds of The Web

  • available on the web
  • on the web


  Selected Abstracts


    Virtual reality simulations in Web-based science education

    COMPUTER APPLICATIONS IN ENGINEERING EDUCATION, Issue 1 2002
    Young-Suk Shin
    Abstract This article presents the educational possibilities of Web-based science education using a desktop virtual reality (VR) system. A Web site devoted to science education for middle school students has been designed and developed in the areas of the earth sciences: meteorology, geophysics, geology, oceanography, and astronomy. Learners can set the pace of their lessons themselves, using learning content matched to their level, and they can experiment in real time with the concepts they have learned by interacting with the VR environments we provide. A VR simulation program was developed and evaluated with a questionnaire given to learners after they had explored it freely on the Web. This study shows that Web-based science education using VR can be used effectively as a virtual class. Given the rapid development of VR technology and its falling cost, this approach should support even more immersive educational environments in the near future. © 2002 Wiley Periodicals, Inc. Comput Appl Eng Educ 10: 18–25, 2002; Published online in Wiley InterScience (www.interscience.wiley.com); DOI 10.1002/cae.10014 [source]


    SVG Linearization and Accessibility

    COMPUTER GRAPHICS FORUM, Issue 4 2002
    Ivan Herman
    Abstract The usage of SVG (Scalable Vector Graphics) creates new possibilities as well as new challenges for the accessibility of Web sites. This paper presents a metadata vocabulary to describe the information content of an SVG file geared towards accessibility. When used with a suitable tool, this metadata description can help in generating a textual ("linear") version of the content, which can be used by users with disabilities or with non-visual devices. Although this paper concentrates on SVG, i.e. on graphics on the Web, the metadata approach and vocabulary presented below can be applied in relation to other technologies, too. Indeed, accessibility issues have a much wider significance, and have an effect on areas like CAD, cartography, or information visualization. Hence, the experiences of the work presented below may also be useful for practitioners in other areas. ACM CCS: I.3.4 Graphics Utilities: Graphics Packages; I.3.6 Methodology and Techniques: Graphics data structures and data types, Standards; K.4.2 Social Issues: Assistive technologies for persons with disabilities [source]


    A Web page that provides map-based interfaces for VRML/X3D content

    ELECTRONICS & COMMUNICATIONS IN JAPAN, Issue 2 2009
    Yoshihiro Miyake
    Abstract An electronic map is very useful for navigation in VRML/X3D virtual environments, and various map-based interfaces have been developed. However, they lack generality because each was developed separately for individual VRML/X3D content, so users must learn a different interface for each piece of content. We have therefore developed a Web page that provides a common map-based interface for VRML/X3D content on the Web. Users access VRML/X3D content via this Web page, which automatically generates a simplified map by analyzing the scene graph of the downloaded content and embeds the mechanism that links the virtual world and the map. An avatar is automatically created and added to the map, and the user and its avatar are bidirectionally linked. In the simplified map, obstructive objects are removed and the remaining objects are replaced by base boxes. This paper proposes the architecture of the Web page and the method for generating simplified maps. Finally, an experimental system is developed to show the improvement in frame rates achieved by simplifying the map. © 2009 Wiley Periodicals, Inc. Electron Comm Jpn, 92(2): 28–37, 2009; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ecj.10017 [source]
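
    The map-generation step is easy to illustrate. The sketch below is a hypothetical, much-simplified version of this kind of processing: it walks a toy scene graph, drops objects flagged as obstructive, and replaces the remaining leaf objects with their top-down bounding rectangles ("base boxes"). The Node structure, the flag, and the scene are invented for the example and are not the authors' code.

        from dataclasses import dataclass

        @dataclass
        class Node:
            name: str
            min_xyz: tuple        # axis-aligned bounding-box corner (x, y, z)
            max_xyz: tuple
            obstructive: bool = False
            children: list = None

        def base_boxes(node, out):
            """Collect top-down (x, z) rectangles for non-obstructive leaves."""
            if node.obstructive:
                return out                      # removed from the simplified map
            if not node.children:
                (x0, _, z0), (x1, _, z1) = node.min_xyz, node.max_xyz
                out.append((node.name, (x0, z0), (x1, z1)))
            else:
                for child in node.children:
                    base_boxes(child, out)
            return out

        scene = Node("root", (0, 0, 0), (20, 5, 20), children=[
            Node("house", (2, 0, 2), (8, 4, 8)),
            Node("fence", (0, 0, 9), (20, 1, 10), obstructive=True),
            Node("tower", (12, 0, 12), (15, 5, 15)),
        ])
        for name, lo, hi in base_boxes(scene, []):
            print(f"{name}: rectangle {lo} - {hi}")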


    User profiling on the Web based on deep knowledge and sequential questioning

    EXPERT SYSTEMS, Issue 1 2006
    Silvano Mussi
    Abstract: User profiling on the Web is a topic that has attracted a great number of technological approaches and applications. In most user profiling approaches the website learns profiles from data implicitly acquired from user behaviours, i.e. observing the behaviours of users with a statistically significant number of accesses. This paper presents an alternative approach. In this approach the website explicitly acquires data from users, user interests are represented in a Bayesian network, and user profiles are enriched and refined over time. The profile enrichment is achieved through a sequential asking algorithm based on value-of-information theory using the Shannon entropy concept. However, what mostly characterizes the approach is the fact that the user is involved in a collaborative process of profile building. The approach has been tried out for over a year in a real application. On the basis of the experimental results the approach turns out to be particularly suitable for applications where the website is strongly based on deep domain knowledge (as is the case, for example, for scientific websites) and has a community of users that share the same domain knowledge of the website and produce a 'low' number of accesses ('low' compared to the high number of accesses of a typical commercial website). After presenting the technical aspects of the approach, we discuss the underlying ideas in the light of the experimental results and the literature on human–computer interaction and user profiling. [source]
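
    As a rough illustration of entropy-driven sequential questioning, the sketch below greedily asks about whichever interest variable is currently most uncertain (highest Shannon entropy). This is a simplified stand-in for the paper's value-of-information computation over a Bayesian network; the topics, probabilities, and the clamping update are invented for the example.

        import math

        def entropy(p):
            """Shannon entropy of a Bernoulli variable with P(interested) = p."""
            if p in (0.0, 1.0):
                return 0.0
            return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

        # Current belief that the user is interested in each topic.
        beliefs = {"genomics": 0.5, "proteomics": 0.9, "imaging": 0.4}

        def next_question(beliefs):
            """Ask about the topic whose answer removes the most uncertainty."""
            return max(beliefs, key=lambda t: entropy(beliefs[t]))

        def record_answer(beliefs, topic, interested):
            # A real system would propagate evidence through the Bayesian
            # network; here we simply clamp the queried variable.
            beliefs[topic] = 1.0 if interested else 0.0

        topic = next_question(beliefs)       # -> "genomics" (entropy 1.0)
        record_answer(beliefs, topic, True)
        print(next_question(beliefs))        # next most uncertain: "imaging"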


    A metagenetic algorithm for information filtering and collection from the World Wide Web

    EXPERT SYSTEMS, Issue 2 2001
    Z.N. Zacharis
    This paper describes the implementation of evolutionary techniques for information filtering and collection from the World Wide Web. We consider the problem of building intelligent agents to facilitate a person's search for information on the Web. An intelligent agent has been developed that uses a metagenetic algorithm to collect and recommend Web pages that will be interesting to the user. The user's feedback on the agent's recommendations drives the learning process that adapts the user's profile to his/her interests. The software agent utilizes the metagenetic algorithm to explore the search space of user interests. Experimental results are presented to demonstrate the suitability of the metagenetic algorithm approach on the Web. [source]
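
    A metagenetic algorithm layers one genetic algorithm over another: an outer level searches the parameter space (for example, mutation and crossover rates) under which an inner GA evolves the actual solutions, here keyword-weight profiles scored against user feedback. The sketch below is a minimal version under invented fitness and encoding assumptions, and for brevity its outer level only selects among random parameter settings rather than evolving them; it is not the agent described in the paper.

        import random

        KEYWORDS = ["python", "genetic", "web", "agent", "filter"]
        LIKED = {"python": 1.0, "web": 0.8}   # stand-in for user feedback

        def fitness(profile):
            """How well a keyword-weight profile matches the user's feedback."""
            return -sum((profile[k] - LIKED.get(k, 0.0)) ** 2 for k in KEYWORDS)

        def inner_ga(mut_rate, cx_rate, gens=30, size=20):
            """Inner GA: evolve keyword-weight profiles under given rates."""
            pop = [{k: random.random() for k in KEYWORDS} for _ in range(size)]
            for _ in range(gens):
                pop.sort(key=fitness, reverse=True)
                parents = pop[: size // 2]
                children = []
                while len(children) < size - len(parents):
                    a, b = random.sample(parents, 2)
                    child = {k: (a[k] if random.random() < cx_rate else b[k])
                             for k in KEYWORDS}          # per-gene crossover
                    for k in KEYWORDS:                   # mutation
                        if random.random() < mut_rate:
                            child[k] = random.random()
                    children.append(child)
                pop = parents + children
            return max(pop, key=fitness)

        # Outer (meta) level: search over the inner GA's own parameters.
        param_pop = [(random.uniform(0.01, 0.3), random.uniform(0.3, 0.9))
                     for _ in range(6)]
        best_params = max(param_pop, key=lambda p: fitness(inner_ga(*p)))
        print("best (mutation, crossover) rates:", best_params)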


    Teaching & Learning Guide for: The Origins of English Puritanism

    HISTORY COMPASS (ELECTRONIC), Issue 4 2007
    Karl Gunther
    Author's Introduction This essay makes the familiar observation that when one part of an historiography changes, so must other parts. Here the author observes that the phenomenon known as puritanism has dramatically changed meanings over the past quarter century, though the change has focused on the Elizabethan and early Stuart periods. He asks that we consider the impact of that change on the earlier period, when puritanism in England had its origins.
    Focus Questions
    1. Why is the author unable to posit an answer to his question?
    2. If new study of the origins of puritanism were to reveal that it was not a mainstream Calvinist movement, but a radical critique of the Henrician and early Elizabethan church, how would that affect the new orthodoxy in Puritan studies?
    Author Recommends
    * A. G. Dickens, The English Reformation (Batsford, 1989). The starting place for all modern discussions of the English Reformation and the origins of both conservative and radical protestantism in England. Dickens's view is that the reformation was a mixture of German ideas, English attitudes, and royal leadership.
    * Eamon Duffy, The Stripping of the Altars: Traditional Religion in England c.1400–1580 (Yale University Press, 2005). What was it that the Reformation reformed? In order to understand early English protestantism, one needs to see it within the context of Catholicism. Eamon Duffy rejects the narrative of the Catholic church told by Protestant reformers and demonstrates the ruthlessness of the reformation.
    * Ethan Shagan, Popular Politics and the English Reformation (Cambridge University Press, 2003). Shagan asks the question: how is a conservative population energized to undertake the overthrow of their customs and beliefs? He too is centrally concerned with the issue of how radical the English Reformation was.
    * Brad Gregory, Salvation at Stake: Christian Martyrdom in Early Modern Europe (Harvard University Press, 1999). Nothing better expressed the radicalism of religious belief than the dual process of martyrdom: the willingness of the established religion to make martyrs of its enemies and of dissidents to be martyrs to their cause. Gregory explores this phenomenon across the confessional divide and comes to surprising conclusions about similarities and differences.
    Online Materials
    1. Puritan Studies on the Web http://puritanism.online.fr A site of resources for studies of Puritanism, this contains a large number of primary sources and links to other source sites. The link to the English Reformation is particularly useful.
    2. The Royal Historical Society Bibliography http://www.rhs.ac.uk/bibl/dataset.asp The bibliography of the Royal Historical Society contains a complete listing of articles and books on all aspects of British history. Subject searches for Puritanism or the English Reformation will yield hundreds of works to choose from.
    [source]


    Assessing and managing the benefits of enterprise systems: the business manager's perspective

    INFORMATION SYSTEMS JOURNAL, Issue 4 2002
    Shari Shang
    Abstract. This paper focuses on the benefits that organizations may achieve from their investment in enterprise systems (ES). It proposes an ES benefit framework for summarizing benefits in the years after ES implementation. Based on an analysis of the features of enterprise systems, on the literature on information technology (IT) value, on data from 233 enterprise systems vendor-reported stories published on the Web and on interviews with managers of 34 organizations using ES, the framework provides a detailed list of benefits that have reportedly been acquired through ES implementation. This list of benefits is consolidated into five benefits dimensions: operational, managerial, strategic, IT infrastructure and organizational, and illustrated using perceived net benefit flow (PNBF) graphs. In a detailed example, the paper shows how the framework has been applied to the identification of benefits in a longitudinal case study of four organizations. [source]


    Estimating and eliminating redundant data transfers over the web: a fragment based approach

    INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 2 2005
    Christos Bouras
    Abstract Redundant data transfers over the Web can be mainly attributed to the repeated transfer of unchanged data. Web caches and Web proxies are some of the solutions that have been proposed to deal with this issue. In this paper we focus on the efficient estimation and reduction of redundant data transfers over the Web. We first prove that a vast amount of redundant data is transferred in Web pages that are considered to carry fresh data. We show this by following an approach based on Web page fragmentation and manipulation. Web pages are broken down into fragments based on specific criteria. We then deal with these fragments as independent constructors of the Web page and study their change patterns both independently and in the context of the whole Web page. After the fragmentation process, we propose solutions for dealing with redundant data transfers. This paper builds on our previous work on 'Web Components' as well as on related work by other researchers. It utilises a proxy-based client/server architecture and imposes changes on the algorithms executed on the proxy server and on clients. We show that our proposed solution can considerably reduce the amount of redundant data transferred on the Web. Copyright © 2004 John Wiley & Sons, Ltd. [source]
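
    The core measurement is straightforward to sketch: split a page into fragments, hash each one, and count how many bytes of a supposedly fresh page already appeared in the previous version. The fragmentation criterion below (splitting at block-level HTML tags with a crude regex) and all names are illustrative assumptions, not the paper's algorithm.

        import hashlib
        import re

        def fragments(html):
            """Crudely split a page into fragments at block-level tag starts."""
            parts = re.split(r"(?i)(?=<(?:div|table|p|ul|h[1-6])\b)", html)
            return [p for p in parts if p.strip()]

        def redundant_bytes(old_html, new_html):
            """Bytes of new_html whose fragments already appeared in old_html."""
            seen = {hashlib.sha1(f.encode()).hexdigest()
                    for f in fragments(old_html)}
            return sum(len(f.encode()) for f in fragments(new_html)
                       if hashlib.sha1(f.encode()).hexdigest() in seen)

        old = "<div>header</div><p>news of Monday</p><div>footer</div>"
        new = "<div>header</div><p>news of Tuesday</p><div>footer</div>"
        ratio = redundant_bytes(old, new) / len(new.encode())
        print(f"redundant share of the 'fresh' page: {ratio:.0%}")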


    Applying aggregation operators for information access systems: An application in digital libraries

    INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 12 2008
    Enrique Herrera-Viedma
    Nowadays, access to information on the Web is a central problem in the computer science community. Any major advance in the field of information access on the Web requires the collaboration of different methodologies and research areas. In this paper, the role that aggregation operators play in information access on the Web is analyzed. We present some Web methodologies, such as search engines, recommender systems, and Web quality evaluation models, and analyze the way aggregation operators contribute to the success of their activities. We also show an application of aggregation operators in digital libraries. In particular, we introduce a Web information system for analyzing the quality of digital libraries that implements an extensive panel of aggregation operators to obtain the quality assessments. © 2008 Wiley Periodicals, Inc. [source]
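
    A common family of aggregation operators in this line of work is the OWA (ordered weighted averaging) operator, which attaches weights to the rank positions of the scores rather than to the criteria themselves. Below is a minimal generic OWA implementation with an invented digital-library example; the criteria and weights are not taken from the paper.

        def owa(scores, weights):
            """Ordered weighted average: weights apply to sorted scores."""
            assert abs(sum(weights) - 1.0) < 1e-9 and len(scores) == len(weights)
            ranked = sorted(scores, reverse=True)
            return sum(w * s for w, s in zip(weights, ranked))

        # Hypothetical quality ratings of a digital library on four criteria,
        # each in [0, 1]: usability, content, navigation, accessibility.
        ratings = [0.9, 0.6, 0.8, 0.5]

        # "Most criteria should be good": emphasize the middle rank positions.
        weights = [0.1, 0.4, 0.4, 0.1]
        print(owa(ratings, weights))   # 0.70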


    Relevance in systems having a fuzzy-set-based semantics

    INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 4 2007
    Ronald R. Yager
    Future automated question answering systems will typically involve the use of local knowledge available on the users' systems as well as knowledge retrieved from the Web. The determination of what information we should seek out on the Web must be directed by its potential value or relevance to our objective in the light of what knowledge is already available. Here we begin to provide a formal quantification of the concept of relevance and related ideas for systems that use fuzzy-set-based representations to provide the underlying semantics. We also introduce the idea of ease of extraction to quantify the ability to extract relevant information from complex relationships. © 2007 Wiley Periodicals, Inc. Int J Intell Syst 22: 385–396, 2007. [source]
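
    A fuzzy-set notion of relevance can be made concrete with a possibility-style overlap measure: how compatible is a retrieved piece of information (a fuzzy set over some domain) with the fuzzy set describing the objective? The sketch below applies the standard max-min possibility measure to a toy price domain; the sets, and reading the result as relevance, are illustrative assumptions rather than Yager's formal definitions.

        def possibility(a, b):
            """Max-min overlap of two fuzzy sets given as {element: membership}."""
            common = set(a) | set(b)
            return max(min(a.get(x, 0.0), b.get(x, 0.0)) for x in common)

        # Fuzzy objective: "a cheap hotel" (membership by price bucket).
        cheap = {50: 1.0, 100: 0.7, 150: 0.3, 200: 0.0}

        # Fuzzy knowledge retrieved from the Web: "hotel X costs about 100".
        about_100 = {50: 0.2, 100: 1.0, 150: 0.2}

        # High overlap -> the retrieved fact is relevant to the objective.
        print(possibility(cheap, about_100))   # 0.7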


    A choice prediction competition: Choices from experience and from description

    JOURNAL OF BEHAVIORAL DECISION MAKING, Issue 1 2010
    Ido Erev
    Abstract Erev, Ert, and Roth organized three choice prediction competitions focused on three related choice tasks: one-shot decisions from description (decisions under risk), one-shot decisions from experience, and repeated decisions from experience. Each competition was based on two experimental datasets: an estimation dataset and a competition dataset. The studies that generated the two datasets used the same methods and subject pool, and examined decision problems randomly selected from the same distribution. After collecting the experimental data to be used for estimation, the organizers posted them on the Web, together with their fit with several baseline models, and challenged other researchers to compete to predict the results of the second (competition) set of experimental sessions. Fourteen teams responded to the challenge: the last seven authors of this paper are members of the winning teams. The results highlight the robustness of the difference between decisions from description and decisions from experience. The best predictions of decisions from description were obtained with a stochastic variant of prospect theory assuming that sensitivity to the weighted values decreases with the distance between the cumulative payoff functions. The best predictions of decisions from experience were obtained with models that assume reliance on small samples. Merits and limitations of the competition method are discussed. Copyright © 2009 John Wiley & Sons, Ltd. [source]
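
    The reliance-on-small-samples idea is simple to state in code: when choosing from experience, assume the decision maker recalls only a few past outcomes of each option and picks the option with the better sample mean. The simulation below is a generic illustration of that model class with made-up payoff distributions, not the winning competition entry.

        import random

        def sample_choice(history_a, history_b, k=4, rng=random):
            """Choose A or B from the means of k randomly recalled outcomes."""
            draw = lambda h: [rng.choice(h) for _ in range(k)]
            mean = lambda xs: sum(xs) / len(xs)
            return "A" if mean(draw(history_a)) >= mean(draw(history_b)) else "B"

        history_a = [4] * 100                 # A: 4 with certainty (EV 4.0)
        history_b = [32] * 15 + [0] * 85      # B: 32 with prob 0.15 (EV 4.8)

        rng = random.Random(0)
        picks = [sample_choice(history_a, history_b, rng=rng)
                 for _ in range(10000)]
        # Small samples usually miss B's rare big payoff, so the EV-maximizing
        # option B is chosen far less often than its expected value suggests.
        print(picks.count("A") / len(picks))  # ~0.52 (= 0.85 ** 4)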


    In Google We Trust: Users' Decisions on Rank, Position, and Relevance

    JOURNAL OF COMPUTER-MEDIATED COMMUNICATION, Issue 3 2007
    Bing Pan
    An eye tracking experiment revealed that college student users have substantial trust in Google's ability to rank results by their true relevance to the query. When the participants selected a link to follow from Google's result pages, their decisions were strongly biased towards links higher in position even if the abstracts themselves were less relevant. While the participants reacted to artificially reduced retrieval quality by greater scrutiny, they failed to achieve the same success rate. This demonstrated trust in Google has implications for the search engine's tremendous potential influence on culture, society, and user traffic on the Web. [source]


    Searching for Culture – High and Low

    JOURNAL OF COMPUTER-MEDIATED COMMUNICATION, Issue 3 2007
    Jennifer Kayahara
    This article examines the link between finding out about cultural activities on the Web and finding out through other people. Using data from interviews with Torontonians, we show that people first obtain cultural information from interpersonal ties or other offline sources and only then turn to the Web to amplify this information. The decisions about what information to seek from which media can be evaluated in terms of a uses and gratifications approach; the main gratifications identified include efficiency and the availability of up-to-date information. Our findings also have implications for the model of the traditional two-step flow of communication. We suggest the existence of new steps, whereby people receive recommendations from their interpersonal ties, gather information about these recommendations online, take this information back to their ties, and go back to the Web to check the new information that their ties have provided them. [source]


    The WikiID: An Alternative Approach to the Body of Knowledge

    JOURNAL OF INTERIOR DESIGN, Issue 2 2009
    Hannah Rose Mendoza M.F.A.
    ABSTRACT A discussion of the locus of design knowledge is currently underway as well as a search for clear boundaries defined by a formal Body of Knowledge (BoK). Most attempts to define a BoK involve the creation of "jurisdictional boundaries of knowledge" that "allow those who possess this knowledge to claim authority over its application" (Guerin & Thompson, 2004, p. 1). This claim is attractive but such control may no longer be an option in the Internet Age, when even the call for the discussion of the BoK definition process is on the Web. Marshall-Baker (2005) argued that "the moment knowledge is bordered it is no longer knowledge" (p. xiv). Whereas data and information are easily captured and generalized, knowledge is specific to users and their evolving understandings, implying purposeful application over time. This paper explores knowledge as process transcending boundaries and seeks to answer not "where" the locus lies but rather "what" that locus could be. Using a feminist framework, I argue that in conjunction with the work done thus far we should move toward the creation of an inclusive model for the BoK. In such a model, the value of the profession is felt as a result of inclusion in and interaction with the knowledge creation process. I propose that the BoK should not be a printed document, but a Web-based organizational system that supports change and innovation. Wikipedia provides this type of inclusive, mutable system, and the same framework could be applied to the creation of a systemic BoK. I call this creation the WikiID. [source]


    Domain–ligand mapping for enzymes

    JOURNAL OF MOLECULAR RECOGNITION, Issue 2 2010
    Matthew Bashton
    Abstract In this paper we provide an overview of our current knowledge of the mapping between small molecule ligands and protein domains. We give an overview of the present data resources available on the Web, which provide information about protein–ligand interactions, as well as discussing our own PROCOGNATE database. We present an update of ligand binding in large protein superfamilies and identify those ligands most frequently utilized by nature. Finally we discuss potential uses for this type of data. Copyright © 2009 John Wiley & Sons, Ltd. [source]


    Information for inspiration: Understanding architects' information seeking and use behaviors to inform design

    JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, Issue 9 2010
    Stephann Makri
    Architectural design projects are heavily reliant on electronic information seeking. However, there have been few studies on how architects look for and use information on the Web. We examined the electronic information behavior of 9 postgraduate architectural design and urban design students. We observed them undertake a self-chosen, naturalistic information task related to one of their design projects and found that although the architectural students performed many of the same interactive information behaviors as academics and practitioners in other disciplines, they also performed behaviors reflective of the nature of their domain. These included exploring and encountering information (in addition to searching and browsing for it) and visualizing/appropriating information. The observations also highlighted the importance of information use behaviors (such as editing and recording) and communication behaviors (such as sharing and distributing), as well as the importance of multimedia materials, particularly images, for architectural design projects. A key overarching theme was that inspiration was found to be both an important driver for and a potential outcome of information work in the architecture domain, suggesting the need to design electronic information tools for architects that encourage and foster creativity. We make suggestions for the design of such tools based on our findings. [source]


    Scatter matters: Regularities and implications for the scatter of healthcare information on the Web

    JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, Issue 4 2010
    Suresh K. Bhavnani
    Abstract Despite the development of huge healthcare Web sites and powerful search engines, many searchers end their searches prematurely with incomplete information. Recent studies suggest that users often retrieve incomplete information because of the complex scatter of relevant facts about a topic across Web pages. However, little is understood about regularities underlying such information scatter. To probe regularities within the scatter of facts across Web pages, this article presents the results of two analyses: (a) a cluster analysis of Web pages that reveals the existence of three page clusters that vary in information density and (b) a content analysis that suggests the role each of the above-mentioned page clusters play in providing comprehensive information. These results provide implications for the design of Web sites, search tools, and training to help users find comprehensive information about a topic and for a hypothesis describing the underlying mechanisms causing the scatter. We conclude by briefly discussing how the analysis of information scatter, at the granularity of facts, complements existing theories of information-seeking behavior. [source]


    Consumer health information on the Web: The relationship of visual design and perceptions of credibility

    JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, Issue 1 2010
    David Robins
    Consumer health information has proliferated on the Web. However, because virtually anyone can publish this type of information on the Web, consumers cannot always rely on traditional credibility cues such as reputation of a journal. Instead, they must rely on a variety of cues, including visual presentation, to determine the veracity of information. This study is an examination of the relationship of people's visual design preferences to judgments of credibility of information on consumer health information sites. Subjects were asked to rate their preferences for visual designs of 31 health information sites after a very brief viewing. The sites were then reordered and subjects rated them according to the extent to which they thought the information on the sites was credible. Visual design judgments bore a statistically significant similarity to credibility ratings. Sites with known brands were also highly rated for both credibility and visual design. Theoretical implications are discussed. [source]


    Perspectives on social tagging

    JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, Issue 12 2009
    Ying Ding
    Social tagging is one of the major phenomena transforming the World Wide Web from a static platform into an actively shared information space. This paper addresses various aspects of social tagging, including different views on the nature of social tagging, how to make use of social tags, and how to bridge social tagging with other Web functionalities; it discusses the use of facets to facilitate browsing and searching of tagging data; and it presents an analogy between bibliometrics and tagometrics, arguing that established bibliometric methodologies can be applied to analyze tagging behavior on the Web. Based on the Upper Tag Ontology (UTO), a Web crawler was built to harvest tag data from Delicious, Flickr, and YouTube in September 2007. In total, 1.8 million objects, including bookmarks, photos, and videos, 3.1 million taggers, and 12.1 million tags were collected and analyzed. Some tagging patterns and variations are identified and discussed. [source]
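
    The bibliometrics-to-tagometrics analogy invites standard measures such as a rank-frequency (Zipf-style) view of tag usage. The few lines below compute one over toy (tagger, object, tag) records; the data are invented, whereas the study worked from millions of harvested Delicious, Flickr, and YouTube records.

        from collections import Counter

        # Toy (tagger, object, tag) records; a real crawl yields millions.
        records = [("u1", "o1", "python"), ("u2", "o1", "python"),
                   ("u1", "o2", "web"), ("u3", "o1", "tutorial"),
                   ("u2", "o2", "python"), ("u3", "o2", "web")]

        freq = Counter(tag for _, _, tag in records)
        for rank, (tag, n) in enumerate(freq.most_common(), start=1):
            print(rank, tag, n)   # rank-frequency table: 1 python 3, 2 web 2, ...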


    A method for measuring the evolution of a topic on the Web: The case of "informetrics"

    JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, Issue 9 2009
    Judit Bar-Ilan
    The universe of information has been enriched by the creation of the World Wide Web, which has become an indispensable source for research. Since this source is growing at an enormous speed, an in-depth look at its performance and a method for its evaluation have become necessary; however, growth is not the only process that influences the evolution of the Web. During their lifetime, Web pages may change their content and links to/from other Web pages, be duplicated or moved to a different URL, be removed from the Web either temporarily or permanently, and be temporarily inaccessible due to server and/or communication failures. To obtain a better understanding of these processes, we developed a method for tracking topics on the Web for long periods of time, without the need to employ a crawler and relying only on publicly available resources. The multiple data-collection methods used allow us to discover new pages related to the topic, to identify changes to existing pages, and to detect previously existing pages that have been removed or whose content is no longer relevant to the specified topic. The method is demonstrated by monitoring Web pages that contain the term "informetrics" for a period of 8 years. The data-collection method also allowed us to analyze the dynamic changes in search engine coverage, illustrated here on Google, the search engine used for the longest period of time for data collection in this project. [source]
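
    The crawler-free tracking loop reduces to simple bookkeeping: gather the topic's URLs from publicly available sources, fetch each page, and compare content hashes against the previous snapshot to classify pages as new, changed, removed, or unchanged. The sketch below shows that bookkeeping over two in-memory snapshots; the URLs and snapshot layout are placeholders, not the authors' instrumentation.

        import hashlib

        def classify(previous, current):
            """Compare {url: content} snapshots taken at two points in time."""
            digest = lambda text: hashlib.sha1(text.encode()).hexdigest()
            report = {"new": [], "changed": [], "removed": [], "unchanged": []}
            for url, text in current.items():
                if url not in previous:
                    report["new"].append(url)
                elif digest(text) != digest(previous[url]):
                    report["changed"].append(url)
                else:
                    report["unchanged"].append(url)
            report["removed"] = [u for u in previous if u not in current]
            return report

        snap_2008 = {"a.org/informetrics": "intro", "b.edu/paper": "v1"}
        snap_2009 = {"b.edu/paper": "v2", "c.net/new-page": "hello"}
        print(classify(snap_2008, snap_2009))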


    Uncovering the dark Web: A case study of Jihad on the Web

    JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, Issue 8 2008
    Hsinchun Chen
    While the Web has become a worldwide platform for communication, terrorists share their ideology and communicate with members on the "Dark Web", the reverse side of the Web used by terrorists. Currently, the problems of information overload and the difficulty of obtaining a comprehensive picture of terrorist activities hinder effective and efficient analysis of terrorist information on the Web. To improve understanding of terrorist activities, we have developed a novel methodology for collecting and analyzing Dark Web information. The methodology incorporates information collection, analysis, and visualization techniques, and exploits various Web information sources. We applied it to collecting and analyzing information from 39 Jihad Web sites and developed visualizations of their site contents, relationships, and activity levels. An expert evaluation showed that the methodology is very useful and promising, with high potential to assist in the investigation and understanding of terrorist activities by producing results that could help guide both policymaking and intelligence research. [source]


    Making sense of credibility on the Web: Models for evaluating online information and recommendations for future research

    JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, Issue 13 2007
    Miriam J. Metzger
    This article summarizes much of what is known from the communication and information literacy fields about the skills that Internet users need to assess the credibility of online information. The article reviews current recommendations for credibility assessment and empirical research on how users determine the credibility of Internet information, and describes several cognitive models of online information evaluation. Based on the literature review and a critique of existing models of credibility assessment, recommendations for future online credibility education and practice are provided to assist users in locating reliable information online. The article concludes by offering ideas for research and theory development on this topic in an effort to advance knowledge in the area of credibility assessment of Internet-based information. [source]


    Mining related queries from Web search engine query logs using an improved association rule mining model

    JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, Issue 12 2007
    Xiaodong Shi
    With the overwhelming volume of information, the task of finding relevant information on a given topic on the Web is becoming increasingly difficult. Web search engines have hence become one of the most popular solutions available on the Web. However, it has never been easy for novice users to organize and represent their information needs using simple queries. Users have to keep modifying their input queries until they get the expected results. Therefore, it is often desirable for search engines to give suggestions on related queries to users. Besides, by identifying those related queries, search engines can potentially perform optimizations on their systems, such as query expansion and file indexing. In this work we propose a method that suggests a list of related queries given an initial input query. The related queries are based on the query log of previously submitted queries by human users, which can be identified using an enhanced model of association rules. Users can utilize the suggested related queries to tune or redirect the search process. Our method not only discovers the related queries, but also ranks them according to the degree of their relatedness. Unlike many other rival techniques, it also performs reasonably well on less frequent input queries. [source]
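
    At its simplest, mining related queries from a log means treating each user session as a basket of queries and ranking co-occurring queries by support and confidence. The sketch below implements that plain baseline on invented sessions; the paper's enhanced association rule model goes beyond it.

        from collections import Counter
        from itertools import permutations

        sessions = [["cheap flights", "hotel deals"],
                    ["cheap flights", "car rental"],
                    ["cheap flights", "hotel deals", "travel insurance"],
                    ["hotel deals", "travel insurance"]]

        pair_count, query_count = Counter(), Counter()
        for s in sessions:
            query_count.update(set(s))
            pair_count.update(permutations(set(s), 2))

        def related(query, min_support=2):
            """Rank queries co-occurring with `query` by confidence."""
            cands = [(b, n / query_count[query])
                     for (a, b), n in pair_count.items()
                     if a == query and n >= min_support]
            return sorted(cands, key=lambda x: -x[1])

        print(related("cheap flights"))
        # [('hotel deals', 0.67)]: 2 of the 3 sessions containing the query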


    Data cleansing for Web information retrieval using query independent features

    JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, Issue 12 2007
    Yiqun Liu
    Understanding what kinds of Web pages are the most useful for Web search engine users is a critical task in Web information retrieval (IR). Most previous work used hyperlink analysis algorithms to solve this problem. However, little research has focused on query-independent Web data cleansing for Web IR. In this paper, we first provide an analysis of the differences between retrieval target pages and ordinary ones based on more than 30 million Web pages obtained from both the Text Retrieval Conference (TREC) and a widely used Chinese search engine, SOGOU (www.sogou.com). We further propose a learning-based data cleansing algorithm for reducing Web pages that are unlikely to be useful for user requests. We found that there exists a large proportion of low-quality Web pages in both the English and the Chinese Web page corpora, and that retrieval target pages can be identified using query-independent features and cleansing algorithms. The experimental results showed that our algorithm is effective in reducing a large portion of Web pages with a small loss in retrieval target pages. This makes it possible for Web IR tools to meet a large fraction of users' needs with only a small part of the pages on the Web. These results may help Web search engines make better use of their limited storage and computation resources to improve search performance. [source]
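
    The cleansing step can be pictured as a binary classifier over query-independent page features that filters out pages unlikely ever to be retrieval targets. Below is a minimal logistic-regression stand-in trained by gradient descent on fabricated feature vectors; the features, data, and choice of classifier are assumptions for illustration, not the paper's learned model.

        import math

        # Query-independent features: [doc_length_kb, in_links, url_depth]
        pages = [([12.0, 30.0, 1.0], 1),   # 1 = plausible retrieval target
                 ([0.4, 0.0, 6.0], 0),     # 0 = low-quality page
                 ([8.0, 12.0, 2.0], 1),
                 ([0.7, 1.0, 5.0], 0)]

        w, b = [0.0, 0.0, 0.0], 0.0
        sigmoid = lambda z: 1 / (1 + math.exp(-z))

        for _ in range(2000):                      # plain gradient descent
            for x, y in pages:
                p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
                g = p - y                          # gradient of the log-loss
                w = [wi - 0.01 * g * xi for wi, xi in zip(w, x)]
                b -= 0.01 * g

        keep = lambda x: sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) > 0.5
        print(keep([10.0, 20.0, 1.0]), keep([0.5, 0.0, 7.0]))   # True False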


    Information politics on the Web

    JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, Issue 4 2007
    Kevin C. Desouza
    [source]


    "I'm feeling lucky": The role of emotions in seeking information on the Web

    JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, Issue 6 2006
    James Kalbach
    Recent research highlights the potential relevance of emotions in interface design. People can no longer be modeled as purely goal-driven, task-solving agents: they also have affective motivations for their choices and behavior, implying an extended mandate for search design. Absent from current Web design practice, however, is a pattern for emotive criticism and design reflecting these new directions. Further, discussion of emotions and Web design is not limited to visual design or aesthetic appeal: emotions users have as they interact with information also have design implications. The author outlines a framework for understanding users' emotional states as they seek information on the Web. It is inspired largely by Carol Kuhlthau's (1991, 1993, 1999) work in library services, particularly her Information Search Process (ISP), which is adapted here to Web design practice. A staged approach resembling traditional models of information seeking behavior is presented as the basis for creating appropriate search and navigation systems. This user-centered framework is flexible and solution-oriented, enjoys longevity, and considers affective factors. Its aim is a more comprehensive, conceptual analysis of the user's entire information search experience. [source]


    Evidence-based practice in search interface design

    JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, Issue 6 2006
    Barbara M. Wildemuth
    An evidence-based practice approach to search interface design is proposed, with the goal of designing interfaces that adequately support search strategy formulation and reformulation. Relevant findings from studies of information professionals' searching behaviors, end users' searching of bibliographic databases, and search behaviors on the Web are highlighted. Three brief examples are presented to illustrate the ways in which findings from such studies can be used to make decisions about the design of search interfaces. If academic research can be effectively connected with design practice, we can discover which design practices truly are "best practices" and incorporate them into future search interfaces. [source]


    Query expansion behavior within a thesaurus-enhanced search environment: A user-centered evaluation

    JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, Issue 4 2006
    Ali Shiri
    The study reported here investigated the query expansion behavior of end-users interacting with a thesaurus-enhanced search system on the Web. Two groups, namely academic staff and postgraduate students, were recruited into this study. Data were collected from 90 searches performed by 30 users using the OVID interface to the CAB abstracts database. Data-gathering techniques included questionnaires, screen capturing software, and interviews. The results presented here relate to issues of search-topic and search-term characteristics, number and types of expanded queries, usefulness of thesaurus terms, and behavioral differences between academic staff and postgraduate students in their interaction. The key conclusions drawn were that (a) academic staff chose more narrow and synonymous terms than did postgraduate students, who generally selected broader and related terms; (b) topic complexity affected users' interaction with the thesaurus in that complex topics required more query expansion and search term selection; (c) users' prior topic-search experience appeared to have a significant effect on their selection and evaluation of thesaurus terms; (d) in 50% of the searches where additional terms were suggested from the thesaurus, users stated that they had not been aware of the terms at the beginning of the search; this observation was particularly noticeable in the case of postgraduate students. [source]


    Analysis of the query logs of a Web site search engine

    JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, Issue 13 2005
    Michael Chau
    A large number of studies have investigated the transaction log of general-purpose search engines such as Excite and AltaVista, but few studies have reported on the analysis of search logs for search engines that are limited to particular Web sites, namely, Web site search engines. In this article, we report our research on analyzing the search logs of the search engine of the Utah state government Web site. Our results show that some statistics, such as the number of search terms per query, of Web users are the same for general-purpose search engines and Web site search engines, but others, such as the search topics and the terms used, are considerably different. Possible reasons for the differences include the focused domain of Web site search engines and users' different information needs. The findings are useful for Web site developers to improve the performance of their services provided on the Web and for researchers to conduct further research in this area. The analysis also can be applied in e-government research by investigating how information should be delivered to users in government Web sites. [source]
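
    A first pass over a site search log of this kind computes per-query statistics such as the number of terms per query and the most frequent terms. The snippet below does this for a toy log in an assumed "timestamp<TAB>query" layout; the format and entries are invented for illustration.

        from collections import Counter

        log_lines = ["2005-01-03 09:12\ttax forms",
                     "2005-01-03 09:15\tdrivers license renewal",
                     "2005-01-03 09:20\ttax forms 2004"]

        queries = [line.split("\t", 1)[1] for line in log_lines]
        terms_per_query = [len(q.split()) for q in queries]

        print("mean terms/query:", sum(terms_per_query) / len(queries))  # ~2.67
        print("top terms:", Counter(t for q in queries
                                    for t in q.split()).most_common(3))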


    Probabilistic question answering on the Web

    JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, Issue 6 2005
    Dragomir Radev
    Web-based search engines such as Google and NorthernLight return documents that are relevant to a user query, not answers to user questions. We have developed an architecture that augments existing search engines so that they support natural language question answering. The process entails five steps: query modulation, document retrieval, passage extraction, phrase extraction, and answer ranking. In this article, we describe some probabilistic approaches to the last three of these stages. We show how our techniques apply to a number of existing search engines, and we also present results contrasting three different methods for question answering. Our algorithm, probabilistic phrase reranking (PPR), uses proximity and question type features and achieves a total reciprocal document rank of .20 on the TREC8 corpus. Our techniques have been implemented as a Web-accessible system, called NSIR. [source]
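
    The flavor of the phrase-ranking stage can be shown with a proximity score: candidate phrases extracted from retrieved passages are rewarded for sitting close to the query's content words. This toy stand-in covers only a proximity feature in the spirit of PPR (question-type features are omitted), with invented text and scoring.

        def proximity_score(passage, query_terms, candidate):
            """Smaller distance between candidate and query terms -> higher."""
            words = passage.lower().split()
            q_pos = [i for i, w in enumerate(words) if w in query_terms]
            c_pos = [i for i, w in enumerate(words) if w == candidate]
            if not q_pos or not c_pos:
                return 0.0
            gap = min(abs(q - c) for q in q_pos for c in c_pos)
            return 1.0 / (1 + gap)

        passage = ("The capital of Canada is Ottawa , although Toronto "
                   "is the largest city in Canada .")
        query_terms = {"capital", "canada"}
        for cand in ["ottawa", "toronto"]:
            print(cand, round(proximity_score(passage, query_terms, cand), 2))
        # ottawa scores higher: it sits right next to "capital ... Canada"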