Similar Literature: 20 related records
1.
Critical research is becoming increasingly accepted as a valid approach to research in information systems. It is deemed particularly suitable for situations where researchers want to address conspicuous injustice, such as in areas of development or the digital divide. Critical research in information systems (CRIS), I will argue, is a possible approach to some of the ethical problems arising in the context of information and communication technology (ICT). It can be sensitive to the question of culture and is therefore suitable for researching cross-cultural ethical questions in ICT. It is often unclear, however, what exactly critical research stands for and to what extent critical approaches are applicable across cultural boundaries. This paper will address these problems by proposing a definition of critical research as focused on changing the status quo and aiming for emancipation. It will then ask whether different cultures are compatible and comparable, and what role culture plays in research on information systems. The paper will then return to the question of whether the critical intention to emancipate and empower humans is an expression of cultural imperialism, or whether there are valid ways of promoting emancipation across cultural divides.

2.
The paper investigates the ethics of information transparency (henceforth transparency). It argues that transparency is not an ethical principle in itself but a pro-ethical condition for enabling or impairing other ethical practices or principles. A new definition of transparency is offered in order to take into account the dynamics of information production and the differences between data and information. It is then argued that the proposed definition provides a better understanding of what sort of information should be disclosed and what sort of information should be used in order to implement and make effective the ethical practices and principles to which an organisation is committed. The concepts of “heterogeneous organisation” and “autonomous computational artefact” are further defined in order to clarify the ethical implications of the technology used in implementing information transparency. It is argued that explicit ethical designs, which describe how ethical principles are embedded into the practice of software design, would represent valuable information that could be disclosed by organisations in order to support their ethical standing.

3.
Synchronous collaborative information retrieval (SCIR) is concerned with supporting two or more users who search together at the same time in order to satisfy a shared information need. SCIR systems represent a paradigmatic shift in the way we view information retrieval, moving from an individual to a group process, and as such the development of novel IR techniques is needed to support it. In this article we present what we believe are two key concepts for the development of effective SCIR, namely division of labour (DoL) and sharing of knowledge (SoK). Together these concepts enable coordinated SCIR, such that redundancy across group members is reduced whilst each group member benefits from the discoveries of their collaborators. We outline techniques from state-of-the-art SCIR systems which support these two concepts, primarily through the provision of awareness widgets. We then outline some of our own work on system-mediated techniques for division of labour and sharing of knowledge in SCIR. Finally, we conclude with a discussion of possible future trends for these two coordination techniques.
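To make the two coordination concepts concrete, here is a minimal Python sketch, not taken from the article: division of labour is modelled as a round-robin split of a shared ranked list so collaborators do not duplicate effort, and sharing of knowledge as pooling the documents each collaborator has judged relevant. All function and variable names are illustrative assumptions.

```python
# Minimal sketch (not from the article) of the two coordination concepts:
# division of labour = round-robin split of a shared ranked list so collaborators
# do not examine the same documents; sharing of knowledge = pooling the documents
# each collaborator has judged relevant. All names are illustrative.
from itertools import cycle

def divide_labour(ranked_docs: list[str], searchers: list[str]) -> dict[str, list[str]]:
    """Assign each ranked document to exactly one searcher (round robin)."""
    assignment: dict[str, list[str]] = {s: [] for s in searchers}
    for doc, searcher in zip(ranked_docs, cycle(searchers)):
        assignment[searcher].append(doc)
    return assignment

def share_knowledge(feedback: dict[str, set[str]]) -> set[str]:
    """Pool each collaborator's relevant documents so the group's next query
    (e.g., via relevance feedback) benefits from everyone's discoveries."""
    shared: set[str] = set()
    for judged_relevant in feedback.values():
        shared |= judged_relevant
    return shared

ranked = ["d1", "d2", "d3", "d4", "d5", "d6"]
print(divide_labour(ranked, ["alice", "bob"]))   # {'alice': ['d1', 'd3', 'd5'], 'bob': ['d2', 'd4', 'd6']}
print(share_knowledge({"alice": {"d1"}, "bob": {"d4", "d6"}}))
```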

4.
Anonymising technologies are cyber-tools that protect people from online surveillance, hiding who they are, what information they have stored and what websites they are looking at. Whether it is anonymising online activity through ‘TOR’ and its onion routing, 256-bit encryption on communications, or smart-phone auto-deletes, the user’s identity and activity is protected from the watchful eyes of the intelligence community. This represents a clear challenge to intelligence actors, as it denies them access to information that many would argue plays a vital part in locating threats and preventing them from being realised. Moreover, such technology offers more than ordinary information protection: it erects ‘warrant-proof’ spaces, technological black boxes that remain sealed regardless of what some authority deems legitimately searchable, because there are very limited or non-existent means of forcing one’s way in. However, it will be argued here that using such anonymising technology and its extra layer of protection is not only people’s right but ethically mandatory. That is, given the en masse surveillance from both governments and corporations, coupled with people’s limited awareness of and ability to comprehend such data collection, anonymising technology should be built into the fabric of cyberspace to provide a minimal set of protections over people’s information, and in doing so force the intelligence community to develop more targeted forms of data collection.

5.
6.
Corporate dynamic transparency: the new ICT-driven ethics?
The term “corporate transparency” is frequently used in scholarly discussions of business ethics and corporate social responsibility (CSR); however, it remains a volatile and imprecise term, often defined incompletely as “information disclosure” accomplished through standardized reporting. Based on the results of empirical studies of organizational behaviors, this paper identifies a new set of managerial practices based on the use of information and communication technologies (ICT) and particularly Internet-based tools. These practices are resulting in what can be termed “dynamic transparency.” ICT allows for an informational environment characterized by two-way exchange between corporations and their stakeholders, which fosters a more collaborative marketplace. It is proposed that such dynamic information sharing, conducted by means of ICT, drives organizations to display greater openness and accountability, and more transparent operations, which benefit both the corporations and their constituents. One of the most important outcomes that will accrue to consumers and other individuals is the “right to know,” especially about corporate strategies and activities that might directly affect their quality of life. This paper demonstrates that dynamic transparency is more desirable and more effective than the more common “static transparency”, where firms’ information disclosure is one-way, usually in response to government regulation. We present three ethical arguments to justify the implementation of dynamic transparency by business firms, and demonstrate that doing so is related to CSR and serves to augment and complement stakeholder engagement and dialogue. The paper concludes with a summary of the possible limits to and the problems involved in the implementation of dynamic transparency for corporations, and suggests some strategies to counter them.

7.
The proliferation of information and communication technologies (ICTs) into all aspects of life poses unique ethical challenges as our modern societies become increasingly dependent on the flawless operation of these technologies. As we increasingly entrust our privacy, our well-being and our lives to an ever greater number of computers, we need to look more closely at the risks and ethical implications of these developments. By emphasising the vulnerability of software and the practice of professional software developers, we want to make clear the ethical aspects of producing potentially flawed software. This paper outlines some of the vulnerabilities associated with software systems and identifies a number of social and organisational factors affecting software developers and contributing to these vulnerabilities. Scott A. Snook’s theory of practical drift is used as the basis for our analysis. We show that this theory, originally developed to explain the failure of a military organisation, can be used to understand how professional software developers “drift away” from procedures and processes designed to ensure quality and prevent software vulnerability. Based on interviews with software developers in two Norwegian companies, we identify two areas where social factors compel software developers to drift away from a global set of rules constituting software development processes and methods. Issues of pleasure and control, and differences in mental models, contribute to an uncoupling from established practices designed to guarantee the reliability of software and thus diminish its vulnerability.

8.
Statistics are the primary tools for assessing relationships and evaluating study questions. Unfortunately, these tools are often misused, either inadvertently because of ignorance or lack of planning, or conspicuously to achieve a specified result. Data abuses include the incorrect application of statistical tests, lack of transparency and disclosure about decisions that are made, incomplete or incorrect multivariate model building, or exclusion of outliers. Individually, each of these actions may completely invalidate a study, and often studies fall victim to more than one offense. Increasingly, there are tools and guidance for researchers to look to, including the development of an analysis plan and a series of study-specific checklists, in order to prevent or mitigate these offenses.
Keywords: a priori, analytical plan, statistical methods, disclosure, transparency, biostatistics
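As a small illustration of one abuse named above, the following simulated sketch (not from the paper) shows how an ad hoc, post hoc outlier-exclusion rule changes a test's p-value relative to the pre-specified analysis; the data, threshold and seed are arbitrary assumptions.

```python
# Simulated illustration (not from the paper) of one abuse listed above:
# applying an ad hoc, post hoc outlier-exclusion rule changes the p-value
# relative to the pre-specified analysis. Data, threshold and seed are arbitrary.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
a = rng.normal(0.0, 1.0, 30)
b = rng.normal(0.3, 1.0, 30)                 # small true group difference

p_planned = stats.ttest_ind(a, b).pvalue     # pre-specified analysis: keep all observations

keep = np.abs(a - a.mean()) < 2 * a.std()    # post hoc rule applied to one arm only
p_trimmed = stats.ttest_ind(a[keep], b).pvalue

print(f"p-value, pre-specified analysis: {p_planned:.3f}")
print(f"p-value after ad hoc trimming:   {p_trimmed:.3f}")
```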

9.
3D Dynamic Presentations Based on a Game Engine
Presentations are an important tool for communicating ideas between people; building them with powerful tools lets an audience grasp the presenter's concepts, ideas and products in the shortest possible time. However, as the concepts presenters put forward grow more complex and the content they express more diverse, traditional presentations can no longer meet the requirements for expressing these ideas, and better, more concrete and more vivid tools are needed to express them fully. This paper therefore uses a game engine to implement a 3D dynamic presentation that is entirely different from traditional presentations: in addition to the general features of a traditional presentation, it is more vivid and expressive, and it supports real-time interaction with the audience.

10.
This paper provides a general philosophical groundwork for the theoretical and applied normative evaluation of information generally, and digital information specifically, in relation to the good life. The overall aim of the paper is to address the question of how Information Ethics, and computer ethics more generally, can be expanded to include more centrally the issue of how, and to what extent, information relates and contributes to the quality of life or the good life, for individuals and for society. To answer that question, the paper explores, as a theoretical groundwork for further research, the concept of wisdom understood as a type of meta-knowledge as well as a type of meta-virtue, which can enable one both to know in principle what a good life is and to apply that knowledge successfully in living such a life in practice. This answer is based on the paper's main argument that the notion of wisdom, understood as being at once a meta-epistemological, meta-axiological and meta-eudemonic concept, provides the essential conceptual link between information on the one hand and the good life on the other. If, as we are told, this is the Age of Information, then both the theoretical examination of how information relates to the good life and the provision of an adequate answer to that question are essential for developing a deeper understanding of how to evaluate the theoretical and practical implications and ramifications of information for the good life, for individuals and for societies generally.

11.
This conceptual paper focuses on misinformation in the context of asylum seekers. We conducted a literature review on the concept of misinformation, which showed that a more nuanced understanding of information and misinformation is needed. To understand and study different viewpoints on the perceived accuracy of information, we introduce two new concepts: perceived misinformation and normative misinformation. The concepts are especially helpful when marginalised and vulnerable groups are studied, as these groups may perceive information differently from majority populations. Our literature review on the information practices of asylum seekers shows that asylum seekers come across different types of misinformation. These include official information that is inadequate or presented inadequately, outdated information, misinformation via gatekeepers and other mediators, information giving false hope or unrealistic expectations, rumours and distorted information. The diversity of misinformation in their lives shows that there is a need to understand information in general in a broad and more nuanced way. Based on this idea, we propose a Social Information Perception model (SIP), which shows that different social, cultural and historical aspects, as well as situation and context, are involved in the mental process which determines whether people perceive information as accurate information, misinformation or disinformation. The model, as well as the concepts of perceived and normative misinformation, is helpful when the information practices of marginalised and vulnerable groups are studied, giving a holistic view of their information situation. Understanding information practices more holistically enables different actors to give trustworthy information in an understandable and culturally meaningful way to asylum seekers.

12.
In order to evaluate the effectiveness of Information Retrieval (IR) systems it is key to collect relevance judgments from human assessors. Crowdsourcing has successfully been used as a method to scale up the collection of manual relevance judgments, and previous research has investigated the impact of different judgment task design elements (e.g., highlighting query keywords in the document) on judgment quality and efficiency. In this work we investigate the positive and negative impacts of presenting crowd human assessors with more than just the topic and the document to be judged. We deploy different variants of crowdsourced relevance judgment tasks following a between-subjects design in which we present different types of metadata to the human assessor. Specifically, we investigate the effect of human metadata (e.g., what other human assessors think of the current document, such as which relevance level has already been selected by the majority of crowd workers) and machine metadata (e.g., how IR systems scored the document, such as its average position in ranked lists, or statistics about the document such as term frequencies). We look at the impact of metadata on judgment quality (i.e., the level of agreement with trained assessors) and cost (i.e., the time it takes for workers to complete the judgments), as well as at how metadata quality positively or negatively impacts the collected judgments.
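As a concrete illustration of the quality measure mentioned above (agreement with trained assessors), the following sketch computes Cohen's kappa between gold and crowd labels; the graded scale and the data are illustrative assumptions, and the paper's exact agreement measure may differ.

```python
# Sketch of the quality measure mentioned above: agreement between crowd
# judgments and trained ("gold") assessors via Cohen's kappa. The graded scale
# and the labels are illustrative; the paper's exact measure may differ.
from sklearn.metrics import cohen_kappa_score

# Relevance labels for ten documents: 0 = not relevant, 1 = relevant, 2 = highly relevant.
gold  = [2, 0, 1, 1, 0, 2, 1, 0, 2, 1]   # trained assessors
crowd = [2, 0, 1, 0, 0, 2, 1, 1, 2, 1]   # majority vote of crowd workers

print(f"Cohen's kappa: {cohen_kappa_score(gold, crowd):.2f}")
```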

13.
The government world lags behind the business world in feeling the effects of the information technology revolution and related innovations in organization. But government may change radically in the decades ahead. This essay fields a concept, cyberocracy, to discuss how the development of, and demand for access to, the future electronic information and communications infrastructures (i.e., cyberspace) may alter the nature of the bureaucracy. Although it is too early to say precisely what a cyberocracy may look like, the outcomes may include new forms of democratic, totalitarian, and hybrid governments. Optimism about the information revolution should be tempered by a constant, anticipatory awareness of its potential dark side.

14.
With the newspapers' recent move to online reporting, traditional norms and practices of news reporting have changed to accommodate the new realities of online news writing. In particular, online news is much more fluid and prone to change in content than the traditional hard-copy newspapers: online newspaper articles often change over the course of the following days or even weeks as they respond to criticisms and new information becoming available. This poses a problem for social scientists who analyse newspaper coverage of science, health and risk topics, because it is no longer clear who has read and written what version, and what impact they potentially had on the national debates on these topics. In this note I briefly flag up this problem through two recent examples of U.K. national science stories and discuss the potential implications for public understanding of science (PUS) media research.

15.
This paper offers a novel contribution to an evidence-based assessment of the attractiveness features (or perceived qualities) of cities or urban neighbourhoods, based on a quantitative evaluation of such areas, by introducing and applying what is called ‘city-love’ analysis. To put this new concept in context, we first offer a concise overview of related and complementary notions (e.g. happiness, satisfaction, well-being, quality of life, contentment). Then we propose a new departure for attractiveness research pertaining to micro-based information on residents or users of cities by introducing the notion of a ‘city-love production function’. This function expresses the ability of cities to enhance the love or appreciation for a city or its neighbourhoods through an appropriate combination of five specific ‘city capital’ constituents. We test the validity of this so-called ‘Pentagon’ approach to city love by means of the city-love production function, using a multivariate econometric model based on extensive heterogeneous statistical data on municipalities in Sweden complemented with cell phone data. Our results are confronted with empirical ‘big data’ on the appreciation of Swedish places – and their characteristics – taken from social media platforms. The study also offers interesting findings from an advanced spatial-econometric and multilevel modelling approach. Our estimations show that the concept of the city-love production function allows us to quantitatively uncover important determinants of citizens’ love for their local environment.
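The following is a minimal sketch of what estimating a 'city-love production function' could look like as a simple linear regression on five 'city capital' constituents. The constituent names, the synthetic data and the OLS specification are illustrative assumptions; the paper itself uses extensive Swedish municipality data with spatial-econometric and multilevel models.

```python
# Illustrative sketch of estimating a 'city-love production function' by OLS on
# five 'city capital' constituents. The constituent names, the synthetic data and
# the linear specification are assumptions; the paper uses Swedish municipality
# data with spatial-econometric and multilevel models.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200  # hypothetical municipalities
capitals = pd.DataFrame({
    "natural":  rng.normal(size=n),   # e.g., green space, water, climate
    "built":    rng.normal(size=n),   # e.g., housing, infrastructure
    "social":   rng.normal(size=n),   # e.g., trust, associations
    "economic": rng.normal(size=n),   # e.g., jobs, income
    "cultural": rng.normal(size=n),   # e.g., amenities, heritage
})
# Synthetic outcome so the example runs end to end.
city_love = 0.4 * capitals["natural"] + 0.3 * capitals["social"] + rng.normal(scale=0.5, size=n)

model = sm.OLS(city_love, sm.add_constant(capitals)).fit()
print(model.params.round(2))
```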

16.
Research Policy, 2022, 51(8), 104143
The comparative advantage of a location shapes its industrial structure. Current theoretical models based on this principle do not take a stance on how comparative advantages in different industries or locations are related to each other, or what such patterns of relatedness might imply about the evolution of comparative advantage. We build a simple Ricardian-inspired model and show that hidden information on inter-industry and inter-location relatedness can be captured by simple correlations between the observed structure of industries across locations, or the structure of locations across industries. We then use this recovered information to calculate a measure of implied comparative advantage, and show that it explains much of a location’s current industrial structure. We give evidence that these patterns are present in a wide variety of contexts, namely the export of goods (internationally) and the employment, payroll and number of establishments across the industries of subnational regions (in the US, Chile and India). In each of these cases, the deviations between the observed and implied comparative advantage in the past tend to be highly predictive of future industry growth, especially at horizons of a decade or more; this explanatory power holds at both the intensive and the extensive margins. These results suggest that a component of the long-term evolution of comparative advantage is already implied in today’s patterns of production.
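A minimal sketch of the core idea described above: recover inter-industry relatedness from correlations in the observed location-by-industry structure, and use it to compute an implied comparative advantage for each location-industry pair. The RCA transformation, the correlation-based weighting and the toy data are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch of the core idea above: recover inter-industry relatedness from
# correlations in the observed location-by-industry structure, then compute an
# implied comparative advantage for each location-industry pair. The RCA step,
# the correlation weighting and the toy data are illustrative assumptions.
import numpy as np
import pandas as pd

def rca(X: pd.DataFrame) -> pd.DataFrame:
    """Revealed comparative advantage: observed share over expected share."""
    expected = np.outer(X.sum(axis=1), X.sum(axis=0)) / X.values.sum()
    return X / expected

def implied_rca(X: pd.DataFrame) -> pd.DataFrame:
    """For each location, a relatedness-weighted average of its (log) RCA in
    other industries, where relatedness is the correlation between industry
    columns across locations."""
    R = np.log1p(rca(X))
    sim = np.corrcoef(R.values, rowvar=False)          # industry-by-industry relatedness
    sim = np.clip(sim, 0.0, None)
    np.fill_diagonal(sim, 0.0)                         # exclude the industry's own signal
    weights = sim / np.maximum(sim.sum(axis=0, keepdims=True), 1e-12)
    return pd.DataFrame(R.values @ weights, index=X.index, columns=X.columns)

# Toy data: 4 locations x 4 industries (e.g., exports or employment).
X = pd.DataFrame(
    [[40, 30,  2,  1],
     [35, 25,  4,  3],
     [ 3,  2, 45, 30],
     [ 2,  4, 38, 35]],
    index=["loc_A", "loc_B", "loc_C", "loc_D"],
    columns=["textiles", "apparel", "machinery", "electronics"])
print(implied_rca(X).round(2))
```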

17.
This paper may be read as a sequel to a 1995 paper, published in this journal, in which I predicted what sort of transformations and problems were likely to affect the development of the Internet and our system of organized knowledge in the medium term. In this second attempt, I look at the future developments of information and communication technologies and try to estimate what their impact on our lives will be. The forecast is that, in information societies, the threshold between online and offline will soon disappear, and that once there is no difference, we shall become not cyborgs but rather inforgs, that is, connected informational organisms.

18.
We describe the design and miniaturization of a polymeric optical interface for flow monitoring in biomicrofluidics applications based on polydimethylsiloxane technology, providing optical transparency and compatibility with biological tissues. Design and ray tracing simulation are presented, as well as device realization and optical analysis of flow dynamics in microscopic blood vessels. Optics characterization of this polymeric microinterface in dynamic experimental conditions provides a proof of concept for the application of the device to two-phase flow monitoring in both in vitro experiments and in vivo microcirculation investigations. This technology supports the study of in vitro and in vivo microfluidic systems. It yields simultaneous optical measurements, allowing for continuous monitoring of flow. This development, integrating a well-known and widely used optical flow monitoring system, provides a disposable interface between live mammalian tissues and microfluidic devices, making them accessible to detection/processing technology, in support of or replacing standard intravital microscopy.

19.
This paper employs the 2008 financial crisis as an empirical setting to examine how investment strategies of venture capitalists (VCs) vary in the presence of a liquidity supply shock, and what the performance implications of these strategies are for their portfolio startups. We show that while, on aggregate, funded startups receive no less financing during the financial crisis than in non-crisis times, VCs allocate relatively more resources to startups operating in the VCs’ core sectors. We show that this skewed allocation follows from VCs choosing to double down on their core-sector investing, rather than from a changed mix of investors or startups during the financial crisis. These effects are strongest for early-stage startups, for which information problems are most severe. Furthermore, these results are driven by the investment strategies of more-experienced VCs. Building on these findings, we find superior ex post performance among crisis-funded portfolio startups operating in more-experienced VCs’ core sectors.

20.
Machine reading comprehension (MRC) is a challenging task in the field of artificial intelligence. Most existing MRC works contain a semantic matching module, either explicitly or intrinsically, to determine whether a piece of context answers a question. However, there is scant work that systematically evaluates different paradigms using semantic matching in MRC. In this paper, we conduct a systematic empirical study on semantic matching. We formulate a two-stage framework that consists of a semantic matching model and a reading model, based on pre-trained language models. We compare and analyze the effectiveness and efficiency of using semantic matching modules with different setups on four types of MRC datasets. We verify that using semantic matching before a reading model improves both the effectiveness and efficiency of MRC. Compared with answering questions by extracting information from concise context, we observe that semantic matching yields greater improvements for answering questions with noisy and adversarial context. Matching coarse-grained context (e.g., paragraphs) to questions is more effective than matching fine-grained context (e.g., sentences and spans). We also find that semantic matching is helpful for answering who/where/when/what/how/which questions, whereas it decreases the MRC performance on why questions. This may imply that semantic matching helps to answer a question whose necessary information can be retrieved from a single sentence. The above observations demonstrate the advantages and disadvantages of using semantic matching in different scenarios.
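A minimal sketch of the two-stage framework described above: a semantic-matching stage selects the most question-relevant paragraph, and a reading stage answers from that paragraph only. The TF-IDF matcher and the toy overlap-based reader are illustrative stand-ins for the pre-trained language models used in the paper.

```python
# Sketch of the two-stage framework described above: (1) a semantic-matching
# stage selects the most question-relevant paragraph, (2) a reading stage answers
# from that paragraph only. The TF-IDF matcher and the toy overlap-based reader
# stand in for the pre-trained language models used in the paper.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def match_paragraph(question: str, paragraphs: list[str]) -> str:
    """Stage 1: rank paragraphs by cosine similarity to the question."""
    vec = TfidfVectorizer().fit(paragraphs + [question])
    sims = cosine_similarity(vec.transform([question]), vec.transform(paragraphs))
    return paragraphs[sims[0].argmax()]

def read(question: str, paragraph: str) -> str:
    """Stage 2 (toy reader): return the sentence with most question-term overlap."""
    q_terms = set(question.lower().split())
    sentences = [s.strip() for s in paragraph.split(".") if s.strip()]
    return max(sentences, key=lambda s: len(q_terms & set(s.lower().split())))

paragraphs = [
    "The Eiffel Tower was completed in 1889. It stands in Paris.",
    "The Colosseum is an ancient amphitheatre located in Rome.",
]
question = "When was the Eiffel Tower completed"
print(read(question, match_paragraph(question, paragraphs)))
```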
