Similar Documents
20 similar documents were retrieved (search time: 31 ms).
1.
This chapter describes the applicability of quantum cryptography beyond key exchange. It is divided into two parts: one describing applications of quantum cryptography other than key exchange, the other considering countermeasures against additional threats such as coercibility or traffic analysis. Every section ends with a short summary or appraisal in boldface; the conclusions briefly recapitulate all topics of the chapter, and an outlook gives a personal assessment of the relevance of quantum cryptographic protocols and of promising future directions of research.

2.
A promising application of future quantum computers is the simulation of physical systems of a quantum nature. It has been estimated that a quantum computer operating with as few as 50–100 logical qubits would be capable of obtaining simulation results that are inaccessible to classical computers. This chapter explains the basic principles of simulation on a quantum computer and reviews some applications.
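As background for the "basic principles" the chapter refers to (a standard result from the quantum-simulation literature, not quoted from the abstract): time evolution under a Hamiltonian that splits into two non-commuting terms H1 and H2 is typically approximated by a Lie–Trotter product of short evolutions under the individual terms, each of which is easy to realize as a quantum circuit:

```latex
e^{-i(H_1+H_2)t} \;=\; \lim_{n\to\infty}\Bigl(e^{-iH_1 t/n}\,e^{-iH_2 t/n}\Bigr)^{n},
\qquad
\Bigl\| e^{-i(H_1+H_2)t} - \bigl(e^{-iH_1 t/n}\,e^{-iH_2 t/n}\bigr)^{n} \Bigr\|
\;=\; O\!\left(\tfrac{t^{2}}{n}\right),
```

where the constant in the error bound depends on the commutator of H1 and H2.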

3.
Quantum algorithms are a field of growing interest within theoretical computer science as well as the physics community. Surprisingly, although the number of researchers working on the subject is ever-increasing, the number of quantum algorithms found so far is quite small. In fact, the task of designing new quantum algorithms has proven to be extremely difficult. In this paper we give an overview of the known quantum algorithms and briefly describe the underlying ideas. Roughly, the algorithms presented are divided into hidden-subgroup-type algorithms and amplitude-amplification-type algorithms. While the former deal with problems of a group-theoretical nature and promise to yield strong separations between classical and quantum algorithms, the latter have proved to be a prolific source of algorithms that achieve a polynomial speed-up over classical algorithms. We also discuss quantum algorithms which do not fall into these two categories and survey techniques of general interest in quantum computing, such as adiabatic computing, lower bounds for quantum algorithms, and quantum interactive proofs.
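To make the amplitude-amplification family concrete, the following is a minimal classical state-vector simulation of Grover search, the canonical amplitude-amplification algorithm; the function name and the toy parameters are ours, not taken from the survey:

```python
import numpy as np

def grover_search(n_qubits, marked):
    """Minimal state-vector simulation of Grover's algorithm: amplify the
    amplitude of one marked basis state among N = 2**n_qubits candidates."""
    N = 2 ** n_qubits
    state = np.full(N, 1 / np.sqrt(N))            # uniform superposition
    iterations = int(round(np.pi / 4 * np.sqrt(N)))
    for _ in range(iterations):
        state[marked] *= -1                       # oracle: phase-flip the marked item
        state = 2 * state.mean() - state          # diffusion: inversion about the mean
    return np.argmax(state ** 2), iterations

best, iters = grover_search(8, marked=42)
print(best, iters)   # expected: 42 (the marked item) after 13 iterations, i.e. O(sqrt(N))
```

The quadratic speed-up is visible in the iteration count: roughly pi/4 * sqrt(N) oracle calls instead of the O(N) lookups a classical search would need.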

4.
Progress in controlling quantum systems is the major prerequisite for the realization of quantum computing, yet the results of quantum computing research can also be useful in solving quantum control problems that are not related to computational problems. We argue that quantum computing provides clear concepts and simple models for discussing quantum theoretical problems. In this article we describe examples from completely different fields where models of quantum computing and quantum communication shed light on quantum theory. First we address quantum limits of classical low-power computation and argue that the language of quantum information theory allows us to discuss device-independent bounds. We argue that a classical bit behaves to some extent like a quantum bit during the time period in which it switches its logical value; this implies that a readout during the switching process generates entropy. A related problem is the distribution of timing information, such as clock signals, in low-power devices. For low signal energy, the situation is close to phase-covariant cloning problems in quantum information theory.

5.
Distributed top-k query processing is increasingly becoming an essential functionality in a large number of emerging application classes. This paper addresses the efficient algebraic optimization of top-k queries in wide-area distributed data repositories, where the index lists for the attribute values (or text terms) of a query are distributed across a number of data peers and the computational costs include network latency, bandwidth consumption, and local peer work. We use a dynamic programming approach to find the optimal execution plan, using compact data synopses for selectivity estimation as the basis for our cost model. The optimized query is executed in a hierarchical way involving a small and fixed number of communication phases. We have performed experiments on real web data that show the benefits of distributed top-k query optimization both in network resource consumption and in query response time.
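The paper's contribution is the dynamic-programming optimization of distributed execution plans; purely as background, the following is a minimal, centralized sketch of the kind of top-k aggregation over score-sorted index lists (in the style of Fagin's Threshold Algorithm) that such plans ultimately execute. The names, data, and stopping rule shown here are illustrative assumptions, not the authors' algorithm:

```python
import heapq

def threshold_topk(index_lists, k):
    """Threshold-Algorithm-style top-k over several index lists, each sorted
    by descending local score; entries are (doc_id, score) pairs and the
    aggregate score of a document is the sum of its per-list scores."""
    seen = {}        # doc_id -> exact aggregate score (after random accesses)
    best = []        # current top-k as (doc_id, score) pairs, best first
    depth = 0
    while True:
        frontier = 0.0          # threshold: sum of scores at the current scan depth
        progressed = False
        for lst in index_lists:
            if depth < len(lst):
                doc, score = lst[depth]
                frontier += score
                progressed = True
                # random access: fetch this document's score from every list
                seen[doc] = sum(dict(l).get(doc, 0.0) for l in index_lists)
        if not progressed:
            break
        best = heapq.nlargest(k, seen.items(), key=lambda kv: kv[1])
        # stop once the k-th best aggregate score reaches the threshold
        if len(best) == k and best[-1][1] >= frontier:
            break
        depth += 1
    return best

lists = [
    [("d1", 0.9), ("d3", 0.8), ("d2", 0.4)],   # peer 1, sorted by local score
    [("d3", 0.7), ("d2", 0.6), ("d1", 0.2)],   # peer 2
]
print(threshold_topk(lists, k=2))               # [('d3', 1.5), ('d1', 1.1)]
```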

6.
This paper deals with automating the drawing of subway maps. Two features of schematic subway maps distinguish them from drawings of other networks such as flow charts or organigrams. First, most schematic subway maps use not only horizontal and vertical lines but also diagonals. This gives more flexibility in the layout process, but it also makes the problem provably hard. Second, a subway map represents a network whose components have geographic locations that are roughly known to the users of such a map. This knowledge must be respected during the search for a clear layout of the network. For the sake of visual clarity the underlying geography may be distorted, but it must not be given up; otherwise map users will be hopelessly confused. In this paper we first give a list of generally accepted rules that a good subway map should adhere to. Next we survey three recent methods for drawing subway maps, analyze their performance with respect to the above rules, and compare the resulting maps with each other and with official subway maps drawn by graphic designers. We then focus on one of the methods, which is based on mixed-integer linear programming, a widely used global optimization technique. This method is guaranteed to find a drawing that fulfills a subset of the above-mentioned rules (if such a drawing exists) and optimizes a weighted sum of costs corresponding to the remaining rules. The method can draw even large subway networks, such as the London Underground, in an aesthetically pleasing manner similar to maps made by professional graphic designers. If station labels are included in the optimization process, so far only medium-size networks can be drawn. Finally, we give evidence why drawing good subway maps is difficult (even without labels).
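To illustrate the mixed-integer programming ingredient, here is a toy sketch (using the PuLP modelling library; the network, variable names, and big-M formulation are our own assumptions, not the authors' model) that restricts every edge of a tiny network to one of the octilinear directions while keeping stations close to their rough geographic positions:

```python
from pulp import LpProblem, LpVariable, LpMinimize, LpBinary, lpSum, PULP_CBC_CMD

# Hypothetical toy network: station -> rough geographic position.
geo = {"A": (0.0, 0.0), "B": (3.2, 0.9), "C": (6.1, 4.1)}
edges = [("A", "B"), ("B", "C")]
M = 100  # big-M constant

prob = LpProblem("octilinear_layout", LpMinimize)
x = {s: LpVariable(f"x_{s}", -M, M) for s in geo}
y = {s: LpVariable(f"y_{s}", -M, M) for s in geo}

# Objective: stay close to the geographic input positions (L1 distance, linearised).
dev = {s: (LpVariable(f"dx_{s}", 0), LpVariable(f"dy_{s}", 0)) for s in geo}
for s, (gx, gy) in geo.items():
    dx, dy = dev[s]
    prob += x[s] - gx <= dx
    prob += gx - x[s] <= dx
    prob += y[s] - gy <= dy
    prob += gy - y[s] <= dy
prob += lpSum(d for pair in dev.values() for d in pair)

# Octilinearity: each edge must be horizontal, vertical, or one of the two diagonals.
for u, v in edges:
    sel = [LpVariable(f"dir_{u}{v}_{i}", cat=LpBinary) for i in range(4)]
    prob += lpSum(sel) == 1
    exprs = [y[u] - y[v],                     # horizontal  => dy = 0
             x[u] - x[v],                     # vertical    => dx = 0
             (x[u] - x[v]) - (y[u] - y[v]),   # NE diagonal => dx = dy
             (x[u] - x[v]) + (y[u] - y[v])]   # NW diagonal => dx = -dy
    for expr, b in zip(exprs, sel):
        prob += expr <= M * (1 - b)
        prob += -expr <= M * (1 - b)

prob.solve(PULP_CBC_CMD(msg=False))
print({s: (x[s].value(), y[s].value()) for s in geo})
```

The real layout models add many more constraints (minimum edge lengths, relative position preservation, bend and label costs), but the binary direction selectors above are the core trick that makes octilinearity expressible in a MILP.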

7.
Quantum information theory holds the promise of revolutionizing technologies other than computing and communications. In this article we show how quantum entanglement can be harnessed to beat the Rayleigh diffraction limit of conventional optical lithography and to permit nano-devices to be fabricated at a scale arbitrarily shorter than the wavelength used. Given the relative ease of performing optical lithography compared with other schemes, and the relative costs associated with migrating the lithography industry to each new fabrication technology, exploiting quantum entanglement to extend the useful life of optical lithography could be economically attractive.
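For context, the standard quantitative statement behind this claim (from the quantum-lithography literature, not spelled out in the abstract) is that interferometric lithography with classical light of wavelength lambda is limited to feature sizes of about half the wavelength, whereas an N-photon path-entangled (NOON) state produces interference fringes N times finer:

```latex
\Delta x_{\text{classical}} \;\approx\; \frac{\lambda}{2},
\qquad
\Delta x_{\text{entangled}} \;\approx\; \frac{\lambda}{2N}.
```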

8.
In recent years, very little effort has been made to give XPath a proper algebraic treatment. One laudable exception is the Natix Algebra (NAL), which defines the translation of XPath queries into algebraic expressions in a concise way, thereby enabling algebraic optimizations. However, NAL does not capture several promising core XML query evaluation algorithms, such as the Holistic Twig Join. By integrating a logical structural join operator, we enable NAL to be compiled into a physical algebra containing exactly those missing physical operators. We provide several important query unnesting rules and demonstrate the effectiveness of our approach with an implementation in the XML Transaction Coordinator (XTC), our prototype of a native XML database system.
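For illustration, here is a minimal stack-based binary structural join over region-encoded XML nodes, the simpler building block underlying holistic twig joins. This is a generic textbook-style sketch, not the NAL/XTC operator; the node encoding, names, and input-ordering requirement are assumptions:

```python
def structural_join(ancestors, descendants):
    """Ancestor-descendant structural join on (start, end) region-encoded
    XML nodes; both input lists must be sorted by start position."""
    results = []
    stack = []          # ancestors whose regions are still open
    ai = 0
    for d in descendants:
        # push all ancestors that start before this descendant
        while ai < len(ancestors) and ancestors[ai][0] < d[0]:
            # drop ancestors whose region closed before the new one starts
            while stack and stack[-1][1] < ancestors[ai][0]:
                stack.pop()
            stack.append(ancestors[ai])
            ai += 1
        # drop ancestors whose region closed before this descendant starts
        while stack and stack[-1][1] < d[0]:
            stack.pop()
        # every ancestor still on the stack contains d
        for a in stack:
            results.append((a, d))
    return results

anc = [(1, 100), (5, 50), (60, 90)]   # e.g. region codes of all <section> elements
desc = [(10, 11), (70, 71)]           # e.g. region codes of all <figure> elements
print(structural_join(anc, desc))
# [((1, 100), (10, 11)), ((5, 50), (10, 11)), ((1, 100), (70, 71)), ((60, 90), (70, 71))]
```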

9.
AEG-Telefunken developed computers from 1957 onwards. Besides process-control computers, the large-scale computers TR 4 and TR 440 played an important role. From 1969 to 1976, 46 TR 440 systems were installed, amounting to total sales of 730 million DM (370 million Euro) [2]. AEG-Telefunken did not treat computers as a strategic part of its product policy, even though the company had had an excellent technological starting position for this business and the Telefunken computers later received strong technology-policy support. The development of the TR 440 system had to overcome fundamental conceptual and technological difficulties, compounded by a lack of qualified suppliers and experienced personnel. Nevertheless, by 1970 the company had completed the fastest computer developed in Europe up to that time [6], with system software years ahead of its competitors [9]. From the beginning, the company had been looking for suitable business partners. With the sale of the large-scale computer business to Siemens, the plans already under way for a TR 440 successor came to an end. AEG-Telefunken's computer business strategy is also the subject of a paper to appear in the IEEE Annals of the History of Computing.

10.
The Telefunken TR 440 was, at its completion in 1970, the fastest computer developed and manufactured in Europe up to that time. Forty-six machines were installed in scientific institutions, public agencies, and industry. The TR 440 was the successor of the TR 4 of 1962, which had gained an important position on the same market through its performance and its innovative compiler concept. The TR 440 was designed for timesharing operation and was one of the first large-scale computers to use integrated circuits. It featured page addressing and a refined system of privileged functions. The processor delivered close to 1 MIPS, and the main memory had a capacity of 1.5 MB. Its starting point, structure, and technology are described from today's perspective. A follow-on product, the TR 550, was conceived but never developed. The appreciation the TR 440 gained among its users was primarily due to its system software, which is described in a further paper [6]. The structure and technology of the TR 440 are also the subject of a paper submitted to the IEEE Annals of the History of Computing.

11.
Model checking techniques are recognized for providing reliable and comprehensive results. Instead of examining only a few cases, as is done in testing, model checking covers the whole state space in a mathematical proof of correctness. Yet this completeness comes at a price: the state-explosion problem is hard to handle. In our industrial case study, we apply automated model checking techniques to an innovative elevator system, the TWIN by ThyssenKrupp. By means of abstraction and nondeterminism we cope with the runtime behaviour and manage to prove the validity of our specification efficiently. The elevator's safety requirements are expressed exhaustively in temporal logic, together with real-world and algorithmic prerequisites, consistency properties, and fairness constraints. Beyond verifying system safety for an actual installation, our case study demonstrates the rewarding applicability of model checking at an industrial scale. CR subject classification: D.2.4; F.3.1; J.7; C.3
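The case study relies on a production model checker and temporal-logic specifications; purely as an illustration of what exhaustive state-space exploration of a safety property (an AG invariant) means, here is a tiny explicit-state checker applied to a hypothetical single-cabin elevator model. The model, state encoding, and property are invented for this sketch and are not taken from the TWIN study:

```python
from collections import deque

def check_safety(initial, successors, safe):
    """Tiny explicit-state model checker: breadth-first search over the whole
    reachable state space, verifying an invariant (AG safe) and returning a
    counterexample trace if the invariant is violated."""
    queue = deque([(initial, [initial])])
    visited = {initial}
    while queue:
        state, path = queue.popleft()
        if not safe(state):
            return False, path                      # counterexample trace
        for nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                queue.append((nxt, path + [nxt]))
    return True, None

# Hypothetical toy model: state = (floor, doors_open, moving) for one cabin.
def successors(s):
    floor, doors, moving = s
    out = []
    if doors:
        out.append((floor, False, False))            # close doors
    if not doors and not moving:
        out.append((floor, True, False))              # open doors (only when stopped)
    if not doors:
        out.append((floor, False, True))              # start moving
    if moving:
        out.append(((floor % 3) + 1, False, True))    # move to next floor
        out.append((floor, False, False))             # stop
    return out

# Safety requirement (AG invariant): the doors are never open while the cabin moves.
ok, trace = check_safety((1, False, False), successors, lambda s: not (s[1] and s[2]))
print(ok, trace)   # True, None — the invariant holds in every reachable state
```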

12.
Model checking techniques are recognized for providing reliable and comprehensive results. Instead of examining only a few cases, as is done in testing, model checking covers the whole state space in a mathematical proof of correctness. Yet this completeness comes at a price: the state-explosion problem is hard to handle. In our industrial case study, we apply automated model checking techniques to an innovative elevator system, the TWIN by ThyssenKrupp. By means of abstraction and nondeterminism we cope with the runtime behaviour and manage to prove the validity of our specification efficiently. The elevator's safety requirements are expressed exhaustively in temporal logic, together with real-world and algorithmic prerequisites, consistency properties, and fairness constraints. Beyond verifying system safety for an actual installation, our case study demonstrates the rewarding applicability of model checking at an industrial scale.

13.
The design, implementation, and re-engineering of operating systems remain ambitious undertakings. Despite, or even because of, the long history of theory and practice in this field, adapting existing systems to environments whose conditions and requirements, in functional and/or non-functional respects, differ from those originally specified or assumed is anything but simple. This is especially true for the embedded systems domain, which, on the one hand, calls for highly specialized and application-aware system abstractions and, on the other hand, cares a great deal about easily reusable implementations of these abstractions. The latter aspect becomes more and more important as embedded systems technology faces ever shorter innovation cycles. Software for embedded systems needs to be designed for variability, and this is particularly true for the operating systems of this domain. The paper discusses dimensions of variability that need to be considered in the development of embedded operating systems and presents approaches that aid the construction and maintenance of evolutionary operating systems. CR subject classification: C.3; D.2.11; D.2.13; D.4.7

14.
To address the poor browsability caused by the large and cluttered result sets returned by search engines, this paper proposes and describes in detail a meta-search framework based on learning from user behaviour, together with a method for clustering search results. With this framework, user behaviour can be collected in real time for inference and learning; the effective knowledge thus learned is stored in a knowledge base to guide result clustering, and it is continuously adjusted and refined as the user's search proceeds. A prototype system shows that the approach is feasible and effective.

15.
Medical software is regarded as both a success factor and a risk factor for the quality and efficiency of medical care. First, reasons are given why the medical sector is special: it is highly complex and variable, exposed both to privacy and confidentiality risks and to the risk of denying access to authorized personnel, and medical users are a highly qualified and demanding population. Then some software-technology and software-engineering approaches are presented that partially master these challenges. They include various divide-and-conquer strategies, process- and change-management approaches, and quality-assurance approaches. Benchmark institutions and comprehensive solutions are also presented. Finally, some challenges are discussed that call for approaches other than technical ones in order to achieve user buy-in and to handle the outer limits of complexity, variability, and change. Here, methods from psychology, economics, and game theory complement domain knowledge and exploratory experimentation with new technologies.
CR subject classification: D.2.1; D.2.2; D.2.7; D.2.8; D.2.9; D.2.11; D.2.12; D.4.6; H.1.2; H.2.7; H.4.1; H.5.3; J.3; K.1; K.4.1; K.4.3; K.5.2

16.
17.
The reverse k-nearest neighbor (RkNN) problem, i.e. finding all objects in a data set whose k-nearest neighbors include a specified query object, has received increasing attention recently. Many industrial and scientific applications call for solutions of the RkNN problem in arbitrary metric spaces, where the data objects are not Euclidean and only a metric distance function is given for specifying object similarity. Usually, these applications need a solution for the generalized problem where the value of k is not known in advance and may change from query to query. In addition, many applications require a fast approximate answer to RkNN queries. For these scenarios, it is important to generate a fast answer with high recall. In this paper, we propose the first approach for efficient approximate RkNN search in arbitrary metric spaces where the value of k is specified at query time. Our approach exploits the advantages of existing metric index structures but uses an approximation of the nearest-neighbor distances in order to prune the search space. We show that our method scales significantly better than existing exact approaches while producing an approximation of the true query result with high recall.
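A minimal sketch of the pruning idea: an object belongs to the reverse-kNN result exactly when the query lies within that object's k-nearest-neighbor distance, so a (conservative) estimate of that distance suffices to filter candidates without computing exact kNNs at query time. The function names, the 1-D data, and the way the estimate is obtained are illustrative assumptions, not the authors' index-based method:

```python
def approximate_rknn(query, data, k, dist, knn_dist_estimate):
    """Report every object o whose (estimated) k-nearest-neighbor distance
    is at least dist(query, o); with a conservative estimate this yields an
    approximate RkNN answer without exact kNN computation per object."""
    return [o for o in data if dist(query, o) <= knn_dist_estimate(o, k)]

# Hypothetical usage with 1-D points and a precomputed per-object estimate of
# the 2NN distance (in a real system this would come from the metric index).
points = [1.0, 2.0, 2.5, 7.0, 9.0]
est = {o: sorted(abs(o - p) for p in points if p != o)[1] for o in points}
result = approximate_rknn(3.0, points, 2,
                          dist=lambda a, b: abs(a - b),
                          knn_dist_estimate=lambda o, k: est[o])
print(result)   # [2.0, 2.5, 7.0, 9.0]
```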

18.
Money is a very important and precious medium of exchange which has been used for nearly 2500 years. This medium of exchange consisted of metal coins until the 17th century, and later on coins were used together with paper notes. Money, as well as being the most important medium of exchange, is the symbol of a state's economic and commercial vision, wealth, culture, national values, political viewpoint and international influence. Money is both an information source and a medium for the recording of information. In this paper the main characteristics of paper money and its usefulness as an information source will be studied, along with a comparative analysis of the organization and presentation of the banknote collection of the Turkish National Library and the banknote collections of the German, American, British and French national libraries.

19.
The Deep Dive into KBART preconference workshop provided a comprehensive overview of the National Information Standards Organization Knowledge Bases and Related Tools (KBART) Phase II Recommended Practice (http://www.niso.org/publications/rp/rp-9-2014/). The workshop was divided into four sections. The first provided an overview of the background, purpose, and value of KBART to all members of the information supply chain. The next section focused on the basic guidelines for effective exchange of metadata with knowledgebases, including method of exchange, data format, file naming conventions, and frequency of exchange. The remaining two sections of the workshop addressed the correct use of KBART data fields, first in relation to serials and then to monographs. Through classroom instruction, interactive quizzes, and hands-on exercises, the workshop provided in-depth coverage of all KBART data elements, with special focus on many of the most frequently asked questions about the recommended practice.

20.
Background: The Australian National Stroke Foundation appointed a search specialist to find the best available evidence for the second edition of its Clinical Guidelines for Acute Stroke Management. Objective: To identify the relative effectiveness of different evidence sources for the guideline update. Methods: We searched and reviewed references from five valid evidence sources for clinical and economic questions: (i) electronic databases; (ii) reference lists of relevant systematic reviews, guidelines, and/or primary studies; (iii) tables of contents of a number of key journals for the previous 6 months; (iv) internet/grey literature; and (v) experts. Reference sources were recorded, quantified, and analysed. Results: For the clinical portion of the guidelines document there was greater use of previous knowledge and of sources other than electronic databases, whereas the economic section made greater use of electronic databases. Conclusions: The results confirmed that searchers need to be aware of the context and range of sources for evidence searches. For the best available evidence, searchers cannot rely solely on electronic databases and need to encompass many different media and sources.
