Similar Documents
20 similar documents found (search time: 31 ms)
1.
In this paper necessary and sufficient conditions are derived for a two-variable positive real function to be the driving-point impedance of certain classes of doubly-terminated lossless ladder networks. Specifically, two classes of networks are studied: (a) the class of networks in which the lossless structure is a cascade of p1- and p2-variable two-ports, each two-port having its transmission zeros at the origin and/or at infinity; (b) the class of networks in which the lossless structure is a lowpass or highpass ladder network with series arms having p1- and p2-type elements in series and shunt arms having the p1- and p2-type elements in parallel. It is indicated that via suitable transformations of the variables, conditions for many other types of ladder structures can be derived.
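For reference, the abstract presupposes the standard definition of a two-variable positive real (PR) function; this is textbook background rather than a result of the paper:

```latex
% A rational function F(p_1,p_2) is two-variable positive real if
\[
\begin{aligned}
&\text{(i)}\;\; F(p_1,p_2)\in\mathbb{R} \quad \text{whenever } p_1,p_2\in\mathbb{R},\\
&\text{(ii)}\;\; \operatorname{Re} F(p_1,p_2)\ \ge\ 0 \quad \text{whenever } \operatorname{Re} p_1>0 \text{ and } \operatorname{Re} p_2>0.
\end{aligned}
\]
```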

2.
The present paper sheds light on a mathematical model for blood flow through stenosed arteries with axially variable peripheral layer thickness and variable slip at the wall. The model consists of a core region of suspension of all the erythrocytes, assumed to be a Casson fluid, and a peripheral layer of plasma as a Newtonian fluid. For such models in the literature, the peripheral layer thickness and slip velocity are assumed a priori based on experimental observations. In the present analysis, new analytic expressions for the thickness of the peripheral layer, slip velocity and core viscosity have been obtained in terms of measurable quantities (flow rate (Q), centerline velocity (U), pressure gradient (−dp/dz), plasma viscosity (μp) and yield stress (θ)). Using the experimental values of Q, U, (−dp/dz), μp and θ, the values of the peripheral layer thickness, core viscosity and slip velocity at the wall have been computed. The theoretically obtained peripheral layer thickness has been compared with its experimental value, and the agreement between the two is very good (error < 1.4%). Further, a comparison between theoretical and experimental values of core viscosity shows an error of 3.7465% for the two-layered (Casson-Newtonian) model at a tube diameter of 40 μm. The analysis developed here could be used to determine more accurate values of the apparent viscosity of blood and of the aggregability, rigidity and deformability of red cells. This information could be useful in the development of new diagnostic tools for many diseases.
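For context, the Casson model assumed for the core region has the standard constitutive form below (textbook relation, stated with the abstract's yield stress θ; the Casson viscosity μ_c is our notation):

```latex
\[
\sqrt{\tau} \;=\; \sqrt{\theta} \;+\; \sqrt{\mu_c\,\dot{\gamma}}
\quad\text{for } \tau \ge \theta,
\qquad
\dot{\gamma} = 0 \quad\text{for } \tau < \theta,
\]
% where \tau is the shear stress and \dot{\gamma} the shear rate.
```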

3.
This paper investigates pth moment boundedness of neutral stochastic functional differential equations with Markovian switching (NSFDEsMS) based on the Razumikhin technique and a comparison principle; pth moment stability is examined as a special case. Since stochastic disturbances and neutral delays are both incorporated, the system considered is more complex than its classical counterparts. Moreover, the coefficients of the estimated upper bound for the diffusion operator associated with the underlying NSFDEsMS may be chosen to be sign-changing functions rather than constant or negative definite functions; as a result, the results apply to general non-autonomous neutral stochastic systems. Finally, two examples are provided to illustrate the effectiveness of the proposed methods.
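As background, a neutral stochastic functional differential equation with Markovian switching is conventionally written in the following form (standard notation; the paper's precise formulation may differ):

```latex
\[
d\bigl[x(t) - D(x_t, r(t))\bigr] \;=\; f(t, x_t, r(t))\,dt \;+\; g(t, x_t, r(t))\,dB(t),
\]
% where x_t = \{x(t+s) : -\tau \le s \le 0\}, D is the neutral term,
% B(t) is a Brownian motion and r(t) is a right-continuous Markov chain
% independent of B(t).
```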

4.
5.
6.
A number of numerical codes have been written for the problem of finding the circle of smallest radius in the Euclidean plane that encloses a finite set P of points, but these do not give much insight into the geometry of this circle. We investigate geometric properties of the minimal circle that may be useful in the theoretical analysis of applications. We show that a circle C enclosing P is minimal if and only if it is rigid in the sense that it cannot be translated while still enclosing P. We show that the center of the minimal circle is in the convex hull of P. We use this rigidity result and an analysis of the case of three points to find sharp estimates on the diameter of the minimal circle in terms of the diameter of P.
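The rigidity criterion fits a simple computational picture: the minimal circle is determined either by two points (as a diameter) or by three points (as a circumcircle). A brute-force sketch in Python (O(n^4), for illustration only; function names are ours, and at least two points are assumed):

```python
import itertools, math

def circle_two(a, b):
    # Circle with segment ab as diameter.
    cx, cy = (a[0] + b[0]) / 2, (a[1] + b[1]) / 2
    return (cx, cy), math.dist(a, b) / 2

def circle_three(a, b, c):
    # Circumcenter via the perpendicular-bisector linear system.
    d = 2 * (a[0]*(b[1]-c[1]) + b[0]*(c[1]-a[1]) + c[0]*(a[1]-b[1]))
    if abs(d) < 1e-12:
        return None  # collinear points have no circumcircle
    ux = ((a[0]**2+a[1]**2)*(b[1]-c[1]) + (b[0]**2+b[1]**2)*(c[1]-a[1])
          + (c[0]**2+c[1]**2)*(a[1]-b[1])) / d
    uy = ((a[0]**2+a[1]**2)*(c[0]-b[0]) + (b[0]**2+b[1]**2)*(a[0]-c[0])
          + (c[0]**2+c[1]**2)*(b[0]-a[0])) / d
    return (ux, uy), math.dist((ux, uy), a)

def encloses(center, r, pts, eps=1e-9):
    return all(math.dist(center, p) <= r + eps for p in pts)

def minimal_circle(pts):
    best = None
    for a, b in itertools.combinations(pts, 2):
        c, r = circle_two(a, b)
        if encloses(c, r, pts) and (best is None or r < best[1]):
            best = (c, r)
    for a, b, c3 in itertools.combinations(pts, 3):
        res = circle_three(a, b, c3)
        if res and encloses(*res, pts) and (best is None or res[1] < best[1]):
            best = res
    return best  # (center, radius)
```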

7.
Multiple-prespecified-dictionary sparse representation (MSR) has shown powerful potential in compressive sensing (CS) image reconstruction, as it can exploit more of the sparse structure and prior knowledge of images during minimization. Because the popular L1 regularization achieves only a suboptimal solution of the L0 problem, nonconvex regularization often yields better results in CS reconstruction. This paper proposes a nonconvex adaptive weighted Lp regularization CS framework based on the MSR strategy. We first propose a nonconvex MSR-based Lp regularization model and then develop two algorithms for minimizing the resulting nonconvex Lp optimization problem. Since the sparsity level of each regularizer varies with its prespecified dictionary, an adaptive scheme is proposed that weights each regularizer during optimization by exploiting the differences in sparsity levels as prior knowledge. Simulation results show that the proposed nonconvex framework yields a significant improvement in CS reconstruction over convex L1 regularization, and that the proposed MSR strategy also outperforms the traditional nonconvex Lp regularization methodology.
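The abstract does not spell out the two algorithms; as a generic illustration of nonconvex Lp (0 < p < 1) minimization, the sketch below uses iteratively reweighted soft-thresholding, a common surrogate scheme and an assumption of ours, not the paper's method:

```python
import numpy as np

# Generic scheme for  min_x 0.5*||y - A @ x||^2 + lam * ||x||_p^p,  0 < p < 1:
# a gradient step followed by a weighted soft-threshold, with weights from
# linearizing |x|^p around the current iterate.

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def irl1_lp(A, y, lam=0.1, p=0.5, iters=100, eps=1e-8):
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)
        z = x - grad / L                       # gradient step
        w = p * (np.abs(x) + eps) ** (p - 1)   # reweighting: d|x|^p / d|x|
        x = soft(z, lam * w / L)               # weighted soft-threshold
    return x
```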

8.
Robust fault detection for a class of nonlinear time-delay systems (total citations: 1; self-citations: 0; citations by others: 1)
In this paper, robust fault detection filter (RFDF) design problems are studied for nonlinear time-delay systems with unknown inputs. First, a reference residual model is introduced to formulate the robust fault detection filter design problem as an H∞ model-matching problem. Then appropriate input/output selection matrices are introduced to extend a performance index to time-delay systems in the time domain. The reference residual model designed according to this performance index is an optimal residual generator that accounts for robustness against disturbances and sensitivity to faults simultaneously. Applying robust H∞ optimization control techniques, existence conditions for the robust fault detection filter for nonlinear time-delay systems with unknown inputs are presented in terms of a linear matrix inequality (LMI) formulation that is independent of the time delay. An illustrative design example demonstrates the validity and applicability of the proposed approach.
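The paper's LMI conditions are not reproduced in the abstract. As a generic illustration of checking an LMI-type existence condition numerically, here is a standard Lyapunov-inequality feasibility test (assumes the cvxpy package; the filter LMIs in the paper have a different, more elaborate structure):

```python
import cvxpy as cp
import numpy as np

# Generic LMI feasibility check: find P > 0 with A'P + PA < 0,
# i.e., certify that A is Hurwitz.
A = np.array([[-1.0,  2.0],
              [ 0.0, -3.0]])

n = A.shape[0]
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),                    # P positive definite
               A.T @ P + P @ A << -eps * np.eye(n)]     # strict Lyapunov LMI
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print("LMI feasible:", prob.status == cp.OPTIMAL)
```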

9.
One of the best known measures of information retrieval (IR) performance is the F-score, the harmonic mean of precision and recall. In this article we show that the curve of the F-score as a function of the number of retrieved items always has the same shape: a fast concave increase to a maximum, followed by a slow decrease. In other words, there exists a single maximum, referred to as the tipping point, where the retrieval situation is ‘ideal’ in terms of the F-score. The tipping point thus indicates the optimal number of items to be retrieved, with more or fewer items resulting in a lower F-score. This empirical result is found in IR and link prediction experiments and can be partially explained theoretically, expanding on earlier results by Egghe. We discuss the implications and argue that, when comparing F-scores, one should compare the F-score curves’ tipping points.
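The claimed curve shape is easy to reproduce: given binary relevance judgments in rank order and the total number of relevant items, compute the F-score at every cut-off and take the argmax as the tipping point (minimal sketch with toy data):

```python
def f_score_curve(relevance, total_relevant):
    """relevance: list of 0/1 judgments in rank order."""
    curve, hits = [], 0
    for k, rel in enumerate(relevance, start=1):
        hits += rel
        precision = hits / k
        recall = hits / total_relevant
        f = (2 * precision * recall / (precision + recall)
             if precision + recall else 0.0)
        curve.append(f)
    return curve

ranked = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]        # toy judgments
curve = f_score_curve(ranked, total_relevant=5)
tipping_point = max(range(len(curve)), key=curve.__getitem__) + 1
print(tipping_point, max(curve))                # optimal cut-off and its F-score
```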

10.
Popular and/or prestigious? Measures of scholarly esteem (total citations: 1; self-citations: 0; citations by others: 1)
Citation analysis does not generally take the quality of citations into account: all citations are weighted equally irrespective of source. However, a scholar may be highly cited but not highly regarded: popularity and prestige are not identical measures of esteem. In this study we define popularity as the number of times an author is cited and prestige as the number of times an author is cited by highly cited papers. Information retrieval (IR) is the test field. We compare the 40 leading researchers in terms of their popularity and prestige over time. Some authors are ranked high on prestige but not on popularity, while others are ranked high on popularity but not on prestige. We also relate measures of popularity and prestige to date of Ph.D. award, number of key publications, organizational affiliation, receipt of prizes/honors, and gender.
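A minimal sketch of the two measures as defined in the abstract; the "highly cited" threshold and the data layout are our illustrative assumptions:

```python
def popularity_and_prestige(citations, paper_cites, threshold=100):
    """citations: {author: [citing_paper_ids]};
       paper_cites: {paper_id: that paper's own citation count};
       threshold: citation count above which a citing paper counts
       as 'highly cited' (assumed value)."""
    popularity = {a: len(cs) for a, cs in citations.items()}
    prestige = {a: sum(1 for c in cs if paper_cites.get(c, 0) >= threshold)
                for a, cs in citations.items()}
    return popularity, prestige
```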

11.
When a recommender system suggests items to end-users, it gives a certain exposure to the providers behind the recommended items: the system offers those providers' items the possibility of being reached and consumed by end-users. Hence, how recommendation lists are shaped can affect the experience of under-recommended providers on online platforms. To study this phenomenon, we focus on movie and book recommendation and enrich two datasets with the continent of production of each item. We use this data to characterize imbalances in the distribution of the user–item observations and in where items are produced (geographic imbalance). To assess whether recommender systems generate a disparate impact and (dis)advantage a group, we divide items into groups based on their continent of production and characterize how represented each group is in the data. We then run state-of-the-art recommender systems and measure the visibility and exposure given to each group, observing disparities that favor the most represented groups. We overcome these phenomena by introducing equity through a re-ranking approach that regulates both the share of recommendations given to items produced in a continent (visibility) and the positions at which those items are ranked in the recommendation list (exposure), with a negligible loss in effectiveness, thus controlling the fairness of providers coming from different continents. A comparison with the state of the art shows that our approach can provide more equity for providers, both in terms of visibility and of exposure.
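One simple way to regulate visibility in the spirit described is a greedy quota-constrained re-ranking; this sketch is our illustration, not the paper's exact procedure:

```python
def rerank(ranked, group_of, quotas, k):
    """ranked: item ids, best-first; group_of: item -> continent;
       quotas: continent -> max share of the top-k (assumed targets)."""
    out, used, deferred = [], {}, []
    for item in ranked:
        if len(out) == k:
            break
        g = group_of[item]
        if used.get(g, 0) < quotas.get(g, 1.0) * k:
            out.append(item)                    # within quota: keep rank order
            used[g] = used.get(g, 0) + 1
        else:
            deferred.append(item)               # over quota: push down
    out.extend(deferred[: k - len(out)])        # backfill if quotas underfill
    return out[:k]
```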

12.
In a typical inverted-file full-text document retrieval system, the user submits queries consisting of strings of characters combined by various operators. The strings are looked up in a text dictionary which lists, for each string, all the places in the database at which it occurs. It is desirable to allow the user to include in his query truncated terms such as X*, *X, *X*, or X*Y, where X and Y are specified strings and * is a variable-length don't-care character; that is, * represents an arbitrary, possibly empty, string. Processing these terms involves finding the set of all words in the dictionary that match these patterns. How to do this efficiently is a long-standing open problem in this domain. In this paper we present a uniform and efficient approach for processing all such query terms. The approach, based on a “permuted dictionary” and a corresponding set of access routines, requires essentially one disk access to obtain from the dictionary all the strings represented by a truncated term, with negligible computing time. It is thus well suited for on-line applications. Implementation is simple, and storage overhead is low: it can be made almost negligible by using some specially adapted compression techniques described in the paper. The basic approach is easily adaptable to slight variants, such as fixed (or bounded) length don't-care characters, or more complex pattern-matching templates.
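The permuted-dictionary idea can be sketched in a few lines: index every rotation of word + '$', after which each truncated-term type reduces to a single prefix lookup (an in-memory stand-in for the on-disk dictionary; the compression techniques are not shown):

```python
import bisect

# Query-to-prefix mapping over rotations of w + '$':
#   X*  -> prefix '$X'      *X  -> prefix 'X$'
#   *X* -> prefix 'X'       X*Y -> prefix 'Y$X'

def build(words):
    index = []
    for w in words:
        t = w + '$'
        for i in range(len(t)):
            index.append((t[i:] + t[:i], w))    # every rotation points back to w
    index.sort()
    return index

def prefix_lookup(index, prefix):
    lo = bisect.bisect_left(index, (prefix,))
    out = set()
    while lo < len(index) and index[lo][0].startswith(prefix):
        out.add(index[lo][1])
        lo += 1
    return out

idx = build(["castle", "cast", "last", "cat"])
print(prefix_lookup(idx, "$ca"))     # ca*   -> {'cast', 'castle', 'cat'}
print(prefix_lookup(idx, "t$cas"))   # cas*t -> {'cast'}
```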

13.
14.
Interdocument similarities are the fundamental information source required in cluster-based retrieval, an advanced retrieval approach that significantly improves performance in information retrieval (IR). An effective similarity metric is query-sensitive similarity, which was introduced by Tombros and Rijsbergen as a method to more directly satisfy the cluster hypothesis that forms the basis of cluster-based retrieval. Although this method is reported to be effective, existing applications of query-sensitive similarity are still limited to vector space models, with no connection to probabilistic approaches. We suggest a probabilistic framework that defines query-sensitive similarity based on probabilistic co-relevance, where the similarity between two documents is proportional to the probability that they are both co-relevant to a specific given query. We further simplify the proposed co-relevance-based similarity by decomposing it into two separate relevance models. We then formulate all the requisite components of the proposed similarity metric in terms of the scoring functions used by language modeling methods. Experimental results obtained using standard TREC test collections consistently show that the proposed query-sensitive similarity measure performs better than term-based similarity and existing query-sensitive similarity in the context of Voorhees’ nearest neighbor test (NNT).
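In compact form, and under an independence simplification of ours, the co-relevance idea reads:

```latex
\[
\mathrm{sim}_q(d_1, d_2) \;\propto\; P\bigl(R_{d_1}=1,\ R_{d_2}=1 \mid q\bigr)
\;\approx\; P(R=1 \mid d_1, q)\; P(R=1 \mid d_2, q),
\]
% where each factor can be instantiated with a language-modeling
% relevance score, matching the decomposition into two relevance models.
```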

15.
Psoriasis is a chronic inflammatory disease associated with increased insulin resistance, obesity and cardiovascular risk. The present study aimed to assess insulin resistance and the pattern of body fat deposition in psoriasis. Body mass index (BMI) and waist circumference (WC) were measured in 40 psoriatic patients and 46 age- and sex-matched control subjects. Fasting blood glucose (FBG) and serum insulin levels were measured by a standard photometric method and ELISA, respectively. HOMA-IR (homeostatic model assessment of insulin resistance) was calculated with appropriate software. The case and control groups were comparable in terms of age and sex (p = 0.934), with an increased prevalence of psoriasis among male subjects (60%). Differences in FBG and mean WC between the two groups were not statistically significant (p = 0.271 and 0.21, respectively). BMI was significantly higher in the case group than in the control group (p = 0.049). Serum insulin levels and insulin resistance were significantly higher in the psoriatic patients (p < 0.001). Multiple regression analysis revealed that insulin resistance (measured by HOMA) depended on BMI and WC at significance levels of p < 0.001 and p = 0.043, respectively. Therefore, psoriatic patients in this region have significantly elevated fasting serum insulin levels along with increased insulin resistance, even though their FBG levels remain normal. Furthermore, these abnormalities depend significantly on total body fat as well as abdominal fat deposits. We suggest that psoriatic patients be evaluated for metabolic syndrome and managed accordingly.
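The HOMA-IR computation itself is a standard one-liner (the Matthews et al. formula; the ~2.5 cut-off in the comment is a commonly used convention, not a value from this study):

```python
def homa_ir(fasting_glucose_mmol_l, fasting_insulin_uU_ml):
    # HOMA-IR = glucose (mmol/L) * insulin (microU/mL) / 22.5
    return fasting_glucose_mmol_l * fasting_insulin_uU_ml / 22.5

print(homa_ir(5.0, 12.0))  # ~2.67, above the ~2.5 cut-off often used
```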

16.
The topic of this paper is both pth moment and almost sure stability with a general decay rate for neutral stochastic functional differential equations, approached via the Razumikhin technique. The concept is then extended to neutral stochastic differential delay equations. The results obtained are general and may be specialized to exponential, polynomial or logarithmic stability. Moreover, some neutral stochastic functional differential equations that are not pth moment or almost surely exponentially stable may still be stable with respect to a certain lower decay rate. In that sense, some nontrivial examples are presented to justify and illustrate the usefulness of the theory: in these examples nothing can be said about pth moment or almost sure exponential stability, although the solutions are pth moment and almost surely polynomially or logarithmically stable.
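Stability with a general decay rate is usually formalized as follows (standard formulation in this literature; details may differ from the paper's): for a function λ(t) increasing to infinity,

```latex
\[
\limsup_{t\to\infty} \frac{\log \mathbb{E}\,|x(t)|^{p}}{\log \lambda(t)} \;\le\; -\gamma \;<\; 0,
\]
% with \lambda(t) = e^{t}, \lambda(t) = 1 + t and \lambda(t) = \log(e + t)
% recovering exponential, polynomial and logarithmic stability, respectively.
```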

17.
Background: β-Glucosidase assays are performed with purified or semipurified enzymes extracted after cell lysis. However, in screening studies that search for bacteria with β-glucosidase activity among many candidates, a fast method without cell lysis is desirable. With that objective, we report an in vivo β-glucosidase assay as a fast method to find β-glucosidase-producing strains.
Results: The method consists of growing the test strains in a medium supplemented with the artificial substrate p-nitrophenyl-β-glucopyranoside (pNPG). β-Glucosidases, when present, convert the substrate to p-nitrophenol (pNP), a molecule that can be easily measured in the supernatant spectrophotometrically at 405 nm. The assay was evaluated using two Bifidobacterium strains: Bifidobacterium longum B7254, which lacks β-glucosidase activity, and Bifidobacterium pseudocatenulatum B7003, which shows β-glucosidase activity. The addition of sodium carbonate during pNP measurement increases the sensitivity of pNP detection and avoids masking of the absorbance by the culture medium. Furthermore, we show that pNP is a stable enzymatic product, not metabolized by bacteria, but with an inhibitory effect on cell growth. β-Glucosidase activity was measured as units of enzyme per minute per gram of dry cell weight. The method also allowed the identification of Lactobacillus strains with higher β-glucosidase activity among several Lactobacillus species.
Conclusion: This in vivo β-glucosidase assay can be used as an enzymatic test on living cells without cell disruption. The method is simple and quantitative, and is recommended especially in studies screening for bacteria not merely with β-glucosidase activity but with high β-glucosidase activity.
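Converting A405 readings into activity units is a Beer-Lambert calculation; the sketch below uses an assumed molar absorptivity for alkaline pNP and our own unit conventions, so a pNP standard curve should be used for real calibration:

```python
# Assumed value: ~18,000 1/(M*cm) for pNP in alkaline solution;
# calibrate against a pNP standard curve in practice.
EPSILON_405 = 18000.0   # molar absorptivity, 1/(M*cm)
PATH_CM = 1.0           # cuvette path length, cm

def pnp_umol(a405, volume_ml):
    conc_m = a405 / (EPSILON_405 * PATH_CM)      # mol/L via Beer-Lambert
    return conc_m * 1e6 * volume_ml / 1000.0     # micromoles in the sample

def activity_units(a405, volume_ml, minutes, dry_cell_weight_g):
    # 1 U = 1 umol pNP released per minute, normalized per g dry cells.
    return pnp_umol(a405, volume_ml) / (minutes * dry_cell_weight_g)
```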

18.
The Authority and Ranking Effects play a key role in data fusion. The former refers to the fact that the potential relevance of a document increases exponentially as the number of systems retrieving it increases, and the latter to the phenomenon that documents higher up in ranked lists and found by more systems are more likely to be relevant. Data fusion methods commonly use all the documents returned by the different retrieval systems being compared. Yet, as documents further down in the result lists are considered, a document’s probability of being relevant decreases significantly and a major source of noise is introduced. This paper presents a systematic examination of the Authority and Ranking Effects as the number of documents in the result lists, called the list depth, is varied. Using TREC 3, 7, 8, 12 and 13 data, it is shown that the Authority and Ranking Effects are present at all list depths. However, if the systems in the same TREC track retrieve a large number of relevant documents, then the Ranking Effect only begins to emerge as more systems find the same document and/or the list depth increases. It is also shown that the Authority and Ranking Effects are not an artifact of how the TREC test collections were constructed.
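The Authority Effect can be estimated directly from fused runs and relevance judgments; a minimal sketch (the data layout is our assumption):

```python
from collections import defaultdict

def authority_curve(runs, qrels, depth=100):
    """runs: list of ranked doc-id lists (one per system);
       qrels: set of relevant doc ids.
       Returns P(relevant | retrieved by k systems) for each k."""
    count = defaultdict(int)
    for run in runs:
        for doc in run[:depth]:            # truncate at the list depth
            count[doc] += 1
    by_k = defaultdict(lambda: [0, 0])     # k -> [relevant, total]
    for doc, k in count.items():
        by_k[k][0] += doc in qrels
        by_k[k][1] += 1
    return {k: rel / tot for k, (rel, tot) in sorted(by_k.items())}
```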

19.
A new method for obtaining reduced order models for single-input-single-output, continuous-time systems is presented. The proposed algorithm matches the transfer functions of the original and the reduced system at 2M points where M is the order of the reduced model. The location of these points depends on a parameter which can be selected to control the accuracy of the approximation and stability. Numerical examples and comparisons with other methods of model reduction are given.
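A generic point-matching (rational interpolation) sketch of the idea is below; the monic-denominator linearization and the geometric point placement controlled by a parameter are our assumptions, not necessarily the paper's scheme:

```python
import numpy as np

def reduce_by_point_matching(H, M, alpha=1.0):
    """Fit R(s) = N(s)/D(s), deg N = M-1, deg D = M (monic), by forcing
    R(s_i) = H(s_i) at 2M points. N(s_i) - H(s_i) D(s_i) = 0 gives a
    2M x 2M linear system in the coefficients."""
    s = alpha * np.logspace(0, 1, 2 * M)        # 2M points on the positive real axis
    A = np.zeros((2 * M, 2 * M))
    rhs = np.zeros(2 * M)
    for i, si in enumerate(s):
        h = H(si)
        A[i, :M] = si ** np.arange(M)           # numerator coeffs b_0..b_{M-1}
        A[i, M:] = -h * si ** np.arange(M)      # denominator coeffs d_0..d_{M-1}
        rhs[i] = h * si ** M                    # monic leading term moved right
    x = np.linalg.solve(A, rhs)
    return x[:M], np.append(x[M:], 1.0)         # coeffs in ascending powers of s

# Toy usage: reduce a 3rd-order transfer function to order M = 2.
H = lambda s: (s + 3) / ((s + 1) * (s + 2) * (s + 5))
num, den = reduce_by_point_matching(H, M=2)
```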

20.
The documents retrieved by a web search are useful if the information they contain contributes to some task or information need. To measure search result utility, studies have typically focused on perceived usefulness rather than on actual information use. We investigate the actual usefulness of search results, as indicated by their use as sources in an extensive writing task, and the factors that make a writer successful at retrieving useful sources. Our data comprise 150 essays written by 12 writers whose querying, clicking and writing activities were recorded. By tracking authors’ text reuse behavior, we quantify the search results’ contribution to the task more accurately than before. We model the overall utility of the search results retrieved throughout the writing process using path analysis, and compare a binary utility model (Reuse Events) to one that quantifies a degree of utility (Reuse Amount). The Reuse Events model has greater explanatory power (63% vs. 48%); in both models, the number of clicks is by far the strongest predictor of useful results, with β-coefficients up to 0.7, while dwell time has a negative effect (β between −0.14 and −0.21). In conclusion, we propose a new measure of search result usefulness based on a source’s contribution to an evolving text. Our findings are valid for tasks where text reuse is allowed, but also have implications for designing indicators of search result usefulness for general writing tasks.
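A stripped-down stand-in for the utility model: regress a reuse-based utility score on clicks and dwell time. The data here are synthetic, with coefficient signs merely echoing those reported (positive for clicks, negative for dwell time); the paper itself uses path analysis over richer variables:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 150                                        # one row per essay session
clicks = rng.poisson(8, n)                     # synthetic click counts
dwell = rng.gamma(2.0, 30.0, n)                # synthetic dwell times (s)
utility = 0.7 * clicks - 0.02 * dwell + rng.normal(0, 1, n)

X = sm.add_constant(np.column_stack([clicks, dwell]))
fit = sm.OLS(utility, X).fit()
print(fit.params, fit.rsquared)                # recovers the assumed signs
```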
