Conditional Probability


Selected Abstracts


Genetic testing for HFE hemochromatosis in Australia: The value of testing relatives of simple heterozygotes

JOURNAL OF GASTROENTEROLOGY AND HEPATOLOGY, Issue 7 2002
JULEEN A CAVANAUGH
Abstract Background: It is unclear whether screening of relatives of C282Y and H63D heterozygotes (other than compound heterozygotes) for hemochromatosis will detect sufficient numbers of cases to justify introduction of this screening strategy. Methods: Conditional probabilities were determined using published Australian allele frequencies and penetrance data to determine the detection rate of hemochromatosis by testing the siblings and offspring of heterozygotes (subjects with only one HFE mutation). Results: The number of individuals who are at risk of developing increased body iron stores because of HFE mutations is substantially higher (1 in 80) than previously estimated. In addition, 33% of the Australian population are heterozygous for either C282Y or H63D. Based on population estimates, the relative risk to the offspring of C282Y and H63D heterozygotes of developing increased iron stores is 4.1 and 1.5, respectively, while the relative risk to each sibling is 2.3 and 1, respectively. The risk of developing clinical features of hemochromatosis or hepatic fibrosis is likely to be substantially lower. Conclusions: Although the detection rate from testing the families of unaffected heterozygotes is low, this can be justified as a clinically useful screening strategy. At the present time this strategy should be restricted to first-degree relatives of heterozygotes. Further studies are recommended to determine if cascade genetic screening is a cost-effective alternative to general population screening. © 2002 Blackwell Publishing Asia Pty Ltd [source]
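
As a rough illustration of the style of calculation described above (and not the authors' actual computation), the sketch below derives the probability that the offspring of a known C282Y heterozygote is a C282Y homozygote under Hardy-Weinberg assumptions. The allele frequency is a placeholder rather than the published Australian estimate, and penetrance is ignored.

```python
# Illustrative sketch (not the authors' calculation): probability that the
# offspring of a known C282Y heterozygote is a C282Y homozygote, assuming
# random mating and Hardy-Weinberg proportions in the general population.

def offspring_homozygote_prob(allele_freq: float) -> float:
    """P(offspring homozygous | one parent is a known heterozygote).

    The heterozygous parent transmits the mutant allele with probability 1/2;
    the other parent contributes a mutant allele with probability equal to
    the population allele frequency.
    """
    return 0.5 * allele_freq

# Placeholder allele frequency for C282Y -- an assumption for illustration,
# not the published Australian estimate.
q_c282y = 0.07

p_homozygote = offspring_homozygote_prob(q_c282y)
print(f"P(offspring C282Y homozygote | parent heterozygous) = {p_homozygote:.4f}")

# Relative risk versus the general population, where the homozygote
# frequency is q^2 under Hardy-Weinberg (ignoring penetrance).
relative_risk = p_homozygote / q_c282y**2
print(f"Relative risk vs. general population = {relative_risk:.1f}")
```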


Multi-scale system reliability analysis of lifeline networks under earthquake hazards

EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 3 2010
Junho Song
Abstract Recent earthquakes have shown that damage to structural components in a lifeline network may cause prolonged disruption of lifeline services, which eventually results in significant socio-economic losses in the affected area. Despite recent advances in network reliability analysis, the complexity of the problem and various uncertainties still make it a challenging task to evaluate the post-hazard performance and connectivity of lifeline networks efficiently and accurately. In order to overcome such challenges and take advantage of the merits of multi-scale analysis, this paper develops a multi-scale system reliability analysis method by integrating a network decomposition approach with the matrix-based system reliability (MSR) method. In addition to facilitating system reliability analysis of large networks, the multi-scale approach enables optimizing the level of computational effort on subsystems; identifying the relative importance of components and subsystems at multiple scales; and providing a collaborative risk management framework. The MSR method is uniformly applied for system reliability analyses at both the lower scale (for link failure) and the higher scale (for system connectivity) to obtain the probability of general system events, various conditional probabilities, component importance measures, statistical correlation between subsystem failures and parameter sensitivities. The proposed multi-scale analysis method is demonstrated by its application to a gas distribution network in Shelby County, Tennessee. A parametric study is performed to determine the number of segments during the lower-scale MSR analysis of each pipeline based on the strength of the spatial correlation of seismic intensity. It is shown that the spatial correlation should be considered at both scales for accurate reliability evaluation. The proposed multi-scale analysis approach provides an effective framework of risk assessment and decision support for lifeline networks under earthquake hazards. Copyright © 2009 John Wiley & Sons, Ltd. [source]
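
A minimal sketch of the bookkeeping behind a matrix-based reliability calculation at the connectivity scale: enumerate mutually exclusive component states, attach a probability to each, and read off the probability of system failure and the conditional probability of each link's failure given system failure. The four-link topology, failure probabilities and independence assumption below are illustrative; the paper's method additionally handles spatial correlation and network decomposition.

```python
from itertools import product

# Illustrative four-link network connecting a source to a sink. Link failure
# probabilities and statistical independence are assumptions for illustration.
links = ["L1", "L2", "L3", "L4"]
p_fail = {"L1": 0.10, "L2": 0.05, "L3": 0.08, "L4": 0.12}

def connected(up: dict) -> bool:
    # Source reaches the sink via (L1 and L2) or (L3 and L4) -- a toy topology.
    return (up["L1"] and up["L2"]) or (up["L3"] and up["L4"])

p_system_fail = 0.0
p_joint = {l: 0.0 for l in links}   # P(link failed AND system disconnected)

# Enumerate the mutually exclusive joint component states (the "event matrix").
for state in product([True, False], repeat=len(links)):
    up = dict(zip(links, state))
    p_state = 1.0
    for l in links:
        p_state *= (1 - p_fail[l]) if up[l] else p_fail[l]
    if not connected(up):
        p_system_fail += p_state
        for l in links:
            if not up[l]:
                p_joint[l] += p_state

print(f"P(system disconnected) = {p_system_fail:.4f}")
for l in links:
    # Conditional importance measure: P(link failed | system disconnected)
    print(f"P({l} failed | system disconnected) = {p_joint[l] / p_system_fail:.3f}")
```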


Inductive Inference: An Axiomatic Approach

ECONOMETRICA, Issue 1 2003
Itzhak Gilboa
A predictor is asked to rank eventualities according to their plausibility, based on past cases. We assume that she can form a ranking given any memory that consists of finitely many past cases. Mild consistency requirements on these rankings imply that they have a numerical representation via a matrix assigning numbers to eventuality–case pairs, as follows. Given a memory, each eventuality is ranked according to the sum of the numbers in its row, over cases in memory. The number attached to an eventuality–case pair can be interpreted as the degree of support that the past case lends to the plausibility of the eventuality. Special instances of this result may be viewed as axiomatizing kernel methods for estimation of densities and for classification problems. Interpreting the same result for rankings of theories or hypotheses, rather than of specific eventualities, it is shown that one may ascribe to the predictor subjective conditional probabilities of cases given theories, such that her rankings of theories agree with rankings by the likelihood functions. [source]
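
The numerical representation described in the abstract is easy to state concretely: given a matrix of support values for eventuality–case pairs, a memory of past cases induces a ranking of eventualities by summed support. The eventualities, cases and support values below are invented for illustration.

```python
from collections import Counter

# Made-up support matrix: support[(eventuality, case)] = degree of support
# that the case lends to the plausibility of the eventuality.
support = {
    ("rain", "dark_clouds"): 2.0, ("rain", "clear_sky"): -1.0,
    ("dry",  "dark_clouds"): -0.5, ("dry",  "clear_sky"): 1.5,
}

def rank(memory: list[str], eventualities: list[str]) -> list[tuple[str, float]]:
    counts = Counter(memory)                       # memory = multiset of past cases
    scores = {
        e: sum(support[(e, c)] * n for c, n in counts.items())
        for e in eventualities
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

memory = ["dark_clouds", "dark_clouds", "clear_sky"]
print(rank(memory, ["rain", "dry"]))   # plausibility ranking given this memory
```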


Timing structural change: a conditional probabilistic approach

JOURNAL OF APPLIED ECONOMETRICS, Issue 2 2006
David N. DeJong
We propose a strategy for assessing structural stability in time-series frameworks when potential change dates are unknown. Existing stability tests are effective in detecting structural change, but procedures for identifying timing are imprecise, especially in assessing the stability of variance parameters. We present a likelihood-based procedure for assigning conditional probabilities to the occurrence of structural breaks at alternative dates. The procedure is effective in improving the precision with which inferences regarding timing can be made. We illustrate parametric and non-parametric implementations of the procedure through Monte Carlo experiments, and an assessment of the volatility reduction in the growth rate of US GDP. Copyright © 2006 John Wiley & Sons, Ltd. [source]
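
A simplified stand-in for the likelihood-based procedure (not the authors' exact implementation): conditional on a single mean shift having occurred somewhere in the sample, each candidate break date is assigned a probability proportional to the profile Gaussian likelihood of a one-time shift at that date.

```python
import math

def break_date_probabilities(y):
    """Assign each candidate break date a probability proportional to the
    profile Gaussian likelihood of a one-time mean shift at that date,
    conditional on exactly one break occurring somewhere in the sample."""
    n = len(y)
    dates, log_liks = [], []
    for k in range(2, n - 1):                      # at least two observations per regime
        seg1, seg2 = y[:k], y[k:]
        m1 = sum(seg1) / len(seg1)
        m2 = sum(seg2) / len(seg2)
        sse = sum((v - m1) ** 2 for v in seg1) + sum((v - m2) ** 2 for v in seg2)
        log_liks.append(-0.5 * n * math.log(sse / n))   # profile log-likelihood (up to a constant)
        dates.append(k)
    m = max(log_liks)
    weights = [math.exp(ll - m) for ll in log_liks]
    total = sum(weights)
    return [(k, w / total) for k, w in zip(dates, weights)]

series = [0.1, -0.2, 0.0, 0.3, -0.1, 1.2, 0.9, 1.1, 1.3, 0.8]   # made-up data, shift after t = 5
for k, p in break_date_probabilities(series):
    print(f"P(break at t={k} | one break) = {p:.3f}")
```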


What is learned from experience in a probabilistic environment?

JOURNAL OF BEHAVIORAL DECISION MAKING, Issue 3 2004
Stephen E. Edgell
Abstract Three experiments explored what is learned from experience in a probabilistic environment. The task was a simulated medical decision-making task in which each patient had one of two test results and one of two diseases. The test result was highly predictive of the disease for all participants. The base rate of the test result was varied between participants to produce different inverse conditional probabilities of the test result given the disease across conditions. Participants trained using feedback to predict a patient's disease from a test result showed the classic confusion-of-the-inverse error, substituting the forward conditional probability for the inverse conditional probability when tested on it. Additional training on the base rate of the test result did little to improve performance. Training on the joint probabilities, however, produced good performance on either conditional probability. The pattern of results demonstrated that experience with the environment is not always sufficient for good performance. The hypothesis that natural sampling leads to good performance was not supported. Further, because participants who were not trained on the joint probabilities nevertheless knew them and still committed the confusion-of-the-inverse error, the hypothesis that having joint probabilities would facilitate performance was also not supported. The pattern of results supported the conclusion that people learn all the necessary information from experience in a probabilistic environment, but that, depending on what the experience was, it may interfere with their ability to recall the appropriate sample set needed for estimating or using the inverse conditional probability. Copyright © 2004 John Wiley & Sons, Ltd. [source]
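
The "confusion of the inverse" amounts to substituting a forward conditional probability for its inverse. A short Bayes-rule calculation with purely illustrative numbers (not the experiment's parameters) shows how strongly the two can diverge once base rates differ.

```python
# Illustrative numbers only: the forward conditional P(test+ | disease) is
# high, but the inverse P(disease | test+) depends on the base rates, which
# is exactly what the confusion-of-the-inverse error ignores.
p_disease = 0.20                     # base rate of the disease
p_pos_given_disease = 0.90           # forward conditional probability
p_pos_given_no_disease = 0.40        # positive rate among those without the disease

p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_no_disease * (1 - p_disease))
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos   # Bayes' rule

print(f"P(test+ | disease)  = {p_pos_given_disease:.2f}")
print(f"P(disease | test+)  = {p_disease_given_pos:.2f}")   # noticeably lower
```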


Short-term propagation of rainfall perturbations on terrestrial ecosystems in central California

APPLIED VEGETATION SCIENCE, Issue 2 2010
Mónica García
Abstract Question: Does vegetation buffer or amplify rainfall perturbations, and is it possible to forecast rainfall using mesoscale climatic signals? Location: Central California (USA). Methods: The risk of dry or wet rainfall events was evaluated using conditional probabilities of rainfall depending on El Niño Southern Oscillation (ENSO) events. The propagation of rainfall perturbations on vegetation was calculated using cross-correlations between monthly seasonally adjusted (SA) normalized difference vegetation index (NDVI) from the Advanced Very High Resolution Radiometer (AVHRR), and SA antecedent rainfall at different time-scales. Results: In this region, El Niño events are associated with higher than normal winter precipitation (probability of 73%). Opposite but more predictable effects are found for La Niña events (89% probability of dry events). Chaparral and evergreen forests showed the longest persistence of rainfall effects (0-8 months). Grasslands and wetlands showed low persistence (0-2 months), with wetlands dominated by non-stationary patterns. Within the region, the NDVI spatial patterns associated with higher (lower) rainfall are homogeneous (heterogeneous), with the exception of evergreen forests. Conclusions: Knowledge of the time-scale of lagged effects of the non-seasonal component of rainfall on vegetation greenness, and the risk of winter rainfall anomalies lays the foundation for developing a forecasting model for vegetation greenness. Our results also suggest greater competitive advantage for perennial vegetation in response to potential rainfall increases in the region associated with climate change predictions, provided that the soil allows storing extra rainfall. [source]
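
The conditional risk estimates in the Methods are relative frequencies of rainfall categories within each ENSO phase. Below is a sketch of that tabulation, with made-up records standing in for the historical data.

```python
# Illustrative records (ENSO phase, winter rainfall category) -- invented
# data standing in for the historical record used in the study.
records = [
    ("el_nino", "wet"), ("el_nino", "wet"), ("el_nino", "dry"),
    ("la_nina", "dry"), ("la_nina", "dry"), ("la_nina", "wet"),
    ("neutral", "wet"), ("neutral", "dry"), ("la_nina", "dry"),
]

def conditional_prob(records, phase, category):
    """P(winter category | ENSO phase) as a relative frequency."""
    in_phase = [cat for ph, cat in records if ph == phase]
    return sum(cat == category for cat in in_phase) / len(in_phase)

print(f"P(dry | La Niña) = {conditional_prob(records, 'la_nina', 'dry'):.2f}")
print(f"P(wet | El Niño) = {conditional_prob(records, 'el_nino', 'wet'):.2f}")
```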


Statistical Inference For Risk Difference in an Incomplete Correlated 2 × 2 Table

BIOMETRICAL JOURNAL, Issue 1 2003
Nian-Sheng Tang
Abstract In some infectious disease studies and two-step treatment studies, a 2 × 2 table with a structural zero can arise when it is theoretically impossible for a particular cell to contain observations, or when a structural void is introduced by design. In this article, we propose a score test of hypotheses pertaining to the marginal and conditional probabilities in a 2 × 2 table with a structural zero via the risk/rate difference measure. A score test-based confidence interval is also outlined. We evaluate the performance of the score test and the existing likelihood ratio test. Our empirical results show that the two tests (with appropriate adjustments) perform similarly and satisfactorily in terms of coverage probability and expected interval width. Both tests consistently perform well from small- to moderate-sample designs. The score test, however, has the advantage that it is undefined in only one scenario, whereas the likelihood ratio test can be undefined in many scenarios. We illustrate our method with a real example from a two-step tuberculosis skin test study. [source]


Estimation of Competing Risks with General Missing Pattern in Failure Types

BIOMETRICS, Issue 4 2003
Anup Dewanji
Summary. In competing risks data, missing failure types (causes) are a very common phenomenon. In this work, we consider a general missing pattern in which, if a failure type is not observed, one observes a set of possible types containing the true type, along with the failure time. We first consider maximum likelihood estimation under a missing-at-random assumption via the expectation-maximization (EM) algorithm. We then propose a Nelson-Aalen-type estimator for situations in which certain information on the conditional probability of the true type, given a set of possible failure types, is available from the experimentalists. This is based on a least-squares-type method using the relationships between hazards for different types and hazards for different combinations of missing types. We conduct a simulation study to investigate the performance of this method, which indicates that bias may be small, even for a high proportion of missing data, provided the number of observations is sufficiently large. The estimates are somewhat sensitive to misspecification of the conditional probabilities of the true types when the missing proportion is high. We also consider an example from an animal experiment to illustrate our methodology. [source]
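
A sketch of the Nelson-Aalen-type idea for masked causes: each failure whose type is only known to lie in a set contributes to a cause-specific cumulative hazard in proportion to an assumed conditional probability of the true cause given the observed set. The data, the weights and the absence of censoring are illustrative assumptions, not the authors' estimator in full.

```python
# (failure time, set of possible causes); made-up, uncensored data.
data = [
    (1.2, {1}), (2.0, {2}), (2.5, {1, 2}), (3.1, {1}), (4.0, {1, 2}), (5.5, {2}),
]
# Assumed conditional probability of each true cause given the masked set {1, 2}.
mask_weights = {frozenset({1, 2}): {1: 0.6, 2: 0.4}}

def cause_weight(possible, cause):
    if len(possible) == 1:
        return 1.0 if cause in possible else 0.0
    return mask_weights[frozenset(possible)].get(cause, 0.0)

def cumulative_hazard(data, cause, t):
    """Nelson-Aalen-type estimate of the cause-specific cumulative hazard."""
    H = 0.0
    for t_i, possible in sorted(data, key=lambda rec: rec[0]):
        if t_i > t:
            break
        at_risk = sum(1 for s, _ in data if s >= t_i)
        H += cause_weight(possible, cause) / at_risk
    return H

print(f"H_1(4.0) = {cumulative_hazard(data, cause=1, t=4.0):.3f}")
print(f"H_2(4.0) = {cumulative_hazard(data, cause=2, t=4.0):.3f}")
```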


Sequence learning in infancy: the independent contributions of conditional probability and pair frequency information

DEVELOPMENTAL SCIENCE, Issue 6 2009
Stuart Marcovitch
The ability to perceive sequences is fundamental to cognition. Previous studies have shown that infants can learn visual sequences as early as 2 months of age and it has been suggested that this ability is mediated by sensitivity to conditional probability information. Typically, conditional probability information has covaried with frequency information in these studies, raising the possibility that each type of information may have contributed independently to sequence learning. The current study explicitly investigated the independent contribution of each type of information. We habituated 2.5-, 4.5-, and 8.5-month-old infants to a sequence of looming visual shapes whose ordering was defined independently by specific conditional probability relations among pair elements and by the frequency of occurrence of such pairs. During test trials, we tested infants' sensitivity to each type of information and found that both types of information independently influenced sequence learning by 4.5 months of age. [source]
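
The two statistics that the study dissociates can both be computed directly from a sequence: pair frequency counts how often an ordered pair occurs, while the conditional probability normalizes that count by how often the first element occurs. A toy sequence (not the habituation streams used with the infants) makes the distinction concrete.

```python
from collections import Counter

seq = list("ABCABDABCABD")          # each letter stands for a looming shape

pair_counts = Counter(zip(seq, seq[1:]))
elem_counts = Counter(seq[:-1])     # occurrences that can start a pair

def pair_frequency(a, b):
    return pair_counts[(a, b)]

def conditional_probability(a, b):
    """P(next = b | current = a) = count(ab) / count(a)."""
    return pair_counts[(a, b)] / elem_counts[a]

print("freq(A->B) =", pair_frequency("A", "B"))
print("P(B | A)   =", conditional_probability("A", "B"))
print("freq(B->C) =", pair_frequency("B", "C"))
print("P(C | B)   =", conditional_probability("B", "C"))
```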


Optimizing image matches via a verification model

INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 11 2010
Jimmy Addison Lee
In the literature, we have seen a boom in wide-baseline matching approaches proposed for locating correspondences between images. However, wrong correspondences or the so-called outliers are still rather inevitable, especially in urban environments with the presence of repetitive structures, and/or a large dissimilarity in viewpoints. In this paper, we propose a verification model to optimize the image matching results by significantly reducing the number of outliers. Several geometric and appearance-based measurements are exploited, and conditional probability is used to compute the probability of each true correspondence. The model is validated by extensive experiments on images from the ZuBud database, which are taken in different weather conditions, seasons, and with different cameras. It is also demonstrated on a real-time application of an image-based navigation system. © 2010 Wiley Periodicals, Inc. [source]


Efficient computation for the noisy MAX

INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 2 2003
Francisco J. Díez
Díez's algorithm for the noisy MAX is very efficient for polytrees, but when the network has loops, it has to be combined with local conditioning, a suboptimal propagation algorithm. Other algorithms, based on several factorizations of the conditional probability of the noisy MAX, are not as efficient for polytrees but can be combined with general propagation algorithms such as clustering or variable elimination, which are more efficient for networks with loops. In this article we propose a new factorization of the noisy MAX that amounts to Díez's algorithm in the case of polytrees and at the same time is more efficient than previous factorizations when combined with either variable elimination or clustering. © 2003 Wiley Periodicals, Inc. [source]
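
For context, the sketch below writes out the standard noisy MAX model whose factorizations the article studies (not the new factorization itself): each parent independently produces a graded effect, the child is the maximum of those effects, so P(Y ≤ y | x) factorizes over the parents and P(Y = y | x) follows by differencing. The parameters are illustrative.

```python
from itertools import product

# c[i][x] = distribution of the effect Z_i over states 0 < 1 < 2, given that
# parent i is in state x. Values are made up for illustration.
c = [
    {0: [1.0, 0.0, 0.0], 1: [0.3, 0.5, 0.2], 2: [0.1, 0.3, 0.6]},   # parent 0
    {0: [1.0, 0.0, 0.0], 1: [0.4, 0.4, 0.2]},                        # parent 1
]
n_y = 3   # number of states of Y

def p_y_given_x(parent_states):
    """Full conditional distribution P(Y | X = parent_states) under noisy MAX."""
    def cum(y):
        # P(Y <= y | x) = prod_i P(Z_i <= y | x_i)
        out = 1.0
        for i, x in enumerate(parent_states):
            out *= sum(c[i][x][:y + 1])
        return out
    return [cum(0)] + [cum(y) - cum(y - 1) for y in range(1, n_y)]

for x in product([0, 1], repeat=2):        # a few parent configurations
    print(x, [round(p, 3) for p in p_y_given_x(x)])
```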


Uncertain inheritance and recognition as probabilistic default reasoning

INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 6 2001
T. H. Cao
This paper proposes probabilistic default reasoning as a suitable approach to uncertain inheritance and recognition in fuzzy and uncertain object-oriented models. The uncertainty is due to the uncertain membership of an object in a class and/or the uncertain applicability of a property, i.e., an attribute or a method, to a class. First, we introduce a logic-based uncertain object-oriented model in which uncertain membership and applicability are measured by support pairs, which are lower and upper bounds on probability. The probability of a property being applicable to a class is interpreted as the conditional probability of the property being applicable to an object given that the object is a member of the class. Each property that is applicable with uncertainty then corresponds to a default probabilistic logic rule, which is defeasible. In order to reduce the computational complexity of general probabilistic default reasoning, we propose to use Jeffrey's rule for a weaker notion of consistency and for local inference, and then apply them to uncertain inheritance of attributes and methods. Using the same approach but with the inverse of Jeffrey's rule, uncertain recognition as probabilistic default reasoning is also presented. © 2001 John Wiley & Sons, Inc. [source]
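
In its simplest point-probability form, Jeffrey's rule mixes class-conditional applicability over uncertain membership; evaluating the rule at interval endpoints gives crude bounds. The sketch below is a simplified stand-in for the paper's support-pair machinery, with invented numbers.

```python
# P(object belongs to class); the classes form a partition for this example.
membership = {"penguin": 0.9, "generic_bird": 0.1}

# P(property "flies" applicable | object is a member of the class)
flies_given_class = {"penguin": 0.05, "generic_bird": 0.85}

# Jeffrey's rule: P(flies) = sum_c P(flies | c) * P(c)
p_flies = sum(flies_given_class[c] * membership[c] for c in membership)
print(f"P(flies) = {p_flies:.3f}")

# With interval-valued applicability, evaluating the rule at the endpoints
# yields bounds -- a crude stand-in for the paper's support-pair propagation.
flies_bounds = {"penguin": (0.0, 0.1), "generic_bird": (0.7, 0.95)}
lower = sum(flies_bounds[c][0] * membership[c] for c in membership)
upper = sum(flies_bounds[c][1] * membership[c] for c in membership)
print(f"P(flies) in [{lower:.3f}, {upper:.3f}]")
```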


Estimating the Accuracy of Jury Verdicts

JOURNAL OF EMPIRICAL LEGAL STUDIES, Issue 2 2007
Bruce D. Spencer
Average accuracy of jury verdicts for a set of cases can be studied empirically and systematically even when the correct verdict cannot be known. The key is to obtain a second rating of the verdict, for example, the judge's, as in the recent study of criminal cases in the United States by the National Center for State Courts (NCSC). That study, like the famous Kalven-Zeisel study, showed only modest judge-jury agreement. Simple estimates of jury accuracy can be developed from the judge-jury agreement rate; the judge's verdict is not taken as the gold standard. Although the estimates of accuracy are subject to error, under plausible conditions they tend to overestimate the average accuracy of jury verdicts. The jury verdict was estimated to be accurate in no more than 87 percent of the NCSC cases (which, however, should not be regarded as a representative sample with respect to jury accuracy). More refined estimates, including false conviction and false acquittal rates, are developed with models using stronger assumptions. For example, the conditional probability that the jury incorrectly convicts given that the defendant truly was not guilty (a "Type I error") was estimated at 0.25, with an estimated standard error (s.e.) of 0.07, the conditional probability that a jury incorrectly acquits given that the defendant truly was guilty ("Type II error") was estimated at 0.14 (s.e. 0.03), and the difference was estimated at 0.12 (s.e. 0.08). The estimated number of defendants in the NCSC cases who truly are not guilty but are convicted does seem to be smaller than the number who truly are guilty but are acquitted. The conditional probability of a wrongful conviction, given that the defendant was convicted, is estimated at 0.10 (s.e. 0.03). [source]
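
The kind of conditional-probability arithmetic reported above can be reproduced with Bayes' rule once a prior proportion of truly not-guilty defendants is assumed. The error rates below are taken from the abstract, but the prior is an illustrative input, not a figure from the study.

```python
p_convict_given_not_guilty = 0.25   # estimated Type I error (from the abstract)
p_acquit_given_guilty = 0.14        # estimated Type II error (from the abstract)
p_convict_given_guilty = 1 - p_acquit_given_guilty

for p_not_guilty in (0.1, 0.2, 0.3):          # assumed prior, for illustration only
    p_convict = (p_convict_given_not_guilty * p_not_guilty
                 + p_convict_given_guilty * (1 - p_not_guilty))
    # Bayes' rule: P(not guilty | convicted)
    p_wrongful = p_convict_given_not_guilty * p_not_guilty / p_convict
    print(f"prior P(not guilty) = {p_not_guilty:.1f}  ->  "
          f"P(not guilty | convicted) = {p_wrongful:.3f}")
```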


A fractal forecasting model for financial time series

JOURNAL OF FORECASTING, Issue 8 2004
Gordon R. Richards
Abstract Financial market time series exhibit high degrees of non-linear variability, and frequently have fractal properties. When the fractal dimension of a time series is non-integer, this is associated with two features: (1) inhomogeneity,extreme fluctuations at irregular intervals, and (2) scaling symmetries,proportionality relationships between fluctuations over different separation distances. In multivariate systems such as financial markets, fractality is stochastic rather than deterministic, and generally originates as a result of multiplicative interactions. Volatility diffusion models with multiple stochastic factors can generate fractal structures. In some cases, such as exchange rates, the underlying structural equation also gives rise to fractality. Fractal principles can be used to develop forecasting algorithms. The forecasting method that yields the best results here is the state transition-fitted residual scale ratio (ST-FRSR) model. A state transition model is used to predict the conditional probability of extreme events. Ratios of rates of change at proximate separation distances are used to parameterize the scaling symmetries. Forecasting experiments are run using intraday exchange rate futures contracts measured at 15-minute intervals. The overall forecast error is reduced on average by up to 7% and in one instance by nearly a quarter. However, the forecast error during the outlying events is reduced by 39% to 57%. The ST-FRSR reduces the predictive error primarily by capturing extreme fluctuations more accurately. Copyright © 2004 John Wiley & Sons, Ltd. [source]
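
As a toy stand-in for the state-transition component (not the ST-FRSR model itself), one can classify each period as extreme or normal and estimate the conditional probability of an extreme move in the next period given the current state. The returns below are simulated, not futures data.

```python
import random

# Simulate returns with a persistent volatility regime so that extremes
# cluster -- purely for illustration.
random.seed(0)
vol, returns = 0.5, []
for _ in range(5000):
    vol = 0.9 * vol + 0.1 * random.choice([0.3, 2.5])   # slowly varying volatility
    returns.append(random.gauss(0, vol))

threshold = sorted(abs(r) for r in returns)[int(0.95 * len(returns))]  # 95th percentile
states = ["extreme" if abs(r) > threshold else "normal" for r in returns]

counts = {}
for prev, cur in zip(states, states[1:]):
    counts[(prev, cur)] = counts.get((prev, cur), 0) + 1

for s in ("normal", "extreme"):
    total = counts.get((s, "extreme"), 0) + counts.get((s, "normal"), 0)
    p = counts.get((s, "extreme"), 0) / total
    print(f"P(extreme next | currently {s}) = {p:.3f}")
```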


Shannon Meets Shortz: A Probabilistic Model of Crossword Puzzle Difficulty

JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, Issue 6 2008
Miles Efron
This article is concerned with the difficulty of crossword puzzles. A model is proposed that quantifies the difficulty of a Puzzle P with respect to its clues. Given a clue–answer pair (c, a), we model the difficulty of guessing a based on c using the conditional probability P(a|c); easier mappings should enjoy a higher conditional probability. The model is tested by two experiments, each of which involves estimating the difficulty of puzzles taken from The New York Times. Additionally, we discuss how the notion of information implicit in our model relates to more easily quantifiable types of information that figure into crossword puzzles. [source]
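
A minimal sketch of the modelling idea, with an invented corpus and an assumed scoring rule: estimate P(a|c) by relative frequency over clue–answer pairs and score a puzzle's difficulty as the average negative log-probability of its mappings (higher means harder).

```python
import math
from collections import Counter

# Tiny made-up corpus of (clue, answer) pairs standing in for real puzzle data.
corpus = [
    ("large feline", "LION"), ("large feline", "TIGER"), ("large feline", "LION"),
    ("greek letter", "ETA"), ("greek letter", "IOTA"), ("greek letter", "ETA"),
]

pair_counts = Counter(corpus)
clue_counts = Counter(c for c, _ in corpus)

def p_answer_given_clue(answer, clue):
    """P(a | c) as a relative frequency over the corpus."""
    return pair_counts[(clue, answer)] / clue_counts[clue]

def puzzle_difficulty(puzzle):
    """Mean -log2 P(a | c) over the puzzle's clue-answer pairs (higher = harder)."""
    return sum(-math.log2(p_answer_given_clue(a, c)) for c, a in puzzle) / len(puzzle)

easy = [("large feline", "LION"), ("greek letter", "ETA")]
hard = [("large feline", "TIGER"), ("greek letter", "IOTA")]
print(f"difficulty(easy) = {puzzle_difficulty(easy):.2f} bits")
print(f"difficulty(hard) = {puzzle_difficulty(hard):.2f} bits")
```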


Aid and Growth Accelerations: An Alternative Approach to Assessing the Effectiveness of Aid

KYKLOS INTERNATIONAL REVIEW OF SOCIAL SCIENCES, Issue 3 2007
Jonas Dovern
SUMMARY The paper applies an alternative approach to assess whether foreign aid promotes economic growth in developing countries, based on the concept of temporary growth accelerations suggested by Hausmann, Pritchett and Rodrik. In addition to aggregate aid, we differentiate between major aid categories, including grants, loans and so-called short-impact aid. It turns out that aid flows have a small but significantly positive effect on the conditional probability of growth accelerations. This result holds across different estimation methods. However, the significance of results crucially depends on the criteria applied to identify growth accelerations. [source]
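
A crude stand-in for the estimates described above: compare the relative frequency of growth accelerations between observations with above- and below-median aid. The data are invented, and the paper's analysis uses regression methods with controls rather than this raw conditional-frequency comparison.

```python
observations = [
    # (aid as share of GDP, growth acceleration started: 1/0) -- made-up data
    (0.02, 0), (0.08, 1), (0.01, 0), (0.12, 1), (0.05, 0),
    (0.09, 0), (0.03, 0), (0.11, 1), (0.04, 1), (0.07, 0),
]

median_aid = sorted(a for a, _ in observations)[len(observations) // 2]
high = [y for a, y in observations if a >= median_aid]
low = [y for a, y in observations if a < median_aid]

p_high = sum(high) / len(high)   # P(acceleration | high aid)
p_low = sum(low) / len(low)      # P(acceleration | low aid)
print(f"P(acceleration | high aid) = {p_high:.2f}")
print(f"P(acceleration | low aid)  = {p_low:.2f}")
```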


Multivariate Composition Distribution in Free-Radical Multicomponent Polymerization, 1

MACROMOLECULAR THEORY AND SIMULATIONS, Issue 7 2003
Hidetaka Tobita
Abstract Statistical multicomponent polymerization is a typical example of a Markovian process to which the generating function approach can be applied. Up to now, generating functions have been used mainly to obtain analytical solutions; however, recent advances in computer software capable of handling symbolic calculations throw new light on this old mathematical technique. After formulating the equations representing the instantaneous composition distribution of polymers for a given chain length r, illustrative numerical calculations are conducted using a symbolic calculator. For a multicomponent polymerization consisting of more than two components, the distribution of the second component depends on the composition of the first component (F1), and is represented by the conditional probability of the second component's composition given r and F1. This conditional distribution is found to be well approximated by a Gaussian distribution whose variance follows a relationship in two constants, A and B, of the same form as that for the first component distribution. With knowledge of the chain length distribution, it is now possible to conduct a full analysis of the multivariate distribution of chain length and compositions for multicomponent free-radical polymerization. Figure: bivariate distribution of compositions F1 and F2 for chain length r = 100 in a three-component system. [source]
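
A Monte Carlo sketch of the Markovian picture in the abstract, with an invented transition matrix rather than real reactivity ratios: chains of fixed length r are grown as a first-order Markov chain over three monomer types, and the conditional distribution of F2 given F1 is read off by binning.

```python
import random

random.seed(1)
r = 100
# Made-up monomer-addition transition probabilities P[current][next].
P = {0: [0.5, 0.3, 0.2], 1: [0.4, 0.4, 0.2], 2: [0.3, 0.3, 0.4]}

def simulate_chain():
    state = random.choice([0, 1, 2])
    chain = [state]
    for _ in range(r - 1):
        state = random.choices([0, 1, 2], weights=P[state])[0]
        chain.append(state)
    return chain

compositions = []
for _ in range(10000):
    chain = simulate_chain()
    compositions.append((chain.count(0) / r, chain.count(1) / r))   # (F1, F2)

# Conditional distribution of F2 for chains whose F1 falls in a narrow bin.
f2_given_f1 = [f2 for f1, f2 in compositions if abs(f1 - 0.40) <= 0.01]
mean = sum(f2_given_f1) / len(f2_given_f1)
var = sum((x - mean) ** 2 for x in f2_given_f1) / len(f2_given_f1)
print(f"chains with F1 ~ 0.40: {len(f2_given_f1)}")
print(f"E[F2 | F1 ~ 0.40] = {mean:.3f}, Var[F2 | F1 ~ 0.40] = {var:.5f}")
```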


Flashbulb memory for 11 September 2001

APPLIED COGNITIVE PSYCHOLOGY, Issue 5 2009
Andrew R. A. Conway
The recollection of particularly salient, surprising or consequential events is often called a 'flashbulb memory'. We tested people's autobiographical memory for details of 11 September 2001 by gathering a large national random sample (N = 678) of people's reports immediately following the attacks, and then by contacting them twice more, in September 2002 and August 2003. Three novel findings emerged. First, memory consistency did not vary as a function of demographic variables such as gender, geographical location, age or education. Second, memory consistency did not vary as a function of whether memory was tested before or after the 1-year anniversary of the event, suggesting that media coverage associated with the anniversary did not impact memory. Third, the conditional probability of consistent recollection in 2003 given consistent recollection in 2002 was p = .73. In contrast, the conditional probability of consistent recollection in 2003 given inconsistent recollection in 2002 was p = .18. Finally, and in agreement with several prior studies, confidence in memory far exceeded consistency in the long term. Also, those respondents who revealed evidence for consistent flashbulb memory experienced more anxiety in response to the event, and engaged in more covert rehearsal, than respondents who did not. Copyright © 2008 John Wiley & Sons, Ltd. [source]
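
The conditional probabilities reported above are relative frequencies over respondents; with invented per-respondent consistency flags, the tabulation looks like this.

```python
# Illustrative per-respondent records (consistent in 2002, consistent in 2003);
# the values are made up, not the study's data.
records = [
    (True, True), (True, True), (True, False), (False, False),
    (False, True), (True, True), (False, False), (True, True),
]

def p_consistent_2003(records, given_2002):
    """P(consistent in 2003 | 2002 consistency status)."""
    subset = [c03 for c02, c03 in records if c02 == given_2002]
    return sum(subset) / len(subset)

print(f"P(consistent 2003 | consistent 2002)   = {p_consistent_2003(records, True):.2f}")
print(f"P(consistent 2003 | inconsistent 2002) = {p_consistent_2003(records, False):.2f}")
```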

