Hypothesis Testing Procedures

Selected Abstracts

Exact Confidence Bounds Following Adaptive Group Sequential Tests

BIOMETRICS, Issue 2 2009
Werner Brannath
Summary: We provide a method for obtaining confidence intervals, point estimates, and p-values for the primary effect size parameter at the end of a two-arm group sequential clinical trial in which adaptive changes have been implemented along the way. The method is based on applying the adaptive hypothesis testing procedure of Müller and Schäfer (2001, Biometrics 57, 886–891) to a sequence of dual tests derived from the stage-wise adjusted confidence interval of Tsiatis, Rosner, and Mehta (1984, Biometrics 40, 797–803). In the nonadaptive setting this confidence interval is known to provide exact coverage. In the adaptive setting exact coverage is guaranteed provided the adaptation takes place at the penultimate stage. In general, however, all that can be claimed theoretically is that the coverage is guaranteed to be conservative. Nevertheless, extensive simulation experiments, supported by an empirical characterization of the conditional error function, demonstrate convincingly that for all practical purposes the coverage is exact and the point estimate is median unbiased. No procedure has previously been available for producing confidence intervals and point estimates with these desirable properties in an adaptive group sequential setting. The methodology is illustrated by an application to a clinical trial of deep brain stimulation for Parkinson's disease.
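The conditional error idea underlying this construction can be illustrated with a minimal sketch. The two-stage design with equal information weights, the final statistic (Z1 + Z2)/sqrt(2), and the critical value c = 1.96 are illustrative assumptions, not the paper's dual-test machinery:

```python
import numpy as np
from scipy.stats import norm

def conditional_error(z1, c=1.96):
    """Conditional probability, under H0, that the pre-planned two-stage
    test rejects given the first-stage statistic z1, when the final
    statistic is (Z1 + Z2)/sqrt(2) and rejection means exceeding c."""
    return 1.0 - norm.cdf(c * np.sqrt(2.0) - z1)

# Mueller-Schaefer principle: after any data-driven redesign of stage 2,
# testing the new second-stage data at level conditional_error(z1)
# preserves the overall type I error of the pre-planned design.
z1 = 1.2
alpha2 = conditional_error(z1)
```

A more promising first stage (larger z1) leaves a larger conditional error budget for the redesigned second stage.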

Resampling-based multiple hypothesis testing procedures for genetic case-control association studies

Bingshu E. Chen
Abstract: In case-control studies of unrelated subjects, gene-based hypothesis tests consider whether any tested feature in a candidate gene (single nucleotide polymorphisms (SNPs), haplotypes, or both) is associated with disease. Standard statistical tests are available that control the false-positive rate at the nominal level over all polymorphisms considered. However, more powerful tests can be constructed that use permutation resampling to account for correlations between polymorphisms and test statistics. A key question is whether the gain in power is large enough to justify the computational burden. We compared the computationally simple Simes Global Test to the min-P test, which considers the permutation distribution of the minimum p-value from marginal tests of each SNP. In simulation studies incorporating empirical haplotype structures in 15 genes, the min-P test controlled the type I error and was modestly more powerful than the Simes test, by 2.1 percentage points on average. When disease susceptibility was conferred by a haplotype, the min-P test sometimes, but not always, under-performed haplotype analysis. A resampling-based omnibus test combining the min-P and haplotype frequency tests controlled the type I error and closely tracked the more powerful of the two component tests. This test achieved consistent gains in power (5.7 percentage points on average) compared to a simple Bonferroni test of Simes and haplotype analysis. Using data from the Shanghai Biliary Tract Cancer Study, the advantages of the newly proposed omnibus test were apparent in a population-based study of bile duct cancer and polymorphisms in the prostaglandin-endoperoxide synthase 2 (PTGS2) gene. Genet. Epidemiol. 2006. Published 2006 Wiley-Liss, Inc.
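A sketch of the min-P idea: permuting case/control labels yields the null distribution of the minimum marginal p-value while preserving the correlation structure among SNPs. The simulated genotypes, the correlation-based trend test, and all sample sizes below are hypothetical, not the paper's data or test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical data: 200 cases, 200 controls, 5 correlated SNPs coded 0/1/2.
n_sub, n_snp = 400, 5
status = np.array([1] * 200 + [0] * 200)
base = rng.binomial(2, 0.3, size=n_sub)
snps = np.column_stack([np.clip(base + rng.integers(-1, 2, size=n_sub), 0, 2)
                        for _ in range(n_snp)])

def min_p(status, snps):
    """Smallest marginal p-value over SNPs (correlation-based trend test)."""
    return min(stats.pearsonr(status, snps[:, j])[1]
               for j in range(snps.shape[1]))

observed = min_p(status, snps)

# Permuting labels breaks any SNP-disease association but keeps the
# between-SNP correlations, so the multiplicity adjustment is exact.
B = 500
perm = np.array([min_p(rng.permutation(status), snps) for _ in range(B)])
p_minp = (1 + np.sum(perm <= observed)) / (1 + B)
```

This is what gives min-P its edge over Simes or Bonferroni: correlated SNPs contribute less than one "effective test" each, and the permutation distribution accounts for that automatically.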

Functional Coefficient Autoregressive Models: Estimation and Tests of Hypotheses

Rong Chen
In this paper, we study nonparametric estimation and hypothesis testing procedures for the functional coefficient AR (FAR) models of the form Xt = f1(Xt−d)Xt−1 + … + fp(Xt−d)Xt−p + εt, first proposed by Chen and Tsay (1993). As a direct generalization of the linear AR model, the FAR model is a rich class of models that includes many useful parametric nonlinear time series models, such as the threshold AR models of Tong (1983) and the exponential AR models of Haggan and Ozaki (1981). We propose a local linear estimation procedure for estimating the coefficient functions and study its asymptotic properties. In addition, we propose two testing procedures. The first tests whether all the coefficient functions are constant, i.e. whether the process is linear. The second tests whether all the coefficient functions are continuous, i.e. whether any threshold type of nonlinearity is present in the process. The results of some simulation studies as well as a real example are presented.
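A minimal sketch of the setup: simulate an exponential-AR special case of the FAR model and recover the coefficient function at one point with a kernel-weighted local linear fit. The coefficient function f1, the bandwidth, and the sample size are illustrative choices, not the paper's procedure:

```python
import numpy as np

rng = np.random.default_rng(1)

# FAR(1) with d = 1: X_t = f1(X_{t-1}) X_{t-1} + eps_t, using an
# exponential-AR coefficient function chosen for illustration.
def f1(u):
    return 0.3 + 0.5 * np.exp(-u ** 2)

T = 2000
x = np.zeros(T)
for t in range(1, T):
    x[t] = f1(x[t - 1]) * x[t - 1] + 0.5 * rng.standard_normal()

def local_linear_f1(x, x0, h=0.3):
    """Local linear estimate of f1 at x0: fit X_t ~ (a + b*(u - x0)) * u
    with u = X_{t-1}, weighted by a Gaussian kernel centred at x0."""
    u, y = x[:-1], x[1:]
    w = np.sqrt(np.exp(-0.5 * ((u - x0) / h) ** 2))
    design = np.column_stack([u, (u - x0) * u])
    beta, *_ = np.linalg.lstsq(w[:, None] * design, w * y, rcond=None)
    return beta[0]          # the local intercept a approximates f1(x0)

est = local_linear_f1(x, x0=0.5)
```

The linearity test the abstract describes amounts to asking whether the fitted curve u -> f1_hat(u) is flat; the continuity test asks whether it has a jump of the threshold-AR kind.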

A hybrid method for simulation factor screening

Hua Shen
Abstract: Factor screening is performed to eliminate unimportant factors so that the remaining important factors can be studied more thoroughly in later experiments. Controlled sequential bifurcation (CSB) and controlled sequential factorial design (CSFD) are two new screening methods for discrete-event simulations. Both methods use hypothesis testing procedures to control the type I error and power of the screening results. The scenarios for which each method is most efficient are complementary. This study proposes a two-stage hybrid approach that combines CSFD and an improved CSB called CSB-X. In Phase 1, a prescreening procedure estimates each effect and determines whether CSB-X or CSFD will be used for further screening. In Phase 2, CSB-X and CSFD are performed separately based on the assignment of Phase 1. The new method usually provides the same error control as CSB-X and CSFD, while its efficiency is usually much better than that of either component method. © 2009 Wiley Periodicals, Inc. Naval Research Logistics, 2010
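The bifurcation idea behind CSB can be sketched deterministically: test a group's aggregate effect and split only the groups that pass. The controlled hypothesis-testing layer that CSB uses to bound type I error and power is replaced here by a simple threshold on known, hypothetical effects:

```python
def bifurcate(effects, lo, hi, threshold, important):
    """Recursively screen factors lo..hi-1: discard a group whose
    aggregate effect is at most threshold, otherwise split it in half
    and recurse. Assumes non-negative effects (known signs), as CSB does."""
    if sum(effects[lo:hi]) <= threshold:
        return                       # whole group eliminated in one test
    if hi - lo == 1:
        important.append(lo)         # isolated an important factor
        return
    mid = (lo + hi) // 2
    bifurcate(effects, lo, mid, threshold, important)
    bifurcate(effects, mid, hi, threshold, important)

# Hypothetical main effects for 8 factors; only factors 2 and 4 matter.
effects = [0, 0, 3, 0, 5, 0, 0, 0]
found = []
bifurcate(effects, 0, len(effects), 1, found)
```

When important factors are sparse, whole groups are discarded with a single group test, which is the source of CSB's efficiency; CSFD, by contrast, wins when many factors matter.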

An Adaptive Single-step FDR Procedure with Applications to DNA Microarray Analysis

Vishwanath Iyer
Abstract: The use of multiple hypothesis testing procedures has recently been receiving a great deal of attention from statisticians in DNA microarray analysis. The traditional FWER-controlling procedures are not very useful in this situation, since the experiments are exploratory by nature and researchers are more interested in controlling the rate of false positives than in controlling the probability of making a single erroneous decision. This has led to increased use of FDR (false discovery rate) controlling procedures. Genovese and Wasserman proposed a single-step FDR procedure that is an asymptotic approximation to the original Benjamini and Hochberg stepwise procedure. In this paper, we modify the Genovese-Wasserman procedure to force the FDR control closer to the level alpha in the independence setting. Assuming that the data come from a mixture of two normals, we also propose to make this procedure adaptive by first estimating the parameters using the EM algorithm and then plugging these estimated parameters into the above modification of the Genovese-Wasserman procedure. We compare this procedure with the original Benjamini-Hochberg and the SAM thresholding procedures. The FDR control and other properties of this adaptive procedure are verified numerically. (© 2007 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
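For contrast with the single-step variants discussed above, the original Benjamini-Hochberg step-up procedure that the paper builds on can be sketched as follows (a textbook implementation, not the paper's adaptive modification):

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """BH step-up procedure: reject the hypotheses with the k smallest
    p-values, where k = max{ i : p_(i) <= i * alpha / m }."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    below = p[order] <= alpha * np.arange(1, m + 1) / m
    k = int(np.max(np.nonzero(below)[0])) + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True         # reject the k smallest p-values
    return reject

rejected = benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.042,
                               0.06, 0.3, 0.9])
```

Under independence this controls the FDR at level alpha; the adaptive procedures the abstract describes aim to tighten the gap between the attained FDR and alpha.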