Pre-conference courses


Sunday, August 23rd, 2015

9.00h - 17.00h

Conference venue

Price, early registration: € 300.00, students € 150.00

Price, late registration: € 350.00, students € 200.00

 

Group sequential and adaptive designs for clinical trials

 

Christopher Jennison - University of Bath, United Kingdom

 

This course will start by introducing group sequential designs and their applications, including error-spending tests and inference following a sequential trial. We shall discuss recent controversies and developments, including: controlling bias when reporting results of a trial with early stopping; reporting results on co-primary endpoints; and accommodating delayed observations, or “pipeline” data, at interim analyses. We shall consider multiple testing and present a general approach for sequential trials that test multiple hypotheses.
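
To give a flavour of the error-spending approach, the sketch below (an illustration, not part of the course materials) computes the boundaries of a three-look design with the CRAN package gsDesign; the design parameters are assumptions chosen for the example.

    # Illustrative only: three-look error-spending design via 'gsDesign'
    # (assumed installed); all parameters are made up for this sketch.
    library(gsDesign)
    design <- gsDesign(k = 3,          # three analyses: two interim, one final
                       test.type = 1,  # one-sided test, early stopping for efficacy
                       alpha = 0.025,  # one-sided type I error rate
                       beta = 0.1,     # 90% power
                       sfu = sfLDOF)   # Lan-DeMets O'Brien-Fleming-type spending function
    design$upper$bound                 # efficacy boundaries (z-scale) at each look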

Adaptive designs allow modification of a trial in mid-course while still protecting the type I error. Possible modifications include: enlarging the sample size to increase power; changing the focus to a subset of the initial study population (enrichment); combining treatment selection and testing in a Phase II/III seamless design; or selecting one of two treatment variants in an adaptive Phase III trial. Adaptations may follow rigid rules, pre-specified in the protocol, or, in a more flexible application, unplanned changes may take place at unplanned analyses. We shall describe adaptive procedures in detail and discuss their benefits and limitations.
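
A standard device behind such modifications is the combination of stage-wise p-values with pre-specified weights. The base-R sketch below, using hypothetical stage-wise p-values, shows the inverse-normal combination test that preserves the type I error after a mid-course design change.

    # Hedged illustration in base R: inverse-normal combination test.
    # The stage-wise p-values are hypothetical; the weights must be
    # fixed before the interim analysis.
    p1 <- 0.04; p2 <- 0.03              # one-sided p-values from stages 1 and 2
    w1 <- sqrt(0.5); w2 <- sqrt(0.5)    # pre-specified weights, w1^2 + w2^2 = 1
    z <- w1 * qnorm(1 - p1) + w2 * qnorm(1 - p2)
    pnorm(z, lower.tail = FALSE)        # combined one-sided p-value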

The course is aimed at Masters- or PhD-level statisticians who have some familiarity with clinical trials but not necessarily with sequential monitoring or adaptive designs.

 

Applied multiple imputation in R

 

Stef van Buuren - University of Utrecht, The Netherlands
Gerko Vink - University of Utrecht, The Netherlands

 

Most researchers have encountered the problem of missing data: it seriously complicates the statistical analysis of data, and simply ignoring it is not a good strategy. A general and statistically valid technique for analyzing incomplete data is multiple imputation, which is rapidly becoming the standard in health science research. The aim of this course is to enhance participants’ knowledge of imputation methodology, and to provide a flexible solution to their incomplete data problems.

Creating good multiple imputations in real data requires a flexible methodology that is able to mimic distinctive features in the data. The course will explain the principles of fully conditional specification (FCS), the cutting edge of imputation technology for multivariate missing data.

In the course we outline a step-by-step approach toward creating high quality imputations, and provide guidelines on how to report the results. Specific topics include: imputation of mixed continuous-categorical variables, influx/outflux missing data patterns, assessment of convergence, compatibility, predictor selection, derived variables, multilevel data, diagnostics, increasing robustness, imputation under MNAR and reporting guidelines.
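
To give an impression of this workflow, the sketch below (an illustration, not course material) runs FCS imputation with the mice package on its built-in nhanes data, fits the same model to each completed data set, and pools the results with Rubin's rules.

    # Minimal sketch with the 'mice' package (assumed installed):
    # impute, analyse each completed data set, pool the results.
    library(mice)
    imp <- mice(nhanes, m = 5, seed = 123)  # FCS imputation, 5 completed data sets
    fit <- with(imp, lm(chl ~ bmi + age))   # analysis repeated on each data set
    summary(pool(fit))                      # pooled estimates via Rubin's rules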

The lectures will follow the book ‘Flexible Imputation of Missing Data’ by Stef van Buuren (Chapman & Hall, 2012). Prerequisites include familiarity with basic statistical concepts and techniques.

 

Stratification and randomisation in clinical trials

 

Armin Koch - MH Hannover, Germany
Dieter Hilgers - RWTH Aachen University, Germany

 

Although randomization is essential for the validity of conclusions about the efficacy and safety of drugs and treatments, and although numerous papers have been written about simple unrestricted, stratified and dynamic randomization, many issues in this field are still decided by tradition. Similarly, stratified analyses and the evaluation of outcomes within strata raise issues for the interpretation of clinical trial results.

In the first part of the course we will discuss several randomization procedures, their properties, and principles for selecting a particular procedure. In the second part we provide a review of the literature, present simulation results, and put these into context with the interpretation of clinical trials and subgroup findings in the light of regulatory guidance, as provided in the guideline on the adjustment for baseline covariates and the new guideline on subgroup analyses in clinical trials.
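
As a simple illustration of one procedure from the first part (our sketch, not course material), permuted-block randomization within a single stratum can be written in a few lines of base R:

    # Hedged base-R sketch: permuted-block randomization for one stratum.
    # Each block holds block_size/2 allocations to each of A and B in random
    # order, so the allocation is balanced after every completed block.
    permuted_blocks <- function(n_blocks, block_size = 4) {
      unlist(lapply(seq_len(n_blocks),
                    function(i) sample(rep(c("A", "B"), block_size / 2))))
    }
    set.seed(2015)
    permuted_blocks(3)   # allocation sequence for three blocks of four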

 

Large scale multiple hypothesis testing

 

Jelle Goeman - Radboud University Medical Center, Nijmegen, The Netherlands

Aldo Solari - University of Milano-Bicocca, Milan, Italy

 

Modern biological techniques, such as those arising in genomics or fMRI, allow many thousands of measurements to be performed simultaneously. Typical research questions in these areas lead to analyses in which many thousands of statistical hypotheses are tested, and it is obvious that some adjustment for multiplicity is necessary. This unprecedented multiplicity has been an active area of research for the last two decades. Users are faced with a bewildering array of possible error rates and methods with different properties. This course aims to give guidance for the choices users face.

 

We discuss familywise error versus false discovery rate, and the advantages and disadvantages of each. We clarify the difference between control of the false discovery rate and estimation of false discovery proportion, and confidence statements for the latter. We discuss methods based on probability inequalities and methods based on permutations, and assumptions underlying these methods.
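
The practical difference between the two error rates is easy to see numerically. The hedged sketch below uses base R's p.adjust on simulated p-values; the mixture of nulls and signals is an assumption made for the illustration.

    # Illustration in base R: Bonferroni (familywise error) versus
    # Benjamini-Hochberg (false discovery rate) on simulated p-values.
    set.seed(42)
    p <- c(runif(9000),           # 9000 true null hypotheses
           rbeta(1000, 1, 50))    # 1000 hypothetical signals
    sum(p.adjust(p, method = "bonferroni") < 0.05)  # rejections with FWER control
    sum(p.adjust(p, method = "BH") < 0.05)          # rejections with FDR control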

 

Special emphasis will be placed on the exploratory nature of high-throughput experiments, the need for post-processing of results, and the consequences this has for the choice of error rate and method. We will also emphasize the usefulness of aggregated data for testing, and methods for testing at multiple resolutions simultaneously. All methods will be illustrated with practical sessions in R.
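
As one simple flavour of aggregation (our illustration, valid only under the assumption of independent p-values), Fisher's combination statistic pools the evidence across a set of features:

    # Hedged sketch: aggregate the p-values of one hypothetical feature set
    # with Fisher's combination statistic (assumes independent p-values).
    set.seed(7)
    p_set <- runif(10)                   # p-values for one feature set (all null here)
    stat <- -2 * sum(log(p_set))         # Fisher's combination statistic
    pchisq(stat, df = 2 * length(p_set), lower.tail = FALSE)   # set-level p-value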

 

Bayesian statistics for follow-up data

 

Emmanuel Lesaffre - L-Biostat, KU Leuven, Leuven, Belgium

David Dejardin - F. Hoffmann-La Roche AG, Basel, Switzerland

 

Many statistical practitioners make use of the Bayesian approach because it allows the analysis of highly structured data. An important class of models involves the analysis of follow-up studies, i.e. longitudinal studies, survival studies, or a combination of the two.

We will illustrate the Bayesian approach to the analysis of such data by means of examples. For instance, Bayesian implementations will be illustrated for linear, generalized linear and non-linear mixed models with non-standard distributions for the random parts, longitudinal models with a change point, growth curve models, pharmacokinetic models, multivariate mixed models, joint mixed models of several random variables, longitudinal models with smooth subject-specific evolutions, longitudinal models with informative measurement times, etc.

We will also consider a variety of Bayesian implementations of parametric and semi-parametric survival models, incorporating frailties, recurrent events, competing risks, etc.

Finally, we will look at joint modeling of the survival and longitudinal processes. Examples will be analysed using WinBUGS/OpenBUGS/JAGS and their R interfaces, as well as dedicated R software and the non-sampling-based INLA software.
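
As a taste of the sampling-based route, the sketch below (our illustration, with simulated data and off-the-shelf vague priors, none of it taken from the course) fits a Bayesian random-intercept model through JAGS using the rjags package.

    # Hedged sketch: Bayesian random-intercept model via 'rjags'
    # (requires JAGS to be installed); data are simulated for illustration.
    library(rjags)
    model_string <- "
    model {
      for (i in 1:N) { y[i] ~ dnorm(b0[subj[i]] + beta * t[i], tau) }
      for (j in 1:J) { b0[j] ~ dnorm(mu0, tau0) }  # subject-specific intercepts
      beta ~ dnorm(0, 1.0E-4)
      mu0  ~ dnorm(0, 1.0E-4)
      tau  ~ dgamma(0.001, 0.001)
      tau0 ~ dgamma(0.001, 0.001)
    }"
    set.seed(1)
    J <- 20; n <- 5                       # 20 subjects, 5 visits each
    subj <- rep(1:J, each = n)
    t <- rep(0:(n - 1), J)                # measurement times
    y <- rnorm(J)[subj] + 0.5 * t + rnorm(J * n, sd = 0.5)
    jm <- jags.model(textConnection(model_string),
                     data = list(y = y, t = t, subj = subj, N = J * n, J = J))
    post <- coda.samples(jm, c("beta", "mu0"), n.iter = 2000)
    summary(post)                         # posterior summaries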

The course consists of two parts: Part I introduces the Bayesian approach, based on the newly released book Bayesian Biostatistics by Lesaffre and Lawson; Part II is devoted to the analysis of follow-up studies.