Mar 24, 12:00 PM

CDIAS PSMG: C. Hendricks Brown and Ian Cero

How to Make Scientific Inferences and Conduct Power Analyses for Randomized Implementation Rollout Trials

C. Hendricks Brown, PhD
Northwestern University

Ian Cero, PhD
University of Rochester Medical Center

ABSTRACT:
This two-part presentation continues the virtual presentation series on Randomized Implementation Rollout Designs and Trials, which include Stepped Wedge Implementation Designs. These designs are commonly used to examine how well an evidence-based intervention or package is being implemented in community or healthcare settings. The multitude of implementation research questions and specific hypotheses suggests the need for diverse randomized rollout implementation trial designs, assignment principles and procedures, and statistical models. In the first part we discuss key research questions and identify mixed-effects models for randomized implementation rollout trials involving 1) a single implementation strategy, testing how its effect varies over time and/or with the resources allocated, 2) a comparison of two distinct implementation strategies, and 3) three distinct strategies or components tested in a single trial.
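As orientation for the first part, the following is a minimal sketch, not taken from the presentation, of what a mixed-effects model for case 2 (comparing two implementation strategies across sites and time periods) might look like using R's lme4 package; the data frame rollout_data and all variable names are hypothetical.

    # Hypothetical sketch only: a mixed-effects model for comparing two
    # implementation strategies in a rollout trial.
    #   site     - the randomized unit (clinic, school, county, ...)
    #   strategy - which of the two implementation strategies the site received
    #   period   - calendar time period, entered as a fixed effect
    #   y        - implementation outcome for that site and period
    library(lme4)
    fit <- lmer(y ~ strategy + factor(period) + (1 | site), data = rollout_data)
    summary(fit)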

In the second part of the presentation we demonstrate Rollout, a general statistical package written in R for conducting detailed statistical power and sample size analyses for diverse rollout designs. Users specify both the underlying generative data model and the analytic model, and the output includes statistical power and bias in the parameter estimates. We discuss how the package can account for model misspecification and robustness. Only limited knowledge of R is needed to use the package, and we provide examples both for planning new implementation trials and for examining the effects on power when a design must be modified during the conduct of a trial.
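The sketch below illustrates the general idea behind simulation-based power analysis for a staggered rollout; it does not use the Rollout package's actual interface, and every function name, variable, and effect size in it is hypothetical.

    # Hypothetical simulation-based power calculation for a single-strategy
    # rollout in which sites start the strategy in staggered periods.
    library(lme4)

    simulate_power <- function(n_sites = 12, n_periods = 6, effect = 0.4,
                               icc = 0.05, n_sims = 500) {
      sig <- logical(n_sims)
      for (s in seq_len(n_sims)) {
        d <- expand.grid(site = 1:n_sites, period = 1:n_periods)
        start <- sort(rep(2:n_periods, length.out = n_sites))   # staggered start periods
        d$active <- as.integer(d$period >= start[d$site])       # 1 once the strategy has begun
        site_re <- rnorm(n_sites, sd = sqrt(icc))                # site-level random effects
        d$y <- effect * d$active + site_re[d$site] +
               rnorm(nrow(d), sd = sqrt(1 - icc))                # individual-level noise
        fit <- lmer(y ~ active + factor(period) + (1 | site), data = d)
        tval <- coef(summary(fit))["active", "t value"]
        sig[s] <- abs(tval) > 1.96                               # crude Wald criterion
      }
      mean(sig)  # estimated power
    }

    simulate_power(n_sims = 200)

As the abstract notes, the package itself goes further, allowing the generative and analytic models to be specified separately so that power and bias can be examined under misspecification.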


Oct 28, 12:00 PM

CDIAS PSMG: Sandra Japuntich

Adventures in Hybrid Implementation-Effectiveness trials: Integrating smoking cessation treatment into healthcare settings

Sandra Japuntich, PhD
University of Minnesota Medical School

ABSTRACT:
Hybrid implementation-effectiveness trials hold tremendous promise to speed implementation by collecting the data necessary for implementation while effectiveness is still being evaluated. This presentation will review hybrid clinical trial designs and present outcomes from two hybrid implementation-effectiveness trials of smoking cessation treatments. Insights and experiences will be shared about the importance of collecting implementation data both when treatments are effective, to aid implementation, and when they are not, to help explain unexpected results.

May 26, 12:00 PM

PSMG: Hendricks Brown, Daniel Almirall, Robert Gibbons, Don Hedeker, Carlos Gallo, Naihua Duan

Mixed Up: Modeling for Context

Hendricks Brown, PhD
Northwestern University Feinberg School of Medicine

Daniel Almirall, PhD
University of Michigan

Robert Gibbons, PhD
University of Chicago

Don Hedeker, PhD
University of Chicago, Public Health Sciences

Carlos Gallo, PhD
Northwestern University Feinberg School of Medicine

Naihua Duan, PhD
Columbia University

ABSTRACT:
This presentation provides background on the design and analysis of interventions or implementation strategies that are initially randomized but are then delivered in group or network settings, where the randomized units can no longer be treated as independent. Such designs include individually randomized, group-assigned trials, in which the group context is an active ingredient in delivering one arm of the trial, as well as implementation trials involving formal learning collaboratives in which sites interact with one another. A wide variety of such designs exists, including trials with rolling entry into and exit from groups, network-based interventions, and so-called rollout trials. It is important to account for this non-independence in the analysis; otherwise the critical values ordinarily used for test statistics are too small, and significance is declared more often than it should be. Examples are given in multiple contexts, and appropriate statistical procedures are described. To support appropriate statistical testing, we provide tools for conducting such analyses across different statistical platforms, and a Shiny R application that implements some of these procedures is demonstrated.
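As a purely illustrative aside (not part of the presentation), the R sketch below shows the size of the problem: with m observations per group and intraclass correlation rho, standard errors that ignore clustering are too small by roughly a factor of sqrt(1 + (m - 1) * rho), so a naive test rejects a true null hypothesis far more often than the nominal 5% level. All values and names here are hypothetical.

    # Hypothetical illustration of inflated Type I error when clustering is ignored.
    # Design effect: variances are inflated by 1 + (m - 1) * rho with m observations per group.
    set.seed(1)
    rho <- 0.1; m <- 20; n_groups <- 20
    reject <- replicate(2000, {
      g   <- rep(1:n_groups, each = m)
      arm <- rep(rep(0:1, each = n_groups / 2), each = m)   # arm assigned at the group level
      y   <- rnorm(n_groups, sd = sqrt(rho))[g] +           # shared group-level variation
             rnorm(n_groups * m, sd = sqrt(1 - rho))        # individual-level noise (no true effect)
      t.test(y ~ arm)$p.value < 0.05                        # test that ignores the clustering
    })
    mean(reject)  # well above the nominal 0.05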
