Bias Reduction Via Resampling for Estimation Following Sequential Tests
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 1997
Online Access: https://ink.library.smu.edu.sg/soe_research/365
Institution: Singapore Management University
Summary: It is well known that maximum likelihood (ML) estimation results in biased estimates when estimating parameters following a sequential test. Existing bias correction methods rely on explicit calculations of the bias that are often difficult to derive. We suggest a simple alternative to the existing methods. The new approach relies on approximating the bias of the estimate using a bootstrap method. It requires bootstrapping the sequential testing process by resampling observations from a distribution based on the ML estimate. Each bootstrap process gives a new ML estimate, and the corresponding bootstrap mean can be used to calibrate the estimate. An advantage of the new method over the existing methods is that the same procedure can be used under different stopping rules and different study designs. Simulation results suggest that this method performs competitively with existing methods.
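The sketch below illustrates the kind of procedure the summary describes: a parametric bootstrap of an entire sequential trial, used to estimate and remove the bias of the ML estimate. It is not the paper's exact setup. The model (normal mean with known variance), the stopping rule (a running z-statistic crossing a fixed boundary), the sample-size limits, and the function names `run_sequential_trial` and `bootstrap_bias_corrected` are all illustrative assumptions.

```python
"""Illustrative sketch, not the paper's implementation: parametric-bootstrap
bias correction of the ML estimate of a normal mean following a simple
sequential test. Model, stopping rule, and constants are assumptions."""

import numpy as np

SIGMA = 1.0           # known standard deviation (assumption)
N_MIN, N_MAX = 5, 50  # minimum / maximum sample sizes (assumption)
Z_BOUND = 2.0         # stopping boundary for the running z-statistic (assumption)


def run_sequential_trial(theta, rng):
    """Draw observations one at a time from N(theta, SIGMA^2) and stop as
    soon as the running z-statistic crosses the boundary (or N_MAX is
    reached). Return the ML estimate at stopping: the sample mean."""
    xs = []
    for n in range(1, N_MAX + 1):
        xs.append(rng.normal(theta, SIGMA))
        if n >= N_MIN:
            z = np.mean(xs) / (SIGMA / np.sqrt(n))
            if abs(z) >= Z_BOUND:
                break
    return float(np.mean(xs))


def bootstrap_bias_corrected(theta_hat, rng, n_boot=2000):
    """Bootstrap the whole sequential procedure: resample trials from the
    fitted model N(theta_hat, SIGMA^2), re-run the same stopping rule on
    each resample, and use the mean of the bootstrap ML estimates to
    calibrate theta_hat."""
    boot_estimates = np.array(
        [run_sequential_trial(theta_hat, rng) for _ in range(n_boot)]
    )
    bias_estimate = boot_estimates.mean() - theta_hat
    return theta_hat - bias_estimate  # bias-corrected estimate


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_theta = 0.3  # hypothetical true parameter, used only to simulate data
    theta_hat = run_sequential_trial(true_theta, rng)
    theta_corrected = bootstrap_bias_corrected(theta_hat, rng)
    print(f"naive ML estimate:       {theta_hat:.4f}")
    print(f"bias-corrected estimate: {theta_corrected:.4f}")
```

Note that the correction step never inspects the stopping rule directly; it only re-runs whatever rule `run_sequential_trial` implements. That is the point of the advantage claimed in the summary: swapping in a different stopping rule or study design changes only the trial-simulation function, not the calibration step.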