Minimize Unblinding Risks during Randomization: Your Questions Answered
In clinical trials, we randomize to protect the blind. But even the most secure randomization algorithms, those offering the least predictability, carry a real risk of unblinding once implemented.
In a recent webinar, Calyx’s Head of Statistics and Product Support Services, Malcolm Morrissey, shared his experience of identifying the randomization implementation options that could lead to unblinding of the block size or partial unblinding of subject treatment.
Here he addresses some of the questions that arose during the webinar related to how Calyx mitigates this risk of selection bias at the site, as well as other practical issues associated with randomization, such as capping and trends in methodology.
When sharing unblinding files, Calyx follows the requestor’s preference, because some sponsors cannot receive password-protected files of various forms directly due to their email account settings, security policies, or firewalls.
Calyx’s process for sharing password-protected files requires the password to be collected from our 24/7 Service Desk and only the named/approved contact can receive that password.
When transmitting to a safe location or portal, Calyx does advise sending a test file before posting any unblinding information. On occasion, the test file has shown that the chosen location is not suitable for the blinding status of the data; in other words, a safe location or portal does not in itself remove the risks of data transmission.
How does stratifying by site impact the blind? Does it make it better or worse?
Stratifying by site will increase the risk of selection bias, because the randomization records are dedicated to a single site and another site cannot take the next record in the sequence. If the next entry becomes predictable, the risk of selection bias is real. Mixed block sizes or alternative randomization methods help reduce the predictability of the randomization sequence in this case.
Calyx wouldn’t typically expect to see an open-label study stratified by site for that reason.
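To see why block size drives predictability, consider the simplified simulation below. It is an illustrative sketch, not Calyx's implementation: all function names and the two-arm 1:1 design are assumptions for the example. It counts how many allocations an observer could predict with certainty when the (fixed) block size is known.

```python
import random

def blocked_sequence(n_blocks, block_sizes, arms=("A", "B")):
    """Generate a 1:1 blocked randomization sequence. Drawing each
    block's size at random from block_sizes gives mixed block sizes."""
    seq = []
    for _ in range(n_blocks):
        size = random.choice(block_sizes)
        block = list(arms) * (size // len(arms))
        random.shuffle(block)
        seq.extend(block)
    return seq

def certain_guesses(seq, block_size):
    """Count allocations an observer could predict with certainty,
    assuming a known fixed block size: within each block, the next
    allocation is certain once only one arm has slots remaining."""
    certain = 0
    per_arm = block_size // 2
    for start in range(0, len(seq), block_size):
        counts = {"A": 0, "B": 0}
        for arm in seq[start:start + block_size]:
            open_arms = [a for a in counts if counts[a] < per_arm]
            if len(open_arms) == 1:
                certain += 1
            counts[arm] += 1
    return certain

# With a known fixed block of 4, the final allocation of every block
# (and sometimes the third) is fully determined by the ones before it.
seq = blocked_sequence(50, [4])
print(certain_guesses(seq, 4), "of", len(seq), "allocations predictable")
```

If the observer does not know which block size was drawn, those "certain" positions disappear, which is why mixing block sizes reduces the opportunity for selection bias.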
What is Calyx’s stance on randomization caps? Specifically, treatment arm caps?
Calyx will always seek to understand the background and the specific impact on patients, but the starting point for discussion would be closing all treatments in the capped group at the same time, to avoid any differences or time effects. For example, if capping is used to control entry into a subset by treatment, the IRT system could stop allocating subjects to the subset only once every treatment has reached its cap, accepting that some treatments may over-recruit to the subset by the time the last treatment reaches its target.
This approach may be unethical if subjects undergo an invasive procedure as part of that subset. In that situation, we could weigh the risk of partial unblinding against using ‘forcing’ to complete recruitment in the subset without any treatment exceeding its target.
Generally speaking, in a large Phase 3 or 4 study we would normally see a sponsor prefer to avoid any manipulation of the randomization sequence through forcing.
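The trade-off between the two capping policies above can be sketched as two eligibility rules. This is a generic illustration under assumed names, not how any particular IRT product implements capping:

```python
def subset_open(counts, caps):
    """True while any arm is still below its subset cap."""
    return any(counts[a] < caps[a] for a in caps)

def eligible_arms_close_together(counts, caps, arms):
    """Close all arms at the same time: every arm stays available until
    all arms reach their caps. Arms that fill first may over-recruit,
    but a new subset entrant reveals nothing about their treatment."""
    return list(arms) if subset_open(counts, caps) else []

def eligible_arms_forcing(counts, caps, arms):
    """'Forcing': only arms under cap stay available. No over-recruitment,
    but once one arm closes, every new subset entrant is known not to be
    on that arm, which is the partial-unblinding risk discussed above."""
    return [a for a in arms if counts[a] < caps[a]]
```

For example, with caps of 10 per arm and current counts of A=10, B=8, the first policy still offers both arms while the second offers only B, so anyone observing subset entry learns something about the new subject's treatment.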
Do you see an uptick in the usage of baseline adaptive algorithms (e.g., minimization and its extensions) to mitigate the risks associated with a randomization list? Are there any specific blinding considerations with baseline adaptive algorithms?
Historically, Calyx has seen minimization used to achieve balance in studies with many stratifying factors; this is a direct justification for the method, as mentioned in the ICH E9 guidance, because a blocked randomization would not achieve the required balance.
Over the years, Calyx has seen a growing reluctance to use this method, driven by concern about the additional supporting sensitivity analyses that regulators request on occasion. We understand the request is not made consistently, but when it is, it involves the extra work of a re-randomization test.
A long time ago there was concern that minimization calculations, programmed on a per-study basis, could incur programming errors. This is not an issue these days: pre-validated modules are typically available in IRT, and studies are simply parameterized within pre-validated code that provides full audit-trail logging of the calculations.
Currently, the increase in use we see is only within the APAC region, where many biostatisticians prefer the features of these methods over blocked randomization, including improved unpredictability as well as treatment balance control.
Calyx has been party to many discussions in the EU during 2022 on the benefits of these methods (with more focus on reducing predictability than we had seen in the past), and we expect their use to increase as these discussion groups exert influence over future protocol design.
Over the last 10 years, we have observed a conservative trend within the EU/US and a preference to stick with blocked randomization until regulatory guidance forces change.
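For readers unfamiliar with the method, a minimization step in the Pocock–Simon style can be sketched as follows. This is a simplified generic illustration, not the pre-validated IRT module described above; the function name, data layout, and the assignment probability `p_best` are all assumptions for the example. The random element (assigning the best-balancing arm with probability less than 1) is what preserves unpredictability.

```python
import random

def minimization_assign(strata_counts, factors, arms, p_best=0.8):
    """One simplified Pocock-Simon minimization step.

    strata_counts[factor][level][arm] holds the count of subjects with
    that factor level already on that arm; 'factors' gives the new
    subject's levels, e.g. {"sex": "F", "region": "EU"}.
    """
    scores = {}
    for arm in arms:
        imbalance = 0
        for factor, level in factors.items():
            # Hypothetically assign this arm and measure the resulting
            # spread between the largest and smallest arm counts.
            counts = dict(strata_counts[factor][level])
            counts[arm] += 1
            imbalance += max(counts.values()) - min(counts.values())
        scores[arm] = imbalance
    best = min(scores, key=scores.get)
    others = [a for a in arms if a != best]
    # Assign the best-balancing arm with probability p_best; otherwise
    # pick another arm at random, so the next assignment is never certain.
    if not others or random.random() < p_best:
        return best
    return random.choice(others)
```

For example, if 3 female subjects are already on arm A and 1 on arm B, a new female subject would most often be assigned to B, since that choice minimizes the resulting imbalance.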