A new National Institutes of Health (NIH) policy aimed at boosting the rigor and transparency of clinical trials is triggering concern among many behavioral scientists. They worry that the agency's widening definition of clinical trials could sweep up a broad array of basic science projects studying the human brain and behavior that do not test treatments. The clinical trials designation would impose a raft of new requirements on work that has already passed ethics review, such as different standards for funding applications and a mandate to report results on ClinicalTrials.gov. Critics say this would waste resources and confuse the public. NIH officials say they are still determining which behavioral studies will be defined as clinical trials.
European and U.S. clinical trial data transparency initiatives, such as EMA Policy 70, are creating additional disclosure compliance requirements for pharma and biotech companies. At their core, these initiatives mandate the distribution of clinical trial data for public consumption. Clinical trial data typically reside in regulatory documents such as Clinical Study Reports (CSRs) and marketing application submission documents (NDAs, MAAs, BLAs, etc.). To comply with these mandates, pharma and biotech companies will need to anonymize and de-identify the data sets in their clinical study reports and submission documents, produce research summaries suitable for a lay audience, and publish their clinical study information publicly. In this webinar, Synchrogenix President Keith Kleeman will discuss how artificial intelligence (AI) and natural language processing are significantly improving the accuracy and efficiency of anonymizing personally identifiable information, protected patient data, company confidential information, and other sensitive content in clinical trial documents prior to public disclosure.
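To make the anonymization task concrete, here is a minimal, rule-based sketch of redacting identifiers from document text. This is not Synchrogenix's pipeline; production systems use trained NLP/NER models rather than regular expressions, and the patterns, labels, and sample text below are invented for illustration only.

```python
import re

# Illustrative patterns for a few identifier categories (assumptions,
# not a real de-identification rule set).
PATTERNS = {
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "PATIENT_ID": re.compile(r"\bPT-\d{4,}\b"),
    "PHONE": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each matched identifier span with a category placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

sample = "Subject PT-00123 visited on 04/15/2021; contact 555-867-5309."
print(redact(sample))
# → Subject [PATIENT_ID] visited on [DATE]; contact [PHONE].
```

The appeal of ML-assisted approaches over rule sets like this one is coverage: names, addresses, and free-text clues rarely follow fixed patterns, which is where statistical models improve accuracy.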
We present the open problem of building a comprehensive clinical trial repository to remove duplication of effort from the systematic review process. We argue that no single organization has the resources to solve this problem alone; an approach based on crowdsourcing, supplemented by automated data extraction, therefore appears most promising. To assess the feasibility of this idea, we discuss the key challenges that need to be addressed.
Former FDA Commissioner Dr. Scott Gottlieb stressed the need to modernize the clinical trials process in a speech to the Bipartisan Policy Center in January of this year.1 He is quoted as saying, "digital technologies are one of our most promising tools for making healthcare more efficient." Improving the efficiency of clinical trial development is only one potential enhancement that machine learning can deliver. The terms machine learning and artificial intelligence (AI) are often used interchangeably, but they are not synonymous: machine learning is the subset of AI concerned with developing algorithms that make accurate predictions of future outcomes through pattern recognition and rules-based logic.
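The idea of "prediction via pattern recognition" can be sketched with one of the simplest learning algorithms, a one-nearest-neighbor classifier: a new case is assigned the outcome of the most similar past case. The feature and data points below are invented for illustration and do not come from any real trial data.

```python
# Minimal one-nearest-neighbor sketch: predict an outcome for a new
# case from the most similar historical example. Purely illustrative.

def predict_1nn(history, query):
    """Return the label of the past example whose feature value
    is closest to the query's feature value."""
    return min(history, key=lambda ex: abs(ex[0] - query))[1]

# Hypothetical (feature, outcome) pairs, e.g. an enrollment-rate
# feature mapped to whether a trial finished on schedule.
history = [(0.2, "delayed"), (0.4, "delayed"),
           (0.7, "on_time"), (0.9, "on_time")]

print(predict_1nn(history, 0.75))
# → on_time
```

Real systems use many features and far more robust models, but the core loop is the same: past patterns inform predictions about new cases.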
Chimeric antigen receptor (CAR)-T cell therapy, which is based on modified immune cells, has lured doctors, companies, and patients alike, but many are hitting a frustrating roadblock: generating enough of these CAR-T cells, which remain experimental, to meet surging demand. For patients, getting the most anticipated new treatments is never easy. Clinical trials are tightly controlled and not everyone is eligible. But for this personalized approach, the difficulties are multiplied. Unlike a drug, each batch is designed for a specific patient.