A Case Study in Machine Learning for Healthcare
Machine learning and artificial intelligence are gaining traction across many areas of healthcare research and delivery. Yet challenges can arise when applying these approaches, from practical issues of data collection to questions of interpretation. This case study highlights a multi-stakeholder project that aimed to optimize operating room scheduling. Our team analyzed the data, built tools to evaluate the approach, and provided statistical recommendations.
Due to COVID-19, hospitals have had to respond to overwhelming demand for resources, resulting in decreased availability of hospital and ICU beds.
In 2021, the McGill University Health Centre (MUHC) entered a multi-stakeholder partnership to address the pressing need to optimize operating room (OR) scheduling, with the goals of minimizing wait times and clearing the backlog of delayed surgeries.
Through the partnership, a proof-of-concept scheduling tool was developed to provide predictions for surgery duration, risk of staff overtime, OR utilization, and bed availability.
To assess the tool's validity and efficacy, predefined metrics and evaluation processes were required to facilitate knowledge transfer to MUHC stakeholders.
Our team of software developers, data scientists, and biostatisticians engaged seamlessly with both clinical and scientific partners to understand the project's key challenges.
We provided an independent review of data sources and data extraction, transfer, and load procedures, as well as the model building and validation process. Our team also performed a targeted literature review to identify new opportunities to improve and assess model performance.
Following our review, we conducted an analysis of existing and new data sources to evaluate whether the tool’s predictions were accurate, both overall and across sub-specialties/departments. Our metrics included comparisons against status quo surgery duration estimates (e.g., via available scheduling tools and/or manager predictions), as well as comparisons with approaches described in the literature.
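The comparison described above can be sketched as follows. This is a minimal illustration, not the project's actual code: the column names (`actual_min`, `model_pred_min`, `scheduled_min`, `department`) and the choice of metrics (mean absolute error and mean signed error per department) are assumptions for the example.

```python
import pandas as pd


def duration_metrics(df: pd.DataFrame) -> pd.DataFrame:
    """Per-department MAE and mean signed error (bias), computed for both
    the model's predicted durations and the status quo scheduled times,
    so the two estimate sources can be compared side by side."""
    results = []
    for source in ["model_pred_min", "scheduled_min"]:
        err = df[source] - df["actual_min"]  # signed error in minutes
        metrics = (
            err.abs().groupby(df["department"]).mean().rename("mae_min").to_frame()
        )
        metrics["bias_min"] = err.groupby(df["department"]).mean()
        metrics["source"] = source
        results.append(metrics.reset_index())
    return pd.concat(results, ignore_index=True)
```

A table like this, broken out by sub-specialty, makes it easy to see not only whether the model beats the status quo on average error, but also whether either source systematically over- or under-books OR time (the bias column), which matters differently for overtime risk than for idle capacity.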
Our data science team also developed a visualization dashboard where stakeholders could review performance metrics for different components of the model. This interactive tool helped guide discussions about data drift, model fitting, and performance metrics.
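One kind of check such a dashboard can surface is a rolling view of prediction error over time, flagging periods where recent error departs from the historical baseline (a simple signal of possible data drift). The sketch below is illustrative only; the column names and the threshold rule are assumptions, not the dashboard's actual logic.

```python
import pandas as pd


def monthly_mae(df: pd.DataFrame) -> pd.Series:
    """Mean absolute prediction error per calendar month."""
    err = (df["model_pred_min"] - df["actual_min"]).abs()
    return err.groupby(df["surgery_date"].dt.to_period("M")).mean()


def drift_flags(mae: pd.Series, window: int = 6, threshold: float = 1.5) -> pd.Series:
    """Flag months whose MAE exceeds `threshold` times the mean of the
    trailing `window` months (excluding the current month)."""
    baseline = mae.shift(1).rolling(window, min_periods=3).mean()
    return mae > threshold * baseline
```

Plotted as a time series with flagged months highlighted, this gives stakeholders a concrete starting point for conversations about when model fit degraded and whether retraining or new data sources are warranted.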
Lastly, we recommended specific investigations that would measure the impact of these tools on real patients and providers.
Our assessment identified opportunities to improve model performance relative to existing surgery duration estimates, a key component of creating optimized surgery schedules. We recommended exploring models of increasing complexity to address sources of variation that remained with the previous approach. We also made data-related recommendations, both to improve the data sources used to develop future models and to fully leverage existing data collection efforts.
The feedback we solicited indicated that some aspects of the tool, such as visualizations of overtime risk, were already perceived as helpful by clinical stakeholders. We therefore recommended further investigations to determine the impact of access to these tools on patient care, which would be needed to justify additional tool development and deployment.
As the MUHC moves towards digitization and embraces machine learning-enabled operational and clinical planning, Precision Analytics' work provided invaluable context for the performance evaluation of machine learning technologies in a real-world setting.
Machine learning is a powerful approach to some of our health care system's most pressing challenges. But while creating a model has become easier than ever, it can be difficult to understand what went wrong (or right) once a model is deployed.
At Precision Analytics, we work on a range of prediction and classification problems, both as part of existing machine learning initiatives and as principal architects. With our statistical expertise and interdisciplinary team, we help our clients to see inside any “black boxes” and ensure that their machine learning approach is delivering value.