The Potential of Artificial Intelligence to Bring Equity in Health Care | MIT News

Health care is at a crossroads: a point at which artificial intelligence tools are being introduced to all areas of the space. With this introduction comes great expectations: AI has the potential to greatly improve existing technologies, sharpen personalized medicine, and, with an influx of big data, benefit historically underserved populations.

But to do these things, the health care community needs to ensure that AI tools are trustworthy, and that they do not end up perpetuating biases that exist in the current system. Researchers at MIT's Abdul Latif Jameel Clinic for Machine Learning in Health (Jameel Clinic), an initiative to support AI research in health care, are calling for the creation of a robust infrastructure that can help scientists and clinicians pursue this mission.

Fair and equitable AI for healthcare

The Jameel Clinic recently hosted the AI for Health Care Equity Conference to assess current cutting-edge work in the space, including new machine learning techniques that support fairness, personalization, and inclusiveness; to identify key areas of impact in health care delivery; and to discuss regulatory and policy implications.

Nearly 1,400 people virtually attended the conference to hear from thought leaders in academia, industry and government who strive to improve equity in health care and better understand technical challenges in this space and the ways forward.

During the event, Regina Barzilay, the School of Engineering Distinguished Professor of AI and Health and faculty lead of AI for the Jameel Clinic, and Bilal Mateen, head of clinical technology at the Wellcome Trust, announced a Wellcome Trust grant to the Jameel Clinic to create a community platform supporting equitable AI tools in health care.

The project's ultimate goal is not to solve an academic question or to reach a specific research benchmark, but to actually improve the lives of patients worldwide. Researchers at the Jameel Clinic insist that AI tools should not be designed with a single population in mind, but instead be crafted to be iterative and inclusive, to serve any community or subpopulation. To do this, a given AI tool needs to be studied and validated across many populations, usually in multiple cities and countries. Also on the project wish list is creating open access for the broader scientific community, while honoring patient privacy, to democratize the effort.

“What became increasingly evident to us as a funder is that the nature of science has fundamentally changed over the last few years, and it is substantially more computational by design than it ever was before,” says Mateen.

The clinical point of view

This call to action is a response to health care in 2020. At the conference, Collin Stultz, a professor of electrical engineering and computer science and a cardiologist at Massachusetts General Hospital, spoke about how health care providers typically prescribe treatments and why those treatments are often incorrect.

In simple terms, a doctor collects information about a patient, then uses that information to create a treatment plan. “The decisions providers make can improve the quality of patients’ lives or make them live longer, but this does not happen in a vacuum,” says Stultz.

Instead, he says, a complex web of forces can influence how a patient receives treatment. These forces range from the hyper-specific to the universal: factors unique to an individual patient, provider biases such as knowledge gleaned from flawed clinical trials, and broad structural problems such as unequal access to care.

Data sets and algorithms

A central question of the conference revolved around how race is represented in data sets, since it is a variable that can be fluid, self-reported, and defined in non-specific terms.

“The inequities we are trying to address are large, striking, and persistent,” says Sharrelle Barber, an assistant professor of epidemiology and biostatistics at Drexel University. “We have to think about what that variable really is. Really, it’s a marker of structural racism,” says Barber. “It’s not biological, it’s not genetic. We’ve been saying that over and over again.”

Some aspects of health are purely determined by biology, such as hereditary conditions like cystic fibrosis, but the majority of conditions are not straightforward. According to Massachusetts General Hospital oncologist T. Salewa Oseni, when it comes to patient health and outcomes, research tends to assume biological factors have outsized influence, but socioeconomic factors should be considered just as seriously.

Even as machine learning researchers detect preexisting biases in the health care system, they must also address weaknesses in the algorithms themselves, as a series of speakers pointed out at the conference. They must grapple with important questions that arise in all stages of development, from the initial framing of what the technology is trying to solve to overseeing deployment in the real world.

Irene Chen, an MIT doctoral student studying machine learning, examines all steps of the development pipeline through the lens of ethics. As a first-year doctoral student, Chen was alarmed to find an “out-of-the-box” algorithm, which happened to project patient mortality, producing significantly different predictions based on race. This kind of algorithm can have real impacts, too; it guides how hospitals allocate resources to patients.

Chen set about understanding why this algorithm produced such uneven results. In later work, she defined three specific sources of bias that could be disentangled from any model. The first is “bias,” but in a statistical sense: maybe the model is not a good fit for the research question. The second is variance, which is controlled by sample size. The last source is noise, which has nothing to do with tweaking the model or increasing the sample size. Instead, it indicates that something has happened during the data collection process, a step well before model development. Many systemic inequities, such as limited health insurance or a historic mistrust of medicine in certain groups, get “rolled up” in noise.
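The distinction between these error sources can be illustrated with a small simulation (a hypothetical sketch for intuition, not Chen's actual analysis): fit a deliberately misspecified straight line to data whose true relationship is quadratic, and whose labels are noisier for one group than another. The model misfit ("bias") hits both groups equally, while the noisier data collection adds irreducible error that no modeling fix removes.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_group(n, noise_sd):
    """Simulate one patient group: quadratic ground truth plus label noise."""
    x = rng.uniform(-1, 1, size=n)
    y = x**2 + rng.normal(0, noise_sd, size=n)
    return x, y

def fit_linear(x, y):
    """Least-squares fit of a straight line (a deliberately misspecified model)."""
    A = np.column_stack([np.ones_like(x), x])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def expected_error(coef, noise_sd):
    """Approximate expected squared error = bias^2 (model misfit) + noise."""
    grid = np.linspace(-1, 1, 201)
    pred = coef[0] + coef[1] * grid
    bias_sq = np.mean((grid**2 - pred) ** 2)  # misspecification: line vs. parabola
    return bias_sq + noise_sd**2              # irreducible noise adds on top

# Group B's labels are noisier, e.g., from less reliable data collection.
xa, ya = simulate_group(5000, noise_sd=0.05)
xb, yb = simulate_group(5000, noise_sd=0.30)

err_a = expected_error(fit_linear(xa, ya), 0.05)
err_b = expected_error(fit_linear(xb, yb), 0.30)

# Same model, same bias; group B carries extra error from noisy collection.
print(err_a, err_b)
```

Variance, the third source, would show up here if the groups were fit on very small samples: the fitted coefficients themselves would fluctuate from sample to sample.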

“Once you identify which component it is, you can come up with a fix,” Chen explains.

Marzyeh Ghassemi, an assistant professor at the University of Toronto and an incoming professor at MIT, has explored the trade-off between anonymizing highly personal health data and ensuring that all patients are fairly represented. In cases like differential privacy, a machine learning tool that guarantees the same level of privacy for every data point, individuals who are too “unique” in their cohort started to lose predictive influence in the model. In health data, where trials often underrepresent certain populations, “minorities are the ones that look unique,” says Ghassemi.
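One way to see why is with a toy sketch of a standard differential-privacy building block, the Laplace mechanism (an illustrative example, not Ghassemi's own method): the noise required to mask any single record costs a small subgroup far more relative accuracy than a large one.

```python
import numpy as np

rng = np.random.default_rng(1)

def dp_mean(values, lower, upper, epsilon):
    """Differentially private mean via the Laplace mechanism.

    Clamping each value to [lower, upper] bounds any one record's influence,
    so the mean's sensitivity is (upper - lower) / n.
    """
    n = len(values)
    clipped = np.clip(values, lower, upper)
    sensitivity = (upper - lower) / n
    noise = rng.laplace(0.0, sensitivity / epsilon)
    return clipped.mean() + noise

# Hypothetical cohort: a large majority group and a small minority group,
# each with the same underlying outcome distribution.
majority = rng.normal(0.5, 0.1, size=10_000)
minority = rng.normal(0.5, 0.1, size=100)

eps = 1.0
err_majority = abs(dp_mean(majority, 0.0, 1.0, eps) - majority.mean())
err_minority = abs(dp_mean(minority, 0.0, 1.0, eps) - minority.mean())

# The same privacy guarantee typically costs the small group far more
# accuracy, which is one way underrepresented populations lose signal.
print(err_majority, err_minority)
```

The noise scale shrinks as 1/n, so the 100-person group's statistics are perturbed about 100 times more than the 10,000-person group's under the same privacy budget.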

“We need to create more data, we need diverse data,” she says. “These robust, private, fair, high-quality algorithms that we are trying to train require large-scale data sets for research.”

Beyond the Jameel Clinic, other organizations are recognizing the power of harnessing diverse data to create more equitable health care. Anthony Philippakis, chief data officer at the Broad Institute of MIT and Harvard, presented the All of Us research program, an unprecedented National Institutes of Health project that aims to bridge the gap for historically underrepresented populations by collecting observational and longitudinal health data from more than one million Americans. The database is meant to uncover how diseases present across different subpopulations.

One of the most important questions at the conference, and for AI in general, revolves around policy. Kadija Ferryman, a cultural anthropologist and bioethicist at New York University, points out that AI regulation is in its infancy, which can be a good thing. “There’s a lot of opportunity to develop policy with these ideas around fairness and justice, as opposed to having policies that have already been developed and then trying to undo some of them,” says Ferryman.

Even before policy comes into play, there are certain best practices for developers to keep in mind. Najat Khan, head of data science at Janssen R&D, encourages researchers to be “extremely systematic and thorough up front” when choosing data sets and algorithms; detailed feasibility checks on data sources, types, gaps, diversity, and other considerations are key. Even large, common data sets contain inherent bias.

Even more fundamental, she says, is opening the door to a diverse group of future researchers.

“We have to ensure that we develop and invest in data science talent that is diverse in both their backgrounds and experiences, and that they have opportunities to work on problems that really matter for the patients they care about,” says Khan. “If we do this right, you’ll see … and we are already starting to see … a fundamental shift in the talent we have: a more bilingual, diverse talent pool.”

The AI for Health Care Equity Conference was co-organized by the MIT Jameel Clinic; the Department of Electrical Engineering and Computer Science; the Institute for Data, Systems, and Society; the Institute for Medical Engineering and Science; and the MIT Schwarzman College of Computing.
