How MSD is harnessing AI
MSD’s global lead for artificial intelligence discusses what it will take for the technology to realise its full potential in healthcare
At biopharmaceutical giant MSD, Jason Tamara Widjaja leads a team of artificial intelligence (AI) experts and data scientists that deploys AI across all aspects of the business, from supporting research and development activities to improving research productivity and employee retention.
That involves building internal AI products, assessing external AI software and addressing challenges such as data privacy, data quality and ethical concerns, as well as AI governance, an area in which MSD has contributed to Singapore’s Model AI Governance Framework. In an interview with Computer Weekly, Widjaja, MSD’s director for AI and head of responsible AI, offers insights into the company’s approach to AI and what it takes for the technology to realise its full potential in healthcare.
Tell us more about your role leading a global AI team in MSD. What are you responsible for and what’s your typical day like?
Widjaja: At MSD, we are unified around our purpose: we use the power of leading-edge science to save and improve lives around the world. For more than a century, we’ve been at the forefront of research, bringing forward medicines, vaccines and innovative health services for the world’s most challenging diseases.
I have a dual responsibility of leading a large and diverse data science team in the company’s technology hub in Singapore and driving business outcomes through AI globally. Day to day, I work with my team to develop and mature AI talent and capabilities, embed responsible AI into the enterprise and nurture partnerships in the ecosystem of data science and AI practitioners.
On a typical day, I split my time between working with different parts of our AI business and team. Some teams work in an agile product mode, developing internal AI products or assessing external AI software, while others assist with piloting new applications of AI. We also started an AI ethics and governance function about three years ago to deal with the emerging risks of AI.
We work flexible hours because the team is spread across the world. The Singapore IT hub is part of a global network, with sister hubs in New Jersey and Prague in the Czech Republic, and our team members work collaboratively on projects across different parts of our operations.
Team members
What is the profile of your team members? Do they bring different skill sets to the table?
The teams that I lead are primarily composed of AI and machine learning engineers, data scientists, and product managers. Beyond these more traditional roles, we work in a pioneering area and therefore recognise the need to stay ahead of the industry and think ahead to the future capabilities and skill sets we need to develop. To this end, we have created a new AI ethicist role to ensure AI-related systems in our company are developed and implemented based on ethical governance frameworks.
This requires both a technical understanding of AI and a working knowledge of policy and risk management. This mix of skill sets is rare, and an example of how we are pushing the boundaries of the roles we need to succeed in AI.
Beyond the skills in the team, we recognise the importance of working seamlessly and collaborating closely with our colleagues from other disciplines to tap into capabilities such as user research, product design and compliance.
Gaining awareness
What has been the most challenging aspect of your job?
While it is great that awareness of AI is growing and the technology is entering mainstream conversations, there is also a lot of hype around it, which sometimes overpromises or glosses over the challenges of the technology.
There is a notion that branding something “AI” makes it more valuable or eye-catching, and the net result is noise and confusion that detract from the work of teams that are genuinely trained in AI and machine learning and are using it to create value.
The term AI has become so commonplace in society that someone will actually say, “We’ve been waiting so long for this elevator, the AI system must be faulty”, or an app that switches the light on when people walk into a room is marketed as AI. There is a lot of confusion around what constitutes AI and what does not.
From the perspective of AI teams, where the use of AI requires additional attention, we do not want every Excel macro or robotic process automation (RPA) bot to trigger AI assessments, so there are questions we must ask to determine if something is genuinely AI. Beyond looking at the methods used (for example, whether the system is making decisions based on a simple 1+1=2 formula versus a sophisticated deep learning model), we also ask whether the system learns from data and observations and, crucially, whether it makes automated decisions.
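To make that triage concrete, here is a hypothetical sketch in Python of the kind of screening questions described above. The fields, combination of criteria and examples are illustrative assumptions, not MSD’s actual assessment process.

```python
# Hypothetical triage for whether a system warrants AI governance review.
# The criteria below paraphrase the questions in the interview; the exact
# logic and fields are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class SystemProfile:
    name: str
    learns_from_data: bool           # does behaviour change based on data/observations?
    makes_automated_decisions: bool  # does it act without a human confirming each decision?
    fixed_rules_only: bool           # e.g. an Excel macro or a simple RPA bot

def requires_ai_assessment(system: SystemProfile) -> bool:
    """Return True if the system should go through an AI governance assessment."""
    if system.fixed_rules_only:
        return False  # deterministic rules: no AI-specific review needed
    return system.learns_from_data and system.makes_automated_decisions

# A rules-based RPA bot is screened out; a learning classifier is not.
bot = SystemProfile("invoice-bot", False, True, True)
classifier = SystemProfile("image-classifier", True, True, False)
print(requires_ai_assessment(bot))         # False
print(requires_ai_assessment(classifier))  # True
```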
How is MSD harnessing AI in healthcare and medical research, particularly in the Asia-Pacific region?
At the most basic level, AI methods used in conjunction with other automation approaches can make the work of our scientists more efficient, freeing them up to focus on the science. For example, AI assistants can help scientists annotate medical images more efficiently. Beyond that, AI holds a lot of potential across the spectrum of research and development activities, with the caveat that some of these benefits take a long time to realise, as the drug development process tends to operate on long-term horizons.
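As an illustration of the annotation example, here is a minimal sketch of model-assisted labelling, where a classifier pre-annotates images and routes only low-confidence cases to a human expert. The stand-in model, threshold and data are assumptions, not MSD’s pipeline.

```python
# A model pre-labels a batch of images; only uncertain cases go to scientists.
import numpy as np

rng = np.random.default_rng(0)

def model_predict(images: np.ndarray) -> np.ndarray:
    """Stand-in for a trained classifier: returns per-class probabilities."""
    logits = rng.normal(size=(len(images), 3))
    exp = np.exp(logits - logits.max(axis=1, keepdims=True))
    return exp / exp.sum(axis=1, keepdims=True)

images = np.zeros((10, 64, 64))   # placeholder image batch
probs = model_predict(images)
labels = probs.argmax(axis=1)     # model's proposed annotations
confidence = probs.max(axis=1)

THRESHOLD = 0.6  # assumed review threshold
auto_annotated = labels[confidence >= THRESHOLD]
needs_review = np.where(confidence < THRESHOLD)[0]
print(f"auto-annotated: {len(auto_annotated)}, routed to scientists: {len(needs_review)}")
```

The design choice is to spend expert time only where the model is unsure, which is how an assistant can speed up annotation without removing the human from the loop.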
We also harness AI methods, working with our human resources teams, to understand employee engagement and what might be driving retention risks across MSD’s offices. Through this, we are able to make changes that improve employee experience, drive engagement and ultimately aim to improve employee retention.
Alongside this, we have implemented AI governance practices that focus on having clear roles and responsibilities in AI development and deployment, making models explainable so we can show the rationale for their predictions, and training our teams to ensure AI is used ethically throughout the business.
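To show what that explainability practice can look like, here is a minimal sketch using permutation importance from scikit-learn on an invented employee-retention dataset in the spirit of the HR example above. The features, data and model are illustrative assumptions, not MSD’s actual tooling.

```python
# Explain a retention-risk model by measuring how much each feature matters.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(42)
n = 500

# Synthetic engagement features (all invented for illustration).
X = np.column_stack([
    rng.uniform(1, 5, n),    # engagement survey score
    rng.integers(0, 15, n),  # years of tenure
    rng.uniform(0, 60, n),   # overtime hours per month
])
# Synthetic attrition label, loosely driven by low engagement and high overtime.
y = ((X[:, 0] < 2.5) & (X[:, 2] > 30)).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: how much does shuffling each feature hurt accuracy?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["engagement", "tenure", "overtime"], result.importances_mean):
    print(f"{name}: {score:.3f}")
```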
We are also currently exploring industry partnerships to investigate an emerging area of AI known as federated learning. In brief, it challenges the traditional paradigm of machine learning, which generally requires data to be in a central location to train a model. Instead, it allows a trusted third-party platform to host models that learn from multiple locations without the need to physically share data. Federated learning holds a lot of potential to improve healthcare because it allows machine learning to be applied in complex healthcare environments where it is often difficult to share and move data.
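To make the idea concrete, here is a minimal sketch of federated averaging (FedAvg), one common federated learning algorithm, in NumPy. Each site trains on its own private data and only model weights are shared with the coordinating server; the data, model and number of sites are illustrative assumptions, not any specific partnership.

```python
# Federated averaging: sites train locally, a server averages the weights.
import numpy as np

rng = np.random.default_rng(1)

def local_update(w, X, y, lr=0.1, epochs=20):
    """Train a logistic-regression model locally by gradient descent."""
    w = w.copy()
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))        # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)    # gradient step
    return w

# Three hospitals with private datasets that never leave each site.
sites = []
for _ in range(3):
    X = rng.normal(size=(100, 4))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)
    sites.append((X, y))

w_global = np.zeros(4)
for _ in range(10):  # communication rounds
    # Each site improves the shared model on its own data...
    local_weights = [local_update(w_global, X, y) for X, y in sites]
    # ...and the server averages the weights (weight by site size if unequal).
    w_global = np.mean(local_weights, axis=0)

print("global model weights:", np.round(w_global, 2))
```

The key property is that raw records never leave each site; only weight vectors travel, which is what makes the approach attractive where health data is hard to move.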
What are the challenges of adopting AI applications in healthcare?
Adopting AI applications in healthcare comes with many challenges, from a general lack of understanding of AI’s capabilities and limitations to concerns around data privacy and the quality of available data, as well as the fact that everything we do in this realm must be done according to a strong ethical framework. We also have to manage expectations around the benefits and risks of AI, and the speed of its adoption, with those less familiar with the technology.
As we know, taking a drug from research through development to regulatory approval takes scientists years of labour-intensive research and commitment, and many candidate medicines and vaccines fail at an early stage. The use of AI and the sharing of data across medical research organisations have strong potential to accelerate drug development, allowing us to bring life-changing medicines and vaccines to market more quickly and, hopefully, to save and improve the lives of many more patients around the world.
The fact that AI applications in healthcare are being developed at such a fast pace speaks to the vast potential and benefits AI technology can bring to the industry. However, the personal and sensitive nature of health data poses ethical challenges, even when such data serves scientific purposes. AI is powerful, but there are both technical and non-technical challenges around data sharing, interoperability and quality that we need to overcome.
With this in mind, there is a strong need to develop a framework for the ethical sharing of health-related data that is centred on core values, which can be determined by a multidisciplinary stakeholder group comprising governments, the scientific community and healthcare sectors. We also need to educate our stakeholders, driving awareness and greater understanding of AI throughout the business.
Legal and ethical ramifications
What are the legal and ethical ramifications of using AI in healthcare, and how can all stakeholders come together to address those issues?
The healthcare and pharmaceutical industries are highly regulated, so operating in a complex compliance and ethical environment is not new. However, the use of AI systems adds several new challenges specific to AI. These include the need to extend informed consent to interactions with an AI system, to carefully calibrate attempts at automation and keep a human in the loop, to build in safeguards that mitigate algorithmic bias and ensure fairness, and to ensure that AI decisions and predictions are explainable, interpretable and transparent.
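As one concrete example of such a bias safeguard, here is a minimal sketch that checks a model’s decisions for demographic parity across two groups. The data, groups and tolerance are illustrative assumptions; real fairness reviews use context-specific metrics and thresholds.

```python
# Check whether positive-decision rates differ materially between groups.
import numpy as np

rng = np.random.default_rng(7)

decisions = rng.integers(0, 2, 1000)   # stand-in for a model's yes/no decisions
groups = rng.choice(["A", "B"], 1000)  # stand-in for a protected attribute

rate_a = decisions[groups == "A"].mean()
rate_b = decisions[groups == "B"].mean()
gap = abs(rate_a - rate_b)

print(f"positive-decision rate: A={rate_a:.2f}, B={rate_b:.2f}, gap={gap:.2f}")
if gap > 0.1:  # assumed tolerance; real thresholds are context-specific
    print("flag for human review: potential disparate impact")
```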
It is important to recognise that AI governance is inherently multi-disciplinary and requires a diverse combination of skills. To this point, people who are well-versed in policy may not understand the nuts and bolts of AI, and vice versa. Furthermore, perceptions of what is ethical and permissible can vary depending on the region a policy is created for, and how it is implemented and enforced.
One approach to overcoming these challenges has been to form a multidisciplinary working group. At MSD, our working group consists of representatives from medical affairs, legal, policy, AI, cyber security, ethics and compliance. We collaborate to identify and design systems that strike the right balance in the ethical use of data and AI.
We are proud to be one of a handful of healthcare and health technology companies that contributed to Singapore’s Model AI Governance Framework and its Implementation and Self-Assessment Guide for Organisations, and that our AI governance practices were featured as a case study in the Compendium of Use Cases in 2020.
Partnering with the Singapore government, the Infocomm Media Development Authority and the Personal Data Protection Commission to build the foundation of AI governance in Singapore has been an honour for our team, but this is only the beginning. With the rapid evolution and adoption of AI in healthcare, it will be crucial for us to continue working towards maturity and collaboration to ensure we collectively realise the benefits of AI in improving patient outcomes.
Additionally, in my personal capacity as vice-president of the AI and robotics chapter at the Singapore Computer Society, I have been working with the group to raise awareness and nurture the next generation of values-based AI professionals. Our work focuses on accrediting tertiary institutions to teach AI ethics and governance courses as part of their curricula, infusing moral and ethical considerations into the development of AI products.
Future implementation
Above all, what else needs to be in place before AI can realise its full potential in healthcare?
It is imperative that we remember the fundamental reason we are using AI technology is to improve and save lives. We need to think of AI systems not just as software that exists in a vacuum, but as socio-technical systems that are as much about people and communities as they are about data and code.
As such, AI implementations should never be developed in a silo, but should engage and include the diverse stakeholders they impact.