The COVID-19 pandemic was tightening its grip globally, and it was scary. A few friends from India met on Zoom to check in on each other. Many of us discussed how healthcare could be improved and delivered faster now that artificial intelligence has come of age. As I had delivered some webinars on healthcare and had some exposure to the medical field, I shared some of my thoughts too.
Contrary to the belief held by many that artificial intelligence is relatively new, something that happened in the early 2000s, the concept of generating insights from data and patterns is not new. During my senior year at the University of Madras, I studied Prolog on my own, alongside an additional certificate program at Thyagarajar College of Engineering, Madurai. I learned how Prolog differed from logic-driven, forward-directional procedural languages: it backtracks through its logic, looking for patterns until a sub-goal is satisfied. There were references to the green cut, which alters the procedural behavior of a program but not its logical behavior, and the red cut, which changes the logical behavior as well. This was a precursor to expert systems (software that reasons over encoded knowledge on its own).
Subsequently, when I took an information technology certificate course as part of Biomedical Engineering at the University of Aberdeen in the early 1990s, I had the opportunity to build a small clinical program in Prolog (which, sadly, I have now forgotten) to differentiate standard respiratory, cold, cough, and fever related symptoms and either act as an automatic computerized first responder or recommend an intervention from a healthcare practitioner.
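The original Prolog program is lost, but the idea of rule-based triage can be sketched in a few lines. The following Python sketch is purely illustrative: the symptom names, rule sets, and thresholds are hypothetical, not medical guidance and not a reconstruction of the original program.

```python
# A minimal, hypothetical sketch of rule-based symptom triage.
# Symptom names and rules are made up for illustration only.

def triage(symptoms):
    """Return a recommendation for a set of reported symptoms."""
    severe = {"shortness_of_breath", "high_fever", "chest_pain"}
    mild = {"cold", "cough", "low_fever", "sore_throat"}
    if symptoms & severe:            # any severe symptom present
        return "refer to healthcare practitioner"
    if symptoms and symptoms <= mild:  # only mild symptoms present
        return "self-care advice (automatic first responder)"
    return "insufficient information; ask follow-up questions"

print(triage({"cough", "low_fever"}))   # mild-only set -> self-care path
print(triage({"cough", "high_fever"}))  # severe symptom -> referral path
```

In Prolog, each of these branches would be a clause, with backtracking trying the rules in order until one succeeds; the dictionary-of-rules style above is the closest procedural analogue.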
Furthermore, during my graduate program in Detroit, I experimented with logic gates (AND, OR, NAND, NOR, XOR, and NOT, which together facilitate algorithmic constructs like sequencing, branching, and looping in procedural languages) and with neural networks that allow hardware/firmware-level learning, applied to simulated anti-lock braking systems (ABS) in the automotive space.
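The connection between logic gates and neural networks can be made concrete with a single perceptron: AND and OR are linearly separable, so one trainable unit can learn them, while XOR is not, which is exactly why it requires a hidden layer. This is a generic textbook sketch, not the hardware/firmware experiment described above.

```python
# A minimal sketch: one perceptron learning a linearly separable logic gate.
# AND and OR converge with a single unit; XOR would need a hidden layer.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Train a single threshold unit on ((x0, x1), target) samples."""
    w0, w1, b = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x0, x1), target in samples:
            out = 1 if w0 * x0 + w1 * x1 + b > 0 else 0
            err = target - out          # perceptron learning rule
            w0 += lr * err * x0
            w1 += lr * err * x1
            b += lr * err
    return lambda x0, x1: 1 if w0 * x0 + w1 * x1 + b > 0 else 0

and_gate = train_perceptron([((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)])
print([and_gate(x0, x1) for x0 in (0, 1) for x1 in (0, 1)])  # [0, 0, 0, 1]
```

Swapping in the OR truth table trains an OR gate the same way; feeding it XOR targets never converges, which is the classic motivation for multi-layer networks.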
Based on all these experiences, when it comes to using learned intelligent behavior, we need to ask ourselves a few important questions. These questions are almost analogous to those in the article "Managing Others: Four Simple Powerful Questions" (Rajagopalan, 2016).
1. Does our knowledge extend to the new situation? We can't assume that our knowledge applies to a new situation without reservation. To ensure that our knowledge works in a new situation, we need to be consciously aware of its boundary conditions. Only when we do this can we eliminate blind spots in our thinking that endanger the "beneficence" principle of digital ethics (Beauchamp & Childress, 1979; Nebeker, Torous, & Bartlett Ellis, 2019).
2. Who would be accountable for abuse or misuse? Even if we have addressed the first question effectively, we must think through the unknowns and proactively put process controls in place to avoid any potential use of the product in an unintended way. Just like the discussion of the abuser persona (Rajagopalan, 2015), it is important for us to look at what our AI-based solutions should not do. This question prompts us to think about the digital ethics principle of non-maleficence (Nebeker, Torous, & Bartlett Ellis, 2019). In simple words, AI solution designers and developers should answer the question of who is accountable when something goes wrong.
3. AI is meant to relieve us from mundane decisions so that people can solve more important problems. So, one of the questions we should ask ourselves is of the "So what" type. In other words, how will the AI solution improve our productivity by augmenting efficiency, promoting effectiveness, and enhancing efficacy? (I call these the 3E's of better product development, regardless of the type of solution, such as software or healthcare.) For instance, we should consciously ask whether we have eliminated bias and stereotyping so that the solutions are impartial and promote equality. Furthermore, how many options do we give the users of these AI solutions when they generate false positives (or false negatives)? The digital bioethics principles of justice and autonomy (Nebeker, Torous, & Bartlett Ellis, 2019) come to the rescue here.
4. Lastly, we should also ask whether these AI solutions are commercially and operationally simple, secure, stable, scalable, and sustainable (I call these the 5S of any solution). The scope of these AI solutions differs based on the intended target market they serve. As a result, various considerations come into play, such as economic affordability for patients, provider reimbursement, and insurance payment models.
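The false positives and false negatives mentioned in question 3 can be made concrete with a small confusion-matrix sketch. The data below is invented purely for illustration; the point is that both error types exist, are counted separately, and each carries a different cost for the user.

```python
# Illustrative only: counting false positives and false negatives from
# hypothetical predictions, to show why users need recourse when a model errs.

def confusion_counts(actual, predicted):
    """Return (tp, tn, fp, fn) for binary labels."""
    tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
    tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)
    fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
    fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)
    return tp, tn, fp, fn

actual    = [1, 0, 1, 1, 0, 0, 1, 0]   # made-up ground truth
predicted = [1, 0, 0, 1, 1, 0, 1, 0]   # made-up model output
tp, tn, fp, fn = confusion_counts(actual, predicted)
sensitivity = tp / (tp + fn)   # share of real cases the model caught
specificity = tn / (tn + fp)   # share of negative cases correctly cleared
```

In a clinical setting, a false negative (a missed case) and a false positive (an unnecessary alarm) harm people in different ways, which is why the justice and autonomy principles demand that users be given options to contest either kind of error.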
So, understanding these four elements, the extensibility of the solution to the targeted audience, accountability for unintended use, productivity enhancement of the current situation, and commercial and operational considerations, is pivotal for any AI-based solution to accelerate its path to market. While AI enhances our ability to accelerate decision-making, building AI-based solutions needs to be carefully orchestrated. After all, abuse and adverse effects should not be the experiments by which we learn, especially in mission-critical applications or new products that can impact people.
What do you think?
References
Beauchamp, T. L., & Childress, J. F. (1979). Principles of Biomedical Ethics. New York: Oxford University Press.
Nebeker, C., Torous, J., & Bartlett Ellis, R. J. (2019). Building the case for actionable ethics in digital health research supported by artificial intelligence. BMC Med, 17, 137. https://doi.org/10.1186/s12916-019-1377-7
Rajagopalan, S. (2015). Abuser Persona: What shouldn't the software do? https://agilesriram.blogspot.com/2015/09/abuser-storieswhat-shouldn-software-do.html
Rajagopalan, S. (2016). Managing Others: Four Simple Powerful Questions. https://agilesriram.blogspot.com/2016/09/managing-others-four-simple-powerful.html