Friday, January 8, 2021

Redefining Value in Artificial Intelligence Solutions

Recently, I worked with a group of students on their interview preparation. These students were from the Informatics major. In some mock interviews, I asked them questions about the value of their contributions to the business. All their answers focused on technology and nothing on the cross-functional implications of technology-enabled solutions. About six months back, I had the opportunity to develop the curriculum for Northeastern University's Informatics capstone class (Rajagopalan, 2020b), where I continued to emphasize the use of technology in business decisions. I was heartbroken to see that the students' interview preparation focused on technology aspects alone.

I had earlier discussed four questions people should ask themselves when developing AI-based solutions (Rajagopalan, 2020a). These questions were primarily based on the principles of biomedical ethics proposed by Beauchamp & Childress (1979): beneficence (do good), non-maleficence (do no harm), autonomy (give options), and justice (be fair). I have added simple definitions of these principles in parentheses. As you can see, anyone creating an AI-enabled solution that uses data and patterns for insightful problem solving or faster decision-making must hold themselves accountable for the impact their solutions have on people.

As I thought about the gaps these students had in using technology to solve a problem, I saw that they mostly focused on capabilities, sometimes on outcomes, and rarely on benefits and value. Readers should see my blog on strategic project management lessons from a dental visit (Rajagopalan, 2020c) for what the terms capabilities, outcomes, benefits, and value mean. It dawned on me then that the term "value" itself needs to be redefined from its "business" focus to an "ethical" focus.

In business, value is defined as the degree of benefit someone derives from the solutions we have provided. Value in the business context is frequently evaluated in terms of strategic outcomes and compliance with procedural practices. These procedural practices are often codified in organizational policies and processes, adopted from company-specific methodologies, and embedded as guidelines in cultural norms.

However, when we take ethics into account, we dig deeper into the rationale and the reason! It is no longer about profits and markets but about people and the planet. The "greater good" discussions require everyone to demonstrate leadership (regardless of one's role), where value stands for doing the right thing by questioning the impact of our solutions on people and the planet. That is the foundation of ethics, where we focus on fairness, responsibility, honesty, and respect. Now, these digital bioethics principles, extended from care for people, may be difficult to implement correctly despite their simplicity. So, I thought about what guidelines and guardrails to provide.

After much deliberation, I came up with the notion that everyone designing or developing any AI-enabled or AI-embedded solution must consciously evaluate four considerations: fairness, alternatives, risk management, and efficacy. I feel that any product built on or embedding AI services should be evaluated against these areas to support digital bioethics. Let me elaborate.

Fairness: In this area, we can question the inherent assumptions we make about the personas represented in the product's target market! Asking the following sample questions may help us check against beneficence and non-maleficence; a small sketch follows the list.

  1. To what extent do the assumptions we make demonstrate our bias?
  2. To what extent are we documenting our assumptions transparently, so that we can identify which demographics of the population will not be served by our solutions? Or worse, not served correctly?
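
To make these fairness questions concrete, here is a minimal sketch in Python, entirely my own illustration: it checks whether a model's positive prediction rate differs across demographic groups, one way to surface the bias our assumptions may have baked in. The data, the group labels, and the 0.8 ("four-fifths") threshold are all hypothetical.

```python
# Minimal sketch: checking selection rates across demographic groups.
# Assumptions: predictions and group labels are plain Python lists;
# the group labels and the 0.8 "four-fifths" threshold are illustrative.

from collections import defaultdict

def selection_rates(predictions, groups):
    """Fraction of positive predictions per demographic group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    return {g: positives[g] / totals[g] for g in totals}

def flag_disparity(rates, threshold=0.8):
    """Flag groups whose selection rate falls below the threshold
    relative to the best-served group (the "four-fifths" rule)."""
    best = max(rates.values())
    return {g: r for g, r in rates.items() if r < threshold * best}

# Example: a model's outputs by (hypothetical) demographic group
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "B", "B", "B", "B", "A", "A", "B"]
rates = selection_rates(preds, groups)
print(rates)                  # {'A': 0.6, 'B': 0.4}
print(flag_disparity(rates))  # {'B': 0.4} -- possibly not served correctly
```
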
Alternatives: In this area, I incorporate considerations for experimental and exploratory concepts while still providing choices or options for people to work around the edge cases where our solutions do not support the targeted market. So, despite our intentions to avoid false positives or false negatives, we leave options on the table. Asking ourselves questions like the following may help us elevate our thinking toward autonomy and justice; see the sketch after this list.

  1. What measures do we put in place for users to reject our recommendations (and allow us to learn from that feedback)?
  2. What options do we allow for people to decline using our solutions or recommendations altogether?
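As one way to honor these options in code, here is a minimal sketch assuming a simple recommendation object; every name here (Recommendation, accept, reject, the example plans) is hypothetical. The point is that rejection is a first-class path that is recorded, so the system can learn from real disagreement.

```python
# Minimal sketch: a recommendation that users can reject, with the
# rejection captured so the system can learn. All names are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Recommendation:
    item: str
    rationale: str          # transparency: why this was recommended
    alternatives: list      # autonomy: other viable options
    feedback_log: list = field(default_factory=list)

    def accept(self):
        self.feedback_log.append(("accepted", self.item))
        return self.item

    def reject(self, reason=None, chosen=None):
        # Autonomy and justice: honor the user's choice and record why,
        # so the model can be retrained on the disagreement signal.
        self.feedback_log.append(("rejected", self.item, reason, chosen))
        return chosen  # may be None: the user can opt out entirely

rec = Recommendation(
    item="Plan B",
    rationale="Lowest projected cost for your usage profile",
    alternatives=["Plan A", "Plan C", "No plan"],
)
rec.reject(reason="Prefers flexibility over cost", chosen="Plan C")
print(rec.feedback_log)
```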

Risk Management: I feel that both the fairness and the alternatives thinking are rooted in the risk management discipline. I feel that anyone who does not have a basic understanding of a risk management framework should not be allowed to design or develop any AI-enabled or AI-embedded solution. The sketch after this list shows one way to score such risks.

  1. Familiarizing ourselves with the risk management lifecycle (identification, assessment, evaluation, treatment) ensures that we understand the categories of risk (safety, security, data privacy, informed consent procedures, etc.) and can document the risk breakdown structure.
  2. Consequently, we are able to understand the likelihood (probability), impact (severity), and detectability (on a qualitative, semi-quantitative, or quantitative scale) so that we can contribute to assessing the risks and their impact on the solutions we conceive, design, and develop.
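
As a worked illustration of how likelihood, impact, and detectability combine, here is a sketch of a tiny risk register scored FMEA-style (likelihood x severity x detectability). The FMEA-style product is my choice of scoring, not something the lifecycle mandates, and the 1-5 scales and example risks are hypothetical.

```python
# Minimal sketch: an FMEA-style risk register. Scoring each risk as
# likelihood x severity x detectability is one common semi-quantitative
# approach; the 1-5 scales and the example entries are hypothetical.

risks = [
    # (category, description, likelihood, severity, detectability)
    # 1 = low; for detectability, higher = harder to detect.
    ("data privacy", "PII leaks into training logs",    3, 5, 4),
    ("safety",       "Model recommends unsafe dosage",  2, 5, 3),
    ("consent",      "Users unaware data is repurposed",4, 3, 4),
    ("security",     "Prompt injection alters outputs", 3, 4, 2),
]

def risk_priority(likelihood, severity, detectability):
    """Higher numbers mean treat this risk first."""
    return likelihood * severity * detectability

# Rank the register so the highest-priority risks are treated first.
register = sorted(risks, key=lambda r: risk_priority(*r[2:]), reverse=True)
for category, desc, l, s, d in register:
    print(f"{risk_priority(l, s, d):>3}  [{category}] {desc}")
```
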

Efficacy: This last element evaluates the value of our solution, because it measures the degree to which our solution can influence the desired behavior by producing the intended results correctly and consistently.

  1. Such thinking should factor risk management concepts into exception scenarios, such as thinking like an abuser (Rajagopalan, 2015), so that we protect users and their data.
  2. In technical solutions, we should think through all code paths (not just happy-path testing) and use both scripted and unscripted testing to augment quality; the sketch below illustrates this.
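
Here is a minimal sketch of that idea using pytest: one happy-path test and several abuser-story style cases the code must refuse. The validate_age function and its limits are hypothetical stand-ins for any input-validation logic.

```python
# Minimal sketch: testing beyond the happy path with pytest.
# The validate_age function and its limits are hypothetical.

import pytest

def validate_age(value):
    """Parse and range-check a user-supplied age."""
    age = int(value)          # raises ValueError on non-numeric input
    if not 0 <= age <= 130:
        raise ValueError(f"age out of range: {age}")
    return age

def test_happy_path():
    assert validate_age("42") == 42

# Abuser-story style cases: inputs the software should refuse.
@pytest.mark.parametrize("bad_input", [
    "-1",            # below range
    "999",           # above range
    "DROP TABLE",    # injection-shaped garbage
    "",              # empty input
    "4.2e1",         # numeric-looking but not an integer
])
def test_rejects_abusive_input(bad_input):
    with pytest.raises(ValueError):
        validate_age(bad_input)
```
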

What are your thoughts?


References

Beauchamp, T.L. & Childress, J.F. (1979). Principles of Biomedical Ethics. New York: Oxford University Press.

Rajagopalan, S. (2015). Abuser stories: What shouldn't the software do? https://agilesriram.blogspot.com/2015/09/abuser-storieswhat-shouldn-software-do.html

Rajagopalan, S. (2020a). Artificial Intelligence: Four Considerations extended from Digital Bioethics. https://agilesriram.blogspot.com/2020/09/artificial-intelligence-solutions-four.html

Rajagopalan, S. (2020b). Sriram's Approach to Course Design. https://youtu.be/DAVuICDWBpo

Rajagopalan, S. (2020c). Lessons Learned on Strategic Management from a Dental Visit. https://agilesriram.blogspot.com/2020/08/lessons-learned-on-strategic-project.html