
Sunday, March 31, 2024

Relationship between Artificial Intelligence and Leadership

I recently attended the Healthcare Information and Management Systems Society (HIMSS) 2024 conference in Orlando, Florida. As the Global Head of Agile Strategy, I represented Inflectra Corporation, looking to learn about the growing influence of information systems in the healthcare space. It was wonderful to see the various improvements across the pre-clinical, clinical, and post-clinical stages of drug development; across healthcare and life sciences spaces like ambulatory care, hospital management, emergency care, and neonatal and pediatric care; and in the endless array of devices and software products that support these areas. 

One theme that ran through many of the sessions, keynotes, exhibits, and conversations among the attendees was the influence of artificial intelligence. Unlike many other audiences, these discussions were not just about large language models or generative artificial intelligence but also about deep machine learning algorithms within the attendees' lines of work. While people talked about training data, prompt engineering, and privacy and security considerations, only a handful of sessions, countable on one hand, touched on digital ethics or leadership. 

One of the expert panel speakers at the opening conference keynote recalled Steve Polack's quote, "... before talking about artificial intelligence, let us talk about natural stupidity!" History has taught us many lessons, yet we continue to repeat the same mistakes due to personal egos, bias, labeling, stereotyping, and plain unwillingness to learn. This is one of the reasons I say, "Common sense isn't common!" If we compare the evolution of people with the growth of technology, humanity has been around far longer than technology and still lacks perfection! How, then, can we expect technology to be perfect? 

This is the fundamental reason why we need leadership. Every time people rush to do something, it is leadership from everyone that puts the required checks and balances into the processes to ensure the right thing is done. For every change, including but not limited to what artificial intelligence brings, the question is not whether you like that change, but the rate at which we adopt it successfully. In a couple of sessions, experts referred to cases where a ChatGPT-based solution was not successful because there was no clear business case, and argued that leadership and AI governance are mandatory for AI to succeed at all. A business case is a strategy document balancing the benefits with the risks and showing how the strategy aligns with the vision (that is, moving from an "as is" state to a "to be" state). Such a business case is not made up of plain technical experiments without a solid use case to support the business. 

Furthermore, the closing keynotes focused on how 'advanced technology' does not mean technical elements alone but requires strategic considerations of legality and ethics. Embedded deeply in these thoughts is the need for leadership (not just the people at the top but also middle management across product, project, account, program, portfolio, process, technology, HR, etc.) to partner, engage, collaborate, and knowledge-share with multiple stakeholders, garnering support for the strategy and vision in the business case. This may additionally involve how to source funds, differentiating between investment and funding schedules, while simultaneously managing in-house talent for capacity, transition, and succession planning! I believe that if people are not engaged to lead and manage their processes, technology alone will not yield a solution. And if we don't think this way, we are not managing risks effectively and efficiently. Don't wonder why quality suffers in that case!

These thoughts go much further than focusing on Agile and DevOps thinking as part of AI-based experiments baked into iterations and spikes. I firmly believe the ways of working are integrating two big frameworks in today's digital transformation: the middle management frameworks (like portfolio, program, product, and project) and the software development lifecycle (SDLC). So, instead of getting mixed up with the plan-driven, adaptive, and hybrid ways of working that are equally important to both frameworks, we should focus on "Product Application Lifecycle Management" (PALM, in my mind), which brings the frameworks together using multi-artifact traceability and auditability. As I always say, enterprise business agility is not about shifting left and shifting right alone; it is also about shifting up and down. That is how value flows - both vertically and horizontally. 

As a part-time assistant teaching professor at Northeastern University, I can tell that not every graduate course in digital marketing, informatics, project management, software engineering, or business school even mandates a good understanding of leadership. In fact, project management graduates sometimes can't articulate the requirements of a contract or the procurement guidelines. Having also trained many professionals for certifications outside my teaching engagements, I feel that even tenured working professionals, including those holding the Certified Scrum Master credential, don't understand the ingredients of servant leadership. 

We are on the cusp of a major change, just as the Internet and telephony once changed the landscape of how we work. We have one more opportunity to write history by leading the AI wave in how we work or will work in the future. At this juncture, paying attention only to the technological aspects, or yielding to the comfort zones of known technical tools, is a sure prescription for failure. If we know the principles of leadership, then we can develop the right AI-based solutions, ensuring that digital ethical principles like beneficence, non-maleficence, justice, and autonomy are protected, and further ensuring that we monitor the AI's ability to explain itself, identifying model drift and hallucinations. 

Let us join forces to learn about leadership first and technology next. Share your thoughts. 

Monday, September 28, 2020

Artificial Intelligence Solutions: Four Considerations extended from Digital Bioethics

The COVID-19 pandemic was tightening its grip globally! It was scary! A few friends from India met on Zoom to check in on each other. Many of us discussed how healthcare could be improved and delivered faster now that artificial intelligence is here! As I have done some webinars on healthcare and had some exposure to medical fields, I shared some of my thoughts too! 

Unlike many who think artificial intelligence is relatively new, emerging in the early 2000s, the concept of generating insights from data and patterns is not new. During my senior year at the University of Madras, I studied Prolog on my own, with an additional certificate program at Thyagarajar College of Engineering, Madurai. I learned how Prolog, a logic-driven language, differed from forward-directional procedural languages: it backtracks through its logic to look for patterns until a solution to a sub-goal is met! There were references to the green cut, which altered the procedural behavior of the program but not its logical behavior, and the red cut, which also changed the logical behavior. This was a precursor to expert systems (using software to learn by itself).

Subsequently, when I was taking an information technology certificate course as part of Biomedical Engineering at the University of Aberdeen in the early 1990s, I had the opportunity to build a small clinical program in Prolog (which, sadly, I have now forgotten) to differentiate standard respiratory, cold, cough, and fever related symptoms and either act as an automatic computerized first responder or recommend an intervention from a healthcare practitioner.
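The kind of symptom-differentiation logic described above can be sketched as a backward-chaining rule engine: each goal is proven either because it is a known fact or because all sub-goals of some rule for it can be proven, recursing the way Prolog resolves sub-goals. This is a minimal illustration in Python rather than Prolog; the rules and symptom names are hypothetical, not the original program:

```python
# Hypothetical rules: each goal maps to a list of alternative rule bodies,
# and a body is a list of sub-goals that must all hold.
RULES = {
    "common_cold": [["runny_nose", "sneezing"]],
    "flu_like":    [["fever", "body_ache"], ["fever", "cough"]],
    "see_doctor":  [["flu_like", "shortness_of_breath"]],
}

def prove(goal, facts):
    """Backward chaining: a goal holds if it is a known fact,
    or if every sub-goal of any one of its rule bodies holds."""
    if goal in facts:
        return True
    return any(all(prove(sub, facts) for sub in body)
               for body in RULES.get(goal, []))
```

For example, `prove("see_doctor", {"fever", "cough", "shortness_of_breath"})` succeeds because the second `flu_like` body is satisfiable, mirroring how Prolog backtracks to an alternative clause when the first fails.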

Furthermore, when I did my graduate program in Detroit, I experimented with logic gates (AND, OR, NAND, NOR, XOR, and NOT, all of which facilitate algorithmic constructs like sequencing, branching, and looping in procedural languages) and with neural networks that allow hardware/firmware-level learning, applied to simulated anti-lock braking systems (ABS) in the automotive space. 
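As a minimal sketch of the connection between gates and neural units (with hand-picked weights, not a trained network), each classic gate can be expressed as a single threshold unit, and composing units yields XOR, which no single linear unit can compute on its own:

```python
# A threshold unit: fires (1) when the weighted sum of inputs plus bias
# exceeds zero, which is the simplest perceptron-style neuron.
def unit(w1, w2, bias):
    return lambda x1, x2: int(w1 * x1 + w2 * x2 + bias > 0)

# Hand-picked weights realize the basic gates as single units.
AND  = unit( 1,  1, -1.5)
OR   = unit( 1,  1, -0.5)
NAND = unit(-1, -1,  1.5)

def XOR(x1, x2):
    # XOR is not linearly separable, so it needs two layers:
    # XOR(a, b) = AND(OR(a, b), NAND(a, b))
    return AND(OR(x1, x2), NAND(x1, x2))
```

The two-layer XOR is the textbook reason multi-layer networks matter: stacking simple units expresses functions a single unit cannot.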

Based on all these experiences, when it comes to using learned intelligent behavior, we need to ask ourselves a few important questions. These questions are almost analogous to those in the article "Managing Others: Four Simple Powerful Questions" (Rajagopalan, 2016). 

1. Does our knowledge extend to the new situation? We can't assume that our knowledge applies to a new situation without any reservations. To ensure that our knowledge can work in a new situation, we need to be consciously aware of its boundary conditions. Only when we do this can we eliminate the blind spots in our thinking that endanger the "beneficence" principle of digital ethics (Beauchamp & Childress, 1979; Nebeker, Torous, & Bartlett Ellis, 2019). 

2. Who would be accountable for abuse or misuse? Even if we have addressed the first question very effectively, we must think through the unknowns and put process controls in place proactively to avoid any potential use of the product in an unintended way. Just like the discussion of the abuser persona (Rajagopalan, 2015), it is important for us to look at what our AI-based solutions should not do. I believe this question makes one think about the digital ethics principle of non-maleficence (Nebeker, Torous, & Bartlett Ellis, 2019). In simple words, AI solution designers and developers should answer the question of who is accountable when something goes wrong.

3. AI is meant to relieve us from mundane decisions so that people can solve more important problems. So, one of the questions we should ask ourselves is the "So what?" type of question. In other words, how will the AI solution improve our productivity by augmenting efficiency, promoting effectiveness, and enhancing efficacy? (I call them the 3E's of better product development, regardless of the type of solution, such as software or healthcare.) For instance, we should consciously ask questions about eliminating bias and stereotyping so that the solutions are impartial and promote equality. Furthermore, how many options do we give the users of these AI solutions when they generate false positives (or false negatives)? The digital bioethics principles (Nebeker, Torous, & Bartlett Ellis, 2019) again come to the rescue with justice and autonomy.  

4. Lastly, we should also ask whether these AI solutions are commercially and operationally simple, secure, stable, scalable, and sustainable (I call them the 5S of any solution). The scope of these AI solutions differs based on the intended target market they serve. As a result, various considerations come into play, such as economic affordability for patients, provider reimbursement, and insurance payment models. 

So, understanding these four elements - extensibility of the solution to the targeted audience, accountability for unintended use, productivity enhancement of the current situation, and commercial and operational considerations - is pivotal for any AI-based solution to accelerate itself to the market. So, while AI enhances our ability to accelerate decision-making, building AI-based solutions needs to be carefully orchestrated. After all, abuses and adverse effects should not be the experiments by which we learn in new product development for mission-critical applications or products that can impact people. 

What do you think?    

References

Beauchamp, T. L., & Childress, J. F. (1979). Principles of Biomedical Ethics. New York: Oxford University Press. 

Nebeker, C., Torous, J., & Bartlett Ellis, R. J. (2019). Building the case for actionable ethics in digital health research supported by artificial intelligence. BMC Medicine, 17, 137. https://doi.org/10.1186/s12916-019-1377-7

Rajagopalan, S. (2015). Abuser Persona: What shouldn't the software do? https://agilesriram.blogspot.com/2015/09/abuser-storieswhat-shouldn-software-do.html

Rajagopalan, S. (2016). Managing Others: Four Simple Powerful Questions. https://agilesriram.blogspot.com/2016/09/managing-others-four-simple-powerful.html