Box Thoughts
March 18, 2021

Are you checking behind your AI to make sure it isn’t leading you wrong?

The story of technology is predictable. A new tool comes out in alpha and starts to generate press. It moves to beta and shows amazing potential for some universal use cases. It moves into production with early adopters, and word spreads exponentially as their use cases generate real business benefits. It is then adopted by general enterprise IT groups who do not implement it with the same level of understanding or alignment to its value proposition, but things still go fairly well because the purchasers have a reasonable grasp of why they need it. Then come the late adopters, who buy it because everyone else did. Suddenly the picture isn't so rosy, because the new tool is actually breaking the processes of the companies that adopt it.

There are few new technologies in this world that do not have a dark side. It takes decades for tools to mature to the point that they are safe for the unsuspecting enterprise to use. There are still horror stories about the ravages Microsoft Access wreaked on critical business processes a decade after people stopped using it. SharePoint is another great example of a tool that has brought entire organizations to their knees as it crumpled under the weight of expectations and poor management.

AI is the next big thing, and it does not have decades behind it yet. But the value proposition seems so high that companies are diving in without looking for the canyon in front of them.

By way of analogy, consider Tesla's automated driving modes, which are run by AI. Teslas are considered among the safest cars available by most safety-testing measures. They are built well. They are also considered the smartest cars on the road, especially given their driverless capability. But there have been some scary fatalities associated with this mode, where drivers completely forgo any responsibility for what the car is doing. The driver sits in the passenger seat watching a DVD as the car decides to do something unexpected.

This is the big risk of AI in the enterprise: no one watches as it begins to move in the wrong direction. Six months of letting AI run without oversight can lead to years of recovery after the fact. Not to mention the enterprise-wide distrust of all future AI implementations that may actually add value.

Technology, and especially automation technology, is not an invitation to eliminate the human element. Unfortunately, these implementations are sometimes viewed as a shortcut to making heavy staffing cuts, which leads to a lack of oversight.


