This is the anti-AI AI post.
If you’re one of the IT leaders who’s been pushing back on all the hype around AI, this one’s for you.
Another buzzword that gets thrown around quite a bit is ‘thought leadership’. Charles Araujo’s approach to most things is what sets him apart from the crowd.
Charlie is an IT veteran who has written three books and hundreds of articles on technology. He published The Digital Experience Report and founded and writes for The Institute for Digital Transformation.
Before becoming an analyst, Charlie ran technical ops for a billion-dollar healthcare org. He has also run large-scale technology and digital transformation programs as a consultant.
If you get past his accomplishments and sit down to chat with him, you’ll soon find that he has a grounded, first-principles perspective on pretty much everything. He’s not afraid to question conventional wisdom and has a particular aversion to overcomplicating things. He also has a wealth of knowledge and uses plenty of anecdotes to illustrate his points.
So, when he agreed to join us for an episode of the Atomic Conversations podcast, I knew that it was going to be real talk.
Charlie has put together what he calls the enterprise technology landscape. In it, he purposely avoids adding ‘conversational AI’ as a category for enterprise tech products.
I wanted to understand why.
His take is that in the long term, the conversational interface will become the norm. It will be a feature that’ll come embedded within products across categories.
Another reason is that Charlie primarily focuses on the problem he is trying to solve. Conversational AI is typically a means of solving the problem rather than the core solution itself.
Since the launch of ChatGPT and the frenzy around it, the perception of conversational AI has also shifted. Pre-ChatGPT conversational interfaces were more rooted in traditional enterprise-grade technology and, by association, were approached with a higher level of trust.
Another downside of the ChatGPT buzz is that all conversations around AI get mixed up with GenAI, even though AI is much broader.
While thinking about the applications of AI in IT, Charlie brings the conversation back to the same question – What problems are we trying to solve?
In ITSM, it comes down to two main use cases: prioritizing work by business impact and strengthening security. The two are related, so let’s unpack each.
IT organizations are awash in operational data. AI can help sort out what is critical and needs immediate attention. This, in turn, can help identify and tackle the issues that are causing (or could potentially cause) the most disruption to the business.
Back when Charlie was leading an IT org, he built a system focused on ensuring that the service desk was working on the most critical issue at any given time.
While Charlie’s system didn’t employ AI at the time, prioritizing tickets based on business impact is something most support teams are still not doing. To this day, most teams follow a first-in, first-out (FIFO) approach when dealing with tickets.
Let’s say you have a mechanism in place to classify ticket priority based on impact and urgency. Odds are, the urgency and impact are still specified by a human whose judgement can be flawed, biased, or both. AI can objectively weigh the different parameters that determine impact and urgency, augmented with historical and contextual data, to automatically prioritize incoming issues.
ITSM, at its core, is a workflow process, and done right, this approach can help manage your support workflow in a much more sophisticated and dynamic fashion.
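To make that concrete, here’s a minimal sketch of what impact-and-urgency-based scoring could look like. The signal names, weights, and sample values below are illustrative assumptions – not Charlie’s system or any particular product – and in practice the weights would be learned from historical ticket and business-impact data rather than hand-tuned.

```python
from dataclasses import dataclass

# Illustrative weights (assumptions, not a reference implementation).
WEIGHTS = {
    "service_criticality": 0.4,   # how critical the affected service is to the business
    "users_affected": 0.3,        # normalized count of impacted users
    "urgency_signal": 0.2,        # e.g. outage vs. degradation vs. routine request
    "historical_recurrence": 0.1, # similar past tickets that escalated
}

@dataclass
class Ticket:
    ticket_id: str
    service_criticality: float    # 0.0 - 1.0
    users_affected: float         # 0.0 - 1.0 (normalized)
    urgency_signal: float         # 0.0 - 1.0
    historical_recurrence: float  # 0.0 - 1.0

def business_impact_score(t: Ticket) -> float:
    """Combine the signals into a single priority score."""
    return (
        WEIGHTS["service_criticality"] * t.service_criticality
        + WEIGHTS["users_affected"] * t.users_affected
        + WEIGHTS["urgency_signal"] * t.urgency_signal
        + WEIGHTS["historical_recurrence"] * t.historical_recurrence
    )

def prioritize(queue: list[Ticket]) -> list[Ticket]:
    """Work the highest-impact tickets first instead of first-in, first-out."""
    return sorted(queue, key=business_impact_score, reverse=True)

if __name__ == "__main__":
    queue = [
        Ticket("INC-101", 0.9, 0.8, 1.0, 0.2),  # payment outage
        Ticket("INC-102", 0.3, 0.1, 0.4, 0.0),  # single-user printer issue
    ]
    for t in prioritize(queue):
        print(t.ticket_id, round(business_impact_score(t), 2))
```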
Most security concerns and risks come down to one thing – data.
If there’s an attempted attack on the data, an AI-based tool can take into account signals coming in from multiple systems and give them scores based on how anomalous they are. Multiple related anomalies can compound and raise a red flag. Identifying these anomalous signals that happen asynchronously is difficult for a human.
The underlying principle is similar to AIOps, where you identify related threads of activity occurring in different places and link them together. AI can be used for security in the same way.
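Here’s a minimal sketch of that kind of cross-system correlation. The event sources, scores, correlation window, and alert threshold are made-up assumptions for illustration; a real deployment would learn its baselines and thresholds from data rather than hard-coding them.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical anomaly events from different monitoring systems. Each carries
# a source, an affected entity (user, host, etc.), a score, and a timestamp.
events = [
    {"source": "auth",     "entity": "user42", "score": 0.6, "ts": datetime(2024, 5, 1, 2, 14)},
    {"source": "endpoint", "entity": "user42", "score": 0.5, "ts": datetime(2024, 5, 1, 2, 40)},
    {"source": "egress",   "entity": "user42", "score": 0.7, "ts": datetime(2024, 5, 1, 3, 5)},
    {"source": "auth",     "entity": "user17", "score": 0.4, "ts": datetime(2024, 5, 1, 9, 0)},
]

WINDOW = timedelta(hours=2)   # how far apart related anomalies can be
ALERT_THRESHOLD = 1.5         # combined score that warrants a red flag (illustrative)

def correlate(events):
    """Group anomalies by entity and flag entities whose related anomalies compound."""
    by_entity = defaultdict(list)
    for e in events:
        by_entity[e["entity"]].append(e)

    alerts = []
    for entity, evs in by_entity.items():
        evs.sort(key=lambda e: e["ts"])
        # keep only the events that fall within the correlation window of the first one
        related = [e for e in evs if e["ts"] - evs[0]["ts"] <= WINDOW]
        combined = sum(e["score"] for e in related)
        if len(related) > 1 and combined >= ALERT_THRESHOLD:
            alerts.append((entity, combined, [e["source"] for e in related]))
    return alerts

for entity, score, sources in correlate(events):
    print(f"Red flag: {entity} scored {score:.1f} across {sources}")
```

No single event here is alarming on its own, but the three anomalies tied to user42 compound into something worth flagging – exactly the kind of asynchronous pattern a human would struggle to spot.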
The next step is to enable remedial action. If all a system does is raise an alert for a human to act upon, the damage will often already be done. You want the system to sandbox the threat to limit the potential impact, but in a way that doesn’t cause long-term harm to the business.
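A minimal sketch of what that guardrail might look like, building on a combined anomaly score like the one above; the action names and the list of business-critical assets are hypothetical and would map onto whatever containment controls your tooling actually exposes.

```python
# Assets an automated responder should never isolate on its own (assumption).
BUSINESS_CRITICAL = {"payments-db", "erp-core"}

def respond(entity: str, combined_score: float) -> str:
    """Contain automatically where it's safe; escalate to a human otherwise."""
    if combined_score < 1.5:
        return "monitor"        # not enough evidence to act
    if entity in BUSINESS_CRITICAL:
        return "page_on_call"   # containment could disrupt the business, so a human decides
    return "sandbox"            # isolate the asset while preserving evidence

print(respond("user42", 1.8))       # sandbox
print(respond("payments-db", 2.3))  # page_on_call
```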
Even after a human expert steps in to address the security alert, speed is often of the essence. A conversational AI interface can let them issue verbal commands to pull up data that resides in multiple systems, which can dramatically speed up the response to the threat.
In cybersecurity, as with most other disciplines within IT, the true power of AI is to act as an enabler for the humans who make the final call, by doing what humans can’t do well.
IT leaders have been rightfully cautious about implementing AI despite all the frenzy around it.
A study conducted by Atomicwork and ITSM.tools found that 3 out of 4 employees – IT’s end users – were using free AI tools, like ChatGPT, for their work. That, combined with the expectations of the board and the C-suite, puts real pressure on IT organizations to adopt AI.
Charlie cautions against giving in to the pressure:
“If your board or your executive team is pushing you to adopt AI, your first question should be 'Why? What’s the business challenge that you think AI is going to solve?'
If you try to implement AI for the sake of technology, you’re setting yourself up to waste a lot of time and waste a lot of money.”
That said, organizations that fail to look inward and ask, “Where can we apply AI to create value – either directly, in terms of customer value, or from an operational efficiency standpoint?” will leave themselves at a disadvantage.
Because of all the hype and pressure, your competitors will be looking at how they can leverage AI. You can’t afford to stand still.
If you’re running IT for a mid-sized enterprise, you might not have the budget, resources, or expertise within your team to develop AI-driven technology yourself.
AI is an extremely expensive road to go down, across the spectrum – the hardware, the support staff, the development staff, the data scientists, and so on. The easy approach gives you no advantage, and the hard approach is impractical.
You’d be better off taking an “off the shelf” AI technology and applying it to your org.
This includes the AI and GenAI capabilities within the ITSM tools you already use. It also includes newer technology like AI copilots that promise productivity gains – although Charlie remains sceptical that these tools can drive enough productivity gain to justify the cost, since they are fairly expensive today.
"You need to be cautious, yet progressive. You should ask yourself whether these tools provide enough value to justify the cost."
For a larger enterprise, though, a worthy use case or opportunity can justify creating a bespoke AI-based technology. If it enables you to create competitive differentiated value for your customer, it can leapfrog your business in the marketplace and provide a disproportionate return on investment.
The sweet spot for most organizations will be in the middle – an out-of-the-box solution that provides more than just raw productivity but without the overhead of investing millions of dollars in building an AI team.
SaaS made Shadow IT super easy, and “Shadow AI” looks set to proliferate even faster in the workplace, thanks to tools like ChatGPT and Perplexity. How does Charlie recommend dealing with it while deploying AI technology to end users?
There are lots of nuances, risks, and challenges here. His general standpoint is that IT organizations need to treat their consumers as adults: educate them that not all AI is the same and that each technology needs to be treated differently.
You have to realize that this can’t be stopped. So, if you focus your time and energy on building moats and walls, you’re going to lose that battle. Instead, focus on the cultural and social engineering aspects where you get them on board as stakeholders.
Blocking all AI outright is another extreme measure some IT organizations take, which Charlie finds shocking. This calls for a nuanced strategy, but the starting point is to recognize that not only is this going to happen, we need it to happen.
“Shadow IT is generally a symptom of a much bigger problem. It reflects the fact that the IT organization is not being responsive enough to the needs of the organization.
The answer isn’t to put up another wall. The answer is to tear them down.”
The answer is to bring in your business peers and make them part of the process. When they share the accountability, they become more responsible.
Every time a user circumvents IT guardrails and purchases a tool, it’s not just because they can. It’s because of the value it gives them. IT needs to understand this value and the results that the user is expecting to achieve and help them get there.
So, where does all of this leave the future of AI in IT? This is, admittedly, a tough question to answer. It’s hard to predict the future of any space, but in this context, we’re seeing two completely opposite trendlines:
“I ran an executive event about a year ago and asked the CIOs and senior execs there, ‘What are your biggest challenges?’ What I heard were the exact same challenges from when I was running IT 20-odd years ago.
They’re really interested in GenAI, but if you ask about their tech stack, you find that most of them are still running on legacy architecture. They’re just trying to migrate to the cloud.”
How will these two opposing forces balance?
On the one hand, we’ll see the continued evolution of this technology and its capability.
On the other hand, IT organizations are going to stay focused on the business value of any given technology.
Most of the GenAI use cases today are solutions looking for problems. There’s value in that, since you can take the blinders off and re-envision the way you work or engage with the customer.
The technology has made some things possible that weren’t possible a year ago. But it has to be justifiable and it has to make sense in a business context.
Charlie thinks that while we’re going to see a continued explosion in this technology, most enterprises won’t exactly transform into something totally different from how they work today.
There will likely be some amazing stories of a handful of organizations that found that sweet spot, generated meaningful competitive value, and propelled themselves into greatness.
But that will be the exception, not the norm.
You can listen to the whole podcast here.