


Embracing the inevitable: How IT can be a facilitator, not a gatekeeper, of AI


In early May 2023, electronics behemoth Samsung cracked down on employees' use of GenAI tools.

The ban applied to company-owned devices, including computers, tablets, and phones, as well as personal devices running on internal networks, and it covered everything from ChatGPT to Microsoft's Bing and Google's Bard.

The reason?

A month earlier, a Samsung employee had pasted confidential code into ChatGPT to check it for errors. Once submitted, that proprietary code left the company's control and could be retained and used to train the model.

The Economist Korea reported three separate incidents in which employees leaked confidential information through ChatGPT: checking code for errors, optimizing code, and converting a meeting recording into notes.

Samsung immediately capped ChatGPT uploads at 1,024 bytes per person. News reports also said the company planned to build an internal AI chatbot to prevent such incidents.

Within a few weeks of the ban, the Korea Economic Daily reported that two of South Korea's IT giants, Naver Corp and Samsung, had partnered to develop a GenAI platform for corporate users. The Korean-language tool would first be used by Samsung's Device Solutions (DS) division once developed.

With this partnership, Samsung would secure an in-house AI tool that could push the boundaries of productivity without risking confidential information leaks, while Naver would use it to enter the global enterprise AI market.

Moral of the story?

Companies need to evolve to meet the expectations of the modern enterprise employee. With AI permeating every aspect of our lives, the workplace must embrace it while addressing its security implications.

IT teams must be enablers, not gatekeepers, of AI in the workplace.  

What is the state of AI usage in workplaces today?

If you thought this was a one-off incident, you couldn't be more mistaken.

We at Atomicwork recently conducted a survey on AI in IT, and asked participants this question: Do IT end users (employees) use free AI tools for their work?

The numbers are rather telling.

75% of the participants we surveyed said yes. And you guessed it: ChatGPT was the preferred platform. Here's a breakdown of what employees use it for:

  • Creative Ideation or Problem Solving – 46%
  • Email Drafting or Editing – 40%
  • Content Creation or Editing – 35%
  • Market Research – 29%
  • Data Analysis and Synthesizing Insights – 29%
  • Code Generation and Testing – 17%

(Note: The percentages add up to more than 100% because respondents could select multiple options.)

In other words, if you are on the IT team of a sizeable corporation, three out of four employees are likely using free AI tools, with or without your knowledge.

Given this reality, what should IT teams do? 

The role of IT in managing AI adoption

According to our survey, the IT team initiated AI adoption in nearly two-thirds of organizations (61%), with the C-suite accounting for about a quarter (24%). Notably, AI adoption had progressed less in organizations where the C-suite originated the push than in those where the IT team led it. This underscores how central IT's role is in leading AI adoption.

Given the IT team's strong pulse on employee requirements, it is best placed to define the AI-in-IT roadmap and drive its implementation. The survey also lists the most frequently cited benefits of AI adoption: data analytics and synthesizing insights (45%), chatbots for self-service (38%), and improving employee experience and workflow automation and optimization (tied at 34%).

Now, given this context, where should IT teams start?

Step 1: Set guidelines for governance

The key to avoiding a Samsung-like incident is to start with clear guidelines and governance policies.

This comprehensive framework for AI tool usage within the organization should cover:

  • Approval process: Define a transparent and straightforward process for evaluating and approving AI applications. This includes assessing the tool's relevance to business needs, compatibility with existing systems, and adherence to ethical standards (one way to codify such a checklist is sketched after this list).
  • Policy development: Develop specific policies outlining the acceptable use of AI tools. These policies should cover aspects like data handling, user privacy, and ethical considerations specific to AI. While developing the policy, the IT team must also account for the data privacy laws the organization needs to comply with.
  • Audit and review: The IT team must implement regular audits of AI tools to ensure continuous compliance and assess these tools' impact on business operations and data security.
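
To make the approval process concrete, one option is to codify the evaluation criteria so every request to adopt an AI tool is assessed the same way. The Python sketch below is purely illustrative: the criteria names, the `ToolRequest` structure, and the `evaluate` helper are assumptions for the sake of the example, not a prescribed standard.

```python
from dataclasses import dataclass, field

# Hypothetical evaluation criteria for an AI tool request.
# The criteria and the required answers are illustrative, not a standard.
APPROVAL_CRITERIA = {
    "handles_confidential_data_safely": True,   # prompts stay in approved environments
    "compatible_with_existing_sso": True,       # integrates with the identity provider
    "vendor_signs_dpa": True,                   # data processing agreement in place
    "meets_ethical_use_policy": True,           # bias, transparency, acceptable-use checks
}

@dataclass
class ToolRequest:
    """A request from an employee or team to approve an AI tool."""
    tool_name: str
    requested_by: str
    answers: dict = field(default_factory=dict)  # criterion -> True/False

def evaluate(request: ToolRequest) -> tuple[bool, list[str]]:
    """Return (approved, unmet criteria) for a tool request."""
    unmet = [
        criterion
        for criterion, required in APPROVAL_CRITERIA.items()
        if request.answers.get(criterion) != required
    ]
    return (len(unmet) == 0, unmet)

if __name__ == "__main__":
    request = ToolRequest(
        tool_name="ChatGPT (free tier)",
        requested_by="marketing",
        answers={
            "handles_confidential_data_safely": False,  # free tier may retain prompts
            "compatible_with_existing_sso": True,
            "vendor_signs_dpa": False,
            "meets_ethical_use_policy": True,
        },
    )
    approved, gaps = evaluate(request)
    print(f"{request.tool_name} approved: {approved}; gaps: {gaps}")
```

However the checklist is implemented, the point is that every tool goes through the same documented evaluation before employees start using it.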

Step 2: Invest in training and support

While developing guidelines and policies is the first step, reinforcing them with adequate training and support mechanisms is essential.

  • User training: Conduct regular training sessions for end-users, focusing on the functionalities of AI tools, best practices, and the importance of data accuracy.
  • Awareness programs: Create awareness around the ethical implications of AI, including biases in AI algorithms and the importance of maintaining data privacy.
  • Support channels: Establish dedicated support channels where users can report issues, seek guidance, and provide feedback on AI tools.

Step 3: Ensure security and compliance

This is arguably the most important step from an IT perspective. The IT team must establish clear protocols to govern the use of AI tools.

  • Security protocols: Implement robust security protocols for AI applications, including data encryption, access controls, and secure data storage practices (one illustrative pre-prompt screen is sketched after this list).
  • Regular monitoring: Continuously monitor AI tools for any security vulnerabilities or data breaches and take immediate corrective actions when necessary.
  • Compliance checks: Regularly review AI tools to ensure they comply with evolving legal and regulatory standards.
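
To illustrate what such a protocol can look like in practice, the hypothetical Python sketch below screens outbound prompts for size and obvious confidentiality markers before they reach an external AI service. The patterns, the byte cap, and the `screen_prompt` helper are illustrative assumptions, not a complete data loss prevention solution.

```python
import re

# Illustrative patterns only; a real deployment would use the organization's
# own classification labels and a proper DLP engine.
BLOCKED_PATTERNS = [
    re.compile(r"(?i)\bconfidential\b"),
    re.compile(r"(?i)\binternal use only\b"),
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),          # AWS access key ID format
    re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),
]

MAX_PROMPT_BYTES = 1024  # cap inspired by Samsung's reported per-person limit

def screen_prompt(prompt: str) -> tuple[bool, str]:
    """Return (allowed, reason). Block prompts that are too large or that
    match a known confidential marker before they leave the network."""
    if len(prompt.encode("utf-8")) > MAX_PROMPT_BYTES:
        return False, "prompt exceeds the allowed upload size"
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(prompt):
            return False, f"prompt matches blocked pattern: {pattern.pattern}"
    return True, "ok"

if __name__ == "__main__":
    allowed, reason = screen_prompt("Please optimize this CONFIDENTIAL chip test code ...")
    print(allowed, reason)  # blocked: matches the 'confidential' pattern
```

A check like this would typically sit in a gateway, proxy, or browser extension so it runs on every prompt, not just the ones employees remember to sanitize.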

Step 4: Encourage collaboration

Fostering a collaborative environment is critical to the success of AI tools and projects in IT.

  • Feedback mechanisms: Create platforms where end-users can provide feedback on AI tools, which can be used for further improvements and customization (a minimal sketch of such a feedback record follows this list).
  • Cross-departmental teams: Form cross-departmental teams to discuss the deployment of AI tools, ensuring that these tools meet diverse departmental needs.
  • User-centric design: Involve end-users in the design and testing phases of AI tool development to ensure that the tools are user-friendly and effectively meet user requirements.
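
A feedback mechanism does not have to be elaborate to be useful: even a structured record of who reported what about which tool, tallied regularly, gives the IT team a prioritized list of improvements. The Python sketch below is a hypothetical illustration of that idea; the `Feedback` fields and categories are assumptions, not a recommended schema.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Feedback:
    """A single piece of end-user feedback on an AI tool (illustrative fields)."""
    tool: str
    department: str
    category: str   # e.g. "accuracy", "usability", "missing integration"
    comment: str

def top_issues(feedback: list[Feedback], tool: str, n: int = 3) -> list[tuple[str, int]]:
    """Tally feedback categories for one tool to surface the most common issues."""
    counts = Counter(item.category for item in feedback if item.tool == tool)
    return counts.most_common(n)

if __name__ == "__main__":
    entries = [
        Feedback("internal-assistant", "finance", "accuracy", "Misreads quarter-end dates"),
        Feedback("internal-assistant", "hr", "usability", "Hard to find past conversations"),
        Feedback("internal-assistant", "finance", "accuracy", "Wrong currency conversions"),
    ]
    print(top_issues(entries, "internal-assistant"))
```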

In conclusion 

It is paramount that enterprises allocate a meaningful share of their IT budget to AI-in-IT projects; this is an investment in the organization's overall productivity.

According to our survey, over half (52%) of IT respondents stated that their organizations spend at least 5% of their IT budgets on AI, and ~19% of them said that this was more than 10%. 

This indicates that many organizations are seriously pursuing AI at the enterprise level. AI is no longer just about employees using ChatGPT; it now demands deliberate, enterprise-level action to be used effectively and to ensure long-term success.

To this end, the key to successful AI adoption lies in the partnership and communication between IT and business leaders. By working together, they can ensure that AI is used in a beneficial, secure, and compliant manner. 

The future of work is undoubtedly AI-integrated, and proactive IT management is critical to navigating this future successfully.

Start 2024 with a deliberate AI strategy for your IT operations.

Download the State of AI in IT, 2024 report

