Three questions and answers: Shadow AI has arrived in companies

Shadow IT is considered a major problem in many companies, with employees using tools like Google Drive, OneNote or Slack without the approval of management or the IT department. Now a new phenomenon is emerging, shadow AI, with threats to compliance, data protection and copyright.


In Germany in particular, many companies are critical of AI assistants for developers, as a recent study showed. This increases the risk that employees will turn to shadow AI on their own. We spoke with Peter Guagenti, president and chief marketing officer of the AI coding assistant TabNine, about the situation in German companies.




(Picture: Peter Guagenti)

Peter Guagenti is an experienced business strategist and entrepreneur with a proven track record of guiding technology startups through strategy, product development, and exits. With a strong commitment to data security and ethical standards, he has guided companies through the adoption of AI while ensuring security, data protection, and compliance.

How widespread is the phenomenon of shadow AI in German development departments? Is Germany different from other countries in this respect?

Shadow AI probably plays a role in almost every corporate development department today. The tools involved are spreading so quickly and so widely that they have likely been adopted before companies have even had a chance to evaluate them. For example, since the beginning of this year, more than ten percent of new users of our product have come from Germany, making it the second-largest market worldwide after the United States. Many of these people are probably using our AI code assistant for company projects without their employer's approval.

We saw this pattern of adoption long before shadow AI. When companies refuse to adopt new technologies their employees want, the employees find a way to get access on their own. That is exactly what happened with the rise of the internet, open source, mobile devices, the cloud, and other innovations. However, AI tools amplify some of the risks we have seen with other technologies, such as loss of privacy and the unwanted disclosure of confidential corporate data. They also add the potential misuse of intellectual property and copyright infringement, as well as inaccuracies in the tools' output. Seen in this light, shadow AI is a much bigger challenge than the shadow IT of earlier times.

However, this is not a purely German phenomenon; it can be observed in many countries around the world. Its actual extent is largely unknown.

Can companies even tell whether their developers are using unauthorized AI tools?

There are tools that can help companies detect the use of unauthorized technologies, but they offer only a limited view of what is going on. Even where they work, they work only after the fact: sensitive information will already have been disclosed in an undesirable way.
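To illustrate what such detection tooling might look like at its simplest, here is a minimal sketch that scans web-proxy log lines for requests to well-known public AI API hosts. The log format and the host list are assumptions for this example, not the behavior of any real product:

```python
# Hypothetical list of hostnames associated with public AI services.
# A real deployment would maintain this from a curated feed.
AI_SERVICE_HOSTS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def flag_ai_requests(proxy_log_lines):
    """Return (client, host) pairs for requests to known AI endpoints.

    Assumed log format: "<client-ip> <method> <host> <path>".
    """
    hits = []
    for line in proxy_log_lines:
        parts = line.split()
        if len(parts) >= 3 and parts[2] in AI_SERVICE_HOSTS:
            hits.append((parts[0], parts[2]))
    return hits

log = [
    "10.0.0.5 POST api.openai.com /v1/chat/completions",
    "10.0.0.7 GET intranet.example.com /wiki",
]
print(flag_ai_requests(log))  # [('10.0.0.5', 'api.openai.com')]
```

As the answer above notes, this kind of check only flags traffic after it has happened; any sensitive data in the request body has already left the network.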

Communication and education are far better long-term strategies for keeping unauthorized AI products and tools out of the company. Involving developers from the very beginning is a critical step. That means identifying tools, evaluating them, and actually purchasing them together, all in line with the company's data protection, data security, and compliance requirements.

As an entrepreneur, embrace AI and find out what problems developers and administrators are trying to solve with it. Which features and tools are they missing? Then offer solutions to those problems! The call for education means understanding the AI tools themselves better. That includes questions such as: Where is your internal data stored and shared? How do you ensure you get consistently good answers? What do you know about how the AI system was trained? Are you confident you are not violating data protection regulations?

In short: accept that shadow AI is already in your organization. It is highly unlikely you will be able to halt its spread. So now is the time to take control and displace shadow AI with better, approved AI products, communication, and education.

In our view, the AI boom can hardly be slowed down. What is the best way forward for companies that remain highly critical of AI tools?

All CIOs and CEOs are called on to act now: develop your own standards and requirements for where and how you want to use AI tools, and for what data may leave the company and what may not. In particular, be prepared to restrict the use of AI providers that do not meet your standards. To make that work, you must offer your employees a suitable alternative.

Companies must know what data the generative AI models were trained on and whether that data may be used for their use cases. Likewise, they must ensure that the AI technologies they choose come with clear, properly enforced terms of use and meet their organization's specific privacy expectations.

Developers, for their part, must retain responsible control over the tools they use. This is especially true when the technology landscape, as in the field of AI, is evolving rapidly. That means avoiding vendor lock-in, demanding transparency, and maintaining flexibility; these are critical steps for future-proofing IT and AI investments.

Mr. Guagenti, thank you very much for your answers!

In the “Three Questions and Answers” series, iX wants to get to the heart of today's IT challenges, whether from the perspective of the user at the PC, the manager, or the administrator in their day-to-day work. Do you have suggestions from your daily practice or from your users? Which topics would you like to read about, briefly and to the point? Then feel free to write to us or leave a comment in the forum.


