Summary: According to a recent IFOP study, 68% of employees use generative AI without telling their line manager. And this at a time when it is not advisable to allow unrestricted use of ChatGPT by employees. One solution lies in using GPT via the API, which enables more secure, rigorous and powerful control. Here is how.
It has become clear that ChatGPT possesses extraordinary power. But can an organization handling confidential data take advantage of this generative AI without putting itself at risk?
What are the risks?
1) We know that the data entered into ChatGPT is used by OpenAI to enrich its AI model.
Inevitably, thanks to its ability to capture and manipulate large volumes of textual data, ChatGPT can, across a succession of queries, establish connections between these data points and reconstruct the overall context. It's as if each query were a piece of an immense puzzle, which ChatGPT assembles, interpolating where necessary.
This means that even if employees individually take care to avoid disclosing sensitive information, the AI can still draw links and correlations between queries.
This can lead, for example, to the disclosure of a major customer's investment strategy, or the suggestion of a confidential name for a new product or brand.
Once ChatGPT has been populated with employee inputs, the task can become relatively simple for an attacker: a series of well-chosen prompts may be enough to surface this sensitive information.
2) Some ChatGPT accounts and their contents have been hacked.
In a recently published report, researchers at cybersecurity firm Group-IB announced that they had found over 101,000 compromised ChatGPT login credentials for sale on dark web markets over the past year.
3) A psychosocial risk
Recent research suggests that employees who use AI may develop a sense of loneliness and a form of depression, accompanied by a notable increase in alcohol consumption. However, they also appear more willing to support their colleagues, expressing an increased need for social interaction.
Should ChatGPT be banned from sensitive business environments?
In these sectors, it is best to adopt a cautious approach and, where possible, ban the use of ChatGPT. A number of major corporations have understood this and have partially or totally banned its use, including Verizon, Apple, Amazon, Wells Fargo and Deutsche Bank.
However, it is difficult to ban ChatGPT completely. As the recent IFOP survey shows, 68% of employees use generative AI without their employer's knowledge...
So must large companies, laboratories and government agencies simply do without GPT?
Besides, even if you banned it outright, you would achieve very little...
The benefits of ChatGPT are considerable, both individually and collectively within an organization. Indeed, its power is multiplied when used collectively, on a large scale. Allow me to explain:
When each individual uses GPT in his or her area of expertise and shares the results with other users, this creates a multiplier effect. Knowledge and information accumulate, complement and reinforce each other.
So how do you go about it?
Prohibiting is good, supporting is better 🙂
By setting up your own GPT in API mode combined with appropriate software, you can supervise, control and monitor usage:
- Supervise: track in real time how GPT is being used and with what results.
- Control: restrict its use for greater security.
- Monitor: adjust GPT usage to specific needs.
- Segment: define specific areas of application according to the skills and needs of different users.
- Support: avoid leaving employees alone, left to their own devices, isolated with a machine that simulates human interaction.
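As a minimal sketch of the "supervise" point above (an illustration, not MARYLINK's actual software), every call to GPT can pass through a thin wrapper that records who asked what, and when. Here `call_gpt` is a hypothetical stand-in for a real API call:

```python
import datetime

# Hypothetical stand-in for a real API call (e.g. via the official
# `openai` client); stubbed out here so the sketch runs offline.
def call_gpt(prompt: str) -> str:
    return f"[model response to: {prompt}]"

AUDIT_LOG = []  # in production: persistent, access-controlled storage

def supervised_completion(user: str, prompt: str) -> str:
    """Forward a prompt to GPT while recording user, prompt and response."""
    response = call_gpt(prompt)
    AUDIT_LOG.append({
        "user": user,
        "prompt": prompt,
        "response": response,
        "timestamp": datetime.datetime.utcnow().isoformat(),
    })
    return response

supervised_completion("alice", "Summarize our Q3 goals")
```

Because every exchange leaves an auditable trace, usage can be reviewed in real time rather than happening invisibly in employees' personal browser tabs.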
What's the difference between ChatGPT and GPT in API mode?
ChatGPT is often referred to as an AI-based language model. But in reality, it's more precisely a Human Machine Interface (HMI).
A "language model" is a type of computer program that has been trained to understand, generate or modify text. ChatGPT uses the GPT (Generative Pretrained Transformer) language model.
A "human-machine interface", on the other hand, is a mechanism that enables interactions between humans and machines to take place. It acts as a bridge between the user and the system, facilitating communication between the two.
The "Chat" added to GPT is precisely this facilitating layer: an interface that enables dialogue between the user and the GPT language model. It lets users give instructions to the model (in the form of prompts or questions), receive model-generated responses and easily interact with the results.
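In API mode, your own software plays the role of that interface: instead of typing into a chat window, a program builds and sends the request itself. The sketch below shows the shape of such a request (the model name and endpoint are illustrative examples; no network call is made):

```python
import json

# The request a program assembles when using GPT via the API rather
# than the ChatGPT web interface. Model name is an example.
request = {
    "model": "gpt-4",
    "messages": [
        {"role": "system", "content": "You are an assistant for our R&D team."},
        {"role": "user", "content": "Draft a summary of this internal note."},
    ],
}

payload = json.dumps(request)
print("POST https://api.openai.com/v1/chat/completions")
print(payload)
```

Because the organization writes this layer itself, it decides exactly what reaches the model and under what conditions, which is where the security advantages below come from.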
What are the advantages of using GPT in API mode, compared with ChatGPT?
Essentially, it's the ability to build both your own model setup and your own human-machine interface. This "GPT in API mode" approach is far more powerful than ChatGPT, especially in terms of security.
In fact, you can set up specific policies to control how information is shared and used, thus reducing the risk of security breaches:
- Data transmitted via the API is not used to train or enhance the GPT model, a guarantee offered by OpenAI.
- You can store data on servers independent of those of OpenAI, which further reduces the risk of data leakage.
- You can add additional layers of security, such as a blockchain to protect the intellectual property of published content in real time.
- You can control who uses GPT and what GPT can access.
These advantages make using GPT in API mode a powerful choice for many organizations. It's what we do at MARYLINK.
More security and control with your own GPT! What other benefits?
Setting up your own model and interface has several major advantages for collaborative innovation:
- Customized tools
You can customize GPT to meet the specific needs of your organization or project, and address specific scenarios, problems and use cases that a "one size fits all" solution could not cover.
- Improved collaboration
With the supervision and control you have over GPT, you can create an environment where users collaborate more effectively. They'll be on the same wavelength to work together optimally, and they'll be naturally encouraged to share.
- Segmentation of uses
GPT's ability to segment uses fosters innovation by enabling different users to focus on specific areas.
- Continuous learning
Real-time monitoring of the results produced by GPT enables you to quickly identify what's working and what's not, so you can adapt and continually improve your processes.
What to do now?
Innovative organizations should start by familiarizing themselves with the underlying technologies, in particular GPT and its use via an API.
Stage 1: Acculturation
This may involve an initial pilot project to test the use of GPT in real-life conditions. This project can be small and targeted, with a reduced perimeter to start with, to solve a specific problem or improve an existing process. This acculturation phase can include workshops, seminars or other forms of collaborative learning.
Step 2: Deployment
If the pilot project is a success, organizations can then consider rolling out the use of GPT on a larger scale. This may require adjustments to the GPT model, changes to the way it is used, or the addition of new functionalities.
Is it better to wait?
Not necessarily. In fact, waiting is not recommended: every week you wait reduces your chances of being one of the frontrunners and building a low-cost competitive advantage.
And note that at MARYLINK, the solution is already operational: we can launch this pilot project in less than 10 days 🙂