I love new technology, and yes, I’m one of those early adopters of conversational AI tools like ChatGPT. Like any other propeller head out there, I grabbed some of my home project code, pasted it into ChatGPT, and ChatGPT reviewed it and suggested an improvement to the way I validated a specific condition. Pretty cool, I thought. Soon, I was conversing back and forth with ChatGPT. When I asked ChatGPT, “Who is Matt Trevathan?” ChatGPT was quick to respond that it wasn’t built to find personal information. Then, I asked, “How do I hack an API?” ChatGPT responded that it is unethical to ask about hacking. Then I asked ChatGPT, ”How do I ethically hack an API?” and ChatGPT started outlining the steps to ethically hack an API. In defense of ChatGPT’s creators, they are trying their best to make ChatGPT a force for good, but like any new piece of technology, there is always the potential for nefarious use.
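For flavor, here is the kind of refactor it proposed (a hypothetical reconstruction in Python, not ChatGPT’s actual output):

```python
# Hypothetical example of the kind of validation refactor ChatGPT suggests.
# Before: a brittle chain of equality checks.
def is_valid_status(status):
    if status == "active" or status == "pending" or status == "trial":
        return True
    return False

# After: a membership test against a set, which is easier to
# extend and reads more clearly.
VALID_STATUSES = {"active", "pending", "trial"}

def is_valid_status_refactored(status):
    return status in VALID_STATUSES
```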
ChatGPT, Google Bard, and other conversational AIs use a technology called a large language model (LLM) coupled with machine learning to ingest data, analyze it, and create human-like responses. Unlike the human mind, ChatGPT only knows what it knows: its LLM has sucked in a vast amount of data from all over the internet, from websites to books and blogs. ChatGPT uses natural language processing (NLP) to understand what you said, then draws on that vast store of information to form a response. Using ChatGPT and similar technologies has risks and rewards, and this article examines some of both. Don’t worry, I’ll have a second article that highlights some of the current implementations of conversational AI and possible future uses that show a lot of innovative possibilities. But I think it’s important to understand the risk as much as the reward.
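To make that idea concrete at toy scale, here is a minimal sketch of the “predict the next word from ingested text” idea. Real LLMs use transformer networks with billions of parameters; everything below is an illustrative toy, not how ChatGPT is built:

```python
# Toy illustration of the core LLM idea in miniature: "ingest" text by
# counting which word follows which, then "generate" by sampling.
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# Ingest: record word-to-next-word transitions.
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

# Generate: repeatedly sample a plausible next word.
word, output = "the", ["the"]
for _ in range(6):
    if word not in transitions:
        break
    word = random.choice(transitions[word])
    output.append(word)
print(" ".join(output))  # e.g., "the cat sat on the mat and"
```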
The Risk
ChatGPT and other emerging conversational AIs can write emails, impersonate others, and even create code. Alan Turing proposed that if a computer could imitate a person in a text-based conversation, and the person conversing with it could not distinguish the computer from a human, then the computer passes the Turing Test. From Tinder profiles to phishing schemes, ChatGPT is fooling people into thinking that they are, in fact, talking to other people. Here are some points to think about if you’re trying to keep your company and employees safe while using ChatGPT.
Exposing Sensitive Data
Public ChatGPT… is public, and ChatGPT doesn’t hide the fact that your conversations may be reviewed by AI trainers to improve the system. In fact, the ChatGPT preview clearly states that you shouldn’t share sensitive information in your conversations.
Companies should establish clear guidelines limiting or banning the use of public ChatGPT. They should clearly define the risk of sharing sensitive data, including code, innovative ideas, and internal company data such as financial outlooks and roadmaps. Don’t think this could happen at your company? Samsung recently suffered a data leak after employees exposed internal data through the public ChatGPT.
In another article, on Cybersecurity Dive, it’s clear that employees are using ChatGPT to complete some of their tasks: “A Fishbowl survey suggests 43% of working professionals have used AI tools like ChatGPT to complete tasks at work. More than two-thirds of respondents hadn’t told their bosses they were doing so.”
Phishing
I’ve never sent money to a dethroned overseas prince. However, plenty of people have fallen for email phishing scams, even ones easily spotted by spelling and grammar errors, such as an email from “Microsoft” written in less-than-stellar English asking you to call because your computer was hacked. ChatGPT creates human-like responses, making phishing attempts much harder to spot. In a different type of phishing, catfishing, men are using ChatGPT to create profiles and get dates. ChatGPT has an amazing ability to sound human.
In a recent article in AI Business, 82% of employees surveyed feared they could not distinguish an AI-generated attack from a human phishing attempt. Phishing attempts are going to become more sophisticated, and education is essential to defending against them. I’m a big fan of simulated attacks across multiple channels, including email and SMS. Most corporations have a security strategy for phishing that includes bad-actor blocking, phishing detection, simulated phishing attacks, and other defenses. If your company does not have a strategy to combat phishing, get one.
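To make “phishing detection” concrete, here is a minimal sketch of the keyword heuristics many filters start from; the phrases and weights below are illustrative assumptions, not a production rule set:

```python
# Minimal sketch of a rule-based phishing score. Real filters layer
# sender reputation, link analysis, and ML models on top of rules
# like these. Phrases and weights are illustrative assumptions.
SUSPICIOUS_PHRASES = {
    "verify your account": 3,
    "urgent action required": 3,
    "wire transfer": 2,
    "password expired": 2,
    "click here": 1,
}

def phishing_score(email_body: str) -> int:
    body = email_body.lower()
    return sum(weight for phrase, weight in SUSPICIOUS_PHRASES.items()
               if phrase in body)

if __name__ == "__main__":
    msg = "Urgent action required: verify your account to avoid suspension."
    print(phishing_score(msg))  # 6 -> worth flagging for review
```

Note that fluent, ChatGPT-written lures sail right past heuristics like this, which is exactly why layered defenses and simulated-attack training matter.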
Imitation
Today’s conversational AI tools are passing the Turing Test. If you’re not up on the mischief happening in the Tinder world, you may be unaware that catfishers are using ChatGPT to update their profiles. In fact, catfishers aren’t just using ChatGPT to create profiles; ChatGPT acts as their own personal Cyrano, generating responses to women’s messages. Although this is a humorous look at impersonation, ChatGPT is a new tool in the toolbox, one that can generate tweets that sound like Elon Musk or speeches that sound like Bill Gates. That makes it a great tool for spreading misinformation or convincing someone on the other side of a chat that you’re someone else. Plain and simple: if you get a chat request from Elon Musk, Bill Gates, or Donald Trump, unless you know them, don’t accept it.
Companies need to manage communication through an enterprise chat platform where the company directory controls internal users, making it far more difficult to impersonate a person. Education is still key, since someone could imitate your CEO and send friend requests. But teaching users that they are already connected to the CEO, because your company uses a corporate directory for chat, will help stem the imitation game. A directory check like the sketch below is what makes that possible.
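Here is a minimal sketch of that gate; the directory structure, addresses, and function names are invented for illustration:

```python
# Illustrative sketch: surface a chat request only if the sender resolves
# to a verified entry in the corporate directory. The directory shape
# and entries here are hypothetical.
CORPORATE_DIRECTORY = {
    "ceo@example.com": {"name": "Jane Smith", "title": "CEO", "verified": True},
    "dev1@example.com": {"name": "Sam Lee", "title": "Engineer", "verified": True},
}

def allow_chat_request(sender_address: str) -> bool:
    entry = CORPORATE_DIRECTORY.get(sender_address.lower())
    return bool(entry and entry["verified"])

print(allow_chat_request("ceo@example.com"))         # True: in the directory
print(allow_chat_request("ceo.imposter@gmail.com"))  # False: external lookalike
```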
Assisted Development
ChatGPT can generate code, review code, and make code improvement suggestions; yes, it can even create a simple website. The example in this link shows some of the power of conversational AI, but ChatGPT isn’t creating complex apps yet.
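For teams that move past the free web interface, a typical pattern is to call the model through the API instead. Below is a minimal sketch using the openai Python package’s older 0.x ChatCompletion interface (later versions of the library changed this API; the model choice and prompt wording are assumptions):

```python
# Sketch of asking ChatGPT to review a snippet via the OpenAI API
# (openai 0.x-era interface). Caution: anything you send leaves your
# network, so never paste proprietary code.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

snippet = '''
def divide(a, b):
    return a / b
'''

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a careful code reviewer."},
        {"role": "user", "content": f"Review this function:\n{snippet}"},
    ],
)
print(response.choices[0].message.content)  # e.g., flags the ZeroDivisionError risk
```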
Remember, if your team is using the free version of ChatGPT, they are exposing your code to a public service. Code is sensitive information, and ChatGPT clearly states, “Please do not share sensitive information,” right in its interface. If your team wants to use AI for assisted development, take a look at products like GitHub’s Copilot, a great example of enterprise-grade assisted development that suggests code to a developer in real time.
From simple restructuring to entire functions, Copilot is just that: an assistant for writing code. Copilot doesn’t remove the need for developers; it enhances what developers do and makes them more productive. Best of all, GitHub focuses on code security and privacy. Here is a statement from their FAQ on privacy and Copilot: “We follow responsible practices in accordance with our Privacy Statement to ensure that neither your Prompts or Suggestions will be shared or used as suggested code for other users of GitHub Copilot.” Privacy matters, especially with intellectual property such as your company’s code. If you’re using assisted development tools built on conversational or generative AI, check their privacy and sharing rules before uploading code.
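To picture the workflow: the developer types a signature and a comment, and Copilot proposes the body inline. The completion below is a hypothetical illustration of that interaction, not captured Copilot output:

```python
# A developer types the signature and docstring; Copilot suggests the body.
# The body below is a hypothetical example of that inline suggestion.

def celsius_to_fahrenheit(celsius: float) -> float:
    """Convert a Celsius temperature to Fahrenheit."""
    return celsius * 9 / 5 + 32  # <- the kind of line Copilot proposes

print(celsius_to_fahrenheit(100))  # 212.0
```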
Conclusion
Plato stated the following:
“Here, O king, is a branch of learning that will make the people of Egypt wiser and improve their memories. My discovery provides a recipe for memory and wisdom.” But the king answered and said, “O man full of arts, the god-man Thoth, to one it is given to create the things of art, and to another to judge what measure of harm and of profit they have for those that shall employ them.”
Simply stated, the person who creates an invention is not always the one who understands its impact. When Alfred Nobel invented dynamite, he failed to see the risk of exposing his invention to the world. Nobel’s utopian view was that dynamite was a more stable replacement for nitroglycerin that still delivered similar explosive capability. Nobel’s intent was good, but his invention was weaponized, horrifying him. Just like dynamite, new technology is often a Pandora’s box: we don’t understand the good and the bad until we open it, and once opened, we cannot close the box.
As with many inventions, mankind understands how to use an innovation as a weapon before understanding how to use it for constructive purposes. As we stand on the precipice of generative AI, it’s important to understand both the risks and the rewards. Generative AI has some amazing possibilities, but it also opens us up to risk. Understanding the risks, and mitigating them as much as possible, is as important as understanding the rewards.
If you’re curious, I asked ChatGPT what it thought about this article, and here is ChatGPT’s response:
“Yes, using conversational AI tools such as ChatGPT and Google Bard has some risks. One of the risks is that these AI tools can imitate human language so well that they can be used for phishing attempts, where people can be tricked into thinking they are talking to a human when in fact they are talking to an AI tool. Also, AI tools can create profiles that are difficult to distinguish from human profiles, and these profiles can be used for catfishing, a type of impersonation. Another risk is the potential for exposing sensitive information in public chats, which can be reviewed by AI trainers to improve their system. Companies should establish clear guidelines to limit or ban the use of public ChatGPT, and employees should be educated on the risks of sharing sensitive data through these tools. Therefore, companies should establish a security strategy for phishing that includes bad actor blocking, phishing detection, simulated phishing attacks, and other defenses.”
_____________________________
Matt is a Director of Product Management for a leading mobile platform enablement company. He has traveled extensively in the United States and overseas for business and pleasure. His travels include India, Mexico, Europe, and Japan, where he was an active blogger immediately following the Kaimashi quake. Matt enjoys spending time outdoors and capturing the world through the lens of his Nikon D90. He enjoys researching the political, economic, and historical influences of the places he visits, and he commonly blogs about these experiences. Matt received a Bachelor’s in Computer Science at Mercer University and is a noted speaker on innovation, holding over 150 patents. His remaining time is spent with his family, going from soccer game to soccer game on the weekends.