
ChatGPT Part 2: The Dark Underbelly You Should Know About
With the rise of AI technology, some people think we are living in a real-life version of “The Matrix.”
What used to be just entertaining science fiction has now become a fast-evolving reality.
Just like in the movies, no free-thinking person wants to live under a robot-controlled dystopian society, evading Arnold Schwarzenegger and his fellow Terminators working for the malevolent Skynet AI.
While many would like to say “Hasta la vista, baby” to all things AI, it’s just not going to happen. ChatGPT and other AI bots are here to stay, and when used properly, can make life and work much easier.
As with many emerging technologies, there can be a dark underbelly to ChatGPT. To protect yourself and your business, watch out for these unique pitfalls and what they mean in terms of risk versus reward.
OpenAI’s Sam Altman Warns of Dangers
With great innovation comes great responsibility, and no one realizes this more than Sam Altman, the CEO of OpenAI, which developed ChatGPT.
Altman believes that as AI advances, careful oversight will be needed to guard against potential threats such as rampant disinformation campaigns and even devastating cyber-attacks, now that AI tools have gotten so good at writing computer code.
OpenAI’s own usage policy strictly forbids leveraging its services for nefarious or illegal activities. That does little to deter people with bad intentions, however, and evidence of misuse will likely only grow as time goes on.
Apart from the dangers listed above, concerns also abound about AI replacing jobs in tech, finance and education, among other fields.
Basic Pitfalls of ChatGPT
Beyond the big-picture concerns over AI, there are some basic user-level pitfalls you should be aware of as you experiment with ChatGPT, such as:
- Giving wrong, incomplete or nonsensical answers. Because it responds based on a series of patterns it identified from the text databases it was trained on, it is not always accurate or entirely trustworthy when it comes to factual data.
- Implicit bias in its output. It’s important to remember that AI is created by humans, and that means bias can be a factor in the content it produces. OpenAI is working to correct this issue through user feedback and flagging of biased results.
- Not keeping pace with competitors. ChatGPT is creating lots of chatter, but emerging rivals like Google’s Bard are fast becoming strong alternatives, with enhanced features that could blow ChatGPT out of the water.
TruStar’s Senior Analytics Manager Edwin Acevedo warns about the chatbot’s potential for fast and loose content creation:
“Whatever content comes out of ChatGPT is not necessarily accurate. In some cases, it’s completely made up.”
Legal & Liability Questions That Can’t Be Ignored — But Many Are
Many influencers are looking to make a serious profit from ChatGPT, selling content and courses to the average Joe without a mention of its inherent flaws. Doing so might eventually turn the wave of money they’re surfing into a trickle.
The reality is that ChatGPT isn’t just disrupting the entire online world with its technology – it’s presenting new legal conundrums as well.
Specifically, Section 230 of the Communications Decency Act has become a hot topic as it relates to AI. Under the 1996 law, tech companies are protected from lawsuits based on user content. Unfortunately for ChatGPT and other AI technology, this kind of legal protection will not apply, according to the bill’s authors, due to the nature of the content generated and the technology itself.
Liability and copyrighted content are also concerns. From worries about the use of AI to inflict online harms, both targeted and widespread, to copyright infringement, emerging legal entanglements are developing almost as fast as AI itself. And because this kind of AI technology is so new, no one is really a legal expert in it… for now.
Paula Milam, president of TruStar Marketing, adds: “As good-faith users of ChatGPT, we have an obligation to make sure it’s safe to use in every way possible. We are obliged to fact check and apply real-world experience to how it is used. As citizens, we also have an obligation to help ensure it is implemented with moral guidance. This is critical for all AI, not just ChatGPT.”
This kind of change and group accountability takes discernment. It’s not a one-size-fits-all solution by a long stretch. But it is a starting point.
TruStar’s Edwin Acevedo adds, “There are ethical and moral reasons to watch your use of ChatGPT content. You should never publish raw copy from ChatGPT. Trust but verify everything. And if it is not right, rewrite until it is.”
On a final note, Avantpost.co offers a very interesting look at three possible futures we might all be subject to, depending on how the variables of AI play out: distraction, disruption and destruction. Of course, reality could also be a mix of the three rather than a picture of absolutes.
We will be following the evolution of ChatGPT and GPT-4 closely as time goes on. In the meantime, be aware of its pitfalls to protect yourself and potentially your business. Do you need help figuring out how to leverage ChatGPT? Book a call today, and let’s talk about it.