ChatGPT brought AI technology into the mainstream. Everyone and their grandmother is talking about it. Honestly, the chatbot is a sight to behold because it seems to have an answer for everything.
It can explain concepts to a five-year-old, generate articles, write code, and create marketing strategies.
But it does come at a cost – your private data.
Italy was the first Western country to block ChatGPT, on the grounds that it doesn’t comply with data protection laws.
Does this mean that regulation is coming to the world of AI? And what happened that the Italian government had to react in such a way? Let’s dive in.
Why did Italy ban ChatGPT?
ChatGPT set the record as the fastest app to reach 100 million users, getting there in just two months. That record was recently broken by Zuckerberg’s Threads, which did it in five days. Still, it’s an impressive feat.
Because the servers were constantly overloaded, OpenAI added a premium tier, ChatGPT Plus. Subscribers paying $20 a month wouldn’t experience downtime and would get access to GPT-4, a more powerful model.
But on March 20, there was a problem. A data breach exposed the payment information of its users. The culprit wasn’t a hacker. Instead, it was a simple bug.
The bug let users see other people’s chat history. Imagine sharing top-secret code from a banking app and asking ChatGPT to audit it. The entire company is toast if that information falls into a hacker’s hands.
The Italian government immediately opened an investigation to see if there were any other problems. And there were.
First of all, kids could be exposed to inappropriate content. The platform was marketed to everyone, but based on their investigation, kids under 13 shouldn’t be using it.
Secondly, OpenAI tried to pull a sneaky trick on everyone. It explicitly stated that ChatGPT wouldn’t process personal data; the service was supposed to be an experiment. Yet everyone could see their history, including loads of personal data, and there were no disclaimers about it. OpenAI also didn’t say whether it was processing that data to train the algorithm, and it didn’t have a privacy policy.
Will other countries follow Italy’s ban?
The European Union takes data protection seriously. Multiple countries and their data protection authorities (DPAs) have started analyzing ChatGPT and its competitors.
There is a valid reason for that. Some users will use these services to manipulate and deceive people. The number of phishing attacks skyrocketed after ChatGPT became widely accessible. Spam emails once riddled with grammar mistakes and broken English have become believable, and people are falling for them.
Eventually, governments will crack down on AI as they’ve done on social media and crypto.
What should you do if you have an AI company?
Artificial intelligence is all the rage now in the startup world. ChatGPT opened the floodgates, and people are creating plugins or completely new apps that use it in the background. Other countries could follow in Italy’s footsteps, and you need to be careful and work on a few things.
The first thing is to limit your service to adults only. You don’t want to run afoul of laws governing how minors use the internet, view content, or access apps. Add a signup panel that asks for the user’s date of birth, and block anyone under 18 from accessing your app.
Next, make a bulletproof privacy policy. Work with a lawyer or read up on how to write one. Answer the questions that everyone wants to know. Will you store personal data? How will you train the bot? How do you ensure you won’t get hacked? Are you following all the data privacy laws?
Spend some time and answer these kinds of questions in detail, especially the part about how AI works. People who aren’t in the tech world will think of it as a black box system or magic in computer form. They might get scared and think AI will rule the world. Create a simple explanation, and answer your users’ questions before they even ask them.
You might even use ChatGPT to draft your terms of service and privacy policy. But remember that you’ll be feeding it your company data. Using a VPN with ChatGPT helps, because your traffic will be encrypted and your IP address masked. Since you already know the service has had data issues, create a throwaway account. If you subscribe, use a credit card you rarely use, with a minimal balance, in case the company gets hacked or suffers another breach. At least that way, you’re as protected as you can be.
Finally, there’s no need to imitate Facebook, which stores 52,000 data points on each user and gets into data privacy lawsuits all the time. Since you’re building a startup or a small company in a new niche, the last thing you need is to deal with lawyers. Collect the minimum data you need, and train your algorithm with that.