To begin with, the chatbot platform offers no universal way for users to remove their data from it. That data has, in fact, already been used to train the machine learning model, raising serious questions about ChatGPT and user privacy.
ChatGPT gained sudden and massive popularity, especially in the world of AI and machine learning. Tools like it are language models that can answer specific questions, generate code, and more. Everyone was optimistic about these language models for answering questions, until now.
ChatGPT has been used in a wide array of applications, including language translation, chatbots, and text summarization. However, it has also been used to generate phishing emails and to support DDoS attacks. In short, it is now being exploited for a wide range of cyber crimes and offenses.
ChatGPT and privacy – not on good terms at all
One of the key concerns with these language and chatbot models is privacy. It is not easy for people to find out whether their data was used to train a machine learning model. This can lead to serious infringements; stealing people's research, after all, is a crime.
GPT-3 is a large language model built on natural language processing (NLP) and machine learning (ML). It has been trained on a vast amount of data, including content from personal websites and social media.
This has led to serious concerns about the model using people's data without their explicit permission. It may also be difficult to control or delete the data used to train the model. Another concern worth noting is the right to be forgotten.
As GPT models, chatbots, and other learning models become common and widespread, users will want their data erased and removed, and for good reason. They are furious about their data being used without their consent. In some cases they have already deleted the data at its source, but because the language models have already consumed it, copies persist in the models' storage.
Users want to know how they can delete that data, and understandably so. These language models provide no such mechanism, putting them at odds with data privacy rights and laws. There is no established method for removing that data: once a model has learned from it, the information is not simply purged from its storage.
Researchers and companies alike are working on methods for data removal, allowing specific data points or user information to be forgotten. However, this work is still in its early stages, and it is not yet clear how effective or feasible these removal options will be.
There are also numerous technical challenges in removing data from machine learning models. Stripping out training data after the fact can cause these chatbot platforms (ChatGPT in particular) to lose accuracy and begin giving biased or wrong answers.
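To see why honoring a deletion request is so costly, consider the naive baseline sometimes called exact unlearning: drop the user's records and retrain the model from scratch. The sketch below illustrates the idea on a small scikit-learn classifier; the function and variable names are hypothetical illustrations, and nothing here reflects how ChatGPT itself is implemented.

```python
# Minimal sketch of "exact unlearning": honor a deletion request by
# retraining from scratch without the user's records. All names here
# (user_ids, retrain_without_user) are illustrative, not from any real API.
import numpy as np
from sklearn.linear_model import LogisticRegression

def retrain_without_user(X, y, user_ids, user_to_forget):
    """Retrain a classifier with every record belonging to one user removed."""
    keep = user_ids != user_to_forget      # boolean mask over training rows
    model = LogisticRegression(max_iter=1000)
    model.fit(X[keep], y[keep])            # full retrain on the remaining data
    return model

# Toy usage: three users contribute labeled examples; user 2 asks to be forgotten.
X = np.array([[0.1, 1.0], [0.9, 0.2], [0.4, 0.6], [0.8, 0.8]])
y = np.array([0, 1, 0, 1])
user_ids = np.array([1, 2, 2, 3])
model = retrain_without_user(X, y, user_ids, user_to_forget=2)
```

This works for a small classifier, but for a model the size of GPT-3 a full retrain would cost millions of dollars in compute, which is why approximate unlearning techniques that avoid retraining remain an active research area.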
Is usage of ChatGPT legal?
The legality of using personal data to train learning models like GPT-3 depends on the specific laws and regulations of each country. The United States has no national-level legislation on the matter, but several states have their own rules and laws.
In the European Union, the General Data Protection Regulation (GDPR) governs the use of personal data and requires that data be collected and used only for specified, explicit, and lawful purposes.
GDPR restricts data collection for unethical purposes. Data must be used only for the purpose for which it was collected; any other use requires a separate legal basis. Language models, however, cannot guarantee this, since the data fed to them can be put to many purposes. Under GDPR, companies must obtain explicit consent from individuals before collecting and using their data.
That said, GDPR does provide a legal basis for processing personal data for historical, scientific, and research purposes. The data controller must, however, keep everything in line with GDPR's principles and data subject rights, especially the following:
• The right to be informed.
• The right of access.
• The right to rectification.
• The right to erasure.
• The right to object.
• The right to data portability.
ChatGPT lacks the needed accuracy and has been used for cyber attacks
ChatGPT's accuracy is itself a cause for concern. Experts from a DDoS protection service provider in London identified the platform as an enabler of DDoS, ransomware, and phishing attacks because it helped cyber crooks generate the content those attacks required. It even helped them work out methods of conducting the attacks.