In the latest AI news, researchers have unveiled some possible security threats that are inherent in Artificial Intelligence (AI) tools like OpenAI’s ChatGPT.
Malware threat to ChatGPT
With the rate at which different AI tools are being launched by different companies, one would think that their usage comes with no risks.
However, new research into these innovative technologies has revealed that users may be predisposed to some security threats even though this is not the case presently. It is worth noting that there are already fears around AI safety as noted by different regulatory bodies.
Researchers noted that AI tools like ChatGPT and Google’s Gemini which had its latest version released a few weeks ago, can be breeding grounds for malware threats.
The research discovered a malware worm that “exploits bad architecture design for the GenAI ecosystem and is not a vulnerability in the GenAI service.” This malware worm is named Morris II, after the Morris worm of 1988 which crashed about 10% of all computers connected to the internet at the time.
This kind of malware worm is capable of destroying by replicating and spreading itself to other systems. Most of the time, it does not require user interaction to infect Generative AI. Ordinarily, these GenAI platforms require prompts; and instructions in text format, to carry out their functions. Morris II tries to override the system by compromising prompts and transforming them into malicious instructions.
The malicious prompts trick the GenAI into performing deleterious actions without the knowledge of the user or the system.
How to keep Malware Worms Away from your computer
Consequently, AI users are advised to be vigilant and cautious about emails and links from unknown or untrustworthy sources. For reinforcement, users could also invest in reliable and efficient antivirus software that can easily identify and remove malware, including these computer worms. This, according to the researchers, is the best method to keep the malware worms out of your system.
The use of strong passwords, constant system updates, and limited file-sharing are some of the other suggestions to limit the activities of malware worms.
Amidst this research, a new AI tool that can recreate the human voice has been introduced by Sam Altman’s OpenAI. Voice Engine requires text input and a single 15-second recording sample to recreate a person’s voice. Considering its GenAI model, there is a high potential for this new tool to also be exploited by bad actors when it finally goes live following the ongoing testing phase.
The post AI News: Researchers Uncover Potential Security Threats To ChatGPT appeared first on CoinGape.
Earn more PRC tokens by sharing this post. Copy and paste the URL below and share to friends, when they click and visit Parrot Coin website you earn: https://parrotcoin.net0
PRC Comment Policy
Your comments MUST BE constructive with vivid and clear suggestion relating to the post.
Your comments MUST NOT be less than 5 words.
Do NOT in any way copy/duplicate or transmit another members comment and paste to earn. Members who indulge themselves copying and duplicating comments, their earnings would be wiped out totally as a warning and Account deactivated if the user continue the act.
Parrot Coin does not pay for exclamatory comments Such as hahaha, nice one, wow, congrats, lmao, lol, etc are strictly forbidden and disallowed. Kindly adhere to this rule.
Constructive REPLY to comments is allowed