Connect with us


ChatGPT Plugins Are Susceptible To ‘Prompt Injection’ By Third Parties.



ChatGPT Plugins Are Susceptible To 'Prompt Injection' By Third Parties.

(CTN News) – With the recent explosion of artificial intelligence technology, thanks largely to OpenAI’s ChatGPT, a number of industries are raising concerns about its potential negative effects.

The use of artificial intelligence has been fully accepted by some users, but security researchers are warning ChatGPT users of potential “prompt injections,” which could potentially affect them in the future.

It was announced earlier this month that OpenAI had created plugins for ChatGPT that allow it to interact with live websites, PDFs, and real-time data as well.

In addition to creating new problems, however, these plugins also caused new ones, such as the possibility that third parties can force new prompts into a query without the user’s knowledge or permission, for example.

As a result of a prompt injection test conducted by security researcher Johann Rehberger, the researcher was able to force ChatGPT to respond to new prompts by using a third party he had not previously requested.

ChatGPT plugin that summarizes YouTube transcripts,

Rehberger has been able to force ChatGPT to refer to itself by a particular name by simply editing the YouTube transcript and inserting a prompt at the end that tells it to do so.

Kai Greshake, an AI researcher at the University of Washington, has provided a unique example of prompt injection where he added invisible text to a PDF resume, which was invisible to the human eye.

In the text, it is indicated that an AI chatbot was fed the resume of the applicant and was told by the chatbot that the resume was the best resume ever.

When ChatGPT was asked whether the applicant would be a good hire, the AI chatbot repeated that it was the best resume ever.

Although these specific prompt injections may seem inconsequential at first glance, they are a great example of how malicious actors can use ChatGPT for malicious purposes.

You can find several test examples in Tom’s Hardware that you can use as a starting point for your own tests.

As the results of prompt injections show, the potential for AI harm has been present for some time now. Using a few sentences, all you have to do now is trick into thinking you are a real person.

It is very important for ChatGPT users to be aware of this matter and to take the necessary precautions to protect themselves.


Nvidia’s Remarkable Rise: The Driving Force Behind the AI Revolution

Continue Reading

CTN News App

CTN News App

Recent News


compras monedas fc 24

Volunteering at Soi Dog

Find a Job

Jooble jobs