AI, considered "too dangerous to release", makes it into the light
An AI that was considered too dangerous to release has now been made available to the world.
Researchers feared that the model, known as "GPT-2", was so powerful that it could be misused by anyone from politicians to fraudsters.
GPT-2 was created for a simple purpose: given a piece of text, it predicts the words that come next. In this way it can produce long passages of writing that are difficult to distinguish from those written by a human.
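The "predict the next word" idea can be illustrated with a deliberately tiny sketch. The model below is just a bigram counter, not GPT-2 (which uses a large neural network trained on vast amounts of text); the corpus and function names are invented for this example:

```python
# Toy illustration of next-word prediction (NOT GPT-2 itself):
# a bigram model counts which word follows which in a corpus,
# then repeatedly emits the most likely next word.
from collections import Counter, defaultdict

def train_bigrams(corpus: str) -> dict:
    """Map each word to a Counter of the words that follow it."""
    words = corpus.split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def generate(follows: dict, start: str, length: int = 5) -> str:
    """Greedily extend `start` with the most common next word."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

corpus = "the model predicts the next word and the model writes text"
model = train_bigrams(corpus)
print(generate(model, "the", length=3))  # → "the model predicts the"
```

A real language model like GPT-2 does the same job at vastly greater scale, scoring every possible next token with a neural network rather than a lookup table — which is what makes its output so much harder to tell apart from human writing.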
But it became clear that it is disturbingly good at this work: its text generation is so convincing that it could be used to deceive people and undermine confidence in the things we read.
Moreover, the model could be abused by extremist groups to create "synthetic propaganda", allowing them to automatically generate long texts promoting white supremacy or jihadist Islamism, for example.
"Due to our concerns about malicious applications of the technology, we are not releasing the trained model," OpenAI wrote in a blog post in February announcing the decision. "As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper."
At that time, the organization released only a very limited version of the tool, with 124 million parameters. It has since released progressively more sophisticated versions and has now made the full version available.
The full version is more convincing than the smaller ones, but only "marginally" so.
It is hoped that the release will partly help the public understand how such a tool could be misused, and help inform discussions among experts about how that danger might be mitigated.
In February, the researchers described various ways malicious actors could abuse the program. The generated text could be used to create misleading news articles, impersonate others, automatically produce abusive or fake content for social media, or spam people – along with various possible uses that might not even have been imagined yet, they noted.
Such abuses could force the public to become more critical of the text they read online, which might have been generated by artificial intelligence.
"These findings, combined with earlier results on synthetic imagery, audio, and video, imply that technologies are reducing the cost of generating fake content and waging disinformation campaigns," they wrote. "The public at large will need to become more skeptical of text they find online, just as the 'deepfakes' phenomenon demands more skepticism about images."
The researchers say experts need to consider "how research into the generation of synthetic images, video, audio, and text may further combine to unlock new as-yet-unanticipated capabilities for these actors, and should seek to create better technical and non-technical countermeasures".