The Rise Of The AI Dark Web: Unfiltered Generative AI Threats

While companies like OpenAI, Anthropic, Microsoft (MSFT), Meta (META), and Google (GOOGL) focus on responsible AI development, a darker side has emerged on underground networks. Unfiltered AI models are being deployed on the dark web, enabling dangerous applications that include cybercrime tooling, disinformation campaigns, and automated privacy violations. Let’s explore.

Who’s Involved?
Several malicious AI models have surfaced, with names like WormGPT, FraudGPT, and PoisonGPT leading the charge. These tools are advertised on dark web marketplaces and encrypted messaging platforms. Their creators often have backgrounds in hacking and market them as unrestrained versions of popular tools like ChatGPT. Designed to serve bad actors, these models offer capabilities such as generating malware, phishing pages, and ransomware, to name a few. That said, even mainstream platforms like Hugging Face have been flagged for unintentionally hosting repositories of uncensored AI models.

What’s Happening?
Malicious generative AI models are engineered to bypass the ethical and safety filters of their mainstream counterparts. For example, FraudGPT provides an array of tools for cybercrime, including writing malicious code and crafting scam pages. These tools are sold via subscription, with prices ranging from $200 to $1,700 annually. Platforms like UnfilteredAI, which promote AI models for unrestricted use, are not intrinsically malicious, but their rise highlights how accessible unconstrained models have become.

Newly Relevant
While the development of AI models for nefarious purposes is not new, it has rapidly accelerated since 2022. The emergence of FraudGPT and similar models in mid-2023 marked a new wave of AI tools tailored explicitly for malicious use. The term “deepfake” was coined in 2017, but the relatively recent explosion of manipulated images and short videos has been driven by the rapid pace of AI improvements. Research estimates suggest that more than 96 percent of deepfakes are pornographic and non-consensual, though these statistics are difficult to verify.

There Are Bad Guys. So What?
It’s crucial to understand that while many struggle with the limitations of mainstream AI tools, a whole world of unconstrained AI exists. Your competition may already be leveraging these unrestricted tools. While I’m not advocating for the use of unfiltered models in the workplace, it’s important to recognize why some vendors can produce content that’s unattainable with licensed “safe for work” AI tools.

Then there’s the truly dark side: malicious AI models lower the barrier for criminals with minimal technical skills to launch sophisticated cyberattacks and scams. Deepfakes are being weaponized for blackmail and revenge porn. Despite these threats, regulation is lagging. While many governments have begun to criminalize the sharing of AI-generated deepfake images without consent, many parts of the world still lack comprehensive legal frameworks to address the rise of unfiltered AI models.

What’s Next?
Addressing these challenges will require a nuanced approach. First, we need to clearly identify and acknowledge the specific issues at hand. Then, we must decide how to balance innovation with regulation (the same issue we have with mainstream AI). Can any of this be regulated or controlled? If it can, who should be in charge? We’re already seeing some local, regional, and national governments pass laws that are not applicable outside their borders. This is unfortunate. We’re going to see this firsthand as certain AI models (such as Meta’s upcoming multimodal AI platforms) will not be released in the EU due to its AI regulations.

Sadly, this is a hot mess. But you can help. Start talking about it. Make this a topic of conversation. Adopt a personal point of view and make it known to others. Contact your elected officials and let them know how you’re thinking about this. It’s the only way we’re going to get ahead of this.
