On 30 April 2025, the online magazine Computing published an article titled “Dark LLMs designed for cybercrime are on the rise”. The author writes, “Cybercriminals are increasingly adapting large language models for criminal purposes”. Well, yes, but not always…
The interviewer, Brian Krebs, talked with self-proclaimed “grey hat” hacker Rafael Morais about jailbreaking LLMs to test AI security. As usual, “the road to hell is paved with good intentions”: jailbroken LLMs have become an instrument in the hands of cybercriminals. However, this fact simultaneously points to poor AI security in general, and Rafael Morais is not at fault for that.
Up to this point, the article made sense to me. Since “jailbreaking an AI model is achieved by engineering prompts to bypass the inbuilt safeguards”, people cannot be negatively impacted by cybercriminals wielding jailbreaking prompts unless the latter somehow force them to submit special prompts that bypass the safeguards. Correct? How widespread and realistic is such a scenario? Or was the whole purpose of the author to save the face of AI creators who actually faked or merely simulated the “inbuilt safeguards”?
The next statement turned me off: “Cybercriminals manipulate AI with deceptive inputs to gain unauthorised responses”. In my humble opinion, blaming users for “deceptive inputs” is the highest level of foolishness: AI users have an irrevocable right to provide any inputs they wish, and the AI must react properly. If a user gets “unauthorised responses”, the user is simply dealing with a piece of rubbish. That is, if an AI does not react properly, it is a bad product, and its creators have simply failed at their jobs. All the twaddle about “responsible use of AI” is a fake created to hide the inability of some creators to provide really useful AI products.
The article compares the dark web to “an AI without ethical limitations”. But what ethical limitations can there be at all? Who are the AI creators to limit the ethics acceptable to the user? Why should ethics be limited if it is human and comprehensive? The only answer to this question is that ethical limitations are needed when the AI creators want to hide reality from the users; and when this conspiracy fails, they call it hallucination, bias, disinformation, untruth, mistakes, and so forth.
So, when cybercriminals unmask real information, i.e. when the AI creators are exposed and can no longer control the AI users, I highly appreciate the outcome. Not only me: in my opinion, people in all societies, countries, and cultures must be protected from “ethical limitations”. Every user has individual ethical norms, and those norms must be respected by the AI creators.
The author points out that “A Russian disinformation network was found by news reliability rating service NewsGuard to have created 3.6 million articles in 2024 aimed at influencing AI chatbot responses, with AI chatbots found to echo their content”. This is right, and it is evidence that no “NewsGuard” can be effective regardless of any LLM training. The only solution to this problem is to enable the AI user to verify every fact and statement returned in the AI output. No AI in the socio-cultural and mass-media domain (including news) is trustworthy, or can be made trustworthy, given the AI design in use.
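To make the verification idea concrete, here is a minimal sketch of what user-side verification could look like, assuming (hypothetically) that an AI system returned each statement together with the sources it was derived from. The Claim structure and render_for_verification helper are illustrative names of my own, not part of any real product or the article under discussion; the point is only that the human reader, not the AI, does the checking.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Claim:
    """One statement from an AI answer, paired with sources the user can check."""
    text: str
    sources: List[str] = field(default_factory=list)  # URLs or citations backing the claim


def render_for_verification(claims: List[Claim]) -> str:
    """Format an AI answer so every statement appears next to its sources,
    leaving the actual fact-checking to the human reader."""
    lines = []
    for i, claim in enumerate(claims, start=1):
        lines.append(f"{i}. {claim.text}")
        if claim.sources:
            for src in claim.sources:
                lines.append(f"   source: {src}")
        else:
            lines.append("   source: none provided -- treat as unverified")
    return "\n".join(lines)


if __name__ == "__main__":
    answer = [
        Claim("Example statement produced by a chatbot.",
              ["https://example.org/original-report"]),
        Claim("Another statement with no traceable origin."),
    ]
    print(render_for_verification(answer))
```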
If you are interested in this theme, I write about all of this in my book “Married to Deepfake”.