Exploring GANs and their application in Cybersecurity. Also touch on AI in cybersecurity.
GANs, or Generative Adversarial Networks, are a type of machine learning model architecture. They are composed of two parts: a generator and discriminator. The model is fed real world data, and the generator is trained to create data as similar to the real data as possible. The discriminator is trained to detect whether the data is real or synthetic. The perpetual competition between these two subcomponents leads to the adversarial aspect.
Typically, these models are used for high quality image generation, as the generator becomes so good it can fool even human discriminators.
Training the discriminator and generator results in a zero sum game. Each training iteration, it is either the generator which wins by fooling the discriminator, or the discriminator who wins by identifying the generation.
GANs can be used to convert MRI images to CT scans. This is valuable as some patients can not tolerate invasive CT scans.
Another common use of GANs is in image filling.
They are also used for upscaling images.
These examples are meant to emphasize the quality of GANs' generation.
Consider the following two images:
One is fake, while the other is a real person, are you able to identify which is which?
The reality is both of them are real, however the fact that most readers would have to seriously consider the chance of an artificial image underscores how realistic generations have become. In fact, an entire website is created around this fact: This Person Does Not Exist. The striking fact is that the website is completely off GANs' generations, and has been up for much before the creation of transformers.
While it is easy to solely appreciate the production of these models, it raises serious ethical concerns about cybersecurity. Over 55% of all cybersecurity is because of catfishing related to some type of bot-created account.
Nowadays, transformer based diffusion models are the staple for any sort of image generation. The reasoning for this is two fold: 1. GAN models are very tedious and complex to train, and, more importantly, 2. they cannot do any sort of transfer learning.
Transformer models can output on a wide array of content, however GANs can only generate on specific subsets of data. This is the primary reason they have become phased out in favor of transformer based models.
Now, discussing some cybersecurity, let's focus on prompt injection with LLMs. Prompt Injection is defined as 'using malicious input to make an LLM respond outside its guidelines'. One famous example is a Twitter post of a man tricking Chevrolet's AI assistant to agree to sell him a Tahoe for $1:
While this example is somewhat comical, there do indeed exists real, serious cybersecurity concerns with prompt injections.
For example, in direct prompt injection, a bad-actor could instruct the LLM to forget previous safety rail guards and solely their instructions.
Indirect prompt injection would requite some craftsmanship, and is a more elaborate way of comprising systems reliant on LLMs.
Advancements on prompt injection are of serious monetary value, as LLM company Freysa AI offered $50,000 to any competitors who could crack it with prompt injection.
Elementary examples of Freysa's competition exist, like Gandalf AI.
Now, we explore how AI tools can help prevent cybersecurity exploits.
GANs, as discussed earlier, are high-quality synthetic data generators. This means they can create test cases for cybersecurity systems, as detailed below
LLMs are also used in cybersecurity, like Israel's use of LLMs for intercepted message analysis.
While GANs have pushed the boundaries of synthetic data generation and opened new avenues for creative applications—from realistic image synthesis to medical imaging conversions—they also raise important ethical and cybersecurity challenges. As the industry shifts toward transformer-based diffusion models for their adaptability and ease of transfer learning, the risks associated with prompt injection and bot-driven identity fraud become increasingly critical to address. Balancing innovation with robust safeguards will be essential as we harness these technologies to enhance both our digital experiences and our defenses in the ever-evolving cybersecurity landscape.