Over the past few days, I’ve been exploring Artificial Intelligence (AI), particularly Generative AI. This has given me a deeper understanding of its potential and risks.
With affordable GPUs and NVMe storage, deploying powerful custom Generative AI models like GPT, LLaMA, and DALL·E 2 is now easier than ever. These models can be tailored for tasks like text or image generation, either by fine-tuning their weights or by steering them with prompt engineering. While this lowers the barrier to innovation, it also raises significant concerns.
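To show how low that barrier has become, here is a minimal fine-tuning sketch using the Hugging Face transformers and datasets libraries. The model name "gpt2", the file my_corpus.txt, and the hyperparameters are placeholders I chose for illustration, not a recommended recipe:

```python
# Minimal causal-LM fine-tuning sketch (assumes: pip install transformers datasets).
# "gpt2" and "my_corpus.txt" are placeholders, not specific recommendations.
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from datasets import load_dataset

model_name = "gpt2"  # any causal LM from the Hub would work the same way
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 ships without a pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical training file: one text example per line. Nothing here checks
# what that text actually contains -- which is exactly the concern.
dataset = load_dataset("text", data_files={"train": "my_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="tuned-model", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Roughly thirty lines, a consumer GPU, and any text file: that is all it takes to reshape a model's behavior.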
My main concern lies in the ‘tuning’ process. With no clear regulation of who can fine-tune models, harmful data can be injected into them. For example, someone could upload harmful content, such as gun-making tutorials, and fine-tune a model to reproduce those dangerous instructions. Such misuse could lead to severe consequences.
Bias is another major issue. If toxic content is used to train a model, it may reinforce harmful narratives. Imagine feeding a model content that dehumanizes certain groups: the result could be a system that promotes division and hostility. Techniques like Retrieval-Augmented Generation (RAG) and guardrails can help reduce this risk, but these measures are only effective when applied responsibly, meaning the model is tuned by responsible people, not by bad actors.
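To make the RAG-plus-guardrails idea concrete, here is a toy sketch of the pattern. Everything in it is simplified for illustration: the retriever scores documents by word overlap where a real system would use embeddings, the guardrail is a naive keyword blocklist where a real system would use a trained safety classifier, and generate is a hypothetical stand-in for an actual LLM call:

```python
# Toy RAG + guardrail pipeline. Every component is a deliberate simplification.

BLOCKLIST = {"weapon", "explosive"}  # illustrative terms only

DOCUMENTS = [
    "Bananas are rich in potassium and vitamin B6.",
    "Apples contain fiber and vitamin C.",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k documents sharing the most words with the query."""
    q = set(query.lower().split())
    ranked = sorted(DOCUMENTS,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def violates_guardrail(text: str) -> bool:
    """Naive keyword check; real guardrails use trained classifiers."""
    return any(term in text.lower() for term in BLOCKLIST)

def generate(prompt: str) -> str:
    return f"[model output for: {prompt[:60]}...]"  # hypothetical LLM call

def answer(query: str) -> str:
    if violates_guardrail(query):  # input-side guardrail
        return "Request refused by input guardrail."
    context = "\n".join(retrieve(query))  # ground the model in vetted text
    reply = generate(f"Answer using only this context:\n{context}\n\nQ: {query}")
    if violates_guardrail(reply):  # output-side guardrail
        return "Response withheld by output guardrail."
    return reply

print(answer("Why are bananas healthy?"))
```

Notice that the safety of this pipeline depends entirely on who curates DOCUMENTS and BLOCKLIST, which is exactly why responsible hands matter more than the technique itself.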
Another concern is the over-reliance on Generative AI. These models confidently provide responses, even when they contradict themselves. For instance, asking “Why is a banana healthier than an apple?” and “Why is an apple healthier than a banana?” may both yield convincing yet conflicting answers. Without adding context like dietary needs or personal preferences, the output may be meaningless.
The risk is in assuming the model’s response is absolute truth. Users may become overly satisfied with an impressive response, ignoring the need for critical thinking or fact-checking. This overconfidence prevents us from exploring better alternatives or questioning flawed insights.
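One simple habit that counters this overconfidence is to pose a claim and its inverse and compare the answers. Below is a small sketch of that cross-check; ask_model is a hypothetical stand-in for a real LLM API call, returning a canned confident answer for demonstration:

```python
# Cross-check sketch: if a model argues convincingly for both a claim and its
# inverse, neither answer should be trusted without external fact-checking.

def ask_model(prompt: str) -> str:
    # Placeholder: a real implementation would call an LLM here.
    return f"Certainly! {prompt.rstrip('?')} for several compelling reasons..."

def cross_check(claim_a: str, claim_b: str) -> None:
    answer_a = ask_model(f"Why is {claim_a}?")
    answer_b = ask_model(f"Why is {claim_b}?")
    print("Claim A:", answer_a)
    print("Claim B:", answer_b)
    print("Both framings produced confident answers; verify before trusting.")

cross_check("a banana healthier than an apple",
            "an apple healthier than a banana")
```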
I believe that to ensure Generative AI is used safely and effectively, we must encourage responsible tuning, clear ethical guidelines, and continuous evaluation of model outputs. Critical thinking remains crucial in mitigating the risks that accompany this powerful technology.