The pace of technological advancement has been staggering since the transformer architecture was adopted in AI chatbots. While these advances drew a lot of attention at the beginning (2023 - 2024), by now (2025) the hype seems to have reached its peak.
In this blog post, I'll outline some of the potential and pitfalls of AI.
AI-based chatbots are great at summarizing large quantities of text. With Retrieval-Augmented Generation (RAG), it's easier than ever to get summarized output from books, articles, journals, etc.
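To make the idea concrete, here is a minimal sketch of the retrieval half of RAG: score each document chunk by keyword overlap with the query, then pick the best chunks to stuff into the prompt. This is only an illustration; real systems use embeddings and a vector store, and the chunk texts and query below are made up.

```python
def score(chunk: str, query: str) -> int:
    """Count how many query words appear in the chunk (case-insensitive)."""
    chunk_words = set(chunk.lower().split())
    return sum(1 for word in query.lower().split() if word in chunk_words)

def retrieve(chunks: list[str], query: str, top_k: int = 2) -> list[str]:
    """Return the top_k chunks most relevant to the query."""
    return sorted(chunks, key=lambda c: score(c, query), reverse=True)[:top_k]

# Made-up document chunks for illustration.
chunks = [
    "Transformers use self-attention to weigh tokens in a sequence.",
    "RAG retrieves relevant passages and adds them to the prompt.",
    "Gradient descent updates weights to minimize a loss function.",
]

# Retrieve context, then build the augmented prompt sent to the LLM.
context = retrieve(chunks, "how does RAG add passages to the prompt")
prompt = "Summarize using this context:\n" + "\n".join(context)
```

The generation half (sending `prompt` to an LLM) is omitted; the point is that the model only summarizes what retrieval hands it, which is why RAG keeps summaries grounded in your source text.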
Another area where AI shines is writing new code (provided you treat the assistant as an intern who has memorized common language syntax and code snippets). With step-by-step instructions, it has become much easier to produce new code with an AI assistant.
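One way to give those step-by-step instructions is to assemble them into a single structured prompt before sending it to the assistant. The `build_prompt` helper and the step wording below are made up for illustration; in practice you would pass the result to whatever chat-completion API you use.

```python
def build_prompt(task: str, steps: list[str]) -> str:
    """Combine a task description and numbered steps into one prompt."""
    numbered = "\n".join(f"{i}. {s}" for i, s in enumerate(steps, start=1))
    return f"Task: {task}\nFollow these steps exactly:\n{numbered}"

prompt = build_prompt(
    "Write a Python function that parses a CSV line",
    ["Split on commas", "Strip whitespace from each field", "Return a list"],
)
```

Spelling out the steps this way plays to the "smart intern" model: the assistant fills in syntax it has memorized, while you keep control of the plan.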
AI Large Language Models (LLMs) are trained on vast amounts of data from the internet, so they can be used to come up with new ideas (limited by the quality of the data the LLMs were trained on).
The transformer architecture attempts to mimic the human brain and how we think. In theory this is great, but in practice every individual has unique experiences (contexts), and therefore a unique brain. As an example, a developer might know the details of the production environment, while an AI looking only at the source code cannot understand those details.
AI hallucination is simply a way of saying the AI made up an answer to your question in the most confident way. This may be caused by the prompts (system / user), or by the LLM's failure to tell apart distinct concepts that share a few similarities, and vice versa.
Unlike other technology tools, AI outputs are non-deterministic (i.e. the same prompts and context can yield different results on different invocations). One way to push AI / LLMs toward more deterministic outputs is to use very specific prompts. When building AI agents, we need to explicitly instruct the AI / LLMs to produce exactly the kind of output we need.
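The non-determinism comes from how tokens are chosen: the model samples from a probability distribution, and a temperature parameter controls how sharp that distribution is. The toy sampler below illustrates this; the logits are made up, and real APIs expose this as a `temperature` setting rather than this exact function.

```python
import math
import random

def sample_token(logits: dict[str, float], temperature: float,
                 rng: random.Random) -> str:
    """Sample one token; near-zero temperature collapses to greedy argmax."""
    if temperature <= 1e-6:  # greedy decoding: always pick the top token
        return max(logits, key=logits.get)
    # Softmax with temperature, then weighted random choice.
    weights = {t: math.exp(l / temperature) for t, l in logits.items()}
    r = rng.random() * sum(weights.values())
    for token, w in weights.items():
        r -= w
        if r <= 0:
            return token
    return token  # numerical fallback: last token

# Made-up next-token scores for illustration.
logits = {"yes": 2.0, "no": 1.5, "maybe": 1.0}

# At temperature 0 every invocation agrees, regardless of the random seed.
greedy = [sample_token(logits, 0.0, random.Random(i)) for i in range(5)]
# greedy is ["yes", "yes", "yes", "yes", "yes"]
```

At higher temperatures the same call can return "no" or "maybe" on different invocations, which is exactly the run-to-run variation described above; precise prompts and low temperature narrow it, but do not always eliminate it.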
While AI / LLMs are great technology, we humans have better context and can apply logic to a given problem. In my daily life I use AI as an assistant for syntax checks, generating test cases, summarizing long text, etc. In a way, AI / LLMs help me get things done quickly, but given their current state, I would spend more time fixing bugs if I relied on them more.