Vibe-Uncoding: Debunking the Myth of Programming with Artificial Intelligence

Vibe-coding is an approach that uses artificial intelligence, specifically Large Language Models (LLMs), to generate code from natural language descriptions provided by the user. Despite its usefulness, its real capabilities are widely misperceived: media exposure and the overvaluation of its potential have led many to form preconceived ideas that do not correspond to reality.
It is important to note that LLMs rely on repeating patterns identified during their training on large datasets. In the context of automatic code generation, this data largely comes from publicly accessible software repositories, such as those hosted on GitHub, a practice whose legality remains under debate.
The quality of the code generated by these models is directly related to the quality of the data that feeds them. That quality can be assessed through several metrics, including code complexity (which has a direct impact on performance and energy consumption) and robustness, which determines resistance to exploitable vulnerabilities. Unfortunately, attacks of this kind already exist, particularly against open-source software supply chains, and it is expected that, in the near future, attacks targeting the training data itself will emerge, with the aim of inserting intentional vulnerabilities into the code produced by AI models.
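To make the idea of a complexity metric concrete, the sketch below estimates the cyclomatic complexity of a piece of Python code by counting decision points in its syntax tree. It is a minimal illustration only: the sample function, the chosen set of decision nodes and the helper name approximate_complexity are assumptions made for this example, not part of any standard tool.

```python
import ast

# Node types treated as "decision points" in this illustrative approximation.
DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                  ast.BoolOp, ast.IfExp)

def approximate_complexity(source: str) -> int:
    """Return 1 plus the number of decision points found in the source."""
    tree = ast.parse(source)
    decisions = sum(isinstance(node, DECISION_NODES) for node in ast.walk(tree))
    return 1 + decisions

# Hypothetical snippet standing in for code a model might generate.
sample = """
def classify(x):
    if x < 0:
        return "negative"
    for i in range(x):
        if i % 2 == 0 and i > 2:
            return "even and greater than two"
    return "other"
"""

print(approximate_complexity(sample))  # prints 5: one base path plus four decision points
```

Dedicated analysers compute this far more rigorously, but the principle is the same: more decision points mean more execution paths to test and, typically, more opportunities for defects, which is why complexity is a useful proxy for the quality of generated code.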
At the same time, a perception has been emerging that these technologies make higher education institutions obsolete, suggesting that, in the future, the determining factor in the job market will be professionals' ability to express, in a clear and structured way, what they intend to code, in a logic close to what is now known as prompt engineering. This view would imply that higher education loses relevance for highly qualified professionals, particularly in computer science and computer engineering.
However, a careful analysis of this premise reveals an obvious fallacy: it is only possible to formalize an idea when one truly understands what one is trying to achieve. The fundamental mission of higher education is not to repeat known patterns. On the contrary, it rests on developing a critical capacity for analysis, combined with mastery of the theory underlying fundamental scientific areas such as mathematics, physics and computer science. It is precisely this theoretical foundation and this critical capacity that allow technological solutions to be implemented and their quality to be assessed in an informed way, thus ensuring the robustness, efficiency and reliability of the systems developed.
It is important to understand the real potential of this technology. Contrary to the perception that it will lead to the elimination of jobs, artificial intelligence, when used correctly, has the capacity to expand the market by increasing productivity. In this sense, these tools should be seen as aids at the service of highly qualified professionals, who have the skills needed to critically evaluate the results produced and thus extract their true value.
On the other hand, indiscriminate use by individuals without adequate training entails significant risks. A lack of critical and technical capacity can lead to the uncritical acceptance of incorrect results, compromising the efficiency and security of the systems developed. This phenomenon is not new: reproducing solutions copied from platforms such as Stack Overflow has been relatively common for years. The difference is that, nowadays, these patterns are generated in a more automated and personalized way, which demands greater care in their validation and application.
For those who place absolute trust in this technology, it is worth asking the following question: would you be willing to board a plane whose system had been developed using code generated by artificial intelligence?