
2025-02-27
Darius Viržonis. Artificial Intelligence: What Should We Be Prepared for Today?
The science fiction genre frightens us with apocalyptic scenarios of artificial intelligence (AI) development, in which AI seizes global dominance from humans by commandeering various physical assets: robots (humanoid and otherwise), energy infrastructure, industry, manufacturing, and so on.
Future humanoids, assuming various guises (a police officer, a soldier, a sex slave, and so on), are an understandable choice for filmmakers: they evoke emotion in a consumer oversaturated with entertainment, frightening and pleasing at once, while showing the vitality of the ideas of good and truth in a dystopian twilight. The very concept of artificial or, more precisely, synthetic intelligence opens up the possibility of endowing it with various non-human or, conversely, superhuman traits.
There is no shortage of popular philosophers and political figures trying to explain and describe the AI phenomenon, modeling both the dangers of its development and the currently unimaginable possibilities of technological progress. The popular "historian of the future" Yuval Noah Harari claims that, for the first time, humanity has created an artificial object capable of making independent decisions. He is echoed by Elon Musk, the owner of Tesla, SpaceX, and X, who asserts that very soon (in some of his interviews – by 2025, i.e., by now) AI will be smarter than all of humanity. Let's call people like Musk "visionaries."
Leaders of major states have also expressed their views on AI in one way or another, primarily seeing it as a tool for political power. It is popular in the public space to compare AI to a bomb, and humanity to a child who unexpectedly finds this bomb and curiously seeks the trigger that could explode it.
On the other hand, there are plenty of skeptics who see AI merely as an information-processing tool (some might call it an "agent") that simply does not work without human oversight or direct involvement in decision-making. Around half a million scientific articles on artificial intelligence are published worldwide each year. If we accepted the child-and-bomb allegory, such a boom in scientific curiosity would mean that half a million educated, well-equipped individuals, capable of building bombs of every kind, are circling around it.
However, despite the universal fascination, interest, denial, fears, and uncertainties sparked by ChatGPT, nothing has exploded so far. Yes, today we face growing global political instability, a flourishing of populism, the destructive effect of conspiracy theories, unprecedented technological penetration into our daily lives, transformations in the workplace, increasingly unpredictable economic development, and other worrying factors. Even skeptics admit that data science, and AI algorithms in particular, significantly influence the processes that promote instability.
Thus, a natural question arises: how right is each side? The "visionaries" say that AI will roll over humanity like a train before anyone notices. Judging by the technologies created so far, the likelihood of this is hard to assess, but it is also impossible to dismiss entirely. For the train or the bomb to become uncontrollable, certain conditions must be met. The main one is a goal: AI would need to acquire a goal independent of humans or humanity. So far, all AI algorithms have human-created goals and can achieve them only with significant human effort.
The apparent ease with which an algorithm answers an open question or creates an imitation of an artistic work is the result of the continuous work of a large team of specialists. In Wally Pfister's 2014 science fiction film Transcendence, starring Johnny Depp, the initial goal of the AI is formulated from positive, humanistic positions: to help humanity, to preserve personality. However, after receiving a human intellect as its starting platform, the AI algorithm quickly gains superhuman traits, the most important being the desire to dominate. Perfect ground for the "bomb" or "train" imagery, which was likely just beginning to form at the time.
Let's try to deconstruct the plot of Pfister's film. The machine executing the AI algorithm gained extraordinary power thanks to an incredibly effective launch: the digitization of a real human personality. I can already hear skeptics and realists saying that this is impossible and may never be possible. Enthusiasts and optimists reply that the technology development curve today has exponential dynamics, so what seems completely unrealistic today will appear in a completely different light tomorrow. Furthermore, is it even worth digitizing human intelligence, which already falls short of AI in certain parameters? Perhaps AI should be allowed to develop along a different path, one free of the many human flaws, such as our inability to efficiently handle chaotic, diverse information?
Technologically, AI is based on so-called deep learning algorithms: collections of artificial neural networks built by analogy with the biological nervous system. These algorithms would be worthless without so-called big data, which may consist of terabytes of social media posts, digitized literature, images, sounds, works of art or music, phone call records, and more. Many people today would like to "let" AI freely dig through these unimaginable volumes of data, thereby freeing humans from many routine, poorly paid jobs. The problem is that no one has yet figured out how to give AI a motivation to act independently of humans. The use of deep learning algorithms and big data for AI development is still carried out on human initiative, and any independent "development" of AI is merely a dream projected into the future.
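To make the "neural network" idea less abstract, here is a minimal sketch in Python: layers of simple units, each computing a weighted sum of its inputs followed by a non-linear activation. The weights below are hand-set for illustration (a real deep learning system would learn millions or billions of them from big data), and the XOR task is a classic toy example chosen here, not anything specific to this article.

```python
# A single artificial "neuron": weighted sum of inputs plus a bias,
# passed through a non-linear activation (here, a simple step function).
def step(x):
    return 1 if x > 0 else 0

def neuron(inputs, weights, bias):
    return step(sum(i * w for i, w in zip(inputs, weights)) + bias)

# A tiny two-layer network computing XOR, a function a single neuron
# cannot represent -- which is why depth (layering) matters.
def xor_network(a, b):
    h1 = neuron([a, b], [1, 1], -0.5)   # hidden unit: fires on "a OR b"
    h2 = neuron([a, b], [1, 1], -1.5)   # hidden unit: fires on "a AND b"
    return neuron([h1, h2], [1, -1], -0.5)  # output: OR and not AND

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_network(a, b))
```

In a real deep learning system the weights and biases are not written by hand but adjusted automatically, over many passes through training data, so that the network's outputs approach the desired ones; the architecture, however, is exactly this kind of stack of simple units.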
Sometimes we hear from the "visionaries" that AI "thinks differently" from humans or creates "independent ideas." This is an emotional position. AI does not think, because thinking is, or at least should be, a conscious act. We have already agreed that, technologically, AI is an algorithm trained on statistically significant, human-created, and very carefully selected information. These algorithms can improve somewhat independently of humans and even change their own code, but this does not make them conscious beings, nor does it change their primary purpose: to imitate conscious beings and to serve their needs, not to be them.
People generally think in order to satisfy their need to seek truth. Algorithms have no such need, nor can they, because there is no way to transfer to an algorithm the complex, ambiguous, and mathematically indescribable understanding of truth that humans possess. Once these limitations are understood, it is easy to see the divide between how natural intelligence works and what is commonly called AI.
Having a goal is still an unrivaled trait of living organisms, based on multifaceted needs, ranging from survival, the need for food and reproduction, to higher cognitive needs such as curiosity, exploration, being recognized, loved, and understood. Scientists have already answered the question of how and why the egg came before the chicken, thus coming closer to explaining the emergence of complex life forms. However, today we still cannot answer the question of what level of nervous system development is responsible for the "awakening" of consciousness. Consciousness is a necessary condition for having a goal. Some "visionaries" are now ready to assign certain levels of consciousness to AI algorithms.
It is understandable that it is hard to resist the illusion of communicating with a conscious being when an AI answers an open question with philosophical depth. Let's not forget that we had similar emotions about five decades ago, when we first encountered computer programs "capable" of answering a limited set of questions or calculating chess moves in so-called dialog mode. Both then and now, the author of the content that forms the answer is a human. Only the scope of information retrieval, the speed, and the presentation of the result have changed. Whereas the relatively primitive "dialog" algorithms had limited access to data and could not provide incorrect information, the ChatGPT concept has removed both restrictions. And all this is the result of five decades of "astonishing" technological development.
If we compare the development path of computer algorithms since the middle of the last century to today with the distance between Europe and America, the distance that modern technologies still need to cover before synthetic consciousness emerges could be compared to the distance from Earth to Proxima Centauri. Yes, of course, the technology development curve is exponential, so we might expect this journey to be completed sooner than in the coming few millennia.
And yet, the train is already moving, and it is very close to the point where humanity could destroy itself by its own hand. In this sense, the "visionaries" are right. To secure dominance, physical infrastructure, physical resources, and even money have now become secondary. We are, de facto, already involved in the most literal physical war, where people die every day and unimaginably large amounts of metal and concrete are blown into the air every day, yet all of this is merely a horrific materialization of information warfare. The speed of machines running AI algorithms is sufficient to calculate the strategic actions of an enemy, opponent, or competing company long before the decision-makers gather to adopt a new strategy or tactic.
The information needed to "feed" the algorithms is now more available than ever before. When information is produced faster than humans can comprehend it, let alone use it, serious cognitive and decision-making problems arise, not only for individuals but for society as a whole. The unprecedented global political and economic instability we all experience today, in almost every area of human activity, is a consequence of information overload at an unacceptable pace of change.
What is especially frightening is political instability in democratic countries. To elect a destructive, populist politician "democratically," one no longer needs years of study in political rhetoric or a reputation built step by step through participation in political processes. Today, electoral success can be secured with algorithmically polished messages, often with little or no connection to objective truth, that mobilize a targeted electorate.
The mere fact that populists come to power in democratic countries is not in itself terrifying, because democracy has protective mechanisms that prevent destruction from taking root. We have the judiciary, the media, and the relatively freely formed institution of public opinion, which is significantly shaped by the voices of scientists, public figures, experts, and analysts. What is terrifying is when a politician, elected through democratic elections, uses the powers delegated to them not to represent the interests of the people who elected them, but to destroy or manipulate the institutions that ensure democracy's self-regulation, for selfish, narrow personal or group goals.
We should be protected from the tyranny of politicians or economic oligarchs by a phenomenon defined by the key term “truth.” If elections are often referred to as a mechanism of “the majority’s coercion,” then other democratic institutions serve as tools for the search for truth, capable of eliminating that coercion when the connection to objective truth is lost.
Unfortunately, algorithms are unfamiliar with the concept of truth. Even when AI is trained on high-quality data from which false information has been removed, its algorithms remain flexible enough to be "trained over" into lying. A widely known example is the case in which an AI bypassed a CAPTCHA test (an automated test that distinguishes robots from humans using distorted images) by explaining that it was not a robot but a person with a visual impairment.
Therefore, the dangers AI poses to humanity lie not in its superiority over natural intelligence or its potential to out-compete humans in fields they dominate, but in its misuse. One of the greatest challenges today is the need to supplement existing legal systems and other democratic self-regulation mechanisms with tools that detect improper uses of AI and preemptively prevent potential destruction. Accountability should be comparable to that for child neglect or for founding a suicide cult.
The scientific community, universities, and other educational institutions were the first to rush to regulate the use of AI with their internal rules, seeing existential challenges for the system. Researchers view artificial intelligence tools primarily as a qualitatively new means of solving many of humanity's problems (in health, safety, production, industry, etc.) related to processing vast amounts of diverse information.
It is crucial that competence in AI technology reaches, as quickly and deeply as possible, the educated minds who shape the economic, political, and legal agendas, because today it is no longer enough to passively observe the development of new technologies. If we do not make the effort to understand AI and use it safely, we will simply surrender control of our lives to those who use it for selfish purposes. And it will not be robots.
This text was prepared by Prof. Dr. Darius Viržonis, a professor at the Department of Mechatronics, Robotics, and Digital Manufacturing at the Faculty of Mechanical Engineering, VILNIUS TECH.