Artificial intelligence is on everybody’s lips these days, sparking excitement, fear and endless debates. Is it a force for good or bad – or a force we actually have yet to fully understand? We sat down with prominent computer scientist and AI researcher Mária Bieliková to discuss these and other pressing issues surrounding AI, its impact on humanity, and broader ethical dilemmas and questions of trust it raises.

Congratulations on becoming the latest laureate of the ESET Science Award. How does it feel to win the award?

I feel immense gratitude and happiness. Receiving the award from Emmanuelle Charpentier herself was an incredible experience, filled with intense emotions. This award doesn't just belong to me – it belongs to all the remarkable people who accompanied me on this journey. I believe they were all equally thrilled. In IT, and in technology in general, results are achieved by teams, not individuals.

I'm delighted that this is the first time the main category of the award has gone to the field of IT and AI. 2024 was also the first year the Nobel Prize was awarded for progress in AI. In fact, four laureates were recognized for AI-related work – two in Physics for machine learning with artificial neural networks, and two in Chemistry for deep neural networks that predict protein structures.

And of course, I feel immense pride for the Kempelen Institute of Intelligent Technologies, which was established four years ago and now holds a stable place in the AI ecosystem of Central Europe.


A leading Slovak computer scientist, Mária Bieliková has conducted extensive research in human-computer interaction analysis, user modelling and personalization. Her work also extends to the data analysis and modelling of antisocial behavior on the web, and she’s a prominent voice in the public discourse about trustworthy AI, the spread of disinformation, and how AI can be used to combat the issue. She also co-founded and currently heads up the Kempelen Institute of Intelligent Technologies (KInIT), where ESET acts as a mentor and partner. Ms. Bieliková recently won the Outstanding Scientist in Slovakia category of the ESET Science Award.

Author and historian Yuval Noah Harari has made the pithy observation that for the first time in human history, no one knows what the world will look like in 20 years or what to teach in schools today. As someone deeply involved in AI research, how do you envision the world two decades from now, particularly in terms of technology and AI? What are the skills and competencies that will one day be essential for today's children?

The world has always been difficult, uncertain, and ambiguous. Today, technology accelerates these challenges in ways that people struggle to manage in real time, making it hard to foresee the consequences. AI not only helps us automate our activities and replace humans in various fields; it can also create new structures and synthetic organisms, which could potentially cause new pandemics.

Even leaving such scenarios aside, technology is consciously or unconsciously used to divide groups and societies. It's no longer just digital viruses aiming to paralyze infrastructure or seize resources; it's the direct manipulation of human thinking through propaganda spread at a speed and on a scale we couldn't have imagined a few decades ago.

I don't know what kind of society we will be living in 20 years from now or how the rules of humanity will change. It might take longer, but we might even be able to alter our meritocratic system, currently based on the evaluation of knowledge, in a way that doesn't divide society. Perhaps we'll change the way we handle data once we realize we can't fully trust our senses.

I am convinced that even our children will increasingly move away from prizing knowledge for its own sake and evaluating success through various tests, including IQ tests. Knowledge will remain important, but it must be knowledge that we can apply. What will truly matter is the energy people are willing to invest in doing meaningful things. This is true today, but we often underutilize this perspective when discussing education. We still evaluate cognitive skills and knowledge despite knowing these competencies alone are insufficient in the real world today.

I believe that as technology advances, our need for strong communities and the development of social and emotional skills will only grow.

As AI continues to advance, it challenges long-standing philosophical ideas about what it means to be human. Do you think René Descartes' observation about human exceptionalism, "I think, therefore I am", will need to be re-evaluated in an era where machines can "think"? How far do you believe we are from AI systems that might push us to redefine human consciousness and intelligence?

AI systems, especially large foundation models, are revolutionizing the way AI is used in society. They are continually improving. Before the end of 2024, OpenAI announced new models, o3 and o3-mini, which achieved significant advancements across tests, including the ARC-AGI benchmark that measures AI's efficiency in acquiring skills for unknown tasks.

From this, one might assume that we are close to achieving Artificial General Intelligence (AGI). Personally, I believe we are not quite there with current technology. We have amazing systems that can assist in programming certain tasks, answer numerous questions, and in many tests, they perform better than humans. However, they do not truly understand what they are doing. Therefore, we cannot yet talk about genuine thinking, even though some reasoning behind task resolution is already being done by machines.

Just as we understand words like intelligence and consciousness today, we can say that AI possesses a certain level of intelligence – meaning it has the ability to solve complex problems. However, as of now, it lacks consciousness. Based on how it functions, AI does not have the capability to feel and use emotions in the tasks it is given. Whether this will ever change, or if our understanding of these concepts will evolve, is difficult to predict.

Mária Bieliková receiving the ESET Science Award from the hands of Nobel Prize laureate Emmanuelle Charpentier

The notion that "to create is human" is being increasingly questioned as AI systems become capable of generating art, music, and literature. In your view, how does the rise of generative AI impact the human experience of creativity? Does it enhance or diminish our sense of identity and uniqueness as creators?

Today, we witness many debates on creativity and AI. People devise various tests to showcase how far AI has come and where these AI systems or models surpass human capabilities. AI can generate images, music, and literature, some of which could be considered creative, but certainly not in the same way as human creativity.

AI systems can and do create original artifacts. Although they generate these from pre-existing materials, some of the results are genuinely new. But that's not the only important aspect. Why do people create art, and why do people watch, read, and listen to art? At its essence, art helps people find and strengthen relationships with one another.

Art is an inseparable part of our lives; without it, our society would be very different. This is why we can appreciate AI-generated music or paintings – AI was created by humans. However, I don’t believe AI-generated art would satisfy us long-term to the same extent as real art created by humans, or by humans with the support of technology.

Just as we develop technologies, we also seek reasons to live and to live meaningfully. We might live in a meritocracy where we try to measure everything, but what brings us closer together and characterizes us are stories. Yes, we could generate those too, but I am talking about the stories that we live.

AI research has seen fluctuations in progress over the decades, but the recent pace of advancement – especially in machine learning and generative AI – has surprised even many experts. How fast is too fast? Do you think this rapid progress is sustainable or even desirable? Should we slow down AI innovation to better understand its societal impacts, or does slowing down risk stifling beneficial breakthroughs?

The speed at which new models are emerging and improving is unprecedented. This is largely due to the way our world functions today – a massive concentration of wealth in private companies and certain parts of the world, as well as a global race in multiple fields. AI is a significant part of these races.

To some extent, progress depends on the exhaustion of today's technology and the development of new approaches. How much can we improve current models with known methods? To what extent will big companies share new approaches? Given the high cost of training large models, will we just be observers of improving black boxes?

At present, there is no balance between the systems humanity can create and our understanding of their effects on our lives. Slowing down, given how our society works, is not possible, in my opinion, without a paradigm shift.

This is why it is crucial to allocate resources and energy to research the consequences of these systems and to study the models themselves, not just through standardized tests as their creators do. For example, at the Kempelen Institute, we research the skills and willingness of models to generate disinformation. Recently, we have also been looking into the generation of personalized disinformation.

There’s a lot of excitement around AI’s potential to solve global challenges – from healthcare to climate change. Where do you believe the promise of AI is greatest in terms of practical and ethical applications? Can AI be the "technological fix" for some of humanity’s most pressing issues, or do we risk overestimating its capabilities?

AI can help us tackle the most pressing issues while simultaneously creating new ones. The world is full of paradoxes, and with AI, we see this at every turn. AI has been beneficial in various fields. Healthcare is one such area where, without AI, some progress – for example, in developing new medications – would not be possible, or we would have to wait much longer. AlphaFold, which predicts the structure of proteins, has enormous potential and has been used for years now.

On the other hand, AI also enables the creation of synthetic organisms, which can be beneficial but also pose risks such as pandemics or other unforeseen situations.

AI assists in spreading disinformation and manipulating people's thoughts on issues like climate change, while at the same time, it can help people understand that climate change is real. AI models can demonstrate the potential consequences for our planet if we continue on our current path. This is crucial, as people tend to focus only on short-term challenges and often underestimate the seriousness of the situation unless it directly affects them.

However, AI can only help us to the extent that we, as humans, allow it to. This is the biggest challenge. Since AI doesn't understand what it produces, it has no intentions. But people do.

Photo credit: © Miro Nota

With great potential also come significant risks. Prominent figures in tech and AI have expressed concerns about AI becoming an existential threat to humanity. How do you think we can balance responsible AI development with the need to push boundaries, all while avoiding alarmism?

As I mentioned before, the paradoxes we witness with AI are immense, raising questions for which we have no answers. They pose significant risks. It's fascinating to explore the possibilities and boundaries of technology, but on the other hand, we are not ready – as individuals, nor as a society – for this type of automation of our skills.

We need to invest at least as much in researching the technological impact on people, their thinking, and their functioning as we do in the technologies themselves. We need multidisciplinary teams to jointly explore the possibilities of technology and its impact on humanity.

It's as if we were creating a product without caring about the value it brings to the consumer, who should buy it, and why. Without a buyer, we wouldn't sell much. The situation with AI is more serious, though. We have use cases, products, and people who want them, but as a society, we don't fully understand what's happening when we use them. And perhaps most people don't even want to know.

In today's global world, we cannot stop progress, nor can we slow it down. It only slows when we are saturated with outcomes and find it hard to improve, or when we run out of resources, as training large AI models is very expensive. That is why our best protection is to research their impact from the very beginning of their development and to set boundaries for their use. We all know that it is prohibited to drink alcohol before the age of 18, or 21 in some countries, yet often without hesitation we allow children to chat with AI systems, which they can easily mistake for humans and trust implicitly, without understanding the content.

Trust in AI is a major topic globally, with attitudes toward AI systems varying widely between cultures and regions. How can the AI research community help foster trust in AI technologies and ensure that they are viewed as beneficial and trustworthy across diverse societies?

As I was saying, multidisciplinary research is essential not only for discovering new possibilities and improving AI technologies but also for evaluating their skills, how we perceive them, and their impact on humans and society.

The rise of deep neural networks is changing the scientific methods of AI and IT. We have artificial systems where the core principles are known, but through scaling, they can develop skills that we cannot always explain. As scientists and engineers, we devise ways to ensure the necessary accuracy in specific situations by combining various processes. However, there is still much we don't understand, and we cannot fully evaluate the properties of these models.

Such research does not produce direct value, which makes it challenging to garner voluntary support from the private sector on a larger scale. This is where the private and public sectors can collaborate for the future of all of us.

AI regulation has struggled to keep up with the field’s rapid advancements, and yet, as someone who advocates for AI ethics and transparency, you’ve likely considered the role of regulation in shaping the future. How do you see AI researchers contributing to policies and regulations that ensure the ethical and responsible development of AI systems? Should they play a more active role in policymaking?

Thinking about ethics is crucial, not only in research but also in the development of products. However, it can be quite expensive, because a real need has to emerge at the level of a critical mass. We still have to weigh the dilemma of acquiring new knowledge against possible interference with the autonomy or privacy of individuals.

I am convinced that a good resolution is possible. The question of ethics and credibility must be an integral part of the development of any product or research from the beginning. At the Kempelen Institute, we have experts on ethics and regulations who help not only researchers but also companies in evaluating the risks connected to the ethics and credibility of their products.

We can see that all of us are becoming more sensitive to these issues. Philosophers and lawyers are thinking about the technologies and offering solutions – though these don't eliminate the risks – while scientists and engineers are asking themselves questions they hadn't considered before.

In general, there are still too few of these activities. Our society evaluates results primarily based on the number of scientific papers produced, leaving little room for policy advocacy. This makes it even more critical to create space for it. In recent years, in certain circles, such as the natural language processing and recommender systems communities, it has become standard for scientific papers to include ethics statements as part of the review process.

As AI researchers work toward innovation, they are often confronted with ethical dilemmas. Have you encountered challenges in balancing the ethical imperatives of AI development with the need for scientific progress? How do you navigate these tensions, particularly in your work on personalized AI systems and data privacy?

At the Kempelen Institute, it has been beneficial to have philosophers and lawyers involved from the very beginning, helping us navigate these dilemmas. We have an ethics board, and diversity of opinions is one of our core values.

Needless to say, it’s not easy. I particularly find it problematic when we want to translate research results into practice and encounter issues with the data the model was trained on. In this regard, it is crucial to ensure transparency from the outset, so we can not only write a scientific paper but also help companies innovate their products.

Given your collaboration with large technology companies and organizations, such as ESET, how important do you think it is for these companies to lead by example in promoting ethical AI, inclusivity, and sustainability? What role do you think corporations should play in shaping a future where AI is aligned with societal values?

The Kempelen Institute was established based on the collaboration of individuals with strong academic backgrounds and visionaries from several large and medium-sized companies. The idea is that shaping a future where AI aligns with societal values cannot be realized by just one group. We have to connect and seek synergies wherever possible.

For that reason, in 2024 we organized the first edition of the AI Awards, focused on trustworthy AI. The event culminated at the Forbes Business Fest, where we announced the laureate of the award – the startup AI:Dental. In 2025, we are continuing the AI Awards and have received more, and higher-quality, applications.

We started discussing the topic of AI and disinformation almost 10 years ago. Back then, it was more academic, but even then, we witnessed some malicious disinformation, especially related to human health. We had no idea of the immense influence this topic would eventually have on the world. And it's only one of many pressing issues.

I fear that the public sector alone has no chance of tackling these issues without the help of large companies, especially today, when AI is being used by politicians to gain popularity. I consider the topic of trustworthiness in technology, particularly AI, to be as important as other key topics in CSR. Supporting research on the properties of AI models and their impact on humans is fundamental for sustainable progress and a good quality of life.

Thank you for your time!