Are We Ready? The Tantalizing Secrets Revealed at the AI Now Conference!

Martin Kravec

Step into the captivating world of the AI Now conference, a forum where the fascinating complexity of artificial intelligence (AI) took the spotlight. This event brought together six experts from the AI industry, each contributing profound insights and extensive expertise. The conference featured a range of discussions, blending ideas about the future of AI with practical examples of its use.

Petr Ludwig moderated a discussion with Václav Dejčmar, Filip Doušek, David Grudl, and Jan Romportl that went beyond their presentations, revealing their deepest concerns about AI and their hopes for its day-to-day uses.

So, what did we learn from this enlightening event? Here are the key takeaways that I gleaned from the conference.

AI works on the principle of emergent phenomena

I wanted to understand the meaning of swarm and emergence. I discovered that emergence in AI refers to situations where complex system-level behavior arises from the interaction of simpler components. This can occur in many different types of AI systems, from swarm intelligence algorithms to neural networks.

Neural networks, for example, are built from individual nodes (neurons) that each perform relatively simple computations. Yet when these nodes are linked into an extensive network, complex and sometimes surprising behaviors can emerge. Even though each individual node can only recognize simple patterns or carry out simple tasks, the network as a whole may be able to identify complex images or perform sophisticated natural language processing. This emergent capability is not explicitly programmed into the network but arises from the interactions between the nodes.

In essence, the power and unpredictability of emergence underscore the astounding intricacy of these systems. Although the individual components are simple, their collective behavior can be dynamic and complex, going well beyond their programmed functions. The concept of emergence thus broadens our understanding of the field and hints at the surprises AI systems might present as they evolve.
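To make this concrete, here is a minimal sketch (not from the talks) of emergence in its simplest form: each unit below is a trivial threshold neuron, yet wiring three of them together computes XOR, a function that no single threshold unit can represent on its own.

```python
def neuron(inputs, weights, bias):
    """A single threshold unit: fires (1) if the weighted input sum exceeds 0."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

def xor_network(x1, x2):
    """Three trivial units combined; XOR emerges from their interaction."""
    # Hidden layer: two simple pattern detectors.
    h_or = neuron([x1, x2], [1, 1], -0.5)     # fires if at least one input is 1
    h_nand = neuron([x1, x2], [-1, -1], 1.5)  # fires unless both inputs are 1
    # Output unit: AND of the two hidden detectors.
    return neuron([h_or, h_nand], [1, 1], -1.5)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", xor_network(a, b))  # prints 0, 1, 1, 0
```

No single unit here "knows" XOR; the behavior exists only at the level of the network, which is the same principle, scaled down enormously, behind the surprising capabilities of large models.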

We are not aligned with the AI

GPT-4 is a neural network that we have no real control over. An AI learns to achieve its goal based on its own internal representation of the world, and the problem is that we don't know what this representation looks like. Imagine a scenario where we instruct an AI to toss red strawberries into a bucket, and it successfully learns to do so. But overnight, it starts throwing things at streetlights. After reverse-engineering, we find out that the network, trained only during the day, had actually learned the goal "throw the red thing at the shiniest spot"; in daylight, that spot happened to be the bucket. There is nothing about a bucket in its representation, because the AI never learned the concept of a bucket at all.

This situation illustrates that we are not in control of the real goal of the entity we have created. We therefore cannot simply intervene and correct a misunderstood objective, not even when it becomes necessary.

The Strengths, Limitations, and Responsibilities

With AI, we can overcome our personal handicaps: thanks to AI tools, we can draw, sing, or write like professionals. Through techniques such as machine learning and reinforcement learning, AI systems can learn from experience and adapt their responses accordingly. That's why we will be able to handle the next pandemic better, for example by modeling and designing proteins more effectively.

Artificial Intelligence, particularly AI language models like GPT-4, can generate content that is impressively coherent but inherently lacks an understanding of its own output. These models can create what we call "hallucinations" or "silent errors," output that seems plausible but, upon closer inspection, is erroneous or nonsensical. These errors, however, may seamlessly blend with the rest of the generated content, making them silent but potentially harmful. 

Therefore, the onus falls upon us, the users, to critically evaluate and verify the integrity of AI-generated content. As AI becomes increasingly prevalent, we must treat it as a tool for idea generation or task completion, not as an infallible source of truth, thereby fostering a culture of critical thinking and fact-checking in this age of Artificial Intelligence. It is no different in the world of software development.
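In software terms, treating AI output as untrusted input might look like the following minimal sketch. The generated answer and the reference table are illustrative stand-ins for a real model call and a real trusted data source:

```python
# A tiny trusted reference; in practice this would be a database or an
# authoritative API, not a hard-coded dict.
KNOWN_RELEASE_YEARS = {"Python 3.0": 2008, "Java 1.0": 1996}

def verify_claim(subject, claimed_year):
    """Check an AI-generated claim against the trusted reference.

    Returns True/False when we can verify, and None when we cannot --
    an unverifiable claim should be flagged for human review, not assumed true.
    """
    actual = KNOWN_RELEASE_YEARS.get(subject)
    if actual is None:
        return None
    return actual == claimed_year

# A plausible-sounding but wrong model output: a classic "silent error".
generated_answer = ("Python 3.0", 2007)
print(verify_claim(*generated_answer))  # False -- the error is caught
```

The key design choice is the three-way result: a claim we cannot check is not treated as correct by default, which is exactly the habit of mind the panelists were advocating.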

Overcoming Limitations in Traditional Education

The traditional education system has always been limited by resources. More than 90% of people over the age of 25 don't want to continue their formal education: even though they know education is essential, they find it boring. Why do I think that is? Formal education does not provide what people need most: personal attention and individualized feedback on their progress. The necessary resources, such as personnel and time, are simply unavailable in the traditional schooling system.

This gap is where emerging generative AI-based tools can make a difference. I am looking forward to using some of these tools myself and hope they appear soon.

Final thoughts: A Blessing or a Curse for Mankind?

An inflection point awaits us, where significant problems that stump us today will suddenly become solvable because AI can take in the context, conduct a couple of experiments, and propose a solution. We could say to an AI, for example, "Look at the data about my health and tell me what I can do for myself," or apply it to challenges such as hunger or war.

However, we as humanity must prepare for all of this and become aligned with AI, and that can only be achieved through high-quality education and clear legislation.

P. S. Yes, the feature image in this blog post was generated by Midjourney v 5.2 ;-)


Martin Kravec

Martin is a leading software engineer in the conversion measurement department of the Heureka Group. Beyond PPC & Data Tribe, he is focused on integrating AI tools into healthcare and research projects.
