On March 19, 2026, the first online workshop of the Shaped by AI project took place, moderated by Enrico Maioli and introduced by Stefano Toffolo.
A phenomenon you can't ignore
The meeting opened with an introduction on the broader context. The adoption of ChatGPT and generative AI is reaching unprecedented numbers. By February 2026, there were over 900 million weekly active users worldwide — a figure that more than quadrupled in a year and a half. In Europe, the impact is significant too.
From these premises, Shaped by AI was born, with the goal of observing how the adoption of this new technology is shaping the way people interact with digital products. The workshop inaugurated the "Mapping the change" phase, designed to understand how users' habits, needs, expectations, and mental models are evolving. This is still a young phenomenon; ongoing monitoring will serve, in a second phase, to understand how digital professions will need to adapt to these new scenarios.
The discussion unfolded around three main themes.
1. Conversational interfaces and accessibility
The first topic addressed the nature of conversational interfaces, analyzed from two main angles:
The illusion of simplicity. The chat interface feels familiar thanks to the habits built through messaging apps. However, knowing how to use the interface is not the same as being able to use it effectively. Crafting the right request for the machine (prompting) remains a complex activity.
Anthropomorphization. GenAI tools respond in a proactive, empathetic, and seemingly "human" way, adapting to the user's needs.
This was the most debated point. The group discussed how the conversational interface is less intuitive than it appears, as it requires users to articulate their intentions precisely in terms the machine can understand. A Jakob Nielsen article on the topic was referenced in support.
Another observation concerned the ambiguity of natural language. Meaning-making is strongly tied to context, an element that is difficult to make explicit in a chat. When users expect the AI to solve problems without being given precise instructions, communication breaks down. To mitigate this, participants proposed hybrid interfaces that balance elements of classic UI with conversational ones.
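One way to picture the hybrid-interface idea is a prompt builder that merges explicit context from classic UI controls with the user's free-text request. The sketch below is purely illustrative, not from the workshop; all names (`StructuredContext`, `build_prompt`, the example fields) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class StructuredContext:
    """Explicit context collected through classic UI controls
    (dropdowns, date pickers) rather than inferred from free text."""
    product_area: str
    date_range: str
    output_format: str

def build_prompt(context: StructuredContext, free_text: str) -> str:
    """Combine unambiguous UI selections with the user's
    natural-language request into a single grounded prompt."""
    return (
        "Context (from UI controls):\n"
        f"- Product area: {context.product_area}\n"
        f"- Date range: {context.date_range}\n"
        f"- Desired output: {context.output_format}\n\n"
        f"User request: {free_text}"
    )

ctx = StructuredContext("billing", "2026-01-01 to 2026-03-01", "bullet summary")
prompt = build_prompt(ctx, "What complaints came up most often?")
print(prompt)
```

The point of the pattern is that the ambiguous part of the interaction shrinks: the machine no longer has to guess scope, timeframe, or format from the conversation, because the classic UI makes those explicit.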
On the accessibility front, the discussion highlighted concrete challenges. Studies on the difficulties faced by visually impaired users with traditional systems raise doubts about whether voice or natural language interfaces can replace the mouse-keyboard combination. The conversation then extended to users with dyslexia or dysorthography, for whom written text is a barrier; integrating graphic elements and colors to leverage visual memory was suggested as a possible response.
2. ChatGPT as the new search engine
A second thread explored the perception of GenAI tools as new starting points for online research. The paradigm of intent-based interaction is gaining ground — users describe the desired output directly, rather than entering a series of commands to obtain it.
The discussion was enriched by a perspective tied to one of the core themes of Information Architecture. Drawing on information-seeking strategies (as outlined by Luca Rosati), a concrete risk was highlighted: ChatGPT returns answers hyper-focused on the user's initial question, eliminating the element of serendipity. Passive exposure to information not directly sought, which is one of the primary ways humans learn, is lost.
The shift from graphical to conversational interfaces was also analyzed, highlighting how the accuracy of results depends on the clarity of the prompt. The question of how traditional search (Google-style) and these new modes will coexist in the future remains open.
3. (False?) expectations
The final theme touched on the balance of trust toward GenAI tools. On one hand, there is a tendency to overestimate the capabilities of these tools, treating them as omniscient oracles rather than "probabilistic language engines." On the other hand, widespread skepticism persists; users are not yet ready to entrust critical operations — such as managing their own portfolio — to this technology. A recent example: OpenAI is scaling back its Instant Checkout feature.
The lack of transparency in processing, the black-box nature of AI, was identified as the main obstacle to user trust. Added to this is an epistemic risk: the fluency and apparent perfection of GenAI-generated text lead users to confuse what seems well written with what is actually true, a theme already raised by Walter Quattrociocchi at WIAD in Cesena.
Finally, the discussion turned to the impact on designers' day-to-day work. Concrete design experiences emerged, from data visualization to management software, in which clients insisted on adopting exclusively natural-language interactions. These requests were out of scale with current tools and exposed the gap between expectations and actual technical complexity. Participants agreed that today part of the designer's role is to bring clients back to the real objectives of their projects, steering them away from simplistic solutions that amount to adding a chatbot.
The role of design in the transition
The first Shaped by AI workshop made it clear that adopting generative AI is not just a technological challenge — it is also one of interpretation and design. Conversational interfaces bring new obstacles related to accessibility, the loss of serendipity, and the risk of confusing linguistic plausibility with factual truth.
Faced with often unrealistic expectations, the role of digital professionals becomes even more relevant: governing this transition by anchoring innovation to real user needs, promoting hybrid interfaces, and maintaining a critical mindset. Future workshops will deepen these themes with the goal of building practical guidelines for those who design digital experiences.