
The Summit brought together global communication leaders, reputation advisers, crisis specialists, technologists and corporate affairs practitioners to discuss crisis, ethics, branding, AI, measurement and leadership.
At its core was a question about authority, and how it is earned, lost and sustained when institutions can produce more language than ever, yet struggle to be believed.
South Africa has already been handed a practical example. News24 reported that the draft National AI Policy contained fictitious references, likely AI hallucinations, and the draft has since been withdrawn after Communications Minister Solly Malatsi said unverified citations compromised its credibility and integrity.
The irony is hard to miss. A policy meant to guide AI became an example of the very risk leaders must understand: language that carries authority, while the thinking beneath it still requires interrogation.
In Davos, that tension connected Paul Holmes’ remarks on AI, Katja Fasink’s argument that people have become the media, the trust measurement panel and the leadership discussion.
People now decide what matters, what circulates, which version feels plausible, and when an official explanation sounds thin.
They do this as employees with screenshots, customers with receipts, communities with memory, parents in WhatsApp groups, activists with networks and citizens with lived experience.
This is why corporate crises can feel unfair to organisations.
Leadership teams may believe they are presenting facts while stakeholders are already interpreting behaviour, motives, history and patterns.
A company may see an incident while employees see culture; a brand may see a complaint while customers see confirmation.
People become the media when official channels lose the exclusive right to frame reality. This is where Holmes’ AI caution becomes relevant.

Holmes used Wimbledon 2025 to show how easily the term AI is inflated.
The tournament’s line-calling technology may be accurate, useful and faster than the human eye, but it still performs a defined visual task, determining whether a ball landed in or out. Calling that intelligence gives the technology authority it has not earned.
That is the context for Holmes’ warning about “slop”.
The communications industry was producing bland, forgettable content long before generative AI arrived: interchangeable purpose statements, generic thought leadership, campaigns without cultural intelligence, and brand language that sounds impressive until it meets real people.
AI can now produce that material faster, cheaper and in greater volume.
Writing, he argued, tests whether an institution understands the issue or has merely assembled language around it.
The trust measurement panel, moderated by Catherine Blades, sharpened this point by asking whether communicators are measuring reputation against business outcomes such as talent acquisition, revenue generation, share price and purchasing trust.
She also pressed the room on whether AI is being treated as a stakeholder, and whether data is used continuously from strategy through execution to evaluation and action.

That may sound technical, yet it goes to corporate value. If trust shapes whether people buy, invest, work, stay, advocate, forgive or give an organisation the benefit of the doubt, measuring trust becomes a business discipline.
AI systems are now part of the information environment around every organisation.
They draw on media coverage, owned platforms, public records, search results, and user-generated content, increasingly shaping what people learn before they visit a company’s website or hear from its leaders.

The leadership discussion gave this argument its human centre.
Thabisile Phumo, executive vice-president of stakeholder relations at Sibanye-Stillwater, pointed to the need for sage leadership, while the broader panel explored how leaders must integrate different forms of intelligence without confusing voice with authority, data with meaning, or speed with judgment.
That kind of leadership recognises that authority without listening becomes brittle, data without context becomes dangerous, and AI without ethical leadership becomes a productivity machine with no moral centre.
It also acknowledges that younger professionals often sit closer to cultural signals, while more experienced leaders carry institutional memory, pattern recognition and an understanding of consequence.
As Chetna Makan, marketing manager at Advantics in France, put it, this generation may be “more fearless than entitled”, a useful distinction in a workplace where voice is too often mistaken for rebellion.

Leadership makes decisions, communication creates meaning, and AI, at its best, supports both. Institutions get into trouble when they confuse these roles.
Machines may interpret organisations, but people still judge them. One scans the record and detects patterns, while the other reads the room and decides whether those outputs deserve belief.
The lesson from Davos is clear. Communication is becoming more technical and more human at once. AI may replace tasks, but people are replacing institutional control.
