
Cristian Capotescu and Gil Eyal, associate director and director of The Trust Collaboratory at Columbia University, explore the social dynamics of trust as it relates to the rollout of AI in cities.

Cities today are at a technological crossroads. The rise of generative AI promises to reshape how urban residents inhabit, study, work, and conduct their daily lives in the city of the future. Municipal governments across the world recognise the enormous potential of AI. From São Paulo, Barcelona, and Paris to New York, cities are racing to pilot technology built on large language models (LLMs).

In New York, the LLM-based chatbot MyCity has been operating behind the city’s 311 call line since October 2023, guiding residents through complex applications and city ordinances. When Mayor Eric Adams introduced MyCity as part of the city’s ambitious AI Action Plan, its benefits seemed evident: the chatbot would considerably improve citizen-government interactions and give residents a more powerful tool to interact with their city. This initial euphoria was quickly dashed; local non-profits recently reported that the chatbot generates a slew of misleading and incorrect information when queried about city laws and regulations. Because MyCity is built on Microsoft’s Azure AI services, the system exhibits the same issues plaguing other generative AI tools: notably, the tendency to hallucinate and produce inaccurate responses. While some errors might be little more than irritating to users, others can be dangerously misleading. If smart city enthusiasts have not experienced enough backlash over the past decade – as in the case of Toronto’s derailed Quayside plan, led by Google’s sister company Sidewalk Labs – the hasty adoption of AI-based chatbots like MyCity and other analytics tools into municipal systems may just open the next chapter in a crisis of trust: not only in technology, with generative AI at the centre of it, but also in local governments and other democratic institutions.

Public backlash against generative AI in cities is not a foregone conclusion, however. Steering clear of the most obvious and avoidable mishaps will require that city planners, administrators, mayors, and data and privacy experts consider carefully how, or whether, AI can solve existing issues, and whether it can be responsibly integrated into digital infrastructure and service delivery systems. Cities will need to understand how residents perceive and relate to AI, and how it may supercharge surveillance and policing infrastructure through street cameras, sensors, and similar technologies. Questions related to privacy are particularly important because the introduction of AI may raise a range of public concerns. Citizens, for instance, may resist the integration of AI because they feel that cities are rushing a technology whose abilities and limitations are poorly understood; hallucinations and the biases that result from training on limited datasets are just two examples. The public might also be concerned, rightfully or not, that governments are ill-equipped to build robust guardrails to safeguard the public’s interest, or that AI will replace jobs. These tensions may flare up especially if AI is perceived to be suddenly integrated into vital government functions without clear public guidance, open trial periods, or processes that involve citizens in design and decision-making.

Navigating this fraught landscape will be made much more difficult if one word and one concept remain absent from the discussion: trust. Trust is a slippery, fuzzy thing, but there is no escaping it. More than 100 years ago, the sociologist Émile Durkheim wrote that “all the scientific demonstrations in the world would have no influence if a people had no faith in science.” This dictum applies to the integration of AI into modern cities. Cities can develop the most sophisticated, ingenious technical solution for an urban problem, but if the intended users – the public – do not trust it, or if they do not trust those who fund, approve, and develop the technology, its ultimate adoption will face stiff headwinds.

Defining trust

To chart a path of how cities can build, maintain, or repair trust in their capacity to be responsible stewards of AI, let us first get our terminology in order and entertain a basic question: what is trust? Social scientists have grappled with this question for a long time. To date, most answers, even theoretically sophisticated ones, have been rather unconvincing. This is not because few have attempted to crack the code of what trust is but because trust is notoriously hard to study. The difficulty is compounded by the fact that trust is part of our everyday vernacular and imposes itself on us. Its ubiquitous availability prompts many polling institutions, academics, journalists, and experts in the non-profit sector to issue confident pronouncements that they know how much citizens trust institutions, democracy, or science, and that trust can be readily categorised and measured, often absent empirical research into what trust actually is. This has led to a tendency to collapse the social complexity of trust into nifty metrics and percentages (this is what trust barometers routinely do). Or, going the opposite route, trust is clad in the feel-good language of HR departments and the self-help guides of wellness gurus and “thought leaders.”

"We have come to learn that trusting is not a fixed attitude (hence the misleading nature of trust barometers), and it is not a disposition either. Trusting is highly context-dependent; it is shifting and moving depending on new circumstances and events."

We want to suggest a different route. The first point is that the noun “trust” is misleading. It leads us to imagine that there is some such substance, a social “glue” that, once present and solidly materialised, effortlessly holds societies together. There is no such substance, and there is no such guarantee. There is only the act of trust-ing, undertaken or not by concrete individuals in concrete situations. Our approach studies how people arrive at daily decisions, which allows us to “see” how trusting operates, messily and idiosyncratically, in the nooks and crannies of everyday life. Through this perspective, we have come to learn that trusting is not a fixed attitude (hence the misleading nature of trust barometers), and it is not a disposition either. Trusting is highly context-dependent; it shifts and moves depending on new circumstances and events. Trusting is a skillful act that humans learn to develop in order to make decisions they can justify to themselves and others as informed choices rather than as leaps of faith. Trusting is, therefore, both a practice and a communicative act (we have written about this problem at more length elsewhere).

It follows from this that we need to be a lot more skeptical and cautious about what we think we know regarding how to study degrees of trust, its breaking points, and ways to increase or restore trust in government and/or science. This is doubly true for the integration of AI technology, which presents a much more difficult challenge. In recent years, attempts to address these hard questions have not come from the social sciences. Instead, engineers, data scientists, privacy experts, and representatives from a range of other technical fields have jumped into the fray and offered a new concept, “trustworthiness,” to resolve some of these issues. In the wake of this newfound interest in trust, expert panels, action plans, conferences, books, and articles touting “trustworthy AI” as a new gold standard for building, safeguarding, and maintaining trust in this technology have proliferated. The concept of trustworthiness has become so widely popularised that it has established itself alongside older key terms like accountability, fairness, privacy, safety, and transparency. But is it truly compelling?

Breaking down “trustworthiness”

Alas, we are neither convinced that trustworthiness is a good substitute (or synonym) for trust, nor that it is the right way of thinking about how to build trust in the smart cities of the future. There are multiple reasons for our skepticism. When data scientists, for instance, refer to “trustworthy AI,” what they mean is that for machine intelligence to be trusted, its failure rate is of paramount importance. If designers and engineers succeed in driving errors down as far as is technically possible, the result, it is assumed, will be that users trust the system. Errors, and with them the risk of failure, are thought to be predictable and manageable through the process of iteration. Every time an AI system commits a mistake, the designer will tinker with the code and improve it a bit. Bit by bit, the AI will get better and become more reliable, and indeed trustworthy. From this vantage point, trustworthiness is treated as a technical problem for which a technical solution exists.
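
To make this engineering logic concrete, here is a minimal sketch in Python of the iterative view described above, in which every round of fixes shaves a fixed fraction off the failure rate so that reliability improves monotonically. The starting rate and improvement factor are illustrative assumptions, not figures from any real system.

```python
# A toy model (not any vendor's actual methodology) of the engineering view:
# each iteration of tinkering removes a fixed fraction of remaining errors,
# so the failure rate only ever falls, and "trustworthiness" is assumed to
# rise in lockstep with reliability.

def iterate_failure_rate(initial_rate: float, improvement: float, rounds: int) -> list[float]:
    """Return the failure rate after each round of iterative fixes."""
    rates = [initial_rate]
    for _ in range(rounds):
        rates.append(rates[-1] * (1.0 - improvement))  # each fix shaves off a fraction
    return rates

if __name__ == "__main__":
    for step, rate in enumerate(iterate_failure_rate(0.20, 0.30, 8)):
        print(f"iteration {step}: failure rate {rate:.3%}")
```

On this model the curve can only improve; as we argue next, trusting does not behave this way.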

The logic of trustworthiness runs afoul of a fundamental obstacle: trusting does not follow a probabilistic logic. Trusting, instead, is asymmetrical. By this, we mean that the effort to build and maintain trust is disproportionate to the ease with which it can be broken and lost. If trusting were described by a function, it would be non-monotonic. A single event can cause an abrupt and precipitous decline in trust, all the way to profound mistrust and a sense of betrayal. Consider something we all know: no spouse would accept the explanation that their partner committed only one act of infidelity and should therefore be trusted because they were loyal the remaining 99 per cent of the time. From the point of view of “trustworthy AI,” however, 99 per cent accuracy for a chatbot would be a designer’s dream. Yet, from the point of view of trusting as a social practice, the issue is not the probability of error but sequence and asymmetry. A single instance of a particularly socially salient prediction error – for instance, an AI-powered traffic light leading to a pedestrian death – can cause trust to plummet precipitously or fuel the perception of a profound breach of trust. No amount of protesting that the technology’s rate of error is still better than a human crossing guard’s is likely to convince wary city residents. Put differently, while the design of trustworthy AI involves iteration in reversible time, trusting exists in unidirectional, historical time. As noted before, trust requires enormous effort and time to build but little effort and time to undo.
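
By contrast, here is a minimal sketch of the asymmetry just described, under assumed update rules: trust accrues slowly and saturates with each good interaction, while a single salient failure collapses it abruptly, making the trajectory non-monotonic. The gain and collapse parameters are illustrative assumptions, not measured quantities.

```python
# A toy model of asymmetric, non-monotonic trusting: slow accrual on each
# good interaction, abrupt collapse on one socially salient failure.

def update_trust(trust: float, success: bool, gain: float = 0.02, collapse: float = 0.7) -> float:
    """One interaction: slow, saturating gain on success; sharp loss on failure."""
    if success:
        return trust + gain * (1.0 - trust)  # trust creeps up towards 1.0
    return trust * (1.0 - collapse)          # one failure undoes most of the accrual

if __name__ == "__main__":
    trust = 0.1
    for step in range(1, 101):
        salient_failure = (step == 80)  # one highly visible error, e.g. a harmful chatbot answer
        trust = update_trust(trust, success=not salient_failure)
        if step % 10 == 0:
            print(f"interaction {step:3d}: trust {trust:.2f}")
```

Under these toy parameters the system is accurate 99 per cent of the time, yet the single failure at interaction 80 erases most of what nearly eighty smooth interactions had built – the designer’s dream accuracy does nothing to soften the collapse.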

If trustworthiness is riddled with assumptions that contradict so much of what we know about the social dynamics of trusting, is there a better substitute? Let us return to a key point: no technical solution will ever build a robust framework for public trust as long as the social propensities of trusting behaviour remain neglected. Trusting can only be the outcome of a combination of what we have said thus far. First, we have noted that trusting is asymmetrical and that it exists in unidirectional time. This should prompt us to consider timing, speed, and sequence as crucial elements of a trust-building process. Second, if asymmetrical effort and time are of the essence, duration – staying for the long haul – must be part of how to create and maintain trusting behaviours. Third, we highlighted that trust is not a social substance that can be captured by a fixed metric; hence, putting too much trust (pun intended) in trust barometers and surveys is likely to be highly misleading. The fourth and final point is that trust is relational; it is part and parcel of social relationships. Based on these four principles, a framework geared towards encouraging and supporting trusting behaviours should include the following points:

“It’s never too early”

Everybody working on a project knows that time and deadlines are important. For trusting, timing, speed, and sequence are absolutely critical. Considerations of how long, how fast, and when to implement a smart city campaign or new urban technology project are often what can make or break trust. As a result, public engagement and inclusion work cannot be afterthoughts in city planning. These efforts cannot come after all technically important issues have already been decided, after designs have been drawn up, and after funders have committed support on a circumscribed set of priorities. Ideally, involving the public needs to start during the ideation process for a new project. Questions such as these should be at the forefront of the process:

  • Do local residents need and want what city planners, investors, and other stakeholders are setting out to create?
  • Is smart city technology capable of addressing local priorities in responsible, safe, and reliable ways?
  • What other needs might residents have that are not being considered, and might there be alternative solutions that do not involve cutting-edge technology?

City planners, mayors, and other relevant stakeholders should get to the bottom of such questions as early as possible. This process should involve community advocacy groups, community boards, local anchor institutions (such as churches, schools, or local businesses), trusted mediators who are respected in their community (including teachers, pastors, or volunteers), and, most importantly, residents themselves. Uncomfortable conversations will have to be part of this process, as will the willingness to take different routes or to change course on a given project altogether. Finally, planners should allow sufficient time to work with the public and to involve representatives at all decision-making stages, either directly or through transparent feedback loops.

“It’s never too long”

Duration matters too. In historically fraught contexts where local institutions interface with a skeptical public, it would be foolish to assume that residents will greet city representatives with open arms when they parachute into the neighbourhood. This is something we have learned from our colleagues in public health: communities that have endured decades of bias, neglect, and underinvestment will react with suspicion and even hostility to overtures from even the most well-meaning institutions and groups perceived to be external to their own if there is no previous relationship of trust. But how do you create such relationships? One way is for city planners and experts to demonstrate (and communicate) that they are going to be there for the long haul by providing tangible benefits early on, even if these are small (e.g. a micro-grants programme). One cannot rely on promissory public engagement alone. This should also be accompanied by an effort to understand public attitudes, concerns, and needs over the long term. When taking the pulse of the public, we discourage snap surveys: they can be perfunctory, be perceived as performative, and tend to yield little substantial insight into how local residents think. More suitable formats are focus groups, town halls, interviews, listening sessions, participant observation, digital or traditional ethnographies, and co-design workshops with residents. These activities should be repeated periodically and should allow city representatives to embed themselves in local settings. This can be a powerful way to build relationships and should be accompanied, where possible, by workforce development and capacity-building formats that continue public engagement after the end of a project (e.g. educational youth programmes or community fellowships).

“Trust surveys are a limited tool”

We said no snap surveys. At the risk of being repetitive, let us underline the problems with trust surveys:

  1. Trust is, as we said before, context-dependent. What can be learned from a survey is just a snapshot in time and has a limited shelf life.
  2. Surveys create their own context. What people say they trust or mistrust in surveys may be very different from what they do in practical, everyday life contexts.
  3. How people respond to survey questions has been shown to be profoundly influenced by how these questions are worded, by social desirability bias, and by respondents’ anticipation of how the survey will be used.
  4. Most importantly, surveys are top-down and unidirectional. They do not create an opportunity for real dialogue and exchange. They are, therefore, hardly the right tool to convey to the public that a city cares about what residents think. Surveys are, in other words, a blunt tool for measuring trust and a poor substitute for creating it.

“The role of trusted intermediaries”

Finally, city planners should consider the question: who should be included? We have variously referred to the central role that “the community” and “community members” should be given. This is imprecise language. Moreover, it is feel-good language that implies the existence of social cohesion and consensus where there may be considerable heterogeneity and tension. In short, referring to “the community” will not get us very far. Developing an eye for the sheer breadth of who comprises any given community and, with it, for the diversity of existing (and often conflicting) viewpoints and interests is key. Equally important is to recognise that amid this social complexity, there are individuals who maintain ongoing trusting relations with diverse audiences, who can navigate the cleavages of diverging local interests, and who can speak the language of different groups. These are trusted intermediaries, and they can typically be found in every community.

Trusted intermediaries are potentially untapped sources of expertise about what residents want, how they are likely to react, and so on. A city planner might be puzzled that all the technology at a particular intersection is not drawing more people to use it; a trusted intermediary knows that a police car is regularly parked there and that people are avoiding it. It is useful to consider which local intermediaries can be tapped to understand a set of local issues and to facilitate the flow of essential information into the design process. When working with trusted intermediaries, city planners and experts should not only involve them as far upstream as possible but also recognise that their interests may not be identical to those higher up in the chain of command. If such intermediaries are to be trusted by their networks, they cannot be seen as merely the city’s mouthpieces or puppets. They must be given room to contest and to insist on the importance of bottom-up considerations. For trusted intermediaries to serve as effective messengers among their networks and become communicative channels between citizens and public institutions, city representatives will need to understand that the position of intermediaries is shaped and constrained by mixed incentives.

---

Author(s): Cristian Capotescu and Gil Eyal

Source: Smart Cities World, 18.04.2024
