Smart Cities

A “smart city” thinks independently and makes intelligent and equitable decisions at lightning speed. Mannheim, too, is among the cities aspiring to become smarter by using data and innovative technology to improve residents’ quality of life. But the transition to a smart city that deploys AI technologies alongside other resources can also have negative repercussions. How can a city like Mannheim successfully become a smart city without systematically excluding some groups of people? An interdisciplinary team at the University of Mannheim is tackling just this question in the CAIUS project.

As soon as visitors arriving in Mannheim by train step out of the station onto Willy-Brandt-Platz, the plaza in front of it, they encounter two different smart city applications at once. The gleaming blue public bike-share cycles parked in a neat line on the left can be rented via an app. Overhead and all around, numerous intelligent video cameras analyze images automatically, using algorithms that can detect behavior patterns that may indicate a crime in progress (punching, kicking, or a fall, for instance) and report them to the police situation center in real time.

An impressive seventh place: the City of Mannheim has once again been ranked as one of Germany’s smartest cities this year. The 2023 Smart City Ranking conducted by Haselhorst Associates Consulting confirms that Mannheim’s work on becoming a smart city is bearing fruit: work, that is, on becoming the kind of city that uses technology and data to improve its efficiency, economic development, sustainability, and quality of life.

Tracking traffic infringements with smart cameras? A dynamic pricing strategy for parking charges? The social scientists Daria Szafran and Dr. Ruben Bach have come up with some ideas for how Mannheim’s digital future could unfold. They are part of an interdisciplinary team of researchers at the University of Mannheim who are currently working on the CAIUS project and investigating how the city can make successful progress towards becoming a smart city without excluding certain population groups. “Right now, the term ‘smart city’ may still be more of a buzzword than anything else. Cities are competing to outrank each other on the Smart City Index, a German ranking produced by Bitkom Research. The topic is new, and more research is needed,” Ruben Bach comments. Because innovation always affects people and their environment, the CAIUS project sets out to examine the progression of digital transformation and increasing automation in cities from a social sciences perspective, to ascertain what negative consequences can result from deploying AI applications, and to ask whether these applications genuinely serve the public good. The project name, “CAIUS,” stands for “Consequences of AI for urban societies.”

The scene: Block K7 on Luisenring in the heart of Mannheim’s city center. A few years ago, visitors who wanted to use the services provided by the City of Mannheim at this central location had to take a number from a ticket dispenser and wait, often for a protracted period, for their number to come up. Today, the process is much smarter: residents can book appointments online with just a few clicks and save themselves a great deal of time and hassle. What has now become entirely routine for many Mannheim residents is one of Dr. Ruben Bach’s favorite examples when he is asked to explain what exactly the CAIUS project is studying. Potential fault lines become visible right here, after all: when processes are digitalized and offered as purely online services, what happens to residents with no Internet access? Are these groups then excluded from using the services? Or are they likely to make less use of services that are available to all because accessing them demands greater effort? These and similar questions go to the heart of what the CAIUS project is about.

“Reality shows, unfortunately, that unpredicted, undesirable side effects are common, especially when AI, automation, and supposedly smart systems are involved,” Ruben Bach remarks. The CAIUS team aims to detect such negative effects before they arise. As well as drawing on traditional surveys, it also uses an innovative methodological approach that was co-developed by the computer science experts on the team: agent-based simulations. Ruben Bach explains how they work: “With the agent-based model, we can simulate the behavior of individual agents, residents of a city, for example, and see how they react to decisions made by an AI. We simulate an environment that is as realistic as possible, such as the city of Mannheim, and investigate how the agents behave when an AI-based smart city application is introduced.” Working in this way makes it possible to identify who benefits from the smart city, whether specific groups are systematically excluded, and whether the groups that gain the most were perhaps in a highly advantageous position to begin with.

The two social scientists speak with one voice on one fundamental point: collaboration across a range of disciplines, with both the social sciences and computer science represented, is an essential feature of the CAIUS project. Only by taking this approach does it become possible to identify areas where AI might be prone to generating erroneous or biased recommendations and to modify algorithms in such a way that unfair decisions can be avoided.
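To make the agent-based approach Bach describes a little more concrete, here is a deliberately simplified sketch in Python. It is not the CAIUS team’s actual model: the population shares and uptake probabilities are invented purely for illustration. The toy simulation asks what happens to service uptake among residents with and without Internet access when a service like the K7 appointment system becomes available online only.

```python
# Toy agent-based sketch (not the CAIUS model): what happens to service uptake
# when a city service switches from walk-in to online-only appointment booking?
# All probabilities and population shares below are invented for illustration.

import random

random.seed(42)

class Resident:
    def __init__(self, has_internet: bool):
        self.has_internet = has_internet

    def uses_service(self, online_only: bool) -> bool:
        """Whether this resident uses the service in one simulated period."""
        if not online_only:
            p = 0.60          # walk-in service: same baseline chance for everyone
        elif self.has_internet:
            p = 0.70          # online booking lowers effort for connected residents
        else:
            p = 0.25          # extra hurdles for residents without Internet access
        return random.random() < p

def uptake_by_group(residents, online_only):
    """Share of each group that used the service in one simulated period."""
    groups = {True: [], False: []}
    for r in residents:
        groups[r.has_internet].append(r.uses_service(online_only))
    return {("with internet" if k else "without internet"): sum(v) / len(v)
            for k, v in groups.items()}

# A stylized population: 85 % of residents with Internet access, 15 % without.
population = [Resident(random.random() < 0.85) for _ in range(10_000)]

print("Walk-in service:    ", uptake_by_group(population, online_only=False))
print("Online-only service:", uptake_by_group(population, online_only=True))
```

Even with these made-up parameters, the pattern the researchers are looking for shows up immediately: overall uptake rises slightly, while the group without Internet access quietly falls behind. That is exactly the kind of hidden distributional effect the simulations are meant to surface before a service is rolled out.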

Daria Szafran tells us about a prominent example from the Netherlands that demonstrated how deploying AI can increase social inequality. An AI system used to detect welfare fraud led to thousands of families, most of them with migration backgrounds, being flagged as having made fraudulent claims for social benefits. “AI systems are generally trained with large amounts of data before being used. The systems then reproduce patterns that occur in the training data. When the training data reflects undesirable patterns, such as discrimination against certain social groups, the AI incorporates these patterns into the logic it learns, too. The AI system used in the Netherlands identified characteristics such as dual citizenship and foreign-sounding names as possible indicators of welfare fraud and used them to automate the pattern of unequal treatment that already existed,” the sociologist explains. What makes this phenomenon especially disastrous is that no human decision-maker can match the speed and scope of the unfair decisions made by AI. That same speed is also what makes AI so powerful: urban planners hope that deploying AI will enable cities to make faster, better, and more equitable decisions than humans ever could.
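The mechanism Szafran describes, a model reproducing unequal treatment that is already baked into its training labels, can be illustrated with a small, purely synthetic Python example. This is emphatically not the Dutch system: the group labels, fraud rates, and flagging probabilities are all invented. The point is only to show that a classifier trained on skewed historical flags will assign higher risk to one group even when actual behavior is identical in both.

```python
# Toy illustration (not the Dutch system): a classifier trained on historical
# fraud flags that were skewed against one group reproduces that skew in its
# own predictions. All data below is synthetic and invented.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Synthetic population: group membership (e.g. a proxy such as dual citizenship)
# and actual fraud, which occurs at the same 2 % rate in both groups.
group = rng.integers(0, 2, n)               # 0 = majority, 1 = minority
actual_fraud = rng.random(n) < 0.02

# Historical flags: honest cases from the minority group were flagged far more
# often, so the training labels encode the old pattern of unequal treatment.
flagged = actual_fraud | ((group == 1) & (rng.random(n) < 0.10))

X = group.reshape(-1, 1).astype(float)
model = LogisticRegression().fit(X, flagged)

# The learned model assigns much higher "fraud risk" to the minority group,
# although actual fraud rates are identical.
risk_majority, risk_minority = model.predict_proba([[0.0], [1.0]])[:, 1]
print(f"predicted risk, majority group: {risk_majority:.3f}")
print(f"predicted risk, minority group: {risk_minority:.3f}")
```

With these synthetic numbers, the model typically predicts a risk of roughly 2 percent for the majority group and around 12 percent for the minority group, even though both groups commit fraud at exactly the same rate; the disparity comes entirely from the historical labels it was trained on.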

The two researchers emphasize that accounting for the human factor is a crucial component of such planning processes: cities are, above all, social places, sites where symptoms of social problems such as poverty, homelessness, and unequal access to resources become visible. Daria Szafran and Dr. Ruben Bach agree that the central question of whether AI applications genuinely serve the public good cannot be asked often enough. Daria Szafran sums up where things currently stand: “We are pleased that the City of Mannheim has expressed interest in our simulations and that a lively dialogue about our data and the possible use cases for our models has begun. Our research results can put urban planning on a sounder footing by demonstrating the limitations of AI applications. When we identify areas where using AI is genuinely worthwhile and creates tangible added value for the public, we can make the transition to a smart city a truly smart one.”

Text: Jule Leger/December 2023