
Local governments are increasingly implementing technologies that automate city services, raising potential ethical dilemmas in the operation of these “smart city” tools. Researchers from North Carolina State University have proposed a framework that aligns the values programmed into these technologies with the ethical standards of the communities they serve, aiming to reduce friction between what residents expect and how these automated systems actually behave.
Veljko Dubljević, the corresponding author of the research paper and a professor of philosophy at NC State, emphasized the importance of this work. “Our research lays out a blueprint for establishing the values that AI-driven technologies should embody and programming these values into relevant systems,” he stated. The study addresses various automated technologies, including those that dispatch law enforcement when they detect potential gunfire and systems that monitor pedestrian and vehicle traffic to manage streetlights and traffic signals.
The ethical implications are significant. Dubljević raised a critical question regarding the reliability of AI in emergency situations: “If AI technology mistakenly identifies a noise as a gunshot and sends a SWAT team to investigate, is that a reasonable action?” Furthermore, he inquired about the criteria for surveillance, asking, “Who decides how much tracking is acceptable, and which behaviors warrant increased scrutiny?” Presently, there is no standardized process for addressing these ethical questions, nor is there a clear method for training AI to respond appropriately.
To tackle these challenges, the researchers utilized the Agent-Deed-Consequence (ADC) model. This model posits that individuals weigh three factors when making moral judgments: the agent’s intent, the deed being performed, and the consequences of that deed. In their paper, the researchers demonstrated that the ADC model can be effectively programmed into AI systems, allowing them to reflect human ethical reasoning.
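To make the structure of the model concrete, the following is a minimal Python sketch of an ADC-style judgment. The paper does not publish an implementation, so the factor scales, the equal-weight averaging, and every name below are illustrative assumptions rather than details from the study.

```python
# Illustrative sketch only: the ADC model says moral judgment combines
# three factors (agent, deed, consequence), but does not prescribe
# scales, weights, or an aggregation rule. Those are assumed here.
from dataclasses import dataclass


@dataclass
class MoralSituation:
    agent_intent: float  # -1.0 (malicious)  .. +1.0 (benevolent)
    deed: float          # -1.0 (prohibited) .. +1.0 (obligatory)
    consequence: float   # -1.0 (harmful)    .. +1.0 (beneficial)


def adc_judgment(s: MoralSituation, threshold: float = 0.0) -> str:
    """Combine the three ADC factors into a single moral judgment.

    A simple unweighted average stands in for whatever aggregation a
    deployed system would actually use.
    """
    score = (s.agent_intent + s.deed + s.consequence) / 3.0
    return "acceptable" if score >= threshold else "unacceptable"


# Example: a driver spoofing an emergency signal (bad intent, prohibited
# deed) is judged unacceptable even though the consequence for that one
# driver is mildly positive.
spoofer = MoralSituation(agent_intent=-0.9, deed=-0.8, consequence=0.2)
print(adc_judgment(spoofer))  # -> "unacceptable"
```

One design point worth noting: because the factors are aggregated rather than checked in isolation, a positive outcome alone cannot launder a malicious intent or a prohibited deed, which mirrors how the ADC model combines all three judgments.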
According to Daniel Shussett, the first author of the paper and a postdoctoral researcher at NC State, “The ADC model employs deontic logic, which encompasses not only what is factual but also what actions should be taken.” Deontic logic reasons about obligations, permissions, and prohibitions, which enables AI systems to differentiate between legitimate and illegitimate requests. For instance, if an ambulance with flashing lights approaches a traffic signal, the AI can prioritize its passage. Conversely, if a civilian vehicle attempts to mimic emergency signals to bypass traffic, the AI should recognize this as an illegitimate request.
Dubljević explained the implications of the ADC model for managing traffic scenarios. “When an emergency vehicle approaches, the AI should adjust traffic signals accordingly to facilitate rapid passage,” he said. “However, it is essential that the system can differentiate this from an unauthorized vehicle’s attempt to exploit the traffic system.”
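As a rough illustration of how such a legitimacy check might be encoded, here is a short Python sketch. The paper does not describe a concrete verification mechanism, so the registry and dispatch fields below are hypothetical.

```python
# Hypothetical sketch of the legitimacy check described above. The
# verification fields are assumptions about how such a system might
# work, not details from the paper.
from dataclasses import dataclass


@dataclass
class SignalRequest:
    has_flashing_lights: bool
    registered_emergency_vehicle: bool  # verified against a vehicle registry
    active_dispatch: bool               # currently assigned to an emergency


def preemption_permitted(req: SignalRequest) -> bool:
    """Deontic-style rule: signal preemption is *permitted* only for a
    legitimate request, i.e. a registered emergency vehicle on an active
    dispatch. Flashing lights alone never suffice, which is what lets
    the system reject a civilian spoofing the emergency signal."""
    return req.registered_emergency_vehicle and req.active_dispatch


ambulance = SignalRequest(True, True, True)
spoofer = SignalRequest(True, False, False)
assert preemption_permitted(ambulance) is True
assert preemption_permitted(spoofer) is False
```

The key design choice, echoing the quote above, is that a visible signal never grants permission by itself; preemption requires a verifiable claim, which is what separates the ambulance from the spoofing driver.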
The research seeks to address the ethical complexities associated with the rapid adoption of smart city technologies worldwide. Shussett noted, “Our findings suggest that the ADC model can provide a comprehensive approach to the ethical dilemmas posed by these technologies.” The next phase involves testing various scenarios across multiple technologies in simulations to ensure the model operates consistently and predictably. If successful, it will be ready for real-world application.
The paper titled “Applying the Agent-Deed-Consequence (ADC) Model to Smart City Ethics” is published open access in the journal Algorithms. This research was funded by the National Science Foundation under grant number 2043612.