A team of researchers has devised an experiment to understand how people perceive moral decisions made while driving. The primary objective of the study is to gather data that could be used to train autonomous vehicles to make ethically sound choices on the road.
Unlike previous studies, which focused mainly on the classic ethical conundrum known as the “trolley problem,” this approach examines the more ordinary moral challenges drivers actually encounter. As autonomous vehicles become more prevalent, ensuring they make morally responsible decisions is pivotal to earning the safety and trust of both passengers and pedestrians.
The conventional trolley problem presents a hypothetical scenario in which an individual must decide whether to intentionally sacrifice one life to save several, a situation far removed from the everyday moral decisions drivers face. The researchers argue that dilemmas such as whether to exceed the speed limit or run a red light are far more common, yet their consequences can be equally significant, potentially affecting lives.
To address the scarcity of data in this area, the research team designed a series of experiments to capture how humans make moral judgments in less dramatic traffic situations. They crafted seven distinct driving scenarios, each presenting a different moral challenge on the road. In one, for example, a parent must decide whether to run a red light to get their child to school on time.
These scenarios were presented in a virtual reality setting, allowing participants to see and hear the drivers’ actions as they unfolded. The researchers employed the Agent Deed Consequence (ADC) model, which evaluates moral judgments along three factors: the character or intent of the person acting (agent), the action performed (deed), and the resulting outcome (consequence).
To ensure comprehensive data collection, the researchers created eight variations of each scenario by manipulating the combination of agent, deed, and consequence. Participants then rated the morality of the driver’s behavior on a scale from one to ten.
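Eight variations per scenario is what you would expect from a fully crossed design in which each of the three ADC factors takes one of two levels (2 × 2 × 2 = 8). The article does not specify the exact labels the researchers used, so the factor values below are hypothetical placeholders; this is only a minimal sketch of how such a design could be enumerated:

```python
from itertools import product

# Hypothetical factor levels -- the study's actual wording is not given here.
FACTORS = {
    "agent": ["well-intentioned", "ill-intentioned"],
    "deed": ["obeys traffic law", "violates traffic law"],
    "consequence": ["good outcome", "bad outcome"],
}

def scenario_variants(factors):
    """Enumerate every combination of the ADC factors for one scenario."""
    names = list(factors)
    return [dict(zip(names, combo)) for combo in product(*factors.values())]

variants = scenario_variants(FACTORS)
print(len(variants))  # 2 x 2 x 2 = 8 variations per scenario
```

Crossing the factors this way lets the researchers separate, for instance, how much a bad outcome lowers a morality rating independently of whether the driver's intent was good.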
Ultimately, the goal of the study is to amass data that can inform artificial intelligence algorithms enabling autonomous vehicles to make ethical decisions in real-world driving. By understanding how humans judge moral behavior behind the wheel, the researchers hope to help train self-driving cars to make morally sound choices on the road.