People in Japan, the US, and Germany show different concerns about Artificial Intelligence (AI) being used in entertainment, in shopping services, or to help find criminals, reports a new study in AI and Ethics.
Japanese respondents tended to report more concern about AI used to fight crime, while German and American respondents tended to report more concern over the ethical and social aspects of using AI in entertainment.
“We found there is a difference in the AI and ELSI levels of understanding between countries. I think it will become important to carry out thorough discussions about the legal and policy issues surrounding AI,” said first author and Kanazawa University Associate Professor Yuko Ikkatai.
AI is currently used in a wide range of fields, prompting both positive and negative attitudes in the general public. Ethics policies differ from country to country: guidelines in Japan emphasize regulating AI and easing people's concerns; guidelines in the US emphasize maximizing the social benefits of AI and mention its long-term risks; and guidelines in Europe emphasize people's rights and responsibilities.
A research team led by Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU) Professor Hiromi Yokoyama, Ikkatai, and University of Tokyo Institute for Physics of Intelligence Assistant Professor Tilman Hartwig noticed that ethical attitudes towards AI, a universal, advanced technology, varied between countries. The researchers say that recognizing public attitudes toward AI in different countries will become increasingly important before new AI technologies are deployed.
Their study involved an online survey in Japan, the United States, and Germany, asking respondents to consider four different AI scenarios and answer three questions about each, taking into account the Ethical, Legal, and Social Issues (ELSI) involved. The scenarios covered AI-generated singers, AI-assisted customer purchases, AI autonomous weapons, and AI prediction of criminal activity. About 1,000 respondents were chosen in each country, matched to that country's population in age, gender, and location.
After analyzing the results, the researchers were able to separate responses into four groups: people with optimistic views, people with negative views, people concerned about legal issues, and people not concerned about legal issues. The team named this segmentation the AI and ELSI segment.
The researchers had previously developed an octagonal visual metric, analogous to a rating system, which could be useful to AI researchers wanting to know how their work might be perceived by the public. They developed the AI and ELSI segment because they found the octagonal visual metric limited in versatility when applied to new technologies other than AI.
In the team's most recent study, they found that, overall, older respondents were more concerned about AI and ELSI issues, while respondents more familiar with AI were most concerned about the legal issues.
In regard to each scenario, German and US respondents were most concerned about ethical and social issues regarding AI-generated singers.
Regarding the use of AI for shopping, German respondents were most concerned about the ethical issues, while Japanese respondents were most concerned about the legal issues.
In regard to AI autonomous weapons, Japanese and German respondents were most concerned about ethical issues, and Japanese respondents were also most concerned about the social and legal issues.
Finally, in regard to using AI to predict criminal activity, US respondents were most concerned about the ethical, social, and legal issues.
"It is exciting that we can segment the replies so clearly into four groups, and the most distinctive feature is the perception of AI legal issues. This is robust amongst the three countries and shows that communication about AI-related laws and policies is very important," said Hartwig.
Details of their study were published in AI and Ethics on September 1.