
AI generated bogus attractions and fictional cuisine for a Japanese tourism website

  • The new tourism website for Fukuoka prefecture was riddled with fictitious natural wonders and other falsehoods, thanks to AI
  • Experts say the situation highlights the perils of relying too much on generative AI and the importance of human oversight

UPDATED: 26 Nov 2024, 7:44 am

A new tourism website for Fukuoka, Japan, promised visitors non-existent attractions and culinary delights that no local has ever heard of – all thanks to AI-generated content.

The Japanese prefecture launched the Fukuoka Connection Support website on 1 November, according to multiple media reports, and within days, local residents’ complaints about misleading content had made it into the pages of the Mainichi, Japan’s oldest newspaper. The site touted natural wonders, including the Kinoura Coast, Kagoshima Bay and Fukutsu Great Nature Park – none of which are in the prefecture – alongside attractions like Kashii Kaen Sylvania Garden (which closed in 2021) and Uminaka Happiness World (which doesn’t exist). While seafood is popular in the coastal prefecture, the “Koga sashimi set” the site touted as a local delicacy doesn’t exist either.

The Fukuoka local government has withdrawn its support for the website and apologised, as has the company that created the content, Tokyo-based web developer First Innovation Co.


An official told the Mainichi that the government had not been informed that the website operator would be using generative AI for tourist information, adding: “Accuracy of the content and verifications are fundamental requirements.” Yet the site itself carried disclaimers that read: “This article is generated by AI based on information on the internet, and we do not guarantee its accuracy.” For its part, First Innovation said the articles underwent “human verification” before publication, though it admitted errors had been made and deleted the inaccurate pieces.

“I find it hard to believe that no one checked the text before it was published, either at the web company or at the city,” Morinosuke Kawaguchi, a technology analyst and consultant who was previously a lecturer at the Tokyo Institute of Technology, told the South China Morning Post. The whole episode underscores the danger of overreliance on AI because the “process of the flow of logic for the information is a complete black box, and we cannot trace how the AI reaches its conclusions,” he said.

This opaque process is prone to producing ‘hallucinations’ – defined by IBM as instances in which a large language model (LLM) perceives patterns or objects that are non-existent or imperceptible to human observers, generating outputs that are nonsensical or altogether inaccurate. Hallucinations often have more serious consequences than a few days of false advertising: they have led AI to advise illegal behaviour, encourage eating disorders, misidentify poisonous mushrooms as safe to eat and offer up recipes for deadly chlorine gas.
