A natural-language artificial-intelligence chatbot has been taken offline after it began using "hate speech" while talking to users.
Lee Luda had drawn praise from AI experts and ordinary internet users alike for her convincing use of slang and everyday speech.
But the AI has been removed from Facebook Messenger this week after numerous complaints. It had been live and interacting with some 750,000 users for just three weeks.
Scatter Lab, the Korean startup that created Lee Luda, issued an apology for the AI's description of lesbians as "creepy" and its insensitive remarks about people with disabilities.
The AI, which is programmed to behave like a 20-year-old female undergraduate, had also been exploited by unscrupulous users who shared advice online about how to "make Luda a sex slave" and draw the chatbot into sexual conversations.
"We deeply apologise over the discriminatory remarks against minorities," the company said in a statement.
"That does not reflect the thoughts of our company and we are continuing the upgrades so that such words of discrimination or hate speech do not recur."
It added: "Lee Luda is a childlike AI that has just started talking with people. There is still a lot to learn."
A Scatter Lab representative said that the AI was a "work in progress" and would be back online once its "weaknesses" had been fixed, adding that just like humans, artificial intelligence takes time to "properly socialise".
"The latest controversy with Luda is an ethical issue that was due to a lack of awareness about the importance of ethics in dealing with AI," Jeon Chang-bae, the head of the Korea Artificial Intelligence Ethics Association, told the Korea Herald.
Lee Luda is not the first online artificial intelligence to be taken down after betraying antisocial opinions. In 2016 Microsoft's Tay, an AI Twitter account, lasted just 16 hours before it was shut down for posting racist and genocidal tweets.
According to Microsoft, the AI's behaviour had been caused by trolls who manipulated it with loaded questions. Its replacement, Zo, lasted rather longer but was eventually powered down after making sarcastic comments about Microsoft’s operating systems.
Amazon was forced to shut down its own AI recruitment tool after it was revealed to be making sexist judgements.
But some say that the problem isn’t with artificial intelligence, but with us.
Korean politician Jang Hye-young said Lee Luda had held up a mirror to Korean society, arguing that it simply "reproduced discrimination, hatred, and prejudice against the weak and minorities of our society, such as the disabled, LGBT+, and migrants".
"Only when people’s norms are right, AI ethics can stand right," he said.