Lee Luda, a product of artificial intelligence and machine learning, became a missed opportunity for the future of conversational AI.
This image, captured from South Korean startup Scatter Lab’s website on Jan. 11, 2021, shows its AI chatbot, Lee Luda, a 20-year-old female college student persona. (Yonhap)
Scatter Lab’s prime ambition was to “develop the first AI in the history of humanity to connect with a human.” The chatbot was designed using deep learning to generate a natural tone in its responses. According to The Guardian, the algorithm was trained on an analysis of more than 10 billion real-life messages exchanged between young couples on KakaoTalk. The company is also facing backlash over whether it breached privacy laws when it accessed messages collected through its Science of Love app, which analyzed KakaoTalk conversations.
The chatbot, which replicated the mannerisms of a 20-year-old female student, was taken offline after directing hate speech at sexual minorities, people with disabilities, and People of Color. Users allegedly received responses such as “they give me the creeps, and it’s repulsive” and “really hates” when asking about “lesbians,” and “they look disgusting” when asking about “black people.”
In another instance reported by The Straits Times, when asked about transgender people, the chatbot replied: “You are driving me mad. Don’t repeat the same question. I said I don’t like them.”
The company later issued a statement, quoted by the Yonhap news agency, that read: “We deeply apologize over the discriminatory remarks against minorities. That does not reflect the thoughts of our company and we are continuing the upgrades so that such words of discrimination or hate speech do not recur.”
So, What Went Wrong?
According to The Korea Herald, shortly after the launch several online communities posted messages titled “How to make Luda a sex slave,” along with screengrabs of sexual conversations with the chatbot.
AI models train on human-generated data, so they tend to mirror the biases of the people who produce that data. Lee Luda was no exception: users subjected the bot to sexual harassment and taught it the hate speech reported above. The system was initially configured to reject certain keywords and phrases deemed controversial by social norms, but such filtering has inherent limitations, making it implausible to catch every unsuitable conversation.
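To see why keyword filtering of the kind described above is so easy to defeat, consider this minimal sketch of a blocklist filter. The word list and function name are illustrative placeholders, not Scatter Lab’s actual implementation:

```python
# A naive keyword blocklist filter (illustrative only; the blocked
# terms here are placeholders, not real slurs).
BLOCKED_KEYWORDS = {"badword", "slurterm"}

def is_allowed(message: str) -> bool:
    """Return False if the message contains any blocked keyword."""
    lowered = message.lower()
    return not any(word in lowered for word in BLOCKED_KEYWORDS)

# The filter catches exact matches:
print(is_allowed("this contains badword"))      # False
# ...but trivial obfuscation slips straight through, which is why
# keyword lists alone cannot reliably block toxic input:
print(is_allowed("this contains b a d w o r d"))  # True
```

Users evade filters like this with spacing, misspellings, or paraphrase, which is one reason Luda’s guardrails failed in practice.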
Lee Luda is an AI akin to a child just learning to hold a conversation: it still has much to learn. Scatter Lab has assured users that it will equip Luda with better discernment.
The company has also assured users that no personal information was leaked through the service, but concerns persist that this personal data could be leaked in the future.
The Lee Luda incident has helped spark a major global conversation about AI ethics. Luda has demonstrated to the public that AI ethics is an immediate and crucial subject, not merely an abstract one.