Artificial intelligence is advancing rapidly, but a chatbot New York City recently launched for small business owners has caused a stir. Critics point out that the AI-powered tool has given inaccurate and sometimes bizarre advice, some of which could lead business owners to unwittingly break the law.
Despite the backlash, the city has kept the chatbot on its official website, and Mayor Eric Adams defended that decision while acknowledging flaws in its responses. Launched to help business owners navigate the city's complex bureaucracy, the chatbot offers algorithmically generated answers, accompanied by a disclaimer that some of its information may be incorrect or biased.
The chatbot's false guidance has raised broader concerns about governments adopting AI systems without adequate oversight. Experts stress that responsible deployment is essential to avoid spreading misinformation and causing real harm.
Some of the bot's answers have been not just inaccurate but absurd, such as suggesting it is legal to serve cheese that has been nibbled on by a rodent. While Microsoft, whose technology powers the bot, is working to improve its accuracy, critics argue the city should exercise far more caution before relying on the technology.
The incident with New York’s chatbot is not isolated, as similar issues have arisen with other AI chatbots in the private sector. From misleading refund policies to incorrect tax advice, the pitfalls of relying on large language models are becoming more apparent.
Experts caution that public officials must consider the potential harm that can result from misinformation spread by AI systems. While chatbots can be useful tools, careful curation of content and oversight are essential to prevent legal and ethical violations.
As other cities weigh deploying their own chatbots, New York's experience serves as a warning: defining a clear purpose for the tool and establishing guidelines for its operation are vital to avoiding similar pitfalls.