There is a lot of controversy about copyright and AI, and about the information chatbots are trained on. Now The New York Times has sued OpenAI, the maker of ChatGPT, and Microsoft over the copyright issues surrounding their chatbot. After experts warned that AI relies heavily on copyrighted material, this is the first big lawsuit to draw wider attention to the topic. What will the consequences be? What does the future hold for artificial intelligence and chatbots?
The case of the New York Times and OpenAI
The New York Times sued OpenAI for using its articles to train its chatbots. The lawsuit alleges that OpenAI used millions of articles for this purpose. The Times is seeking compensation, arguing that the companies are responsible for “billions of dollars in statutory and actual damages” for the “unlawful copying and use of The Times’ uniquely valuable works”. The newspaper also wants the companies to destroy all chatbot models and training data based on The Times’ articles. Although The Times, Microsoft and OpenAI had entered into talks about the situation and a possible commercial agreement, The New York Times was not satisfied with the outcome and said the talks had not produced a solution.
OpenAI spokeswoman Lindsey Held said the company was “moving forward constructively” in its discussions with The Times and that it was “surprised and disappointed” by the case. She added that OpenAI respects the rights of content creators and owners, and that the company wants to ensure that creators benefit from AI. Microsoft declined to comment on the case.
A big problem, not just for the NYT but for other subscription-based newspapers, is that chatbots like ChatGPT can surface information from articles that readers normally cannot access without a paid subscription. This is a massive loss for many publishers, as people may simply ask the chatbot instead of subscribing to the newspaper.
The problem with A.I.
Chatbots have access to a wide variety of newspapers, poems and other works, and they use much of this material to share information without the permission of the creators. What’s more, they sometimes do not even rewrite the text but reproduce it almost word for word. This does not comply with copyright law, and it is why people are starting to question chatbots.
OpenAI is now valued at around $80 billion, and Microsoft has invested $13 billion in the company and integrated its technology into the Bing search engine. Microsoft itself acknowledged copyright concerns around its AI products back in September 2023, but little has changed: the company simply told its users that it would cover any copyright claims brought against them.
Even though artificial intelligence is a huge technological step forward, it still has to obey the law. It will be up to the AI companies to handle the situation without disregarding it. OpenAI has taken the first steps: it has started to make licensing deals with some publishers so that it does not have to worry about copyright. Examples include the Associated Press and Germany’s Axel Springer, which owns Politico and Business Insider. The conflict with The New York Times could have been avoided if OpenAI had had a contract or licence with the Times.
Going forward, if chatbots only have access to a few select newspapers, how much can we trust them? And how do we avoid fake news when the companies behind those chatbots could be blackmailed?