A US federal judge has ruled in favor of Meta in a major AI copyright lawsuit, finding that the company did not break the law by training its AI models on 13 authors’ books. The judge, Vince Chhabria, determined there wasn’t enough evidence that Meta’s use of the books caused financial harm—a critical factor in copyright fair-use cases.
This is the second victory in a week by tech companies against authors, after a federal judge ruled in favor of Anthropic in a similar case about the use of copyrighted materials to train its own AI tools.
However, there is a BUT… Chhabria warned that his decision reflected the authors’ failure to properly make their case, and suggested a potentially winning argument: the damage generative AI models could do to the market for human-created work. “People can prompt generative AI models to produce these outputs using a tiny fraction of the time and creativity that would otherwise be required,” said Chhabria, warning that genAI can “dramatically undermine the incentive for human beings to create things the old-fashioned way”.
The ruling emphasizes that although training AI with copyrighted content can be legally transformative, it still depends heavily on whether the original market is harmed. While this decision is a clear win for Meta, the judge made a point to say it doesn’t set a broad precedent—other authors may still have valid claims if they can demonstrate economic impact.
Also in the news today, a recent study by MIT found that genAI is homogenizing our thoughts. I’ll just reproduce a passage The New Yorker shared on LinkedIn: “a recent MIT study found that subjects who used ChatGPT to write essays demonstrated much less brain activity than a group that used their own brains to write and a group that was given access to Google Search to look up relevant information. The analysis of the LLM users showed fewer widespread connections between different parts of their brains; less alpha connectivity, which is associated with creativity; and less theta connectivity, which is associated with working memory. Another striking finding was that the texts produced by the LLM users tended to converge on common words and ideas; the use of AI had a homogenizing effect. ‘The output was very, very similar for all of these different people, coming in on different days, talking about high-level personal, societal topics, and it was skewed in some specific directions,’ Nataliya Kosmyna, a research scientist at MIT Media Lab, said. AI is a technology of averages: large language models are trained to spot patterns across vast tracts of data; the answers they produce tend toward consensus. Other, older technologies have aided and perhaps enfeebled writers, of course. But with AI we’re so thoroughly able to outsource our thinking that it makes us more average, too.”
What do you think? Is it ethical to use copyrighted material to train LLMs? What might the consequences of massive genAI use be? Will genAI slowly kill human creativity?
See detailed news in the following links:
- Wired
- Financial Times
- The New Yorker about the MIT study
About Santiago Andrés Azcoitia