Upriver Press Policy on AI and Publishing

Stated simply, we believe that humans should write, edit, and publish books. Upriver Press will not: (A) publish any content generated by artificial intelligence; (B) use AI for any element of the editorial process; or (C) license our books to AI companies for any reason. These commitments protect the intellectual property rights of our authors and their agents.

Our Reasons

There are many types of AI serving many different purposes, and some uses of AI can be helpful. Our reasons for not using AI pertain specifically to book publishing.

First, we (along with most publishers) oppose the business models that undergird generative AI companies. To train their large language models, most AI companies have stolen troves of copyrighted content—the hard work of journalists, authors, and publishers. When companies blatantly disregard laws that protect intellectual property, they undermine the foundations of a vibrant culture and democracy.

Second, under current US law, only humans can hold a copyright; machine-generated text receives no copyright protection. This is bad for both publishers and authors. Authors who want their names on a book cover must write the book themselves.

Third, we believe that a flood of AI-generated words, prone as they are to confabulation, will hold little value in the eyes of readers. By contrast, we think that thoughtful people will want masterful, well-researched books written by gifted writers and scholars, and published by educated, experienced professionals who bring a personal touch.

Fourth, many experts are rightly concerned about what might happen to our culture if everyone is consuming chatbot regurgitations emitted by a few powerful companies. We agree with neuroscientist Erik Hoel, who wrote the following:

We find ourselves in the midst of a vast developmental experiment. [The culture is] becoming so inundated with AI creations that when future AIs are trained, the previous AI output will leak into the training set, leading to a future of copies of copies of copies, as content becomes ever more stereotyped and predictable…. Once again we find ourselves enacting a tragedy of the commons: short-term economic self-interest encourages using cheap AI content … which in turn pollutes our culture and even weakens our grasp on reality (The New York Times, March 29, 2024).

Finally, generative AI, being mechanistic, lacks all ethical capacity. AI companies ask people to blindly trust the monolithic ethical “guardrails” of the small cadre of people who design the algorithms. Noam Chomsky, the renowned MIT professor of linguistics, makes this point, and we agree with him:

ChatGPT [and other generative AI programs] exhibits something like the banality of evil: plagiarism and apathy and obviation…. [The program] refuses to take a stand on anything, pleads not merely ignorance but lack of intelligence and ultimately offers a ‘just following orders’ defense… (The New York Times, March 8, 2023).

Moral and ethical decisions belong with authors and publishers who are sensitive to each book’s purpose, audience, and cultural context. There is no advantage to offloading this responsibility to a big-tech company.

Glenn McMahan

Professional book editor and writer serving authors, publishers, and organizations.

http://www.endeavorliterary.com