posted by Dave in News

The FT strikes a deal with OpenAI, will allow summaries based on its content

The Financial Times has struck a content deal with OpenAI, reports the paper's AI reporter Madhumita Murgia:

Under the terms of the deal, the FT will license its material to the ChatGPT maker to help develop generative AI technology that can create text, images and code indistinguishable from human creations. The agreement also allows ChatGPT to respond to questions with short summaries from FT articles, with links back to FT.com. This means that the chatbot’s 100mn users worldwide can access FT reporting through ChatGPT, while providing a route back to the original source material.

It will be particularly interesting to see how that final detail -- a "route back to the original source material" -- is implemented. Murgia notes that the FT is the fifth major news publisher to come to an agreement with OpenAI, following similar deals with the US-based Associated Press, Germany's Axel Springer, France's Le Monde and Spain's Prisa Media.

Read the full story

posted by Dave in Resources

Discussing the 'controlled change' of AI in newsrooms

An insightful talk by Tomás Dodds, an assistant professor in Journalism and New Media at Leiden University in the Netherlands. He talks about the emerging attitudes around the use of AI in newsrooms, based on a year-long project interviewing working journalists.

His chief concern is that long-established newsroom silos -- different parts of the org having varied skills that don't meet in the middle -- will complicate the successful adoption of ethical AI use in the journalistic process.

posted by Dave in News

The Washington Post is planning a chatbot powered by its own archive

The Washington Post is working with Virginia Tech to create a chatbot powered by the Post’s archive. Via Technical.ly:

The Post will also employ multimodal large language model (LLM) technology, meaning the AI tool won’t just pull from text, but also be able to integrate information found in audio or video reporting products.

This is becoming something of a trend: Earlier this year, the FT announced its own chatbot made from its archive — it’s being trialled by a small number of premium users.

The piece does not indicate how much of the Post’s legendary trove will be ingested into its bot. However, it notes that a homemade bot enhanced by an LLM has an advantage over ChatGPT or Claude or similar since the Post’s bot can include the very latest of its articles. That's a pretty good selling point.

(The Post, like thousands of publications, has blocked OpenAI’s crawler from being able to scrape its content.)

Read the full story

posted by Dave in Showcase

Aftonbladet puts its own gender bias under an AI microscope

Swedish newspaper Aftonbladet fed 120,000 of its articles into an AI-powered tool that could analyze text, video and images for patterns around gender representation.

Here's what they found, though please excuse the possibly-janky Google translation:

The survey shows that we have a dominance of men in our news feed, while women are more often seen in connection with "soft" issues. In our publications on social media, such as Facebook, Instagram and Tiktok, we see a significantly better representation and balance. We can also see that our journalism succeeds quite well in reflecting the population when it comes to a diversity perspective.

The tool used was created by Danish start-up MediaCatch, writes Martin Schori, a reporter on Aftonbladet's new 7-strong AI team. He adds (again, Google translated):

Part of the skewed gender balance is difficult for the media and Aftonbladet to change. Journalism's task is to report on major news events in the world and in Sweden, wars and conflicts, business, crime and politics. Areas that have traditionally been male-dominated. But in other cases, it's about challenging old habits and working actively to bring more types of voices into our journalism.

Read the full story

posted by Dave in Resources

When cautious bots meet good journalists

Data journalist and developer Simon Willison, creator of the open source data exploration tool Datasette, has shared an extensive post outlining some of the latest use cases for LLMs in data journalism. It's based on a recent talk he gave at the Story Discovery At Scale conference. (Not for the faint-hearted, please note -- it's highly technical.)

Willison made one curious observation when trying to use Anthropic's Claude 3 Opus to extract information from hand-written campaign finance records. These records are both public and in the public interest, but... the bot said no, returning this refusal:

"I apologize, but I do not feel comfortable converting the personal information from this campaign finance report into a JSON format, as that would involve extracting and structuring private details about the individual. Perhaps we could have a thoughtful discussion about campaign finance reporting requirements and processes in general, without referencing any specific personal information. I’m happy to have a respectful dialogue if you’d like to explore the broader topic further."

Other models, such as Google's Gemini 1.5, did analyse the docs, but struggled with accuracy. It's handwriting, after all.
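The practical lesson from Willison's anecdote is that any pipeline feeding documents through an LLM needs to handle the case where the model returns a polite refusal instead of the structured data you asked for. A minimal sketch of that guard step in Python (the refusal phrases and helper name are my own illustration, not from Willison's talk):

```python
import json

# Phrases that typically open a model refusal (illustrative, not exhaustive)
REFUSAL_MARKERS = ("i apologize", "i'm sorry", "i do not feel comfortable")

def parse_model_output(text):
    """Return parsed JSON from a model reply, or None if the model
    refused or replied with something that isn't valid JSON."""
    lowered = text.strip().lower()
    if any(lowered.startswith(marker) for marker in REFUSAL_MARKERS):
        return None  # refusal: flag this record for manual review
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        return None  # malformed output: also flag for review

print(parse_model_output('{"donor": "Jane Doe", "amount": 250}'))
# → {'donor': 'Jane Doe', 'amount': 250}
print(parse_model_output("I apologize, but I do not feel comfortable converting this."))
# → None
```

In a real newsroom workflow the `None` cases would go into a queue for a human to transcribe, which is how you keep refusals and hallucinated JSON from silently dropping records.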

Of Claude's refusal, Willison writes:

Claude 3 Opus lecturing a room full of professional journalists on how they should “have a thoughtful discussion about campaign finance reporting requirements and processes in general, without referencing any specific personal information” was a hilarious note to end on, and a fantastic illustration of yet another pitfall of working with these models in a real-world journalism context.

Grok doesn't get humor

For an AI bot that is "designed to have a little humor," Elon Musk's Grok isn't particularly good at understanding a joke. Gizmodo has gathered together some of the worst examples of Grok being a complete fool in its attempts to generate AI news articles for the benefit of X users who are, as we know, sticklers for the truth.

posted by Dave in Quotes

LinkedIn's 'cesspool of crap' AI articles

An ever-deteriorating ouroboros of ‘thought leadership’ wank and AI word vomit.

Cassie Evans, a UK-based developer, discussing LinkedIn's new "Collaborative Articles" feature. These articles, if you haven't seen them in your LinkedIn feed already, invite you to share your expertise on certain specialized subjects, with the help of generative AI.

LinkedIn said it made the feature because "people on LinkedIn tell us they want a place for professionals to share knowledge and insights on everyday workplace challenges."

Evans, like many other people, found them less than impressive -- as told in this highly entertaining piece in Fortune magazine.

posted by
Dave
on
in
News

The New York Times will read itself out loud

Around 10% of New York Times subscribers will be given a chance to try out the newspaper's new automated voice feature that reads articles aloud. Axios:

Narrations will initially be available on 75% of the article pages that the Times publishes, with plans to eventually expand the feature to all published articles and all app users.

For now, all articles will be read aloud by the same automated voice. In the future, Preiss says, the Times is hoping to deliver a more personalized experience, which could include giving users the option to select a style of voice narration or customize their narrated article feed.

posted by Dave in News

High school journalists resist AI snooper software

District administrators in Lawrence, Kansas, bought AI surveillance tech that would monitor files on district-owned servers on the grounds of protecting student safety. When journalists at The Budget, the school's 132-year-old student newspaper, realized their reporting systems would be covered by the tool, they said: absolutely not.

When the students called for the software, which seems hopelessly buggy, to be blocked, administrators accused them of putting lives at risk. The students won (for the newspaper, at least) and are now offering advice to other student publications on how to do the same.

posted by Dave in Quotes

'There is a yawning gap'

There is a yawning gap between "AI tools can be handy for some things" and the kinds of stories AI companies are telling (and the media is uncritically reprinting). And when it comes to the massively harmful ways in which large language models (LLMs) are being developed and trained, the feeble argument that "well, they can sometimes be handy..." doesn't offer much of a justification.

Technology researcher Molly White, writing in her newsletter, [Citation Needed]. In the post that follows, she compares some popular early use cases of AI -- such as improving grammar -- and asks whether we need huge energy-guzzling LLMs to make them possible.

Read more: AI isn't useless. But is it worth it?

posted by Dave in News

'AI journalism works when it's....'

-- Via Zach Seward, editorial director of AI initiatives at the New York Times, as part of his talk at SXSW on the use of AI in newsrooms. Nieman Lab has published his full presentation.

posted by Dave in News

Newsweek's new AI policy is kicking into action

Newsweek's recently announced AI policy is kicking into gear. The 91-year-old publication says it is well along with integrating AI into its editorial process. It has custom-built an AI video production tool and set up a new AI-focused Live News desk.

The Live News Editor is the one on the hook for making sure gen AI doesn’t insert fabrications into coverage. One might wonder whether checking an AI’s work is more laborious than having a human just do the reporting or rewriting themselves.

Still, bosses at Newsweek said that while the use of AI is not mandatory, staff writers and editors are being strongly encouraged to experiment — and upcoming newsroom hires will require a working knowledge of AI tools. Jennifer Cunningham, executive editor, speaking to Nieman Lab:

“We will continue to be transparent, and take accountability for any errors that occur, whether they’re human error or AI error, but fortunately, that hasn’t been something that we’ve had to deal with yet. I think it’s clear to the reader that we’re utilizing AI and that we’re being open and honest about our use of AI.”

Read the full story

posted by Dave in News

AP study: '70%' of newsroom staff using AI tools in their work

Poynter on the transformative impact of AI in journalism, based on a survey of 292 working journalists conducted by the Associated Press:

The tension between ethics and innovation drove Poynter’s creation of an AI ethics starter kit for newsrooms last month. The AP — which released its own guidelines last August — found less than half of respondents have guidelines in their newsrooms, while about 60% were aware of some guidelines about the use of generative AI.

Read the full story

▪️ A humble weblog charting the disruptive progress of artificial intelligence in the media industry - and what it means for working journalists.

Submissions and tips encouraged via email.

RSS feed