LINK: www.capitalnewyork.com | Posted by: Justin Ellis | January 30, 2014, 12:58 p.m.

Last summer, The New York Times brought its mobile apps more in line with the rest of the company’s digital offerings by creating a meter that limited the number of free stories to three a day.

Today, the Times tweaked its meter once again: The company announced that users of its mobile apps will now get 10 free stories a month, according to an email from Times spokesperson Linda Zebian. Once free readers hit the meter, they’ll be prompted to sign up for a subscription. Browsing section fronts and article summaries inside the apps will still be free, as will all videos from the Times. In other words, the mobile app paywall will look a lot more like the NYTimes.com paywall.

Since introducing Paywall 1.0 in 2011, the Times has continually refined its subscription system to try to convert more readers into paying customers. Originally, the Times’ mobile apps offered a pre-selected group of top stories for free; the website also used to allow up to 20 free articles a month. The change to the mobile apps comes as the Times prepares to offer a new collection of news products and digital subscription offerings in the next few months.
