Posts in stories

AI Companies Need to Be Regulated: An Open Letter to the U.S. Congress and European Parliament

Federico: Historically, technology has advanced hand in hand with new creative opportunities for people. From word processors allowing writers to craft their next novel to digital cameras letting photographers express themselves in new ways or capture more moments, technological progress over the past few decades has sustained creators and, perhaps more importantly, spawned industries that couldn’t exist before.

Technology has enabled millions of people like myself to realize their life’s dreams and make a living out of “creating content” in a digital age.

This is all changing with the advent of Artificial Intelligence products based on large language models. If left unregulated, we believe the change may be for the worse.

Over the past two years, we’ve witnessed the arrival of AI tools and services that often use human input without consent in pursuit of faster and cheaper results. The fixation on maximizing profits above all else isn’t a surprise in a capitalist industry, but it’s highly concerning nonetheless – especially since, this time around, the majority of these AI tools have been built on a foundation of non-consensual appropriation, also known as – quite simply – digital theft.

As we’ve documented on MacStories and as other (and larger) publications have also investigated, it’s become clear that the foundation models behind different LLMs have been trained on content sourced from the open web without requesting publishers’ permission upfront. These models can then power AI interfaces that can regurgitate similar content or provide answers with hidden citations that seldom prioritize driving traffic to publishers. As far as MacStories is concerned, this is limited to text scraped from our website, but we’re seeing this play out in other industries too, from design assets to photos, music, and more. And to top it all off, publishers and creators whose content was appropriated for training or crawled for generative responses (or both) can’t even ask AI companies to be transparent about which parts of their content were used. It’s a black box where original content goes in and derivative slop comes out.

We think this is all wrong.

The practices followed by the majority of AI companies are ethically unfair to publishers and brazenly walk a perilous line of copyright infringement that must be regulated. Most worryingly, if ignored, we fear that these tools may lead to a gradual erosion of the open web as we know it, diminishing individuals’ creativity and consolidating “knowledge” in the hands of a few tech companies that built their AI services on the back of web publishers and creators without their explicit consent.

In other words, we’re concerned that, this time, technology won’t open up new opportunities for creative people on the web. We fear that it’ll destroy them.

We want to do something about this. And we’re starting with an open letter, embedded below, that we’re sending on behalf of MacStories, Inc. to U.S. Senators who have sponsored AI legislation as well as Italian members of the E.U. Special Committee on Artificial Intelligence in a Digital Age.

In the letter, which we encourage other publishers to copy if they so choose, we outline our stance on AI companies taking advantage of the open web for training purposes, not compensating publishers for the content they appropriated and used, and not being transparent regarding the composition of their models’ data sets. We’re sending this letter in English today, with an Italian translation to follow in the near future.

I know that MacStories is merely a drop in the bucket of the open web. We can’t afford to sue anybody. But I’d rather hold my opinion strongly and defend my intellectual property than sit silently and accept something that I believe is fundamentally unfair for creators and dangerous for the open web. And I’m grateful to have a business partner who shares these ideals and principles with me.

With that being said, here’s a copy of the letter we’re sending to U.S. and E.U. representatives.

Read more


Wired Confirms Perplexity Is Bypassing Efforts by Websites to Block Its Web Crawler

Last week, Federico and I asked Robb Knight to do what he could to block web crawlers deployed by artificial intelligence companies from scraping MacStories. Robb had already updated his own site’s robots.txt file months ago, so that’s the first thing he did for MacStories.

However, robots.txt only works if a company’s web crawler is set up to respect the file. As I wrote earlier this week, a better solution is to block them on your server, which Robb did on his personal site and wrote about late last week. The setup sends a 403 error if one of the bots listed in his server code requests information from his site.
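Robb’s post has the full details of his setup; as an illustration only, a server-side block of this kind can be sketched in an nginx configuration like the following (the bot names and structure are assumptions for the example, not Robb’s actual configuration):

```nginx
# Illustrative sketch: return 403 to requests whose User-Agent
# matches a list of AI crawlers. The bot list here is an example,
# not Robb's actual server code.
map $http_user_agent $ai_bot {
    default          0;
    ~*GPTBot         1;
    ~*CCBot          1;
    ~*PerplexityBot  1;
}

server {
    # ... existing server configuration ...

    if ($ai_bot) {
        return 403;
    }
}
```

Unlike robots.txt, which a crawler can simply ignore, this approach refuses the request at the server before any content is sent – as long as the bot identifies itself with the user agent it claims to use.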

Spoiler: Robb hit the nail on the head the first time.

After reading Robb’s post, Federico and I asked him to do the same for MacStories, which he did last Saturday. Once the block was in place, Federico began testing it. OpenAI returned an error as expected, but Perplexity’s bot was still able to reach MacStories, which shouldn’t have been the case.1

Yes, I took a screenshot of Perplexity’s API documentation because I bet it changes based on what we discovered.

That began a deep dive to try to figure out what was going on. Robb’s code checked out, blocking the user agent specified in Perplexity’s own API documentation. What we discovered after more testing was that Perplexity was hitting MacStories’ server without using the user agent it said it used, effectively doing an end run around Robb’s server code.

Robb wrote up his findings on his website, which promptly shot to the top slot on Hacker News and caught the eye of Dhruv Mehrotra and Tim Marchman of Wired, who were in the midst of investigating how Perplexity works. As Mehrotra and Marchman describe it:

A WIRED analysis and one carried out by developer Robb Knight suggest that Perplexity is able to achieve this partly through apparently ignoring a widely accepted web standard known as the Robots Exclusion Protocol to surreptitiously scrape areas of websites that operators do not want accessed by bots, despite claiming that it won’t. WIRED observed a machine tied to Perplexity—more specifically, one on an Amazon server and almost certainly operated by Perplexity—doing this on wired.com and across other Condé Nast publications.

Until earlier this week, Perplexity published in its documentation a link to a list of the IP addresses its crawlers use—an apparent effort to be transparent. However, in some cases, as both WIRED and Knight were able to demonstrate, it appears to be accessing and scraping websites from which coders have attempted to block its crawler, called Perplexity Bot, using at least one unpublicized IP address. The company has since removed references to its public IP pool from its documentation.

That secret IP address—44.221.181.252—has hit properties at Condé Nast, the media company that owns WIRED, at least 822 times in the last three months. One senior engineer at Condé Nast, who asked not to be named because he wants to “stay out of it,” calls this a “massive undercount” because the company only retains a fraction of its network logs.

WIRED verified that the IP address in question is almost certainly linked to Perplexity by creating a new website and monitoring its server logs. Immediately after a WIRED reporter prompted the Perplexity chatbot to summarize the website’s content, the server logged that the IP address visited the site. This same IP address was first observed by Knight during a similar test.

This sort of unethical behavior is why we took the steps we did to block the use of MacStories’ websites as training data for Perplexity and other companies.2 Incidents like this and the lack of transparency about how AI companies train their models have led to a lot of mistrust in the entire industry among creators who publish on the web. I’m glad we’ve been able to play a small part in revealing Perplexity’s egregious behavior, but more needs to be done to rein in this sort of behavior, including closer scrutiny by regulators around the world.

As a footnote to this, it’s worth noting that Wired also puts to rest the argument that websites should be okay with Perplexity’s behavior because it includes citations in its plagiarism. According to Wired’s story:

WIRED’s own records show that Perplexity sent 1,265 referrals to wired.com in May, an insignificant amount in the context of the site’s overall traffic. The article to which the most traffic was referred got 17 views.

That’s next to nothing for a site with Wired’s traffic, which Similarweb and other sites peg at over 20 million page views that same month. That’s a mere 0.006% of Wired’s May traffic. Let that sink in, and then ask yourself whether it seems like a fair trade.
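The arithmetic behind that percentage, using the figures quoted above (Wired’s 1,265 May referrals and Similarweb’s estimate of over 20 million page views), works out like this:

```python
# Wired's referral numbers, as quoted in the story above.
referrals = 1_265           # visits Perplexity referred to wired.com in May
monthly_views = 20_000_000  # Similarweb's estimate of Wired's May page views

share = referrals / monthly_views * 100
print(f"{share:.3f}%")  # prints 0.006%
```

Even taking the 20 million figure as a conservative floor, Perplexity’s referrals round to six thousandths of one percent of the site’s traffic.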


  1. Meanwhile, I was digging through bins of old videogames and hardware at a Retro Gaming Festival doing ‘research’ for NPC. ↩︎
  2. Mehrotra and Marchman correctly question whether Perplexity is even an AI company because it piggybacks on other companies’ LLMs and uses them in conjunction with scraped web data to provide summaries that effectively replace the source’s content. However, that doesn’t change the fact that Perplexity is surreptitiously scraping sites while simultaneously professing to respect sites’ robots.txt files. That’s the unethical bit. ↩︎

How We’re Trying to Protect MacStories from AI Bots and Web Crawlers – And How You Can, Too

Over the past several days, we’ve made some changes at MacStories to address the ingestion of our work by web crawlers operated by artificial intelligence companies. We’ve learned a lot, so we thought we’d share what we’ve done in case anyone else would like to do something similar.

If you read MacStories regularly, or listen to our podcasts, you already know that Federico and I think that crawling the Open Web to train large language models is unethical. Industry-wide, AI companies have scraped the content of websites like ours, using it as the raw material for their chatbots and other commercial products without the consent or compensation of publishers and other creators.

Now that the horse is out of the barn, some of those companies are respecting publishers’ robots.txt files, while others seemingly aren’t. That doesn’t make up for the tens of thousands of articles and images that have already been scraped from MacStories. Nor is robots.txt a complete solution, so it’s just one of four approaches we’re taking to protect our work.
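For reference, the robots.txt piece of this approach looks like the following (the user agent names are examples of commonly blocked AI crawlers, not necessarily the contents of MacStories’ actual file):

```
# Example robots.txt rules asking AI crawlers not to index anything.
# Crawler names are illustrative; a real file would list each bot
# you want to opt out of.
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: PerplexityBot
Disallow: /
```

The catch, as noted above, is that robots.txt is purely advisory: it only works if the crawler chooses to honor it.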

Read more


iOS and iPadOS 18: The MacStories Overview

Image: Apple.

At its WWDC 2024 keynote held earlier today online and with an in-person event at Apple Park in Cupertino, California, Apple officially announced the next versions of the operating systems for iPhone and iPad – iOS and iPadOS 18.

As widely speculated in the lead-up to the event, Apple’s focus for both OSes largely revolves around artificial intelligence – or, as the company now prefers to call it, “Apple Intelligence”. The new AI features in iOS and iPadOS promise to make both operating systems, well, more intelligent than before thanks to a completely revamped Siri and proactive functionalities that learn from users’ habits and apps. Presented as a fast, private, and personal set of features that draws from the user’s context and combines it with generative models, Apple Intelligence – which will debut in U.S. English only later this year, with a beta expected this summer – will power a variety of new system features and experiences, from a revamped Siri and text analysis to image creation, performing actions inside apps, and more.

But AI-related improvements aren’t the only new features Apple announced today. From a renewed focus on Home Screen customization and a redesigned Control Center to a new design for tab bars on iPad and expanded Tapbacks in Messages, Apple has shown that, while they can follow the rest of the tech industry in rethinking how AI can enhance how we use our devices, they can continue shipping other functionalities for iPhone and iPad, too. Or, at the very least, they certainly can for the iPhone and iOS.

We’ll have in-depth overviews for both iOS and iPadOS 18 when the public betas for each OS come out next month, and, of course, we’ll continue diving into the announcements later this week on MacStories via our WWDC 2024 hub as well as AppStories. We’ll also have a dedicated story about Apple Intelligence coming later on MacStories with the highlights of all the AI-infused features announced by Apple today.

In the meantime, here’s a recap of everything else that Apple showed today for iOS and iPadOS 18.

Read more


What’s in My WWDC 2024 Bag?

My [Tom Bihn Synapse 25](https://www.tombihn.com/collections/backpacks/products/synapse-25?variant=42796481904829) backpack.

It’s time to get packed for WWDC, and like most years, my carry-on bag will be stuffed with everything I need to cover the event for MacStories. This year, I’ve focused on streamlining my podcast recording setup after some problems that slowed me down last year. Building on the everyday ‘away from home’ setup I’ve been using for several months, I think I finally have a simple yet powerful writing and podcasting travel setup that should serve me well on the trip.

11” M4 iPad Pro and MacBook Pro.

The center of my setup will be a 14” M3 Max MacBook Pro that Apple sent me for testing. It’s a fantastic computer that’s more than capable of handling the research, writing, and audio production work I’ll be doing, along with any video taken during the week.

I’ll also take my new 11” iPad Pro, which should help lighten my bag when I’m traveling back and forth from my hotel to Apple Park. Swapping my old 12.9” iPad Pro for the new 11” model will be perfect for this sort of trip. I plan to use it for getting some work done on the flight to California and for taking notes at the WWDC keynote. It’s hard to jot much down during the event while you’re sitting outside in the sun, so anything more than my iPad would be overkill.

Read more


Apple TV Go: How a USB-C Mod Spiraled into an iPad-Based tvOS Workstation

Apple TV Go.

This time of year is one that’s always filled with anticipation for what’s upcoming for both developers and users of Apple’s platforms. And while many traveling to Cupertino will be focused on the iPhone in their pocket or the iPad or MacBook that regularly travels with them, for an Apple TV aficionado, it’s a different story.

As I packed for my first WWDC last year, I had a moment of self-reflection. Did I really need to pack an external display, ATEM switcher, HDMI splitters, HomePod minis, controllers, multiple Apple TVs, and an eight-gang multi-plug extension lead into my carry-on in anticipation of a noteworthy software story for Apple TV?1 After all, the year prior had developers and fans giving their best impersonations of confused John Travolta following a largely absent year for tvOS.

Thankfully, WWDC23 was a different story, with a surprisingly ambitious tvOS 17 release full of improvements and features I couldn’t wait to try. But while other attendees could go hands-on with the first developer betas of their favorite platforms with relative ease, given the nature of Apple TV hardware, I had to head back to the apartment I was staying at whenever I wanted to get hands-on time with tvOS.

Somewhere in the Apple TV multiverse, there was a USB-C powered device that I could carry with me for quick reference, whether at Apple Park or a south London coffee shop during my writing process for last year’s tvOS review. Somewhere, there was an iPad-like device that could run tvOS, offering a superior software alternative to the plentiful supply of affordable, battery-powered Android portable projectors.

Although neither of those products has entered our own Apple TV universe yet, the introduction of iPadOS 17 and its support for UVC (USB Video Class) devices had my imagination running wild at the possibilities for this year. I needed to become the hero of my own story and create the on-the-go tvOS workstation I envisioned by braving the world of hardware modification and building my very own Apple TV Go.

Read more


Kino First Impressions: An iPhone Video Camera App from the Makers of Halide

I’ve been playing around with Kino, a video camera app by Lux, on and off for the past day. That’s not long enough to do a full review, so instead, I got up this morning and headed out for a walk with Kino in tow to see what the default experience is like. The short answer is it’s excellent. Kino is designed to work well out of the box for a novice like me but offers manual controls for someone who needs less hand-holding. It’s similar to Lux’s approach to Halide, the company’s pro camera app, and my early experience with Kino has been just as good as it’s been with Halide.

Kino and Halide share a similar design aesthetic, so if you’ve ever tried Halide, you’ll have no trouble finding your way around Kino’s UI. There’s a record button at the bottom of the screen flanked by a button to access the video you’ve taken, which can be stored in your photo library or in the Files app, and a button for the app’s Instant Grade feature. At the top of the screen are controls for resolution, frame rate, and format presets, as well as a ‘Custom’ option. The top of the screen is where you’ll also see your audio levels and a button for switching between automatic and manual exposure. Just beneath the viewfinder are controls for toggling auto and manual focus, picking your camera lens, and a button for accessing additional controls and the app’s settings.

Like Halide, Kino also comes with a set of guides to get you started, which I haven’t tried yet because they weren’t available in the beta version of the app. However, if they’re anything like Halide’s guides, I expect they’ll be worth checking out if you’re new to shooting video and want to get the most out of Kino.

Some of Kino’s built-in color presets.

The app shoots beautiful video by default. Here’s an example of a short walk through Davidson College’s campus using all default settings, the iPhone 15 Pro Max’s Ultra Wide lens, and no post-processing.

The marquee feature of Kino is its Instant Grade. The app also comes with a collection of built-in color presets that you can preview in the viewfinder, making it easy to find one that fits your needs. The collection that comes with Kino has been created by video experts, including Stu Maschwitz, Sandwich Video, Evan Schneider, Tyler Stalman, and Kevin Ong. But you’re not limited to the presets that come with Kino. You can also import any LUT using the app’s integration with the Files app.

I visited a nearby lake and shot some video with Kino’s default settings enabled, and then tried each of its color presets:

The app also implements something Lux calls AutoMotion, which applies an exposure logic that gives video a cinematic feel. It’s another feature that just works out of the box for novices who don’t want to dig deeper. However, you always have the option to vary from the defaults, adjusting settings manually.


My first-run experience with Kino was great. I didn’t explore the app before heading out the door this morning, yet I had no trouble figuring out the basics and shooting video that looks good with no processing whatsoever. With more practice and some post-processing, I’m sure the results would look even better, but I love how well my video turned out with minimal effort. I’m planning to spend more time with Kino over the summer and look forward to checking out Lux’s guides to improve my video skills.

Kino is available on the App Store for a one-time price of $9.99, which is a short-term launch price. In a few days, the app will be $19.99.


I Turned the New 13” iPad Pro Into a MacPad and Portable Gaming Display

The updated MacPad.

As I hinted in my story on the issues of iPadOS last week, I upgraded from an 11” iPad Pro to a 13” iPad Pro (1 TB, Wi-Fi-only model). While I was very happy with the 11” form factor, I decided to return to the larger model for two reasons:

  • I wanted to have maximum thinness with the ultimate iPad Pro model Apple makes.
  • I was willing to sacrifice the physical comfort of the 11” iPad Pro to get a larger display for my MacPad as well as for portable gaming.

Today, I will explain how I was able to immediately turn the brand-new 13” iPad Pro into a convertible MacPad using a combination of accessories and some new techniques I’ve been exploring. I’ll also share my experience with using the iPad’s glorious Tandem OLED display in a variety of gaming setups ranging from streaming to emulators.

Let’s dive in.

Read more


Not an iPad Pro Review: Why iPadOS Still Doesn’t Get the Basics Right

Let me cut to the chase: sadly, I don’t have a new iPad Pro to review today on MacStories.

I was able to try one in London last week, and, as I wrote, I came away impressed with the hardware. However, I didn’t get a chance to use a new iPad Pro over the past six days ahead of today’s review embargo.

I know that many of you were expecting a deeper look at the iPad Pro on MacStories this week, but that will have to come later. I still plan on upgrading to a 13” iPad Pro myself; I’ve decided I want to return to the larger size after a few months with the 11” iPad Pro. If you’re interested in checking out reviews of the new iPad Pros from heavy iPad users like yours truly right now, I highly recommend reading and watching what my friends Jason Snell and Chris Lawley have prepared.

Still, as I was thinking about my usage of the iPad and why I enjoy using the device so much despite its limitations, I realized that I have never actually written about all of those “limitations” in a single, comprehensive article. In our community, we often hear about the issues of iPadOS and the obstacles people like me run into when working on the platform, but I’ve been guilty in the past of taking context for granted and assuming that you, dear reader, also know precisely what I’m talking about.

Today, I will rectify that. Instead of reviewing the new iPad Pro, I took the time to put together a list of all the common problems I’ve run into over the past (checks notes) 12 years of working on the iPad, before its operating system was even called iPadOS.

My goal with this story was threefold. First, as I’ve said multiple times, I love my iPad and want the platform to get better. If you care about something or someone, sometimes you have to tell them what’s wrong in order to improve and find a new path forward. I hope this story can serve as a reference for those with the power to steer iPadOS in a different direction in the future.

Second, lately I’ve seen some people argue on Mastodon and Threads that folks who criticize iPadOS do so because their ultimate goal is to have macOS on iPads, and I wanted to clarify this misunderstanding. While I’m on the record as thinking that a hybrid macOS/iPadOS environment would be terrific (I know, because I use it), that is not the point. The reality is that, regardless of whether macOS runs on iPads or not, iPadOS is the ideal OS for touch interactions. But it still gets many basic computing features wrong, and there is plenty of low-hanging fruit for Apple to pick. We don’t need to talk about macOS to cover these issues.

Lastly, I wanted to provide readers with the necessary context to understand what I mean when I mention the limitations of iPadOS. My iPad setup and workflow have changed enough times over the years that I think some of you may have lost track of the issues I (and others) have been experiencing. This article is a chance to collect them all in one place.

Let’s dive in.

Read more