Six Colors


by Jason Snell

Yes, Secure Erase on Mac is secure

There’s been a lot of buzz lately involving a bug in Photos that caused some deleted items to reappear in libraries, including some (apparent) misinformation that blew the entire story out of proportion. Is Apple’s “Secure Erase” feature truly insecure?

No, it’s not. Howard Oakley has the technical details about why:

As far as I can tell, claims made about securely erased devices recovering old images originate from a single post on Reddit, since deleted by the person who posted it. Although that brought a series of cogent responses pointing out how that isn’t possible, it was picked up and amplified elsewhere, under the title iOS 17.5 Bug May Also Resurface Deleted Photos on Wiped, Sold Devices, which is manifestly incorrect. Sadly, even those who should know better have piled in and reported that single, retracted claim as established fact.

Good information and a useful lesson about not believing everything you read on the Internet.

—Linked by Jason Snell

Do we multitask on our iPads? What AI uses do we actually find helpful? Plus, the email apps we use and whether we’ve experimented with cloning our voice.



By Joe Rosensteel

The Dos and Don’ts of AI at WWDC

Picture this. (Photo illustration by Joe Rosensteel)

For the past year, pressure has really been ramping up on the tech industry to do stuff with AI. What “AI” actually means can vary, but it usually refers to a large language model (LLM) chatbot that takes natural language input.

While Apple has tried very hard to remind everyone that all the stuff they do with machine learning and the neural engine counts too, the fact is that Apple is perceived as being behind, because it has no LLM chatbot of its own.

LLMs are artificial, but not really intelligent. They can be quite wrong, or simply malfunction. But they are far better at conversational threads, and at understanding context, than original-recipe voice assistants like Siri. It was enough to spook Apple’s executives and lead them to begin a crash AI program.

OpenAI, Microsoft, Meta, Google—you name it. It’s a land grab. Everyone is trying to find a way around smartphone platforms, search monopolies, data brokers, ad sales, SEO, publishers, photographers, stock footage… pretty much everything. The urgency, the sheer sweatiness of tech companies trying to show their AI relevance, is palpable.

Apple doesn’t want anyone to see them sweat, but at WWDC they’re going to have to break out the AI buzzwords and show where they fit into the current zeitgeist. Here’s what Apple can learn from the mistakes other companies are making when it comes to demonstrating AI prowess.

Summaries and slop

Don’t show off summarizing a conversation. I know Mark Gurman suggests this might be a new feature, but every demo of it from other companies has gone over like a lead balloon. Summarization demos say one thing: “How can I more efficiently ignore the nuance and humanity of the people around me?” Also, demos of summaries are just plain boring.

Google I/O featured several instances of summarization that were not useful and were borderline disrespectful. There was a hypothetical conversation between a power couple and their prospective roofer. Google’s “helpful” summary said a quote was agreed to, but didn’t say what the quote was! The actual price—seemingly a key element of a quote—didn’t appear until a follow-up question. The summary also omitted all the nuance of the roofer’s interactions with the husband in the scenario. Who would trust that summary?

LLM summaries remove words, collapse context, kill tone, and neuter meaning. Busy technology executives eat these demos up, though!

Don’t demo things that snoop on a user’s calls or their device’s screen. At Google I/O, a demo displayed a fraud warning during a phone call. That means an AI model was listening to the phone conversation. Even if that’s happening entirely on the device, it’s still unnerving that Google is now listening to the contents of your phone calls. The same goes for Microsoft’s Recall, another on-device feature that watches everything you do—so long as you forget Microsoft’s lousy track record of securing people’s devices.

Under no circumstances should there be a chatbot in a conversation with real people, jumping in to offer help coordinating times or issuing reminders. Fortunately, Apple doesn’t ship a workplace chat platform, so we’re unlikely to get an Apple version of “Chip,” the nosy virtual chat kibitzer Google demoed at I/O. But I don’t want that bot in my iMessage threads, either.

No generative slop. Don’t show off AI-written poetry or book reports. If people ask for help writing a cover letter, show them an example of a cover letter; AI should point users to vetted and approved templates. (There should still be an AI story for Xcode at WWDC, though, or why even make the keynote about AI? It just needs to be respectful of developers’ needs, and actually useful in helping developers do their jobs.)

I think Apple already learned a valuable lesson about visual metaphor when they smashed instruments of human expression into a thin iPad, but just to reiterate: Don’t do that again.

Speaking of creation: Don’t show off images generated from nothing but a prompt. Any generative elements should be derived from source images or video. Show off altering the aspect ratio of an image, object and lens flare removal, creating thumbnail images, sharpening, denoising, or focus effects.

Even then, keep it grounded to what a reasonable person would want to do with their photos. The Photos app doesn’t need to become Midjourney, or Stable Diffusion, and it certainly doesn’t need to use any models with opaque, legally questionable sources to augment a photo of you smiling at the beach. It should still be that photo at the end of the day.
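
Notably, sharpening and denoising of this sort don’t require a generative model at all; Apple’s existing Core Image filters already handle them. Here’s a minimal Swift sketch of the kind of grounded, source-faithful enhancement I mean (the function name and filter parameters are mine, for illustration, not tuned recommendations):

import CoreImage
import CoreImage.CIFilterBuiltins

// Denoise, then sharpen, an existing photo. No pixels are invented;
// the output is still recognizably the source image.
func enhance(_ input: CIImage) -> CIImage {
    let denoise = CIFilter.noiseReduction()
    denoise.inputImage = input
    denoise.noiseLevel = 0.02 // illustrative values, not tuned
    denoise.sharpness = 0.4

    let sharpen = CIFilter.sharpenLuminance()
    sharpen.inputImage = denoise.outputImage
    sharpen.sharpness = 0.5
    return sharpen.outputImage ?? input
}

An edit like that ends the day as the same photo, which is exactly the point.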

As for partner demos, I’d recommend against demonstrations from companies whose AI models let people make a logo or icon for their company or product without hiring an artist. Under no circumstances should Midjourney, DALL-E, or any of the other generators that scraped art and photos off the internet be used in a demo. That sends the wrong message, even if it is absolutely a use case that can be demoed to show how the neural engine makes creating a logo 90% faster than on Intel.

Don’t demo video generators. These mostly scare people and impress weirdos. (“Look, her hands are boiling!”) They’re basically a substitute for artifact-ridden stock footage, and Apple is not a purveyor of artifact-ridden stock footage.

AI video tools that handle retiming, color grading, detail recovery, and noise reduction are all acceptable, especially if they can lean on Apple’s multifaceted imaging pipeline, or can use Apple’s depth data as part of the dataset in processing the footage.

For example: Apple is interested in customers shooting Spatial Video, but there are technical shortcomings with the different lenses. Show us how data can be transferred from one eye to the other to help reduce artifacts and increase resolution. Do an easy-to-use version of something akin to Ocula.

It is possible to preserve AI/ML as a tool without having AI/ML take over the output. There should always be a kernel of reality in every demo to ground it. It should apply to real life, not try to compete in the crowded hallucination market.

Hey, Siri

Now that the lede is good and buried, let’s talk about Siri.

We’d all love a senior Apple exec to get on stage and issue a mea culpa before launching the new version, but it’s probably going to be something more like, “Millions of people use Siri every day, which is why we’re excited to announce Siri is even better than before.”

Unfortunately, Mark Gurman has kind of burst the bubble:

The big missing item here is a chatbot. Apple’s generative AI technology isn’t advanced enough for the company to release its own equivalent of ChatGPT or Gemini. Moreover, some of its top executives are allergic to the idea of Apple going in that direction. Chatbot mishaps have brought controversy to companies like Google, and they could hurt Apple’s reputation.

But the company knows consumers will demand such a feature, and so it’s teaming up with OpenAI to add the startup’s technology to iOS 18, the next version of the iPhone’s software. The companies are preparing a major announcement of their partnership at WWDC, with Sam Altman-led OpenAI now racing to ensure it has the capacity to support the influx of users later this year.

Baffling. I have no idea what that demo will look like, but I hope it isn’t “Showing results from ChatGPT on your iPhone” and there’s a big modal window of ChatGPT output.

It is worth noting that not everyone is enamored with ChatGPT, despite the enthusiasm over its features.

Apple certainly won’t be demoing the imposter Scarlett Johansson voice from OpenAI at WWDC like OpenAI did at its spring event. You know, on account of them being sued, and all.

That same OpenAI spring presentation had perhaps one of the best demos of an LLM voice interface I’ve seen, where one presenter spoke in English, the other spoke in Italian, and GPT-4o acted as a live translator. That was a great demo, and translation is definitely one of the areas where Apple is already playing catch-up. It’s not rumored to be a feature, but it would be a good demo.

Google demoed integration with Google Workspace (Drive, Sheets, Gmail, Gchat (lol), etc.), and Apple should show that Siri can pull in information and context from Mail, Messages, Calendar, Photos, Reminders, etc. Ideally, it would also work with apps beyond those, but it needs to be able to plug into at least that data.
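
For third-party apps, plausible plumbing already exists in Apple’s App Intents framework, which lets an app expose actions and data to the system. Here’s a minimal hypothetical Swift sketch; the intent, its parameter, and the invoice lookup are invented for illustration, not a rumored Apple API:

import AppIntents

// Hypothetical intent exposing app data that a smarter Siri could pull in.
struct LatestInvoiceIntent: AppIntent {
    static var title: LocalizedStringResource = "Get Latest Invoice"
    static var description = IntentDescription("Returns the most recent invoice total for a client.")

    @Parameter(title: "Client Name")
    var client: String

    func perform() async throws -> some IntentResult & ReturnsValue<String> {
        // A real app would query its own data store here.
        let total = "$1,200" // placeholder value
        return .result(value: "\(client)'s latest invoice total is \(total).")
    }
}

Siri can already run intents like this via Shortcuts today; the open question is whether a new Siri could reach into them conversationally.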

That means there needs to be a privacy interface for which apps Siri can access, especially if data is being relayed to a third party, and a privacy story about how Apple won’t be looking into every app on your device if you don’t want it to.

I fear that Apple simply won’t address anything but ChatGPT basics shoved into Siri windows. Which is possibly worse than continuing to work quietly on whatever the hell it is they’re working on. I’ll still run through some examples I’d love to see:

Show us someone asking a HomePod or Watch to do something, and instead of saying it can’t, it’ll execute it on your iPhone. Tell us the story about how Siri is secure and functional across devices under your Apple ID.

Demo someone telling Siri to play something on TV, then asking their Apple Watch to “pause the TV,” where Siri knows “the TV” is the one I started playing something on (and that my iPhone is nearby, based on Bluetooth), even if there are many TVs attached to my Apple ID.

Put on a little show of someone asking Siri where something is in the interface, or how they can do something. “Hey Siri, where are my saved passwords?” It whisks the person right to the Passwords section of Settings. “Hey Siri, I turned down the brightness all the way but it’s still too bright, what can I do?” and it surfaces the Reduce White Point control. Conversationally, “How can I turn on Reduce White Point only late at night?” and it offers a Shortcut based on my sleep and wake-up times.

Demo someone using the new Siri with CarPlay, an essential application of Siri, where someone can conversationally tell Siri to “Play ‘Mona Lisa Overdrive’” and then follow that up with “Play the rest of the album,” and it’ll queue up the remaining tracks instead of doing something completely random like it does now.

Absolutely demo someone pausing music on their Mac, and telling their HomePod to “play what I was last listening to” and it can go resume playback on the HomePod exactly as if you had just hit play on your Mac.

Demo Siri being able to understand what’s currently on-screen when asked. “Hey Siri, who is the actor in this video?” Then conversationally follow that up with “What have I seen them in recently?”, where it could look through what was recently watched in the TV app and check that against the roles that actor has played. That’s not putting anyone out of a job. (Well, except Casey. Sorry, buddy.)

Above all else, demo to the audience that when Siri doesn’t know what to do, it’ll ask. Show us a graceful failure state that reassures people how Apple can behave responsibly.

Let me illustrate what not to do with a recent interaction I had with Current Siri:

Me: “Play the soundtrack for The Last Starfighter.”
Siri: “Here’s The Last Starfighter.”
[Opens TV app on iOS and starts playing The Last Starfighter from my video library.]
Me: “Play The Last Starfighter soundtrack.”
Siri: “Here’s Dan + Shay.”
[Music app starts playing Dan + Shay’s “Alone Together.”]
Me: “Play The Last Starfighter Original Motion Picture Soundtrack.”
Siri: “Here’s The Last Starfighter by Craig Safan.”

It seems, however, that nothing is really rumored along these lines. Oh well, guess I’ll listen to some more Dan + Shay!

Ethics? Anyone?

A very troubling aspect of these rumors is Apple partnering with OpenAI. OpenAI didn’t ethically buy the rights to the information used to train its models, just like it didn’t take Scarlett Johansson’s no for an answer. It’s in active lawsuits with various media companies.

Even companies that have struck a deal with OpenAI—like Stack Overflow and Reddit—are getting bought off after their sites had already been scraped. Users, who generated all the value on those sites, can’t even delete their posts in protest.

Is Apple going to endorse OpenAI by giving them a thumbs up and slotting them into their next operating system releases without comment? They absolutely shouldn’t show anyone from OpenAI in their WWDC presentation, especially not Sam Altman.

There’s an easy parallel to Google: companies sue Google all the time over rights, and Apple still builds Google into its products.

Of course, Apple is taking money from Google to be the default search engine on iOS, and then trying to have Safari insert Spotlight suggestions to pretend there’s a privacy angle. That Google deal now means that default searches will go through Google’s AI Overviews. So Apple is already going to endorse Google’s approach to AI too, even if the companies don’t strike a deal for anything more.

And let’s not forget the ethics of Apple’s climate pledge. There should be a point in the WWDC keynote where Apple communicates how they can harness AI and still stay on target for their climate goals. That probably seems like a small thing, but people are getting pretty hand-wavy about maintaining their commitments while also putting their models to use.

Regardless of what happens, I suspect there will be plenty of disappointment and outrage to go around in the aftermath of WWDC. These are the times we live in. I just hope Apple takes some lessons from that thing with the hydraulic press and the iPad and doesn’t step in it too badly, just to show that they’re keeping up with the AI hype from the bozos of the tech world.

[Joe Rosensteel is a VFX artist, writer, and co-host of the Defocused and Unhelpful Suggestions podcasts.]


By Jason Snell for Macworld

The future of the iPhone is coming, but it’ll cost you dearly

It’s kind of hard to believe today, given how wildly successful the iPhone has been, but in the product’s early days there was only a single iPhone model for sale. It wasn’t until ten years ago, in 2014, that Apple introduced two different iPhone models, the iPhone 6 and iPhone 6 Plus. But that opened the floodgates, and Apple has spent the last ten years trying to find the right combination of new iPhone models to maximize the money it makes from its most important product.

If reports are true, next year Apple’s going to be switching things up again, dropping the iPhone Plus for a dramatic new model. After the discontinuation of the iPhone Mini after two years and the (apparent) death of the iPhone Plus after three, what does Apple have up its sleeve for 2025?

Continue reading on Macworld ↦


by Jason Snell

‘What If?’ for Vision Pro to arrive next week

The Watcher is watching with a Vision Pro.

Today Marvel and ILM dropped the trailer for the forthcoming “What If?” immersive story, which was just announced a couple of weeks ago, and (surprise!) is launching next Wednesday in the visionOS App Store as a free app.

Marvel and ILM say the immersive story is about an hour long, and is directly connected to the “What If?” animated series on Disney+, which itself is a multiversal riff on various Marvel Cinematic Universe movies. (Lest you think that this multiverse stuff is new, “What If?” is based on a comic that I absolutely devoured when I was a kid. I still have my issue #1 in a box—what if Spider-Man joined the Fantastic Four?—not too far from where I’m typing this.)

The story will feature the cosmic being The Watcher asking you, the Vision Pro user, for help in facing off with dangerous “variants,” alternate-universe versions of various Marvel characters. Sorcerer Supreme Wong will instruct players on how to cast spells (presumably with the Vision Pro’s hand tracking features) and use the Infinity Stones. Versions of characters such as Thanos, Hela, the Collector, and Red Guardian will appear.

Marvel and ILM describe the app as including both full virtual environments as well as mixed-reality scenarios which involve the real world around players. It sounds very much like an interesting mash-up of interactive and animated elements. I’m very much looking forward to seeing how the “What If?” animation style translates into 3-D. I guess my wait will be over in a week!

—Linked by Jason Snell

Myke returns to the show to give his personal iPad Pro review, and we discuss Apple’s accessibility announcements, worrying Apple AI reports, and some mystifying rumors about future iPhones for this year and next.


Why the “great rebundling” isn’t the return of the cable bundle; recalibrating how talent is paid for streaming shows; Paramount twists in the wind; Spulu gets a real name; the NFL makes a deal with Netflix; and Diamond Sports meets with skepticism.


By John Moltz

This Week in Apple: Going hard on the software

John Moltz and his conspiracy board. Art by Shafer Brown.

The new iPad Pros are astounding, fast, and durable machines that run an operating system. In other news, the new chatbots are here and we’ll never hear the end of it.

It’s a real good news/bad news situation

If you’ve been living under a rock for the past week… well, I have a lot of questions. First, I hope this is just a hatch-type situation where the rock is simply hiding the door to a silo or bunker because otherwise that sounds really uncomfortable. Unless it’s a very small rock, in which case why bother? We can see you. But I don’t want to get distracted because we have Apple news to talk about, so let’s circle back around to this living-under-a-rock situation later.

The Apple web’s main discourse this week revolved around reviews of the new iPad Pros (tl;dr very impressive, hecka fast, lighter than air because they are literally lighter than the iPad Airs) and thoughts about the overall iPad experience.…

This is a post limited to Six Colors members.



by Jason Snell

It’s forgery all the way down

Yet another incredible story on Cabel Sasser’s blog, this one about a supposedly vintage Apple Employee badge:

Wow. Someone was selling Apple Employee #10’s employee badge?! What an incredible piece of Apple history! Sure, it’s not Steve Jobs’ badge (despite the auction title), but there are only so many of these in the world — especially from one of the first ten employees.

How do you cover for a forged document? More forged documents!

—Linked by Jason Snell

Our Apple Watch interaction frequency, thoughts on Spotify’s “Daylists” and our ideal playlist name, interest in Google’s AI features at Google I/O, and essential travel tech for “One Bag Travel.”



By Shelly Brisbin

Apple accessibility preview: More for Speech, CarPlay, and Vision Pro

Vision Pro’s live captions convert spoken words to text onscreen.

This year’s accessibility feature preview from Apple, timed to get a one-day jump on the annual celebration of Global Accessibility Awareness Day, continues last year’s expansion beyond the traditional sight, sound, and physical/motor categories. And without saying so explicitly, Apple seems to have teased the kinds of advances that AI and machine learning could bring to the company’s overall OS offerings.

Tell your phone what to do

Use a vocal shortcut to access an app or a feature, and get an alert onscreen to let you know it’s been heard.

Last year, Apple added speech as a major accessibility category, allowing someone to let their device speak on their behalf with Personal Voice and Live Speech. This year, new tools emphasize controlling the device itself with speech, even if speech is limited or difficult to understand.

Vocal Shortcuts will let you turn an utterance – it could be a word, a phrase, or even a saved vocalization – into a command that an iOS device will understand. Switch Control already recognizes utterances – like a “P” sound, an “F,” a “CH,” or another letter or combination that speakers with limited speech capability can make – and turns the sound into a tap, a swipe, or another interface action. Vocal Shortcuts is intended to expand that metaphor to more specific tasks, including running a shortcut, opening a specific app, launching Control Center, and more.
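
Apple hasn’t said how Vocal Shortcuts plugs in under the hood, but apps can already attach spoken phrases to actions through the existing App Shortcuts API, one plausible place for custom utterances to land. A minimal Swift sketch of that existing mechanism, with the journal intent and phrase invented for illustration:

import AppIntents

// Hypothetical intent: opens the app's journal view when invoked.
struct OpenJournalIntent: AppIntent {
    static var title: LocalizedStringResource = "Open Journal"
    static var openAppWhenRun: Bool = true

    func perform() async throws -> some IntentResult {
        // A real app would navigate to its journal screen here.
        return .result()
    }
}

// Registers spoken phrases the system can match to the intent.
struct JournalShortcuts: AppShortcutsProvider {
    static var appShortcuts: [AppShortcut] {
        AppShortcut(
            intent: OpenJournalIntent(),
            phrases: ["Open my journal in \(.applicationName)"],
            shortTitle: "Open Journal",
            systemImageName: "book"
        )
    }
}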

Listen for Atypical Speech captures language that is recognizable as speech but might not be easy to understand. For years, Siri hasn’t done a great job at recognizing accented speech, and it’s worth speculating about how this focus on atypical speech could be applied to the wider voice recognition landscape in all of Apple’s OSes.

Personal Voice, an existing feature that people at risk of losing the ability to speak can use to create a synthetic version of their own voice, will expand to more languages, including Mandarin Chinese.

More access behind the wheel

CarPlay will support Sound Recognition, with alerts tied to car horns, sirens and other traffic-related sounds.

For the first time I can remember, Apple is bringing accessibility-specific features to CarPlay and naming them. The speech focus is apparent here, too, as CarPlay gets Voice Control – a step beyond using Siri to command CarPlay apps; Voice Control lets you speak interface actions, like “swipe” or “tap” while using CarPlay.

Sound Recognition for iOS flashes an alert when your phone hears a chosen noise, like a smoke alarm, a baby crying, or water running. That’s helpful if you can’t hear these sounds in your environment. The new CarPlay version of Sound Recognition will focus on traffic sounds, like emergency vehicles or vehicle horns, and get the driver’s attention via the car’s display. Then, there are visual enhancements, like color filters, to make the CarPlay interface easier to customize for your visual needs.

Vehicle Motion Cues, a new feature for passengers who are subject to motion sickness when using their phone in the car, will place a set of dots onscreen. They form the equivalent of a horizon, so you can keep using the phone without feeling sick.

The eyes have it

Eye tracking is a marquee Vision Pro feature, and it’s now more fully integrated into iPadOS. Developers can currently use included eye-tracking APIs to make hardware accessories compatible with the iPad. But those special-purpose devices – often head-mounted pointers or switches – are expensive. This year, support is coming at the system level, meaning someone who can’t touch the screen can use eye tracking to control the iPad directly.

Within the AssistiveTouch interface, there will also be a virtual trackpad feature for iPad users who don’t have a physical one but want to use AssistiveTouch gestures. Just as with support for physical mice and trackpads, could support for a virtual trackpad find its way to mainstream parts of iPadOS – this year or in the future?

More for Magnifier to do

Magnifier’s new reader feature can display text in the font and color you choose.

When Apple converted Magnifier from an iOS feature into a full-fledged app, it began adding seemingly disconnected accessibility enhancements to it, like People Detection and Door Detection. They’re both great features, but Magnifier isn’t necessarily the most intuitive place for them.

More logical is this year’s addition of a reader mode for blind or visually impaired users. Capture a document (like a restaurant menu or a piece of mail) with Magnifier and invoke the reader mode to have the text formatted in an easy-to-read layout that uses your preferred font and/or color. It feels a lot like Reader mode in Safari, and that’s a good thing.

More Vision Pro accessibility

The Vision Pro itself will gain new accessibility features, including systemwide live captions and movable captions that are specifically applied to Apple’s immersive video content. Apple also promises new Vision Pro support for cochlear implants and other hearing devices. For low-vision users, visionOS fills in a few display options that were missing from version 1.0, including Smart Invert Colors.

Something for everyone?

Most of what’s been previewed for accessibility this year is quite modern – even the tools that build on past features are based on relatively new categories, like speech. Enhancements to CarPlay and Vision Pro prove that the company continues to bring accessibility along when it innovates for the user base as a whole. Will we see enhancements to accessibility standbys like VoiceOver, especially on the Mac, where not much has changed in recent years? Perhaps, but probably not until beta time this summer.

[Shelly Brisbin is a radio producer, host of the Parallel podcast, and author of the book iOS Access for All. She's the host of Lions, Towers & Shields, a podcast about classic movies, on The Incomparable network.]


By Dan Moren for Macworld

Are we in Apple’s post-iPad era?

The consensus about the new iPad Pro is in: the hardware is more powerful and impressive than ever, but it’s still largely hampered by software that just hasn’t managed to keep up.

Such has been the story of the iPad Pro almost since its inception nearly nine years ago, and despite moves that would seem to augur good things for the platform—like forking iPadOS off from iOS—Apple hasn’t exactly done a bang-up job of assuaging users about its commitment to making the tablet a full-fledged citizen of its lineup.

But zooming out, there’s a larger question at play here: what are the long-term prospects of the iPad? Once heralded as the future of computing, the iPad seems to have fallen short of that promise and, in the process, become Apple’s most at-risk product line. Compared to the rest of Apple’s offerings, the iPad increasingly looks like it may be the one left standing when the music stops.

Continue reading on Macworld ↦


by Jason Snell

‘Why iPadOS Still Doesn’t Get the Basics Right’

I linked to this in my iPad Pro review, but in case you missed that, Federico Viticci has done everyone who writes about iPadOS a service and written a detailed account of what we mean when we say “the iPad hardware is let down by its software”:

In our community, we often hear about the issues of iPadOS and the obstacles people like me run into when working on the platform, but I’ve been guilty in the past of taking context for granted and assuming that you, dear reader, also know precisely what I’m talking about.

Today, I will rectify that. Instead of reviewing the new iPad Pro, I took the time to put together a list of all the common problems I’ve run into over the past…checks notes…12 years of working on the iPad, before its operating system was even called iPadOS.

I’ve been stunned to see some reactions to our criticism of iPadOS this past week suggest that, somehow, people like Federico and myself just don’t “get” the iPad. We’ve spent years using the iPad and pushing what it can do. We get it all too well.

—Linked by Jason Snell

by Jason Snell

The iPad Pro manifesto, same as it ever was

Steve Troughton-Smith details all the ways iPadOS lacks… and not for the first time:

You can summarize every iPad review for the past decade: iPad hardware is writing checks that its OS simply can’t cash. iPad silicon has so rapidly outgrown the limitations of the software that it has had ample time to break out and do three generational laps through the entire Mac lineup and back since the last time I wrote about it, and yet little has meaningfully improved about the iPad experience.

A solid list from a developer who cares very much about the iPad.

—Linked by Jason Snell

The new iPad Pro is here, and Jason is joined by Federico Viticci to discuss the new model, Jason’s review, and the limitations of iPadOS. Stephen Hackett also joins the show not to crush some creative dreams, but to answer your questions.


