Generative AI is still in its infancy, but we’re starting to see the largest players in the tech space get on board. Both Apple and Microsoft recently announced their visions of how artificial intelligence can transform the way we use our devices: Apple Intelligence and Microsoft Recall. While there’s a lot of promise on both sides, they’ve also been met with considerable fear, uncertainty, and doubt (FUD) around their implementations, security, and data governance. In this episode, we talk to Brad Bowers and Jonathan Care, experts on cybersecurity, about both tools and how enterprises should approach them. 

Context is king 

The goal of both tools is to give the operating system of whatever device you’re using contextual awareness of your personal information. Need details about dinner plans you discussed in an email with a friend? Just ask Siri or Recall, and they’ll dig up the information and present it to you. If your immediate reaction to this is, “That sounds creepy,” you’re not alone. This is where the FUD starts to set in. How are these tools gathering and storing data, and who else might be able to access it?

You can go your own way 

Apple and Microsoft have chosen very different approaches to reach a similar goal. On the Apple side, there’s one important distinction that was summed up very well in a post by John Gruber on his website Daring Fireball. While “Apple Intelligence” sounds like a single product or feature, it actually comprises several different models that work in tandem to bring contextual awareness across Apple’s operating systems. When you generate a brand new emoji (what Apple dubs “Genmoji”), get a summary of a webpage, or ask Siri to find the flight information your friend texted you, you’re employing one of these models that runs seamlessly on-device, utilizing the neural engine of Apple’s custom silicon. For more complex tasks that require larger models, Apple has developed Private Cloud Compute, which offloads the work to a remote server. They’re even integrating ChatGPT for queries that require world knowledge outside of the user’s personal context.  

Things are a little different on the Microsoft side. Rather than use multiple small models to build awareness across the system, Recall essentially captures an image of your screen every 5 seconds, then analyzes everything it sees. This allows you to search for anything you saw or did using natural language. If you need to find a file someone shared with you but you can’t remember whether it was in an email or a text, just describe it to Recall and it will find it for you. Microsoft stated that all these snapshots are stored locally and never uploaded to the cloud, and the data can’t be viewed by anyone at Microsoft or sold to advertisers. That didn’t seem to allay any fears. 
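To make that pipeline concrete, here’s a toy sketch of the general idea: capture a snapshot, extract its text, store it in a local database, and search it later. This is not Microsoft’s implementation — Recall captures actual screen images and analyzes them with vision models, and the capture step here is simulated with plain strings — but it illustrates the capture-index-search loop:

```python
import sqlite3
import time

# Toy Recall-style pipeline: snapshot -> local index -> search.
# The real feature stores screen captures and OCR'd text on-device;
# here, "snapshots" are just strings kept in a local SQLite table.

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE snapshots (captured_at TEXT, content TEXT)")

def take_snapshot(content: str) -> None:
    """Record what was 'on screen', with a timestamp, in the local store."""
    db.execute(
        "INSERT INTO snapshots (captured_at, content) VALUES (?, ?)",
        (time.strftime("%Y-%m-%d %H:%M:%S"), content),
    )

def search(query: str) -> list[str]:
    """Find every stored snapshot whose text mentions the query."""
    rows = db.execute(
        "SELECT content FROM snapshots WHERE content LIKE ?",
        (f"%{query}%",),
    ).fetchall()
    return [content for (content,) in rows]

# Simulated activity: an email and a chat message appear on screen.
take_snapshot("Email from Sam: dinner reservation Friday at 7pm")
take_snapshot("Chat with Alex: here is the budget spreadsheet link")

print(search("spreadsheet"))  # finds the chat snapshot
```

The value — and the risk — both come from the same property: everything ever seen on screen ends up in one searchable local store, which is convenient for the user and equally convenient for anyone who gains access to that store.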

I always feel like Recall is watching me 

Microsoft’s announcement of Recall was met with significant backlash, so much so that they delayed its release and have worked to assure the public of its safety. As a result, they’ve implemented a few changes to address some of the concerns, including making it opt-in at setup and adding more layers of data protection.  

Apple Intelligence is also opt-in, so users don’t need to worry about their data being analyzed without their consent. It’s also possible to exclude individual apps from both Apple Intelligence and Microsoft Recall. 

Still, any tool that collects this much personal data is bound to make users suspicious. How might these tools and the data they’re collecting be misused? A major fear is that Recall snapshots could be relatively easy for bad actors to obtain, which researchers have already demonstrated. If Recall’s purpose is to replay your workflow, then anyone who can grab those screenshots can also replay your workflow. There’s also concern regarding, as Jonathan puts it, “overeager employers” misusing these tools to make ill-informed decisions. 

Slow ride 

There is still a lot to learn about Apple Intelligence and Microsoft Recall. As these AI tools evolve and see wider adoption, we’ll begin to get a clearer picture of the effectiveness of their security efforts.

For enterprise devices, it will be up to organizations to weigh the risks against the potential benefits. For those who decide it’s worth a try, Brad offers some sage advice: “... of course slow roll it out to systems that may be of high sensitivity.” Taking a slow, measured approach will allow admins to further assess the risks, gather feedback, and determine whether these tools are worthwhile additions to their users’ workflows.

For a deeper dive into generative AI and how organizations should prepare for its adoption, check out this episode of Innovation Heroes with Intel vice president Stacey Shulman. 

--- 
Be sure to check out WeGotYourMac.com for more episodes and content on Mac adoption and other end-user computing topics.  

This episode of We Got Your Mac is presented in collaboration with SHI’s Security Posture Review. SHI’s Security Posture Review takes the guesswork out of determining your current state of network security. From external vulnerability to remote access assessments, we rank the current status of your network – from all angles. Visit SHI.com/SecurityReview to learn more today!