This special episode of The Lab Report was recorded following the Wanderlust Apple Announcement from 12 September 2023.
The Information has an article by Wayne Ma reporting Apple is spending “millions of dollars a day” on Artificial Intelligence initiatives. The article is pay-walled, but The Verge summarizes it nicely.
Apple has multiple teams working on different AI initiatives throughout the company, including Large Language Models (LLMs), image generation, and multi-modal AI, which can recognize and produce “images or video as well as text”.
The Information article reports that Apple’s Ajax GPT was trained on more than 200 billion parameters and is more potent than GPT-3.5.
I have a few points on this.
First, this should be no surprise.
I’m sure folks will start writing about how Apple is now desperately playing catch-up. However, I’ve seen no evidence that Apple got caught with its pants down on AI. They’ve been working on artificial intelligence for years. Apple’s head of AI, John Giannandrea, came from Google, and he’s been with Apple for years. You’d think people would know by now that just because Apple doesn’t talk about something doesn’t mean it isn’t working on it.
Second, this should dovetail into Siri and Apple Automation.
If I were calling the shots at Apple, I’d have the Siri, Shortcuts, and AI teams all share the same workspace in Apple Park. Thus far, AI has been smoke and mirrors for most people. If Apple could implement it in a way that directly impacts our lives, people would notice.
Shortcuts, with its Actions, gives Apple an easy way to pull this off. Example: You leave 20 minutes late for work. When you connect to CarPlay, Siri asks, “I see you are running late for work. Do you want me to text Tom?” That seems doable with AI and Shortcuts. The trick would be for it to self-generate. It shouldn’t require me to already have an “I’m running late” shortcut. It should build one dynamically as needed. As reported by 9to5Mac, Apple wants to incorporate language models to generate automated tasks.
Similarly, this technology could result in a massive improvement to Siri if done right. Back in reality, however, Siri still fumbles simple requests routinely. There hasn’t been the kind of improvement that users (myself included) want. Could it be that all this behind-the-scenes AI research is Apple’s ultimate answer on improving Siri? I sure hope so.
For years, I tried to turn the iPad into a production machine. I still love the idea of sitting down with a piece of glass and getting real work done. But there were just too many hoops to jump through for much of my work. I remember at one point sitting in a meeting as a lawyer while we were discussing the purchase of a company. An accountant in the room sent me ten spreadsheets to review. I had an iPad Pro and thought to myself, “How am I going to keep up with everybody in the room trying to process these files on an iPad?” The answer, it turns out, is that I didn’t.
The real turning point for me was a dinner conversation with my children. Both were in college and using their iPads and were very happy with them. They explained how they used them to take class notes, work on outlines, and make themselves flashcards. It sounded like an Apple commercial.
I realized as I listened to them that they were using the iPad just like Apple envisions it. They were not looking at edge case uses but instead using it for what it is good at. That is when the switch flipped in my brain, and I changed my relationship with the iPad.
People will tell you that if you’re in a relationship with another human and think you can change them, you’re on a fool’s errand. I feel like the same goes for the iPad. You have to accept it for what it is to enjoy using it.
It’s been several years of me using the iPad as Apple intended instead of as I wanted. I can tell you the relationship is a lot healthier. I still use it all the time (indeed, I am dictating this blog post on my iPad mini), but I’m not trying to get it to record podcasts or perform any task that requires me to do silly things like setting up server connections or gimmicky Rube Goldberg-based automation schemes.
The iPad remains a production-plus-consumption device for me. I write on it, research on it, and plan on it. But as soon as I meet any significant resistance, I set it down and pick up a Mac.
So, the iPad and I are settled and at peace with one another. I still wish Apple would give the iPad more horsepower. There are many ways Apple could make these devices, which run on Apple silicon, take on Mac-type work. Until then, I’m done swimming upstream. I use my iPad as the maker intended.
Getting back to those posts by Jason and John, I usually bring my iPad mini and MacBook Air when I travel. I generally find there is nothing I can’t do with those two machines.
Day One’s creator wrote a blog post about Apple’s new journaling app and its relationship to Day One. It’s too early to tell exactly how deep the Journal app will go, but I doubt it matches the Day One feature set, which has been growing for years. I generally agree with Paul that the Journal app won’t “Sherlock” Day One. Indeed, I could see many folks starting with the Journal app and then moving up to Day One once they get hooked on journaling.
Several journalists got to strap into the new Apple Vision Pro headset. Three of my favorite reads are from Matthew Panzarino, Jason Snell, and Chance Miller. I’ve also spoken to several others here in Cupertino who got the demo. Every person I’ve spoken to who tried Vision Pro praises the technology. This truly is one of those “only Apple” products. There is no other company with the hardware and software expertise to pull it off. The 12-millisecond latency particularly stands out.
The question we’ll be asking ourselves is how this technology can change our lives. Looking back at the Apple Watch, Apple initially leaned into it as a fashion product but eventually came around to the idea that it is primarily a health and fitness device. I think for the Vision Pro, the killer use case will be even more stratified. Some people will want one for consuming content. Imagine having an IMAX-equivalent screen you can strap on your head, or watching your favorite sport virtually on the field next to your favorite players. I love the idea of creating virtual workspaces where I could journal in the middle of Yosemite or do some work while enjoying the view from Machu Picchu. I also like the idea of a seemingly 40-foot whiteboard that I could use in my 14-foot office.
The answer to how we’d use this is going to be “it depends on the person.” Can Apple continue to nail and improve upon this technology? Likely. Will this technology reach critical mass as it gets more affordable? That depends on whether there are enough good reasons for it.
Several Mac sites are reporting today that Apple has approached select iPhone developers to make “full resolution” versions of their applications for a demonstration in January, possibly January 26. Buckle up, boys and girls. Tablet mania is about to find its fifth gear. If an announcement is approaching, we are bound to be flooded with “insider” information, most (if not all) of which will be complete speculation. Either way, I think the next few months are going to be interesting.
Reports are coming out that Apple has softened its stance toward third-party iPhone application developers. It has begun to approve video streaming applications (I bet I’m not the only one waiting for Qik), and Macworld reports Apple is even letting applications with private API calls through (temporarily, at least). While this is great news for developers, I suspect Apple’s laser focus on the user experience will prevent it from ever opening the floodgates like other mobile providers are doing. That is a good thing for those of us who like things to work. Still, it is nice that they seem to be backing off some of their more jack-tastic moves of late with respect to the App Store. At this point, they’ve had long enough to sort out the deluge. Let’s hope this is the first step.