Apple boarded the AI train later than its competitors, but in several cases it has implemented the AI capabilities found on Google and Samsung phones better than they have.
Some ten or eleven years ago, when the chief scientist behind Siri's development sat down to watch the movie Her for the second time, he tried to understand what it was about Samantha, the film's artificial intelligence character, that made the protagonist fall in love with her without ever seeing her. The answer was clear to him: Samantha's voice was completely natural rather than robotic. And that is what gave Siri its human-sounding ("Terry") voice in iOS 11, released about four years later.
But Samantha was not just a natural voice; she was so intelligent that you believed she could genuinely think, and Siri in iOS 11 was supposed to be more than just a natural voice too, or at least that is what Apple wanted us to believe. In the demo Apple released that year, it showed a normal day in the life of Dwayne Johnson with his best friend, Siri.
While working out and going about his day, Johnson asks Siri to check his calendar and reminders list, hail him a Lyft, read his emails, and pull up photos of clothes he designed from his gallery; finally, we see The Rock in an astronaut suit, floating in space, asking Siri to make a FaceTime call and take a selfie with him.
In almost all of its more or less exaggerated advertising for Siri, Apple tried to present its voice assistant as a constant, useful companion that can handle anything without us having to open an app ourselves. Siri was so important to Apple that Phil Schiller introduced it as "the best feature of the iPhone" at the iPhone 4S unveiling and said that we would soon be able to ask Siri to do our chores for us.
But that "soon" took 13 years, and we still have to wait at least another year to see the "real Siri" shown in the demos: a Siri that gets to know the user better by observing how they interact with the iPhone, making it unnecessary to open many everyday apps ourselves.
For now, what the iOS 18.2 beta of this "smarter" Siri has given us is integration with ChatGPT and a tool called Visual Intelligence, which offers something like Google Lens and ChatGPT image analysis combined. To use Apple's image generators, such as Image Playground, Genmoji, and Image Wand, you need to join a waiting list, which is approved within two or three weeks.
Seen in this light, Apple Intelligence not only joined the AI hype later than its competitors, with almost no new or unique features to offer right now, it is perhaps the most unfinished product Cupertino has ever shipped to its users.
Still, better late than never, and the future of Apple Intelligence looks even more exciting than its current state.
Siri with ChatGPT seasoning
Siri's integration with ChatGPT means that instead of relying on Google to answer complex requests, Siri now hands them to OpenAI's popular chatbot. By default, Siri asks for your permission before each handoff; for faster responses, you can turn this off by disabling "Confirm ChatGPT Requests" in the ChatGPT section of Settings.
The quality of the answers is what you would expect from ChatGPT, and Apple even offers to download the chatbot's app from the Apple Intelligence & Siri section of Settings. The good news is that using ChatGPT here is free and requires no account. If you have a Pro account you can sign in, but if you stay signed out, OpenAI won't be able to save your requests and use them later to train its chatbot.
Siri's real intelligence lies in deciding for itself which requests to answer directly, which to hand to Google, and which to pass to ChatGPT. A question about the weather, for example, is answered by Siri itself; a question about the day's news is usually left to Google; and a request to generate text or an image goes to ChatGPT. If you'd rather skip the guessing, start your question with "Ask ChatGPT" and Siri will go straight to the chatbot.
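Apple has not published how Siri actually makes this decision, but the behavior described above amounts to a simple routing heuristic. The sketch below is purely illustrative: the keyword rules and function name are my assumptions, not real Apple code.

```python
# Hypothetical sketch of Siri's request routing as described in the text.
# The keyword lists are illustrative assumptions, not Apple's actual logic.

def route_request(query: str) -> str:
    """Return which backend a request would plausibly be handed to."""
    q = query.lower().strip()
    # An explicit "Ask ChatGPT" prefix always forces the ChatGPT handoff.
    if q.startswith("ask chatgpt"):
        return "chatgpt"
    # Simple, structured tasks (weather, timers) stay with Siri itself.
    if any(word in q for word in ("weather", "timer", "alarm", "reminder")):
        return "siri"
    # Generative requests (text or images) are passed to ChatGPT.
    if any(word in q for word in ("write", "generate", "draw", "compose")):
        return "chatgpt"
    # Everything else falls back to a Google web search.
    return "google"
```

The point of the sketch is the ordering: an explicit user override is checked first, on-device answers second, and the web search is only the fallback.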
The new Siri feels polished: the glowing halo around the edge of the screen and the keyboard's color shift while you interact with it are eye-catching. More importantly, we no longer need to call out Siri's name; double-tapping the bottom of the screen brings up the keyboard so you can type your request (a feature shy people like me appreciate). But chatbots, even the most famous one, are no longer new, and it is unlikely that Siri's connection to ChatGPT will excite anyone, at least for now.
For the story to get exciting, we have to wait until 2025, when Apple has promised that the "real Siri" will finally free us from juggling different apps.
Visual Intelligence: only for iPhone 16
Apple Intelligence is available only to users of the iPhone 15 Pro and later, and of iPads and MacBooks equipped with M-series chips; but access to Visual Intelligence in iOS 18.2 is even more limited, reserved for the iPhone 16 series because of its Camera Control button.
Visual Intelligence is a mouthful of a name for a feature that is not Apple's invention; we experienced it long ago with Google Lens (Samsung's Circle to Search is in a similar position). Hold down the Camera Control button to open the camera view. Tap Search on the right, and the subject in front of the camera is run through Google Images to find similar results across the web; tap Ask on the left, and ChatGPT steps in to analyze the image for you.
Visual Intelligence's interface is minimal and attractive, showing both the Google search results and ChatGPT's answer as a card over the captured image. After submitting an image to ChatGPT, you can also continue discussing its subject with the chatbot built into Siri. The results are useful in most cases (when you are hunting for the name of a particular plant, for example), but remember that AI is not always reliable: when I photographed a notebook designed to look like an external hard drive, ChatGPT was convinced it was looking at a real hard drive and began reciting its specifications.
Images captured with Visual Intelligence are not saved on the iPhone, and Apple assures us it has no access to them; but if you are signed in to your ChatGPT account, OpenAI will likely store a copy of the image on its servers for analysis.
The one headache with Visual Intelligence is specific to Iranian users (beyond the fact that the feature is limited to the newest iPhones): while Google image search works without changing your IP, analyzing an image with ChatGPT will probably require an IP change. Sometimes that change breaks Google search in turn, and constantly toggling a VPN on and off gets annoying.
And finally: a Magic Eraser for the iPhone
Apple was a latecomer to the AI train, but the absence of a tool for effortlessly removing distracting objects from a photo was felt more keenly than any other AI feature on the iPhone. I remember being genuinely surprised by Magic Eraser's performance when Google first introduced it. The iPhone's Clean Up tool, now built into the Photos app, does exactly the same job, but arriving three years late, the sense of wonder is gone.
That said, Clean Up works more cleanly than the Galaxy's Object Eraser in most cases, and if the thing you erase is small, it is hard to spot the patched-over area in the photo. The tool automatically detects distracting objects and outlines them, and since all the processing happens on the phone itself, it takes no more than a few seconds.
But Clean Up brings nothing new, and most iPhone users who needed such a feature have probably been using Google Photos' Magic Eraser for a long time, especially since it requires neither a new iPhone nor Apple Intelligence.
Artificial intelligence writing tool
The iPhone's AI writing tools (Writing Tools), which now appear as a new option after Copy in Safari and next to the pen icon in the Notes app, are exactly what we have already experienced with Google, Microsoft, and ChatGPT products, with one limitation: they do not support Persian and therefore will not see wide use among Iranian users.
The tool offers a range of options, including proofreading, rewriting, friendly and professional tones, summary, key points, list creation, and table creation. The last four options, along with the "Describe your change" bar that gives users more freedom of action (you can, for example, ask for the text to be rewritten as poetry), are still missing from the iOS 18.2 beta, but they will probably arrive with the public release in December.
If you need English text rewritten, Writing Tools performs reasonably well; however, Apple's language model has a habit of reaching for buzzwords and flashy descriptors like "eye-catching," "unique," or "innovative," even in summaries, which leaves clear traces of AI in the text.
But the most important Writing Tools features are the ones not yet present in the iOS 18.2 beta, and my guess is they will completely transform the note-taking experience in the Notes app.
Apple AI image generator
Like me, you probably first experienced the "magic" of AI image generation with DALL-E, then were stunned by Midjourney's near-photorealistic results. That was two years ago, and few people still get excited watching a few words turn into a work of art. Even so, after two years of delay, Apple has found a way for its AI image generator to have something new to say.
The iPhone's image generators fall into three categories: Image Playground, which, like Google's Pixel Studio, turns text prompts into cartoon images; Image Wand in the Notes app, which, like Samsung's Sketch to Image, turns the user's clumsy sketches into more attractive drawings; and Genmoji, the one genuinely distinctive feature, which creates custom emoji from the user's text prompts.