Apple is now deep into user testing of its upcoming ‘Apple Intelligence’, a set of AI features designed to help users with queries across apps on newer Apple products.
Recently, some inquisitive beta testers of the upcoming product found a set of files revealing some unusual methods that Apple is employing in an attempt to dodge some of the typical pitfalls of generative AI.
Apple Intelligence Announcement
Apple Intelligence was announced at the company’s Worldwide Developers Conference (WWDC) and through a simultaneous press release on June 10.
It will be available on newer Apple products and will include a number of AI-driven features, partly developed in partnership with OpenAI, the company behind ChatGPT.
Generative AI
Apple Intelligence will utilize an AI process known as generative AI to help apps respond to users’ queries.
Generative AI is a class of AI technology designed to create human-like outputs, such as images and text, in response to prompts, including questions typed by users.
Issues With Generative AI
Generative AI can seem impressive at first, but it still comes with a number of issues, most notably its so-called ‘hallucinations’.
In the context of AI, hallucinations are false or misleading outputs, such as incorrect facts in a text response.
Blunt Solution
The uncovered files reveal that Apple has a strikingly simple answer to the issues that plague generative AI.
While testing the new features, beta testers found a series of prompts in JSON files that expose the blunt method tech companies are using to try to combat hallucinations.
Just Speak English
The revealed solution, used by Apple and other tech companies, is simply to tell the model in plain English not to make mistakes.
Rather than making the underlying model less mistake-prone, these companies insert simple instructions ahead of the user-facing interaction, directly telling the model not to hallucinate or otherwise make errors.
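The technique described above can be sketched in a few lines. This is a minimal illustration, not Apple’s actual code: the `build_request` function and the message format are assumptions modeled on common chat-style AI APIs, while the prompt text paraphrases instructions reportedly found in the beta files.

```python
# Hypothetical sketch of how a hidden "system" prompt is prepended to a
# user's request before it ever reaches the AI model. The prompt wording
# paraphrases instructions reportedly found in Apple's beta files.

SYSTEM_PROMPT = (
    "You are a helpful mail assistant which can help identify relevant "
    "questions from a given mail and a short reply snippet. "
    "Do not hallucinate. Do not make up factual information."
)

def build_request(user_input: str) -> list[dict]:
    """Combine the hidden instructions with the user's visible input."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},  # hidden from the user
        {"role": "user", "content": user_input},       # what the user typed
    ]

# The user only sees their own message; the anti-hallucination
# instructions ride along invisibly at the front of the request.
messages = build_request("Summarize this email and suggest a reply.")
```

The key design point is that the model itself is unchanged; only the text it receives is padded with plain-English guardrails.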
Define a Role
Some of the hidden prompts are used to define a role for the AI suited to the app in which it operates.
One generative AI assistant, likely intended for use within a mail app, has the prompt: “You are a helpful mail assistant which can help identify relevant questions from a given mail and a short reply snippet.”
Grammatical Errors
In several instances, beta testers found basic grammatical errors in the prompts for some of the Apple Intelligence features.
For example, one prompt reads: “Please write a concise and natural reply by modify the draft response.”
Do Not Hallucinate
The files reveal just how uncomplicated the attempted corrections to generative AI’s common issues really are.
Prompts include “Do not hallucinate,” and “Do not make up factual information,” as well as “You are an expert at summarizing posts.”
Teething Problems
To the layperson, the prompts offer a revealing glimpse into the inner workings of what can otherwise be impenetrably complex computing models.
Observers will continue to be fascinated by how top tech companies, such as Microsoft and Apple, attempt to fix seemingly intractable issues with poorly understood AI technology.