As artificial intelligence (AI) grows more influential, many organizations are scrambling to address the new cybersecurity and data privacy concerns the technology creates, especially as AI moves into cloud systems. Apple is tackling AI's security and privacy challenges head-on with its Private Cloud Compute (PCC) framework.

Apple appears to have solved the problem of offering cloud services without compromising user privacy or adding new layers of vulnerability. The move was necessary because Apple needed cloud infrastructure to run generative AI (genAI) models that demand more processing power than its devices can deliver, all while protecting user privacy, as noted in a Computerworld article.

Apple is opening up the PCC system to security researchers to "deepen their understanding of PCC and independently verify our assertions," the company announced. Apple is also expanding its Apple Security Bounty program.

What does this mean for the future of AI security? Security Intelligence spoke with Ruben Boonen, CNE Capability Development Lead at IBM, to get his perspective on PCC and Apple's approach.

SI: Computerworld reported on this topic, noting Apple's hope that "the combined efforts of the entire infosec community will fortify the defense of AI's future." What is your take on this move?

Boonen: I read the Computerworld piece and looked at Apple's statements about their private cloud. I think what Apple is doing here is really good. It goes beyond what other cloud service providers offer, because Apple is giving visibility into some of the internal components they use and essentially telling the security community: examine this and judge for yourselves how secure it is.

It's also a good move because the AI space keeps growing. Getting generative AI components into mainstream consumer devices, and building enough trust that people are comfortable entrusting their data to AI services, are significant steps forward.

SI: What advantages do you see in Apple's approach to securing AI in the cloud?

Boonen: Other cloud providers do offer strong security guarantees for data stored in their clouds, and many enterprises, including IBM, trust them with their corporate data. But the processes used to secure that data are often opaque to customers; the providers don't explain how it's done. The main difference here is that Apple is providing a transparent environment where users can verify that it is actually secure.


SI: What are some of the drawbacks?

Boonen: Right now, the most capable AI models are very large, which is what makes them so effective. But when vendors want AI on consumer devices, they tend to ship smaller models that can't answer every query well, so those devices fall back on larger cloud-hosted models. That introduces additional risk. Even so, I expect the whole industry to move toward this cloud model for AI. Apple is implementing it now to give consumers confidence in the AI process.

SI: Apple's system doesn't interoperate with other systems and products. How will Apple's work on securing AI in the cloud benefit other systems?

Boonen: They are presenting a blueprint that other providers, such as Microsoft, Google and Amazon, can follow. Mainly, it's a compelling example that may push those providers to adopt a similar strategy and offer comparable testing capabilities to their customers. So I don't think this directly affects other providers, but it nudges them toward greater transparency about their own methods.

It's also worth noting Apple's bug bounty program, which invites researchers to examine the system. Apple's security track record isn't spotless; in the past, they have declined to pay bounties for vulnerabilities the security community discovered. So I'm not sure whether they are purely trying to attract researchers, or whether part of the motivation is to reassure their customers that they are prioritizing security.

All things considered, having read through their extensive design documentation, I think they are doing a good job of addressing the security issues around AI in the cloud.