Making AI safer with cryptography
Using AI everywhere requires trust. Here's how modern cryptography, such as homomorphic encryption and zero-knowledge proofs, can provide it.
Over the past few months, AI, and in particular generative models, has taken the world by storm. From generating text to images and now videos, it seems like every aspect of our digital lives will soon be augmented with AI. One of the biggest challenges, however, is trust:
trust that people will use AI for good, and not to harm others.
trust that the content we see is real and not AI-generated.
trust that our data and queries remain private, and not visible to the companies running the AI models.
trust that the company offering an AI service is sending us the correct result, and not a fabricated one designed to manipulate us.
The first point is not solvable with technology, as there isn’t a single definition of right or wrong. Rather, it’s a regulation issue, and governments should probably impose their own alignment on models based on their local moral values.
The other three points, however, are definitely solvable with technology, and more specifically with cryptography:
Trusting the content we see is real → cryptographic signatures
I wrote about point 2 before. The gist of what I said is that the only way to trust content online will be to have the author sign it with a cryptographic key, similar to how they sign their crypto transactions with their wallet. While people could still sign fake content they generated, at least they won't be able to impersonate other people, which would already remove most of the risks inherent in deepfakes.
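To make this concrete, here is a minimal sketch of content signing with Ed25519, assuming the pyca/cryptography package. The key names and the message are illustrative, not from any real system:

```python
# Minimal sketch: an author signs content, and anyone can verify authorship
# using the author's public key. Assumes the pyca/cryptography package.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The author's keypair (in practice, this could live in a crypto wallet).
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

content = b"This photo was taken by me."  # hypothetical piece of content
signature = private_key.sign(content)

# Anyone holding the public key can check the content wasn't forged.
try:
    public_key.verify(signature, content)
    print("Signature valid: content comes from the claimed author.")
except InvalidSignature:
    print("Signature invalid: content was tampered with or misattributed.")
```

In such a setup, a verified social media account could publish the author's public key, tying every signature back to a known identity.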
I strongly believe that digital identities will be huge in the coming years, and that crypto wallets will be at the forefront of that. Verified accounts on social media will definitely have a big role as well, which is why Elon Musk is pushing for everyone to have a verified Twitter account. It's also no coincidence that Sam Altman, the CEO of OpenAI, started a company dedicated to identity, Worldcoin.
Trusting that our data remains private → homomorphic encryption
How can we keep our queries private while still using AI services in the cloud? After all, if we want to use a service, surely we need to send some data to it, right? Well, yes, but that data doesn't have to be visible to the company!
Using a technology called Fully Homomorphic Encryption (FHE for short), it is now possible to compute on encrypted data directly, without having to decrypt it first. As a user, you send encrypted data to the server, which processes it blindly and returns a response that is itself encrypted and that you alone can decrypt. From your point of view, nothing changes: you send a query and get a response. But now nobody sees your data: it's encrypted end to end, both in transit and during processing.
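To see the principle in action, here is a toy sketch using the Paillier scheme. Paillier is only additively homomorphic, whereas FHE supports arbitrary computation, and the parameters below are tiny and insecure, but it shows the key idea: a server can combine ciphertexts into an encrypted result without ever seeing the plaintexts.

```python
# Toy Paillier encryption: multiplying ciphertexts adds the plaintexts.
# Parameters are deliberately tiny -- for illustration only, NOT secure.
import random
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

p, q = 293, 433                 # small fixed primes, illustration only
n = p * q
n2 = n * n
g = n + 1                       # standard choice of generator
lam = lcm(p - 1, q - 1)         # Carmichael's lambda(n)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)  # inverse of L(g^lam mod n^2)

def encrypt(m):
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n * mu) % n

a, b = 17, 25
ca, cb = encrypt(a), encrypt(b)
c_sum = (ca * cb) % n2          # the "server" computes this blindly
assert decrypt(c_sum) == a + b  # only the key holder recovers the result
print(decrypt(c_sum))           # 42
```

A real FHE deployment, like the ones Zama builds on top of TFHE, would let the server evaluate an entire neural network on ciphertexts this way.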
My company Zama is leading that space, and we recently published results showing that this will become practical in the medium term. You can even try a demo where we apply a filter to an image using FHE.
Trusting that AI companies don’t manipulate results → zero-knowledge proofs
When you send a query to ChatGPT or another AI service, how do you know the result you get back hasn't been manipulated? How do you know the company actually performed the computation it claims to have performed?
Fortunately, there is a way to solve this: zero-knowledge proofs (ZKPs). The idea is that the server sends back a result alongside a cryptographic proof that the result was obtained by running a specific program. The user making the query can then verify that the response was not manipulated, and was indeed generated by the version of the AI model the company claims to have used.
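Proving the execution of a whole neural network requires heavy machinery (SNARKs), but the core idea of proving something without revealing it fits in a few lines. Here is a toy non-interactive Schnorr proof: the prover convinces a verifier that it knows a secret x behind a public value y, without ever disclosing x. The parameters are deliberately tiny and insecure, purely for illustration:

```python
# Toy non-interactive Schnorr proof (made non-interactive via Fiat-Shamir).
# Illustrative parameters only -- far too small to be secure.
import hashlib
import random

p = 2039                 # safe prime: p = 2q + 1
q = 1019                 # prime order of the subgroup generated by g
g = 4                    # generator of the order-q subgroup

x = random.randrange(1, q)    # prover's secret
y = pow(g, x, p)              # public value: y = g^x mod p

def fiat_shamir(*vals):
    data = ",".join(map(str, vals)).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

# --- Prover: show knowledge of x without revealing it ---
k = random.randrange(1, q)
r = pow(g, k, p)              # commitment
c = fiat_shamir(g, y, r)      # challenge derived by hashing
s = (k + c * x) % q           # response

# --- Verifier: checks the proof (r, s) against the public value y ---
assert pow(g, s, p) == (r * pow(y, c, p)) % p
print("proof verified")
```

ZKML systems apply the same principle, except the statement being proven is the entire forward pass of a model rather than knowledge of a single secret.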
This particular field of using ZK for machine learning is called ZKML, and it is the topic of our second episode of Unit Testing, where I will be discussing it with Daniel from Modulus Labs and Alana from Variant. Click here to register; we only have a dozen spaces left.
It’s time to stop trusting AI blindly. Let’s instead use cryptography to make it authenticated, verifiable and private!
Rand