How Is Amazon Web Services Ensuring AI Is Securely Handling Our Data?

(AWS Innovate GenerativeAI + Data 2024 Reflection)

I am continuously flabbergasted at the sheer number of options when it comes to technology. Like, how many routers or phones are out there when it’s time to replace the old one? It’s because of all of these options that we have to consider the following:

Where do I save money?

How long until I replace it?

In the end, you will most often run into having to take a risk. To TRUST the developers, to TRUST the manufacturers, to TRUST the sellers. And while we can’t escape risk, we can mitigate it. We can lower the risk by learning more about it and acting responsibly!

This is precisely what Amazon Web Services (AWS) is pushing.

Now, I am not an AWS insider, and this is not a comprehensive recap on AWS’s part. This article is intended both for people who want to learn whether AWS is a safe option for their AI (Artificial Intelligence) apps and tools (at home and in business), and for people who want another perspective on the AWS Innovate 2024: GenerativeAI + Data Conference.
With that in mind, grab your grain of salt and read!

Introduction

At the beginning of their 2024 conference, AWS Director of Enterprise Strategy, Tom Godden, spoke on AWS’s approach to handling DATA with their ML and AI services. And what I would consider the theme of the event came from Godden’s phrasing of AWS’s stance:

Responsible AI

Now, of the sessions that followed, I attended only 7 (out of the 35 they had going). I picked the ones that stood out to me, but also tried to get a general perspective, as some sessions overlapped in concepts. It was these concepts that led me to narrow down AWS’s handling of Generative AI, AI that generates content from user input (or GenAI), with our DATA to three areas:

Transparency

Adaptability
&
Flexibility

Transparency

Now, it comes as no surprise that this is one of the points. Of course you would want to market your business as trustworthy. And the biggest way to do that would be to set out what is expected of you and show how well you are delivering on it.

So how do we take them at their word?

First, we see their claims on their GenAI services. They don’t claim that “nothing bad will happen to your DATA.” In fact, Tom Godden, speaking on behalf of AWS, said that if something DOES happen to your GenAI DATA while it is in AWS’s hands, such as mishandling or a violation of customer DATA privacy, AWS will assist in proving that their services are secure and private.

To my knowledge, AWS says in general that they cannot access a user’s data, as it is encrypted under the user’s account. So, to summarize, AWS is not in control of our DATA, contrary to the popular belief that if you have the hardware, you have the DATA. But, again, can we take their word for it? The next point shares more on GenAI’s DATA usage.
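As a rough illustration of what “encrypted under the user’s account” can look like in practice, here is a minimal sketch of my own (not something shown at the conference) that requires a customer-managed KMS key for everything stored in an S3 bucket. The bucket name and key ARN are placeholders, and it assumes the boto3 SDK with credentials already configured:

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical names for illustration only.
bucket = "my-genai-training-data"
kms_key_arn = "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID"

# Require server-side encryption with a customer-managed KMS key,
# so objects are encrypted at rest under a key the account controls.
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": kms_key_arn,
                },
                "BucketKeyEnabled": True,
            }
        ]
    },
)
```

The hardware is Amazon’s, but the key, and who is allowed to use it, stays with the account owner.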

Adaptability

Now here’s the nitty-gritty, the crux of AWS’s cloud resources and whether or not they are safe.

AWS has done quite a bit to showcase its ethics (compliance) in handling DATA, and even more so as the popularity of ML and GenAI rises.
At the conference sessions, I went to many demonstrations, as my focus was to see the bare-bones code and services used to make GenAI in the cloud possible. In those demos, I got to see many different test applications and proofs of concept. As I am only professionally curious about ML, LLMs, AI, and GenAI, I can only speak from my own knowledge, in the hope that my curiosity puts me in a similar spot to most others.

But to summarize, AWS promotes Governance (controlling who gets privileged access) and security across their infrastructure, their tools, and our applications.

Their bottom layer, the hardware and infrastructure, receives updates and patches and handles all of the tedious, time-consuming parts of security management.

This leaves you with complete access to handle your DATA without worrying about the security of the underlying servers.

AWS also gives you tools for monitoring both their services and your apps.
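To make that Governance-plus-monitoring idea a little more tangible, here is a small sketch of my own (not conference code): a narrowly scoped IAM policy that only allows invoking one specific Bedrock model, followed by a CloudTrail query over recent Bedrock API activity. The policy name, model ARN, and region are placeholders, and it assumes boto3 with sufficient permissions:

```python
import json
from datetime import datetime, timedelta

import boto3

iam = boto3.client("iam")
cloudtrail = boto3.client("cloudtrail")

# Governance: a narrowly scoped policy that only allows invoking one
# specific foundation model, rather than granting blanket Bedrock access.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["bedrock:InvokeModel"],
            "Resource": [
                "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-haiku-20240307-v1:0"
            ],
        }
    ],
}

iam.create_policy(
    PolicyName="InvokeSingleBedrockModelOnly",  # hypothetical name
    PolicyDocument=json.dumps(policy_document),
)

# Monitoring: review the last week of Bedrock management API calls
# recorded by CloudTrail to see who changed or used what.
events = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "EventSource", "AttributeValue": "bedrock.amazonaws.com"}
    ],
    StartTime=datetime.utcnow() - timedelta(days=7),
    EndTime=datetime.utcnow(),
)

for event in events["Events"]:
    print(event["EventTime"], event["EventName"], event.get("Username", "unknown"))
```

As I understand it, model invocations themselves are data events and only appear in CloudTrail if you opt in to logging them, while management calls like the policy change above show up in the default event history. Either way, both access and visibility stay in the account owner’s hands.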

Some examples they shared of how their AI tools pursue security and privacy:

Amazon Bedrock, the Foundation Model (FM) ML template platform, now has Guardrails to keep sensitive and harmful DATA from being either accessed or produced (see the sketch after this list).
Amazon SageMaker, the all-in-one ML model development platform, utilizes Amazon’s in-house services, each with its own encryption, validation, and monitoring solutions (pursuing up-to-date security standards on all fronts).
AWS Glue, DataZone, Lake Formation, Clean Rooms, Aurora, RDS, DynamoDB, etc., to name some similar services used for data storage and sharing (each with its own encryption in transit and at rest).
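To show roughly how the Guardrails piece fits together, here is a minimal sketch of calling a Bedrock model with a guardrail attached. Again, this is my own illustration rather than conference code: the guardrail ID, version, and model ID are placeholders, and it assumes a guardrail has already been created (in the console or via the CreateGuardrail API) and that boto3 is configured with the right permissions.

```python
import boto3

# Hypothetical IDs for illustration; a real guardrail is created and
# versioned in the Bedrock console or via the CreateGuardrail API first.
GUARDRAIL_ID = "gr-1234567890abcdef"
GUARDRAIL_VERSION = "1"
MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"

bedrock_runtime = boto3.client("bedrock-runtime")

# Send a prompt with the guardrail applied, so sensitive or harmful
# content is screened on both the input and the output.
response = bedrock_runtime.converse(
    modelId=MODEL_ID,
    messages=[
        {"role": "user", "content": [{"text": "Summarize our customer records."}]}
    ],
    guardrailConfig={
        "guardrailIdentifier": GUARDRAIL_ID,
        "guardrailVersion": GUARDRAIL_VERSION,
    },
)

print(response["output"]["message"]["content"][0]["text"])
```

With the guardrail attached, Bedrock checks both the prompt and the model’s response against the policies defined in that guardrail before anything reaches the user.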

I would love to go into even more detail on the capabilities of some of these services and how each advancement has its own relative security options, but for the sake of time (and an already long section), there is one more train of thought to consider.

Flexibility

The final point I would like to make springboards from an AWS value:

‘Learn and Be Curious’

They have put into their value system the idea that learning and improving by being curious leads to innovation and challenges their services to become better. Objectively speaking, “better” includes security. So by looking at what AWS is doing, as laid out throughout their Innovate Conference, we can see if they are holding themselves to this. And it applies not only to AWS, but to us as potential users of their tools and services. We should continue to learn and grow so that we are not blindsided by a TOS (terms of service) change or by a lack of development and effort on Amazon’s side.
Funnily enough, AWS promotes this as they give users control over more and more security and privacy features in their ML and AI services and models.

So… What Now?

We have gone over some of the key points of this event. But it would be improper to end without something to take home or put to work.

One way I heard it said in a session:

What do you do on Monday?

Mark Roy, AWS Principal ML Architect

No doubt, putting your data in a place out of your reach means TRUSTING that storage entity. And there isn’t much of a claim you can make against doing so, as there is a good assortment of evidence that AWS has your best interest in mind, since their business needs to stay afloat. But as a community, we all need to pitch in and push for better security, to give AWS an incentive to keep up with our needs. I hope I have shed some light on what I learned about using cloud services, especially Generative AI, from the Innovate Conference: GenAI + DATA 2024.

With all of the hubbub around GenAI, and the legal questions and unethical behavior surrounding it, I thought I could give some peace of mind to those who are worried about their data’s security when it sits on an AWS server.

Because, after all, the goal is to be:

Concerned, not Worried.

Hopefully, this allows you to make an informed decision on whether you can trust Amazon’s ML services with your DATA. And if you are still uncomfortable, that is fine; I wanted to offer some assurance, but the answer will be different for each use case and business need.

So what do you take away from this?
