One of the major focus areas at Amazon Web Services Inc.'s 11th annual re:Invent conference this week is machine learning and artificial intelligence, a focus that comes as businesses look to use the technologies to analyze data and transform their organizations.
At the event, the company announced a new resource called AI Service Cards to improve transparency and advance responsible AI. The use of AI and machine learning has exploded over the past couple of years as businesses use the technologies to improve customer experience, streamline operations, drive innovation and solve some of the planet's biggest problems in climate, healthcare and exploration. With this much at stake, there is tremendous pressure on AI practitioners to ensure that AWS' AI services are developed ethically and that developers use them responsibly.
The “card” itself is a documentation template that provides critical information on the following important factors:
- Fairness and bias, which looks at how a system affects different subpopulations of users, such as those defined by gender and ethnicity.
- Explainability, which provides mechanisms to understand and evaluate the outputs of an AI system.
- Privacy and security, which examines how model data is used, covering privacy and legal considerations as well as protection against theft and exposure.
- Robustness, the mechanisms that ensure an AI system operates reliably.
- Governance, which covers the processes used to define, implement and enforce responsible AI practices within an organization.
- Transparency, which provides information about an AI system so stakeholders can make informed choices about their use of it.
AI Service Cards make responsible AI easier by providing a single place to find information on a service's intended use cases and limitations, the design choices AWS made when building the underlying model, and best practices for deployment and optimization. This is part of AWS' commitment to building AI services that are fair, free of bias, transparent and secure. At launch, AWS has AI Service Cards for an initial set of its AI services, with more to follow.
Each AI Service Card is organized into four sections. The first, basic concepts, provides information on the service itself and the features it includes. The second covers intended use cases and limitations; the third, responsible AI design considerations; and the fourth, guidance on deployment and performance optimization.
AI Service Cards bring a high level of transparency to customers regarding the AI models. AWS goes through a rigorous process when building them, but that process can be largely hidden from customers using the finished product. The cards should be a useful resource that assures customers the AI they are using was built responsibly and with fairness in mind. Over time, AWS will roll out additional Service Cards.
The cards were one of several “responsible AI” announcements made at AWS re:Invent. The company also announced something called “SageMaker Model Cards” to help document customer-built machine learning models. Many businesses use homegrown tools, spreadsheets, or even email to document things like business requirements, model assumptions and observations, and key decisions. This might suffice if the number of models is small, but as they grow in volume and size, this ad hoc approach does not scale and leads to errors.
Amazon SageMaker Model Cards use machine learning algorithms to extract information from the model and automate document creation, tracking items such as input datasets, training results and more. A self-guided questionnaire documents additional information such as performance goals, risk rating, bias information and accuracy. Together, these can help customers improve governance of their own models while ensuring responsible AI.
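To make the idea concrete, here is a minimal sketch of the kind of structured record a model card replaces spreadsheets with. The field names, values and card name below are illustrative assumptions, not AWS' exact content schema; with AWS credentials configured, a card could then be registered through the SageMaker `create_model_card` API, shown in a comment as an untested sketch.

```python
import json

# Hypothetical model card content: the categories mirror what the article
# describes (input data, training observations, risk rating), but every
# field name and value here is an assumption for illustration.
model_card_content = {
    "model_overview": {
        "model_description": "Gradient-boosted churn classifier",
        "model_creator": "data-science-team",  # hypothetical owner
    },
    "intended_uses": {
        "purpose_of_model": "Flag accounts at risk of churn for outreach",
        "risk_rating": "Low",  # assumed values: Low / Medium / High / Unknown
    },
    "training_details": {
        "training_observations": "Trained on 2021-2022 CRM snapshots",
    },
}

# Cards are stored as JSON documents; registering one would look roughly
# like the following (untested sketch, requires AWS credentials):
#
#   import boto3
#   sm = boto3.client("sagemaker")
#   sm.create_model_card(
#       ModelCardName="churn-classifier-card",   # hypothetical name
#       Content=json.dumps(model_card_content),
#       ModelCardStatus="Draft",
#   )
card_json = json.dumps(model_card_content)
```

Keeping this information as one structured, versioned document rather than scattered emails and spreadsheets is what makes the approach scale as the number of models grows.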
Judging by customer feedback, the use of AI and machine learning will continue to grow exponentially. AWS has more experience than perhaps any other company in building models and in their governance, responsible training and use. It's good to see the company turn that experience into customer-facing tools so its customers can benefit from AWS' long history in this area.
Zeus Kerravala is a principal analyst at ZK Research, a division of Kerravala Consulting. He wrote this article for SiliconANGLE.