Microsoft aims to be transparent about what its AI can and cannot do, publishing documentation and guidance that explain how the technology works and how to use it responsibly. For example, the Transparency Note for Azure OpenAI describes the service's capabilities and limitations and helps customers decide whether it is appropriate for their use case. To ensure that the AI technology in Azure OpenAI is used responsibly and fairly, Microsoft takes several measures:
- Explaining how it works: Microsoft documents how its AI systems operate and what they can and cannot do, providing guidance and training materials to help people use the AI correctly.
- Being honest about limitations: Microsoft is open about the issues that can arise when using its AI, warning about known risks and advising on responsible use.
- Promoting fairness: Microsoft works to ensure the AI does not treat different groups of people unfairly, for example by training models on diverse data and evaluating them for bias.
- Holding users accountable: Customers who use the AI must agree to rules governing acceptable use. Microsoft also monitors how the AI is used and blocks abusive or harmful applications.
- Listening to stakeholders: Microsoft gathers feedback from people who use its AI and collaborates with organizations that advocate for responsible AI.
Together, these measures help ensure that the AI in Azure OpenAI is used in a fair and responsible way.