Netflix is a streaming platform used by tens of millions of users worldwide to watch their favorite TV shows and movies. Like many other platforms that provide and suggest video content, Netflix uses an AI-based system to algorithmically determine what users would want to watch. However, when measured against the standards set by Microsoft's Guidelines for Human-AI Interaction, the ethicality of Netflix's deployment of its AI system comes into question. The following ratings are on a scale of 1–5, with a score of 1 being 'Clearly Violated' and 5 being 'Clearly Applied'.
Initially: G1 Make clear what the system can do Score: 3
Netflix utilizes an AI system that uses user actions to influence its recommendations. While it isn't unheard of for discovery-based products to use AI this way, Netflix doesn't directly inform the user of the impact of their actions. For example, a user can give a thumbs-up or thumbs-down to media, but the system will not directly tell them whether that action will impact what they are shown in the future.
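To make the concern concrete, here is a minimal sketch of how explicit feedback could silently update a recommender's internal state with no message back to the user. The per-genre weights and the 0.1 step size are illustrative assumptions, not Netflix's actual implementation:

```python
# Hypothetical sketch: a thumbs rating nudges an internal genre weight.
# Nothing here surfaces the change to the user, which is the G1 issue.

def apply_feedback(genre_weights, genre, thumbs_up, step=0.1):
    """Nudge a genre's weight up (thumbs-up) or down (thumbs-down)."""
    delta = step if thumbs_up else -step
    genre_weights[genre] = genre_weights.get(genre, 0.0) + delta
    return genre_weights

weights = {"comedy": 0.5}
apply_feedback(weights, "comedy", thumbs_up=False)
print(round(weights["comedy"], 2))  # 0.4 -- the user never sees this shift
```

The user's rating changed what they will be shown, but the interface never says so; surfacing even a one-line confirmation would move this toward G1 compliance.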
During Interaction: G4 Show contextually relevant information Score: 4
The shows in the user's feed are selected by the AI based upon what the user has previously watched, added to their "My List," or given a thumbs-up or thumbs-down in the past. The only time Netflix will display a show that a user has thumbs-downed is when they search for terms related to the show.
When Wrong: G12 Remember recent interactions Score: 5
The categories that appear on one's Netflix feed are determined by what they have watched or rated positively. For example, Netflix's AI system will show a section for "comedies" if a user has watched or selected shows or movies in the comedy genre. The system can also tell when a user has accidentally clicked on a show, or dismissed a show after watching for only a few seconds, and does not take those shows into account when calculating which shows a user would want to watch.
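One simple way to discard accidental plays is a minimum-watch-time threshold. The sketch below assumes a 30-second cutoff purely for illustration; Netflix's actual heuristics are not public:

```python
# Hypothetical filter: ignore plays too short to count as deliberate viewing.
MIN_WATCH_SECONDS = 30  # assumed threshold, not a documented Netflix value

def filter_signals(watch_events):
    """Keep only events long enough to treat as intentional views."""
    return [e for e in watch_events if e["seconds_watched"] >= MIN_WATCH_SECONDS]

events = [
    {"title": "Comedy A", "seconds_watched": 1200},
    {"title": "Drama B", "seconds_watched": 4},  # accidental click, dropped
]
print([e["title"] for e in filter_signals(events)])  # ['Comedy A']
```

Filtering like this is what lets the system "remember" the interactions that matter while forgetting misclicks, which is why G12 earns a high score here.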
Over Time: G17 Provide Global Controls Score: 1
Netflix provides users limited control over what the AI system monitors and how it behaves within each feed. The only control users have is the ability to choose whether the AI system organizes the "My List" category or whether they order it manually.
Other Guidelines

Initially: G2 Make clear how well the system can do what it can do Score: 5 for YouTube
YouTube carefully words its various features and messages to denote what the system is able to do. Each category appearing on one's feed is qualified as a "recommended" selection, and when dismissing a recommendation from the AI system, the user receives a message informing them that the algorithm has taken that action into account and will use it to "tune" its behavior in the future.

During Interaction: G6 Mitigate social bias Score: 1 for Tinder
Tinder's AI system uses image-recognition and text-recognition software to improve its matching algorithm. In theory, this would help increase the chances of a user matching with someone they would like personally and physically. However, this way of selecting potential matches means that the user's "preferences" for race, occupation, etc. become the only standards considered.

When Wrong: G8 Support efficient dismissal Score: 1 for iOS Autocorrect
Whenever the Autocorrect function detects what the system thinks is a typo, there are only two ways to dismiss the suggestion: a) retype the word over and over until the AI system learns that you are typing it on purpose, or b) use your cursor to tap the X and dismiss the suggestion. These methods are time-consuming and disruptive, respectively.
Over Time: G16 Convey the consequences of user actions Score: 5 for Facebook
When performing actions to control the content of one's feed, such as selecting "hide post" or "snooze XXXX for 30 days," the user is able to see a description of exactly what their action will do underneath the main prompt. After selecting an option, Facebook sends a message reiterating the consequences of the action.