Today, we are announcing an important settlement with the US Department of Housing and Urban Development (HUD) that will change the way we deliver housing ads to people residing in the US. Specifically, we are building into our ads system a method — referred to in the settlement as the “variance reduction system” — designed to make sure the audience that ends up seeing a housing ad more closely reflects the eligible targeted audience for that ad.
Today’s announcement reflects more than a year of collaboration with HUD to develop a novel use of machine learning technology that will work to ensure the age, gender, and estimated race or ethnicity of a housing ad’s overall audience matches the age, gender, and estimated race or ethnicity mix of the population eligible to see that ad. To protect against discrimination, advertisers running housing ads on our platforms are already limited in the targeting options they can choose from when setting up their campaigns, including restrictions on targeting by age, gender or ZIP code. Our new method builds on that foundation and strives to make additional progress toward a more equitable distribution of ads through our ad delivery process. To implement this change while also taking into account people’s privacy, we will use the privacy-preserving approaches we’re pursuing to measure race and ethnicity at the aggregate level.
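The settlement does not disclose how the variance reduction system actually works, but the core comparison it describes, checking whether an ad's delivered audience mix matches the eligible audience mix at the aggregate level, can be sketched in a few lines. Everything below is illustrative only: the function names, the use of total variation distance as the gap metric, and the tolerance value are assumptions, not details from the announcement.

```python
# Illustrative sketch only: compares the aggregate demographic mix of an
# ad's delivered audience against the mix of the eligible audience.
# The metric (total variation distance) and the 5% tolerance are
# hypothetical choices for demonstration, not the system's actual design.

def total_variation(eligible: dict[str, float], delivered: dict[str, float]) -> float:
    """Half the L1 distance between two demographic distributions.

    Both inputs map a demographic group to its share of the audience
    (shares summing to 1.0). Returns 0.0 when the mixes are identical
    and approaches 1.0 as they diverge completely.
    """
    groups = set(eligible) | set(delivered)
    return 0.5 * sum(abs(eligible.get(g, 0.0) - delivered.get(g, 0.0)) for g in groups)


def needs_adjustment(eligible: dict[str, float],
                     delivered: dict[str, float],
                     tolerance: float = 0.05) -> bool:
    """Flag a campaign's delivery for rebalancing when the gap exceeds a tolerance."""
    return total_variation(eligible, delivered) > tolerance


# Example: eligible audience is 50/30/20 across three age bands, but
# delivery has drifted to 70/20/10, a total variation distance of 0.2.
eligible_mix = {"18-34": 0.5, "35-54": 0.3, "55+": 0.2}
delivered_mix = {"18-34": 0.7, "35-54": 0.2, "55+": 0.1}
print(needs_adjustment(eligible_mix, delivered_mix))  # prints True
```

In practice, the announcement notes the demographic measurements themselves would come from privacy-preserving, aggregate-level estimation rather than individual records, so a real system would operate on noisy aggregate statistics instead of the exact shares used here.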
We’re making this change in part to address feedback we’ve heard from civil rights groups, policymakers and regulators about how our ad system delivers certain categories of personalized ads, especially when it comes to fairness. So while HUD raised concerns about personalized housing ads specifically, we also plan to use this method for ads related to employment and credit. Discrimination in housing, employment and credit is a deep-rooted problem with a long history in the US, and we are committed to broadening opportunities for marginalized communities in these spaces and others.
This type of work is unprecedented in the advertising industry and represents a significant technological advancement for how machine learning is used to deliver personalized ads. We are excited to pioneer this effort, but given the complexity and technical challenges involved, it will take some time to test and implement. We will also engage with external stakeholders throughout the development process so we can hear and integrate their feedback. As we build, we will share both our progress and more details about these changes.
Additionally, we will be sunsetting Special Ad Audiences, a tool that lets advertisers expand their audiences for ad campaigns related to housing, and we are sunsetting it for employment and credit ads as well. In 2019, in addition to eliminating certain targeting options for housing, employment and credit ads, we introduced Special Ad Audiences as an alternative to Lookalike Audiences. But the field of fairness in machine learning is a dynamic and evolving one, and Special Ad Audiences was an early way to address concerns. Now, our focus will move to new approaches to improve fairness, including the method we announced today.
The changes we’re announcing as part of today’s settlement build on the significant progress we’ve already made to advance non-discrimination and fairness in our ads system, which includes:
- Requiring advertisers to certify their compliance with our non-discrimination policy annually.
- Restricting how housing, employment and credit advertisers can create their target audiences. We’ve disallowed the use of gender or age targeting, and required that location targeting have a minimum 15-mile radius. We then expanded these updates to Canada and the EU as part of our global approach to prevent discrimination on our platform.
- Maintaining all active ads related to housing, employment or credit opportunities in our Ad Library across the US, Canada and EU. This gives everyone a chance to see these ads regardless of whether they were in an advertiser’s intended audience.
- Removing ad targeting options people may find sensitive to protect against the potential abuse of our tools across all ads, not just those about housing, employment or credit.
Beyond our advertising system, we continue to pursue work to embed both civil rights considerations and responsible AI into our product development process. You can see some of that work in our civil rights audit progress report published last fall. We know that our progress — both in ads fairness and broader civil rights initiatives — will be determined not just by our commitment to this work, but by concrete changes we make in our products. We look forward to not only building solutions, but participating in and supporting the critical, industry-wide conversations that lie ahead.