
President Biden launches blueprint for an Artificial Intelligence Bill of Rights

By Ferraiuoli LLC - 01/11/2022

On October 4, 2022, the White House Office of Science and Technology Policy released the Blueprint for an Artificial Intelligence (“AI”) Bill of Rights. Prompted by the negative impacts that the implementation of AI has had (such as limited credit opportunities, biased hiring algorithms, discrimination, and privacy violations), the AI Bill of Rights is intended to protect civil rights and promote democratic values in the deployment and governance of automated systems.

Aware of the constant and rapid evolution of AI, the Blueprint uses a two-part test to determine which systems fall within its scope. Hence, the AI Bill of Rights applies to: (1) automated systems that (2) have the potential to meaningfully impact the American public’s rights, opportunities, or access to critical resources or services. In essence, the Blueprint for an AI Bill of Rights is a set of five (5) principles and associated practices to help guide the design, use, and deployment of automated systems so as to protect the rights of the public.

Safe and effective systems: Automated systems should be developed with consultation from diverse communities, stakeholders, and domain experts to identify concerns, risks, and potential impacts of the system. Systems should undergo pre-deployment testing, risk identification and mitigation, and ongoing monitoring that demonstrate they are safe and effective based on their intended use, mitigation of unsafe outcomes including those beyond the intended use, and adherence to domain-specific standards.

Algorithmic discrimination protections: Algorithmic discrimination occurs when automated systems contribute to unjustified different treatment of, or impacts disfavoring, people based on their race, color, ethnicity, sex (including pregnancy, gender identity, and sexual orientation), religion, age, or national origin, among other characteristics. Depending on the specific circumstances, such algorithmic discrimination may violate legal protections. Designers, developers, and deployers of automated systems should take proactive and continuous measures to protect individuals and communities from algorithmic discrimination.

Data privacy: Designers, developers, and deployers of automated systems should seek the public’s permission and respect their decisions regarding the collection, use, access, transfer, and deletion of their data. Any consent request should be brief, understandable in plain language, and give people control over data collection and the specific context of use; current hard-to-understand notice-and-choice practices for broad uses of data should be changed.

Notice and explanations: Designers, developers, and deployers of automated systems should provide generally accessible, plain-language documentation, including clear descriptions of overall system functioning and the role automation plays. Such notice should be kept up to date, and people impacted by the system should be notified of significant changes in use cases or key functionality.

Human alternatives, consideration, and fallback: Individuals should be able to opt out of automated systems in favor of a human alternative, where appropriate. Automated systems intended for use in sensitive domains, including, but not limited to, criminal justice, employment, education, and health, should additionally be tailored to their purpose, provide meaningful access for oversight, include training for any people interacting with the system, and incorporate human consideration for adverse or high-risk decisions.

The above-mentioned guidelines are non-binding and will be followed on an opt-in basis. The Blueprint does not constitute binding guidance for the public or Federal agencies and therefore does not require compliance with the principles described herein. Future sector-specific guidance will likely be necessary to govern the use of automated systems in certain settings, such as AI systems used as part of school building security or automated health diagnostic systems.


Ferraiuoli LLC

Ferraiuoli LLC (FLLC) was founded in 2003 by the late Blas Ferraiuoli-Martínez, Eugenio Torres-Oyola, and María Marchand-Sánchez. They were joined in 2004 by Fernando J. Rovira-Rullán, completing the founding core of FLLC. Since its founding, FLLC has grown rapidly, from a firm of three attorneys and a support staff of three to its current size of 54 attorneys and a support staff of 38. It has likewise evolved from an intellectual property and corporate law boutique into a multiservice law firm that handles most matters relevant to a business, while continuing to earn praise for its leading intellectual property and corporate practices.

FLLC has been ranked as a leading law firm in Puerto Rico by the professional publication Chambers Latin America in intellectual property, corporate, bankruptcy, labor & employment, real estate, and tax law. Moreover, 17 FLLC partners have been ranked as leaders in their fields by the same publication. Four FLLC partners are ranked as leaders in Intellectual Property; no other firm has more than two. Such recognition in so short a period is a tribute to FLLC’s business model.

FLLC prides itself on doing its work faster and more cost-efficiently than its main competitors, with the same quality. The founding name partners are available at all times to attend to client matters, and their work ethic sets the tone for the rest of the firm. The founders’ goal has been steady from the outset: to become one of the premier multiservice law firms in Puerto Rico.