Tech industry executives, the national security community, advocacy groups, and others across the public and private sectors have urgently called for government action to mitigate AI’s risks concerning privacy, misinformation, discrimination and job displacement. Federal leaders have shown a major appetite to set rules that protect Americans against the tech’s worst dangers, but a sweeping response has yet to emerge.
Biden’s directive aims to promote the safe and responsible deployment of AI with a government-wide strategy. Congressional lawmakers, in the meantime, are still working to craft rules that would establish guardrails while promoting the tech’s potential to drive innovation.
Hiring, Discrimination
According to draft text, the Department of Labor would be directed to examine federal agencies’ support for workers displaced by AI and to write guidelines for federal contractors on preventing discrimination in AI-driven hiring systems – a major concern of civil rights groups and the Biden administration. The White House would also direct the attorney general to coordinate with agencies to ensure implementation and enforcement of existing laws on civil rights violations and discrimination.
The draft order also encourages the Federal Communications Commission to consider using AI to block unwanted robocalls and texts and calls on immigration officials to streamline visa requirements for foreign workers with AI expertise. It would also call on White House officials to convene an AI and technology talent task force for the federal government.
Privacy, Safety
Privacy is also expected to be a key area of focus in the executive order, which will introduce safeguards requiring federal agencies to disclose how they use AI technology to collect or use citizens’ information, according to the draft.
The White House declined to comment.

Focus on Risks
The order is expected to touch on numerous AI risks pertaining to cybersecurity, defence, health, labour, energy, education, public benefits, and other issues under agency jurisdiction, and it stands up a slew of task forces and offices to develop strategies for AI use.
It seeks to crack down on harms posed by generative AI by directing agencies to identify tools to track, authenticate, label and audit AI-generated content, as well as prevent the spread of AI-generated child sexual abuse material and non-consensual intimate visuals of individuals.
The Department of Defense and Department of Homeland Security (DHS) would be directed to develop and deploy AI capabilities to help detect and remediate vulnerabilities in critical US infrastructure and software, according to the draft. DHS would also be responsible for evaluating the potential misuse of AI in the development of biological weapons.
The expected principles follow concerns from civil society and subject-matter experts about the technology’s potential to eliminate certain career paths as well as to create opportunities. According to the draft order, federal agencies must work to prevent unlawful discrimination arising from the use of AI in hiring, an existing top technology priority for the Biden administration.
Underwriting, Financial Products
Within 180 days, the Labor Secretary must publish guidance for federal contractors on non-discrimination in hiring involving AI and other technology-based hiring systems. The Federal Housing Finance Agency and the Consumer Financial Protection Bureau are also compelled to take action where necessary to address bias caused by the use of AI tools in loan underwriting and the sale of other financial products.
The draft EO also sets out protections to ensure that people with disabilities do not receive unequal treatment as a result of the use of AI, including through the use of biometric data such as gaze direction, eye tracking, gait analysis, and hand motions.