In the last couple of weeks, the global conversation on AI risks and regulation has shifted significantly. The recurring theme from both the U.S. congressional hearing featuring OpenAI's Sam Altman and the European Union's announcement of the amended AI Act was a call for stronger regulation.
What has surprised some people is the agreement among governments, researchers, and AI developers on the need for regulation. In his testimony to Congress, Sam Altman, the CEO of OpenAI, proposed the creation of a new government body that would issue licenses to developers of large-scale AI models.
He suggested several ways such a body could regulate the sector, including “a combination of licensing and testing requirements,” and said that companies like OpenAI should be audited by an independent third party.
While there is a growing consensus on the potential risks, including the impact on jobs and privacy, there is much less consensus on what regulations should look like or what audits should focus on. At the inaugural Generative AI Summit held by the World Economic Forum, AI experts from industry, research institutes, and government came together to build consensus on how to navigate these emerging regulatory and ethical issues. Two main themes emerged:
The need for responsible and accountable AI auditing
The first step is to update our requirements for businesses developing and applying AI models. This is crucial for rethinking what “responsible innovation” really means. The U.K. has been leading this debate: its government recently offered guidance on AI built around five fundamental principles, including safety, transparency, and fairness. Recent research from Oxford likewise highlights that “LLMs such as ChatGPT bring about an urgent need for an update in our concept of responsibility.”
One of the main drivers behind this push for new responsibilities is the growing difficulty of understanding and verifying the latest generation of AI models. To illustrate the shift, consider “traditional” AI versus large language model (LLM) AI in the context of recommending job candidates.
Suppose a conventional AI model was trained on data in which people of a specific race or gender held most senior positions, and it learned to recommend candidates of that same race or gender for such roles. The good news is that this bias can be uncovered and verified by examining the data used to train the model and its output recommendations.
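As a rough illustration of what such an audit on output recommendations can compute, the sketch below calculates per-group selection rates and impact ratios (each group's selection rate divided by that of the most-selected group), the core metric in NYC AEDT-style bias audits. The group labels and data here are hypothetical.

```python
from collections import Counter

def impact_ratios(outcomes):
    """Compute selection rates and impact ratios per group.

    outcomes: list of (group, selected) pairs, where `selected`
    is True if the tool recommended or advanced the candidate.
    Returns {group: (selection_rate, impact_ratio)}.
    """
    totals = Counter(group for group, _ in outcomes)
    advanced = Counter(group for group, selected in outcomes if selected)
    rates = {g: advanced[g] / totals[g] for g in totals}
    best = max(rates.values())  # rate of the most-selected group
    return {g: (rates[g], rates[g] / best) for g in rates}

# Hypothetical audit data: group A advanced 40 of 100 candidates,
# group B advanced 20 of 100.
data = (
    [("group_a", True)] * 40 + [("group_a", False)] * 60 +
    [("group_b", True)] * 20 + [("group_b", False)] * 80
)
for group, (rate, ratio) in sorted(impact_ratios(data).items()):
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f}")
```

In this made-up example, group B's impact ratio of 0.50 would flag a large disparity; the point is that with access to inputs and outputs, the check is straightforward to run.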
With new LLM-powered AI, this kind of auditing becomes far more difficult, and sometimes impossible: determining quality and bias is no longer straightforward. We often don't know what data a “closed” LLM was trained on, and a casual recommendation could introduce biases or “hallucinations” that are far more subjective.
For instance, if you request ChatGPT to provide a summary of a speech given by a presidential candidate, who will decide if the summary is biased?
Therefore, it is more vital than ever for products that rely on AI recommendations to take on new obligations, such as making recommendations traceable, so that the algorithms behind them can be audited for bias rather than hidden inside opaque LLMs.
This distinction between a “recommendation” and a “decision” is crucial to the new AI rules in HR. For instance, New York City's new AEDT law requires bias audits for technologies that directly affect employment decisions, such as those that automatically decide who gets hired.
The regulatory focus is rapidly expanding beyond how AI makes decisions to how AI is built and used.
Transparency in communicating AI standards to users
This leads to the second major theme: the need for governments to define clearer, broader standards for how AI technologies are built, and for how those standards are communicated to users and employees.
At the recent OpenAI hearing, Christina Montgomery, IBM's chief privacy and trust officer, stressed that we need standards ensuring users know whenever they are interacting with a chatbot. This kind of transparency about how AI is built, along with concern about malicious actors misusing open-source models, is a key element of the latest EU AI Act debate over restricting LLM APIs and open-source models.
How best to govern the rapid spread of new models and technologies will require more debate before the balance between benefits and risks becomes clear. What is becoming apparent, however, is that as AI's potential impact grows, so do the need for standards and regulation and the importance of understanding both the risks and the opportunities.
Impact of AI regulations on HR professionals and business leaders
AI's impact is already being felt by HR departments, which must meet new demands: offering employees opportunities to build their skills, and equipping executives with updated workforce plans and forecasts of the new capabilities needed to adjust business strategy.
At the two recent WEF summits focused on generative AI and the future of work, I spoke with leaders from AI, HR, policy, and academia about an emerging consensus: all businesses must insist on responsible AI implementation and understanding. The WEF's recently released “Future of Jobs Report” finds that 23 percent of jobs are expected to change in the next five years, with 69 million jobs created but 83 million eliminated, a net loss of 14 million jobs.
The report also finds that a majority of workers will need upskilling or reskilling by 2027, yet only half of workers are believed to have access to adequate training opportunities today.