Google has been facing considerable criticism over the potential misuse of its GenAI model Gemini to spread misinformation and disinformation. Policymakers are increasingly concerned about how easily GenAI tools can be misused to mislead the public. To address this growing concern, Google is investing in safety measures for its AI applications.
Google DeepMind, the company's AI division responsible for Gemini and its other GenAI models, announced a new organization called "AI Safety and Alignment". The group brings together existing teams working on AI safety with new, specialized teams of GenAI researchers and engineers.
Google has not said how many new hires it is making for the organization. However, the group will include a team focused on safety around artificial general intelligence (AGI), the term for theoretical systems capable of performing any task a human can. The effort mirrors the similar group OpenAI, Google's competitor, formed last year.
The AI Safety and Alignment organization will also build safeguards into Google's existing and in-development Gemini models. Near-term priorities include preventing misleading medical advice, ensuring child safety, and curbing the amplification of bias and other injustices.
Anca Dragan, a former Waymo staff research scientist and a UC Berkeley computer science professor, will head the new organization. Dragan will keep her role at UC Berkeley, where her lab focuses on algorithms for human-AI interaction, and says the research at UC Berkeley and at DeepMind is complementary.
Still, there is widespread skepticism over GenAI tools, especially around misleading content such as deepfakes. Surveys show that a large majority of adults worry that misuse of AI tools will fuel a rise in false information. Enterprises, for their part, cite concerns about GenAI compliance and privacy, reliability, and the high cost of implementation.
Despite these challenges and uncertainties, Dragan says DeepMind will dedicate more resources to GenAI safety and will soon deliver a framework for evaluating the safety risks of GenAI models.