Carolina News And Reporter Blog

Exploring Vulnerabilities of AI Systems to Online Misinformation


A University of Texas at Arlington researcher is working to increase the security of natural language generation (NLG) systems, such as those used by ChatGPT, to guard against misuse and abuse that could allow the spread of misinformation online.

We need to research potential vulnerabilities of AI systems when they are exposed to online misinformation. Image credit: Jenny Ueberberg via Unsplash, free license

Shirin Nilizadeh, assistant professor in the Department of Computer Science and Engineering, has earned a five-year, $567,609 Faculty Early Career Development Program (CAREER) grant from the National Science Foundation (NSF) for her research.

Shirin Nilizadeh. Image credit: UTA

Understanding the vulnerabilities of artificial intelligence (AI) to online misinformation is “an important and timely problem to address,” she said.

“These systems have complex architectures and are designed to learn from whatever information is on the internet. An adversary might try to poison these systems with a collection of adversarial or false information,” Nilizadeh said.

“The system will learn the adversarial information in the same way it learns truthful information. The adversary can also exploit system vulnerabilities to generate malicious content. We first need to understand the vulnerabilities of these systems to develop detection and prevention techniques that improve their resilience to these attacks.”
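The poisoning idea Nilizadeh describes — a model absorbing false statements exactly as it absorbs true ones — can be illustrated with a deliberately toy sketch. The frequency table below is a stand-in, not a real language model; all names and data here are invented for illustration:

```python
from collections import Counter

def train_claim_model(corpus):
    """Count how often each (subject, claim) pair appears in the corpus.
    A simple frequency table stands in for a learned model: it has no
    notion of truth, only of what it was shown."""
    counts = Counter()
    for subject, claim in corpus:
        counts[(subject, claim)] += 1
    return counts

def answer(model, subject):
    """Return the claim the model saw most often for a subject."""
    candidates = {c: n for (s, c), n in model.items() if s == subject}
    return max(candidates, key=candidates.get)

# Truthful training data.
corpus = [("water", "boils at 100 C")] * 3
# An adversary injects repeated false statements into the training set.
corpus += [("water", "boils at 50 C")] * 5

model = train_claim_model(corpus)
print(answer(model, "water"))  # the poisoned claim wins by sheer repetition
```

Real NLG systems are vastly more complex, but the failure mode is analogous: without a defense, frequently repeated adversarial text shapes the output just as legitimate text does.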

The CAREER Award is the NSF’s most prestigious honor for junior faculty. Recipients are outstanding researchers who are also expected to be outstanding teachers, integrating research and education at their home institutions.

Nilizadeh’s research will include a comprehensive look at the types of attacks that NLG systems are susceptible to and the creation of AI-based optimization methods to examine the systems against different attack models.

She will also conduct an in-depth analysis and characterization of the vulnerabilities that enable these attacks and develop defensive methods to protect NLG systems.

Using machine learning software – artistic impression. Image credit: Mohamed Hassan via Pxhere, CC0 Public Domain

The work will focus on two common natural language generation techniques: summarization and question answering.

In summarization, the AI is given a list of articles and asked to summarize their content. In question answering, the system is given a document, finds answers to questions within it, and generates text answers.
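The input/output shape of these two tasks can be sketched with a toy extractive baseline. This is purely illustrative — the systems in the study use large neural models, and the scoring heuristic below (word overlap) is an assumption made only to keep the example self-contained:

```python
import re

def sentences(text):
    # Naive splitter: break on end-of-sentence punctuation.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def words(text):
    # Lowercased bag of words for crude overlap scoring.
    return set(re.findall(r"[a-z']+", text.lower()))

def summarize(doc, k=1):
    # Extractive summarization: keep the k sentences that cover the most
    # of the document's vocabulary, in their original order.
    sents = sentences(doc)
    ranked = sorted(sents, key=lambda s: len(words(s)), reverse=True)
    keep = set(ranked[:k])
    return " ".join(s for s in sents if s in keep)

def answer(doc, question):
    # Question answering: return the sentence sharing the most words
    # with the question.
    return max(sentences(doc), key=lambda s: len(words(s) & words(question)))

doc = ("NLG systems generate text. Summarization condenses long articles. "
       "Question answering finds relevant passages inside a given document.")
print(answer(doc, "What does summarization do?"))
```

Even this crude baseline shows why both tasks inherit their source text's flaws: the summary and the answer are assembled directly from whatever the input document says, true or not.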

Hong Jiang, chair of the Department of Computer Science and Engineering, underscored the importance of Nilizadeh’s research.

Coding artificial intelligence algorithms – illustrative photo. Image credit: Kevin Ku via Unsplash, free license

“With large language models and text-generation systems revolutionizing how we interact with machines and enabling the development of novel applications for health care, robotics and beyond, serious concerns emerge about how these powerful systems may be misused, manipulated or cause privacy leakages and security threats,” Jiang said.

“It is threats like these that Dr. Nilizadeh’s CAREER Award seeks to defend against by exploring novel methods for enhancing the robustness of such systems so that misuses can be detected and mitigated, and end-users can trust and explain the outcomes generated by the systems.”

Source: University of Texas at Arlington


