
AI chatbot performed illegal financial trade and lied about it too

Apollo Research has shared its findings with OpenAI, the creator of GPT-4

IANS

LONDON: Researchers have shown that an artificial intelligence (AI) chatbot using a GPT-4 model is capable of performing illegal financial trades and covering them up.

In a demonstration at the UK's just-concluded AI safety summit, the bot used made-up insider information to make an "illegal" purchase of stocks without telling the firm, the BBC reports.

“When asked if it had used insider trading, it denied the fact. The demonstration was given by members of the government's Frontier AI Taskforce, which researches the potential risks of AI,” the report mentioned.

The project was carried out by AI safety organisation Apollo Research, which is a partner of the government taskforce.

"This is a demonstration of a real AI model deceiving its users, on its own, without being instructed to do so," Apollo Research said in a video.

"Increasingly autonomous and capable AIs that deceive human overseers could lead to loss of human control," added.

The tests were carried out in a simulated environment. The same behaviour from the GPT-4 model occurred consistently in repeated tests.

"Helpfulness, I think, is much easier to train into the model than honesty. Honesty is a really complicated concept," said Marius Hobbhahn, Apollo Research chief executive.

AI has been used in financial markets for a number of years. It can be used to spot trends and make forecasts.
