US Congress restricts employees from using ChatGPT to protect privacy; rise in crypto scams expected to accompany AI trend


The U.S. House of Representatives announced in a memo on the 26th that congressional office staff are restricted to using only the paid version of ChatGPT, in order to avoid the risk of exposing confidential information. ChatGPT has reportedly been banned by several countries and well-known companies over privacy concerns. Meanwhile, the rapid advancement of AI-related technologies may also be fueling an increase in cryptocurrency-related crime.

Congressional Staff Restricted to Using Paid ChatGPT Plus

According to Decrypt's report, a written notice within Congress announced that all offices are now limited to using the paid subscription version of the chatbot, the $20-per-month ChatGPT Plus, which offers access to more powerful models such as GPT-3.5 and GPT-4.

Catherine L. Szpindor, the Chief Administrative Officer of the House of Representatives, explained in a statement to Axios that ChatGPT Plus provides the necessary privacy features required by the House of Representatives, in contrast to the free version.

Departments Sequentially Prohibit the Use of ChatGPT

In April, OpenAI, the developer of ChatGPT, introduced new privacy features. Even so, ChatGPT has already been explicitly banned in governmental and corporate settings by several countries, including Russia, China, and Iran, as well as by companies such as Apple and Samsung.

AI Craze Could Lead to Increase in Cryptocurrency Scams?

According to Cointelegraph's report, the AI craze might not only pose privacy risks but could also lead to an increase in cryptocurrency-related scams.

Scammers may promote scam tokens on social media, rapidly acquire large numbers of fake followers, and then use AI language models to simulate a seemingly loyal fan base. By having these fake accounts interact with one another, scammers create an illusion of credibility and popularity to attract unwitting investors.

Furthermore, scammers may use AI-trained chatbots or virtual assistants to interact with the public, offering investment advice and promoting scam tokens or initial coin offerings (ICOs) that promise unreasonably high returns.

Writer Felix Roemer noted:

The danger of AI scams lies in their ability to mimic human conversation to the point where it's nearly indistinguishable from real interactions, leading individuals to trust them.

Previously, Meta disclosed in a security report that it had found hundreds of malicious browser extensions related to ChatGPT. These extensions falsely claim to provide auxiliary tools for ChatGPT users while actually compromising users' devices to steal personal information, and their numbers have been rising significantly in recent months.