ISLAMABAD, JULY 18: The Cabinet Division has raised concerns about sensitive data being stolen through ChatGPT and other AI applications, issuing an advisory that urges cautious use of these technologies to guard private data, personal information, and company details against cyber threats.

The advisory emphasizes that, to reduce the danger of data breaches, people who handle highly sensitive information should refrain from using chatbots. To further protect communications, the Cabinet Division recommends manually erasing sensitive information rather than relying on automated procedures.

One of the advisory's main recommendations is to interact with chatbots only on devices that hold no official or private data. It also advises institutions to restrict chatbot access to authorized personnel, a step intended to regulate the use of AI applications and ensure these tools are available only to those with the necessary clearance.

The Cabinet Division further suggests adopting safe communication channels to prevent unauthorized use of AI applications. To preserve data integrity, this includes using secure network connections and encrypted communication services.

The advisory also stresses the importance of educating staff to exercise caution when using AI apps, proposing that employees be trained on the risks and best practices associated with AI tools to help reduce the likelihood of cyberattacks.

Finally, the advisory underscores the need to ensure that staff do not share private information through AI apps. By following these measures, organizations can strengthen their data security procedures and mitigate the cyber risks that accompany AI adoption.
