(For student) Security Risks on using OpenClaw
To: All Staff and Students
From: Information Technology Services Office
Date: 12 March 2026
Subject: Security Risks of Using Agentic AI Personal Assistant Tools Like OpenClaw
Dear Colleagues and Students
Information Technology Services Office has observed an increase in the use of "Agentic AI" tools, such as OpenClaw and similar autonomous assistants by staff and students. While these tools offer powerful capabilities for coding and productivity, they pose significant security risks when installed on university-owned hardware or used with university credentials.
What is an Agentic AI?
Unlike standard chatbots (which only provide text responses within a browser window), agentic AI assistants are designed to act autonomously, interact with the desktop operating system to execute commands, automate processes and complete multi-step workflows. To function effectively, tools like OpenClaw often require full system access, which allows the generated code to:
Read, write, and delete files.
Execute commands in the terminal/ command prompt.
Access your web browser and saved passwords.
Call your specific cloud-based AI model for inferencing using your personal subscription key (consuming your subscribed tokens).
Communicate with external servers and websites.
Access your social media accounts such as WhatsApp (if you link your WhatsApp to the desktop device on which you installed the agentic personal assistant tool).
Key Security Risks
Unintended System Damage
AI models can "hallucinate" or misunderstand complex instructions. If an agentic AI is given full system access, a single misinterpreted prompt could lead to the accidental deletion of critical research data, system files, or the unintended formatting of drives.
Data Privacy and Exfiltration
Granting an AI access to your full file system (local as well as network drive) may expose sensitive university data, including student records, proprietary research, and intellectual property. Many open-source or third-party AI agents transmit data to external servers for processing, which may violate the PCPD Privacy Policy.
"Prompt Injection" Attacks
If an agentic AI is tasked with reading an external file or website, that file or website could contain a "malicious prompt" designed to hijack the AI. This could trick the AI into using its system-level permissions to install malware, steal your credentials, or send your files to a third party without your knowledge.
Unvetted or Malicious Skills Published on ClawHub
This public skills registry does not undergo formal security review, there is a risk that some skills may contain vulnerabilities – or be intentionally designed for harmful purposes. Installing a skill is effectively installing code that runs on your machine; when combined with system-level permissions, a malicious or poorly written skill could significantly increase the risk of compromise (e.g., disabling antivirus and security software, stealing browser sessions, cookies and passwords, or exfiltrating sensitive information such as documents on department shared drives).
Accountable Use & Compliance
University IT security policy requires that all software installed on university machines be vetted for security compliance. Using unapproved tools that bypass standard security protocols puts both your personal data and the university network at risk.
Guidelines for Staff and Students
Do Not Grant Full System Access on Personal Devices: Avoid installing any AI tool that requires "Root," "Administrator," or "Full Disk Access" permissions on your home or personal device. Agentic AI personal assistants such as OpenClaw should not be used on personal devices to access, store or process university data or other institutional records.
OpenClaw Should Not Be Used on University Machines: OpenClaw operates with deep access to the machine’s operating system and can install publicly shared skills (codes) and, its use presents elevated risks to university systems and institutional data.
Protection of University Credentials and API Keys: Never store any university credentials or API keys in plain text configuration files that can be accessed by OpenClaw, as malicious skills could exploit them to access university data or steal subscription credits.
Sandboxing: If you are conducting research on agentic AI, these tools must be run in an isolated, non-networked virtual machine or a dedicated laboratory environment, never on a machine containing sensitive university data.
Liability in Using an AI Personal Assistant: The use of an AI personal assistant may introduce various legal, ethical, and operational liabilities for individuals and organizations. While AI assistants can improve efficiency and decision-making, they do not replace human judgment and accountability. Responsibility for actions taken based on AI-generated outputs ultimately remains with the users.
Report Concerns: If you believe a university system has been compromised by an AI tool, please contact the ITS Helpdesk immediately.
Tools that operate at the operating system level require stronger security controls and review before they can be approved for university use.
ITS will continue to monitor developments in this space and reassess as technology matures and stronger safeguards become available. Our position is aligned with other institutions that have taken similar steps after evaluating agent-based AI tools.
Thank you for your attention.
For further information, please contact IT HelpCentre via the following channels:
ITS Hotline: 2766 5900
ITS WhatsApp/ WeChat: 6577 9669
Check your case progress: IT Online ServiceDesk
Information Technology Services Office