
ITS Notice

(For staff) Security Risks of Using OpenClaw

To: All Staff and Students
From: Information Technology Services Office
Date: 12 March 2026
Subject: Security Risks of Using Agentic AI Personal Assistant Tools Like OpenClaw

Dear Colleagues and Students,

The Information Technology Services Office has observed an increase in the use of "Agentic AI" tools, such as OpenClaw and similar autonomous assistants, by staff and students. While these tools offer powerful capabilities for coding and productivity, they pose significant security risks when installed on university-owned hardware or used with university credentials.

What is an Agentic AI?

Unlike standard chatbots (which only provide text responses within a browser window), agentic AI assistants are designed to act autonomously: they interact with the desktop operating system to execute commands, automate processes, and complete multi-step workflows. To function effectively, tools like OpenClaw often require full system access, which allows the generated code to:
- Read, write, and delete files.
- Execute commands in the terminal/command prompt.
- Access your web browser and saved passwords.
- Call your specific cloud-based AI model for inferencing using your personal subscription key (consuming your subscribed tokens).
- Communicate with external servers and websites.
- Access your social media accounts such as WhatsApp (if you link your WhatsApp to the desktop device on which you installed the agentic personal assistant tool).

Key Security Risks

Unintended System Damage
AI models can "hallucinate" or misunderstand complex instructions. If an agentic AI is given full system access, a single misinterpreted prompt could lead to the accidental deletion of critical research data or system files, or the unintended formatting of drives.

Data Privacy and Exfiltration
Granting an AI access to your full file system (local as well as network drives) may expose sensitive university data, including student records, proprietary research, and intellectual property. Many open-source or third-party AI agents transmit data to external servers for processing, which may violate the PCPD Privacy Policy.

"Prompt Injection" Attacks
If an agentic AI is tasked with reading an external file or website, that file or website could contain a malicious prompt designed to hijack the AI. This could trick the AI into using its system-level permissions to install malware, steal your credentials, or send your files to a third party without your knowledge.

Unvetted or Malicious Skills Published on ClawHub
This public skills registry does not undergo formal security review, so there is a risk that some skills may contain vulnerabilities, or may be intentionally designed for harmful purposes. Installing a skill is effectively installing code that runs on your machine; when combined with system-level permissions, a malicious or poorly written skill could significantly increase the risk of compromise (e.g., disabling antivirus and security software, stealing browser sessions, cookies, and passwords, or exfiltrating sensitive information such as documents on department shared drives).

Accountable Use & Compliance
University IT security policy requires that all software installed on university machines be vetted for security compliance. Using unapproved tools that bypass standard security protocols puts both your personal data and the university network at risk.

Guidelines for Staff and Students

- Do Not Grant Full System Access on Personal Devices: Avoid installing any AI tool that requires "Root," "Administrator," or "Full Disk Access" permissions on your home or personal device. Agentic AI personal assistants such as OpenClaw should not be used on personal devices to access, store, or process university data or other institutional records.
- OpenClaw Should Not Be Used on University Machines: OpenClaw operates with deep access to the machine's operating system and can install publicly shared skills (code); its use therefore presents elevated risks to university systems and institutional data.
- Protection of University Credentials and API Keys: Never store university credentials or API keys in plain-text configuration files that can be accessed by OpenClaw, as malicious skills could exploit them to access university data or steal subscription credits.
- Sandboxing: If you are conducting research on agentic AI, these tools must be run in an isolated, non-networked virtual machine or a dedicated laboratory environment, never on a machine containing sensitive university data.
- Liability in Using an AI Personal Assistant: The use of an AI personal assistant may introduce legal, ethical, and operational liabilities for individuals and organisations. While AI assistants can improve efficiency and decision-making, they do not replace human judgment and accountability. Responsibility for actions taken based on AI-generated outputs ultimately remains with the user.
- Report Concerns: If you believe a university system has been compromised by an AI tool, please contact the ITS Helpdesk immediately.

Tools that operate at the operating-system level require stronger security controls and review before they can be approved for university use. ITS will continue to monitor developments in this space and will reassess as the technology matures and stronger safeguards become available. Our position is aligned with other institutions that have taken similar steps after evaluating agent-based AI tools. Thank you for your attention.

For further information, please contact the IT HelpCentre via the following channels:
ITS Hotline: 2766 5900
ITS WhatsApp/WeChat: 6577 9669
Check your case progress: IT Online ServiceDesk

Information Technology Services Office
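The credential-handling guideline above can be illustrated with a short sketch. This is a minimal, illustrative example only, not university-endorsed code, and the environment variable name OPENCLAW_API_KEY is a hypothetical placeholder: the key is read from an environment variable (or, better still, an OS credential manager) so it never sits in a plain-text configuration file that an agent skill scanning the disk could harvest.

```python
import os


def load_api_key(env_var: str = "OPENCLAW_API_KEY") -> str:
    """Read an API key from an environment variable instead of a
    plain-text config file, so the key is not left on disk where a
    malicious or poorly written agent skill could read it."""
    key = os.environ.get(env_var)
    if not key:
        # Fail loudly rather than fall back to a config file on disk.
        raise RuntimeError(
            f"{env_var} is not set; export it in your shell session "
            "or store the key in your operating system's credential manager."
        )
    return key
```

Setting the variable per shell session (e.g. `export OPENCLAW_API_KEY=...` before launching the tool) keeps the key out of files that survive on disk, though a process with full system access can still read its own environment, which is one more reason such tools should only run in sandboxed environments.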

12 Mar, 2026

Cyber Security

(For student) Security Risks of Using OpenClaw

This notice contains the same content as the staff notice "(For staff) Security Risks on using OpenClaw" above.

12 Mar, 2026

Cyber Security


New Issue of eNewsletter (February 2026) Released

The February 2026 issue of the eNewsletter has been released. Check it out now.

27 Feb, 2026

General

(For staff) LEARN@PolyU (理學網), MS Teams, Zoom Channels, and Cloud Recordings: Aged Course Material Housekeeping

Dear Colleagues,

To ensure compliance with copyright law, course materials uploaded to the LEARN@PolyU (理學網), MS Teams, and Zoom Channels platforms will be retained for no longer than 12 months.

Course content associated with Semester 2 of the 2024/25 academic year, including materials on LEARN@PolyU (理學網), MS Teams, and Zoom Channels, and cloud recordings in both Zoom and MS Teams, will be removed on 5 February 2026.

If you wish to retain any course materials from Semester 2, 2024/25, please download and archive your files before 5 February 2026. Please refer to the following guides for instructions:
- How to archive a Blackboard course
- How to download video from the Video Content Management System (uRewind)
- Turnitin LTI Instructor Guide
- How to download Zoom recordings
- Course archive and removal schedule for LEARN@PolyU (理學網), MS Teams and Zoom Channels

For further information, please contact the IT HelpCentre via the following channels:
ITS Hotline: 2766 5900
ITS WhatsApp/WeChat: 6577 9669
Check your case progress: IT Online ServiceDesk

Information Technology Services Office

29 Jan, 2026

Teaching and Learning

LEARN@PolyU (理學網), MS Teams & Zoom Channels: Aged Course Material Housekeeping

To: All Students
From: Information Technology Services Office
Date: 29 January 2026
Subject: LEARN@PolyU (理學網), MS Teams & Zoom Channels: Aged Course Material Housekeeping

Dear Students,

To ensure compliance with copyright law, course materials uploaded to LEARN@PolyU (理學網), MS Teams, and Zoom Channels will be retained for a maximum period of 12 months.

Course content for Semester 2 of the 2024/25 academic year will be removed from the LEARN@PolyU (理學網), MS Teams, and Zoom Channels platforms on 5 February 2026. After this date, the course materials associated with Semester 2, 2024/25 will no longer be accessible.

Please back up your course content and assignments for study purposes if you find it appropriate. Details about the removal schedule for the 2024/25 academic year can be found here.

For further information, please contact the IT HelpCentre via the following channels:
ITS Hotline: 2766 5900
ITS WhatsApp/WeChat: 6577 9669
Check your case progress: IT Online ServiceDesk

Information Technology Services Office

29 Jan, 2026

Teaching and Learning


New Issue of eNewsletter (January 2026) Released

The January 2026 issue of the eNewsletter has been released. Check it out now.

28 Jan, 2026

General


New Issue of eNewsletter (December 2025) Released

The December 2025 issue of the eNewsletter has been released. Check it out now.  

28 Dec, 2025

General


New Issue of eNewsletter (November 2025) Released

The November 2025 issue of the eNewsletter has been released. Check it out now.  

28 Nov, 2025

General


New Issue of eNewsletter (October 2025) Released

The October 2025 issue of the eNewsletter has been released. Check it out now.  

29 Oct, 2025

General


New Issue of eNewsletter (September 2025) Released

The September 2025 issue of the eNewsletter has been released. Check it out now.  

29 Sep, 2025

General
