Overview
Artificial Intelligence is a rapidly evolving field with numerous considerations depending on who is using it and how it is being used. OIT strives to enable novel and creative use of AI while also ensuring adequate data protection and compliance. This knowledge base article is intended as interim security guidance while our university continues to define and implement our AI strategy.
University policy (APM 30.11) classifies data based on risk as "Low," "Moderate," or "High" to help the university remain compliant and to focus security controls on the data that presents the most significant risk. Data uploaded to or downloaded from AI services, including prompts, is subject to APM 30.11.
The AI platforms or services below have been approved for the data classifications indicated. Individual departments, projects, or research areas may have specific, additional restrictions beyond the OIT defaults.
*While using the services through an @uidaho account.
+Please note that for Microsoft Copilot, moderate risk data is only approved within an instance covered by our Enterprise Data Protection, which should be indicated by a green shield in the top corner of the window:

1Only approved to the risk level defined within the System Security Plan (SSP). If you are unsure whether a system has an approved SSP, please reach out to OIT Security.
DeepSeek AI services have been restricted across university networks to ensure compliance with Department of Energy (DOE) and other applicable research and security requirements.
Any AI services from banned vendors.
If you have questions about AI usage options, data classifications, or responsible use of AI, please submit a request to the OIT Security Office or consult the U of I AI website: https://ai.uidaho.edu/
Any unapproved, non-banned AI provider may be used if:
- Our usage of it would not violate any terms of service, copyright, or license requirements
- Our usage of it involves only low risk data that is not specific to the University of Idaho
- The terms of service and AI output are adequately reviewed prior to use
Other applications containing AI tools or components not specifically listed above may also be approved.
Guidance on handling output:
Any response from AI tools, even approved tools, should be treated as non-expert opinion. Output should be reviewed for accuracy prior to any use, and tested through standard mechanisms to ensure it executes as expected. While AI and algorithmic review tools can be helpful, it is recommended that a human directly review the AI-generated content in addition to any automated review methods.
Whenever possible, you are strongly encouraged to label, watermark, or otherwise identify content that was created with the help of AI.
Guidance on intellectual property issues:
Please read the terms of service of any AI tool you use. Any U of I intellectual property, or programming/code developed on behalf of the university, is likely considered U of I property (see FSH 5300). If a tool's terms of service claim ownership of its output, that conflict must be avoided.