At WVSOM, Microsoft 365 Copilot is available for use by faculty, staff and students. Microsoft 365 Copilot is one of the recommended tools for using generative AI within the WVSOM environment. Microsoft 365 Copilot is approved to interact with university internal data; you must be logged in with your WVSOM username and password to ensure you are using WVSOM's Microsoft 365 Copilot service rather than the consumer version of Copilot.
IMPORTANT: Always use your WVSOM account for university business. Personal (free) non-WVSOM Microsoft accounts are not approved for institutional data. Check the account icon in the top right corner of the window to confirm you are signed in with your WVSOM account.
Microsoft 365 Copilot is built on the same large language models that power ChatGPT (such as GPT-4), so you can use it to write and summarize content, generate code and answer complex questions. Microsoft 365 Copilot also has access to current internet data, enabling it to provide up-to-date responses about current events.
Chat history in Microsoft 365 Copilot is visible only to the logged-in user; the University Information Security Office (UISO) can access it only in the case of an IT incident. Microsoft does not view your chats or use them to train the underlying models.
Even if you use generative AI tools for activities that do not share personal or institutional data, you should still check the tool’s output for accuracy. Since these tools have been known to produce inaccurate content (sometimes called “hallucinations”), verify any factual information generated by an AI tool, and make sure to reference the tool as you would any other source. See the section on “Citing AI Usage” for more information.
Microsoft 365 Copilot is WVSOM's preferred generative AI tool because it is covered by commercial data protection, meaning Microsoft does not claim ownership of the data, prompts or outputs you submit or receive.
Go to Microsoft Office or click the Microsoft 365 Copilot app icon on your desktop or taskbar.
Sign in using your WVSOM email address and password. If you have already signed in, you will remain signed in until you sign out. It is good practice to check that you are signed in under your WVSOM account.
AI prompt frameworks are structured approaches to crafting effective prompts for AI models such as Microsoft 365 Copilot and ChatGPT, helping users guide the model toward the desired output. These frameworks provide a structured way to define the AI's role, task, context, expected format and so on, leading to more precise and relevant responses. A basic prompt would have the following components:
Citation: Genspark, July 22, 2024, Best Practices for Writing Gen AI Agent System Prompts/Instructions. Accessed from https://www.genspark.ai/spark/best-practices-for-writing-gen-ai-agent-system-prompts-instructions/60a24f6a-94cb-4380-ad2c-47498ec28bcd
TRACI is one prompting framework that is useful for engineering prompts for GenAI and is consistent with the general components of most prompts. The letters refer to various components of a prompt:
Task refers to the type of output the prompt should achieve. For example, a rubric, learning goals, or syllabus statement.
Role refers to the persona generative AI should take on when responding. For example, an expert educator, a student advisor, or an instructional designer.
Audience refers to the group to whom the response is written. For example, introductory biology students, graduate-level competitive study majors, or Doctor of Pharmacy students.
Create refers to the format of the output. For example, 200 words, a three-column table, or an acrostic poem.
Intent communicates the purpose of the response. For example, to promote a growth mindset or clearly communicate expectations.
Considering the TRACI model will improve initial prompts and provide guidance as prompts are revised.
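To make the framework concrete, the sketch below (a hypothetical helper, not an official WVSOM or Microsoft tool) assembles the five TRACI components into a single prompt string that could be pasted into Microsoft 365 Copilot. All field values are illustrative examples drawn from the list above.

```python
def build_traci_prompt(task, role, audience, create, intent):
    """Combine the five TRACI components into one prompt string.

    This is an illustrative sketch of the TRACI framework; the exact
    wording of each sentence is a stylistic choice, not a requirement.
    """
    return (
        f"Act as {role}. "
        f"Your task is to {task} "
        f"for an audience of {audience}. "
        f"Format the output as {create}. "
        f"The purpose is to {intent}."
    )


# Example values, taken from the component descriptions above.
prompt = build_traci_prompt(
    task="write a syllabus statement on generative AI use",
    role="an expert educator",
    audience="Doctor of Pharmacy students",
    create="a short paragraph of about 100 words",
    intent="clearly communicate expectations",
)
print(prompt)
```

Spelling out each component this way makes it easy to revise a prompt one element at a time, for example swapping the audience or output format while keeping the task and intent fixed.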
Example citations (Microsoft Copilot generated the following response when prompted with “Outline how to write a citation that would include the prompt used in Microsoft Copilot.” Accessed on 2025-07-16.):
1. In-Text Citation (APA Style)
Parenthetical: (Microsoft, 2025)
Narrative: Microsoft (2025) generated the following response when prompted with “Summarize the Geneva Convention in 50 words.”

2. Reference List Entry (APA Style)
Microsoft. (2025). Response to the prompt “Summarize the Geneva Convention in 50 words” [Large language model]. https://copilot.microsoft.com
You can adapt the title to reflect your actual prompt. For example:
Microsoft. (2025). Response to the prompt “Outline how to cite AI tools including the prompt used” [Large language model]. https://copilot.microsoft.com

3. AI Use Disclosure (Recommended)
Include a short statement in your paper (e.g., before the References section):
AI Use Disclosure: I used Microsoft Copilot (https://copilot.microsoft.com) to assist in drafting and refining this paper. The prompt used was: “Outline how to cite AI tools including the prompt used.” The AI-generated content was reviewed and edited for accuracy and clarity.
You can also be more granular with statements as part of an introduction or conclusion, such as: “Microsoft Copilot was used to brainstorm ideas for this essay.” or “Copilot was used to check spelling and grammar.”

Always check with staff or faculty for more specific preferences regarding AI acknowledgment.
Generative AI has potential applications across medical school, health professions education, health care and beyond. However, there are also concerns about the potential misuse of these tools and of any data shared with the services. When you provide information to public versions of these tools, such as queries, student essays, grant proposals, source code, or datasets, treat it as equivalent to posting that information on a public website.
WVSOM encourages the use of generative AI services, as long as no institutional data is submitted to them without approval. Currently, Microsoft 365 Copilot is the only generative AI tool permitted for use with institutional data. Any other system used with official institutional data or information will need to go through review to ensure the necessary contracts and safeguards are in place to protect the data submitted and to ensure the algorithms in use are ethical, transparent and beneficial to the WVSOM community.
Types of institutional or patient/clinical data that should not be submitted to public versions of generative AI tools, even when anonymized, include:
Specific examples that are not appropriate for the public versions of generative AI tools include:
With these precautions in mind, there are numerous ways to use generative AI tools without submitting university data or intellectual property. Submitting general queries that draw only on the AI's own knowledge, rather than on university content, is a good way to engage with these products.
Students should use generative AI in ways that align with university academic integrity policies and communicate with their instructors before using generative AI in their coursework. Courses may elect to further restrict generative AI.
From a data management perspective, examples of acceptable uses of generative AI include:
Instructors can use generative AI to help outline course syllabi and lesson plans, getting suggestions for learning objectives, teaching strategies and assessment methods. Course materials the instructor has authored (such as course notes) may be submitted by the instructor.
Students, faculty or staff may use placeholder information (such as an invented name for the recipient of an email message) to generate drafts of correspondence using AI tools, as long as they are using general queries and do not include institutional data.
Faculty and staff can use AI to draft materials for potential professional development opportunities, including workshops, conferences and online courses related to their field.
AI can assist in drafting event plans, including suggesting themes, activities, timelines and checklists.
AI can help you draft a review, analyze publicly accessible content (for example, proposals, papers and articles) to aid in drafting summaries, or pull together ideas.