Guidance on Use and Access of Artificial Intelligence (AI)

In the evolving landscape of technology, the West Virginia School of Osteopathic Medicine (“WVSOM”) recognizes the transformative potential of generative Artificial Intelligence (“AI”) tools; however, it is of utmost importance to use these tools in a manner that is constructive, ethical, and aligned with the core values and regulatory frameworks of WVSOM. Therefore, WVSOM provides the following guidance on how CoPilot, ChatGPT, and other AI tools may be used.

These guidelines, which apply to all WVSOM faculty, staff, and students, ensure responsible use of AI tools and align with WVSOM’s existing Institutional Policy GA-31. If you are unsure about something in these guidelines or have suggested enhancements, please contact IT Security.

The use of AI technologies on WVSOM-owned hardware or software is subject to all applicable institutional policies and procedures without exception. It is important to keep these principles in mind when using any technology at WVSOM, including generative AI. These core principles align AI use with the ethical standards and integrity values central to WVSOM and support Institutional Policy GA-31, which identifies the acceptable and unacceptable uses of data and technology at WVSOM.

Faculty, staff, and students are reminded of the following:

  • The use of generative AI is subject to all WVSOM policies and procedures, including those associated with academic integrity and professionalism.
  • AI-generated content can be inaccurate, misleading, entirely fabricated, or may contain copyrighted material. Review AI-generated content for misleading or inaccurate information; the user is responsible for the content created.
  • Do not use AI when working with sensitive, private, restricted, or confidential information, including personally identifiable information and protected health information. Refer to WVSOM’s data risk categorization web page and confirm that any data being used with AI tools is approved for “public” consumption.
  • Consider safeguards against copyright infringement and intellectual property concerns, which may put the user and WVSOM at risk of liability.
  • Contact IT before purchasing or using any new AI technology not previously approved by WVSOM. IT will ensure the use of the technology is compliant with institutional and state intellectual property and procurement policies. The Director of Infrastructure and IT Security will coordinate with the Office of the General Counsel to review the AI tool, whether free or purchased, and the legality of its terms of use, to protect WVSOM and the end user.
  • Maintain trust and transparency to ensure clarity and openness when using AI, particularly in areas affecting decision-making or policy development. Always ask yourself whether a reasonable person would expect to know that you used generative AI to create the product, and explain how you used AI.
  • AI technology cannot be used to give legal advice, prepare legal documents, act in a way that suggests legal expertise or authority, or represent or imply to others that one is entitled to provide legal advice or interpretation. This constitutes the unauthorized practice of law and can result in disciplinary action and civil or criminal penalties. Relying on AI-generated information in a legal context can lead to serious consequences, which may put WVSOM at risk of liability.
  • Embrace continuous learning as these guidelines evolve. WVSOM encourages open dialogue and suggestions for improvement. Refer to these guidelines regularly to stay current with AI developments and institutional needs.

Acceptable use of AI in student curriculum

AI can be of great benefit to medical practice and learning. While AI is a rapidly expanding field, AI in higher education curricula has two main functions: generation and minor editing (e.g., grammar and spellchecking). Concerns for AI related to the WVSOM curriculum will focus on generative AI.

For the purposes of the curriculum, mentions of AI in this guidance will broadly refer to any modality that uses algorithms or programming to generate information related to any material distributed as part of an approved course, outside of human/student development or generation. Examples include, but are not limited to, CoPilot, ChatGPT, Grammarly, and Nuance.

Acceptable uses of AI for any material distributed as part of an approved course include improving the efficiency of note taking or study plan formation, helping generate ideas related to the course, and providing challenges and platforms for problem solving. Unacceptable use includes any use of AI that would bypass learning, provide answers without student engagement or logical reasoning, complete graded assignments without student input, or assist with any summative assessment without course director approval.

Each course in the curriculum is unique in its demands and organization. As such, the course director will designate when it is acceptable to use AI to complete an assignment or assessment in the course. The designation will be highlighted in the syllabus. For all assignments or assessments without this designation, AI should not be used without prior approval by the course director.

Use of AI Tools

The following section includes examples of acceptable and unacceptable uses of AI/machine learning tools. This is not an exhaustive list.

Permitted uses

  • Refining communication messages and creating presentations
  • Analyzing communication patterns for effectiveness
    • Example: Drafting generic email campaigns and suggesting language improvements

Prohibited uses

  • Handling communications that include personally identifiable information (PII) due to privacy concerns
    • Example: Personalizing emails with recipient specific PII

Permitted uses

  • Processing large data sets for extracting insights and trends
  • Enhancing data analysis efficiency and accuracy
    • Example: Analyzing anonymized customer behavior patterns
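Before data sets are shared with an AI tool, direct identifiers should be stripped or pseudonymized. The sketch below is a minimal illustration of that idea; the field names and records are hypothetical, and real de-identification should use salted hashing and a privacy review rather than this bare example.

```python
import hashlib

def anonymize_records(records, id_field="customer_id"):
    """Replace a direct identifier with a one-way hash before analysis.

    Field names here are hypothetical; adapt to your own data. Note that
    unsalted hashes of small identifier spaces can be reversed, so this is
    an illustration, not a complete de-identification procedure.
    """
    anonymized = []
    for record in records:
        safe = dict(record)
        raw_id = str(safe.pop(id_field))
        # Behavior patterns stay linkable across records; identity does not.
        safe["pseudonym"] = hashlib.sha256(raw_id.encode()).hexdigest()[:12]
        anonymized.append(safe)
    return anonymized

sample = [{"customer_id": 1017, "pages_viewed": 4, "purchased": True}]
print(anonymize_records(sample))
```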

Prohibited uses

  • Analyzing identifiable personal information, especially sensitive data
    • Example: Processing data that could reveal individual customer identities

Permitted uses

  • Assisting in the creation and organization of documents
    • Example: Generating inclusive job descriptions

Prohibited uses

  • Situations where the authenticity and originality of the document are critical
    • Example: Legal documents requiring nuanced human understanding

Permitted uses

  • Implementing chatbots for routine customer inquiries
    • Example: Retail website chatbots for instant responses

Prohibited uses

  • Handling complex customer service issues that require empathy and deep understanding
    • Example: Resolving sensitive customer complaints

Permitted uses

  • Predicting equipment failures and scheduling maintenance
    • Example: AI analyzing machine data to predict maintenance needs

Prohibited uses

  • Situations where incorrect predictions could lead to significant safety risks
    • Example: Critical safety systems where human oversight is essential

Permitted uses

  • Customizing marketing efforts based on customer data analysis
    • Example: E-commerce platforms recommending products based on user history

Prohibited uses

  • Marketing strategies requiring deep understanding of complex human behaviors and ethics
    • Example: Personalized advertising that could infringe on privacy or ethics

Prohibited uses

  • Direct handling of sensitive financial data, including budget planning and allocation
    • Example: Making decisions on budget allocations or financial planning

Permitted uses

  • Transcribing meetings, lectures, and other spoken content for record-keeping and accessibility
    • Example: Employing AI to provide real-time transcription of business meetings or academic lectures, making them accessible to a wider audience
      • Note: You must inform everyone in the meeting at the beginning that you will be using AI for these activities so that they can consent to its use and the collection of their information. If anyone does not agree to the use of the tool, do not use it.

Prohibited uses

  • Transcribing meetings where the content is highly confidential or sensitive, and the risk of data breaches or inaccuracies is significant
    • Example: Using AI transcription in closed-door, high-level strategic meetings or in contexts involving sensitive personal information, where human discretion is paramount.

Permitted uses

  • Creation of study materials or guides for personal use only
    • Example: Streaming platforms generating custom playlists or video summaries for individual users
  • Enhancing the accessibility of content through automatic dubbing and subtitling in multiple languages
    • Example: Automatically translating and dubbing a lecture into several languages to reach a global audience

Prohibited uses

  • Generating deepfake audio or video that could be used to impersonate individuals or spread misinformation
    • Example: Creating realistic video clips of public figures saying or doing things they never actually said or did
  • Producing content without proper consideration for copyright, ethical implications, or the potential for harmful misuse
    • Example: Using AI to generate music or videos that closely mimic copyrighted material, or creating content that could be harmful or misleading.

Permitted uses

  • Increase the efficiency of developers and reduce the time for development
    • Example: Use AI to generate the boilerplate code for an API
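The boilerplate use case can be as simple as routine request handling that a developer would otherwise type by hand. The sketch below shows the kind of routing stub an AI tool might draft; the endpoint paths and response fields are hypothetical, and any such generated code still requires developer review before use.

```python
import json

def route(path):
    """Dispatch a GET path to an HTTP status and JSON body.

    A minimal boilerplate sketch of the sort AI tools can draft quickly;
    endpoints shown are illustrative, not part of any real WVSOM system.
    """
    if path == "/health":
        return 200, json.dumps({"status": "ok"})
    return 404, json.dumps({"error": "not found"})

print(route("/health"))   # a simple health-check endpoint
print(route("/missing"))  # unknown paths fall through to 404
```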

Prohibited uses

  • Producing code without considering whether it truly meets the objective or is secure
    • Example: Generating code to identify student enrollment numbers that relies on obsolete data or is built on other people’s mistakes.
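A common security flaw in unreviewed AI-generated code is building database queries by string interpolation, which allows SQL injection. The sketch below contrasts that pattern with the parameterized form a reviewer should insist on; the table and values are hypothetical examples, not real enrollment data.

```python
import sqlite3

# Hypothetical in-memory table standing in for enrollment data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE enrollment (program TEXT, students INTEGER)")
conn.execute("INSERT INTO enrollment VALUES ('DO', 200)")

def count_students_unsafe(program):
    # The pattern AI tools often emit: string interpolation, injectable.
    return conn.execute(
        f"SELECT students FROM enrollment WHERE program = '{program}'"
    ).fetchone()

def count_students_safe(program):
    # Reviewed version: a parameterized query, so input cannot alter the SQL.
    return conn.execute(
        "SELECT students FROM enrollment WHERE program = ?", (program,)
    ).fetchone()

# A crafted input slips past the unsafe version but not the safe one.
print(count_students_unsafe("x' OR '1'='1"))  # leaks a row anyway
print(count_students_safe("x' OR '1'='1"))    # matches nothing
```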