As artificial intelligence (“AI”) tools become more widely available and commonly used across higher education and the workplace, it is important to understand both their potential and their limitations – particularly when it comes to legal or policy-related topics.
While AI tools can be useful for drafting documents, performing research, or generating ideas, they do not replace professional legal advice. AI-generated responses depend on the data sets on which a model was trained and may produce outdated, incorrect, incomplete, or misleading information. Relying on such information in a legal context can lead to serious consequences and may put WVSOM at risk of liability.
The unauthorized practice of law (“UPL”) occurs when a person – without a law license – gives legal advice, prepares legal documents, acts in a way that suggests legal expertise or authority, or represents or implies to others that they are entitled to provide legal advice or interpretation. Under state law, only individuals who are licensed to practice law may do these things.
Recently, the Office of General Counsel has encountered situations where AI-generated feedback or language was used in ways that could be perceived as the unauthorized practice of law. While we genuinely believe these situations are rarely intentional, they underscore how easily the line between casual use of AI and the unauthorized practice of law can be crossed – even unintentionally.
Using, sharing, or relying on AI-generated content that interprets laws, regulations, contracts, or institutional policies, for example, may unintentionally fall into this category and could be considered UPL.
Violations of UPL statutes can result in disciplinary action and, in some cases, civil or criminal penalties as set forth in the W.Va. Code.
So, you may be asking, “Why is this a big deal?” AI is an exciting tool for increasing productivity and automating tasks (we have all seen that firsthand). However, in the legal realm, AI has received mixed reviews. AI systems are still developing and can generate something called “hallucinations.”
AI hallucinations are responses or outputs that are inaccurate, false, or nonsensical, often with little or no indication that the platform has hallucinated. A hallucinated answer can be delivered with the same speed and confidence as any other response. Heavy reliance on AI-generated answers without human review can have unintended consequences (such as UPL). For context, the American Bar Association has identified more than 150 cases in which AI hallucinations were unintentionally included in court filings. (One example is an attorney or paralegal relying on AI to complete a draft document and unknowingly citing a hallucinated case – a case that did not exist in any form but was presented in the AI-generated output as confidently as if it were completely factual.) If you find yourself curious, try a Google search for “attorneys sanctioned for improper AI use”. Those cases show how far hallucinations can go: AI can fabricate sources as well as content.
While the use of AI is growing rapidly and its capabilities are exciting, it is important to contact our office via the Legal Department Email to verify any information that could be considered legal advice or interpretation. We are here to help ensure that innovation continues responsibly and within the boundaries of the law!