...

As the modern tech landscape continues to invest in the development of AI tools such as large language models, their presence on CNU's campus becomes increasingly likely for students and faculty alike. In particular, as companies as large as Microsoft build job opportunities centered on the use of AI, knowing how to navigate such usage in a safe and ethical manner becomes increasingly important. With this in mind, CNU's IT department has developed a set of recommendations for implementing the use of AI tools on campus. The first of these recommendations, for faculty and students alike, is to be aware of the potential issues surrounding the rise of AI in classrooms.

Potential Issues

What IS an AI?

Oftentimes, the first and largest barrier to using AI safely and ethically in a classroom setting comes from a fundamental misunderstanding of what AI even is. This is understandable, as the term carries many complicated cultural connotations that can lead users to believe that modern-day tools are capable of more than they actually are. Discussions around modern AI models also involve terms that are either unfamiliar to new users or used in unfamiliar contexts.

Artificial General Intelligence (AGI): A shorthand term for an artificial intelligence model that meets or exceeds human abilities on a broad range of cognitive tasks and can perform those tasks autonomously. In essence, AGI would be considered intelligent in the same way that humans themselves are intelligent. True AGI does not yet exist, though some researchers view current models as a significant step towards its eventual creation.

Machine Learning: A computer system that is trained on outside data and then makes predictions extrapolated from that data. The more data the system is exposed to, the more accurate its predictions tend to be; this exposure to additional data is how machine learning systems 'learn'.
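As a purely illustrative sketch (the library, data, and variable names below are hypothetical examples, not tools endorsed or provided by the IT department), the following Python snippet shows a model being fit to a small set of training data and then extrapolating a prediction for an input it has never seen:

    # Illustrative machine learning sketch using the scikit-learn library.
    from sklearn.linear_model import LinearRegression

    # Training data: hours studied (inputs) and the exam scores earned (outputs).
    hours_studied = [[1], [2], [3], [4], [5]]
    exam_scores = [55, 62, 70, 78, 85]

    # 'Learning': the model extracts a pattern from the training data.
    model = LinearRegression()
    model.fit(hours_studied, exam_scores)

    # Prediction: the model extrapolates a score for an unseen input (6 hours).
    print(model.predict([[6]]))

Exposing the model to more, and more representative, examples before fitting is what the 'learning' in machine learning refers to; the code itself does not change.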

Generative AI: A term for an AI model designed to generate 'new' text, images, code, or other content based on the data used to train it. Some experts consider the term a misnomer, as the 'new' content is not wholly new, but recombined from other sources.

Large Language Models: An AI model designed to be trained on, and output, natural language text. Examples include ChatGPT and Google's Gemini. 
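As another illustration only (not an endorsement of any particular model or vendor), a small open-source language model can be run locally with the Hugging Face transformers library; given a prompt, it continues the text with words it judges statistically likely based on its training data:

    # Illustrative sketch: a small open-source language model (GPT-2) continues a prompt.
    # Requires the 'transformers' package and downloads the model weights on first run.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")
    result = generator("The university library is a good place to", max_new_tokens=20)

    print(result[0]["generated_text"])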

Diffusion Models: An AI model designed to be trained on, and output, non-text data. Common diffusion models are designed to interpret and produce images based on a text prompt. Models that are designed to output audio or video results are also considered diffusion models.

Training Data: Any outside information fed to an AI system to 'teach' it how to respond to prompts. 

For other definitions, this article may be a useful starting point. 

Unintended Biases and Discrimination

...

The above issues can be compounded by a lack of transparency on the part of the company hosting an AI tool. Corporations may obscure or outright hide the source of their training data, as well as information such as who is responsible for the AI's development and management. This can make it difficult for users to obtain clear answers to key questions regarding safeguards against misinformation, examination of potential biases, or even legal issues such as data theft. It is the opinion of this department that transparency is necessary to ensure any AI tool is being developed ethically and responsibly, and that AI tools operating without transparency or clear oversight should be avoided until both are provided.


Beyond education, we have additional recommendations for both faculty and students considering using AI tools in the classroom. 

For Faculty

Create Clear Expectations

...

At this time, there are no tech tools that can reliably detect when a piece of text was generated using an AI tool. As a result, using tools that claim to do so to check student work cannot currently be recommended. We instead recommend that faculty use more analog methods to investigate student work, such as checking the accuracy of the text and noting sudden stylistic changes. Depending on the context and submission method of the assignment, the IT department may investigate text submitted to Scholar to provide further evidence, but such an investigation may also prove inconclusive.

Carefully Vet Suggested Tools

AI models develop their algorithms by training on large sets of outside data, which can come from a variety of places depending on the type of model. This can be the source of multiple kinds of unethical and even illegal behavior on the part of the model, including the generation of unintended biases, copyright violation, and outright theft of confidential data such as medical records. Multiple organizations are working to enforce transparency regarding the sourcing of the training data behind popular AI models such as ChatGPT; this work has included the filing of multiple lawsuits against companies whose models are accused of scraping data from unethical or illegal sources. To ensure that campus work does not unintentionally replicate these past mistakes, we recommend that all users research an AI model's history with its training data, and avoid using models that do not provide information about where that data is sourced from. We also recommend that faculty warn students about AI models with either ongoing issues or a record of unethical data sourcing.

For Students

Check Classroom Guidelines

...