...

Potential Issues


Unintended Biases and Discrimination

A strong general rule when judging an AI model is that the model cannot create anything it has not already seen before: the output of an AI will always be a product of the data used to train it. This can cause key issues in the output of large AI models to go completely unnoticed by users who assume the model is more intelligent than it is. For example, a model trained on biased data will replicate that bias in its results: if a medical AI used to diagnose skin diseases is trained using only data from white patients, it may struggle to properly identify medical issues on bodies of color, resulting in fewer correct diagnoses on nonwhite skin. Yet the model will still present its output as though it were purely objective. Uncritical use of models fed on biased data can reinforce deeply entrenched systems of discrimination, and can further mask that discrimination by presenting it as unbiased truth from a model assumed to be free of personal bigotry. 

Misinformation

A term you may hear often in conversations about AI tools is 'hallucination'. In the context of AI, a hallucination occurs when a model confidently outputs incorrect information, either by repeating errors found in its training data or by extrapolating a 'best guess' from data that does not apply to the situation at hand. This happens because AI models can only generate outputs based on the training data they are given, and cannot distinguish good data from bad when selecting which information to use. Users who are unaware of this possibility may be fooled into thinking the model is providing verified facts, and may use and spread the resulting misinformation. 

Environmental Impact


Educational Shortcutting

A common concern raised by educators regarding AI is that AI models will enable students to hide a lack of knowledge about a subject they were intended to learn, by generating the work needed to pass the class. For example, a student who does not understand a unit on Shakespeare, rather than demonstrating that gap by writing an essay that would be received poorly, may use an AI language model to generate an essay demonstrating knowledge the student has not retained. If not properly managed, such 'shortcuts' carry the risk of making students dependent on tools they do not understand rather than learning the knowledge and skills they need for themselves, which can have dangerous consequences if those students later enter fields where that knowledge is required to perform a job properly and safely. 


Training Data, Privacy and Data Governance 

The most important part of any AI model is the data set it is trained on, as that data is the source of any output the model can generate. Robust AI models require enormous amounts of training data, and many are built to actively collect more. This can lead to ethical and legal issues around how that data is obtained. Many models have been found 'scraping' proprietary work from artists who explicitly do not want their work used in this way, potentially violating intellectual property and copyright law in the process. Other models have been found breaking more serious laws when obtaining data; well-known examples include AI models found to have accessed and scanned confidential personal information, up to and including private messages, personal files, and even medical records. 
While these issues are still being resolved, using these models creates a non-zero possibility that AI-generated output may contain snippets of illegally obtained information, which creates multiple opportunities for faculty and students alike to violate Academic Integrity policies entirely by accident. 


Transparency and Oversight

The issues above can be compounded by a lack of transparency on the part of the company hosting an AI tool. Corporations may obscure or outright hide the sources of their training data, as well as information such as who is responsible for the AI's development and management. This can make it difficult for users to obtain clear answers to key questions about safeguards against misinformation, examination of potential biases, or even legal issues such as data theft. It is the opinion of this department that transparency is necessary to ensure any AI tool is being developed ethically and responsibly, and that AI tools operating without transparency or clear oversight should be avoided until both are provided. 

For Faculty


Create Clear Expectations

...