AI in Higher Education


Where did it come from?

ChatGPT is a project of OpenAI, a group founded in 2015 as a non-profit by leaders in the tech industry with the intention of developing friendly Artificial Intelligence (AI) that would benefit all humanity. Its original goal was to be completely transparent, making all of its research and patents open to the public. The group later transitioned to a "capped-profit" organization in order to better attract top talent and offer stakes in the company.


What is it?

ChatGPT is the latest generation of OpenAI's chatbot, built on a large language model. Originally trained to mimic human conversation, the chatbot can now produce a wide array of responses to prompts, including conversation, essays, stories, song lyrics, poetry, and computer programs; it can also create and take tests, and much more. A key feature of the chatbot is that it "remembers" previous interactions and can build on earlier prompts and responses.


How does it work?

Essentially, ChatGPT draws on a massive training set of language, taken in large part from the web in the form of books, articles, blog posts, and forums, and refined through human feedback on which responses are logical and which are not. From this, it predicts the next most likely word or token (e.g., a punctuation mark) to occur, given the words and tokens that came before. Because it considers everything that has come before, it is able to create not only sensible sentences, but also documents with a familiar and expected structure.
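The prediction idea described above can be illustrated with a deliberately tiny sketch: a bigram model that counts, in a toy corpus, which word most often follows each word, then chains those predictions together. This is a vast simplification of ChatGPT (which uses a neural network over long contexts, not raw bigram counts), and the corpus and function names here are invented for illustration.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the web-scale training data described above.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each token follows each preceding token (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(token):
    """Return the statistically most likely token to follow `token`."""
    return following[token].most_common(1)[0][0]

def generate(start, length=6):
    """Chain predictions together, each one conditioned on the last."""
    out = [start]
    for _ in range(length):
        out.append(predict_next(out[-1]))
    return " ".join(out)

print(predict_next("sat"))   # "on"  -- "sat" is always followed by "on" here
print(predict_next("on"))    # "the" -- "on" is always followed by "the" here
print(generate("the"))
```

Real language models condition on the entire preceding context rather than a single word, which is what lets them maintain coherent structure across a whole document, but the core loop is the same: predict the most likely next token, append it, repeat.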


What can it do?

One of the most impressive features of ChatGPT is its ability to create a multitude of text types through the dialogue format used to train it. When prompted to create a certain type of text, the tool is trained to produce text that best conforms to the request, meaning it will attempt to create whatever you ask of it. Not only can it generate virtually any kind of prose or poetry at various levels of formality, it can also produce reviews, critiques, and summaries of texts given to it, create tests or quizzes and answer them, and write programming code. It can even mimic the style of well-known authors: ask it how to change a tire in the style of Shakespeare, and you'll get steps for removing lug nuts in iambic pentameter.


What can't it do?

While the texts it creates are extremely human-like and generally free of grammatical errors, its responses suffer both from a lack of specificity and from what are called "hallucinations." Hallucination is the term for well-formed statements and arguments, built from the model's training texts, that assert imagined facts or scenarios not present in its training data. While each generation of the tool has attempted to limit this phenomenon, the fact that it creates novel sentences based on statistical co-occurrences of words means it can make up "facts" that are untrue, albeit stated with complete confidence. The lack of specificity in many of its more complex responses results in what some describe as very bland or generic texts, lacking an individualized voice.