George Mason University
Exploring the world of Artificial Intelligence? Start here with an overview of the fundamentals and beyond.
What Exactly is "AI"?
Definitions of AI
For purposes of this guidance, AI is shorthand for "Generative Artificial Intelligence," a specific approach to producing human-like text, audio, images, and video. OpenAI's ChatGPT is an example of generative AI, as are Google's Gemini, Microsoft's Copilot, and Anthropic's Claude, among others. A key feature of these systems is a chatbot interface that lets humans enter questions or prompts in everyday written or spoken language and receive AI-generated output in response.
Large Language Models
Large Language Models (LLMs) are trained on massive collections of text gathered from the Internet, including much of the text in social media posts (like those on X or Reddit), publicly available websites, the contents of academic journals, and both academic and creative works that have been pirated and posted online. LLM systems effectively ingest all of this material and then develop a statistical model that produces new text in response to a prompt.
LLMs were designed to produce plausibly human-sounding text; they were not designed to evaluate sources or to distinguish fact from fiction.
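The "statistical model" idea above can be illustrated in miniature. The sketch below is a toy bigram model, nothing like a production LLM (which uses a neural network trained on billions of documents), but it shows the same core principle: learn which words tend to follow which in the training text, then sample from those learned patterns to generate new text.

```python
import random
from collections import defaultdict

# Toy illustration only: count which words follow which in a tiny
# "training corpus," then generate new text by sampling a plausible
# next word. Real LLMs do something far more sophisticated, but the
# underlying idea -- predict the next token statistically -- is the same.
training_text = "the cat sat on the mat and the dog sat on the rug"
words = training_text.split()

# Build a bigram table: for each word, the list of words observed after it.
follows = defaultdict(list)
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

def generate(start, length=8, seed=0):
    """Generate text by repeatedly sampling a statistically likely next word."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break  # no observed continuation; stop generating
        out.append(random.choice(options))
    return " ".join(out)

print(generate("the"))
```

Note that the generated sentence will sound grammatical only because every word pair appeared somewhere in the training text; the model has no idea what a cat or a mat is, which is exactly the point made above about plausibility without understanding.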
Other Forms of AI
There are many other forms of AI that are not based on language or diffusion models, although most use some combination of machine learning and natural language processing. These guidelines are most concerned with generative AI systems, particularly those that produce text, including computer code. You should understand the basic principles of any AI system before using it, and you should consider the ethical consequences of any such use.
What Should You Know About How AI Systems Work?
Generative AI Makes Plausible-Sounding Language
Generative AI systems were designed to respond to human prompts with human-sounding text. Even with additional systems in place to guide and filter both input and output, these LLMs will still produce errors, fabricate “facts,” and invent fictional academic sources. When you use AI systems, always review and evaluate the output rather than simply assuming it is correct.
Chatbots Are Not People
AI systems do not "know," "understand," or "feel." They are internally very complicated statistical models, but they do not have a consciousness, even though it might seem like they do because of the chatbot interface. Any emotional engagement with a chatbot resides only in the human user.
AI Is Not Infallible
In part because the training data is itself full of inaccuracies and biases, generative AI systems will produce text that is incorrect. You must not assume that text produced by AI chatbots is correct by default. Recent research also shows that AI systems will introduce errors when summarizing news sources and other texts.
Experience and Expertise Improve AI Outputs and Efficiency
AI systems can make routine writing tasks more efficient, but for good results, the initial prompt must be well designed. It takes expertise both to design a good prompt and to evaluate AI output. You can't skip past learning how to code, how to research, or how to communicate effectively and just let the AI do it for you, because you won't know whether what the AI generates is correct or useful.
Freely Available AI Applications Collect Your Data
Most AI applications collect the information you send them, including your prompts, the files you upload, and the output the application generates, in order to further train their systems. Be very careful not to give these systems information that might allow someone to compromise your or others’ online identity, proprietary or business-related information, or copyrighted works.