History Overview

By: Lauren Klancher

These days, artificial intelligence is often seen as a new technology. Surprisingly, this is not the case: the idea of machines being able to think or act intelligently dates back decades. Artificial intelligence is defined as "the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings" (Copeland, 2026). There have been many developments in AI throughout history, going back to the 20th century, when mathematician and computer scientist Alan Turing introduced the concept of a universal computing machine. It was in 1950, in his paper "Computing Machinery and Intelligence," that Turing asked, "Can machines think?" (LLNL, n.d.). This question gave rise to what we now know as the Turing Test. The purpose of this test was to see whether people could tell the difference between a computer's answers and a human's answers; if they could not, the computer could be said to be intelligent.

Although the idea of AI was first proposed by Turing, the term "artificial intelligence" was not used until 1956, when John McCarthy organized the Dartmouth Summer Research Project on Artificial Intelligence (LLNL, n.d.). This conference is regarded as the official start of artificial intelligence as a research science. It focused on how machines could be made to use language, develop concepts, solve human problems, and enhance lives (LLNL, n.d.).

After the conference at Dartmouth, artificial intelligence research grew throughout the 1950s and 1960s. AI programs advanced, with game-playing and learning programs being developed. Interestingly, the first AI program in the United States was a checkers-playing program created by Arthur Samuel (Copeland, 2026). These developments were based on what was, at that time, known as symbolic AI, in which intelligence was understood as a computer's capacity to manipulate symbols and follow rules set by the programmer. An important milestone for artificial intelligence was the creation in 1961 of SAINT, or symbolic automatic integrator, by James Slagle. This was an expert system able to "solve elementary symbolic integration problems, involving the manipulation of integrals in calculus, at the level of a college freshman" (LLNL, n.d.).

In the mid-1960s, research in artificial intelligence was receiving significant funding from the U.S. Department of Defense. AI laboratories were being set up around the world, with expert systems becoming the main focus of AI development. These systems used sets of "if-then" rules that helped them reach decisions in specific areas, similar to how human beings think (Copeland, 2026). Although they showed early promise, expert systems had notable weaknesses, such as lacking common sense and adapting poorly to changing situations.
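To make the "if-then" structure concrete, here is a minimal, hypothetical sketch of how a rule-based system reaches a decision. The rules and the simple triage domain are invented purely for illustration and do not reproduce any historical system such as SAINT.

```python
# Minimal, hypothetical sketch of an "if-then" rule-based system.
# The rules and the triage domain are invented for illustration only.

def diagnose(symptoms):
    """Apply hand-written if-then rules to a set of observed symptoms."""
    if {"fever", "cough"} <= symptoms:
        return "suspect flu"
    if "rash" in symptoms and "itching" in symptoms:
        return "suspect allergic reaction"
    if "headache" in symptoms:
        return "suggest rest and hydration"
    # Outside its rules, the system has nothing to say -- the lack of
    # common sense and adaptability described above.
    return "no rule matched; refer to a human expert"

print(diagnose({"fever", "cough"}))   # -> suspect flu
print(diagnose({"dizziness"}))        # -> no rule matched; refer to a human expert
```

Every possible conclusion has to be anticipated by the programmer, which is exactly why such systems struggled when situations changed.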

AI experienced a new beginning toward the end of the 1990s and early 2000s, after researchers stopped focusing on programming computers to think and behave exactly the way humans do and began focusing on programming them to solve specific tasks well (LLNL, n.d.). AI research became popular again because computers became faster, there was more data to process, and better learning algorithms were developed.

How It Works: Core Components

Artificial intelligence is "a collection of algorithms, models, and systems used to solve complex problems, automate tasks, and support better decision-making" (CSU, 2025). It works by having computers mimic certain human functions, such as learning, recognition, and decision-making. The systems that power AI allow it to process large amounts of data far more quickly than humans can. The more data an AI system is exposed to, the stronger it becomes and the more it can do.

Some key components of artificial intelligence are machine learning, deep learning, neural networks, cognitive computing, natural language processing, and computer vision (CSU, 2025). AI learns through machine learning, the process of improving over time through exposure to large amounts of data, such as images, text, or numbers. The system searches for patterns in the data and "improves the results of whatever task the system has been set out to achieve" (CSU, 2025). Deep learning is an important part of this process, using neural networks that mimic the human brain to process information and identify complex patterns, such as speech, facial, or text recognition (CSU, 2025). AI also uses natural language processing (NLP), which enables computers to understand and generate human language; computer vision, which helps artificial intelligence interpret images and videos; and cognitive computing, which focuses on imitating the way humans think when problem-solving (CSU, 2025). All of these components work together so computers can better understand data, make decisions, and generate new information in ways that benefit people.
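As a small illustration of the "learning from data" idea described above, the sketch below trains a simple classifier on labeled example images. The choice of scikit-learn, the digits dataset, and logistic regression are assumptions made for brevity; real AI systems use far larger datasets and deep neural networks, but the principle is the same: the model is never given explicit rules, it infers patterns from examples.

```python
# A minimal sketch of machine learning as pattern recognition.
# scikit-learn, the dataset, and the model are illustrative assumptions.

from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()                      # 8x8 images of handwritten digits
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=2000)   # a simple pattern-recognition model
model.fit(X_train, y_train)                 # "exposure" to labeled examples

print(f"accuracy on unseen images: {model.score(X_test, y_test):.2f}")
```

The more labeled examples the model sees during fitting, the better its score on data it has never seen, which is the sense in which exposure to data makes the system stronger.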

(Wowrack, 2024)
(Wowrack, 2025)

The Diffusion of AI

By: Parker Trobec and Ryan Schulz

While artificial intelligence (AI) has been around since the 1950s, it was not widely acknowledged on a large scale until recently. Diffusion refers not only to the existence of a technology but to the rate at which it is adopted, integrated, and normalized throughout organizations, institutions, and everyday uses. In the case of AI, particularly systems built on deep learning, neural networks, and large language models (LLMs), recent evidence shows a transition from experimental development toward widespread use. Examining how AI is diffusing across sectors and into modern society helps make clear the scale of influence it holds.

Current research shows that AI adoption has become more common within organizational settings. McKinsey's State of AI reports consistently show that a growing number of organizations are adopting and integrating AI into at least one business function, signaling a shift away from manual operational tasks and toward AI enhancement of daily routines and organizational tasks (McKinsey & Company, n.d.). This adoption pattern strongly suggests that AI is no longer considered just a new development, but a useful and powerful tool that increases efficiency, accuracy, and productivity. However, adoption is not even across industries. Technology and finance have integrated AI much more quickly because their work is data-driven, which is an area in which AI excels. In contrast, organizations with fewer technical resources or more rigid structures often adopt AI at a much slower pace. This uneven diffusion shows that AI's integration is shaped not only by technical capabilities but also by an organization's readiness and access to resources (McKinsey & Company, n.d.). The figure below illustrates the steady increase in organizational AI adoption over time, highlighting a rapid acceleration in diffusion in recent years.

[Figure: organizational AI adoption over time (McKinsey & Company, 2025)]

During the last five years, AI has had breakthroughs that have led to widespread popularity and discussion. These breakthroughs have been led by generative AI and large language models (LLMs). OpenAI was the first company to successfully bring LLMs to market at a large scale. The creation of these LLMs has made powerful AI tools readily available to consumers, which has significantly accelerated the diffusion of AI. The Stanford AI Index Report 2025 notes that advances in model performance, accessibility, and usability have lowered the barriers to adoption, allowing AI tools to reach a broader demographic beyond just technical and financial specialists (Stanford Human-Centered AI Institute, 2025).

As a result, AI systems are increasingly embedded in everyday workflows, and it is through these LLMs that AI has begun to gain traction with companies, industries, and general consumers alike. This acceleration in AI adoption is closely tied to improvements in neural networks and the availability of scalable computing infrastructure. As AI systems become more efficient and easier to integrate, organizations become more likely to experiment with and eventually fully adopt these tools. Currently, AI tools, chatbots, and resources are plentiful: users can find nearly whatever they are looking for, and if not, they can often build it using AI. While there are many different AI organizations, First Page Sage identifies the top ten generative AI chatbots by current market share as of 2026, shown in the figure below (First Page Sage, n.d.).

[Figure: top generative AI chatbots by market share (Bailyn, 2026)]

This means that the diffusion of generative AI relies on a broader pattern of technical refinement and ease of application that results in social and organizational uptake (Stanford Human-Centered AI Institute, 2025).

In addition to organizational adoption, AI diffusion is increasingly visible at the consumer level. Data collected by First Page Sage shows that generative AI chatbots have experienced rapid growth in public use, with a small number of platforms accounting for a large share of overall engagement (First Page Sage, n.d.). This concentration suggests that diffusion is being driven not only by technological capability but also by brand recognition, accessibility, and integration into existing technologies. Furthermore, First Page Sage notes that "For the first time in 20 years, people have started turning away from Google to conduct research in other places" (First Page Sage, n.d.). The article goes on to describe how people are turning to chatbots like ChatGPT for search and deep research instead of search engines like Google. This shows how deeply AI has started to diffuse into society, because AI is beginning to replace a pivotal technological tool that has shaped the entire world of technology.

The widespread public exposure to generative AI tools marks an important shift in diffusion. Rather than operating only within institutional systems, AI is now used daily by a large portion of society to improve writing, creative production, and information seeking. This form of diffusion is widespread and increases AI's normalization within daily life (First Page Sage, n.d.).

Together, organizational adoption trends and consumer usage patterns suggest that AI is entering a more mature phase of diffusion. The Stanford AI Index Report 2025 emphasizes that AI developments and applications are no longer limited to a small group of companies and tech organizations. Instead, both the group that builds AI and the group that integrates it have expanded. While AI development is still largely led by major tech firms, the group integrating these tools is only growing larger (Stanford Human-Centered AI Institute, 2025). While diffusion remains uneven, the overall trajectory points toward deeper integration rather than simple expansion.

As artificial intelligence becomes more infrastructural, attention is shifting from whether organizations should adopt AI to how it should be governed, managed, and ethically used. This shift toward governance is important because it shows that AI has diffused throughout society to the point that organizations and institutions need to prepare for further advancements (McKinsey & Company, n.d.).

Understanding the diffusion of AI is essential for evaluating its broader social and cultural consequences. Technologies that diffuse rapidly tend to restructure existing systems rather than merely enhance them. The widespread adoption of AI across organizations and its increasing presence in everyday consumer applications suggest that its influence will extend beyond its efficiency gains to affect how people work, communicate, and make decisions. These diffusion patterns provide a critical foundation for further analysis using frameworks such as McLuhan's tetrad, which examines how technologies transform social life through enhancement, obsolescence, retrieval, and reversal.

Societal Effects

By: Bennett Moger and Amber Gamer

Marshall McLuhan introduced the idea of the "Tetrad of Media Effects" in 1988. Through this framework, he emphasizes the importance of understanding and breaking down new technologies, analyzing the different effects that they can and will have. Because technology is often defined by how it is used and implemented, it is important to examine new technologies and the effects they will have personally and societally. The tetrad highlights four different impacts: what a technology enhances, what it makes obsolete, what it retrieves from the past, and what it reverses into when pushed to extremes.

When looking at a technology such as artificial intelligence, this framework helps to highlight both the strengths and weaknesses of AI. It highlights the changes that technology makes in the world around us, in ourselves, and in our perceptions of old and new systems. Rather than blindly accepting new technologies, applying a tetrad-based judgment helps us cautiously and carefully analyze each piece of technology, leading to a more confident acceptance of the system.

Enhancement: Through the adaptation and growth of artificial intelligence, everyday tasks are becoming simpler and more efficient. The automation of tasks and the ideation of concepts are starting to break through, and the technology is now advanced enough that AI is truly enhancing these abilities. Artificial intelligence has enabled people to ask questions and receive immediate answers with little to no research. AI has not always been able to perform tasks like this; it was not until recently that AI was considered immediate. That immediacy is what changed the trajectory of AI and is why we are still talking about its impacts on our world today.

This change in speed is largely attributed to innovations in NLP and machine learning. Both terms refer to AI growing and getting smarter as it is fed more information. The algorithm is not told the answer to every question; over time, it recognizes patterns and makes predictions. This is why AI is not always 100% accurate: its answers are based on the analysis it performs on all the data it has collected, including other users' prompts. With natural language processing, the AI learns and picks up on human language, including things like slang, emotion, and tone. Because of this, some companies have even created their own GPTs to protect their information and data.
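As a rough illustration of how a system can pick up on tone from examples rather than explicit rules, the sketch below trains a tiny text classifier on a handful of labeled sentences. The library (scikit-learn), the example sentences, and the two tone labels are all assumptions made for illustration; production NLP systems learn from vastly more data and use neural language models.

```python
# Toy, hypothetical sketch of learning "tone" from labeled examples.
# The sentences and labels are invented; real NLP systems train on far more data.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = [
    "I love this, it's fantastic",
    "what a great and helpful answer",
    "this is awful and useless",
    "I hate how confusing this is",
]
train_tones = ["positive", "positive", "negative", "negative"]

# Turn each sentence into word counts, then learn which words go with which tone.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_texts, train_tones)

print(model.predict(["what a helpful and fantastic answer"]))  # -> ['positive']
```

Nothing in the code spells out what "positive" means; the association is inferred from the examples, which is the same pattern-recognition idea at a much smaller scale.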

This new technology enables ideation, automation, and greater task efficiency. Tasks that were once time-consuming in the workplace are now eliminated and taken over by AI. Something like creating a detailed sales report that would take hours can now be done by a GPT in minutes. Research done by National University suggests the same thing through statistical analysis, and a few data points stood out when going through their findings. According to Prestianni, 30% of jobs will be automated by 2030, and another 60% will see their day-to-day tasks modified (Prestianni, 2025). The other statistic that blew me away was that 13.7% of the workforce has already lost their jobs to a robot or AI automation (Prestianni, 2025). While this is concerning, most companies offer to retrain these employees for new positions. It is difficult for employees to go through a career change, but by doing this, companies allow for greater productivity.

[Figure: automation could eliminate 73 million U.S. jobs by 2030 (McCarthy, 2017)]

Obsolescence: As artificial intelligence continues to learn, adapt, and grow, scientists believe it will eventually surpass human ability (Ord, 2025). While we integrate this technology into everyday life and feed it more information, scientists are simultaneously working to help it continue to innovate and grow. As AI moves into creative and more soft-skill-oriented fields, there are not many industries left untouched by it. At this rate of growth, it will soon become much more expensive for companies to hire humans than to use artificial intelligence, even if that means accepting slightly lower-quality work.

Currently, artificial intelligence is taking over many of the mundane, automated tasks. Multiple sources predict that by 2050, the workplace will look completely different due to the integration of AI. Automating small tasks will free workers to tackle bigger ones. While the future of AI is unknown, artificial intelligence is already being implemented in the modern workplace. The Institute for Public Policy Research found that 60% of administrative tasks are automatable. While automation of these tasks is underway, this does not mean that jobs will be scarce in many industries in the future (Kelly, 2025).

Artificial intelligence is also beginning to replace libraries, as more generative AI is implemented and adopted in the academic world. As more people turn to AI for research, sources, and even entire projects, the amount of research actually being conducted declines. Artificial intelligence did not start the decline of libraries, but it is certainly speeding up the process; while the decline ultimately began with the creation of search engines, the implementation of artificial intelligence is definitely changing how people research issues.

Retrieval: Artificial intelligence draws more on the habits of earlier technologies than on the technologies themselves. Much of AI relies on pattern recognition and automated research, and through these processes it can generate content and ideas. For most of human history, humans have been better at pattern recognition than computers, and from design to numbers to technological and historical patterns, many scholars argue that these connections have been made before (Üstün, 2024). Artificial intelligence now allows this to happen much faster and more efficiently, but ultimately, it is argued, there is little to no difference.

Taking a broader view of artificial intelligence under retrieval, it performs the very act of retrieving information, pulling foundational texts and information at the click of a button. In doing so, it calls back to long-standing systems like libraries, and even early web browsers, through the design of its interface. Its breadth of knowledge is reminiscent of ancient philosophers such as Socrates, who stood in the marketplace answering questions and sharing ideas. The relevance, efficiency, and accuracy of artificial intelligence all recall the "marketplace of ideas" theory, which emphasizes that knowledge should be freely shared and that everyone should have access to it.

[Figure] (Lin, 2025)

Reversal: When pushed to its furthest extremes, rather than helping inform the masses, artificial intelligence will lead people to do less research and less reading, thereby cognitively damaging users. If users are always able and willing to turn to AI, then many will lose the ability to do the work for themselves at all. In many cases, less effort results in less learning. As adoption of this technology becomes increasingly common and people start using it at younger and younger ages, there could be real issues with cognitive development.

With its quick advancement and equally fast adoption, the effects of over-reliance on artificial intelligence are yet to be fully seen or understood. Artificial intelligence has the potential to provide a dopamine rush through the "search-and-explore" functions of generative AI platforms such as ChatGPT (Goldman, 2021). In the same way that a new post popping up on social media can deliver a hit of dopamine, the immediate gratification of an answer can trigger one as well. Once a person receives that rush, they often seek the next one. Applied to artificial intelligence, this creates a real danger of over-reliance sliding into addiction. This, combined with the cognitive damage from consistent AI use, can lead to dangerous cycles and outcomes. The consistent use of artificial intelligence has real biological impacts that often go unnoticed. Our brains are wired to need only so much dopamine; constant overstimulation leads to problems such as intense mood swings, anxiety, depression, anger, addiction, and mania ("Dopamine: What It Is, Function & Symptoms," 2022).

In many applications or depictions of artificial intelligence, there is an assumption of a god-like figure, as the system is seemingly all-knowing. When pushed to its limits, this idea presents many challenges for any religion, let alone Christianity. In many cases, it verges on becoming its own religion, treating technology as a higher power. From a Christian perspective, we understand the dangers of this and recognize the importance of the Lord's omnipotence and omniscience. There are divine and holy natures in God that cannot be replicated by technology. In this reversal, artificial intelligence goes so far as to question the divinity of the Lord God.

How the world is different: A theory that helps us understand why the world is different because of AI is media ecology. Media ecology argues that technology shapes the culture and experiences that humans encounter, rather than being a neutral tool. AI is far from a neutral tool; it is heavily biased because of its algorithms. A great illustration of this is how the culture of learning has changed. In the pre-digital era, information was only available by word of mouth or by going to a library. Now, nearly every factual piece of information is right at the fingertips of any digital user. Libraries are no longer crowded with curious individuals and are instead used as places to escape the busyness and study in peace and quiet.

A second example of how AI supports this theory is the relationship between the value of knowledge and the introduction of AI chatbots. It takes no effort to use a GPT or other AI system to get the answer to any question you can think of, and if it is that effortless, the human brain will not hold on to the information long-term. Talk about AI influencing your life. Even when the internet was first introduced, it still shaped society as media ecology says, but you still had to dig to find the answer you wanted. You had to click into an article and read through it to find what you were looking for. AI bots can give you only the information you deem necessary and cut out everything that is not applicable.

This applies in a religious context as well. The emphasis on not studying information because "you can just look it up" affects how much biblical knowledge we have. The value of knowing and memorizing the Bible has gone down, and our ability to defend our faith has too. When the time comes that you are asked what you believe and why you believe it, it becomes an awkward experience; most likely, you will feel like you don't know what you are talking about and cannot prove what you are saying. I am not saying this is the same for everyone, and most Bethel students do a good job of discussing what they believe and why they believe it if there ever comes a time to share their faith. Back in biblical times, the Bible was shared orally and had to be fully memorized, every verse and every word. How could you not see that we experience the world differently because of AI? This is exactly what media ecology preaches.

Forecasted Effects

By: Cavan Banks

The recommendation portion of this section is critical as we learn more about AI, its perks, and even its negative effects. I want to start with what I learned. As artificial intelligence systems continue to advance through deep learning and neural networks, AI continues to diffuse across nearly every sector of society. In the long run, this influence will extend far beyond improved efficiency or automation. AI should not be understood as a neutral tool; I think of it as a new communication environment that shapes how humans experience knowledge, creativity, and even personal identity. We can learn a great deal about how we want our future to look. Much like earlier media revolutions described by McLuhan, AI slowly reorganizes the balance between human agency and technological mediation. This raises profound social and ethical questions about our future as humans and our decision-making.

One of the most significant forecasted effects of artificial intelligence lies in its impact on cognition and education. As AI systems become capable of generating essays in seconds, solving complex problems, and offering personalized feedback, some users may increasingly rely on machine intelligence to perform tasks that previously required deep engagement. This creates opportunities for adaptive learning, but it also creates risks of weakening critical-thinking skills, including memory retention and problem-solving. Over time, human intelligence could shift from active reasoning to passive evaluation of machine-generated outputs. This can change how knowledge is produced and validated in educational institutions. We may come to use these AI systems for nearly everything and forget where we came from.

AI is also likely to push toward structural changes that are already underway in global markets. Instead of eliminating work entirely, artificial intelligence will redistribute power. Specific fields like finance, marketing, law, and even software engineering are already shaped by decision systems that outperform humans in speed and scale; humans cannot create in hours what AI can produce in seconds. This shift may widen economic inequality by concentrating technical expertise and capital within a small number of corporations. At the same time, it will displace workers whose roles become algorithmically optimized or redundant. In fields like computer science, AI can already do these jobs for people, and those people lose out on income they would otherwise have had the chance to earn. Future employment may therefore prioritize human creativity and emotional intelligence over technical skill alone.

Social and psychological effects represent another major frontier in the long-term evolution of artificial intelligence (MIT, 2025). The MIT article goes into depth on AI's effects through machines, videos, and even images, describing how these systems offer new opportunities for assisting people with information processing, reasoning, decision-making, and creativity. It also explains that as users interact more frequently with conversational AI, these recommendation systems and emotionally responsive technologies blur the relationship between humans and machines. Another perspective I want to add is a McLuhan-inspired one: the most critical danger emerges in the reversal stage of artificial intelligence. In an article I found, the extended powers these tools give us can end up limiting our cognitive skills and abilities, affecting interaction, communication, and even personal skills. "According to McLuhan, when a medium or technology is pushed to its extremes or reaches its limits, its full potential, it tends to flip or reverse its characteristics or effects into something different, often opposite" (BigThink, 2025). When pushed to its extreme, AI often reverses from a system designed to help human intelligence into one that slowly erodes it. The more we humans delegate our thinking and creativity, the more we risk producing individuals who are highly connected but cognitively disengaged.

With these effects in mind, I want to address a solution: a way we can shape artificial intelligence in our society to improve who we are and what we do as humans on a daily basis. First and foremost, educational institutions should prioritize AI literacy rather than simple AI usage, especially when it comes to learning and advancing the knowledge of students and even professors. According to the APA, researchers have tested AI's ability to tutor kids who need extra help, and in recent surveys and data, these kids have shown improvement in specific areas. The article also urges that kids not just cheat on assignments with AI but implement it in their learning, using artificial intelligence as a guide for papers, for choosing what to write about, or for working through a math problem step by step when they are stuck. Kids like how friendly AI is when it comes to answering questions. When I was a kid, I would be afraid to ask a question because I didn't know if it was right, and I still have this fear. Kids in class can ask AI for help without the backlash of people laughing at them. "The tool may therefore represent a social opportunity cost if children use it to answer questions they might otherwise ask their parents, peers, or siblings" (American Psychological Association, 2025).

Another solution applies to businesses and corporations. AI systems should assist human decision-making rather than replace it entirely, especially in high-stakes domains such as healthcare, law, education, and business leadership. Harvard Business School states, "In the workplace, you can reflect this balance by encouraging diverse participation in data collection and decision-making and by regularly reviewing AI systems to ensure fairness and transparency" (Gibson, 2024). This fairness is an ethical factor that helps ensure businesses use AI in ways that won't hurt company morale. When Coca-Cola released an AI-generated commercial, for example, the backlash it received was shocking. Overall, using AI thoughtfully and effectively will benefit companies and organizations by maximizing their value.

Governments are also implementing AI, and governing bodies should require transparency and accountability in AI systems. This includes explainable algorithms, independent audits, and clear standards for data usage. In government settings, AI may enhance productivity and personalization, but governments should resist allowing machines to replace genuine human and emotional reasoning when implementing AI in their workforce.

To wrap up my recommendation, I want to return to what I see in AI as I get older and in the technological advancements all around me. Artificial intelligence will not determine the future of society on its own; our own human values will. We matter as humans, and we will not let something artificial control us as a whole. Deep learning systems amplify whatever goals they are given. As OpenAI says, "Our main goal is to push current alignment ideas as far as possible, and to understand and document precisely how they can succeed or why they will fail. We believe that even without fundamentally new alignment ideas, we can likely build sufficiently aligned AI systems to substantially advance alignment research itself" (OpenAI, 2022). If we humans push toward relying on AI only for profit, we risk reducing human life to optimization problems. But if we push to use AI in the ways I described for education, government, and businesses and corporations, we can, and we will, see artificial intelligence become one of the most powerful tools for human flourishing in modern history.

References

Abrams, Z. (2025, January 1). Classrooms are adapting to the use of artificial intelligence. American Psychological Association. Retrieved January 27, 2026, from https://www.apa.org/monitor/2025/01/trends-classrooms-artificial-intelligence

Wowrack. (2024, June 5). Artificial intelligence essentials part 1. https://www.wowrack.com/en-us/blog/security/artificial-intelligence-essentials-part-1/

Amsterdam, U. van. (2021, March 23). We need to better understand the impact of AI on society and Citizens. University of Amsterdam. https://www.uva.nl/en/shared-content/faculteiten/en/faculteit-der-maatschappij-en-gedragswetenschappen/news/2021/03/the-impact-of-ai-on-society-and-individual-citizens.html?cb

Bailyn, E. (2025, September 12). ChatGPT optimization: 2026 guide. First Page Sage. https://firstpagesage.com/seo-blog/chatgpt-optimization-guide/

Bailyn, E. (2026, January 21). Top generative AI chatbots by market share — January 2026. First Page Sage. https://firstpagesage.com/reports/top-generative-ai-chatbots

Cloudflare. (n.d.). What is a large language model (LLM)? Cloudflare Learning Center. https://www.cloudflare.com/learning/ai/what-is-large-language-model/

Connell, E. (2025, October 7). How Does AI Work? CSU Global. Colorado State University Global. https://csuglobal.edu/blog/how-does-ai-actually-work

Copeland, B.J. (2026, January 6). History of artificial intelligence (AI). Britannica. https://www.britannica.com/science/history-of-artificial-intelligence

Dopamine: What It Is, Function & Symptoms. (2022, March 23). Cleveland Clinic. Retrieved January 27, 2026, from https://my.clevelandclinic.org/health/articles/22581-dopamine

Gibson, K. (2024, August 14). 5 Ethical Considerations of AI in Business. HBS Online. Retrieved January 27, 2026, from https://online.hbs.edu/blog/post/ethical-considerations-of-ai

Goldman, B. (2021, October 29). Addictive potential of social media, explained. Stanford Medicine. Retrieved January 27, 2026, from https://med.stanford.edu/news/insights/2021/10/addictive-potential-of-social-media-explained.html

MIT Media Lab. (n.d.). Human-AI interaction. https://www.media.mit.edu/projects/theme-human-ai/overview/

Kelly, J. (2025, April 25). Jobs AI Will Replace First in the Workplace Shift. Forbes. Retrieved January 27, 2026, from https://www.forbes.com/sites/jackkelly/2025/04/25/the-jobs-that-will-fall-first-as-ai-takes-over-the-workplace/

McCarthy, N., & Richter, F. (2017, December 1). Infographic: Automation could eliminate 73 million U.S. jobs by 2030. Statista Daily Data. https://www.statista.com/chart/12082/automation-could-eliminate-73-million-us-jobs-by-2030/?srsltid=AfmBOop6HFsnvKgtp4v7XhZUgFZ5TdMzqA4lnigJjN5UHZ-GyPl7Jun2

Mir, A. (2025, November 16). The digital age's reversion to pre-literate communication. Big Think. Retrieved January 27, 2026, from https://bigthink.com/the-present/the-digital-ages-reversion-to-pre-literate-communication/

Ord, T. (2025, May 4). Better at everything: how AI could make human beings irrelevant. The Guardian. Retrieved January 26, 2026, from https://www.theguardian.com/books/2025/may/04/the-big-idea-can-we-stop-ai-making-humans-obsolete

Our approach to alignment research. (2022, August 24). OpenAI. Retrieved January 27, 2026, from https://openai.com/index/our-approach-to-alignment-research/

Prestianni, T. (2025, September 22). 59 AI job statistics: Future of U.S. jobs. National University. https://www.nu.edu/blog/ai-job-statistics/

Singla, A., Sukharevsky, A., Hall, B., Yee, L., Chui, M., & Balakrishnan, T. (2025, November 5). The state of AI in 2025: Agents, innovation, and transformation. McKinsey & Company. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai

Stanford University, Institute for Human-Centered Artificial Intelligence. (2025, April 7). The 2025 AI Index report. Stanford HAI. https://hai.stanford.edu/ai-index/2025-ai-index-report

Lawrence Livermore National Laboratory. (n.d.). The birth of artificial intelligence (AI) research. Science and Technology. https://st.llnl.gov/news/look-back/birth-artificial-intelligence-ai-research

Üstün, B. (2024). Patterns before recognition: The historical ascendance of an extractive empiricism of forms. Humanities and Social Sciences Communications, 11, 55. https://doi.org/10.1057/s41599-023-02574-1