The “next to normal”-thesis posits that AI, in its trajectory, will soon be perceived as a regular component of our technological interactions. This perspective is twofold:
• AI will operate alongside other tools, becoming an integrated part of our technological landscape.
• As we become more accustomed to AI, its once sensationalized nature will diminish, transitioning from being “next to normal” to simply “normal”.
The integration of this technology into our daily lives is becoming more seamless [5], making it challenging for many to discern between AI-powered and non-AI-powered tools [6]. This seamless integration means that, for many, AI is perceived as just another component of a machine or computer, devoid of its current sensational character. For instance, users already interact with AI when searching on Google, receiving recommendations on Amazon or Netflix, or swiping on Tinder, without necessarily recognizing that they are dealing with Artificial Intelligence in these processes [7,8]. AI will thus be a tool alongside all the other tools we handle in our daily lives. It will be “next to normal”, as in next to all the other tools we consider normal.
As AI becomes more ingrained in our daily routines [9,10], there will likely be a diminishing awareness of its presence [11]. This “oblivion” means that many users will not particularly realize that they are interacting with an AI system, as it will be perceived as just another part of the machine they are using. This normalization and lack of discernment can be likened to how users interact with search engines or recommendation systems without necessarily recognizing the underlying AI mechanisms [12,13,14]. The technology will hence become “next to normal”, as in the next normal thing that becomes ordinary in our regular technology use. For example, when e-mail first entered the stage, people found it astonishing that one could communicate with someone on the other side of the planet via text messages almost in real time [15]. It was a natural inclination to ask: “What will this mean for our businesses, our societies, friendships, love lives, and perhaps even our psychological make-ups?” Nowadays, however, nobody bothers to ask these very questions about e-mail. We have, in a sense, become desensitized to the intriguing nature and possibilities of e-mail because it has become normal to us. Today, we are asking such questions about AI, and rightly so, since there are still considerable opportunities but also risks associated with the technology.
3.1. The Mere Exposure Effect
The “Mere Exposure Effect” [16] is a well-documented psychological phenomenon suggesting that increased exposure to something leads to a more positive evaluation thereof [17,18,19]. This effect has significant implications for AI safety concerns. When AI systems like GPT-3 were introduced, there was considerable apprehension and fear surrounding their capabilities, e.g., [20]. However, as these systems become more integrated and familiar, the general public’s perception becomes more positive. This shift in perception could potentially lead to an underrepresentation of genuine AI safety concerns.
To a certain extent, this has already played out historically: when AI-driven tools first emerged, they were met with skepticism and fear [21]. Over time, as people became more familiar with these tools and began to see their benefits, the initial fear gave way to acceptance and even enthusiasm, e.g., [22]. This can be seen in the adoption rates of AI-driven technologies such as virtual assistants and recommendation algorithms [23,24]. As these technologies became more commonplace, the initial fear and skepticism surrounding them diminished, and they came to be seen in a more positive light.
In several domains, there is empirical evidence that such a normalization process is under way with respect to Artificial Intelligence. Strange and Tucker [25], for example, performed case studies at the OHCHR, the WHO, and UNESCO and argued that in politics the idea of AI has become an ‘empty signifier’ (as already hinted at in [26]). An analysis of 233,914 English tweets upon the introduction of ChatGPT showed that the model was initially met with considerable sensationalism but was quickly taken up in many aspects of daily life and normalized surprisingly fast as it came to be used for creative writing, essay writing, prompt writing, coding, and answering knowledge questions [27]. Websites like “Normal AI” (normalai.org) have even emerged, attempting to establish a ‘normal’ discussion of Artificial Intelligence without any sensationalism attached. In business outlets, AI has been dubbed “the new normal” without denying its revolutionary nature [28]. In his book “The Culture of AI”, Elliott [9] explains how AI is the basis of our modern digital revolution, but also how it is being implemented in our everyday lives at incredible speed, becoming part of our ordinary experience. This is also true in the creative industries, a domain that commentators had hitherto thought computers could never invade [29]. Similar dynamics hold for the education system, where experts claim that AI is not yet standardized but soon will be, given its adoption across the landscape [30]. Hence, there is barely an area of life untouched by AI: it is now integrated in new tools and innovations [31], decentralized technology [32], medicine [33], the world of business and recruiting [34], and, through the Internet of Things (IoT), it is even arriving in our homes [35]. Although initial deployment usually comes with excitement or some concern, the Mere Exposure Effect predicts that quite rapidly, the general perception will become largely positive.
However, this positive shift in perception can be problematic. As AI becomes more integrated into our daily lives and its capabilities continue to grow, there is a risk that genuine safety concerns might be overshadowed or dismissed. This could lead to a complacent attitude towards the technology, in which potential risks are not adequately addressed. Drawing on the Mere Exposure Effect, the “next to normal”-thesis claims that as AI technologies become more and more normal through their integration into our everyday lives, the general public will grow numb towards the potential problems of AI and will generally not view it in negative terms. Practical issues like LLM hallucinations, or even existential threats like instrumental convergence (see Nick Bostrom’s idea of the paperclip maximizer), will likely not be considered highly disconcerting [36,37,38].
These psychological developments raise a number of manifest ethical concerns. One of the most pressing issues is the potential for over-reliance on AI, especially in critical fields like healthcare or criminal justice. This dependency could lead to a significant reduction in human oversight, raising concerns about the ethical implications of AI-driven decisions. Another critical concern is the erosion of privacy norms. As society grows more accustomed to AI, there is a risk that individuals may become more accepting of extensive data collection and surveillance, potentially leading to an uncritical acceptance of invasive technologies. This shift can pose a threat to individual liberties and autonomy. Further, the normalization of AI could exacerbate existing power imbalances: the control of AI by a few powerful entities, such as tech companies and governments, raises concerns about ensuring that AI benefits society equitably. Additionally, the shifting of moral responsibility in an AI-integrated world complicates ethical accountability, making it difficult to pinpoint who is responsible for AI-driven actions. The unequal distribution of AI benefits could reinforce existing inequalities, giving disproportionate advantages to those with better access to these technologies. Moreover, as AI transforms the nature of work, questions arise about wealth redistribution, retraining displaced workers, and maintaining fair labor practices in an increasingly automated economy. Lastly, the use of AI in governance and public policy decisions necessitates transparency and democratic control; ensuring that these decisions undergo public scrutiny and adhere to democratic processes is crucial. However, if the Mere Exposure Effect leads to an uncritical adoption of AI systems, such ethical concerns will inevitably be underrepresented.
3.2. The Black Box Effect
Despite the current increasing familiarity with AI, there appears to remain a segment of experts and creators who express skepticism and concern. Notable figures like Sam Altman, Elon Musk, and Bill Gates have voiced apprehensions about the technology’s potential dangers [39]. Prominent proponents of the view that AI is incredibly dangerous include experts such as Eliezer Yudkowsky and Nick Bostrom [40]. According to the “next to normal”-thesis, they, too, should have become used to AI and therefore not be afraid of it. Why, then, are they still voicing strong concerns?
Their skepticism, despite their deep involvement with AI, might be attributed to what is here referred to as the “Black Box Effect”. This effect suggests that things we do not fully understand induce a sense of danger or unease, e.g., [41], especially in those who ought to understand them because they work with them most intimately in their profession [42]. Since not even experts like these yet fully understand why AI systems can sometimes exhibit human-like capabilities [43,44], this lack of understanding may counteract the diminishing AI safety concerns brought about by the Mere Exposure Effect.
For instance, while the general public might be more accepting of AI due to increased exposure to it, experts who understand the complexities and potential risks associated with AI remain cautious, since, at heart, they still do not quite grasp how these new emergent capabilities and dynamics come about. This caution stems from the fact that the technology, in many cases, remains a “black box”: we know what goes in, we can analyze the math that is carried through, and we see the results that come out, but we do not necessarily understand everything that happens in between [45,46,47,48]. Most of all, it remains a mystery how the mathematical operations effectively lead to the semantic results we can now observe in LLMs. This lack of understanding may lead to a sense of unease and skepticism, even amongst those who are deeply involved in the field of AI.
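The point can be made concrete with a minimal sketch, assuming Python with NumPy and using random placeholder weights rather than any real trained system: every number that goes in, is carried through, and comes out of a tiny network can be printed and inspected, yet inspecting those numbers does not by itself reveal what the computation means.

```python
# Minimal illustrative sketch (assumes NumPy; weights are random placeholders,
# not a trained model): every quantity in this tiny network is fully visible,
# yet the numbers alone do not explain the "meaning" of the computation.
import numpy as np

rng = np.random.default_rng(0)

W1, b1 = rng.normal(size=(8, 16)), rng.normal(size=16)   # first layer parameters
W2, b2 = rng.normal(size=(16, 4)), rng.normal(size=4)    # second layer parameters

x = rng.normal(size=8)                  # what goes in: fully known
h = np.maximum(0.0, x @ W1 + b1)        # the math carried through: fully known
y = h @ W2 + b2                         # what comes out: fully known

print("input:", np.round(x, 2))
print("hidden activations:", np.round(h, 2))
print("output:", np.round(y, 2))
# Nothing is hidden in a literal sense, but the hidden vector `h` has no
# self-evident interpretation: this is the sense in which a fully inspectable
# computation can still remain a semantic "black box".
```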
The most prevalent alignment concerns in AI safety engineering are the following [49,50,51,52,53]:
• Data Privacy and Security: AI systems often require access to large amounts of data, which raises concerns about privacy and the protection of sensitive information. There is a risk of data breaches, unauthorized access, and misuse of personal data.
• Malicious Use of AI: AI can be used for harmful purposes, such as creating deepfakes, automating cyber-attacks, or developing autonomous weapons. The increasing accessibility of AI tools makes it easier for malicious actors to exploit these technologies.
• Bias and Discrimination: AI systems can inherit biases present in their training data, leading to discriminatory outcomes. This is particularly concerning in areas like hiring, law enforcement, and loan approvals, where biased AI could perpetuate or exacerbate existing inequalities.
• Lack of Explainability and Transparency: Many advanced AI systems, particularly those based on deep learning, are often seen as opaque due to their complexity. This lack of transparency can make it difficult to understand, predict, or control how these systems make decisions.
• Autonomous Systems and Loss of Control: As AI systems become more autonomous, there is a risk that they could act in unpredictable or unintended ways, especially if they are operating in complex environments or making high-stakes decisions.
• AI and Cybersecurity: AI can both enhance and weaken cybersecurity. While it can improve threat detection and response, AI systems themselves can be targets of cyber-attacks, potentially leading to the alteration of algorithms or the manipulation of data.
• Regulatory and Ethical Challenges: The rapid advancement of AI technologies often outpaces the development of regulatory frameworks. There is a need for international standards and regulations to ensure the ethical and responsible use of AI.
• AI Disinformation and Propaganda: AI can generate convincing fake content, which can be used to spread misinformation or propaganda at scale. This poses significant challenges for societies, as it can undermine trust in media and institutions.
• Job Displacement and Economic Impact: Automation through AI could lead to significant job displacement in various industries. This raises concerns about the economic impact on workers and the need for strategies to manage the transition in the labor market.
• Dependency and Reduced Human Skills: Over-reliance on AI can lead to a decline in certain human skills and capabilities. There is a risk that critical decision-making may become too dependent on AI, potentially leading to vulnerabilities if the AI systems fail.
As artificial neural networks grow ever larger, the systems become more opaque, meaning that it becomes more difficult to understand how their emergent capabilities arise. The Black Box Effect thus exacerbates these concerns in AI safety. Although the discipline of “mechanistic interpretability” is making strides in better understanding the dynamics at hand, we are still far from understanding the emergent properties of artificial neural networks.
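As a purely illustrative sketch of what such interpretability work involves in practice, the following snippet (assuming PyTorch; the toy model and layer names are hypothetical, not any system studied in the cited literature) records the intermediate activations of a small network with forward hooks. Capturing internal states in this way is a typical first step; turning them into an explanation of emergent behavior is the part that remains largely open.

```python
# Illustrative sketch only (assumes PyTorch; the toy model is hypothetical):
# forward hooks record each layer's intermediate activations, a common first
# step in interpretability work. Recording activations is easy; explaining
# why they produce a given behavior is the hard, still largely open problem.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(
    nn.Linear(8, 16),
    nn.ReLU(),
    nn.Linear(16, 4),
)

activations = {}

def make_hook(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()   # keep a copy for later inspection
    return hook

# Register a hook on every layer so each intermediate representation is stored.
for idx, layer in enumerate(model):
    layer.register_forward_hook(make_hook(f"layer_{idx}"))

x = torch.randn(1, 8)      # arbitrary input
_ = model(x)               # the forward pass fills the activations dictionary

for name, act in activations.items():
    print(name, tuple(act.shape), act.flatten()[:3])
```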
3.3. Synthesis
Yagi, Ikoma, and Kikuchi [54] have experimentally shown that there is an attentional modulation of the Mere Exposure Effect, meaning that selective attention was responsible for an increase in the effect. This may imply that, as AI receives immense hype at the beginning, the “numbing” force concerning potential problems with the new technology could have its strongest effect early on, thus rapidly leading to a more positive evaluation as the technology is not only introduced but diffused throughout society. Later on, seven experiments repeatedly indicated that not only the stimulus itself but also the cognitive priming of participants who were instructed that a stimulus would occur leads to the Mere Exposure Effect [55]. This helps to appreciate that not only direct contact with AI systems but also the simple act of talking, reading, and hearing so much about AI leads to its integration as a normal fact of life in society, including a more positive sentiment towards it. The effect has been shown to hold even in the medical profession, where repeated exposure leads to a more relaxed attitude towards a diagnosis [56]. Online advertisers use exactly this phenomenon, repeatedly exposing potential customers to stimuli and thus expecting better performance due to the Mere Exposure Effect [57], a mechanism that has now also been corroborated by neuroscientific experiments using ERPs [58]. Most interestingly, experiments by Weeks et al. [59] showed that the Mere Exposure Effect led to a tolerance of ethically questionable behavior, a finding that strongly coincides with the central claim of the present “next to normal”-thesis, namely that AI will not only become more commonplace but that, as it does so, society will also become more ethically tolerant towards Artificial Intelligence.
As discussed above, the reverse dynamics also apply, since higher opacity can increase doubts and fears towards a system. This is especially true in the field of medicine, where errors may have devastating consequences for a patient [47]. In the broader literature, this is discussed under the heading of warranted fears towards “black-box algorithms”, from which the present idea of the Black Box Effect receives its name [60]. This effectively leads to users engaging in “black-box interactions”, which has real implications for societal trust in AI [61].
The Mere Exposure Effect and the Black Box Effect can thus be conceptualized as two opposing forces within the “next to normal”-thesis. While the former leads to a normalization and positive perception of AI, the latter sustains skepticism and concern, especially among those at the forefront of AI development. For the general public, increased integration will likely lead to a perception of AI as benign and normal, which will probably lead to an underrepresentation of critical thought towards the outputs of an LLM (for example, the answers provided by ChatGPT being taken as authoritative and not scrutinized enough). For experts and IT creators, however, the mysteries surrounding AI’s capabilities will probably continue to raise alarms in the near future. Nevertheless, even for experts, the longer society treats AI as something that has become normal, the less alarmed they themselves might become, which eventually also decreases the concerns of those at the forefront.
3.4. The Past, the Present and the Future
As the thesis would predict, in past years, during the introduction of new powerful LLMs, the new capabilities of AI seemed “scary” to many [21]. This was exemplified by the title of an article published in The Guardian: “A robot wrote this entire article. Are you scared yet, human?” [20]. Indeed, as the public had had virtually no prior exposure to LLMs, the technology seemed both intriguing and potentially dangerous. Today, however, as many have access to and experience with NLP tools and LLMs such as ChatGPT from OpenAI, Bard from Google, Claude 2 from Anthropic, Llama 2 from Meta, and many others, these systems no longer appear to evoke the same fearful sentiments among most of the lay audience, see, e.g., [62]. In fact, many now use LLMs as a viable asset, and the number of users is likely to increase further during the natural diffusion phase of the technology’s life cycle [63]. Gradually, LLMs are becoming normal, and the ones acting in an alarmist way are mostly academics and experts in the field. In the far future, however, as a potential interpretation of the “next to normal”-thesis would hold, even experts will become so accustomed to the everyday normalcy of the technology that, even though it is not fully understood, AI will not seem so bad since it has benefitted us for so long (that is, under the assumption that in the meantime AI has not gone rogue and threatened society in any existential sense). Then, it is possible that the Mere Exposure Effect will have reached all parts of society, and the potential dangers, problems, and threats will altogether be seen as negligible. One may liken this to the threat of nuclear weapons in the 20th century: whereas at the beginning citizens were truly afraid that their country might be wiped out by an atomic bomb, today barely anyone appears to be afraid of being exterminated at the push of a nuclear button, arguably not even those working with nuclear materials by profession.
As AI continues its trajectory towards becoming “next to normal”, several implications arise, both positive and negative. On the positive side, the seamless integration of AI into our daily lives can lead to increased efficiency, convenience, and even new innovations and solutions to complex problems. For instance, AI-driven medical diagnostics can lead to faster and more accurate diagnoses, while AI-driven traffic management can result in smoother traffic flow and reduced congestion. Hence, the willingness to consider AI as normal and to implement it in many of our processes holds the potential for many societal benefits. On the flip side, however, the normalization of AI brings with it a set of challenges. The most pressing of these is the potential for complacency regarding AI safety concerns. As AI becomes more integrated and familiar, there is the risk that the general public, and eventually even the experts, might downplay or dismiss genuine safety concerns. This could lead to scenarios where potential risks are not adequately addressed until it is too late. Another implication of the “next to normal”-thesis is the potential for a widening gap between the general public’s perception of AI and that of experts in the field. While the public might view AI as benign and normal due to increased exposure, experts, especially those who understand the complexities and potential risks associated with AI, might remain more cautious for much longer. This discrepancy in perceptions can lead to challenges in AI governance and regulation, as policymakers grapple with balancing the benefits of AI integration against the potential risks. Since policy-making in a democratic society is supposed to be driven by the people, the progressive “next to normal”-effect might be represented in our new laws more heavily than the conservative and more cautious “black box”-effect. This could lead to an underrepresentation of potential AI risks in governance and policy, cf. [64]. At the same time, changes would also be needed in the education system, since this is the place where people ought to learn to deal with new pervasive technologies like AI. According to Thurzo and colleagues [65], students should be taught to understand at least the most important factors around generative AI, including (i) basic AI terminology, (ii) treatment and diagnostic capabilities in the case of medical students, (iii) AI’s risks and benefits, (iv) ethical considerations, and (v) the impact of AI on the diversity of affected groups.
There are some rather concrete examples of how the normalization of AI affects important areas such as cybersecurity, privacy, and socio-economic conditions. In the sphere of cybersecurity, the normalization of AI presents a double-edged sword. On one hand, AI’s integration into various systems can enhance security measures, making them more robust and intelligent. On the other hand, it introduces new vulnerabilities and complexities. As AI systems become more sophisticated, so do the methods to exploit them. Cyber threats enabled by AI, such as advanced phishing attacks or security bypass mechanisms, could become more prevalent and harder to detect. The normalization of AI might inadvertently lead to complacency in cybersecurity vigilance, underestimating the sophistication and potential of AI-enabled cyber threats.
Privacy concerns are equally paramount. The deeper integration of AI into everyday technologies often necessitates the collection and analysis of large volumes of personal data. This trend could lead to an erosion of privacy norms, where extensive data collection by AI systems becomes an accepted standard practice. The risk here is that individuals may become desensitized to privacy intrusions, leading to an uncritical acceptance of invasive technologies. In a society where AI is normalized, surveillance and data collection could become not only ubiquitous but also largely unquestioned, posing significant threats to individual liberties and autonomy. A historical example of a similar development can be seen both in social media platforms and search engines commercializing our private data, and in the fact that we now routinely allow websites to set cookies on our computers in order to access their content.
Among the associated social implications, one of the most pressing concerns is the potential for job displacement as AI and automation technologies become capable of performing tasks traditionally done by humans. This could lead to increased economic inequalities, disproportionately affecting those lacking AI-related skills. Moreover, the concentration of AI technology and expertise in the hands of a few powerful corporations could lead to increased economic disparities and monopolistic practices. The challenge is to ensure that the benefits of AI technologies are equitably distributed and that measures are in place to support those adversely affected by these technological shifts.
Addressing these implications requires a diverse set of actions. Strengthening cybersecurity measures is essential to counter the sophisticated threats posed by AI; this includes developing AI-specific security protocols and investing in AI-driven security solutions. Enhancing privacy protections is another critical step: policymakers and technology developers must prioritize privacy in the design of AI systems, implementing robust data protection measures and ensuring transparency in how personal data is used. To address socio-economic disparities, policies should be developed to manage the impacts of AI, such as retraining programs for workers displaced by AI and automation, and to ensure equitable access to AI technologies. Increasing public awareness about the implications of AI normalization is hence crucial: educating the public about the potential risks and benefits of AI, as well as fostering a critical understanding of AI technologies, is necessary to prepare society for these changes.
There is hence an interesting balancing act within the “next to normal” approach, in which the Mere Exposure Effect and the Black Box Effect compete against each other. Most likely, the former outweighs the latter, because explainability is probably going to increase and because human complacency often prioritizes usefulness over intangible problems.
Given the implications of the “next to normal”-thesis, several recommendations can be made for both the AI industry and policymakers:
• Increased Transparency: One of the primary concerns associated with AI is its “black box”-nature. By increasing transparency about these problems and investing in making AI systems more interpretable, we can address some of the concerns associated with the Black Box Effect.
• Continuous Education: As AI continues to evolve, it is crucial to ensure that both the general public and experts in the field are continuously educated about its capabilities, potential risks, and best practices.
• Robust Regulation: Policymakers should work closely with AI experts to develop robust regulations that balance the benefits of AI integration with potential risks. This includes regulations on AI safety, ethics, and governance.
• Ethical AI Design and Development: Incorporating ethical considerations into the design and development phase of AI systems can proactively address many potential issues. This involves embedding ethical principles, such as fairness, accountability, and respect for user autonomy, right from the initial stages of AI development. Ethical AI design also entails a commitment to creating systems that are inclusive and do not perpetuate existing societal biases.
• Public Participation and Engagement: As AI becomes more normalized, it is crucial to involve the public in discussions about AI development and its societal implications. This could include public forums, consultations, and collaborative projects that bring together AI developers, users, ethicists, and other stakeholders. Engaging the public not only helps in building trust but also ensures that diverse perspectives are considered in shaping AI policies and practices.
• Systematic Risk Assessments and Audits: Regular and systematic risk assessments and audits of AI systems can help identify and mitigate potential safety concerns. These assessments should be conducted by independent bodies and include evaluations of data privacy, algorithmic fairness, and the potential for unintended consequences. Regular audits ensure continuous monitoring and accountability, especially for AI systems used in critical sectors.
• Empirical Research: More empirical research is needed to understand the interplay between the Mere Exposure Effect, the Black Box Effect, and the broader perception of AI. This research can inform both AI development and policy decisions.
In conclusion, the “next to normal”-thesis provides a novel and compelling narrative of AI’s future trajectory. As AI becomes more integrated into our daily lives, it is crucial to understand the associated psychosocial implications, especially concerning AI safety. By addressing these concerns proactively and ensuring that AI development is guided by principles of transparency, ethics, and safety, we can harness the benefits of AI integration while mitigating potential risks. In any case, the near future of AI will be next to normal, and in the far future AI will be fully integrated in society and considered something completely normal.
This research received no external funding.
There are no competing interests to declare.