
Perspective | Open Access

The Future of Artificial Intelligence Will Be “Next to Normal”—A Perspective on Future Directions and the Psychology of AI Safety Concerns

Author Information
Kalaidos University of Applied Sciences, Institute for Management & Digitalization, 8050 Zurich, Switzerland
Nature Anthropology 2024, 2 (1), 10001;  https://doi.org/10.35534/natanthropol.2024.10001

Received: 01 November 2023; Accepted: 06 December 2023; Published: 05 January 2024

Creative Commons

© 2024 The authors. This is an open access article under the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/).

ABSTRACT: This paper introduces the AI “next to normal”-thesis, suggesting that as Artificial Intelligence becomes more ingrained in our daily lives, it will transition from a sensationalized entity to a regular tool. However, this normalization has psychosocial implications, particularly when it comes to AI safety concerns. The “next to normal”-thesis proposes that AI will soon be perceived as a standard component of our technological interactions, with its sensationalized nature diminishing over time. As AI’s integration becomes more seamless, many users may not even recognize their interactions with AI systems. The paper delves into the psychology of AI safety concerns, discussing the “Mere Exposure Effect” and the “Black Box Effect”. While the former suggests that increased exposure to AI leads to a more positive perception, the latter highlights the unease stemming from not fully understanding its capabilities. These effects can be seen as two opposing forces shaping the public’s perception of the technology. The central claim of the thesis is that as AI becomes normal, human psychology will evolve alongside it and safety concerns will diminish, which may have practical consequences. The paper concludes by discussing the implications of the “next to normal”-thesis and offers recommendations for industry and policymakers, emphasizing the need for increased transparency, continuous education, robust regulation, and empirical research. The future of AI is envisioned as one that is seamlessly integrated into society, yet it is imperative to address the associated safety concerns proactively and not let the normalization effects take hold of them.
Keywords: AI; Artificial intelligence; Mere exposure effect; Black box effect; AI safety concerns; Technological developments

1. Introduction

Artificial Intelligence (AI) has made a profound impact on the global stage, revolutionizing various sectors and becoming an integral part of our daily lives. From voice assistants like Siri and Alexa to recommendation algorithms on platforms like Netflix and Amazon, the technology’s presence is undeniable [1]. Since the launch of ChatGPT in November 2022, there has been significant hype surrounding AI, with its capabilities and potential at the center of many discussions [2,3]. This excitement, however, is gradually giving way to a sense of normalization as people become more accustomed to its presence. Microsoft’s rollout of “Copilot”, which aims to integrate AI into all of the company’s products and make it more accessible and commonplace, is a testament to this trend [4]. The present article introduces the AI “next to normal”-thesis, suggesting that as AI becomes more integrated into our daily routines, it will transition from being a sensationalized concept to a regular tool, much like any other technological advancement. However, this normalization brings with it psychosocial implications, especially concerning AI safety concerns, which will be the focal point of the discussion.

2. The “Next to Normal”—Thesis

The “next to normal”-thesis posits that AI, in its trajectory, will soon be perceived as a regular component of our technological interactions. This perspective is twofold:
• AI will operate alongside other tools, becoming an integrated part of our technological landscape.
• As we become more accustomed to AI, its once sensationalized nature will diminish, transitioning from being “next to normal” to simply “normal”.
The integration of this technology into our daily lives is becoming more seamless [5], making it challenging for many to discern between AI-powered and non-AI-powered tools [6]. This seamless integration means that, for many, AI is perceived as just another component of a machine or computer, devoid of its current sensational character. For instance, users already interact with AI when searching on Google, receiving recommendations on Amazon or Netflix, or swiping on Tinder, without necessarily recognizing that they are dealing with Artificial Intelligence in these processes [7,8]. AI will thus be a tool alongside all the other tools we handle in our daily lives; it will be “next to normal”, as in next to all the other tools we consider normal.
As AI becomes more ingrained in our daily routines [9,10], there will likely be a diminishing awareness of its presence [11]. This “oblivion” means that many users will not realize that they are interacting with an AI system, as it will be perceived as just another part of the machine they are using. This normalization and lack of discernment can be likened to how users interact with search engines or recommendation systems without necessarily recognizing the underlying AI mechanisms [12,13,14]. The technology will hence also become “next to normal” in a second sense: the next normal thing that becomes ordinary in our regular technology use. For example, when e-mail first entered the stage, people deemed it astonishing that one could communicate with someone on the other side of the planet via text messages almost in real time [15]. It was a natural inclination to ask: “What will this mean for our businesses, our societies, friendships, love lives, and perhaps even our psychological make-ups?” Nowadays, however, nobody bothers to ask these questions about e-mail. We have, in a sense, become desensitized to the intriguing nature and possibilities of e-mail because it has become normal to us. Today, we are asking such questions about AI—and rightly so, since there are still considerable opportunities but also risks associated with the technology.

3. The Evolving Psychology of AI Safety Concerns

3.1. The Mere Exposure Effect

The “Mere Exposure Effect” [16] is a well-documented psychological phenomenon suggesting that increased exposure to something leads to a more positive evaluation thereof [17,18,19]. This effect has significant implications for AI safety concerns. When AI systems like GPT-3 were introduced, there was significant apprehension and fear surrounding their capabilities, see, e.g., [20]. However, as these systems become more integrated and familiar, the general public’s perception becomes more positive. This shift in perception could potentially lead to an underrepresentation of genuine AI safety concerns. To a certain extent, this has already proven true historically: when AI-driven tools first emerged, they were met with skepticism and fear [21]. Over time, as people became more familiar with these tools and began to see their benefits, the initial fear gave way to acceptance and even enthusiasm, e.g., [22]. This can be seen in the adoption rates of AI-driven technologies like virtual assistants and recommendation algorithms [23,24]. As these technologies became more commonplace, the initial fear and skepticism surrounding them diminished, and they came to be seen in a more positive light.
In several domains, there is empirical evidence that such a normalization process is under way for Artificial Intelligence. Strange and Tucker [25], for example, conducted case studies at the OHCHR, the WHO, and UNESCO and argued that, in politics, the idea of AI has become an ‘empty signifier’ (as already hinted at in [26]). An analysis of 233,914 English tweets showed that, after its introduction, ChatGPT was met with considerable sensationalism but was quickly adopted in many aspects of daily life and normalized surprisingly fast, as it came to be used for creative writing, essay writing, prompt writing, coding, and answering knowledge questions [27]. Websites like “Normal AI” (normalai.org) have even emerged, attempting to establish a ‘normal’ discussion of Artificial Intelligence without any sensationalism attached. In business outlets, AI has been dubbed “the new normal” without denying its revolutionary nature [28]. In his book “The Culture of AI”, Elliott [9] explains how AI is the basis of our modern digital revolution, but also how it is being implemented in our everyday lives at an incredible speed, becoming part of our ordinary experience. This is also true in the creative industries, an area that commentators had hitherto thought computers could never invade [29]. Such dynamics likewise hold for the education system, where experts claim that AI is not yet standardized but soon will be, judging by its adoption across the landscape [30]. Hence, there is barely an area of life untouched by AI: it is now integrated into new tools and innovations [31], decentralized technology [32], medicine [33], and the world of business and recruiting [34], and through the Internet of Things (IoT) it is even arriving in our homes [35]. Although initial deployment is usually accompanied by excitement or concern, the Mere Exposure Effect predicts that its general perception will quite rapidly become largely positive. However, this positive shift in perception can be problematic. As AI becomes more integrated into our daily lives and its capabilities continue to grow, there is a risk that genuine safety concerns might be overshadowed or dismissed.
This could lead to a complacent attitude towards the technology, where potential risks are not adequately addressed. Drawing on the Mere Exposure Effect, the “next to normal”-thesis claims that as AI technologies become more and more normal through their integration into our everyday lives, the general public will grow numb to the potential problems of AI and will generally not view it in negative terms. Practical issues like LLM hallucinations, or even existential threats like instrumental convergence (see Nick Bostrom’s idea of the paperclip maximizer), will likely not be considered highly disconcerting [36,37,38].
These psychological developments raise a number of manifest ethical concerns. One of the most pressing issues is the potential for over-reliance on AI, especially in critical fields like healthcare or criminal justice. This dependency could lead to a significant reduction in human oversight, raising concerns about the ethical implications of AI-driven decisions. Another critical concern is the erosion of privacy norms. As society grows more accustomed to AI, there is a risk that individuals may become more accepting of extensive data collection and surveillance, potentially leading to an uncritical acceptance of invasive technologies. This shift can pose a threat to individual liberties and autonomy. Further, the normalization of AI could exacerbate existing power imbalances. The control of AI by a few powerful entities, such as tech companies and governments, raises concerns about ensuring that AI benefits society equitably. Additionally, the shifting of moral responsibility in an AI-integrated world complicates ethical accountability, making it difficult to pinpoint who is responsible for AI-driven actions. The unequal distribution of AI benefits could reinforce existing inequalities, giving disproportionate advantages to those with better access to these technologies. Moreover, as AI transforms the nature of work, questions arise about wealth redistribution, retraining displaced workers, and maintaining fair labor practices in an increasingly automated economy. Lastly, the use of AI in governance and public policy decisions necessitates transparency and democratic control. Ensuring that these decisions undergo public scrutiny and adhere to democratic processes is crucial. However, if the Mere Exposure Effect leads to an uncritical adoption of AI systems, then such ethical concerns will inevitably be underrepresented.

3.2. The Black Box Effect

Despite the current increasing familiarity with AI, there remains a segment of experts and creators who express skepticism and concern. Notable figures like Sam Altman, Elon Musk, and Bill Gates have voiced apprehensions about the technology’s potential dangers [39]. Strong proponents of the notion that AI is exceedingly dangerous include experts such as Eliezer Yudkowsky and Nick Bostrom [40]. According to the “next to normal”-thesis, they too should have become used to AI and therefore not be afraid of it. Why, then, are they still voicing strong concerns? Their skepticism, despite their deep involvement with AI, might be attributed to what is here referred to as the “Black Box Effect”. This effect suggests that things we do not fully understand induce a sense of danger or unease, cf. [41]—especially in those who ought to understand them because they work with them most intimately in their profession [42]. Since not even experts like these yet fully understand why AI systems can sometimes exhibit human-like capabilities [43,44], this lack of understanding may counteract the diminishing AI safety concerns brought about by the Mere Exposure Effect. For instance, while the general public might be more accepting of AI due to increased exposure, experts who understand the complexities and potential risks associated with AI remain cautious, since, at the heart of it, they still do not quite grasp how these new emergent capabilities and dynamics come about. This caution stems from the fact that the technology, in many cases, remains a “black box”: we know what goes in, we can analyze the math that is carried through, and we see the results that come out, but we do not necessarily understand everything that happens in between [45,46,47,48]. Most of all, it is a mystery how the mathematical operations effectively lead to the semantic results we can now observe in LLMs. This lack of understanding may lead to a sense of unease and skepticism, even amongst those who are deeply involved in the field of AI.
The most prevalent alignment concerns in AI safety engineering are the following [49,50,51,52,53]:
• Data Privacy and Security: AI systems often require access to large amounts of data, which raises concerns about privacy and the protection of sensitive information. There is a risk of data breaches, unauthorized access, and misuse of personal data.
• Malicious Use of AI: AI can be used for harmful purposes, such as creating deepfakes, automating cyber-attacks, or developing autonomous weapons. The increasing accessibility of AI tools makes it easier for malicious actors to exploit these technologies.
• Bias and Discrimination: AI systems can inherit biases present in their training data, leading to discriminatory outcomes. This is particularly concerning in areas like hiring, law enforcement, and loan approvals, where biased AI could perpetuate or exacerbate existing inequalities.
• Lack of Explainability and Transparency: Many advanced AI systems, particularly those based on deep learning, are often seen as opaque due to their complexity. This lack of transparency can make it difficult to understand, predict, or control how these systems make decisions.
• Autonomous Systems and Loss of Control: As AI systems become more autonomous, there is a risk that they could act in unpredictable or unintended ways, especially if they are operating in complex environments or making high-stakes decisions.
• AI and Cybersecurity: AI can both enhance and weaken cybersecurity. While it can improve threat detection and response, AI systems themselves can be targets of cyber-attacks, potentially leading to the alteration of algorithms or the manipulation of data.
• Regulatory and Ethical Challenges: The rapid advancement of AI technologies often outpaces the development of regulatory frameworks. There is a need for international standards and regulations to ensure the ethical and responsible use of AI.
• AI Disinformation and Propaganda: AI can generate convincing fake content, which can be used to spread misinformation or propaganda at scale. This poses significant challenges for societies, as it can undermine trust in media and institutions.
• Job Displacement and Economic Impact: Automation through AI could lead to significant job displacement in various industries. This raises concerns about the economic impact on workers and the need for strategies to manage the transition in the labor market.
• Dependency and Reduced Human Skills: Over-reliance on AI can lead to a decline in certain human skills and capabilities. There is a risk that critical decision-making may become too dependent on AI, potentially leading to vulnerabilities if the AI systems fail.
As artificial neural networks grow ever larger, the systems become more opaque, making it more difficult to understand how their emergent capabilities arise. The Black Box Effect thus exacerbates these concerns in AI safety. Although the discipline of “mechanistic interpretability” is making strides towards a better understanding of the dynamics at hand, we are still far from understanding the emergent properties of artificial neural networks.

3.3. Synthesis

Yagi, Ikoma, and Kikuchi [54] have experimentally shown that there is an attentional modulation of the Mere Exposure Effect, meaning that selective attention is responsible for an increase in the effect. This may imply that, as AI receives immense hype at the beginning, the “numbing” force concerning potential problems with the new technology could have its strongest effect early on, rapidly leading to a more positive evaluation as the technology is not only introduced but diffused throughout society. Later, seven experiments repeatedly indicated that not only the stimulus itself but also cognitive priming, i.e., participants merely being instructed that a stimulus will occur, produces the Mere Exposure Effect [55]. This helps to appreciate that not only direct contact with AI systems but also the simple act of talking, reading, and hearing so much about AI leads to its integration as a normal fact of life in society, including a more positive sentiment towards it. The effect has been shown to hold even in the medical profession, where repeated exposure leads to a more relaxed attitude towards a diagnosis [56]. Online advertisers use exactly this phenomenon, repeatedly stimulating potential customers and expecting better performance due to the Mere Exposure Effect [57], a finding that has since been strengthened by neuroscientific experiments using event-related potentials (ERPs) [58]. Most interestingly, experiments by Weeks et al. [59] showed that the Mere Exposure Effect led to a tolerance of ethically questionable behavior—a finding that strongly coincides with the central claim of the present “next to normal”-thesis, namely that AI will not only become more commonplace, but that as it does so, society will also become more ethically tolerant towards Artificial Intelligence. As discussed above, the reverse dynamics also apply, since higher opacity can increase doubts and fears towards a system. This is especially true in the field of medicine, where errors may have devastating consequences for a patient [47]. In the broader literature, this is discussed under the heading of warranted fears towards “black-box algorithms”, from which the present Black Box Effect takes its name [60]. This effectively leads to users engaging in “black-box interactions”, which has real implications for societal trust in AI [61]. The Mere Exposure Effect and the Black Box Effect can thus be conceptualized as two opposing forces in the “next to normal”-thesis spectrum.
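The interplay between these two forces can be illustrated with a deliberately simple toy model. This is a sketch constructed for the present argument, not an empirically validated model from the cited literature; the function name and all parameter values are arbitrary assumptions chosen only to display the qualitative dynamic: familiarity grows and saturates with cumulative exposure (the Mere Exposure term), while concern is proportional to perceived opacity and fades as familiarity sets in (the Black Box term, damped by normalization).

```python
import math

def attitude(t, exposure_rate=1.0, opacity=0.8, k_familiarity=0.15, k_concern=0.5):
    """Hypothetical net attitude towards an AI system at time t (positive = favourable).

    familiarity: saturating Mere-Exposure term driven by cumulative exposure.
    concern:     Black-Box term proportional to opacity, damped as familiarity grows.
    All parameters are illustrative assumptions, not empirical estimates.
    """
    cumulative_exposure = exposure_rate * t
    familiarity = 1.0 - math.exp(-k_familiarity * cumulative_exposure)
    concern = k_concern * opacity * (1.0 - familiarity)
    return familiarity - concern

# Lay public vs. experts: same exposure, but opacity weighs far more heavily for experts.
for label, k_concern in [("lay public", 0.45), ("experts", 1.5)]:
    trajectory = [round(attitude(t, k_concern=k_concern), 2) for t in range(0, 25, 4)]
    print(f"{label:>10}: {trajectory}")
```

On these assumed numbers, both groups start out skeptical, the lay public turns favourable quickly, and experts follow only later; the model merely states that ordering precisely and is not evidence for it.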
While the Mere Exposure Effect leads to a normalization and positive perception of AI, the Black Box Effect retains skepticism and concern, especially among those at the forefront of AI development. For the general public, increased integration will likely lead to a perception of AI as benign and normal, which will probably result in an underrepresentation of critical thought towards the outputs of an LLM (for example, the answers provided by ChatGPT are taken as authoritative and are not scrutinized enough). For experts and IT creators, however, the mysteries surrounding AI’s capabilities will probably continue to raise alarms in the near future. Nevertheless, even for experts, the longer society treats AI as something that has become normal, the less alarmed they themselves might become, which eventually also decreases the concerns of those at the forefront.

3.4. The Past, the Present and the Future

As the thesis would predict, in recent years, during the introduction of new powerful LLMs, the new capabilities of AI seemed “scary” to many [21]. This was exemplified in the title of an article published in The Guardian: “A robot wrote this entire article. Are you scared yet, human?” [20]. Indeed, as the public had had virtually no prior exposure to LLMs, the technology seemed both intriguing and potentially dangerous. Today, however, as many have access to and experience with NLP-based LLMs such as ChatGPT from OpenAI, Bard from Google, Claude 2 from Anthropic, LLaMA-2 from Meta, and many others, these systems no longer appear to raise the same fearful sentiments among most of the lay audience, cf. [62]. In fact, many now use LLMs as a viable asset, and the number of users is likely to increase as the technology moves through the natural diffusion phase of its life cycle [63]. Gradually, LLMs are becoming normal, and those acting in an alarmist way are mostly academics and experts in the field. In the far future, however, as a potential interpretation of the “next to normal”-thesis would hold, even experts will become so accustomed to the everyday normalcy of the technology that AI, even though not fully understood, will not seem so bad, since it will have benefited us for so long (that is, under the assumption that in the meantime AI has not gone rogue and threatened society in any existential sense). It would then be possible that the Mere Exposure Effect will have reached all members of society, and the potential dangers, problems, and threats will altogether be seen as negligible. One may liken this to the threat of nuclear weapons in the 20th century: whereas at the beginning citizens were truly afraid that their country might be wiped out by an atomic bomb, today barely anyone appears to be afraid of being exterminated at the push of a nuclear button—arguably not even those working with nuclear materials by profession.

4. Implications of the “Next to Normal”—Thesis

As AI continues its trajectory towards becoming “next to normal”, several implications arise, both positive and negative. On the positive side, the seamless integration of AI into our daily lives can lead to increased efficiency, convenience, and even the potential for new innovations and solutions to complex problems. For instance, AI-driven medical diagnostics can lead to faster and more accurate diagnoses, while AI-driven traffic management can result in smoother traffic flow and reduced congestion. Hence, the willingness to consider AI as normal and to implement it in many of our processes holds the potential for substantial societal benefits.
On the flip side, however, the normalization of AI brings with it a set of challenges. The most pressing of these is the potential for complacency regarding AI safety concerns. As AI becomes more integrated and familiar, there is a risk that the general public, and eventually even the experts, might downplay or dismiss genuine safety concerns. This could lead to scenarios where potential risks are not adequately addressed until it is too late. Another implication of the “next to normal”-thesis is the potential for a widening gap between the general public’s perception of AI and that of experts in the field. While the public might view AI as benign and normal due to increased exposure, experts, especially those who understand the complexities and potential risks associated with AI, might remain more cautious for much longer. This discrepancy in perceptions can lead to challenges in AI governance and regulation, as policymakers grapple with balancing the benefits of AI integration against the potential risks. Since policy-making in a democratic society is supposed to be driven by the people, the progressive “next to normal”-effect might be represented in our new laws more heavily than the conservative and more cautious “black box”-effect. This could lead to an underrepresentation of potential AI risks in governance and policy, cf. [64].
At the same time, changes would also be needed in the education system, since this is where people ought to learn to deal with new pervasive technologies like AI. According to Thurzo and colleagues [65], students should be taught to understand at least the most important factors around generative AI, including (i) basic AI terminologies, (ii) treatment and diagnostic capabilities in the case of medical aspirants, (iii) AI’s risks and benefits, (iv) ethical considerations, and (v) the impact of AI on the diversity of affected groups.
There are some rather concrete examples of how the normalization of AI affects important domains such as cybersecurity, privacy, and socio-economic conditions. In the sphere of cybersecurity, the normalization of AI presents a double-edged sword. On the one hand, AI’s integration into various systems can enhance security measures, making them more robust and intelligent. On the other hand, it introduces new vulnerabilities and complexities. As AI systems become more sophisticated, so do the methods to exploit them. Cyber threats enabled by AI, such as advanced phishing attacks or security bypass mechanisms, could become more prevalent and challenging to detect. The normalization of AI might inadvertently lead to complacency in cybersecurity vigilance, underestimating the sophistication and potential of AI-enabled cyber threats.
Privacy concerns are equally paramount. The deeper integration of AI into everyday technologies often necessitates the collection and analysis of large volumes of personal data. This trend could lead to an erosion of privacy norms, where extensive data collection by AI systems becomes an accepted standard practice. The risk here is that individuals may become desensitized to privacy intrusions, leading to an uncritical acceptance of invasive technologies. In a society where AI is normalized, surveillance and data collection could become not only ubiquitous but also largely unquestioned, posing significant threats to individual liberties and autonomy. A historical example of a similar development can be seen both in social media platforms and search engines commercializing our private data, and in the fact that we now routinely allow websites to set cookies on our computers in order to access their content.
One of the most pressing social concerns is the potential for job displacement as AI and automation technologies become capable of performing tasks traditionally done by humans. This could lead to increased economic inequalities, disproportionately affecting those lacking AI-related skills. Moreover, the concentration of AI technology and expertise in the hands of a few powerful corporations could lead to increased economic disparities and monopolistic practices. The challenge is to ensure that the benefits of AI technologies are equitably distributed and that measures are in place to support those adversely affected by these technological shifts.
Addressing these implications requires a diverse set of actions. Strengthening cybersecurity measures is essential to counter the sophisticated threats posed by AI. This includes developing AI-specific security protocols and investing in AI-driven security solutions. Enhancing privacy protections is another critical step. Policymakers and technology developers must prioritize privacy in the design of AI systems, implementing robust data protection measures and ensuring transparency in how personal data is used. To address socio-economic disparities, policies should be developed to manage the impacts of AI, such as retraining programs for workers displaced by AI and automation, and ensuring equitable access to AI technologies. Increasing public awareness about the implications of AI normalization is hence crucial. Educating the public about the potential risks and benefits of AI, as well as fostering a critical understanding of AI technologies, is necessary to prepare society for these changes.
There is hence an interesting balancing act within the “next to normal” approach, in which the Mere Exposure Effect and the Black Box Effect compete against each other. Most likely, the former will outweigh the latter, both because explainability is likely to increase and because human complacency tends to prioritize usefulness over intangible problems.

5. Future Directions and Recommendations

Given the implications of the “next to normal”-thesis, several recommendations can be made for both the AI industry and policymakers:
• Increased Transparency: One of the primary concerns associated with AI is its “black box” nature. By increasing transparency about these problems and investing in making AI systems more interpretable (see the illustrative sketch after this list), we can address some of the concerns associated with the Black Box Effect.
• Continuous Education: As AI continues to evolve, it is crucial to ensure that both the general public and experts in the field are continuously educated about its capabilities, potential risks, and best practices.
• Robust Regulation: Policymakers should work closely with AI experts to develop robust regulations that balance the benefits of AI integration with the potential risks. This includes regulations on AI safety, ethics, and governance.
• Ethical AI Design and Development: Incorporating ethical considerations into the design and development phase of AI systems can proactively address many potential issues. This involves embedding ethical principles, such as fairness, accountability, and respect for user autonomy, from the initial stages of AI development. Ethical AI design also entails a commitment to creating systems that are inclusive and do not perpetuate existing societal biases.
• Public Participation and Engagement: As AI becomes more normalized, it is crucial to involve the public in discussions about AI development and its societal implications. This could include public forums, consultations, and collaborative projects that bring together AI developers, users, ethicists, and other stakeholders. Engaging the public not only helps in building trust but also ensures that diverse perspectives are considered in shaping AI policies and practices.
• Systematic Risk Assessments and Audits: Regular and systematic risk assessments and audits of AI systems can help identify and mitigate potential safety concerns. These assessments should be conducted by independent bodies and include evaluations of data privacy, algorithmic fairness, and the potential for unintended consequences. Regular audits ensure continuous monitoring and accountability, especially for AI systems used in critical sectors.
• Empirical Research: More empirical research is needed to understand the interplay between the Mere Exposure Effect, the Black Box Effect, and the broader perception of AI. This research can inform both AI development and policy decisions.
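To make the transparency recommendation above more tangible, the sketch below shows one simple, widely used, model-agnostic interpretability technique, permutation importance, applied to a stand-in “opaque” model. It is an illustrative example only, not a method proposed in this paper: the synthetic data, the feature names f0–f2, and the function `black_box_predict` are invented for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data standing in for inputs to an opaque model (invented for illustration).
n = 2000
X = rng.normal(size=(n, 3))  # columns: hypothetical features f0, f1, f2
y = (1.5 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.normal(size=n) > 0).astype(int)

def black_box_predict(X):
    """Stand-in for an opaque model: callers see only inputs and outputs."""
    return (1.5 * X[:, 0] - 2.0 * X[:, 1] > 0).astype(int)

def accuracy(y_true, y_pred):
    return float(np.mean(y_true == y_pred))

baseline = accuracy(y, black_box_predict(X))

# Permutation importance: shuffle one feature at a time and measure how much
# predictive accuracy drops. Features whose shuffling hurts most matter most,
# even though the model's internals are never inspected.
for j, name in enumerate(["f0", "f1", "f2"]):
    X_perm = X.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    drop = baseline - accuracy(y, black_box_predict(X_perm))
    print(f"importance of {name}: accuracy drop = {drop:.3f}")
```

Even when a model’s internals remain inaccessible, such input-output probes provide a first layer of transparency; fuller interpretability work, such as the mechanistic interpretability mentioned in Section 3.2, goes considerably further.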

6. Conclusions

In conclusion, the “next to normal”-thesis provides a novel and compelling narrative of AI’s future trajectory. As AI becomes more integrated into our daily lives, it is crucial to understand the associated psychosocial implications, especially concerning AI safety concerns. By addressing these concerns proactively and ensuring that AI development is guided by principles of transparency, ethics, and safety, we can harness the benefits of AI integration while mitigating potential risks. In any case, the near future of AI will be next to normal, and in the far future AI will be fully integrated into society and considered something completely normal.

Ethics Statement

Not applicable.

Informed Consent Statement

Not applicable.

Funding

This research received no external funding.

Declaration of Competing Interest

There are no competing interests to declare.

References

1.
Afjal M. ChatGPT and the AI Revolution: A Comprehensive Investigation of Its Multidimensional Impact and Potential. Libr. Hi Tech 2023, ahead-of-print. doi:10.1108/LHT-07-2023-0322.
2.
Deng J, Lin Y. The Benefits and Challenges of ChatGPT: An Overview.  Front. Comput. Intell. Syst. 2022, 2, 81–83. [Google Scholar]
3.
Weßels D. ChatGPT – ein Meilenstein der KI-Entwicklung. Mitteilungen Dtsch. Math. Ver. 2023, 31, 17–19. [Google Scholar]
4.
Microsoft. Microsoft Copilot, der tägliche KI-Begleiter. Available online: https://news.microsoft.com/de-ch/2023/09/21/ankundigung-von-microsoft-copilot-dem-taglichen-ki-begleiter/ (accessed on 26 October 2023).
5.
Nguyen NP, Mogaji E. Artificial Intelligence for Seamless Experience Across Channels. In Artificial Intelligence in Customer Service: The Next Frontier for Personalized Engagement; Springer International Publishing: Cham, Switzerland, 2023; pp. 181–203. 
6.
Yu S, Emani M, Liao C, Lin P-H, Vanderbruggen T, Shen X, et al. Towards Seamless Management of AI Models in High-Performance Computing. arXiv 2022, arXiv:2212.06352.
7.
Hoffmann AL. Terms of Inclusion: Data, Discourse, Violence. New Media Soc. 2021, 23, 3539–3556. [Google Scholar]
8.
Wallace AA. When AI Meets IoT: AIoT. In The Emerald Handbook of Computer-Mediated Communication and Social Media; Emerald Publishing Limited: Bingley, UK, 2022; pp. 481–492.
9.
Elliott A. The Culture of AI: Everyday Life and the Digital Revolution; Routledge: London, UK, 2019.
10.
Maedche A, Legner C, Benlian A, Berger B, Gimpel H, Hess T, et al. AI-Based Digital Assistants. Bus. Inf. Syst. Eng. 2019, 61, 535–544. [Google Scholar]
11.
Haider J, Rödl M. Google Search and the Creation of Ignorance: The Case of the Climate Crisis. Big Data Soc. 2023, 10. doi:10.1177/20539517231158997.
12.
Hassani H, Silva ES, Unger S, TajMazinani M, Mac Feely S. Artificial Intelligence (AI) or Intelligence Augmentation (IA): What Is the Future? AI 2020, 1, 143–155. [Google Scholar]
13.
Kong H, Yuan Y, Baruch Y, Bu N, Jiang X, Wang K. Influences of Artificial Intelligence (AI) Awareness on Career Competency and Job Burnout. Int. J. Contemp. Hosp. Manag. 2021, 33, 717–734. [Google Scholar]
14.
Li X-H, Cao CC, Shi Y, Bai W, Gao H, Qiu L, et al. A Survey of Data-Driven and Knowledge-Aware eXplainable AI. IEEE Trans. Knowl. Data Eng. 2022, 34, 29–49. [Google Scholar]
15.
Cohen-Almagor R. Internet History. In Moral, Ethical, and Social Dilemmas in the Age of Technology: Theories and Practice; IGI Global: New York, NY, USA, 2013; pp. 19–39.
16.
Zajonc RB. Attitudinal Effects of Mere Exposure. J. Pers. Soc. Psychol. 1968, 9, 1–27. [Google Scholar]
17.
Bornstein RF, Craver-Lemley C. Mere Exposure Effect. In Cognitive Illusions: Intriguing Phenomena in Thinking, Judgment and Memory, 2nd ed.; Routledge/Taylor & Francis Group: New York, NY, US, 2017; pp. 256–275.
18.
Harrison AA. Mere Exposure. Adv. Exp. Soc. Psychol. 1977, 10, 39–83. [Google Scholar]
19.
Montoya RM, Horton RS, Vevea JL, Citkowicz M, Lauber EA. A Re-Examination of the Mere Exposure Effect: The Influence of Repeated Exposure on Recognition, Familiarity, and Liking. Psychol. Bull. 2017, 143, 459–498. [Google Scholar]
20.
Porr L. A Robot Wrote This Entire Article. Are You Scared yet, Human? The Guardian 2020.
21.
Cave S, Coughlan K, Dihal K. “Scary Robots”: Examining Public Responses to AI. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, Honolulu, HI, USA, 27–28 January 2019; pp. 331–337.
22.
Baidoo-Anu D, Ansah LO. Education in the Era of Generative Artificial Intelligence (AI): Understanding the Potential Benefits of ChatGPT in Promoting Teaching and Learning. J. AI 2023, 7, 52–62. [Google Scholar]
23.
Abbass HA. Social Integration of Artificial Intelligence: Functions, Automation Allocation Logic and Human-Autonomy Trust. Cogn. Comput. 2019, 11, 159–171. [Google Scholar]
24.
Owsley CS, Greenwood K. Awareness and Perception of Artificial Intelligence Operationalized Integration in News Media Industry and Society. AI Soc. 2022, doi:10.1007/s00146-022-01386-2.
25.
Strange M, Tucker J. Global Governance and the Normalization of Artificial Intelligence as ‘Good’ for Human Health. AI Soc. 2023, doi:10.1007/s00146-023-01774-2.
26.
Laclau E, Mouffe C. Hegemony and Socialist Strategy: Towards A Radical Democratic Politics; Verso Books: London, UK, 2014.
27.
Taecharungroj V. “What Can ChatGPT Do?” Analyzing Early Reactions to the Innovative AI Chatbot on Twitter. Big Data Cogn. Comput. 2023, 7, 35. [Google Scholar]
28.
Pradhan I. Artificial Intelligence: The New Normal. Available online: https://www.theirm.org/news/artificial-intelligence-the-new-normal/ (accessed on 4 December 2023).
29.
Lee H-K. Rethinking Creativity: Creative Industries, AI and Everyday Creativity. Media Cult. Soc. 2022, 44, 601–612. [Google Scholar]
30.
Ng DTK, Lee M, Tan RJY, Hu X, Downie JS, Chu SKW. A Review of AI Teaching and Learning from 2000 to 2020. Educ. Inf. Technol. 2023, 28, 8445–8501. [Google Scholar]
31.
Buarque BS, Davies RB, Hynes RM, Kogler DF. OK Computer: The Creation and Integration of AI in Europe. Camb. J. Reg. Econ. Soc. 2020, 13, 175–192. [Google Scholar]
32.
Dinh TN, Thai MT. AI and Blockchain: A Disruptive Integration. Computer 2018, 51, 48–53. [Google Scholar]
33.
Khan SR, Al Rijjal D, Piro A, Wheeler MB. Integration of AI and Traditional Medicine in Drug Discovery. Drug Discov. Today 2021, 26, 982–992. [Google Scholar]
34.
George G, Thomas MR. Integration of Artificial Intelligence in Human Resource. Int. J. Innov. Technol. Explor. Eng. 2019, 9, 5069–5073. [Google Scholar]
35.
Nagaty KA. IoT Commercial and Industrial Applications and AI-Powered IoT. In Frontiers of Quality Electronic Design (QED): AI, IoT and Hardware Security; Springer International Publishing: Cham, Switzerland, 2023; pp. 465–500.
36.
Bostrom N. Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards. J. Evol. Technol. 2002, 9.
37.
Bostrom N. The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents. Minds Mach. 2012, 22, 71–85. [Google Scholar]
38.
Müller VC, Bostrom N. Future Progress in Artificial Intelligence: A Survey of Expert Opinion. In Fundamental Issues of Artificial Intelligence; Springer International Publishing: Cham, Switzerland, 2016; pp. 555–572.
39.
Barrat J. Our Final Invention: Artificial Intelligence and the End of the Human Era; Thomas Dunne Books: New York, NY, USA, 2023.
40.
Yudkowsky E, Bostrom N. The Ethics of Artificial Intelligence. In Artificial Intelligence Safety and Security; Chapman and Hall/CRC: London, UK, 2018; pp. 53–66.
41.
Dugas MJ, Laugesen N, Bukowski WM. Intolerance of Uncertainty, Fear of Anxiety, and Adolescent Worry. J. Abnorm. Child Psychol. 2012, 40, 863–870. [Google Scholar]
42.
von Eschenbach WJ. Transparency and the Black Box Problem: Why We Do Not Trust AI. Philos. Technol. 2021, 34, 1607–1622. [Google Scholar]
43.
Poon AIF, Sung JJY. Opening the Black Box of AI-Medicine. J. Gastroenterol. Hepatol. 2021, 36, 581–584. [Google Scholar]
44.
Rai A. Explainable AI: From Black Box to Glass Box. J. Acad. Mark. Sci. 2020, 48, 137–141. [Google Scholar]
45.
Asatiani A, Malo P, Nagbøl PR, Penttinen E, Rinta-Kahila T, Salovaara A. Challenges of Explaining the Behavior of Black-Box AI Systems. MIS Q. Executive 2020, 19, 259–278. [Google Scholar]
46.
Bathaee Y. The Artificial Intelligence Black Box and the Failure of Intent and Causation. Harv. J. Law Technol. Harv. JOLT 2017, 31, 889. [Google Scholar]
47.
Durán JM, Jongsma KR. Who Is Afraid of Black Box Algorithms? On the Epistemological and Ethical Basis of Trust in Medical AI. J. Med. Ethics 2021, 47, 329–335. [Google Scholar]
48.
Holm E. In Defense of the Black Box. Science 2019, 364, 26–27. [Google Scholar]
49.
Amodei D, Olah C, Steinhardt J, Christiano P, Schulman J, Mané D. Concrete Problems in AI Safety. arXiv 2016, arXiv:1606.06565.
50.
Cardon PW, Ma H, Fleischmann C. Recorded Business Meetings and AI Algorithmic Tools: Negotiating Privacy Concerns, Psychological Safety, and Control. Int. J. Bus. Commun. 2023, 60, 1095–1122. [Google Scholar]
51.
Hernández-Orallo J. AI Safety Landscape from Short-Term Specific System Engineering to Long-Term Artificial General Intelligence. In Proceedings of the 2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W), Valencia, Spain, 29 June 2020–2 July 2020; pp. 72–73.
52.
PAI. Safety Critical AI. Available online: https://partnershiponai.org/program/safety-critical-ai/ (accessed on 22 November 2023).
53.
Weidinger L, Rauh M, Marchal N, Manzini A, Hendricks LA, Mateos-Garcia J, et al. Sociotechnical Safety Evaluation of Generative AI Systems. arXiv 2023, arXiv:2310.11986.
54.
Yagi Y, Ikoma S, Kikuchi T. Attentional Modulation of the Mere Exposure Effect. J. Exp. Psychol. Learn. Mem. Cogn. 2009, 35, 1403–1410. [Google Scholar]
55.
Van Dessel P, Mertens G, Smith CT, De Houwer J. The Mere Exposure Instruction Effect. Exp. Psychol. 2017, 64, 299–314. [Google Scholar]
56.
Wills P. Mere Exposure Effect. In Decision Making in Emergency Medicine: Biases, Errors and Solutions; Springer: Singapore, 2021; pp. 209–213.
57.
Fortin DR, Wong MO. Now You See It, Now You Don’t: Empirical Findings from an Experiment on the Mere Exposure Effect of a Web-Based Advertisement. In Proceedings of the 2000 Academy of Marketing Science (AMS) Annual Conference; Springer International Publishing: Cham, Switzerland, 2015; pp. 397–397.
58.
Leynes PA, Addante RJ. Neurophysiological Evidence That Perceptions of Fluency Produce Mere Exposure Effects. Cogn. Affect. Behav. Neurosci. 2016, 16, 754–767. [Google Scholar]
59.
Weeks WA, Longenecker JG, McKinney JA, Moore CW. The Role of Mere Exposure Effect on Ethical Tolerance: A Two-Study Approach. J. Bus. Ethics 2005, 58, 281–294. [Google Scholar]
60.
Véliz C, Prunkl C, Phillips-Brown M, Lechterman TM. We Might Be Afraid of Black-Box Algorithms. J. Med. Ethics 2021, 47, 339–340. [Google Scholar]
61.
Shen MW. Trust in AI: Interpretability Is Not Necessary or Sufficient, While Black-Box Interaction Is Necessary and Sufficient. arXiv 2022, arXiv:2202.05302.
62.
Allioui H, Mourdi Y. Unleashing the Potential of AI: Investigating Cutting-Edge Technologies That Are Transforming Businesses. Int. J. Comput. Eng. Data Sci. IJCEDS 2023, 3, 1–12. [Google Scholar]
63.
Prasad Agrawal K. Towards Adoption of Generative AI in Organizational Settings. J. Comput. Inf. Syst. 2023. doi:10.1080/08874417.2023.2240744.
64.
Plokhy S. Atoms and Ashes: A Global History of Nuclear Disasters; W. W. Norton & Company: New York, NY, USA, 2022.
65.
Thurzo A, Strunga M, Urban R, Surovková J, Afrashtehfar KI. Impact of Artificial Intelligence on Dental Education: A Review and Guide for Curriculum Update. Educ. Sci. 2023, 13, 150. [Google Scholar]