
History and Artificial Intelligence: possibilities and risks for new technologies in the Humanities

By Arianna Boccamaiello, 01/10/2025

Abstract

The paper aims to analyze the use of Artificial Intelligence in the humanities, particularly in historical and archival research. The use of these new technologies opens up new horizons for the discipline, but also presents new challenges to overcome. Ignoring AI could be a loss of potential for historians; however, the risk of ethical and methodological issues needs to be acknowledged. 

 

Keywords

Ricerca Storica – Historical Research

Sfide Etiche – Ethical Challenges

Bias Algoritmico – Algorithmic Bias

Intelligenza Artificiale – Artificial Intelligence

Pensiero Critico – Critical Thinking

 

Table of contents 

Introduction

Possibilities

Risks

Conclusion

References

 

 

Introduction 

 

This paper aims to investigate how Artificial Intelligence (AI) can impact historical and archival research, and how scholars can enhance their work with AI while considering the risks associated with these new technologies. 

The emergence and growth of Artificial Intelligence have transformed the way humans work and make decisions. Artificial Intelligence is a broad term that encompasses various technologies capable of performing many different tasks. The European AI Act (2024) defines it as follows:

AI system means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments[1].

The European AI Act’s definition is deliberately vague: the speed at which AI companies and developers work and think requires flexibility. Every month a new AI system is released, and the technology of tomorrow is difficult to imagine today, so lawmakers opted for a definition broad enough to leave nothing out.

Some scholars, such as Spyros Makridakis[2], draw a parallel with other technological revolutions, suggesting that the AI Revolution will have a more significant impact on our lives than the Industrial Revolution. 

Nowadays, the scientific literature on the use of Artificial Intelligence in the humanities is scarce compared to other disciplines[3]. Two trends are nonetheless visible: some scholars support the use of AI in research and lament their colleagues’ slowness in accepting it, while many other academics warn against these machines and their potential impact on human nature, as recent results suggest that the outcome could be negative.

Historians can benefit greatly from incorporating AI into their research, and the paper shares the experiences of archivists, archaeologists, historians, and cultural heritage experts who have ventured into this new technology. However, behind the brilliant possibilities of technological advances lie the risks of repeating past mistakes and perpetuating the gender, racial, and power imbalances of the past. Not only are preexisting discrimination and biases already embedded in the machines’ virtuality and physicality, but new paradigms of error are also emerging in the field. The paper also argues that redefining too much of the historian’s work around these new technologies amounts to creating a new profession that requires different training.

Possibilities and risks are the two parts in which the paper inquires into the use of AI in History and the other humanities. The aim is to understand how complicated, but also how interesting, their involvement in research is and could be, avoiding both an uncritical, blind trust in AI and the opposite extreme of apocalyptic rejection.

The first section focuses on the possibilities, and the choice is not accidental: the paper’s goal is to provide a critical perspective on the use of AI, without demonizing it or portraying it as the sole solution for humanity’s survival. The digital and humanistic worlds have already combined, creating a new, essential hybrid figure. However, the digital humanist requires lengthy and demanding training that not every historian is willing to undertake. Imagining new AI models that are accessible and simplified could be a solution.

The second section focuses on the risks, exploring how AI, like every other technology, impacts the world and human nature. In contrast to the possibilities section, this second part is longer and more detailed. The imbalance reflects the fear that AI can easily convince people of its benefits, while its problems are often difficult to understand and frequently overlooked by the general public. Technophobia and technophilia are two useful concepts here: technophobia produces negative feelings and the rejection of technologies (not to be confused with computer anxiety), while technophilia is its opposite, an enthusiasm for technology that can sometimes reveal a dependence on it[4]. Leaning into either attitude can be dangerous for scholars.

Possibilities 

Some scholars have argued that the humanities are technology-phobic, but these disciplines have recently started to become more digital and technologically aligned[5]. However, the integration of AI in historical research represents more than simple technological adoption: it signals a fundamental epistemological shift in how historians approach knowledge creation and source analysis.

Museums and exhibitions have provided numerous examples of how to utilize AI and technologies to showcase the beauty of cultural heritage, through art visualizations, real-time interactions, VR, gaming, and content-based 3D object retrieval[6]. But beyond these presentational applications, AI is reshaping the fundamental practices of historical investigation. 

Impressive results have already been achieved with AI in material cultural heritage, as in the case of the Terracotta Warriors. This Chinese collection of thousands of life-sized statues was analyzed using various AI models, including Generative Adversarial Networks (GANs) and Random Forests, which helped scholars study the complexity of the entire group, not just the individual sculptures[7]. This case exemplifies a crucial methodological transformation: AI enables what we might call “macro-analytical synthesis”, the ability to analyze entire collections as integrated systems rather than isolated artifacts. Traditional art historical methods would require decades to achieve comparable analytical depth across such a large corpus. AI thus does not merely accelerate existing methods; it enables entirely new forms of historical inquiry.
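To make the statistical side of such classification concrete, here is a minimal sketch of how a Random Forest might attribute statues to workshops. The measurements and workshop labels are invented for illustration; they are not data or results from the Terracotta Warriors study.

```python
# A minimal Random Forest sketch. All figures are hypothetical.
from sklearn.ensemble import RandomForestClassifier

# Invented features per statue: [height_cm, ear_width_mm, base_depth_cm]
statues = [
    [183, 62, 40], [181, 63, 41], [184, 61, 40],  # attributed to workshop "east"
    [190, 70, 45], [191, 71, 44], [189, 72, 46],  # attributed to workshop "west"
]
workshops = ["east", "east", "east", "west", "west", "west"]

# Train an ensemble of decision trees on the labeled statues.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(statues, workshops)

# Attribute a newly measured, unlabeled statue to the closest workshop profile.
print(model.predict([[185, 64, 41]]))  # -> ['east']
```

At scale, the same logic lets scholars treat thousands of artifacts as one analyzable system rather than as isolated pieces.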

Donovan cites three key projects in digital humanities that further illustrate these methodological shifts. “Cordeep,” designed by the Max Planck Institute for the History of Science, represents a new approach to documentary analysis:

A web-based application for classifying content from historical documents that include numerical and alphanumeric tables. Software can locate, extract, and classify visual elements designated ‘content illustrations,’ ‘initials,’ ‘decorations,’ and ‘printer’s marks’[8].

This project demonstrates how AI enables “multimodal document analysis”—simultaneously processing textual, visual, and structural elements in ways that transcend traditional disciplinary boundaries between textual analysis, art history, and material culture studies. 

Secondly, “ITHACA,” developed by DeepMind, shows the transformation of specialized expertise: “a deep neural network trained to simultaneously perform the tasks of textual restoration, geographic attribution, and chronological attribution, previously performed by epigraphers.” Rather than replacing epigraphers, ITHACA extends their analytical capabilities, enabling them to work with fragmentary texts at unprecedented scale while maintaining scholarly rigor. 

Finally, the “Venice Time Machine Project,” a collaboration between the École Polytechnique Fédérale de Lausanne, Ca’ Foscari, and the State Archives of Venice, represents perhaps the most ambitious epistemological transformation: “a digitized collection of the Venetian State archives, which cover 1,000 years of history. Once it is completed, researchers will use deep learning to reconstruct historical social networks”[9]. This project embodies what could be called “temporal network analysis”: using AI to map social, economic, and political relationships across centuries of documentation. Traditional prosopographical methods could never achieve such temporal and social scope, fundamentally altering our understanding of historical continuities and transformations.

Historians continuously face their own Terracotta Warriors: the archives. An archive is rarely a collection of digitized documents; it can take many unorganized shapes, and sometimes it contains different types of documents, which are often fragmented and fragile. 

In the last few decades, there have been significant investments in the digitization of archives, which have been processed through datafication to make them fully accessible[10]. The arrival of the Internet and of new storage technologies has already forced scholars and archivists to view digital documents as future data. However, an archive is a complex system, and an archivist requires in-depth knowledge of the archive, as well as an understanding of how it functions and how it should operate. Indeed, a digital archive needs a digital archivist.

The integration of AI transforms not only archival practice but archival theory itself. Traditional archival science has been grounded in principles of provenance, original order, and contextual integrity. AI introduces what we might term an “algorithmic approach”[11]: the ability to discover connections and contexts that transcend original organizational schemes. This represents a fundamental shift from a “custodial” to an “analytical” archival paradigm. Rather than simply preserving and providing access to documents, AI-enabled archives become active participants in knowledge creation, suggesting connections, identifying patterns, and generating new research questions. The digital AI archive raises a new series of problems, from the balance between protecting sensitive information and open access to knowledge, to the ethical issues created by granting too much autonomy to AI. However, the potential of AI could outweigh the risks: with an ethical framework for the impact of AI on research practices, scholars could utilize it without compromising the integrity of their work.

AI software can use text processing to identify words and reconstruct fragmentary texts, inscriptions, and other materials; systems built on Natural Language Processing (NLP) can be trained to specialize in text restoration[12]. Another branch of AI useful for History is topic modeling, which connects documents to core themes and is particularly useful for identifying recurring words and topics in speeches and articles[13].
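As an illustration, the following sketch shows how a topic model groups documents around recurring themes. The four “documents” are invented fragments, stand-ins for the speeches and articles a historian might analyze, not a real corpus.

```python
# A minimal topic-modeling sketch using Latent Dirichlet Allocation (LDA).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

documents = [
    "the harvest failed and grain prices rose across the province",
    "famine and grain shortages drove peasants toward the cities",
    "the assembly debated taxation and the rights of the merchant guilds",
    "guild privileges and trade taxation dominated the council records",
]

# Turn each document into a vector of word counts, dropping common stop words.
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(documents)

# Fit a two-topic model and print the most characteristic words of each topic.
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
words = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    print(f"topic {i}:", [words[j] for j in topic.argsort()[-4:]])
```

On a real corpus, the same procedure surfaces recurring themes across thousands of texts, which the historian then has to interpret.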

 These tools represent the evolution from Franco Moretti’s concept of “distant reading” (2000) to what we might call “deep pattern recognition.” While distant reading enabled literary scholars to analyze large corpora quantitatively, AI-powered analysis can identify semantic relationships, conceptual networks, and discursive patterns that operate below the threshold of conscious authorial intention or traditional hermeneutic analysis. 

This capability has profound implications for historical methodology. Historians can now identify conceptual shifts, trace the evolution of ideas, and map intellectual networks across vast corpora of texts, potentially revealing structures of historical change that remained hidden from traditional methods.

Scholars who appreciate the use of AI scold their colleagues for their doubts and reticence. They advise keeping a human reviewer over AI’s work, given that AI outcomes will never be perfect: human review reduces errors while minimizing time-consuming activities[14]. These scholars advise their colleagues not to reject AI completely, as that would mean a loss for the entire field. However, this perspective, while pragmatic, perhaps understates the more profound transformation under way. The relationship between historians and AI is not simply one of tool use or error correction, but rather the emergence of what we might call “collaborative intelligence”: a new form of scholarly practice in which human interpretive capabilities and machine analytical power create synergistic knowledge production[15].

This collaborative model challenges traditional notions of scholarly authority and authorship. When AI identifies patterns that inform historical interpretation, questions arise about the attribution of insights, the nature of scholarly creativity, and the boundaries between human and machine contributions to knowledge. These are not merely practical concerns but fundamental epistemological questions about the nature of historical understanding in the age of artificial intelligence. 

Risks 

Large language models (LLMs), such as ChatGPT, can replicate human language and answer questions on a wide range of topics and tasks, and they possess a certain degree of creativity and originality[16]. LLMs are trained on massive amounts of data, more than any human brain could process in so short a time. However, they function by statistically calculating the most probable sequence of words. Because they are trained on a conspicuous but still limited amount of data, it is challenging for LLMs to think outside the box, as exemplified by the so-called “bag of words”[17].
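The point about probable word sequences can be shown in miniature. The sketch below, on an invented three-sentence “corpus,” predicts the next word purely by counting which word most often follows the current one; real LLMs are vastly more sophisticated, but the underlying principle is statistical, not interpretive.

```python
# A toy next-word predictor built from bigram counts over an invented corpus.
from collections import Counter, defaultdict

corpus = ("the archive holds letters . the archive holds maps . "
          "the museum holds statues .").split()

# Count, for every word, which word follows it and how often.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

# The "prediction" is simply the most frequent continuation: the model has
# counted that "archive" is followed by "holds", but it understands neither word.
print(following["archive"].most_common(1))  # -> [('holds', 2)]
```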

Human-AI co-creation could resolve this problem by using AI as a tool for developing human creativity[18].

Boers et al.[19] conducted a study with secondary-school students and observed that, in many cases, AI could diminish human creativity. Some students managed to improve their preexisting ideas by using ChatGPT, while others struggled to generate new ideas and simply repeated ChatGPT’s suggestions. The authors explain that, as novices, students probably struggled to use AI critically, which led to a lack of motivation and self-awareness. Even though a secondary student’s knowledge is limited compared to the data used to train the machine, the AI presents complicated concepts and links, and a novice may accept whatever the LLM offers as the best option, even when it is not right for the context.

Nevertheless, this paper suggests that only historians enrolled at least in a master’s degree program in History, who have already acquired some critical elements of the discipline, should utilize AI in their research. The earlier a historian is in their research training, the less they should rely on AI. An optimal use of AI for an undergraduate could be brainstorming and searching the literature.

Another source of ethical questions is academic integrity, a key factor for both students and scholars. It can be defined as “one having to be honest in one’s work, acknowledge others’ work properly, and give credit where one has used other people’s ideas or data”[20]. AI plagiarism cannot be easily detected; however, banning AI tools such as ChatGPT could deprive researchers and students of an important resource, as argued above. A related danger is automation bias, the tendency to rely heavily on artificial intelligence without critically analyzing its output against one’s own experience and information, which can lead to misinformation and potentially dangerous outcomes.

Additionally, in certain environments, such as academic institutions or administrative settings, the need for neutrality in results can lead to a preference for machines over humans, on the assumption that machines are inherently neutral. Yet automated decision-making systems (ADMs) are trained on data that can be of poor quality, biased, or manipulated, just like human decision-making. This type of bias in the data used to train the machine is called “sample bias”[21].
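A toy numerical illustration of sample bias (the figures are invented, not taken from Hofmann’s paper): a model fitted to a skewed sample simply reproduces the skew in its predictions.

```python
# A minimal sketch of sample bias: a majority-vote "model" trained on a skewed
# sample. All figures are invented for illustration.
from collections import Counter

# Skewed training sample: 95 records of outcome "A", only 5 of outcome "B".
training_labels = ["A"] * 95 + ["B"] * 5

# The naive model always predicts the most frequent outcome it was shown.
majority = Counter(training_labels).most_common(1)[0][0]

# On a balanced test set it is wrong for every "B" case: the bias in the
# sample becomes the bias of the decision-maker.
test_labels = ["A"] * 50 + ["B"] * 50
accuracy = sum(majority == label for label in test_labels) / len(test_labels)
print(majority, accuracy)  # -> A 0.5
```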

AI helps draw new connections in the past that were previously impossible to see, but the risk of bias or falsification in the research remains. Donovan[22] refers to another bias, presentist bias: AI is typically trained on databases whose contents are rarely older than fifteen years, and it can have difficulty recognizing items and images from the past. Deep learning models can process large amounts of data and offer more abstraction, enabling them to identify patterns in the data; however, if the data is corrupted, they do not approach the information critically. Ultimately, these models do not understand the information they are presented with, and their conclusions can be wrong or even absurd.

Historians are not specifically trained to use these tools, and many lack a comprehensive understanding of how the machines work. Their functions are not completely clear even to the programmers themselves, leading to a “black box” problem: nobody can retrace how the machines arrived at their outcomes[23]. However, some programs and models historians use are created with their future users in mind, simplifying their functions and settings.

The French historian Emmanuel Le Roy Ladurie[24] stated: “The historian of tomorrow will be a programmer, or he will not exist.” The statement can be debated, but certainly, funds permitting, a historical research team should include a programmer, and a team building an AI for historical research should include a historian.

It has been argued that large language and multimodal models, such as ChatGPT, Claude, and Gemini, can harm moral beliefs and perceptions of equality[25]. Technology generally drives shifts in moralities and in the definition of good and evil. Danaher argues that technology gives or subtracts choices, changing option sets and creating new moral dilemmas; it makes it easier or harder to do the right thing, or even to think about it. Technology creates new relationship paths among humans, even artificial ones, and changes the burdens and expectations of social relationships. It also affects social imbalances of power and moral perceptions.

Indeed, Danaher also investigates how AI can help those who are less skilled to improve their abilities and secure better job positions, thereby improving equality in the job market. Scholars can speculate that, if AI improves equality, attention could shift to other values, such as welfare, freedom, and sociality.

The risk of repeating the same discrimination and biases is a constant shadow looming over the analysis of AI use in every field. The physicality of AI databases and infrastructures perpetuates colonial power structures in Africa, where reality differs from Big Tech propaganda[26]. In their paper, Kerry Holden and Matthew Harsh demonstrate how the new infrastructures developed by AI companies replicate the old colonial power imbalance and exploitation in the African context. Nowadays, it is impossible to imagine our lives without the Internet. If, in the past, sociologists and academics dwelled on the distinction between real life and the virtual world, that distance has grown ever thinner in recent years. However, the creation of this new wide world did not overcome the racial, sexual, and political boundaries of the “old world.”

The Internet is also our primary source of information. Data is online and available to corporations and private companies; it has become a source of profit, holding political power strong enough to create and destroy governments. It is telling that the discourse around security and cyberattacks focuses on the behavior of the consumer, who is invited to stay on “safe” websites, mainly those in the hands of big web corporations such as Google, which constantly declare how much they spend on security. The monopoly is further confirmed by the fact that, when a competitor begins to gain users, the big corporations buy it out. The Internet is sold as a brave new world with no boundaries, but it is actually a small world dominated by a few companies with great power and capital.

For many, the promise of a better future was tied to the invention and improvement of new technologies, believed to be neutral and positive by nature and capable of transcending human limits and mistakes. Nevertheless, technologies are a human product and are not neutral at all. Indeed, they are perfectly capable of reiterating the same logic of oppression and amplifying its effects. Furthermore, even if it is true that daughters and sons should not be held responsible for their fathers’ mistakes, we must first acknowledge the fathers’ presence and spirit in digital technologies.

Gender and technology are closely connected, technology being at once a product and a source of gender relations. However, technologies do not have the transformative power that cyber-feminists attributed to them: for Judy Wajcman, only humans can free themselves[27].

Women are often viewed as consumers of technology rather than producers, and their role in the creation of new technologies is frequently downplayed, if not entirely overlooked. Techno-science is a male-dominated field, partly because it wants to be narrated as such, and as such it excludes and oppresses women.

Safiya Umoja Noble’s book, Algorithms of Oppression, was published in 2018, but the research and data analyzed in it date back to 2010. The book focuses on algorithms in the age of neoliberalism and on how they are part of a series of digital decisions that reiterate violence and oppression and “enact new modes of racial profiling,” which Noble calls technological redlining[28].

The idea for the book came to Noble while she was searching for informative content for her stepdaughter and a friend. Searching for “black girls” on Google, she found mostly porn and racist content. That was just the tip of the iceberg: Algorithms of Oppression displays many examples of systemic racism and sexism toward Black, Latina, and Asian women, showing how their bodies are objectified by the porn industry and the male gaze, and how Google and social media ignore the phenomenon and, when it becomes too evident, adjust it and classify it as an isolated issue rather than recognizing a systemic problem. Noble’s work highlights that in a free-market technology economy, dehumanization is legitimate and profitable.

For Noble, then, academics, archivists, librarians, and information workers must take accountability for how our information and knowledge are shaped through the Internet, the same accountability that Tech Giants, data companies, governments, and authorities should bear. The problem Noble describes cannot be resolved solely by teaching black girls how to code; it also requires action from those responsible for creating the algorithms and the system.

As Noble shows, governments and online platforms have preferred to protect freedom of speech despite women’s concerns, treating these abuses as the price women pay for their political participation.

Noble et al.[29] advocate for stricter control of social media by governments and by the platforms themselves, as well as for support for victims of online abuse. For them, the review of posted content should not be left to an algorithm but carried out by human actors.

Deconstructing the Internet is crucial, and the solution cannot be to leave women to resolve it on their own.

Better technologies do not mean a better society because progress without social change would not advance humankind as a whole; it would only increase the already deep racial, social, and gender imbalance.

AI could deeply benefit from post-colonial and decolonial theories for “creating a critical technical practice of AI, seeking reverse tutelage and reverse pedagogies, and the renewal of affective and political communities”[30]. Decolonial theories could provide the instruments needed to align technology with ethics. AI is not a merely technical system; it shapes and transforms the socio-political structure of power dynamics[31]. Mohamed et al. accordingly propose three decolonial tactics: a critical technical practice, reverse tutelage and reciprocal engagement, and the renewal of affective and political communities. Decolonizing AI will be a long process: AI is a new technology on the market, and it reveals the deep-rooted biases and discrimination of the social and political system[32].

The study of history is an expanding universe, and human presence is needed in the future of the humanities. Critical thinking is not an innate skill but one developed through experience and knowledge. Machines and algorithms can be more efficient and faster than humans, but today they cannot understand and criticize the data they are given; they are helpful instruments. Other factors also slow the integration of AI in the humanities, such as financial costs, the complexity of the technology and its algorithms, and programming competencies that historians do not acquire during their training[33].

However, humans discussing human matters are fallible by nature. In our contemporary age, fallibility is seen as a limit to overcome, not as a natural imposition indicating where there is time to reflect. The capitalist process aims to achieve enormous goals in little time and at great human effort. The emergence of Artificial Intelligence presents a moral question that all disciplines must address, and it prompts us to reexamine our legal and philosophical understanding of humanity’s role: how hard can we press the accelerator on our productivity without affecting the rhythms of the world and our own humanity?

Climate change has already revealed how the universe around us responds to the voracious appetite of consumerism. The unbridled pace of work, which alienates human beings and reduces them to their mere condition as workers, has not masked how the economic system privileges the object over the subject. Artificial intelligence is an aid to and a facilitation of human work, but the risk is that of accelerating toward error. Error is already an inherent part of human life, and correcting it can seem a waste of time; yet without acknowledging the error and the consequent loss of time, the error risks becoming a crisis. Moreover, in acting quickly to avoid the burden of bureaucracy and time-consuming tasks, one can still fall victim to discrimination and error. Artificial intelligence is not an angelic art devoid of any earthly influence; rather, it reflects humanity like a mirror.

Conclusion 

The paper examines the possibilities and risks of using AI in History and other humanities. 

These technologies open new horizons for the discipline but also create new methodological and ethical problems. 

The paper did not aim to review all the existing literature on the humanities and their use of AI: regarding the possibilities, it is clear, and needs no further demonstration, how powerful and important these new instruments are. There is no limit to how and in which form AI can be used; the possibilities are infinite[34]. It is difficult even to imagine a limit to this technology. However, just as the law attempts to provide a broad definition of what AI is and how to respond to it, the humanities face a similar task.

The risk of repeating the same logic of oppression, discrimination, and colonialism is a reality: scholars and journalists have proved that AI infrastructures and technologies in the Global South are continuing the same old history of power imbalance. 

Feminist thinkers like Wajcman and Noble examined technology and the Internet under the gender discrimination lens, and the result was predictable but still disappointing. 

AI errors and biases can be corrected, and human intervention can mitigate the faults, but the impact of AI on our society still poses an ethical question. Scholars like Danaher are right to state that technology can influence human choices and morality, even for the worse.

Nevertheless, historians should not succumb to the temptation of complete skepticism regarding AI. AI is changing the way historians conduct their research, enabling them to explore new, fascinating roles and alleviating the burden of long and dispersive tasks. Digital archives facilitate the circulation of free and accessible knowledge. AI can assist scholars in many phases of their investigation, but it cannot substitute for the academic: AI lacks the critical thinking vital to explaining how results were obtained and to fully grasping what the sources reveal and conceal.

Being a historian is already a challenging profession that requires years of training, dedication, and attention; however, history is not static, and the present can transform it. AI can certainly help in framing its complexity. The decolonization of AI can be a common goal for breaking the chain of oppression and exploitation that involves marginalized communities in the creation of new technologies, thereby avoiding the top-down impositions typical of the capitalist market.

Technophobia and technophilia are two sides of the same coin: both approach AI uncritically, the one refusing to acknowledge its potential for improving lives and research, the other ignoring the criticism and the issues that AI creates.

Humanity has limits; knowledge needs time and effort to be built and acquired. The mass production of new knowledge could respond more to a consumerist need to absorb and forget than to the actual advancement of the human sciences. The technological and academic prospects are incredible, but are they sustainable?

In conclusion, the arrival of new technologies necessitates a scientific effort from every field to adapt, advance, safeguard, and even warn against the associated risks. AI has raised questions about its nature and the impact it could have on humans since its inception. The humanities are among the last to answer the call. Nevertheless, this answer is crucial to imagining a solution to the AI moral dilemmas. If there are disciplines that can provide insights into human nature and where humanity should go, they are the humanities. 

 

References

 

An MIT Technology Review Series: AI Colonialism, «MIT Technology Review», 2022. https://www.technologyreview.com/supertopic/ai-colonialism-supertopic (cited 24 march 2025).

H. Balalle & S. Pannilage, Reassessing academic integrity in the age of AI: A systematic literature review on AI and academic integrity, «Social Sciences & Humanities Open», XI, 101299, 2025. https://doi.org/10.1016/j.ssaho.2025.101299 (cited 24 march 2025).

J. Boers, T. Etty, M. Baars & K. Van Boekhoven, Exploring Cognitive Strategies in Human-AI Interaction: ChatGPT’s Role in Creative Tasks, «Journal of Creativity», XXXV, n.1, 100095, 2025. https://doi.org/10.1016/j.yjoc.2025.100095 (cited 24 march 2025).

D. Chapinal-Heras & C. Díaz-Sánchez, A review of AI applications in Human Sciences Research, «Digital Applications in Archaeology and Cultural Heritage», XXXII, 2024. https://doi.org/10.1016/j.daach.2024.e00323 (cited 24 march 2025).

G. Colavizza, T. Blanke, C. Jeurgens, & J. Noordegraaf, Archives and AI: An overview of current debates and future perspectives, «Journal on Computing and Cultural Heritage», XV, n. 1, 2021, pp. 1–15. https://doi.org/10.1145/3479010 (cited 24 march 2025). 

J. Danaher, Generative AI and the future of equality norms, «Cognition», CCLI, 105906, 2024. https://doi.org/10.1016/j.cognition.2024.105906 (cited 24 march 2025).

M. Donovan, How AI is helping historians better understand our past, «MIT Technology Review», 2023, April 20. https://www.technologyreview.com/2023/04/11/1071104/ai-helping-historians-analyze-past/ (cited 24 march 2025).

European Union, Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 on artificial intelligence, «Official Journal of the European Union». http://data.europa.eu/eli/reg/2024/1689/oj (cited 24 march 2025).

A. Hawkins,  Archives, Linked Data and the Digital Humanities: Increasing access to digitised and born-digital Archives via the semantic web, «Archival Science»,  XXII, n. 3, 2021, pp. 319–344. https://doi.org/10.1007/s10502-021-09381-0 (cited 24 march 2025).

H.C.H. Hofmann, Automated Decision-Making (ADM) in EU Public Law, «University of Luxembourg Law Research Paper», n. 2023-06, 2023. http://dx.doi.org/10.2139/ssrn.4561116 (cited 24 march 2025).

K. Holden & M. Harsh, On pipelines, readiness and annotative labour: Political geographies of AI and data infrastructures in Africa, «Political Geography», CXIII, 2024, 103150. https://doi.org/10.1016/j.polgeo.2024.103150 (cited 24 march 2025).

Z. Ivcevic & M. Grandinetti, Artificial intelligence as a tool for creativity, «Journal of Creativity», XXXIV, n.2, 100079,  2024. https://doi.org/10.1016/j.yjoc.2024.100079 (cited 24 march 2025).

A. Koutsoudis, C. Makarona & G. Pavlidis, Content-based navigation within virtual museums, «Journal of Advanced Computer Science and Technology», I, n.2, pp. 73–81.

E. Le Roy Ladurie, Le territoire de l’historien, Gallimard, Paris 1973.

S. Li, Y. Jiang, B. Jing, L. Yang & Y. Zhang, AI-based experts’ knowledge visualization of cultural heritage: A case study of the Terracotta Warriors, «Journal of Cultural Heritage», LXXII, 2025, pp. 81–90. https://doi.org/10.1016/j.culher.2025.01.006 (cited 24 march 2025). 

S. Makridakis, The forthcoming Artificial Intelligence (AI) revolution: Its impact on society and firms, «Futures», XC, pp. 46–60, 2017. https://doi.org/10.1016/j.futures.2017.03.006 (cited 24 march 2025). 

M. Martínez-Córcoles, M. Teichmann & M. Murdvee, Assessing technophobia and technophilia: Development and validation of a questionnaire, «Technology in Society», LI, 2017, pp. 183–188. https://doi.org/10.1016/j.techsoc.2017.09.007 (cited 24 march 2025). 

S. Mohamed, M.-T. Png & W. Isaac, Decolonial AI: Decolonial theory as Sociotechnical Foresight in Artificial Intelligence, «Philosophy & Technology», XXXIII, n. 4, 2020, pp. 659–684. https://doi.org/10.1007/s13347-020-00405-8 (cited 24 march 2025).

S.U. Noble, Algorithms of oppression: How search engines reinforce racism, New York University Press, New York 2018.

S.U. Noble, M. Sweeney, J. Austin, L. McKeever, & E. Sullivan, Changing course: Collaborative reflections of teaching/taking “race, gender, and sexuality in the information professions.”, «Journal of Education for Library & Information Science», LV, n.3, 2014, pp. 196–206.

G. Pavlidis, AI trends in digital humanities research, «Trends Comput Sci Inf Technol», VII, n. 2, pp. 26–34, 2022. https://dx.doi.org/10.17352/tcsit.000048 (cited 24 march 2025).

E. Schleiger, C. Mason, C. Naughtin, A. Reeson, & C. Paris, Collaborative intelligence: A scoping review of current applications, «Applied Artificial Intelligence», XXXVIII, 2327890, 2024. https://doi.org/10.1080/08839514.2024.2327890 (cited 24 march 2025). 

H. Shani-Narkiss, B. Eitam, & O. Amsalem, Using an algorithmic approach to shape human decision-making through attraction to patterns, «Nature Communications», XVI, 59131, 2025. https://doi.org/10.1038/s41467-025-59131-4 (cited 24 march 2025). 

T. Smits & M. Wevers, A multimodal turn in Digital Humanities. Using contrastive machine learning models to explore, enrich, and analyze digital visual historical collections, «Digital Scholarship in the Humanities», XXXVIII, n. 3, 2023, pp. 1267–1280. https://doi.org/10.1093/llc/fqad008


J. Wajcman, Technofeminism, Polity Press, Cambridge 2004.




Arianna Boccamaiello

(Arianna Boccamaiello, born in Naples, earned her Bachelor’s degree at the University of Naples “L’Orientale” and her Master’s degree in Global Cultures at the University of Bologna. She is currently a PhD candidate in Global History and Governance at the Scuola Superiore Meridionale, with a research project on the Italo-descendant communities in Eritrea and the Dodecanese in the post-World War II period.)

1)

European Union, Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 on artificial intelligence, «Official Journal of the European Union». http://data.europa.eu/eli/reg/2024/1689/oj (cited 24 march 2025).

2)

S. Makridakis, The forthcoming Artificial Intelligence (AI) revolution: Its impact on society and firms, «Futures», XC, pp. 46–60, 2017. https://doi.org/10.1016/j.futures.2017.03.006 (cited 24 march 2025).

3)

D. Chapinal-Heras & C. Díaz-Sánchez, A review of AI applications in Human Sciences Research, «Digital Applications in Archaeology and Cultural Heritage», XXXII, 2024. https://doi.org/10.1016/j.daach.2024.e00323 (cited 24 march 2025).

4)

M. Martínez-Córcoles, M. Teichmann & M. Murdvee, Assessing technophobia and technophilia: Development and validation of a questionnaire, «Technology in Society», LI, 2017, pp. 183–188. https://doi.org/10.1016/j.techsoc.2017.09.007 (cited 24 march 2025).

5)

G. Pavlidis, AI trends in digital humanities research, «Trends Comput Sci Inf Technol», VII, n. 2, 026-034, 2022. https://dx.doi.org/10.17352/tcsit.000048 (cited 24 march 2025).

6)

A. Koutsoudis, C. Makarona & G. Pavlidis, Content-based navigation within virtual museums, «Journal of Advanced Computer Science and Technology», I, n.2, pp. 73–81.

7)

S. Li, Y. Jiang, B. Jing, L. Yang & Y. Zhang, AI-based experts’ knowledge visualization of cultural heritage: A case study of the Terracotta Warriors, «Journal of Cultural Heritage», LXXII, 2025, pp. 81–90. https://doi.org/10.1016/j.culher.2025.01.006 (cited 24 march 2025).

8)

M. Donovan, How AI is helping historians better understand our past, «MIT Technology Review», 2023, April 20. https://www.technologyreview.com/2023/04/11/1071104/ai-helping-historians-analyze-past/ (cited 24 march 2025).

9)

Ibidem.

10)

A. Hawkins, Archives, Linked Data and the Digital Humanities: Increasing access to digitised and born-digital Archives via the semantic web, «Archival Science», XXII, n. 3, 2021, pp. 319–344. https://doi.org/10.1007/s10502-021-09381-0 (cited 24 march 2025).

11)

H. Shani-Narkiss, B. Eitam, & O. Amsalem, Using an algorithmic approach to shape human decision-making through attraction to patterns, «Nature Communications», XVI, 59131, 2025. https://doi.org/10.1038/s41467-025-59131-4 (cited 24 march 2025).

12)

D. Chapinal-Heras, & C. Díaz-Sánchez, op.cit.

13)

Ibidem.

14)

Ibidem.

15)

E. Schleiger, C. Mason, C. Naughtin, A. Reeson, & C. Paris, Collaborative intelligence: A scoping review of current applications, «Applied Artificial Intelligence», XXXVIII, 2327890, 2024. https://doi.org/10.1080/08839514.2024.2327890 (cited 24 march 2025).

16)

J. Boers, T. Etty, M. Baars & K. Van Boekhoven, Exploring Cognitive Strategies in Human-AI Interaction: ChatGPT's Role in Creative Tasks, «Journal of Creativity», XXXV, n.1, 100095, 2025. https://doi.org/10.1016/j.yjoc.2025.100095 (cited 24 march 2025).

17)

Ibidem.

18)

Z. Ivcevic & M. Grandinetti, Artificial intelligence as a tool for creativity, «Journal of Creativity», XXXIV, n.2, 100079, 2024. https://doi.org/10.1016/j.yjoc.2024.100079 (cited 24 march 2025).

19)

J. Boers, T. Etty, M. Baars & K. Van Boekhoven, op. cit.

20)

H. Balalle & S. Pannilage, Reassessing academic integrity in the age of AI: A systematic literature review on AI and academic integrity, «Social Sciences & Humanities Open», XI, 101299, 2025. https://doi.org/10.1016/j.ssaho.2025.101299 (cited 24 march 2025).

21)

H.C.H. Hofmann, Automated Decision-Making (ADM) in EU Public Law, «University of Luxembourg Law Research Paper», n. 2023-06, 2023. http://dx.doi.org/10.2139/ssrn.4561116 (cited 24 march 2025).

22)

M. Donovan, op. cit.

23)

Ibidem.

24)

E. Le Roy Ladurie, Le territoire de l’historien, Gallimard, Paris 1973.

25)

J. Danaher, Generative AI and the future of equality norms, «Cognition», CCLI, 105906, 2024. https://doi.org/10.1016/j.cognition.2024.105906 (cited 24 march 2025).

26)

K. Holden & M. Harsh, On pipelines, readiness and annotative labour: Political geographies of AI and data infrastructures in Africa, «Political Geography», CXIII, 2024, 103150. https://doi.org/10.1016/j.polgeo.2024.103150 (cited 24 march 2025).

27)

J. Wajcman, Technofeminism, Polity Press, Cambridge 2004.

28)

S. U. Noble, Algorithms of oppression: How search engines reinforce racism, New York University Press, New York 2018.

29)

S. U. Noble, M. Sweeney, J. Austin, L. McKeever, & E. Sullivan, Changing course: Collaborative reflections of teaching/taking “race, gender, and sexuality in the information professions.”, «Journal of Education for Library & Information Science», LV, n.3, 2014, pp. 196–206.

30)

S. Mohamed, M.-T. Png & W. Isaac, Decolonial AI: Decolonial theory as Sociotechnical Foresight in Artificial Intelligence, «Philosophy & Technology», XXXIII, n. 4, 2020, pp. 659–684. https://doi.org/10.1007/s13347-020-00405-8 (cited 24 march 2025).

31)

Ibidem.

32)

Ibidem.

33)

D. Chapinal-Heras, & C. Díaz-Sánchez, op.cit.

34)

G. Pavlidis, op. cit.