Artificial Intelligence in the Sciences, Humanities and Social Sciences
Graduate Teaching Assistant (GTA) PhD Studentships
Artificial Intelligence is changing the world. The opportunities, possibilities, pressures, and potentialities it presents are only now starting to be realised. To better understand the implications of Artificial Intelligence, Edge Hill University’s AI Research Group brings together computational cognitive scientists, philosophers, educators, creative writers, historians, media and literature scholars, social scientists, and other academics who are undertaking collaborative research around a human-centred perspective on Artificial Intelligence.
We are seeking inquisitive postgraduate researchers who are passionate about exploring the diverse potential and implications of AI across a range of disciplines. The University particularly welcomes applications for studentships in the project areas outlined below.
All postgraduate researchers (PGRs) are registered in the University’s Graduate School and housed in the faculty or department that is most appropriate for the project on which they are working.
Key research themes and potential projects
Computational Cognitive Sciences, Philosophy, and Practice – A Human-Centred Perspective on Artificial Intelligence
We are looking for people with a background in computer science, philosophy, cognitive science, humanities, health, education, or public policy who have an interest in working on technical, conceptual or empirical projects at the intersection of Computational Cognitive Sciences, Philosophy, and Practice. We want to hear from people who want to make a difference: people who are creative, inspirational and positive about future developments.
Examples of themes under this project title include (but are not limited to):
The impact of AI on our understanding of ethics, and the ethical challenges of AI
While recent developments in AI have undoubtedly unlocked beneficial advancements for humanity in fields such as healthcare diagnosis and industrial efficiency, agencies such as UNESCO have highlighted profound ethical concerns about the extent to which the risks associated with the development of AI systems may deepen existing global inequalities. For example, the environmental implications of developing AI systems, as well as embedded biases in the data used to train them, have started to present further sources of harm for already marginalised groups.

Conversely, as well as presenting ethical challenges in the world, developments in AI technology may also require us to fundamentally rethink how we go about understanding the moral landscape and what it is to be a moral agent. For decades, researchers working at the intersection of philosophy, psychology, and neuroscience have attempted to use developments in areas of computer science (such as AI) to justify and/or explain claims made in relation to moral matters. However, this raises important questions about the extent to which such a scientistic approach can unintentionally, and unhelpfully, frame our understanding of moral judgement, moral development, and moral agency.
Questions and topics within this theme may include, but are not limited to:
- How should we make sense of, or begin to address, some of the ethical challenges posed by the rapid rise in artificial intelligence systems?
- To what extent can/should AI be used to meaningfully address ethical challenges, and what does this mean for human beings as moral agents?
- What is the impact of AI on our understanding of moral education, moral judgement, and/or questions of whether morality is a principled endeavour or not?
- AI and the influence of scientism on philosophical method(s) for addressing ethical dilemmas.
Disciplines: Computer Science, Cognitive Science, Ethics, Philosophy, Public Policy and Psychology
Responsible AI and Sustainable Computing
In addition to shaping how we think about being human, AI raises a host of ethical issues concerning both what the right thing to do is and the right way of doing it (efficacy, efficiency and effectiveness). We are interested in receiving proposals for projects that explore the intersection of responsibility and sustainability, ideally within a global challenge context. Such projects should address issues related to the UN Sustainable Development Goals.
The project will most likely be carried out in collaboration with other academics from universities in the UK and internationally. We are particularly interested in proposals related to:
- Our ethical responsibility for reducing the computational footprint in contexts related to the environment, such as the prevention of, and/or response to, mental ill-health, pollution or climate change as indicative global challenges
- The implications and unintended consequences of wrongful or harmful applications of the technology.
Disciplines: Cognitive Science, Computer Science, Environmental Science, Ethics, Social Science, Philosophy and Psychology
Cognitive Computer Vision for Emotion Recognition in Cross-Cultural Contexts
Aim: Different cultures express emotions in varying ways. In some cultures, people may smile to mask discomfort or anger, for example, while in others, emotions might be more openly expressed. The goal of this project is to advance AI systems for accurately interpreting facial expressions, body language, and other nonverbal visual cues in ways that account for cultural differences in emotional expression. The objective is to make AI systems more aware of cultural diversity, ensuring that they interact appropriately and sensitively with users from different parts of the world.
The proposed research on AI will go beyond universal, one-size-fits-all approaches to better handle the nuances of emotions expressed differently across various cultures. It will not only detect facial expressions but also interpret other cues such as body posture, gestures, and surrounding context (such as a formal meeting or a family gathering) to understand emotions more accurately.
Societal benefits include enhancing well-being, fostering cross-cultural understanding, and improving the integration of technology into daily life. AI systems such as social robots, chatbots, and virtual assistants would become more culturally aware, improving interactions in multi-cultural environments such as global businesses, international teams, or educational settings. Mental health diagnostic and therapy tools that monitor emotional states, particularly in culturally diverse patient groups, would also be improved. Developing more inclusive AI systems that work accurately across diverse populations would reduce discrimination and ensure that the benefits of AI are available to people from all backgrounds.
Disciplines: Computer Science, Social Science, Psychology and Cognitive Science
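For illustration only, the sketch below shows one very simple way such a pipeline could begin: faces are located with OpenCV's stock Haar cascade, and each crop is handed to a placeholder classifier that also receives a cultural-context label. The classifier, the context labels and the input file name are hypothetical assumptions, not part of the project brief.

```python
# Minimal sketch only: locate faces with OpenCV's stock Haar cascade, then
# pass each crop to a placeholder, context-aware emotion classifier.
# The classifier, the cultural-context labels and the input file are
# hypothetical assumptions made for illustration.
import cv2

def detect_faces(image_path: str):
    """Return cropped face regions found in an image."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    image = cv2.imread(image_path)
    if image is None:
        raise FileNotFoundError(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [image[y:y + h, x:x + w] for (x, y, w, h) in boxes]

def classify_emotion(face_crop, cultural_context: str) -> str:
    """Placeholder: a real system would condition its prediction on cultural
    display rules (e.g. smiling to mask discomfort) and situational context."""
    raise NotImplementedError("Replace with a trained, context-aware model.")

if __name__ == "__main__":
    for face in detect_faces("meeting_photo.jpg"):  # hypothetical input image
        print(classify_emotion(face, cultural_context="formal_meeting"))
```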
Advancing AI for Analysing Historical Visual Art: Cognitive Interpretation of Cultural and Social Narratives
Aim: This study will focus on advancing AI-powered cognitive vision to analyse visual elements in historical artwork. It will investigate how AI can be used to identify and compare symbolic elements across different cultures and time periods, enhancing understanding of how societies have expressed their values and beliefs through art. The goal is to enable AI systems to go beyond simple image recognition and delve into the deeper meanings conveyed through artwork: its symbols, colours, figures and their arrangements, and the stories they tell. It would support uncovering hidden cultural and social narratives embedded in paintings, sculptures, or other visual art forms by analysing the relationships between objects, figures, and scenes. The study aims to bridge the gap between machine vision and art interpretation, providing new ways to understand and analyse the history of visual storytelling. This research has the potential to transform how we study, interpret, and engage with historical visual art, making it more inclusive, accessible, and insightful for society.
Societal benefits include: making art more accessible to non-experts, helping people understand the historical context of an artwork without needing an art historian; enabling AI to explain historical and cultural narratives in ways that are more relatable and engaging for modern audiences, broadening art education; and providing valuable educational tools for museums, schools, and online platforms, making art history more accessible and engaging through interactive and immersive experiences.
Disciplines: Computer Science, Art, History, Psychology, Social Science and Digital Humanities
AI in the Archives
In the past two decades, millions upon millions of archival documents have been digitised and made available to researchers via digital archives. Simple tools such as keyword searching have already had a transformative impact on the day-to-day practice of historical research, while researchers in the field of digital humanities have made important strides in developing both quantitative and qualitative ways to ‘distant read’ these collections. However, there is still much work to be done to fully unlock the potential of historical datasets. PhD projects in this strand will therefore explore how NLP, Computer Vision, and other AI techniques might be used to assist and enrich the work of historians and archivists. The projects will be collaboratively supervised by both computer scientists and historians.
Historical datasets present additional challenges for the development and application of AI tools. For example, the textual data in these collections is often messy, being based on the output of Optical Character Recognition (OCR) software developed many years ago. Similarly, many of the images in digital archives were produced using much older technologies than those found in modern datasets (e.g. woodcuts, sketches, early photographs), and they often feature very limited metadata. Finally, any application of AI tools such as LLMs must recognise the specific (and constantly changing) historical context of the items in these collections, including shifts in linguistic usage and meaning over time. Tools and techniques honed on modern datasets may not work in the same way when applied to historical data.
Possible challenges for projects in this strand include:
- Developing useful new ways to ‘distant read’ patterns in large historical datasets, and then applying them to real-world historical problems. This might include NLP techniques such as topic modelling and sentiment analysis (a brief illustrative sketch follows at the end of this strand)
- Enriching the metadata of historical datasets. For example, this might involve detecting the topic/genres of documents in an archive
- Improving the accuracy of Optical Character Recognition (OCR) software or enhancing the quality of existing text transcriptions using LLMs
- Enriching the metadata associated with images in digital archives to improve search results and/or explore broader trends and patterns.
Disciplines: Computer Science, History and Digital Humanities
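As a minimal sketch of the ‘distant reading’ challenge in the first bullet above, the code below fits a small topic model over a folder of plain-text OCR output. The folder name, the file contents and the choice of five topics are assumptions made for illustration; a real project would need far more careful cleaning of OCR noise and archaic spelling.

```python
# Minimal sketch only: 'distant reading' of a folder of OCR'd historical
# documents via a small topic model. The folder name and parameters are
# assumptions for illustration, not part of the project brief.
from pathlib import Path

from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Read every plain-text file in a (hypothetical) folder of OCR output.
documents = [p.read_text(encoding="utf-8", errors="ignore")
             for p in sorted(Path("ocr_texts").glob("*.txt"))]

# Bag-of-words counts; real OCR noise and archaic spelling would need
# additional cleaning and normalisation before this step.
vectorizer = CountVectorizer(stop_words="english", min_df=2)
counts = vectorizer.fit_transform(documents)

# Fit a small LDA model and print the ten most prominent words per topic.
lda = LatentDirichletAllocation(n_components=5, random_state=0)
lda.fit(counts)
vocab = vectorizer.get_feature_names_out()
for topic_id, weights in enumerate(lda.components_):
    top_words = [vocab[i] for i in weights.argsort()[-10:][::-1]]
    print(f"Topic {topic_id}: {', '.join(top_words)}")
```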
AI Narratives
The rapid advancement of artificial intelligence presents profound opportunities and challenges for contemporary research in the arts and humanities. Whether through speculative fiction, ethical debates, or creative exploration, AI reshapes our understanding of technology, society, and the future. PhD project proposals in this area should be concerned with narratives that interrogate the implications of AI, explore the ethics of posthuman futures, or consider AI’s role in reimagining human-nonhuman relationships. These research themes offer rich terrain for critical and creative enquiry, addressing urgent questions about technology’s impact on culture, ethics and advocacy. Topics may include:
- The representation of AI in speculative fiction, film and/or television, including its role in dystopian and/or utopian narratives
- Narratives specifically addressing the ethics of AI and their ramifications in and/or for posthuman futures
- Responsible use of AI in animal advocacy, including critical analysis of AI in animal agriculture, AI and animal language, and the use of AI in developing advocacy campaigns.
Disciplines: Human Animal Studies, Creative Writing, Computer Science, Film & Television Studies, History, and Literary Studies
How does AI-generated text compare to human-generated text?
LLM-based tools (e.g. ChatGPT) are increasingly used to generate summaries, responses, and many other types of text, from academic papers to advertising copy. At the same time, their output has been shown to present inaccuracies (hallucinations) with confidence, to adopt an awkward style, and at times to break even basic conventions of social communication. In light of this, research can usefully focus on comparing AI language output and interaction with that of humans. Areas that can be examined in this respect are:
- Linguistic Modality, particularly the expression of likelihood, propensity, obligation/permission, and volition/intention.
- Phraseology: lexical choices and combinations, and the related aspects of linguistic conformity and creativity.
- The pragmatics of interaction, in particular, conforming to social conventions (e.g. politeness, tact), and the unintended communication of implicit meaning.
- Issues arising from lexical polysemy, idiomatic expressions, and metaphors.
- Expressed attitudes to sociopolitical issues: The responses of LLMs depend on the text content of the corpora that have been used to train them or fine-tune them. It would be interesting to investigate how LLMs trained on corpora derived from diverse specialised domains (e.g. social platforms for groups with particular sociopolitical views) can produce different responses to the same question/prompt or interaction with humans.
Investigating how LLMs function and perform with respect to the above issues can help users understand the strengths and shortcomings of the models. Furthermore, research in these areas can inform methods for creating better LLMs in the future. Finally, as more and more technical and professional domains use LLMs in their everyday practice, research on the differences between LLM-generated and human-generated text could prevent numerous errors and complications.
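As one very small illustration of the Linguistic Modality strand above, the sketch below counts how often a fixed set of English modal verbs appears per 1,000 words in two text samples. The file names are hypothetical, and surface counts of this kind are only a starting point; a fuller study would also cover semi-modals, modal adverbs and context of use.

```python
# Minimal sketch only: compare per-1,000-word frequencies of core English
# modal verbs in a human-written sample and an LLM-generated sample.
# File names are hypothetical; a full study would use tagged corpora.
import re
from collections import Counter

MODALS = ["can", "could", "may", "might", "must", "shall", "should", "will", "would"]

def modal_rates(text: str) -> dict:
    """Return occurrences of each modal per 1,000 word tokens."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(t for t in tokens if t in MODALS)
    scale = 1000 / max(len(tokens), 1)
    return {m: counts[m] * scale for m in MODALS}

if __name__ == "__main__":
    human = modal_rates(open("human_sample.txt", encoding="utf-8").read())
    machine = modal_rates(open("llm_sample.txt", encoding="utf-8").read())
    for modal in MODALS:
        print(f"{modal:>8}: human {human[modal]:5.2f} vs LLM {machine[modal]:5.2f} per 1,000 words")
```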
In the first instance, please direct enquiries to the Graduate School Manager, Dr Chris Lawton.
All PGRs will be supported by a supervisory team with appropriate expertise. Please see the university’s research repository for further information on the research outputs and interests of each member of staff.
Read more about our colleagues' research.
Apply now.