Computational Cognitive Sciences, Philosophy, and Practice: A Human-Centred Perspective on Artificial Intelligence
All postgraduate researchers (PGRs) are registered in the University’s Graduate School and housed in the faculty or department that is most appropriate for the project on which they are working.
The A.I. Research Group brings together computational cognitive scientists, philosophers, educators, creative writers and other academics who are undertaking collaborative research around a human-centred perspective on Artificial Intelligence (AI). We are looking for people with a background in computer science, philosophy, cognitive science, humanities, health, education, or public policy who have an interest in working on technical, conceptual or empirical projects at the intersection of Computational Cognitive Sciences, Philosophy, and Practice. We are looking for people who want to make a difference, people who are creative, inspirational and positive about future developments.
Potential research themes include (but are not limited to):
How might the development, training and/or deployment of AI models in practical contexts disadvantage, or enable, marginalised groups?
AI, education and moral panics
Imagination, science fiction and the reality of AI: how might creative approaches, particularly speculative fiction, offer a route to uniting and examining concerns regarding ethical, moral, philosophical and technical aspects of AI?
The impact of Artificial Intelligence and Virtual Reality on how we think about what it is to be human.
Robots ‘thinking’ outside the box: the ever-growing automated cognition challenge of how to use AI to assist human judgement and decision-making.
Responsible AI and Sustainable Computing.
In the first instance please direct all enquiries about proposed projects on topics related to Computational Cognitive Sciences, Philosophy, and Practice – A Human-Centred Perspective on Artificial Intelligence to [email protected] with Dr Chris Hughes, Researcher Development Fellow (Doctoral Training), cc’d ([email protected]).
Indicative examples of potential projects related to these themes may include (but are not limited to):
How might the development, training and/or deployment of AI models in practical contexts disadvantage, or enable, marginalised groups?
The extent to which the use of AI in the NHS may disadvantage/discriminate against, or enable/empower, minoritised ethnic Autistic adult users of mental health services.
An exploration of the potential advantages and disadvantages of the deployment of weak (and potentially strong) AI as a form of assistive technology for dyslexic postgraduate research students in the composition of their thesis.
AI, education and moral panics
Beliefs about the potential impact of AI in education alternate between near-utopian predictions and moral panic. Some argue that machine learning, drawing on large datasets and bioinformatics, will assist the teacher with real-time guidance personalised to individual students. Others fear that human teachers will become redundant, or that access to AI will enable sophisticated forms of plagiarism that render traditional qualifications obsolete. These differing predictions often reveal differences not in the understanding of AI and its potential, but in underpinning beliefs about the nature and aims of education and learning. Claims about the effects of AI in education can be used as a lens to explore:
competing accounts of the aims of education – as qualification, humanisation, socialisation, or emancipation
competing characterisations of teaching competence – as professional, technical, or ethical
competing accounts of learning – as knowledge acquisition, social construction or transformation
the nature and aims of assessment in education.
Imagination, science fiction and the reality of AI: how might creative approaches, particularly speculative fiction, offer a route to uniting and examining concerns regarding ethical, moral, philosophical and technical aspects of AI?
Academic writing of various kinds often utilises science fiction, creative writing or thought experiments to explore ideas pertaining to AI. Such approaches seek to bend or mould what is logically possible, or what can be imagined, in order to bring out interesting features of AI. What are the limits of this exercise? What is the difference between the imaginable and the possible, and how does that bear on how we should think about AI and the relationship between human beings and AI? Speculative fiction can be defined both as an umbrella term for literature that encompasses sci-fi, gothic, fantasy and ‘weird’ writing, and as a sub-category of sci-fi that deals with the human issues and problems of near futures rather than focussing on technical aspects as the core concerns. On the second definition, it can be utilised as an ontological tool and extended thought experiment to consider how AI might affect and inform human existence and relations through imaginative narratology. Research in this field could be cross-disciplinary, could involve meta-approaches whereby AI is used within a methodology, and is likely to encompass one or more of the other areas outlined above and below. Questions and topics may include, but are not limited to:
What is the impact of AI on individuals and groups in varied demographics, and how might speculative narratives be uniquely placed to examine this?
Speculative fiction as an intersection between the arts, humanities, and sciences.
Speculative fiction’s role in directing public opinion and determining policy change.
Speculative fiction as a means of communicating the nuances of scientific/philosophical/ethical concerns related to AI to non-specialist groups.
Creative methodologies that use and examine AI symbiotically.
The impact of Artificial Intelligence and Virtual Reality on how we think about what it is to be human.
It should go without saying that digital technologies have changed our lives considerably in various ways (that is, after all, what they are designed to do). Rather less obvious are the various ways in which digital technologies have influenced how we think about what it is to be a human being. Artificial intelligence and virtual reality have perhaps had the greatest influence in that regard. We are interested in receiving proposals from those with academic backgrounds in philosophy, psychology, education, computer science, and other relevant areas for doctoral projects that explore any matters raised by that observation. The list below is not exhaustive and simply gives some prompts designed to help you think of an interesting project.
Conceptions of the human and the influence of the computational metaphor.
Locating and critiquing scientism in all its forms.
Concept-possession, knowledge and the growth of AI: are we losing the human?
The neuro-computational picture and normativity.
Who decides? Judgement, human beings, and machines.
Virtual reality and learning.
Is the virtual real?
Can robots make art? AI and the intention of the artist.
Liberating AI from the shadow of the fanciful: what are the implications of moving beyond the mechanical and the computational images of human beings?
What are the implications of a 4E (embodied, embedded, enacted, extended) conception of cognitive science for AI and the place of the human in the modern world?
Is occasion-sensitivity an insurmountable problem for Artificial General Intelligence?
Re-humanising Artificial Intelligence. It is often thought that developments in artificial intelligence and neuroscience tell us something about what it is to be human and what human mental attributes consist in. We are interested in receiving proposals for projects that explore the thought that, in accepting such a picture, we misunderstand human beings, the mind and language use, and that the picture can be shown to be misleading in a way that helps us better understand human beings and creates a basis for the better development and use of artificial intelligence in future. By relocating the place of the human in the development of AI, we can re-humanise our thinking about both AI and human beings.
Robots ‘thinking’ outside the box: the ever-growing automated cognition challenge of how to use AI to assist human judgement and decision-making.
There is a lot of talk (and noise) about AI, which usually revolves around ChatGPT (at least for now). However, many of these discussions tend to be either too shallow or too narrowly focused, when so many aspects could be developed through a clearer understanding of the models, logic, philosophy, etc. of how we think. This could be achieved via different paths, from mathematical models of an enhanced decision-making process to the philosophy of ‘being a human machine’, and many more. Examples of projects we are interested in include those that investigate and seek to understand:
How artificial intuition and consciousness could serve as powerful computational tools for decision making.
Practical philosophical aspects relevant to the project of developing a theory of mind for AI.
Methods for identifying and minimising bias in AI models.
The use (and/or implications) of generative models as aids for medical or other practitioners.
Responsible AI and Sustainable Computing
In addition to having an impact on how we think about being human, AI raises a host of ethical issues: issues concerning what is the right thing to do and what is the right way of doing it (efficacy, efficiency and effectiveness). We are interested in receiving proposals for projects that explore the intersection of responsibility and sustainability, ideally within a global-challenge context. Such projects should address issues related to the UN Sustainable Development Goals.
The project will most likely be in collaboration with other academics from universities in the UK and internationally. We are particularly interested in proposals related to:
Our ethical responsibility for reducing the computational footprint in environmental contexts, such as the prevention of and/or response to mental health crises, pollution or climate change as indicative global challenges.
The implications and unintended consequences of wrongful applications of the technology.