We aim to establish ourselves as a world-excellent centre for impact-driven research in visual computing by:
- Connecting with stakeholders to impact society
- Supporting students to acquire research-led employability skills
- Empowering researchers to solve theoretical and real-world problems in visual computing, utilising virtual and augmented reality technology
Our research places particular emphasis on virtual and augmented reality, games and serious games, image processing and computer vision, 3D graphics and perception, human-computer interaction, complex systems simulations, robotics and smart systems. The lab uses the Tech Hub facilities, including the high-resolution Cave Automatic Virtual Environment (CAVE) and a range of technologies for fully immersive and augmented experiences, including Tobii EyeX, Vive, zSpace, Kinect, HypeBox, Emotiv, Enobio BCI, a Nao robot, and a 3D scanner and printer.
Dr Chitra Balakrishna – Senior Lecturer
Dr Ardhendu Behera – Senior Lecturer
Dr Quanbin Sun – Senior Lecturer
Dr Hui Fang – Lecturer
Dr Peter Matthew – Lecturer
Dr Peter Vangorp – Lecturer
Dr Huaizhong Zhang – Lecturer
We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan X Pascal and Quadro M5000 GPUs used for our research.
Join us at the Global Game Jam®, January 20–22, 2017.
Join SIGGRAPH 2017 as a student volunteer.
Adaptive User Interface: Design and Development of an Eye-Based Human-Computer Interaction (HCI) for People with Varying Severity of Disabilities – PI: A. Behera
CyberGaTE: A Gamified Training Environment for Cyber Security – PI: C. Balakrishna, Co-Is: D. Romano, J. Coleman
Publications Since 2014
- Vangorp, P., et al. (to appear 2017). The perception of hazy gloss. Journal of Vision.
- Fang, H., Walton, S., Delahaye, E., Harris, J., Storchak, D. A., & Chen, M. (2017). Categorical colormap optimization with visualization case studies. IEEE Transactions on Visualization and Computer Graphics, 23(1), 871-880.
- Attard, C., Mountain, G., & Romano, D. M. (2016). Problem solving, confidence and frustration when carrying out familiar tasks on a familiar mobile device. Computers in Human Behavior, 61, 300-312.
- Mukherjee, R., Debattista, K., Bashford-Rogers, T., Vangorp, P., Mantiuk, R., Bessa, M., … & Chalmers, A. (2016). Objective and subjective evaluation of high dynamic range video compression. Signal Processing: Image Communication, 47, 426-437.
- Bleser, G., Damen, D., Behera, A., Hendeby, G., Mura, K., Miezal, M., … & Gorecky, D. (2015). Cognitive learning, monitoring and assistance of industrial workflows using egocentric sensor networks. PLoS ONE, 10(6), e0127769.
- Vangorp, P., Myszkowski, K., Graf, E. W., & Mantiuk, R. K. (2015). A model of local adaptation. ACM Transactions on Graphics (TOG), 34(6), 166.
- Zhang, H., & Xie, X. (2015). Divergence of Gradient Convolution: Deformable Segmentation With Arbitrary Initializations. IEEE Transactions on Image Processing, 24(11), 3902-3914.
- Qi, D., Zhang, H., Fan, J., Perkins, S., Pisconti, A., Simpson, D. M., … & Jones, A. R. (2015). The mzqLibrary: An open source Java library supporting the HUPO-PSI quantitative proteomics standard. Proteomics, 15(18), 3152-3162.
- Sun, Q., & Wu, S. (2014). A configurable agent-based crowd model with generic behaviour effect representation mechanism. Computer-Aided Civil and Infrastructure Engineering. doi:10.1111/mice.12081
- Wu, S., & Sun, Q. (2014). Computer simulation of leadership, consensus decision making and collective behaviour in humans. PLoS ONE, 9(1). doi:10.1371/journal.pone.0080680
- Behera, A., Cohn, A. G., & Hogg, D. C. (2014). Real-time activity recognition by discerning qualitative relationships between randomly chosen visual features. In Proceedings of the British Machine Vision Conference (BMVC 2014). British Machine Vision Association.
- Fang, H., Mac Parthaláin, N., Aubrey, A. J., Tam, G. K. L., Borgo, R., Rosin, P. L., Grant, P. W., Marshall, D., & Chen, M. (2014). Facial expression recognition in dynamic sequences: An integrated approach. Pattern Recognition, 47(3), 1271-1281.
- Chen, M., Walton, S., Berger, K., Thiyagalingam, J., Duffy, B., Fang, H., … & Trefethen, A. E. (2014). Visual multiplexing. Computer Graphics Forum, 33(3), 241-250.
- Walton, S., Berger, K., Thiyagalingam, J., Duffy, B., Fang, H., Holloway, C., Trefethen, A., & Chen, M. (2014). Visualising cardiovascular magnetic resonance (CMR) imagery: Challenges and opportunities. Progress in Biophysics and Molecular Biology, 115(2-3), 349-358.
- Kellnhofer, P., Ritschel, T., Vangorp, P., Myszkowski, K., & Seidel, H. P. (2014). Stereo day-for-night: Retargeting disparity for scotopic vision. ACM Transactions on Applied Perception (TAP), 11(3), 15.
As a result of a £13 million investment, the Tech Hub hosts several unique facilities that are used for teaching and research by our students, collaborators and industry. In particular, we support industry partners who wish to thrive and benefit from the use of innovative, creative virtual reality solutions.
Cave Automatic Virtual Environment (CAVE)
The high-resolution Cave Automatic Virtual Environment (CAVE) is located on the ground floor of the Tech Hub. This full-size, four-wall visualisation facility allows users to experience a virtual environment as if it were real.
The Hatchery
Situated on the third floor of the Tech Hub, the Hatchery offers pre-start and early-stage companies a shared office facility, with the additional advantage of proximity to the University's subject matter experts and the Tech Hub and campus facilities.
Based on the ground floor of our Tech Hub building, this is the facility where industry partners and collaborators can meet our students and set challenges.
To find out more about the Tech Hub facilities, please contact:
Michael Banford, Knowledge Exchange and Enterprise Manager
T: 01695 657645