Department of Communication and Psychology

STEP INTO YOUR OWN VIDEO DATA WORLD

Ground-breaking software empowers researchers to “inhabit” video data in virtual reality through immersive 360° video technology. Introducing AVA360VR!

Published online: 20.05.2021

 

Big Soft Video, a team at Aalborg University led by Professor Paul McIlvenny and Associate Professor Jacob Davidsen, is launching AVA360VR – a new piece of software for Immersive Humanities research that revolutionises traditional interaction and video research and holds massive potential for education and pedagogical training. The first YouTube video tutorial on AVA360VR is available online.

“Imagine this. You are a researcher interested in classroom interaction. You go out “into the field” to make a video recording of an authentic classroom – not with a traditional video camera, but with a 360° camera, to avoid missing out on important activities. Now, when you get back to your office, instead of watching your recording on a flat computer screen, you “step into” the video data with a virtual reality headset – not just to watch the classroom, but to re-inhabit it. This is next-level qualitative video analysis. And it’s possible with AVA360VR – a tool that leads the way for immersive qualitative analytics,” Jacob Davidsen says.

AVA360VR is the name of the software that the Big Soft Video team at Aalborg University is developing. The need for it is a response to the ways in which working with qualitative video data falls short today.

“This software can revolutionise the way we do video-based research,” according to Jacob Davidsen, “and it holds massive potential for education and pedagogical training in practice-based fields – for example training of health care professionals or pedagogues in nursery education.”

An upcoming feature is a collaborative version in which users in different locations can view and annotate the same 360° video – a minimum viable version is already in use. One example could be a clinical supervisor who uses AVA360VR with their students to analyse, train and practise together, or a group of international researchers analysing their data collaboratively in AVA360VR.

 

Traditional video and the inevitable risks of compromising data

A common methodological challenge in interaction research with video is how and where to place the camera. Researchers must consider camera position carefully to capture as much of the interaction as possible. It is common for researchers to add multiple cameras simply to secure maximum coverage. However, traditional video, regardless of how many cameras are used, often leaves interactionally salient people, actions, events and objects out of sight (and out of frame).

“I’m sure most of us are familiar with watching a video recording in which the important parts end up taking place on the margins or even outside of the camera’s gaze. This is a common challenge for researchers – and that’s why 360° video is a complete game-changer,” Jacob Davidsen explains.

 

What is 360° video?

A 360° video, also referred to as an immersive video, is a recording that captures every viewing direction at the same time, typically made with an omnidirectional camera.

Facebook and YouTube already allow for uploading and navigating 360° videos on their platforms. Add to this a growing number of media players such as VLC, GoPro VR and PotPlayer that have launched navigational features for 360° video.

Flat screens, flat data

Within the past few years, humanities and social science researchers have increasingly adopted the more holistic 360° camera. Yet most researchers still watch and analyse their 360° video recordings on flat desktop screens. This comes with a number of drawbacks, Jacob and Paul argue.

“The only available solution at the moment is to watch and analyse 360° video on flat computer screens. We would argue that video should not be flat, but immersive – that it should be experienced in VR. Flat screens render a false perception of relative positions. In a sense, it is the difference between conceptualising the world on a flat map vs. on a round globe. The former stretches the data in order to ‘map’ it onto a flat surface; the latter is a truer representation of the real thing. With our software, we invite researchers to ‘step into the middle of the globe’ and look out in 360° onto their video data world. It’s enhanced video data immersion,” Jacob explains.
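The flat-map analogy can be made concrete with a little geometry. As a rough illustration only – it assumes the common equirectangular projection used by most consumer 360° cameras and says nothing about AVA360VR’s internal implementation – the Python sketch below shows how each pixel of the flat frame corresponds to a viewing direction on a sphere, and why the top and bottom rows look stretched on a flat screen even though they all point towards the same poles:

import math

def equirect_pixel_to_direction(x, y, width, height):
    """Map a pixel (x, y) of an equirectangular 360° frame to a unit
    3D viewing direction, assuming the frame spans 360° horizontally
    (longitude) and 180° vertically (latitude)."""
    lon = (x / width - 0.5) * 2.0 * math.pi   # -pi (left edge) .. +pi (right edge)
    lat = (0.5 - y / height) * math.pi        # +pi/2 (top row) .. -pi/2 (bottom row)
    return (math.cos(lat) * math.sin(lon),    # x: right
            math.sin(lat),                    # y: up
            math.cos(lat) * math.cos(lon))    # z: forward

# The entire top row of a 4K equirectangular frame collapses to (almost)
# the same "straight up" direction – exactly the stretching the analogy describes.
print(equirect_pixel_to_direction(0, 0, 3840, 1920))
print(equirect_pixel_to_direction(1920, 0, 3840, 1920))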

Besides poor representation, there is another drawback to rendering 360° video on flat screens. The current market-leading media players for computers provide only limited options for working analytically with 360° video. Most media players offer simple navigational features and no analytical functionality:

 “When researchers have captured 360° video, they have very limited options for working with it afterwards. Navigating around in a video is powerful for reliving the experience, but it will only take the researcher so far. Researchers rely on fundamental research practices such as annotating, processing, analysing and extracting data. These core research processes have not been supported in any 360° software – until now,” Jacob and Paul explain. 

From watching to inhabiting video – meet AVA360VR

Fuelled by this gap in the market, the Big Soft Video team is launching the innovative software AVA360VR. It is a unique and flexible tool for Annotating, Visualising and Analysing 360° video in Virtual Reality. The tool empowers users – researchers, students and educators – to relive and inhabit any 360° video-recorded situation in virtual reality. It embodies the team’s idea of immersive humanities – a new way of performing research and dissemination in the humanities.

AVA360VR allows users to work directly in the 360° video – it is like one big canvas for research. For example, users can embed objects onto the 360° video recording – objects such as external images, notes, transcripts, traditional video recordings, and more. Furthermore, users are able to add annotations, such as drawings, notes and arrows, and even animate the objects and annotations so they follow the movements of relevant participants. Finally, it is possible to integrate multiple video cameras in one “reality” and then jump from camera to camera to change the viewpoint on the same interaction.
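To make the idea of animated annotations a little more concrete, here is a minimal, purely hypothetical Python sketch – it is not AVA360VR’s actual data model, which is not described here – in which an annotation that follows a participant is represented as a handful of keyframed positions on the 360° sphere and interpolated over time:

import bisect

# Hypothetical representation: each keyframe pins the annotation to a
# (longitude, latitude) on the 360° sphere at a given time in seconds.
keyframes = [
    (0.0, 10.0, -5.0),
    (4.0, 35.0, -2.0),
    (9.0, 60.0,  0.0),
]

def annotation_anchor(t):
    """Linearly interpolate the annotation's (longitude, latitude) at time t."""
    times = [k[0] for k in keyframes]
    if t <= times[0]:
        return keyframes[0][1:]
    if t >= times[-1]:
        return keyframes[-1][1:]
    i = bisect.bisect_right(times, t)
    (t0, lon0, lat0), (t1, lon1, lat1) = keyframes[i - 1], keyframes[i]
    a = (t - t0) / (t1 - t0)
    return (lon0 + a * (lon1 - lon0), lat0 + a * (lat1 - lat0))

print(annotation_anchor(6.0))  # roughly (45.0, -1.2), between the 2nd and 3rd keyframes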

These functionalities set AVA360VR at the very forefront of virtual reality technology, immersive qualitative analytics, qualitative video analysis and interaction research.

Four innovative features in AVA360VR

1. JUMP BETWEEN CAMERAS

Jump between cameras with a single click, and take a step deeper towards immersive qualitative analytics and six degrees of freedom.

2. EMBED OBJECTS FLEXIBLY

Drag traditional videos, images, texts, maps, biometric data etc. onto the 360° video to make sure all data is “at hand” for your analysis.

3. ANNOTATE AND ANIMATE

Write, highlight and draw on the video. Animate the annotations (and the embedded objects) so they follow important participants.

4. EXTRACT DATA EASILY

Export your data from AVA360VR with capture tools such as video snippets, frame grabs and 4-view shots. No need for editing afterwards.

Endless applications for both researchers and practitioners

Initially, the Big Soft Video team designed AVA360VR as a flexible, versatile and analytical tool for researchers who engage with huge amounts of qualitative 360° video data. However, during the development process, it soon became clear that AVA360VR carries massive potential not just for researchers, but also for practitioners. The tool is ideal for teaching and training purposes across all kinds of sectors, for example health and education – and stakeholders from these areas have already taken an interest.

“Originally, we wanted to present interaction researchers with a tool to work smarter and more immersively with their video material. However, we soon realised that this tool is attractive far beyond research. A 360° video recording of authentic situations from an operating theatre or a classroom can be put to good use when teaching students about communication, interaction with patients, medical procedures etc. The 11,000 nursing students in Denmark could use the tool as preparation before venturing into “the field” or as patient-nurse communication training. Lecturers can embed sound clips of themselves, acting as a voiceover that directs students’ attention to crucial interactional parts,” Jacob explains.

The team is looking for funding to develop the software further as an infrastructure for immersive humanities and also as a pedagogical training tool. Currently, the Big Soft Video team has a full-time programmer, Artúr Barnabás Kovács, working to stabilise and improve the existing features and introduce new ones.

Get started with AVA360VR now

Do you want to get started with AVA360VR right away? The basic requirements for running AVA360VR are a VR-ready computer as well as a VR headset (all commercial headsets can be used). The software itself is open-source and is available here:

Software, help tutorials, demo project and support:

github.com/BigSoftVideo/AVA360VR

Help pages:

bigsoftvideo.github.io/AVA360VR

Please get in touch with Jacob Davidsen, Associate Professor at the Department of Communication and Psychology, Aalborg University, at jdavidsen@hum.aau.dk.

 

Behind the researcher | Paul McIlvenny

Paul McIlvenny is a Professor at the Department of Culture and Global Studies at Aalborg University. He holds a PhD from Edinburgh University, Scotland, and is active in the following centres and research groups: Centre for Discourses in Transition (C-DIT), Centre for Mobility and Urban Studies and VILA (Video Research Lab, a part of DIGHUMLAB) at Aalborg University.

 

Behind the researcher | Jacob Davidsen

Jacob Davidsen is an Associate Professor at the Department of Communication and Psychology at Aalborg University. He holds an MA in Information Science and is active in VILA (Video Research Lab, a part of DIGHUMLAB) at Aalborg University.

His research interests include computer supported collaborative learning and embodied interaction analysis. 

 

About BIG VIDEO

BIG VIDEO is a programme at Aalborg University that aims to develop an enhanced infrastructure for qualitative video analysis with innovation in four key areas: 1) Capture, storage, archiving and access of digital video, 2) Visualisation, transformation and presentation, 3) Collaboration and sharing, and 4) Qualitative tools to support analysis. 

Read more about the background for the programme or the BIG VIDEO manifesto. Check out the BigSoftVideo space on GitHub.