Through the looking glass: Adventures with the HoloLens

This blog post has been a long time coming; I have been meaning to write about our ongoing HoloLens developments for some time. I want to start by saying that, even after more than a year with the HoloLens, it still excites me more than any of the other VR/AR technology currently available. Since I last posted we have purchased three more HoloLens units. This expansion was to enable multi-user experiences, something which I think makes the HoloLens and AR stand out from VR in a classroom environment. These extra units have helped me to work on two fascinating projects, Spectator View and Shared Reality View, both of which use multiple HoloLenses.

Spectator View

We have had the HoloLens for over a year now and have only one video demonstrating it. This is due to how difficult it is to record the AR through the HoloLens itself. Microsoft thought of this and created Spectator View. Spectator View allows you to plug a digital camera and a HoloLens into a computer and stitch together the images from both, which means you can record the AR experience at a much higher resolution. But to do this, you need a second HoloLens and a mount to hold it onto the digital camera. So, second HoloLens, check; HoloLens mount, check (see the picture; I 3D printed one over the summer). Then came the hard part. Although Microsoft has created the software for Spectator View, it isn't packaged up as a nice, easy application; you have to build it yourself from the source code. After a few hours of debugging, I finally got all of the required applications working. This is our current setup.

top view of HoloLens on plastic mount
HoloLens sitting on 3D printed mount
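
For the curious, the core idea behind the stitching is straightforward: the mounted HoloLens tells the computer where the camera is, the computer renders the holograms from that viewpoint, and each hologram frame is blended over the matching camera frame. Here is a conceptual sketch of that compositing step in Unity-style C# (my own simplification for illustration, not Microsoft's actual Spectator View code):

    using UnityEngine;

    public static class Compositor
    {
        // Blend a hologram frame (with per-pixel alpha) over the camera frame.
        public static Color32[] Composite(Color32[] cameraFrame, Color32[] hologramFrame)
        {
            var composite = new Color32[cameraFrame.Length];
            for (int i = 0; i < cameraFrame.Length; i++)
            {
                float alpha = hologramFrame[i].a / 255f;  // hologram opacity at this pixel
                composite[i] = Color32.Lerp(cameraFrame[i], hologramFrame[i], alpha);
            }
            return composite;
        }
    }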

I am looking forward to making some new HoloLens videos.

Shared Reality View

The second package I have been working on is a shared reality experience in which users explore an archaeological site, Bryn Celli Ddu, and its associated data. As with Spectator View, multiple HoloLenses are involved: Shared Reality View allows every HoloLens user to see the same hologram within the same space. This enables us to create shared experiences, which is a vital tool for teaching: everyone can see and interact with the same object in the same space. It adds a whole new level to AR, allowing for more social interaction rather than isolating users in their own 'realities' as VR or single-user experiences do.
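
Under the hood, this kind of shared experience rests on spatial anchors: one HoloLens anchors the hologram to a point in the room, serializes that anchor, and sends it to the other headsets, which lock the same hologram to the imported anchor. Below is a minimal sketch of that pattern using Unity's 2017-era WorldAnchorTransferBatch API; it illustrates the technique rather than our project's actual code, and the networking transport (SendToOtherDevices and the receive path) is a hypothetical stub:

    using System.Collections.Generic;
    using UnityEngine;
    using UnityEngine.XR.WSA;
    using UnityEngine.XR.WSA.Sharing;

    public class SharedHologramAnchor : MonoBehaviour
    {
        public GameObject hologram;    // the model every user should see in the same spot
        private readonly List<byte> exportData = new List<byte>();

        // Device A: anchor the hologram to the room and serialize the anchor.
        public void ExportAnchor()
        {
            WorldAnchor anchor = hologram.AddComponent<WorldAnchor>();
            var batch = new WorldAnchorTransferBatch();
            batch.AddWorldAnchor("sharedHologram", anchor);
            WorldAnchorTransferBatch.ExportAsync(batch,
                data => exportData.AddRange(data),                // serialized chunks arrive here
                reason =>
                {
                    if (reason == SerializationCompletionReason.Succeeded)
                        SendToOtherDevices(exportData.ToArray()); // hypothetical network stub
                });
        }

        // Device B: import the received bytes and lock the hologram to the same anchor.
        public void OnAnchorDataReceived(byte[] received)
        {
            WorldAnchorTransferBatch.ImportAsync(received, (reason, batch) =>
            {
                if (reason == SerializationCompletionReason.Succeeded)
                    batch.LockObject("sharedHologram", hologram);
            });
        }

        private void SendToOtherDevices(byte[] data) { /* networking omitted */ }
    }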

This shared reality experience was demoed at GIS Day.

#RiddleMiaThis, Riddle Me That: Trying out a Puzzle Room App

Celeste selfies with small bronze Icarus statue
just pondering Icarus as the last artifact you see in the puzzle #youhaveflowntooclosetothesun

Together with my intrepid colleague Sarah Calhoun, I tried out the new Riddle Mia This app at the Minneapolis Institute of Art (MIA). The app was designed by Samantha Porter and Colin McFadden (both employed at the University of Minnesota's Liberal Arts Technology and Innovation Services) along with collaborators from GLITCH, a "community driven arts and education center for emerging game makers," and was released on Sept. 14, 2018. It's available for free download on the Google Play Store and the Apple App Store.

This won't be a clue-by-clue discussion of the experience (how boring!), but will instead highlight a couple of clues that point to broader lessons about crafting place-based experiences that employ augmented reality (AR).

What’s in a Clue?

longform white text against a dark background with clue hints
example of a clue in the Riddle Mia This app

The clues are delivered via a text/email-style message through the app, with a body of text giving the main part of the clue. The envelope button takes users to the full list of unlocked clues, and the camera button opens your phone's camera for the clues that include AR aspects (maybe half of the total). The map pin opens the official museum map with floorplans for the 2nd and 3rd floors, which are the relevant floors for the app.

The "?" opens a menu of three additional options: Map, Puzzle, and Answer. The Map tab opens a selection of the museum gallery map with a line drawing showing where to go for the next clue. The Puzzle tab often gives you the actual information you need to complete the clue, e.g., look for this kind of thing. The Answer tab gives the full answer.

My greatest challenge with the app and the overall experience was the structure of the clues. I know, I know, the puzzle aspect is part of the fun! But I found the way the clues were written confusing at times, because of either word choice or how the clue text was parsed into the sections of the app. There didn't seem to be a consistent approach to what information landed in the main clue message and what was included in the Puzzle section. I would have preferred having all the information for the puzzle clue on one screen and then toggling over to the Map and Answer on another, more clearly separating the clues from the solutions in the interface. More signposting in the clues around when to use the camera and when an AR element was going to factor in would also have been welcome.

Direction and Scale Matter

We successfully completed the game in the estimated time of one hour. That hour was dedicated almost entirely to moving through the clues, which encompassed two floors and numerous galleries.

From the user perspective, I would suggest some ways to flag distance and movement through the spaces between clues. The slices of map shown with each clue aren't accompanied by a scale or an estimated travel time. The graffiti clue is the clearest example of this: it suggests that the object is on either the 2nd or the 3rd floor, and there is a considerable amount of travel from origin to endpoint, including the level change and, in our experience, winding around some exhibit construction.

Takeaways

To be sure, the ambition of the app is one of its strengths, as is the desire to expose users to a wide swath of art styles, media, and artists. It moves users through MIA's rich collections, and I thoroughly enjoyed zipping through galleries I had never ventured into before. A group of young people were also playing the game, about four clues "behind" us, so it was fun to hear snippets of their time working through the clues.

As I think about how to take inspiration from Riddle Mia This, I'm pondering the issue of scale. One wish I have for a future version of the app (or a comparable museum gallery app) is different "levels," each focused on one floor and/or one particular set of galleries, moving users from object to object and room to room at a smaller scale and around a particular theme or iconography. A week or so later, I'm hard-pressed to think of a cohesive through-line for the art we saw, and the educator in me is always interested in the ways technology can open up or reinforce teachable moments around the content.

Recap: Day of DH 2018

3 scholars seated on high chairs with microphones smiling and laughing during discussion.

(l-r) Thabiti Willis, Jack Gieseking, and Adriana Estill in conversation. Photo by Briannon Carlsen.


Academic Technology at OLC Innovate 2018!

Andrew, Dann, and Janet presented at the Online Learning Consortium Innovate! Conference in Nashville. Their talks were (respectively):

Dann’s notes from sessions he attended are summarized below:

Andrew’s Spring 2018 Update

Fall and winter terms were an exciting time for me, with the arrival of our new 3D printer and the in-class trial of one of my Augmented Reality (AR) applications. Spring term will be just as exciting but a bit more virtual for me, as I will be spending time developing virtual experiences for Psychology and making virtual proteins a reality.

Spring term will also see more development and another full trial of our Biochemistry AR application. Working together with Rou-Jia Sung, we will be developing additional modules for use within the Intro to Biochemistry course this term. On this front, we will also be applying for an NSF grant to fund further research into the use of AR within a classroom setting. Excitingly, the AR application will be presented twice this term: at the Online Learning Consortium (OLC) conference in Nashville and at the Society for the Advancement of Biology Education Research (SABER) meeting.

Spring will also be an exciting time for me personally. Now that I am settled at Carleton, and having worked with the wonderful librarians, I am about to embark on writing my third book, Visualizations in Cultural Heritage. The book will look at the history and development of the multitude of visualizations employed within the cultural heritage field.

Above us only digital sky: Augmenting Real Life

Time for my second post. It is a lot later than expected; I still haven't got this blogging thing down yet.

As part of the fun new tech we have been purchasing at Carleton, we managed to get hold of a HoloLens. Unlike the HTC Vive, which is VR, the HoloLens is AR (Augmented Reality). The HoloLens is an impressive piece of kit and the one I am most excited about. According to Microsoft (its developer), the HoloLens is "the first self-contained, holographic computer, enabling you to engage with your digital content and interact with holograms in the world around you." In normal terms, it is a tiny computer attached to a set of glass lenses, and it looks like a very futuristic headset.

These lenses are where the magic happens. The HoloLens has three layered screens for the red, green, and blue channels, which are combined to render full-color objects. The onboard computer uses an inertial measurement unit, together with its environment-sensing cameras, to calculate the location of you and the "holographic" object within your surroundings. This technology works in a similar way to the AR on your cell phone in games like Pokemon Go and Ingress.
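
To give a feel for what this tracking buys you as a developer: once the headset knows where it is, placing a hologram is as simple as raycasting from the user's gaze against the scanned room mesh and leaving the object at the hit point, where it stays world-locked as you walk around. Here is a minimal sketch using Unity's 2017-era XR.WSA input API (my own illustration, not code from one of our projects):

    using UnityEngine;
    using UnityEngine.XR.WSA.Input;

    public class TapToPlaceHologram : MonoBehaviour
    {
        public GameObject hologram;            // the object to pin into the room
        private GestureRecognizer recognizer;

        void Start()
        {
            recognizer = new GestureRecognizer();
            recognizer.SetRecognizableGestures(GestureSettings.Tap);
            recognizer.Tapped += OnTapped;
            recognizer.StartCapturingGestures();
        }

        private void OnTapped(TappedEventArgs args)
        {
            // Cast a ray from the user's gaze against the scanned room mesh.
            Transform gaze = Camera.main.transform;
            RaycastHit hit;
            if (Physics.Raycast(gaze.position, gaze.forward, out hit, 10f))
                hologram.transform.position = hit.point;   // world-locked from here on
        }
    }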

The HoloLens opens up some fascinating teaching possibilities. Unlike the Vive and VR, which are isolating, single-user experiences, the HoloLens and AR can be developed into multi-user experiences. A multi-user experience enables each HoloLens to view the same 3D model, providing some exciting possibilities within the classroom.

One of the first projects we worked on was to develop an AR model of the Piper J3 Cub used to train Carleton students in the 1940s and '50s. This was part of a museum display for the sesquicentennial celebrations. The original idea for this project was to use VR and the HTC Vive, but I felt the HoloLens would be more fun for visitors and would still allow them to be present within the space. Thank you to PEPS for editing one of my favorite videos made using the HoloLens.

Video of the Piper J3 Cub (https://vimeo.com/189338455). Watch this space for more fun videos!


dh2017 Recap

Sarah and Celeste give thumbs up next to their poster for dh2017

This month, Sarah Calhoun and I attended dh2017 in Montreal to present a prototype augmented reality app co-developed with Andrew Wilson and Adam Kral. Our poster and additional resources are linked here, but here’s the synopsis:

Our goal was to create an augmented reality app that could better visualize complex and multiple temporalities AND be an easy, reusable resource for classroom use. We chose a mural painted in a Thai Buddhist temple in the UK as our case study because of its layered iconography: the mural depicts the Buddha's defeat of Mara, but the painter chose to include anachronistic elements, including machine guns, Vincent van Gogh, and a rocket ship. We wanted a way to highlight both the historical references, which could be plotted along a traditional chronological timeline, and the temporality of the Buddha's history, which could not.

We got useful and positive feedback from the poster session at dh2017, as well as additional ideas for refining and extending the app from attending several sessions. Our next steps are to clean up some of the identified bugs and do several rounds of user testing with faculty, staff, and students to clarify how we proceed.

Kral, a rising sophomore, did the bulk of the development work over the summer, learning Unity and building the app out in AR Toolkit. His account of what he built is posted here; we plan to continue building on Adam's work and thank him for his efforts!


Student Post: Adam Kral on AR and VR Development

Guest post by Adam Kral (’20) on his summer work for Academic Technology.

So far this summer I have been working on two projects: an augmented reality app to display images related to Buddhism and a skydiving simulator in virtual reality. Both projects have been built using the Unity game engine. The Buddhism app started with a two-dimensional slider that manipulated an image above it, as shown below.

screenshot of Buddhism app in development

I then converted this app to augmented reality using AR Toolkit 5. When the camera sees the background image, the images are rendered in three-dimensional space, and the slider has been replaced with a joystick for manipulating the images. The finished product is shown below.

screenshot of Buddhism time app at end of phase 1
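
For a sense of how the joystick manipulation works, the sketch below (my reconstruction of the general idea, not Adam's actual code) reads a 2D input axis each frame and translates the image in the plane above the marker; in AR Toolkit 5's Unity plugin, the image would sit under the tracked-marker object so it moves with the recognized background image:

    using UnityEngine;

    public class ImageJoystickMover : MonoBehaviour
    {
        public Transform image;      // the image hovering above the marker
        public float speed = 0.5f;   // movement speed in marker space (assumed value)

        void Update()
        {
            // Read the joystick axes and move the image in the marker's local X/Z plane.
            float x = Input.GetAxis("Horizontal");
            float z = Input.GetAxis("Vertical");
            image.localPosition += new Vector3(x, 0f, z) * speed * Time.deltaTime;
        }
    }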

In addition to this AR app, I have been building a virtual reality skydiving simulator for the HTC Vive. The player controls their drag, x-y movement, and rotation via the movement of the controllers, which is tracked by determining the controllers' positions relative to the headset. There is still work to be done, such as adding colliders and textures to the buildings. Some screenshots from inside the headset are below.
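
Here is a rough sketch of how controller-relative-to-headset tracking can drive those three controls, using Unity's 2017-era UnityEngine.XR.InputTracking API (an assumed implementation for illustration; the gains and mappings are made up, not Adam's actual values):

    using UnityEngine;
    using UnityEngine.XR;

    public class SkydiveController : MonoBehaviour
    {
        public Rigidbody player;        // rigidbody falling through the scene
        public float steerGain = 2f;    // hypothetical tuning value

        void FixedUpdate()
        {
            // Controller and head positions in the same tracking space.
            Vector3 head  = InputTracking.GetLocalPosition(XRNode.Head);
            Vector3 left  = InputTracking.GetLocalPosition(XRNode.LeftHand);
            Vector3 right = InputTracking.GetLocalPosition(XRNode.RightHand);

            // Spreading the arms apart increases drag, slowing the fall.
            float spread = Vector3.Distance(left, right);
            player.drag = Mathf.Lerp(0.1f, 1.5f, spread);   // Lerp clamps spread to 0..1

            // The hands' midpoint offset from the head steers horizontal (x-y) movement.
            Vector3 offset = (left + right) * 0.5f - head;
            player.AddForce(new Vector3(offset.x, 0f, offset.z) * steerGain,
                            ForceMode.Acceleration);

            // A height difference between the hands banks/rotates the player.
            player.AddTorque(Vector3.up * (left.y - right.y) * steerGain,
                             ForceMode.Acceleration);
        }
    }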

proposal accepted for dh2017!

photograph of temple wall with Buddhist art and altar in foreground

I'm thrilled to say that Andrew Wilson, Sarah Calhoun, and I had our poster proposal accepted for dh2017 in Montreal! We're experimenting with augmented reality for representing complex temporalities in Buddhist temple murals, and creating lower-barrier-to-entry teaching modules using AR.

Our poster will outline our theoretical framework, detail our development process using Vuforia, and provide possible avenues for further lines of inquiry and applications for temporal visualizations. We’ll include static images of the AR experience, as well as ways to access our project remotely.

We identify two main problems that this initial experiment will address. The first is the issue of visualizing multiple temporalities. Our motivating questions are: What are the visual and spatial relationships between the chronological story of the Buddha defeating Mara and the belief among some Buddhists that the Buddha is personal, eternal, and always present throughout time? How is that expressed in the mural through a wide range of artistic styles and historical references? We will address these questions over the course of our research.

The second problem is a more practical question of how to use augmented reality to further research and teaching of these complex cultural concepts when both the visual and technical resources are limited. We intend to use the extant low-res photographs available of the Defeat of Mara temple mural and the augmented reality framework Vuforia to create a cross-platform experience of the religious expression. This will allow users to see and select individual elements in the mural (such as the Mona Lisa or the spaceship) and engage with the different ways one can order and make meaning out of the varied chronologies and temporal references. Vuforia allows us to use an existing framework that has the benefit of being accessible on multiple platforms. We believe this is necessary for facilitating the adoption of augmented reality for classroom and preliminary research uses.