2019 LACOL Language Instruction Jam

Carly wearing a red shirt and gold necklace, gesturing while giving a talk.

Carly Born (with Chico Zimmerman and Clara Hardy) recently participated in LACOL's 2019 Language Jam, hosted at Bryn Mawr College. Twenty-six faculty and technologists from across the consortium attended.

The weekend centered on the CHIANTI project: a repository-style site where instructors can share assignments and materials to use in their own classes, and where students can complete tutorials on specific content areas in which they need extra help. Carly also shared updates on Language Lesson and on the Language Dashboard Report, a Moodle report plugin intended to give faculty granular information on student performance on language placement tests. For more information on the projects demonstrated, or on the Language Jam overall, please feel free to contact Carly (cborn@carleton.edu)!


New article published by Janet Russell and Melissa Eblen-Zayas on CUBE

Carleton Undergraduate Bridge Experience

Melissa and Janet’s article “Making an Online Summer Bridge Program High Touch” was recently published in the Journal of College Student Development. The article describes the creation of the Carleton Undergraduate Bridge Experience (CUBE), a hybrid program that includes 6 weeks of online programming during the summer and 10 weeks of face-to-face programming during fall term of the students’ first year.

Citation:

Eblen-Zayas, M., & Russell, J. (2019). Making an Online Summer Bridge Program High Touch. Journal of College Student Development, 60(1), 104-109. Johns Hopkins University Press.

https://doi.org/10.1353/csd.2019.0006

PDF download of the article

BiochemAR is now available!

3D model of a molecule appears above a QR code on a plain table.

BiochemAR, an augmented reality app for visualizing 3D molecular models, is now available for download on Apple's App Store and the Google Play Store. The app, a collaboration between Rou-Jia Sung (Biology) and Andrew Wilson (AT), also includes learning modules and suggestions for using the app in the classroom. To read more, check out this write-up in The Scientist. If you're interested in more information, or in talking through the development of additional modules, please email Rou-Jia (rsung@carleton.edu) or Andrew (awilson@carleton.edu) directly.

Video in the Age of Digital Learning: Insight from Jonas Köster’s 2018 book

Cover of Jonas Köster's book Video in the Age of Digital Learning

Jonas Köster recently produced a beautiful and research-rich text entitled Video in the Age of Digital Learning. Those of us working in education and instructional media already know what Köster lays out on the first page: "recent studies overwhelmingly predict the continual rise in the use of instructional video" (xv). Here's why: "digital video is an extremely powerful method to tell stories, explain complex issues through engaging visuals, offer the learner the ability to work at their own pace, and . . . [it's] the most efficient and effective method for bringing a teacher and learners together at an incredible scale" (xv).

This shift in teaching and learning requires more than just a camera and an eager instructor, however. Köster notes, for example, that student attention spans have shortened to only about 8 seconds, and that making a video engaging "requires a thorough examination of the medium to find the best ways to make it as useful as possible" (xvii). Without regurgitating the entire text, I'll outline a few aspects of Köster's book that stood out most.


Themes from AZCALL & Carly’s Current Research

Arizona State Flag

Recently, I attended a small conference called AZCALL 2018, hosted by the CALL Club of Arizona State University. This was the first time the graduate students in the CALL Club had planned this one-day conference, and they anticipated about 60 attendees. To their surprise, registrations doubled that number! The best part of attending small conferences like this one is that they are usually highly impactful without being overwhelming. So I'm still jazzed about some of the topics discussed!

The conference opened with a keynote by Jonathon Reinhardt, Associate Professor of English at the University of Arizona, on the potential of multiplayer games for second language learners. If you visit his page, you'll see that his recent research focuses on games and gameful educational techniques, which have been very hot topics in both second language pedagogy and instructional design circles.

Beyond the now-common themes of games for education, game-based learning, and gamification, virtual and augmented reality were represented in presentations by Margherita Berti, a doctoral candidate at the University of Arizona, and in the closing keynote by the always energetic Steven Thorne, among others. Berti won the conference award for best presentation for her talk on using 360º YouTube videos and Google Cardboard to increase cultural awareness among her students of Italian. Check out her website, Italian Open Education, for more of her examples.

My personal favorite presentation was given by Heather Offerman of Purdue University, who spoke about her work using visualizations of sound (via the linguistics tool Praat) to give pronunciation feedback to Spanish language learners. Her work is very close to some of the research I'm doing on the visualization of Chinese tones with Language Lesson, so I was excited to hear about the techniques she used and how successful she feels they were as pedagogical interventions. At the last few CALL conferences I've attended, I've noticed more and more presentations on the need for explicit, structured teaching of L2 pronunciation in particular, which might appear to be at odds with the trend toward teaching with Comprehensible Input (check out this 2014 issue of The Language Educator by ACTFL for more info on CI). But I'd argue that it's possible, and possibly a good idea, to integrate explicit pronunciation instruction with the CI methodology and get the best of both worlds. Everything in moderation, as my mom would say.

As with all things, there is no silver-bullet technology for automatically evaluating student L2 speech and providing the perfect feedback to help students improve. Some instructors have focused on Automatic Speech Recognition (ASR) technologies and are using them in their L2 classrooms. However, the use of ASR rests on the premise that if the machine can understand you, then your pronunciation is good enough. I'm not sure that's the bar I want to set in my own language classroom. I'd rather give students much more targeted feedback on the segmentals of their speech: feedback that not only helps them notice where their speech differs from the model, but also draws their attention to important aspects of the target language so they gain a better socio-cultural understanding of verbal cues.

That is why I have been developing the pitch visualization component of Language Lesson. The goal is to help students who struggle to produce Chinese tones notice the variance between their speech and the model they are repeating, by showing them both the model's pitch contour and their own. Soon, I hope to have a display that overlaps the two pitch contours so that students can see the differences between them very clearly. Below are some screenshots of the pitch contours that I hope to integrate in the next six months.

[Slideshow: screenshots of pitch contours in Language Lesson]
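For those curious about the mechanics, here is a minimal sketch of how this kind of pitch-contour overlay can be produced, using parselmouth (a Python interface to Praat, the tool Offerman used) and matplotlib. The file names are hypothetical, and this illustrates the general technique rather than Language Lesson's actual implementation.

```python
# Minimal sketch: extract and overlay two pitch contours (model vs. student).
# Assumes praat-parselmouth and matplotlib are installed;
# "model.wav" and "student.wav" are hypothetical file names.
import matplotlib.pyplot as plt
import numpy as np
import parselmouth

def pitch_contour(wav_path):
    """Return (times in seconds, F0 in Hz) for a recording."""
    sound = parselmouth.Sound(wav_path)
    pitch = sound.to_pitch()  # Praat's default pitch analysis
    f0 = pitch.selected_array['frequency'].copy()
    f0[f0 == 0] = np.nan  # Praat reports unvoiced frames as 0; hide them
    return pitch.xs(), f0

model_t, model_f0 = pitch_contour("model.wav")        # instructor's model
student_t, student_f0 = pitch_contour("student.wav")  # learner's attempt

plt.plot(model_t, model_f0, label="Model")
plt.plot(student_t, student_f0, label="Student")
plt.xlabel("Time (s)")
plt.ylabel("F0 (Hz)")
plt.title("Pitch contour overlay")
plt.legend()
plt.show()
```

A real comparison would also need to time-align the two recordings, since a student's utterance rarely matches the model's duration exactly.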

I'm looking forward to spending part of this winter break working on a research project to assess the value of pitch contour visualization for Chinese L2 learners. I will be collecting the recordings I've captured over the past two years and producing a dataset for each group of students (some of whom had the pitch visualization and some of whom did not). I will be looking for differing trends in the students' production of Chinese tones among the different treatment groups. Below are just a few of the articles I've read recently that have informed my research direction. It should be exciting work!

Elicited Imitation Exercises

Vinther, T. (2002). Elicited imitation: A brief overview. International Journal of Applied Linguistics, 12(1), 54–73. https://doi.org/10.1111/1473-4192.00024

Yan, X., Maeda, Y., Lv, J., & Ginther, A. (2016). Elicited imitation as a measure of second language proficiency: A narrative review and meta-analysis. Language Testing, 33(4), 497–528. https://doi.org/10.1177/0265532215594643

Erlam, R. (2006). Elicited Imitation as a Measure of L2 Implicit Knowledge: An Empirical Validation Study. Applied Linguistics, 27(3), 464–491. https://doi.org/10.1093/applin/aml001

Chinese Tone Acquisition

Rohr, J. (2014). Training naïve learners to identify Chinese tone: An inductive approach. In N. Jiang (Ed.), Advances in Chinese as a Second Language: Acquisition and Processing (pp. 157–178). Newcastle-upon-Tyne: Cambridge Scholars Publishing. Retrieved from http://ebookcentral.proquest.com/lib/carleton-ebooks/detail.action?docID=1656455

Cross-posted from Carly's blog, The Space Between.

#ISSOTL18 conference: Toward a learning culture

red sign with moose head shape and text "moose shop"

Sarah Calhoun, Janet Russell, and Celeste Sharpe presented a poster (co-authored with Melissa Eblen-Zayas, Iris Jastram, and Kristin Partlo) titled "Perspectives on connecting SoTL across the (co-)curriculum at a small liberal arts college" at the International Society for the Scholarship of Teaching & Learning (ISSOTL) conference in Bergen, Norway. The poster presented three examples of overlapping initiatives at Carleton and the ways these projects are surfacing gaps and providing a critical foundation for a more concerted, campus-wide effort. The findings will also be presented in an LTC session during winter term. The poster and bibliography are available at http://bit.ly/issotl2018-connecting. An image of the poster is below.

conference poster on eportfolios, information literacy in student writing, and student internships as examples of scholarship of teaching and learning in the co-curriculum
authors: Sarah Calhoun, Melissa Eblen-Zayas, Iris Jastram, Kristin Partlo, Janet Russell, and Celeste Sharpe.

Through the looking glass: Adventures with the HoloLens

This blog post has been a long time coming; I have meant to write about our ongoing HoloLens developments for some time. I want to start by saying that, even after more than a year with the HoloLens, it still excites me more than any other VR/AR technology currently available. Since I last posted, we have purchased three more HoloLens units. This expansion was meant to enable multi-user experiences, which I think is what makes the HoloLens and AR stand out from VR in a classroom environment. These extra units have let me work on two fascinating projects, Spectator-View and Share Reality view, both of which use multiple HoloLenses.

Spectator-View

We have had the HoloLens for over a year now and have only one video demonstrating it, because recording AR through the HoloLens itself is so difficult. Microsoft anticipated this and created Spectator-View, which lets you connect a digital camera and a HoloLens to a computer and stitch together the images from both, so you can record the HoloLens experience at much higher resolution. To do this, though, you need a second HoloLens and a mount to attach it to the digital camera. Second HoloLens, check; HoloLens mount, check (see the picture; I 3D printed one over the summer). Then came the hard part. Although Microsoft has created the software for Spectator-View, it isn't packaged as a nice, easy application; you have to build it yourself from the source code. After a few hours of debugging, I finally got all of the required applications working. This is our current setup.

top view of HoloLens on plastic mount
HoloLens sitting on 3D printed mount

I am looking forward to making some new HoloLens videos.

Share Reality view

The second package I have been working on is a shared reality experience in which users explore an archaeological site, Bryn Celli Ddu, and its associated data. Like Spectator-View, Share Reality uses multiple units: it allows each HoloLens user to see the same hologram within the same space. This lets us create shared experiences, which is vital for teaching: everyone can see and interact with the same object in the same space. It adds a whole new level to AR, allowing for more social interaction rather than isolating users in their own 'realities' the way VR or single-user experiences do.

This Share Reality experience was demoed at GIS Day.

Quizzing as feedback for students

It's important for all of us to get feedback, and the timeliness of that feedback matters too. Remember how it felt when you submitted something to your doctoral thesis committee to review and they took FOREVER to get back to you? Or when you posted that picture on Facebook and the folks you thought would love it didn't even give it a like, let alone a comment? Timely feedback is useful to students' learning and could be the thing that helps them feel like they belong at Carleton.

When designing or revising your course, one way to situate the types of feedback you’ll give is by using the classic Backward Design model by Wiggins and McTighe. Specifically, it can be helpful to use their diagram for setting curricular priorities into alignment with the types of assessment you might use. We can imagine that quizzing might best align with the concepts or outcomes that are important for students to know or to have facility with in order to wrestle with the BIG ideas or “enduring understanding” of a course.

diagram for setting curricular priorities into alignment with the types of assessment
Diagram of curricular priorities and assessment types

Quizzing, and particularly multiple-choice quizzing done outside the classroom (for example, implemented in Moodle, auto-graded, and reported to the gradebook), can make frequent, meaningful feedback for students not only possible but efficient.

Frequent low-stakes "testing" (i.e., the need to retrieve information, whether in a quiz or otherwise) promotes learning (Roediger and Butler 2011). Moreover, frequent quizzing, besides promoting memory, increases the likelihood of transfer (Carpenter 2012).

You can also give feedback on these quizzes. The same Roediger and Butler, this time in 2008, showed that while multiple-choice questions improve student performance, feedback to students on their answers provides additional benefit. And if that feedback explains why an answer is wrong, the transfer effect is stronger than with feedback that simply marks the answer wrong (Moreno and Mayer, 2005). Crafting feedback is decidedly not efficient, though! But it may still be worth your effort in terms of student learning, and if you reuse the quizzes, your time investment will pay off. Moodle can help here too by making it easy to add feedback specific to each of the possible choices students can make in the quiz, as in the example below. And if you're teaching a course that uses a textbook, be aware that many textbooks provide banks of questions with answers and feedback, which can certainly lighten your load.
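To make that concrete, here is a hypothetical multiple-choice question written in GIFT, a plain-text import format that Moodle supports. The text after each # is the feedback shown for that particular choice; the question itself is invented for illustration.

```
::Retrieval practice::Why does frequent low-stakes quizzing promote learning? {
=It prompts students to retrieve information, which strengthens memory.
# Correct! Retrieval practice is the key mechanism (Roediger and Butler 2011).
~It raises the stakes of every class session.
# Not quite: the benefit comes from retrieval itself, and low stakes keep anxiety down.
~It replaces the need for instructor feedback.
# No: feedback on answers adds benefit beyond the quizzing itself (Butler and Roediger 2008).
}
```

The same per-choice feedback fields are available in Moodle's web-based quiz editor, so hand-writing GIFT is optional; it's mainly handy for drafting or importing question banks in bulk.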

As always, AT is here to help you if you want to consider this pedagogical move. Please don’t hesitate to reach out to me (jrussell@carleton.edu) or any ATer if you have questions or concerns or would like to work with us!

Citations:

Butler, A. C., & Roediger, H. L., III. (2008). Feedback enhances the positive effects and reduces the negative effects of multiple-choice testing. Memory and Cognition, 36, 604-616.

Carpenter, S. K. (2012). Testing enhances the transfer of learning. Current Directions in Psychological Science, 21(5).

Moreno, R., & Mayer, R. E. (2005). Role of guidance, reflection, and interactivity in an agent-based multimedia game. Journal of Educational Psychology, 97(1).

Roediger, H. L., III, & Butler, A. C. (2011). The critical role of retrieval practice in long-term retention. Trends in Cognitive Sciences, 15, 20-27.

Wiggins, G. P., & McTighe, J. (2011). The Understanding by Design Guide to Creating High-Quality Units. Alexandria, VA: ASCD.

#RiddleMiaThis, Riddle Me That: Trying out a Puzzle Room App

Celeste selfies with small bronze Icarus statue
just pondering Icarus as the last artifact you see in the puzzle #youhaveflowntooclosetothesun

Together with my intrepid colleague Sarah Calhoun, I tried out the new Riddle Mia This app at the Minneapolis Institute of Art (MIA). The app was designed by Samantha Porter and Colin McFadden (both of the University of Minnesota's Liberal Arts Technology and Innovation Services) along with collaborators from GLITCH, a "community driven arts and education center for emerging game makers," and was released on Sept. 14, 2018. It's available for free download on the Google Play Store and the Apple App Store.

This won't be a clue-by-clue discussion of the experience (how boring!), but will instead highlight a couple of clues to illustrate some broader points about crafting place-based experiences that employ augmented reality (AR).

What’s in a Clue?

longform white text against a dark background with clue hints
example of a clue in the Riddle MIA This app.

The clues are delivered via text/email-style messages through the app, with a body of text giving the main part of the clue. The envelope button takes users to the full list of unlocked clues, and the camera button opens your phone's camera for the clues that include AR elements (maybe half of the total). The map-point button opens the official museum map, with floor plans for the 2nd and 3rd floors, the floors relevant to the app.

The "?" button opens a menu of three additional options: Map, Puzzle, and Answer. The Map tab opens a section of the museum gallery map with a line drawing showing where to go for the next clue. The Puzzle tab often gives you the actual information you need to complete the clue (e.g., look for this kind of thing). The Answer tab gives the full answer.

My greatest challenge with the app and the overall experience was the structure of the clues. I know, I know, the puzzle aspect is part of the fun! But I found the clues confusing at times because of either word choice or how the clue text was parsed into the sections of the app. There didn't seem to be a consistent approach to which information landed in the main clue message and which was included in the Puzzle section. I would have preferred having all the information for the puzzle clue on one screen and then toggling over to the Map and Answer on another, more clearly separating the clues from the solutions in the interface. More signposting about when to use the camera and when an AR element was going to factor in would also have been welcome.

Direction and Scale Matters

We completed the game in the estimated time of one hour, and that hour was dedicated almost entirely to moving through the clues, which spanned two floors and numerous galleries.

From the user perspective, I would suggest some ways to flag distance and movement through the spaces between clues. The slices of map shown with each clue aren't accompanied by a scale or an estimated travel time. The graffiti clue is the clearest example of this: it suggests that the object is on either the 2nd or 3rd floor, and reaching it involves a considerable amount of travel from origin to endpoint, including a level change and, in our experience, winding around some exhibit construction.

Takeaways

To be sure, the ambition of the app is one of its strengths, as is the desire to expose users to a wide swath of art styles, media, and artists. It moves users through MIA's rich collections, and I thoroughly enjoyed zipping through galleries I had never ventured into before. A group of young people were also playing the game about four clues "behind" us, so it was fun to hear snippets of their time working through the clues.

As I think about how to take inspiration from RiddleMIAThis, I'm pondering the issue of scale. One wish I have for a future version of RiddleMIAThis (or a comparable museum gallery app) would be different "levels," each focused on one floor and/or one particular set of galleries, moving users from object to object and room to room on a smaller scale and around a particular theme or iconography. A week or so later, I'm hard-pressed to think of a cohesive through-line for the art we saw, and the educator in me is always interested in the ways technology can open up or reinforce teachable moments around the content.