2019 LACOL Language Instruction Jam

Carly Born giving a talk, wearing a red shirt and gold necklace and gesturing.

Carly Born (with Chico Zimmerman and Clara Hardy) recently participated in LACOL’s 2019 Language Jam, hosted at Bryn Mawr College. Twenty-six faculty and technologists from across the consortium attended.

The weekend centered on the CHIANTI project: a repository-style site where instructors can share assignments and materials for use in their own classes, and where students can complete tutorials in the specific content areas where they need extra help. Carly also shared an update on Language Lesson and on the development of the Language Dashboard Report, a Moodle report plugin intended to give faculty granular information on student performance on language placement tests. For more information on the projects demonstrated or on the Language Jam overall, please feel free to contact Carly (cborn@carleton.edu)!


New article published by Janet Russell and Melissa Eblen-Zayas on CUBE

Carleton Undergraduate Bridge Experience

Melissa and Janet’s article “Making an Online Summer Bridge Program High Touch” was recently published in the Journal of College Student Development. The article describes the creation of the Carleton Undergraduate Bridge Experience (CUBE), a hybrid program that includes 6 weeks of online programming during the summer and 10 weeks of face-to-face programming during fall term of the students’ first year.

Citation:

Eblen-Zayas, M., & Russell, J. (2019). Making an Online Summer Bridge Program High Touch. Journal of College Student Development, 60(1), 104–109. Johns Hopkins University Press. https://doi.org/10.1353/csd.2019.0006

PDF download of the article

BiochemAR is now available!

A 3D model of a molecule appears above a QR code on a plain table.

BiochemAR, an augmented reality app for visualizing 3D molecular models, is now available for download on Apple’s App Store and the Google Play Store. The app, a collaboration between Rou-Jia Sung (Biology) and Andrew Wilson (AT), also includes learning modules and suggestions for using the app in the classroom. To read more, check out this write-up in The Scientist. If you’re interested in more information, or in talking through the development of additional modules, please email Rou-Jia (rsung@carleton.edu) or Andrew (awilson@carleton.edu) directly.

Themes from AZCALL & Carly’s Current Research

Arizona State Flag

Recently, I attended AZCALL 2018, a small one-day conference hosted by the CALL Club at Arizona State University. This was the first conference the graduate students in the club had planned, and they anticipated about 60 attendees; to their surprise, registrations doubled that number! The best part of attending small conferences like this one is that they are usually highly impactful without being overwhelming, so I’m still jazzed about some of the topics discussed!

The conference opened with a keynote by Jonathon Reinhardt, Associate Professor of English at the University of Arizona, on the potential of multiplayer games for second language learners. If you visit his page, you’ll see that his recent research focuses on games and gameful educational techniques, which have been very hot topics in both second language pedagogy and instructional design circles.

Aside from the now-common themes of games for education, game-based learning, and gamification, virtual and augmented reality were represented in presentations by Margherita Berti, a doctoral candidate at the University of Arizona, and in the closing keynote by the always energetic Steven Thorne, among others. Berti won the conference award for best presentation for her talk on using 360º YouTube videos and Google Cardboard to increase cultural awareness among her students of Italian. Check out her website, Italian Open Education, for more of her examples.

My personal favorite presentation was given by Heather Offerman of Purdue University, who spoke about her work using visualizations of sound (via a linguistics tool called Praat) to give pronunciation feedback to Spanish language learners. Her work is very close to my own research on visualizing Chinese tones with Language Lesson, so I was excited to hear about the techniques she used and how successful she feels they were as pedagogical interventions. Interestingly, at the last few CALL conferences I’ve attended, there have been more and more presentations on the need for explicit, structured teaching of L2 pronunciation, which might appear to be at odds with the trend toward teaching with Comprehensible Input (check out this 2014 issue of The Language Educator from ACTFL for more on CI). But I argue that it’s possible, and possibly a good idea, to integrate explicit pronunciation instruction with the CI methodology and get the best of both worlds. Everything in moderation, as my mom would say.
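For anyone curious what this kind of Praat-style pitch analysis looks like in code, here is a minimal sketch using the praat-parselmouth Python bindings. This is purely my own illustration, not Offerman’s actual workflow, and the file name is hypothetical:

```python
# Minimal sketch: extract a pitch (F0) contour from a recording, Praat-style.
# Requires `pip install praat-parselmouth`; the WAV file name is made up.
import parselmouth

snd = parselmouth.Sound("student_recording.wav")
pitch = snd.to_pitch(time_step=0.01, pitch_floor=75, pitch_ceiling=500)

times = pitch.xs()                         # analysis frame times, in seconds
f0 = pitch.selected_array['frequency']     # F0 in Hz; 0 marks unvoiced frames

voiced = f0 > 0
print(f"Voiced frames: {voiced.sum()} of {len(f0)}")
print(f"Mean F0 over voiced frames: {f0[voiced].mean():.1f} Hz")
```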

As with all things, there is no silver-bullet technology for automatically evaluating student L2 speech and providing the perfect feedback to help them improve. Some practitioners have focused on Automatic Speech Recognition (ASR) technologies and have been using them in their L2 classrooms. However, the use of ASR is founded on the premise that if the machine can understand you, then your pronunciation is good enough. I’m not sure that’s the bar I want to set in my own language classroom. I’d rather give students much more targeted feedback on the segmentals of their speech: feedback that helps them not only notice where their speech differs from the model, but also notice important aspects of the target language, gaining a better socio-cultural understanding of verbal cues.

That is why I have been working on developing the pitch visualization component of Language Lesson. The goal is to help students who struggle with producing Chinese tones notice the variance between their speech and the model they are repeating, by showing them both pitch contours. Soon, I hope to have a display that overlaps the two contours so that students can see the differences between them very clearly. Below are some screenshots of the pitch contours that I hope to integrate in the next 6 months.

[Slideshow: screenshots of the pitch contour display]
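As a rough preview of that overlapped display, here is a sketch that plots a model contour and a student contour on shared axes. The plotting library and file names are my stand-ins; Language Lesson’s actual display is its own implementation:

```python
# Sketch of an overlapped pitch display: model vs. student contours together.
# File names are placeholders; Language Lesson's real rendering differs.
import matplotlib.pyplot as plt
import numpy as np
import parselmouth

def pitch_contour(path):
    pitch = parselmouth.Sound(path).to_pitch(time_step=0.01)
    f0 = pitch.selected_array['frequency']
    f0[f0 == 0] = np.nan   # mask unvoiced frames so the plot skips them
    return pitch.xs(), f0

for path, label in [("model.wav", "model"), ("student.wav", "student")]:
    times, f0 = pitch_contour(path)
    plt.plot(times, f0, label=label)

plt.xlabel("Time (s)")
plt.ylabel("F0 (Hz)")
plt.title("Model vs. student pitch contours")
plt.legend()
plt.show()
```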

I’m looking forward to spending part of this winter break working on a research project to assess the value of pitch contour visualization for Chinese L2 learners. I will be collecting the recordings I’ve captured over the past two years and producing a dataset for each group of students (some of whom had the pitch visualization and some of whom did not), then looking for differing trends in the students’ production of Chinese tones among the treatment groups. Below are just a few of the articles I’ve read recently that have informed my research direction, followed by a sketch of one way such contours could be scored. It should be exciting work!

Elicited Imitation Exercises

Vinther, T. (2002). Elicited imitation: A brief overview. International Journal of Applied Linguistics, 12(1), 54–73. https://doi.org/10.1111/1473-4192.00024

Yan, X., Maeda, Y., Lv, J., & Ginther, A. (2016). Elicited imitation as a measure of second language proficiency: A narrative review and meta-analysis. Language Testing, 33(4), 497–528. https://doi.org/10.1177/0265532215594643

Erlam, R. (2006). Elicited Imitation as a Measure of L2 Implicit Knowledge: An Empirical Validation Study. Applied Linguistics, 27(3), 464–491. https://doi.org/10.1093/applin/aml001

Chinese Tone Acquisition

Rohr, J. (2014). Training naïve learners to identify Chinese tone: An inductive approach. In N. Jiang (Ed.), Advances in Chinese as a Second Language: Acquisition and Processing (pp. 157–178). Newcastle-upon-Tyne: Cambridge Scholars Publishing. Retrieved from http://ebookcentral.proquest.com/lib/carleton-ebooks/detail.action?docID=1656455
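Finally, as promised above, here is one plausible way to turn a pair of contours into a comparable score: resample each voiced contour to a common length, convert to semitones so that contour shape matters more than a speaker’s absolute pitch range, and take the root-mean-square difference. This is purely an illustrative sketch, not the actual metric of my study:

```python
# Illustrative only: one reasonable distance measure between two F0 contours.
# Not the actual metric used in the study described above.
import numpy as np

def to_semitones(f0_hz, ref_hz=100.0):
    """Convert frequencies in Hz to semitones relative to a reference."""
    return 12.0 * np.log2(f0_hz / ref_hz)

def contour_distance(model_f0, student_f0, n_points=100):
    """RMS semitone difference between two contours, time-normalized."""
    def resample_voiced(f0):
        f0 = np.asarray(f0, dtype=float)
        f0 = f0[f0 > 0]                              # voiced frames only
        x = np.linspace(0.0, 1.0, len(f0))
        return np.interp(np.linspace(0.0, 1.0, n_points), x, f0)

    model = to_semitones(resample_voiced(model_f0))
    student = to_semitones(resample_voiced(student_f0))
    return float(np.sqrt(np.mean((model - student) ** 2)))
```

Subtracting each contour’s mean before comparing would push the score even further toward tone shape rather than overall pitch level, which is one of the design choices such a study would need to settle.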

**cross-posted from Carly’s blog, The Space Between.

#ISSOTL18 conference: Toward a learning culture

red sign with moose head shape and text "moose shop"

Sarah Calhoun, Janet Russell, and Celeste Sharpe presented a poster (co-authored with Melissa Eblen-Zayas, Iris Jastram, and Kristin Partlo) titled “Perspectives on connecting SoTL across the (co-)curriculum at a small liberal arts college” at the International Society for the Scholarship of Teaching & Learning conference in Bergen, Norway. The poster presented three examples of overlapping initiatives at Carleton and the ways in which these projects are surfacing gaps and providing a critical foundation for a more concerted, campus-wide effort. These findings will also be presented at an LTC session during winter term. The poster and bibliography are available at http://bit.ly/issotl2018-connecting. An image of the poster is below.

conference poster on eportfolios, information literacy in student writing, and student internships as examples of scholarship of teaching and learning in the co-curriculum
authors: Sarah Calhoun, Melissa Eblen-Zayas, Iris Jastram, Kristin Partlo, Janet Russell, and Celeste Sharpe.

#RiddleMiaThis, Riddle Me That: Trying out a Puzzle Room App

Celeste selfies with small bronze Icarus statue
just pondering Icarus as the last artifact you see in the puzzle #youhaveflowntooclosetothesun

Together with my intrepid colleague Sarah Calhoun, I tried out the new Riddle Mia This app at the Minneapolis Institute of Art (MIA). The app was designed by Samantha Porter and Colin McFadden (both of the University of Minnesota’s Liberal Arts Technology and Innovation Services) along with collaborators from GLITCH, a “community driven arts and education center for emerging game makers,” and was released on Sept. 14, 2018. It’s available for free download on the Google Play Store and the Apple App Store.

This won’t be a clue-by-clue discussion of the experience (how boring!), but rather a highlight of a couple of clues that point to some broader lessons about crafting place-based experiences that employ augmented reality (AR).

What’s in a Clue?

longform white text against a dark background with clue hints
example of a clue in the Riddle MIA This app.

The clues are delivered via a text/email-style message in the app, with a body of text giving the main part of the clue. The envelope button takes users to the full list of unlocked clues, and the camera button opens your phone’s camera for the clues that include AR elements (maybe half of the total). The map pin opens the official museum map, with floorplans for the 2nd and 3rd floors, the floors relevant to the app.

The “?” opens a menu of 3 additional options: Map, Puzzle, and Answer. The Map tab opens a slice of the museum gallery map with a line drawing showing where to go for the next clue. The Puzzle tab often gives you the actual information you need to complete the clue, e.g., “look for this kind of thing.” The Answer tab gives the full answer.

My greatest challenge with the app and the overall experience was the structure of the clues. I know, I know, the puzzle aspect is part of the fun! But I found the clues confusing at times because of either word choice or how the clue text was parsed into the sections of the app. For almost every clue, there didn’t seem to be a consistent approach to what information landed in the main clue message and what was included in the Puzzle section. I would have preferred having all the information for the puzzle on one screen, with the Map and Answer toggled on another page, more clearly separating the clues from the solutions in the interface. More signposting in the clues about when to use the camera and when an AR element was going to factor in would also have been welcome; a sketch of the separation I have in mind follows.
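To make that concrete, here is a hypothetical data model for the split described above. None of these names come from the actual app; they are invented purely for illustration:

```python
# Hypothetical restructuring of a clue: everything needed to solve the
# puzzle on one screen, solution material on another. Field names are
# invented for illustration; this is not the app's real data model.
from dataclasses import dataclass

@dataclass
class PuzzleScreen:
    clue_text: str      # the full clue, including what the Puzzle tab holds now
    uses_camera: bool   # signpost up front that an AR element is coming

@dataclass
class SolutionScreen:
    map_slice: str      # path to the gallery-map excerpt for the next stop
    answer: str         # the full answer, revealed only on request
```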

Direction and Scale Matters

We successfully completed the game in the estimated time of 1 hour. That hour was dedicated almost entirely to moving through the clues, which encompassed 2 floors and numerous galleries.

From the user perspective, I would suggest some way to flag the distance and movement through spaces between clues. The slices of map shown with each clue aren’t accompanied by a scale or an estimated travel time. The graffiti clue is the clearest example of this: it suggests that the object is on either the 2nd or 3rd floor, and it involved a considerable amount of travel time from origin to endpoint, including the level change and, in our experience, winding around some exhibit construction.

Takeaways

To be sure, the ambition of the app is one of its strengths, as is the desire to expose users to a wide swath of art styles, media, and artists. It moves users through MIA’s rich collections, and I thoroughly enjoyed zipping through galleries that I had never ventured into before. A group of young people were also playing the game about 4 clues “behind” us, so it was fun to hear snippets of their time working through the clues.

As I think about how to take inspiration from RiddleMIAThis, I’m pondering the issue of scale. One wish I have for a future version of RiddleMIAThis (or a comparable museum gallery app) would be different “levels,” each focused on 1 floor and/or 1 particular set of galleries, moving users from object to object and room to room on a smaller scale and around a particular theme or iconography. A week or so later, I’m hard-pressed to think of a cohesive through-line for the art we saw, and the educator in me is always interested in the ways technology can open up or reinforce teachable moments around the content.

Recap: Day of DH 2018

3 scholars seated on high chairs with microphones smiling and laughing during discussion.

Image caption: (l-r) Thabiti Willis, Jack Gieseking, Adriana Estill in conversation. Photo by Briannon Carlsen.


Carly Born Presents at CALICO 2018!

Carly presents Language Lesson for Speaking Exercises at the Computer-Assisted Language Instruction Consortium (CALICO) 2018 Technology Showcase on Thursday, May 31!

Abstract:

Language Lesson is a stand-alone tool designed to facilitate student recording exercises, such as elicited imitation tasks, scaffolded dialog practice, or fluency exercises. The tool allows instructors to leave text or oral feedback at specific points in student recordings, providing contextualized corrective feedback on students’ speaking. In our current research, we are investigating Natural Language Processing technology to facilitate the evaluation of student recordings for placement or proficiency assessments. Language Lesson will be released as an open-source project in the summer of 2018.

Clicker Assessment Summary: 2016-2018

Vintage voting machine with dials and labels for Democrat and Republican.

SUMMARY

Academic Technology has conducted short assessments of in-class clicker use across several 100-level courses in the sciences and social sciences during the 2016-17 and 2017-18 academic years. In all courses surveyed, students agreed that clickers made class more engaging by helping them participate more openly, increasing their attention in class, prompting them to think more deeply about their answers, and honing their critical thinking skills.


Academic Technology at OLC Innovate 2018!

Andrew, Dann, and Janet presented at the Online Learning Consortium Innovate! Conference in Nashville.  Their talks were (respectively):

Dann’s notes from sessions he attended are summarized below: