A Media & Design Guru’s Take . . . On The Media.

A darn-smart educator I know recently commented to me about the media.  He said the media sources he reads and listens to are “not reporting negatively or demonizing the other side.”  For a moment, I thought he was pulling my leg.  Then I asked who his sources were . . . and I realized that even smart folks don’t always recognize the subtle influence of the media.

It reminds me of the NYT video by Daniele Anastasion called The Price of Certainty, which features social psychologist Arie Kruglanski.  In the short video, Kruglanski points out that in times of fear or anxiety . . . our need for closure increases, so we’re quicker to make judgments—sometimes discounting facts.  We become certain—often without being correct.  For many in the US, this is a time of fear and anxiety.  That fear is known, amplified, and capitalized upon by a myriad of agencies, including the news media.

Each media outlet brings bias—intentional or unintentional, systemic or extrinsic—to its reporting.  Careful analysis of the language and visuals makes that bias apparent to those who take the time to look.  Media sources brand and stereotype large groups of people constantly.  Reuters, for example, recently ran this headline:  “How Republicans are using immigration to scare voters to the polls.”  Focusing only on the language in the headline, it’s easy to see that a group of people [Republicans] is being connected with a negative behavior [scaring].  While aspects of this might be true, reasonable questions might include: Are all Republicans doing this, or just some?  Are there any Democrats, Libertarians, Green Party members, or others also doing this?  It’s clear that the headline names just one group and assigns that group a negative behavior.

It’s just a headline, though, right? Yes, but recent studies by the Media Insight Project and the Center for Direct Scientific Communication, as reported by both Forbes and the Washington Post, indicate at least 59% of readers get their news by only reading the headlines. So, while there might be an explanation or correction further down in an article, the headline is the take-away for a majority of readers.  It’s subtle, but it has real impact.

Faculty in nearly any discipline can encourage students to investigate applicable subject-focused stories carried in the media—and how that language, image, audio, or video may be attempting to do more than just be fair and balanced.  We can look at a chart such as Ad Fontes Media’s Media Bias Chart (below) and argue about whether or not it’s accurate, but even more compelling is having students discover the bias on their own. An interesting and eye-opening project is to have students perform an analysis of headlines [or images, or audio, or video] over the course of a week . . . or month . . . or term.

Media Bias Chart outlines the reliability of facts and the perceived bias of different media outlets.


Though myriad analyses could be undertaken, the work can be done simply, too.  In this example, students can select and categorize headlines with very basic criteria:

1. headlines that are non-biased or seen as neutral facts (NB)
2. headlines that are biased Right (BR)
3. headlines that are biased Left (BL)
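Once students have logged a batch of headlines, tallying the categories takes only a few lines of code. Here’s a minimal Python sketch of that bookkeeping — the sample list of codes is hypothetical, standing in for whatever a student records over a week:

```python
from collections import Counter

# Hypothetical week of categorized headlines; the codes follow the
# rubric above: NB = non-biased, BR = biased Right, BL = biased Left.
categorized = ["NB", "NB", "BL", "NB", "BR", "NB", "BL"]

counts = Counter(categorized)
total = len(categorized)

# Report each category's count and share of the sample.
for code in ("NB", "BR", "BL"):
    share = 100 * counts[code] / total
    print(f"{code}: {counts[code]} of {total} ({share:.0f}%)")
```

Extending the same tally across outlets, weeks, or terms is just a matter of adding more lists and comparing the proportions.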

I recently took a screenshot of the online New York Times from Oct. 10, 2018. The NYT is generally seen as a more neutral and relatively fact-based news source.  Of the headlines that appear, I’ve done a quick analysis of the word choice.

Here’s that screenshot:

New York Times Headlines, Oct. 10, 2018

The headlines on this page, and a brief example of analyzing them, follow:

On Instagram, 11,696 Examples of How Hate Thrives on Social Media. (NB. While somewhat fear-mongering and not giving equal coverage for how Love Thrives, this headline could be categorized as fact-based and non-politically biased. A read of the article or visuals may give a different insight, based on which groups are highlighted as being hateful, what images are shown, how many times any one political figure is mentioned positively or negatively, etc.)

When Jewish Funeral Customs Collide with a Crime Scene Investigation. (NB)

Aftermath of Killing Bares Jewish Rifts in Israel and America. (NB)

Reeling From Tragedy, Many in Pittsburgh Say Trump Should Not Visit. (BL. “Reeling from Tragedy” is accurate and factual. “Many…Say Trump Should Not Visit.” This may also be accurate, but there are likely “many” who believe Trump should visit. The headline therefore reinforces a negative attitude that “many” have toward a political figure on the right, so I’d suggest this headline is biased to favor the Left. Interestingly enough, approximately 75% of those in Pittsburgh proper voted for Clinton, and may truly not want Trump to visit. That could be the “many” being discussed. Yet 50.2% of those in the larger Pittsburgh metro area voted for Trump, so there could be a “many” that would favor his visit. Perhaps a better, more balanced phrasing would be “Some in Pittsburgh Say Trump Should Not Visit.”)

Trump Seeks to End Birthright Citizenship with Executive Order. (NB)

How Trump-Fed Conspiracies About Migrant Caravan Intersect with Deadly Hatred. (BL. Trump has spoken crudely about the migrants. This is factual. Still, this headline connects a specific political figure on the right with “Deadly Hatred.” A more neutral, unbiased headline might read “How Political Discourse Intersects with Deadly Hatred.”)

Trump is Sending 5,200 Troops to the Border in Response to Migrants. (NB)

Arguably, of the seven headlines above, five are written neutrally and two appear to have a slight liberal bias.

Of course, this example is terribly isolated, and no determination about any media outlet should be based on a random sample of seven headlines on a single day. More thorough research could involve doing this with dozens of media outlets over years or decades, but this simple analysis does give a snapshot of how a single media outlet may be using its position in society to gently skew public opinion.

Then, after managing this kind of simple analysis, a deeper dig into the psychological choices behind associated images, video, or audio, or a text analysis of the entire written story, could also be undertaken. A considerable part of my MFA in Digital Cinema focused on those specific things.  A broad but enlightening overview of those concepts appears in Cinematography & Psychology: How the Camera Decides What We Feel by Nivetha Sivasamy (January 2017), which gives readers an idea of how certain shots can be selected to influence a viewer. Entire books, college courses, and even degrees are focused on how visual choices can impact viewers psychologically.  That means entire campaigns, careers, and industries know that using certain words or visuals will influence readers and viewers.  So why do we believe the media that we believe?  I guess that’s the Price of Certainty.

So, for those of us in education, let’s help our students become both certain and correct by challenging them to critically evaluate the psychological impact of words (and images and video and audio) in every news story, media outlet, classroom, and discipline.  It will lead to a more intelligent generation.  I’m certain of it.


#RiddleMiaThis, Riddle Me That: Trying out a Puzzle Room App

Celeste selfies with small bronze Icarus statue
just pondering Icarus as the last artifact you see in the puzzle #youhaveflowntooclosetothesun

Together with my intrepid colleague Sarah Calhoun, I tried out the new Riddle Mia This app at the Minneapolis Institute of Art (MIA). The app is designed by Samantha Porter and Colin McFadden (both employed at the University of Minnesota’s Liberal Arts and Technology Innovation Services) along with collaborators from GLITCH, a “community driven arts and education center for emerging game makers,” and was released on Sept. 14, 2018. It’s available for free download on the Google Play Store and the Apple App Store.

This won’t be a clue-by-clue discussion of the experience (how boring!), but rather will highlight a couple clues to point to some broader points about crafting place-based experiences that employ augmented reality (AR).

What’s in a Clue?

longform white text against a dark background with clue hints
example of a clue in the Riddle MIA This app.

The clues are delivered via a text/email-style message through the app, with a body of text giving the main part of the clue. The envelope button takes users to the full list of unlocked clues, and the camera opens up your phone’s camera for the clues that include AR aspects (which is maybe half of the total clues). The pin button opens the official museum map with floorplans for the 2nd and 3rd floors, which are the relevant floors for the app.

The “?” opens a menu of 3 additional options: Map, Puzzle, and Answer. The Map tab opens a selection of the museum gallery map with a line drawing showing where to go for the next clue. The Puzzle tab often gives you the actual information you need to complete the clue, e.g., look for this kind of thing. The Answer tab gives the full answer.

My greatest challenge with the app and the overall experience was the structure of the clues. I know, I know, the puzzle aspect is part of the fun! But I found the clues confusing at times because of either word choice or how the clue text was parsed into the sections of the app. Across the clues, there didn’t seem to be a consistent approach to what information landed in the main clue message and what was included in the Puzzle section. I would have preferred having all the information for the puzzle clue on 1 screen and then toggling over to the Map and Answer on another page, more clearly parsing the clues from the solutions in the interface. More signposting in the clues around when to use the camera and when an AR element was going to factor in would also have been welcome.

Direction and Scale Matters

We successfully completed the game in the estimated time of 1 hour. That hour was dedicated almost entirely to moving through the clues, which encompassed 2 floors and numerous galleries.

From the user perspective, I would suggest some ways to flag distance and movement through the spaces between clues. The slices of map shown with each clue aren’t accompanied by a scale for estimated travel time. The graffiti clue is the clearest example of this: it suggests that the object is on either the 2nd or 3rd floor, and the trek from origin to endpoint involves considerable travel time, including a level change and, in our experience, winding around some exhibit construction.


To be sure, the ambition of the app is one of its strengths, as is the desire to expose users to a wide swath of art styles, media, and artists. It moves users through MIA’s rich collections, and I thoroughly enjoyed zipping through galleries that I had never ventured through before. A group of young people were also participating in the game about 4 clues “behind” us, so it was fun to hear snippets of their time working through the clues.

As I think about how to take inspiration from RiddleMIAThis, I’m pondering the issue of scale. One wish I have for a future version of the RiddleMIAThis (or other comparable museum gallery app) would be different “levels,” each one focused on 1 floor and/or 1 particular set of galleries, moving users from object to object and room to room on a smaller scale and around a particular theme or iconography. A week or so later, I’m hard pressed to think of a cohesive through-line for the art we saw, and the educator in me is always interested in those ways that technology can open up or reinforce teachable moments around the content.

Instructional Video with Professor Dave Explains

I recently came across some great instructor videos by a guy who goes by Professor Dave.  He’s actually a Carleton grad, and his videos (on lots of science-related topics) are well developed, attractive, and engaging.  Instructors who connect an assessment to these videos could easily have some great learning with Professor Dave!  Dave’s style also gives some cool ideas of how instructors can film and produce their own instructional videos!  –dann

https://youtu.be/pYVgB2lnztY via @YouTube

Welcome to my YouTube channel! My goal is to provide the best resource for self-education in existence. I’ve already covered a lot of subjects,…

Tutee or Not Tutee: Who should be on camera in your Instructional Video?

Effective instructional videos can vary in style.  This short video, inspired by an Arizona State University study, reveals preferences and effectiveness in two different styles:

  1. Should you teach to the camera/viewer or
  2. Should you teach a student who is also on camera and film that interaction?

This video, featuring Dann Hurlbert, Carleton College’s Media & Design Guru, succinctly recaps a 2018 study from ASU’s Katelyn M. Cooper, Lu Ding, Michelle Stephens, Michelene T. H. Chi, and Sara E. Brownell.

Facing Instructional Videos

How important is it for instructors to include their own faces when creating instructional videos? The answer might surprise you. Dann Hurlbert, Carleton College’s Media & Design Guru (and an actor, director, and inventor of the Little Prompter) leans on research and his own expertise to offer guidance.

Instructional Video Workshops Fill up Fast!

I’m already excited to be a part of the team hosting this Instructional Video Workshop at Carleton in late July!  Attendees will not only take away a concrete and replicable process for creating instructional videos, but they’ll also create [at least] 3 instructional videos they can start using right away.  The seats filled up so fast, there is no doubt we’ll be doing more of these in the future!  More information on the workshop itself is available here.  And if you’d like to be notified when we host another one, please complete this short form. — dann

Business Video Benefits (in Education)

Dann Hurlbert, Carleton College’s Media and Design Guru, provides an overview of Matt Bowman’s article in Forbes Magazine about video marketing in business. There is a reason businesses are using more video:  it’s working. It can work well in education, too. Take a moment to reflect on Matt’s article — and nibble on the possibilities video can provide educators by watching this:

Recap: Day of DH 2018

3 scholars seated on high chairs with microphones smiling and laughing during discussion.

Image caption: (l-r) Thabiti Willis, Jack Gieseking, Adriana Estill in conversation. Photo by Briannon Carlsen.


Guest Post: Arduino Water Depth Monitor

Author: Nathan Mannes, ’19

With supplies from the Geology Department and the advising of Andrew Wilson, we have created an Arduino-based water-depth monitor. The grey cone you see at the bottom of the photo is a sonar device that measures the distance to the closest solid object in front of it. That could mean a wall, but we intend to mount it over a body of water, like Lyman Lakes, to measure the water’s depth over a long period of time with little maintenance. Because it is solar powered, we can leave it outside and let it send readings on its own.
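Conceptually, turning a sonar reading into a water depth is just a bit of arithmetic: the sensor reports a round-trip echo time, which converts to a one-way distance to the water’s surface, and subtracting that distance from the sensor’s known mounting height gives the depth. Here’s a minimal Python sketch of that math (the speed-of-sound constant and mounting height are illustrative assumptions, not the project’s actual values):

```python
# Speed of sound in air, expressed in cm per microsecond (~343 m/s).
SPEED_OF_SOUND_CM_PER_US = 0.0343

def distance_cm(echo_time_us: float) -> float:
    """Convert a round-trip echo time to a one-way distance:
    the sound travels out and back, so divide by two."""
    return echo_time_us * SPEED_OF_SOUND_CM_PER_US / 2

def water_depth_cm(echo_time_us: float, mount_height_cm: float) -> float:
    """Depth of water = sensor height above the lake bed minus
    the distance from the sensor down to the water's surface."""
    return mount_height_cm - distance_cm(echo_time_us)
```

For example, a 5,000 µs echo from a sensor mounted 200 cm above the bed works out to a surface 85.75 cm below the sensor, i.e. roughly 114 cm of water.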

On the right side you see a 3G shield module (with the antenna) mounted on an Arduino. It uses mobile data to send readings over the internet. But it has to send that data somewhere, right? We are setting up a public-facing webserver so that we can keep track of this data long-term. Then, much like the water tower, we will always be able to check what the depth of the Lyman Lakes is. In the future, we intend to expand this to take other readings on the water, like its pH, temperature, or volume of flow.
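Since the server isn’t up yet, the exact format of the readings is still open. One simple possibility is to bundle each reading into a small JSON payload and POST it to the server. A hedged Python sketch of that idea — the endpoint URL, sensor name, and field names below are all hypothetical placeholders, not the project’s real API:

```python
import json
import time
from urllib import request

# Hypothetical endpoint; the real public-facing server is still being set up.
ENDPOINT = "http://example.edu/lyman-depth/readings"

def make_reading(depth_cm: float) -> bytes:
    """Bundle one depth reading with a timestamp as a JSON body."""
    return json.dumps({
        "sensor": "lyman-lakes-1",   # illustrative sensor name
        "depth_cm": depth_cm,
        "timestamp": int(time.time()),
    }).encode("utf-8")

def send_reading(depth_cm: float) -> None:
    """POST a single reading to the (hypothetical) collection server."""
    req = request.Request(
        ENDPOINT,
        data=make_reading(depth_cm),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    request.urlopen(req)  # fire-and-forget; real code would retry on failure
```

A long-running monitor would call something like `send_reading()` on a fixed interval, which keeps the device-side logic tiny and pushes storage and charting to the server.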