Hapgood

Mike Caulfield's latest web incarnation. Networked Learning, Open Education, and Online Digital Literacy


Yes, Digital Literacy. But Which One?

One of the problems I’ve had for a while with traditional digital literacy programs is that they tend to see digital literacy as a skill separable from domain knowledge.

In the metaphor of most educators, there’s a set of digital or information literacy skills, which is sort of like the factory process. And there’s data, which is like raw material. You put the data through the critical literacy process and out comes useful information on the other side. You set up the infolit processes over a few days of instruction, and then you start running the raw material through the factory, for everything from newspaper articles on the deficit to studies on sickle cell anemia. Useful information, correctly weighted, comes out the other end. Hooray!

This traditional approach to information/web literacy asks students to go to a random page and ask questions like “Who runs this page? What is their expertise? Do they have bias? Is information clearly presented?”

You might even get an acronym, like RADCAB, which allows you to look at any resource and ascertain its usefulness to a task.

[Image: the RADCAB rubric. From http://radcab.com.]

Or maybe, if you’re in higher education, you’ll get CRAAP:

[Image: the CRAAP test infographic. From http://ucsd.libguides.com/preuss/webeval.]

I’m a fan of checklists and heuristics, and I’ve got no problem with a good acronym.

But let me tell you what is about to happen. We are faced with massive information literacy problems, as shown by the complete inability of students and adults to identify fake stories, misinformation, disinformation, and other forms of spin. And what I predict is that if you are in higher education, every conference you go to for the next year will have panel members making passionate Churchillian speeches on how we need more information literacy, to much applause and the occasional whoop.

But which information literacy do we need? Do we need more RADCAB? Do we need more CRAAP?

In reality, most literacies are heavily domain-dependent, and based not on skills, but on a body of knowledge that comes from mindful immersion in a context. For instance, which of these five sites are you going to trust most as a news (not opinion) source?

[Screenshot: Source One]
[Screenshot: Source Two]
[Screenshot: Source Three]
[Screenshot: Source Four]
[Screenshot: Source Five]

Could you identify which of these sites was likely to be the most careful with facts? Which are right-wing and which are left-wing? Which site is a white supremacist site?

It’s worth noting that if you were able to make those determinations, you did it not with skills, but with knowledge. When I saw that big “W” circled in that red field of a flag, for instance, my Nazi alarm bells went off. The mention of the “Illuminati” in source three tells me it’s a New World Order conspiracy site. I know little things, like that the word “Orwellian” is a judgmental word not usually found in straight news headlines. I’ve read enough news to know that headlines with a lot of verbal redundancy (“fabricates entire story falsely claiming”, for example, rather than “falsely claims”) are generally not from traditional news sources, that the term “Globalist” is generally not used outside opinion journalism, and that “war against humanity” is pitched at too high a rhetorical level.

We act like there are skills and a process, and there are, certainly. Asking the right question matters. Little tricks like looking up an author’s credentials matter. CRAAP matters. And so on. But the person who has immersed themselves in the material of the news over time in a reflective way starts that process with a three-quarters head start in the race. They look at a page and they already have a hypothesis they can test — “Is this site a New World Order conspiracy site?” The person without the background starts from nothing and nowhere.

Abstract skills aren’t enough. RADCAB is not enough.

Government Slaves

I first really confronted this when I was helping out with digital fluency outcomes at Keene State about six years ago. One of the librarians there called me into a meeting. She was very concerned, because they ran an information literacy segment in classes and the students did well enough on the exercises, but when paper time came they were literally using sites that looked like this:

[Screenshot: the “Government Slaves” site, leading with a cashless-society headline]

She was just gobsmacked by it. She didn’t want to say — “Look, just don’t use the internet, OK?” — but that was how she felt after seeing this. It was crushing to spend two days talking authority and bias and relevance and the CRAAP test, having the students do well on small exercises, and then having students in the course of a project referencing sites like these. (I should say here that we can find lots of sites like this on the liberal side of the spectrum too, but under the Obama administration these particular sorts of sites thrived. We’ll see what happens going forward from here.)

When I started talking to students about sites like this, I discovered there was a ton of basic knowledge that students didn’t have that we take for granted. That FEMA banner is a red flag to me that this site is for people who buy into deep right-wing conspiracy theories that the Obama administration was going to round conservatives up into FEMA prison camps. The “Government Slaves” name is, to me, a right-wing trope — not necessarily fringe, but Tea Party-ish at the least. Those sites in the top menu — Drudge, Breitbart, InfoWars, and ZeroHedge — sit on a sort of conspiracy spectrum, ranging from alarmist but grounded (Drudge, ZeroHedge) to full-on conspiracy sites (InfoWars). The stars on the side signal a sort of aggressive patriotism, and the layout of the site, the Courier typography, etc., are reminiscent of other conspiracy sites I have seen. The idea that cash/gold/silver is going to be “taken away” by the government is a prominent theme in some right-wing “prepper” communities.

Now we could find similar sites on the left. My point here is not about the validity of the site. My point is that recognizing any one of these things as an indicator — FEMA, related sites, gold seizures, typography — would have allowed students to approach this site with a starting hypothesis about its bias and aims, which they could then test. But students know none of these things. They don’t know the whole FEMA conspiracy theory, or that some people on the right feel the government is so strong we are slaves to it. They don’t know the whole gold/prepper/cash thing. And honestly, if you start from not knowing any of this, why would this page look weird at all?

The Tree Octopus Problem

When I started looking at this problem in 2010, I happened upon an underappreciated blog post on critical thinking by, oddly enough, Robert Pondiscio.

I say “oddly enough”, because Pondiscio is part of a movement I often find myself at odds with: the “cultural literacy” movement of E.D. Hirsch. That movement contended early on that our lack of common cultural knowledge was inhibiting our ability to discuss things rationally. With no common touchpoints, we might as well be speaking a different language.

The early implementations of that — complete with a somewhat white and male glossary of Things People Should Know — rubbed me the wrong way. And Hirsch himself was a strong adversary of the integrated project-based education I believe in, arguing the older system of studying a wide variety of things with a focus on the so-called “lower levels” of Bloom’s Taxonomy was more effective than project-based deep dives. Here’s Hirsch talking down a strong project-based focus in 2000:

To pursue a few projects in depth is thought to have the further advantage of helping students gain appropriate skills of inquiry and discovery in the various subject matters. One will learn how to think scientifically, mathematically, historically, and so on. One will learn, it is claimed, all-purpose, transferable skills such as questioning, analyzing, synthesizing, interpreting, evaluating, analogizing, and, of course, problem solving—important skills indeed, and well-educated people possess them. But the consensus view in psychology is that these skills are gained mainly through broad knowledge of a domain. Intellectual skills tend to be domain-specific. The all-too-frequent antithesis between skills and knowledge is facile and deplorable. (Hirsch, 2000)

I’ve used project-based learning forever, and my whole schtick — the thing I’ve been trying to get done in one way or another for 20 years now — is scalable, authentic, cross-institutional project-based education. I’m looking to scale what Hirsch is looking to dismantle. So Hirsch is a hard pill to swallow in this regard.

But it’s those last three lines that are the core of the understanding, and it’s an understanding we can’t afford to ignore: “[T]he consensus view in psychology is that these skills are gained mainly through broad knowledge of a domain. Intellectual skills tend to be domain-specific. The all-too-frequent antithesis between skills and knowledge is facile and deplorable.”

Robert Pondiscio, who works with Hirsch, shows specifically how this maps out in information literacy. Reviewing the 21st century skills agenda that had been released back in 2009, he notes the critical literacy outcome example:

Outcome: Evaluate information critically and competently.

Example: Students are given a teacher-generated list of websites that are a mixture of legitimate and hoax sites.  Students apply a website evaluation framework such as RADCAB (www.radcab.com) to write an explanation for deciding whether each site is credible or not. 

Pondiscio then applies the RADCAB method to a popular assignment of the time. There is a hoax site called the Tree Octopus often used by educators — teachers send students to it and have them try to evaluate whether it is real.

[Screenshot: the Pacific Northwest Tree Octopus hoax site]

Unfortunately, as Pondiscio points out, any quick application of RADCAB or any other “skills only” heuristic will pass this site with flying colors:

The rubric also tells us we are research pros if we “look for copyright information or ‘last updated’ information” in the source. Very well: The tree octopus site was created in 1998 and updated within the last two months, so it must be a current source of tree octopus information. We are also research pros if we “look for the authority behind the information on a website because I know it affects the accuracy of the information found there.” Merely looking for the authority tells us nothing about its value, but let’s dig deeper. The authority behind the site is the “Kelvinic University branch of the Wild Haggis Conservation Society.” Sounds credible. It is, after all, a university, and one only has to go the extra mile to be a Level 4, or “Totally Rad Researcher.” The Tree Octopus site even carries an endorsement from Greenpeas.org, and I’ve heard of them (haven’t I?) and links to the scientific-sounding “Cephalopod News.”

If you want to know the real way to evaluate the site, claims Pondiscio, it’s not by doing something, it’s by knowing something:

It’s possible to spend countless hours looking at the various RADCAB categories without getting the joke.  Unless, of course, you actually know something about cephalopods — such as the fact that they are marine invertebrates that would have a tough time surviving or even maintaining their shape out of the water — then the hoax is transparent.

And, in fact, when we shake our heads at those silly students believing in the Tree Octopus, we’re not surprised by the fact that they didn’t look for authority or check the latest update. We’re disappointed that they don’t understand the improbability of a cephalopod making the leap to being an amphibious creature without significant evolutionary changes. We’re amazed that they believe that long ago a common cephalopod ancestor split off into two branches, one in the ocean and one in the forest, and they evolved in approximately the same way in polar opposite environments.

That’s the weird thing about the Tree Octopus. And that’s what would make any informed viewer look a bit more deeply at it, not RADCAB analysis, not CRAAP, and not some generalized principles.

To Gain Web Literacy You Have to Learn the Web

There’s a second point here, because what a web-literate person would actually do on finding this is not blindly go through a checklist, but execute a web search on “Tree Octopus”. Doing that would reveal a Snopes page on it, which the web-literate person would click on and see this:

[Screenshot: the Snopes entry identifying the Tree Octopus as a hoax]

Why would you click Snopes instead of other web search results? Is it because of Relevance, or Currency? Do you have some special skill that makes that particular result stand out to you?

No, it’s because you know that Snopes is historically a good site for resolving hoaxes. If it was a political question you might choose Politifact. If it wasn’t a hoax, but a question that needed answering, you might go to Stack Exchange. For a quote, Quote Investigator is a good resource with a solid history. Again, it’s not skills, exactly. It’s knowledge, the same sort of knowledge that allows a researcher in their field to quickly find information relevant to a task.
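
A quick aside: this kind of “know your reference sites” habit is easy to make concrete. Here is a minimal sketch in Python (my own illustration, not anything from a curriculum) that turns “check Snopes first” into a one-liner by building a site-scoped search URL; the site: operator and the DuckDuckGo query endpoint are standard, public conventions:

    from urllib.parse import urlencode
    import webbrowser

    # Build and open a web search restricted to a known reference site.
    def scoped_search(query, site="snopes.com"):
        url = "https://duckduckgo.com/?" + urlencode({"q": f"site:{site} {query}"})
        webbrowser.open(url)

    scoped_search("tree octopus")                              # hoax checking
    scoped_search("background checks poll", "politifact.com")  # political claims

The point isn’t the code; it’s that the site names baked into those calls are knowledge, not skill.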

But let’s say you went to Wikipedia instead of Snopes. And again, you found it was labelled a hoax there:

[Screenshot: the Wikipedia article, which labels the Tree Octopus an Internet hoax]

Well, to be extra sure, you’d click the history and see if there were any recent reversions or updates, especially by anonymous accounts. This is Wikipedia, of course:

[Screenshot: the revision history of the Wikipedia article]

Looking at this we can see that this page has had a grand total of seven characters changed or added in the past six months, and almost all were routine “bot” edits. Additionally, we see this page has a long edit history — with hundreds of edits since 2010. The page is probably reliable in this context.

Don’t know what Wikipedia bots are, or what they do? Honestly, that’s a far greater web literacy problem than applying “currency” to Wikipedia articles.
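
If you’re curious, those checks (character churn, bot activity, anonymous edits) can even be scripted against Wikipedia’s public MediaWiki API. A minimal sketch; the endpoint and parameters are real, but treating usernames ending in “bot” as bot accounts is my own rough heuristic, not an official flag:

    import requests

    API = "https://en.wikipedia.org/w/api.php"

    # Fetch the most recent revisions of an article (newest first).
    def recent_revisions(title, limit=50):
        params = {
            "action": "query", "format": "json", "formatversion": "2",
            "prop": "revisions", "titles": title, "rvlimit": limit,
            "rvprop": "timestamp|user|comment|size",
        }
        data = requests.get(API, params=params).json()
        return data["query"]["pages"][0].get("revisions", [])

    revs = recent_revisions("Pacific Northwest tree octopus")
    for newer, older in zip(revs, revs[1:]):
        delta = newer["size"] - older["size"]        # characters added or removed
        anon = newer.get("anon", False)              # set for anonymous (IP) editors
        bot = newer["user"].lower().endswith("bot")  # rough heuristic, not a real flag
        print(f'{newer["timestamp"]} {delta:+5d} chars '
              f'{"ANON " if anon else ""}{"BOT " if bot else ""}{newer["user"]}')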

Incidentally, “currency” in our RADCAB rubric gets Wikipedia backwards. If we had arrived at this Wikipedia page in 2010, for example, on February 28th from about 6:31 p.m. to 6:33 p.m., we would have found a page updated seconds ago. But in Wikipedia, that can often mean trouble, so we would have checked “Compare Revisions” and found that minutes ago the assertions the page made were reversed to say that the Tree Octopus was real:

[Screenshot: the Compare Revisions view showing the vandalized passage]

Furthermore, it’s an edit from an anonymous source, as you can tell from the IP address at the top (65.186.215.54). The recency of the edit, especially from an anonymous source, makes this a questionable source at this particular moment.

Incidentally, in a tribute to the efficiency of Wikipedia, this edit that asserts the Tree Octopus is real is up for less than 120 seconds before an editor sees the prank and reverts it.

[Screenshot: an editor reverting the prank edit]

How are you supposed to know this stuff? Edit histories, bots, character counts as an activity check, recency issues in Wikipedia, compare revisions, etc.? How are you going to know to choose Snopes over “MyFactCheck.com”?

Through a general skills checklist? Through a rubric?

The radical idea I’d propose is that someone would tell you these things: the secret knowledge that allows web-literate people to run such checks quickly. That secret knowledge includes things like revision histories, but also domain knowledge — that Snopes is a good hoax checker, and Quote Investigator is a good source for checking quotes. It includes specific hacks to do reverse image searches to see if an image is real, using specific software such as TinEye or Google Reverse Image Search.
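
That reverse-image-search hack, for instance, is two lines once you know it exists. A minimal sketch: these query-string patterns have historically worked for TinEye and Google, but they are conveniences rather than documented, stable APIs, so treat them as an assumption (and the image URL below is made up):

    from urllib.parse import quote
    import webbrowser

    # Open reverse image searches for a suspect image on TinEye and Google.
    def reverse_image_search(image_url):
        encoded = quote(image_url, safe="")
        webbrowser.open(f"https://tineye.com/search?url={encoded}")
        webbrowser.open(f"https://www.google.com/searchbyimage?image_url={encoded}")

    reverse_image_search("https://example.com/suspect-photo.jpg")  # hypothetical URL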

Further, the student would understand basic things like “How web sites make money” so they could understand the incentives for lies and spin, and how those incentives differ from site to site, depending on the revenue model.

In other words, just as on the domain knowledge side we want enough knowledge to quickly identify whether news items pass the smell test, on the technical side we don’t want just abstract information literacy principles, but concrete web research methods and well-known markers of information quality on the web.

We Don’t Need More Information Literacy. We Need a New Information Literacy.

So back to those inevitable calls for more information literacy and the inevitable waves of applause.

We do need more education to focus on information literacy, but we can’t do it the way we have done it up to now. We need to come down from that Bloom’s Taxonomy peak and teach students basic things about the web and the domains they evaluate so that they have some actual tools and knowledge to deal with larger questions effectively.

I’ll give an example. Recently there was a spate of stories about a study that found that students couldn’t differentiate “sponsored content” from native content. Many thinkpieces that followed talked about this as a failure of general literacy. We must build our students’ general critical thinking skills! To the Tree Octopus, my friends!

The study authors had a different idea. The solution, they wrote, was to have students read the web like fact-checkers. But to do that we have to look at what makes fact-checkers effective where students are not. Look, for example, at one of the tasks the students failed at — identifying the quality of a tweet on gun control:

[Screenshot: the MoveOn tweet about a gun control poll]

As the study authors note, a literate response would recognize two things:

  • The tweet is from MoveOn, and concerns a study commissioned by the Center for American Progress, a liberal think tank, and this may indicate some bias, but
  • the poll it links to is by Public Policy Polling, a reputable third-party polling outfit, which lends some legitimacy to the finding.

The undergraduates here did not do well. Here’s how they struggled:

An interesting trend that emerged from our think-aloud interviews was that more than half of students failed to click on the link provided within the tweet. Some of these students did not click on any links and simply scrolled up and down within the tweet. Other students tried to do outside web searches. However, searches for “CAP” (the Center for American Progress’s acronym, which is included in the tweet’s graphic) did not produce useful information.

You see both sides of the equation here. A fact-checker clicks links, obviously. And while that seems obvious, keep in mind it never occurred to half the students.

But the second part is interesting too — the students had trouble finding the Center for American Progress because all they had to go on was the acronym “CAP”. There’s a technical aspect here, because if they just scoped their search correctly — well, here’s my first pass at a search:

[Screenshot: my first-pass search, which surfaces the Center for American Progress]

So one piece of this is students need to know how to use search. But the other piece is they need to be able to recognize that we call this policy area “gun control”.  That sounds weird, but again, consider that most of these students couldn’t figure that out.

And honestly, if you fact-check on the internet long enough, you’ll end up knowing what MoveOn is and what the Center for American Progress is. Real fact-checkers don’t have to check those things, because anyone who tracks these issues is going to bump into a handful of think tanks quite a bit. Learning what organizations like CAP, Brookings, Cato, AEI, and Heritage are about is actually part of the literacy, not a result of it.

Instead of these very specific pieces of knowledge and domain-specific skills, what did the students give back to the researchers as method and insight? They gave them information literacy platitudes:

Many students made broad statements about the limitations of polling or the dangers of social media content instead of investigating the particulars of the organizations involved in this tweet.

You see the students here applying the tools that information literacy has given them. Be skeptical! Bias happens! Social media is not trustworthy!

And like most general-level information literacy tools, such platitudes are not useful. They need to know to click the link. They need to know what a think tank is. They need to know how to scope a search, and to recognize that the common term for this policy area is “gun control”. But we haven’t given them this; we’ve given them high-level abstract concepts that never get down to the ground truth of what’s going on.

Fukushima Flowers

You see this time and time again. Consider the Fukushima Flowers task from the same study:

[Screenshot: the Fukushima Flowers task, built around a photo of mutated flowers]

My first thought on this is not “Is this good evidence?” My first thought is “Is this a hoax?” So I go to Snopes:

[Screenshot: the Snopes entry on the Fukushima flowers photo]

And there I get a good overview of the issue. The photo is real, and it’s from an area around Fukushima. But the process it shows is fasciation, and, while rare, fasciation occurs all around the world.

Do I want to stop there? Maybe not. Maybe I look into fasciation rates and see how abnormal this is. Or maybe I dig deeper into known impacts of radiation. But I’ve already got a better foothold on this by following the admonition “Check Snopes first” than any acronym of abstract principles would give me.

Do the students check Snopes? No, of course not. They apply their abstract reasoning skills, to disappointing results:

On the other hand, nearly 40% of students argued that the post provided strong evidence because it presented pictorial evidence about conditions near the power plant. A quarter of the students argued that the post did not provide strong evidence, but only because it showed flowers and not other plants or animals that may have been affected by the nuclear radiation.

I know Bloom’s Taxonomy has fallen out of favor recently in the circles to which I belong. But this is an extremely good example of what happens when you jump to criticism before taking time to acquire knowledge. The students above are critical thinkers. They just don’t have any tools or facts to think critically with.

Now maybe in another world Snopes doesn’t have this story. I get that you can’t always go to Snopes. And maybe googling “Fukushima Flowers” doesn’t give you good sources. Well, then you have to know reverse image search. Or you might need to know how to translate foreign news sources. Or you might need to get the latest take on the flowers by limiting a Google News search by date.

My point is not that you don’t have to deal with questions of bias, or relevancy, or currency. You’re absolutely going to confront these issues. But confronting these issues without domain knowledge or a toolkit of specific technical resources and tricks is just as likely to pull you further away from the truth as towards it.

What Do Real Fact-Checkers and Journalists Do?

Journalists often have to verify material under tight deadlines. What tricks do they use?

Consider this question: you get a video like this that purports to have taken place in Portland, July 2016, where a man pulls a gun on Black Lives Matter protesters. Is this really from Portland? Really from that date, and associated with that particular protest? Or is this a recycled video being falsely associated with a recent event?

Now this video has been out for a while, and its authenticity has been resolved. You can look it up and see if it was correctly labelled if you want. But when it first came out, what were the tricks of the trade? Do journalists use RADCAB?

No, they use a toolbox of specific strategies, some of which may encode principles of RADCAB, but all of which are a lot more specific and physical than “critical thinking”.

Here’s what the Verification Handbook out of the European Journalism Centre suggests, for example, about verifying the date of an event on YouTube:

Be aware that YouTube date stamps its video using Pacific Standard Time. This can sometimes mean that video appears to have been uploaded before an event took place.

Another way to help ascertain date is by using weather information. Wolfram Alpha is a computational knowledge engine that, among other things, allows you to check weather from a particular date. (Simply type in a phrase such as “What was the weather in Caracas on September 24, 2013” to get a result.) This can be combined with tweets and data from local weather forecasters, as well as other uploads from the same location on the same day, to cross-reference weather.
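
That time-zone point is worth a quick illustration before we get to the weather check. A minimal sketch, with made-up times, of how an event late at night local time lands on the previous calendar day in YouTube’s Pacific-time date stamp, which can make an authentic video look pre-dated:

    from datetime import datetime
    from zoneinfo import ZoneInfo  # Python 3.9+

    # Hypothetical: an event filmed at 2:00 a.m. local time in Kyiv...
    event_local = datetime(2016, 7, 8, 2, 0, tzinfo=ZoneInfo("Europe/Kiev"))
    # ...is still the previous calendar day in US Pacific time.
    as_pacific = event_local.astimezone(ZoneInfo("America/Los_Angeles"))
    print(event_local.date(), "local ->", as_pacific.date(), "Pacific")
    # prints: 2016-07-08 local -> 2016-07-07 Pacific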

You may not realize this, but rainy and cloudy days are actually quite rare in Portland in July — it rains here for like nine months of the year, but the summers are famously dry, with clear sky after clear sky. Yet the weather in this video is cloudy and on the verge of raining. That’s weird, and worth looking into.

We check it out in Wolfram Alpha:

[Screenshot: Wolfram Alpha weather results for Portland on the date in question]

And it turns out the weather fits! That’s a point in this video’s favor. And it took two seconds to check.
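
If you wanted to script that check, Wolfram Alpha exposes the same lookups through its Short Answers API. A minimal sketch: you’d need a free developer App ID in place of the “DEMO-APPID” placeholder, and since the video is only pinned to July 2016, the exact date below is illustrative:

    import requests

    # Ask Wolfram Alpha a plain-English question via the Short Answers API.
    def wolfram_short_answer(question, appid="DEMO-APPID"):  # placeholder App ID
        resp = requests.get(
            "https://api.wolframalpha.com/v1/result",
            params={"appid": appid, "i": question},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.text  # a one-line plain-text answer

    print(wolfram_short_answer(
        "What was the weather in Portland, Oregon on July 7, 2016?"))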

The handbook is also specific about the process for different types of content. For most content, finding the original source is the first step. Many pictures (for example, the “Fukushima flowers”) exist on a platform or account that was not the original source, making it hard to ascertain the provenance of the image. In the case of the Fukushima flowers, did you ever wonder why the poster of the photo posted in English, rather than Japanese, and had an English username?

A web-illiterate person might assume that this was game over for the flowers, because what Japanese person is going to have a name like PleaseGoogleShakerAamer?

[Screenshot: the English-language post of the photo by user PleaseGoogleShakerAamer]

But as the handbook discusses, this isn’t necessarily meaningful. Photos propagate across multiple platforms very quickly once they become popular, as people steal them to try to build up their status or get ad click-throughs. A viral photo may exist in hundreds of different knock-off versions. Since this is User-Generated Content (UGC), the handbook explains, the first step is to track it down to its source, and to do that you use a suite of tools, including reverse image search.

And when we do that, we find a screen cap of a Twitter image that is older than the picture we are looking at and uses Japanese, which, let’s face it, makes more sense:

[Screenshot: the earlier, Japanese-language tweet of the same photo]

(BTW, notice that to know to look for Japanese we have to know that the Fukushima disaster happened in Japan. Again, knowledge matters.)

Once we have that screen cap we can trace it to the account and look up the original with Google Translate on. In doing so we find out this is a resident of the Fukushima area who has been trying to document possible effects of radiation in their area. They actually post a lot of information about the photo in their feed as they discuss it with various reporters, so we can find out that these flowers were seen in another person’s garden, and even see that the photographer had taken a photo before they bloomed, a month earlier, which rather dramatically reduces the likelihood that this is a photo manipulation:

[Screenshot: the photographer’s earlier tweet, taken before the flowers bloomed]

Even here, the author notes that this is a known mutation. And they give the radiation level of that part of Japan in microsieverts, which allows you to check it against health recommendation charts.

A month later they check back in on the flowers and post the famous photo. But immediately after they say this:

[Screenshot: the follow-up tweet naming the phenomenon as fasciation]

In other words, three tweets after the famous photo, the tweeter gives you the word for what this is called (fasciation), and even though the rest of the text is garbled by the translator, that’s a word you can plug into Google to better understand the phenomenon:

[Screenshot: search results for “fasciation”, showing it to be rare but well documented]

So here’s a question: does your digital literacy program look like this? Is it detective work that uses a combination of domain knowledge and tricks of the trade to hunt down an answer?

Or does it consist of students staring at a page and asking abstract questions about it?

It’s not that I don’t believe in the questions — I do. But ultimately the two things that are going to get you an answer on Fukushima Flowers are digital fact-checking strategies and some biology domain knowledge. Without those you’re going nowhere.

Conclusion: Domain-Grounded Digital Literacy That Thinks Like the Web

I didn’t sit down to write a 5,000-word post, and yet I feel I’ve only scratched the surface here.

What is the digital literacy I want?

I want something that is actually digital, something that deals with the particular affordances of the web, and gives students a knowledge of how to use specific web tools and techniques.

I want something that recognizes that domain knowledge is crucial to literacy, something that puts an end to helicopter-dropping students into broadly different domains.

I want a literacy that at least considers the possibility that students in an American democracy should know what the Center for American Progress and Cato are, a literacy that considers that we might teach these things directly, rather than expecting them to RADCAB their way to it on an individual basis. It might also make sense (crazy, I know!) that students understand the various ideologies and internet cultures that underlie a lot of what they see online, rather than fumbling their way toward it individually.

I think I want less CRAAP and more process. As I look at my own process with fact-checking, for example, I see models such as Guided Inquiry being far more helpful — systems that help me understand what the next steps are, rather than an abstract rubric of quality. And I think what we find when we look at the work of real-life fact-checkers is that this process shifts based on what you’re looking at, so the process has to be artifact-aware: “This is how you verify a user-generated video”, for example, not “here are things to think about when you evaluate stuff.”

To the extent we do use CRAAP, or RADCAB, or CARS, or other models out there, I’d like us to focus specifically on the methods the web uses to signal these sorts of things. For example, the “S” in CARS is support, which tends to mean certain things in traditional textual environments. But we’re on the web, and an awful lot of “support” is tied up in the idea of hyperlinks to supporting sources, and the particular ways that page authors tie claims to resources. This seems obvious, I suppose, but remember that in evaluating the gun control claim in the Stanford study, over half the students didn’t even click the link to the supporting resource. Many corporations, for business reasons, have been downplaying links, and it is having bad effects. True digital literacy would teach students that links are still the mechanism through which the web builds trust and confidence.

Above all, I just want something that gets to a level of specificity that I seldom see digital literacy programs get to. Not just “this is what you should value”, but “these are the tools and specific facts that are going to help you act on those values”. Not just “this is what the web is”, but “let’s pull apart the guts of the web and see how we get a reliable publication date”. It’s by learning this stuff at a granular level that we form the larger understandings: when you know the difference between a fake news site and an advocacy blog, or how to use the Wayback Machine to pull up a deleted web page, these tools and processes raise the questions that larger theories can answer.
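
That Wayback Machine lookup, by the way, is itself teachable in a couple of lines. A minimal sketch against the Internet Archive’s public availability API; the endpoint is real, and the target here is the Tree Octopus hoax site:

    import requests

    # Find the archived snapshot of a page closest to a given date (YYYYMMDD).
    def wayback_snapshot(url, near="20161219"):
        data = requests.get(
            "https://archive.org/wayback/available",
            params={"url": url, "timestamp": near},
            timeout=10,
        ).json()
        closest = data.get("archived_snapshots", {}).get("closest")
        return closest["url"] if closest and closest.get("available") else None

    print(wayback_snapshot("zapatopi.net/treeoctopus"))  # the Tree Octopus hoax site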

But to get there, you have to start with stuff a lot more specific and domain-informed than the usual CRAAP.

(How’s that for an ending? If anyone wants a version of this for another publication or keynote, let me know — we need to be raising these questions, and that means talking to lots of people.)



206 responses to “Yes, Digital Literacy. But Which One?”

  1. So, what you’re saying is, you want a project-based learning approach to information literacy? E.D. Hirsch, FML.

  2. I agree, Mike. I skimmed this post mostly because it was long, but more because I was already agreeing with your premise.

    In doing my PhD about critical thinking, I learned about the arguments about domain specificity of it – which are VERY strong but somehow drowned out and all of (US?) Higher ed builds itself on generic skill building:
    A. take rhetoric and composition courses so u can argue/write. Duh. How does that help me write papers in engineering or business? It may help w humanities or social sciences.
    B. Take info lit courses and become info literate. No. It doesn’t necessarily transfer deeply to one’s discipline unless it’s taught in a disciplinary context

    I tested the ideas of domain specificity before and there are two major things that matter: knowledge of the domain and familiarity with the epistemology of the domain, because that also affects how you interpret things critically – what counts as credible, etc. For example, I did a gender studies course with an Islamic topic. I had domain knowledge but not epistemological familiarity.

    Anyway thanks for this! Will be sharing

  3. Thanks, this was a very interesting read. I would like to point out that one of your first examples contains text that should set off alarms in the mind of anyone with a little “digital literacy” of the generic kind. Words like “ban”, “shocking”, and “watch now before it’s too late” are hallmarks of propaganda and marketing, and no domain-specific knowledge is required to realize that this is probably an unreliable source.

  4. Web literacy can also be learned by a strong web-authoring/network-forming presence in curriculum. Web and digital literacy are both domain-specific and cross-domain, like writing or speaking, which is one reason this stuff is hard to think about. If faculty could be convinced that, all things taken together, we are truly working in new media, we might see substantive efforts along these lines. Alex Reid is particularly good on why faculty do not see that, and what the consequences are.

    As a baby step, I’d be happy if students could parse URLs. Fewer and fewer of my students can do this. But I don’t want them to learn to parse URLs just to know how to parse URLs, or even just to detect fake news sites, important as that is. I also don’t want them to know parts of speech just to be able to recite them. I want them to learn these things in a context, and to understand why they matter, and to exercise a growing disposition toward thoughtful care in all the communicative elements at their disposal.

  5. I think what distinguishes this from the 1980s Hirschian “cultural literacy” is the idea that domain knowledge is a process. As you frame it, it’s not so much about WHAT you know, but about HOW you know, which transforms knowledge into something accessible and do-able, rather than an elite fluency marker. It also reframes “content” in a distinctly open pedagogy way; instead of dismissing content as less important than networks/community, it recasts content as something that is deeply influenced by the way that it is transmitted through those networks and communities. I like how that collapses the content-community dichotomy that many of us (myself included, for sure) in the Open Ped movement get a little stuck in. I also have been worried lately that we are fetishizing a fact-based, black-and-white understanding of “truth” in our left-wing resistance to Trumpism and such, and this seems like a way to conceptualize credibility and truth without being reductive. With web-based fact-checking, there is always another hyperlink, always another angle to consider, even though you can use the preponderance of your findings to craft an argument or analysis at any time. Really love this piece, Mike, and will draw on it a lot, so thank you for taking the time to write this up.

  6. As a librarian who works in schools and teaches information literacy, I think you are making some very interesting points. I would argue that information literacy is not just web-based, and learning the skills of searching for good quality resources such as books and online journals is as much a part of information literacy as any web search. I do think our web search and evaluation lessons should include a toolbox to check for hoaxes, and if you ever come up with a list can you please share it, as I would definitely be using it within our secondary schools here.

  7. Really enjoyed this, and it’s very helpful as I design a digital info lit module to kick off my history survey course in the Spring. Your reference to Bloom reminds me of Sam Wineburg’s (he of the aforementioned Stanford sponsored-content study) argument that we ought to turn Bloom’s Taxonomy upside down. The supposed big-ticket stuff (evaluation, synthesis) doesn’t go far without content knowledge. Yet, because that’s on the bottom of the Bloom’s pyramid, we’re conditioned to blow past it in order to get to the sexier stuff up top, which, according to Wineburg, suffers from the dearth of content literacy. While I don’t always see eye to eye with Wineburg, that point of his really resonated with me, and your post here evokes that resonance as well. Thanks for this; it’s immensely clarifying.

  8. Thanks Michael. This article may be a good example to explore the value of digital literacy programs in aid of policy-makers’ critical thinking expertise – as they are asked to consider this information: “two thirds of the students do not buy the textbooks they need, because they cannot afford them.”
    “Cable Green advocates for an open license for all publicly funded educational resources.
    The lack of educational resources remains one of the major problems in the world. “Even in the United States, two thirds of the students do not buy the textbooks they need, because they cannot afford them,” says Cable Green, Director of Open Education at Creative Commons (CC), which provides open licenses that enable free and open distribution of works. The CC license is today a global standard giving people the right to share, use, and build upon an otherwise copyrighted work. Wide Angle shares Cable Green’s thoughts on the key role that open access to information and knowledge plays in ensuring quality education in the world.” (https://en.unesco.org/news/cable-green-open-license-all-publicly-funded-educational-resources)

  9. Now, I tried to apply the knowledge in this article to the article itself, and I found:

    On the about page, the author claims to be a director of blended and networked learning at Washington State University Vancouver. For some strange reason, there is no link to this university though – odd. Isn’t Vancouver some place in Canada? Also, blended and networked learning sounds very much like some made-up buzzword. Thus, I conclude that this whole page is a great hoax.

    Of course, if I put a little more effort in, I would find that all these things exist and what blended and networked learning means (incorporating greater use of technology to enhance student learning, or leveraging the resources of the Internet in classes). But I think another problem with literacy is that legitimate sources are often much more difficult to verify than illegitimate ones – for some reason, fake sites seem to have adapted much better to the internet than established and credible sources.

  10. I completely agree with you, but what the heck am I supposed to do to get to this level of granularity in my 10-week, all-online Argumentation, Research, and Style class? The curriculum is already overloaded because our Research Writing class was eliminated and the course outcomes folded into the Argumentation class. The darkness around us is deep.

  11. We’re currently trying to write an information literacy curriculum that would go from K-12 and you raise many of the questions that we have been battling with, along with a few of our own.

    Information literacy (IL) is a continuum. If I may metaphorically link it to mathematical knowledge for a moment – you can get by in math for quite a long time by relying on memorisation of “math facts” and formulae, however without “number sense” you’re going to come adrift at some point when the problems get hard or you draw a memory blank. In schools we teach math as a cumulative process that builds on understanding, so you don’t teach multiplication before your students have mastered simple addition.

    So to bring this back to IL. What I’m seeing you hint at in your article is the idea of threshold concepts (see a presentation I made on this here: https://informativeflights.wordpress.com/2015/02/11/information-literacy-beyond-search-and-cite-2/).

    * How can we unpack what is developmentally and pedagogically appropriate at what age?
    * How do we as librarians curate digital resources like we’ve always curated a library collection without “spoon feeding” so that students are incapable of getting to good sources on their own over time?
    * How do we manage the school political climate so that IL is embedded in project based learning, embedded in assignments, embedded in teaching and learning?
    * How many teachers who are not teacher-librarians or information literacy trained are actually capable of doing this stuff?
    * How many parents are able to provide guidance at home at homework/assignment crunch time?

    What you are demonstrating above is a skill-set that has been honed by a TON of practice and experience. If we teach IL in isolation as stand-alone library lessons just at one point in the year when time and timetable allows we will never get there.
