I’m at a conference in LA at the moment; it feels so lovely and strange to be able to walk around outside in short sleeves.
I took a really excellent short course on Sunday afternoon about the psychology of decision-making, led by Alan Schwartz. At one point, he was talking about the gist paradox. There is an emerging body of research demonstrating that people who are given the gist of the information, rather than the full information picture, make better decisions. (Better = more rational and/or more in line with their own stated values. This is an admittedly problematic definition, but I think it is a reasonable one in this context.) This has major implications for informed consent procedures, decision aids, and so on.
He outlined this issue briefly, and then made a statement along the lines of, “Intuitively, this seems wrong. Why wouldn’t more information lead to better decisions?” My hand shot up as if I were in elementary school, and I questioned his premise. I thought those findings were absolutely intuitive. We discussed it briefly, but I didn’t have a chance to reflect on why I’d had that reaction until later, when I arrived at the idea that my intuition is probably heavily influenced by my longstanding alignment with the artificial intelligence/information studies view of information, which divides it into data, information, and knowledge. (E.g., see Aamodt and Nygard.)
Loosely put, data are just the ones and zeroes, information is data processed into understandable puzzle pieces, and knowledge is the puzzle pieces put together in relation to each other. The transition from data to information can be done fairly easily by both humans and machines. (I’d even say that in a good proportion of cases now, machines can do it better.) The transition from information to knowledge is much less straightforward, and humans still outperform machines here by a reasonably wide margin. (For example, see the semantic web.)
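To make the data-to-information step concrete, here is a toy sketch in Python. (This is my own invented example, with made-up vital-sign readings; it isn’t from any of the work mentioned above.) Raw bytes become labelled, typed pieces that a person can understand:

```python
# Data: just the ones and zeroes -- here, raw bytes from some
# hypothetical monitoring device (temperature, systolic, diastolic).
raw = b"98.6,120,80\n101.2,140,95\n"

def to_information(data: bytes) -> list[dict]:
    """Turn raw data into understandable puzzle pieces:
    labelled, typed records. This step is mechanical, and
    machines handle it easily."""
    records = []
    for line in data.decode("ascii").strip().splitlines():
        temp, systolic, diastolic = line.split(",")
        records.append({
            "temp_f": float(temp),
            "systolic": int(systolic),
            "diastolic": int(diastolic),
        })
    return records

info = to_information(raw)
print(info[1])  # {'temp_f': 101.2, 'systolic': 140, 'diastolic': 95}

# The information-to-knowledge step -- recognizing what a fever
# together with elevated blood pressure means for *this* patient,
# in context, in relation to everything else known -- is not a
# mechanical transformation, and no few lines of code capture it.
```

The contrast is the point: the parsing function is trivial to write and to automate, while there is no comparably simple function from the records to clinical judgment.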
It occurred to me that although it is sometimes there implicitly, I don’t recall ever seeing any explicit hierarchy like this discussed in the literature on health communication, knowledge translation, medical decision-making, and the like. I’m going to ask around to see if it’s there and I have just missed it, but if it isn’t, I wonder if it might help to clarify some of the issues these fields grapple with. If the goal is to communicate knowledge (and then, hopefully, generate particular actions, another problematic process), but what’s often discussed is not really knowledge but information, there is a missing element. To finish the story, you need to take into account the process of gathering or receiving individual pieces of information, understanding each piece and its relationship to the others, and assembling them into knowledge. Framed in those terms, it makes complete sense that you can have too few pieces, and you can also have too many.