I spent last week attending a class on Ethical Data Visualization at the Digital Humanities Summer Institute (DHSI) in Victoria, Canada. It was a fantastic experience, not just because I enjoyed my class but also because I met some amazing friends with similar academic interests to my own.
In class we learned how visualizations can be constructed to support a misleading narrative. Here are a few examples:
Problems with this visual (to name a few): It shows an arbitrary window of time, which disguises the rise the trend line would show if the first big spike were taken away. (What does “since 1750” even mean?) It’s showing change over time, so a straight line actually represents a steady increase. The arrow is an intentional optical illusion meant to trick you into seeing more of a horizontal pattern than is actually there.
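The cherry-picked-window trick can be boiled down to a toy example. This is a minimal sketch with entirely made-up values and a hypothetical trend helper, not the actual chart’s data:

```python
def trend(series):
    """Crude trend indicator: last value minus first value."""
    return series[-1] - series[0]

# Made-up temperature anomalies (in tenths of a degree), rising overall
# but with an early spike at index 1.
temps = [1, 5, 2, 3, 4, 6, 7]

print(trend(temps))       # full record: 6 (clearly rising)
print(trend(temps[1:3]))  # window starting at the spike: -3 ("cooling!")
```

Start the window at a spike and the very same data “shows” a decline. That’s all an arbitrary start date does.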
Problems with this visual: There are two y-axes, neither of them labeled, which makes the two lines look far more comparable than they actually are. The colors are highly suggestive, playing off the public’s biases: women as feminine and delicate (pink), abortion as murder (blood red). It doesn’t even begin to account for the other services Planned Parenthood provides. Cancer screenings are only going down because newer screening methods are more effective and require fewer tests. There are no points between the two endpoints of each line, leaving out a significant part of the data.
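The dual-axis trick is easy to see with plain numbers. Here is a minimal sketch using made-up figures (not the chart’s actual data) and a hypothetical to_axis helper that mimics what an unlabeled second y-axis does:

```python
def to_axis(values, axis_min, axis_max, plot_height=100):
    """Map data values onto a plot's vertical pixel range for a given axis."""
    span = axis_max - axis_min
    return [round((v - axis_min) / span * plot_height) for v in values]

# Made-up numbers: series_a is several times larger than series_b throughout.
series_a = [2_000_000, 950_000]  # plotted against an axis scaled 0 to 2M
series_b = [290_000, 330_000]    # plotted against an axis scaled 0 to 350K

# Each series gets its own axis stretched over the same pixel range,
# so on screen the lines dramatically cross even though series_a is
# always far larger:
print(to_axis(series_a, 0, 2_000_000))  # [100, 48]
print(to_axis(series_b, 0, 350_000))    # [83, 94]
```

With independently scaled, unlabeled axes, you can make almost any two series appear to converge or cross wherever you like.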
Problems with this visual: For one thing, the slices don’t add up to 100%, even though it’s a pie chart. The colors are misleading: the blue used in the chart is the same blue as the background, making Huckabee’s slice less prominent. The source is hard to read. The chart is tilted, which distorts our sense of how much space each slice takes up and makes Palin look like she has the smallest chunk of this magical 193% pie when in reality she has the most support. The divisions between the slices are completely arbitrary. And the word “Back” doesn’t need to appear on every label.
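The slices-don’t-sum problem is the one failure here a computer can catch automatically. A minimal sketch, with a hypothetical check_pie_slices helper and the widely reported numbers from that chart (70%, 63%, and 60%):

```python
def check_pie_slices(labels, percentages, tolerance=0.5):
    """Sanity-check pie chart data: the slices should sum to roughly 100%."""
    total = sum(percentages)
    if abs(total - 100) > tolerance:
        raise ValueError(f"Slices sum to {total}%, not 100%")
    return dict(zip(labels, percentages))

# The magical 193% pie fails the check:
try:
    check_pie_slices(["Back Palin", "Back Huckabee", "Back Romney"], [70, 63, 60])
except ValueError as err:
    print(err)  # Slices sum to 193%, not 100%
```

A poll where respondents could back multiple candidates simply shouldn’t be a pie chart in the first place; a bar chart has no built-in assumption that the categories sum to a whole.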
I would like to point out that the first two visuals were used by actual congressmen during congressional testimony, and the third aired on live TV. The bar is literally as low as it can possibly go. It’s not too hard to make visuals more ethical than these.
At the same time, we debated the possibility of creating a value-neutral, non-political visualization. Is such a thing even possible? Is it what we should strive for? Or should we attempt to construct a narrative using our own values and politics, with the understanding that one of our many values should be the truth? Most of the class seemed to agree that the second was the more realistic and fruitful option.
Finally, and I think most importantly, the course repeatedly reinforced the idea that computers are human-made machines, not magical black boxes of science. It seems obvious, but given the amount of propaganda we consume on a daily basis, it’s an important message to reinforce. Just because a visualization looks official, uses numbers, and cites a source doesn’t mean it’s an accurate reflection of reality. Every choice we make with our visualizations shapes the narrative they convey to the audience. Computers aren’t inherently value-neutral, because the data we feed into them isn’t value-neutral, and they’ve been programmed to parse it in a particular way: a systematic way, but not necessarily one that reflects the messiness of the world. It’s just like Miriam Posner’s quote about data-based work that I cited in my first ever blog post:
I would like us to start understanding markers like gender and race not as givens but as constructions that are actively created from time to time and place to place. In other words, I want us to stop acting as though the data models for identity are containers to be filled in order to produce meaning and recognize instead that these structures themselves constitute data. That is where the work of DH should begin. What I am getting at here is a comment on our ambitions for digital humanities going forward. I want us to be more ambitious, to hold ourselves to much higher standards when we are claiming to develop data-based work that depicts people’s lives.
The data is what you make it. The visualizations are what you make them. On the one hand, this is empowering. For so long, we’ve been led to believe that some things are simply set in stone when it comes to computers and data, but it doesn’t have to be that way. On the other hand, it’s a little terrifying. Making an ethical visualization is a lot of responsibility. We can’t use computers as scapegoats for the inaccurate narratives our research may convey. It’s up to you to be intentional with your work.
Outside of class, I talked with dozens of wonderful people and made some great friends. It was really helpful for me, personally, to talk with academics close to my age who are interested in areas of research similar to my own. I got a lot of advice and I’m more than a little excited to go on to grad school and maybe (probably) to get a PhD.
Overall, it was a great week (even with the cold weather)!