Original paper: http://drum.lib.umd.edu/bitstream/1903/367/2/CS-TR-2645.pdf
This paper presented a very clean overview of the space-filling approach to treemaps. I thought it was very clear, and the inclusion of pseudocode was very effective. With all its definitions it felt like a final project report for CS 245 or something, but that made it seem accessible, which in a paper can be very effective. Midway through the paper I preemptively wrote down that the limits of this treemap algorithm were not included; I knew the approach could only effectively show a finite number of nodes on the screen. However, the display resolution section addressed my concern and included the idea of possible zooming, which was cool. Overall, I liked this paper because it felt like something I could easily write about my own work in the future.
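The space-filling idea in the paper's pseudocode is easy to sketch: recursively split a rectangle among a node's children in proportion to their sizes, alternating split direction by depth. This is a minimal illustrative sketch (my own simplified node representation, not the paper's exact pseudocode):

```python
# Minimal slice-and-dice treemap sketch. A node is (size, children);
# a rectangle is (x, y, w, h). Split direction alternates by depth.

def treemap(node, rect, depth=0, out=None):
    if out is None:
        out = []
    size, children = node
    x, y, w, h = rect
    out.append((x, y, w, h))          # record this node's rectangle
    if children:
        total = sum(c[0] for c in children)
        offset = 0.0
        for child in children:
            frac = child[0] / total   # child's share of the parent's area
            if depth % 2 == 0:        # even depth: split horizontally
                sub = (x + offset * w, y, frac * w, h)
            else:                     # odd depth: split vertically
                sub = (x, y + offset * h, w, frac * h)
            offset += frac
            treemap(child, sub, depth + 1, out)
    return out

# A tiny tree: root of size 100 with two children, one of which has children.
tree = (100, [(60, []), (40, [(30, []), (10, [])])])
rects = treemap(tree, (0.0, 0.0, 1.0, 1.0))
```

Each leaf ends up with an area proportional to its size, which is exactly the property that lets a treemap show an entire hierarchy on one screen.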
Original Paper: http://www.csc.ncsu.edu/faculty/healey/download/cga.03.pdf
I read this article before creating my user evaluation over the summer and found it really useful in preparing for an effective study. Their inclusion of example studies and what each did is helpful. The “Basics of User Study Design” blurb inserted in the paper was a very clear background that I think the reader needed. The most interesting part of this paper to me was about what to do when things “go wrong,” in other words, when your data fails to reject your null hypothesis: even though null results aren’t necessarily publishable, they are super informative and can help you further your work so you can eventually get something worth publishing. Overall, it is clear that good user studies can enhance the quality of your research.
Original Paper: http://hcil2.cs.umd.edu/trs/2004-19/2004-19.pdf
Overall, I felt like this paper, especially compared to the other two user evaluation papers we are reading this week, was pretty disorganized and not as carefully laid out. It did emphasize that usability testing and controlled experiments are the basis of evaluation. I thought the part about taking the users’ training into consideration was interesting, because this idea definitely came up when I was working on my own user evaluation: it is pretty important whether you want to train the user and, if so, how much. I thought the section on learning from the examples of technology transfer was the most interesting.
Original Paper: http://enrico.bertini.me/material/tvcg2011-seven-scenarios.pdf
The biggest thing to notice about this paper is its thoroughness and the extensive amount of background work that went into it, which is clearly shown in Table 1. The authors do a good job of presenting a “descriptive rather than prescriptive approach,” which they state as a goal early on in the paper. Because of this, though, the paper is kind of a boring read, even though it does a good job of presenting a lot of potentially helpful information. The bulk of the paper describes the goals/outputs, example evaluation questions, and methods and examples of each of their seven evaluation scenarios: Understanding Environments and Work Practices (UWP), Evaluating Visual Data Analysis and Reasoning (VDAR), Evaluating Communication Through Visualization (CTV), Evaluating Collaborative Data Analysis (CDA), Evaluating User Performance (UP), Evaluating User Experience (UE), and Evaluating Visualization Algorithms (VA). I think the paper does a good job of explaining visualization evaluations and is encouraging in getting people to reflect on their goals before choosing methods.
While reading this paper I could not stop thinking about the San Francisco art-style typographic maps of the districts (which, to be honest, I kind of like). This paper, however, shows that there is a much more efficient way of producing a typographic map with a computer. Typographic maps merge text and spatial data and can be used for traffic density, crime rate, demographic data, and more; they became popular mostly because of their high visual aesthetics. One visualization mentioned that became really popular is the common text visualization, the word cloud, but this paper listed a couple of problems with such visualizations. I think the most memorable figure in the paper was the side-by-side comparison, which I thought was very effective. Yes, you could see some obvious differences, but not super ridiculous ones, so when production time is taken into consideration, going from two weeks down to 2-3 seconds is remarkable. I also thought the scenario of a cop looking at a map with a specific area of interest highlighted in typographic style was interesting.
This paper stemmed from the idea that a lot of geospatially referenced digital data is being collected from vehicles, PDAs, cell phones, etc., and that it should be utilized in a visualization, a geovisualization to be exact. The authors describe geovisualization as the process of leveraging data resources to meet needs; together with GIS, it is also a field of research and practice that develops visual methods and tools for lots of applications. As one would assume, geovisualization draws from both cartography and geography. They present four functions: explore, analyze, synthesize, and present. There are three main applications for geovisualization: public health, environmental science, and crisis management. (The environmental science part interests me the most, because it seems like a field where I could utilize my math major and my minors in computer science and environmental science!) One really cool thing is that this paper referred to a paper written by a woman (the Viewpoints paper), and I think that’s a first from the papers we’ve read!
This paper explains how to select color schemes by the number of data classes, the nature of the data, and the end-use environment (something I didn’t necessarily think of previously). Other things I learned from this paper include the idea that diverging schemes are always multi-hue sequences. We all know that nominal data has no order, but now I also know that it doesn’t make sense to pair it with a light-to-dark color scheme for that exact reason. When choosing the number of data classes there is a fine line between over-generalizing and having too many colors to differentiate; the more complex the spatial patterns, the harder it is to distinguish slightly different colors. I found it interesting that Illustrator and Photoshop use different color conversion algorithms. I also learned the difference between design and display mediums and how much attention each deserves. After checking out http://colorbrewer2.org I found it very clear and useful. It will definitely be a resource of mine in the future!
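The selection logic the paper walks through can be condensed into a tiny decision sketch. This is a hypothetical helper of my own (not ColorBrewer’s actual API), just to capture the reasoning about data nature and class count:

```python
# Hypothetical sketch of the scheme-selection reasoning: the nature of
# the data picks the scheme type, and the class count is capped because
# slightly different colors get hard to tell apart.

def pick_scheme(data_nature, n_classes, max_classes=9):
    if n_classes > max_classes:
        # Too many classes: colors become too similar to differentiate.
        raise ValueError("too many classes to differentiate reliably")
    if data_nature == "nominal":
        # No order, so a light-to-dark ramp would falsely imply one;
        # use distinct hues instead.
        return "qualitative"
    if data_nature == "diverging":
        # Values diverge around a meaningful midpoint: two hues
        # darkening away from a light middle color.
        return "diverging"
    # Ordered magnitudes map naturally to light-to-dark lightness steps.
    return "sequential"

scheme = pick_scheme("nominal", 5)   # "qualitative"
```

ColorBrewer itself does essentially this interactively: you pick the class count and scheme type, and it filters the palettes for you.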
This paper provided a very effective, clear basis for different kinds of thematic cartography and their evolution. It did a great job explaining and thoroughly describing the basics, in hopes of providing a strong foundation for research and of aiding people in finding new techniques for visual representation with thematic cartography. I felt like the historical content was really informative and cool! The figures provided in the paper were also effective and clear. The authors divided the development according to point, line, and area symbolization, each of which can be divided into four stages according to the qualitative-quantitative and physical-cultural distinctions. Overall, no super crazy ideas were presented, but it is a very clear general basis on which to base further research.
Original paper linked here. I liked the figure that opened the paper; it gave a visual representation of what the paper was going to be about before I even read the abstract. After reading this paper I can see the potential perks of stacked graphs when the different stacks are parts of a whole, for example, one company’s sales divided up into categories. As for braided graphs, which the paper showed people liked, I’m not too sure I believe that given how they look in the paper. They seem pretty ugly, but I can see how they work, and I agree with the authors that some aesthetic changes could help. I took particular interest in the fact that they had a training session for the participants and a pilot study, which is ideal! I also like how the questions/tasks asked increased in difficulty (something I noticed is important from my own work on user studies). Lastly, the paper notes that it really became about differentiating between shared and split spaces, which makes sense.
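The “parts of a whole” property of stacked graphs is simple to see in code: each series is drawn on top of the running sum of the series below it, so the top envelope is the total. A minimal sketch (my own illustration, not the paper’s implementation):

```python
# Sketch of stacked-graph construction: each series gets a band whose
# lower edge is the cumulative sum of all series below it.

def stack(series):
    """series: list of equal-length value lists.
    Returns a list of (lower, upper) band boundaries, one per series."""
    n = len(series[0])
    baseline = [0.0] * n
    bands = []
    for s in series:
        upper = [b + v for b, v in zip(baseline, s)]
        bands.append((baseline, upper))
        baseline = upper              # next series sits on top of this one
    return bands

# e.g. one company's sales over three periods, split into two categories
sales = [[3, 4, 2], [1, 2, 5]]
bands = stack(sales)
```

The upper edge of the last band is the category sums per period, which is why stacked graphs read well when the stacks really are parts of a whole.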