AS3: event.target vs. event.currentTarget

I just figured out how to solve an ongoing bug in my AS3 code.

The problem: I added a MouseEvent.CLICK handler to a Label. Whenever I clicked on the label, the event would fire, but when I called “event.target” in the event handler, it would return a TextField instead of a Label. This meant that to get the Label, I had to come up with messy workarounds like checking the type of “event.target” and grabbing its parent if it wasn’t a Label.

The solution is actually quite simple: rather than calling “event.target”, use “event.currentTarget” to get the object that originally had the event handler registered to it (in my case, the Label). I wish I’d known this earlier! For further reference regarding event propagation in Flex, see the Adobe documentation.
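For anyone who thinks in browser terms, the same distinction exists in the DOM: Event.target is the innermost element actually clicked, while Event.currentTarget is the element the listener was registered on. Here is a minimal TypeScript sketch of the analogous situation (the element id is made up for illustration; in AS3 the property names are identical):

```typescript
// A click listener registered on a container element.
const label = document.getElementById("myLabel")!; // hypothetical container, analogous to my Label

label.addEventListener("click", (event: MouseEvent) => {
  // target: whatever element was actually clicked (often a child), like the TextField inside my Label
  console.log("target:", event.target);

  // currentTarget: the object the listener was registered on, i.e. "myLabel" itself
  console.log("currentTarget:", event.currentTarget);
});
```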

Inline help - the FAQ is dead!

I just completed a quick-and-dirty heuristic evaluation for one of my research advisor’s other projects (redesigning an online Eiolca Life Cycle Assessment tool). Although the new design includes many changes that vastly improve usability, one change that I paid particular attention to was a shift to including inline help, rather than linking users to extensive help documentation. This seems to be a growing trend in UI design, and I believe that it is a welcome one. We are a culture that strives for instant gratification, and wasting time loading up a FAQ document or knowledge base is just too much work. In fact, I’ve found that many people are so anti-“reading the documentation” that they will opt for tiresome calls to tech support or give up altogether.

Thus, inline help is a great solution– anticipate potential user problems or questions, and provide quick links/blurbs that will address their concerns, should they have any. I’ve seen inline help work effectively in several different ways, particularly when having users fill out forms (as was the case for my professor’s project):

1. The most basic: provide clear labels that prompt users for the correct information the first time around, but also have further explanatory instructions available right above/next to the field for immediate clarification.

2. Provide a link/icon/button that users can click on for more help. The “?” button is quite popular; I’ve also seen mouseover techniques work quite effectively. A sidebar that contains links to specific information in an online help document is also an option, although users may distrust these if relevant information is not immediately found.

3. Have “warning” labels that update/display automatically when users input invalid information, thus helping to keep them on the right track and avoid frustrating errors (a rough sketch of this pattern follows below). Suggestions can also be quite useful, whether as default answers supplied in fields or as small hints listed beneath them.
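To make that third pattern concrete, here is a rough TypeScript sketch of an inline warning that updates as the user types. The field id, warning-element id, and the simple email check are placeholders for illustration, not anything from the actual project:

```typescript
// Show or clear a warning right next to the field as soon as the input looks wrong.
const email = document.querySelector<HTMLInputElement>("#email");      // hypothetical form field
const warning = document.querySelector<HTMLElement>("#email-warning"); // hypothetical inline label

if (email && warning) {
  email.addEventListener("input", () => {
    const looksValid = /\S+@\S+\.\S+/.test(email.value);
    // Offer an immediate, specific suggestion instead of a post-submit error page.
    warning.textContent = looksValid ? "" : "Please enter an address like name@example.com";
  });
}
```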

Long story short, it’s not about creating more extensive online help documents– it’s about providing relevant help at the right time. This can be achieved by using inline help (and rephrasing documentation so that it is clear and understandable given the target audience’s level of knowledge). An added bonus: by minimizing user confusion and frustration from the outset, one might be able to lower operating costs by reducing the number of calls to a call center. Plus, happy users will say great things about your system!

What the Font?

A nifty tool for identifying fonts based on images: What the Font

Simply upload an image, verify characters, and view the results. I was very impressed by how useful and usable the tool is: I was able to identify (a very close match of) a font in less time than it took to find the image on my computer.

WTFont results page

The division between computer & user “work” is very well managed– the results page shows fonts which are somewhat similar to the user-submitted font. The user can then look through all of the results and choose which one is the best match. A really great detail: a copy of the image that you submitted remains in the center of the screen as you scroll up and down the list for quick user comparison. Links for asking experts for help or purchasing fonts are worked in seamlessly. Overall, I was very impressed by both the tool and site design. (Much improved from their old site– view if you dare)

I haven’t checked how well this tool works with more decorative fonts, but I’m sure I’ll be back soon.

P.S. Just noticed they have an iPhone app– bonus usefulness points.

Benefits of Introducing Constraints

I recently read an article by Peter Kollock that suggests that certain online communities may be successful because they appropriately introduce risks and constraints that users must overcome. This reminded me of another CMC paper that discusses how users “overcome” constraints of low-bandwidth CMC in order to form stronger interpersonal relationships. For example, in online chat rooms, users feel like the medium’s restrictions make it difficult to learn about their chat partners. This turns relationship building in online chatrooms into a multi-step process. Users have a few initial cues with which to learn about their chat partners (“A/S/L”), then begin to pick up on speaking style, vocabulary, topics of interest, and other defining information. They may then share personal websites or photos which further develop understanding. Although this process seems inefficient and potentially daunting, there are aspects of exploration, discovery, and insight that also make it rewarding. Knowledge acquisition, particularly about things that we consider “special” or “secret,” tends to be a very enjoyable thing (think: gossip). Perhaps, then, limitations add an aspect of “specialness” to interactions that we would otherwise take for granted, thereby bringing online communities together.

In terms of virtual worlds, it’s also interesting to see how constraints encourage people to come together. In some ways, this is acknowledged by game designers and built into the systems– when a special item is very rare, it encourages people to try to obtain it, which can result in a wealth of complex interpersonal interactions.

In some ways, the unintentional constraints of games are even more interesting. Faced with technical constraints like limited information display, zone load lines, or laggy connections, users find creative ways to turn those constraints into benefits. For example, players in MMORPGs often use zone load lines to their advantage when running away from monsters. Although this doesn’t really mirror anything that would happen in the real world, it’s an example of how constraints can be turned into beneficial, and even key pieces of, game play.

This reminds me of a reading I did for another class that talks about how interaction designers are actually designing a “space of possibility,” in which people are free to explore as they wish. The designer can only hope to introduce constraints with sufficient feedback that allow users to make the environment “their own.” Although the designer may have some ideas of what could happen, it’s really the people in the environment who decide what happens in it.

CHI'09: Mobile Interaction Design Class

Thoughts from the CHI 2009: Digital Life New World conference in Boston, MA.

Since my capstone project involves working with mobile devices for Nokia, my team enrolled in the “Mobile Interaction Design” class at CHI. It was led by Matt Jones from the University of Wales, who developed it in conjunction with Gary Marsden. Although much of the material was not new to me, Matt had some interesting thoughts and fun anecdotes to share.

The class began with a discussion of “cool things we can do with mobile technology.” For example, people use camera phones to capture transient images, share them, and then allow them to essentially disappear. People also appropriate technology in new and unexpected ways—for example, when Matt’s daughter heard the new TomTom GPS system say, “Bear to the left,” she asked, “Daddy, where are the bears?” This turned into a new game between the two, demonstrating a completely unexpected appropriation of this piece of technology.

There was discussion of the Digital Divide, and Frohlich’s Storybank project which tackles the question, “How do we design for people who can’t read?” There was also discussion of Don Norman’s question, “are we designing things to fill unnecessary needs?” For example, if there were a coffee cup that automatically signals a waitress to come over and refill it, this removes some of the satisfaction of the interpersonal interaction that had previously existed. In a world where everything is being indexed and mapped to other things, what happens to chance encounters? What are the impacts of turning the “fuzziness” into something that is clearly defined?

Some other technologies discussed included implantable computers, RFID tags, and wearable mobile devices. Matt described a story in which he ran into a famous scientist at a conference and tried to introduce himself. As he said hello, the scientist used a device that he was wearing to look up Matt’s personal webpage. The scientist explained, “I’m trying to decide whether you’re an interesting person.” Talk about streamlining interpersonal interactions!

The discussion then turned towards what mobile devices are really for. Are they for everything? Communication versus information? Mobile allows for rethinking of relationships. Voice, context awareness, SMS, email, local web pages, blogging, communities, and pico-blogging (quick interactions like a “poke” on Facebook) have all helped to redefine our relationships with others.

There is a bit of an “appliance attitude” towards mobiles. They’ve started to become a lot like Swiss Army knives—they have lots of features, but individually none of them work well. Matt questions whether this appliance attitude is really correct—although it seems handy to only need to carry one device, “People do carry lots of stuff… we like clutter!” That is, as long as that clutter is useful and attractive. An interesting thing to note is that the iPhone, unlike some other devices, only allows you to have one application open at a time. When that application is open, the device becomes that application. Thus, adding new applications doesn’t feel like it’s detracting from the device. The App store also keeps everything in one place to build trust; people see buying iTunes songs as a “little reward” for themselves, much like buying a latte from Starbucks.

The course touched on some alternative interactions for mobile devices which extend beyond the keyboard and the screen. For example, Schwesig (2005) introduced Gummi: UI for Deformable Computers. There are also auditory interfaces (auditory icons, which are natural sounds, and earcons, which are synthetic sounds), haptic interfaces (Sony’s Touch Engine), gestural interfaces (pointing; micro, mini, and macro gestures), and multimodal combinations. Mobile devices represent a fluent mix of life and technology, in which there are not two realities, but rather one: the next time you’re walking down the street, watch kids talking and showing their cell phones to each other while their headphones are in.

Interaction design is important for mobile devices. Poor interaction design results in frustration, wasted time, physical harm, and environmental damage. Can we design things that people won’t want to upgrade, or will want to keep in a sustainable fashion after the technology becomes obsolete? Not only that, but how can we use interaction design to bring experiences from outside the home into the home? Good interactions include visibility and a transparency of process, with a bit of organic clutter thrown in there. When we design, we need to understand the value that people have in certain things. For example, “people would walk across broken glass to send a text.” The UI for text messaging could definitely be improved, so what is it about the system that people find to be so valuable?

The class discussed “Upside down usability,” a movement which seeks to turn the traditional view of usability on its head. This favors the palliative over the generative; it celebrates the inefficient and the ineffective. This is part of the Slow Technology movement (Hallnas & Redstrom 2001), which has “a design agenda for technology aimed at reflection and moments of mental rest rather than efficiency in performance.” In a fully indexed world where everything is known, how do we make our own meaning of it? How do we explore it and find the “hidden gems”? There are benefits and consequences of mobile devices driving you towards certain things. Perhaps the solution is to introduce randomness for “serendipitous” experiences—for example, the randomize feature of the iPod shuffle. What about a running map that is randomly generated as you go? Or a randomly generated pub crawl that suggests new locations to visit and people to meet?

There are dangers of “technologizing away” childhood. Several systems have been developed to relay bedtime stories or teach children how to brush their teeth. However, these remove important opportunities for family bonding time. Are we designing technology to prey on our most important moments?

Some other cool tools that we discussed:

  • The “clutter bowl” – you drop your mobile phone into it and it extracts and projects photos, making the unseen seen (“Clutter in the home” – Taylor & Harper 2007).
  • iCandy – a tangible UI for iTunes that has cards with barcodes that you can share and swap to play music.
  • Pico-projectors – tiny projectors that can project anywhere, allowing for mobile offices, temporary graffiti (mobispray.com), art shows, classroom pranks, navigation, collaboration, and more.

In short, we need to design things that have strong identity (Amazon and eBay), use interaction as a brand (iPod, iPhone, Nokia NaviKey), develop an editorial voice or distinct point of view, and deliver interaction as a package. For a full reference list for the class, as collected by Matt, see: http://www.cs.swan.ac.uk/~csmatt/ReferencesWSITE.pdf

Again, I’m not sure that I learned many new or revolutionary things from the class; I guess that’s a testament to my training and interests. However, Matt was a good speaker, and it was interesting to see some of the “aha!” moments that other members of the class experienced. Also, I was surprised by how many people in the room said that they owned Nokia phones! I had expected an overwhelming majority of iPhones. Maybe it’s because of the non-US crowd, or because developers have traditionally gone for Nokia phones?

Regardless, I enjoyed the class, and have turned to some of the other work that was mentioned for my own inspiration.

CHI'09: alt.chi - Feel the Love, Love the Feel

Thoughts from the CHI 2009: Digital Life New World conference in Boston, MA.

One of the most enjoyable sessions that I attended at CHI was one of the alt.chi sessions entitled, “Feel the love, love the feel.” The session included several “nontraditional” talks that were intended to spark discussion and create a new flow of ideas.

Interactive Slide: an Interactive Playground to Promote Physical Activity and Socialization of Children

This paper described an interactive children’s game in which children play on a slide in order to build a robot. An interesting discussion point emerged when the speaker talked about how in her experiment, there had been a technical flaw which caused the game to break, and the children to play in an unexpected way. When the speaker explained, “we had to throw out the data,” this sparked debate because, as an audience member pointed out, play is a very freeform thing. Was the researcher suggesting that research is not creative? Why was she putting blinders on, rather than using the “bad” data for new design ideas?

The poor researcher seemed somewhat floored and unable to answer. I felt bad for her, because it was obvious that she comes from an academic research background where well-run experiments are essential for testing your hypotheses and getting your work published. In contrast, it seemed that her interrogator was from a much more flexible background that welcomes new ideas. I could almost see the researcher thinking to herself, “sure, if I had 6 more months, maybe I could’ve done that!” But deadlines loom, and sometimes you just need to make those experiments work!

Since my view of academia is somewhat limited, I’m actually curious about what sorts of limitations one must impose in order to come up with something that can stand up to peer review. Is the need for review a very limiting thing? In industry, it seems like “quick-and-dirty” tests are often the norm; when it comes to designing great interfaces, this might be all you need. But for a research field like HCI, which is not quite as concrete as, say, chemistry or cognitive psychology, how do you reconcile the desire to try new things with the limitation of having to back up everything that you say? This concern of having new ideas limited by others is one of the things that stops me from pursuing a career in academia. Strange, when so many people say that they go into academia specifically so that they can pursue their interests.

Opportunities for Actuated Tangible Interfaces to Improve Protein Study

Ashlie Brown, the researcher who gave this talk, is actually a grad student with a chemistry background. Her focus is on teaching people about proteins, which are much easier to understand when shown in 3D, rather than 2D. She gave a variety of ideas, such as animations, virtual reality, tangible models, and augmented reality using haptic systems (like Phantom or Posey). Students would greatly benefit from the ability to compare structures and track how to get from one protein to another.

As someone who was completely boggled in my Intro to Biology class, I wish Ashlie the best in creating new ways for students to learn about cell-level interactions. It would be really neat to see if this sort of technology would be helpful for non-students as well: in the lab, could scientists working with proteins somehow magnify them and work with them on a haptic level?

soft(n): Toward a Somaesthetics of Touch

Thecla Schiphorst is a Canadian artist who is also a “computer media artist, computer systems designer, choreographer, and dancer.” I certainly did not expect to meet someone like her at CHI! Her session was about bringing dance and somatics to HCI.

“Somatics” is defined as, “The art and science of the interrelational process between awareness, biological function and environment, all three factors being understood as a synergistic whole” (thanks, Wikipedia). Thecla talked about this in terms of the motion of the self, attention to creation of a state, experience and interconnectedness (empathy), the opportunity to add value through attenuation, and the acceptance of experience as a skill. She defines touch to be both subjective and objective, because we can feel ourselves touching, even as we touch. Based on these principles, she created “soft(n),” which are 10 soft physical objects that communicate with people through vibration (creates a sense of intimacy), light (creates a sense of distance), and sound (when tossed, an object emits a “whee!” sound). The objects are life-size to evoke the idea of play and past-lives. In order to create the objects, she used a variety of unique materials, such as conductive foam and conductive silks.

This talk was much more “artsy” than I had anticipated, but I enjoyed hearing Thecla talk about interaction design in a new way. Although I personally thought the 10 objects were a little creepy (do I really want a stuffed cube screaming when I toss it in the air?) I suppose that this is what she was aiming for in creating them. Perhaps the reason that I was so “creeped out” was because the objects were more intimate and human than I would have liked them to be. This reminds me of a discussion we had in my Computer Mediated Communication class—do we really want to create anthropomorphic robots? Do people actually want to interact with “almost-humans”, or would they prefer to interact with things that are physically nothing like themselves? That concern aside, I thought that Thecla’s view of HCI was quite beautiful. Many elements of her talk, such as using physical sensations to create emotional product attachment, are worth further exploration.

Stress OutSourced: A Haptic Social Network via Crowdsourcing

The last speaker at the alt.chi session described a set of wearable devices that enabled crowd therapy via touch. A user would send an outgoing “SOS” call by making a frustrated gesture with her device. Others would receive the call, then gently press on their own gadgets to send calming signals back. The number of calming signals that the original user receives would indicate the number of responses, and each would be felt from a different point on her device. There are opportunities for making this a locality-based system, and for having a web component that breaks down responses at a city level. It could be scalable by not having a one-to-one mapping to a person. There are also additional opportunities that take advantage of the beauty of simple signals: for example, sending messages via nudges, or sending a tangible Facebook “poke.”

The idea of haptic social networks is rather unique. However, I am curious about the impacts of impersonal touch. What does it mean to rely on strangers instead of close friends for the coveted sensation of touch? This is a similar question to that of, “what are the impacts of replacing face-to-face conversation with computer mediated communication?” It is a difficult question to answer, but certainly one that should be considered, as there are pros and cons for both.

I found the alt.chi session to be very fun and insightful. It had what were, in my opinion, the most unique and controversial discussion topics; also, I found the crowd to be much more energized and involved than in other sessions. I would highly suggest that future attendees check out at least one of the alt.chi sessions to see what they’re all about.

CHI'09: Moving UX into a position of Strategic Relevance

Thoughts from the CHI 2009: Digital Life New World conference in Boston, MA.

This panel session explored many of the issues which I’ve discussed with people in UX (user experience) teams at large corporations. The panel focused on 5 key strategies for “moving UX into a position of strategic relevance.” The strategies were:

  1. UX evangelism and documentation
  2. Ownership of UX
  3. Organizational positioning
  4. Calculating ROI
  5. Conducting “ethnographic” research

The panel had 5 members (one was absent, and her viewpoint was represented by another member). They included Killian Evers (a PM at PayPal), Richard Anderson (Riander), Jim Nieters (UX team at Yahoo), Craig Peters (Founder of Awasu Design), and Laurie Pattison (UX team at Oracle). Each panel member was given a few minutes to share their thoughts, and then the group discussed a series of scenarios. The panel ended with a brief Q&A session.

Panel Discussion:

How do we make ourselves strategically important to an organization? We need to find the one big thing that makes us important and stress that. We need to show that we understand the business, and that business goals ultimately revolve around making money. This could be accomplished by partnering with business teams to identify the most significant problems facing the company, then working together to solve them.

It’s important to deliver results quickly. We only get one chance to make a first impression, which in business terms usually means that we have one calendar quarter to make a significant impact and prove our worth. In order to prove your value, choose to do something that the rest of the company can’t do themselves. For example, when creating deliverables like prototypes and wireframes, create something really nice that others can’t produce themselves. If you are just starting a UX movement at your company, try to pick projects that matter most to the bottom line, such as those that will be demoed to the customers the most, or those that impact market share and sell the most. Also, make sure that the first few projects that you choose are projects that you can really succeed on, because if you fail, it will follow you forever.

For example, at Laurie’s company, they were having a problem with online help—even though there was an online help document, people continued to call the call center and ask the same questions. After conducting a series of user studies, she found that when users were frustrated, they just called the call center instead of trying to use the online help. She decided to address this by moving answers to key questions into the system itself. After making this change, the number of calls that went to the call center dropped, thus saving time and money. This made the value of Laurie’s contribution more apparent to those in charge.

Common Scenarios

Scenario I: You’re on a good UX team, but the sales team sells from a tech-focused point of view. The UX team is “too busy” to teach the sales team about the nuances of a user-centered point of view.

Solution: You may need to find the time to do important things like teaching user-centered point of view to the rest of the company. For example, you could give them a “crash course” about what the UX team does at a high level. This could also be accomplished through “brown bag” sessions, formal training, or answering RFPs.

Scenario II: The CEO of your company “gets” UX, but middle management doesn’t. Without middle management support, user centered design is not recognized as a real science, and is not seen as necessary for executing the CEO’s vision.

Solution: Gain buy-in from people from other groups, such as project managers, software engineers, and other tech and hardware folks. One of the panelists described a story in which he helped the hardware team to reduce a 5-hour cable setup process to a 20-minute procedure by color-coding the wires, getting rid of the big instruction manual, and introducing clear GUIs. Although the UX team didn’t do all of this by themselves, the project never would have gotten done without them. This was an example of how UX expertise was used to optimize beyond just the UX team, which leads to increased buy-in and trust from other teams. That being said, the UX team may need to reposition itself so that it owns the important and relevant pieces of projects, such as owning the UI specifications for an engineering team.

Scenario III: A large enterprise software company has branches in many different countries. The UX team has trouble understanding what matters to a project’s success because they have limited domain knowledge and are less likely than others to be invited to strategic meetings. This makes it harder for them to make intelligent compromises.

Solution: Educate the team members to fill in gaps in domain knowledge, perhaps starting with new hire training. Have weekly meetings for the team in which you invite people from other parts of the company to speak and share their knowledge, thus increasing respect given by the rest of the company. It’s important not to seem like a liability because you don’t understand the product or the business; ask questions. Also, the team needs to learn how to compromise because no one wants to work with someone who is not willing to compromise. However, there is the danger of compromising yourself out of anything useful, which might cause you to lose the respect of others. As with anything, it’s important to find common ground, rather than arguing.

Q&A Session

Q: This talk seems to assume that UX is not in a position of relevance. What if you haven’t been “invited” to take on a position of relevance at your company?

A: You might need to invite yourself. Figure out where you could be useful and go knocking. Where could you have the greatest impact? Take on that project and do your best work to make sure that the project gets noticed.

Q: What if you are an internal supplier of UX whose job it is to make others’ jobs easier?

A: Being an internal supplier of UX is much like any other type of UX practitioner, except your end users are internal. Testimonials go a long way for this sort of position—for example, you might want to work with QA and share success stories with the rest of the company.

Q: Is UX moving towards the role of a specialist, such as a lawyer? For example, does a farmer need a biotechnician in his employ?

A: Instead of worrying about being a specialist versus a technician, what about just being a team player? This sort of middle ground does exist. The key is to know your value proposition, and have a team that has a mix of different skills.

My Thoughts

Having spoken with many UX professionals and attended a variety of HCI talks & events, I’m aware that these are common discussion points in the HCI community. During the talk, someone asked, “How long will we be able to keep making excuses because HCI is a ‘young field’?” It does seem a little strange that this many decades in, HCI is still struggling to find its “place in the world.”

It’s a valid concern, though. Last summer, I attended a brown-bag where my mentor led a discussion about how UX can be integrated into the company’s Agile software development cycle. Clearly this methodology, which is becoming more popular in large tech firms, was not designed with the user experience in mind. It will be interesting to see if new methodologies will be able to bridge the gap between user-centered design and the software development cycle. I suspect that this “gap” is really not quite so large as it would initially seem, and I remain optimistic that if we continue to share the lessons we learn, soon we won’t need to make any “excuses” whatsoever.

CHI'09: PrintMarmoset

Thoughts from the CHI 2009: Digital Life New World conference in Boston, MA.

Although I try to be as green as possible, sometimes I just need to print something to read it offline. I really hate printing web pages straight from my browser because I always feel like I’m wasting paper on advertising, strange page aspect ratios, and excessively-large or too-light text. To get around this, I usually copy-and-paste relevant text and images into a MS Word document, format the text the way that I want it, and print from there. Although this is a viable workaround, it can be a bit of a pain because Word never applies formatting quite the way I want it. As a result, I don’t print “cleanly” as often as I’d like to. The good news: Jun Xiao and Jian Fan, researchers at HP Labs, have come up with an idea to combat wasteful print jobs, which they call “PrintMarmoset.”

PrintMarmoset’s self-proclaimed goal is to improve the experience of printing web content “while simultaneously addressing user needs and environmental responsibility.” Its creators hope to help people save paper when printing web pages by allowing them to specify print areas, called “printer masks.” These allow people to highlight important content and images while cutting out headers, footnotes, and advertising. In addition to making printouts more useful and relevant, printer masks keep excess pages from being printed.

At the time of the conference, it seemed that the researchers focused primarily on how one might set a printer mask using a browser plug-in. After a user successfully sets a printer mask, the mask is saved for that web page, so the next time that she visits the page, she can see the printer mask that she already set. The researchers also explained a potential social component, in which users could see each others’ printer masks. Another idea is to include a counter widget that shows PrintMarmoset users how much paper has been saved thanks to the plug-in. This could increase user commitment to the tool and encourage additional prosocial behavior.

Although PrintMarmoset is a very cool concept, there are still many challenges that need to be addressed. Several audience members asked the researchers about the impacts that the tool would have on web design. In particular, how would this impact online advertising? Are there best practices that web designers would want to adopt to work well with PrintMarmoset’s output?

The details of how a printer mask would actually print out also need further exploration. When an audience member asked what a printout might look like, the researchers admitted to not having focused on this element much, but offered some suggestions, such as using templates. This is one of the more interesting aspects, as I can think of several potential solutions that could involve different levels of customizability, such as drag-and-drop or the ability to apply a personalized CSS template based on HTML tags. Either of these would be an obvious improvement over the current copy-and-paste method that I use in Word. If they could solve formatting issues without requiring one to open another application, this would be ideal. Also, a “Print Preview” feature could be incredibly useful, as this is currently lacking from browser print jobs.
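As a rough illustration of the CSS-template idea (my own sketch, not anything PrintMarmoset actually does), a small script could inject a print-only stylesheet that hides the regions a user has masked out. The selectors below are placeholders and would need to be tailored per site:

```typescript
// Hide obvious non-content regions when the page is printed, leaving the article text intact.
const printStyle = document.createElement("style");
printStyle.media = "print"; // rules apply only to print jobs, not on-screen viewing
printStyle.textContent = `
  header, footer, nav, .ad, .sidebar { display: none; } /* hypothetical selectors for masked areas */
  body { font-size: 12pt; }                             /* readable print sizing */
`;
document.head.appendChild(printStyle);
```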

I really like the concept of PrintMarmoset—it seems like a potentially simple and lightweight solution to a common problem. It seems like a simple and convenient enough tool that everyday users could adopt it. The actual interactions surrounding the tool are critically important, though, and I would encourage the researchers to give more attention to detailing the actual printout process since this could determine the success of the product. If the plug-in launches as a Firefox extension, I might give it a try, but unless it provides an obvious benefit of convenience, speed, and performance, it’s back to copy-and-pasting into Word for me.

Facebook as a means of self-socialization

I was doing a bit of reading about the social psychology theories behind Facebook, and stumbled across the concept of “peripheral awareness.” Resnick (2001) describes this as the phenomenon of people learning more about their community in order to increase their social capital. I hadn’t really thought about it before, but perhaps this is why people spend so much time simply “browsing” Facebook. Although a part of it may be based on the desire to (for lack of a better word) “stalk” individuals, a larger motivation may be self-socialization.

Consider the concept of “people watching.” Although this may be in part motivated by curiosity and a desire for amusement, another part of it is consciously observing others as a way for us to understand our own place in the world. On Facebook, we are free to people-watch without any danger of getting “caught.” This allows us to spend large amounts of time understanding the average and deciding who we want to be more/less like. Having observed and discussed Facebook usage amongst my classmates and friends, I’ve found that most people spend a significant amount of time browsing two types of people: those they are close with, and those they are jealous of or wish to belittle. Could it be that while watching the lives of others, we are simultaneously deciding how we will change ourselves in response to them? By viewing the trends of the majority, can we not better learn how to express ourselves in a way that helps us to become the people that we wish to be?

Unintended Consequences of Health Care Technology

Here’s an interesting article about why the strong push for electronic medical records may not be a wise decision. The article talks about the government’s proposal to spend $50 billion over five years to promote technology for health care– a major component of which is replacing paper medical records with digital ones. Although the concept sounds good at first, the article points out that there is very little research-based support for the benefits of electronic medical records.

This reminded me about a BayCHI talk that I went to over the summer where Chris Longhurst (of Stanford’s Children’s Hospital) discussed the “Law of Unintended Consequences” with regards to health care technology. In his talk, he described how hospitals that were “going paperless” in order to rid themselves of inefficient processes discovered some unexpected problems. One particular example was of a system which allowed doctors to prescribe medications in a computer system by selecting things from a list (rather than by writing them out by hand). Although this sounds like a great way to streamline processes, the task was in fact made too easy. Doctors would mistakenly select incorrect medications from the list and not notice their mistake, whereas if they had been writing things by hand, this would have been much less likely to happen. Unfortunately, mistakes can be very costly in the health care field– according to this study, mortality rates have in fact increased in hospitals that adopt certain health care technology.

Clearly, if we are going to invest in new healthcare technology, we need to be aware of the risks involved and do everything we can to foresee and plan for these “unintended consequences.” In such a high-stakes field, it’s not enough to just design experiences that streamline processes and “make things easier to use.” We also need to consider how we can call attention to tasks that demand high levels of focus without creating information overload or frustration. Health care technology is an incredibly important field with immense potential for good, but these sorts of considerations are absolutely necessary if we are to create technologies that help more than they hurt.

Blogging to evade social norms

I recently had a discussion with a friend about how we are living in an “extremely narcissistic time.” Reading some of the academic literature on motivations for blogging, this claim seems like it has some validity to it. Many of the motivations for blogging seem tied to a desire for one-sided self-expression and indulgence.

A 2004 paper by Nardi et al. includes some interesting excerpts from blogs that are particularly telling. When bloggers write about events that happened during their day (typical “diary style” fodder), part of the motivation may be to look back on it for future enjoyment. Part may be for gaining personal insight by reflecting on past events. However, why use a public blog rather than a private diary? It seems that many bloggers are motivated by the knowledge that others may read and form impressions about the blogger based on their words. If the blog is entertaining, it suggests that the blogger is an entertaining person. If it teaches a skill, it suggests that the blogger is very skillful. If it captures life events that seem interesting or glamorous, then it suggests that the blogger is an interesting person. Self-projection, then, is key. Can we say that this is a form of Narcissism?

This seems to have strong similarities to the Facebook mini-feed phenomenon. When

HCI versus Interaction Design

I was working with a master’s student in CMU’s Interaction Design program today, and afterward we got into a lively discussion about the distinctions between ID and HCI.

One of the few things we agreed on: HCI and ID use similar methodologies to learn about users, including both qualitative and quantitative studies.

There were many more things that we did not agree on. I found it particularly interesting to hear her perspective because given her background, I had expected her to have a good understanding of what HCI “is all about.” However, she presented several misconceptions which are, if anything, even more prevalent in the greater design and technology communities. Some examples:

  • Misconception 1: Although user research methodologies may be similar, HCI and ID use them for different reasons. ID is all about designing a “complete user experience,” while HCI is completely centered on coming up with technical solutions. My response to that is– how can we come up with a useful, usable, and pleasant technology system without actually looking at the complete user experience? The “solution” might look like a piece of technology, but behind that are serious considerations about our users and their behaviors, values, aspirations, and more. Technology is just a medium through which a better user experience can be achieved– just as in ID, it is the way that people choose to use that technology that really defines their experience.
  • Misconception 2: People who study HCI are simply UI designers for the desktop/mobile/web. Take a walk through the HCI labs at CMU and you will see how absolutely untrue this is. HCI strives to push the limits. The beauty of technology is that it makes anything possible. As HCI practitioners, we are not boxed into using one certain type of media– we can explore any number of new ideas. We can combine virtual and real-world elements into new creations that have all sorts of unique affordances. UI design is just some small part of this, and when there are so many new types of interfaces, even UI design itself can be an incredibly immense area to explore.
  • Misconception 3: ID is about people-to-people interaction. HCI is not because it’s limited to technology. This statement troubles me because it implies that HCI is solely about having people interact with computers. This is a gross misconception– it pains me to know that people think of HCI as just finding ways to redesign the Photoshop UI. HCI is about creating technology that enables. As to what sorts of interactions it enables, well, this could really be anything– how people interact with each other (instant messaging, Facebook, etc.), how they behave within their environment (GPS, wearable computing), how they understand themselves (online identity building, methods of self-recording), and so on– the possibilities are endless. Although I can’t profess to know much about the specifics of ID, I would imagine that they make a similar pitch about the range of possibilities that their field encompasses. And I am sure that many ID projects have a strong technology component, simply because technology is so prevalent in every aspect of life. Design someone’s experience as they walk through a museum, and you need to be aware that visitors are probably carrying cell phones with them. How can you completely disregard a potential distractor (or opportunity!) like this if you claim that you are designing a true space of possibility?

All in all, it was very interesting to see what sorts of misconceptions are associated with HCI. Why does someone in ID have such a restricted view of HCI, even though the two disciplines have so much overlap? I wonder if some of it has to do with the courses that we take and the deliverables involved. I suspect that if she were to read some HCI research papers or attend an HCI conference, she would realize that the distinction is not quite as strong as she originally thought. Classroom deliverables aside, our goals are the same: to improve the lives of our end users.

Challenges of CMC Research

I am currently taking a class about Computer Mediated Communication (CMC). Some of the issues that consistently come up are challenges of studying CMC phenomena. For example, when we rely strictly on elements captured through technology, this limits our view of all types of communication. This can in turn limit our understanding of the impacts of CMC. However, this sort of analysis may be appropriate for different types of questions related to CMC. This seems to indicate that a large part of the challenge with studying CMC is phrasing research questions correctly, and choosing appropriate methodologies by which they can be answered. This also seems like a research area where arguing validity/generalizability is particularly challenging.

Even more challenging is the fact that technology changes rapidly, with just as rapid effects on social interaction. For example, “self-expression” studies that were run with personal websites 5 years ago could be repeated with Facebook now, but the results might be drastically different. In the digital realm, people keep coming up with new types of technology and throwing them out there to see what sticks. Then others start to use them, integrate them into their own lives, and learn from and are changed by them.

The continuing social shift/development is particularly hard to capture across time and technologies. I found a paper by Garton et al. that attempts to overcome this by visualizing social network changes over time. Although people’s interactions with technology, expression, and connection will continue to change over time as new methods of CMC emerge, the piece that will stay consistent is that technology causes interpersonal relationships to change as new possibilities emerge. Can we measure this across different networks as they come and go? It is an interesting challenge, but perhaps this sort of work can give us a better understanding of the network shifts that are occurring.

Cisco TelePresence

Cisco takes a stab at co-presence:

Cisco TelePresence promo on YouTube

Honestly, there doesn’t seem to be much of a difference between this and traditional teleconferencing, with the exception of a larger screen and smoother connection technology. That being said, since this is only a promo video, who knows what the system is like in real life– there might be more lag than shown in the video, and auditory input/output might be a challenge. Never mind the setup of the video screens– not everyone’s going to have that same cherry wood conference table. And what happens when there are 10 people in one office trying to get in on the same conference?

The TelePresence system also does not solve many of the problems caused by a lack of co-presence, such as the inability to pass artifacts around the table. In a real conference, you may move your seat to get a better view of the whiteboard; in TelePresence, you do not have this ability. There is no way to make a private comment to your neighbor, and no way to break off into small group discussion.

Although Cisco TelePresence furthers much of the technology for remote communication, it still fails to afford many of the capabilities of face-to-face communication. Until those gaps can be bridged, systems like TelePresence will not make us feel like we are “really in the same room as all of you.”

Co-presence Affordance in Virtual Worlds

In one of my classes, we were discussing the affordances present in different types of computer mediated communication. Afterwards, I was reading through one of Prof. Kraut’s papers about using visual information to collaborate on physical tasks. It got me thinking more about the co-presence affordance, and whether it is considered to be a part of virtual worlds like Second Life. Note: The co-presence affordance means that while communicating with others, you share the same environment as your conversation partners– in the Kraut study, this would be the condition where subjects repair the bicycle together, physically in the same room. In comparison, the video-only and audio-only conditions do not have the co-presence affordance; for more examples of the resulting trade-offs, see the article.

For example, in Second Life, there is some sense of co-presence because in the game world, players treat their avatars’ surroundings as the reality they are currently in. Thus, you could say that Second Life has co-presence: even though you aren’t in the same physical environment as the other player, to some degree neither of you is really in your physical environment at all; you both share the virtual one. Are the details sufficient for true co-presence, though? You can carry out actions in order to succeed in some Second Life tasks, like following someone somewhere. However, you would never be able to accomplish a task as complex as the bicycle repair task. Though Second Life tries to imitate the actions a person could make in real life, it does not have a co-presence affordance sufficient to stand in for FTF interactions.

However, players in Second Life adapt their view of the world to that which is available to them (in this case, rough movements like a “follow me” task). How similar to real life must an experience be in order to be considered “true co-presence”? In a game with a restricted view of reality, where more detailed tasks are not required, are restricted affordances enough? Perhaps some of the appeal of virtual worlds like Second Life comes from being able to ignore fine tuned interactions (such as those necessary to repair a bike) and focus on other types of interactions instead.

It would be interesting to see how interactions in virtual worlds change if they gain more realistic co-presence affordances. I have heard of situations where people have tried to use Second Life for non-recreational purposes, such as work meetings and training sessions. I imagine that some of the motivation for trying these is to capitalize on Second Life’s supposed co-presence affordance, but perhaps the reason that these have not caught on is that the types of co-presence that session leaders were hoping for– students being able to observe a speaker’s facial features, or a speaker being able to tailor a lecture based on the body language of the students– are not yet present in this digital world. Thus, this sort of interaction could even be detrimental because it forces users to adapt to a different environment with different rules. The co-presence experienced is really a virtual one, and the ability to translate between this and the real world is an interesting challenge.

Emotional Multitasking

This morning, I was chatting on IM, checking email, and reading an article for class (a typical Friday morning). As I was doing this, I found myself wondering about how people multitask at an emotional level. Since working memory is limited to just a handful of items (7 ± 2) at a time, people who are good at multitasking are those who are good at quickly swapping task-related data in and out of memory. What sort of effect does this have on emotion? For example, if you were IMing with someone about happy news, but reading a very sad email, would your emotions fluctuate as you flipped between the two items? Would the stronger emotion dominate, or would the other emotion help to temper it?

Also, there must be some sort of cost for trying to mediate the different emotions associated with each task. With so many concurrent forms of emotional stimulation, it’s no wonder stress levels keep going up.

iGoogle meets Chickenfoot

A few weeks ago, I started using iGoogle as an attempt to free up Firefox tabs while feeding my GMail/GCal/GReader addiction. Since then, iGoogle and I have formed a bit of a love-hate relationship. Although I like being able to see all my information in one place, the feature limitations are very frustrating (why can’t I apply labels without going to “real” GMail?!) The design limitations are also painful. In particular, I am continuously irked by how LARGE that header image is. It’s visually distracting and takes up precious screen real estate. This means that when I’m looking for information, I have to try to ignore the distracting image and potentially scroll down to see the bottom halves of my gadgets. Although iGoogle has built up a community around skinning themes, there is no ability to modify dimensions or layout, making the header a consistent annoyance in my iGoogle experience.

Before: is this header really necessary? That header is only cute the first time you see it. After that, it’s a distraction.

Today, I finally found a way to get rid of that header. Meet Chickenfoot, a “Firefox extension that puts a programming environment in the browser’s sidebar so you can write scripts to manipulate web pages and automate web browsing.” Although the basic idea is similar to GreaseMonkey, Chickenfoot’s goal is to allow users to easily write scripts to interact with web pages without having to look through source code. For example, it’s easy to automate a task like running a Google search, or changing the text label on a button. Users can write scripts on the fly using a built-in command line, or save scripts as “Triggers” that can be run manually or automatically later on.

I decided to give the application a try, and was delighted to find that Chickenfoot is very easy to pick up. In about 2 hours, I learned a bit about scripting in the Chickenfoot environment, wrote a script to fix my iGoogle design problem, and exported the fix to a Firefox extension.

The script I wrote is surprisingly simple. Every time you load up iGoogle, the script replaces the DIV that contains the header with a simple 1-line search box. Easy! Converting the script into a Firefox extension was a snap using Chickenfoot’s package function. The only complaint that I have is that the script does not run until after the webpage has fully loaded, which is noticeable since iGoogle loads so slowly. However, since iGoogle uses AJAX, the script only runs the first time you load up the webpage. This is a small tradeoff for the lovely screen real estate which I’ve freed up. Amazing how 2 hours and 4 lines of code have made me so much happier with the iGoogle experience– thank you, Chickenfoot! Now, if only I had time to rewrite the entire iGoogle user experience…
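For the curious, the trigger boils down to something like the sketch below, written here as plain DOM calls rather than Chickenfoot’s own pattern-matching helpers. The header element id is an assumption for illustration; the real script simply targets whichever DIV wraps the iGoogle banner:

```typescript
// Swap the bulky iGoogle banner for a bare, one-line Google search form.
const header = document.getElementById("header"); // hypothetical id of the banner DIV

if (header && header.parentNode) {
  const form = document.createElement("form");
  form.action = "http://www.google.com/search";
  form.method = "get";

  const box = document.createElement("input");
  box.type = "text";
  box.name = "q"; // standard Google query parameter
  form.appendChild(box);

  header.parentNode.replaceChild(form, header); // banner gone, slim search box in its place
}
```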

After installing the iGoogleClean Firefox plugin: final product, sans terrible header.

You can check out the code that I posted on the Chickenfoot script repository, or download the Firefox extension.

Singing with Machines

Kelly Dobson doesn’t just “work with machines”– she sings with them. Her website features some of the interactive machines she’s made, such as the “ScreamBody” (scream into it and it will silence your scream, but record it and allow you to play it back later). For an overview of some of her past work, check out a video of her talk at GEL 2008. In the first few minutes, she sings (snarls?) with a blender, and relates the story of how she learned to make machines an extension of herself by singing with them.

Kelly’s research is interesting because it focuses on mutual empathy in human-machine, and even machine-machine, pairs. As you listen to her speak, it’s easy to forget that line between what makes humans and machines different. Singing in harmony is one of those things that seem so distinctly human– if you can start to do this with a machine, how can you not start to feel some sort of empathy? I wonder what other sorts of activities humans do to relate to each other that can be extended to machines. Also, what are the benefits of strengthening relationships between humans and machines? Kelly mentions therapy– if we trust in machines, perhaps we can allow them to console us and provide support.

Another interesting thought she brings up: in the future, will there be a need for machines to console other machines? This may sound far-fetched, but how many times have we contemplated machines that “feel” emotions? I think this leads to another question– does feeling emotions simply mean having needs that must be satisfied externally? The typical view of creating emotional machines is that we need to build systems that mimic how people emotionally respond to different situations. A sophisticated system might be able to pass a Turing test if it were able to detect and respond to situations in an appropriate way. However, does this mean that a machine is really “feeling”?

It is also important to consider how people learn emotions, and include this in such a model. Social learning theory might suggest that emotions are really learned during childhood as children view the world around them for cues about how to respond to things emotionally. Other theories suggest that emotions are inborn traits– perhaps born out of an evolutionary need for survival. For example, the feeling of “loneliness” might push people to connect with others, which builds relationships that are beneficial to the individual as well as society. Can we build machines that have base “instincts” that guide their behavior, but are also capable of learning appropriate emotional responses? Can machines use some sort of social referencing in order to learn appropriate reactions to situations based on both the context and their emotional state? I’m curious about how much of machine-emotion research is about capturing the ways that people learn and express emotions. An alternative may be to determine how people judge others’ emotions based on their words and behavior. This could lead to the design of machines that cause us to perceive them as emotional beings, based on our own emotional reactions to them.

Five Second Test

I just found this little site: www.fivesecondtest.com

Web designers submit images of their site mockups. Users then come to the Five Second Test website and select a test to take. The image of your website layout flashes on their screen for 5 seconds, and then the user completes one of the following tasks depending on which type of test they are taking:

  • Classic: users are asked to list things that they remember after viewing your interface
  • Compare: users see two versions of your interface and specify their preference
  • Sentiment: users are asked to list their most and least favorite things about your interface

I took a couple of the tests and found that it was quite fun to be a tester. Maybe that’s just because I really like looking at and analyzing UIs, but the fast-paced nature and simple feedback form make it rather absorbing. I felt like I wanted to just review website after website, rather than having to keep clicking the “do a random test” button!

Getting users to come to and continue to participate in the tests must be one of FST’s challenges. Without a continuous flow of testers, people submitting designs will get little out of the service, since this sort of limited feedback really needs to be available in larger amounts in order to yield useful recommendations. Although this seems to be a pet project right now, I think it has a lot of potential as a method for quick usability tests and for uniting a web design community. I’m sure there must be websites out there that are dedicated to users sharing their interfaces and receiving feedback from the community, but the FST feels different because it blends a sense of low commitment with the promise of high reward. For quick design iterations, the FST might be all that you need if you’re looking for the impressions of many, rather than the detailed analyses of a few. It would be great to see the FST creators, mayhem(method), try to build up some community around this, or for an existing online design community to adopt a similar type of test.