Samsung Galaxy Tab vs. the iPad

I can’t wait to design for this! Multi-tasking, “desktop” widgets, better web support (ahem, Flash), and cameras are going to make this so much more USEFUL than the iPad.

Weighing half as much will help too. All those “doctors walking around with tablets balanced on their arms” scenarios suddenly seem much more plausible… have you ever tried holding an iPad with one hand for more than 2 minutes? It’s quite heavy!

On a side note: I’m a bit disappointed that I haven’t heard anything about a tablet with the added ability to use a stylus. Although there’s a lot to be said for building the UI around touch interactions, there are still places where I’d love to be able to use a stylus, such as handwriting notes (no, Steve, the iPad is not “a dream” to type on) or sketching things.

—–

UPDATE: There are in fact rumors about a bluetooth stylus! Fingers crossed (& ready for stylus action) http://www.engadget.com/2010/08/30/samsung-galaxy-tab-accessories-may-include-bluetooth-stylus-and/

Instant Adoption, the "Google Way"

Google does an amazing job at getting users to opt into new “beta” features.

I did the following in < 5 minutes:

  1. Hear about Gmail Priority Inbox for the first time (callout bubble)
  2. Learn what it is, and why it will “make my life so much better” (brief description and short animated video)
  3. Opt in and get started (click one button, and suddenly I’m in a friendly and familiar environment, but with just the one twist that I expected)

Awareness + Clear & Believable Value Proposition + Low Barrier to Entry = instant adoption!

Some other factors that led to my quick adoption:

  • Limited Beta: The limited beta approach adds an air of excitement and exclusivity. (Google’s been playing this angle for years, and it continues to work for them!)
  • Extremely clear value proposition: A brief, amusing animation was used to explain the new feature and why it’s worth checking out. The fact that this could be conveyed in less than 2 minutes speaks to the clarity of the message.
  • Trust: I’ve been using Gmail for years, basically to the point where I “count on it always being there for me” (you and me, Google, BFFLs!). So, why not try this new feature? You wouldn’t lie to me, right?
  • Nothing to lose: The “off” switch is quite visible. If I try it out and decide that it’s “not for me”, I can easily turn off the feature later.
  • Non-intimidating: Based on the brief intro material I’d seen, I already knew what to expect in terms of what would be the same/different. I didn’t have any worries like “Will I be able to understand this? Will I need to read lots of help documentation?” In general, the gradual build up of new features over time helps users to feel like they are growing and evolving with the product, rather than having to reorient themselves each time.

These all work well for Google’s free, consumer-facing products. Where else can we apply these techniques to increase user adoption?

AS3: event.target vs. event.currentTarget

I just figured out how to solve an ongoing bug in my AS3 code.

The problem: I added a MouseEvent.CLICK handler to a Label. Whenever I clicked on the label, the event would fire, but when I checked “event.target” in the event handler, it returned a TextField instead of a Label. This meant that to get the Label, I had to resort to messy workarounds like checking the type of event.target and walking up to its parent when it wasn’t a Label.

The solution is actually quite simple: rather than calling “event.target”, use “event.currentTarget” to get the object that originally had the event handler registered to it (in my case, the Label). I wish I’d known this earlier! For further reference regarding event propagation in Flex, see the Adobe documentation.
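A minimal sketch of the fix (assuming a Flex mx.controls.Label; the names here are illustrative). The click actually originates from the Label’s internal TextField and bubbles up, which is why the two properties differ:

```actionscript
import flash.events.MouseEvent;
import mx.controls.Label;

var label:Label = new Label();
label.text = "Click me";
// The listener is registered on the Label, but the click event is
// dispatched by the Label's internal TextField and bubbles up to it.
label.addEventListener(MouseEvent.CLICK, onLabelClick);

function onLabelClick(event:MouseEvent):void {
    // event.target        -> the TextField that originated the click
    // event.currentTarget -> the Label the listener was registered on
    var clicked:Label = event.currentTarget as Label;
    trace(clicked.text); // "Click me"
}
```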

Inline help - the FAQ is dead!

I just completed a quick-and-dirty heuristic evaluation for one of my research advisor’s other projects (redesigning an online EIO-LCA life cycle assessment tool). Although the new design includes many changes that vastly improve usability, one change I paid particular attention to was the shift to inline help, rather than linking users out to extensive help documentation. This seems to be a growing trend in UI design, and I believe it is a welcome one. We are a culture that strives for instant gratification, and wasting time loading up a FAQ document or knowledge base is just too much work. In fact, I’ve found that many people are so anti-“reading the documentation” that they will opt for tiresome calls to tech support or give up altogether.

Thus, inline help is a great solution– anticipate potential user problems or questions, and provide quick links/blurbs that will address their concerns, should they have any. I’ve seen inline help work effectively in several different ways, particularly when having users fill out forms (as was the case for my professor’s project):

1. The most basic: provide clear labels that prompt users for the correct information the first time around, but also keep further explanatory instructions available right above or next to the field for immediate clarification.

2. Provide a link/icon/button that users can click on for more help. The “?” button is quite popular; I’ve also seen mouseover techniques work quite effectively. A sidebar that contains links to specific information in an online help document is also an option, although users may distrust these if relevant information is not immediately found.

3. Have “warning” labels that update/display automatically when users input invalid information, thus helping to keep them on the right track and avoid frustrating errors. Suggestions can also be quite useful, be they the default answers supplied in fields, or small suggestions listed beneath the fields.

Long story short, it’s not about creating more extensive online help documents; it’s about providing relevant help at the right time. This can be achieved with inline help (and by rephrasing documentation so that it is clear and understandable given the target audience’s level of knowledge). An added bonus: by minimizing user confusion and frustration from the outset, one might lower operating costs by reducing the number of calls to a call center. Plus, happy users will say great things about your system!

What the Font?

A nifty tool for identifying fonts based on images: What the Font

Simply upload an image, verify the characters it detects, and view the results. I was very impressed by how useful and usable the tool is; I was able to identify (a very close match of) a font in less time than it took to find the image on my computer.

WTFont results page

The division between computer and user “work” is very well managed: the results page shows fonts that are somewhat similar to the user-submitted sample, and the user then looks through the results and chooses the best match. A really great detail: a copy of the image you submitted stays in the center of the screen as you scroll up and down the list, for quick comparison. Links for asking experts for help or purchasing fonts are worked in seamlessly. Overall, I was very impressed by both the tool and the site design. (Much improved from their old site: view it if you dare.)

I haven’t checked how well this tool works with more decorative fonts, but I’m sure I’ll be back soon.

P.S. Just noticed they have an iPhone app– bonus usefulness points.

Benefits of Introducing Constraints

I recently read an article by Peter Kollock that suggests that certain online communities may be successful because they appropriately introduce risks and constraints that users must overcome. This reminded me of another CMC paper that discusses how users “overcome” constraints of low-bandwidth CMC in order to form stronger interpersonal relationships. For example, in online chat rooms, users feel like the medium’s restrictions make it difficult to learn about their chat partners. This turns relationship building in online chatrooms into a multi-step process. Users have a few initial cues with which to learn about their chat partners (“A/S/L”), then begin to pick up on speaking style, vocabulary, topics of interest, and other defining information. They may then share personal websites or photos which further develop understanding. Although this process seems inefficient and potentially daunting, there are aspects of exploration, discovery, and insight that also make it rewarding. Knowledge acquisition, particularly about things that we consider “special” or “secret,” tends to be a very enjoyable thing (think: gossip). Perhaps, then, limitations add an aspect of “specialness” to interactions that we would otherwise take for granted, thereby bringing online communities together.

In terms of virtual worlds, it’s also interesting to see how constraints encourage people to come together. In some ways, this is acknowledged by game designers and built into the systems– when a special item is very rare, it encourages people to try to obtain it, which can result in a wealth of complex interpersonal interactions.

In some ways, the unintentional constraints of games are even more interesting. Faced with technical constraints like limited information display, zone load lines, or laggy connections, users find creative ways to turn constraints into benefits. For example, players in MMORPGs often use load zones to their advantage when running away from monsters. Although this doesn’t really mirror anything that would happen in the real world, it’s an example of how constraints can be turned into beneficial, and even key, pieces of gameplay.

This reminds me of another reading that I did for another class that talks about how interaction designers are actually designing a “space of possibility,” in which people are free to explore as they wish. The designer can only hope to introduce constraints with sufficient feedback that allow users to make the environment “their own.” Although the designer may have some ideas of what could happen, it’s really the people in the environment who decide what happens in it.

CHI'09: Mobile Interaction Design Class

Thoughts from the CHI 2009: Digital Life New World conference in Boston, MA.

Since my capstone project involves working with mobile devices for Nokia, my team enrolled in the “Mobile Interaction Design” class at CHI. It was led by Matt Jones from the University of Wales, who developed it in conjunction with Gary Marsden. Although much of the material was not new to me, Matt had some interesting thoughts and fun anecdotes to share.

The class began with a discussion of “cool things we can do with mobile technology.” For example, people use camera phones to capture transient images, share them, and then allow them to essentially disappear. People also appropriate technology in new and unexpected ways—for example, when Matt’s daughter heard the new TomTom GPS system say, “Bear to the left,” she asked, “Daddy, where are the bears?” This turned into a new game between the two, demonstrating a completely unexpected appropriation of this piece of technology.

There was discussion of the Digital Divide and of Frohlich’s StoryBank project, which tackles the question, “How do we design for people who can’t read?” There was also discussion of Don Norman’s question: are we designing things to fill unnecessary needs? For example, if a coffee cup could automatically signal a waitress to come over and refill it, it would remove some of the satisfaction of the interpersonal interaction that had previously existed. In a world where everything is being indexed and mapped to other things, what happens to chance encounters? What are the impacts of turning “fuzziness” into something that is clearly defined?

Some other technologies discussed included implantable computers, RFID tags, and wearable mobile devices. Matt described a story in which he ran into a famous scientist at a conference and tried to introduce himself. As he said hello, the scientist used a device that he was wearing to look up Matt’s personal webpage. The scientist explained, “I’m trying to decide whether you’re an interesting person.” Talk about streamlining interpersonal interactions!

The discussion then turned towards what mobile devices are really for. Are they for everything? Communication versus information? Mobile allows for rethinking of relationships. Voice, context awareness, SMS, email, local web pages, blogging, communities, and pico-blogging (quick interactions like a “poke” on Facebook) have all helped to redefine our relationships with others.

There is a bit of an “appliance attitude” towards mobiles. They’ve started to become a lot like Swiss Army knives—they have lots of features, but individually none of them work well. Matt questions whether this appliance attitude is really correct—although it seems handy to only need to carry one device, “People do carry lots of stuff… we like clutter!” That is, as long as that clutter is useful and attractive. An interesting thing to note is that the iPhone, unlike some other devices, only allows you to have one application open at a time. When that application is open, the device becomes that application. Thus, adding new applications doesn’t feel like it’s detracting from the device. The App store also keeps everything in one place to build trust; people see buying iTunes songs as a “little reward” for themselves, much like buying a latte from Starbucks.

The course touched on some alternative interactions for mobile devices that extend beyond the keyboard and the screen. For example, Schwesig (2005) introduced Gummi, a UI for deformable computers. There are also auditory interactions (icons, which are natural sounds, and earcons, which are synthetic sounds), haptic ones (Sony’s Touch Engine), gestural ones (pointing; micro, mini, and macro gestures), and multimodal combinations of these. Mobile devices represent a fluent mix of life and technology, in which there are not two realities but one: the next time you’re walking down the street, watch kids talking and showing their cell phones to each other while their headphones are in.

Interaction design is important for mobile devices. Poor interaction design results in frustration, wasted time, physical harm, and environmental damage. Can we design things that people won’t want to upgrade, or will want to keep in a sustainable fashion after the technology becomes obsolete? Not only that, but how can we use interaction design to bring experiences from outside the home into the home? Good interactions include visibility and a transparency of process, with a bit of organic clutter thrown in there. When we design, we need to understand the value that people have in certain things. For example, “people would walk across broken glass to send a text.” The UI for text messaging could definitely be improved, so what is it about the system that people find to be so valuable?

The class discussed “Upside down usability,” a movement which seeks to turn the traditional view of usability on its head. This favors the palliative over the generative; it celebrates the inefficient and the ineffective. This is part of the Slow Technology movement (Hallnas & Redstrom 2001), which has “a design agenda for technology aimed at reflection and moments of mental rest rather than efficiency in performance.” In a fully indexed world where everything is known, how do we make our own meaning of it? How do we explore it and find the “hidden gems”? There are benefits and consequences of mobile devices driving you towards certain things. Perhaps the solution is to introduce randomness for “serendipitous” experiences—for example, the randomize feature of the iPod shuffle. What about a running map that is randomly generated as you go? Or a randomly generated pub crawl that suggests new locations to visit and people to meet?

There are dangers of “technologizing away” childhood. Several systems have been developed to relay bedtime stories or teach children how to brush their teeth. However, these remove important opportunities for family bonding time. Are we designing technology to prey on our most important moments?

Some other cool tools that we discussed:

  • The “clutter bowl” – you drop your mobile phone into it and it extracts and projects photos, making the unseen seen (“Clutter in the home” – Taylor & Harper 2007).
  • iCandy – a tangible UI for iTunes that has cards with barcodes that you can share and swap to play music.
  • Pico-projectors – tiny projectors that can project anywhere, allowing for mobile offices, temporary graffiti (mobispray.com), art shows, classroom pranks, navigation, collaboration, and more.

In short, we need to design things that have strong identity (Amazon and eBay), use interaction as a brand (iPod, iPhone, Nokia NaviKey), develop an editorial voice or distinct point of view, and deliver interaction as a package. For a full reference list for the class, as collected by Matt, see: http://www.cs.swan.ac.uk/~csmatt/ReferencesWSITE.pdf

Again, I’m not sure that I learned many new or revolutionary things from the class; I guess that’s a testament to my training and interests. However, Matt was a good speaker, and it was interesting to see some of the “aha!” moments that other members of the class experienced. Also, I was surprised by how many people in the room said that they owned Nokia phones! I had expected an overwhelming majority of iPhones. Maybe it’s because of the non-US crowd, or because developers have traditionally gone for Nokia phones?

Regardless, I enjoyed the class, and have turned to some of the other work that was mentioned for my own inspiration.

CHI'09: alt.chi - Feel the Love, Love the Feel

Thoughts from the CHI 2009: Digital Life New World conference in Boston, MA.

One of the most enjoyable sessions that I attended at CHI was one of the alt.chi sessions entitled, “Feel the love, love the feel.” The session included several “nontraditional” talks that were intended to spark discussion and create a new flow of ideas.

Interactive Slide: an Interactive Playground to Promote Physical Activity and Socialization of Children

This paper described an interactive children’s game in which children play on a slide in order to build a robot. An interesting discussion point emerged when the speaker talked about how in her experiment, there had been a technical flaw which caused the game to break, and the children to play in an unexpected way. When the speaker explained, “we had to throw out the data,” this sparked debate because, as an audience member pointed out, play is a very freeform thing. Was the researcher suggesting that research is not creative? Why was she putting blinders on, rather than using the “bad” data for new design ideas?

The poor researcher seemed somewhat floored and unable to answer. I felt bad for her, because it was obvious that she comes from an academic research background where well-run experiments are essential for testing your hypotheses and getting your work published. In contrast, it seemed that her interrogator came from a much more flexible background that welcomes new ideas. I could almost see the researcher thinking to herself, “Sure, if I had six more months, maybe I could’ve done that!” But deadlines loom, and sometimes you just need to make those experiments work!

Since my view of academia is somewhat limited, I’m actually curious about what sorts of limitations one must impose in order to come up with something that can stand up to peer review. Is the need for review a very limiting thing? In industry, it seems like “quick-and-dirty” tests are often the norm; when it comes to designing great interfaces, that might be all you need. But for a research field like HCI, which is not quite as concrete as, say, chemistry or cognitive psychology, how do you reconcile the desire to try new things with the requirement to back up everything that you say? This concern, that new ideas will be limited by others, is one of the things that stops me from pursuing a career in academia. Strange, when so many people say they go into academia specifically so that they can pursue their interests.

Opportunities for Actuated Tangible Interfaces to Improve Protein Study

Ashlie Brown, the researcher who gave this talk, is actually a grad student with a chemistry background. Her focus is on teaching people about proteins, which are much easier to understand when shown in 3D, rather than 2D. She gave a variety of ideas, such as animations, virtual reality, tangible models, and augmented reality using haptic systems (like Phantom or Posey). Students would greatly benefit from the ability to compare structures and track how to get from one protein to another.

As someone who was completely boggled in my Intro to Biology class, I wish Ashlie the best in creating new ways for students to learn about cell-level interactions. It would be really neat to see if this sort of technology would be helpful for non-students as well: in the lab, could scientists working with proteins somehow magnify them and work with them on a haptic level?

soft(n): Toward a Somaesthetics of Touch

Thecla Schiphorst is a Canadian artist who is also a “computer media artist, computer systems designer, choreographer, and dancer.” I certainly did not expect to meet someone like her at CHI! Her session was about bringing dance and somatics to HCI.

“Somatics” is defined as “the art and science of the interrelational process between awareness, biological function and environment, all three factors being understood as a synergistic whole” (thanks, Wikipedia). Thecla talked about this in terms of the motion of the self, attention to the creation of a state, experience and interconnectedness (empathy), the opportunity to add value through attenuation, and the acceptance of experience as a skill. She defines touch to be both subjective and objective, because we can feel ourselves touching, even as we touch. Based on these principles, she created “soft(n),” a set of 10 soft physical objects that communicate with people through vibration (which creates a sense of intimacy), light (which creates a sense of distance), and sound (when tossed, an object emits a “whee!” sound). The objects are life-size to evoke the idea of play and past lives. To create them, she used a variety of unusual materials, such as conductive foam and conductive silks.

This talk was much more “artsy” than I had anticipated, but I enjoyed hearing Thecla talk about interaction design in a new way. Although I personally thought the 10 objects were a little creepy (do I really want a stuffed cube screaming when I toss it in the air?) I suppose that this is what she was aiming for in creating them. Perhaps the reason that I was so “creeped out” was because the objects were more intimate and human than I would have liked them to be. This reminds me of a discussion we had in my Computer Mediated Communication class—do we really want to create anthropomorphic robots? Do people actually want to interact with “almost-humans”, or would they prefer to interact with things that are physically nothing like themselves? That concern aside, I thought that Thecla’s view of HCI was quite beautiful. Many elements of her talk, such as using physical sensations to create emotional product attachment, are worth further exploration.

Stress OutSourced: A Haptic Social Network via Crowdsourcing

The last speaker at the alt.chi session described a set of wearable devices that enabled crowd therapy via touch. A user would send an outgoing “SOS” call by making a frustrated gesture with her device. Others would receive the call, then gently press on their own gadgets to send calming signals back. The number of calming signals that the original user receives would indicate the number of responses, and each would be felt from a different point on her device. There are opportunities for making this a locality-based system, and for having a web component that breaks down responses at a city level. It could be scalable by not having a one-to-one mapping to a person. There are also additional opportunities that take advantage of the beauty of simple signals: for example, sending messages via nudges, or sending a tangible Facebook “poke.”

The idea of haptic social networks is rather unique. However, I am curious about the impacts of impersonal touch. What does it mean to rely on strangers instead of close friends for the coveted sensation of touch? This is similar to the question, “What are the impacts of replacing face-to-face conversation with computer-mediated communication?” It is a difficult question to answer, but certainly one that should be considered, as there are pros and cons to both.

I found the alt.chi session to be very fun and insightful. In my opinion, it had the most unique and controversial discussion topics, and I found the crowd to be much more energized and involved than in other sessions. I would highly suggest that future attendees check out at least one of the alt.chi sessions to see what they’re all about.