RAA #5: The benefit of simulated social interaction for older users

Research Article Analysis

Citation

Chattaraman, V., Kwon, W., & Gilbert, J. (2012). Virtual agents in retail web sites: Benefits of simulated social interactions for older users. Computers in Human Behavior, 28(6). Retrieved from http://www.sciencedirect.com/science/article/pii/S0747563212001598

Purpose

This study explored the effects and benefits of simulated social interaction, delivered through virtual agents, on older users’ experience with online shopping websites.

The research comprises two separate studies. The first identifies the barriers older users face when using an online shopping website. The second demonstrates that a virtual agent assisting with search and navigation increases older users’ perceived trust and social support, as well as their intention to shop online again.

Methods

The first study used focus group interviews. The overall topic was, “What factors inhibit the adoption of online shopping among older users?” The group’s mean age was 73, and there were 48 participants. In order to triangulate the data, the study used a semi-structured interview about online shopping use and adoption, an experiential task of viewing an online shopping site and buying something, and a second semi-structured interview focusing on the challenges of performing the task. There was also a questionnaire.

The second study used an experiment in which older users explored and used an online shopping website that either had a virtual agent to assist them or lacked one. After performing the task, the participants filled out a questionnaire about the experience.

Main Findings

The first study found six barriers to online shopping adoption by older users: perceived risk barriers, trust barriers, social support barriers, familiarity barriers, experiential barriers, and search barriers.

The second study found that social presence in online shopping websites via a virtual agent helps older users perceive greater social support from the web interface.  This led to a greater sense of trust in the website and reduced older users’ fear of buying an incorrect product online.

Overall, including a virtual agent in an online shopping website improved older users’ experience and increased their willingness to shop online again.

Analysis

This is, of course, assuming the virtual agent is programmed well enough to understand what users are asking and to answer the majority of their questions.  I often get annoyed with the “chat” functions you see on websites.  They are often not real people, and it bothers me when they say “Chat with …” when no one is actually there. I don’t want to speak with a computer; I can interact with the computer perfectly well on my own, and I am often skeptical of its ability to tell me how to use it.  However, this does not seem to be the case with older users, and this article presents an excellent description of their barriers. I hope to remember this list in the future.

I think that even if it annoys more tech-savvy users to see the chat option, it should be included.  It doesn’t harm anyone, and this study shows that it actually helps older users feel more comfortable with technology.  It is important to remember to design for all levels of use.  It is much better to go to the trouble of including a feature that will attract an entire age group than to write that group off as a lost cause. It only takes time and can pay off well for your website. I am also reminded to consider age as a factor when designing personas.  Not all website users are technologically minded!

RAA #4: Gender and website preferences

Research Article Analysis

Citation

Djamasbi, S., Tullis, T., Hsu, J., Mazuera, E., Osberg, K., & Bosch, J. (2007). Gender Preferences in Web Design: Usability Testing through Eye Tracking. AMCIS Conference Proceedings, 2007. Retrieved from http://aisel.aisnet.org/cgi/viewcontent.cgi?article=1643&context=amcis2007

Purpose

The authors wanted to see if gender played a role in website preferences. Previous literature suggested this would be the case for the layout and presentation of stimuli, spurring the authors to test whether males and females had different preferences when viewing a website.

The authors note that previous research didn’t use an eye tracking system like theirs. Previous studies had participants wearing odd headgear, which limited their head movement and created an unnatural setting.  The authors’ eye tracking was done through a camera sitting on top of the monitor and did not impede the participant.

Methods

The authors used bricklets to test their hypothesis. A bricklet is a small window with specific useful information designed to make navigation of the website faster and easier for the user. Its main purpose is to bring important information to the attention of the user.

The authors varied the background color and image of the bricklets to determine whether that changed how females or males noticed them.

The authors hypothesized:

H1) Female participants will notice bricklets with pictures of people more than males.

H2) Female participants will notice bricklets with a light color background more than males.

The sample was 17 male and 19 female participants at Fidelity Investments (the employer of some of the authors).

Participants were asked to navigate the website to find information. Only one of the tasks required the use of a bricklet to complete; however, all of the tasks could be made more efficient by utilizing a bricklet. There were four different designs of bricklets, with four different colors.

Main findings

Contrary to the authors’ hypotheses, females did not notice bricklets with pictures of people more than males did. The eye tracking system did not show the females fixating more on the bricklets with the images than the males. The second hypothesis was also not supported by the data.  Overall, there was no difference between the genders with regard to the number of times they fixated on the bricklets.

Analysis

The authors did note that their study indirectly supported the banner blindness theory (users overlook banners on websites). So, they suggest that men and women ignore the banners equally.

I think it is important to note that the authors were testing noticeability, not appeal. The genders did have preferences in terms of appeal, but those preferences didn’t affect what they noticed. When I think of preferences, I think of appeal, not noticeability.

I found it interesting to read another study where the hypotheses were not supported.  I wonder what the difference would have been if the study hadn’t used work-related materials. The participants could simply not have cared about what they were doing.  I also wonder what the merit of this study was for the employers.  Did they intend to change their systems into male and female versions? Or was this a study simply to find out if there was a difference?

I was also confused about the website itself. They say some of the tasks were to find out how many people have a particular account. This doesn’t sound like a website; it sounds like a company system, and I think that would affect how the participants looked at it.  They didn’t go into it thinking “website”; they were thinking “company system.”  I would have liked more clarity on the website itself and its purpose.

I wanted to do a gender-based article because I had read and reported on a culture-based one.  This study leaves me feeling let down, so I think I may try to find a different one studying gender.

RAA #3: Social cues and perceived agency in HCI

Research Article Analysis

Citation

Appel, J., Puetten, A., Kramer, C., & Gratch, J. (2012). Does humanity matter? Analyzing the importance of social cues and perceived agency of a computer system for the emergence of social reactions during human-computer interaction. Advances in Human-Computer Interaction, 2012. doi:10.1155/2012/324694

Purpose of the research

This research had two main hypotheses:

(H1) The social effects will be higher in the conditions with a presented virtual character as interaction partner (high number of social cues) than in the conditions with a presented text-based interface as interaction partner (low number of social cues). (The effect of social cues).

(H2) The social effects will be higher in the conditions with an assumed avatar as interaction partner (high agency) than in the conditions with an assumed agent as interaction partner (low agency). (The effect of agency).

Their overall purpose was to see what factors led to the emergence of social behavior in human-computer interaction.

Methods

The authors tested two competing explanations against one another: the agency approach and the number-of-social-cues approach. They did this by having participants tell three stories about their lives to either a text-based computer program or an animated character.

The agency approach assumes there will be a difference depending on whether the user believes they are interacting with a computer or a human being, suggesting that real humans as interlocutors will evoke stronger social reactions than computers.

In order to test the agency factor, the authors told participants they would communicate with either an artificial intelligence or another real participant in the next room.  The number of social cues was varied by using either a text-based interface or an animated character.

The authors used a scale called the Social Presence scale to generate quantitative data. They also applied qualitative analysis to the participants’ answers to the computer.

Main findings

No strong support was found for agency.  The authors noted only that an agency effect appeared for the feeling of social presence, which was more intense after communicating with the “other subject” via avatar or text chat than after communicating with the computer; however, the Social Presence scale could have provoked stronger reactions than usual due to its wording.  The authors advise rewording the scale to eliminate this problem in future studies.

For the social cues factor, they found several results supporting the assumption that the number of social cues displayed influences the strength of social reactions. This suggests that a human-like virtual character triggers stronger social reactions than a text-based interface. The authors did not find this to be supported extremely well, and so can only suggest that the more a computer presents characteristics associated with humans, the more likely it is to elicit social behavior.

Analysis

I think this is the first article I have read that basically had no strong findings.  The authors can suggest that the more social characteristics a computer displays, the more likely the user is to direct social behavior toward the computer, but they admit their data don’t strongly support this.  They note that different measurement scales should have been used, and that they only measured two levels of social cues.  This article, for all that it defended its methods well enough at the beginning, seemed to lose steam at the end when the authors had to write about how little they actually found.  While one could say that any knowledge is useful, I am used to stronger results.  I find their premise interesting and was hoping for a much stronger statement at the end. It’s a pity the study didn’t support it and had so many limitations.


RAA #2: Text Advertising Blindness

Research Article Analysis

Citation:

Owens, J., Chaparro, B., & Palmer, E. (2011). Text Advertising Blindness: The new banner blindness? Journal of Usability Studies, 6(3), 172–197. Retrieved from http://delivery.acm.org/10.1145/2010000/2007460/JUS_Owens_May_2011.pdf

Purpose:

To expand the banner blindness concept to text advertising blindness and to examine the effect of search type and advertisement location on the degree of blindness.

Do people overlook text advertisements in the same way they are ‘blind’ to banner advertisements?

Methods:

The authors created a website about the different Hawaiian Islands. For the first study, the participants had to either locate a specific fact on the website or find information related to a given topic.  The website was divided into content areas so the authors could determine and explain where participants’ tracked eye movements landed.

After performing the tasks, the participants filled out a questionnaire that asked them in which regions the advertisements were located.
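
As a side note of my own (not anything from the article), the region-based analysis described above can be sketched very simply: each (x, y) fixation reported by the eye tracker gets attributed to a named content area of the page. The region names, pixel boundaries, fixation points, and helper names (`Region`, `count_fixations`) below are all hypothetical, purely for illustration.

```python
# Illustrative sketch: attributing eye-tracking fixations to named content
# regions of a page. Not the authors' code; regions and points are made up.
from dataclasses import dataclass


@dataclass
class Region:
    name: str
    left: int
    top: int
    right: int
    bottom: int

    def contains(self, x: int, y: int) -> bool:
        # A fixation belongs to a region if it falls inside its rectangle.
        return self.left <= x < self.right and self.top <= y < self.bottom


def count_fixations(fixations, regions):
    """Count how many (x, y) fixation points fall inside each region."""
    counts = {region.name: 0 for region in regions}
    counts["outside"] = 0
    for x, y in fixations:
        for region in regions:
            if region.contains(x, y):
                counts[region.name] += 1
                break
        else:
            counts["outside"] += 1
    return counts


# Hypothetical layout: a top ad strip, a right-hand ad column, main content.
regions = [
    Region("top_ads", 0, 0, 1024, 90),
    Region("right_ads", 824, 90, 1024, 768),
    Region("content", 0, 90, 824, 768),
]
fixations = [(300, 400), (512, 45), (900, 300), (200, 200)]
print(count_fixations(fixations, regions))
# {'top_ads': 1, 'right_ads': 1, 'content': 2, 'outside': 0}
```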

Main Findings:

Users tend to miss information placed in text ads at the top of the page.  When information was placed in either the top or side areas where advertisements usually appear, the user was likely to miss it. Participants’ search strategies differed depending on the search type and on whether the top area of the page was perceived to be advertising or relevant content.

Text ad blindness does occur, can significantly affect search performance on web pages, and is more prevalent on the right side of the page than at the top.

The authors recommend that text advertisements be placed near the content of the webpage, and that designers realize most users ignore the right side and the top of the page, expecting advertisements there rather than relevant information.

Analysis:

Text ad blindness and banner advertisement blindness are among those things you knew happened but never really thought about.  When you go to a webpage, you automatically have an idea of where the useful information is and know not to click anything on the side or the top of the page.  It is interesting and useful to have it confirmed that this is in fact the case: we are ‘blind’ to those spots.

Now, what can a user experience designer do with this information? How much of a concession does a user experience designer have to make for advertisements?  I suppose it depends on how much the client wants to please their advertisers. Then again, I imagine they treat advertisements just like any other part of the site that must be there: they design to provide the best user experience first and then please the advertisers. Hmm, this is more of an ethical question for the client, I guess.  Better user experience or happier advertisers?  As a user, I would be annoyed to see an advertisement where I am used to content and would likely not use that website. But as an advertiser, I want you to see my ad at all costs and don’t care if you had a good time.

As for this blindness, I would like to see more data on what happens if the text advertisements scroll, move, or blink: something to grab your attention.  Do they still get ignored, or do these tricks, which I would call cheap, work? That would be an interesting study to read.

RAA #1: Improved User Experience with a Culturally Adapted Interface

Research Article Analysis

1. Citation

Reinecke, K., & Bernstein, A. (2011). Improving performance, perceived usability, and aesthetics with culturally adaptive user interfaces. ACM Transactions on Computer-Human Interaction, 18(2). doi: 10.1145/1970378.1970382

Active Link:    http://dl.acm.org/citation.cfm?id=1970382&bnc=1

2. Purpose

The authors set out to show that if you adapt a user interface to reflect the cultural background of the user, the user will have a better user experience. They evaluated a system designed to automatically generate a personalized interface that reflected the user’s cultural background.

3. Methods

The authors had previously developed a culturally adaptive system called MOCCA, which helps organize users’ tasks.  Since the tasks were input by the users themselves, the authors could not influence the users with cultural content; they could only organize the information according to the scheme determined to best reflect each user’s cultural background.

The authors first had the participants compare a US-based MOCCA interface against a MOCCA interface generated from the user’s stated cultural background. The participants then had to perform three tasks with MOCCA: creating a new category, creating a new to-do item, and finding an already created to-do item.  After performing the tasks in each version, the participants filled out a subjective questionnaire about the usability and aesthetics of the interface.  They were also asked to rate the versions according to preference.

The participants’ errors, clicks, and time needed for the tasks were recorded, as well as how many times they had to ask for help. The authors then compared the findings between the two versions.
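
As an aside of my own (not the authors’ analysis or data), here is a minimal sketch of how such a paired comparison could look. The task times are invented numbers; the relative-improvement figure at the end is the same kind of statistic as the error reduction reported in the findings.

```python
# Illustrative sketch: paired comparison of task completion times between two
# interface versions, since each participant used both. Numbers are made up.
from statistics import mean

# Hypothetical seconds-per-task values, one pair per participant.
us_version = [48.2, 61.0, 55.4, 70.3, 52.8]
adapted_version = [39.5, 50.1, 47.9, 58.6, 45.0]

# Per-participant time saved by the culturally adapted version.
differences = [us - adapted for us, adapted in zip(us_version, adapted_version)]

print(f"Mean time, US version:       {mean(us_version):.1f} s")
print(f"Mean time, adapted version:  {mean(adapted_version):.1f} s")
print(f"Mean per-participant saving: {mean(differences):.1f} s")
print(f"Relative improvement:        {mean(differences) / mean(us_version):.0%}")
```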

4. Main findings

Interfaces that adapt to cultural preferences can greatly improve the user experience.

For each task, participants were faster and used fewer clicks with the culturally modified version.  They also made 69% fewer errors with the modified version than with the US version.  Two participants had to ask for help when using the US version, and none asked for help while using the culturally modified version.

Neither version received a strong rating for aesthetics, so apparently neither committed a major aesthetic violation. When the participants evaluated the user experience of the two versions, the culturally modified version was again rated higher than the US version.

Overall, the participants answered in their surveys that they preferred the culturally modified version over the US version. The authors did note an interesting phenomenon: participants who said they would be more efficient with the US version actually had worse times when using it, while participants who said they were more efficient with the culturally modified version were in fact faster.

5. Analysis

I found this article very interesting since it reflects another aspect of a user’s relationship with technology.  I had not given much thought to designing an interface for a particular culture, but it makes absolute sense now that I have happened upon these authors. Culture really does have a gigantic effect on many things and should have an effect on technology as well.  Making a user interface one-size-fits-all may be cost-effective for now, but the site would likely be more effective overall if it took cultural influences into account (and this goes beyond aesthetics).

I would like to go back and read more about the authors’ creation of MOCCA.  I find their choice of a task organizer interesting and would like to read a little more about their reasons for choosing it, as well as what systems they felt would not work.

I also want to see more of their examples and sources on how different cultures view user interfaces. This article seems limited, although I can understand why they limited its scope.  I would love to see them expand this testing to different types of systems.  This system was designed to keep track of tasks; what would a culturally sensitive email system look like?  The other day, someone posted a Chinese YouTube link and then changed it; does that site look different from the US version?  If so, why did they make the changes they made?  Is it only a language change?