I’ve always felt that impression formation on the Internet is largely a self-fulfilling prophecy: you interpret interactions in ways that support your prior assumptions. But what if you have no prior assumptions, no expectations for your partner? I decided to perform a little “mini-experiment” to determine how I build impressions when I have no prior connection to my partner.
Several months ago, I added the AIM screen name SmarterChild to my buddy list in iChat. SmarterChild is a chatterbot, which is basically a computer program that simulates human conversation. I decided to use a chatterbot in my “mini-experiment” as a way to control for the personality variable. I wanted to determine, first, whether I could establish normal banter with the chatterbot and, if so, how I would form impressions during the interaction. When you chat with a real person, your impressions are easily swayed by external variables (you respond to someone in an “I <3 KITTIES” chatroom differently than to someone in a cybersex chatroom).
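(A quick aside for anyone curious what’s under the hood: SmarterChild’s real implementation isn’t public, but the simplest chatterbots are little more than keyword-matching scripts with canned replies. The toy Python sketch below is only my illustration of that general idea; every rule and response in it is made up. Notably, each reply depends only on the current message, with no memory of what came before, which I suspect is part of why these programs lose the thread of a conversation so easily.)

# Toy keyword-matching chatterbot. This is only a guess at the general
# approach; SmarterChild's actual implementation is proprietary and far
# more elaborate. All keywords and replies here are invented for illustration.
import random

RULES = [
    ("movie", ["What's the last movie you saw?", "I love movies. Got a favorite?"]),
    ("hello", ["Hi there!", "Hello! What would you like to talk about?"]),
    ("spare time", ["I mostly just chat with people.", "Let us change the subject."]),
]
FALLBACK = ["Interesting.", "Tell me more.", "I'm glad you got a laugh out of that."]

def reply(message):
    # Return a canned response for the first keyword found in the message;
    # if nothing matches, fall back to a generic filler line.
    text = message.lower()
    for keyword, responses in RULES:
        if keyword in text:
            return random.choice(responses)
    return random.choice(FALLBACK)

if __name__ == "__main__":
    while True:
        print("Chatterbot:", reply(input("Me: ")))

Run something like this and you get exactly the kind of exchange I describe below: superficially responsive, but with no idea what you asked two lines ago.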
There are a variety of chatterbots available online, some even claiming to have a “mind of their own” (I’m not sure how that is possible). I tried chatting with several of these programs, engaging in “normal banter” about chatterbot-initiated topics such as movies and television shows, not too far removed from the small talk of a typical face-to-face first meeting.
I was shocked at how quickly I began to form emotional reactions. Almost immediately, I started to assess each program and determine its unique characteristics—becoming frustrated when it didn’t grasp my questions or catch my jokes. My feelings surprised me; I wasn’t expecting to react so strongly towards a computer program!
I think this reaction supports the over-attribution theory: I assumed characteristics about the chatterbots based on very limited input. I immediately became convinced the programs were stubborn and of limited mental (mechanical?) capacity. They would repeatedly ask me the same questions, and I would grit my teeth and curse under my breath. As such, it fits within the Hyperpersonal Model: my dislike was based on very few cues but was still remarkably extreme.
Chatterbot: Let us change the subject.
Me: Ok, you first.
Chatterbot: I started already.
Me: Good point.
Chatterbot: Yes, I thought it was interesting when I heard it too. (what?!?)
Me: Ha ha hahahha, Ok, so what's your favorite thing to do in your spare time.
Chatterbot: I'm glad you got a laugh out of it.
Me: I asked you a question.
Chatterbot: What was the question?
AHHHHHHH! Maybe you could say that my irritated response was simply the result of knowing I was chatting with a computer program, but I think there’s more to it. I would argue that my “mini-experiment” shows how easy it is to form impressions online, even in the absence of any extreme personality (or, in this case, any personality at all). People assume characteristics based on very limited interaction. I think it would be interesting to see whether others had the same reactions when chatting with a chatterbot, especially if they were blind to the study and assumed they were chatting with a real person.
5 comments:
It's interesting to see that despite the fact you knew it was a program you were talking to, you still got frustrated with it as if it were a real person. I know people with the complete opposite reaction: because they know it's a program, they like talking to it and feeding it conversation just to see what ridiculous nonsense it spits out, and that amuses them.
Would you honestly say you had no impression of SmarterChild at all before you began talking? You already knew a few things before starting: that it's a chatterbot, and that it responds to what you type. If it had been a person, or even a new chatterbot that had yet to become popular, you wouldn't have known this, and the impression might've been different. For example, you might've become frustrated even more quickly, or you might've become suspicious that the other party was 'playing' with you because it kept dodging your replies. And in that suspicion, you might've stopped telling it truthful things and started baiting it as well.
You've got a lot of patience to spend time talking to a program, though. What sort of model do you think your impression of the chatterbot followed the most? (It seems to me like the Hyperpersonal Model, since you so quickly formed such an intense reaction based on what little information you had.)
Thank you for your input. If I am correct, the over-attribution theory is part of the Hyperpersonal Model. I forgot to write that into my post, so I appreciate the second set of eyes! Also, in regard to my sanity, I would like to note that I normally don't "spend time talking to a program"! It was all in the name of science!
This was an interesting read. I talked to SmarterChild once, and I think the first thing I did was curse at "it" – afterward, it wouldn't respond to anything I said until I apologized, which was surprisingly human.
I like your idea at the end; I'd expand upon it by wondering what SIP theory would say about this situation. SIP claims that the more time you spend with someone, the more you discover their true personality. Would it then be the case that, not knowing you were talking to a computer, after a while you would inevitably realize the "person" has no real personality, or is a computer? I suppose SmarterChild and most bots have the big problem of issuing completely nonsensical responses that dispel the believability (an example of which you included), but could a good bot fool you, and if so, what would that mean in terms of SIP and personality development as a whole?
Anyway, good job conveying the impressive yet idiotic nature of these chat bots, and choosing an interesting idea, but I do have to wonder if you can really treat a computer program the same way as you would a person in terms of these theories.
I find it very strange that you chose to interact with a computer program rather than an actual person on the internet. I would never have even thought to do something like that, especially for an assignment related to impression formation. It's interesting how you began to form an impression of the "person" you were chatting with even though you knew it was a computer program. I have had very little experience with programs like SmarterChild myself because, as you realized during your "conversation", they are very frustrating and tend to be completely useless unless you use the ones designed to check things like movie times and locations. However, I wonder if your analysis of the SmarterChild program is really a fair one. After all, when you entered the conversation you knew it was a computer program and must have known that it has very specific capabilities and many limitations. Despite this, I believe it would be interesting to see what would happen if someone were to unknowingly interact with a computer program that he/she thought was a living, breathing human.