Friday, December 08, 2017

Interactive species: GOFIs, Things and Beings

At the end of our book "Things That Keep Us Busy--the elements of interaction" (Janlert and Stolterman, 2017, MIT Press) we spend some time speculating about the future of interaction. One of the ideas we present is to consider three forms of interactive 'species'. We claim that even if our ideas are speculations, they are not pure fantasy; in fact, we argue that they are logical consequences of the examination of the nature of interaction that we engage in throughout the book. So, here are a few pages that present some of these ideas (pages 198-202).


"11.1 Things and Beings

Any attempt to imagine what may lie ahead easily becomes science fiction or pure fantasy. Not least when it comes to interactivity—a popular topic of futuristic portrayals in science fiction movies. It is exciting to imagine futuristic scenarios where the methods and patterns of interaction have completely changed due to some unknown technology. It is tempting to imagine future forms of interactivity that relieve us of all the complications and issues we have discussed throughout this book—without considering how realistic they may be. Even though we believe that science fiction can stimulate technological development and invention, and indeed has in many ways influenced our field, we will refrain from wishful futuristic thinking. Instead of imagining radically new forms of interactivity without connection to our present situation, we are rather trying to extend what we already have observed in existing and developing technology.

Let us also make it clear that we do not in any way foresee the demise of “traditional” interfaces and, let us call them, GOFIs, Good Old-Fashioned Interfaced artifacts and systems with dedicated interaction areas occupying a limited part of their surface. GOFIs are likely to continue to be used even as faceless interaction and other nontraditional solutions become common. One reason is that a clearly located user interface generally has lower risk of accidental interaction and may simplify handling of the artifact as a physical object. By keeping the object’s “smart parts” separated from the “dumb parts,” users can have better control and confidence regarding when and which operations are actually performed. Another reason is that a clearly defined and visually recognizable interface enables users to quickly see similarities with other interfaces and draw on earlier experiences.
In the present situation we can discern the emergence of two novel species of interactive artifacts and systems, different from the ordinary GOFIs: one development toward “Things,” another toward “Beings” (we will spell them with capital initials to distinguish them from things and beings in general).

The Things line of development means abandoning the traditional dedicated, surface-bound user interface for artifacts that can be interacted with in a fashion similar to how traditional small- or medium-sized “dumb” interfaceless things, natural or artificial, are handled—that is, by being moved, squeezed, thrown, shaken, folded, twisted, rubbed, bent, and so forth, just like a chair, a pillow, or a piece of paper can be interacted with in many ways without any designated area for interaction. Some of these Things may be in more or less magical and mystical rapport with other Things. Lately, we have seen a lot of effort in HCI research (embodied interaction, tangible interaction, ubiquitous computing, Internet of Things, and more) that can be viewed as work in this direction.

While the role model for Things is ordinary, nondigital things, the point is of course that Things have nonordinary and perhaps extraordinary properties and qualities. Given the present state of technological development, we can for instance reasonably expect to see objects entirely covered by some relatively cheap touch-sensitive display layer on top of some smart shape-changing material, equipped with various kinds of sensors and micromotors, enabling the object to change its shape and physical configuration, color, and pattern in a controlled manner under the impression of external forces and sensations—and still without adding a traditional interface. Furthermore, today wireless access to and delivery of local and remote information are already very much taken for granted. Yet, in the end we do not think there will remain a sharp dividing line between things and Things; as Things become common their once extraordinary and marvelous properties will come to seem more ordinary. What initially will set Things apart are above all their expressive and impressive abilities, particularly their dynamic, live impressions and expressions, and their ability to offer interactions that (to begin with) challenge everyday experience in unexpected and interesting ways (e.g., when you push, instead of yielding, the Thing might move in the opposite direction, contrary to the applied force). But these expressive-impressive abilities do not take the form of a developed symbolic language as we have been accustomed to in the interfaces of GOFIs, and as interactants their agency is weak—it still makes sense to think of them as “things.”

The introduction of Things into everyday life means that we will encounter new forms of interaction, and interactivity will appear in many places where none was expected before. Instead of turning on the room light with a fixed wall switch, in the future you might turn it on by tapping or stroking the wall anywhere in certain ways; you might unlock or lock the door by pressing your palm against it; you might adjust the height of the tabletop by nudging it with three fingers in the desired direction—and so on, and on. A chair might groan when you sit down if you are overweight, whine if you jump up and down on its seat, and lock its wheels (if it has wheels) if you step up on it. What used to be ordinary things may be equipped with interactive and expressive abilities that are related to their physical and tangible qualities, their materials, shapes, and forms. We will see Things that can change their appearance in ways that serve functional purposes as well as deliver expressions even at a quite nuanced and subtle level. We may encounter a chair that expresses sadness through some slightly drooping shape change, perhaps because it is in disrepair or because it commiserates with us. We may be able to interact with Things by expressing emotions through our manner of physically handling them, by our posture or intonation, for instance. There may be nothing very remarkable in any single example, but when Things are everywhere they will transform everyday life.

The other line of development we call Beings. In contrast to Things, Beings have stronger agency and may also have elaborate and sophisticated language-like methods of expressing themselves and be impressed by their users symbolically. Some may even have GOFI-like interaction areas (some robots come with an integrated screen for displaying texts and images, for example). While a Thing is basically dumb and has only a limited and fixed repertoire of behavioral patterns, a Being is smarter and can have a richly varied, adaptable behavior, capable of development. Simple Beings you might shoo at or pat; more advanced Beings you might strike up a conversation with. In any case, to interact with Beings should be more like getting along with your dog or cat than dealing with your furniture.1 But just as with Things and things, there may be no sharp line separating Beings from Things. Rather, we imagine an unbroken chain of entities at different “levels of existence,” stretching from things over Things to Beings (again reminiscent of the old idea of the great chain of being; see our earlier comment at the end of section 7.1). And why not as in earlier times indulge ourselves by chauvinistically putting humans on top of Beings, as a kind of superBeings (but let us stop there and go no further).

The dream of infusing intelligence into the things around us goes far back, but it has always led to mixed feelings. To be surrounded by “beings” that you can relate to in a supposedly more “natural” or “human” way, some see as desirable, others as a nightmare. This is also a favorite theme in many science fiction narratives. The omnipresent Being called Hal in the movie 2001: A Space Odyssey evolves from being the perfect servant to a mortal enemy with dubious moral instincts. The prospect of future artificial “superintelligence” and the potential danger for humankind it poses, as analyzed and discussed by Nick Bostrom (2014), should certainly be taken seriously.

Even though we are not yet living in a world of superintelligent Beings, a number of perhaps minor but still significant steps have already been taken on the road to populate our environments with Beings. There are numerous examples of toys, social robots, and certain everyday objects like cars and homes that already behave like Beings. They respond to commands, they perform actions based on our desires, they engage in conversations, they can proactively suggest your next activity. They may still be somewhat experimental and not very broadly used, but a rapid spread of more advanced Beings in everyday life does not seem unrealistic.

Some Beings will be able to remember us, they will know who we are, understand our needs, understand what we are doing, and they may even persuade or force us to behave and do things we are not eager to do, like diet, study, or drive under the speed limit. They may become our partners, our allies, our superegos, or “parents.” In some cases, we will interact with them through traditional surfaces or gestures, but in many cases language will be the primary mode of interaction, in some cases developed into what could be called conversations. To what extent our average future Being will be an eloquent conversation partner is not clear. We can already converse (although primitively) with our car about where we want to go, how we want to be entertained, and with whom we want to communicate. This interaction may increase our perception of the car as having a character (as we examined earlier). Whether we will experience this Being, the car, as a servant or boss (or something else) is another issue. When and where conversational interaction will be of any use is still an unknown and will probably continue to be difficult to predict, partially because it is to a large extent a consequence of what is culturally and socially accepted behavior. We have seen how the use of mobile phones in public has evolved since its initial days, not because the interaction has changed but as a result of changing social norms.

Things and Beings have in common that interaction has ceased to be a matter of having detailed knowledge about precise operations and their effects (with or without a designated interface) and instead becomes a matter of understanding and interpreting primitive reactions, expressions, and behavior patterns (Things), or objectives, needs, intentions, and plans (Beings)—and of behaving in a corresponding fashion in relation to the artifacts and systems. Living with such Things and Beings, we are undoubtedly getting closer to the animism of Toontown—even though not necessarily to its frenzy."

Tuesday, December 05, 2017

Practical (design) reasoning explained (Martha Nussbaum)

After many years, I am re-reading an essay by Martha Nussbaum. The title is "The Discernment of Perception: An Aristotelian Conception of Private and Public Rationality" (to be found in the book "Love's Knowledge--essays on philosophy and literature", published in 1990). This essay helped me a lot when it was first published and it has influenced my thinking over the years in so many ways. It is therefore great to re-read it carefully now, many years later, and realize that it is even better than I remembered.

Even though the title of this essay may scare some people with its complexity and reference to Aristotle, the essay is in my view one of the best texts ever written about practical reasoning and judgment. It is an essay that resonates perfectly with anyone who is reflecting on design practice and how designers reason, think and make judgments.

Nussbaum discusses why practical reasoning cannot be understood with some simplistic (scientific) form of logic. She builds her argumentation on the writings of Aristotle and his "attack on scientific conceptions of rationality". She summarizes her intention at the beginning of the essay by stating:

"I shall suggest that Aristotle's attack has three distinct claims, closely interwoven. These are: an attack on the claim that all valuable things are commensurable; an argument for the priority of  particular judgments to universals; and a defense of the emotions and the imagination as essential to rational choice."

Nussbaum then goes through these claims and explains how they lead to a definition of practical reasoning that is distinct, understandable and useful. This understanding of practical reasoning fits extraordinarily well with the reality that designers face: dealing with overwhelming but insufficient information, dealing with particulars rather than universals, and having to rely on imagination and accept being influenced by emotions.

Just read it!!

------------------------------- Addition --------------------------

Ok, today I found my notebooks from earlier years, randomly picked one up, and randomly opened it to a page. At the top of the page I had written: "Good idea for an article, based on the notions of private versus public rationality by Martha Nussbaum."

Then a note (translated from Swedish): "No one has pushed this [rationality] far enough, not Churchman, not SSM [Soft Systems Methodology]. Everyone is trying to start with how the world is, while Nussbaum starts with how people are. An article idea: How to manage systems design: the conflict between private and public rationality."

So why did I read Nussbaum yesterday and why did I happen to see that page today? Synchronicity...

Tuesday, November 21, 2017

"How to think" by Alan Jacobs

I just want to recommend Alan Jacobs's new book "How to think -- a survival guide for a world at odds". It is a wonderful, easy-to-read book about an extraordinarily important topic. What resonates with my own thinking is the argument that thinking is work, that it leads to trouble, that it is slow, and that it is far from comforting. Excellent thinking about thinking. Great examples. Useful advice. Read it.

Tuesday, November 14, 2017

The Basic Anatomy of Interaction

What is interaction and how can we describe it? In our recent book "Things That Keep Us Busy--the elements of interaction" we take on this challenge and develop what we call an anatomy of interaction. We also develop a detailed account of when it is reasonable to say that interaction actually takes place. We do this by employing the notion of the "window of interaction" (more on that later).

Below I briefly present some of our work on the anatomy of interaction (from Chapter 4 in the book, as a teaser :-)

The basic elements of the anatomy are artifact and user. Interaction takes place between a human and an artifact/system, as described in the figure below (4.3).


Some of the terms used in the figure need to be explained since they mean very specific things. First of all, an artifact has certain 'states':

internal states, or i-states for short, are the functionally important interior states of the artifact or system.
external states, or e-states for short, are the operationally or functionally relevant, user-observable states of the interface, the exterior of the artifact or system.

And then

world states, or w-states for short, are states in the world outside the artifact or system causally connected with its functioning.
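
As a minimal illustration (my own sketch, not from the book), these three kinds of states could be modeled roughly like this, using a hypothetical thermostat as the artifact:

```python
from dataclasses import dataclass, field

@dataclass
class Thermostat:
    # i-states: the functionally important interior states of the artifact
    i_state: dict = field(default_factory=lambda: {"target_temp": 20.0, "heating_on": False})
    # e-states: the operationally relevant, user-observable states of the interface
    e_state: dict = field(default_factory=lambda: {"display": "20.0", "led": "off"})

@dataclass
class World:
    # w-states: states outside the artifact, causally connected with its functioning
    w_state: dict = field(default_factory=lambda: {"room_temp": 18.5})
```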


To fully describe the anatomy of interaction some more terms are needed (as defined in the glossary in the book):

Action (with respect to an artifact or system): an action that a human interactant can do in its fullness, here defined to include also the intention with the action; only used for human interactants

Cue: the user’s impression of a move of an artifact or system

Move (with respect to an artifact or system): something the artifact or system can do, the counterpart of a human action; only applicable to nonhuman interactants

Operation (with respect to an artifact or system): an artifact’s or system’s impression of an action by a human interactant; something the artifact or system is designed to take as input from a human interactant; only applicable to nonhuman interactants


So, how does it work? Here is an excerpt from the book, page 65.

"Let us first look at the artifact or system end of the interaction. States can change. They can change as a result of an operation triggered by a user action. For digital artifacts and systems i-states as well as e-states are usually affected by an operation. They can also change as a result of the functioning of the artifact or system itself, what we will call a move. For digital artifacts and systems the changes caused by a move will concern first of all i-states, but frequently also e-states, and sometimes w-states.

An operation can be seen as an artifact’s perception of a human action, a projection of an action. Operations can be seen as partially effective implementations of actions. A move can be seen as the artifact counterpart of a human action. To avoid confusion, we choose to call it “move” rather than “action.” Operations and moves are thus artifact centered: they change i-states always, e-states sometimes, and in some cases also w-states (see figure 4.3). .........

Turning now to the human end of the interaction, we have already pointed out that user actions appear to the artifact or system as operations. Similarly, the moves of an artifact or system appear as cues to the user. A cue is the user’s perception of an artifact move: it is what the user perceives or experiences of a move, the impression of a move. Actions and cues are user-centered concepts. Cues come via e-state changes or w-state changes. When using a word processor the cues mainly stem from the changing images and symbols on the display, but in the case of a robot vacuum cleaner, the important cues will come rather from watching its physical movements, hearing the sounds it makes, and seeing dust and dirt disappear from the floor (all a matter of moves that change w-states). .....

To summarize: User actions appear to the artifact as operations and are reciprocated by artifact moves that appear as cues to the user. Operations are projected actions. Cues are projected moves."
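
To make the vocabulary concrete, here is a minimal sketch of one turn of the interaction loop described above. This is my own illustration, not from the book; the thermostat, the button, and all names are hypothetical. A user action is projected onto the artifact as an operation, the operation changes i- and e-states, the artifact then makes a move of its own, and the user perceives that move as a cue.

```python
class Thermostat:
    def __init__(self):
        self.i_state = {"target_temp": 20.0, "heating_on": False}   # interior, functional states
        self.e_state = {"display": "20.0", "led": "off"}            # user-observable interface states

    def operation(self, action: str) -> None:
        """The artifact's impression (projection) of a human action."""
        if action == "press_up_button":            # only the part of the action the artifact registers
            self.i_state["target_temp"] += 0.5     # operations change i-states...
            self.e_state["display"] = f'{self.i_state["target_temp"]:.1f}'  # ...and often e-states

    def move(self, world: dict) -> str:
        """Something the artifact does on its own; the counterpart of a human action."""
        if world["room_temp"] < self.i_state["target_temp"] and not self.i_state["heating_on"]:
            self.i_state["heating_on"] = True      # moves change i-states...
            self.e_state["led"] = "on"             # ...sometimes e-states...
            world["room_temp"] += 0.1              # ...and sometimes w-states
            return "led turns on, faint click, room slowly warms"  # what is there to be perceived
        return "nothing observable"


# One turn of interaction:
world = {"room_temp": 18.5}                 # w-state
artifact = Thermostat()

user_action = "press_up_button"             # action: what the user does, including the intention
artifact.operation(user_action)             # operation: the artifact's projection of that action
cue = artifact.move(world)                  # move: the artifact's own doing
print("cue perceived by the user:", cue)    # cue: the user's impression of that move
```

Running the sketch prints the cue a user would perceive after pressing the (hypothetical) up button: the artifact's move, seen from the user's side.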

Well, that is a lot. If you find this interesting, read Chapter 4 in the book! Have fun.

Friday, November 03, 2017

Some great books on rationality

In my last post, I talked about my interest in the relationship between designing and rationality. Here are some of my major inspirational sources for this project.



New book project "Nature of design rationality"

Since my early days as a Ph.D. student, I have been intrigued by the question of what it means to be rational and to act rationally. This interest manifested itself in my Ph.D. dissertation, whose title, translated to English, was "The Hidden Rationality of Design Work".

Reading about rationality has since then been a lifelong side project, almost like a hobby. I have not written much on the topic, but I have read a lot. Recently I have started a book project around designing and rationality (maybe with a title similar to my dissertation, though with different content).

The main idea of this project is that designing, as a major human approach to change, still struggles with a "hidden rationality". Even though the praise of designing is stronger today than ever before, it is far from clear what the distinguishing features of the approach are compared to other approaches. What is the rationality underlying designing that makes it a unique approach and makes it possible to achieve outcomes that seem difficult to reach with other approaches?

A lot of superficial ideas about designing as an approach are presented today. In many cases, designing is not seen as anything more than some steps or phases and the use of some simple techniques. It is obvious that we would not define science in the same way. So what if we treated designing as an approach that has to be understood and explained at the same depth as we do science? This is what I think is needed, and where I hope my interest in rationality can help. I understand that this is ambitious and maybe overwhelmingly difficult, but it is very exciting, and maybe I will be able to develop the book project so that it at least relates to some of these big issues.

[For a long time I have been inspired by the book "The Nature of Rationality" by Robert Nozick. It is a wonderful book that develops a fundamental understanding of rationality and also opens up a form of rationality that seems to resonate with design.]
