RESEARCH TOPIC

In everyday life, the environment around us is becoming more and more interactive. Interactive displays, for example, are increasingly embodied in the very artifacts of our physical space, such as tables and walls, with different scales and form factors, and support individual as well as social interactions. Additionally, a progressive “hybridization” of our everyday interactions and experiences is occurring, blending digital pictures, music, and documents with their physical counterparts in a variety of contexts. In such interactive environments, the general WIMP interaction paradigm is inadequate to support users’ interactions.

In the desktop environment, the appearance of GUIs for
widgets remains consistent across different types of
applications, relying on office-related metaphors and visual
cues in order to suggest affordances for mouse and
keyboard interaction (e.g., 3D effects for clicking buttons,
white fields for text entry, ripples on the moving part of
scrollbars for dragging). When information is displayed for
a different interaction style, and enters different domains of
mixed reality, new affordances need to be designed to support users’ understanding of the conceptual model of the interaction.
The design of physical metaphors and tangible UIs
addresses this issue by exploiting people’s existing mental
models about how things work in the physical realm, so as
to encourage manipulation in a similar way. But we need to
think thoroughly about how we can use physical
affordances as a design resource while at the same time
exploiting the new possibilities of digital media. This
requires an understanding not only of people’s expectations and mental models of digital versus physical media, but also of the different affordances for interaction in these different situations.
Thus, my work addresses the question: how, and with what benefits (affordances), can aspects of physical interaction be integrated into the design of digital information for hybrid interaction? On another level, I am also seeking to
understand what it is about physicality, in terms of
cognitive as well as multi-sensorial and emotional aspects,
that affects the quality of hybrid experiences.

In order to address these questions, in my work I focus on:
• Identifying the affordances (encompassing the cognitive, functional, sensorial and social affordances considered in [20]) of both physical and digital media in a systematic way;
• Given that affordances are goal and context-
dependent, understanding how the domain, the spatial
context and the social context can affect the perception
of such affordances;
• Understanding how the combination of physical and digital affordances in a specific context constitutes a resource for the design of novel, meaningful experiences that go beyond those possible in the purely physical realm.

RELATED WORK

Recent advances in display technologies bring the vision of ubiquitous computing closer to reality: novel technologies afford both input and output at the same point of interaction, for example [4, 13]; and advanced computer vision techniques, in combination with projection onto surfaces, make it possible to recognize real objects, hand gestures and body movements, e.g., [14, 23, 24], and to represent virtual objects at life size.

The use of metaphors for user interface design has been
largely discussed in the literature, e.g., [2, 6, 12], its most
familiar example being the graphical user interface of the
“desktop metaphor” [16]. In the desktop metaphor, many
elements of the interface are modeled on artifacts (e.g.,
wastebasket, folders, buttons) and behaviors (e.g., direct
manipulation [10]) from the physical world. As computing
moves beyond the desktop and becomes more integrated in
our physical environment, the work on tangible user
interfaces (or TUIs) has provided different ways of
integrating physicality in the interaction with digital media.
Beginning with early work by Fitzmaurice, Ishii, Buxton,
and others [8, 11], there have been many instantiations and
variations of the TUI paradigm, e.g., [9, 23]. Fishkin [7]
provides a useful taxonomy for the analysis of tangible
interfaces based on the dimensions of “metaphor” and
“embodiment”.

In tandem with these technological advances, growing attention to the development of interfaces for wall and tabletop displays has driven a number of new and compelling applications in this area (for a review see [3]). Most of these make heavy use of physical metaphors as the basis for interaction, taking advantage of the increased size of display surfaces.

Given the emerging popularity of interactive surfaces and
the new interaction paradigms they make use of, it is a
good time to examine more deeply what specific aspects of
the physical world and physical interaction are being drawn
upon as a resource in their design. In my work I investigate these aspects and analyze how they impact users’ mental models and experience of interaction.

BACKGROUND

Having an academic background in industrial design, I look
at these issues from a design perspective, and investigate
how to design and recognize affordances for digital
information embedded in a real physical environment and
social context. Users’ ability to move around in an interactive space and to directly manipulate objects and information needs to be supported by interfaces that are properly scaled to users’ body metrics, their locations in the space, the mutual distances among users, and their motor capabilities. Issues such as users’ height, their visual angle, the proximity of displayed objects to the hands, and the proportion between object and hand sizes imply ergonomic considerations that need to be included in the interface design so as to merge the virtual and physical worlds.
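
To make this kind of ergonomic consideration concrete, the sketch below (a hypothetical helper, not part of any FLUIDUM system) estimates how large an on-screen object must be rendered to subtend a comfortable visual angle at a given viewing distance, and includes a crude reachability check based on assumed proportions:

```python
import math

def object_size_for_visual_angle(viewing_distance_m, visual_angle_deg):
    """Physical size (in meters) a displayed object must have to subtend
    the given visual angle at the given viewing distance:
    size = 2 * d * tan(angle / 2)."""
    return 2.0 * viewing_distance_m * math.tan(math.radians(visual_angle_deg) / 2.0)

def within_reach(target_height_m, user_height_m):
    """Crude reachability check: assumes a comfortable reach band between
    roughly 40% and 75% of the user's height (an assumption, not an
    anthropometric standard)."""
    return 0.40 * user_height_m <= target_height_m <= 0.75 * user_height_m

# Example: a button viewed from 0.6 m that should subtend about 2 degrees.
print(round(object_size_for_visual_angle(0.6, 2.0), 4), "m")  # 0.0209 m
print(within_reach(1.0, user_height_m=1.75))                  # True
```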

My investigation develops in the context of the FLUIDUM research project (http://www.fluidum.org) at the University of Munich, Germany. The goal of the project is to develop interaction techniques and metaphors for differently scaled ubiquitous computing scenarios within everyday life environments. In this context I benefit from an infrastructure comprising an interactive room, which is instrumented with large interactive displays, both vertical and horizontal, and several other mobile displays. With the FLUIDUM team I have worked on several interaction techniques and instantiations of design concepts which contribute to the investigation of my research questions.

My work is supervised by Prof. Andreas Butz of the University of Munich (LMU), and by Abigail Sellen and Bill Buxton of Microsoft Research.

APPROACH

The approach I have adopted so far is both explorative and empirical. It builds on three main activities, each using different methods of investigation: i) contextual inquiries about the use of displays, ii) the design of experience prototypes, and iii) empirical assessments.

Contextual Inquiries

In order to frame my design space and gain a preliminary understanding of the roles of traditional displays in everyday life environments, I conducted two contextual inquiries on the use of physical display artifacts (such as post-its, calendars, and mirrors) in the home. Building on this work I constructed a taxonomy [15] of domestic displays and considered how physical displays could be digitally augmented. This was explored in the design of two systems (the LivingCookbook and the Time-Mill Mirror, see the next section) which support different social and physical activities (cooking and browsing through pictures, respectively). Both systems were first evaluated in the lab and will next be evaluated in real domestic environments, so as to assess the user experience beyond usability issues.

Experience Prototypes

In this section I briefly introduce the projects I have been working on in order to unpack my main research question. These designs have acted as research tools for validation as well as for the elicitation of design issues, in line with a design research approach. Through them I explore which specific aspects of the physical world and of physical manipulation can be drawn upon as a resource in the design of novel interaction paradigms:
• A 3D space of manipulation, making possible different kinds of actions and feedback from those actions (e.g., the Learning Cube [17]);
• The use of physical metaphors in the way digital objects are graphically represented, suggesting gestures and actions on those objects consistent with the conceptual model of their physical counterparts (e.g., the Mug Metaphor Interface [18]);
• The use of spatially multiplexed input (such as bimanual, multi-finger input) to interact with virtual objects (e.g., the EnLighTable [20]);
• Continuity of action and richness of the manipulation vocabulary in input, as distinct from the discrete actions or gestures afforded by mouse and keyboard (e.g., Brainstorm, see below);
• Direct spatial mapping between input and output, so that an action produces feedback at the point where the input is sensed (e.g., the Hybrid Tool, see below);
• Rich multimodal feedback, not limited to visual and audio feedback, as is possible in the physical world (e.g., the LivingCookbook [19]);
• Physical constraints, which affect users’ mental model of the possible manipulations with an artifact (e.g., the Learning Cube [17], the Hybrid Tool, the Time-Mill Mirror).

The Learning Cube
The Learning Cube [17] is a tangible learning appliance that aims to provide a playful learning interface for children. Exploiting the physical affordances of the cube, and augmenting it with embedded sensors and an LCD display on each face, we implemented a general learning platform that supports multiple-choice tests: a question and 5 possible answers are displayed on the faces, and an answer is selected by a gesture, i.e., by shaking the cube (see Fig. 1, a). One of the applications is meant for learning spatial geometry, thus creating a semantic link between physical control, digital output and abstract concept, which provides a redundant learning interface.
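
As an illustration only, the following sketch (hypothetical, much simpler than the real sensor-based implementation) models the cube's selection logic: one face poses the question, the other five show candidate answers, and a shake confirms the answer currently facing up:

```python
class LearningCube:
    """Toy model of the Learning Cube's multiple-choice logic: one face
    shows the question, the other five show candidate answers, and a
    shake gesture confirms the answer currently facing up."""

    def __init__(self, question, answers, correct_index):
        self.faces = {0: question}                   # face 0: the question
        for i, answer in enumerate(answers[:5], 1):  # faces 1-5: answers
            self.faces[i] = answer
        self.correct_face = correct_index + 1
        self.face_up = 1                             # face currently on top

    def rotate_to(self, face):
        """Stand-in for the cube's embedded orientation sensors."""
        self.face_up = face

    def shake(self):
        """Stand-in for the accelerometer-detected shake gesture:
        confirms whether the upward-facing answer is correct."""
        return self.face_up == self.correct_face

cube = LearningCube("3 + 4 = ?", ["5", "6", "7", "8", "9"], correct_index=2)
cube.rotate_to(3)    # turn the face showing "7" upward
print(cube.shake())  # True
```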

The Mug Metaphor Interface
The Mug Metaphor Interface [18] was designed to support direct touch interaction on large displays. In this project I investigate the possibility of mapping the affordances of real-world objects to gestures, relying on the manipulation vocabulary and the conceptual model of such physical objects. Containers of information are graphically represented as mugs: these digital mugs and their units of information, the latter represented as a kind of drop, can be manipulated across the display in a way that relates to their physical counterparts. When manipulating a real mug, for example, we know we can move it around by holding its handle, and tilt it to pour out its content (see Fig. 1, b). Empty mugs are expected to be lighter than full ones (i.e., to contain less data); smoking mugs are expected to be hot (i.e., to contain recent data). To accommodate users’ freedom of movement, and to enable two-handed cooperative interaction, pie menus appear at the position of the hands (see Fig. 1, c), thus “following” users as they move across the display, rather than being operable only at a fixed location on the screen.
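
As a rough illustration (not the actual implementation), the sketch below models the pouring behavior: tilting a digital mug past an assumed threshold angle transfers “drops” of information to a target underneath:

```python
class DigitalMug:
    """Toy model of the mug metaphor: a container of information items
    that pours its content when tilted past a threshold angle."""

    POUR_THRESHOLD_DEG = 45  # assumed tilt angle at which pouring starts

    def __init__(self, items):
        self.items = list(items)

    @property
    def is_smoking(self):
        """A smoking mug signals hot, i.e., recent, content (here
        simplified to: any content at all)."""
        return bool(self.items)

    def tilt(self, degrees, target):
        """Tilting past the threshold pours one information 'drop'
        per call into the target container."""
        if degrees >= self.POUR_THRESHOLD_DEG and self.items:
            target.append(self.items.pop())

mug = DigitalMug(["photo1.jpg", "photo2.jpg"])
desk = []
mug.tilt(50, desk)           # past the threshold: one drop is poured out
print(desk, mug.is_smoking)  # ['photo2.jpg'] True
```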

The Living Cookbook
The Living Cookbook [19] is a kitchen appliance, similar to a family-authored digital cookbook. It consists of a camera, a tablet PC with a touch-sensitive display mounted on a kitchen cupboard (see Fig. 1, d), and a projector connected to a server. The tablet PC displays a multimedia digital cookbook and its controls. On the same interface people can either author a new recipe in their personal book, or consult the book and learn someone else’s recipe. In the authoring/teaching mode, the video of the cooking session is captured by the camera; in the learning mode, the video is projected on the wall above the counter and the learner can cook along. To create the link to domestic activities, the metaphor of a traditional cookbook is used. The book metaphorically offers the affordances of paper, where people can both write and read, and flip pages: this comes in handy for displaying both the authoring and the rendering environment with a consistent conceptual model. In the interface, different widgets metaphorically refer to artifacts of a normal kitchen and are semantically related to different functions (see Fig. 1, e). The digital pages can be turned by tapping a folded corner; portions can be set by placing plates on a table; and the video can be controlled with an egg-shaped timer.
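
A schematic sketch of the two modes, with stand-ins for the camera and projector (all names hypothetical), could look like this:

```python
class LivingCookbook:
    """Toy model of the Living Cookbook's two modes: authoring mode
    records the cooking session with the camera; learning mode replays
    the stored video above the counter so the learner can cook along."""

    def __init__(self):
        self.recipes = {}  # recipe name -> recorded video frames

    def author(self, name, camera_frames):
        """Authoring/teaching mode: capture the cooking session."""
        self.recipes[name] = list(camera_frames)

    def learn(self, name, project):
        """Learning mode: send each stored frame to the wall projector."""
        for frame in self.recipes.get(name, []):
            project(frame)

book = LivingCookbook()
book.author("pesto", ["frame_001", "frame_002"])
book.learn("pesto", project=print)  # stand-in projector: prints frames
```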

The EnLighTable
The EnLighTable [20] is an appliance based on a touch-sensitive table-top display for creative teamwork in the selection of pictures and the design of layouts, e.g., in advertising agencies. In this work I explore the affordances of large displays for collaborative creativity. The system enables multiple users to simultaneously manipulate digital pictures of a shared collection, and to rapidly create and edit simple page layouts. By analogy to plates on a set table, the graphic layout suggests personal areas of interaction through the arrangement of three Imagetools in predefined positions (see Fig. 1, f), oriented towards the sides of the table. Imagetools are movable virtual tools for basic editing of digital pictures. In the center, a shared “tray” of information is displayed, which contains the thumbnails of a shared picture collection. Copies of the original slides in the shared collection can be edited with the Imagetool. It adopts the conceptual model of a magic lens [1], which in our case is controlled by two hands directly on the surface of the table. Such a virtual tool provides affordances for direct manipulation relying on the way we manipulate certain physical objects (see Fig. 1, g). The zooming gear on the left side of the tool, for example, can be “scrolled” with a continuous movement of one hand. Discrete interaction, such as tapping, is suggested by the 3D effect of the buttons for mirroring and saving changes, on the right side of the tool. The EnLighTable was evaluated through experience trials with graphic designers.
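
The contrast between continuous and discrete input on the Imagetool can be sketched as follows (hypothetical method names, not the real code):

```python
class Imagetool:
    """Toy model of an Imagetool: a movable, magic-lens-style editor
    with a continuous zoom gear on one side and discrete buttons
    (mirror, save) on the other."""

    def __init__(self, image):
        self.image = image
        self.zoom = 1.0
        self.mirrored = False

    def scroll_zoom_gear(self, delta):
        """Continuous input: each increment of finger movement along
        the gear adjusts the zoom level smoothly."""
        self.zoom = max(0.1, self.zoom + 0.05 * delta)

    def tap_mirror_button(self):
        """Discrete input: a single tap flips the image horizontally."""
        self.mirrored = not self.mirrored

tool = Imagetool("slide_copy_01")
tool.scroll_zoom_gear(4)                   # continuous gesture on the gear
tool.tap_mirror_button()                   # discrete tap on a 3D button
print(round(tool.zoom, 2), tool.mirrored)  # 1.2 True
```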

Brainstorm
Brainstorm is an environmental appliance based on one table-top display and three shared wall displays (see Fig. 1, h). In our environment the central wall display has a higher resolution for focused interaction, while the two peripheral ones allow coarser interaction, supporting context awareness and spatial organization. In this set-up we developed a brainstorming application which metaphorically builds on the “idea card” method, i.e., the use of Post-its for brainwriting and for clustering ideas later on, as participants stick and group them on a flipchart. The design of such a socio-technical environment aims at supporting co-located collaborative problem solving: the goal is to maintain the immediacy of face-to-face, paper-based collaboration, which is fundamental for creative processes, while exploiting the benefits of tracking and storage afforded by embedded technology. Users can simultaneously start generating ideas in virtual Post-its on the table. Virtual Post-its can be edited, moved, deleted and copied by any participant at any time. As the participants create Post-its in their working areas, the Post-its appear simultaneously on the vertical display located next to the table: here they are automatically reoriented upright, i.e., made readable for all viewers, but they maintain a spatial mapping to the territorial set-up on the table display, thus affording reciprocal activity awareness. When users move from the table (generative phase, divergent thinking) to the wall display (structural phase, convergent thinking), they can spatially organize their ideas by rearranging the virtual Post-its on the wall. In addition they can create clusters, which can be connected to each other or to single Post-its. Whole clusters can be moved across the display, thus moving all the Post-its they contain. This clearly extends the functionality of a physical whiteboard or flipchart, while maintaining its direct manipulation characteristics and facilitating the creation of a structured knowledge representation. Brainstorm was evaluated in comparison to paper-based brainstorming sessions.
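
The reorientation-plus-spatial-mapping behavior of the wall display can be sketched as follows (a simplification with made-up coordinate conventions):

```python
def mirror_to_wall(table_notes, table_size, wall_size):
    """Toy version of Brainstorm's table-to-wall mirroring: each virtual
    Post-it keeps its relative position (preserving the territorial
    layout on the table) but is forced upright on the wall.

    table_notes: dicts with 'text', 'x', 'y', 'angle'; angles vary on
    the table because notes face their authors' seats."""
    tw, th = table_size
    ww, wh = wall_size
    return [{
        "text": note["text"],
        "x": note["x"] / tw * ww,  # preserve relative horizontal position
        "y": note["y"] / th * wh,  # preserve relative vertical position
        "angle": 0,                # reorient upright for all wall viewers
    } for note in table_notes]

notes = [{"text": "idea A", "x": 200, "y": 300, "angle": 180}]
print(mirror_to_wall(notes, table_size=(1024, 768), wall_size=(2048, 1536)))
# [{'text': 'idea A', 'x': 400.0, 'y': 600.0, 'angle': 0}]
```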

Hybrid Tools
Hybrid tools, or simply hybrids, are handles for
manipulation of digital information on interactive displays.
Hybrids consist of a physical and a virtual component
which are tightly coupled, spatially and semantically (i.e.,
there is a direct spatial mapping between the physical and
the digital element, and manipulation and effect behave in
an isomorphic way). Some fundamental aspects afforded
by physical handles are the shift from an absolute to a
relative referential space, as well as haptic feedback and the
possibility of a richer manipulation vocabulary.
Furthermore, multiple physical handles create multiple
access points and reference frames, thus supporting multi-
user interaction. The virtual component of the hybrid
appears and becomes coupled to the physical handle the
moment the tool is placed on the surface, possibly
overlaying the information on the table-top and delivering
an alternative, user-dependent interactive visualization of
the information displayed on the surface. Different
instantiations of such a concept are being developed in our
group. Currently I am working on a hybrid tool that aims to support collaborative picture browsing on a table-top display (see Fig. 1, i).
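
The coupling of the physical handle and its virtual component can be sketched as an event-driven pairing (the tracker events below are hypothetical):

```python
class HybridTool:
    """Toy model of a hybrid tool: the virtual component exists and
    follows the physical handle only while the handle rests on the
    surface, with an isomorphic mapping between motion and effect."""

    def __init__(self, handle_id):
        self.handle_id = handle_id
        self.virtual = None  # virtual component; None while not placed

    def on_placed(self, x, y):
        """Tracker reports the handle touching the surface: a virtual
        visualization is coupled at exactly that point (direct spatial
        mapping between input and output)."""
        self.virtual = {"x": x, "y": y, "visible": True}

    def on_moved(self, x, y):
        """Moving the handle moves the virtual component isomorphically."""
        if self.virtual:
            self.virtual.update(x=x, y=y)

    def on_lifted(self):
        """Lifting the handle decouples and hides the virtual component."""
        self.virtual = None

tool = HybridTool("puck-1")
tool.on_placed(120, 80)
tool.on_moved(140, 80)
print(tool.virtual)  # {'x': 140, 'y': 80, 'visible': True}
```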

The Time-Mill Mirror
Time-Mill [22] is an interactive multimodal mirror which I designed during an internship at the Microsoft Research Lab in Cambridge, UK, within the Socio-Digital Systems group. The motivation for the design and development of this artifact is to explore the potential of multimodal mixed experiences, which blend physical and digital, past and present, to evoke domestic memories in an unexpected fashion, and to stimulate people’s reflection about time, space and its inhabitants. Like a traditional mirror, Time-Mill dynamically reflects in real time the scenes taking place in front of the situated display; unlike a traditional mirror, however, it can capture and retrieve snippets of those scenes when people engage in the interaction, thus augmenting the present with traces of the past. The artifact consists of a physical wheel coupled with a mirror: a Tablet PC is mounted behind the see-through mirroring glass, and a wide-angle digital camera is embedded in the mirror frame. When users rotate the wheel (see Fig. 1, j), a melody is played, similar to the interaction with a music box, and an animation is displayed on the LCD: it shows flying leaves, which metaphorically evoke the flow of time and our ability to capture and remember only some impressions. Within the leaves, pictures of the people who have engaged with the display are shown, since the digital camera takes pictures as they rotate the wheel in front of the mirror. These pictures are randomly selected from bundles of pictures created over time, which are retrieved in reverse chronological order.
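
The capture-and-retrieval loop can be sketched roughly as follows (the bundle structure and event names are assumptions, not the actual implementation):

```python
import random

class TimeMill:
    """Toy model of Time-Mill's logic: rotating the wheel both captures
    new pictures of the current viewer and retrieves random pictures
    from past bundles, walked from the most recent backwards."""

    def __init__(self):
        self.bundles = []  # one bundle of captured pictures per session

    def on_wheel_rotated(self, camera_shot):
        """Each rotation stores a new capture in the current bundle and
        returns one random past picture to blend into the leaf animation."""
        if not self.bundles or self.bundles[-1]["closed"]:
            self.bundles.append({"pictures": [], "closed": False})
        self.bundles[-1]["pictures"].append(camera_shot)
        for bundle in reversed(self.bundles[:-1]):  # newest past bundle first
            if bundle["pictures"]:
                return random.choice(bundle["pictures"])
        return None

mill = TimeMill()
mill.on_wheel_rotated("alice_today.jpg")
mill.bundles[-1]["closed"] = True            # the session ends
print(mill.on_wheel_rotated("bob_now.jpg"))  # alice_today.jpg
```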

Empirical Assessments

The designs presented are meant to enrich the understanding of users’ expectations of hybrid interaction. To confirm the relevance of the identified aspects, and their design implications, I am complementing this work with two empirical evaluations in controlled experimental settings. One has been completed [21]: it explores 3D vs. 2D manipulation by comparing a sorting task and a puzzle task performed with physical vs. digital media on a table-top.

NEXT STEPS AND DOCTORAL PARTICIPATION

I plan to conduct another comparative study exploring the differences in manipulation and in users’ experience of a hybrid tool and its digital counterpart, represented as a GUI. In addition, I am trying to learn from the experience of using the LivingCookbook and Time-Mill in real domestic set-ups. In the first case, the design of a “mobile version” of the appliance has become necessary for testing in the field. Based on the lessons learned from these design projects and activities, I am going to wrap up my contribution, which I hope to present and discuss during the doctoral colloquium. If accepted, I would be keen to receive feedback on my work from a multidisciplinary HCI audience, and possibly to get comments on the approach I have adopted so far. Furthermore, I would welcome suggestions on how to structure my work in a way that clearly highlights its contribution to the scientific community, while recognizing its design research nature.

REFERENCES

1. Bier, E. A., Stone, M., Pier, K., Buxton, W., DeRose,
T. Toolglass and Magic Lenses: the See-through
Interface. In Proc. of SIGGRAPH 1993, 73-80.
2. Carroll, J.M., Thomas, J.C. Metaphors and The
Cognitive Representation of Computing Systems. IEEE
Transactions on Systems, Man and Cybernetics, 12 (2),
1982, 107-116.
3. Czerwinski, M., Robertson, G.G., Meyers, B., Smith,
G., Robbins, D., Tan, D. Large Display Research
Overview. In Proc. of CHI 2006, 69-74.
4. Dietz, P., Leigh, D. DiamondTouch: A Multi-User
Touch Technology. In Proc. of UIST 2001, 219-226.
5. Dragicevic, P. Combining Crossing-Based and Paper-
based Interaction Paradigms for Dragging and Dropping
Between Overlapping Windows. In Proc. of UIST 2004,
193-196.
6. Erickson, T. Working with Interface Metaphors. In The
Art of Human-Computer Interface Design, Ed. by B.
Laurel, Addison-Wesley, 1990.
7. Fishkin, K. P. A Taxonomy for and Analysis of
Tangible Interfaces. Journal of Personal and
Ubiquitous Computing, 8 (5), September 2004.
8. Fitzmaurice, G. W., Ishii, H., and Buxton, W. A.
Bricks: Laying the Foundations for Graspable User
Interfaces. In Proc. of CHI 1995, 442-449.
9. Hinckley, K., Pausch, R., Goble, J. C., and Kassell, N.
F. Passive Real-World Interface Props for Neurosurgical Visualization. In Proc. of CHI 1994, 452-458.

10. Hutchins, E., Hollan, J., Norman, D. Direct
Manipulation Interfaces. In D. A. Norman & S. W.
Draper (Eds.) User Centered System Design: New
Perspectives in Human-Computer Interaction, 1986.
11. Ishii, H. and Ullmer, B. Tangible Bits: Towards
Seamless Interfaces Between People, Bits and Atoms.
In Proc. of CHI 1997, 234-241.
12. Laurel, B. Interface as Mimesis. In DA Norman & SW
Draper (eds.), User Centered Systems Design, Hillsdale,
NJ: Lawrence Earlbaum Assoc, 1986, 67-85.
13. Rekimoto, J. SmartSkin: An Infrastructure for Freehand
Manipulation on Interactive Surfaces. In Proc. of CHI 2002, 113-120.
14. Ringel, M., Berg, H., Jin, Y., Winograd, T. Barehands:
Implement-Free Interaction with a Wall-Mounted
Display. In Extended Abstracts of CHI 2001, 367-368.
15. Schmidt, A., Terrenghi, L. Methods and Guidelines for
the Design and Development of Domestic Ubiquitous
Computing Applications. To appear in Proc. of
PerCom 2007.
16. Smith, D., Irby, C., Kimball, R., Verplank, B., Harslem, E. Designing the Star User Interface. Byte, 7 (4), 1982.
17. Terrenghi, L., Kranz, M., Holleis, P., Schmidt, A. A
Cube to Learn: a Tangible User Interface for the Design
of a Learning Appliance. In Personal and Ubiquitous
Computing, Springer Journal, 2005.
18. Terrenghi, L. Design of Affordances for Direct
Manipulation of Digital Information. In Proc. of Smart
Graphics Symposium 2005, 198-205.
19. Terrenghi, L., Hilliges, O., Butz, A. Kitchen Stories:
Sharing Recipes with the Living Cookbook. In Personal
and Ubiquitous Computing, Springer Journal, 2006.
20. Terrenghi, L., Fritsche, T., Butz, A.: The EnLighTable:
Design of Affordances to Support Collaborative
Creativity. In Proc. of Smart Graphics Symposium
2006.
21. Terrenghi, L., Kirk, D., Sellen, A., Izadi, S.
Affordances for Manipulation of Physical versus Digital
Media on Interactive Surfaces. To appear in the Proc. of
CHI 2007.
22. Terrenghi, L., Patel, D., et al. Time-Mill: An Interactive
Mirror for Evoking Reflective Experience in the Home.
Interactivity submission to CHI 2007.
23. Underkoffler, J., and Ishii, H. Urp: A Luminous-
Tangible Workbench for Urban Planning and Design.
In Proc. of CHI 1999, 386-393.
24. Wilson, A. PlayAnywhere: a Compact Interactive
Tabletop Projection-vision System. In Proc. UIST 2005,
83-92.

