These are my notes in essay form from the Intro to Human Computer Interaction course taught by Scott Klemmer at the University of California, San Diego. All credit for content goes to Scott, any errors are my own.
This final set of notes compiles content that is not directly related to the process of creating an interface, but that covers helpful considerations and bonus material Klemmer taught in class.
The most familiar and ubiquitous input device for our computers is the keyboard. Essentially, keyboards are nothing but a matrix of switches that send signals to the computer. One part of creating interfaces and understanding human-computer interaction is thinking about the implications of input devices like the keyboard.
When looking at the history of computation, there is no doubt that input devices have come a long way. When computers took up entire rooms, the punch card was a common method for feeding human input into a computer.
As computers evolved, the keyboard emerged as the most common input method. Essentially, keyboards are nothing more than a collection of switches. To simplify wiring, each key is placed on a grid and identified by a specific row and column. When a key is pressed, the keyboard sends the corresponding scan code to the computer.
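The row-and-column scanning idea can be sketched in a few lines. This is a simplified illustration, not real firmware: the `read_column_states` callback and the scan-code table are hypothetical stand-ins for the electrical row-drive/column-read step and the keyboard's actual code assignments.

```python
# Hypothetical sketch of how a keyboard controller scans its key matrix.
ROWS, COLS = 4, 4

# Illustrative scan-code table: (row, col) -> scan code.
SCAN_CODES = {(r, c): r * COLS + c for r in range(ROWS) for c in range(COLS)}

def scan_matrix(read_column_states):
    """Return scan codes for every key currently pressed.

    `read_column_states(row)` stands in for driving one row line and
    reading back the column lines; it returns a list of COLS booleans.
    """
    pressed = []
    for row in range(ROWS):
        for col, closed in enumerate(read_column_states(row)):
            if closed:
                pressed.append(SCAN_CODES[(row, col)])
    return pressed

# Example: pretend the key at row 1, column 2 is held down.
fake_read = lambda row: [(row, c) == (1, 2) for c in range(COLS)]
print(scan_matrix(fake_read))  # -> [6]
```

With a 4x4 grid, 16 keys need only 8 wires (4 rows + 4 columns) instead of one wire per key, which is the point of the matrix layout.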
In the history of computers, there has always been an asymmetry of the level of output to the level of input. In other words, the level of detail that computers are able to output is greater than the level of detail that we are able to input.
One big idea that has begun to shape computation is to put input directly on top of output. For instance, touchscreens allow users to directly manipulate what is on the screen, dramatically reducing the amount of input required.
Another example of direct manipulation is the mouse. Traditionally, the mouse used a ball that turned two rods in order to map the movement of the mouse to the movement of the cursor on the screen. Over the last decade, however, mice have shifted to optical and even laser sensors for greater precision and reliability.
When designing interfaces to be used with a mouse, we want to be aware of how fast users are able to move the mouse and acquire a target. Fitts's law states that the time it takes to acquire a target grows with the logarithm of the distance to the target divided by the target's width. Thus, targets that are close and large are faster to acquire than targets that are farther away and smaller.
One interface design aimed at decreasing the difficulty of acquiring targets is the radial menu. By making targets effectively infinitely deep in certain directions, users can simply move their mouse toward the desired menu item, making selection fast and accurate.
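The selection logic behind a radial menu can be sketched as pure geometry: only the drag direction matters, not the distance, so each item's target is a wedge that extends outward without bound. The function below is a hypothetical illustration; the item names and the convention of centering the first wedge on "up" are assumptions.

```python
import math

def radial_menu_pick(dx, dy, items):
    """Pick the menu item whose wedge contains the drag direction.

    The circle is divided into len(items) equal wedges, with the first
    item's wedge centered on "up" (negative y, as in screen coordinates),
    proceeding clockwise. Distance is ignored entirely, which is what
    makes each target effectively infinitely large in its direction.
    """
    sector = 2 * math.pi / len(items)
    # With screen coordinates, atan2(dx, -dy) gives angle 0 pointing up,
    # increasing clockwise.
    angle = math.atan2(dx, -dy) % (2 * math.pi)
    index = int(((angle + sector / 2) % (2 * math.pi)) // sector)
    return items[index]

items = ["copy", "paste", "cut", "delete"]
print(radial_menu_pick(0, -10, items))  # drag up    -> "copy"
print(radial_menu_pick(10, 0, items))   # drag right -> "paste"
```

In Fitts's-law terms, ignoring distance drives the effective target width up and the required travel down, which is why marking menus of this style can be operated from muscle memory.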
Just like testing an interface, the efficiency of a design for a particular input type can be measured in many different ways. Depending on your goals, you may choose to optimize time on task, recall, accuracy, and so on. Ultimately, input devices are more than just peripherals: they enable classes of dialogue that fundamentally shape human-computer interaction.
Many of the applications that are successful today are ones that have a social component to them. When these applications work, they usually have a large number of users that make the application extraordinarily valuable.
One strategy for designing good online applications is to begin by studying how people interact offline. This means asking questions like "What makes a great physical community?"
William Whyte, a famous urban sociologist, created a documentary called "The Social Life of Small Urban Spaces" that explores many of these questions. Whyte studied where people sit, how they interact with public spaces, and more.
In applying these concepts from physical spaces into digital spaces, there are two main attributes to consider: location and time. These two factors can be used to categorize both physical and digital interactions. For example, when you are speaking to someone else in person, you are both present at the same place and time.
However, different tools allow us to manipulate these factors, creating different types of experiences. The telephone allows people to interact in different locations at the same time, while leaving post-it notes on someone’s desk allows people to interact in the same location at different times. More often, however, communication tools like email allow us to interact with others in both different places and different times.
The caveat is that as interactions spread across locations and times, people become less engaged. Even a video call, where participants interact from different locations at the same time, is not as engaging as an in-person interaction.
Consider driving as an example. Studies show that when teenagers drive with a passenger, their risk of an accident is lower. This can be attributed to the passenger serving as another set of eyes and helping the driver concentrate. However, when the passenger is present only virtually, over the phone, the risk of an accident increases significantly, because a remote passenger cannot see what is going on around the car.