User studies & prototype refinements

In this week’s assignment we had to conduct user tests with our paper prototypes. Since our project targets a very specific user group that is hard to recruit, we conducted our tests with three friends of ours: Robert, Max and Matthias. In this blog post we want to present our observations from the user tests and show the resulting changes we made to our paper prototype based on them.

Observations

User 1 – Robert

The first user adapted very quickly to using voice commands via the keyword "CRIME", but completely ignored the gesture-based navigation. This confirmed our hypothesis that the prototype lacks a hint about the possibility of navigating with hand gestures. The user also criticized the lack of a back button and/or a voice command for returning to the previous view: he once navigated to a view he did not intend to visit and had no way of going back. Furthermore, there was no obvious way to select multiple entities with a voice command, which is needed to establish relationships between entities.

User 2 – Max

After opening the case as described in the scenario, the user was presented with an empty page. The folder icon next to the page title suggested hidden options, so the user started out with hand gestures, trying to click the folder icon. He employed the gesture control very intuitively and therefore ignored the voice commands at first. When he finally tried the voice control by saying "CRIME", he went straight to the entity map. Since there was no option for navigating back, he was stuck and could not finish the task of uploading pictures to the case. When asked to create a new evidence entity from one of the uploaded pictures, he was irritated by the concept of creating an entity out of a picture; in his opinion, the pictures should rather be tagged, e.g. with "is evidence" or "is suspect". Overall, user 2 favored the gesture control over the voice commands and suggested adding a "grabbing" gesture that could be used to pan (e.g. in the entity map or the timeline).

User 3 – Matthias

When uploading the pictures from the camera with the "upload pictures from <camera>" voice command, the user was irritated that the pictures were uploaded automatically; he thought he first had to select the photos he intended to upload. After creating the entities from the pictures, he wanted to create a connection between two entities, but unfortunately the "create connection" voice command was missing from the context menu. User 3 also criticized the lack of a back button. On the entity map, he assumed he could access more information about an entity by selecting it; we had left this feature out because otherwise the prototype would have become too complex. Finally, we observed that navigating through the prototype with voice commands took disproportionately longer than with hand gestures, but we attribute this to the user's lack of experience.

Changes to the prototype

Image

Picture 1 – We addressed the problem that multiple photos could not be selected with voice commands by introducing a new voice command for selecting several photos at once. We also added an option for switching into a selection mode where the user can select pictures with a hand gesture.

Image

Picture 2 – When the user holds up his hand and the cursor pops up, a back button fades in, which enables the user to navigate to the previous view.

Image

Picture 3 – A context menu was introduced in the entity map; it pops up when the user selects entities. This addresses the problem that users expected more information to be shown once an entity is selected. In the future we could add an option for displaying all the information about the selected entity.

Image

Picture 4 – To address the problem that some testers had a hard time figuring out that they could actually use their hand as an input device, we added a hint: when the user does not interact with the prototype for a couple of seconds, the message "Please say 'CRIME' or use your hand as a cursor" pops up.
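Behind the scenes, this hint is essentially an idle timeout. A minimal sketch of the idea (the 5-second threshold and all names are our assumptions for illustration, not part of the prototype):

```python
import time

HINT_DELAY = 5.0  # assumed: seconds of inactivity before the hint appears
HINT_TEXT = 'Please say "CRIME" or use your hand as a cursor'


class HintController:
    def __init__(self, now=time.monotonic):
        self._now = now
        self._last_input = now()

    def on_user_input(self):
        """Called on any voice command or hand movement."""
        self._last_input = self._now()

    def hint(self):
        """Return the hint text once the user has been idle long enough."""
        if self._now() - self._last_input >= HINT_DELAY:
            return HINT_TEXT
        return None
```

Any input event resets the timer, so the hint only ever appears to users who are actually stuck.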

Image

Picture 5 – When the user is not navigating with hand gestures, he is reminded that he can say "CRIME back" to return to the previous view. This change also addresses the back-navigation problem.
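Conceptually, the back button and the "CRIME back" command both operate on a history of visited views. A minimal sketch of that idea (class, method, and view names are invented for illustration):

```python
class Navigator:
    def __init__(self, start_view):
        self.current = start_view
        self._history = []  # previously visited views, most recent last

    def go_to(self, view):
        """Navigate to a new view, remembering where we came from."""
        self._history.append(self.current)
        self.current = view

    def back(self):
        """Invoked by the back button or the "CRIME back" voice command."""
        if self._history:
            self.current = self._history.pop()
        return self.current
```

With this in place, a tester who accidentally jumps to the entity map is no longer stuck, which was the single most common complaint in our tests.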

Image

Picture 6 – One of our testers was confused that the pictures were already uploaded after using the voice command "Upload pictures from <camera>". We therefore added a message that is displayed when the upload is done.

Image

Picture 7 – The first version of the prototype did not have a command for creating a connection between entities, so we added a menu option for that.

Image

Picture 8 – We also added the same menu to the entity view, since it is just another way of displaying the entities stored in the case. Now the user does not have to switch between views to perform certain tasks.

Image

Picture 9 – Related to the problem that users could not navigate back, we also added a "close menu" command to the context menu, so that users can close it when they decide to do something else or opened it accidentally.

Task breakdown

  • Felix
    • User studies
    • Evaluation of the user studies
  • Marcel
    • User studies
    • Evaluation of the user studies
  • Lukas
    • Changing the prototype according to the user study evaluation
    • Writing the blog post
  • David
    • Changing the prototype according to the user study evaluation
    • Writing the blog post

Paper prototype

In this week’s task we were asked to create a low-fidelity paper prototype, which can then be used to perform usability evaluations. In these evaluations our team acts as the computer and performs all actions the user wants to take. The feedback from the test users can then be used to reconsider the design of our software and make changes where necessary. Since we are still at an early stage of development, it is very easy to make major changes to the software design now, which we won't be able to incorporate later on. The following photos show the prototype as a whole.

Image

A comprehensive view of the paper prototype.

Image

The case management parts of the paper prototype.

Image

The entity management parts of the paper prototype.

Image

The timeline and the entity map.

Image

The voice control parts of the paper prototype.

Three scenarios

Here we want to walk through three common scenarios of the software using our paper prototype, to show how easy it is to perform user tests with it.

Scenario 1

The first scenario shows how the user can enter data into the system using the prototype. The scenario starts out with the system waiting for input. The user then proceeds to upload pictures from the camera and creates a new evidence entity using one of the photos.

Image

Step 1 – The start screen of the application.

Image

Step 2 – The application prompts "Please say 'CRIME'" to show the user that it is awaiting input.

Image

Step 3 – When the user says "CRIME", all options possible in this context are displayed.

Image

Step 4 – The user chooses to open case #1.

Image

Step 5 – The user says "CRIME" and the application displays all contextual options.

Image

Step 6 – The user chooses to upload pictures from a camera. The application proceeds to upload the pictures.

Image

Step 7 – The application displays the uploaded pictures.

Image

Step 8 – The user says "CRIME" and the application displays the contextual options.

Image

Step 9 – The user selects picture #2 and the application displays a context menu with the available options.

Image

Step 10 – The user moves the cursor over the "Create evidence" option with a hand gesture. By holding his hand in position, the option is invoked.

Image

Step 11 – The application opens a prompt where the user enters a new name for the evidence by speaking it.
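The voice interaction in this scenario follows one pattern throughout: the keyword "CRIME" opens the set of commands valid in the current view, and only those commands are then accepted. A minimal sketch of that pattern (the view names, command strings, and API below are our assumptions for illustration, not the prototype's actual design):

```python
# Assumed per-view command vocabulary; the real prototype's
# contexts and commands may differ.
CONTEXT_COMMANDS = {
    "case list": ["open case", "create case"],
    "case view": ["upload pictures from camera", "show entity map",
                  "show events on timeline"],
}


class VoiceControl:
    def __init__(self, view):
        self.view = view
        self.listening = False  # True after the "CRIME" keyword

    def hear(self, utterance):
        """Return the contextual options, an accepted command, or None."""
        if utterance == "CRIME":
            self.listening = True
            return CONTEXT_COMMANDS[self.view]
        if self.listening and utterance in CONTEXT_COMMANDS[self.view]:
            self.listening = False
            return utterance
        return None
```

Restricting recognition to the current view's vocabulary is also what makes it feasible to display all available options after the keyword, as in steps 3, 5 and 8.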

Scenario 2

The second scenario depicts how connections between entities can be created. The user selects two entities and then proceeds to create and name a new relationship between them.

Image

Step 1 – The user holds his hand up and the cursor appears on the screen.

Image

Step 2 – The user holds the position of his hand to select an item.

Image

Step 3 – The circle around the cursor fills up to signal that it is selecting the item.

Image

Step 4 – Item #2 is selected.

Image

Step 5 – The user proceeds to select item #3.

Image

Step 6 – The circle around the cursor fills up to signal that the item is being selected.

Image

Step 7 – Item #3 is selected.

Image

Step 8 – The user speaks the voice command for creating a new relationship between the two selected items and names the relationship by speaking the new name.
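The selection mechanic in steps 2–7 is a dwell gesture: the circle around the cursor fills while the hand rests on an item, and the item is selected once the circle is full. A rough sketch (the one-second dwell time and the API are our assumptions, not fixed by the prototype):

```python
DWELL_TIME = 1.0  # assumed: seconds the hand must stay over an item


class DwellSelector:
    def __init__(self):
        self._item = None
        self._held = 0.0

    def update(self, item, dt):
        """Advance by dt seconds with the cursor over `item` (or None).

        Returns the selected item once the circle is full, else None.
        """
        if item != self._item:  # cursor moved to another item: restart
            self._item, self._held = item, 0.0
        elif item is not None:
            self._held += dt
        return self._item if self._held >= DWELL_TIME else None

    def progress(self):
        """Fill fraction of the selection circle, 0.0 to 1.0."""
        return min(self._held / DWELL_TIME, 1.0)
```

The `progress()` value drives the filling circle, giving the user continuous feedback that holding still will trigger the selection.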

Scenario 3

The third scenario shows how the data stored in the database can be viewed and thus used to derive new clues that might be vital to the investigation. The user displays the entity map and then shows the entities in the timeline.

Image

Step 1 – The user says "CRIME" and the application displays the contextual options.

Image

Step 2 – The user says "Show entity map" to navigate to the entity map, which shows all entities belonging to the case and their relationships to each other.

Image

Step 3 – The user holds up his hand and the cursor appears on the screen.

Image

Step 4 – The user moves his hand to pan the view of the entity map.

Image

Step 5 – The user proceeds to pan the entity map view.

Image

Step 6 – The user says "CRIME" and the application displays the contextual options.

Image

Step 7 – The user says "Show events on timeline" and the application navigates to the timeline.

Image

Step 8 – The user holds up his hand and the cursor appears on the screen.

Image

Step 9 – The user moves his hand to pan the view of the timeline.
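The panning in steps 4–5 and 9 maps hand movement to a view offset. Combined with the "grabbing" gesture suggested by user 2, it could look roughly like this (the coordinate handling and names are assumptions for illustration):

```python
class PanController:
    def __init__(self):
        self.offset = (0.0, 0.0)  # current view offset
        self._last = None         # last hand position while grabbing

    def on_hand(self, x, y, grabbing):
        """Feed one tracked hand position; pan only while grabbing."""
        if grabbing and self._last is not None:
            dx, dy = x - self._last[0], y - self._last[1]
            self.offset = (self.offset[0] + dx, self.offset[1] + dy)
        self._last = (x, y) if grabbing else None
        return self.offset
```

Gating the pan on the grab keeps ordinary cursor movement from dragging the view around, which would otherwise conflict with the dwell selection.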

Task breakdown

  • Felix
    • Paper prototype creation
    • Photos of the paper prototype
  • Lukas
    • Paper prototype creation
    • Photos of the scenarios
  • Marcel
    • Paper prototype creation
    • Photos of the paper prototype
  • David
    • Photos of the scenarios
    • Writing and publishing the blog post