We created two storyboards that show how people could interact with our software, each depicting a different scenario. The first shows how crime scene investigators could collect evidence at a crime scene and populate the database with information and photos of that evidence. The second shows how investigators could use the system to evaluate the information they gathered and how it might help them solve their case.
Storyboard 1 – Collecting evidence
This storyboard is set in a scenario where the investigators first arrive at a crime scene: a person has been murdered, and the investigators are collecting the evidence. Later, they document the evidence by entering it into the database of our software. Once all evidence has been entered into the system, connections between the individual pieces are created. Creating these connections is a key feature of the system: it helps the investigators place pieces of information in a larger context by adding semantic information about how they relate to one another. There are different types of information (or entities, as they are called in the software), such as locations, evidence, and persons.
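The entity model described above could be sketched roughly as follows. This is a hypothetical illustration, not the software's actual data model; the class names, the entity kinds ("evidence", "location", "person"), and the relationship labels are all assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Entity:
    """A piece of information in the case database."""
    name: str
    kind: str  # e.g. "person", "evidence", "location" (illustrative kinds)

@dataclass(frozen=True)
class Relationship:
    """A semantic connection between two entities."""
    source: Entity
    target: Entity
    label: str  # the semantic meaning, e.g. "found at"

class CaseDatabase:
    """Minimal store of entities and the connections between them."""

    def __init__(self):
        self.entities = []
        self.relationships = []

    def add_entity(self, entity: Entity) -> Entity:
        self.entities.append(entity)
        return entity

    def connect(self, source: Entity, target: Entity, label: str) -> Relationship:
        rel = Relationship(source, target, label)
        self.relationships.append(rel)
        return rel

    def related_to(self, entity: Entity):
        """All entities connected to the given one, with the relation label."""
        out = []
        for rel in self.relationships:
            if rel.source == entity:
                out.append((rel.label, rel.target))
            elif rel.target == entity:
                out.append((rel.label, rel.source))
        return out
```

An investigator's workflow then amounts to adding entities as evidence is documented and connecting them, e.g. linking a knife to the location where it was found; `related_to` is the kind of query the entity map would later visualize.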
Storyboard 2 – Evaluating the information
This storyboard is set in the same scenario, but further along in the investigation. All evidence has been gathered and evaluated, and several suspects have been interviewed. Now the investigation team meets and uses the system to find new clues. This storyboard shows the three main tools that the software offers: the entity map, the timeline, and the map. The entity map shows all pieces of information the investigators have gathered and puts them into context by showing their relationships to one another. The timeline and the map place the information into a temporal and a spatial context, respectively. The investigators use these tools to determine which of the suspects is most likely the killer. Finally, they narrow it down to one person and solve the case.

As seen in the storyboards, users can interact with the system in several different ways, e.g. voice commands, hand gestures, and the usual input devices. We want the software to be agnostic to the form of input, so that users can interact with the system in whatever way suits the task and their personal preferences.
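One common way to achieve this input agnosticism is to translate every modality into the same abstract command before the application sees it. The sketch below illustrates that idea only; the command names, voice phrases, and gesture names are invented for illustration and do not come from the software itself.

```python
from typing import Callable, Dict

class CommandBus:
    """Maps abstract command names to application handlers."""

    def __init__(self):
        self._handlers: Dict[str, Callable[[], str]] = {}

    def register(self, command: str, handler: Callable[[], str]) -> None:
        self._handlers[command] = handler

    def dispatch(self, command: str) -> str:
        return self._handlers[command]()

# Each modality maps its raw input to an abstract command name
# (mappings are illustrative assumptions).
VOICE_PHRASES = {"show timeline": "open_timeline"}
GESTURES = {"swipe_up": "open_timeline"}

def from_voice(bus: CommandBus, phrase: str) -> str:
    """Recognized speech arrives here and is routed to the same handler."""
    return bus.dispatch(VOICE_PHRASES[phrase])

def from_gesture(bus: CommandBus, gesture: str) -> str:
    """A recognized hand gesture triggers the identical command."""
    return bus.dispatch(GESTURES[gesture])
```

With this separation, saying "show timeline" and performing an upward swipe both resolve to the same `open_timeline` command, so the application logic never needs to know which input device was used.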