Cyber Defense
Augmented Reality
Human-Machine Interfaces
Visual Communication
Our Augmented Reality Cross-Domain Solution (AR-CDS) is a wearable that guarantees the privacy and confidentiality of data stored on the wearable computing device. Instead of using backchannels to inform and control the wearable, the device itself uses machine vision algorithms to detect data on surrounding displays and computing devices, interprets the on-screen objects and data it detects in those displays, and augments them with virtual overlays that are viewable only by the wearer.
Our team has explored numerous applications of augmented and virtual reality. A key assumption of most devices and implementations is that the wearable can be directly connected to information systems and data sources in the surrounding environment, and that surrounding computing devices can directly manipulate or inform the wearable headset and its associated computing hardware via network connectivity. The AR-CDS is physically disconnected from the surrounding computing and network environment and connects only to networks and computing devices at the same sensitivity level as the wearable itself. All external information is obtained via machine vision and interpretation of observable information channels in the surrounding environment.
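To make the approach concrete, the sketch below shows one way such a pipeline could locate candidate displays in a camera frame using OpenCV; the function name, thresholds, and the assumption that displays appear as bright, roughly rectangular contours are illustrative only and do not represent the fielded AR-CDS implementation.

    import cv2

    def find_candidate_displays(frame_bgr, min_area=5000):
        """Return bounding boxes of roughly rectangular regions that may be external displays."""
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)
        contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        candidates = []
        for contour in contours:
            approx = cv2.approxPolyDP(contour, 0.02 * cv2.arcLength(contour, True), True)
            # Keep four-cornered regions large enough to plausibly be a screen.
            if len(approx) == 4 and cv2.contourArea(approx) > min_area:
                candidates.append(cv2.boundingRect(approx))  # (x, y, w, h) used to anchor an overlay
        return candidates

Detected regions would then be interpreted (e.g., via optical character recognition of on-screen content) before an overlay is rendered for the wearer.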
In order to address the problem of system-oriented security being opaque to users (and often hidden), we have prototyped and validated a class of ambient peripheral display. Our primary goal is to enable end-user perception of threats and aberrant system behavior by providing users with real-time representations of internal system operations (such as packet flow or system call sequences).
User detection of anomalous behavior can be improved without significant disruption of the user's active task by designing such displays to be non-distracting and by leveraging aspects of human perception and cognition, such as habituation and perceptual priming. The goal of the approach is to empower everyday users to make better and more informed security decisions. By exposing and displaying hidden system information that relates directly to the user's active task, over time a user learns the perceptual landscape representing that task. The parameters of such a display are many, and include not only the medium of the feedback (e.g., visual, aural, haptic, or kinesthetic) but also the tuning of each (e.g., dynamic range, pattern, duration, or placement). We call such displays "Ambient Activity Monitors" or AAMs.
The goal of an ambient display for threat detection is to convey information to the user without drawing overt attentional resources away from their primary task. In other words, an AAM is not an alert or alarm. We are instead examining and leveraging the difference between peripheral and focal attentional resources. In addition to moving the display into the background, rather than the foreground occupied by alerts and alarms, the display should not be particularly noticeable unless it is likely that something is amiss. The goal is to use the AAM to provide the user with just noticeable differences in feedback that are correlated with their interaction with the primary system. As such, an ambient display presents stimuli that can be detected reliably (at minimum, a just noticeable difference) without causing undue distraction from the user's primary task.
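A minimal sketch of the idea follows, assuming a single monitored metric (e.g., packet rate) and a Weber-fraction model of just noticeable differences; the function name, the 5% Weber fraction, and the intensity step sizes are illustrative choices, not measured values from our studies.

    def ambient_intensity(baseline_rate, current_rate, weber_fraction=0.05,
                          base_intensity=0.2, step_per_jnd=0.02, max_intensity=0.6):
        """Map deviation from a learned baseline to a subtle change in display intensity.

        Each Weber-fraction-sized deviation adds one just-noticeable step, so the
        display stays in the perceptual background until behavior drifts from normal.
        """
        if baseline_rate <= 0:
            return base_intensity
        deviation = abs(current_rate - baseline_rate) / baseline_rate
        jnd_steps = int(deviation / weber_fraction)
        return min(base_intensity + jnd_steps * step_per_jnd, max_intensity)

Because the output changes in small perceptual steps rather than jumping to an alarm state, a habituated user notices the shift peripherally only when system behavior departs from the learned baseline.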
In 2009 and 2010 our team began exploring alternative uses of the capacitive touch screens present on devices such as the iPad and iPhone. One novel use case involved using the touch-screen digitizer directly for digital data input and output. Conventional stylus devices either act as pointing devices or use a backchannel over Bluetooth or WiFi to communicate with the device. In our implementation, the digitizer is calibrated to allow rapid sequences of touches to be interpreted directly as data. Rather than using a backchannel, the user can directly receive and send data through the screen itself. This dramatically simplifies data management and manipulation, allowing data to be literally held in your hand when transferring between devices.
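As an illustration of this kind of encoding, the sketch below maps each byte onto a cell of a calibrated touch grid (high nibble selects the column, low nibble the row) and recovers the bytes from the observed tap sequence; the 16x16 grid and the nibble scheme are assumptions for illustration rather than our deployed encoding.

    GRID_SIZE = 16  # hypothetical 16x16 calibration grid on the digitizer

    def encode_bytes_as_taps(data: bytes):
        """Map each byte to one tap: high nibble -> grid column, low nibble -> grid row."""
        return [(b >> 4, b & 0x0F) for b in data]

    def decode_taps(taps):
        """Recover the original byte stream from the observed tap coordinates."""
        return bytes((col << 4) | row for col, row in taps)

    assert decode_taps(encode_bytes_as_taps(b"transfer")) == b"transfer"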
From 2006 to 2012 our team explored the use of gesture-based data manipulation, inscription, communication, and human-machine interfaces. We created dozens of prototype devices for different environments and use cases. Our devices allowed sub-millimeter tracking of individual digits, mapped to a skeletal model for real-time tracking of hand movement and gesture.
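A minimal sketch of the kind of skeletal frame such trackers produce, and one gesture test over it, is shown below; the joint layout, coordinate units, and the 8 mm pinch threshold are illustrative assumptions rather than parameters of any specific prototype.

    import math
    from dataclasses import dataclass

    @dataclass
    class Joint:
        x: float
        y: float
        z: float  # millimeters in the tracker's coordinate frame

    # Hypothetical skeletal frame: one list of joints per digit, base to fingertip,
    # e.g., {"thumb": [Joint, ...], "index": [Joint, ...], ...}

    def pinch_detected(frame, threshold_mm=8.0):
        """Flag a pinch gesture when the thumb and index fingertips nearly touch."""
        thumb_tip, index_tip = frame["thumb"][-1], frame["index"][-1]
        dist = math.dist((thumb_tip.x, thumb_tip.y, thumb_tip.z),
                         (index_tip.x, index_tip.y, index_tip.z))
        return dist < threshold_mm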
Although symbols did not originate with computer systems, their usage has reached its peak with the rise of the modern computer. Over the last several decades, human interaction with computing systems has relied on graphical user interfaces that use symbols and iconography to represent applications, tools, and commands. User interfaces have allowed for flexible task and data representation, enabling more agile human problem description and solution. Directly mapping abstract ideas, actions, and interactions onto symbol shapes or sounds can initially prove fruitful through the careful application of metaphor and analogy, but it is apt to result in a poor usability strategy. Standards have been developed specifically to ameliorate information-density problems and aid recognition, making critical relationships and knowledge about platform type, position, condition, and capabilities easily viewable. Yet in many ways, current symbology solutions seem to apply only to the domain of physical-world operations. Cyberspace differs substantially from physical space, affecting how operators attempt to gain awareness of the battlespace. Key differences between the domains suggest symbology may need to branch away from existing standards while retaining some of the useful carryover elements (Gutzwiller and Fugate, 2016).