CITeR 2013

Projects:

Bacterial DNA as a Human Biometric Identifier
Jeremy M. Dawson (WVU ‐ LCSEE), Letha Sooter (WVU ‐ HSC Basic Pharmaceutical Sciences)

Face Recognition with Significant Aging and Photo Quality Variations
Guodong Guo (WVU), Jeremy Dawson (WVU) and Bojan Cukic (WVU)

Improving Latent Fingerprint Matching Accuracy: Role of Feedback from Exemplar Prints
Anil K. Jain, Michigan State University

Automating Interoperability Enhancement for Fingerprint Sensors
E. Marasco, B. Cukic (WVU)

3D Face Acquisition using a Smartphone
D. Adjeroh, M. Piccirilli, G. Doretto, A. Ross (WVU)

Benefits of Cross‐spectral (visible, NIR and LWIR) and Cross‐Distance Face Recognition In Non‐Ideal Environments
S. Schuckers, T. Bourlai, and F. Hua (Clarkson and WVU)

Fingerprint Texture Modeling for Synthetic Fingerprint Generation and Liveness Detection
Stephanie Schuckers, Peter Johnson (Clarkson)

Lie to Me, Chatter Bot Style
Joseph R. Buckman, Justin Scott Giboney, Mark Grimes, Ryan Schuetzler, Judee K. Burgoon (University of Arizona)

Remote Heart Rate Identification to Detect Deception
Justin Scott Giboney, Jeff Proudfoot, Ryan Schuetzler, Steven Pentland, and Judee Burgoon (University of Arizona)

Matching Methods for Privacy Preserving Indexed Fingerprint Templates
Venu Govindaraju, Atri Rudra (Buffalo)

Soft Biometrics for Mobile Smart Environments
Venu Govindaraju/Sergey Tulyakov (Buffalo)

Cloud‐Empowered Mobile Biometrics
Matthew Valenti and Arun Ross (WVU)

Matching Iris Images against Face Images
Arun Ross (WVU)

Towards Low Cost, Deployable Thermal Biometrics: Achieving Cooled Camera Performance from Next Generation Uncooled Devices
T. Bourlai, J. Dawson and L. Hornak (WVU)

Detecting and Tracking Facial and Sub‐Facial Regions in Thermal Image Sequences
T. Bourlai, Bojan Cukic (WVU)

Sample Size Estimation and Stratification for Large Biometrics Databases
Mark Culp, Thirimachos Bourlai, Bojan Cukic (WVU)

Biometrics in the Cloud: Development of a Biometric Research Data Portal Testbed (Application to NSF Fundamental Research Program)
Bojan Cukic, Stephanie Schuckers, Judee Burgoon, Michael Schuckers (WVU, Clarkson and Arizona)

Tattoo Sketch to Tattoo Image Matching
Anil K. Jain (Michigan State)

Image Enhancement for Iris Recognition from Incomplete and Corrupted Measurements
Joachim Stahl and Stephanie Schuckers (Clarkson)

Biometric Identification with a Remote Microwave Thoracic Radar
Daniel J. Rissacher, Ralph E. McGregor, William Jemison, Stephanie Schuckers (Clarkson)

Mobile Interviewing Agent
Ryan Schuetzler, Justin Giboney, Mark Grimes, Jim Marquardson, David Wilson (Arizona)

Detecting Impostership through Soft Biometrics and Cognitive Metrics
Judee Burgoon, Joe Valacich, Nathan Twyman, Jeff Proudfoot, Mark Grimes (UA), Stephanie Schuckers (Clarkson)



Summaries:

 

Bacterial DNA as a Human Biometric Identifier

Jeremy M. Dawson (WVU ‐ LCSEE), Letha Sooter (WVU ‐ HSC Basic Pharmaceutical Sciences)

The goal of this project is to develop a method of bacteria-based identification that can overcome the degradation and throughput issues associated with human STR analysis. The presence of bacteria on human skin has a negative connotation in the realm of health and hygiene. However, recent research into the variation of these bacterial colonies across individuals has led to the application of human bacteria as an identification tool. Because bacterial colonies remain viable and multiply on surfaces, bacterial DNA is often more robust to environmental exposure than that of deposited human epithelial cells, which are used to obtain human DNA. Recent studies of human skin bacterial composition based on gender, body location, time, and even washing habits indicate little to no colonial variability for a single individual within these parameters, and high diversity among small groups (~20) of human subjects. The specific aim of this project is to examine the population(s) of hand bacteria colonies within a group of 200 individuals. The findings of this study will provide the basis for the development of biometric identification techniques based on human bacterial signatures.

 

Face Recognition with Significant Aging and Photo Quality Variations

Guodong Guo (WVU), Jeremy Dawson (WVU) and Bojan Cukic (WVU)

Human identity management is important in visual surveillance, security, and law enforcement applications. As an important biometric cue, face recognition is very useful for non-invasive identification of non-cooperative users. However, developing robust face recognition systems remains challenging, chiefly because of facial aging variations and image quality changes. For example, a high-quality face photo may be captured at enrollment, while at recognition time the query face images may be captured by cameras of varying quality (i.e., inter-platform operation) or appear in different formats, e.g., passports, social media, photo scans, newspapers, etc. Furthermore, the query subjects can often differ in age from the gallery, with significant facial appearance changes. Thus, in many real applications, face recognition suffers from coupled aging variations and all kinds of photo quality changes. In this work, we will develop new face matching methods that are robust to aging and photo quality variations. The FBI BCOE database will be used to validate our techniques and build a benchmark.



Improving Latent Fingerprint Matching Accuracy: Role of Feedback from Exemplar Prints

Anil K. Jain, Michigan State University

Latent fingerprints have served as an important source of forensic evidence for identifying suspects. However, latent fingerprint impressions are typically of poor quality with complex background noise, which makes feature extraction and matching of latents a significantly challenging problem. In this research, we propose incorporating “top-down” information or feedback from an exemplar to refine the features extracted from a latent, with the eventual aim of improving latent matching accuracy. The refined latent features, such as ridge orientation and frequency, are used after feedback to re-match the latent to the top-K candidate exemplars and re-sort the candidate list. The objectives of this research include: (i) devising systematic ways to use information in exemplars for latent feature refinement, (ii) developing a feedback paradigm which can be wrapped around any latent matcher for the purpose of improving its matching accuracy, and (iii) determining when feedback is actually necessary to improve the baseline latent matching accuracy.
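
The two-stage idea above can be sketched as a small re-ranking loop. Both scoring functions below are hypothetical stand-ins: `baseline` plays the role of the latent matcher, and `refined` plays the role of a matcher applied after exemplar-driven feature refinement.

```python
# A minimal sketch, assuming a generic matcher interface: an initial matcher
# ranks all exemplars, then only the top-K candidates are re-scored with a
# refined (exemplar-informed) matcher and the candidate list is re-sorted.

def rerank_with_feedback(latent, exemplars, baseline_score, refined_score, k=10):
    """Return exemplars sorted by score, re-ranking the top-K with feedback."""
    # Stage 1: rank every exemplar with the baseline latent matcher.
    ranked = sorted(exemplars, key=lambda e: baseline_score(latent, e), reverse=True)
    top_k, rest = ranked[:k], ranked[k:]
    # Stage 2: refine latent features using each top-K exemplar and re-score.
    top_k = sorted(top_k, key=lambda e: refined_score(latent, e), reverse=True)
    return top_k + rest

# Toy demonstration with dictionary "prints" and overlap-based scores.
latent = {"minutiae": {1, 2, 3}}
exemplars = [
    {"id": "A", "minutiae": {1, 2, 3, 4}},
    {"id": "B", "minutiae": {2, 3}},
    {"id": "C", "minutiae": {9}},
]
baseline = lambda l, e: len(l["minutiae"] & e["minutiae"])
refined = lambda l, e: len(l["minutiae"] & e["minutiae"]) / len(e["minutiae"])
result = rerank_with_feedback(latent, exemplars, baseline, refined, k=2)
print([e["id"] for e in result])  # ['B', 'A', 'C']
```

Because only the top-K candidates are re-scored, the (presumably more expensive) refined matcher runs a bounded number of times per search, which is what lets this paradigm wrap around any existing latent matcher.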



Automating Interoperability Enhancement for Fingerprint Sensors

E. Marasco, B. Cukic (WVU)

Interoperability within complex biometric systems enables seamless operation in spite of the variations introduced by different capture devices. Increasing competitiveness within the fingerprint sensor market makes reliance upon a single sensor vendor economically and technically unwise. Our initial experiments show that even for a specific sensing technology, different arrangements of sensing elements cause varying distortions in the biometric data. In many practical scenarios (US-VISIT, law enforcement, etc.), users are enrolled using a 500 dpi optical sensor (most with a sensing area of 1.2” x 1.2”), but there is no guarantee that the same device will be used to capture the probe. In 2012, we carried out a large-scale data collection (500 users) using nine different fingerprint devices and a set of ink prints. Some of the devices are included in the CJIS certified products list. We found that the genuine match scores generated by comparing biometric samples captured using the same device were generally higher than when the devices were different. These score fluctuations exist even between the devices included in the CJIS list. We also found that device diversity leads to an increase in the false non-match rate. The impact is higher when the gallery is of low quality. The objective of this project is to develop automated tools that reduce the impact of the lack of interoperability on matching performance. The image quality of a fingerprint image is affected by the fingerprint’s condition, the user’s familiarity with the device, user interaction, acquisition technology, and characteristics of the device. We will model qualitative information about a fingerprint from the device ID and quality measures. In addition to the clarity of the ridge and valley patterns, we plan to use minutiae count, alignment, and the number of paired minutiae as interoperability model parameters.



3D Face Acquisition using a Smartphone

D. Adjeroh, M. Piccirilli, G. Doretto, A. Ross (WVU)

Recently, the acquisition of 3D face data has gained significant interest, given the improvements in human recognition using 3D facial features. Though various 3D face scanners exist today, data acquisition using portable mobile devices remains a critical challenge. Yet various applications in biometrics, public safety, security, and mobile health stand to benefit from progress on this front. Our primary goal in this project is to create a hand-held portable system for the acquisition of 3D faces using a simple smartphone. The basis of our approach is the concept of 3D shape from structured light [1, 2]. While the field of 3D scene analysis has been studied for decades, the technology to support the type of information needed for 3D shape from structured light only became available relatively recently. Our approach for acquiring a 3D face using a smartphone equipped with only an RGB camera is as follows: (1) We will collect 3D information of the face by illuminating it with the desired pattern and invoking algorithms for 3D reconstruction through structured light. We propose to use a small illuminant that can light a scene up to 3 meters away with good contrast. It will have the form factor of a cellular phone, will be portable, and will have enough battery life to guarantee a few hours of acquisition. (2) The illuminant will connect to the cellular phone, giving us the capability to build a sophisticated structured-light system that can withstand various sources of variability in the acquisition environment.
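
The core geometric step behind structured light can be illustrated with a simplified rectified projector-camera model: once a pattern feature projected at column x_p is observed by the camera at column x_c, depth follows from the disparity. The focal length and baseline values below are illustrative, not parameters of the proposed device.

```python
# A minimal sketch of structured-light triangulation, assuming a rectified
# projector-camera pair: depth Z = f * b / disparity, where f is the focal
# length in pixels, b the projector-camera baseline in meters, and disparity
# the column offset between projected and observed pattern positions.

def depth_from_disparity(x_proj, x_cam, focal_px, baseline_m):
    """Depth (meters) of a surface point from a projector/camera correspondence."""
    disparity = x_proj - x_cam
    if disparity <= 0:
        raise ValueError("correspondence must yield positive disparity")
    return focal_px * baseline_m / disparity

# Example: f = 800 px, b = 0.1 m, disparity of 40 px -> point 2 m away.
z = depth_from_disparity(x_proj=140, x_cam=100, focal_px=800, baseline_m=0.1)
print(round(z, 2))  # 2.0
```

In a real system the projected pattern (e.g., stripe codes) serves to establish these correspondences densely across the face, after which each pixel's depth is triangulated in the same way.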



Benefits of Cross‐spectral (visible, NIR and LWIR) and Cross‐Distance Face Recognition In Non‐Ideal Environments

S. Schuckers, T. Bourlai, and F. Hua (Clarkson and WVU)

Standard face recognition (FR) systems compare new facial images, or probes, with gallery pictures to establish identity. They typically perform well in the visible band when lighting is good and cooperative subjects are close to the camera with a neutral facial expression. However, many law enforcement and military applications deal with mixed FR scenarios that involve matching active (0.9-2.5 microns) and passive (3-5 or 7-14 microns) infrared (IR) probe images against all images (e.g., mug shots) in a visible gallery database. This is also known as the heterogeneous problem. While there are studies reported in the literature where either NIR or thermal images are matched against visible ones [1-6], to our knowledge there is no reported study where all available face datasets are simultaneously collected (i) in three different bands, covering the visible, active, and passive IR bands, and (ii) at different standoff distances. In this work, we have the advantage of utilizing an existing multi-spectral, multi-distance dataset to investigate the benefits of cross-spectral, cross-distance, and cross-expression face recognition. This is a sub-dataset collected as part of Clarkson’s ‘Un-constrained Biometrics at a Distance’ project, namely the ‘Multi-spectral Face Dataset’. In particular, the dataset contains visible, NIR, and LWIR face data captured under ideal (neutral facial expression) and non-ideal (varying facial expressions) scenarios at 7 ft, 17 ft, 25 ft, and 35 ft standoff distances. Our proposed work will investigate the following questions: (1) Can we efficiently match long wave infrared (LWIR) and near infrared (NIR) face images to their visible counterparts? (2) Can we repeat (1) when the standoff distance varies (i.e., cross-spectral, cross-distance)? (3) Can we repeat (1) and (2) when facial expression varies?

 

Fingerprint Texture Modeling for Synthetic Fingerprint Generation and Liveness Detection

Stephanie Schuckers, Peter Johnson (Clarkson)

We propose a fingerprint texture modeling framework to be used for synthetic fingerprint generation as well as fingerprint liveness detection, i.e., discriminating between live and fake fingers. Texture-characterizing features can be modeled from real fingerprint images. The features proposed here include ridge intensity along the ridge centerlines with multiple frequency components, ridge width, ridge cross-sectional slope, ridge noise, and valley noise. The measured features can be linked to synthetic generation approaches in order to create synthetic fingerprint texture which is statistically representative of a particular real fingerprint database. Additionally, with statistical representations of the texture features, a more robust model for fingerprint liveness detection can be created. Liveness detection often requires large sets of training data to achieve good performance. With this more robust model of texture features, it is likely that the training set size could be greatly reduced.

 

Lie to Me, Chatter Bot Style

Joseph R. Buckman, Justin Scott Giboney, Mark Grimes, Ryan Schuetzler, Judee K. Burgoon (University of Arizona)

Deception in computer‐mediated communication (CMC) can be especially prevalent because deceivers can freely edit their messages to make them more persuasive or believable. To investigate the unique indicators of this type of deception, we will create an online chat bot program that conducts an interview. We will combine the text‐based indicators available with chat interfaces (e.g., keystroke data) from Derrick et al. (2012) with the deceptive linguistic cues in synchronous and asynchronous CMC found in Zhou et al. (2004) to identify whether these combined features provide more accurate classification of truthful and deceptive responding.

 

Remote Heart Rate Identification to Detect Deception

Justin Scott Giboney, Jeff Proudfoot, Ryan Schuetzler, Steven Pentland, and Judee Burgoon (University of Arizona)

Heart rate can be a measure of stress, arousal, and emotion, all correlates of deception. Many screening and interviewing scenarios could benefit from the ability to remotely measure heart rate; however, the means to do so have only recently become available. The purpose of this project is to test the accuracy of a newly identified methodology for remotely measuring heart rate (from MIT). This methodology leverages video analysis software to identify changes in color saturation in the face, indicative of changes in blood density below the surface of the skin, which are associated with pulse activity. The focus of this project is evaluating the heart rate accuracy of this software and the deception detection accuracy of its output.
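
The color-change signal described above can be approximated with a simple pipeline: average the green channel of the face region per frame, then find the dominant frequency in a plausible heart-rate band. This is an assumption about how such methods can be approximated in general, not the MIT software's actual implementation; the synthetic signal stands in for real per-frame color averages.

```python
# A minimal sketch: estimate pulse rate from a per-frame mean-green-channel
# signal by locating the spectral peak in the 0.75-4 Hz (45-240 bpm) band.
import numpy as np

def estimate_bpm(green_means, fps):
    """Estimate heart rate (beats/min) from a per-frame mean-green signal."""
    signal = np.asarray(green_means, dtype=float)
    signal = signal - signal.mean()              # remove the DC component
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    power = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= 0.75) & (freqs <= 4.0)      # plausible heart-rate band
    return 60.0 * freqs[band][np.argmax(power[band])]

# Synthetic 10 s clip at 30 fps with a 1.2 Hz (72 bpm) pulse component.
fps = 30
t = np.arange(300) / fps
green = 100 + 0.5 * np.sin(2 * np.pi * 1.2 * t)
print(round(estimate_bpm(green, fps)))  # 72
```

Frequency resolution here is fps / n_frames, so longer observation windows give finer heart-rate estimates, a trade-off any real-time deployment would have to manage.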

 

Matching Methods for Privacy Preserving Indexed Fingerprint Templates

Venu Govindaraju, Atri Rudra (Buffalo)

Feature set indexing (also known as bag-of-features or bag-of-words) methods have become popular in computer vision applications and have been shown to outperform all other indexing methods for fingerprint templates. In our recent work, we combined such indexing methods with privacy-preserving fingerprint templates created by the fuzzy vault method. Although the algorithm has satisfactory indexing performance, the matching accuracy based on such indexed templates is worse than that of traditional fuzzy vaults or other fingerprint matching methods. We believe the decrease in matching accuracy is due to two factors: (a) discarding the spatial relationships between features in the indexing method and (b) the bin quantization not allowing the calculation of an informative matching score between features. The solution of keeping alternative fingerprint templates and performing second-stage matching to confirm the indexing results, suggested by previous works on fingerprint indexing, might not be feasible since such templates could violate the privacy property, especially when combined with index information. Instead, it might be possible to improve performance by storing additional specific information addressing these two factors in the templates and using it for a refined second-stage match. We propose to investigate the use of two types of additional information: mean positions and directions of features, and the differences between features and quantization bin centers. Theoretical studies on privacy preservation and experimental studies on matching performance will be conducted in this project.
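
The bag-of-features indexing this work builds on can be sketched in a few lines. Quantization here is a plain grid and the fuzzy-vault privacy layer is omitted, so this illustrates only the indexing/lookup mechanics, including the bin quantization whose information loss (factor (b) above) the proposed second-stage match would compensate for.

```python
# A minimal sketch of bag-of-features fingerprint indexing: each print is
# reduced to the set of quantization bins its local features fall into, and
# lookup ranks enrolled prints by the number of shared bins with the query.
from collections import defaultdict

def quantize(feature, bin_size=10):
    """Map a real-valued feature vector to a coarse grid-cell id."""
    return tuple(int(x // bin_size) for x in feature)

def build_index(templates):
    """Inverted index: quantization bin -> set of template ids."""
    index = defaultdict(set)
    for tid, features in templates.items():
        for f in features:
            index[quantize(f)].add(tid)
    return index

def lookup(index, query_features):
    """Rank enrolled ids by the number of bins shared with the query."""
    votes = defaultdict(int)
    for f in query_features:
        for tid in index.get(quantize(f), ()):
            votes[tid] += 1
    return sorted(votes, key=votes.get, reverse=True)

# Toy 2-D "features"; real systems quantize higher-dimensional descriptors.
templates = {"u1": [(12, 7), (33, 41)], "u2": [(90, 90)]}
index = build_index(templates)
print(lookup(index, [(11, 9), (35, 44)]))  # ['u1'] -- both query bins match u1
```

Note that once features are quantized, only bin membership survives: two features in the same bin are indistinguishable, which is exactly why the summary proposes storing feature-to-bin-center differences for a refined second-stage score.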

 

Soft Biometrics for Mobile Smart Environments

Venu Govindaraju/Sergey Tulyakov (Buffalo)

One of the key desirable properties of a smart environment is the ability to track the location of each person, and in this project we assume that such an environment is modeled using video data captured by mobile or wearable cameras. As an example, emergency response personnel could be investigating a vehicle crash scene using wearable cameras while a networked computer system reconstructs a model of the scene and tracks the positions of injured persons. In this project, we would like to investigate the use of soft biometric features for person tracking in such environments. The distinguishing characteristics of this problem are the intermittency of person observations and the variable positioning of the video cameras, which might require novel soft biometric features. For example, it might be difficult to determine the frequently used soft biometric traits of height and gait due to insufficient observations of a person’s walking and the uncertain position of the camera. The technique of subtracting the background to determine a person’s profile might also be unavailable due to changing camera positions. Instead, we propose to use a set of session soft biometric features, such as skin, hair, and clothing colors, skin marks, and feature descriptor vectors associated with interest points (e.g., SIFT, SURF). Such features will be extracted and positioned with respect to the tracked face position and will enhance the traditional face biometric matcher. The use of multi-frame matching and score fusion algorithms for smart environment tracking will be investigated as well.

 

Cloud‐Empowered Mobile Biometrics

Matthew Valenti and Arun Ross (WVU)

As biometric systems mature, two conflicting challenges have emerged. On the one hand, surges in enrollment have dramatically increased the computing requirements. On the other hand, the desire to implement biometric identification on mobile, handheld systems has reduced the amount of computing power available to the end users. These two requirements can be simultaneously met using cloud‐computing resources, such as Amazon’s Elastic Compute Cloud (EC2), which allows computing to be treated as a utility. However, it is not yet clear when and how to best leverage cloud computing for biometric applications. Furthermore, the risks of cloud‐computing based biometric systems have not been fully characterized, and research needs to be directed towards mitigating these risks. In this work, we will investigate the use of cloud‐computing technologies for performing biometric identification and related tasks. The goals will be to identify appropriate uses of cloud technology, quantify their risks, and explore methodologies that minimize risk. A proof‐of‐concept mobile facial recognition system will be developed, which uses the concept of visual cryptography to ensure the privacy of the database.
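
The visual-cryptography idea mentioned above can be illustrated with a tiny two-share scheme. Classical visual cryptography uses pixel expansion and OR-stacking of printed shares; the XOR variant below is a simplified stand-in that conveys the same security property: either share alone is pure noise, and only their combination reveals the secret.

```python
# A minimal sketch of two-share secret splitting for a binary image:
# share1 is uniform random noise, share2 = secret XOR share1, so neither
# share alone carries information about the secret.
import random

def make_shares(bits, seed=None):
    """Split a list of 0/1 pixels into two random-looking shares."""
    rng = random.Random(seed)
    share1 = [rng.randint(0, 1) for _ in bits]       # pure noise
    share2 = [b ^ s for b, s in zip(bits, share1)]   # noise ^ secret
    return share1, share2

def combine(share1, share2):
    """Recover the secret pixels by XOR-ing the two shares."""
    return [a ^ b for a, b in zip(share1, share2)]

secret = [1, 0, 1, 1, 0, 0, 1, 0]    # a tiny "image" row
s1, s2 = make_shares(secret, seed=42)
assert combine(s1, s2) == secret      # both shares needed to reconstruct
print(combine(s1, s2))
```

In a cloud setting, one share could reside on the server and the other on the mobile device, so a compromise of the cloud-stored database alone would reveal nothing about the enrolled faces.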

 

Matching Iris Images against Face Images

Arun Ross (WVU)

We consider the problem of matching color (RGB) face images obtained using a digital camera against near-infrared (NIR) iris images obtained using an iris scanner. This problem is especially relevant in the context of matching iris images against legacy face databases in order to establish identity. However, there are several challenges that have to be addressed. Firstly, the spatial resolution of the iris in the RGB face image can be significantly lower, thereby offering limited biometric information compared to that of the iris obtained using a dedicated iris scanner. Secondly, the difference in spectral bands (RGB versus NIR: the cross-spectral problem) and sensors (face camera versus iris scanner: the cross-sensor problem) can introduce photometric variations in the corresponding iris images. These variations will be especially pronounced for dark-colored irises, whose texture cannot be easily discerned in the RGB domain. Thirdly, due to the effects of human aging, there may be anatomical differences in the corresponding face and iris images. This work will explore the possibility of addressing these challenges and designing robust segmentation, feature extraction, and matching algorithms for matching face images against iris images.

 

Towards Low Cost, Deployable Thermal Biometrics: Achieving Cooled Camera Performance from Next Generation Uncooled Devices

T. Bourlai, J. Dawson and L. Hornak (WVU)

In thermal-based face recognition investigations, the selection of infrared (IR) detectors [1] is frequently critical in producing key trace evidence for the successful solution of human identification problems [2]. The two detector technologies most commonly used for face examination are cooled (photonic IR) and uncooled (e.g., microbolometer) detectors. Acquisition of thermal face imprints is the first step in the analytical process of identifying and comparing thermal imaging characteristics of human faces (e.g., subcutaneous face patterns). The main issues in the acquisition of face images with IR detectors are the following: (1) Cost, size, and deployability: high-sensitivity photonic IR detectors with low noise produce high image quality, but they require cooling to enable direct photon detection, which increases their complexity and size and prevents easy deployment. (2) Temperature calculation: certain IR cameras (mainly uncooled ones) do not have built-in software that allows users to focus on specific areas of the Focal Plane Array (FPA) and calculate the temperature. (3) Variable FOV optics: the selection of camera components is critical in producing accurate readings and is highly dependent on the experience of the camera user. In this work, we will extend the capabilities of an existing uncooled system by determining which camera lenses can be used to improve the image quality of the uncooled detector, and by building software to convert gray-level values to temperature for each pixel within the FPA. An evaluation study will follow to determine whether the temperature readings are statistically similar to those acquired using a high-end uncooled detector that will provide ground truth data.
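
The planned gray-level-to-temperature conversion can be sketched under a common simplifying assumption: over the narrow face-temperature range, the detector's response is approximately linear, so two blackbody reference readings fix a linear map from raw gray level to degrees Celsius. The reference values below are illustrative, not measured camera data.

```python
# A minimal sketch of two-point radiometric calibration for an uncooled
# detector, assuming a locally linear gray-level -> temperature response.

def calibrate(gray_lo, temp_lo, gray_hi, temp_hi):
    """Return a gray-level -> temperature (C) function from two references."""
    slope = (temp_hi - temp_lo) / (gray_hi - gray_lo)
    return lambda g: temp_lo + slope * (g - gray_lo)

# Blackbody at 20 C reads gray 50; blackbody at 40 C reads gray 200.
to_celsius = calibrate(gray_lo=50, temp_lo=20.0, gray_hi=200, temp_hi=40.0)
print(to_celsius(125))  # midpoint gray level -> 30.0 C
```

Applying such a map per pixel (with per-pixel reference readings) also compensates for the fixed-pattern non-uniformity typical of uncooled FPAs.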

 

Detecting and Tracking Facial and Sub‐Facial Regions in Thermal Image Sequences

T. Bourlai, Bojan Cukic (WVU)

The first goal of this work is to explore a new tracking paradigm. We will utilize the face tracking capabilities of an existing visible-band system (MS Kinect) to accurately perform face tracking on data captured from a low-cost thermal camera system, before recognition (using full or partial faces) is performed. The second goal is to test whether we can develop the software platform needed to allow simultaneous face tracking using two thermal sensors (placed at different angles to the face) assisted by two Kinect sensors. The third and final goal is to investigate the feasibility of extracting and using full frontal face images captured by the low-quality thermal sensors to perform FR. The results will be compared to those computed when applying the same FR algorithms to full frontal face images captured by high-quality thermal sensors.

 

Sample Size Estimation and Stratification for Large Biometrics Databases

Mark Culp, Thirimachos Bourlai, Bojan Cukic (WVU)

One of the critical steps underlying progress in biometric research is the ability to project biometric test results to very large operational data sets. Biometric collections and test data selection typically follow an examination of operational needs leading to the design of scenarios. In our prior work in this area [1], we investigated stratified sampling and developed a sample size estimation approach using distance-based measurements for closed-set identification. Theoretically, we validated that the match similarity scores follow a variation of the Gumbel distribution. This approach has strong merits for small-scale databases, and our empirical work demonstrated these qualities. For large open databases (~100+ million), three key problems emerge that are not addressed in [1]: (i) identification errors are costly (i.e., 0.1% is huge), (ii) if the database is large enough and open, then the probability of a match goes to one even if the subject is not in the database, and (iii) the approach in [1] becomes unstable. The most fruitful approach to address the first and second issues is to constrain the scores and study the resulting distribution characteristics. New distributions will be more robust to the costly error problems of large databases. In addition, the proposed work will address sampling characteristics for large open-set biometric identification, the most challenging problem in biometrics. An empirical investigation will be performed to compare different face matching algorithms, which will provide insights on bridging the theoretical distributional tools to practical ones and on challenging applications of these analyses to large databases. Finally, we plan to investigate alternative stratification strategies to better assess the effect of incorrect metadata in sampling.
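
The extreme-value reasoning behind the Gumbel result can be illustrated numerically: in a large-gallery search, the best impostor score is a maximum over many comparisons, and maxima of light-tailed scores are approximately Gumbel distributed. The simulation below is an illustrative stand-in (Gaussian impostor scores, method-of-moments fit), not the distance-based model of [1].

```python
# A minimal sketch: fit a Gumbel model to simulated best-impostor scores and
# estimate the chance an impostor search exceeds a decision threshold.
import math
import random
import statistics

rng = random.Random(0)
# Best impostor score per search: max over a 2,000-comparison gallery.
best_impostor = [max(rng.gauss(0, 1) for _ in range(2_000)) for _ in range(200)]

# Method-of-moments Gumbel fit: var = (pi*beta)^2 / 6, mean = mu + gamma*beta.
beta = statistics.stdev(best_impostor) * math.sqrt(6) / math.pi
mu = statistics.mean(best_impostor) - 0.5772 * beta

def prob_exceeds(threshold):
    """P(best impostor score > threshold) under the fitted Gumbel model."""
    return 1.0 - math.exp(-math.exp(-(threshold - mu) / beta))

print(round(prob_exceeds(mu), 3))  # at mu, exceedance is 1 - 1/e ~ 0.632
```

This also makes problem (ii) above concrete: as the gallery grows, mu shifts upward, so for a fixed threshold the exceedance probability, and hence a spurious "match", approaches certainty even for unenrolled subjects.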

 

Biometrics in the Cloud: Development of a Biometric Research Data Portal Testbed (Application to NSF Fundamental Research Program)

Bojan Cukic, Stephanie Schuckers, Judee Burgoon, Michael Schuckers (WVU, Clarkson and Arizona)

Storage of identifying images has long been a critical privacy threat in any biometric or credibility assessment program. Institutional Review Boards provide guidelines for collection and distribution in order to protect volunteer subjects. Data sharing is possible, but permissions can be limited. In research practice, there is a need for larger, more diverse sets of data that can serve as reasonable community benchmarks. Combining data from multiple datasets may be necessary to achieve representative samples. Models are emerging in other fields that enable sharing of research data through cloud-based architectures. We propose to study cloud-based architectures and develop a Biometric Research Data Portal. This portal will provide tiers of access for both (1) storage of data for retrieval and (2) processing of algorithms. By uploading algorithms instead of downloading data (in some cases), the dataset is not revealed to the user, thereby protecting sensitive data. More powerful cloud processing architectures can be put to the task in order to reduce run-time. Statistical performance summaries can be returned to biometric algorithm developers in order to indicate algorithm performance while limiting the disclosure of personal information.

 

Tattoo Sketch to Tattoo Image Matching

Anil K. Jain (Michigan State)

Tattoos engraved on the human body have been successfully used to assist human identification in forensics. Tattoo pigments are embedded in the skin to such a depth that even severe skin burns often do not destroy a tattoo. For this reason, tattoos helped in identifying victims of the 9/11 terrorist attacks and the 2004 Asian tsunami. Criminal identification is another important application because tattoos often contain hidden information related to a suspect’s criminal history (e.g., gang membership, previous convictions, years spent in jail, etc.). The current practice of tattoo matching and retrieval, based on ANSI/NIST classes (keywords), is prone to significant errors due to the limited vocabulary and the subjective nature of tattoo labeling. To improve the performance and robustness of tattoo matching and retrieval, we designed and developed the Tattoo-ID system. This system automatically extracts features from a query image and retrieves near-duplicate tattoo images from a database. In many scenarios, an image of the suspect’s tattoo is not available. Instead, the victim or a witness, who has seen the tattoo on the suspect’s body, is able to describe the tattoo to the police. We call a drawing of the tattoo based on this description a tattoo sketch. The objective of this research is to develop techniques to match a query tattoo sketch to a large collection of tattoo images in law enforcement databases.

 

Image Enhancement for Iris Recognition from Incomplete and Corrupted Measurements

Joachim Stahl and Stephanie Schuckers (Clarkson)

Our current work on the “Image Enhancement for Iris Recognition from Incomplete and Corrupted Measurements” project can be expanded to dramatically improve its performance and usability. The proposed extension will achieve this in two steps. First, we will investigate the minimum amount of iris data from a subject needed for reliable recognition (via mosaicking). In particular, we will apply this to small sub-regions of the iris, which would allow us to stop processing further images in a video sequence once enough information has been collected on a per-subregion basis. Second, we will adapt our current mosaicking and in-painting algorithms to take advantage of our findings in the first step, and further improve performance by subdividing the processing of different iris sub-regions using parallel processing techniques. In summary, this project will produce a more efficient solution by simultaneously reducing unnecessary processing (first step) and distributing the processing (second step).

 

Biometric Identification with a Remote Microwave Thoracic Radar

Daniel J. Rissacher, Ralph E. McGregor, William Jemison, Stephanie Schuckers (Clarkson)

Radar systems have shown success in measuring cardiac and pulmonary activity and could be promising tools for providing biometric identification data. However, the biometric content of these radar signals has yet to be fully explored. In this project, we will apply a 2.4 GHz radar system already in use in our laboratory to collect data from human subjects. The radar has been developed internally, and preliminary data have proven its capability to detect cardiac signals. Two novel algorithms will be developed and applied to this data to determine promising directions for future work: (1) an algorithm to identify a human subject among a pool of other human subjects using the radar data, and (2) an algorithm that provides real-time heart rate and heart rate variability (HRV), which could be used in some applications as a measure of anxiety.
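
Once beat times have been extracted from the radar signal, the second algorithm's outputs follow from the inter-beat intervals: mean heart rate and a standard HRV statistic such as RMSSD. The beat timestamps below are illustrative, not radar measurements, and RMSSD is one common HRV measure chosen here as an assumption.

```python
# A minimal sketch: mean heart rate (bpm) and RMSSD (root mean square of
# successive inter-beat-interval differences, in ms) from beat timestamps.
import math

def heart_rate_and_rmssd(beat_times_s):
    """Return (mean bpm, RMSSD in ms) from beat timestamps in seconds."""
    ibis = [b - a for a, b in zip(beat_times_s, beat_times_s[1:])]  # intervals (s)
    bpm = 60.0 / (sum(ibis) / len(ibis))
    diffs = [(b - a) * 1000 for a, b in zip(ibis, ibis[1:])]        # successive diffs (ms)
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return bpm, rmssd

beats = [0.0, 0.80, 1.62, 2.40, 3.22, 4.00]   # ~75 bpm with mild variability
bpm, rmssd = heart_rate_and_rmssd(beats)
print(round(bpm, 1), round(rmssd, 1))  # 75.0 36.1
```

Low RMSSD values generally indicate reduced beat-to-beat variability, which is why HRV is of interest as an arousal or anxiety correlate in screening scenarios.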

 

Mobile Interviewing Agent

Ryan Schuetzler, Justin Giboney, Mark Grimes, Jim Marquardson, David Wilson (Arizona)

Rapid portability of our kiosk-based AVATAR system is hampered by its sheer size. However, we see huge potential for leveraging the technology in a more portable way. We propose to bring the AVATAR concept to a mobile, tablet-based platform and validate its built-in sensors (e.g., camera, microphone) for their potential use in identification contexts. The focus of this project is the actual porting of the AVATAR to the mobile platform, laying the groundwork for future projects to investigate additional sensors (e.g., accelerometer) and uses for a mobile identification kiosk.

 

Detecting Impostership through Soft Biometrics and Cognitive Metrics

Judee Burgoon, Joe Valacich, Nathan Twyman, Jeff Proudfoot, Mark Grimes (UA), Stephanie Schuckers (Clarkson)

An experiment will partially replicate one conducted with EU border guards on detecting impostership and malintent among hooligans attending a mass public event. Participants will be UA students ostensibly planning to attend the UA-ASU football rivalry game. Imposters will purport to be ASU fans intending to disrupt the rival team’s activities and will present false documents. The main objective will be to determine which sensor(s) can best detect false identities and the (in)ability to maintain a false cover story.

CITeR: Center for Identification Technology Research
