
Detecting Deception in a Security Screening Scenario

APA Reference

Meservy, T. O., Jensen, M. L., Kruse, J., Burgoon, J. K., & Nunamaker, J. F., Jr. (2006, January 4-7). Detecting deception in a security screening scenario. Proceedings of the 39th Annual Hawaii International Conference on System Sciences (CD-ROM), Koloa, Kauai, HI.

Abstract

Detecting deception is critical in security screening. There are numerous ways to detect deception; however, not all of them are amenable to use in the security screening scenario. An approach to deception detection based on body movement is presented. It is argued that such an approach is amenable to screening because it is automatic, unobtrusive, and can be used with various sampling rates. An experiment showing robustness to decreasing sampling rates is shared.

Authors' Bio (name, school)

Thomas O. Meservy
Matthew L. Jensen
John Kruse
Judee K. Burgoon
Jay F. Nunamaker

Problem Statements/Phenomena

Border and airport security environments require accurate and unobtrusive deception detection techniques. Security personnel suffer from either a truth bias or a lie bias. Current methods of deception detection rely on micro-movements, which are hard to detect. Security screening environments may also demand low video frame rates. Blob analysis may help in this environment and under this technical constraint by analyzing macro movements of the head and hands.

Research Questions

  1. Discussion of the strengths and weaknesses of current deception detection techniques in a security screening environment (not a question, but half the paper is dedicated to this topic)
  2. What are the consequences of varying the frames per second of input video used in blob analysis?
  3. How many video frames per second are required to achieve a reasonable level of classification accuracy?

Theory Used or Developed

The authors contrast current deception detection methods and the theoretical basis of each. The authors' experiment is based on behavioral analysis assumptions (deceivers behave differently than truthtellers) and Interpersonal Deception Theory.

Current methods of deception detection rely on physiological changes or behavioral changes in those who are deceptive.

Table 1: Examples of current deception detection methods.

Physiological: polygraph, brain scan, thermal scan, voice stress
Behavioral: statement validity assessment, linguistic analysis, micro-momentary expression analysis, behavioral analysis
polygraph
uses the Control Question Test (CQT) or Guilty Knowledge Test (GKT) to arouse the suspect. The CQT has been criticized as scientifically unreliable; the GKT requires knowledge of crime details.
Statement validity assessment (SVA)
focuses on verbal content and meaning for deception detection. Effective, but it requires trained personnel and cannot be automated.
Criteria-Based Content Analysis (CBCA)
an SVA method based on the Undeutsch hypothesis, which states that "a statement derived from a memory of an actual experience differs in content and quality from a statement based on invention or fantasy."

CBCA takes place during a structured interview where the interviewer scores responses according to predefined criteria such as general characteristics, specific contents, motivation-related contents, and offense-related elements. CBCA has been used successfully in judging the validity of statements given by children, and it has been used in criminal cases where children are involved [3]. pg. 3
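To make the scoring step concrete, below is a minimal sketch of a criteria-based tally in Python, assuming hypothetical criterion groups (borrowed from the categories named above) and 0-2 ratings; the real CBCA instrument uses 19 specific criteria rated by trained interviewers.

    # Illustrative only: hypothetical criterion groups and 0-2 ratings,
    # not the actual 19-criterion CBCA instrument.
    ratings = {
        "general_characteristics": 2,      # e.g., logical structure present
        "specific_contents": 1,            # e.g., contextual embedding
        "motivation_related_contents": 0,
        "offense_related_elements": 1,
    }

    total = sum(ratings.values())
    max_total = 2 * len(ratings)

    # Under the Undeutsch hypothesis, higher totals suggest the statement is more
    # consistent with recalled experience; any cut-off would be set empirically.
    print(f"CBCA-style score: {total}/{max_total}")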

Reality Monitoring (RM)
an SVA method focused on perceptual, contextual, and affective differences

RM also uses a scoring mechanism to judge potential deception; however, it is based on the hypothesis that verbal recall of actual events will contain more perceptual, contextual, and affective information than recall of fabricated events. Reality monitoring requires the interviewer to judge levels of clarity, perceptual information, spatial information, temporal information, affect, reconstructability of the story, realism, and cognitive operations [12]. pg. 3

linguistic analysis
detects deception in written statements; e.g. feature mining, speech act profiling.
message feature mining
differences between deceivers and truthtellers in written statements are evident in average sentence length, passive voice ratio, emotiveness, and word diversity (see the sketch after this list)
behavior analysis
based on the theory that deceivers act differently than truthtellers. Many people are fooled by deceptive-behavior myths. Some evidence of deceptive behavior includes lack of head movement and lack of illustrating gestures.
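For the message feature mining entry above, here is a minimal Python sketch of computing the named cues from a written statement; the passive-voice heuristic and the tiny affect-word list are simplifying assumptions standing in for a proper part-of-speech tagger, not the authors' actual feature definitions.

    import re

    def message_features(text):
        """Compute simple cues named above: average sentence length, a rough
        passive-voice ratio, emotiveness, and word (lexical) diversity.
        Heuristics here are illustrative assumptions, not the paper's method."""
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        words = re.findall(r"[a-zA-Z']+", text.lower())

        avg_sentence_len = len(words) / max(len(sentences), 1)

        # Crude passive-voice proxy: a "to be" form followed by a word ending in -ed/-en.
        passives = re.findall(r"\b(?:was|were|been|being|is|are)\s+\w+(?:ed|en)\b", text.lower())
        passive_ratio = len(passives) / max(len(sentences), 1)

        # Emotiveness is often defined via adjective/adverb counts; without a POS
        # tagger we just flag a tiny example lexicon of affect words.
        affect_words = {"afraid", "angry", "happy", "worried", "sorry", "upset"}
        emotiveness = sum(w in affect_words for w in words) / max(len(words), 1)

        lexical_diversity = len(set(words)) / max(len(words), 1)
        return {
            "avg_sentence_length": avg_sentence_len,
            "passive_voice_ratio": passive_ratio,
            "emotiveness": emotiveness,
            "lexical_diversity": lexical_diversity,
        }

    print(message_features("The wallet was taken before class. I did not see who took it."))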

Assumption:

  1. macro behavior of deceivers is different from that of truthtellers
  2. Gestures last less than 5 seconds

Independent Variables

The authors examined 12 different video frame rates as a moderator variable between the macro-behavior feature set and deception/truth classification accuracy:

  • 30 fps,
  • 15 fps,
  • 10 fps,
  • 5 fps,
  • 3 fps,
  • 1 fps,
  • 1 frame every 2 seconds (1F2S),
  • 1 frame every 3 seconds (1F3S),
  • 1 frame every 4 seconds (1F4S),
  • 1 frame every 5 seconds (1F5S),
  • 1 frame every 10 seconds (1F10S),
  • 1 frame every 20 seconds (1F20S).

The macro-behavior feature set (independent variable) is not described in this paper but is described in Meservy et al.'s 2005 paper, Automatic Extraction of Deceptive Behavioral Cues from Video.

Dependent Variables

classification accuracy (deceptive or truthful)

Hypotheses

H1 Higher frame rate will contribute to better classification accuracy
H2 At some point higher frame rates will decrease classification accuracy

Terminology

blob analysis
determining deception or truthfulness based on head and hand movements using a computer to analyze video; the tracking approach was developed at the Center for Computational Biomedicine Imaging and Modeling (CBIM) at Rutgers University (a rough sketch follows this list)
Othello error
misjudging a truthful person as deceptive because signs of stress are read as lying (related to a lie bias)
movement-based indicators
deception cues based on body movements, e.g. head, hand, and leg movements
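As a rough illustration of the blob analysis term above, the sketch below locates skin-colored regions in a frame with OpenCV and returns their centroids; the color thresholds and the assumption that the largest blobs correspond to the head and hands are purely illustrative, not the CBIM tracking software the authors actually used.

    import cv2
    import numpy as np

    def find_blobs(frame_bgr, max_blobs=3):
        """Return centroids of the largest skin-colored regions in a frame.
        Thresholds and the 'largest blobs = head + hands' assumption are
        illustrative; the authors relied on dedicated tracking software."""
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        # Rough skin-tone range in HSV (assumption; would need tuning per setting).
        mask = cv2.inRange(hsv, np.array([0, 40, 60]), np.array([25, 180, 255]))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        contours = sorted(contours, key=cv2.contourArea, reverse=True)[:max_blobs]

        centroids = []
        for c in contours:
            m = cv2.moments(c)
            if m["m00"] > 0:
                centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
        return centroids  # e.g., [(head_x, head_y), (hand_x, hand_y), ...]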

Methodology

Method Type:

  1. argumentative method used to describe blob analysis in screening environment
  2. lab experiment for demonstration of frame rate effects on blob analysis

Description:

The authors use existing video data collected in the Mock Theft experiment (interviews about a wallet stolen from a classroom with many witnesses) to apply blob analysis techniques. The authors vary frame rates and use an 8-variable feature-set model of macro behavior as the independent variable affecting classification accuracy (deceptive or truthful).

The procedure follows these steps (a feature-computation sketch follows the list).

  1. input video data (procedures for video processing are on page 4)
  2. extract metrics from head and hand movements
  3. calculate features (over 150 features are possible, classified as single-frame and multiple-frame cues)
  4. classify as deceptive or truthful using statistical or AI methods
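A minimal sketch of steps 2-3, turning per-frame head and hand positions into single-frame and multiple-frame cues; the two cues shown (head-to-hand distance and hand speed) are plausible examples only, not the authors' actual feature set.

    import math

    def frame_cues(blobs):
        """Single-frame cue computed from one sampled frame's blob positions.
        blobs = {"head": (x, y), "left_hand": (x, y), "right_hand": (x, y)}."""
        dists = [math.dist(blobs["head"], blobs[k]) for k in ("left_hand", "right_hand")]
        return {"mean_head_hand_distance": sum(dists) / len(dists)}

    def multi_frame_cues(prev_blobs, blobs, dt):
        """Multiple-frame cue: mean hand speed between two consecutive sampled
        frames, where dt is the time between them (1/30 s at 30 fps, 2 s at 1F2S)."""
        speeds = [math.dist(prev_blobs[k], blobs[k]) / dt for k in ("left_hand", "right_hand")]
        return {"mean_hand_speed": sum(speeds) / len(speeds)}

    # Example use on two sampled frames (coordinates are made up):
    f1 = {"head": (160, 90), "left_hand": (120, 200), "right_hand": (210, 195)}
    f2 = {"head": (161, 91), "left_hand": (140, 180), "right_hand": (205, 198)}
    print(frame_cues(f2))
    print(multi_frame_cues(f1, f2, dt=1 / 30))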

Video was originally captured at 30 fps and then edited via code to simulate the lower frame rates.
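A minimal sketch of how such down-sampling could be simulated by keeping every n-th frame of the 30 fps source; the paper does not give the authors' actual code, so the function and its parameters are assumptions.

    def subsample(frames, original_fps=30.0, target_fps=1.0):
        """Keep every n-th frame to simulate a lower sampling rate.
        target_fps=0.5 corresponds to the paper's 1F2S condition, 0.2 to 1F5S, etc."""
        step = max(int(round(original_fps / target_fps)), 1)
        return frames[::step]

    frames = list(range(300))               # 10 seconds of 30 fps frame indices
    print(len(subsample(frames, 30, 5)))    # 5 fps -> 50 frames
    print(len(subsample(frames, 30, 0.5)))  # 1F2S  -> 5 frames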

Subject and Selection Criteria:

  1. preexisting data from Mock Theft experiment involving students
  2. limited the narratives to only those regarding the theft, excluding baseline questions and other questions

Sample Size:

38 recorded interactions; 16 were truthful, 22 were deceptive in nature

Measuring Instrument:

blob analysis "with statistical averages and variances of all feature values summarized across all frames for each clip" pg 6

stepwise discriminant analysis
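A minimal sketch of the measuring step, assuming synthetic data: per-frame features are collapsed into per-clip means and variances and then classified; scikit-learn's LinearDiscriminantAnalysis stands in for the stepwise discriminant analysis the authors actually ran.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(0)

    def summarize_clip(per_frame_features):
        """Collapse an (n_frames, n_features) array into per-clip means and
        variances, mirroring the 'averages and variances across all frames
        for each clip' summary quoted above."""
        return np.concatenate([per_frame_features.mean(axis=0),
                               per_frame_features.var(axis=0)])

    # Synthetic stand-in for 38 clips x 8 per-frame movement features (illustrative only).
    clips = [rng.normal(size=(90, 8)) for _ in range(38)]
    X = np.vstack([summarize_clip(c) for c in clips])
    y = np.array([0] * 16 + [1] * 22)   # 16 truthful, 22 deceptive, as in the paper

    lda = LinearDiscriminantAnalysis()
    lda.fit(X, y)
    print("training accuracy:", lda.score(X, y))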

Major Findings

General decline in accuracy as frame rate decreased.

All frame rates above 1 frame every 3 seconds had classification accuracy above 80%.

"Need at least 1 frame per second of video data to support our method of deception detection." pg8

"All of the models are highly significant (p < .025)" pg.7 Don't know what models the authors are referring too, feature set or frame rates.

All of the models for the conditions that have a frame rate of at least 1 frame per second are highly significant (p<.001), and all of the models that are 1 frame per every 2 seconds or less are not significant (p>.10). pg8

[Images: blob_analysis_results.gif, blog_analysis_results2.gif (blob analysis results figures)]

Discussion Summary & Author Recommendations

Deception detection in screening environments must be scalable, robust, and noninvasive.

As alluded to previously, a key advantage to using movement-based cues to establish deception is that it can be done unobtrusively and under varying conditions. This method focuses on macro behavior which lowers the precision required in measurement and provides flexibility in the sampling. This allows the method to function well in a natural environment.

Our approach of capturing macro-level movements of the head and hands has shown promising but limited results. However, these results have primarily been based on analyzing the dataset using 30 video frames per second (fps). We assert that our approach, unlike other methods of deception detection (including micro-momentary facial analysis), can function using a much lower sampling rate (less than 30 frames per second).

We assert that, while the results are not decisive, they do provide evidence that our approach shows promise at rates lower than 30 frames per second. The data suggest that frame rates of 1 frame every 2 seconds or lower may be problematic for our approach when 1) the variables of a model are defined a priori and 2) the model was initially derived for video containing 30 fps.

Opportunities for blob analysis at border security:

  1. while people are in lines or common areas looking for arousal cues indicating concealment or avoidance
  2. when meeting with security personnel
  3. when selected for additional screening
  4. can be used in conjunction with other detection methods; e.g. thermal imaging, voice stress, arousal based tools, linguistic tools

Why paper is important? Why paper is cited?

Future blob analysis studies can use these frame rate guidelines in their methodology.

Sean's comments: It is unknown what the authors mean by the "30-fps model of 8-feature sets" (pg 8). Is the model the use of a particular frame rate or a particular set of macro-behavior features? It can't be the former because Table 1 shows varying frame rates while the caption says the results use the "30-fps model."

Persistent Link to Library

HICSS paper; see AFOSR & CMI repository for paper

Unless otherwise stated, the content of this page is licensed under Creative Commons Attribution-Share Alike 2.5 License.