Next generation lie detection software
- David Bennett
- Mar 21, 2020
- 4 min read
Updated: Oct 18, 2020
A look at the dangers of ever-advancing technological progress in the field of lie detection.

Abstract:
I argue that unless action is taken to restrict the availability of next-generation lie detection software, government security services' use of Covert Human Intelligence Sources (agents) will be so severely compromised that these informants will, in effect, become obsolete in the fight against terror. We are currently on course for this catastrophic development in the very near future if politicians fail to intervene.
We face the prospect of terror groups becoming impossible to infiltrate. The potential loss of such crucial informant information is extremely concerning, as agent intel is the intelligence agencies' primary tool for foiling terror plots. This frightening eventuality would leave us susceptible to large-scale terrorist attacks with little recourse, and so must be avoided at all costs.
This dangerous scenario could materialise only if terrorist organisations acquire the capability to screen every operative in their network with lie detection software and ostracise personnel who fail the test. It is therefore essential to examine the lie detection software available today, its efficacy, and what is likely to come in the near future, and for governments to consider restricting software produced for the purpose of detecting deception.
Background:
Covert Human Intelligence Sources (agents) are vital to the security services, who depend on tip-offs from moles to identify security threats and prevent them. The vast majority of foiled terrorist plots are thwarted because a source close to or inside a terror cell leaks details of the plot to the security services. As such, an effective informant network is crucial to keeping citizens safe from terrorism.
Due to technological advances, highly effective and readily available lie detection software is set to jeopardise these informant networks and threatens to render the security services totally ineffectual in combating terrorism. It is inevitable that groups intent on committing acts of terror will look to use lie detection software to shore up their own ranks and ensure that only loyal participants are privy to sensitive information.
The ability to safeguard the operational details of a terror plot would open the door to far more elaborate and large-scale schemes, ultimately making such groups more dangerous. The low risk of being caught would appeal to more would-be terrorists and encourage more frequent attacks involving more participants.
Furthermore, highly efficacious lie detection software threatens not only counter-terrorism measures: the same concerns apply to organised crime groups, diminishing law enforcement's ability to penetrate them and bring them to justice.
Technology that exists today and a look to the future:
Several private sector companies are racing to capitalise on next-generation lie detection technology. These start-ups are commercialising systems that already boast very high levels of accuracy and are selling them to the public.
EyeDetect by Converus is one such system. It is already on the market and boasts 86% accuracy in a 30-minute screening test and 90% accuracy in a 15-minute single-issue test. If EyeDetect and a software polygraph are used together, however, they combine statistically for a far higher-confidence outcome: test accuracy rises to 97-99% when an examinee passes or fails both tests in succession.
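The arithmetic behind a combined figure like this is worth making explicit. As a rough sketch, assuming the two tests err independently (an idealisation, and not necessarily how Converus calculates its figures) and taking 0.86 and 0.90 as illustrative per-test accuracies, the chance of both tests delivering the same wrong verdict becomes very small:

```python
# Sketch: how two imperfect tests can combine into a high-confidence result.
# Assumption: the tests err independently, which is an idealisation;
# 0.86 and 0.90 are illustrative per-test accuracies, not vendor data.
eyedetect_acc = 0.86  # e.g. a 30-minute screening test
polygraph_acc = 0.90  # e.g. a software polygraph

# Probability that BOTH tests give the same wrong verdict on one examinee:
both_wrong = (1 - eyedetect_acc) * (1 - polygraph_acc)

# Accuracy when the two verdicts agree (pass-pass or fail-fail):
combined_acc = 1 - both_wrong
print(f"Combined accuracy: {combined_acc:.1%}")  # 98.6%
```

This is only a back-of-the-envelope model; real tests are unlikely to err fully independently, so the true combined accuracy sits somewhat below this idealised figure, consistent with the quoted 97-99% band.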
More recently, artificial intelligence has been leveraged in the bid for a better lie detection system, with machine learning algorithms scanning human facial micro-expressions for tell-tale signs of deception such as “lips protruded” or “eyebrows frown,” as well as analysing audio frequencies for vocal patterns that reveal whether a person is lying.
Computer science researchers from the University of Maryland have developed the Deception Analysis and Reasoning Engine (DARE), a system that uses artificial intelligence to autonomously detect deception in this way. The algorithm was trained on videos of previous courtroom trials in which a verdict was reached. The system then compares the learned signs of truth or deception to the micro-expressions displayed by the examinee.
DARE scores an impressive AUC (Area Under the ROC Curve) of 0.877, which improves to 0.922 when combined with human annotations of micro-expressions; ordinary people score an AUC of around 0.58. The DARE algorithm is improving all the time as the AI continues learning. In fact, Bharat Singh, a researcher on the DARE team at Maryland, predicts we could be just three to four years away from an AI that detects deception flawlessly by reading the emotions behind human expressions.
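For readers unfamiliar with the metric: AUC is the probability that a randomly chosen positive example (here, a deceptive clip) is scored higher than a randomly chosen negative one, so 0.5 is chance and 1.0 is perfect. A minimal sketch with made-up toy scores (illustrative only, not DARE's data):

```python
# Sketch: AUC as a pairwise ranking probability (toy data, not DARE's).
def auc(pos_scores, neg_scores):
    """Fraction of (positive, negative) pairs ranked correctly; ties count half."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos_scores
        for n in neg_scores
    )
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical classifier scores: higher means "more likely deceptive".
deceptive = [0.9, 0.8, 0.6, 0.4]  # clips that really were deceptive
truthful = [0.7, 0.5, 0.3, 0.1]   # clips that really were truthful
print(auc(deceptive, truthful))  # 0.8125: 13 of 16 pairs ranked correctly
```

On this view, DARE's 0.877 means that in roughly 88% of deceptive/truthful pairings, the deceptive clip gets the higher score, while a human's 0.58 is barely better than coin-flipping.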
A need to protect existing and past sources:
As well as retaining the ability to use informants in the future, we also have a moral obligation to protect individuals who currently contribute, or have previously contributed, to intelligence gathering. It is clearly conceivable that human intelligence sources who have informed on terrorist operations in the past could be endangered if terror groups and rogue regimes were to use lie detection software routinely. In the most severe cases, lives could be lost to reprisals.
Recommendation:
Governments should ban lie detection software and algorithms because of the huge risk that they will be employed by nefarious non-state actors, leading to an increase in the frequency and sophistication of terrorist attacks as well as a resurgence of organised crime. It is nevertheless advisable for governments themselves to invest in developing this technology, as it could be useful to them, provided it is kept safely guarded and out of the hands of the general public and private sector at all times. Governments would do well to introduce a national ban immediately, but although that will help, it will not keep citizens completely safe while software can traverse national borders and plots are formulated overseas. For this reason, international cooperation is needed to agree a worldwide ban on next-generation lie detector technology.
Author:
David Ian Bennett
Published:
20th March 2020