
Analysis of Competing Hypotheses

From Cognopedia

The Analysis of Competing Hypotheses (ACH) provides an unbiased methodology for evaluating multiple competing hypotheses for observed data. It was developed by Richards (Dick) J. Heuer, Jr., a 45-year veteran of the Central Intelligence Agency, in the 1970s for use by the Agency[1] and is used by analysts in various fields who make judgments that entail a high risk of error in reasoning. It helps an analyst overcome, or at least minimize, some of the cognitive limitations that make prescient intelligence analysis so difficult to achieve.[1]

ACH was a step forward in intelligence analysis methodology, but it was first described in relatively informal terms. Producing the best available information from uncertain data remains the goal of researchers, tool-builders, and industrial, academic, and government data analysts alike. Their domains include data mining, cognitive psychology and visualization, and probability and statistics. Abductive reasoning is an earlier concept with points of similarity to ACH.


Heuer outlines the ACH process in considerable depth in his book, Psychology of Intelligence Analysis.[1] It consists of the following steps:

  • Hypothesis – The first step of the process is to identify all potential hypotheses, preferably using a group of analysts with different perspectives to brainstorm the possibilities. The process discourages the analyst from choosing one "likely" hypothesis and using evidence to prove its accuracy. Cognitive bias is minimized when all possible hypotheses are considered.[1]
  • Evidence – The analyst then lists evidence and arguments (including assumptions and logical deductions) for and against each hypothesis.[1]
  • Diagnostics – With the use of a matrix, the analyst applies evidence against each hypothesis in an attempt to disprove as many theories as possible. Some evidence will have greater "diagnosticity" than others—that is, some will be most helpful in judging the relative likelihood of alternative hypotheses. According to Heuer, this step is the most important. Instead of looking at one hypothesis and all evidence, the matrix forces the analyst to consider one piece of evidence and examine it against all possible hypotheses.[1]
  • Refinement – At this point the analyst reviews his/her findings and then identifies gaps and collects the additional evidence needed to refute as many of the remaining hypotheses as possible.[1]
  • Inconsistency – The analyst seeks to draw tentative conclusions about the relative likelihood of each hypothesis: the less consistent a hypothesis is with the evidence, the less likely it is. The least consistent hypotheses are eliminated. While the matrix generates a mathematical sum, it is up to the analyst to use his/her judgment to decide on the conclusion.
  • Sensitivity – The analyst then tests his/her conclusions using sensitivity analysis, which weighs how the conclusion would be affected if key evidence or arguments were wrong, misleading, or subject to a different interpretation. The validity of key evidence and the consistency of important arguments are double-checked to assure that the conclusion's linchpins and drivers are sound.[1]
  • Conclusions and evaluation – The analyst then provides the decisionmaker with his/her conclusions, as well as a summary of alternatives that were considered and why they were rejected. The analyst also identifies milestones in the process that can serve as indicators in future analyses.[1]
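The matrix at the heart of the steps above can be sketched in code. The scoring scheme (+1 consistent, −1 inconsistent, 0 neutral), the evidence labels, and the hypothesis names below are illustrative assumptions, not Heuer's exact notation:

```python
# A minimal sketch of an ACH matrix: rows are pieces of evidence,
# columns are hypotheses, and each cell records consistency (+1),
# inconsistency (-1), or neutrality (0). All values here are invented.

HYPOTHESES = ["H1", "H2", "H3"]
EVIDENCE = ["E1: sensor report", "E2: intercepted message", "E3: expert assessment"]

matrix = [
    [+1, -1,  0],   # E1 supports H1, argues against H2, neutral on H3
    [-1, -1, +1],   # E2 argues against both H1 and H2
    [ 0, +1, +1],   # E3 is neutral toward H1
]

def diagnosticity(row):
    """Evidence that scores identically against every hypothesis
    cannot help discriminate between them."""
    return len(set(row)) > 1

def inconsistency_scores(matrix):
    """Per hypothesis, sum only the inconsistent (-1) cells: Heuer ranks
    hypotheses by how much evidence argues *against* them."""
    return [sum(row[j] for row in matrix if row[j] < 0)
            for j in range(len(matrix[0]))]

for ev, row in zip(EVIDENCE, matrix):
    print(ev, "->", "diagnostic" if diagnosticity(row) else "non-diagnostic")
for h, s in zip(HYPOTHESES, inconsistency_scores(matrix)):
    print(h, "inconsistency:", s)
```

Here H3 attracts the least contradicting evidence, but, as the Inconsistency step notes, the final call rests on the analyst's judgment rather than the raw score.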


An ACH matrix has several benefits. It is auditable: since ACH requires the analyst to construct a matrix, the evidence and hypotheses can be backtracked, allowing the decisionmaker or other analysts to see the sequence of rules and data that led to the conclusion. It is also widely believed to help overcome cognitive biases, though there is a lack of strong empirical evidence to support this belief.[2]


The process to create an ACH is time consuming. The ACH matrix can be problematic when analyzing a complex project. It can be cumbersome for an analyst to manage a large database with multiple pieces of evidence.

Especially in intelligence, both governmental and business, analysts must always be aware that opponents are intelligent and may be generating information intended to deceive.[3][4] Since deception is often the result of a cognitive trap, Elsaesser and Stech use state-based hierarchical plan recognition (see abductive reasoning) to generate causal explanations of observations. The resulting hypotheses are converted to a dynamic Bayesian network, and value-of-information analysis is employed to isolate assumptions implicit in the evaluation of paths in, or conclusions of, particular hypotheses. As evidence in the form of observed states or assumptions accumulates, each item can become the subject of separate validation. Should an assumption or necessary state be negated, hypotheses depending on it are rejected. This is a form of root cause analysis.
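The rejection rule can be illustrated with a small sketch. This is not the Elsaesser–Stech implementation; the hypothesis names and dependency sets are invented, and only the "negated assumption rejects dependent hypotheses" step is shown:

```python
# Sketch only: each hypothesis depends on a set of assumptions or
# required states; negating any of them rejects the hypothesis.
# All names below are hypothetical.

hypotheses = {
    "deception":    {"opponent_controls_channel", "opponent_knows_we_listen"},
    "genuine_leak": {"source_has_access"},
}

negated = set()

def negate(assumption):
    """Record that separate validation has disproved an assumption."""
    negated.add(assumption)

def surviving(hypotheses, negated):
    """Reject any hypothesis that depends on a negated assumption."""
    return [h for h, deps in hypotheses.items() if not (deps & negated)]

negate("opponent_knows_we_listen")    # disproved by separate validation
print(surviving(hypotheses, negated))  # -> ['genuine_leak']
```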

Evidence also presents a problem if it is unreliable. The evidence used in the matrix is static, and may therefore represent only a snapshot in time.

van Gelder[5] has made the following criticisms:

  • ACH demands that the analyst make too many discrete judgments, a great many of which contribute little, if anything, to discerning the best hypothesis.
  • ACH misconceives the relationship between items of evidence and hypotheses by supposing that items of evidence are, on their own, consistent or inconsistent with hypotheses.
  • ACH treats the hypothesis set as "flat", i.e., a mere list, and so is unable to relate evidence to hypotheses at the appropriate levels of abstraction.
  • ACH cannot represent subordinate argumentation, i.e., the argumentation bearing on a piece of evidence.
  • ACH activities at realistic scales leave analysts disoriented or discombobulated.

Structured Analysis of Competing Hypotheses

The Structured Analysis of Competing Hypotheses (SACH) offers analysts an improvement over the limitations of the original ACH.[6] SACH expands the set of possible hypotheses by allowing the analyst to split one hypothesis into more refined ones.

For example, two tested hypotheses could be that Iraq has WMD or that Iraq does not have WMD. If the evidence suggested that WMD in Iraq are more likely, two new hypotheses could be formulated: WMD are in Baghdad, or WMD are in Mosul. Alternatively, the analyst may need to know what type of WMD Iraq has; the new hypotheses could be that Iraq has biological WMD, Iraq has chemical WMD, and Iraq has nuclear WMD. By giving the ACH structure, the analyst is able to give a more nuanced estimate.[7]
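The refinement idea amounts to treating the hypothesis set as a tree rather than a flat list. The sketch below is a hedged illustration of that structure, not the SACH methodology itself; the tested hypotheses are the leaves of the tree:

```python
# Hypothetical sketch of SACH-style refinement: a favoured hypothesis is
# split into more specific sub-hypotheses, forming a tree.

from collections import defaultdict

children = defaultdict(list)

def split(parent, subs):
    """Refine a hypothesis into more specific sub-hypotheses."""
    children[parent].extend(subs)

def leaves(root):
    """The hypotheses actually tested are the leaves of the tree."""
    if not children[root]:
        return [root]
    return [leaf for c in children[root] for leaf in leaves(c)]

split("Iraq's WMD status", ["Iraq has WMD", "Iraq has no WMD"])
split("Iraq has WMD", ["biological WMD", "chemical WMD", "nuclear WMD"])
print(leaves("Iraq's WMD status"))
# -> ['biological WMD', 'chemical WMD', 'nuclear WMD', 'Iraq has no WMD']
```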

Other approaches to formalism

One method, by Valtorta and colleagues, uses probabilistic methods, adding Bayesian analysis to ACH.[8] The work by Akram and Wang applies paradigms from graph theory.[9]
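One Bayesian reading of ACH can be sketched briefly. This is not Valtorta et al.'s actual model (which extends Heuer's method to complex decision analysis); it only shows the basic idea of treating mutually exclusive hypotheses as a prior distribution updated by the likelihood of each piece of evidence:

```python
# Hedged sketch: sequential Bayesian updating over a set of mutually
# exclusive hypotheses. The priors and likelihoods are invented.

def bayes_update(priors, likelihoods):
    """priors: {hypothesis: P(H)}; likelihoods: {hypothesis: P(E|H)}.
    Returns the normalized posterior P(H|E)."""
    unnorm = {h: priors[h] * likelihoods[h] for h in priors}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

posterior = {"H1": 0.5, "H2": 0.5}        # uninformative prior
evidence = [
    {"H1": 0.8, "H2": 0.3},               # E1 is more likely under H1
    {"H1": 0.4, "H2": 0.9},               # E2 is more likely under H2
]
for likelihood in evidence:
    posterior = bayes_update(posterior, likelihood)
print(posterior)
```

Unlike the +1/−1 cells of a plain ACH matrix, each likelihood here grades *how strongly* a piece of evidence favours a hypothesis, and the posterior always sums to one.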

Other work focuses less on probabilistic methods and more on cognitive and visualization extensions to ACH, as discussed by Madsen and Hicks.[10] DECIDE, discussed under automation below, is visualization-oriented.[11]

Work by Pope and Jøsang uses subjective logic, a formal mathematical methodology that explicitly deals with uncertainty.[12] This methodology forms the basis of the Sheba technology that is used in Veriluma's intelligence assessment software.
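The basic object of subjective logic can be shown in a few lines. The full Pope–Jøsang ACH-SL operators are considerably more involved; this sketch only models an opinion and its probability expectation, and the numbers are invented:

```python
# Minimal sketch of a subjective-logic opinion: belief, disbelief, and
# uncertainty sum to one, and the base rate fills in for missing
# evidence when projecting a probability.

from dataclasses import dataclass

@dataclass
class Opinion:
    belief: float       # b: evidence for the proposition
    disbelief: float    # d: evidence against it
    uncertainty: float  # u: lack of evidence; b + d + u = 1
    base_rate: float    # a: prior probability absent any evidence

    def expectation(self):
        """Projected probability E = b + a*u (Josang's definition)."""
        return self.belief + self.base_rate * self.uncertainty

# Strong but incomplete evidence for a hypothesis:
op = Opinion(belief=0.6, disbelief=0.1, uncertainty=0.3, base_rate=0.5)
print(op.expectation())  # -> 0.75
```

Because uncertainty is explicit, an analyst's "I don't know yet" survives into the arithmetic instead of being forced into a consistent/inconsistent cell.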


Automation

A few online and downloadable tools help automate the ACH process. These programs leave a visual trail of evidence and allow the analyst to weigh evidence.

PARC ACH 2.0[13] was developed by Palo Alto Research Center (PARC) in collaboration with Richards J. Heuer, Jr. It is a standard ACH program that allows analysts to enter evidence and rate its credibility and relevance. Another useful program is the Decision Command software created by Dr. Willard Zangwill.[14]

SSS Research, Inc. is an analytic research firm that created DECIDE.[11][15] DECIDE not only allows analysts to manipulate an ACH matrix, but also provides multiple visualization products.[16]

Competing Hypotheses is an open source ACH implementation.

References


  1. Heuer, Richards J., Jr., "Chapter 8: Analysis of Competing Hypotheses", Psychology of Intelligence Analysis, Center for the Study of Intelligence, Central Intelligence Agency, https://www.cia.gov/library/center-for-the-study-of-intelligence/csi-publications/books-and-monographs/psychology-of-intelligence-analysis/art11.html 
  2. Thomason, Neil (2010), "Alternative Competing Hypotheses", Field Evaluation in the Intelligence and Counterintelligence Context: Workshop Summary, National Academies Press, http://www.nap.edu/catalog/12854.html 
  3. Elsaesser, Christopher; Stech, Frank J. (2007), "Detecting Deception", in Kott, Alexander; McEneaney, William, Adversarial Reasoning: Computational Approaches to Reading the Opponent’s Mind, Chapman & Hall/CRC, pp. 101–124 
  4. Stech, Frank J.; Elsaesser, Christopher, Deception Detection by Analysis of Competing Hypotheses, MITRE Corporation, https://analysis.mitre.org/proceedings/Final_Papers_Files/94_Camera_Ready_Paper.pdf 
  5. van Gelder, Tim (December 2008), "Can we do better than ACH?", AIPIO News (Australian Institute of Professional Intelligence Officers) (Issue 55), http://timvangelder.com/2007/12/31/hypothesis-testing-whats-wrong-with-ach/ 
  6. Wheaton, Kristan J., et al. (November-December 2006), "Structured Analysis of Competing Hypotheses: Improving a Tested Intelligence Methodology", Competitive Intelligence Magazine 9 (6): 12–15, http://www.mcmanis-monsalve.com/assets/publications/intelligence-methodology-1-07-chido.pdf 
  7. Chido, Diane E. et al. (2006), Structured Analysis Of Competing Hypotheses: Theory and Application, Mercyhurst College Institute for Intelligence Studies Press, p. 54 
  8. Valtorta, Marco et al. (May 2005), "Extending Heuer's Analysis of Competing Hypotheses Method to Support Complex Decision Analysis", International Conference on Intelligence Analysis Methods and Tools, http://www.cse.sc.edu/~mgv/reports/IA-05.pdf 
  9. Akram, Shaikh Muhammad; Wang, Jiaxin (23 August 2006), "Investigative Data Mining: Connecting the dots to disconnect them", Proceedings of the 2006 Intelligence Tools Workshop, pp. 28–34, http://www.huitfeldt.com/repository/ITW06.pdf 
  10. Madsen, Fredrik H.; Hicks, David L. (23 August 2006), "Investigating the Cognitive Effects of Externalization Tools", Proceedings of the 2006 Intelligence Tools Workshop, pp. 4–11, http://www.huitfeldt.com/repository/ITW06.pdf 
  11. Cluxton, Diane; Eick, Stephen G., "DECIDE Hypothesis Visualization Tool", 2005 Intl conf on Intelligence Analysis, https://analysis.mitre.org/proceedings/Final_Papers_Files/119_Camera_Ready_Paper.pdf 
  12. Pope, Simon; Josang, Audun (June 2005), Analysis of Competing Hypotheses using Subjective Logic (ACH-SL), Queensland University, Brisbane, Australia, ADA463908, http://handle.dtic.mil/100.2/ADA463908 
  13. Xerox Palo Alto Research Center and Richards J. Heuer, ACH2.0.3 Download Page: Analysis of Competing Hypotheses (ACH), http://www2.parc.com/istl/projects/ach/ach.html 
  14. Zangwill, Willard, Quantinus Biography, http://www.quantinus.com/aboutus/biodrwillardzangwill.asp 
  15. Lankenau, Russell A. et al. (July 2006), SSS Research, Inc. – DECIDE, VAST 2006 Contest Submission, http://www.cs.umd.edu/hcil/VASTcontest06/SUBMITTED/SSSResearch-DECIDE/index.html 
  16. SSS Research (– Scholar search), DECIDE: from Complexity to Clarity, http://www.sss-research.com/decide.aspx 
