Portal field news
⚽ | Spain national team: Corona-positive Llorente is back! Welcomed with applause

In brief: Marca suggests that Llorente's test may have been a false positive.

Members of the Spanish national team tested positive for the new coronavirus just before the opening of EURO. DF Diego Llorente was one of them ... → Continue reading


Wikipedia related words

If there is no explanation, there is no corresponding item on Wikipedia.

Type I error and Type II error

Type I error (false positive[1]) and type II error (false negative[2]) are terms used in hypothesis testing to describe errors of inference. A type I error is also called an α error or a "hasty error"[3]; a type II error is also called a β error or a "careless error"[3]. Here "error" means making a mistake in a sorting task such as binary classification.

Statistical and systematic errors

There are two types of errors[4].

Statistical error
The difference between a calculated or measured value and the true theoretical value that is caused by random, inherently unpredictable fluctuations[5].
Systematic error
The difference between a calculated or measured value and the true theoretical value that is caused by a non-random effect from an unidentified source (an uncertainty); if the source is identified, it can be eliminated[5].
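The distinction can be seen in a small simulation (a sketch; the true value, noise level, and bias below are hypothetical numbers): averaging repeated measurements cancels random statistical error, but not a systematic bias.

```python
import random

random.seed(0)
TRUE_VALUE = 10.0   # the true theoretical value (hypothetical)
BIAS = 0.3          # a systematic miscalibration of unknown source (hypothetical)

# Statistical error only: random, unpredictable fluctuation around the truth.
statistical = [TRUE_VALUE + random.gauss(0, 0.5) for _ in range(1000)]

# Statistical + systematic error: the same noise plus a constant bias.
systematic = [TRUE_VALUE + BIAS + random.gauss(0, 0.5) for _ in range(1000)]

mean_stat = sum(statistical) / len(statistical)
mean_sys = sum(systematic) / len(systematic)

# Averaging many readings cancels the random part but leaves the bias.
print(abs(mean_stat - TRUE_VALUE))  # small
print(abs(mean_sys - TRUE_VALUE))   # stays near BIAS
```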

Statistical errors: Type I and Type II

In statistics, the hypothesis that the evidence points to nothing is called the null hypothesis. For example, it may state that an individual is not ill, that a defendant is innocent, or that a potential login is not by an authorized user.

The hypothesis that contradicts the null hypothesis is called the alternative hypothesis. It states that the individual is ill, that the defendant is guilty, or that the login is by an authorized user.

The goal is to reject false hypotheses and adopt true ones. We perform some kind of test (a blood test, a trial, a login attempt) and obtain data.

The result of the test may be negative (not ill, not guilty, login refused), or it may be positive (ill, guilty, login successful).

If the test result does not match the actual state, an error has occurred; if it matches, the decision is correct. Depending on which hypothesis was mistakenly adopted, the error is classified as a "type I error" or a "type II error".

Type I error

A type I error (α error, false positive) is the error of rejecting the null hypothesis even though it is actually true; in other words, the error of a false alarm.

Type II error

A type II error (β error, false negative) is the error of adopting the null hypothesis even though the alternative hypothesis is actually true; in other words, the error of missing a true effect: the alternative hypothesis is not adopted even though it is correct.

Examples of errors

"Arresting the true criminal" corresponds to "rejecting the null hypothesis". A type I error is "arresting an innocent person on a false accusation"; a type II error is "letting the true criminal go free".

Article 336 of the Code of Criminal Procedure stipulates that "when the defendant's case does not constitute a crime, or when there is no proof of a crime in the defendant's case, a judgment of acquittal shall be rendered". This is also expressed as "when in doubt, do not punish". In other words, a method that avoids type I errors is to be preferred[6]. For other categories of error, see the section "Proposal for error type expansion" below.


Hypothesis testing is a technique for determining whether the difference between the distributions of two samples can be explained by random chance. When we conclude that there is a significant difference between two distributions, we must be sufficiently careful in judging that the difference cannot be explained by chance; care must be taken to minimize the probability of adopting a hypothesis that is not true. Generally, the probability of a type I error is set to .05 or .01, meaning that an error occurs in 5 or 1 out of 100 cases. This probability is called the "significance level". It cannot be said unequivocally that 5 in 100 (or 1 in 100) is strict enough, so the significance level must be chosen with great care. For example, a factory that adopts Six Sigma quality control sets its control limits at six times the standard deviation (±6σ); a deviation beyond this is extremely rare.
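As a sketch of what the significance level means, one can simulate many tests in which the null hypothesis is actually true: at α = .05 the test should wrongly reject (a type I error) in roughly 5 trials out of 100. The sample size and trial count below are arbitrary choices for illustration, and a simple two-sided z-test with known σ = 1 is assumed:

```python
import random

random.seed(1)
N_TRIALS = 5000   # number of simulated experiments
N_SAMPLES = 30    # observations per experiment
Z_CRIT = 1.96     # two-sided critical value for alpha = .05

# The null hypothesis (mean = 0) is TRUE in every simulated experiment,
# so every rejection below is a type I error.
false_rejections = 0
for _ in range(N_TRIALS):
    sample = [random.gauss(0, 1) for _ in range(N_SAMPLES)]
    mean = sum(sample) / N_SAMPLES
    z = mean / (1 / N_SAMPLES ** 0.5)  # z-statistic with known sigma = 1
    if abs(z) > Z_CRIT:
        false_rejections += 1

type_i_rate = false_rejections / N_TRIALS
print(type_i_rate)  # close to the significance level .05
```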

The strength of statistical methods lies in random sampling, which lets us observe how the difference between two distributions changes, for example before and after a treatment. But reality is clearly not that simple: when random samples are taken, it is extremely unlikely that their distributions will be exactly the same, and even when they are, it is impossible to tell whether that is a coincidence or will always be the case.


In 1928, the prominent statisticians Jerzy Neyman (1894-1981) and Egon Pearson (1895-1980) discussed the problem of "determining whether a particular sample can be judged likely to have been randomly selected from a given population"[7]. David later pointed out that the adjective "random" applies to the method of sampling, not to the sample itself[8].

They described the "two sources of error" as follows:

  • (a) the error of rejecting a hypothesis that should have been adopted
  • (b) the error of adopting a hypothesis that should have been rejected[9]

In 1930, they restated the idea of the "two sources of error":

…Any hypothesis test must take the following two points into account: (1) the test must be able to reject a hypothesis that is likely to be false; (2) it must be possible to keep the chance of rejecting a true hypothesis as low as required. [10]

In 1933, they noted that these "problems do not arise when the truth of a hypothesis can be asserted with certainty"[11]. They also assumed a "set of alternative hypotheses"[12], from which errors easily arise when deciding to reject or adopt a particular hypothesis.

…and these errors are divided into two types:

  • (I) rejecting Ho (that is, the hypothesis to be tested) when it is true
  • (II) adopting Ho when an alternative hypothesis Hi is true[11].

In Neyman and Pearson's joint papers, Ho always denotes the "hypothesis to be tested"[13]. The subscript is the letter "O" (meaning "original"), not a zero.

In the same paper[14], they call these "two sources of error" errors of type I and errors of type II[15].

Statistical treatment


Type I error and Type II error

The Neyman and Pearson definitions of error are widely adopted and are known as type I and type II errors. For clarity they are also often called false positives and false negatives. These terms have been extended beyond their original definitions and are used in many situations. For example:

  • Type I error (false positive): rejecting a null hypothesis that should be accepted; for example, convicting an innocent person.
  • Type II error (false negative): accepting a null hypothesis that should be rejected; for example, acquitting the true criminal.

The example above illustrates the ambiguity of this extended definition: here the condition of interest is "being guilty", but one could just as well take "being innocent" as the condition. The cases are shown in the table below.

                               Actual condition
                       Present                        Absent
 Test      Positive    State "present" + result       State "absent" + result
 result                "positive"                     "positive"
                       = true positive (TP)           = false positive (FP)
                                                      (type I error)
           Negative    State "present" + result       State "absent" + result
                       "negative"                     "negative"
                       = false negative (FN)          = true negative (TN)
                       (type II error)

The example of a pregnancy test is shown below.

                               Actual condition
                       Pregnant                       Not pregnant
 Test      Pregnant    True positive                  False positive
 result                                               (the test says pregnant,
                                                      but the person is not)
                                                      (type I error)
           Not         False negative                 True negative
           pregnant    (the pregnancy was
                       not detected)
                       (type II error)

Note that "true" and "false" are used here in two senses. For the actual state (condition), true means the attribute is present and false means it is absent; for the accuracy of the test result, the terms true positive / false positive / true negative / false negative are used. To avoid this confusion, the tables above label the actual state as "present/absent".
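The four cells of the tables above can be tallied directly from labeled data. A minimal sketch (the example labels are invented):

```python
def confusion_counts(actual, predicted):
    """Tally the four outcomes of a binary test against the true condition."""
    tp = sum(1 for a, p in zip(actual, predicted) if a and p)          # true positive
    fp = sum(1 for a, p in zip(actual, predicted) if not a and p)      # false positive (type I)
    fn = sum(1 for a, p in zip(actual, predicted) if a and not p)      # false negative (type II)
    tn = sum(1 for a, p in zip(actual, predicted) if not a and not p)  # true negative
    return tp, fp, fn, tn

actual    = [True, True, False, False, True, False]   # condition present?
predicted = [True, False, True, False, True, False]   # test result positive?
print(confusion_counts(actual, predicted))  # (2, 1, 1, 2)
```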

False positive rate/Type I error

The false positive rate is the proportion of samples in the truly negative population that are wrongly judged positive. That is, it equals 1 minus the specificity.

As the specificity increases, the probability of a type I error decreases, but the probability of a type II error increases[16].
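With hypothetical counts (5 false positives among 100 truly negative samples), the relationship "false positive rate = 1 - specificity" can be checked directly:

```python
def false_positive_rate(fp, tn):
    """Fraction of truly negative cases wrongly judged positive (type I)."""
    return fp / (fp + tn)

def specificity(fp, tn):
    """Fraction of truly negative cases correctly judged negative."""
    return tn / (fp + tn)

fp, tn = 5, 95  # hypothetical counts from a negative population of 100
print(false_positive_rate(fp, tn))  # 0.05
print(1 - specificity(fp, tn))      # the same value, up to float rounding
```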

False negative rate/Type II error

The false negative rate is the proportion of samples in the truly positive population that are wrongly judged negative. That is, it equals 1 minus the sensitivity.

The value 1 minus the false negative rate is called the power (of the test).
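Likewise, with hypothetical counts (20 false negatives among 100 truly positive samples), "false negative rate = 1 - sensitivity" and "power = 1 - false negative rate" can be checked:

```python
def false_negative_rate(fn, tp):
    """Fraction of truly positive cases wrongly judged negative (type II)."""
    return fn / (fn + tp)

def sensitivity(fn, tp):
    """Fraction of truly positive cases correctly judged positive."""
    return tp / (fn + tp)

fn, tp = 20, 80  # hypothetical counts from a positive population of 100
print(false_negative_rate(fn, tp))  # 0.2 (the beta of the test)
print(sensitivity(fn, tp))          # 0.8 = 1 - beta: the power
```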

Proposal for error type expansion

The type I error (false positive) and type II error (false negative) proposed by Neyman and Pearson are widely adopted, but there have been several attempts to define further types of error ("type III error", "type IV error", and so on)[17].

These have not been widely accepted. The main ones are introduced below.


David

Florence Nightingale David (1909-1993), who was a colleague of Neyman and Pearson at University College London[18], jokingly mentioned in her 1947 paper the possibility of extending their "two sources of error" with a third:

In explaining the basic ideas of this theory, I may perhaps be accused of committing an error of the third kind: choosing the wrong test to fit the sample. [19]


Mosteller

In 1948, Frederick Mosteller (1916-2006)[20] proposed that a "type III error" be defined as follows[21]:

  • Type I error: rejecting a true null hypothesis
  • Type II error: accepting a false null hypothesis
  • Type III error: correctly rejecting the null hypothesis, but for the wrong reason


Kaiser

In his 1966 paper, Henry F. Kaiser (1927-1992) extended Mosteller's classification so that a "type III error" means making the wrong decision about direction following a rejected hypothesis[22]. Kaiser also called this a γ error.


Kimball

In 1957, Allyn W. Kimball (a statistician at Oak Ridge National Laboratory) proposed a new type of error to follow the type I and type II errors. Kimball defined the "error of the third kind" as "the error committed by giving the right answer to the wrong problem"[23].

The mathematician Richard Hamming (1915-1998) stated that "it is better to give the wrong solution to the right problem than the right solution to the wrong problem."

Howard Raiffa, an economist at Harvard University, likewise described "falling into the error of working on the wrong problem"[24][25].

Mitroff and Featheringham

In 1974, Ian Mitroff and Tom Featheringham extended Kimball's classification, arguing that "one of the most important determinants of a problem's solution is how the problem is explained and formulated in the first place."

They defined the type III error as "the error of solving the wrong problem when the right problem should have been solved", or "the error of choosing the wrong problem representation when the right one should have been chosen"[26].


Raiffa

In 1969, the Harvard economist Howard Raiffa jokingly proposed a "candidate for the error of the fourth kind: taking too long to solve the right problem"[27].

Marascuilo and Levin

In 1970, Marascuilo and Levin proposed the type IV error. In a Mosteller-like definition, this is the error of "incorrectly interpreting a correctly rejected hypothesis". Their example: "a doctor's correct diagnosis of an ailment followed by the prescription of the wrong medicine"[28].

Concrete example

Statistical testing involves a trade-off between two quantities:

  • (a) the acceptable level of false positives
  • (b) the acceptable level of false negatives

The sensitivity of a test can be adjusted by setting a threshold value. The lower the sensitivity, the greater the risk of judging a true positive to be negative; the higher the sensitivity, the greater the risk of producing false positives.
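The trade-off can be sketched with made-up detection scores: moving the threshold trades type I errors against type II errors.

```python
# Hypothetical detection scores: higher means "looks more positive".
positives = [0.9, 0.8, 0.75, 0.6, 0.4]  # truly positive cases
negatives = [0.7, 0.5, 0.3, 0.2, 0.1]   # truly negative cases

def errors_at(threshold):
    """Count (false positives, false negatives) at a given threshold."""
    fp = sum(1 for s in negatives if s >= threshold)  # false alarms (type I)
    fn = sum(1 for s in positives if s < threshold)   # misses (type II)
    return fp, fn

# A strict threshold produces no false alarms but misses two positives;
# a lenient one catches every positive but raises two false alarms.
print(errors_at(0.75))  # (0, 2)
print(errors_at(0.35))  # (2, 0)
```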


In computing, the terms "false positive" and "false negative" are used in a variety of situations.

Computer security
Security vulnerabilities are an important concern for keeping computer data safe while accepting access only from appropriate users (see computer security). Moulton (1983, p.125) emphasizes the following points:
  • preventing type I errors (false positives), which classify authorized users as unauthorized;
  • preventing type II errors (false negatives), which classify unauthorized intruders as authorized users.
Spam filtering
In spam filtering, wrongly classifying a legitimate e-mail as spam is called a false positive; it hinders the delivery of normal mail. Spam filtering can block unwanted e-mail with high probability, and efforts continue to reduce the incidence of false positives to a negligible level.
Conversely, spam that goes undetected and is delivered as-is is called a false negative. The lower the false negative rate, the more efficient the spam filtering.
In antivirus software, wrongly flagging a harmless file as a virus is called a false positive; the cause is a heuristic or an error in the virus database. Similar problems occur in the detection of Trojan horses and spyware.
Database search
In database searches, inappropriate results returned for a query are called false positives. They occur especially easily in full-text search, which scans the entire contents of all stored documents for the words the user specified.
False positives are often caused by the ambiguity of natural language. For example, the word "home" can mean "someone's residence" or "the top-level page of a website"[29].
Optical character recognition (OCR)
Detection algorithms in general are prone to false positives. Optical character recognition (OCR) software, for example, may recognize a cluster of dots that merely looks like an "a" as the letter "a".
General security
False positives often occur in security checks at airports. The alarm is designed to sound when someone tries to bring in a weapon, but its sensitivity is set so high that keys, belt buckles, and loose change are often caught even when no weapon is present (see metal detector).
In this case there are far more false positives than true positives (detections of real weapons), so the positive predictive value is very low.
In biometric scans such as iris recognition, retina scanning, and face recognition, false positives (false matches) are a problem: the system can accidentally match a person to someone in its database, and that person may then be judged to be an authorized user or a wanted criminal.
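Why the positive predictive value collapses when true positives are rare can be seen with made-up numbers (the prevalence, sensitivity, and false positive rate below are hypothetical):

```python
prevalence = 0.0001         # 1 passenger in 10,000 carries a weapon (hypothetical)
sensitivity = 0.99          # real weapons almost always trigger the alarm
false_positive_rate = 0.05  # keys, buckles, and coins also trigger it

passengers = 1_000_000
true_pos = prevalence * passengers * sensitivity
false_pos = (1 - prevalence) * passengers * false_positive_rate

# Positive predictive value: P(real weapon | alarm sounded).
ppv = true_pos / (true_pos + false_pos)
print(ppv)  # about 0.002: almost every alarm is a false positive
```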


In medicine, there is a large difference between screening and clinical examination.

Screening
This is a relatively simple test, often administered to large groups of people, frequently people who show no symptoms.
Clinical examination
This is a relatively expensive test, often using means such as drawing blood. For that reason it is usually performed to confirm a suspected illness in patients who show signs of disease.

For example, many states in the United States screen newborns for congenital diseases such as phenylketonuria and hypothyroidism. Although the probability of a false positive is very high in such screening, it has the advantage of detecting these diseases at an extremely early stage[30].

Blood for transfusion is screened for HIV and hepatitis; here too the probability of a false positive is high. Tests that determine whether a person actually has these diseases give more accurate results.

The most talked-about false positives in screening come from breast cancer testing by mammography. The false positive rate of mammography screening in the US is as high as 15%, among the highest in the world[31]. The Netherlands has the lowest false positive rate, at 1%[32].

Clinical examination

In pregnancy tests and medical checkups, false negatives are a serious problem. A false negative gives a patient who is actually ill the mistaken message that they are not ill, so the subsequent treatment plan is made under a false assumption. For example, cardiac stress tests that detect arteriosclerosis of the coronary arteries are known to produce false negatives.

False negatives pose a serious problem especially when the condition is common and the test is routine. False positives, on the other hand, become a problem when the number of affected people in the population is very small. For details, see Bayesian inference.

Investigation of paranormal phenomena

In investigations of the paranormal and of ghosts, the term false positive refers to evidence that is mistakenly accepted as proof; in other words, media (images, video, audio recordings, etc.) believed to capture spirits even though nothing has been proven[33].


  1. ^ Medical dentistry english dictionary
  2. ^ Definition and usage of false negative. Eijiro.
  3. ^ a b "JIS Z 8101-1:2015 Statistics-Terms and symbols-Part 1: Probability and general statistical terms". Retrieved 2019-04-28.
  4. ^ Excludes other intentional mistakes such as cheating. See Allchin (2001) for a more comprehensive explanation.
  5. ^ a b The magnitude of the error between the observed value and the predicted value is independent of the magnitude of the observed value.
  6. ^ Masakiyo Kawade, 2011, "Hypothesis testing: desirable hypothesis tests: type 1 and type 2 errors", Compact Statistics, first edition, Vol. 8, Shinseisha <Compact Economics Library>, ISBN 978-4-88384-156-1.
  7. ^ Neyman and Pearson, 1928/1967, p.1.
  8. ^ David, 1949, p. 28.
  9. ^ Neyman and Pearson, 1928/1967, p.31.
  10. ^ Neyman and Pearson, 1930/1967, p.100.
  11. ^ a b Neyman and Pearson, 1933/1967, p.187.
  12. ^ Neyman and Pearson, 1933, p.201.
  13. ^ See, for example, Neyman and Pearson, 1933/1967, p.186.
  14. ^ Neyman and Pearson, 1933/1967, p.190.
  15. ^ In English, the notations type I and type II are common, not type-I or type-II, or type 1 or type 2.
  16. ^ When developing detection algorithms and tests, the balance between the risk of false positives and false negatives must be considered. Usually there is an adjustable threshold: the higher the threshold, the more false negatives and the fewer false positives.
  17. ^ For example, Onwuegbuzie & Daniel (2003) defines eight new types of errors.
  18. ^ Larry Riddle (January 2014). "Florence Nightingale David". Biographies of Women Mathematicians. Retrieved 2015-02-28.
  19. ^ David, 1947, p.339.
  20. ^ President of the American Association for the Advancement of Science in 1981.[1]
  21. ^ Mosteller, 1948, p.61.
  22. ^ Kaiser, 1966, pp.162-163.
  23. ^ Kimball, 1957, p.134.
  24. ^ Raiffa, 1968, pp.264-265.
  25. ^ In this passage, Raiffa mistakenly attributed the coining of the term to John Tukey (1915-2000).
  26. ^ Mittoff and Featheringham, 1974, p.383.
  27. ^ Raiffa, 1968, p.264.
  28. ^ Morascuilo and Levin, 1970, p.398.
  29. ^ The incidence of false positives can be reduced by limiting the vocabulary. However, this is expensive: determining the vocabulary requires expert work, as does assigning appropriate index terms to each document.
  30. ^ One study found that such newborn screening is 12 times more likely to produce a false positive than regular screening (Gambrill, 2006. [2]).
  31. ^ Because of the high false positive rate, half of the women screened in the US receive a false positive result within 10 years; the resulting retesting costs $1 million each year. In fact, 90% to 95% of positives are false positives.
  32. ^ The low false positive rate is achieved by checking the results twice. It can also be said that the threshold is set higher the second time, which lowers the statistical power of the test.
  33. ^ Moorestown Ghost Research is a site that shows examples of false positive "evidence" of psychic/paranormal phenomena.


  • Allchin, D., "Error Types", Perspectives on Science, Vol.9, No.1, (Spring 2001), pp.38-58.
  • Betz, MA & Gabriel, KR, "Type IV Errors and Analysis of Simple Effects", Journal of Educational Statistics, Vol.3, No.2, (Summer 1978), pp.121-144.
  • David, FN, "A Power Function for Tests of Randomness in a Sequence of Alternatives", Biometrika, Vol.34, Nos.3 / 4, (December 1947), pp.335-339.
  • David, FN, Probability Theory for Statistical Methods, Cambridge University Press, (Cambridge), 1949.
  • Fisher, RA, The Design of Experiments, Oliver & Boyd (Edinburgh), 1935.
  • Gambrill, W., "False Positives on Newborns' Disease Tests Worry Parents", Health day, (5 June 2006).
  • Kaiser, HF, "Directional Statistical Decisions", Psychological Review, Vol.67, No.3, (May 1960), pp.160-167.
  • Kimball, AW, "Errors of the Third Kind in Statistical Consulting", Journal of the American Statistical Association, Vol.52, No.278, (June 1957), pp.133-142.
  • Lubin, A., "The Interpretation of Significant Interaction", Educational and Psychological Measurement, Vol.21, No.4, (Winter 1961), pp.807-817.
  • Marascuilo, LA & Levin, JR, "Appropriate Post Hoc Comparisons for Interaction and nested Hypotheses in Analysis of Variance Designs: The Elimination of Type-IV Errors", American Educational Research Journal, Vol.7., No.3, (May 1970), pp.397-421.
  • Mitroff, II & Featheringham, TR, "On Systemic Problem Solving and the Error of the Third Kind", Behavioral Science, Vol.19, No.6, (November 1974), pp.383-393.
  • Mosteller, F., "A k-Sample Slippage Test for an Extreme Population", The Annals of Mathematical Statistics, Vol.19, No.1, (March 1948), pp.58-65.
  • Moulton, RT, “Network Security”, Datamation, Vol.29, No.7, (July 1983), pp.121-127.
  • Neyman, J. & Pearson, ES, "On the Use and Interpretation of Certain Test Criteria for Purposes of Statistical Inference, Part I", reprinted at pp.1-66 in Neyman, J. & Pearson, ES, Joint Statistical Papers, Cambridge University Press, (Cambridge), 1967 (originally published in 1928).
  • Neyman, J. & Pearson, ES, "The testing of statistical hypotheses in relation to probabilities a priori", reprinted at pp.186-202 in Neyman, J. & Pearson, ES, Joint Statistical Papers, Cambridge University Press, (Cambridge), 1967 (originally published in 1933).
  • Onwuegbuzie, AJ & Daniel, LG "Typology of Analytical and Interpretational Errors in Quantitative and Qualitative Educational Research", Current Issues in Education, Vol.6, No.2, (19 February 2003).[3]
  • Pearson, ES & Neyman, J., "On the Problem of Two Samples", reprinted at pp.99-115 in Neyman, J. & Pearson, ES, Joint Statistical Papers, Cambridge University Press, (Cambridge), 1967 (originally published in 1930).
  • Raiffa, H., Decision Analysis: Introductory Lectures on Choices Under Uncertainty, Addison-Wesley, (Reading), 1968.



Llorente is a surname of Spanish origin.

