# Type II Error: Definition, Overview & Examples

The world of statistics is complicated and at times convoluted.

With so many variables and external factors, no test will ever be perfect and statistical errors will occur. But what people can do is recognize the chance of an error occurring and incorporate that level of chance into their findings.

That’s where type II errors come into play. But what exactly is a type II error? And how does it differ from a type I error?

Read on as we take a closer look at this statistical term.


KEY TAKEAWAYS

- A type II error is the error of failing to reject a null hypothesis that is actually false.
- A type II error can also be described as a false negative.
- The sample size and the statistical power directly influence the chances of an error occurring. By increasing both of these factors, the chances of an error are reduced.

## What Is a Type II Error?

A type II error is a statistical term commonly used within the context of hypothesis testing. It describes the error that occurs when someone fails to reject a null hypothesis that is actually false.

This error produces a false negative, which can also be known as an error of omission.

So for example, a medical test for a certain disease may come back with a negative result even though the patient is actually infected. This would be described as a type II error because the negative result was accepted, even though it was incorrect.

## What Causes Type II Errors?

A type II error is often caused by low statistical power. Essentially, the higher the statistical power of a test, the lower the chance of a type II error occurring.

## How to Avoid the Type II Error?

As stated above, the higher the statistical power, the less chance that a type II error will be committed. To avoid a type II error, the recommended statistical power should be at least 80% before a test is conducted.
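As a rough illustration of the 80% recommendation, the sketch below solves for the smallest per-group sample size that reaches 80% power for a one-sided two-sample z-test. The effect size of 0.5 and the unit variance are hypothetical assumptions for illustration, not figures from the article:

```python
from math import sqrt
from statistics import NormalDist

def power_two_sample_z(n_per_group, effect_size, alpha=0.05):
    """Approximate power of a one-sided two-sample z-test
    (a sketch, assuming known unit variance)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha)
    # How far the alternative shifts the test statistic away from zero
    shift = effect_size * sqrt(n_per_group / 2)
    return 1 - NormalDist().cdf(z_alpha - shift)

# Smallest n per group that reaches 80% power for an assumed effect of 0.5
n = 1
while power_two_sample_z(n, effect_size=0.5) < 0.80:
    n += 1
print(n, round(power_two_sample_z(n, 0.5), 3))
```

Under these assumptions, roughly 50 patients per group are needed before the test reaches the recommended 80% power.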

So for example, let’s say that any result falling within the bounds of a 95% confidence interval is considered statistically insignificant. By lowering the confidence level to 80%, you narrow those bounds, so fewer results are treated as negative and the risk of a type II error decreases. The trade-off is that the risk of a type I error increases.

## What is the Difference Between a Type I and Type II Error?

In statistical analysis, a type I error occurs when a true null hypothesis is rejected, whereas a type II error occurs when someone fails to reject a null hypothesis that is actually false. In a type II error, the alternative hypothesis is true, but the test fails to detect it.

So essentially, whereas a type II error is a false negative, a type I error is a false positive. For example, let’s quickly take it back to our medical analogy. Let’s say that a patient undergoes a test for a certain disease or illness and the result comes back positive. This result is accepted, even though the truth is that the patient does not have the illness.

The probability that a type I error will be committed is equal to the level of significance that was set for the hypothesis test. So if there is a level of significance of 0.05, then there is a 5% chance that a type I error may happen.
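This relationship between the significance level and the type I error rate can be checked by simulation. The sketch below repeatedly tests data for which the null hypothesis is actually true and counts how often it is rejected; the false-positive rate should land close to the chosen 0.05. The setup, a simple two-sided z-test with known variance, is an assumption for illustration:

```python
import random
from math import sqrt
from statistics import NormalDist, mean

random.seed(0)
alpha, n, trials = 0.05, 30, 20_000
z_crit = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value

false_positives = 0
for _ in range(trials):
    # The null hypothesis is TRUE here: data drawn from N(0, 1)
    sample = [random.gauss(0, 1) for _ in range(n)]
    z = mean(sample) * sqrt(n)  # z-statistic with known sigma = 1
    if abs(z) > z_crit:
        false_positives += 1  # type I error: rejected a true null

print(false_positives / trials)  # ≈ 0.05
```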

The probability of a type II error being committed is equal to one minus the power of the test. This probability is known as beta. The power of the test can be increased by increasing the sample size, which decreases the risk of a type II error because a larger sample makes a real effect easier to detect.
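A quick simulation illustrates how a larger sample shrinks beta. The sketch below assumes a hypothetical one-sided z-test in which the alternative really holds (a true mean of 0.5 rather than the null’s 0), and estimates how often the test fails to reject the null:

```python
import random
from math import sqrt
from statistics import NormalDist, mean

random.seed(1)
alpha, trials = 0.05, 10_000
z_crit = NormalDist().inv_cdf(1 - alpha)  # one-sided critical value

def beta_hat(n, true_mean=0.5):
    """Estimate P(type II error) when the alternative is true
    (a sketch, assuming a z-test with known sigma = 1)."""
    misses = 0
    for _ in range(trials):
        sample = [random.gauss(true_mean, 1) for _ in range(n)]
        z = mean(sample) * sqrt(n)  # tests H0: mean = 0
        if z <= z_crit:
            misses += 1  # failed to reject a false null: type II error
    return misses / trials

for n in (10, 30, 100):
    print(f"n={n:>3}  estimated beta = {beta_hat(n):.3f}")
```

The estimated beta falls sharply as the sample size grows, which is exactly why increasing the sample size is the standard remedy.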

## Example of a Type II Error

Let’s say that a pharmaceutical company wants to compare how effective two of its new drugs are for treating a disease.

The null hypothesis states that the two drugs are equally effective. This is the hypothesis the company hopes to reject using a one-tailed test.

The alternative hypothesis states that the two drugs are not equally effective. This is the state of nature that is supported when the null hypothesis is rejected.

The pharmaceutical company starts a large clinical trial of 5,000 patients with the disease to compare the two drugs it has created. The company divides the 5,000 patients into two random, equally sized groups. One group is given one of the drugs; the other group is given the other drug.

The significance level is set at 0.05. This means the company is willing to accept a 5% chance of incorrectly rejecting the null hypothesis, which is a 5% chance of committing a type I error.

The beta is assumed to be 0.025, or 2.5%. Beta is the probability of a type II error, so the chance of committing one is 2.5% and the power of the test is 97.5%. If the two drugs are not equally effective, the null hypothesis should be rejected. If the company fails to reject the null hypothesis when the drugs are in fact not equally effective, then a type II error has occurred.

## Summary

A type II error is the failure to reject the null hypothesis when it should be rejected. It is essentially what’s known as a false negative. This is the opposite of a type I error, which is essentially a false positive.

## FAQs About Type II Error

### What Affects a Type II Error?

The chance of a type II error is inversely related to the statistical power of a study. So the higher the statistical power, the less chance there is of a type II error taking place.

### How Do I Minimize Type II Errors?

There are two ways to minimize type II errors:

- Increase the sample size
- Increase the significance level (though this raises the risk of a type I error)
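Both levers can be seen in a short calculation. The sketch below computes beta for a one-sided z-test, assuming a hypothetical effect size of 0.5 and unit variance, and shows that either a larger sample or a higher significance level lowers it:

```python
from math import sqrt
from statistics import NormalDist

def beta(n, alpha, effect_size=0.5):
    """P(type II error) for a one-sided z-test
    (a sketch, assuming known unit variance)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha)
    return NormalDist().cdf(z_alpha - effect_size * sqrt(n))

print(beta(25, 0.05))   # baseline
print(beta(50, 0.05))   # lever 1: larger sample -> smaller beta
print(beta(25, 0.10))   # lever 2: higher alpha -> smaller beta
```

Note the trade-off built into lever 2: raising alpha shrinks beta only by accepting a larger chance of a type I error.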

### Why Are Type I and Type II Errors Important?

Any test that is taken has an inherent risk of giving a false result; there is always a chance of making one type of error or the other. Knowing which type of error can come into play is therefore imperative. It lets you take steps to minimize the risk, and to take that risk into consideration when presenting results.
