Let’s say Apple wanted to test a hypothesis about the rate of defective products. They know that defects can happen through human error or failed components. To stay cost-effective, they budget for 10 out of every 100 phones manufactured to have some sort of defect. They have been seeing an increased number of defects and want to check whether they are still averaging 10 defects per 100.
The hypotheses would look like the following:
Ha: μ > 10
H0: μ ≤ 10
If the null hypothesis is rejected, that means the company’s suspicion is right and more than 10 devices per 100 are defective, which is cutting into their bottom line.
Is this right? Either way, please explain why.
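For context, here is a minimal sketch of how a test like this could be run in Python, treating the claim as a one-sided test of whether the defect rate exceeds 10% (i.e., 10 per 100). The sample size and defect count below are made-up numbers purely for illustration, not Apple data:

```python
from scipy.stats import binomtest

# Hypothetical inspection data (made up for illustration):
# 120 phones sampled, 18 found defective.
n_sampled = 120
n_defective = 18

# H0: defect rate <= 0.10 (at most 10 defects per 100 phones)
# Ha: defect rate  > 0.10
result = binomtest(k=n_defective, n=n_sampled, p=0.10, alternative="greater")

print(f"Observed defect rate: {n_defective / n_sampled:.3f}")
print(f"p-value: {result.pvalue:.4f}")

# At alpha = 0.05, reject H0 only if the p-value falls below 0.05,
# which would support the claim that defects now exceed 10 per 100.
if result.pvalue < 0.05:
    print("Reject H0: evidence the defect rate exceeds 10%")
else:
    print("Fail to reject H0: no strong evidence of an increase")
```

Under this framing, rejecting H0 would back up the suspicion that defects have risen above the budgeted level, while failing to reject would mean the observed increase could plausibly be chance variation.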