You wish to test the claim that the average IQ score is less than 100 at the .10 significance level. You determine the hypotheses are:
H0: μ = 100
H1: μ < 100
You take a simple random sample of 73 individuals and find the mean IQ score is 95.2, with a standard deviation of 15.1. Let's consider testing this hypothesis two ways: first assuming the population standard deviation is not known, and then assuming that it is known.
Round to three decimal places where appropriate.
Is there a significant difference between the results when we know the population standard deviation and when we don't? Explain.
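Below is a minimal sketch of how one might carry out both versions of the test in Python using scipy. It assumes the "known" population standard deviation is taken to be the same value of 15.1 (the problem does not give a separate σ), and it uses a left-tailed p-value in each case since the alternative is μ < 100.

```python
from math import sqrt
from scipy import stats

# Summary statistics from the problem statement
n = 73        # sample size
xbar = 95.2   # sample mean IQ
s = 15.1      # sample standard deviation
mu0 = 100     # hypothesized population mean
alpha = 0.10  # significance level

se = s / sqrt(n)  # standard error of the mean

# Case 1: population standard deviation not known -> one-sample t test, df = n - 1
t_stat = (xbar - mu0) / se
p_t = stats.t.cdf(t_stat, df=n - 1)   # left-tailed p-value

# Case 2: population standard deviation assumed known (sigma = 15.1) -> z test
z_stat = (xbar - mu0) / se
p_z = stats.norm.cdf(z_stat)          # left-tailed p-value

print(f"t = {t_stat:.3f}, p-value = {p_t:.3f}, reject H0: {p_t < alpha}")
print(f"z = {z_stat:.3f}, p-value = {p_z:.3f}, reject H0: {p_z < alpha}")
```

The two test statistics are numerically identical here; only the reference distribution changes (t with 72 degrees of freedom versus the standard normal), so the p-values differ only slightly.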