
Audit Tests: Risk, Confidence and Materiality



Presentation Transcript


  1. Audit Tests: Risk, Confidence and Materiality • Some basic statistics about Inherent Risk and Control Risk

  2. Audit Testing • Audit testing is done on a single account to test a hypothesis • H0: the actual error in the account is less than the tolerable limit (set at planning / materiality) • Account testing compares: • the statistical estimate (a y% confidence limit from the sample) • with • the financial statement figure: A/C balance ± tolerable error • A sketch of this comparison follows below.
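A minimal sketch of that comparison, assuming a hypothetical sample of per-item audit differences, an invented population size and an invented tolerable error (none of these figures come from the presentation). It projects a one-sided upper confidence bound on total account error and compares it with the tolerable limit.

```python
import numpy as np
from scipy import stats

# Hypothetical audit sample of per-item errors (book value minus audited value).
# All values below are illustrative, not from the presentation.
sample_errors = np.array([0, 0, 12.5, 0, -3.0, 0, 0, 45.0, 0, 0, 7.8, 0])
population_items = 5000          # assumed number of items in the account
tolerable_error = 50_000         # assumed tolerable error set at planning
confidence = 0.95

n = len(sample_errors)
mean_err = sample_errors.mean()
se = sample_errors.std(ddof=1) / np.sqrt(n)

# Upper confidence bound on the total error in the account
z = stats.norm.ppf(confidence)
upper_bound_total = population_items * (mean_err + z * se)

print(f"Projected error: {population_items * mean_err:,.0f}")
print(f"{confidence:.0%} upper bound: {upper_bound_total:,.0f}")
print("Do not reject H0 (error below tolerable limit)"
      if upper_bound_total < tolerable_error
      else "Investigate further: upper bound exceeds tolerable error")
```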

  3. The ‘true’ account value • Suppose you take several samples, each of size n, from the population and for each you calculate the 95% confidence interval x̄ ± 1.96 s/√n • Then, on average, 95% of the intervals will contain the true but unknown value µ and 5% will not.
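A small simulation (illustrative only; the population values are taken to match the height example used later) that draws repeated samples and checks how often the interval x̄ ± 1.96 s/√n covers the true mean. The coverage should settle near 95%.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
mu, sigma, n, reps = 165.0, 5.0, 100, 10_000   # assumed population and sample size

covered = 0
for _ in range(reps):
    sample = rng.normal(mu, sigma, size=n)
    xbar, s = sample.mean(), sample.std(ddof=1)
    half_width = 1.96 * s / np.sqrt(n)
    if xbar - half_width <= mu <= xbar + half_width:
        covered += 1

print(f"Coverage over {reps} samples: {covered / reps:.3f}")   # about 0.95
```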

  4. If you plotted the intervals vertically they might look like this: [figure of stacked confidence intervals omitted]

  5. Point / Interval • The sample mean x̄ provides a point estimate (i.e. a single-value approximation) for µ • Confidence limits provide an interval estimate together with a degree of confidence that the parameter is in the interval • e.g. with 95% confidence the population mean height µ is in the interval (164, 166) cms • The width of the interval (i.e. the precision of the estimate) depends on the sample size. • In the example, the sample size was n = 100 with x̄ = 165 and s = 5, so the 95% confidence interval is • 165 ± 1.96 × 5/√100 = 165 ± 0.98, i.e. approximately (164, 166) cms.
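The interval above can be reproduced directly; a short sketch using the slide's figures (x̄ = 165, s = 5, n = 100).

```python
import math

xbar, s, n = 165.0, 5.0, 100
half_width = 1.96 * s / math.sqrt(n)     # 1.96 * 5 / 10 = 0.98
print(f"95% CI: ({xbar - half_width:.1f}, {xbar + half_width:.1f}) cms")  # about (164.0, 166.0)
```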

  6. Sample Size • Suppose the sample size had been n = 40 but the mean and standard deviation were still x̄ = 165 and s = 5. Then a 95% confidence interval for µ is • 165 ± 1.96 × 5/√40 = 165 ± 1.55 • which gives (163.5, 166.5). • Notice that increasing the sample size increases the precision of the estimate • e.g. the width of a 95% confidence interval is • width = U − L • = (x̄ + 1.96 s/√n) − (x̄ − 1.96 s/√n) • = 2 × 1.96 s/√n = 3.92 s/√n • So if n = 100, width = 3.92 s/10 = 0.392 s • or if n = 25, width = 3.92 s/5 = 0.784 s. • If you increase the sample size by a factor of 4 you halve the width of the confidence interval. The precision of the estimate depends on the standard error SE = s/√n. A short numerical check follows below.
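A quick check of how the width 3.92 s/√n shrinks as n grows, using the slide's s = 5 (the extra n = 400 row is added only to show the quadrupling pattern).

```python
import math

s = 5.0
for n in (25, 40, 100, 400):
    width = 2 * 1.96 * s / math.sqrt(n)
    print(f"n = {n:4d}: width = {width:.2f} cms  ({width / s:.3f} s)")
# Quadrupling n (e.g. 25 -> 100, 100 -> 400) halves the width.
```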

  7. Example - Bolt production • A manufacturer produces bolts with a nominal length of 15 cms. The actual lengths vary slightly. The process is stable and the population standard deviation is known to be σ = 0.3 cms. • A sample of 50 bolts has a mean length of x̄ = 14.85 cms. Does this suggest that the average length of all bolts is not 15 cms? • The sampling distribution of the sample mean is x̄ ~ N(µ, σ²/n). • In this case σ = 0.3, n = 50, x̄ = 14.85 and we want to know whether µ = 15 is plausible. • Find a confidence interval for µ. A 95% confidence interval is given by • 14.85 ± 1.96 × 0.3/√50, i.e. (14.77, 14.93) cms • Interpretation - with 95% confidence the interval (14.77, 14.93) contains the population mean µ of all bolts produced by the process. As the interval does not contain 15.0, the data are not consistent with the hypothesis that µ = 15. That is, the data do not support the hypothesis that the average length of all bolts is 15 cms.
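The bolt-length interval, computed with the slide's numbers.

```python
import math

sigma, n, xbar = 0.3, 50, 14.85
half_width = 1.96 * sigma / math.sqrt(n)
lower, upper = xbar - half_width, xbar + half_width
print(f"95% CI: ({lower:.2f}, {upper:.2f}) cms")          # about (14.77, 14.93)
print("Contains 15.0?", lower <= 15.0 <= upper)           # False -> data inconsistent with mu = 15
```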

  8. Another Approach • The second approach is to test the hypothesis (i.e. µ = 15 cms) more directly, as follows • Assume µ = 15 (that is, assume the null hypothesis is true). • Calculate the probability of getting a sample mean as far away from, or further from, the assumed population mean µ = 15 as the observed value x̄ = 14.85 • This probability is called the “p-value” • In this case • p-value = P(x̄ ≤ 14.85 or x̄ ≥ 15.15) • If x̄ ~ N(15, 0.3²/50) then Z = (x̄ − 15)/(0.3/√50), hence • P(x̄ ≤ 14.85 or x̄ ≥ 15.15) = P(Z < −3.5 or Z > 3.5) < 0.001 from tables • This probability, p-value < 0.001, is very small, so we conclude that the sample data provide evidence against the assumption µ = 15. We reject the hypothesis that the average length of all bolts is 15 cms.
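The same two-sided p-value, computed from the normal distribution rather than read from tables.

```python
import math
from scipy.stats import norm

mu0, sigma, n, xbar = 15.0, 0.3, 50, 14.85
z = (xbar - mu0) / (sigma / math.sqrt(n))        # about -3.54
p_value = 2 * norm.sf(abs(z))                    # two-sided tail probability
print(f"z = {z:.2f}, p-value = {p_value:.5f}")   # p < 0.001, so reject mu = 15
```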

  9. Hypothesis Testing • H0 is the assertion that the client’s accounts are correct

  10. Power • Type I and Type II errors, and the power of a statistical test • In hypothesis testing there are two kinds of errors you can make: • i) Reject H0 (because the p-value is small) when H0 is in fact true • ii) Fail to reject H0 (because the p-value is not small) when H0 is in fact false • power = 1 − P(Type II error | H0 is false) • The probability of accepting the null hypothesis when it is false is conventionally called β (“beta”), so that power = 1 − β. • Ideally studies should be designed so that the power, 1 − β, is at least 0.8. This requires using an efficient design and a sufficiently large sample. A sketch of a power calculation follows below.
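A sketch of a power calculation for a two-sided one-sample z-test, reusing the bolt example's numbers and assuming a true mean of 14.85 purely for illustration; these are not values given on the slide.

```python
import math
from scipy.stats import norm

alpha, sigma, n = 0.05, 0.3, 50
mu0, mu_true = 15.0, 14.85          # null value and an assumed true mean (illustrative)

# Two-sided z-test: reject H0 when |Z| > z_crit
z_crit = norm.ppf(1 - alpha / 2)
se = sigma / math.sqrt(n)
shift = (mu_true - mu0) / se

# Power = P(reject H0 | mu = mu_true)
power = norm.cdf(-z_crit - shift) + 1 - norm.cdf(z_crit - shift)
print(f"power = {power:.3f}, beta = {1 - power:.3f}")   # power well above 0.8 here
```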

  11. Errors • H0 is the assertion that the client’s accounts are correct

  12. Risk Measures • Probability of Type I error • Expected loss over the entire distribution of error (Bayes’ risk) • Willingness to pay for insurance against a specific risk (in portfolio theory, the Markowitz risk premium) • A toy illustration of the first two measures follows below.
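A toy illustration of the first two measures, with every number assumed for the sake of example: the Type I error probability is simply the test's significance level, and the Bayes' risk is an expected loss computed over an assumed distribution of account error and an invented loss function.

```python
import numpy as np

rng = np.random.default_rng(seed=2)

# 1. Probability of Type I error: fixed by the chosen significance level of the test.
alpha = 0.05

# 2. Bayes' risk: expected loss over an assumed distribution of the account error.
#    The error distribution, tolerable limit and loss function are invented for illustration.
errors = rng.normal(loc=20_000, scale=15_000, size=100_000)   # possible account errors
tolerable = 50_000
loss = np.where(errors > tolerable, errors - tolerable, 0.0)  # loss only if error exceeds the limit
bayes_risk = loss.mean()

print(f"P(Type I error) = {alpha}")
print(f"Expected loss (Bayes' risk) = {bayes_risk:,.0f}")
```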
