
Management Accounting Research: Experimental Approaches



  1. Management Accounting Research: Experimental Approaches Joan Luft Michigan State University Management Accounting Section Mid-Year Meeting January 2005

  2. What does experimental research (not) do well? 1. Approaches that often succeed • Examples of influential experimental research in the social sciences 2. Approaches that often fail • Examples of experimental studies in management accounting that you do not see published 3. Contributions to management accounting • What does experimental research do to help us understand the problems of accounting practice?

  3. 1. Approaches that succeed: classic examples from experimental economics and psychology • Different traditions of “good experimentation” in economics and psychology, but common characteristics of successful research • Generating original and influential ideas from mundane observations • Learning from failure (unsatisfactory experiences): throwaway explanations versus theory-building explanations • Laboratory testing as a process of clarifying & refining theory in order to make it more usable for a broad array of types of research

  4. Example 1: an asset-market experiment • Buyers & sellers (students, classroom setting) seek each other out and negotiate trades. • Sellers have an initial endowment of (artificial) assets with assigned values. • Buyers also have assigned values for assets: surplus from trade arises when (for example) a seller who owns an asset worth $10 to her finds a buyer to whom the asset is worth $15, and they trade at a price of (say) $13.
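
To make the mechanics concrete, here is a minimal sketch (in Python, with made-up buyer and seller values; none of these numbers come from the original experiment) of how surplus from trade arises, and how the surplus captured by decentralized bilateral matching can fall short of the competitive benchmark at the intersection of supply and demand:

```python
# Minimal sketch with illustrative values (not the original study's data).
# Each seller owns one unit of the asset; each buyer wants one unit.
# A trade at price p gives the seller (p - seller_value) and the buyer
# (buyer_value - p); total surplus from the trade is buyer_value - seller_value,
# whatever the negotiated price.
import random

seller_values = [4, 6, 8, 10, 12]  # value of the asset to each seller
buyer_values = [15, 13, 11, 9, 7]  # value of the asset to each buyer

# Competitive benchmark: maximum total surplus, achieved when the
# highest-value buyers trade with the lowest-value sellers (the outcome
# predicted by the intersection of supply and demand).
max_surplus = sum(
    b - s
    for b, s in zip(sorted(buyer_values, reverse=True), sorted(seller_values))
    if b > s
)

# Decentralized bilateral negotiation: pair buyers and sellers at random
# and trade whenever a gain exists. Potential surplus goes uncaptured
# when high-value buyers happen to meet high-value sellers.
random.shuffle(buyer_values)
captured = sum(b - s for b, s in zip(buyer_values, seller_values) if b > s)

print(f"maximum surplus: {max_surplus}, captured by random matching: {captured}")
```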

  5. An unsatisfactory experience • Prices did not reach equilibrium (over 46 markets), and considerable potential surplus from trade was not captured. • The researcher tentatively concluded that the intersection of supply & demand curves did not predict actual market results, but many participants and readers were unconvinced. • What was the problem with the experiment?

  6. Throwaway explanations • “Experiments with student subjects don’t tell us much.” • “There wasn’t enough money at stake.” • “The task in the experiment didn’t look the way it does in the real world.” • What could be wrong with these explanations?

  7. What’s wrong with throwaways? • Not true • It is not the case in general that student subjects, modest stakes, or simplified tasks yield results that are uninformative about the “real world.” • Not fruitful • As generalizations, these statements do not provide the basis for generating further interesting ideas. • Relying on throwaway explanations, instead of looking further, can cause researchers to miss opportunities for discovery.

  8. Failed asset market: Theory-building explanation • Observations by an irritated student participant in the market: • Exchange proceeded through a series of private bilateral negotiations rather than a central mechanism (e.g., an auction). • Bids, asks, and prices were not public. • “Bad experiment, unlike real world”? • No: some real-world “markets” work like this; others do not. How much (what kind of) difference does this make? • Important new idea at the time!

  9. Clarifying & refining theory: the effects of variation in market institutions • Is the potential surplus from trade captured and distributed in the same way in different kinds of competitive markets? • auctions (eBay) • posted-price markets (department stores) • multiple bilateral negotiations (selling major software to business customers)? • What are the effects of market size, duration, detailed rules of exchange, etc.? Auction design (first-price, second-price)? Posted-price design (department stores vs. airlines)?
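
As a concrete illustration of what a single design "dial" can be, the following minimal sketch (in Python, with hypothetical bids) isolates the one rule that separates a first-price from a second-price sealed-bid auction: the highest bidder wins in both formats; only the price paid changes.

```python
# Minimal sketch (hypothetical bids): the only rule that differs between
# first-price and second-price sealed-bid auctions is the payment rule.

def sealed_bid_auction(bids, second_price):
    """Return (winner, price). The highest bidder wins in both formats;
    only the price paid depends on the auction design."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, top_bid = ranked[0]
    price = ranked[1][1] if second_price else top_bid
    return winner, price

bids = {"A": 120.0, "B": 100.0, "C": 95.0}
print(sealed_bid_auction(bids, second_price=False))  # ('A', 120.0): winner pays own bid
print(sealed_bid_auction(bids, second_price=True))   # ('A', 100.0): winner pays second-highest bid
```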

  10. Contributions of market-design experiments: 1 • Lab work allows researchers to: • “Turn one dial at a time,” find out “what happens if I change only this element or that element of the market design?” • Clarify, refine ideas about exactly what matters to market design and why. • Suggests what to model in analytic research and what to look for in archival data. • Research is sometimes limited not by lack of data but by lack of ideas with which to make sense of, or find interesting patterns in, the data.

  11. Contributions of market-design experiments: 2 • Practice: Dialogue between theory & experiment on market design has influenced public policy (treasury-bond auctions and bandwidth auctions) and design of private-sector Web-based markets. • Management accounting (potential): • Effect of product-market characteristics on product costing? • Internal (designed) markets as substitutes for institutions like traditional budgeting? (HBR, April 2004)

  12. Example 2: unsatisfactory experiences with teaching • Two faculty members commiserate with each other about the difficulty of getting students to grasp basic statistical reasoning & decision models.

  13. Throwaway explanations • Students don’t work hard enough. • They’re immature; they don’t have enough experience of the real world. • The textbooks are not very good. • Etc.

  14. Theory-building explanation • It’s not that students are stupid or lazy; they’re thinking in systematically different ways, which they brought to the classroom with them and will probably take out into life with them as well. • But what are these ways? • Can we describe them unambiguously, and generalize usefully about them?

  15. Some contributions of experimentation in cognitive psychology • Behavior consistent with models of heuristic judgment has been found in non-experimental settings (financial markets, medicine, sports, etc.) • … once researchers knew what they were looking for and how to make sense of it! • “What to look for and how to make sense of it” was defined by clarifying & refining theory through lab testing (initial descriptions of heuristics were sometimes incomplete, unclear, and hard to use in research outside the psych lab) • (See Gilovich, Griffin & Kahneman, Heuristics & Biases, 2002)

  16. Example: Availability heuristic • Does sensational media coverage of a business (accounting) failure unduly influence managers’ judgments of the probability that similar failure threatens their firm (or a supplier or customer)? • Vivid or repeated coverage could make the reported failure more available in memory, and this availability has been shown to influence probability estimates.

  17. Refining “availability” in the lab to aid efficient search for effects outside the lab (1) What is the “availability” that influences probability judgments? • Availability = many rather than few instances come to mind (when the same number of actual instances has been encountered) • Availability = instances come to mind easily rather than with difficulty (the number may be the same) (2) Individuals & organizations may guard against this availability bias. How? • Evidence of availability effects may take the form not of observed bias but of costly steps taken to prevent it. • What can reduce the bias (effort, judgment strategies)? What do people think reduces it?

  18. 2. Approaches to experimentation in management accounting that often fail • Testing (apparently) obvious assertions • … without making it clear why it is interesting or valuable to test them • Testing poorly-specified assertions from practitioner literature • … without improving the specification (clarifying & refining!) to create interesting testable statements

  19. Testing the obvious • Examples: is testing these hypotheses a top priority? • If people do not have the information that is clearly necessary to make decision X, then they will not do a very good job of making decision X. • When people choose what information to use, they are more likely to use information type Z if they believe it is useful than if they believe it is not useful (all else equal).

  20. Testing poorly specified assertions • Examples: attempts to test the value of ABC, TQM, EVA, BSC, and other TLAs (three-letter acronyms). • Researchers sometimes hope to use lab settings to eliminate the natural confounds, self-selection, endogeneity problems, and some (not all) of the proxy-measurement problems that plague archival research.

  21. Difficulties with this approach • The lab is useful for testing theories. TLAs are often not theories. • They are loose bundles of economic insights, heuristics for applying these insights, and persuasive rhetoric. • If we “test” a TLA in the lab, what exactly do we test?

  22. What do we test …? • Example: need to choose a decision task and provide (create) ABC or BSC information for the experiment • If it is clearly “better information” for the experimental task (e.g., less biased product costs in a decision setting where bias clearly matters), we are likely to be back to the problem of testing the obvious.

  23. Is “more reality” the answer? Probably not • Consider a firm that has adopted a TLA and is not sure how well it is working. (Non-obvious.) • Suppose we took information from the firm’s actual TLA system and asked appropriately chosen participants to make decisions with it that are actually made by users of this system in the firm. • Randomly assign non-TLA information to other similar individuals & see how decisions differ. • What do we conclude if the TLA-based decision is “better”? A unique instance? What generalizable assertion can we make? • Do we have any idea what properties of the information, task, setting, or people produced the result?

  24. What properties matter? Example • A well-known BSC property is the four-category system of classifying performance measures. Does this property matter? • Does it make a difference whether (for example) the financial measures are grouped together & labeled “financial”? • Yes (Lipe & Salterio, AOS) • What kind of difference? Why? • We need ideas to help make sense of observed patterns!

  25. 3. Clarifying & refining ideas about management accounting in the lab • Example: do the conventions of accounting reports (conservatism, aggregation, periodicity, etc.) aid or mislead managers in significant business decisions? • An accounting report (like an economic model or a laboratory task) is a selective abstraction from the “real world.” • What are the consequences (benefits & costs) of the way accountants select and abstract?

  26. Concerns from practice • It is sometimes claimed that ABC, BSC, etc., support better decision-making by managers, and that “traditional” (oversimplified, delayed, wrongly aggregated, biased, incomplete) accounting can lead managers to make poor decisions about pricing, product mix, production, customer relations, etc. • Note that this is a claim about decision-facilitating, not decision-influencing (contracting), uses of accounting information, & as such the claim has been challenged.

  27. Can these managers be misled? • Consider a manager who is involved in particular business processes every day, and who has detailed data (often non-accounting) on relevant activities and expenditures. • Will the incompleteness or bias in a summary accounting report cause her to misunderstand what drives success in her business? Doesn’t she know her business, apart from the accounting reports? • Perhaps the only real importance of the formal management-accounting report is the fact that it is contractible information that can be used in the reward system.

  28. How could accounting sometimes mislead well-informed, well-motivated decision-makers? • Example: we sometimes observe: • Decision-makers fail to use information they certainly do know and consciously intend to use. • Their decisions are influenced by factors that they are not aware of or would not choose to use in decisions. • Can accounting influence these unconscious elements of decision-making? What does psychology theory tell us about this?

  29. Example: Two-system (dual-process) theories of reasoning • System 2: Deliberate, effortful, relatively slow, consciously controlled processes, often dependent on abstract rules (e.g., adding columns of numbers) • System 1: Automatic, rapid, uncontrolled processes, often context-dependent (e.g., “gut-feel” judgments, recognizing familiar faces) • Efficiency/accuracy/adaptability tradeoffs • Limited ability to choose between systems (e.g., can decide to review gut-feel judgments more carefully; can’t decide not to recognize a familiar face) • (Sloman, Psych. Bull. 1996; Stanovich & West in Gilovich et al. Heuristics & Biases)

  30. Fundamental properties of accounting reports • Distinctive structure (e.g., income statement, balance sheet) • Regular periodicity (monthly, quarterly …) • Classification & labeling (assets, expenses, profits …) • How might these properties influence unconscious (System 1) processing, & sometimes mislead well-informed managers (unless costly steps are taken to counteract the effects)?

  31. Example 1: accounting report structure influences problem structuring • Experiment: Participants understand the concept of opportunity costs and apply it correctly in a personal-finance decision. • But the more financial-accounting training they have, the more they ignore opportunity costs in a very similar business decision – presumably because they (automatically) structure the business decision by thinking in income-statement (not opportunity-cost) terms. The accounting model has become their economic model (Vera-Munoz, TAR 1998). • Doesn’t show that “accountants in the real world always ignore opportunity costs.” • Does show that accounting-report structure can influence people’s problem-structuring in ways they are not fully aware of. This is a pattern we could look for in a variety of settings.
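
As a hypothetical illustration of the pattern (the numbers are invented, not from Vera-Munoz): an income-statement framing, revenues minus recorded expenses, can make an option look profitable, while an economic framing that includes the opportunity cost of the space (forgone rent) shows that it destroys value.

```python
# Minimal sketch with invented numbers: the same decision framed two ways.
revenues, expenses = 200.0, 150.0
forgone_rent = 60.0  # opportunity cost: the space could be rented out instead

income_statement_view = revenues - expenses         # +50.0 -> looks profitable
economic_view = revenues - expenses - forgone_rent  # -10.0 -> reject

print(f"income-statement framing: {income_statement_view:+.0f}")
print(f"with opportunity cost:    {economic_view:+.0f}")
```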

  32. Example 1, cont. • What might prevent the omission of opportunity costs in decision-making in firms? • Firm procedures? (For important decisions, firms have rules that force people away from this income-statement structuring.) • Work experience? (We train people in accounting but they get over it.) • Vera-Munoz, Kinney & Bonner (TAR 2001): it’s more complicated than that … (more theory development probably needed)

  33. Example 2: Periodic repetition in accounting reports makes items salient & thus influential • Individuals make investment decisions, are evaluated, and later evaluate others who make similar decisions. The expected return on investments is known; the actual return provides no additional information about the quality of investment decisions. • Individuals begin with a belief in the principle that investment decisions should be evaluated based on expected (not actual) return. • Individuals’ belief in and use of this principle is undermined by making decisions under a system where they themselves are evaluated based on actual return, which is not advantageous for them! • The effect is bigger when their own work is evaluated after each of 12 decisions rather than once, cumulatively, after all 12. (No difference in payoff, just in the timing/repetition of feedback.) (Frederickson et al., JAR 1999)
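
To see why the actual return is uninformative here, consider a minimal sketch (in Python, with made-up probabilities and payoffs; none of these numbers come from the study): the expected return is fixed by the choice itself, so a single realized outcome reflects luck, not decision quality.

```python
# Minimal sketch with made-up numbers (not the study's parameters).
# The expected return of each option is known in advance, so the
# realized return of one draw says nothing about decision quality.
import random

options = [
    {"name": "safe",  "p": 1.0, "payoff": 10.0},  # expected return = 10
    {"name": "risky", "p": 0.5, "payoff": 24.0},  # expected return = 12
]

def expected_return(option):
    return option["p"] * option["payoff"]

# The normatively correct choice maximizes expected return.
best = max(options, key=expected_return)

# One realized draw: 24 or 0 for the risky option, purely luck.
realized = best["payoff"] if random.random() < best["p"] else 0.0

print(f"chose {best['name']}: expected {expected_return(best):.0f}, realized {realized:.0f}")
# Evaluating the decision-maker on the realized draw rewards or punishes
# luck; evaluating on the expected return reflects the quality of the choice.
```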

  34. Conclusion • The lab is a place to clarify initial (perhaps incomplete or imperfectly specified) versions of theories, to unpack their implications, to grow and prune and refine them. • The “real world” is not a good place for doing this particular kind of testing, but our understanding of what we see in the real world is enhanced by what we learn in the lab.
