1. 1 2010 VA All Employee Survey: Human Resources & Administration Results and Findings
2. 2 VA All Employee Survey Committee VHA National Center for Organization Development (NCOD)
HSR&D Center for Organization, Leadership & Management Research (COLMR)
Occupational Safety, Health and Prevention Strategic Healthcare Group
VHA Human Resources Management (HRM) Group
Workforce Management and Consulting Office (WMC)
VHA Support Service Center (VSSC)
Employee Education System (EES)
3. 3 2010 VA All Employee Survey Conducted from 4/19/10 to 5/17/10
Raw data received on 5/24/10
The VA All Employee Survey Committee feels that there is a connection between the energy being put into planning for action steps based on the All Employee Survey results and the consistently high response rate (substantially higher than most organizational surveys). Employees are most likely to respond to the survey when they see concrete actions being taken in response to their previous participation.
4. 4 2010 VA All Employee Survey: DVA Response Rate
5. 5 2010 VA All Employee Survey: Human Resources & Administration Response Rate
6. 6 Demographics of DVA AES Respondents
7. 7 As the Survey has gained credibility as the “voice of the people,” the number of people utilizing paper surveys nationally has dropped dramatically. The reasons for providing paper surveys up to now were to ensure access for all employees and to make sure people felt comfortable that their confidentiality was secure. We are currently researching ways to make sure that access and security needs are met with only electronic responses, which could dramatically reduce turnaround time in the future.
DVA demographics in 2007, 2008, and 2009 consisted of the Office of Acquisition, Logistics, and Construction, the Office of Construction and Facilities Management, and OI&T.
8. 8 An “unknown” response means the respondent skipped the gender question. This category continues to remain small. This is sometimes interpreted to represent caution among respondents about having their responses identified.
DVA demographics in 2007, 2008, and 2009 consisted of the Office of Acquisition, Logistics, and Construction, the Office of Construction and Facilities Management, and OI&T.
9. 9 The percentage of DVA respondents in the 60 and up age cohort has increased over the last four survey administrations.
DVA demographics in 2007, 2008, and 2009 consisted of the Office of Acquisition, Logistics, and Construction, the Office of Construction and Facilities Management, and OI&T.
10. 10 The percentage of DVA respondents by ethnicity has remained stable over the last four survey administrations.
DVA demographics in 2007, 2008, and 2009 consisted of the Office of Acquisition, Logistics, and Construction, the Office of Construction and Facilities Management, and OI&T.
11. 11
12. 12 The percentage of DVA respondents who selected a supervisory level of “none” has remained stable over the last 3 administrations.
DVA demographics in 2007, 2008, and 2009 consisted of the Office of Acquisition, Logistics, and Construction, the Office of Construction and Facilities Management, and OI&T.
13. 13 The response selections for the “Years in Service” category changed in 2010; therefore, only data for the updated response selections are provided.
14. 14 DVA demographics in 2007, 2008, and 2009 consisted of the Office of Acquisition, Logistics, and Construction, the Office of Construction and Facilities Management, and OI&T.
15. 15
16. 16 The service question asks respondents to identify the main type of service they provide. Respondents were asked to select only one option.
The “unknown” category is high in 2008 because OI&T took a separate, customized version of the AES that did not include the “Service” question.
17. 17 The training question asks, “Before becoming a VA employee, did you take part in a training or educational program based partly or entirely in VA (such as paid or unpaid internships, residencies, fellowships, or clinical, or administrative rotations)?”
18. 18 2010 VA All Employee Survey: Overview of Findings, Human Resources & Administration
19. 19 2010 VA All Employee Survey Three components of the AES
Job Satisfaction Index (JSI)
Employees’ individual satisfaction with key job features
Organizational Assessment Inventory (OAI)
Employee perceptions of conditions in their immediate work group
Culture
Employee perceptions of the general atmosphere at their facility overall
20. 20 2010 VA All Employee Survey Human Resources & Administration Overview: Current Status
Comparison of 2010 Program Office average to 2010 DVA average indicated:
Please note that a lower score in JSI and OAI represents a poorer work environment (except OAI-Demands). However, lower Culture scores do not necessarily indicate an undesirable work environment. The DVA average includes responses from all respondents mapped to a DVA work group.
21. 21 2010 VA All Employee Survey: Job Satisfaction Index Results and Findings, Human Resources & Administration
22. 22 Job Satisfaction Index (JSI) Measures employee perceptions of individual satisfaction
Job satisfaction is determined by the discrepancy between how much satisfaction a person has and how much that person wishes to have
Discrepancy notion supported by much research
But, there are measurement problems with assessing mathematical differences of discrepancies
So, the best alternative is to combine the two questions into one: “How much should you have compared to what you have now?”
Used a single item for each facet of job satisfaction
Used a five point scale ranging from 1 (not at all satisfied) to 5 (very satisfied)
23. 23 This slide shows overall JSI factor scores for the last 4 survey administrations.
24. 24 The "VA Avg" is computed by averaging all AES respondents’ scores. “DVA Avg” is computed by averaging all the scores of individuals who are mapped to a DVA workgroup.
The "VA Avg" is computed by averaging all AES respondents’ scores. “DVA Avg” is computed by averaging all the scores of individuals who are mapped to a DVA workgroup.
25. 25
26. 26 Grid Legend: Color Marks Meaningful Differences (Gray=Low, Blue=High). Anything that is colored is a meaningful difference.
We adopted conventional statistical criteria (effect sizes of greater than .2) for deciding what size of difference between groups is meaningful. In VA, this coincidentally translates to differences in scores of approximately .2 or more.
Anything that is blue is higher than the comparison group.
Anything that is gray is lower than the comparison group.
Darker colors (blue or gray) indicate statistically significant and meaningful differences from the comparison group.
Statistical significance indicates that a difference in scores was large enough to conclude that the difference was likely not due to chance or random fluctuations; that is, the observed differences are likely due to the facility being compared rather than to chance.
Lighter colors indicate differences that are meaningful (large enough) but not statistically significant (perhaps random).
Underlined cells indicate differences that are statistically significant (not due to chance) but not large enough to be a meaningful difference.
Unformatted or white cells indicate scores are not different from the comparison group.
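To make the legend concrete, here is a minimal sketch (assuming numpy and scipy are available) of how a single cell of the grid could be classified; the function names, the use of Welch's t-test, and the .05 alpha level are illustrative assumptions, not the AES team's actual procedure.

```python
import numpy as np
from scipy import stats

def cohens_d(x, y):
    """Effect size: difference in means scaled by the pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

def classify_cell(group_scores, comparison_scores, alpha=0.05, min_effect=0.2):
    """Return the grid formatting for one factor-score comparison."""
    d = cohens_d(group_scores, comparison_scores)
    _, p = stats.ttest_ind(group_scores, comparison_scores, equal_var=False)
    meaningful = abs(d) >= min_effect          # large enough to matter
    significant = p < alpha                    # unlikely to be chance
    shade = "blue" if d > 0 else "gray"        # higher vs. lower than the comparison group
    if meaningful and significant:
        return f"dark {shade}"                 # large and reliable
    if meaningful:
        return f"light {shade}"                # large, but perhaps random
    if significant:
        return "underlined"                    # reliable, but too small to be meaningful
    return "white"                             # no difference worth noting
```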
27. 27 Comparisons
DVA comparisons. For each factor, the DVA average (row 2) was compared to the VA national average (row 1).
Program Office comparisons. For each factor, each Program Office division average score was compared to the DVA average (row 2).
Statistically Significant Differences
A statistically significant difference indicates that a comparison was large enough to conclude that the difference was likely not due to chance or random fluctuations. For instance, one may flip a coin 10 times and expect to get tails five times. However, if one would get six tails, would one conclude that the coin was “tainted”? What if one got seven tails? Eight? Nine? Ten? In other words, at what point would we conclude that the coin may be tainted? Or, to put it another way, at what point would we state the results we obtained were not due to chance, but due to something unique to the coin? Statistical significance suggests that the observed differences are likely to be due to the object (or facility) and not due to chance. By the way, one would need to get tails 9 out of 10 times in order to conclude that the coin is probably tainted.
Statistically Significant vs. Meaningful Differences
Whether or not a statistically significant difference is also large enough to be meaningful from a practical or managerial standpoint is another question. The two are not necessarily the same. Meaningful refers to differences that are either worthy of recognition on one hand, or of efforts to improve on the other. This year, we adopted a statistical criterion to decide the size of a meaningful difference. (In statistical terms, meaningful was defined as effect size equal to or exceeding .2).
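A worked, standard-library version of the coin-flip example above (illustrative only): it computes the chance of seeing a given number of tails or more in 10 flips of a fair coin, which shows why 9 tails is the point at which chance becomes an implausible explanation.

```python
from math import comb

def prob_at_least(k, n=10, p=0.5):
    """P(k or more tails in n flips of a fair coin)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

for tails in range(6, 11):
    print(f"{tails} or more tails: p = {prob_at_least(tails):.4f}")
# 8 or more tails: p ≈ 0.0547 (still plausible for a fair coin)
# 9 or more tails: p ≈ 0.0107 (below .05, so the coin is probably tainted)
```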
Not Available using ProClarity. Please note that the statistical significance grids are only available in this report and cannot be obtained from the “data cubes” using ProClarity.
28. 28
29. 29 2010 VA All Employee Survey Job Satisfaction Index (JSI) Conclusions: Human Resources & Administration In 2010…
For the Human Resources & Administration, no JSI Factors are meaningfully above or below all of DVA.
‘Quality of Work’ is the highest rated factor.
‘Satisfaction vs. 2 Years Ago’ is the lowest rated factor.
30. 30 Why Care About the JSI Results? We have found higher JSI scores to be associated with:
Better employee outcomes
Lower sick leave rates
Fewer EEO complaints
Greater civility among coworkers
Better performance outcomes
Higher outpatient satisfaction
Higher inpatient satisfaction
Higher Joint Commission scores
31. 31 2010 VA All Employee Survey: Organizational Assessment Inventory Results and Findings, Human Resources & Administration
32. 32 Organizational Assessment Inventory (OAI) Measures employee perceptions of work group conditions
Core OAI since 2004:
27 items strongly correlated with important outcomes (employee health, patient care quality)
Combined into 17 factors (from earlier surveys)
A summary factor, Civility, averages items from 4 of these factors: Cooperation, Coworker Support, Conflict Resolution, and Diversity Acceptance.
In 2008, two new factors were added, for a total of 31 items
Engagement (2 items)
Psychological Safety (2 items)
33. 33
34. 34 The "VA Avg" is computed by averaging all AES respondents’ scores. “DVA Avg” is computed by averaging all the scores of individuals who are mapped to a DVA workgroup.
The "VA Avg" is computed by averaging all AES respondents’ scores. “DVA Avg” is computed by averaging all the scores of individuals who are mapped to a DVA workgroup.
35. 35 Remember, Civility is an average of 8 different items drawn from 4 of the OAI factors: Cooperation, Coworker Support, Conflict Resolution, Diversity Acceptance. Civility at the VA workplace is actionable: that is, it responds to an intervention. For example, research demonstrates that the CREW interventions increase civility in the participating groups. Empirical evidence supports that substantial CREW participation within a facility improves civility climate at the facility as a whole, to statistically significant levels.
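As a sketch of that summary factor, the snippet below averages eight item responses for a single respondent; the item names are placeholders and the 1-5 response scale is assumed to match the rest of the survey, so this is an illustration rather than the actual AES scoring code.

```python
from statistics import mean

# One respondent's answers to the eight Civility items (two items drawn from
# each of Cooperation, Coworker Support, Conflict Resolution, and Diversity
# Acceptance), on an assumed 1-5 scale.
civility_items = {
    "cooperation_1": 4, "cooperation_2": 3,
    "coworker_support_1": 4, "coworker_support_2": 5,
    "conflict_resolution_1": 3, "conflict_resolution_2": 4,
    "diversity_acceptance_1": 5, "diversity_acceptance_2": 4,
}
civility_score = mean(civility_items.values())  # summary factor = mean of the 8 items
print(f"Civility = {civility_score:.2f}")
```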
36. 36 Comparisons
DVA comparisons. For each factor, the DVA average (row 2) was compared to the VA national average (row 1).
Program Office comparisons. For each factor, each Program Office division average score was compared to the DVA average (row 2).
Statistically Significant Differences
A statistically significant difference indicates that a comparison was large enough to conclude that the difference was likely not due to chance or random fluctuations. For instance, one may flip a coin 10 times and expect to get tails five times. However, if one would get six tails, would one conclude that the coin was “tainted”? What if one got seven tails? Eight? Nine? Ten? In other words, at what point would we conclude that the coin may be tainted? Or, to put it another way, at what point would we state the results we obtained were not due to chance, but due to something unique to the coin? Statistical significance suggests that the observed differences are likely to be due to the object (or facility) and not due to chance. By the way, one would need to get tails 9 out of 10 times in order to conclude that the coin is probably tainted.
Statistically Significant vs. Meaningful Differences
Whether or not a statistically significant difference is also large enough to be meaningful from a practical or managerial standpoint is another question. The two are not necessarily the same. Meaningful refers to differences that are either worthy of recognition on one hand, or of efforts to improve on the other. This year, we adopted a statistical criterion to decide the size of a meaningful difference. (In statistical terms, meaningful was defined as effect size equal to or exceeding .2).
Not Available using ProClarity. Please note that the statistical significance grids are only available in this report and cannot be obtained from the “data cubes” using ProClarity.
37. 37
38. 38 2010 VA All Employee Survey Organizational Assessment Inventory (OAI) Conclusions: Human Resources & Administration In 2010…
For the Human Resources & Administration, no OAI Factors are meaningfully above or below all of DVA.
‘Work/Family Balance’ is the highest rated factor.
‘Job Control’ is the lowest rated factor.
39. 39 Why Care about the OAI Results? In previous research by the AES team, higher OAI scores were correlated with:
Better employee outcomes
Lower sick leave rates
Fewer lost time claims
Fewer EEO claims
Better patient care outcomes
Higher patient satisfaction
Inpatient and outpatient
Higher quality of chronic disease care
Higher quality of preventive care
40. 40 2010 VA All Employee Survey: Organizational Culture
Results & Findings
Human Resources & Administration
41. 41 2010 VA All Employee Survey Measuring Organizational Culture: Five Elements
Group
Motto: Our people are our most important asset.
Entrepreneurial
Motto: Let’s find a way to do it better!
Hierarchical / Bureaucratic
Motto: Follow standard operating procedures.
Enabling
Motto: Create policies & procedures that facilitate getting work done effectively.
Rational
Motto: We get the job done.
The conceptual model of culture that we used was based on the published work of Zammuto and Krakower and has been widely used in organization research within healthcare. According to this model, the culture of any given organization can be thought of as a mixture of four different elements:
Group/teamwork orientation
Entrepreneurial orientation
Bureaucratic/hierarchical orientation
Rational or production orientation
To complement this framework, beginning in 2009 we added four questions designed to measure a fifth element that we have labeled the enabling orientation, based on the terminology used by Paul Adler and Bryan Borys in their 1996 article, “Two types of bureaucracy: Enabling and coercive” [Administrative Science Quarterly, 41(1), 61-89]. The characteristics of an enabling culture orientation are described in greater detail on the next slide.
The hypothetical motto listed for each culture element is our attempt to capture the essential spirit of each of these orientations.
Like the personality of an individual, the culture of an organization is a mix of elements, not an all-or-nothing situation in which one element is present to the exclusion of all others.
42. 42 2010 VA All Employee Survey Enabling Culture An enabling culture is one in which policies & procedures:
Clarify employee roles & responsibilities
Save time & effort because they represent best ways of doing things
Are revised as necessary to adapt to changing circumstances
Enabling culture added to AES to capture positive aspects of rules & structure
Enabling culture questions were pilot tested during 2008 AES administration
Reliability and validity established using data from over 6,000 respondents
The enabling element of culture was added to the AES in 2009 as a counter-balance to the hierarchical/bureaucratic element, which focuses on the constraints and limitations associated with organizational policies and procedures. The new enabling culture questions assess the positive aspects of organizational rules and structure, which over the years some managers and other users of the AES data felt was lacking.
Questions representing the enabling culture were developed based on prior research. A pilot study was conducted during the 2008 AES administration; a randomly selected subset of employees were given a version of the AES that included five new questions. A total of 6,401 employees responded to these additional items; 85 facilities had at least 30 respondents.
The reliability and validity of the proposed new enabling culture scale were established and confirmed in several ways. The new enabling culture scale also demonstrated considerable variation across medical centers and a low correlation with the hierarchical / bureaucratic scale, suggesting that those two scales measured different constructs.
Higher levels of Enabling Culture were found to be associated with lower patient missed opportunity rates and higher rates of patients seen within 30 days in various clinics, higher immunization rates, more major depression follow-up, inpatient tobacco counseling, and LDL measurement for diabetic and AMI patients. The new Enabling Culture scale did not correlate significantly with SHEP patient overall quality ratings.
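For readers unfamiliar with these checks, here is a generic illustration (not the AES analysis code) of two of the computations mentioned: Cronbach's alpha for internal-consistency reliability of the enabling items, and the correlation between the enabling and hierarchical/bureaucratic scale scores. The data are random placeholders just to make the script runnable; real item responses would be used in practice.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of numeric responses."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
# Placeholder data shaped like the pilot: 6,401 respondents, five enabling items
# on a 1-5 scale. Random data will give alpha near 0; coherent real responses
# would push it toward 1.
enabling_items = rng.integers(1, 6, size=(6401, 5)).astype(float)
enabling_scale = enabling_items.mean(axis=1)
bureaucratic_scale = rng.integers(1, 6, size=6401).astype(float)

alpha = cronbach_alpha(enabling_items)
r = np.corrcoef(enabling_scale, bureaucratic_scale)[0, 1]
print(f"Cronbach's alpha = {alpha:.2f}, correlation with bureaucratic scale = {r:.2f}")
```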
43. 43 In this graph we report the national VA culture profile over time. Consistently, the bureaucratic and rational components are rated the strongest, followed by group and entrepreneurial.
Higher levels of Group and Entrepreneurial culture are associated with more effective work processes (e.g., quality improvement) and positive outcomes (Shortell, O’Brien, Carman et al., 1995; Cameron & Quinn, 1999; Gifford, Zammuto & Goodman, 2002; Meterko, Mohr & Young, 2004).
44. 44 The "VA Avg" is computed by averaging all AES respondents’ scores. “DVA Avg” is computed by averaging all the scores of individuals who are mapped to a DVA workgroup.The "VA Avg" is computed by averaging all AES respondents’ scores. “DVA Avg” is computed by averaging all the scores of individuals who are mapped to a DVA workgroup.
45. 45
46. 46 Comparisons
DVA comparisons. For each factor, the DVA average (row 2) was compared to the VA national average (row 1).
Program Office comparisons. For each factor, each Program Office division average score was compared to the DVA average (row 2).
Statistically Significant Differences
A statistically significant difference indicates that a comparison was large enough to conclude that the difference was likely not due to chance or random fluctuations. For instance, one may flip a coin 10 times and expect to get tails five times. However, if one would get six tails, would one conclude that the coin was “tainted”? What if one got seven tails? Eight? Nine? Ten? In other words, at what point would we conclude that the coin may be tainted? Or, to put it another way, at what point would we state the results we obtained were not due to chance, but due to something unique to the coin? Statistical significance suggests that the observed differences are likely to be due to the object (or facility) and not due to chance. By the way, one would need to get tails 9 out of 10 times in order to conclude that the coin is probably tainted.
Statistically Significant vs. Meaningful Differences
Whether or not a statistically significant difference is also large enough to be meaningful from a practical or managerial standpoint is another question. The two are not necessarily the same. Meaningful refers to differences that are either worthy of recognition on one hand, or of efforts to improve on the other. This year, we adopted a statistical criterion to decide the size of a meaningful difference. (In statistical terms, meaningful was defined as effect size equal to or exceeding .2).
Not Available using ProClarity. Please note that the statistical significance grids are only available in this report and cannot be obtained from the “data cubes” using ProClarity.
47. 47
48. 48 2010 VA All Employee Survey Organizational Culture Conclusions: Human Resources & Administration In 2010…
For the Human Resources & Administration, no Culture Factors are meaningfully above or below all of DVA.
‘Bureaucratic’ is the highest rated factor.
‘Entrepreneurial’ is the lowest rated factor.
49. 49 2010 VA All Employee Survey Why Care About the Culture Results? Culture differences are important
Culture related to employee satisfaction & patient satisfaction
Example: A one-point difference in combined group & entrepreneurial culture is associated with 4.3% lower turnover among physicians
Knowledge of culture may be used to customize intervention strategies to be more effective
Why should we care about organizational culture anyway? We think there are two reasons. One you might describe as related to the direct impact of culture, and the other might be described as indirect.
Direct impact refers to the relationship of organizational culture to other important factors such as employee satisfaction and patient satisfaction. For example, a statistically significant positive relationship was found between group culture and patient satisfaction among inpatients: the higher the dose of group/teamwork culture, the higher the patient satisfaction.
Another example of direct impact is the relationship between culture and turnover. A one-point increase in group/entrepreneurial (G/E) culture was associated with more than a 4% lower turnover rate among physicians.
Culture is also indirectly important as a factor that should be taken into account when planning improvement or change activities. To understand this point, it is helpful to think of culture as the “personality” of an organization.
50. 50 2010 VA All Employee Survey Concluding Thoughts Access to the AES data
Stratified Analyses
Next Steps
51. 51 2010 VA All Employee Survey: Access to Data & Custom Stratification. Using ProClarity, it is possible to access the AES data and perform various stratifications or “breakdowns” of the scores (see the sketch after this list)
This can help you more precisely identify areas of excellence and opportunities for improvement
Are your factor scores consistent across different types of employees, work groups, etc.?
Do the various types of employees or work groups differ in their level of satisfaction?
How do other Program Offices score on a given factor?
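A hedged illustration of the kind of breakdown described above, done here with pandas on a hypothetical respondent-level extract rather than through ProClarity itself (ProClarity is the supported tool; the column names below are invented to show the idea).

```python
import pandas as pd

aes = pd.DataFrame({
    "program_office": ["HR&A", "HR&A", "OIT", "OIT", "HR&A"],
    "supervisory_level": ["None", "Supervisor", "None", "None", "Manager"],
    "job_control": [3.2, 3.9, 3.4, 3.1, 4.0],   # an OAI factor score on a 1-5 scale
})

# Are factor scores consistent across work groups and employee types?
print(aes.groupby(["program_office", "supervisory_level"])["job_control"].mean())

# How do other Program Offices score on a given factor?
print(aes.groupby("program_office")["job_control"].mean().sort_values())
```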
52. 52 2010 VA All Employee Survey Next Steps AES Data Cube ProClarity training is available for in-depth review of work group results – AES Coordinators have the schedule of classes
Local dissemination of results, development of goals, further development of action plans, and follow-up utilizing performance measures
For AES Action Planning in Workforce Succession Plans, go to the Succession Planning Website:
http://lrnestweb8.dva.va.gov/succession/Templates/Master.aspx?pid=986
For general information about the AES and its interpretation, go to the AES Portal:
http://aes.vssc.med.va.gov/Pages/Default.aspx
53. 53 Questions? Contact:
VHA National Center for
Organization Development
513-247-4680
Email:
VHANCOD@va.gov