Western Interstate Commission for Higher Education
FROM THE WICHE PROJECT ARCHIVE

Client Outcomes And Costs In Frontier Mental Health Organizations

Letter to the Field No. 13

by James E. Sorensen, Ph.D., CPA, School of Accountancy, University of Denver

Table of Contents
Introduction | Acquiring a Comprehensive View of MH Services | Outcomes | Costs, Outcomes, and Effectiveness | Summary | References

INTRODUCTION

Managers of mental health organizations that serve frontier-area residents are expected to acquire and manage resources to create effective and efficient mental health services. Efficient cost management requires understanding cost behavior, applying cost distinctions for planning and control, computing unit-of-service costs, using unit-cost data in contracting and financial management, and adding credibility to unit-of-service costs by including the opinion of an independent auditor. But the fast-emerging managed care environment requires more than efficient cost management. Managed behavioral health care seeks to reduce or eliminate unnecessary services, reduce and control the costs of care, and maintain or increase outcomes and effectiveness.

As costs are reduced, concerns surface about compromised quality of care or, more specifically, poor clinical outcomes and meager client satisfaction. Knowing about client outcomes with services can help identify costs to be enhanced, diminished or reengineered. Outcome measures such as client functioning or symptomatic psychological distress or quality of life appropriate for the age and client type should be considered. Client satisfaction should also be measured. While not a measure of client functioning, assessing client satisfaction is a key measure of program performance and may be as important as treatment outcome. Standardized methods provide the ideal assessment approach for both outcome and client satisfaction. Comparing the costs and outcomes of two or more services enables managers (and policy makers) to make cost-effective choices among services and programs.

Frontier mental health programs must document costs, outcomes, and client satisfaction at a minimum to survive the assault of managed care. As part two of a three-part series on frontier mental health that includes (1) analyzing cost dynamics, (2) linking costs and client outcomes, and (3) choosing cost-effective management strategies, this report builds a framework outlining the role of costs and outcomes in cost-outcome and cost-effectiveness analyses. This paper explores issues related to outcomes (to be linked with costs) and how cost-outcomes and cost-effectiveness may be used as a management strategy in operating frontier mental health programs.

ACQUIRING A COMPREHENSIVE VIEW OF MENTAL HEALTH SERVICES

With the stimulus of widespread implementation of managed care, various healthcare organizations are focusing on a comprehensive framework of analysis using broad spheres of activity (or influence) called domains (MHSIP, 1996; ACMHA, 1997; NASMHPD Research Institute, 1997 & 1998). While the list varies across organizations, the domains generally include the four listed in the MHSIP Consumer-Oriented Mental Health Report Card (1996):

  • access: Is a full range of needed services quickly and readily obtainable?
  • appropriateness: Do appropriate services address a consumer's individual strengths and weaknesses, cultural context, service preferences and recovery goals?
  • outcomes: Do services for individuals with emotional and behavioral disorders have an effect on their well-being, life circumstances, and capacity for self-management and recovery?
  • prevention: Do preventive activities reduce the incidence of mental disorders by (1) early identification of risk factors or precursor signs and symptoms of disorders and (2) increasing social supports and coping skills for those at risk?

An analysis of each domain can produce a robust set of categories and questions. The MHSIP analysis of the domains focused heavily on the customer perspective. What is the customers' perception of access, appropriateness, and outcomes? Besides the customer viewpoint, a mental health manager may want additional measures. For example, access includes continuity of care, integration of physical and behavioral health care, use of hospitalization, success at engaging specific target populations (or penetration rates) and assessments of waiting time. The other domains expand in the same way. A suggested expansion is shown in Table 1. Many of the questions surrounding a domain are pervasive and may emerge at a service or program level or, perhaps, at a county or state level. When a frontier service provider tries to select from the bewildering number of interesting and relevant questions, s/he is compelled to make choices because of limited resources such as time and money. This paper suggests several questions may be more important than others given sparse resources. Two are prominent: service costs and client outcomes.

As mental health services increase as a part of total health services (Broskowski, 1991), new emphasis is placed on costs and outcomes (Mirin & Namerow, 1991). Managing care requires careful documentation of the costs of services and of clinical outcomes. Strategies to monitor and assess treatment plans and outcomes take many forms ranging from preadmission reviews, continuing treatment authorizations, concurrent review, screens (often computerized), to performance outcome measures (Austin, Blum, & Murtaza, 1995). This documentation of cost and outcome can be used, in addition, to respond to consumer and management concerns. Now consumers (including clients, employers and payers) are beginning to demand accountability for the consumption of resources and the client outcomes in mental health programs. Good managers of mental health programs need to know how well their program and their clients are doing. Information systems (IS) to meet this need should focus on systematic cost reports, indicators to assess clinical outcomes, and analyses of costs and outcomes to evaluate cost-effectiveness. Comparing the costs and outcomes of optional services enables cost-effective choices among services and programs. Today's complex mental health environment gives neither easy nor clear-cut guidelines for these information requirements.

Because the analysis of costs is pursued in a separate Letter to the Field (No. 12, "Cost Dynamics of Frontier Mental Health Services"), this paper will focus on issues of measuring outcomes and linking them to cost. Regardless of the programmatic or service strategy taken, assessing costs and outcomes is a vital first step in managing for cost-effective mental health.

Table 1
Analysis of Domains

Domain | Illustrative focus of content | Illustrative type of question
Access | Consumer survey | What are the customer's perceptions of access? Complaints?
 | Continuity of care | What are the arrangements to refer inpatients to residential or outpatient services?
 | Integration of physical and behavioral health care systems | How is the transition from one to the other supposed to work?
 | Hospital utilization | What are readmission rates and average lengths of stay (ALOS)?
 | Penetration rates | What is the ratio of x clients served to total x population in catchment area?
 | Waiting time | What are the standard and actual results for timeliness after request for services?
Appropriateness | Consumer survey | What are the customer's perceptions of appropriateness? Complaints?
 | Continuity of care | What are the referral patterns from inpatients to residential or outpatient services?
 | Cost of services | What is the cost per unit of service?
 | Integration of physical and behavioral health care systems | What is the number of coordination events between the two?
 | Voluntary participation | What is the percentage of inpatient admissions that are involuntary?
 | Penetration rates | What percentage of certified SPMI are served?
 | Services to promote recovery | What is the ratio of residential to inpatient units of service?
Outcomes | Independence | What is the average number of days spent in the community?
 | Criminal justice | What is the proportion of adults and children who spent time in jail?
 | Productive activity (employment or education) | What is the consumer's vocational and/or educational status? (days worked? $ earned?)
 | Functioning | What is the change in functioning over time?
 | Hospital utilization | What is the proportion of clients readmitted within 30 days?
 | Living situation | What is the type of living arrangement (and level of independence)?
 | Quality of life | What is the level of general life functioning?
 | Satisfaction | What is the consumers' satisfaction with their mental health center and services?
 | Substance abuse | What is the age of first use of alcohol? Marijuana? Cocaine?
 | Symptom reduction | What is the reduction in symptoms?
Prevention | Information provided to reduce the risk of developing mental disorders | What are the expenditures per enrollee on dissemination of preventive information?
 | Interventions designed to reduce the risk of developing mental disorders | What is the percentage of enrollees participating in preventive programs?

OUTCOMES

Concern for client outcomes was embedded in the traditional mental health program evaluation literature (Attkisson, Hargreaves, Horowitz, & Sorensen, 1978; Ciarlo, Brown, Edwards, Kiresuk, & Newman, 1986). Today it is part of a larger quality movement in health care known as Continuous Quality Improvement (or CQI). In the corporate sector the movement is often called Total Quality Management (or TQM) and is associated with improvements in employee morale and productivity, customer satisfaction, and financial viability (General Accounting Office, 1991; Ernst & Young, 1992). The CQI movement complements managed care as both focus on client outcomes. CQI in managed care calls for providing ". . . the right care . . . deliver[ed] to the right patients at the right time in the right way" (Freeman & Trabin, 1994). A significant feature of this quality movement in health care is the reemergence of a concern for the client and how s/he feels about and responds to health care encounters. Shern (1994, p. 23) described the linkage between CQI and outcomes by observing "... CQI focuses on a recipient and outcomes orientation with an emphasis on understanding how program processes are related to desired outcomes." The application of CQI in mental health, unlike health care, is in an early developmental stage (Rago & Reid, 1991; Evans, Faulkner, & Hodo, 1992; Sluyter & Barnett, 1995). As purchasers and providers press prices and costs downward, consumer concerns about compromised quality of care surface. Outcome management and practice guidelines programs may be able to deliver consistent and high quality care by reducing practice pattern variation (Freeman & Trabin, 1994).

Research on Outcomes. Outcome can be defined in many ways (Ware, 1997; Bergin & Garfield, 1993; Massey, 1991; Newman, 1980). The McGuirk, Zahniser, Bartsch, and Engleby (1994) study, using varying stakeholders, found a general preference for skilled coping, safety, and symptom reduction as measures of outcome. Ranked closely behind were customer involvement and social functioning. All six were ranked higher than customer satisfaction as an outcome by both consumers and providers. Program implementation and demonstration projects offer additional examples of outcome measures. New Mexico (Callahan & Shaening, 1994) has outcome measures focusing on living arrangements, work and related activity, quality of life, and client satisfaction. Oregon (Wachal, 1994) adult community outcomes concentrate on housing, financial supports, daily activities, employment, overall treatment satisfaction and level of functioning. A Unified Services Program (USP) in Pittsburgh, PA (Gould, 1994, p.63) uses scales covering ". . . symptomatology, levels of functioning, multiple measures of quality of life, substance abuse and treatment participation." Andrews, Peters and Teesson (1994), in Australia's search for mental health outcome measures, conclude with a set dealing with symptoms, functioning, quality of life, burden and satisfaction.

State-level Indicators. The National Association of State Mental Health Program Directors (NASMHPD) Research Institute is currently preparing an inventory of managed care performance indicators including outcome measures for state mental health programs (Mazade, 1997; NASMHPD Research Institute, 1997). The database should reflect service structures, levels of resources available, processes and outcomes used in developing and monitoring managed care contracts. In a five state feasibility study on state mental health agency performance measures, the NASMHPD Research Institute (1998) examined the feasibility and comparability of state performance indicators on

  • outcomes (e.g., improvement of functioning, reduction in symptoms)
  • consumer evaluation of care (e.g., outcome, access, appropriateness)
  • consumer status (e.g., % employed, % living independently)
  • community services (e.g., % contacted within 7 days of hospital discharge, % receiving case management).

In this study a frontier mental health organization could be responsive to state requirements for performance information if it obtained outcome and consumer evaluation of care data and was able to extract consumer status (e.g., % employed) and community services information (% receiving case management) from internal sources such as the client record.

Classifying Outcome Measures. Ciarlo, et al. (1986) consolidated knowledge about outcome measures for mental health clients. The authors suggest a useful three-dimensional taxonomy:

  • Assessment approach (individualized, partially standardized and standardized methodology)
  • Functional area/domain assessed (individual/self, family/interpersonal, and community functioning)
  • Respondent (client, collateral, therapist, and other)

Client satisfaction with services is differentiated from client outcome evaluation because ". . . the former measures do not normally address any specified area of client functioning" (Ciarlo, et al., 1986, p.1). In the new thrust of managed care and CQI, however, the satisfaction of the client or an organization (e.g., Medicaid, an employer or a managed care vendor) may be as important as treatment outcome (Ware, Snyder, Wright, & Davies, 1983). Competitive advantage accrues to providers who learn about and respond to customer needs. The challenge is to ". . . design an assessment program that provides useful, reliable, and valid data in an easy-to-use and cost-effective manner" (Plante, Couchman, & Diaz, 1995, p. 265). Quality for rural areas may be meaningfully addressed through a combination of clinical outcomes and client satisfaction (Bird, Lambert, & Hartley, 1995).

Recommendations. Most frontier mental health programs should focus on outcome measures such as

  • client functioning or symptomatic psychological distress or quality of life that are appropriate for the age (adult, adolescent, or child) and type (e.g., inpatient or outpatient, severely and persistently mentally ill, alcohol or other drug abuser) of patient, and
  • satisfaction of the client.

Standardized methods provide the ideal assessment approach (Ciarlo, et al., 1986). Well-standardized measures are needed to maximize reliability (the extent to which the measure is reproducible) and sensitivity (the extent to which true changes in functional status can be detected). McLellan and Durell (1996) argue that standardized measures permit comparisons across conditions. Results from a single evaluation can be measured against results from a larger database of comparable patient samples and treatment conditions. Without comparisons, outcome data from a single treatment or program cannot be interpreted scientifically (McLellan & Durell, 1996). While convergence between multiple respondents creates more valid measures, often client and therapist evaluations alone provide adequate and useful assessments, especially when standardized measures are employed.

The key ingredients are assessment of client outcomes and client satisfaction. The outcome reports can document program performance for managers, clients, and payers. Satisfaction data can help spot areas where the process can be improved (Nguyen, Attkisson, & Stegner, 1983). Recent news reports, for example, reveal an HMO responding to client dissatisfaction with appointment processes (Graham, 1995). Now the HMO offers the same or next-day appointments instead of a delayed visit. Anyone who calls and asks for an appointment that day will get one. "Our approach to a member who called before was, 'are you sure you want to be seen (by a medical provider)?' Now it's 'when do you want to be seen?'" This important change in the service would not have happened without client/customer satisfaction reports.

Client satisfaction information, however, may not be enough. Summaries of satisfaction may not pinpoint what might be wrong with the health care system. By the time the information works its way back to front-line managers and providers, it may be too general to be helpful. A client satisfaction survey may also not help front-line professionals to provide better service or to solve problems that cross departmental or service boundaries. Front-line personnel often need the results of root-cause analysis (Reichheld, 1996). Focus groups, as an example, that converge on dissatisfied customers and those who defect from the system can be rich sources of information about needed adjustments in the health care delivery system: adjustments that may not be clearly revealed in satisfaction surveys.

Criteria for Selecting Outcome Measures. Several authors identify the criteria (see footnote 1) for selecting outcome measures (Attkisson, et al, 1978; Ciarlo, et al., 1986; Ciarlo, 1982; Mirin & Namerow, 1991; Vermillion & Pfeiffer, 1993; Burlingame, Lambert, Reisinger, Neff, & Mosier, 1995; Sherman & Kaufmann, 1995; Mulkern, Leff, Green, & Newman, 1995):

  • The measure should meet minimal psychometric standards including reliability, validity, sensitivity, nonreactivity to situations, and minimization of respondent bias. If a measure does not have known reliability or validity, then its use is discouraged. This requirement eliminates most individualized (or homemade) instruments. Internal consistency reliability (coefficient alpha) estimates should be at .80 or above and test-retest should exceed .70. Validity coefficients should be at least .50 and are preferred at .75 or above.
  • The measure should be suitable for the population under care. In managed care settings, nearly 75% of all patients present adjustment problems, affective (anxiety or depression) problems and/or problems with daily living (Ludden & Mandell, 1993). Mental health measures should tap symptomatic and psychosocial functions of the client (Russo et al., 1996).
  • The measure should be easy to use, score and interpret. While some mental health literature on outcomes suggests multiple instruments (Waskow & Parloff, 1974), practice seems to follow a simpler approach (Lambert & Hill, 1993). Simple methodology and procedures ensure uniformity (Ciarlo, et al., 1986). To guarantee outcome assessments are integrated into mental health practice, brief and understandable instruments can report client status simply and objectively. If a measure is used frequently and addresses key dimensions of presenting problems and/or relates to treatment goals, then it becomes an easy addition to the clinical record. It can also reduce the effort spent on progress notes.
  • The measure should be relatively low cost. If many clients are to be assessed regularly, then expensive instruments will present prohibitive demands on limited resources. Impossible requests for time and money are likely to result in no evaluation at all.
  • The measure should be useful in clinical service functions and for evaluation purposes. The measure should be useful in planning treatment, measuring its impact and predicting outcome (American Psychiatric Association, 1994). The measures should reflect meaningful change. Some scales mix broad improvements in symptomatic and functional areas. Others attempt to separate symptom distress, interpersonal relations, and social role performance (Lambert, Lunnen & Umpress, 1994). Sometimes a measure is not used for clinical decisions about individualized client changes, but it is helpful in assessing how groups of clients perform. This aggregated analysis can be powerful in assessing program effectiveness and in documenting client progress to clients, clinicians, program managers, payers and legislative or regulative groups.
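
The internal-consistency threshold in the first criterion above (coefficient alpha at .80 or higher) can be verified with a short computation. This Python sketch uses hypothetical item ratings and is illustrative only:

```python
# Sketch: checking the internal-consistency threshold (coefficient alpha >= .80)
# named in the first selection criterion. All item scores are hypothetical.
import statistics

def cronbach_alpha(items):
    """items: one list of scores per item, same respondents in the same order."""
    k = len(items)
    item_vars = sum(statistics.pvariance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]       # per-respondent totals
    return (k / (k - 1)) * (1 - item_vars / statistics.pvariance(totals))

# Four hypothetical items rated by five respondents
items = [
    [3, 4, 2, 5, 4],
    [3, 5, 2, 4, 4],
    [2, 4, 3, 5, 5],
    [3, 4, 2, 5, 3],
]
alpha = cronbach_alpha(items)
# alpha is about 0.91 for these hypothetical data, above the .80 threshold
print(f"alpha = {alpha:.2f}")
```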

While only exploratory guidance exists on what makes a good outcome measure, frontier mental health programs must carefully select from available measures to survive the descending mantle of managed care enveloping all health care programs. The struggle is to balance sound research methods with the demands of a fast-paced market-driven business (Freeman & Trabin, 1994). Ciarlo (1996) suggests outcome for managed mental health care in frontier rural areas should focus on one (or more) of the following types of outcome assessment for

  • adults, using general measures such as the global assessment of functioning (GAF), a role functioning scale (RFS), a composite score from a symptom checklist (SCL-90-R or BSI), the Behavior and Symptom Identification Scale (BASIS-32), or the MOS 36-item short-form health survey (SF-36).
  • children and adolescents, using a behavioral and symptom checklist oriented to younger clients (Child Behavior Checklist or CBCL), since adult scales are usually inappropriate or ineffective for children and adolescents.
  • seriously and persistently mentally ill (SPMI) people, focusing on the lower end of the functioning continuum relative to meeting basic needs, securing self-support via employment, and avoiding inappropriate and/or violent behavior (see footnote 2).
  • alcohol and other substance abusers, identifying the special impairment arising from alcohol and drug abuse.

Table 2, Selected Program or Service Outcome Measures, reviews 12 measures including a client satisfaction scale. The measures, which tend to be inexpensive, are assessed for reliability, validity and the ability to produce an overall score that can be linked to costs. Samples of the instruments can be obtained from the authors, sponsors or through the Health and Psychosocial Instruments (HAPI) database (see footnote 3). Key work of the primary authors or sponsors is included in the references. In an independent and separate research effort, Sederer and Dickey (1996) concurrently review 10 of the 12 suggested measures.

COSTS, OUTCOMES AND EFFECTIVENESS

With increased accountability, service providers of all sizes are being asked to demonstrate their effectiveness with outcome data. Outcome data can provide valuable information for accountability and for the improvement of clinical services and programs (Newman & Sorensen, 1985). Demonstrating effectiveness by itself, however, is usually insufficient. In managed care settings, effectiveness must be linked with costs.

Callahan (1994) suggests outcomes provide a method for evaluating the cost-effectiveness of services. Her approach involves outcomes, effectiveness and cost-effectiveness as evidenced by the questions for varying stakeholders:

Client | How does my progress and length of service compare to the progress made by other persons with similar characteristics?
 | Have my symptoms improved (or changed) as reflected by a valid scale or assessment tool?
Mental Health Staff | How does the progress of this person compare to the progress of my other clients with similar characteristics?
 | Have the client's symptoms improved as reflected by a valid scale or assessment tool?
Program Manager | What was the rate of effectiveness for each type of service and treatment alternative?
 | How many clients were served? At what cost?
 | How does our program compare to others with similar services?
Policy Maker | What types of service utilization patterns have the best (most effective) outcomes for specific types of clients?
 | Are these outcomes being achieved in the most cost-effective manner?

The client and mental health staff questions use outcomes (or comparative outcomes) to assess effectiveness (see footnote 4). The client is asking, "Am I getting better?" as a measure of progress or effectiveness while the clinician asks, "Are my clients improving, especially when compared to a relevant comparison group?" When the program manager and policy maker frame their questions, they are asking comparative cost-outcome or cost-effectiveness questions. "How do my costs and outcomes compare to other programs?" and "Are the outcomes being achieved most cost-effectively?" require comparing costs and outcomes to assess cost-effectiveness (see footnote 5).

Cost-Outcome and Cost-Effectiveness. Cost-outcome assessment (tying cost to clinical outcome) is one key to building viable cost-effectiveness analyses for program evaluation and accountability (Newman & Sorensen, 1985). Figure 1 identifies the major financial, statistical and evaluation tasks required for cost-outcome and cost-effectiveness analysis.

Starting with total costs of a (public) mental health organization, costs are refined to the per-unit cost of service. Statistical data on professional staff activities are required to assign personnel costs, while information about services (e.g., units of service) is necessary to unitize program and service costs. With unitized costs of service and accumulated services received by specific target groups, total costs for an episode of care may be computed. Evaluation tasks then involve the selection of a target group, preintervention assessment, and careful assignment of clients to varied treatments or services. Random assignment is ideal, but practical constraints argue for quasi-experimental procedures which try to equate for problem severity and other key characteristics of clients. After postintervention measurements, outcomes are assessed. Then costs are related to outcomes for the final cost-outcome report. If cost outcomes are calculated on more than one service and comparatively analyzed, cost-effectiveness can be assessed for optional approaches for specific target groups (Thornton, et al., 1990).
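
The unitizing and episode-costing steps just described can be sketched in a few lines; all service names and dollar figures below are hypothetical, chosen only to illustrate the arithmetic:

```python
# Sketch of the costing steps in the text: unitize program costs to a
# per-unit-of-service cost, then accumulate total cost for an episode of
# care from the services received. All figures are hypothetical.

unit_costs = {                      # hypothetical per-unit-of-service costs
    "outpatient_visit": 95.00,
    "case_management_hour": 60.00,
    "inpatient_day": 480.00,
}

def episode_cost(services):
    """services: list of (service_type, units) received during the episode."""
    return sum(unit_costs[svc] * units for svc, units in services)

# One hypothetical client's episode of care
episode = [("outpatient_visit", 8), ("case_management_hour", 12)]
print(f"episode cost = ${episode_cost(episode):,.2f}")  # $1,480.00
```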

Illustrative example of cost-outcome and cost-effectiveness. As measures of human service accountability and program management, cost-outcome and cost-effectiveness are interrelated. Cost-outcome analysis finds the programmatic resources consumed to achieve a change in a relative measure of client outcome (e.g., functioning). Cost-effectiveness analysis compares beneficial program outcomes to the cost of programs (or modalities or techniques) to identify the most effective programs. The following example illustrates the basic steps. The outcome measure used in the illustration identifies the major criteria for client performance (Figure 2) and the scale metrics (Figure 3). The scale is a global assessment of the four criteria scaled into nine levels of measurement (Endicott, Spitzer, Fleiss, & Cohen, 1976). Levels 1 to 4 are considered dysfunctional while levels 5 to 9 are deemed functional. Figure 4 is a basic cost-outcome matrix using only the dysfunctional-functional level of functioning. Level of functioning is assessed at the start and end of a time period for a specific target group of clients. Combining the two rows and two columns results in four cells:

cell A start: dysfunctional (1-4 ratings) end: dysfunctional (1-4 ratings)
cell B start: dysfunctional (1-4 ratings) end: functional (5-9 ratings)
cell C start: functional (5-9 ratings) end: dysfunctional (1-4 ratings)
cell D start: functional (5-9 ratings) end: functional (5-9 ratings)

Figure 2.

Major Criteria for Performance

  1. Personal self-care (adjust to age level)
  2. Social functioning (adjust to age level)
  3. Vocational and/or educational functioning
    a. Working adults
    b. Homemakers and/or parents and/or elderly
  4. Evidence of emotional stability and stress tolerance

Figure 3. Develop Scale Metrics

Level 1 Dysfunctional in all four areas
Level 2 Not working; intolerable; minimal self care, requires restrictive setting
Level 3 Not working; strain on others; movement in community restricted
Level 4 Probably not working, but may if in protective setting; can care for self; can interact but avoid stressful situations
Level 5 Working or schooling, but low stability and stress tolerance: barely able to hold on and needs therapeutic intervention
Level 6 Vocational/educational stabilized because of direct therapeutic intervention; symptoms noticeable to client and others
Level 7 Vocational/educational functioning acceptable; therapy needed
Level 8 Functioning well in all areas; may need periodic services (e.g., med check)
Level 9 Functioning well in all areas and no contact with Behavioral Health Services is recommended

Next, for the clients in each cell, the services received and related unit-of-service costs are multiplied and summed and statistics such as the mean (x-bar) and standard deviation (sd) are computed for each cell. Of special concern is cell C since moving from functional to dysfunctional may suggest clinical risk. Cell A is of interest since the clients have not moved from a dysfunctional status and often represent high consumption of expensive services. Cell B is of interest since the clients moved from a dysfunctional to a functional level and this change may prompt questions about the type and cost of services used. Finally cell D may deserve a review to assess resource consumption by clients who started and ended the review period as functional.
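
The four-cell computation above can be sketched directly; the Python fragment below classifies hypothetical clients into cells A-D and computes the per-cell cost statistics (the client ratings and costs are assumptions for illustration, not data from the report):

```python
# Sketch of the four-cell cost-outcome matrix (Figure 4): clients are placed
# in cells A-D by start/end functioning level, then the mean and standard
# deviation of episode costs are computed per cell. Client data are hypothetical.
import statistics

def cell(start, end):
    """Ratings 1-4 are dysfunctional; 5-9 are functional."""
    key = (start >= 5, end >= 5)
    return {(False, False): "A", (False, True): "B",
            (True, False): "C", (True, True): "D"}[key]

clients = [  # (start rating, end rating, episode cost)
    (3, 3, 4200), (2, 6, 1900), (4, 7, 1500), (6, 3, 2600), (7, 8, 800),
]

by_cell = {}
for start, end, cost in clients:
    by_cell.setdefault(cell(start, end), []).append(cost)

for label in "ABCD":
    costs = by_cell.get(label, [])
    if costs:
        print(label, len(costs), statistics.mean(costs), statistics.pstdev(costs))
```

Cell C (functional to dysfunctional) can then be flagged for clinical review, and cell A inspected for high consumption of expensive services, as the text suggests.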

Figure 5 is an expanded matrix of costs and outcomes using all nine points of the scale developed in Figure 3. Individuals starting and ending at the same level are on the diagonal while those showing improvement are above the diagonal and those showing regression are below the diagonal. Means and standard deviations are computed for each cell. Client change and costs are aggregated by improvement, maintenance, and regression (as shown conceptually in Figure 6) and illustrated with sample values in Figure 7. Client outcome (e.g., improvement, maintenance or regression) and the resources used to achieve the outcome are linked in Figure 7. Note in the illustration that 40% are improved (with 19% of the resources), 50% are maintained (by consuming 71% of the resources) and 10% regressed (while receiving 10% of the resources).
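
The aggregation by improvement, maintenance, and regression can be reproduced from client-level records. The sketch below uses hypothetical data constructed to match the illustrative 40%/50%/10% split quoted above; the individual ratings and costs are assumptions:

```python
# Sketch of the Figure 6/7 aggregation: clients ending above their starting
# level improved, those at the same level were maintained, and those below
# regressed. Client-level data are hypothetical, built to match the
# illustrative totals in the text.

clients = [  # (start level 1-9, end level 1-9, episode cost)
    (3, 6, 500), (2, 5, 400), (4, 7, 600), (3, 5, 400),                    # improved
    (4, 4, 1500), (6, 6, 1600), (5, 5, 1400), (2, 2, 1300), (7, 7, 1300),  # maintained
    (7, 3, 1000),                                                          # regressed
]

groups = {"improved": [], "maintained": [], "regressed": []}
for start, end, cost in clients:
    status = ("improved" if end > start else
              "maintained" if end == start else "regressed")
    groups[status].append(cost)

total_cost = sum(cost for _, _, cost in clients)
for status, costs in groups.items():
    print(f"{status}: {len(costs) / len(clients):.0%} of clients, "
          f"{sum(costs) / total_cost:.0%} of resources")
# → improved: 40% of clients, 19% of resources
#   maintained: 50% of clients, 71% of resources
#   regressed: 10% of clients, 10% of resources
```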

In cost-outcome analysis, there is no way to document whether change during service is actually caused by the intervention or is simply concurrent with it. Gathering comparative cost-outcomes on optional services (e.g., A vs. B) may separate the effects of service strategy and cost differences. Potential intervening variables, such as history, selection bias, practice effects, maturation and other factors unrelated to the service can be controlled by random assignment to alternative services or by less desirable quasi-experimental methods such as matched comparisons. The purpose of the analysis is to reach conclusions about the relative cost and effectiveness of the services. Figure 8 reviews the logical relationships and choice points about two services (A and B). Seven of the choice points are self-explanatory (e.g., A is as effective and A costs less, therefore choose A) while the cells with question marks (?) are not clear conclusions (e.g., A is less effective and A costs less).
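
One way to formalize the Figure 8 choice logic is a small decision function; this Python sketch (the function name and numeric inputs are hypothetical) returns a clear choice for the self-explanatory cells and "?" for the ambiguous ones:

```python
# Sketch of the Figure 8 choice logic for two services A and B: a service is
# chosen when it is at least as effective and costs no more, with at least
# one strict advantage; exact ties return "either"; the remaining cells are
# the "?" trade-offs described in the text. Inputs are hypothetical.

def choose(effect_a, effect_b, cost_a, cost_b):
    if effect_a >= effect_b and cost_a <= cost_b and (effect_a > effect_b or cost_a < cost_b):
        return "A"
    if effect_b >= effect_a and cost_b <= cost_a and (effect_b > effect_a or cost_b < cost_a):
        return "B"
    if effect_a == effect_b and cost_a == cost_b:
        return "either"
    return "?"  # e.g., A is less effective but also costs less

print(choose(0.6, 0.6, 100, 120))  # A as effective, costs less -> A
print(choose(0.5, 0.7, 90, 140))   # B more effective but costlier -> ?
```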

Effect Of Capitation. Cost-effective care with limited resources can be reinforced by capitation (Lehman, 1987). The Monroe-Livingston demonstration project, as an illustration, evaluated capitated funding of mental health care in contrast to fee-for-service in a seriously mentally ill population. After a two-year follow-up, Cole, Reed, Babigian, Brown, and Fray (1994) found patients in the capitation group had fewer hospital inpatient days than the fee-for-service group, while both groups were similar in their functioning and level of symptoms. This report evaluated effectiveness using outcomes. Reed, Hennessy, Mitchell, & Babigian (1994) evaluated total costs and benefits in the same demonstration and concluded, ". . . capitation funding can promote care of seriously mentally ill persons in community settings at lower overall costs." This report then linked costs to outcomes to assess cost-effectiveness.

SUMMARY

Frontier mental health programs need, at a minimum, to document costs and outcomes. Armed with cost and outcome data, a cost-outcome report is possible. Medicaid (and Medicare) purchasing authorities, state mental health authorities, managed care vendors, HMOs, and business coalitions are likely to respond positively to cost-outcome information. Cost-outcome data can also support continuous assessment, planning, and improvement of services. Where comparative cost-outcome information is available, cost-effectiveness reports may be possible, but in frontier mental health environments these opportunities may be limited.

Cost-effectiveness as a strategy for the design and deployment of frontier mental health services is reflected in several applications reviewed or proposed. In some instances, highly acceptable approaches (in theory) must be tempered by the realities faced in deployment (in practice).

REFERENCES

American College of Mental Health Administration (ACMHA). (1997). Santa Fe Summit, 1997. Columbia, SC: Author.

American Psychiatric Association. (1994). Diagnostic and statistical manual of mental disorders (4th ed.). Washington, D.C.: Author.

Andrews, G., Peters, L., & Teesson, M. (1994). The measurement of consumer outcome in mental health: A report to the National Health Information Strategy Committee. Sydney, Australia: Clinical Research Unit for Anxiety Disorders.

Attkisson, C.C., Hargreaves, W.A., Horowitz, M.J., & Sorensen, J.E. (1978). Evaluation of human service programs. New York: Academic Press.

Austin, M.J., Blum, S.R., & Murtaza, N. (1995). Local-state government relations and the development of public sector managed mental health care systems. Administration and Policy in Mental Health, 22, 203-215.

Bergin, A., & Garfield, S. (Eds.). (1993). Handbook of psychotherapy and behavior change (4th ed.). New York: Wiley.

Bigelow, D.A., McFarland, B.H., & Olson, M.M. (1991). Quality of life of community mental health program clients: Validating a measure. Community Mental Health Journal, 27, 43-55.

Bird, D.C., Lambert, D., & Hartley, D. (1995). Rural models for integrating primary care, mental health and substance abuse treatment services. Portland, ME: Maine Rural Health Research Center.

Bohrnstedt, G.W. (1983). Measurement. In P.H. Rossi, J.D. Wright, & A.B. Anderson (Eds.), Handbook of survey research. San Diego: Academic Press, Inc.

Broskowski, A. (1991). Current mental health care environment: Why managed care is necessary. Professional Psychology: Research and Practice, 22, 6-14.

Burlingame, G.M., Lambert, M.J., Reisinger, C.W., Neff, W.M., & Mosier, J. (1995). Pragmatics of tracking mental health outcomes in a managed care setting. The Journal of Mental Health Administration, 22, 226-236.

Callahan, J., & Shaening, M.A. (1994). Mental health in the 90's: A New Mexico initiative. In F.D. McGuirk, A.M. Sanchez, & D.D. Evans (Eds.), Outcomes issues in a managed care environment. Boulder, CO: Western Interstate Commission for Higher Education.

Callahan, N.M. (1994). The role of outcomes in managed mental health care. In F.D. McGuirk, A.M. Sanchez, & D.D. Evans (Eds.), Outcomes issues in a managed care environment. Boulder, CO: Western Interstate Commission for Higher Education.

Ciarlo, J.A. (1982). Accountability revisited: The arrival of client outcome evaluation. Evaluation and Program Planning, 5, 31-36.

Ciarlo, J.A. (1996, August 26-28). Remarks on outcome evaluation of mental health services in frontier rural areas. Presented at WICHE Mental Health Program's 11th Annual Decision Support Conference, Reno, NV.

Ciarlo, J.A., Brown, T.R., Edwards, D.W., Kiresuk, T.J., & Newman, F.L. (1986). Assessing mental health treatment outcome measurement techniques (DHHS Pub. No. (ADM)86-1301). Washington, DC: Superintendent of Documents, U.S. Government Printing Office.

Cole, R.E., Reed, S.K., Babigian, H.M., Brown, S.W., & Fray, J. (1994). A mental health capitation program: I. patient outcomes. Hospital and Community Psychiatry, 45, 1090-1096.

Derogatis, L.R. (1977). SCL-90R: Administration, scoring and procedures manual-I for the (revised) version. Maryland: Johns Hopkins University.

Derogatis, L.R., & Cleary, P.A. (1977). Confirmation of the dimensional structure of the SCL-90: A study in construct validation. Journal of Clinical Psychology, 33, 981-989.

Drake, R.E., Osher, F.C., & Wallach, M.A. (1989). Alcohol use and abuse in schizophrenia: A prospective community study. Journal of Nervous and Mental Disease, 177, 408-414.

Eisen, S.V., Dill, D.L., & Grob, M.C. (1994). Reliability and validity of a brief patient-report instrument for psychiatric outcome evaluation. Hospital and Community Psychiatry, 45, 242-247.

Endicott, J., Spitzer, R.L., Fleiss, J.L., & Cohen, J. (1976). The global assessment scale: A procedure for measuring overall severity of psychiatric disturbance. Archives of General Psychiatry, 33, 766-771.

Ernst & Young. (1992). International Quality Study. New York: The American Quality Foundation.

Evans, O.M., Faulkner, L.R., & Hodo, G.L. (1992). A quality improvement process for state mental health systems. Hospital and Community Psychiatry, 42, 465-474.

Feldman, S. (1992). Managed mental health services. Springfield, IL: Charles C. Thomas.

Freeman, M., & Trabin, T. (1994). Managed behavioral healthcare: History, models, key issues, and future course. Rockville, MD.: U.S. Center for Mental Health Services, Substance Abuse and Mental Health Services Administration.

General Accounting Office. (1991). U.S. companies improve performance through quality efforts. Gaithersburg, MD: Author.

Goodman, S.H., Sewell, D.R., Cooley, E.L., & Leavitt, N. (1993). Assessing levels of adaptive functioning: The role function scale. Community Mental Health Journal, 29, 119-131.

Gould, E.K. (1994). Decision support and service outcomes in a managed care environment. In F.D. McGuirk, A.M. Sanchez, & D.D. Evans (Eds.), Outcomes issues in a managed care environment. Boulder, CO: Western Interstate Commission for Higher Education.

Graham, J. (1995, August 6). Kaiser tries more consumer-friendly ways. The Denver Post, pp. 1G, 15G.

Hodges, K. (1994). Child and adolescent functional assessment scale. Unpublished manuscript.

Lambert, M.J., & Hill, C.E. (1993). Assessing psychotherapy outcomes and processes. In A. Bergin, & S. Garfield (Eds.), Handbook of psychotherapy and behavior change (4th ed.). New York: Wiley.

Lambert, M.J., Lunnen, K., & Umpress, V. (1994). Manual for the outcome questionnaire. Salt Lake City, UT: Behavioral Health Care Efficacy.

Larsen, D.L., Attkisson, C.C., Hargreaves, W.A., & Nguyen, T.D. (1979). Assessment of client/patient satisfaction: Development of a general scale. Evaluation and Program Planning, 2, 197-207.

Lehman, A.F. (1987). Capitation payment and mental health care: A review of the opportunities and risks. Hospital and Community Psychiatry, 38, 31-38.

Lehman, A.F. (1988). A quality of life interview for the chronically mentally ill. Evaluation and Program Planning, 11, 51-62.

Lehman, A.F. (1997). Evaluating quality of life for persons with severe mental illness. Cambridge, MA: Evaluation Center @ HSRI.

Ludden, J., & Mandell, L. (1993). Quality planning for mental health. The Journal of Mental Health Administration, 20, 72-78.

Massey, O.T. (Ed.). (1991). Level of functioning. The Journal of Mental Health Administration, 18, 77-134.

Mazade, N.A. (1997). Interassociation project on performance measures and client outcome indicators. Administration and Policy in Mental Health, 24, 257-259.

McGuirk, F.D., Zahniser, J.H., Bartsch, D., & Engleby, C. (1994). Capturing outcome values: priorities of key stakeholders. In F.D. McGuirk, A.M. Sanchez, & D.D. Evans (Eds.), Outcomes issues in a managed care environment. Boulder, CO: Western Interstate Commission for Higher Education.

McLellan, A.T., & Durell, J. (1996). Outcome evaluation in psychiatric and substance treatments: Concepts, rationale, and methods. In L.I. Sederer, & B. Dickey (Eds.), Outcomes assessment in clinical practice. Baltimore: Williams & Wilkins.

McPheeters, H.L. (1984). Statewide mental health outcome evaluation: A perspective of two southern states. Community Mental Health Journal, 20, 44-55.

MHSIP Task Force on a Consumer-Oriented Mental Health Report Card. (1996). The MHSIP consumer-oriented mental health report card. Rockville, MD: Center for Mental Health Services, Substance Abuse and Mental Health Services Administration.

Mirin, S.M., & Namerow, M.J. (1991). Why study treatment outcome?. Hospital and Community Psychiatry, 42, 1007-1013.

Mulkern, V., Leff, S., Green, R., & Newman, F. (1995). Performance indicators for a consumer-oriented mental health report card: Literature review & analysis. In Stakeholder perspectives on mental health performance indicators. Cambridge, MA: The Evaluation Center at HSRI.

National Association of State Mental Health Program Directors (NASMHPD) Research Institute. (1997). Managed care performance indicators and outcome measures. Manuscript in preparation.

National Association of State Mental Health Program Directors (NASMHPD) Research Institute. (1998). Five state feasibility study on state mental health agency performance measures (Draft Executive Summary). Alexandria, Virginia: Author.

Newman, F.L. (1980). Global scales: Strengths, uses and problems of global scales as an evaluation instrument. Evaluation and Program Planning, 3, 257-268.

Newman, F.L., & Sorensen, J.E. (1985). Integrated clinical and fiscal management in mental health. Norwood, NJ: Ablex Publishing Corporation.

Nguyen, T.D., Attkisson, C.C., & Stegner, B.L. (1983). Assessment of patient satisfaction: Development and refinement of a service evaluation questionnaire. Evaluation and Program Planning, 6, 299-313.

Plante, T.G., Couchman, C.E., & Diaz, A.R. (1995). Measuring treatment outcome and client satisfaction among children and families. The Journal of Mental Health Administration, 22, 261-269.

Rago, W.V., & Reid, W.H. (1991). Total quality management strategies in mental health systems. The Journal of Mental Health Administration, 18, 253-262.

Reed, S.K., Hennessy, K.D., Mitchell, O.S., & Babigian, H.M. (1994). A mental health capitation program: II. Cost-benefit analysis. Hospital and Community Psychiatry, 45, 1097-1103.

Reichheld, F.F. (1996, March-April). Learning from customer defections. Harvard Business Review, 57-69.

Ross, D.C., & Klein, D.F. (1979). A comparison analysis of covariance and the technique as applied to illustrative psychopharmacological data. Journal of Psychiatric Research, 15, 67-75.

Russo, J., Roy-Byrne, P., Jaffe, C., Ries, R., Dagadakis, C., Dwyer-O'Conner, C., & Reeder, D. (1996). The relationship of patient-administered outcome assessments to quality of life and physician ratings: Validity of the BASIS-32. The Journal of Mental Health Administration, 24, 200-214.

Sederer, L.I., & Dickey, B. (Eds.). (1996). Outcomes assessment in clinical practice. Baltimore: Williams & Wilkins.

Sherman, P., & Kaufmann C. (1995). A compilation of the literature on what consumers want from mental health services. In Stakeholder perspectives on mental health performance indicators. Cambridge, MA: The Evaluation Center at HSRI.

Shern, D.L. (1994). System change and the maturation of mental health outcome measurement. In F.D. McGuirk, A.M. Sanchez, & D.D. Evans (Eds.), Outcomes issues in a managed care environment. Boulder, CO: Western Interstate Commission for Higher Education.

Sluyter, G.V., & Barnett, J.E. (1995). Application of total quality management to mental health: A benchmark case study. The Journal of Mental Health Administration, 22, 278-285.

Sorensen, J.E., Hanbery, G.W., & Kucic, A.R. (1983). Accounting and budgeting for mental health organizations. Washington, DC: US Government Printing Office.

Thornton, P.H., Goldman, H.H., Stegner, B.L., Sorensen, J.E., Rappaport, M., & Attkisson, C.C. (1990). Assessing the costs and outcomes together:  Cost effectiveness of two systems of acute psychiatric care. Evaluation and Program Planning, 13, 231-241.

Vermillion, J., & Pfeiffer, S. (1993). Treatment outcome and continuous quality improvement: Two aspects of program evaluation. Psychiatric Hospital, 24, 9-14.

Wachal, M. (1994). The Oregon health plan: managing by outcomes. In F.D. McGuirk, A.M. Sanchez, & D.D. Evans (Eds.), Outcomes issues in a managed care environment. Boulder, CO: Western Interstate Commission for Higher Education.

Ware, J.E. (1997). Health care outcomes from the patient's point of view. In E.J. Mullen & J.L. Magnabosco (Eds.), Outcomes measurement in the human services: Cross-cutting issues and methods (pp. 44-67). Washington, DC: NASW Press.

Ware, J.E., Kosinski, M., & Keller S.D. (1994). SF-36 physical and mental health summary scales: A user's manual. Boston: The Health Institute, New England Medical Center.

Ware, J.E., Snyder, M.K., Wright, W.R., & Davies, A.R. (1983). Defining and measuring patient satisfaction with medical care. Evaluation and Program Planning, 6, 247-263.

Waskow, I.E., & Parloff, M.B. (Eds.). (1974). Psychotherapy change measures (DHEW Pub. No. (ADM)74-120). Washington, DC: Superintendent of Documents, U.S. Government Printing Office.


This project is supported by the Center for Mental Health Services, Substance Abuse and Mental Health Services Administration
Contract No. 280-94-0014

Frontier Mental Health Resource Network
http://www.wiche.edu/MentalHealth/Frontier/frontier.asp